If anybody wonders why the American public ignores or is simply exasperated by health news, all you have to do is read these two stories from the Associated Press--and by the same reporter:
It's enough to make anybody trying to follow heart health news reach for a beta blocker. How could reports about two legit-seeming studies with opposite conclusions appear within 48 hours of each other?
I've been working the factory floor of health journalism for many years, so I know a bit about how this happens. I'll do my best to explain, but not defend.
First, some context
Last year a major study showed increased risk of death in people who got drug-coated stents (tiny scaffolds that hold open blocked arteries) compared to those who got the old-fashioned, cheaper bare metal type. The increased risk of death was due mainly to blood clots.
The FDA looked at the data and advised physicians to consider carefully which patients are best suited to each kind of stent.
Controversy ensued. Spin erupted. Stock prices dropped. New studies were ordered up. Patients were confused.
Now, let's figure out what's going on with the near-simultaneous bad news/good news reports.
The studies were presented at a medical meeting, not published in peer-reviewed journals.
Meeting presentations have not passed the field's toughest test--review by knowledgeable peers who scrutinize a study's methodology and data, and often call for revisions. Some studies presented at meetings never make it into print. So all studies presented at meetings need to be viewed skeptically--consider them tentative, draft reports.
Meeting presentations are sometimes distorted by--and outright funded by--commercial interests.
It's not entirely fair to suggest that companies seeking to promote their products use these meetings to trot out the studies that make them look best. . .but, well, it happens.
Both studies use weak designs.
Both studies are analyses of data gathered on previously treated patients. The best studies randomly assign matched patients to one of the two treatments or a placebo, control for complicating factors and follow the patients individually.
Each of these studies has caveats big enough to drive an H3 Hummer through.
The good-stent story above looked at four years of data and compared death rates of drug-coated and bare metal stents. The study found a 1 percent difference in death rates, favoring bare metal--not the 18 percent difference an earlier analysis of three years of the same data showed. But the better outcomes may simply indicate that drug stents have been used more judiciously--matched to the right patients--during the last year than previously.
The bad-stent study looked at data on stents implanted after a certain type of heart attack. The study showed a much higher death rate after six months in those who got the drug-eluting stents. But most stents are not used after heart attacks: A majority are inserted after angiography shows blocked arteries that have not yet created an acute health crisis.