Study Finds Avandia Significantly Increases Heart Attack Rates
News reports of the recently published Avandia study said that the study showed that the drug increased heart attack rates “significantly.” This is true. The problem is that the meaning of significantly as understood by the general public is very different from the meaning of significantly as understood by medical researchers.
To most of us, a significant increase in weight would mean "a large increase in weight." But that's not what it means to statisticians. To them, it means "an increase in weight [large or small] that is statistically significant."
Statistical significance tells you how likely it is that any differences found in a study are due to random fluctuations rather than to the drug or other intervention the researchers are measuring. The usual cutoff point is 5%, meaning that there is a 5% chance that the differences were random. Even the 5% value is not considered very strong evidence, though. Values below 1% are considered more convincing, and values below 0.1% are better still.
Scientists express these results as something called P values. A P value of 0.05 means there’s a 5% chance your results are due to random fluctuations. This is usually expressed as P = .05. Sometimes you’ll see the less than sign, <, used, for example: P < .01, meaning that there’s less than a 1% chance that the results are due to random errors.
Calculating P values is not simple, and most people use software to do it. What you need to know is that a P value is affected by three things:

- The number of people in the study: a study of 10,000 people would have a lower P value than a study of 10 people.
- The size of the differences found: an average weight loss of 50 pounds would have a lower P value than an average weight loss of 0.5 pound.
- The amount of variation in the results: if everyone getting the treatment lost between 48 and 52 pounds, the P value would be lower than if some people lost 1 pound and some lost 100 pounds.
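For readers who like to tinker, the three factors above can be seen in a few lines of code. This is a minimal sketch of a two-sample z-test computed by hand (a simplification: it assumes the standard deviation is known, whereas real studies typically use a t-test). The numbers plugged in below are illustrative, not from any actual study.

```python
import math

def two_sample_p(mean_diff, sd, n_per_group):
    """Two-sided P value for a difference between two group means,
    using a z-test (assumes the standard deviation is known)."""
    se = sd * math.sqrt(2.0 / n_per_group)  # standard error of the difference
    z = abs(mean_diff) / se
    # Two-sided P value from the normal distribution
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))

# Bigger studies, bigger differences, and less variation all shrink P:
print(two_sample_p(0.5, 10, 10))     # small study, small effect: large P
print(two_sample_p(0.5, 10, 10000))  # same effect, huge study: tiny P
print(two_sample_p(50,  10, 10))     # huge effect: tiny P even with 10 people
print(two_sample_p(0.5, 1,  10))     # less variation: smaller P
```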
Furthermore, you need to understand that statistical results can only tell you the probability of events, not give you certainty. Even if the probability is less than 1% that the results of some study were due to chance, it's still possible that they were due to chance. And the reverse is also true: results that are not statistically significant in one study may turn out to be significant in a larger study.
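You can demonstrate this "possible, just unlikely" point with a simulation. Below is a sketch that runs 2,000 pretend studies where the drug does nothing at all, so both groups come from the same population; roughly 5% of them still come out "significant" at P < .05 by pure chance. (The z-test helper is a simplification, and the group sizes are arbitrary.)

```python
import math
import random

def two_sample_p(mean_diff, sd, n_per_group):
    """Two-sided z-test P value (assumes known standard deviation)."""
    se = sd * math.sqrt(2.0 / n_per_group)
    z = abs(mean_diff) / se
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))

random.seed(42)
trials = 2000
false_positives = 0
for _ in range(trials):
    # Both groups are drawn from the SAME population,
    # so any difference between them is pure chance.
    a = [random.gauss(0, 1) for _ in range(50)]
    b = [random.gauss(0, 1) for _ in range(50)]
    diff = sum(a) / len(a) - sum(b) / len(b)
    if two_sample_p(diff, 1, 50) < 0.05:
        false_positives += 1

print(false_positives / trials)  # close to 0.05
```

In other words, even a "significant" result is occasionally a fluke, which is why a single study is rarely the last word.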
Think of the weather. The weather people are always telling us there’s an X% chance of rain tomorrow. But even when the probability is high, it doesn’t always rain.
Statistics is complex, and even statisticians argue about the best way to prove that data are reliable. If you love math, you can delve further into this with books and Internet sources.
Internet expert David Mendosa has recommended this site as a starter:
Here’s another one:
If you really want to dig deep, here’s a link to a more in-depth analysis of P values and their flaws:
But you don’t need to understand the math behind P values to get a general sense of what the results mean. Just remember that the lower a P value is, the more reliable the results are. But reliable or statistically significant results may not always be significant in the popular sense of the word.
For example, imagine that researcher Ima Quaque studied 100,000 people and gave half of them her new weight-loss drug and half of them a placebo. The study was well controlled, and no one knew who was getting the drug and who was not. After analysis, Dr. Quaque showed that people getting the drug lost an average of 1 pound a year. The results proved to be statistically significant, with P < .01.
Technically, Dr. Quaque could say that her new drug caused a significant loss of weight: it caused a tiny loss that was statistically significant. But who wants to take an expensive new drug with possible long-term side effects just to lose 1 pound a year? Almost no one.
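You can check Dr. Quaque's (hypothetical) arithmetic with the same kind of back-of-the-envelope z-test. The 1-pound difference and the 50,000-per-group sizes come from the example above; the 15-pound standard deviation is my own assumption, since the article doesn't state one.

```python
import math

def two_sample_p(mean_diff, sd, n_per_group):
    """Two-sided z-test P value (assumes known standard deviation)."""
    se = sd * math.sqrt(2.0 / n_per_group)
    z = abs(mean_diff) / se
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))

# Hypothetical numbers: 1-pound average difference, 50,000 people per arm,
# and an ASSUMED standard deviation of 15 pounds (not stated in the article).
p = two_sample_p(1.0, 15.0, 50000)
print(p < .01)  # statistically significant, yet clinically trivial

# The same 1-pound difference in a 20-person-per-arm study would not
# come close to significance:
print(two_sample_p(1.0, 15.0, 20))
```

The huge sample size, not the size of the effect, is what drives the P value down, which is exactly why "statistically significant" can describe a trivial result.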
Nevertheless, Dr. Quaque issued a press release saying her new drug “caused significant weight loss,” and the media started a frenzy with headlines like “New Drug Makes Patients Lose Significant Amounts of Weight.” The headlines were false because the people who wrote them didn’t understand the meaning of statistical significance. Then other news media picked up the story from the first one, and the misinformation spread.
But I hope you’re now smarter than the average reporter. You can understand the difference. Keep it in mind whenever you read stories about new trials of drugs or other diabetes treatments.