How to Interpret Studies About Diabetes

Gretchen Becker Health Guide
  • I've started reading Gary Taubes's book Good Calories, Bad Calories. It's 460 pages long and packed with information, so I want to read it slowly and critically, and I won't finish it for some time. But he raises a lot of interesting questions, even though I haven't gotten very far.


    The book is sort of like a mystery story, so naturally I peeked at the end. His basic premise is that it's processed carbs, not fat, that are the cause of "diseases of civilization" like heart disease, diabetes, and high blood pressure.


    But that's not what I want to write about today. Rather, I want to discuss the issue of dogma, the way having a preconceived notion can bias the way you interpret the results of experiments or clinical trials of drugs or diets.



    The way real science is done is to experiment and constantly refine your hypotheses on the basis of the outcome of your experiments. You have a hypothesis, let's say, "If I wear tie-dyed trousers and a red bow tie instead of a dark suit and a bowler hat, I'll attract more women while I'm waiting for the bus in the morning."


    Then you'd do a carefully controlled experiment to see whether or not your hypothesis is true. You might alternate wearing tie-dyed trousers with wearing a dark suit and count the number of women at the bus stop who gaze at you adoringly or start a conversation with you. You'd study statistics to know how many days and how many interactions with women you'd have to have to make sure the results were statistically significant.


    If the numbers were not statistically significant, you'd conclude that wearing tie-dyed trousers and a red bow tie had no effect on your appeal to women at the bus stop, and you'd refine your hypothesis. "Maybe if I wear just the bow tie, it will have an effect."


    You'd repeat the experiment, tally the results, decide whether or not this had an effect, and then refine your hypothesis further until you found something that worked.


    But what if you started with a preconceived notion? Let's say your friend Pierre swore that the tie-dyed pants and red bow tie thing worked like a charm for him. Or even more compelling, let's say 10 of your friends said it worked for them.


    Then it would be tempting to fiddle with your data. Maybe you should exclude all the tall women who didn't respond to your new outfit, because you're short and tall women wouldn't be apt to be interested. And maybe you should also exclude all the women carrying briefcases, or all the women wearing business suits, because they might not like people in flamboyant clothing. Or maybe you should exclude the sunny days when the women might be blinded by your bright trousers.


    You could go on excluding certain factors or groups of women until you found a combination in which your hypothesis was supported. Then you'd publish a paper saying, "New Study Supports Tie-Dyed Pants and Bow Tie Hypothesis."


    Unfortunately, this is how some clinical trials are reported. This is especially true of so-called meta-analyses, in which researchers pool the results of many different small studies to get high enough numbers so that the results can be analyzed statistically. This is because the "statistical significance" of a study depends on the size of the study as well as the magnitude of any effect that is found.


    A small difference doesn't mean much if you're studying only 10 people. If you're studying 10 million, it can mean a lot.
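To make this concrete, here's a rough sketch using a standard two-proportion z-test and the normal approximation. The numbers are entirely hypothetical: imagine a treatment that nudges some response rate from 50% to 55%, measured first in 10 people per group, then in 10,000 per group.

```python
import math

def two_prop_z(p1, p2, n):
    """Two-sample z statistic comparing proportions p1 and p2,
    with n subjects in each group (pooled-variance version)."""
    pooled = (p1 + p2) / 2
    se = math.sqrt(pooled * (1 - pooled) * (2 / n))
    return (p1 - p2) / se

def p_value(z):
    """Two-sided p-value from the normal approximation."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# The identical 5-point difference, at two very different sample sizes:
for n in (10, 10_000):
    z = two_prop_z(0.55, 0.50, n)
    print(f"n={n:>6} per group: z={z:.2f}, p={p_value(z):.4f}")
```

With 10 people per group the difference is nowhere near significance; with 10,000 per group the very same difference is overwhelming. The effect didn't change, only the sample size did, which is exactly why pooling small studies into a meta-analysis is so tempting.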


    The problem is, the researchers have to decide which studies to include in their meta-analyses, and if they have preconceived notions about what the correct answer will be, they're apt to exclude, perhaps even unconsciously, studies that won't support their preconceived notions.
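A toy example shows how much the choice of studies matters. This sketch uses the standard fixed-effect (inverse-variance) way of pooling results; the four "studies" and their effect sizes are invented for illustration. Pool everything and the overall effect is essentially zero; quietly drop the two inconvenient studies and a clear positive effect appears.

```python
def pooled_effect(studies):
    """Fixed-effect (inverse-variance) pooled estimate.
    studies: list of (effect_size, variance) pairs."""
    weights = [1 / var for _, var in studies]
    return sum(w * e for w, (e, _) in zip(weights, studies)) / sum(weights)

# Hypothetical studies: positive effect = the diet helps.
all_studies = [(0.30, 0.04), (0.10, 0.02), (-0.25, 0.03), (-0.20, 0.05)]
cherry_picked = [s for s in all_studies if s[0] > 0]  # exclude the "wrong" results

print(f"All studies pooled:   {pooled_effect(all_studies):+.3f}")
print(f"Favorable ones only:  {pooled_effect(cherry_picked):+.3f}")
```

The exclusions don't even have to be dishonest; each one can come with a plausible-sounding rationale, just like excluding the tall women and the sunny days at the bus stop.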


    Another problem with preconceived notions is how results get interpreted. For example, one study of low-fat diets and cancer showed no effect of the diet. Those who believed in low-fat diets interpreted this to mean that the diets do have an effect, but that the study diets didn't reduce fat intake sufficiently, or weren't followed for long enough.



    Yet another problem with preconceived notions is that you're less apt to publish your results when they're negative. The very large studies, which cost millions of dollars and get a lot of publicity, are reported regardless of the results. But smaller trials might not be.


    Taubes cites the Minnesota Coronary Study, which at the time was the largest trial examining the effect of diet on heart disease in the United States. More than 9,000 men and women were put on cholesterol-lowering diets for a year, and their cholesterol levels did go down. But their rates of heart disease increased.


    The study went unpublished for 16 years. When asked why, the principal investigator said, "We were just disappointed with the way it came out." In other words, it didn't show what they wanted it to show, so they didn't publish it.


    This tendency to see what we want to see is also a reason to take the "evidence" cited by authors of diet books -- no matter what the diet -- with a grain of salt. There are so many published studies out there that you can cite a study to support almost anything. A bibliography of hundreds of studies supporting this diet or that -- and in the interests of disclosure, I'll say I'm on a low-carb diet and even helped the originators of The Four Corners low-carb diet publish a second edition of their diet book -- can look impressive. But it can also be very selective, citing studies that are pro and omitting those that are con.


    Once any diet originator has invested time, money, and reputation in that diet, he or she is not apt to want to change. A true scientist could say, "We were wrong. Starch raises blood sugar faster than sugar." But the author of a book telling you all your medical problems will disappear if you just eat more starch than sugar is unlikely to embrace this new finding, and will come up with various reasons why it's probably not significant.


    For example, Dean Ornish supports an extremely low fat diet. Such diets do lower cholesterol, but they lower the "good" cholesterol HDL as well as the "bad" cholesterol LDL, and so the traditional risk factors stay about the same. Ornish responded to this by saying that when you eat low-fat diets you don't need as much HDL to remove fat from your bloodstream, so the lower HDL doesn't matter.



    Maybe he's right. But is he really unbiased on the issue? I don't think so.


    The American Diabetes Association is in a similar position. For years, they've been telling people with diabetes that they should eat a lot of starchy foods, and they can't admit they are wrong, no matter what the evidence is.


    Taubes found a comment from the 1920s, by Maurice Arthus in his Philosophy of Scientific Investigation that sums up what I'm trying to say:


    "In reality, those who repudiate a theory that they had once proposed, or a theory that they had accepted enthusiastically and with which they had identified themselves, are very rare. The great majority of them shut their ears so as not to hear the crying facts, and they shut their eyes so as not to see the glaring facts, in order to remain faithful to their theories in spite of all and everything."


    So, reader, beware. Read all you can about your diabetes and your diet. But keep an open mind. Don't be caught in the trap so many so-called scientists have fallen into. Listen to the evidence with an unbiased mind.






Published On: October 15, 2007