This gem came to us from a paper published in The Lancet, which, given the problems it has created for itself by publishing some seriously shoddy medical conclusions in the past, really should know better. What’s the problem with this study, you may ask? It’s all about methodology, my friends.
When we think about studies and research, the picture that comes to the average layperson’s mind is of people in white coats, meticulously slaving away over their microscopes in a lab full of beakers and Bunsen burners. Alas, I hate to be the one to burst the mystical sciencey bubble, but this is not what happens in a pretty substantial number of cases. In fact, in a heck of a lot of research, this isn’t even close to reality. Sometimes it is, but most of the time? Nope.
When looking at studies, finding out how they are structured, and how they controlled for and measured the variables in question in order to generate their data, is hugely important. For example, the “gold standard” of all research is a double-blind, randomized controlled experiment or trial. Double-blind means that neither the subjects nor the researchers know which subjects are in the test groups and which ones are in the control group. The idea is that this reduces bias from both the subjects and the researchers. Since neither knows who is in which group, they are less likely to be influenced in behavior, reporting, or expectations. It’s basically a way to weed out bias, even the subconscious kind. Randomized means that chance, not the researchers, decides the assignments: subjects should be selected at random, and their division into the various study groups should be random as well. Again, this is all intended to reduce or eliminate any bias from creeping into the study.
Another quality of good research is isolating the factors you’re studying as much as possible, and controlling for any other factors that might creep in. The idea is to eliminate the possibility of concluding that the phenomenon you’re observing is actually being caused by some factor other than what you’re studying. So, if you’re trying to figure out whether or not hot dogs have a negative health effect, you want to have people eating that food in isolation as much as possible. Essentially, you don’t want to say hot dogs cause cancer, for example, when in reality it could be the bun, the condiments, or the French fries you let your subjects eat alongside their dogs that were really the problem. You want to rule out environmental factors, too. Even if your subjects were eating nothing but hot dogs, morning, noon, and night, if they happen to live in a building with asbestos or black mold behind the walls, guess what? Blaming it on the hot dogs doesn’t fly there, either. That’s why controlling for outside factors and isolating the variables is so important.
So, the question is, did the study claiming low carb diets reduce human lifespan do any of the things that mark good quality research? How well was it designed?
As it turns out, it’s an insanely poorly designed study, and in my opinion, it’s the epitome of lazy science. My fellow ketonians, this correlation between reduced lifespan and low carb eating was determined by conducting… a food survey.
What is a food survey? It’s a study wherein researchers have participants fill out information about what they ate several times throughout the study period, and then they correlate health outcomes to those reports. All the information is voluntarily self-reported by the study participants, and is entirely subject to the memory and honesty of the participants. Essentially, the researchers asked several thousand people to guesstimate what they ate, several times over a twenty-five year period, looked at health outcomes, and then estimated life expectancy based on all that.
Now, I could nitpick a heck of a lot of stuff about this study, like what they considered low carb (which wasn’t even remotely what most people in the LC community would concur with) for example, but I think the simple fact that it is a food survey invalidates it on its face. Let me explain.
Nina Teicholz, who once again has done a stellar job digging into this stuff, found an example of the kind of survey that is typically used in these studies (hat tip to the estimable Dr. Georgia Ede for sharing this find on Facebook, from whence the link was acquired by yours truly). Go click on that link and take a look at what participants were asked. I’ll wait.
As you can see, people are asked to estimate (aka, guess) how much of each food they consumed, on average, over the past year. No direct observation. No controlling for outside factors. No isolation of variables in any way. This is estimating lifespan based on whether or not someone accurately or honestly remembers things like how much milk they drank per week over the last year. It would be like saying hot dogs cause cancer because some of the folks who died from cancer in your study guesstimated eating two hot dogs a month. Where is the control for all the other foods they ate, or their overall lifestyle, or their environment? How can you assume any conclusion you come to is accurate under these conditions?
The reality is that you can’t. And this is why anyone who knows me has seen my constant references to food surveys as junk science. It’s just a method of bulk, raw data collection and trying to draw any high confidence conclusions from this stuff is noise, not science.
Short answer: No, it won’t.
Long answer: The Journal of Physiology published a paper that claimed a significant increase in risk for diabetes and insulin resistance was found in mice after two weeks on a Ketogenic protocol.
The study itself seems to be decently designed on its face. There were two groups of mice, one was fed a high carb chow for the duration and the other was fed a low carb, high fat chow. All well and good, thus far.
So, what’s wrong with it? A few things.
To start, I want to very briefly touch on the fact that we are studying mice and applying the findings to human beings. I realize that this isn’t considered outside of the realm of good science and, in fact, clinical trials in humans for pharmaceuticals or cosmetics, for example, typically don’t proceed until the product has been tested on animals. The reasons for this should be fairly obvious, and are understandable. If a product kills off the rats, it’s probably not a good idea to give it to the humans.
However, rodents are not humans. Yes, we are both omnivores, and our digestive systems have a similar makeup and function in a similar fashion. Rodents, however, are scavengers, and their dietary needs aren’t identical to those of human beings. Now, I want to clarify that this is not even remotely as bad as feeding herbivores a diet designed for humans and then extrapolating the devastating consequences out to the general human population (Hello, trials testing high fat diets on rabbits. Yes, I’m looking at you.). In my opinion, however, making direct dietary outcome comparisons between rodents and humans seems sketchy, at best. So, that’s something to keep in mind when evaluating any dietary study.
The next issue that should be a problem for anyone is that the trial was very short-term, and the results were measured after only a few weeks. Anyone who has been in or has hung around the low carb and keto community for any period of time has heard of things like fat adaptation and the keto “flu.” As concisely as I can make this, keto “flu” is the set of flu-like symptoms that many people experience in the first few weeks of drastically lowering carbs. Your kidneys stop holding on to excess salt and water, and without adequate replacement (and sometimes even with it), headaches, fatigue, and shakiness can occur. Your muscles will run through their glycogen stores, and you might experience weakness and decreases in endurance as a result. Your body eventually, after a few weeks, stops relying on glucose for energy and begins to shift metabolism so that you can run on both body and dietary fat instead. Blood sugar and insulin stabilize, and energy levels tend to come roaring back as you start using fatty acids efficiently. This shift is the beginning of keto adaptation, wherein your body adapts to ketosis for the long term.
The problem with this study with regard to keto and fat adaptation is that it was so short-term that neither one is likely to have occurred. Therefore, it is highly likely that what these researchers observed is more related to the early keto adaptation period, and is not necessarily representative of a longer-term maintenance of ketosis. Would the results have been the same had the study gone on longer? We don’t know, but the important thing is that neither do they.
The last thing I want to touch on with regard to this study is what I would consider to be the real “clincher.” When designing an experiment to look at how a specific dietary protocol affects an organism, one would expect the food to actually resemble what a typical adherent regularly eats. And, on the surface, the macronutrient composition of the high fat mouse chow seems to meet this standard. The food sat right at 90% of calories from fat, and while most of us would probably classify that ratio as somewhat more akin to keto for medical therapy, it’s within the generally accepted guidelines. Not the most common ratio, by any means, but not outside of the realm of possibility.
When you look deeper, however, what you find is that the high fat mouse chow was made up of mostly hydrogenated vegetable oil, corn oil, and cellulose. That’s right, folks. They fed these poor rodents a diet of Crisco for a few weeks and then wrote a paper attributing the predictably awful result to keto.