Rather than call out the specific paper that prompted this blog post (I also don’t want to add to its Altmetrics), I’ll just pose a question.
If your systematic review finds that a particular supplement, food, or diet led to an average total weight loss of 0.7 lbs, is it appropriate to describe that effect as significant, even if statistically you believe you’re able to make that claim?
Personally, I don’t think so.
Especially not when we’re discussing food because, as Kevin Klatt recently pointed out on his blog, there are no food placebos, and as John Ioannidis has pointed out, we eat thousands of chemicals in millions of different daily combinations, which markedly challenges our ability to conclusively opine on the impact of any one food.
Worse, though, is the fact that the media (both traditional and social) won’t bother to qualify their enthusiasm when describing these findings, and instead will report them as beneficial, significant, and important, as of course will PubMed warriors.
So how to fix this? Perhaps including a qualifying “but not likely to have any clinical relevance” statement in the abstract might lead to more balanced media coverage (or less media coverage), which in turn would be less likely to present significant but clinically meaningless outcomes as important. Ultimately, that would be good for science and for scientific literacy.
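The statistical point here can be sketched with a toy calculation (all numbers below are hypothetical, chosen only for illustration, and are not drawn from the paper in question): with a large enough pooled sample, even a trivial 0.7 lb mean difference clears the conventional significance threshold.

```python
import math

# Hypothetical numbers for illustration only.
mean_diff_lbs = 0.7    # average weight loss vs. comparator
sd_lbs = 10.0          # plausible between-person variability in weight change
n_per_arm = 5000       # a large pooled sample, as in a big meta-analysis

# Standard error of the difference between two group means.
se = sd_lbs * math.sqrt(2 / n_per_arm)
z = mean_diff_lbs / se

# Two-sided p-value from the normal approximation.
p = math.erfc(z / math.sqrt(2))

print(f"z = {z:.2f}, p = {p:.4f}")
```

Here the difference is "statistically significant" (p well below 0.05), yet a 0.7 lb average loss is almost certainly meaningless to any individual patient, which is exactly the gap between statistical and clinical significance.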