In my homepage blog, I discussed the flaws and biases in supplement and drug trials. This is part II of that discussion.
A 2005 essay titled “Why Most Published Research Findings Are False” provides a good summary of the factors that influence the conclusions of studies:
“There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research.” (http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124)
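The paper’s central point can be made concrete with a little arithmetic. Below is a minimal Python sketch of the formula Ioannidis derives for the bias-free case: the post-study probability that a claimed finding is true (its positive predictive value, PPV) depends on the pre-study odds R that the relationship is real, the study’s power, and the significance threshold alpha. The input numbers are my own, chosen purely for illustration.

```python
# PPV framework from Ioannidis (2005), bias-free case:
#   PPV = (1 - beta) * R / (R - beta * R + alpha)
# where R     = pre-study odds that the probed relationship is true,
#       beta  = type II error rate (power = 1 - beta),
#       alpha = type I error rate (significance threshold).

def ppv(R: float, power: float, alpha: float = 0.05) -> float:
    """Post-study probability that a 'significant' finding is true."""
    beta = 1.0 - power
    return (power * R) / (R - beta * R + alpha)

# Illustrative inputs (mine, not from the paper):
# a plausible hypothesis tested in a well-powered trial...
print(f"Plausible, well-powered: PPV = {ppv(R=0.5, power=0.80):.2f}")   # ~0.89
# ...versus a long shot probed in a small, underpowered study.
print(f"Long shot, underpowered: PPV = {ppv(R=0.05, power=0.20):.2f}")  # ~0.17
```

Under the second set of assumptions, a “significant” claim is more likely false than true, which is exactly the paper’s headline result.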
So, Do We Ignore the Data?
No, no, and no… there are ways to use this information to make informed decisions about what the “evidence” is actually saying. But it takes a lot of detective work. Most of what I know about scrutinizing studies comes from my own review of papers reporting on biases, plus lessons learned from my mentors and teachers. I am in no way a methodology whiz, but I do have a basic grasp of why our models and interpretations need to be treated with caution.
These are some of the considerations that I always review when applying studies to my clients:
1. Read the actual study and be wary of media spin. One cross-sectional analysis of 130 health news items retrieved through Google found the following:
In total, 78% of the news did not provide a full reference or electronic link to the scientific article. We found at least one spin in 114 (88%) news items and 18 different types of spin in news. These spin were mainly related to misleading reporting (59%) such as not reporting adverse events that were reported in the scientific article (25%), misleading interpretation (69%) such as claiming a causal effect despite non-randomized study design (49%) and overgeneralization/misleading extrapolation (41%) of the results such as extrapolating a beneficial effect from an animal study to humans (21%). We also identified some new types of spin such as highlighting a single patient experience for the success of a new treatment instead of focusing on the group results. (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4608738/)
2. Consider the methods used.
Subjects: Who were the participants, and what population was studied (differences in gender, ethnicity, health status)? What were their characteristics? What was the dropout rate? Who was excluded and why? Which factors were controlled for?
Type of study: Was it an observational, correlational study, which can only identify relationships rather than cause and effect, or was it a randomized controlled trial? Was there a control group, or was it a comparison trial? (Too much and too little control each have weaknesses. For example, too much control prevents extrapolation of the intervention to the real world, while too little prevents the conclusion that the intervention caused the change.)
Intervention: What was the form of the intervention? Was the dosage appropriate? How long was the study? How was the intervention taken? What was the placebo effect?
3. Search the results for inconsistencies: How are the results reported? For example, do the authors use an odds ratio, relative risk, or absolute risk? What is the number needed to treat (NNT)? (See the sketch below for how much these choices change the story.) Is the reported p-value truly a reflection of how compatible the data are with the statistical model?
Do the charts and statistics match the author’s conclusions?
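To show why the reporting metric matters, here is a small Python sketch using made-up trial numbers (mine, purely for illustration) that computes the absolute risk reduction, relative risk reduction, odds ratio, and NNT from the same 2×2 result:

```python
# Four ways to report the same hypothetical trial.
# Made-up numbers for illustration: 1,000 patients per arm.
treated_events, treated_n = 10, 1000   # 1% event rate on treatment
control_events, control_n = 20, 1000   # 2% event rate on control

risk_t = treated_events / treated_n
risk_c = control_events / control_n

arr = risk_c - risk_t                          # absolute risk reduction
rrr = arr / risk_c                             # relative risk reduction
odds_t = treated_events / (treated_n - treated_events)
odds_c = control_events / (control_n - control_events)
odds_ratio = odds_t / odds_c
nnt = 1 / arr                                  # number needed to treat

print(f"Absolute risk reduction: {arr:.1%}")          # 1.0% -- sounds modest
print(f"Relative risk reduction: {rrr:.0%}")          # 50%  -- sounds dramatic
print(f"Odds ratio:              {odds_ratio:.2f}")   # ~0.50
print(f"NNT:                     {nnt:.0f} treated per event prevented")
```

The same hypothetical trial can be honestly reported as a 50% relative risk reduction or as a 1% absolute benefit requiring 100 treated patients per event prevented; headlines usually pick the former.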
4. More can be found here for the geeks…
· Epidemiological study interpretation (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3077477/)
· Biases to search for (http://www.ncbi.nlm.nih.gov/pubmed/18582622)
· Design, analysis and interpretation of method-comparison studies (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2944826/)
· Quasi-experimental study designs (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1380192/)
Where to Go from Here?
1. Physicians and practitioners need to be honest about interventions and transparent about what they have experience with. If the intervention is new, both parties should look up the NNT and find a few studies to examine.
2. Consumers and patients need to be aware that some of the studies and standards of care physicians rely on could be flawed. Don’t just accept a treatment that isn’t helping without studying the data or asking your doctor for more information. Most importantly, watch whether you are actually getting results with an intervention (nutrient, herb, oil, supplement, medication) and use that as the basis for your final decision.
Get the references here on my homepage.