Thursday, April 12, 2012
Yet More Ways the Literature Can Mislead Us
I'm back to pearls from recent issues of the "Primary Care Medical Abstracts" program run by Drs. Rick Bukata and Jerry Hoffman, this time to touch upon some articles that add depth to our understanding of how the published medical literature can give a misleading impression of the world of human health. You may detect a certain for-profit industry footprint in some of these techniques for obfuscation.
- A group in Freiburg, Germany compared the original research protocols of 52 randomized controlled trials with the 78 journal articles reporting the trials' results. Their concern was with eligibility criteria--which types of patients were or were not included in a study. They found that in only about half the cases did the eligibility criteria reported in the final paper actually match the original protocol; in the rest, the criteria were either not mentioned or had been modified. The net result was often to make a treatment that was good for a small segment of the population appear useful for a much wider group of patients. In their sample, however, this did not vary according to who funded the study, so non-industry researchers seem to be just as guilty as their industry-sponsored colleagues. (http://www.bmj.com/content/342/bmj.d1828?view=long&pmid=21467104)
- John Ioannidis, a frequent flyer in this blog with numerous highly revealing statistical analyses to his credit, and who has now apparently forsaken Greece for Stanford, reports with his colleagues on biomarkers--measures that purportedly signal the level of disease risk or severity. As they note, new biomarkers are constantly being discovered and tested, but relatively few ever end up influencing clinical practice. They compared biomarker studies that appeared in high-impact journals such as the New England Journal, JAMA, and Lancet--the journals most frequently read and cited--with studies in lesser-known journals and with meta-analyses of all known studies. They discovered that if you read only the articles in the high-impact journals, you'd conclude that biomarkers were much more useful than they turned out to be according to the total mass of data. More suspiciously, the articles published in the high-impact journals, which painted the most favorable pictures of the biomarkers, were often not the largest studies in terms of numbers of subjects; the larger studies often showed less favorable outcomes. It looked very much like cherry-picking: either authors make sure their best-looking study lands in the highest-impact journal and bury the less impressive studies elsewhere, or else the editors of the highest-impact journals accept only the most favorable papers and turn down the others. The bottom line for physicians: don't believe it just because you read it in a major journal. (http://jama.ama-assn.org/content/305/21/2200.long)
- Another group of authors, with John Ioannidis among them, looked at cost-effectiveness analyses of various versions of the Pap test for cervical cancer, with many new (and of course much more expensive) variations being promoted by industry as superior to the standard Pap test. By the merest coincidence, in comparing the costs and benefits of the newer tests vs. the old Pap, not a single industry-sponsored study found that the old Pap was better. Moreover, in order to prove the superiority of the newer tests, the industry-sponsored studies assumed a sensitivity for the old Pap test averaging 10% lower than the figure used in non-industry-sponsored research. Jerry Hoffman likes to say that we should simply throw out any industry-sponsored cost-effectiveness analysis without bothering to read it, because you can fudge the assumptions any which way you please--and no one has yet discovered an industry-sponsored CEA that failed to conclude that even though the treatment or test promoted by industry is much more expensive up front, in the end it actually saves money. (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3071415/?tool=pubmed)
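The biomarker pattern described above--flashy small studies in the top journals, sobering larger studies elsewhere--is easy to reproduce in a toy simulation. The sketch below is purely illustrative (the true effect size, sample sizes, and number of studies are all invented, not taken from the JAMA paper): it draws many studies of a modest real effect, then compares the single most favorable result, the kind that tends to land in a high-impact journal, against the sample-size-weighted pooled estimate a meta-analysis of all the studies would report.

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.1  # a modest real biomarker effect (hypothetical)

def run_study(n):
    """Simulate one study: the mean of n noisy measurements, plus its sample size."""
    return statistics.fmean(random.gauss(TRUE_EFFECT, 1.0) for _ in range(n)), n

# Mostly small studies, with an occasional large one
studies = [run_study(random.choice([50, 50, 50, 400])) for _ in range(40)]

# "High-impact journal" view: only the most favorable single estimate gets noticed
flashiest = max(studies, key=lambda s: s[0])

# Meta-analytic view: pool every study, weighting by sample size
pooled = sum(e * n for e, n in studies) / sum(n for _, n in studies)

print(f"Most favorable single study: {flashiest[0]:.2f} (n={flashiest[1]})")
print(f"Sample-size-weighted pooled estimate: {pooled:.2f}")
```

The most favorable single result always sits above the pooled average, and because small studies are noisier, the "winner" is usually a small study--no fraud required, just selective visibility.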
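Hoffman's point about fudged assumptions in cost-effectiveness analyses can be shown with simple arithmetic. In this hypothetical sketch (every number below is made up for illustration; none comes from the actual Pap-test studies), shaving 10 percentage points off the assumed sensitivity of the old test makes the expensive new test look about twice as cost-effective, with no change at all in the underlying data.

```python
# All figures are invented for illustration only.
POPULATION = 100_000          # women screened
PREVALENCE = 0.005            # 0.5% have a detectable lesion
NEW_TEST_SENS = 0.90          # assumed sensitivity of the newer, pricier test
COST_OLD, COST_NEW = 25.0, 75.0   # hypothetical per-test costs

def cost_per_extra_case(old_sens):
    """Incremental cost per additional lesion detected by the new test,
    given an assumed sensitivity for the old test."""
    cases = POPULATION * PREVALENCE
    extra_detected = cases * (NEW_TEST_SENS - old_sens)
    extra_cost = POPULATION * (COST_NEW - COST_OLD)
    return extra_cost / extra_detected

# Independent-study assumption vs. an "industry-style" figure 10 points lower
for old_sens in (0.80, 0.70):
    print(f"assumed old-test sensitivity {old_sens:.0%}: "
          f"${cost_per_extra_case(old_sens):,.0f} per extra lesion found")
```

The only thing that changed between the two lines of output is an assumption, which is exactly why an analysis whose sponsor gets to pick the assumptions deserves skepticism.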