I took off after an editorial in the New England Journal defending industry sponsorship of trials, saying that you should decide whether a trial is trustworthy based on reading its methods, not based on who sponsored it. My rebuttal is that many of the features that make results untrustworthy simply cannot be gleaned from reading the published report in the journal.
Here's the other side of the coin--things that make results untrustworthy that the alert reader can spot.
Again tipping my hat to Rick Bukata and Jerry Hoffman's Primary Care Medical Abstracts, I came across a study by Drs. Michael Hochman of UCLA and Danny McCormick of Harvard:
These guys looked at several reporting practices that make benefits look larger than they are:
- Reporting relative vs. absolute risks
- Reporting surrogate endpoints rather than clinically meaningful health outcomes
- Reporting composite endpoints instead of reporting each endpoint of interest separately
- Reporting only disease-specific mortality instead of all-cause mortality
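To make the first practice on that list concrete, here's a small illustrative calculation. The event rates are hypothetical numbers chosen for the example, not figures from the Hochman and McCormick study: a drug that cuts five-year mortality from 2% to 1% can be honestly described as halving the risk of death, which sounds far more impressive than the one-percentage-point absolute difference.

```python
# Illustrative only: hypothetical event rates, not data from the study.
def risk_summary(control_rate, treated_rate):
    """Return relative risk reduction, absolute risk reduction,
    and number needed to treat for two event rates."""
    arr = control_rate - treated_rate   # absolute risk reduction
    rrr = arr / control_rate            # relative risk reduction
    nnt = 1 / arr                       # number needed to treat
    return rrr, arr, nnt

# Hypothetical drug: five-year mortality falls from 2% to 1%.
rrr, arr, nnt = risk_summary(0.02, 0.01)
print(f"Relative risk reduction: {rrr:.0%}")  # 50% -- the headline number
print(f"Absolute risk reduction: {arr:.0%}")  # 1%  -- the same result
print(f"Number needed to treat:  {nnt:.0f}")  # 100 patients per death averted
```

Both figures describe the same trial result; only the framing differs, which is exactly why reporting relative risks alone tends to inflate the apparent benefit.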
Drs. Hochman and McCormick then compared how likely it was that a study would report results in these ways, based on who sponsored the study. They found significant differences in two categories. Exclusively industry-sponsored studies were more likely than studies with at least some non-commercial support to report surrogate endpoints (45% vs. 29%) and disease-specific mortality (27% vs. 16%).
Now, if you wanted to defend the NEJM editorial position, you could say that readers of studies can readily see how the data are reported according to these criteria and can be wary of any study, regardless of who funds it, that reports data in the less desirable way. But let's give the last word to Drs. Hochman and McCormick, in their final recommendations: "These findings highlight the need for educational efforts to ensure that readers understand the complexities of these endpoints and of relative risk reporting. ... In addition, Institutional Scientific Review Committees and regulatory agencies (e.g. the FDA) must closely examine the endpoints used in clinical trials and discourage the inappropriate use of surrogate and composite endpoints, and endpoints involving disease-specific mortality. Finally, medical journals may consider instituting editorial policies mandating the reporting of results in absolute numbers."
In other words, rather than asking readers to sort out for themselves whether the results are reported in a useful and valid fashion, medical journals like NEJM could simply refuse to publish papers that don't adhere to the highest standards. Of course, if they did, they might lose revenue, since drug companies would not buy so many expensive reprints of papers whose main use is marketing drugs--which may be one of the roots of the problem.