The current JAMA features an article (subscription required) by an international team of authors headed by Dr. Sylvain Mathieu of Paris. The authors set out to study what has happened as a result of the requirement, instituted by leading medical journals in 2005, that all clinical trials be properly registered--in theory allowing anyone to compare the final, published results of a trial with its original design.
They found 323 trials published in 2008 in high-impact medical journals, either general or in one of 3 specialties (cardiology, rheumatology, gastroenterology). Only 45% of these were adequately registered. Others were registered only after the trial had been completed (14%), or with no description, or an unclear description, of the primary outcome (12%). More than a quarter were not registered at all.
In about half the cases, there were insufficient data to determine what sort of bias would have been introduced by the change in the primary outcome from the one originally identified in the study methods to the one featured in the published report. In the other half--23 studies--the direction of bias could be assessed, and in 19 of them an outcome whose results were statistically nonsignificant was replaced by an outcome whose results were significant.
Naturally a person like me would wonder: what association did commercial sponsorship have with whether the authors ended up playing fast and loose with the study design? About 56% of the trials in this review were commercially sponsored (and sponsorship was not reported in another 9%), which would seemingly have allowed some comparisons to be made. But these authors reported no data on associations with commercial sponsorship.
The most worrisome finding was evidence of selective outcome reporting in 28% of the studies that were properly registered. This suggests that neither the editors nor the reviewers took the time and trouble to use the data available in open trial registries to check whether the outcomes reported in the final publication were indeed the outcomes listed in the pre-trial study design. In short, in these instances, the whole point of trial registration was subverted by the journals' failure to take advantage of the data.
So let's leave aside the question that we wish had been answered, but was not: whether commercially sponsored studies were more likely to be registered incorrectly or incompletely, or to have partial or biased reporting of endpoints. We have long asked why articles get published in major medical journals that are ghostwritten or that suppress key data in the interests of marketing drugs. The usual reply from the journal editors is that they are not detectives--if the authors flat-out lie to them about who wrote the article, or what the endpoints were, how is the journal going to smell a rat? In a previous post (http://brodyhooked.blogspot.com/2009/08/read-em-and-weep-wyeth-ghostwriting.html) I noted the criticism that even if journal editors lack the detective skills to identify ghostwritten articles, that hardly explains why no article has yet been officially retracted by a journal once ghostwriting was proven by another route. Now, in light of the Mathieu study, journal editors need to explain why they cannot be bothered to cross-check trial reports against registered data that are available in plain sight.
It appears that journal editors not only make rotten detectives; they don't run a very tight ship even when other people do the detective work for them and hand them the results neatly gift-wrapped.
Clinical trial registries were supposed to solve this problem. Do we have here yet another example of Epstein's Law--"If you think the problem is bad now, just wait till you've solved it"?
Mathieu S, Boutron I, Moher D, Altman DG, Ravaud P. Comparison of registered and published primary outcomes in randomized controlled trials. JAMA 2009; 302:977-984.