Back in January I wrote about a very important study by Turner et al. in NEJM:
Turner and colleagues in Portland showed a truly scary degree of publication bias in journal articles about depression: roughly half the research studies of SSRI antidepressants showed effectiveness and half did not, yet virtually all of the studies showing effectiveness were published, and virtually none of those showing no effectiveness were. They conducted this study by comparing the FDA biostatisticians' independent reviews of data submitted as part of new drug applications with the eventual publication that resulted (or failed to result) from each study. They also showed that many published studies were "spun" as positive when the FDA experts had assessed them as negative, and that the effect sizes reported in the published studies were frequently inflated relative to those discerned in the FDA review. The major limitation of Turner et al. was that they addressed only one class of drugs, antidepressants. Their data were also many years out of date, perhaps not reflecting more recent practices.
A group at UCSF led by Kristin Rising set out to remedy these deficiencies in a new study:
They looked at all new molecular entities approved by the FDA in 2001 and 2002, reasoning that this would have given investigators enough time to publish all studies that were likely to be published. They reviewed the FDA assessments of all efficacy studies submitted by the drug companies as part of those new drug applications and conducted an extensive search to see whether each study was ever published. They also compared the primary endpoints and conclusions of all FDA-submitted studies with what appeared in the resulting publication (for those studies that were published).
The results were in the same general ballpark as what Turner et al. had found for antidepressants, only not quite so dramatically disastrous. Rising et al. found that 78 percent of the studies they looked at were published. A study submitted to the FDA that showed the company's drug to be effective was about four times more likely to be published than one that did not. Between FDA submission and eventual journal publication, a number of primary trial endpoints that did not show the drug favorably were dropped, and some new primary endpoints that had not been submitted to the FDA were added; in each case the change showed the drug in a positive light. The statistical significance of some outcomes also changed between FDA submission and publication, in every case in a way that favored the drug.
In sum, Rising et al. noted the same shenanigans: multiple changes made to study results by the time they were published, if they were published at all, resulting in a drug's footprint in the published literature that bore only a tenuous resemblance to the data submitted to the FDA. The basic lesson is that commercial sponsorship of pharmacotherapy trials has made it harder and harder to practice evidence-based medicine, because the "evidence" is routinely altered to make drugs look more effective and safer than they really are.
Rising et al. note that a number of the shenanigans they detected could have been prevented by earlier adoption of mandatory trial registries, so those who believe that registries are the answer to commercial sponsorship will be heartened. I argue in HOOKED that registries are a useful first step, but that ultimately a bigger firewall between company money and the conduct of clinical trials is needed.