I have blogged in the past about the hope that mandatory registration of clinical trials would end some of the lack of transparency that currently surrounds industry-sponsored drug studies. In general, previous studies indicated a lot of problems despite the registry, including, for example, frequent changes in study end-points.
The BMJ recently published a new study, by a group primarily based at the University of North Carolina, that looked at only one outcome: how likely was a trial that enrolled at least 500 subjects to have its results published within about 4 years of completion? Note that the authors did not look at whether anyone juggled the trial design or methods, just whether any results at all were published in peer-reviewed journals or on the registry website itself. They chose only large studies because they judged that common excuses for non-publication (we got too busy; or we submitted and the editors rejected the manuscript) would be much less likely to hold for trials of this size.
Basically the authors found that 29% of these large trials remained unpublished. Of those unpublished trials, 22% had some results available through the trial registry itself. The authors commented that this latter finding is not optimal, as the journal peer review process is presumably valuable for identifying possible flaws and gaps in study methods. Industry-sponsored trials were more likely than others to go unpublished (32% vs. 18%); put another way, industry-sponsored trials accounted for 88% of all the unpublished trials.
The present authors highlight an ethical concern that I mentioned in HOOKED. When people volunteer as subjects in a research trial, they believe that they are doing something in the service of science. If the results of the trial are never published, their "contract" with the investigator is thereby violated. Hence investigators have an ethical duty to do their best to publish research results, quite apart from whatever other obligations they may also be under.
This study, as noted, merely looked at whether any results were published in peer-reviewed journals; the authors did not ask whether the methods reported in the final paper matched what had been entered into the registry. So among the trials that were published, we could still have had cases of industry-sponsored trials changing the end-points or otherwise adding unscientific spin to make the company's drug look better, as noted in previous research. In short, the present study probably considerably underreports the continuing problems with commercial control over pharmaceutical research. Even so, it highlights the fact that merely creating a mandatory trials registry has come nowhere near solving the basic problem.