Last month, a team from IQWiG, an independent, non-governmental body created in Germany to perform health technology assessments, published a pair of papers in BMJ:
The main article was a meta-analysis of the antidepressant reboxetine, manufactured by Pfizer. This drug, a newer-generation antidepressant that selectively inhibits norepinephrine reuptake rather than serotonin reuptake, has been approved for sale in several European countries but was rejected by the US FDA. Published meta-analyses showed that the drug was modestly effective, and roughly comparable in both efficacy and safety to other popular antidepressants.
Somehow the IQWiG folks figured out that the published trials did not include anywhere near the total number of subjects actually enrolled in research studies. Their initial assessment of the drug was that, due to the missing data, they could not issue a recommendation. Pfizer complained about this at first, but then changed tack and, for some reason, actually disclosed all its in-house data to the IQWiG people. Based on that new body of data, the IQWiG team performed a new meta-analysis, which BMJ published. (The second article describes the process by which IQWiG secured the data.)
The new meta-analysis reveals that although nearly 4,600 subjects were enrolled in trials of reboxetine (comparing it to either placebo or SSRI antidepressants), the published data on which previous meta-analyses relied covered only about a quarter of them. When the totality of the data is reviewed, the IQWiG team concluded, "Reboxetine is, overall, an ineffective and potentially harmful antidepressant."
If you bother to look at the BMJ paper's figures, you'll see graphic presentations of the various trials of the drug, addressing either efficacy or side effects and safety. Each plot shows what appears to be clear publication bias. If Pfizer had reviewed all the studies and elected to publish only the 1 or 2 of its roughly 13 studies that were most favorable to reboxetine, the resulting plots would look exactly like what you see in the IQWiG meta-analysis.
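To see why publishing only the most favorable handful of trials distorts the evidence base, here is a toy simulation. The numbers are entirely hypothetical (a true effect of zero and arbitrary noise, not the actual reboxetine data); the point is only that cherry-picking the top results from many trials inflates the apparent pooled effect.

```python
import random
import statistics

random.seed(1)

# Hypothetical scenario: 13 trials of a drug whose true effect is zero
# (no benefit over the comparator). Each trial's observed effect is the
# true effect plus random sampling noise.
TRUE_EFFECT = 0.0
all_trials = [random.gauss(TRUE_EFFECT, 0.2) for _ in range(13)]

# Selective publication: report only the 2 most favorable results.
published = sorted(all_trials, reverse=True)[:2]

print(f"Pooled estimate, all 13 trials: {statistics.mean(all_trials):+.2f}")
print(f"Pooled estimate, 2 published:   {statistics.mean(published):+.2f}")
```

Running this, the estimate from the "published" pair sits well above the estimate from the full set of trials, which is what a lopsided funnel-style plot of the published literature would reveal once the missing studies surface.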
BMJ accompanied the two IQWiG papers with an editorial:
--in which they said, "Lost in the sometimes rancorous debate over research transparency, and the reasons for publication and non-publication, is the most important thing: efforts are needed to restore trust in existing evidence. To that end, the BMJ is more interested in constructive use of data than finger pointing or blame." They promised to publish a special theme issue late in 2011 on the problem of research publication transparency, and offered to print any useful suggestions for reforming the present system, which, as seems obvious, is simply broken beyond repair.
For a while, it seemed our salvation was going to lie with trial registries. Just force the companies to register trials when they start, and it becomes very hard for them to pretend a trial did not exist if they later don't like the outcome. That hope seemed alive when I wrote HOOKED. It now seems, sadly, that registries are at best only part of the answer. One problem is that registries don't talk to each other very well, so the lack of a single, unified registry is becoming problematic. The second problem is that there are a lot of things the industry can do to the raw data that never show up in any registry--as happened with the VIGOR trial, for instance, in which, reportedly, just enough deaths that were probably due to heart attacks were reclassified as due to something else to make Vioxx seem safer than it was.
But as a small bright spot, something apparently worked--in the US, the FDA never approved reboxetine.