I really should be in another line of work. I am embarrassed that this paper came out in June and I am only now getting around to posting about it.
Lisa Bero, of the pharmacy school at UCSF, and her colleagues have come out with another gem on industry-sponsored clinical trials:
In one way this is old news, because it adds to the long list of papers documenting that studies funded by drug companies very often conclude that the company's drug is best. But this paper extends the findings to a particular class of studies: those comparing one statin with another--the coveted head-to-head trials that we say the industry funds far too few of. And, especially, the paper gives us new insights into how the bias creeps in. (Or, in this case, the bias does not creep in; it walks right in the front door, sits down, and demands dinner.)
Because statins are such big business, Bero et al. were able to find 192 studies comparing one statin to another. No surprise--in almost all cases, the end result was that the sponsoring company's statin performed better than the competitor statin. The authors found a 20-fold increased likelihood that the actual results of the trial would favor the company's own statin, and a 35-fold increased likelihood that the study would end up recommending the company's own statin. Note those numbers--obviously, in many (almost half) of the papers in which the authors confidently recommended the company's statin, the actual study results did not support that recommendation. (This is a fairly usual finding for industry-sponsored studies--almost always there's a blatant marketing message added in amongst the scientific reporting. A pox on the journal editors and reviewers who lack the gumption to insist that such messages be excised, since it takes no advanced biostatistical smarts to figure out when the study results don't support the recommendations.)
Of even more interest were the reasons why these studies showed such disproportionate results. Bero et al. were able to identify almost all the study-design factors that accounted for the results favoring the company's drug--in this set of studies, the big culprits were inappropriate dosing of the comparator drug and incomplete blinding. It does not seem too extreme to say, in summary, that when the results of a company-sponsored trial end up favoring the company's drug, it happens because the study was deliberately designed from Day One to assure that the company's drug came out on top. Again, this is no surprise for an industry that makes it clearer all the time that it views its so-called scientific research enterprise as nothing but an extension of the marketing department.