Saturday, January 1, 2011

The Anatomy of Spin

An article (subscription required) published some months ago provides some useful information about spin in medical research publications, raises some important questions, and poses a mystery of its own.

A group out of Oxford and Paris set out to describe exactly how spin appears in medical research publications. They searched databases for articles published between December 2006 and March 2007 reporting the results of randomized controlled trials. They were on the lookout for articles reporting trials in which the difference in the primary outcomes was statistically nonsignificant--figuring that this was the sort of paper where the temptation to introduce spin was greatest, and hence that these papers would be most useful for cataloguing the forms spin takes. They started with 1735 potentially applicable titles but ended up with only 72 papers that met all their criteria. They then developed an assessment of the presence, degree, and categories of spin, which was of necessity a subjective enterprise. They also noted that in only 44 of the 72 papers were the primary research outcomes clearly identified.

They found that among the 72 papers, 33% were funded wholly or in part by for-profit entities, and in another 37.5% the source of funding was not reported.

They found that spin was common--40% of the papers had spin in at least two separate sections of the main text, and 58% had spin in the Conclusions section of the abstract. The forms the spin took included focusing on other results (such as within-group comparisons) that were statistically significant while downplaying the lack of significance in the primary outcomes; interpreting the lack of statistical significance as showing equivalence ("at least our treatment was shown to be no worse than..."), which is a huge methodological no-no--absence of evidence of a difference is not evidence of equivalence, which requires its own prespecified margin and its own test (see the sketch below); and simply ignoring the lack of significance and playing up the supposed benefits and/or safety of the experimental treatment anyway.
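To see concretely why that equivalence move fails, here is a minimal sketch (my own illustration, not anything from the paper; the data, sample sizes, and equivalence margin are all hypothetical). An underpowered trial can easily return a nonsignificant difference test, yet the same data will also flunk a proper two-one-sided-tests (TOST) equivalence analysis--"not significantly different" and "demonstrated equivalent" are different claims requiring different analyses.

```python
# Hypothetical illustration (not data from the paper): a nonsignificant
# difference test does not establish equivalence.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
treatment = rng.normal(loc=0.4, scale=1.0, size=20)  # a real effect exists
control = rng.normal(loc=0.0, scale=1.0, size=20)

# Standard two-sample t-test: with n=20 per arm this is underpowered,
# so it will typically come back "not significant".
_, p_diff = stats.ttest_ind(treatment, control)

# TOST equivalence test: to claim equivalence within a margin delta,
# BOTH one-sided tests must reject; the reported p is the larger one.
delta = 0.2  # hypothetical equivalence margin
_, p_low = stats.ttest_ind(treatment, control - delta, alternative="greater")
_, p_high = stats.ttest_ind(treatment, control + delta, alternative="less")
p_tost = max(p_low, p_high)

print(f"difference test p = {p_diff:.3f}")     # typically > 0.05
print(f"equivalence (TOST) p = {p_tost:.3f}")  # also > 0.05: equivalence NOT shown
```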

So now we come to the mystery: the authors never reported how the level of spin correlated with for-profit funding. At one point they state, "Our results are consistent with those of other related studies showing a positive relation between financial ties and favorable conclusions stated in trial reports." But they give no numbers anywhere. My only hunch as to why they report no data on this apparently key variable is that with only 72 studies in their final sample, the numbers would have been too small to support reliable conclusions, and they did not want to be charged with themselves committing the very spin they were criticizing. Still, one would have expected an explanation of some sort.
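For what it's worth, a quick back-of-envelope calculation (mine, with a made-up spin rate plugged in) supports that hunch: 33% of 72 papers is roughly 24 industry-funded trials, and any spin rate estimated from 24 observations carries a 95% confidence interval so wide as to be nearly uninformative.

```python
# Back-of-envelope sketch (my own, hypothetical numbers): how precise can
# a spin-rate estimate be in a subgroup of ~24 industry-funded papers?
from math import sqrt

n = 24           # roughly 33% of the 72 papers in the final sample
spin_rate = 0.5  # hypothetical observed proportion of papers with spin

# Normal-approximation (Wald) 95% confidence interval for a proportion
se = sqrt(spin_rate * (1 - spin_rate) / n)
lo, hi = spin_rate - 1.96 * se, spin_rate + 1.96 * se
print(f"95% CI: {lo:.2f} to {hi:.2f}")  # about 0.30 to 0.70 -- far too wide
```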

OK, so what else can we glean from this study? Mainly one thing. Where the heck are the journal editors and reviewers? How can a study be published in a supposedly respectable journal, within the past few years, that fails to specify any source of funding for a major clinical trial, or that fails to make clear what the primary outcomes are? Plus, these authors were able to detect spin in a substantial percentage of these papers based solely on the contents of the papers themselves--they sought no access to any other data. If these authors could, the journal editors and reviewers presumably could too--yet they let these blatant misstatements pass. In short, if industry sponsors of research are seeking to add spin to their publications to goose the marketing of their products, it seems that today's journal editing apparatus is putty in their hands.

This might seem to be unfair criticism because the authors focused on only one specific type of research report--an RCT whose results are statistically nonsignificant but that is published anyway. (The problem of journals refusing to publish papers that produce non-statistically-significant findings, thereby skewing the publication record, is another issue beyond our scope here.) It may be that a lot of spin appears in such reports, but very little spin appears in reports of RCTs where the results reach statistical significance--no surprise if so, as there's then much less need to add spin. OK, fair enough. But if these authors focused especially on that sort of trial report because they expected that setting to create a strong temptation to add spin, then journal editors and reviewers should have been equally forewarned and should have been especially vigilant. So the fact that so much spin and nonreporting still made it through the review process is very worrisome.

Boutron I, Dutton S, Ravaud P, Altman DG. Reporting and interpretation of randomized controlled trials with statistically nonsignificant results for primary outcomes. JAMA 2010;303(20):2058-2064.
