Background: My colleague Dan Moerman, a medical anthropologist at U Michigan-Flint, published a classic paper on the placebo effect nearly 30 years ago. He looked at about 35 published studies of the then-new-miracle drug, cimetidine (Tagamet) for healing peptic ulcers, all of which had almost identical methods--the patient was endoscoped at the start of therapy and then a month later to see whether the ulcer had healed and what size it was. (They didn't have the term back then, as I recall, but Moerman did an early meta-analysis.) He showed a number of surprising things:
- According to his meta-analysis, cimetidine was actually no better than placebo.
- Cimetidine, however, was quite consistent in its effects across studies. No matter where the study was done (a wide range of international sites were represented), the healing rate in the cimetidine-treated group at one month was about 70-75%.
- If you looked at the individual studies, about half showed that cimetidine was superior to placebo, and half showed it wasn't.
- Since the cimetidine response was so consistent, the only variable left to explain this inconsistency was the placebo response rate. And indeed that varied widely, from a low of 10 percent to a high of 80 percent.
- So whether cimetidine was shown in any individual study to be better than placebo had virtually nothing to do with the cimetidine response rate and everything to do with the placebo response rate.
- The placebo response rate in these studies is not the same as the "placebo effect." Studies of this sort cannot distinguish between healing of ulcers caused by administering a placebo vs. healing of ulcers due to other causes (primarily, spontaneous remission). Most ulcers, given time, heal. However, it would be contrary to most of what we think about peptic ulcers to imagine that the spontaneous healing rate differs so widely among study centers in different countries. So it is more plausible that differing rates of the placebo effect among study sites primarily account for the large differences.
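The arithmetic behind Moerman's observation can be sketched numerically. Below is a toy simulation (all sample sizes and rates are illustrative assumptions, not Moerman's actual data) showing that with a fixed ~72% healing rate in the drug arm, whether a trial "finds" the drug superior is driven almost entirely by the placebo arm's rate, here checked with a simple two-proportion z-test:

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-proportion z-test: healed fraction p1 (drug, n1 subjects)
    vs. p2 (placebo, n2 subjects), using the pooled standard error."""
    x1, x2 = p1 * n1, p2 * n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Illustrative setup: drug heals ~72% everywhere, placebo rate varies by site.
n = 50           # subjects per arm (hypothetical)
drug_rate = 0.72
for placebo_rate in (0.10, 0.30, 0.50, 0.70, 0.80):
    z = two_proportion_z(drug_rate, n, placebo_rate, n)
    verdict = "drug 'superior'" if z > 1.96 else "no significant difference"
    print(f"placebo rate {placebo_rate:.0%}: z = {z:+.2f} -> {verdict}")
```

With a low placebo rate the drug looks dramatically effective; past roughly 70% the difference vanishes, even though the drug arm never changed.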
Now, let me make two points about Moerman's (subsequently replicated) research. First, it reveals a real problem in using the typical two-armed, placebo-controlled, double-blind randomized trial to assess drug effects: the placebo arm of the trial can be a source of "noise" that might obscure a presumably real drug effect. Second, I take Moerman's work to be real science. Moerman was not trying to sell Tagamet. (Nor, so far as I know, did he own stock in a placebo company.) Moerman was trying to understand which factors determine the outcome of placebo-controlled studies and, quantitatively, how much of a result can be attributed to each factor.
Fast-forward to the article reviewed by Neuroskeptic in his blog. It's one of a series of studies funded by drug companies either directly or indirectly, and differs from earlier entries into the series (according to Neuroskeptic at least) only in its brazenness. If you are trying to sell drugs, then you really want to take what Moerman observed and work it to your advantage. What that usually means is to try to manipulate the placebo arm of the trial so as to reduce, as much as possible, the response rate among subjects randomized to that arm--thereby assuring that the subjects taking your drug have the best possible chance of doing better than their placebo counterparts.
Hence, in the name of "accuracy" or, more usually, "efficiency," we get a variety of proposals that all amount to various ways to ignore or toss out data when the placebo effect is inconveniently high. These efforts fall (in my view) along a spectrum. At one end we have relatively innocent and well-reasoned alterations of study design that try to correct for extreme and obvious distortions that lead to underestimating the true drug effect. At the other end of the spectrum are blatant efforts to wipe out unfavorable data and replace them with good-looking data, science be damned. You can read the post about this most recent proposal from GlaxoSmithKline and be the judge. (Neuroskeptic thinks it's an extreme case of tilting the pinball table by eliminating all study sites that have an "abnormally" high placebo response rate, thereby assuring that your drug will emerge the winner.)

My own view is that most efforts, at most points along the spectrum, run afoul of one basic consideration. In the real world of medical practice, the placebo effect is omnipresent. Further, while in a study setting one might have a legitimate reason to try to minimize placebo effects (in both arms of the trial equally), in the world of clinical medicine practitioners do everything possible, most of the time, to augment the placebo effect, quite appropriately, as this makes more patients get better faster. So any study that tries to get "better" data by minimizing the placebo effect is likely not to inform us of how the drug will perform in actual practice settings.
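To see why excluding "abnormally" high-placebo sites tilts the table, consider a toy calculation (every number below is invented for illustration; this is not the GlaxoSmithKline method or data, just a sketch of the general maneuver):

```python
# Hypothetical per-site results as (placebo healed fraction, drug healed fraction),
# with equal numbers of subjects per arm at every site.
sites = [(0.15, 0.72), (0.35, 0.70), (0.55, 0.74), (0.75, 0.73), (0.80, 0.71)]

def pooled_difference(site_list):
    """Average drug-minus-placebo difference across sites (equal weighting)."""
    return sum(d - p for p, d in site_list) / len(site_list)

print(f"all sites included:      drug advantage = {pooled_difference(sites):+.2f}")

# The maneuver: retroactively drop sites whose placebo response exceeds a cutoff.
kept = [(p, d) for p, d in sites if p <= 0.50]
print(f"high-placebo sites cut:  drug advantage = {pooled_difference(kept):+.2f}")
```

The drug arm is essentially identical everywhere, yet the pooled drug advantage more than doubles once the inconvenient sites are discarded, which is exactly the worry about such "enrichment" strategies.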
Merlo-Pich E, Alexander RC, Fava M, & Gomeni R. A New Population-Enrichment Strategy to Improve Efficiency of Placebo-Controlled Clinical Trials of Antidepressant Drugs. Clinical Pharmacology and Therapeutics (published online 22 Sept. 2010). PMID: 20861834.