Friday, November 27, 2009

Another Example of Spin in Published Results of Industry-Sponsored Clinical Trials

Long-time readers of this blog may want to skip this post; but starry-eyed optimist that I am, I continue to believe that a new reader or two may show up once in a while, and so it may be worth adding further examples of how carefully you have to read published reports of clinical trials in medical journals to detect marketing spin in industry-sponsored research. I was alerted to the present study thanks to the good offices of "Primary Care Medical Abstracts," aka Drs. Rick Bukata and Jerry Hoffman.

Tapentadol (Nucynta) is one of the newest analgesics to be discovered and tested. The article by Kleinert and colleagues claims that it is special in having two mechanisms of action--it affects the mu-opioid receptor, as do morphine and the other opiates, and it also has norepinephrine-reuptake-inhibition properties. The present study was a double-blind controlled trial of a single oral dose of various medications to treat pain following extraction of a wisdom tooth. The investigators compared 5 different doses of tapentadol to 60 milligrams of morphine (a good-sized dose ordinarily, but with a proviso we'll get to in a minute); 400 milligrams of ibuprofen (two over-the-counter Motrin tablets); and placebo.

The pain outcomes were measured using the 100-mm visual analog scale: subjects are asked to mark, on a 10-cm (100-mm) line, where their pain is at any given time, and the mark is then measured to the nearest millimeter. To interpret such a study you need to know the relationship between a statistically measurable drop in pain and a clinically important drop in pain, once you combine the scores of many research subjects in a trial and compute a numeric average. This has been looked at repeatedly, and it is generally agreed that a drop of 13 mm is the minimum pain relief that a patient can actually detect as a clinical response. That is, if you do a study of a drug and the result is that the drug reduced pain by 10 mm, you can say that this drug is clinically worthless, as it did not reach the threshold at which a patient could tell the difference.

So when Kleinert et al. report the results for their 399 subjects, the first logical step is to compare these results to the clinical threshold. It turns out that of all the doses and drugs tested, only three produced a degree of pain relief on the primary outcome measure that surpasses the minimum threshold: the highest dose of tapentadol tested, 200 mg (15.3 mm); morphine (13.8 mm); and ibuprofen (17.9 mm). Notice that in this situation morphine just barely sneaks past the threshold. Dr. Hoffman suggests in his commentary that this can be explained by the rapid rate at which oral morphine is metabolized by the liver when one receives a single dose. Also note that none of these numbers exceeds the threshold by enough to make the results worth writing home about.
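To make the comparison concrete, here is a minimal back-of-the-envelope sketch in Python, using only the mean reductions on the 100-mm scale quoted above and the 13-mm threshold (the names and layout are mine, purely for illustration):

# Illustrative only: compare the reported mean VAS reductions (in mm)
# against the roughly 13-mm minimal clinically detectable difference.
MCID_MM = 13.0

mean_reduction_mm = {
    "tapentadol 200 mg": 15.3,
    "morphine 60 mg": 13.8,
    "ibuprofen 400 mg": 17.9,
}

for treatment, reduction in mean_reduction_mm.items():
    margin = reduction - MCID_MM
    print(f"{treatment}: {reduction} mm, only {margin:.1f} mm above the threshold")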

So how would a strictly honest scientist report these findings? The most candid report would probably be, "In this study, none of the drugs at the dosages tested produced substantial, clinically important pain relief."

What Kleinert and colleagues reported was, first, that tapentadol was obviously better than placebo (which came in at a measly 4.7 mm); and second, that the highest dose was better than morphine. As they say in the abstract (which is all that most readers will bother to read of the article), "Pain relief scores with morphine sulfate 60 mg were between those of tapentadol HCl 100 and 200 mg....These data suggest that tapentadol is a highly effective, centrally acting analgesic..."

Now, note what they also could have said if they were reporting honestly: "The pain relief achieved with the very highest dose of tapentadol was less than what you can get with cheap over-the-counter ibuprofen 400 mg." The way that they handled the inconvenient comparison between ibuprofen and their own drug was to finesse it out of the picture. They explained that ibuprofen, being both analgesic and anti-inflammatory, is a "gold standard" drug for treating dental-extraction pain. In their study, ibuprofen was superior to placebo. This fact, they proclaimed proudly, "established the sensitivity of the model." It was as if the only reason they included the ibuprofen in the study was to show that their experimental model worked. The fact that ibuprofen then outperformed their own drug was conveniently ignored.

If you are interested in the details of the cost issue, I'll report that on drugstore.com, 90 pills of 100-mg Nucynta cost $269.95, which comes to about $3.00 per tablet. Since in the study you had to take 200 mg to get relief superior to morphine, that works out to $6.00 per dose. You might walk into your local pharmacy with six bucks and see how many generic ibuprofen tablets you can buy with that cash.
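If you want to check my arithmetic, here it is in a few lines of Python, using only the drugstore.com price quoted above:

# Quick arithmetic for the cost comparison above.
price_per_bottle = 269.95    # drugstore.com price for 90 tablets of 100-mg Nucynta
tablets_per_bottle = 90

cost_per_tablet = price_per_bottle / tablets_per_bottle   # about $3.00
cost_per_200mg_dose = 2 * cost_per_tablet                 # two 100-mg tablets, about $6.00

print(f"Cost per 100-mg tablet: ${cost_per_tablet:.2f}")
print(f"Cost per 200-mg dose:   ${cost_per_200mg_dose:.2f}")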

All of this reminds me of a joke from the old days during the depths of the Cold War, when Soviet propaganda often reached absurd extremes in trying to prove to the average Russian citizen that the USSR was truly better than the US. An auto race was held as a match race between only two vehicles, an American and a Russian car. The Americans won. The race was reported in the Soviet Communist newspaper Pravda as: "The Russian car came in second. The US car finished next to last."

Kleinert R, Lange C, Steup A, et al. Single dose efficacy of tapentadol in postsurgical dental pain: the results of a randomized, double-blind, placebo-controlled trial. Anesthesia and Analgesia 107:2048-2055, December 2008.

5 comments:

Michael S. Altus, PhD, ELS said...

Re: Merck’s VIGOR trial: Bombardier C, Laine L, Reicin A, et al. Comparison of upper gastrointestinal toxicity of rofecoxib and naproxen in patients with rheumatoid arthritis. N Engl J Med 2000;343:1520-1528.

When comparing your drug with another, choose a comparison drug that is NOT the best possible alternative to your drug. For example, in the VIGOR (Vioxx Gastrointestinal Outcomes Research) trial, the investigators compared Vioxx with naproxen, which, according to Merck, has more gastrointestinal effects than many other non-selective NSAIDs.

If your drug does worse than the comparison drug, such as Vioxx causing more heart attacks than naproxen in the VIGOR trial, conclude that the other drug has protective effects (can reduce the number of heart attacks), even if those protective effects have never been established.

See chapter 17, “Vioxx Gastrointestinal Outcomes Research (VIGOR)”, and chapter 18, “Spinning the Results”, in Poison Pills: The Untold Story of the Vioxx Drug Scandal, by Tom Nesi. Thomas Dunne Books, 2008. I understand that the book is due out in paperback before the end of 2009.

Michael S. Altus, PhD, ELS said...

Merck accompanied its spin of the VIGOR results published in the New England Journal with spin in its promotional materials.

But first, some background: In December 2004, a Special Committee of the Board of Directors of Merck & Co., Inc. retained The Honorable John S. Martin, Jr., a retired judge at Debevoise & Plimpton LLP, to undertake a thorough review of the conduct of senior management in connection with the development and marketing of Vioxx (http://tinyurl.com/yexadj7). The Martin Report is available on Merck’s Web site (www.merck.com/newsroom/vioxx/martin_report.html).

In my previous comment, I mentioned the Merck-supported study of Vioxx, VIGOR (Vioxx Gastrointestinal Outcomes Research; Bombardier C et al. Comparison of upper gastrointestinal toxicity of rofecoxib and naproxen in patients with rheumatoid arthritis. VIGOR Study Group. N Engl J Med. 2000 Nov 23;343(21):1520-1528). This study reported that participants in the Vioxx group had half as many ulcers as those in the comparison group, who took naproxen (Aleve). So far, so good.

But those in the Vioxx group had 4 to 5 times as many heart attacks as those in the naproxen group. Why? The article explained that naproxen has cardioprotective effects. Merck used this explanation also in promotional audio conferences given on behalf of Merck, a news release, and oral presentations made by Merck sales representatives to promote Vioxx. Merck’s use of this explanation for its promotional activities and materials was condemned by the FDA.

“That is a possible explanation, but you fail to disclose that your explanation is hypothetical, has not been demonstrated by substantial evidence, and that there is another reasonable explanation, that Vioxx may have pro-thrombotic properties.”

This quotation is taken from a September 17, 2001, Warning Letter from the FDA to Raymond V. Gilmartin, President and CEO of Merck. The warning letter is available at FDA’s Web site (http://tinyurl.com/ycbwsft).

This extraordinarily harsh, scathing warning letter identified a Merck press release entitled "Merck Confirms Favorable Cardiovascular Safety Profile of Vioxx," dated May 22, 2001 (available in the Martin Report at http://tinyurl.com/y99v7gn, pages 43-46), that claimed that Vioxx has "a favorable cardiovascular safety profile." The Warning Letter stated that this claim "is simply incomprehensible, given the rate of MI and serious cardiovascular events compared to naproxen."

The Martin Report discusses Merck’s reply to the FDA Warning Letter (http://tinyurl.com/y99v7gn). For example, Merck notes that the Warning Letter “did not acknowledge the fact that substantial balance, including the existence of alternative hypotheses, was included in that [the May 22, 2001] press release.”

Indeed, the May 22 release acknowledged the existence of alternative hypotheses but did not identify them, such as the possibility that Vioxx may have pro-thrombotic properties.

The Martin Report discussed other communications to public and professional audiences attributing the VIGOR results to the ability of naproxen to reduce the risk of heart disease (http://tinyurl.com/yewqzjz; pages 46 to 73).

I already referred to chapters 17 and 18 of Tom Nesi’s book, Poison Pills. I should have also referred to chapter 19, “Medical ‘Interpretation’”.

Anonymous said...

The only thing that you are exposing here is your lack of expertise.

First, you should be aware (but apparently are not) that pain arising from wisdom tooth extraction is a frequently used model system for pain relief. It has the advantage of providing a group of patients who will experience a consistent and moderate level of pain at a predetermined time. (It's hard to run a useful study in patients who have sprained ankles, for example, because you can't be sure where or when they're going to happen and the intensity of pain varies greatly.) For that reason, the comparative efficacy of the agents investigated in this trial is more important than the absolute level of clinical improvement observed in this particular model.

Second, as you did in your post on telcagepant, you focus exclusively on efficacy and ignore safety. While NSAIDs are cheap and effective, they are far from perfect. You should know (but apparently don't) that chronic use of ibuprofen is associated with significant adverse effects including gastric ulceration which, when it occurs, often leads to death. Demonstrating equivalence in efficacy to ibuprofen and morphine in a phase II study is important. Demonstrating a better safety profile would be the goal of a phase III program and post-marketing surveillance.

Howard Brody said...

I am not sure if the "Anonymous" above is the same one who has been complaining on another post that I did not respond to his/her criticisms. I don't feel the need to respond to all comments, as the whole point of comments is to get multiple viewpoints out there for readers to consider. But since you rattled my cage, I'll respond briefly here.

The two points made by the above comment--1) wisdom tooth pain is a standard research benchmark for analgesic trials, and 2) NSAIDs like ibuprofen are not completely safe--are true, and are quite beside the point of anything I said in the above post.

The point of the post was a simple one. The data reported in the paper gave rise to several plausible interpretations. Some ways of reporting these interpretations made the study drug look good; some made it look bad. The authors assiduously chose ways to present their conclusions that steered toward the favorable "spin" and avoided any hint of the unfavorable.

For instance-- two things were true: 1) morphine was not quite as good as the highest dose of the study drug; and 2) morphine was not very good in this particular study context, indeed being worse in analgesic efficacy than a moderate dose of ibuprofen. The authors made a point of mentioning #1 and avoided mentioning point #2.

The take-home message, which perhaps I was at fault for not underlining: anyone who just reads the abstract or the conclusions of a drug-company-sponsored study is at high risk of getting the wrong idea about the study drug. Often, unless you actually look at the tables and numbers, you don't see what the true picture is. And everything we know about industry marketing suggests that this is not a random effect; it is a deliberate business strategy (one that journal editors are apparently too wimpy, or too self-interested, to put a stop to).

Anonymous said...

"The two points made by the above comment-- 1) wisdom tooth pain is a standard research benchmark for analgesic trials; and 2)NSAIDS like ibuprofen are not completely safe-- are true, and are quite beside the point of anything I said in the above post."

These two points are central to understanding what the investigators and the sponsoring company were trying to achieve. Because you misunderstood the design and objectives of the trial you interpreted the absence of some information as "spin" when in fact it was likely of little interest both to the authors and to the likely readers.

The irony is that your failure to understand the nature of this trial means that you have missed that the last sentence of the abstract ("favorable side effect profile") is not really supported by the evidence presented.

Finally, your last point is alarming. You suggest that to gain information from articles, one must read them. What is the alternative?