When you do a randomized controlled trial, you are supposed to say up front what your endpoint is going to be. For example, if you're studying a new cholesterol-lowering drug, good endpoints to study would be heart attack, stroke, and death. (A not-so-good endpoint would be cholesterol level. This is the infamous "surrogate endpoint," where we simply assume that if you reduce cholesterol, then of course you must be reducing the number of heart attacks, strokes, and deaths later on. The problem with surrogate endpoints is that it has been shown many times, for many different diseases, that these assumptions can be bogus. Drug companies love surrogate endpoints because the studies are cheap; you can show that your drug lowers cholesterol in a couple of months, whereas it would take years to show that it reduces the really important endpoints, if it even does. Plus the companies have armies of reps to convince docs that the surrogate endpoints really mean something.)
Sometimes, for various statistical and methodological reasons, the investigators may not report heart attack, stroke, and death separately, or at least not only separately. They may also calculate a combined score that reflects whether an individual subject experienced any of the three outcomes. This is called a composite endpoint. A composite endpoint that reflects the risk of stroke, heart attack, and/or death, with and without the drug, is probably reasonably valid, since all of these outcomes are bad and you could argue they are all really bad. Ideally, the investigators would report the three endpoints separately as well as the composite. That way, if any armchair bean-counters suspected the investigators were up to something, they could easily double-check.
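For the code-inclined, here's a minimal sketch of that "any of the three" scoring logic. The field names and the little cohort are my own invention for illustration, not anything from an actual trial:

```python
# Minimal sketch of how a composite endpoint is scored: one record per
# subject, and the composite "fires" if ANY component event occurred.
# Field names are invented for illustration.

from dataclasses import dataclass

@dataclass
class Subject:
    had_heart_attack: bool
    had_stroke: bool
    died: bool

def hit_composite(s: Subject) -> bool:
    """True if the subject experienced any component of the composite."""
    return s.had_heart_attack or s.had_stroke or s.died

cohort = [
    Subject(False, False, False),  # nothing happened
    Subject(True,  False, False),  # heart attack only
    Subject(False, False, True),   # death only: counts exactly the same
]

print(sum(hit_composite(s) for s in cohort), "of", len(cohort),
      "subjects hit the composite")  # 2 of 3
```

Notice that the composite treats every component identically: a death and a heart attack each bump the score by exactly one, which is precisely why the choice of components matters so much below.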
Enter the present set of authors, headed by Gloria Cordoba. They looked at 40 clinical trials published in 2008 that reported composite endpoints. Nearly 3/4 of them were about cardiovascular stuff (like my example above) and 83% had either total or partial industry funding.
To cut to the chase, Cordoba and colleagues found a lot of funny business, such as:
- In at least 4/5 of the studies, the composite was made up of individual endpoints that the authors judged not to be equally important. My pal Jerry Hoffman of Primary Care Medical Abstracts fame (and thanks to PCMA for calling my attention to this study) uses as his standard satirical example "stroke, heart attack, and hangnail." The point here is that the company's drug may not do any better at reducing stroke or heart attack, but it may be statistically superior at reducing hangnails. Instead of saying, "Our drug reduced hangnails," the authors get to announce, "Our drug substantially reduced the composite endpoint that included stroke and heart attack (and oh yes, by the way, hangnails were somewhere in there too)." They hope you won't notice that the hangnails accounted for all of the apparent benefit and the stroke and heart attack for none of it. (There's a toy simulation of exactly this trick right after the list.)
- In 10% of the trials, the authors flat out admitted that they'd made up the composite endpoint only after all the data were in. That's the equivalent of shooting your arrow at a wall and then drawing the bull's-eye around the arrow, wherever it happened to hit. (The second sketch after the list shows how well that works.)
- In 1/3 of the trials, the definition of the composite endpoint shifted from the abstract to the methods to the results section of the paper. This is a strong hint that the painting-the-bull's-eye trick was going on, even though these authors were not honest enough to say so.
- In as many as 82% of the articles, the reader was not reminded at the end that what had gotten better was the composite endpoint but not necessarily the individual components--in other words, major spin.
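Here's that promised toy simulation of the hangnail trick. Everything in it is made up: the event rates, the sample size, and the plain two-proportion z-test (my choice for the sketch, not necessarily what any of these trials used). The point is just to show how a composite can come out "highly significant" when the hard endpoints don't budge:

```python
# Toy simulation: the drug has zero effect on stroke and heart attack
# but shaves down a common, trivial endpoint (hangnail). Watch what
# the composite does. All rates and the sample size are invented.

import random
from math import sqrt, erf

random.seed(42)
N = 2000  # subjects per arm

def simulate_arm(p_stroke, p_heart_attack, p_hangnail):
    """One arm: a (stroke, heart_attack, hangnail) bool-tuple per subject."""
    return [(random.random() < p_stroke,
             random.random() < p_heart_attack,
             random.random() < p_hangnail) for _ in range(N)]

# Hard endpoints identical in both arms; only hangnails differ.
control = simulate_arm(0.02, 0.02, 0.30)
drug    = simulate_arm(0.02, 0.02, 0.20)

def two_prop_p(k1, k2):
    """Two-sided p-value from a pooled two-proportion z-test."""
    p_pool = (k1 + k2) / (2 * N)
    se = sqrt(p_pool * (1 - p_pool) * (2 / N))
    z = (k1 / N - k2 / N) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

def report(name, pick):
    k_c = sum(pick(s) for s in control)
    k_d = sum(pick(s) for s in drug)
    print(f"{name:>12}: control {k_c:4d} vs drug {k_d:4d},"
          f" p = {two_prop_p(k_c, k_d):.4f}")

report("stroke",       lambda s: s[0])
report("heart attack", lambda s: s[1])
report("hangnail",     lambda s: s[2])
report("composite",    lambda s: any(s))  # any component counts
```

Run it and you'll typically see stroke and heart attack with unimpressive p-values while the composite sails past 0.05, entirely on the strength of the hangnails.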
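And here's the painted-on bull's-eye, in the same spirit: three endpoints on which the drug does nothing at all, with the "composite" chosen after the fact from all seven possible combinations. Again, every number is invented, and the z-test is the same crude one as above:

```python
# Toy simulation of the painted-on bull's-eye: three endpoints the drug
# does NOT affect, and a "composite" picked after the data are in, from
# all seven possible combinations. Everything here is invented.

import random
from itertools import combinations
from math import sqrt, erf

random.seed(7)
N, TRIALS, RATE = 500, 1000, 0.10  # per-arm size, replications, event rate

def two_prop_p(k1, k2):
    """Two-sided p-value from a pooled two-proportion z-test."""
    p_pool = (k1 + k2) / (2 * N)
    if p_pool in (0.0, 1.0):
        return 1.0
    z = (k1 - k2) / (N * sqrt(p_pool * (1 - p_pool) * (2 / N)))
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

def null_arm():
    """One arm of a trial where nothing works: three independent endpoints."""
    return [[random.random() < RATE for _ in range(3)] for _ in range(N)]

lucky = 0
for _ in range(TRIALS):
    control, drug = null_arm(), null_arm()
    best_p = 1.0
    # Shop among every non-empty subset of the three endpoints.
    for size in (1, 2, 3):
        for idx in combinations(range(3), size):
            k_c = sum(any(s[i] for i in idx) for s in control)
            k_d = sum(any(s[i] for i in idx) for s in drug)
            best_p = min(best_p, two_prop_p(k_c, k_d))
    lucky += best_p < 0.05

# With an honest, pre-specified endpoint this would hover near 5%.
print(f"null trials with a 'significant' composite: {lucky / TRIALS:.1%}")
```

(The seven candidate composites overlap heavily, so the inflation is milder than seven independent tests would give you, but it is still free "significance" for nothing.)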
Final conclusion: beware composite endpoints. If the investigators are not actually trying to make a silk purse out of this pig's ear, they are probably at the very least putting lipstick on the pig.
Cordoba G, Schwartz L, Woloshin S, et al. Definition, reporting, and interpretation of composite outcomes in randomised trials: systematic review. BMJ 2010;341:c3920 (Aug. 18, 2010).
1 comment:
This is similar to what Martin Keller did with the infamous Paxil study for adolescent depression. Thank God Brown University decided to defrock him.