Benjamin Djulbegovic, who does evidence-based medicine and health outcomes research at the University of South Florida, and Ash Paul of Bedfordshire, UK, team up to make roughly the following points (though the entire article deserves to be read carefully):
- For good medical practice, we want to know about effectiveness (does something work in real-life practice?) and cost-effectiveness (is it worth the cost, compared with the alternatives?), but usually we have to settle for data on efficacy alone (could it possibly work, based on evidence gathered in a select population under more or less unreal conditions?).
- There's an inherent uncertainty in extrapolating from efficacy data to effectiveness. There's no technical fix for this uncertainty. We must make clinical and policy decisions while frankly aware of this.
- Reliance on efficacy data in the absence of real effectiveness data leads to two serious problems--indication creep and prevention creep. We've discussed both in this blog at length. Indication creep is using a drug that works well for some indications for other indications where it works less well and where it poses risks of adverse reactions. (That's what the Inverse Benefit Law is mostly about.) Prevention creep is using a test that is helpful in some circumstances in other circumstances where it's guaranteed to generate too many false positives to be useful.
- Indication and prevention creep between them drive up the costs of health care--Djulbegovic and Paul agree with what I think are the best available figures that about 30% of current health spending in the US goes to purchase "creepy" interventions that do not help patients. This poses a serious barrier to real cost containment.
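The false-positive problem behind prevention creep is just Bayes' rule at work: the same test that is informative in a high-prevalence group is nearly useless in a low-prevalence one. Here is a minimal sketch in Python; the sensitivity, specificity, and prevalence numbers are illustrative assumptions, not figures from the article.

```python
# Illustration of "prevention creep": the same screening test applied
# in two populations. All numbers below are hypothetical.

def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(disease | positive test), by Bayes' rule."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# A test that is 90% sensitive and 95% specific:
high_risk = positive_predictive_value(0.20, 0.90, 0.95)   # prevalence 20%
low_risk = positive_predictive_value(0.001, 0.90, 0.95)   # prevalence 0.1%

print(f"High-risk group PPV: {high_risk:.0%}")  # most positives are real
print(f"Low-risk group PPV:  {low_risk:.0%}")   # most positives are false
```

In the high-prevalence group, a positive result is right about four times out of five; in the low-prevalence group, roughly 98% of positives are false alarms, even though the test itself has not changed.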
I interpret the "at its core" as follows: What is mostly going on in indication creep is physicians' desire to treat individual patients based on the best available evidence. When such evidence is flawed--specifically, based on efficacy rather than effectiveness data--physicians resolve the resulting uncertainty in favor of "treat" vs. "don't treat." (Surprise! American physicians are much more terrified of failing to do something they could do than of doing something that in the end turns out to be useless or even harmful.) This occasionally harms individual patients and, even more often, wastes resources. But the culprit is at root a conceptual one: physicians relying on efficacy data for effectiveness applications.
This conclusion makes good sense if your day job is teaching and doing research in evidence-based medicine. But I have to quibble with the suggestion (if I am reading it right) that the most important synapse in this neural mechanism is within the brains of physicians. We need to ask first why there is so much efficacy data and so little effectiveness data, and the answer is to be found in the disproportionate share of research funding coming from the drug industry, as well as that industry's backroom efforts to derail the new movement toward comparative effectiveness research and especially any cost-effectiveness research that might influence Federal policy (http://brodyhooked.blogspot.com/2009/05/stealth-campaign-to-shanghai-ce.html). Next we need to ask where physicians get the "evidence" that leads to so much indication creep, and the answer is to be found both in docs' reliance on industry marketing as so-called "education," and also in the systematic way that the industry distorts the medical literature by data suppression and manipulation (too many prior blog posts to even begin to list).
So, you reply, am I claiming that docs' thinking processes have nothing to do with this at all and we are all being led by the nose by Pharma? No--and that's exactly the point. As I have said numerous times before, Pharma marketing hardly ever invents stuff out of whole cloth; they usually manage to grab hold of something physicians are already thinking, and then turn it to their financial advantage. So yes, physicians are prone to do too much instead of too little, and they are prone to extrapolate from efficacy data to clinical practice on noncomparable patients. But were it not for the Pharma marketing juggernaut of roughly $57B annually in the US, EBM gurus like Djulbegovic and Paul would have a much easier time educating us about these dangers and perhaps even reforming actual practice. As it is, lots of luck.
There is one other person left out of their equation--the patient. Did physicians become fearful of doing too little in a social vacuum? Did physicians decide to extrapolate efficacy data beyond its proper limits just to have some fun? Or have American patients repeatedly demanded this style of care? And have Pharma companies systematically taken advantage of that public attitude to supplement their marketing, both in direct-to-consumer ads and in their funding of "astroturf" patient advocacy? Again we need to remember Applbaum's critical concept of controlling the "marketing channels" (http://brodyhooked.blogspot.com/2010/06/how-does-drug-industry-exert-power.html). Successful drug companies never market their wares in one single way; they are masters at making a set of apparently disconnected bits of marketing all fall into place in one grand pattern to influence medical thought down to its core. If we ignore the scope and importance of that influence we'll never get to the excellent reforms that Djulbegovic and Paul call for.
Djulbegovic B, Paul A. From efficacy to effectiveness in the face of uncertainty: indication creep and prevention creep. JAMA 305:2005-6, May 18, 2011.
I have heard over and over again (from doctors) that patients are the ones demanding tests and meds but my own experience is completely different! My doctors are the ones pushing for tests and meds when I am completely asymptomatic. Obviously there is a disconnect somewhere....
Why is there so much efficacy data and so little effectiveness data? As you say, it is because pharmaceutical corporations now control the clinical research agenda. Nowhere is this more evident than in psychiatry. In the major push over recent years to creep the indication of nonresponding (and nonpsychotic!) depression for the atypical antipsychotic drugs, there are no data comparing these drugs head to head with tried and true off-patent lithium, even though it appears lithium is considerably more useful.
What’s scandalous is that the question of comparing risperidone or aripiprazole or olanzapine with lithium is simply off the table, and that the FDA allows this charade to continue. As for the enabling KOLs, well, you decide.
A good example of psychiatric comparative effectiveness is the STAR*D trial (funded by NIMH). Pharma donated drugs--citalopram, sertraline, bupropion, venlafaxine, buspirone, mirtazapine, triiodothyronine, nortriptyline, tranylcypromine, and lithium--without input into the design, data collection, analysis, or publication of the study.
Actually, the STAR*D trial is not a 'good' example. Dr. Nardo and others have done extensive analyses of the problems with the STAR*D trial. Start reading here
http://1boringoldman.com/index.php/2011/04/03/a-thirty-five-million-dollar-misunderstanding/ and search his blog on the term STAR*D.
Apologies for the long comment.
Extrapolating from the group to the individual is extremely difficult. It requires the treating physician to evaluate the conditional probabilities of benefit and harm based on conditions that differ from those tested.
Conditional probabilities are extremely difficult to calculate and our intuition about them can be wildly off base.
Here is an example (adapted from one in The Drunkard's Walk).
If John and Mary have two children what is the probability that both are girls? Answer 1/4.
Now we add a condition: The older child is a girl. What is the probability that both are girls? Answer 1/2.
Now we change the condition slightly: One of the children is a girl. What is the probability that both are girls? Answer 1/3.
Now we add a seemingly irrelevant condition: One of the children is a girl with curly hair. What is the probability that both are girls? Answer: It depends on the proportion of curly-haired girls in the population. If John and Mary are in Nairobi the answer will be about 1/3. If John and Mary are in Seoul the answer will be about 1/2. Note that in a medical situation that would be a large difference: 0.3 versus 0.5.
Even in this simple example the answers are completely counterintuitive. Who could imagine that the curly-hair condition could change the probability of both children being girls?
Now imagine trying to extrapolate from research data in which the subjects met a range of conditions, but the patient we are trying to treat has several other conditions, such as chronic medical illnesses, additional medications for those medical illnesses and different life circumstances. It is mathematically intractable.
Faced with no logical way of finding the correct answer, we pick a choice based on what we think and feel we are expected to do. And this is based on who we are listening to, what they are doing, what they are telling us to do, and so on. Which is why marketing, schmoozy drug reps, KOLs, and the rest are so influential.