As readers of HOOKED and this blog know, it is my view that the vast preponderance of the available evidence shows that physicians are influenced by drug company marketing--particularly contact with and gifts from drug reps--in ways that threaten the scientific integrity of their prescribing, and hence the well-being of the patient. A recent marketing study claims that this is simply not so.
The way a Duke University press release (sorry, I cannot find it online) put it: "When it comes to giving samples and writing prescriptions, doctors are swayed by science -- not by cozy relationships between themselves and pharmaceutical marketing reps or by advertising aimed at patients, new research shows."
The study itself is a bit more moderate, but still it makes rather sweeping claims and so requires careful analysis.
Sriram Venkataraman of Emory U. and Stefan Stremersch of Erasmus University-Rotterdam, the latter also being a visiting prof at Duke, developed a database that was basically designed to show that the impact of drug marketing varies by brand and according to the scientific data about the different drugs. Previous marketing research has tended to ignore these variables, assuming that you can sell pretty much anything to docs if you market the heck out of it.
If you read their references, there are relatively few to medical journals; almost all the prior research cited is in the marketing literature. So the question obviously arises of how well these business-school folks understand the medical nuances of what they are studying.
To do their analysis, they needed basically three data sets--physician prescribing patterns associated with the number of detail visits and attendance at marketing meetings per physician for each brand of drug; the side effect profiles of each brand of drug; and the scientific data about the effectiveness of each brand of drug.
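For concreteness, here is a minimal sketch, in Python, of what those three data sets might look like. The field names are my own invention; nothing like this schema appears in the paper itself:

```python
from dataclasses import dataclass

# Hypothetical record layouts for the three data sets described above.
# All field names are illustrative, not taken from the paper.

@dataclass
class DetailingRecord:          # from the (unnamed) drug company
    physician_id: str
    brand: str                  # anonymized; only the drug class is disclosed
    detail_visits: int          # rep visits to this physician for this brand
    meetings_attended: int      # marketing meetings attended
    prescriptions: int          # prescriptions written for this brand
    samples_given: int          # samples dispensed

@dataclass
class EfficacyRecord:           # from NICE assessments
    brand: str
    efficacy_score: float       # endpoint unspecified in the paper

@dataclass
class SideEffectRecord:         # from the FDA label
    brand: str
    side_effect_count: int      # a raw count -- no frequency, no severity
```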
The first, major data set the authors obtained from a drug company, which presumably obtained the data in turn from a commercial outfit that gathers such marketing data directly from physicians' offices. On the one hand this is a real strength, as most such data sets, being proprietary data that companies pay thousands of dollars to obtain, are not accessible to most academic researchers. But the downside is that these data came under a confidentiality agreement, so the authors cannot tell us which company, give us many details about how the data were obtained, or even say which drugs were studied--all they can say is that the drugs fell into three classes: statins, erectile dysfunction (ED) drugs, and gastrointestinal drugs.
The other two data sets came from the following sources. The efficacy data came from an excellent source--NICE in the UK, the official NHS group that assesses the effectiveness of treatments to inform decisions about what the British health system ought to spend its money on. Still, the specific endpoints used to assess efficacy are not given, so we do not know whether "effectiveness" of a statin means how well it lowers LDL cholesterol or how well it prevents heart attacks. (The latter endpoint is obviously much more meaningful.) The side effect data set comes from the FDA label for each drug and consists of a simple count of how many side effects are listed.
Briefly, the authors found that some drugs were prescribed more often, and more samples were given out for them, if they were heavily marketed, and others not. Marketing was more likely to lead to more prescribing for the more effective drugs. It was also more likely for drugs with more, rather than fewer, side effects. The authors theorized that this was because those side-effect profiles created excess physician uncertainty about using the drug, and detailing and meetings helped provide more data that then lowered that uncertainty.
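In statistical terms, this is a claim about interaction effects: the payoff of detailing is allowed to depend on a drug's efficacy and its side-effect count. The sketch below is my own rough reconstruction of that logic--not the authors' published model--using the hypothetical fields from the earlier sketch and invented numbers:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Tiny synthetic panel, one row per physician-brand pair. All numbers are
# made up purely to show the shape of the claim, not to mimic the study.
df = pd.DataFrame({
    "prescriptions":     [12, 30, 8, 25, 5, 18, 40, 9],
    "detail_visits":     [2, 6, 1, 5, 0, 3, 8, 2],
    "efficacy_score":    [0.4, 0.9, 0.4, 0.9, 0.2, 0.6, 0.9, 0.2],
    "side_effect_count": [20, 8, 5, 25, 12, 15, 6, 22],
})

# Interaction model: the effect of detailing on prescribing is allowed to
# depend on efficacy and on the side-effect count.
model = smf.ols(
    "prescriptions ~ detail_visits * efficacy_score"
    " + detail_visits * side_effect_count",
    data=df,
).fit()

# A positive detail_visits:efficacy_score coefficient would mean detailing
# pays off more for more effective drugs; likewise for the side-effect term.
print(model.params)
```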
Do the conclusions follow from the data? Well, the first thing to note is that the side effect data are absolute garbage. The very idea that you could know something useful about the safety vs. the efficacy of a drug simply by counting how many side effects are listed--while knowing nothing about either their frequency or their severity--would never have occurred to anyone with one brain cell's worth of medical knowledge. So anything about side effects in this study is not worth discussing any further.
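A quick hypothetical shows why. Imagine one drug with twenty rare, mild side effects and another with three common, serious ones; the raw count ranks the first as riskier, while any measure weighted by frequency and severity says the opposite (all numbers invented for illustration):

```python
# Two invented drugs, each side effect given as (name, frequency, severity).
drug_a = [(f"mild_effect_{i}", 0.02, 1) for i in range(20)]   # many rare, mild
drug_b = [("liver failure", 0.05, 10), ("stroke", 0.03, 10),
          ("severe bleeding", 0.04, 9)]                       # few common, serious

def naive_count(effects):
    """The study's measure: just count the label entries."""
    return len(effects)

def weighted_risk(effects):
    """The minimum a meaningful measure needs: frequency times severity."""
    return sum(freq * sev for _, freq, sev in effects)

print(naive_count(drug_a), naive_count(drug_b))      # 20 vs 3: A looks worse
print(weighted_risk(drug_a), weighted_risk(drug_b))  # 0.4 vs ~1.16: B is worse
```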
What about the efficacy conclusions? On the one hand, NICE is a highly regarded source for such data, but the need to keep us blinded to the actual drugs being talked about, and to the endpoints used to determine efficacy, seriously undermines any value in this part of the study. Can we make any guesses? In the three drug classes studied here--statins, GI drugs (which I have to assume were probably proton pump inhibitors), and erectile dysfunction drugs--the current scientific literature would probably support the conclusion that these are, for all intents and purposes, me-too drugs: the differences in real efficacy, brand to brand, will be very minor. So we can in turn guess that it is very unlikely that NICE found any serious differences in efficacy among these brands, and therefore whatever differences in the effects of marketing turned up in this study are probably largely meaningless.
You are probably wondering why I have said nothing about whether these authors received drug company funding for the study. The article does not say one way or the other, so we are left guessing. Here is my guess. The authors note that a large pharmaceutical company, with whom they then agreed to a confidentiality clause, kindly gave them the marketing and prescribing data. This is stuff, as I said, that one would normally pay thousands of dollars for, or even tens of thousands. These are also data that most drug companies keep very close to the vest. What sorts of investigators are such good pals with a drug company that these data would be made freely available to them? Draw your own conclusions.
Now, we have seen that there are serious reasons to doubt the validity of the major conclusions drawn in this study (quite contrary to the Duke press release). It is therefore interesting to see what recommendations these authors then offer for drug marketing:
As the prime need of physicians is information for which the manufacturer can be a useful source, public policy could actively restrict detailing to its purely informative role. Restricting the number of visits and further curtailing gift-giving are options one should consider. For managers, it supports the call for more evidence-based marketing.
In sum, even after massaging their methods like crazy to come up with industry-friendly results, the authors would still endorse some of the same lessons on drug company marketing preached by the industry's severest critics.
Venkataraman S, Stremersch S. The debate on influencing doctors' decisions: are drug characteristics the missing link? Management Science 53:1688-1701, November 2007. (Available online only by paid subscription.)
1 comment:
Have to agree with the premise that doctors overall really do respond to real science over marketing. Fewer embellishments and less bias with science, historically, which gives the doctor greater reassurance about the content.