Thursday, September 27, 2012

NEJM: Overly Defensive, or Just Naive?

The Sept. 20 issue of the New England Journal of Medicine featured a study by Dr. Aaron S. Kesselheim and colleagues, of Harvard (where else?), part of Dr. Jerry Avorn's pharmacoepidemiology & pharmacoeconomics operation. The study addressed how practitioners judged research articles based both on methodological quality and industry sponsorship.

They showed 503 internists fake abstracts that had been manipulated according to the study variables. The good news was that these internists knew quality when they saw it; they were less willing to prescribe a new drug as the rigor of the study design diminished. The part of the study that forms the remainder of our discussion was that, independent of study quality, the internists were about half as likely to prescribe a new drug if the study was labeled as industry-funded rather than NIH-funded.

This latter finding unloosed an editorial from NEJM Editor-in-Chief Dr. Jeffrey M. Drazen, somewhat ominously titled "Believe the Data." Dr. Drazen took serious issue with the idea that one should judge an article based on who funded the study: "A trial's validity should ride on the study design, the quality of data-accrual and analytic processes, and the fairness of results reporting. Ideally, these factors — not the funding source — should be the criteria for deciding the clinical utility." Just in case we were not sufficiently impressed, he then pulled out the moral trump card: "Patients who put themselves at risk to provide these data earn our respect for their participation; we owe them the courtesy of believing the data produced from their efforts and acting on the findings so as to benefit other patients." That is: If you decide to question the results of a study because it's industry-funded, you're being disrespectful of the patients who agreed to participate in the research.

All of which shows that Dr. Drazen is not a regular reader of this blog. Let us take just a sampling of past posts, starting with the most recent one:

(In the last-mentioned post, see the sub-entry, "Data Dredging: Which Studies Do It the Most?")

I could have listed a lot more, but the ones that I've shown basically tell the following story:
  • Research studies paid for by industry commonly distort findings so as to favor their products.
  • As a rule, the journal reader cannot tell how the results have been distorted. (As the latest entry showed, to find out what was misleading about a study that occupies 7 pages in a journal might require wading through 8500 pages of data.)
  • These distortions occur in all medical journals and if anything are even more prevalent in the top-tier journals. (Probably not because these journals are badly edited, but because it's so much more of a coup if the company can land their research findings in those top journals.)
Bottom line--if the reader is automatically more skeptical of a study because it's industry-sponsored, there is good reason for that skepticism. Naturally Dr. Drazen does not want to admit this, as editor of what many regard as the premier medical journal in the world (or at least the US). Sadly, it requires an ostrich-like posture to disagree with those conclusions.

Now--let's address Dr. Drazen's moral trump card. The issue he raises about respect for research subjects is indeed an important issue. It just has nothing to do with whether the reader should be suspicious of a company-sponsored study. The real question is: when are we going to demand that truly "informed consent" for research subjects in industry-sponsored trials include disclosure of when the trial is not designed for scientific purposes and is instead designed for drug marketing purposes?

Wednesday, September 26, 2012

More on Tamiflu: The Challenges of Getting Good Evidence

I posted some time back on oseltamivir (Tamiflu) and the Cochrane systematic review that concluded that there's no good evidence that this drug prevents serious complications of influenza (despite international public health bodies having spent billions to stockpile it).

Earlier this year, the BMJ published a further commentary from the Cochrane team, further explaining their difficulties in getting credible evidence for their review.

Doshi and colleagues noted that after a good deal of fussing around, they were able to get their hands on portions of the clinical study reports from a number of trials of oseltamivir. They admitted that when this all started, they had no idea of what a clinical study report was. A clinical study report is the province of regulatory agencies; it's what the FDA looks at, for example, when deciding whether to approve a drug for marketing. Academic investigators, by contrast, including the Cochrane folks, generally deal with published study reports in medical journals. The difference in scale can be seen in their observation: "For example, the published version of one cardiac safety trial of 400 patients is seven pages long... compared with 8545 pages for the full clinical study report."

Despite still being unable to get their hands on all the materials from all of the studies, the Cochrane team found themselves the proud possessors of 22,000 pages of documents giving highly detailed information about each study. As a result, "Our new Cochrane review update of oseltamivir engaged the equivalent of two whole time researchers (a junior and a senior) for 14 months."

In short, this account presents a sobering picture of how much we still don't know about the evidence for and against a drug even after reading what purports to be a thorough, well-done systematic review based on published studies. It probably goes without saying that very few academic teams have the resources to let two people spend all their time for 14 months reviewing a single study question.

Since I learned of this paper via the services of Primary Care Medical Abstracts, I also had the advantage of Dr. Jerry Hoffman's trenchant commentary. Jerry asked how likely it was that of the research data that remains unpublished and thus far unanalyzed, any of it supports the wider use of oseltamivir for influenza. If the world were just, there'd be roughly a 50-50 chance that any unpublished data leaned for or against the drug. Based on what we know from past experience, if a study was done and there was any way at all that the results could be spun in a manner that produced positive marketing for Tamiflu, it would probably have taken the drug company (Roche) about 7 nanoseconds to arrange for the data to be published and widely disseminated. So we have good grounds to believe that the unrevealed data is not very friendly to Tamiflu.

Nonetheless, another point that Doshi and colleagues noted in their paper is that the various government agencies cannot see eye to eye on the current data:

"In December 2009, we expressed serious doubts about the credibility of the evidence for oseltamivir because of the inaccessibility of these unpublished trials. Nevertheless, influential organisations such as the US Centers for Disease Control and Prevention (CDC) and European Centre for Disease Prevention and Control continued to cite the Kaiser et al meta-analysis....[a company sponsored study supportive of the drug] Neither agency seems to have done an independent analysis of all available evidence, even after Roche’s public offer to provide full clinical study reports. Their stance is more worrying given that another US agency unambiguously holds the opposite opinion. The FDA, which has reviewed the oseltamivir trial programme in perhaps more detail than anyone outside of Roche, states that “Tamiflu has not been shown to prevent such complications [serious bacterial infections].”... The FDA even sent Roche a warning letter in 2000 instructing it to “immediately cease dissemination of promotional materials” containing “false or misleading” claims, including statements about a reduced risk of influenza complications.... The FDA has, however, not challenged the CDC’s claims."

The Deja Vu Fan Club out there might be commenting at this point that the oseltamivir story seems to be a retread of the reboxetine story, in which only one-fourth of the study data on a new antidepressant was published, with that one-fourth favoring the drug, and the three-fourths that remained unpublished showing that the drug was both ineffective and less safe than comparable drugs.

Tuesday, September 25, 2012

Another Example of Industry Spreading Non-Facts: Health Information Technology

While I try not to tread on the territory of our friends over at the Health Care Renewal blog, and keep this blog devoted to pharmaceutical and device industry issues, I've had occasion in the past to mention the good work of Dr. Scot Silverstein. Dr. Silverstein is a highly informed critic of the bad sorts of electronic health records and other uses of health information technology (HIT), which unfortunately today may be the majority of uses. This has put him in the unhappy position of being blasted by the many hyperenthusiasts of HIT within medicine, as well as the industrial providers of HIT. As Dr. Silverstein has documented liberally in his posts, if you dare to suggest that HIT is anything less than perfection itself, the response of the enthusiasts makes the extremist-fundamentalist Muslim reaction to the recent idiotic anti-Mohammed video look like a friendly conversation over coffee.

In the recent post in question, Dr. Silverstein mentions a recent column in the Wall Street Journal (no need for a separate link, as he repeats virtually the entire opinion piece) by Stephen Soumerai and Ross Koppel, two similarly distinguished experts. In turn, Soumerai and Koppel refer to a recent study out of McMaster University.

Cutting to the chase in all this, the bottom line is that when the politicians jumped on the bandwagon and called for massive Federal outlays to support and encourage quick adoption of HIT, they were motivated a little bit by promises that electronic records would make medical errors disappear, but no doubt even more by promises of cost savings--even to the point that they fondly imagined that they could spend billions buying HIT systems that would in the end pay for themselves in reduced costs. Just to add a personal note, if I ever believed that electronic records would save money, I stopped believing it after attending a meeting of department heads at my own medical center. One head estimated that the faculty now spent 30% more time just completing the required electronic record tasks, and no one else disputed that estimate. I concluded that any technology that reduced physician productivity by 30% was unlikely to be a big money saver overall. (And our medical center owns one of the supposedly better HIT products.)

So the new study by O'Reilly and colleagues at McMaster looks specifically at drug ordering systems, and reviewed 31 research studies that in one way or another addressed the economics of the systems. The conclusion was that while a few studies suggested possible cost savings, the general quality of the research was poor, and one would have to conclude from the overall pattern of what is now known that there's no good evidence that electronic records save money with regard to drug ordering. Soumerai and Koppel, and Silverstein in turn, generalize this to HIT across the board and claim a lack of evidence that any of these promised cost savings are coming to fruition. Given hyped-up predictions of up to $100B in annual savings from wide adoption of HIT, this is indeed (as they used to say on the old TV sitcom) a revolting development.

What we see in the HIT industry seems to emulate a pattern we've seen many times in Pharma. A new drug is rushed onto the market because of some presumed advantage that puts it way ahead of existing drugs. (Think glitazones for diabetes, or COX-2 for arthritis.) As soon as careful studies can be carried out, the supposed advantages of the new kid on the block turn out to be illusory, and the downside starts to become glaringly apparent. But the industry is raking in too much cash to let that negative message get out, and so does its best to stonewall any disclosure of the new information. In the process it calls upon all of its lackeys inside medicine, who use their big university credentials to bolster the industry cause. In the case of Pharma, it seems usually to be the case that these "key opinion leaders" have simply been bought. HIT is different in that the issue seems to be genuine enthusiasm (not to say zealotry) among the early adopters, who then would look so silly if they admitted the validity of the new data that they dig in their heels and attack the messengers.

Dr. Silverstein repeats as often as anyone will listen that HIT holds great potential promise and that he's not against HIT. He's against poorly tested HIT that's rushed into use when it should still be considered experimental. Again, sadly, that seems to be most electronic record products now in use in the US, with more coming on line every day due to the Federal stimulus support and the Affordable Care Act.

Wednesday, September 12, 2012

Inverse Benefit in the Trenches: Primary Care Providers Treat Chronic Illness

My esteemed anthropologist colleague, Dr. Linda M. Hunt, and her graduate student Meta Kreiner at Michigan State University set out to do a study of how concepts of race influence physicians' treatment of patients with common chronic illnesses like hypertension and diabetes. Along the way they became so impressed with issues they were seeing about how drugs are prescribed that they chose to report a separate study on that latter topic. Recognizing the relevance of my own work on the Inverse Benefit Law, they very kindly invited me to participate in the final data analysis and discussion, with the results just now appearing in the Annals of Family Medicine.

Hunt and Kreiner interviewed 58 clinicians and 70 patients at 44 primary care clinics in Michigan, oversampling clinics treating low-income and minority patients. They also observed 107 office visits. What did they find?

First, they noted that their sample mirrored CDC data reporting that 40% of people over age 60 now take 5 medications or more; the average number of prescriptions per patient in their sample was 4.8. While one might imagine that providers in low-income, minority-serving clinics might be a model of enlightened social awareness, they found that 72% of the clinicians they talked to had regular contact with drug reps and 62% saw more than 10 reps each week. While protesting that they were always alert for commercial bias, 77% reported finding the information they received from the reps useful.

These clinicians were basically sold on the standard practice guidelines that set target numbers for blood pressure and blood sugar, and were unfazed by the likelihood that patients would need to be on multiple medications to reach the targets--as one family physician said: "I tell most new diabetics that the sad news is that they’re going to be on 5 meds…. That’s just what’s going to happen because their cholesterol parameters are lower [and] their blood pressure parameters are lower…. It’s usually a pretty frank talk: 'You have a deadly disease and it’s going to kill you. How long you have it is up to you.' (Laughs)"

Many of these clinicians are being reimbursed in ways that include pay-for-performance bonuses for meeting targets, and this influences their practices: "I was being a little bit lackadaisical with the A1c goal as 7.0[%] or less. I wouldn’t really like to admit it, but the insurance companies making a financial carrot is probably one impetus for really cracking down on my diabetics to get them 7.0[%] or less. 7.1[%] don’t cut it…anymore. It has to be 7.0[%] or less."

Time out--a while ago I blogged about a study showing that there's reasonable evidence that patients with diabetes do well when their glycohemoglobin (A1c) level is around 7.5%, and that trying to get it super-low down to 7.0% actually does harm to patients. I also have blogged repeatedly about studies that show that diabetics are not any healthier, in the long haul, with tighter control of their blood sugar or A1c levels. Despite study after study on this topic, the drug companies continue to push drugs that lower blood sugar but fail to improve long-term outcomes; groups dependent on drug company money like the American Diabetes Association continue to promulgate guidelines that stress strict control of blood sugar; and as this study shows, physicians in practice toe the line--especially when paid to do so.

Back to the Hunt-Kreiner findings. What happens when you throw a lot of medicines at patients with diabetes and hypertension, trying to make their numbers look good in the chart? At least three bad things. First, their drugs cost a lot and some of the patients go crazy trying to pay for them. Second, the patients get a lot more adverse reactions. Third, the prescribing cascade kicks in--physicians either don't recognize the adverse reactions as due to other drugs, or else feel they have no choice but to prescribe those drugs because of the guidelines; and so even more medications are then prescribed to treat the side effects of the first medications, which further raises cost and risk of side effects, and so on: "A number (24%, 14 of 58) discussed the challenge of managing multiple medications, pointing out adverse effects of common medications that may worsen other conditions, requiring even more drugs, for example, β-blockers aggravating asthma symptoms, or antipsychotics elevating blood sugar. When discussing these complicated issues, only 1 clinician mentioned prescribing fewer drugs; all the rest focused on reaching goal numbers by either adding or changing medications."

The impact this had on patients is dramatic, as one fairly typical case report showed: "Her diabetes medications cause diarrhea and bouts of hypoglycemia, which interferes with her ability to leave her home because she must eat and go to the bathroom so frequently. She also had 5 visits to the emergency department in 1 month for excruciating headaches, before they were determined to be an adverse effect of the additional hypertension medication she had been prescribed after her diabetes diagnosis. ... At her most recent appointment, her physician happily told her: 'Your blood pressure is 130/78 [mm Hg], your A1c is 7.0[%], and your cholesterol was normal. Very good!'"

As Hunt and Kreiner comment, "On the basis of current standards, the clinician classified this patient as healthy, a success story; however, this classification does not address the broader question of her well-being. Getting test numbers into the stipulated range jeopardized her employment and led to repeated hospitalizations and serious financial burden." And, I would add, with precious little evidence that at least some of these medications were improving the patient's long-term health.

Sadly, these hard-working, dedicated, and undoubtedly smart clinicians seemed both puzzled and resigned in the face of these outcomes: "I’ve got patients on 4 different medications and their blood pressure is still uncontrolled. We try sending them to the cardiologists, and they say, 'Just keep adding stuff because there’s really nothing we can do about this.'…Some people whose blood pressure that we do get normal again, they don’t function very well at all. I’m not sure why."

After all the usual warnings about not generalizing qualitative studies beyond the small sample included in the research, I worry that this peek into the trenches of clinicians actually caring for patients with common problems strongly validates the worries I have expressed in this blog in more theoretical form. The unholy alliance of drug company marketing, pay-for-performance, and unrealistic and commercially biased practice guidelines is ganging up on these patients to make them sicker rather than healthier--and the clinicians seem helpless to do anything about it.

From the Belly of the Beast: The Future of Drug Reps

"Dear Howard," begins the e-mail, "here at Cutting Edge Information, we have done extensive primary research about some of the hurdles and roadblocks you may be facing as a sales representative or sales manager at University of Texas Medical Branch." These nice people want me to download a free summary of their report, "Reinventing Pharmaceutical Sales Forces."

I have to search around a fair amount to find that if I wanted to actually buy the complete report, it would cost me $7695 for a single-user license or $23,995 for an unlimited license. So it's not too surprising that the free summary gives very little away. What's there, however, may be a bit of a further clue as to what the drug industry is doing in the face of recent changes and pressures on its marketing model.

The summary starts out by citing the Wall Street Journal to the effect that after reaching a high of 100,000 just after 2000, the number of drug reps in the US can be expected to drop to 70,000 by 2015. Different companies, it seems, are all over the map--a couple have drastically cut their forces by as much as 50-60%, with the average cut being around 15%.

How can they make these big cuts and still peddle drugs? The big discovery seems to be that there were way too many drug reps during the peak season, and they were tripping over each other streaming in and out of the docs' offices and in the process, getting the docs ticked off. I gather that the fancy name for that was "mirroring" and the industry has decided that less mirroring is better. "The arms race of the previous decade is dead," pronounces the summary, referring to the time period when it seemed an article of faith in the industry that simply hiring more reps than your competitor was a guarantee of success.

If you don't "mirror," then what do you do? Two things, it seems. First, while companies differ a lot on how much and how they are using electronic detailing, most seem to be moving as fast as they can in that direction, to supplement rather than replace face time between rep and doc. Second is the good old bread-and-butter of detailing--real, personalized and persistent attention to the docs (and especially the office staff). The summary gushes: "At still another interviewed company, drug reps who maintain one-to-one relationships with targets are greeted like 'rock stars' within the physicians' offices. On a recent ride-along with a rep, everyone within the doctor's office knew the rep by name and was excited to see him. There was no hint of 'the doctor's busy, so come back later.'"

With all this back to the future, you might wonder--what is changing in drug reps' compensation packages? And what impact have the most recent (2009) "ethics" rules from PhRMA had on the reps' operations? The full report answers those questions, if anyone cares to shell out the 7-grand-plus to read it. I'd like especially to know about the latter question, as the peer-reviewed medical literature, so far as I know, has been silent on that topic.

In past posts I have opined that Pharma really has discovered no alternative to the traditional rep model, which consists of making the doc believe that you're a good buddy and not a sales person, and If You Feed Them They Will Come. From what I can see of the free portion of this "Cutting Edge" report, that does not seem to have changed, arms race or no arms race. So if medicine ever wishes to reassert its professionalism and get out from under the cloud of having our clinical decisions distorted by commercial marketing, the answer is still, just say NO to drug reps. The report seems to hint that we have a really hard time doing that.

Monday, September 10, 2012

China and Unsafe Drugs--Any Progress Made?

I have in the past blogged about drug safety issues with U.S. companies buying critical drug ingredients from unregulated and uninspected Chinese factories. Our friends over at the Health Care Renewal blog have kindly alerted me to a recent article from Reuters updating us on this issue.

Time out for a bit of history. In HOOKED I recounted a story well known to all historians of American pharmaceuticals--how Massengill and Company, in Tennessee, killed more than 100 people in 1937 with a new formulation of the miracle antibiotic, sulfanilamide. Massengill decided people might like the drug in the form of a syrup, and set their chemist to find a nice vehicle that would dissolve the drug and create a smooth texture on the palate. The chemist came up with diethylene glycol--the stuff they make antifreeze with. Which happens not to be good for you or your kidneys. The result of this disaster was new Federal legislation giving the FDA for the first time the authority to demand proof that a drug was safe before it could be marketed. (It took thalidomide in the 1960s to force the next step, that drugs had to be proven to be both safe and effective.)

That, as I said, was in 1937. So what are we to think of the fact (according to the Reuters article) that in 2006, about 100 people died in Panama from a cough syrup made with a Chinese-manufactured sweetener that contained diethylene glycol? Besides the fact that dead people in Panama are not worthy of being covered in the US news media?

There are a number of important points made in the article, but perhaps the bottom-line message is that US drug firms are getting a bit cagier about buying impure chemicals from shady Chinese factories. Some Chinese firms are certified by Good Manufacturing Practice, an internationally recognized standard, and US companies can if they wish do business selectively with those firms (as several drug companies told Reuters). So the unsafe drugs are now being shunted selectively to poorer countries, with Africa bearing the brunt of the traffic.

One thing that has not changed is our dependence on China to manufacture the chemicals that go into drugs (active pharmaceutical ingredients, or APIs). Reuters quotes Guy Villax, a Portuguese drug executive: "If China for some reason decided to stop exporting APIs, within three months all our pharmacies would be empty."

Ethics (or Lack Thereof) and Post-Marketing Studies

As I noted in HOOKED, and as has been documented here, all too often the FDA approves a new drug contingent upon the company doing a post-marketing study to address outstanding safety or other concerns, and the study never gets done. So you might imagine that the actual conduct of such a study would be an ethically good thing. Unfortunately this seems not always to be the case, as discovered by a committee of the Institute of Medicine that was called together by the FDA to look at a study of rosiglitazone (Avandia). Three members of the IOM committee reported on their ethical findings in last week's New England Journal (subscription required).

I have blogged a good deal about the rosiglitazone saga. The study that the IOM committee addressed was called TIDE, for Thiazolidinedione Intervention with Vitamin D Evaluation, designed to compare long-term cardiovascular outcomes in diabetic patients on two different, related drugs--rosiglitazone and pioglitazone (Actos). TIDE was done in the face of existing information suggesting that rosiglitazone increased the risk of heart problems in a way that pioglitazone did not. By the time TIDE was inaugurated, clinical practice was already starting to shift away from the use of rosiglitazone in diabetes and toward pioglitazone, for the small number of patients who actually had need for a drug in this class anyway. (Drugs in this class lower blood sugar but have never been shown to have long-term, beneficial effects on major complications of diabetes, which is the whole reason to treat Type II diabetes.)

The IOM committee looked especially at two questions regarding TIDE: first, the adequacy of the consent process, and second, the adequacy of the review that was conducted by the 480 IRBs (research review committees) that all approved the trial at their respective institutions. First, the consent stank (my term, not the authors'). Patients should have been told that they were at risk of getting a drug that had already been shown to be potentially dangerous, and that there might be no need for the study since clinical practice was already changing even without the results--in short, they should have been given a consent form that would have discouraged all reasonable people from being in the study at all. The consent process soft-pedaled all these issues.

One of the reasons that the consent process stank was related to the study design. The TIDE investigators (reporting on their trial which apparently was stopped for regulatory reasons, presumably when the IOM committee was called in to referee) noted two areas of "uncertainty" regarding the two diabetes drugs--heart effects and a relationship between Vitamin D and cancer. That latter issue presumably justified sticking in a Vitamin D component which had nothing to do with heart effects, but trying to explain about the Vitamin D in the consent process could easily have confused research subjects and diverted attention away from the heart risks. If one were inclined to be paranoid about a study designed by industry in hopes of exonerating a suspected unsafe drug, one might think that was not entirely coincidental.

On the issue of the IRB oversight, the IOM committee was perhaps more circumspect than they might have been. Rather than decide that these 480 IRBs were all of them incompetent, they preferred to imagine that they were simply not fully informed of the rationale for the study. Had they been so informed, the committee imagined that they would have realized that even if there had been decent consent (and of course there wasn't), the study would have been questionable because subjects were being put at serious risk to establish a fact that was already known and that was unlikely to change clinical practice. The committee went on to say some wise things about the circumstances when FDA post-marketing studies are more or less justifiable.

The IOM committee made clear that when the FDA orders a drug company to do a post-marketing study, and a study is then conducted that is ethically unsound, the blame cannot be sloughed off on the drug company; the FDA has to share in the ethical responsibility. The lesson for the FDA seems to be--if you find an unsafe drug on the market, deal with it; don't kick the can down the road by pretending that you cannot do anything without more data, if getting the further data would entail doing unethical research.

Full disclosure: I am a member of the IOM but had nothing to do with this committee.

Mello MM, Goodman SN, Faden RR. Ethical considerations in studying drug safety--the Institute of Medicine report. New England Journal of Medicine 367:959-964, September 6, 2012.

Punthakee Z, Bosch J, Dagenais G, et al. Design, history, and results of the Thiazolidinedione Intervention with Vitamin D Evaluation (TIDE) randomised controlled trial. Diabetologia 55:36-45, 2012.

Sunday, September 9, 2012

A Telling Anecdote about Regulatory Capture and Medical Device Safety

Thanks to an exchange of e-mails on a list that includes journalists Jeanne Lenzer and Shannon Brownlee (whose great work I've previously blogged about), I was directed to an article (subscription required) that I failed to make note of when it came out nearly two years ago. It provides useful background to an issue that has become even more heated this last year: the medical device safety oversight problem.

Lenzer and Brownlee looked in depth at the vagus nerve stimulator manufactured by Cyberonics, a device in which a pacemaker-type pack is surgically inserted near the collarbone, and electrodes are wrapped around the vagus nerve in the neck. The device was intended at first for a select population of patients with a particular type of epilepsy that's resistant to all drug treatment. Like many devices and drugs (and in keeping with the Inverse Benefit Law), once having gotten the camel's nose into the tent, Cyberonics is now claiming that the stimulator can be used for a large number of other conditions, notably depression, and perhaps obesity and traumatic brain injury (stay tuned for hair loss and bad breath). All such uses rely on the purported safety of the device, which is what Lenzer and Brownlee zeroed in on.

In 1997, Cyberonics went to the FDA to get initial approval of the stimulator for epilepsy. They presented three studies to document the device's effectiveness and safety. One review panelist noted that 17 of the 1000 subjects who'd had the device implanted had died, and asked what was up with that. The explanation offered by the company and other panelists was that people who have that sort of hard-to-treat epilepsy have a high death rate, because periodically those seizures cause cardiac and respiratory arrest. So the deaths were due to the disease and not the device.

The FDA bought this explanation but added a caveat--Cyberonics got approval conditional upon conducting a post-marketing surveillance study to address the concern about possible excess deaths. This, as Lenzer and Brownlee explain, is not an unusual measure--the studies needed to lead to drug or device approval are often, of necessity, too small to detect rare but serious adverse effects, and only after a drug or device is more widely used may such effects become apparent.

But here's where the story gets interesting. Lenzer and Brownlee then went after the data accumulated by the company during this mandated post-marketing study phase. They found several very worrisome things. First, there were at least isolated reports of deaths or near-misses that seemed quite clearly to be due to the device. (One patient who survived was observed to have his heart stop at exactly the intervals at which the stimulator fired, and the heart stoppages ceased when the device was turned off.) When asked how many of these events had occurred, or how often, Cyberonics said they didn't know, because mortality statistics were not one of the planned endpoints in any of its trials. And despite the fact that the FDA had ordered these studies because of concerns about mortality, no one at the FDA seemed the least bit worried that Cyberonics had set up its studies deliberately to exclude mortality data.

For many years, British sociologist John Abraham has written about regulatory capture--what happens when a government agency that is supposed to regulate an industry ends up becoming so closely tied to that industry that it becomes a tool rather than a watchdog. When the Institute of Medicine weighed in on how inadequate the current regulations are for monitoring device safety, one might have thought that the FDA would appreciate the help it was getting to call for more stringent regulations--but instead the FDA went out of its way to defend the current inadequate practices and to blast the IOM's conclusions. Since then, as per previous blog posts referred to above, the device industry has unleashed a lobbying armageddon on Congress, demanding less rather than more regulation of device safety lest a single good job in the US be sent overseas.

The combination of merciless lobbying plus FDA capture makes it highly unlikely that we'll see a day anytime soon when the US public can have much confidence that medical devices are adequately checked for safety. Why there's a huge media outcry if a single person dies from E. coli in their lettuce, while dozens or hundreds can die from malfunctioning medical devices without anyone losing any sleep, is something that needs to be better explained.

Lenzer J, Brownlee S. Why the FDA can't protect the public. BMJ 341:966-68, 6 November 2010.

Sunday, September 2, 2012

Healthy Skepticism in Peril

I'm passing along news from Dr. Peter Mansfield, the Australian physician who's the founder of the Healthy Skepticism group and website that I've had occasion frequently to quote from and commend. Sadly, Dr. Mansfield reports that his own time is increasingly spoken for and the finances of the website are tenuous, and unless there's an infusion of either cash or labor or both, Healthy Skepticism will have to close its virtual doors--which would be great for the pharmaceutical industry, I imagine, and terrible for anyone interested in an evidence-based approach to pharmaceuticals and their marketing.

I urge you to check out the Healthy Skepticism website and see what you can do to help this worthy cause. Thanks very much.