Wednesday, March 30, 2011

Cardiovascular Guideline Committees (Half) Rife with Conflicts of Interest

A pair of articles in the most recent Archives of Internal Medicine (subscription required) address the problem of conflict of interest (COI) among the panelists writing guidelines for the treatment of cardiovascular disease. These guidelines become more and more important as insurers pay special bonuses to docs based on how well they follow such guidelines (pay-for-performance).

Mendelson and colleagues surveyed the landscape--specifically, the most recent 17 guidelines (as of 2008) put together jointly by the American Heart Association and the American College of Cardiology--and report that 56% of the individual members of the panels had COIs. The most common relationships were being a paid consultant or a paid member of a scientific advisory board (in other words, not merely getting research funding, which might be viewed as a more tolerable COI). Chairs of the guideline panels were more likely than mere members to have COIs. Mendelson et al.'s conclusion was that the glass is half full--there do in fact seem to be a substantial number of potential guideline panelists who don't have COIs, so if one tried, it should not be too hard to assemble a non-conflicted panel (meaning, of course, that no one is trying that hard).

Dr. Steve Nissen of Cleveland Clinic is usually reliable on this topic and does not disappoint in the commentary that he contributes. He notes that the COIs reported here extend "far beyond scientific collaboration. More than half of [guideline] writers served as promotional speakers on behalf of industry, and a substantial number actually held stock in companies affected by the [guideline]." He adds: "Participants in speaker's bureaus essentially become temporary employees of industry, whose duty is the promotion of the company's products....To allow such individuals to write [guidelines] defies logic."
Nissen goes on to note that this might not be quite so bad if all the guideline panelists did (as some defenders of the status quo claim) was to carefully sort out high-level scientific evidence. But according to another study, at least half of all the recommendations made by such guideline panels are based not on high-quality evidence, but rather only on "expert opinion." When something as subjective as "expert opinion" shapes that much of what the guidelines contain, then it becomes that much more important to eliminate the potential biases introduced by commercial COI. Nissen concludes, "The revelations related in the current article highlight troubling concerns that must be urgently addressed. If we fail as a profession to police our [guideline] process, the credibility of evidence-based medicine will suffer irreparable harm."

Mendelson TB, Meltzer M, Campbell EG, et al. Conflicts of interest in cardiovascular clinical practice guidelines. Archives of Internal Medicine 171:577-84, March 28, 2011.

Nissen SE. Can we trust cardiovascular practice guidelines? Archives of Internal Medicine 171:584-85, March 28, 2011.

Monday, March 28, 2011

Against the Grain: Fewer Drugs, Better Health for the Elderly

In the previous post I gave you the bad news, about how drug marketing, coupled with our fond beliefs in the powers of prevention and early screening to confer immortality, ends up making us sicker. I'm now happy to be able to supply a bit of good news, even though the study is highly preliminary and needs confirmation with larger numbers and a randomized design.

Scene setting plus personal confession: During the 26 years I saw patients as a family doc, I attended numerous lectures given by geriatrics specialists, and each time heard the same plea--carefully review all the medications your elderly patients are now on, and do your best to stop as many of them as you can. The logic seemed solid. But when I tried to implement it, I almost always ran into a wall, at least in my own mind. I'd ask the patient to bring in all their meds in a shopping bag (some apparently needed a U-Haul trailer) and I'd carefully go through each one. Sadly, in almost all cases, either the patient or I concluded that hardly any could safely be dispensed with. So I ended up wondering how realistic this advice was.

I was therefore quite delighted, as well as humbled, to read this recent study by Garfinkel and Mangin from Israel (subscription required). They developed a protocol for reducing the med list of patients in nursing homes, and it worked so well there that they decided to try it on community-dwelling elderly. The present study describes what happened when they used their protocol on 70 such patients.

The protocol itself is very simple and very general. Unlike other popular approaches to eliminating ineffective and dangerous drugs in the elderly, it does not consist of a list of drugs to avoid. It is basically a form of zero-based budgeting. It starts from the assumption that if there's no good evidence that people of that age do better when placed on that drug, then the drug should be stopped.
The indirect message is that many standard guidelines are based on studies done in younger patients and then inappropriately extrapolated to the elderly--who are, first, more prone to side effects of medications, and second, often have limited life expectancies and so cannot benefit from risk factor reductions that take 5-10 years to produce payoffs, if any. As one example, they mention evidence that a reasonable target glycohemoglobin for elderly diabetics is 8.0, whereas most guidelines beat you up if you don't get your patients below 7.0. (If you try to get the elderly down to 7.0, too many of them fall down and break something from the effects of low blood sugar.)

So what happened? On average, these older folks were taking about 8 medications each, and the protocol reduced that to more like 4. They watched the patients carefully afterward, and only 2% of the medications had to be restarted because of either symptoms or abnormal lab values. And--best of all--88% reported feeling significantly better once they were off all those meds. A few patients with severe cognitive impairment while on the medications actually cleared their mental status significantly. And the Israelis did not even bother to consider what sort of cost savings they generated.

So the more general point, assuming that these promising preliminary results can later be backed up, is that if you fight against the mantra I described in the previous post, in this particularly vulnerable population at least, you can do a tremendous amount of good. The specific point is to ask: if we listened only to the siren song of drug industry marketing, who would even begin to imagine that such a thing were either possible or desirable?

Garfinkel D, Mangin D. Feasibility study of a systematic approach for discontinuation of multiple medications in older adults: addressing polypharmacy. Archives of Internal Medicine 170:1648-1654, Oct. 11, 2010.
NOTE ADDED 3/28: I have now tried to repost this entry three times, each time going back and inserting the paragraph breaks as I intended them, and each time the blogsite has posted the post eliminating the paragraph breaks. Sorry about this readability problem. I will ask the usually highly reliable Blogspot what gives.--HB

Sunday, March 27, 2011

Synthesis: Why the Drug Industry Tries to Improve the Public's Health--and Ends Up Doing the Opposite

A while back--March 16, to be precise--I promised a post that would pull together a number of previous commentaries and show how they were presenting us with a unified account of what's going wrong with today's efforts at "better living through pharmaceuticals." Sorry about that ol' day job, but I am finally back and will attempt to deliver on my promise.

Here are the pieces of the puzzle I want to link together: 1) John Ioannidis on how even honest reports of new research can overhype drugs; 2) Don Light's and my work on the Inverse Benefit Law; 3) James Mold and colleagues on diminishing returns in adhering to chronic disease guidelines; and 4) a new book that makes a great companion to Mold et al.'s work, H. Gilbert Welch and colleagues' Overdiagnosed (from the excellent gang at the Dartmouth Atlas of Health Care).

I offered some graphic representations of the basic concept in the post on the Inverse Benefit Law. In what follows I will offer a slightly different way of trying to show what we're talking about in visual form.

We begin with an unholy alliance between two important forces. Physicians and patients have both been thoroughly indoctrinated, over the past several decades, to the mantra of prevention via early diagnosis. We devoutly believe that if we detect a health problem late, then it's really hard to do anything useful for the patient, whereas if we catch it early, we can easily reverse the bad effects and assure the patient a long and healthy life. To that mantra we have added a further devout belief in risk factors. If we catch the factors that put you at risk for later getting the disease, then things are even better than when we catch the disease early. We can set about reducing or reversing those risk factors, assuring even better health and longevity. This devout set of beliefs in prevention and screening for early disease and risk factors is now added to the other major force--the drug industry's desire to sell its wares and make a profit. A drug that you take for a couple of weeks to treat an acute condition offers a certain amount of profit. A drug that you have to take for the rest of your life, to keep a supposed risk factor under control, yields a much greater profit.

This conjunction of forces leads to the illustration below, of what ideally ought to happen as a result of earlier and earlier diagnosis and risk factor identification--that is, lowering the threshold for saying that a person needs some sort of treatment:

This diagram depicts three variables: How many people are labeled as having the condition that needs treatment; the likelihood that any given patient will receive benefit from treatment; and the likelihood that a person (or the population as a whole) will be harmed by the adverse effects of the treatment.

Lowering the threshold for diagnostic labeling does exactly what the drug marketers want--it vastly increases the number of people who are labeled as needing treatment. (For the details on how this happens, see the post on the Inverse Benefit Law.) Because so many more people are candidates for the drug, that many more people are at risk for adverse reactions from the drug--so the bad news is that in public health terms, the harm burden of the treatment for this disease increases. The fond hope that makes this the ideal desired state is that the increased chance of harm is more than offset by the probability of benefit. The newly diagnosed/labeled patients are all located at the high-benefit end of the field. The idea is that through early identification and intervention, we are doing them a great service by offering treatment to reduce their risks of later disease. We are getting them in precisely the most treatable situation, so our chance of helping them is greatest. Or so says the comforting mantra.

However, is this idealized state of affairs what actually happens in the real world? That's where the insights of Welch et al. and Mold et al. come in. Mold and company address the specific situation where there are multiple preventive interventions all recommended for a given chronic disease, according to the expert guidelines. First, it is hardly ever the case that all are of equal value--though the guidelines hardly ever admit that. Some will have a huge bang for the buck and some very little. But even if they were of equal value, Mold adds, they are not additive. Do preventive intervention #1, and you've reduced the risk of having the bad outcomes from the disease by a certain percentage. Do intervention #2, and you are mathematically certain to reduce the remaining risk by a much lesser percentage. Add #3 and you get even less for your investment of money and risk of adverse reactions. And so on, for a law of diminishing returns.
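Mold's law of diminishing returns is easy to see with made-up numbers. A toy sketch (the 20% baseline risk and the 30% relative risk reduction per intervention are invented for illustration, not figures from the study), assuming each added intervention cuts the *remaining* risk by the same relative amount:

```python
# Illustrative only: suppose each preventive intervention independently
# cuts the remaining risk of a bad outcome by 30% (relative).
baseline_risk = 0.20   # hypothetical 10-year risk without any treatment
rrr = 0.30             # hypothetical relative risk reduction per intervention

risk = baseline_risk
for i in range(1, 5):
    absolute_gain = risk * rrr        # absolute risk reduction this step
    risk -= absolute_gain
    print(f"intervention #{i}: absolute risk reduction {absolute_gain:.1%}, "
          f"remaining risk {risk:.1%}")
```

Even though every intervention has the identical relative effect, the first buys a 6-point absolute reduction and the fourth only about 2 points--while each one carries its full share of cost and adverse-reaction risk.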

Welch's group makes this point more general, so that it applies even to single interventions (such as statins for lowering cholesterol, drugs for diabetes, or just about anything similar). For almost all such drugs, the greatest likelihood of benefit (which as we explain under the Inverse Benefit Law, corresponds to the lowest number needed to treat or NNT) resides at one end of the horizontal field depicted in the figures. The people with the highest risk of heart disease, or the highest cholesterol, or the highest glycohemoglobin in diabetes, or the highest blood pressure, or you name it, benefit hugely from a treatment to lower that risk variable. The lower any of those measures go, the less the likelihood that treatment will prevent a bad outcome (higher NNT). So the lower you drop the threshold for diagnosing the disease or risk factor (deciding, for example, that instead of diagnosing diabetes when blood glucose reaches 140, we'll make the diagnosis at 126), the lower the likelihood of benefit for any given patient. This means that the real, as opposed to the ideal, state of affairs is shown in the following diagram:
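The NNT arithmetic behind this is simple: NNT is the reciprocal of the absolute risk reduction, so for a drug with a fixed relative effect, the lower the baseline risk of the people you treat, the higher the NNT. A toy illustration (the 25% relative risk reduction and the baseline risks are invented numbers, not taken from Welch's book):

```python
# Hypothetical drug that cuts the risk of a bad outcome by 25% (relative).
RELATIVE_RISK_REDUCTION = 0.25

def nnt(baseline_risk):
    """Number needed to treat = 1 / absolute risk reduction."""
    absolute_risk_reduction = baseline_risk * RELATIVE_RISK_REDUCTION
    return 1 / absolute_risk_reduction

# Lowering the diagnostic threshold sweeps in ever-lower-risk patients:
for baseline in (0.40, 0.20, 0.10, 0.02):
    print(f"baseline risk {baseline:.0%}: NNT = {nnt(baseline):.0f}")
```

At a 40% baseline risk you treat 10 people to help one; at a 2% baseline risk you treat 200--all of whom are exposed to the drug's harms and costs.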

In short, the drug marketers are still in heaven, because you have labeled more and more people as needing what they sell (assuming that you can convince docs and regular folks that taking a pill is the answer--as they've so far done a brilliant job of). But in the process, you've shoved the whole mess of them into the lowest-benefit portion of the field of operations. The increased public-health load of adverse reactions (or excess costs for that matter) is now uncompensated by the hoped-for benefits. You've exposed tons of folks to the harms and costs while assuring that only a small fraction of them would receive any real benefit. If those people could be fully informed of the tradeoff you've just negotiated, many would refuse to take the drug. (Nowadays of course they go like sheep to the slaughter, both because we don't inform them adequately, and also because we all believe the prevention-screening-early identification mantra.)

As a minor point, the "Actual" figure is pretty much exactly what we depicted in our bell-shaped curve in the Inverse Benefit Law, but displayed in a different manner.

So now we turn to Ioannidis. His insight helps to show that when the marketers work to make sure that we never realize the "Actual" state of affairs and continue to assume that the "Ideal" holds true, they are aided greatly by certain features of the research process. Even when a new drug actually is no better than alternative drugs, or placebo, it's likely to look better early on in the research cycle. That's true even when the marketers don't put their thumb on the research scale. As we've seen over and over in this blog, they can seldom avoid the temptation to add the thumb, exaggerating the benefits of the drug or minimizing the adverse effects by selective reporting of the data or some other form of spin. When both docs and patients are misinformed about the benefits and harms, and the mantra of prevention continues to seduce us, then no one is likely to realize that the actual world is not the ideal world. And that's just what the marketing people want.

A minor footnote. One of the take-home lessons of this sermon is that for all of its heft, and for all the heavy financing muscle behind it, drug marketing can hardly ever convince us that up is down or black is white. Most people, for instance, think that having to take a bunch of pills every day is a real drag. So we never see consumer ads on TV telling us, "Guess what, folks! Taking a dozen or more pills every day is FUN!!!!" These marketing geniuses are way too smart for that. What they're super-good at is taking something we already believe, or want to believe, and tweaking it or amplifying it or running with it, to get us to use that as just one more reason why we really, really need to swallow their new, expensive pill. So when they come across something like the prevention-screening-early detection mantra, they're golden. All they have to do is convince us to keep on believing what we already believe, and we're eating out of their hands. Which is why getting the Mold or the Welch message out there to the public--without which no patient is truly able to make an informed choice about taking a pharmaceutical--is going to be such a daunting task.

Welch HG, Schwartz L, Woloshin S. Overdiagnosed: Making People Sick in the Pursuit of Health. Boston: Beacon Press, 2011.

Thursday, March 17, 2011

Craziest Drug Ads, From RN Central

Christine Seivers from the RN Central website kindly drew my attention to a posting there of "ten totally ridiculous pharmaceutical ads," both historical and contemporary. I agree with her that readers of this blog would find it interesting, especially hearing from our nursing colleagues on the matter.

Wednesday, March 16, 2011

How Honest Reports of Research Can Still Overhype New Drugs

In recent months, several important books and articles have appeared that jointly help us understand much better how we can be misled about the value of new pharmaceuticals from reports in the medical literature. In a later post I'll try to pull all the strands together to give a big picture. Here I want to get on record a very nice article by a major expert in research analysis, that contributes some of the key threads. (Hat tip to Rick Bukata and Jerry Hoffman at Primary Care Medical Abstracts for citing this paper.)

Our expert of the day is John Ioannidis from Greece, whose work on debunking the claims of the research literature has even made it into the popular press. The article in question appeared in the BMJ (subscription required).

We have focused a lot in previous posts on one way a drug company can mislead us--suppress negative research data and spin neutral data to make it seem positive. Ioannidis asks this question: how can we be misled even if the company is scrupulously honest in reporting the data?

There are two major ways, the authors report, and they can be illustrated by a specific case study, the research history of tumor necrosis factor blocker drugs for cancer and rheumatoid arthritis. (I'll here summarize the general points and you can read the paper if you want the details of the TNF story.)

First, drug companies typically try out a drug on numerous conditions, hoping to expand the sales potential. Typically, for each condition, the drug is tested against an array of outcomes, as many as 10-20 per study. (For example, a cancer drug might be reported in terms of outcomes such as survival at 3, 6, 9, 12, 15, and 18 months, as well as quality of life measures, time to first metastasis, etc.) Statistician Ioannidis reminds us (the reminder really shouldn't be needed) that we can do the math and calculate how many of these outcome measures will be positive simply by chance, assuming that the drug is actually no better than placebo--or in the more common case, is a little bit better than placebo, while maybe also having some significant adverse reactions and a high cost. If you looked at 20 outcomes per trial, and conducted trials for the drug in 6 different medical conditions, the odds are that for each condition, at least one outcome will be statistically significant in favor of the drug. If the company plays its cards right, it can get regulatory approval to market the drug for all 6 conditions, even though the results, so far, occur purely at random and indicate no real benefit.
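The multiplicity arithmetic is worth making concrete. A quick sketch, assuming for simplicity that the 20 outcomes are independent and each tested at the conventional p < 0.05 (real trial outcomes are correlated, which softens the numbers somewhat, but the direction of the problem stands):

```python
# Chance that at least one of n independent outcome measures comes up
# "statistically significant" (p < 0.05) when the drug truly does nothing.
def prob_false_positive(n_outcomes, alpha=0.05):
    return 1 - (1 - alpha) ** n_outcomes

p_one_condition = prob_false_positive(20)
print(f"20 outcomes, one condition: {p_one_condition:.0%} chance of a 'win'")
# about a 64% chance per condition

print(f"expected 'winning' conditions out of 6: {6 * p_one_condition:.1f}")
```

So with 20 outcomes per trial across 6 conditions, a completely worthless drug can be expected to "win" in roughly 4 of the 6 conditions on at least one outcome, purely by chance.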

The second mechanism is one we've previously looked at: early stopping of trials. In earlier posts I erred in focusing on the question of whether the company inappropriately pressured the data safety and monitoring committees to end the trials early in ways that benefited marketing. Ioannidis shrewdly reminds us that we don't have to assume any skullduggery to see how stopping trials early could exaggerate the drug's efficacy. Suppose we simply do what DSM committees are routinely told to do--for ethical reasons, so that research subjects are not put at unnecessary risk. If a treatment reaches a pre-specified level of statistical significance showing superiority, the trial is stopped, on the belief that you have the answer and that continuing the trial longer would not change things. But that's surely wrong, the authors say, because of the well-known phenomenon of regression to the mean. If at any given stage in the research the drug is beating the placebo by, let's say, 20%, and you quit then, you report that the drug is better than placebo by 20%. Yet if you'd continued the trial longer, the odds are excellent either that the drug would have turned out to be no better at all, or else that the true degree of superiority is 5% or 10%, not 20%. The authors cite a previous paper that analyzed 91 early-stopped trials and demonstrated these effects clearly.
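The early-stopping effect can be demonstrated with a small Monte Carlo simulation. This is my own toy sketch, not anything from the paper: the effect size, interim-look schedule, and stopping rule are all invented for illustration. Note that the drug here genuinely works (true effect of 0.10 standard deviations), yet the trials that happen to stop early report a much larger effect, because only unusually lucky interim results trip the stopping rule:

```python
import random

random.seed(1)

TRUE_EFFECT = 0.10             # hypothetical: drug truly beats placebo by 0.10 SD
SIGMA = 1.0                    # outcome standard deviation
LOOKS = [100, 200, 400, 800]   # interim analyses, patients per arm
Z_STOP = 1.96                  # stop as soon as "significance" is reached

def run_trial():
    """Simulate one trial; return (estimated effect, stopped_early)."""
    sum_drug = sum_placebo = 0.0
    n = 0
    for target in LOOKS:
        while n < target:
            sum_drug += random.gauss(TRUE_EFFECT, SIGMA)
            sum_placebo += random.gauss(0.0, SIGMA)
            n += 1
        diff = (sum_drug - sum_placebo) / n
        se = (2 * SIGMA ** 2 / n) ** 0.5
        if diff / se > Z_STOP and target != LOOKS[-1]:
            return diff, True          # "significant" at an interim look: stop
    return diff, False                 # ran the full course

results = [run_trial() for _ in range(1000)]
early = [d for d, stopped in results if stopped]
full = [d for d, stopped in results if not stopped]

print(f"true effect:                  {TRUE_EFFECT:.2f}")
print(f"mean estimate, stopped early: {sum(early) / len(early):.2f}")
print(f"mean estimate, ran to end:    {sum(full) / len(full):.2f}")
```

The early-stopped trials systematically overestimate the effect, and no one had to cheat: the stopping rule itself selects for flukishly good interim results, exactly the regression-to-the-mean point Ioannidis makes.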

On this topic I like to use the analogy of a horse race. We all know that the right way to run a race is to run a given distance, and the first horse across the finish line is the winner. Suppose we decide that a statistically significant lead is 1-1/2 lengths, and we adopt a new rule: as soon as one horse is out in front by at least 1-1/2 lengths, we stop the race and declare that horse the winner, even if the field has only gone a quarter of the distance. How often do you think the winner by these new rules would be the same horse as would have won by the old rules?

Ioannidis adds a further wrinkle. If you stop a trial early, the results are also published early. If you let a trial run its normal course, the results will be published much later. A trial that's stopped early is stopped because the treatment looks very good early on. A trial that is not stopped is therefore a trial where the treatment does not look so much better for most of the trial duration. That means almost certainly that the first data published about a new drug will be unrepresentatively positive, and that the more negative data will come rolling in much more slowly. (Or not at all if we add the common tactic of data suppression to the company's bag of tricks.)

Bottom line: if companies were scrupulously honest in reporting data, you could still end up concluding that a new drug is much more effective than it really is. But we know, as documented here ad nauseam, that all too often this scrupulous honesty is honored in the breach rather than the observance. So if you add a little sprinkling of dishonesty or spin to the factors Ioannidis cites, then you have an even more misleading picture.

Ioannidis proceeds to explain how the authors of systematic literature reviews and meta-analyses can try to correct for these sources of bias. But for us the main lesson is to understand these sources of bias and how they operate. Later I'll try to connect the dots between these concepts and other recent analyses of sources of bias in the research literature and its interpretation.

Ioannidis J, Karassa F. The need to consider the wider agenda in systematic reviews and meta-analyses. BMJ 341:761-64, 9 October 2010.

Monday, March 14, 2011

PR, Sock Puppets, and the Ways of Corporations

This rather incoherent post arises from the conjunction of two events: first, seeing a recent post on our fellow blog, Health Care Renewal; and second, being in the process of reading Wendell Potter's book, Deadly Spin, as recommended by HCR's Dr. Roy Poses in his comment on my March 7 post.

HCR's Dr. Scot Silverstein, their regular blogger on matters relating to health information technology and electronic records, using technology that is far beyond my poor powers, was able to track down an anonymous person who regularly left disparaging, ad hominem comments about any criticism of the HIT industry or its products. He found that the messages originated in a computer located in the headquarters of an HIT firm in Massachusetts. As soon as he "outed" the source, the comments from that anonymous individual ceased.

Dr. Silverstein thereby introduced me to the useful term "sock puppet"--a shill who is in the employ of or sympathetic to a moneyed interest, and who attacks opponents of that interest with misdirection, obfuscation, or ad hominem invective, all the while concealing the link between the attacks and the moneyed interest.

Back to Wendell Potter. His book reviews how the health insurance industry has responded to all attempts at health reform in the US with well-financed and highly effective PR campaigns, either to defeat reform outright (as with Clinton) or else to be sure that reforms take whatever shape will best preserve the profits of the private insurance industry (as with "Obamacare," that supposedly socialist program). The primary tool of these PR campaigns is the creation of phony organizations, fully funded by the insurance industry but supposedly made up of grass-roots supporters ("astroturf"), that can parrot the talking points that the insurance poobahs have refined and field-tested, while making it appear that the statements come from anywhere except the insurance companies. The goal is to make it seem as if "everyone is saying that" when "that" in fact was deliberately invented and promulgated by the insurance folks.

It is instructive to compare the standard PR procedures of the insurance industry with those of the drug industry. While there are some differences, the basic approach seems to be the same. As anthropologist Kalman Applbaum (see previous post) shows, the special insight that Pharma has added is the notion of a drug "channel": the entire collection of events that must occur between the time a new drug is discovered and when it is sold to patients. The drug industry has become adept at creating marketing strategies that manage to control an entire channel. This serves the same purpose as insurance industry PR. To the average spectator, it seems simply inconceivable that the drug companies could control all these disparate players--scientists, physicians, the FDA, celebrities who mention a new drug on TV, etc. Therefore the sheer audacity of the industry strategy renders it invisible, in a sense--it simply does not seem possible that all those inputs could be deliberately orchestrated.

Potter W. Deadly Spin: An Insurance Company Insider Speaks Out on How Corporate PR Is Killing Health Care and Deceiving Americans. New York: Bloomsbury Press, 2010.

Monday, March 7, 2011

And Another Defense of the Medicine-Pharma Status Quo...

Back over to Health Care Renewal, this time from Dr. Roy Poses, commenting on a Medscape piece by a surgeon, Dr. Frank J. Veith. As you need to be a Medscape subscriber to access the original, see Dr. Poses' post.

Dr. Veith is highly exercised about busybodies like us pharmascolds who would mess up the cozy financial relationships between the drug industry and docs. So he writes a piece that's virtually a Xerox copy (says Dr. Poses) of a bunch of pieces that have been published over the past 5 or so years. All, he says, contain the same logical fallacies and the same insistence on the huge benefits of taking cash from Pharma, and the dire dangers of ceasing to do so, with nary a hint of evidence to back up the claims.

Given what we know about the prevalence of industry ghostwriting in the scientific literature, one has to wonder--is it just that these pharmapologists are an unimaginative bunch and can think of nothing new or fresh to say? Or is it that the industry hacks are writing this stuff for them, and the same draft keeps circulating?

Industry Doublespeak, This Time in IT

Our friends over at the Health Care Renewal blog have been providing us much useful material as of late. This time Dr. Scot Silverstein, their persistent critic of electronic health records that have not been properly field-tested before the software companies unleash them on a helpless public, looks at a recent report from the health information technology industry consortium group. He finds it full of gobbledygook, as he puts it, perhaps most notably the idea of a "usability maturity model." The industry poobahs admit that their confreres might be hesitant to worry about the "usability" of their products, as demanding that docs and nurses actually be able to use the dang thing before you sell it could cut into sales and profits. But they reassure the doubters among them that "usability" is actually a good thing to worry about, because it has a positive ROI (return on investment). Once again, the notion that you might actually make more money selling a product that's usable, as opposed to one that's not, apparently counts as revolutionary.

Dr. Silverstein is right to be appalled at this tone, given that the IT industry basically subsumes the notion of patient safety under its idea of "usability." The suggestion that the industry must first be convinced of a profit payoff before it worries about whether its electronic records will cause patients to die gives him the willies.

From our standpoint, we can wonder what will happen when Pharma comes upon this highly promising notion of the "usability maturity model." We will then hear that Vioxx, for instance, did not really cause the estimated 144,000 cases of excess heart disease in the US before it was yanked from the market. The only problem was that its usability matured a little bit too slowly. (Enjoy Dr. Silverstein's blog posting.)

AMSA: Tide Has Turned in Medical Schools

AMSA, the American Medical Student Association, can justly claim pride of place in the near-revolution that has occurred in academic medicine over the past half-decade. AMSA developed the astoundingly successful propaganda ploy, their "report card" on institutional COI policies. Deans that had never been bothered about how many of their faculty were in bed with Pharma (just keep those lucrative grants coming in, boys and girls) suddenly developed an abiding concern about COI when it was splashed across the local news that their medical school had been given an F on AMSA's report card. My own stomping ground, UTMB-Galveston, went from an F to an A pretty promptly when spurred on by AMSA (though personally I would not place my own Dean in the "never been bothered" category).

AMSA now reports a new survey showing that now a majority of US medical schools have policies that they consider "strong" in policing drug industry COI. They also add that, "Nearly one-third of medical schools now teach medical students to understand institutional conflict of interest policies, to recognize how industry promotion and marketing can influence clinical judgment and to consider the ethics around conflict of interest." See the entire AMSA press release at

Just How Does One Find New Drugs?

Daniel Cressy, in an informative brief news item in Nature (subscription required), tells us something that most readers of this blog either knew or suspected--that the current drug industry research model for discovering new medications is largely a bust. The industry apparently realizes it too, as Pfizer announces a $1.5B cut in its 2012 R&D budget, and closes a major research facility in the UK.

But what's the answer? After apparently working hard to shift research funding out of academia and into the private sector, the industry may be deciding that academic centers are in fact the very best place to discover promising new molecules. Presumably, according to Cressy, the industry is quite right that it can do very effectively and efficiently what the academics can do only slowly and ploddingly--clinical trials to demonstrate the effectiveness of drugs that have shown promise in Phase I trials. But it should stop trying to do those Phase I trials itself, or the painstaking basic work leading up to them, and let the academic centers do that heavy lifting, with the companies later swooping in to buy up rights to the promising results.

Cressy reports that this is what the companies now do with drugs for so-called orphan diseases, and basically it is a matter of extending the orphan-disease model to their operations as a whole. The big problem now looming, Cressy concludes, is that ideally everyone involved wants somebody else to pay for this whole process; and academic research is unlikely to produce the hoped-for gains unless both governments and industry are willing to pony up.

Cressy may have offered us a good analysis of cutting-edge thinking about drug R&D, but he totally ignores any issues relating to conflicts of interest. He talks about the possibilities of "long-term partnerships" between drug companies and academic centers, without indicating what these partnerships would look like or how academic values are to be maintained if the industry piper is calling the tune. One promising development, however, is the apparent realization among some in industry that proprietary secrecy works against scientific discovery. Cressy cites Patrick Vallance, senior VP for medicines development at GlaxoSmithKline in London, as a proponent of "open innovation": putting potential molecular structures into the public domain and inviting academics to have a go at taking them to the next level.

Cressy D. "Traditional Drug-Discovery Model Ripe for Reform." Nature 471:17-18, 3 March 2011.

Thursday, March 3, 2011

Good Summary of Markingson Case at U-Minn

I've commented several times previously on the Markingson case. Dr. Carl Elliott, the U-Minn bioethicist who's done the most to publicize this case and keep the heat on the University, offers a recent post on the Hastings Center's bioethics blog. The focus of this latest post is to rebut the University's claims that there's nothing more to investigate about the case since the FDA already looked into it and gave the U. a clean bill. Dr. Elliott analyzes the FDA "investigation" and details its shortcomings. However, I'd recommend this post primarily as a very concise review of the facts of the case for anyone looking for same. There are so many inconsistencies remaining that U-Minn still has a lot to answer for.