It’s hard to say what is most horrifying in Carl Elliott’s report, in the current issue of Mother Jones, on a young man who died while caught up in a pharma study run out of the University of Minnesota. That a man considered mentally ill enough to be facing involuntary commitment was deemed well enough to consent to a blinded study of antipsychotic drugs? That the young man was apparently given to understand that he could not leave the study without being involuntarily committed? That research subjects like him are being used in drug trials whose real aim is the production of marketing materials, not “genuine scientific knowledge”? That big pharma is using academic doctors to turn patients like this man into lucrative guinea pigs? That the university’s IRB director testified in a lawsuit that the IRB’s purpose is not to protect clinical trial subjects? Or that, according to Elliott, after the man’s mother unsuccessfully sued the University of Minnesota (where Elliott himself works), the university “filed a legal action against [her], demanding that she pay the university $57,000 to cover its legal expenses”?
Elliott’s hair-raising account, along with my own work in the history of medicine and in patient advocacy, has led me to a troubling meditation, one that goes roughly like this:
A medical ethics travesty occurs, and people yell, “There oughta be a law.” So, if things go right, a new law or regulation goes into place to try to protect patients and research subjects from a repetition of the horror that occurred. But then the people charged with watching out for the rights of patients and research subjects – people who increasingly tend to be lawyers, and who pragmatically treat things like IRBs as liability shields for their institutions – stick as closely to the laws and regulations as possible. They do not worry about the gray zone of questionable practices that remains between patients and the law; they worry about following the law. They worry about protecting the institution from illegal behavior, not about protecting the patients or research subjects. A subtle but key difference, one made vivid in Elliott’s article.
Then if someone – say, Elliott – tries to say, “But this particular practice is unethical in the way it treats patients or subjects,” the response is, “But it isn’t illegal.” As if to say to us, “If you really think it’s so bad, go get the law changed, and until you do so, don’t bother us with your worried little heads.”
In other words, the new law or regulation is really meant to make the system more expansively ethical. But because it is forever read as a constriction of what we are, effectively, allowed to worry about, it does the opposite: it restricts what counts as a behavior you can prohibit or stop, and it seems to restrict what you can even worry about.
Now, if the systems we set up to protect people are not only inadequate to protect people, but in fact are used to constrict concerns about protection – how do we really protect people in medicine and medical research? (Especially when the laws are used to protect an institution, and not a seriously mentally ill man caught in a research study, or his mother.) Who will do the real work of medical ethics, and how?
The University of Minnesota has issued a statement on this story. Predictably, it comes from the general counsel, and attempts to undermine the power of Elliott’s criticisms through legalistic meandering. The statement also seems to contain two basic misrepresentations. When I put those possible misrepresentations to the university spokesman named on the statement’s Web page, he responded that “Because this is a closed legal case[,] there is much I’m simply unable to discuss openly and on the record.” His e-mail signature line contained the University of Minnesota’s slogan: “Driven to Discover.”
Alice Dreger is a professor of clinical medical humanities and bioethics at Northwestern University’s Feinberg School of Medicine. She has collaborated with Carl Elliott on various occasions.