IRB: Ethics & Human Research

Why Research Oversight Bodies Should Interview Research Subjects

Research oversight bodies conducting for-cause investigations often fail to interview research subjects who have complaints of mistreatment. I argue that this failure is a mistake for three reasons. First, because written medical records are often inaccurate, there may be no entries in a study subject’s record about research-related medical harms. Second, research staff members write the reports that make up the research study records. If there are allegations that a research subject experienced research harms, subjects will rightly feel it is unfair that the for-cause investigation relies only on records written and kept by the research team. Third, the outcome of a for-cause investigation may be influenced by the ethical distance that results from failure to learn directly from research subjects about their complaints of mistreatment.

Keywords: Research subjects, human experimentation, human research subject protections, research ethics, research oversight, for-cause investigations


The first psychiatric drug Robert Huber took was an experimental one. In July 2007, Huber went to a Minneapolis emergency room feeling panicked. His ears were ringing and he thought he might be hearing voices. Very soon, Huber found himself on a locked psychiatric ward with a diagnosis of paranoid schizophrenia.1 A psychiatrist at the University of Minnesota confined him to the unit under an emergency 72-hour hold.2

The emergency hold had not expired when the psychiatrist asked him to enroll in a research study. “Immediately, they were on me to do experimental medications, non-FDA approved—and I had never been on medications in my life,” Huber later told a reporter. “Then they say you have a giant medical bill and if you do the research, you won’t have this giant medical bill.”3 The study, sponsored by Solvay Pharmaceuticals, was for an unapproved antipsychotic drug called bifeprunox. Huber signed a consent form and was later released from the locked unit, but within weeks, the Food and Drug Administration (FDA) had rejected bifeprunox for marketing approval, citing its lack of efficacy and the death of a research subject in Europe.

The study continued despite the FDA’s decision. Huber began having excruciating abdominal pain and returned to the emergency room three times. He even considered suicide. Finally, he dropped out of the study and sought medical care elsewhere. Several months later, Solvay Pharmaceuticals halted all bifeprunox studies and stopped development of the drug.

In May 2014, almost seven years after Huber was enrolled in the bifeprunox study, his story aired on a local television station. The television report appeared at a time when the University of Minnesota was facing intense public scrutiny for the suicide of another psychiatric research subject. The university declined an on-camera interview but defended itself with a statement about Huber that read in part, “His medical record shows extreme anxiety and paranoia, a history of head injuries and lengthy battle with alcoholism. It is highly inappropriate for him to be put in the media spotlight as a spokesperson for clinical trial safety.”4

Under pressure, the University of Minnesota eventually conducted or commissioned three reviews of Huber’s treatment—by the institutional review board’s (IRB’s) executive committee, by a newly established Research Compliance Office, and by FTI Consulting, an external firm hired by the university.5 None of the reviews found any significant fault with the way Huber was treated. The IRB’s executive committee ruled that Huber’s consent had not been compromised in any way by the fact that he had been involuntarily confined under a 72-hour hold when he was enrolled. A letter to Huber stated, “You were not pressured or coerced to enroll in this study.”6

Whether or not these decisions were fair, an important procedural question remains unanswered. Should Huber have been allowed to speak to the internal and external bodies that were investigating his experience? None of the three bodies tasked with reviewing the case actually interviewed him. Instead, they relied solely on a review of written records. The director of the university’s human research protection program explained, “Standard practice for investigating complaints about mistreatment of research subjects allows for but does not require the interview of witnesses.”7

Such a policy is not unusual. Most oversight bodies (including IRBs and the FDA) distinguish between routine audits, which are typically done periodically to determine compliance with federal regulations or institutional policy, and for-cause audits or investigations, which are usually triggered by a suspicion of research misconduct, failure of compliance, or mistreatment of research subjects. Most bodies have explicit policies for proceeding with for-cause investigations, detailing who should conduct the investigation (typically the IRB, the office of research compliance, or a special committee), which written documents should be reviewed, and the procedures that should be followed.

Some policies state that investigators should interview the principal investigator of the study in question. But they usually say nothing about research subjects themselves. The same appears to be true for federal investigators. While a spokesperson for the FDA says that FDA inspectors sometimes do contact human subjects, she also says that the interviews are typically not mentioned in the FDA’s Establishment Inspection Reports. The FDA compliance manual for bioresearch monitoring inspections of clinical investigators makes no mention of interviews with research subjects.8

Reliance on written documents is one of the key features of modern bureaucracy, as Max Weber pointed out over a century ago,9 and research oversight is nothing if not bureaucratic. But is there a solid rationale for this practice? Limiting an investigation to written documents and interviews of the study team has the advantage of convenience, but it hardly seems fair to research subjects, many of whom will want to give a firsthand account of their experiences. Just as important, failing to interview subjects has the potential to influence the judgments of investigative bodies by distancing them from the human impact of serious mistreatment.

Bureaucratic Management and Limited Information

One supposed virtue of bureaucratic management, as Weber pointed out, is impartiality. Bureaucratic operations are characterized by impersonal rules, which expert specialists are supposed to apply rationally to human problems. Whether bureaucracies are actually impartial is debatable, of course. Many serve the interests of the organizations in which they are located. But the impartiality to which bureaucracies are supposed to aspire is impossible without accurate information, and there is good reason to question the accuracy of written medical records. While little information is available about research study documents, the problems with hospital and personal medical records are well known. One 2004 study found that only 5% of patients had accurate computerized medication histories.10 A more recent analysis of malpractice claims found that electronic health records were a factor in 20% of cases.11 So unreliable are electronic health records that many malpractice attorneys and expert witnesses advise clients to begin their preparations with a thorough vetting of the records in order to avoid embarrassing mistakes later.12

Even if the facts in written documents about research studies are accurate, this does not mean that the written record is complete or that it is a fair representation of the subject’s point of view. Many for-cause investigations are triggered by a complaint in which a research subject and a clinical investigator disagree. But study records are written and compiled by the research team, not the subject. If a subject has been mistreated or injured and there is a genuine disagreement about what happened, it hardly seems fair to rely solely on documents written and compiled by only one party, without giving the other a chance to respond.

Of course, interviews with subjects are not appropriate for all audits and investigations. Some audits concern incidents in which human subjects are not significantly involved. And some subjects may not want to speak to investigators. A subject might not trust the integrity of the oversight body or may simply feel too upset or angry about how he or she has been treated. On rare occasions, subjects involved in litigation against an institution might decline to be interviewed on the advice of an attorney.

Yet in some high-profile controversies, there is good reason to believe that the failure to interview research subjects explains the large gap between the findings of federal regulators and those of investigative journalists. In 2001, for instance, Duff Wilson and David Heath of the Seattle Times published a series of damning investigative reports on two controversial cancer studies at the Fred Hutchinson Cancer Research Center (“The Hutch”) in Seattle: Protocol 681, which examined whether experimental compounds could protect the vital organs of breast cancer patients getting high-dose chemotherapy, and Protocol 126, which sought to find out whether removing T-cells would prevent graft-versus-host disease in patients getting bone marrow transplants. The Seattle Times presented credible evidence that at least three (and perhaps four) patients died unnecessarily in Protocol 681 and that at least 20 patients died of causes directly related to Protocol 126. It also found alarming conflicts of interest, inadequate IRB review, and evidence that patients had been deceived.13

The Seattle Times reporters drew on extensive interviews with the families of patients who had died, many of whom they identified and traced using public records and death certificates. They also interviewed a whistleblower, Dr. John Pesando, an oncologist and IRB member who told the Times he was “haunted and motivated” by the memory of two of his patients who died in Protocol 126. But investigators from the federal Office for Protection from Research Risks—which imposed no sanctions on The Hutch—failed to conduct any interviews at all. According to the Times, “The federal investigators didn’t visit The Hutch or interview any scientists, institutional review board members, patients or patients’ families.”14

A similar sequence of events played out at the University of Minnesota beginning in 2003, when a psychiatric researcher recruited a mentally ill young man named Dan Markingson into an antipsychotic study while Markingson was under a “stayed” involuntary commitment order. Markingson’s mother objected to his recruitment, arguing that his consent was compromised by the commitment order, which legally obliged him to follow the psychiatrist-investigator’s recommendations. She tried for months to have him dropped from the study, warning the study team that he was in danger of killing himself. In May 2004, Markingson stabbed himself to death with a box cutter.15

In 2005, an FDA inspector found no wrongdoing on the part of the psychiatrist or any other member of the study team, stating that “[t]here was nothing different about this subject [Markingson] than others enrolled to indicate he couldn’t provide voluntary, informed consent.”16 But the FDA inspector, who did not interview Markingson’s mother, Mary Weiss, apparently did not even consider the possibility that Markingson’s “voluntary, informed consent” was influenced by the fact that he was under a commitment order that compelled him to do as his psychiatrist recommended.

Contrast the FDA’s findings with the 2008 investigative report by Paul Tosto and Jeremy Olson for the St. Paul Pioneer Press, which drew on interviews with Weiss and highlighted the coercive influence of the involuntary commitment order on Markingson’s recruitment.17 A year later, after hearing testimony from Mary Weiss, Minnesota passed state legislation that severely restricted the recruitment of involuntarily committed patients into psychiatric drug studies.18 In 2015, the university was forced to suspend recruitment into psychiatric drug studies after a highly critical report on the Markingson case by the state Office of the Legislative Auditor—which, once again, found that the involuntary commitment order had compromised Markingson’s consent.19

That the findings of an investigator will be influenced by the nature of the information he or she examines is hard to dispute. Yet the absence of full information may not even be the most significant consequence of failing to interview subjects. In a bureaucratic system, as Weber understood, the professional expert achieves success by excluding personal feelings from the execution of his or her tasks. But this depersonalized approach can create a vast moral distance between the bureaucrat and the human being whose problems are being considered. When injuries and injustices to research subjects are reduced to figures on a chart, represented by a number or a set of initials, it becomes all too easy for regulators and compliance officers to distance themselves from the human consequences of their decisions.

The Bureaucratic Response to Exploding Cars

Over twenty years ago, the Journal of Business Ethics published a semiautobiographical article by Dennis Gioia, a professor of management at Pennsylvania State University.20 In 1973, after only a year at the Ford Motor Company, Gioia had become the company’s field recall coordinator, a position that put him in charge of tracking potential safety problems with Ford vehicles. The job came at an inauspicious time; in 1970, Ford had introduced the Pinto, the compact car that later became notorious for a design defect that made its gas tank prone to rupture and explode when the car was hit from behind.

Ford had brought the Pinto to market unusually quickly in response to competitive pressure. According to Gioia, Ford had set out to manufacture the Pinto under strict criteria known as “the limits of 2000”: the Pinto could not exceed $2000 in cost or 2000 pounds in weight. The problems with the gas tank were obvious very early; of the 11 Pintos that Ford crash tested, 8 had potentially catastrophic tank ruptures. The only three tanks that didn’t rupture had all been modified. Ford could have fixed the problem for only $11 per car, but the company was reluctant to spend even that small sum.

By the late 1970s, the catastrophic consequences of Ford’s decision were becoming a matter of public record. A 1977 investigative report in Mother Jones led with the story of a Pinto bursting into flames when it was rear-ended by a car traveling only 28 miles an hour. Dr. Leslie Ball, the former safety chief for the National Aeronautics and Space Administration, called the release to production of the Pinto “the most reprehensible decision in the history of American engineering.”21 An investigation by the National Highway Traffic Safety Administration soon followed. In August 1978, three teenage girls died in a Pinto fire, and a grand jury subsequently indicted Ford on charges of reckless homicide. (The company was eventually acquitted.)

As the field recall coordinator in the early 1970s, Gioia could have taken action to have the Pinto recalled, or at least to flag the potential safety issues. Yet he didn’t. Twenty years after the fact, Gioia was baffled by his catastrophic mistake. By revisiting the mistake in print, Gioia hoped to identify the organizational factors that allowed it to happen.

At first, Gioia writes, he saw only a few reports of Pinto fires. Reports like this were not unusual; in fact, a large part of Gioia’s job was to sift through large numbers of potentially alarming reports and decide which problems were serious enough to need attention. But later Gioia actually saw a crushed, incinerated Pinto at a Ford inspection depot, a place known at the company as “the Chamber of Horrors.” The sight of the car made a deep impression. Afterward, Gioia recommended a preliminary safety review for the Pinto. But after discussing the case at a meeting, he, like everyone else, voted against a recall. Gioia voted the same way at a later meeting, despite being made aware of damning crash-test data. As Gioia writes, he recommended against a recall despite “even more compelling evidence that people were probably going to die in this car.”22

Most accounts of poor ethical decision-making in organizations assume that when employees are confronted with an ethical problem, they recognize it as such and make a considered decision about how to act. According to Gioia, this is not true. Employees in an organization—and perhaps especially in a bureaucracy—simply follow scripts: deeply embedded, often unarticulated understandings that both shape the way knowledge is organized and provide a program of action. Just as a script for how to behave in a restaurant says that you must sit down at a table, read the menu, and wait for a server to take your order, the script for how to behave as a recall officer for Ford tells you what to do with the safety reports that arrive on your desk, quickly and in very large numbers.

The advantages of a script are obvious. Employees following a script do not need to think deeply when they encounter a problem, because the way to handle the problem has already been worked out and stored in organizational memory. But it is precisely because scripts shortcut thought that they often hide the ethical import of an action. Scripts typically contain no ethical component whatsoever, Gioia writes. As a result, when confronted with reports of exploding Pintos, Gioia says, “I perceived no strong obligation to recall and I remember no strong ethical overtones to the case whatsoever.”23

It is tempting to imagine that Gioia’s time at Ford simply transformed his moral values. Perhaps as he became a corporate insider, bound by duties of loyalty to Ford, he simply learned to place the well-being of the company over that of Pinto drivers. But Gioia doesn’t think so. He did not become a cold-hearted person. Rather, the script he was given required that he squelch any emotion he might feel about the consequences of his choices. It demanded that he behave rationally and impersonally. The only thing that pushed him close to interrupting that script was the sight of an incinerated Pinto in the Chamber of Horrors. Gioia says the revulsion he felt at that sight was “immediate and profound.”24 But that revulsion soon disappeared. At the next safety meeting, discussing the Pinto fires with others, he wondered why he had even brought the case up. There was simply no way to incorporate it into the script he was obliged to follow.

Gioia’s response would not have been unfamiliar to Weber, who understood that bureaucracies typically require employees to suppress their emotions in order to produce fair and impartial results. As Weber wrote, bureaucracy “develops the more perfectly . . . the more completely it succeeds in eliminating from official business love, hatred, and all purely personal, irrational, and emotional elements which escape calculation. This is the special nature of bureaucracy and is appraised as its special virtue.”25 Nor would it have been unfamiliar to Franz Kafka, who understood the existential horror of getting swallowed by an opaque, emotionless bureaucratic machine.

In Gioia’s view, the solution to ethical indifference is to jolt people out of their scripts and force them to think, usually by exposing them to “vicarious or personal experiences that interrupt tacit knowledge of ‘appropriate’ action.”26 For a Ford recall officer, this could mean visiting the Chamber of Horrors. For a federal research regulator or a university compliance official, it could mean meeting with a research subject who has been injured or mistreated. This may not be enough, of course. For Gioia, it clearly wasn’t. But genuine ethical reflection cannot really begin until the sleepwalkers have been jolted awake.

Rewriting the Script

Not every research subject who feels that he or she has been mistreated is right. Some complaints of mistreatment are simply unjustified. Yet it is important for research subjects with complaints to feel as if oversight bodies have treated them fairly, even if the result is not what they would like. It is difficult to imagine anyone feeling fairly treated if an oversight body will not even allow the person to speak.

In my view, any oversight body charged with conducting for-cause investigations involving potential mistreatment of human subjects (such as IRBs, compliance officers, the FDA, the federal Office for Human Research Protections, and others) should develop policies that require auditors and investigators to offer subjects the opportunity to be interviewed in person. Those policies should also allow subjects to submit documentation and other evidence in support of their accounts. While routine audits (as opposed to for-cause investigations) are probably less likely to be affected by the testimony of subjects, provisions should also be made for oversight bodies conducting routine audits to interview subjects, as well as the family members of any subjects who have died, especially when those interviews might provide important information about compliance.

It is hard to know whether any of the three oversight bodies that reviewed Robert Huber’s case at the University of Minnesota would have reached different conclusions if they had interviewed him in person. Yet what is striking about the two publicly released reports on Huber’s case is not just their conclusions but the fact that neither report contains any information about Huber at all.27 There is no narrative, not even the brief patient history that is typically included in a medical record. The reports do not explain why Huber was in the hospital, how he was recruited, or what happened to him in the study. Anyone reading the reports who had not already seen the televised account of his story would be hard-pressed to understand even what had led to the complaints.

Dennis Gioia argues that emotional engagement is necessary to “short-circuit” the standard scripts that bureaucrats learn to follow.28 Gioia may be right, but one need not go quite that far. Scripts can be rewritten so that they do not demand robotic detachment. Doctors follow scripts when they see patients, but the script includes (or ought to include) listening to the patient’s story. Investigative journalists follow scripts when they investigate wrongdoing, but the script includes interviewing sources. There is nothing about the nature of research oversight that should require compliance officers, regulators, or other investigators to exclude interviews with research subjects from for-cause investigations. Interviews could only make these investigations better.

Carl Elliott, MD, PhD, is a professor in the Center for Bioethics and the Department of Pediatrics and an affiliate faculty member in the Department of Philosophy and the School of Journalism and Mass Communication at the University of Minnesota.

References

  1. Baillon J. Investigators: U of M drug study criticism grows. KMSP News (Fox 9). May 19, 2014. https://www.fox9.com/health/1647039-story. The accuracy of the details in the television report has been confirmed by cross-checking with the subject’s medical records, with his permission, and with IRB records obtained through state open-records requests.
  2. Dykhuis D. University of Minnesota Human Research Protection Program, letter to Carl Elliott, June 5, 2015. https://www.scribd.com/doc/281844393/Debra-Dykhuis-Letter-to-Carl-Elliott-June-5-2015.
  3. See ref. 1, Baillon 2014.
  4. See ref. 1, Baillon 2014.
  5. FTI Consulting. Independent Assessment of a University of Minnesota Institutional Review Board (IRB) Noncompliance Determination. March 15, 2011. https://www.scribd.com/doc/312577176/FTI-Consulting-Report-on-Robert-Huber-Case; Webb P. University of Minnesota Research Compliance Office report. January 27, 2016. https://www.scribd.com/doc/312577568/Research-Compliance-Office-Report-on-Robert-Huber-Case-University-of-Minnesota-January-27-2016.
  6. Dykhuis D. University of Minnesota Human Research Protection Program, letter to Robert Huber, May 6, 2015. https://www.scribd.com/doc/264627742/Letter-from-Debra-Dykhuis-of-University-of-Minnesota-Research-Protection-to-Robert-Huber-regarding-bifeprunox-study-May-6-2015.
  7. See ref. 2, Dykhuis 2015.
  8. Personal communication to the author from C. Loose, Food and Drug Administration Office of Regulatory Affairs, May 18, 2006. See also U.S. Department of Health and Human Services, Food and Drug Administration, Compliance Program Guidance Manual for FDA Staff, Compliance Program 7348.811, Bioresearch Monitoring: Clinical Investigators, December 8, 2008, https://www.fda.gov/ICECI/EnforcementActions/BioresearchMonitoring/ucm133562.htm.
  9. Weber M (trans. and ed. Gerth HH, Mills CW). From Max Weber: Essays in Sociology. New York: Oxford University Press, 1958, p. 216.
  10. Kaboli PJ et al. Assessing the accuracy of computerized medication histories. American Journal of Managed Care 2004;10:872-877.
  11. Ruder DR. Malpractice claims analysis confirms risks in EHRs. Patient Safety and Quality Healthcare 2014;11(1):20-23.
  12. Allen A. Electronic record errors growing issue in lawsuits. Politico. April 4, 2015. https://www.politico.com/story/2015/05/electronic-record-errors-growing-issue-in-lawsuits-117591#ixzz47bsGneMk.
  13. Wilson D, Heath D. Uninformed consent: What patients at “The Hutch” weren’t told about the experiments in which they died [4-part series]. Seattle Times. 2001. https://old.seattletimes.com/uninformed_consent/.
  14. Uninformed consent: Questions and answers about “The Hutch.” Seattle Times. March 25, 2001. https://old.seattletimes.com/uninformed_consent/qa.html.
  15. Office of the Legislative Auditor, State of Minnesota. A Clinical Drug Study at the University of Minnesota Department of Psychiatry: The Dan Markingson Case. March 19, 2015. https://www.auditor.leg.state.mn.us/sreview/markingson.pdf.
  16. Matson S. FDA Establishment Inspection Report, Stephen Olson MD. Report no. FEI 3004927371. July 22, 2005. https://www.scribd.com/doc/49641428/FDA-Inspection-Markingson-suicide.
  17. Tosto P, Olson J. Dan Markingson had delusions. Pioneer Press (St. Paul). May 18, 2008. https://www.twincities.com/ci_9292549.
  18. Laws of Minnesota 2009, chapter 58, codified as Minnesota Statutes, 253B.095, subdivision 1(d)(4) and (e). See also Olson J, Minnesota House, Senate unanimously pass limits on researchers’ use of mentally ill patients, Pioneer Press (St. Paul), May 8, 2009.
  19. See ref. 15, Office of the Legislative Auditor, State of Minnesota 2015.
  20. Gioia DA. Pinto fires and personal ethics: A script analysis of missed opportunities. Journal of Business Ethics 1992;11(5):379-389.
  21. Dowie M. Pinto madness. Mother Jones. September-October 1977. https://www.motherjones.com/politics/1977/09/pinto-madness.
  22. See ref. 20, Gioia 1992.
  23. See ref. 20, Gioia 1992.
  24. See ref. 20, Gioia 1992.
  25. See ref. 9, Weber 1958, p. 216.
  26. See ref. 20, Gioia 1992.
  27. See ref. 5, FTI Consulting 2011 and Webb 2016.
  28. See ref. 20, Gioia 1992, p. 387.