IRB: Ethics & Human Research

Third-Party Risks in Research: Should IRBs Address Them?

In addition to risks to individual research subjects, scientific research poses risks to third parties and to groups. Genetic research presents such significant third-party risks to groups that the National Human Genome Research Institute at the National Institutes of Health has funded a major Working Group on Community Consultation. Considering risks of harms to groups leads to difficult philosophical problems concerning the nature of groups, the ways in which they may suffer harms, and the relations between harms to individuals and harms to groups. I have tackled some of these elsewhere.1 This article focuses on the narrower question of whether Institutional Review Boards (IRBs) should have the responsibility of protecting third parties from the risks of research involving human subjects.

The risk of harm to third parties has recently generated considerable attention. As Kimmelman documents in detail, research in medicine, biology, and the social sciences may have consequences for people who are not themselves involved in the research.2 For example, subjects involved in a vaccine study might transmit immunity to those with whom they have close contact or infect them with some ailment, and children born to research participants exposed to mutagens might have birth defects. Not treating the syphilis of the men observed in the Tuskegee study put their sexual partners at risk of being infected. These “process-related risks” are independent of the findings of the research.

In addition, research findings may cause harms to third parties. Knowledge is not always a good thing, and even when it is on balance a good thing, it can benefit some people while harming others. For an example of such an “outcome-related risk,” research into an individual’s genetic makeup might yield information about behavioral traits or genetic predisposition to disease that has implications for family members or for socially identifiable groups. For a more controversial example, research on Arab culture reported in books such as The Arab Mind3 and The Closed Circle: An Interpretation of the Arabs4 influenced interrogation techniques used by American military personnel on suspected Arab terrorists. Though the distinction between process-related and outcome-related harms (which is also drawn by Bok5) is not perfectly sharp, it is clear enough for my purposes.

The harms that research may cause to third parties either in its conduct or via its findings are serious enough to warrant protections. Since IRBs already exist to protect human research subjects, it might seem that they should also protect third parties. In a recent essay in this journal, Resnik and Sharp make exactly this case.6 This article takes issue with their conclusions and argues that IRBs are not the appropriate bodies to protect third parties.

Are IRBs Supposed to Protect Third Parties?

If one carefully examines the content and rationale of the IRB criteria for research approval (the relevant portions of which are reproduced in Table 1), the restriction to research subjects is unambiguous. Section 46.111(b) and clauses one, two, six, and seven of 46.111(a) of the federal human research regulations specify what risks and benefits IRBs should attend to and how IRBs should act to limit research risks. Clauses three, four, and five, concerning selection of subjects and informed consent, also aim to protect research subjects, though indirectly and not via the IRB’s own assessment of risks and benefits. Clauses one, two, six, and seven define risks and benefits narrowly. Section 46.111 directs IRBs to consider only process-related risks to subjects from their participation in the research, while benefits consist either of process-related benefits to subjects from their participation or of “the importance of the knowledge that may reasonably be expected to result.” Benefits to third parties count only insofar as they relate to the importance of the knowledge resulting from the research. In speaking of “the importance” of the expected knowledge rather than its benefits, the second clause presumes that knowledge is always a good thing. Otherwise the importance of the knowledge that research provides would not necessarily be an argument in favor of carrying on the research. If the knowledge gained from a research study is dangerous to the participants in the study or to others, it is not for that reason less important. Outcome-related risks are thus ruled out. As the last sentence in clause two specifies, IRBs may ignore all outcome-related risks, even those to participants.

This narrow specification of the risks and benefits that IRBs should consider reflects a particular conception of an IRB’s role. Although IRBs are governed by the general values of beneficence, justice, and respect for persons, the task to which these principles are applied is narrow. An IRB’s job is to protect research subjects—not the community at large—while making room for important research. This task is difficult, and the tradeoffs it demands are vague. It is hard to know even what it means to balance the importance of the knowledge that research may provide against risks to participants, let alone to carry out that balancing. By limiting risks and benefits (other than the importance of the expected knowledge) to risks and benefits to participants, the criteria make the task more manageable.

Furthermore, though IRB members must judge the importance of the proposed research, they are not supposed to make science policy or to ask questions about the benefits of scientific knowledge. They are to presume that the knowledge scientific research provides is a good thing, so that more important results are automatically better results. The authors of these criteria might have instructed IRB members to make this presumption because they believed that scientific knowledge is always a good thing, but any such belief is doubtful. Scientific knowledge does not always do more good than harm. For example, humans might be better off had scientists not learned how to make nuclear weapons. Many argue that human cloning should be banned, regardless of whether it is harmful to the clones, on the grounds that knowledge of ways to enhance and modify human beings is too dangerous. In addition, even if scientific knowledge were always a good thing on the whole, it clearly is not always good for everyone.

If scientific knowledge is not always an overall good, and obviously not always a good to each and every person, then perhaps some research shouldn’t be done—though regulating research might wind up doing more harm than relying on researchers’ self-regulation. If the benefits of scientific research are unevenly distributed and include harms to some people, then perhaps research should be regulated to limit these harms, or some compensation scheme should be devised. IRBs are not constituted to carry out these tasks. They are not designed to make science policy or to implement the demands of distributive justice. Their job is to protect those who participate in research, to make sure that they are treated respectfully and recruited equitably, and to ensure that there is good reason to carry out the research.

Should IRBs Consider Third Party Process-Related Risks?

Though the mission of IRBs has been narrowly defined, perhaps it should be broadened so as to address third-party risks, whether process-related or outcome-related. The “International Ethical Guidelines for Biomedical Research Involving Human Subjects” of the Council for International Organizations of Medical Sciences7 go some distance in this direction, and Resnik and Sharp argue that the general moral obligation to avoid doing harm requires IRBs to address third-party risks.8 In this section I shall first construct the strongest case I can for broadening the charge of IRBs to include a concern with third-party, process-related risks and then show why that case cannot be sustained. The next section considers outcome-related risks.

Questions about process-related, third-party risks often resemble questions about process-related risks to research subjects, and at first glance it seems that IRBs should tackle them. For example, questions about the risks of a live-virus vaccine to those in contact with research subjects are similar to questions about the risks of the vaccine to the research subjects themselves. Similarly, the risks of gene transfers to descendants of research subjects are much the same as the risks to the subjects. These links are tacitly acknowledged in the one exception where IRBs are supposed to protect third parties. In the case of research on pregnant women, IRBs are required by federal human research regulations to consider the risks of participation to the developing fetus. It would thus seem (as Resnik and Sharp argue) that immediate risks to third parties can and should be addressed by IRBs.

Extending the responsibilities of IRBs in this way may, however, prove too burdensome. For example, experimentation on individuals with hereditary ailments that enables them to reproduce may cause them to have children with severe medical problems who would otherwise not have been born. Should IRBs worry about such possibilities? Research involving prisoners with serious illnesses might enable them to commit crimes after they are released that otherwise would not have happened. Are these issues that IRBs should address? Should concerns about third-party risks lead IRBs to insist that subjects who face a serious risk of harm from research have no dependents and no important responsibilities? Should concerns about third-party risks lead IRBs to prefer poorer over richer subjects on the grounds that the expected value of the lost earnings of those who are poor—which is arguably a rough measure of expected harms to others—is lower? Should IRBs investigate details of the personal lives of prospective subjects to determine how their participation in research may affect others? Once IRBs open the Pandora’s box called “third-party risks,” will they be able to function?

Though these complications are disquieting, they do little to refute Resnik and Sharp’s position that IRBs should address third-party risks, because the complications will arise regardless of which institution has the job of protecting third parties. These problems are not going to disappear if they are not assigned to IRBs. The way to manage them is to adopt general rules that govern what sort of third-party risks must be considered. Though the rules are bound to be imperfect, there is no other feasible way to protect third parties. Examples of such general rules would be, “Ignore any third-party risks that may result from improving the health or saving the life of a research subject,” or, as Resnik and Sharp suggest, “Ignore risks to third parties who are not directly affected by the research.” These rules may sometimes permit research that does more harm than good. But they are reasonable presumptions and should not be challenged unless the circumstances are exceptional. Rules such as these would limit the discretion of IRBs in considering third-party risks and make that consideration feasible.

Should one then support an expansion of the responsibilities of IRBs to include consideration of process-related risks to directly affected third parties? Consider the following example, which at first glance appears to favor expansion. A few years ago, an IRB at the University of Wisconsin Medical School was asked to approve an experimental transplant of a monkey liver into a patient dying of liver failure for whom no human liver was available. Doctors expected the monkey liver to be rejected, but they hoped it would keep the patient alive until a human liver became available. Since the patient would soon die without the transplant, risks to the patient were not a major concern, but risks to third parties were. What if the transplant introduced a new infectious disease into human beings? The IRB was able to adhere to the stated criteria and limit its deliberations to risks to the individual patient only after being assured that other parties had been alerted to the third-party risks.

Were the IRB members wrong to adhere to the criteria and ignore the third-party risks? Though the probability of harm was low, the harm could have been catastrophic. Shouldn’t IRBs consider such risks? When thinking about risks to research subjects, members of IRBs are in a good position to spot risks to third parties. Are they not then in the best position to protect third parties? Somebody needs to address questions such as whether the risks of this transplant were serious enough to justify prohibiting it. Is any other oversight committee better situated than IRBs to address these questions in a timely manner?

This case for assigning IRBs the task of protecting third parties from research risks is plausible, but there are four problems with it that, taken together, establish that IRBs are the wrong entities for the job. First, as the discussion above suggests, protection of third parties must take place within a regulatory and policy framework. Third-party protection is as much a matter of public policy as of regulation of particular research protocols, and so it calls for policy deliberation. IRBs are not policy-making bodies. They have no formal means of coordinating their decisions or of flagging general questions that legislators or administrative bodies with a broader focus should address. Assigning IRBs responsibility for protecting third parties would lead to a serious mismatch of function and structure.

Second, since IRBs are currently designed to protect research subjects only, they need be concerned only with research on human beings. But whoever is supposed to protect third parties from process-related risks must be concerned with other research, too. A great deal of research on plants and nonhuman animals and even research that has no living subjects (like that on nanotechnology) may pose third-party, process-related risks to human beings. So either IRBs would have to review a great deal of research not currently subject to IRB review, or the protection of third parties from scientific research would have to be divided among multiple bodies, with IRBs addressing only third-party risks arising from research involving human subjects. Neither alternative is attractive. If IRBs had to protect third parties from research in general, they would be swamped. If IRBs were concerned only with third-party risks of human research, there would be an inefficient duplication of effort. Since third-party risks of research on humans are similar to third-party risks of other research, shouldn’t the same oversight apply to both? Even though somebody needs to be concerned about third-party risks from research, IRBs are not the right somebodies.

A third reason why IRBs should not be assigned responsibility for protecting people from third-party risks is that—as Kimmelman points out and Resnik and Sharp note—protecting third parties may compete with protecting research subjects. Broadening the concerns of IRBs risks undermining the protections they offer to research subjects.

The fourth reason why the responsibility of IRBs should be limited to protecting research subjects is that different criteria should govern the protection of research subjects than the protection of third parties, even directly affected third parties. With few exceptions, competent adult research subjects, unlike third parties, are subject to risks only with their informed consent. Without exaggerating the importance of consent or supposing that consent is always fully informed, one can nevertheless recognize that the consent requirement is an important protection to those who participate in research. This protection cannot always be given to third parties, whose consent often cannot be obtained. Without the possibility of getting informed consent, presumably more stringent protections of other kinds are needed. Merely applying the other parts of the federal human research regulations would fail to show respect to those at risk, and it might easily wind up treating them unjustly. Since third parties, unlike research subjects, typically do not give consent, risks to third parties should not be judged by the same criteria as risks to research subjects. The fact that research participants may receive some benefits to compensate for research risks heightens the contrast between risks to research subjects and risks to third parties. Protecting someone from a known, understood, partially compensated, and consented-to risk is very different from protecting someone from an unknown and uncompensated risk that she cannot escape. If one oversight committee addresses both kinds of risks, its members are likely to apply the same standards to both.

I conclude that limiting IRBs to protecting research subjects, as the current criteria do, is fully justified. The serious problems raised by third-party risks require an integration of policy-making and regulation, which is foreign to IRBs and beyond their competence. Third-party risks arise in much the same way whether research employs human subjects or not, and they should be treated uniformly—and thus not by IRBs. Furthermore, different criteria should govern imposing risks that are voluntarily accepted and to some extent compensated than govern risks imposed without any specific consent. So IRBs should not be assigned responsibility to protect groups and third parties in general from process-related research risks.

Should IRBs Protect Third Parties from Outcome-Related Risks?

The case against asking IRBs to protect third parties from outcome-related risks is even stronger than that against asking IRBs to consider process-related risks to third parties, and many of the same concerns apply. First, the proposal that IRBs also consider outcome-related risks would transform IRBs from committees that are narrowly concerned with the protection of research subjects to committees that are supposed to address both the overall benefits and risks of scientific research and the distribution of those benefits and risks. IRBs are not suited to these tasks. Attached as they are to particular institutions, their perspective is too parochial. They do not represent the many different constituencies that may be benefited or harmed by research. Lacking any mechanism for coordination, they cannot legislate or implement any coherent criteria for assessing the benefits and risks of research or how they are distributed.

Second, whether research involves human subjects is irrelevant to whether its findings pose risks for human beings not engaged in the research. Regulation of research to protect human beings should apply equally to both and should be administered in some uniform way. Since IRBs oversee only research involving human subjects, they are not the right tools for this job.

Third, even if IRBs were radically reconstituted as scientific evaluation committees, they would be in no position to respond to the basic normative requirement that those placed at risk should have some voice in regulating research. The fact that competent third parties are unable to exercise informed consent is a symptom of a deeper problem. Third-party risks raise questions about the democratic governance of research: should research whose findings may do harm be regulated? Since even beneficial research may do harm to particular individuals or groups, how should benefits and harms be distributed? Although expert opinion should play some part in addressing such questions, popular sovereignty constrained by minority protections implies some role for democratic participation in their consideration. When research crosses national and cultural boundaries, matters become even more complicated, since legitimate authorities may disagree on which research to permit. IRBs are not suited to formulate overall research policy. Their job is not to protect humanity at large or to secure distributive justice.


Protecting third parties from both process-related and outcome-related risks poses more general problems than IRBs are or could be equipped to address. Addressing these research-related risks requires general policies concerning what research is permissible and how to distribute fairly the benefits and burdens of research and its findings. Such policies are legitimate, and responsive to the requirement that those put at risk should have some say in what is done, only if they are promulgated by representative institutions. This is not yet the case, and there is a serious need for protections for third parties that help secure the benefits of research while adhering to democratic norms and the requirements of distributive justice. A procedural solution to these problems requires specifying forms of public engagement to regulate research that will respond to the interests and moral considerations that should influence whether and how specific research can be carried out.


The research and writing of this paper were supported by grant #1 R01 HG003042-01 from the National Institutes of Health. I am indebted to Pilar Ossorio, Fred Harrington, and especially Norm Fost for specific comments, criticisms, and suggestions.

Daniel M. Hausman, PhD, is the Herbert A. Simon Professor, Department of Philosophy, University of Wisconsin-Madison, Madison, WI.


1. Hausman DM. Group risks, risks to groups and group engagement in genetics research. Kennedy Institute of Ethics Journal, forthcoming.

2. Kimmelman J. Medical research, risk, and bystanders. IRB: Ethics & Human Research 2005;27(4):1-6.

3. Patai R. The Arab Mind. Revised edition. New York: Hatherleigh Press, 2002.

4. Pryce-Jones D. The Closed Circle: An Interpretation of the Arabs. Chicago, IL: Ivan R. Dee, 2002.

5. Bok S. Freedom and risk. In: Holton G, Morison R. Limits of Scientific Inquiry. New York: W.W. Norton, 1979, p. 115-127.

6. Resnik D, Sharp R. Protecting third parties in research. IRB: Ethics & Human Research 2006;28(4):1-7.

7. CIOMS. International Ethical Guidelines for Biomedical Research Involving Human Subjects. Geneva, 2002.

8. See ref. 6, Resnik and Sharp 2006, p. 5.

Daniel M. Hausman, “Third-Party Risks in Research: Should IRBs Address Them?” IRB: Ethics & Human Research 29, no. 3 (2007): 1-5.