
Bioethics Forum Essay

Gene Editing, “Cultural Harms,” and Oversight Mechanisms

Is it reasonable to hope that concerns about “cultural harms” can be integrated into oversight mechanisms for technologies like gene editing? That question was raised anew for me by the recent National Academy of Sciences report on human genome editing and by a recent conference at Harvard on the international governance of genome editing technologies. I’m somewhat disheartened to be thinking that the answer might be no.

Before explaining how I ended up in what is, for me, a disheartening place, I should clarify what I take the authors of the NAS report to mean by the term “cultural harms.” First, they were not emphasizing that concerns about emerging technologies can vary from culture to culture or from nation to nation. They weren’t talking about how, say, Samoans and Singaporeans hold different values, and about how such differences might make international governance difficult.

They were using “cultural harms” in contradistinction to what we might call “physical harms.” When we worry about physical harms we worry that a technology is going to fail at achieving some near-term purpose we take to be good–as when, for example, we use an intervention with the aim of curing someone, but it kills them instead. The Food and Drug Administration is an oversight mechanism that seeks to protect us from such physical harms.

When, on the other hand, we worry about non-physical, or cultural, harms, we worry that a technology will succeed at achieving some near-term purpose that we take to be good, but that it will also, inadvertently, produce some long-term effect that we take to be bad–and that effect won’t be just to people’s bodies, but also to their prospects for leading good, meaningful lives. People who worry about such things worry that, for example, while germline enhancement might indeed improve a particular child’s performance in a specific activity, the proliferation of such interventions won’t promote the sorts of environments in which children and families can flourish. Or, for example, they worry that such interventions will put us on a slippery slope to an ugly form of eugenics. Unlike concerns about harms to bodies, concerns about the flourishing of persons are not within the remit of any oversight mechanism like the FDA.

To explain my question about the reasonableness of trying to integrate concerns about cultural harms into oversight mechanisms, I offer a little historical background. In 1980, three religious leaders wrote a letter to then-President Jimmy Carter and suggested that “genetic engineering” needed “oversight.” They wrote:

“History has shown us that there will always be those who believe it appropriate to ‘correct’ our mental and social structures by genetic means, so as to fit their vision of humanity. This becomes more dangerous when the basic tools to do so are finally at hand. Those who would play God will be tempted as never before.” (pp. 95-96)

Because I am not religious in any conventional sense, I would not have been inclined in 1980 (nor would I be today) to use the phrase “playing God.” But if I had been aware of this letter in 1980, I would have been pleased to hear people struggling to articulate concerns that are about more than physical harms. I could have easily translated that last sentence into something like, “Those who are prone to hubris will be tempted to be hubristic as never before,” and I would have interpreted their letter to be a plea for deeper thought about cultural harms that are as pertinent to secular folk as to religious ones.

We who lament the failure to integrate concerns about cultural harms into oversight mechanisms sometimes lay that failure at the feet of the presidential bioethics commission that responded to the letter of those three religious leaders. The commission’s response took the form of the first national report on genetic engineering, Splicing Life, published in 1982. That report was not persuaded that concerns about cultural harms should be taken seriously by oversight mechanisms. Its tone was, in fact, rather dismissive of such concerns.

Indeed, when the authors of the recent NAS report on human genome editing looked back and described Splicing Life, they observed that their predecessors took the concerns in the letter from the three religious leaders and “reformulated the ethical debate so that the report would be ‘meaningful to public policy consideration.’” (italics added, p. 120) Further describing the work of their predecessors, the NAS authors write:

To make ethical claims legally actionable meant [that the authors of Splicing Life moved] away from arguments about future cultural harms or claims that it is not the role of humanity to modify itself. Consequences needed to be more concrete and near-term, not speculative (italics added, p. 120).

By contrast, the NAS report suggests that the prospect of germline genetic enhancement raises “broader and longer-term effects” that warrant consideration. For somebody like me, who thinks that cultural harms warrant attention, that sounds like good news. The NAS report doesn’t relegate concerns about broader and longer-term social or cultural effects to the land of Luddites and religious kooks and people with hyperactive amygdalas.

After the initial delight of seeing the endorsement of social or cultural concerns, however, I was, at least at first, disappointed to see that the NAS didn’t say much about how such concerns might be integrated into oversight mechanisms. Instead, it recommended “engaging the public”—which can begin to sound more like the incantation of a mantra than a piece of advice.

I understand that there are many reasons to engage the public, including that it is a way to show respect and to achieve legitimacy. But one of those reasons is not that members of the public will articulate new concerns about cultural harms, or that they will articulate them more persuasively than has already been done by, say, Sheila Jasanoff, or Adrienne Asch, or Dorothy Roberts, or Michael Sandel, or Jürgen Habermas, or any of myriad other incisive observers.

Could it be that the NAS report doesn’t say how to integrate concerns into oversight mechanisms, not only because the authors don’t know how, but also because it just isn’t clear that there is a good way to do that? After all, when the President’s Council on Bioethics took up questions concerning enhancement in its 2003 report Beyond Therapy, it identified multiple “cultural harms,” but it did not recommend that we integrate those concerns into oversight mechanisms. The report’s sole policy recommendation was that, with regard to the prospect of enhancement, the public should move forward with “its eyes wide open” (p. 310).

Maybe it never was reasonable for me, or for the three religious leaders who wrote to President Carter, to imagine that we could integrate concerns about cultural harms into oversight mechanisms. It could be that cultural concerns—concerns about effects on complex systems—aren’t amenable to being controlled by mechanisms of our making.

So here is a slightly less disheartening way of framing the answer to my original question about whether cultural harms can be integrated into oversight mechanisms. Maybe those religious leaders were right in 1980 to suggest that concerns about cultural harms warrant attention, but were wrong to envision that such concerns could be managed and controlled by creating the right oversight mechanisms. Or, to put it another way: maybe the 1982 presidential commission was wrong to dismiss concerns about cultural harms, but right to think that we shouldn’t try to integrate them into oversight mechanisms.

This is one of those few circumstances in which I would be pleased to learn that my answer is wrong. If it’s right, we who take concerns about cultural harms seriously have a lot more work to do to say what taking them seriously actually entails.

Erik Parens is a senior research scholar at The Hastings Center. This essay is adapted from a talk he gave at Editorial Aspirations: Human Integrity at the Frontiers of Biology at Harvard on April 27.

  1. Erik-
    I like that you referenced the work of prior Commissions in dealing with this question. I get that you want cultural harms to be somehow incorporated in whatever gene editing governance mechanisms emerge, but what I don’t get is what that would actually mean. Can you give a few examples of principles, guidelines, or rules that might fit the bill, even if they aren’t necessarily something you’d personally support?
