
Bioethics Forum Essay

National Research Act at 50: An Ethics Landmark in Need of an Update

On July 12, 1974, President Richard M. Nixon signed into law the National Research Act, one of his last major official actions before announcing his resignation on August 8. He was preoccupied by Watergate at the time, and there has been speculation about whether he would have signed the act under less stressful circumstances. But enactment of the NRA was a foregone conclusion. After a series of legislative compromises, the Joint Senate-House Conference Report was approved by bipartisan, veto-proof margins in the Senate (72-14) and House (311-10).

The NRA was a direct response to the infamous Untreated Syphilis Study at Tuskegee, whose existence and egregious practices, disclosed by whistleblower Peter Buxtun, were first reported by Associated Press journalist Jean Heller in the Washington Star on July 25, 1972. After congressional hearings exposing multiple research abuses, including the Tuskegee syphilis study, and legislative proposals in 1973, support coalesced around legislation with three main elements: (1) directing preparation of guidance documents on broad research ethics principles and various controversial issues by multidisciplinary experts appointed to a new federal commission, (2) adopting a model of institutional review boards, and (3) establishing federal research regulations applicable to researchers receiving federal funding.

This essay reflects on the NRA at 50. It traces the system of research ethics guidance, review, and regulation the NRA established; assesses how well that model has functioned; and describes some key challenges for the present and future. We discuss some important substantive and procedural gaps in the NRA regulatory structure that must be addressed to respond to the ethical issues raised by modern research.  

Ethical Guidance

The NRA established the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The commission was originally proposed as a permanent entity to provide ongoing ethical guidance, but, in a compromise, it was authorized for less than three years. Among other things, the 11-member commission was directed to “identify the basic ethical principles which should underlie the conduct of biomedical and behavioral research involving human subjects [and to] develop guidelines…to assure that it is conducted in accordance with such principles.”

The commission was specifically tasked with considering several contentious issues, some of which remain significant concerns. These include fetal research; psychosurgery; the boundaries between medical research and medical practice; the criteria for assessing risks and benefits for research participants; and informed consent for research involving children, prisoners, and individuals in psychiatric institutions.

The commission’s preeminent members and exemplary staff were extremely productive, and their work products were–and remain–highly influential. For example, commission reports on research with children and prisoners figured prominently in federal regulations. Its best-known work product, the Belmont Report, identified the basic ethical principles and guidelines for research with human subjects as directed by the NRA.

Working in subcommittees and consulting with bioethicists Tom L. Beauchamp and James F. Childress, the commission sought to identify the principles that would reflect the shared values of a diverse population. The commission initially identified seven principles, which later were reduced to the well-known three: respect for persons (honoring participant autonomy, privacy, informed consent), beneficence (requiring minimization of risks and maximization of benefits), and justice (entailing equal distribution of research burdens and benefits and protecting vulnerable populations).

The approach of the Belmont Report became known as “common morality principlism,” a term often used dismissively by critics. These critics argue that the approach focuses too much on individuals and not enough on communities; in short, that it is too U.S.-centric. In addition, the approach does not rank-order the principles or indicate how they should be applied, particularly when they conflict.

Despite these criticisms, the principles have endured for 50 years. The universal appeal of this approach is illustrated by its prominent place in U.S. regulations governing human subjects research and in international research ethics, and by the continued reliance on the principles as valuable guideposts for research ethics analysis by researchers, bioethics scholars, and the public. Beauchamp and Childress have further explored the application of the principles through eight editions of their landmark book, Principles of Biomedical Ethics. In April 1978, as the commission was winding down its work, Willard Gaylin, co-founder and president of a nascent bioethics think tank later known as The Hastings Center, was quoted in the New York Times: “They [the commission members] deserve the compliments and gratitude of all of us in the field.”

In subsequent years, the public commission model of addressing difficult bioethics issues has been used repeatedly in the U.S. Six federal bioethics commissions or similar entities have been created to address such issues as research using stem cells, somatic cell nuclear transfer, radiation experiments, and human enhancement. However, such commissions have been ad hoc, and, since 2017, there has been no comparable body to address the numerous problematic bioethics issues of today or the future.

Institutional Review Boards

The NRA required entities applying for grants or contracts involving biomedical or behavioral research with human subjects to demonstrate they had an institutional review board to review the research and “protect the rights of the human subjects of such research.”

Many research institutions already had local IRBs by 1974, and researchers preferred local review to the federally directed review model used in many other countries. Perceived advantages of local IRBs included their knowledge of potential participant communities, researchers, institutional research, social mores, and applicable laws. The NRA formalized and expanded IRB review by mandating it for all federally conducted or funded research. According to a study by the Government Accountability Office, as of 2023 there were approximately 2,300 IRBs, most of them affiliated with universities or health care institutions. But there are also many independent, primarily for-profit, IRBs, which have had the largest increase in protocol reviews, a trend likely accelerated by the move to single IRB review, described below.

Traditional IRBs based at universities and health care institutions have inherent conflicts of interest because, in addition to having an interest in assuring the well-being of research participants, the institution also has a financial and professional interest in expeditious approval of the protocols supported by external funding. IRB members and administrators may feel pressured to approve submissions. For-profit IRBs also have conflicts of interest because repeat business depends on their being easier, faster, and presumably more favorable alternatives to university or health care IRBs.

Among the most important recent changes to IRB review is the requirement, effective in 2020, that NIH-funded multisite and cooperative research use single (or central) IRB review. This process is designed to eliminate duplicative and sometimes inconsistent IRB reviews and to expedite the review process. It is available to all IRBs, including commercial IRBs, that are registered with the Office for Human Research Protections. It remains to be seen whether this new procedure will achieve the goals of consistency and expediency.

Despite 50 years of experience, assessing and improving the quality of IRB reviews remains challenging. IRBs must have a minimum of five members, and large institutions typically have multiple, much larger committees. Thus, based on the GAO estimate mentioned previously, U.S. IRBs have a minimum of 11,500 members, plus professional staff. Reviews are rarely shared with IRBs outside the institution. Public Responsibility in Medicine and Research (PRIM&R), a nonprofit organization that provides educational services to researchers and research administrators, was founded in 1974. Since 1999, it has offered a certification process for IRB officials. However, IRB service is burdensome and often uncompensated, and many IRB members do not take advantage of PRIM&R education.

The Association for the Accreditation of Human Research Protection Programs, an independent, nonprofit, voluntary organization founded in 2001, uses a peer-review process to accredit IRBs. It reports that approximately 60% of U.S. research-intensive universities and medical schools have been accredited or have begun the accreditation process. Although AAHRPP accreditation requires institutions to assess the quality of their reviews, there are no clear criteria for doing so. Finally, OHRP and the Food and Drug Administration conduct on-site inspections, which may be routine or for cause (e.g., in response to a complaint). According to the GAO, only a small fraction of IRBs are inspected annually. It is also not clear how effective inspections are in preventing or remediating substandard practices.

Federal Research Regulations–the “Common Rule”

The NRA directed the secretary of the Department of Health, Education, and Welfare (now the Department of Health and Human Services) to promulgate the regulations necessary to carry out IRB review. On June 18, 1991, final regulations were published in the Federal Register. The regulations specify the composition and operations of IRBs and, incorporating the Belmont principles, the criteria for their review. The policy became known as the Common Rule because it was adopted by 15 federal departments and agencies.

Since the NRA was enacted, IRB review and compliance with the Common Rule have been mandatory only for federally funded or conducted research. This framework has proven inadequate. Although many universities and health care institutions voluntarily apply the Common Rule to research that is not federally funded, not all do. A few states, notably Maryland and Virginia, have laws that apply the Common Rule standard to all research, but there is little enforcement. Differences in other state laws may result in substantive protections for some research participants, but not others. This patchwork of voluntary compliance and state laws is not up to the task of protecting the welfare of research participants, especially now when online data is exploding, research increasingly is multisite and multistate, and research is no longer confined to universities and health care institutions.

The Common Rule has several other substantive limitations. One of them is the exclusion of deidentified information and biospecimens from protection. Increasingly sophisticated computer technology can reidentify individuals from records and specimens. The definitions of both “identifiable private information” and “identifiable biospecimens” turn on whether identity is “readily ascertainable.” This means that if the identity of information or biospecimens is not readily apparent, then they are deemed unidentifiable and the research falls outside the scope of the regulations, even if the identity can be discovered by more complex techniques. By contrast, the Health Insurance Portability and Accountability Act privacy rule uses a much more stringent standard for deidentification and lists 18 identifiers that must be removed.

Another important limitation of the Common Rule is that it prohibits IRBs from considering “possible long-range effects of applying knowledge gained in the research (e.g., the possible effects of the research on public policy)” in assessing research risks. Thus, IRBs can consider only the direct effects of the research on participants and must ignore the larger societal implications, including the impact on groups. A new international study of 22 countries found that the U.S. is the only one that prohibits its research ethics review bodies from considering the societal implications of research.

Conclusion

On the 50th anniversary of the NRA, it is evident that the act needs to be updated.

First, there should be a standing national public bioethics body to study and report on emerging issues such as gene therapy, artificial intelligence, xenotransplants, and brain-computer interfaces. Such a body would provide necessary guidance in a continuously and rapidly changing scientific environment.

Second, additional efforts are required to assess and improve IRB quality. Single IRB review may mitigate some of the unresolved conflicts of interest inherent in locating research ethics review bodies at the institutions submitting research protocols. But problems remain, since the IRB likely either will be located at the institution receiving the grant (and therefore will have an incentive to approve research proposals) or it will be a for-profit IRB (and therefore will have an incentive to expedite favorable reviews to get repeat business). In addition, there is negligible oversight of IRB decisions and operations, with accreditation and training largely by voluntary, private organizations.

Third, the Common Rule should be expanded and strengthened. There was a missed opportunity to do this in the 2018 revisions. Although HHS initially proposed expanding the Common Rule’s coverage to all research, the Final Rule retained coverage only for federally funded or conducted research. Arguably, such an expansion would exceed the authority afforded by the NRA. But HHS did not submit a recommendation to Congress to authorize this expansion, nor did it notify Congress that there was a problem that should be addressed. Similarly, despite initial proposals, the revised Common Rule failed to add any protections for minimally deidentified information or specimens, retaining a standard that is significantly less protective than the HIPAA privacy rule.

Fifty years ago, the first steps were taken to impose deliberative processes and order on American biomedical research. These actions, however, were not complete, and time and changed circumstances have increased the gap between the NRA’s regulatory system and what is needed for well-considered and coordinated research regulation. It’s time for the research ethics community, researchers, and policymakers to take the next steps to update the actions begun on July 12, 1974.     

Mark A. Rothstein, JD, is Director of Translational Bioethics, Institute for Clinical and Translational Science at the University of California, Irvine. He is a Hastings Center fellow.

Leslie E. Wolf, JD, MPH, is the Ben F. Johnson Jr. Chair in Law and Distinguished University Professor at Georgia State University. @LeslieWolfGSU


Bioethics Forum Essay

Clinical Ethics and a President’s Capacity: Balancing Privacy and Public Interest

The Biden Administration is struggling with a dilemma that has a clinical ethics component. Where does the President’s right to privacy about his health end and the public’s right to know begin? This question has recurred throughout American history and, unfortunately, has often been answered the wrong way–with deception. Clinical ethics norms and recent legal precedent offer important insights for responding to this ethical dilemma with much-needed transparency in a way that respects all parties involved.

Throughout his presidency, President Biden has been compared to both Franklin Delano Roosevelt and Lyndon Baines Johnson in terms of his legislative successes and effectiveness. Ironically, both FDR’s and LBJ’s presidencies led to critical constitutional amendments surrounding a president’s capacity to serve.

In 1951, the 22nd Amendment, which limits presidents to only two terms, was ratified in part to avoid this exact clinical ethics dilemma: preventing a “fourth term Roosevelt” scenario of a president in declining health, seeking re-election. In 1944, the public was alarmed by President Roosevelt’s visible aging when he sought a fourth term. Roosevelt’s oratorical skills were still strong, so he was able to rally the public behind a fourth run for office, but his appearance at the Yalta conference in 1945 (though he was only 63) revealed his terrible physical condition, prompting further alarm given the consequences of a premature death at a critical moment near the end of World War II. Indeed, Roosevelt died suddenly two months after Yalta, on April 12, leaving his successor, President Harry Truman, in the dark on critical issues, such as the atomic bomb program.

LBJ chose not to run for re-election while in declining health. In addition, the start of his term–in the wake of the assassination of his predecessor, John F. Kennedy–led to the ratification in 1967 of the 25th Amendment to more clearly outline what to do in the event of a president’s sudden death or incapacity.

I published an opinion piece in my newsletter on July 5 about the options available to the Biden Administration and used clinical ethics as a frame, discussing the 25th Amendment as essentially a clinical ethics document. I outlined three options: (a) President Biden voluntarily leaves office or steps down as the nominee, which protects his privacy, (b) President Biden is cleared for fitness by an independent medical assessment released to the public with his consent in order to assuage public concern, or (c) the Biden administration veers into 25th Amendment territory by arranging for an independent medical assessment of the President against his wishes.

Since then, an informal capacity assessment of President Biden played out on television screens in an interview the President did on July 6 with journalist George Stephanopoulos. A transcript raised the question of whether the President fully understood his debate performance. When asked if he had watched a recording of the debate, Biden responded, “I don’t think I did, no . . . And so, I just had a bad night. I don’t know why.”

From a clinical ethics perspective, President Biden has the right not to know why he struggled in the debate (or even, as reported, a recurring pattern of cognitive decline), and if he does find out, then he has the right to keep that knowledge private. However, President Biden is not a typical patient. From a governance perspective, there is a good clinical reason for President Biden to find out why he had such a bad night, as the implications of an undiagnosed condition may override the President’s personal preference to decline to be assessed at this time. There ought to be limits to a presumptive presidential nominee’s autonomy, and defining those limits could point toward a solution to this dilemma.

Concealment of a president’s health status has some moral defense from a national security perspective, particularly during a war. But this position only works when there are no outward signs of a health problem. If a health condition is on full public display, with objective and overt clinical symptoms, it would be ethically imperative to be transparent about the President’s condition. Some physicians, such as Sanjay Gupta, have called for neurological workups and public disclosure. Ezekiel Emanuel has noted that, even in the absence of any underlying condition other than aging, President Biden has clearly lost some of his cognitive abilities. Investigative reporting led to confusing facts about how often the President was seen by a neurologist, prompting the White House physician to explain these visits in a letter, confirming that, aside from an annual physical, the President has not had any recent neurological workup.

Major global consequences resulting from an American president’s illness are part of our history. In 1919, when Woodrow Wilson was negotiating the Treaty of Versailles, he had likely already suffered several mini-strokes and was ill with the 1918 influenza. Historians note that he was compromised in these World War I negotiations, which, some argue, contributed to the rise of Nazi Germany. FDR was not his optimal self at Yalta in 1945, either; historians wonder whether this led to a suboptimal negotiation about how to divide Europe at the end of World War II.

Currently, there is absolutely no protocol for how White House physicians–including President Biden’s physician–should balance a president-patient’s privacy, mandated by the Health Insurance Portability and Accountability Act (HIPAA), and the public’s right to know the health status of their sitting president. There are circumstances in which there is a clear ethical “duty to warn.” In the clinical context, the legal and ethical duty to warn identifiable third parties of foreseeable harm was established in Tarasoff v. Regents of the University of California, in which the court held that a patient’s confidentiality or doctor-patient “protective privilege ends where the public peril begins.” In Tarasoff, the failure to warn a woman about premeditated homicide by her boyfriend, who had confided the plan to his University of California psychologist in 1969, led to a new standard for warning third parties who wittingly or unwittingly may be victims when a patient is an agent of harm. This case established the role of mandated reporting in the psychosocial context.

The Tarasoff case provides guidance regarding the ethical duty to warn, which extends into several health contexts, including infectious disease (e.g., partner notification of HIV), genetics (warning at-risk relatives for serious inherited diseases that are autosomal-dominant), and impaired driving. With respect to impaired driving, health care providers may set aside HIPAA confidentiality when they have a duty to warn the Department of Motor Vehicles about medically or cognitively compromised drivers in the interest of public safety. In fact, failure to warn can expose physicians to litigation by a harmed party. The duty to warn rests with the treating physician, but so does the duty to verify fitness to serve.

A president’s annual physical is supposed to verify fitness to serve, but when a president’s condition becomes alarming, an explanation to the public is ethically obligatory. To balance the president’s medical privacy and the public’s right to know, the president should be allowed time to make a decision about public disclosure of a medically disqualifying condition. But should he (or his administration) decline to disclose it, then the president’s physician is ethically permitted to disclose his medical status.

Legitimate ethical questions can be raised about whether any president–as a “celebrity patient”–is actually a more vulnerable patient because physicians may be less likely to tell the patient the truth, order necessary tests, or refer the patient for appropriate further evaluation due to VIP syndrome and subjective political considerations. VIP syndrome can also lead to conflicts of commitment or conflicts of interest. In 2018, White House physician Dr. Ronny Jackson actually told the public that the President might even live until he was “200 years old.” In 2024, new reports confirm that Kevin O’Connor, the current President’s physician, is a friend of the Biden family.

Throughout American history, there has been a longstanding pattern of physicians deceiving the public about presidents’ health. Examples include Grover Cleveland’s secret cancer surgery in 1893, Woodrow Wilson’s massive stroke in 1919, FDR’s cardiovascular disease, John F. Kennedy’s health issues and his “Dr. Feelgood”, Ronald Reagan’s early signs of dementia, and Donald Trump’s declining oxygen levels when he had Covid. The Biden Administration should end this practice of concealment by providing the public with a truthful assessment of the President’s health status given the staggering consequences of this election and the potential peril facing the country.

M. Sara Rosenthal, PhD, is Professor and Founding Director of the University of Kentucky Program for Bioethics and the Oncology Ethics Program and Chair of the UK Healthcare Ethics Committee.


Bioethics Forum Essay

Access to Pediatric Assistive Technology: A Moral Test

Most of us have a weakness for a donut and coffee in the morning. But not everyone places their order in the same way. One young man we know uses an application on his iPad to communicate his preferences, including his predilection for a chocolate-frosted donut and an iced coffee with almond milk. This device allows him to express himself independently just like everyone else.

For this individual and other people with disabilities, augmentative and alternative communication (AAC) devices facilitate communication, which, as we have argued, helps constitute community and societal integration. AACs encompass a range of technologies, such as tablet applications and eye-gaze devices. For some individuals, these devices supplement another form of communication, such as speech or sign language; for others, AACs are their singular means of connecting with the world beyond them.  

Over the past several years, the Division of Medical Ethics at Weill Cornell Medical College and Blythedale Children’s Hospital (BCH) have collaborated to track the process by which children with brain injury and their families acquire access to assistive technology (AT). Our goal was to map a byzantine process that heretofore had never been charted. We hoped to identify bottlenecks and promote quality improvement for children with disabilities.

Our previous analysis drew from BCH medical records over a two-year period and included 72 children with brain injury who received a prescription for at least one assistive device. Despite the multitude of resources and remarkable clinical expertise available at BCH, we found that only 55% of devices were delivered. Furthermore, the average time to delivery was 69.4 days, with a range of 12 to 250 days. The device with the longest time to delivery was a special needs car seat, a technology that quite literally provides a child with access to the surrounding community.

We recently met to continue our research. At a multidisciplinary team meeting, we learned of the process by which a child acquires an AAC device. It’s a maddening process. First, the clinician, often a speech-language pathologist, identifies a need and determines what sort of device would be best suited to assist with communication. Then they prescribe an appropriate device. To ensure that this is money well spent, the insurance company requires a one-month trial of the device before it is approved. And then the vendor supplies the device, as required by the insurance company.

Makes sense, right? But now illogic and, we would say, cruelty creeps in. After a successful trial of the device–an intervention that will help a child communicate with their family or go to school–the device is taken away by the insurance company for the duration of the approval process.

Let us reiterate what happens. For one month, the child is given access to the AAC device that provides them with a previously unavailable mode of communication. For one month, they can communicate their wishes to their parents and siblings, respond to their teacher’s question in the classroom, or make new friends at the playground. And during that one month, they grow and develop, as children are prone, and ought, to do. Maybe they learn to tell jokes, read aloud, or order their favorite breakfast beverage independently.

In 2023, clinicians at BCH prescribed 18 AAC devices. Each of the devices was deemed eligible for a particular child and approved by an insurance company for coverage. However, despite the success of the one-month trial period and subsequent insurance approval, the children had to wait to get their devices. The average time to delivery was nine weeks, with a range of one week to five months. The device with the longest time to delivery was an eye-gaze device. This AAC helps individuals with motoric disabilities communicate.

These delays to delivery are significant. Education literature suggests that first through eighth grade students lose between 17% and 28% of their English language arts skills and 25% to 34% of their math skills during the three-month summer vacation. While the “summer slide” experienced by typically developing children is concerning, one can only imagine the devastating impact of delays for children who rely on access to assistive technology. For a child who waits nine weeks, it’s the loss of nearly a whole summer. When the delay is five months, that’s a couple of summers. Furthermore, children with cognitive or speech disabilities can miss critical neurological milestones when they are unassisted. This compounds the effect of a delay and may lead to repercussions with enduring ill effects.

The illogic of this delay leaves us speechless. All the more so because the data from BCH reveals that all the patients who demonstrated improvement during the one-month trial ultimately received their devices. So why the wait? Why impede their development? And why the cruelty? After these children are given the keys to communication, these keys are taken away. The door is locked, and their world goes dark. What had been an opportunity for community and reciprocity is now one for segregation and isolation. How can this be right?

To delay these benefits is especially paradoxical because there is a small but growing neuroethics literature arguing in favor of post-trial obligations following device trials that have benefitted study participants, whom Goering et al. characterize as “pioneers.” The question of post-trial obligations for as yet unproven devices is now the focus of grants funded by the BRAIN Initiative. This funding priority represents a normative argument for continued access to investigational devices. In the context of AAC, we are delaying access to devices that have already been proved therapeutically effective.

Beyond the ethics, we contend that these delays are also a matter of law. As we have written, the Americans with Disabilities Act (ADA) mandates maximal societal integration for individuals with disabilities. Title IV of the ADA outlines access to assistive technology, naming telecommunications devices for the deaf, also known as teletypewriters (TTY). In 1990, when the ADA became law, TTY was the primary assistive device for communication. With progress in electronics and neuroscience, communication devices have advanced far beyond TTYs. Because of this progress, we must not be stuck in a purely textual reading of the ADA that limits access to more modern technologies.

These advances remind us of the Deweyan aphorism that speaks to how technological progress can expand our moral horizons. In Common Sense and Scientific Inquiry, Dewey wrote, “Inventions of new agencies and instruments create new ends; they create new consequences which stir men [all people] to form new purposes.” So it is here. We are compelled to use the marvels of modern AT to serve some of the most vulnerable among us. It would violate the spirit of the law, and its normative implications, to eschew novel technologies that could further remediate the segregation of people with disabilities.

Through our collaboration with BCH, we have seen the dedication of hospital administrators, clinicians, and therapists providing excellent care to children and their families. Among their robust services–inpatient and outpatient care and a state-accredited public school–is the specialty AT clinic, which provides loaner devices to help bridge the gap between the trial period and the arrival of the device. However, even Blythedale does not have the resources to make eye-gaze devices (which can cost $15,000 or more) available during the waiting period, especially if it lasts five months.

And what of the children and families who never have access to the specialty care and advocacy that BCH offers? This is a deeper level of inequity that transcends the technology and speaks to broader systems of care. For these children, delays risk becoming denials. This is something society should neither allow nor accept.

Former Vice President Hubert H. Humphrey reminds us, “It was once said that the moral test of government is how that government treats those who are in the dawn of life, the children; those who are in the twilight of life, the elderly; and those who are in the shadows of life, the sick, the needy and the handicapped.” These words, now enshrined in marble in the Department of Health and Human Services building that bears Humphrey’s name, should be ensconced in policy to give voice to the voiceless.

Anything less is not worthy of us and is a violation of civil rights.

Kaiulani S. Shulman, B.A., graduated from Yale College with distinction in religious studies. She is a research assistant in the Division of Medical Ethics at Weill Cornell Medical College and will start medical school in the fall.

Joseph J. Fins, M.D., D. Hum. Litt. (hc), M.A.C.P., F.R.C.P., is the E. William Davis Jr. M.D. Professor of Medical Ethics, a professor of medicine and chief of the division of medical ethics at Weill Cornell Medical College; Solomon Center Distinguished Scholar in Medicine, Bioethics and the Law and a Visiting Professor of Law at Yale Law School; and a member of the adjunct faculty at the Rockefeller University. He is a Hastings Center fellow and chair of the Center’s board of trustees.

Acknowledgements:

The authors acknowledge the support of a pilot award from the Weill Cornell Medical College Clinical & Translational Science Center, “Assistive Technology in Pediatric Brain Injury Following In-patient Rehabilitation: Access, Barriers and Burdens on Patients and Families” [UL1TR002384] and the Blythedale Children’s Hospital, and the Monique Weill-Caulier Charitable Trust. We would like to acknowledge the collegiality and insights of the Assistive Technology in Brain Injury research team, including colleagues Debjani Mukherjee, Linda Gerber, and Jennifer Hersh from Weill Cornell Medical College and Barbara Donleavy-Hiller, Karen Conti, Julie Knitter, Rita Erlbaum, Marnina Allis, Linda Fieback, William Watson, as well as the late Barbara Milch from Blythedale Children’s Hospital. We are especially grateful for the visionary leadership of Larry Levine, President and CEO of Blythedale Children’s Hospital.


Bioethics Forum Essay

Griefbots Are Here, Raising Questions of Privacy and Well-being

Hugh Culber is talking to his abuela, asking why her mofongo always came out better than his even though he is using her recipe. She replies that it never came out well and she ended up ordering it from a restaurant. While it is touching, what makes this scene in a recent Star Trek: Discovery episode so remarkable is that Culber’s abuela has been dead for 800 years (it’s a time travel thing) and he is conversing with her holographic ghost as a “grief alleviation therapeutic.” One week after the episode aired in May, an article reported that science fiction has become science fact: the technology is real.

AI ghosts (also called deathbots, griefbots, AI clones, death avatars, and postmortem avatars) are large language models built on available information about the deceased, such as social media, letters, photos, diaries, and videos. You can also commission an AI ghost before your death by answering a set of questions and uploading your information. This option gives you some control over your ghost, such as excluding secrets and making sure that you look and sound your best.

AI ghosts are interactive. Some of them are text bots, others engage in verbal conversations, and still others are videos that appear in a format like a Zoom or FaceTime session. The price of creating an AI ghost varies around the world. In China, it’s as low as several hundred dollars. In the United States, there can be a setup cost ($15,000) and/or a per-session fee (around $10).

Although simultaneously fascinating and creepy, these AI ghosts raise several legal, ethical, and psychological issues.

Moral status: Is the ghost simply a computer program that can be turned off at will? This is the question raised in the 2013 episode of Black Mirror, “Be Right Back,” in which Martha, a grieving widow, has an AI ghost of her husband created and later downloads it into an artificial body. She finds herself tiring of the ghost-program because it never grows. The AI robot ends up being kept in the attic and taken out for special occasions.

Would “retiring” an AI ghost be a sort of second death (death by digital criteria)? If the ghost is not a person, then no, it would not have any rights, and deleting the program would not cause death. But the human response could be complicated. A person might feel guilty about not interacting with the griefbot for several days. Someone who deletes the AI might feel like a murderer.

Ownership: If the posthumous ghost was built by a company from source material scraped from social media and the internet, then it’s possible that the company would own the ghost. Survivors who use the AI would merely be leasing it. In the case of a person commissioning their own AI before death, the program would likely be their property and could be inherited as part of their estate.

Privacy and confidentiality: If Culber tells AI abuela that he altered her recipe, that information might be collected, and owned, by the AI company, which may then program it into other AIs or even reproduce it in a cookbook. The AI abuela could also be sold to marketing companies: Culber’s abuela may try to sell him ready-to-eat mofongo the next time they interact.

AIs are built, in part, on the questions we ask and the information we share. What if Martha’s daughter tells her AI dad that she wants a particular toy? Martha could find a bill for that toy, ordered by the ghost without her knowledge. Modern social media is all about collecting data for marketing, so why would a griefbot be any different?

Efficacy: Culber said that talking to his abuela’s “grief alleviation therapeutic” was helpful to him. Martha eventually found that the AI android of her husband was a hindrance, preventing her from moving on. Would today’s AI ghosts be a help or a hindrance to the grieving process?

Some researchers have suggested that we could become dependent on these tools and that they might increase the risk of complicated grief, a psychological condition in which we become locked in grief for a prolonged period rather than recovering and returning to our lives. Also consider a survivor who had been abused by the deceased and later encounters this person’s AI ghost by chance, perhaps through marketing. The survivor could be retraumatized—haunted in the most literal sense. On the other hand, in my study of grieving and continuing bonds, I found that nearly 96% of people engage with the dead through dreams, conversations, or letters. The goal of grieving is to take what was an external relationship and reimagine it as an internal relationship that exists solely within one’s mind. An AI ghost could help reinforce the feeling of being connected to the deceased person, and it could help titrate our grief, allowing us to create the internalized relationship in small batches over an extended time.

Whether AI ghosts are helpful or harmful may also depend on a survivor’s age and culture. Complicated grief is the more likely outcome for children who, depending on the developmental stage, might see death as an impermanent state. A child who can see a parent’s AI ghost might insist that the parent is alive. Martha’s daughter is likely to feel more confused than either Martha or Culber. As a Latine person for whom Día de los Muertos is part of the culture, Culber might find speaking with the dead a familiar concept. In China, one reason for the acceptance of AI ghosts might be the tradition of honoring and engaging with one’s ancestors. In contrast, the creepiness that Martha feels, and that I share, might arise from our Western cultures, which draw a comparatively fixed line between living and dead.

A recent article suggests guidelines for the ethical use of griefbots, including restricting them to adult users, ensuring informed consent (from people whose data is used, from heirs, and from mourners), and developing rules for how to retire the griefbots. We must also be wary of unethical uses: engaging in theft, lying, and manipulation. AIs have already been used to steal billions.

Our mourning beliefs and practices have changed over time. During the Covid pandemic, streamed funerals were initially seen as odd, but now they seem like a normal option. A similar trajectory to public acceptance is likely to happen with deathbots. If so, individuals should be able to choose whether to commission one of themselves for their heirs or to create one of their deceased loved ones.

But as a society we must decide whether the free market should continue to dominate this space and potentially abuse our grief. For example, should companies be able to create AI ghosts and then try to sell them to us, operating like an amusement park that takes our picture on a ride and then offers to sell it to us when we disembark? Perhaps griefbots should be considered therapeutics that are subject to approval by the Food and Drug Administration and prescribed by a mental health professional. The starting point should be clinical studies on the effect this technology has on the grieving process, which should inform legislators and regulators on the next steps: to leave AI ghosts to the marketplace, to ban them, or to regulate them.

Craig Klugman, PhD, is the Vincent de Paul Professor of Bioethics and Health Humanities at DePaul University. @CraigKlugman


Bioethics Forum Essay

Finding the Signal in the Noise on Pediatric Gender-Affirming Care

The Cass Review of gender identity services for children and young people, a recent report in the U.K., has spurred many headlines and much debate, most of which tout the finding of “weak evidence” for gender-affirming care for children and teenagers. Advocates of such care reject the report as anti-trans, while critics of the care hail the report for finally outing it as pseudoscience. However, much of the noise around gender-affirming care in pediatrics, and this report, is misleading and takes away from the substantive improvements needed to provide the best care for transgender youth, something noted in the report’s thoughtful foreword.

The Cass report was commissioned by the U.K.’s National Health Service to make recommendations on improving care for children and young people who are questioning their gender identity or experiencing gender dysphoria. The report made eight recommendations on treatment, two of them on medications: puberty blockers and hormones.

While gender-affirming care is not reducible to medications alone, they are the treatments most often singled out by critics. The report determined that the scientific evidence for puberty-blocking medications in youths needs improvement, expressing concern about the potential risks and questioning the benefits for most children. The report didn’t say that puberty blockers should not be prescribed to children, but it concluded that they should only be prescribed as part of a clinical trial. The report said that masculinizing or feminizing hormones could be given to people starting at age 16, but it advised “extreme caution.”

“I can’t think of any other situation where we give life-altering treatments and don’t have enough understanding about what’s happening to those young people in adulthood,” said Hilary Cass, the pediatrician who produced the report. This statement, and concerns raised in the report about lack of evidence, are misleading for two reasons.

First of all, most medications used in pediatrics lack long-term and pediatric-specific data, and so medicines for gender-affirming care are not exceptional in that regard. In fact, up to 38% of drugs used in pediatrics and 90% of those used for newborns are prescribed off-label and have had few studies performed on children. These off-label medications include antipsychotics, endocrine medications, and even some antibiotics.

Second, there is safety data on puberty blockers. They have been given to children for decades to treat conditions such as precocious puberty, in some cases for the indication of social distress related to early puberty. These drugs have been shown to be safe in prospective observational studies.

In looking for evidence, Cass placed the greatest value on randomized controlled trials. In these studies, participants are randomly assigned to receive either an experimental treatment or a control treatment, and then their outcomes are compared. RCTs are great when they are feasible and ethical. But they are not feasible for studying puberty blockers because the participants and researchers would know which group the participants were in when they either did or did not show pubertal changes. This knowledge could interfere with an unbiased scientific comparison of the outcomes.

Without RCTs on puberty blockers, Cass had to review other studies whose evidence she considered “weak.” But this does not mean a lack of benefit. Rather, it should prompt shared decision-making with clinicians, patients, and families discussing the proportionality of benefits and burdens.

Weighing the proportionality of benefit to burden from an intervention is a foundational calculus in ethical decision-making. It goes on every day in pediatrics without apparent controversy. Some arguments appeal to patient autonomy—the rights and interests of the patient who wants a treatment—rather than to the treatment’s ability to reduce morbidity and mortality, as was discussed in an article in the current issue of the Hastings Center Report. Other arguments focus on what is in the best interest of the patient. But for many decisions in adolescent health, it is not a matter of choosing either/or; it is necessary to consider both the patient’s autonomy and best interest. For example, life-and-death decisions involving serious illness in adolescents require respecting the adolescent’s autonomy and considering the medical team’s and the parents’ assessments of the benefits and burdens, or beneficence and nonmaleficence, of those decisions.

Interestingly, in contrast to gender-affirming care, there seems to be relatively little public controversy over cosmetic surgery for teenagers. And yet in 2022 there were 23,527 cosmetic surgeries performed on teenagers in the U.S., including breast augmentation for both biologic cis males and females. These surgeries require the same decision-making process as other interventions for teenagers. But as far as we can tell, they receive less public scrutiny than gender-affirming care. There are no court cases against these surgeries or attempts by state governments to ban them despite legitimate questions about their benefits and burdens to adolescents and the fact that, unlike most gender-affirming interventions in youth, cosmetic surgeries are not reversible.

Issues around evidence in pediatrics are abundant, but gender-affirming care receives a disproportionate amount of public criticism. Resources are lacking for research that would provide more evidence on the safety and effectiveness of care in pediatrics, including gender-affirming care. The Cass report recognizes this problem and provides important guidance. The report does not support bans and criminalization of gender-affirming care, which has been the response of more than 20 U.S. states and is under review by the Supreme Court. On the contrary, it calls for investment in and expansion of gender-affirming care: improved access, workforce education, and collaborative and coordinated services, along with infrastructure to ensure improved data collection and ongoing quality improvement to strengthen the evidence for various treatment options. While we disagree with the Cass report’s assessment of the evidence for puberty blockers and hormone treatments, its overall recommendations should be heeded by critics of gender-affirming care if the goal of their critiques is truly to provide improved and beneficial care for young people.

Ian D. Wolfe, PhD, MA, RN, HEC-C, is director of ethics at Children’s Minnesota and affiliate faculty in the University of Minnesota’s Center for Bioethics.

Justin M. Penny, DO, MA, HEC-C, is an assistant professor in the Department of Family Medicine and Community Health at the University of Minnesota and a clinical ethics assistant professor in the Center for Bioethics.


Bioethics Forum Essay

Should He Have a Vasectomy?

Case Narrative

D is an 18-year-old man with autism and intellectual disability whose parents request a vasectomy. After receiving this request, his primary care physician seeks an ethics consultation from her affiliated hospital’s ethics committee asking for guidance on whether and how to respond to the parents’ request. The hospital ethics committee routinely supports affiliated outpatient providers with ethics consultation services. In this case, the ethics committee provides a three-person interdisciplinary ethics consultation team, which initially meets via Zoom with the primary care physician, a urologist with expertise in vasectomies, the patient, his parents, and the patient’s teachers.

D requires 24/7 care and adult supervision but is otherwise healthy. His parents, who also serve as his conservators, state that D has verbalized not wanting to have children and has not been sexually active. D’s parents express concern that if D were to have a child, he would be unable to provide care because he himself requires around-the-clock care.

To ensure that D has access to a decent quality of life, his parents have set aside money to support his future care in a private day program and co-ed group home. If D were to have financial responsibility for a child, his parents worry that this would significantly reduce the resources for his future and require him to reside in a public facility, where he might have a lower quality of life. D’s parents, teachers, and medical team assess that D is unable to understand that sexual intercourse can result in reproduction and in having to care for a child, despite having completed a sexual education class with individualized accommodations for his learning needs. They also agree that D is incapable of using a condom. When asked about a vasectomy (“a surgery that would prevent you from having children”), D readily agrees. But on further questioning, D is unable to explain what he is agreeing to, and, as his family notes, “he is a pleaser–he’ll agree to anything we ask him to do.” His medical team concludes that D lacks capacity to make decisions about a vasectomy.

The central question in this case is whether and under what conditions it would be legal and ethical to proceed with a vasectomy requested by D’s parents with D’s assent but without his full understanding, considering the history of forced sterilization of individuals with intellectual disabilities in the United States.

Ethical Analysis and Process

In evaluating D’s parents’ request, the medical and ethics consultation teams were mindful of that history. U.S. policies resulted in the sterilization of over 60,000 individuals in over 30 states throughout the 20th century. Unfortunately, forced sterilization continues today in some countries and often dehumanizes marginalized populations deemed less worthy of reproduction and family formation, resulting in the disproportionate sterilization of minority groups. The teams also noted the ongoing explicit and implicit bias against people with disabilities in the health care system, and evidence that they have significantly worse health outcomes than people without disabilities.

Sterilization procedures can improve the life of an individual by giving them greater autonomy and control over their reproduction and sexual function. However, sterilization without informed consent violates the fundamental right to keep a person free from unwanted intrusions, including intrusions into their sexual and reproductive preferences and abilities.

The three-person ethics consultation team sought a broad range of perspectives for this case, including those of the full 30-member hospital ethics committee, representatives of disability advocacy organizations, and the health care system’s legal and risk services department. The ethics team asked D’s parents about their views on D’s wishes and best interests and on the impact of a vasectomy on D’s well-being. The team also asked them about the use of alternatives such as condoms, detailed accounts of D’s recent social history, and what they hoped and planned for D’s future. The ethics team met separately with D and asked him about his opinion of children and whether he thought his views might change in the future. D responded, “Jessica and Todd [D’s younger cousins] are so annoying. I don’t want to have kids.” The team also questioned D’s comprehension of sexual reproduction, contraception, and surgical intervention. When D was unable to describe sexual reproduction, the team described it using second-grade terminology and asked D to teach back his understanding. He was unable to do so.

The ethics consultation team reached out via email, phone, and Zoom to representatives from three regional and national disability rights organizations. The representatives noted the possibility that, with maturity and further education, D’s views about children might change and he might become able to use a condom. The disability rights advocates’ considerations were included in the final ethics consultation report, which was shared with D’s doctors and parents.

If D underwent a vasectomy now and later wished to have a child, a vasectomy reversal procedure or sperm retrieval could be considered. D’s urologist estimated that there would be a roughly 80% probability of successful pregnancy using one or both of these procedures in the best-case scenario. D’s urologist recommended that his parents set aside funds to allow for either of these techniques if D or his surrogate decision-makers were to change their mind about reproduction.

It is the practice of the ethics consultation team to discuss with the hospital’s legal and risk services department any legal issues and constraints surrounding the ethical question. For this case, the sterilization procedure laws in D’s state of residence specify that if an individual is unable to provide informed consent, or if the individual is under a conservatorship or guardianship (e.g., because of an intellectual disability), sterilization is prohibited unless a probate court concludes that the procedure is in the individual’s best interest. As part of this process, the probate court appoints a panel of experts to help make the decision.

The Decision

The ethics consultation team determined that vasectomy was in D’s best interest. The team also recommended that funds be set aside for vasectomy reversal or sperm retrieval, as well as for providing more extensive sexual education to D that appropriately accommodates his learning disabilities.

Under state law, the probate court considers the testimony and recommendations of relevant experts including physicians and ethics consultants before making the final decision. In this case, the probate court agreed with the ethics team and ruled that the procedure was in D’s best interest.

Benjamin Tolchin, MD, is the director of the Yale New Haven Health Center for Clinical Ethics, and an associate professor of neurology at Yale School of Medicine. @btolchin, LinkedIn: https://www.linkedin.com/in/benjamin-tolchin-9b6b4a73

Kristina Gaffney is a member of the Bridgeport Hospital and Yale New Haven Hospital ethics committees and a research associate at the Yale Center for Outcomes Research and Evaluation (CORE). LinkedIn: https://www.linkedin.com/in/kristinagaffney

Series Editors’ Comment: Weighing an Individual’s Best Interest and Historical Injustice

The clinical ethicists involved in D’s case provide a nuanced and empathetic approach to addressing a profoundly complex ethical issue. Sterilization of people with intellectual disabilities should prompt careful deliberation. We believe the clinical ethicists were right to prioritize spending time with D and striving to help him understand the decision before him, even if they could not elicit his preferences. They demonstrate the importance of including D’s voice while also recognizing the impact of his intellectual disability on informed decision-making. Additionally, they recognize the importance of talking with D’s parents to help understand D’s lived experience, as well as his values and preferences.

The team’s engagement with disability advocates underscores a dedication to inclusive decision-making. Getting their input aligns with ethical best practices and serves as a safeguard against the historical injustices of forced sterilization that persist today. Clinical ethicists should be wary of repeating historical injustices perpetrated against minoritized communities, but also realize that they cannot remediate those injustices by way of recommendations for a particular patient. Denying patients with disabilities access to medical procedures that were once unethically forced upon them can also be a form of disability discrimination. In other words, it would be an overcorrection to deny sterilization to all intellectually or developmentally disabled individuals.

D’s case also highlights the difficult terrain that clinical ethicists encounter when their recommendation is brought before a court for further review. D’s vasectomy was reviewed in probate court per state law. While the court agreed with the ethics team’s recommendation in this case, one can easily imagine a situation in which a court’s decision and an ethics team’s recommendation differed. Had the court given permission to move forward with D’s vasectomy after the ethics team recommended against it, the medical team would have faced a difficult decision: whether to go against the ethics recommendation or to transfer D to another health care facility.

D’s case brings attention to the complicated realities of guardianship, or conservatorship. While D’s parents demonstrated love, care, and thoughtful insight into D’s capacity and wishes and appeared to advocate for his best interests, not all individuals under guardianship are so fortunate. In some cases, guardians have complete decision-making power, eclipsing the autonomy of their wards. This raises concerns about the potential for abuse or neglect of a ward’s true interests and preferences. D’s case is made easier by the nature of his parents’ involvement and their willingness to engage in this process, highlighting the importance of ensuring fair and inclusive decision-making processes for individuals who may lack adequate representation.

Finally, D’s case underscores the necessity of making ethical recommendations even in spaces of uncertainty. The ethicists in this case had to balance myriad factors, including legal constraints, historical context, and D’s immediate and future well-being. It is crucial for ethicists to remain flexible, reflective, and responsive to the nuances of each case, ensuring that their recommendations are not only ethically sound but also practically feasible and sensitive to the unique circumstances of the individuals involved.

Lingering Questions

Even if moving forward with vasectomy is the best decision for D, might there be long-term psychological and emotional effects on D if he undergoes a vasectomy without fully understanding it? What additional measures could support or enhance D’s understanding of the procedure and assent to it? What more should clinical ethicists do to help identify implicit and explicit biases against individuals with disabilities in the health care system, and what steps can they take to mitigate these biases, especially in decisions involving potentially irreversible procedures like sterilization?

–Adira Hulkower and Devan Stahl

Learn more about the series: Clinical Ethics Case Studies for Hastings Bioethics Forum.

Attention clinical ethicists: learn how to contribute to the series.

surgeons operating in dark in Ukraine

Bioethics Forum Essay

Caring for Patients in Armed Conflict: Narratives from the Front Lines   

As wounded victims poured into the civilian hospital in Kharkov after the war in Ukraine began in February 2022, Artem Riga was initially the only surgeon on duty. Some colleagues were fleeing the country, and others were delayed by the intense shelling. Doctors had to ration food and medical supplies, performing surgery in body armor, with sandbags on the windowsills of the operating room. A sudden attack significantly damaged his hospital and left patients covered in broken glass and other debris. Amid this chaos, Riga had to teach patients to care for their own wounds.

Riga’s essay is one of 16 accounts in the Narrative Inquiry in Bioethics symposium, “Healthcare Under Fire: Stories from Healthcare Workers During Armed Conflict.” The accounts, set in war zones around the world, describe ethical dilemmas centered on uncertainty, scarcity, and injustice. They also reveal the triumph of solidarity and courage, both in the delivery of health care and in the act of writing about the experience. The essays are intended as a means of instruction and inspiration for other clinicians in similar circumstances.

“Some write as they listen for the next missile to land near them, while others reflect on conflicts they experienced decades earlier,” write symposium editors Dónal O’Mathúna, Thalia Arawi, and Abdul Rahman Fares in the introduction. Some of these conflicts are currently making headlines.

There were 2,562 reported incidents of violence against or obstruction of health care in 30 countries or territories in 2023, according to the Safeguarding Health in Conflict Coalition, a 25% increase over 2022. Four hundred and eighty-seven health care workers were killed, 445 were arrested, and 240 were kidnapped; in 625 incidents, health care facilities were damaged or destroyed. Humanitarian workers have faced risks as well, exemplified by the seven workers from the World Central Kitchen charity who were killed, and another who was gravely injured, in separate Israeli air strikes in Gaza as this symposium went to press. This symposium highlights a few of the people behind the statistics.

Several stories raise questions about when it is appropriate for clinicians to perform tasks outside their scope of practice. Under “normal” circumstances, doing so would be considered substandard care. But in a war zone, where specialists are lacking, it might be not only permissible but essential for patients who otherwise would have no treatment. For example, Ryan C. Maves, a retired infectious disease specialist with the U.S. Navy, describes being hastily retained as a “de facto cardiologist,” alongside adult intensivists treating children and pediatricians caring for adults.

In the most extreme circumstances, care was provided by people with no medical training. To Riga, this care was not only medically essential, but also an uplifting demonstration of solidarity. “It was a miracle! Before my eyes, people turned into paramedics,” he writes. “I have never seen such a transformation and mutual assistance.”

Some authors struggle with whether to stay and care for patients during war while knowing that doing so would place themselves and their families in grave danger. Ghaiath Hussein, who conducts health research in South Darfur, asks whether the risk of violence against his staff and study participants outweighs the benefits. He continues to struggle with this difficult balance, but he has decided to continue his research there because he wants users of research reports coming from conflict zones to remember that people risk everything to generate important data that can help others.

Handreen Mohammed Saeed, writing from Iraqi Kurdistan, and Ryan Maves, the retired U.S. Navy medical officer, both discuss the ethical challenges of treating patients who are combatants against the health care team’s community. Saeed recounts a situation in which injured militants arrived at his medical facility, where many of the staff and other patients had been victims of those militants’ violence. Some staff struggled intensely over caring for the militants. They held meetings to talk carefully and sensitively through their emotions and the ethical principles that guided them, and to listen to people’s painful experiences. They all concluded that their primary role was to care for everyone’s medical needs, not to act as police or judges.

“I am a doctor from the Gaza Strip in Palestine,” writes Ola Ziara, who reluctantly asks whether it is worth saving a life only to subject the person to “this overstretched healthcare system, this unrelenting crisis.” Asking the question reminds her of her own pain and helplessness in the situation. There is no answer. “We must keep caring while walking through our pain,” she writes. But she adds that she’s no longer sure they can go on, such is the “unbearable grief” they carry.

Some say that bioethics has ignored war. Perhaps because there are specific subfields of military medical ethics and humanitarian ethics, many bioethicists assume that health care ethics during war is adequately addressed. However, as these stories make clear, war creates ethical challenges not only for military and humanitarian health care workers, but also for civilian physicians like Riga.

Health care workers writing in the symposium rely on the principles and frameworks of bioethics to inform difficult decisions in times of uncertainty. For example, Oksana Sulaieva, head of a pathology laboratory in Kyiv, shares how her decision to stay in Ukraine and continue to provide laboratory services to hundreds of hospitals and thousands of patients was guided by her professional obligation of beneficence. Furthermore, she stated that “our professional duties were an anchor linking us to each other against fear and panic.”   

Some of the authors see their decision to write as an act of courage inspired by ethical values. “[The] value of this collection lies not only in its documentation of the harsh realities of providing health care in acute conflict but in the intimate and generous act of narration by health care providers operating in extraordinarily difficult circumstances,” writes Kim Thuy Seelinger, research associate professor and director of the Center for Human Rights, Gender and Migration at Washington University in St. Louis. “Their series of essays fosters a deeper understanding of the moral and ethical dimensions of their collective work and the risks they take to do it. The risks they take to write.”

To Esime Agbloyor, a physician from Ghana and a bioethics fellow at the Center for Bioethics at The Ohio State University, the symposium authors demonstrate courage, altruism, sacrifice, and truthfulness. “These authors chose to speak up and, by so doing, exude the virtuous traits that Aristotle describes,” she concludes. “They are worthy of praise and admiration.”

We agree, and we urge everyone to pay attention.

Emily E. Anderson, PhD, MPH, is a professor of bioethics at Loyola University Chicago, where she directs an NIH Fogarty-funded research bioethics training program for Ukrainian physicians and scientists in collaboration with Ukrainian Catholic University.

Dónal O’Mathúna, PhD, is a professor in the College of Nursing and associate director of research in the Center for Bioethics at The Ohio State University and an editor of the symposium. He collaborates with Dr. Anderson on an NIH Fogarty-funded project investigating research ethics during war.

person entering maze of human brain

Bioethics Forum Essay

The Mind is Easy to Penetrate. The Brain, Not So Much

Dualists rejoice! That much-maligned ontology got a new lease on life recently with vividly contrasting cases involving Scarlett Johansson’s voice and Elon Musk’s brain.

Well, not Musk’s brain but that of a patient volunteer in his Neuralink experiment, and not Johansson’s voice but that of a strikingly similar vocal double. Yet the parallel mid-May dustups reveal something interesting about minds and brains: One is easy to penetrate, the other far more challenging.

When OpenAI released GPT-4o, its model for “more natural human-computer interaction,” one of its voices was named Sky. The feminine, warm, and (to some listeners) rather flirty voice struck many as remarkably similar to that of Johansson as Samantha, the operating system in the 2013 film Her. (Spoiler alert: In the film a rather sad and lonely fellow, played by Joaquin Phoenix, falls in love with the disembodied AI, only to be traumatized upon discovering that Samantha has been unfaithful, servicing many such users, so to speak.) It happens that the participants in OpenAI’s demos were mostly young men, presumably not as emotionally involved as the lead character in Her but visibly charmed.

Enter Scarlett Johansson, who at first expressed concern that Sky was “eerily similar” to her Samantha voice, pointing to an interesting intellectual property question about what counts as similarity between one voice and another. It turns out that OpenAI’s Sam Altman had approached Johansson about voicing the new operating system. The circumstances gave her sufficient cause for suspicion to retain legal counsel. Had OpenAI trained its Sky algorithm on Samantha? Was Sky a deepfake? But Altman assured Johansson that the company had hired a voice actor for the demo and had no intention of imitating her Samantha character. His company hit the delete button on Sky. The sides seem to have parted friends. One might say they remain on speaking terms.

Underneath all the hubbub is a key point: voices matter, metaphorically and literally. Facing death, Socrates said his inner voice told him when he was on the wrong course. Many of us strive our whole lives to be heard, to “find our voice.” Through the goopy medium of amniotic fluid we bond to our mother’s voice. People with auditory hallucinations have trouble distinguishing voices that are real from those that are not. When we’re not sure about the objectivity of an ominous sound, we reach for intersubjectivity: “Did you hear that?”

Now consider the overhyped brain implant experiment conducted by Elon Musk’s Neuralink on a person with quadriplegia, a blatantly conflicted case of science by press release. Within the first few post-op weeks, about 85% of the device’s threads had slipped away from their target sites in Noland Arbaugh’s brain, and further surgery was determined to be inadvisable. Someday perhaps a more responsibly described study will succeed in providing people like Arbaugh with brain-linked rehab outside strict laboratory conditions. More likely that will be accomplished by one of the legitimate teams that pioneered such efforts long before latecomer Neuralink.

Meanwhile we may reflect that the three-pound lump of jelly between our ears is vastly more difficult to manipulate than the airy stuff of mind said to distinguish zombies from the last of us.  

Jonathan D. Moreno is the David and Lyn Silfen University Professor at the University of Pennsylvania and a fellow of the Hastings Center.

photo of former president nixon with arms outspread and fingers in v sign

Bioethics Forum Essay

The Overlooked Father of Modern Research Protections

Thirty years gone, but the spirit of Richard Nixon still rattles around in my head like Marley’s ghost. Instead of ledgers and cash boxes, he carries an Enemies List. “Never forget, the press is the enemy. The establishment is the enemy. The professors are the enemy. Professors are the enemy. Write that on a blackboard 100 times and never forget it,” Nixon told Henry Kissinger in 1972. I check at least two of those three boxes, but I doubt I would qualify for Nixon’s list anymore. The more time passes, the more Nixon looks like a strange, unlikely political ally.

For American liberals of a certain age, especially those disillusioned with the current state of politics, it has become common to look back in astonishment on the progressive domestic measures that Nixon signed into law. The same man whom Hunter S. Thompson called “a political monster straight out of Grendel” signed the Clean Air and Water Acts, the Equal Employment Opportunity Act, the Endangered Species Act, and measures establishing the Environmental Protection Agency and the Occupational Safety and Health Administration. But the progressive measure overlooked even by Nixon enthusiasts is the National Research Act, which Nixon signed on July 12, 1974, and which gave us our modern research protection system.

If there were an origin story for the National Research Act, it would probably begin in early 1972. In January, Mike Wilkins, a young internist, appeared on an ABC news report exposing the horrific conditions at the Willowbrook State School in Staten Island, where institutionalized, intellectually disabled children had been intentionally infected with hepatitis A and B. Shortly afterwards, Martha Stephens, an English professor at the University of Cincinnati, organized a press conference protesting Pentagon-funded studies in which vulnerable cancer patients were given whole-body irradiation to test how much radiation a soldier could withstand in a nuclear attack. But the most explosive report came in July, when documents provided to the Associated Press by whistleblower Peter Buxtun exposed the Tuskegee syphilis experiment, a study by the U.S. Public Health Service in which poor, Black men in Alabama with syphilis were deceived and deprived of treatment for 40 years.

Whether Nixon was shocked by the scandals is hard to say. He certainly had no sympathy for whistleblowers. His covert Special Investigations Unit, aka “the plumbers,” was devised to prevent internal leaks and punish violators. Nixon was also preoccupied with his re-election campaign. On June 17, 1972, just over a month before the Tuskegee story broke, Washington police arrested five men sent by the plumbers to break into Democratic National Committee headquarters at the Watergate Hotel. By February 1973, when Democratic Senator Edward Kennedy introduced legislation to establish the Senate Watergate Committee, Nixon was waist-deep in a scandal of his own.

It is Edward Kennedy who deserves most of the credit for the subsequent research reforms. As chair of a Senate subcommittee on health, Kennedy had been holding hearings on a range of medical abuses when the Tuskegee story broke. In November 1972, he called for the establishment of federal policy on use of human subjects in medical experimentation, charging that medical researchers were exploiting the poor and uneducated. He held hearings on the Tuskegee study in February and March 1973. Yet the Watergate scandal continued to intrude. When Peter Buxtun, the Tuskegee whistleblower, was giving his Senate testimony, he was interrupted by two men rushing into the room. “I’m feeling kind of stupid. I’m halfway through a sentence,” Buxtun told me when I interviewed him several years ago. When Buxtun returned to his seat, the woman sitting next to him leaned over and whispered, “Haldeman, Ehrlichman and Dean have resigned.”

When the Tuskegee hearings were over, Senator Hubert Humphrey of Minnesota introduced a bill to create a National Human Experimentation Board. Humphrey had in mind a powerful, centralized board of experts, appointed by the president, with the authority to regulate and review all federally funded medical research. A powerful board seemed like the best way to protect vulnerable people from being exploited by a medical establishment that viewed them as useful “research material.” But the medical establishment vigorously opposed any effort at regulation and Humphrey’s bill failed.

In light of that failure, Kennedy introduced a bill that would become the National Research Act. Although it is often portrayed as a landmark reform, the National Research Act was actually a watered-down political compromise. It established the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research–a temporary body with purely advisory power–and delegated the oversight of medical research to local peer-review committees (IRBs) of the sort already in existence. It also endorsed a patchwork of federal guidelines governing research that has come to be known as the Common Rule. Yet as depleted as the National Research Act was, it still marked a significant improvement on the status quo.

As far as I can tell, there is no written record of Nixon’s opinion on the National Research Act. On July 12, 1974, the day Nixon signed it into law, newspaper headlines were dominated yet again by Watergate. The House Judiciary Committee had just released 3,888 pages of damning evidence of the Nixon White House’s abuses of power. Later that day, John Ehrlichman, a top Nixon aide, was found guilty of four criminal charges. According to Woodward and Bernstein’s The Final Days, Nixon boarded Air Force One at 4 pm that day to take refuge at his private estate in San Clemente. It would be just over three weeks before he resigned in disgrace.

The evidence suggests that when Nixon signed the National Research Act into law, he had other pressing matters on his mind. In fact, it is entirely possible that he gave it no thought at all. Yet it is not impossible that he endorsed the act. Nixon was nothing if not complicated: a shy, socially inept man who chose a life in the public eye; a rabid anti-communist who opened relations with China; a conservative who expressed racist views in private yet desegregated more Southern schools than any other president. I would like to imagine that he approved.

Carl Elliott, MD, PhD, is a professor in the department of philosophy at the University of Minnesota and a Hastings Center fellow. His new book, The Occasional Human Sacrifice: Medical Experimentation and the Price of Saying No, will be published in May. @FearLoathingBTX

Illustrative image for Catastrophe Ethics and Charitable Giving

Bioethics Forum Essay

Catastrophe Ethics and Charitable Giving

How can we live a morally decent life in a time of massive, structural threats that seem to implicate us at every turn? Climate change is the paradigm example here, as it poses devastating risk to current and future people, and virtually everything we do contributes to it through the emission of greenhouse gases. So, if I’m trying to carve out a justifiable life, how should I respond? Am I permitted to fly? Should I buy an electric car? Go vegan?

These are the central questions of my new book, Catastrophe Ethics: How to Choose Well in a World of Tough Choices. Of course, none of my little individual actions will have a meaningful impact on the climate. Even the choice to take a flight—which is one of the more environmentally expensive things many of us will do—contributes an infinitesimal fraction to the trillions of tons of greenhouse gases accumulating in the atmosphere and raising the global temperature. So, is it not a bit precious to worry about each thing I do? This tension between feeling implicated in massive structural harms and being largely incapable of making an impact on those harms is what I call The Puzzle of individual ethics in an era of collective catastrophe.

Although the idea for the book was born out of climate angst, one of the central hooks is that our modern world is so massive and complex that the structure of The Puzzle replicates in many areas of our lives. Many of our purchases make us participants in exploitation. Our electronics likely rely on modern-day slavery overseas, and our favorite brands may use sweatshop labor or support union-busting.

In writing the book, I was surprised to discover that charitable giving is also part of the broad discussion of how to live well in such complicated times. As Judith Lichtenberg notes in her discussion of “The New Harms,” the way in which our participation in massive harms quickly becomes overwhelming and can feel intensely demanding parallels the now-old debate about how much we are obligated to give to charity in an effort to relieve suffering. In my terminology: the ethics of charitable giving feels a bit like catastrophe ethics.

This led to an experiment of sorts. Since I was writing a trade book, for which I would earn royalties by thinking about catastrophe ethics, I decided to donate some of my proceeds to a charitable organization. But which one? Could the work I was doing help me to choose?

I decided to find out.

Lessons from Catastrophe Ethics

Among my key findings, the first and perhaps most crucial is this: Because our individual contributions to massive, structural harms make no meaningful difference to reducing those harms, philosophers like Walter Sinnott-Armstrong are correct to conclude that we are not obligated to refrain from making those contributions. But it would be wrong to infer that, as a result, nothing we do matters morally. Not everything that is morally permissible is good or recommended; I may do something that is within my rights but that is nonetheless some flavor of bad, vicious, or otherwise crappy.

I characterize this first lesson as the insight that we have reasons to respond to catastrophes, even if we aren’t duty-bound to do so.

Second, then, and especially important for our discussion of charitable giving: The threat of catastrophe gives us reason to respond in different ways. There are negative reasons—that is, reasons not to be part of the problem and therefore to avoid doing things that contribute to it. And there are positive reasons—reasons to be part of the solution, like advocating for social reform, getting involved in political solutions, and giving our resources (time and money) to efforts to generate change. Indeed, activists like Mary Annaise Heglar argue that the positive reasons are in fact more important than the negative ones; Heglar, who works in the climate movement, doesn’t care if you recycle, but she wants you in the climate fight.

This is how charitable giving becomes directly implicated in catastrophe ethics. Some people have more money than time to give to any cause. And if the massive threats of today ground reasons both not to be part of the problem (negative reasons) and reasons to be part of the solution (positive reasons), then, plausibly, many of us have good reason to give money to all sorts of organizations trying to mitigate the harms we face.

Triaging Reasons

How, then, do we organize the mass of reasons grounded in the many catastrophes we face?

In my view, because we do not have a duty to respond to catastrophe in a particular way, we have latitude to determine how to act, and so how to live a life that is justifiable. There is far more to be done than any one of us could ever do, and so I propose a kind of division of moral labor: each of us gets to decide how to respond based on our subjective values, interests, passions, strengths, and privilege.

In addition, I think there are special reasons for some of us to include particular ways to respond, and this is due to our social and economic positions. As a 21st century American, I am not well-positioned to significantly reduce my carbon footprint: I live in a car-based society, our electric grid has been slow to decarbonize, and my job and family require a lot of travel. Thus I, like most Americans, have a relatively high carbon footprint. But any success I enjoy is also due significantly to the massive extractive enterprise of American history: I get to live the life I do because America has emitted, since the Industrial Revolution, more greenhouse gases than any other country. One way to think of this is that I have an enormous amount of climate privilege, and I’m continuing to contribute to climate change in an outsized way.

Thus, while there are some harms I can extricate myself from (for instance, I can boycott companies that utilize slave labor), I cannot adequately respond to the negative reasons generated by climate change. And the fact that I continue to benefit from the emissions driving climate change—which is causing and will continue to cause serious harm—makes me a participant not just in harm, but in injustice.

These features together suggest to me that I have especially strong reasons to, as Heglar says, “join the climate fight” by responding to the positive reasons to create change.

The Judgment: Where to Donate

At the end of this reasoning, I came to a few conclusions about my charitable giving. One is perhaps best summarized as a response to the philosophy of effective altruism, which recommends donating as much money as you can to the most effective organizations, so that you do as much good as possible.

While I think there is a lot of moral weight to the idea of “doing as much good as you can,” there is more to the ethics of charitable giving than just that. I have special reason to donate to climate organizations because of my position as a beneficiary of America’s extractive economy. And because of my career and my interests, I know a lot about climate change, and so feel well-positioned to choose organizations that do important work. Finally, because we are allowed latitude in the way we respond to catastrophic threats, each of us can choose according to our values.

Based on this reasoning, I decided to donate a portion of the proceeds from my book to the organization Cool Earth, which protects the rainforest, but does so in a specific way: by investing in Indigenous peoples and local communities. Cool Earth’s efforts not only contribute to environmental conservation but also address broader issues of social justice, equity, and human rights in the context of climate change. By engaging with local communities, promoting sustainable practices, and advocating for the rights of Indigenous peoples, Cool Earth exemplifies a holistic approach that integrates environmental protection with social empowerment and ethical considerations.

Because I believe in wide latitude concerning how each of us responds to catastrophe, I don’t think anyone reading this has an obligation to support Cool Earth. However, I do believe that many people are like me in relevant ways: They would like to respond to climate change; they have benefitted from historical emissions and contribute in an outsized way to ongoing emissions; and they have the means to donate to charity. For such folks, I believe the reasons commend giving to organizations like Cool Earth.

They are in the climate fight, and giving is one way that we can join them.

Travis N. Rieder, PhD, is associate research professor at the Johns Hopkins Berman Institute of Bioethics, where he directs the Master of Bioethics degree program. He is a Hastings Center fellow. @TNREthx

red dna against black background

Bioethics Forum Essay

How to Avoid a Genetic Arms Race

A quiet biological revolution in warfare is underway. The genome is emerging as a new domain of conflict. The level of destruction that only nuclear weapons could previously achieve is fast becoming as accessible as a cyberattack.

Now for the bad news. Great power conflicts and proxy wars are back. The rules-based world order crumbles while an unpredictable–and potentially unstable–multipolar one emerges.

Rapidly accelerating breakthroughs in our ability to change the genes of organisms are generating medically thrilling possibilities. They are also generating novel capabilities for biological weapons, a form of warfare that had been largely abandoned for decades. Take the recent AI-enabled advances in gene editing, the construction of artificial viral vectors for human genome remodeling, protein folding, and the creation of custom proteins. Far outpacing the regulatory environment, these advances are facilitating the weaponization and delivery of harmful bioagents–overcoming impediments that previously made biological weapons impractical.

Speculation about “genetic weapons” capable of singling out specific groups for infection dates back to the 1970s. In 2012, Vladimir Putin mused publicly about weapons that could be “as effective as nuclear” but “more ‘acceptable’ from the political and military perspective.” He predicted that nuclear weapons would, over the next half-century, become eclipsed by “fundamentally new instruments for achieving political and strategic goals.” The future of war, he said, is “based on new physical principles,” including “genetic” science.

The 2020 edition of Science of Military Strategy, an authoritative textbook published by China’s National Defense University, considers how biotechnology could serve as “a brand new territory for the expansion of national security” with “the use of new biological weapons, bioterrorism attacks, large-scale epidemic infections, specific ethnic genetic attacks, the purposeful genetic modification of the ecological environment, food and industrial products, and the use of environmental factors.”

Although its intelligence community’s 2016 worldwide assessment described genome editing as a potential “weapon of mass destruction,” the United States has been slow and reluctant to face the new challenge. One reason is that it is not clear what this challenge is, how bad it actually is, and what requires immediate attention.

Biodefense in the Age of Synthetic Biology, a report by the National Academies of Sciences, Engineering, and Medicine developed at the behest of the U.S. Department of Defense, has been the main guide to understanding probable biological threats since its publication in 2018. It advised paying more attention to the possibility of recreating known pathogenic viruses, making existing bacteria more dangerous, and making harmful biochemicals via in situ synthesis. It was not without blind spots, however. For example, it considered gene drives only as applied directly to humans, ignoring more indirect strategic applications, such as agricultural ones.

In his recent book, zoologist Matthew Cobb admits to being most concerned about gene drives and human genome editing, in addition to pathogen manipulation. A recent RAND report directs attention also to the Internet of Bodies (internet-connected smart consumer and medical devices) and to genomic surveillance and enhancement.

Nor is the generation of basic genetic data simply the province of sophisticated laboratories. Many elaborate datasets are open source and online, facilitating scientific exchange. Although most genetic data are de-identified, future technologies may be able to re-identify them. The Biden administration appreciates this threat. On February 28, the president signed an executive order seeking to prevent the sale of bulk sensitive personal data. The executive order has a legal basis in the National Emergencies Act and International Emergency Economic Powers Act and notes the need to “protect United States persons’ sensitive personal health data and human genomic data from the threat identified in this order.” That threat is the “continuing effort of certain countries of concern to access Americans’ sensitive personal data.”

Amid the apparent collapse of the post-World War II rules-based order, one of the worst things that could happen is a genetic arms race for which international conventions are unprepared. The Biological and Toxin Weapons Convention bans the proliferation of bioagents and toxins that have no peaceful use, but it has no formal verification regime. There was at least one alleged case of noncompliance, by the Soviet Union in 1981, involving a weaponized fungus, far from the exquisite genomic targeting that may eventually be practicable.

The convergence of genetic technologies, intense competition among highly motivated actors, and historic geopolitical shifts demands the attention of the international life sciences community and of bioethicists, who should establish guidance for what was once a threat confined to science fiction.

Yelena Biberman, PhD, is an associate professor of political science at Skidmore College.

Jonathan D. Moreno, PhD, is the David and Lyn Silfen University Professor of Ethics at the University of Pennsylvania and a Hastings Center fellow. @jonathanmoreno.bsky.social