[Image: headline collage of obesity drug names]

Bioethics Forum Essay

What Is Preventable About Obesity?

The suggestion that obesity is a preventable disease has been weighing heavily on my mind ever since I read a recent article in the Hastings Center Report. The article claims to focus on “ethical, policy, and public health concerns” related to anti-obesity medications, but there is a strong undercurrent of bias throughout. As an endocrinologist who specializes in medical weight management, I know from clinical experience that obesity is almost never entirely preventable, but bias against those with obesity certainly is.

Like other metabolic diseases, obesity has a range of genetic and other nonmodifiable risk factors. Obesity is no more or less preventable than other metabolic diseases, including hypertension, high cholesterol, and Type 2 diabetes, and yet it is the only condition among these that is blamed primarily on a patient’s choice.

The argument that medications that can help people lose weight should be used with caution because they might discourage “good lifestyle choices” is rooted in bias against obesity and people with it. Do we tighten our belts when it comes to using medication to lower cholesterol or blood pressure because it might give “weak” people the OK to eat more red meat or put extra salt on their food? We certainly don’t hold back on using medication for Type 2 diabetes because it might suggest to millions with the disease that it’s OK to have ice cream and cake and let the glucose-lowering medication mop up the excess blood sugar.

Implicit bias against the newest anti-obesity medications has nothing to do with their cost. When we elevate the bar for using pharmacotherapy to treat high body weight, we do so because, consciously or not, we are labeling obesity a disease of choice.

On any given day, my clinic schedule might include several patients with the same body weight and BMI (body mass index), but almost never the same narrative. It is the physician’s job to ask about, listen to, and hear every story. I have yet to hear a patient describe their lifetime weight history and conclude that their current weight, which is often their highest weight, was preventable. Should we call obesity preventable for a patient with binge eating tendencies who for years has gone to the grocery store every single day to buy exactly what he plans to eat to avoid having extra food in the house? Or for a patient who was “skinny as a stick my whole life” until she assumed primary caregiving responsibilities for her verbally abusive mother with dementia, leading to weight gain because, at the end of long days, she sometimes uses food as a reward? Or the many women who have not changed a single thing about their food intake or exercise and gained 25-to-30 pounds during menopause? How about all those people with childhood obesity or a strong family history of obesity for whom I have ordered genetic testing that identified no specific genetic mutation to explain obesity?

There are so many more: people with severe asthma, juvenile rheumatoid arthritis, or other autoimmune diseases who become at least 8-to-10 pounds heavier each time they use steroids to treat a disease flare. Those whose neural networks are such that food noise always plays at very high volume, and those who tell me, “I don’t know what it feels like to feel full.” And, importantly, those who need to take weight-promoting atypical antipsychotics for depression, mood stabilization, or psychotic illness. Last, let’s not forget those suffering from chronic stress, social isolation, or social defeat—resulting from society’s choices rather than their own—all of which can cause problems with metabolism that include significant weight gain. In my experience, people are doing the very best they can with the information and resources they have. The prevention they need is protection from bias that belittles obesity as a disease of choice.

The existence of expensive, often very effective medications to treat obesity, including the GLP1 receptor agonists (Saxenda and Wegovy) and the dual GLP1/GIP agonist (Zepbound), may, ironically, contribute to another angle of bias. If you don’t understand how many people lack access to these medications (everyone with Medicare, many with Medicaid or commercial insurance, and all the uninsured), and that a significant minority of people do not tolerate these medicines or lose much weight when taking them, you might go along with the social media/lay press narrative that weekly injections provide a complete cure for everyone with obesity. They do not. With respect to social justice, the government should consider applying price caps to cure the sky-high cost of these medications, as it has for insulin.

There is a final way in which bias against obesity is literally hardwired into medical care: the ICD-10 coding system. I am urged to bill using HCC (hierarchical condition category) codes, as they garner the highest reimbursement from insurance. Fact: these prized HCC billing codes are pejorative, insulting, and physiologically reductive: E66.01, “morbid obesity due to excess calories,” and E66.09, “other obesity due to excess calories.” Non-HCC codes, which report BMI numerically without qualifiers, are reimbursed at a much lower rate. What is preventable about obesity? Personal and systemic bias.

Jody Dushay, MD, MMSc, is an assistant professor of medicine at Harvard Medical School and an attending endocrinologist at Beth Israel Deaconess Medical Center in Boston.

[Image: closeup of gavel held by a judge]

Bioethics Forum Essay

Is Castration of Sex Offenders Ever Ethically Justified?

Louisiana recently became the first U.S. state to permit judges to order surgical castration of sex offenders. Surgical castration as a form of punishment is rare: Madagascar, the Czech Republic, and a Nigerian state use it in their criminal systems. Several states allow judges to order chemical castration, the use of drugs that significantly diminish sex drive.

Castration of any kind ordered by a court violates informed consent. Surgical castration is not ethically justifiable. The surgery is irreversible, disfiguring, and can cause feelings of humiliation and a lack of dignity. These harms arguably make it cruel and unusual punishment. Chemical castration is reversible and less harmful. Chemical castration for sex offenders can be ethically sound when they genuinely choose it in the hope of decreasing aggressive behavior, whether or not a reduced sentence is also involved. The ethics are murkier when offenders who would not otherwise want the intervention consent to chemical castration in exchange for a reduced sentence, as part of a plea deal, or as a condition of parole. In those situations, offenders make a difficult choice and likely feel coerced.

Chemical castration has been linked to reduced recidivism for sex offenders. It suppresses testosterone levels, which have been found to correlate with the risk of both committing violent crimes and reoffending. A study of sex offenders found that those taking testosterone-suppressing drugs for an average of six years had a 28% rate of recidivism, compared with 52% for those not taking the drugs. Another study found that men who had committed the most violent crimes had higher testosterone levels, as did those with higher recidivism rates over a nine-year period.

But testosterone isn’t the only influence on recidivism. The latter study also found that psychotherapy mitigated the likelihood of recidivism and negated the impact of testosterone levels. Several reasons for recidivism are well established, including difficulty finding employment and housing, poverty, and the social stigma associated with being a convicted sex offender. Stigma prevents social inclusion and can lead to isolation, a risk factor for reoffending.

Research shows that reducing sex crimes and other violent criminal behavior requires a community approach. Integration into society is an important component of reducing recidivism. Reentry after incarceration requires commitment from the community, elimination of discrimination in hiring, and reduction of stigma, all while maintaining public safety and reducing the risk of reoffending.  

Deterrence for the sake of decreasing recidivism and ensuring public safety is a common justification for chemical castration. The justification for Louisiana’s new law is that it will reduce repeat offending. In states that allow chemical castration as a tool of criminal justice, judges may order it or incarcerated people may choose it to decrease prison time. In such cases, judges also require offenders to give informed consent. Some states require consent in writing to acknowledge an understanding of the side effects of chemical castration. But is such consent truly informed? Voluntariness is an element of informed consent, and it is difficult to say that an act is voluntary when it is in exchange for release from prison.

Castration presents a medical solution to crime and in doing so it medicalizes the crime itself. Some may argue that this is a good version of medicalization. Identifying testosterone as the culprit and altering it could keep people out of prison and improve public safety. However, it is a mistake to shift the blame from human to hormone entirely since that would suggest a lack of personal responsibility.

Castration violates informed consent when a judge orders it. However, years of criminal justice reform advocacy and research lead me to see that chemical castration, when it is as voluntary as possible, could be a tool for decreasing mass incarceration. Let’s say, for example, that a judge offers chemical castration as an option to convicted sex offenders in exchange for reduced sentences or early parole. Some incarcerated people may want to consider it and may even find the choice empowering. Eliminating the option for the sake of protecting the rights of incarcerated people would not be ethically justified if the result is a longer time spent in prison.

Even if chemical castration remains an option in criminal justice, we should not ignore the possibility of rehabilitation of convicted sex offenders. Rehabilitation research shows that many people who commit aggressive sexual crimes show regret, can become integrated into society, and even offer help with the rehabilitation of others who have committed crimes. While the role that rehabilitation should play in criminal justice is debated, it is important to recognize the social aspects of rehabilitation and a social responsibility to provide for the needs of offenders who have spent time in prison and are ready to safely reintegrate into society. Voluntarily decreasing testosterone levels makes sense as it may relieve the sex offender of unwanted aggression, but the practice should be used with other known ways to reduce recidivism. And, above all, it should be the choice of the individual, not the judge.

Anne Zimmerman, JD, MS, is founder and chair of Modern Bioethics and Innovative Bioethics Forum, chair of the New York City Bar Association Bioethical Issues Committee, and editor-in-chief of Voices in Bioethics. Her book Medicine, Power, and the Law: Exploring a Pipeline to Injustice explores the relationships between medicine, science, and technology and the criminal and civil justice systems.

[Image: white woman with a bun sitting by window in wheelchair holding head in hand]

Bioethics Forum Essay

Ending Medical Gaslighting Requires More than Self-Empowerment

Over the last few years, there’s been much discussion about gaslighting in general, and medical gaslighting in particular. Headlines include “How to Address ‘Medical Gaslighting’,” “Feeling Dismissed? How to Spot ‘Medical Gaslighting’ and What to Do About It,” and “How to recognize ‘medical gaslighting’ and better advocate for yourself at your next doctor’s appointment.” Ilana Jacqueline’s forthcoming book, Medical Gaslighting: How to Get the Care You Deserve in a System that Makes You Fight for Your Life, is the culmination of much of this reporting.

Patients who are members of marginalized groups—women, Black people, trans people, elderly people, disabled people—are often dismissed, minimized, or altogether ignored by health care professionals. Over time, this can lead to gaslighting, in which patients question their thoughts, feelings, symptoms, even themselves. As a result, they have difficulty advocating for themselves in medical contexts. This can result in delayed or missed diagnoses and, ultimately, in severe and enduring health (and other) consequences.

Many of the articles and other publications about medical gaslighting argue that if patients were better able to speak the language of health care professionals, they’d more likely be heard and taken seriously. They’d be better able to advocate for themselves when injustices and harms like gaslighting occur. Their situation and relationships with health care professionals, their care, and maybe even their health outcomes would improve. Thus, the argument goes, we ought to focus our attention on empowering patients to take charge of their health by equipping them with tools and strategies to educate themselves. Doing so will enable them to receive better medical care.

The problem with this argument is that it puts the responsibility on victims of medical gaslighting to do something about it. This is a worrisome response, as we discuss in our recent book, Microaggressions in Medicine. First, the suggested solution is not only a form of (unintentional) victim blaming; it also comes with ableist, classist, and racist implications regarding how people “ought” to communicate if they want to be taken seriously. Second, this solution fails to acknowledge the systemic nature of the problem, namely, the gross imbalance of institutional, professional, and epistemic power between health care professionals and patients.

Receiving high-quality medical care shouldn’t depend on one’s medical literacy, articulateness, education, or any other contingent factors pertaining to one’s identity. Most people don’t have much, if any, medical literacy, access to medical journals, or the ability to differentiate between more and less credible sources in a world with endless, often interest-driven, information and medical advice at our fingertips. Many cultural and social norms dictate that expert knowledge like that of doctors is not to be questioned. And many people–for good practical reasons–know that if they do question or challenge doctors, they’ll be viewed as “difficult,” “noncompliant,” or “angry”–particularly egregious, yet common, stereotypes of Black women–labels that come with a variety of harmful repercussions for the kind and quality of care they receive.

Thus, we should avoid any recommendations that implicitly blame patients, their identities, cultures, communication styles, or social situations for their sub-par, inequitable care. Moreover, we should avoid putting the onus on patients–who are already ill, vulnerable, and often powerless in the health care system–to bring about systemic and structural changes.

Our position is that all patients deserve high-quality medical care. It is not up to patients to ensure that they receive it. Health care professionals and administrators should hold themselves and their teams accountable to make it so.

Of course, providing high-quality care to all patients is enormously challenging in our current nonideal health care system: clinicians are spread thin or burned out; they often lack the requisite time, resources, and support to have meaningful communication with patients; and many must adhere to productivity demands set by health care corporations that can compromise the kind and quality of care they’re able to deliver. Creating and maintaining conditions that lead to more just and equitable health care delivery is part of the structural piece of this complicated puzzle that’s worth paying attention to and points to issues that go beyond the need to simply “empower” individual patients.

Medical gaslighting is a real problem for marginalized patients. To fix it, we must place responsibility not on individual patients, but on medical educators, medical professionals, and health care institutions: those with the real power to make much needed structural and systemic changes and the moral obligation to treat all patients well and ultimately, to do no harm.

Lauren Freeman, PhD, is a professor of philosophy and director of the MA in Applied Philosophy at University of Louisville. 

Heather Stewart, PhD, is an assistant professor of philosophy at Oklahoma State University.

[Image: smokestacks next to apartment buildings]

Bioethics Forum Essay

Ending Unequal Treatment Requires a Shift from Inequitable Health Care to Social Inequities

The National Academies’ “Ending Unequal Treatment: Strategies to Achieve Equitable Health Care and Optimal Health for All” is the 2024 follow-up to its seminal 2003 predecessor, “Unequal Treatment: Confronting Racial and Ethnic Disparities in Health Care.” A noticeable difference between them is reflected in their titles; whereas the 2003 version made us aware of the systemic causes of racial disparities in health outcomes, the latest edition has moved beyond naming the problem to naming solutions.

Ending Unequal Treatment, however, acknowledges that racial and ethnic disparities in health outcomes still have not ended, nor significantly narrowed, in the 20 years since the original report was published. What makes this report unique and timely, especially during an election year, is that it names a lack of proper access to health care as a driver of racial disparities in health outcomes and calls on the American government to ensure proper access to affordable health insurance for all people as a solution. Where it misses the mark, though, is its lack of emphasis on calling for the American government to ensure equity in access to the social determinants of health that create health inequities before racial and ethnic minorities ever enter a clinical setting. Addressing factors such as access to housing, clean air and water, education, transportation, and stable communities must be pivotal to any solution to racial disparities in health outcomes.

In 2003 Unequal Treatment made many people aware of racial disparities in health outcomes. It named social inequities, such as inadequate access to income and housing, as drivers of health outcomes, but it was unique in calling out racial discrimination within social systems as a major contributing factor. In fact, it noted that even when we correct for social factors like income, people of color—particularly Black, Latino, and Indigenous people—still received worse care in our health care system, which contributed to their worse health outcomes. To address racial disparities in health outcomes caused by structural racism in health care, the 2024 report calls on the federal government to make major changes, including ensuring better access to health insurance for all people. The report also calls for reimbursing Medicaid providers at the same rates as Medicare providers, using federal funds to collect race data on patients, which can give us better information about the causes of health inequities and the impact of structural racism, and better funding the branches of the U.S. government meant to ensure health equity for racialized minorities, such as the Indian Health Service and the Office for Civil Rights. All of these policy solutions are meant to rid health care of its deeply rooted racial discrimination.

The report frequently mentions how improved access to the social determinants of health can also reduce racial disparities in health. For example, it mentions the value of state policies that help reduce income inequities, as improved income can lead to improved health for racial minorities. It further acknowledges how some state and federal policies have negatively and positively influenced racial inequities in the social determinants of health, such as family financial assistance programs that leave Black children in some states with less cash assistance than in other states, and programs such as the earned income tax credit that have resulted in improvements in Black children’s health.

Convincing people of the importance of the social determinants of health, or our preclinical lives, to our overall health is a thread that ties the 2003 and 2024 reports together. While this was a noble cause in 2003, when the social determinants were a less common fixture in health equity discourse, we now have ample research, including research that the 2003 edition contributed to, demonstrating the importance of our preclinical lives to our health. In fact, by some estimates health care affects only 20% of our health, while other social factors affect an estimated 80%. Therefore, with its focus on inequities in health care that contribute to racial disparities in health, and on improving aspects of health care, such as team-based care approaches in clinical settings, the 2024 report is concerned with only a small portion of the factors that influence our health, and racial minorities’ health in particular.

Ending Unequal Treatment does not advocate enough for using federal and state funding and government policy to eliminate inequities in the social determinants of health that influence racial disparities in health, given the magnitude of their influence on our health. For instance, the authors mention that people with asthma who adhere to their caregiver’s recommended therapy regimen may still have trouble managing their asthma because they live in areas with poor air quality caused by pollutants. The authors use this example to demonstrate how health inequities are created by the interconnectedness of social and structural determinants of health. But they do not acknowledge that not everyone suffers equally from environmental pollution. Black, Latino, and Indigenous people bear the brunt of environmental injustices more than White people do. They also have higher rates of respiratory diseases, like asthma, which can be caused and exacerbated by air pollutants. Pollutants in our air can also contribute to other diseases, such as lung cancer and stroke, which also disproportionately affect people of color, specifically Black people.

The connection between racial inequities in health outcomes and social and structural inequities requires us to use federal and state funding to address the original sources of inequities, such as pollution, if we want to end racial disparities in health. Addressing racial disparities in asthma, for instance, through health care alone would be worthwhile, but would ultimately be a temporary solution. Receiving equitable asthma therapy only to be sent back into a world with air pollution, as well as racial inequities in pollution exposure, is counterproductive. Enforcing current legislation that protects our physical environment and our air and water sources, and initiating new legislation that protects our environment and imposes harsher punishments (beyond financial penalties) on entities that pollute it, will do more for racial disparities in health than improvements to health care.

Ending racial disparities in health outcomes requires us to address racial inequities in our social and economic lives as well as those within the very foundation of health care. But if we don’t take seriously the project of income, housing, education, environmental, and political equity, then our world will keep creating patients for health care providers to patch up and send back out into the unhealthy environments that made them patients in the first place. While Ending Unequal Treatment is another valiant effort and a much-needed resource, a more forceful expansion of its government-based solutions to encompass government action on the social determinants of health is needed to achieve a more lasting impact on racial disparities in health.

Keisha Ray, PhD, is the John McGovern, MD Professor of Oslerian Medicine and an associate professor at the McGovern Center for Humanities & Ethics at UTHealth Houston, where she also serves as the director of the Medical Humanities Scholarly Concentration. She is a Hastings Center fellow and an advisor to The Hastings Center’s Sadler Scholars, a select group of doctoral students with research interests in bioethics who are from racial or ethnic groups underrepresented in disciplines relevant to bioethics. @drkeisharay

[Image: nurse giving baby vaccine by mouth]

Bioethics Forum Essay

Bioethics Must Address War as a Public Health Crisis

For most of human history, war has been a major cause of injury and death worldwide, causing harms well beyond the battlefield. Today’s wars kill far more civilians than soldiers–the United Nations Secretary-General conservatively estimates civilian deaths to be quadruple the number of direct battle-related deaths. Armed conflict not only takes innocent people’s lives, it also leaves misery in its wake. Civilian war zone populations face alarming rates of post-traumatic stress and other mental health disorders, often spanning generations. They are more prone to hunger and malnutrition, disease and disability, forced displacement from homes and communities, and lack of access to basic goods like health care, education, income, and opportunities. These social determinants of health affect the health of populations more than access to biomedical advances does, accounting for 30% to 55% of health outcomes.

Bioethics must address war not just as an individual tragedy but as a public health disaster. Bioethics’ earliest pioneers recognized this. They called for closer collaboration between bioethics and public health scholars, despite the challenge and clash of values–with bioethics emphasizing patient autonomy and individual rights, and public health highlighting the common good. Today, public health remains a pressing concern for bioethics for many reasons: increased knowledge of the impact of the social determinants of health; heightened awareness of how structural injustice impedes health and drives violence and armed conflicts; and better appreciation of the need for bioethicists to cross borders to deal with global bioethics issues, such as emerging and reemerging infectious diseases, climate change, refugee and migrant health, global access to essential medicines, and generative AI.

Bioethics as a field is ill-equipped to meet these challenges because its primary tools were designed for a different purpose. The classic principles of biomedical ethics–respect for autonomy, beneficence, nonmaleficence, and justice–helped societies correct unethical treatment of individual research subjects, and helped physicians and patients make difficult life-and-death decisions.

However, speaking to war as a public health crisis requires ethical principles targeting public health and focused on the common good. Our approach to war and public health sets forth six bioethics principles that aim to do this: health justice, accountability, dignified lives, public health sustainability, nonmaleficence, and public health maximization. These principles supplement the classic four, as well as previously proposed public health principles, bolstering the field’s ability to address war as a crisis for civilian populations. Below, we recount each principle, showing its ethical basis. We suggest how to deploy the principles in practice and, hopefully, enable wiser choices that lead to healthier, more flourishing human lives.

Bioethics Principles

Health Justice demands distributing health-related benefits and burdens fairly and stresses a special responsibility to populations most vulnerable to war’s health harms, such as women and children. Health justice gains support from the right to health, which the U.N. has recognized as a fundamental right of all human beings.

Accountability holds warring parties responsible for war’s effects on civilian populations. It extends to international groups such as the U.N., International Criminal Court, and World Bank. Accountability’s basis is the dignity and worth of persons, which makes tactics like rape, torture, or using civilian populations as shields indefensible.

Dignified Lives mandates taking reasonable steps to safeguard people’s central human capabilities, such as their ability to be healthy; have bodily integrity; exercise senses, imagination, and thought; plan their lives; affiliate with others; relate to nature; play and recreate; and regulate their immediate environment.

Public Health Sustainability names the ethical requirement of military planners to maintain public health services for war zone populations. Its justification relates to the fact that being healthy directly impacts people’s access to a normal range of opportunities in life, such as their ability to make and carry out a life plan, access education, and earn a living.

Nonmaleficence and Public Health Maximization call for creating the best possible balance of public health benefits and harms. Applying these principles requires comparing the health benefits and harms of war to its alternatives, such as economic sanctions, arms embargoes, diplomacy, nonviolent resistance, positive incentives, or military assistance. All six public health principles take stock of short- and long-term health effects. Combined, they reframe the ethics of war, changing calculations of whether waging or continuing a war is ethically defensible.

Putting Principles into Practice

Putting these principles into practice requires bioethicists to engage more directly with war in their research, teaching, and service. Bioethics research should examine not just the ethical challenges associated with crisis response, but also war’s precipitating factors, such as poverty, food insecurity, displacement, and lack of equitable access to education, health care, and jobs. The preconditions that make war more likely are hardly inevitable. By addressing the social and economic conditions that trigger war, bioethicists can be a “bridge to peace.”

Bioethics teaching should raise awareness about war’s public health effects among trainees and the broader public. For example, education can take the form of hosting public lectures, developing courses, compiling cases, and designing other training materials. Curricula should include a range of ethical approaches–e.g., those based on human rights, human capabilities, virtue ethics, communitarian ethics, Confucian political ethics, and ubuntu ethics, to name a few. Enlisting the wisdom of many traditions not only lends itself to a richer, more sophisticated ethical analysis; it also helps balance the field’s heavy focus on civil liberties and respect for individual autonomy, which reflects its Western, especially its American, roots. Since war is a global bioethics concern, ethical analysis must reflect the values and language of many societies.

Bioethics service might include deploying bioethicists as ethics facilitators to serve as advocates for war zone populations. To date, ethics facilitation has been applied mostly to clinical and research settings, yet it is highly relevant outside these settings where the health of populations is at stake. The core competencies for ethics facilitation include “clarifying the ethics concern(s) and question(s) that need to be addressed, gathering relevant information, clarifying relevant concepts and related normative issues, helping involved parties to identify a range of ethically acceptable options, and providing an ethical justification for each option.” These competencies can aid war planners and policymakers, as well as ordinary citizens, by focusing attention on war’s effects on human health.

To illustrate, consider what transpired during the civil conflict in El Salvador, when one-day truces were negotiated each year, from 1985 to 1991, between the government and guerrilla forces. This made it possible to immunize war zone populations on both sides against polio, diphtheria, whooping cough, tetanus, and measles. The truces came only after “a painstaking process that involved PAHO [the Pan American Health Organization], UNICEF, the Red Cross, and the Catholic Church.” Bioethicists can help with negotiations like these, serving as advocates for the health of civilian war zone populations. Emulating the World Medical Association, whose members express a commitment to giving medical care impartially to all, bioethicists should commit to advocate for the health of civilian populations on both sides of an armed conflict.

Wars are currently being fought around the globe, including 45 armed conflicts in the Middle East and North Africa, more than 35 conflicts in Africa, 21 in Asia, 7 in Europe, and 6 in Latin America. The U.N. documented over 33,000 civilian deaths from armed conflicts in 2023, a 72% increase over the prior year, marking a “resoundingly grim” reality. The U.N. has urged a global focus not just on international law, but on harms that civilians experience during armed conflict. Bioethics as a field must do its part, shining a light on war’s destructive effects on human health and the infrastructure required to support it.

Nancy Jecker, PhD, is a professor of bioethics and humanities at the University of Washington School of Medicine. @profjecker

Caesar Atuire, PhD, is a philosopher and health ethicist at the University of Oxford’s Nuffield Department of Medicine. @atuire

Vardit Ravitsky, PhD, is the President of The Hastings Center. @VarditRavitsky

Kevin Behrens, PhD, directs The Steve Biko Centre for Bioethics at the University of the Witwatersrand, South Africa. LinkedIn https://www.linkedin.com/in/kevin-behrens-622931159/

Mohammed Ghaly, PhD, is a professor of Islam and biomedical ethics at the Research Center for Islamic Legislation and Ethics at Hamad Bin Khalifa University, Qatar. @IBioethics

[Image: Clinical Case Studies card]

Bioethics Forum Essay

Should an Incarcerated Patient Get an Advanced Heart Therapy?

Case Narrative

W is a 32-year-old man with heart failure. Prior to diagnosis he was relatively healthy and physically fit. A trial of medication was not effective. Some members of W’s care team recommended that he be evaluated for an advanced heart therapy–a heart transplant or ventricular assist device (VAD)–but other members of the team questioned whether this would be appropriate, given that W had been incarcerated in a state facility for more than 10 years without the possibility of parole. The ethics team at the hospital where W was receiving treatment was asked to provide guidance.

This dilemma reflected a longstanding public debate about whether incarcerated individuals should be eligible to receive limited resources and the challenges of providing medical care in the carceral setting. The debate traces back at least to 2002, when a 31-year-old incarcerated man in California received a heart transplant, sparking significant disagreement in medical communities and among the public about whether prisoners should be eligible to receive these scarce resources. United States law requires “adequate” medical care for incarcerated individuals, and the United Network for Organ Sharing recommends that incarceration not be an absolute contraindication to transplant consideration. Another important consideration is that the incarcerated population is disproportionately made up of marginalized groups that have historically not received equitable care; W is Black. As professionals committed to justice in health care, clinical ethicists do not consider “social worth,” including conviction history, in our evaluations of ethical questions about the provision of care.

Ethical Analysis and Process

The ethics team met separately with the medical team and with W. Each of these meetings had different goals.

While the medical team had previously cared for a few patients who had received advanced heart therapies and then been incarcerated, this was their first request to evaluate an incarcerated patient who had developed life-threatening cardiac disease. The team was committed to avoiding inappropriate social judgments, and requested an ethics consult as they weighed potential benefits and harms of transplant, VAD, or continuation of medical management, which would have extended W’s life for a short time. The ethicists created a safe space for the medical team to meet and explore their possible conscious and unconscious biases, invited the team to be honest about hidden assumptions of worth and deservingness, and raised questions about the appropriateness of the traditional psychosocial assessment model for this patient population.

The ethicists asked the medical team to carefully consider the details of prison life and medical care and invited an expert on care in carceral settings to provide information. Many jails and prisons do not have staff who can address medical needs following transplant or VAD placement. This could lead to a lengthy, possibly unnecessary, hospitalization of an incarcerated patient while appropriate placement is sought, straining the hospital’s resources. Nonmedical needs, including access to good nutrition and psychosocial support, may go unmet in the carceral setting. Chronic medical issues related to transplants or VAD could lead to an incarcerated patient being perceived by peers as vulnerable or weak, which could put the patient at risk. The medical team’s inability to directly contact the patient and barriers to its ability to reach infirmary and other relevant staff directly for follow-up, as well as delays in accessing acute care in case of emergency, could be life-threatening to the patient. Conversely, an incarcerated person may have better social support than some people living in community settings. The infirmary in carceral settings guarantees a consistent power source for a VAD. The infirmary is also a protective environment that may help reduce the risk of infection (relative to the general population) and support medication adherence.

The medical team also raised questions about the ultimate benefit of advanced therapy for W given his already constrained lifestyle. The ethicists encouraged the medical team to avoid assumptions about W’s current or future quality of life, as many people who experience significant limitations, regardless of the cause, have meaningful lives. We also stressed the importance of eliciting W’s own values. When we explored these with W, his concerns centered on suffering due to insufficient medical care at his facility and on no longer being able to engage in the activities that he relied on for health and mental well-being, like exercise. He found the thought of new limitations on top of existing restrictions to be almost unimaginable and questioned his ability to tolerate them. At the same time, he felt that his family would want him to make this effort, and he wanted more time with them. We ensured that W felt empowered to ask questions about the risks and benefits of advanced therapy and to exercise agency in treatment decisions.

The evaluation required far more time from the medical team than would have been the case with a patient in the general community. Prison leadership had limited availability to work with the medical team to evaluate whether the prison’s medical resources would be sufficient to meet W’s ongoing needs; access to prison staff for VAD training had also been limited. Similar challenges were anticipated with ongoing monitoring and care, and medical team members shared concerns that the time and resources required to care for W might limit their ability to care for other patients. While agreeing that this should not be a reason to reject the possibility of advanced therapies for W, the ethicists acknowledged it as a potentially limiting factor that might need further consideration in settings of increased demand.

The Decision

A heart transplant was determined to be inappropriate because of the risk of harm, which was heightened by the inability to provide expedited transport to the hospital if complications occurred and by the limited medical care in the prison. These issues compromised the benefit of a scarce resource. However, the ethics and medical teams agreed that the benefit of a VAD to W would outweigh its risks, even while requiring greater resource investment from the follow-up team than would be required for patients who are not incarcerated. W’s concerns about his quality of life with a VAD were outweighed by his wish to live, and he chose to accept a VAD when it was offered. W underwent successful, uncomplicated VAD placement and returned to his correctional facility. Follow-up is ongoing.

Lingering Questions

We do not have doubts about the decision to offer an advanced therapy to W, but this case raises a question that requires us to think beyond the boundaries of our institution: What is the scope of responsibility of any individual heart failure program to ensure equitable treatment and work to address structural inequities when so many factors, like access to care and other social determinants, are out of our control? While, in contrast to donor organs, advanced therapies like VADs may not be scarce resources, hospital care teams are. Is it appropriate for more of these clinical resources to be directed toward addressing inequities if there is a counter-effect of being able to serve fewer patients overall?

As the incarcerated population ages, the need for complex medical treatment from many medical specialties is expected to increase (for heart failure, metastatic cancers, and neurological disorders, to name just a few). To help inform future decisions in similar situations, it will be helpful to study not only outcomes, but also anticipated versus actual challenges in receiving appropriate medical care in jails and prisons, as well as per-patient follow-up time from hospital teams and any impact that this has on the overall number of patients able to access advanced therapies. This information may help us assess the “cost” of a commitment to equity in the context of technological advances in treatment options. It may also help frame decision-making processes as an increasing number of under-resourced persons in the community (impoverished elderly, unhoused people, asylum seekers) seek care in our health care institutions.

Sarah J. Russe, DBe, MA, HEC-C, is the program manager for the clinical consult service at Northwestern Memorial Hospital in Chicago.

M. Jeanne Wirpsa, MA, BCC, HEC-C, is the program director for medical ethics at Northwestern Memorial Hospital. LinkedIn https://www.linkedin.com/in/m-jeanne-wirpsa-ma-bcc-hec-c-5807561a3/

Series Editors’ Comment: Adequate, Equitable, or Feasible?

Deciding whether a patient should receive an advanced heart therapy begins with assessing medical need and benefit, but it must also consider the social context and, therefore, norms and values. Is the therapy adequate? Equitable? Feasible? 

Advanced heart therapies can rely heavily on social support and access to care outside the hospital, as well as economic and other resources. In a society where these resources are not accessible to every patient, the appropriateness of therapies can come to be judged not just on the patient’s medical status but also on the inaccessibility of necessary supportive care. When system-level approaches are applied to individual patients–such as criteria for candidacy for organ transplantation–the injustices of disproportionate barriers to health care become readily apparent. An ethics consultation is unlikely to have the power to resolve broader injustices, but it can unpack how injustices are operating for a particular patient such as W and offer recommendations based on W’s specific context and values.

Any ethics consultation in a carceral setting can ultimately feel inadequate or even unjust. This is because the ethics consultants must work in a space between (a) perpetuating systemic medical norms that obstruct access for patients like W and (b) advocating for systemic change to prevent such inequities. Though this position is perhaps morally unsatisfactory and distressing, the ethics consultants must question, but also operate within, the boundaries of what is possible for incarcerated patients. In this case, a recommendation for VAD placement was justified based on respect for the patient’s choices, benefit, prevention of harm, and equitable care. But the clinical ethics consultants must also be careful to identify and categorize any ethical failures. For example, was it an ethical failure not to list W for a heart transplant because, even though it might have been medically justified, getting that medical benefit was not feasible in the carceral setting? Ultimately, even when it is impossible to overcome barriers to accessing beneficial medical care, it can still be important to track such barriers in order to make institutional and societal improvements that address patients’ needs equitably and respectfully.

The ethicists in this case had the institutional support needed to explore W’s context and values, even though their process required more resources than are typically devoted to patients living in the community. Ethicists in other institutions may not find similar support. What the ethicists achieved for W would not be considered feasible everywhere, and we should be asking: why not?

– Gina Campelia and Laura Guidry-Grimes

Learn more about the series: Clinical Ethics Case Studies for Hastings Bioethics Forum.

Attention clinical ethicists: learn how to contribute to the series.

[Image: white hand guiding black hand to type on keyboard]

Bioethics Forum Essay

Bioethicists Should Speak Up Against Facilitated Communication

Last month, Netflix premiered the documentary Tell Them You Love Me, the story of former Rutgers University professor Anna Stubblefield, who was convicted of first-degree aggravated sexual assault in 2015 for raping Derrick Johnson, a profoundly intellectually disabled 28-year-old whose “consent” she claimed to have procured through facilitated communication (FC). Tell Them You Love Me shot to the top of Netflix’s global streaming chart, and the response on social media was consistent: users found the film “disturbing”; they appropriately described Stubblefield as “delusional” and as a “predator.”

But few seemed aware that this thoroughly discredited intervention, in which a nonverbal and severely cognitively impaired person is assisted in spelling out messages on a letter board or keyboard through physical support provided by a non-disabled facilitator, is not only surging in popularity under different names–including the Rapid Prompting Method (RPM) and Spelling to Communicate (S2C)–but is also currently being platformed at the highest levels of science and education. In 2021, a letterboard user was appointed to the Interagency Autism Coordinating Committee; in 2022, facilitated students graduated from UCLA, Berkeley, and Rollins College. And in 2023, a speller was featured in a webinar hosted by the National Institute on Deafness and Other Communication Disorders.

We’re concerned by the muted response of bioethicists to more than three decades of abusive and exploitative pseudoscience. A quick search reveals relatively little bioethics scholarship on this disreputable practice. Public discussions of FC in bioethics have included a panel of FC critics hosted at Harvard’s Petrie-Flom Center in 2016, and a 2020 Hastings Center event that included prominent letter board user DJ Savarese. More broadly, FC raises the question of how the discipline should respond to consequential, even dangerous, health interventions that are widely embraced either in the absence of scientific evidence or–as in the case of FC–despite overwhelming evidence that they just don’t work.

As a field, bioethics has an explicit obligation to defend vulnerable populations from abuse and exploitation. This basic maxim was essentially built into the discipline, embodied in the principle of Respect for Persons and clearly articulated in 1978 by the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research in both the Belmont Report and the statement on Research Involving Those Institutionalized as Mentally Infirm. Although Tom Beauchamp and James Childress re-branded this principle a year later in their classic textbook, Respect for Autonomy is just one of the “two separate moral requirements” identified by the National Commission. Equally important as the maximization of the autonomy of capacitated individuals, although much less celebrated, is “the requirement to protect those with diminished autonomy,” which may require “extensive protection.” The degree of protection, according to the Report, should be calculated based “upon the risk of harm and the likelihood of benefit.”

FC spectacularly fails both sides of this cost-benefit analysis. Dozens of controlled studies dating back to the mid-1990s overwhelmingly prove that facilitators (often unintentionally) direct the output in FC through a host of cues, psychological biases, and ideomotor effects–the same small, unconscious movements that also explain Ouija boards and other allegedly paranormal phenomena (see Lilienfeld, Marshall, et al., 2015; Schlosser, Balandin, Hemsley, et al., 2014; Hemsley, Bryant, Schlosser, et al., 2018). Following the publication of these studies, the American Speech-Language-Hearing Association, the American Association on Intellectual and Developmental Disabilities, the American Psychological Association, and many other organizations issued position statements against FC. To this day, independent communication through facilitation has never been demonstrated under controlled conditions, even though the James Randi Educational Foundation offered a $1 million prize to anyone who could successfully do so.

Not only is the evidence against FC unequivocal, but this intervention subjects severely cognitively impaired individuals to well-documented harms–the sexual abuse documented in the Stubblefield case representing only the best-known example. More than five dozen false abuse claims have also been made through FC, resulting in the imprisonment of parents and the placement of severely autistic children in foster care. There are significant opportunity and financial costs to this intervention as well, which not only costs upwards of $30,000 per year per student, but is pursued in place of evidence-based forms of augmentative and alternative communication (AAC). Most importantly and most routinely, every FC interaction involves the co-option of authentic autistic communication by nondisabled facilitators, which strips severely disabled people of what little control they have over their day-to-day lives.

We understand why it is tempting to avoid this debate–in an era that privileges lived experience, those who challenge the authenticity of the sophisticated and poignant reflections that emerge through FC are often attacked as “ableist” perpetrators of “epistemological violence.” But there is too much at stake to be intimidated. In a 2012 paper critical of FC, James Todd rightly articulates a “moral obligation to be empirical,” which comprises three duties:

(a) Know exactly what we are doing (not just what we think we are doing), (b) clearly and objectively determine whether our procedures are actually bringing socially significant and objectively measurable (not imagined) benefits to our clients, and (c) stop what we are doing if we cannot meet the standards of a and b.

Although Todd is a psychologist, his attribution of these duties to those professions that “dabble as professionals in the lives of others, or teach other people to do so” certainly applies to bioethicists as well. And the American Society for Bioethics and Humanities recognizes this duty, requiring in its disclosure form that “all scientific research referred to, reported or used . . . will conform to the generally accepted standards of experimental design, data collection and analysis.” In short, it’s hardly a controversial claim that we who work in bioethics and the medical humanities should be guided by scientific fact and publicly reject pseudoscience, no matter how hopeful or affirming. Which means it’s well past time to mount a vigorous opposition to FC–before more nonverbal persons are hurt and more desperate parents fall prey to the charlatans who promise to channel their child’s intact mind, but who deliver nothing more than ventriloquism.

Amy S.F. Lutz, PhD, is a senior lecturer in the History and Sociology of Science Department at the University of Pennsylvania. @AmySFLutz

Dominic Sisti, PhD, is an associate professor of medical ethics and health policy at the Perelman School of Medicine at the University of Pennsylvania and a Hastings Center fellow. https://www.linkedin.com/in/dominicsisti/

[Photo: Netflix]

[Image: African American man in pink shirt on computer]

Bioethics Forum Essay

Was This Job Market Study Ethical?

A paper titled “Social Media and Job Market Success: A Field Experiment on Twitter” posted on the Social Science Research Network in May has sparked criticism for lack of informed consent, use of deception, and potential harm to job candidates. While this kind of experiment isn’t an example of clinical research, we think that the ethical norms of clinical research are useful in considering and addressing the criticism of it.

The paper discusses an experiment in which researchers created an account on Twitter (now X) called Econ Job Market Helper and invited people looking for academic jobs in economics to submit a tweet of their job market paper to be posted by the account. Unbeknown to the candidates, some tweets, selected at random, were assigned to be retweeted with a quote–quote-tweeted–by established economists (“influencers”). Candidates from underrepresented groups (women, racial and ethnic minorities, and LGBTQ+ individuals) had the greatest chance of having their tweet quote-tweeted. The researchers’ goal was to assess whether social media promotion could improve employment outcomes, particularly for job applicants from underrepresented groups.

The results were striking. Tweets that were quote-tweeted by influencers (the intervention group) received about four times as many views and three times as many likes as those that weren’t (the control group). This increased Twitter activity appeared to translate into better outcomes: quote-tweeted job candidates secured an average of one additional in-person interview and 0.4 additional job offers compared with those in the control group. Notably, women in the intervention group received an average of 0.9 more job offers than their counterparts in the control group. Both findings were statistically significant.

Was this experiment ethically problematic? Or was it a useful study involving common practices on Twitter?

As research ethics scholars, we see this debate as an opportunity to shed some light on ethical issues in nonclinical research. We see two primary ethical concerns with the study–the deception of participants and the use of randomization. And we aim to show how established research ethics principles and frameworks may be helpful for working through them.

Deception and Informed Consent

An obvious ethical concern with the study is its use of deception. Job market candidates knew they were in a study–they completed surveys–and knew their tweets would be posted by Econ Job Market Helper. However, they were unaware of the quote-tweeting experiment and were not informed about the purpose of the study. This lack of transparency likely led candidates to develop false beliefs about the study’s nature and precluded informed consent.

Deception and lack of informed consent in a research study are not necessarily problems. Deception is sometimes necessary for research, and U.S. regulations allow waivers of informed consent (including for deception) in certain circumstances, including when the research poses no more than minimal risk and does not adversely affect participants’ rights and welfare.

Did this experiment meet these requirements? The researchers could have informed job market candidates that there was a second stage to the tweet promotion that involved randomization, but it would be difficult to ensure that candidates would not disclose information about the study publicly, thus intentionally or unintentionally undermining it. The key question is whether the experiment posed no more than minimal risk–whether “the probability and magnitude of harm or discomfort anticipated in the research are not greater in and of themselves than those ordinarily encountered in daily life or during the performance of routine physical or psychological examinations or tests.”

On the one hand, prominent economists quote-tweet job market papers frequently, suggesting the risks associated with the study are similar to those that candidates ordinarily face. On the other hand, the academic job market season is highly consequential and has a zero-sum structure, in which an advantage for one candidate comes at the expense of others. The experiment introduced the risk of fewer “flyouts” or job offers for candidates in the control group (as well as for bystander candidates), and with it the possibility of less desirable job offers, lower salaries, or unemployment. Given the high stakes of the academic job market, randomization of quote-tweets may pose more than minimal risk to candidates who don’t get them.

Deception of third parties (in this case, Twitter users and search committee members) is not something that research regulations have fully addressed. These third parties do not meet the definition of human subjects because no identifiable information is collected about them. Yet they may have developed false beliefs that prominent economists were endorsing papers, or simply that their Twitter feeds were not shaped by an experiment. Public health and social science research need guiding principles for considering deception beyond the binary participant-researcher relationship.

Randomization

The second contentious feature of the study was the randomization of participants into intervention and control groups, with candidates from underrepresented groups given a two-thirds probability of being in the intervention group.
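To make the design concrete, here is a minimal sketch, in Python, of how such a weighted randomization might work. The two-thirds intervention probability for underrepresented candidates comes from the study; the even split for other candidates, and all names below, are our hypothetical placeholders, not the study team’s actual code.

```python
import random

# Minimal sketch of a weighted randomization scheme (not the study's code).
# The 2/3 intervention probability for underrepresented candidates is from
# the paper; the 1/2 probability for other candidates is a hypothetical
# placeholder used only for illustration.
P_INTERVENTION = {"underrepresented": 2 / 3, "other": 1 / 2}

def assign_arms(candidates, seed=2024):
    """Assign each (name, group) pair to 'intervention' or 'control'."""
    rng = random.Random(seed)  # seeded so the assignment is reproducible
    return {
        name: "intervention" if rng.random() < P_INTERVENTION[group] else "control"
        for name, group in candidates
    }

if __name__ == "__main__":
    candidates = [("A", "underrepresented"), ("B", "other"), ("C", "underrepresented")]
    print(assign_arms(candidates))
```

The point of the weighting is visible in the table of probabilities: a candidate’s chance of receiving the quote-tweet intervention depends on group membership, which is precisely the feature the next paragraphs interrogate.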

Randomization could be considered permissible if it is consistent with the investigator’s duties. For example, if a medical researcher has a duty to provide participants with a specific treatment (such as a standard cancer therapy for participants with cancer who have volunteered for a study to test the safety and effectiveness of an experimental treatment), it is widely acknowledged that randomization is permissible when the relevant expert community is uncertain about whether one of these treatments is superior. In the economics job market study, if influencers’ random quote-tweeting of job market papers is consistent with the researchers’ professional obligations, randomization may be permissible.

What are the relevant professional obligations of this study’s researchers? We think the answer is ambiguous, based on the American Economic Association’s Code of Professional Conduct. The code does not include norms regarding the use of social media, but the study’s weighted randomization scheme would seem to be consistent with its responsibility for “supporting participation and advancement in the economics profession by individuals from all backgrounds, including particularly those that have been historically underrepresented.” In addition, professional economists do not have a duty to use social media to promote job candidates, so not promoting all job candidates isn’t an ethical shortcoming. Thus, although the randomization scheme poses risks to candidates in the control group and to bystanders, these risks are not the result of wrongful behavior and so are not morally problematic.

But the AEA also calls on economists to create a “professional environment with equal opportunity and fair treatment for all economists,” which arguably implies that they should avoid advantaging or disadvantaging job market candidates on arbitrary grounds. If, as the study team believes, prior evidence supports the claim that quote-tweets by prominent economists are likely to improve job market outcomes, any randomization scheme would appear to violate this norm. In other words, if this interpretation of the AEA code is correct, then the argument that randomization is permissible because it is consistent with professional norms will fail.

Some commenters on Twitter argued that the weighted randomization scheme offers a defensible balance of costs and benefits because it yields socially valuable knowledge and minimizes unfairness by giving candidates from underrepresented groups a higher probability of being in the intervention group (and perhaps even improves on the status quo in the economics job market). This argument could work when research poses only minimal risk. But proponents of this argument should be prepared to explain the social value of the knowledge gained from the experiment and why it is sufficient to justify the unfair costs to candidates in the control arm (some of whom were from underrepresented groups).

Our goal in this essay is not to condemn the study but to raise ethical concerns about its use of deception and randomization and show how principles and frameworks from research ethics may be used to work through them. We think there are two lessons to draw from this study. First, it lends support to the call for social science researchers to include structured ethics appendices in their papers, both to improve discussions of the ethics of studies and to clarify and improve the ethical norms governing how studies are conducted. Second, it may be worthwhile for experimental economists to consider the ethical dimensions of their experiments more systematically. The U.S. political science community offers a potential model here, with the 2022 edition of the American Political Science Association’s A Guide to Professional Ethics in Political Science including a lengthy section outlining principles and guidelines for human subjects research.

Douglas MacKay, PhD, is an associate professor in the Department of Public Policy, the Center for Bioethics, and the Philosophy, Politics, and Economics Program at the University of North Carolina, Chapel Hill. @douglaspmackay

Katherine W. Saylor, PhD, is a fellow in the Ethical, Legal, and Social Implications of Genetics and Genomics at the Perelman School of Medicine at the University of Pennsylvania. @kwsaylor

white man taking blood sample from black man’s arm to test for syphilis, part of the Tuskegee study

Bioethics Forum Essay

National Research Act at 50: An Ethics Landmark in Need of an Update

On July 12, 1974, President Richard M. Nixon signed into law the National Research Act, one of his last major official actions before resigning on August 8. He was preoccupied by Watergate at the time, and there has been speculation about whether he would have done this under less stressful circumstances. But enactment of the NRA was a foregone conclusion. After a series of legislative compromises, the Joint Senate-House Conference Report was approved by bipartisan, veto-proof margins in the Senate (72-14) and House (311-10).

The NRA was a direct response to the infamous Untreated Syphilis Study at Tuskegee, whose existence and egregious practices, disclosed by whistleblower Peter Buxtun [whose death was reported on July 15, 2024], were originally reported by Associated Press journalist Jean Heller in the Washington Star on July 25, 1972. After congressional hearings exposing multiple research abuses, including the Tuskegee syphilis study, and legislative proposals in 1973, support coalesced around legislation with three main elements: (1) directing preparation of guidance documents on broad research ethics principles and various controversial issues by multidisciplinary experts appointed to a new federal commission, (2) adopting a model of institutional review boards, and (3) establishing federal research regulations applicable to researchers receiving federal funding.

This essay reflects on the NRA at 50. It traces the system of research ethics guidance, review, and regulation the NRA established; assesses how well that model has functioned; and describes some key challenges for the present and future. We discuss some important substantive and procedural gaps in the NRA regulatory structure that must be addressed to respond to the ethical issues raised by modern research.  

Ethical Guidance

The NRA established the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The commission was originally proposed as a permanent entity to provide ongoing ethical guidance, but, in a compromise, it was authorized for less than three years. Among other things, the 11-member commission was directed to “identify the basic ethical principles which should underlie the conduct of biomedical and behavioral research involving human subjects [and to] develop guidelines…to assure that it is conducted in accordance with such principles.”

The commission was specifically tasked with considering several contentious issues, some of which remain significant concerns. These include fetal research; psychosurgery; the boundaries between medical research and medical practice; the criteria for assessing risks and benefits for research participants; and informed consent for research involving children, prisoners, and individuals in psychiatric institutions.

The commission’s preeminent members and exemplary staff were extremely productive, and their work products were, and remain, highly influential. For example, commission reports on research with children and prisoners figured prominently in federal regulations. Its best-known work product, the Belmont Report, identified the basic ethical principles and guidelines for research with human subjects, as directed by the NRA.

Working in subcommittees and consulting with bioethicists Tom L. Beauchamp and James F. Childress, the commission sought to identify the principles that would reflect the shared values of a diverse population. The commission initially identified seven principles, which later were reduced to the well-known three: respect for persons (honoring participant autonomy, privacy, informed consent), beneficence (requiring minimization of risks and maximization of benefits), and justice (entailing equal distribution of research burdens and benefits and protecting vulnerable populations).

The approach of the Belmont Report became known as “common morality principlism,” a term that has sometimes been used dismissively. Critics charge that the approach focuses too much on individuals and not enough on communities; in short, that it is too U.S.-centric. In addition, the approach does not rank-order the principles or indicate how they should be applied, particularly when they conflict.

Despite these criticisms, the principles have endured for 50 years. The universal appeal of this approach is illustrated by its prominent place in U.S. regulations governing human subjects research and in international research ethics, and the continued reliance on the principles as valuable guideposts for research ethics analysis by researchers, bioethics scholars, and the public. Beauchamp and Childress have further explored the application of the principles through eight editions of their landmark book, Principles of Biomedical Ethics. In April 1978, as the commission was winding down its work, Willard Gaylin, co-founder and president of a nascent bioethics think tank later known as The Hastings Center, was quoted in the New York Times: “They [the commission members] deserve the compliments and gratitude of all of us in the field.”

In subsequent years, the public bioethics commission model of addressing difficult bioethics issues has been used repeatedly in the U.S. Six federal bioethics commissions or similar entities have been created to address such issues as research using stem cells, somatic cell nuclear transfer, radiation experiments, and human enhancement. However, such commissions have been ad hoc, and, since 2017, there has been no comparable body to address the pressing bioethics issues of today and the future.

Institutional Review Boards

The NRA required entities applying for grants or contracts involving biomedical or behavioral research with human subjects to demonstrate they had an institutional review board to review the research and “protect the rights of the human subjects of such research.”

Many research institutions already had local IRBs by 1974, and researchers preferred local review to the federally directed review model used in many other countries. Perceived advantages of local IRBs included their knowledge of potential participant communities, researchers, institutional research, social mores, and applicable laws. The NRA formalized and expanded IRB review by mandating it for all federally conducted or funded research. According to a study by the Government Accountability Office, as of 2023 there were approximately 2,300 IRBs, most of them affiliated with universities or health care institutions. But there are also many independent, primarily for-profit, IRBs, which have had the largest increase in protocol reviews, a trend likely accelerated by the move to single IRB review, described below.

Traditional IRBs based at universities and health care institutions have inherent conflicts of interest because, in addition to having an interest in assuring the well-being of research participants, the institution also has a financial and professional interest in expeditious approval of the protocols supported by external funding. IRB members and administrators may feel pressured to approve submissions. For-profit IRBs also have conflicts of interest because repeat business depends on their being easier, faster, and presumably more favorable alternatives to university or health care IRBs.

In one of the most important recent changes to IRB review, effective in 2020, NIH-funded multisite and cooperative research must use single (or central) IRB review. This process is designed to eliminate duplicative and sometimes inconsistent IRB reviews and to expedite the review process. It is available to all IRBs, including commercial IRBs, that are registered with the Office for Human Research Protections. It remains to be seen whether this new procedure will achieve the goals of consistency and expediency.

Despite 50 years of experience, assessing and improving the quality of IRB reviews remains challenging. IRBs must have a minimum of five members, and large institutions typically have multiple, much larger committees. Thus, based on the GAO estimate mentioned previously, U.S. IRBs have a minimum of 11,500 members, plus professional staff. Reviews are rarely shared with IRBs outside the institution. Public Responsibility in Medicine and Research (PRIM&R), a nonprofit organization that provides educational services to researchers and research administrators, was founded in 1974. Since 1999, it has offered a certification process for IRB officials. However, IRB service is burdensome and often uncompensated, and many IRB members do not take advantage of PRIM&R education.

The Association for Accreditation of Human Research Protection Programs, an independent, nonprofit, voluntary organization founded in 2013, uses a peer-review process to accredit IRBs. It reports that approximately 60% of U.S. research-intensive universities and medical schools have been accredited or have begun the accreditation process. Although AAHRPP accreditation requires institutions to assess the quality of their reviews, there are no clear criteria for doing so. Finally, OHRP and the Food and Drug Administration conduct on-site inspections, which may be routine or for cause (e.g., in response to a complaint). According to the GAO, only a small fraction of IRBs are inspected annually. It is also not clear how effective inspections are in preventing or remediating substandard practices.

Federal Research Regulations: The “Common Rule”

The NRA directed the secretary of the Department of Health, Education, and Welfare (now the Department of Health and Human Services) to promulgate the regulations necessary to carry out IRB review. On June 18, 1991, final regulations were published in the Federal Register. The regulations specify the composition and operations of IRBs and, incorporating the Belmont principles, the criteria for their review. The policy became known as the Common Rule because it was adopted by 15 federal departments and agencies.

Since the NRA was enacted, IRB review and compliance with the Common Rule have been mandatory only for federally funded research. This framework has proven inadequate. Although many universities and health care institutions voluntarily apply the Common Rule to research that is not federally funded, not all do. A few states, notably Maryland and Virginia, have laws that apply the Common Rule standard to all research, but there is little enforcement. Differences in other state laws may result in substantive protections for some research participants, but not others. This patchwork of voluntary compliance and state laws is not up to the task of protecting the welfare of research participants, especially now when online data is exploding, research increasingly is multisite and multistate, and research is no longer confined to universities and health care institutions.

The Common Rule has several other substantive limitations. One of them is the exclusion of deidentified information and biospecimens from protection. Increasingly sophisticated computer technology can reidentify individuals from records and specimens. The definitions of both “identifiable private information” and “identifiable biospecimens” turn on whether identity is “readily ascertainable.” This means that if the identity of information or biospecimens is not readily apparent, then they are deemed unidentifiable and the research falls outside the scope of the regulations, even if the identity can be discovered by more complex techniques. By contrast, the Health Insurance Portability and Accountability Act privacy rule uses a much more stringent standard for deidentification and lists 18 identifiers that must be removed.
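The gap between the two standards can be made concrete with a toy sketch in Python. This is not legal guidance: the identifier set shown is only a partial subset of HIPAA’s 18 safe-harbor categories, and the function names and sample record are invented for illustration.

```python
# Toy illustration (not legal guidance) of the gap between the Common Rule's
# "readily ascertainable" test and HIPAA's checklist-style safe harbor.
# Only a partial subset of HIPAA's 18 identifier categories is listed here.
HIPAA_IDENTIFIER_SUBSET = {
    "name", "telephone_number", "email", "ssn",
    "medical_record_number", "ip_address", "biometric_id", "full_face_photo",
}

def common_rule_identifiable(record: dict) -> bool:
    # Common Rule-style test: is identity apparent on the face of the record?
    # Re-identification by more sophisticated techniques is ignored.
    return "name" in record

def hipaa_safe_harbor_deidentified(record: dict) -> bool:
    # HIPAA-style test: every listed identifier category must be absent.
    return not (HIPAA_IDENTIFIER_SUBSET & record.keys())

record = {"ip_address": "203.0.113.7", "diagnosis": "type 2 diabetes"}
print(common_rule_identifiable(record))        # False: no name, so "deidentified"
print(hipaa_safe_harbor_deidentified(record))  # False: the IP address must go
```

The same record passes the looser facial test yet fails HIPAA’s checklist, which is the asymmetry the paragraph above describes.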

Another important limitation of the Common Rule is that it prohibits IRBs from considering “possible long-range effects of applying knowledge gained in the research (e.g., the possible effects of the research on public policy)” in assessing research risks. Thus, IRBs can consider only the direct effects of the research on participants and must ignore the larger societal implications, including the impact on groups. A new international study of 22 countries found that the U.S. is the only country that prohibits its research ethics review bodies from considering the societal implications of research.

Conclusion

On the 50th anniversary of the NRA, it is evident that the act needs to be updated.

First, a standing national public bioethics body to study and report on emerging issues such as gene therapy, artificial intelligence, xenotransplants, and brain-computer interfaces would provide necessary guidance in a continuously and rapidly changing scientific environment.

Second, additional efforts are required to assess and improve IRB quality. Single IRB review may mitigate some of the unresolved conflicts of interest inherent in locating research ethics review bodies at the institutions submitting research protocols. But problems remain, since the IRB likely either will be located at the institution receiving the grant (and therefore will have an incentive to approve research proposals) or it will be a for-profit IRB (and therefore will have an incentive to expedite favorable reviews to get repeat business). In addition, there is negligible oversight of IRB decisions and operations, with accreditation and training largely by voluntary, private organizations.

Third, the Common Rule should be expanded and strengthened. There was a missed opportunity to do this in the 2018 revisions. Although HHS initially proposed expanding the Common Rule’s coverage to all research, the Final Rule retained coverage only for federally funded or conducted research. Arguably, such an expansion would exceed the authority afforded by the NRA. But HHS did not submit a recommendation to Congress to authorize this expansion, nor did it notify Congress that this limitation should be addressed. Similarly, despite initial proposals, the revised Common Rule failed to add any protections for minimally deidentified information or specimens, retaining a standard that is significantly less protective than the HIPAA privacy rule.

Fifty years ago, the first steps were taken to impose deliberative processes and order on American biomedical research. These actions, however, were not complete, and time and changed circumstances have increased the gap between the NRA’s regulatory system and what is needed for well-considered and coordinated research regulation. It’s time for the research ethics community, researchers, and policymakers to take the next steps to update the actions begun on July 12, 1974.     

Mark A. Rothstein, JD, is Director of Translational Bioethics, Institute for Clinical and Translational Science at the University of California, Irvine. He is a Hastings Center fellow.

Leslie E. Wolf, JD, MPH, is the Ben F. Johnson Jr. Chair in Law and Distinguished University Professor at Georgia State University. @LeslieWolfGSU

President Joe Biden

Bioethics Forum Essay

Clinical Ethics and a President’s Capacity: Balancing Privacy and Public Interest

The Biden Administration is struggling with a dilemma that has a clinical ethics component. Where does the President’s right to privacy about his health end, and the public’s right to know begin? This question has recurred throughout American history and, unfortunately, has often been answered the wrong way: with deception. Clinical ethics norms and recent legal precedent offer important insights for responding to this ethical dilemma with much-needed transparency in a way that respects all parties involved.

Throughout his presidency, President Biden has been compared to both Franklin Delano Roosevelt and Lyndon Baines Johnson in terms of his legislative successes and effectiveness. Ironically, both FDR’s and LBJ’s presidencies led to critical constitutional amendments surrounding a president’s capacity to serve.

In 1951, the 22nd Amendment, which limits presidents to two terms, was ratified in part to avoid this exact clinical ethics dilemma: a “fourth-term Roosevelt” scenario of a president in declining health seeking re-election. In 1944, the public was alarmed by President Roosevelt’s visible aging when he sought a fourth term. Roosevelt’s oratorical skills were still strong, so he was able to rally the public behind a fourth run for office, but his appearance at the Yalta conference in 1945 (though he was only 63) revealed his terrible physical condition, prompting further alarm given the consequences of a premature death at a critical moment near the end of World War II. Indeed, Roosevelt died suddenly two months after Yalta, on April 12, leaving his successor, President Harry Truman, in the dark on critical issues, such as the atomic bomb program.

LBJ chose not to run for re-election while in declining health. In addition, the start of his term–in the wake of the assassination of his predecessor, John F. Kennedy–led to the ratification in 1967 of the 25th Amendment to more clearly outline what to do in the event of a president’s sudden death or incapacity.

I published an opinion piece in my newsletter on July 5 about the options available to the Biden Administration and used clinical ethics as a frame, discussing the 25th Amendment as essentially a clinical ethics document. I outlined three options: (a) President Biden voluntarily leaves office or steps down as the nominee, which protects his privacy; (b) President Biden is cleared for fitness by an independent medical assessment released to the public with his consent in order to assuage public concern; or (c) the Biden administration veers into 25th Amendment territory by arranging for an independent medical assessment of the President against his wishes.

Since then, an informal capacity assessment of President Biden played out on television screens in an interview the President did on July 6 with journalist George Stephanopoulos. A transcript raised the question of whether the President fully understands his debate performance. When asked if he had watched a recording of the debate, Biden responded, “I don’t think I did, no . . . And so, I just had a bad night. I don’t know why.”

From a clinical ethics perspective, President Biden has the right not to know why he struggled in the debate (or even, as reported, about a recurring pattern of cognitive decline), and if he does find out, he has the right to keep that knowledge private. However, President Biden is not a typical patient. From a governance perspective, there is a good clinical reason for President Biden to find out why he had such a bad night, as the implications of an undiagnosed condition may override the President’s personal preference to decline to be assessed at this time. There ought to be limits to a presumptive presidential nominee’s autonomy, and those limits could trigger a solution to this dilemma.

Concealment of a president’s health status has some moral defense from a national security perspective, particularly during a war. But this position only works when there are no outward signs of a health problem. If a health condition is on full public display, with objective and overt clinical symptoms, it would be ethically imperative to be transparent about the President’s condition. Some physicians, such as Sanjay Gupta, have called for neurological workups and public disclosure. Ezekiel Emanuel has noted that, even in the absence of any underlying condition other than aging, President Biden has clearly lost some of his cognitive abilities. Investigative reporting led to confusing facts about how often the President was seen by a neurologist, prompting the White House physician to explain these visits in a letter, confirming that, aside from an annual physical, the President has not had any recent neurological workup.

Major global consequences resulting from an American president’s illness are part of our history. In 1919, when Woodrow Wilson was negotiating the Treaty of Versailles, he had likely already suffered several mini-strokes and was ill with the 1918 influenza. Historians note that he was compromised in these World War I negotiations, which contributed to the rise of Nazi Germany. FDR was not his optimal self at Yalta in 1945, either; historians wonder whether this led to a suboptimal negotiation about how to divide Europe at the end of World War II.

Currently, there is absolutely no protocol for how White House physicians, including President Biden’s physician, should balance a president-patient’s privacy, protected by the Health Insurance Portability and Accountability Act (HIPAA), and the public’s right to know the health status of their sitting president. There are circumstances in which there is a clear ethical “duty to warn.” In the clinical context, the legal and ethical duty to warn identifiable third parties of foreseeable harm was established in Tarasoff v. Regents of the University of California, in which the court held that a patient’s confidentiality, or the doctor-patient “protective privilege,” “ends where the public peril begins.” In Tarasoff, the failure to warn a woman about a premeditated homicide by her boyfriend, who had confided the plan to his University of California psychologist in 1969, led to a new standard for warning third parties who wittingly or unwittingly may be victims when a patient is an agent of harm. This case established the role of mandated reporting in the psychosocial context.

The Tarasoff case provides guidance regarding the ethical duty to warn, which extends into several health contexts, including infectious disease (e.g., partner notification of HIV), genetics (warning at-risk relatives about serious inherited diseases, such as autosomal dominant conditions), and impaired driving. With respect to impaired driving, health care providers may breach HIPAA confidentiality when they have a duty to warn the Department of Motor Vehicles about medically or cognitively compromised drivers in the interest of public safety. In fact, failure to warn can expose physicians to litigation by a harmed party. The duty to warn rests with the treating physician, but so does the duty to verify fitness to serve.

A president’s annual physical is supposed to verify fitness to serve, but when a president’s condition becomes alarming, an explanation to the public is ethically obligatory. To balance the president’s medical privacy and the public’s right to know, the president should be allowed time to make a decision about public disclosure of a medically disqualifying condition. But should he (or his administration) decline to disclose it, then the president’s physician is ethically permitted to disclose his medical status.

Legitimate ethical questions can be raised about whether any president, as a “celebrity patient,” is actually a more vulnerable patient: because of VIP syndrome and subjective political considerations, physicians may be less likely to tell the patient the truth, order necessary tests, or refer the patient for appropriate further evaluation. VIP syndrome can also lead to conflicts of commitment or conflicts of interest. In 2018, White House physician Dr. Ronny Jackson told the public that the President might live to be “200 years old.” In 2024, new reports confirm that Kevin O’Connor, the current President’s physician, is a friend of the Biden family.

Throughout American history, physicians have repeatedly deceived the public about presidents’ health. Examples include Grover Cleveland’s secret cancer surgery in 1893, Woodrow Wilson’s massive stroke in 1919, FDR’s cardiovascular disease, John F. Kennedy’s health issues and his “Dr. Feelgood,” Ronald Reagan’s early signs of dementia, and Donald Trump’s declining oxygen levels when he had Covid. The Biden Administration should end this practice of concealment by providing the public with a truthful assessment of the President’s health status, given the staggering consequences of this election and the potential peril facing the country.

M. Sara Rosenthal, PhD, is Professor and Founding Director of the University of Kentucky Program for Bioethics and Oncology Ethics Program and Chair of the UK Healthcare Ethics Committee.

disabled boy with headphones sitting and looking at screen

Bioethics Forum Essay

Access to Pediatric Assistive Technology: A Moral Test

Most of us have a weakness for a donut and coffee in the morning. But not everyone places their order in the same way. One young man we know uses an application on his iPad to communicate his preferences, including his predilection for a chocolate-frosted donut and an iced coffee with almond milk. This device allows him to express himself independently just like everyone else.

For this individual and other people with disabilities, augmentative and alternative communication (AAC) devices facilitate communication, which, as we have argued, helps constitute community and societal integration. AACs encompass a range of technologies, such as tablet applications and eye-gaze devices. For some individuals, these devices supplement another form of communication, such as speech or sign language; for others, AACs are their singular means of connecting with the world beyond them.  

Over the past several years, the Division of Medical Ethics at Weill Cornell Medical College and Blythedale Children’s Hospital (BCH) have collaborated to track the process by which children with brain injury and their families acquire access to assistive technology. Our goal was to map a byzantine process that heretofore had never been charted. We hoped to identify bottlenecks and drive quality improvement for children with disabilities.

Our previous analysis drew from BCH medical records over a two-year period and included 72 children with brain injury who received a prescription for at least one assistive device. Despite the multitude of resources and remarkable clinical expertise available at BCH, we found that only 55% of devices were delivered. Furthermore, the average time to delivery was 69.4 days, with a range of 12 to 250 days. The device with the longest time to delivery was a special needs car seat, a technology that quite literally provides a child with access to the surrounding community.

We recently met to continue our research. At a multidisciplinary team meeting, we learned of the process by which a child acquires an AAC device. It’s a maddening process. First, the clinician, often a speech-language pathologist, identifies a need and determines what sort of device would be best suited to assist with communication. Then they prescribe an appropriate device. To ensure that this is money well spent, the insurance company requires a one-month trial of the device before it is approved. And then the vendor supplies the device, as required by the insurance company.

Makes sense, right? But now illogic and, we would say, cruelty creep in. After a successful trial of the device (an intervention that will help a child communicate with their family or go to school), the device is taken away by the insurance company for the duration of the approval process.

Let us reiterate what happens. For one month, the child is given access to the AAC device that provides them with a previously unavailable mode of communication. For one month, they can communicate their wishes to their parents and siblings, respond to their teacher’s question in the classroom, or make new friends at the playground. And during that one month, they grow and develop, as children do and ought to do. Maybe they learn to tell jokes, read aloud, or order their favorite breakfast beverage independently.

In 2023, clinicians at BCH prescribed 18 AAC devices. Each of the devices was deemed eligible for a particular child and approved by an insurance company for coverage. However, despite the success of the one-month trial period and subsequent insurance approval, the children had to wait to get their devices. The average time to delivery was nine weeks, with a range of one week to five months. The device with the longest time to delivery was an eye-gaze device. This AAC helps individuals with motoric disabilities communicate.

These delays to delivery are significant. Education literature suggests that first through eighth grade students lose between 17% and 28% of their English language arts skills and 25% to 34% of their math skills during the three-month summer vacation. While the “summer slide” experienced by typically developing children is concerning, one can only imagine the devastating impact of delays for children who rely on access to assistive technology. For a child who waits nine weeks, it’s the loss of nearly a whole summer. When the delay is five months, that’s a couple of summers. Furthermore, children with cognitive or speech disabilities can miss critical neurological milestones when they are unassisted. This compounds the effect of a delay and may lead to repercussions with enduring ill effects.

The illogic of this delay leaves us speechless. All the more so because the data from BCH reveals that all the patients who demonstrated improvement during the one-month trial ultimately received their devices. So why the wait? Why impede their development? And why the cruelty? After these children are given the keys to communication, these keys are taken away. The door is locked, and their world goes dark. What had been an opportunity for community and reciprocity is now one for segregation and isolation. How can this be right?

To delay these benefits is especially paradoxical because there is a small but growing neuroethics literature arguing in favor of post-trial obligations following device trials that have benefited study participants, whom Goering et al. characterize as “pioneers.” The question of post-trial obligations for as yet unproven devices is now the focus of grants funded by the BRAIN Initiative. This funding priority represents a normative argument for post-trial access to investigational devices. In the context of AAC, we are delaying access to devices that have already been proved therapeutically effective.

Beyond the ethics, we contend that these delays are also a matter of law. As we have written, the Americans with Disabilities Act (ADA) mandates maximal societal integration for individuals with disabilities. Title IV of the ADA outlines access to assistive technology, naming telecommunications devices for the deaf, also known as teletypewriters (TTY). In 1990, when the ADA became law, TTY was the primary assistive device for communication. With progress in electronics and neuroscience, communication devices have advanced far beyond TTYs. Because of this progress, we must not be stuck in a purely textual reading of the ADA that limits access to more modern technologies.

These advances remind us of the Deweyan aphorism that speaks to how technological progress can expand our moral horizons. In Common Sense and Scientific Inquiry, Dewey wrote, “Inventions of new agencies and instruments create new ends; they create new consequences which stir men [all people] to form new purposes.” So it is here. We are compelled to use the marvels of modern assistive technology to serve some of the most vulnerable among us. It would violate the spirit of the law, and its normative implications, to eschew novel technologies that could further remediate the segregation of people with disabilities.

Through our collaboration with BCH, we have seen the dedication of hospital administrators, clinicians, and therapists providing excellent care to children and their families. Among their robust services (inpatient and outpatient care and a state-accredited public school) is the specialty assistive technology clinic, which provides loaner devices to help bridge the gap between the trial period and the arrival of the device. However, even Blythedale does not have the resources to make eye-gaze devices (which can cost $15,000 or more) available during the waiting period, especially if it lasts five months.

And what of the children and families who never have access to the specialty care and advocacy that BCH offers? This is a deeper level of inequity that transcends the technology and speaks to broader systems of care. For these children, delays risk becoming denials. This is something society should neither allow nor accept.

Former Vice President Hubert H. Humphrey reminds us, “It was once said that the moral test of government is how that government treats those who are in the dawn of life, the children; those who are in the twilight of life, the elderly; and those who are in the shadows of life, the sick, the needy and the handicapped.” These words, now enshrined in marble in the Department of Health and Human Services building that bears Humphrey’s name, should be ensconced in policy to give voice to the voiceless.

Anything less is not worthy of us and is a violation of civil rights.

Kaiulani S. Shulman, B.A., graduated from Yale College with distinction in religious studies. She is a research assistant in the Division of Medical Ethics at Weill Cornell Medical College and will start medical school in the fall.

Joseph J. Fins, M.D., D. Hum. Litt. (hc), M.A.C.P., F.R.C.P., is the E. William Davis Jr. M.D. Professor of Medical Ethics, a professor of medicine and chief of the division of medical ethics at Weill Cornell Medical College; Solomon Center Distinguished Scholar in Medicine, Bioethics and the Law and a Visiting Professor of Law at Yale Law School; and a member of the adjunct faculty at the Rockefeller University. He is a Hastings Center fellow and chair of the Center’s board of trustees.

Acknowledgements:

The authors acknowledge the support of a pilot award from the Weill Cornell Medical College Clinical & Translational Science Center, “Assistive Technology in Pediatric Brain Injury Following In-patient Rehabilitation: Access, Barriers and Burdens on Patients and Families” [UL1TR002384] and the Blythedale Children’s Hospital, and the Monique Weill-Caulier Charitable Trust. We would like to acknowledge the collegiality and insights of the Assistive Technology in Brain Injury research team, including colleagues Debjani Mukherjee, Linda Gerber, and Jennifer Hersh from Weill Cornell Medical College and Barbara Donleavy-Hiller, Karen Conti, Julie Knitter, Rita Erlbaum, Marnina Allis, Linda Fieback, William Watson, as well as the late Barbara Milch from Blythedale Children’s Hospital. We are especially grateful for the visionary leadership of Larry Levine, President and CEO of Blythedale Children’s Hospital.

hands holding and touching cellphone

Bioethics Forum Essay

Griefbots Are Here, Raising Questions of Privacy and Well-being

Hugh Culber is talking to his abuela, asking why her mofongo always came out better than his even though he is using her recipe. She replies that it never came out well and she ended up ordering it from a restaurant. While it is touching, what makes this scene in a recent Star Trek: Discovery episode so remarkable is that Culber’s abuela has been dead for 800 years (it’s a time travel thing) and he is conversing with her holographic ghost as a “grief alleviation therapeutic.” One week after the episode aired in May, an article reported that science fiction has become science fact: the technology is real.

AI ghosts (also called deathbots, griefbots, AI clones, death avatars, and postmortem avatars) are large language models built on available information about the deceased, such as social media, letters, photos, diaries, and videos. You can also commission an AI ghost before your death by answering a set of questions and uploading your information. This option gives you some control over your ghost, such as excluding secrets and making sure that you look and sound your best.
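For readers curious about the mechanics, the following Python sketch shows one way a persona prompt for such a system might be assembled from a deceased person’s writings. Everything here, from the function names to the stubbed model call and the sample excerpts, is a hypothetical illustration, not any vendor’s actual pipeline or API.

```python
# Hypothetical sketch of assembling a griefbot persona prompt from source
# material; reply() is a stub standing in for a real large language model call.

def build_persona_prompt(name: str, sources: list) -> str:
    """Combine excerpts of the deceased's writings into a system prompt."""
    excerpts = "\n---\n".join(sources)
    return (
        f"You are an interactive memorial of {name}. Imitate their voice, "
        f"vocabulary, and memories, relying only on the excerpts below.\n"
        f"{excerpts}"
    )

def reply(persona_prompt: str, user_message: str) -> str:
    """Stub standing in for a call to a large language model."""
    return f"[model response conditioned on the persona and {user_message!r}]"

if __name__ == "__main__":
    sources = [
        "Letter, 1998: 'The secret is more garlic than the recipe says.'",
        "Diary, 2003: 'Cooked mofongo for the whole family today.'",
    ]
    prompt = build_persona_prompt("Abuela", sources)
    print(reply(prompt, "Why does your mofongo taste better than mine?"))
```

The sketch makes one point concrete: the ghost can only be as faithful, and as discreet, as the corpus fed into it, which is why the questions of ownership, privacy, and consent below matter so much.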

AI ghosts are interactive. Some of them are text bots, others engage in verbal conversations, and still others are videos that appear in a format like a Zoom or FaceTime session. The price of creating an AI ghost varies around the world. In China, it’s as low as several hundred dollars. In the United States, there can be a setup cost ($15,000) and/or a per-session fee (around $10).

Although simultaneously fascinating and creepy, these AI ghosts raise several legal, ethical, and psychological issues.

Moral status: Is the ghost simply a computer program that can be turned off at will? This is the question raised in the 2013 episode of Black Mirror, “Be Right Back,” in which Martha, a grieving widow, has an AI ghost of her husband created and later downloads it into an artificial body. She finds herself tiring of the ghost-program because it never grows. The AI robot ends up being kept in the attic and taken out for special occasions.

Would “retiring” an AI ghost be a sort of second death (death by digital criteria)? If the ghost is not a person, then no, it would not have any rights, and deleting the program would not cause death. But the human response could be complicated. A person might feel guilty about not interacting with the griefbot for several days. Someone who deletes the AI might feel like a murderer.

Ownership: If the posthumous ghost was built by a company from source material scraped from social media and the internet, then it’s possible that the company would own the ghost. Survivors who use the AI would merely be leasing it. In the case of a person commissioning their own AI before death, the program would likely be their property and could be inherited as part of their estate.

Privacy and confidentiality: If Culber tells AI abuela that he altered her recipe, that information might be collected, and owned, by the AI company, which may then program it into other AIs or even reproduce it in a cookbook. The AI abuela could also be sold to marketing companies: Culber’s abuela may try to sell him ready-to-eat mofongo the next time they interact.

AIs are built, in part, on the questions we ask and the information we share. What if Martha’s daughter tells her AI dad that she wants a particular toy? Martha could find a bill for that toy, ordered by the ghost without her knowledge. Modern social media is all about collecting data for marketing, so why would a griefbot be any different?

Efficacy: Culber said that talking to his abuela’s “grief alleviation therapeutic” was helpful to him. Martha eventually found that the AI android of her husband was a hindrance, preventing her from moving on. Would today’s AI ghosts be a help or a hindrance to the grieving process?

Some researchers have suggested that we could become dependent on these tools and that they might increase the risk of complicated grief, a psychological condition in which we become locked in grief for a prolonged period rather than recovering and returning to our lives. Also consider a survivor who had been abused by the deceased and later encounters this person’s AI ghost by chance, perhaps through marketing. The survivor could be retraumatized, haunted in the most literal sense. On the other hand, in my study of grieving and continuing bonds, I found that nearly 96% of people engage with the dead through dreams, conversations, or letters. The goal of grieving is to take what was an external relationship and reimagine it as an internal relationship that exists solely within one’s mind. An AI ghost could help reinforce the feeling of being connected to the deceased person, and it could help titrate our grief, allowing us to create the internalized relationship in small batches over an extended time.

Whether AI ghosts are helpful or harmful may also depend on a survivor’s age and culture. Complicated grief is the more likely outcome for children who, depending on the developmental stage, might see death as an impermanent state. A child who can see a parent’s AI ghost might insist that the parent is alive. Martha’s daughter is likely to feel more confused than either Martha or Culber. As a Latine person for whom Día de los Muertos is part of the culture, Culber might find speaking with the dead a familiar concept. In China, one reason for the acceptance of AI ghosts might be the tradition of honoring and engaging with one’s ancestors. In contrast, the creepiness that Martha feels, and that I share, might arise from our Western cultures, which draw a comparatively fixed line between living and dead.

A recent article suggests guidelines for the ethical use of griefbots, including restricting them to adult users, ensuring informed consent (from people whose data is used, from heirs, and from mourners), and developing rules for how to retire the griefbots. We must also be wary of unethical uses, such as theft, lying, and manipulation. AIs have already been used to steal billions.

Our mourning beliefs and practices have changed over time. During the Covid pandemic, streamed funerals were initially seen as odd, but now they seem like a normal option. A similar trajectory toward public acceptance is likely for deathbots. If so, individuals should be able to choose whether to commission one of themselves for their heirs or to create one of their deceased loved ones.

But as a society we must decide whether the free market should continue to dominate this space and potentially abuse our grief. For example, should companies be able to create AI ghosts and then try to sell them to us, operating like an amusement park that takes our picture on a ride and then offers to sell it to us when we disembark? Perhaps griefbots should be considered therapeutics that are subject to approval by the Food and Drug Administration and prescribed by a mental health professional. The starting point should be clinical studies on the effect this technology has on the grieving process, which should inform legislators and regulators on the next steps: to leave AI ghosts to the marketplace, to ban them, or to regulate them.

Craig Klugman, PhD, is the Vincent de Paul Professor of Bioethics and Health Humanities at DePaul University. @CraigKlugman