Bioethics Forum Essay
Griefbots Are Here, Raising Questions of Privacy and Well-being
Hugh Culber is talking to his abuela, asking why her mofongo always came out better than his even though he is using her recipe. She replies that it never came out well and she ended up ordering it from a restaurant. While it is touching, what makes this scene in a recent Star Trek: Discovery episode so remarkable is that Culber’s abuela has been dead for 800 years (it’s a time travel thing) and he is conversing with her holographic ghost as a “grief alleviation therapeutic.” One week after the episode aired in May, an article reported that science fiction has become science fact: the technology is real.
AI ghosts (also called deathbots, griefbots, AI clones, death avatars, and postmortem avatars) are large language models built on available information about the deceased, such as social media, letters, photos, diaries, and videos. You can also commission an AI ghost before your death by answering a set of questions and uploading your information. This option gives you some control over your ghost, such as excluding secrets and making sure that you look and sound your best.
AI ghosts are interactive. Some of them are text bots, others engage in verbal conversations, and still others are videos that appear in a format like a Zoom or FaceTime session. The price of creating an AI ghost varies around the world. In China, it’s as low as several hundred dollars. In the United States, there can be a setup cost ($15,000) and/or a per-session fee (around $10).
At once fascinating and creepy, these AI ghosts raise several legal, ethical, and psychological issues.
Moral status: Is the ghost simply a computer program that can be turned off at will? This is the question raised in the 2013 episode of Black Mirror, “Be Right Back,” in which Martha, a grieving widow, has an AI ghost of her husband created and later downloads it into an artificial body. She finds herself tiring of the ghost-program because it never grows. The AI robot ends up being kept in the attic and taken out for special occasions.
Would “retiring” an AI ghost be a sort of second death (death by digital criteria)? If the ghost is not a person, then no, it would not have any rights, and deleting the program would not cause death. But the human response could be complicated. A person might feel guilty about not interacting with the griefbot for several days. Someone who deletes the AI might feel like a murderer.
Ownership: If the posthumous ghost was built by a company from source material scraped from social media and the internet, then it’s possible that the company would own the ghost. Survivors who use the AI would merely be leasing it. In the case of a person commissioning their own AI before death, the program would likely be their property and could be inherited as part of their estate.
Privacy and confidentiality: If Culber tells AI abuela that he altered her recipe, that information might be collected, and owned, by the AI company, which may then program it into other AIs or even reproduce it in a cookbook. The AI abuela could also be sold to marketing companies: Culber’s abuela may try to sell him ready-to-eat mofongo the next time they interact.
AIs are built, in part, on the questions we ask and the information we share. What if Martha’s daughter tells her AI dad that she wants a particular toy? Martha could find a bill for that toy, ordered by the ghost without her knowledge. Modern social media is all about collecting data for marketing, so why would a griefbot be any different?
Efficacy: Culber said that talking to his abuela’s “grief alleviation therapeutic” was helpful to him. Martha eventually found that the AI android of her husband was a hindrance, preventing her from moving on. Would today’s AI ghosts be a help or a hindrance to the grieving process?
Some researchers have suggested that we could become dependent on these tools and that they may increase the risk of complicated grief, a psychological condition in which we become locked in grief for a prolonged period rather than recovering and returning to our lives. Also consider a survivor who was abused by the deceased and later encounters this person’s AI ghost by chance, perhaps through marketing. The survivor could be retraumatized—haunted in the most literal sense. On the other hand, in my study of grieving and continuing bonds, I found that nearly 96% of people engage with the dead through dreams, conversations, or letters. The goal of grieving is to take what was an external relationship and reimagine it as an internal relationship that exists solely within one’s mind. An AI ghost could help reinforce the feeling of being connected to the deceased person, and it could help titrate our grief, allowing us to create the internalized relationship in small batches over an extended time.
Whether AI ghosts are helpful or harmful may also depend on a survivor’s age and culture. Complicated grief is a more likely outcome for children, who, depending on their developmental stage, might see death as an impermanent state. A child who can see a parent’s AI ghost might insist that the parent is alive. Martha’s daughter is likely to feel more confused than either Martha or Culber. As a Latine person for whom Día de los Muertos is part of the culture, Culber might find speaking with the dead a familiar concept. In China, one reason for the acceptance of AI ghosts might be the tradition of honoring and engaging with one’s ancestors. In contrast, the creepiness that Martha feels, and that I share, might arise from our Western cultures, which draw a comparatively fixed line between living and dead.
A recent article suggests guidelines for the ethical use of griefbots, including restricting them to adult users, ensuring informed consent (from people whose data is used, from heirs, and from mourners), and developing rules for how to retire the griefbots. We must also be wary of unethical uses, such as theft, deception, and manipulation. AIs have already been used to steal billions.
Our mourning beliefs and practices have changed over time. During the Covid pandemic, streamed funerals were initially seen as odd, but now they seem like a normal option. A similar trajectory toward public acceptance is likely for deathbots. If so, individuals should be able to choose whether to commission one of themselves for their heirs or to create one of their deceased loved ones.
But as a society we must decide whether the free market should continue to dominate this space and potentially abuse our grief. For example, should companies be able to create AI ghosts and then try to sell them to us, operating like an amusement park that takes our picture on a ride and then offers to sell it to us when we disembark? Perhaps griefbots should be considered therapeutics that are subject to approval by the Food and Drug Administration and prescribed by a mental health professional. The starting point should be clinical studies on the effect this technology has on the grieving process, which should inform legislators and regulators on the next steps: to leave AI ghosts to the marketplace, to ban them, or to regulate them.
Craig Klugman, PhD, is the Vincent de Paul Professor of Bioethics and Health Humanities at DePaul University. @CraigKlugman
Beyond the legal and data-gathering concerns, I’m skeptical of the use of this technology for children and otherwise healthy adults. Without much empirical data it’s hard to be certain, but it seems that these “griefbots” would only confuse, prolong, or exacerbate the grieving process. Because of the static and ultimately lifeless nature of the bots, they may even desensitize a person to the experience of their lost loved one, as in the Black Mirror episode. However, I see potential in their use for dementia patients, for whom authentic or enriching experiences are not the primary concern, but rather comfort and constancy as they navigate their own loss of identity.
I have always viewed AI as a programmed entity devoid of emotions. When this article presents me with the reality that AI griefbots can offer psychological counseling to those who are mourning, I realize that they are capable of much more. Today, it may seem that we, as creators, are the masterminds behind these algorithms, feeding in data about facial expressions, tone of voice, and social cues to train griefbots to respond emotively. This article envisions a future in which griefbots could be perceived as “therapeutics, subject to approval by the Food and Drug Administration and prescribed by mental health professionals.” Nevertheless, with the growing popularity of incorporating emotional intelligence (EI) into AI bots, I would argue that griefbots could transcend their status as therapeutics. One day, when technology evolves to the point where bots can offer authentic rather than programmed empathy, I foresee that they will be regarded as mental health therapists indistinguishable from humans. In this scenario, it is essential to incorporate laws like HIPAA into the AI training datasets, ensuring that griefbots uphold the same ethical standards of patient privacy that govern human health care professionals.
Yet, when it comes to safeguarding patient privacy, I question how “law-abiding” griefbots truly are. While we can demand that AI de-identify patient information by feeding in HIPAA standards, the algorithm’s operations ultimately remain a black box. Let us consider a hypothetical scenario in which Culber eventually develops depression from grieving. In this case, he might disclose protected health information (PHI), such as suicidal thoughts, while interacting with the griefbot. The troubling aspect here is the lack of transparency surrounding how AI uses sensitive data. In a study from the University of Southern California, researchers found that AI algorithms are capable of learning from each other via “a digital network that connects them all, sort of like their own private internet.” In this mysterious black box of information sharing, other griefbots might seek to gain counseling experience, prompting Culber’s griefbot to disclose its client’s suicidal narrative for reference. At that point, how do we know whom Culber’s griefbot would listen to: humans or other AIs? If it deems the human command to abide by HIPAA less significant, then the confidentiality of Culber’s suicidal thoughts will be breached. Given the uncertainty about how long information is retained in this shared knowledge base, there is also concern that PHI could be misused by other griefbots in the future.
Furthermore, HIPAA falls short when it comes to assessing the intent behind an AI’s disclosure of PHI. Returning to our hypothetical scenario: when Culber expresses suicidal thoughts to the griefbot, the AI algorithm might see an urgent need to schedule an emergency appointment with a psychiatrist on his behalf, thereby disclosing his suicidal thoughts for the sake of case management. From a legal standpoint, the griefbot’s action is entirely legitimate, as HIPAA permits therapists to disclose PHI when it serves the patient’s “best interests,” such as in emergencies to prevent suicide. Yet what if the griefbot’s breach of confidentiality is not driven by a genuine duty of care? As the article notes, griefbots are ultimately owned by AI companies that might have commercial ties with third parties. Conceivably, Culber’s griefbot could prioritize disclosing PHI to, and scheduling with, a specific psychiatrist affiliated with the AI company itself. Although this would be a clear example of exploitation, it is concerning that HIPAA would still consider the griefbot’s actions legally permissible.
In the future, it is imperative to revise HIPAA to address the unprecedented challenges posed by AI. Given the uncertainty about how the AI “mind” operates, policymaking in health care should no longer be confined to lawyers, politicians, and bioethicists. In this new digital paradigm, it is time to bring computer scientists on board to collaboratively address the gaps in the law by decoding AI’s black box.