Bioethics Forum Essay
What Does AI Jesus Teach Us?
This past fall, a Catholic church in Switzerland hosted an AI Jesus that dispensed wisdom in response to visitors’ questions. “Deus in Machina”—literally, “God in the machine”—was an art installation in which an animated holographic image of Jesus Christ was beamed through the lattice of a confessional booth and synchronized to an artificial intelligence, powered by GPT-4o, that had been trained on theological texts.
AI Jesus is just one of many recent examples in which the artificial intelligence systems known as large language models, such as ChatGPT, have taken on roles that have always seemed to require human beings. Last year, in Germany, ChatGPT took on the role of a Protestant preacher and delivered a sermon. Large language models have also been used to create automated therapists such as Woebot Health, a chatbot-based mental health service. They have appeared as coauthors on scholarly publications. And they have been seen and treated—and sometimes marketed—as carers, friends, and even lovers.
These are of course extraordinary technological achievements that show us humans at our best—solving engineering problems, creating art, and providing useful services. At the same time, one can worry about what these creations will eventually do to us humans. What will be left for humans to do? A few days ago, I asked a friend who’s writing a biography and poring over troves of manuscripts and letters whether he’s considered uploading it all to an AI, giving the AI some good prompts, and letting it do the writing. “I would rather die,” he responded, after a momentary silence. Mordant humor along these lines has become a staple of all my conversations about AI in the workplace.
But it is remarkably hard to say exactly why AI should not take over seemingly deeply human roles such as therapist, friend, or spiritual leader. The reasons usually given depend on claims that it just won’t work very well. AI advisors will sometimes pass along falsehoods, make things up, and give dreadful advice. Because what a large language model generates depends entirely on what it is trained on, it will repeat and perhaps reinforce the limitations and biases in that material. It cannot generate anything genuinely novel or insightful. And if it replaces humans in roles that humans traditionally trained for, then, as philosopher Shannon Vallor argues in her book The AI Mirror, it could lead to human “deskilling” as humans stop filling these roles.
But these answers, though not wrong, feel inadequate. Maybe the mistakes and limitations can be corrected in future AI models, and human deskilling for a given role is not obviously bad if AI is really good at that role. More to the point: the objections feel like quibbling about details. They offer mere skepticism about how well AI will work when the response many people feel is more like, “I would rather die.”
Perhaps, then, we’re not asking the right questions. In focusing on questions about how well it will work, we’re ignoring or glossing over questions about other fundamental goods at stake.
Over the last couple of years, an informal group of scholarly journal editors I have been part of has thought about similar sets of questions in order to come up with recommendations for how large language models might be used to create content for our journals. Could they write, review, or edit content for us? If these questions are simply about whether they do things that human reviewers, editors, or authors do—and sometimes do them better, helping to generate good content more efficiently—then the answer is yes, absolutely.
But our group decided, perhaps counterintuitively, that generating good content is not our primary goal as journal editors. Our journals are bioethics journals; they address moral and social issues in medicine, health care, and biological sciences and technology. We’re trying to contribute to public conversations about these issues, but we decided that the most important contribution we make to that conversation is to bring people into it. It’s part of our mission, in other words, to foster a community of people who are doing scholarly work on these issues.
Given that goal, AI can certainly be useful in various and still-evolving ways in reviewing, editing, and writing, but it should not replace people. To draw on a phrase often used in conversations about integrating AI and humans, humans should be “in the loop” not just so they can monitor work processes and make sure outcomes are good, but because “humans in the loop” is itself the primary good.
Similar questions can be asked about many other ways in which AI—or for that matter, technology generally—might be used. Sometimes, I suspect, the answer will be that AI should have a large role. If the reliability issues can be resolved, then perhaps AI can pretty much take over roles that are mostly or purely technical, such as reading x-rays, generating diagnoses, or writing software code. But there are some roles in which human presence is important.
This is a valuable insight that AI Jesus really does teach us. There would be ways of interacting with AI Jesus that did not suggest an inappropriate replacement of humans. For example, AI Jesus might be a tool for searching through the theological material on which it is trained and generating theologically informed answers to visitors’ questions. Visitors might thereby use AI Jesus to help them think more clearly or creatively about those questions.
But if the bioethics editors’ statement is on the right track, AI Jesus is troubling if it is in effect taking over the moral thinking—if it is genuinely seen as a spiritual leader or moral guide. For one thing, we humans must be in charge of our moral governance. In much the way that, in a democracy, public policy must ultimately be collectively authorized by the citizens in order to be legitimate, so, too, must moral rules be collectively endorsed by the beings to whom they apply. For that idea of endorsement to have any meaning, we must all be thinking about them and settling on them together. We must collectively be the authors of morality, as it were.
Beyond that, it’s our society. Just as the core mission of a bioethics journal is to foster a community of people exchanging scholarly views about bioethical issues, the ultimate goal of a society is to create the conditions for human flourishing. Human flourishing has something to do with the efficient production of things for people to enjoy, but most people would say that there’s more to it than that—hence my friend who would rather die than let a chatbot take over the biography he’s struggling to write. As philosophers going back to Aristotle have often held, human flourishing requires not just contentment but activity and engagement. It requires that humans be in the loop.
Gregory E. Kaebnick, PhD, is the director of research at The Hastings Center and co-editor of the Hastings Center Report.
Great and interesting article
Very well written and interesting. I do have similar concerns about AI. It goes to much of what I have always objected to in some books about the future, where machines and their rules come across as draconian. And that concern stands for today and for the future. Just consider Aldous Huxley’s Brave New World… It still churns my stomach, many years after reading it… that is why I never re-read it and find some science fiction on the topic unsettling… the moral compass is missing.
I love the concluding reference to Aristotelian philosophy, i.e., that it is the PROCESS of moral deliberation and human engagement that leads to flourishing, not merely the technical attainment of “good ends.” In my opinion, consequentialist defences of AI can only take us so far… there seems to be something essential about the “grappling” with ethical dilemmas and moral quandaries that ought not be ceded to technology.