
Hastings Center Report

The Fundamental Fallacy of “Empathic AI”

Abstract: “Empathic AI” is being adopted in clinics as a means of offloading some of the work of clinician-patient encounters. Indeed, a recent study reported that generative large language models such as GPT-4 were perceived as more empathetic than human physicians. I argue that encounters between AI chatbots and patients lack an essential feature of good clinical encounters—recognition. More fundamental than empathy, Hegelian recognition is a precondition for features such as honesty and respect for autonomy that are central tenets of medical ethics. I argue that patients have a justified expectation of mutual recognition in a clinical encounter and that, given specific limitations of AI chatbots, this justified expectation cannot be met by them. Problematically, however, AI chatbots are designed to mimic human expressions of recognition, resulting in an alienating absurdity at the heart of “empathic AI.” This fundamental incoherence is not merely a philosophical curiosity; it is an issue that must be directly addressed if AI chatbots are to take on roles in clinical encounters.
