
Bioethics Forum Essay

ChatGPT in the Clinic? Medical AI Needs Ethicists

Concerns about the role of artificial intelligence in our lives, particularly whether it will help or harm us, improve our health and well-being or work to our detriment, are far from new. Whether our earliest perceptions of AI were shaped by 2001: A Space Odyssey’s HAL or by the much more recent M3GAN, these questions are not unique to the contemporary era; even the ancient Greeks wondered what it would be like to live alongside machines.

Unlike in ancient times, today AI’s presence in health and medicine is not only accepted, it is normative. Some of us rely on Fitbits or phone apps to track our daily steps and prompt us to move or walk more throughout the day. Others use chatbots, available via apps or online platforms, that claim to improve users’ mental health by offering meditation or cognitive behavioral therapy. Medical professionals are also open to working with AI, particularly when it improves patient outcomes. Now the availability of sophisticated chatbots powered by programs such as OpenAI’s ChatGPT has brought us closer to the possibility of AI becoming a primary source of medical diagnoses and treatment plans.

Excitement about ChatGPT was the subject of much media attention in late 2022 and early 2023. Many in the health and medical fields were also eager to assess the AI’s abilities and its applicability to their work. One study found ChatGPT adept at providing accurate diagnoses and triage recommendations. Others in medicine were quick to embrace its ability to complete administrative paperwork on their behalf. Still other research found that ChatGPT reached, or came close to reaching, the passing threshold for the United States Medical Licensing Exam.

Yet the public at large is not as excited about an AI-dominated medical future. A study from the Pew Research Center found that most Americans are “uncomfortable” with the prospect of AI-provided medical care. The data also showed widespread agreement that AI will negatively affect patient-provider relationships, and that the public is concerned health care providers will adopt AI technologies too quickly, before fully understanding the risks of doing so.

Technological advances move quickly, but the regulatory dimensions of innovation always lag behind. This makes the lack of widespread discourse about the risk of harm from AI in health and medicine concerning, and the obligation to have conversations about it all the more pressing. Ethical questions should be central to any conversation about AI and its use in medical care and practice for two reasons. First, people deserve clarity about what data AI tools, and the platforms providing them, will collect, use, and sell or share. Second, and more relevant to my focus here, just because it is possible for us to use AI in medicine or another health-related context does not mean that we should do so. Despite whatever excitement exists about the possibilities of sophisticated, conversational AI in medicine, we are already seeing problematic examples of its use.

Consider a Twitter thread written last month by Koko co-founder Rob Morris. Koko, a platform that connects anonymous people experiencing mental distress with volunteers who send supportive messages, studied users’ responses to AI-generated messages without obtaining the users’ consent. In a series of tweets, Morris described the company’s use of GPT-3 and said that it helped volunteers construct messages for people who had come to the platform. He then disclosed that users had rated those experiences poorly after learning that the messages they’d received had been written by the AI, not by humans.

The backlash came quickly, and Morris’s Twitter thread was soon inundated with criticisms: that Koko’s users had clearly not consented to participating in this study, that they had been unaware that the messages they received would be constructed by AI, and that the company’s actions had been unethical. For those already concerned about the use of AI in medical contexts, this response was unsurprising. While messaging with a chatbot or another AI-powered tool might be fine if it is disclosed, deceiving a person about who or what they are communicating with (including deception through omission) demonstrates a lack of respect for autonomy. What’s more, in the Koko experiment, users had unknowingly been enrolled in a study without the chance to opt out. The indignity of not being provided with the service they sought when using the platform (that is, the ability to message with another human), coupled with Morris’s self-congratulatory tweets discussing their data, demonstrates how excitement about novel technologies all too often results in a lack of consideration of ethics.

While Koko represents a case study of how not to use AI, it provides a useful starting point for conversations about the ethics of AI in medicine and in health-related research. For example: how, when, and where should disclosures take place to ensure that potential users understand that a health or medical technology is using AI? Most people do not read terms of service and user agreements carefully, and even if they tried, it is unlikely that they would understand all the terms. Is it unethical to include such information only in those places? The answer is a resounding yes. This information should be placed prominently, front and center, on any medical device or consumer-facing platform that uses AI. It must explain, clearly and without medical or technological jargon, how the AI is used, for what purposes, and what will be done with any user data that is collected. Only then can individuals truly make informed decisions about whether to use those tools or participate in research that uses them.

It seems that we are approaching an inflection point in the relationship between AI and ethics. For too long the dominant paradigm has been that ethical concerns are secondary to the grand narrative of technological progress. Yet I believe it is possible to move away from this status quo and change the normative perspective, particularly as ChatGPT and AI in general are currently objects not only of public fascination but also of concern. Expecting technologists to be self-regulating when it comes to assessing whether their tools and interventions are ethical has often proved unsuccessful. The insight of ethicists, for whom these matters are front and center, is sorely needed. Now is an opportune time not only to intervene in public discourse about AI’s role in health and medicine, but also to establish a set of best practices and evaluative criteria for examining medical AI applications before their release or implementation. As a result of ethicists’ interventions, technologies that could cause harm would never get the chance to do so.

Emma Bedor Hiland, PhD, is an assistant professor of communications at the College of Saint Rose in Albany. @EmmaBedorHiland


Hastings Bioethics Forum essays are the opinions of the authors, not of The Hastings Center.

  1. Thank you for this timely and needed reflection, Emma. I agree with most of what you say, but I would like to make two small points. Even though autonomy has become the main normative concept in bioethics, I think that in the Koko case there is a lack of respect for autonomy, but what is even more blatant is a violation of human dignity. To be deceived violates our dignity as people who deserve to be told the truth, and to be denied the genuine human interaction we are reaching for is tantamount to being told that we are not worth the trouble: all we deserve is an automatic reply, not the time and attention of another human being. This undermines our dignity as vulnerable people, not only our autonomy as rational people. My second remark is that I agree that ethicists have a role to play, but the point is not only to involve ethicists but also to involve users and the public, and not to leave these key decisions to experts. Two kinds of experts are better than one, but not good enough.
