AI in Healthcare: Trust and Accountability | May 16, 2025 | Cedars-Sinai / The Hastings Center

Hastings Center News

AI in Healthcare: Trust and Accountability—Takeaways from Our Conference

Cedars-Sinai and The Hastings Center jointly organized a timely conference on AI in healthcare on May 16 in Los Angeles. Discussions with leading experts in the field addressed evolving ethical, societal, and legal issues raised by AI in medicine and biomedical research. Hastings Center President Vardit Ravitsky and senior research scholar Nancy Berlinger participated in the conference and share some highlights.  

What was the topic of your talk or panel, and what struck you as most impactful about it?

Vardit: I had a fireside chat with David Rhew, Chief Medical Officer & VP Healthcare for Microsoft’s Worldwide Commercial Business, called “Moving Toward Responsible AI in the Medical Landscape.” We mapped out what is most exciting right now and what the limitations are. One exciting example is the use of AI with data from patients’ eyes to detect chronic disease early and improve access to affordable health care. What struck me most is the potential of AI to help us address health inequity by simplifying screening procedures and making them more accessible, especially for rural or underserved communities.

Nancy: I moderated a session on the use of AI for inpatient care of seriously ill patients. Our panel included ICU physician Michael Nurok; surgeon and AI ethicist Charles Binkley; Rabbi Jason Weiner, who directs spiritual care at Cedars-Sinai; and clinical ethicist Virginia Bartlett.  We discussed AI models intended to help physicians identify appropriate treatment options for a patient when there is uncertainty about benefits, burdens, or risks. The panelists agreed that these tools may assist but cannot replace clinical judgment and hands-on experience. The AI should not make the decision.

We also heard that Cedars-Sinai is piloting a mental health chatbot, including within its spiritual care service. Rabbi Weiner explained that a chatbot or other AI cannot replace a human presence, yet it might offer individual patients additional support when they need it. Rabbi Weiner also invited us to consider how AI and other technologies, such as virtual reality, might provide moments of transcendence for a seriously ill person.

What were the highlights of the afternoon workshops?

Vardit: I led a workshop on AI-powered research that explored the use of voice to diagnose diseases. After presenting the evidence supporting the use of “voice as a biomarker,” I explained the potential of AI to help detect changes in voice and improve our ability to diagnose a disease and follow its progression. Uploading a recording to the cloud so that your clinician can listen to it from anywhere in the world could make this technology available even to patients who cannot consult a voice specialist in person. It therefore has the potential to become an important public health tool that helps address health inequities. We also discussed the ethical and social implications of voice as a biomarker, including the risk of deepfakes that use voice to impersonate people and steal their identities.

Nancy: I co-led a workshop with Charles Binkley on how healthcare institutions should approach transparency with patients and caregivers concerning uses of AI models. For example, “ambient scribes” (AIs that record and summarize conversations between physicians and patients or family caregivers) are replacing the practice of physicians typing notes behind a screen during visits. How should this practice be disclosed and explained to patients? I also appreciated an insight from clinical ethicist Thomas Cunningham, who noted that hospital policy review cycles, which typically occur every three years, do not keep pace with the speed of AI development, piloting, and implementation. To keep up, some hospitals using AIs in patient care are creating AI councils or committees, with representatives from ethics, legal, quality improvement, and other departments, that can work swiftly to draft and update policy concerning AI deployment.

As you reflect on the conference, what’s one thing that has stayed with you as especially interesting?

Vardit: I was impressed by the speed with which certain AI tools are being implemented in healthcare right now. But I was also impressed by how thoughtful clinicians and healthcare systems are about their responsibility to proceed with caution and be accountable. Everyone recognized how important it is to maintain public trust in AI, even if that means slowing down to do things right. I found that very reassuring and left the conference feeling optimistic.

Nancy: Michael Nurok, who is trained in anthropology as well as medicine, reminded us that every technology, when it was new, provoked uncertainty and worry as well as enthusiasm. We should expect a similar reaction to AI as we learn where these new tools are most useful. He also reminded us that the care of seriously ill patients and their families involves a great deal of uncertainty. AI isn’t going to fix these everyday challenges, and we shouldn’t expect it to.

[PHOTO: from left: Michael Nurok, Nancy Berlinger, Charles Binkley]