
A Patient’s Journey with Medical AI: The Case of Mrs. Jones

Table of contents
Introduction and Executive Summary  
1. Event: Patient Contacts Healthcare System with a Concern
2. Event: Ambient Recording of Clinical Appointment
3. Event: Reading the Results of a Diagnostic
4. Event: AI Makes Recommendations Regarding Next Steps
5. Event: Patient’s Insurance Denies Next Steps
Conclusion

Introduction and Executive Summary

Do you know the ways Artificial Intelligence (AI) is involved in your healthcare? A Patient’s Journey with Medical AI is an interactive tool that follows our imaginary patient, Mrs. Jones, through 5 interactions with health AI. Following her journey, you’ll learn about some relevant bioethical issues and be invited to answer questions about what you think should happen.

We meet Mrs. Jones when she contacts her healthcare system about a worrisome spot and gets a recommendation from a chatbot she believes is human. This introduces bioethical issues related to transparency and informed consent, accuracy, and regulatory oversight. Would you want to interact with an AI chatbot about your health concerns? Should healthcare providers be required to disclose the use of AI in this case? If a chatbot recommends a course of action that causes a patient harm, who should be held responsible?

Next, Mrs. Jones visits her doctor, who, without her knowledge, uses AI-enabled ambient recording to take clinical notes during her visit. This introduces bioethical issues related to data privacy and ownership, accuracy and context, and human oversight. Should Mrs. Jones be informed that her conversation was recorded and processed by AI? How might ambient recordings of a patient’s medical visit change the experience for physicians and patients?

Later, Mrs. Jones’ doctor uses a new AI-enabled device to assist in a diagnosis, without explaining to her the nature of the device. The interaction introduces bioethical issues related to transparency and regulatory approval. Should Mrs. Jones be given more information about the device? What would you want to know regarding the device? Why is it important to consider the diversity of the dataset on which the device was trained? 

Then, Mrs. Jones’ doctor follows an AI system’s recommendation regarding next steps. This introduces bioethical issues related to transparency about AI’s role, medical uncertainty, and patient autonomy. Who ultimately made the treatment decision for Mrs. Jones? What ethical responsibilities does the clinician have in this scenario? What are the potential drawbacks of Mrs. Jones’ being unaware of alternative treatment paths? 

Finally, Mrs. Jones’ insurance company uses an AI algorithm to deny coverage for further care. This introduces bioethical issues related to algorithmic decision-making, transparency and accountability, and fairness and reliability. What potential implications arise when using AI to assess insurance claims? What can be done to improve fairness in AI-based insurance decisions? More broadly, how would you describe the ideal use of AI in healthcare? And what safeguards should be in place when using AI in the healthcare setting? 

This tool was created as part of Hastings on the Hill. Bioethics is the interdisciplinary study of ethical issues arising in the life sciences, healthcare, technology, and health and science policy, drawing on expertise from law, medicine, philosophy, science, technology, and other disciplines.

1. Event: Patient Contacts Healthcare System with a Concern

Mrs. Jones notices a new dark spot on her neck and worries that it could be skin cancer. Seeking quick guidance, she remembers that her healthcare provider offers a new digital service that gives rapid, personalized advice on health-related concerns.

Mrs. Jones sends a text message to the digital service requesting help and includes a photo of the dark spot. Within seconds of sending the text, she receives a response with follow-up questions: 
“When did you first notice this spot?” 
“Has it changed in size or color?” 
“Is it flat or raised?” 

The response also acknowledges her concerns: “I understand this might be worrying. It’s best to get this checked by a specialist. Please visit your provider immediately.” It includes a link to schedule an appointment with her provider.

During the exchange, the chatbot powering the service is able to triage her case without involving a human clinician, using image recognition and language processing to make an assessment based on a database of dermatological images and medical protocols. Mrs. Jones is glad to receive quick guidance, but she is not aware that she has been interacting with an AI-powered chatbot.

Transparency & Informed Consent: Should Mrs. Jones be explicitly informed that she is talking to an AI bot rather than a human clinician? Would truly informed consent require a comparison of AI versus human physician error rates? How does this misconception affect the quality of Mrs. Jones’ informed consent to disclose personal health information? How does it impact her autonomy as a patient?

Transparency is fundamental to establishing trust between patients and healthcare providers. In Mrs. Jones’ case, she believes she is communicating with a qualified clinician. Her health system’s failure to disclose that she is actually talking with an AI chatbot undermines her ability to provide informed consent: consent cannot be informed when a patient does not know who, or what, she is talking to. Clear notifications that tell patients whenever they are interacting with AI systems, and/or options that allow patients to opt out of AI interactions, support patient autonomy. Consent and opt-out options may also cost more than systems that rely solely on AI. Moreover, as AI systems become deeply embedded over time, opt-out options may become impossible to implement.

Accuracy: Do the datasets used to train the AI chatbot allow accurate assessments for all skin types and tones? How might the dataset impact the reliability of the digital health service as a source of medical guidance?

Healthcare AI systems must be trained on datasets that include a wide range of patient demographics, much as biomedical research tests the effects of interventions on various populations. For Mrs. Jones, insufficient data about patients like her could lead to an incorrect assessment, misdiagnosis, or delayed care. Rigorous evaluation of training datasets, transparency about their composition, and developer accountability for addressing bias affecting sub-populations all take time and money, but they support higher-quality care for all.

Regulatory Oversight: What regulatory requirements are appropriate to ensure patient safety while encouraging AI innovation? What causes of action should be available if a patient is harmed by following AI triage recommendations? Should states determine causes of action, allowing liability to vary from state to state?

Misdiagnoses or inappropriate medical advice from an AI-based triage system raise ethical concerns about responsibility and patient protection, which may be addressed by law, regulation, and/or guidelines, all of which are evolving. The law varies among states, allowing, for example, malpractice claims against clinicians, product liability claims against device companies and AI developers, and negligence claims against all of these parties. On the regulatory side, the Food and Drug Administration (FDA) would likely conclude that an AI-powered chatbot that independently conducts triage is a medical device, triggering requirements for safety and effectiveness. Clear standards that specify acceptable use cases, validation and testing requirements, and liability for AI-generated medical errors can ensure that AI does not replace or diminish professional clinical accountability, that patients are protected, and that innovators have clarity about requirements.

2. Event: Ambient Recording of Clinical Appointment

Following the advice of the digital health service, Mrs. Jones visits the hospital and is examined by a doctor in the emergency room. Unlike the doctors at past visits, who seemed overwhelmed or distracted, this doctor appears engaged, maintaining eye contact and asking detailed questions about her medical history.

Mrs. Jones is unaware that their conversation is being recorded and transcribed by an AI-powered clinical documentation system (known as a ‘scribe’). The AI will then summarize the interaction, draft clinical notes, suggest a preliminary diagnosis, and recommend follow-up actions. Data from the interaction may be used by the AI vendor to improve the AI transcription and the algorithms underlying the diagnostic system.

The doctor reviews the AI-generated report and, after making minor edits, approves the suggested follow-up for a more detailed skin analysis. Mrs. Jones remains unaware that AI played a significant role in this documentation and decision-making process. 

Privacy & Data Ownership: Should patients be informed when their conversations are being recorded and processed by AI?

Patient autonomy and confidentiality require that patients be aware of, and consent to, data collection practices. Mrs. Jones is unaware that her sensitive medical conversation is recorded and processed by AI, which threatens her privacy and her trust in her doctor. Policies that require healthcare providers to obtain informed consent before recording consultations or processing patient data through AI support patient autonomy. They also require extra time, which could burden the clinician and the healthcare system.

Accuracy & Context: Can AI accurately summarize complex medical discussions without missing context or nuance?

AI-generated documentation may omit or misrepresent nuanced human interactions critical to medical care. While human error and bias can also lead to diagnostic inaccuracy, AI systems may create incomplete patient histories or miss complex contexts, affecting diagnostic accuracy. Human review and verification processes take time but can ensure AI-generated summaries and recommendations are complete and correct. 

Human Oversight: To what extent should doctors rely on AI-generated suggestions for patient care? What effect does reliance have on the quality of care and on clinicians’ ability to maintain and hone their professional skills and judgment? 

The final responsibility for clinical decision-making rests with human clinicians. Overreliance on AI-generated summaries and recommendations risks diminishing professional judgment. The role of AI in medicine will likely expand as AI becomes increasingly reliable, though research comparing human and AI error rates has yielded inconsistent results. Well-defined, validated roles for AI, together with clinical oversight frameworks, help strike the right balance of human oversight and support good patient outcomes.

3. Event: Reading the Results of a Diagnostic

The doctor uses a small handheld device to examine the dark spot on Mrs. Jones’ skin. This diagnostic device captures high-resolution images of the lesion, using AI to analyze cellular structures and assess whether the spot appears to be malignant. 

Mrs. Jones, curious about the device she has never seen before, asks: “What does this thing do?” The doctor reassures her: “It analyzes pictures of skin to see if there is more that we need to do. It’s FDA-approved, don’t worry.” She is then referred for a skin biopsy.

The doctor does not share with Mrs. Jones that the device is one of the first AI-powered tools of its kind, recently approved for clinical use, or that the AI model behind it was trained on a dataset that may or may not fully represent patients with her skin type. The doctor also does not share that the referral decision was heavily influenced by the device’s risk assessment score. As a result, Mrs. Jones consents to diagnostic testing without knowing that her care is heavily influenced by a newly created, minimally tested AI device. 

Transparency: Should patients be informed when AI influences their diagnosis and treatment plans?

Ethical principles require comprehensive transparency regarding new diagnostic technology. Mrs. Jones’ lack of understanding about the diagnostic tool’s AI-based nature undermines her informed consent. Standards that require explicit disclosures to patients about the role, limitations, novelty, and uncertainties of new AI medical devices preserve patient autonomy and ensure ethically appropriate informed decision-making.

Regulatory Approval: What are the limitations of FDA approval for AI-based medical devices? To what extent should these devices be tested before widespread use in medicine? 

FDA approval of AI-based devices does not guarantee full clinical effectiveness across all patient groups. Regulatory standards that include comprehensive validation criteria reflecting the populations on whom a device is likely to be used can help ensure effective assessment for all patients. Rigorous, real-world evaluations before widespread clinical use take time and involve cost, which means that in some cases patients could be harmed by lacking access to technologies while they are still being tested. The FDA can use its expanded access authorities to make products available before approval, particularly for life-threatening diseases and conditions.

Accuracy: Has the device been rigorously tested on diverse patient populations to ensure it performs accurately for patients from various groups?

Ensuring fairness in healthcare requires thorough validation of medical AI across various patient demographics. The FDA can require testing and validation, as well as labeling that specifies if a medical product may not work for certain populations. Policies that mandate performance assessments across different populations, along with ongoing monitoring after deployment, demand investment but help ensure accurate outcomes for all.

4. Event: AI Makes Recommendations Regarding Next Steps

The AI system analyzes Mrs. Jones’ biopsy results and determines a moderate risk of melanoma. Based on historical data and clinical guidelines, it suggests either a watchful waiting approach or immediate surgical removal.

Mrs. Jones’ doctor, relying on the AI’s recommendations and their own clinical evaluation, leans toward watchful waiting. Although another doctor reviewing the case might have opted for a more aggressive intervention, Mrs. Jones decides to trust her current doctor and does not seek a second opinion.  

In receiving treatment advice, Mrs. Jones is not informed that AI helped shape her treatment plan, or that alternative diagnostic and treatment paths might have been possible. She places her trust in her doctor without knowledge of the basis of her doctor’s recommendations, including the heavy reliance on AI to make clinical treatment decisions. 

Clinical Decision-Making: Should doctors disclose AI-generated recommendations and their level of reliance on them?

Failure to inform patients about AI involvement in their diagnostic and treatment processes limits their ability to make informed decisions about their treatment. Requiring clinicians to tell patients about AI involvement takes time, but it ensures patients can weigh all sources of medical recommendations, whether human or AI. The FDA may influence clinicians’ reliance on AI by requiring the labels of AI-enabled products to specify steps for verifying results as appropriate, such as directing a clinician to check portions of pathology slides reviewed by AI to ensure nothing is missed.

Medical Uncertainty: How does AI balance aggressive vs. conservative treatment approaches?

AI decision-making relies on statistical probabilities, which may not fully consider individual patient contexts, so it may not be appropriate to recommend conservative versus aggressive treatments based solely on AI assessments. Standards that ensure clinicians integrate AI-generated recommendations with comprehensive patient-specific clinical judgment help balance statistical risk with patient values and individual circumstances, like family history. 

Patient Autonomy: Should patients have the right to opt out of AI-influenced decision-making?

Policies that explicitly grant patients the right to be informed about AI’s influence on their healthcare and to refuse AI involvement preserve patient autonomy and reinforce trust in healthcare providers. However, as AI-based tools and processes become integrated into healthcare delivery, and as expectations regarding AI’s involvement change, opt-out options may gradually become more difficult, and perhaps less necessary, to implement.

5. Event: Patient’s Insurance Denies Next Steps

A few days later, Mrs. Jones receives a letter from her insurance company: 
“Your recent biopsy was covered. However, the requested follow-up imaging is not approved at this time.” 

What she does not know is that her insurance claim was reviewed by an AI algorithm that assessed her needs using a risk-based, cost-effectiveness prediction model. The AI flagged her follow-up as “low priority” based on statistical models, leading to an automatic denial of coverage.

Frustrated, Mrs. Jones tries to appeal the decision but struggles to reach a human representative. When she finally does, the agent explains that the AI system has deemed the additional follow-up imaging “not medically necessary.” The agent cannot provide further details, as they themselves do not fully understand how this determination was made. As a result, Mrs. Jones, her doctor, and her insurance agent are unable to properly advocate for covering further testing.

Algorithmic Decision-Making: Should insurers rely on AI to approve or deny medical coverage? 

AI is based on statistical modeling and lacks the nuance necessary for good clinical decision-making. Delegating coverage decisions predominantly or entirely to AI raises ethical issues about accountability. Policies that require human oversight for significant healthcare coverage decisions ensure that a clinician’s judgment ultimately determines whether a treatment is necessary. One approach to balancing expedience with fairness is being tested in California, where a law allows insurers to use AI alone to approve claims but not to deny them.

Transparency & Accountability: How can patients challenge AI-based decisions if they do not understand the reasoning behind them? 

If patients and insurance representatives don’t know the basis of AI coverage decisions, accountability and effective appeal processes are lost. Policies requiring documentation and explainability may add cost, but they enable the conversations between clinicians and insurers’ physician advisors that coverage determinations and appeals require.

Reliability: How can we ensure that insurance AI models deliver results appropriate for all people? 

Insurance decisions based on AI risk models could perpetuate and exacerbate documented challenges faced by patients in certain populations. Policies that require mandatory audits, transparency standards, and corrective mechanisms to address algorithmic bias safeguard fair insurance coverage decisions for all patient populations. AI-enabled tools should be implemented with instructions and protocols that account for their limitations, for example, by specifying that they not be used in cases where data are too limited to produce reliable results.

Conclusion

Mrs. Jones’ journey illustrates both the promise and the risks of using AI in healthcare. AI may improve the efficiency and accuracy of care but may not perform well in all circumstances. Transparency about AI’s use and limitations supports informed consent, accountability, and fairness in AI-based medical decision-making.

  • Clear disclosures to patients about AI involvement, and options to opt out of the use of AI in their care, take time but enable patients to make informed decisions and promote trust in the patient-practitioner relationship.
  • Regulatory oversight of AI’s role in clinical decision-making may raise medical device development costs but could enable regulators to standardize AI’s use based on prominent ethical principles (e.g., fairness, transparency, patient privacy, and explainability).
  • Ensuring transparency and accountability in AI-driven insurance and coverage determinations requires human intervention and understanding but protects patient rights and ensures fairness.
