
Ethical Considerations for Using AI to Predict Suicide Risk: ASBH Conference

October 24 @ 9:15 am – 10:15 am, Portland, OR

This flash presentation by Hastings Center PMRA Faith Wershba at the annual ASBH Conference will consider the ethics of using AI to predict a patient’s suicide risk.

Abstract for the presentation: Predicting a patient’s suicide risk is a complex and high-stakes task. While a number of known risk factors correlate with suicide attempts at the population level, the presence or absence of a given risk factor may not reliably predict an individual’s risk of attempting suicide. In an effort to improve suicide risk prediction at the individual level, researchers have developed machine learning algorithms that can be applied to patient medical records to evaluate suicide risk. Currently, such algorithms are not used as part of clinical practice. However, if they prove successful, many experts think that these AI systems could—and should—be integrated into clinical care to aid suicide prevention efforts. Indeed, if these algorithms are found to increase predictive accuracy compared with clinician judgment, it seems ethically desirable to promote their use in clinical settings. However, this possibility raises ethical considerations regarding disclosure, consent, and data use. If it becomes standard practice to screen patients’ electronic health records using an automated suicide risk prediction algorithm, does a healthcare institution have a duty to disclose this fact to its patients? Moreover, should applying these algorithms to patient medical records require patient consent? If so, should such consent be opt-in or opt-out? Lastly, if suicide risk predictions generated in the clinical setting are used to further train or refine the algorithm, should patients be consulted about the use of their personal data in this way?