Bioethics Forum Essay
Ethical Considerations for Using AI to Predict Suicide Risk
Those who have lost a friend or family member to suicide frequently express remorse that they did not see it coming. One often hears, “I wish I would have known” or “I wish I could have done something to help.” Suicide is one of the leading causes of death in the United States, and with suicide rates rising, the need for effective screening and prevention strategies is urgent.
Unfortunately, clinician judgment has not proven very reliable when it comes to predicting patients’ risk of attempting suicide. A 2016 meta-analysis from the American Psychological Association concluded that, on average, clinicians’ ability to predict suicide risk was no better than chance. Predicting suicide risk is a complex, high-stakes task, and while a number of known risk factors correlate with suicide attempts at the population level, the presence or absence of a given risk factor may not reliably predict an individual’s risk of attempting suicide. Moreover, there are likely unknown risk factors that interact to modify risk. For these reasons, patients who qualify as high-risk may not be identified by existing assessments.
Can AI do better? Some researchers are trying to find out by turning to big data and machine learning algorithms. These algorithms are trained on medical records from large cohorts of patients, some of whom have attempted or died by suicide (“cases”) and some of whom have never attempted suicide (“controls”). The algorithm combs through this data to identify patterns and extract features that correlate strongly with suicidality, iteratively updating its parameters to improve predictive accuracy. Once the algorithm has been trained and its performance validated on held-out test data, the hope is that it can be applied to predict suicide risk in individual patients.
Researchers around the country, including at Vanderbilt University Medical Center, Kaiser Permanente Center for Health Research, and Massachusetts General Hospital, have developed machine learning (ML) algorithms capable of assessing suicide risk. Some of these algorithms predict risk on the basis of structured data (such as diagnostic, laboratory, medication, and procedure codes), while others incorporate unstructured data (such as clinician notes or audio recordings of clinical interviews). By identifying features that statistically correlate with suicidality and characterizing their interactions, these algorithms aim to flag patients who display a combination of such features that puts them at elevated risk for attempting suicide.
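To make the case-control approach concrete, the short Python sketch below trains a simple logistic regression classifier on synthetic binary features standing in for structured record data (diagnostic codes, medication classes, and the like), checks its discrimination on held-out test data, and then scores a hypothetical new patient. The cohort, features, and model here are purely illustrative assumptions, not a description of any of the systems mentioned above, which are far more sophisticated.

# Minimal, illustrative sketch of the case-control training approach described above.
# All data is synthetic; the features are placeholders for structured EHR data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic cohort: each row is a patient, each column a binary structured feature
# (e.g., presence of a diagnostic code, a medication class, a prior ED visit).
n_patients, n_features = 5000, 20
X = rng.integers(0, 2, size=(n_patients, n_features)).astype(float)

# Synthetic labels: 1 = "case" (documented suicide attempt), 0 = "control".
# Risk here is driven by a weighted combination of a few features plus noise,
# standing in for the feature interactions a real algorithm tries to learn.
true_weights = np.zeros(n_features)
true_weights[:4] = [1.5, 1.0, 0.8, 0.6]
logits = X @ true_weights - 2.5 + rng.normal(0, 0.5, n_patients)
y = (rng.random(n_patients) < 1 / (1 + np.exp(-logits))).astype(int)

# Train on one portion of the cohort, evaluate on held-out test data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Discrimination on held-out patients (AUC), the kind of metric used to judge
# whether an algorithm outperforms chance or clinician judgment.
test_scores = model.predict_proba(X_test)[:, 1]
print(f"Held-out AUC: {roc_auc_score(y_test, test_scores):.2f}")

# Applying the trained model to a single new patient yields a risk score
# that could be used to flag the record for clinical follow-up.
new_patient = rng.integers(0, 2, size=(1, n_features)).astype(float)
print(f"Predicted risk for new patient: {model.predict_proba(new_patient)[0, 1]:.2f}")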
Currently, such algorithms are not used as part of clinical practice. However, if they prove successful, many experts think that these AI systems could—and should—be integrated into clinical care to aid suicide prevention efforts. Indeed, if these algorithms are found to increase predictive accuracy compared with clinician judgment, it seems ethically desirable to promote their use in clinical settings. Yet this possibility raises ethical considerations regarding disclosure, consent, and data use.
Disclosure
Consider a health care system in which it becomes standard practice to screen patients’ electronic health records using an automated suicide risk prediction algorithm. Does the institution have a duty to disclose this fact to its patients? While most patients recognize that their personal health information is collected and stored electronically when they visit their health care provider, they may nonetheless feel uncomfortable learning that their data has been subjected to additional screening or evaluation unrelated to the purpose of their visit.
As an analogy, imagine a scenario in which a patient gets her blood drawn for a routine lab panel, but her physician neglects to disclose that a genetic analysis will also be run on the sample to assess her risk of developing breast cancer. Imagine, moreover, that the test indicates that the patient carries a risk-conferring BRCA1 mutation. Disclosing such a finding might cause considerable distress or even damage the patient’s trust in her clinician, given that she did not know she was being tested in the first place.
By extension, informing a patient that she is at elevated risk of suicide might be unsettling if she was not aware she was being screened for suicidality. Thus, there are two dimensions to address regarding disclosure: whether to disclose that patients’ medical records are being screened for suicide risk and, if so, whether to disclose the results of those predictions. Some patients might want to be screened but not want to know the results, whereas others may not want the prediction made about them at all.
Consent
Even if patients are aware that their data may be screened for suicide risk using computer algorithms, there remains the question of whether using these algorithms should require patient consent. If yes, should consent be opt-in or opt-out? In an opt-in system, individuals could elect to have their electronic medical records screened by the algorithm, whereas in an opt-out system, the algorithm would be applied by default and patients would be required to request that their records not be screened.
While an opt-in system would ensure that patients give consent, such a policy would likely miss at-risk patients who fail to opt in (whether out of lack of awareness or discomfort with monitoring). Conversely, an opt-out system might identify a greater number of at-risk individuals, but its ethical acceptability would depend on patients being informed both that their data is being screened and that they have the right to opt out. Already, consent forms regarding personal data use and patient privacy are lengthy and technical, and—despite signing off on such forms—many patients (myself included) do not fully understand their details and implications. Thus, ensuring that patients understand and appreciate that their personal records will be subjected to automated suicide risk assessments, and that they have the right to opt out of such screening, will likely be challenging in practice.
Data Use
Lastly, incorporating suicide risk prediction algorithms into clinical care raises questions about personal data use. ML algorithms tend to improve when they are trained on increasing quantities of real-world data. As such, it is possible that these algorithms could continue to learn and self-update from incoming patient data. If risk predictions generated in the clinical setting are used to further train or refine the algorithm, does this constitute a form of research as opposed to routine care? If the algorithm is used only by the individual health care institution, then use of patient data in this manner seems more like an instance of quality improvement than research. In that case, patient consent would not be needed. However, if the algorithm is intended to be used more broadly across health care systems, then using patient data in this manner could raise concerns. Should patients have a right to refuse that their personal, potentially identifiable information be used for such purposes? Or are such uses justifiable under the principle of broad consent?
Bolstering clinicians’ ability to accurately predict suicide risk with ML algorithms could be immensely beneficial and aid suicide prevention efforts. However, several ethical dimensions ought to be considered before suicide risk prediction algorithms are integrated into clinical care. Even if these concerns are adequately addressed, suicide risk prediction algorithms can only benefit patients if they facilitate effective prevention strategies and promote appropriate mental health care. There is no inherent value in prediction; rather, predictions are only as valuable as the actions they enable. Thus, increasing clinicians’ ability to accurately predict suicide risk will only yield improved outcomes if accompanied by compassionate, human-centered care and social supports.
Faith Wershba, HBSc, MPhil, is a project manager and research assistant at The Hastings Center. www.linkedin.com/in/faithwershba
This essay was selected as a recipient of The Hastings Center’s David Roscoe Fund for Early-Career Essays on Ethics, AI, and Other Emerging Technologies.