
Bioethics Forum Essay

Regulation of Software as a Medical Device: Opportunity for Bioethics

In January, right before President Biden took office, the Food and Drug Administration proposed permanently exempting “software as a medical device” from regulatory review. The agency waived the approval process last year to streamline regulatory oversight during the Covid emergency. But the growing use of artificial intelligence programs and digital devices in health care raises safety and ethical concerns that require more attention. The Biden administration put the proposal on hold for now. Meanwhile, bioethicists should weigh in as the FDA reviews its action plan.

Software-based medical devices and AI are used for a wide range of purposes. They manage drug infusions, monitor fetal heart rates, and provide behavioral therapy for psychiatric patients. They provide potentially life-saving warnings, identifying some problems more accurately than clinicians do. AI software can help physicians detect cancers, respiratory diseases, broken bones, and other findings on medical images. Data collection and analysis by intelligent software through machine learning are enhancing research and public health surveillance, leading to knowledge discovery and innovation. These technologies have the potential to improve care while empowering patients and clinicians.

However, inaccuracies, malfunctions, hacking, and biases can plague algorithms and devices. Years of reports by the Institute of Medicine, industry newsletters, scholarly studies, malpractice attorneys, and consumer magazines document failures of FDA-approved medical devices. These failures have caused injuries and deaths. How often do problems like these occur with AI and software in medicine? It’s hard to know. For 20 years, until 2019, the FDA allowed medical device makers to use an obscure reporting program that concealed millions of reports of harm and malfunctions. To this day, device manufacturers are not required to disclose the sample size or the gender, race, or geographic location of the patient data used to test their products. A recent Stat News investigation of the filings for approved products found “an uneven patchwork of sample sizes and methods . . . [so that] patients and doctors . . . know very little about whether [approved AI devices] will work or how they might affect the cost and quality of care.” Between 2012 and 2020, only 73 of 161 AI product filings disclosed the amount of patient data used for testing. Only 1 in 10 products for analyzing breast images, where race could matter, included information about race in the patient data used to validate the devices.

Given the potential risks and the reality that new software-powered medical devices are becoming available, now is the time to update their regulatory review and oversight. We should reform outdated regulations to encourage innovation while also improving the transparency of the algorithms, the data they use, and the results of testing and ongoing monitoring.

Bioethicists should join other scholars, lawyers, information technologists, health policy experts, and other stakeholders in the push for better reporting, transparency, and accountability by addressing questions such as these: How are software and devices developed, validated, and used? What data sources do they use? How is the software trained and tested? How are performance accuracy, safety, and efficacy evaluated? How can we improve monitoring and reporting of outcomes and adverse events, as well as remedial efforts? How are data shared, and with whom? How is the privacy of people’s data protected?

The FDA oversees safety and efficacy and collects reports of adverse events. Bioethicists can question what counts as an adverse event, as safety and efficacy, or even as a medical device. Much health-related software and most commercially available devices, from diet advisors to fitness trackers to medication reminders to asthma trigger alerts, are not considered medical devices under the FDA definition. Notably, the definition excludes popular health apps, social media (including the many health advisory websites and discussion groups, and the algorithms that flag potential suicidal expressions in someone’s postings), and other health-related software and data. Additional questions carry ethical implications: Some devices, such as pacemakers, can be hacked; how safe are they? How easily might devices and software be used in ways that violate privacy, or be compromised in ways that affect their functioning? Does the proliferation of such devices exacerbate or reduce disparities in care among communities and groups? Are less costly, effective alternatives forgone in favor of glitzy new technologies? What new questions should be considered? What values should be promoted? Who decides? Who should?

Software- and AI-based medical devices need to be safe and effective, and regulation has to keep pace with changes in the technology. Bioethicists can help strengthen the ethical, legal, and social analyses that identify the questions needed to create a good mix of precautionary and permissive regulation. They can help influence decisions by those who purchase and use these devices and software. Bioethicists have much to contribute to discussion of information technologies related to health care. The what and how of regulation matter, but they aren’t enough.

Bonnie Kaplan, PhD, FACMI, is faculty at Yale University in the Yale Center for Medical Informatics, a Yale Bioethics Center Scholar, a Yale Information Society Project Faculty Affiliated Fellow, and a Yale Solomon Center for Health Law and Policy Faculty Affiliate.

  1. Thank you for your insightful post. I agree with your stance; the FDA should not exempt software as a medical device from regulatory review. Even with existing medical device regulations, there is a clear gap in software regulation. For example, the FDA does not currently require that manufacturers submit cybersecurity risk mitigation plans in new device applications and marketing approvals. While regulations may complicate and impede a quick approval process, rules are in place for a reason. Failure to establish industry standards for software in the clinical setting places innocent patients at risk.
    Additionally, it is disturbing to hear that device manufacturers are not required to disclose sample size or demographic information from the testing of their products. Bioethics has a role in this conversation: to uphold justice and ensure that product testing does not discriminate based on gender, race, or geographic location. However, we cannot protect against a problem for which we do not have all of the data. Perhaps the first step in ensuring subject protection is to broaden the data disclosure requirements for device manufacturers.
