
Bioethics Forum Essay

Why Health Care Organizations Need Technology Ethics Committees

Technology, and the change it brings, pervades health care. Hospitals and technology companies realize that there is big money in mining patient and medical staff data, and companies are rushing to cash in. The Food and Drug Administration has approved more than 40 artificial intelligence-based products for use in medicine, and many more that don't need FDA approval are already deployed or in development. Tens of thousands of medical phone apps track patients and gather detailed medical information about them. These new technologies raise ethical questions that hospitals and other health care organizations are poorly equipped to answer.

The unique relationship between doctors and patients requires trust, built through the ethical care of patients and their families. One of the tools for protecting the doctor-patient relationship, and the reputation of the health care industry, is the hospital clinical ethics committee. Ethics committee members work with patients, families, and hospital staff to find ethical solutions to complex medical cases. But most ethics consultations deal with clinical questions; they do not address large-scale concerns about the effects of technology on medical care and hospital culture.

Technology has moved beyond life-sustaining treatments such as dialysis, which were the catalysts for creating the first ethics committees. Increasingly, the major technology-based ethical questions revolve around normative issues emerging from the gathering and analysis of data and the use of AI. Those issues include concerns that technology is biased, increases wealth and power inequalities, and erodes the human bonds that make a worthwhile life. The impact of technology is not limited to patients; it has also changed the day-to-day experience of working in health care. Electronic medical records, for example, have changed physicians' interactions and relationships with patients.

We have a choice. Technology in health care can continue to move fast and break things, including the trust among patients, families, and staff. Or health care organizations can start considering technology holistically, including assessing its ethical impact.

I propose that the ethical analysis of technology be done by a technology ethics committee. This committee would not replace the clinical ethics committees or IRBs but would work with them as needed. Here are some examples of the questions a technology ethics committee might be called upon to examine:

  • Should we use this? This is the first question to ask in considering any new technology. In other words, is the new app or algorithm ultimately beneficial to patients and clinicians?
  • For a predictive algorithm, what type of patient consent is ethical? How can consent be gathered? Should opt-in or opt-out be the default?
  • Is an algorithm that measures patient health biased?
  • Who should have access to AI-generated data and patient identities? When and under what conditions?
  • Does a project designed to help patients change unhealthy behaviors—one that uses psychological targeting (for example, “extracting people’s psychological profiles from their digital footprints”)—respect patient autonomy?
  • Does a project using iPhone apps raise health equity concerns, since its advantages are not equally available to low-income and high-income patients? Do the phone’s surveillance capabilities, combined with the data the app gathers, put patients at risk?
  • Under what circumstances is tracking a staff member’s location within a hospital ethical and not just legal?
  • How does the hospital or health care system detect unintended consequences of a technology? How should the organization respond?

These questions are just a small sample of the ethical questions that arise when a technology becomes part of health care. Technology ethicists are starting to see patterns of risk, benefit, justice, and autonomy that affect patients and staff. If a technology is not ethical, creating and deploying it can put the hospital at risk, increase costs (including reputational costs), reduce the quality of the patient experience, and destroy trust and cooperation between patients and staff as well as within the medical team itself.

Addressing these issues requires assessing the ethical consequences of using a technology. Such an assessment requires an understanding of the technology as well as of the ethical issues it poses. I believe a technology ethics committee, building on the successful model of the clinical ethics committee, can address these concerns.

Alan Cossitt is a board-certified hospital chaplain who has developed various technologies, including one of the first commercial neural networks.


Hastings Bioethics Forum essays are the opinions of the authors, not of The Hastings Center.

  1. I really appreciate this blog post, as a full-time ethics consultant for behavioral health and technology. While I agree we need separate committees for technology and ethics, we also have an obligation to provide education and training to prepare future ethicists in this space. Currently, there is a lack of graduate programs in health care, law, and bioethics that offer specialized training in health care technology. As a clinical social worker and public health professional, I attended two top master’s programs that completely ignored this topic. I tried to pursue education on health care technology in other ways, including graduate courses in bioethics and a post-master’s seminar in health care bioethics with 100 providers from around the country, and am now preparing to sit for the HEC-C. My education and training in technology and ethics is primarily self-taught. Now I’m trying to convince universities, CE providers, and professional organizations to work with me to educate the next generation so we can have the experts we need in this space. Unfortunately, this has been an uphill battle, but academics are notorious for dragging their feet when it comes to innovation. It’s a battle I’ve fought before, and I’ll continue to fight it now because our patients deserve better.

  2. I agree that the medical technology industry should incorporate an ethical assessment committee into its practices to provide a solid anchor for the moral standards that must be put in place. If I were in the industry, I could see how some organizations could get carried away by profit or power. New inventions are critical to health care, so it might be best to have consultation services that assess their technical and moral aspects. https://medicalinventionconsulting.com/technology-assesment/

  3. Thank you for writing this! I discovered this article because I’ve been ruminating on the ethics of how all the health and public health tech I’m encountering in a job search is implemented. So many new organizations and start-ups are working on simplifying or improving health care delivery in some way, making it “accessible” or “equitable” and using AI to rush forward, but the main route of implementation is via an employer or an insurance company (mostly private insurance), or through the large, innovative health care systems that have the staff and know-how to support change and the infrastructure to roll out updated technology. This is all well and good, mostly, except I can’t help thinking this urgency to “technify” health care will only deepen inequalities that are already present: small systems, rural clinics, those who are un- or under-insured or on public insurance, the unhoused, the unemployed. They’re not benefitting from these shiny new advances. I don’t have formal bioethics training, but I would love to take a class or a certificate course on the ethics of health/public health tech. Since this post is from a few years ago, does anything like this exist now?
