
Bioethics Forum Essay

AI Meets Bioethics Literature: How Did It Do?

In the span of a few months, new artificial intelligence tools like ChatGPT, GPT-4, and DALL-E have taken the excitement over Big Data to a new level. Much of the attention has been focused on AI’s potential to perform tasks traditionally done by humans, such as drafting reports, diagnosing patients, and determining treatments. Using AI to inform the public about medical advancements, however, has received less attention than it deserves. AI could be a powerful tool for making research found in medical and bioethics journals more accessible to a wider audience. 

How accurately could AI translate complex medical information for lay persons? How well could it identify and distill the ethical dilemmas posed by research findings? What safeguards could be used to prevent the use of AI for misinformation and disinformation? 

These questions came up early and often in conversations we have had over the past several months with Eran Reshef, a co-founder of the Israel-based startup Sighteer, a pioneer in the use of AI for content generation at scale. We met Reshef during an NYU Tel Aviv-based project on innovation ecosystems. We talked with him about ways that AI platforms could enhance efforts to educate people about bioethics issues: patient care, access to investigational medical products, clinical trial design, vaccines, transplants, gender affirmation, sports and wellness, and so on.

The team at Sighteer suggested that we feed scholarly bioethics-related articles into its “content factory.” The aim was to assess the platform’s ability to translate technical jargon and complex ideas into accurate, accessible language for a general audience. So we decided to conduct a small, non-scientific experiment.

We provided Sighteer with a random assortment of 10 articles authored or co-authored by members of the faculty of the Division of Medical Ethics at NYU Grossman School of Medicine and recently published in peer-reviewed medical journals. The team at Sighteer instructed the platform to produce a summary of the main ideas and arguments of each article, along with a simplified explanation of the ethical issues presented. The platform was then asked to generate text of about 100 words that could be shared on Twitter and LinkedIn, as well as art to accompany each post.

The speed and quality of the samples generated by the Sighteer AI were impressive. In about one minute, the AI identified the main points of each article and provided a brief, easy-to-understand summary that captured the essence of what the authors were trying to communicate, along with social media posts, images, and relevant hashtags. We reviewed the content for accuracy and for whether it was interesting and engaging. When the results fell short, the AI was instructed to generate new content, which it did in an instant.

The final step was to ensure that the content was presented ethically by disclosing that it was created by AI rather than humans. We did this with a watermark on the text and images.    

Here are some examples of journal articles given to Sighteer and the social media posts it produced. They were succinct, readable, and accurate.

“Ethics and the Emerging Use of Pig Organs for Xenotransplantation,” by Arthur Caplan and Brendan Parent, published in the Journal of Heart and Lung Transplantation



Our aim here goes beyond using technology to create engaging social media posts. It isn’t about collecting “likes” and views. Rather, these are examples of how generative AI can be used to solve a practical issue of making scientific knowledge accessible to the general public at scale. 

As has been widely reported, AI-generated text and images are sometimes inaccurate or biased. This wasn’t the case in our experience with the samples generated by Sighteer. But, in general, AI models are not yet able to consistently produce accurate, unbiased, and otherwise trustworthy results. This is one of a growing number of ethical issues with using generative AI.

But there are also ethical issues with not using it. Chief among them: is it irresponsible, even with human oversight and some need for revision, to disregard generative AI now that we know it is available and can do some impressive work? Is it irresponsible not to use it when more activity can be offloaded to it, freeing professionals to return to the core of their work rather than paperwork and busywork? And what about the moral obligation of professionals to make their scientific knowledge available to as many people as possible?

Like other technological tools that have made their way into health care, generative AI has the capacity to bring humans and machines into closer and better integration. That integration should not be seen as the end of humanity or even of human medicine. After all, neither human beings nor AIs are perfect or free from mistakes. With moral and responsible integration, the human-AI relationship may be able to bridge gaps that have hindered improvements in individual and public health. Making scholarly research more accessible to the larger public is one way to do this.

The basic value of bioethics is in supporting the delivery of responsible, high-quality health care. Promoting access to knowledge of medical science, especially in an age of rapidly advancing technology and communication, is a key part of that calling.

Arthur Caplan, PhD, is Mitty Professor of Bioethics at the NYU Grossman School of Medicine. Lee Igel, PhD, is clinical professor at the NYU School of Professional Studies and an associate in medical ethics at the NYU Grossman School of Medicine. (@leeigel)

Comments
  1. The chatbot says that it’s important to ensure that religious leaders are on board for xenotransplantation to go forward. Why is this necessary in a secular country? And which religious leaders, from what religion or sect? What if some religious leaders approve and others do not? What about animal rights activists? Should they have to be on board also?

  2. All good points. The original paper reflected concerns from ethicists about Jewish and Muslim worries over the implantation of pig parts, so the chatbot got that sort of correct. I think the paper expressed this as an issue but did not assign veto power to any religious position prior to going forward. Ruth’s policy questions are thus very well taken: more work is needed by the chatbot, and even by ethicists, in distinguishing the noting of religious concerns from the question of who ought to ‘give permission,’ which was muddy in the paper!

  3. I’m not convinced that AI will free up time for professionals to do less “paperwork and busywork” or that it will help to educate people. Numerous things determine such outcomes, including the platform, software, and regulatory environment. And while one “basic value of bioethics is supporting the delivery of responsible, high-quality health care,” it is not the only one. While promoting “access to knowledge of medical science, especially in an age of rapidly advancing technology and communication” is a key role for bioethics, this essay overstates the potential and downplays the numerous hazards and unknowns. There is no way to predict how the public will digest or understand social media posts (although the samples provided may be accurate), and that will likely vary with demographics and individual interest and knowledge. The potential benefit for bioethics and its aims is real, but the numerous probable challenges are perhaps more significant.
