Hastings Center News

Journal Editors Issue Guidance on the Use of AI in Scholarly Publishing

Editors at seven scholarly journals published recommendations on the responsible use of generative artificial intelligence tools by authors, reviewers, and editors. The recommendations prohibit listing generative AI as an author but allow its use to generate text and illustrations.

“These constraints are needed in part to protect high-quality scholarship, as other statements have noted, but they are also vital for wider social reasons,” said Gregory E. Kaebnick, lead author of the recommendations and editor of the Hastings Center Report.

These tools “have the potential to transform scholarly publishing in ways that may be harmful but also valuable,” says the statement, which was published in several of the bioethics and humanities journals edited by the authors and signatories. Signatories include Karen J. Maschke, editor of The Hastings Center’s journal Ethics & Human Research, and Laura Haupt, managing editor of the Hastings Center Report and Ethics & Human Research. Four additional editors are signatories.

The five recommendations are as follows:

  1. LLMs [large language models] or other generative AI tools should not be listed as authors on papers.
  2. Authors should be transparent about their use of generative AI, and editors should have access to tools and strategies for ensuring authors’ transparency.
  3. Editors and reviewers should not rely solely on generative AI to review submitted papers.
  4. Editors retain final responsibility in selecting reviewers and should exercise active oversight of that task.
  5. Final responsibility for the editing of a paper lies with human authors and editors.

While these recommendations are consistent with positions taken by the Committee on Publication Ethics and many journal publishers, they differ in some respects. For one thing, they address the responsibilities of reviewers to authors. In addition, the new statement takes a different position from that of Science magazine, which holds not only that a generative AI tool cannot be an author but also that “text generated by ChatGPT (or any other AI tools) cannot be used in the work, nor can figures, images, or graphics be the products of such tools.”

“Such a proscription is too broad and may be impossible to enforce, in our view,” the new statement says.

The recommendations are preliminary. “We do not pretend to have resolved the many social questions that we think generative AI raises for scholarly publishing, but in the interest of fostering a wider conversation about these questions, we have developed a preliminary set of recommendations about generative AI in scholarly publishing,” the statement says. “We hope that the recommendations and rationales set out here will help the scholarly community navigate toward a deeper understanding of the strengths, limits, and challenges of AI for responsible scholarly work.”

Authors:

Gregory E. Kaebnick, editor of the Hastings Center Report

David Christopher Magnus, editor in chief of the American Journal of Bioethics

Audiey Kao, editor in chief of the AMA Journal of Ethics

Mohammad Hosseini, associate editor of Accountability in Research

David Resnik, associate editor of Accountability in Research

Veljko Dubljević, editor in chief of the American Journal of Bioethics—Neuroscience

Christy Rentmeester, managing editor of the AMA Journal of Ethics

Bert Gordijn, co-editor in chief of Medicine, Health Care and Philosophy

Mark J. Cherry, editor of the Journal of Medicine and Philosophy

Signatories:

Karen J. Maschke, editor of Ethics & Human Research

Lisa M. Rasmussen, editor in chief of Accountability in Research

Laura Haupt, managing editor of the Hastings Center Report and Ethics & Human Research

Udo Schüklenk, joint editor in chief of Bioethics and of Developing World Bioethics

Ruth Chadwick, joint editor in chief of Bioethics

Debora Diniz, joint editor in chief of Developing World Bioethics