Hastings Center Report
Benefits and Risks of Using AI Agents in Research
Abstract: Scientists have begun using AI agents in tasks such as reviewing the published literature, formulating hypotheses and subjecting them to virtual tests, modeling complex phenomena, and conducting experiments. Although AI agents are likely to enhance the productivity and efficiency of scientific inquiry, their deployment also creates risks for the research enterprise and for society, including poor policy decisions based on erroneous, inaccurate, or biased AI outputs or products; responsibility gaps in scientific research; the loss of research jobs, especially entry-level ones; the deskilling of researchers; AI agents' engagement in unethical research; AI-generated knowledge that is unverifiable by or incomprehensible to humans; and the loss of the insight and courage needed to challenge or critique AI and to engage in whistleblowing. Here, we discuss these risks and argue that responsibly managing them urgently requires reflection on which research tasks should and should not be automated.

