
Hastings Center News

Policy Recommendations: Control and Responsible Innovation of Artificial Intelligence

A major international project at The Hastings Center released policy recommendations for the development of artificial intelligence and robotics, aimed at helping society reap the benefits and productivity gains while minimizing the risks and undesirable social consequences.

“Research, innovation, and the deployment of AI and robotic systems are proceeding rapidly, and so, too, is the emergence of a transdisciplinary community of researchers in AI and the social sciences dedicated to AI safety and ethics,” states the executive summary of the final report. “The Hastings AI workshops played a seminal role in catalyzing the emergence of this worldwide network of organizations and individuals.”

The Hastings Center’s project, Control and Responsible Innovation in the Development of AI and Robotics, was funded by the Future of Life Institute and led by Wendell Wallach, a senior advisor at The Hastings Center and a scholar at Yale University’s Interdisciplinary Center for Bioethics. Wallach is an internationally recognized expert on the ethical and governance concerns posed by emerging technologies, particularly artificial intelligence and neuroscience. Project participants included Stuart Russell, of the University of California, Berkeley; Bart Selman, of Cornell University; Francesca Rossi, of IBM; and David Roscoe, a Hastings Center advisory council member.

An event at the Carnegie Council for Ethics in International Affairs discussed the project findings.

Three core recommendations emerged from the project:

  1. A consortium of industry leaders, international governmental bodies and nongovernmental institutions, national and regional (e.g., the European Union) governments, and AI research laboratories should convene an International Congress for the Governance of AI by November 2019. This Congress will initiate the creation of a new international mechanism for the agile and comprehensive monitoring of AI development and any gaps in oversight that need to be addressed.
  2. Universities and colleges should incentivize the education of a cadre of polymaths and transdisciplinary scholars with expertise in AI and robotics, social science research, and philosophy and practical ethics. Foundations and governmental sources of funding should contribute to the establishment of transdisciplinary research centers.
  3. Foundations and governmental sources of funding should support in-depth and comprehensive analyses of the benefits and issues that arise as AI is introduced into individual sectors of the economy. Project participants identified AI and health care as a good starting point: the benefits of AI for health care are commonly touted, but what will be the tradeoffs as we implement various approaches to reaping those benefits?

Read the executive summary here. Read the draft final report here. Watch a public presentation of the recommendations here. Find a transcript of the event here.