Hastings Center News
Policy Recommendations: Control and Responsible Innovation of Artificial Intelligence
A major international project at The Hastings Center released policy recommendations for the development of artificial intelligence and robotics, aimed at helping society reap the benefits and productivity gains while minimizing the risks and undesirable social consequences.
“Research, innovation, and the deployment of AI and robotic systems are proceeding rapidly, and so, too, is the emergence of a transdisciplinary community of researchers in AI and the social sciences dedicated to AI safety and ethics,” states the executive summary to the final report. “The Hastings AI workshops played a seminal role in catalyzing the emergence of this worldwide network of organizations and individuals.”

The Hastings Center’s project, Control and Responsible Innovation in the Development of AI and Robotics, was funded by the Future of Life Institute and led by Wendell Wallach, a senior advisor at The Hastings Center and a scholar at Yale University’s Interdisciplinary Center for Bioethics. Wallach is an internationally recognized expert on the ethical and governance concerns posed by emerging technologies, particularly artificial intelligence and neuroscience. Project participants included Stuart Russell, of the University of California, Berkeley; Bart Selman, of Cornell University; Francesca Rossi, of IBM; and David Roscoe, a Hastings Center advisory council member.
An event at the Carnegie Council for Ethics in International Affairs discussed the project findings.
Control and Responsible Innovation of Artificial Intelligence #AI: This distinguished panel discusses a 3-year @hastingscenter project that grapples with essential safety procedures, engineering approaches, and legal and #ethical oversight. Don’t miss it! https://t.co/F7pGs7PILH
— Carnegie Council (@carnegiecouncil) December 7, 2018
Three core recommendations emerged from the project:
- A consortium of industry leaders, international governmental bodies and nongovernmental institutions, national and regional (e.g., the European Union) governments, and AI research laboratories should convene an International Congress for the Governance of AI by November 2019. This Congress will initiate the creation of a new international mechanism for the agile and comprehensive monitoring of AI development and any gaps in oversight that need to be addressed.
- Universities and colleges should incentivize the education of a cadre of polymaths and transdisciplinary scholars with expertise in AI and robotics, social science research, and philosophy and practical ethics. Foundations and governmental sources of funding should contribute to the establishment of transdisciplinary research centers.
- Foundations and governmental sources of funding should support in-depth and comprehensive analyses of the benefits and issues that arise as AI is introduced into individual sectors of the economy. Project participants identified AI and health care as a good starting point: the benefits of AI for health care are commonly touted, but what will be the tradeoffs as we implement various approaches to reaping those benefits?
Read the executive summary here. Read the draft final report here. Watch the public event at which the recommendations were presented here. Find a transcript of the event here.