As a constant stream of news stories about developments in AI surrounds and bedazzles us, it’s easy to worry about what vision of the future awaits.
We are excited to be working on this topic at the same time that many others are also looking closely at ethics in AI and related issues. There are too many involved in these enterprises to mention here, but recent and current activities include:
Projects funded by non-profit organisations: our own project, for example, is funded by the Future of Life Institute, alongside many other recipients of its AI grants, thanks to a generous donation from Elon Musk;
Academics working across many universities, including dedicated centres such as the Leverhulme Centre for the Future of Intelligence at Cambridge, and the One Hundred Year Study of AI (AI100) at Stanford University;
Corporations large and small, such as the Partnership on Artificial Intelligence to Benefit People and Society, a collaboration involving Amazon, DeepMind, Facebook, Google, IBM, and Microsoft; and the Ethics Advisory Panel of Lucid Holdings Inc;
Work by individuals in the professions, such as research examining bias in AI recruitment and the development of apps to investigate bias in algorithms;
Work by government agencies, such as the White House Report on the Future of Artificial Intelligence, and a draft report on robotics and law by the Committee on Legal Affairs of the European Union;
Work by professional bodies, such as the IEEE Standards Association’s Global Initiative for Ethical Considerations in the Design of Autonomous Systems, and by research funding councils, such as the Engineering and Physical Sciences Research Council (EPSRC)’s Principles of Robotics;
Work by special interest and pressure groups, such as the Campaign to Stop Killer Robots.
What will be our particular contribution? We are focusing on the challenges of developing codes of ethics for artificial intelligence researchers. This naturally involves understanding in broad terms what the ethical questions are, both in different fields of AI and for AI more generally. But it also involves considering what the purpose of codes or regulations might be, and how to produce workable and effective ones. That means looking not simply at the content of such codes, but at questions such as who is involved in their production and critique, and who is involved in their application. It also means considering the surrounding social, economic, cultural, legal and political conditions that form a backdrop to the development of AI and of any codes of ethics concerning it.
We would like to extend our thanks to the Future of Life Institute for the generous support of our project.