Work in progress: We will be updating this page continuously. Please keep checking back, and please bear with us as we work out the best way to organise this material. We aim to provide categories and shortcuts to make it easier for you to find what is of most interest.

Here we will suggest some useful resources for ideas on how to develop codes of ethics for artificial intelligence.

These are also listed alphabetically here.

In many instances, issues concerning AI in various areas can benefit from work in ethics already underway on particular topics – it’s just a matter of matching up interests and concerns. Here, we’ll be building resources and suggesting some useful links. This can not only save the wheel from being reinvented over and over – it can also assist in the collaboration and discussions needed to advance debate.

Work in this area is already underway, and some has been completed.

The EPSRC (Engineering and Physical Sciences Research Council) of the UK has formulated a set of Principles of Robotics, first drawn up in 2010. For our purposes in considering how codes of ethics might be useful in AI, note their current status:

“The five ethical rules for robotics are intended as a living document. They are not intended as hard-and-fast laws, but rather to inform debate and for future reference. Obviously a great deal of thinking has been done around these issues and this document does not seek to undermine any of that work but to serve as a focal point for useful discussion.”

The webpage contains contact details for comments. These principles are pretty broad-brush, and contain statements with which some in the robotics community take issue. However, this is a good example of how codes of ethics may serve as a focal point for discussion and a starting point for debate.

The BSI (British Standards Institution) produced a standard in 2016, BS 8611:2016 Robots and robotic devices: Guide to the ethical design and application of robots and robotic systems. Again, it’s of great interest to us to note the background and purposes of this standard:

“It [BS 8611] recognizes that potential ethical hazards arise from the growing number of robots and autonomous systems being used in everyday life. The standard highlights that ethical hazards have a broader implication than physical hazards. Hence it is important that different ethical harms and remedial considerations are duly considered.”

Further details about the standard are given on the above link.

The IIIM (Icelandic Institute for Intelligent Machines) published its own Ethics Policy for Peaceful R&D in August 2015. This is a very interesting document to illustrate the range of purposes of codes of ethics; note that it has specific aims. The preamble states that the code is concerned with ‘two major threats to societal prosperity and peace’ – increases in military spending and military action; and government intrusion into privacy. The IIIM’s preamble also states, ‘As far as we know, no other R&D laboratory has initiated such a policy’.

For research more generally, the EU Horizon 2020 Framework Programme for Research and Innovation mandates that all its funded projects comply with ethical principles and relevant national and international legislation.

The European Parliament Committee on Legal Affairs has a Working Group on Robotics and Artificial Intelligence whose mission is to “reflect on legal issues and especially to pave the way for the drafting of civil law rules in connection with robotics and artificial intelligence. Its mission is to stimulate the reflection of Members on these issues by facilitating specific information, providing exchange of views with experts from many fields of academic expertise and enabling Members to conduct an in-depth analysis/examination of the challenges and prospects at stake”. Read this article which has links to other information about the EU’s activities in this area.

There is considerable work which aims to address the ethical challenges of AI that are already with us. For example, the UnBias project, “Emancipating Users Against Algorithmic Biases for a Trusted Digital Economy” started in September 2016.

OpenAI, an AI research organisation, was launched in 2015.

Ethical issues in AI frequently relate to ethical issues which have been discussed and debated, sometimes at length, coming from other disciplinary directions. In some cases, for example, AI simply heightens issues that were already of concern, or adds a particular technological gloss to well-worn debates.

For instance, recent work in AI raises various questions about the ethics of adherence to medications (see Kate Crawford and Ryan Calo, ‘There is a blind spot in AI research’, Nature, 20th October 2016). Developments in technology can make it possible to track exactly when a patient has, or has not, taken medications. Developments in robotics can enable robot assistants to remind people about medications – is this undue pressure? But note that such questions are already closely studied by academics working in the field of medicine, with broad, interdisciplinary approaches. For example, the Academy of Medical Sciences and the Faculty of Pharmaceutical Medicine produced a report on this in 2014 which can be used as an introduction to the ongoing work in this complex area.

Ethical issues in the use of data overlap considerably with many of the ethical challenges of AI. A project from Pace University in New York explores issues in data ethics here.

There are many codes of research ethics in related disciplines which have grappled with the complexity of ethical questions and might have useful approaches.

For instance, the Economic and Social Research Council of the UK (ESRC) lists many useful resources, pointing to a myriad of places where aspects of ethics in research, including the history of research ethics, are discussed or applied. Why replicate this great work? Here is their link to useful resources.

Sometimes the suggestion is made that, because ethical questions in AI are complex and because technology is developing rapidly, it’s necessary to think about things creatively and with an open mind, sometimes referencing virtue ethics or phronesis. But others find this frustratingly vague. One thing I really like from the ESRC site, which I thought might inspire people working in AI, is their sample flow chart of questions to answer when setting out on a project. This is an example of a more concrete way of recognising complexity, and it could be adapted to particular contexts. There is also an initial ethics checklist, a glossary of terms, example case studies, and other resources to educate and inform researchers.

Note too that the ESRC’s approach to ethics makes clear how embedded it is within a broader context, of the history of research ethics in general, of social science research in particular, and within a background legal and institutional setting. Providing such a context could be very useful in AI, too. It can help guard against the hype that AI ethics is unique, when, although it has distinctive features, it also shares concerns with many other areas of research and technology.

However, it is true that there are some potentially important differences. For example, much of the research the ESRC funds will concern specific groups of research subjects who can be identified and hence given specific protections. This may not be the case with much AI research, which may or may not involve specific groups or individuals in its development, but which will of course affect people, often countless millions of people, in its application.

Involving the public in research is perhaps an especially strong expectation for publicly funded research, such as research within the National Health Service. Here is an example of how the National Institute for Health Research (NIHR) actively invites the public to become involved in research, should they so wish. Information is given for researchers, and in some funding streams researchers are indeed required to involve the public. This information includes links to the organisation INVOLVE, which is dedicated to assisting with the involvement of the public in research. Not all of this will be relevant to much AI research, but for some research, seeing how others do this could prove very helpful in addressing the question of how to ensure good ethical practice. There are geographical and other background issues to consider, of course. These are UK sites, so researchers should be aware that laws relevant to research, for example concerning data protection, may vary, sometimes considerably, in other jurisdictions.

The AoIR (Association of Internet Researchers) has produced two reports to assist researchers in making decisions about their research, in 2002 and 2012. The AoIR states it has ‘an ongoing commitment to ensuring that research on and about the Internet is conducted in an ethical and professional manner’. Given the overlaps of concern between internet researchers and AI researchers, and given that both deal with new and rapidly developing technology, these reports are of great potential interest.

Likewise, consider the status of the AoIR’s recommendations concerning Ethical Decision-Making and Internet Research. These have no formal authority per se; any authority is gained from the respect due to the Association and to its methodology for producing the reports. Note too, that the website states, “Just as these documents were immeasurably enriched by comments and contributions from AoIR members, we hope that readers will continue to call attention to issues and resources in Internet research ethics for debate and deliberation by the ethics working committee.” Again, encouraging debate and feedback is one important role of such codes, regulations, or guidance.


We would like to thank the Future of Life Institute for their generous sponsorship of our programme of research.