Workshop, Oxford, June 2016

Workshop: A day of Ethical AI, June 8th 2016

Oxford Martin School, 34 Broad Street, Oxford, OX1 3BD

Programme

9.40  Paula Boddington: Introduction and welcome

9.50  Molly Crockett: Psychological barriers to trusting artificial agents

11.00 Anders Sandberg: On autonomous cars, autopilots and the anthropocene

11.40 Michael Fisher: Verifiable Ethical Autonomy

12.20 Marina Jirotka: Practical Problems and Possible Resolutions for Responsible Innovation in AI

2.00  Panel: Current research in Cambridge

Huw Price, Simon Beard, Adrian Weller, Jat Singh

3.45  Owen Cotton-Barratt: Superintelligent tools and agents

4.30  Panel and open discussion: Superintelligence / long-term future of AI and ethical implications

Chair: Peter Millican; Anders Sandberg, Owen Cotton-Barratt

ABSTRACTS:

Molly Crockett, Lab Director, Crockett Lab, Dept of Experimental Psychology, University of Oxford.

Psychological barriers to trusting artificial agents

Abstract: As AI becomes increasingly integrated into our daily lives, the decisions it faces will go beyond the merely pragmatic and extend into the ethical. There are good arguments for why some ethical decisions ought to be left to AI: unlike human beings, artificial agents are not led astray by cognitive biases, do not experience fatigue, and do not feel hatred toward an enemy. Free from these human limitations, such agents could even be said to make better moral decisions than we do. Yet the notion that an AI might be given free rein over moral decision-making seems distressing to many, and this unease could erode public support for AI research. Here we present research that reveals potential psychological barriers to trust in AI. We show that agents who make consequentialist decisions, a proposed core feature of ethical AI, are perceived by most people as less moral and trustworthy. These findings pose challenges for public trust in ethical AI. We suggest ways to surmount these challenges by identifying features of moral decisions that can mitigate distrust in consequentialist agents.

Anders Sandberg, Future of Humanity Institute, University of Oxford

On autonomous cars, autopilots and the anthropocene

Abstract: Design and verification of autonomous robots face ethical challenges due to their interaction with humans and the world. There are several ethical challenges of increasing complexity: creating systems that behave in such a way that human aims are fulfilled beneficially and safely, creating systems that act in the same way as a moral human would, creating systems so that humans are not driven to detrimental actions or belief states, and avoiding systemic risks that emerge from the overall process. Verification in this context involves not only understanding how the robot will interact, but how humans and their institutions will interact back. This talk will give a brief overview of some of these complex boundary conditions and how they impact verification.

Michael Fisher, Professor of Computer Science, University of Liverpool; Director of the cross-disciplinary Centre for Autonomous Systems Technology at Liverpool; Coordinator of the EPSRC Network on the Verification and Validation of Autonomous Systems

Verifiable Ethical Autonomy

Abstract: Autonomous systems must make their own decisions, often without direct human control. But can we be sure that these systems will always make the decisions we would want them to? In this talk I will examine how high-level decision-making is organised in autonomous systems, the formal verification of this decision-making, and the impact of this verification on the ethical behaviour of autonomous systems.

Marina Jirotka, Professor of Computer Science, University of Oxford

Practical Problems and Possible Resolutions for Responsible Innovation in AI

Abstract: This talk will report on a study of how researchers and practitioners perceive their ethical responsibilities regarding their innovations. The study focussed on the social consequences of a range of ICT innovations, including robotics and AI, and revealed the various pragmatic issues researchers face across the innovation lifecycle. These findings will be discussed in the context of recent developments in Responsible Research and Innovation (RRI). After outlining the background and context of RRI, I will discuss some of the challenges and issues emerging for ICT development and how these might be met. Finally, whether or not RRI is widely adopted, the approach suggests how researchers and practitioners might practically deal with ethical and societal concerns arising from their innovations. The talk will conclude with some novel tools and approaches for addressing some of these concerns.

Owen Cotton-Barratt, Future of Humanity Institute, University of Oxford

Superintelligent tools and agents

To what extent can we expect superintelligent AI to be deployed in an agent-like manner? To what extent is this desirable (for humans)?

Panel from Cambridge Centres

Huw Price, Leverhulme Centre for the Future of Intelligence, University of Cambridge

Short presentation: Introduction to the work of the Leverhulme Centre for the Future of Intelligence, Cambridge

Adrian Weller, Leverhulme Centre for the Future of Intelligence, University of Cambridge

Short presentation: Trust and Transparency project

Jat Singh, Computer Lab, University of Cambridge

Short presentation: Introduction to the tech-legal implications of machine learning (via mccrc.eu).

Simon Beard, Centre for the Study of Existential Risk, University of Cambridge

The sensible knave: personal intelligence and the control problem

Abstract: I argue that one of the key considerations in developing AI should be whether future AI will come in the form of persons or not. I suggest that most current AI is impersonal, because it lacks a conception of itself through time. Whilst there is no intrinsic need for AI to have a sense of itself through time, I argue that this impersonality could make AI considerably more dangerous and the control problem harder to solve. This is because non-persons do not have the same capacities for sympathy and empathy as persons do, and are therefore far more likely to behave like ‘sensible knaves’ than like members of a moral community. Finally, I suggest that endowing AI with personhood probably means restricting its capacity to retain memories, or at least requiring it to act as if it had less power to retain memories than it actually has.

We would like to extend our thanks to the Future of Life Institute for their generous support of this project.