The Future of Life Institute generously provided us with additional funds towards this workshop. This enabled us to give bursaries to some who would not otherwise have been able to attend, and greatly helped the success of the day.
Robert Wortham, Andreas Theodorou and Joanna Bryson
What is my Robot Thinking?: Transparency as a Fundamental Design Requirement for AI Architectures
Deciphering the behaviour of intelligent others is a fundamental characteristic of our own intelligence. As we interact with complex intelligent artefacts, humans inevitably construct mental models to understand and predict their behaviour. If these models are incorrect or inadequate, we run the risk of self-deception or even harm. This paper reports progress on a programme of work investigating approaches for implementing robot transparency, and the effects of these approaches on utility, trust and the perception of agency. Preliminary findings indicate that building transparency into robot action selection can help users build a more accurate understanding of the robot.
Michael Wellman and Uday Rajan
Ethical Issues for Autonomous Trading Agents (Discussion Paper)
The rapid advancement of algorithmic trading has demonstrated the success of AI automation, as well as gaps in our understanding of the implications of this technology’s proliferation. We explore ethical issues in the context of autonomous trading agents, both to address problems in this domain and as a case study for regulating autonomous agents more generally. We argue that increasingly competent trading agents will be capable of initiative at wider levels, necessitating clarification of ethical and legal boundaries, and corresponding development of norms and enforcement capability.
When Morals Ain’t Enough: Robots, Ethics, and the Rules of the Law
No single moral theory can instruct us as to whether, and to what extent, we are confronted with legal loopholes, e.g. whether or not new legal rules should be added to the system in the field of criminal law. This question about the primary rules of the law appears crucial for today’s debate on roboethics, and yet goes beyond the expertise of roboethicists. On the other hand, the unpredictability of robotic behaviour and the lack of data on the probability of events, their consequences and costs, make it hard to determine the levels of risk and hence the amount of insurance premiums and other mechanisms on which new forms of accountability for the behaviour of robots may hinge. Following the Japanese approach, the aim is to show why legally de-regulated, or special, zones for robotics, i.e. the secondary rules of the system, pave the way to understanding what kind of primary rules we may want for our robots.
Today’s Law, Tomorrow’s Consequences
In this talk we will explore the urgent need to think about law and AI in relation to AI developments which are already, or will shortly be, with us. These include autonomous cars, care robots, drones and more, where the law is lagging behind the technology. How AI is used in these contexts gives us clues as to how AI might develop in its interactions with humanity and the planet. Finally, we will look at how AI businesses and stakeholders might respond to the challenges of self-regulation or governmental regulation.
The Distinctiveness of AI Ethics
If workable codes or guidance on ethics and AI are to be produced, the distinctive ethical challenges of AI need to be faced head on. The purpose of this paper is to identify several major areas where AI raises distinctive or acute ethical challenges, with a view to beginning an analysis of challenges and opportunities for progress in these areas. The seven areas described are: hype in AI, and its unfortunate consequences; the interface between technology and wider social or human factors; uncertainty about future technological development and its impact on society; underlying philosophical questions; professional vulnerability in the face of emerging technology; an additional layer of complexity in ethical codes, concerning machine behaviour; and the extension, enhancement, or replacement of core elements of human agency.
How do humans or machines make a decision? Whenever we make a decision, we consider our preferences over the possible options. In a social context, collective decisions are made by aggregating the preferences of the individuals. AI systems that support individual and collective decision making have been studied for a long time, and several preference modelling and reasoning frameworks have been defined and exploited in order to provide rationality to the decision process and its result. However, little effort has been devoted to understanding whether this decision process, or its result, is ethical or moral. Rationality does not imply morality. How can we embed morality into a decision process? And how do we ensure that the decisions we make, as individuals or as a collectivity of individuals, are moral? In other words, how do we pass from individuals’ personal preferences to moral behaviour and decision making?
When we pass from humans to AI systems, the task of modelling and embedding morality and ethical principles is even more vague and elusive. Are the existing ethical theories applicable also to AI systems? On the one hand, things seem easier, since we can narrow the scope of an AI system, so that contextual information can help us define the moral values according to which it should work. However, it is not clear what moral values we should embed in the system, nor how to embed them. Should we code them in a set of rules, or should we let the system learn the values by observing us humans? Preferences and ethical theories are not that different in one respect: they both define priorities over actions. So, can we use existing preference formalisms to model ethical theories as well? We discuss how to exploit and adapt current preference formalisms in order to model morality and ethical theories, as well as the dynamic integration of a moral code into personal preferences. We also discuss the use of meta-preferences, since morality seems to need a way to judge preferences according to their morality level.
It is imperative that we build intelligent systems which behave morally. To work and live with us, such systems need to be trusted, and this requires that we are “reasonably” sure that they behave morally, according to values that are aligned with human ones. Otherwise, we would not let a robot take care of our elderly or our children, nor let a car drive for us, nor would we listen to a decision support system in any healthcare scenario. Of course the word “reasonable” makes sense when the application domain does not include critical situations (like suggesting a friend on social media or a movie in an online store). But when the AI system is helping (or replacing) humans in critical domains such as healthcare, we need a guarantee that nothing morally wrong will be done. In this extended abstract we introduce some issues in embedding morality into intelligent systems. A few research questions are posed, without answers, in the hope that the discussion they raise will shed some light on the possible answers.
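One simple way to picture the integration of a moral code into personal preferences is lexicographic: moral constraints filter the options first, and personal preferences only rank what remains. The following is an illustrative sketch under that assumption; the function, the care-robot scenario and the action labels are hypothetical, not taken from the abstract.

```python
def moral_lexicographic_choice(options, personal_rank, morally_forbidden):
    """Combine a moral code with personal preferences lexicographically:
    the moral code filters the options before preferences are consulted."""
    permitted = [o for o in options if o not in morally_forbidden]
    if not permitted:
        return None  # every option violates the moral code
    # Among permitted options, pick the personally most preferred one
    # (lower rank value = more preferred).
    return min(permitted, key=personal_rank.get)

# Hypothetical care-robot scenario: the personally cheapest action is
# morally ruled out, so the moral filter overrides the preference order.
options = ["ignore_alarm", "call_nurse", "call_family"]
personal_rank = {"ignore_alarm": 0, "call_nurse": 1, "call_family": 2}
choice = moral_lexicographic_choice(options, personal_rank, {"ignore_alarm"})
print(choice)  # call_nurse
```

A meta-preference, in this reading, would be a judgment over the preference order itself, e.g. demoting any ranking that places ignoring the alarm first.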
Deborah G. Johnson and Mario Verdicchio
Reframing AI Discourse
A critically important ethical issue facing the AI research community is how AI research and AI products can be responsibly conceptualized and presented to the public. A good deal of fear and concern about uncontrollable AI is now being displayed in public discourse. Public understanding of AI is being shaped in a way that may ultimately impede AI research. The public discourse, as well as discourse among AI researchers, leads to at least two problems: a confusion about the notion of “autonomy” that induces people to attribute to machines something comparable to human autonomy, and a “sociotechnical blindness” that hides the essential role played by humans at every stage of the design and deployment of an AI system. Here our purpose is to develop and use language that reframes the discourse in AI and sheds light on the real issues in the discipline.
The Singularity May Never Be Near
We are witnessing increasing concerns about ethics in artificial intelligence (AI). These concerns are fueled by both optimism and pessimism around progress in artificial intelligence. The optimists are investing millions, and in some cases even billions, of dollars into AI, and are being rewarded by some impressive results. The pessimists, on the other hand, predict that AI will end many things: jobs, warfare, and even the human race. Both the optimists and the pessimists often appeal to the idea of a technological singularity, a point in time where machine intelligence starts to run away, and a new, more intelligent “species” starts to inhabit the earth. If the optimists are right, this will be a moment that fundamentally changes our economy and our society. If the pessimists are right, this will be a moment that also fundamentally changes our economy and our society. It is therefore very worthwhile spending some time deciding if either of them might be right.
The Value Learning Problem
Autonomous AI systems’ programmed goals can easily fall short of programmers’ intentions. Even a machine intelligent enough to understand its designers’ intentions would not necessarily act as intended.
We discuss early ideas on how one might design smarter-than-human AI systems that can inductively learn what to value from labeled training data, and highlight questions about the construction of systems that model and act upon their operators’ preferences.
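As an illustrative sketch of the idea of inductively learning what to value from labeled training data (not the authors’ proposal), a toy value learner can fit a linear scoring function to operator-labeled outcomes. The cleaning-robot features and labels below are invented for the example; the final score shows how a side effect the training data never labels is simply left out of the learned values, one way a learned objective can fall short of the operators’ intentions.

```python
def learn_value_weights(examples, features, epochs=200, lr=0.1):
    """Toy 'value learning' as supervised regression: fit linear weights
    so outcomes labeled good score high and ones labeled bad score low."""
    weights = {f: 0.0 for f in features}
    for _ in range(epochs):
        for outcome, label in examples:
            pred = sum(weights[f] * outcome.get(f, 0.0) for f in features)
            err = label - pred
            for f in features:
                weights[f] += lr * err * outcome.get(f, 0.0)
    return weights

# Hypothetical labeled outcomes: operators label "clean room" outcomes good.
features = ["dust_removed", "vase_broken"]
train = [
    ({"dust_removed": 1.0, "vase_broken": 0.0}, +1.0),
    ({"dust_removed": 0.0, "vase_broken": 0.0},  0.0),
]
w = learn_value_weights(train, features)
score = lambda o: sum(w[f] * o.get(f, 0.0) for f in features)

# The training data never labels a broken vase, so the learned values
# are indifferent to it: cleaning while breaking the vase still scores
# as highly as cleaning carefully.
print(score({"dust_removed": 1.0, "vase_broken": 1.0}))
```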
Federico Pistono and Roman Yampolskiy
Unethical Research: How to Create a Malevolent Artificial Intelligence
A significant number of papers and books have been published in recent years on the topic of Artificial Intelligence safety and security, particularly with respect to superhuman intelligence [Armstrong, Sandberg & Bostrom, 2012; Bostrom, 2006; Chalmers, 2010; Hibbard, 2001; Loosemore & Goertzel, 2012; Muehlhauser & Helm, 2012; Omohundro, 2008; Yampolskiy, 2014; Yampolskiy, 2015a; Yampolskiy, 2015b; Yampolskiy, 2015c; Yampolskiy, 2015d; Yudkowsky, 2008]. Most such publications address the unintended consequences of poor design decisions, incorrectly selected ethical frameworks, or the limitations of systems that do not share human values and human common sense in interpreting their goals. This paper does not focus on unintentional problems that might arise as a result of constructing intelligent or superintelligent machines, but rather looks at intentional malice in design. Bugs in code, unrepresentative data, mistakes in design and software poorly protected from black hat hackers can all potentially lead to undesirable outcomes. However, intelligent systems constructed to inflict intentional harm could be a much more serious problem for humanity.
The Problem of Superintelligence: Political, not Technological
The most prominent thinkers who have reflected on the problem of a coming artificial superintelligence have seen the problem as a technological problem, a problem of how to control what the superintelligence will do (Vinge 1993; Yudkowsky 2001; Bostrom 2002, 2014). I argue that this approach is probably mistaken because it is based on questionable assumptions about the nature of intelligent agents and, furthermore, potentially counterproductive because it might, in the end, bring about the existential catastrophe that it is meant to prevent. I contend that the problem posed by a future superintelligence will likely be a political problem, a problem of establishing a peaceful form of coexistence with other intelligent agents in a situation of mutual vulnerability, and not a technological problem, a problem of control.
Validation Against Expert Disagreement for Social Robot Ethics
Approaches to developing ethics for social robots that rely upon expert feedback must address the problem that experts often disagree regarding the proper way to prioritize competing ethical norms. Any valid solution to this problem ought to produce an ethics program the output of which agrees sufficiently well with disagreeing ethics experts. This paper uses a modification of Cohen’s kappa statistic to interpret this constraint. So interpreted, the constraint exhibits two virtues: it accommodates a wide range of disagreement among ethics experts, and it provides a well-defined quantitative measure for how strongly a robot’s ethical programming agrees with the judgments of ethics experts. Extant competing validation procedures lack at least one of these virtues. So approaches to programming ethics for social robots ought to revise their validation methodology.
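The paper’s modified kappa statistic is not reproduced here, but plain Cohen’s kappa, on which it builds, can be sketched as follows. The robot-versus-expert labels and the simple averaging over experts are illustrative assumptions for the example, not the paper’s procedure.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Unweighted Cohen's kappa between two raters' categorical
    judgments on the same set of cases."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed agreement: fraction of cases labeled identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected chance agreement from each rater's marginal frequencies.
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    if p_e == 1.0:
        return 1.0  # degenerate case: both raters constant and identical
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ethical rulings by a robot and two disagreeing experts.
robot   = ["permit", "forbid", "permit", "forbid", "permit"]
expert1 = ["permit", "forbid", "permit", "permit", "permit"]
expert2 = ["forbid", "forbid", "permit", "forbid", "permit"]

mean_kappa = (cohens_kappa(robot, expert1)
              + cohens_kappa(robot, expert2)) / 2
print(f"{mean_kappa:.3f}")
```

The appeal of a kappa-style measure for this validation problem is visible even in the toy case: it discounts the agreement two raters would reach by chance, so a robot that simply echoed the majority label would not score well merely because the label distribution is skewed.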