Myth and the EU study on Civil Law Rules in Robotics

The European Parliament has recently produced a study ‘European Civil Law Rules in Robotics’. This continues work by the European Parliament’s Committee on Legal Affairs, such as its publication in May 2016 of a Draft Report with Recommendations to the Commission on Civil Law Rules on Robotics.

Such studies are of immense relevance and interest to any project of drawing up codes of ethics for AI. Any code of ethics operating within the European Union would need to be cognisant of relevant laws, not simply in order to comply with them: as codes of ethics are developed, it's vital to be aware of any harmonies or clashes between such laws on robotics and AI and other influential statements. This includes considering the surrounding policy context and any preambles or accompanying texts, which can give insights into the guiding motivations and underlying values that inform the regulations. This can be informative for codes both inside and outside the EU.

In working out how best to think about developments in AI, or indeed any fast-changing technology, science fiction can be a useful tool. Where situations are not yet with us, we need to use our imagination. Science fiction almost invariably has some moral content, whether explicit or implicit. (Indeed, it’s virtually impossible to write any half-plausible and interesting story that does not have some normative elements.) Science fiction frequently plays out scenarios about how we might or might not relate to robots and to very advanced computers. These can be extremely useful in helping us to ponder what our values are and how we might react to new developments.

But another rich source of imagination and values is found in stories we already have. Much work that discusses AI refers to ancient stories and myths about robots created from mere matter, as well as more modern literature. Stories often referred to in such contexts include Frankenstein’s Monster, the Golem, Pygmalion, the Tin Man from the Wizard of Oz, the maidens made of gold who appear in the Iliad, and the giant Talos made of bronze. Spirit-powered robots defended relics of the Buddha, according to Indian legend.

But in referring to any such stories, we need to take care over which lessons we draw. (It’s not wise to conclude from the story of Sleeping Beauty that it’s a good idea to marry a man you’d never met before who’s broken into your bedroom and awoken you with a kiss. Other layers of interpretation in such fairy stories, though, are a rich source of meaning.) So, in looking at the preamble and surrounding text of policy documents which refer to such ancient or more modern stories, it’s useful to examine how these stories are used and what lessons are drawn from them.

The document European Civil Law Rules in Robotics refers to cultural ideas about robots in its introductory texts, as part of a narrative justifying its approach and, in particular, grounding it in a response to what are seen to be European concerns. The relevant passage comes in the document’s section 2, “General Considerations on Robots: The notion of the robot, its implications and the question of consciousness”, where the discussion is used to explain reservations about using the term ‘smart robot’ in a set of regulations designed for the European context, because of the likely public reaction:

1°/ Western fear of the robot

The common cultural heritage which feeds the Western collective conscience could mean that the idea of the “smart robot” prompts a negative reaction, hampering the development of the robotics industry. The influence that ancient Greek or Hebrew tales, particularly the myth of Golem, have had on society must not be underestimated. The romantic works of the 19th and 20th centuries have often reworked these tales in order to illustrate the risks involved should humanity lose control over its own creations. Today, western fear of creations, in its more modern form projected against robots and artificial intelligence, could be exacerbated by a lack of understanding among European citizens and even fuelled by some media outlets.

This fear of robots is not felt in the Far East. After the Second World War, Japan saw the birth of Astro Boy, a manga series featuring a robotic creature, which instilled society with a very positive image of robots. Furthermore, according to the Japanese Shintoist vision of robots, they, like everything else, have a soul. Unlike in the West, robots are not seen as dangerous creations and naturally belong among humans. That is why South Korea, for example, thought very early on about developing legal and ethical considerations on robots, ultimately enshrining the “smart robot” in a law, amended most recently in 2016, entitled “Intelligent robots development and distribution promotion act”. This defines the smart robot as a mechanical device which perceives its external environment, evaluates situations and moves by itself (Article 2(1)). The motion for a resolution is therefore rooted in a similar scientific context.

Commentary on this passage:

The passage opens with the suggestion that the collective consciousness of the West shows itself in ancient fears about losing control of robots, which must be addressed in order not to hamper the robotics industry. The wording seems to imply that this fear is unfounded or poorly grounded. This suggests a somewhat cavalier attitude to such myths, as if they indicate something irrational and to be combatted. While a culture’s myths may indeed show things which cannot be reduced entirely to rational analysis, the very fact that they have survived for so long suggests that myths and stories may be indicating something important. Indeed, the document goes on to validate these Western fears, but tellingly, does so by referring not to myth or culture but by heeding the recent warnings about AI of four prominent scientists and technologists: Stephen Hawking, Elon Musk, Bill Gates and Bill Joy, citing these experts “now that the object of fear is no longer trapped in myths or fiction, but rooted in reality”.

This is a telling way of presenting these myths and stories of our past. It’s as if the lessons we need to learn from these myths and stories are merely some uncanny, lucky prediction of the scientific future, and now that we have the science, and now that we have the technologists and scientists to warn us, we can at last realise these warnings were, by fluke, right after all. Yet the appeasement of the general European public is framed in terms of addressing and combatting the cultural sway of the ancient myths. So … are the pre-scientific-myth-fuelled fears of the “great unwashed” general public right by some spooky coincidence? Is scientific reason, endorsed by experts, by happenstance now simply marching in parallel time to unreason?

These questions matter because there is a real difference between a document that attempts to accommodate reasonable public concerns and one that panders to an irrational populace. One might develop policy, and in particular public information, quite differently depending on which of these attitudes is taken. Indeed, it is somewhat unclear what kind of stance the document is taking on this point.

Something of note is that, in discussing the lack of fear of robots in the Far East, the document also grounds the Japanese stories about robots in the underlying metaphysical and normative framework of Shintoism. This makes sense of the positive Japanese response to robots. Such a sense-making narrative is absent from the account of the Western myths to which the document refers. (Note, then, that the EU document subliminally suggests that positive myths of robots are grounded in something more substantive, whereas negative myths are not.)

Is there no sense-making Western narrative available? Note of course that ‘the West’ is not a monolithic idea – there are robot stories in various Western traditions, including Norse as well as the Greek and Jewish traditions referred to in the document. But note too, at this juncture, that the EU document highlights the Hebrew myth of Golem as being particularly influential in Western society and in what the document calls ‘western fear of creations’. Indeed, it’s the only Western robot story actually named.

I had to read this phrase ‘western fear of creations’ several times to make sure I’d understood it. For the idea that it is the West which is afraid of creation, and that a particularly strong influence on this fear stems from the Jews, butts up against the flourishing of science, technology and invention in the West, which has been so profoundly influenced by the Judeo-Christian tradition; not to mention the high density of tech start-up firms in Tel Aviv, for example. By ‘fear of creations’ the document is presumably referring to fear of autonomous creations which escape the control of their creator, not fear of artefacts per se.

But whilst it cites underlying frameworks behind Eastern robot stories, the EU document’s account of Western responses to robots misses out a profoundly influential Hebrew narrative which surely lends heavy cultural salience to Western myths about robots. I refer of course to the story in Genesis of the creation of man and of the Fall. For the Fall shows how, in disobeying their Creator, Adam and Eve came to see that they were naked and gained knowledge of good and evil. And we all know what happened next, armed with that dangerous knowledge: thousands of years of often sorry human history, with bright spots here and there. Mankind was given the power to act and to think, but the freedom which Adam and Eve were given to act independently of their Creator also led to disobedience and disaster.

But these are precisely the fears that are expressed about AI and robots now. It’s not fear of creation in the sense of invention and artefact, or control over the world per se, since the Genesis story gives mankind dominion over the earth – it’s fear of a creation which escapes the control of its creator. It’s fear of a creation which, left in a safe space unobserved, gets into mischief. It’s worries about how we, the creators, might treat robots if they were to develop consciousness and the ethical awareness that Adam and Eve developed. But these are precisely the moral worries of the moderns who are armed with good understandings of science and tech.

Presenting the myths around dangerous robots in the context of Genesis paints a totally different picture from the one given by simply framing the Hebrew myth of Golem as a stand-alone story of uncontrollable robots which just happens to form the strongest influence behind what seems to be an ill-grounded, primitive fear. It not only presents this robot-gone-bad narrative as a central influence firmly embedded in the history of Western culture, rather than merely a popular story. It not only embeds it in an account of the nature of humanity and of humanity’s place in the universe as made in God’s image – and hence with the potential for responsible control over the world, and hence with a positive potential for advancing science and technology. It does more.

For if we see the Genesis account of the Fall of man as foreshadowing of fears about robots, then Genesis gets the problem exactly right, for exactly the right reasons – it’s a worry about autonomy itself: what might robots do if we can’t control them fully? Will they adhere to the same value system as us? Will they decide to disobey us? What will our relationship with our creations be?

The modern scientific experts can tell us that these fears might now actually be realisable. We didn’t need them to tell us that the fears were in principle well-founded. Far from quaking at a Hebrew scare story which whipped up primitive fears in the general public that need to be allayed, we can thank the Hebrew account of Genesis for pre-warning us thousands and thousands of years ago, in a rich and meaningful story about our place in the world and about our nature, from which we can infer also that creating robots with the ability to judge and to act may be worthwhile. But it can go very, very wrong. This is precisely the central ethical question of AI today. If the general public have concerns about this expressed through myth, these concerns are not irrational. They need to be addressed.

Paula Boddington

We would like to thank the Future of Life Institute for sponsoring our work.

The EPSRC’s Principles of Robotics and ethical debate

Last April, the AISB organised a workshop on the EPSRC’s Principles of Robotics. These Principles were formulated in 2010, and take the form of five ‘rules’ and seven ‘high-level messages’. The Principles aimed to provide some guidance but also, importantly, to stimulate debate, which they have indeed done; the workshop was an example of such debate and produced very interesting discussions. Any code of ethics in a field which is advancing so rapidly, which challenges central notions of human agency, and which presents such challenges for the organisation of our social world, must always stand open to discussion and debate. This is at least as important as getting the code ‘right’, perhaps even more important. Ethics must always involve dialogue and close listening among all affected parties.

Papers from the workshop are being published in Connection Science. Paula Boddington’s commentary can be accessed here.

Abstract: The EPSRC Principles of Robotics refer to safety. How safety is understood is relative to how tasks are characterised and identified. But the exact task(s) a robot performs within a complex system of agency may be hard to identify. If robots are seen as products, it is nonetheless vital that the safety and other implications of their use in situ are also considered carefully, and that they are fit for purpose. The Principles identify humans, rather than robots, as responsible. We must thus understand how the replacement of human agency by robotic agency may impact upon attributions of responsibility. The Principles seek to fit into existing systems of law and ethics. But these may need development, and in certain contexts attention to more local regulations is also needed. A distinction between ethical issues related to the design of robotics, and to their use, may be needed in the Principles.

We would like to thank the Future of Life Institute for sponsoring this work.

The Distinctiveness of AI Ethics, and Implications for Ethical Codes

Paula Boddington

Paper presented at the workshop Ethics for Artificial Intelligence, July 9th 2016, IJCAI-16, New York

Abstract

If workable codes or guidance on ethics and AI are to be produced, the distinctive ethical challenges of AI need to be faced head on. The purpose of this paper is to identify several major areas where AI raises distinctive or acute ethical challenges, with a view to beginning an analysis of challenges and opportunities for progress in these areas. The seven areas described are: hype in AI, and its unfortunate consequences; the interface between technology and wider social or human factors; uncertainty about future technological development and its impact on society; underlying philosophical questions; professional vulnerability in the face of emerging technology; an additional layer of complexity in ethical codes, concerning machine behaviour; and the extension, enhancement, or replacement of core elements of human agency.

1              Introduction

This paper arises from work undertaken as part of an FLI project entitled ‘Towards a code of ethics for AI’. The purpose of this project is not to produce a code of ethics as such, but to clarify and analyse the challenges and purposes of producing such codes.

This presentation concerns some potential challenges to producing workable, transparent codes, guidance, or regulation in the field of AI. In this endeavour, we should not presuppose that AI raises any more serious ethical problems than other areas of technology, nor that its problems are completely unique. In many respects, indeed, AI raises issues which have broad similarities with other areas; but the focus here will be on particular respects in which AI does raise issues that are distinctive or unique. Insofar as these bear similarities to other areas, there can be mutual learning; but insofar as they are AI-specific, particularly close attention must be given to unravelling them.

Our work has looked at several areas where these issues arise, which will be considered in turn in what follows:

  1. Hype in AI, and its unfortunate consequences;
  2. The interface between technology and wider social or human factors;
  3. Uncertainty about future technological development and its impact on society;
  4. Underlying philosophical questions;
  5. Professional vulnerability in the face of emerging technology;
  6. An additional layer of complexity in ethical codes, concerning machine behaviour;
  7. The extension, enhancement, or replacement of core elements of human agency.

2              Hype in AI, and its unfortunate consequences

There is a lot of hype concerning many technologies, and in particular their ethical implications, as we have seen with genomics and nanotechnologies [Caulfield, 2012], for example. In the case of AI, this hype is now virtually at fever pitch [Hawking, 2014], with some prominent individuals recently claiming that AI presents ‘an existential threat’ to humanity. Whether or not such concerns are overblown, this very hype itself will have impacts.

2.1          Angels and bad guys

One impact, which can be quite considerable, is that fear of being branded one of the ‘bad guys’ may lead to individual or collective attempts to promote oneself as on the side of the angels. Such positioning might be for intrinsic or strategic reasons (e.g. the EPSRC meeting on the Principles of Robotics explicitly aimed to avoid the sort of aversion which the UK public had shown to GM [Bryson, 2012]). But at its worst, appearing to be ethical might trump actually being ethical.

2.2          Virtue signalling and exclusion of the under-resourced

Secondly, this positioning might have an effect upon the very content of the codes, for example by including largely vacuous material that is little more than ‘virtue signalling’ [Bartholomew, 2015], with empty displays of ethical probity (‘we are passionate about the future of the human race’, ‘we believe that AI should be used for the benefit of the whole of humankind’, etc.). More tangibly, attempts have been made to urge ethical behaviours on a group in ways which would exclude both those who disagree and less well-resourced members: it’s easier to be ethical the richer you are. This can be seriously prejudicial against the least privileged, unless special provision is made or attention paid to the issue [Boddington, 2011, ch 10].

Thus otherwise laudable policy can end up inadvertently excluding the actors who are least well resourced, especially if it focuses too much on the most ‘visible’ ethical questions, without considering them in context. Data-sharing policies in genomics provide one clear example here, biasing procedures against those who are unable – perhaps for resource reasons – to reciprocate.

For an example from AI, the IIIM’s Ethics Policy for Peaceful R&D eschews military funding, and will only collaborate with civilian researchers if they have received no more than 15% of their funding from military sources in the last 5 years [IIIM]. The preamble to this policy makes explicit political statements, including that military funding is commonly defended by reference to an increased ‘“terrorist threat”’ (with scare quotes that clearly imply scepticism); it also endorses the ‘brave’ actions of Edward Snowden. Researchers who are not in a position to find alternative sources of funds may therefore be excluded from potentially beneficial collaboration. (So far, this is not to comment on the rights or wrongs of the IIIM’s stance, merely to use it as an example of how policy may lead to differential impact for different actors within the world of AI research.)

2.3          Methodological impacts – alleged novelty and comprehensiveness

Thirdly, hyping up the uniqueness of the issues in AI will also have important effects on the methodology of how ethical questions are addressed. For if an ethical question is seen to be (in part, at least) an old question applied in a new context, then one can argue with reference to previous applications; but if a question is presented as being new and unique, then it will be approached very differently. Such framing can be vital, because the choice and description of examples – including both language and institutional context – significantly affect how ethical questions are understood and developed [Chambers, 1999; Fischer et al 1993; Rein et al 1993; Taburrini 2015].

Take, for example, the framing remark: ‘everything that civilisation has to offer is a product of human intelligence’ [Hawking, 2014]. This misleadingly suggests that everything in society derives from design and intelligence, and may lead to hubristic discounting of serendipity, circumstance and pure accident. It can also divert attention from how the ethical issues raised by any technology are a complex result of many factors (see below), misleadingly focusing on aspects of a situation that are designed into it, at the expense of those that are contingent or emergent.

2.4          Viewing the landscape through the lens of the hyped technology

Another risk of hype is that the consequent emphasis on the new technology will distort perceptions of the moral context, interpreting it in terms of that (perhaps problematic) technology at the expense of others. As an example from biotechnology, the UK’s HFEA recently sanctioned mitochondrial transfer techniques (so-called ‘3 person IVF’) to combat transmission of mitochondrial disease from mother to child. The HFEA praised the technique as ‘giving the chance of having a healthy child’ [HFEA 2013]. But these women could already have children through surrogacy, or gestate a child with a donated egg. So the description of the technique’s advantages tacitly presupposed a particular notion of what it is to have ‘your own child’ (viz. one with maternal nuclear DNA), thus eclipsing other, older reproductive techniques that the same organisation also supports and regulates [HFEA]. Though less morally serious, a similar pattern can be seen in the progressive development of information technology, for example in respect of demands made for administrative and ‘audit’ information: as new possibilities have become practicable (e.g. the generation of huge amounts of printed information), these possibilities have often come to be seen as absolutely necessary, usually without careful consideration of the costs and benefits. Thus the new technology shapes our vision, without any careful prior judgement that it is appropriate.

3              The interface between technology and wider social or human factors

Related to the issue of hype is the risk that excitement over the potential of AI and its technological possibilities will lead us to overlook complex social and institutional factors, which may affect how the relevant questions are framed, understood and addressed. These factors are also crucial for the enforcement or influence of any code of ethics, many of which remain unread or ignored [McLean, Elkind 2004]. The institutional or political driver behind the production of such codes is likely to be more concrete than the abstract encouragement of ethical behaviour: for example, a wish to avoid public backlashes or lawsuits, to encourage funding, to signal the virtue of the UK in ethical regulation (again perhaps to attract funding), and generally to be seen as virtuous (which is not of course quite the same as the desire to be virtuous). To take a balanced view of these things – and to avoid confusing ethics with public relations – the specific institutional, political, financial and economic context of the creation and use of AI must be properly considered.

4              Uncertainty about the future of the technology and its implications for society

The future of any rapidly developing technology is hard to predict, and not only for technological reasons (since economic, political, social, and other factors can often intervene). Appropriate ethical judgments about new developments are also impossible to anticipate, since attitudes may evolve with the technology, as we have seen in recent debates about privacy, where views vary greatly depending on the context, in ways that could not have been foreseen [Nissenbaum, 2010, 2004]. In the case of AI, these points are especially pertinent, given how far AI could potentially impact upon how we think and relate to each other, and on vital elements of society such as modes of production.

One popular response to this evident impossibility of producing future-proofed codes of ethics in areas of rapid development or contextual uncertainty is to stress the importance of equipping researchers and professionals with the ethical skills to make nuanced decisions in context – to refer to virtue ethics [AOIR 2012, Atkinson 2009]. But virtue ethics is especially badly placed to provide any such panacea when dealing with technology which is likely to bring broad ranging and unpredictable change in central areas of human life. For the dominant model of virtue ethics – inherited from Aristotle – is predicated on a stable and slow-changing society, where the young can learn virtues from the older and wiser who possess phronesis, or practical wisdom. This model is hopeless when the need is to develop a new model of practical wisdom, to cope with future realities many of which probably cannot yet even be conceived, let alone anticipated.

5              Background philosophical questions

Fundamental questions, such as what it means to be human, or what ultimate values we should be pursuing, can readily arise in areas of rapidly developing technology. In genomics and genetics, for example, questions about the ‘essence’ of humanity can appear in debates about what is ‘natural’ or ‘unnatural’, what genetic interventions count as curing disease (as opposed to creating a ‘designer baby’), how the future of humanity might be altered via germline alterations; whether we in fact face a ‘post-human’ future, or whether the human race might diverge into two or more species. A proper approach would involve teasing out the different understandings and assumptions involved, how exactly these relate to the practical ethical questions, and considering how far the deep philosophical issues can be resolved sufficiently, or bypassed, to allow those practical questions to be addressed. But too often the fundamental issues – so far from provoking corresponding deep and careful thinking – can be overdramatised, leading to idealised or rarefied notions of what is ‘natural’ or ‘really’ human, and thus obscuring the issues rather than clarifying them.

Some of these questions arise also in AI. Will interaction between humans and intelligent machines change our natures in some significant ways, and might the future bring hardware interactions between humans and machines – another possible aspect, perhaps, of a post-human future? Will we find ourselves no longer the ‘crown of creation’ but subordinate to superintelligent machines? A distinctive hallmark of how these questions arise in AI is that they focus on mental aspects of what it is to be human – concerning agency, decision and choice – including for instance questions about responsibility and accountability, as well as what counts as human intelligence. Often such debates focus on extrapolated and imagined future agency within AI, commonly contrasted simplistically with idealised and uncontextualised notions of human agency.

As one example, in the context of Lethal Autonomous Weapons, it is sometimes argued that these violate the human right to be killed with dignity by a soldier who is acting in full moral consciousness of the fact that they are taking a human life [Champagne, Tonkens 2015; Sparrow 2007]. But, putting aside the highly debatable question of whether there is any dignity at all in being killed in war, this argument implausibly idealises the actions of the average soldier. Of course, each human being will always retain their own moral responsibility; but qua soldier, an agent will be following a chain of military command while on the battlefield. Excepting special circumstances where a soldier has reason to consider that something has gone seriously awry with the chain of command and that the rules of war are being broken, obedience is required and reasonably expected – so the soldier’s time for full and conscious moral reflection was at enlistment: by the time of battle, that opportunity has gone. Moreover, it’s morally cruel to expect our best young soldiers – when having to kill in the heat of dangerous battle for our benefit – to pause to reflect in full consciousness ‘this is a human being that I alone am responsible for killing’. In normal circumstances, such responsibility lies more with military command and the politicians who called for war in the first place. Military robots, then, may be being held up to a faulty idealisation.

6              Professional vulnerability in the face of emerging technology is magnified with AI

One motivating reason for the existence of professional codes of conduct is the relative vulnerability of professionals versus others: clients, and the general public. It’s assumed that professionals have capacities and knowledge which others lack, or possess to a much lesser degree, and that therefore professionals must use their skills and knowledge wisely and to the general good. But one of the major issues flagged in relation to AI is the fear that even the professionals might be relatively vulnerable themselves – that AI will become too complex to understand, and possibly to control – especially given the very fast pace at which it is able to operate [Bostrom 2014]. Hence, any ethical codes for AI need to take account of debates about the possibly limited capacity of AI researchers to understand, anticipate and control their very products. This does not make AI unique per se – there have also, for example, been worries about biotechnology ‘escaping the lab’ [Koepsell 2009] – but the extent of these fears with respect to AI is probably greater than with any other technology. This issue of control is intimately linked to the following issue, and the two together imply that producing codes of ethics for AI will be particularly challenging, whatever one thinks of the question of how much autonomy AI has or will develop.

7              An additional layer of complexity is required in ethical codes for AI, concerning machine behaviour

Codes of ethics for the professions deal with the behaviour of professionals, and the consequences of the products or services they produce. But in AI, a special feature is that there is a layer of machine behaviour which also needs regulatory attention (and this is true regardless of debates about the genuineness of machine ‘intelligence’). The fact that AI can act in ways unforeseen by its designer raises issues about the ‘autonomy’, ‘responsibility’ and decision-making capacities of AI, and hence of the relation to human autonomy, responsibility, decision and control. If we try to address these issues at a very general level, we risk falling into vagueness and vacuity. So as a general principle, it is likely to be far more productive to attempt to work through these sorts of problems within particular concrete settings of application. When this is done, complex and potentially obscure philosophical debates may even be avoided entirely (as we shall see below).

8              The extension, enhancement, or replacement of core elements of human agency

This is a hallmark of AI, and although all machines enhance human powers to some extent, AI has the potential to do so more effectively than any other technology to date. Three main points deserve stressing here:

8.1          Economic and social effects

First, the potential of AI raises questions beyond the remit of AI researchers per se, such as when considering the wider societal impacts of large shifts in wealth creation and modes of production. Such issues as whether wealth should be systematically redistributed to compensate for the job losses occasioned by AI [Brynjolfsson, McAfee, 2014; Frey, 2013] cannot plausibly be dealt with by any realistic or achievable professional code of ethics. But these issues do serve to illustrate how simplistic it is to presume that calls for ‘beneficial AI’ can be interpreted and applied straightforwardly, even if they are agreed. Whether some application of AI counts as overall ‘beneficial’ might well depend on the economic and social structure within which it is embedded, far beyond the control of AI researchers.

8.2          AI within human systems

Secondly, in many cases, AI will enter complex systems of human agency, making it necessary that codes of ethics deal adequately with this interface. Consider, for example, the use of robotics within a hospital ward. Such places are highly complex systems with lines of responsibility and accountability which are partly formalised and partly informal, and which often change in response to circumstance and policy. Often, also, the lines of responsibility, reporting, and duty may be fragmented, duplicated and overlapping. Robots placed within such a system – whether or not they themselves are considered as responsible agents – will certainly displace some human nodes of responsibility and accountability [Daykin, Clarke, 2000]. Thus careful analysis of the effects of robotic placement within such systems is vital, and codes of ethics need to recognise this complexity. In this light, the EPSRC’s Principles of Robotics – which simply describe robots as ‘products’, and explicitly ascribe responsible agency only to humans, never robots [Boden et al 2011] – fall badly short. A more nuanced approach is required, recognising that within a hospital system responsibilities are understood as falling only partly on the individual, who is also part of an entire system, and that computers can perfectly well be part of such a system. Thinking in this way might also help to bypass intractable questions about the nature of ‘genuine’ responsibility, and whether robots will ever achieve it. Such questions do not need to be solved if our aim – as in the NHS – is primarily to understand how errors occur with a view to correcting them.

8.3          Spontaneity versus forethought

Humans and AI systems make decisions – including decisions with a moral aspect – in different ways. Close attention to this might be useful in disentangling some moral worries about AI, and in clarifying relevant differences with respect to codes of ethics. For example, humans may be forgiven for some decisions in circumstances where higher standards would be expected of an AI (something like an idealised human standard, perhaps). Consider the concern that has been expressed regarding how an autonomous vehicle might react in a crash situation where a choice has to be made about who might be killed or injured, depending upon what actions are taken [Russell et al 2015]. But discussion of these things – though commonly voiced as an objection to autonomous vehicles – often takes for granted that a higher standard of decision-making can reasonably be expected of them. Thus, for example, a human being can often be forgiven for a suboptimal decision made under duress and in haste, and could also be forgiven for e.g. trying to save themselves or their families in a dangerous situation. But we are much less likely to ‘forgive’ a decision conceived and programmed beforehand. This prejudice may or may not have any rational basis in deeper ethical foundations. But it impacts on the consideration of autonomous vehicles, which have to be designed ‘in the cold light of day’ to react appropriately in emergency situations. From one point of view, such careful advance consideration seems ethically superior to the ‘it’ll be alright on the night’ approach; but from another point of view, it’s less, well, ‘human’. Such double-edged complexity abounds when considering replacing human labour and agency with AI. Again, this short discussion is far from complete, but serves to flag the importance of timing, planning, and the agentic source of decision-making when considering the ethics of AI.

9              Conclusions

There are numerous challenges in considering the ethical issues that AI presents, and further challenges in developing codes of ethics, guidance or other regulation for ethically robust AI. Although we are faced with much distracting hype, which perhaps distorts some of the issues and their import, careful examination of the particular and distinctive issues which AI raises can help us to understand them further. Addressing these challenges will require input from the AI community and well beyond it.

Acknowledgments

Many thanks to Peter Millican for his careful commentary on the manuscript, as well as to Michael Wooldridge for discussion.

This work has been kindly supported with a grant from the Future of Life Institute.

References

AoIR. Ethical Decision-Making and Internet Research: Recommendations from the AoIR Ethics Working Committee (Version 2.0). 2012.

Atkinson P. Ethics and ethnography. Twenty-First Century Society. 2009;4(1):17-30.

Bartholomew J. Hating the Daily Mail is a substitute for doing good. The Spectator. April 18th 2015.

Boddington P. Ethical Challenges in Genomics Research: a guide to understanding ethics in context. Heidelberg: Springer; 2011.

Boden M, Bryson J, Caldwell D, Dautenhahn K, Edwards L, Kember S, et al. Principles of Robotics. Swindon, UK: Engineering and Physical Sciences Research Council (EPSRC); 2011.

Bostrom N. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press; 2014.

Brynjolfsson E, McAfee A. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. New York: W. W. Norton & Company; 2014.

Bryson J. The making of the EPSRC Principles of Robotics. AISB Quarterly. 2012;133(Spring 2012):14-15.

Caulfield T, Condit C. Science and the Sources of Hype. Public Health Genomics. 2012;15(3-4):209-17.

Chambers T. The Fiction of Bioethics (Reflective Bioethics). New York: Routledge; 1999.

Champagne M, Tonkens R. Bridging the Responsibility Gap in Automated Warfare. Philos Technol. 2015;28(1):125-37.

Daykin N, Clarke B. ‘They’ll still get the bodily care’. Discourses of care and relationships between nurses and health care assistants in the NHS. Sociology of Health & Illness. 2000;22(3):349-63.

Fischer F, Forrester J. Editors’ Introduction. In: Fischer F, Forrester J, editors. The Argumentative Turn in Policy and Planning. Durham and London: Duke University Press; 1993. p. 1-17.

Frey C, Osborne M. The Future of Employment: How Susceptible Are Jobs to Computerisation? Oxford: Oxford Martin School, University of Oxford; 2013.

Hawking S, Russell S, Tegmark M, Wilczek F. Stephen Hawking: ‘Transcendence looks at the implications of artificial intelligence – but are we taking AI seriously enough?’. The Independent. May 1st 2014.

HFEA. HFEA agrees advice to Government on the ethics and science of mitochondria replacement [press release]. London; 2013.

Human Fertilisation and Embryology Authority. Available from: http://www.hfea.gov.uk/.

IIIM. Ethics Policy for Peaceful R&D. Reykjavik, Iceland: Icelandic Institute for Intelligent Machines.

Koepsell D. On Genies and Bottles: Scientists’ Moral Responsibility and Dangerous Technology R&D. Sci Eng Ethics. 2009;16(1):119-33.

McLean B, Elkind P. The Smartest Guys in the Room: The Amazing Rise and Scandalous Fall of Enron. Portfolio; 2013, 2004.

Nissenbaum H. Privacy as Contextual Integrity. Washington Law Review. 2004;79(1):119-58.

Nissenbaum H. Privacy in Context: Technology, Policy and the Integrity of Social Life. Palo Alto: Stanford University Press; 2010.

Rein M, Schon D. Reframing Policy Discourse. In: Fischer F, Forrester J, editors. The Argumentative Turn in Policy and Planning. Durham and London: Duke University Press; 1993. p. 145-66.