Myth and the EU study on Civil Law Rules in Robotics

The European Parliament has recently produced a study ‘European Civil Law Rules in Robotics’. This continues work by the European Parliament’s Committee on Legal Affairs, such as its publication in May 2016 of a Draft Report with Recommendations to the Commission on Civil Law Rules on Robotics.

Such studies are of immense relevance and interest to any project of drawing up codes of ethics for AI. Any code of ethics operating within the European Union would need to be cognisant of relevant laws. This is not simply a matter of compliance: as codes of ethics are developed, it's vital to be aware of harmonies or clashes with other influential statements, so many aspects of any such laws pertaining to robotics and AI deserve attention. This includes considering the surrounding policy context and any preambles or accompanying texts, which can give insights into the guiding motivations and underlying values informing the regulations. Such considerations can be informative for codes both inside and outside the EU.

In working out how best to think about developments in AI, or indeed any fast changing technology, science fiction can be a useful tool. Where situations are not yet with us, we need to use our imagination. Science fiction almost invariably has some moral content, whether explicitly or implicitly. (Indeed, it’s virtually impossible to write any half-plausible and interesting story that does not have some normative elements.) Science fiction frequently plays out scenarios about how we might or might not relate to robots and to very advanced computers. These can be extremely useful in helping us to ponder what our values are and how we might react to new developments.

But another rich source of imagination and values is found in stories we already have. Much work that discusses AI refers to ancient stories and myths about robots created from mere matter, as well as more modern literature. Stories often referred to in such contexts include Frankenstein's monster, the Golem, Pygmalion, the Tin Man from The Wizard of Oz, the maidens made of gold who appear in the Iliad, and the giant Talos made of bronze. Spirit-powered robots defended relics of the Buddha, according to Indian legend.

But in referring to any such stories, we can draw various lessons. (It's not wise to conclude from the story of Sleeping Beauty that it's a good idea to marry a man you've never met before who's broken into your bedroom and awoken you with a kiss. Other layers of interpretation in such fairy stories, though, are a rich source of meaning.) So, in looking at the preamble and surrounding text of policy documents which refer to such ancient or more modern stories, it's useful to take a look at how these stories are used and what lessons are drawn.

The document European Civil Law Rules in Robotics refers to cultural ideas about robots in its introductory texts, as part of a narrative justifying its approach and, in particular, grounding it in a response to what are seen to be European concerns. Here is the relevant passage, which comes in the document's section 2, "General Considerations on Robots: The notion of the robot, its implications and the question of consciousness", where the discussion is used to explain reservations about using the term 'smart robot' in a set of regulations designed for the European context, because of the likely public reaction:

1°/ Western fear of the robot

The common cultural heritage which feeds the Western collective conscience could mean that the idea of the “smart robot” prompts a negative reaction, hampering the development of the robotics industry. The influence that ancient Greek or Hebrew tales, particularly the myth of Golem, have had on society must not be underestimated. The romantic works of the 19th and 20th centuries have often reworked these tales in order to illustrate the risks involved should humanity lose control over its own creations. Today, western fear of creations, in its more modern form projected against robots and artificial intelligence, could be exacerbated by a lack of understanding among European citizens and even fuelled by some media outlets.

This fear of robots is not felt in the Far East. After the Second World War, Japan saw the birth of Astro Boy, a manga series featuring a robotic creature, which instilled society with a very positive image of robots. Furthermore, according to the Japanese Shintoist vision of robots, they, like everything else, have a soul. Unlike in the West, robots are not seen as dangerous creations and naturally belong among humans. That is why South Korea, for example, thought very early on about developing legal and ethical considerations on robots, ultimately enshrining the “smart robot” in a law, amended most recently in 2016, entitled “Intelligent robots development and distribution promotion act”. This defines the smart robot as a mechanical device which perceives its external environment, evaluates situations and moves by itself (Article 2(1)). The motion for a resolution is therefore rooted in a similar scientific context.

Commentary on this passage:

The passage opens with the suggestion that the collective consciousness of the West shows itself in ancient fears about losing control of robots, which must be addressed in order not to hamper the robotics industry. The wording seems to imply that this fear is unfounded or poorly grounded. This suggests a somewhat cavalier attitude to such myths, as if they indicate something irrational and to be combatted. While a culture's myths may indeed show things which cannot be reduced entirely to rational analysis, the very fact that they have survived for so long suggests that myths and stories may be indicating something important. Indeed, the document goes on to validate these Western fears, but, tellingly, does so by referring not to myth or culture but to the recent warnings about AI of four prominent scientists and technologists: Stephen Hawking, Elon Musk, Bill Gates and Bill Joy, citing these experts “now that the object of fear is no longer trapped in myths or fiction, but rooted in reality”.

This is a telling way of presenting these myths and stories of our past. It's as if the lessons we need to learn from them are merely some uncanny, lucky prediction of the scientific future, and now that we have the science, and the technologists and scientists to warn us, we can at last realise these warnings were, by fluke, right after all. Yet the appeasement of the general European public is framed in terms of addressing and combatting the cultural sway of the ancient myths. So … are the pre-scientific, myth-fuelled fears of the “great unwashed” general public right by some spooky coincidence? Is scientific reason, endorsed by experts, by happenstance now simply marching in parallel time to unreason?

These questions matter because it makes a difference whether this document is attempting to accommodate reasonable public concerns, or is pandering to an irrational populace. One might develop policy, and in particular public information, quite differently depending on which of these attitudes is in play. Indeed, it is somewhat unclear what kind of stance the document is taking on this point.

Something of note is that in discussing the lack of fear of robots in the Far East, the document grounds the Japanese stories about robots in the underlying metaphysical and normative framework of Shintoism. This makes sense of the positive Japanese response to robots. Such a sense-making narrative is absent from the account of the Western myths to which the document refers. (Note, then, that the EU document implicitly suggests that positive myths of robots are grounded in something more substantive, whereas negative myths are not.)

Is there no sense-making Western narrative available? Note of course that ‘the West’ is not a monolithic idea – there are robot stories in various Western traditions, including Norse as well as the Greek and Jewish traditions referred to in the document. But note too, at this juncture, that the EU document highlights the Hebrew myth of the Golem as being particularly influential in Western society and in what the document calls ‘western fear of creations’. Indeed, it's the only Western robot story actually named.

I had to read this phrase ‘western fear of creations’ several times to make sure I’d understood it. For the idea that it is the West which is afraid of creation, and that a particularly strong influence on this fear stems from the Jews, butts up against the flourishing of science, technology and invention in the West, which has been so profoundly influenced by the Judeo-Christian tradition; not to mention the high density of tech start-up firms in Tel Aviv, for example. By ‘fear of creations’ the document is presumably referring to fear of autonomous creations which escape the control of their creator, not fear of artefacts per se.

But whilst it cites underlying frameworks behind Eastern robot stories, the EU document's account of Western responses to robots misses out a profoundly influential Hebrew narrative which surely lends heavy cultural salience to Western myths about robots. I refer of course to the story in Genesis of the creation of man and of the Fall. For the Fall shows how, in disobeying their Creator, Adam and Eve came to see that they were naked and gained knowledge of good and evil. And we all know what happened next, armed with that dangerous knowledge: thousands of years of often sorry human history, with bright spots here and there. Mankind was given the power to act and to think, but the freedom which Adam and Eve were given to act independently of their Creator also led to disobedience and disaster.

But these are precisely the fears that are expressed about AI and robots now. It's not fear of creation in the sense of invention and artefact, or control over the world per se, since the Genesis story gives mankind dominion over the earth – it's fear of a creation which escapes the control of its creator. It's fear of a creation which, left in a safe space unobserved, gets into mischief. It's worry about how we, the creators, might treat robots if they were to develop consciousness and the ethical awareness that Adam and Eve developed. But these are precisely the moral worries of moderns who are armed with good understandings of science and tech.

Presenting the myths around dangerous robots in the context of Genesis paints a totally different picture from that presented by simply framing the Hebrew myth of the Golem as a stand-alone story of uncontrollable robots which just happens to form the strongest influence behind what seems to be an ill-grounded, primitive fear. It not only presents this robot-gone-bad narrative as a central influence firmly embedded in the history of Western culture, rather than merely a popular story. It not only embeds it in an account of the nature of humanity, of the place of humanity in the universe, made in God's image and hence with the potential to have responsible control over the world, and hence with a positive potential for advancing science and technology. It does more.

For if we see the Genesis account of the Fall of man as foreshadowing fears about robots, then Genesis gets the problem exactly right, for exactly the right reasons – it's a worry about autonomy itself: what might robots do if we can't control them fully? Will they adhere to the same value system as us? Will they decide to disobey us? What will our relationship with our creations be?

The modern scientific experts can tell us that these fears might now actually be realisable. We didn’t need them to tell us that the fears were in principle well-founded. Far from quaking at a Hebrew scare story which whipped up primitive fears in the general public that need to be allayed, we can thank the Hebrew account of Genesis for pre-warning us thousands and thousands of years ago, in a rich and meaningful story about our place in the world and about our nature, from which we can infer also that creating robots with the ability to judge and to act may be worthwhile. But it can go very, very wrong. This is precisely the central ethical question of AI today. If the general public have concerns about this expressed through myth, these concerns are not irrational. They need to be addressed.

Paula Boddington

We would like to thank the Future of Life Institute for sponsoring our work.

Case study: Robot camel jockeys. Yes, really.

Introduction

Robot camel jockeys … yes, really.

Why this case study?

Robots are now widely used in place of child camel jockeys. The robots used in camel racing are so simple that they scarcely count as AI. Indeed, developments since their inception have led to many robots becoming still simpler, with some custom-made out of electric drills. Nonetheless, the role that robot camel jockeys are playing strikes at the essence of one of the main functions of AI – replacing human labour. Hence, it seems a good case study of what might happen when human labour is replaced with machine labour, with the proviso that all case studies are, by their nature, partial accounts of the array of ethical issues facing us in the many-faceted world of AI. The discussion here, likewise, can only touch on some of the multiple issues raised.

I chose this example also because this use of robots has been, at least within much of the tech literature, hailed as an example of the beneficial application of robotics (1, 2). But of course, on closer inspection, things are more complex. The challenges of this particular case study are many. These include the difficulty of monitoring precisely what the impact of the robot camel jockeys has been, as well as complexities introduced by political, cultural and religious differences between the countries where the firms developing these robots are located and the countries where the robots are commissioned and used. There are issues in international law here, as well as individual and professional ethics to consider. So, although it might be an example of a very simple form of robotics, it's an example of a highly complex ethical issue.

As such, there is much to be teased out and considered. Here are some thoughts and some findings based on the research and commentary I've been able to find so far. I've included pretty much all the material I have been able to gather online about this topic.

Discussion

The development and use of robot camel jockeys has been framed in terms of an ongoing historical narrative wherein the use of technology is hailed as the rescuer of the worst-off in the labour force, freeing them from arduous and possibly dehumanising work, as illustrated here: ‘the standard modernist gambit of taking a crappy job and making it more bearable through mechanization will be transformed into a 21st century policy of taking appalling and involuntary servitude and eliminating it through high tech.’ (3) The use of the word ‘servitude’ here is interesting, for it covers both slavery and working conditions close to slavery. The replacement of child camel jockeys with robots has been hailed as an ethical situation where ‘everyone will win a little’, since ‘every robot camel jockey hopping along on its improbable mount means one Sudanese boy freed from slavery and sent home’ (3); the robots are even described in one headline as ‘the robots that fight for human rights’ (2). This is held up as an example of how ‘there are some issues that can really be solved with innovation and technology’ (Lara Hussein, UNICEF) (4).

Camel racing is an old tradition in the Gulf States, including Saudi Arabia, Qatar, Kuwait, and the UAE (5). The use of children was common owing to their small size and lightness, which meant the camels could run faster. However, it is well established that children were trafficked – sold or kidnapped – from countries such as Sudan, Eritrea, Mauritania, India, Pakistan and Bangladesh, sometimes as young as six months old (6). These children were then subjected to further extreme maltreatment, allegedly being deprived of food and sleep to remain small (3, 7) and, according to reports, frequently subjected to sexual abuse (8, 9). The racing itself was onerous, frightening and often dangerous (6). There are reports of children being injured or killed while racing, and of children being killed by fellow jockeys and by trainers (6, 8).

Probably largely owing to international pressure, and as prohibited by the UN Convention on the Rights of the Child (7), the use of child camel jockeys was banned across the Gulf at varying dates starting with the UAE in 2005 (3, 5, 10). However, laws prohibiting child labour which had been in place for several years in various countries were reportedly routinely flouted (8). Anti-Slavery International commented in 2006, ‘The UAE government is proposing using robots to replace child camel jockeys. This seems a complicated alternative to implementing fair labour conditions for adult jockeys. Furthermore, the use of robot jockeys in races will not preclude the need for people to exercise, feed and care for the camels in camps.’ (7).

Rather than child jockeys being replaced with adult jockeys working in reasonable conditions, children were replaced with robots. Companies in Switzerland (3, 5) and Japan produced robots. The camels would not run without riders, so the robots had to be made to look sufficiently human to encourage the camels to run; steps were taken to ensure that they were not so human that they violated Islamic codes about representational art (3). The successful adoption of the robots has been attributed to the fact that they were seen as ‘cool’ and high tech: ‘The motto of the day is clear: “Pimp my camel.” They laugh at the Swiss engineers for voicing technical reservations or concerns about the camels’ welfare’, although some locals expressed dismay that robot jockeys lowered the value and status of the racing camels (11). Since first adoption, there has been a trade in custom robots made from DeWalt drills, which are much cheaper and more reliable (2), and at 10 kg even lighter than the children (12). Development of robot jockeys has also been encouraged as a ‘symbol of the future’ (11). A robot jockey development project was a prize-winner in the 2014-15 Khalifa Fund Technopreneur Competition.

The camel racing may take place in considerable secrecy, especially since in some areas it may involve illegal gambling (2). There often seem to be few spectators; races are watched on TV in certain countries, although not televised in others (3). Some reports claim that the law is openly flouted, with even televised races showing child jockeys. Reporters have been banned from filming when children were involved, and the media in the countries involved are generally tightly controlled (6). Reports claim that robots now deliver electric shocks to the camels to increase competition (13, 14), and that malfunctioning robots cause the camels injury (15), although others claim the reverse, that camels suffer fewer injuries (16). This all points to difficulties in assessing the impact of the use of robot jockeys.

Reports about the impact of robot jockeys vary wildly. Vogue even has a stylish and glamourised photo shoot of camel racing with robot jockeys in Dubai (17), whereas reports from trafficking organisations and from inside Pakistan paint a very different picture. Research has found that the number of children being admitted to hospital with injuries consistent with falling from camels has decreased since the introduction of robot jockeys (18). However, it may be that injured children are being kept away from hospital because of the new illegality (8). Children freed from racing are reported to have serious and lasting injuries years later and to have severe educational, social and emotional problems; some had never seen a woman (19). Since child camel jockeys were outlawed, however, the human rights organisation Ansar Burney Trust has found that 3,000 child camel jockeys are simply missing (20). It had previously been found that children under the age of 10 were apparently failing to be repatriated (7). Despite optimism about the benefits of robot jockeys, it appears that the trafficking of child jockeys may continue (2, 7, 9, 19).

Human rights organisations have attempted to repatriate the children involved, but few know who their families are, and it has rarely been possible to reunite them with their families, some of whom sold them in the first place; homes have been set up for them, e.g. in Pakistan (8). However, other reports claim that the vast majority have been returned to their families (concerning children in the UAE – UNICEF in Dubai) (21); perhaps the disparity may be explained by regional differences, by different reporting methods, or by differences in observations made from ‘host’ and home countries. Families who had sold their children have been threatened with action if they attempt to resell them, and there are accounts of resentment at their return (22). It has been reported that Qatar simply shipped boys back to Sudan, where they face possible death, and that other boys stayed where they were, working in other occupations or unemployed (3). Those working to assist the children have reportedly received threats (8). It has been opined that since Pakistan and Bangladesh are so dependent on income from their citizens who work in the Gulf States, little if any pressure is put on the host countries by the source countries regarding the use of child labour (pers. comm.), although reportedly India smashed several child-selling gangs in the early 1990s (6). Anti-Slavery International reports that receiving countries do little to control the entry of children (6).

Commentary

It’s clear that the notion that replacing onerous labour with robots produces a morally good outcome simplifies the issues. What follows are just a couple of preliminary remarks on a highly complex case.

Consider another situation: replacing human mine clearers with robots. Clearing mines is unfortunately necessary, but very dangerous work. Where possible, replacing a human being who might get killed with a robot which might merely get blown up is a moral no-brainer. Robots are expendable. The kind of robotics that would be involved in this is so far from any possibility of consciousness that we can leave aside the question of whether we might attribute moral agency to mine-clearing robots. Moreover, unlike in the camel jockeys’ case, although there can of course be moral issues regarding recruitment into the armed services, in considering the impact on human bomb disposal experts of the use of robotics, we are almost certainly not considering a scenario with trafficked minors whose fate following the implementation of robots is uncertain and possibly perilous.

But racing camels, no matter that it’s an embedded part of some cultures, can hardly be construed on the same plane of necessity. The question raised by some commentators, ‘why not just use adult jockeys?’, highlights an apparent reluctance by some camel racers to comply with international legislation. Moreover, we are not talking about onerous compliance with baffling or seemingly pointless bureaucracy. We are talking about compliance with laws against child labour, in the context of highly dangerous work, for the sake of a sport. The implication of much of the discussion is that the main impetus for compliance with anti-slavery legislation was the use of robot camel jockeys, rather than a change of heart about the impacts on the child jockeys.

I commented above how the description of the child camel jockeys’ work as ‘servitude’ elides the difference between slavery and working conditions that are close to slavery. In some reports, the notion of slavery is used openly; in others, this issue is skated around or bypassed. Orienting the ethical discussion to the narrative of slavery, rather than to the narrative of technology as a force that throughout history can make incremental improvements in people’s lives, may make a difference to how the question is tackled. Here is a thumbnail sketch of a momentous ethical debate: if something as morally offensive as child slavery is occurring, is it best to do whatever one can to improve the situation? Or is it best to keep one’s hands clean and refuse to have anything to do with those who are responsible? Both positions can be argued to have some merit.

Claims of moral and cultural relativism might conceivably rear their heads in this instance, to bypass the issue of responsibility for involvement with camel racing (notwithstanding the existence of ratified international legislation accepted by the governments of the states in question); but really? In considering the situation, although conditions may well have varied in different countries and with different owners, one would have to err on the safe side regarding reports of the treatment of the child jockeys. Selling children, food deprivation, emotional deprivation, sleep deprivation, total lack of education, participation in a dangerous sport – all okay because of moral relativism? Not worrying about the fate of these children because that’s just what goes on elsewhere – really? You need to find yourself another philosopher if you want a debate that entertains that argument seriously.

What of individual or corporate moral responsibility in regard to manufacturing and supplying robot camel jockeys? A response to the intransigence of camel racers who were not complying with anti-slavery legislation could be force, or alternatively, attempts to produce a shift in moral consciousness. The latter is likely to be harder, and neither is realistically within the power of a few individuals or small companies. Robotics manufacturers are small beer in this context. It’s not at all obvious that robotics manufacturers who step into the fray have any responsibility to try to produce a change in the moral outlook of the camel racers themselves, since how could they? On the other hand, one might wonder if, by working with camel racers, one becomes to an extent complicit with the morally problematic treatment of the child camel jockeys. One might even wonder whether, by their involvement, robotics manufacturers have any responsibility to monitor the fate of the child camel jockeys. Alternatively, it’s plausible to consider moral responsibility as distributed among different parties depending upon their location and their powers, and to consider that it’s the job of other organisations and parties to track the fate of the former child jockeys and to assess whether child servitude persists in other forms. This is a very large and very difficult task.

But at the least, in this complex situation, it would be best to exercise caution in claiming unproblematic success for the implementation of robot camel jockeys. It should be apparent how hard it would be for any professional code of ethics to make clear statements about engaging in such work. The example also shows how presenting the issue in different ways, and looking at it from different angles, can produce varied ethical reactions.

It also illustrates a central lesson for uses of AI which replace human labour – take a very close look at what happens to the human beings who are thereby displaced.

And consider the more general question of how we go about thinking through ethics and technology. This case study illustrates something interesting – how the technology, even something as simple as this, can get glamourised; the Vogue photo shoot and the phrase ‘pimp my camel’ stand out here. Assessing the glamour of robotics here is complex and messy. This is a recurring theme in technology in general (the shiny, sexy sort of tech at least, and robotics and AI in many forms qualify here). The seductive lure of technology can perhaps lead us to overestimate what moral impact it has on a situation, possibly misconstruing the complexities and being blinded to other elements, which we see in the rush of some to praise robotics for dashing to the rescue of the child jockeys. On the other hand, the seductive lure of technology might just have done the trick of persuading some camel racers to replace children with machines.

This morality tale echoes Pinocchio in reverse (23). Pinocchio was a puppet whose father longed for him to become a real boy; Pinocchio himself of course wished for this too. But in some regions of the world of camel racing, real boys have been reduced to puppets. In the case of the use of robot camel jockeys, the equation of a child with a mere expendable thing pragmatically works to advantage, where a mere thing – the robot – is considered to be better than a child. In Pinocchio’s story, it was the development of moral character that did the trick, and Pinocchio became a real boy when he sacrificed his safety to rescue his father; in the Disney version, he has to prove himself to be brave, truthful and unselfish. Whilst on occasion and pragmatically, a quick technological fix might be the least worst option, if we exchange such elements of moral growth for a technological fix too often, in the long run the world might not be much better off.

The way in which AI typically acts to replace or supplement human labour is going to present immensely complex challenges for ethics. Embedding these considerations within professional codes of ethics will be far from straightforward.

Paula Boddington

We would like to thank the Future of Life Institute for sponsoring our work.

  1. Moor JH. The nature, importance, and difficulty of machine ethics. Intelligent Systems, IEEE. 2006;21(4):18-21.
  2. Brook P. The DIY robots that ride camels and fight for human rights. Wired. 2015(03.03.15).
  3. Lewis J. Robots of Arabia. Wired. 2005(11.1.05).
  4. Rasnai A. Dubai’s Camel Races Embrace Robot Jockeys. The Daily Beast. 2013.
  5. Pejman P. Mideast: rehabilitation for retired child camel jockeys gets top priority. IPS News; 2005 [Available from: http://www.ipsnews.net/2005/05/mideast-rehabilitation-for-retired-child-camel-jockeys-gets-top-priority/.
  6. Gluckman R. Death in Dubai 1992: http://www.gluckman.com/camelracing.html.
  7. Anti-Slavery International. Information on the United Arab Emirates (UAE) Compliance with ILO Convention No.182 on the Worst Forms of Child Labour (ratified in 2001): Trafficking of children for camel jockeys. 2006.
  8. Williamson L. Child camel jockeys find hope BBC News; 2005/02/04 [Available from: http://news.bbc.co.uk/go/pr/fr/-/1/hi/world/south_asia/4236123.stm.
  9. Lillie M. Camel Jockeys in the UAE: Human Trafficking Search; 2013 [Available from: http://humantraffickingsearch.net/wp/camel-jockeys-in-the-uae/.
  10. Knight W. Robot camel jockeys take to the track. New Scientist. 2005(21 July).
  11. Schmundt H. Camel Races: Robotic Jockeys Revolutionize Desert Races. Spiegel Online International. 18/07/2005.
  12. Watson I. Robot Jockeys Give Camel Racing a Modern Twist: NPR; 2007 [Available from: http://www.npr.org/templates/story/story.php?storyId=10304576.
  13. Spooky. The camel-riding robot jockeys of Arabia. 2011. http://www.odditycentral.com/pics/the-camel-riding-robot-jockeys-of-arabia.html
  14. Wafa I. “Shock jockey” sellers arrested. The National UAE. 2011 20/01/2011.
  15. Nasir Z. Of camel jockeys and camels. The Nation. 2013 08/07/2013.
  16. Nowais SA. UAE camel trainers prefer robot jockeys. The National UAE. 2015 13/06/2015.
  17. Shaheen S. Meet the Camel-Riding Robot Jockeys in Dubai. Vogue.
  18. Abu-Zidan FM, Hefny AF, Branicki F. Prevention of Child Camel Jockey Injuries: A Success Story From the United Arab Emirates. Clinical Journal of Sport Medicine. 2012;22(6):467-71.
  19. Gishkori. Camel jockeys: Popular Arab sport costs Pakistani children their sanity. The Express Tribune Pakistan. 2013 8/05/2013.
  20. Ansar Burney Trust. Almost three thousand underage camel jockeys missing: ansarburney.org; 2013 [Available from: http://ansarburney.org/almost-three-thousand-underage-child-camel-jockeys-missing/.
  21. United Arab Emirates: Camel racing continues to be child free: IRIN Humanitarian news; 2006 [updated 24/12/2006. Available from: http://www.irinnews.org/report/62903/united-arab-emirates-camel-racing-continues-to-be-child-free.
  22. Pakistan; Former child camel jockeys and the struggle to return home: IRIN Humanitarian news; 2007 [updated 3/01/2007. Available from: http://www.irinnews.org/report/62967/pakistan-former-child-camel-jockeys-and-the-challenge-to-return-home.
  23. Jordan B. Peterson provides an interesting and illuminating discussion of the moral significance of the Pinocchio story on which I drew in considering these questions and which can be found in a lecture for his course Maps of Meaning which can be seen at: https://www.youtube.com/watch?v=AdAdf4watJQ


The EPSRC’s Principles of Robotics and ethical debate

Last April, the AISB organised a workshop on the EPSRC’s Principles of Robotics. These Principles were formulated in 2010, and take the form of five ‘rules’ and seven ‘high level messages’. The Principles aimed to provide some guidance but also, importantly, to stimulate debate, which they have indeed done; the workshop was an example of such debate and produced very interesting discussions. Any code of ethics in a field which is advancing so rapidly, which challenges central notions of human agency, and which presents such challenges for the organisation of our social world, must always stand open to discussion and debate. This is at least as important as getting the code ‘right’, perhaps even more important. Ethics must always involve dialogue and close listening among all affected parties.

Papers from the workshop are being published in Connection Science. Paula Boddington’s commentary can be accessed here.

Abstract: The EPSRC Principles of Robotics refer to safety. How safety is understood is relative to how tasks are characterised and identified. But the exact task(s) a robot plays within a complex system of agency may be hard to identify. If robots are seen as products, it is nonetheless vital that the safety and other implications of their use in situ are also considered carefully, and that they are fit for purpose. The Principles identify humans, rather than robots, as responsible. We must thus understand how the replacement of human agency by robotic agency may impact upon attributions of responsibility. The Principles seek to fit into existing systems of law and ethics. But these may need development, and in certain contexts, attention to more local regulations is also needed. A distinction between ethical issues related to the design of robotics, and those related to their use, may be needed in the Principles.

We would like to thank the Future of Life Institute for sponsoring this work.

Robots Doing Our Dirty Work: Moralisation of health, routine care work, and machines

Preliminaries:

I am going to raise some questions about the replacement or supplementation of care work within a health care setting by robotics, and the possible impacts upon moralisation. Important questions concern not just the care work itself, but also information gathering and communication by humans or by machines, and status within a social system, and how these might be impacted by the replacement or supplementation of human labour by machines.

There will be no answers, since answers must await further empirical research: I hope merely to discuss how we might go about asking the questions.

The context for my initial thoughts on this was doing two diverse research projects at once: one on ethical questions in AI; the other, preliminary work in conjunction with a team of ethnographers examining routine care of dementia patients, including continence care. A hypothesis of this proposed work is that one of the functions of routine care work, perhaps especially for dementia patients who are on a trajectory of loss of some central markers of agency, is to maintain the person’s acceptable presence in the social and moral community. Thinking about this set me on the road to the thoughts in this paper.

In considering ethics in health care, the routine, mundane aspects of life may be overlooked, yet they are rich material for considering moralisation and value attributions. Routine care work, often studied in ethnographies, is a rich source of such material; and because it is of low status in most social settings, including health care contexts, it may serve as the ‘canary in the cage’ for sniffing out the moral atmosphere.

First, some brief preliminary remarks about individuals and social systems, to set the stage for some of my concerns. In work on AI there are two approaches which can usefully be contrasted briefly. One focuses upon individual machines, modelling AI upon the individual human agent, often indeed the individual human brain. Ethical concerns may include, for instance, examining how to build ethics into a machine. Such an approach in AI tends to focus on getting the ethics right, and on the question of ‘what should be done?’

A second, complementary approach in AI looks at autonomous systems. Such an approach is probably a richer model for thinking about ethics, and is certainly needed to complement, and in some instances to correct, the former model, which might skate over the wider socially embedded context in which decisions and behaviour are instigated, interpreted, and reacted to. Such an approach more readily asks questions concerning who is accountable, how we relate to others in a system, and how information is distributed around a system.

I am going to be considering a broad approach to moralisation:

Moralisation in health can encompass many domains, and can involve both patients and staff, individuals and institutions or other social settings. In particular, I look at the nature of what I call the moral ecosystem – the local moral universe – formed by the surrounding social context and how it facilitates or blocks expressions of moralisation.

I consider also those judgements expressly labelled moral per se, as well as any judgements or actions which have implications for differential or significant attributions of responsibility, praise and blame. Such practices can help or hinder the presentation of an individual as part of the moral community and the nature and extent of their inclusion in this community; this encompasses the question of whose viewpoints are taken as having moral weight, which is one of my major concerns.

So I am concerned with the content of moral judgements. Note that these can include attempts at moral rehabilitation of individuals (what has in some contexts been called ‘sentimentalising work’, see later), or moral repair work, as well as negatively judgemental moralising. I am concerned, in addition, with the process(es) by which they are formed, including the (formal or informal) role of those individuals involved in forming moral judgements, and the organisational context within which moralising judgements are made. We are interested in how a decision has been arrived at and who was involved, and communication is key.

Note, then, that the flow of communication of moralising judgements and behaviour can itself affect actors within a system who are communicating moralising information, and that this flow can be systematically channelled or blocked.

Note, then, that hierarchy and status are important variables.

Moral judgements, the attribution of blame, and inclusion within a moral community occur within social and, significantly, often hierarchical settings, and this is especially true of health care contexts. The ‘moral ecosystems’ that social settings create may be formalised or semi-formalised. I stress ‘semi-formal’ for various reasons, including that much of the moralising and sentimentalising work which occurs within health care settings, vital as it may be, occurs almost incidentally to formal structures of role and obligation. There is a considerable body of work which demonstrates the importance of examining such social systems and organisational cultures for understanding the quality of care, and my focus here is on the moral aspects of this. Please note there’s not enough time to go into fine detail on all the points, which I can only indicate – nonetheless, much of what I say has the backing of empirical ethnographic findings.

So, how might the introduction of robotic machinery impact upon all this? I continue by discussing how moralising judgements, and attempts to combat negative moralisation, might occur within the social and hierarchical setting of a ward.

I am raising various abstract considerations, so let’s make it concrete. I will start with four examples that I observed while working as a nursing auxiliary on a long-stay ward in a central London hospital treating neurological conditions. These cases all show different aspects of the moralisation of health, involving the lines of communication and, importantly, hierarchy, within a particular ward and hospital culture.

One: a patient, Anna, had to take anti-TB medications, a prolonged regime which can have the side effects of producing extreme nausea and also depression. Her ward notes recorded that she had had several abortions; this did not seem relevant to her immediate medical needs, yet it was mentioned by staff on more than one occasion. She frequently complained of nausea, and frequently asked to see the doctor to get more anti-nausea medication. Nurses routinely took little note of her complaints, and dismissed her nausea as exaggerated. I attempted to intervene, since I myself had had to take a course of TB meds, and knew how sick and demoralised they make you feel. This took some struggle. I believe that moralising about her delayed her access to anti-nausea medication.

Two: a patient, Brian, suffering from a condition which left him tube-fed and unable to speak, communicating via a sign board. He used to wander up and down the ward, making distressing noises. This upset the routine of the ward and the other patients, although it also upset the staff. He was given sedation to discourage this. This worsened his ability to communicate. He was given further sedation, which left him bedridden. A nurse explained to me that the outcome would be that he’d get a chest infection from the tube feeding, coupled with the sedation, and die.

Three: a patient, Chris, with a late-stage tumour, close to death. He wanted to stay on the ward to die, but the doctors wanted him to transfer to another hospital to have one last-ditch treatment. He vanished from the ward one Saturday. I was sent out to scour the streets and the local pubs to find him. Eventually, the police found him half in and half out of a canal, considering suicide. The ward sister promised him he could stay on her ward. By first thing Monday morning, the doctors were back on the ward, had overruled the ward sister, and ‘persuaded’ him to move to the other hospital. We never saw him again.

Four: Dave, a wheelchair-bound patient, much disliked on the ward for his loud discussions, judgemental views, and frequent demands on the staff. He needed treatment for severe constipation. One day, left on the commode behind a curtain, he called for the staff to attend to him. The nursing staff walked up and down the ward laughing and deliberately ignoring him. He eventually cottoned on and treated this as a joke. However, there was an element of malice in the nurses’ actions.

What is going on in these examples?

One: Anna. I was the lowest in the staff hierarchy. However, I possessed a particular type of knowledge, which other staff lacked – personal experience of medication side effects. But my low status meant this had little or no traction – what did I know about drugs? I did my best to counter the staff’s general view that the patient’s very complaints were symptomatic of her morally negative character.

Part of my motivation to do this, however, was the confidence that came from knowing I was a graduate of the same Oxford college as one of the ward doctors (who routinely ignored my presence), which encouraged an innate bolshy streak. This was one of the key moments when I first realised the institutional and social constraints on sharing relevant knowledge, and the contextual and differential operation of hierarchy. Moreover, having more time to spend with patients, and hence a small amount of discretion in how I spent my time, I could also subvert the patient’s discredited status on the ward by singling her out for sympathy.

Health care assistants (HCAs) are low in the social hierarchy of hospital wards. Studies show that HCAs have distinct knowledge about patients, yet there is a lack of any formal system for taking notice of this – they are a marginalised and low-status group and consequently form a group identity around this. This group identity, however, further marginalises them and cements in-group behaviour (Lloyd et al., 2011).

Two: Brian disrupted the moral quiet of the ward, literally and figuratively. He disturbed others, but in so doing, he displayed his own distress, hence treatment could be presented as medical treatment for his own good. He was placed on a downward trajectory in an incremental way. One way of seeing this, indeed, is to consider him as ‘punished’ for these violations. His exclusion from the social sphere was worsened by the increased medication; this further escalated his exclusion. The nurses who witnessed this were distressed, but expressed powerlessness against medical prescribing.

Studies show the importance of local ward culture in delivering care (Daykin & Clarke, 2000). My particular take on such findings is how fitting into a ward culture is a requirement for maintaining social, i.e. moral, standing. I shall discuss ward routines shortly.

Three: Chris was welcomed into the social world of the ward by the staff, including, crucially, the ward sister; all expressed sympathy with his plight and his wishes. He wished to spend his last weeks on the ward where he’d spent months of his life. This would have represented a good outcome to his inevitable death, but for the overriding actions of the doctors, who turned up on the ward early, outside their usual visiting times, and overruled the ward sister’s promise to the patient. The moralised acceptance of his wishes from a nursing perspective was trumped by the dominance of the medical treatment paradigm of the higher-status doctors. The ward staff were left aghast and demoralised.

Studies have found that doctors tend to ignore HCAs, although there are sometimes informal interactions with the doctors, and culture varies widely (Jervis). Work has also contrasted different managerial styles: ‘pace’ – getting things done on time – versus ‘complexity’ – attending more closely to the complex needs of patients (Bridges; Williams). Note that these styles will affect local cultures, and hence the way in which the moral standing of patients and the status of their views is presented locally; they will also affect staff, their standing, and how their views are heard.

Four: Dave was roundly disliked. He often expressed unpleasant views about others, and ordered the staff about aggressively, often instructing them how to do their job. At one point, I counted that he asked for something from the staff once every 90 seconds. Laughing at him when he was completely helpless seems like cruelty, but the staff kept this within limits, claiming that they would see to him when they had time.

What’s going on? We can see a system of reinforcing social exclusion, worsened by the stigma of ‘dirty’ work. Being associated with dehumanised individuals also leads to loss of one’s own social (and hence moral) status – courtesy stigma. A spiral of exclusion can occur, since being excluded is itself a marker of dehumanisation. ‘Dirty work’ and association with ‘polluting substances’ exacerbate this (Twigg, 2000; Bolton, 2005; Jervis, 2001; Stacey, 2005). Stress-relieving humour in relation to the pollution of faeces is common in nursing situations, helping to cement group identity and combat low status (Jervis, 2001; Lloyd et al., 2011).

In fact, my take on this is that by releasing pent-up frustrations with this patient and finding a disruptive way of ‘getting their own back’ (indeed, their excuse for delay was that they were constrained by ward timetables), the nurses could be said to be acting to morally rehabilitate Dave to the ward: they made him pay for his ‘crimes’ and dissipated some of the underlying resentment towards him, in a spontaneous and creative way.

For now, because of time constraints, to summarise some important points:

Low position in the hierarchy may at once give particular knowledge, but paradoxically block its communication in the organisational culture. In some instances, such low-status workers may be able to mitigate or disrupt these hierarchies.

The actions of health care workers may also serve to try to render a patient fit to participate in this moral community. This may include keeping a patient compliant with ward routines. Staff may attempt to advocate for the patient and present their needs as legitimate – sometimes against the dominance of a medical model of care (Lloyd).

From a systems point of view, the routine of the ward is also often a major factor in the moral world of the hospital; those who disrupt this may be in some peril of being downgraded in the moral community – witness Brian. Those who attempt to usurp it are also in peril – witness Dave, whose attempts to order the nurses about, i.e. to exert unauthorised control over the functioning of the ward, led to him being put ‘in his place’ in a particularly significant way.

The ways in which low-status workers thus help to ensure the moral crediting of patients may take various forms, shaped by their very low status: keeping the patient in line with ward routines, ensuring an acceptable physical appearance, subverting negative messages from others where possible. Note their lessened ability to do this, the informal ways in which such work is accomplished, and the ways in which it goes unseen by others of higher status.

There are findings of large differences in culture between settings. Note, too, that the moralising work of health care staff may be accomplished with the help of high status.

I illustrate this via a research study on the moral and sentimental work of the clinic (Featherstone et al., 2006), which shows how important informal work can be carried out in a dysmorphology clinic, where parents bring children for diagnosis. Dysmorphology is especially pertinent to a discussion of moralisation, since it often produces visible abnormalities which, in the case of facial abnormalities, may be particularly discrediting. Parents routinely ruminate about whether they have done something to cause this. Genetic diagnosis may help to free parents from feelings of blame, and the clinic space functions as a ‘confessional space’, with assurances routinely given by the staff, often focusing on normalising the child, offering praise for the child’s behaviour, and commenting favourably on the child’s appearance, using words like ‘adorable’, ‘handsome’, ‘pretty’. Such reassurance about development, and the placing of the child in the category of the normal, gains weight from the ‘objectivity’ with which this is performed, via measurements, comparisons over time, and a large backing of professionally acquired expertise.

The moral inclusion of the child, and thus of their parents, into the accepting world of the clinic is often in stark contrast to the discrediting condition of the child in the wider community. Disruptive behaviour of the child, which ordinarily serves to discredit, is viewed within the clinic as a sign of ‘liveliness’ or normality. This is in interesting contrast to the discrediting effect of the disruptive behaviour of adults on a ward. The authors of the paper note varying practices in different clinics, but emphasise the importance of the moral and sentimental work of the clinic, which extends beyond its ‘official’ work.

I found this work of great interest in considering the possible impact of robotics, AI or other machinery on moralising in healthcare: the reassurances of the clinic staff gained weight from the conjunction of informal, human judgement and action with the ‘objectivity’ gained by ‘scientifically’ measured diagnosis. I ponder, then, what interactions between human care and robot or machine input might be most fruitful. One question which occurs to me is how much the authority of the (medically qualified) clinic staff over this scientific knowledge might work in confluence with the ‘objectivity’ of the scientific diagnosis, bolstering the medical staff’s expertise. The high status of the medically qualified clinic staff at the dysmorphology clinic, within the health care moral ecosystem, is in these instances of great value in combatting moralising judgements.

So, what conclusions can I draw?

The introduction of machines, of robots, might have multiple effects on the moral work of the hospital ward. This speculation suggests a very mixed picture.

Information/communication:

Robotics or other machinery are likely to impact upon what data is recorded and how it is shared. Information recorded by robotics might have a higher ‘objectified’ status than informal information from front-line care staff. However, replacing the care work of human workers might disrupt the lines of communication of such information for moralising work, including the informal small-talk care work which rescues patients as moral beings by presenting them as unique persons. Note that this often includes work which is not an ‘official’ part of duties.

Here’s one thought. Note how discrediting work may impact upon the social/moral status of both staff and patients. Research uncovers different paradigms for how ‘polluting’ matter is handled, handling which is part and parcel of maintaining moral and social status. These can include nursing (infection) and lay (disgust) paradigms, and humour. There is not sufficient time to explore this point here, but in other words, what is communicated, and how it is communicated, is not sheer raw ‘data’. How will the introduction of robotics into toileting, for instance, impact upon these complex dynamics?

Status:

How would the use of robotics in routine care work impact upon status, and hence upon the ecosystem of moralisation? We can perhaps find some clues in what has happened as nursing has become more professionalised. Research shows that as nurses take on more technical tasks and more HCAs are used, HCAs are still seen as low status; their status may even decrease relative to the nurses. Indications are that status follows tech; low status follows routine care work (Lloyd). This also suggests that, as more basic care work is shifted to lower-status workers, the knowledge gleaned at the bedside may be increasingly marginalised and locked out of the knowledge used to care for the patient, and the role of such workers in reconciling perspectives between different actors may be lost. That is, it suggests that increasing technology (professionalisation) may not only decrease opportunities for gathering such informal knowledge; it may further marginalise the kind of knowledge that might humanise patients, combating negative moralisation and incorporating them into the moral and social world of the ward.

Barriers to moral engagement:

Studies show that where nurses experience the moral distress of not being able to deliver good care, they disengage emotionally from patients (Bridges, 2013); in so doing, the patients are ipso facto less integrated into the moral community, and the role of the nurses in acting as their moral advocates is lessened. This has implications for moral repair work. Hence, might the use of robotics help the efficiency of wards, thereby lessening this cause of disengagement – counter to the concerns expressed above?

Formalisation:

Ward routines – will robots help keep these going, and hence help the moral status of patients by not disrupting them, e.g. by lessening the disruption to ward routine caused by toilet visits? And/or will they help to solidify the importance of such routines, perhaps making violations of a rigid routine even less acceptable?

Work within wards shows how ward-level social and organisational conditions affect the delivery of individual care (Bridges et al., 2013; Patterson et al., 2011). If it’s ward-level culture that matters, then putting in machinery and expecting that this can be programmed in a general way might be problematic and miss opportunities to make the most of the individual cultural dynamics of a ward. This has implications for how the implementation of robotics or other machinery supplementing or replacing human labour is done at a micro-level, ward by ward.

Culture:

Many of the concerns discussed here amount to barriers to multidisciplinary teamwork – how might this be affected by the employment of machinery?

This will depend upon organisational culture and upon background managerial styles. Might the use of robotics compound medical dominance? This might indeed be helpful for moral rehabilitation in cases where the dominating medicos are sensitive and responsive – e.g. as in the dysmorphology clinic – but in cases where the dominance of a medical model acts to block out the beneficial moralising influence of lower-grade staff, it would be a problem.

My concern is that the very introduction of robotics might help to reinforce an ‘efficiency’ managerial style. To quote the authors of a meta-ethnography: “Nursing is then conceptualized as solely technical and physical work, while the more complex but less codifiable relational aspects of care are ignored or viewed as a ‘luxury’ by healthcare planners and managers” (Bridges et al., 2013); this might mean that opportunities for moral inclusion and moral rehabilitation are truncated.

 

References

Bolton, S.C., 2005. Women’s work, dirty work: The gynaecology nurse as ‘other’. Gender, Work and Organization, 12(2), 169-186.

Bridges, J., et al., 2013. Capacity for care: meta-ethnography of acute care nurses’ experiences of the nurse-patient relationship. Journal of Advanced Nursing, 69(4), 760-772.

Daykin, N., Clarke, B., 2000. ‘They’ll still get the bodily care’. Discourses of care and relationships between nurses and health care assistants in the NHS. Sociology of Health & Illness, 22(3), 349-363.

Featherstone, K., Gregory, M. and Atkinson, P.A., 2006. The moral and sentimental work of the clinic: the case of genetic syndromes. In: Atkinson, P.A., Glasner, P.E. and Greenslade, H.T., eds. New Genetics, New Identities (Genetics and Society). Abingdon: Routledge, pp. 101-119.

Jervis, L.L., 2001. The pollution of incontinence and the dirty work of caregiving in a US nursing home. Medical Anthropology Quarterly, 15(1), 84-99.

Lloyd, J., Schneider, J., Scales, K., Bailey, S., Jones, R., 2011. Ingroup identity as an obstacle to effective multiprofessional and interprofessional teamwork: findings from an ethnographic study of healthcare assistants in dementia care. Journal of Interprofessional Care, 25(5), 345-351.

Patterson, M., et al., 2011. From metrics to meaning: culture change and quality of acute hospital care for older people. NIHR SDO programme project, 3(1501), 93.

Twigg, J., 2000. Carework as a form of bodywork. Ageing and Society, 20(4), 389-412.

Williams, S., Nolan, M., Keady, J., 2009. Relational practice as the key to ensuring quality care for frail older people: discharge planning as case example. Quality in Ageing, 10(3), 44-55.

Paula Boddington, Department of Computer Science, University of Oxford

We would like to thank the Future of Life Institute for generously funding the work of this project.

Diversity in thinking about ethics in AI: Brief notes on some recent research findings

These are a few notes on the issue of diversity and how addressing it might or might not help improve ethical decision making and the creation and implementation of codes of conduct. There will be a fuller discussion of this in the forthcoming book associated with this project, Towards a Code of Ethics for Artificial Intelligence Research, to be published later this year by Springer.

Note that in the following discussion, some of the focus is on findings related to gender, partly because of research demonstrating their significance, and partly because of the dearth of research on other aspects of diversity, and because some of the research on group diversity in fact examines proxies for gender such as dominance traits. However, the question of bias and discrimination of course also affects other categories.

The scale and difficulty of the problem: There is known social influence bias in how we think. We don’t yet know all that much about this, but we do know that the use of online resources and social media is likely to be a significant factor. In other words, there are changing and possibly significant influences on our judgements and decision making, some of which have the potential to herd us into like-minded cabals of opinion (Muchnik, Aral, and Taylor 2013; Pariser 2011). It would be arrogant to assume that academics and the other opinion leaders working on the ethics of AI are immune to this.

There is also considerable work on social network theory which likewise demonstrates such effects – some large, and others significant even if not large (Christakis and Fowler 2010; Jackson 2010).

There are reasons of fairness for including the voices of diverse groups in discussions of any topic, and ethics is no exception. These groups are sometimes identified by legally protected characteristics, which may vary to an extent across jurisdictions, and which may not necessarily cover all categories of people facing various forms of discrimination, or whose voices are harder to hear. Therefore, for anyone concerned with fairness, there are reasons which go beyond the purely legal for including these voices.

A considerable body of work in public engagement with policy making and with research and development deals with these issues of wider engagement and inclusion (O’Doherty and Einsiedel 2013). The rationale for this inclusion tends to focus on issues of justice in getting voices heard, and in forming policy and developing research and innovation which caters effectively to people’s needs.

Further reasons for inclusion: There could, however, be additional reasons for inclusion. It is helpful to distinguish between factors associated with bringing knowledge and understanding of particular experiences to a group, and contributions to a group’s thought and decision processes.

Work in social epistemology argues for the importance of considering the social structures within which knowers gain and share knowledge (Goldman and Blanchard 2015). Work in ethics, such as work on standpoint epistemologies, argues for the inclusion of diverse voices in debates, with the claim that those with certain experiences or identities have privileged, sometimes exclusive, access to certain insights and understandings relevant to ethical inquiry (Campbell 2015). This may or may not go along with relativist viewpoints; it should be noted that the need to check all the facts, which might involve something as uncontroversial as asking someone else for information which one does not have, is essential to ethical judgement. Furthermore, the awareness that the viewpoints and critiques of others are needed to orient one’s moral views is consistent with an account of the objective nature of moral reality, and indeed seeking alternative viewpoints may be a good strategy for reaching an objective account (McNaughton 1988).

Collective intelligence: Research findings suggest that the problem-solving skills of a diverse group can outperform those of a group composed of the most able individuals (Hong and Page 2004). Recent work shows that collective intelligence in group decision making is a factor independent of individual intelligence, and is correlated with the average social sensitivity of group members, the equality in distribution of conversational turn-taking, and the proportion of females in the group (Woolley et al. 2010). This gives particular reason, in general, to pay attention to the gender balance of groups, although the research so far is suggestive of the importance of social skills rather than necessarily gender per se. The notion of collective intelligence is of relevance to, e.g., concerns about the make-up of ethics committees, such as the finding that the average research ethics committee member is a middle-aged female without degree-level education. This may matter less than it appears, and may even be an advantage.

This work on collective intelligence also finds some confirmation in recent work examining the notion of metacognition (Frith 2012). This is a process by which we monitor our own thought processes, taking into account the knowledge and intentions of others. It enhances and enables joint action by allowing us to reflect on and justify our thoughts to others. Individuals have limited ability to do this solo, but working in groups can enhance this capacity. Whilst metacognition has widespread application, it is in ethics, and in the justification of actions and decisions to others, that it is of particular relevance. This recent work in psychology chimes extremely well with long traditions in ethics, in both philosophy and theology, from diverse thinkers, about the importance of our ability to understand our own motivations and thoughts, yet the difficulty of doing this alone (Butler 1827; Boddington 1998). This strongly suggests that working in groups will be particularly important for work in ethics, over and above the need to include a diverse range of experience. (It also has possible implications for attempts to build ethics into AI, but that is a separate topic.) It is important to note that the value of gaining feedback from others is not inconsistent with views in ethics on the importance of moral autonomy and the integrity of one’s own moral judgement and decision making; indeed it can enhance these values. Even such a foremost proponent of individual moral autonomy as Immanuel Kant recognised not only that moral actions must be based on the right motivation, but also how difficult it is for us to know our own real motivations (Kant 1972). The insights of others can surely help here.

Inclusion, group performance, and hierarchy: However, there is also work which suggests that for maximal performance it is not enough simply to create diverse groups, or mixed-gender groups. Humans are not just social animals, they are also hierarchical animals, and working out which forms of hierarchy and which methods of collaboration produce the best results in particular situations is complex. Work which shows the effectiveness of groups with mixed social dominance (which relates to, but is not simply a matter of, gender composition) also shows the complexity of these effects, and suggests that more work is needed (Ronay et al. 2012). Other work shows that testosterone disrupts collaboration, producing a tendency to overweight one’s own judgements compared to those of others (Wright et al. 2012). Since testosterone levels vary not just between genders but within genders, again, simple recipes for good group construction cannot necessarily be drawn from these hints.

Likewise, other work, which echoes the work on metacognition in postulating group-level traits and the importance of collaboration for humans, points out that it matters not just that you collaborate, but with whom within a group you collaborate (Smaldino 2014). These findings cohere with longer-established findings within social science and communication studies showing that, in communicating significant information and in making ethical decisions, people make precise judgements about whom to communicate and collaborate with, and who is the most appropriate person in a group to act (Arribas-Ayllon, Featherstone, and Atkinson 2011; Boddington and Gregory 2008).

Work on the gender mix of groups: A review of the current literature shows the value of the presence of females for improving team collaboration, but there are mixed findings concerning the value of the presence of females for team performance, with the differences attributable to context (Bear and Woolley 2011). The suggestion is that in areas of work where there is already a reasonable gender balance, the presence of women enhances team performance, but where there is a preponderance of men, sub-groups with a fairer gender balance suffer from the negative stereotypes of women in that field and do not perform so well. This could be problematic in some technical areas, where gender parity is either far off or may never be naturally achieved, since it is entirely possible that in some areas of human activity, even absent any barriers to participation, one gender may have a greater interest in participating than the other.

The double whammy of AI and philosophical ethics: At the moment AI, computer science, and also, relevantly to our considerations, philosophy, are very male-dominated areas (Beebee and Saul 2011; Aspray 2016). This is so even in ethics, the relevant sub-speciality of philosophy. This places us in a possible conundrum. The presence of women in a group may enhance group collaboration, but not necessarily group performance, where negative stereotypes of women persist. Hence, where ethics exists as an activity within a male-dominated area in which there are negative stereotypes of women, the inclusion of women within the ethics endeavour might, it could be speculated, act to produce negative stereotypes of the enterprise of ethics itself, especially if the presence of women merely enhances group collaboration but not group performance. My personal and admittedly unscientific hunch is that, within philosophy, ‘applied’ subjects like ethics are indeed still often seen as inferior to the hard-boiled theoretical subjects; but that, nonetheless (and varying greatly with local context and local culture), ethics is still often male dominated. There are many different elements to how human enterprises are slotted into our complex, hierarchical ways of operating in the world.

There are some specific difficulties with forming effective groups. Recent work shows that narcissistic individuals are perceived as effective leaders, yet this perception may diverge considerably from reality: such narcissistic individuals inhibit the exchange of information between group members and inhibit group performance (Nevicka et al. 2011). Note that this research does, however, yet again show the importance of group dynamics in effective group outcomes. The inhibition of information sharing would be especially relevant in the case of ethics, and the importance in ethics of effective self-reflection likewise suggests that the trait of narcissism in group leaders would be especially unwelcome. Given the apparent preference for picking narcissistic leaders, this is food for thought in how groups are made up. It seems that perhaps Plato had a good hunch when he argued in the Republic that only those who do not wish to lead should be allowed to do so.

A tentative conclusion from this research is that if you are interested solely in the representation of women or other particular groups for reasons of justice pertaining to the participation of those groups, priorities and strategies may not be precisely the same as if you are additionally interested in how diversity improves outcomes. For instance, work which shows that group communication and interaction are vital to success would indicate that some moves to include particular groups – for the sake of argument here, women – might have little or no impact on improving outcomes or collaboration, if the particular ways in which women are included do not afford the opportunity for communication, empathy, feedback on ideas, and the other effective procedures which the research indicates merit attention. For example, including women on panels where the opportunities for interaction are very formal may do little to enhance collaboration or output. More worryingly, research indicating the possibility of negative reactions to the inclusion of women in certain contexts suggests that the visible inclusion of women in an area of work might lead to that area being stigmatised. It might be wise to take steps to consider and, where possible, ameliorate such possibilities. Perhaps steps could be taken in various ways to signal the high regard in which work on ethics in AI is held. (But then, I would say that, wouldn’t I!)

There are particular reasons to be especially concerned with these issues in relation to AI. The very question of bias in algorithms is a major ethical challenge in AI; potential problems of bias concern the application of algorithms in certain contexts, the assumptions that drive the creation of algorithms, and the data sets used to create algorithms and to drive machine learning (O’Neil 2016; Nature Editorial 2016). The problem is quite acute. Figures show, for instance, that the numbers of women in computer science are actually in significant decline, and that this is especially the case in AI, a problem that Margaret Mitchell, a researcher at Microsoft, calls the ‘sea of dudes’ problem (Clark 2016). Research even finds that many job ads in tech tend to contain words which are rated ‘masculine’ and hence tend to put women off applying (Snyder 2016). In Snyder’s study, a word was rated as gendered ‘if it statistically changes the proportion of men and women who respond to a job post containing it’; in other words, by an operational definition drawn from large data sets, rather than one making presuppositions about language and gender. Note then that this is making use of technology to spot the human problems in technology. Job ads for Machine Intelligence were found to be the most masculine-biased of them all. The optimistic side of this is that, having discovered these issues, steps can be taken to ameliorate them.
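
To make that operational definition concrete, here is a minimal sketch of the kind of test it implies – a hypothetical illustration only, not Snyder’s actual method or data: for each word, compare the proportion of women responding to job posts containing the word with the proportion responding to posts without it, and flag words where the difference is statistically significant. The posts, response counts and significance threshold below are all invented for the example.

    # Hypothetical sketch: a word counts as "gendered" if posts containing it
    # attract a statistically different proportion of female respondents than
    # posts without it. Toy data; not Snyder's pipeline.
    import math

    posts = [
        {"text": "seeking a rockstar ninja developer to dominate the market",
         "women": 4, "men": 46},
        {"text": "join a collaborative supportive team building useful tools",
         "women": 22, "men": 28},
        {"text": "aggressive self-starter wanted to crush ambitious targets",
         "women": 3, "men": 47},
        {"text": "we value mentoring communication and thoughtful code review",
         "women": 25, "men": 25},
    ]

    def two_proportion_z(women_a, men_a, women_b, men_b):
        """z statistic for the difference in proportion of female respondents
        between posts with the word (group A) and posts without it (group B)."""
        n_a, n_b = women_a + men_a, women_b + men_b
        p_a, p_b = women_a / n_a, women_b / n_b
        p_pool = (women_a + women_b) / (n_a + n_b)
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        return (p_a - p_b) / se if se > 0 else 0.0

    vocab = {w for p in posts for w in p["text"].split()}
    for word in sorted(vocab):
        with_w = [p for p in posts if word in p["text"].split()]
        without_w = [p for p in posts if word not in p["text"].split()]
        if not with_w or not without_w:
            continue  # word appears in all or no posts; no comparison possible
        z = two_proportion_z(
            sum(p["women"] for p in with_w), sum(p["men"] for p in with_w),
            sum(p["women"] for p in without_w), sum(p["men"] for p in without_w),
        )
        if abs(z) > 1.96:  # roughly, significant at the 5% level
            label = "masculine-coded" if z < 0 else "feminine-coded"
            print(f"{word!r}: z = {z:+.2f} ({label})")

On this toy data, ‘rockstar’-style words come out masculine-coded because posts containing them attract far fewer female respondents; a real analysis of this kind would of course need large numbers of posts and respondents, and careful controls for words that merely co-occur with genuinely gendered language.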

Within certain groups and certain professions, viewpoints may be less diverse than in the population as a whole. For instance, within universities, more academics lean towards the left of the political spectrum than to the right. There is currently a particular concern expressed about the dampening down of free speech both within universities and in the media, including social media. Whatever the exact nuances of this situation, in any endeavour towards developing codes of ethics in AI, it would be beneficial to watch out for and address any such biases or gaps in thinking. One arena which is attempting to encourage diversity of viewpoints in debate is the Heterodox Academy.

We wish to thank the Future of Life Institute for their generous sponsorship of our programme of research.

REFERENCES

Arribas-Ayllon, Michael, Katie Featherstone, and Paul Atkinson. 2011. ‘The practical ethics of genetic responsibility: Non-disclosure and the autonomy of affect’, Social Theory & Health, 9: 3-23.

Aspray, William. 2016. Women and Underrepresented Minorities in Computing: A Historical and Social Study (Springer: Heidelberg).

Bear, Julia B., and Anita Williams Woolley. 2011. ‘The role of gender in team collaboration and performance’, Interdisciplinary Science Reviews, 36: 146-53.

Beebee, Helen, and Jenny Saul. 2011. “Women in Philosophy in the UK: A Report by the British Philosophical Association and the Society for Women in Philosophy in the UK.” London: British Philosophical Association, Society for Women in Philosophy UK.

Boddington, P. 1998. “Self-Deception.” In Encyclopedia of Applied Ethics, edited by Ruth Chadwick, 39-51. San Diego: Academic Press, Inc.

Boddington, Paula, and Maggie Gregory. 2008. ‘Communicating genetic information in the family: enriching the debate through the notion of integrity’, Medicine, Health Care and Philosophy, 11: 445-54.

Butler, Joseph. 1827. Fifteen Sermons Preached at the Rolls Chapel (Hilliard, Grey, Little and Wilkins: Boston).

Campbell, Richmond. 2015. “Moral Epistemology.” In The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta. Stanford: Stanford University.

Christakis, Nicholas, and James Fowler. 2010. Connected: The Amazing Power of Social Networks and How They Shape Our Lives (Harper Press: London).

Clark, Jack. 2016. ‘Artificial Intelligence Has a ‘Sea of Dudes’ Problem’, Bloomberg Technology.

Editorial. 2016. ‘Algorithm and blues’, Nature: 449.

Frith, Chris D. 2012. ‘The role of metacognition in human social interactions’, Phil. Trans. R. Soc. B, 367: 2213-23.

Goldman, Alvin, and Thomas Blanchard. 2015. “Social Epistemology.” In The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta.

Hong, Lu, and Scott E Page. 2004. ‘Groups of diverse problem solvers can outperform groups of high-ability problem solvers’, Proceedings of the National Academy of Sciences of the United States of America, 101: 16385-89.

Jackson, Matthew O. 2010. Social and Economic Networks (Princeton University Press: Princeton).

Kant, Immanuel. 1972. The Moral Law, translation of Groundwork for the Metaphysics of Morals (Hutchinson).

McNaughton, David. 1988. Moral Vision (Blackwell: Oxford).

Muchnik, Lev, Sinan Aral, and Sean J Taylor. 2013. ‘Social influence bias: A randomized experiment’, Science, 341: 647-51.

Nevicka, Barbora, Femke S Ten Velden, Annebel HB De Hoogh, and Annelies EM Van Vianen. 2011. ‘Reality at odds with perceptions: narcissistic leaders and group performance’, Psychological Science: 0956797611417259.

O’Doherty, Kieran, and Edna Einsiedel. 2013. Public Engagement and Emerging Technologies (UBC Press: Vancouver).

O’Neil, Cathy. 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (Allen Lane).

Pariser, Eli. 2011. The Filter Bubble (Viking Penguin: London).

Ronay, Richard, Katharine Greenaway, Eric M Anicich, and Adam D Galinsky. 2012. ‘The path to glory is paved with hierarchy: when hierarchical differentiation increases group effectiveness’, Psychological Science, 23: 669-77.

Smaldino, Paul E. 2014. ‘Group-level traits emerge’, Behavioral and Brain Sciences, 37: 281-95.

Snyder, Kieran. 2016. ‘Language in your job post predicts the gender of your hire’. https://textio.ai/gendered-language-in-your-job-post-predicts-the-gender-of-the-person-youll-hire-cd150452407d#.gz88w5ovr.

Woolley, Anita Williams, Christopher F Chabris, Alex Pentland, Nada Hashmi, and Thomas W Malone. 2010. ‘Evidence for a collective intelligence factor in the performance of human groups’, Science, 330: 686-88.

Wright, Nicholas D, Bahador Bahrami, Emily Johnson, Gina Di Malta, Geraint Rees, Christopher D Frith, and Raymond J Dolan. 2012. ‘Testosterone disrupts human collaboration by increasing egocentric choices’, Proceedings of the Royal Society of London B: Biological Sciences: rspb20112523.

Paula Boddington

We would like to thank the Future of Life Institute for sponsoring our work

The Distinctiveness of AI Ethics, and Implications for Ethical Codes

Paula Boddington

Paper presented at the workshop Ethics for Artificial Intelligence, July 9th 2016, IJCAI-16, New York.

Abstract

If workable codes or guidance on ethics and AI are to be produced, the distinctive ethical challenges of AI need to be faced head on. The purpose of this paper is to identify several major areas where AI raises distinctive or acute ethical challenges, with a view to beginning an analysis of challenges and opportunities for progress in these areas. Seven areas described are: Hype in AI, and its unfortunate consequences; The interface between technology and wider social or human factors; Uncertainty about future technological development and its impact on society; Underlying philosophical questions; Professional vulnerability in the face of emerging technology; An additional layer of complexity in ethical codes, concerning machine behaviour; The extension, enhancement, or replacement of core elements of human agency.

1              Introduction

This paper arises from work undertaken as part of an FLI project entitled ‘Towards a code of ethics for AI’. The purpose of this project is not to produce a code of ethics as such, but to clarify and analyse the challenges and purposes of producing such codes.

This presentation concerns some potential challenges to producing workable, transparent codes, guidance, or regulation in the field of AI. In this endeavour, we should not presuppose that AI raises any more serious ethical problems than other areas of technology, nor that its problems are completely unique. In many respects, indeed, AI raises issues which have broad similarities with other areas; but the focus here will be on particular respects in which AI does raise issues that are distinctive or unique. Insofar as these bear similarities to other areas, there can be mutual learning; but insofar as they are AI-specific, particularly hard attention must be given to unravelling them.

Our work has looked at several areas where these issues arise, which will be considered in turn in what follows:

  1. Hype in AI, and its unfortunate consequences;
  2. The interface between technology and wider social or human factors;
  3. Uncertainty about future technological development and its impact on society;
  4. Underlying philosophical questions;
  5. Professional vulnerability in the face of emerging technology;
  6. An additional layer of complexity in ethical codes, concerning machine behaviour;
  7. The extension, enhancement, or replacement of core elements of human agency.

2              Hype in AI, and its unfortunate consequences

There is a lot of hype concerning many technologies, and in particular their ethical implications, as we have seen with genomics and nanotechnologies [Caulfield, 2012], for example. In the case of AI, this hype is now virtually at fever pitch [Hawking, 2014], with some prominent individuals recently claiming that AI presents ‘an existential threat’ to humanity. Whether or not such concerns are overblown, this very hype itself will have impacts.

2.1          Angels and bad guys

One impact, which can be quite considerable, is that fear of being branded one of the ‘bad guys’ may lead to individual or collective attempts to promote oneself as on the side of the angels. Such positioning might be for intrinsic or strategic reasons (e.g. the EPSRC meeting on the Principles of Robotics explicitly aimed to avoid the sort of public aversion which the UK public had shown to GM [Bryson, 2012]). But at its worst, appearing to be ethical might trump actually being ethical.

2.2          Virtue signalling and exclusion of the under-resourced

Secondly, this positioning might have an effect upon the very content of the codes, for example by including largely vacuous material that is little more than ‘virtue signalling’ [Bartholomew, 2015], with empty displays of ethical probity (‘we are passionate about the future of the human race’, ‘we believe that AI should be used for the benefit of the whole of humankind’, etc). More tangibly, attempts have been made to urge ethical behaviours on a group which would exclude both those who disagree, and less well-resourced members: it’s easier to be ethical the richer you are. This can be seriously prejudicial against the least privileged, unless special provision is given or attention paid to the issue [Boddington, 2011, ch 10].

Thus otherwise laudable policy can end up inadvertently excluding the actors who are least well resourced, especially if it focuses too much on the most ‘visible’ ethical questions, without considering them in context. Data-sharing policies in genomics provide one clear example here, biasing procedures against those who are unable – perhaps for resource reasons – to reciprocate.

For an example from AI, the IIIM’s Ethics Policy for Peaceful R&D eschews military funding, and will only collaborate with civilian researchers if they have received no more than 15% of their funding from military sources in the last 5 years [IIIM]. The preamble to this policy makes explicit political statements, including that military funding is commonly defended by reference to an increased ‘“terrorist threat”’ (with scare quotes that clearly imply scepticism); it also endorses the ‘brave’ actions of Edward Snowden. Researchers who are not in a position to find alternative sources of funds may therefore be excluded from potentially beneficial collaboration. (So far, this is not to comment on the rights or wrongs of the IIIM’s stance, merely to use it as an example of how policy may lead to differential impact for different actors within the world of AI research.)

2.3          Methodological impacts – alleged novelty and comprehensiveness

Thirdly, hyping up the uniqueness of the issues in AI will also have important effects on the methodology of how ethical questions are addressed. For if an ethical question is seen to be (in part, at least) an old question applied in a new context, then one can argue with reference to previous applications, but if a question is presented as being new and unique, then it will be approached very differently. Such framing can be vital, because example choice and description – including both language and institutional context – significantly affect how ethical questions are understood and developed [Chambers, 1999; Fischer and Forrester 1993; Rein and Schon 1993; Taburrini 2015].

Take, for example, the framing remark: ‘everything that civilisation has to offer is a product of human intelligence’ [Hawking, 2014]. This misleadingly suggests that everything in society derives from design and intelligence, and may lead to hubristic discounting of serendipity, circumstance and pure accident. It can also divert attention from how the ethical issues raised by any technology are a complex result of many factors (see below), misleadingly focusing on aspects of a situation that are designed into it, at the expense of those that are contingent or emergent.

2.4          Viewing the landscape through the lens of the hyped technology

Another risk of hype is that the consequent emphasis on the new technology will distort perceptions of the moral context, interpreting it in terms of that (perhaps problematic) technology at the expense of others. As an example from biotechnology, the UK’s HFEA recently sanctioned mitochondrial transfer techniques (so-called ‘3 person IVF’) to combat the transmission of mitochondrial disease from mother to child. The HFEA praised the technique as ‘giving the chance of having a healthy child’ [HFEA 2013]. But these women could already have children through surrogacy, or gestate a child with a donated egg. So the description of the technique’s advantages tacitly presupposed a particular notion of what it is to have ‘your own child’ (viz. with maternal nuclear DNA), thus eclipsing other, older reproductive techniques that the same organisation also supports and regulates [HFEA]. Though less morally serious, a similar pattern can be seen in the progressive development of information technology, for example in respect of demands made for administrative and ‘audit’ information: as new possibilities have become practicable (e.g. the generation of huge amounts of printed information), these possibilities have often come to be seen as absolutely necessary, usually without careful consideration of the costs and benefits. Thus the new technology shapes our vision, without any careful prior judgement that it is appropriate.

3              The interface between technology and wider social or human factors

Related to the issue of hype is the risk that excitement over the potential of AI and its technological possibilities will lead us to overlook complex social and institutional factors, which may affect how the relevant questions are framed, understood and addressed. These factors are also crucial for the enforcement or influence of any code of ethics, many of which remain unread or ignored [McLean, Elkind 2004]. The institutional or political driver of the production of such codes is likely to be more concrete than the abstract encouragement of ethical behaviour: for example a wish to avoid public backlashes or lawsuits, to encourage funding, to signal the virtue of the UK in ethical regulation (again perhaps to attract funding), and generally to be seen as virtuous (which is not of course quite the same as the desire to be virtuous). To take a balanced view of these things – and to avoid confusing ethics with public relations – the specific institutional, political, financial and economic context of the creation and use of AI must be properly considered.

4              Uncertainty about the future of the technology and its implications for society

The future of any rapidly developing technology is hard to predict, and not only for technological reasons (since economic, political, social, and other factors can often intervene). Appropriate ethical judgements about new developments are also impossible to anticipate, since attitudes may evolve with the technology, as we have seen in recent debates about privacy, where views vary greatly depending on the context, in ways that could not have been foreseen [Nissenbaum, 2010, 2004]. In the case of AI, these points are especially pertinent, given how far AI could potentially impact upon how we think and relate to each other, and on vital elements of society such as modes of production.

One popular response to this evident impossibility of producing future-proofed codes of ethics in areas of rapid development or contextual uncertainty is to stress the importance of equipping researchers and professionals with the ethical skills to make nuanced decisions in context – to refer to virtue ethics [AOIR 2012, Atkinson 2009]. But virtue ethics is especially badly placed to provide any such panacea when dealing with technology which is likely to bring broad ranging and unpredictable change in central areas of human life. For the dominant model of virtue ethics – inherited from Aristotle – is predicated on a stable and slow-changing society, where the young can learn virtues from the older and wiser who possess phronesis, or practical wisdom. This model is hopeless when the need is to develop a new model of practical wisdom, to cope with future realities many of which probably cannot yet even be conceived, let alone anticipated.

5              Background philosophical questions

Fundamental questions, such as what it means to be human, or what ultimate values we should be pursuing, can readily arise in areas of rapidly developing technology. In genomics and genetics, for example, questions about the ‘essence’ of humanity can appear in debates about what is ‘natural’ or ‘unnatural’, what genetic interventions count as curing disease (as opposed to creating a ‘designer baby’), how the future of humanity might be altered via germline alterations; whether we in fact face a ‘post-human’ future, or whether the human race might diverge into two or more species. A proper approach would involve teasing out the different understandings and assumptions involved, how exactly these relate to the practical ethical questions, and considering how far the deep philosophical issues can be resolved sufficiently, or bypassed, to allow those practical questions to be addressed. But too often the fundamental issues – so far from provoking corresponding deep and careful thinking – can be overdramatised, leading to idealised or rarefied notions of what is ‘natural’ or ‘really’ human, and thus obscuring the issues rather than clarifying them.

Some of these questions arise also in AI. Will interaction between humans and intelligent machines change our natures in some significant ways, and might the future bring hardware interactions between humans and machines – another possible aspect, perhaps, of a post-human future? Will we find ourselves no longer the ‘crown of creation’ but subordinate to superintelligent machines? A distinctive hallmark of how these questions arise in AI is that they focus on mental aspects of what it is to be human – concerning agency, decision and choice – including for instance questions about responsibility and accountability, as well as what counts as human intelligence. Often such debates focus on extrapolated and imagined future agency within AI, commonly contrasted simplistically with idealised and uncontextualised notions of human agency.

As one example, in the context of Lethal Autonomous Weapons, it is sometimes argued that these violate the human right to be killed with dignity by a soldier who is acting in full moral consciousness of the fact that they are taking a human life [Champagne, Tonkens 2015; Sparrow 2007]. But, putting aside the highly debatable question of whether there is any dignity at all in being killed in war, this argument implausibly idealises the actions of the average soldier. Of course, each human being will always retain their own moral responsibility; but qua soldier, an agent will be following a chain of military command while on the battlefield. Excepting special circumstances where a soldier has reason to consider that something has gone seriously awry with the chain of command and that the rules of war are being broken, obedience is required and reasonably expected – so the soldier’s time for full and conscious moral reflection was at the point of enlisting: by the time of battle, that opportunity has gone. Moreover, it’s morally cruel to expect our best young soldiers – when having to kill in the heat of dangerous battle for our benefit – to pause to reflect in full consciousness ‘this is a human being that I alone am responsible for killing’. In normal circumstances, such responsibility lies more with military command and the politicians who called for war in the first place. Military robots, then, may be being held up against a faulty idealisation.

6              Professional vulnerability in the face of emerging technology is magnified with AI

One motivating reason for the existence of professional codes of conduct is the relative vulnerability of others – clients, and the general public – compared to professionals. It’s assumed that professionals have capacities and knowledge which others lack, or possess to a much lesser degree, and that therefore professionals must use their skills and knowledge wisely and to the general good. But one of the major issues flagged in relation to AI is the fear that even the professionals might be relatively vulnerable themselves – that AI will become too complex to understand, and possibly to control – especially given the very fast pace at which it is able to operate [Bostrom 2014]. Hence, any ethical codes for AI need to take account of debates about the possibly limited capacity of AI researchers to understand, anticipate and control their very products. This does not make AI unique per se – there have also, for example, been worries about biotechnology ‘escaping the lab’ [Koepsell 2009] – but the extent of these fears with respect to AI is probably greater than with any other technology. This issue of control is intimately linked to the following issue, and the two together imply that producing codes of ethics for AI will be particularly challenging, whatever one thinks of the question of how much autonomy AI has or will develop.

7              An additional layer of complexity is required in ethical codes for AI, concerning machine behaviour

Codes of ethics for the professions deal with the behaviour of professionals, and the consequences of the products or services they produce. But in AI, a special feature is that there is a layer of machine behaviour which also needs regulatory attention (and this is true regardless of debates about the genuineness of machine ‘intelligence’). The fact that AI can act in ways unforeseen by its designer raises issues about the ‘autonomy’, ‘responsibility’ and decision-making capacities of AI, and hence of the relation to human autonomy, responsibility, decision and control. If we try to address these issues at a very general level, we risk falling into vagueness and vacuity. So as a general principle, it is likely to be far more productive to attempt to work through these sorts of problems within particular concrete settings of application. When this is done, complex and potentially obscure philosophical debates may even be avoided entirely (as we shall see below).

8              The extension, enhancement, or replacement of core elements of human agency

This is a hallmark of AI, and although all machines enhance human powers to some extent, AI has the potential to do so more effectively than any other technology to date. Three main points deserve stressing here:

8.1          Economic and social effects

First, the potential of AI raises questions beyond the remit of AI researchers per se, such as when considering the wider societal impacts of large shifts in wealth creation and modes of production. Such issues as whether wealth should be systematically redistributed to compensate for the job losses occasioned by AI [Brynjolfsson, McAfee, 2014; Frey, 2013] cannot plausibly be dealt with by any realistic or achievable professional code of ethics. But these issues do serve to illustrate how simplistic it is to presume that calls for ‘beneficial AI’ can be interpreted and applied straightforwardly, even if they are agreed. Whether some application of AI counts as overall ‘beneficial’ might well depend on the economic and social structure within which it is embedded, far beyond the control of AI researchers.

8.2          AI within human systems

Secondly, in many cases, AI will enter complex systems of human agency, making it necessary that codes of ethics deal adequately with this interface. Consider, for example, the use of robotics within a hospital ward. Such places are highly complex systems with lines of responsibility and accountability which are partly formalised and partly informal, and which often change in response to circumstance and policy. Often, also, the lines of responsibility, reporting, and duty may be fragmented, duplicated and overlapping. Robots placed within such a system – whether or not they themselves are considered as responsible agents – will certainly displace some human nodes of responsibility and accountability [Daykin, Clarke, 2000]. Thus careful analysis of the effects of robotic placement within such systems is vital, and codes of ethics need to recognise this complexity. In this light, the EPSRC’s Principles of Robotics – which simply describes robots as ‘products’, and explicitly ascribes responsible agency only to humans, never robots [Boden et al 2011] – falls badly short. A more nuanced approach is required, recognising that within a hospital system, responsibilities are understood as only partly falling on the individual, for individuals are also part of an entire system, and computers can perfectly well be part of such a system. Thinking in this way might also help to bypass intractable questions about the nature of ‘genuine’ responsibility, and whether robots will ever achieve it. Such questions do not need to be solved if our aim – as in the NHS – is primarily to understand how errors occur with a view to correcting them.

8.3          Spontaneity versus forethought

Humans and AI systems make decisions – including decisions with a moral aspect – in different ways. Close attention to this might be useful in disentangling some moral worries about AI, and clarifying relevant differences with respect to codes of ethics. For example, humans may be forgiven for some decisions in circumstances where higher standards would be expected of an AI (something like an idealised human standard, perhaps). Consider the concern that has been expressed regarding how an autonomous vehicle might react in a crash situation where a choice has to be made about who might be killed or injured, depending upon what actions are taken [Russell et al 2015]. Discussion of these cases – though commonly voiced as an objection to autonomous vehicles – often takes for granted that a higher standard of decision-making can reasonably be expected of them. Thus, for example, a human being can often be forgiven for a suboptimal decision made under duress and in haste, and could also be forgiven for e.g. trying to save themselves or their families in a dangerous situation. But we are much less likely to ‘forgive’ a decision conceived and programmed beforehand. This prejudice may or may not have any rational basis in deeper ethical foundations. But it impacts on the consideration of autonomous vehicles, which have to be designed ‘in the cold light of day’ to react appropriately in emergency situations. From one point of view, such careful advance consideration seems ethically superior to the ‘it’ll be alright on the night’ approach; but from another point of view, it’s less, well, ‘human’. Such double-edged complexity abounds when considering replacing human labour and agency with AI. Again, this short discussion is far from complete, but serves only to flag the importance of timing, planning, and the agentic source of decision making when considering the ethics of AI.

9              Conclusions

There are numerous challenges in considering the ethical issues that AI raises, and further challenges in developing codes of ethics, guidance or other regulations for ethically robust AI. Although we are faced with much distracting hype, which perhaps distorts some of the issues and their import, careful examination of the particular and distinct issues which AI presents can nonetheless help us to understand them further. Addressing these challenges will require input from the AI community and beyond.

Acknowledgments

Many thanks to Peter Millican for his careful commentary on the manuscript, as well as to Michael Wooldridge for discussion.

This work has been kindly supported with a grant from the Future of Life Institute

References

AoIR. Ethical Decision-Making and Internet Research: Recommendations from the AoIR Ethics Working Committee (Version 2.0). 2012.

Atkinson P. Ethics and ethnography. Twenty-First Century Society. 2009;4(1):17-30.

Bartholomew J. Hating the Daily Mail is a substitute for doing good. The Spectator. 2015 (April 18th 2015).

Boddington P. Ethical Challenges in Genomics Research: a guide to understanding ethics in context. Heidelberg: Springer; 2011.

Boden M, Bryson J, Caldwell D, Dautenhahn K, Edwards L, Kember S, et al. Principles of Robotics. Swindon, UK: Engineering and Physical Sciences Research Council (EPSRC), 2011.

Bostrom N. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press; 2014.

Brynjolfsson E, McAfee A. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. New York: W.W. Norton & Company; 2014.

Bryson J. The making of the EPSRC Principles of Robotics. AISB Quarterly. 2012;133(Spring 2012):14-5.

Caulfield T, Condit C. Science and the Sources of Hype. Public Health Genomics. 2012;15(3-4):209-17.

Chambers T. The Fiction of Bioethics (Reflective Bioethics). New York: Routledge; 1999.

Champagne M, Tonkens R. Bridging the Responsibility Gap in Automated Warfare. Philos Technol. 2015;28(1):125-37.

Daykin N, Clarke B. ‘They’ll still get the bodily care’. Discourses of care and relationships between nurses and health care assistants in the NHS. Sociology of Health & Illness. 2000;22(3):349-63.

Fischer F, Forrester J. Editor’s Introduction. In: Fischer F, Forrester J, editors. The Argumentative Turn in Policy and Planning. Durham and London: Duke University Press; 1993. p. 1-17.

Frey C, Osborne M. The Future of Employment: How Susceptible Are Jobs to Computerisation? Oxford: Oxford Martin School, University of Oxford, 2013.

Hawking S, Russell S, Tegmark M, Wilczek F. Stephen Hawking: ‘Transcendence looks at the implications of artificial intelligence – but are we taking AI seriously enough?’. The Independent. 2014 May 1st.

HFEA. HFEA agrees advice to Government on the ethics and science of mitochondria replacement [press release]. London; 2013.

Human Fertilisation and Embryology Authority. Available from: http://www.hfea.gov.uk/.

IIIM. Ethics Policy for Peaceful R&D. Reykjavik, Iceland: Icelandic Institute for Intelligent Machines.

Koepsell D. On Genies and Bottles: Scientists’ Moral Responsibility and Dangerous Technology R&D. Sci Eng Ethics. 2009;16(1):119-33.

McLean B, Elkind P. The Smartest Guys in the Room: The Amazing Rise and Scandalous Fall of Enron. Portfolio; 2013, 2004.

Nissenbaum H. Privacy as Contextual Integrity. Washington Law Review. 2004;79(1):119-58.

Nissenbaum H. Privacy in Context: Technology, Policy and the Integrity of Social Life. Palo Alto: Stanford University Press; 2010.

Rein M, Schon D. Reframing Policy Discourse. In: Fischer F, Forrester J, editors. The Argumentative Turn in Policy and Planning. Durham and London: Duke University Press; 1993. p. 145-66.