The European Parliament has recently produced a study ‘European Civil Law Rules in Robotics’. This continues work by the European Parliament’s Committee on Legal Affairs, such as its publication in May 2016 of a Draft Report with Recommendations to the Commission on Civil Law Rules on Robotics.
Such studies are of immense relevance and interest to any project of drawing up codes of ethics for AI. Any code of ethics operating within the European Union would need to be cognisant of relevant laws, and not simply in order to comply with them: as codes of ethics are developed, it is vital to be aware of harmonies or clashes with other influential statements, so note should be taken of many aspects of any such laws pertaining to robotics and AI. This includes considering the surrounding policy context and any preambles or accompanying texts, which can give insights into the guiding motivations and underlying values that inform the regulations. Such insights can be informative for codes both inside and outside the EU.
In working out how best to think about developments in AI, or indeed any fast-changing technology, science fiction can be a useful tool. Where situations are not yet with us, we need to use our imagination. Science fiction almost invariably has some moral content, whether explicit or implicit. (Indeed, it’s virtually impossible to write any half-plausible and interesting story that does not have some normative elements.) Science fiction frequently plays out scenarios about how we might or might not relate to robots and to very advanced computers. These can be extremely useful in helping us to ponder what our values are and how we might react to new developments.
But another rich source of imagination and values is found in stories we already have. Much work that discusses AI refers to ancient stories and myths about robots created from mere matter, as well as to more modern literature. Stories often referred to in such contexts include Frankenstein’s monster, the Golem, Pygmalion, the Tin Man from The Wizard of Oz, the maidens made of gold who appear in the Iliad, and the giant bronze Talos. Spirit-powered robots defended relics of the Buddha, according to Indian legend.
But in referring to any such stories, various lessons can be drawn, and care is needed about which. (It’s not wise to conclude from the story of Sleeping Beauty that it’s a good idea to marry a man you’d never met before who’s broken into your bedroom and awoken you with a kiss. Other layers of interpretation in such fairy stories, though, are a rich source of meaning.) So, in looking at the preamble and surrounding text of policy documents which refer to such ancient or more modern stories, it’s useful to take a look at how these stories are used and what lessons are drawn from them.
The document European Civil Law Rules in Robotics refers to cultural ideas about robots in its introductory texts, as part of a narrative justifying its approach and, in particular, grounding it in a response to what are seen to be distinctively European concerns. The relevant passage comes in the document’s section 2, “General Considerations on Robots: The notion of the robot, its implications and the question of consciousness”, where the discussion is used to explain reservations about using the term ‘smart robot’ in a set of regulations designed for the European context, because of the likely public reaction:
1°/ Western fear of the robot
The common cultural heritage which feeds the Western collective conscience could mean that the idea of the “smart robot” prompts a negative reaction, hampering the development of the robotics industry. The influence that ancient Greek or Hebrew tales, particularly the myth of Golem, have had on society must not be underestimated. The romantic works of the 19th and 20th centuries have often reworked these tales in order to illustrate the risks involved should humanity lose control over its own creations. Today, western fear of creations, in its more modern form projected against robots and artificial intelligence, could be exacerbated by a lack of understanding among European citizens and even fuelled by some media outlets.
This fear of robots is not felt in the Far East. After the Second World War, Japan saw the birth of Astro Boy, a manga series featuring a robotic creature, which instilled society with a very positive image of robots. Furthermore, according to the Japanese Shintoist vision of robots, they, like everything else, have a soul. Unlike in the West, robots are not seen as dangerous creations and naturally belong among humans. That is why South Korea, for example, thought very early on about developing legal and ethical considerations on robots, ultimately enshrining the “smart robot” in a law, amended most recently in 2016, entitled “Intelligent robots development and distribution promotion act”. This defines the smart robot as a mechanical device which perceives its external environment, evaluates situations and moves by itself (Article 2(1)). The motion for a resolution is therefore rooted in a similar scientific context.
Commentary on this passage:
The passage opens with the suggestion that the collective consciousness of the West shows itself in ancient fears about losing control of robots, which must be addressed in order not to hamper the robotics industry. The wording seems to imply that this fear is unfounded or poorly grounded. This suggests a somewhat cavalier attitude to such myths, as if they indicate something irrational and to be combatted. While a culture’s myths may indeed show things which cannot be reduced entirely to rational analysis, the very fact that they have survived for so long suggests that myths and stories may be indicating something important. Indeed, the document goes on to validate these Western fears, but tellingly, does so by referring not to myth or culture but by heeding the recent warnings about AI of four prominent scientists and technologists: Stephen Hawking, Elon Musk, Bill Gates and Bill Joy, citing these experts “now that the object of fear is no longer trapped in myths or fiction, but rooted in reality”.
This is a telling way of presenting these myths and stories of our past. It’s as if the lessons we need to learn from them are merely some uncanny, lucky prediction of the scientific future, and now that we have the science, and the technologists and scientists to warn us, we can at last recognise that these warnings were, by fluke, right after all. Yet the appeasement of the general European public is framed in terms of addressing and combatting the cultural sway of the ancient myths. So… are the pre-scientific, myth-fuelled fears of the “great unwashed” general public right by some spooky coincidence? Is scientific reason, endorsed by experts, by happenstance now simply marching in step with unreason?
One reason these questions matter is that it makes a real difference whether this document is attempting to accommodate reasonable public concerns, or pandering to an irrational populace. One might develop policy, and in particular public information, quite differently depending on which attitude is taken. Indeed, it is somewhat unclear what stance the document takes on this point.
It is notable that, in discussing the lack of fear of robots in the Far East, the document grounds the Japanese stories about robots in the underlying metaphysical and normative framework of Shintoism. This makes sense of the positive Japanese response to robots. Such a sense-making narrative is absent from the account of the Western myths to which the document refers. (Note, then, that the EU document subliminally suggests that positive myths of robots are grounded in something substantive, whereas negative myths are not.)
Is there no sense-making Western narrative available? ‘The West’ is of course not a monolithic idea – there are robot stories in various Western traditions, including the Norse as well as the Greek and Jewish traditions referred to in the document. But note at this juncture that the EU document highlights the Hebrew myth of the Golem as being particularly influential on Western society and on what the document calls ‘western fear of creations’. Indeed, it is the only Western robot story actually named.
I had to read this phrase ‘western fear of creations’ several times to make sure I’d understood it. For the idea that it is the West which is afraid of creation, and that a particularly strong influence on this fear stems from the Jews, butts up against the flourishing of science, technology and invention in the West, which has been so profoundly influenced by the Judeo-Christian tradition – not to mention the high density of tech start-ups in Tel Aviv, for example. By ‘fear of creations’ the document is presumably referring to fear of autonomous creations which escape the control of their creator, not fear of artefacts per se.
But whilst it cites underlying frameworks behind Eastern robot stories, the EU document’s account of Western responses to robots misses out a profoundly influential Hebrew narrative which surely lends heavy cultural salience to Western myths about robots. I refer of course to the story in Genesis of the creation of man and of the Fall. For the Fall shows how, in disobeying their Creator, Adam and Eve gained the ability to see that they were naked, and acquired knowledge of good and evil. And we all know what happened next, armed with that dangerous knowledge: thousands of years of often sorry human history, with bright spots here and there. Mankind was given the power to act and to think, but the freedom which Adam and Eve were given to act independently of their Creator also led to disobedience and disaster.
But these are precisely the fears expressed about AI and robots now. It’s not fear of creation in the sense of invention and artefact, or of control over the world per se, since the Genesis story gives mankind dominion over the earth – it’s fear of a creation which escapes the control of its creator. It’s fear of a creation which, left in a safe space unobserved, gets into mischief. It’s worry about how we, the creators, might treat robots if they were to develop consciousness and the ethical awareness that Adam and Eve developed. But these are precisely the moral worries of the moderns who are armed with good understandings of science and tech.
Presenting the myths around dangerous robots in the context of Genesis paints a totally different picture from that presented by simply framing the Hebrew myth of the Golem as a stand-alone story of uncontrollable robots which just happens to form the strongest influence behind what seems to be an ill-grounded, primitive fear. It not only presents this robot-gone-bad narrative as a central influence firmly embedded in the history of Western culture, rather than merely a popular story. It not only embeds it in an account of the nature of humanity, of the place of humanity in the universe, made in God’s image and hence with the potential to have responsible control over the world, and hence with a positive potential for advancing science and technology. It does more.
For if we see the Genesis account of the Fall of man as foreshadowing of fears about robots, then Genesis gets the problem exactly right, for exactly the right reasons – it’s a worry about autonomy itself: what might robots do if we can’t control them fully? Will they adhere to the same value system as us? Will they decide to disobey us? What will our relationship with our creations be?
The modern scientific experts can tell us that these fears might now actually be realisable. We didn’t need them to tell us that the fears were in principle well founded. Far from quaking at a Hebrew scare story which whipped up primitive fears in the general public that need to be allayed, we can thank the Hebrew account of Genesis for forewarning us thousands of years ago, in a rich and meaningful story about our place in the world and about our nature, from which we can also infer that creating robots with the ability to judge and to act may be worthwhile. But it can go very, very wrong. This is precisely the central ethical question of AI today. If the general public have concerns about this expressed through myth, these concerns are not irrational. They need to be addressed.
Paula Boddington
We would like to thank the Future of Life Institute for sponsoring our work.