Myth and the EU study on Civil Law Rules in Robotics

The European Parliament has recently produced a study ‘European Civil Law Rules in Robotics’. This continues work by the European Parliament’s Committee on Legal Affairs, such as its publication in May 2016 of a Draft Report with Recommendations to the Commission on Civil Law Rules on Robotics.

Such studies are of immense relevance and interest to any project of drawing up codes of ethics for AI. Any code of ethics operating within the European Union would need to be cognisant of relevant laws. This is not simply in order to comply with them: as codes of ethics are developed, it's vital to be aware of any harmonies or clashes with other influential statements on robotics and AI. This includes considering the surrounding policy context and any preambles or accompanying texts, which can give insights into the guiding motivations and underlying values informing the regulations. This can be informative for codes both inside and outside the EU.

In working out how best to think about developments in AI, or indeed any fast-changing technology, science fiction can be a useful tool. Where situations are not yet with us, we need to use our imagination. Science fiction almost invariably has some moral content, whether explicit or implicit. (Indeed, it's virtually impossible to write any half-plausible and interesting story that does not have some normative elements.) Science fiction frequently plays out scenarios about how we might or might not relate to robots and to very advanced computers. These can be extremely useful in helping us to ponder what our values are and how we might react to new developments.

But another rich source of imagination and values is found in stories we already have. Much work that discusses AI refers to ancient stories and myths about robots created from mere matter, as well as more modern literature. Stories often referred to in such contexts include Frankenstein's Monster, the Golem, Pygmalion, the Tin Man from the Wizard of Oz, the maidens made of gold who appear in the Iliad, and the bronze giant Talos of Greek myth. Spirit-powered robots defended relics of the Buddha, according to Indian legend.

But in referring to any such stories, we can draw various lessons. (It’s not wise to conclude from the story of Sleeping Beauty that it’s a good idea to marry a man you’d never met before who’s broken into your bedroom and awoken you with a kiss. Other layers of interpretation in such fairy stories, though, are a rich source of meaning.) So, in looking at the preamble and surrounding text of policy documents which refer to such ancient or more modern stories, it’s useful to take a look at how these stories are used and what lessons are drawn.

The document European Civil Law Rules in Robotics refers to cultural ideas about robots in its introductory texts, as part of a narrative justifying its approach and, in particular, grounding it in a response to what are seen to be European concerns. The relevant passage comes in the document's section 2, “General Considerations on Robots: The notion of the robot, its implications and the question of consciousness”, where the discussion is used to explain reservations about using the term ‘smart robot’ in a set of regulations designed for the European context, because of the likely public reaction:

1°/ Western fear of the robot

The common cultural heritage which feeds the Western collective conscience could mean that the idea of the “smart robot” prompts a negative reaction, hampering the development of the robotics industry. The influence that ancient Greek or Hebrew tales, particularly the myth of Golem, have had on society must not be underestimated. The romantic works of the 19th and 20th centuries have often reworked these tales in order to illustrate the risks involved should humanity lose control over its own creations. Today, western fear of creations, in its more modern form projected against robots and artificial intelligence, could be exacerbated by a lack of understanding among European citizens and even fuelled by some media outlets.

This fear of robots is not felt in the Far East. After the Second World War, Japan saw the birth of Astro Boy, a manga series featuring a robotic creature, which instilled society with a very positive image of robots. Furthermore, according to the Japanese Shintoist vision of robots, they, like everything else, have a soul. Unlike in the West, robots are not seen as dangerous creations and naturally belong among humans. That is why South Korea, for example, thought very early on about developing legal and ethical considerations on robots, ultimately enshrining the “smart robot” in a law, amended most recently in 2016, entitled “Intelligent robots development and distribution promotion act”. This defines the smart robot as a mechanical device which perceives its external environment, evaluates situations and moves by itself (Article 2(1))9. The motion for a resolution is therefore rooted in a similar scientific context.

Commentary on this passage:

The passage opens with the suggestion that the collective consciousness of the West shows itself in ancient fears about losing control of robots, which must be addressed in order not to hamper the robotics industry. The wording seems to imply that this fear is unfounded or poorly grounded. This suggests a somewhat cavalier attitude to such myths, as if they indicate something irrational and to be combatted. While a culture’s myths may indeed show things which cannot be reduced entirely to rational analysis, the very fact that they have survived for so long suggests that myths and stories may be indicating something important. Indeed, the document goes on to validate these Western fears, but tellingly, does so by referring not to myth or culture but by heeding the recent warnings about AI of four prominent scientists and technologists: Stephen Hawking, Elon Musk, Bill Gates and Bill Joy, citing these experts “now that the object of fear is no longer trapped in myths or fiction, but rooted in reality”.

This is a telling way of presenting these myths and stories of our past. It’s as if the lessons we need to learn from these myths and stories are merely some uncanny, lucky prediction of the scientific future, and now that we have the science, and now that we have the technologists and scientists to warn us, we can at last realise these warnings were, by fluke, right after all. Yet the appeasement of the general European public is framed in terms of addressing and combatting the cultural sway of the ancient myths. So … are the pre-scientific-myth-fuelled fears of the “great unwashed” general public right by some spooky coincidence? Is scientific reason, endorsed by experts, by happenstance now simply marching in parallel time to unreason?

One reason these questions matter is that it makes a difference whether this document is attempting to accommodate reasonable public concerns, or is pandering to an irrational populace. One might develop policy, and in particular public information, quite differently depending on which attitude is taken. Indeed, it is somewhat unclear what stance the document takes on this point.

Something of note is that in discussing the lack of fear of robots in the Far East, the document also grounds the Japanese stories about robots in the underlying metaphysical and normative framework of Shintoism. This makes sense of the positive Japanese response to robots. This sense-making narrative is absent in the account of the Western myths to which the document refers. (Note then, that the EU document subliminally suggests that positive myths of robots are grounded in something more substantive, whereas negative myths are not.)

Is there no sense-making Western narrative available? Note of course that ‘the West’ is not a monolithic idea – there are robot stories in various Western traditions including Norse as well as the Greek and Jewish traditions referred to in the document. But note too at this juncture, that the EU document highlights the Hebrew myth of the Golem as being particularly influential in Western society and on what the document calls ‘western fears of creations’. Indeed, it's the only Western robot story actually named.

I had to read this phrase ‘western fear of creations’ several times to make sure I’d understood it. For the idea that it is the West which is afraid of creation, and that a particularly strong influence on this fear stems from the Jews, butts up against the flourishing of science, technology and invention in the West, which has been so profoundly influenced by the Judeo-Christian tradition; not to mention the high density of tech start-up firms in Tel Aviv, for example. By ‘fear of creations’ the document is presumably referring to fear of autonomous creations which escape the control of their creator, not fear of artefacts per se.

But whilst it cites underlying frameworks behind Eastern robot stories, the EU document's account of Western responses to robots misses out a profoundly influential Hebrew narrative which surely lends heavy cultural salience to Western myths about robots. I refer of course to the story in Genesis of the creation of man and of the Fall. For the Fall shows how, in disobeying their Creator, Adam and Eve gained the ability to see that they were naked, and acquired knowledge of good and evil. And we all know what happened next, armed with that dangerous knowledge: thousands of years of often sorry human history, with bright spots here and there. Mankind was given the power to act and to think, but the freedom which Adam and Eve were given to act independently of their Creator also led to disobedience and disaster.

But these are precisely the fears that are expressed about AI and robots now. It's not fear of creation in the sense of invention and artefact, or control over the world per se, since the Genesis story gives mankind dominion over the earth – it's fear of a creation which escapes the control of its creator. It's fear of a creation which, left in a safe space unobserved, gets into mischief. It's worries about how we, the creators, might treat robots if they were to develop consciousness and the ethical awareness that Adam and Eve developed. And these are precisely the moral worries of moderns who are armed with good understandings of science and tech.

Presenting the myths around dangerous robots in the context of Genesis paints a totally different picture from that presented by simply framing the Hebrew myth of the Golem as a stand-alone story of uncontrollable robots which just happens to form the strongest influence behind what seems to be an ill-grounded, primitive fear. It not only presents this robot-gone-bad narrative as a central influence firmly embedded in the history of Western culture, rather than merely a popular story. It not only embeds it in an account of the nature of humanity, of humanity's place in the universe, made in God's image, and hence with the potential to have responsible control over the world, and hence with a positive potential for advancing science and technology. It does more.

For if we see the Genesis account of the Fall of man as foreshadowing of fears about robots, then Genesis gets the problem exactly right, for exactly the right reasons – it’s a worry about autonomy itself: what might robots do if we can’t control them fully? Will they adhere to the same value system as us? Will they decide to disobey us? What will our relationship with our creations be?

The modern scientific experts can tell us that these fears might now actually be realisable. We didn’t need them to tell us that the fears were in principle well-founded. Far from quaking at a Hebrew scare story which whipped up primitive fears in the general public that need to be allayed, we can thank the Hebrew account of Genesis for pre-warning us thousands and thousands of years ago, in a rich and meaningful story about our place in the world and about our nature, from which we can infer also that creating robots with the ability to judge and to act may be worthwhile. But it can go very, very wrong. This is precisely the central ethical question of AI today. If the general public have concerns about this expressed through myth, these concerns are not irrational. They need to be addressed.

Paula Boddington

We would like to thank the Future of Life Institute for sponsoring our work.

Case study: Robot camel jockeys. Yes, really.



Why this case study?

Robots are now widely used in place of child camel jockeys. The robots used in camel racing are so simple that they scarcely count as AI. Indeed, developments since their inception have led to many robots becoming still simpler, with some custom-made out of electric drills. Nonetheless, the role that robot camel jockeys are playing strikes at the essence of one of the main functions of AI – replacing human labour. Hence, it seems a good case study of what might happen when human labour is replaced with machine labour, with the proviso that all case studies are, by their nature, partial accounts of the array of ethical issues facing us in the many-faceted world of AI. The discussion here, likewise, can only touch on some of the multiple issues raised.

I chose this example also because this use of robots has been, at least within much of the tech literature, hailed as an example of the beneficial application of robotics (1, 2). But of course, on closer inspection, things are more complex. The challenges of this particular case study are many. These include the difficulty of monitoring precisely what the impact of the robot camel jockeys has been, as well as complexities introduced by political, cultural and religious differences between the countries where the firms developing these robots are located, and the countries where the robots are commissioned and used. There are issues in international law here, as well as individual and professional ethics to consider. So, although it might be an example of a very simple form of robotics, it's an example of a highly complex ethical issue.

As such, there is much to be teased out and considered. Here are some thoughts and some findings based on the research and commentary I've been able to find so far. I've included pretty much all the material I have been able to gather online about this topic.


The development and use of robot camel jockeys has been framed in terms of an ongoing historical narrative wherein the use of technology is hailed as the rescuer of the worst-off in the labour force, freeing them from arduous and possibly dehumanising work, as illustrated here:  ‘the standard modernist gambit of taking a crappy job and making it more bearable through mechanization will be transformed into a 21st century policy of taking appalling and involuntary servitude and eliminating it through high tech.’ (3) The use of the word ‘servitude’ here is interesting, for it includes both slavery, and working conditions close to slavery. The replacement of child camel jockeys with robots has been hailed as an ethical situation where ‘everyone will win a little’ since ‘every robot camel jockey hopping along on its improbable mount means one Sudanese boy freed from slavery and sent home’ (3); the robots are even described in one headline as ‘the robots that fight for human rights’ (2). This is held up as an example of how ‘there are some issues that can really be solved with innovation and technology’ (Lara Hussein, UNICEF) (4).

Camel racing is an old tradition in the Gulf States, including Saudi Arabia, Qatar, Kuwait, and the UAE (5). The use of children was common owing to their small size and light weight, which meant the camels could run faster. However, it is well established that children were trafficked – sold or kidnapped – from countries such as Sudan, Eritrea, Mauritania, India, Pakistan and Bangladesh, sometimes from as young as six months old (6). These children were then subjected to further extreme maltreatment, being allegedly deprived of food and sleep to remain small (3, 7), and according to reports, frequently the subject of sexual abuse (8, 9). The racing itself was onerous, frightening and often dangerous (6). There are reports of children injured or killed while racing, killed by fellow jockeys, and by trainers (6, 8).

Probably largely owing to international pressure, and as prohibited by the UN Convention on the Rights of the Child (7), the use of child camel jockeys was banned across the Gulf at varying dates starting with the UAE in 2005 (3, 5, 10). However, laws prohibiting child labour which had been in place for several years in various countries were reportedly routinely flouted (8). Anti-Slavery International commented in 2006, ‘The UAE government is proposing using robots to replace child camel jockeys. This seems a complicated alternative to implementing fair labour conditions for adult jockeys. Furthermore, the use of robot jockeys in races will not preclude the need for people to exercise, feed and care for the camels in camps.’ (7).

Rather than replacing child jockeys with adult jockeys working in reasonable conditions, replacement of children with robots occurred. Companies in Switzerland (3, 5) and Japan produced robots. The camels would not run without riders. Therefore the robots had to be made to look sufficiently human to encourage the camels to run; steps were taken to ensure that they were not so human that they violated Islamic codes about representational art (3). The successful adoption of the robots has been attributed to the fact that they were seen as ‘cool’ and high tech: ‘The motto of the day is clear: “Pimp my camel.” They laugh at the Swiss engineers for voicing technical reservations or concerns about the camels’ welfare,’ although some locals expressed dismay that robot jockeys lowered the value and status of the racing camels (11). Since their first adoption, there has been a trade in custom robots made from DeWalt drills, which are much cheaper and more reliable (2), and at 10 kg even lighter than the children (12). Development of robot jockeys has also been encouraged as a ‘symbol of the future’ (11). A robot jockey development project was a prize-winner in the 2014-15 Khalifa Fund Technopreneur Competition.

The camel racing may take place in considerable secrecy, especially since in some areas it may involve illegal gambling (2). There often seem to be few spectators, with races watched on TV in certain countries, although not televised in others (3). Some reports claim that the law is openly flouted, with even televised races showing child jockeys. Reporters have been banned from filming when children were involved, and the media in the countries concerned are generally tightly controlled (6). Reports claim that robots now deliver electric shocks to the camels to increase competition (13, 14), and that malfunctioning robots cause the camels injury (15), although some claim the reverse, that camels suffer fewer injuries (16). All this points to the difficulty of assessing the impact of the use of robot jockeys.

Reports about the impact of robot jockeys vary wildly. Vogue even has a stylish and glamourised photo shoot of camel racing with robot jockeys in Dubai (17), whereas reports from trafficking organisations and from inside Pakistan paint a very different picture. Research has found that the number of children being admitted to hospital with injuries consistent with falling from camels has decreased since the introduction of robot jockeys (18). However, it may be that injured children are being kept away from hospital because of the new illegality (8). Children freed from racing are reported to have serious and lasting injuries years later, and to have severe educational, social and emotional problems; some had never seen a woman (19).

Since child camel jockeys were outlawed, however, the human rights organisation Ansar Burney Trust has recently found that 3,000 child camel jockeys are simply missing (20). It had previously been found that children under the age of 10 were apparently failing to be repatriated (7). Despite optimism about the benefits of robot jockeys, it appears that the trafficking of child jockeys may continue (2, 7, 9, 19). Human rights organisations have attempted to repatriate the children involved, but few know who their families are, and it has rarely been possible to reunite them with their families, some of whom sold them in the first place; homes have been set up for them, e.g. in Pakistan (8). However, other reports claim that the vast majority have been returned to their families (concerning children in the UAE – UNICEF in Dubai) (21); perhaps the disparity may be explained by regional differences, by different reporting methods, or by differences in observations made from ‘host’ and home countries. Families who had sold their children have been threatened with action if they attempt to resell them, and there are accounts of resentment at their return (22).
It has been reported that Qatar simply shipped boys back to Sudan, where they face possible death, and that other boys stayed where they were, working in other occupations or unemployed (3). Those working to assist the children have reportedly received threats (8). It has been opined that since Pakistan and Bangladesh are so dependent on income from their citizens who work in the Gulf States, little if any pressure is put on the host countries by the source countries regarding the use of child labour (pers. comm.), although reportedly India smashed several child-selling gangs in the early 1990s (6). Anti-Slavery International reports that receiving countries do little to control the entry of children (6).


It’s clear that the notion that replacing onerous labour with robots produces a morally good outcome simplifies the issues. These are just a couple of preliminary remarks on a highly complex case.

Consider another situation: replacing human mine clearers with robots. Clearing mines is unfortunately necessary, but very dangerous work. Where possible, replacing a human being who might get killed with a robot which might merely get blown up is a moral no-brainer. Robots are expendable. The kind of robotics that would be involved in this is so far from any possibility of consciousness that we can leave aside the question of whether we might attribute moral agency to mine-clearing robots. Moreover, unlike in the camel jockeys’ case, although there can of course be moral issues regarding recruitment into the armed services, in considering the impact on human bomb disposal experts of the use of robotics, we are almost certainly not considering a scenario with trafficked minors whose fate following the implementation of robots is uncertain and possibly perilous.

But racing camels, no matter that it’s an embedded part of some cultures, can hardly be construed on the same plane of necessity. The question raised by some commentators, ‘why not just use adult jockeys?’, highlights an apparent reluctance by some camel racers to comply with international legislation. Moreover, we are not talking about onerous compliance with baffling or seemingly pointless bureaucracy. We are talking about compliance with laws against child labour, in the context of highly dangerous work, for the sake of a sport. The implication of much of the discussion is that the main impetus for compliance with anti-slavery legislation was the use of robot camel jockeys, rather than a change of heart about the impacts on the child jockeys.

I commented above how the description of the child camel jockeys’ work as ‘servitude’ elides the difference between slavery and working conditions that are close to slavery. In some reports, the notion of slavery is used openly; in others, this issue is skated around or bypassed. Orienting the ethical discussion to the narrative of slavery, rather than to the narrative of technology as a force that throughout history can make incremental improvements in people’s lives, may make a difference to how the question is tackled. Here is a thumbnail sketch of a momentous ethical debate: If something as morally offensive as child slavery is occurring, is it best to do whatever one can to improve the situation? Or is it best to keep one’s hands clean and refuse to have anything to do with those who are responsible? Both cases can be argued to have some merit.

Claims of moral and cultural relativism might conceivably rear their heads in this instance, to bypass the issue of responsibility for involvement with camel racing, (notwithstanding the existence of ratified international legislation accepted by the governments of the states in question); but really? In considering the situation, although conditions may well have varied in different countries and with different owners, one would have to err on the safe side regarding reports of the treatment of the child jockeys. Selling children, food deprivation, emotional deprivation, sleep deprivation, total lack of education, participation in a dangerous sport – all okay because of moral relativism? Not worrying about the fate of these children, that’s just what goes on elsewhere – really? You need to find yourself another philosopher if you want a debate that entertains that argument seriously.

What of individual or corporate moral responsibility in regard to manufacturing and supplying robot camel jockeys? A response to the intransigence of camel racers who were not complying with anti-slavery legislation could be force, or alternatively, attempts to produce a shift in moral consciousness. The latter is likely to be harder, and neither is realistically within the power of a few individuals or small companies. Robotics manufacturers are small beer in this context. It’s not at all obvious that robotics manufacturers who step into the fray have any responsibility to try to produce a change in the moral outlook of the camel racers themselves, since how could they? On the other hand, one might wonder if by working with camel racers, one becomes to an extent complicit in the morally problematic treatment of the child camel jockeys. One might even wonder whether, by their involvement, robotics manufacturers had any responsibility to monitor the fate of the child camel jockeys. Alternatively, it’s plausible to consider moral responsibility as distributed among different parties depending upon their location and their powers, and to consider that it’s the job of other organisations and parties to track the fate of the former child jockeys and to assess whether child servitude persists in other forms. This is a very large and very difficult task.

But at the least, in this complex situation, it would be best to exercise caution in claiming unproblematic success for the implementation of robot camel jockeys. It should be apparent how hard it would be for any professional code of ethics to make clear statements about engaging in such work. The example also shows how presenting the issue in different ways, and looking at it from different angles, can produce varied ethical reactions.

It also illustrates a central lesson for uses of AI which replace human labour – take a very close look at what happens to the human beings who are thereby displaced.

And consider the more general question of how we go about considering ethics and technology. This case study illustrates something interesting – how the technology, even something as simple as this, can get glamourised – the Vogue photo shoot, and the phrase ‘pimp my camel’, stand out here. Assessing the glamour of robotics here is complex, and messy. This is a recurring theme in technology in general (the shiny, sexy sort of tech at least, and robotics and AI in many forms qualifies here). The seductive lure of technology can perhaps lead us to overestimate what moral impact it has on a situation, possibly misconstruing the complexities and being blinded to other elements, which we see in the rush of some to praise robotics for dashing to the rescue of the child jockeys. On the other hand, the seductive lure of technology might just have done the trick of persuading some camel racers to replace children with machines.

This morality tale echoes Pinocchio in reverse (23). Pinocchio was a puppet whose father longed for him to become a real boy; Pinocchio himself of course wished for this too. But in some regions of the world of camel racing, real boys have been reduced to puppets. In the case of the use of robot camel jockeys, the equation of a child with a mere expendable thing pragmatically works to advantage, where a mere thing – the robot – is considered to be better than a child. In Pinocchio’s story, it was the development of moral character that did the trick, and Pinocchio became a real boy when he sacrificed his safety to rescue his father; in the Disney version, he has to prove himself to be brave, truthful and unselfish. Whilst on occasion and pragmatically, a quick technological fix might be the least worst option, if we exchange such elements of moral growth for a technological fix too often, in the long run the world might not be much better off.

The way in which AI typically acts to replace or supplement human labour is going to present immensely complex challenges for ethics. Embedding these considerations within professional codes of ethics will be far from straightforward.

Paula Boddington

We would like to thank the Future of Life Institute for sponsoring our work.

  1. Moor JH. The nature, importance, and difficulty of machine ethics. Intelligent Systems, IEEE. 2006;21(4):18-21.
  2. Brook P. The DIY robots that ride camels and fight for human rights. Wired. 2015(03.03.15).
  3. Lewis J. Robots of Arabia. Wired. 2005(11.1.05).
  4. Rasnai A. Dubai’s Camel Races Embrace Robot Jockeys. The Daily Beast. 2013.
  5. Pejman P. Mideast: rehabilitation for retired child camel jockeys gets top priority. IPS News; 2005 [Available from:
  6. Gluckman R. Death in Dubai. 1992.
  7. Anti-Slavery International. Information on the United Arab Emirates (UAE) Compliance with ILO Convention No. 182 on the Worst Forms of Child Labour (ratified in 2001): Trafficking of children for camel jockeys. 2006.
  8. Williamson L. Child camel jockeys find hope BBC News; 2005/02/04 [Available from:
  9. Lillie M. Camel Jockeys in the UAE: Human Trafficking Search; 2013 [Available from:
  10. Knight W. Robot camel jockeys take to the track. New Scientist. 2005(21 July).
  11. Schmundt H. Camel Races: Robotic Jockeys Revolutionize Desert Races. Spiegel Online International. 2005 18/07/2005.
  12. Watson I. Robot Jockeys Give Camel Racing a Modern Twist: NPR; 2007 [Available from:
  13. Spooky. The camel-riding robot jockeys of Arabia. 2011.
  14. Wafa I. “Shock jockey” sellers arrested. The National UAE. 2011 20/01/2011.
  15. Nasir Z. Of camel jockeys and camels. The Nation. 2013 08/07/2013.
  16. Nowais SA. UAE camel trainers prefer robot jockeys. The National UAE. 2015 13/06/2015.
  17. Shaheen S. Meet the Camel-Riding Robot Jockeys in Dubai. Vogue.
  18. Abu-Zidan FM, Hefny AF, Branicki F. Prevention of Child Camel Jockey Injuries: A Success Story From the United Arab Emirates. Clinical Journal of Sport Medicine. 2012;22(6):467-71.
  19. Gishkori. Camel jockeys: Popular Arab sport costs Pakistani children their sanity. The Express Tribune Pakistan. 2013 8/05/2013.
  20. Ansar Burney Trust. Almost three thousand under age camel jockeys missing; 2013 [Available from:
  21. United Arab Emirates: Camel racing continues to be child free: IRIN Humanitarian news; 2006 [updated 24/12/2006. Available from:
  22. Pakistan; Former child camel jockeys and the struggle to return home: IRIN Humanitarian news; 2007 [updated 3/01/2007. Available from:
  23. Jordan B. Peterson provides an interesting and illuminating discussion of the moral significance of the Pinocchio story on which I drew in considering these questions and which can be found in a lecture for his course Maps of Meaning which can be seen at:
