The Distinctiveness of AI Ethics, and Implications for Ethical Codes

Paula Boddington

Paper presented at the workshop Ethics for Artificial Intelligence, July 9th 2016, IJCAI-16, New York.

Abstract

If workable codes or guidance on ethics and AI are to be produced, the distinctive ethical challenges of AI need to be faced head on. The purpose of this paper is to identify several major areas where AI raises distinctive or acute ethical challenges, with a view to beginning an analysis of challenges and opportunities for progress in these areas. Seven areas are described: hype in AI and its unfortunate consequences; the interface between technology and wider social or human factors; uncertainty about future technological development and its impact on society; underlying philosophical questions; professional vulnerability in the face of emerging technology; an additional layer of complexity in ethical codes, concerning machine behaviour; and the extension, enhancement, or replacement of core elements of human agency.

1              Introduction

This paper arises from work undertaken as part of an FLI project entitled ‘Towards a code of ethics for AI’. The purpose of this project is not to produce a code of ethics as such, but to clarify and analyse the challenges and purposes of producing such codes.

This presentation concerns some potential challenges to producing workable, transparent codes, guidance, or regulation in the field of AI. In this endeavour, we should not presuppose that AI raises any more serious ethical problems than other areas of technology, nor that its problems are completely unique. In many respects, indeed, AI raises issues which have broad similarities with other areas; but the focus here will be on particular respects in which AI does raise issues that are distinctive or unique. Insofar as these bear similarities to other areas, there can be mutual learning; but insofar as they are AI-specific, particularly close attention must be given to unravelling them.

Our work has looked at several areas where these issues arise, which will be considered in turn in what follows:

  1. Hype in AI, and its unfortunate consequences;
  2. The interface between technology and wider social or human factors;
  3. Uncertainty about future technological development and its impact on society;
  4. Underlying philosophical questions;
  5. Professional vulnerability in the face of emerging technology;
  6. An additional layer of complexity in ethical codes, concerning machine behaviour;
  7. The extension, enhancement, or replacement of core elements of human agency.

2              Hype in AI, and its unfortunate consequences

There is a lot of hype concerning many technologies, and in particular their ethical implications, as we have seen with genomics and nanotechnologies [Caulfield, 2012], for example. In the case of AI, this hype is now virtually at fever pitch [Hawking et al., 2014], with some prominent individuals recently claiming that AI presents ‘an existential threat’ to humanity. Whether or not such concerns are overblown, this very hype will itself have impacts.

2.1          Angels and bad guys

One impact, which can be quite considerable, is that fear of being branded one of the ‘bad guys’ may lead to individual or collective attempts to promote oneself as on the side of the angels. Such positioning might be for intrinsic or strategic reasons (e.g. the EPSRC meeting on the Principles of Robotics explicitly aimed to avoid the sort of aversion which the UK public had shown to GM [Bryson, 2012]). But at its worst, appearing to be ethical might trump actually being ethical.

2.2          Virtue signalling and exclusion of the under-resourced

Secondly, this positioning might have an effect upon the very content of the codes, for example by including largely vacuous material that is little more than ‘virtue signalling’ [Bartholomew, 2015], with empty displays of ethical probity (‘we are passionate about the future of the human race’, ‘we believe that AI should be used for the benefit of the whole of humankind’, etc.). More tangibly, attempts have been made to urge ethical behaviours on a group which would exclude both those who disagree and less well-resourced members: it is easier to be ethical the richer you are. This can be seriously prejudicial against the least privileged, unless special provision is made or attention paid to the issue [Boddington, 2011, ch 10].

Thus otherwise laudable policy can end up inadvertently excluding the actors who are least well resourced, especially if it focuses too much on the most ‘visible’ ethical questions, without considering them in context. Data-sharing policies in genomics provide one clear example here, biasing procedures against those who are unable – perhaps for resource reasons – to reciprocate.

For an example from AI, the IIIM’s Ethics Policy for Peaceful R&D eschews military funding, and will only collaborate with civilian researchers if they have received no more than 15% of their funding from military sources in the last 5 years [IIIM]. The preamble to this policy makes explicit political statements, including that military funding is commonly defended by reference to an increased ‘“terrorist threat”’ (with scare quotes that clearly imply scepticism); it also endorses the ‘brave’ actions of Edward Snowden. Researchers who are not in a position to find alternative sources of funding may therefore be excluded from potentially beneficial collaboration. (So far, this is not to comment on the rights or wrongs of the IIIM’s stance, merely to use it as an example of how policy may lead to differential impact for different actors within the world of AI research.)

2.3          Methodological impacts – alleged novelty and comprehensiveness

Thirdly, hyping up the uniqueness of the issues in AI will also have important effects on the methodology of how ethical questions are addressed. For if an ethical question is seen to be (in part, at least) an old question applied in a new context, then one can argue with reference to previous applications; but if a question is presented as being new and unique, then it will be approached very differently. Such framing can be vital, because example choice and description – including both language and institutional context – significantly affect how ethical questions are understood and developed [Chambers, 1999; Fischer et al 1993; Rein et al 1993; Taburrini 2015].

Take, for example, the framing remark: ‘everything that civilisation has to offer is a product of human intelligence’ [Hawking et al., 2014]. This misleadingly suggests that everything in society derives from design and intelligence, and may lead to a hubristic discounting of serendipity, circumstance and pure accident. It can also divert attention from how the ethical issues raised by any technology are a complex result of many factors (see below), misleadingly focusing on aspects of a situation that are designed into it, at the expense of those that are contingent or emergent.

2.4          Viewing the landscape through the lens of the hyped technology

Another risk of hype is that the consequent emphasis on the new technology will distort perceptions of the moral context, interpreting it in terms of that (perhaps problematic) technology at the expense of others. As an example from biotechnology, the UK’s HFEA recently sanctioned mitochondrial transfer techniques (so-called ‘3 person IVF’) to combat transmission of mitochondrial disease from mother to child. The HFEA praised the technique as ‘giving the chance of having a healthy child’ [HFEA 2013]. But these women could already have children through surrogacy, or gestate a child conceived with a donated egg. So the description of the technique’s advantages tacitly presupposed a particular notion of what it is to have ‘your own child’ (viz. one with maternal nuclear DNA), thus eclipsing other, older reproductive techniques that the same organisation also supports and regulates [HFEA]. Though less morally serious, a similar pattern can be seen in the progressive development of information technology, for example in respect of demands made for administrative and ‘audit’ information: as new possibilities have become practicable (e.g. the generation of huge amounts of printed information), these possibilities have often come to be seen as absolutely necessary, usually without careful consideration of the costs and benefits. Thus the new technology shapes our vision, without any careful prior judgement that it is appropriate.

3              The interface between technology and wider social or human factors

Related to the issue of hype is the risk that excitement over the potential of AI and its technological possibilities will lead us to overlook complex social and institutional factors, which may affect how the relevant questions are framed, understood and addressed. These factors are also crucial to the enforcement or influence of any code of ethics; many such codes remain unread or ignored [McLean, Elkind 2004]. The institutional or political driver for the production of such codes is likely to be more concrete than the abstract encouragement of ethical behaviour: for example, a wish to avoid public backlashes or lawsuits, to encourage funding, to signal the virtue of the UK in ethical regulation (again perhaps to attract funding), and generally to be seen as virtuous (which is not of course quite the same as the desire to be virtuous). To take a balanced view of these things – and to avoid confusing ethics with public relations – the specific institutional, political, financial and economic context of the creation and use of AI must be properly considered.

4              Uncertainty about the future of the technology and its implications for society

The future of any rapidly developing technology is hard to predict, and not only for technological reasons (since economic, political, social, and other factors can often intervene). Appropriate ethical responses to new developments are also impossible to anticipate, since attitudes may evolve with the technology, as we have seen in recent debates about privacy, where views vary greatly depending on the context, in ways that could not have been foreseen [Nissenbaum, 2010, 2004]. In the case of AI, these points are especially pertinent, given how far AI could potentially impact upon how we think and relate to each other, and upon vital elements of society such as modes of production.

One popular response to this evident impossibility of producing future-proofed codes of ethics in areas of rapid development or contextual uncertainty is to stress the importance of equipping researchers and professionals with the ethical skills to make nuanced decisions in context – that is, to appeal to virtue ethics [AoIR 2012, Atkinson 2009]. But virtue ethics is especially badly placed to provide any such panacea when dealing with technology which is likely to bring broad-ranging and unpredictable change in central areas of human life. For the dominant model of virtue ethics – inherited from Aristotle – is predicated on a stable and slow-changing society, where the young can learn virtues from the older and wiser who possess phronesis, or practical wisdom. This model is hopeless when the need is to develop a new model of practical wisdom, to cope with future realities, many of which probably cannot yet even be conceived, let alone anticipated.

5              Background philosophical questions

Fundamental questions, such as what it means to be human, or what ultimate values we should be pursuing, can readily arise in areas of rapidly developing technology. In genomics and genetics, for example, questions about the ‘essence’ of humanity can appear in debates about what is ‘natural’ or ‘unnatural’, what genetic interventions count as curing disease (as opposed to creating a ‘designer baby’), how the future of humanity might be changed by germline alterations, whether we in fact face a ‘post-human’ future, or whether the human race might diverge into two or more species. A proper approach would involve teasing out the different understandings and assumptions involved, how exactly these relate to the practical ethical questions, and considering how far the deep philosophical issues can be resolved sufficiently, or bypassed, to allow those practical questions to be addressed. But too often the fundamental issues – far from provoking correspondingly deep and careful thinking – are overdramatised, leading to idealised or rarefied notions of what is ‘natural’ or ‘really’ human, and thus obscuring the issues rather than clarifying them.

Some of these questions arise also in AI. Will interaction between humans and intelligent machines change our natures in some significant ways, and might the future bring hardware interactions between humans and machines – another possible aspect, perhaps, of a post-human future? Will we find ourselves no longer the ‘crown of creation’ but subordinate to superintelligent machines? A distinctive hallmark of how these questions arise in AI is that they focus on mental aspects of what it is to be human – concerning agency, decision and choice – including for instance questions about responsibility and accountability, as well as what counts as human intelligence. Often such debates focus on extrapolated and imagined future agency within AI, commonly contrasted simplistically with idealised and uncontextualised notions of human agency.

As one example, in the context of Lethal Autonomous Weapons, it is sometimes argued that these violate the human right to be killed with dignity by a soldier who is acting in full moral consciousness of the fact that they are taking a human life [Champagne, Tonkens 2015; Sparrow 2007]. But, putting aside the highly debatable question of whether there is any dignity at all in being killed in war, this argument implausibly idealises the actions of the average soldier. Of course, each human being will always retain their own moral responsibility; but qua soldier, an agent will be following a chain of military command while on the battlefield. Excepting special circumstances where a soldier has reason to consider that something has gone seriously awry with the chain of command and that the rules of war are being broken, obedience is required and reasonably expected – so the soldier’s time for full and conscious moral reflection was at enlistment: by the time of battle, that opportunity has gone. Moreover, it is morally cruel to expect our best young soldiers – when having to kill in the heat of dangerous battle for our benefit – to pause to reflect in full consciousness ‘this is a human being that I alone am responsible for killing’. In normal circumstances, such responsibility lies more with military command and the politicians who called for war in the first place. Military robots, then, may be being held up against a faulty idealisation.

6              Professional vulnerability in the face of emerging technology is magnified with AI

One motivating reason for the existence of professional codes of conduct is the vulnerability of others – clients, and the general public – relative to professionals. It is assumed that professionals have capacities and knowledge which others lack, or possess to a much lesser degree, and that therefore professionals must use their skills and knowledge wisely and for the general good. But one of the major issues flagged in relation to AI is the fear that even the professionals might be relatively vulnerable themselves – that AI will become too complex to understand, and possibly to control – especially given the very fast pace at which it is able to operate [Bostrom 2014]. Hence, any ethical codes for AI need to take account of debates about the possibly limited capacity of AI researchers to understand, anticipate and control their very products. This does not make AI unique per se – there have also, for example, been worries about biotechnology ‘escaping the lab’ [Koepsell 2009] – but the extent of these fears with respect to AI is probably greater than with any other technology. This issue of control is intimately linked to the following issue, and the two together imply that producing codes of ethics for AI will be particularly challenging, whatever one thinks of the question of how much autonomy AI has or will develop.

7              An additional layer of complexity is required in ethical codes for AI, concerning machine behaviour

Codes of ethics for the professions deal with the behaviour of professionals, and the consequences of the products or services they produce. But in AI, a special feature is that there is a layer of machine behaviour which also needs regulatory attention (and this is true regardless of debates about the genuineness of machine ‘intelligence’). The fact that AI can act in ways unforeseen by its designer raises issues about the ‘autonomy’, ‘responsibility’ and decision-making capacities of AI, and hence of the relation to human autonomy, responsibility, decision and control. If we try to address these issues at a very general level, we risk falling into vagueness and vacuity. So as a general principle, it is likely to be far more productive to attempt to work through these sorts of problems within particular concrete settings of application. When this is done, complex and potentially obscure philosophical debates may even be avoided entirely (as we shall see below).

8              The extension, enhancement, or replacement of core elements of human agency

This is a hallmark of AI, and although all machines enhance human powers to some extent, AI has the potential to do so more effectively than any other technology to date. Three main points deserve stressing here:

8.1          Economic and social effects

First, the potential of AI raises questions beyond the remit of AI researchers per se, such as when considering the wider societal impacts of large shifts in wealth creation and modes of production. Such issues as whether wealth should be systematically redistributed to compensate for the job losses occasioned by AI [Brynjolfsson, McAfee, 2014; Frey, Osborne, 2013] cannot plausibly be dealt with by any realistic or achievable professional code of ethics. But these issues do serve to illustrate how simplistic it is to presume that calls for ‘beneficial AI’ can be interpreted and applied straightforwardly, even if they are agreed. Whether some application of AI counts as overall ‘beneficial’ might well depend on the economic and social structure within which it is embedded, far beyond the control of AI researchers.

8.2          AI within human systems

Secondly, in many cases AI will enter complex systems of human agency, making it necessary that codes of ethics deal adequately with this interface. Consider, for example, the use of robotics within a hospital ward. Such places are highly complex systems with lines of responsibility and accountability which are partly formalised and partly informal, and which often change in response to circumstance and policy. Often, also, the lines of responsibility, reporting, and duty may be fragmented, duplicated and overlapping. Robots placed within such a system – whether or not they themselves are considered as responsible agents – will certainly displace some human nodes of responsibility and accountability [Daykin, Clarke, 2000]. Thus careful analysis of the effects of robotic placement within such systems is vital, and codes of ethics need to recognise this complexity. In this light, the EPSRC’s Principles of Robotics – which simply describes robots as ‘products’, and explicitly ascribes responsible agency only to humans, never robots [Boden et al 2011] – falls badly short. A more nuanced approach is required, recognising that within a hospital system responsibilities are understood as only partly falling on the individual, for individuals are also part of an entire system, and computers can perfectly well be part of such a system. Thinking in this way might also help to bypass intractable questions about the nature of ‘genuine’ responsibility, and whether robots will ever achieve it. Such questions do not need to be solved if our aim – as in the NHS – is primarily to understand how errors occur with a view to correcting them.

8.3          Spontaneity versus forethought

Humans and AI systems make decisions – including decisions with a moral aspect – in different ways. Close attention to this might be useful in disentangling some moral worries about AI, and in clarifying relevant differences with respect to codes of ethics. For example, humans may be forgiven for some decisions in circumstances where higher standards would be expected of an AI (something like an idealised human standard, perhaps). Consider the concern that has been expressed regarding how an autonomous vehicle might react in a crash situation where a choice has to be made about who might be killed or injured, depending upon what actions are taken [Russell et al 2015]. Such discussion – though often voiced as an objection to autonomous vehicles – commonly takes for granted that a higher standard of decision-making can reasonably be expected of them. Thus, for example, a human being can often be forgiven for a suboptimal decision made under duress and in haste, and could also be forgiven for, say, trying to save themselves or their families in a dangerous situation. But we are much less likely to ‘forgive’ a decision conceived and programmed beforehand. This prejudice may or may not have any rational basis in deeper ethical foundations. But it impacts on the consideration of autonomous vehicles, which have to be designed ‘in the cold light of day’ to react appropriately in emergency situations. From one point of view, such careful advance consideration seems ethically superior to the ‘it’ll be alright on the night’ approach; but from another point of view, it is less, well, ‘human’. Such double-edged complexity abounds when considering replacing human labour and agency with AI. Again, this short discussion is far from complete, but serves to flag the importance of timing, planning, and the agentic source of decision-making when considering the ethics of AI.

9              Conclusions

There are numerous challenges in considering the ethical issues that AI raises, and further challenges in developing codes of ethics, guidance or other regulation for ethically robust AI. Although there is much distracting hype, which perhaps distorts some of the issues and their import, careful examination of the particular and distinct issues which AI presents can nonetheless help us to understand them further. Addressing these challenges will require input both from the AI community and from wider society.

Acknowledgments

Many thanks to Peter Millican for his careful commentary on the manuscript, as well as to Michael Wooldridge for discussion.

This work has been kindly supported with a grant from the Future of Life Institute.

References

AoIR. Ethical Decision-Making and Internet Research: Recommendations from the AoIR Ethics Working Committee (Version 2.0). 2012.

Atkinson P. Ethics and ethnography. Twenty-First Century Society. 2009;4(1):17-30.

Bartholomew J. Hating the Daily Mail is a substitute for doing good. The Spectator, April 18th 2015.

Boddington P. Ethical Challenges in Genomics Research: a guide to understanding ethics in context. Heidelberg: Springer; 2011.

Boden M, Bryson J, Caldwell D, Dautenhahn K, Edwards L, Kember S, et al. Principles of Robotics. Swindon, UK: Engineering and Physical Sciences Research Council (EPSRC); 2011.

Bostrom N. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press; 2014.

Brynjolfsson E, McAfee A. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. New York: W. W. Norton & Company; 2014.

Bryson J. The making of the EPSRC Principles of Robotics. AISB Quarterly. 2012;133(Spring 2012):14-5.

Caulfield T, Condit C. Science and the Sources of Hype. Public Health Genomics. 2012;15(3-4):209-17.

Chambers T. The Fiction of Bioethics (Reflective Bioethics). New York: Routledge; 1999.

Champagne M, Tonkens R. Bridging the Responsibility Gap in Automated Warfare. Philos Technol. 2015;28(1):125-37.

Daykin N, Clarke B. ‘They’ll still get the bodily care’. Discourses of care and relationships between nurses and health care assistants in the NHS. Sociology of Health & Illness. 2000;22(3):349-63.

Fischer F, Forrester J. Editor’s Introduction. In: Fischer F, Forrester J, editors. The Argumentative Turn in Policy and Planning. Durham and London: Duke University Press; 1993. p. 1-17.

Frey C, Osborne M. The Future of Employment: How Susceptible Are Jobs to Computerisation? Oxford: Oxford Martin School, University of Oxford; 2013.

Hawking S, Russell S, Tegmark M, Wilczek F. Stephen Hawking: ‘Transcendence looks at the implications of artificial intelligence – but are we taking AI seriously enough?’. The Independent, May 1st 2014.

HFEA. HFEA agrees advice to Government on the ethics and science of mitochondria replacement [press release]. London; 2013.

Human Fertilisation and Embryology Authority (HFEA). Available from: http://www.hfea.gov.uk/.

IIIM. Ethics Policy for Peaceful R&D. Reykjavik, Iceland: Icelandic Institute for Intelligent Machines.

Koepsell D. On Genies and Bottles: Scientists’ Moral Responsibility and Dangerous Technology R&D. Sci Eng Ethics. 2009;16(1):119-33.

McLean B, Elkind P. The Smartest Guys in the Room: The Amazing Rise and Scandalous Fall of Enron. Portfolio; 2013, 2004.

Nissenbaum H. Privacy as Contextual Integrity. Washington Law Review. 2004;79(1):119-58.

Nissenbaum H. Privacy in Context: Technology, Policy and the Integrity of Social Life. Palo Alto: Stanford University Press; 2010.

Rein M, Schon D. Reframing Policy Discourse. In: Fischer F, Forrester J, editors. The Argumentative Turn in Policy and Planning. Durham and London: Duke University Press; 1993. p. 145-66.