Only a fool would attempt to explain what ethics is on a short webpage. So I’ll have a go.
Or rather, I’ll raise some issues about ethics that are worth considering when thinking about ethics in AI, and when developing codes of ethics.
(And see also our page Professional Codes of Ethics.)
It’s important to remember that there’s disagreement about how precisely to characterise ethics. But we need to give some consideration to how important ethics is, what its significance is, what the boundaries of ethics are, and other central questions.
Firstly, not all codes of conduct are codes of ethics. The codes of conduct for members of a private club are not codes of ethics per se, although some such codes may be informed by ethical considerations. The code of conduct of the Mafia is not a code of ‘ethics’ – or is it? Do we assume that anything described as a code of ethics must deal with conduct that is recognised as good? But this could be going too far. A pacifist can surely recognise that there are military codes of ethics, whilst disagreeing with the entire enterprise of war. (See how hard it is already?) The point, for our purposes, is to draw attention to the question of what it is about any codes or regulations governing the production of AI that makes them codes of ethics. Some might be standards for safety (which of course has ethical ramifications); some might deal with more technical specifications.
And codes of ethics may not – indeed, typically do not – concern themselves with all possible ethical questions. For instance, codes of professional ethics typically deal with the range of issues that pertain to the product or service that members of that profession provide, and may extend further insofar as they contain general consideration of the broad reputation of the profession, the benefits to society of the profession’s activities, and so on. This point will be of great relevance to thinking about AI (see the page Is AI Ethics Special). AI, considered in and of itself, has such a broad remit that, considered globally, its various applications may seem to encompass more or less all of life – unlike the typical professional code of ethics, or even most systems of law. This is going to make developing codes of ethics for AI harder. And there’s more.
Ethics deals with issues we consider weighty or important in a particular way. But it’s surprisingly hard to capture exactly how weighty ethical considerations are, or should be. Are ethical considerations the most important thing in life? Does ethics encompass all of our values, or is it just concerned with a subset of them? There are different philosophical views on this, but one reason it’s an important question for AI is whether ethical considerations might in some ways stand in opposition to human progress in science and technology. Should we hold up the tech until we’ve got the ethics right? Or should the ethics just hop to it and play catch-up with the tech? This is just a brief taster of some very complex issues.
There is also the question of ethical authority. Why should we be motivated to adhere to codes of ethics? It’s a notable fact of human life that aspiration to good behaviour often falls short. We need to see a good reason to behave accordingly (and even that isn’t always enough!). As we’ll see, this is an important question in AI, since in some instances the institution that backs a professional code of conduct may have powers of enforcement or inducement, including the backing of legal sanction; in other cases, none at all. Authority, and the motivation to adhere to codes, may also stem from ‘soft’ powers such as respect for the originating body, for one’s colleagues, or for the process by which the codes of ethics were drawn up and discussed.
The question of ethical motivation may be posed generally with the question, ‘why should I be moral?’, and more particularly in terms of the question of altruism – why should I be concerned with anything other than my own self-interest? Why should I care about others? You’ll be sad to know we can’t answer this question definitively here. But for our current purposes, it’s useful to note that there may well be different questions of motivation in developing and implementing ethical codes or standards for the different actors and agencies in AI – for public research funders, for governments, for smaller private companies, and for large corporations with significant global impact. Just why should a corporation, which may have pre-existing duties to its shareholders to be profitable, also have a code of ethics, if it potentially hampers the corporation’s ability to carry out its core activities with maximum financial efficiency?
One aspect of this question is worth noting here: why shouldn’t I act as I want to, if I can get away with it? Plato posed this enduring question in the Republic, in the myth of the Ring of Gyges (2.359a–2.360d). In this myth, Gyges is a shepherd serving the King of Lydia who finds a golden ring with the power to turn its wearer invisible. Using it, he enters the court, seduces the Queen (Plato does not fill in whether he manages this feat whilst invisible even to the Queen herself – if so, wow), and persuades her to help him murder the King and usurp the throne. You can sort of see why Gyges might be tempted to do such things, if he’s not going to be caught.
But isn’t this just a general question in ethics? How is it relevant to AI? It is not a question for AI alone, of course. But the very complexity of much of AI means there is often a particular question of transparency. If we don’t even know how an algorithm produced by machine learning is operating, how do we know whether it’s operating ethically? While I’m on my computer typing this, is data being gathered about me and used for some unwelcome purpose? How would I know? How would I stop ‘them’? The frequently posed fears that we might be manipulated by powerful machines, or by very powerful corporations, without our knowledge mean that AI has its very own modern take on the Ring of Gyges myth. Only, it’s not actually a myth. Likewise, it goes without saying that ethics codes and practices which remain shrouded in secrecy are scarcely worthy of the name. It’s a contradiction in terms: you can’t claim to be addressing ethical questions if you refuse to explain yourself to rightly interested parties.
Coming soon:
Agency – and AI
The basic values of ethics – welfare, human wellbeing, autonomy, happiness, pain
Moral patients – humans, animals, sentient beings, rational beings – AI itself?
Other values – why be alive? Societal values, political values, economic values
Conclusions – codes of ethics in AI often draw on normative ethical theories and approaches rooted in applied ethics. But, as this discussion shows, the ethical questions posed by AI also raise fundamental questions about the nature of ethics – questions in metaethics. Indeed, it’s owing to the nature of AI itself that some of these questions are pushed to the forefront of our inquiries. This makes the quest for answers to ethical questions in AI, and for the development of codes of ethics in AI, all the harder. And all the more exciting.
We would like to thank the Future of Life Institute for their generous sponsorship of our programme of research.