Problems with Codes of Ethics

Codes of ethics are not cure-alls for ethical problems; far from it. There are pitfalls to watch out for, and they can even make things worse.

Codes of ethics need strong institutional backing to function effectively. Without a positive culture of support, they can be useless. A striking example is the company Enron, which had an inspirational-sounding code of ethics that was evidently ignored: the company collapsed as a result of massive mismanagement, taking with it the livelihoods of tens of thousands of ordinary people. Institutional support can take the form of formal procedures, such as protection for whistleblowers, but it also requires an embedded culture supportive of the code’s proclaimed values.

The very presence of a code of ethics can, however, unfortunately lead to complacency. ‘Look, we’ve done the ethics, aren’t we good!’ A code of ethics is a beginning, not an end.

The very idea of parcelling ethics into a formal ‘code’ is also dangerous if it leads to the attitude that ethics itself is just some separate compartment of life and activity; it’s not. Ethics is more meaningfully seen as part and parcel of how we live, individually and collectively. So it would be unfortunate indeed if the presence of a code of ethics encouraged the view that you could ‘do the ethics’ and then get on with life, get on with the job. It would also be problematic if it encouraged the idea that ‘doing the ethics’ was somebody else’s job, although there can be good reasons to assign specific nominated individuals responsibility for certain issues, as a check against the diffusion of responsibility within organisations and looser groups.

Moreover, there is a background assumption that ethics can be fully articulated, and not only that, but articulated well enough to be distilled into a set of instructions and recommendations that can be understood by a variety of people working within an institutional context. This is an assumption about ethics that it’s possible to challenge. Of course, articulating ethical values, insofar as they can be articulated, is immensely worthwhile. But sometimes things of very great value are very hard to articulate. This is likely to be so precisely where there is rapid and partly unknown change in the fundamental ways in which we relate to each other and to the world. In other words, in the case of ethical codes for AI.

Codes and regulations can also have a downside, in that they may encourage ‘work to rule’: working up to the regulation, up to the code, and no further; keeping to the letter of the code, not the spirit. This may be especially problematic in some areas, such as those pertaining to safety. Well-known examples from other fields of operating to rule, not spirit, include ‘shopping around’ for an ethics review board, especially in multi-site research, or operating in countries where the standards are not so tight. The flip side is that where codes of ethics are unduly restrictive, there may be some justification for such practices.

Codes of ethics also need to develop in the light of newly discovered facts, broader policy and legal changes, developments in technology, and evolving nuance in our understanding of ethics. So there needs to be a mechanism for adapting to change without, at the same time, shifting the code to suit whichever way the wind is blowing, as it were.

The need to develop and refine codes of ethics for institutions can be explained simply by considering institutions and bodies as if they were individual persons. If any individual considers that they have completely sorted out a code of ethics for life, one complete and perfect for all time, they are surely wrong, and almost certainly self-deceived.

One sceptical way of putting this is the boiling frog problem. It’s said that if you put a frog into a vat of cold water and gradually increase the heat, it won’t jump out before it’s cooked. (I’m told this is actually false, but either way, it serves our purposes here.) So perhaps this is how codes of ethics and other regulation work: set down rules that are found acceptable; then, as things change and once we’re used to them, gradually chip away at them at a pace which we (the public, the professions) find acceptable until … hey presto, a boiling cauldron of rules that would never have been accepted at the outset. Boiling frogs are like slippery slopes, where those slipping down the slope just think they’re having a fun time skiing.

Or … the acceptance of the gradually changing rules might be real and well-grounded.

What to do about this? Make sure discussions are free and open, and specifically invite sceptical challenge. And know your history: make use of historians of technology, look into the historical background and development of regulation, and check how things are progressing. This is particularly important in institutions with high staff turnover and, consequently, poor institutional memory.

Again, these issues and more are discussed in the book Towards a Code of Ethics for Artificial Intelligence.

We would like to extend our thanks to the Future of Life Institute for their generous support of this project.