Ethics and Relativism


Why is this an issue for AI and codes of ethics?

Some issues in AI reach further than any one particular region of the world. Indeed, many do: this is particularly so for open source AI, for AI that extends over the internet or via satellite technology, or for AI that is subject to international trade. On some ways of thinking about values, what is needed is simply to ensure that everyone can use AI in ways that accord with their own personal values. But to limit our considerations to this is to ignore the many ways in which what we do affects others; and simply to allow options for choice is to undermine the particular weighty seriousness of ethics. How do we move towards finding the ‘right’ answers in codes of ethics, whilst acknowledging cultural difference and the differing ethics and mores found in different parts of the globe? Even calls for minimum standards, such as AI which complies with human rights, must answer the question: with which charter of human rights?

At the very least, concerns have already been expressed about harmonisation. The EU Commission on Legal Affairs has broached this openly, warning that if the EU does not regulate robotics, other parts of the world will introduce regulations and standards which the EU will then be forced to adopt or align itself to.

Likewise, will self-driving cars be programmed to drive in the style of a particular region of the world? In the style of Rome’s taxi drivers? Or in the style of a farmer from rural Lincolnshire (where, for those unfamiliar with the area, it’s almost totally flat)?

We extend our thanks to the Future of Life Institute for generously funding our research.