Diversity of Participation in Code Development

Who gets to have a say in developing codes of ethics for AI? The experts?

But … if AI is going to affect – indeed, does already affect – our whole lives, aren’t we all experts to at least some extent? And shouldn’t we want to canvass as many diverse points of view as possible?

Professional bodies have often drawn up their codes of ethics over some period of time and in consultation with their members. Private corporations may wish to have sole, or major, control over any codes they develop for their own practices. But in many contexts, those drawing up codes of ethics will wish to ensure that a diverse range of people is involved. There are many reasons for this.

One reason for attending to inclusion and diversity is social justice – ensuring that members of different groups in society have the opportunity to be involved. Public involvement may matter particularly where the potential impacts on the public are large, and where large and very wealthy corporations have the power to affect everyday life considerably. So the concern to represent diverse groups in efforts to develop codes may be a moral and political one. This is much more complex than it might sound, for who gets to carve up the world into groups deserving representation? And who gets to represent these groups, and on what basis? Issues such as intersectionality and identity politics quickly heave into view. There are also significant differences between trying to ensure equality of outcome in inclusion and diversity, and trying to ensure equality of opportunity to contribute. Here, these issues are merely flagged for consideration.

But another, very pragmatic, reason is to ensure a good outcome. This is an especially important point in AI, because those working on AI tend to come from certain demographics – educated, relatively wealthy, often male, often from certain population groups, often working in geographical areas like North America or Northern Europe. Ethics is not just a technocratic problem. With the best will in the world, it may be hard to know what issues there might be for other people. The use of new technology can in fact increase social inequality for some groups, as those without access to the technology may be left out of its benefits. So including those who may be affected in different ways is useful. People with different outlooks on life may think of things others have never even dreamt of. People who have no vested interest in the success of AI may think more critically.

What’s more, recent research indicates that in problem-solving tasks, a diverse group may perform better than a group of similar people, even where the members of the more homogeneous group individually have stronger skills. Research has also indicated that groups containing women tend to function more effectively than those without. It may not be gender per se that’s at issue here: the indications are that, on average, women have better social skills, including empathy, and this translates into better communication and understanding within the group.

Finding people with good social skills, then, may be important – as long as they are not all clones of each other, which may worsen the outcome! Making sure that discussions run well and that there is good communication is also key.

Public involvement in research is well-established in other fields, such as in medical research. Some research funders even require this. There is a whole area of study devoted to examining public engagement and the different groups of people who make up the ‘publics’. For further discussion of this, see our Resources page.

There are some interesting and important reasons why those concerned with AI and information technology in particular should pay attention to how codes of ethics are developed. How do we form our opinions? The evidence around us, and the views of those with whom we associate most closely, are a key element in this. But there is some reason to think that AI may already be exerting an influence on how we think and form opinions, through the way the algorithms behind search engines, news feeds, and internet advertising work: nudging people into like-minded groups and exposing them to things they are already interested in, tending us towards what has been dubbed the Filter Bubble.
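To make that mechanism concrete, here is a minimal, purely illustrative sketch – not any real platform’s algorithm – of how a naive engagement-maximising recommender can narrow a user’s exposure over time. The topics, update rule, and parameters are all hypothetical.

```python
# Illustrative sketch only (hypothetical recommender, not a real system):
# a policy that always shows the topic a user has clicked most. Small
# initial differences in interest compound over time, concentrating
# exposure on one topic -- a toy version of the "filter bubble" dynamic.

import random

TOPICS = ["politics", "sport", "science", "arts"]

def simulate(rounds: int = 200, seed: int = 0) -> dict:
    rng = random.Random(seed)
    # Start with near-uniform click counts, plus a tiny random nudge.
    clicks = {t: 1.0 + rng.random() * 0.1 for t in TOPICS}
    for _ in range(rounds):
        # Recommender policy: show whatever the user engaged with most.
        shown = max(clicks, key=clicks.get)
        # The user clicks with probability proportional to their
        # accumulated interest in the shown topic.
        if rng.random() < clicks[shown] / sum(clicks.values()):
            clicks[shown] += 1.0
    total = sum(clicks.values())
    # Return each topic's share of total engagement.
    return {t: round(c / total, 3) for t, c in clicks.items()}

if __name__ == "__main__":
    print(simulate())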

Naturally, there is also debate about the size and impact of any such effects. But it gives us at least one reason to attend to the possibility that, in canvassing opinions on the ethics of AI, we might fall into the trap of consulting only a certain range of public viewpoints.

A fuller discussion of these issues, and some recent research findings, can be found in this post. They are also discussed in the book, Towards a Code of Ethics for Artificial Intelligence.

We would like to thank the Future of Life Institute for their generous sponsorship of our programme of research.