It is easy to be dazzled by the myriad claims made about AI. There is almost always hype concerning new technology, particularly if it is ‘sexy’ – if it’s developing fast, if it’s expensive, if it’s hard to understand, and if it’s very powerful. Hype in AI is almost inevitable, given that it also challenges views of humanity, and may give rise to fears – and hopes – that our intelligence is being surpassed in profound ways. In this, the hype is reminiscent of hype concerning genomics.
What could be the perils of such hype?
Hype about potential dangers can lead to two unhelpful outcomes – panic and overreaction, or hiding under the blankets and ignoring the problems. Overreaction might be relatively invisible, because it might involve approaches which aim to ‘max up’ the ethical response. But this could mean stressing one ethical value at the expense of others. And it could mean regulations and codes which are excessive, or which sound impressive but are empty, producing further problems.
Hype about potential benefits can obviously lead to neglect of real, or at least feasible, dangers, and it can also lead to an overreliance on AI as a solution to our concerns. Problems are often seen as technological, requiring technological solutions. But creative thinking can often produce a wider range of solutions. A parallel here is the well-understood danger of medicalisation, where problems which might be better treated as, say, psychological, social, or cultural are instead treated in terms of the practice and technology of medicine. Of course, sometimes a medical solution is the best solution – or at least part of the solution. But not always. Likewise, ethical challenges in AI may be met with more AI – an ‘AI on AI’ approach. Consider whether this is appropriate – and whether it is adequate.
Hype about future possibilities can distract us from present realities. For instance, hype about a possible malevolent Superintelligence being developed some time in the future could distract from the need to consider ways in which AI is affecting our everyday lives now, for example, in the operation of powerful and almost ubiquitous algorithms.
Hype often makes claims that the problems with AI are ‘unique’. While it is vital to note any genuinely special features, overemphasising their uniqueness can itself be a problem: it can distort the methodology used to address ethical issues, and divert attention from lessons which can be learned by comparison with other areas.
SO, WHAT CAN WE DO ABOUT THIS?
Firstly, remember to watch out for how hype might be affecting your own thinking, and for how it might be shaping the way you or your institution consider the ethics of AI.
Secondly, pay close attention to hype – it can be very revealing. In written texts, the hype often comes at the beginning (and is also sprinkled throughout). It frequently contains hints about the values and focus of the writers. For example, hype about the economic prospects of AI can betray a bias towards financial interests, perhaps at the expense of examining other issues.
The dangers of hype in technology, and their impact on ethical thinking, are discussed further in the book Towards a Code of Ethics for Artificial Intelligence.
We would like to extend our thanks to the Future of Life Institute for their generous support of this project.