Superintelligence

There is currently a great deal of concern about the possible dangers of superintelligence. Fears have been expressed that a superintelligence surpassing human capacities may develop, and that humanity may then lose forever the capacity to control it. The notion of a technological singularity posits that AI will develop to a point beyond which we cannot predict or control how our world, and our place in it, will go.

There are ongoing debates about whether this will happen, how it might come about, and whether we can do anything to protect ourselves and, if so, how.

Meanwhile, others suggest that focusing on such large-scale but hypothetical worries about superintelligence is a distraction from the ethical and social issues in AI that are already upon us.

In a forthcoming Five Books interview on the best five books on AI and ethics, Paula Boddington suggested Murray Shanahan’s book The Technological Singularity as a good introduction to the technological singularity, together with insightful discussion of the associated ethical questions, linking them to issues in the philosophy of mind and personhood.

We would like to thank the Future of Life Institute for their generous sponsorship of our programme of research.