Coming soon. This page will consider some examples of how discussions of AI and ethics often idealise both human and machine agency, and how different standards are often set for humans and for machines, sometimes for good reasons and sometimes for less good ones.
This page will also look at whether different standards of success are applied to humans and to machines, and at the problems that might arise from the use of machines. For example, what should the standard of accuracy be for machine-led or joint human-machine decision making? If machine learning leads to better outcomes overall, e.g. in the diagnosis of a disease, do any issues of ethics or legal liability arise if the outcome is worse for some individuals?
We would like to thank the Future of Life Institute for sponsoring our programme of research.