AI, Ethics and Law

Work in progress. Meanwhile, some notes:

Questions have already arisen concerning the use of AI and the law, and there are ethical implications to such questions. For example, the 2016 Wisconsin Supreme Court decision in State v. Loomis concerned whether the use of the COMPAS algorithm in determining sentencing was fair. The court found that it had been used appropriately. A legal decision such as this will, of course, make reference to precedent and to the law of the relevant jurisdictions. There can naturally also be broader debates about whether such legal decisions really do capture ‘fairness’ in such cases. This case is also briefly discussed here.

The European Parliament is currently considering issues of law and robotics; a report on European Civil Law and Robotics issued in January 2017 can be found here. A blog post examining one aspect of the report, its framing reference to myths and stories, can be found here. Further links to the European Parliament’s work on robotics can be found here and on our Resources page.

There is a strong and complex relationship between ethics and law. Codes of ethics are nested within the appropriate legal jurisdictions of local, national and international laws, and seek to adhere to these. Although the law of the land generally embodies some ethical principles, moral judgements often concern issues which go beyond the concerns of the law; there is a realm of private and personal life which the law leaves alone, although its extent varies from jurisdiction to jurisdiction. (There is an interesting point of comparison here between professional codes of ethics and ethics considered in and of itself: in some extreme circumstances one might judge that it was ethically justified to break certain laws, if these were considered so unjust that the value of respect for the law was outweighed by the injustice of keeping to the law in a particular case. However, I can’t recall ever seeing a professional code of ethics which did anything other than advise that the law should be followed.)

However, especially when technology is rapidly advancing, the law might not be able to keep up, and professional bodies and others considering ethical aspects of that technology might well lobby for appropriate changes to the law.

Note that there may be great differences in some aspects of the law between different jurisdictions, some of them highly relevant to AI. For example, there are significant differences between US and European laws on data protection and privacy. This can have a large effect on what can and cannot be done with personal data in AI, for example, and on what can and cannot be done to safeguard ethical concerns in these different jurisdictions.

It may be possible to amend codes of ethics issued by professional bodies more flexibly and more rapidly than national, and especially international, laws. Meanwhile, how can technology cope when a legal regime might be a stumbling block to its development? For example, legal regimes may act as brakes on the development of autonomous vehicles, drawing on laws intended to safeguard the public, yet this might slow the development of technology which in the longer term could have a beneficial impact on road safety.

One possibility is that technology might be tested in more permissive jurisdictions. But this could very well be problematic, since it may amount to the population of certain countries paying a price for the development of technologies from which other countries are more likely to benefit. This would parallel certain highly criticised practices in pharmaceutical testing: in most developed countries, for example, rigorous regimes and protocols make it onerous to test drugs on children, and suspicions have been raised that testing of paediatric medicines may therefore take place in less developed or developing countries where children are not so rigorously protected.

Another, more attractive, possibility is to have certain areas where experimentation with technology is permitted, subject to improved regulations. At our IJCAI-16 workshop in New York, Ugo Pagallo presented a paper discussing this option. The abstract can be found here.

There are ways in which close attention to the law could be very useful for considerations of ethics. Law has to be applied, and applied rigorously and consistently, across a wide range of circumstances. Attention to how the law might be updated to accommodate various developments in technology, including AI, may proceed with an attention to detail from which ethics could sometimes benefit. For instance, faced with difficult judgement calls in ethics, there is sometimes a temptation to call on ‘virtue ethics’, the wisdom of the individuals concerned, to make fine-tuned decisions. Whilst this is a broad way of characterising how hard cases are handled, such ‘wisdom’ can presumably be articulated in more concrete ways.

Additionally, the very fact that there are sometimes important differences in the law between jurisdictions, differences which then shape debates about ethics and codes of ethics, means that examining the possibilities of different legal regimes can be a good way of thinking more laterally about what is possible and what kinds of legal reform might be desirable. So, always look around to see what other countries are doing. A good reason to consider diversity in thinking.

We would like to thank the Future of Life Institute for their generous sponsorship of our research programme.