
Oxford showcases its strengths in AI

The first ever expo on artificial intelligence (AI) at the University of Oxford showcased the tremendous advances being made in the field, and looked at where Oxford’s research is likely to lead. The AI@Oxford event brought together researchers from across the collegiate University, highlighting their expertise through a series of talks, debates, and demonstrations. Head of the Department of Computer Science Professor Mike Wooldridge reviews the event.

Reaching the point where a machine can translate Proust's beautifully written French into equally expressive English sentences is still some way in the future - if it is ever possible at all. The difficulties in translating a classic tome by Proust are manifold: computers have no knowledge of human relations and reactions, nor do they understand the nuances of French life in the period in which Proust was writing - and we see no easy way to give them this knowledge. On the other hand, automatic translation of non-literary text has become a daily reality for many, something that 20 years ago would have seemed like science fiction.

This is an example that I gave during my opening presentation at the AI@Oxford expo to help illustrate both the impressive reality of AI today, and the many challenges that remain for the future. During the event, delegates were given many other examples of the current state of AI - from driverless vehicles to health assistants to banking systems. Expert panels also discussed how to ensure that the technology leads to positive developments for society, minimising any possible downsides.

Another point I emphasised in my talk is that the AI community is currently focusing on developing AI in a relatively narrow way, which is a long way from the dream of general purpose intelligent systems. Current AI systems may be able to perform at super-human levels on certain narrow tasks, but scientists still have no idea how to make self-aware, conscious machines. The suggestion that robots might take over the world has no basis whatsoever in current scientific reality.

A lot of the excitement in AI at the moment is being generated by advances in machine learning: the idea of machines that can learn for themselves how to conduct tasks, rather than be explicitly programmed to carry them out. This may sound like an alarming idea, but again, although progress has been rapid this century, machine learning currently works only on certain narrow tasks.
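To make the distinction between learning and explicit programming a little more concrete, here is a minimal sketch (my own illustration, not something shown at the expo, and assuming the scikit-learn library): rather than writing a rule by hand, we give a model a handful of labelled examples and let it infer the rule for itself.

```python
# Minimal sketch of "learning from examples" rather than explicit programming.
# The hidden task here is trivially simple - recognising whether a number is
# large or small - but the model is never told the rule; it infers it.
from sklearn.tree import DecisionTreeClassifier

# Training examples: inputs and the labels a human would give them.
X = [[1], [2], [3], [10], [11], [12]]
y = [0, 0, 0, 1, 1, 1]  # 0 = "small", 1 = "large"

model = DecisionTreeClassifier()
model.fit(X, y)  # the model learns a decision rule from the examples

# Apply the learned rule to inputs it has never seen.
print(model.predict([[4], [9]]))
```

The same pattern - collect examples, fit a model, apply it to new cases - underlies the far more sophisticated systems discussed at the expo, though, as noted above, each such system works only on the narrow task it was trained for.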

AI systems are proving their worth in carrying out routine tasks more quickly and precisely than humans in various industries, and the AI event explored various possible uses, from automating ports to supporting healthcare. A port, for example, is ideal for automated vehicles and cranes because its borders are limited geographically, there are few people on site for machines to avoid, and the tasks performed are narrowly defined.

It is much more complex to use self-driving vehicles on public roads, where there are far more road users and many more unexpected events, such as pedestrians stepping into the road. Nevertheless, Professor Paul Newman, who heads the University's Oxford Robotics Institute (ORI), said he thinks it is a question of when, not if, driverless cars become the norm.

Oxford is conducting various pieces of research relating to self-driving vehicles. For example, Professor Shimon Whiteson from the Department of Computer Science talked at the expo about how his team is developing smarter simulators with the aim of making autonomous driving safer, and thus speeding up the process of getting self-driving cars onto public roads.

Meanwhile, the ORI is looking beyond driverless cars on the road to working out how to deploy them in more difficult conditions, for example off-road on a glacier, in the desert, or in poor visibility. To do this research, Paul's team has fitted out an ordinary Land Rover with a range of sensors including cameras, radars and lasers. The Land Rover was on show at the Sultan Nazrin Shah Centre in Worcester College, where the expo was held.

A host of system demonstrations were also on show at the expo, including: visual search technologies that enable users to search BBC footage for a specific face or scene; a machine learning system that induces logic (Prolog) programs from data; and a machine-learning image analysis solution which makes it easier for anyone anywhere to use ultrasound technology.

The delegates also received a flavour of student research being conducted at the University in a poster session. The posters covered subjects such as: the application of machine learning to drug research; the use of low-cost smartphones to detect mosquitoes; and the application of machine learning to detect evidence of online radical behaviour.

As well as exploring the technological possibilities, the University is conducting research into the potential impact of AI on society. How to ensure that the future of AI has a positive effect on the population at large was a theme that cropped up time and again at the AI Expo, and was discussed in depth by a set of panels on employment, ethics and security.

Ethics, explained Senior Researcher Paula Boddington from the Department of Computer Science, is all about making sure that AI develops in a positive direction by setting and enforcing good ethical codes. On many points the field can learn lessons from codes developed in other areas, such as medicine. What is troubling about AI, in Paula's view, is that such codes rest on the assumption that the professionals carrying out the work (in this case, the machines) know what they are doing.

While the introduction of any emerging technology is never without risk, many speakers felt that the enormous potential benefits of AI outweighed the possible downsides. One example, given by the security panel: while a rogue group may use AI on social media to recruit new members, a defensive AI system could catch a vulnerability and repair it before anyone else discovers it. Most speakers agreed that the greater danger in the use of AI comes from malevolent people rather than from glitches in the systems.

Another area of public concern, discussed by a panel at the expo, is the potential impact of AI on employment, and how the resulting changes might affect society. The employment panel, while agreeing that some jobs might be affected negatively (particularly those that could be easily automated), also believed that AI had an upside, for instance in increased productivity. Sandra Wachter, a lawyer and Research Fellow at the Oxford Internet Institute, said that machines tend to enhance human capabilities - working in tandem with a doctor on treatment plans, for example, or helping banks make responsible lending decisions. The advantage of AI systems in making judgements is that they are not swayed by emotion or prejudice in the way that humans can be.

Feedback from the day has been overwhelmingly positive, as you can hear for yourself from delegates from industry and government in the clips linked below.

It's heartening to hear that they were impressed by the research being conducted at the University of Oxford, and that we conveyed the central role that Oxford is playing in this most exciting of research areas.

AI@Oxford delegates' comments: http://goo.gl/ymoQDp
The Ethics panel debate: https://goo.gl/5RQBnv
Facebook Live interview with Prof. Paul Newman on the future of autonomous cars: https://www.youtube.com/watch?v=ciDoxidUdF0 (view in Firefox or Google Chrome)

This article first appeared in the summer 2018 issue of Inspired Research.