
Defining trust helps robots behave

An important step towards ensuring that robots act safely and effectively in our society is to be able to formally express what trust means in a human-robot partnership. The University of Oxford is working on how to express the properties of trust, writes researcher Morteza Lahijanian.

Robots are becoming members of our society, which is governed by intricate social rules, relationships and expectations. Complex algorithms have rendered robots increasingly sophisticated machines with rising levels of autonomy, enabling them to leave behind their traditionally simple workplaces in factories for a more complicated world. Driverless cars, home assistive robots and unmanned aerial vehicles are just a few examples.

As the level of involvement of such systems in our daily lives increases, their decisions affect us more directly. We instinctively expect robots to behave morally and make ethical decisions. For instance, we expect a firefighter robot to follow ethical principles when faced with the choice of saving one person's life over another in a rescue mission, and we expect an eldercare robot to take a moral stance when the instructions of its owner conflict with the interests of others. Spoiler alert: this is precisely what the robot in the film 'Robot & Frank' fails to do when it takes part in a robbery with its owner, Frank, in pursuit of its goal: Frank's wellbeing.

Such expectations give rise to the notion of trust in the context of human-robot relationships, and to questions such as 'how can I trust a driverless car to take my child to school?' and 'how can I trust a robot to help my elderly parent?' Failing to answer such questions appropriately could deal a major blow to the field of robotics or, more generally, to autonomous systems.

In order to design algorithms that can generate trustworthy decisions and hence an ethically reliable system, we need to understand, formalise and express trust. This is a challenging task because it involves many aspects including sociology, psychology, cognitive reasoning, philosophy, logic and computation.

We believe formal methods, specifically quantitative verification and synthesis, can provide an avenue for approaching the above questions. In recent years, these methods have received a great deal of attention in the robotics community and have been adapted specifically to provide guarantees for the safety and correctness of robot behaviours.
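To give a flavour of what quantitative verification involves, here is a minimal, self-contained sketch in Python (a toy example of our own, not the project's tooling): a robot's task is modelled as a small Markov chain, and value iteration computes the probability of completing the task without ever entering an unsafe state, which is then checked against a required threshold. All state names, probabilities and the threshold are invented for illustration.

```python
# Toy quantitative verification: compute the probability that the robot
# eventually reaches a 'done' state without entering an 'unsafe' state,
# then check it against a required threshold.
# All states, probabilities and the threshold are invented for illustration.

# Markov chain: state -> {successor state: transition probability}
chain = {
    "navigate": {"navigate": 0.2, "handover": 0.7, "unsafe": 0.1},
    "handover": {"done": 0.9, "navigate": 0.1},
    "done":     {"done": 1.0},    # absorbing goal state
    "unsafe":   {"unsafe": 1.0},  # absorbing failure state
}

def reach_probability(chain, goal, avoid, iterations=1000):
    """Value iteration for P(eventually reach `goal` while never visiting `avoid`)."""
    p = {s: 1.0 if s == goal else 0.0 for s in chain}
    for _ in range(iterations):
        for s in chain:
            if s in (goal, avoid):
                continue  # absorbing states keep their values
            p[s] = sum(prob * p[t] for t, prob in chain[s].items())
    return p

probabilities = reach_probability(chain, goal="done", avoid="unsafe")
threshold = 0.8  # invented requirement: complete the task safely with probability >= 0.8
result = probabilities["navigate"]
print(f"P(complete task safely | start in 'navigate') = {result:.4f}")
print("specification satisfied" if result >= threshold else "specification violated")
```

A verification tool answers exactly this kind of question, but against far richer models and specifications; a synthesis tool goes further and constructs robot behaviour for which the guarantee holds by design.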

We have begun a thorough investigation into the formalisation of trust and the expression of its properties, with the aid of collaborators in the philosophy and human factors communities, as part of an EPSRC-sponsored project. It is a collaboration between the Oxford Robotics Institute and our department, entitled 'Mobile Autonomy Programme Grant: Enabling a Pervasive Technology of the Future', which runs from March 2015 to February 2020. Professor Marta Kwiatkowska is leading the Safety, Trust and Integrity theme within the project (goo.gl/uBZctr), working with Research Associate Wenjie Ruan and me. New DPhil student Maciej Olejnik is also joining the project.

The vision of the project is to create, run and exploit the world's leading research programme in mobile autonomy, addressing the fundamental technical issues that impede large-scale commercial and societal adoption of mobile robotics. Understanding trust, and being able to evaluate it to inform trust-based decision-making and reliance on mobile robots, is key to their widespread adoption. We have organised two workshops on the topic, and the third will take place at FLoC in Oxford. More information: goo.gl/NAkCgL

The study of trust is a cross-disciplinary challenge that raises central research questions, notably from the formalisation angle. The immediate technical questions are how to quantify trust and how to model its evolution. Another key question is how to design a logic that allows the expression of specifications involving trust. From the verification perspective, the questions are how to verify (reason about) such specifications in the context of a given partnership or, more ambitiously, how to synthesise (design) an autonomous system such that, in a partnership with a human, these specifications are guaranteed.
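To illustrate what quantifying trust and modelling its evolution might look like in the simplest case, here is a hypothetical sketch using a beta-distribution model: a common choice in the trust-modelling literature, though not necessarily the model adopted in this project. Trust is represented as the expected success rate of the robot and is revised after each observed interaction; the prior and the observation history below are invented.

```python
# Hypothetical trust model: trust as the expected success rate of the robot,
# maintained as a beta distribution and updated after each interaction.
# The prior counts and the observation sequence are invented for illustration.

class BetaTrust:
    def __init__(self, successes=1.0, failures=1.0):
        # Beta(1, 1) prior: no initial evidence either way.
        self.successes = successes
        self.failures = failures

    def update(self, outcome_ok: bool):
        """Revise trust after observing one interaction with the robot."""
        if outcome_ok:
            self.successes += 1
        else:
            self.failures += 1

    @property
    def trust(self) -> float:
        """Current trust estimate: the mean of the beta distribution."""
        return self.successes / (self.successes + self.failures)

trust = BetaTrust()
for outcome in [True, True, False, True, True]:  # invented observation history
    trust.update(outcome)
    print(f"observed {'success' if outcome else 'failure'}: trust = {trust.trust:.3f}")
```

Even this simple model captures the asymmetry people often report: a single failure lowers trust noticeably, and several successes are needed to recover it. The open research questions above concern far richer models, and logics in which such trust values can appear inside verifiable specifications.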

In our research group, we have taken the first steps in investigating these questions, and we believe that only through a thorough study of them may we one day be able to guarantee the success of robots in our society.

This article first appeared in the Winter 2017 issue of Inspired Research.