Modelling trust in human-robot interaction

Supervisor

Suitable for

MSc in Computer Science

Abstract

When human users interact with autonomous robots, appropriate notions of computational trust are needed to ensure that their interactions are safe and effective: too little trust can lead to user disengagement, while too much trust may cause damage. Trust management systems have been introduced for autonomous agents on the Internet, but they need to be adapted to the setting of mobile robots, taking into account intermittent connectivity and uncertainty in sensor readings. Recently, a quantitative reputation-based trust model for user-centric networks has been modelled and analysed using PRISM-games (http://www.prismmodelchecker.org/games/), an extension of the PRISM model checker that supports stochastic multi-player games as models, with objectives expressed in temporal logic (http://www.prismmodelchecker.org/bibitem.php?key=KPS13). This project aims to develop a quantitative trust management system suitable for mobile robots. Initially, a simplified setting will be considered and an approach to modelling trust developed for that setting. The project will suit a student interested in modelling and/or software implementation.
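To give a flavour of what a quantitative reputation-based trust model involves, the sketch below uses the beta reputation scheme, a common choice in the trust management literature. This is an illustrative assumption, not the model from the cited work: the class name, the evidence-counting update, and the discount factor for intermittent contact are all hypothetical choices for this example.

```python
# A minimal sketch of quantitative reputation-based trust, assuming a
# beta reputation model (an illustrative choice, not the model analysed
# with PRISM-games in the cited paper).

from dataclasses import dataclass


@dataclass
class BetaTrust:
    """Trust in an agent as a Beta(alpha, beta) distribution over the
    probability that the agent behaves well in an interaction."""
    alpha: float = 1.0  # pseudo-count of positive interactions
    beta: float = 1.0   # pseudo-count of negative interactions

    def update(self, positive: bool) -> None:
        # Each observed interaction adds one unit of evidence.
        if positive:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def expected_trust(self) -> float:
        # Mean of the Beta distribution: alpha / (alpha + beta).
        return self.alpha / (self.alpha + self.beta)

    def discount(self, factor: float) -> None:
        # Exponential forgetting: old evidence counts less over time,
        # which matters for mobile robots with intermittent connectivity.
        self.alpha = 1.0 + factor * (self.alpha - 1.0)
        self.beta = 1.0 + factor * (self.beta - 1.0)


if __name__ == "__main__":
    t = BetaTrust()
    for outcome in [True, True, True, False]:
        t.update(outcome)
    print(round(t.expected_trust(), 2))  # 4 / (4 + 2) ≈ 0.67
```

In a PRISM-games analysis, a model along these lines would form part of a stochastic multi-player game, with temporal-logic objectives quantifying, for instance, the probability that trust in a misbehaving agent stays below a threshold.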