LipNet AI takes lip reading into the future
Posted: 9th November 2016
A Department of Computer Science research team has developed a new automatic lip reading system that far surpasses the performance of human lip readers and previous automatic lip reading systems.
Yannis Assael, Brendan Shillingford, Shimon Whiteson and Nando de Freitas used deep learning to create LipNet – software that reads lips faster and more accurately than was previously possible. Although LipNet has proven very promising, it is still at a relatively early stage of development: it has been trained and tested on a research dataset of short, formulaic videos showing a single well-lit person face-on.
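The LipNet paper describes a network that maps a whole video clip to a sentence in one pass, trained with connectionist temporal classification (CTC) – a loss that lets the model emit a label per video frame without frame-level alignment. As a minimal, illustrative sketch (not the authors' code), the CTC decoding step can be shown as collapsing repeated per-frame labels and dropping the special "blank" symbol:

```python
def ctc_collapse(frame_labels, blank="-"):
    """Collapse a per-frame label sequence into a transcript:
    merge consecutive repeats, then drop blank symbols."""
    out = []
    prev = None
    for label in frame_labels:
        # Keep a label only when it differs from the previous frame
        # and is not the CTC blank.
        if label != prev and label != blank:
            out.append(label)
        prev = label
    return "".join(out)

# Hypothetical per-frame output for a clip of someone saying "place":
print(ctc_collapse("--ppl--a-ccee--"))  # → place
```

This is only the decoding rule; in the full system a spatiotemporal convolutional and recurrent network produces the per-frame label probabilities from the video.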
In its current form LipNet cannot handle more challenging video footage, so it is unsuitable for use as a surveillance tool. But the team is keen to develop it further, especially as an aid for people with hearing disabilities.
Watch LipNet in action here:
Read the paper:
For media enquiries please contact firstname.lastname@example.org