The ability to interact with and understand the environment is a fundamental prerequisite for a wide range of applications, from
robotics to augmented reality.
In particular, predicting how deformable objects will react to applied forces in real time is a significant challenge. This is further compounded by the fact that shape information about real-world objects is often impaired by occlusions, noise and missing regions (e.g. a robot manipulating an object can observe only a partial view of the whole solid).
Another challenge is learning to estimate the dynamics of moving and interacting objects from observation alone.
Our research tackles how to bring common-sense understanding to robotic perception, with a focus on low-cost vision sensors and interaction with humans. Learning to predict intuitive physics, such as how objects move and interact with each other, will enable robots to operate in dynamic, unconstrained environments.
3D-PhysNet: Learning the Intuitive Physics of Non-Rigid Object Deformations
Zhihua Wang, Stefano Rosa, Bo Yang, Sen Wang, Niki Trigoni and Andrew Markham
In 27th International Joint Conference on Artificial Intelligence and 23rd European Conference on Artificial Intelligence (IJCAI-ECAI), 2018.
Defo-Net: Learning Body Deformation Using Generative Adversarial Networks
N. Trigoni, Z. Wang, S. Rosa, L. Xie, B. Yang, S. Wang and A. Markham
In IEEE International Conference on Robotics and Automation (ICRA), 2018.