
AI Child’s play


Doctoral student Misha Denil has been part of a team of researchers teaching an AI to play with blocks, just as a baby does: learning about the physical world through experimentation.

Working at Google DeepMind with Professor Nando de Freitas, and with researchers at the University of California, Misha used reinforcement learning techniques to enable the AI to learn through experiments in two virtual environments.

The first experiment used 5 blocks that looked identical but weighed different amounts. Just as a baby would, the AI had to 'poke' each block to identify the heaviest; it had to interact with all the blocks before it could reach the right conclusion.
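The structure of this task can be sketched in a few lines. This is a hypothetical simplification for illustration, not the actual DeepMind environment (which used a physics simulator and a policy learned from raw observations): here a poke simply returns a displacement inversely proportional to the block's hidden mass, and the "agent" probes every block before answering.

```python
import random

def make_blocks(n=5, seed=0):
    """Create n identical-looking blocks with hidden, distinct masses."""
    rng = random.Random(seed)
    return rng.sample(range(1, 10), n)

def poke(masses, i, force=1.0):
    """Poking block i yields a displacement: lighter blocks move further."""
    return force / masses[i]

def find_heaviest(masses):
    """Probe every block, then answer: the heaviest is the one that moved least."""
    displacements = [poke(masses, i) for i in range(len(masses))]
    return min(range(len(masses)), key=lambda i: displacements[i])

masses = make_blocks()
print(find_heaviest(masses))
```

The point mirrored from the research: the answer cannot be read off from appearance alone, so the agent must act on the world to gather the information it needs.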

In the second experiment, 5 blocks were piled into a tower. Some blocks were stuck together and others were separate. The AI had to pull on the blocks to identify which ones were stuck together.
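The second task has a similar shape: the glue structure is hidden, and only interaction reveals it. A hypothetical sketch (again, not the real simulated environment, where an agent learned this behaviour through reinforcement learning): pulling a block moves everything glued to it, and pulling each unexplored block once recovers the hidden grouping.

```python
def pull(groups, i):
    """Pulling block i drags along every block glued to it; returns that set."""
    for g in groups:
        if i in g:
            return g

def infer_groups(n, groups):
    """Pull each not-yet-seen block once to recover the hidden glue structure."""
    seen, found = set(), []
    for i in range(n):
        if i not in seen:
            g = pull(groups, i)
            found.append(g)
            seen |= g
    return found

hidden = [{0, 1}, {2}, {3, 4}]   # blocks 0+1 and 3+4 are stuck together
print(infer_groups(5, hidden))   # the agent recovers the hidden grouping
```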

The AI was able to learn how to solve the tasks without having any prior knowledge of the laws of physics, or of the physical properties of the blocks used in the experiments. 

Although this research is at an early stage, it has important implications, particularly for robot development. It demonstrates that AI has the potential to learn how to solve problems when clear instructions are not available, acquiring an understanding of the world that goes beyond passive perception.

Read about the research here: