Machine learning blog. And such.
So I finally submitted my PhD thesis, collecting already-published results on how to obtain uncertainty in deep learning, together with lots of bits and pieces of new research I had lying around... Post (Comments)
These are just some comments and updates on the code from the paper. Post
During a talk I gave at Google recently, I was asked about a peculiar behaviour of the uncertainty estimates we get from dropout networks. Post
I've decided to play with a new idea – encouraging an interactive discussion to try to support or falsify a hypothesis in deep learning. This followed some ideas I've had about the interaction between theoretical and experimental research approaches in our field. This post hosts the discussion board for the paper "A theoretically grounded application of dropout in recurrent neural networks"; leave a comment if you have any thoughts. Post (Comments)
I recently spent some time trying to understand why dropout deep learning models work so well, relating them to new research from the last couple of years. I was quite surprised to see how close these models were to Gaussian processes. I was even more surprised to see that we can get uncertainty information from these deep learning models for free – without changing a thing. Post (Comments)
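The "for free" part refers to keeping dropout switched on at test time and averaging several stochastic forward passes (Monte Carlo dropout). A minimal NumPy sketch of the idea, using a made-up one-hidden-layer network with illustrative random weights (not the paper's actual models):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny network: 1 input -> 50 hidden units -> 1 output.
W1 = rng.normal(size=(1, 50))
b1 = np.zeros(50)
W2 = rng.normal(size=(50, 1))

def stochastic_forward(x, p_drop=0.5):
    """One forward pass with dropout kept ON at test time."""
    h = np.maximum(0.0, x @ W1 + b1)        # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop     # Bernoulli dropout mask
    h = h * mask / (1.0 - p_drop)           # inverted-dropout scaling
    return h @ W2

x = np.array([[0.3]])
# Repeat the stochastic pass; the spread of the outputs is the uncertainty.
samples = np.stack([stochastic_forward(x) for _ in range(100)])
mean = samples.mean(axis=0)   # predictive mean
std = samples.std(axis=0)     # predictive uncertainty estimate
```

Each pass samples a different dropout mask, so the network behaves like an ensemble; the sample standard deviation over passes serves as the model's uncertainty about its prediction.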