Machine Learning: 2014-2015
Overview
Machine learning techniques enable us to automatically extract features from data so as to solve predictive tasks, such as speech recognition, object recognition, machine translation, question-answering, anomaly detection, medical diagnosis and prognosis, automatic algorithm configuration, personalisation, robot control, time series forecasting, and much more. Learning systems adapt so that they can solve new tasks, related to previously encountered tasks, more efficiently.
The course focuses on the exciting field of deep learning. Drawing inspiration from neuroscience and statistics, it introduces the basic background on neural networks, backpropagation, Boltzmann machines, autoencoders, convolutional neural networks and recurrent neural networks. It illustrates how deep learning is shaping our understanding of intelligence and contributing to the practical design of intelligent machines.
On completion of the course students will be expected to:
- Understand what learning is and why it is essential to the design of intelligent machines.
- Know how to fit models to data.
- Understand numerical computation, statistics and optimisation in the context of learning.
- Have a good understanding of the problems that arise when dealing with very small and very big data sets, and how to solve them.
- Understand the basic mathematics necessary for constructing novel machine learning solutions.
- Be able to design and implement machine learning algorithms in a wide range of real-world applications.
- Understand the background on deep learning and be able to implement deep learning models for language, vision, speech, decision making, and more.
Machine Learning is a mathematical discipline, and students will benefit from a good background in probability, linear algebra and calculus. Programming experience is essential.
- 1. Introduction (1 lecture)
- 2. Linear prediction (1 lecture)
- 3. Maximum likelihood (1 lecture)
- 4. Regularizers, basis functions and cross-validation (1 lecture)
- 5. Optimisation (1 lecture)
- 6. Logistic regression (1 lecture)
- 7. Feedforward neural networks (1 lecture)
- 8. Back-propagation (1 lecture)
- 9. Convolutional neural networks (1 lecture)
- 10. Max-margin learning and siamese networks (1 lecture)
- 11. Boltzmann machines and log-bilinear models (1 lecture)
- 12. Autoencoders (1 lecture)
- 13. Helmholtz machines and learning by simulation (1 lecture)
- 14. Recurrent neural networks and LSTMs (1 lecture)
- 15. Reinforcement learning with direct policy search (1 lecture)
- 16. Reinforcement learning with action-value functions (1 lecture)
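As a taste of the material in lectures 2, 3 and 5, the following sketch (illustrative only, not course code) fits a linear model y = w·x + b by minimising the mean squared error, which is the maximum-likelihood objective under Gaussian noise, using batch gradient descent:

```python
import numpy as np

# Synthetic data from a known linear model: true w = 2.0, b = 0.5
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=100)
y = 2.0 * x + 0.5 + 0.1 * rng.standard_normal(100)

# Batch gradient descent on the mean squared error
w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    err = (w * x + b) - y            # residuals
    w -= lr * 2.0 * np.mean(err * x)  # d(MSE)/dw
    b -= lr * 2.0 * np.mean(err)      # d(MSE)/db

print(w, b)  # recovers values close to the true parameters
```

The same gradient-descent recipe, applied to a different loss and a composed (rather than linear) model, underlies the backpropagation and neural-network lectures later in the course.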
Syllabus
Mathematics of machine learning. Overview of supervised, unsupervised, multi-task, transfer, active and reinforcement learning techniques.
- Kevin P. Murphy. Machine Learning: A Probabilistic Perspective, MIT Press 2012.
- Christopher M. Bishop. Pattern Recognition and Machine Learning, Springer 2007.
- T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer 2011.
- S. Haykin. Neural networks and learning machines. Pearson 2008.
Students are formally asked for feedback at the end of the course. Students can also submit feedback at any point. Feedback submitted this way goes to the Head of Academic Administration and is treated confidentially when passed on further. All feedback is welcome.