Advanced Topics in Machine Learning: 2020-2021
The lectures for this course are pre-recorded and available via the course materials page.
This is an advanced course on machine learning, focusing on recent advances in machine learning with relational data and on Bayesian approaches to machine learning. The course is organized and taught as follows:
- Relational learning (Dr. Ceylan): 8 lectures + 1 guest lecture
- Bayesian machine learning (Dr. Baydin): 8 lectures + 1 guest lecture
After studying this course, students will:
- Have knowledge of the different paradigms for performing machine learning and appreciate when each approach is more or less appropriate.
- Understand the definition of a range of neural network models, including graph neural networks.
- Be able to derive and implement optimisation algorithms for these models.
- Understand the foundations of the Bayesian approach to machine learning.
- Be able to construct Bayesian models for data and apply computational techniques to draw inferences from them.
- Have an understanding of how to choose a model to describe a particular type of data.
- Know how to evaluate a learned model in practice.
- Understand the mathematics necessary for constructing novel machine learning solutions.
- Be able to design and implement various machine learning algorithms for a range of real-world applications.
Required background knowledge includes probability theory, linear algebra, continuous mathematics, multivariate calculus, and a basic understanding of graph theory and logic. Students are required to have taken the Machine Learning course. Good programming skills are needed; lecture examples and practicals will be given mainly in Python and PyTorch.
Relational Learning: Lectures 1–8, Dr İsmail İlkan Ceylan
- Lecture 1. Relational data & node embeddings
- Lecture 2. Knowledge graph embedding models
- Lecture 3. Graph neural networks
- Lecture 4. Message passing neural network architectures
- Lecture 5. Expressive power of message passing neural networks
- Lecture 6. Higher-order graph neural networks
- Lecture 7. Message passing neural networks and randomisation
- Lecture 8. Overview of applications of graph neural networks
- Lecture 9. Lecture by William L. Hamilton (Monday, Week 8)
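The message-passing idea underlying Lectures 3–5 can be sketched in a few lines. The following is a hypothetical minimal example (not course code, and all names are illustrative): each node averages its neighbours' features, combines the result with its own state, and applies a learned linear map with a ReLU non-linearity.

```python
import numpy as np

def message_passing_layer(features, adjacency, weight):
    """One round of mean-aggregation message passing (illustrative sketch).

    features:  (n_nodes, d_in) node feature matrix
    adjacency: (n_nodes, n_nodes) binary adjacency matrix (no self-loops)
    weight:    (d_in, d_out) learnable weight matrix
    """
    degrees = adjacency.sum(axis=1, keepdims=True).clip(min=1.0)
    messages = (adjacency @ features) / degrees   # mean over neighbours
    return np.maximum((features + messages) @ weight, 0.0)  # ReLU

# Toy example: a 3-node path graph 0 - 1 - 2 with 2-d node features.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.array([[1., 0.],
              [0., 1.],
              [1., 0.]])
H = message_passing_layer(X, A, np.eye(2))  # identity weight for clarity
```

Real architectures stack several such layers and learn the weights by gradient descent; the aggregation function (mean here) is one of the design choices whose effect on expressive power the lectures examine.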
Bayesian Machine Learning: Lectures 10–17, Dr Atılım Güneş Baydin
- Lecture 10. Machine learning paradigms
- Lecture 11. Bayesian modelling 1
- Lecture 12. Bayesian modelling 2
- Lecture 13. Bayesian inference 1
- Lecture 14. Bayesian inference 2
- Lecture 15. Differentiable programming 1
- Lecture 16. Differentiable programming 2
- Lecture 17. Model learning and variational auto-encoders
- Lecture 18. Lecture by Kyle Cranmer (Tuesday, Week 8)
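To give a flavour of the Bayesian lectures, here is a hypothetical illustration (not course code) of exact inference in the simplest conjugate setting: a Beta prior on a coin's bias updated with Bernoulli observations. Models like this can be solved analytically, which motivates the Monte Carlo and variational methods covered for the intractable cases.

```python
def beta_bernoulli_update(alpha, beta, observations):
    """Return posterior Beta parameters after observing 0/1 outcomes.

    With a Beta(alpha, beta) prior on the success probability, the
    posterior after h heads and t tails is Beta(alpha + h, beta + t).
    """
    heads = sum(observations)
    tails = len(observations) - heads
    return alpha + heads, beta + tails

# Uniform prior Beta(1, 1); observe 3 heads and 1 tail.
a, b = beta_bernoulli_update(1.0, 1.0, [1, 1, 0, 1])
posterior_mean = a / (a + b)  # mean of Beta(4, 2) = 4/6 ≈ 0.667
```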
Update: Upon request, lecture slides are also available without transitions/effects, which makes them more suitable for printing; see the course materials.
Location: All lectures will be delivered online as pre-recorded videos. Arrangements will be made for students to interact with the lecturers and ask questions.
Overview of relational learning and reasoning. Embedding models and knowledge graphs, inductive capacity of embedding models, graph representation learning, graph neural networks, expressive power of message passing neural networks, limitations and extensions. Overview of the Bayesian paradigm and its use in machine learning. Generative models, Bayesian inference, Monte Carlo methods, variational inference, probabilistic programming, model selection and learning, amortized inference, deep generative models, variational autoencoders.
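Among the inference tools listed above, the most basic is plain Monte Carlo estimation of an expectation. A hypothetical sketch (illustrative only, with made-up names) under a distribution we can sample from:

```python
import random

def monte_carlo_expectation(f, sampler, n_samples=100_000):
    """Estimate E[f(X)] by averaging f over independent draws from sampler."""
    return sum(f(sampler()) for _ in range(n_samples)) / n_samples

random.seed(0)  # fixed seed so the run is reproducible
# Example: E[X^2] for X ~ Uniform(0, 1) is exactly 1/3.
estimate = monte_carlo_expectation(lambda x: x * x, random.random)
```

The estimator converges at rate O(n^{-1/2}) regardless of dimension, which is why Monte Carlo methods (and their importance-sampled and variational refinements) are central to Bayesian computation.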
- William L. Hamilton, “Graph Representation Learning”, Synthesis Lectures on Artificial Intelligence and Machine Learning, Vol. 14, No. 3, pp. 1–159, 2020. https://www.cs.mcgill.ca/~wlh/grl_book/
- Christopher M. Bishop, “Pattern Recognition and Machine Learning”, Springer, 2006
- Marc Peter Deisenroth, A. Aldo Faisal, and Cheng Soon Ong, “Mathematics for Machine Learning”, Cambridge University Press, 2020 https://mml-book.github.io/