Advanced Topics in Machine Learning: 2021–2022

Lecturers

İsmail İlkan Ceylan, Jiarui Gan, and Yarin Gal

Degrees

  • Computer Science and Philosophy: Schedule C1 (CS&P)
  • Computer Science: Schedule C1
  • Mathematics and Computer Science: Schedule C1
  • MSc in Advanced Computer Science: Schedule II, Hilary Term

Overview

This is an advanced course on machine learning, focusing on recent advances in machine learning with relational data and on Bayesian approaches to machine learning. The course is organized and taught as follows:

  • Relational learning (İsmail İlkan Ceylan): 9 lectures
  • Bayesian machine learning (Jiarui Gan): 5 lectures
  • Bayesian deep learning (Yarin Gal): 4 lectures

Recent techniques, particularly those based on neural networks, have achieved remarkable progress in these fields, attracting a great deal of commercial and academic interest. The course will introduce the definitions of the relevant machine learning models, discuss their mathematical underpinnings, and demonstrate how to train them effectively in practice. The coursework will be based on the reproduction or extension of a recent machine learning paper, with students working in teams to accomplish this. Each team will tackle a separate paper, with available topics including embedding models, graph neural networks, gradient-based Bayesian inference methods, and deep generative models.
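To give one concrete flavour of the embedding-model topic, a TransE-style knowledge graph embedding represents entities and relations as vectors and scores a triple (head, relation, tail) by how close head + relation lands to tail. A minimal illustrative sketch in plain Python (the toy entities, vectors, and function name are our own assumptions, not course material):

```python
import math

def transe_score(h, r, t):
    """TransE-style plausibility score for a triple (head, relation, tail):
    the negative Euclidean distance between h + r and t.
    Higher (less negative) means more plausible."""
    return -math.sqrt(sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)))

# Toy 2-d embeddings: the relation roughly translates city -> country.
paris = [0.9, 0.1]
berlin = [0.2, 0.3]
france = [1.0, 1.0]
capital_of = [0.1, 0.9]

# A true triple should score higher than a corrupted one.
true_score = transe_score(paris, capital_of, france)
false_score = transe_score(berlin, capital_of, france)
```

Training such a model amounts to learning the vectors so that observed triples score higher than corrupted ones.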

Learning outcomes

After studying this course, students will:

  • Have knowledge of the different paradigms for performing machine learning and appreciate when different approaches will be more or less appropriate.
  • Understand the definition of a range of neural network models, including graph neural networks.
  • Be able to derive and implement optimisation algorithms for these models.
  • Understand the foundations of the Bayesian approach to machine learning.
  • Be able to construct Bayesian models for data and apply computational techniques to draw inferences from them.
  • Have an understanding of how to choose a model to describe a particular type of data.
  • Know how to evaluate a learned model in practice.
  • Understand the mathematics necessary for constructing novel machine learning solutions.
  • Be able to design and implement various machine learning algorithms in a range of real-world applications.

Prerequisites

Required background knowledge includes probability theory, linear algebra, continuous mathematics, multivariate calculus, and a basic understanding of graph theory and logic. Students are required to have taken a machine learning course. Good programming skills are needed; lecture examples and practicals will be given mainly in Python and PyTorch.

Synopsis

Relational Learning: Lectures 1–9, İsmail İlkan Ceylan

  • Lecture 1. Relational data & node embeddings
  • Lecture 2. Knowledge graph embedding models
  • Lecture 3. Graph neural networks
  • Lecture 4. Message passing neural network architectures
  • Lecture 5. Expressive power of message passing neural networks
  • Lecture 6. Higher-order graph neural networks
  • Lecture 7. Message passing neural networks: unique features and randomization
  • Lecture 8. Generative graph neural networks
  • Lecture 9. Overview of applications of graph neural networks
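As a taste of the message-passing material in Lectures 3–5, a single message-passing layer updates each node's representation by aggregating its neighbours' representations and combining the result with the node's own. A minimal sum-aggregation sketch in plain Python (the graph, weights, and function name are illustrative assumptions, not lecture code):

```python
def message_passing_step(adj, features, w_self=0.5, w_neigh=0.5):
    """One round of message passing with sum aggregation:
    h_v' = w_self * h_v + w_neigh * sum of h_u over neighbours u of v.
    adj: dict mapping node -> list of neighbour nodes.
    features: dict mapping node -> feature vector (list of floats)."""
    updated = {}
    for v, h_v in features.items():
        agg = [0.0] * len(h_v)
        for u in adj.get(v, []):
            for i, x in enumerate(features[u]):
                agg[i] += x
        updated[v] = [w_self * a + w_neigh * b for a, b in zip(h_v, agg)]
    return updated

# Toy path graph a - b - c with scalar features.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
feats = {"a": [1.0], "b": [2.0], "c": [3.0]}
out = message_passing_step(adj, feats)
```

Real architectures replace the fixed scalar weights with learned transformations and stack several such layers; the expressive-power results in Lecture 5 characterise what this scheme can and cannot distinguish.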

Bayesian Machine Learning: Lectures 10–18, Jiarui Gan and Yarin Gal

  • Lecture 10. Machine learning paradigms
  • Lecture 11. Bayesian modeling 1
  • Lecture 12. Bayesian modeling 2
  • Lecture 13. Bayesian inference 1
  • Lecture 14. Bayesian inference 2
  • Lectures 15 & 16. Bayesian deep learning
  • Lectures 17 & 18. Bayesian deep learning
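As a small preview of the Bayesian modelling and inference lectures, the Beta-Bernoulli model is the standard fully worked example: with a Beta(a, b) prior on a coin's heads probability and k heads observed in n flips, the posterior is Beta(a + k, b + n − k) in closed form. A self-contained sketch (the prior and data are illustrative):

```python
def beta_bernoulli_posterior(a, b, flips):
    """Conjugate update: Beta(a, b) prior on the heads probability,
    Bernoulli likelihood per flip (1 = heads, 0 = tails).
    Returns the posterior Beta parameters and the posterior mean."""
    k = sum(flips)                    # number of heads observed
    n = len(flips)
    a_post, b_post = a + k, b + (n - k)
    post_mean = a_post / (a_post + b_post)
    return a_post, b_post, post_mean

# Uniform Beta(1, 1) prior; observe 7 heads in 10 flips.
a_post, b_post, mean = beta_bernoulli_posterior(1, 1, [1, 1, 1, 0, 1, 0, 1, 1, 0, 1])
```

Most models of interest have no such closed form, which motivates the approximate inference techniques covered later in the course.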

Syllabus

Overview of relational learning and reasoning. Embedding models and knowledge graphs, inductive capacity of embedding models, graph representation learning, graph neural networks, expressive power of message passing neural networks, limitations and extensions. Overview of the Bayesian paradigm and its use in machine learning. Generative models, Bayesian inference, Monte Carlo methods, variational inference, probabilistic programming, model selection and learning, amortized inference, deep generative models, variational autoencoders. 
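Where no closed-form posterior exists, the Monte Carlo methods in the syllabus approximate an intractable expectation by averaging a function over random samples. A minimal sketch in plain Python (the uniform example is ours, chosen because the exact answer, E[X^2] = 1/3, is known):

```python
import random

def monte_carlo_expectation(sample, f, n=100_000, seed=0):
    """Estimate E[f(X)] by averaging f over n draws from sample(rng).
    sample: function taking an RNG and returning one draw of X."""
    rng = random.Random(seed)
    return sum(f(sample(rng)) for _ in range(n)) / n

# E[X^2] for X ~ Uniform(0, 1); the exact value is 1/3.
est = monte_carlo_expectation(lambda rng: rng.random(), lambda x: x * x)
```

The estimate's error shrinks as 1/sqrt(n) regardless of dimension, which is why sampling-based inference scales to models where exact integration is hopeless.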

Reading list

  • William L. Hamilton. Graph Representation Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, Vol. 14, No. 3, pp. 1–159, 2020. https://www.cs.mcgill.ca/~wlh/grl_book/
  • Christopher M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
  • Marc Peter Deisenroth, A. Aldo Faisal, and Cheng Soon Ong. Mathematics for Machine Learning. Cambridge University Press, 2020. https://mml-book.github.io/

Feedback

Students are formally asked for feedback at the end of the course. Students can also submit feedback at any point through the department's online feedback form; feedback submitted this way goes to the Head of Academic Administration and is treated confidentially when passed on further. All feedback is welcome.

Taking our courses

This form is not to be used by students studying for a degree in the Department of Computer Science, or by visiting students who are registered for Computer Science courses.

Other matriculated University of Oxford students who are interested in taking this or other courses in the Department of Computer Science must complete the online form by 17:00 on Friday of 0th week of the term in which the course is taught. Late requests, and requests sent by email, will not be considered. All requests must be approved by the relevant Computer Science departmental committee and can only be submitted using this form.