Computational Learning Theory: 2014-2015
Lecturer:
Degrees: Schedule B2 (CS&P) — Computer Science and Philosophy; Schedule B2 — Computer Science; Schedule B2 — Mathematics and Computer Science
Term: Michaelmas Term 2014 (16 lectures)
Overview
Machine learning studies automatic methods for identifying patterns in complex data and for making accurate predictions based on past observations. From predicting which movies a customer will like to assigning credit ratings, systems that learn are becoming increasingly widespread and effective. Computational learning theory aims to develop rigorous mathematical foundations for machine learning, in order to provide guarantees about the behaviour of learning algorithms, to identify common methods underlying effective learning procedures, and to understand the inherent difficulty of learning problems. To address such issues we will bring together notions from probability theory, optimisation, online algorithms, game theory, and combinatorics.
Learning outcomes
On completing this course, students should:
- understand key models of supervised and unsupervised learning and be able to formulate specific learning problems in these models;
- understand a variety of learning algorithms and recognize when they are applicable.
Prerequisites
Students should have experience of reading and writing mathematical proofs. Familiarity with calculus, probability theory, and linear algebra (to the level of the undergraduate Computer Science degree) is essential.
Synopsis
- Introduction, PAC model [2 lectures]
- Sample complexity, the growth function, VC dimension, lower bounds [3 lectures]
- Online learning, mistake bounds, the Perceptron and Winnow algorithms [2 lectures]
- Learning from expert advice, regret bounds, Weighted Majority algorithm, Minimax Theorem [3 lectures]
- Weak learning, adaptive boosting, margin bounds [2 lectures]
- Support Vector Machines [2 lectures]
- Kernels [1 lecture]
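As a flavour of the online-learning material above, the following is a minimal sketch of the classic Perceptron algorithm with its mistake-driven update; the function name and the toy data set are illustrative and not taken from the course materials.

```python
import numpy as np

def perceptron(X, y, max_epochs=100):
    """Classic Perceptron: on each mistake, add y_t * x_t to the weights.

    X: (n, d) array of examples; y: labels in {-1, +1}.
    Returns (weight vector, total number of mistakes made).
    """
    n, d = X.shape
    w = np.zeros(d)
    mistakes = 0
    for _ in range(max_epochs):
        clean_pass = True
        for x_t, y_t in zip(X, y):
            if y_t * np.dot(w, x_t) <= 0:  # mistake (or on the boundary)
                w = w + y_t * x_t          # Perceptron update
                mistakes += 1
                clean_pass = False
        if clean_pass:                     # no mistakes in a full pass: done
            break
    return w, mistakes

# Illustrative linearly separable data: label = sign of first coordinate.
X = np.array([[2.0, 1.0], [1.5, -1.0], [-2.0, 0.5], [-1.0, -1.5]])
y = np.array([1, 1, -1, -1])
w, mistakes = perceptron(X, y)
```

On linearly separable data with margin γ and examples of norm at most R, the Perceptron mistake bound covered in the course guarantees at most (R/γ)² mistakes regardless of the number of rounds.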
Syllabus
- PAC learning: sample complexity, VC-dimension
- Online learning: mistake bounds, the Perceptron and Winnow algorithms
- Learning from expert advice: deterministic and randomized weighted majority, follow the leader
- Weak learning and boosting
- Support vector machines, kernels
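The deterministic weighted majority algorithm listed in the syllabus can be sketched briefly: keep a weight per expert, predict by weighted vote, and multiplicatively shrink the weights of experts that err. The function name, penalty parameter, and toy run below are illustrative assumptions, not course code.

```python
import numpy as np

def weighted_majority(expert_preds, outcomes, eta=0.5):
    """Deterministic Weighted Majority over binary predictions.

    expert_preds: (T, n) array of {0,1} predictions, one row per round;
    outcomes: length-T array of true {0,1} outcomes.
    Returns (final expert weights, number of mistakes by the algorithm).
    """
    T, n = expert_preds.shape
    w = np.ones(n)
    mistakes = 0
    for t in range(T):
        preds = expert_preds[t]
        vote_1 = w[preds == 1].sum()
        vote_0 = w[preds == 0].sum()
        guess = 1 if vote_1 >= vote_0 else 0  # weighted majority vote
        if guess != outcomes[t]:
            mistakes += 1
        w[preds != outcomes[t]] *= (1 - eta)  # penalise wrong experts
    return w, mistakes

# Toy run: expert 0 is always correct, experts 1 and 2 always wrong.
preds = np.array([[1, 0, 0], [0, 1, 1], [1, 0, 0], [0, 1, 1]])
outcomes = np.array([1, 0, 1, 0])
w, mistakes = weighted_majority(preds, outcomes)
```

The course's mistake bound for this algorithm says the number of mistakes is O(m + log n), where m is the number of mistakes of the best expert in hindsight and n is the number of experts; the toy run above illustrates the wrong experts' weights decaying geometrically while the good expert's weight is untouched.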
Reading list
Primary Text
Secondary Texts
- Michael Kearns and Umesh Vazirani. An Introduction to Computational Learning Theory, MIT Press, 1994.
Feedback
Students are formally asked for feedback at the end of the course. Students can also submit feedback at any point; feedback submitted this way goes to the Head of Academic Administration and is dealt with confidentially when passed on further. All feedback is welcome.