Mathematics for Computer Science and Philosophy: 2021-2022
This is a second/third-year course for Computer Science & Philosophy students. Students attend a subset of the first-year lectures in Linear Algebra and Continuous Mathematics, and will thus be equipped with the knowledge needed for courses in Machine Learning.
This course is part of both the Preliminary Examination for Computer Science students and the Final Honour School for Computer Science and Philosophy students. Note that the questions set on this course in the Final Honour School in Computer Science and Philosophy will be more challenging than those set for the Preliminary Examination in Computer Science; students should bear this in mind when attempting sample and past exam questions.
Classes: Colleges will arrange problem classes for their students, as with the corresponding first-year courses: three classes for Linear Algebra and two for Continuous Mathematics.
Examinations: The FHS examination will consist of three questions, of which you should answer two. One question will be on Linear Algebra, one on Continuous Mathematics, and one will be on either one of these topics or a combination of both topics.
Lectures 1-20 cover the syllabus for the Preliminary Examination in Computer Science.
Lectures 1-17 cover the syllabus for the Final Honour School in Computer Science and Philosophy. They follow closely the corresponding chapters from the textbook.
Lectures 1-3 Linear Systems: solving linear systems; linear geometry; reduced echelon form.
Lectures 4-7 Vector Spaces: definition; linear independence; basis and dimension.
Lectures 8-13 Maps Between Spaces: isomorphisms; homomorphisms; computing linear maps; matrix operations; change of basis; projection.
Lectures 14-15 Determinants: definition; geometry of determinants; Laplace's formula.
Lectures 16-17 Similarity: definition; eigenvectors and eigenvalues.
Lectures 18-20 Least Squares and Factorisations: least squares; LU factorisation; QR factorisation; singular value decomposition.
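As a small illustration of the least-squares topic above (not part of the official course materials), the sketch below fits a line y = a + b·x by solving the 2×2 normal equations directly in plain Python; the data points are made up for the example.

```python
# Illustrative sketch: least-squares fit of a line y = a + b*x by solving
# the 2x2 normal equations (A^T A) c = A^T y via Cramer's rule.
# The data here is invented for illustration (it lies exactly on y = 1 + 2x).
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]

n = len(xs)
sx = sum(xs)
sxx = sum(x * x for x in xs)
sy = sum(ys)
sxy = sum(x * y for x, y in zip(xs, ys))

# Normal equations: [[n, sx], [sx, sxx]] @ [a, b] = [sy, sxy]
det = n * sxx - sx * sx
a = (sy * sxx - sx * sxy) / det  # intercept
b = (n * sxy - sx * sy) / det    # slope
```

For larger or ill-conditioned problems, the QR factorisation mentioned above is the numerically preferred route rather than forming the normal equations explicitly.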
*[3 lectures] Derivatives, partial derivatives, differentiation with respect to a vector, gradient, Hessian and Jacobian. Taylor's theorem in 1 dimension (Lagrange remainder), Taylor's theorem in n dimensions (remainder only briefly). Examples.
*[3 lectures] Optimization in 1 and n dimensions. Classification of turning points via Taylor's theorem. Convexity. Constrained optimization: Lagrange multipliers. Examples.
[2-3 lectures] Numerical integration in 1 dimension: midpoint and Simpson’s rules, complexity and error analysis. Briefly, integration in n dimensions and Monte Carlo methods. Examples.
*[1 lecture] Floating-point numbers and rules of thumb for accuracy in practice. Convergence rates of iterative methods.
*[2-3 lectures] Numerical root finding in 1 dimension by bisection, Newton's method, secant method. Root finding in n dimensions by Newton's method, and briefly quasi-Newton methods. Complexity and error analysis. Examples.
[2-3 lectures] Numerical optimization in 1 and n dimensions: root finding for gradient and gradient descent methods. Complexity and error analysis. Examples.
[1-2 lectures] Applications.
Sections marked * cover the syllabus for the Continuous Maths part of Mathematics for Computer Science and Philosophy.
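To illustrate the Taylor's theorem material (a sketch, not part of the official course materials), the snippet below compares Taylor polynomials of exp about 0 with the true value, showing the remainder shrinking as the degree grows.

```python
import math

def taylor_exp(x, n):
    """Degree-n Taylor polynomial of exp about 0: sum of x**k / k!."""
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

# The absolute error at x = 1 drops rapidly with the degree n,
# in line with the Lagrange remainder bound e / (n + 1)!.
errors = [abs(taylor_exp(1.0, n) - math.e) for n in (2, 4, 8)]
```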
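As an illustration of the numerical integration topic (again a sketch, with an invented test integrand), here is composite Simpson's rule, whose error is O(h⁴) for smooth integrands, where h is the subinterval width.

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule over [a, b] with n (even) subintervals."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        # interior points alternate weights 4, 2, 4, 2, ...
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

# The exact value of the integral of sin over [0, pi] is 2; even n = 10
# subintervals already give several correct digits.
approx = simpson(math.sin, 0.0, math.pi, 10)
```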
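The floating-point lecture's central rule of thumb can be shown in two lines (an illustrative sketch): binary doubles cannot represent 0.1 exactly, so exact equality tests on computed reals are unreliable and a tolerance should be used instead.

```python
# 0.1 and 0.2 are rounded on entry, so the sum carries rounding error.
a = 0.1 + 0.2
exact_equal = (a == 0.3)           # False: exact comparison fails
close_enough = abs(a - 0.3) < 1e-9  # True: tolerance-based comparison
```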
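For the root-finding topic, the sketch below (illustrative only) implements Newton's method in one dimension, which converges quadratically near a simple root, and applies it to the hypothetical example f(x) = x² − 2.

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: iterate x <- x - f(x)/f'(x) until |f(x)| < tol."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / fprime(x)
    return x

# sqrt(2) as the positive root of f(x) = x**2 - 2, starting from x0 = 1
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

Bisection and the secant method follow the same pattern but trade the derivative for slower (linear or superlinear) convergence.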
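Finally, for the numerical optimization topic, here is a minimal fixed-step gradient descent sketch (the objective and step size are invented for illustration): repeatedly step against the gradient until its norm is small.

```python
def gradient_descent(grad, x0, step=0.1, tol=1e-8, max_iter=10000):
    """Fixed-step gradient descent given the gradient of the objective."""
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        if sum(gi * gi for gi in g) ** 0.5 < tol:
            break
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# Minimise f(x, y) = (x - 1)**2 + 2 * (y + 2)**2, whose gradient is
# (2*(x - 1), 4*(y + 2)); the unique minimiser is (1, -2).
xmin = gradient_descent(lambda v: [2 * (v[0] - 1), 4 * (v[1] + 2)],
                        [0.0, 0.0])
```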
Vector spaces and subspaces. Matrices. Inverse matrices. Solution of linear systems. Elementary matrix factorisations. Eigenvalues and eigenvectors.
Derivatives, partial derivatives, differentiation with respect to a vector; gradient, Hessian, Jacobian. Taylor’s theorem in 1- and n-dimensions, with Lagrange remainder. Optimization in 1- and n-dimensions, classification of stationary points. Constrained optimization and the method of Lagrange multipliers. Iterative numerical methods and rates of convergence. Methods for numerical root finding in 1- and n-dimensions, complexity and (simple cases of) error analysis.
Linear Algebra: Jim Hefferon, Linear Algebra (the course textbook).
Continuous Maths: Please see the course materials page for Continuous Maths.