
Advanced options in Computer Science

In the fourth year, students spend 62% of their time on advanced options.

The list of options varies from year to year according to the research interests of teaching staff, but the following examples illustrate courses that have been offered recently.

Advanced Security

The Advanced Security course is designed to bring students towards the research boundaries in computer security, covering contemporary topics in depth. It is split into two modules:

  • Bootstrapping Security. Modern technology brings many opportunities for connecting devices via Wi-Fi, Bluetooth, or the internet. In many cases we want to make these connections secure: authenticating the identity of the device connected to, keeping communications secret, and carrying out (e.g. financial) transactions over them. In this module we examine the theory and practice of doing this both by conventional means (using a pre-existing security infrastructure) and by methods based on such things as co-location and human judgement.
  • Information Hiding. Steganography is the art and science of hiding a secret payload inside an innocent cover, typically digital media such as images, movies, or mp3 audio. The course covers the definitions, practice, and theory of hidden information in a way accessible to newcomers. We will cover the details of getting inside digital images in order to hide information, the different sorts of embedding operations commonly used, and how hidden information can be detected. Some linear algebra allows for more efficient hiding. Finally, we see some mathematical theory which explains how the amount of hidden data grows with the space in which it can be hidden.
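
As a concrete illustration of the embedding operations mentioned above, here is a minimal sketch of least-significant-bit (LSB) replacement on raw pixel values. It is illustrative only: real steganographic systems operate on image file formats and use far subtler embedding, and the function names here are not taken from any particular tool.

```python
# A toy sketch of LSB embedding: the cover is a flat list of 8-bit pixel
# values; each payload bit overwrites the least significant bit of one pixel.

def embed(pixels, bits):
    """Hide one payload bit in the LSB of each cover pixel."""
    stego = list(pixels)
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & ~1) | b   # clear the LSB, then set it to the payload bit
    return stego

def extract(pixels, n):
    """Recover the first n hidden bits."""
    return [p & 1 for p in pixels[:n]]

cover = [203, 17, 94, 128, 255, 6]
payload = [1, 0, 1, 1]
stego = embed(cover, payload)
assert extract(stego, len(payload)) == payload
```

Because only the lowest bit of each pixel changes, the stego image is visually indistinguishable from the cover; detecting such changes statistically is exactly the steganalysis problem the module covers.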

Algorithmic Foundations of Collective Decision Making

The purpose of this module is to explore collective decision making from an algorithmic point of view. We study settings where groups of agents need to make a joint decision by aggregating preferences of individual group members. Examples include (1) voting, i.e., selecting one or more candidates to represent the group, or one or more projects to be implemented, (2) coalition formation, where agents need to split into teams and have preferences over teams they can be part of, (3) fair allocation, i.e., distributing a set of items (or a single divisible item) among the agents in a fair and efficient way. We will consider both normative properties (such as axioms and fairness requirements) and algorithmic aspects of the proposed solutions (polynomial-time algorithms and NP-hardness results).
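
The aggregation problem can be made concrete with a small sketch. The ballots below are illustrative, and the two rules shown (plurality and Borda count, both standard voting rules) elect different candidates from the same preference profile, which is exactly the kind of normative tension the module studies.

```python
# Two classic voting rules over ranked ballots. Each ballot lists candidates
# from most to least preferred; the profile is chosen so the rules disagree.

from collections import Counter

ballots = (
    [["a", "b", "c"]] * 3 +   # three voters: a > b > c
    [["b", "c", "a"]] * 2 +   # two voters:   b > c > a
    [["c", "b", "a"]] * 2     # two voters:   c > b > a
)

def plurality(ballots):
    """Winner is the candidate ranked first by the most voters."""
    return Counter(b[0] for b in ballots).most_common(1)[0][0]

def borda(ballots):
    """Each ballot gives m-1 points to its top choice, m-2 to the next, etc."""
    m = len(ballots[0])
    scores = Counter()
    for b in ballots:
        for rank, cand in enumerate(b):
            scores[cand] += m - 1 - rank
    return max(scores, key=scores.get)

assert plurality(ballots) == "a"   # a has the most first places (3)
assert borda(ballots) == "b"       # but b is broadly preferred (score 9 vs 6)
```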

Bayesian Statistical Probabilistic Programming

Statistical probabilistic programming (SPP) is a general framework for expressing probabilistic models as computer programs. SPP systems are equipped with implementations of generic inference algorithms for answering queries about these models, such as posterior inference and marginalisation. By providing these algorithms, SPP systems enable data scientists to focus on the design of good models, utilising their domain knowledge. The task of constructing efficient generic inference engines can be left to researchers with expertise in statistical machine learning and programming languages.
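
The model-versus-inference separation can be illustrated with a toy example. The sketch below answers a posterior query by hand-rolled likelihood weighting rather than a real SPP system (systems such as Stan or Pyro provide such generic engines); the model, the data, and all numbers are illustrative.

```python
# A generative model (coin with unknown bias, uniform prior) plus a generic
# inference routine (likelihood weighting, a form of importance sampling).

import random

def model():
    # prior: coin bias drawn uniformly from [0, 1]
    return random.random()

def likelihood(bias, data):
    p = 1.0
    for flip in data:
        p *= bias if flip else (1.0 - bias)
    return p

data = [1, 1, 1, 0, 1]  # observed flips: four heads, one tail

# likelihood weighting: sample the prior, weight each sample by the likelihood
random.seed(0)
samples = [model() for _ in range(100_000)]
weights = [likelihood(b, data) for b in samples]
posterior_mean = sum(b * w for b, w in zip(samples, weights)) / sum(weights)

# with a uniform (Beta(1,1)) prior, the exact posterior is Beta(5,2),
# whose mean is 5/7 ~= 0.714
assert abs(posterior_mean - 5 / 7) < 0.02
```

An SPP system lets the data scientist write only the model and the query; the weighting loop above is exactly the part the system's inference engine supplies.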

Categories, Proofs and Processes

Category Theory is a powerful mathematical formalism which has become an important tool in modern mathematics, logic and computer science. One main idea of Category Theory is to study mathematical 'universes', collections of mathematical structures and their structure-preserving transformations, as mathematical structures in their own right, i.e. categories - which have their own structure-preserving transformations (functors). This is a very powerful perspective, which allows many important structural concepts of mathematics to be studied at the appropriate level of generality, and brings many common underlying structures to light, yielding new connections between apparently different situations.
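
The functor idea has a tiny programming-language realisation that may help ground it: mapping over lists sends each function f : A → B to a function on lists, and this assignment preserves identities and composition, which are exactly the functor laws.

```python
# Mapping over lists as a functor: check the two functor laws on an example.

def fmap(f):
    """Lift a function on elements to a function on lists."""
    return lambda xs: [f(x) for x in xs]

def compose(g, f):
    return lambda x: g(f(x))

identity = lambda x: x
double = lambda x: 2 * x
inc = lambda x: x + 1

xs = [1, 2, 3]
# F(id) = id
assert fmap(identity)(xs) == identity(xs)
# F(g . f) = F(g) . F(f)
assert fmap(compose(double, inc))(xs) == compose(fmap(double), fmap(inc))(xs)
```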

Computer Vision

This is an advanced course in modern computer vision and machine learning. It contains fundamental concepts from classical computer vision: filtering, matching, indexing and 3D computer vision. On top of that, a large portion of the course focuses on current computer vision methodologies and problems, which build on top of deep learning techniques: detection, segmentation, generation, and vision and language models. This course will introduce the fundamental mathematical concepts behind these tasks and how they can be integrated into modern machine-learning models. The taught material and assessment include both theoretical derivations as well as applied implementations, and students are expected to be proficient with both.
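
The filtering mentioned among the classical topics can be sketched in a few lines. The example below is an illustrative 2D correlation with a 3x3 box-blur kernel, written with plain lists for clarity; real vision code uses optimised libraries.

```python
# "Valid" 2D filtering of a grayscale image (list of rows) with a small kernel.

def filter2d(img, kernel):
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(img) - kh + 1, len(img[0]) - kw + 1   # valid-mode output size
    return [[sum(img[i + u][j + v] * kernel[u][v]
                 for u in range(kh) for v in range(kw))
             for j in range(ow)] for i in range(oh)]

box = [[1 / 9] * 3 for _ in range(3)]   # 3x3 averaging (box blur) kernel
img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
blurred = filter2d(img, box)
# every 3x3 window contains four 9s, so each output value is 36/9 = 4
assert all(abs(v - 4.0) < 1e-9 for row in blurred for v in row)
```

The same sliding-window computation, with learned rather than fixed kernels, is the convolution at the heart of the deep-learning part of the course.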

Computational Biology

This course is intended for students who want to understand modern computational (molecular/structural) biology. The course will provide an introduction to the central dogma of molecular biology and will cover fundamental methods for sequence and structure analysis as well as some essential concepts from statistical mechanics. It will then discuss algorithmic approaches for deciphering the relationship between sequence and structure and techniques to model biomolecular structures. The course will present examples for the regulation of the information flow in molecular biology, the effect of epigenetics and computational methods facilitating genome editing applications.
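
The central dogma's first steps lend themselves to a small sketch. The example below transcribes DNA to RNA using the common coding-strand convention and translates codons with a deliberately tiny, partial codon table (AUG, UUU and UAA carry their real assignments; a full table has 64 entries).

```python
# Transcription and translation in miniature.

def transcribe(dna):
    """Coding-strand convention: RNA is the DNA sequence with T -> U."""
    return dna.replace("T", "U")

# Tiny partial codon table: AUG = methionine (M), UUU = phenylalanine (F),
# UAA = stop (None). A real table maps all 64 codons.
CODONS = {"AUG": "M", "UUU": "F", "UAA": None}

def translate(rna):
    protein = []
    for i in range(0, len(rna) - 2, 3):
        aa = CODONS[rna[i:i + 3]]
        if aa is None:          # stop codon: translation ends
            break
        protein.append(aa)
    return "".join(protein)

assert transcribe("ATGTTTTAA") == "AUGUUUUAA"
assert translate("AUGUUUUAA") == "MF"
```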

Computational Game Theory

Game theory is the mathematical theory of strategic interactions between self-interested agents. It provides a range of models for representing strategic interactions and, associated with these, a family of solution concepts which attempt to characterise the rational outcomes of games. Game theory is important to computer science for several reasons. First, interaction is a fundamental topic in computer science, and if it is assumed that system components are self-interested, then the models and solution concepts of game theory seem to provide an appropriate framework with which to model such systems. Second, the problem of computing with the solution concepts proposed by game theory raises important challenges for computer science, which test the boundaries of current algorithmic techniques. This course aims to introduce the key concepts of game theory for a computer science audience, emphasising both the applicability of game-theoretic concepts in a computational setting and the role of computation in game-theoretic problems. The course assumes no prior knowledge of game theory. Its aims are threefold:

  1. to introduce the key models and solution concepts of non-cooperative and cooperative game theory;
  2. to introduce the issues that arise when computing with game-theoretic solution concepts, the main approaches to overcoming them, and the role that computation plays in game theory;
  3. to introduce a research-level topic in computational game theory.
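
The central solution concept can be made concrete with a short sketch: enumerating the pure-strategy Nash equilibria of a two-player game given as payoff matrices. The Prisoner's Dilemma payoffs used here are the textbook ones; the function name is illustrative.

```python
# Find all cells of a bimatrix game where neither player gains by deviating
# alone: the pure-strategy Nash equilibria.

def pure_nash(A, B):
    """A[i][j], B[i][j]: payoffs to row and column player at cell (i, j)."""
    eq = []
    for i in range(len(A)):
        for j in range(len(A[0])):
            best_row = all(A[i][j] >= A[k][j] for k in range(len(A)))
            best_col = all(B[i][j] >= B[i][l] for l in range(len(A[0])))
            if best_row and best_col:
                eq.append((i, j))
    return eq

# Prisoner's Dilemma: strategy 0 = cooperate, 1 = defect
A = [[3, 0], [5, 1]]   # row player's payoffs
B = [[3, 5], [0, 1]]   # column player's payoffs
assert pure_nash(A, B) == [(1, 1)]   # mutual defection is the unique equilibrium
```

The exhaustive check above is exponential in the number of players' strategies; the complexity of computing equilibria in general games is precisely one of the computational challenges the course addresses.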

Computational Learning Theory

Machine learning studies automatic methods for identifying patterns in complex data and for making predictions based on past observations. In this course, we develop rigorous mathematical foundations of machine learning, in order to provide guarantees about the behaviour of learning algorithms and also to understand the inherent difficulty of learning problems. The course will begin by providing a statistical and computational toolkit, including concentration inequalities, fundamental algorithms and methods to analyse learning algorithms. We will cover questions such as when we can generalise well from limited amounts of data, how we can develop algorithms that are computationally efficient, and how to understand statistical and computational trade-offs in learning algorithms. We will also discuss new models designed to address relevant practical questions of the day, such as learning with limited memory, communication, privacy, and labelled and unlabelled data. In addition to core concepts from machine learning, we will make connections to principal ideas from information theory, game theory and optimisation. This is a mathematical course, with several definitions, theorems and proofs. It is expected that students will have some familiarity with probability, algorithms, and basic complexity. A formal course on machine learning is not required; however, students are more likely to appreciate the contents of the course if they have some familiarity with machine learning.
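
As a taste of the concentration inequalities in the toolkit, the sketch below checks Hoeffding's bound, P(|mean − p| ≥ t) ≤ 2·exp(−2nt²) for n i.i.d. {0,1} samples, against simulation. The parameters are illustrative.

```python
# Empirically compare the frequency of large deviations of a sample mean
# with the Hoeffding upper bound.

import math
import random

def empirical_deviation_rate(p, n, t, trials=20_000, seed=1):
    rng = random.Random(seed)
    bad = 0
    for _ in range(trials):
        mean = sum(rng.random() < p for _ in range(n)) / n
        if abs(mean - p) >= t:
            bad += 1
    return bad / trials

p, n, t = 0.5, 100, 0.1
hoeffding = 2 * math.exp(-2 * n * t * t)      # ~= 0.271
observed = empirical_deviation_rate(p, n, t)  # in fact well below the bound
assert observed <= hoeffding
```

Bounds of this kind are what turn "the sample mean is probably close to the truth" into the quantitative generalisation guarantees the course proves.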

Computational Medicine

This course is intended for students who want to understand modern computational medicine through a focus on computational cardiology. The course will provide an introduction to the challenges and techniques used in computational cardiology, and will cover fundamental methods for modelling and simulation of patient-specific multiscale and multi-physics human hearts in health and disease and their use for in silico testing of therapies. It will cover both functional and anatomical modelling from multimodality data, and important concepts broadly applicable to all areas of medicine in industry, academia and regulatory practice such as the digital twin vision, in silico trials for therapy evaluation and verification, validation and uncertainty quantification.

Concurrent Algorithms and Data Structures

This is an advanced course on concurrent programming. The course will combine principles and practice. Principles to be studied include correctness conditions for concurrent datatypes, and the relative power of different synchronization operations. More practical topics will include how to implement concurrency primitives - such as locks and monitors - and concurrent datatypes - such as linked lists, queues, and hash tables.

This course is largely independent of the second year Concurrent Programming course, although there are clearly strong links between the two. Concurrent Programming is based upon the message-passing paradigm; much of the emphasis is on using concurrency as a way of structuring large programs. This course will be based upon low-level concurrency primitives, such as compare-and-swap; the emphasis will be on speed. MSc students could take this course in Michaelmas, followed by Concurrent Programming in Hilary.
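
The compare-and-swap primitive mentioned above can be modelled in a few lines. Python exposes no hardware CAS, so the class below simulates its semantics with a lock purely for illustration; languages such as Java (AtomicInteger) and C++ (std::atomic) expose the real atomic instruction. The increment loop shows the standard CAS retry idiom.

```python
# A toy model of compare-and-swap (CAS) semantics, plus the retry-loop idiom
# used to build lock-free counters and datatypes on top of it.

import threading

class AtomicInt:
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()   # stands in for hardware atomicity

    def get(self):
        return self._value

    def compare_and_set(self, expected, new):
        """Atomically set the value to `new` iff it currently equals `expected`."""
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

counter = AtomicInt(0)

def increment(n):
    for _ in range(n):
        while True:                              # CAS retry loop
            old = counter.get()
            if counter.compare_and_set(old, old + 1):
                break                            # our update won the race

threads = [threading.Thread(target=increment, args=(1000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
assert counter.get() == 4000   # no increments lost despite contention
```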

Database Systems Implementation

This course examines the data structures and algorithms underlying database management systems such as Oracle or PostgreSQL. It covers techniques from both research literature and commercial systems. At the end of this course, students should have a good insight into how DBMSs function internally, and understand how to analyse the performance of data-intensive systems. They will become familiar with a variety of programming techniques for large-scale data manipulation, and be able to apply the insights achieved to build the major components of a mini-DBMS.
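
One of the core join operators such a course covers can be sketched briefly: the in-memory hash join builds a hash table on one input and probes it with the other. The relations and attribute names below are illustrative.

```python
# Classic hash join: build a hash table on the (smaller) build input,
# then probe it with each tuple of the probe input.

from collections import defaultdict

def hash_join(build, probe, build_key, probe_key):
    table = defaultdict(list)
    for row in build:                          # build phase: O(|build|)
        table[row[build_key]].append(row)
    out = []
    for row in probe:                          # probe phase: O(|probe|)
        for match in table.get(row[probe_key], []):
            out.append({**match, **row})       # concatenate the matching tuples
    return out

depts = [{"dept_id": 1, "dept": "CS"}, {"dept_id": 2, "dept": "Law"}]
staff = [{"name": "Ada", "dept_id": 1}, {"name": "Grace", "dept_id": 1},
         {"name": "Herbert", "dept_id": 3}]    # dept 3 has no match
joined = hash_join(depts, staff, "dept_id", "dept_id")
assert [r["name"] for r in joined] == ["Ada", "Grace"]
```

A real DBMS extends this idea to inputs larger than memory (partitioned "grace" hash join) and chooses between hash, sort-merge and index joins with a cost-based optimiser.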

Distributed Processes, Types and Programming

This course studies the foundations and type theory of mobile processes and of programming languages for communication and distribution. Specifically, it studies how to program communication through an introduction to a channel-based message-passing process calculus (the pi-calculus), and how to apply its type theory in practice. The course also serves as a brief introduction to message-passing programming languages such as Go (a popular programming language designed at Google, widely used for implementing large distributed systems). The course assumes a basic knowledge of programming languages. Some knowledge of concurrent programming, the lambda-calculus or operational semantics would be useful but is not required.
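
The channel-based style the course studies (and which Go's channels realise directly) can be sketched even in Python, modelling channels with thread-safe queues. The server, the doubling operation and the sentinel convention are all illustrative.

```python
# Channel-style message passing: a server process reads requests from one
# channel and sends replies on another; None is used as a close sentinel.

import queue
import threading

def server(requests: queue.Queue, replies: queue.Queue):
    while True:
        msg = requests.get()     # receive on the request channel (blocks)
        if msg is None:          # sentinel: the channel is being closed
            break
        replies.put(msg * 2)     # send the reply

requests, replies = queue.Queue(), queue.Queue()
t = threading.Thread(target=server, args=(requests, replies))
t.start()

for x in [1, 2, 3]:
    requests.put(x)
results = [replies.get() for _ in range(3)]

requests.put(None)               # close the session
t.join()
assert results == [2, 4, 6]
```

In the pi-calculus the same interaction is written as processes communicating over named channels, and session-type systems of the kind the course covers can guarantee statically that such a protocol never deadlocks or miscommunicates.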

Foundations of Self-Programming Agents

Geometric Deep Learning

The course will appeal to students who want to gain a better understanding of modern deep learning and will present a systematic geometric blueprint allowing them to derive popular deep neural network architectures (CNNs, GNNs, Transformers, etc) from the first principles of symmetry and invariance. The focus will be on general principles that underpin deep learning as well as concrete examples of their realisations and applications. The course will try to tie together topics in geometry, group theory and representation learning, graph theory, and machine learning into a coherent picture. It ideally targets students in CS & Math cohort or CS students with a strong mathematical background.

Graph Representation Learning

This is an advanced course on machine learning with relational data, focusing on the recent advances in the field of graph representation learning. The goal is to provide a systematic coverage of the fundamentals and foundations of graph representation learning. The course will introduce the definitions of the relevant machine learning models (e.g., graph neural networks), discuss their mathematical underpinnings, formally study their properties (e.g., relational inductive bias, expressive power), and demonstrate ways to effectively develop and train such models.
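
The message-passing idea behind graph neural networks, and the relational inductive bias mentioned above, can be sketched in miniature: each node aggregates its neighbours' features with a permutation-invariant sum, then updates its own feature. One round, scalar features, no learned weights.

```python
# One round of sum-aggregation message passing on a small undirected graph.

def message_passing_round(adj, feats):
    """New feature of v = its own feature + sum of its neighbours' features."""
    return {v: feats[v] + sum(feats[u] for u in adj[v]) for v in adj}

adj = {0: [1, 2], 1: [0], 2: [0]}     # a small star graph centred at node 0
feats = {0: 1.0, 1: 2.0, 2: 3.0}
out = message_passing_round(adj, feats)
assert out == {0: 6.0, 1: 3.0, 2: 4.0}

# permutation invariance: reordering a node's neighbour list changes nothing
adj_perm = {0: [2, 1], 1: [0], 2: [0]}
assert message_passing_round(adj_perm, feats) == out
```

Real graph neural networks interleave such aggregation with learned transformations; the invariance checked above is exactly the relational inductive bias studied formally in the course.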

Knowledge Representation & Reasoning

Knowledge Representation and Reasoning (KRR) is the field of Artificial Intelligence concerned with the encoding of knowledge in a machine-understandable way. This knowledge can then be processed automatically by suitable computer programs. KRR is at the core of so-called Semantic Technologies which are being widely deployed in applications.

This course is an up-to-date survey of selected topics in KRR. It starts with a review of the basics of propositional and first-order logic and their use in the context of KRR. The second part of the course describes formal languages for KRR that are based on fragments of first-order logic and surveys the main reasoning techniques typically implemented for those fragments. Finally, the last part of the course surveys the important topic of non-monotonic reasoning. The course also discusses several applications where KRR-based technologies are currently being used.
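
The propositional reasoning reviewed at the start of the course can be sketched as a toy entailment checker: KB ⊨ query holds iff the query is true in every truth assignment satisfying the knowledge base. Representing formulas as Python functions of a model is purely an illustrative encoding.

```python
# Checking propositional entailment by enumerating all truth assignments.

from itertools import product

def entails(kb, query, symbols):
    """KB |= query iff no model makes kb true and query false."""
    for values in product([False, True], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not query(model):
            return False
    return True

# KB: p, and p -> q.   Query: q.   (Modus ponens.)
kb = lambda m: m["p"] and ((not m["p"]) or m["q"])
assert entails(kb, lambda m: m["q"], ["p", "q"])
assert not entails(kb, lambda m: not m["q"], ["p", "q"])
```

Enumeration takes 2^n steps for n symbols; the course's KRR languages are designed as restricted fragments precisely so that such reasoning becomes tractable at scale.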

Law and Computer Science

The legal system is entering a period of profound transformation brought on by new technologies and alternative business models. Legal innovation backed by new technologies can drive down costs, make the delivery of legal services more productive, and facilitate better access to justice for citizens. As AI and digital technology permeate more of our lives, they increasingly become the source of legally significant events. This means that those who study and/or practise law increasingly need to understand the digital context. At the same time, those who study computer science and/or develop software increasingly need to understand the potential legal consequences of design choices. Increasingly, law firms are interested in hiring not just those with legal skills but also those with technical skills, so there are exciting career opportunities for those working at the intersection of law and technology.

This course, jointly offered by the Law Faculty and the Department of Computer Science, will introduce students from both backgrounds to the terrain at the boundaries of their two disciplines. The overarching theme is understanding law and computer science at their intersection.

Probabilistic Model Checking

Probabilistic model checking is a formal technique for analysing systems that exhibit random behaviour. Examples include randomised algorithms, communication and security protocols, computer networks, biological signalling pathways, and many others. The course provides a detailed introduction to these techniques, covering both the underlying theory (Markov chains, Markov decision processes, temporal logics) and its practical application (using the state-of-the-art probabilistic model checking tool PRISM, based here in Oxford). The methods used will be illustrated through a variety of real-life case studies, e.g. the Bluetooth/FireWire protocols and algorithms for contract signing and power management.
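
The core computation behind model checking a Markov chain can be hand-rolled in miniature: the probability of eventually reaching a target state satisfies a linear fixed point, approximated below by iteration. The four-state chain is illustrative; tools such as PRISM solve this, and far richer properties, at scale.

```python
# Reachability probabilities in a discrete-time Markov chain by fixed-point
# iteration: x[s] = 1 if s is the target, else sum_t P(s,t) * x[t].

# From state 0, move to 1 or 2 with probability 1/2 each; state 1 returns to
# 0 or reaches the target 3; state 2 is an absorbing failure state.
P = {
    0: {1: 0.5, 2: 0.5},
    1: {0: 0.5, 3: 0.5},
    2: {2: 1.0},
    3: {3: 1.0},
}

def reach_probability(P, target, iters=200):
    x = {s: 1.0 if s == target else 0.0 for s in P}
    for _ in range(iters):
        x = {s: 1.0 if s == target else
                sum(p * x[t] for t, p in P[s].items())
             for s in P}
    return x

x = reach_probability(P, target=3)
# solving the linear system by hand: x0 = 0.5*x1, x1 = 0.5*x0 + 0.5,
# giving x0 = 1/3 and x1 = 2/3
assert abs(x[0] - 1 / 3) < 1e-6
assert abs(x[1] - 2 / 3) < 1e-6
```

This is the quantitative analogue of reachability in ordinary model checking: instead of asking whether a bad state can be reached, we compute the probability that it is.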

Probability and Computing

Probabilistic techniques have numerous Computer Science applications, including combinatorial optimisation, computational geometry, data structures, networks, and machine learning. Randomised algorithms, which typically guarantee a correct result only with high probability, are often simpler and faster than corresponding deterministic algorithms. Randomisation can also be used to break symmetry and achieve load balancing in parallel and distributed computing. This course introduces students to those ideas in probability theory that are most relevant to Computer Science. This background theory is motivated and illustrated by a wide-ranging series of computer-science applications.
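
A classic example of a randomised algorithm that is simpler and faster than its deterministic counterpart is Freivalds' check of whether A·B = C: multiply by a random 0/1 vector instead of computing A·B, giving O(n²) work per trial with error probability at most 1/2 per trial. The matrices below are illustrative.

```python
# Freivalds' algorithm: verify a matrix product without computing it.

import random

def freivalds(A, B, C, trials=30, seed=7):
    rng = random.Random(seed)
    n = len(A)
    for _ in range(trials):
        r = [rng.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False   # a witness vector: definitely A.B != C
    return True            # A.B = C with probability >= 1 - 2^-trials

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]   # the true product A.B
assert freivalds(A, B, C)
assert not freivalds(A, B, [[19, 22], [43, 51]])   # one wrong entry is caught
```

The one-sided error here is typical: a "no" answer is always correct, and repeating the trial drives the chance of a wrong "yes" down exponentially.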

Quantum Processes and Computation

Physics and computer science were both dominant scientific and technological disciplines of the previous century. Quantum Computer Science aims at combining the two and may come to play a similarly important role in this century. Combining the existing expertise in both fields proves to be a non-trivial but very exciting interdisciplinary journey. Besides the actual issue of building a quantum computer or realising quantum protocols, it involves a fascinating encounter of concepts and formal tools which arose in distinct disciplines. This course provides an interdisciplinary introduction to the emerging field of quantum computer science, explaining basic quantum mechanics (including finite-dimensional Hilbert spaces and their tensor products), quantum entanglement, its structure and its physical consequences (e.g. non-locality, the no-cloning principle), and introduces qubits. We give detailed discussions of some key algorithms and protocols such as Grover's search algorithm and Shor's factorisation algorithm, quantum teleportation and quantum key exchange. At the same time, this course provides an introduction to diagrammatic reasoning. As an entirely diagrammatic presentation of quantum theory and its applications, this course is the first of its kind.
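
The state-vector formalism underlying the qubit can be sketched in a few lines: a qubit is a unit vector in C², gates are unitary matrices, and measurement probabilities are squared amplitudes. The example applies the Hadamard gate, the workhorse of the algorithms named above.

```python
# A one-qubit state-vector simulation with plain lists: apply the Hadamard
# gate to |0> and read off the measurement probabilities.

import math

def apply(gate, state):
    """Matrix-vector product: the action of a gate on a state vector."""
    return [sum(gate[i][j] * state[j] for j in range(len(state)))
            for i in range(len(gate))]

h = 1 / math.sqrt(2)
H = [[h, h], [h, -h]]            # the Hadamard gate

ket0 = [1.0, 0.0]                # the basis state |0>
plus = apply(H, ket0)            # (|0> + |1>) / sqrt(2)
probs = [abs(a) ** 2 for a in plus]
assert all(abs(p - 0.5) < 1e-12 for p in probs)   # a fair quantum coin

# H is its own inverse: applying it twice returns |0>
assert all(abs(a - b) < 1e-12 for a, b in zip(apply(H, plus), ket0))
```

Multi-qubit states live in tensor products of such spaces, which is where entanglement, and the diagrammatic calculus the course develops for reasoning about it, enters the picture.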

Quantum Software

In recent years, quantum computation has moved from a theoretical exercise to a practical one, as limited, small-scale quantum computers have become increasingly available. This course, which follows on from a basic course in quantum computation, addresses the practical challenges of programming these emerging devices.

Uncertainty in Deep Learning

This is an advanced course in machine learning, focusing on recent advances in deep learning specifically such as Bayesian neural networks. The course will concentrate on underlying fundamental methodology as well as on applications, such as in autonomous driving, astronomy, and medical applications. Recent statistical techniques based on neural networks have achieved remarkable progress in these domains, leading to a great deal of commercial and academic interest. The course will introduce the mathematical definitions of the relevant machine learning models and derive their associated approximate inference algorithms, demonstrating the models in the various domains. The taught material and assessment include both theoretical derivations as well as applied implementations, and students are expected to be proficient with both.
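
The Monte Carlo flavour of such uncertainty estimates can be illustrated in miniature: run many stochastic forward passes (here, a linear model with randomly dropped weights, loosely mimicking dropout) and report the mean and spread of the predictions. The model, weights and numbers are all illustrative, not a real Bayesian neural network.

```python
# Predictive mean and uncertainty from repeated stochastic forward passes.

import random
import statistics

weights = [0.5, -1.2, 2.0]   # a fixed toy linear model

def stochastic_predict(x, rng, keep=0.8):
    """Randomly zero each weight (dropout) and rescale the survivors by 1/keep."""
    return sum((w / keep) * xi for w, xi in zip(weights, x)
               if rng.random() < keep)

rng = random.Random(0)
x = [1.0, 1.0, 1.0]
preds = [stochastic_predict(x, rng) for _ in range(5000)]
mean = statistics.fmean(preds)
spread = statistics.stdev(preds)

# the mean of the stochastic passes approximates the deterministic
# prediction 0.5 - 1.2 + 2.0 = 1.3; the spread quantifies the model's
# uncertainty at this input
assert abs(mean - 1.3) < 0.1
assert spread > 0
```

In the course, the same recipe applied to deep networks (Monte Carlo dropout and related approximate-inference schemes) yields the calibrated uncertainties needed in domains such as autonomous driving and medicine.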