Advanced options in Computer Science
In the fourth year, students spend 62% of their time on advanced options.
The list of options varies from year to year according to the research
interests of teaching staff, but the following examples illustrate courses that
have been offered recently.
Computational Learning Theory
Machine learning studies automatic methods for identifying patterns in complex data and for making accurate predictions based on past observations. In this course, we develop rigorous mathematical foundations of machine learning, in order to provide guarantees about the behaviour of learning algorithms and also to understand the inherent difficulty of learning problems.
The course will begin by providing a statistical and computational toolkit: generalisation guarantees, fundamental algorithms, and methods for analysing learning algorithms. We will cover questions such as: when can we generalise well from limited amounts of data; how can we develop computationally efficient learning algorithms; and what statistical and computational trade-offs arise between the two. We will also discuss new models designed to address relevant practical questions of the day, such as learning with limited memory, communication, privacy, and labelled and unlabelled data. In addition to core concepts from machine learning, we will make connections to principal ideas from information theory, game theory and optimisation.
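As a concrete taste of such generalisation guarantees, the sketch below (our own illustration; the function name and example numbers are not from the course) computes the classic Hoeffding-plus-union-bound sample complexity for a finite hypothesis class: with m samples, every hypothesis's empirical error is within epsilon of its true error with probability at least 1 - delta.

```python
import math

def sample_complexity(h_size: int, epsilon: float, delta: float) -> int:
    """Number of i.i.d. samples sufficient so that, with probability at least
    1 - delta, every hypothesis in a class of size h_size has empirical error
    within epsilon of its true error (Hoeffding bound + union bound):
    m >= ln(2|H|/delta) / (2 * epsilon^2)."""
    return math.ceil(math.log(2 * h_size / delta) / (2 * epsilon ** 2))

# Illustrative numbers: |H| = 1000 hypotheses, epsilon = 0.05, delta = 0.01.
m = sample_complexity(1000, 0.05, 0.01)
```

Note how the bound grows only logarithmically in the size of the hypothesis class but quadratically in the required accuracy 1/epsilon.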
Advanced Security
The Advanced Security course is designed to bring students towards the research boundaries in computer security, covering contemporary topics in depth. It is split into two modules:
- Bootstrapping Security. Modern technology brings many opportunities for connecting devices together by means such as Wi-Fi, Bluetooth or the internet. In many cases we want to make these connections secure: authenticating the identity of the connected device, keeping communications secret, and carrying out (e.g. financial) transactions over them. In this module we examine the theory and practice of doing this using both conventional means (a pre-existing security infrastructure) and methods based on factors such as co-location and human judgement.
- Information Hiding. Steganography is the art and science of hiding a secret payload inside an innocent cover, typically digital media such as images, movies, or MP3 audio. The course covers the definitions, practice, and theory of hidden information in a way accessible to newcomers. We will cover the details of getting inside digital images in order to hide information, the different sorts of embedding operations commonly used, and how hidden information can be detected. Some linear algebra allows for more efficient hiding. Finally, we see some mathematical theory which explains how the amount of hidden data grows with the space in which it can be hidden.
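As a minimal illustration of the embedding operations discussed above (our own sketch, not course material), least-significant-bit replacement hides one payload bit per cover sample, changing each sample value by at most 1:

```python
def embed_lsb(pixels, message_bits):
    """Hide one message bit in the least significant bit of each pixel value."""
    if len(message_bits) > len(pixels):
        raise ValueError("cover too small for payload")
    stego = list(pixels)
    for i, bit in enumerate(message_bits):
        stego[i] = (stego[i] & ~1) | bit   # clear the LSB, then set it to the bit
    return stego

def extract_lsb(pixels, n_bits):
    """Recover the first n_bits hidden bits from the stego object."""
    return [p & 1 for p in pixels[:n_bits]]

# Toy 8-bit "cover image" and a 4-bit payload.
cover = [203, 18, 77, 240, 155, 96]
payload = [1, 0, 1, 1]
stego = embed_lsb(cover, payload)
```

Detecting such embedding statistically, and embedding more efficiently than one bit per changed sample, are exactly the topics the module develops.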
Concurrent Algorithms and Data Structures
This is an advanced course on concurrent programming. The course will combine principles and practice. Principles to be studied include correctness conditions for concurrent datatypes, and the relative power of different synchronization operations. More practical topics will include how to implement concurrency primitives - such as locks and monitors - and concurrent datatypes - such as linked lists, queues, and hash tables.
This course is largely independent of the second-year Concurrent Programming course, although there are clearly strong links between the two. Concurrent Programming is based upon the message-passing paradigm; much of the emphasis is on using concurrency as a way of structuring large programs. This course will be based upon low-level concurrency primitives, such as compare-and-swap; the emphasis will be on speed. MSc students could take this course in Michaelmas, followed by Concurrent Programming in Hilary.
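To illustrate the flavour of programming with compare-and-swap (a sketch of ours: real implementations use a single hardware CAS instruction, which we model here with a lock), the retry loop below implements the classic lock-free increment:

```python
import threading

class AtomicCell:
    """Models one memory word with a compare-and-swap operation.
    A lock stands in for the atomicity that hardware CAS provides."""
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def get(self):
        return self._value

    def compare_and_swap(self, expected, new):
        """Atomically: if the cell holds `expected`, store `new` and succeed."""
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

def increment(cell):
    """Lock-free-style increment: read, attempt CAS, retry on contention."""
    while True:
        old = cell.get()
        if cell.compare_and_swap(old, old + 1):
            return

# Four threads each perform 1000 increments; no update is ever lost.
counter = AtomicCell(0)
threads = [threading.Thread(target=lambda: [increment(counter) for _ in range(1000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
```

The same read-CAS-retry pattern underlies the lock-free lists, queues, and hash tables the course studies.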
Database Systems Implementation
This course examines the data structures and algorithms underlying database management systems such as Oracle or PostgreSQL. It covers techniques from both research literature and commercial systems.
At the end of this course, students should have a good insight into how DBMSs function internally, and understand how to analyse the performance of data-intensive systems. They will become familiar with a variety of programming techniques for large-scale data manipulation, and be able to apply the insights achieved to build the major components of a mini-DBMS.
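As a taste of the algorithms involved (a sketch of ours, not drawn from any particular DBMS), the classic build/probe hash join used by most query engines builds a hash table on the smaller relation and probes it with the other:

```python
from collections import defaultdict

def hash_join(left, right, key):
    """Equi-join two lists of dicts on `key`: build a hash table on the
    smaller relation, then probe it with the larger one."""
    build, probe = (left, right) if len(left) <= len(right) else (right, left)
    table = defaultdict(list)
    for row in build:                      # build phase: O(|build|)
        table[row[key]].append(row)
    result = []
    for row in probe:                      # probe phase: O(|probe|) expected
        for match in table.get(row[key], []):
            result.append({**match, **row})
    return result

# Hypothetical toy relations for illustration.
employees = [{"dept": 1, "name": "Ada"}, {"dept": 2, "name": "Alan"}]
depts = [{"dept": 1, "dname": "Theory"}, {"dept": 2, "dname": "Systems"}]
joined = hash_join(depts, employees, "dept")
```

Real systems must additionally handle joins larger than memory (e.g. by partitioning), one of the core topics of the course.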
Probabilistic Model Checking
Probabilistic model checking is a formal technique for analysing systems that exhibit random behaviour. Examples include randomised algorithms, communication and security protocols, computer networks, biological signalling pathways, and many others. The course provides a detailed introduction to these techniques, covering both the underlying theory (Markov chains, Markov decision processes, temporal logics) and its practical application (using the state-of-the-art probabilistic model checking tool PRISM, based here in Oxford). The methods used will be illustrated through a variety of real-life case studies, e.g. the Bluetooth/FireWire protocols and algorithms for contract signing and power management.
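The core computation is easy to sketch. The illustrative snippet below (ours; PRISM does this symbolically and far more efficiently) computes "probability of eventually reaching a target state" in a small discrete-time Markov chain by fixed-point iteration:

```python
def reach_probability(P, target, iters=1000):
    """Probability of eventually reaching a state in `target` from each state
    of a discrete-time Markov chain with transition matrix P, via fixed-point
    iteration on  x_s = 1 if s in target, else sum_t P[s][t] * x_t."""
    n = len(P)
    x = [1.0 if s in target else 0.0 for s in range(n)]
    for _ in range(iters):
        x = [1.0 if s in target else sum(P[s][t] * x[t] for t in range(n))
             for s in range(n)]
    return x

# Toy chain: from state 0 move to state 1 or 2 with probability 0.5 each;
# states 1 and 2 are absorbing. Reaching {1} from 0 has probability 0.5.
P = [[0.0, 0.5, 0.5],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
probs = reach_probability(P, {1})
```

Temporal-logic queries such as "P=? [ F goal ]" in PRISM reduce to exactly this kind of linear-equation solving.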
Probability and Computing
Probabilistic techniques have numerous Computer Science applications, including combinatorial optimisation, computational geometry, data structures, networks, and machine learning. Randomised algorithms, which typically guarantee a correct result only with high probability, are often simpler and faster than corresponding deterministic algorithms. Randomisation can also be used to break symmetry and achieve load balancing in parallel and distributed computing. This course introduces students to those ideas in probability theory that are most relevant to Computer Science. This background theory is motivated and illustrated by a wide-ranging series of computer-science applications.
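A classic example of an algorithm that is correct only with high probability is Freivalds' matrix-product verification (our sketch below): it checks a claimed product A·B = C using only matrix-vector multiplications per round, rather than performing the full matrix multiplication, and each round catches a wrong C with probability at least 1/2.

```python
import random

def freivalds(A, B, C, rounds=20, rng=random):
    """Monte Carlo check that A @ B == C for square matrices.
    Each round tests A(Br) == Cr for a random 0/1 vector r; a wrong C
    survives all rounds with probability at most 2**-rounds."""
    n = len(A)
    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    for _ in range(rounds):
        r = [rng.randrange(2) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False   # a mismatch is definitive: C is wrong
    return True            # correct with probability >= 1 - 2**-rounds

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]   # the true product of A and B
```

This is a Monte Carlo algorithm: its one-sided error can be driven down exponentially fast simply by repeating the test.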
Computational Game Theory
Game theory is the mathematical theory of strategic interactions between self-interested agents. Game theory provides a range of models for representing strategic interactions, and associated with these, a family of solution concepts, which attempt to characterise the rational outcomes of games. Game theory is important to computer science for several reasons. First, interaction is a fundamental topic in computer science, and if it is assumed that system components are self-interested, then the models and solution concepts of game theory seem to provide an appropriate framework with which to model such systems. Second, the problem of computing with the solution concepts proposed by game theory raises important challenges for computer science, which test the boundaries of current algorithmic techniques. This course aims to introduce the key concepts of game theory for a computer science audience, emphasising both the applicability of game theoretic concepts in a computational setting, and the role of computation in game theoretic problems. The course assumes no prior knowledge of game theory. The aims of the course are threefold:
1. to introduce the key models and solution concepts of non-cooperative and cooperative game theory;
2. to introduce the issues that arise when computing with game theoretic solution concepts, the main approaches to overcoming these issues, and the role that computation plays in game theory;
3. to introduce a research-level topic in computational game theory.
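The most basic solution concept, a pure-strategy Nash equilibrium, can be computed for small games by direct enumeration. The sketch below (ours, using textbook Prisoner's Dilemma payoffs as the example) checks every strategy profile for profitable unilateral deviations:

```python
from itertools import product

def pure_nash_equilibria(payoff_1, payoff_2):
    """Enumerate pure-strategy Nash equilibria of a two-player game.
    payoff_k[i][j] is player k's payoff when player 1 plays row i and
    player 2 plays column j; a profile is an equilibrium when neither
    player can gain by deviating unilaterally."""
    rows, cols = len(payoff_1), len(payoff_1[0])
    equilibria = []
    for i, j in product(range(rows), range(cols)):
        best_for_1 = all(payoff_1[i][j] >= payoff_1[k][j] for k in range(rows))
        best_for_2 = all(payoff_2[i][j] >= payoff_2[i][k] for k in range(cols))
        if best_for_1 and best_for_2:
            equilibria.append((i, j))
    return equilibria

# Prisoner's Dilemma (strategy 0 = cooperate, 1 = defect):
# mutual defection is the unique pure Nash equilibrium.
p1 = [[3, 0], [5, 1]]
p2 = [[3, 5], [0, 1]]
eqs = pure_nash_equilibria(p1, p2)
```

The computational challenges the course discusses begin exactly here: this brute-force check is exponential in the number of players, and mixed equilibria are harder still.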
Quantum Processes and Computation
Physics and computer science were both dominant scientific and technological disciplines of the previous century. Quantum Computer Science aims at combining the two, and may come to play a similarly important role in this century. Combining the existing expertise in both fields proves to be a non-trivial but very exciting interdisciplinary journey. Besides the practical challenge of building a quantum computer or realising quantum protocols, it involves a fascinating encounter between concepts and formal tools which arose in distinct disciplines. This course provides an interdisciplinary introduction to the emerging field of quantum computer science, explaining basic quantum mechanics (including finite-dimensional Hilbert spaces and their tensor products), quantum entanglement, its structure and its physical consequences (e.g. non-locality, the no-cloning principle), and introducing qubits. We give detailed discussions of some key algorithms and protocols such as Grover's search algorithm and Shor's factorisation algorithm, quantum teleportation and quantum key exchange. At the same time, this course provides an introduction to diagrammatic reasoning; as an entirely diagrammatic presentation of quantum theory and its applications, it is the first of its kind.
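As a small taste of Grover's search algorithm (our own sketch, working directly on amplitudes rather than the course's diagrammatic calculus), one Grover iteration on two qubits already makes measurement of the marked item certain, since N = 4 and one iteration suffices:

```python
def grover_two_qubits(marked):
    """One Grover iteration on a 2-qubit state vector of 4 amplitudes.
    With N = 4 and a single marked item, one iteration boosts the marked
    amplitude to 1."""
    n = 4
    amps = [1 / n ** 0.5] * n          # uniform superposition (H on each qubit)
    amps[marked] *= -1                 # oracle: flip the sign of the marked state
    mean = sum(amps) / n               # diffusion: reflect every amplitude
    amps = [2 * mean - a for a in amps]   # about the mean
    return amps

amps = grover_two_qubits(3)            # mark basis state |11>
```

Measuring now yields the marked state with probability |amps[3]|² = 1; for larger N, roughly (π/4)·√N such iterations are needed.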
In recent years, quantum computation has moved from a theoretical exercise to a practical one, as limited, small-scale quantum computers have become increasingly available. This course, which follows on from a basic course in quantum computation, addresses the practical challenges of programming these emerging devices.
Many software and hardware development projects go through a phase called 'Requirements Capture and Analysis' which tries to determine the properties a system should have in order to succeed in the environment in which it will be used. This can be a very difficult task, and typical requirements documents contain errors, some of which are very difficult to detect, as well as very expensive to correct later on. Experience shows that many errors arise from social, political and cultural factors and recent research has focused on the problem of reconciling such factors with traditional concerns about the more technical aspects of system development.
This course takes a unique stance to the discussion of requirements in that it acknowledges the involvement of both the social and technical concerns. The course surveys a wide range of different approaches to the problem of determining requirements and aims to provide students with a set of techniques and skills that may be tailored to address a wide range of requirements problems.
Ethical Computing in Practice
This course is intended for students who want to understand how to integrate considerations of ethics and social responsibility into their own practice as computing practitioners. Students will learn (i) several general step-by-step methods for identifying, addressing, and communicating about the ethical dimensions of their own computing projects, and (ii) specific issues of algorithmic bias and algorithmic fairness.
Advanced Complexity Theory
Geometric Deep Learning
Foundations of Self-Programming Agents
Deep Learning in Healthcare
Bayesian Statistical Probabilistic Programming
Statistical probabilistic programming (SPP) is a general framework for expressing probabilistic models as computer programs. SPP systems are equipped with implementations of generic inference algorithms for answering queries about these models, such as posterior inference and marginalisation. By providing these algorithms, SPP systems enable data scientists to focus on the design of good models, utilising their domain knowledge. The task of constructing efficient generic inference engines can be left to researchers with expertise in statistical machine learning and programming languages.
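The posterior-inference queries such systems answer can be illustrated in miniature. The sketch below (ours; a grid approximation, not a real SPP inference engine) computes the posterior over a coin's bias from observed flips under a uniform prior:

```python
def coin_posterior(heads, tails, grid_size=1001):
    """Posterior over a coin's bias p under a uniform prior on [0, 1],
    approximated by discretising p and weighting each candidate by its
    Bernoulli likelihood p**heads * (1-p)**tails."""
    grid = [i / (grid_size - 1) for i in range(grid_size)]
    weights = [p ** heads * (1 - p) ** tails for p in grid]
    total = sum(weights)
    return grid, [w / total for w in weights]

# Observe 7 heads and 3 tails; the exact posterior is Beta(8, 4),
# whose mean is 8/12.
grid, post = coin_posterior(7, 3)
mean = sum(p * w for p, w in zip(grid, post))
```

An SPP system lets the modeller write only the generative story (prior plus likelihood) and supplies far more scalable inference than this grid automatically.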
Law and Computer Science
The legal system is entering a period of profound transformation brought on by new technologies and alternative business models. Legal innovation backed by new technologies can drive down costs, make the delivery of legal services more productive, and facilitate better access to justice for citizens. As AI and digital technology permeate more of our lives, they increasingly become the source of legally significant events. This means that those who study and/or practice law increasingly need to understand the digital context. At the same time, those who study computer science and/or develop software increasingly need to understand the potential legal consequences of design choices. Increasingly, law firms are interested in hiring not just those with legal skills but also those with technical skills, so there are exciting career opportunities for those working at the intersection of law and technology.
This course, jointly offered by the Law Faculty and the Department of Computer Science, will introduce students from both backgrounds to the terrain at the boundaries of their two disciplines. The overarching theme is understanding law and computer science at their intersection.
Graph Representation Learning