Concurrent Algorithms and Data Structures: 2018-2019
Overview

This is an advanced course on concurrent programming. The course will combine principles and practice. Principles to be studied include correctness conditions for concurrent datatypes, and the relative power of different synchronization operations. More practical topics will include how to implement concurrency primitives - such as locks and monitors - and concurrent datatypes - such as linked lists, queues, and hash tables.
This course is largely independent of the second year Concurrent Programming course, although there are clearly strong links between the two. Concurrent Programming is based upon the message-passing paradigm; much of the emphasis is on using concurrency as a way of structuring large programs. This course will be based upon low-level concurrency primitives, such as compare-and-swap; the emphasis will be on speed. MSc students could take this course in Michaelmas, followed by Concurrent Programming in Hilary.
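To give a taste of the low-level style of this course, here is a minimal sketch of a shared counter built from compare-and-swap, written in Java (the textbook's language). The class name CasCounter is illustrative, not taken from the course materials; the point is simply that the CAS retry loop achieves lock-free synchronization without a mutual-exclusion lock.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative example: a counter synchronized with compare-and-swap (CAS).
public class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    // Retry the CAS until no other thread interfered between read and write.
    public int increment() {
        while (true) {
            int current = value.get();
            if (value.compareAndSet(current, current + 1)) return current + 1;
        }
    }

    public int get() { return value.get(); }

    public static void main(String[] args) throws InterruptedException {
        CasCounter c = new CasCounter();
        Runnable work = () -> { for (int i = 0; i < 10_000; i++) c.increment(); };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get()); // 20000: no increments lost despite contention
    }
}
```

Note that a failed CAS means some other thread made progress, so the loop never blocks the system as a whole; this distinction between blocking and non-blocking progress is a recurring theme of the course.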
Learning outcomes

After studying this course, students will:
- Understand the importance of mutual exclusion, and different ways to implement it;
- Understand correctness criteria for concurrent datatypes;
- Understand the relative power of different concurrency primitives;
- Be able to implement concurrency primitives, such as locks and monitors;
- Be able to implement concurrent datatypes, such as stacks, queues and hash tables, and justify their correctness.
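As a small example of the kind of primitive students will learn to implement, the following is a sketch of the two-thread Peterson lock (covered in Chapter 2), written in Java. The class name PetersonLock, the demo method, and the use of AtomicBoolean for the flags (to obtain sequentially consistent accesses under the Java Memory Model) are presentational choices, not the textbook's code.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative sketch of the two-thread Peterson mutual-exclusion lock.
public class PetersonLock {
    private final AtomicBoolean[] flag = { new AtomicBoolean(), new AtomicBoolean() };
    private volatile int victim;

    public void lock(int me) {          // me is the thread id, 0 or 1
        int other = 1 - me;
        flag[me].set(true);             // announce interest
        victim = me;                    // defer to the other thread
        while (flag[other].get() && victim == me) { }   // spin until safe to enter
    }

    public void unlock(int me) {
        flag[me].set(false);
    }

    // Two threads increment a plain counter under the lock; if mutual
    // exclusion holds, no increments are lost.
    public static int demo(int itersPerThread) {
        PetersonLock lock = new PetersonLock();
        int[] counter = {0};
        Thread[] ts = new Thread[2];
        for (int id = 0; id < 2; id++) {
            final int me = id;
            ts[me] = new Thread(() -> {
                for (int i = 0; i < itersPerThread; i++) {
                    lock.lock(me);
                    counter[0]++;       // critical section
                    lock.unlock(me);
                }
            });
        }
        for (Thread t : ts) t.start();
        try {
            for (Thread t : ts) t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return counter[0];
    }

    public static void main(String[] args) {
        System.out.println(demo(100_000)); // 200000: no lost updates
    }
}
```

The course examines both why this algorithm is correct and why, on real hardware and under real memory models, the volatile/atomic accesses it relies on are essential.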
Prerequisites

Familiarity with standard programming techniques (e.g. loop invariants, data abstraction, datatype invariants, inheritance, stacks, queues, linked lists, hash tables). Some familiarity with concurrent programming (e.g. shared variables, race conditions, deadlocks, mutual exclusion, Java-style monitors, locks, message passing). Undergraduates will have acquired the relevant background from core courses. MSc students should have previously studied a course on concurrent programming; they are encouraged to discuss their background with the course lecturer if unsure.
The course textbook uses Java. However, we will use Scala in lectures, because it will make the presentation of some of the algorithms cleaner. Students should be able to understand programs written in either language, and be efficient at coding in at least one; students unfamiliar with Scala should read the Scala Tutorial, http://www.scala-lang.org/docu/files/ScalaTutorial.pdf
The course will be based heavily on the course textbook, The Art of Multiprocessor Programming by Maurice Herlihy and Nir Shavit. The book combines a study of the foundational principles (roughly Chapters 2-6) with more practical aspects (roughly Chapters 7 onwards). References below are to the course textbook.
Synopsis

1. Introduction: why concurrent programming is hard; paradigms of concurrent programming; shared objects and synchronization; mutual exclusion; producer-consumer problem; readers-writers problem; Amdahl's Law. (Chapter 1.)
2. Mutual Exclusion: critical sections; building locks; Peterson lock; Bakery lock; fairness; lower bounds on the number of locations. (Chapter 2, excluding Sections 2.4, 2.7.)
3. Concurrent Datatypes: concurrency and correctness; sequential consistency; linearizability; progress conditions; the Java Memory Model. (Chapter 3, excluding Section 3.3.)
4. Spin Locks and Contention: test-and-set locks; exponential back-off; queue locks. (Chapter 7, excluding Sections 7.6-7.8.)
5. Monitors and Blocking Synchronization: monitor locks; conditions; readers-writers locks; re-entrant lock; semaphores. (Chapter 8.)
6. Foundations of Shared Memory: registers; register constructions (probably mostly left for reading); atomic snapshots. (Chapter 4, excluding most of Section 4.2.)
7. The Relative Power of Synchronization Methods: consensus numbers; atomic registers; compare-and-set; other consensus protocols. (Chapter 5, excluding Section 5.5.)
8. The Universality of Consensus: universality; lock-free universal construction; wait-free universal construction. (Chapter 6.)
9. Linked Lists: the Role of Locking: concurrent reasoning; coarse-grained synchronization; fine-grained synchronization; optimistic synchronization; lazy synchronization; non-blocking synchronization. (Chapter 9.)
10. Concurrent Queues and the ABA Problem: bounded partial queue; unbounded total queue; unbounded lock-free queue; memory reclamation and the ABA problem. (Chapter 10.)
11. Concurrent Stacks and Elimination: a lock-free stack; elimination back-off stack. (Chapter 11.)
12. Counting, Sorting and Distributed Coordination: combining trees; counting networks; sorting networks. (Chapter 12, excluding Sections 12.4-12.6.)
13. Concurrent Hashing and Natural Parallelism: closed-address hash sets (coarse-grained, striped, sharded, refinable); lock-free hash set (recursive split-ordering); open-addressed hash set (sharded; cuckoo hashing). (Chapter 13, probably omitting some subsections, and including material from elsewhere.)
14. Skiplists and Balanced Search: lock-based concurrent skiplist; lock-free concurrent skiplist. (Chapter 14.)
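To give a flavour of the non-blocking datatypes listed above, here is an illustrative sketch in Java of a lock-free stack in the style of the Treiber stack from Chapter 11. The names (TreiberStack, Node) and details are presentational, not the textbook's code.

```java
import java.util.concurrent.atomic.AtomicReference;

// Illustrative sketch of a lock-free (Treiber-style) stack.
public class TreiberStack<T> {
    private static final class Node<T> {
        final T item;
        Node<T> next;
        Node(T item) { this.item = item; }
    }

    private final AtomicReference<Node<T>> top = new AtomicReference<>();

    public void push(T item) {
        Node<T> node = new Node<>(item);
        while (true) {
            Node<T> oldTop = top.get();
            node.next = oldTop;
            if (top.compareAndSet(oldTop, node)) return;  // CAS the new head in
        }
    }

    // Returns null if the stack is empty.
    public T pop() {
        while (true) {
            Node<T> oldTop = top.get();
            if (oldTop == null) return null;
            if (top.compareAndSet(oldTop, oldTop.next)) return oldTop.item;
        }
    }

    public static void main(String[] args) {
        TreiberStack<Integer> s = new TreiberStack<>();
        s.push(1); s.push(2); s.push(3);
        System.out.println(s.pop()); // 3
        System.out.println(s.pop()); // 2
        System.out.println(s.pop()); // 1
    }
}
```

In garbage-collected Java this sketch sidesteps the ABA problem, since a node cannot be reclaimed and reused while another thread still holds a reference to it; in settings with manual memory reclamation the same CAS loop is vulnerable, which is the subject of Chapter 10.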
Syllabus

Mutual exclusion; correctness conditions for concurrent datatypes; foundations of shared memory; the relative power of synchronization methods; universality of consensus. Implementing locks and monitors; implementing concurrent datatypes such as linked lists, queues, counting and sorting networks, hash tables and skip lists.