Sequentializations of Concurrent Programs
The state-of-the-art solutions for the analysis of concurrent programs are still very unsatisfactory;
the literature, on the other hand, offers plenty of mature tools that handle sequential programs.
Very recently, it has been proposed to use off-the-shelf tools for sequential
programs to analyze concurrent programs, by translating concurrent programs into
sequential ones, the so-called sequentializations. Of course, a translation from a concurrent
program to a sequential one is always possible: given a concurrent program, we can build
a sequential program that simply simulates its global behavior. However,
such a sequential interpreter would track the entire global state of the concurrent
program which involves keeping the product of the local state-space of the threads, by
making the whole approach ineffective. Instead, we seek a sequential program that tracks a
bounded number of copies of the local and shared variables at a time, where the bound is
independent of the number of parallel components, thus avoiding the extreme blow-up caused
by a naive simulation.
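As a toy illustration of the bounded-copies idea, the following Python sketch mimics a round-based sequentialization in the spirit of the techniques reviewed below. It is a drastic simplification under my own assumptions (two fixed threads, round-robin scheduling, one shared counter over a small finite domain, and a trivial thread body): each thread is run to completion in isolation against guessed incoming values of the shared variable, and the guesses are checked afterwards for consistency with a round-robin schedule, so that only O(K) copies of the shared state are tracked, never the product of the thread state spaces.

```python
from itertools import product

def thread(tid, rounds_in):
    """Toy thread body: run to completion, resuming each round from
    the guessed incoming shared value. Each round increments the
    shared counter once. Returns the shared value left at the end
    of each round."""
    return [s + 1 for s in rounds_in]

def sequentialize(K=2, init=0, domain=range(8)):
    """Eager round-based sequentialization sketch (hypothetical toy
    version, not the exact construction discussed in the talk):
    guess the shared state at the start of every round of every
    thread, run each thread sequentially in isolation, then check
    that the guesses stitch into a consistent round-robin schedule."""
    reachable = set()
    for guesses in product(domain, repeat=2 * K):
        in0 = list(guesses[:K])   # thread 0's guessed round entries
        in1 = list(guesses[K:])   # thread 1's guessed round entries
        out0 = thread(0, in0)
        out1 = thread(1, in1)
        # consistency checks for the round-robin order 0,1,0,1,...
        ok = in0[0] == init
        for k in range(K):
            ok &= in1[k] == out0[k]          # round k: 0 hands over to 1
            if k + 1 < K:
                ok &= in0[k + 1] == out1[k]  # 1 hands over to 0's next round
        if ok:
            reachable.add(out1[K - 1])       # final shared value
    return reachable
```

With K = 2 rounds and each round incrementing the counter once, the only consistent schedule yields the final value 4. The point of the encoding is that each thread is analyzed sequentially and in isolation, with the guessed boundary values playing the role of the bounded copies of the shared variables.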
Although the idea of using sequential analysis effectively to reason about concurrent programs
is not new, no effective translations were known until recently. In the first part of the talk, I will review
some of the sequentializations proposed in the literature, and show that the existing sequentializations
based on bounding the number of execution contexts, execution rounds, or delays from a
deterministic task-schedule rely on three key features for scalable concurrent program analyses: (i) reduction
to the sequential program model, (ii) compositional reasoning to avoid expensive task-product constructions,
and (iii) parameterized exploration bounds. To understand how those sequentializations can be unified and
generalized, we define a general framework which preserves their key features, and in which those
sequentializations are particular instances. We also identify a most general instance which considers more executions,
by composing the rounds of different tasks in any order, restricted only by the unavoidable program
and task-creation causality orders. Our framework applies to a general class of shared-memory concurrent
programs, with dynamic task-creation and arbitrary preemption.
All sequentializations proposed in the literature consider only the sequential consistency (SC) memory model,
where the behaviors of the different threads are interleaved while the order between actions of each
single thread is maintained. However, modern multi-processors as well as compilers may reorder some
memory access operations, for performance reasons. This leads to the adoption of weak (or relaxed) memory models
such as TSO (Total Store Ordering). In this model, store operations are not immediately visible to all threads (as in SC).
Each thread is equipped with a store buffer, where its store operations are kept to be executed later.
While the store operations issued by a same thread are executed (i.e., written to main memory) in the
order in which they were issued (i.e., the store buffers are FIFO), a load operation by that thread can
overtake the pending stores in its buffer if they concern different variables, and read its value directly
from main memory. A load from a variable for which there is a pending store in the buffer gets the value
of the most recent such store.
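The buffer discipline just described can be sketched as a small Python model (an illustrative toy of the TSO semantics, not a sequentialization itself; the class and method names are my own):

```python
from collections import deque

class TSOThread:
    """Toy model of one thread's TSO store buffer over a shared memory."""

    def __init__(self, memory):
        self.memory = memory   # shared main memory: dict var -> value
        self.buffer = deque()  # FIFO queue of pending (var, value) stores

    def store(self, var, value):
        # stores are buffered, not immediately visible to other threads
        self.buffer.append((var, value))

    def load(self, var):
        # a load reads the most recent pending store to the same variable,
        # if any; otherwise it overtakes the buffered stores (which concern
        # other variables) and reads directly from main memory
        for v, val in reversed(self.buffer):
            if v == var:
                return val
        return self.memory[var]

    def flush_one(self):
        # the memory commits the oldest pending store (FIFO order)
        var, val = self.buffer.popleft()
        self.memory[var] = val
```

On the classic Dekker-style litmus test, where one thread stores x = 1 and loads y while the other stores y = 1 and loads x, this model lets both loads return 0, an outcome forbidden under SC, because each load overtakes the other thread's still-buffered store.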
In the second half of the talk I will describe how to extend/adapt sequentialization techniques to analyze
concurrent programs running under TSO, by getting rid of the TSO store buffers.
In the conclusions, I will briefly describe how sequentializations of concurrent programs are related to the concept of tree-width of graphs.