
Proving safety of multicore programs with asynchronous memory operations

Matko Botincan

Asynchronous memory operations provide a means for avoiding the memory wall in multicore processors. When multiple cores concurrently access shared memory in a synchronous manner, the gap between processor and memory performance leads to a bottleneck: the performance gains expected from increased parallelism are lost to contention for memory. Asynchronous memory operations offer a solution. Cores delegate data movement to dedicated hardware and continue computing while the transfer proceeds, thus overlapping computation with communication. With the advent of multicore and GPU computing, asynchronous memory operations are available in many programming platforms and libraries, e.g., DMA operations in the IBM Cell Broadband Engine, I/O Acceleration Technology in Intel Xeon processors, and asynchronous memory copying in the OpenCL and CUDA languages.
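To make the overlap concrete, the following is a minimal double-buffering sketch in the style of a Cell SPE program. It assumes the MFC intrinsics from the SDK header spu_mfcio.h (mfc_get, mfc_write_tag_mask, mfc_read_tag_status_all); the chunk size, effective-address layout, and the process() routine are illustrative placeholders, not taken from the paper.

/* Illustrative double-buffering loop: the DMA transfer of the next chunk
 * proceeds asynchronously while the current chunk is processed.
 * Assumes <spu_mfcio.h> from the Cell SDK; sizes and process() are
 * placeholders. */
#include <spu_mfcio.h>
#include <stdint.h>

#define CHUNK 4096

static volatile char buf[2][CHUNK] __attribute__((aligned(128)));

static void process(volatile char *data, unsigned n)
{
    /* placeholder computation over the fetched chunk */
    for (unsigned j = 0; j < n; j++)
        (void)data[j];
}

void stream(uint64_t ea, unsigned nchunks)
{
    unsigned cur = 0;

    /* Start fetching the first chunk with DMA tag 0. */
    mfc_get(buf[cur], ea, CHUNK, cur, 0, 0);

    for (unsigned i = 0; i < nchunks; i++) {
        unsigned next = cur ^ 1;

        /* Kick off the following transfer (tag = next) before waiting. */
        if (i + 1 < nchunks)
            mfc_get(buf[next], ea + (uint64_t)(i + 1) * CHUNK, CHUNK, next, 0, 0);

        /* Wait only for the chunk we are about to use (tag = cur). */
        mfc_write_tag_mask(1u << cur);
        mfc_read_tag_status_all();

        process(buf[cur], CHUNK);   /* overlaps with the in-flight DMA */
        cur = next;
    }
}

The danger the paper addresses is visible here: if process() were called before the wait on the matching tag, or if the two buffers overlapped, the program would contain a memory race.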

However, asynchronous memory operations introduce the potential for memory races, increasing programming complexity. Reasoning about their correct usage involves complex flow-, path- and context-sensitive analysis of memory accesses over possibly overlapping regions of memory. We develop a methodology for a C-like language, based on separation logic, that enables automatic reasoning about memory safety of multicore programs that use asynchronous memory operations. We present a set of proof rules and show that if a program can be proved to satisfy a given specification using these rules, then the program is memory-safe and, in particular, free of memory races. We discuss some implementation issues and demonstrate how the approach can be applied to check the absence of DMA races on the Cell Broadband Engine.
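As a flavour of the kind of reasoning such proof rules support, the hypothetical fragment below is annotated (in comments) with separation-logic-style assertions that track ownership of a local buffer across an asynchronous get and the matching wait. The assertion notation and predicate names (buf, in_flight) are illustrative only and are not the paper's actual rules or syntax.

/* Hypothetical annotated fragment: ownership of the local buffer is handed
 * to the in-flight DMA when the get is issued and recovered at the wait.
 * The assertions are sketches, not the paper's proof rules. */
#include <spu_mfcio.h>
#include <stdint.h>

void copy_in(volatile char *ls, uint64_t ea, unsigned tag)
{
    /* { buf(ls, 4096) }              -- the thread owns the local buffer     */
    mfc_get(ls, ea, 4096, tag, 0, 0);
    /* { in_flight(tag, ls, 4096) }   -- ownership transferred to the DMA;
                                         any access to ls here would be
                                         flagged as a memory race             */
    mfc_write_tag_mask(1u << tag);
    mfc_read_tag_status_all();
    /* { buf(ls, 4096) }              -- the wait returns ownership, so the
                                         transferred data may now be read     */
}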

