Jan Hoffmann, Carnegie Mellon University School of Computer Science, Pittsburgh, USA. Title: Invited talk: Resource Aware ML Abstract: In this talk, I will describe the design and implementation of Resource Aware ML (RAML), a functional programming language that automatically derives worst-case resource bounds at compile time.
RAML is integrated in the OCaml compiler and automatically derives resource bounds for higher-order polymorphic programs with user-defined algebraic types. The analysis is parametric in the resource and can derive bounds for time, memory allocations, and energy usage. The derived bounds are multivariate resource polynomials which are functions of different size parameters of algebraic data structures. Bound inference is fully automatic and reduced to a linear optimization problem that is passed to an off-the-shelf LP solver.
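To illustrate the kind of program such an analysis targets, here is a hypothetical OCaml example in the spirit of RAML's benchmarks (my illustration, not taken from the talk): for pairs, an analysis of this kind derives a multivariate polynomial bound on the chosen resource, roughly of the form a·|xs|·|ys| + b·|xs| + c.

```ocaml
(* Append: resource usage linear in the length of the first list. *)
let rec append xs ys =
  match xs with
  | [] -> ys
  | x :: xs' -> x :: append xs' ys

(* Pair x with every element of ys: linear in the length of ys. *)
let rec pair_with x ys =
  match ys with
  | [] -> []
  | y :: ys' -> (x, y) :: pair_with x ys'

(* All pairs of elements of xs and ys: the derived bound is a
   multivariate polynomial in the lengths of both inputs. *)
let rec pairs xs ys =
  match xs with
  | [] -> []
  | x :: xs' -> append (pair_with x ys) (pairs xs' ys)
```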
I will first introduce type-based amortized resource analysis, the analysis technique that RAML is based on. Starting from simple examples and linear resource usage, I will move to polynomial and multivariate bounds, and highlight challenges such as higher-order functions and user-defined types. I will then outline the design of the resource-aware type system, the formal soundness proof with respect to an operational cost semantics, and the automatic type inference using LP solving. Finally, I will demonstrate the implementation of Resource Aware ML and its integration into the OCaml compiler using illustrative examples that highlight the strengths and limitations of the technique.
Bio: Jan Hoffmann is a Tenure-Track Assistant Professor of Computer Science at Carnegie Mellon University. He received his PhD in 2011 from LMU Munich under the direction of Martin Hofmann. His research interests lie at the intersection of programming languages and formal methods, with a focus on quantitative properties of software. He is known for his work on automatic static cost analysis of programs and the design and implementation of Resource Aware ML. His current research is funded by the NSF, DARPA STAC, a Schmidt Sciences grant, and a Google Research Award, and includes projects such as automatic resource regression analysis and finding space/time vulnerabilities in Java bytecode. In the past, Hoffmann was supported by scholarships from the German Research Foundation (DFG) and the German National Academic Foundation (Studienstiftung).
Benjamin Kaminski, University College London and RWTH Aachen University Title: Reasoning about expected run-times of probabilistic programs Abstract: We present a weakest-precondition-style calculus for reasoning about run-times of deterministic programs and expected run-times of probabilistic programs. Run-times of loops are defined in terms of least fixed points of continuous functions acting on an appropriate run-time domain. We present several proof rules for upper- and lower-bounding the (expected) run-time of loops. In particular, we present a very simple, yet complete, proof rule for upper bounds based on just a single run-time invariant. For lower bounds, however, we will discuss how the analogue of the single-invariant rule is sound for deterministic programs but unsound for probabilistic ones.
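To give a flavour of such a calculus, here is a rough rendering of a single-invariant upper-bound rule together with a worked instance (my sketch, not necessarily the speaker's exact notation), assuming a cost model that charges one tick per guard evaluation and per probabilistic assignment:

```latex
% Single-invariant rule for upper bounds (sketch):
\[
\frac{1 + [\neg G]\cdot t + [G]\cdot \mathrm{ert}[B](I) \;\preceq\; I}
     {\mathrm{ert}[\mathtt{while}\,(G)\,\{B\}](t) \;\preceq\; I}
\]
% Worked instance with t = 0 for:  while (x = 1) { x :~ Bernoulli(1/2) }.
% Take the run-time invariant I = 1 + [x = 1] * 4. Then
\[
\mathrm{ert}[x \mathrel{:\sim} \mathrm{Ber}(\tfrac12)](I)
  = 1 + \tfrac12\, I[x/1] + \tfrac12\, I[x/0]
  = 1 + \tfrac12\cdot 5 + \tfrac12\cdot 1 = 4,
\]
\[
\text{hence } 1 + [x \neq 1]\cdot 0 + [x = 1]\cdot 4 = I,
\text{ so the expected run-time from } x = 1 \text{ is at most } 5.
\]
```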
Michael Arntzenius, University of Birmingham Title: Finding fixed points faster Abstract: Datalog is a forward-chaining logic programming language; a Datalog program may be run by repeatedly applying its inference rules until nothing further is deducible. This computes the fixed point of a function (applying the inference rules) over finite sets (of deduced facts).
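As a minimal sketch of this view (my illustration, not from the talk), here is naive Datalog-style evaluation as Kleene iteration of a monotone function on finite sets of facts, for the transitive-closure program path(x,y) :- edge(x,y) and path(x,z) :- edge(x,y), path(y,z):

```ocaml
module Facts = Set.Make (struct
  type t = int * int
  let compare = compare
end)

(* The extensional database: the edge relation. *)
let edges = Facts.of_list [ (1, 2); (2, 3); (3, 4) ]

(* One application of the inference rules to the current fact set:
   keep the base edges and add (x,z) whenever edge(x,y) and path(y,z). *)
let step facts =
  Facts.fold
    (fun (x, y) acc ->
      Facts.fold
        (fun (y', z) acc -> if y = y' then Facts.add (x, z) acc else acc)
        facts acc)
    edges
    (Facts.union edges facts)

(* Repeatedly apply the rules until nothing further is deducible. *)
let rec fixpoint f x =
  let x' = f x in
  if Facts.equal x x' then x else fixpoint f x'

let paths = fixpoint step Facts.empty
```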
Datafun is a higher-order functional skin over a generalized version of Datalog’s semantics. It generalizes Datalog in two ways: 1. it permits functional abstraction; 2. it can take fixed points of functions over other semilattices, not just finite sets.
Datalog has an extensive optimization literature. My advisor Neel Krishnaswami and I aim to generalize Datalog optimizations and apply them to Datafun. I will present our work in progress, which adapts Cai et al.’s work on the incremental lambda-calculus to provide an analogue of Datalog’s semi-naive evaluation, avoiding unnecessary recomputation when iterating towards a fixed point.
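Continuing the hypothetical transitive-closure sketch above (again my own illustration, reusing Facts and edges from it), semi-naive evaluation joins only against the delta, the facts newly derived in the previous round, rather than re-deriving everything from the full set:

```ocaml
(* Join the static edge relation against the delta only. This is
   complete here because the recursive rule is linear in path. *)
let join_delta delta =
  Facts.fold
    (fun (x, y) acc ->
      Facts.fold
        (fun (y', z) acc -> if y = y' then Facts.add (x, z) acc else acc)
        delta acc)
    edges Facts.empty

(* Iterate, carrying the accumulated facts and the current delta;
   stop when a round derives nothing new. *)
let rec seminaive all delta =
  if Facts.is_empty delta then all
  else
    let derived = join_delta delta in
    let delta' = Facts.diff derived all in
    seminaive (Facts.union all delta') delta'

(* The base facts are their own first delta. *)
let paths' = seminaive edges edges
```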
John Wickerson, Imperial College London Title: Hardware synthesis of weakly consistent C concurrency Abstract: Since 2011, the C language has provided ‘weak atomics’ – instructions that allow the implementation of lock-free algorithms with racy memory accesses, but which don’t guarantee that all threads have a consistent view of shared memory. Much work has been done to study the semantics of weak atomics, and how weak atomics can be implemented, both correctly and efficiently, on conventional multiprocessors (x86, PowerPC, ARM, …) and on conventional graphics processors (NVIDIA, AMD, …).
This talk reports on the first attempt to implement C’s weak atomics on an unconventional computing device: the field-programmable gate array (FPGA). FPGAs do not process instructions like conventional processors; rather, they are configured to implement a particular function in hardware. This configuring can be done using a ‘high-level synthesis’ tool, given a C program as a specification.
We implement weak atomics on FPGAs by adjusting the ‘scheduling’ algorithm that maps C instructions to hardware clock cycles. The scheduling algorithm seeks to satisfy a set of constraints on the ordering of each thread’s instructions. We show that strong atomics necessitate many scheduling constraints, and that the weaker acquire/release/relaxed atomics can be obtained by selectively removing some constraints.
We evaluate the performance of our implementation by synthesising hardware from a C implementation of a fine-grained circular buffer. We show that using strong atomics instead of conventional locks yields a 2.5x average performance improvement, and that using weak atomics instead of strong atomics yields a further 1.5x average whole-application performance improvement.
This is joint work with Nadesh Ramanathan, Shane T. Fleming, and George A. Constantinides, all from the Electrical and Electronic Engineering Dept. at Imperial College London. It has been accepted to appear as a full paper at the ACM/SIGDA International Symposium on FPGAs in February.
David Sherratt, University of Bath Title: Towards an Atomic Abstract Machine Abstract: The familiar call-by-need evaluation strategy (as used in Haskell compilers), also known as lazy evaluation, shows its best performance when duplication of terms is controlled. When we have more control over duplication, such that we can choose which segments of a term to duplicate, we can implement full laziness. Full laziness means we duplicate only what is necessary, and the maximal free subexpressions (the largest subterms such that, if they contain a bound variable x, they also contain its binder λx) remain shared.
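A tiny illustration of the idea (my example, not from the abstract; note that OCaml is eager, so the sharing below is written by hand rather than discovered by the compiler):

```ocaml
(* In [slow], the subterm [fib 30] is a maximal free subexpression of
   the function body: it mentions no variable bound by [fun x]. *)
let rec fib n = if n < 2 then n else fib (n - 1) + fib (n - 2)

let slow = fun x -> fib 30 + x   (* recomputes fib 30 on every call *)

let fast =
  let shared = fib 30 in         (* hoisted, as full laziness would do *)
  fun x -> shared + x            (* every call reuses the shared result *)
```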
Haskell compilers can perform full laziness at compile time; performing it at runtime, however, is too expensive. The idea behind this project is to further study the process of implementing a fully lazy evaluation strategy.
The purpose of sharing in the λ-calculus is to have better control over duplication of terms. Sharing is the use of a single representation for multiple instances of a common subterm. During evaluation, instead of duplicating a term, we can share it. This allows us to evaluate all the copies of the subterm simultaneously, by evaluating their shared representation [6].
In the atomic λ-calculus [2], we implement two types of closures. The usual notion of closure is a term with an associated sharing, implemented as a let-construct. The second notion of closure introduces the distributor construct, which allows a term to be duplicated atomically. The distributor uses a more restricted notion of unsharing, similar to that of Lamping’s sharing graphs [4], which perform optimal reduction as defined in [5]. This is exactly what allows this calculus to perform fully lazy evaluation.
An abstract machine is a theoretical implementation of a strategy for a language on an abstract notion of computer architecture, made from directly implementable constructs. An abstract machine for evaluating atomic λ-terms would be a fully lazy abstract machine. For my PhD I aim to develop an atomic abstract machine based on the atomic λ-calculus. To achieve this, we make all the reduction rules in the calculus local in the sense of [3], by constructing a new calculus, based on the atomic λ-calculus, that makes variable scopes explicit while maintaining fully lazy sharing.
This directed atomic λ-calculus uses a variant of director strings [1] to make the scopes of terms explicit. When performing substitution, instead of traversing the complete term, we follow only the path to the variable we want to replace, which the annotations highlight. An application can indicate the location of a variable with a 0 (in the left-hand side) or a 1 (in the right-hand side), and similarly for sharings. The scoping information of terms can then be used so that all the reduction rules can be considered local. One can then start work on an abstract machine for this calculus that performs fully lazy evaluation.
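To convey the flavour, here is a heavily simplified sketch (my own, not the directed atomic λ-calculus itself): each application is annotated with a director recording where a single distinguished variable occurs, so substitution follows one path instead of searching the whole term.

```ocaml
(* Directors: where the distinguished variable occurs below a node.
   Left/Right correspond to the 0/1 annotations mentioned above. *)
type dir = Left | Right | Both | Neither

type term =
  | Var                        (* occurrence of the distinguished variable *)
  | Leaf of string             (* a closed subterm, treated as opaque *)
  | App of dir * term * term

(* Substitute [u] for the distinguished variable, descending only into
   subterms the directors say contain it. (Updating the directors for
   the variables of [u] afterwards is elided in this sketch.) *)
let rec subst u t =
  match t with
  | Var -> u
  | Leaf _ -> t
  | App (Neither, _, _) -> t                        (* skip: no occurrence *)
  | App (Left, f, a) -> App (Left, subst u f, a)
  | App (Right, f, a) -> App (Right, f, subst u a)
  | App (Both, f, a) -> App (Both, subst u f, subst u a)
```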
References:
[1] Maribel Fernández, Ian Mackie, and François-Régis Sinot. Lambda-calculus with director strings. Applicable Algebra in Engineering, Communication and Computing, 2005.
[2] Tom Gundersen, Willem Heijltjes, and Michel Parigot. Atomic lambda calculus: a typed lambda-calculus with explicit sharing. Proceedings of the 28th Annual ACM/IEEE Symposium on Logic in Computer Science (LICS), IEEE Computer Society, 2013.
[3] Yves Lafont. Interaction nets. Proceedings of the 17th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL), 1990.
[4] John Lamping. An algorithm for optimal lambda calculus reduction. Proceedings of the 17th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL), 1990.
[5] Jean-Jacques Lévy. Optimal reductions in the lambda-calculus. In To H. B. Curry: Essays on Combinatory Logic, Lambda Calculus and Formalism, 1980.
[6] Christopher Peter Wadsworth. Semantics and Pragmatics of the Lambda-Calculus. DPhil thesis, University of Oxford, 1971.
Dan Ghica, University of Birmingham Title: Unifying structural and nominal syntax in the language of diagrams Abstract: The correspondence between various monoidal categories and graphical languages of diagrams has been studied extensively, leading to applications in quantum computing and communication, systems theory, circuit design and more. From the categorical perspective, diagrams can be specified using (name-free) combinators which enjoy elegant equational properties. However, established notations for diagrammatic structures, such as hardware description languages (VHDL, Verilog) or graph languages (Dot), use a totally different style, which is flat, relational, and reliant on extensive use of names (labels). Such languages are not known to enjoy nice syntactic equational properties, but since they make it relatively easy to specify (and modify) arbitrary diagrammatic structures, they are much more popular than combinator-style specifications. This parallels the situation in programming languages, where combinator-based languages (e.g. APL) proved far less popular than languages with variables. In this paper we show how the two approaches to diagram syntax can be reconciled and unified in a way that does not change the original categorical semantics or the existing (categorical) equational theory. We also show how the two notations are semantically complementary, and we give sound and complete equational theories.
Bradley Hardy, University of Cambridge Title: Improving Agda’s equational reasoning with reflection Abstract: Agda is a Haskell-style programming language, whose dependent type system allows it to function as an elegant theorem proving language. But compared to similar languages like Coq, Agda lacks a built-in tactic language. Instead, users are encouraged to write libraries to ease reasoning in Agda itself. As an example of this, equational reasoning combinators are often used to lay out proofs by transitivity. These combinators allow the Agda code to resemble standard pen-and-paper equational reasoning proofs. But where a pen-and-paper proof might simply invoke, say, ‘commutativity’ between steps, in Agda we must specify the exact proof that takes us from one step to the next. This can often be very tedious.
In this talk, I will present a new library that eliminates one of the more annoying parts of this process: invoking congruence, which requires providing a lambda expression to locate the subterm to rewrite. These lambda expressions often amount to duplicating a lot of code; they are tedious and error-prone to write, and ugly to read. My library uses Agda’s reflection API to automate this. Instead of providing a lambda expression, the user simply annotates the left-hand expression to specify where to rewrite. This improves both the ease of writing Agda code and the readability of the resulting proofs.
Neel Krishnaswami, University of Cambridge Title: Can ML be stopped? Abstract: Higher-order imperative programs are well known as a source of small but ferociously difficult-to-verify programs. Languages like ML permit the subtle interplay of features like generative name creation, pointers to code, and type abstraction. Each of these features requires sophisticated mathematics to model, and their combination yields models of daunting complexity.
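A classic illustration of the difficulty (my example, not from the abstract) is Landin's knot, where recursion, and hence possible divergence, arises purely through a pointer to code, with no recursive definition in sight:

```ocaml
(* Landin's knot: recursion through the store. Termination cannot be
   read off the syntax; it hinges on what the heap cell contains. *)
let knot () =
  let r = ref (fun (x : int) -> x) in
  r := (fun x -> !r x);  (* backpatch: the cell now calls through itself *)
  !r 0                   (* diverges: every call re-reads r and recurses *)
```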
In recent years, the standard techniques for analysing these programs have been based on step-indexed logical relations. However, while step-indexing is well suited to proving safety properties, it is less able to establish liveness properties such as termination.
In this talk, I will describe some in-progress research on how to prove termination of higher-order imperative programs by combining ideas from process algebra, separation logic, and logical relations.