List of attendees
Note: many of the links are dead, which simply means that I haven't yet received the material.
Robby Findler, Contracts for Enumerators; Dependency without Proof (Mon 2015-05-25, 09:10-10:00)
Specifications written using dependent contracts are rich, just like those of dependent types. Racket's `data/enumerate` library contains many of these and during my slot, we spent some time exploring these contracts, what they can say, and how enforcement goes.
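A rough analogue of what such a dependent contract enforces, rendered in Haskell rather than the library's actual Racket combinators (the names below are hypothetical): an enumeration pairs an encoder and a decoder, and the contract checks the round-trip law at each call site, so the check depends on the runtime value.

```haskell
-- Hypothetical Haskell analogue of a dependent (value-inspecting) contract:
-- an enumeration is a bijection between naturals and values, and the checked
-- wrapper enforces the round-trip law on every call.
data Enumeration a = Enumeration
  { fromNat :: Integer -> a   -- decode
  , toNat   :: a -> Integer   -- encode
  }

-- Dependent contract: the decoded value must encode back to the input index.
checkedFromNat :: Enumeration a -> Integer -> a
checkedFromNat e n
  | toNat e x == n = x
  | otherwise      = error ("contract violation at index " ++ show n)
  where x = fromNat e n

-- Example: an enumeration of all integers (0, -1, 1, -2, 2, ...).
ints :: Enumeration Integer
ints = Enumeration
  { fromNat = \n -> if even n then n `div` 2 else negate ((n + 1) `div` 2)
  , toNat   = \z -> if z >= 0 then 2 * z else 2 * negate z - 1
  }
```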
Evan Czaplicki, Taxonomy of FPR (Mon 2015-05-25, 10:00-11:00)
See slides (original form).
Koen Claessen, Puzzle (Mon 2015-05-25, 11:25-11:45)
Zipping Lists with Repetition.
Adam Chlipala, The Ur/Web People Organizer (Mon 2015-05-25, 11:45-12:30)
I presented UPO, a new library implemented in the Ur/Web functional DSL for Web applications. UPO facilitates rapid assembly of custom applications in a particular domain: organizing (human!) events with multiple participants. Very general components are implemented using a combination of features like Haskell-style type classes, ML-style functors, and Coq-style type-level computation a la dependent types. Each component is type-checked separately, guaranteeing reasonable behavior for any composition; and an optimizing compiler produces very efficient server binaries.
See project website.
Phil Wadler, Blame and Coercion: Together Again for the First Time (Mon 2015-05-25, 14:30-15:15)
C#, Dart, Pyret, Racket, TypeScript, VB: many recent languages integrate dynamic and static types via gradual typing. We systematically develop three calculi for gradual typing and the relations between them, building on and strengthening previous work. The calculi are: λB, based on the blame calculus of Wadler and Findler (2009); λC, inspired by the coercion calculus of Henglein (1994); λS inspired by the space-efficient calculus of Herman, Tomb, and Flanagan (2006) and the threesome calculus of Siek and Wadler (2010). While λB is little changed from previous work, λC and λS are new. Together, λB, λC, and λS provide a coherent foundation for design, implementation, and optimisation of gradual types.
We define translations from λB to λC and from λC to λS. Much previous work lacked proofs of correctness or had weak correctness criteria; here we demonstrate the strongest correctness criterion one could hope for, that each of the translations is fully abstract. Each of the calculi reinforces the design of the others: λC has a particularly simple definition, and the subtle definition of blame safety for λB is justified by the simple definition of blame safety for λC. Our calculus λS is implementation-ready: the first space-efficient calculus that is both straightforward to implement and easy to understand. We give two applications: first, using full abstraction from λC to λS to validate the challenging part of full abstraction between λB and λC; and, second, using full abstraction from λB to λS to easily establish the Fundamental Property of Casts, which required a custom bisimulation and six lemmas in earlier work.
(joint work with Jeremy Siek and Peter Thiemann)
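The core mechanism all three calculi share can be sketched in a few lines of Haskell (a toy, not λB, λC, or λS themselves): injecting a statically typed value into the dynamic type always succeeds, while projecting back out may fail, and a failed projection blames the label attached to the cast.

```haskell
-- Toy sketch of casts with blame labels. Injection into the dynamic type
-- cannot fail; projection out of it blames the cast's label on failure.
data Dyn = DInt Int | DBool Bool
  deriving Show

type Label = String

-- Injection: a statically typed value becomes dynamic.
injInt :: Int -> Dyn
injInt = DInt

-- Projection: may fail, in which case blame falls on label p.
projInt :: Label -> Dyn -> Either Label Int
projInt _ (DInt n) = Right n
projInt p _        = Left p   -- "blame p"

projBool :: Label -> Dyn -> Either Label Bool
projBool _ (DBool b) = Right b
projBool p _         = Left p
```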
Peter Thiemann, title (Mon 2015-05-25, 15:15-16:00)
Stephanie Weirich, Towards Dependently Typed Haskell (Mon 2015-05-25, 16:35-17:15)
This talk includes examples of programs written in an extension of GHC that combines types and kinds together.
See wg28.hs and wg28current.hs.
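A standard illustration of the kind of program such an extension enables (illustrative only, not taken from the talk's files): once data constructors can be promoted to the kind level, a `Nat` kind can index vectors by their length, making `vhead` total.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Length-indexed vectors: the promoted Nat kind tracks length in the type.
data Nat = Z | S Nat

data Vec (n :: Nat) a where
  Nil  :: Vec 'Z a
  Cons :: a -> Vec n a -> Vec ('S n) a

-- Safe head: the type only accepts non-empty vectors, checked at compile time.
vhead :: Vec ('S n) a -> a
vhead (Cons x _) = x
```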
Simon Marlow, title (Mon 2015-05-25, 17:20-18:00)
Anil Madhavapeddy, Immutable Distributed Infrastructure for Unikernels (Tue 2015-05-26, 09:05-09:50)
We enjoy the benefits of distributed version control with our source code: the ability to pull, push and merge multiple streams of work and view the provenance of all changes. Irmin is a library that provides mergeable binary datastructures that can be manipulated much like Git (with pull, push and merge operators), and form higher level control structures for persistent work queues, key-value stores, all with well-defined consistency properties. The results of Irmin operations can be viewed directly using Git, leading to a fresh perspective on how to debug and trace complex distributed systems.
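The classic introductory example of an Irmin-style mergeable datatype is the counter, here sketched in Haskell rather than OCaml: a three-way merge takes the lowest common ancestor of two branches and combines the increments each side made independently.

```haskell
-- Three-way merge of a counter: the lowest common ancestor plus the
-- increments contributed by each branch. Because the two deltas are added,
-- the merge is symmetric in the two branches.
mergeCounter :: Int -> Int -> Int -> Int
mergeCounter lca a b = lca + (a - lca) + (b - lca)
```

For example, if both branches started from 10 and one incremented to 13 while the other incremented to 11, the merge yields 14: both sets of increments survive, which a naive "pick one side" merge would lose.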
Satnam Singh, title (Tue 2015-05-26, 09:50-10:25)
Dan Piponi, title (Tue 2015-05-26, 11:00-11:50)
Edward Kmett, title (Tue 2015-05-26, 11:50-12:30)
This talk provides a whirlwind tour of some new types of functional data structures and their applications.
Cache-oblivious algorithms let us perform optimally for all cache levels in the system at the same time by optimizing for a single cache whose parameters we don't know. While Okasaki's "Purely Functional Data Structures" taught us how to reason about asymptotic performance in a lazy language like Haskell, reasoning about cache-oblivious algorithms requires some new techniques.
Succinct data structures let us work directly on near-optimally compressed data representations without decompressing.
How can we derive new functional data structures from these techniques? Applications include such diverse areas as speeding up Haskell's venerable Data.Map, handling "big data" on disk without tuning for particular hardware, and parsing JSON faster in less memory.
See slides (keynote); Lambda Jam 2014 talk 1 and talk 2; related work.
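To make the succinct-structure idea concrete, here is a toy version (my own illustration, nothing from the talk) of the central operation, rank: counting the 1 bits at or before position i. Precomputing prefix popcounts per fixed-size block answers queries with one table lookup plus a small in-block scan, which is the shape that real succinct bit vectors refine to o(n) extra space.

```haskell
-- Toy rank structure: per-block popcount summaries over a Bool "bit vector".
blockSize :: Int
blockSize = 4

mkRank :: [Bool] -> Int -> Int
mkRank bits = \i ->
  let b = i `div` blockSize
  in  summaries !! b + count (take (i `mod` blockSize + 1) (blocks !! b))
  where
    blocks    = chunk bits
    summaries = scanl (+) 0 (map count blocks)  -- popcount of preceding blocks
    count     = length . filter id
    chunk []  = []
    chunk xs  = take blockSize xs : chunk (drop blockSize xs)
```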
Lindsey Kuper, LVars for distributed programming, or, LVars and CRDTs join forces (Tue 2015-05-26, 15:30-16:20)
Conflict-free replicated data types, or CRDTs, are data structures that leverage lattice properties to ensure the eventual consistency of replicated data in a distributed system. LVars, which are data structures that leverage lattice properties to ensure the determinism of shared-memory parallel programs, turn out to be a close cousin to CRDTs. In this talk, we'll look at the similarities and differences between LVars and CRDTs, and we'll consider what it would mean for CRDTs to support LVar-style "threshold" read operations (and why we might want them to).
See slides and draft paper.
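The lattice machinery the two share can be sketched in a few lines (a pure simplification, not the actual LVars or CRDT APIs): states form a join-semilattice, writes are joins (so they commute and are idempotent), and a threshold read answers only once the state has climbed at or above a given lattice element, so its answer never changes as the state grows.

```haskell
-- States form a join-semilattice; writes are joins.
class JoinSemilattice a where
  join :: a -> a -> a
  leq  :: a -> a -> Bool

-- A max-counter lattice, the simplest LVar/CRDT-style state.
newtype MaxInt = MaxInt Int deriving (Eq, Show)

instance JoinSemilattice MaxInt where
  join (MaxInt x) (MaxInt y) = MaxInt (max x y)
  leq  (MaxInt x) (MaxInt y) = x <= y

-- Threshold read: Just () once the state passes the threshold, Nothing
-- before. Monotonic growth means a Just answer can never be retracted.
thresholdRead :: JoinSemilattice a => a -> a -> Maybe ()
thresholdRead threshold state
  | threshold `leq` state = Just ()
  | otherwise             = Nothing
```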
Ralf Hinze, TBA (Tue 2015-05-26, 16:40-17:20)
Nate Foster, Transactional Forest: Strong Consistency for File Stores (Tue 2015-05-26, 17:20-18:00)
This talk will present work in progress on a variant of the Forest language with strong consistency guarantees.
(joint work with Jonathan Dilorenzo, Kathleen Fisher, Hugo Pacheco, and Richard Zhang)
Dimitrios Vytiniotis, Gradually typed DSLs with Deferrable constraints in Haskell (Tue 2015-05-26, 18:00-18:40)
Using dynamic data in statically typed code can be tedious and can lead to significant code restructuring compared to a purely dynamic approach. We show here how the use of a type class constraint over constraints can greatly simplify the process and allow programmers to smoothly mix statically and dynamically typed code. The technique is particularly useful for DSLs and we demonstrate the ideas through the case study of a mixed static/dynamic information flow control library in Haskell.
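For contrast, here is the baseline such a design improves on (this is plain `Data.Dynamic`, not the talk's deferrable-constraints machinery): type checks are deferred to runtime via `Typeable`, letting statically typed code consume values whose types are only known dynamically.

```haskell
import Data.Dynamic (Dynamic, toDyn, fromDynamic)

-- A heterogeneous list of dynamically typed values.
mixed :: [Dynamic]
mixed = [toDyn (1 :: Int), toDyn "two", toDyn (3 :: Int)]

-- A statically typed consumer: sum the values that turn out to be Ints,
-- skipping anything whose runtime type check fails.
sumInts :: [Dynamic] -> Int
sumInts = sum . map (maybe 0 id . fromDynamic)
```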
Derek Dreyer, title (Wed 2015-05-27, 09:05-09:50)
Koen Claessen, Bounded Model Checking of Functional Programs (Wed 2015-05-27, 09:50-10:30)
We present a bounded model checker for Haskell programs based on a SAT-solver.
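A miniature of the underlying idea, in the style of bounded exhaustive testing rather than the actual SAT encoding: enumerate all inputs up to a size bound and search for a counterexample; finding none proves the property up to that bound.

```haskell
-- Search all Bool lists up to length k for a counterexample to prop.
boundedCheck :: Int -> ([Bool] -> Bool) -> Maybe [Bool]
boundedCheck k prop =
  case [xs | xs <- inputsUpTo k, not (prop xs)] of
    []      -> Nothing   -- no counterexample within the bound
    (c : _) -> Just c    -- first counterexample found

-- All lists of Bools of length <= k, shortest first.
inputsUpTo :: Int -> [[Bool]]
inputsUpTo k = concatMap ofLen [0 .. k]
  where
    ofLen 0 = [[]]
    ofLen n = [b : xs | xs <- ofLen (n - 1), b <- [False, True]]
```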
Steve Zdancewic, Curry-Howard for GUIs: classical linear temporal logic (Wed 2015-05-27, 11:00-11:40)
In this talk, I'll describe some work in progress that is looking to find good type structure for describing GUI programs. The key observation is that the "eventually" modality of temporal logic should provide a nice abstraction for mouse clicks and other events that might happen in the future. One subtlety is capturing the total ordering of events in the GUI system. We show that a suitable rule of the modal logic corresponds via the Curry-Howard correspondence to a "select" operator, which lets the program wait for the first of two events. The hope is that, when combined with classical linear logic (which provides concurrency and state), such linear temporal logic provides a good foundation for GUI programming.
(joint work with Jennifer Paykin and Neel Krishnaswami)
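The select operator can be given a toy denotational reading (my own simplification, not the talk's calculus): model an event as a value tagged with its occurrence time; then select witnesses the passage from "eventually A" and "eventually B" to "eventually (A or B)", resolved by whichever event fires first.

```haskell
-- An event is a value together with the time at which it occurs.
data Event a = At Int a deriving (Eq, Show)

-- select: wait for the first of two events, remembering which one won.
select :: Event a -> Event b -> Event (Either a b)
select (At t1 a) (At t2 b)
  | t1 <= t2  = At t1 (Left a)
  | otherwise = At t2 (Right b)
```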
Ulf Norell, Agda Tactics Programming (Wed 2015-05-27, 11:40-12:15)
Agda has been (and still is) far behind systems like Coq when it comes to proof automation. In this talk I describe my efforts at starting to catch up on this front, using Agda's reflection mechanism to implement a tactic for proving equations between arithmetic expressions. The resulting tactic is implemented entirely in Agda and is efficient enough for practical use.
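The core of such a tactic can be sketched outside Agda (here in Haskell, without the reflection step that reifies goals into syntax): normalize both sides of an equation to a canonical form and compare; equal normal forms establish the equation.

```haskell
import qualified Data.Map as Map

-- Syntax of (linear) arithmetic expressions over named variables.
data Expr = Var String | Lit Integer | Add Expr Expr
  deriving Show

-- Normal form: a constant plus a coefficient for each variable.
normalize :: Expr -> (Integer, Map.Map String Integer)
normalize (Lit n)   = (n, Map.empty)
normalize (Var x)   = (0, Map.singleton x 1)
normalize (Add a b) =
  let (m, va) = normalize a
      (n, vb) = normalize b
  in  (m + n, Map.unionWith (+) va vb)

-- The "tactic": an equation holds if both sides share a normal form.
sameExpr :: Expr -> Expr -> Bool
sameExpr a b = normalize a == normalize b
```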
Fritz Henglein, Kleenex: From nondeterministic finite state transducers to streaming string transducers (Thu 2015-05-28, 09:05-09:50)
We show how type theory, prefix codes, nondeterministic automata, streaming and determinization to register automata yield a high-performance streaming implementation of nondeterministic finite state transducers, embodied in a DSL called Kleenex. On stock desktop hardware, with target code rendered in plain C operating on one byte at a time, it processes on the order of 1 Gb/s single-core, outperforming regular expression matching tools such as RE2. We sketch how we expect to achieve 5 Gb/s single-core without hardware or systems software tricks.
(joint work with Bjørn Bugge Grathwohl, Ulrik Terp Rasmussen, Kristoffer Aalund Søholm, and Sebastian Paaske Tørholm)
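The streaming shape of the target machines can be illustrated with a toy deterministic string transducer (far simpler than Kleenex's nondeterministic machinery): one input character at a time drives a state machine that emits output incrementally. This one collapses runs of spaces into a single space.

```haskell
-- Two states: copying input through, or inside a run of spaces.
data St = Copying | InSpaces

-- One transition: next state plus the output emitted for this character.
step :: St -> Char -> (St, String)
step Copying  ' ' = (InSpaces, " ")   -- emit one space for the whole run
step Copying  c   = (Copying, [c])
step InSpaces ' ' = (InSpaces, "")    -- swallow the rest of the run
step InSpaces c   = (Copying, [c])

-- Run the transducer over a string, concatenating the emitted output.
runFST :: String -> String
runFST = go Copying
  where
    go _ []       = []
    go s (c : cs) = let (s', out) = step s c in out ++ go s' cs
```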
Xavier Leroy, title (Thu 2015-05-28, 09:50-10:30)
Didier Remy, Full reduction and GADTs (Thu 2015-05-28, 15:30-16:15)
There is a mismatch between our intuitive understanding of type errors and the guarantees that the type soundness theorem gives us for a call-by-value or call-by-need strategy. The solution is to prove type soundness for full reduction, which has many other advantages besides. However, GADTs are not sound under full reduction. We explain the problem and how to fix it by adding explicit introductions for implicit uses of assumptions, together with explicit weakening.
See slides and project website.
David MacQueen, Newman’s Typability Algorithm (Thu 2015-05-28, 16:15-16:40)
In 1943, Max Newman (Turing’s mentor, and leader of the Colossus project at Bletchley Park) published a paper describing a very general technique for checking whether a formula is “typable”, using Quine’s New Foundations (an implicitly typed set theory) and an implicitly typed (Curry-style) version of Church’s Simple Theory of Types as examples. This talk gives a simplified and less general description of Newman’s ideas as they apply to typability (and indeed type inference) for the lambda calculus. The essential idea is to define a binary relation on the set of subexpressions of the expression being analyzed that captures the notion of the type of one expression being a subterm (e.g. domain or range of a function type) of the type of another expression. The original expression is typable if this relation generates a strict partial order. Newman’s paper represents perhaps the earliest contribution to the subject of type inference, and it was overlooked for many decades until Roger Hindley rediscovered it around 2005.
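The property Newman's relation decides can be checked with the now-standard machinery (constraint generation plus unification with an occurs check, not Newman's own relational formulation); the occurs check plays the role of detecting a cycle that would break the strict partial order. A compact sketch:

```haskell
import qualified Data.Map as Map

-- Untyped lambda terms and simple types with unification variables.
data Term = V String | Lam String Term | App Term Term

data Ty = TV Int | Arrow Ty Ty deriving (Eq, Show)

-- Constraint generation, threading a fresh-variable counter.
infer :: Map.Map String Ty -> Int -> Term -> Maybe (Int, [(Ty, Ty)], Ty)
infer env n (V x) = do
  t <- Map.lookup x env
  pure (n, [], t)
infer env n (Lam x body) = do
  let a = TV n
  (n', cs, t) <- infer (Map.insert x a env) (n + 1) body
  pure (n', cs, Arrow a t)
infer env n (App f e) = do
  (n1, cs1, tf) <- infer env n f
  (n2, cs2, te) <- infer env n1 e
  let r = TV n2
  pure (n2 + 1, (tf, Arrow te r) : cs1 ++ cs2, r)

-- Unification; the occurs check rejects cyclic (untypable) constraints.
unify :: [(Ty, Ty)] -> Maybe ()
unify [] = Just ()
unify ((s, t) : cs)
  | s == t = unify cs
unify ((TV i, t) : cs)
  | occurs i t = Nothing
  | otherwise  = unify (map (both (subst i t)) cs)
unify ((t, TV i) : cs) = unify ((TV i, t) : cs)
unify ((Arrow a b, Arrow c d) : cs) = unify ((a, c) : (b, d) : cs)

both f (a, b) = (f a, f b)

occurs i (TV j)      = i == j
occurs i (Arrow a b) = occurs i a || occurs i b

subst i t (TV j) | i == j = t
subst _ _ (TV j)          = TV j
subst i t (Arrow a b)     = Arrow (subst i t a) (subst i t b)

typable :: Term -> Bool
typable tm = case infer Map.empty 0 tm of
  Nothing         -> False
  Just (_, cs, _) -> unify cs == Just ()
```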
John Launchbury, title (Fri 2015-05-29, 09:05-09:45)
Amal Ahmed, CPS Translation of Dependent Types (Fri 2015-05-29, 09:45-10:25)
In a 2002 paper, Barthe and Uustalu studied type-preserving CPS translation of the Calculus of Constructions (CoC). They discovered that typed CPS translation of Sigma types (strong sums) is not possible, at least not using the usual CPS type translation, which employs double negation. In this talk, I will describe the problem and a potential solution. Our main idea is to use a CPS type translation that employs answer-type polymorphism.
(joint work with Nick Rioux and William J. Bowman)
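The key idea has a familiar Haskell rendering (a simplification of the type translation in the talk, which targets CoC): rather than fixing a double-negation answer type, the translation of a computation is polymorphic in the answer type.

```haskell
{-# LANGUAGE RankNTypes #-}

-- Answer-type-polymorphic CPS: the computation may be run with any
-- continuation, at any answer type, rather than a fixed double negation.
newtype CPS a = CPS { runCPS :: forall ans. (a -> ans) -> ans }

ret :: a -> CPS a
ret x = CPS (\k -> k x)

bind :: CPS a -> (a -> CPS b) -> CPS b
bind m f = CPS (\k -> runCPS m (\x -> runCPS (f x) k))
```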
Kathleen Fisher, Tracking the Flow of Ideas through the Programming Languages Literature (Fri 2015-05-29, 11:00-11:45)
How have conferences like ICFP, OOPSLA, PLDI, and POPL evolved over the last 20 years? Did generalizing the Call for Papers for OOPSLA in 2007 or changing the name of the umbrella conference to SPLASH in 2010 have any effect on the kinds of papers published there? How do POPL and PLDI papers compare, topic-wise? Is there related work that I am missing? Have the ideas in O’Hearn’s classic paper on separation logic shifted the kinds of papers that appear in POPL? Does a proposed program committee cover the range of submissions expected for the conference? If we had better tools for analyzing the programming language literature, we might be able to answer these questions and others like them in a data-driven way. In this paper, we explore how topic modeling, a branch of machine learning, might help the programming language community better understand our literature.
See slides; the paper and tool are available from the TMPL website.
Jack Dennis, The Fresh Breeze Project: A Status Report (Fri 2015-05-29, 11:45-12:15)
The Fresh Breeze Project is a collaborative effort of the MIT Computer Science and Artificial Intelligence Laboratory and the University of Delaware Computer Architecture and Parallel Systems Laboratory. The project has developed a programming model guided by functional programming principles, and an architecture for massively parallel computer systems able to efficiently execute programs that adhere to the programming model with high performance. The principal innovations are representation of all data objects by trees of fixed size blocks of memory called "chunks", and imposing the rule that chunks are not modified once they have been created and filled with data. The architecture is designed to provide dynamic, hardware-mediated management of both memory and processing resources.
A cycle-accurate simulator has been built and used to run linear algebra benchmarks (including matrix multiply and lower/upper (LU) matrix decomposition) on simulated systems with up to 128 processing cores. Programs for testing the architecture are written in funJava, a dialect of Java constrained to prevent occurrence of side-effects, and compiled to codelets by our Fresh Breeze compiler. Construction of a prototype Fresh Breeze system using the BlueDBM FPGA-based platform assembled by the CSAIL Computation Structures Group under Prof. Arvind is planned.
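The chunk discipline is essentially functional data representation, which a few lines of Haskell can sketch (a toy with chunk size 2; the real design uses larger fixed-size blocks and hardware-mediated allocation): data lives in trees of immutable chunks, and an "update" allocates fresh chunks along one root-to-leaf path while sharing every other chunk with the old tree.

```haskell
-- Trees of fixed-size immutable chunks (each leaf holds two ints).
data Chunk = Leaf Int Int | Node Chunk Chunk
  deriving (Eq, Show)

size :: Chunk -> Int
size (Leaf _ _) = 2
size (Node l r) = size l + size r

-- Read element i.
get :: Chunk -> Int -> Int
get (Leaf a b) i = if i == 0 then a else b
get (Node l r) i
  | i < size l = get l i
  | otherwise  = get r (i - size l)

-- Path-copying "update": only chunks on the path to index i are freshly
-- allocated; the old tree is never modified and remains fully readable.
set :: Chunk -> Int -> Int -> Chunk
set (Leaf a b) i x = if i == 0 then Leaf x b else Leaf a x
set (Node l r) i x
  | i < size l = Node (set l i x) r
  | otherwise  = Node l (set r (i - size l) x)
```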
Koen Claessen, Discussion of the Puzzle (Mon 2015-05-25, 12:15-12:30)
Why the puzzle is hard and how you can test solution candidates using QuickCheck.