Explicit Substitutions through the Eyes of the Suspension Calculus

Andrew Gacek, University of Minnesota

Slides [pdf]

Higher-order abstract syntax depends on efficient implementations of equality on lambda terms modulo alpha and beta conversion. In particular, we desire the ability to look underneath abstractions and to apply substitutions lazily. Various notions of explicit substitution have been developed to fill this need: the λσ-calculus, the suspension calculus, the λυ-calculus, the λs-calculus, and more. These systems are judged primarily on three desirable properties: confluence on meta-variables, preservation of strong normalization, and the ability to combine substitution walks. Of all the calculi that have been developed, the only ones that support combining substitution traversals are the λσ-calculus and the suspension calculus. It has been shown that the λσ-calculus does not preserve strong normalization. We conjecture that the suspension calculus does, in which case it would be the only calculus we know of that has all three of these properties. In this talk I will present a new variant of the suspension calculus and discuss the properties above in relation to it. In addition, I will exploit the new form of the suspension calculus to describe a translation from it to the λσ-calculus, and use this to highlight the key differences between the two systems. Finally, I will explain why these differences lead us to believe that the suspension calculus preserves strong normalization while the λσ-calculus fails to.

This is joint work with Gopalan Nadathur.
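
To fix intuitions, the sketch below (in OCaml) shows the general shape of an explicit substitution calculus over de Bruijn terms. It follows the rewrite-rule style of the λσ-calculus rather than the suspension calculus's annotated suspensions, and every name in it is illustrative rather than drawn from either system:

    (* Lambda terms in de Bruijn form, extended with suspended
       substitutions.  A simplified, λσ-flavored sketch. *)
    type term =
      | Var of int                 (* de Bruijn index, counting from 1 *)
      | App of term * term
      | Abs of term
      | Susp of term * sub         (* a term with a delayed substitution *)

    and sub =
      | Shift of int               (* renaming: add n to every free index *)
      | Dot of term * sub          (* substitute the head term for index 1 *)

    (* Beta reduction merely records the substitution to be done. *)
    let beta (b : term) (a : term) : term = Susp (b, Dot (a, Shift 0))

    (* Push a suspended substitution one step into the term; laziness
       comes from doing this only on demand. *)
    let rec step (t : term) (s : sub) : term =
      match t, s with
      | Var 1, Dot (u, _) -> u
      | Var n, Dot (_, s') -> step (Var (n - 1)) s'
      | Var n, Shift k -> Var (n + k)
      | App (f, a), _ -> App (Susp (f, s), Susp (a, s))
      | Abs b, _ ->
          (* under a binder: protect index 1, shift everything else *)
          Abs (Susp (b, Dot (Var 1, compose s (Shift 1))))
      | Susp (t', s'), _ ->
          (* two pending substitutions meet: merge them, so the body
             below is walked only once *)
          step t' (compose s' s)

    (* Composition is what "combining substitution walks" rests on:
       t[s1][s2] = t[compose s1 s2]. *)
    and compose (s1 : sub) (s2 : sub) : sub =
      match s1, s2 with
      | Shift 0, _ -> s2
      | Shift n, Dot (_, s2') -> compose (Shift (n - 1)) s2'
      | Shift n, Shift k -> Shift (n + k)
      | Dot (u, s1'), _ -> Dot (Susp (u, s2), compose s1' s2)

The suspension calculus enriches this picture with environments annotated by embedding levels, and it is exactly such differences in bookkeeping that a translation into the λσ-calculus must account for.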

Strategies for the Representation and Reduction of Lambda Terms and Their Impact

Chuck Liang, Hofstra University

Slides [dvi] - Figures [dvi]

The use of higher-order abstract syntax requires strategies for the representation and reduction of lambda terms. Substitutions may be applied underneath abstractions in the formation of normal forms, which are used, for example, in testing for equality between terms. Naive implementations of these capabilities can lead to much redundancy in term traversal and creation. We are concerned, therefore, with a setting where term structures can be shared and where an explicit substitution system is available in which substitutions can be suspended and combined. In such a context a variety of combinations of reduction and substitution strategies can be considered. Which of these combinations leads to the best overall performance is not at all obvious. For example, encoding explicit, suspended substitutions inside term structures allows us to combine multiple substitutions introduced at different points of the computation. However, eagerly applying substitutions before choice-points in the computation can avoid redundant substitution work should the computation be rolled back. We measure and compare the performance of such strategies across a wide range of examples, and pinpoint the causes of significant discrepancies. This allows us, first, to rule out certain approaches as clearly inferior. Furthermore, by understanding the precise trade-offs between the alternatives, it is possible to formulate new combinations of approaches that can be expected to give good overall performance for typical kinds of applications. Further topics that may be examined, should time allow, include the impact of the de Bruijn representation of terms in an explicit substitution context.
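
As a concrete, if simplified, illustration of the trade-off, the following self-contained OCaml sketch contrasts a textbook eager reducer with a Krivine-style environment machine over de Bruijn terms; the setup and names are mine, not the talk's. The environment machine delays substitutions, so the arguments of several nested redexes are resolved through a single lookup discipline, while the eager reducer pays a full traversal of the body for every redex but leaves no suspended work to redo:

    type term =
      | Var of int                 (* de Bruijn index, from 1 *)
      | App of term * term
      | Abs of term

    (* shift free indices at or above cutoff c by d *)
    let rec shift d c = function
      | Var n -> if n >= c then Var (n + d) else Var n
      | App (f, a) -> App (shift d c f, shift d c a)
      | Abs b -> Abs (shift d (c + 1) b)

    (* substitute u for index j, decrementing the indices above j *)
    let rec subst u j = function
      | Var n -> if n = j then u else if n > j then Var (n - 1) else Var n
      | App (f, a) -> App (subst u j f, subst u j a)
      | Abs b -> Abs (subst (shift 1 1 u) (j + 1) b)

    (* Eager strategy: each beta step performs a complete substitution
       walk over the abstraction body. *)
    let rec eager_whnf = function
      | App (f, a) ->
          (match eager_whnf f with
           | Abs b -> eager_whnf (subst a 1 b)
           | f' -> App (f', a))
      | t -> t

    (* Lazy strategy: arguments are pushed into an environment and
       looked up on demand, so nested redexes share one walk. *)
    type closure = Cl of term * closure list

    let rec lazy_whnf (Cl (t, env)) (stack : closure list) =
      match t, env, stack with
      | Var 1, c :: _, _ -> lazy_whnf c stack
      | Var n, _ :: env', _ -> lazy_whnf (Cl (Var (n - 1), env')) stack
      | App (f, a), _, _ -> lazy_whnf (Cl (f, env)) (Cl (a, env) :: stack)
      | Abs b, _, arg :: stack' -> lazy_whnf (Cl (b, arg :: env)) stack'
      | _ -> (Cl (t, env), stack)

Neither sketch models sharing of reduction results or substitution under abstractions, both of which matter for the measurements described above; they are only meant to make the space of strategies tangible.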

A Logic for Reasoning about Logic Specifications

Dale Miller, INRIA-Futurs and LIX

Slides [pdf]

As logic programming grew more declarative and included more logical principles, logic programs became more expressive. Rich logic programming languages, such as lambda Prolog, Elf, Lolli, and Forum, were all successfully applied to specifying operational semantics and theorem provers. Since these logics provided flexible notions of variable binding and rich forms of contexts (sets and multisets), specifications could use these directly instead of needing to encode object-level variables via de Bruijn numerals, stacks and heaps as list structures, etc. As a result, specifications were more compact and elegant. While their logical correctness seemed more evident, it was not clear how one could formally prove that these specifications actually possessed particular formal properties. I will outline an approach to representing and reasoning about logic programming specifications that uses recent work on extending the sequent calculus with fixed points ("definitions") and the nabla-quantifier. I will give an overview of a two-level prover architecture and illustrate it with some examples. This is joint work with Alwen Tiu and Ray McDowell.
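
For readers who have not seen the nabla-quantifier, its sequent rules convey the idea; the following LaTeX reconstruction follows Miller and Tiu's presentation from memory, so notational details may differ. Sequents attach a local signature σ to each formula, and ∇ simply moves its variable into that local scope:

    \frac{\Sigma : \Gamma, (\sigma, x) \triangleright B \vdash \mathcal{C}}
         {\Sigma : \Gamma, \sigma \triangleright \nabla x.\, B \vdash \mathcal{C}}\;\nabla\mathcal{L}
    \qquad
    \frac{\Sigma : \Gamma \vdash (\sigma, x) \triangleright B}
         {\Sigma : \Gamma \vdash \sigma \triangleright \nabla x.\, B}\;\nabla\mathcal{R}

Unlike the rules for the universal quantifier, these introduce no eigenvariable into the global signature Σ, which is part of what makes ∇ suitable for reasoning generically about object-level binders.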

Practical Higher-Order Pattern Unification with On-the-Fly Raising

Gopalan Nadathur, University of Minnesota

Slides [pdf]

Higher-order pattern unification problems arise often in computations within metalanguages and logical frameworks. An important characteristic of such problems is that they are given by equations appearing under a prefix of alternating universal and existential quantifiers. Most existing algorithms for solving these problems assume that such prefixes are simplified to a "forall-some-forall" form by an a priori application of a transformation known as raising. There are drawbacks to this approach. Mixed quantifier prefixes typically manifest themselves in the course of computation, thereby requiring a dynamic form of preprocessing that is difficult to support in low-level implementations. Moreover, raising may be redundant in many cases and its effect may have to be undone by a subsequent pruning transformation. An explicit substitution based method has been proposed by Dowek, Hardin, Kirchner and Pfenning (DHKP) that appears to use a different technique for handling variable dependencies, but the resulting algorithm nevertheless gives rise to behaviour akin to undirected dynamic raising followed by pruning. We have proposed a method that overcomes these drawbacks. In particular, we have described a unification algorithm that proceeds by recursively descending through the structures of terms, performing raising and other transformations on the fly and only as needed. Our algorithm does not reflect explicit substitutions into the unification process in an intrinsic way, and it treats raising in a controlled manner, but it still has intriguing similarities to the processing manifest in the DHKP procedure. This algorithm has been implemented in SML and in C and has been used in two different systems.

In this talk, we will present the essential ideas underlying the new higher-order pattern unification procedure and will also discuss the relevance of explicit substitution based approaches to this particular unification problem.

[This talk is based on joint work with Natalie Linnell.]
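
To make the setting concrete, here is a deliberately stripped-down OCaml sketch of the central step in pattern unification: solving F x1 ... xn = t, where F is a meta-variable and the xi are distinct bound variables. Everything the talk is actually about — raising performed on the fly, together with pruning — is precisely what this sketch punts on by simply failing; all names are illustrative:

    type term =
      | Bvar of string                (* bound variable, named for clarity *)
      | Const of string
      | Meta of string * string list  (* meta-variable applied to bound vars *)
      | App of term * term
      | Lam of string * term

    exception Fail of string

    (* The pattern condition: a meta-variable's arguments must be
       distinct bound variables. *)
    let check_pattern (args : string list) : unit =
      if List.length (List.sort_uniq compare args) <> List.length args
      then raise (Fail "arguments not distinct")

    (* Invert:  F x1 ... xn = t  becomes  F := \x1 ... \xn. t,  provided
       every bound variable of t occurs among the xi.  A real algorithm
       would instead prune the offending variable from the arguments of
       the meta-variables in t, or raise meta-variables as needed. *)
    let solve (f : string) (args : string list) (t : term) : string * term =
      check_pattern args;
      let rec ok bound = function
        | Bvar x -> List.mem x args || List.mem x bound
        | Const _ -> true
        | Meta (g, _) when g = f -> raise (Fail "occurs-check")
        | Meta (_, ys) ->
            List.for_all (fun y -> List.mem y args || List.mem y bound) ys
        | App (l, r) -> ok bound l && ok bound r
        | Lam (x, b) -> ok (x :: bound) b
      in
      if not (ok [] t) then raise (Fail "escaping variable: pruning needed");
      (f, List.fold_right (fun x body -> Lam (x, body)) args t)

In the on-the-fly approach described above, the failure cases of this sketch are instead handled during the recursive descent itself, by adjusting the arguments of meta-variables as the offending structure is encountered.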

Contextual Modal Type Theory: A Foundation for Meta-variables

Brigitte Pientka, McGill University

Slides [pdf]

In recent years, higher-order reasoning systems and logical frameworks have matured and been successful in large-scale applications such as proof-carrying code. Perhaps surprisingly, there are still some foundational and implementation issues that are poorly understood.

In this talk, we concentrate on the notion of meta-variables and their role in higher-order unification. Both concepts are fundamental to proof search, type reconstruction, and the representation of incomplete proofs. First, we will present a contextual modal type theory that provides an elegant, uniform foundation for understanding meta-variables and explicit substitutions. Second, we use this contextual modal type theory as a foundation for an elegant description of higher-order pattern unification. This provides new insights into efficient implementation strategies and gives a logical justification for design decisions that have so far been motivated largely operationally.
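
As a rough illustration of the central idea, the OCaml sketch below gives a datatype in which a meta-variable has a contextual type A[Ψ] — an object of type A relative to a context Ψ — and each occurrence carries an explicit substitution. This is one reading of contextual modal type theory, with invented names, capture-avoiding renaming elided, and named variables where an implementation would likely use de Bruijn indices:

    type tp = Base of string | Arrow of tp * tp

    type ctx = (string * tp) list          (* Psi: ordinary variables *)

    type term =
      | Var of string
      | Lam of string * tp * term
      | App of term * term
      | MVar of string * sub               (* X[sigma]: a closure *)

    and sub = (string * term) list         (* maps Psi's variables outward *)

    (* A meta-variable declaration X :: A[Psi] records the context its
       eventual instantiation may depend on. *)
    type mdecl = { mname : string; mctx : ctx; mtp : tp }

    (* Applying a substitution; renaming to avoid capture is elided. *)
    let rec apply_sub (s : sub) (t : term) : term =
      match t with
      | Var x -> (try List.assoc x s with Not_found -> Var x)
      | Lam (x, a, b) -> Lam (x, a, apply_sub (List.remove_assoc x s) b)
      | App (f, a) -> App (apply_sub s f, apply_sub s a)
      | MVar (u, s') ->
          (* the delayed substitution composes with the new one *)
          MVar (u, List.map (fun (x, e) -> (x, apply_sub s e)) s')

    (* Instantiating X := body makes each occurrence X[sigma] apply its
       delayed substitution: the substitution happens only now. *)
    let rec instantiate (x : string) (body : term) (t : term) : term =
      match t with
      | MVar (u, s) when u = x ->
          apply_sub (List.map (fun (v, e) -> (v, instantiate x body e)) s) body
      | MVar (u, s) ->
          MVar (u, List.map (fun (v, e) -> (v, instantiate x body e)) s)
      | Var v -> Var v
      | Lam (v, a, b) -> Lam (v, a, instantiate x body b)
      | App (f, a) -> App (instantiate x body f, instantiate x body a)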

Treatment of Types in an Implementation of λProlog Organized around Higher-Order Pattern Unification

Xiaochu Qi, University of Minnesota

Slides [ppt]

Traditionally, the purpose of types in a programming language has been that of describing and checking properties of programs to prevent certain run-time failures. When used in this mode in a strongly typed language, types and type-checking have a role only at compile time. However, types are increasingly being used in roles other than the one traditionally assigned to them, and they are entering more and more into runtime processing. Well-known examples of such use occur in dynamic dispatch in Java-like languages and in intensional polymorphism as studied in the TIL implementation of SML. In the logic programming and proof search settings, types are relevant to the unification computation. Their use in this context occurs in two different forms. First, in a situation where polymorphism is present, a constant name can be used at different instances of its declared type and, in this case, type information is needed to determine whether different occurrences of the name are indeed ones of the same constant. Second, type information may be needed in determining the structure of a binding for a variable. When types are needed for the second purpose, as in the case of higher-order unification in the manner of Huet, they have to be stored with every variable and constant. In the case of higher-order pattern unification, however, it turns out that nothing actually needs to be done with types during the variable binding phase. Thus, type examination is needed only for the first of the two purposes described, and even in this case preprocessing can considerably reduce the computation. We will discuss these various issues and also outline a scheme that takes advantage of these observations in designing a treatment of types in a new implementation of λProlog that we are developing. In the end, we will try to contrast the results of the approach we outline with the processing of types in the Teyjus implementation of λProlog.

This talk is based on joint work with Gopalan Nadathur.
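
The following OCaml sketch, with invented names, illustrates the kind of compile-time preprocessing alluded to here: if a constant's declared type is known statically, an occurrence need only store the types instantiating the declared type's variables, and comparing two occurrences need only examine those instances:

    type ty =
      | Tvar of int                  (* type variable, indexed *)
      | Tconst of string * ty list   (* e.g. Tconst ("list", [Tvar 0]) *)
      | Tarrow of ty * ty

    (* A constant declaration: its name, the number of type variables
       in its declared type, and the declared type itself (the shared
       skeleton, which never needs to be re-examined at runtime). *)
    type decl = { cname : string; tvars : int; skel : ty }

    (* An occurrence stores only the instances of the type variables. *)
    type occurrence = { const : decl; inst : ty list }

    let rec ty_eq (t1 : ty) (t2 : ty) : bool =
      match t1, t2 with
      | Tvar i, Tvar j -> i = j
      | Tconst (c, a1), Tconst (d, a2) ->
          c = d && List.length a1 = List.length a2
          && List.for_all2 ty_eq a1 a2
      | Tarrow (a1, b1), Tarrow (a2, b2) -> ty_eq a1 a2 && ty_eq b1 b2
      | _ -> false

    (* Two occurrences are of the same constant exactly when the names
       agree and the variable instances agree; monomorphic constants
       (tvars = 0) need no type comparison at all.  A real system would
       unify rather than test equality, since instances may still
       contain type variables at runtime. *)
    let same_constant (o1 : occurrence) (o2 : occurrence) : bool =
      o1.const.cname = o2.const.cname
      && (o1.const.tvars = 0 || List.for_all2 ty_eq o1.inst o2.inst)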

An Example of Language Translation and Proof using Higher-Order Abstract Syntax

Mike Whalen, University of Minnesota and Rockwell Collins

Slides [ppt]

In this talk, I will describe my experience using hereditary Harrop clauses and higher-order abstract syntax as the basis of a provably correct code generation approach. The code generator translates from the Requirements State Machine Language without Events (RSML-e), a synchronous dataflow language, to C. The original goal of this work was to create a code generation approach using first-order abstract syntax and SOS rules, but managing identifiers made the equivalence proofs very complex. I will describe the basic problem of showing equivalence between two languages and the effort involved in creating rules and performing proofs using both first-order and higher-order rules. For the most part, higher-order abstract syntax led to much simpler proof obligations. However, there were a few situations where the lexical scoping of bound variables led to awkward rule constructions and proofs. I hope to generate discussion on potential tool extensions or rule patterns that may improve these constructions. I will also describe my experience implementing the code generator with the Teyjus λProlog tool, and suggest some tool improvements that might make similar efforts more straightforward.
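
To make the contrast concrete, here is a toy OCaml sketch — not RSML-e or the actual translator — of the identifier problem that motivated the switch. With first-order syntax the semantics threads an environment, and every proof about it must reason about lookup, freshness, and shadowing; with higher-order syntax the meta-language's binder does that work:

    (* First-order abstract syntax: binders name their variable, so the
       semantics, and every proof about it, manages identifiers. *)
    type foas =
      | FNum of int
      | FVar of string
      | FLet of string * foas * foas        (* let x = e1 in e2 *)

    let rec feval (env : (string * int) list) (e : foas) : int =
      match e with
      | FNum n -> n
      | FVar x -> List.assoc x env          (* lookup can fail; shadowing matters *)
      | FLet (x, e1, e2) -> feval ((x, feval env e1) :: env) e2

    (* Higher-order abstract syntax: the binder is a meta-level
       function, so no environment is threaded through at all. *)
    type hoas =
      | HNum of int
      | HLet of hoas * (hoas -> hoas)

    let rec heval (e : hoas) : int =
      match e with
      | HNum n -> n
      | HLet (e1, body) -> heval (body (HNum (heval e1)))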