The goal of this note is to present a “modular” proof, for various type systems with η-conversion, of the completeness and correctness of an algorithm for testing the conversion of two terms. The proof of completeness is an application of the notion of logical relations (see Statman 1983, which also uses this notion in a proof of Church-Rosser for the simply typed λ-calculus).
An application of our result will be the equivalence between two formulations of Type Theory: the one where conversion is a judgement, as in the present version of Martin-Löf set theory, and the one where conversion is defined at the level of raw terms, as in the standard version of LF (for a “conversion-as-judgement” presentation of LF, see Harper 1988). Even if we do not include η-conversion, the equivalence between the “conversion-as-judgement” and the “conversion defined on raw terms” formulations appears to be a non-trivial property.
In order to simplify the presentation we limit ourselves to type theory with only Π and one universe. This calculus contains LF. After some motivation, we present the algorithm, the proof of its completeness and, as a corollary, its correctness. As further corollaries of our argument, we prove normalisation, Church-Rosser, and the equivalence between the two possible formulations of Type Theory.
Informal motivation
The algorithm
The idea is to compute the weak head-normal forms of the two terms (in an untyped way) and, in order to take care of η-conversion, in the case where one weak head-normal form is an abstraction (λx:A)M and the other is a term N that is a variable or an application, to compare recursively apply(N, ξ) and M[ξ], where ξ is a fresh variable.
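This recursion can be sketched concretely. The following Python fragment is our own illustration, not the paper's algorithm verbatim: the term representation and helper names are ours, and type annotations are dropped, since the comparison itself works on raw, untyped terms.

```python
# Terms: ('var', name), ('app', f, a), ('lam', x, body).
import itertools

fresh = (f'_x{i}' for i in itertools.count())

def subst(t, x, s):
    # Naive substitution suffices here: we only ever substitute
    # fresh variables, so capture cannot occur.
    tag = t[0]
    if tag == 'var':
        return s if t[1] == x else t
    if tag == 'app':
        return ('app', subst(t[1], x, s), subst(t[2], x, s))
    return t if t[1] == x else ('lam', t[1], subst(t[2], x, s))

def whnf(t):
    # Weak head-normal form: contract head beta-redexes only.
    while t[0] == 'app':
        f = whnf(t[1])
        if f[0] == 'lam':
            t = subst(f[2], f[1], t[2])
        else:
            return ('app', f, t[2])
    return t

def conv(m, n):
    # Conversion test with the eta cases described above.
    m, n = whnf(m), whnf(n)
    if m[0] == 'lam' and n[0] == 'lam':
        xi = ('var', next(fresh))
        return conv(subst(m[2], m[1], xi), subst(n[2], n[1], xi))
    if m[0] == 'lam':                 # compare M[xi] with apply(N, xi)
        xi = ('var', next(fresh))
        return conv(subst(m[2], m[1], xi), ('app', n, xi))
    if n[0] == 'lam':
        xi = ('var', next(fresh))
        return conv(('app', m, xi), subst(n[2], n[1], xi))
    if m[0] == 'var' and n[0] == 'var':
        return m[1] == n[1]
    if m[0] == 'app' and n[0] == 'app':
        return conv(m[1], n[1]) and conv(m[2], n[2])
    return False
```

For instance, this identifies a variable f with its eta-expansion λz. f z, while still distinguishing distinct free variables.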
We illustrate the effectiveness of proof transformations which expose the computational content of classical proofs even in cases where it is not apparent. We state without proof a theorem that these transformations apply to proofs in a fragment of type theory and discuss their implementation in Nuprl. We end with a discussion of the applications to Higman's lemma by the second author using the implemented system.
Introduction: Computational content
Informal practice
Sometimes we express computational ideas directly as when we say 2 + 2 reduces to 4 or when we specify an algorithm for solving a problem: “use Euclid's GCD (greatest common divisor) algorithm to reduce this fraction.” At other times we refer only indirectly to a method of computation, as in the following form of Euclid's proof that there are infinitely many primes:
For every natural number n there is a prime p greater than n. To prove this, notice first that every number m has a least prime factor; to find it, just try dividing it by 2, 3, …, m and take the first divisor. In particular n! + 1 has a least prime factor. Call it p. Clearly p cannot be any number between 2 and n since none of those divide n! + 1 evenly. Therefore p > n. QED
This proof implicitly provides an algorithm to find a prime greater than n.
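The computational content can be read off directly; a minimal Python rendering of the implicit algorithm (the function names are ours):

```python
from math import factorial

def least_prime_factor(m):
    # "Just try dividing it by 2, 3, ..., m and take the first divisor."
    for d in range(2, m + 1):
        if m % d == 0:
            return d

def prime_greater_than(n):
    # The least prime factor p of n! + 1 cannot be any number between
    # 2 and n, since none of those divide n! + 1 evenly; hence p > n.
    return least_prime_factor(factorial(n) + 1)
```

The loop terminates because every m ≥ 2 divides itself, and the first divisor found is necessarily prime.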
This book contains a collection of papers concerned with logical frameworks. Such frameworks arise in a number of ways when considering the relationship between logic and computation, and indeed the general structure of logical formalisms. In particular, in Computer Science, there is interest in the representation and organization of mathematical knowledge on the computer, and in obtaining computational assistance with its derivation. One would especially like to implement program logics and prove programs correct. Again, there is direct computational content in various logical formalisms, particularly constructive ones. Finally, such issues provoke interest in re-examining purely logical questions.
Logical frameworks arise in two distinct but related senses. First, very many logics are of interest in Computer Science, and great repetition of effort is involved in implementing each. It would therefore be helpful to create a single framework, a kind of meta-logic, which is itself implementable and in which the logics of interest can be represented. Putting the two together yields an implementation of any represented logic.
In the second sense, one chooses a particular “universal” logic which is strong enough to do all that is required, and sticks to it. For example, one might choose a set theory, and do mathematics within that. Both approaches have much in common. Even within a fixed logic there is the need for a descriptive apparatus for particular mathematical theories, notations, derived rules and so on. Providing such is rather similar to providing a framework in the first sense.
By
Leen Helmink, Philips Research Laboratories, P.O. Box 80.000, 5600 JA Eindhoven, the Netherlands,
René Ahn, Philips Research Laboratories, P.O. Box 80.000, 5600 JA Eindhoven, the Netherlands
Edited by
Gerard Huet, Institut National de Recherche en Informatique et en Automatique (INRIA), Rocquencourt; G. Plotkin, University of Edinburgh
In this paper, a method is presented for proof construction in Generalised Type Systems. An interactive system that implements the method has been developed. Generalised type systems (GTSs) provide a uniform way to describe and classify type-theoretical systems, e.g. systems in the families of AUTOMATH, the Calculus of Constructions, and LF. A method is presented to perform unification-based, top-down proof construction for generalised type systems, thus offering a well-founded, elegant and powerful underlying formalism for a proof development system. It combines clause resolution with higher-order natural-deduction-style theorem proving. No theoretical contribution to generalised type systems is claimed.
A type theory presents a set of rules to derive types of objects in a given context with assumptions about the types of primitive objects. The objects and types are expressions in typed λ-calculus. The propositions-as-types paradigm provides a direct mapping between (higher-order) logic and type theory. In this interpretation, contexts correspond to theories, types correspond to propositions, and objects correspond to proofs of propositions. Type theory has successfully demonstrated its capability to formalise many parts of mathematics in a uniform and natural way. For many generalised type systems, like the systems in the so-called λ-cube, the typing relation is decidable. This permits automatic proof checking, and such proof checkers have been developed for specific type systems.
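As a toy illustration of a decidable typing relation, consider a checker for the simply typed λ-calculus; this is a much smaller system than any GTS (the representation and names below are our own), but the principle is the same: checking a proof is checking a type.

```python
# Terms: ('var', x), ('lam', x, A, body), ('app', f, a).
# Types: a base type is a string; arrow types are ('->', A, B).
def typeof(ctx, t):
    tag = t[0]
    if tag == 'var':
        return ctx[t[1]]            # the context supplies the assumption
    if tag == 'lam':
        _, x, a, body = t
        b = typeof({**ctx, x: a}, body)
        return ('->', a, b)
    f = typeof(ctx, t[1])           # application case
    arg = typeof(ctx, t[2])
    if f[0] == '->' and f[1] == arg:
        return f[2]
    raise TypeError('ill-typed application')
```

Under propositions-as-types, the context plays the role of a theory and the computed type is the proposition proved: the identity term λx:P. x checks as a proof of P → P, and applying an assumed proof of P → Q to an assumed proof of P yields a proof of Q.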
The problem addressed in this paper is to construct an object in a given context, given its type.
Various languages have been proposed as specification languages for representing a wide variety of logics. The development of typed λ-calculi has been one approach toward this goal. The logical framework (LF), a λ-calculus with dependent types, is one example of such a language. A small subset of intuitionistic logic with quantification over the simply typed λ-calculus has also been proposed as a framework for specifying general logics. The logic of hereditary Harrop formulas with quantification at all non-predicate types, denoted here as hhω, is such a meta-logic. In this paper, we show how to translate specifications in LF into hhω specifications in a direct and natural way, so that correct typing in LF corresponds to intuitionistic provability in hhω. In addition, we demonstrate a direct correspondence between proofs in these two systems. The logic hhω can be implemented using such logic programming techniques as providing operational interpretations to the connectives and implementing unification on λ-terms. As a result, relating these two languages makes it possible to provide direct implementations of proof checkers and theorem provers for logics specified in LF.
Introduction
The design of languages that can express a wide variety of logics has been the focus of much recent work. Such languages attempt to provide a general theory of inference systems that captures uniformities across different logics, so that they can be exploited in implementing theorem provers and proof systems.
This book is a collection of papers presented at the first annual Workshop held under the auspices of the ESPRIT Basic Research Action 3245, “Logical Frameworks: Design, Implementation and Experiment”. It took place at Sophia-Antipolis, France from the 7th to the 11th of May, 1990. Seventy-four people attended the Workshop: one from Japan, six from the United States, and the rest from Europe.
We thank the European Community for the funding which made the Workshop possible. We also thank Gilles Kahn who, with the help of the Service des Relations Extérieures of INRIA, performed a most excellent job of organisation. Finally, we thank the following researchers who acted as referees: R. Constable, T. Coquand, N.G. deBruijn, P. de Groote, V. Donzeau-Gouge, G. Dowek, P. Dybjer, A. Felty, L. Hallnäs, R. Harper, L. Helmink, F. Honsell, Z. Luo, N. Mendler, C. Paulin, L. Paulson, R. Pollack, D. Pym, F. Rouaix, P. Schröder-Heister, A. Smaill, and B. Werner.
We cannot resist saying a word or two about how these proceedings came into being. Immediately after the Workshop, participants were invited to contribute papers by electronic mail, as LaTeX sources. One of us (Huet) then collected the papers together, largely unedited, and the result was “published electronically” by making the collection a file available worldwide by ftp (a remote file transfer protocol). This seems to have been somewhat of a success, at least in terms of the number of copies circulated, and perhaps had merit in terms of rapid and widespread availability of recent work.
By
Peter Aczel, Computer Science Department, Manchester University, Manchester M13 9PL,
David P. Carlisle, Computer Science Department, Manchester University, Manchester M13 9PL,
Nax Mendler, Computer Science Department, Manchester University, Manchester M13 9PL
In this paper we describe a version of the LTC (Logical Theory of Constructions) framework, three Martin-Löf type theories, and interpretations of the type theories in the corresponding LTC theories. We then discuss the implementation of the above in the generic theorem prover Isabelle. An earlier version of the LTC framework was described by Aczel and Mendler.
Introduction
In earlier work the notion of an open-ended framework of deductive interpreted languages is formulated, and in particular an example is given of a hierarchy of languages Li in the LTC framework. In the first part of this three-part paper, sections 2 to 4, we review this hierarchy of languages and then discuss some issues concerning the framework, which lead to another hierarchy of languages: LTC0, LTC1, LTCW. In the second part, sections 5 and 6, we give three type theories, TT0, TT1, and TTW, and their interpretations in the corresponding LTC languages. In the final part, sections 7 to 9, we document the implementation of the LTC hierarchy in the generic theorem prover Isabelle, developed by Larry Paulson at Cambridge. We also describe a programme for verifying, in Isabelle, the interpretations of the type theories TT0, TT1 and TTW.
The basic LTC framework is one that runs parallel to the ITT framework, where ITT stands for “Intuitionistic Theory of Types”. It is a particular language from the latter framework that has been implemented in the Cornell Nuprl System.
We show how Natural Deduction extended with two replacement operators can provide a framework for defining programming languages, a framework which is more expressive than the usual Operational Semantics presentation in that it permits hypothetical premises. This allows us to do without an explicit environment and store. Instead we use the hypothetical premises to make assumptions about the values of variables. We define the extended Natural Deduction logic using the Edinburgh Logical Framework.
Introduction
The Edinburgh Logical Framework (ELF) provides a formalism for defining Natural Deduction style logics. Natural Deduction is rather more powerful than the notation which is commonly used to define programming languages in “inference-style” Operational Semantics, following Plotkin and others, for example Kahn. So one may ask
“Can a Natural Deduction style be used with advantage to define programming languages?”.
We show here that, with a slight extension, it can, and hence that ELF can be used as a formal meta-language for defining programming languages. However, ELF employs the “judgements as types” paradigm and takes the form of a typed lambda calculus with dependent types. We do not need all this power here; in this paper we present a slight extension of Natural Deduction as a semantic notation for programming language definition. This extension can itself be defined in ELF.
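As an illustration (our own, not a rule taken from the paper), a Natural Deduction style evaluation rule for a local binding can discharge a hypothetical premise about the bound variable's value, instead of threading an explicit environment through the judgements:

```latex
% The premise [x \Rightarrow v_1] is assumed while evaluating e_2
% and discharged by the rule, so no explicit environment is needed.
\frac{e_1 \Rightarrow v_1
      \qquad
      [x \Rightarrow v_1] \vdash e_2 \Rightarrow v_2}
     {\mathbf{let}\ x = e_1\ \mathbf{in}\ e_2 \ \Rightarrow\ v_2}
```

In an environment-style Operational Semantics the same rule would instead carry a store ρ and evaluate e2 under ρ[x ↦ v1].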
The inspiration for using a meta-logic for Natural Deduction proofs comes from Martin-Löf.
It is to be expected that logical frameworks will become more and more important in the near future, since they can set the stage for an integrated treatment of verification systems for large areas of the mathematical sciences (which may contain logic, mathematics, and mathematical constructions in general, such as computer software and even computer hardware). It seems that the moment has come to try to get to some kind of a unification of the various systems that have been proposed.
Over the years there has been the tendency to strengthen the frameworks by rules that enrich the notion of definitional equality, thus causing impurities in the backbones of those frameworks: the typed lambda calculi. In this paper a plea is made for the opposite direction: to expel those impurities from the framework, and to replace them by material in the books, where the role of definitional equality is taken over by (possibly strong) book equality.
Introduction
Verification systems
A verification system consists of
(i) a framework, to be called the frame, which defines how mathematical material (in the wide sense) can be written in the form of books, such that the correctness of those books is decidable by means of an algorithm (the checker),
(ii) a set of basic rules (axioms) that the user of the frame can proclaim in his books as a general basis for further work.
By
David Basin, Department of Artificial Intelligence, University of Edinburgh, Edinburgh, Scotland,
Matt Kaufmann, Computational Logic, Inc., Austin, Texas 78703 USA
We use an example to compare the Boyer-Moore Theorem Prover and the Nuprl Proof Development System. The respective machine verifications of a version of Ramsey's theorem illustrate similarities and differences between the two systems. The proofs are compared using both quantitative and non-quantitative measures, and we examine difficulties in making such comparisons.
Introduction
Over the last 25 years, a large number of logics and systems have been devised for machine verified mathematical development. These systems vary significantly in many important ways, including: underlying philosophy, object-level logic, support for meta-level reasoning, support for automated proof construction, and user interface. A summary of some of these systems, along with a number of interesting comments about issues (such as differences in logics, proof power, theory construction, and styles of user interaction), may be found in Lindsay's article. The Kemmerer study compares the use of four software verification systems (all based on classical logic) on particular programs.
In this report we compare two interactive systems for proof development and checking: The Boyer-Moore Theorem Prover and the Nuprl Proof Development System. We have based our comparison on similar proofs of a specific theorem: the finite exponent two version of Ramsey's theorem (explained in Section 2). The Boyer-Moore Theorem Prover is a powerful (by current standards) heuristic theorem prover for a quantifier-free variant of first order Peano arithmetic with additional data types.