
Controlling unfolding in type theory

Published online by Cambridge University Press:  09 December 2025

Daniel Gratzer*
Affiliation:
Aarhus University, Aarhus, Denmark
Jonathan Sterling
Affiliation:
University of Cambridge, Cambridge, UK
Carlo Angiuli
Affiliation:
Indiana University, Bloomington, IN, USA
Thierry Coquand
Affiliation:
University of Gothenburg, Gothenburg, Sweden
Lars Birkedal
Affiliation:
Aarhus University, Aarhus, Denmark
Corresponding author: Daniel Gratzer; Email: gratzer@cs.au.dk

Abstract

We present a new way to control the unfolding of definitions in dependent type theory. Traditionally, proof assistants require users to fix whether each definition will or will not be unfolded in the remainder of a development; unfolding definitions is often necessary in order to reason about them, but an excess of unfolding can result in brittle proofs and intractably large proof goals. In our system, definitions are by default not unfolded, but users can selectively unfold them in a local manner. We justify our mechanism by means of elaboration to a core theory with extension types – a connective first introduced in the context of homotopy type theory – and by establishing a normalization theorem for our core calculus. We have implemented controlled unfolding in the cooltt proof assistant, inspiring an independent implementation in Agda.

Information

Type: Paper
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press

1. Introduction

In dependent type theory, terms are type-checked modulo definitional equality, a congruence generated by $\alpha$-, $\beta$-, and $\eta$-laws, as well as unfolding of definitions. Unfolding definitions is to some extent a convenience that allows type checkers to silently discharge many proof obligations: for example, a list of length $1+1$ can be used as a list of length $2$ without further annotation. It is by no means the case, however, that we always want a given definition to unfold:

  • Modularity: Dependent types are famously sensitive to the smallest changes to definitions, such as whether $(\!+\!)$ recurs on its first or its second argument. If we plan to change a definition in the future, it may be desirable to avoid exposing its implementation to the type checker.

  • Usability: While unfolding may simplify proof states, it also has the potential to complicate them, resulting in unreadable subgoals, error messages, etc. A user may find that certain definitions are likely to be problematic in this way and thus opt not to unfold them.

Many proof assistants accordingly have implementation-level support for marking definitions opaque (unable to be unfolded), including Agda’s abstract (The Agda Team 2021) and Coq’s Qed (The Coq Development Team 2022). But unfolding definitions is not merely a matter of convenience: to reason about a function, we must unfold it. For example, if we make the definition of $(\!+\!)$ opaque, then $(\!+\!)$ is indistinguishable from a variable of type $\mathbb{N}\to \mathbb{N}\to \mathbb{N}$ and so cannot be shown to be commutative, satisfy $1+1=2$ , etc.
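For instance, with an opaque $(\!+\!)$ the following proof attempt fails, since $1+1$ is then a normal form distinct from $2$ (a hypothetical example, assuming numerals for $\mathbb{N}$ and a propositional equality type $\equiv$ with constructor $\mathsf{refl}$):

\begin{align*} \begin{array}{l} \mathsf{one\text{-}plus\text{-}one} : 1 + 1 \equiv 2\\ \mathsf{one\text{-}plus\text{-}one} = \mathsf{refl} \end{array} \end{align*}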

In practice, proof assistants resolve this contradiction by adopting an intermediate stance: definitions are transparent (unfolded during type checking) by default, but users are given some control over their unfolding. Coq provides conversion tactics (cbv, simpl, etc.) for applying definitional equalities, each of which accepts a list of definitions to unfold; its Opaque and Transparent commands toggle the default unfolding behavior of a transparent definition; and the SSReflect tactic language natively supports a “locking” idiom for controlling when definitions unfold (Gonthier et al. Reference Gonthier, Mahboubi and Tassi2016). Agda allows users to group multiple definitions into a single abstract block, inside of which those definitions are transparent and outside of which they are opaque; this allows users to define a function, prove all lemmas that depend on the function’s definition, and then irreversibly make the function and lemmas opaque.

These mechanisms for controlling unfolding pose interesting trade-offs for users: which definitions should be transparent, and which should be opaque? Transparency is in some cases necessary and in many cases convenient, but it is problematic both from an engineering perspective – because any edit to a transparent definition can break the well-typedness of any number of its use sites – and from a performance perspective – because checking definitional equality of type indices often requires unfolding nested definitions into large normal forms.

In addition, the behavior of these mechanisms is more subtle than it may at first appear. In Agda, definitions within abstract blocks are transparent to other definitions in the same block, but opaque to the types of those definitions; without such a stipulation, those types may cease to be well-formed when the earlier definition is made opaque. Furthermore, abstract blocks are anti-modular, requiring users to anticipate all future lemmas about definitions in a block.Footnote 1 Coq’s conversion tactics are more flexible than Agda’s abstract blocks, but being tactics, their behavior can be harder to predict. The lock idiom in SSReflect is more predictable because it creates opaque definitions but comes in four different variations to simplify its use in practice.

1.1 Contributions

We propose a mechanism for fine-grained control over the unfolding of definitions in dependent type theory. We introduce language-level primitives for controlled unfolding that are elaborated into a core calculus with extension types, a connective first introduced by Riehl and Shulman (Reference Riehl and Shulman2017). We justify our elaboration algorithm by establishing a normalization theorem (and hence the decidability of type checking and injectivity of type constructors) for our core calculus, and we have implemented our system for controlled unfolding in the experimental proof assistant cooltt (RedPRL Development Team 2020).

Definitions in our framework are opaque by default but can be selectively and locally unfolded as if they were transparent. Our system is finer-grained and more modular than Agda’s abstract blocks: we need not collect all lemmas that unfold a given definition into a single block, making our mechanism better suited to libraries. Our primitives have more predictable meaning and performance than Coq’s unfolding tacticsFootnote 2 because they are implemented by straightforward elaboration into a core Martin-Löf type theory (MLTT) extended with new types and declaration forms.

In particular, we refine earlier approaches to representing definitions within type theory (Dreyer et al. Reference Dreyer, Crary and Harper2003; Harper & Stone Reference Harper, Stone, Plotkin, Stirling and Tofte2000; Milner et al. Reference Milner, Tofte, Harper and MacQueen1997; Sterling and Harper Reference Sterling and Harper2021) in order to more faithfully represent definitions as they are actually used in practice: neither fully opaque nor fully transparent, but a mix of the two. Drawing inspiration from cubical type theory (Angiuli et al. Reference Angiuli, Hou (Favonia), Harper, Ghica and Jung2018, Reference Angiuli, Brunerie, Coquand, Hou (Favonia), Harper and Licata2021; Cohen et al. Reference Cohen, Coquand, Huber and Mörtberg2017), we extend MLTT with proof-irrelevant proposition symbols $p$ , dependent products $\{p\}\,A$ over those propositions, and extension types $\lbrace A\vert p \hookrightarrow a \rbrace$ , the subtype of $A$ consisting of the elements of $A$ that are definitionally equal to $a$ under the assumption that $p$ is true. For readers familiar with cubical type theory, extension types are similar to path types $(\mathsf{Path}\,A\,a_0\,a_1)$ , which classify functions out of an abstract interval $\mathbb{I}$ that are definitionally equal to $a_0$ and $a_1$ when evaluated at the interval’s endpoints $0,1:\mathbb{I}$ .

Encoding definitions through particular types confers a number of benefits. For instance, our mechanism for definitions and unfolding is automatically invariant under definitional equivalence: replacing one term with a definitionally equal alternative cannot change the unfolding behavior of a program. Furthermore, using extension types to encode definitions ensures that our elaboration algorithm is extremely modular and predictable: the rules for extension types are simple and, once grasped, make it easy to predict the interactions between unfolding definitions and other features of the language. This elaboration algorithm then serves as a reference for the behavior of our mechanism, against which other implementation strategies may be checked.

As with many elaboration algorithms for dependent type theory, executing our elaboration algorithm requires deciding the equality of types in the core language. To show that our elaboration algorithm can be implemented, we prove a normalization theorem for our core calculus, characterizing its definitional equivalence classes of types and terms and, as a corollary, establishing the decidability of type checking. This is more subtle than it may appear: the heart of our normalization proof amounts to correctly tracking when definitions are allowed to unfold as well as when they should remain opaque, which is quite difficult in the face of higher-order programs and dependent types.

Another benefit to shifting from opaque definitions to extension types is their well-studied metatheory. Specifically, we are able to adapt and extend Sterling’s technique of synthetic Tait computability (STC) (Sterling Reference Sterling2021; Sterling and Angiuli Reference Sterling and Angiuli2021; Sterling and Harper Reference Sterling and Harper2021) to prove normalization for our core language. Our proof is fully constructive, an improvement on the prior work of Sterling and Angiuli (Reference Sterling and Angiuli2021); we have also corrected an error in the handling of universes in an earlier revision of Sterling’s doctoral dissertation (Sterling Reference Sterling2021) that was detected while preparing this paper.

1.2 Outline

In Section 2, we introduce our controlled unfolding primitives by way of examples, and in Section 3 we walk through how these examples are elaborated into our core language of type theory with proposition symbols and extension types. In Section 4, we present our elaboration algorithm, and in Section 5 we discuss our implementation of the above in the cooltt proof assistant. In Section 6, we establish normalization and its corollaries for our core calculus. We conclude with a discussion of related work in Section 7.

2. A Surface Language with Controlled Unfolding

We begin by describing an Agda-like surface language for a dependent type theory with controlled unfolding. In Section 4, we will give precise meaning to this language by explaining how to elaborate it into our core calculus; for now, we proceed by example, introducing our new primitives bit by bit. Our examples will concern the inductively defined natural numbers and their addition function:

\begin{align*} \begin{array}{l} (\mathsf{+}) : \mathbb{N}\to \mathbb{N}\to \mathbb{N}\\ \mathsf{ze} + n = n\\ {\mathsf{su}\,m} + n = \mathsf{su}\,(m + n) \end{array} \end{align*}

2.1 A simple dependency: length-indexed vectors

In our language, definitions such as $(\!+\!)$ are opaque by default – they are not unfolded automatically. To illustrate the need to selectively unfold $(\!+\!)$ , consider the indexed inductive type of length-indexed vectors with the following constructors:

\begin{align*} \begin{array}{l} {[\hspace {0.1pt}]}{} : \mathsf{vec}\,\mathsf{ze}\,A\\ (\!\mathbin {::}\!) : A\to \mathsf{vec}\,n\,A\to \mathsf{vec}\,(\mathsf{su}\,n)\,A \end{array} \end{align*}

Suppose we attempt to define the append operation on vectors by dependent pattern matching on the first vector. Our goals would be as follows:
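A sketch of the two clauses and their goals (the hole names $?_0$ and $?_1$ are illustrative):

\begin{align*} \begin{array}{l} (\!\oplus\!) : \mathsf{vec}\,m\,A\to \mathsf{vec}\,n\,A\to \mathsf{vec}\,(m+n)\,A\\ {[\hspace {0.1pt}]}{} \oplus v = {?_0} \qquad\qquad ?_0 : \mathsf{vec}\,(\mathsf{ze}+n)\,A\\ (a \mathbin {::}{} u) \oplus v = {?_1} \qquad\; ?_1 : \mathsf{vec}\,({\mathsf{su}\,m}+n)\,A \end{array} \end{align*}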

As it stands, the goals above are in normal form and cannot be proved; however, we may indicate that the definition of $(\!+\!)$ should be unfolded within the definition of $(\!\oplus\!)$ by adding the following top-level $\mathbf{unfolds}$ annotation:
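Concretely, the annotation is the first line of the completed definition given at the end of this subsection:

\begin{align*} \begin{array}{l} (\!\oplus\!)\,{\mathbf{unfolds}}\,(\!+\!) \end{array} \end{align*}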

With our new declaration, the goals simplify:
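A sketch of the simplified goals, with the same illustrative hole names:

\begin{align*} \begin{array}{l} ?_0 : \mathsf{vec}\,n\,A\\ ?_1 : \mathsf{vec}\,(\mathsf{su}\,(m+n))\,A \end{array} \end{align*}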

The first goal is solved with $v$ itself; for the second goal, we begin by applying the $(\!\mathbin {::}\!)$ constructor:

The remaining goal is just our induction hypothesis $u\oplus v$ . All in all, we have:

\begin{align*} \begin{array}{l} (\!\oplus\!)\,{\mathbf{unfolds}}\,(\!+\!)\\(\!\oplus\!) : \mathsf{vec}\,m\,A\to \mathsf{vec}\,n\,A\to \mathsf{vec}\,(m+n)\,A\\ {[\hspace {0.1pt}]}{} \oplus v = v\\ (a \mathbin {::}{} u) \oplus v = a \mathbin {::}{} (u\oplus v) \end{array} \end{align*}

2.2 Transitive unfolding

Now suppose we want to prove that $\mathsf{map}$ distributes over $(\!\oplus\!)$ . In doing so, we will certainly need to unfold $\mathsf{map}$ , but it turns out this will not be enough:

\begin{align*} \begin{array}{l} \mathsf{map} : (A\to B) \to \mathsf{vec}\,n\,A \to \mathsf{vec}\,n\,B\\ \mathsf{map}\,f\,{[\hspace {0.1pt}]}{} = {[\hspace {0.1pt}]}{}\\ \mathsf{map}\,f\,(a \mathbin {::}{} u) = f\,a \mathbin {::}{} \mathsf{map}\,f\,u \end{array} \end{align*}
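As a sketch, with only $\mathsf{map}$ unfolded, the cons-case goal remains stuck at $(\!\oplus\!)$ (the hole name is illustrative):

\begin{align*} \begin{array}{l} ?_0 : \mathsf{map}\,f\,((a \mathbin {::}{} u) \oplus v) \equiv (f\,a \mathbin {::}{} \mathsf{map}\,f\,u) \oplus \mathsf{map}\,f\,v \end{array} \end{align*}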

To make further progress, we must also unfold $(\!\oplus\!)$ :

In our language, unfolding $(\!\oplus\!)$ has the side effect of also unfolding $(\!+\!)$ : in other words, unfolding is transitive. To see why this is the case, observe that the unfolding of $(a \mathbin {::}{} u) \oplus v : \mathsf{vec}\,({\mathsf{su}\,m} + n)\,A$ , namely $a \mathbin {::}{} (u\oplus v) : \mathsf{vec}\,(\mathsf{su}\,(m + n))\,A$ , would otherwise not be well-typed. From an implementation perspective, one can think of the transitivity of unfolding as necessary for subject reduction. Having unfolded $\mathsf{map}$ , $(\!\oplus\!)$ , and thus $(\!+\!)$ , we complete our definition:

\begin{align*} \begin{array}{l} \mathsf{cong} : (f:A\to B)\to a\equiv a^{\prime }\to f\,a\equiv f\,a^{\prime }\\ \mathsf{cong}\,f\,\mathsf{refl} = \mathsf{refl} \end{array} \end{align*}
\begin{align*} \begin{array}{l} \mathsf{map\text{-}{\oplus }}\,{\mathbf{unfolds}}\,\mathsf{map}; (\!\oplus\!)\\ \mathsf{map\text{-}{\oplus }} : (f:A\to B)\,(u:\mathsf{vec}\,m\,A)\,(v:\mathsf{vec}\,n\,A) \to \mathsf{map}\,f\,(u\oplus v) \equiv \mathsf{map}\,f\,u \oplus \mathsf{map}\,f\,v\\ \mathsf{map\text{-}{\oplus }}\,f\,{[\hspace {0.1pt}]}{}\,v = \mathsf{refl}\\ \mathsf{map\text{-}{\oplus }}\,f\,(a \mathbin {::}{} u)\,v = \mathsf{cong}\,(f\,a \mathbin {::}{})\,(\mathsf{map\text{-}{\oplus }}\,f\,u\,v) \end{array} \end{align*}

2.3 Recovering unconditionally transparent/opaque definitions

There are also times when we intend a given definition to be a fully transparent abbreviation, in the sense of being unfolded automatically whenever possible. We indicate this with an $\mathbf{abbreviation}$ declaration:
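For instance, we may declare a $\mathsf{singleton}$ abbreviation; the following definition is a plausible sketch, consistent with the lemma below:

\begin{align*} \begin{array}{l} {\mathbf{abbreviation}}\,\mathsf{singleton}\\ \mathsf{singleton} : A \to \mathsf{vec}\,(\mathsf{su}\,\mathsf{ze})\,A\\ \mathsf{singleton}\,a = a \mathbin {::}{} {[\hspace {0.1pt}]}{} \end{array} \end{align*}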

Then the following lemma can be defined without any explicit unfolding:

\begin{align*} \begin{array}{l} \mathsf{abbrv\text{-}{}example} : \mathsf{singleton}\,5 \equiv (5 \mathbin {::}{} {[\hspace {0.1pt}]}{})\\ \mathsf{abbrv\text{-}{}example} = \mathsf{refl} \end{array} \end{align*}

The meaning of the $\mathbf{abbreviation}$ keyword must account for unfolding constraints. For instance, what would it mean to make $\mathsf{map\text{-}{\oplus }}$ an abbreviation?

\begin{align*} \begin{array}{l} {\mathbf{abbreviation}}\,\mathsf{map\text{-}{\oplus }}\\ \mathsf{map\text{-}{\oplus }}\,{\mathbf{unfolds}}\,\mathsf{map};\; (\!\oplus\!)\\ \dots \end{array} \end{align*}

We cannot unfold $\mathsf{map\text{-}{\oplus }}$ in all contexts, because its definition is only well-typed when $\mathsf{map}$ and $(\!\oplus\!)$ are unfolded. The meaning of this declaration must, therefore, be that $\mathsf{map\text{-}{\oplus }}$ shall be unfolded just as soon as $\mathsf{map}$ and $(\!\oplus\!)$ are unfolded. In other words, ${\mathbf{abbreviation}}\,\vartheta$ followed by $\vartheta \,{\mathbf{unfolds}}\,\kappa _1;\ldots ;\;\kappa _n$ means that unfolding $\vartheta$ is synonymous with unfolding all of $\kappa _1;\ldots ;\;\kappa _n$ .

Conversely, we may intend a given definition never to unfold, which we may indicate by a corresponding $\mathbf{abstract}$ declaration. Because definitions in our system do not automatically unfold, the force of ${\mathbf{abstract}}\,\vartheta$ is simply to prohibit users from including $\vartheta$ in any subsequent $\mathbf{unfolds}$ annotations.

Remark 1. A slight variation on our system can recover the behavior of Agda’s abstract blocks by limiting the scope in which a definition $\vartheta$ can be unfolded; the transitivity of unfolding dictates that any definition $\vartheta ^\prime$ that unfolds $\vartheta$ cannot itself be unfolded once we leave that scope. We leave the details to future work.

2.4 Unfolding within the type

The effect of a $\vartheta \,{\mathbf{unfolds}}\,\kappa _1;\ldots ;\;\kappa _n$ declaration is to make $\kappa _1;\ldots ;\;\kappa _n$ unfold within the definition of $\vartheta$ , but not within its type; it can happen, however, that a type is not expressible without some unfolding. First, we will show how to accommodate this situation using only the features we have introduced so far, and then in Section 2.5, we will devise a more general and ergonomic solution.

Consider the left-unit law for $(\!\oplus\!)$ : in order to state that a vector $u$ is equal to the vector ${[\hspace {0.1pt}]}{}\oplus u$ , we must contend with their differing types $\mathsf{vec}\,n\,A$ and $\mathsf{vec}\,(\mathsf{ze}+n)\,A$ , respectively. One approach is to rewrite along the left-unit law for $\mathbb{N}$ ; indeed, to state the right-unit law for $(\!\oplus\!)$ , one must rewrite along the right-unit law for $\mathbb{N}$ . But here, because $(\!+\!)$ computes on its first argument, $\mathsf{vec}\,n\,A$ and $\mathsf{vec}\,(\mathsf{ze}+n)\,A$ would be definitionally equal types if we could unfold $(\!+\!)$ .

In order to formulate the left-unit law for $(\!\oplus\!)$ , we start by defining its type as an abbreviation that unfolds $(\!+\!)$ :

\begin{align*} \begin{array}{l} {\mathbf{abbreviation}}\,\oplus \mathsf {\text{-}left\text{-}unit\text{-}type}\\ \oplus \mathsf {\text{-}left\text{-}unit\text{-}type}\,{\mathbf{unfolds}}\,(\!+\!)\\ \oplus \mathsf {\text{-}left\text{-}unit\text{-}type} : \mathsf{vec}\,n\,A\to \mathsf{Type}\\ \oplus \mathsf {\text{-}left\text{-}unit\text{-}type}\,u = {[\hspace {0.1pt}]}{}\oplus u \equiv u \end{array} \end{align*}

Now we may state the intended lemma using the type defined above:
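As a sketch (the hole name is illustrative):

\begin{align*} \begin{array}{l} \oplus \mathsf {\text{-}left\text{-}unit} : (u : \mathsf{vec}\,n\,A) \to \oplus \mathsf {\text{-}left\text{-}unit\text{-}type}\,u\\ \oplus \mathsf {\text{-}left\text{-}unit}\,u = {?_0} \qquad\qquad ?_0 : \oplus \mathsf {\text{-}left\text{-}unit\text{-}type}\,u \end{array} \end{align*}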

Clearly, we must unfold $(\!+\!)$ and thus $\oplus \mathsf {\text{-}left\text{-}unit\text{-}type}$ to simplify our goal:
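Once $(\!+\!)$ unfolds, the abbreviation $\oplus \mathsf {\text{-}left\text{-}unit\text{-}type}\,u$ computes and the goal becomes, illustratively:

\begin{align*} \begin{array}{l} ?_0 : {[\hspace {0.1pt}]}{}\oplus u \equiv u \end{array} \end{align*}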

We complete the proof by unfolding $(\!\oplus\!)$ itself, which transitively unfolds $(\!+\!)$ :

\begin{align*} \begin{array}{l} \oplus \mathsf {\text{-}left\text{-}unit}\,{\mathbf{unfolds}}\,(\!\oplus\!)\\ \oplus \mathsf {\text{-}left\text{-}unit} : (u : \mathsf{vec}\,n\,A) \to \oplus \mathsf {\text{-}left\text{-}unit\text{-}type}\,u\\ \oplus \mathsf {\text{-}left\text{-}unit}\,u = \mathsf{refl} \end{array} \end{align*}

2.5 Unfolding within subexpressions

We have just demonstrated how to unfold definitions within the type of a declaration by defining that type as an additional declaration; using the same technique, we can introduce unfoldings within any subexpression by hoisting that subexpression to a top-level definition with its own unfolding constraint.

Unfolding within the type, revisited. Rather than repeating the somewhat verbose pattern of Section 2.4, we abstract it as a new language feature that is easily eliminated by elaboration. In particular, we introduce a new expression former ${\mathbf{unfold}}\,\kappa \,{\mathbf{in}}\,M$ that can be placed in any expression context. Let us replay the example from Section 2.4, but using $\mathbf{unfold}$ rather than an auxiliary definition:
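A sketch of the restated lemma, with a hole for its body:

\begin{align*} \begin{array}{l} \oplus \mathsf {\text{-}left\text{-}unit} : (u : \mathsf{vec}\,n\,A) \to {\mathbf{unfold}}\,(\!+\!)\,{\mathbf{in}}\,{[\hspace {0.1pt}]}{}\oplus u \equiv u\\ \oplus \mathsf {\text{-}left\text{-}unit}\,u = {?_0} \end{array} \end{align*}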

The type ${\mathbf{unfold}}\,(\!+\!)\,{\mathbf{in}}\,{[\hspace {0.1pt}]}{}\oplus u \equiv u$ is in normal form; the only way to simplify it is to unfold $(\!+\!)$ . We could do this with another inline $\mathbf{unfold}$ expression (see $\oplus \mathsf {\text{-}left\text{-}unit'}$ below), but here we will use a top-level declaration:

By virtue of the above, the $\mathbf{unfold}$ expression in our hole has computed away and we are left with the goal ${[\hspace {0.1pt}]}{}\oplus u \equiv u$ , as $(\!\oplus\!)$ is still abstract in this scope. To make progress, we strengthen the declaration to unfold $(\!\oplus\!)$ in addition to $(\!+\!)$ :
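A sketch of the resulting declaration:

\begin{align*} \begin{array}{l} \oplus \mathsf {\text{-}left\text{-}unit}\,{\mathbf{unfolds}}\,(\!+\!);\; (\!\oplus\!)\\ \oplus \mathsf {\text{-}left\text{-}unit} : (u : \mathsf{vec}\,n\,A) \to {\mathbf{unfold}}\,(\!+\!)\,{\mathbf{in}}\,{[\hspace {0.1pt}]}{}\oplus u \equiv u\\ \oplus \mathsf {\text{-}left\text{-}unit}\,u = \mathsf{refl} \end{array} \end{align*}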

The meaning of the code above is exactly as described in Section 2.4: the $\mathbf{unfold}$ scope is elaborated to a new top-level $\mathbf{abbreviation}$ that unfolds $(\!+\!)$ .

Expression-level vs. top-level unfolding. We noted in our definition of $\oplus \mathsf {\text{-}left\text{-}unit}$ above that we could have replaced the top-level ${\mathbf{unfolds}}\,(\!\oplus\!)$ directive of $\oplus \mathsf {\text{-}left\text{-}unit}$ with the new expression-level ${\mathbf{unfold}}\,(\!\oplus\!)\,{\mathbf{in}}$ as follows:

\begin{align*} \begin{array}{l} \oplus \mathsf {\text{-}left\text{-}unit'} : (u : \mathsf{vec}\,n\,A) \to {\mathbf{unfold}}\,(\!+\!)\,{\mathbf{in}}\,{[\hspace {0.1pt}]}{}\oplus u\equiv u\\ \oplus \mathsf {\text{-}left\text{-}unit'}\,u = {\mathbf{unfold}}\,(\!\oplus\!)\,{\mathbf{in}}\,\mathsf{refl} \end{array} \end{align*}

The resulting definition of $\oplus \mathsf {\text{-}left\text{-}unit'}$ has slightly different behavior than $\oplus \mathsf {\text{-}left\text{-}unit}$ above: whereas unfolding $\oplus \mathsf {\text{-}left\text{-}unit}$ causes $(\!\oplus\!)$ to unfold transitively, we can unfold $\oplus \mathsf {\text{-}left\text{-}unit'}$ without unfolding $(\!\oplus\!)$ – at the cost of ${\mathbf{unfold}}\,(\!\oplus\!)$ expressions appearing in our goal. This more granular behavior may be desirable in some cases, and it is a strength of our language and its elaborative semantics that the programmer can manipulate unfolding in such a fine-grained manner.

For completeness, we show the elaborated version of $\oplus \mathsf {\text{-}left\text{-}unit'}$ resulting from eliminating expression-level unfolding from the definition. We defer a systematic discussion of this transformation till Section 4.

\begin{align*} \begin{array}{l} {\mathbf{abbreviation}}\,\oplus\mathsf{\text{-}left\text{-}unit'\text{-}type}\\ \oplus \mathsf {\text{-}left\text{-}unit'\text{-}type}\,{\mathbf{unfolds}}\,(\!+\!)\\ \oplus \mathsf {\text{-}left\text{-}unit'\text{-}type} : \mathsf{vec}\,n\,A\to \mathsf{Type}\\ \oplus \mathsf {\text{-}left\text{-}unit'\text{-}type}\,u = {[\hspace {0.1pt}]}{}\oplus u \equiv u \end{array} \end{align*}
\begin{align*} \begin{array}{l} {\mathbf{abbreviation}}\,\oplus \mathsf {\text{-}left\text{-}unit'\text{-}body}\\ \oplus \mathsf {\text{-}left\text{-}unit'\text{-}body}\,{\mathbf{unfolds}}\,(\!\oplus\!)\\ \oplus \mathsf {\text{-}left\text{-}unit'\text{-}body} : (u : \mathsf{vec}\,n\,A)\to \oplus \mathsf {\text{-}left\text{-}unit'\text{-}type}\,u\\ \oplus \mathsf {\text{-}left\text{-}unit'\text{-}body}\,u = \mathsf{refl} \end{array} \end{align*}
\begin{align*} \begin{array}{l} \oplus \mathsf {\text{-}left\text{-}unit'} : (u : \mathsf{vec}\,n\,A)\to \oplus \mathsf {\text{-}left\text{-}unit'\text{-}type}\,u\\ \oplus \mathsf {\text{-}left\text{-}unit'}\,u = \oplus \mathsf {\text{-}left\text{-}unit'\text{-}body}\,u \end{array} \end{align*}

In our experience, expression-level unfolding seems more commonly useful for end users than top-level unfolding; on the other hand, the clearest semantics for expression-level unfolding are stated in terms of top-level unfolding. Because one of our goals is to provide an account of unfolding that admits a reliable and precise mental model for programmers, it is desirable to include both top-level and expression-level unfolding in the surface language.

3. Controlling Unfolding with Extension Types

Having introduced our new surface language constructs for controlled unfolding in Section 2, we now describe how to elaborate these constructs into our dependently typed core calculus. Again we proceed by example, deferring our formal descriptions of the elaboration algorithm to Section 4.

3.1 A core calculus with proposition symbols

Our core calculus parameterizes intensional MLTT (Martin-Löf Reference Martin-Löf, Rose and Shepherdson1975) by a bounded meet semilattice of proposition symbols $p\in \mathbb{P}$ and adjoins to the type theory a new form of context extension and two new type formers $\{p\}\, A$ and $\lbrace A\vert p \hookrightarrow M \rbrace$ involving proposition symbols.

The bounded meet semilattice structure on $\mathbb{P}$ closes proposition symbols under conjunction $\land$ and the true proposition $\top$ , thereby partially ordering $\mathbb{P}$ by entailment $p \leq q$ (“ $p$ entails $q$ ”) satisfying the usual principles of propositional logic. We say $p$ is true if $\top$ entails $p$ ; the context extension $\Gamma ,p$ hypothesizes that $p$ is true.

Remark 2. Our proposition symbols are much more restricted than, and should not be confused with, other notions of proposition in type theory such as h-propositions (Univalent Foundations Program 2013, §3.3) or strict propositions (Gilbert et al. Reference Gilbert, Cockx, Sozeau and Tabareau2019). In particular, unlike types, our proposition symbols have no associated proof terms.

The type $\{p\}\, A$ is the dependent product “ $\{\_ : p\}\to A$ ,” that is, $\{p\}\, A$ is well-formed when $A$ is a type under the hypothesis that $p$ is true, and $f : \{p\}\, A$ when, given that $p$ is true, we may conclude $f : A$ . The extension type $\lbrace A\vert p \hookrightarrow a_p\rbrace$ is well-formed when $A$ is a type and $a_p : \{p\}\, A$ ; its elements $a : \lbrace A\vert p \hookrightarrow a_p\rbrace$ are terms $a : A$ satisfying the side condition that when $p$ is true, we have $a = a_p : A$ . We provide inference rules for the core calculus, including these connectives, in Section 4.1.

3.2 Elaborating controlled unfolding to our core calculus

Our surface language extends a generic surface language for dependent type theory with a new expression former $\mathbf{unfold}$ and several new declaration forms: $\vartheta \,{\mathbf{unfolds}}\,\kappa _1;\dots ;\;\kappa _n$ for controlled unfolding, ${\mathbf{abbreviation}}\,\vartheta$ for transparent definitions, and ${\mathbf{abstract}}\,\vartheta$ for opaque definitions. Elaboration transforms these surface-language declarations into core-language signatures, that is, sequences of declarations over our core calculus of MLTT with proposition symbols.

Our signatures include the following declaration forms:

  • ${\mathbf{prop}}\,p \leq q$ introduces a fresh proposition symbol $p$ such that $p$ entails $q\in \mathbb{P}$ ;

  • ${\mathbf{prop}}\,p = q$ defines the proposition symbol $p$ to be an abbreviation for $q\in \mathbb{P}$ ;

  • ${\mathbf{const}}\,\vartheta : A$ introduces a constant $\vartheta$ of type $A$ .

We now revisit our examples from Section 2, illustrating how they are elaborated into our core calculus:

Plain definitions

Recall our unadorned definition of $(\!+\!)$ from Section 2:

\begin{align*} \begin{array}{l} (\!+\!) : \mathbb{N}\to \mathbb{N}\to \mathbb{N}\\ \mathsf{ze} + n = n\\ {\mathsf{su}\,m} + n = \mathsf{su}\,(m + n) \end{array} \end{align*}

We elaborate $(\!+\!)$ into a sequence of declarations: first, we introduce a new proposition symbol $\Upsilon _{+}$ corresponding to the proposition that “ $(\!+\!)$ unfolds.” Next, we introduce a new definition $\delta _{+}:\mathbb{N}\to \mathbb{N}\to \mathbb{N}$ satisfying the defining clauses of $(\!+\!)$ above, under the (trivial) assumption of $\top$ ; finally, we introduce a new constant $(\!+\!)$ involving the extension type of $\delta _{+}$ along $\Upsilon _{+}$ :

\begin{align*} \begin{array}{l} {\mathbf{prop}}\,\Upsilon _{+}\leq \top \\ \\[-8pt] \delta _{+} : \{\top \}\,(m\,n : \mathbb{N})\to \mathbb{N}\\ \delta _{+}\,\mathsf{ze}\,n = n\\ \delta _{+}\,(\mathsf{su}\,m)\,n = \mathsf{su}\,(\delta _{+}\,m\,n)\\ \\[-8pt] {\mathbf{const}}\,(\!+\!) : \lbrace \mathbb{N}\to \mathbb{N}\to \mathbb{N} \vert \Upsilon _{+} \hookrightarrow \delta _{+}\rbrace \end{array} \end{align*}

Remark 3. In a serious implementation, it would be simple to induce $\delta _{+}$ to be pretty-printed as $(\!+\!)$ in user-facing displays such as goals and error messages.

Top-level unfolding

To understand why we have elaborated $(\!+\!)$ in this way, let us examine how to elaborate top-level unfolding declarations (Section 2.1):

\begin{align*} \begin{array}{l} (\!\oplus\!)\,{\mathbf{unfolds}}\,(\!+\!)\\ (\!\oplus\!) : \mathsf{vec}\,m\,A\to \mathsf{vec}\,n\,A\to \mathsf{vec}\,(m+n)\,A\\ {[\hspace {0.1pt}]}{} \oplus v = v\\ (a \mathbin {::}{} u) \oplus v = a \mathbin {::}{} (u\oplus v) \end{array} \end{align*}

To elaborate $(\!\oplus\!)\,{\mathbf{unfolds}}\,(\!+\!)$ , we define the proposition symbol $\Upsilon _{\oplus }$ to entail $\Upsilon _{+}$ , capturing the idea that unfolding $(\!\oplus\!)$ always causes $(\!+\!)$ to unfold; in order to cause $(\!+\!)$ to unfold in the body of $(\!\oplus\!)$ , we assume $\Upsilon _{+}$ in the definition of $\delta _{\oplus }$ . In full, we elaborate the definition of $(\!\oplus\!)$ as follows:

\begin{align*} \begin{array}{l} {\mathbf{prop}}\,\Upsilon _{\oplus }\leq \Upsilon _{+}\\ \\[-5pt] \delta _{\oplus } : \{\Upsilon _{+}\} \, (u:\mathsf{vec}\,m\,A)\,(v:\mathsf{vec}\,n\,A)\to \mathsf{vec}\,(m+n)\,A\\ \delta _{\oplus }\,{[\hspace {0.1pt}]}{}\,v = v\\ \delta _{\oplus }\,(a \mathbin {::}{} u)\,v = a \mathbin {::}{} (\delta _{\oplus }\,u\,v)\\ \\[-5pt] {\mathbf{const}}\,(\!\oplus\!) : \lbrace \mathsf{vec}\,m\,A\to \mathsf{vec}\,n\,A\to \mathsf{vec}\,(m+n)\,A \vert \Upsilon _{\oplus } \hookrightarrow \delta _{\oplus }\rbrace \end{array} \end{align*}

Observe that the definition of $\delta _{\oplus }$ is well-typed because $\Upsilon _{+}$ is true in its scope: thus, the extension type of $(\!+\!)$ causes $\mathsf{ze}+n$ to be definitionally equal to $\delta _{+}\,\mathsf{ze}\,n$ , which in turn is defined to be $n$ . The constraint $\Upsilon _{\oplus }\hookrightarrow \delta _{\oplus }$ is well-typed because $\Upsilon _{\oplus }$ entails $\Upsilon _{+}$ .

If a definition $\vartheta$ unfolds multiple definitions $\kappa _1;\dots ;\;\kappa _n$ , we define $\Upsilon _{\vartheta }$ to entail (and define $\delta _{\vartheta }$ to assume) the conjunction $\Upsilon _{\kappa _1}\land \dots \land \Upsilon _{\kappa _n}$ ; if a definition $\vartheta$ unfolds no definitions, then $\Upsilon _{\vartheta }$ entails (and $\delta _{\vartheta }$ assumes) $\top$ , as in our $(\!+\!)$ example.

Abbreviations

To elaborate the combination of the declarations ${\mathbf{abbreviation}}\,\vartheta$ and $\vartheta \,{\mathbf{unfolds}}\,\kappa _1;\dots ;\;\kappa _n$ , we define $\Upsilon _{\vartheta }$ to equal the conjunction $\Upsilon _{\kappa _1}\land \dots \land \Upsilon _{\kappa _n}$ . For example, consider the following code from Section 2.3:

\begin{align*} \begin{array}{l} {\mathbf{abbreviation}}\,\mathsf{map\text{-}{\oplus }}\\ \mathsf{map\text{-}{\oplus }}\,{\mathbf{unfolds}}\,\mathsf{map};\; (\!\oplus\!)\\ \mathsf{map\text{-}{\oplus }} : (f:A\to B)\,(u:\mathsf{vec}\,m\,A)\,(v:\mathsf{vec}\,n\,A) \to \mathsf{map}\,f\,(u\oplus v) \equiv \mathsf{map}\,f\,u \oplus \mathsf{map}\,f\,v\\ \mathsf{map\text{-}{\oplus }}\,f\,{[\hspace {0.1pt}]}{}\,v = \mathsf{refl}\\ \mathsf{map\text{-}{\oplus }}\,f\,(a \mathbin {::}{} u)\,v = \mathsf{cong}\,((f\,a) \mathbin {::}{}-)\,(\mathsf{map\text{-}{\oplus }}\,f\,u\,v) \end{array} \end{align*}

Let us write $\mathfrak{C}$ for the following type:

\begin{align*} & (f:A\to B)\,(u:\mathsf{vec}\,m\,A)\,(v:\mathsf{vec}\,n\,A) \to \mathsf{map}\,f\,(u\oplus v) \equiv \mathsf{map}\,f\,u \oplus \mathsf{map}\,f\,v \end{align*}

The above example is then elaborated as follows:

\begin{align*} \begin{array}{l} {\mathbf{prop}}\,\Upsilon _{\mathsf{map\text{-}{\oplus }}} = \Upsilon _{\mathsf{map}}\land \Upsilon _{\oplus }\\ \\[-5pt] \delta _{\mathsf{map\text{-}{\oplus }}} : \{\Upsilon _{\mathsf{map}}\land \Upsilon _{\oplus }\}\,\mathfrak{C}\\ \delta _{\mathsf{map\text{-}{\oplus }}}\,f\,{[\hspace {0.1pt}]}{}\,v = \mathsf{refl}\\ \delta _{\mathsf{map\text{-}{\oplus }}}\,f\,(a \mathbin {::}{} u)\,v = \mathsf{cong}\,((f\,a) \mathbin {::}{} -)\,(\delta _{\mathsf{map\text{-}{\oplus }}}\,f\,u\,v)\\ \\[-5pt] {\mathbf{const}}\, \mathsf{map\text{-}{\oplus }} : \lbrace \mathfrak{C} \mid \Upsilon _{\mathsf{map\text{-}{\oplus }}} \hookrightarrow \delta _{\mathsf{map\text{-}{\oplus }}} \rbrace \end{array} \end{align*}

Expression-level unfolding

The elaboration of the expression-level unfolding construct ${\mathbf{unfold}}\,\kappa \,{\mathbf{in}}\,M$ to our core calculus factors through the elaboration of expression-level unfolding to top-level unfolding as described in Section 2.5; we return to this in Section 4.3.

4. The Elaboration Algorithm

We now formally specify our mechanism for controlled unfolding by more precisely defining the elaboration algorithm sketched in the previous section, starting with a precise definition of the target of elaboration, our core calculus $\mathbf{TT}_{\mathbb{P}}$ .

4.1 The core calculus $\mathbf {TT}_{\mathbb{P}}$

Our core calculus $\mathbf{TT}_{\mathbb{P}}$ is intensional MLTT (Martin-Löf Reference Martin-Löf, Rose and Shepherdson1975) with dependent sums and products, a Tarski universe, etc., extended with (1) a collection of proof-irrelevant proposition symbols, (2) dependent products over propositions, and (3) extension types for those propositions (Riehl and Shulman Reference Riehl and Shulman2017).

Remark 4. We treat the features of MLTT and of our surface language somewhat generically; our elaboration algorithm can be applied on top of an existing bidirectional elaboration algorithm for type theory, for example, those described by Dagand (Reference Dagand2013) and Gratzer et al. (Reference Gratzer, Sterling and Birkedal2019), which may separately account for features such as implicit arguments or dependent pattern matching.

In fact, $\mathbf{TT}_{\mathbb{P}}$ is a family of type theories parameterized by a bounded meet semilattice $(\mathbb{P},\top ,\land )$ whose underlying set $\mathbb{P}$ is the set of proposition symbols of $\mathbf{TT}_{\mathbb{P}}$ ; the semilattice structure on $\mathbb{P}$ axiomatizes the conjunctive fragment of propositional logic with $\land$ as conjunction, $\top$ as the true proposition, and $\leq$ as entailment (where $p\leq q$ is defined as $p\land q = p$ ), subject to the usual logical principles such as $p\land q \leq p$ , $p\land q \leq q$ , and $p \leq \top$ .

Remark 5. The judgments of $\mathbf{TT}_{\mathbb{P}}$ are functorial in the choice of $\mathbb{P}$ , in the sense that given any homomorphism ${f}: {\mathbb{P}}\to {\mathbb{P}^{\prime }}$ of bounded meet semilattices and any type or term in $\mathbf{TT}_{\mathbb{P}}$ over $\mathbb{P}$ , we have an induced type/term in $\mathbf{TT}_{\mathbb{P}^{\prime }}$ over $\mathbb{P}^{\prime }$ . In particular, we will use the fact that judgments of $\mathbf{TT}_{\mathbb{P}}$ are stable under adjoining new proposition symbols to $\mathbb{P}$ .

The language $\mathbf{TT}_{\mathbb{P}}$ augments ordinary MLTT with a new judgment $\Gamma \vdash p\,\textit{true}$ (for $p\in \mathbb{P}$ ) and the corresponding context extension $\Gamma ,p$ (for $p\in \mathbb{P}$ ). The judgment $\Gamma \vdash p\,\textit{true}$ states that the proposition $p$ is true in context $\Gamma$ , that is, the conjunction of the propositional hypotheses in $\Gamma$ entails $p$ while $\Gamma ,p$ extends $\Gamma$ with the hypothesis that $p$ is true.

\begin{align*} & \frac { \Gamma \ \textit{ctx} \qquad p\in \mathbb{P} }{ \Gamma ,p\ \textit{ctx} } \quad\qquad \frac { p\in \mathbb{P} }{ \Gamma , p \vdash p\,\textit{true} } \quad\qquad \frac { \Gamma ,p \vdash {\mathscr{J}} \qquad \Gamma \vdash p\,\textit{true} }{ \Gamma \vdash \mathscr{J} } \\[5pt] & \frac { }{ \Gamma \vdash \top \,\textit{true} } \quad\qquad \frac { \Gamma \vdash p\,\textit{true} \qquad \Gamma \vdash q\,\textit{true} }{ \Gamma \vdash p\land q\,\textit{true} } \quad\qquad \frac { \Gamma \vdash p\,\textit{true} \qquad p\leq q }{ \Gamma \vdash q\,\textit{true} } \end{align*}

The dependent product $\{p\}\,A$ is defined as an ordinary dependent product:

\begin{align*} &\frac { {\Gamma }, p \vdash A\ \textit{type} }{ {\Gamma } \vdash \{p\}\,A\ \textit{type} } \quad\qquad \frac { \Gamma , p \vdash M : A }{ \Gamma \vdash \langle p \rangle \,M : \{p\}\,A } \quad\qquad \frac { \Gamma \vdash M : \{p\}\,A \qquad \Gamma \vdash p\,\textit{true} }{ \Gamma \vdash M \mathbin {@} p : A }\\[5pt] &\quad\qquad \frac { \Gamma ,p \vdash M : A \qquad \Gamma \vdash p\,\textit{true} }{ \Gamma \vdash (\langle p \rangle \,M) \mathbin {@} p = M : A } \qquad\qquad \frac { \Gamma \vdash M : \{p\}\,A }{ \Gamma \vdash M = \langle p \rangle \,(M \mathbin {@} p) : \{p\}\,A } \end{align*}

The remaining feature of $\mathbf{TT}_{\mathbb{P}}$ is the extension type $\lbrace A\vert p \hookrightarrow a_{p} \rbrace$ . Given a proposition $p\in \mathbb{P}$ and an element $a_{p}$ of $A$ under the hypothesis $p$ , the elements of $\lbrace A\vert p \hookrightarrow a_{p}\rbrace$ correspond to elements of $A$ that equal $a_{p}$ when $p$ holds.

\begin{align*} &\frac { {\Gamma } \vdash A\ \textit{type} \qquad \Gamma ,p \vdash a_{p}:{A} }{ {\Gamma } \vdash \lbrace A\vert p \hookrightarrow a_{p}\rbrace \ \textit{type} } \quad \frac { \begin{array}{l} \Gamma \vdash a : A\\ \Gamma , p \vdash a_{p} : A \\ {\Gamma }, p \vdash a =a_{p}:A \end{array}}{ \Gamma \vdash \mathsf{in}_p\,a : \lbrace A\vert p \hookrightarrow a_{p} \rbrace } \qquad \frac { \Gamma \vdash a : \lbrace A\vert p \hookrightarrow a_{p} \rbrace }{ \Gamma \vdash \mathsf{out}_p\,a : A } \\[8pt] &\qquad\qquad \frac { \Gamma \vdash a : A }{ {\Gamma } \vdash \mathsf{out}_p(\mathsf{in}_p\,a) = a : A } \qquad \frac { \Gamma \vdash a : \lbrace A\vert p \hookrightarrow a_{p} \rbrace }{ {\Gamma } \vdash \mathsf{in}_p(\mathsf{out}_p\,a) = a : \lbrace A\vert p \hookrightarrow a_{p} \rbrace } \\[8pt] &\qquad\qquad\qquad \qquad\qquad \frac { \Gamma \vdash p\,\textit{true} \qquad \Gamma \vdash a : \lbrace A\vert p \hookrightarrow a_{p} \rbrace }{ {\Gamma } \vdash \mathsf{out}_p\,a = a_{p}:{A} } \end{align*}

4.2 Signatures over $\mathbf{{TT}}_{\mathbb{P}}$

Our elaboration procedure takes as input a sequence of surface-language definitions and outputs a well-formed signature, a list of declarations over $\mathbf{TT}_{\mathbb{P}}$ .

A signature $\Sigma$ is well-formed precisely when each declaration in $\Sigma$ is well-formed relative to the earlier declarations in $\Sigma$ . Our well-formedness judgment $\vdash \Sigma \,\textit{sig} \longrightarrow \mathbb{P}, \Gamma$ computes from $\Sigma$ the $\mathbf{TT}_{\mathbb{P}}$ context $\Gamma$ and proposition semilattice $\mathbb{P}$ specified by $\Sigma$ ’s $\mathbf{const}$ and $\mathbf{prop}$ declarations, respectively.

The rules for signature well-formedness are standard except for the ${\mathbf{prop}}\,p \le q$ and ${\mathbf{prop}}\,p = q$ declarations, which extend $\mathbb{P}$ with a new element $p$ satisfying $p \le q$ or $p = q$ , respectively. Recalling that our core calculus $\mathbf{TT}_{\mathbb{P}}$ is really a family of type theories parameterized by a semilattice $\mathbb{P}$ , these declarations shift us between type theories, for example, from $\mathbf{TT}_{\mathbb{P}}$ to $\mathbf{TT}_{\mathbb{Q}}$ , where $\mathbb{Q} = \mathbb{P}[p \le q]$ is the minimal semilattice containing $\mathbb{P}$ and an element $p$ satisfying $p \le q$ . This shifting between theories is justified by Remark 5.

\begin{align*} &\frac { }{ \vdash \epsilon \,\textit{sig} \longrightarrow \{\top \}, \cdot } \qquad\qquad \frac { \vdash \Sigma \,\textit{sig} \longrightarrow \mathbb{P}, \Gamma \qquad \Gamma \vdash _{\mathbf{TT}_{\mathbb{P}}} A\,\textit{type} }{ \vdash (\Sigma ,\ {\mathbf{const}}\, x : A)\,\textit{sig} \longrightarrow \mathbb{P}, (\Gamma , x : A) }\\[5pt] &\frac { \vdash \Sigma \,\textit{sig} \longrightarrow \mathbb{P}, \Gamma \qquad q\in \mathbb{P} }{ \vdash (\Sigma ,\ {\mathbf{prop}}\, p \le q) \,\textit{sig} \longrightarrow \mathbb{P}[p \le q] , \Gamma } \qquad\qquad \frac { \vdash \Sigma \,\textit{sig} \longrightarrow \mathbb{P}, \Gamma \qquad q\in \mathbb{P} }{ \vdash (\Sigma ,\ {\mathbf{prop}}\, p = q) \,\textit{sig} \longrightarrow \mathbb{P}[p = q], \Gamma } \end{align*}

4.3 Bidirectional elaboration

We adopt a bidirectional elaboration algorithm which mirrors bidirectional type-checking algorithms (Coquand Reference Coquand1996; Pierce and Turner Reference Pierce and Turner2000). The top-level elaboration judgment $\Sigma \vdash \vec {S} \rightsquigarrow \Sigma ^{\prime }$ takes as input the current well-formed signature $\Sigma$ and a list of surface-level definitions $\vec {S}$ and outputs a new well-formed signature $\Sigma ^{\prime }$ .

We define $\Sigma \vdash \vec {S} \rightsquigarrow \Sigma ^{\prime }$ in terms of three auxiliary judgments for elaborating surface-language types and terms; in the bidirectional style, we divide term elaboration into a checking judgment $\Sigma ;\;\Gamma \vdash {\texttt {e}} \Leftarrow A \rightsquigarrow \Sigma ^{\prime },M$ taking a core type as input and a synthesis judgment $\Sigma ;\;\Gamma \vdash {\texttt {e}} \Rightarrow A \rightsquigarrow \Sigma ^{\prime },M$ producing a core type as output. All three judgments take as input a signature $\Sigma$ and a context (telescope) over $\Sigma$ and output a new signature along with a core type or term.

We represent a surface-level definition $S$ as a tuple:

\begin{equation*} ({\mathbf{def}}\,\vartheta :{\texttt {A}},\textit {abbrv?},\textit {abstr?},[\kappa _1,\ldots ,\kappa _n],{\texttt {e}}) \end{equation*}

In this expression, $\vartheta$ is the name of the definiendum, $\texttt {A}$ is the surface-level type of the definition, abbrv? and abstr? are flags governing whether $\vartheta$ is an $\mathbf{abbreviation}$ (resp., is $\mathbf{abstract}$ ), $[\kappa _1,\ldots ,\kappa _n]$ are the names of the definitions that $\vartheta$ unfolds, and $\texttt {e}$ is the surface-level definiens.

The elaboration judgment elaborates each surface definition in sequence:

Remark 6. When a definition is marked $\mathbf{abstract}$ , the name of the unfolding proposition is generated fresh so that it cannot be accessed by any future $\mathbf{unfold}$ declaration. Conversely, when a definition is marked as an $\mathbf{abbreviation}$ , its unfolding proposition is defined to be equivalent to the conjunction of its dependencies rather than merely entailing its dependencies.

The rules for term and type elaboration are largely standard: for instance, we elaborate a surface dependent product to a core dependent product by recursively elaborating its first and second components. We single out two cases below: the boundary between checking and synthesis, and the expression-level $\mathbf{unfold}$ .

\begin{align*} \frac {\begin{array}{l} \Sigma ;\;\Gamma \vdash {\texttt {e}} \Rightarrow A \rightsquigarrow \Sigma _1;\;M \\ \Sigma _1;\;\Gamma \vdash \mathsf{conv}\,A\,B\end{array} }{ \Sigma ;\;\Gamma \vdash {\texttt {e}} \Leftarrow B \rightsquigarrow \Sigma _1;\;M } \quad \frac {\begin{array}{l} \Sigma ;\;\Gamma ,\Upsilon _{\vartheta } \vdash {\texttt {e}} \Leftarrow A \rightsquigarrow \Sigma _1;\;M \qquad {\mathbf{let}}\,\chi := \textit {gensym}\,() \\ {\mathbf{let}}\,\Sigma _2 := \Sigma _1, {\mathbf{const}}\,\chi :\prod _{\Gamma }{\lbrace A\vert \Upsilon _{\vartheta } \hookrightarrow M\rbrace }\end{array} }{ \Sigma ;\;\Gamma \vdash {\mathbf{unfold}}\,\vartheta \,{\mathbf{in}}\,{\texttt {e}} \Leftarrow A \rightsquigarrow \Sigma _2;\; \mathsf{out}_{\Upsilon _{\vartheta }}\,\chi [\Gamma ] } \end{align*}

The first rule states that a term synthesizing a type $A$ can be checked against a type $B$ provided that $A$ and $B$ are definitionally equal; in order to implement this rule algorithmically, we need definitional equality to be decidable. Additionally, our (omitted) type-directed elaboration rules are only well-defined if type constructors are injective up to definitional equality, for example, $A \to B = C \to D$ if and only if $A = C$ and $B = D$ .

Elaborating expression-level unfolding requires the ability to hoist a type to the top level by iterating dependent products over its context, an operation notated $\prod _{\Gamma }$ above. Because $\Gamma$ can hypothesize (the truth of) propositions, this operation relies crucially on the presence of dependent products $\{p\}\,A$ .
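For example (with a schematic context), if $\Gamma = x : A,\ p$ for some proposition symbol $p$ , then hoisting a type $C$ over $\Gamma$ yields

\begin{equation*} \textstyle\prod _{(x : A,\ p)} C \;=\; (x : A)\to \{p\}\,C \end{equation*}

so the hoisted constant $\chi$ can be instantiated at the variables and propositional hypotheses of $\Gamma$ , as in the notation $\chi [\Gamma ]$ above.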

5. Case Study: An Implementation in cooltt

We have implemented our approach to controlled unfolding in the experimental proof assistant cooltt (RedPRL Development Team, 2020); cooltt is an implementation of cartesian cubical type theory (Angiuli et al. Reference Angiuli, Brunerie, Coquand, Hou (Favonia), Harper and Licata2021), a computational version of homotopy type theory whose syntactic metatheory is particularly well understood (Huber Reference Huber2019; Sterling Reference Sterling2021; Sterling and Angiuli Reference Sterling and Angiuli2021). The existing support for partial elements and extension types made cooltt particularly hospitable for experimentation with elaborating controlled unfolding to extension types. The following example illustrates the use of controlled unfolding in cooltt, where $\mathsf{path}\,A\,x\,y$ is the cubical notion of propositional equality ( $x \equiv y$ ):

\begin{align*} &{\mathbf{def}}\,{+} : \mathbb{N}\to \mathbb{N}\to \mathbb{N} :=\\ &\quad {\mathbf{elim}}\\ &\quad \mid \mathsf{zero} \Rightarrow {n \Rightarrow n}\\ &\quad \mid \mathsf{suc}\,\{\_ \Rightarrow \textit{ih}\} \Rightarrow {n \Rightarrow \mathsf{suc}\,\{\textit{ih}\,n\}}\\ &\quad \\ & {\mathbf{unfold}}\,{+}\\ &{\mathbf{def}}\,\mathsf{+0L}\, (x : \mathbb{N}) : \mathsf{path}\,\mathbb{N}\,\{{+}\,0\,x\}\,x :=\\ &\quad i \Rightarrow x\\ &\quad \\ & {\mathbf{def}}\,\mathsf{+0R} : (x : \mathbb{N})\to \mathsf{path}\,\mathbb{N}\,\{{+}\,x\,0\}\,x :=\\ &\quad {\mathbf{elim}}\\ &\quad \mid \mathsf{zero} \Rightarrow \mathsf{+0L}\,0\\ &\quad \mid \mathsf{suc}\,\{x\Rightarrow \textit{ih}\} \Rightarrow \\ &\qquad {\mathbf{equation}}\,\mathbb{N}\\ &\qquad \begin{array}[t]{ll} \mid {+}\,0\,\{\mathsf{suc}\,y\} & {=}[\mathsf{+0L}\,\{\mathsf{suc}\,y\}]\\ \mid \mathsf{suc}\,\{{+}\,x\,0\} & {=}[i \Rightarrow \mathsf{suc}\,\{\textit{ih}\,i\}]\\ \mid \mathsf{suc}\,x\, \end{array} \end{align*}

This example follows a common pattern: we prove basic computational laws ( $\mathsf{+0L}$ ) by unfolding a definition, and then in subsequent results ( $\mathsf{+0R}$ ) use these lemmas abstractly rather than unfolding. Doing so controls the size and readability of proof goals and explicitly demarcates which parts of the library depend on the definitional behavior of a given function.

We have also implemented the derived forms for expression-level unfolding:

\begin{align*} \begin{array}{l} {\mathbf{def}}\,\mathsf{two} : \mathbb{N} := {+}\,1\,1\\ {\mathbf{def}}\,\mathsf{thm} : \mathsf{path}\,\mathbb{N}\,\mathsf{two}\,2:= {\mathbf{unfold}}\,\mathsf{two}\,{+}\,{\mathbf{in}}\,i \Rightarrow 2 \\ {\mathbf{def}}\,\mathsf{thm\text{-}{}is\text{-}{}refl} : \mathsf{path\text{-}p}\,\{i\Rightarrow \mathsf{path}\,\mathbb{N}\,\mathsf{two}\,\{\mathsf{thm}\,i\}\}\,\{i\Rightarrow \mathsf{two}\}\,\mathsf{thm} := \\ \quad i\,j\Rightarrow {\mathbf{unfold}}\,\mathsf{two}\,{+}\,{\mathbf{in}}\,2 \end{array} \end{align*}
\begin{align*} \begin{array}{l} {\mathbf{def}}\,\mathsf{thm\text{-}{}is\text{-}{}refl^{\prime }} : \mathsf{path}\,\{\mathsf{path}\,\mathbb{N}\,\mathsf{two}\,2\}\,\{i\Rightarrow {\mathbf{unfold}}\,\mathsf{two}\,{+}\,{\mathbf{in}}\,\mathsf{two}\}\,\mathsf{thm}:= \\ \quad i\,j\Rightarrow {\mathbf{unfold}}\,\mathsf{two}\,{+}\,{\mathbf{in}}\,2 \end{array} \end{align*}

The third and fourth declarations above illustrate two strategies in cooltt for dealing with a dependent type whose well-formedness depends on an unfolding; in $\mathsf{thm\text{-}{}is\text{-}{}refl}$ , we use a dependent path type but only unfold in the definiens, whereas in $\mathsf{thm\text{-}{}is\text{-}{}refl^{\prime }}$ we use a non-dependent path type but must unfold in both the definiens and in its type.

Our implementation deviates in a few respects from the presentation in this paper: in particular, the propositions $\Upsilon _{\kappa }$ are represented by abstract elements $i_\kappa : \mathbb{I}$ of the interval via the embedding $\mathbb{I}\hookrightarrow \mathbb{F}$ sending $i$ to $(i=_{\mathbb{I}}1)$ .
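For example (an illustrative rendering of this encoding), a declaration ${\mathbf{prop}}\,\Upsilon _{\oplus }\leq \Upsilon _{+}$ corresponds to an inequality of dimension variables, and an extension type along $\Upsilon _{\oplus }$ becomes an extension type along the associated cofibration:

\begin{align*} \Upsilon _{\oplus }\leq \Upsilon _{+} \;\rightsquigarrow\; i_{\oplus }\leq _{\mathbb{I}} i_{+} \qquad\qquad \lbrace A\vert \Upsilon _{\oplus } \hookrightarrow \delta _{\oplus }\rbrace \;\rightsquigarrow\; \lbrace A\vert (i_{\oplus }=_{\mathbb{I}}1) \hookrightarrow \delta _{\oplus }\rbrace \end{align*}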

cooltt utilizes a standalone library for computing entailment of cofibrations, Kado (Hou (Favonia) Reference Hou (Favonia)2022), created by Kuen-Bang Hou (Favonia). To support our experiment, Favonia modified Kado to support inequalities of dimension variables $i\leq _{\mathbb{I}} j$ in addition to the cofibrations needed for cooltt’s core theory. As a result, the modifications to cooltt itself were quite modest. After the changes to Kado – which could in principle be reused in any proof assistant for the same purpose – the entire change required only a net increase of 996 lines of OCaml code.

6. The Metatheory of $\textbf{TT}_{\mathbb{P}}$

In Section 4, we described an algorithm elaborating a surface language with controlled unfolding to $\mathbf{TT}_{\mathbb{P}}$ . In order to actually execute our algorithm, it is necessary to decide the definitional equality of types in $\mathbf{TT}_{\mathbb{P}}$ ; as is often the case in type theory, type dependency ensures that deciding equality for types also requires us to decide the equality of terms. In order to implement our elaboration algorithm, we therefore prove a normalization result for $\mathbf{TT}_{\mathbb{P}}$ .

At its heart, a normalization algorithm is a computable bijection between equivalence classes of terms up to definitional equality and a collection of normal forms. By ensuring that the equality of normal forms is evidently decidable, this yields an effective decision procedure for definitional equality. In our case, we attack normalization through a synthetic and semantic approach to normalization by evaluation called STC (Sterling and Angiuli Reference Sterling and Angiuli2021; Sterling, Reference Sterling2021, Reference Sterling2025; Sterling and Harper Reference Sterling and Harper2021).

Neutral forms for $\mathbf{TT}_{\mathbb{P}}$ . The semantic analysis of normalization by evaluation rests on the observation of Fiore (Reference Fiore2002) that normal forms, though not stable under arbitrary substitutions, are nonetheless stable under renamings – substitutions that replace variables with variables (not necessarily injective). Therefore, decisive aspects of the normalization algorithm can be expressed internally to a topos of variable sets (presheaves) over the category of contexts and renamings; in order to instrument the semantic normalization algorithm with its proof of correctness, one passes to a larger topos obtained from the former by gluing. STC then instantiates the standard topos model of MLTT, using a form of higher-order abstract syntax to substantially simplify details that would otherwise be exceedingly tedious.

This appealingly simple story for normalization is substantially complicated by the boundary law for extension types:

\begin{equation*} \frac {\Gamma \vdash p\,\textit{true}\qquad \Gamma , p \vdash {a_{p}}:{A}\qquad \Gamma \vdash a : \{ A\vert p \hookrightarrow a_{p} \} }{ {\Gamma } \vdash \mathsf{out}_p\,a = a_{p} : A} \end{equation*}

When defining normal forms for $\mathbf{TT}_{\mathbb{P}}$ , we might naively add a neutral form to represent $\mathsf{out}_p$ . In order to ensure that normal and neutral forms correspond bijectively with equivalence classes of terms, however, we should only allow $\mathsf{out}_p$ to be applied in a context where $p$ is not true; if $p$ were true, $\mathsf{out}_p\,{a}$ would already be represented by the normal form for $a_{p}$ .

A similar problem arises in the context of cubical type theory (Angiuli et al. Reference Angiuli, Brunerie, Coquand, Hou (Favonia), Harper and Licata2021; Cohen et al. Reference Cohen, Coquand, Huber and Mörtberg2017), where some equalities apply precisely when two dimensions coincide: either renamings must exclude substitutions that identify two dimension terms, or neutral forms will not be stable under renamings. In their proof of normalization for cubical type theory, Sterling and Angiuli (Reference Sterling and Angiuli2021) refined neutral forms to account for this tension by introducing stabilized neutrals. Rather than cutting down on renamings, they expand the class of neutrals by allowing “bad” neutrals akin to $\mathsf{out}_p\,a$ in a context where $p$ is true. They then associate each neutral form with a frontier of instability: a proposition that becomes true when the neutral is no longer “stuck.” Crucially, although well-behaved neutrals may not be stable under renamings, the frontier of instability is stable and can therefore be incorporated into the internal language.

We adapt Sterling and Angiuli’s stabilized neutrals to the simplified setting of $\mathbf{TT}_{\mathbb{P}}$ and establish its normalization theorem. In so doing, we refine the approach of op. cit. to obtain a fully constructiveFootnote 3 normalization proof. We also carefully spell out the details of the universe in the normalization model, correcting an oversight in an earlier revision of Sterling’s dissertation (Sterling Reference Sterling2021).

6.1 Type theories as categories with representable maps

While any number of logical frameworks are available (generalized algebraic theories (Cartmell Reference Cartmell1978), essentially algebraic theories (Freyd Reference Freyd1972), locally cartesian closed categories (Gratzer and Sterling Reference Gratzer and Sterling2020), etc.), Uemura’s categories with representable maps (Uemura, Reference Uemura2021, Reference Uemura2023) are particularly attractive because they express exactly the binding and dependency structure needed for type theory: a second-order version of generalized algebraic theories.

Definition 7. A category with representable maps (CwR) $\mathscr{C}$ is a finitely complete category equipped with a pullback-stable class of representable maps $\mathscr{R} \subseteq \mathsf{Arr}(\mathscr{C})$ such that pullback along $f \in \mathscr{R}$ has a right adjoint (dependent product along $f$ ).

Definition 8. A morphism of CwRs is a functor between the underlying categories that preserves finite limits, representability of maps, and dependent products along representable maps.

Definition 9. CwRs, morphisms between them, and natural isomorphisms assemble into a $(2,1)$ -category $\mathbf{CwR}$ .

Uemura’s logical framework axiomatizes the category of judgments of $\mathbf{TT}_{\mathbb{P}}$ as a particular category with representable maps $\mathbb{T}$ . The finite limit structure of $\mathbb{T}$ encodes substitution as well as equality judgments, while the class of representable maps carves out those judgments that may be hypothesized. Uemura (Reference Uemura2023) develops a syntactic method for presenting a CwR as a signature within a variant of extensional type theory, which he has rephrased in terms of second-order generalized algebraic theories in his doctoral dissertation (Uemura Reference Uemura2021). Although we will use the type-theoretic presentation for convenience, the difference between these two accounts is only superficial.

Each judgment of $\mathbf{TT}_{\mathbb{P}}$ is rendered as a (dependent) sort, while operators are modeled by elements of the given sorts. In order to record whether a given judgment may be hypothesized, the sorts of the type theory are stratified by meta-sorts $\star \subseteq \Box$ where $A : \star$ signifies that $A$ is a representable sort (i.e., a context-former) and can be hypothesized, whereas $B : \Box$ cannot parameterize a framework-level dependent product.

Proposition 10. Let $\mathbb{T}$ be the free category with representable maps generated by a given logical framework signature; then the groupoid of CwR functors $\operatorname{hom}_{\mathbf{CwR}}({\mathbb{T}},{\mathscr{E}})$ is equivalent to the groupoid of interpretations of the signature within $\mathscr{E}$ .

We will often refer to a category with representable maps $\mathbb{T}$ as a type theory; indeed, as the category of judgments of a given type theory, $\mathbb{T}$ is a suitable invariant replacement for it.

Proposition 10 describes the universal property of a type theory generated by a given signature in a logical framework. Type theories qua CwRs thus give rise to a form of functorial semantics in which algebras (interpretations) arrange into a groupoid of CwR functors $\operatorname{hom}_{\mathbf{CwR}}(\mathbb{T},\mathscr{E})$.

This is an appropriate setting for studying the syntax of type theory, but it is less suited to studying its semantics, in which one expects models to correspond to structured CwFs (Dybjer 1996) or natural models (Awodey 2018), which themselves arrange into a (2,1)-category. A second notion of functorial semantics, developed by Uemura in his doctoral dissertation (Uemura 2021), generalizes the theory of CwFs and pseudo-morphisms between them (Clairambault and Dybjer 2014; Newstead 2018).

Note that we may always regard a presheaf category $\mathbf{Pr}\,{\mathscr{C}}$ as a CwR with the representable maps being the representable natural transformations, that is, families of presheaves whose fibers at representables are representable (Awodey 2018).

Definition 11. A model of a type theory $\mathbb{T}$ is a category $\mathbf{M}_\diamond$ together with a CwR functor ${\mathbf{M}}:{\mathbb{T}}\to {\mathbf{Pr}\,{\mathbf{M}_\diamond }}$ .

Models are arranged into a (2,1)-category ${\mathbf{Mod}}\,\mathbb{T}$ (see Appendix A). Essentially, a morphism of models ${\mathbf{M}}\to {\mathbf{N}}$ is given by a functor ${\alpha _\diamond }:{\mathbf{M}_\diamond }\to {\mathbf{N}_\diamond }$ together with a natural transformation ${\mathbf{M}}\to {\alpha ^{\ast}\mathbf{N}}\in \operatorname{hom}_{\mathbf{CwR}}({\mathbb{T}},{\mathbf{Pr}\,{\mathbf{M}_\diamond }})$ that preserves context extensions up to isomorphism; an isomorphism between morphisms of models is a natural isomorphism between the underlying functors satisfying an additional property.

For each CwR $\mathbb{T}$ , Uemura has shown the following theorem:

Proposition 12. The (2,1)-category of models ${\mathbf{Mod}}\,\mathbb{T}$ has a bi-initial object $\mathbf{I}$ , whose category of contexts $\mathbf{I}_\diamond$ is the smallest full subcategory of $\mathbb{T}$ closed under the terminal object and pullbacks along representable maps.

Remark 13. If one takes $\mathbb{T}$ to be, for example, MLTT, the bi-initial model $\mathbf{I}$ can be realized by the familiar initial CwF built from the category of contexts.

6.2 Encoding $\mathbf{TT}_{\mathbb{P}}$ in the logical framework

We begin by defining the signature for a category with representable maps $\mathbb{T}_0$ containing exactly the bare judgmental structure of $\mathbf{TT}_{\mathbb{P}}$ , namely the propositions and the judgments for types and terms. In our signature, we make liberal use of the Agda-style notation for implicit arguments. As always, $p$ ranges over $\mathbb{P}$ :

\begin{align*} \begin{array}{l} \langle p \rangle :\star \\ \mathsf{tp} : \square \\ \mathsf{tm} : \mathsf{tp} \Longrightarrow \star \\ \_:\{u,v:\langle p \rangle \}\Longrightarrow u = v\\ \_:\{\_ : \langle \bigwedge _{i\lt n}p_i\rangle \}\Longrightarrow \langle p_k\rangle \\ \_:\{\_:\langle p_i\rangle ,\dots \}\Longrightarrow \langle {\bigwedge _{i\lt n} p_i}\rangle \end{array} \end{align*}

Note that already this signature encodes the necessary theory of propositions for $\mathbf{TT}_{\mathbb{P}}$ . For instance, if $p \le q$ in $\mathbb{P}$ , then a combination of the final two implications in the signature implies $\langle p \rangle \to \langle q \rangle$ . We next extend the above to include the type formers of $\mathbf{TT}_{\mathbb{P}}$ , writing $\mathbb{T}$ for the CwR generated by the full signature.
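For a concrete instance, suppose $p \le q$ holds because $p = p \wedge q$ in $\mathbb{P}$ (the usual situation when $\mathbb{P}$ is presented as a meet-semilattice; we record this only as a worked example). Then $\langle p \rangle$ and $\langle p \wedge q \rangle$ are the same proposition symbol, and the projection constant at this meet supplies the claimed implication:

\begin{equation*} \langle p \rangle \;=\; \langle p \wedge q \rangle \qquad \text{and} \qquad \_ : \{\_ : \langle p \wedge q \rangle \}\Longrightarrow \langle q \rangle . \end{equation*}

The pairing constant conversely recovers $\langle p \wedge q \rangle$ from $\langle p \rangle$ and $\langle q \rangle$.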

Notation 14. Given $X : \{\_ : \langle p \rangle \}\Longrightarrow \square$, we will write $\{p\}\, X$ to further abbreviate the Agda-style implicit function space $\{\_ : \langle p \rangle \}\to X$. Note that $\{p\}\,X$ still associates to the right, so that $\{p\}\, A \to B$ signifies $\{p\}\, (A \to B)$.

For instance, the following constants specify the rules of extension types given in Section 4.1:

\begin{align*} \begin{array}{l} \mathsf{ext}_p : (A : \mathsf{tp})\,(a : \{p\}\, \mathsf{tm}{A})\Longrightarrow \mathsf{tp}\\ \mathsf{in}_p : (A : \mathsf{tp})\,(a : \{p\}\, \mathsf{tm}{A})\,(u : \mathsf{tm}{A})\,\{\_ : \{p\}\, u=a\} \Longrightarrow \mathsf{tm}(\mathsf{ext}_p\,A\,a)\\ \mathsf{out}_p : (A:\mathsf{tp})\,(a: \{p\}\,\mathsf{tm}{A})\,(u:\mathsf{tm}(\mathsf{ext}_p\,A\,a))\Longrightarrow \mathsf{tm}{A}\\ \_ : (A:\mathsf{tp})\,(a:\{p\}\,\mathsf{tm}{A})\,(u:\mathsf{tm}(\mathsf{ext}_p\,A\,a))\,\{\_:\langle p \rangle \} \Longrightarrow \mathsf{out}_p\,A\,a\,u = a\\ \_ : (A:\mathsf{tp})\,(a:\{p\}\,\mathsf{tm}{A})\,(u:\mathsf{tm}{A})\,\{\_:\{p\}\, u = a\}\Longrightarrow \mathsf{out}_p\,A\,a\,(\mathsf{in}_p\,A\,a\,u) = u\\ \_ : (A:\mathsf{tp})\,(a:\{p\}\,\mathsf{tm}{A})\,(u:\mathsf{tm}(\mathsf{ext}_p\,A\,a))\Longrightarrow \mathsf{in}_p\,A\,a\,(\mathsf{out}_p\,A\,a\,u) = u \end{array} \end{align*}

The full list of non-standard constants is specified in Figure 1. Once the signature is complete, we obtain from Uemura’s framework a category with representable maps $\mathbb{T}$ together with a bi-initial model $\mathbf{I}$ .

Figure 1. The non-standard aspects of the LF signature for $\textbf{TT}_{\mathbb{P}}$ .

6.3 The atomic figure shape and its universal property

For each context $\Gamma$ and type $\Gamma \vdash A\ \textit{type}$, it is possible to axiomatize the normal forms of type $A$; unfortunately, this assignment of sets of normal forms does not immediately extend to a presheaf on the category of contexts $\mathbf{I}_\diamond$, precisely because normal forms are not a priori closed under substitution. In fact, closing normal forms under substitution is the purpose of normalization, so we are not able to assume it beforehand.

Normal forms are, however, closed under substitutions of variables for variables (often called structural renamings), and in our case we shall be able to close them additionally under the “phase transitions” ${\Gamma ,\langle p \rangle }\to {\Gamma ,\langle q \rangle }$ when $\Gamma , p \vdash q\,\textit{true}$ is derivable. We shall refer to these substitutions as atomic substitutions, and we wish to organize them into a category.

It is possible to inductively define a category of "atomic contexts" whose objects are those of $\mathbf{I}_\diamond$ and whose morphisms are atomic substitutions, but this construction obscures a beautiful and simple (2,1)-categorical universal property first exposed by Bocquet et al. (2021) that leads to a more modular proof. To explicate this universal property, first note that the theory $\mathbb{T}_0$ axiomatizes exactly the structure of variables and phase transitions, and that the initial model $\mathbf{I}$ of $\mathbb{T}$ is, by restriction along ${\mathbb{T}_0}\to {\mathbb{T}}$, also a model of $\mathbb{T}_0$.

Definition 15. An atomic substitution model over a fixed $\mathbb{T}$ -model $\mathbf{M}$ is given by a model $\boldsymbol{\mathbf{A}}$ of the bare judgmental theory $\mathbb{T}_0$ , together with a morphism of models ${\alpha }:{\boldsymbol{\mathbf{A}}}\to {\mathbf{M}}$ in ${\mathbf{Mod}}\,\mathbb{T}_0$ such that $\alpha _{\mathsf{tp}}: \boldsymbol{\mathbf{A}(\mathsf{tp})}\to {\alpha }^{\ast}{(\mathbf{M}(\mathsf{tp}))}\in \mathbf{Pr}\,{\boldsymbol{\mathbf{A}}_\diamond }$ is an isomorphism.

Atomic substitution models over $\mathbf{M}$ arrange themselves into a (2,1)-category, a full subcategory of ${\mathbf{Mod}}\,\mathbb{T}_0\downarrow \mathbf{M}$. The following result is due to Bocquet et al. (2021).

Proposition 16. The bi-initial atomic substitution model $(\boldsymbol{\mathbf{A}}, {\alpha }:{\boldsymbol{\mathbf{A}}}\to {\mathbf{I}})$ over $\mathbf{I}$ exists.

When $(\boldsymbol{\mathbf{A}}, {\alpha }:{\boldsymbol{\mathbf{A}}}\to {\mathbf{I}})$ is the bi-initial atomic substitution model over $\mathbf{I}$ as in Proposition 16, we shall refer to an object $\Gamma \in \boldsymbol{\mathbf{A}}$ as an atomic context and a morphism ${\gamma }:{\Delta }\to {\Gamma }$ in $\boldsymbol{\mathbf{A}}$ as an atomic substitution. We shall assume without loss of generality that $\boldsymbol{\mathbf{A}}(\mathsf{tp}) = \alpha ^{\ast}\mathbf{I}(\mathsf{tp})$ so that the component $\alpha _{\mathsf{tp}}$ is the identity map.

6.4 Computability spaces by gluing along the atomic figure shape

We shall use the bi-initial atomic substitution model over $\mathbf{I}$ as a figure shape in the sense of Sterling (2021, §4.3) to instantiate STC. Here, we transition into the 2-category of Grothendieck topoi, geometric morphisms, and geometric transformations, guided by a phase distinction between "object-space" and "meta-space" (Sterling 2021); object-space refers to the object language embodied in the model $\mathbf{I}$, whereas meta-space refers to the metalanguage embodied in the model $\boldsymbol{\mathbf{A}}$. Later on, we will construct a glued topos in which we may speak of constructs that have extent in both object-space and meta-space. We follow Vickers (2007) and Anel and Joyal (2021) in emphasizing the distinction between a topos $\boldsymbol{\mathsf{x}}$ and the category of sheaves $\mathbf{Sh}\,{\boldsymbol{\mathsf{x}}}$ presenting it:

Definition 17. We denote by $\boldsymbol{\mathsf{I}}$ and $\boldsymbol{\mathsf{A}}$ the object-space and meta-space topoi, respectively, with underlying categories of sheaves $\mathbf{Sh}\,{\boldsymbol{\mathsf{I}}} = \mathbf{Pr}\,{\mathbf{I}_\diamond }$ and $\mathbf{Sh}\,{\boldsymbol{\mathsf{A}}} = \mathbf{Pr}\,{\boldsymbol{\mathbf{A}}_\diamond }$ .

Definition 18. The functor ${\alpha _\diamond }:{\boldsymbol{\mathbf{A}}_\diamond }\to {\mathbf{I}_\diamond }$ gives rise under precomposition to a continuous and cocontinuous functor ${\mathbf{Pr}\,{\mathbf{I}_\diamond }}\to {\mathbf{Pr}\,{\boldsymbol{\mathbf{A}}_\diamond }}$ that shall serve as the inverse image part of an (essential) geometric morphism ${\alpha }:{\boldsymbol{\mathsf{A}}}\to {\boldsymbol{\mathsf{I}}}$ named the atomic figure shape.

That ${\alpha }:{\boldsymbol{\mathsf{A}}}\to {\boldsymbol{\mathsf{I}}}$ is essential means that its inverse image ${\alpha ^{\ast}}:{\mathbf{Sh}\,{\boldsymbol{\mathsf{I}}}}\to {\mathbf{Sh}\,{\boldsymbol{\mathsf{A}}}}$ has a left adjoint ${\alpha _!}:{\mathbf{Sh}\,{\boldsymbol{\mathsf{A}}}}\to {\mathbf{Sh}\,{\boldsymbol{\mathsf{I}}}}$; from the point of view of presheaves, $\alpha _!$ is precisely the Yoneda extension of ${\alpha _\diamond }:{\boldsymbol{\mathbf{A}}_\diamond }\to {\mathbf{I}_\diamond }$.

Definition 19. We denote by $\boldsymbol{\mathsf{G}}$ the closed mapping cylinder (Johnstone 1977) of the geometric morphism ${\alpha }:{\boldsymbol{\mathsf{A}}}\to {\boldsymbol{\mathsf{I}}}$; in other words, $\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}$ is the comma category ${\mathbf{Sh}\,{\boldsymbol{\mathsf{A}}}}\downarrow {\alpha ^{\ast}}$. We will write ${\boldsymbol {j}}:{\boldsymbol{\mathsf{I}}}\hookrightarrow {\boldsymbol{\mathsf{G}}}$ and $\boldsymbol{{i}}: {\boldsymbol{\mathsf{A}}}\hookrightarrow {\boldsymbol{\mathsf{G}}}$ for the open and closed subtopos immersions, respectively.

Following Sterling (2025), we shall refer to a sheaf on $\boldsymbol{\mathsf{G}}$ as a computability space. A computability space $X\in \mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}$ is then identified with a family ${\pi _X}:{{\boldsymbol{i}}^{\ast}X}\to {\alpha^{\ast} {\boldsymbol{j}}^{\ast}X}$ in $\mathbf{Sh}\,{\boldsymbol{\mathsf{A}}}$. Because the assignment $\pi _X$ is natural in computability spaces $X$, it corresponds to a 2-cell ${\pi }:{{\boldsymbol{j}}\circ \alpha }\to {{\boldsymbol{i}}}$ in the 2-category of Grothendieck topoi. The universal property of $\boldsymbol{\mathsf{G}}$ is then expressed by the fact that ${\pi }:{\boldsymbol{{j}}\circ \alpha }\to {{\boldsymbol {i}}}$ is a co-comma cell in the 2-category of Grothendieck topoi.

Remark 20 (Relation to Kripke computability predicates). Unraveling Definition 19, a computability space is precisely a family $X^{\prime }$ of presheaves on $\boldsymbol{\mathbf{A}}_\diamond$ indexed in the restriction of a given presheaf $X$ on $\mathbf{I}_\diamond$ along ${\alpha _\diamond }:{\boldsymbol{\mathbf{A}}_\diamond }\to {\mathbf{I}_\diamond }$. When the family $X^{\prime }$ is valued in subterminal presheaves and the base $X$ is representable, we have precisely the classical notion of a Kripke computability predicate (Jung and Tiuryn 1993); a computability space in our sense is then a generalized, proof-relevant version of a Kripke computability predicate.

6.4.1 Reflection of object and meta-space

By definition, the inverse image functors ${j}^{\ast},{i}^{\ast}$ have fully faithful right adjoints ${j}_{\ast}:{\mathbf{Sh}\,{\boldsymbol{\mathsf{I}}}}\hookrightarrow {\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}}$ and ${i}_{\ast}:{\mathbf{Sh}\,{\boldsymbol{\mathsf{A}}}}\hookrightarrow {\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}}$, respectively. These are computed as follows:

\begin{align*} {j}_{\ast}E &= (E, {1_{\alpha ^{\ast}E}}:{\alpha ^{\ast}E}\to {\alpha ^{\ast}E})\\ {i}_{\ast}A &= (\mathbf{1}_{\mathbf{Sh}\,{\boldsymbol{\mathsf{I}}}}, {!_{A}}:{A}\to {\mathbf{1}_{\mathbf{Sh}\,{\boldsymbol{\mathsf{A}}}}\cong \alpha ^{\ast}\mathbf{1}_{\mathbf{Sh}\,{\boldsymbol{\mathsf{I}}}}}) \end{align*}

Thus, the adjunctions ${j}^{\ast}\dashv {j}_{\ast}$ and ${i}^{\ast}\dashv {i}_{\ast}$ exhibit $\mathbf{Sh}\,{\boldsymbol{\mathsf{I}}}$ and $\mathbf{Sh}\,{\boldsymbol{\mathsf{A}}}$ as reflective subcategories of $\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}$ :

  1. (1) The essential image of the reflective embedding ${j}_{\ast}:{\mathbf{Sh}\,{\boldsymbol{\mathsf{I}}}}\hookrightarrow {\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}}$ is spanned by computability spaces $X$ for which ${\pi _X}:{{i}^{\ast}X}\to {\alpha ^{\ast} {j}^{\ast}X}$ is an isomorphism, that is, such that $\pi _X$ is terminal in the slice $\mathbf{Sh}\,{\boldsymbol{\mathsf{A}}}\downarrow \alpha ^{\ast}{j}^{\ast}X$ .

  2. (2) The essential image of the reflective embedding ${{i}}_{\ast}:{\mathbf{Sh}\,{\boldsymbol{\mathsf{A}}}}\hookrightarrow {\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}}$ is spanned by computability spaces $X$ such that ${j}^{\ast}X$ is terminal in $\mathbf{Sh}\,{\boldsymbol{\mathsf{I}}}$.

Definition 21 (Vocabulary for reflective subcategories). When a computability space lies in the essential image of ${{j}_{\ast}}:{\mathbf{Sh}\,{\boldsymbol{\mathsf{I}}}}\hookrightarrow {\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}}$, we shall refer to it as lying in object-space. Likewise, when a computability space lies in the essential image of ${{i}_{\ast}}:{\mathbf{Sh}\,{\boldsymbol{\mathsf{A}}}}\hookrightarrow {\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}}$, we shall say that it lies in meta-space.

6.4.2 Coreflection of object- and meta-space

Both the open and closed immersions are essential morphisms of topoi, in the sense that we have additional (necessarily fully faithful) left adjoints ${j}_!\dashv {j}^{\ast}\colon \mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}\to \mathbf{Sh}\,{\boldsymbol{\mathsf{I}}}$ and ${i}_!\dashv {i}^{\ast}\colon \mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}\to \mathbf{Sh}\,{\boldsymbol{\mathsf{A}}}$ that are computed as follows:

\begin{align*} {j}_!E &= (E, {!_{\alpha ^{\ast}E}}:{\mathbf{0}_{\mathbf{Sh}\,{\boldsymbol{\mathsf{A}}}}}\to {\alpha ^{\ast}E}) \\ {i}_!A &= (\alpha _!A, {\eta _A}:{A}\to {\alpha ^{\ast}\alpha _!A}) \end{align*}

Thus, $\mathbf{Sh}\,{\boldsymbol{\mathsf{I}}}$ and $\mathbf{Sh}\,{\boldsymbol{\mathsf{A}}}$ are not only reflective in $\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}$ – they are also coreflective.

6.5 The language of STC

As $\boldsymbol{\mathsf{I}}$ and $\boldsymbol{\mathsf{A}}$ are both subtopoi of $\boldsymbol{\mathsf{G}}$ , their reflections (Section 6.4.1) can be expressed in the internal language of $\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}$ by means of a pair of complementary lex idempotent monads ( $\large \circ$ , $\bullet$ ). The internal language of $\mathbf{Sh}\,{\boldsymbol{\mathsf{I}}}$ is presented by the $\large \circ$ -modal or object-space types and $\mathbf{Sh}\,{\boldsymbol{\mathsf{A}}}$ is presented by $\bullet$ -modal or meta-space types. Because they form an open/closed partition, these modal subuniverses admit a particularly simple formulation:

Theorem 22. There exists a proposition $\mathsf{obj} : \Omega$ such that

  1. (1) a type $X$ is $\large \circ$ -modal/object-space iff the canonical map $X \to (\mathsf{obj} \to X)$ is an isomorphism;

  2. (2) a type $X$ is $\bullet$ -modal/meta-space iff the projection $\mathsf{obj}\times X\to \mathsf{obj}$ is an isomorphism.

Remark 23. In fact, the coreflection of $\mathbf{Sh}\,{\boldsymbol{\mathsf{I}}}$ in $\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}$ lifts smoothly into the internal language (though we shall not use this fact) by the idempotent comonadic modality $\square \dashv {\large \circ }$ that sends $X$ to the product $\square {X} = \mathsf{obj}\times X$ . On the other hand, the coreflection of $\mathbf{Sh}\,{\boldsymbol{\mathsf{A}}}$ in $\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}$ cannot be expressed directly in the internal language.
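For orientation, we also recall the standard internal description of the two reflections attached to the proposition $\mathsf{obj}$ (the usual open/closed modality yoga; we state it only as a reminder, and it is not needed in the sequel):

\begin{equation*} {\large \circ }\,X \;\simeq\; (\mathsf{obj} \to X) \qquad\qquad \bullet X \;\simeq\; \mathsf{obj} \sqcup _{\mathsf{obj}\times X} X \end{equation*}

Here the pushout is taken over the two projections out of $\mathsf{obj}\times X$.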

Notation 24. We will use extension types $\lbrace A \vert \phi \hookrightarrow a\rbrace$ in the internal language of $\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}$ as realized by the subset comprehension of topos logic, treating their introduction and elimination rules silently. Here, $\phi$ will be an element of the subobject classifier, in contrast to the situation in our object language, where it ranged over fixed proposition symbols.
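Concretely, this realization may be taken to define the extension type as a subset comprehension, so that its introduction and elimination principles hold on the nose:

\begin{equation*} \lbrace A \vert \phi \hookrightarrow a\rbrace \;=\; \{\, x : A \mid \phi \Rightarrow x = a \,\} \end{equation*}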

Remark 25. We assume a subuniverse $\Omega _{\textit {dec}}\subseteq \Omega$ of the subobject classifier that is closed under finite disjunctions and contains $\mathsf{obj}$; $\Omega _{\textit {dec}}$ will ultimately be a subuniverse spanned by pointwise/externally decidable propositions (Angiuli et al. 2021), but this fact will not play a role in the synthetic development.

Notation 26. We will reuse Notation 14 and write $\{\mathsf{obj}\}\,A$ rather than $\{\_ : \mathsf{obj}\} \to A$ when $A : \{\_:\mathsf{obj}\}\to \mathscr{U}$ .

As a presheaf topos, $\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}$ inherits a hierarchy of cumulative universes $\mathscr{U}_{i}$, each of which supports the strict gluing or (mixed-phase) refinement type (Gratzer et al. 2022): a version of the dependent sum of a family of meta-space types indexed in an object-space type $A$ that additionally restricts within object-space to exactly $A$:

\begin{equation*} \frac { A : \{\mathsf{obj}\}\, \mathscr{U}_{i} \qquad B : (\{\mathsf{obj}\}\, A) \to \mathscr{U}_{i} \qquad \{\mathsf{obj}\}\, (a : A) \to (B\,a \cong \mathbf{1}) }{(x : A) \ltimes B\,x : \lbrace \mathscr{U}_{i}\vert \mathsf{obj} \hookrightarrow A\rbrace \qquad \mathsf{gl} : \lbrace ((x : \{\mathsf{obj}\}\, A)\times B\,x) \cong (x : A) \ltimes B\,x \vert \mathsf{obj} \hookrightarrow \pi _{1}\rbrace } \end{equation*}

Remark 27. In topos logic, it is a property for a function to have an inverse; thus, we have conveniently packaged the introduction and elimination rules for $(x : A) \ltimes B\,x$ into a single function $\mathsf{gl}$ that is assumed to be an isomorphism.

Notation 28. We write $[\mathsf{obj} \hookrightarrow a \vert b]$ for $\mathsf{gl}\,(a,b)$ and $\mathsf{ungl}\,x$ for $\pi _{2}(\mathsf{gl}^{-1} x)$ . When constructing particularly complex inhabitants of $(x : A) \ltimes B\,x$ , we will avail ourselves of copattern matching notation and write the following instead of $c = [\mathsf{obj} \hookrightarrow a \vert b]$ :

\begin{align*} \begin{array}{l} \mathsf{obj} \hookrightarrow c = a\\ \mathsf{ungl}\,c = b \end{array} \end{align*}

Both $\large \circ$ and $\bullet$ induce reflective subuniverses ${\mathscr{U}_{\large \circ }^i,\mathscr{U}_{\bullet }^i}\hookrightarrow {\mathscr{U}_{i}}$ spanned by modal types, and these universes are themselves modal. Following Sterling (2021), we use strict gluing to choose these universes with additional strict properties:

\begin{equation*} \mathscr{U}_{\large \circ }^i : \lbrace \mathscr{U}_{i+1}\vert \mathsf{obj} \hookrightarrow \mathscr{U}_{i}\rbrace \qquad \mathscr{U}_{\bullet }^i : {\lbrace \mathscr{U}_{i+1}\vert \mathsf{obj} \hookrightarrow \mathbf{1}\rbrace } \end{equation*}

Furthermore, the inclusion ${\mathscr{U}_{\large \circ }^i}\hookrightarrow {\mathscr{U}_{i}}$ restricts to the identity under $\mathsf{obj}$ . With the modal universes to hand, we may choose ${\large \circ } : \mathscr{U}_{i} \to \mathscr{U}_{i}$ and $\bullet : \mathscr{U}_{i} \to \mathscr{U}_{i}$ to factor through $\mathscr{U}_{\large \circ }^i$ and $\mathscr{U}_\bullet ^i$ , respectively. Henceforth, we will suppress the inclusions ${\mathscr{U}_{\large \circ }^i,\mathscr{U}_{\bullet }^i}\hookrightarrow {\mathscr{U}_{i}}$ and write, for example, ${\large \circ } : \mathscr{U}_{i} \to \mathscr{U}_{\large \circ }^i$ for the reflections.

Remark 29. The strict gluing types, modal universes, and their modal reflections can be chosen to commute strictly with the liftings ${\mathscr{U}_{i}}\to {\mathscr{U}_{i + 1}}$ .

The interpretation of the $\mathbf{TT}_{\mathbb{P}}$ signature within $\mathbf{Sh}\,{\boldsymbol{\mathsf{I}}}$ internalizes into $\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}$ as a sequence of constants valued in the subuniverse $\mathscr{U}_{\large \circ }^0$ ; for instance, we have:

\begin{align*} \begin{array}{l} \mathsf{tp} :\mathscr{U}_{\large \circ }^0\\ \mathsf{tm} :\mathsf{tp} \to \mathscr{U}_{\large \circ }^0\\ \langle p \rangle :\Omega _{\textit {dec}}\ \ \text{(for $p\in \mathbb{P}$)}\\ \mathsf{ext}_p : (A : \mathsf{tp})\to (a : \{\langle p \rangle \}\, \mathsf{tm}{A}) \to \mathsf{tp}\\ \mathsf{in}_p : (A : \mathsf{tp})\,(a : \{\langle p \rangle \}\, \mathsf{tm}{A}) \to \lbrace \mathsf{tm}A\vert \langle p \rangle \hookrightarrow a\rbrace \cong \mathsf{tm}(\mathsf{ext}_p\,A\,a) \end{array} \end{align*}

Following Remark 27, we package the pair $(\mathsf{in}_p,\mathsf{out}_p)$ as a single isomorphism $\mathsf{in}_p$ .

The presheaf of terms in the model $\boldsymbol{\mathbf{A}}$ internalizes as a meta-space type of variables which by virtue of the structure map ${\boldsymbol{\mathbf{A}}}\to {\mathbf{I}}$ can be indexed over the object-space collection of terms. We realize this synthetically as follows:

\begin{align*} \mathsf{var} : (A:\mathsf{tp}) \to \lbrace \mathscr{U}_{0}\vert \mathsf{obj} \hookrightarrow \mathsf{tm}A\rbrace \end{align*}

We refer to extensional type theory extended with these constants and modalities as the language of STC.

Remark 30. To account for strict universes – those for which $\mathsf{el}$ commutes strictly with chosen codes – some prior STC developments employed strict gluing along the image of $\mathsf{el}$ (Sterling 2021; Sterling and Angiuli 2021). By limiting our usage of strict gluing to $\mathsf{obj}$, we are able to execute our constructions in a constructive metatheory. To model strict universes, we instead use the cumulativity of the hierarchy of universes $\mathscr{U}_{i}$ and the fact that all levels are coherently closed under modalities and strict gluing.

6.6 Normal and neutral forms

Internally to STC, we now specify the normal and neutral forms of terms, and the normal forms of types. Following Sterling and Angiuli (2021), we index the type of neutral forms by a frontier of instability, a proposition at which the neutral form is no longer meaningful. Our construction proceeds in two steps. First, we define a series of indexed quotient-inductive definitions (Kaposi et al. 2019) specifying the meta-space components of normal and neutral forms:

\begin{align*} \begin{array}{l} {\mathsf{nf}}_\bullet : (A : \mathsf{tp}) \to \mathsf{tm}A \to \mathscr{U}_{\bullet }^0\\ \mathsf{ne}_\bullet : (A : \mathsf{tp}) \to \Omega _{\textit {dec}}\to \mathsf{tm}A \to \mathscr{U}_{\bullet }^0\\ \mathsf{nftp}_\bullet : \mathsf{tp} \to \mathscr{U}_{\bullet }^0 \end{array} \end{align*}

Next, we use the strict gluing connective to define the types of normals, neutrals, and normal types such that they lie strictly over $\mathsf {tm}$ and $\mathsf {tp}$ :

\begin{align*} \begin{array}{l} {\mathsf{nf}}\,A = (a : \mathsf{tm}A) \ltimes {\mathsf{nf}}_\bullet \,A\,a\\ \mathsf{ne}_\phi \,A = (a : \mathsf{tm}A) \ltimes \mathsf{ne}_\bullet \,A\,\phi \,a\\ \mathsf{nftp} = (A : \mathsf{tp}) \ltimes \mathsf{nftp}_\bullet \,A \end{array} \end{align*}
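By the typing rule for $\ltimes$, these glued types do lie strictly over $\mathsf{tm}$ and $\mathsf{tp}$ in the following precise sense (we elide the exact universe level, which plays no role here):

\begin{equation*} {\mathsf{nf}}\,A : \lbrace \mathscr{U}\vert \mathsf{obj} \hookrightarrow \mathsf{tm}A\rbrace \qquad \mathsf{ne}_\phi \,A : \lbrace \mathscr{U}\vert \mathsf{obj} \hookrightarrow \mathsf{tm}A\rbrace \qquad \mathsf{nftp} : \lbrace \mathscr{U}\vert \mathsf{obj} \hookrightarrow \mathsf{tp}\rbrace \end{equation*}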

We illustrate a representative fragment of the inductive definitions in Figure 2.

The induction principles for ${\mathsf{nf}}_\bullet ,\mathsf{ne}_\bullet$ and $\mathsf{nftp}_\bullet$ play no role in the main development, which works with any algebra for these constants. These induction principles, however, are needed in order to prove Theorem 48 and deduce the decidability of definitional equality and the injectivity of type constructors. These same considerations motivate our choice to index $\mathsf{ne}_\bullet$ over $\Omega _{\textit {dec}}$ rather than $\Omega$ .

Figure 2. Selected rules from the definition of $\mathsf{nf}$ , $\mathsf{ne}$ , and $\mathsf{nftp}$ .

6.7 A glued normalization algebra

We can now construct a new $\mathbf{TT}_{\mathbb{P}}$ -algebra internally to $\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}$ , satisfying the constraint that each of its constituents restricts under $\mathsf{obj}$ to the corresponding constant from the $\mathbf{TT}_{\mathbb{P}}$ -algebra inherited from $\mathbf{Sh}\,{\boldsymbol{\mathsf{I}}}$ . We shall refer to this as the normalization algebra. For instance, we must define types representing object types and terms:

\begin{equation*} \mathsf {tp}^{\ast} : \lbrace \mathscr{U}_{2}\vert \mathsf{obj} \hookrightarrow \mathsf {tp}\rbrace \qquad \mathsf {tm}^{\ast} : \lbrace \mathsf {tp}^{\ast} \to \mathscr{U}_{1}\vert \mathsf{obj} \hookrightarrow \mathsf {tm}\rbrace \end{equation*}

The meta-space component of the computability structure of types is given as a dependent record below:

\begin{align*} \begin{array}{l} \mathbf{record}\,\mathsf{tp}_\bullet \,(A:\mathsf{tp}):{\mathscr{U}_{2}}\; \mathbf{where}\\ \quad \mathsf{code} : \mathsf{nftp}_\bullet \,A\\ \quad \mathsf{tm}_\bullet : \mathsf{tm}A \to \mathscr{U}_{\bullet }^1\\ \quad \mathsf{reflect} : (a : \mathsf{tm}A)\,(\phi : \Omega _{\textit {dec}})\,(e : \mathsf{ne}_\bullet \,A\,\phi \,a) \to (a_\phi : \{\phi \}\,\mathsf{tm}_\bullet \,a) \to \lbrace {\mathsf{tm}_\bullet \,a}\vert {\phi \hookrightarrow a_\phi }\rbrace \\ \quad \mathsf{reify} : (a : \mathsf{tm}A) \to \mathsf{tm}_\bullet \,a \to {\mathsf{nf}}_\bullet \,A\,a \end{array} \end{align*}

The $\mathsf{tm}_\bullet$ field classifies the meta-space component of a given element; the $\mathsf{reflect}$ and $\mathsf{reify}$ fields generalize the familiar operations of normalization by evaluation, subject to Sterling and Angiuli's stabilization yoga (Sterling and Angiuli 2021). We finally define both $\mathsf{tp}^{\ast}$ and $\mathsf{tm}^{\ast}$ using strict gluing to achieve the correct boundary:

\begin{align*} \begin{array}{l} \mathsf{tp}^{\ast} = (A : \mathsf{tp}) \ltimes \mathsf{tp}_\bullet \,A\\ \mathsf{tm}^{\ast}\,A = (a : \mathsf{tm}A) \ltimes (\mathsf{ungl}\,A).\mathsf{tm}_\bullet \,a \end{array} \end{align*}

Notation 31. Henceforth, we will write $A.\mathsf{fld}$ rather than $(\mathsf{ungl}{A}).\mathsf{fld}$ to access a field of the closed component of $A$ .

We must also define $\langle p \rangle ^{\ast} : \Omega _{\textit {dec}}$ for each $p \in \mathbb{P}$ subject to the condition that $\mathsf{obj}$ implies $\langle p \rangle ^{\ast} = \langle p \rangle$ . As there is no normalization data associated with these propositions, we define $\langle p \rangle ^{\ast} = \langle p \rangle$ which clearly satisfies the boundary condition. It remains to show that $(\mathsf{tp}^{\ast},\mathsf{tm}^{\ast})$ are closed under all the connectives of $\mathbf{TT}_{\mathbb{P}}$ . We show two representative cases: extension types and the universe.

6.7.1 Extension types

Fixing $A : \mathsf{tp}^{\ast}$ , $p : \mathbb{P}$ , $a : \{\langle p \rangle \}\,\mathsf{tm}^{\ast}\,A$ , we must construct the following pair of constants:

\begin{align*} \begin{array}{l} \mathsf{ext}^{\ast}_p\,A\,a : \lbrace \mathsf{tp}^{\ast} \vert \mathsf{obj} \hookrightarrow \mathsf{ext}_p\,A\,a\rbrace \\ \mathsf{in}^{\ast}_p\,A\,a : \lbrace \lbrace \mathsf{tm}^{\ast}\,A \vert \langle p \rangle \hookrightarrow a\rbrace \cong \mathsf{tm}^{\ast}\,(\mathsf{ext}^{\ast}_p\,A\,a) \vert \mathsf{obj} \hookrightarrow \mathsf{in}_p\,A\,a\rbrace \end{array} \end{align*}

Recalling the definition of $\mathsf{tp}^{\ast}$ as a strict gluing type, we observe that the boundary condition on $\mathsf{ext}^{\ast}_p$ already fully constrains the first component:

\begin{align*} \mathsf{obj} \hookrightarrow \mathsf{ext}^{\ast}_p\,A\,a = \mathsf{ext}_p\,A\,a \end{align*}

In the above, we have used Notation 28 for constructing elements of a strict gluing type.

We define the second component as follows, using copattern matching notation:

In the clauses of $\mathsf{reify}$ and $\mathsf{reflect}$ , we were allowed to assume that the argument was of the form $\eta _\bullet x$ where $\eta _\bullet$ is the unit of the modality $\bullet$ : $\eta _\bullet : A \to \bullet A$ . This is because we are mapping into meta-space types and so this “pattern-matching” amounts to the bind operation of the monad $\bullet$ .

Remark 32. Stabilized neutrals are crucial to the definition of $(\mathsf{ext}^{\ast}_p\,A\,a) \hspace {.1ex} . \hspace {.1ex} \mathsf{reflect}$ above: without them, we could not ensure that the reflected element lies within the specified subtype of $A \hspace {.1ex} . \hspace {.1ex} \mathsf{tm}_\bullet$.

The definition of $\mathsf{in}^{\ast}_p$ is now straightforward:

\begin{align*} \mathsf{in}^{\ast}_p\,A\,a\,x = [{\mathsf{obj} \hookrightarrow \mathsf{in}_p\,A\,a\,x}\vert {\mathsf{ungl}\,x}] \end{align*}

We leave the routine verification of the various boundary conditions to the reader; nearly all of them follow immediately from the properties of strict gluing.
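As a representative instance, the laws of strict gluing (Remark 27 and Notation 28) compute, under $\mathsf{obj}$,

\begin{equation*} \mathsf{in}^{\ast}_p\,A\,a\,x \;=\; [{\mathsf{obj} \hookrightarrow \mathsf{in}_p\,A\,a\,x}\vert {\mathsf{ungl}\,x}] \;=\; \mathsf{in}_p\,A\,a\,x \end{equation*}

so $\mathsf{in}^{\ast}_p\,A\,a$ restricts to $\mathsf{in}_p\,A\,a$, exactly as its boundary condition demands.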

Figure 3. The normalization structure on the universe.

6.7.2 The universe

We now turn to the construction of the universe in the normalization algebra; it is here that the complexity of unstable neutrals becomes evident. Once again, the boundary conditions force part of the definition of $\mathsf{uni}^{\ast}$: under $\mathsf{obj}$, we must have $\mathsf{uni}^{\ast} = \mathsf{uni}$.

The second component of $\mathsf{uni}^{\ast}$ is complex, and we present its definition in Figure 3. The inclusion of $\mathsf{el\text{-}{}code}$ in $\mathsf{uni}_\bullet$ is necessary in order to define $\mathsf{el}^{\ast}$ :

\begin{align*} \begin{array}{l} \mathsf{obj} \hookrightarrow \mathsf{el}^{\ast}\,A = \mathsf{el}A\\ (\mathsf{el}^{\ast}\,(\eta _\bullet A)) \hspace {.1ex} . \hspace {.1ex} \mathsf{code} = A \hspace {.1ex} . \hspace {.1ex} \mathsf{el\text{-}{}code}\\ (\mathsf{el}^{\ast}\,(\eta _\bullet A)) \hspace {.1ex} . \hspace {.1ex} \mathsf{tm}_\bullet = A \hspace {.1ex} . \hspace {.1ex} \mathsf{tm}_\bullet \\ (\mathsf{el}^{\ast}\,(\eta _\bullet A)) \hspace {.1ex} . \hspace {.1ex} \mathsf{reflect} = A \hspace {.1ex} . \hspace {.1ex} \mathsf{reflect}\\ (\mathsf{el}^{\ast}\,(\eta _\bullet A)) \hspace {.1ex} . \hspace {.1ex} \mathsf{reify} = A \hspace {.1ex} . \hspace {.1ex} \mathsf{reify} \end{array} \end{align*}

Finally, we must show that $\mathsf{uni}^{\ast}$ is closed under all small type formers and that $\mathsf{el}^{\ast}$ preserves them. This flows from the cumulativity of universes in $\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}$; to close $\mathsf{uni}^{\ast}$ under, for example, products, we essentially "redo" the construction of products in $\mathsf{tp}^{\ast}$, altering its predicate to be valued in $\mathscr{U}_{0}$ rather than $\mathscr{U}_{1}$.

6.7.3 The evaluation functor

In this section, we equip $\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}$ with the maximal CwR structure, in which all maps are representable. We have now defined an interpretation of $\mathbf{TT}_{\mathbb{P}}$'s signature (Section 6.2) in $\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}$, and so by the universal property (Proposition 10) of $\mathbb{T}$ as the classifying CwR for this signature, we obtain a unique CwR functor ${I_{\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}}}:{\mathbb{T}}\to {\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}}$ sending every construct of $\mathbf{TT}_{\mathbb{P}}$ to its interpretation.

6.8 The normalization algorithm

Having constructed the normalization algebra in $\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}$ (Section 6.7), we can now define the actual normalization function using an argument based on those presented by Fiore (2002, 2022, §II.2) and Sterling (2025, §3.3), making use of the inserter model of atomic substitutions introduced by Bocquet et al. (2021). As our results are constructive, our normalization function corresponds to an actual normalization by evaluation algorithm – whose executable computational content Fiore (2022) has demonstrated explicitly in the simply typed case.

6.8.1 Stripping of atomic contexts

We first must establish an intermediate result: that the functor ${\boldsymbol{\mathbf{A}}\langle - \rangle }:{\mathbb{P}}\to {\boldsymbol{\mathbf{A}}_\diamond }$ sending each $p\in \mathbb{P}$ to the associated unary atomic context is fully faithful and has a left adjoint, that is, that $\mathbb{P}$ is reflective in $\boldsymbol{\mathbf{A}}_\diamond$ . The left adjoint allows an atomic context to be “stripped” of anything that induces variables, leaving only propositional assumptions. This result is ultimately used in Lemma 40 to exhibit an isomorphism $\boldsymbol{\mathbf{A}}\langle p \rangle \cong \alpha ^{\ast}\mathbf{I}\langle p \rangle$ .

While it is straightforward to imagine how such a reflection could be defined by "induction on atomic contexts and repeated weakening," we have not given an inductive specification of $\boldsymbol{\mathbf{A}}_{\diamond }$ and have instead opted to specify it through its universal property. Accordingly, we define this stripping map using a model to which we may apply the universal property of $\boldsymbol{\mathbf{A}}$. Fundamentally, the resulting constructions are the same, but our insistence on using only these universal properties enables us to avoid fixing a particular and explicit construction of atomic contexts.

Construction 33 (The stripping model). We consider a model $\mathbf{P}$ of $\mathbb{T}$ in which we set $\mathbf{P}_\diamond = \mathbb{P}$, $\mathbf{P}(\mathsf{tp}) = \mathbf{P}(\mathsf{tm}) = \mathbf{1}_{{\mathbf{Pr}\,{\mathbb{P}}}}$, and $\mathbf{P}\langle p \rangle = \mathsf{Y}_{\mathbb{P}}\,p$. All the remaining constructs of the model are trivial by virtue of these definitions. From the universal property of $\mathbf{I}$ as the bi-initial $\mathbb{T}$-model, we obtain a unique homomorphism of models ${I_{\mathbf{P}}}:{\mathbf{I}}\to {\mathbf{P}}$ whose contextual component is a product-preserving functor ${I_{\mathbf{P}}^\diamond }:{\mathbf{I}_\diamond }\to {\mathbb{P}}$.

Lemma 34. The component ${\boldsymbol{\mathbf{A}}\langle - \rangle }:{\mathbb{P}}\to {\boldsymbol{\mathbf{A}}_\diamond }$ of the bi-initial atomic substitution model over $\mathbf{I}$ is full and faithful.

Proof. Any functor out of a poset is necessarily faithful. To see that ${\boldsymbol{\mathbf{A}}\langle - \rangle }:{\mathbb{P}}\to {\boldsymbol{\mathbf{A}}_\diamond }$ is full, we fix a morphism ${\boldsymbol{\mathbf{A}}\langle p \rangle }\to {\boldsymbol{\mathbf{A}}\langle q \rangle }$ in $\boldsymbol{\mathbf{A}}_\diamond$; as this morphism is necessarily unique, it suffices to show that $p\leq q$ in $\mathbb{P}$. We consider the image of ${\boldsymbol{\mathbf{A}}\langle p \rangle }\to {\boldsymbol{\mathbf{A}}\langle q \rangle }$ under the contextual component of the composite homomorphism ${I_{\mathbf{P}}\circ \alpha }:{\boldsymbol{\mathbf{A}}}\to {\mathbf{P}}$ of $\mathbb{T}_0$-models, which gives precisely the desired inequality $p\leq q$, recalling that each $\langle r \rangle$ is representable in $\mathbb{T}_0$ and thus preserved by homomorphisms of models.

Lemma 35. The functor between categories of contexts induced by ${I_{\mathbf{P}}\circ \alpha }:{\boldsymbol{\mathbf{A}}}\to {\mathbf{P}}$ is left adjoint to the embedding ${\boldsymbol{\mathbf{A}}\langle - \rangle }:{\mathbb{P}}\hookrightarrow {\boldsymbol{\mathbf{A}}_\diamond }$ .

Proof. The counit in $\mathbb{P}$ is given by the identity inequality, as each $\langle p \rangle$ is representable in $\mathbb{T}_0$ and thus preserved by homomorphisms. For the unit, we must construct a (necessarily unique) arrow ${\Gamma }\to {\boldsymbol{\mathbf{A}}\langle {{I_{\mathbf{P}}^\diamond (\alpha _\diamond \Gamma )}}\rangle }$ in $\boldsymbol{\mathbf{A}}_\diamond$ for each atomic context $\Gamma$ .

For this, we consider a new atomic substitution model $\mathbf{E}$ over $\boldsymbol{\mathbf{A}}$ whose category of contexts $\mathbf{E}_\diamond$ is the inserter object (Lack 2009, Section 6.5) in $\mathsf{Cat}$ of the identity functor on $\boldsymbol{\mathbf{A}}_\diamond$ and the composite functor $\Gamma \mapsto \boldsymbol{\mathbf{A}}\langle {I_{\mathbf{P}}^\diamond (\alpha _\diamond \Gamma )}\rangle$.

Equivalently, $\mathbf{E}_\diamond$ is the full subcategory of $\boldsymbol{\mathbf{A}}_\diamond$ spanned by atomic contexts $\Gamma$ for which there exists an arrow ${\Gamma }\to {\boldsymbol{\mathbf{A}}\langle {{I_{\mathbf{P}}^\diamond (\alpha _\diamond \Gamma )}}\rangle }$ ; as the codomain is subterminal, such arrows are necessarily unique. We define all the constructs of $\mathbb{T}_0$ in $\mathbf{E}$ as in $\boldsymbol{\mathbf{A}}$ , and it remains only to check that $\mathbf{E}$ has a terminal object and is closed under context comprehension and phase comprehension.

  1. (1) For the terminal object, we see that $\boldsymbol{\mathbf{A}}\langle {I_{\mathbf{P}}^\diamond (\alpha _\diamond \mathbf{1}_{\boldsymbol{\mathbf{A}}_\diamond })}\rangle$ is already terminal.

  2. (2) For the context comprehension, we fix $\Gamma \in \mathbf{E}_\diamond$ and $A\in \boldsymbol{\mathbf{A}}(\mathsf{tp})(\Gamma )$, and we must check that there exists a map ${\Gamma .A}\to {\boldsymbol{\mathbf{A}}\langle {{I_{\mathbf{P}}^\diamond (\alpha _\diamond (\Gamma .A))}}\rangle }$. As $I_{\mathbf{P}}\circ \alpha$ is a homomorphism of models, it preserves context comprehensions; unraveling definitions, we ultimately have $I_{\mathbf{P}}^\diamond (\alpha _\diamond (\Gamma .A))=I_{\mathbf{P}}^\diamond (\alpha _\diamond (\Gamma ))$ and so we are done.

  3. (3) For phase comprehension, we fix $\Gamma \in \mathbf{E}_\diamond$ and $p\in \mathbb{P}$ to check that there exists an arrow ${\Gamma .\boldsymbol{\mathbf{A}}\langle p \rangle }\to {\boldsymbol{\mathbf{A}}\langle {{I_{\mathbf{P}}^\diamond (\alpha _\diamond (\Gamma .\boldsymbol{\mathbf{A}}{(\langle p \rangle )}))}}\rangle }$. But we have $I_{\mathbf{P}}^\diamond (\alpha _\diamond (\Gamma .\boldsymbol{\mathbf{A}}{(\langle p \rangle )})) = I_{\mathbf{P}}^\diamond (\alpha _\diamond (\Gamma ))\land p$, so we may combine the projection ${\Gamma .\boldsymbol{\mathbf{A}}\langle p \rangle }\to {\boldsymbol{\mathbf{A}}\langle p \rangle }$ with the (weakened) arrow ${\Gamma }\to {\boldsymbol{\mathbf{A}}\langle {I_{\mathbf{P}}^\diamond (\alpha _\diamond \Gamma )}\rangle }$ supplied by $\Gamma \in \mathbf{E}_\diamond$.

We evidently have a homomorphism of $\mathbb{T}_0$-models ${\eta }:{\mathbf{E}}\to {\boldsymbol{\mathbf{A}}}$ that exhibits $\mathbf{E}$ as an atomic substitution model over $\boldsymbol{\mathbf{A}}$. Postcomposing with the structure map ${\alpha }:{\boldsymbol{\mathbf{A}}}\to {\mathbf{I}}$, we can view $\mathbf{E}$ as an atomic substitution model over $\mathbf{I}$. Thus, by the universal property of $\boldsymbol{\mathbf{A}}$, we have a universal section ${J_{\mathbf{E}}}:{\boldsymbol{\mathbf{A}}}\to {\mathbf{E}}$ to ${\eta }:{\mathbf{E}}\to {\boldsymbol{\mathbf{A}}}$. This shows that every atomic context $\Gamma \in \boldsymbol{\mathbf{A}}_\diamond$ can be equipped with an arrow ${\Gamma }\to {\boldsymbol{\mathbf{A}}\langle {{I_{\mathbf{P}}^\diamond (\alpha _\diamond \Gamma )}}\rangle }$. Assembling all these arrows together, we obtain the unit of the adjunction $I_{\mathbf{P}}^\diamond \circ \alpha _\diamond \dashv \boldsymbol{\mathbf{A}}\langle {-}\rangle$.

The force of Lemma 35 is to show that $\mathbb{P}$ is a reflective subcategory of $\boldsymbol{\mathbf{A}}_\diamond$ .

6.8.2 Computability spaces of atomic and computable substitutions

We will consider two computability spaces induced by an atomic context $\Gamma$ : the computability space $[\![\Gamma ]\!]$ of “computable substitutions into $\Gamma$ ” and the computability space $(\!|{\Gamma }|\!)$ of “atomic substitutions into $\Gamma$ .”

Construction 36 (The computability space of computable substitutions). The computability space $[\![\Gamma ]\!]$ of computable substitutions into an atomic context $\Gamma$ is defined in terms of the interpretation of $\mathbb{T}$ into $\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}$, sending each atomic context to the computability space determined by the algebra structure: $[\![\Gamma ]\!] = I_{\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}}(\alpha _\diamond \Gamma )$.

Construction 37 (The computability space of atomic substitutions). We define an embedding ${(\!|-|\!)}:{\boldsymbol{\mathbf{A}}_\diamond }\hookrightarrow {\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}}$ sending $\Gamma \in \boldsymbol{\mathbf{A}}_\diamond$ to the computability space $(\!|\Gamma |\!)$ with ${j}^{\ast}(\!|{\Gamma }|\!) = \mathsf{Y}_{\mathbf{I}_\diamond }(\alpha _\diamond \Gamma )$ and ${i}^{\ast}(\!|{\Gamma }|\!) = \mathsf{Y}_{\boldsymbol{\mathbf{A}}_\diamond }\Gamma$, such that ${\pi _{(\!|\Gamma |\!)}}:{\mathsf{Y}_{\boldsymbol{\mathbf{A}}_\diamond }\Gamma }\to {\alpha ^{\ast}\mathsf{Y}_{\mathbf{I}_\diamond }(\alpha _\diamond \Gamma )}$ is defined on generalized elements by the functorial action of ${\alpha _\diamond }:{\boldsymbol{\mathbf{A}}_\diamond }\to {\mathbf{I}_\diamond }$ as follows:

\begin{equation*} \pi _{(\!|\Gamma |\!)}^\Delta ({\gamma }:{\Delta }\to {\Gamma }) = {\alpha _\diamond {\gamma }}:{\alpha _\diamond \Delta }\to {\alpha _\diamond \Gamma } \end{equation*}

Miraculously, the coreflective embedding ${{i}_!}:{\mathbf{Sh}\,{\boldsymbol{\mathsf{A}}}}\to {\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}}$ sends $\mathsf {Y}{\boldsymbol{\mathbf{A}}_\diamond }\Gamma$ to precisely the computability space $(\!|\Gamma |\!)$ , up to isomorphism:

\begin{align*} {i}_!\,\mathsf{Y}_{\boldsymbol{\mathbf{A}}_\diamond }\Gamma &= (\alpha _!\mathsf{Y}_{\boldsymbol{\mathbf{A}}_\diamond }\Gamma ,\ {\eta }:{\mathsf{Y}_{\boldsymbol{\mathbf{A}}_\diamond }\Gamma }\to {\alpha ^{\ast}\alpha _!\mathsf{Y}_{\boldsymbol{\mathbf{A}}_\diamond }\Gamma }) \\ &\cong (\mathsf{Y}_{\mathbf{I}_\diamond }(\alpha _\diamond \Gamma ),\ {\mathsf{Y}_{\boldsymbol{\mathbf{A}}_\diamond }\Gamma }\to {\alpha ^{\ast}\mathsf{Y}_{\mathbf{I}_\diamond }(\alpha _\diamond \Gamma )}) \\ &= (\!|\Gamma |\!) \end{align*}

Note that the functors ${[\![-]\!],(\!|-|\!)}:{\boldsymbol{\mathbf{A}}_\diamond }\to {\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}}$ lift into the slice $\mathsf{Cat}\downarrow \mathbf{Sh}\,{\boldsymbol{\mathsf{I}}}$, in the sense that both restrict along ${j}^{\ast}$ to the same functor ${\mathsf{Y}_{\mathbf{I}_\diamond }\circ \alpha _\diamond }:{\boldsymbol{\mathbf{A}}_\diamond }\to {\mathbf{Sh}\,{\boldsymbol{\mathsf{I}}}}$.

Notation 38. Let $\Gamma$ be an atomic context, and let ${A}: {(\!|\Gamma |\!)}\to {\mathsf{tp}}$ be an object-space type, which we may regard as a morphism ${\alpha _\diamond \Gamma }\to {\mathsf{tp}}$ in $\mathbb{T}$. We shall write $[\![A]\!]\colon [\![\Gamma ]\!]\to \mathsf{tp}^{\ast}$ for the image of $A$ under the interpretation functor $I_{\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}}$.

Lemma 39. Let $\Gamma$ be an atomic context, and let ${A}:{(\!|\Gamma |\!)}\to {\mathsf{tp}}$ be an object-space type (which we may regard as a morphism ${\alpha _\diamond \Gamma }\to {\mathsf{tp}}$ in $\mathbb{T}$ ). Then we have the following cartesian squares:

Stated in the internal language, we have canonical isomorphisms $(\!|\Gamma .A|\!)\cong \sum _{\gamma :(\!|\Gamma |\!)}\mathsf{var}\,(A\gamma )$ and $[\![\Gamma .A]\!] \cong \sum _{\gamma :[\![\Gamma ]\!]}\mathsf{tm}^{\ast}\,([\![A]\!]\gamma )$.

Proof. The latter is the image of a pullback square in $\mathbb{T}$ under $I_{\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}}$ , which is finitely continuous. The former can be seen by means of an explicit computation.

We now come to an important result relating the interpretation of $\langle p \rangle$ in $\mathbf{I}$ to the interpretation of the same in $\boldsymbol{\mathbf{A}}$ . Lemma 40 is the raison d’être for the stripping model $\mathbf{P}$ in Section 6.8.1.

Lemma 40. We have a (necessarily unique) isomorphism $\boldsymbol{\mathbf{A}}\langle p \rangle \cong \alpha ^{\ast}\mathbf{I}\langle p \rangle$ .

Proof. As both presheaves are subterminal, it is enough to see that one is inhabited if and only if the other is.

  1. (1) We may transpose a map ${\mathsf{Y}_{\boldsymbol{\mathbf{A}}_\diamond }\Gamma }\to {\alpha ^{\ast}\mathbf{I}\langle p \rangle }$ to get ${\alpha _\diamond \Gamma }\to {\mathbf{I}\langle p \rangle }$; applying the functorial action of ${I_{\mathbf{P}}^\diamond }:{\mathbf{I}_\diamond }\to {\mathbb{P}}$, we have $I_{\mathbf{P}}^\diamond \alpha _\diamond \Gamma \leq I_{\mathbf{P}}^\diamond \mathbf{I}\langle p \rangle =p$ and by adjoint transpose with Lemma 35, we have ${\Gamma }\to {\boldsymbol{\mathbf{A}}\langle p \rangle }$.

  2. (2) Conversely, given an arrow ${\Gamma }\to {\boldsymbol{\mathbf{A}}\langle p \rangle }$ we may apply the functorial action of ${\alpha _\diamond }:{\boldsymbol{\mathbf{A}}_\diamond }\to {\mathbf{I}_\diamond }$ to obtain an arrow ${\alpha _\diamond \Gamma }\to {\alpha _\diamond \boldsymbol{\mathbf{A}}\langle p \rangle }$. As $\langle p \rangle$ is representable in $\mathbb{T}_0$, it is preserved by morphisms of models like ${\alpha }:{\boldsymbol{\mathbf{A}}}\to {\mathbf{I}}$; thus, $\alpha _\diamond \boldsymbol{\mathbf{A}}\langle p \rangle \cong \mathbf{I}\langle p \rangle$ and so we have ${\alpha _\diamond \Gamma }\to {\mathbf{I}\langle p \rangle }$, which we may transpose to obtain ${\mathsf{Y}_{\boldsymbol{\mathbf{A}}_\diamond }\Gamma }\to {\alpha ^{\ast}\mathbf{I}\langle p \rangle }$.

6.8.3 Hydration of atomic substitutions

A critical point in concrete normalization by evaluation algorithms is to "reflect" a vector of variables as an environment of (computable) values against which the computability interpretation of an open term can be executed; in a concrete setting, this operation is defined by recursion on the atomic contexts. The same process, which we shall refer to here as the hydration of atomic substitutions, plays an equally important role in semantic proofs of normalization in the guise of a certain hydration map ${\nearrow }:{(\!|{-}|\!)}\to {[\![-]\!]}$ in $\mathsf{Cat}\downarrow \mathbf{Sh}\,{\boldsymbol{\mathsf{I}}}$ that we shall need to construct.

Just as in the definition of the stripping map, we are confronted by the fact that we have defined the atomic contexts only by means of a universal property (the bi-initial atomic substitution model over $\mathbf{I}$), so we do not immediately have anything concrete to do recursion on. The innovation of Bocquet et al. (2021) was to find the correct categorical induction motive that explains the usual recursive argument purely in terms of the universal property of the bi-initial atomic substitution model (likewise due to op. cit.). In what follows, we adapt their ideas to our setting and show how to construct the desired hydration map.

We begin by defining a new model $\mathbf{H}$ of $\mathbb{T}_0$ that we shall refer to as the hydration model. The category of contexts $\mathbf{H}_\diamond$ is defined to be the underlying category of the inserter object for ${(\!|-|\!),[\![-]\!]}:{\boldsymbol{\mathbf{A}}_\diamond }\to {\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}}$ in $\mathsf{Cat}\downarrow \mathbf{Sh}\,{\boldsymbol{\mathsf{I}}}$.

Explicitly, an object of $\mathbf{H}_{\diamond}$ is a pair $(\Gamma ,h_\Gamma )$ of an object $\Gamma \in \boldsymbol{\mathbf{A}}_\diamond$ together with an arrow ${h_\Gamma }:{(\!|\Gamma |\!)}\to {[\![\Gamma ]\!]}$ whose image under ${{j}^{\ast}}:{\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}}\to {\mathbf{Sh}\,{\boldsymbol{\mathsf{I}}}}$ is the identity map on $\mathsf{Y}_{\mathbf{I}_\diamond }(\alpha _\diamond \Gamma )$. An arrow from $(\Delta ,h_\Delta )$ to $(\Gamma ,h_\Gamma )$ is given by an arrow ${\gamma }:{\Delta }\to {\Gamma }$ in $\boldsymbol{\mathbf{A}}_\diamond$ such that $[\![\gamma ]\!]\circ h_\Delta = h_\Gamma \circ (\!|\gamma |\!)$.

Clearly, $\mathbf{H}_\diamond$ has a terminal object because $[\![\mathbf{1}_{\boldsymbol{\mathbf{A}}_\diamond }]\!]$ is terminal. Anticipating that the evident projection functor ${\psi _\diamond }:{\mathbf{H}_\diamond }\to {\boldsymbol{\mathbf{A}}_\diamond }$ should lift to a morphism of models exhibiting $\mathbf{H}$ as an atomic substitution model over $\boldsymbol{\mathbf{A}}$, we define $\mathbf{H}\langle p \rangle = \psi _\diamond ^{\ast}\boldsymbol{\mathbf{A}}\langle p \rangle$ and $\mathbf{H}(\mathsf{tp}) = \psi _\diamond ^{\ast}\boldsymbol{\mathbf{A}}(\mathsf{tp})$. In order to define $\mathbf{H}(\mathsf{tm})$, it will be easiest to first define context comprehensions.

Construction 41 (Context comprehensions in $\mathbf{H}_\diamond$ ). Given a context $(\Gamma ,h_\Gamma )$ in $\mathbf{H}_\diamond$ and a type ${A}:{\mathsf {Y}_{\boldsymbol{\mathbf{A}}_\diamond }\Gamma }\to {\boldsymbol{\mathbf{A}}(\mathsf{tp})}$ , we can lift the context comprehension $\Gamma .A\in \boldsymbol{\mathbf{A}}_\diamond$ into $\mathbf{H}_\diamond$ by finding a suitable map ${h_{\Gamma .A}}:{(\!|\Gamma .A|\!)}\to {[\![\Gamma .A]\!]}$ whose image in $\mathbf{Sh}\,{\boldsymbol{\mathsf{I}}}$ is the identity. Recalling Lemma 39, we may equivalently construct (from the internal point of view) the following map:

The projection ${p_A}:{\Gamma .A}\to {\Gamma }$ tracks a morphism ${(\Gamma .A,h_{\Gamma .A})}\to {(\Gamma ,h_\Gamma )}$ , by definition of $h_{\Gamma .A}$ .

Construction 42 (Phase comprehensions in $\mathbf{H}_\diamond$ ). Given $\Gamma \in \mathbf{H}_\diamond$ and $p\in \mathbb{P}$ , we must exhibit a map ${h_{\Gamma .\boldsymbol{\mathbf{A}}\langle p \rangle }}:{(\!|\Gamma .\boldsymbol{\mathbf{A}}\langle p \rangle |\!)}\to {[\![\Gamma .\boldsymbol{\mathbf{A}}\langle p \rangle ]\!]}$ . Using Lemma 39, we construct this by combining the assumed map ${h_\Gamma }:{(\!|\Gamma |\!)}\to {[\![\Gamma ]\!]}$ with the (necessarily unique) map ${(\!|\boldsymbol{\mathbf{A}}\langle p \rangle |\!)}\to {[\![\boldsymbol{\mathbf{A}}\langle p \rangle ]\!]}$ obtained from the identity map on $\boldsymbol{\mathbf{A}}\langle p \rangle$ in the following way:

  1. (1) First, we observe that $[\![\boldsymbol{\mathbf{A}}\langle p \rangle ]\!] \cong {\mathbf {j}}_{\ast}\mathbf{I}\langle p \rangle$ as follows:

    \begin{align*} [\![\boldsymbol{\mathbf{A}}\langle p \rangle ]\!] &= I_{\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}}\alpha _\diamond \boldsymbol{\mathbf{A}}\langle p \rangle &&\text{by definition} \\ &\cong I_{\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}}\mathbf{I}\langle p \rangle &&\text{$\alpha $ is a homomorphism} \\ &\cong {\mathbf {j}}_{\ast}\mathbf{I}\langle p \rangle &&\text{by definition} \end{align*}
  2. (2) Then we proceed by adjoint calisthenics:

Construction 43 (The presheaf of terms). We define $\mathbf{H}(\mathsf{tm})\in \mathbf{Pr}\,{\mathbf{H}_\diamond }\downarrow \mathbf{H}(\mathsf{tp})$ to send $A\in \mathbf{H}(\mathsf{tp})(\Gamma ,h_\Gamma )$ to the set of sections of the projection ${p_A}:{(\Gamma .A,h_{\Gamma .A})}\to {(\Gamma ,h_\Gamma )}$ in $\mathbf{H}_\diamond$ .

Construction 44 (The hydration model). With the definitions that we have given, the projection functor ${\psi _\diamond }:{\mathbf{H}_\diamond }\to {\boldsymbol{\mathbf{A}}_\diamond }$ easily extends to a morphism of models, exhibiting $\mathbf{H}$ as an atomic substitution model over $\boldsymbol{\mathbf{A}}$.

Construction 45 (The hydration map). As we may compose ${\psi }:{\mathbf{H}}\to {\boldsymbol{\mathbf{A}}}$ with the structure map ${\alpha }:{\boldsymbol{\mathbf{A}}}\to {\mathbf{I}}$, we can view $\mathbf{H}$ as an atomic substitution model over $\mathbf{I}$ as well, and thus by the universal property of $\boldsymbol{\mathbf{A}}$ we have a universal section ${\boldsymbol{\mathbf{A}}}\to {\mathbf{H}}$ to ${\psi }:{\mathbf{H}}\to {\boldsymbol{\mathbf{A}}}$. Unraveling this section, we obtain precisely a natural assignment of hydration map components ${h_\Gamma }:{(\!|\Gamma |\!)}\to {[\![\Gamma ]\!]}$, from which we may assemble a single hydration map ${\nearrow }:{(\!|-|\!)}\to {[\![-]\!]}$ in $\mathsf{Cat}\downarrow \mathbf{Sh}\,{\boldsymbol{\mathsf{I}}}$.

6.8.4 Normalization and decidability

We can now show how to compute the normal form of a type $\Gamma \vdash A\ \textit{type}$ , which we regard as an arrow ${A}:{\alpha _\diamond \Gamma }\to {\mathsf{tp}}$ in $\mathbb{T}$ . Then we may apply the functorial action of the evaluation functor ${I_{\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}}}:{\mathbb{T}}\to {\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}}$ to obtain ${[\![A]\!]}:{[\![\Gamma ]\!]}\to {\mathsf{tp}^{\ast}}$ and then postcompose with the projection of normal forms to obtain ${[\![A]\!] \hspace {.1ex} . \hspace {.1ex} \mathsf{code}}:{[\![\Gamma ]\!]}\to {\mathsf{nftp}}$ . Unraveling the meaning of such a map in the gluing category $\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}$ , we see that this amounts not to a normal form for $A$ but instead to an assignment of normal forms of $A$ to computability witnesses for all the variables in the context $\Gamma$ . It is precisely this gap that hydration fills:

\begin{equation*} \mathsf{norm}_{\mathsf{tp}}(\Gamma \vdash A\ \textit{type}) = (\!|\Gamma |\!)\xrightarrow {\nearrow _\Gamma }[\![\Gamma ]\!]\xrightarrow {[\![A]\!]}\mathsf{tp}^{\ast}\xrightarrow {- \hspace {.1ex} . \hspace {.1ex} \mathsf{code}}\mathsf{nftp} \end{equation*}

The entire map above restricts within object-space, by construction, to the original type $A$ , or (to be more precise) its image in $\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}$ under ${{j}_{\ast}}:{\mathbf{Sh}\,{\boldsymbol{\mathsf{I}}}}\to {\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}}$ . The meta-space component of such a morphism ${(\!|\Gamma |\!)}\to {\mathsf{nftp}}$ is precisely a normal form for $A$ .

Theorem 46. Normalization is sound and complete:

  1. (1) Soundness. If $\mathsf{norm}_{\mathsf{tp}}( \Gamma \vdash A\ \textit{type}) = \mathsf{norm}_{\mathsf{tp}}( \Gamma \vdash B\ \textit{type})$ , then ${\Gamma } \vdash A = {B}$ .

  2. (2) Completeness. If ${\Gamma } \vdash A = B$ , then $\mathsf{norm}_{\mathsf{tp}}(\Gamma \vdash A\ \textit{type}) = \mathsf{norm}_{\mathsf{tp}}(\Gamma \vdash B\ \textit{type})$ .

Proof. Completeness holds by definition, as the normalization function is defined on the denizens of the syntactic CwR rather than on raw terms. Soundness follows from the fact that $\mathsf{norm}_{\mathsf{tp}}(\Gamma \vdash A\ \textit{type})$ restricts in object-space to $A$ itself.

In the same way, we can construct a normalization function for terms and prove that it is sound and complete (though we do not do so here). Of course, deciding equality for terms is practically important only insofar as it arises in the context of deciding equality for the types that mention them.
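Schematically, and only as a sketch (we suppress the dependency of the type on the context and the routine repackaging between $\mathsf{tm}_\bullet$, ${\mathsf{nf}}_\bullet$, and their glued counterparts), the term-level analogue assembles the same ingredients:

\begin{equation*} \mathsf{norm}_{\mathsf{tm}}(\Gamma \vdash a : A) \;=\; (\!|\Gamma |\!)\xrightarrow {\;\nearrow _\Gamma \;}[\![\Gamma ]\!]\xrightarrow {\;[\![a]\!]\;}\mathsf{tm}^{\ast}\,[\![A]\!]\xrightarrow {\;[\![A]\!] \hspace {.1ex} . \hspace {.1ex} \mathsf{reify}\;}{\mathsf{nf}}\,A \end{equation*}

Its meta-space component is a normal form for $a$, and soundness and completeness follow by the same argument as in Theorem 46.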

Definition 47. An object $X \in \mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}$ has levelwise decidable equality when for each $\Gamma \in \boldsymbol{\mathbf{A}}_{\diamond }$, the set $({i}^{\ast}X)(\Gamma )$ has decidable equality, where ${i}:{\boldsymbol{\mathsf{A}}}\hookrightarrow {\boldsymbol{\mathsf{G}}}$ is as in Definition 19.

Theorem 48. Viewed as objects of $\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}$ , the following have levelwise decidable equality:

\begin{equation*} \mathsf{nftp} \qquad (A : \mathsf{tp}) \times {\mathsf{nf}}\,A \qquad (A : \mathsf{tp}) \times (\phi :\Omega _{\textit {dec}}) \times \mathsf{ne}_\phi {A} \end{equation*}

From this, we obtain our main results concerning $\mathbf{TT}_{\mathbb{P}}$ :

Corollary 49. Definitional equality in $\mathbf{TT}_{\mathbb{P}}$ is decidable.

6.8.5 Stronger normalization results

A few stronger results can be proved using routine extensions of the methods on display here.

  1. (1) The external normalization function is surjective, which implies that normalization is idempotent. The main practical impact of normalization being surjective is to prove that the normalization function is effectively computable, as in Sterling (2021) and Sterling and Angiuli (2021); this step is redundant in the setting of the present paper, which has been carried out constructively in order to ensure an implicit form of effective computability.

(2) Type constructors are injective in the sense that ${\Gamma }\vdash A \to B=A^{\prime }\to B^{\prime }$ implies ${\Gamma } \vdash A=A^{\prime }$ and ${\Gamma } \vdash B = B^{\prime }$, etc. Injectivity of type constructors is the main ingredient in establishing determinacy of the standard bidirectional elaboration algorithm. Injectivity is not strictly needed for a type checker operating on fully annotated terms, but practical systems elaborate less-annotated terms to fully annotated ones, and this process relies on injectivity.

(3) Normalization can be internalized into $\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}$ as an inverse ${\mathsf{tp}}\to {\mathsf{nftp}}$ to the projection $\mathsf{nftp}\to \mathsf{tp}$, as in Sterling (2021, 2025) and Sterling and Angiuli (2021). This implies, for example, that the normalization function is invariant under variable renaming (or, more generally, atomic substitutions).

We do not detail these results here, but instead remark that detailed proofs for similar theories can be found in the cited literature.

7. Related Work

Proof assistants already have support for various means of controlling the unfolding of definitions; we classify these as either library- or language-level.

7.1 Library-level features

Various library-level idioms for abstract definitions are used in practice, such as SSReflect’s lock idiom. While such approaches are flexible and compatible with existing proof assistants, they are often cumbersome: lock, for instance, relies on various tactics with subtle behavior, which makes locking idioms difficult to use in pure Gallina code.

7.2 Language-level features

Many proof assistants include a feature like Agda’s abstract blocks, which mark a definition as completely opaque to the remainder of the development. In Remark 1, we explained how to recover Agda’s abstract definitions using controlled unfolding. Moreover, because controlled unfolding does not require a user to decide up front whether a definition can ever be unfolded, it gives a more realistic and flexible discipline for abstraction in a proof assistant. In practice, however, abstract is often used for performance reasons rather than merely for controlling abstraction: unfolding large or complex definitions can significantly slow down type checking and unification. While we have not discussed performance considerations for controlled unfolding, the same optimizations apply to our mechanism for definitions that are never unfolded. In sum, controlled unfolding strictly generalizes abstract blocks.
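As a concrete illustration of the all-or-nothing nature of abstract, consider the following small Agda sketch (using the standard library); it is an illustrative example rather than a fragment of any existing development.

module AbstractSketch where

open import Data.Nat using (ℕ; _+_)
open import Relation.Binary.PropositionalEquality using (_≡_; refl)

abstract
  double : ℕ → ℕ
  double n = n + n

  -- Inside the abstract block, double unfolds, so this proof is accepted.
  double-2 : double 2 ≡ 4
  double-2 = refl

-- Outside the block, double never unfolds, so the same proof is rejected;
-- the definition is permanently opaque to the rest of the development.
-- double-2′ : double 2 ≡ 4
-- double-2′ = refl

With controlled unfolding, by contrast, a client may still opt in to unfolding double at any later point.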

Recently, Kovács (2023, 2024) has proposed a glued evaluation technique both to improve the pretty-printing of goals and to handle unfolding more efficiently during conversion testing. Roughly, the proof assistant’s kernel may choose to unfold any definition, but it avoids doing so whenever possible for efficiency and strives never to show unfolded goals to the user. Although both glued evaluation and controlled unfolding concern the unfolding of definitions, they are largely orthogonal and complementary. In particular, glued evaluation does not require user intervention, unlike controlled unfolding, but it does not actually preclude any unfolding from taking place. Thus, glued evaluation does not impact the well-formedness of a program and can be used as a “drop-in” technique for improving performance and usability. For the same reasons, however, glued evaluation cannot be used to enforce modularity and independence in the way that controlled unfolding can. Ideally, a proof assistant would support both: the more advanced evaluation algorithm would improve baseline performance, while controlled unfolding would let users enforce stronger abstraction boundaries within their programs and assist the kernel by manually designating certain definitions as opaque.

Program verifiers such as VeriFast and Chalice include similar unfolding mechanisms to cope specifically with recursive definitions (Jacobs et al. 2015; Summers and Drossopoulou 2013). Like our mechanism, these features allow users fine-grained control over how definitions are unfolded. However, these verifiers work only within simply typed theories and thus avoid the substantial complexity of dependency. Moreover, these mechanisms manage a different problem than controlled unfolding; they allow a user to unfold recursive definitions step-by-step, while controlled unfolding is used to control when each definition can be fully inlined.

7.3 Translucent ascription in module systems

Thus far we have focused on proof assistants, but similar considerations arise for ML-style module systems (Dreyer et al. 2003; Harper and Stone 2000; Milner et al. 1997; Sterling and Harper 2021). The default opacity of definitions in module systems is the same as in controlled unfolding and opposite to that of proof assistants: types are abstract unless marked otherwise. The treatment of translucent type declarations in module systems (Harper and Stone 2000) relies on singleton kinds (Aspinall 1995; Stone and Harper 2006), which are the special case of extension types whose boundary proposition is $\top$. Generalizing from compiletime kinds to mixed compiletime–runtime module signatures, Sterling and Harper (2021) have pointed out that transparent ascriptions are best handled by an extension type whose boundary proposition represents the compiletime phase itself. Thus, the translucency of compiletime module components can be seen as a particular controlled unfolding policy in the sense of this paper.
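To make the comparison concrete, the following display is a rough sketch in a generic notation for extension types, which may differ from the notation used for $\mathbf{TT}_{\mathbb{P}}$ earlier in the paper; here $\mathsf{S}(A)$ denotes the singleton kind at $A$ and $\phi _{\mathsf{st}}$ is a hypothetical proposition representing the compiletime phase:

\begin{equation*} \mathsf{S}(A) \;\simeq\; \{\, X : \mathsf{tp} \mid \top \hookrightarrow X \equiv A \,\} \qquad \qquad \{\, X : \mathsf{tp} \mid \phi _{\mathsf{st}} \hookrightarrow X \equiv A \,\} \end{equation*}

In the first case the boundary always holds, so the kind classifies exactly the types equal to $A$; in the second, $X$ is forced to equal $A$ only under the compiletime phase, which is precisely translucent ascription.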

7.4 Controlled unfolding in Agda

Inspired by our implementation of controlled unfolding in cooltt, Amélia Liao and Jesper Cockx have implemented a version of this mechanism, called opaque, within Agda 2.6.4 (Liao and Cockx 2022). Rather than using extension types, however, their Agda implementation simulates the necessary behaviors by instrumenting conversion checking, a workaround made possible by the very restricted ways in which our elaboration procedure relies on extension types. This demonstrates that controlled unfolding can be adapted to proof assistants like Coq whose core calculi do not presently support extension types.
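The surface syntax is illustrated by the following small Agda sketch (assuming Agda 2.6.4 or later and the standard library); it is an illustrative example rather than code drawn from the cited libraries.

module OpaqueSketch where

open import Data.Nat using (ℕ; _+_)
open import Relation.Binary.PropositionalEquality using (_≡_; refl)

opaque
  double : ℕ → ℕ
  double n = n + n

-- Here double does not reduce: without an unfolding clause, the
-- commented-out proof below is rejected.
-- double-2′ : double 2 ≡ 4
-- double-2′ = refl

opaque
  unfolding double

  -- Inside a block that unfolds double, the proof is accepted.
  double-2 : double 2 ≡ 4
  double-2 = refl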

At the time of writing, Agda’s opaque declarations are new enough that only two major Agda libraries, the 1Lab (The 1Lab Development Team 2022) and the Cubical Agda library (The Agda Community 2023), use them extensively; the Agda standard library may adopt opaque declarations in a future major revision (The Agda Development Team 2023). As of publication, 65 modules in the 1Lab use opaque and 19 use unfolding, in addition to over 100 using abstract blocks; in the Cubical Agda library, 27 modules use opaque, 6 use unfolding, and 35 use abstract.

8. Conclusions and Future Work

We have proposed controlled unfolding, a new mechanism for interpolating between transparent and opaque definitions in proof assistants. We have demonstrated its practical applicability by extending cooltt with controlled unfolding; we have also proved its soundness through an elaboration algorithm targeting a core calculus whose normalization we establish using a constructive STC argument.

In the future, we hope to see controlled unfolding integrated into more proof assistants and to further explore its applications to the large-scale organization of mechanized mathematics. As mentioned above, part of our mechanism has been implemented in Agda, but features such as local unfolds are still absent. Furthermore, in the context of our implementation, we have already begun to experiment with potential extensions, including one that allows a subterm to be declared locally abstract and then unfolded later as needed, a more flexible alternative to Coq’s ${\texttt {abstract}}\,t$ tactical. As mentioned in Remark 1, we are also interested in facilities for limiting the scope in which it is possible to unfold a definition.

Funding statement

This work was supported in part by a Villum Investigator grant (no. 25804), Center for Basic Research in Program Verification (CPV), from the VILLUM Foundation. Jonathan Sterling is funded by the European Union under the Marie Skłodowska-Curie Actions Postdoctoral Fellowship project TypeSynth: synthetic methods in program verification and by the United States Air Force Office of Scientific Research under grant number FA9550-23-1-0728 (New Spaces for Denotational Semantics; Dr. Tristan Nguyen, Program Manager). Carlo Angiuli is supported by the US Air Force Office of Scientific Research under grant numbers FA9550-21-1-0009 and FA9550-24-1-0350. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the European Union, the European Commission, or the AFOSR. Neither the European Union nor the granting authority can be held responsible for them.

Competing interests

The authors declare no competing interests.

A. The (2,1)-category of models

Uemura (2021) has observed that a model $(\mathbf{M}_\diamond ,\mathbf{M})$ in the sense of Definition 11 can be packaged into a single functor ${\widetilde {\mathbf{M}}}:{\mathbb{T}^\rhd }\to {\mathsf{Cat}}$, in which $\mathbb{T}^\rhd$ freely extends $\mathbb{T}$ by a new terminal object $\diamond$ and $\mathsf{Cat}$ is the 2-category of categories. From this perspective, a sort $X\in \mathbb{T}$ is taken to the total category $\widetilde {\mathbf{M}}(X) = \int _{\mathbf{M}_\diamond }\mathbf{M}(X)$ of a discrete fibration over $\widetilde {\mathbf{M}}(\diamond )=\mathbf{M}_\diamond$. Here, we are using the equivalence $\mathbf{DFib}_{\mathscr{C}} \simeq \mathbf{Pr}\,{\mathscr{C}}$ between discrete fibrations over a category $\mathscr{C}$ and presheaves on $\mathscr{C}$. The preservation of representable maps is then rendered as the requirement that for each representable ${u}: X\to Y$, the functor ${\widetilde {\mathbf{M}}(u)}:{\widetilde {\mathbf{M}}(X)}\to {\widetilde {\mathbf{M}}(Y)}$ shall have a right adjoint $\widetilde {\mathbf{M}}(u)\dashv q_{\widetilde {\mathbf{M}}(u)}$ taking an element of $\widetilde {\mathbf{M}}(Y)$ to the generic element of $\widetilde {\mathbf{M}}(X)$ in the extended context.

Example 50. For the representable map ${\pi }:{\mathsf{tm}}\to {\mathsf{tp}}$, the functorial action ${\widetilde {\mathbf{M}}(\pi )}:{\widetilde {\mathbf{M}}(\mathsf{tm})}\to {\widetilde {\mathbf{M}}(\mathsf{tp})}$ takes a term $\Gamma \vdash a:A$ to the type $\Gamma \vdash A$; the right adjoint ${q_{\widetilde {\mathbf{M}}(\pi )}}:{\widetilde {\mathbf{M}}(\mathsf{tp})}\to {\widetilde {\mathbf{M}}(\mathsf{tm})}$ sends a type $\Gamma \vdash A$ to the variable $\Gamma ,a:A\vdash a:A$.
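To see why this yields an adjunction, one can sketch the defining natural bijection directly. The following display is only a sketch, stated in informal substitution notation that may differ from the conventions fixed earlier in the paper:

\begin{equation*} \mathrm{Hom}_{\widetilde {\mathbf{M}}(\mathsf{tp})}\big (\widetilde {\mathbf{M}}(\pi )(\Gamma \vdash a:A),\ \Delta \vdash B\big ) \;\cong\; \mathrm{Hom}_{\widetilde {\mathbf{M}}(\mathsf{tm})}\big (\Gamma \vdash a:A,\ (\Delta ,b:B\vdash b:B)\big ) \end{equation*}

A morphism on the left is a substitution ${\gamma }:{\Gamma }\to {\Delta }$ with $A = B[\gamma ]$; it corresponds on the right to the substitution $(\gamma ,a):\Gamma \to \Delta ,b:B$, and conversely a morphism on the right is sent back to its composite with the projection $\Delta ,b:B\to \Delta$.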

Definition 51. Given two models $\mathbf{M},\mathbf{N}$ of $\mathbb{T}$ , a morphism of models from $\mathbf{M}$ to $\mathbf{N}$ is given by a natural transformation ${F}:{\widetilde {\mathbf{M}}}\to {\widetilde {\mathbf{N}}} \in [\mathbb{T}^\rhd ,\mathsf{Cat}]$ such that for each representable map ${u}:X\to Y$ in $\mathbb{T}$ , the corresponding naturality datum $F_u\colon \widetilde {\mathbf{N}}(u)\circ F_X = F_Y\circ \widetilde {\mathbf{M}}(u)$ depicted below

satisfies the Beck–Chevalley condition, in the sense that the following 2-cell, obtained by conjugating with units and counits, denotes an invertible natural transformation ${F_X\circ q_{\widetilde {\mathbf{M}}(u)}}\to {q_{\widetilde {\mathbf{N}}(u)}\circ F_Y}$:

Definition 52. Let $\mathbf{M},\mathbf{N}$ be two models of $\mathbb{T}$ , and let ${F,G}:{\mathbf{M}}\to {\mathbf{N}}$ be two morphisms of models. An isomorphism $h$ from $F$ to $G$ is defined to be an invertible modification between the underlying natural transformations $F,G$ . This amounts to choosing for each $X\in \mathbb{T}^\rhd$ a natural isomorphism ${h_X}:{F_X}\to {G_X}$ in $[\widetilde {\mathbf{M}}(X),\widetilde {\mathbf{N}}(X)]$ , subject to the coherence condition that for each ${u}:{X}\to {Y}$ in $\mathbb{T}^\rhd$ the following two wiring diagrams are equal:

Remark 53. Because each of the induced maps ${\pi _{\mathbf{M}(X)}}:{\widetilde {\mathbf{M}}(X)}\to {\mathbf{M}_\diamond }$ and ${\pi _{\mathbf{N}(X)}}:{\widetilde {\mathbf{N}}(X)}\to {\mathbf{N}_\diamond }$ into the cone point are discrete fibrations, it suffices to check the modification condition of $h$ on only the cone maps ${X}\to {\diamond }$ : as any discrete fibration is a faithful functor, it moreover follows that ${h_\diamond }:{F_\diamond }\to {G_\diamond }$ uniquely determines all the other $h_X$ if they exist. Unfolding further, given $x\in \widetilde {\mathbf{M}}(X)$ we are only requiring that $(h_\diamond )_{\pi _{\mathbf{M}(X)}}^{\ast}(G_X x) = F_X x$ in the sense depicted below in the discrete fibration $\mathbf{N}(X)$ over $\mathbf{N}_\diamond$ :

Thus, we have a (2,1)-category of models ${\mathbf{Mod}}\,\mathbb{T}$ for any category with representable maps $\mathbb{T}$ .

B. The (2,1)-category of atomic substitution models

Definition 54. Given atomic substitution models ${\alpha }:{\boldsymbol{\mathbf{A}}}\to {\mathbf{I}}$ and ${\alpha ^{\prime }}:{\boldsymbol{\mathbf{A}}^{\prime }}\to {\mathbf{I}}$ , a morphism from $(\boldsymbol{\mathbf{A}},\alpha )$ to $(\boldsymbol{\mathbf{A}}^{\prime },\alpha ^{\prime })$ is given by a morphism ${F}:{\boldsymbol{\mathbf{A}}}\to {\boldsymbol{\mathbf{A}}^{\prime }}\in {\mathbf{Mod}}\,\mathbb{T}_0$ together with an isomorphism ${\phi _F}:{\alpha }\to {\alpha ^{\prime }\circ F}$ in $[\boldsymbol{\mathbf{A}},\mathbf{I}]$ as depicted below:

Definition 55. Given two morphisms ${F,G}:{(\boldsymbol{\mathbf{A}},\alpha )}\to {(\boldsymbol{\mathbf{A}}^{\prime },\alpha ^{\prime })}$ , an isomorphism from $F$ to $G$ is given by an isomorphism ${h}:{F}\to {G}\in [\boldsymbol{\mathbf{A}},\boldsymbol{\mathbf{A}}^{\prime }]$ such that the following wiring diagrams denote equal isomorphisms ${\alpha }\to {\alpha ^{\prime }\circ G}$ :

Footnotes

1 Indeed, the Agda standard library (The Agda Development Team 2022) currently uses abstract only once.

3 By constructive, we mean something that can be carried out in an elementary topos with a natural numbers object.

4 In his doctoral dissertation, Sterling referred to “object-space” and “meta-space” as syntactic and semantic, respectively (Sterling 2021). However, there are compelling reasons to consider object-space more semantic than meta-space (in which various admissibilities hold that will not be preserved by homomorphisms of models), so we have changed terminology to avoid confusion.

5 It would be possible to choose a more restrictive class of representable maps for $\mathbf{Sh}\,{\boldsymbol{\mathsf{G}}}$ , but there is no reason to do so.

References

Anel, M. and Joyal, A. (2021). Topo-logie. In: Anel, M. and Catren, G. (eds.) New Spaces in Mathematics: Formal and Conceptual Reflections, vol. 1, Cambridge University Press, 155–257.
Angiuli, C., Brunerie, G., Coquand, T., Hou (Favonia), K.-B., Harper, R. and Licata, D. R. (2021). Syntax and models of Cartesian cubical type theory. Mathematical Structures in Computer Science 31 (4) 424–468.
Angiuli, C., Hou (Favonia), K.-B. and Harper, R. (2018). Cartesian cubical computational type theory: constructive reasoning with paths and equalities. In: Ghica, D. and Jung, A. (eds.) 27th EACSL Annual Conference on Computer Science Logic (CSL 2018), Dagstuhl, Germany, Leibniz International Proceedings in Informatics (LIPIcs), vol. 119, Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, 6:1–6:17.
Aspinall, D. (1995). Subtyping with singleton types. In: Pacholski, L. and Tiuryn, J. (eds.) Computer Science Logic, Berlin, Heidelberg, Springer, 1–15.
Awodey, S. (2018). Natural models of homotopy type theory. Mathematical Structures in Computer Science 28 (2) 241–286.
Bocquet, R., Kaposi, A. and Sattler, C. (2021). Relative induction principles for type theories. Unpublished manuscript. https://arxiv.org/abs/2102.11649.
Cartmell, J. (1978). Generalised Algebraic Theories and Contextual Categories. PhD thesis, University of Oxford.
Clairambault, P. and Dybjer, P. (2014). The biequivalence of locally cartesian closed categories and Martin-Löf type theories. Mathematical Structures in Computer Science 24 (6).
Cohen, C., Coquand, T., Huber, S. and Mörtberg, A. (2017). Cubical type theory: a constructive interpretation of the univalence axiom. IfCoLog Journal of Logics and Their Applications 4 (10) 3127–3169.
Coquand, T. (1996). An algorithm for type-checking dependent types. Science of Computer Programming 26 (1) 167–177.
Dagand, P.-E. (2013). A Cosmology of Datatypes: Reusability and Dependent Types. PhD thesis, University of Strathclyde, Glasgow, Scotland.
Dreyer, D., Crary, K. and Harper, R. (2003). A type system for higher-order modules. In: Proceedings of the 30th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, POPL '03, New Orleans, Louisiana, USA, Association for Computing Machinery, 236–249.
Dybjer, P. (1996). Internal type theory. In: Berardi, S. and Coppo, M. (eds.) Types for Proofs and Programs, Berlin, Heidelberg, Springer Berlin Heidelberg, 120–134.
Fiore, M. (2002). Semantic analysis of normalisation by evaluation for typed lambda calculus. In: Proceedings of the 4th ACM SIGPLAN International Conference on Principles and Practice of Declarative Programming, PPDP '02, ACM, 26–37.
Fiore, M. P. (2022). Semantic analysis of normalisation by evaluation for typed lambda calculus. Unpublished extended version of the PPDP '02 paper with the same title, available at https://arxiv.org/abs/2207.08777.
Freyd, P. (1972). Aspects of topoi. Bulletin of the Australian Mathematical Society 7 (1) 1–76.
Gilbert, G., Cockx, J., Sozeau, M. and Tabareau, N. (2019). Definitional proof-irrelevance without K. Proceedings of the ACM on Programming Languages 3 (POPL).
Gonthier, G., Mahboubi, A. and Tassi, E. (2016). A small scale reflection extension for the Coq system. Research Report RR-6455, Inria Saclay Ile de France.
Gratzer, D., Shulman, M. and Sterling, J. (2022). Strict universes for Grothendieck topoi. Unpublished manuscript. https://arxiv.org/abs/2202.12012.
Gratzer, D. and Sterling, J. (2020). Syntactic categories for dependent type theory: sketching and adequacy. Unpublished manuscript. https://arxiv.org/abs/2012.10783.
Gratzer, D., Sterling, J. and Birkedal, L. (2019). Implementing a modal dependent type theory. Proceedings of the ACM on Programming Languages 3.
Harper, R. and Stone, C. (2000). A type-theoretic interpretation of Standard ML. In: Plotkin, G., Stirling, C. and Tofte, M. (eds.) Proof, Language, and Interaction, Cambridge, MA, USA, MIT Press, 341–387.
Hou (Favonia), K.-B. (2022). kado. http://www.github.com/RedPRL/kado.
Huber, S. (2019). Canonicity for cubical type theory. Journal of Automated Reasoning 63 (2) 173–210.
Jacobs, B., Vogels, F. and Piessens, F. (2015). Featherweight VeriFast. Logical Methods in Computer Science 11 (3).
Johnstone, P. T. (1977). Topos Theory, London Mathematical Society Monographs, Academic Press.
Jung, A. and Tiuryn, J. (1993). A new characterization of lambda definability. In: Bezem, M. and Groote, J. F. (eds.) Typed Lambda Calculi and Applications, Berlin, Heidelberg, Springer Berlin Heidelberg, 245–257.
Kaposi, A., Kovács, A. and Altenkirch, T. (2019). Constructing quotient inductive-inductive types. Proceedings of the ACM on Programming Languages 3 (POPL) 2:1–2:24.
Kovács, A. (2023). smalltt.
Kovács, A. (2024). Efficient elaboration with controlled definition unfolding. In: Third Workshop on the Implementation of Type Systems.
Lack, S. (2009). A 2-categories companion, Springer New York, 105–191.
Liao, A. and Cockx, J. (2022). Unfolding control for abstract blocks. https://github.com/agda/agda/pull/6354.
Martin-Löf, P. (1975). An intuitionistic theory of types: predicative part. In: Rose, H. and Shepherdson, J. (eds.) Logic Colloquium '73, Proceedings of the Logic Colloquium, Studies in Logic and the Foundations of Mathematics, vol. 80, North-Holland, 73–118.
Milner, R., Tofte, M., Harper, R. and MacQueen, D. (1997). The Definition of Standard ML (Revised), MIT Press.
Newstead, C. (2018). Algebraic Models of Dependent Type Theory. PhD thesis, Carnegie Mellon University.
Pierce, B. C. and Turner, D. N. (2000). Local type inference. ACM Transactions on Programming Languages and Systems 22 (1) 1–44.
The RedPRL Development Team (2020). cooltt. http://www.github.com/RedPRL/cooltt.
Riehl, E. and Shulman, M. (2017). A type theory for synthetic $\infty$-categories. Higher Structures 1 (1) 147–224.
Sterling, J. (2021). First Steps in Synthetic Tait Computability: The Objective Metatheory of Cubical Type Theory. PhD thesis, Carnegie Mellon University. Version 1.1, revised May 2022.
Sterling, J. (2025). Toward a Geometry for Syntax, Switzerland, Springer Nature, 391–432.
Sterling, J. and Angiuli, C. (2021). Normalization for cubical type theory. In: Proceedings of the 36th Annual ACM/IEEE Symposium on Logic in Computer Science, LICS '21, New York, NY, USA, ACM.
Sterling, J. and Harper, R. (2021). Logical relations as types: proof-relevant parametricity for program modules. Journal of the ACM 68 (6).
Stone, C. A. and Harper, R. (2006). Extensional equivalence and singleton types. ACM Transactions on Computational Logic 7 (4) 676–722.
Summers, A. J. and Drossopoulou, S. (2013). A formal semantics for isorecursive and equirecursive state abstractions. In: Castagna, G. (ed.) ECOOP 2013 – Object-Oriented Programming, Berlin, Heidelberg, Springer Berlin Heidelberg, 129–153.
The Agda Community (2023). The Cubical Agda library. https://github.com/agda/cubical/.
The Agda Development Team (2022). The Agda standard library. https://github.com/agda/agda-stdlib.
The Agda Development Team (2023). Consider where we can use opaque mechanism to provide abstraction. https://github.com/agda/agda-stdlib/issues/2136.
The Agda Team (2021). Agda User Manual, Release 2.6.2.
The Coq Development Team (2022). The Coq proof assistant.
The 1Lab Development Team (2022). The 1Lab. https://1lab.dev.
Uemura, T. (2021). Abstract and Concrete Type Theories. PhD thesis, Institute for Logic, Language and Computation, University of Amsterdam.
Uemura, T. (2023). A general framework for the semantics of type theory. Mathematical Structures in Computer Science 33 (3) 134–179.
The Univalent Foundations Program (2013). Homotopy Type Theory: Univalent Foundations of Mathematics, Institute for Advanced Study.
Vickers, S. (2007). Locales and toposes as spaces. In: Aiello, M., Pratt-Hartmann, I. and Van Benthem, J. (eds.) Handbook of Spatial Logics, Dordrecht, Springer Netherlands, 429–496.
Figure 1. The non-standard aspects of the LF signature for $\textbf{TT}_{\mathbb{P}}$.

Figure 2. Selected rules from the definition of $\mathsf{nf}$, $\mathsf{ne}$, and $\mathsf{nftp}$.

Figure 3. The normalization structure on the universe.