
Notes on realizability

[DRAFT: October 7, 2024]

Andrej Bauer

October 7, 2024
Preface

It is not an exaggeration to say that the invention of modern computers was a direct consequence of the great
advances of 20th-century logic: Hilbert’s putting the decision problem on his list, Gödel’s amazing exercise
in programming with numbers, Church’s invention of 𝜆-calculus, Gödel’s of general recursive functions,
and Turing’s of his machines. Unfortunately, by the time computers took over the world and demanded
a fitting foundation of mathematics, generations of mathematicians had been educated with little regard
or sensitivity to questions of computability and constructivity. Some even cherished living in a paradise
removed from earthly matters and encouraged others to take pride in the uselessness of their activity. Today
such mathematics persists as the generally accepted canon.
How is the working mathematician to understand and study computable mathematics? Given their unshakable
trust in classical mathematics, it is only natural for them to “bolt on computability as an afterthought”, as
was put eloquently by a friend of mine. Indeed, this is precisely how many experts practice computable
mathematics, and so shall we.
A comprehensive account of realizability theory would be a monumental work, which we may hope to see
one day in the form of a sketch of an elephant. These notes are at best a modest introduction that aims to
strike a balance between approachable concreteness and inspiring generality. Because my purpose was to
educate, I did not hesitate to include informal explanations and recollection of material that I could have
relegated to background reading. Suggestions for further reading will hopefully help direct those who seek
deeper knowledge of realizability.
Realizability theory weaves together computability theory, category theory, logic, topology and programming
languages. I therefore recommend that enthusiastic students adopt the Japanese martial arts principle
of 修行 (shugyō, devoted training).
An early version of these lecture notes was written to support a graduate course on computable topology,
which I taught in 2009 at the University of Ljubljana. I copiously reused part of my dissertation. In 2022
I updated the notes and added a chapter on type theory, on the occasion of my lecturing at the Midlands
Graduate School, hosted by the University of Nottingham.

Andrej Bauer
Ljubljana, March 2022
Contents

Preface ii

Contents iii

1 Introduction 1
1.1 Background material . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

2 Models of Computation 4
2.1 Turing machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.1.1 Type 1 machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.1.2 Type 2 machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.1.3 Turing machines with oracles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.1.4 Hamkins’ infinite-time Turing machines . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.2 Scott’s graph model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.3 Church’s 𝜆-calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.4 Reflexive domains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.5 Partial combinatory algebras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.5.1 Examples of partial combinatory algebras . . . . . . . . . . . . . . . . . . . . . . . . 29
2.6 Typed partial combinatory algebras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.7 Examples of Typed Partial Combinatory Algebras . . . . . . . . . . . . . . . . . . . . . . . . 34
2.7.1 Partial combinatory algebras . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.7.2 Simply typed 𝜆-calculus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.7.3 Gödel’s 𝑇 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.7.4 Plotkin’s PCF . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.7.5 PCF∞ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.8 Simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
2.8.1 Properties of simulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.8.2 Decidable simulations and 𝕂1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.8.3 An adjoint retraction from (ℙ , ℙ# ) to (𝔹 , 𝔹# ) . . . . . . . . . . . . . . . . . . . . . . . 42

3 Realizability categories 44
3.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
3.2 Assemblies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.2.1 Modest sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.2.2 The unit assembly 𝟙 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.2.3 Natural numbers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.2.4 The constant assemblies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3.2.5 Two-element assemblies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.3 Equivalent formulations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.3.1 Existence predicates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
3.3.2 Representations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.3.3 Partial equivalence relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
3.3.4 Equivalence relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.4 Applicative functors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.5 Schools of Computable Mathematics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.5.1 Recursive Mathematics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.5.2 Equilogical spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.5.3 Computable countably based spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.5.4 Computable equilogical spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.5.5 Type Two Effectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
3.6 The categorical structure of assemblies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
3.6.1 Cartesian structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
3.6.2 Cocartesian structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.6.3 Monos and epis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
3.6.4 Regular structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
3.6.5 Cartesian closed structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
3.6.6 The interpretation of 𝜆-calculus in assemblies . . . . . . . . . . . . . . . . . . . . . . 76
3.6.7 Projective assemblies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79

4 Realizability and logic 82


4.1 The set-theoretic interpretation of logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
4.2 Realizability predicates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
4.3 The Heyting prealgebra of realizability predicates . . . . . . . . . . . . . . . . . . . . . . . . 84
4.4 Quantifiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
4.5 Substitution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
4.6 Equality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
4.7 Summary of realizability logic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
4.8 Classical and decidable predicates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
4.8.1 ¬¬-stable predicates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
4.8.2 Decidable predicates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
4.8.3 Predicates classified by two-element assemblies . . . . . . . . . . . . . . . . . . . . . 91

5 Realizability and type theory 93


5.1 Families of sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
5.1.1 Products and sums of families of sets . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
5.1.2 Type theory as the internal language . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
5.2 Families of assemblies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
5.2.1 Products and sums of families of assemblies . . . . . . . . . . . . . . . . . . . . . . . 98
5.2.2 Contexts of assemblies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
5.3 Propositions as assemblies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
5.3.1 Propositional truncation of an assembly . . . . . . . . . . . . . . . . . . . . . . . . . 100
5.3.2 Realizability predicates and propositions . . . . . . . . . . . . . . . . . . . . . . . . . 101
5.4 Identity types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
5.4.1 UIP and equality reflection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
5.5 Inductive and coinductive types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
5.6 Universes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
5.6.1 Universes of propositions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
5.6.2 The universe of modest sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
5.6.3 The universe of small assemblies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103

6 The internal language at work 104


6.1 Epis and monos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
6.2 The axiom of choice . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
6.3 Heyting arithmetic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
6.4 Countable objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
6.5 Markov’s principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
6.6 Church’s thesis and the computability modality . . . . . . . . . . . . . . . . . . . . . . . . . 104
6.7 Aczel’s presentation axiom . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
6.8 Continuity principles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
6.8.1 Brouwer’s continuity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
6.8.2 Kreisel-Lacombe-Shoenfield-Ceitin continuity . . . . . . . . . . . . . . . . . . . . . . 104
6.9 Brouwer’s compactness principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104

Bibliography 105
1 Introduction
1.1 Background material

In this section we give an overview of a selection of concepts which we need later
on. We also fix notation and a number of definitions. At the moment the
sections are not listed in any particular order.

Free and bound variables

Occurrences of variables in an expression may be free or bound. Variables
are bound when they are used to indicate the range over which an
operator acts. For example, in the expressions

∀𝑥 ∈ ℝ. 𝑥² + 𝑦 ≥ 0 ,    Σₖ₌₀ⁿ 1/𝑘² ,    ∫ₐᵇ 𝑓(𝑡) d𝑡,

the variables 𝑥 , 𝑘 , and 𝑡 are bound by the operators ∀, Σ, and ∫,
respectively. The remaining variables are free. It is really the occurrence
of a variable that is bound or free, not the variable itself. In

𝑃(𝑥) ∨ ∃𝑥. ¬𝑄(𝑥)

the left-most occurrence of 𝑥 is free whereas the other two are bound
by ∃.

Functions

The set of all functions from 𝐴 to 𝐵 is denoted by 𝐵ᴬ as well as 𝐴 → 𝐵.
The arrow associates to the right, so 𝐴 → 𝐵 → 𝐶 is 𝐴 → (𝐵 → 𝐶). We write
𝑓 : 𝐴 → 𝐵 instead of 𝑓 ∈ 𝐴 → 𝐵. If 𝑓 : 𝐴 → 𝐵 and 𝑥 ∈ 𝐴, the application
𝑓 (𝑥) is also written as 𝑓 𝑥 . We often work with curried functions which
take several arguments in succession, i.e., if 𝑓 : 𝐴 → 𝐵 → 𝐶 then 𝑓
takes 𝑥 ∈ 𝐴, and 𝑦 ∈ 𝐵 to produce an element 𝑓 (𝑥)(𝑦) in 𝐶 , also written
𝑓 𝑥 𝑦.
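Since later chapters use Haskell for code snippets, here is a small sketch of currying in that notation; the function names are ours, chosen for illustration only.

```haskell
-- A curried function of type A -> B -> C takes its arguments in succession.
add :: Integer -> Integer -> Integer
add x y = x + y

-- Partial application: add 3 is itself a function Integer -> Integer.
addThree :: Integer -> Integer
addThree = add 3

-- The uncurried variant takes a pair instead.
addPair :: (Integer, Integer) -> Integer
addPair (x, y) = add x y
```

Here add 3 4, addThree 4, and addPair (3, 4) all evaluate to 7.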

Partial functions

A partial function 𝑓 : 𝐴 ⇀ 𝐵 is a function that is defined on a subset
dom( 𝑓 ) ⊆ 𝐴, called the domain of 𝑓 . (In the literature on Type Two
Effectivity the common notation is 𝑓 : ⊆ 𝐴 → 𝐵.) Sometimes there is
confusion between the domain dom( 𝑓 ) and the set 𝐴, which is also called
the domain. We therefore call dom( 𝑓 ) the support of 𝑓 . If 𝑓 : 𝐴 ⇀ 𝐵 is a partial function and
𝑥 ∈ 𝐴, we write 𝑓 𝑥↓ to indicate that 𝑓 𝑥 is defined. For an expression 𝑒 ,
we also write 𝑒↓ to indicate that 𝑒 and all of its subexpressions are defined.
The symbol ↓ is sometimes inserted into larger expressions, for example,
𝑓 𝑥↓ = 𝑦 means that 𝑓 𝑥 is defined and is equal to 𝑦 . If 𝑒1 and 𝑒2 are two
expressions whose values are possibly undefined, we write 𝑒1 ≃ 𝑒2 to
indicate that either 𝑒1 and 𝑒2 are both undefined, or they are both defined
and equal. The notation 𝑒1 ⪰ 𝑒2 means that if 𝑒1 is defined then 𝑒2 is
defined and they are equal. Thus we have

𝑒1 ≃ 𝑒2 ⇐⇒ 𝑒1 ⪰ 𝑒2 ∧ 𝑒2 ⪰ 𝑒1 .
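As a sketch only, these conventions can be modelled in Haskell by representing a possibly undefined value as a Maybe; the names kleeneEq and refines are ours, and an explicit Nothing stands in for genuine divergence.

```haskell
-- e1 ≃ e2: both sides undefined, or both defined and equal.
kleeneEq :: Eq a => Maybe a -> Maybe a -> Bool
kleeneEq Nothing  Nothing  = True
kleeneEq (Just x) (Just y) = x == y
kleeneEq _        _        = False

-- e1 ⪰ e2: if e1 is defined then e2 is defined and they are equal.
refines :: Eq a => Maybe a -> Maybe a -> Bool
refines Nothing  _        = True
refines (Just x) (Just y) = x == y
refines (Just _) Nothing  = False
```

With these definitions kleeneEq e1 e2 holds exactly when both refines e1 e2 and refines e2 e1 hold, matching the displayed equivalence.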

A partial map 𝑓 : 𝑋 ⇀ 𝑌 between topological spaces 𝑋 and 𝑌 is said to be
continuous when it is continuous as a total map 𝑓 : dom( 𝑓 ) → 𝑌 , where
the support dom( 𝑓 ) ⊆ 𝑋 is equipped with the subspace topology.

Primitive recursive and recursive functions

The primitive recursive functions are those functions ℕᵏ → ℕ that are
built inductively from the following functions and operations:
1. constant functions 𝑓 (𝑛1 , . . . , 𝑛 𝑘 ) = 𝑐 , where 𝑐 ∈ ℕ ,
2. projections 𝑝 𝑖 (𝑛1 , . . . , 𝑛 𝑘 ) = 𝑛 𝑖 , where 1 ≤ 𝑖 ≤ 𝑘 ,
3. the successor function 𝑠(𝑛) = 𝑛 + 1,
4. composition of functions,
5. primitive recursion: given primitive recursive 𝑓 : ℕ 𝑘 → ℕ and
𝑔 : ℕ 𝑘+2 → ℕ , the function ℎ : ℕ 𝑘+1 → ℕ defined by

ℎ(0 , 𝑛1 , . . . , 𝑛 𝑘 ) = 𝑓 (𝑛1 , . . . , 𝑛 𝑘 ),
ℎ(𝑛 + 1 , 𝑛1 , . . . , 𝑛 𝑘 ) = 𝑔(ℎ(𝑛, 𝑛1 , . . . , 𝑛 𝑘 ), 𝑛, 𝑛1 , . . . , 𝑛 𝑘 )

is primitive recursive.
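The primitive recursion scheme can be written as a Haskell combinator; the following is a sketch (primrec and plus are our names, and the recursion variable is taken first, as in the equations above).

```haskell
-- primrec f g is the function h of the scheme:
--   h 0       args = f args
--   h (n + 1) args = g (h n args) n args
primrec :: ([Integer] -> Integer)
        -> (Integer -> Integer -> [Integer] -> Integer)
        -> (Integer -> [Integer] -> Integer)
primrec f _ 0 args = f args
primrec f g n args = g (primrec f g (n - 1) args) (n - 1) args

-- Example: addition defined by primitive recursion on the first argument.
plus :: Integer -> Integer -> Integer
plus m n = primrec head (\r _ _ -> r + 1) m [n]
```

For instance, plus 2 3 unfolds to two applications of the successor step on top of the base case 3.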
Every primitive recursive function is computable, but not every computable
function is primitive recursive.²

2: The Ackermann function is computable but not primitive recursive.
The (general) partial recursive functions are built from the above op-
erations and minimization: given a partial recursive 𝑓 : ℕ 𝑘+1 ⇀ ℕ the
function 𝑔 : ℕ 𝑘 ⇀ ℕ , defined by

𝑔(𝑛1 , . . . , 𝑛 𝑘 ) = minₙ ( 𝑓 (𝑛, 𝑛1 , . . . , 𝑛 𝑘 ) ≠ 0),

is partial recursive as well. When no 𝑛 satisfies 𝑓 (𝑛, 𝑛1 , . . . , 𝑛 𝑘 ) ≠ 0 the
value 𝑔(𝑛1 , . . . , 𝑛 𝑘 ) is undefined.
The general recursive functions are those partial recursive functions
whose domain and support coincide.
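Minimization is unbounded search. In Haskell it can be sketched as follows (minimize is our name, and for simplicity we search a total function; when no witness exists the search loops forever, which is exactly how partiality arises).

```haskell
-- The least n with f n args /= 0; diverges when there is no such n.
minimize :: (Integer -> [Integer] -> Integer) -> [Integer] -> Integer
minimize f args = head [ n | n <- [0 ..], f n args /= 0 ]
```

For instance, minimize (\n (m:_) -> if n * n >= m then 1 else 0) [10] searches for the least n with n² ≥ 10 and returns 4.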

Order theory

A preorder (𝑃, ≤) is a set with a reflexive and transitive relation. A
partially ordered set (poset) (𝑃, ≤) is a set with a reflexive, transitive,
and anti-symmetric relation.
A function 𝑓 : 𝑃 → 𝑄 between posets is monotone if 𝑥 ≤ 𝑦 in 𝑃 implies
𝑓 (𝑥) ≤ 𝑓 (𝑦) in 𝑄 .
A subset 𝑆 ⊆ 𝑃 is an upper set if 𝑥 ∈ 𝑆 and 𝑥 ≤ 𝑦 implies 𝑦 ∈ 𝑆 . Similarly,
it is a lower set if 𝑦 ∈ 𝑆 and 𝑥 ≤ 𝑦 implies 𝑥 ∈ 𝑆 . A subset 𝑆 ⊆ 𝑃 of a
poset (𝑃, ≤) is directed if it is non-empty and for every 𝑥, 𝑦 ∈ 𝑆 there
exists 𝑧 ∈ 𝑆 such that 𝑥 ≤ 𝑧 and 𝑦 ≤ 𝑧 . An upper bound of a subset
𝑆 ⊆ 𝑃 in a poset is an element 𝑥 ∈ 𝑃 such that 𝑦 ≤ 𝑥 for all 𝑦 ∈ 𝑆. The
supremum sup 𝑆 of a subset 𝑆 ⊆ 𝑃 in a poset is its least upper bound, if
it exists. More precisely, it is an upper bound 𝑥 for 𝑆 such that if 𝑦 is also
an upper bound for 𝑆 then 𝑥 ≤ 𝑦 .
A directed-complete partial order (dcpo) is a poset in which every
directed set has a supremum. Let (𝐷, ≤) be a dcpo. For 𝑥, 𝑦 ∈ 𝐷 we say
that 𝑥 is way below 𝑦 , written 𝑥 ≪ 𝑦 , when for every directed 𝑆 ⊆ 𝐷
such that 𝑦 ≤ sup 𝑆 there exists 𝑧 ∈ 𝑆 for which 𝑥 ≤ 𝑧 . An element 𝑥 ∈ 𝐷
is compact (or finite) when 𝑥 ≪ 𝑥 . A subset 𝑈 ⊆ 𝐷 is Scott open if it
is an upper set and is inaccessible by suprema of directed sets, which
means that, for every directed 𝑆 ⊆ 𝐷 , if sup 𝑆 ∈ 𝑈 then already 𝑥 ∈ 𝑈
for some 𝑥 ∈ 𝑆 . The Scott open sets form the Scott topology of 𝐷 .
If 𝐷 and 𝐸 are dcpos then a function 𝑓 : 𝐷 → 𝐸 is continuous with
respect to the Scott topologies precisely when it preserves suprema of
directed sets. It follows that such a function is monotone.

Topology

A topological space 𝑋 is a 𝑇0 -space if each point is uniquely determined
by its open neighborhoods: for all 𝑥, 𝑦 ∈ 𝑋 ,

(∀𝑈 ∈ O(𝑋). (𝑥 ∈ 𝑈 ⇐⇒ 𝑦 ∈ 𝑈)) ⇒ 𝑥 = 𝑦.

A topological space is zero-dimensional if it has a basis consisting of
clopen sets.
2 Models of Computation
A model of computation describes what computation is and how it is
done. The best known is Alan Turing’s model [38] in which a machine
manipulates the contents of a tape according to a finite set of instructions.
It has become the yardstick with which we measure other models of
computation. Turing’s notion of computability is very robust. Firstly, it
is robust because changes to the definition of Turing machines, such
as increasing the number of tapes or heads, or allowing the head to
jump around, do not change the computational power. Secondly, the
notion is robust because many other models of computation turned out
to be equivalent to Turing’s in the sense that they can simulate Turing
machines, and can be simulated by them.

[38]: Turing (1937), “On Computable Numbers, with an Application to the Entscheidungsproblem”
However, it would be wrong to conclude that other models can be safely
ignored. Once a precise definition of simulation is given in Section 2.8, it
will turn out that there is more to equivalence of computational models
than mutual simulation.
We begin our investigations with a review of models of computation other
than Turing machines. After having seen several examples, we take (typed)
partial combinatory algebras as the common unifying notion that is well-
suited for the later development. There are of course interesting models
of computation that fall outside of our framework. For example, even
though partial combinatory algebras can incorporate certain specific
computational effects [25, 26], it is not clear how to give a systematic
account of “effectful tpcas”. John Longley [27] investigated more general
structures that do.

[25]: Longley (1999), “Matching typed and untyped realizability”
[26]: Longley (1999), “Unifying typed and untyped realizability”
[27]: Longley (2014), “Computability structures, simulations and realizability”
2.1 Turing machines

We recall informally how a Turing machine operates. There is little
point in giving a formal definition because we do not intend to actually
write programs for Turing machines. If you are not familiar with Turing
machines we recommend one of the standard textbooks on the subject [10,
16, 29].

[10]: Davis (1958), Computability and Unsolvability
[16]: H.R. Rogers (1992), Theory of Recursive Functions and Effective Computability
[29]: Odifreddi (1989), Classical Recursion Theory

Figure 2.1: A Turing machine operates with tapes: a read-only input tape,
working tapes (read & write), and a write-once output tape.
Output tape (write−once only) with tapes

A Turing machine is a device which operates on a number of tapes and
heads, see Figure 2.1:
▶ A read-only input tape is equipped with a reading head that can
move left and right, and read the symbols, but cannot write them.
▶ The read-write working tapes are equipped with heads that move
left and right, and can both read and write symbols.
▶ The write-once output tape is equipped with a head which can
move left and right, and it can write into each cell at most once. Once
a cell is filled with a non-blank symbol all subsequent writes to it
are ignored.
The tapes are infinite¹ and contain symbols from a given finite alphabet.
A common choice for the alphabet is 0, 1, and a special symbol
‘blank’. The machine manipulates the contents of the tapes according
to a program, which is a finite list of simple instructions that control the
heads and the tapes. The machine executes one instruction at a time in a
sequential manner. It may terminate after having executed finitely many
computation steps. If it does not terminate then it runs forever, in which
case we say that it diverges.

1: If you are worried about having actual infinite tapes in your room, note that at each step of the computation only a finite portion of the tapes has been inspected. In this sense the tapes are potentially infinite.
Our version of Turing machine is different from the usual one, where a
machine is equipped with only a single tape that serves for input, output,
and intermediate work. The two formulations are equivalent in the sense
that a single-tape machine can simulate the workings of a Turing machine
with several tapes, and vice versa. Having working tapes will ease the
description of infinite computations in Subsection 2.1.2.
The state of a Turing machine may be encoded onto a single tape as
follows. First we write down the program, suitably encoded by the
symbols from the alphabet, then the current state (the next instruction to
be executed), and positions of the heads. Finally, we copy the contents of
all the tapes by interleaving them into a single tape.
If we were going to build just one machine, which one would we build?
The answer was given by Turing.

Theorem 2.1.1 (Turing) There exists a universal machine: a machine that
takes a description of another machine, as explained above, and simulates it.

Proof. A traditional proof may be found in any book on computability
theory, and there is nothing wrong with reading the original proof [38]
either. For me a much more convincing proof is the fact that a universal
machine is sitting right here on my desk. (You have to ignore the fact
that several hundred gigabytes of storage are not quite the same thing
as an infinite tape. Also, modern computers are really Von Neumann
machines [13] because they have a central processing unit and random
access memory instead of a tape.)

[13]: Goldstein et al. (1947), Report on the mathematical and logical aspects of an electronic computing instrument

Once we have a universal machine, we can make it behave like any other
machine. It is just “a simple matter of programming” to tell it what to
do.
We mentioned in the introduction that many kinds of computing devices
are equivalent to Turing machines. We shall therefore not insist on
describing computation solely in terms of Turing machines, but rather
rely on familiarity with modern computers and programming languages.
After all, programs can actually be run on computers, whereas Turing
machines are hard to come by.

2.1.1 Type 1 machines

How do we use Turing machines to compute a partial function 𝑓 : ℕ ⇀ ℕ ?
A natural idea is to write the argument 𝑛 onto the input tape, run the
machine until it terminates, and read the result 𝑓 (𝑛) off the output tape.
If the machine diverges then 𝑓 (𝑛) is undefined. Of course, the input
𝑛 must be suitably encoded onto the input tape, for example it can be
written in binary form. The output tape contains the result encoded in
the same manner.

Definition 2.1.2 A partial map 𝑓 : ℕ ⇀ ℕ is computable if there exists a
Turing machine 𝑀 such that for every 𝑛 ∈ ℕ : if 𝑓 (𝑛) is defined then 𝑀
terminates on input 𝑛 and gives output 𝑓 (𝑛); if 𝑓 (𝑛) is undefined
then 𝑀 diverges on input 𝑛 .

It is convenient to view every Turing machine as one computing a function
ℕ ⇀ ℕ . This can be arranged as long as we read the result off the output
tape correctly. Suppose the alphabet contains symbols 0, 1, and blank.
We encode the input 𝑛 onto the input tape in binary followed by blanks,
and run the machine. If and when it terminates it has written at most
finitely many symbols onto the output tape. Some of the symbols it has
written might be different from 0 and 1. If we ignore everything that
comes after the first blank, we can interpret the output tape as a number
written in binary (the empty sequence encodes zero).
We can similarly define how a Turing machine computes a multivariate
partial function 𝑓 : ℕ 𝑘 ⇀ ℕ . We just have to correctly encode the
arguments on the tape by placing special markers between them so that
we can tell where one ends and the next one begins.

Exercise 2.1.3 Devise a coding scheme for 𝑘 -tuples of numbers using
the symbols 0, 1 and blank. Make sure that every tape which contains
at most finitely many 0’s and 1’s encodes a 𝑘 -tuple of numbers.

It is common knowledge that computers encode everything with 0’s
and 1’s, but logicians prefer to encode everything with natural numbers.
We shall write in general ⌜ 𝑒 ⌝ for encoding of 𝑒 by a natural number. Of
course, we must specify what ⌜ 𝑒 ⌝ is in each particular case. For example,
a pair of numbers (𝑚, 𝑛) may be encoded into a single number as

⌜ (𝑚, 𝑛) ⌝ = 2𝑚 (2𝑛 + 1).

Every number except 0 represents the code of a unique pair so we also
have computable projections fst and snd which recover 𝑚 and 𝑛 from
⌜ (𝑚, 𝑛) ⌝ , respectively. Once we know how to encode pairs of numbers,
lists of numbers can be encoded as iterated pairs:

⌜[ ]⌝ = 0
⌜ [𝑛0 , . . . , 𝑛 𝑘 ] ⌝ = ⌜ (𝑛0 , ⌜ [𝑛1 , . . . , 𝑛 𝑘 ] ⌝ ) ⌝ .
Because we defined ⌜ (𝑚, 𝑛) ⌝ so that it is never zero, the elements of a
sequence may be uniquely reconstructed from its code. By iterating the
scheme we may encode lists of lists of numbers, etc.
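These encodings are easily programmed. Here is a sketch in Haskell (the function names are ours); pair implements ⌜(𝑚, 𝑛)⌝ = 2^𝑚 (2𝑛 + 1) and unpair recovers the two components.

```haskell
-- The pairing ⌜(m, n)⌝ = 2^m (2n + 1); note the result is never zero.
pair :: Integer -> Integer -> Integer
pair m n = 2 ^ m * (2 * n + 1)

-- The projections: strip factors of 2 to recover m, then read off n.
unpair :: Integer -> (Integer, Integer)
unpair k
  | k <= 0    = error "0 does not encode a pair"
  | even k    = let (m, n) = unpair (k `div` 2) in (m + 1, n)
  | otherwise = (0, (k - 1) `div` 2)

-- Lists as iterated pairs: ⌜[]⌝ = 0 and ⌜n : ns⌝ = ⌜(n, ⌜ns⌝)⌝.
encodeList :: [Integer] -> Integer
encodeList = foldr pair 0

decodeList :: Integer -> [Integer]
decodeList 0 = []
decodeList k = let (n, ns) = unpair k in n : decodeList ns
```

Since pair m n is odd exactly when m = 0, and only the empty list is encoded by 0, decoding is unambiguous.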
Turing machines can be encoded with numbers, also. A program is a
finite list of instructions, so it can be encoded as a finite sequence of 0’s
and 1’s (your computer does this every time you save a piece of source
code in a file), which in turn represents a number in binary form. In
fact, every number may be thought of as a code of a program by the
reverse process. Given a number, write it in binary form and interpret
it as a sequence of 0’s and 1’s and decode from it a list of instructions.
It may happen that the binary sequence does not properly encode a list
of instructions, in which case we interpret it as some fixed silly little
program.
The next step is to encode tapes and entire computations with numbers.
Because an infinite tape cannot be encoded in a single natural number,
we limit attention to the so-called type 1 machines which accept only
finite inputs. More precisely, the input always consists of a finite string of
0’s and 1’s followed by blanks. Such input may be encoded by a single
number. Furthermore, at every step of computation the machine has
used up only a finite portion of its working tapes, whose contents may
again be encoded by a single number.
By continuing in this manner we may encode with a single number a
finite sequence of computation steps, including the contents of the tapes
and positions of the heads at each step. Stephen Kleene [19] worked out
the details of all this and defined the predicate 𝑇(𝑥, 𝑦, 𝑧) whose meaning
is

“Machine encoded by 𝑥 with input tape that encodes the
number 𝑦 performs a sequence of computation steps encoded
by 𝑧 and terminates.”

[19]: Kleene (1943), “Recursive predicates and quantifiers”
The amazing thing is that 𝑇 may be defined in Peano arithmetic just in
terms of 0, successor, + and ×. There is an associated computable partial
function 𝑈(𝑧) whose meaning is “the number encoded by the contents
of the output tape in the last step of computation encoded by 𝑧 ”. With
it we can extract the result of a computation. It is easy to arrange 𝑈 so
that it is defined for all 𝑧 , even those that do not encode terminating
computations.
Kleene’s normal form theorem [19] says that every partial computable
function 𝑓 : ℕ ⇀ ℕ may be written in the form

𝑦 ↦ 𝑈(min{𝑧 ∈ ℕ | 𝑇(𝑥, 𝑦, 𝑧)}). (2.1)

The number 𝑥 is the encoding of a machine that computes 𝑓 . We emphasize
that we are completely ignoring questions of computational
efficiency. Just consider how we would compute 𝑓 (𝑦) according to (2.1):
for each 𝑧 = 0 , 1 , 2 , . . ., test whether 𝑧 encodes a computation of machine
𝑥 with input 𝑦 . When you find the first such 𝑧 , extract the result 𝑈(𝑧)
from it. I dare you to compute the identity function 𝑦 ↦→ 𝑦 this way!
Kleene’s normal form may be used to define a standard enumeration of
partial recursive functions. Let

𝝋 𝑥 (𝑦) = 𝑈(min{𝑧 ∈ ℕ | 𝑇(𝑥, 𝑦, 𝑧)}).

The sequence 𝝋0 , 𝝋1 , 𝝋2 , . . . is an enumeration of all computable partial
functions (with repetitions).
The preceding discussion may be generalized to functions of several
variables. For each 𝑘 ∈ ℕ there is Kleene’s predicate 𝑇 (𝑘) (𝑥, 𝑦1 , . . . , 𝑦 𝑘 , 𝑧)
and the corresponding 𝑈 (𝑘) (𝑧) that extracts results from computations.
Similarly, there is a standard enumeration of 𝑘 -place computable partial
functions
𝝋⁽ᵏ⁾ₓ (𝑦1 , . . . , 𝑦 𝑘 ) = 𝑈 (𝑘) (min{𝑧 ∈ ℕ | 𝑇 (𝑘) (𝑥, 𝑦1 , . . . , 𝑦 𝑘 , 𝑧)}).

These enumerations are not arbitrary, because they have the following
important properties.

Theorem 2.1.4 (utm) There exists a partial computable function 𝑢 : ℕ ×
ℕ ⇀ ℕ such that, for all 𝑥, 𝑦 ∈ ℕ ,

𝑢(𝑥, 𝑦) ≃ 𝝋 𝑥 (𝑦).

Theorem 2.1.5 (smn) There exists a computable function 𝑠 : ℕ² → ℕ such
that, for all 𝑥, 𝑦, 𝑧 ∈ ℕ ,

𝝋 𝑠(𝑥,𝑦) (𝑧) ≃ 𝝋⁽²⁾ₓ (𝑦, 𝑧).

The utm theorem is essentially a restatement of Theorem 2.1.1 in terms
of computable partial functions. Detailed proofs of the utm and smn
theorems would involve a lot of technical manipulations of Turing
machines; you may consult [10] to get a taste of it. It is illuminating to
see how the utm and smn theorems manifest themselves in modern

programming languages, say in Haskell. Keeping in mind that numbers


are just codes for programs and data, the universal function 𝑢 from the
utm theorem is
u (f, y) = f y

and the function 𝑠 is the currying operation2 2: In Haskell the notation \x -> e
stands for 𝜆-abstraction 𝜆𝑥. 𝑒 , which in
s (f, y) = \z -> f (y, z) turn means “the function which maps 𝑥
to 𝑒 ”, see Section 2.3.
This may seem like a triviality to the programmer but is surely not con-
sidered one by the implementors of the Haskell compiler. The definition
of s uses function application, pairing, currying and 𝜆-abstraction, which
are “the essence” of functional programming, just like the utm and smn
theorems are the essence of partial computable functions.
The following theorem is important in the theory of computable functions
because it allows us to define partial computable functions by recursion.

Theorem 2.1.6 (Recursion theorem) For every total computable 𝑓 : ℕ →


ℕ there exists 𝑛 ∈ ℕ such that 𝝋 𝑓 (𝑛) = 𝝋 𝑛 .

Proof. The classical proof goes as follows. First we define a computable


partial map 𝜓 : ℕ 2 ⇀ ℕ such that

𝜓(𝑢, 𝑥) = 𝝋 𝝋𝑢 (𝑢) (𝑥),

where 𝜓(𝑢, 𝑥) is undefined when 𝝋𝑢 (𝑢) is undefined. By the smn theorem


there is a computable function 𝑔 : ℕ → ℕ such that 𝝋 𝑔(𝑢) (𝑥) = 𝜓(𝑢, 𝑥).
Now consider any computable 𝑓 : ℕ → ℕ . Because 𝑓 ◦ 𝑔 is computable,
there exists 𝑣 ∈ ℕ such that 𝝋𝑣 = 𝑓 ◦ 𝑔 . Since 𝑓 ◦ 𝑔 is a total function,
𝝋𝑣 (𝑣) is defined. The number 𝑛 = 𝑔(𝑣) has the desired property: 𝝋 𝑔(𝑣) =
𝝋 𝝋𝑣 (𝑣) = 𝝋 𝑓 (𝑔(𝑣)) = 𝝋 𝑓 (𝑛) .

This was a typical argument in the theory of computable functions. Let


us prove the recursion theorem in Haskell to see what is going on. The
function 𝑓 : ℕ → ℕ operates on codes of computable partial maps.
Haskell has higher-order functions that work directly with functions as
arguments and results, so 𝑓 should be given the type
f :: (Integer -> Integer) -> (Integer -> Integer)

Rather than looking for a number 𝑛 we are looking for a function


n :: Integer -> Integer

such that f n = n. Because Haskell already has recursion built in, this is
very easy: just define
n = f n

The Recursion theorem is nothing but definition by recursion for type 1


machines.
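In Haskell the same knot-tying can be packaged as a general fixed-point combinator; here is a minimal sketch of how definition by recursion reduces to the fixed point found above.

```haskell
-- A fixed-point combinator: fix f is a solution of the equation n = f n.
fix :: (a -> a) -> a
fix f = f (fix f)

-- Defining factorial without explicit recursion, by taking a fixed point.
factorial :: Integer -> Integer
factorial = fix (\rec n -> if n == 0 then 1 else n * rec (n - 1))
```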
We finish this section with a theorem which we shall often use to show
non-computability results.

Theorem 2.1.7 (Halting oracle) The halting oracle,


ℎ(𝑥) = { 1 if 𝝋 𝑥 (0) is defined; 0 if 𝝋 𝑥 (0) is not defined },

is not computable.

Proof. Let us prove the theorem in Haskell. We must show that there is
no
h :: (Integer -> Integer) -> Integer

such that, for all f :: Integer -> Integer,


h f = { 1 if f 0 terminates; 0 if f 0 diverges }.

Suppose there were such an h. Define


g n = if h g == 1 then g n else 0

By assumption h g is either 0 or 1. In either case there is a contradiction


because g does just the opposite of what h says it will do.

2.1.2 Type 2 machines

Type 1 machines from the previous section only operate on finite inputs. In


practice we often see programs whose input and output are (potentially)
infinite. For example, when you listen to an Internet radio station, the
player accepts a never-ending stream of data which it outputs to the
speakers. Also, many useful programs, such as servers, operating systems,
and browsers are potentially non-terminating. We therefore need a model
of computation that describes non-terminating programs with infinite
inputs and outputs.
A popular one is the type 2 machine, which accepts an infinite sequence on
its input tape and is allowed to work forever. It may or may not fill the
output tape entirely with non-blank symbols. Note that the requirement
for the output tape to be write-once makes it possible to tell when the
machine has actually produced an output in a given cell. Had we allowed
the machine to write to each output cell many times, it could keep coming
back and changing what it has already written.
An important distinction between type 1 and type 2 machines is that the
latter may accept non-computable inputs, from which non-computable
outputs may be produced.
For type 2 machines there are analogues of the standard enumeration 𝝋,
utm and the smn theorems. These are more easily expressed if we allow
the machines to write natural numbers in the cells, rather than symbols
from a finite alphabet. We also equip the machines with instructions
for manipulating numbers, say, instructions for extracting the bits and
for testing equality with zero. These changes are inessential because an
infinite sequence 𝑛0 , 𝑛1 , 𝑛2 , . . . of natural numbers may be encoded as a
binary sequence 1𝑛0 01𝑛1 01𝑛2 0 · · · , where 1 𝑘 means that the symbol 1 is
repeated 𝑘 times.

Definition 2.1.8 Say that a type 2 machine 𝑀 computes a partial


map 𝑓 : ℕ ℕ ⇀ ℕ ℕ when for every 𝛼 ∈ ℕ ℕ : if 𝑓 (𝛼) is defined then 𝑀
eventually writes to every output cell, and the output tape equals 𝑓 (𝛼);
if 𝑓 (𝛼) is not defined, then there is at least one output cell to which 𝑀 never
writes.
A partial map 𝑓 : ℕ ℕ ⇀ ℕ ℕ is type 2 computable if it is computed by
a type 2 machine.

We may similarly define what it means for a machine to compute a


multivariate partial function 𝑓 : (ℕ ℕ ) 𝑘 ⇀ ℕ ℕ . The input (𝛼 0 , . . . , 𝛼 𝑘−1 )
is written onto the input tape in an interleaving manner, so that 𝛼 𝑖 (𝑗) is
found in the cell at position 𝑘 · 𝑗 + 𝑖 .
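The interleaving is easy to express concretely. The sketch below represents elements of ℕ ℕ as Haskell functions (the type Baire reappears later, in the proof of the type 2 utm theorem); the names interleave and project are ours, introduced for illustration.

```haskell
type Baire = Integer -> Integer

-- Interleave k sequences into one, placing alpha_i(j) at position k*j + i.
interleave :: [Baire] -> Baire
interleave alphas n = (alphas !! fromIntegral i) j
  where
    k      = toInteger (length alphas)
    (j, i) = n `divMod` k

-- Recover the i-th of k interleaved sequences.
project :: Integer -> Integer -> Baire -> Baire
project k i beta j = beta (k * j + i)
```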

The Baire space

We recall a few basic facts about the Baire space 𝔹 = ℕ ℕ . Let ℕ ∗ be the
set of all finite sequences of natural numbers. If 𝑎, 𝑏 ∈ ℕ ∗ we write 𝑎 ⊑ 𝑏
when 𝑎 is a prefix of 𝑏 . The length of a finite sequence 𝑎 is denoted by ∥𝑎∥ .
Similarly, we write 𝑎 ⊑ 𝛼 when 𝑎 is a prefix of an infinite sequence 𝛼 ∈ 𝔹.

Define 𝛼(𝑛) = [𝛼(0), . . . , 𝛼(𝑛 − 1)] to be the prefix of 𝛼 consisting of the


first 𝑛 terms.
Write 𝑛 :: 𝛼 for the sequence 𝑛, 𝛼(0), 𝛼(1), 𝛼(2), . . ., and 𝑎 ++ 𝛽 for con-
catenation of the finite sequence 𝑎 ∈ ℕ ∗ with the infinite sequence
𝛽 ∈ 𝔹.
We equip 𝔹 with the product topology, which is the topology whose
countable topological base consists of the basic open sets, for 𝑎 ∈ ℕ ∗ ,

𝑎++𝔹 = {𝑎 ++ 𝛽 | 𝛽 ∈ 𝔹} = {𝛼 ∈ 𝔹 | 𝑎 ⊑ 𝛼}.

Because the basic open sets are both closed and open (clopen), 𝔹 is in fact
a countably based 0-dimensional3 Hausdorff space. It is also a complete 3: Recall that a space is 0-dimensional
separable metric space for the comparison metric 𝑑 : 𝔹 × 𝔹 → ℝ, defined when its clopen subsets form a base for
its topology.
by
𝑑(𝛼, 𝛽) = inf {2−𝑛 | 𝛼(𝑛) = 𝛽(𝑛)}.
If the first term in which 𝛼 and 𝛽 differ is the 𝑛 -th one, then 𝑑(𝛼, 𝛽) = 2−𝑛 .
The comparison metric is an ultrametric, which means that it satisfies
the inequality 𝑑(𝛼, 𝛾) ≤ max(𝑑(𝛼, 𝛽), 𝑑(𝛽, 𝛾)). In an ultrametric space
every point of a ball is its center. The clopen sets 𝑎++𝔹 are precisely the
balls of radius 2−∥𝑎∥ .
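The comparison metric can be computed exactly whenever the two sequences differ. A small sketch (the function dist is ours, not from the text): searching for the first disagreement terminates precisely when 𝛼 ≠ 𝛽, which reflects the fact that equality of infinite sequences is not decidable.

```haskell
-- d(alpha, beta) = 2^(-n) where n is the first index of disagreement.
-- The search diverges when alpha = beta, i.e., when the distance is 0.
dist :: (Integer -> Integer) -> (Integer -> Integer) -> Double
dist alpha beta = 0.5 ^ n
  where
    n :: Integer
    n = head [k | k <- [0 ..], alpha k /= beta k]
```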

Encoding of partial maps 𝔹 ⇀ 𝔹

Earlier we encoded the partial computable maps ℕ ⇀ ℕ with numbers


describing Turing machines. We would like to similarly encode com-
putable partial maps 𝔹 ⇀ 𝔹 with sequences. An obvious idea, which we
shall not pursue, is to use the first term of a sequence to encode a Turing
machine. Instead, we shall use sequences as “lookup tables”, with which
even some non-computable maps will be encoded.
Consider a partial map 𝑓 : 𝔹 ⇀ 𝔹 computed by 𝑀 . Suppose that, given
an input tape
𝛼 = 𝑛0 , 𝑛1 , 𝑛2 , . . .
𝑀 writes 𝑗 to the 𝑖 -th output cell after 𝑘 steps of computation, hence
𝑓 (𝛼)(𝑖) = 𝑗 . Because in 𝑘 steps 𝑀 inspects at most the first 𝑘 input cells, it
would have done the same for any other input that agrees with the given
one in the first 𝑘 terms. Thus 𝑓 is determined by pieces of information of
the form:
“If the input starts with 𝑛0 , 𝑛1 , . . . , 𝑛 𝑘−1 then the 𝑖 -th term of
the output is 𝑗 .”
A code 𝛾 ∈ 𝔹 of 𝑓 just has to contain such information, which we
can arrange by coding lists and pairs as numbers. To determine the
value 𝛾(𝑚) for a given 𝑚 ∈ ℕ , decode 𝑚 + 1 as a finite sequence
𝑚 + 1 = [𝑖, 𝑛0 , . . . , 𝑛 𝑘−1 ] and simulate 𝑀 for 𝑘 steps with the input
tape
𝑛0 , . . . , 𝑛 𝑘−1 , 0, 0 , . . .
If the machine writes 𝑗 into the 𝑖 -th output cell during the simulation,
set 𝛾(𝑚) = 𝑗 + 1, otherwise set 𝛾(𝑚) = 0. Think of 𝛾 as a lookup table
which maps a key 𝑖 ::[𝑛0 , . . . , 𝑛 𝑘−1 ] to an optional value 𝑗 . Clearly, 𝛾 can
be computed from a description of 𝑀 .

To decode the function 𝜼 𝛾 : 𝔹 ⇀ 𝔹 encoded by 𝛾 ∈ 𝔹, we just have to


devise a lookup procedure. Given input 𝛼 ∈ 𝔹, we compute the value
of the 𝑖 -th output cell by successively looking up keys 𝑖 :: 𝛼(𝑘) for ever
larger 𝑘 , until we find an answer 𝑗 . More precisely, for 𝛼 ∈ 𝔹, define
ℓ (𝛾, 𝛼) : ℕ ⇀ ℕ as

ℓ (𝛾, 𝛼)(𝑖) = 𝛾( ⌜ 𝑖 ::𝛼(𝑘) ⌝ ) − 1 where 𝑘 = min{𝑘 ∈ ℕ | 𝛾( ⌜ 𝑖 :: 𝛼(𝑘) ⌝ ) ≠ 0}

(if no such 𝑘 exists then ℓ (𝛾, 𝛼)(𝑖) is undefined), and let the map 𝜼 𝛾 : 𝔹⇀𝔹
encoded by 𝛾 be
(
ℓ (𝛾, 𝛼) if ℓ (𝛾, 𝛼) is a total map,
𝜼 𝛾 (𝛼) = (2.2)
undefined otherwise.

Definition 2.1.9 A partial function 𝑓 : 𝔹 ⇀ 𝔹 is (type 2) realized by


𝛾 ∈ 𝔹, called a Kleene associate of 𝑓 , when 𝑓 = 𝜼 𝛾 .

Clearly, if 𝛾 is a computable sequence then 𝜼 𝛾 is type 2 computable. But


what sort of partial map is 𝜼 𝛾 in general?
An arbitrary 𝛾 ∈ 𝔹 may give inconsistent answers, in the sense that
looking up values at 𝑖 :: 𝛼(𝑘) and 𝑖 :: 𝛼(𝑚) may give inconsistent answers.
Above we resolved the problem by taking the answer given by the least 𝑘 .
We may also rectify 𝛾 to a realizer that gives consistent answers.
Say that 𝛾 ∈ 𝔹 is consistent when, for all 𝑘, 𝑚 ∈ ℕ and 𝛼 ∈ 𝔹, if 𝑘 < 𝑚
and 𝛾( ⌜ 𝛼(𝑘) ⌝ ) ≠ 0 then 𝛾( ⌜ 𝛼(𝑚) ⌝ ) = 𝛾( ⌜ 𝛼(𝑘) ⌝ ).

Lemma 2.1.10 Every realized function is realized by a consistent realizer.

Proof. Suppose 𝑓 : 𝔹 ⇀ 𝔹 is realized by 𝛾 . Define 𝛿 by induction on the


length of 𝑎 ∈ ℕ ∗ by

𝛿( ⌜ 𝑎 ⌝ ) = { 𝛾( ⌜ 𝑎 ⌝ ) if 𝑎 = [ ]; 𝛾( ⌜ 𝑎 ⌝ ) if 𝑎 = 𝑎 ′:: 𝑖 and 𝛿( ⌜ 𝑎 ′ ⌝ ) = 0; 𝛿( ⌜ 𝑎 ′ ⌝ ) if 𝑎 = 𝑎 ′:: 𝑖 and 𝛿( ⌜ 𝑎 ′ ⌝ ) ≠ 0 }.


The realizer 𝛿 is consistent by construction. It is easy to check that
𝜼𝛾 = 𝜼𝛿 .
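The rectification can be written out in the same machine-free style as the Haskell code later in this section, with codes represented as functions on finite sequences rather than ⌜·⌝-coded numbers (a representation of ours, chosen for readability).

```haskell
-- Rectify a code gamma into a consistent one: once an answer is given on
-- some prefix, all longer prefixes repeat it (as in Lemma 2.1.10).
rectify :: ([Integer] -> Integer) -> ([Integer] -> Integer)
rectify gamma [] = gamma []
rectify gamma a
  | d /= 0    = d         -- an earlier answer exists: propagate it
  | otherwise = gamma a   -- no earlier answer: consult gamma itself
  where d = rectify gamma (init a)
```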

Theorem 2.1.11 (Extension Theorem for 𝔹) Every partial continuous map


𝑓 : 𝔹 ⇀ 𝔹 can be extended to a realized one.

Proof. Suppose 𝑓 : 𝔹 ⇀ 𝔹 is a partial continuous map, and let

𝐴 = { (𝑎, 𝑖, 𝑗) ∈ ℕ ∗ × ℕ 2 | 𝑎++𝔹 ∩ dom( 𝑓 ) ≠ ∅ ∧ ∀𝛼 ∈ 𝑎++𝔹 ∩ dom( 𝑓 ). 𝑓 (𝛼)(𝑖) = 𝑗 }.

If (𝑎, 𝑖, 𝑗) ∈ 𝐴 and (𝑎 ′ , 𝑖, 𝑗 ′) ∈ 𝐴 and 𝑎 ⊑ 𝑎 ′ then 𝑗 = 𝑗 ′ because there


is 𝛼 ∈ 𝑎 ′++𝔹 ∩ dom( 𝑓 ) ⊆ 𝑎++𝔹 ∩ dom( 𝑓 ) such that 𝑗 = 𝑓 (𝛼)(𝑖) = 𝑗 ′.

We define a sequence 𝛾 ∈ 𝔹 as follows. For every (𝑎, 𝑖, 𝑗) ∈ 𝐴 let


𝛾( ⌜ 𝑖 :: 𝑎 ⌝ ) = 𝑗 + 1, and for all other arguments 𝑛 let 𝛾(𝑛) = 0. Suppose
that 𝛾( ⌜ 𝑖 :: 𝑎 ⌝ ) = 𝑗 + 1 for some 𝑖, 𝑗 ∈ ℕ and 𝑎 ∈ ℕ ∗ . Then for every prefix
𝑎 ′ ⊑ 𝑎 , 𝛾( ⌜ 𝑖 :: 𝑎 ′ ⌝ ) = 0 or 𝛾( ⌜ 𝑖 :: 𝑎 ′ ⌝ ) = 𝑗 + 1. Thus, if (𝑎, 𝑖, 𝑗) ∈ 𝐴 and 𝑎 ⊑ 𝛼
then 𝜼 𝛾 (𝛼)(𝑖) = 𝑗 . Let us show that 𝜼 𝛾 (𝛼)(𝑖) = 𝑓 (𝛼)(𝑖) for all 𝛼 ∈ dom( 𝑓 )
and all 𝑖 ∈ ℕ . Because 𝑓 is continuous, for all 𝛼 ∈ dom( 𝑓 ) and 𝑖 ∈ ℕ
there exists (𝑎, 𝑖, 𝑗) ∈ 𝐴 such that 𝑎 ⊑ 𝛼 and 𝑓 (𝛼)(𝑖) = 𝑗 . Now we get
𝜼 𝛾 (𝛼)(𝑖) = 𝑗 = 𝑓 (𝛼)(𝑖).

Recall that a 𝐺 𝛿 -set is a countable intersection of open sets.

Lemma 2.1.12 If 𝑈 ⊆ 𝔹 is a 𝐺 𝛿 -set then the partial function 𝑢 : 𝔹 ⇀ 𝔹,


defined by

𝑢(𝛼) = { 𝜆𝑛. 1 if 𝛼 ∈ 𝑈; undefined otherwise },

is realized.

Proof. The set 𝑈 may be expressed as a countable intersection of countable


unions of basic open sets,
𝑈 = ⋂ 𝑖∈ℕ ⋃ 𝑗∈ℕ 𝑎 𝑖,𝑗 ++𝔹.

Define 𝛾 ∈ 𝔹 by setting 𝛾( ⌜ 𝑖 :: 𝑎 𝑖,𝑗 ⌝ ) = 2 for all 𝑖, 𝑗 ∈ ℕ , and 𝛾(𝑛) = 0 for


all other arguments 𝑛 . Clearly, if 𝜼 𝛾 (𝛼) is defined then its value is the
constant sequence 1 , 1 , 1 , . . ., so we only need to verify that dom(𝜼 𝛾 ) = 𝑈 .
If 𝛼 ∈ dom(𝜼 𝛾 ) then ℓ (𝛾, 𝛼)(𝑖) is defined for every 𝑖 ∈ ℕ . For every 𝑖 ∈ ℕ
there exists 𝑗𝑖 ∈ ℕ such that 𝛾( ⌜ 𝑖 :: 𝛼(𝑗𝑖 ) ⌝ ) = 2, which implies that 𝑎 𝑖,𝑗𝑖 ⊑ 𝛼 .
Hence

𝛼 ∈ ⋂ 𝑖∈ℕ 𝑎 𝑖,𝑗𝑖 ++𝔹 ⊆ 𝑈.

Conversely, suppose 𝛼 ∈ 𝑈 and consider any 𝑖 ∈ ℕ . There exists 𝑗 ∈ ℕ


such that 𝑎 𝑖,𝑗 ⊑ 𝛼 , therefore 𝛾( ⌜ 𝑖 :: 𝛼(∥𝑎 𝑖,𝑗 ∥) ⌝ ) = 𝛾( ⌜ 𝑖 :: 𝑎 𝑖,𝑗 ⌝ ) = 2 so that
ℓ (𝛾, 𝛼)(𝑖) is defined. We conclude that 𝛼 ∈ dom(𝜼 𝛾 ).

Lemma 2.1.13 Suppose 𝛼 ∈ 𝔹 and 𝑈 ⊆ 𝔹 is a 𝐺 𝛿 -set. Then there exists


𝛿 ∈ 𝔹 such that dom(𝜼 𝛿 ) = 𝑈 ∩ dom(𝜼 𝛼 ) and 𝜼 𝛼 (𝛽) = 𝜼 𝛿 (𝛽) for all
𝛽 ∈ dom(𝜼 𝛼 ) ∩ 𝑈 .

Proof. By Lemma 2.1.10 we may assume that 𝛼 is consistent. Define


𝑓 : 𝔹 ⇀ 𝔹 by
𝑓 (𝛽)(𝑛) = { 𝜼 𝛼 (𝛽)(𝑛) if 𝛽 ∈ dom(𝜼 𝛼 ) ∩ 𝑈; undefined otherwise }.

We would like to show that 𝑓 is realized. From Lemma 2.1.12 we obtain a


consistent 𝛾 ∈ 𝔹 (rectifying by Lemma 2.1.10 if necessary) such that, for all 𝛽 ∈ 𝔹,

𝜼 𝛾 (𝛽) = { 𝜆𝑛. 1 if 𝛽 ∈ 𝑈; undefined otherwise }.

We claim that 𝑓 is realized by

𝛿(𝑘) = 𝛼(𝑘) · 𝛾(𝑘)/2.

Recall that 𝛾(𝑘) is either 0 or 2 so 𝛿(𝑘) is either 0 or 𝛼(𝑘). Hence 𝜼 𝛿


is a restriction of 𝜼 𝛼 , by which we mean that dom(𝜼 𝛿 ) ⊆ dom(𝜼 𝛼 ) and
𝜼 𝛿 (𝛽) = 𝜼 𝛼 (𝛽) for all 𝛽 ∈ dom(𝜼 𝛿 ). Also, dom(𝜼 𝛿 ) ⊆ dom(𝜼 𝛾 ) because
𝛿(𝑘) ≠ 0 implies 𝛾(𝑘) ≠ 0. It remains to be shown that 𝛽 ∈ dom(𝜼 𝛼 ) ∩ 𝑈
implies 𝛽 ∈ dom(𝜼 𝛿 ), i.e., that for such 𝛽 , ℓ (𝛿, 𝛽)(𝑖) is defined for every
𝑖 ∈ ℕ . Because ℓ (𝛼, 𝛽)(𝑖) and ℓ (𝛾, 𝛽)(𝑖) are defined, there exist 𝑘 1 and 𝑘2
such that

𝛼( ⌜ 𝑖 ::𝛽(𝑘 1 ) ⌝ ) ≠ 0 and 𝛾( ⌜ 𝑖 ::𝛽(𝑘2 ) ⌝ ) ≠ 0.

Because 𝛼 and 𝛾 are consistent, for 𝑘 = max(𝑘 1 , 𝑘 2 ),

𝛼( ⌜ 𝑖 ::𝛽(𝑘) ⌝ ) ≠ 0 and 𝛾( ⌜ 𝑖 ::𝛽(𝑘) ⌝ ) ≠ 0 ,

hence 𝛿( ⌜ 𝑖 ::𝛽(𝑘) ⌝ ) ≠ 0, which we wanted to show.

We are now able to characterize the realized maps.

Theorem 2.1.14 A partial function 𝑓 : 𝔹 ⇀ 𝔹 is realized if, and only if, 𝑓 is


continuous and its support is a 𝐺 𝛿 -set.

Proof. First we show that 𝜼 𝛼 is a continuous map whose support is a


𝐺 𝛿 -set. It is continuous because the value of 𝜼 𝛼 (𝛽)(𝑛) depends only on 𝑛
and a finite prefix of 𝛽 . The support of 𝜼 𝛼 is the 𝐺 𝛿 -set

dom(𝜼 𝛼 ) = {𝛽 ∈ 𝔹 | ∀𝑛 ∈ ℕ . ℓ (𝛼, 𝛽)(𝑛) defined}
= ⋂ 𝑛∈ℕ {𝛽 ∈ 𝔹 | ℓ (𝛼, 𝛽)(𝑛) defined}
= ⋂ 𝑛∈ℕ ⋃ 𝑚∈ℕ {𝛽 ∈ 𝔹 | ℓ (𝛼, 𝛽)(𝑛) = 𝑚}.

Each of the sets {𝛽 ∈ 𝔹 | ℓ (𝛼, 𝛽)(𝑛) = 𝑚} is open because ℓ is a continuous


operation.
Now let 𝑓 : 𝔹 ⇀ 𝔹 be a partial continuous function whose support
is a 𝐺 𝛿 -set. By Extension Theorem 2.1.11 there exists 𝛾 ∈ 𝔹 such that
𝑓 (𝛼) = 𝜼 𝛾 (𝛼) for all 𝛼 ∈ dom( 𝑓 ). By Lemma 2.1.13 there exists 𝜓 ∈ 𝔹 such
that dom(𝜼𝜓 ) = dom( 𝑓 ) and 𝜼𝜓 (𝛼) = 𝜼 𝛾 (𝛼) for every 𝛼 ∈ dom( 𝑓 ).

Finally, we formulate the utm and smn theorems for type 2 machines.

Theorem 2.1.15 (type 2 utm) There exists a computable partial function


𝑢 : 𝔹 × 𝔹 ⇀ 𝔹 such that 𝑢(𝛼, 𝛽) ≃ 𝜼 𝛼 (𝛽) for all 𝛼, 𝛽 ∈ 𝔹.

Proof. Let us write a machine for computing 𝑢 in Haskell, but without


resorting to an explicit encoding of finite sequences by numbers. Define
the type
type Baire = Integer -> Integer

The universal u :: ([Integer] -> Integer, Baire) -> Baire is just the
transliteration of (2.2):

u (a, b) i = x - 1
where x = head $
filter (/= 0) $
[a (i : map b [0..(k-1)]) | k <- [0..]]

You may entertain yourself by learning Haskell and figuring out how it
works.

The type 2 variant of the smn theorem uses the representation 𝜼 (2) for
encoding partial maps 𝔹 × 𝔹 ⇀ 𝔹 by

𝜼 (2) 𝛼 (𝛽, 𝛾) = 𝜼 𝛼 (⟨𝛽, 𝛾⟩)

where ⟨𝛽, 𝛾⟩ is the interleaved sequence 𝛽(0), 𝛾(0), 𝛽(1), 𝛾(1), . . ..

Theorem 2.1.16 (type 2 smn) There exists a computable 𝑠 : 𝔹 × 𝔹 → 𝔹


such that, for all 𝛼, 𝛽, 𝛾 ∈ 𝔹,

𝜼 𝑠(𝛼,𝛽) (𝛾) = 𝜼 (2) 𝛼 (𝛽, 𝛾).

Proof. Exercise in Haskell programming.

2.1.3 Turing machines with oracles

The computational models from Subsections 2.1.1 and 2.1.2 relativize, i.e., they


may be adapted to use oracle Turing machines. Recall that an oracle is an
infinite binary sequence 𝜔 : ℕ → {0 , 1} and that a Turing machine with
oracle 𝜔 is a Turing machine with an extra read-only tape containing 𝜔 .
The machine consults the tape to obtain the values of 𝜔 . When 𝜔 is a
non-computable sequence, the oracle machine exceeds the computational
power of ordinary Turing machines.
Each oracle 𝜔 yields a type 1 and a type 2 model of computation in
which the machines have access to 𝜔 . That is, in any given such model
all machines access the same fixed oracle 𝜔 .

2.1.4 Hamkins’s infinite-time Turing machines

Another model of computation that exceeds the power of Turing machines


is Hamkins’s infinite-time Turing machines [15]. We give here just a brief [15]: Hamkins et al. (2000), “Infinite time
overview and recommend the cited reference for background reading. Turing machines”

An infinite-time Turing machine, or just machine, is like a Turing machine


which is allowed to run infinitely long, with the computation steps
counted by ordinal numbers. The machine has a finite program, an
input tape, work tapes, an output tape, etc. We assume that the tape
cells contain 0’s and 1’s. At successor ordinals the machine acts like an
ordinary Turing machine. At limit ordinals it enters a special “limit” state,
its heads are placed at the beginnings of the tapes, and the content of
each tape cell is computed as the lim sup of the values written in the cell

at earlier stages. More precisely, if 𝑐 𝛼 denotes the value of the cell 𝑐 at


step 𝛼 , then for a limit ordinal 𝛽 we have
𝑐 𝛽 = { 0 if ∃𝛼 < 𝛽. ∀𝛾. (𝛼 ≤ 𝛾 < 𝛽 ⇒ 𝑐 𝛾 = 0); 1 otherwise }.

The machine terminates by entering a special halt state, or it may run


forever. It turns out that a machine which has not terminated by step 𝜔1
runs forever.
We can think of machines as computing partial functions 2ℕ ⇀ 2ℕ where
2 = {0 , 1}: initialize the input tape with an infinite binary sequence
𝑥 ∈ 2ℕ , run the machine, and observe the contents of the output tape if
and when the machine terminates. We can also consider infinite time
computation of partial functions ℕ ⇀ ℕ : initialize the input tape with the
input number, run the machine, and interpret the contents of the output
tape as a natural number, where we ignore anything that is beyond the
position of the output head. By performing the usual encoding tricks,
we can feed the machines more complicated inputs and outputs, such as
pairs, finite lists, and even infinite lists of numbers. We say that a function
is infinite-time computable if there is an infinite-time Turing machine
that computes it.
The power of infinite-time Turing machines is vast and extends far beyond
the halting problem for ordinary Turing machines, although of course
they cannot solve their own halting problem. For example, for every
Π11 -subset 𝑆 ⊆ 2ℕ there is a machine which, given 𝑥 ∈ 2ℕ on its input
tape, terminates and decides whether 𝑥 ∈ 𝑆 .
There is a standard enumeration 𝑀0 , 𝑀1 , 𝑀2 , . . . of infinite-time Turing
machines, where 𝑀𝑛 is the machine whose program is encoded by
the number 𝑛 in some reasonable manner. The associated enumeration
𝝍0 , 𝝍1 , 𝝍2 , . . . of infinite-time computable partial functions ℕ ⇀ ℕ is
defined as
𝝍 𝑛 (𝑘) = { 𝑚 if 𝑀𝑛 (𝑘) terminates and outputs 𝑚; undefined otherwise }.

We may similarly define an enumeration 𝝍 (𝑘) 0 , 𝝍 (𝑘) 1 , 𝝍 (𝑘) 2 , . . . of 𝑘 -ary
infinite-time computable partial maps ℕ 𝑘 ⇀ ℕ .
The enumeration satisfies the smn and utm theorems.

Theorem 2.1.17 (infinite-time smn) There is an infinite time computable


total map 𝑠 : ℕ × ℕ → ℕ such that 𝝍 𝑠(𝑚,𝑛) (𝑗) = 𝝍 (2) 𝑚 (𝑛, 𝑗) for all 𝑚, 𝑛, 𝑗 ∈
ℕ.

Theorem 2.1.18 (infinite-time utm) There is an infinite-time computable


partial map 𝑢 : ℕ × ℕ ⇀ ℕ such that 𝝍 𝑡 (𝑚) = 𝑢(𝑡, 𝑚) for all 𝑡, 𝑚 ∈ ℕ .

To convince ourselves that the utm theorem holds, we think a bit how a
universal infinite-time Turing machine works. It accepts the description 𝑛
of a machine and the initial input tape 𝑥 . At successor steps the simulation
of machine 𝑀𝑛 on input 𝑥 proceeds much like it does for the ordinary
Turing machines. Thus it takes finitely many successor steps to simulate

one successor step of 𝑀𝑛 . Each limit step of 𝑀𝑛 is simulated by one


limit step of the universal machine, followed by finitely many successor
steps. Indeed, whenever the universal machine finds itself in the special
limit state, it puts the simulated machine in the simulated limit state,
and moves the simulated heads to the beginnings of the simulated tapes.
These actions take finitely many steps. The contents of the simulated
tapes need not be worried about, as it will be updated correctly at limit
stages.
To see what sort of tasks can be performed by infinite-time Turing
machines, we consider several examples that will be useful later on.

Example 2.1.19 There is a machine which decides whether two infinite


sequences 𝑥, 𝑦 ∈ 2ℕ are equal. It first initializes a fresh work cell with 0,
and then for each 𝑘 , it compares 𝑥(𝑘) and 𝑦(𝑘). If they differ, it sets
the work cell to 1. After 𝜔 steps the work cell will be 1 if, and only if,
𝑥 ≠ 𝑦.

Example 2.1.20 A more complicated problem is to semidecide whether


a given machine 𝑀𝑛 computes a given sequence 𝑥 ∈ 2ℕ . The machine
which performs such a task accepts 𝑛 and 𝑥 as inputs and begins by
writing down the sequence 𝑦 𝑘 = 𝝍 𝑛 (𝑘) onto a work tape. This it can do
by simulating 𝑀𝑛 successively on inputs 0 , 1 , 2 , . . . and writing down
the values 𝑦 𝑘 as they are obtained. The machine also keeps track for
which 𝑘 ’s the values 𝑦 𝑘 have been computed by flipping the 𝑘 -th bit
of a separate “tally” tape from 0 to 1 whenever 𝑀𝑛 (𝑘) terminates. If
any of the 𝑦 𝑘 ’s is undefined, the machine will run forever. Otherwise
it will be able to detect in 𝜔 steps that the entire sequence 𝑦 has been
computed and written down by checking that all bits on the separate
“tally” tape have been flipped to 1. After that, the machine verifies that
𝑥 𝑘 = 𝑦 𝑘 for all 𝑘 ∈ ℕ , as described previously.

Example 2.1.21 Suppose we have a machine 𝑀 which expects as input


an infinite sequence 𝑥 and a number 𝑛 . We would like to construct
another machine which accepts an infinite sequence 𝑥 and outputs
a number 𝑛 such that 𝑀(𝑥, 𝑛) terminates, if one exists. We use the
familiar dovetailing technique to tackle the problem. Given 𝑥 ∈ 2ℕ as
input, we simulate in parallel the executions of machine 𝑀 on inputs
of the form (𝑥, 𝑛), one for each 𝑛 :

𝑀(𝑥, 0), 𝑀(𝑥, 1), 𝑀(𝑥, 2), ...

Each of these requires several infinite tapes, but since we only need
countably many of them, they may be interleaved into a single tape.
At successor steps the simulation performs the usual dovetailing
technique. At limit steps the simulation inserts extra 𝜔 bookkeeping
steps, during which it places the simulated machines in the “limit”
state and moves their head positions. The extra steps do not ruin the
limits of the simulated tapes, because those are left untouched. After
the extra steps are performed, dovetailing starts over again. As soon
as one of the simulations 𝑀(𝑥, 𝑛) terminates, we return the result 𝑛 .
Note that 𝑛 is computed from 𝑥 in a deterministic fashion (that depends

on the details of the dovetailing and simulation).

2.2 Scott’s graph model

A model of computation may introduce features which are not easily


detected, until we compare it with other models. For example, the
innocuous looking idea that the input be stored on a tape gives a type 2
machine the ability to take into account the order in which data appear.
In this section we consider a different model of infinite computation,
the graph model, introduced by Dana Scott [35], in which we use sets of [35]: Scott (1976), “Data Types as Lat-
numbers rather than sequences. tices”

How is computation of a map 𝑓 : P(ℕ ) → P(ℕ ) on the powerset of


natural numbers to be performed? One natural idea would be to use
computation with respect to an oracle: 𝑓 is computable if there is a Turing
machine which computes 𝑓 (𝐴) when given 𝐴 as an oracle, i.e., it may
test membership in 𝐴. However, this is still just type 2 computability in
disguise, because asking an oracle whether a number belongs to 𝐴 is
equivalent to having an infinite input tape with a 1 in the 𝑛 -th cell when
𝑛 ∈ 𝐴, and a 0 otherwise.
An oracle provides both positive and negative information about member-
ship in 𝐴. In contrast, Scott’s graph model operates only with positive
information. Rather than describing it explicitly as a kind of Turing
machines, we shall take a different route this time and first describe
the topological aspects of the model. Computability will then follow
naturally.
The set P(ℕ ) may be equipped with a topology in two natural ways.
One is the product topology arising from the observation that P(ℕ ) is
isomorphic to the countable product 2ℕ . This topology encodes positive
and negative information. The other is the Scott topology, which arises
from the lattice structure of P(ℕ ), ordered by ⊆ . A subbasic open set for
the Scott topology on P(ℕ ) is one of the form

↑𝑛 = {𝐴 ⊆ ℕ | 𝑛 ∈ 𝐴}.

By forming finite intersections we get the basic open sets

↑{𝑛0 , . . . , 𝑛 𝑘−1 } = {𝐴 ⊆ ℕ | {𝑛0 , . . . , 𝑛 𝑘−1 } ⊆ 𝐴}.

Let us write
𝐴 ≪ 𝐵 ⇐⇒ 𝐴 ⊆ 𝐵 and 𝐴 is finite.
We may use 𝐴 ≪ ℕ as a convenient shorthand for “𝐴 is a finite subset
of ℕ ”. In the induced topology U ⊆ P(ℕ ) is open if, and only if,
U = ⋃ {↑𝐴 | 𝐴 ∈ U ∧ 𝐴 ≪ ℕ }

or equivalently
𝐵 ∈ U ⇐⇒ ∃𝐴 ≪ 𝐵. 𝐴 ∈ U.
It follows that a Scott open set U is upward closed: if 𝐵 ∈ U and 𝐵 ⊆ 𝐶
then 𝐶 ∈ U. Henceforth we let ℙ denote P(ℕ ) qua topological
space equipped with the Scott topology.

In information processing and computation the open sets are not about
geometry but about (positively) observable properties. A basic obser-
vation about a set 𝐵 ∈ P(ℕ ) is that it contains a number 𝑛 , whence
the Scott topology is generated by sets of the form ↑𝑛 . The Scott topol-
ogy is not Hausdorff, not even a 𝑇1 -space, but is a 𝑇0 -space.4 Indeed, if 4: The 𝑇0 separation property is a form
𝐴, 𝐵 ∈ P(ℕ ) have the same neighborhoods, then they have the same of Leibniz’s principle of identity which
states that two things are equal if they
subbasic neighborhoods ↑𝑛 , but then they have the same elements. have exactly the same properties.
Another way to get the Scott topology of ℙ is to observe that P(ℕ ) is in
bijective correspondence with the set of all functions {⊥, ⊤}ℕ . If we equip
the two-element set 𝕊 = {⊥, ⊤} with the Sierpinski topology in which
the open sets are ∅, {⊤}, and 𝕊, then ℙ turns out to be homeomorphic to
𝕊ℕ equipped with the product topology.
Next we characterize the continuous maps on ℙ.

Proposition 2.2.1 The following are equivalent for a map 𝑓 : ℙ → ℙ:


1. 𝑓 is continuous,
2. 𝑓 (𝐵) = ⋃ { 𝑓 (𝐴) | 𝐴 ≪ 𝐵} for all 𝐵 ∈ ℙ,
3. 𝑓 preserves directed unions.

Proof. A map 𝑓 : ℙ → ℙ is continuous precisely when the inverse image


𝑓 ∗ (↑𝑛) of every subbasic open set is open. By noting that 𝐵 ∈ 𝑓 ∗ (↑𝑛) is
equivalent to 𝑛 ∈ 𝑓 (𝐵) and using the characterization of Scott open sets,
we may phrase continuity of 𝑓 as

∀𝑛 ∈ ℕ . ∀𝐵 ∈ ℙ. (𝑛 ∈ 𝑓 (𝐵) ⇐⇒ ∃𝐴 ≪ 𝐵. 𝑛 ∈ 𝑓 (𝐴)),

which is equivalent to requiring, for all 𝐵 ∈ ℙ,

𝑓 (𝐵) = ⋃ { 𝑓 (𝐴) | 𝐴 ≪ 𝐵}.

We have proved the equivalence of the first two statements. Since {𝐴 |


𝐴 ≪ 𝐵} is a directed family, the third statement obviously implies the
first one. The remaining implication is established as follows. Suppose
𝑓 : ℙ → ℙ satisfies the second statement and F ⊆ ℙ is a directed
family. Observe that the families G = {𝐴 ∈ ℙ | ∃𝐵 ∈ F. 𝐴 ≪ 𝐵} and
H = {𝐴 ∈ ℙ | 𝐴 ≪ ⋃ F} are actually the same, both are directed, and
⋃ F = ⋃ H. Then

𝑓 (⋃ F) = ⋃ { 𝑓 (𝐴) | 𝐴 ≪ ⋃ F}
= ⋃ { 𝑓 (𝐴) | 𝐴 ∈ H}
= ⋃ { 𝑓 (𝐴) | 𝐴 ∈ G}
= ⋃ { ⋃ { 𝑓 (𝐴) | 𝐴 ≪ 𝐵} | 𝐵 ∈ F}
= ⋃ { 𝑓 (𝐵) | 𝐵 ∈ F}.

We used the second statement in the first and last line.

A continuous map on ℙ is also called an enumeration operator.5 The 5: The terminology (probably) origi-
second part of the last proposition says that every enumeration operator nates from computability theory, where
the Scott continuous maps on ℙ corre-
is determined by its values on finite sets. Thus, to encode an enumeration spond to higher-order functions operat-
ing on computably enumerable sets.

operator as a set of numbers, it suffices to encode its values on finite sets.


We encode 𝐴 ≪ ℕ as

⌜ 𝐴 ⌝ = ∑ 𝑛∈𝐴 2^𝑛 ,

and assign to every continuous 𝑓 : ℙ → ℙ its graph6 6: The graph of 𝑔 : 𝑋 → 𝑌 is usually


defined as {(𝑥, 𝑦) ∈ 𝑋 × 𝑌 | 𝑓 (𝑥) = 𝑦}.
Γ( 𝑓 ) = { ⌜ ( ⌜ 𝐴 ⌝ , 𝑛) ⌝ ∈ ℕ | 𝐴 ≪ ℕ ∧ 𝑛 ∈ 𝑓 (𝐴)}. Our definition records the finitary part of
the graph of an enumeration operator.
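The coding of finite sets is just ordinary binary representation; here is a minimal Haskell sketch (the names encodeSet and decodeSet are ours).

```haskell
import Data.List (nub)

-- Encode a finite set of naturals as a number: the code of A is the sum
-- of 2^n over n in A.
encodeSet :: [Integer] -> Integer
encodeSet a = sum [2 ^ n | n <- nub a]

-- Decode: n is in the set exactly when the n-th binary digit of m is 1.
-- The bound [0 .. m] is crude but safe, since bits above log2 m are zero.
decodeSet :: Integer -> [Integer]
decodeSet m = [n | n <- [0 .. m], odd (m `div` 2 ^ n)]
```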

Conversely, to every 𝐴 ∈ ℙ we assign a map Λ(𝐴) : ℙ → ℙ, defined by

Λ(𝐴)(𝐵) = {𝑛 ∈ ℕ | ∃𝐶 ≪ 𝐵. ⌜ ( ⌜ 𝐶 ⌝ , 𝑛) ⌝ ∈ 𝐴},

which is easily seen to be continuous. Moreover, for any continuous


𝑓 : ℙ → ℙ, 𝐵 ∈ ℙ, and 𝑛 ∈ ℕ we have

Λ(Γ( 𝑓 ))(𝐵) = {𝑛 ∈ ℕ | ∃𝐶 ≪ 𝐵. ⌜ ( ⌜ 𝐶 ⌝ , 𝑛) ⌝ ∈ Γ( 𝑓 )}
= {𝑛 ∈ ℕ | ∃𝐶 ≪ 𝐵. 𝑛 ∈ 𝑓 (𝐶)}
[
= { 𝑓 (𝐶) | 𝐶 ≪ 𝐵}
= 𝑓 (𝐵),

where we appealed to continuity of 𝑓 in the last step. Therefore, Γ and Λ


form a section-retraction pair7 7: Given 𝑠 : 𝑋 → 𝑌 and 𝑟 : 𝑌 → 𝑋 such
that 𝑟 ◦ 𝑠 = id𝑋 , we say that 𝑠 is a section
of the retraction 𝑟 . Together they form a
Γ : C(ℙ , ℙ) → ℙ , Λ : ℙ → C(ℙ , ℙ), Λ ◦ Γ = id (2.3) section-retraction pair.

and are continuous when the set of continuous maps C(ℙ , ℙ) is equipped
with the compact-open topology. Concretely, the topology of C(ℙ , ℙ) is
generated by subbasic open sets

{ 𝑓 ∈ C(ℙ , ℙ) | 𝑛 ∈ 𝑓 (𝐴)}

where 𝐴 ≪ ℕ and 𝑛 ∈ ℕ . We shall use Γ and Λ in Section 2.4 to model


the untyped 𝜆-calculus in ℙ.

Exercise 2.2.2 There is a pairing function ⟨−, −⟩ : ℙ × ℙ → ℙ which


interleaves sets 𝐴 and 𝐵 as even and odd numbers, respectively,

⟨𝐴, 𝐵⟩ = {2𝑚 | 𝑚 ∈ 𝐴} ∪ {2𝑛 + 1 | 𝑛 ∈ 𝐵}.

Verify that ⟨−, −⟩ is a homeomorphism between ℙ × ℙ and ℙ.
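A sketch of the pairing and its inverses, with subsets of ℕ represented as membership predicates (our representation, and the names pairP, fstP, sndP are ours):

```haskell
type Pow = Integer -> Bool  -- a subset of N as its membership predicate

-- <A, B> sends A to the even numbers and B to the odd numbers.
pairP :: Pow -> Pow -> Pow
pairP a b n = if even n then a (n `div` 2) else b (n `div` 2)

-- The two projections, recovering A and B from <A, B>.
fstP, sndP :: Pow -> Pow
fstP c m = c (2 * m)
sndP c n = c (2 * n + 1)
```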

Let us now discuss the role of ℙ as a model of computation. An enumer-


ation of a set 𝐴 ⊆ ℕ is a function 𝑒 : ℕ → ℕ such that

𝐴 = {𝑛 ∈ ℕ | ∃𝑘 ∈ ℕ . 𝑒(𝑘) = 𝑛 + 1}.

In words, 𝑒 enumerates the elements of 𝐴, incremented by 1. The incre-


ment is needed so that 𝑒 may enumerate the empty set by outputting
only zeroes. The enumeration 𝑒 may enumerate an element of 𝐴 many
times. Clearly, every 𝐴 ⊆ ℕ has an enumeration.
A computably enumerable set (c.e. set)8 𝐴 ⊆ ℕ is one that has a com- 8: Such sets are also called “recursively
putable enumeration 𝑒 . Define the 𝑘 -th stage of 𝑒 : ℕ → ℕ to be the set enumerable (r.e.)” because “computable
functions” used to be called “(general)
recursive functions”.

of elements enumerated by the first 𝑘 terms of 𝑒 ,

𝑒 | 𝑘 = {𝑛 ∈ ℕ | ∃𝑗 < 𝑘. 𝑒(𝑗) = 𝑛 + 1}.

The stages form an increasing chain of finite sets

𝑒 |0 ⊆ 𝑒 |1 ⊆ 𝑒 |2 ⊆ · · ·

whose union is the set enumerated by 𝑒 .
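With enumerations represented as Haskell functions, the stages can be sketched directly (the representation is ours):

```haskell
import Data.List (nub)

-- The k-th stage of an enumeration e: the elements enumerated by its
-- first k terms, remembering that e(j) = n + 1 enumerates n and that
-- e(j) = 0 is padding.
stage :: (Integer -> Integer) -> Integer -> [Integer]
stage e k = nub [e j - 1 | j <- [0 .. k - 1], e j /= 0]
```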


We say that an enumeration operator 𝑓 : ℙ → ℙ is computable when
its graph Γ( 𝑓 ) is a c.e. set. In what sense can we “compute” with a
computable enumeration operator? Suppose the graph of 𝑓 : ℙ → ℙ
is enumerated by 𝑒 𝑓 and 𝐴 ⊆ ℕ is enumerated by 𝑒 𝐴 . Then we may
compute an enumeration 𝑒 𝐵 of 𝐵 = 𝑓 (𝐴) as
𝑒 𝐵 ( ⌜ (𝑛, 𝑖, 𝑗) ⌝ ) = { 𝑛 + 1 if ⌜ ( ⌜ 𝑒 𝐴 | 𝑖 ⌝ , 𝑛) ⌝ ∈ 𝑒 𝑓 | 𝑗 ; 0 otherwise }.

Indeed, suppose 𝑒 𝐵 ( ⌜ (𝑛, 𝑖, 𝑗) ⌝ ) = 𝑛 + 1. Then 𝑛 ∈ 𝑓 (𝑒 𝐴 | 𝑖 ), hence 𝑛 ∈ 𝑓 (𝐴) because 𝑒 𝐴 | 𝑖 ≪ 𝐴 and 𝑓 is monotone.


Conversely, if 𝑛 ∈ 𝑓 (𝐴), there exists 𝐶 ≪ 𝐴 such that 𝑛 ∈ 𝑓 (𝐶), and for
large enough 𝑖 and 𝑗 we have 𝐶 ⊆ 𝑒 𝐴 | 𝑖 and ⌜ ( ⌜ 𝑒 𝐴 | 𝑖 ⌝ , 𝑛) ⌝ ∈ 𝑒 𝑓 | 𝑗 , so that
𝑒 𝐵 ( ⌜ (𝑛, 𝑖, 𝑗) ⌝ ) = 𝑛 + 1.
You might expect to see the utm and smn theorems for ℙ at this point,
but we take a different route to computing with ℙ, namely by using (2.3)
to interpret the 𝜆-calculus in ℙ, see Section 2.4. We conclude the section
by observing that enumeration operators yield the standard notion of
computability on numbers.

Exercise 2.2.3 Show that a partial map 𝑓 : ℕ ⇀ ℕ is computable if, and


only if, there exists a computable enumeration operator 𝐹 : ℙ → ℙ
such that, for all 𝑛 ∈ ℕ ,
𝐹({𝑛}) = { { 𝑓 (𝑛)} if 𝑓 (𝑛)↓; ∅ otherwise }.

2.3 Church’s 𝜆-calculus

We have so far considered a model of computation based on Turing


machines, and two models of a topological nature, the Baire space 𝔹
and the graph model ℙ. We now look at a purely syntactic model, the 𝜆-
calculus, in which computation is expressed as manipulation of symbolic
expressions. It was proposed by Alonzo Church [8] as a notion of general
computation before Turing invented his machines. Only later did it turn
out that Church’s and Turing’s models can simulate each other.

[8]: Church (1932), “A set of postulates for the foundation of logic”
The 𝜆-calculus is the abstract theory of functions, just like group theory
is the abstract theory of symmetries. There are two basic operations that
can be performed with functions. The first one is the application of a
function to an argument: if 𝑓 is a function and 𝑎 is an argument, then 𝑓 𝑎
is the application of 𝑓 to 𝑎 . The second operation is abstraction: if 𝑥 is a

variable and 𝑡 is an expression in which 𝑥 may appear freely, then there


is a function 𝑓 defined by
𝑓 (𝑥) = 𝑡.
We named the newly formed function 𝑓 , but we could have specified it
without naming it by writing “ 𝑥 is mapped to 𝑡 ” as

𝑥 ↦→ 𝑡.

In 𝜆-calculus we write the above as a 𝜆-abstraction

𝜆𝑥. 𝑡.

For example, 𝜆𝑥. 𝜆𝑦. (𝑥 2 + 𝑦 3 ) is the function which maps an argument 𝑎


to the function 𝜆𝑦. (𝑎 2 + 𝑦 3 ). In the expression 𝜆𝑥. 𝑡 the variable 𝑥 is
bound in 𝑡 .
There are two kinds of 𝜆-calculus, the typed and the untyped one. In the
untyped version there are no restrictions on how application is formed,
so that an expression such as

𝜆𝑥. 𝑥 𝑥

is valid, whatever it means. In the simply typed 𝜆-calculus every expres-


sion has a type, and there are rules for forming valid expressions and
types. For example, we can only form an application 𝑓 𝑎 when 𝑎 has
a type 𝐴 and 𝑓 has a type 𝐴 → 𝐵, which indicates a function taking
arguments of type 𝐴 and giving results of type 𝐵. We postpone discussion
of the typed 𝜆-calculus to Subsection 2.7.2.
In the untyped version no restrictions are imposed on application and
abstraction. More precisely, the calculus consists of:
▶ An infinite supply of variables 𝑥, 𝑦, 𝑧, . . .,
▶ For any expressions 𝑒1 and 𝑒2 we may form their application 𝑒1 𝑒2 .
Application associates to the left so that 𝑒1 𝑒2 𝑒3 = (𝑒1 𝑒2 ) 𝑒3 .
▶ If 𝑒 is an expression, then 𝜆𝑥. 𝑒 is its abstraction, where 𝑥 is bound
in 𝑒 . Think of 𝜆𝑥. 𝑒 as the function which maps 𝑥 to 𝑒 . We abbreviate
a nested abstraction 𝜆𝑥 1 . · · · 𝜆𝑥 𝑛 . 𝑒 as 𝜆𝑥 1 𝑥 2 . . . 𝑥 𝑛 . 𝑒 .
The above can be expressed succinctly by the grammar rules:

Variable 𝑣 ::= 𝑥 | 𝑦 | 𝑧 | · · ·
Expression 𝑒 ::= 𝑣 | 𝑒1 𝑒2 | 𝜆𝑥. 𝑒

There are no constants, numbers, or other primitives – we are studying


the pure 𝜆-calculus.
Expressions which only differ in the naming of bound variables are equal,
thus 𝜆𝑥. 𝑦 𝑥 = 𝜆𝑧. 𝑦 𝑧 ≠ 𝜆𝑦. 𝑦 𝑦 . Substitution replaces free variables with
expressions. We write 𝑒[𝑒1 /𝑥 1 , . . . , 𝑒 𝑛 /𝑥 𝑛 ] for a simultaneous substitution
of expressions 𝑒1 , . . . , 𝑒 𝑛 for variables 𝑥 1 , . . . , 𝑥 𝑛 in 𝑒 , respectively. The
usual rules for bound variables must be observed when we perform
substitutions.9

9: It is notoriously easy to commit errors when defining the details of substitution. The best way to understand all the intricacies is to write a program that performs substitutions.

The basic axiom of 𝜆-calculus is 𝛽 -reduction:
(𝜆𝑥. 𝑒1 ) 𝑒2 = 𝑒1 [𝑒2 /𝑥].

It says that the application of a function 𝜆𝑥. 𝑒1 to an argument 𝑒2 is


computed by replacing 𝑥 with 𝑒2 in the function body 𝑒1 . A second axiom,
which is sometimes assumed is 𝜂-reduction, which says that

𝜆𝑥. 𝑒 𝑥 = 𝑒 ,

provided 𝑥 does not occur freely in 𝑒 . We will not assume 𝜂-reduction


unless explicitly stated otherwise.
A sub-expression of the form (𝜆𝑥. 𝑒1 ) 𝑒2 is called a 𝛽 -redex. If 𝑒 contains
such a redex, we may replace it by 𝑒1 [𝑒2 /𝑥] to obtain a new expression 𝑒 ′.
We say that we performed a 𝛽 -reduction and write

𝑒 ↦→ 𝑒 ′

A chain of reductions 𝑒 ↦→ 𝑒 ′ ↦→ · · · ↦→ 𝑒 ′′ is written 𝑒 ↦→∗ 𝑒 ′′.
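As the margin note suggests, the best way to understand substitution is to program it. Below is a minimal sketch in Python, using nested tuples of our own devising for terms and naive fresh-variable renaming to avoid capture; it performs one leftmost-outermost 𝛽-step:

```python
import itertools

# λ-terms as nested tuples: ('var', x), ('app', e1, e2), ('lam', x, e)
fresh = (f"_v{i}" for i in itertools.count())

def free_vars(e):
    tag = e[0]
    if tag == 'var': return {e[1]}
    if tag == 'app': return free_vars(e[1]) | free_vars(e[2])
    return free_vars(e[2]) - {e[1]}          # ('lam', x, body)

def subst(e, x, s):
    """Capture-avoiding substitution e[s/x]."""
    tag = e[0]
    if tag == 'var':
        return s if e[1] == x else e
    if tag == 'app':
        return ('app', subst(e[1], x, s), subst(e[2], x, s))
    y, body = e[1], e[2]
    if y == x:                                # x is bound here; stop
        return e
    if y in free_vars(s):                     # rename to avoid capture
        z = next(fresh)
        body = subst(body, y, ('var', z))
        y = z
    return ('lam', y, subst(body, x, s))

def beta(e):
    """One leftmost-outermost β-step, or None if no redex exists."""
    tag = e[0]
    if tag == 'app' and e[1][0] == 'lam':
        return subst(e[1][2], e[1][1], e[2])
    if tag == 'app':
        r = beta(e[1])
        if r is not None: return ('app', r, e[2])
        r = beta(e[2])
        return None if r is None else ('app', e[1], r)
    if tag == 'lam':
        r = beta(e[2])
        return None if r is None else ('lam', e[1], r)
    return None

# (λx. λy. x) y  ↦  λ_v0. y   (the bound y is renamed to avoid capture)
t = ('app', ('lam', 'x', ('lam', 'y', ('var', 'x'))), ('var', 'y'))
print(beta(t))   # prints: ('lam', '_v0', ('var', 'y'))
```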


In a given expression there may be several 𝛽 -redexes. For example, we
can reduce ((𝜆𝑥. 𝑥) 𝑎) ((𝜆𝑦. 𝑦) 𝑏) either as

((𝜆𝑥. 𝑥) 𝑎) ((𝜆𝑦. 𝑦) 𝑏) ↦→ 𝑎 ((𝜆𝑦. 𝑦) 𝑏)

or as
((𝜆𝑥. 𝑥) 𝑎) ((𝜆𝑦. 𝑦) 𝑏) ↦→ ((𝜆𝑥. 𝑥) 𝑎) 𝑏.
A theorem of Church and Rosser [9] states that 𝜆-calculus is confluent,
which means that the order of 𝛽 -reductions is not important, in the
sense that two different ways of reducing an expression may always be
reconciled by further reductions. In the above example we get 𝑎 𝑏 in both
cases after one more reduction.

[9]: Church et al. (1936), “Some Properties of Conversion”
There are expressions which we can keep reducing forever, for example
the term 𝜔 𝜔 where 𝜔 = 𝜆𝑥. 𝑓 (𝑥 𝑥) has an infinite reduction sequence

𝜔 𝜔 ↦→ 𝑓 (𝜔 𝜔) ↦→ 𝑓 ( 𝑓 (𝜔 𝜔)) ↦→ 𝑓 ( 𝑓 ( 𝑓 (𝜔 𝜔))) ↦→ · · ·

An expression in which no 𝛽 -reductions are possible is called a normal
form. Think of normal forms10 as “finished” computations, and those
which cannot be reduced to a normal form as “non-terminating” computations.

10: From a programming-language perspective, it is unusual to compute under a 𝜆-abstraction, as we do to reach a normal form. A good alternative is to consider the weak-head normal form, which avoids doing so.

We outline programming in 𝜆-calculus but do not provide the proofs.
First, a pairing with projections may be defined as follows:

pair = 𝜆𝑥 𝑦𝑧. 𝑧 𝑥 𝑦,
fst = 𝜆𝑝. 𝑝 (𝜆𝑥 𝑦. 𝑥),
snd = 𝜆𝑝. 𝑝 (𝜆𝑥 𝑦. 𝑦).

With these we have

fst (pair 𝑎 𝑏) = 𝑎 and snd (pair 𝑎 𝑏) = 𝑏,

for instance

fst (pair 𝑎 𝑏) = (𝜆𝑧. 𝑧 𝑎 𝑏) (𝜆𝑥 𝑦. 𝑥) = (𝜆𝑥 𝑦. 𝑥) 𝑎 𝑏 = 𝑎.
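Because Python has first-class functions, these encodings transcribe literally, currying each argument:

```python
# Church pairing, transcribed from pair = λxyz. z x y and its projections
pair = lambda x: lambda y: lambda z: z(x)(y)
fst  = lambda p: p(lambda x: lambda y: x)
snd  = lambda p: p(lambda x: lambda y: y)

p = pair("a")("b")
print(fst(p), snd(p))   # prints: a b
```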



The Boolean values and the conditional statement are encoded as

if = 𝜆𝑥. 𝑥,
true = 𝜆𝑥 𝑦. 𝑥,
false = 𝜆𝑥 𝑦. 𝑦.

They satisfy

if false 𝑎 𝑏 = 𝑏 and if true 𝑎 𝑏 = 𝑎.

The natural numbers are encoded by Church numerals. The 𝑛 -th Church
numeral is a function which maps a function to its 𝑛 -th iteration:

0 = 𝜆 𝑓 𝑥. 𝑥,
1 = 𝜆 𝑓 𝑥. 𝑓 𝑥,
2 = 𝜆 𝑓 𝑥. 𝑓 ( 𝑓 𝑥),

and in general

𝑛 = 𝜆 𝑓 𝑥. 𝑓 (· · · ( 𝑓 𝑥) · · · ), with 𝑛 applications of 𝑓 .

The successor, addition and multiplication operations are as follows:

succ = 𝜆𝑛 𝑓 𝑥. 𝑛 𝑓 ( 𝑓 𝑥),
add = 𝜆𝑚 𝑛 𝑓 𝑥. 𝑚 𝑓 (𝑛 𝑓 𝑥),
mult = 𝜆𝑚 𝑛 𝑓 𝑥. 𝑚(𝑛 𝑓 )𝑥.
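These definitions can be checked in Python; the decoder `to_int` is a helper of ours, not part of the calculus:

```python
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: n(f)(f(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mult = lambda m: lambda n: lambda f: lambda x: m(n(f))(x)

to_int = lambda n: n(lambda k: k + 1)(0)   # decode by iterating +1 on 0

two, three = succ(succ(zero)), succ(succ(succ(zero)))
print(to_int(add(two)(three)), to_int(mult(two)(three)))   # prints: 5 6
```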

We leave it as an exercise to figure out how the following work and what
they do:11

power = 𝜆𝑚 𝑛. 𝑛 𝑚,
iszero = 𝜆𝑛. 𝑛 (𝜆𝑥. false) true ,
pred = 𝜆𝑛. snd (𝑛 (𝜆𝑝. pair (succ (fst 𝑝)) (fst 𝑝))(pair 0 0)).

11: Stephen Kleene recounts [20] that he figured out how to compute predecessors while at a dentist’s office. Is programming the untyped 𝜆-calculus like pulling one’s teeth out?
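Kleene's predecessor trick can be experimented with directly. A Python transcription, with our own helper `to_int` for decoding:

```python
pair = lambda x: lambda y: lambda z: z(x)(y)
fst  = lambda p: p(lambda x: lambda y: x)
snd  = lambda p: p(lambda x: lambda y: y)

zero   = lambda f: lambda x: x
succ   = lambda n: lambda f: lambda x: n(f)(f(x))
to_int = lambda n: n(lambda k: k + 1)(0)     # decoding helper, ours

# pred = λn. snd (n (λp. pair (succ (fst p)) (fst p)) (pair 0 0)):
# iterating `step` n times over (0, 0) produces the pair (n, n - 1).
step = lambda p: pair(succ(fst(p)))(fst(p))
pred = lambda n: snd(n(step)(pair(zero)(zero)))

three = succ(succ(succ(zero)))
print(to_int(pred(three)))   # prints: 2
```

Note that `pred(zero)` decodes to 0, because iterating the step zero times leaves the starting pair (0, 0) untouched.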

Recursion is accomplished by means of the fixed-point operator

fix = 𝜆 𝑓 . (𝜆𝑥. 𝑓 (𝑥 𝑥))(𝜆𝑥. 𝑓 (𝑥 𝑥)).

For any 𝑎 we have

fix 𝑎 = (𝜆𝑥. 𝑎 (𝑥 𝑥))(𝜆𝑥. 𝑎 (𝑥 𝑥))


= 𝑎 ((𝜆𝑥. 𝑎 (𝑥 𝑥)) (𝜆𝑥. 𝑎 (𝑥 𝑥)))
= 𝑎 (fix 𝑎).

The fixed-point operator is used to define recursive functions, for example


equality of numbers is computed as follows:

equal = fix (𝜆𝑒 𝑚 𝑛. if (iszero 𝑚) (iszero 𝑛) (𝑒 (pred 𝑚) (pred 𝑛))).
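This, too, can be replayed in Python, with one caveat: Python evaluates eagerly, so fix as written above loops forever. We therefore η-expand the self-application to delay it (essentially the combinator Z of the next section), and let Python numbers with `== 0` and `- 1` stand in for iszero and pred:

```python
# fix = λf.(λx. f (x x))(λx. f (x x)) diverges in an eager language such as
# Python, so we η-expand the self-application to delay it:
fix = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# equal, with Python numbers standing in for the Church-numeral helpers:
equal = fix(lambda e: lambda m: lambda n:
            (n == 0) if m == 0 else (False if n == 0 else e(m - 1)(n - 1)))

print(equal(3)(3), equal(3)(5))   # prints: True False
```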

By continuing in this manner we can build a general-purpose program-


ming language. It turns out that the untyped 𝜆-calculus computes exactly
the same partial functions ℕ ⇀ ℕ as Turing machines.

2.4 Reflexive domains

The Church-Rosser theorem implies that the untyped 𝜆-calculus is


consistent, i.e., not all expressions are equal. Indeed if 𝜆𝑥. 𝑥 and 𝜆𝑥 𝑦. 𝑥
were equal there would be a sequence of 𝛽 -reductions (performed in
either direction) leading from one to the other. By confluence we would
obtain a normal form to which both reduce, but that cannot be since they
already are distinct normal forms.
Still the question remains what the untyped 𝜆-calculus is about, speaking
mathematically as opposed to formalistically. A naive attempt at an
interpretation runs into difficulties. Suppose we interpret the expressions
of the 𝜆-calculus as the elements of a set 𝐷 , where 𝜆-abstraction should
correspond to formation of functions 𝐷 → 𝐷 , and 𝜆-application to ap-
plication of such functions to elements of 𝐷 . Because every 𝜆-expression
may be used either as an argument or a function, we require maps

Γ : 𝐷^𝐷 → 𝐷 and Λ : 𝐷 → 𝐷^𝐷 (2.4)

that mediate between the two roles. These can be used to interpret each
𝜆-expression 𝑒 with free variables 𝑥1 , . . . , 𝑥 𝑛 as a map

[[𝑥 1 , . . . , 𝑥 𝑛 | 𝑒]] : 𝐷 𝑛 → 𝐷

as follows, where 𝑥® = (𝑥 1 , . . . , 𝑥 𝑛 ) and 𝑎® = (𝑎 1 , . . . , 𝑎 𝑛 ) ∈ 𝐷 𝑛 :

[[ 𝑥® | 𝑥 𝑖 ]] 𝑎® = 𝑎 𝑖
[[ 𝑥® | 𝑒1 𝑒2 ]] 𝑎® = Λ([[ 𝑥® | 𝑒1 ]] 𝑎®)([[ 𝑥® | 𝑒2 ]] 𝑎®), (2.5)
[[ 𝑥® | 𝜆𝑦. 𝑒]] 𝑎® = Γ(𝑏 ↦→ [[ 𝑥®, 𝑦 | 𝑒]](®𝑎 , 𝑏)).

Since we intend to interpret “functions as functions” and “application as


application”, we expect, for all 𝑓 : 𝐷 → 𝐷 and 𝑎 ∈ 𝐷 ,

[[𝑥, 𝑦 | 𝑥 𝑦]](Γ( 𝑓 ), 𝑎) = 𝑓 𝑎,

from which it follows that Γ is a section of Λ because

Λ(Γ 𝑓 ) 𝑎 = [[𝑥, 𝑦 | 𝑥 𝑦]](Γ( 𝑓 ), 𝑎) = 𝑓 𝑎.

The only set that contains its own function space as a retract is the
singleton set. The 𝜆-calculus has no non-trivial set-theoretic models. We
need to look elsewhere, but where?
An answer was given by Dana Scott [34] who constructed a non-trivial
topological space 𝐷∞ such that the space of continuous functions
C(𝐷∞ , 𝐷∞ ), equipped with the compact-open topology, is homeomorphic
to 𝐷∞ . This gave a topological model of the untyped 𝜆-calculus for
𝛽𝜂-reduction. Since the construction involves more domain theory than
we wish to assume here, we shall look at the simpler case of models that
satisfy just 𝛽 -reduction.

[34]: Scott (1972), “Continuous Lattices”
We seek a non-trivial topological space 𝐷 that continuously retracts onto
its own function space,12 i.e., there are continuous maps

Γ : C(𝐷, 𝐷) → 𝐷 and Λ : 𝐷 → C(𝐷, 𝐷) (2.6)

Such a space is called a reflexive domain.

12: 𝐷 must be nice enough for C(𝐷, 𝐷), equipped with the compact-open topology, to be an exponential in the category of topological spaces and continuous maps.


The denotations of 𝜆-expressions in a reflexive domain 𝐷 are given
by (2.5), and 𝛽 -reduction is valid thanks to Γ being a section of Λ:

[[ 𝑥® | (𝜆𝑦. 𝑒1 ) 𝑒2 ]] 𝑎® = Λ(Γ(𝑏 ↦→ [[ 𝑥®, 𝑦 | 𝑒1 ]] (®𝑎 , 𝑏))) ([[𝑥® | 𝑒2 ]] 𝑎®)


= (𝑏 ↦→ [[ 𝑥®, 𝑦 | 𝑒1 ]] (®𝑎 , 𝑏)) ([[ 𝑥® | 𝑒2 ]] 𝑎®)
= [[ 𝑥®, 𝑦 | 𝑒1 ]] (®𝑎 , ([[ 𝑥® | 𝑒2 ]] 𝑎®))
= [[ 𝑥® | 𝑒1 [𝑒2 /𝑦]]] 𝑎®.

Exercise 2.4.1 Prove the substitution lemma

[[ 𝑥®, 𝑦 | 𝑒1 ]] (®𝑎 , [[ 𝑥® | 𝑒2 ]] 𝑎®) = [[ 𝑥®, 𝑦 | 𝑒1 [𝑒2 /𝑦]]] 𝑎® ,

which we used in the last line above. You may proceed by induction
on the structure of 𝑒1 , but should first generalize the statement so that
the induction case of 𝜆-abstraction works out.

We have already seen a diagram like (2.6), namely the section-retraction


pair (2.3) for the graph model ℙ from Section 2.2. We thus have at
least one example of a reflexive domain. There are many other reflexive
domains, such as Plotkin’s 𝑇 𝜔 [31] and the universal Scott domain [14].

[31]: Plotkin (1978), “𝕋 𝜔 as a Universal Domain”
[14]: Gunter et al. (1990), “Semantic Domains”

To see how the 𝜆-calculus helps prove things about reflexive domains, let
us show that 𝐷 × 𝐷 is a retract of 𝐷 with the aid of pair = 𝜆𝑥 𝑦𝑧. 𝑧 𝑥 𝑦
from the previous section. Let 𝑝 : 𝐷 × 𝐷 → 𝐷 be defined as

𝑝(𝑎, 𝑏) = [[𝑥, 𝑦 | 𝜆𝑧. 𝑧 𝑥 𝑦]](𝑎, 𝑏),

and let 𝑟 : 𝐷 → 𝐷 × 𝐷 be the map

𝑟(𝑐) = ([[𝑢 | 𝑢 (𝜆𝑥 𝑦. 𝑥)]] 𝑐, [[𝑢 | 𝑢 (𝜆𝑥 𝑦. 𝑦)]] 𝑐).

Then 𝑟(𝑝(𝑎, 𝑏)) = (𝑎, 𝑏) because 𝜆-calculus proves

(𝜆𝑥 𝑦. 𝑥) 𝑥 𝑦 = 𝑥 and (𝜆𝑥 𝑦. 𝑦) 𝑥 𝑦 = 𝑦.

The retraction Λ decodes an element of 𝐷 as a continuous map 𝐷 → 𝐷 .


We may also decode 𝑎 ∈ 𝐷 as a continuous map Λ(2) 𝑎 : 𝐷 × 𝐷 → 𝐷
of two arguments, namely Λ(2) 𝑎 = Λ 𝑎 ◦ 𝑝 where 𝑝 : 𝐷 × 𝐷 → 𝐷 is the
above section.
It is now easy to state and prove the utm and smn theorems for reflexive
domains. Note however that they are somewhat redundant because the
𝜆-calculus already does all the work.

Theorem 2.4.2 (reflexive domain smn) Given a reflexive domain 𝐷 , there


is a continuous map 𝑠 : 𝐷 × 𝐷 → 𝐷 such that Λ(𝑠(𝑚, 𝑛)) 𝑎 = Λ(2) 𝑚 (𝑛, 𝑎)
for all 𝑚, 𝑛, 𝑎 ∈ 𝐷 .

Proof. Take 𝑠 = [[𝑚, 𝑛 | 𝜆𝑎. 𝑚 (𝜆𝑧. 𝑧 𝑛 𝑎)]].



Theorem 2.4.3 (reflexive domain utm) Given a reflexive domain 𝐷 , there


is a continuous map 𝑢 : 𝐷 × 𝐷 → 𝐷 such that 𝑢(𝑡, 𝑚) = Λ 𝑡 𝑚 for all
𝑡, 𝑚 ∈ 𝐷 .

Proof. Take 𝑢 = [[𝑥, 𝑦 | 𝑥 𝑦]].

2.5 Partial combinatory algebras

The Baire space from Subsection 2.1.2 is almost a model of the untyped
𝜆-calculus, because every 𝛼 ∈ 𝔹 may be viewed both as (a realizer of) a
function and as an argument. One is then tempted to define application by
𝛼 𝛽 = 𝜼 𝛼 (𝛽), but this fails because the result 𝜼 𝛼 (𝛽) need not be defined,
whereas application in 𝜆-calculus is a total operation. We now consider a
generalization of the 𝜆-calculus which allows application to be a partial
operation, and of which the Baire space is an example.

Definition 2.5.1 A partial combinatory algebra (pca) (𝔸 , ·) is a set 𝔸


with a partial binary operation · : 𝔸 × 𝔸 ⇀ 𝔸. We usually write
𝑥 𝑦 instead of 𝑥 · 𝑦 , and recall that application associates to the left.
Furthermore, there must exist K , S ∈ 𝔸 such that, for all 𝑥, 𝑦, 𝑧 ∈ 𝔸,

K · 𝑥 · 𝑦 = 𝑥, S · 𝑥 · 𝑦 · 𝑧 ≃ (𝑥 · 𝑧) · (𝑦 · 𝑧), and S · 𝑥 · 𝑦↓. (2.7)

A total combinatory algebra (CA) is a pca whose application is a total


operation.
A sub-pca of (𝔸 , ·) is a subset 𝔸′ ⊆ 𝔸 which is closed under application
and contains elements K , S ∈ 𝔸′ such that (2.7) is satisfied for all 𝑥, 𝑦, 𝑧 ∈ 𝔸.13

13: Read that carefully: the combinators K and S must come from the subalgebra 𝔸′ , but they must have the defining property with respect to 𝔸, and so consequently also with respect to 𝔸′ .

The definition looks strange at first. Where did K and S come from?
Theorem 2.5.2 below explains that the reason for this definition is
a property called combinatory completeness. For a pca 𝔸 we define
expressions over 𝔸 inductively as follows:
▶ every 𝑎 ∈ 𝔸 is an expression over 𝔸,
▶ a variable is an expression over 𝔸,
▶ if 𝑒1 and 𝑒2 are expressions then 𝑒1 · 𝑒2 is an expression over 𝔸.

Application associates to the left, 𝑥 · 𝑦 · 𝑧 = (𝑥 · 𝑦) · 𝑧 . When no confusion


can arise, we write 𝑥 𝑦 instead of 𝑥 · 𝑦 .
An expression is closed if it contains no variables. We say that a closed
expression 𝑒 is defined and write 𝑒↓ when all applications in 𝑒 are defined
so that 𝑒 denotes an element of 𝔸. More generally, if 𝑒 is an expression
with variables 𝑥 1 , . . . , 𝑥 𝑛 , we write 𝑒↓ when, for all 𝑎 1 , . . . , 𝑎 𝑛 ∈ 𝔸, the
closed expression
𝑒[𝑎1 /𝑥1 , . . . , 𝑎 𝑛 /𝑥 𝑛 ]
is defined. If 𝑒 and 𝑒 ′ are expressions whose variables are among
𝑥1 , . . . , 𝑥 𝑛 , we write 𝑒 ≃ 𝑒 ′ when, for all 𝑎 1 , . . . , 𝑎 𝑛 ∈ 𝔸,

𝑒[𝑎 1 /𝑥1 , . . . , 𝑎 𝑛 /𝑥 𝑛 ] ≃ 𝑒 ′[𝑎 1 /𝑥1 , . . . , 𝑎 𝑛 /𝑥 𝑛 ].



Theorem 2.5.2 (Combinatory completeness) Let (𝔸 , ·) be a pca. For every


variable 𝑥 and expression 𝑒 over 𝔸 there is an expression 𝑒 ′ over 𝔸 whose
variables are those of 𝑒 excluding 𝑥 such that 𝑒 ′↓ and 𝑒 ′ · 𝑎 ≃ 𝑒[𝑎/𝑥] for all
𝑎 ∈ 𝔸.

Proof. We give a construction of such an expression ⟨𝑥⟩ 𝑒 :


1. ⟨𝑥⟩ 𝑥 = S K K,
2. ⟨𝑥⟩ 𝑦 = K 𝑦 if 𝑦 is a variable distinct from 𝑥 ,
3. ⟨𝑥⟩ 𝑎 = K 𝑎 if 𝑎 is an element of 𝔸,
4. ⟨𝑥⟩ 𝑒1 𝑒2 = S (⟨𝑥⟩ 𝑒1 ) (⟨𝑥⟩ 𝑒2 )
We omit the verification that 𝑒 ′ = ⟨𝑥⟩ 𝑒 has the required properties.
See [23] for details.

[23]: Longley (1995), “Realizability Toposes and Language Semantics”

The meta-notation ⟨𝑥⟩ 𝑒 plays a role similar to that of 𝜆-abstraction in


the untyped 𝜆-calculus. We abbreviate ⟨𝑥⟩ ⟨𝑦⟩ 𝑒 as ⟨𝑥 𝑦⟩ 𝑒 , and similarly
for more variables. We need to be careful with the “𝛽 -rule” because it is
only valid in a restricted sense, see [23] for details.
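The construction ⟨𝑥⟩ 𝑒 from the proof of Theorem 2.5.2 is easy to program. Here is a sketch in Python under representation choices of our own: CL terms are nested tuples, and a naive reducer implements the K and S rules (total application is assumed):

```python
K, S = 'K', 'S'
# CL terms: the atoms K and S, variables/constants as other strings or
# values, and 2-tuples (f, a) standing for the application f · a.

def abstract(x, e):
    """The bracket abstraction <x> e from the proof of Theorem 2.5.2."""
    if e == x:                                  # <x> x = S K K
        return ((S, K), K)
    if isinstance(e, tuple):                    # <x> e1 e2 = S (<x> e1) (<x> e2)
        return ((S, abstract(x, e[0])), abstract(x, e[1]))
    return (K, e)                               # <x> y = K y, <x> a = K a

def reduce(t):
    """Reduce a closed CL term with the K and S rules (may diverge)."""
    if not isinstance(t, tuple):
        return t
    f, a = reduce(t[0]), t[1]
    if isinstance(f, tuple) and f[0] == K:      # (K u) a -> u
        return reduce(f[1])
    if isinstance(f, tuple) and isinstance(f[0], tuple) and f[0][0] == S:
        u, v = f[0][1], f[1]                    # ((S u) v) a -> (u a) (v a)
        return reduce(((u, a), (v, a)))
    return (f, reduce(a))

print(reduce((abstract('x', 'x'), 42)))         # prints: 42
print(reduce((abstract('x', ('f', 'x')), 'a'))) # prints: ('f', 'a')
```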
We can build up the identity function, pairs, conditionals, natural num-
bers, and recursion by combining the two basic combinators K and S. The
encoding of basic programming constructs is similar to the encoding by
untyped 𝜆-calculus. Pairing, projections, Boolean values and conditional
statement are the same, except that 𝜆-abstraction must be replaced by
⟨ ⟩ -notation:

pair = ⟨𝑥 𝑦𝑧⟩ 𝑧 𝑥 𝑦, if = ⟨𝑥⟩ 𝑥,


fst = ⟨𝑝⟩ 𝑝 (⟨𝑥 𝑦⟩ 𝑥), true = ⟨𝑥 𝑦⟩ 𝑥,
snd = ⟨𝑝⟩ 𝑝 (⟨𝑥 𝑦⟩ 𝑦) false = ⟨𝑥 𝑦⟩ 𝑦.

The notation ⟨ ⟩ makes expressions much more comprehensible and saves


a lot of space, for example the term pair = ⟨𝑥 𝑦 𝑧⟩ 𝑧 𝑥 𝑦 is quite unwieldy
when written just with S and K:

pair = S(S(KS)(S(S(KS)(S(KK)(KS)))(S(S(KS)(S(S(KS)(S(KK)(KS)))
(S(S(KS)(S(S(KS)(S(KK)(KS)))(S(KK)(KK))))(S(KK)(KK)))))
(S(S(KS)(S(KK)(KK)))(S(KK)(SKK))))))(S(S(KS)(S(KK)(KK)))
(S(S(KS)(KK))(KK))).

Natural numbers are implemented as the Curry numerals

0 = I = SKK and 𝑛 + 1 = pair false 𝑛. (2.8)

with successor, predecessor and zero-test

succ = ⟨𝑥⟩ pair false 𝑥,


iszero = fst ,
pred = ⟨𝑥⟩ if (iszero 𝑥) 0 (snd 𝑥).
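The Curry numerals can also be tried out in Python, reusing the function encodings of pairs and Booleans; the decoder `to_int` is a helper of ours, not part of the encoding:

```python
pair  = lambda x: lambda y: lambda z: z(x)(y)
fst   = lambda p: p(lambda x: lambda y: x)
snd   = lambda p: p(lambda x: lambda y: y)
true  = lambda x: lambda y: x
false = lambda x: lambda y: y

zero   = lambda x: x                       # 0 = I = S K K
succ   = lambda n: pair(false)(n)          # n + 1 = pair false n
iszero = fst                               # fst 0 = true, fst (n + 1) = false
pred   = lambda n: iszero(n)(zero)(snd(n)) # if (iszero n) 0 (snd n)

def to_int(n):                             # decoding helper, ours
    k = 0
    while iszero(n)(False)(True):          # true selects False: stop
        n, k = pred(n), k + 1
    return k

three = succ(succ(succ(zero)))
print(to_int(three), to_int(pred(three)))  # prints: 3 2
```

The trick is that fst applied to the identity returns the first-projection selector itself, which happens to be the encoding of true; hence iszero works on 0 as well as on successors.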

In a pca we can define functions by recursion by using the fixed point


combinators Y and Z, defined by

𝑊 = ⟨𝑥 𝑦⟩ 𝑦(𝑥 𝑥 𝑦), Y = 𝑊 𝑊,
𝑋 = ⟨𝑥 𝑦𝑧⟩ 𝑦(𝑥 𝑥 𝑦)𝑧, Z = 𝑋 𝑋.

These combinators satisfy, for all 𝑓 , 𝑥 ∈ 𝔸,

Y 𝑓 ≃ 𝑓 (Y 𝑓 ), Z 𝑓 ↓, (Z 𝑓 ) 𝑥 ≃ 𝑓 (Z 𝑓 ) 𝑥.

The combinator Y can be used to implement primitive recursion on


natural numbers as

𝑅 = ⟨𝑟𝑥 𝑓 𝑚⟩ if (iszero 𝑚) (K 𝑥)(⟨𝑦⟩ 𝑓 (pred 𝑚)(𝑟 𝑥 𝑓 (pred 𝑚) I))


rec = ⟨𝑥 𝑓 𝑚⟩ ((Y 𝑅) 𝑥 𝑓 𝑚 I).

It satisfies, for all 𝑎, 𝑓 ∈ 𝔸 and 𝑛 ∈ ℕ ,

rec 𝑎 𝑓 0 = 𝑎, rec 𝑎 𝑓 𝑛 + 1 ≃ 𝑓 𝑛 (rec 𝑎 𝑓 𝑛).

Exercise 2.5.3 Implement the minimization combinator min, which


satisfies, for all 𝑓 ∈ 𝔸 and 𝑛 ∈ ℕ ,

min 𝑓 = 𝑛 ⇐⇒ ∃𝑘 > 0. 𝑓 𝑛 = 𝑘 and ∀𝑚 < 𝑛. 𝑓 𝑚 = 0.

With these combinators every general recursive function may be imple-


mented in a pca.

2.5.1 Examples of partial combinatory algebras

The models of computation that we have considered so far are all examples
of partial combinatory algebras.

The first Kleene Algebra 𝕂1

Turing machines, or more precisely their codes, form a pca 𝕂1 = (ℕ , ·)


whose application is defined as 𝑚 · 𝑛 = 𝝋 𝑚 (𝑛). Because 𝑚 · 𝑛 can be easily
confused with multiplication, we also write application with Kleene’s
notation {𝑚}(𝑛).
The combinator K is easily obtained. The function 𝑝(𝑥, 𝑦) = 𝑥 is easily
seen to be computable. By the smn theorem there exists a computable
map 𝑞 : ℕ → ℕ such that 𝝋 𝑞(𝑥) (𝑦) = 𝑥 . Take K to be any number such
that 𝑞 = 𝝋K .
The combinator S requires a bit more thought. The partial function
𝑔(𝑥, 𝑦, 𝑧) = 𝝋_{𝝋_𝑥 (𝑧)} (𝝋_𝑦 (𝑧)) is computable by several applications of the
utm theorem. By the smn theorem there is a computable 𝑟 : ℕ × ℕ → ℕ
such that 𝝋 𝑟(𝑥,𝑦) (𝑧) = 𝑔(𝑥, 𝑦, 𝑧). Another application of the smn theorem
yields a computable function 𝑞 : ℕ → ℕ such that 𝝋 𝑞(𝑥) (𝑦) = 𝑟(𝑥, 𝑦).
Take S to be any number such that 𝑞 = 𝝋S .

The first Kleene Algebra 𝕂1𝐴 with an oracle

When Turing machines are replaced with oracle Turing machines, we


obtain a variant of 𝕂1 . More precisely, given any 𝐴 ⊆ ℕ , we let 𝕂1𝐴
be the pca whose underlying set is ℕ , equipped with the application
{𝑚}^𝐴 (𝑛) = 𝝋^𝐴_𝑚 (𝑛). Here 𝝋^𝐴_𝑚 is the 𝑚 -th Turing machine with oracle 𝐴.

Hamkins’s infinite-time pca 𝕁

We may similarly use infinite-time Turing machines from Subsection 2.1.4


to define a pca 𝕁 (for “Joel”) whose underlying set is ℕ and whose
application is defined by 𝑚 · 𝑛 = 𝝍 𝑚 (𝑛), where 𝝍 𝑚 is the 𝑚 -th infinite-
time Turing machine.

The second Kleene Algebra 𝔹

The Baire space 𝔹 is a pca (𝔹 , |) whose application is 𝛼 | 𝛽 = 𝜼 𝛼 (𝛽),
as defined in (2.2). The combinators K and S exist by Theorems 2.1.15
and 2.1.16, analogously to the first Kleene algebra.
There are actually two versions of the second Kleene algebra, the continu-
ous one with carrier 𝔹, and the computable one, whose carrier

𝔹# = {𝛼 ∈ 𝔹 | 𝛼 is computable}

consists only of the computable sequences. Because K and S obtained


above are computable, 𝔹# is a sub-pca of 𝔹.

Combinatory logic ℂ𝕃

The closed terms of combinatory logic are generated from constants K, S
and a binary operation ·. We let the operation associate to the left and
write it as juxtaposition. Let ≈ be the least congruence relation14 on the
set CL of all closed terms satisfying, for all 𝑎, 𝑏, 𝑐 ∈ CL,

K 𝑎 𝑏 ≈ 𝑎 and S 𝑎 𝑏 𝑐 ≈ (𝑎 𝑐) (𝑏 𝑐).

14: A congruence relation is an equivalence relation that respects the operations. In the present case, if 𝑎 ≈ 𝑎 ′ and 𝑏 ≈ 𝑏 ′ then 𝑎 · 𝑏 ≈ 𝑎 ′ · 𝑏 ′ .

The quotient CL/≈ is the carrier of a total combinatory algebra ℂ𝕃 whose


structure is induced by the constants K, S and the binary operation. It is
called combinatory logic.

The untyped 𝜆-calculus Λ

The closed expressions of the untyped 𝜆-calculus form a (total) combi-


natory algebra Λ whose application is the one from 𝜆-calculus. More
precisely, we quotient the set of closed expressions by the equivalence
relation generated by 𝛽 -reduction. The basic combinators are K = 𝜆𝑥 𝑦. 𝑥
and S = 𝜆𝑥 𝑦𝑧. (𝑥 𝑧)(𝑦 𝑧). In Subsection 2.7.2 we shall consider a typed
version of the 𝜆-calculus and variants that turn it into a programming
language with an operational semantics.

Reflexive domains and the graph model

A reflexive domain 𝐷 is a total combinatory algebra with application


defined by 𝑎 · 𝑏 = Λ 𝑎 𝑏 , where Λ : 𝐷 → C(𝐷, 𝐷) is the retraction
onto the function space, see (2.6). The combinators K and S are (the
interpretations of) those from the untyped 𝜆-calculus.
The graph model comes in two versions. The continuous graph model
has as its carrier the full powerset ℙ = P(ℕ ) and the computable graph
model the computably enumerable sets

ℙ# = {𝐴 ∈ ℙ | 𝐴 is computably enumerable}.

The latter is a sub-pca of the former.

Other partial combinatory algebras

If we replace Turing machines with oracle Turing machines from Sub-


section 2.1.3, we obtain the oracle first Kleene algebra, and by switching
to infinite-time Turing machines from Subsection 2.1.4 the infinite-time
first Kleene algebra. In all cases the basic combinators K and S exist by
the corresponding smn and utm theorems.
There are still many more partial combinatory algebras, but we shall have
to stop here. We refer the interested readers to Section 1.4 of [30].

[30]: Oosten (2008), Realizability: An Introduction To Its Categorical Side

2.6 Typed partial combinatory algebras

pcas and their models lack an important feature that most real-world
programming languages have, namely types that impose well-formedness
restrictions on programs. For instance, in a typical programming language
we cannot meaningfully apply an expression to itself because the typing
discipline prevents us from doing so.
In this section we look at pcas with types, which were defined by
John Longley [26]. His definition sets up a type system that limits the
applicability of the basic combinators, which is offset by additional basic
combinators. In the end we obtain a notion that is closer to a programming
language than to magical incantations with K and S.

[26]: Longley (1999), “Unifying typed and untyped realizability”

Definition 2.6.1 A type system T is a non-empty set, whose elements


are called types, equipped with two binary operations × and →.

The operation → associates to the right, 𝑠 → 𝑡 → 𝑢 = 𝑠 → (𝑡 → 𝑢).

Definition 2.6.2 A typed partial combinatory algebra (tpca) 𝔸 over a


type system T consists of
▶ a non-empty set 𝔸𝑡 for each 𝑡 ∈ T, and
▶ a partial function ·𝑠,𝑡 : 𝔸𝑠→𝑡 × 𝔸𝑠 → 𝔸𝑡 , called application, for
all 𝑠, 𝑡 ∈ T,

such that for all 𝑠, 𝑡, 𝑢 ∈ T there exist elements

K𝑠,𝑡 ∈ 𝔸𝑠→𝑡→𝑠 ,
S𝑠,𝑡,𝑢 ∈ 𝔸(𝑠→𝑡→𝑢)→(𝑠→𝑡)→𝑠→𝑢 ,
pair𝑠,𝑡 ∈ 𝔸𝑠→𝑡→𝑠×𝑡 ,
fst𝑠,𝑡 ∈ 𝔸𝑠×𝑡→𝑠 ,
snd𝑠,𝑡 ∈ 𝔸𝑠×𝑡→𝑡 .

We usually omit the types in subscripts and write 𝑥 𝑦 for 𝑥 ·𝑠,𝑡 𝑦 . For
all elements 𝑥, 𝑦, 𝑧 of the appropriate types we require:

K 𝑥 𝑦 = 𝑥,
S 𝑥 𝑦↓,
S 𝑥 𝑦 𝑧 ⪰ (𝑥 𝑧)(𝑦 𝑧),
fst (pair 𝑥 𝑦) = 𝑥,
snd (pair 𝑥 𝑦) = 𝑦.

We say that the elements K𝑠,𝑡 , S𝑠,𝑡,𝑢 , pair𝑠,𝑡 , fst𝑠,𝑡 , snd𝑠,𝑡 are suitable
for 𝔸 when they satisfy the above properties.
A typed (total) combinatory algebra is a tpca whose application
operations are total.
We have required the sets 𝔸𝑡 to be non-empty. While this is not strictly
necessary, it simplifies several constructions.
In a pca the natural numbers may be encoded with the basic combinators
as the Curry numerals. In a tpca they must be postulated separately.

Definition 2.6.3 A tpca with numerals (n-tpca) is a tpca 𝔸 in which


there is a type nat and elements

0 , 1 , 2 , . . . ∈ 𝔸nat ,
succ ∈ 𝔸nat→nat ,
rec𝑠 ∈ 𝔸𝑠→(nat→𝑠→𝑠)→nat→𝑠 .

such that for all 𝑥 , 𝑓 of appropriate types and all 𝑛 ∈ ℕ

succ 𝑛 = 𝑛 + 1 ,
rec 𝑥 𝑓 0 = 𝑥,
rec 𝑥 𝑓 𝑛 + 1 = 𝑓 𝑛 (rec 𝑥 𝑓 𝑛).

We say that nat, 𝑛 , succ, rec, and the numerals satisfying these
properties are suitable for 𝔸.

Note that 𝔸nat may contain elements other than the numerals 𝑛 . An
n-tpca has primitive recursion but may lack general recursion, so we
need one more definition.

Definition 2.6.4 A tpca with numerals and general recursion (nr-tpca)



is an n-tpca 𝔸 containing, for all types 𝑠 and 𝑡 ,

fix𝑠,𝑡 ∈ 𝔸((𝑠→𝑡)→(𝑠→𝑡))→(𝑠→𝑡)

such that, for all 𝑓 ∈ 𝔸(𝑠→𝑡)→(𝑠→𝑡) and 𝑥 ∈ 𝔸𝑠 ,

fix 𝑓 ↓ and fix 𝑓 𝑥 ⪰ 𝑓 (fix 𝑓 ) 𝑥.

Such a fix𝑠,𝑡 is suitable for 𝔸.


The relevant notion of substructure is as follows.

Definition 2.6.5 A sub-tpca 𝔸′ of a tpca 𝔸 is a collection of non-empty


subsets 𝔸′𝑡 ⊆ 𝔸𝑡 , for each 𝑡 ∈ T, such that applications in 𝔸 restrict
to 𝔸′, and there exist elements K𝑠,𝑡 , S𝑠,𝑡,𝑢 , pair𝑠,𝑡 , fst𝑠,𝑡 , and snd𝑠,𝑡
in 𝔸′ of appropriate types which are suitable for 𝔸.
The notions of sub-n-tpca and sub-nr-tpca are defined analogously.

The basic theory of typed pcas follows the theory of untyped pcas.
Expressions over 𝔸 are defined inductively:
1. every 𝑎 ∈ 𝔸𝑡 is an expression of type 𝑡 , called a primitive constant,
2. an annotated variable 𝑥 𝑡 is an expression of type 𝑡 ,
3. if 𝑒1 is an expression of type 𝑠 → 𝑡 and 𝑒2 is an expression of type 𝑠
then 𝑒1 · 𝑒2 is an expression of type 𝑡 .
Note that variables are annotated with types. We also assume that the
type of a primitive constant is unique (if not we tag constants with their
types), so that every expression has at most one type.
A closed expression 𝑒 of type 𝑡 is defined, written 𝑒↓, when all applications
appearing in it are defined. Such an expression denotes an element of 𝔸𝑡 .
If 𝑒 contains variables 𝑥 1𝑡1 , . . . , 𝑥 𝑛𝑡𝑛 , we write 𝑒↓ when 𝑒[𝑎 1 /𝑥 1 , . . . , 𝑎 𝑛 /𝑥 𝑛 ]
is defined for all 𝑎 1 ∈ 𝔸𝑡1 , . . . , 𝑎 𝑛 ∈ 𝔸𝑡𝑛 .

Theorem 2.6.6 (Combinatory completeness) Let 𝔸 be a tpca. For every


expression 𝑒 over 𝔸 of type 𝑢 and every variable 𝑥 𝑡 there is an expression 𝑒 ′
of type 𝑡 → 𝑢 whose variables are those of 𝑒 excluding 𝑥 𝑡 such that 𝑒 ′↓ and
𝑒 ′ · 𝑎 ≃ 𝑒[𝑎/𝑥] for all 𝑎 ∈ 𝔸𝑡 .

Proof. Similarly to the untyped case we define ⟨𝑥 𝑡 ⟩ 𝑒 recursively as


follows:
1. ⟨𝑥 𝑡 ⟩ 𝑥 = S𝑡,𝑡→𝑡,𝑡 K𝑡,𝑡→𝑡 K𝑡,𝑡 ,
2. ⟨𝑥 𝑡 ⟩ 𝑦 = K𝑠,𝑡 𝑦 if 𝑦 𝑠 is a variable distinct from 𝑥 𝑡 ,
3. ⟨𝑥 𝑡 ⟩ 𝑎 = K𝑠,𝑡 𝑎 if 𝑎 is a primitive constant of type 𝑠 .
4. ⟨𝑥 𝑡 ⟩ 𝑒1 𝑒2 = S𝑡,𝑢,𝑣 (⟨𝑥 𝑡 ⟩ 𝑒1 ) (⟨𝑥 𝑡 ⟩ 𝑒2 ) if 𝑒1 and 𝑒2 have types 𝑢 → 𝑣
and 𝑢 , respectively.
The expression 𝑒 ′ = ⟨𝑥 𝑡 ⟩ 𝑒 satisfies the stated condition.

2.7 Examples of Typed Partial Combinatory


Algebras

2.7.1 Partial combinatory algebras

Every pca is an nr-tpca if we enrich it with the trivial type system that
contains a single type, T = {★} with (the only possible) operations
★ × ★ = ★ → ★ = ★, and 𝔸★ = 𝔸. The required combinators are the
ones we defined in Section 2.5, we just need to sprinkle ★ on K and S
everywhere.

2.7.2 Simply typed 𝜆-calculus

The simply typed 𝜆-calculus is to the untyped 𝜆-calculus as tpca is to


a pca. The types are inductively generated from the unit type unit and a
(possibly empty) collection of ground types, by using the type constructors
× and →. Expressions are built inductively as follows, where 𝑠 and 𝑡 are
types:
1. An annotated variable 𝑥 𝑡 is an expression of type 𝑡 .
2. A set (possibly empty) of primitive constants with their associated
types.
3. The constant ★, called the unit, has type unit.
4. If 𝑒 is an expression of type 𝑡 then 𝜆𝑥 𝑠 . 𝑒 is an expression of type
𝑠 → 𝑡 . The variable 𝑥 is bound in 𝑒 .
5. If 𝑒1 and 𝑒2 are expressions of types 𝑠 → 𝑡 and 𝑠 , respectively, then
𝑒1 𝑒2 is an expression of type 𝑡 .
6. If 𝑒1 and 𝑒2 are expressions of types 𝑠 and 𝑡 , respectively, then
(𝑒1 , 𝑒2 ) is an expression of type 𝑠 × 𝑡 .
7. If 𝑒 is an expression of type 𝑠 × 𝑡 then fst 𝑒 and snd 𝑒 are expressions
of type 𝑠 and 𝑡 , respectively.
An alternative syntax omits annotations from variables, except in 𝜆-
abstractions where 𝜆𝑥 𝑡 . 𝑒 is then written as 𝜆𝑥 :𝑡. 𝑒 . This is in fact the
notation we normally prefer.
As in the untyped version, here too we have the rule of 𝛽 -reduction

(𝜆𝑥 :𝑡. 𝑒1 )𝑒2 = 𝑒1 [𝑒2 /𝑥].

There is also a corresponding 𝜂-reduction rule, for an expression 𝑒 of


type 𝑠 → 𝑡 ,
𝜆𝑥 : 𝑠. 𝑒 𝑥 = 𝑒.
In the untyped 𝜆-calculus we did not assume the 𝜂-rule, but now we
do.
The unit type is characterized by the equation, for any expression 𝑒 of
type unit,
★ = 𝑒.

The rules for pairing and projections are, for all 𝑒1 , 𝑒2 , 𝑒 of type 𝑠 , 𝑡 ,
𝑠 × 𝑡:

fst (𝑒1 , 𝑒2 ) = 𝑒1 ,
snd (𝑒1 , 𝑒2 ) = 𝑒2 ,
(fst 𝑒 , snd 𝑒) = 𝑒.

There may be additional equations involving primitive constants.


The pure simply typed 𝜆-calculus has only a single ground type 𝑜 and
no primitive constants.
The simply typed 𝜆-calculus is a tpca whose types are those of the
calculus. For a type 𝑡 we take 𝔸𝑡 to be the set of closed expressions of
type 𝑡 , quotiented by the least equivalence relation generated by 𝜂- and
𝛽 -reduction and the rules for the unit, projections, pairing, and primitive
constants.

2.7.3 Gödel’s 𝑇

The pure simply typed 𝜆-calculus is not an n-tpca because it lacks the
natural numbers. To make it into an n-tpca, we add a ground type nat
and primitive constants

0 : nat
succ : nat → nat
rec𝑡 : 𝑡 → (nat → 𝑡 → 𝑡) → nat → 𝑡 (for each type 𝑡 )

We also add two further equations

rec 𝑒1 𝑒2 0 = 𝑒1
rec 𝑒1 𝑒2 (succ 𝑒3 ) = 𝑒2 𝑒3 (rec 𝑒1 𝑒2 𝑒3 ).

This gives us an n-tpca 𝑇 with numerals defined by 0 = 0 and 𝑛 + 1 =


succ 𝑛 . More precisely, the elements of 𝔸𝑡 are the closed expressions of
type 𝑡 , quotiented by the equivalence relation generated by the equational
rules of the simply typed 𝜆-calculus and primitive recursion.
This n-tpca goes by the name Gödel’s 𝑇 [12]. It was defined by Kurt Gödel
to give a proof of consistency of Peano arithmetic. It is not as powerful as
a general pca or type 1 machines because every function expressible in
this system is total.

[12]: Gödel (1958), “Über eine bisher noch nicht benützte Erweiterung des finiten Standpunktes”

2.7.4 Plotkin’s PCF

The programming language PCF (“programming computable functions”)
was introduced by Gordon Plotkin [32]. It has since served as the “laboratory
mouse” of the theory of programming languages. The design of
real-world functional languages, such as SML, OCaml, and Haskell, was
inspired by PCF. In fact, PCF is a fragment of Haskell.

[32]: Plotkin (1977), “LCF Considered as a Programming Language”

We present PCF as an extension of the simply typed 𝜆-calculus. It has


two ground types, nat for natural numbers and bool for Booleans. The
primitive constants of PCF with their types are:

𝑛 : nat (for each 𝑛 ∈ ℕ ),


succ : nat → nat ,
pred : nat → nat ,
true : bool ,
false : bool ,
iszero : nat → bool ,
if𝑡 : bool → 𝑡 → 𝑡 → 𝑡 (for each type 𝑡 ),
fix𝑡 : (𝑡 → 𝑡) → 𝑡 (for each type 𝑡 ),

The specific equations are:

succ 𝑛 = 𝑛 + 1
pred 0 = 0
pred 𝑛 + 1 = 𝑛
iszero 0 = true
iszero 𝑛 + 1 = false
if𝑡 true 𝑒1 𝑒2 = 𝑒1
if𝑡 false 𝑒1 𝑒2 = 𝑒2
fix𝑡 𝑒 = 𝑒 (fix𝑡 𝑒)

To see that PCF is an nr-tpca, take PCF𝑡 to be the programs of type 𝑡 ,


quotiented by the equations of the simply typed 𝜆-calculus and the
above equations. The primitive recursion combinator rec𝑡 required by
the definition of nr-tpca is implemented as

rec𝑡 = fix (𝜆𝑟 𝑥 𝑓 𝑛. if (iszero 𝑛) 𝑥 ( 𝑓 (pred 𝑛) (𝑟 𝑥 𝑓 (pred 𝑛)))).
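The recovery of primitive recursion from a fixed-point operator can be replayed in any host language with general recursion. A sketch in Python, with Python integers standing in for nat and an η-expanded fix so that eager evaluation terminates (this is an illustration, not PCF itself):

```python
# fix f = f (fix f), η-expanded so that evaluation stops:
def fix(f):
    return lambda x: f(fix(f))(x)

# rec x f 0 = x  and  rec x f (n+1) = f n (rec x f n):
rec = fix(lambda r: lambda x: (lambda g: lambda n:
          x if n == 0 else g(n - 1)(r(x)(g)(n - 1))))

factorial = rec(1)(lambda n: lambda acc: (n + 1) * acc)
print(factorial(5))   # prints: 120
```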

2.7.5 PCF∞

Type 2 machines and the enumeration operators from Subsection 2.1.2


and Section 2.2 compute with non-computable data, which is convenient
for studying computability over topological spaces. To allow PCF com-
putations with non-computable data we extend it with oracles, as follows.
For every function 𝑓 : ℕ → ℕ we add a constant func 𝑓 of type nat → nat
to PCF, together with equations

func 𝑓 𝑛 = 𝑓 (𝑛).

We write 𝑓 instead of func 𝑓 when no confusion can arise.


The resulting nr-tpca is denoted as PCF∞ . The original PCF is a sub-nr-
tpca of PCF∞ .

2.8 Simulations

If you open a book on computability theory, chances are that you will
find a statement saying that “models of computation are equivalent”.
The claim refers to a collection of specific models of computation, such
as variations of Turing machines, 𝜆-calculi, and recursive functions. The
book supports the claim by describing simulations between such models,
with varying degrees of detail, after which it hurries on to core topics of
computability theory. An opportunity is missed to ask about a general notion of simulation and to study its structural properties.
We seize the opportunity and study a notion of simulation between pcas. An excellent one is John Longley’s applicative morphism [24]. His definition extends easily to account for pcas with sub-pcas. We dare rename applicative morphisms to simulations, and consider only the untyped simulations. The typed version can be set up as well, see [25].

[24]: Longley (1994), “Realizability Toposes and Language Semantics”
[25]: Longley (1999), “Matching typed and untyped realizability”

Definition 2.8.1 A (pca) simulation, originally called an applicative morphism [24], 𝜌 : 𝔼 −→ 𝔽 between pcas 𝔼 and 𝔽 is a total relation 𝜌 ⊆ 𝔼 × 𝔽 for which there exists a realizer 𝑟 ∈ 𝔽 such that, for all 𝑢, 𝑣 ∈ 𝔼 and 𝑥, 𝑦 ∈ 𝔽 ,
▶ if 𝜌(𝑢, 𝑥) then 𝑟 𝑥↓, and
▶ if 𝜌(𝑢, 𝑥), 𝜌(𝑣, 𝑦) and 𝑢 𝑣↓ then 𝑟 𝑥 𝑦↓ and 𝜌(𝑢 𝑣, 𝑟 𝑥 𝑦).

We write 𝜌[𝑢] = {𝑥 ∈ 𝔽 | 𝜌(𝑢, 𝑥)}.


A (sub-pca) simulation 𝜌 : (𝔼 , 𝔼′) −→ (𝔽 , 𝔽 ′) between pcas with sub-pcas is a simulation 𝜌 : 𝔼 −→ 𝔽 which has a realizer 𝑟 ∈ 𝔽 ′, and such that 𝜌 restricted to 𝔼′ × 𝔽 ′ is a simulation 𝔼′ −→ 𝔽 ′ realized by 𝑟 .

A realizer 𝑟 ∈ 𝔽 of a simulation is of course precisely an implementation


in 𝔽 of the applicative structure of 𝔼.
We defined a simulation to be a total relation rather than a function
because an element of the domain may be simulated by many elements
of the codomain, without any one being preferred or distinguished. The
notation 𝜌[𝑢] suggests that 𝜌 is construed as a multi-valued map rather
than a relation.
One might expect that a simulation ought to be a map 𝑓 : 𝔼 → 𝔽 such that 𝑓 (K𝔼 ) = K𝔽 , 𝑓 (S𝔼 ) = S𝔽 , and 𝑓 (𝑥 ·𝔼 𝑦) ≃ 𝑓 𝑥 ·𝔽 𝑓 𝑦 . This is how
an algebraist would define a morphism, but we are interested in the
computational aspects of pcas, not the algebraic ones.
Simulations can be composed as relations. If 𝜌 : (𝔼 , 𝔼′) −→ (𝔽 , 𝔽 ′) and 𝜎 : (𝔽 , 𝔽 ′) −→ (𝔾 , 𝔾′) then 𝜎 ◦ 𝜌 : (𝔼, 𝔼′) −→ (𝔾 , 𝔾′) is defined, for 𝑥 ∈ 𝔼 and 𝑧 ∈ 𝔾, by
and 𝑧 ∈ 𝔾, by

𝑧 ∈ (𝜎 ◦ 𝜌)[𝑥] ⇐⇒ ∃𝑦 ∈ 𝔽 . 𝑦 ∈ 𝜌[𝑥] ∧ 𝑧 ∈ 𝜎[𝑦].
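Viewed as multi-valued maps, composition of simulations is ordinary relational composition; a small Python sketch with relations given as set-valued functions (the toy maps `rho` and `sigma` are illustrative, not simulations of actual pcas):

```python
def compose(sigma, rho):
    # relational composition of simulations, viewed as multi-valued maps:
    # (sigma ∘ rho)[x] = { z | ∃y. y ∈ rho[x] and z ∈ sigma[y] }
    return lambda x: {z for y in rho(x) for z in sigma(y)}

# toy multi-valued maps standing in for simulations
rho = lambda x: {x, x + 1}
sigma = lambda y: {2 * y}
```

For example, `compose(sigma, rho)(3)` is the set {6, 8}.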

Exercise 2.8.2 Show that 𝜎 ◦ 𝜌 is realized if 𝜌 and 𝜎 are.



The identity simulation id(𝔼,𝔼′ ) : (𝔼 , 𝔼′) −→ (𝔼 , 𝔼′) is the identity relation
on 𝔼. It is realized by ⟨𝑥 𝑦⟩ 𝑥 𝑦 .
Pcas with sub-pcas and simulations between them therefore form a category. We equip it with a preorder enrichment15 ⪯ as follows. Given 𝜌, 𝜎 : (𝔼, 𝔼′) −→ (𝔽 , 𝔽 ′), define 𝜌 ⪯ 𝜎 to hold when there exists a translation 𝑡 ∈ 𝔽 ′ such that, for all 𝑥 ∈ 𝔼 and 𝑦 ∈ 𝜌[𝑥], 𝑡 𝑦↓ and 𝑡 𝑦 ∈ 𝜎[𝑥].

15: A category C is preorder enriched when hom-sets C(𝑋 , 𝑌) are equipped with preorders (reflexive and transitive relations) under which composition is monotone.

We write 𝜌 ∼ 𝜎 when 𝜌 ⪯ 𝜎 and 𝜎 ⪯ 𝜌.

Exercise 2.8.3 Given 𝜌, 𝜌′ : (𝔼 , 𝔼′) −→ (𝔽 , 𝔽 ′) and 𝜎, 𝜎′ : (𝔽 , 𝔽 ′) −→ (𝔾 , 𝔾′), show that if 𝜌 ⪯ 𝜌′ and 𝜎 ⪯ 𝜎′ then 𝜎 ◦ 𝜌 ⪯ 𝜎′ ◦ 𝜌′.

The preorder enrichment induces the notions of equivalence and adjunc-


tion of simulations.

Definition 2.8.4 Consider simulations

𝛿 : (𝔼, 𝔼′) −→ (𝔽 , 𝔽 ′) and 𝛾 : (𝔽 , 𝔽 ′) −→ (𝔼 , 𝔼′).

They form an equivalence when 𝛾 ◦ 𝛿 ∼ 1𝔼 and 𝛿 ◦ 𝛾 ∼ 1𝔽 .
They form an adjunction, written 𝛾 ⊣ 𝛿 , when 1𝔽 ⪯ 𝛿 ◦ 𝛾 and 𝛾 ◦ 𝛿 ⪯ 1𝔼 . We say that 𝛾 is left adjoint to 𝛿 , or that 𝛿 is right adjoint to 𝛾 .
Such an adjoint pair is an adjoint inclusion when 𝛾 ◦ 𝛿 ∼ 1𝔼 , and an adjoint retraction when 𝛿 ◦ 𝛾 ∼ 1𝔽 .

2.8.1 Properties of simulations

Nothing prevents a simulation from being trivial. In fact, there always is the constant simulation 𝜏 : 𝔼 −→ 𝔽 , defined by 𝜏[𝑥] = {K𝔽 } and realized by ⟨𝑥 𝑦⟩ K. To avoid such examples, we should identify further useful properties of simulations.
Discreteness prevents a simulation from conflating simulated elements.

Definition 2.8.5 A simulation 𝜌 : (𝔼 , 𝔼′) −→ (𝔽 , 𝔽 ′) is discrete when, for all 𝑥, 𝑦 ∈ 𝔼, if 𝜌[𝑥] ∩ 𝜌[𝑦] is inhabited then 𝑥 = 𝑦 .

The next property is single-valuedness, up to equivalence.

Definition 2.8.6 A simulation 𝜌 : (𝔼 , 𝔼′) −→ (𝔽 , 𝔽 ′) is projective when there is a single-valued simulation (a function) 𝜌′ such that 𝜌′ ∼ 𝜌.

Exercise 2.8.7 Prove that a simulation 𝜌 : (𝔼 , 𝔼′) −→ (𝔽 , 𝔽 ′) is projective if, and only if, there is 𝑡 ∈ 𝔽 ′ such that, for all 𝑥 ∈ 𝔼 and 𝑦, 𝑧 ∈ 𝔽 :
▶ if 𝑦 ∈ 𝜌[𝑥] then 𝑡 𝑦↓ and 𝑡 𝑦 ∈ 𝜌[𝑥],
▶ if 𝑦 ∈ 𝜌[𝑥] and 𝑧 ∈ 𝜌[𝑥] then 𝑡 𝑦 = 𝑡 𝑧 .

Thus a simulation is projective if each element of 𝔼 has a canonically


chosen simulation in 𝔽 .
pca
For every simulation 𝜌 : 𝔼 −→ 𝔽 it is the case that the Boolean values
of 𝔽 can be converted to the simulated Boolean values. Indeed, take any
𝑎 ∈ 𝜌[true𝔼 ] and 𝑏 ∈ 𝜌[false𝔼 ] and define 𝑒 ∈ 𝔽 ′ to be 𝑒 = ⟨𝑥⟩ if𝔽 𝑥 𝑎 𝑏 ,
so that 𝑒 true𝔽 ∈ 𝜌[true𝔼 ] and 𝑒 false𝔽 ∈ 𝜌[false𝔼 ]. The converse
translation does not come for free.

Definition 2.8.8 A simulation 𝜌 : (𝔼 , 𝔼′) −→ (𝔽 , 𝔽 ′) is decidable when there is 𝑑 ∈ 𝔽 ′, called the decider for 𝜌, such that, for all 𝑥 ∈ 𝔽 ,

𝑥 ∈ 𝜌[true𝔼 ] ⇒ 𝑑 𝑥 = true𝔽 ,
𝑥 ∈ 𝜌[false𝔼 ] ⇒ 𝑑 𝑥 = false𝔽 .

Exercise 2.8.9 Say that a simulation 𝜌 : (𝔼 , 𝔼′) −→ (𝔽 , 𝔽 ′) preserves numerals when there is 𝑐 ∈ 𝔽 ′ such that, for all 𝑛 ∈ ℕ and 𝑥 ∈ 𝔽 ,

𝑥 ∈ 𝜌[𝑛 𝔼 ] =⇒ 𝑐 𝑥 = 𝑛 𝔽 .

Prove that a simulation is decidable if, and only if, it preserves numerals.

We recall several basic results of John Longley’s.

Theorem 2.8.10 For 𝛿 : 𝔼 −→ 𝔽 and 𝛾 : 𝔽 −→ 𝔼:
1. If 𝛾 ◦ 𝛿 ⪯ id𝔼 then 𝛿 is discrete and 𝛾 is decidable.
2. If 𝛾 ⊣ 𝛿 then 𝛾 is projective.

Proof. See [24, Theorem 2.5.3].

Corollary 2.8.11 If 𝛾 ⊣ 𝛿 is an adjoint retraction then both 𝛿 and 𝛾 are


discrete and decidable, and 𝛾 is projective.

Proof. Immediate. This is [24, Corollary 2.5.4].

Corollary 2.8.12 If 𝔼 and 𝔽 are equivalent pcas, then there exists an equivalence

𝛿 : 𝔼 −→ 𝔽 , 𝛾 : 𝔽 −→ 𝔼 ,

such that 𝛾 and 𝛿 are single-valued.

Proof. Both 𝛿 and 𝛾 are projective by Theorem 2.8.10.



2.8.2 Decidable simulations and 𝕂1

Decidable simulations are the kind of simulations that arise in com-


putability theory. We investigate them a bit, especially in relation to the
first Kleene algebra 𝕂1 .
Turing machines, embodied as Kleene’s first algebra 𝕂1 , are distinguished
by a universal property.

Theorem 2.8.13 Up to equivalence, the first Kleene algebra 𝕂1 is initial in


the category of pcas and decidable simulations.

Proof. We sketch the proof from [24, Theorem 2.4.18]. Given any pca 𝔸, define 𝜅 : 𝕂1 −→ 𝔸 by 𝜅[𝑛] = {𝑛 𝔸 }. Because every partial computable
function ℕ × ℕ ⇀ ℕ can be represented in 𝔸, there is 𝑟 ∈ 𝔸 such that,
for all 𝑘, 𝑚, 𝑛 ∈ ℕ ,

𝑟 𝑘 𝑚 = 𝑛 ⇐⇒ 𝝋 𝑘 (𝑚) = 𝑛.

Such an element 𝑟 realizes 𝜅 . Furthermore, 𝜅 is decidable because it maps


numbers to numerals.
Suppose 𝜇 : 𝕂1 −→ 𝔸 is another decidable simulation. Because 𝜇
preserves numerals there exists 𝑓 ∈ 𝔸 such that if 𝑎 ∈ 𝜇[𝑛] then
𝑓 𝑎 = 𝑛 ∈ 𝜅[𝑛], therefore 𝜇 ⪯ 𝜅 . The relation 𝜅 ⪯ 𝜇 holds by the next
exercise, therefore 𝜅 ∼ 𝜇.

Exercise 2.8.14 Verify that for any 𝜌 : 𝔼 −→ 𝔽 there is 𝑞 ∈ 𝔽 such that
𝑞 𝑛 𝔽 ∈ 𝜌[𝑛 𝔼 ] for all 𝑛 ∈ ℕ .

Recall that a Turing reduction of 𝐴 ⊆ ℕ to 𝐵 ⊆ ℕ , written 𝐴 ≤𝑇 𝐵, is a


𝐵-oracle Turing machine 𝑀 which computes the characteristic function
of 𝐴.

Theorem 2.8.15 Suppose 𝐴, 𝐵 ⊆ ℕ . Then 𝐴 ≤𝑇 𝐵 if, and only if, there is a decidable simulation 𝕂1𝐴 −→ 𝕂1𝐵 .

Proof. This is [24, Proposition 3.1.6].

The following definition and exercise verify that having decidable sim-
ulations between 𝔼 and 𝔽 implies that they both compute the same
number-theoretic functions, which is sometimes taken to be a notion of
equivalence of computational models.

Definition 2.8.16 Say that a function 𝑓 : ℕ → ℕ is realizable in a pca 𝔸 when there exists 𝑟 ∈ 𝔸 such that 𝑟 𝑛↓ and 𝑟 𝑛 = 𝑓 (𝑛), for all 𝑛 ∈ ℕ .
Pcas 𝔼 and 𝔽 are Turing-equivalent when they realize the same maps
ℕ → ℕ . A pca is Turing-complete when it is Turing-equivalent to 𝕂1 .

Exercise 2.8.17 Suppose that

𝛿 : 𝔼 −→ 𝔽 and 𝛾 : 𝔽 −→ 𝔼

are decidable simulations. Show that 𝔼 and 𝔽 are Turing-equivalent.

Consider again decidable simulations

𝛿 : 𝔼 −→ 𝕂1 and 𝛾 : 𝕂1 −→ 𝔼.

Because 𝕂1 is initial, 𝛾 is equivalent to 𝑛 ↦→ 𝑛 , so we might as well


assume that 𝛾[𝑛] = {𝑛}. Initiality also implies that 𝛿 ◦ 𝛾 ∼ id𝕂1 .
Think of 𝑛 ∈ 𝛿[𝑥] as the “source code” of 𝑥 ∈ 𝔼. A translation 𝑡 ∈ 𝔼
witnessing 𝛾 ◦ 𝛿 ⪯ id𝔼 is a self-interpreter for 𝔼. Indeed, given 𝑥 ∈ 𝔼
and 𝑛 ∈ 𝛿[𝑥] we have 𝑡 𝑛 = 𝑥 , which says that 𝑡 evaluates the source
code 𝑛 to the value 𝑥 represented by the source code. Therefore, an
adjoint retraction 𝛾 ⊣ 𝛿 from 𝔼 onto 𝕂1 encompasses two features of 𝔼, a
self-interpreter and Turing-completeness.

An adjoint retraction from Λ to 𝕂1

We construct an adjoint retraction from the pca Λ of the closed terms of the untyped 𝜆-calculus to the first Kleene algebra 𝕂1 .
Define 𝛿 : 𝕂1 −→ Λ to be the simulation 𝛿[𝑛] = {𝑛} which encodes
numbers as Curry numerals. It is a simulation because every partial
computable map is 𝜆-definable, and therefore so is Kleene application.
In the opposite direction, let 𝛾 : Λ −→ 𝕂1 be the total relation (remember
that Λ is the set of closed terms quotiented by 𝛽 -reduction),

𝛾[𝑡] = { ⌜ 𝑡 ′ ⌝ | 𝑡 ′ ∈ Λ ∧ 𝑡 =𝛽 𝑡 ′ }.

That is, an equivalence class of a closed term is simulated by the codes


of its members. The simulation is realized because there is a computable
map 𝑓 : ℕ × ℕ ⇀ ℕ satisfying 𝑓 ( ⌜ 𝑡 ⌝ , ⌜ 𝑢 ⌝ ) = ⌜ 𝑡 𝑢 ⌝ for all 𝑡, 𝑢 ∈ Λ.
Verifying 𝛿 ◦ 𝛾 ⪯ idΛ is a simple matter of programming a self-interpreter for the untyped 𝜆-calculus. From a conceptual point of view it is clear that this can be done: the syntax of a term 𝑡 can be discerned from the numeral ⌜ 𝑡 ⌝ , so one just has to recursively traverse the syntax tree of 𝑡 and interpret it into the 𝜆-calculus.

Exercise 2.8.18 Why is it not the case that idΛ ⪯ 𝛿 ◦ 𝛾 ?

An adjoint retraction from ℙ# to 𝕂1

Here is one more example, an adjoint retraction between the computable graph model ℙ# and the first Kleene algebra [24, Proposition 3.3.7]. In one direction we define 𝛿 : 𝕂1 −→ ℙ# by

𝛿[𝑛] = {{𝑛}}.

Careful with the nested singletons: the number 𝑛 is simulated by the


singleton {𝑛}.

Exercise 2.8.19 Above we used singletons {𝑛} as numerals, but in (2.8)


we defined the Curry numerals 𝑛 . Verify that in ℙ# we can translate
between these, i.e., that there are computable enumeration operators
𝑓 : ℙ# → ℙ# and 𝑔 : ℙ# → ℙ# such that 𝑓 ({𝑛}) = 𝑛 ℙ# and 𝑔(𝑛 ℙ# ) =
{𝑛}, for all 𝑛 ∈ ℕ .
In the other direction we define 𝛾 : ℙ# −→ 𝕂1 by taking

𝛾[𝐴] = {𝑛 ∈ ℕ | im(𝝋 𝑛 ) = 𝐴}

to be the index set16 of 𝐴, i.e., the codes of partial computable maps whose image is 𝐴.

16: In computability theory, the index set of a set 𝐴 is the set of all numbers (the indices) that encode elements of 𝐴.

Exercise 2.8.20 Verify that 𝛿 and 𝛾 are simulations.

To establish 𝛿 ◦ 𝛾 ⪯ idℙ# , observe that

𝑆 = { ⌜ ⟨ ⌜ {𝑚} ⌝ , 𝑛⟩ ⌝ | ∃𝑘 ∈ ℕ . 𝝋 𝑚 (𝑘) = 𝑛}

is computably enumerable. The computable enumeration operator Λ(𝑆) : ℙ# → ℙ# , where Λ is as in (2.3), is then a translation from 𝛿 ◦ 𝛾 to idℙ# .

Exercise 2.8.21 Why is it not the case that idℙ# ⪯ 𝛿 ◦ 𝛾 ?

2.8.3 An adjoint retraction from (ℙ , ℙ# ) to (𝔹 , 𝔹# )

To give at least one example of simulations between pcas with sub-pcas,


we review the adjoint retraction between the graph model and Kleene’s
second algebra, which was first given by Peter Lietz [22].
The map 𝜄 : 𝔹 → ℙ, defined by

𝜄 𝛼 = { ⌜ 𝑎 ⌝ | 𝑎 ∈ ℕ ∗ ∧ 𝑎 ⊑ 𝛼}

represents a sequence 𝛼 with the set (of codes) of its initial segments. It
restricts to a map 𝔹# → ℙ# . Let us show that it is a simulation.
In Subsection 2.1.2 we defined the application 𝛼 ·𝔹 𝛽 in 𝔹 by a lookup
procedure, by which every initial segment of 𝛼 ·𝔹 𝛽 is determined by
sufficiently long initial segments of 𝛼 and 𝛽 . Thus the relation 𝑅 ⊆
ℕ ∗ × ℕ ∗ × ℕ ∗ defined by

(𝑎, 𝑏, 𝑐) ∈ 𝑅 ⇐⇒ ∀𝛼, 𝛽 ∈ 𝔹. ((𝛼 ·𝔹 𝛽)↓ ∧ 𝑎 ⊑ 𝛼 ∧ 𝑏 ⊑ 𝛽 ⇒ 𝑐 ⊑ 𝛼 ·𝔹 𝛽)

is computable. The enumeration operator 𝑝 : ℙ × ℙ → ℙ, defined by

𝑝(𝐴, 𝐵) = { ⌜ 𝑐 ⌝ | ∃𝑎, 𝑏 ∈ ℕ ∗ . ⌜ 𝑎 ⌝ ∈ 𝐴 ∧ ⌜ 𝑏 ⌝ ∈ 𝐵 ∧ (𝑎, 𝑏, 𝑐) ∈ 𝑅},

is computable and is (the curried form of) a realizer for 𝜄 .



Let 𝛿 : ℙ −→ 𝔹 be the simulation defined by

𝛿[𝐴] = {𝛼 ∈ 𝔹 | 𝐴 = {𝑛 ∈ ℕ | ∃𝑘 ∈ ℕ . 𝛼 𝑘 = 𝑛 + 1}}.

In words, 𝛼 is a 𝛿 -simulation of 𝐴 when it enumerates 𝐴, where the trick


with adding 1 to 𝑛 in the above definition makes it possible to enumerate
the empty set. Clearly, if 𝛼 ∈ 𝔹# then 𝐴 ∈ ℙ# . In order for 𝛿 to be a
simulation, it suffices to find a partial computable map 𝑓 : 𝔹 × 𝔹 ⇀ 𝔹
such that, for all 𝐴, 𝐵 ∈ ℙ,

𝛼 ∈ 𝛿[𝐴] ∧ 𝛽 ∈ 𝛿[𝐵] ⇒ 𝑓 (𝛼, 𝛽) ∈ 𝛿[𝐴 ·ℙ 𝐵].

To determine 𝑓 (𝛼, 𝛽) ⌜ (𝑚, 𝑛) ⌝ , we look for 𝑗 < 𝑚 such that 𝛼 𝑗 = 1 + ⌜ ( ⌜ 𝐶 ⌝ , 𝑛) ⌝ and 𝐶 ≪ 𝐵. If there is one, we set 𝑓 (𝛼, 𝛽) ⌜ (𝑚, 𝑛) ⌝ = 𝑛 + 1,
otherwise we set it to 0. Clearly, this is an effective procedure. If we
compare the definition of 𝑓 to the definition of application in ℙ, we see
that they match.
Let us show that 𝜄 ⊣ 𝛿 is an adjoint retraction. Suppose 𝛼 ∈ 𝔹 and 𝛽 ∈
𝛿[𝜄(𝛼)]. We can computably reconstruct 𝛼 from 𝛽 , because 𝛽 enumerates
the initial segments of 𝛼 . This shows that 𝛿 ◦ 𝜄 ⪯ id𝔹 . Also, given 𝛼 we
can easily construct a sequence 𝛽 which enumerates the initial segments
of 𝛼 , therefore id𝔹 ⪯ 𝛿 ◦ 𝜄 , and we conclude that 𝛿 ◦ 𝜄 ∼ id𝔹 .
To see that 𝜄 ◦ 𝛿 ⪯ idℙ , consider 𝐴, 𝐵 ∈ ℙ and 𝛼 ∈ 𝔹 such that 𝛼 ∈ 𝛿[𝐴]
and 𝐵 = 𝜄(𝛼). The sequence 𝛼 enumerates 𝐴, and 𝐵 consists of the initial
segments of 𝛼 . Hence, we can effectively reconstruct 𝐴 from 𝐵, by

𝑚 ∈ 𝐴 ⇐⇒ ∃𝑎 ∈ ℕ ∗ . ⌜ 𝑎 ⌝ ∈ 𝐵 ∧ ∃𝑖 < ∥𝑎 ∥. 𝑎 𝑖 = 𝑚 + 1.
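This reconstruction is easy to play out concretely; a Python sketch in which tuples stand in for the numeric codes ⌜𝑎⌝ of finite sequences (an illustrative simplification of the coding):

```python
def iota(alpha, cutoff):
    # a finite stage of ι: the initial segments of the sequence alpha,
    # with tuples standing in for the numeric codes ⌜a⌝
    return {tuple(alpha[:k]) for k in range(cutoff + 1)}

def reconstruct(B):
    # recover A from the segments of an enumeration of A: alpha enumerates
    # m ∈ A as the value m + 1, and 0 is the "nothing yet" padding value
    return {a_i - 1 for a in B for a_i in a if a_i != 0}

alpha = [3, 6, 0, 0]   # enumerates A = {2, 5} via the +1 trick
```

Here `reconstruct(iota(alpha, 4))` recovers the set {2, 5}.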

Equivalence of Reflexive Domains

Consider reflexive domains

Λ𝐷 : C(𝐷, 𝐷) ⇄ 𝐷 : Γ𝐷 and Λ𝐸 : C(𝐸, 𝐸) ⇄ 𝐸 : Γ𝐸 ,

which are also retracts of each other,

𝑠𝐷 : 𝐷 ⇄ 𝐸 : 𝑟𝐷 and 𝑠𝐸 : 𝐸 ⇄ 𝐷 : 𝑟𝐸 ,

so that 𝑟𝐷 ◦ 𝑠𝐷 = id𝐷 and 𝑟𝐸 ◦ 𝑠𝐸 = id𝐸 . Then 𝐷 and 𝐸 are equivalent as pcas.

Exercise 2.8.22 Verify the claim by constructing an equivalence 𝐷 ∼ 𝐸


from the given section-retraction pairs.
3 Realizability categories
3.1 Motivation

Realizability was introduced by Stephen Kleene [17], who used it to build a model of intuitionistic arithmetic. We motivate it by asking a practical question: given a mathematical structure (a set equipped with operations and relations satisfying some axioms), what should its implementation look like?

[17]: Kleene (1945), “On the Interpretation of Intuitionistic Number Theory”
For simple cases, the answer is obvious. A group is implemented by
a type whose values represent its elements, a value representing the
neutral element, and functions which compute the group operation and
inverses. But for more interesting structures, especially those arising in
mathematical analysis, the answer is less clear. How do we implement the
real numbers? Which operations on a compact metric space can be imple-
mented? How do we implement a space of smooth functions? Significant
research goes into finding satisfactory answers to such questions [2, 4, 5,
37, 39]. [2]: Bauer (2000), “The Realizability Ap-
proach to Computable Analysis and
To explain the basic idea behind realizability we consider a small real- Topology”
world programming example. Suppose we are asked to design a data [4]: Bauer et al. (2010), “Canonical Effec-
tive Subalgebras of Classical Algebras as
structure for the set Graphs of all finite simple1 directed graphs with Constructive Metric Completions”
vertices labeled by distinct integers, such at the graph 𝐺 shown below: [5]: Blanck (1997), “Domain repre-
sentability of metric spaces”
[37]: Tucker et al. (2000), “Computable
1 2 Functions and Semicomputable Sets on
Many-Sorted Algebras”
[39]: Weihrauch (2000), Computable Anal-
ysis
3 4
1: A graph is simple when there is at
most one edge between any two vertices.

A common representation of graphs uses a pair of lists (ℓ𝑉 , ℓ𝐴 ), where ℓ𝑉 is the list of vertex labels and ℓ𝐴 the adjacency list representing the edges as pairs of labels. For the above graph these would be ℓ𝑉 = [1 , 2 , 3 , 4] and ℓ𝐴 = [(1 , 2), (2 , 2), (2 , 3), (3 , 2), (3 , 1)]. Thus we define the datatype of graphs as2

type Graph = ([Int], [(Int, Int)])

2: We use Haskell notation in which [𝑡] is the type of lists of elements of type 𝑡 , and (𝑡1 , 𝑡2 ) is the cartesian product of types 𝑡1 and 𝑡2 .
This is not yet a complete description of the intended representation, as
there are representation invariants and conditions not expressed by the
type:
▶ the order in which the vertices and edges are listed is not important,
▶ every vertex and edge must be listed exactly once, and
▶ the source and target of each edge must appear in the list of vertices.

Such conditions can be expressed in terms of a realizability relation

𝑟⊩𝑥
3 Realizability categories 45

which tells which values 𝑟 of the datatype correspond to which elements 𝑥


of the set. We read 𝑟 ⊩ 𝑥 as “𝑟 realizes (implements, represents, witnesses)
𝑥 ”. In the above example we would write

([1, 2, 3 , 4], [(1 , 2), (2 , 2), (2 , 3), (3, 2), (3 , 1)]) ⊩ 𝐺,

and also

([3 , 2 , 1 , 4], [(2, 2), (1 , 2), (2 , 3), (3 , 2), (3, 1)]) ⊩ 𝐺.

We also want to compute with the elements of Graphs. Programmers intuitively know what this means: to implement, or realize, a map 𝑓 : Graphs → Graphs is to give a program 𝑝 : Graph → Graph which does to realizers what 𝑓 does to elements: if 𝑟 ⊩ 𝐺 then 𝑝 𝑟 ⊩ 𝑓 (𝐺). We say that 𝑓 is realized or tracked by 𝑝 .
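The graph example can be played out in code; a Python sketch (the notes use Haskell only for the type declaration, and the maps `opposite` and `p` are hypothetical illustrations, not part of the text):

```python
def realizes(r, g):
    """r ⊩ g: the realizer r = (vertex list, edge list) represents the
    abstract graph g, given here as a pair of *sets* (vertices, edges)."""
    vs, es = r
    distinct = len(set(vs)) == len(vs) and len(set(es)) == len(es)
    closed = all(a in vs and b in vs for (a, b) in es)  # edges use listed vertices
    return distinct and closed and set(vs) == g[0] and set(es) == g[1]

# The abstract graph G from the text, and two of its realizers.
G = ({1, 2, 3, 4}, {(1, 2), (2, 2), (2, 3), (3, 2), (3, 1)})
r1 = ([1, 2, 3, 4], [(1, 2), (2, 2), (2, 3), (3, 2), (3, 1)])
r2 = ([3, 2, 1, 4], [(2, 2), (1, 2), (2, 3), (3, 2), (3, 1)])

def opposite(g):
    # set-level map f reversing every edge (an example map, not from the text)
    return (g[0], {(b, a) for (a, b) in g[1]})

def p(r):
    # realizer-level program tracking `opposite`: if r ⊩ g then p(r) ⊩ opposite(g)
    vs, es = r
    return (vs, [(b, a) for (a, b) in es])
```

Both `r1` and `r2` realize `G`, and `p` does to realizers what `opposite` does to abstract graphs.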

3.2 Assemblies

We now give a precise definition of the ideas presented in the previous


section.

Definition 3.2.1 An assembly over a tpca with a sub-tpca (𝔸 , 𝔸′)


is a triple 𝑆 = (|𝑆|, ∥𝑆∥, ⊩𝑆 ) where |𝑆| is its underlying set, ∥𝑆∥ its
underlying type from 𝔸, and ⊩𝑆 is a relation between 𝔸 ∥𝑆∥ and |𝑆|
satisfying: for every 𝑥 ∈ |𝑆| there is x ∈ 𝔸 ∥𝑆∥ such that x ⊩𝑆 𝑥 .
An assembly map 𝑓 : 𝑆 → 𝑇 between assemblies 𝑆 and 𝑇 is a map
𝑓 : |𝑆| → |𝑇 | between the underlying sets for which there exists
f ∈ 𝔸′∥𝑆∥→∥𝑇 ∥ , called a realizer of 𝑓 , satisfying for all x , 𝑥 : if x ⊩𝑆 𝑥
then f x↓ and f x ⊩𝑇 𝑓 (𝑥).

We sometimes require that the underlying tpca with sub-tpca (𝔸 , 𝔸′) is


in fact an n-tpca, or an nr-tpca with a chosen substructure. Even though
we may not be explicit about the requirement, it should be apparent from
our using the type nat and the fixed-point combinators Y.
We often use the same letter for an element and its realizer, but differen-
tiate between them by using different fonts, for instance the elements 𝑥 ,
𝑦 , 𝑓 , 𝑔 would have realizers x, y, f, g, respectively.
There are many versions of realizability. Ours is known as typed relative
realizability. It is typed because we used typed pcas. It is relative because
maps are realized relative to a choice of a sub-pca. In typical cases,
such as type 2 machines and the graph model from Subsection 2.1.2
and Section 2.2, 𝔸′ is the computable part of a topological pca 𝔸, in
accordance with the slogan

“Topological data – computable functions!”

When 𝔸 is untyped the definition of an assembly simplifies a bit because


we need not mention the (trivial) types.

Definition 3.2.2 An assembly over an untyped pca 𝔸 is a pair 𝑆 =


(|𝑆|, ⊩𝑆 ) where |𝑆| is a set and ⊩𝑆 is a relation between 𝔸 and |𝑆| , such that for every 𝑥 ∈ |𝑆| there is 𝑟 ∈ 𝔸 with 𝑟 ⊩𝑆 𝑥 .

Assemblies and maps over (𝔸 , 𝔸′) form a category Asm(𝔸 , 𝔸′). Indeed, if 𝑓 : 𝑆 → 𝑇 and 𝑔 : 𝑇 → 𝑈 are realized by f ∈ 𝔸′∥𝑆∥→∥𝑇 ∥ and g ∈ 𝔸′∥𝑇 ∥→∥𝑈 ∥ , respectively, then their composition 𝑔 ◦ 𝑓 is realized by ⟨𝑥 ∥𝑆∥ ⟩ g (f 𝑥) = S (K g) (S (K f) (S K K)). The identity map id𝑆 : |𝑆| → |𝑆| is realized by ⟨𝑥 ∥𝑆∥ ⟩ 𝑥 = S K K. Composition is associative because it is just composition of maps.
When 𝔸′ = 𝔸 we write Asm(𝔸) instead of Asm(𝔸 , 𝔸).

3.2.1 Modest sets

In the definition of assemblies, nothing prevents several elements from


sharing a common realizer. We sometimes want to prohibit such anoma-
lies.

Definition 3.2.3 A modest assembly 𝑆 , also called a modest set,3 is an assembly in which elements do not share realizers:

∀𝑟 ∈ 𝔸 ∥𝑆∥ . ∀𝑥, 𝑦 ∈ |𝑆|. (𝑟 ⊩𝑆 𝑥 ∧ 𝑟 ⊩𝑆 𝑦 ⇒ 𝑥 = 𝑦).

3: The terminology was suggested by Dana Scott. It refers to the fact that the cardinality of a modest set 𝑆 does not exceed the cardinality of 𝔸 ∥𝑆∥ .

We let Mod(𝔸 , 𝔸′) be the full subcategory of Asm(𝔸 , 𝔸′) on the modest
sets.

Most structures in computable mathematics turn out to be modest, but


assemblies are needed also, and they form a richer category than the
modest sets.

3.2.2 The unit assembly 𝟙

To gain a bit of intuition about assemblies, we look at several concrete


examples of assemblies.
Let unit be a type with an element ★ ∈ 𝔸′unit . It always exists, because
there is at least one type 𝑠 , and then 𝔸𝑠→𝑠→𝑠 contains K𝑠,𝑠 .
The terminal assembly 𝟙 = ({★}, unit , ⊩𝟙 ) has the trivial realizability relation, 𝑟 ⊩𝟙 ★ for all 𝑟 ∈ 𝔸unit .

Exercise 3.2.4 Show that 𝟙 is the terminal object4 in Asm(𝔸 , 𝔸′). Conclude from this that a different choice of unit results in an isomorphic copy of 𝟙.

4: An object 𝑇 in a category is terminal when there is precisely one morphism to 𝑇 from every object.

The morphisms 𝟙 → 𝑆 correspond to those elements of |𝑆| that are realized by elements of 𝔸′∥𝑆∥ . Indeed, if 𝑓 : 𝟙 → 𝑆 is realized by f ∈ 𝔸′unit→∥𝑆∥ then 𝑓 ★ is realized by f 𝑡 ∈ 𝔸′∥𝑆∥ , for any 𝑡 ∈ 𝔸′unit . Conversely, if 𝑎 ∈ |𝑆| is realized by a ∈ 𝔸′∥𝑆∥ then ★ ↦→ 𝑎 is realized by ⟨𝑥 unit ⟩ a.
This may be a good moment to point out the difference between the global
points of 𝑆 , which is the set of morphisms 𝟙 → 𝑆 , and the underlying

set |𝑆| of 𝑆 . Both induce functors Asm(𝔸 , 𝔸′) → Set, which need not be
equivalent, unless 𝔸 = 𝔸′.

Exercise 3.2.5 The empty assembly 𝟘 has as its underlying set the
empty set ∅ and as its underlying type unit. Show that the choice of
the underlying type does not matter and that the empty assembly is
the initial object.

3.2.3 Natural numbers

Suppose (𝔸 , 𝔸′) is an n-tpca with a sub-n-tpca. Let 𝑁 = (ℕ , nat , ⊩𝑁 ) be the assembly of natural numbers, realized by the numerals: for all 𝑟 ∈ 𝔸nat and 𝑛 ∈ ℕ ,

𝑟 ⊩𝑁 𝑛 ⇐⇒ 𝑟 = 𝑛.
The successor 𝑛 ↦→ 𝑛 + 1 is realized by succ, and 0 ∈ ℕ is a global point
because 0 ∈ 𝔸′nat .
The assembly 𝑁 is the natural numbers object. Indeed, given an assembly 𝑆 with 𝑧 ∈ |𝑆| realized by z ∈ 𝔸′∥𝑆∥ , and 𝑓 : |𝑆| → |𝑆| , the unique map 𝑓 : ℕ → |𝑆| satisfying, for all 𝑛 ∈ ℕ ,

𝑓 0 = 𝑧 and 𝑓 (𝑛 + 1) = 𝑓 ( 𝑓 𝑛)

is realized by rec z f.
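Ignoring realizers, the universal property is just definition by iteration; a Python sketch of the set-level recursor:

```python
def rec(z, f):
    # rec z f n computes the unique map determined by z and f:
    # (rec z f)(0) = z  and  (rec z f)(n + 1) = f((rec z f)(n))
    def bar(n):
        acc = z
        for _ in range(n):
            acc = f(acc)
        return acc
    return bar
```

For instance, `rec(0, lambda x: x + 2)(5)` iterates "add 2" five times starting from 0.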

Exercise 3.2.6 Suppose (𝔸 , 𝔸′) is a tpca with a sub-tpca such that


Asm(𝔸 , 𝔸′ ) has a natural numbers object. Show that (𝔸 , 𝔸′ ) is an n-tpca
with a sub-n-tpca.

3.2.4 The constant assemblies

The extreme case of elements sharing the same realizer happens when
all elements of a set share all realizers. Assemblies with this property are
called the constant assemblies.
Let 𝑡 be a type such that 𝔸′𝑡 is inhabited. Such a type always exists,
because there is at least one type 𝑠 , and then 𝔸𝑠→𝑠→𝑠 contains K𝑠,𝑠 . Given
any set 𝑋 , let
∇𝑋 = (𝑋 , 𝑡, ⊩∇𝑋 )
be the assembly whose underlying set is 𝑋 and the realizability relation
is trivial, i.e., 𝑟 ⊩∇𝑋 𝑥 for all 𝑥 ∈ 𝑋 and 𝑟 ∈ 𝔸𝑡 .
If 𝑓 : 𝑋 → 𝑌 is any map between sets 𝑋 and 𝑌 then 𝑓 is a morphism
∇ 𝑓 : ∇𝑋 → ∇𝑌 because it is tracked by ⟨𝑥 𝑡 ⟩ 𝑥 . Thus ∇ is a functor

∇ : Set → Asm(𝔸, 𝔸′).

Up to natural isomorphism, ∇ is independent of the choice of type 𝑡 . We


will study the properties of ∇ later on. For now we notice that ∇ is full
and faithful, which means that Asm(𝔸 , 𝔸′) contains the category of sets
as a full subcategory.

The functor ∇ is devoid of any computational content because it represents


a set 𝑋 by a trivial realizability relation which conveys no information at
all about the elements of 𝑋 . Consequently, from the realizers we cannot
compute anything interesting regarding 𝑋 .

Exercise 3.2.7 Show that an assembly 𝑆 is modest if, and only if, every
assembly map ∇2 → 𝑆 is constant.

Exercise 3.2.8 Given a set 𝑋 and an assembly 𝑆 , show that every map
|𝑆| → 𝑋 is an assembly map 𝑆 → ∇𝑋 .

The functor ∇ is part of an adjunction, as follows.

Exercise 3.2.9 Let Γ : Asm(𝔸 , 𝔸′) → Set be the forgetful functor which
assigns to an assembly its underlying set, and to an assembly map the
underlying set-theoretic function. Show that Γ is left adjoint to ∇.

3.2.5 Two-element assemblies

We explore a bit what the two-element assemblies are like. For simplicity
we consider the non-relative case Asm(𝔸) of assemblies on a pca 𝔸.
Without loss of generality we may assume that a two-element assembly 𝑇
has |𝑇 | = 2 = {0 , 1} as its underlying set. Such an assembly is determined
by the sets of realizers E𝑇 0 ⊆ 𝔸 and E𝑇 1 ⊆ 𝔸, which must both be
inhabited.
We may partially order all two-element assemblies by stipulating that
𝑇 ≤ 𝑈 , where |𝑇 | = |𝑈 | = 2 , when id2 is realized as an assembly map
𝑇 → 𝑈 . That is, 𝑇 ≤ 𝑈 holds when the realizers of 𝑇 are more informative
than the realizers of 𝑈 .
With respect to this ordering the largest two-element assembly is ∇2,
since every map into a constant assembly is realized. We call ∇2 the
classical truth values because it comes from classical set theory, where 2
is the object of truth values.
The least two-element assembly is 𝟚 = (2 , ⊩𝟚 ) where, for r ∈ 𝔸 and
𝑏 ∈ 2,

r ⊩𝟚 𝑏 ⇐⇒ (r = false ∧ 𝑏 = 0) ∨ (r = true ∧ 𝑏 = 1).

Indeed, 𝟚 ≤ 𝑈 is realized by ⟨r⟩ if r b a, where a ⊩𝑈 0 and b ⊩𝑈 1. The assembly 𝟚 is the assembly of Booleans or decidable truth values.
There are plenty of assemblies between 𝟚 and ∇2. For example, the assembly Σ01 of the semidecidable5 truth values, also known as the Rosolini dominance, is defined by

r ⊩Σ01 𝑏 ⇐⇒ (∀𝑛. r 𝑛 ∈ {false , true}) ∧ (𝑏 = 1 ⇔ ∃𝑛. r 𝑛 = true).

Its realizers compute infinite sequences of bits. Such a realizer represents 1 if, and only if, it computes a sequence that contains true.

5: We need to be careful about the meaning of “semidecidable” because it depends on 𝔸. For example, in assemblies over 𝕂1 the Rosolini dominance really does embody semidecidability, whereas in assemblies over 𝔹 it is an admissible representation of the Sierpinski space, see Definition 3.5.10.
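A Σ01 realizer can be pictured as a bit stream searched for true; a Python sketch (the search bound `fuel` is an artifact of the illustration only — genuine semidecidability imposes no bound):

```python
def represents_true(r, fuel=1000):
    # r : N -> bool realizes the truth value 1 iff r(n) = True for some n;
    # we can only semidecide this, here cut off at an arbitrary search bound
    return any(r(n) for n in range(fuel))

r_one  = lambda n: n == 42    # realizes 1: true shows up at index 42
r_zero = lambda n: False      # realizes 0: true never shows up
```

Dually, there is no general procedure that decides from a realizer alone which truth value it represents.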



Exercise 3.2.10 Define two-element assemblies that correspond to the truth values in the arithmetical hierarchy, defined inductively as follows:
▶ Σ00 = Π00 = {⊥, ⊤} are the decidable truth values,
▶ Σ0𝑛+1 are the truth values of the form ∃𝑛. 𝑝𝑛 where 𝑝 : ℕ → Π0𝑛 ,
▶ Π0𝑛+1 are the truth values of the form ∀𝑛. 𝑝𝑛 where 𝑝 : ℕ → Σ0𝑛 .
In your definition you should replace the maps 𝑝 with suitable realizers r. Above we already constructed Σ00 = Π00 = 𝟚 and Σ01 . Show that Σ0𝑛 ≤ Π0𝑛+1 and Π0𝑛 ≤ Σ0𝑛+1 .

Real numbers

As our third example we ask how to equip the real numbers with a
realizability structure. Here we give the correct answer, but leave it
unexplained for the time being.
We work with an nr-tpca 𝔸 with a sub-n-tpca 𝔸′. Intuitively speaking, a realizer for 𝑥 ∈ ℝ should allow us to compute arbitrarily good approximations of 𝑥 , so we define the relation ⊩𝑅 between 𝔸nat→nat×nat×nat and ℝ by stipulating that x ⊩𝑅 𝑥 holds if, and only if,

∀𝑘 ∈ ℕ . ∃𝑎, 𝑏, 𝑐 ∈ ℕ . x 𝑘 = (𝑎, 𝑏, 𝑐) ∧ |𝑥 − (𝑎 − 𝑏)/(1 + 𝑐)| < 2−𝑘 .

The triple of numbers (𝑎, 𝑏, 𝑐) is just a clumsy way of encoding the rational (𝑎 − 𝑏)/(1 + 𝑐), so in essence x computes a sequence of rationals such that the 𝑘 -th term is within 2−𝑘 of 𝑥 .
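Such a realizer is easy to exhibit; a Python sketch producing, for each k, a triple (a, b, c) with |x − (a − b)/(1 + c)| < 2⁻ᵏ (illustrative only: here x is handed to us as an exact Fraction, standing in for an oracle for a real):

```python
from fractions import Fraction

def realizer_of(x):
    # x is an exact rational, standing in for an oracle for a real number
    def r(k):
        q = 2 ** (k + 1)
        p = int(Fraction(x) * q)   # truncation toward zero: |x - p/q| < 1/q < 2^-k
        # encode the rational p/q as (a - b)/(1 + c) with a, b, c natural numbers
        return (p, 0, q - 1) if p >= 0 else (0, -p, q - 1)
    return r
```

For example, `realizer_of(Fraction(2, 3))(10)` yields a triple encoding a rational within 2⁻¹⁰ of 2/3.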
The assembly of real numbers 𝑅 = (ℝrz , nat → nat × nat × nat , ⊩𝑅 ) has
as its underlying set the realized reals

ℝrz = {𝑥 ∈ ℝ | ∃x ∈ 𝔸nat→nat×nat×nat . x ⊩𝑅 𝑥}.

Which reals are so realized depends on the choice of 𝔸. For example, the first Kleene algebra realizes the Turing computable reals, whereas the second Kleene algebra realizes all reals.

3.3 Equivalent formulations

Assemblies and modest sets have several equivalent formulations, which


were formulated by different communities for particular choices of
(𝔸, 𝔸′), each using their own notation and terminology. In this section
we review the equivalent formulations, and in Section 3.5 show how
various “schools of computable mathematics” arise as special instances.

3.3.1 Existence predicates

A realizability relation ⊩𝑆 is a subset of 𝔸 ∥𝑆∥ × |𝑆| . By transposition


it may be equivalently expressed as a map E𝑆 : |𝑆| → P(𝔸 ∥𝑆∥ ). The
correspondence is
x ⊩𝑆 𝑥 ⇐⇒ x ∈ E𝑆 (𝑥).

Because every 𝑥 is realized by something, E𝑆 (𝑥) always contains at least one element. Thus an assembly (|𝑆|, ∥𝑆∥, ⊩𝑆 ) may be equivalently presented as a triple (|𝑆|, ∥𝑆∥, E𝑆 ) where E𝑆 : |𝑆| → P(𝔸 ∥𝑆∥ ) is a map, called the existence predicate, such that E𝑆 (𝑥) contains at least one element for every 𝑥 ∈ |𝑆| . The name suggests that the elements of E𝑆 (𝑥) are computational witnesses for “existence of 𝑥 ”.
An assembly 𝑆 is modest if, and only if, E𝑆 (𝑥) ∩ E𝑆 (𝑦) ≠ ∅ implies
𝑥 = 𝑦.
Under this formulation a map 𝑓 : 𝑆 → 𝑇 is realized if there exists
f ∈ 𝔸′∥𝑆∥→∥𝑇 ∥ such that, for all 𝑥 ∈ |𝑆| and x ∈ E𝑆 (𝑥), f x↓ and f x ∈
E𝑇 ( 𝑓 (𝑥)).

3.3.2 Representations

By transposing ⊩𝑆 the other way around we obtain representations.


Suppose first that 𝑆 is a modest set. Since every realizer 𝑟 ∈ 𝔸 ∥𝑆∥ realizes
at most one 𝑥 ∈ |𝑆| , we may define a partial map 𝛿 𝑆 : 𝔸 ∥𝑆∥ ⇀ |𝑆| by

𝛿 𝑆 (𝑟) = 𝑥 ⇐⇒ 𝑟 ⊩𝑆 𝑥.

The map 𝛿 𝑆 is surjective because every element is realized, but it need not


be defined everywhere. The triple (|𝑆|, ∥𝑆∥, 𝛿 𝑆 ) uniquely describes the
modest set 𝑆 . The map 𝛿 𝑆 is called a representation of 𝑆 .
A map 𝑓 : 𝑆 → 𝑇 is realized or tracked by f ∈ 𝔸′∥𝑆∥→∥𝑇 ∥ when, for all
x ∈ dom(𝛿 𝑆 ), f x↓ and 𝛿𝑇 (f x) = 𝑓 (𝛿 𝑆 (𝑥)).
Representations and realized maps form a category Rep(𝔸 , 𝔸′), which is
equivalent to Mod(𝔸 , 𝔸′).
When we transpose ⊩𝑆 for a general assembly 𝑆 the result is a multi-valued representation, which is a map 𝛿 𝑆 : 𝔸 ∥𝑆∥ → P(|𝑆|) that takes each 𝑟 ∈ 𝔸 ∥𝑆∥ to the (possibly empty) set of elements it realizes,

𝛿 𝑆 (𝑟) = {𝑥 ∈ |𝑆| | 𝑟 ⊩𝑆 𝑥}.

The map is surjective in the sense that for every 𝑥 ∈ |𝑆| there is 𝑟 ∈ 𝔸 ∥𝑆∥
such that 𝑥 ∈ 𝛿 𝑆 (𝑟).
To summarize, there are three ways of specifying the realizability struc-
ture of an assembly: with a realizability relation ⊩𝑆 , an existence predi-
cate E𝑆 , and a multi-valued representation 𝛿 𝑆 . Each determines the other
two by
𝑟 ⊩𝑆 𝑥 ⇐⇒ 𝑟 ∈ E𝑆 (𝑥) ⇐⇒ 𝑥 ∈ 𝛿 𝑆 (𝑟).
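For a finite assembly the three presentations are mechanical transposes of one another; a Python sketch with the realizability relation given as a set of pairs (a toy relation, purely for illustration):

```python
def E_of(rel):
    # existence predicate: E(x) = { r | r ⊩ x }
    return lambda x: {r for (r, y) in rel if y == x}

def delta_of(rel):
    # multi-valued representation: delta(r) = { x | r ⊩ x }
    return lambda r: {x for (s, x) in rel if s == r}

rel = {(0, 'a'), (1, 'a'), (2, 'b')}   # a toy realizability relation
```

Here `E_of(rel)('a')` is {0, 1} and `delta_of(rel)(2)` is {'b'}; a realizer outside the relation, such as 3, is mapped to the empty set.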

3.3.3 Partial equivalence relations

This formulation only works for modest sets. With each modest set 𝑆 we may associate a partial equivalence relation6 (per) ≈𝑆 on 𝔸 ∥𝑆∥ which relates 𝑞 and 𝑟 when they realize the same element:

𝑞 ≈𝑆 𝑟 ⇐⇒ ∃𝑥 ∈ |𝑆|. 𝑞 ⊩𝑆 𝑥 ∧ 𝑟 ⊩𝑆 𝑥.

6: A partial equivalence relation is a transitive and symmetric relation.

The pair (∥𝑆∥, ≈𝑆 ) suffices for the reconstruction of the original modest
set, up to isomorphism, which we show next.
Let (𝔸 , 𝔸′) be a tpca with a sub-tpca. A partial equivalence relation on 𝔸
is a pair 𝑆 = (∥𝑆∥, ≈𝑆 ) where ∥𝑆∥ is a type and ≈𝑆 is a transitive and
symmetric relation on 𝔸 ∥𝑆∥ . A realizer 𝑟 ∈ 𝔸 ∥𝑆∥ is total if 𝑟 ≈𝑆 𝑟 . The set
of total realizers is denoted by ∥𝑆∥ = {𝑟 ∈ 𝔸 ∥𝑆∥ | 𝑟 ≈𝑆 𝑟}. Each 𝑟 ∈ ∥𝑆∥
determines the equivalence class [𝑟]𝑆 = {𝑞 ∈ 𝔸 ∥𝑆∥ | 𝑟 ≈𝑆 𝑞}.
An extensional realizer between pers 𝑆 and 𝑇 is 𝑝 ∈ 𝔸′∥𝑆∥→∥𝑇 ∥ such that,
for all 𝑞, 𝑟 ∈ 𝔸 ∥𝑆∥ , if 𝑞 ≈𝑆 𝑟 then 𝑝 𝑞↓, 𝑝 𝑟↓, and 𝑝 𝑞 ≈𝑇 𝑝 𝑟 . Extensional
realizers 𝑝 and 𝑝 ′ are equivalent when 𝑞 ≈𝑆 𝑟 implies 𝑝 𝑞 ≈𝑇 𝑝 ′ 𝑟 .
Pers and equivalence classes of extensional realizers form a category
Per(𝔸 , 𝔸′ ) whose objects are pers on 𝔸 and morphisms are equivalence
classes of extensional realizers. The composition of [𝑝] : 𝑆 → 𝑇 and
[𝑞] : 𝑇 → 𝑈 is [𝑞 ◦ 𝑝] : 𝑆 → 𝑈 where 𝑞 ◦ 𝑝 = ⟨𝑥 ∥𝑆∥ ⟩ 𝑞 (𝑝 𝑥). The identity
morphism id𝑆 : 𝑆 → 𝑆 is represented by ⟨𝑥 ∥𝑆∥ ⟩ 𝑥 . It is easy to check that
this forms a category.
Let 𝑆 and 𝑇 be pers over (𝔸 , 𝔸′). A morphism between them may be
alternatively described as a function 𝑓 : ∥𝑆∥/≈𝑆 → ∥𝑇 ∥/≈𝑇 between the
equivalence classes for which there exists a realizer 𝑝 ∈ 𝔸′∥𝑆∥→∥𝑇 ∥ that
tracks it: for every equivalence class [𝑟]𝑆 , 𝑝 𝑟↓ and [𝑝 𝑟]𝑇 = 𝑓 ([𝑟]𝑆 ).
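To make the per formulation concrete, here is a Python sketch on invented finite data: a per is stored as a set of related pairs of realizers, from which we compute the total realizers and equivalence classes, and we check that a candidate extensional realizer respects the pers, so that it induces a well-defined map on equivalence classes.

```python
# A per on realizers {0,1,2,3}: 0 ~ 1 and 2 ~ 2 (3 is not total).
# The per is stored as a set of pairs, closed under symmetry/transitivity.
perS = {(0, 0), (0, 1), (1, 0), (1, 1), (2, 2)}
perT = {(10, 10), (20, 20)}

def total(per):
    """Total realizers: those related to themselves."""
    return {r for (r, q) in per if r == q}

def cls(per, r):
    """The equivalence class [r] of a total realizer r."""
    return frozenset(q for (p, q) in per if p == r)

# A candidate extensional realizer p, here just a finite map on realizers.
p = {0: 10, 1: 10, 2: 20}

# Extensionality: q ~S r implies p q ~T p r.
assert all((p[q], p[r]) in perT for (q, r) in perS)

# The induced map on equivalence classes is therefore well defined:
classesS = {cls(perS, r) for r in total(perS)}
induced = {c: cls(perT, p[min(c)]) for c in classesS}
```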

Lemma 3.3.1 Suppose 𝑆 is an assembly, 𝑇 is a set, and 𝑓 : 𝑇 → 𝑆 is a bijection. Then 𝑆 is isomorphic to the assembly 𝑇 = (𝑇, ∥𝑆∥, ⊩𝑇 ) where 𝑟 ⊩𝑇 𝑥 is defined as 𝑟 ⊩𝑆 𝑓 (𝑥).

Proof. The map 𝑓 is a morphism from 𝑇 to 𝑆 because it is tracked by ⟨𝑥 ∥𝑆∥ ⟩ 𝑥 . Similarly, 𝑓 −1 is a morphism because it is also tracked by the same realizer. Obviously, 𝑓 and 𝑓 −1 are inverses of each other.

Proposition 3.3.2 The categories Mod(𝔸 , 𝔸′) and Per(𝔸 , 𝔸′) are equivalent.

Proof. A modest set (|𝑆|, ∥𝑆∥, ⊩𝑆 ) determines a per (∥𝑆∥, ≈𝑆 ), as described above. A morphism 𝑓 : 𝑆 → 𝑇 which is tracked by 𝑝 ∈ 𝔸′∥𝑆∥→∥𝑇 ∥ determines a morphism of pers [𝑝] : (∥𝑆∥, ≈𝑆 ) → (∥𝑇 ∥, ≈𝑇 ). This defines a functor 𝐹 : Mod(𝔸 , 𝔸′) → Per(𝔸 , 𝔸′).
In the other direction the functor 𝐺 : Per(𝔸 , 𝔸′) → Mod(𝔸 , 𝔸′) sends a
per (∥𝑇 ∥, ≈𝑇 ) to the modest set (∥𝑇 ∥/≈𝑇 , ∥𝑇 ∥, ⊩𝑇 ) whose realizability
relation is
𝑟 ⊩𝑇 [𝑞] ⇐⇒ 𝑟 ≈𝑇 𝑞.
A morphism [𝑝] : (∥𝑆∥, ≈𝑆 ) → (∥𝑇 ∥, ≈𝑇 ) is mapped to the map 𝐺[𝑝] : ∥𝑆∥/≈𝑆 → ∥𝑇 ∥/≈𝑇 , defined by 𝐺[𝑝][𝑟]𝑆 = [𝑝 𝑟]𝑇 , which is obviously tracked by 𝑝 .
The functors 𝐹 and 𝐺 form an equivalence of categories. The composition 𝐹 ◦ 𝐺 is actually equal to the identity, as is easily verified. A modest set (|𝑆|, ∥𝑆∥, ⊩𝑆 ) is isomorphic to 𝐺(𝐹(𝑆)) by Lemma 3.3.1 applied to the bijection which takes an 𝑥 ∈ |𝑆| to [𝑟]𝐺(𝐹(𝑆)) , where 𝑟 ∈ 𝔸 ∥𝑆∥ is any realizer such that 𝑟 ⊩𝑆 𝑥 . We leave the verification that the isomorphisms are natural as an exercise.

3.3.4 Equivalence relations

A per (∥𝑆∥, ≈𝑆 ) may be viewed as an equivalence relation on the set ∥𝑆∥ = {𝑟 ∈ 𝔸 ∥𝑆∥ | 𝑟 ≈𝑆 𝑟} of total realizers. This gives us yet another equivalent formulation of modest sets, this time in terms of equivalence relations.
The category Er(𝔸 , 𝔸′) of equivalence relations has as objects triples
(𝑆, ∥𝑆∥, ≡𝑆 ) where ∥𝑆∥ is a type, 𝑆 ⊆ 𝔸 ∥𝑆∥ , and ≡𝑆 is an equivalence rela-
tion on 𝑆 . As in the case of pers, a morphism (𝑆, ∥𝑆∥, ≡𝑆 ) → (𝑇, ∥𝑇 ∥, ≡𝑇 )
is represented by an extensional realizer 𝑝 ∈ 𝔸′∥𝑆∥→∥𝑇 ∥ .
The difference between pers and equivalence relations is mostly a bu-
reaucratic one. Nevertheless, it is useful to know about Er(𝔸 , 𝔸′) because
sometimes we can describe it in enlightening alternative ways, e.g., in
Subsection 3.5.2 we describe pers on the graph model as equivalence
relations on topological spaces.

3.4 Applicative functors

Categories of assemblies themselves form a category whose morphisms are functors induced by simulations, known as applicative functors. These were defined and studied by John Longley [24], and are the appropriate notion of morphism between categories of assemblies, as well as between realizability toposes. We review their definition and several basic results about them, which we cannot do without assuming some knowledge of basic category theory.
A simulation 𝜌 : (𝔼 , 𝔼′) −→ (𝔽 , 𝔽 ′) induces an applicative functor

𝜌̂ : Asm(𝔼 , 𝔼′) −→ Asm(𝔽 , 𝔽 ′)

which maps an assembly 𝑆 = (𝑆, ⊩𝑆 ) over (𝔼 , 𝔼′) to the assembly 𝜌̂ 𝑆 over (𝔽 , 𝔽 ′), whose underlying set is 𝑆 and

𝑞 ⊩𝜌̂ 𝑆 𝑥 ⇐⇒ ∃𝑟 ∈ 𝔼. 𝑞 ∈ 𝜌[𝑟] ∧ 𝑟 ⊩𝑆 𝑥.

That is, 𝜌̂ replaces realizers in 𝔼 with their simulations in 𝔽 .

Suppose 𝑟 ∈ 𝔽 ′ is a realizer for 𝜌. An assembly map 𝑓 : 𝑆 → 𝑇 , realized by f ∈ 𝔼′, is mapped by 𝜌̂ to the same underlying map 𝜌̂ 𝑓 = 𝑓 , which is realized by 𝑟 g for any g ∈ 𝜌[f].
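At the level of realizability relations the action of 𝜌̂ is a straightforward pushforward. The following Python sketch (the finite assembly and the simulation are invented data) models a simulation as a map assigning to each 𝔼-realizer its set of simulating 𝔽-realizers, and transports an assembly along it according to the defining equivalence above.

```python
# An assembly over E: elements realized by E-realizers.
rel_E = {(0, "x"), (1, "y")}

# A simulation rho: each E-realizer is simulated by a set of F-realizers.
rho = {0: {"f0", "f0'"}, 1: {"f1"}}

def pushforward(rel, rho):
    """q realizes x in the transported assembly iff q simulates
    some r with r ||- x (the defining equivalence of rho-hat)."""
    return {(q, x) for (r, x) in rel for q in rho[r]}

rel_F = pushforward(rel_E, rho)
```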

Exercise 3.4.1 Prove that an applicative functor induced by a discrete simulation restricts to modest sets.

The properties of the induced applicative functor depend on the properties of the simulation, as follows. (We presume the existence of categorical structure on assemblies, which will only be established in ??.)

Proposition 3.4.2 Let 𝜌 : (𝔼 , 𝔼′) −→ (𝔽 , 𝔽 ′) be a simulation.

1. The functor 𝜌̂ is faithful and it preserves finite limits.
2. If 𝜌 is projective then 𝜌̂ preserves projective objects.
3. If 𝜌 is decidable then 𝜌̂ preserves finite colimits and the natural numbers object.

Proof. The functor 𝜌̂ preserves finite limits by [24, Proposition 2.2.2]. It is faithful because it acts trivially on morphisms. For the second claim see [24, Theorem 2.4.12], and for the third one [24, Theorem 2.4.19].

Adjunctions and equivalences between simulations carry over to the induced functors.

Theorem 3.4.3 Consider simulations

𝛿 : (𝔼, 𝔼′) −→ (𝔽 , 𝔽 ′), 𝛾 : (𝔽 , 𝔽 ′) −→ (𝔼 , 𝔼′).

1. If 𝛾 ⊣ 𝛿 is an adjoint pair, then 𝛾̂ ⊣ 𝛿̂ is an adjunction of functors.
2. If 𝛾 ⊣ 𝛿 is an adjoint inclusion then the counit of the adjunction 𝛾̂ ⊣ 𝛿̂ is a natural isomorphism.
3. If 𝛾 ⊣ 𝛿 is an adjoint retraction then the unit of the adjunction 𝛾̂ ⊣ 𝛿̂ is a natural isomorphism.
4. If 𝛾 and 𝛿 form an equivalence, then so do 𝛾̂ and 𝛿̂ .

Proof. The first three claims are subsumed by the easy part of [24, Proposition 2.5.9], except that we are using simulations on pcas with sub-pcas. Also, we are restricting attention to categories of assemblies rather than realizability toposes, but this is not a problem because applicative functors on realizability toposes restrict to assemblies.

That equivalences of simulations induce equivalences of categories is shown in [24, Theorem 2.5.6].

The construction of assemblies and induced applicative functors extends to a 2-functor between 2-categories. Indeed, given 𝛾, 𝛿 : (𝔼 , 𝔼′) −→ (𝔽 , 𝔽 ′) such that 𝛾 ⪯ 𝛿 , there is an induced natural transformation 𝜁 : 𝛾̂ ⇒ 𝛿̂ defined by

𝜁 𝑆 = id𝑆 : 𝛾̂ 𝑆 → 𝛿̂ 𝑆.

This is a valid definition, for if 𝑡 ∈ 𝔽 ′ is a translation witnessing 𝛾 ⪯ 𝛿 , then 𝑡 tracks every 𝜁 𝑆 , and the naturality condition is trivial.
An applicative functor induced by 𝜌 : (𝔼 , 𝔼′) −→ (𝔽 , 𝔽 ′) commutes up to natural isomorphism with the constant assembly functor ∇ from Subsection 3.2.4,

𝜌̂ ◦ ∇ ≅ ∇,

as well as with the underlying set functor Γ,

Γ ◦ 𝜌̂ ≅ Γ.

See [24, Proposition 2.2.4] for the proof.

3.5 Schools of Computable Mathematics

Realizability is a unifying framework for several “schools” of computable mathematics [7]. To get a particular variation we just choose an appropriate tpca with sub-tpca. We look at some of them and relate the traditional terminology and notions to ours. The material should be of interest to those who care about computability on topological spaces.

[7]: Bridges et al. (1987), Varieties of Constructive Mathematics

3.5.1 Recursive Mathematics

Recursive Mathematics, also known as type one effectivity or Russian constructivism [21, 36, 40], is computable mathematics done with type 1 machines, cf. Subsection 2.1.1. In our setting it corresponds to the category Rep(𝕂1 ) of representations over the first Kleene algebra.

[21]: Kušner (1984), Lectures on Constructive Mathematical Analysis
[36]: Spreen (1998), “On Effective Topological Spaces”
[40]: Šanin (1968), Constructive Real Numbers and Constructive Function Spaces

An object of Rep(𝕂1 ) is called a numbered set [11]. It is a pair (𝑆, 𝛿 𝑆 ), where 𝑆 is a set and 𝛿 𝑆 : ℕ ⇀ 𝑆 is a partial surjection, called a numbering of 𝑆 .

[11]: Eršov (1999), “Handbook of Computability Theory”

A function 𝑓 : 𝑆 → 𝑇 is realized by 𝑛 ∈ ℕ when, for all 𝑚 ∈ dom(𝛿 𝑆 ),

𝝋 𝑛 (𝑚)↓ and 𝛿𝑇 (𝝋 𝑛 (𝑚)) = 𝑓 (𝛿 𝑆 (𝑚)).
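The realization condition can be illustrated with a small Python sketch. We cannot index partial computable functions as 𝝋ₙ here, so we use honest Python functions in place of the indices; the numbering of the integers below is our own invented example.

```python
# A toy numbering of the integers: delta(2k) = k, delta(2k+1) = -k-1.
# (In the text realizers are indices of partial computable functions;
# here we use Python functions in their place.)
def delta(n):
    return n // 2 if n % 2 == 0 else -(n // 2) - 1

# The successor function f(x) = x + 1 on the integers...
def f(x):
    return x + 1

# ...is realized by a computable g on names: delta(g(m)) = f(delta(m)).
def g(m):
    x = delta(m) + 1
    return 2 * x if x >= 0 else 2 * (-x) - 1

# Check the realization condition on an initial segment of names.
assert all(delta(g(m)) == f(delta(m)) for m in range(100))
```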

A numbered set (𝑆, 𝛿 𝑆 ) has countably many elements because it is covered by the countable set dom(𝛿 𝑆 ). This is sometimes considered a disadvantage and a reason for preferring type 2 machines, which are able to compute with uncountable structures such as real numbers. However, internally to the category the reals form a Cauchy-complete archimedean ordered field, on top of which it is perfectly possible to develop a version of computable analysis. One gets an unusual variant which is a rich source of counter-examples.

3.5.2 Equilogical spaces

An equilogical space is a topological space with an equivalence relation [1]. We study equilogical spaces in some detail because they give us a general theory of computable maps between countably-based spaces. In Subsection 3.5.5 we relate equilogical spaces to Type Two Effectivity, which is another school of computability on general topological spaces.

[1]: Bauer et al. (1998), “Equilogical Spaces”
Recall that a topological space is countably based or second-countable if it has a countable topological basis. Equivalently, a space is countably based when it has a countable subbasis. We prefer to work with subbases because they simplify the treatment of computable maps between spaces.

Thus we define a countably based space to be a pair (𝑋 , (𝑈 𝑖 )𝑖∈ℕ ) where


𝑋 is a topological space and (𝑈 𝑖 )𝑖∈ℕ is an enumeration of subbasic open
sets. These generate the topology of 𝑋 by taking finite intersections
and arbitrary unions. While we usually omit an explicit mention of the
subbasis (𝑈 𝑖 )𝑖∈ℕ , we do insist that a countably based space always be
given together with a particular subbasis. This allows us to avoid the axiom
of choice.
The graph model ℙ is a countably based space. We always take its
subbasic open sets to be ↑𝑛 = {𝐴 ⊆ ℕ | 𝑛 ∈ 𝐴}, for 𝑛 ∈ ℕ .
A (countably based) equilogical space (𝑋 , (𝑈 𝑖 )𝑖∈ℕ , ≡𝑋 ) is a countably
based topological space (𝑋 , (𝑈 𝑖 )𝑖∈ℕ ) with an equivalence relation ≡𝑋 . We
usually do not bother writing the subbasis (𝑈 𝑖 )𝑖∈ℕ . The canonical quotient
map 𝑞 𝑋 : 𝑋 → 𝑋/≡𝑋 maps each 𝑥 ∈ 𝑋 to its equivalence class [𝑥]𝑋 . A
morphism 𝑓 : (𝑋 , ≡𝑋 ) → (𝑌, ≡𝑌 ) is a map 𝑓 : 𝑋/≡𝑋 → 𝑌/≡𝑌 between
equivalence classes for which there exists a continuous 𝑔 : 𝑋 → 𝑌 such
that 𝑞𝑌 ◦ 𝑔 = 𝑓 ◦ 𝑞 𝑋 , i.e., 𝑓 ([𝑥]𝑋 ) = [𝑔(𝑥)]𝑌 for all 𝑥 ∈ 𝑋 . Morphisms compose as expected. The category of equilogical spaces and morphisms between them is denoted by Equ.
A countably based topological space 𝑋 may be construed as an equilogical
space (𝑋 , =𝑋 ) with equality as the equivalence relation. A morphism
𝑓 : (𝑋 , =𝑋 ) → (𝑌, =𝑌 ) is the same thing as a continuous map 𝑓 : 𝑋 → 𝑌
so that we have a full and faithful embedding 𝜔 Top → Equ of the category
𝜔Top of countably based spaces into the category Equ.
Let us show that Equ and Asm(ℙ) are equivalent. A pre-embedding
𝑒 : 𝑋 → 𝑌 between topological spaces is a continuous map such that
𝑒 −1 : O(𝑌) → O(𝑋) is surjective. For 𝑇0 -spaces this is equivalent to 𝑒 being
an embedding. If (𝑈 𝑖 )𝑖∈𝐼 is a topological subbasis for 𝑌 and 𝑒 : 𝑋 → 𝑌 is
a pre-embedding, then (𝑒 ∗ (𝑈 𝑖 ))𝑖∈𝐼 is a topological subbasis for 𝑋 .

Theorem 3.5.1 (Embedding Theorem for ℙ) A space 𝑋 may be pre-embedded in ℙ if, and only if, it is countably based.

Proof. Here ℙ is equipped with the Scott topology. If 𝑒 : 𝑋 → ℙ is a


pre-embedding then the open sets 𝑈𝑛 = 𝑒 ∗ (↑𝑛) form a countable subbasis
for 𝑋 .
Conversely, suppose (𝑈𝑛 )𝑛∈ℕ is a countable subbasis for 𝑋 . Define the
map 𝑒 𝑋 : 𝑋 → ℙ by

𝑒 𝑋 (𝑥) = {𝑛 ∈ ℕ | 𝑥 ∈ 𝑈𝑛 }.

We claim that 𝑒 𝑋 is a pre-embedding. It is continuous because 𝑒 𝑋∗ (↑𝑛) = 𝑈𝑛 . Let 𝑉 ⊆ 𝑋 be open. Then 𝑉 is a union of finite intersections of subbasic opens,

𝑉 = ⋃𝑖 (𝑈𝑛 𝑖,1 ∩ · · · ∩ 𝑈𝑛 𝑖,𝑘 𝑖 ).

Now

𝑒 𝑋∗ (⋃𝑖 ↑{𝑛 𝑖,1 , . . . , 𝑛 𝑖,𝑘 𝑖 }) = ⋃𝑖 𝑒 𝑋∗ (↑{𝑛 𝑖,1 , . . . , 𝑛 𝑖,𝑘 𝑖 })
= ⋃𝑖 𝑒 𝑋∗ (↑𝑛 𝑖,1 ∩ · · · ∩ ↑𝑛 𝑖,𝑘 𝑖 )
= ⋃𝑖 (𝑒 𝑋∗ (↑𝑛 𝑖,1 ) ∩ · · · ∩ 𝑒 𝑋∗ (↑𝑛 𝑖,𝑘 𝑖 ))
= ⋃𝑖 (𝑈𝑛 𝑖,1 ∩ · · · ∩ 𝑈𝑛 𝑖,𝑘 𝑖 )
= 𝑉,

therefore 𝑒 𝑋∗ is surjective, as required.

The pre-embedding 𝑒 𝑋 : 𝑋 → ℙ is called the (subbasic) neighborhood filter because 𝑒 𝑋 (𝑥) is just the set of (indices of) subbasic neighborhoods of 𝑥 . Henceforth 𝑒 𝑋 : 𝑋 → ℙ will always denote the subbasic neighborhood filter.
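For a finite toy space the neighborhood filter can be computed directly. The following Python sketch (the two-point space and its enumerated subbasis are invented for illustration) computes 𝑒_𝑋 and checks that on a 𝑇0 space it is injective, i.e., distinct points have distinct subbasic neighborhood filters.

```python
# A finite "space": points with an enumerated subbasis (made-up example,
# a Sierpinski-like space {0, 1} in which {1} is open).
points = [0, 1]
subbasis = [{0, 1}, {1}]  # U_0, U_1

def e(x):
    """Subbasic neighborhood filter: indices of subbasic opens containing x."""
    return frozenset(n for n, U in enumerate(subbasis) if x in U)

# e is injective exactly when points are separated by subbasic opens
# (the T0 condition), which holds here:
assert len({e(x) for x in points}) == len(points)
```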

Theorem 3.5.2 (Extension Theorem for ℙ) Suppose 𝑒 : 𝑋 → 𝑌 is a pre-


embedding and 𝑓 : 𝑋 → ℙ continuous. Then 𝑓 has a continuous extension
𝑔 : 𝑌 → ℙ along 𝑒 .

Proof. Consider the map 𝑔 : 𝑌 → ℙ defined by

𝑔(𝑦) = ⋃ { ⋂𝑧∈𝑒 ∗ (𝑈) 𝑓 (𝑧) | 𝑈 ∈ O(𝑌) ∧ 𝑦 ∈ 𝑈 }.

It is continuous because

𝑔 ∗ (↑𝑛) = {𝑦 ∈ 𝑌 | 𝑛 ∈ 𝑔(𝑦)}
= {𝑦 ∈ 𝑌 | ∃𝑈 ∈ O(𝑌). 𝑦 ∈ 𝑈 ∧ ∀𝑧 ∈ 𝑒 ∗ (𝑈). 𝑛 ∈ 𝑓 (𝑧)}
= ⋃ {𝑈 ∈ O(𝑌) | ∀𝑧 ∈ 𝑒 ∗ (𝑈). 𝑛 ∈ 𝑓 (𝑧)}.

Let us show that 𝑔(𝑒(𝑥)) = 𝑓 (𝑥) for all 𝑥 ∈ 𝑋 . Consider the value

𝑔(𝑒(𝑥)) = ⋃ { ⋂𝑧∈𝑒 ∗ (𝑈) 𝑓 (𝑧) | 𝑈 ∈ O(𝑌) ∧ 𝑒(𝑥) ∈ 𝑈 }.

Because 𝑓 (𝑥) appears in every intersection, each of them is contained in 𝑓 (𝑥), which shows that 𝑔(𝑒(𝑥)) ⊆ 𝑓 (𝑥). Suppose 𝑛 ∈ 𝑓 (𝑥). Because 𝑒 is a pre-embedding there exists 𝑊 ∈ O(𝑌) such that 𝑓 ∗ (↑𝑛) = 𝑒 ∗ (𝑊). If 𝑧 ∈ 𝑋 and 𝑒(𝑧) ∈ 𝑊 then 𝑧 ∈ 𝑒 ∗ (𝑊) = 𝑓 ∗ (↑𝑛), hence 𝑛 ∈ 𝑓 (𝑧). The intersection ⋂ { 𝑓 (𝑧) | 𝑧 ∈ 𝑋 ∧ 𝑒(𝑧) ∈ 𝑊 } contains 𝑛 and so 𝑛 ∈ 𝑔(𝑒(𝑥)). We proved that 𝑓 (𝑥) ⊆ 𝑔(𝑒(𝑥)), therefore 𝑓 (𝑥) = 𝑔(𝑒(𝑥)).

The Embedding and Extension theorems now give us the desired equivalence [28].

[28]: Menni et al. (2002), “Topological and Limit-Space Subcategories of Countably-Based Equilogical Spaces”
Proposition 3.5.3 The categories Equ and Asm(ℙ) are equivalent.

Proof. Suppose (𝑋 , ≡𝑋 ) is an equilogical space, and let 𝑒 𝑋 : 𝑋 → ℙ be the subbasic neighborhood filter pre-embedding. Define the assembly 𝐹(𝑋) = (𝑋/≡𝑋 , E𝐹(𝑋) ) by E𝐹(𝑋) ([𝑥]𝑋 ) = {𝑒 𝑋 (𝑦) | 𝑥 ≡𝑋 𝑦}. To make 𝐹 into a functor we define 𝐹( 𝑓 ) = 𝑓 for a morphism 𝑓 : (𝑋 , ≡𝑋 ) → (𝑌, ≡𝑌 ). If 𝑓 is tracked by 𝑔 : 𝑋 → 𝑌 , then 𝐹( 𝑓 ) is realized by a continuous extension of 𝑒𝑌 ◦ 𝑔 : 𝑋 → ℙ along 𝑒 𝑋 , which exists by Extension Theorem 3.5.2.
The functor 𝐺 : Asm(ℙ) → Equ is defined as follows. An assembly (𝑆, E𝑆 )
is mapped to the equilogical space 𝐺(𝑆) = (𝑋𝑆 , ≡𝐺(𝑆) ) whose underlying
space is the set 𝑋𝑆 = {(𝑥, 𝐴) ∈ 𝑆 × ℙ | 𝐴 ∈ E𝑆 (𝑥)}, equipped with the
unique topology that makes the projection 𝑝 : 𝑋𝑆 → ℙ, 𝑝 : (𝑥, 𝐴) ↦→ 𝐴, a
pre-embedding. Explicitly, the open subsets of 𝑋𝑆 are the inverse images
𝑝 ∗ (𝑈) of open subsets 𝑈 ⊆ ℙ. Let ≡𝐺(𝑆) be the equivalence relation
defined by
(𝑥, 𝐴) ≡𝐺(𝑆) (𝑦, 𝐵) ⇐⇒ 𝑥 = 𝑦.
A morphism 𝑓 : (𝑆, ⊩𝑆 ) → (𝑇, ⊩𝑇 ) which is tracked by 𝐵 ∈ ℙ is mapped to 𝐺( 𝑓 ) : 𝑋𝑆 /≡𝐺(𝑆) → 𝑋𝑇 /≡𝐺(𝑇) defined by 𝐺( 𝑓 )([(𝑥, 𝐴)]𝐺(𝑆) ) = [( 𝑓 (𝑥), 𝐵 · 𝐴)]𝐺(𝑇) .
We leave the verification that 𝐹 and 𝐺 form an equivalence of categories as an exercise.

By restricting to the 𝑇0 -spaces we obtain another equivalence. Let Equ0 be


the full subcategory of Equ in which the underlying topological spaces
are 𝑇0 -spaces.

Proposition 3.5.4 The categories Equ0 and Mod(ℙ) are equivalent.

Proof. We verify that the equivalence functors 𝐹 and 𝐺 from the proof of
Proposition 3.5.3 restrict to Equ0 and Mod(ℙ). If (𝑋 , ≡𝑋 ) is an equilogical
space whose underlying space 𝑋 is 𝑇0 , then the pre-embedding 𝑒 : 𝑋 → ℙ
is actually an embedding. Because it is injective the assembly 𝐹(𝑋) is
modest. This shows that 𝐹 restricts to a functor Equ0 → Mod(ℙ). To see
that 𝐺 restricts to a functor Mod(ℙ) → Equ0 , observe that, for a modest
assembly (𝑆, E𝑆 ), the projection 𝑋𝑆 → ℙ is an embedding, therefore 𝑋𝑆
is a 𝑇0 -space.

3.5.3 Computable countably based spaces

We have so far studied the continuous version of realizability over the


graph model in which the realizers for morphisms may be arbitrary
continuous maps. But what about the mixed version Asm(ℙ , ℙ# ), is it
also equivalent to a version of equilogical spaces? To see that the answer
to the question is affirmative, we first need to define computable maps
between countably based spaces.
Recall from Section 2.2 that an enumeration operator 𝑔 : ℙ → ℙ is
computable when its graph Γ(𝑔) is a c.e. set. By Embedding Theorem 3.5.1,
every countably based 𝑇0 -space 𝑋 can be embedded in ℙ, and every
continuous map 𝑓 : 𝑋 → 𝑌 can be extended to an enumeration operator

𝑔 : ℙ → ℙ, so that the following diagram commutes: 𝑒𝑌 ◦ 𝑓 = 𝑔 ◦ 𝑒 𝑋 .

We can define a computable continuous map 𝑓 : 𝑋 → 𝑌 to be a contin-


uous map for which there exists a computable enumeration operator
𝑔 : ℙ → ℙ which makes the above diagram commute. This idea gives
the following definition of computable continuous maps.

Definition 3.5.5 A continuous map 𝑓 : 𝑋 → 𝑌 between countably


based spaces (𝑋 , (𝑈 𝑖 )𝑖∈ℕ ) and (𝑌, (𝑉𝑗 ) 𝑗∈ℕ ) is computable when there
exists a c.e. set 𝐹 ⊆ ℕ × ℕ such that:
1. 𝐹 is monotone in the first argument: if 𝐴 ⊆ 𝐵 ≪ ℕ and ( ⌜ 𝐴 ⌝ , 𝑚) ∈ 𝐹 then ( ⌜ 𝐵 ⌝ , 𝑚) ∈ 𝐹 .
2. 𝐹 approximates 𝑓 : if ( ⌜ {𝑖 1 , . . . , 𝑖 𝑛 } ⌝ , 𝑚) ∈ 𝐹 then 𝑓 (𝑈 𝑖1 ∩ · · · ∩
𝑈 𝑖 𝑛 ) ⊆ 𝑉𝑚 .
3. 𝐹 converges to 𝑓 : if 𝑓 (𝑥) ∈ 𝑉𝑚 then there exist 𝑖 1 , . . . , 𝑖 𝑛 such
that 𝑥 ∈ 𝑈 𝑖1 ∩ · · · ∩ 𝑈 𝑖 𝑛 and ( ⌜ {𝑖 1 , . . . , 𝑖 𝑛 } ⌝ , 𝑚) ∈ 𝐹 .
The relation 𝐹 is called a c.e. realizer for 𝑓 . We also say that 𝐹 tracks 𝑓 .
The category of countably based spaces and computable continuous
maps is denoted by 𝜔 Top# .

The category 𝜔 Top# is well-defined. The identity map id𝑋 : 𝑋 → 𝑋 is


tracked by the relation 𝐼𝑋 , defined by

𝐼𝑋 = {( ⌜ {𝑖1 , . . . , 𝑖 𝑛 } ⌝ , 𝑚) ∈ ℕ × ℕ | 𝑚 ∈ {𝑖 1 , . . . , 𝑖 𝑛 }}.
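The identity realizer 𝐼𝑋 can be sampled concretely once a coding ⌜·⌝ of finite sets is fixed. The text does not fix the coding, so the bitmask coding below is our own choice, made for illustration.

```python
# Code a finite set of indices as a natural number: <A> = sum of 2^i.
# (The coding <.> is not fixed by the text; a bitmask is one choice.)
def code(A):
    return sum(1 << i for i in A)

def decode(n):
    return {i for i in range(n.bit_length()) if (n >> i) & 1}

def in_I(pair):
    """Membership in the identity realizer I_X: (<A>, m) with m in A."""
    cA, m = pair
    return m in decode(cA)

assert in_I((code({1, 3, 5}), 3))
assert not in_I((code({1, 3, 5}), 2))
```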

The composition of computable maps 𝑓 : 𝑋 → 𝑌 and 𝑔 : 𝑌 → 𝑍 ,


which are tracked by 𝐹 and 𝐺 respectively, is again a computable map
𝑔 ◦ 𝑓 : 𝑋 → 𝑍 because it has a c.e. realizer 𝐻 defined by

( ⌜ {𝑗 1 , . . . , 𝑗 𝑘 } ⌝ , ℓ ) ∈ 𝐻 ⇐⇒
∃𝑖 1 , . . . , 𝑖 𝑛 ∈ ℕ . ( ⌜ {𝑖 1 , . . . , 𝑖 𝑛 } ⌝ , ℓ ) ∈ 𝐺 ∧ ( ⌜ {𝑗 1 , . . . , 𝑗 𝑘 } ⌝ , 𝑖 𝑠 ) ∈ 𝐹 for 𝑠 = 1 , . . . , 𝑛.

The monotonicity condition in Definition 3.5.5 is redundant, for if 𝐹 is an


c.e. relation that satisfies the second and the third condition, then we can
recover monotonicity by defining a new relation 𝐹′ by
𝐹′ = {( ⌜ 𝐴 ⌝ , 𝑚) ∈ ℕ × ℕ | ∃𝐵 ⊆ 𝐴. ( ⌜ 𝐵 ⌝ , 𝑚) ∈ 𝐹}.

It is easy to see that 𝐹′ satisfies all three conditions and realizes the same function as 𝐹 does.
A point 𝑥 ∈ 𝑋 is computable when the map {★} → 𝑋 which maps ★ to 𝑥 is
computable. This is equivalent to requiring that 𝑒 𝑋 (𝑥) = {𝑖 ∈ ℕ | 𝑥 ∈ 𝑈 𝑖 }
is a c.e. set.
Next, we prove effective versions of the Embedding and Extension
Theorems.

Theorem 3.5.6 (Computable Embedding Theorem) Every countably


based space can be computably pre-embedded into ℙ.

Proof. We just need to provide a c.e. realizer 𝐸𝑋 for the neighborhood


filter 𝑒 𝑋 : 𝑋 → ℙ. It is

𝐸𝑋 = {( ⌜ {𝑖1 , . . . , 𝑖 𝑛 } ⌝ , 𝑚) ∈ ℕ × ℕ | 𝑚 ∈ {𝑖 1 , . . . , 𝑖 𝑛 }}.

This is obviously a c.e. relation which is monotone in the first argument.


The second condition for the c.e. realizer 𝐸𝑋 is

( ⌜ {𝑖 1 , . . . , 𝑖 𝑛 } ⌝ , 𝑚) ∈ 𝐸𝑋 =⇒ 𝑒 𝑋 (𝑈 𝑖1 ∩ · · · ∩ 𝑈 𝑖 𝑛 ) ⊆ ↑𝑚,

which clearly holds. Suppose 𝑒 𝑋 (𝑥) ∈ ↑𝑚 . Then 𝑥 ∈ 𝑈𝑚 , and ( ⌜ {𝑚} ⌝ , 𝑚) ∈ 𝐸𝑋 , which proves the third condition.

Theorem 3.5.7 (Computable Extension Theorem) Let 𝑋 and 𝑌 be count-


ably based topological spaces and 𝑓 : 𝑋 → 𝑌 a computable map between
them. Then there exists a computable map 𝑔 : ℙ → ℙ such that the following diagram commutes: 𝑔 ◦ 𝑒 𝑋 = 𝑒𝑌 ◦ 𝑓 .

Proof. The maps 𝑒 𝑋 and 𝑒𝑌 are the computable embeddings from Em-
bedding Theorem 3.5.6. Let 𝐹 be a c.e. realizer for 𝑓 . We define the map
𝑔 : ℙ → ℙ by specifying its graph to be 𝐹 , i.e.,

𝑔(𝐴) = {𝑚 ∈ ℕ | ∃𝑖 1 , . . . , 𝑖 𝑛 ∈ 𝐴. ( ⌜ {𝑖 1 , . . . , 𝑖 𝑛 } ⌝ , 𝑚) ∈ 𝐹}.

All we have to show is that this choice of 𝑔 makes the diagram commute.
For any 𝑥 ∈ 𝑋 ,

𝑚 ∈ 𝑔(𝑒 𝑋 (𝑥))
⇔ ∃𝑖 1 , . . . , 𝑖 𝑛 ∈ ℕ . 𝑥 ∈ 𝑈 𝑖1 ∩ · · · ∩ 𝑈 𝑖 𝑛 ∧ ( ⌜ {𝑖 1 , . . . , 𝑖 𝑛 } ⌝ , 𝑚) ∈ 𝐹
⇔ 𝑓 (𝑥) ∈ 𝑉𝑚
⇔ 𝑚 ∈ 𝑒𝑌 ( 𝑓 (𝑥)).

The second equivalence is implied from left to right by the second


condition in Definition 3.5.5, and from right to the left by the third
condition.

We should point out that the computable continuous maps, as defined here, work at the level of open sets, i.e., a c.e. realizer 𝐹 for 𝑓 : 𝑋 → 𝑌 operates on the (indices of) subbasic open sets. Therefore, 𝐹 does not distinguish points that share the same neighborhoods, even though 𝑓 might. This is not an issue with 𝑇0 -spaces, in which points are distinguished by their neighborhoods.

3.5.4 Computable equilogical spaces

With a notion of computable maps between spaces at hand, we can define


the computable equilogical spaces just like the ordinary ones, except that
we replace continuous maps by their computable versions.

Definition 3.5.8 A morphism 𝑓 : (𝑋 , ≡𝑋 ) → (𝑌, ≡𝑌 ) between equi-


logical spaces is computable if there exists a computable continuous
map 𝑔 : 𝑋 → 𝑌 which tracks 𝑓 . The category of equilogical spaces
and computable morphisms between them is denoted by Equ# .

We check that we got the definition right by proving that Equ# is equivalent
to Asm(ℙ , ℙ# ).

Proposition 3.5.9 The categories Asm(ℙ , ℙ# ) and Equ# are equivalent.

Proof. The proof goes just as the proof of Proposition 3.5.3 that Equ and
Asm(ℙ) are equivalent. The only difference is that we refer to Embedding
Theorem 3.5.6 and Extension Theorem 3.5.7 in order to extend computable
maps to computable enumeration operators.

The category 𝜔 Top# of countably based spaces and computable maps


is embedded fully and faithfully into Equ# . The embedding works as in
the continuous case: a topological space 𝑋 is mapped to the equilogical
space (𝑋 , =𝑋 ), and a computable continuous map 𝑓 : 𝑋 → 𝑌 is the same
thing as a morphism 𝑓 : (𝑋 , =𝑋 ) → (𝑌, =𝑌 ).

3.5.5 Type Two Effectivity

A popular realizability model is Kleene–Vesley function realizability [18], also known as Type Two Effectivity (TTE) [6, 39]. As the names say, it is the model of realizability based on functions and type 2 machines.

[18]: Kleene et al. (1965), The foundations of intuitionistic mathematics. Especially in relation to recursive functions.
[6]: Brattka et al. (2021), Handbook of Computability and Complexity in Analysis
[39]: Weihrauch (2000), Computable Analysis

TTE is traditionally expressed as a theory of representations. There are actually three variations:
1. Rep(𝔹 , 𝔹) is the continuous version in which maps are realized by
continuous realizers.
2. Rep(𝔹 , 𝔹# ) is the relative version.
3. Rep(𝔹# , 𝔹# ) is the computable version in which all realizers must be
computable.
Mostly only the first two of these are used. Sometimes multi-valued
representations are considered also, and for these we need to move to
the larger category of assemblies Asm(𝔹 , 𝔹# ).
Specifically, a representation (𝑆, 𝛿 𝑆 ) over the Baire space is a partial surjection 𝛿 𝑆 : 𝔹 ⇀ 𝑆 . When 𝛿 𝑆 (𝛼) = 𝑥 we say that 𝛼 is a 𝛿 𝑆 -name of 𝑥 . A (continuously) realized map 𝑓 : (𝑆, 𝛿 𝑆 ) → (𝑇, 𝛿𝑇 ) is a map 𝑓 : 𝑆 → 𝑇 for which there exists a partial continuous map 𝑔 : 𝔹 ⇀ 𝔹 such that 𝑓 (𝛿 𝑆 (𝛼)) = 𝛿𝑇 (𝑔(𝛼)) for all 𝛼 ∈ dom(𝛿 𝑆 ). If the realizer 𝑔 is computable we say that 𝑓 is computably realized. Recall that a computable 𝑔 corresponds to a type 2 machine which converts a 𝛿 𝑆 -name of 𝑥 to a 𝛿𝑇 -name of 𝑓 (𝑥).
In the case of equilogical spaces there was a straightforward way of
turning a topological space into an equilogical space. The present situation
is less obvious. One idea is to represent a topological space 𝑋 by a representation 𝛿 𝑋 : 𝔹 ⇀ 𝑋 for which 𝛿 𝑋 is a quotient map. However, this is too weak a requirement. To see this, suppose 𝛿 𝑋 : 𝔹 ⇀ 𝑋 and 𝛿𝑌 : 𝔹 ⇀ 𝑌 are representations of topological spaces with 𝛿 𝑋 and 𝛿𝑌 topological quotient maps. Then a continuous map 𝑓 : 𝑋 → 𝑌 may be lifted to 𝑔 : 𝔹 ⇀ 𝑌 with 𝑔 = 𝑓 ◦ 𝛿 𝑋 , because 𝛿 𝑋 is a quotient map. But to make 𝑓 into a morphism we need a continuous realizer ℎ : 𝔹 ⇀ 𝔹 with 𝛿𝑌 ◦ ℎ = 𝑔 , which however might not exist. A stronger property is required.

Definition 3.5.10 Suppose 𝑋 is a topological space and 𝛿 𝑋 : 𝔹 ⇀ 𝑋 is a


representation and a topological quotient map. Then 𝛿 𝑋 is admissible
if every continuous 𝑔 : 𝔹 ⇀ 𝑋 has a continuous lifting ℎ : 𝔹 ⇀ 𝔹 such
that 𝛿 𝑋 (ℎ(𝛼)) = 𝑔(𝛼) for all 𝛼 ∈ dom(𝑔).

Admissible representations are a central concept in TTE because they


are very well behaved. For example, if 𝛿 𝑋 : 𝔹 ⇀ 𝑋 and 𝛿𝑌 : 𝔹 ⇀ 𝑌 are
admissible, the continuous maps 𝑓 : 𝑋 → 𝑌 coincide with the realized
maps.
The spaces which have admissible representations have been studied in depth [33]. We only mention a basic result whose proof is not too complicated.

[33]: Schröder (2021), “Handbook of Computability and Complexity in Analysis”

Proposition 3.5.11 Every countably based 𝑇0 -space has an admissible repre-


sentation.

Proof. Suppose (𝑋 , (𝑈 𝑖 )𝑖∈ℕ ) is a countably based 𝑇0 -space, and let 𝑒 𝑋 :


𝑋 → ℙ be the neighborhood filter. Define the representation 𝛿 𝑋 : 𝔹 ⇀ 𝑋
by
𝛿 𝑋 (𝛼) = 𝑥 ⇐⇒ 𝑒 𝑋 (𝑥) = {𝛼(𝑛) | 𝑛 ∈ ℕ }.
In words, 𝛼 is a 𝛿 𝑋 -name for 𝑥 when it enumerates the (indices of)
subbasic open neighborhoods of 𝑥 . Because 𝑋 is a 𝑇0 -space, any 𝛼
enumerates the subbasic neighborhood filter of at most one 𝑥 , hence 𝛿 𝑋
is single-valued. We leave admissibility of 𝛿 𝑋 as an exercise.
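The decoding of names in this representation can be illustrated on a finite model. The Python sketch below (the two-point space and its subbasis are invented; an infinite Baire sequence is approximated by a finite list regarded as listing its values) recovers the point named by a sequence that enumerates exactly its subbasic neighborhood filter.

```python
# Names over a finite model of the representation delta_X.
# Space: {0, 1} with subbasis U_0 = {0,1}, U_1 = {1} (our toy example).
points = [0, 1]
subbasis = [{0, 1}, {1}]

def e(x):
    """Subbasic neighborhood filter of x."""
    return {n for n, U in enumerate(subbasis) if x in U}

def delta(alpha):
    """delta_X(alpha) = x iff alpha enumerates exactly e_X(x); alpha is
    modelled as a finite list regarded as listing its values."""
    enumerated = set(alpha)
    matches = [x for x in points if e(x) == enumerated]
    return matches[0] if len(matches) == 1 else None

assert delta([0, 0, 0]) == 0       # a name of the point 0
assert delta([0, 1, 0, 1]) == 1    # a name of the point 1
```

Since the space is 𝑇0, a sequence enumerates the neighborhood filter of at most one point, so `delta` is single-valued.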

A relationship between equilogical spaces and TTE

In Subsection 2.8.3 we constructed an adjoint retraction

𝛿 : (ℙ , ℙ# ) −→ (𝔹 , 𝔹# ) and 𝜄 : (𝔹 , 𝔹# ) −→ (ℙ , ℙ# ),

where
𝜄 𝛼 = { ⌜ 𝑎 ⌝ | 𝑎 ∈ ℕ ∗ ∧ 𝑎 ⊑ 𝛼}
and
𝛿[𝐴] = {𝛼 ∈ 𝔹 | 𝐴 = {𝑛 ∈ ℕ | ∃𝑘 ∈ ℕ . 𝛼 𝑘 = 𝑛 + 1}}.
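Both maps are easy to compute on finite approximations. The Python sketch below abstracts the numerical coding ⌜𝑎⌝ of finite sequences away (we use the tuple itself in place of its code, which the text leaves to a fixed coding), computes 𝜄 on a finite prefix of 𝛼, and decodes which set 𝐴 a prefix of 𝛼 names under 𝛿.

```python
# A finite-approximation sketch of the simulations between P and B.
# The numerical coding <a> of finite sequences is abstracted away:
# we use the tuple itself as its own "code".
def iota(alpha_prefix):
    """iota alpha = { <a> | a a finite prefix of alpha }, computed on a
    finite prefix of alpha."""
    return {tuple(alpha_prefix[:k]) for k in range(len(alpha_prefix) + 1)}

def named_set(alpha_prefix):
    """The set A named by alpha under delta: n is in A iff some value
    alpha(k) equals n + 1 (zeros act as padding)."""
    return {v - 1 for v in alpha_prefix if v > 0}

# The sequence 1, 0, 3, 0, 1, ... enumerates the set {0, 2}:
assert named_set([1, 0, 3, 0, 1]) == {0, 2}
```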
The induced functors 𝜄̂ and 𝛿̂ give an adjunction between equilogical spaces and TTE; see [3] for details.

[3]: Bauer (2002), “A Relationship between Equilogical Spaces and Type Two Effectivity”

3.6 The categorical structure of assemblies

In their everyday lives mathematicians use a limited set of constructions


on sets: products, disjoint sums, subsets, quotient sets, images, func-
tion spaces, inductive and coinductive definitions, powersets, unions,
intersections, and complements. For all of these, except powersets, there
are analogous constructions of assemblies. Therefore, many familiar
set-theoretic constructions carry over to assemblies.

3.6.1 Cartesian structure

A construction which makes a new set, space, or an algebraic structure


from old ones is usually characterized by a universal property which
determines it up to isomorphism. The universal property is shared
among versions of the same construction in different categories. We start
slowly with an easy one, the binary product.
Recall the definition of a (binary) product in a category: the product of
objects 𝑆 and 𝑇 is an object 𝑃 with morphisms 𝑝 1 : 𝑃 → 𝑆 and 𝑝 2 : 𝑃 → 𝑇 ,
satisfying the following universal property: for all morphisms 𝑓 : 𝑈 → 𝑆 and 𝑔 : 𝑈 → 𝑇 there is a unique morphism ℎ : 𝑈 → 𝑃 such that 𝑝 1 ◦ ℎ = 𝑓 and 𝑝 2 ◦ ℎ = 𝑔 .

The product (𝑃, 𝑝 1 , 𝑝 2 ) is determined uniquely up to a unique isomorphism. For suppose we had another product (𝑄, 𝑞 1 , 𝑞 2 ) of 𝑆 and 𝑇 . By the universal property of 𝑄 there is a map ℎ : 𝑃 → 𝑄 such that 𝑞 1 ◦ ℎ = 𝑝 1 and 𝑞 2 ◦ ℎ = 𝑝 2 . Similarly, by the universal property of 𝑃 there is 𝑘 : 𝑄 → 𝑃 such that 𝑝 1 ◦ 𝑘 = 𝑞 1 and 𝑝 2 ◦ 𝑘 = 𝑞 2 . Now 𝑘 ◦ ℎ satisfies

𝑝 1 ◦ 𝑘 ◦ ℎ = 𝑞 1 ◦ ℎ = 𝑝 1 and 𝑝 2 ◦ 𝑘 ◦ ℎ = 𝑞 2 ◦ ℎ = 𝑝 2 .

Since id𝑃 also satisfies 𝑝 1 ◦ id𝑃 = 𝑝 1 and 𝑝 2 ◦ id𝑃 = 𝑝 2 , it follows by the uniqueness condition of the universal property that 𝑘 ◦ ℎ = id𝑃 . A similar argument shows that ℎ ◦ 𝑘 = id𝑄 , hence 𝑃 and 𝑄 are isomorphic.
A category has binary products if every pair of objects has a binary
product. In most cases we can actually provide an operation × which
maps a pair of objects 𝑆 , 𝑇 to a specifically given product 𝑆 × 𝑇 with
corresponding projections. The unique map ℎ determined by 𝑓 and 𝑔 is
denoted by ⟨ 𝑓 , 𝑔⟩ .
In the category Set the product is just the usual cartesian product of
sets. In assemblies we need to worry about the underlying types and
realizability relations. Let us verify that the product of assemblies 𝑆
and 𝑇 is the assembly

𝑆 × 𝑇 = (|𝑆| × |𝑇 |, ∥𝑆∥ × ∥𝑇 ∥, ⊩𝑆×𝑇 )

with the realizability relation

𝑝 ⊩𝑆×𝑇 (𝑥, 𝑦) ⇐⇒ fst 𝑝 ⊩𝑆 𝑥 ∧ snd 𝑝 ⊩𝑇 𝑦.

and the projection maps 𝜋1 : |𝑆| × |𝑇 | → |𝑆| , 𝜋1 : (𝑥, 𝑦) ↦→ 𝑥 , and


𝜋2 : |𝑆| × |𝑇 | → |𝑇 | , 𝜋2 : (𝑥, 𝑦) ↦→ 𝑦 , which are realized by fst and
snd, respectively. To see that (𝑆 × 𝑇, 𝜋1 , 𝜋2 ) has the universal property,
suppose 𝑓 : 𝑈 → 𝑆 and 𝑔 : 𝑈 → 𝑇 are realized by f ∈ 𝔸′∥𝑈 ∥→∥𝑆∥ and
g ∈ 𝔸′∥𝑈 ∥→∥𝑇 ∥ , respectively. There is a unique map ℎ : |𝑈 | → |𝑆| × |𝑇 | for which 𝑓 = 𝜋1 ◦ ℎ and 𝑔 = 𝜋2 ◦ ℎ , namely ℎ(𝑢) = ⟨ 𝑓 , 𝑔⟩(𝑢) = ( 𝑓 𝑢, 𝑔 𝑢).
We only need a realizer for ℎ , and h = ⟨𝑢 ∥𝑈 ∥ ⟩ pair (f 𝑢) (g 𝑢) does the
job:

u ⊩𝑈 𝑢 =⇒ f u ⊩𝑆 𝑓 𝑢 ∧ g u ⊩𝑇 𝑔 𝑢
=⇒ pair (f u) (g u) ⊩𝑆×𝑇 ( 𝑓 𝑢, 𝑔 𝑢)
⇐⇒ h u ⊩𝑆×𝑇 ℎ(𝑢).
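The construction can be traced through on toy data. In the Python sketch below (the assemblies and trackers are invented) realizers are ordinary Python values, pairing is the built-in tuple, and fst/snd are the projections; we build the product realizability relation and a realizer for ⟨𝑓, 𝑔⟩ by pairing realizers produced by trackers of 𝑓 and 𝑔.

```python
# Products of toy assemblies: realizers are Python values, pairing is
# the built-in tuple with fst/snd the tuple projections.
relS = {(0, "a"), (1, "b")}
relT = {(7, "u"), (8, "v")}

def product(relS, relT):
    """p ||- (x, y) in S x T iff fst p ||- x and snd p ||- y."""
    return {((r, q), (x, y)) for (r, x) in relS for (q, y) in relT}

relP = product(relS, relT)

# A realizer for <f, g>: pair up realizers computed by trackers of f, g.
# Here f, g are maps out of a one-element assembly U with 42 ||- *.
f_track = {42: 0}   # tracks f(*) = "a"
g_track = {42: 8}   # tracks g(*) = "v"
h_track = {u: (f_track[u], g_track[u]) for u in f_track}

assert (h_track[42], ("a", "v")) in relP
```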

We may form 𝑛 -ary products 𝑆1 × · · · × 𝑆𝑛 for 𝑛 ≥ 1 as nested binary


products. The case 𝑛 = 0 corresponds to the terminal object, which is
an object 1 such that for every object 𝑆 there is exactly one morphism
𝑆 → 1. In the category of sets the terminal object is (any) singleton set,
say 1 = {★}. Then ∇1 is the terminal assembly, since for any assembly 𝑆
the only map 𝑆 → 1 is realized. We denote the terminal assembly as 1.
We may also ask whether Asm(𝔸 , 𝔸′) has infinite products. The answer
depends on the underlying tpcas (𝔸 , 𝔸′). We state without proof that
Asm(ℙ , ℙ# ) and Asm(𝔹 , 𝔹# ) have countable products, whereas Asm(𝕂1 )
does not.

Exercise 3.6.1 Why does Asm(𝕂1 ) not have countable products?

Products are a special case of categorical limits. Two other common


kinds of limits are equalizers and pullbacks. An equalizer of a pair of
morphisms 𝑓 , 𝑔 : 𝑆 → 𝑇 is an object 𝐸 with a morphism 𝑒 : 𝐸 → 𝑆 such
that 𝑒 equalizes 𝑓 and 𝑔 , which means that 𝑓 ◦ 𝑒 = 𝑔 ◦ 𝑒 , and the following
universal property is satisfied: if 𝑘 : 𝐾 → 𝑆 also equalizes 𝑓 and 𝑔 then

there exists a unique morphism 𝑖 : 𝐾 → 𝐸 such that 𝑘 = 𝑒 ◦ 𝑖 .
Think of 𝐸 as the solution-set of equation 𝑓 𝑥 = 𝑔 𝑥 . Indeed, in the


category of sets the equalizer of functions 𝑓 , 𝑔 : 𝑆 → 𝑇 is the subset
𝐸 = {𝑥 ∈ 𝑆 | 𝑓 𝑥 = 𝑔 𝑥} and 𝑒 : 𝐸 → 𝑆 is the subset inclusion. In
the category of assemblies we need to augment this with realizers. The
equalizer of 𝑓 , 𝑔 : 𝑆 → 𝑇 is

𝐸 = ({𝑥 ∈ 𝑆 | 𝑓 𝑥 = 𝑔 𝑥}, ∥𝑆∥, ⊩𝐸 ) (3.1)

where x ⊩𝐸 𝑥 if, and only if, x ⊩𝑆 𝑥 . The map 𝑒 : |𝐸| → |𝑆| is the
subset inclusion, 𝑒(𝑥) = 𝑥 . It is realized by ⟨𝑥 ∥𝑆∥ ⟩ 𝑥 . Clearly, 𝑒 equalizes 𝑓
and 𝑔 .
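The equalizer construction (3.1) is easy to carry out on toy data. The Python sketch below (the assembly and the maps are invented) cuts out the solution set of 𝑓 𝑥 = 𝑔 𝑥 while keeping the realizers of 𝑆 unchanged.

```python
# Equalizer of two assembly maps f, g : S -> T on a toy assembly.
relS = {(0, "a"), (1, "b"), (2, "c")}
f = {"a": "u", "b": "v", "c": "u"}
g = {"a": "u", "b": "u", "c": "u"}

# |E| = { x in S | f x = g x }, realized exactly as in S.
E_elems = {x for (_, x) in relS if f[x] == g[x]}
relE = {(r, x) for (r, x) in relS if x in E_elems}

assert E_elems == {"a", "c"}
```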

Exercise 3.6.2 Verify that the above alleged equalizer has the correct
universal property.

A pullback, sometimes called fibered product, is a combination of product


and equalizer. Given morphisms 𝑓 : 𝑆 → 𝑈 and 𝑔 : 𝑇 → 𝑈 , the pullback
of 𝑓 and 𝑔 is an object 𝑃 with morphisms 𝑝 1 : 𝑃 → 𝑆 and 𝑝 2 : 𝑃 → 𝑇
such that 𝑓 ◦ 𝑝 1 = 𝑔 ◦ 𝑝 2 . Furthermore, if 𝑞 1 : 𝑄 → 𝑆 and 𝑞 2 : 𝑄 → 𝑇 are
such that 𝑓 ◦ 𝑞 1 = 𝑔 ◦ 𝑞 2 then there is a unique 𝑖 : 𝑄 → 𝑃 such that 𝑝 1 ◦ 𝑖 = 𝑞 1 and 𝑝 2 ◦ 𝑖 = 𝑞 2 .

The fact that 𝑃 is a pullback is traditionally marked in a diagram with the


“corner” symbol. In the category of assemblies the pullback of 𝑓 : 𝑆 → 𝑈
and 𝑔 : 𝑇 → 𝑈 is the assembly

𝑃 = ({(𝑥, 𝑦) ∈ 𝑆 × 𝑇 | 𝑓 𝑥 = 𝑔(𝑦)}, ∥𝑆∥ × ∥𝑇 ∥, ⊩𝑃 )

where 𝑝 ⊩𝑃 (𝑥, 𝑦) if, and only if, fst 𝑝 ⊩𝑆 𝑥 and snd 𝑝 ⊩𝑇 𝑦 .


Finite products, the terminal object, equalizers, and pullbacks are special cases of finite limits. A category which has all finite limits is called cartesian or finitely complete.⁷

7: We do not much like the still older terminology left exact or just lex.

Proposition 3.6.3 The categories Asm(𝔸 , 𝔸′) and Mod(𝔸 , 𝔸′) are cartesian.

Proof. It is well known that every finite limit may be constructed as a combination of a finite product and an equalizer, hence Asm(𝔸 , 𝔸′) is cartesian. It is easy to verify that finite products and equalizers of modest assemblies are again modest, therefore Mod(𝔸 , 𝔸′) is cartesian.

3.6.2 Cocartesian structure

Colimits are the dual of limits. In particular, the dual of products,


terminal object, equalizers, and pullbacks are respectively coproducts,
initial object, coequalizers, and pushouts. We study which of these exist
in Asm(𝔸 , 𝔸′).
First we discuss (binary) coproducts of sets, also known as disjoint sums.
For some reason there does not seem to be a well-established and practical
notation for these, possibly because the related union operation is taken
as primitive in set theory. The disjoint sum of sets 𝑆 and 𝑇 is usually
defined as
𝑆 + 𝑇 = ({0} × 𝑆) ∪ ({1} × 𝑇).
The canonical injections 𝜄 1 : 𝑆 → 𝑆 + 𝑇 and 𝜄 2 : 𝑇 → 𝑆 + 𝑇 are the maps 𝑥 ↦→ (0 , 𝑥) and 𝑦 ↦→ (1 , 𝑦), respectively. A slight notational inconvenience arises when we want to define a map 𝑓 : 𝑆 + 𝑇 → 𝑈 by cases 𝑓1 : 𝑆 → 𝑈 and 𝑓2 : 𝑇 → 𝑈 . One possibility is to write
(
𝑓1 (𝑥) if 𝑢 = (0 , 𝑥),
𝑓𝑢=
𝑓2 (𝑦) if 𝑢 = (1 , 𝑦),

but this is seen rarely. In practice mathematicians prefer to assume, or


shall we say pretend, that the sets 𝑆 and 𝑇 are disjoint and just write
𝑆 + 𝑇 = 𝑆 ∪ 𝑇 . This allows us to get rid of the encoding by pairs,
    𝑓 (𝑢) = 𝑓1 (𝑢)   if 𝑢 ∈ 𝑆,
    𝑓 (𝑢) = 𝑓2 (𝑢)   if 𝑢 ∈ 𝑇.

Unlike people, computers do not pretend, and so as computer scientists


we need notation that is actually correct. However, it is unnecessary to
encode the elements of a disjoint sum as pairs (0 , 𝑥) and (1 , 𝑦). Instead,
we simply take the injections 𝜄 1 and 𝜄 2 as primitive labels that indicate
which part of a disjoint sum we are referring to.8 Thus, every element of
𝑆 + 𝑇 is either of the form 𝜄1 (𝑥) for a unique 𝑥 ∈ 𝑆, or 𝜄2 (𝑦) for a unique
𝑦 ∈ 𝑇 . In a specific case we may choose different, descriptive names for
the injections.

8: If you feel the urge to really encode everything with sets, you can still define 𝜄 1 (𝑥) = (0 , 𝑥) and 𝜄 2 (𝑦) = (1, 𝑦) but then forget the definition.
Definition by cases is a primitive concept involving disjoint sums which
deserves its own notation, preferably one that fits on a single line. We
may mimic Haskell and write

case 𝑒 of 𝜄 1 (𝑥) ↦→ 𝑒1 | 𝜄 2 (𝑦) ↦→ 𝑒2 .

Read this as “if 𝑒 is of the form 𝜄 1 (𝑥) then 𝑒1 , else if 𝑒 is of the form 𝜄 2 (𝑦)
then 𝑒2 ”. The variables 𝑥 and 𝑦 are bound in 𝑒1 and 𝑒2 , respectively. The
definition of 𝑓 above would be written as

𝑓 𝑢 = case 𝑢 of 𝜄 1 (𝑥) ↦→ 𝑓1 (𝑥) | 𝜄 2 (𝑦) ↦→ 𝑓2 (𝑦),



or spanning several lines

𝑓 𝑢 = case 𝑢 of
𝜄1 (𝑥) ↦→ 𝑓1 (𝑥)
𝜄2 (𝑦) ↦→ 𝑓2 (𝑦).

We shall use this notation. Let us mention that in Haskell 𝜄 1 and 𝜄 2 are
called Left and Right, respectively.
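As a sketch, the tagged representation and definition by cases transcribe directly into Python; the tags and names iota1, iota2, case are illustrative (in Haskell the tags are the constructors Left and Right):

```python
# Elements of S + T as tagged pairs ("iota1", x) or ("iota2", y), with
# definition by cases as in the text.
def iota1(x):
    return ("iota1", x)

def iota2(y):
    return ("iota2", y)

def case(u, f1, f2):
    tag, v = u
    return f1(v) if tag == "iota1" else f2(v)
```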
In a general category a (binary) coproduct of objects 𝑆 and 𝑇 is an object 𝐶
with morphisms 𝜄 1 : 𝑆 → 𝐶 and 𝜄 2 : 𝑇 → 𝐶 such that, for all morphisms
𝑓 : 𝑆 → 𝑈 and 𝑔 : 𝑇 → 𝑈 there exists a unique ℎ : 𝐶 → 𝑈 such that the
following diagram commutes:

    𝑆 ──𝜄1──→ 𝐶 ←──𝜄2── 𝑇
        𝑓 ↘   ┆ ℎ  ↙ 𝑔
              ↓
              𝑈

Notice that we have exactly reversed all the morphisms with respect to
the definition of products. We write the coproduct of 𝑆 and 𝑇 as 𝑆 + 𝑇
when it is given as an operation, and the unique morphism ℎ as [ 𝑓 , 𝑔].
Whether assemblies Asm(𝔸 , 𝔸′) have binary coproducts is an interesting
question. The answer seems to depend on the structure of the underlying
tpcas.

Definition 3.6.4 A tpca 𝔸 with sums is a tpca with a binary operation


+ on the types such that, for all types 𝑠 , 𝑡 , and 𝑢 there exist constants

left𝑠,𝑡 ∈ 𝔸𝑠→(𝑠+𝑡)
right𝑠,𝑡 ∈ 𝔸𝑡→(𝑠+𝑡)
case𝑠,𝑡,𝑢 ∈ 𝔸(𝑠+𝑡)→(𝑠→𝑢)→(𝑡→𝑢)→𝑢

satisfying, for all 𝑥 , 𝑦 , 𝑓 , 𝑔 of appropriate types,

left𝑠,𝑡 𝑥↓,
right𝑠,𝑡 𝑦↓,
case𝑠,𝑡,𝑢 (left𝑠,𝑡 𝑥) 𝑓 𝑔 ⪰ 𝑓 𝑥,
case𝑠,𝑡,𝑢 (right𝑠,𝑡 𝑦) 𝑓 𝑔 ⪰ 𝑔 𝑦.

We say that the elements left, right, and case are suitable for sums
when they satisfy these properties.
A sub-tpca with sums is a sub-tpca 𝔸′ of 𝔸 such that there exists left,
right, case in 𝔸′ suitable for sums in 𝔸.

Proposition 3.6.5 Suppose 𝔸 is a tpca and 𝔸′ its sub-tpca. The category


Asm(𝔸 , 𝔸′ ) has binary coproducts if, and only if, 𝔸 is a tpca with sums and
𝔸′ is its sub-tpca with sums.

Proof. Suppose first that 𝔸 has sums and that 𝔸′ is a sub-tpca with sums.

The coproduct of 𝑆 and 𝑇 is the assembly

𝑆 + 𝑇 = (|𝑆| + |𝑇 |, ∥𝑆∥ + ∥𝑇 ∥, ⊩𝑆+𝑇 )

where ⊩𝑆+𝑇 is most easily defined in terms of the existence predicate


E𝑆+𝑇 :
E𝑆+𝑇 (𝑢) = case 𝑢 of
𝜄1 (𝑥) ↦→ {left x | x ⊩𝑆 𝑥}
𝜄2 (𝑦) ↦→ {right y | y ⊩𝑇 𝑦}.
That is, the realizers for 𝜄 1 (𝑥) are of the form left x where x ⊩𝑆 𝑥 , and the
realizers for 𝜄 2 (𝑦) are of the form right y where y ⊩𝑇 𝑦 . The canonical
inclusions 𝜄 1 : |𝑆| → |𝑆| + |𝑇 | and 𝜄 2 : |𝑇 | → |𝑆| + |𝑇 | are realized
by left ∥𝑆∥,∥𝑇 ∥ and right ∥𝑆∥,∥𝑇 ∥ , respectively. To see that 𝑆 + 𝑇 has the
required universal property, consider 𝑓 : 𝑆 → 𝑈 and 𝑔 : 𝑇 → 𝑈 , realized
by f and g, respectively. The map ℎ = [ 𝑓 , 𝑔] : |𝑆| + |𝑇 | → |𝑈 | , defined by

ℎ(𝑢) = case 𝑢 of 𝜄1 (𝑥) ↦→ 𝑓 𝑥 | 𝜄2 (𝑦) ↦→ 𝑔(𝑦),

is realized by ⟨𝑢 ∥𝑆∥+∥𝑇 ∥ ⟩ case 𝑢 f g. It is the unique morphism satisfying


ℎ ◦ 𝜄1 = 𝑓 and ℎ ◦ 𝜄 2 = 𝑔 .
Conversely, suppose Asm(𝔸 , 𝔸′) has binary coproducts. For every type 𝑡 ,
define the assembly
𝐴𝑡 = (𝔸𝑡 , 𝑡, ⊩𝑡 )
with 𝑟 ⊩𝑡 𝑞 ⇔ 𝑟 = 𝑞 . For types 𝑠 and 𝑡 let 𝑠 + 𝑡 be the underlying type of
the coproduct 𝐴 𝑠 + 𝐴𝑡 ,

𝑠 + 𝑡 = ∥𝐴 𝑠 + 𝐴𝑡 ∥.

Let left𝑠,𝑡 and right𝑠,𝑡 be realizers for the canonical inclusions 𝜄 1 :


𝐴 𝑠 → 𝐴 𝑠 + 𝐴𝑡 and 𝜄2 : 𝐴𝑡 → 𝐴 𝑠 + 𝐴𝑡 , respectively.
Suppose 𝑠 , 𝑡 , and 𝑢 are types. Define 𝑎 ∈ 𝔸′𝑠→(𝑠→𝑢)→(𝑡→𝑢)→𝑢 and 𝑏 ∈
𝔸′𝑡→(𝑠→𝑢)→(𝑡→𝑢)→𝑢 by

𝑎 = ⟨𝑥 𝑠 ⟩ ⟨ 𝑓 𝑠→𝑢 ⟩ ⟨𝑔 𝑡→𝑢 ⟩ 𝑓 𝑥 and 𝑏 = ⟨𝑥 𝑠 ⟩ ⟨ 𝑓 𝑠→𝑢 ⟩ ⟨𝑔 𝑡→𝑢 ⟩ 𝑔 𝑥.

The map 𝑥 ↦→ 𝑎 𝑥 is a morphism from 𝐴 𝑠 to 𝐴(𝑠→𝑢)→(𝑡→𝑢)→𝑢 because


it is realized by 𝑎 . Similarly, the map 𝑦 ↦→ 𝑏 𝑦 is a morphism from
𝐴𝑡 to 𝐴(𝑠→𝑢)→(𝑡→𝑢)→𝑢 , realized by 𝑏 . There is a unique morphism ℎ :
𝐴 𝑠 + 𝐴𝑡 → 𝐴(𝑠→𝑢)→(𝑡→𝑢)→𝑢 such that ℎ(𝜄1 (𝑥)) = 𝑎 𝑥 and ℎ(𝜄2 (𝑦)) = 𝑏 𝑦 for
all 𝑥 ∈ 𝐴 𝑠 and 𝑦 ∈ 𝐴𝑡 . There exists

case𝑠,𝑡,𝑢 ∈ 𝔸′(𝑠+𝑡)→(𝑠→𝑢)→(𝑡→𝑢)→𝑢

which realizes ℎ . We claim that left𝑠,𝑡 , right𝑠,𝑡 , and case𝑠,𝑡,𝑢 have the
desired properties. It is obvious that left𝑠,𝑡 𝑥↓ and right𝑠,𝑡 𝑦↓ for all
𝑥 ∈ 𝔸𝑠 , 𝑦 ∈ 𝔸𝑡 . Next, because case𝑠,𝑡,𝑢 realizes ℎ , left𝑠,𝑡 realizes 𝜄1 , and
ℎ(𝜄1 (𝑥)) = 𝑎 𝑥 , we have case𝑠,𝑡,𝑢 (left𝑠,𝑡 𝑥) = 𝑎 𝑥 , therefore

case𝑠,𝑡,𝑢 (left𝑠,𝑡 𝑥) 𝑓 𝑔 ≃ 𝑎 𝑥 𝑓 𝑔 ≃ 𝑓 𝑥

for all 𝑥 , 𝑓 , and 𝑔 of relevant types. Similarly, case𝑠,𝑡,𝑢 (right𝑠,𝑡 𝑦) 𝑓 𝑔 ≃


𝑔 𝑦 holds as well.

The obvious question to ask is when a tpca has sums. We do not know
whether there is a tpca without sums, and we do not explore the question
further. We content ourselves with a sufficient condition that covers
the instances we care about.

Definition 3.6.6 A tpca 𝔸 has booleans when there is a type bool, and
for each type 𝑡 elements

false , true ∈ 𝔸bool and if𝑡 ∈ 𝔸bool→𝑡→𝑡→𝑡

satisfying, for all 𝑥, 𝑦 ∈ 𝔸𝑡 ,

if𝑡 true 𝑥 𝑦 = 𝑥 and if𝑡 false 𝑥 𝑦 = 𝑦.

We say that false, true, if𝑡 are suitable for booleans in 𝔸.


A sub-tpca with booleans is a sub-tpca 𝔸′ of 𝔸 such that there exists
false, true, if𝑡 in 𝔸′ which are suitable for booleans in 𝔸.

Proposition 3.6.7 A tpca 𝔸 has sums if, and only if, it has booleans.
Furthermore, a sub-tpca 𝔸′ is a sub-tpca with sums if, and only if, it is a
sub-tpca with booleans.

Proof. Suppose 𝔸 has sums. Pick any type 𝑜 , an element 𝜔 𝑜 ∈ 𝔸𝑜 , and


define

bool = 𝑜 + 𝑜,
true = left 𝜔 𝑜 ,
false = right 𝜔 𝑜 ,
if𝑡 = ⟨𝑏 bool ⟩ ⟨𝑥 𝑡 ⟩ ⟨𝑦 𝑡 ⟩ case𝑜,𝑜,𝑡 𝑏 (K𝑜,𝑡 𝑥) (K𝑜,𝑡 𝑦).

It is easy to check that these satisfy the conditions from Definition 3.6.6.
Conversely, suppose 𝔸 has booleans, and let 𝑠 , 𝑡 , and 𝑢 be types. There
exist 𝜔 𝑠 ∈ 𝔸𝑠 and 𝜔𝑡 ∈ 𝔸𝑡 . Define

𝑠 + 𝑡 = bool × (𝑠 × 𝑡)
left𝑠,𝑡 = ⟨𝑥 𝑠 ⟩ pair true (pair 𝑥 𝜔𝑡 )
right𝑠,𝑡 = ⟨𝑦 𝑡 ⟩ pair false (pair 𝜔 𝑠 𝑦)
case𝑠,𝑡,𝑢 = ⟨𝑧 𝑠+𝑡 ⟩ ⟨ 𝑓 𝑠→𝑢 ⟩ ⟨𝑔 𝑡→𝑢 ⟩
if𝑢 (fst 𝑧) ( 𝑓 (fst (snd 𝑧))) (𝑔 (snd (snd 𝑧)))

These have the required properties, as is easily checked.
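The encoding in the proof can be transcribed into Python as a sketch; the dummy elements 𝜔𝑠 and 𝜔𝑡 become explicit arguments, and the function names are ours:

```python
# A sum s + t encoded as bool × (s × t), following the proof: the unused
# component is padded with a dummy element, passed explicitly.
def left(x, omega_t):
    return (True, (x, omega_t))

def right(y, omega_s):
    return (False, (omega_s, y))

def case(z, f, g):
    b, (x, y) = z
    return f(x) if b else g(y)
```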

Proposition 3.6.8 Every N-tpca has booleans and every sub-N-tpca is a


sub-N-tpca with booleans.

Proof. Let 𝔸 be a N-tpca and 𝔸′ its sub-N-tpca. Define

bool = nat ,
false = 0 ,
true = 1 ,
if𝑡 = ⟨𝑏 bool ⟩ ⟨𝑥 𝑡 ⟩ ⟨𝑦 𝑡 ⟩ rec𝑡 𝑦 (⟨𝑛 nat ⟩ ⟨𝑧 𝑡 ⟩ 𝑥) 𝑏

Again, it is easy to check that these have the desired properties.
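As a sketch, the recursor construction in the proof can be mimicked in Python, with numerals as machine integers; rec here is an assumed stand-in for rec𝑡, with rec(y, s, 0) = y and rec(y, s, n+1) = s(n, rec(y, s, n)):

```python
# Booleans from numerals via the primitive recursor of an N-tpca:
# false = 0, true = 1, and "if" discards one of its branches.
def rec(y, s, n):
    acc = y
    for k in range(n):
        acc = s(k, acc)
    return acc

def if_(b, x, y):
    # rec y (lambda n z: x) b, as in the proof
    return rec(y, lambda n, z: x, b)
```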

Finally, let us put all these together.

Proposition 3.6.9 If 𝔸 is a N-tpca and 𝔸′ its sub-N-tpca then Asm(𝔸 , 𝔸′)


has binary coproducts.

Proof. Combine Propositions 3.6.5, 3.6.7 and 3.6.8.

The other finite colimits are more easily dealt with. The initial object is
the empty assembly
0 = (∅, 𝑜, ⊩0 )

where 𝑜 is any type.9 Its universal property is that there is exactly one
morphism 0 → 𝑆 for every assembly 𝑆 . The property holds because there
is a unique map ∅ → 𝑆 , which is realized by K ∥𝑆∥,𝑜 𝑎 , where 𝑎 ∈ 𝔸′𝑜 .

9: We need not specify ⊩0 because there is only one relation between 𝔸𝑜 and ∅.


A coequalizer of morphisms 𝑓 , 𝑔 : 𝑆 → 𝑇 is an object 𝑄 with a morphism
𝑞 : 𝑇 → 𝑄 that coequalizes 𝑓 and 𝑔 , which means 𝑞 ◦ 𝑓 = 𝑞 ◦ 𝑔 , and has
the following universal property: if 𝑘 : 𝑇 → 𝐾 coequalizes 𝑓 and 𝑔 then
there is a unique morphism 𝑖 : 𝑄 → 𝐾 such that 𝑘 = 𝑖 ◦ 𝑞 :

       𝑓
    𝑆 ══⇉ 𝑇 ──𝑞──→ 𝑄
       𝑔      𝑘 ↘   ┆ 𝑖
                    ↓
                    𝐾

In the category of sets the coequalizer is the quotient 𝑄 = 𝑇/≡ of 𝑇 by


the least equivalence relation ≡ satisfying 𝑓 𝑥 ≡ 𝑔 𝑥 for all 𝑥 ∈ 𝑆 . The
map 𝑞 : 𝑇 → 𝑇/≡ is the canonical quotient map which takes 𝑦 ∈ 𝑇 to its
equivalence class [𝑦]≡ .
In Asm(𝔸 , 𝔸′) the coequalizer of 𝑓 , 𝑔 : 𝑆 → 𝑇 is the assembly 𝑇/≡ where
𝑇/≡ is the coequalizer of 𝑓 and 𝑔 computed in sets, as described above,
∥𝑇/≡∥ = ∥𝑇 ∥ , and ⊩𝑇/≡ is defined by

y ⊩𝑇/≡ [𝑧]≡ ⇐⇒ ∃𝑦 ∈ 𝑇. y ⊩𝑇 𝑦 ∧ 𝑦 ≡ 𝑧.

The canonical quotient map 𝑞 : 𝑇 → 𝑇/≡ is realized by ⟨𝑦 ∥𝑇 ∥ ⟩ 𝑦 .
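For finite sets the coequalizer can be computed explicitly. A Python sketch (function names are ours) builds the partition of 𝑇 by merging the blocks containing 𝑓(𝑥) and 𝑔(𝑥):

```python
# Coequalizer of f, g : S -> T in Set, for finite S and T: the partition of
# T by the least equivalence relation with f(x) ≡ g(x).
def quotient_blocks(S, T, f, g):
    blocks = [{y} for y in set(T)]
    for a, b in ((f(x), g(x)) for x in S):
        hit = [blk for blk in blocks if a in blk or b in blk]
        rest = [blk for blk in blocks if a not in blk and b not in blk]
        blocks = [set().union(*hit)] + rest
    return blocks

def quotient_map(S, T, f, g):
    # q : T -> T/≡ sends y to its equivalence class
    blocks = quotient_blocks(S, T, f, g)
    return lambda y: next(frozenset(blk) for blk in blocks if y in blk)
```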


Pushouts, which are dual to pullbacks, exist in Asm(𝔸 , 𝔸′) if coproducts
do, because every finite colimit is a coequalizer of a finite coproduct, as
recorded in the following proposition.

Exercise 3.6.10 Give an explicit description of the pushout of assem-


blies.

Proposition 3.6.11 Asm(𝔸 , 𝔸′) is cocartesian10 if, and only if, 𝔸 is a tpca
with sums and 𝔸′ a sub-tpca with sums.

10: A category is cocartesian or finitely cocomplete if it has finite colimits.

Proof. Coequalizers and the initial object always exist. Therefore, all finite
colimits exist, provided binary coproducts do. By Proposition 3.6.5 this
is equivalent to the condition that 𝔸 have sums and 𝔸′ be a sub-tpca
with sums.

3.6.3 Monos and epis

Recall that 𝑓 is a monomorphism (mono) when it can be canceled on the


left: if 𝑓 ◦ 𝑔 = 𝑓 ◦ ℎ then 𝑔 = ℎ . The dual notion is epimorphism (epi),
which is a morphism that can be canceled on the right. In the category of
sets the monos and epis are precisely the injective and surjective maps,
respectively.

Proposition 3.6.12 A morphism in Asm(𝔸 , 𝔸′) is mono if, and only if, it is
mono as a map in Set, and likewise for epis.

Proof. It is obvious that a morphism 𝑓 : 𝑆 → 𝑇 is a mono in Asm(𝔸 , 𝔸′)


if it is mono in Set. Conversely, suppose 𝑓 is mono in Asm(𝔸 , 𝔸′), and
consider maps 𝑔, ℎ : 𝑈 → 𝑆 in Set such that 𝑓 ◦ 𝑔 = 𝑓 ◦ ℎ . Define the
assembly 𝑈 = (𝑈 , ∥𝑆∥ × ∥𝑆∥, ⊩𝑈 ) with the realizability relation

𝑝 ⊩𝑈 𝑢 ⇐⇒ fst 𝑝 ⊩𝑆 𝑔(𝑢) ∧ snd 𝑝 ⊩𝑆 ℎ(𝑢).

The maps 𝑔 and ℎ are morphisms from 𝑈 to 𝑆 because they are realized
by fst and snd, respectively. Since 𝑓 is mono as a morphism of assemblies,
it follows that 𝑔 = ℎ .
Next we consider epis. Again, it is easy to see that a morphism 𝑓 : 𝑆 → 𝑇
is epi if it is epi in Set. Conversely, suppose 𝑓 is epi in Asm(𝔸 , 𝔸′) and
consider maps 𝑔, ℎ : 𝑇 → 𝑈 in Set such that 𝑔 ◦ 𝑓 = ℎ ◦ 𝑓 . The maps 𝑔
and ℎ are morphisms 𝑇 → ∇𝑈 because they are both trivially realized.
Since 𝑓 is epi in Asm(𝔸 , 𝔸′), we may cancel it and obtain 𝑔 = ℎ . This
shows that 𝑓 is epi in Set.

A mono-epi is a morphism 𝑓 : 𝑆 → 𝑇 which is both mono and epi.


In general such a morphism need not be an isomorphism. For exam-
ple, a continuous bijection between topological spaces need not be a
homeomorphism.

Corollary 3.6.13 An assembly map 𝑓 : 𝑆 → 𝑇 is mono-epi if, and only if,


its underlying map is a bijection.

Proof. This follows directly from Proposition 3.6.12 and the fact that in
Set a mono-epi is the same thing as a bijection.

It is easy to provide a mono-epi which is not an isomorphism. For


example if 𝑆 is a modest set with at least two different elements, then
id𝑆 : 𝑆 → 𝑆 is realized as a morphism id𝑆 : 𝑆 → ∇𝑆 , hence it is an
mono-epi in Asm(𝔸 , 𝔸′). However, every morphism ∇𝑆 → 𝑆 is a constant
map, therefore 𝑆 is not isomorphic to ∇𝑆 .
Recall that 𝑓 : 𝑆 → 𝑇 is a regular mono if there are 𝑔, ℎ : 𝑇 → 𝑈 such
that 𝑓 is their equalizer. Regular monos are well behaved, and we can
think of them as subspace embeddings. Given an assembly 𝑇 and a subset
𝑇 ′ ⊆ 𝑇 , define the assembly 𝑇 ′ = (𝑇 ′ , ∥𝑇 ∥, ⊩𝑇 ′ ) as the restriction of 𝑇 , i.e.,
x ⊩𝑇 ′ 𝑥 if, and only if 𝑥 ∈ 𝑇 ′ and x ⊩𝑇 𝑥 . The subset inclusion 𝜄 : 𝑇 ′ → 𝑇
is a morphism of assemblies because it is realized by ⟨𝑥 ∥𝑇 ∥ ⟩ 𝑥 , and it is
a regular mono because it is the equalizer of the maps 𝑔, ℎ : 𝑇 → ∇2,
defined by 𝑔(𝑥) = 1 for all 𝑥 ∈ 𝑇 , and

    ℎ(𝑥) = 1   if 𝑥 ∈ 𝑇 ′,
    ℎ(𝑥) = 0   otherwise.

Even more, every regular mono is of this form, up to isomorphism. To


see this, suppose 𝑓 : 𝑆 → 𝑇 is a regular mono, thus an equalizer of
morphisms 𝑔, ℎ : 𝑇 → 𝑈 . If we simply compute the equalizer of 𝑔 and ℎ
again according to the recipe (3.1) from Subsection 3.6.1, we see that it is
precisely the restriction of 𝑇 to the subset

𝑇 ′ = {𝑥 ∈ 𝑇 | 𝑔 𝑥 = ℎ(𝑥)}.

Since both 𝜄 : 𝑇 ′ → 𝑇 and 𝑓 : 𝑆 → 𝑇 are equalizers of 𝑔 and ℎ , they are


isomorphic.
The following characterization of regular monos is often useful.

Proposition 3.6.14 A realized map 𝑓 : 𝑆 → 𝑇 is a regular mono if, and only


if, 𝑓 is injective and there exists i ∈ 𝔸′∥𝑇 ∥→∥𝑆∥ such that, for all 𝑥 ∈ 𝑆 and
y ∈ 𝔸 ∥𝑇 ∥ , if y ⊩𝑇 𝑓 𝑥 then i y is defined and i y ⊩𝑆 𝑥 .

Proof. A regular mono 𝑓 : 𝑆 → 𝑇 is injective because it is mono. There


exist realized maps 𝑔, ℎ : 𝑇 → 𝑈 such that 𝑓 is their equalizer. Let
𝑒 : 𝐸 → 𝑇 be the equalizer of 𝑔 and ℎ , as computed in (3.1):

𝐸 = ({𝑦 ∈ 𝑇 | 𝑔(𝑦) = ℎ(𝑦)}, ∥𝑇 ∥, ⊩𝐸 ),


y ⊩𝐸 𝑦 ⇐⇒ y ⊩𝑇 𝑦,
𝑒(𝑦) = 𝑦.

Because 𝑒 equalizes 𝑔 and ℎ , there is a realized map 𝑖 : 𝐸 → 𝑆 such


that 𝑒 = 𝑓 ◦ 𝑖 . We claim that any realizer i ∈ 𝔸′∥𝑇 ∥→∥𝑆∥ of 𝑖 has the
desired property. Suppose 𝑥 ∈ 𝑆 , y ∈ 𝔸 ∥𝑇 ∥ , and y ⊩𝑇 𝑓 𝑥 . Because
𝑔( 𝑓 𝑥) = ℎ( 𝑓 𝑥), 𝑓 𝑥 ∈ 𝐸 and so y ⊩𝐸 𝑓 𝑥 . Then i y is defined and
i y ⊩𝑆 𝑖( 𝑓 𝑥). This is what we want because 𝑓 (𝑖( 𝑓 𝑥)) = 𝑒( 𝑓 𝑥) = 𝑓 𝑥 ,
from which 𝑖( 𝑓 𝑥) = 𝑥 follows by injectivity of 𝑓 .
Conversely, suppose 𝑓 : 𝑆 → 𝑇 is injective, realized by f, and i is as in
the statement of the proposition. Let 𝑇 ′ = {𝑦 ∈ 𝑇 | ∃𝑥 ∈ 𝑆. 𝑓 𝑥 = 𝑦}. It suffices
to show that 𝑓 is isomorphic to the inclusion 𝜄 : 𝑇 ′ → 𝑇 , where 𝑇 ′ is
the restriction of 𝑇 to 𝑇 ′. The map 𝑗 : 𝑆 → 𝑇 ′ defined by 𝑗(𝑥) = 𝑓 𝑥 is
realized by f. The map 𝑖 : 𝑇 ′ → 𝑆 , defined by 𝑖( 𝑓 𝑥) = 𝑥 , is well defined


because 𝑓 is injective, and is realized by i. Clearly, 𝑗 and 𝑖 are inverses of
each other and 𝑓 = 𝜄 ◦ 𝑗 .

We repeat the story for regular epis, which are those morphisms that are
coequalizers. They are the well behaved epis which can be thought of as
quotient maps. In fact, if 𝑓 : 𝑆 → 𝑇 is a regular epi, we say that 𝑇 is a
quotient of 𝑆 .
The match between regular epis and quotients is precise in Asm(𝔸 , 𝔸′).
Note that in the construction of coequalizers, as described above,
we may start with an arbitrary equivalence relation: given an assembly
𝑇 and an equivalence relation ≡ on 𝑇 , define the quotient assembly
𝑇/≡ = (𝑇/≡, ∥𝑇 ∥, ⊩𝑇/≡ ) whose realizability relation satisfies x ⊩𝑇/≡ [𝑦]
if, and only if, x ⊩𝑇 𝑥 and 𝑥 ≡ 𝑦 for some 𝑥 ∈ 𝑇 . The quotient map
𝑞 : 𝑇 → 𝑇/≡ is realized by ⟨𝑥 ∥𝑇 ∥ ⟩ 𝑥 , and is a coequalizer of 𝑔, ℎ : 𝑉 → 𝑇
where11

𝑉 = ({(𝑥, 𝑦) ∈ 𝑇 × 𝑇 | 𝑥 ≡ 𝑦}, ∥𝑇 ∥ × ∥𝑇 ∥, ⊩𝑉 ),
𝑝 ⊩𝑉 (𝑥, 𝑦) ⇐⇒ fst 𝑝 ⊩𝑇 𝑥 ∧ snd 𝑝 ⊩𝑇 𝑦,
𝑔(𝑥, 𝑦) = 𝑥,
ℎ(𝑥, 𝑦) = 𝑦.

11: You should convince yourself that 𝑉 is the kernel pair of 𝑞 , i.e., the pullback of 𝑞 with itself.

Every regular epi is isomorphic to one of this form. To see this, suppose
𝑓 : 𝑇 → 𝑈 is a coequalizer of 𝑔, ℎ : 𝑆 → 𝑇 . Let ≡ be the least equivalence
relation on 𝑇 such that 𝑔(𝑥) ≡ ℎ(𝑥) for all 𝑥 ∈ 𝑆 . Then 𝑓 is isomorphic
to 𝑞 : 𝑇 → 𝑇/≡ because 𝑞 is the coequalizer of 𝑔 and ℎ
according to Subsection 3.6.2.
There is another characterization of regular epis which is used often.

Proposition 3.6.15 In Asm(𝔸 , 𝔸′) a morphism 𝑓 : 𝑇 → 𝑈 is a regular epi


if, and only if, there exists i ∈ 𝔸′∥𝑈 ∥→∥𝑇 ∥ such that, whenever y ⊩𝑈 𝑦 then
i y↓ and there is 𝑥 ∈ 𝑓 ∗ (𝑦) such that i y ⊩𝑇 𝑥 .

Proof. Suppose first that we have a regular epi 𝑓 : 𝑇 → 𝑈 which is a


coequalizer of 𝑔, ℎ : 𝑆 → 𝑇 . Let 𝑞 : 𝑇 → 𝑇/≡ be the coequalizer of 𝑔
and ℎ , as computed in Subsection 3.6.2. Because 𝑓 and 𝑞 are both
coequalizers of 𝑔 and ℎ , there exists a unique isomorphism 𝑖 : 𝑈 → 𝑇/≡ such that
𝑞 = 𝑖 ◦ 𝑓 . Let i ∈ 𝔸′∥𝑈 ∥→∥𝑇 ∥ be a realizer for 𝑖 . We claim that it has the
required properties. If y ⊩𝑈 𝑦 then i y↓ and i y ⊩𝑇/≡ 𝑖(𝑦). Because 𝑓 is
surjective there exists 𝑥 ′ ∈ 𝑇 such that 𝑓 (𝑥 ′) = 𝑦 , from which we get
𝑖(𝑦) = 𝑖( 𝑓 (𝑥 ′)) = 𝑞(𝑥 ′) = [𝑥 ′]. Since i y ⊩𝑇/≡ [𝑥 ′] there is 𝑥 ∈ 𝑇 such that
i y ⊩𝑇 𝑥 and 𝑥 ≡ 𝑥 ′. The element 𝑥 is the one we are looking for because
𝑥 ≡ 𝑥 ′ implies 𝑓 𝑥 = 𝑓 (𝑥 ′) = 𝑦 .
Conversely, suppose i is as in the statement of the proposition, and let f
be a realizer for 𝑓 . To show that 𝑓 : 𝑇 → 𝑈 is a coequalizer, define the
equivalence relation ≡ on 𝑇 by

𝑥 ≡ 𝑦 ⇐⇒ 𝑓 𝑥 = 𝑓 (𝑦).

It suffices to show that 𝑓 is isomorphic to 𝑞 : 𝑇 → 𝑇/≡. In one direction


we have the map 𝑗 : 𝑇/≡ → 𝑈 defined by 𝑗([𝑥]) = 𝑓 𝑥 , realized by f.

In the other direction we have 𝑖 : 𝑈 → 𝑇/≡ defined by 𝑖( 𝑓 𝑥) = [𝑥],


which is well defined because 𝑓 is surjective. The map 𝑖 is realized by i.
It is obvious that 𝑖 and 𝑗 are inverses of each other and that 𝑞 = 𝑖 ◦ 𝑓
holds.

3.6.4 Regular structure

In the category of sets every function 𝑓 : 𝑆 → 𝑇 may be factored as


𝑓 = 𝑚 ◦ 𝑒 where 𝑒 is epi and 𝑚 is mono. Similarly, a continuous map
between topological spaces may be factored into a regular epi and a
mono.
In assemblies a realized map 𝑓 : 𝑆 → 𝑇 factors as

    𝑆 ──𝑓──→ 𝑇
    │        ↑
  𝑞 │        │ 𝑖
    ↓        │
    𝑈 ──𝑏──→ 𝑉

where 𝑞 is a regular epi, 𝑏 is a mono-epi and 𝑖 is a regular mono. Indeed,


we may take 𝑈 = 𝑆/≡, where 𝑥 ≡ 𝑦 ⇐⇒ 𝑓 𝑥 = 𝑓 (𝑦), and 𝑞 : 𝑆 → 𝑆/≡
the canonical quotient map. The assembly 𝑉 is the restriction of 𝑇
to the subset 𝑉 = {𝑦 ∈ 𝑇 | ∃𝑥 ∈ 𝑆. 𝑓 𝑥 = 𝑦} so that ∥𝑉 ∥ = ∥𝑇 ∥ and
y ⊩𝑉 𝑦 ⇔ y ⊩𝑇 𝑦 for all 𝑦 ∈ 𝑉 . The map 𝑖 : 𝑉 → 𝑇 is the subset
inclusion. Finally, 𝑏 : 𝑈 → 𝑉 is characterized by 𝑏([𝑥]≡ ) = 𝑓 𝑥 . It is
realized by the same realizers as 𝑓 .
In the above factorization we call 𝑈 the image of 𝑓 and 𝑉 the stable
image of 𝑓 . The reason for the terminology is revealed in ??, where the
stable image is related to stable propositions.
The factorization is unique up to isomorphism. Suppose we had another
factorization 𝑓 = 𝑖 ′ ◦ 𝑏 ′ ◦ 𝑞 ′ where 𝑖 ′, 𝑏 ′ and 𝑞 ′ are regular mono,
mono-epi, and regular-epi, respectively. We claim that there are unique
isomorphisms 𝑗 and 𝑘 which make the following diagram commute:

    𝑆 ──𝑞──→ 𝑈 ──𝑏──→ 𝑉 ──𝑖──→ 𝑇
    ∥        │ 𝑗       │ 𝑘      ∥
    𝑆 ──𝑞′─→ 𝑈′ ─𝑏′─→ 𝑉′ ─𝑖′─→ 𝑇

Without loss of generality we may assume that 𝑞 ′ is a canonical quotient


map 𝑞 ′ : 𝑆 → 𝑆/≡′ for an equivalence relation ≡′ on 𝑆 , and that
𝑖 ′ : 𝑉 ′ → 𝑇 is a subset inclusion.
First we show that ≡ and ≡′ coincide. If 𝑥 ≡ 𝑦 then 𝑖 ′(𝑏 ′(𝑞 ′(𝑥))) = 𝑓 𝑥 =
𝑓 (𝑦) = 𝑖 ′(𝑏 ′(𝑞 ′(𝑦))), and since 𝑖 ′ ◦ 𝑏 ′ is a mono 𝑞 ′(𝑥) = 𝑞 ′(𝑦) and 𝑥 ≡′ 𝑦
follows. Conversely, if 𝑥 ≡′ 𝑦 then 𝑞 ′(𝑥) = 𝑞 ′(𝑦) and 𝑓 𝑥 = 𝑖 ′(𝑏 ′(𝑞 ′(𝑥))) =
𝑖 ′(𝑏 ′(𝑞 ′(𝑦))) = 𝑓 (𝑦), hence 𝑥 ≡ 𝑦 .

Next we verify that 𝑉 and 𝑉 ′ are equal. If 𝑦 ∈ 𝑉 then 𝑓 𝑥 = 𝑦 for


some 𝑥 ∈ 𝑆 . Because 𝑦 = 𝑓 𝑥 = 𝑖 ′(𝑏 ′(𝑞 ′(𝑥))) and 𝑖 ′ is a subset inclusion,
𝑏 ′(𝑞 ′(𝑥)) = 𝑓 𝑥 = 𝑦 , which shows that 𝑦 ∈ 𝑉 ′. Conversely, if 𝑦 ∈ 𝑉 ′ then
there is 𝑥 ∈ 𝑆 such that 𝑦 = 𝑖 ′(𝑏 ′(𝑞 ′(𝑥))) = 𝑓 𝑥 , but then 𝑦 ∈ 𝑉 .
Good factorization properties of Asm(𝔸 , 𝔸′) lead to another important
feature of assemblies.

Proposition 3.6.16 The category Asm(𝔸 , 𝔸′) is regular which means that
1. it is cartesian,
2. every morphism can be factored as a composition of a regular epi and a
mono, and
3. the pullback of a regular epi is a regular epi.

Proof. The first item was proved in Proposition 3.6.3. For the second item,
take the above factorization 𝑓 = 𝑖 ◦ 𝑏 ◦ 𝑞 and notice that 𝑞 is a regular epi
and 𝑖 ◦ 𝑏 a mono. Lastly, suppose 𝑞 : 𝑇 → 𝑇/≡ is a regular epi, where we
assumed without loss of generality that it is a quotient by an equivalence
relation. Let 𝑓 : 𝑆 → 𝑇/≡ be realized by f ∈ 𝔸′∥𝑆∥→∥𝑇 ∥ . The pullback of 𝑞
is the map 𝑟 : 𝑃 → 𝑆 , as in the diagram

    𝑃 ─────→ 𝑇
    │        │
  𝑟 │        │ 𝑞
    ↓        ↓
    𝑆 ──𝑓──→ 𝑇/≡

where
𝑃 = ({(𝑥, 𝑦) ∈ 𝑆 × 𝑇 | 𝑓 𝑥 ≡ 𝑦}, ∥𝑆∥ × ∥𝑇 ∥, ⊩𝑃 ),
pair x y ⊩𝑃 (𝑥, 𝑦) if, and only if, x ⊩𝑆 𝑥 and y ⊩𝑇 𝑦 and 𝑟 : (𝑥, 𝑦) ↦→ 𝑥 .
Let us use Proposition 3.6.15 to show that 𝑟 is regular epi. The realizer
i = ⟨𝑥 ∥𝑆∥ ⟩ pair 𝑥 (f 𝑥) satisfies the conditions of the proposition: if 𝑥 ∈ 𝑆
and x ⊩𝑆 𝑥 then i x = pair x (f x) is defined. There is 𝑦 ∈ 𝑇 such that
𝑓 𝑥 = [𝑦]≡ , hence 𝑟(𝑥, 𝑦) = 𝑥 , and also i x ⊩𝑃 (𝑥, 𝑦).

The regular structure of assemblies is important for at least two reasons: it


gives a well-defined notion of an image of a realized map, and it provides
an interpretation of existential quantifiers, cf. ??.

3.6.5 Cartesian closed structure

If 𝑆 and 𝑇 are objects in a category, we may form the set Hom (𝑆, 𝑇)
of morphisms with domain 𝑆 and codomain 𝑇 . Sometimes Hom (𝑆, 𝑇)
carries additional structure that turns it into an object of the category. For
example, in the category of partially ordered sets and monotone maps,
the set Hom (𝑃, 𝑄) of monotone maps between (𝑃, ≤𝑃 ) and (𝑄, ≤𝑄 ) is
partially ordered by 𝑓 ≤ 𝑔 ⇔ ∀𝑥 ∈ 𝑃. 𝑓 (𝑥) ≤𝑄 𝑔(𝑥). The following definition
explains what it means for an object to correspond to the set of
morphisms.

Definition 3.6.17 An exponential of objects 𝑆 and 𝑇 is an object 𝐸 with


a morphism 𝑒 : 𝐸 × 𝑆 → 𝑇 , such that for every 𝑓 : 𝑈 × 𝑆 → 𝑇 there
exists a unique 𝑓ˆ : 𝑈 → 𝐸 such that the following diagram commutes:

        𝐸 × 𝑆 ──𝑒──→ 𝑇
          ↑        ↗
  𝑓ˆ×id𝑆  │     𝑓
        𝑈 × 𝑆

A category with finite products in which all exponentials exist is a


cartesian closed category (ccc).

The exponential is determined uniquely up to isomorphism. When


given as an operation, we denote the exponential of 𝑆 and 𝑇 by 𝑇 𝑆 ,
and sometimes by 𝑆 → 𝑇 . The map 𝑒 : 𝑇 𝑆 × 𝑆 → 𝑇 is called the
evaluation morphism. The map 𝑓ˆ : 𝑈 → 𝑇 𝑆 is called the transpose of
𝑓 : 𝑈 × 𝑆 → 𝑇.
Let us explain how exponentials work in the category of sets. The
exponential of sets 𝑆 and 𝑇 is the set

𝑇 𝑆 = { 𝑓 | 𝑓 is a function from 𝑆 to 𝑇}

and the evaluation map is 𝑒( 𝑓 , 𝑥) = 𝑓 𝑥 . The transpose of 𝑓 : 𝑈 × 𝑆 → 𝑇


is 𝑓ˆ(𝑧)(𝑥) = 𝑓 (𝑧, 𝑥). This way the diagram commutes because 𝑒(( 𝑓ˆ ×
id𝑆 )(𝑧, 𝑥)) = 𝑒( 𝑓ˆ(𝑧), 𝑥) = 𝑓ˆ(𝑧)(𝑥) = 𝑓 (𝑧, 𝑥). The transpose 𝑓ˆ is the only
map satisfying this property.

Proposition 3.6.18 The categories Asm(𝔸 , 𝔸′) and Mod(𝔸 , 𝔸′) are carte-
sian closed.

Proof. We prove that Asm(𝔸 , 𝔸′) has exponentials. The same construction
works for modest sets. Suppose 𝑆 and 𝑇 are assemblies. Define the
assembly

𝑇 𝑆 = ({ 𝑓 : 𝑆 → 𝑇 | 𝑓 is realized}, ∥𝑆∥ → ∥𝑇 ∥, ⊩𝑆→𝑇 )

where

f ⊩𝑆→𝑇 𝑓 ⇐⇒ ∀𝑥 ∈ 𝑆. ∀x ∈ 𝔸 ∥𝑆∥ . (x ⊩𝑆 𝑥 ⇒ f x↓ ∧ f x ⊩𝑇 𝑓 𝑥).

None of this is surprising because we just copied the definition of realized


maps. The evaluation map 𝑒 : 𝑇 𝑆 × 𝑆 → 𝑇 is 𝑒( 𝑓 , 𝑥) = 𝑓 𝑥 , which is
realized by

e = ⟨𝑝 (∥𝑆∥→∥𝑇 ∥)×∥𝑆∥ ⟩ (fst 𝑝) (snd 𝑝).

The transpose of 𝑓 : 𝑈×𝑆 → 𝑇 , realized by f, is the map 𝑓ˆ(𝑧)(𝑥) = 𝑓 (𝑧, 𝑥),


which is realized by

f̂ = ⟨𝑧 ∥𝑈 ∥ ⟩ ⟨𝑥 ∥𝑆∥ ⟩ f (pair 𝑧 𝑥).

The passage from 𝑓 : 𝑈 × 𝑆 → 𝑇 to its transpose 𝑓ˆ : 𝑈 → 𝑇 𝑆 has


an inverse. To every 𝑔 : 𝑈 → 𝑇 𝑆 we may assign 𝑔ˇ : 𝑈 × 𝑆 → 𝑇 ,
defined by 𝑔ˇ (𝑧, 𝑥) = 𝑔(𝑧)(𝑥). If 𝑔 is realized by g then 𝑔ˇ is realized by

ǧ = ⟨𝑝 ∥𝑈 ∥×∥𝑆∥ ⟩ g (fst 𝑝) (snd 𝑝).

It is easy to check that ( 𝑓ˆ)ˇ = 𝑓 and ( 𝑔ˇ)ˆ = 𝑔 .
The operation 𝑓 ↦→ 𝑓ˆ is also known as currying and its inverse 𝑔 ↦→ 𝑔ˇ as
uncurrying.
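Currying and uncurrying can be sketched in Python, with pairs as tuples; the names transpose and untranspose are ours (Haskell's Prelude calls these operations curry and uncurry):

```python
# The transpose (currying) and its inverse (uncurrying): the bijection
# between maps U × S -> T and U -> T^S.
def transpose(f):
    return lambda z: lambda x: f((z, x))

def untranspose(g):
    return lambda p: g(p[0])(p[1])
```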

3.6.6 The interpretation of 𝜆-calculus in assemblies

Currying and uncurrying are useful operations, but the above notation
with “hats and checks” is not very practical. We may take better advantage
of the cartesian closed structure of Asm(𝔸 , 𝔸′) by interpreting the 𝜆-
calculus in it. The types of the 𝜆-calculus are the assemblies, where the
product and function types are interpreted as products and exponentials
of assemblies, respectively. The unit type is the terminal assembly 1.
The expressions are those of the 𝜆-calculus, except that we write the
projections as 𝜋1 and 𝜋2 instead of fst and snd, respectively. In addition
if 𝑇 is an assembly and 𝑎 ∈ 𝑇 is an element for which there exists a realizer
a ∈ 𝔸′∥𝑇 ∥ then 𝑎 is a primitive constant of type 𝑇 .
Suppose 𝑒 is an expression of type 𝑇 and the freely occurring variables
of 𝑒 are among 𝑥 1𝑆1 , . . . , 𝑥 𝑛𝑆𝑛 . We prefer to write the list of variables as
𝑥1 : 𝑆1 , . . . , 𝑥 𝑛 : 𝑆𝑛 , which we abbreviate as 𝑥 : 𝑆 , and call it a typing
context for 𝑒 . The expression with the typing context determines a
realized map
[[𝑥 : 𝑆 | 𝑒 : 𝑇]] : 𝑆1 × · · · × 𝑆𝑛 → 𝑇,
which we abbreviate to [[𝑒]] when no confusion may arise. We define
the meaning of [[𝑒]] inductively on the structure of 𝑒 as follows, where
𝑎 = (𝑎1 , . . . , 𝑎 𝑛 ) ∈ 𝑆1 × · · · × 𝑆𝑛 :
1. A primitive constant 𝑏 ∈ 𝑇 which is realized by b ∈ 𝔸′∥𝑇 ∥ is
interpreted as the constant map

[[𝑥 : 𝑆 | 𝑏 : 𝑇]](𝑎) = 𝑏,

which is realized12 by ⟨𝑥 ∥𝑆1 ∥×···×∥𝑆𝑛 ∥ ⟩ b.

12: Had we allowed as primitive constants all elements of 𝑇 , we would face a difficulty here, because we could not exhibit a computable realizer for the constant map.

2. A variable 𝑥 𝑖𝑆𝑖 is interpreted as the 𝑖 -th projection

[[𝑥 : 𝑆 | 𝑥 𝑖 : 𝑆 𝑖 ]](𝑎) = 𝑎 𝑖 .

which of course is realized.


3. A 𝜆-abstraction 𝜆𝑦 :𝑈. 𝑒 of type 𝑈 → 𝑇 is interpreted as the
realized map [[𝜆𝑦 :𝑈. 𝑒]] : 𝑆1 × · · · × 𝑆𝑛 → 𝑇 𝑈 that is obtained as
the transpose of

𝑆1 × · · · × 𝑆 𝑛 × 𝑈 ──[[𝑥 :𝑆, 𝑦 :𝑈 | 𝑒 :𝑇]]──→ 𝑇

4. The interpretation of an application 𝑒1 𝑒2 , where 𝑒1 has type 𝑈 → 𝑇


and 𝑒2 has type 𝑈 , is the map

𝑆1 × · · · × 𝑆 𝑛 ──⟨[[𝑒1 ]], [[𝑒2 ]]⟩──→ 𝑇 𝑈 × 𝑈 ──ev──→ 𝑇

where ev is the evaluation map.



5. The interpretation of a pair (𝑒1 , 𝑒2 ) of type 𝑇 × 𝑈 is the map

𝑆1 × · · · × 𝑆 𝑛 ──⟨[[𝑒1 ]], [[𝑒2 ]]⟩──→ 𝑇 × 𝑈

6. A projection 𝜋1 (𝑒) of type 𝑇 , where 𝑒 has type 𝑇 × 𝑈 , is the map

𝑆1 × · · · × 𝑆 𝑛 ──[[𝑒]]──→ 𝑇 × 𝑈 ──𝜋1──→ 𝑇

The second projection 𝜋2 is treated analogously.


This definition shows that we may freely use the 𝜆-calculus to define
realized maps. Although the definition tells us exactly how to compute
the realizers from the expressions, the idea is to not do that. We have
verified once and for all that any map defined by 𝜆-calculus is realized,
and in most cases we do not care which specific realizer is used.
When 𝑒 is a closed expression of type 𝑇 and the typing context 𝑥 : 𝑆 is the
empty list, the meaning of 𝑒 is a realized map

[[· | 𝑒 : 𝑇]] : 1 → 𝑇

which amounts to the same thing as an element of 𝑇 . This element has a


computable realizer, i.e., one in 𝔸′∥𝑇 ∥ , because the corresponding realized
map does.
We may further simplify the notation by allowing patterns in 𝜆-abstraction,
a technique commonly used in functional programming languages.
Instead of
𝜆𝑝 :𝑆 × 𝑇. . . .
we write
𝜆(𝑥, 𝑦):𝑆 × 𝑇. . . .
and replace each occurrence of fst 𝑝 by 𝑥 , of snd 𝑝 by 𝑦 , and all other
occurrences of 𝑝 by (𝑥, 𝑦). For instance, we would write

𝜆( 𝑓 , 𝑔):(𝑇 → 𝑈) × (𝑆 → 𝑇). 𝜆𝑥 :𝑆. 𝑓 (𝑔 𝑥)

instead of

𝜆𝑝 :(𝑇 → 𝑈) × (𝑆 → 𝑇). 𝜆𝑥 :𝑆. (fst 𝑝) ((snd 𝑝) 𝑥).
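The pattern-matching abstraction is ordinary tuple unpacking in most languages; a Python sketch of the composition map above (the name compose_pair is ours):

```python
# The λ-abstraction λ(f, g). λx. f (g x) from the text, written with
# tuple unpacking in the function body.
def compose_pair(p):
    f, g = p
    return lambda x: f(g(x))
```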

It is also useful to write the definition of a function as

𝑓 𝑥1 . . . 𝑥 𝑛 = 𝑒

instead of
𝑓 = 𝜆𝑥 1 . . . 𝑥 𝑛 . 𝑒.

When Asm(𝔸 , 𝔸′) has coproducts, and in most cases of interest it does,
the 𝜆-calculus may be extended further to encompass binary sums. If
𝑆 and 𝑇 are assemblies, viewed as types, then we have the following
expressions:
1. If 𝑒 is an expression of type 𝑆 then 𝜄 1𝑆,𝑇 (𝑒) is an expression of type
𝑆 + 𝑇 . It is interpreted as the composition

𝑆1 × · · · × 𝑆 𝑛 ──[[𝑒]]──→ 𝑆 ──𝜄1──→ 𝑆 + 𝑇

where 𝜄 1 is the canonical inclusion. The expression 𝜄 2𝑆,𝑇 (𝑒) is treated


similarly. We usually omit the superscripts 𝑆, 𝑇 on 𝜄 1 and 𝜄 2 .
2. If 𝑒1 has type 𝑆 + 𝑇 , and 𝑒2 and 𝑒3 have type 𝑈 , then

case 𝑒1 of 𝜄 1 (𝑥) ↦→ 𝑒2 | 𝜄 2 (𝑦) ↦→ 𝑒3

is an expression of type 𝑈 , with 𝑥 bound in 𝑒2 and 𝑦 bound in 𝑒3 .


It is interpreted as the composition

𝑆1 × · · · × 𝑆 𝑛 ──[[𝑒1 ]]──→ 𝑆 + 𝑇 ──[[[𝑒2 ]], [[𝑒3 ]]]──→ 𝑈

The following equations hold:

(case 𝜄1 (𝑒) of 𝜄1 (𝑥) ↦→ 𝑒1 | 𝜄2 (𝑦) ↦→ 𝑒2 ) = 𝑒1 [𝑒/𝑥],


(case 𝜄2 (𝑒) of 𝜄1 (𝑥) ↦→ 𝑒1 | 𝜄2 (𝑦) ↦→ 𝑒2 ) = 𝑒2 [𝑒/𝑦],
(case 𝑒1 of 𝜄1 (𝑥) ↦→ 𝑒2 [𝜄 1 (𝑥)/𝑧] | 𝜄 2 (𝑦) ↦→ 𝑒2 [𝜄2 (𝑦)/𝑧]) = 𝑒2 [𝑒1 /𝑧],
𝑒[(case 𝑒1 of 𝜄 1 (𝑥) ↦→ 𝑒2 | 𝜄2 (𝑦) ↦→ 𝑒3 )/𝑧] =
case 𝑒1 of
𝜄1 (𝑥) ↦→ 𝑒[𝑒2 /𝑧]
𝜄2 (𝑦) ↦→ 𝑒[𝑒3 /𝑧]

We conclude by using the cartesian closed structure of Asm(𝔸 , 𝔸′) to


derive the distributive law

(𝑆 + 𝑇) × 𝑈  𝑆 × 𝑈 + 𝑇 × 𝑈.

Let us use the 𝜆-calculus to write down the isomorphisms explicitly. The
isomorphism from left to right is

𝑓 = 𝜆(𝑎, 𝑏):(𝑆 + 𝑇) × 𝑈. case 𝑎 of


𝜄1 (𝑥) ↦→ 𝜄1 (𝑥, 𝑏)
𝜄2 (𝑦) ↦→ 𝜄 2 (𝑦, 𝑏)

and its inverse is

𝑔 = 𝜆𝑐 :(𝑆 × 𝑈) + (𝑇 × 𝑈). case 𝑐 of


𝜄 1 (𝑥, 𝑏) ↦→ (𝜄1 (𝑥), 𝑏)
𝜄 2 (𝑦, 𝑏) ↦→ (𝜄2 (𝑦), 𝑏)

We compute

𝑔( 𝑓 (𝑎, 𝑏)) = 𝑔(case 𝑎 of 𝜄1 (𝑥) ↦→ 𝜄1 (𝑥, 𝑏) | 𝜄 2 (𝑦) ↦→ 𝜄2 (𝑦, 𝑏))


= case 𝑎 of
𝜄1 (𝑥) ↦→ 𝑔(𝜄1 (𝑥, 𝑏))
𝜄2 (𝑦) ↦→ 𝑔(𝜄2 (𝑦, 𝑏))
= case 𝑎 of
𝜄1 (𝑥) ↦→ (𝜄1 (𝑥), 𝑏)
𝜄2 (𝑦) ↦→ (𝜄 2 (𝑦), 𝑏)
= (𝑎, 𝑏)

and

𝑓 (𝑔(𝑐)) = 𝑓 (case 𝑐 of 𝜄1 (𝑥, 𝑏) ↦→ (𝜄1 (𝑥), 𝑏) | 𝜄 2 (𝑦, 𝑏) ↦→ (𝜄2 (𝑦), 𝑏))


= case 𝑐 of
𝜄1 (𝑥, 𝑏) ↦→ 𝑓 (𝜄1 (𝑥), 𝑏)
𝜄2 (𝑦, 𝑏) ↦→ 𝑓 (𝜄2 (𝑦), 𝑏)
= case 𝑐 of
𝜄1 (𝑥, 𝑏) ↦→ 𝜄1 (𝑥, 𝑏)
𝜄2 (𝑦, 𝑏) ↦→ 𝜄2 (𝑦, 𝑏)
= 𝑐.

This proof works in any cartesian closed category with binary coproducts.
In particular, it works in Asm(𝔸 , 𝔸′). Notice how we need not worry about
the underlying realizers for the isomorphisms 𝑓 and 𝑔 . You are invited to
redo the proof by drawing the relevant commutative diagrams and using
the universal properties of products, coproducts, and exponentials.
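As a sketch, the two λ-terms transcribe into Python with sums represented as tagged pairs ("iota1", -) and ("iota2", -); the representation and function names are ours, not part of the text:

```python
# The distributivity isomorphism (S + T) × U ≅ S × U + T × U,
# transcribing the λ-calculus terms f and g above.
def dist(p):
    a, b = p
    tag, v = a
    return (tag, (v, b))

def undist(c):
    tag, (v, b) = c
    return ((tag, v), b)
```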

3.6.7 Projective assemblies

An object 𝑃 is (regular) projective when for every regular epi 𝑒 : 𝐴 → 𝐵
and morphism 𝑓 : 𝑃 → 𝐵 there is 𝑓̄ : 𝑃 → 𝐴 such that 𝑓 = 𝑒 ◦ 𝑓̄ :

        𝐴
    𝑓̄ ↗ │ 𝑒
      ╱  ↓
    𝑃 ──𝑓──→ 𝐵

We say that 𝑃 has the lifting property with respect to regular epis, because
every 𝑓 "lifts" to 𝑓̄ , as in the diagram.13 Which assemblies are projective?

13: In a general category an object is called projective if it has the lifting property with respect to all epis, and regular projective if it has the lifting property with respect to regular epis. However, we are only interested in the regular projective objects, so we drop the qualifier "regular".

Definition 3.6.19 An assembly 𝑆 is partitioned if each element has
precisely one realizer: if 𝑟 ⊩𝑆 𝑥 and 𝑞 ⊩𝑆 𝑥 then 𝑟 = 𝑞 .

Proposition 3.6.20 A partitioned assembly is projective.

Proof. Suppose that 𝑃 is a partitioned assembly, 𝑒 : 𝐴 → 𝐵 is a regular



epi and 𝑓 : 𝑃 → 𝐵 is realized by f ∈ 𝔸′∥𝑃 ∥→∥𝐵∥ . By Proposition 3.6.15


there is i ∈ 𝔸′∥𝐵∥→∥𝐴∥ such that whenever z ⊩𝐵 𝑧 then i z ⊩𝐴 𝑦 for some
𝑦 ∈ 𝑒 ∗ (𝑧). Because 𝑒 : |𝐴| → |𝐵| is surjective, there exists by the axiom of
choice a map 𝑓̄ : |𝑃| → |𝐴| such that 𝑒( 𝑓̄ (𝑥)) = 𝑓 (𝑥) for all 𝑥 ∈ |𝑃| . The
map 𝑓̄ is realized by ⟨𝑥 ∥𝑃 ∥ ⟩ i (f 𝑥) because 𝑃 is partitioned.

Proposition 3.6.21 Every assembly is a quotient of a partitioned assembly.

Proof. Given an assembly 𝑆 , let 𝑃 be the partitioned assembly whose


underlying type is ∥𝑃 ∥ = ∥𝑆∥ , the underlying set is the extension of ⊩𝑆 ,

|𝑃| = {(x, 𝑥) ∈ 𝔸 ∥𝑆∥ × |𝑆| | x ⊩𝑆 𝑥},

and the existence predicate is E𝑃 (x, 𝑥) = {x}. The projection 𝑒 :
|𝑃| → |𝑆| , 𝑒 : (x, 𝑥) ↦→ 𝑥 is realized by ⟨x ∥𝑆∥ ⟩ x and is a regular epi by
Proposition 3.6.15. It is called the canonical cover of 𝑆 .

Exercise 3.6.22 Is the canonical cover functorial?

The following notion is useful when one has to compute with concrete
realizers.

Definition 3.6.23 An assembly 𝑆 has canonical realizers when there


exists c ∈ 𝔸′∥𝑆∥→∥𝑆∥ such that, for all 𝑥 ∈ |𝑆| and x ∈ ∥𝑆∥ , if x ⊩𝑆 𝑥
then c x↓, c x ⊩𝑆 𝑥 , and c (c x) = c x. We say that c computes canonical
realizers for 𝑆 .

Not every assembly has canonical realizers. In fact, having them is


equivalent to projectivity, as shown by the following characterization of
projective assemblies.

Theorem 3.6.24 The following are equivalent for an assembly 𝑆 :


1. 𝑆 is projective.
2. The canonical cover of 𝑆 is split.14
3. 𝑆 has canonical realizers.
4. 𝑆 is isomorphic to a partitioned assembly.

14: A morphism 𝑓 : 𝑋 → 𝑌 is split if there is 𝑔 : 𝑌 → 𝑋 such that 𝑓 ◦ 𝑔 = id𝑌 .

Proof. To show that the first statement implies the second one, let 𝑆 be a
projective assembly and 𝑒 : 𝑃 → 𝑆 the regular epimorphism constructed
in Proposition 3.6.21. Because 𝑆 is projective, id𝑆 lifts along 𝑒 to an
assembly map 𝑑 : 𝑆 → 𝑃 such that 𝑒 ◦ 𝑑 = id𝑆 , hence 𝑒 is split.
If the canonical cover 𝑒 : 𝑃 → 𝑆 is split by 𝑑 : 𝑆 → 𝑃 then any realizer
of 𝑑 computes canonical realizers for 𝑆 .
If c computes canonical realizers for 𝑆 then 𝑆 is isomorphic to the
partitioned assembly 𝑄 , defined by |𝑄 | = |𝑆| , ∥𝑄 ∥ = ∥𝑆∥ and

y ⊩𝑄 𝑥 ⇐⇒ ∃x ∈ ∥𝑆∥. x ⊩𝑆 𝑥 ∧ y = c x.

Finally, the fourth statement implies the first one because projectivity is
preserved by isomorphism and partitioned assemblies are projective by
Proposition 3.6.20.

Exercise 3.6.25 Show that the full subcategory on the projective modest
sets is equivalent to the category of sets of realizers, whose objects are
pairs (|𝑆|, ∥𝑆∥) where ∥𝑆∥ is a type and |𝑆| ⊆ 𝔸 ∥𝑆∥ . A morphism
𝑓 : (|𝑆|, ∥𝑆∥) → (|𝑇 |, ∥𝑇 ∥) is a map 𝑓 : |𝑆| → |𝑇 | that has a realizer
f ∈ 𝔸′∥𝑆∥→∥𝑇 ∥ satisfying f x↓ and f x ∈ |𝑇 | for all x ∈ |𝑆| .
Realizability and logic 4
The idea that the elements of a set are represented by values of a datatype
is familiar to programmers. In the previous chapter we expressed the
idea mathematically in terms of realizability relations and assemblies.
Programmers are less aware of, but still use, the fact that realizability
carries over to logic as well: a logical statement can be validated by
realizers.

4.1 The set-theoretic interpretation of logic

Let us first recall how the usual interpretation of classical first-order logic
works. A predicate on a set 𝑆 is a Boolean function 𝑆 → 2, where 2 = {⊥, ⊤}
is the Boolean algebra on two elements. The Boolean algebra structure
carries over from 2 to predicates, e.g., the conjunction of 𝑝, 𝑞 : 𝑆 → 2 is
computed element-wise as

(𝑝 ∧ 𝑞)𝑥 = 𝑝𝑥 ∧ 𝑞𝑥,

and similarly for the other connectives. With this much structure we can
interpret the propositional calculus. The quantifiers ∃ and ∀ can be
interpreted too, because 2 is complete: given a predicate 𝑝 : 𝑆 × 𝑇 → 2,
define ∃𝑆 𝑝 : 𝑇 → 2 and ∀𝑆 𝑝 : 𝑇 → 2 by
(∃𝑆 𝑝)𝑦 = ⋁𝑥∈𝑆 𝑝(𝑥, 𝑦)   and   (∀𝑆 𝑝)𝑦 = ⋀𝑥∈𝑆 𝑝(𝑥, 𝑦).
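For finite sets this interpretation is directly computable; here is a small sketch, assuming the index set and the divisibility predicate are invented for the example:

```python
# Boolean predicates on finite sets: a predicate on S is a map S -> {False, True}.
# The quantifiers are suprema and infima in the two-element Boolean algebra,
# which for a finite index set are Python's any/all.

def exists(S, p):
    """(∃_S p)(y), the supremum of p(x, y) over x in S."""
    return lambda y: any(p(x, y) for x in S)

def forall(S, p):
    """(∀_S p)(y), the infimum of p(x, y) over x in S."""
    return lambda y: all(p(x, y) for x in S)

# Example: p(x, y) = "x divides y" with S = {2, 3}.
S = {2, 3}
p = lambda x, y: y % x == 0

some_divisor = exists(S, p)    # y has a divisor in S
every_divisor = forall(S, p)   # y is divisible by everything in S

assert some_divisor(9) and not some_divisor(7)
assert every_divisor(6) and not every_divisor(9)
```

The adjunction characterization can be checked by hand here: `some_divisor` is the least predicate above all the `p(x, -)`, and `every_divisor` the largest below them.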

Categorical logic teaches us that the essential characteristic of the quanti-


fiers is not their relation to infima and suprema, but rather an adjunction:
∃𝑆 𝑝 is the least predicate on 𝑇 such that 𝑝(𝑥, 𝑦) ≤ (∃𝑆 𝑝)𝑦 for all 𝑥 ∈ 𝑆,
𝑦 ∈ 𝑇 , and ∀𝑆 𝑝 is the largest predicate on 𝑇 such that (∀𝑆 𝑝)𝑦 ≤ 𝑝(𝑥, 𝑦)
for all 𝑥 ∈ 𝑆 , 𝑦 ∈ 𝑇 . The adjunction will carry over to the realizability
logic, but completeness will not.

4.2 Realizability predicates

In a category a predicate on an object 𝑆 is represented by a mono with


codomain 𝑆 . These form a preorder, where 𝑢 : 𝑈 ↣ 𝑆 is below 𝑡 : 𝑇 ↣ 𝑆 ,
written 𝑢 ≤ 𝑡 or somewhat less precisely 𝑈 ≤ 𝑇 , when 𝑢 factors through
𝑡 , i.e., there exists a morphism 𝑓 : 𝑈 → 𝑇 such that

𝑈 ──𝑓──→ 𝑇
   𝑢 ↘   ↙ 𝑡
       𝑆

commutes. Such an 𝑓 is unique if it exists and is a mono.1 If 𝑢 ≤ 𝑡 and
𝑡 ≤ 𝑢 we say that 𝑢 and 𝑡 are isomorphic and write 𝑢 ≡ 𝑡 . The induced
partial order Sub(𝑆) = Mono(𝑆)/≡ of subobjects can be used if one cares
about antisymmetry, which we do not.
1: Such basic category-theoretic observations are excellent exercises. You should prove them yourself.
In the case of assemblies Mono(𝑆) forms a Heyting prealgebra, which is
enough to interpret intuitionistic propositional calculus, and there is
enough additional structure to interpret the quantifiers too.
However, rather than working with the Heyting prealgebra of monos,
we shall replace it with an equivalent one that expresses the predicates
as maps into Heyting prealgebras.

Definition 4.2.1 A realizability predicate on an assembly 𝑆 is given


by a type ∥𝑝∥ and a map 𝑝 : |𝑆| → P(𝔸 ∥𝑝 ∥ ).

It is customary to write r ⊩ 𝑝𝑥 instead of r ∈ 𝑝𝑥 and to read this as “r


realizes 𝑝𝑥 ”. Realizability predicates are more informative than Boolean
predicates. The latter only express truth and falsehood, whereas the former
provide computational evidence for the validity of statements.
The set Pred(𝑆) = P(𝔸 ∥𝑝 ∥ ) |𝑆| of all realizability predicates on 𝑆 is a
preorder for the entailment relation ⊢, defined as follows. Given 𝑝, 𝑞 ∈
Pred(𝑆), we define 𝑝 ⊢ 𝑞 to hold when there exists i ∈ 𝔸′∥𝑆∥→∥𝑝 ∥→∥𝑞 ∥
such that whenever x ⊩𝑆 𝑥 and r ⊩ 𝑝𝑥 then i x r↓ and i x r ⊩ 𝑞𝑥 . Thus,
i converts computational evidence of 𝑝𝑥 to computational evidence
of 𝑞𝑥 . Note that i receives as input both a realizer for 𝑥 and the evidence
of 𝑝𝑥 .
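With finite sets standing in for the types of realizers, the definitions above can be sketched as follows; the assembly, the two predicates, and the witness are all invented for the illustration, and real realizers would of course be elements of a (t)pca:

```python
# A toy "assembly": naturals 0..5, each realized by itself.
S = range(6)
def realizes_S(r, x):
    return r == x

# Two realizability predicates as maps |S| -> set of realizers.
# p x is realized by any pair (a, b) with a + b = x;
# q x is realized by x itself.
def p(x):
    return {(a, x - a) for a in range(x + 1)}

def q(x):
    return {x}

# A witness i for the entailment p ⊢ q: from a realizer of x and
# evidence for p x, compute evidence for q x (add the parts back up).
def i(rx, rp):
    a, b = rp
    return a + b

# Check the entailment condition pointwise over the finite carrier.
def entails(p, q, i):
    return all(i(x, rp) in q(x) for x in S for rp in p(x))

assert entails(p, q, i)
```

The point illustrated is that `i` is uniform: one and the same program converts evidence, whichever element `x` and realizer `rp` it is handed.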

Exercise 4.2.2 Verify that ⊢ is reflexive and transitive.

Theorem 4.2.3 The preorders Mono(𝑆) and Pred(𝑆) are equivalent.

Proof. Given 𝑝 ∈ Pred(𝑆), define the assembly 𝑆 𝑝 by

|𝑆 𝑝 | = {𝑥 ∈ |𝑆| | ∃r ∈ 𝔸 ∥𝑝 ∥ . r ∈ 𝑝𝑥},
∥𝑆 𝑝 ∥ = ∥𝑆∥ × ∥𝑝 ∥,
𝑞 ⊩𝑆𝑝 𝑥 ⇐⇒ fst 𝑞 ⊩𝑆 𝑥 ∧ snd 𝑞 ⊩ 𝑝𝑥.

The subset inclusion |𝑆 𝑝 | ⊆ |𝑆| is realized by fst. The corresponding


assembly map 𝑖 𝑝 : 𝑆 𝑝 → 𝑆 is called the extension of 𝑝 .
It is easy to check that 𝑝 ⊢ 𝑞 is equivalent to 𝑖 𝑝 ≤ 𝑖 𝑞 , therefore the
assignment 𝑝 ↦→ 𝑆 𝑝 constitutes a monotone embedding Pred(𝑆) →
Mono(𝑆). We still have to show that it is essentially surjective, i.e., that
every mono 𝑢 : 𝑈 ↣ 𝑆 , realized by u, is equivalent to the extension of a
predicate. Define the predicate 𝑝 𝑢 on 𝑆 by ∥𝑝 𝑢 ∥ = ∥𝑈 ∥ and

r ⊩ 𝑝 𝑢 𝑥 ⇐⇒ ∃𝑦 ∈ |𝑈 |. 𝑢 𝑦 = 𝑥 ∧ r ⊩𝑈 𝑦.

We claim that 𝑖 𝑝𝑢 and 𝑢 are isomorphic monos. The injection 𝑢 : |𝑈 | → |𝑆|


restricts to a bijection 𝑢 : |𝑈 | → |𝑆 𝑝𝑢 | , which is realized as a morphism
𝑈 → 𝑆 𝑝𝑢 by ⟨𝑦 ∥𝑈 ∥ ⟩ pair (u 𝑦) 𝑦 . Its inverse 𝑢 −1 : |𝑆 𝑝𝑢 | → |𝑈 | is realized
by snd.

4.3 The Heyting prealgebra of realizability


predicates

Recall that a Heyting prealgebra (𝐻, ⊢) is a preorder (reflexive and


transitive) with elements ⊥, ⊤ and binary operations ∧, ∨, and ⇒
governed by the following rules of inference:2

                    𝑟 ⊢ 𝑝   𝑟 ⊢ 𝑞     𝑝 ⊢ 𝑟   𝑞 ⊢ 𝑟     𝑟 ∧ 𝑝 ⊢ 𝑞
  ⊥ ⊢ 𝑝     𝑝 ⊢ ⊤   =============   =============   ===========
                      𝑟 ⊢ 𝑝 ∧ 𝑞       𝑝 ∨ 𝑞 ⊢ 𝑟      𝑟 ⊢ 𝑝 ⇒ 𝑞

2: These “fractions” are inference rules stating that the bottom statement follows from the conjunction of the top ones. The double line indicates a two-way rule which additionally states that the bottom statement implies the conjunction of the top ones.
We say that elements 𝑝, 𝑞 ∈ 𝐻 are equivalent, written 𝑝 ⊣⊢ 𝑞 , if 𝑝 ⊢ 𝑞 and
𝑞 ⊢ 𝑝 . A Heyting algebra is a Heyting prealgebra in which equivalence is
equality, which just means that the preorder is also antisymmetric.
In a Heyting prealgebra there may be many smallest elements, but they
are all equivalent to ⊥. Similarly, the binary infima and suprema may
exist in many copies, all of which are equivalent.

Proposition 4.3.1 The preorder Pred(𝑆) is a Heyting prealgebra.

Proof. Define the predicates ⊥, ⊤ : 𝑆 → P(𝔸unit ) on an assembly 𝑆 by

⊥𝑥 = ∅ and ⊤𝑥 = 𝔸unit .

That is, ⊥ is realized by nothing and ⊤ by everything. It is easy to check


that ⊥ ⊢ 𝑝 ⊢ ⊤ for all 𝑝 ∈ Pred(𝑆).
For predicates 𝑝 and 𝑞 on 𝑆 , let 𝑝 ∧ 𝑞 be the predicate with ∥𝑝 ∧ 𝑞 ∥ =
∥𝑝 ∥ × ∥𝑞 ∥ and

r ⊩ (𝑝 ∧ 𝑞) 𝑥 ⇐⇒ fst r ⊩ 𝑝𝑥 ∧ snd r ⊩ 𝑞𝑥.

It is customary to write 𝑝 𝑥 ∧ 𝑞 𝑥 instead of (𝑝 ∧ 𝑞) 𝑥 , which we shall do


henceforth, including for other connectives. Let us verify that 𝑝 ∧ 𝑞 is
the infimum of 𝑝 and 𝑞 . If 𝑟 ⊢ 𝑝 and 𝑟 ⊢ 𝑞 are witnessed by a and b, re-
spectively, then 𝑟 ⊢ 𝑝 ∧ 𝑞 is witnessed by ⟨𝑥 ∥𝑆∥ ⟩ ⟨𝑢 ∥𝑟 ∥ ⟩ pair (a 𝑥 𝑢) (b 𝑥 𝑢).
Conversely, if c witnesses 𝑟 ⊢ 𝑝∧𝑞 then ⟨𝑥 ∥𝑆∥ ⟩ ⟨𝑢 ∥𝑟 ∥ ⟩ fst (c 𝑥 𝑢) witnesses
𝑟 ⊢ 𝑝 and ⟨𝑥 ∥𝑆∥ ⟩ ⟨𝑢 ∥𝑟 ∥ ⟩ snd (c 𝑥 𝑢) witnesses 𝑟 ⊢ 𝑞 .
Next, define 𝑝 ∨ 𝑞 to be the predicate with ∥𝑝 ∨ 𝑞 ∥ = ∥𝑝∥ + ∥𝑞 ∥ and

r ⊩ 𝑝𝑥 ∨ 𝑞𝑥 ⇐⇒ (∃u. r = left u ∧ u ⊩ 𝑝𝑥) ∨


(∃v. r = right v ∧ v ⊩ 𝑞𝑥).

If a and b witness 𝑝 ⊢ 𝑟 and 𝑞 ⊢ 𝑟 , respectively, then 𝑝 ∨ 𝑞 ⊢ 𝑟 is witnessed


by
⟨𝑥 ∥𝑆∥ ⟩ ⟨𝑤 ∥𝑝 ∥+∥𝑞∥ ⟩ case 𝑤 (a 𝑥) (b 𝑥).
Conversely, if c witnesses 𝑝 ∨ 𝑞 ⊢ 𝑟 then 𝑝 ⊢ 𝑟 and 𝑞 ⊢ 𝑟 are witnessed by
⟨𝑥 ∥𝑆∥ ⟩ ⟨𝑢 ∥𝑝 ∥ ⟩ c 𝑥 (left 𝑢) and ⟨𝑥 ∥𝑆∥ ⟩ ⟨𝑣 ∥𝑞 ∥ ⟩ c 𝑥 (right 𝑣), respectively.
Finally, define 𝑝 ⇒ 𝑞 to be the predicate with ∥𝑝 ⇒ 𝑞 ∥ = ∥𝑝 ∥ → ∥𝑞 ∥
and
r ⊩ (𝑝𝑥 ⇒ 𝑞𝑥) ⇐⇒ ∀u ∈ 𝔸 ∥𝑝 ∥ . u ⊩ 𝑝𝑥 ⇒ r u ⊩ 𝑞𝑥.

That is, r maps realizers for 𝑝𝑥 to realizers for 𝑞𝑥 . Note that r ∈ 𝔸 ∥𝑝 ∥→∥𝑞 ∥
and not r ∈ 𝔸′∥𝑝∥→∥𝑞∥ , so we have an example of entailment ⊢ and
implication ⇒ not “being the same thing” (something students of logic
often wonder about).
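The realizer constructions in this proof follow the Brouwer-Heyting-Kolmogorov pattern: evidence for a conjunction is a pair, for a disjunction a tagged value, for an implication a function. This can be sketched informally in code (the sample predicates and the witness `c` are made up, and Python closures stand in for typed realizers):

```python
# BHK-style evidence: pairing for ∧, injections and case analysis for ∨.
def conj(rp, rq):
    return (rp, rq)

def left(rp):
    return ('left', rp)

def right(rq):
    return ('right', rq)

def case(r, on_left, on_right):
    tag, payload = r
    return on_left(payload) if tag == 'left' else on_right(payload)

# From a witness c of r ⊢ p ∧ q extract witnesses of r ⊢ p and r ⊢ q,
# exactly as in the proof: post-compose with the two projections.
def project_conj(c):
    return (lambda x, u: c(x, u)[0]), (lambda x, u: c(x, u)[1])

# A tiny check: c witnesses ⊤ ⊢ even(n) ∧ nonneg(n) on even naturals,
# where evidence for "even n" is k with 2k = n, and for "nonneg n" is n.
c = lambda n, _u: conj(n // 2, n)
wp, wq = project_conj(c)
assert 2 * wp(10, None) == 10 and wq(10, None) == 10
assert case(left(3), lambda a: a + 1, lambda b: b - 1) == 4
```

The same projections appear, written as realizers, in the proof above: `fst` and `snd` post-composed with the witness `c`.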

Exercise 4.3.2 Finish the proof by checking that the definition of ⇒


validates the inference rules for implication.

In intuitionistic logic negation ¬𝑝 is an abbreviation for 𝑝 ⇒ ⊥. The


following holds:

𝑟 ⊩ ¬𝑝𝑥 ⇐⇒ ¬∃s ∈ 𝔸 ∥𝑝∥ . s ⊩ 𝑝𝑥,


𝑟 ⊩ ¬¬𝑝𝑥 ⇐⇒ ∃s ∈ 𝔸 ∥𝑝∥ . s ⊩ 𝑝𝑥.

4.4 Quantifiers

Categorical logic teaches us that the existential and universal quantifiers


are defined as the left and right adjoints to weakening. Let us explain
what this means.
Weakening along the projection 𝜋1 : 𝑆 × 𝑇 → 𝑆 is an operation which
maps a mono 𝑉 ↣ 𝑆 to the mono 𝑉 × 𝑇 ↣ 𝑆 × 𝑇 . This is a monotone
map from Mono(𝑆) to Mono(𝑆 × 𝑇). Existential quantification is its left
adjoint, i.e., a monotone map ∃𝑇 : Mono(𝑆 × 𝑇) → Mono(𝑆) such that, for
all monos 𝑈 ↣ 𝑆 × 𝑇 and 𝑉 ↣ 𝑆 ,

𝑈 ≤ 𝑉 × 𝑇 ⇐⇒ ∃𝑇 𝑈 ≤ 𝑉.

Similarly, universal quantification is the right adjoint:

𝑉 × 𝑇 ≤ 𝑈 ⇐⇒ 𝑉 ≤ ∀𝑇 𝑈.

From the above adjunctions the usual laws of inference for the existential
and universal quantifiers follow. We verify that such adjoints exist for the
Heyting prealgebras of realizability predicates.
Weakening along 𝜋1 : 𝑆 × 𝑇 → 𝑆 takes 𝑝 ∈ Pred(𝑆) to the predicate
𝑝 × 𝑇 ∈ Pred(𝑆 × 𝑇) defined by

∥𝑝 × 𝑇 ∥ = ∥𝑝∥ and (𝑝 × 𝑇)(𝑥, 𝑦) = 𝑝𝑥,

where 𝑥 ∈ 𝑆 and 𝑦 ∈ 𝑇 . We claim that the left adjoint ∃𝑇 maps 𝑞 ∈


Pred(𝑆 × 𝑇) to ∃𝑇 𝑞 ∈ Pred(𝑆) with ∥∃𝑇 𝑞 ∥ = ∥𝑇 ∥ × ∥𝑞 ∥ and

r ⊩ (∃𝑇 𝑞)(𝑥) ⇐⇒ ∃𝑦 ∈ |𝑇 |. fst r ⊩𝑇 𝑦 ∧ snd r ⊩ 𝑞(𝑥, 𝑦).

Let us verify that we have a left adjoint to weakening. Suppose 𝑝 ∈


Pred(𝑆 × 𝑇) and 𝑞 ∈ Pred(𝑆). If 𝑝 ≤ 𝑞 × 𝑇 is witnessed by f then
∃𝑇 𝑝 ≤ 𝑞 is witnessed by ⟨𝑥 ∥𝑆∥ ⟩ ⟨𝑒 ∥𝑇 ∥×∥𝑝 ∥ ⟩ f (pair 𝑥 (fst 𝑒)) (snd 𝑒). Con-
versely, if ∃𝑇 𝑝 ≤ 𝑞 is witnessed by g then 𝑝 ≤ 𝑞 × 𝑇 is witnessed by
⟨𝑟 ∥𝑆∥×∥𝑇 ∥ ⟩ ⟨𝑠 ∥𝑝∥ ⟩ g (fst 𝑟) (pair (snd 𝑟) 𝑠).

Similarly, the universal quantifier ∀𝑇 maps 𝑞 ∈ Pred(𝑆 × 𝑇) to ∀𝑇 𝑞 ∈


Pred(𝑆) with ∥∀𝑇 𝑞 ∥ = ∥𝑇 ∥ → ∥𝑞 ∥ and

r ⊩ (∀𝑇 𝑞)(𝑥) ⇐⇒ ∀𝑦 ∈ |𝑇 |. ∀y ∈ 𝔸 ∥𝑇 ∥ . y ⊩𝑇 𝑦 ⇒ r y ⊩ 𝑞(𝑥, 𝑦).

To verify that this is the right adjoint, suppose 𝑝 ∈ Pred(𝑆 × 𝑇) and


𝑞 ∈ Pred(𝑆). If 𝑞 × 𝑇 ≤ 𝑝 is witnessed by f then 𝑞 ≤ ∀𝑇 𝑝 is witnessed by
⟨𝑥 ∥𝑆∥ ⟩ ⟨𝑟 ∥𝑞 ∥ ⟩ ⟨𝑦 ∥𝑇 ∥ ⟩ f (pair 𝑥 𝑦) 𝑟 . Conversely, if 𝑞 ≤ ∀𝑇 𝑝 is witnessed by
g then 𝑞 × 𝑇 ≤ 𝑝 is witnessed by ⟨𝑠 ∥𝑆∥×∥𝑇 ∥ ⟩ ⟨𝑟 ∥𝑞 ∥ ⟩ g (fst 𝑠) 𝑟 (snd 𝑠).
It is customary to write ∃𝑦 ∈ 𝑇. 𝑞(𝑥, 𝑦) and ∀𝑦 ∈ 𝑇. 𝑞(𝑥, 𝑦) instead of
∃𝑇 𝑞 and ∀𝑇 𝑞 .
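Concretely, a realizer for an existential statement is a pair (realizer of a witness, evidence for it), while a realizer for a universal statement is a function from realizers of elements to evidence. A sketch over a toy assembly of naturals realized by themselves (all predicates invented for the example):

```python
# T: a toy assembly, naturals 0..9 realized by themselves.
T = range(10)

# q(x, y): "x + y is even"; evidence is half the sum, when it exists.
def q_evidence(x, y):
    s = x + y
    return {s // 2} if s % 2 == 0 else set()

# A realizer for ∃y ∈ T. q(x, y): a pair (realizer of y, evidence of q(x, y)),
# found here by brute-force search over the finite carrier.
def realize_exists(x):
    for y in T:
        for e in q_evidence(x, y):
            return (y, e)   # fst realizes y, snd realizes q(x, y)

# A realizer for ∀y ∈ T. p(x, y), with p(x, y) = "x + y ≥ x":
# a function from realizers of y to evidence (here the sum itself).
def realize_forall(x):
    return lambda y: x + y

y, e = realize_exists(3)
assert (3 + y) % 2 == 0 and 2 * e == 3 + y
r = realize_forall(5)
assert all(r(y) >= 5 for y in T)
```

This mirrors the underlying types recorded above: ∥∃∥ is a product type, ∥∀∥ a function type.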

Exercise 4.4.1 Show that the usual inference rules for quantifiers follow
from their being adjoint to weakening.

4.5 Substitution

We have so far ignored the most basic logical operation of all, which is
substitution. Terms may be substituted into terms and into formulas.
Substitution of terms into terms is interpreted as composition. Think of a
term 𝑠(𝑥) of type 𝑆 with a free variable 𝑥 of type 𝑇 as a map 𝑠 : 𝑇 → 𝑆 .
Given 𝑡 : 𝑈 → 𝑇 , construed as a term 𝑡(𝑦) of type 𝑇 with a free variable 𝑦
of type 𝑈 , the substitution of 𝑡(𝑦) for 𝑥 in 𝑠(𝑥) yields 𝑠(𝑡(𝑦)), which is
just (𝑠 ◦ 𝑡)(𝑦), so 𝑠 ◦ 𝑡 : 𝑈 → 𝑆 .
Substitution of terms into predicates corresponds to composition too,
by an analogous argument. Think about how it works in Set. Given a
predicate 𝑝 : 𝑇 → 2 on a set 𝑇 and a term 𝑡 : 𝑆 → 𝑇 , the composition
𝑝 ◦ 𝑡 : 𝑆 → 2 corresponds to substituting 𝑡 into 𝑝 . Indeed, if we replace 𝑦
with 𝑡(𝑥) in the formula 𝑝(𝑦) we obtain 𝑝(𝑡(𝑥)), which is just (𝑝 ◦ 𝑡)(𝑥).

Exercise 4.5.1 You may have heard the slogan “substitution is pullback”.
Explain how the slogan arises when predicates are viewed as subsets,
rather than as maps into 2.

Let us introduce a notation for substitution. Given assembly maps


𝑡 : 𝑈 → 𝑇 and 𝑠 : 𝑇 → 𝑆, and 𝑝 ∈ Pred(𝑇), define the substitution of 𝑡
into 𝑠 and 𝑝 to be precomposition with 𝑡 :

𝑡∗ 𝑠 = 𝑠 ◦ 𝑡 : 𝑈 → 𝑆 and 𝑡 ∗ 𝑝 = 𝑝 ◦ 𝑡 ∈ Pred(𝑈).
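In the finite toy setting, substitution really is just precomposition; a sketch, with the term and predicate invented for the example:

```python
# A "term" t : U -> T and a realizability predicate p on T,
# modeled as a map into sets of realizers.
T = range(10)
U = ['a', 'b', 'c']
t = {'a': 2, 'b': 5, 'c': 8}.get

def p(y):
    """p(y): 'y is even', with evidence y // 2 when it exists."""
    return {y // 2} if y % 2 == 0 else set()

def substitute(t, p):
    """t* p = p ∘ t : precompose the predicate with the term."""
    return lambda u: p(t(u))

tp = substitute(t, p)
assert tp('a') == {1} and tp('b') == set() and tp('c') == {4}
```

Functoriality (`id* p = p` and `(u ∘ t)* p = t*(u* p)`) is immediate from associativity of function composition.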

We still have some work to do, namely check that substitution is functorial
and that it commutes with the logical connectives and the quantifiers.
The former guarantees that the identity substitution and compositions
of substitutions act in the expected way, and the latter that substituting
preserves the logical structure of a formula. Functoriality means that

id∗𝑆 𝑠 = 𝑠 and (𝑢 ◦ 𝑡)∗ 𝑠 = 𝑡 ∗ (𝑢 ∗ 𝑠),

which is of course the case.3
3: Substitution is contravariant because it reverses the order of composition.

Proposition 4.5.2 Realizability predicates and substitution constitute a


functor
Asm(𝔸 , 𝔸′ )op → Heyt

from (the opposite category of) assemblies to the category of Heyting prealge-
bras. Moreover, substitution preserves the quantifiers.

Proof. We already checked that substitution is functorial, but we still


need to verify that it yields a homomorphism of Heyting prealgebras,
i.e.,

𝑡 ∗ ⊤ = ⊤,
𝑡 ∗ ⊥ = ⊥,
𝑡 ∗ (𝑝 ∧ 𝑞) = 𝑡 ∗ 𝑝 ∧ 𝑡 ∗ 𝑞,
𝑡 ∗ (𝑝 ∨ 𝑞) = 𝑡 ∗ 𝑝 ∨ 𝑡 ∗ 𝑞,
𝑡 ∗ (𝑝 ⇒ 𝑞) = 𝑡 ∗ 𝑝 ⇒ 𝑡 ∗ 𝑞.

These hold because they are defined pointwise. For instance, given
𝑝, 𝑞 ∈ Pred(𝑇), define ∧′ : P(𝔸 ∥𝑝 ∥ ) × P(𝔸 ∥𝑞∥ ) → P(𝔸 ∥𝑝 ∥×∥𝑞∥ ) by

𝐴 ∧′ 𝐵 = {r ∈ 𝔸 ∥𝑝∥×∥𝑞 ∥ | fst r ∈ 𝐴 ∧ snd r ∈ 𝐵},

and observe that 𝑝 ∧ 𝑞 = ∧′ ◦ ⟨𝑝, 𝑞⟩ therefore, for any 𝑡 : 𝑈 → 𝑇 ,

𝑡 ∗ (𝑝 ∧ 𝑞) = ∧′ ◦ ⟨𝑝, 𝑞⟩ ◦ 𝑡 = ∧′ ◦ ⟨𝑝 ◦ 𝑡, 𝑞 ◦ 𝑡⟩ = 𝑡 ∗ 𝑝 ∧ 𝑡 ∗ 𝑞.

The other connectives are dealt with analogously.


Preservation of the quantifiers amounts to checking the Beck-Chevalley
conditions: for 𝑡 : 𝑈 → 𝑆 and 𝑝 ∈ Pred(𝑆 × 𝑇),

𝑡 ∗ (∃𝑇 𝑝) = ∃𝑇 ((𝑡 × id𝑇 )∗ 𝑝),
𝑡 ∗ (∀𝑇 𝑝) = ∀𝑇 ((𝑡 × id𝑇 )∗ 𝑝).

Exercise 4.5.3 Verify the Beck-Chevalley conditions for ∀𝑇 and ∃𝑇 .

4.6 Equality

Equality qua binary predicate on an object 𝑆 is represented by the diagonal


morphism Δ : 𝑆 → 𝑆 × 𝑆 . In assemblies this is the map Δ𝑥 = (𝑥, 𝑥),
which is realized by ⟨𝑥 ∥𝑆∥ ⟩ pair 𝑥 𝑥 . We claim that the corresponding
realizability predicate eq ∈ Pred(𝑆 × 𝑆) is given by ∥eq∥ = unit and

★ ⊩ eq(𝑥, 𝑦) ⇐⇒ 𝑥 = 𝑦.

To verify the claim, we need to factor Δ and 𝑖 eq : 𝑆eq → 𝑆 from Theo-


rem 4.2.3 through each other. We have

|𝑆eq | = {(𝑥, 𝑦) ∈ |𝑆| × |𝑆| | 𝑥 = 𝑦},


𝑖 eq (𝑥, 𝑦) = (𝑥, 𝑦),
∥𝑆eq ∥ = ∥𝑆∥ × ∥𝑆∥ × unit,
𝑞 ⊩𝑆eq (𝑥, 𝑦) ⇔ fst 𝑞 ⊩𝑆×𝑆 (𝑥, 𝑦) ∧ snd 𝑞 = ★.

Thus Δ factors through 𝑖 eq via 𝑥 ↦→ (𝑥, 𝑥) and 𝑖 eq through Δ via (𝑥, 𝑦) ↦→


𝑥 , each of which is easily seen to be realized.

4.7 Summary of realizability logic

We summarize the realizability interpretation of intuitionistic logic for


easy lookup.
A realizability predicate 𝑝 on an assembly 𝑆 is given by a type ∥𝑝∥ and a
map 𝑝 : |𝑆| → P(𝔸 ∥𝑝 ∥ ). The collection of all realizability predicates on 𝑆
is denoted Pred(𝑆). Given 𝑝, 𝑞 ∈ Pred(𝑆), 𝑝 entails 𝑞 , written 𝑝 ⊢ 𝑞 , when
there is i ∈ 𝔸′∥𝑆∥→∥𝑝 ∥→∥𝑞∥ such that

∀𝑥 ∈ |𝑆|. ∀x ∈ 𝔸 ∥𝑆∥ . ∀r ∈ 𝔸 ∥𝑝 ∥ . x ⊩𝑆 𝑥 ∧ r ⊩ 𝑝𝑥 ⇒ i x r ⊩ 𝑞𝑥.

The entailment relation is a preorder.


The underlying types of connectives are computed as follows, where
𝑝, 𝑞 ∈ Pred(𝑆) and 𝑟 ∈ Pred(𝑇 × 𝑆):

∥⊥∥ = unit ,
∥⊤∥ = unit ,
∥𝑠 = 𝑡 ∥ = unit ,
∥𝑝 ∧ 𝑞 ∥ = ∥𝑝 ∥ × ∥𝑞 ∥,
∥𝑝 ∨ 𝑞 ∥ = ∥𝑝 ∥ + ∥𝑞 ∥,
∥𝑝 ⇒ 𝑞 ∥ = ∥𝑝 ∥ → ∥𝑞 ∥,
∥∀𝑥 ∈ 𝑇. 𝑟(𝑥, −)∥ = ∥𝑇 ∥ → ∥𝑟 ∥,
∥∃𝑥 ∈ 𝑇. 𝑟(𝑥, −)∥ = ∥𝑇 ∥ × ∥𝑟 ∥.

We write r ⊩ 𝑝𝑥 instead of r ∈ 𝑝𝑥 . The Heyting prealgebra structure


on Pred(𝑆) is as follows, where 𝑝, 𝑞 ∈ Pred(𝑆), 𝑟 ∈ Pred(𝑇 × 𝑆), and

𝑠, 𝑡 ∈ |𝑆| :

r ⊩ ⊥ ⇔ ⊥
r ⊩ ⊤ ⇔ ⊤
r ⊩ 𝑠 = 𝑡 ⇔ 𝑠 = 𝑡 ∧ r = ★
r ⊩ 𝑝𝑥 ∧ 𝑞𝑥 ⇔ fst r ⊩ 𝑝𝑥 ∧ snd r ⊩ 𝑞𝑥
r ⊩ 𝑝𝑥 ∨ 𝑞𝑥 ⇔ (∃u. r = left u ∧ u ⊩ 𝑝𝑥) ∨ (∃v. r = right v ∧ v ⊩ 𝑞𝑥)
r ⊩ 𝑝𝑥 ⇒ 𝑞𝑥 ⇔ ∀s ∈ 𝔸 ∥𝑝∥ . s ⊩ 𝑝𝑥 ⇒ r s ⊩ 𝑞𝑥
r ⊩ ∃𝑦 ∈ 𝑇. 𝑝(𝑥, 𝑦) ⇔ ∃𝑦 ∈ |𝑇 |. fst r ⊩𝑇 𝑦 ∧ snd r ⊩ 𝑝(𝑥, 𝑦)
r ⊩ ∀𝑦 ∈ 𝑇. 𝑝(𝑥, 𝑦) ⇔ ∀𝑦 ∈ |𝑇 |. ∀y. y ⊩𝑇 𝑦 ⇒ r y ⊩ 𝑝(𝑥, 𝑦).

We have the following soundness theorem.

Theorem 4.7.1 The realizability interpretation of intuitionistic logic is sound.

Proof. We already proved the theorem when we checked that the pred-
icates form Heyting prealgebras and that the quantifiers validate the
desired inference rules.

Let us spell out what the theorem says: if a formula 𝜙 is intuitionistically


provable then there exists r ∈ 𝔸′∥𝜙∥ such that r ⊩ 𝜙 . Moreover, the
realizer r can be constructed from the proof of 𝜙 , one just has to follow the
inference steps and apply the corresponding realizability constructions.
The above clauses are reminiscent of the Curry-Howard correspondence
between logic and type theory. The similarity is not accidental, as both
realizability and Martin-Löf type theory aim to formalize the Brouwer-
Heyting-Kolmogorov explanation of intuitionistic logic. Let us not forget,
however, that realizability was the first such rigorous explanation.

4.8 Classical and decidable predicates

Some classes of predicates are of special interest.

4.8.1 Classical predicates

A formula 𝜙 is classical4 or ¬¬-stable when ¬¬𝜙 ⇒ 𝜙 . (We do not
require 𝜙 ⇒ ¬¬𝜙 because it holds anyway.)
4: The terminology is non-standard but is vindicated below by the fact that ∇2 classifies the classical predicates. In any case, almost any terminology seems better than “¬¬-stable”.
exists q ∈ 𝔸′∥𝑆∥→∥𝑝 ∥ such that, if x ⊩𝑆 𝑥 and 𝑝𝑥 ≠ ∅ then q x ⊩ 𝑝𝑥 .

Exercise 4.8.2 Prove Proposition 4.8.1. More precisely, you need to


show that the stated condition is equivalent to ⊤ ⊢ ¬¬𝑝 ⇒ 𝑝 .

Thus a predicate is classical when its realizers can be computed at will,


when they exist. An important fact about assemblies is the following.

Proposition 4.8.3 Equality on an assembly is classical.

Proof. Apply Proposition 4.8.1 with q = K ★.

It is useful to have a simple syntactic criterion for recognizing classical


formulas. Say that a formula is almost negative5 when it has one of the
following forms:

▶ ⊥, ⊤, 𝑠 = 𝑡 ,
▶ 𝜙 ∧ 𝜓 where 𝜙 and 𝜓 are almost negative,
▶ ∀𝑥 ∈ 𝑆. 𝜙 where 𝜙 is almost negative,
▶ 𝜙 ⇒ 𝜓 where 𝜓 is almost negative.

5: A negative formula is like an almost negative one, except that the antecedent in an implication is required to be negative, too.

Proposition 4.8.4 An almost negative formula is classical.

Exercise 4.8.5 Prove Proposition 4.8.4 without resorting to realizability.


You should just give suitable intuitionistic proofs. For extra credit, do
it in a proof assistant.

The classical predicates can be used to interpret classical logic.

Exercise 4.8.6 Consider the following preorders:


▶ the sub-preorder RegMono(𝑆) ⊆ Mono(𝑆) on the regular monos,
▶ the sub-preorder Pred¬¬ (𝑆) ⊆ Pred(𝑆) on the classical predicates,
▶ the powerset P(|𝑆|) ordered by ≤ .

Show that these preorders are equivalent, and hence complete Boolean
algebras. For extra credit, explain in what sense the equivalence is
functorial.

4.8.2 Decidable predicates

Recall that a formula 𝜙 is decidable6 when 𝜙 ∨ ¬𝜙 .
6: Every decidable statement is classical, but the converse cannot be shown intuitionistically. If you are confused, you might be thinking of the equivalence between excluded middle (“all predicates are decidable”) and double-negation elimination (“all predicates are ¬¬-stable”).

Proposition 4.8.7 A predicate 𝑝 ∈ Pred(𝑆) is decidable if, and only if, there
exists d ∈ 𝔸′∥𝑆∥→bool such that, for all 𝑥 ∈ |𝑆| and x ∈ ∥𝑆∥ , if x ⊩𝑆 𝑥 then
d x↓ and

d x = true if 𝑝𝑥 ≠ ∅,   d x = false if 𝑝𝑥 = ∅.

Proof. If r ⊩ 𝑝 ∨ ¬𝑝 then we may take

d = ⟨x ∥𝑆∥ ⟩ case (r x) true false.

Conversely, suppose d with the stated property exists. Because a decidable
predicate is stable, by Proposition 4.8.1 there exists q ∈ 𝔸′∥𝑆∥→∥𝑝∥ such
that, if x ⊩𝑆 𝑥 and 𝑝𝑥 ≠ ∅ then q x ⊩ 𝑝𝑥 . Now the realizer

⟨_unit ⟩ ⟨x ∥𝑆∥ ⟩ if (d x) (left(q x)) (right(⟨_ ∥𝑆∥ ⟩ K))



witnesses ⊤ ⊢ 𝑝 ∨ ¬𝑝 .

The realizer d from Proposition 4.8.7 can be thought of as a decision
procedure for 𝑝 , although what that means depends on the underlying
model of computation. If 𝔸 retracts to 𝕂1 , see Subsection 2.8.2, then d
does indeed correspond to an algorithm in the usual sense of the word.
But in a topological model of computation, such as the graph model or
Kleene’s second algebra, d amounts to having a topological separation of
the underlying space into two disjoint open sets.
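For instance, evenness on the naturals (realized by themselves) is decidable in the sense of Proposition 4.8.7; a sketch, with the predicate and its evidence invented for the example:

```python
# p(n): "n is even", with evidence n // 2 when n is even.
def p(n):
    return {n // 2} if n % 2 == 0 else set()

# The decision procedure d of Proposition 4.8.7: from a realizer of n,
# decide whether p n is inhabited.
def d(n):
    return n % 2 == 0

# Branching on d, as in the proof, produces a realizer of p ∨ ¬p:
# a left-tagged piece of evidence, or a right-tagged (vacuous) refutation.
def decide(n):
    if d(n):
        return ('left', n // 2)
    return ('right', lambda s: None)   # no realizer s of p n can exist

tag, payload = decide(6)
assert tag == 'left' and payload in p(6)
assert decide(7)[0] == 'right'
```

In a pca over Kleene's first algebra `d` would be an honest algorithm; the Python function is only a stand-in for it.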

4.8.3 Predicates classified by two-element assemblies

Consider a two-element assembly 𝑇 with |𝑇 | = {0 , 1}, like we did in


Subsection 3.2.5. Say that a predicate 𝑝 ∈ Pred(𝑆) is classified by an
assembly map 𝑐 : 𝑆 → 𝑇 , called the characteristic map of 𝑝 , when

⊤ ⊢ ∀𝑥 ∈ 𝑆. 𝑝𝑥 ⇔ 𝑐𝑥 = 1

is realized. Because the above is a negative formula, it is ¬¬-stable and so


its realizers are uninformative. In other words, the statement is realized
if, and only if,
∀𝑥 ∈ |𝑆|. 𝑝𝑥 ≠ ∅ ⇔ 𝑐𝑥 = 1.

Exercise 4.8.8 Show that a predicate has at most one characteristic


map.

For specific choices of two-element assemblies we obtain known classes


of predicates.

Proposition 4.8.9 ∇2 and 𝟚 classify the classical and decidable predicates,


respectively.

Proof. Let us show that ∇2 classifies the classical predicates on an as-


sembly 𝑆 . Given an assembly map 𝑓 : 𝑆 → ∇2, define 𝑝 𝑓 ∈ Pred(𝑆) by
∥𝑝 𝑓 ∥ = unit and
r ⊩ 𝑝 𝑓 𝑥 ⇐⇒ 𝑓 𝑥 = 1.
Obviously, 𝑝 𝑓 is ¬¬-stable. Conversely, given a classical 𝑝 ∈ Pred(𝑆),
define 𝑓𝑝 : 𝑆 → ∇2 by

𝑓𝑝 𝑥 = 1 if 𝑝𝑥 ≠ ∅,   and   𝑓𝑝 𝑥 = 0 if 𝑝𝑥 = ∅.

The map 𝑓𝑝 is realized because every map into a constant assembly is.
We claim that 𝑓 ↦→ 𝑝 𝑓 and 𝑝 ↦→ 𝑓𝑝 form a bijective correspondence. It is
easy to check that 𝑓 = 𝑓𝑝 𝑓 for all 𝑓 : 𝑆 → ∇2. For the other direction, we
need to show that a classical 𝑝 ∈ Pred(𝑆) satisfies 𝑝 ⊣⊢ 𝑝 𝑓𝑝 . One direction
is easy, and the other one not much harder with the help of Proposition 4.8.1.
It remains to be checked that 𝟚 classifies the decidable predicates. The
hard part of the proof was already done in Proposition 4.8.7, and we
leave the rest as an exercise.

The law of excluded middle states that all predicates are decidable. It is
never realized.

Proposition 4.8.10 There exists a non-decidable predicate.

Proof. Consider the predicate classified by id∇2 . More precisely, it is the


predicate 𝑝 ∈ Pred(∇2) defined by ∥𝑝∥ = unit and 𝑝 0 = ∅, 𝑝 1 = 𝔸unit . It
is not decidable because every assembly map ∇2 → 𝟚 is constant, since
𝟚 is modest.

The law of double negation states that all predicates are classical. Because
it is inter-derivable with excluded middle, it cannot be valid in realizability
logic.

Exercise 4.8.11 Say that a predicate 𝑝 ∈ Pred(𝑆) is semidecidable if it is


classified by the Rosolini dominance Σ01 . Suppose r ∈ 𝔸′∥𝑆∥→nat→bool
is such that whenever x ⊩𝑆 𝑥 and 𝑛 ∈ ℕ then r x 𝑛 ∈ {true , false}.
Define 𝑞 r ∈ Pred(𝑆) by ∥𝑞 r ∥ = unit and

★ ⊩ 𝑞 r 𝑥 ⇐⇒ ∃𝑛 ∈ ℕ . r x 𝑛 = true.

Show that a predicate is semidecidable if, and only if, it is equivalent


to some 𝑞 r .

Exercise 4.8.12 Is there a pca 𝔸 such that in Asm(𝔸) the decidable and
semidecidable predicates coincide?

One might ask whether there is a two-element assembly Ω which classifies


all predicates, as that would imply that assemblies form a topos. Alas,
this is not the case.

Proposition 4.8.13 If a predicate is classified by a two-element assembly then


it is ¬¬-stable.

Proof. If a predicate is classified by 𝑇 then it is also classified by ∇2,


therefore it is ¬¬-stable.

The point is that not all predicates are ¬¬-stable.

Exercise 4.8.14 Give an example of a realizability predicate which is


not ¬¬-stable.
Realizability and type theory 5
In everyday mathematics parametrized constructions are commonplace.
For example, when a mathematical text says “consider a continuous
map 𝑓 : [𝑎, 𝑏] → ℝ”, there is an implicit use of the parametrized set
[𝑎, 𝑏] = {𝑥 ∈ ℝ | 𝑎 ≤ 𝑥 ≤ 𝑏}, where 𝑎 and 𝑏 are the parameters. And
whenever in algebra we say “the cyclic group ℤ𝑛 ”, that is not a single
group, but a family of groups parameterized by 𝑛 ∈ ℕ .
The language of such parameterized constructions is (dependent) type
theory. It is applicable in many settings, including realizability. In this
chapter we shall give an interpretation of type theory in terms of families
of assemblies.

5.1 Families of sets

To set the scene, let us review the set-theoretic model of type theory. A
family of sets is a map 𝐴 : 𝐼 → Set from an index set 𝐼 to the class of
all sets. We say that 𝐴 is indexed by 𝐼 or that it is a family over base 𝐼 . Let
Fam (𝐼) be the class of all families indexed by 𝐼 .

Each Fam (𝐼) is a category whose objects are the families indexed by 𝐼 .
A morphism 𝑓 : 𝐴 → 𝐵, where 𝐴, 𝐵 ∈ Fam (𝐼), is a map of families,
which is a family of maps 𝑓𝑖 : 𝐴 𝑖 → 𝐵 𝑖 , parameterized by
𝑖 ∈ 𝐼 . Such maps are composed index-wise.

Exercise 5.1.1 Recall the definition of the slice category Set/𝐼 : an object
is a map 𝑎 : 𝐴 → 𝐼 with codomain 𝐼 , and a morphism a map 𝑓 : 𝐴 → 𝐵
such that 𝑏 ◦ 𝑓 = 𝑎 :
𝐴 ──𝑓──→ 𝐵
   𝑎 ↘   ↙ 𝑏
       𝐼
A map into 𝐼 is called a display map over the base 𝐼 , and its domain
the total space. (The terminology is inspired by a geometric picture of
a bundle over a space.)
For each 𝑖 ∈ 𝐼 we define the fiber of 𝑎 at 𝑖 to be the inverse image
𝑎 ∗ {𝑖} = {𝑥 ∈ 𝐴 | 𝑎𝑥 = 𝑖}. Thus a display map 𝑎 over 𝐼 yields an
𝐼 -indexed family 𝑖 ↦→ 𝑎 ∗ {𝑖} of fibers. Conversely, a family 𝐴 : 𝐼 → Set
determines the display map, namely the first projection Σ𝐼 𝐴 → 𝐼 .
Verify that the passages between Fam (𝐼) and Set/𝐼 constitute an equiv-
alence of categories. As a first step you should determine how the
equivalence acts on morphisms.

A map 𝑟 : 𝐽 → 𝐼 induces reindexing 𝑟 ∗ : Fam (𝐼) → Fam (𝐽) by precompo-


sition, 𝑟 ∗ 𝐴 = 𝐴 ◦ 𝑟 . Reindexing is contravariantly functorial,

id∗𝐼 𝐴 = 𝐴 and (𝑠 ◦ 𝑟)∗ 𝐴 = 𝑟 ∗ (𝑠 ∗ 𝐴).



Therefore, families and reindexings form a functor Setop → Cat.

5.1.1 Products and sums of families of sets

The two fundamental operations on families of sets are the cartesian


products and sums. Given a family 𝐴 ∈ Fam (𝐼), its product Π𝐼 𝐴 and sum
Σ𝐼 𝐴 are respectively the sets

Π𝐼 𝐴 = {𝑢 : 𝐼 → ⋃𝑖∈𝐼 𝐴 𝑖 | ∀𝑖 ∈ 𝐼. 𝑢𝑖 ∈ 𝐴 𝑖 },
Σ𝐼 𝐴 = {(𝑖, 𝑎) | 𝑖 ∈ 𝐼 ∧ 𝑎 ∈ 𝐴 𝑖 }.

The elements of the product are called choice maps.
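For a family over a finite index set both constructions can be enumerated directly; a sketch, with the index set and fibers invented for the example:

```python
from itertools import product

# A family of finite sets over I, given as a map from indices to sets.
I = ['x', 'y']
A = {'x': {0, 1}, 'y': {0, 1, 2}}.get

def Sigma(I, A):
    """Σ_I A: the disjoint sum of the family, as a set of pairs (i, a)."""
    return {(i, a) for i in I for a in A(i)}

def Pi(I, A):
    """Π_I A: all choice maps, represented as dicts i -> element of A(i)."""
    return [dict(zip(I, choice)) for choice in product(*(list(A(i)) for i in I))]

assert len(Sigma(I, A)) == 2 + 3     # cardinality of a sum is a sum
assert len(Pi(I, A)) == 2 * 3        # cardinality of a product is a product
u = Pi(I, A)[0]
assert all(u[i] in A(i) for i in I)  # a choice map picks from each fiber
```

The cardinality identities explain the names "sum" and "product".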


We shall need their generalized forms, where the product and the sum is
taken with respect to a reindexing, as follows. Consider 𝑟 : 𝐽 → 𝐼 and
𝐴 ∈ Fam (𝐽). For 𝐾 ⊆ 𝐼 write1 𝑟 ∗ 𝐾 = {𝑗 ∈ 𝐽 | 𝑟 𝑗 ∈ 𝐾}, and define the
product Π𝑟 𝐴 ∈ Fam (𝐼) and the sum Σ𝑟 𝐴 ∈ Fam (𝐼) by

(Π𝑟 𝐴)𝑖 = {𝑢 : 𝑟 ∗ {𝑖} → ⋃𝑗∈𝑟 ∗ {𝑖} 𝐴 𝑗 | ∀𝑗 ∈ 𝑟 ∗ {𝑖}. 𝑢 𝑗 ∈ 𝐴 𝑗 },   (5.1)
(Σ𝑟 𝐴)𝑖 = {(𝑗, 𝑥) | 𝑗 ∈ 𝑟 ∗ {𝑖} ∧ 𝑥 ∈ 𝐴 𝑗 }.

1: Careful, the notation 𝑟 ∗ is used both for 𝑟 ∗ : P(𝐼) → P(𝐽) and 𝑟 ∗ : Fam (𝐼) → Fam (𝐽). The coincidence is not accidental.

Exercise 5.1.2 Show that Π𝑟 : Fam (𝐽) → Fam (𝐼) is a functor by


providing a suitable action on the morphisms, and similarly for Σ𝑟 .

The distinguishing feature of products and sums is that they are adjoint
to reindexing,
Σ𝑟 ⊣ 𝑟 ∗ ⊣ Π𝑟 .
Concretely, the above amounts to having isomorphisms, natural in
𝐴 ∈ Fam (𝐽) and 𝐵 ∈ Fam (𝐼),

HomFam (𝐼) (Σ𝑟 𝐴, 𝐵)  HomFam (𝐽) (𝐴, 𝑟 ∗ 𝐵)

and
HomFam (𝐼) (𝐵, Π𝑟 𝐴)  HomFam (𝐽) (𝑟 ∗ 𝐵, 𝐴).

We spell out the second isomorphism and leave the first one as an exercise.
Given a map of families 𝑓 : 𝐵 → Π𝑟 𝐴, define 𝑓ˆ : 𝑟 ∗ 𝐵 → 𝐴 by

𝑓ˆ𝑗 𝑥 = 𝑓𝑟 𝑗 𝑥 𝑗, (5.2)

and given 𝑔 : 𝑟 ∗ 𝐵 → 𝐴 define 𝑔ˇ : 𝐵 → Π𝑟 𝐴 by

𝑔ˇ 𝑖 𝑥 𝑗 = 𝑔 𝑗 𝑥. (5.3)

It is easy to see that 𝑓 ↦→ 𝑓ˆ and 𝑔 ↦→ 𝑔ˇ are inverses of each other.


Checking naturality is less pleasant but instructive.
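The transposes (5.2) and (5.3) can be written down and checked to be mutually inverse on a small example; the reindexing map and the sample families below are made up for the illustration:

```python
# A reindexing r : J -> I between finite index sets.
J, I = ['j1', 'j2', 'j3'], ['i1', 'i2']
r = {'j1': 'i1', 'j2': 'i1', 'j3': 'i2'}.get

# The fiber r*{i}, over which (Π_r A)_i collects choice maps.
fiber = lambda i: [j for j in J if r(j) == i]

def hat(f):
    """Transpose (5.2): from f : B -> Π_r A obtain f̂ : r* B -> A."""
    return lambda j: (lambda x: f(r(j))(x)[j])

def unhat(g):
    """Transpose (5.3): from g : r* B -> A obtain ǧ : B -> Π_r A."""
    return lambda i: (lambda x: {j: g(j)(x) for j in fiber(i)})

# A sample f : B -> Π_r A, where B_i = {0, 1} and choice maps are dicts.
f = lambda i: (lambda x: {j: x + len(j) for j in fiber(i)})

f2 = unhat(hat(f))
assert all(f(i)(x) == f2(i)(x) for i in I for x in (0, 1))   # (f̂)ˇ = f
```

Naturality in the two families amounts to these transposes commuting with index-wise composition, which the reader is invited to check by hand.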

Exercise 5.1.3 Complete the verification of Σ𝑟 ⊣ 𝑟 ∗ ⊣ Π𝑟 .



5.1.2 Type theory as the internal language

Having identified the relevant set-theoretic structure, we can now inter-


pret the language of type theory in set theory.

Contexts

In type theory the index sets are called contexts. In practice they are
not arbitrary sets (although they can be), but are rather built up by
introduction of new parameters in an inductive fashion:
▶ the empty context is the singleton2 1 = {★},
▶ given a context Γ and a family of sets 𝐴 ∈ Fam (Γ), the extended context is the sum ΣΓ 𝐴.

2: A family of sets which does not depend on any parameters is just a fixed set, so an element of Fam (1). If you think the empty context should be ∅, consider what Fam (∅) is like.

By iterating context extension we obtain a telescope

Σ1 Σ𝐴1 · · · Σ𝐴𝑛−2 Σ𝐴𝑛−1 𝐴 𝑛 .

Such a nested sum is unwieldy, so we write it as

𝑥 1 : 𝐴1 , 𝑥 2 : 𝐴2 , . . . , 𝑥 𝑛 : 𝐴 𝑛 .

where 𝑥 1 , . . . , 𝑥 𝑛 are distinct variable names. This way we may access
the components of the telescope by referring to the variables, rather
than having to use iterated projections. The elements of a telescope are
tuples3 (𝑎 1 , . . . , 𝑎 𝑛 ) such that 𝑎 𝑖 ∈ 𝐴(𝑎1 ,...,𝑎 𝑖−1 ) for 𝑖 = 1 , . . . , 𝑛 . Once again,
“context” is a synonym for “set”, but in practice we use telescopes.
3: To be quite precise, the elements are nested pairs (((★, 𝑎 1 ), 𝑎 2 ) · · · , 𝑎 𝑛 ) but we might as well use these as the definition of 𝑛 -tuples (𝑎 1 , . . . , 𝑎 𝑛 ).
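Context extension by iterated sums can be made concrete by enumerating a telescope's tuples; a sketch with finite fibers, all invented for the example:

```python
# A telescope: each entry is a set-valued function of the tuple built so far.
# Example context:  n : {0, 1, 2},  k : {0, ..., n}  (k's fiber depends on n).
telescope = [
    lambda ctx: {0, 1, 2},                # n
    lambda ctx: set(range(ctx[0] + 1)),   # k, ranging over 0..n
]

def elements(telescope):
    """Enumerate all tuples of a telescope, extending one entry at a time."""
    tuples = [()]
    for fam in telescope:
        tuples = [ctx + (a,) for ctx in tuples for a in fam(ctx)]
    return tuples

ctxs = elements(telescope)
assert len(ctxs) == 1 + 2 + 3             # sum over n of (n + 1)
assert all(k <= n for (n, k) in ctxs)
```

Flat tuples are used instead of the nested pairs of the footnote, in line with the remark that the two may be identified.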
Example 5.1.4 Suppose a mathematical text says
“Consider a continuous 𝑓 : [𝑎, 𝑏] → ℝ bounded by 𝑀 ∈ ℝ.”
What precisely is the context? It is implied that 𝑎, 𝑏 ∈ ℝ, so at first we
might think that the context is

𝑎 :ℝ , 𝑏 :ℝ , 𝑓 :[𝑎, 𝑏] → ℝ , 𝑀 :ℝ .

However, there are also three hypotheses, namely that 𝑎 < 𝑏 , that 𝑓
is continuous, and that 𝑓 is bounded by 𝑀 . Mathematical tradition
would have us ignore these, because it demands that proofs and logical
statements be considered second-class. Indeed, notice how the text
introduces names 𝑎, 𝑏, 𝑓 , 𝑀 for all the entities except the hypotheses,
and even these notes refer to theorems by mere unmemorable numbers,
as if it were forbidden to name them. The correct context is, written

vertically for readability,

𝑎 : ℝ,
𝑏 : ℝ,
𝑝 : (𝑎 < 𝑏),
𝑓 : [𝑎, 𝑏] → ℝ ,
𝑞 : continuous( 𝑓 ),
𝑀 : ℝ,
𝑟 : ∀𝑥 ∈ [𝑎, 𝑏]. 𝑓 (𝑥) ≤ 𝑀

However, we now face a difficulty: in what sense are logical formulas,


such as 𝑎 < 𝑏 and continuous( 𝑓 ), families of sets? They must be, if they
are to appear in contexts. We shall resolve the matter in ??.

Type families and their elements

When type theory is used to talk about set theory, we prefer to say type
and type family instead of “set” and “set family”, and write

Γ ⊢ 𝐴 type

for 𝐴 ∈ Fam (Γ). The elements of 𝐴 are its choice maps. We write

Γ⊢𝑡:𝐴

when 𝑡 is such a choice map.

Example 5.1.5 To see why it makes sense to call the choice maps
“elements”, we translate the statement
“(𝑎 + 𝑏)/2 is an element of the closed interval [𝑎, 𝑏].”
to the type-theoretic terminology. First, the text expects us to guess
that 𝑎 and 𝑏 are reals such that 𝑎 < 𝑏 , so the context is

𝑎 :ℝ , 𝑏 :ℝ , 𝑝 :(𝑎 < 𝑏).

Over this context we define a type family 𝐶 of closed intervals by4

𝐶(𝑎,𝑏,𝑝) = [𝑎, 𝑏] = {𝑥 ∈ ℝ | 𝑎 ≤ 𝑥 ≤ 𝑏}.

4: Dragging along the argument 𝑝 seems a little bureaucratic. In practice we would of course drop it, and in a proof assistant we might use one of several mechanisms that hide it.

The mid-point map 𝑚 assigns to each interval its mid-point,
𝑚(𝑎, 𝑏, 𝑝) = (𝑎 + 𝑏)/2, which of course is just a choice map for 𝐶 .

Dependent products and sums

If a family is indexed by a telescope with parameters 𝑥 1 , . . . , 𝑥 𝑛 , we


may wish to form the cartesian product with respect to just 𝑥 𝑛 , which
is accomplished by taking the product (5.1) along a suitable reindexing.
Suppose Γ, 𝑥 :𝐴 ⊢ 𝐵 type and let 𝑝 : (Γ, 𝑥 :𝐴) → Γ be the first projection
𝑝(𝛾, 𝑎) = 𝛾. Define the product of 𝐵 to be the type family Γ ⊢ Π𝑝 𝐵 type.

Unfolding the definitions shows that, for 𝛾 ∈ Γ,

(Π𝑝 𝐵)𝛾 = {𝑢 : 𝑝 ∗ {𝛾} → ⋃𝛿∈𝑝 ∗ {𝛾} 𝐵 𝛿 | ∀𝛿 ∈ 𝑝 ∗ {𝛾}. 𝑢𝛿 ∈ 𝐵 𝛿 }
        ≅ {𝑢 : 𝐴 𝛾 → ⋃𝑎∈𝐴 𝛾 𝐵(𝛾,𝑎) | ∀𝑎 ∈ 𝐴 𝛾 . 𝑢𝑎 ∈ 𝐵(𝛾,𝑎) },

which is precisely the desired parameterized version of cartesian prod-


uct.
A similar line of thought shows that the sum of a family Γ, 𝑥 :𝐴 ⊢ 𝐵 type is
the family Γ ⊢ Σ𝑝 𝐵, where 𝑝 is as above. It is the parameterized version
of the disjoint sum, or coproduct, of a family:

(Σ𝑝 𝐵)𝛾 = {(𝛿, 𝑏) | 𝛿 ∈ 𝑝 ∗ {𝛾} ∧ 𝑏 ∈ 𝐵 𝛿 }
        ≅ {(𝑎, 𝑏) | 𝑎 ∈ 𝐴 𝛾 ∧ 𝑏 ∈ 𝐵(𝛾,𝑎) }.

Exercise 5.1.6 Explicitly write down the isomorphisms appearing in the above calculations of Π𝑝 𝐵 and Σ𝑝 𝐵. Does it matter which of the two isomorphic versions of products and sums we use?
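For intuition, the product and sum of a family can be computed outright for a small finite family of sets. A Python sketch (illustrative names, finite sets only): the dependent product collects the choice maps, the dependent sum collects the pairs.

```python
from itertools import product as cartesian

# A finite family of sets B indexed by A: B[a] is a set for each a in A.
A = [0, 1, 2]
B = {0: ["x"], 1: ["y", "z"], 2: ["w"]}

def dependent_product(A, B):
    """All choice maps u with u(a) in B(a) for every a in A."""
    return [dict(zip(A, choice)) for choice in cartesian(*(B[a] for a in A))]

def dependent_sum(A, B):
    """All pairs (a, b) with b in B(a) -- the disjoint sum of the family."""
    return [(a, b) for a in A for b in B[a]]

Pi = dependent_product(A, B)    # 1 * 2 * 1 = 2 choice maps
Sigma = dependent_sum(A, B)     # 1 + 2 + 1 = 4 pairs
```

The sizes illustrate why the constructions generalize the cartesian product and the coproduct: the product multiplies cardinalities, the sum adds them.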

5.2 Families of assemblies

The interpretation of type theory in assemblies proceeds much as in sets; we only need to make sure that the set-theoretic maps are realized. However, we must first define what a family of assemblies is.
Defining a family of assemblies to be a collection of assemblies indexed by (the underlying set of) an assembly almost works; we just have to additionally require that all the assemblies in the family share the same underlying type.⁵

5: This is so because tpcas are simply typed. By making all the assemblies in the family share the same type, we facilitate the construction of its product, whose underlying type may then be the (non-dependent) function type, see ??.

Definition 5.2.1 A (uniform) family of assemblies 𝑆 : 𝐼 → Asm(𝔸, 𝔸′) is given by an index assembly 𝐼, an underlying type ∥𝑆∥, and for each 𝑖 ∈ |𝐼| an assembly 𝑆𝑖 such that ∥𝑆𝑖∥ = ∥𝑆∥.

The qualifier “uniform” refers to the fact that all the members share
the same underlying type. We drop it because we only ever consider
uniform families. We write Fam𝔸,𝔸′ (𝐼) or just Fam (𝐼) for the collection of
all families of assemblies indexed by 𝐼 . For everything to work out, maps
between families of assemblies have to be uniformly realized.

Definition 5.2.2 A (uniform) map of families 𝑓 : 𝐴 → 𝐵 between families of assemblies 𝐴, 𝐵 ∈ Fam (𝐼) is given by a family of maps ( 𝑓𝑖 : |𝐴𝑖| → |𝐵𝑖|)𝑖∈|𝐼| for which there exists f ∈ 𝔸′∥𝐼∥→∥𝐴∥→∥𝐵∥ such that, for all 𝑖, i, 𝑥, x,

i ⊩𝐼 𝑖 ∧ x ⊩𝐴𝑖 𝑥 =⇒ f i x ⊩𝐵𝑖 𝑓𝑖 𝑥.
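Definition 5.2.2 can be illustrated with a deliberately crude model (a Python sketch; realizers are integers that name themselves, and the typing discipline of the tpca is ignored): the doubling map of families is tracked by a single realizer, uniformly in the index.

```python
# Toy model of a uniformly realized map of families. Everything here is
# illustrative: realizers are ints, and r realizes n exactly when r == n.

I_carrier = [0, 1, 2]                  # the index assembly I

def A_fam(i):
    return list(range(i + 1))          # A_i = {0, ..., i}

def B_fam(i):
    return list(range(2 * i + 1))      # B_i = {0, ..., 2i}

def f(i, x):
    return 2 * x                       # the map of families f_i(x) = 2x

def f_realizer(ri, rx):
    return 2 * rx                      # one realizer tracking every f_i

def is_tracked():
    """Check: i realizes i and x realizes x imply f_realizer(i, x) realizes f_i(x)."""
    for i in I_carrier:
        for x in A_fam(i):             # canonical realizers are i and x themselves
            y = f_realizer(i, x)
            if y not in B_fam(i) or y != f(i, x):
                return False
    return True
```

The point of the definition is exactly this uniformity: one realizer must work at every index 𝑖 at once.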

The definition endows each Fam (𝐼) with the structure of a category. An
assembly map 𝑟 : 𝐽 → 𝐼 induces a reindexing 𝑟 ∗ : Fam (𝐼) → Fam (𝐽),
defined by
𝑟 ∗ 𝐴 = 𝐴 ◦ 𝑟.

There is nothing here to be realized, but we shall use realizers for 𝑟 in the
construction of products and sums.

Exercise 5.2.3 Verify that reindexing is contravariantly functorial.

5.2.1 Products and sums of families of assemblies

Consider a family of assemblies 𝑆 ∈ Fam (𝐽). It has an associated family of underlying sets |𝑆| ∈ Fam (𝐽), defined by 𝑗 ↦→ |𝑆𝑗|. Given an assembly map 𝑟 : 𝐽 → 𝐼, 𝑖 ∈ |𝐼|, and 𝑢 ∈ (Π𝑟 |𝑆|)𝑖, say that 𝑢 is realized by u ∈ 𝔸∥𝐽∥→∥𝑆∥ when, for all 𝑗 ∈ 𝑟∗{𝑖} and j ∈ 𝔸∥𝐽∥,

j ⊩𝐽 𝑗 =⇒ u j ⊩𝑆𝑗 𝑢𝑗.

When this is the case, we write u ⊩(Π𝑟 𝑆)𝑖 𝑢 . Now define the product
Π𝑟 𝑆 ∈ Fam (𝐼) to be the family whose realizability relation at 𝑖 ∈ |𝐼 |
is ⊩(Π𝑟 𝑆)𝑖 and

∥Π𝑟 𝑆∥ = ∥𝐽 ∥ → ∥𝑆∥,
|(Π𝑟 𝑆)𝑖 | = {𝑢 ∈ (Π𝑟 |𝑆|)𝑖 | ∃u ∈ 𝔸 ∥𝐽 ∥→∥𝑆∥ . u ⊩(Π𝑟 𝑆)𝑖 𝑢}.

Define the sum Σ𝑟 𝑆 ∈ Fam (𝐼) to be the family

∥Σ𝑟 𝑆∥ = ∥𝐽 ∥ × ∥𝑆∥,
|(Σ𝑟 𝑆)𝑖 | = (Σ𝑟 |𝑆|)𝑖 ,
r ⊩(Σ𝑟 𝑆)𝑖 (𝑗, 𝑥) ⇔ fst r ⊩𝐽 𝑗 ∧ snd r ⊩𝑆 𝑗 𝑥.

Notice how at the level of underlying types the dependency on the parameter disappears because we required the families to be uniform.

Exercise 5.2.4 Verify the adjunctions

Σ𝑟 ⊣ 𝑟 ∗ ⊣ Π𝑟 .

Half of the work has been done in Subsection 5.1.1. You still need to check
that the map of families 𝑓ˆ defined in (5.2) is realized when 𝑓 is realized,
and similarly for 𝑔ˇ defined in (5.3).

Products and sums along arbitrary reindexings are perhaps a bit unintuitive. For better understanding we spell out the non-parameterized sums and products. Given a family 𝑆 ∈ Fam (𝐼), its product Π𝐼 𝑆 is the assembly

∥Π𝐼 𝑆∥ = ∥𝐼 ∥ → ∥𝑆∥,
|Π𝐼 𝑆| = {𝑢 ∈ Π|𝐼 | |𝑆| | ∃u ∈ 𝔸 ∥𝐼 ∥→∥𝑆∥ . u ⊩Π𝐼 𝑆 𝑢},
u ⊩Π𝐼 𝑆 𝑢 ⇔ ∀𝑖 ∈ |𝐼 |, i ∈ 𝔸 ∥𝐼 ∥ . i ⊩𝐼 𝑖 ⇒ u i ⊩𝑆𝑖 𝑢𝑖.

The sum Σ𝐼 𝑆 is the assembly

∥Σ𝐼 𝑆∥ = ∥𝐼 ∥ × ∥𝑆∥,
|Σ𝐼 𝑆| = Σ|𝐼 | |𝑆|,
r ⊩Σ𝐼 𝑆 (𝑖, 𝑥) ⇔ fst r ⊩𝐼 𝑖 ∧ snd r ⊩𝑆𝑖 𝑥.

These are indeed just the familiar set-theoretic constructions embellished with realizers.

5.2.2 Contexts of assemblies

Contexts of assemblies are built as iterated sums, the same way as contexts of sets. Thus a telescope of assemblies

Γ = (𝑥1 : 𝑆1 , . . . , 𝑥 𝑛 : 𝑆𝑛 )

is the assembly

∥Γ∥ = ∥𝑆1∥ × · · · × ∥𝑆𝑛∥,
|Γ| = {(𝑎1, . . . , 𝑎𝑛) | ∀𝑖 ≤ 𝑛. 𝑎𝑖 ∈ |(𝑆𝑖)(𝑎1,...,𝑎𝑖−1)|},
r ⊩Γ (𝑎1, . . . , 𝑎𝑛) ⇔ ∀𝑖 ≤ 𝑛. proj𝑛,𝑖 r ⊩(𝑆𝑖)(𝑎1,...,𝑎𝑖−1) 𝑎𝑖,

where proj𝑛,𝑖 is the 𝑖 -th projection from an 𝑛 -tuple.

5.3 Propositions as assemblies

In Example 5.1.4 we placed a hypothesis into the context, but for this to make sense it needs to be a type family. How can a proposition be a type?
In classical logic a predicate 𝜙 on a set 𝐴 is a Boolean function 𝜙 : 𝐴 → {⊥, ⊤}. If we encode the truth values ⊥ and ⊤ with sets, 𝜙 becomes
a family of sets, which is what we want. A choice that works well is
to take ⊥ to be ∅ and ⊤ to be {★}. An even better idea is to allow any
singleton set to represent truth, they are all isomorphic anyhow. We may
do the same with assemblies.

Definition 5.3.1 An assembly is a proposition if all of its elements are equal.⁶ A predicate on an assembly 𝑆 is a family of assemblies 𝑃 ∈ Fam (𝑆) such that 𝑃𝑥 is a proposition for all 𝑥 ∈ |𝑆|.

6: Even though “all elements are equal” is equivalent to “empty or singleton”, the choice of wording matters once we bring in realizability logic, where excluded middle is not valid.

To see that the definition works, note that the following logical operations can be expressed with standard constructions on assemblies:

⊥ = 𝟘,
⊤ = 𝟙,
𝑃 ∧ 𝑄 = 𝑃 × 𝑄,                (5.4)
𝑃 ⇒ 𝑄 = 𝑃 → 𝑄,
∀𝑆 𝑅 = Π𝑆 𝑅.
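The operations above can be exercised on a toy model that tracks only inhabitation (a Python sketch with our own encodings; realizers are deliberately ignored): a proposition is a possibly empty set, and it holds exactly when it is inhabited.

```python
# Propositions as (possibly empty) sets; a proposition holds iff inhabited.
# Realizers are left out of this sketch.
BOT = set()        # falsehood, the empty assembly
TOP = {"*"}        # truth, a singleton

def conj(P, Q):
    """P and Q as the product P x Q: inhabited iff both P and Q are."""
    return {(p, q) for p in P for q in Q}

def impl(P, Q):
    """P implies Q as the function set P -> Q, reduced to a witness set."""
    if not P:
        return {()}            # the empty function out of the empty set
    if not Q:
        return set()           # no function from inhabited P into the empty set
    q = next(iter(Q))
    return {tuple((p, q) for p in sorted(P))}   # one constant function

def holds(P):
    return bool(P)
```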

For instance, 𝑃 × 𝑄 is inhabited if, and only if, both 𝑃 and 𝑄 are, which is precisely how conjunction is supposed to behave. Similarly, the product Π𝑆 𝑅 has a choice map only if⁷ all the assemblies 𝑅𝑥 are inhabited, which again characterizes the universal quantifier.

7: We did not say “if and only if” because in assemblies choice maps must also be realized. It is quite possible that there is no realized choice map for a family of assemblies, even though every one of its members is inhabited. Even so, Π𝑆 𝑅 satisfies the rules of universal quantification, and so can be used as such.

Disjunction and the existential quantifier present a bit of a challenge. How about

𝑃 ∨ 𝑄 = 𝑃 + 𝑄    and    ∃𝑆 𝑅 = Σ𝑆 𝑅 ?

The definitions seem to work: 𝑃 + 𝑄 is inhabited if, and only if, 𝑃 or 𝑄 is inhabited; and Σ𝑆 𝑅 is inhabited if, and only if, there is 𝑥 ∈ |𝑆| for which 𝑅𝑥 has an element.⁸ The problem is that 𝑃 + 𝑄 and Σ𝑆 𝑅 need not be propositions because they may have more than one element. There are two ways to fix the deficiency.

8: We are a bit sloppy with these statements because we should also think about realizers.
Firstly, we can force any assembly 𝑃 to become a proposition ∥𝑃∥ by
quotienting it with respect to the trivial equivalence relation. Disjunction
and existential quantification are then defined as

𝑃 ∨ 𝑄 = ∥𝑃 + 𝑄∥,
∃𝑆 𝑅 = ∥Σ𝑆 𝑅∥.                (5.5)

This option is explored in Subsection 5.3.1.


Secondly, why do we insist that propositions must have at most one
element? We could use all assemblies as if they were propositions, with
the empty assembly representing falsehood and the inhabited assemblies
truth. The correspondence (5.4) still holds and can even be extended to

𝑃 ∨ 𝑄 = 𝑃 + 𝑄,
∃𝑆 𝑅 = Σ𝑆 𝑅.

This approach is taken in Martin-Löf type theory and goes by the name
propositions as types.
Which definition of ∨ and ∃ should we adopt? Mathematical practice
uses both. The truncated sum ∥Σ𝑆 𝑅∥ is a form of abstract existence because
its element, when it exists, reveals no specific 𝑥 ∈ |𝑆| for which 𝑅𝑥 is
inhabited; whereas Σ𝑆 𝑅 is concrete existence because its elements provide
witnesses. Similarly, ∥𝑃 + 𝑄 ∥ states that one or the other disjunct holds
without revealing which one, whereas an element of 𝑃 + 𝑄 makes a
specific choice of one or the other.
Judicious use of propositional truncation thus makes it possible to
formalize aspects of mathematical practice that are not easily incorporated
into traditional first-order logic, which provides only abstract existence
and disjunction, nor into traditional Martin-Löf type theory, which
provides only concrete existence and disjunction.

5.3.1 Propositional truncation of an assembly

Propositional truncation works both for sets and assemblies. The proposi-
tional truncation ∥𝐴∥ of a set 𝐴 is the quotient 𝐴/∼ by the full relation ∼
on 𝐴. The quotient map takes an element 𝑥 ∈ 𝐴 to its equivalence class

|𝑥| = 𝐴. Thus, if 𝐴 has an element then ∥𝐴∥ = {𝐴} and if 𝐴 is empty then
∥𝐴∥ = ∅. A category theorist would postulate ∥𝐴∥ as the coequalizer

𝐴 × 𝐴 ⇉ 𝐴 ↠ ∥𝐴∥,

where the parallel pair consists of the projections 𝜋1 and 𝜋2, and the epimorphism is the quotient map |−|.
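At the level of sets the construction is tiny; a Python sketch (names are ours):

```python
# Propositional truncation of a set: the quotient by the full relation.
# The truncation is {A} when A is inhabited, and empty when A is empty.

def truncate(A):
    A = frozenset(A)
    return {A} if A else set()

def quotient_map(A):
    """The map |-| : A -> truncate(A) sends every x in A to the class A itself."""
    cls = frozenset(A)
    return lambda x: cls
```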

This construction works in categories other than sets. It takes an assembly 𝑆 to its propositional truncation ∥𝑆∥ where⁹

∥ ∥𝑆∥ ∥ = ∥𝑆∥,
| ∥𝑆∥ | = ∥|𝑆|∥,
r ⊩∥𝑆∥ 𝜉 ⇔ ∃𝑦 ∈ |𝑆|. r ⊩𝑆 𝑦.

9: It is too late to disentangle the conflicting notations for propositional truncation and the underlying type. We could insert parentheses, ∥(∥𝑆∥)∥ = ∥𝑆∥ and |(∥𝑆∥)| = ∥(|𝑆|)∥, but that just ruins the fun.

Like any worthy construction, propositional truncation has a universal property stemming from the above coequalizer diagram: if 𝑃 is a proposition then for every assembly map 𝑓 : 𝑆 → 𝑃 there is a unique 𝑓¯ : ∥𝑆∥ → 𝑃 making the evident triangle commute, 𝑓¯ ◦ |−| = 𝑓.

Exercise 5.3.2 Verify that the universal property of propositional


truncation holds with respect to families of assemblies. Given a family
of assemblies 𝑆 ∈ Fam (Γ), a predicate 𝑃 ∈ Fam (Γ) and a map of
families 𝑓 : 𝑆 → 𝑃 , there is a unique map of families 𝑓¯ : ∥𝑆∥ → 𝑃
such that 𝑓¯ ◦ |−| = 𝑓 .
Moreover, explain what it means for propositional truncation to be
functorial with respect to reindexing, then show that it is.

5.3.2 Realizability predicates and propositions

In Chapter 4 we gave a realizability interpretation of logic, with its own notions of propositions and predicates. We show that it is equivalent to the truncated logic of the present chapter.
Let Pred′(𝑆) be the class of predicates on 𝑆 in the sense of the present
chapter, i.e., families of assemblies 𝑃 ∈ Fam (𝑆) such that ∀𝑢, 𝑣 ∈ |𝑃𝑥|. 𝑢 =
𝑣 for all 𝑥 ∈ |𝑆| . We endow Pred′(𝑆) with a preorder ⊢′, defined by

𝑃 ⊢′ 𝑄 ⇐⇒ there is a map of families 𝑃 → 𝑄.

In fact, there is at most one map of families 𝑃 → 𝑄 .

Exercise 5.3.3 Verify that the preorder Pred′(𝑆) forms a Heyting preal-
gebra with the operations given by (5.4) and (5.5).

Theorem 5.3.4 The Heyting prealgebras (Pred(𝑆), ⊢) and (Pred′(𝑆), ⊢′) are
equivalent.

Proof. The equivalence takes a realizability predicate 𝑝 ∈ Pred(𝑆) to the


family 𝑃𝑝 ∈ Fam (𝑆), defined by

∥𝑃𝑝 ∥ = ∥𝑝∥,
|𝑃𝑝 𝑥| = 1 ,
r ⊩𝑃𝑝 𝑥 ★ ⇔ r ⊩ 𝑝𝑥.

In the opposite direction, a predicate 𝑃 ∈ Pred′(𝑆) is taken to the realiz-


ability predicate 𝑝 𝑃 ∈ Pred(𝑆), defined by

∥𝑝 𝑃 ∥ = ∥𝑃∥,
r ⊩ 𝑝 𝑃 𝑥 ⇔ ∃𝜉 ∈ 𝑃𝑥. r ⊩𝑃𝑥 𝜉.

To finish the proof, one would have to verify that 𝑝 ↦→ 𝑃𝑝 and 𝑃 ↦→ 𝑝𝑃 are monotone, that 𝑝 ⊣⊢ 𝑝𝑃𝑝 and 𝑃 ⊣⊢′ 𝑃𝑝𝑃, and that they preserve the Heyting prealgebra structure.
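The two passages in the proof can be animated on a finite toy model (a Python sketch; the encodings, including the fixed finite set of realizers, are ours): going from a realizability predicate to a family and back recovers the original predicate.

```python
# Round trip between a realizability predicate p : x -> set of realizers
# and a predicate-as-family P : x -> (carrier, realizability relation).
# Finite toy model; all encodings here are illustrative.

REALIZERS = set(range(5))
CARRIER = ["a", "b", "c"]

def P_of_p(p):
    """The family P_p: a one-element carrier, with r realizing it iff r realizes p x."""
    return lambda x: ({"*"}, lambda r, _elt: r in p(x))

def p_of_P(P):
    """The predicate p_P: r realizes p_P x iff r realizes some element of P x."""
    def p(x):
        carrier, realizes = P(x)
        return {r for r in REALIZERS if any(realizes(r, e) for e in carrier)}
    return p

# A sample predicate: "a" realized by the evens, "b" by {1}, "c" by nothing.
sample = {"a": {0, 2, 4}, "b": {1}, "c": set()}.get
round_trip = p_of_P(P_of_p(sample))
```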

Exercise 5.3.5 In the proof of Theorem 5.3.4 the passage from 𝑃 to the realizability predicate 𝑝𝑃 works for an arbitrary family 𝑃 ∈ Fam (𝑆). Therefore, the equivalence extends to a pair of functors

𝐼 : Pred(𝑆) → Fam (𝑆)    and    𝑇 : Fam (𝑆) → Pred(𝑆).

Show that 𝐼 is full and faithful, 𝑇 is its left adjoint, and that 𝐼 ◦ 𝑇
is naturally isomorphic to propositional truncation. How does the
adjunction interact with reindexing?

The realizability logic in Chapter 4 gave a simple definition of quantifiers, where one of the parameters of a two-parameter predicate was quantified over. We can improve on that by defining quantification with respect to reindexing.

An assembly map 𝑟 : 𝐽 → 𝐼 induces a monotone map 𝑟∗ : Pred(𝐼) → Pred(𝐽) which acts by precomposition, 𝑟∗ 𝑝 = 𝑝 ◦ 𝑟. The universal quantification of 𝑝 ∈ Pred(𝐽) along 𝑟 is the realizability predicate ∀𝑟 𝑝 ∈ Pred(𝐼), defined by

∥∀𝑟 𝑝∥ = ∥𝐽∥ → ∥𝑝∥,
u ⊩ (∀𝑟 𝑝)𝑖 ⇔ ∀𝑗 ∈ 𝑟∗{𝑖}. ∀j ∈ 𝔸∥𝐽∥. j ⊩𝐽 𝑗 ⇒ u j ⊩ 𝑝 𝑗.

Define the existential quantification along 𝑟 to be the realizability


predicate ∃𝑟 𝑝 ∈ Pred(𝐼),

∥∃𝑟 𝑝 ∥ = ∥𝐽 ∥ × ∥𝑝 ∥,
r ⊩ (∃𝑟 𝑝)𝑖 ⇔ ∃𝑗 ∈ |𝐽 |. fst r ⊩𝐽 𝑗 ∧ snd r ⊩ 𝑝 𝑗.

The quantifiers from Section 4.4 arise as quantification along a projection


𝑇 × 𝑆 → 𝑆.

Exercise 5.3.6 Verify that the equivalence of Pred(𝑆) and Pred′(𝑆) preserves the quantifiers as well.

Henceforth we shall use one or the other formulation of realizability predicates, whichever is more appropriate in the situation at hand.

5.4 Identity types

In Section 4.6 we defined equality on an assembly 𝑆 as a predicate on 𝑆 × 𝑆. The corresponding family of assemblies Id𝑆 ∈ Fam (𝑆 × 𝑆) is given by

∥ Id𝑆 ∥ = unit    and    Id𝑆 (𝑥, 𝑦) = 𝟙 if 𝑥 = 𝑦, and Id𝑆 (𝑥, 𝑦) = 𝟘 if 𝑥 ≠ 𝑦.
It is called the identity type of 𝑆 .
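Ignoring realizers, the identity type is the simplest of families; a Python sketch (our names):

```python
# Id_S(x, y) is a singleton when x = y and empty otherwise.
def Id(x, y):
    return {"*"} if x == y else set()

def refl(x):
    """The canonical element of Id_S(x, x)."""
    return "*"
```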
If we compose Id𝑆 with the diagonal map 𝑆 → 𝑆 × 𝑆 we get the family
𝑥 ↦→ Id𝑆 (𝑥, 𝑥) which has an element refl𝑆 , namely refl𝑆 (𝑥) = ★, realized
by ⟨𝑥 ∥𝑆∥ ⟩ ★.
The identity type in Martin-Löf type theory satisfies the following elim-
ination principle. Suppose Γ is a context and Γ ⊢ 𝑆 type a type over
it.

5.4.1 UIP and equality reflection

5.5 Inductive and coinductive types

5.6 Universes

5.6.1 Universes of propositions

The universe of decidable propositions.


The universe of semi-decidable propositions.
The universe of stable propositions.
The universe of propositions.

5.6.2 The universe of modest sets

5.6.3 The universe of small assemblies


6 The internal language at work
6.1 Epis and monos

6.2 The axiom of choice

6.3 Heyting arithmetic

6.4 Countable objects

6.5 Markov’s principle

6.6 Church’s thesis and the computability modality

6.7 Aczel’s presentation axiom

6.8 Continuity principles

6.8.1 Brouwer’s continuity

6.8.2 Kreisel-Lacombe-Shoenfield-Ceitin continuity

6.9 Brouwer’s compactness principle


