Incompleteness and Computability
An Open Introduction to Gödel's Theorems
The Open Logic Project
Instigator
Richard Zach, University of Calgary
Editorial Board
Aldo Antonelli,† University of California, Davis
Andrew Arana, Université Paris I Panthéon–Sorbonne
Jeremy Avigad, Carnegie Mellon University
Tim Button, University College London
Walter Dean, University of Warwick
Gillian Russell, University of North Carolina
Nicole Wyatt, University of Calgary
Audrey Yap, University of Victoria
Contributors
Samara Burns, University of Calgary
Dana Hägg, University of Calgary
Zesen Qian, Carnegie Mellon University
Fall 2019
The Open Logic Project would like to acknowledge the generous support of the Taylor Institute of Teaching and Learning of the University of Calgary, and the Alberta Open Educational Resources (ABOER) Initiative, which is made possible through an investment from the Alberta government.
1 Introduction to Incompleteness 1
1.1 Historical Background . . . . . . . . . . . . . . . 1
1.2 Definitions . . . . . . . . . . . . . . . . . . . . . . 7
1.3 Overview of Incompleteness Results . . . . . . . 14
1.4 Undecidability and Incompleteness . . . . . . . . 16
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2 Recursive Functions 20
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . 20
2.2 Primitive Recursion . . . . . . . . . . . . . . . . . 21
2.3 Composition . . . . . . . . . . . . . . . . . . . . . 24
2.4 Primitive Recursion Functions . . . . . . . . . . . 26
2.5 Primitive Recursion Notations . . . . . . . . . . . 30
2.6 Primitive Recursive Functions are Computable . . 30
2.7 Examples of Primitive Recursive Functions . . . . 31
2.8 Primitive Recursive Relations . . . . . . . . . . . 35
2.9 Bounded Minimization . . . . . . . . . . . . . . . 38
2.10 Primes . . . . . . . . . . . . . . . . . . . . . . . . 39
2.11 Sequences . . . . . . . . . . . . . . . . . . . . . . 40
2.12 Trees . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.13 Other Recursions . . . . . . . . . . . . . . . . . . 45
3 Arithmetization of Syntax 58
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . 58
3.2 Coding Symbols . . . . . . . . . . . . . . . . . . . 60
3.3 Coding Terms . . . . . . . . . . . . . . . . . . . . 62
3.4 Coding Formulas . . . . . . . . . . . . . . . . . . 65
3.5 Substitution . . . . . . . . . . . . . . . . . . . . . 66
3.6 Derivations in Natural Deduction . . . . . . . . . 67
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4 Representability in Q 76
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . 76
4.2 Functions Representable in Q are Computable . 79
4.3 The Beta Function Lemma . . . . . . . . . . . . . 80
4.4 Simulating Primitive Recursion . . . . . . . . . . 85
4.5 Basic Functions are Representable in Q . . . . . 86
4.6 Composition is Representable in Q . . . . . . . . 89
4.7 Regular Minimization is Representable in Q . . 91
4.8 Computable Functions are Representable in Q . 96
4.9 Representing Relations . . . . . . . . . . . . . . . 97
4.10 Undecidability . . . . . . . . . . . . . . . . . . . . 98
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Problems . . . . . . . . . . . . . . . . . . . . . . . . . . 100
D Biographies 254
D.1 Alonzo Church . . . . . . . . . . . . . . . . . . . 254
D.2 Kurt Gödel . . . . . . . . . . . . . . . . . . . . . . 255
D.3 Rózsa Péter . . . . . . . . . . . . . . . . . . . . . 257
D.4 Julia Robinson . . . . . . . . . . . . . . . . . . . . 259
D.5 Alfred Tarski . . . . . . . . . . . . . . . . . . . . 261
Bibliography 266
Acknowledgments
The material in the OLP used in chapters 1 to 5 and 8 was based originally on Jeremy Avigad's lecture notes on "Computability and Incompleteness," which he contributed to the OLP. I have heavily revised and expanded this material. The lecture notes, e.g., based theories of arithmetic on an axiomatic proof system. Here, we use Gentzen's standard natural deduction system (described in appendix C), which requires dealing with trees primitive recursively (in section 2.12) and a more complicated approach to the arithmetization of derivations (in section 3.6). The material in chapter 8 was also expanded by Zesen Qian during his stay in Calgary as a Mitacs summer intern.
The material in the OLP on model theory and models of
arithmetic in chapter 6 was originally taken from Aldo Antonelli’s
lecture notes on “The Completeness of Classical Propositional
and Predicate Logic,” which he contributed to the OLP before
his untimely death in 2015.
The biographies of logicians in appendix D and much of the
material in appendix C are originally due to Samara Burns. Dana
Hägg originally worked on the material in appendix B.
CHAPTER 1
Introduction to
Incompleteness
1.1 Historical Background
In this section, we will briefly discuss historical developments
that will help put the incompleteness theorems in context. In
particular, we will give a very sketchy overview of the history of
mathematical logic; and then say a few words about the history
of the foundations of mathematics.
The phrase “mathematical logic” is ambiguous. One can in-
terpret the word “mathematical” as describing the subject mat-
ter, as in, “the logic of mathematics,” denoting the principles
of mathematical reasoning; or as describing the methods, as in
“the mathematics of logic,” denoting a mathematical study of the
principles of reasoning. The account that follows involves math-
ematical logic in both senses, often at the same time.
The study of logic began, essentially, with Aristotle, who lived
approximately 384–322 bce. His Categories, Prior analytics, and
Posterior analytics include systematic studies of the principles of
scientific reasoning, including a thorough and systematic study
of the syllogism.
Aristotle’s logic dominated scholastic philosophy through the
middle ages; indeed, as late as eighteenth century Kant main-
it too:
The second aim, that the axiom systems developed would set-
tle every mathematical question, can be made precise in two ways.
In one way, we can formulate it as follows: For any sentence A
in the language of an axiom system for mathematics, either A
or ¬A is provable from the axioms. If this were true, then there
would be no sentences which can neither be proved nor refuted
on the basis of the axioms, no questions which the axioms do not
settle. An axiom system with this property is called complete. Of
course, for any given sentence it might still be a difficult task to
determine which of the two alternatives holds. But in principle
there should be a method to do so. In fact, for the axiom and
derivation systems considered by Hilbert, completeness would
imply that such a method exists—although Hilbert did not real-
ize this. The second way to interpret the question would be this
stronger requirement: that there be a mechanical, computational
method which would determine, for a given sentence A, whether
it is derivable from the axioms or not.
In 1931, Gödel proved the two "incompleteness theorems,"
which showed that this program could not succeed. There is
no axiom system for mathematics which is complete; specifically,
the sentence that expresses the consistency of the axioms is a
sentence which can neither be proved nor refuted.
This struck a lethal blow to Hilbert’s original program. How-
ever, as is so often the case in mathematics, it also opened
up exciting new avenues for research. If there is no one, all-
encompassing formal system of mathematics, it makes sense to
develop more circumscribed systems and investigate what can
be proved in them. It also makes sense to develop less restricted
methods of proof for establishing the consistency of these sys-
tems, and to find ways to measure how hard it is to prove their
consistency. Since Gödel showed that (almost) every formal sys-
tem has questions it cannot settle, it makes sense to look for
“interesting” questions a given formal system cannot settle, and
to figure out how strong a formal system has to be to settle them.
To the present day, logicians have been pursuing these questions
in a new mathematical discipline, the theory of proofs.
1.2 Definitions
In order to carry out Hilbert's project of formalizing mathematics and showing that such a formalization is consistent and complete, the first order of business would be that of picking a language, logical framework, and a system of axioms. For our purposes, let us suppose that mathematics can be formalized in a first-order language, i.e., that there is some set of constant symbols, function symbols, and predicate symbols which, together with the connectives and quantifiers of first-order logic, allow us to express the claims of mathematics. Most people agree that such a language exists: the language of set theory, in which ∈ is the only non-logical symbol. That such a simple language is so expressive is of course a very implausible claim at first sight, and it took a lot of work to establish that practically all of mathematics can be expressed in this very austere vocabulary. To keep things simple, for now, let's restrict our discussion to arithmetic, i.e., the part of mathematics that just deals with the natural numbers N. The natural language in which to express facts of arithmetic is LA. LA contains a single two-place predicate symbol <, a single constant symbol 0, one one-place function symbol ′, and two two-place function symbols + and ×.
1. |N| = N
2. 0^N = 0
TA = {A : N ⊨ A}.
𝛤 = {A : 𝛤0 ⊨ A}
∀x ∀y (x′ = y′ → x = y) (Q1)
∀x 0 ≠ x′ (Q2)
∀x (x = 0 ∨ ∃y x = y′) (Q3)
∀x (x + 0) = x (Q4)
∀x ∀y (x + y′) = (x + y)′ (Q5)
∀x (x × 0) = 0 (Q6)
∀x ∀y (x × y′) = ((x × y) + x) (Q7)
∀x ∀y (x < y ↔ ∃z (z′ + x) = y) (Q8)

Q = {A : {Q1, . . . , Q8} ⊨ A}.
To say that 𝛤 is not complete is to say that for at least one sentence A, 𝛤 ⊬ A and 𝛤 ⊬ ¬A. Such a sentence is called independent (of 𝛤). We can in fact relatively quickly prove that there must be independent sentences. But the power of Gödel's proof of the theorem lies in the fact that it exhibits a specific example of such an independent sentence. The intriguing construction produces a sentence G𝛤, called a Gödel sentence for 𝛤, which is unprovable because in 𝛤, G𝛤 is equivalent to the claim that G𝛤 is unprovable in 𝛤. It does so constructively, i.e., given an axiomatization of 𝛤 and a description of the proof system, the proof gives a method for actually writing down G𝛤.
D = {n : 𝛤 ⊢ ¬An (n)}
Summary
Hilbert’s program aimed to show that all of mathematics could be
formalized in an axiomatized theory in a formal language, such
as the language of arithmetic or of set theory. He believed that
such a theory would be complete. That is, for every sentence A,
either T ⊢ A or T ⊢ ¬A. In this sense then, T would have settled
every mathematical question: it would either prove that it’s true
or that it’s false. If Hilbert had been right, it would also have
turned out that mathematics is decidable. That’s because any
axiomatizable theory is computably enumerable, i.e., there is
Problems
Problem 1.1. Show that TA = {A : N ⊨ A} is not axiomatizable.
You may assume that TA represents all decidable properties.
CHAPTER 2
Recursive
Functions
2.1 Introduction
In order to develop a mathematical theory of computability, one has to, first of all, develop a model of computability. We now think of computability as the kind of thing that computers do, and computers work with symbols. But at the beginning of the development of theories of computability, the paradigmatic example of computation was numerical computation. Mathematicians were always interested in number-theoretic functions, i.e., functions f : Nⁿ → N that can be computed. So it is not surprising that at the beginning of the theory of computability, it was such functions that were studied. The most familiar examples of computable numerical functions, such as addition, multiplication, and exponentiation (of natural numbers), share an interesting feature: they can be defined recursively. It is thus quite natural to attempt a general definition of computable function on the basis of recursive definitions. Among the many possible ways to define number-theoretic functions recursively, one particularly simple pattern of definition here becomes central: so-called primitive recursion.
In addition to computable functions, we might be interested
ber x, we’ll eventually reach the step where we define f (x) from
f (x + 1), and so f (x) is defined for all x ∈ N.
For instance, suppose we specify h : N → N by the following
two equations:
h (0) = 1
h (x + 1) = 2 · h (x)
h (1) = 2 · h (0) = 2,
h (2) = 2 · h (1) = 2 · 2,
h (3) = 2 · h (2) = 2 · 2 · 2,
⋮
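To make the recursion concrete, here is a short Python sketch (an illustration, not part of the text) of exactly this definition; h computes the powers of 2:

```python
def h(x):
    # h(0) = 1; h(x + 1) = 2 * h(x)
    if x == 0:
        return 1
    return 2 * h(x - 1)

# h(3) unfolds as 2 * 2 * 2 * 1
```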
add(x, 0) = x
add(x, y + 1) = add(x, y) + 1
These equations specify the value of add for all x and y. To find
add(2, 3), for instance, we apply the defining equations for x = 2,
using the first to find add(2, 0) = 2, then using the second to
successively find add(2, 1) = 2 + 1 = 3, add(2, 2) = 3 + 1 = 4,
add(2, 3) = 4 + 1 = 5.
In the definition of add we used + on the right-hand side of the
second equation, but only to add 1. In other words, we used the
successor function succ(z ) = z + 1 and applied it to the previous
value add(x, y) to define add(x, y + 1). So we can think of the
recursive definition as given in terms of a single function which
we apply to the previous value. However, it doesn’t hurt—and
sometimes is necessary—to allow the function to depend not just
on the previous value but also on x and y. Consider:
mult(x, 0) = 0
mult(x, y + 1) = add(mult(x, y),x)
mult(2, 0) = 0
mult(2, 1) = mult(2, 0 + 1) = add(mult(2, 0), 2) = add(0, 2) = 2
mult(2, 2) = mult(2, 1 + 1) = add(mult(2, 1), 2) = add(2, 2) = 4
mult(2, 3) = mult(2, 2 + 1) = add(mult(2, 2), 2) = add(4, 2) = 6
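The defining equations for add and mult translate directly into code. The following Python sketch (illustrative; the function names are taken from the text) mirrors them:

```python
def succ(z):
    return z + 1

def add(x, y):
    # add(x, 0) = x; add(x, y + 1) = succ(add(x, y))
    if y == 0:
        return x
    return succ(add(x, y - 1))

def mult(x, y):
    # mult(x, 0) = 0; mult(x, y + 1) = add(mult(x, y), x)
    if y == 0:
        return 0
    return add(mult(x, y - 1), x)
```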
h (x 0 , . . . ,x k −1 , 0) = f (x 0 , . . . ,x k −1 )
h (x 0 , . . . ,x k −1 , y + 1) = g (x 0 , . . . ,x k −1 , y,h (x 0 , . . . ,x k −1 , y))
add(x 0 , 0) = f (x 0 ) = x 0
add(x 0 , y + 1) = g (x 0 , y, add(x 0 , y)) = succ(add(x 0 , y))
mult(x 0 , 0) = f (x 0 ) = 0
mult(x 0 , y + 1) = g (x 0 , y, mult(x 0 , y)) = add(mult(x 0 , y),x 0 )
2.3 Composition
If f and g are two one-place functions of natural numbers, we
can compose them: h (x) = g ( f (x)). The new function h (x) is
then defined by composition from the functions f and g . We’d like
to generalize this to functions of more than one argument.
Here’s one way of doing this: suppose f is a k -place function,
and g 0 , . . . , g k −1 are k functions which are all n-place. Then we
can define a new n-place function h as follows:
P^n_i(x0, . . . , xn−1) = xi
g(x, y, z) = succ(P^3_2(x, y, z)).
h(x0, x1) = f(P^2_1(x0, x1), P^2_0(x0, x1)).
h(x, y) = f(P^2_0(x, y), g(P^2_0(x, y), P^2_0(x, y), P^2_1(x, y)), P^2_1(x, y)).
h (x 0 , . . . ,x k −1 , 0) = f (x 0 , . . . ,x k −1 )
h (x 0 , . . . ,x k −1 , y + 1) = g (x 0 , . . . ,x k −1 , y,h (x 0 , . . . ,x k −1 , y))
by
P^n_i(x0, . . . , xn−1) = xi,
for each natural number n and i < n, we will include among the
primitive recursive functions the function zero(x) = 0.
add(x 0 , 0) = f (x 0 ) = x 0
add(x 0 , y + 1) = g (x 0 , y, add(x 0 , y)) = succ(add(x 0 , y))
g (x 0 , y, z ) = succ(z ).
This does not yet tell us that g is primitive recursive, since g and
succ are not quite the same function: succ is one-place, and g has
to be three-place. But we can define g “officially” by composition
as
g(x0, y, z) = succ(P^3_2(x0, y, z))
Since succ and P^3_2 count as primitive recursive functions, g does
as well, since it can be defined by composition from primitive
recursive functions. □
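The "official" apparatus can be sketched in a few lines of Python (an illustration under our own naming; proj, compose, and primrec stand for the projections P^n_i, composition, and the primitive recursion scheme of the text). With them, add is literally primrec(P^1_0, succ ∘ P^3_2):

```python
def proj(n, i):
    # P^n_i: the n-place projection returning the i-th argument
    def P(*args):
        assert len(args) == n
        return args[i]
    return P

def compose(f, *gs):
    # h(x...) = f(g0(x...), ..., g_{k-1}(x...))
    def h(*args):
        return f(*(g(*args) for g in gs))
    return h

def primrec(f, g):
    # h(x..., 0) = f(x...); h(x..., y + 1) = g(x..., y, h(x..., y))
    def h(*args):
        *xs, y = args
        if y == 0:
            return f(*xs)
        return g(*xs, y - 1, h(*xs, y - 1))
    return h

def succ(z):
    return z + 1

# add defined "officially": f = P^1_0, g = succ composed with P^3_2
add = primrec(proj(1, 0), compose(succ, proj(3, 2)))
```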
Proof. Exercise. □
h (0) = 1
h (y + 1) = 2 · h (y).
This function cannot fit into the form required by Definition 2.1,
since k = 0. The definition also involves the constants 1 and 2. To
get around the first problem, let’s introduce a dummy argument
and define the function h ′:
h ′ (x 0 , 0) = f (x 0 ) = 1
h ′ (x 0 , y + 1) = g (x 0 , y,h ′ (x 0 , y)) = 2 · h ′ (x 0 , y).
g(x0, y, z) = g′(P^3_2(x0, y, z))
g′(z) = mult(g′′(z), P^1_0(z))
and
g′′(z) = succ(f(z)),
h (x⃗ , 3) = g (x⃗ , 2,h (x⃗ , 2)) = g (x⃗ , 2, g (x⃗ , 1, g (x⃗ , 0, f (x⃗ ))))
h (x⃗ , 4) = g (x⃗ , 3,h (x⃗ , 3)) = g (x⃗ , 3, g (x⃗ , 2, g (x⃗ , 1, g (x⃗ , 0, f (x⃗ )))))
⋮
exp(x, 0) = 1
is primitive recursive.
fac(0) = 1
fac(y + 1) = fac(y) · (y + 1).
where g(x, y, z) = mult(P^3_2(x, y, z), succ(P^3_1(x, y, z))) and then let
From now on we’ll be a bit more laissez-faire and not give the
official definitions by composition and primitive recursion. □
is primitive recursive.
Proof. We have:
x −̇ 0 = x
x −̇ (y + 1) = pred(x −̇ y) □
Proposition 2.11. The distance between x and y, |x − y|, is primitive
recursive.

Proof. We have |x − y| = (x −̇ y) + (y −̇ x), so the distance can
be defined by composition from + and −̇, which are primitive
recursive. □
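Truncated subtraction and the functions built from it are easy to mirror in code. A Python sketch (illustrative; monus stands in for −̇, which is not a legal Python identifier):

```python
def pred(x):
    # predecessor, with pred(0) = 0
    return x - 1 if x > 0 else 0

def monus(x, y):
    # truncated subtraction: x -. 0 = x; x -. (y + 1) = pred(x -. y)
    for _ in range(y):
        x = pred(x)
    return x

def dist(x, y):
    # |x - y| = (x -. y) + (y -. x)
    return monus(x, y) + monus(y, x)

def max_(x, y):
    # max(x, y) = x + (y -. x)
    return x + monus(y, x)
```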
max(x, y) = x + (y −̇ x).
Proof. Exercise. □
function

h(x⃗, y) = ∏_{z=0}^{y} f(x⃗, z)

is primitive recursive.
1. ¬P (x⃗ )
2. P (x⃗ ) ∧ Q (x⃗ )
3. P (x⃗ ) ∨ Q (x⃗ )
4. P (x⃗ ) → Q (x⃗ )
Proof. Suppose P(x⃗) and Q(x⃗) are primitive recursive, i.e., their
characteristic functions 𝜒P and 𝜒Q are. We have to show that
the characteristic functions of ¬P(x⃗), etc., are also primitive recursive.

𝜒¬P(x⃗) = { 0 if 𝜒P(x⃗) = 1
         { 1 otherwise

We can define 𝜒¬P(x⃗) as 1 −̇ 𝜒P(x⃗).

𝜒P∧Q(x⃗) = { 1 if 𝜒P(x⃗) = 𝜒Q(x⃗) = 1
          { 0 otherwise
𝜒P(x⃗, 0) = 1
𝜒P(x⃗, y + 1) = min(𝜒P(x⃗, y), 𝜒R(x⃗, y)).
cond(0, y, z ) = y, cond(x + 1, y, z ) = z .
Proof. Note that there can be no z < 0 such that R(x⃗, z) since
there is no z < 0 at all. So mR(x⃗, 0) = 0.
In case the bound is of the form y + 1 we have three cases: (a)
There is a z < y such that R (x⃗ , z ), in which case mR (x⃗ , y + 1) =
mR (x⃗ , y). (b) There is no such z < y but R (x⃗ , y) holds, then
mR(x⃗, 0) = 0

mR(x⃗, y + 1) = { mR(x⃗, y)  if mR(x⃗, y) ≠ y
              { y          if mR(x⃗, y) = y and R(x⃗, y)
              { y + 1      otherwise.
Note that there is a z < y such that R (x⃗ , z ) iff mR (x⃗ , y) ≠ y. □
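Bounded minimization can be sketched directly in Python (illustrative; passing the relation R as a predicate is an implementation convenience, not part of the text's definition):

```python
def bounded_min(R, xs, y):
    # m_R(xs, y): the least z < y such that R(xs, z) holds,
    # and y if there is no such z
    for z in range(y):
        if R(xs, z):
            return z
    return y
```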
2.10 Primes
Bounded quantification and bounded minimization provide us
with a good deal of machinery to show that natural functions
and relations are primitive recursive. For example, consider the
relation “x divides y”, written x | y. The relation x | y holds if
division of y by x is possible without remainder, i.e., if y is an
integer multiple of x. (If it doesn't hold, i.e., the remainder when
dividing y by x is > 0, we write x ∤ y.) In other words, x | y iff for
some z , x · z = y. Obviously, any such z , if it exists, must be ≤ y.
So, we have that x | y iff for some z ≤ y, x · z = y. We can define
the relation x | y by bounded existential quantification from =
and multiplication by
x | y ⇔ (∃z ≤ y) (x · z ) = y .
Prime(x) ⇔ x ≥ 2 ∧ (∀y ≤ x) (y | x → y = 1 ∨ y = x)
p (0) = 2
p (x + 1) = nextPrime(p (x))
This shows that nextPrime(x) and hence p(x) are (not just computable but) primitive recursive.
(If you’re curious, here’s a quick proof of Euclid’s theorem.
Suppose p n is the largest prime ≤ x and consider the product
p = p 0 · p 1 · · · · · p n of all primes ≤ x. Either p + 1 is prime or there
is a prime between x and p + 1. Why? Suppose p + 1 is not prime.
Then some prime number q | p + 1 where q < p + 1. None of the
primes ≤ x divide p + 1. (By definition of p, each of the primes
pi ≤ x divides p, i.e., with remainder 0. So, each of the primes
pi ≤ x divides p + 1 with remainder 1, and so pi ∤ p + 1.) Hence,
q is a prime > x and < p + 1. And p ≤ x!, so there is a prime > x
and ≤ x! + 1.)
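The bounded definitions of divisibility, primality, and the prime enumeration can be sketched in Python (illustrative; the unbounded loop in next_prime terminates by Euclid's theorem, whereas the text uses the bound x! + 1 to keep the definition primitive recursive):

```python
def divides(x, y):
    # x | y iff for some z <= y, x * z = y (bounded search)
    return any(x * z == y for z in range(y + 1))

def is_prime(x):
    # Prime(x) iff x >= 2 and every y <= x dividing x is 1 or x
    return x >= 2 and all(
        y == 1 or y == x
        for y in range(1, x + 1) if divides(y, x)
    )

def next_prime(x):
    # least prime greater than x; terminates by Euclid's theorem
    p = x + 1
    while not is_prime(p):
        p += 1
    return p

def nth_prime(n):
    # p(0) = 2; p(n + 1) = nextPrime(p(n))
    p = 2
    for _ in range(n):
        p = next_prime(p)
    return p
```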
2.11 Sequences
The set of primitive recursive functions is remarkably robust.
But we will be able to do even more once we have developed
an adequate means of handling sequences. We will identify finite
R (i ,s ) iff pi | s ∧ pi +1 ∤ s .
so we can let
len(s) = { 0                        if s = 0 or s = 1
         { 1 + (min i < s) R(i, s)  otherwise
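A Python sketch of the powers-of-primes coding (illustrative; the names encode, length, and element are ours, and the unbounded prime generator is a programming convenience, not itself a primitive recursive definition):

```python
def primes():
    # naive unbounded prime generator
    n = 2
    while True:
        if all(n % d for d in range(2, n)):
            yield n
        n += 1

def nth_prime(i):
    g = primes()
    for _ in range(i):
        next(g)
    return next(g)

def encode(seq):
    # <a0, ..., ak> = 2^(a0+1) * 3^(a1+1) * ... * p_k^(ak+1)
    s, gen = 1, primes()
    for a in seq:
        s *= next(gen) ** (a + 1)
    return s

def length(s):
    # len(s): for a valid code, 1 + the least i with p_{i+1} not dividing s
    if s in (0, 1):
        return 0
    i = 0
    while s % nth_prime(i + 1) == 0:
        i += 1
    return i + 1

def element(s, i):
    # (s)_i: the exponent of p_i in s, minus 1
    p, e = nth_prime(i), 0
    while s % p == 0:
        s //= p
        e += 1
    return e - 1
```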
Proposition 2.21. The function append(s ,a), which returns the re-
sult of appending a to the sequence s , is primitive recursive.
hconcat(s ,t , 0) = s
hconcat(s ,t ,n + 1) = append(hconcat(s ,t ,n), (t )n )
sequenceBound(x, k) = p_{k−1}^{k·(x+1)},
Proof. Exercise. □
2.12 Trees
Sometimes it is useful to represent trees as natural numbers, just
like we can represent sequences by numbers and properties of and
operations on them by primitive recursive relations and functions
on their codes. We’ll use sequences and their codes to do this. A
tree can be either a single node (possibly with a label) or else a
node (possibly with a label) connected to a number of subtrees.
The node is called the root of the tree, and the subtrees it is
connected to are called its immediate subtrees.
We code trees recursively as a sequence ⟨k ,d1 , . . . ,dk ⟩, where
k is the number of immediate subtrees and d1 , . . . , dk the codes
of the immediate subtrees. If the nodes have labels, they can be
included after the immediate subtrees. So a tree consisting just
of a single node with label l would be coded by ⟨0,l ⟩, and a tree
consisting of a root (labelled l1 ) connected to two single nodes
(labelled l2 , l3 ) would be coded by ⟨2, ⟨0,l2 ⟩, ⟨0,l3 ⟩,l1 ⟩.
g (s , 0) = f ((s )0 )
g (s ,k + 1) = g (s ,k ) ⌢ f ((s )k +1 )
hSubtreeSeq(t , 0) = ⟨t ⟩
hSubtreeSeq(t ,n + 1) = hSubtreeSeq(t ,n) ⌢ h (hSubtree(t ,n)).
h0 (x⃗ , 0) = f 0 (x⃗ )
h1 (x⃗ , 0) = f 1 (x⃗ )
h0 (x⃗ , y + 1) = g 0 (x⃗ , y,h0 (x⃗ , y),h1 (x⃗ , y))
h1 (x⃗ , y + 1) = g 1 (x⃗ , y,h0 (x⃗ , y),h1 (x⃗ , y))
h (x⃗ , 0) = f (x⃗ )
h (x⃗ , y + 1) = g (x⃗ , y, ⟨h (x⃗ , 0), . . . ,h (x⃗ , y)⟩).
h (x⃗ , 0) = f (x⃗ )
h (x⃗ , y + 1) = g (x⃗ , y,h (k (x⃗ ), y))
g (x, y) = fx (y)
h (x) = g (x,x) + 1
= fx (x) + 1.
g0(x) = x + 1
g_{n+1}(x) = g_n^x(x)
h (g 0 (x⃗ ), . . . , g k (x⃗ ))
the least x such that f (0, z⃗), f (1, z⃗), . . . , f (x, z⃗) are all
defined, and f (x, z⃗) = 0, if such an x exists
The proof of the normal form theorem is involved, but the ba-
sic idea is simple. Every partial recursive function has an index e ,
intuitively, a number coding its program or definition. If f (x) ↓,
the computation can be recorded systematically and coded by
some number s , and that s codes the computation of f on in-
put x can be checked primitive recursively using only x and the
definition e . This means that T is primitive recursive. Given the
full record of the computation s , the “upshot” of s is the value
of f (x), and it can be obtained from s primitive recursively as
well.
The normal form theorem shows that only a single un-
bounded search is required for the definition of any partial recur-
sive function. We can use the numbers e as “names” of partial
recursive functions, and write 𝜑e for the function f defined by the
equation in the theorem. Note that any partial recursive function
can have more than one index—in fact, every partial recursive
function has infinitely many indices.
is not computable.
In the context of partial recursive functions, the role of the
specification of a program may be played by the index e given in
Kleene’s normal form theorem. If f is a partial recursive func-
tion, any e for which the equation in the normal form theorem
Note that h (e ,x) = 0 if 𝜑e (x) ↑, but also when e is not the index
of a partial recursive function at all.
Summary
In order to show that Q represents all computable functions, we
need a precise model of computability that we can take as the
basis for a proof. There are, of course, many models of com-
putability, such as Turing machines. One model that plays a sig-
nificant role historically—it’s one of the first models proposed,
and is also the one used by Gödel himself—is that of the recur-
sive functions. The recursive functions are a class of arithmeti-
cal functions—that is, their domain and range are the natural
numbers—that can be defined from a few basic functions using a
few operations. The basic functions are zero, succ, and the pro-
jection functions. The operations are composition, primitive
recursion, and regular minimization. Composition is simply a
general version of “chaining together” functions: first apply one,
then apply the other to the result. Primitive recursion defines a
new function f from two functions g , h already defined, by stipu-
lating that the value of f for 0 is given by g , and the value for any
number n + 1 is given by h applied to f (n). Functions that can
be defined using just these two principles are called primitive
recursive. A relation is primitive recursive iff its characteristic
function is. It turns out that a whole list of interesting functions
and relations are primitive recursive (such as addition, multi-
plication, exponentiation, divisibility), and that we can define
new primitive recursive functions and relations from old ones us-
ing principles such as bounded quantification and bounded min-
imization. In particular, this allowed us to show that we can deal
with sequences of numbers in primitive recursive ways. That is,
there is a way to “code” sequences of numbers as single num-
bers in such a way that we can compute the i-th element, the
length, the concatenation of two sequences, etc., all using prim-
itive recursive functions operating on these codes. To obtain all
the computable functions, we finally added definition by regular
minimization to composition and primitive recursion. A func-
tion g (x, y) is regular iff, for every y it takes the value 0 for at
least one x. If f is regular, the least x such that g (x, y) = 0 al-
ways exists, and can be found simply by computing all the values
of g (0, y), g (1, y), etc., until one of them is = 0. The resulting
function f (y) = 𝜇x g (x, y) = 0 is the function defined by regular
minimization from g . It is always total and computable. The re-
sulting set of functions is called general recursive. One version
of the Church-Turing Thesis says that the computable arithmeti-
cal functions are exactly the general recursive ones.
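Regular minimization itself is easy to sketch in code (illustrative; the helper name mu and the integer square root example are ours):

```python
def mu(g):
    # regular minimization: f(y) = least x with g(x, y) = 0;
    # f is total only if g is regular, i.e., such an x always exists
    def f(y):
        x = 0
        while g(x, y) != 0:
            x += 1
        return x
    return f

# example: integer square root as the least x with (x + 1)^2 > y
isqrt = mu(lambda x, y: 0 if (x + 1) ** 2 > y else 1)
```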
Problems
Problem 2.1. Prove Proposition 2.5 by showing that the prim-
itive recursive definition of mult can be put into the form re-
quired by Definition 2.1 and showing that the corresponding func-
tions f and g are primitive recursive.
is primitive recursive.
Problem 2.5. Show that integer division d (x, y) = ⌊x/y⌋ (i.e., di-
vision, where you disregard everything after the decimal point)
is primitive recursive. When y = 0, we stipulate d (x, y) = 0. Give
an explicit definition of d using primitive recursion and compo-
sition.
sconcat(⟨s 0 , . . . ,sk ⟩) = s 0 ⌢ . . . ⌢ sk .
tail(𝛬) = 0 and
tail(⟨s 0 , . . . ,sk ⟩) = ⟨s 1 , . . . ,sk ⟩.
CHAPTER 3
Arithmetization
of Syntax
3.1 Introduction
In order to connect computability and logic, we need a way to talk
about the objects of logic (symbols, terms, formulas, derivations),
operations on them, and their properties and relations, in a way
amenable to computational treatment. We can do this directly,
by considering computable functions and relations on symbols,
sequences of symbols, and other objects built from them. Since
the objects of logical syntax are all finite and built from a count-
able set of symbols, this is possible for some models of compu-
tation. But other models of computation—such as the recursive
functions—are restricted to numbers, their relations and func-
tions. Moreover, ultimately we also want to be able to deal with
syntax within certain theories, specifically, in theories formulated
in the language of arithmetic. In these cases it is necessary to
arithmetize syntax, i.e., to represent syntactic objects, operations
on them, and their relations, as numbers, arithmetical functions,
and arithmetical relations, respectively. The idea, which goes
back to Leibniz, is to assign numbers to syntactic objects.
It is relatively straightforward to assign numbers to symbols
as their “codes.” Some symbols pose a bit of a challenge, since,
e.g., there are infinitely many variables, and even infinitely many
function symbols of each arity n. But of course it’s possible to
assign numbers to symbols systematically in such a way that, say,
v2 and v3 are assigned different codes. Sequences of symbols
(such as terms and formulas) are a bigger challenge. But if we can
deal with sequences of numbers purely arithmetically (e.g., by the
powers-of-primes coding of sequences), we can extend the coding
of individual symbols to coding of sequences of symbols, and then
further to sequences or other arrangements of formulas, such as
derivations. This extended coding is called “Gödel numbering.”
Every term, formula, and derivation is assigned a Gödel number.
By coding sequences of symbols as sequences of their codes,
and by choosing a system of coding sequences that can be dealt
with using computable functions, we can then also deal with
Gödel numbers using computable functions. In practice, all the
relevant functions will be primitive recursive. For instance, com-
puting the length of a sequence and computing the i -th element
of a sequence from the code of the sequence are both primitive
recursive. If the number coding the sequence is, e.g., the Gödel
number of a formula A, we immediately see that the length of a
formula and the (code of the) i -th symbol in a formula can also be
computed from the Gödel number of A. It is a bit harder to prove
that, e.g., the property of being the Gödel number of a correctly
formed term or of a correct derivation is primitive recursive. It is
nevertheless possible, because the sequences of interest (terms,
formulas, derivations) are inductively defined.
As an example, consider the operation of substitution. If A
is a formula, x a variable, and t a term, then A[t /x] is the result
of replacing every free occurrence of x in A by t . Now suppose
we have assigned Gödel numbers to A, x, t —say, k , l , and m, re-
spectively. The same scheme assigns a Gödel number to A[t /x],
say, n. This mapping—of k , l , and m to n—is the arithmetical
analog of the substitution operation. When the substitution oper-
ation maps A, x, t to A[t /x], the arithmetized substitution func-
tion maps the Gödel numbers k , l , m to the Gödel number n.
We will see that this function is primitive recursive.
⊥      ¬      ∨      ∧      →      ∀
⟨0,0⟩  ⟨0,1⟩  ⟨0,2⟩  ⟨0,3⟩  ⟨0,4⟩  ⟨0,5⟩

∃      =      (      )      ,
⟨0,6⟩  ⟨0,7⟩  ⟨0,8⟩  ⟨0,9⟩  ⟨0,10⟩
1. Fn(x, n) iff x is the code of f^n_i for some i, i.e., x is the code of an
n-ary function symbol.
Note that codes and Gödel numbers are different things. For
instance, the variable v5 has a code c_{v5} = ⟨1, 5⟩ = 2² · 3⁶. But the
variable v5 considered as a term is also a sequence of symbols (of
length 1). The Gödel number #v5# of the term v5 is ⟨c_{v5}⟩ =
2^{c_{v5}+1} = 2^{2²·3⁶+1}.
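A Python sketch of these codes (illustrative; pair and godel_number are our names for the coding functions, and the prime list is truncated for the example):

```python
def pair(i, j):
    # <i, j> = 2^(i+1) * 3^(j+1), the powers-of-primes pairing
    return 2 ** (i + 1) * 3 ** (j + 1)

def godel_number(codes):
    # Gödel number of a sequence of symbol codes:
    # <c0, ..., ck> = 2^(c0+1) * 3^(c1+1) * ...
    primes = [2, 3, 5, 7, 11, 13]  # enough for short sequences
    s = 1
    for idx, c in enumerate(codes):
        s *= primes[idx] ** (c + 1)
    return s

c_v5 = pair(1, 5)            # code of the variable v5: 2^2 * 3^6
gn_v5 = godel_number([c_v5]) # Gödel number of the term v5: 2^(c_v5 + 1)
```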
1. Var((y)i ), or
2. Const((y)i ), or
(y)i = #f^n_j(# ⌢ flatten(z) ⌢ #)#,
num(0) = #0#
num(n + 1) = #′(# ⌢ num(n) ⌢ #)#. □
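At the level of strings, the numeral function is easy to mimic (illustrative; the book's num operates on Gödel numbers of these strings, not on the strings themselves):

```python
def num(n):
    # num(0) = "0"; num(n + 1) = "′(" + num(n) + ")"
    # string-level version of the numeral function
    if n == 0:
        return "0"
    return "′(" + num(n - 1) + ")"
```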
1. There are n, j < x, and z < x such that for each i < n,
Term((z)i) and

x = #P^n_j(# ⌢ flatten(z) ⌢ #)#.

3. x = #⊥#. □
Proposition 3.8. The relation Frm(x) which holds iff x is the Gödel
number of a formula is primitive recursive.
Proof. Exercise. □
3.5 Substitution
Recall that substitution is the operation of replacing all free oc-
currences of a variable u in a formula A by a term t , written
A[t /u]. This operation, when carried out on Gödel numbers of
variables, formulas, and terms, is primitive recursive.
hSubst(x, y, z, 0) = Λ
hSubst(x, y, z, i + 1) = hSubst(x, y, z, i) ⌢ y              if FreeOcc(x, z, i)
                         append(hSubst(x, y, z, i), (x)i)    otherwise.
Proof. Exercise. □
    [A ∧ B]^1
    ───────── ∧Elim
        A
    ───────────── →Intro, discharging 1
    (A ∧ B) → A
Proposition 3.16. The property Correct(d ) which holds iff the last
inference in the derivation 𝛿 with Gödel number d is correct, is primitive
recursive.
    𝛿1        𝛿2
    A         B
    ─────────── ∧Intro
      A ∧ B
(d )0 = 2 ∧ DischargeLabel(d ) = 0 ∧ LastRule(d ) = 1 ∧
EndFmla(d ) = # ( # ⌢ EndFmla((d )1 ) ⌢ # ∧# ⌢ EndFmla((d )2 ) ⌢ # ) # .
(d )0 = 1 ∧ (d )1 = 0 ∧ DischargeLabel(d ) = 0 ∧
(∃t < d ) (ClTerm(t )∧EndFmla(d ) = # =( # ⌢ t ⌢ # ,# ⌢ t ⌢ # ) # )
(d)0 = 1 ∧
(∃a < d) (Discharge(a, (d)1, DischargeLabel(d)) ∧
EndFmla(d) = #(# ⌢ a ⌢ #→# ⌢ EndFmla((d)1) ⌢ #)#).
(d)0 = 1 ∧ DischargeLabel(d) = 0 ∧
(∃a < d) (∃x < d) (∃t < d) (ClTerm(t) ∧ Var(x) ∧
Subst(a, t, x) = EndFmla((d)1) ∧ EndFmla(d) = #∃# ⌢ x ⌢ a).
Sent(EndFmla(d )) ∧
(LastRule(d ) = 1 ∧ FollowsBy∧Intro (d )) ∨ · · · ∨
(LastRule(d ) = 16 ∧ FollowsBy=Elim (d )) ∨
(∃n < d ) (∃x < d ) (d = ⟨0,x,n⟩).
Summary
The proof of the incompleteness theorems requires that we have
a way to talk about provability in a theory (such as PA) in the
language of the theory itself, i.e., in the language of arithmetic.
But the language of arithmetic only deals with numbers, not with
formulas or derivations. The solution to this problem is to define
a systematic mapping from formulas and derivations to numbers.
The number associated with a formula or a derivation is called its
Gödel number. If A is a formula, #A# is its Gödel number. We
showed that important operations on formulas turn into primi-
tive recursive functions on the respective Gödel numbers. For
instance, A[t /x], the operation of substituting a term t for ev-
ery free occurrence of x in A, corresponds to an arithmetical
function subst(n,m,k ) which, if applied to the Gödel numbers
of A, t , and x, yields the Gödel number of A[t /x]. In other
words, subst( #A# , #t # , # x # ) = #A[t /x] # . Likewise, properties of
derivations turn into primitive recursive relations on the respec-
tive Gödel numbers. In particular, the property Deriv(n) that
holds of n if it is the Gödel number of a correct derivation in
natural deduction, is primitive recursive. Showing that these are
primitive recursive required a fair amount of work, and at times
some ingenuity, and depended essentially on the fact that op-
erating with sequences is primitive recursive. If a theory T is
decidable, then we can use Deriv to define a decidable relation
Problems
Problem 3.1. Show that the function flatten(z ), which turns the
sequence ⟨#t1 # , . . . , #tn # ⟩ into #t1 , . . . ,tn # , is primitive recursive.
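For orientation (not a solution to the problem, which asks for primitive recursiveness), here is what flatten computes, written in ordinary Python. The prime-power sequence coding and the code ⟨0, 10⟩ for the comma follow the conventions above.

```python
# What flatten computes, in ordinary Python (a sanity check, not the
# requested primitive-recursiveness proof).  Sequence coding follows
# the conventions above: <s0, ..., sk> = 2**(s0+1) * 3**(s1+1) * ...
# * p_k**(sk+1); the comma has symbol code <0, 10>.

def primes():
    """Generate 2, 3, 5, ... by trial division (fine at this scale)."""
    found = []
    n = 2
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def encode(seq):
    code, gen = 1, primes()
    for s in seq:
        code *= next(gen) ** (s + 1)
    return code

def decode(code):
    seq, gen = [], primes()
    while code > 1:
        p, e = next(gen), 0
        while code % p == 0:
            code, e = code // p, e + 1
        if e == 0:      # not a well-formed sequence code
            break
        seq.append(e - 1)
    return seq

COMMA = 2 * 3 ** 11       # symbol code <0, 10> = 2^(0+1) * 3^(10+1)

def flatten(z):
    """From the code of <#t1#, ..., #tn#> to the code of #t1, ..., tn#."""
    out = []
    for i, t in enumerate(decode(z)):
        if i > 0:
            out.append(COMMA)
        out.extend(decode(t))   # each #ti# is itself a coded symbol sequence
    return encode(out)
```

For instance, with t = encode([12]) (the variable v0, with code ⟨1, 0⟩ = 12, as a length-1 term), flatten(encode([t, t])) equals encode([12, COMMA, 12]).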
Problem 3.4. Prove Proposition 3.9. You may make use of the
fact that any substring of a formula which is a formula is a sub-
formula of it.
1. FollowsBy→Elim (d ),
2. FollowsBy=Elim (d ),
3. FollowsBy∨Elim (d ),
4. FollowsBy∀Intro (d ).
For the last one, you will have to also show that you can test
primitive recursively if the last inference of the derivation with
Gödel number d satisfies the eigenvariable condition, i.e., the
Representability
in Q
4.1 Introduction
The incompleteness theorems apply to theories in which basic
facts about computable functions can be expressed and proved.
We will describe a very minimal such theory called “Q ” (or,
sometimes, “Robinson’s Q ,” after Raphael Robinson). We will
say what it means for a function to be representable in Q , and
then we will prove the following:
A function is representable in Q if and only if it is
computable.
For one thing, this provides us with another model of computabil-
ity. But we will also use it to show that the set {A : Q ⊢ A} is not
decidable, by reducing the halting problem to it. By the time we
are done, we will have proved much stronger things than this.
The language of Q is the language of arithmetic; Q consists
of the following axioms (to be used in conjunction with the other
axioms and rules of first-order logic with identity predicate):
∀x ∀y (x′ = y′ → x = y) (Q1)
∀x 0 ≠ x′ (Q2)
∀x (x = 0 ∨ ∃y x = y′) (Q3)
∀x (x + 0) = x (Q4)
∀x ∀y (x + y′) = (x + y)′ (Q5)
∀x (x × 0) = 0 (Q6)
∀x ∀y (x × y′) = ((x × y) + x) (Q7)
∀x ∀y (x < y ↔ ∃z (z + x) = y) (Q8)
1. A f (n 0 , . . . ,nk ,m)
Proof. Let’s first give the intuitive idea for why this is
true. If f (x 0 , . . . ,x k ) is representable in Q , there is a for-
mula A(x 0 , . . . ,x k , y) such that
z ≡ y0 mod x 0
z ≡ y1 mod x 1
..
.
z ≡ yn mod x n .
x0 = 1 + j !
x1 = 1 + 2 · j !
x2 = 1 + 3 · j !
..
.
x n = 1 + (n + 1) · j !
(1 + (i + 1) j !) − (1 + (k + 1) j !) = (i − k ) j !.
1. not(x) = χ_=(x, 0)
We can then show that all of the following are also definable
without primitive recursion:
2. Projections
and
3. x < y
4. x | y
Now define
and
𝛽 (d ,i ) = 𝛽 ∗ (K (d ),L(d ),i ).
This is the function we need. Given a0 , . . . ,an , as above, let
j = max(n,a0 , . . . ,an ) + 1,
d0 ≡ ai mod (1 + (i + 1)d1 )
𝛽 (d ,i ) = 𝛽 ∗ (d0 ,d1 ,i )
= rem(1 + (i + 1)d1 ,d0 )
= ai
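The whole construction can be checked numerically. The Python sketch below follows the text: d1 = j! for j = max(n, a0, . . . , an) + 1, and d0 is a solution of the simultaneous congruences, found here by the Chinese Remainder Theorem; rem takes its arguments in the order used above. The function names are ours.

```python
from math import factorial

def rem(m, x):
    """Remainder of x on division by m (argument order as in the text)."""
    return x % m

def beta_star(d0, d1, i):
    return rem(1 + (i + 1) * d1, d0)

def crt(pairs):
    """Find z with z = a (mod m) for all (a, m); moduli pairwise coprime."""
    z, M = 0, 1
    for a, m in pairs:
        t = ((a - z) * pow(M, -1, m)) % m   # modular inverse, Python 3.8+
        z, M = z + M * t, M * m
    return z

def beta_encode(seq):
    """Find d0, d1 with beta*(d0, d1, i) = seq[i], following the text:
    d1 = j! for j = max(n, a0, ..., an) + 1, so the moduli 1 + (i+1)*d1
    are pairwise coprime and exceed every a_i; d0 exists by the CRT."""
    n = len(seq) - 1
    d1 = factorial(max([n] + list(seq)) + 1)
    d0 = crt([(a, 1 + (i + 1) * d1) for i, a in enumerate(seq)])
    return d0, d1

d0, d1 = beta_encode([3, 1, 4, 1, 5])
print([beta_star(d0, d1, i) for i in range(5)])  # prints [3, 1, 4, 1, 5]
```

Since each a_i is smaller than every modulus, the remainder recovers a_i exactly, which is the content of the computation displayed above.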
n + m = n + m and
∀y ((n + m) = y → y = n + m).
is represented in Q by
(x 0 = x 1 ∧ y = 1) ∨ (x 0 ≠ x 1 ∧ y = 0).
Note that the lemma does not say much: in essence it says
that Q can prove that different numerals denote different objects.
For example, Q proves 0′′ ≠ 0′′′. But showing that this holds in
general requires some care. Note also that although we are using
induction, it is induction outside of Q .
(n = n ∧ y = 1) ∨ (n ≠ n ∧ y = 0)
(n = m ∧ y = 1) ∨ (n ≠ m ∧ y = 0)
Lemma 4.15. Q ⊢ (n + m) = n + m
Q ⊢ ∀y (Aadd (n,m, y) → y = k ).
Q ⊢ (n + m) = n + m,
Proof. Exercise. □
Lemma 4.17. Q ⊢ (n × m) = n · m
Proof. Exercise. □
Q ⊢ A g (n,k )
Q ⊢ A f (k ,m)
Q ⊢ A g (n,k ) ∧ A f (k ,m)
Q ⊢ ∃y (A g (n, y) ∧ A f (y,m)),
i.e., Q ⊢ Ah (n,m). □
Q ⊢ ∀z (A f (k , z ) → z = m)
since A f represents f . Using just a little bit of logic, we can show
that also
∃y 0 , . . . ∃y k −1 (A g 0 (x 0 , . . . ,xl −1 , y 0 ) ∧ · · · ∧
A gk −1 (x 0 , . . . ,xl −1 , y k −1 ) ∧ A f (y 0 , . . . , y k −1 , z ))
represents
Proof. Exercise. □
which (a) satisfies g (x, z ) = 0 and (b) is the least such, i.e., for
any w < x, g (w, z ) ≠ 0. So the following is a natural choice:
Lemma 4.21. For every constant symbol a and every natural num-
ber n,
Q ⊢ (a ′ + n) = (a + n) ′ .
Q ⊢ (a′ + 0) = a′ by axiom Q4 (4.1)
Q ⊢ (a + 0) = a by axiom Q4 (4.2)
Q ⊢ (a + 0)′ = a′ by eq. (4.2) (4.3)
Q ⊢ (a′ + 0) = (a + 0)′ by eq. (4.1) and eq. (4.3)
Q ⊢ (a′ + n′) = (a′ + n)′ by axiom Q5 (4.4)
Q ⊢ (a′ + n)′ = ((a + n)′)′ inductive hypothesis (4.5)
Q ⊢ (a′ + n′) = ((a + n)′)′ by eq. (4.4) and eq. (4.5). □
Q ⊢ ∀x (x < n + 1 → (x = 0 ∨ · · · ∨ x = n)).
Q ⊢ A g (m,n, 0).
Q ⊢ ¬A g (k ,n, 0).
We get that
∀y (A 𝜒 R (n 0 , . . . ,nk , y) → y = 0).
4.10 Undecidability
We call a theory T undecidable if there is no computational pro-
cedure which, after finitely many steps and unfailingly, provides
a correct answer to the question “does T prove A?” for any
sentence A in the language of T. So Q would be decidable iff
there were a computational procedure which decides, given a sen-
tence A in the language of arithmetic, whether Q ⊢ A or not. We
can make this more precise by asking: Is the relation ProvQ (y),
which holds of y iff y is the Gödel number of a sentence provable
in Q , recursive? The answer is: no.
is not recursive.
Summary
In order to show how theories like Q can “talk” about com-
putable functions—and especially about provability (via Gödel
numbers)—we established that Q represents all computable
functions. By “Q represents f (n)” we mean that there is a for-
mula A f (x, y) in LA which expresses that f (x) = y, and Q can
prove that it does. This, in turn, means that whenever f (n) = m,
then T ⊢ A f (n,m) and T ⊢ ∀y (A f (n, y) → y = m). (Here, n is the
standard numeral for n, i.e., the term 0′...′ with n ′s. The term n
picks out the number n in the standard model N, so it’s a conve-
nient way of representing the number n in LA .) To prove that Q
represents all computable functions we go back to the characteri-
zation of computable functions as those that can be defined from
zero, succ, and the projection functions, by composition, prim-
itive recursion, and regular minimization. While it is relatively
easy to prove that the basic functions are representable and that
Problems
Problem 4.1. Prove that y = 0, y = x′, and y = x_i represent zero,
succ, and P_i^n, respectively.
Incompleteness
and Provability
5.1 Introduction
Hilbert thought that a system of axioms for a mathematical struc-
ture, such as the natural numbers, is inadequate unless it allows
one to derive all true statements about the structure. Combined
with his later interest in formal systems of deduction, this suggests
that he thought that we should guarantee that, say, the formal system
we are using to reason about the natural numbers is not only
consistent, but also complete, i.e., every statement in its language
is either derivable or its negation is. Gödel’s first incomplete-
ness theorem shows that no such system of axioms exists: there
is no complete, consistent, axiomatizable formal system for arith-
metic. In fact, no “sufficiently strong,” consistent, axiomatizable
mathematical theory is complete.
A more important goal of Hilbert’s, the centerpiece of his
program for the justification of modern (“classical”) mathemat-
ics, was to find finitary consistency proofs for formal systems rep-
resenting classical reasoning. With regard to Hilbert’s program,
then, Gödel’s second incompleteness theorem was a much bigger
blow. The second incompleteness theorem can be stated in vague
terms, like the first incompleteness theorem. Roughly speaking,
Lemma 5.1. Let T be any theory extending Q , and let B (x) be any
formula with only the variable x free. Then there is a sentence A such
that T ⊢ A ↔ B (⌜A⌝).
But what happens when one takes the phrase “yields falsehood
when preceded by its quotation,” and precedes it with a quoted
version of itself? Then one has the original sentence! In short,
the sentence asserts that it is false.
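The same quotation trick can be watched in miniature in a program that prints its own source code (a "quine"): a template with a hole is applied to a quotation of itself, just as the phrase is preceded by its quotation. This is only an illustration, not part of the formal development.

```python
# A self-reproducing program built by the quotation trick: a template
# with a hole (%r) is filled with a quotation of the template itself.
s = 's = %r\nprint(s %% s)'
print(s % s)  # prints this program's own two lines of source
```

Here s plays the role of the phrase, and s % s (the phrase preceded by its own quotation) reproduces the entire program, just as the diagonalization E(⌜E(x)⌝) below "talks about" itself.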
Lemma 5.2. Let B (x) be any formula with one free variable x. Then
there is a sentence A such that Q ⊢ A ↔ B (⌜A⌝).
Proof. Given B (x), let E (x) be the formula ∃y (D diag (x, y) ∧ B (y))
and let A be its diagonalization, i.e., the formula E (⌜E (x)⌝).
Since D diag represents diag, and diag( # E (x) # ) = #A# , Q can
derive
Consider such a y. Since D diag (⌜E (x)⌝, y), by eq. (5.2), y = ⌜A⌝.
So, from B (y) we have B (⌜A⌝).
Now suppose B (⌜A⌝). By eq. (5.1), we
have D diag (⌜E (x)⌝, ⌜A⌝) ∧ B (⌜A⌝). It follows that
∃y (D diag (⌜E (x)⌝, y) ∧ B (y)). But that's just E (⌜E (x)⌝), i.e., A. □
Proof. Recall that ProvT (y) is defined as ∃x PrfT (x, y), where
PrfT (x, y) represents the decidable relation which holds iff x is the
Gödel number of a derivation of the sentence with Gödel num-
ber y. The relation that holds between x and y if x is the Gödel
number of a refutation of the sentence with Gödel number y is
also decidable. Let not(x) be the primitive recursive function
which does the following: if x is the code of a formula A, not(x)
is a code of ¬A. Then RefT (x, y) holds iff PrfT (x, not(y)). Let
RefT (x, y) represent it. Then, if T ⊢ ¬A and 𝛿 is a corresponding
derivation, Q ⊢ RefT (⌜𝛿⌝, ⌜A⌝). We define RProvT (y) as
but that’s just RProvT (⌜RT ⌝). By eq. (5.4), Q ⊢ ¬RT . Since T
extends Q , also T ⊢ ¬RT . We’ve assumed that T ⊢ RT , so T
would be inconsistent, contrary to the assumption of the theorem.
Now, let’s show that T ⊬ ¬RT . Again, suppose it did,
and suppose n is the Gödel number of a derivation of ¬RT .
Then RefT (n, #RT # ) holds, and since RefT represents RefT in
Q , Q ⊢ RefT (n, ⌜RT ⌝). We’ll again show that T would then be
inconsistent because it would also derive RT . Since
is logically equivalent to
The only way to verify that these three properties hold is to de-
scribe the formula ProvPA (y) carefully and use the axioms of PA
to describe the relevant formal proofs. Conditions (1) and (2) are
easy; it is really condition (3) that requires work. (Think about
what kind of work it entails . . . ) Carrying out the details would
be tedious and uninteresting, so here we will ask you to take it on
faith that PA has the three properties listed above. A reasonable
choice of ProvPA (y) will also satisfy
The use of logic in the above involves just elementary facts from propositional
logic, e.g., eq. (5.7) uses ⊢ ¬A ↔ (A → ⊥) and eq. (5.12)
uses A → (B → C), A → B ⊢ A → C. The use of condition
P2 in eq. (5.9) and eq. (5.10) relies on instances of P2,
Prov (⌜A → B ⌝) → ( Prov (⌜A⌝) → Prov (⌜B ⌝)). In the first one,
A ≡ G and B ≡ Prov (⌜G ⌝) → ⊥; in the second, A ≡ Prov (⌜G ⌝)
and B ≡ ⊥.
The more abstract version of the second incompleteness the-
orem is as follows:
T ⊢ ¬ProvT (⌜G ⌝) ↔ G .
T ⊢ ProvT (⌜H ⌝) ↔ H .
2. Suppose X is true.
T ⊢ ProvT (⌜H ⌝) ↔ H
in particular
T ⊢ ProvT (⌜H ⌝) → H
Now one can ask, is the converse also true? That is, is ev-
ery relation definable in N computable? The answer is no. For
example:
so ∃s DT (z ,x,s ) defines H in N. □
Summary
The first incompleteness theorem states that for any consistent,
axiomatizable theory T that extends Q , there is a sentence G T
such that T ⊬ G T . G T is constructed in such a way that G T , in
a roundabout way, says “T does not prove G T .” Since T does
not, in fact, prove it, what it says is true. If N ⊨ T, then T
Problems
Problem 5.1. Every 𝜔-consistent theory is consistent. Show that
the converse does not hold, i.e., that there are consistent but 𝜔-
inconsistent theories. Do this by showing that Q ∪ {¬G Q } is
consistent but 𝜔-inconsistent.
2. T ⊢ A → ProvT (⌜A⌝).
4. T ⊢ ProvT (⌜A⌝) → A
Models of
Arithmetic
6.1 Introduction
The standard model of arithmetic is the structure N with |N| = N
in which 0, ′, +, ×, and < are interpreted as you would expect.
That is, 0 is 0, ′ is the successor function, + is interpreted as
addition and × as multiplication of the numbers in N. Specifically,
0N = 0
′N (n) = n + 1
+N (n,m) = n + m
×N (n,m) = nm
0M = ∅
′M(s) = s ⌢ a
+M(n, m) = a^(n+m)
×M(n, m) = a^(nm)
These two structures are “essentially the same” in the sense that
the only difference is the elements of the domains but not how
the elements of the domains are related among each other by
the interpretation functions. We say that the two structures are
isomorphic.
It is an easy consequence of the compactness theorem that
any theory true in N also has models that are not isomorphic
to N. Such structures are called non-standard. The interesting
thing about them is that while the elements of a standard model
(i.e., N, but also all structures isomorphic to it) are exhausted by
the values of the standard numerals n, i.e.,
1. |M| = |M′|

2. For every constant symbol c ∈ L, c^M = c^M′.

4. For every predicate symbol P ∈ L, P^M = P^M′.

M ⊨ A iff M′ ⊨ A.
Proof. Exercise. □
Proof. Clearly, every ValM (n) ∈ |M|. We just have to show that ev-
ery x ∈ |M| is equal to ValM (n) for some n. Since M is standard,
it is isomorphic to N. Suppose g : N → |M| is an isomorphism.
Then g (n) = g (ValN (n)) = ValM (n). But for every x ∈ |M|, there
is an n ∈ N such that g (n) = x, since g is surjective. □
g (n + 1) = ValM (n + 1) by definition of g
= ValM (n ′) since n + 1 ≡ n ′
= ′M (ValM (n)) by definition of ValM (t ′)
= ′M (g (n)) by definition of g
= ′M (h (n)) by induction hypothesis
= h (′N (n)) since h is an isomorphism
= h (n + 1) □
𝛤 = TA ∪ {c ≠ 0,c ≠ 1,c ≠ 2, . . . }
6.7 Models of Q
We know that there are non-standard structures that make the
same sentences true as N does, i.e., that are models of TA. Since
N ⊨ Q, any model of TA is also a model of Q. Q is much
weaker than TA, e.g., Q ⊬ ∀x ∀y (x +y) = (y +x). Weaker theories
are easier to satisfy: they have more models. E.g., Q has models
which make ∀x ∀y (x + y) = (y + x) false, but those cannot also
be models of TA, or PA for that matter. Models of Q are also
relatively simple: we can specify them explicitly.
0K = 0
′K(x) = x + 1   if x ∈ N
        a       if x = a
+K(x, y) = x + y   if x, y ∈ N
           a       otherwise
×K(x, y) = xy   if x, y ∈ N
           0    if x = 0 or y = 0
           a    otherwise
<K = {⟨x, y⟩ : x, y ∈ N and x < y} ∪ {⟨x, a⟩ : x ∈ |K|}
x     x∗          x ⊕ y   0     m       a          x ⊗ y   0     m      a
n     n + 1       0       0     m       a          0       0     0      0
a     a           n       n     n + m   a          n       0     nm     a
                  a       a     a       a          a       0     a      a
(n ⊕ m ∗ ) = (n + (m + 1)) = (n + m) + 1 = (n ⊕ m) ∗
(x ⊕ a ∗ ) = (x ⊕ a) = a = a ∗ = (x ⊕ a) ∗
(a ⊕ n ∗ ) = (a ⊕ (n + 1)) = a = a ∗ = (a ⊕ n) ∗
(a ⊕ a ∗ ) = (a ⊕ a) = a = a ∗ = (a ⊕ a) ∗
x     x∗          x ⊕ y   m       a     b
n     n + 1       n       n + m   b     a
a     a           a       a       b     a
b     b           b       b       b     a
a ⊕ a ∗ = b = b ∗ = (a ⊕ a) ∗
b ⊕ b ∗ = a = a ∗ = (b ⊕ b) ∗
b ⊕ a ∗ = b = b ∗ = (b ⊕ a) ∗
a ⊕ b ∗ = a = a ∗ = (a ⊕ b) ∗
n ⊕ a ∗ = n ⊕ a = b = b ∗ = (n ⊕ a) ∗
n ⊕ b ∗ = n ⊕ b = a = a ∗ = (n ⊕ b) ∗
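These computations are easy to check mechanically. The Python sketch below implements the addition table of the two-element extension just given and verifies that every instance of x ⊕ y∗ = (x ⊕ y)∗ (the model-theoretic content of axiom Q5) holds on a sample of the domain, while commutativity of ⊕ fails; it is an illustration only.

```python
A, B = "a", "b"   # the two non-standard elements of the model

def succ(x):
    """The successor function of the model: a* = a and b* = b."""
    return x + 1 if isinstance(x, int) else x

def plus(x, y):
    """The addition table above; standard numbers add as usual."""
    if isinstance(x, int) and isinstance(y, int):
        return x + y
    if isinstance(x, int):                 # n + a = b,  n + b = a
        return B if y == A else A
    if x == A:
        if isinstance(y, int):             # a + m = a
            return A
        return B if y == A else A          # a + a = b,  a + b = a
    if isinstance(y, int):                 # b + m = b
        return B
    return B if y == A else A              # b + a = b,  b + b = a

dom = [0, 1, 2, 3, A, B]

# Every instance of x + y* = (x + y)* holds in the model...
assert all(plus(x, succ(y)) == succ(plus(x, y)) for x in dom for y in dom)

# ...but addition is not commutative:
print(plus(A, B), plus(B, A))  # prints: a b
```

Since a ⊕ b = a but b ⊕ a = b, this structure cannot be a model of any theory proving ∀x ∀y (x + y) = (y + x), such as PA.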
6.8 Models of PA
Any non-standard model of TA is also one of PA. We know that
non-standard models of TA and hence of PA exist. We also know
that such non-standard models contain non-standard “numbers,”
i.e., elements of the domain that are “beyond” all the standard
“numbers.” But how are they arranged? How many are there?
We’ve seen that models of the weaker theory Q can contain as
few as a single non-standard number. But these simple structures
are not models of PA or TA.
The key to understanding the structure of models of PA or
TA is to see what facts are derivable in these theories. For in-
stance, already PA proves that ∀x x ≠ x ′ and ∀x ∀y (x +y) = (y +x),
so this rules out simple structures (in which these sentences are
false) as models of PA.
Suppose M is a model of PA. Then if PA ⊢ A, M ⊨ A. Let’s
again use z for 0M, ∗ for ′M, ⊕ for +M, ⊗ for ×M, and ≺ for <M.
Any sentence A then states some condition about z, ∗, ⊕, ⊗, and
2. If x ≺ y and y ≺ z then x ≺ z.
3. For any x ≠ y, x ≺ y or y ≺ x
Proof. PA proves:
1. ∀x ¬x < x
2. ∀x ∀y ∀z ((x < y ∧ y < z ) → x < z )
3. ∀x ∀y ((x < y ∨ y < x) ∨ x = y) □
Proof. Exercise. □
We call this subset the block of x and write it as [x]. It has no least and
no greatest element. It can be characterized as the set of those y ∈ |M|
such that, for some standard n, x ⊕ n = y or y ⊕ n = x.
Proof. Clearly, such a set [x] always exists since every element y
of |M| has a unique successor y ∗ and unique predecessor ∗ y. For
successive elements y, y∗ we have y ≺ y∗, and y∗ is the ≺-least
element of |M| such that y is ≺-less than it. Since always ∗y ≺ y
and y ≺ y∗, [x] has no least or greatest element. If y ∈ [x] then
x ∈ [y], for then either y ∗...∗ = x or x ∗...∗ = y. If y ∗...∗ = x (with n
∗’s), then y ⊕ n = x and conversely, since PA ⊢ ∀x x ′...′ = (x + n)
(if n is the number of ′’s). □
Proposition 6.24. If [x] ≠ [y] and x ≺ y, then for any u ∈ [x] and
any v ∈ [y], u ≺ v.
Proof. Exercise. □
0K = 0
′K(x) = x + 1   if x ∈ N
        a       if x = a
+K(x, y) = x + y   if x, y ∈ N
           a       otherwise
×K(x, y) = xy   if x, y ∈ N
           0    if x = 0 or y = 0
           a    otherwise
<K = {⟨x, y⟩ : x, y ∈ N and x < y} ∪ {⟨x, a⟩ : x ∈ |K|}
Summary
A model of arithmetic is a structure for the language LA of
arithmetic. There is one distinguished such model, the standard
Problems
Problem 6.1. Prove Proposition 6.2.
Problem 6.2. Carry out the proof of (b) of Theorem 6.6 in de-
tail. Make sure to note where each of the five properties charac-
terizing isomorphisms of Definition 6.5 is used.
∀x ∀y (x′ = y′ → x = y) (Q1)
∀x 0 ≠ x′ (Q2)
∀x (x = 0 ∨ ∃y x = y′) (Q3)
1. M1 ⊨ Q 1 , M1 ⊨ Q 2 , M1 ⊭ Q 3 ;
2. M2 ⊨ Q 1 , M2 ⊭ Q 2 , M2 ⊨ Q 3 ; and
3. M3 ⊭ Q 1 , M3 ⊨ Q 2 , M3 ⊨ Q 3 ;
Obviously, you just have to specify 0Mi and ′Mi for each.
Problem 6.6. Prove that K from Example 6.18 satisfies the re-
maining axioms of Q,
∀x (x × 0) = 0 (Q6)
∀x ∀y (x × y′) = ((x × y) + x) (Q7)
∀x ∀y (x < y ↔ ∃z (z + x) = y) (Q8)
Second-Order
Logic
7.1 Introduction
In first-order logic, we combine the non-logical symbols of a given
language, i.e., its constant symbols, function symbols, and pred-
icate symbols, with the logical symbols to express things about
first-order structures. This is done using the notion of satisfac-
tion, which relates a structure M, together with a variable assign-
ment s , and a formula A: M,s ⊨ A holds iff what A expresses when
its constant symbols, function symbols, and predicate symbols
are interpreted as M says, and its free variables are interpreted
as s says, is true. The interpretation of the identity predicate =
is built into the definition of M,s ⊨ A, as is the interpretation
of ∀ and ∃. The former is always interpreted as the identity rela-
tion on the domain |M| of the structure, and the quantifiers are
always interpreted as ranging over the entire domain. But, cru-
cially, quantification is only allowed over elements of the domain,
and so only object variables are allowed to follow a quantifier.
In second-order logic, both the language and the definition of
satisfaction are extended to include free and bound function and
predicate variables, and quantification over them. These vari-
ables are related to function symbols and predicate symbols the
7.3 Satisfaction
To define the satisfaction relation M,s ⊨ A for second-order for-
mulas, we have to extend the definitions to cover second-order
variables. The notion of a structure is the same for second-order
logic as it is for first-order logic. There is only a difference for
variable assignments s: these now must not just provide values for
the first-order variables, but also for the second-order variables.
t ≡ u (t1 , . . . ,tn ):
M ⊨ Inf iff |M| is infinite. We can then define Fin ≡ ¬Inf; M ⊨ Fin
iff |M| is finite. No single sentence of pure first-order logic can
express that the domain is infinite although an infinite set of them
can. There is no set of sentences of pure first-order logic that is
satisfied in a structure iff its domain is finite.
Proposition 7.15. M ⊨ Inf iff |M| is infinite.
m 0 ,m1 ,m 2 , . . .
∀x ∀y (x ′ = y ′ → x = y)
∀x (x = 0 ∨ ∃y x = y ′)
∀x (x + 0) = x
∀x ∀y (x + y ′) = (x + y) ′
∀x (x × 0) = 0
∀x ∀y (x × y ′) = ((x × y) + x)
∀x ∀y (x < y ↔ ∃z (z ′ + x) = y)
A ≥n ≡ ∃x 1 . . . ∃x n (x 1 ≠ x 2 ∧ x 1 ≠ x 3 ∧ · · · ∧ x n−1 ≠ x n ).
𝛤 = {¬Inf,A ≥1 ,A ≥2 ,A ≥3 , . . . }.
∃z ∃u (X (z ) ∧ ∀x (X (x) → X (u (x))) ∧
∀Y ((Y (z ) ∧ ∀x (Y (x) → Y (u (x)))) → X = Y ))
Pow(Y,R,X ) ≡
∀Z (Z ⊆ X → ∃x (Y (x) ∧ Codes(x,R,Z ))) ∧
∀x (Y (x) → ∀Z (Codes(x,R,Z ) → Z ⊆ X ))
∀X ∀Y ∀R (Pow(Y,R,X )→
¬∃u (∀x ∀y (u (x) = u (y) → x = y) ∧
∀x (Y (x) → X (u (x)))))
is valid.
expresses that s (Y ) ≈ R.
M ⊨ ∃X ∃Y ∃R (Aleph0 (X ) ∧ Pow(Y,R,X )∧
∃u (∀x ∀y (u (x) = u (y) → x = y) ∧
∀y (Y (y) → ∃x y = u (x)))).
CH ≡ ∀X (Aleph1 (X ) ↔ Cont(X ))
is valid.
Note that it isn’t true that ¬CH is valid iff the Continuum
Hypothesis is false. In a countable domain, there are no subsets
of size ℵ1 and also no subsets of the size of the continuum, so
CH is always true in a countable domain. However, we can give
a different sentence that is valid iff the Continuum Hypothesis is
false:
is valid.
Summary
Second-order logic is an extension of first-order logic by vari-
ables for relations and functions, which can be quantified. Struc-
tures for second-order logic are just like first-order structures and
give the interpretations of all non-logical symbols of the language.
Variable assignments, however, also assign relations and func-
tions on the domain to the second-order variables. The satisfac-
tion relation is defined for second-order formulas just like in the
first-order case, but extended to deal with second-order variables
and quantifiers.
Second-order quantifiers make second-order logic more ex-
pressive than first-order logic. For instance, the identity rela-
tion on the domain of a structure can be defined without =, by
∀X (X (x) ↔ X (y)). Second-order logic can express the transi-
tive closure of a relation, which is not expressible in first-order
logic. Second-order quantifiers can also express properties of
Problems
Problem 7.1. Show that ∀X (X (x) → X (y)) (note: → not ↔!)
defines Id |M | .
Problem 7.2. The sentence Inf ∧ Count is true in all and only
countably infinite domains. Adjust the definition of Count so
that it becomes a different sentence that directly expresses that
the domain is countably infinite, and prove that it does.
The Lambda
Calculus
8.1 Overview
The lambda calculus was originally designed by Alonzo Church
in the early 1930s as a basis for constructive logic, and not as
a model of the computable functions. But it was soon shown to
be equivalent to other definitions of computability, such as the
Turing computable functions and the partial recursive functions.
The fact that this initially came as a small surprise makes the
characterization all the more interesting.
Lambda notation is a convenient way of referring to a func-
tion directly by a symbolic expression which defines it, instead of
defining a name for it. Instead of saying “let f be the function
defined by f (x) = x + 3,” one can say, “let f be the function
𝜆x . (x + 3).” In other words, 𝜆x . (x + 3) is just a name for the
function that adds three to its argument. In this expression, x is
a dummy variable, or a placeholder: the same function can just
as well be denoted by 𝜆 y . (y + 3). The notation works even with
other parameters around. For example, suppose g (x, y) is a func-
tion of two variables, and k is a natural number. Then 𝜆x . g (x,k )
is the function which maps any x to g (x,k ).
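This notation has been borrowed directly by many programming languages. In Python, for instance (purely an illustration; the helper two_place is a made-up example):

```python
# "let f be the function lambda x . (x + 3)":
f = lambda x: x + 3
print(f(4))   # prints 7

# x is a dummy variable: this denotes the same function.
g = lambda y: y + 3
assert f(10) == g(10)

# With a parameter around: from a two-place function two_place(x, k),
# lambda x . two_place(x, k) fixes k and leaves x open.
def two_place(x, k):
    return x * 10 + k

k = 5
h = lambda x: two_place(x, k)
print(h(2))   # prints 25
```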
This way of defining a function from a symbolic expression is
The system without any constants at all is called the pure lambda
calculus.
We will follow a few notational conventions:
For example,
𝜆xy . xxyx𝜆 z . xz
abbreviates
𝜆x . 𝜆 y . ((((xx)y)x) (𝜆 z . (xz ))).
You should memorize these conventions. They will drive you
crazy at first, but you will get used to them, and after a while
they will drive you less crazy than having to deal with a morass
of parentheses.
Two terms that differ only in the names of the bound variables
are called 𝛼-equivalent; for example, 𝜆x . x and 𝜆 y . y. It will be
convenient to think of these as being the “same” term; in other
words, when we say that M and N are the same, we also mean
“up to renamings of the bound variables.” Variables that are in
the scope of a 𝜆 are called “bound”, while others are called “free.”
There are no free variables in the previous example; but in
(𝜆z. yz)x
both y and x occur free.
1. We have

(𝜆x. xxy) 𝜆z. z −→ (𝜆z. z)(𝜆z. z)y
                −→ (𝜆z. z)y
                −→ y.
(𝜆x. (𝜆y. yx)z)v −→ (𝜆y. yv)z
(𝜆x. (𝜆y. yx)z)v −→ (𝜆x. zx)v
Proof. If M −→→ N1 and M −→→ N2, by the previous theorem there
is a term P such that N1 and N2 both reduce to P. If N1 and N2
are both in normal form, this can only happen if N1 ≡ P ≡ N2. □
8.5 Currying
A 𝜆-abstract 𝜆x. M represents a function of one argument, which
is quite a limitation when we want to define functions accepting
multiple arguments. One way to do this would be by extending
the 𝜆 -calculus to allow the formation of pairs, triples, etc., in
which case, say, a three-place function 𝜆x . M would expect its
argument to be a triple. However, it is more convenient to do
this by Currying.
Let’s consider an example. If we want to define a function that
accepts two arguments and returns the first, we write 𝜆x . 𝜆 y . x,
which literally is a function that accepts an argument and returns
a function that accepts another argument and returns the first
argument while it drops the second. Let’s see what happens when
F n0 n1 . . . nk−1 −→→ f (n0, n1, . . . , nk−1)
(𝜆a. 𝜆f x. f(a f x)) n −→ 𝜆f x. f(n f x).

Succ n −→→ 𝜆f x. f(f^n(x)),

i.e., n + 1. □
Add ≡ 𝜆ab . 𝜆 f x . a f (b f x)
or, alternatively,

Add′ ≡ 𝜆ab. a Succ b.
The first addition works as follows: Add first accepts two numbers
a and b. The result is a function that accepts f and x and returns
a f (b f x). If a and b are Church numerals n and m, this reduces
to f n+m (x), which is identical to f n ( f m (x)). Or, slowly:
(𝜆ab. 𝜆f x. a f (b f x)) n m −→ 𝜆f x. n f (m f x)
                             −→→ 𝜆f x. n f (f^m x)
                             −→→ 𝜆f x. f^n(f^m x) ≡ n + m.
Add′ n m −→→ n Succ m.
n Succ m −→→ Succ^n(m).
And since Succ 𝜆 -defines the successor function, and the succes-
sor function applied n times to m gives n + m, this in turn reduces
to n + m. □
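Church numerals and the two addition terms can be tried out in any language with higher-order functions. In the Python sketch below (an illustration only, with our own helper names), church(n) builds the numeral n as an actual function, and AddAlt implements the second definition via the reduction Add′ n m −→→ n Succ m:

```python
def church(n):
    """The Church numeral n: given f, return the n-fold composition of f."""
    return lambda f: lambda x: x if n == 0 else f(church(n - 1)(f)(x))

def unchurch(num):
    """Read a numeral back off by applying it to successor and 0."""
    return num(lambda k: k + 1)(0)

Succ   = lambda a: lambda f: lambda x: f(a(f)(x))
Add    = lambda a: lambda b: lambda f: lambda x: a(f)(b(f)(x))
AddAlt = lambda a: lambda b: a(Succ)(b)       # n applications of Succ to m
Mult   = lambda a: lambda b: lambda f: a(b(f))

three, four = church(3), church(4)
print(unchurch(Add(three)(four)))     # prints 7
print(unchurch(AddAlt(three)(four)))  # prints 7
print(unchurch(Mult(three)(four)))    # prints 12
```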
Mult ≡ 𝜆ab . 𝜆 f x . a (b f )x
Mult ≡ 𝜆ab . 𝜆 f . a (b f ).
Pair ≡ 𝜆mn. 𝜆 f . f mn
R n1 . . . nk −→→β true    if R(n1, . . . , nk), and
R n1 . . . nk −→→β false   otherwise.
The function “Not” accepts one argument, and returns true if the
argument is false, and false if the argument is true. The function
“And” accepts two truth values as arguments, and should return
true iff both arguments are true. Truth values are represented
as selectors (described above), so when x is a truth value and is
applied to two arguments, the result will be the first argument
if x is true and the second argument otherwise. Now And takes
its two arguments x and y, and in return passes y and false to
its first argument x. Assuming x is a truth value, the result will
evaluate to y if x is true, and to false if x is false, which is just
what is desired.
Note that we assume here that only truth values are used as
arguments to And. If it is passed other terms, the result (i.e., the
normal form, if it exists) may well not be a truth value.
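In Python, with true and false as the selector functions just described (again an illustration; the behavior is only meaningful when the arguments are truth values):

```python
true  = lambda x: lambda y: x    # selects its first argument
false = lambda x: lambda y: y    # selects its second argument

Not = lambda x: x(false)(true)
And = lambda x: lambda y: x(y)(false)

def as_bool(v):
    """Read a truth-value term back as a Python bool."""
    return v(True)(False)

print(as_bool(Not(true)))          # prints False
print(as_bool(And(true)(true)))    # prints True
print(as_bool(And(true)(false)))   # prints False
```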
Lemma 8.11. The basic primitive recursive functions zero, succ, and
projections P_i^n are 𝜆-definable.

Zero ≡ 𝜆a. 𝜆f x. x
Succ ≡ 𝜆a. 𝜆f x. f(a f x)
Proj_i^n ≡ 𝜆x0 . . . xn−1. xi □
where
−→→ D n ⟨m, h(n, m)⟩ (by IH)
≡ (𝜆p. ⟨Succ(Fst p), (G n (Fst p) (Snd p))⟩) ⟨m, h(n, m)⟩
−→ ⟨Succ(Fst ⟨m, h(n, m)⟩),
    (G n (Fst ⟨m, h(n, m)⟩) (Snd ⟨m, h(n, m)⟩))⟩
−→→ ⟨Succ m, (G n m h(n, m))⟩
−→→ ⟨m + 1, g(n, m, h(n, m))⟩
8.11 Fixpoints
Suppose we wanted to define the factorial function by recursion
as a term Fac with the following property:
Y g ≡ (𝜆ux. x(uux)) U g
    −→→ (𝜆x. x(U U x)) g
    −→→ g(U U g) ≡ g(Y g).

Since g(Y g) and Y g both reduce to g(Y g), g(Y g) =β Y g, so Y g
is a fixpoint of g. □
Y g −→→ g(Y g) −→→ g(g(Y g)) −→→ g(g(g(Y g))) −→→ · · ·
Fac 3 −→ Y Fac′ 3
      −→→ Fac′ (Y Fac′) 3
      ≡ (𝜆x. 𝜆n. IsZero n 1 (Mult n (x(Pred n)))) Fac 3
      −→→ IsZero 3 1 (Mult 3 (Fac(Pred 3)))
      −→→ Mult 3 (Fac 2).

Similarly,

Fac 2 −→→ Mult 2 (Fac 1)
Fac 1 −→→ Mult 1 (Fac 0)

but

Fac 0 −→→ Fac′ (Y Fac′) 0
      ≡ (𝜆x. 𝜆n. IsZero n 1 (Mult n (x(Pred n)))) Fac 0
      −→→ IsZero 0 1 (Mult 0 (Fac(Pred 0)))
      −→→ 1.

So together

Fac 3 −→→ Mult 3 (Mult 2 (Mult 1 1)).
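The computation of Fac can be replayed in Python, with one caveat: Python evaluates arguments eagerly, so the combinator Y as defined above would loop forever. The sketch below therefore uses the 𝜂-expanded variant (often called Z), which delays the self-application until an argument arrives; this is an adaptation, not the term from the text.

```python
# Z is the eta-expanded fixpoint combinator: under eager evaluation,
# Z g behaves like g (Z g), with the self-application u(u) wrapped in
# a lambda so it is only unfolded on demand.
Z = lambda g: (lambda u: g(lambda v: u(u)(v)))(lambda u: g(lambda v: u(u)(v)))

# Fac' from the text, with Python's conditional playing IsZero and
# ordinary arithmetic playing Mult and Pred:
fac_step = lambda fac: lambda n: 1 if n == 0 else n * fac(n - 1)

Fac = Z(fac_step)
print(Fac(3))  # prints 6
```

Each call Fac(n) unfolds fac_step once and recurses on n − 1, mirroring the chain of Mult terms displayed above.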
What goes for Fac′ goes for any recursive definition. Suppose
we have a recursive equation
g x1 . . . xn =β N

G x1 . . . xn =β N[G/g].
G ≡ (Y 𝜆g. 𝜆x1 . . . xn. N)
  −→→ (𝜆g. 𝜆x1 . . . xn. N) (Y 𝜆g. 𝜆x1 . . . xn. N)
  ≡ (𝜆g. 𝜆x1 . . . xn. N) G

and consequently

G x1 . . . xn −→→ (𝜆g. 𝜆x1 . . . xn. N) G x1 . . . xn
             −→→ (𝜆x1 . . . xn. N[G/g]) x1 . . . xn
             −→→ N[G/g].
V V ≡ (λx. g (x x)) V −↠ g (V V) and thus

YC g ≡ (λg. V V) g −↠ V V −↠ g (V V), but also

g (YC g) ≡ g ((λg. V V) g) −↠ g (V V).
8.12 Minimization
The general recursive functions are those that can be obtained
from the basic functions zero, succ, and P^n_i by composition, primitive
recursion, and regular minimization. To show that all general re-
cursive functions are 𝜆 -definable we have to show that any func-
tion defined by regular minimization from a 𝜆 -definable function
is itself 𝜆 -definable.
g(x1, . . . , xk) = μy [f(x1, . . . , xk, y) = 0]

is also λ-definable. The search term tests the values y = 0, 1, 2, . . . in turn: (Y Search) F n1 . . . nk m reduces to m if f(n1, . . . , nk, m) = 0, or to (Y Search) F n1 . . . nk m + 1 otherwise. So, writing h(n1, . . . , nk) for the least such value,

(Y Search) F n1 . . . nk 0 −↠ h(n1, . . . , nk). □
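A sketch of this search in Python (using the Z combinator, a strict-language stand-in for Y, and an ordinary Python function in place of the λ-defined F; both are assumptions of the sketch). If f is regular, i.e., f(x̄, y) = 0 for some y, the search terminates with the least such y:

```python
# Regular minimization: test y = 0, 1, 2, ... in turn, just as
# (Y Search) F n1 ... nk 0 does.  If f never hits 0, the search
# diverges, which is why regularity is required.
Z = lambda g: (lambda x: g(lambda v: x(x)(v)))(lambda x: g(lambda v: x(x)(v)))

def minimize(f, *args):
    # search ≡ the fixpoint of: λs. λy. if f(x̄, y) = 0 then y else s(y+1)
    search = Z(lambda s: lambda y: y if f(*args, y) == 0 else s(y + 1))
    return search(0)
```

For example, `minimize(lambda x, y: 0 if y * y >= x else 1, 10)` computes the least y with y² ≥ 10, namely 4.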
Problems
Problem 8.1. The term
Succ′ ≡ 𝜆n. 𝜆 f x . n f ( f x)
𝜆 -defines the successor function. Explain why.
Problem 8.3. Explain why the access functions Fst and Snd
work.
Derivations in
Arithmetic
Theories
When we showed that all general recursive functions are repre-
sentable in Q , and in the proofs of the incompleteness theorems,
we claimed that various things are provable in Q and PA. The
proofs of these claims, however, just gave the arguments infor-
mally without exhibiting actual derivations in natural deduction.
We provide some of these derivations in this chapter.
For instance, in Lemma 4.15 we proved that, for all n and
m ∈ N, Q ⊢ (n + m) = n + m. We did this by induction on m.
iom of Q .
Inductive step: Suppose that, for any n, Q ⊢ (n + m) = n + m
(say, by a derivation 𝛿n,m ). We have to show that also Q ⊢ (n +
m + 1) = n + m + 1. Note that m + 1 ≡ m ′, and that n + m + 1 ≡
n + m ′. So we are looking for a derivation of (n + m ′) = n + m ′
from the axioms of Q . Our derivation may use the derivation
𝛿n,m which exists by inductive hypothesis.
                               ∀x ∀y (x + y′) = (x + y)′
                               ───────────────────────── ∀Elim
     δ_{n,m}                   ∀y (n + y′) = (n + y)′
                               ────────────────────── ∀Elim
  (n + m) = n + m              (n + m′) = (n + m)′
  ──────────────────────────────────────────────── =Elim
                 (n + m′) = n + m′
The derivation is assembled from sub-derivations δ1–δ5. The sub-derivation δ4 derives ⊥ from a = c′ and (b′ + a) = 0: by =Elim, (b′ + c′) = 0; two applications of ∀Elim to the axiom ∀x ∀y (x + y′) = (x + y)′ give (b′ + c′) = (b′ + c)′, so 0 = (b′ + c)′ by =Elim again; but ∀Elim applied to the axiom ∀x ¬0 = x′ gives ¬0 = (b′ + c)′, and ⊥ follows by ¬Elim.

The sub-derivation δ5 derives ⊥ from ∃y a = y′ and (b′ + a) = 0: the assumption [a = c′]⁶ together with (b′ + a) = 0 yields ⊥ by δ4, and ∃Elim discharges [a = c′]⁶.

These are combined as follows (the result is δ2): ∀Elim applied to the axiom ∀x (x = 0 ∨ ∃y x = y′) gives a = 0 ∨ ∃y a = y′. Under the assumption [(b′ + a) = 0]³, the disjunct [a = 0]⁷ yields ⊥ by δ3 and the disjunct [∃y a = y′]⁷ yields ⊥ by δ5, so ∨Elim discharges both and yields ⊥. Then ∃Elim applied to the assumption [∃z (z′ + a) = 0]² yields ⊥, discharging [(b′ + a) = 0]³.

Finally, ¬Intro discharges [∃z (z′ + a) = 0]², giving ¬∃z (z′ + a) = 0; the sub-derivation δ1 turns this into ¬a < 0; and ∀Intro yields

∀x ¬x < 0. □
           ∀x (x < n → (x = 0 ∨ · · · ∨ x = n − 1))
           ──────────────────────────────────────── ∀Elim
  a < n    a < n → (a = 0 ∨ · · · ∨ a = n − 1)
  ──────────────────────────────────────────── →Elim
        a = 0 ∨ · · · ∨ a = n − 1

Here ρk is a derivation of a = k from [a < n]¹ (using λ1), and πk is a derivation of ¬Prf(k, ⌜R⌝). From a = k and Prf(a, ⌜R⌝), =Elim yields Prf(k, ⌜R⌝), which together with ¬Prf(k, ⌜R⌝) gives ⊥ by ¬Elim. And λ2 is

  [a < m]³
     λ1, λ3
  a = 0 ∨ · · · ∨
First-order
Logic
B.1 First-Order Languages
Expressions of first-order logic are built up from a basic vocab-
ulary containing variables, constant symbols, predicate symbols and
sometimes function symbols. From them, together with logical con-
nectives, quantifiers, and punctuation symbols such as parenthe-
ses and commas, terms and formulas are formed.
Informally, predicate symbols are names for properties and
relations, constant symbols are names for individual objects, and
function symbols are names for mappings. These, except for
the identity predicate =, are the non-logical symbols and together
make up a language. Any first-order language L is determined
by its non-logical symbols. In the most general case, L contains
infinitely many symbols of each kind.
In the general case, we make use of the following symbols in
first-order logic:
1. Logical symbols
1. ⊥ is an atomic formula.
1. ⊤ abbreviates ¬⊥.
2. A ↔ B abbreviates (A → B) ∧ (B → A).
B is the scope of the first ∀v0 , C is the scope of ∃v1 , and D is the
scope of the second ∀v0 . The first ∀v0 binds the occurrences of v0
in B, ∃v1 the occurrence of v1 in C , and the second ∀v0 binds the
occurrence of v0 in D. The first occurrence of v1 and the fourth
occurrence of v0 are free in A. The last occurrence of v0 is free
in D, but bound in C and A.
B.4 Substitution
1. s ≡ c : s [t /x] is just s .
3. s ≡ x: s [t /x] is t .
Example B.15.
1. A ≡ ⊥: A[t /x] is ⊥.
1. |N| = N
2. 0^N = 0
1. t ≡ c: Val^M_s(t) = c^M.

2. t ≡ x: Val^M_s(t) = s(x).

3. t ≡ f(t1, . . . , tn): Val^M_s(t) = f^M(Val^M_s(t1), . . . , Val^M_s(tn)).
1. A ≡ ⊥: M,s ⊭ A.
Val^M_s(f(a,b)) = f^M(1, 2) = 1 + 2 = 3.

Val^M_s(f(f(a,b), a)) = f^M(Val^M_s(f(a,b)), Val^M_s(a)) = f^M(3, 1) = 3,

since 3 + 1 > 3. Since s(x) = 1 and Val^M_s(x) = s(x), we also have

Val^M_s(f(f(a,b), x)) = f^M(Val^M_s(f(a,b)), Val^M_s(x)) = f^M(3, 1) = 3.
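The valuation clauses can be sketched as a recursive evaluator in Python. The tuple representation of terms and the dictionaries for M and s are choices of this sketch, and the interpretation of f^M as addition capped at 3 is an assumption read off from the computations above:

```python
# A sketch of the clauses for Val^M_s(t).  Terms are nested tuples:
# ("const", c), ("var", x), or ("app", f, [t1, ..., tn]).
def value(term, M, s):
    kind = term[0]
    if kind == "const":               # t ≡ c: Val^M_s(t) = c^M
        return M["consts"][term[1]]
    if kind == "var":                 # t ≡ x: Val^M_s(t) = s(x)
        return s[term[1]]
    if kind == "app":                 # t ≡ f(t1, ..., tn)
        _, f, args = term
        return M["funcs"][f](*(value(t, M, s) for t in args))
    raise ValueError(f"unknown term: {term!r}")

# The structure of the example: a^M = 1, b^M = 2, and f^M taken to
# be addition capped at 3 (an assumption matching the text).
M = {"consts": {"a": 1, "b": 2}, "funcs": {"f": lambda m, n: min(m + n, 3)}}
s = {"x": 1}
```

With these definitions, `value(("app", "f", [("const", "a"), ("const", "b")]), M, s)` reproduces Val^M_s(f(a,b)) = 3.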
An atomic formula R(t1, t2) is satisfied if the tuple of values of its arguments, i.e., ⟨Val^M_s(t1), Val^M_s(t2)⟩, is an element of R^M. So, e.g., we have M, s ⊨ R(b, f(a,b)) since ⟨Val^M_s(b), Val^M_s(f(a,b))⟩ = ⟨2, 3⟩ ∈ R^M, but M, s ⊭ R(x, f(a,b)) since ⟨1, 3⟩ ∉ R^M.

To determine if a non-atomic formula A is satisfied, you apply the clause in the inductive definition that applies to its main connective. For instance, the main connective in R(a,a) → (R(b,x) ∨ R(x,b)) is the →, and

M, s ⊨ R(a,a) → (R(b,x) ∨ R(x,b)) iff
since M,s 1 ⊨ R (b,x) ∨ R (x,b) (s 3 would also fit the bill). But,
so M,s 2 ⊨ t1 = t2 .
Now assume M,s 1 ⊨ B iff M,s 2 ⊨ B for all formulas B less com-
plex than A. The induction step proceeds by cases determined by
the main operator of A. In each case, we only demonstrate the
forward direction of the biconditional; the proof of the reverse
direction is symmetrical. In all cases except those for the quanti-
fiers, we apply the induction hypothesis to sub-formulas B of A.
The free variables of B are among those of A. Thus, if s 1 and s 2
agree on the free variables of A, they also agree on those of B,
and the induction hypothesis applies to B.
2. A ≡ B ∧ C : exercise.
4. A ≡ B → C : exercise.
6. A ≡ ∀x B: exercise.
Proof. Exercise. □
Proof. Exercise. □
B.8 Extensionality
Extensionality, sometimes called relevance, can be expressed informally as follows: the only factors that bear upon the satisfaction of a formula A in a structure M relative to a variable assignment s are the size of the domain and the assignments made by M and s to the elements of the language that actually appear in A.
One immediate consequence of extensionality is that where
two structures M and M ′ agree on all the elements of the lan-
guage appearing in a sentence A and have the same domain, M
and M ′ must also agree on whether or not A itself is true.
Proof. By induction on t .
ValsM (t [t ′/x]) =
Proof. Exercise. □
1. A(t ) ⊨ ∃x A(x)
2. ∀x A(x) ⊨ A(t )
2. Exercise. □
B.10 Theories
Definition B.43. A set of sentences 𝛤 is closed iff, whenever 𝛤 ⊨
A then A ∈ 𝛤. The closure of a set of sentences 𝛤 is {A : 𝛤 ⊨ A}.
We say that 𝛤 is axiomatized by a set of sentences 𝛥 if 𝛤 is the
closure of 𝛥.
∀x ¬x < x,
∀x ∀y ((x < y ∨ y < x) ∨ x = y),
∀x ∀y ∀z ((x < y ∧ y < z ) → x < z )
∀x (x · 1) = x
∀x ∀y ∀z (x · (y · z )) = ((x · y) · z )
∀x ∃y (x · y) = 1
¬∃x x ′ = 0
∀x ∀y (x ′ = y ′ → x = y)
∀x ∀y (x < y ↔ ∃z (z ′ + x) = y)
∀x (x + 0) = x
∀x ∀y (x + y ′) = (x + y) ′
∀x (x × 0) = 0
∀x ∀y (x × y ′) = ((x × y) + x)
Since there are infinitely many sentences of the latter form, this
axiom system is infinite. The latter form is called the induction
schema. (Actually, the induction schema is a bit more complicated
than we let on here.)
The third axiom is an explicit definition of <.
Summary
A first-order language consists of constant, function, and
predicate symbols. Function and constant symbols take a speci-
fied number of arguments. In the language of arithmetic, e.g.,
we have a single constant symbol 0, one 1-place function symbol ′,
two 2-place function symbols + and ×, and one 2-place predicate
symbol <. From variables and constant and function symbols
we form the terms of a language. From the terms of a language
together with its predicate symbols, as well as the identity symbol =, we form the atomic formulas. And in turn from them,
using the logical connectives ¬, ∨, ∧, →, ↔ and the quantifiers ∀
and ∃ we form its formulas. Since we are careful to always include
necessary parentheses in the process of forming terms and formu-
las, there is always exactly one way of reading a formula. This
makes it possible to define things by induction on the structure
of formulas.
Occurrences of variables in formulas are sometimes governed
by a corresponding quantifier: if a variable occurs in the scope
of a quantifier it is considered bound, otherwise free. These
concepts all have inductive definitions, and we also inductively
define the operation of substitution of a term for a variable in
a formula. Formulas without free variable occurrences are called
sentences.
Problems
Problem B.1. Give an inductive definition of the bound variable
occurrences along the lines of Definition B.8.
1. |M| = {1, 2, 3}

2. c^M = 3

1. A ≡ ⊥: M ⊭ A.
Natural
Deduction
C.1 Natural Deduction
Natural deduction is a derivation system intended to mirror ac-
tual reasoning (especially the kind of regimented reasoning em-
ployed by mathematicians). Actual reasoning proceeds by a num-
ber of “natural” patterns. For instance, proof by cases allows us
to establish a conclusion on the basis of a disjunctive premise,
by establishing that the conclusion follows from either of the dis-
juncts. Indirect proof allows us to establish a conclusion by show-
ing that its negation leads to a contradiction. Conditional proof
establishes a conditional claim “if . . . then . . . ” by showing that
the consequent follows from the antecedent. Natural deduction
is a formalization of some of these natural inferences. Each of
the logical connectives and quantifiers comes with two rules, an
introduction and an elimination rule, and they each correspond
to one such natural inference pattern. For instance, →Intro cor-
responds to conditional proof, and ∨Elim to proof by cases. A
particularly simple rule is ∧Elim which allows the inference from
A ∧ B to A (or B).
One feature that distinguishes natural deduction from other
derivation systems is its use of assumptions. A derivation in nat-
Rules for ∨

      A                      B
   ──────── ∨Intro        ──────── ∨Intro
    A ∨ B                  A ∨ B

             [A]ⁿ      [B]ⁿ

    A ∨ B      C         C
   ──────────────────────── ∨Elim (n)
              C
Rules for →

                            [A]ⁿ

    A → B     A               B
   ───────────── →Elim    ──────── →Intro (n)
         B                  A → B
Rules for ¬

                         [A]ⁿ

    ¬A     A               ⊥
   ─────────── ¬Elim    ────── ¬Intro (n)
        ⊥                 ¬A
Rules for ⊥

                 [¬A]ⁿ

     ⊥             ⊥
   ────── ⊥I    ────── ⊥C (n)
     A             A
Rules for ∃

                           [A(a)]ⁿ

     A(t)
   ────────── ∃Intro    ∃x A(x)     C
    ∃x A(x)            ──────────────── ∃Elim (n)
                              C
C.5 Derivations
We’ve said what an assumption is, and we’ve given the rules of
inference. Derivations in natural deduction are inductively gen-
erated from these: each derivation either is an assumption on its
own, or consists of one, two, or three derivations followed by a
correct inference.
(A ∧ B) → A

     [A ∧ B]¹
    ────────── ∧Elim
         A
   ────────────── →Intro (1)
    (A ∧ B) → A
(¬A ∨ B) → (A → B)

         A → B
   ──────────────────── →Intro (1)
   (¬A ∨ B) → (A → B)
apply a ∨Elim rule. Let us apply the latter. We will use the as-
sumption ¬A ∨ B as the leftmost premise of ∨Elim. For a valid
application of ∨Elim, the other two premises must be identical
to the conclusion A → B, but each may be derived in turn from
another assumption, namely the two disjuncts of ¬A ∨ B. So our
derivation will look like this:
                 [¬A]²                [B]²

                   B                    B
                ──────── →Intro (3)  ──────── →Intro (4)
   [¬A ∨ B]¹     A → B                A → B
   ──────────────────────────────────────── ∨Elim (2)
                   A → B
            ──────────────────── →Intro (1)
             (¬A ∨ B) → (A → B)
In the left branch, ⊥ follows from [¬A]² and [A]³ by ¬Elim, and B then follows by ⊥I; in the right branch B is just the assumption [B]², with [A]⁴ discharged vacuously. The completed derivation is:

    [¬A]²    [A]³
   ────────────── ¬Elim
          ⊥
       ────── ⊥I
          B                      [B]²
      ──────── →Intro (3)     ──────── →Intro (4)
   [¬A ∨ B]¹   A → B           A → B
   ───────────────────────────────── ∨Elim (2)
                A → B
         ──────────────────── →Intro (1)
          (¬A ∨ B) → (A → B)
   [¬(A ∨ ¬A)]¹

         ⊥
    ────────── ⊥C (1)
     A ∨ ¬A
    ¬A     A
   ─────────── ¬Elim
        ⊥
    ────────── ⊥C (1)
     A ∨ ¬A

and then, deriving ¬A by ¬Intro from a further assumption:

         ⊥
    ────── ¬Intro (2)
      ¬A       A
   ─────────────── ¬Elim
         ⊥
    ────────── ⊥C (1)
     A ∨ ¬A
       [A]²                               [¬A]³
    ──────── ∨Intro                    ──────── ∨Intro
 [¬(A ∨ ¬A)]¹   A ∨ ¬A         [¬(A ∨ ¬A)]¹   A ∨ ¬A
 ───────────────────── ¬Elim   ───────────────────── ¬Elim
          ⊥                              ⊥
     ────── ¬Intro (2)              ────── ⊥C (3)
       ¬A                               A
     ────────────────────────────────────── ¬Elim
                       ⊥
                  ────────── ⊥C (1)
                   A ∨ ¬A
   [∃x ¬A(x)]¹

     ¬∀x A(x)
   ───────────────────── →Intro (1)
   ∃x ¬A(x) → ¬∀x A(x)
         [¬A(a)]²

             ⊥
         ─────── ¬Intro (3)
   [∃x ¬A(x)]¹   ¬∀x A(x)
   ────────────────────── ∃Elim (2)
         ¬∀x A(x)
   ───────────────────── →Intro (1)
   ∃x ¬A(x) → ¬∀x A(x)
It looks like we are close to getting a contradiction. The easiest
rule to apply is the ∀Elim, which has no eigenvariable conditions.
Since we can use any term we want to replace the universally
quantified x, it makes the most sense to continue using a so we
can reach a contradiction.
                [∀x A(x)]³
               ──────────── ∀Elim
   [¬A(a)]²       A(a)
   ─────────────────── ¬Elim
            ⊥
        ─────── ¬Intro (3)
   [∃x ¬A(x)]¹  ¬∀x A(x)
   ───────────────────── ∃Elim (2)
        ¬∀x A(x)
   ───────────────────── →Intro (1)
   ∃x ¬A(x) → ¬∀x A(x)
It is important, especially when dealing with quantifiers, to
double check at this point that the eigenvariable condition has
not been violated. Since the only rule we applied that is subject
to the eigenvariable condition was ∃Elim, and the eigenvariable a
∃x C (x,b)
We have two premises to work with. To use the first, i.e., try
to find a derivation of ∃x C (x,b) from ∃x (A(x) ∧ B (x)) we would
use the ∃Elim rule. Since it has an eigenvariable condition, we
will apply that rule first. We get the following:
   [A(a) ∧ B(a)]¹
   ────────────── ∧Elim
        B(a)
C (a,b). We now have both B (a) → C (a,b) and B (a). Our next
move should be a straightforward application of the →Elim rule.
   ∀x (B(x) → C(x,b))            [A(a) ∧ B(a)]¹
   ────────────────── ∀Elim      ────────────── ∧Elim
     B(a) → C(a,b)                    B(a)
   ──────────────────────────────────────────── →Elim
                     C(a,b)
¬∀x A(x)
The last line of the derivation is a negation, so let’s try using
¬Intro. This will require that we figure out how to derive a con-
tradiction.
   [∀x A(x)]¹

        ⊥
   ────────── ¬Intro (1)
    ¬∀x A(x)
So far so good. We can use ∀Elim but it’s not obvious if that will
help us get to our goal. Instead, let’s use one of our assumptions.
∀x A(x) → ∃y B (y) together with ∀x A(x) will allow us to use the
→Elim rule.
   ∀x A(x) → ∃y B(y)    [∀x A(x)]¹
   ─────────────────────────────── →Elim
             ∃y B(y)
                ⋮
                ⊥
           ────────── ¬Intro (1)
            ¬∀x A(x)
We now have one final assumption to work with, and it looks like
this will help us reach a contradiction by using ¬Elim.
   t1 = t2    A(t1)
   ──────────────── =Elim
        A(t2)

   ─────── =Intro
    t = t

   t1 = t2    A(t2)
   ──────────────── =Elim
        A(t1)

   s = t    A(s)
   ───────────── =Elim
       A(t)
∀x ∀y ((A(x) ∧ A(y)) → x = y)
∃x ∀y (A(y) → y = x)

            a = b
   ───────────────────────────── →Intro (1)
    (A(a) ∧ A(b)) → a = b
   ───────────────────────────── ∀Intro
    ∀y ((A(a) ∧ A(y)) → a = y)
   ───────────────────────────── ∀Intro
    ∀x ∀y ((A(x) ∧ A(y)) → x = y)

                                ⋮
   ∃x ∀y (A(y) → y = x)       a = b
   ──────────────────────────────── ∃Elim (2)
               a = b
   ───────────────────────────── →Intro (1)
    (A(a) ∧ A(b)) → a = b
   ───────────────────────────── ∀Intro
    ∀y ((A(a) ∧ A(y)) → a = y)
   ───────────────────────────── ∀Intro
    ∀x ∀y ((A(x) ∧ A(y)) → x = y)
When 𝛤 = {A1 ,A2 , . . . ,Ak } is a finite set we may use the sim-
plified notation A1 ,A2 , . . . ,Ak ⊢ B for 𝛤 ⊢ B, in particular A ⊢ B
means that {A} ⊢ B.
Note that if 𝛤 ⊢ A and A ⊢ B, then 𝛤 ⊢ B. It follows also that
if A1 , . . . ,An ⊢ B and 𝛤 ⊢ Ai for each i , then 𝛤 ⊢ B.
Proof. Exercise. □
Summary
Proof systems provide purely syntactic methods for characteriz-
ing consequence and compatibility between sentences. Natural
deduction is one such proof system. A derivation in it consists of a tree of formulas. The topmost formulas of a derivation are assumptions. All other formulas, for the derivation to be correct, must be correctly justified by one of a number of inference
rules. These come in pairs; an introduction and an elimination
rule for each connective and quantifier. For instance, if a for-
mula A is justified by a →Elim rule, the preceding formulas (the
premises) must be B → A and B (for some B). Some inference
rules also allow assumptions to be discharged. For instance, if
A → B is inferred from B using →Intro, any occurrences of A as
assumptions in the derivation leading to the premise B may be
discharged, given a label that is also recorded at the inference.
If there is a derivation with end formula A and all assump-
tions are discharged, we say A is a theorem and write ⊢ A. If all
undischarged assumptions are in some set 𝛤, we say A is deriv-
able from 𝛤 and write 𝛤 ⊢ A. If 𝛤 ⊢ ⊥ we say 𝛤 is inconsistent,
otherwise consistent. These notions are interrelated, e.g., 𝛤 ⊢ A
iff 𝛤 ∪ {¬A} ⊢ ⊥. They are also related to the corresponding
Problems
Problem C.1. Give derivations of the following:
1. ¬(A → B) → (A ∧ ¬B)
2. ∃x (A(x) → ∀y A(y))
Biographies
D.1 Alonzo Church
Alonzo Church was born in
Washington, DC on June 14,
1903. In early childhood, an
air gun incident left Church
blind in one eye. He fin-
ished preparatory school in
Connecticut in 1920 and be-
gan his university education
at Princeton that same year.
He completed his doctoral
studies in 1927. After a cou-
ple years abroad, Church returned to Princeton.

Fig. D.1: Alonzo Church

Church was known as exceedingly polite and careful. His blackboard
writing was immaculate, and he would preserve important pa-
pers by carefully covering them in Duco cement (a clear glue).
Outside of his academic pursuits, he enjoyed reading science fic-
tion magazines and was not afraid to write to the editors if he
spotted any inaccuracies in the writing.
Church’s academic achievements were great. Together with
his students Stephen Kleene and Barkley Rosser, he developed
D.2 Kurt Gödel
cook his meals for him. Having suffered from mental health issues
throughout his life, he succumbed to paranoia. Deathly afraid of
being poisoned, Gödel refused to eat. He died of starvation on
January 14, 1978, in Princeton.
fected the world economy. During this time, Péter took odd jobs
as a tutor and private teacher of mathematics. She eventually re-
turned to university to take up graduate studies in mathematics.
She had originally planned to work in number theory, but after
finding out that her results had already been proven, she almost
gave up on mathematics altogether. She was encouraged to work
on Gödel’s incompleteness theorems, and unknowingly proved
several of his results in different ways. This restored her confi-
dence, and Péter went on to write her first papers on recursion
theory, inspired by David Hilbert’s foundational program. She
received her PhD in 1935, and in 1937 she became an editor for
the Journal of Symbolic Logic.
Péter’s early papers are
widely credited as founding
contributions to the field of
recursive function theory. In
Péter (1935a), she investi-
gated the relationship be-
tween different kinds of recur-
sion. In Péter (1935b), she
showed that a certain recur-
sively defined function is not
primitive recursive. This sim-
plified an earlier result due to
Wilhelm Ackermann. Péter’s
simplified function is what’s
now often called the Ackermann function—and sometimes, more properly, the Ackermann-Péter function. She wrote the first book on recursive function theory (Péter, 1951).

Fig. D.3: Rózsa Péter
Despite the importance and influence of her work, Péter did
not obtain a full-time teaching position until 1945. During the
Nazi occupation of Hungary during World War II, Péter was not
allowed to teach due to anti-Semitic laws. In 1944 the government
created a Jewish ghetto in Budapest; the ghetto was cut off from
the rest of the city and attended by armed guards. Péter was
259 D.4. JULIA ROBINSON
forced to live in the ghetto until 1945 when it was liberated. She
then went on to teach at the Budapest Teachers Training College,
and from 1955 onward at Eötvös Loránd University. She was the
first female Hungarian mathematician to become an Academic
Doctor of Mathematics, and the first woman to be elected to the
Hungarian Academy of Sciences.
Péter was known as a passionate teacher of mathematics, who
preferred to explore the nature and beauty of mathematical prob-
lems with her students rather than to merely lecture. As a result,
she was affectionately called “Aunt Rosa” by her students. Péter
died in 1977 at the age of 71.
Photo Credits

Bibliography
Dawson, John, Jr. 1997. Logical Dilemmas: The Life and Work of Kurt Gödel. Boca Raton: CRC Press.
Péter, Rózsa. 2010. Playing with Infinity. New York: Dover. URL
https://books.google.ca/books?id=6V3wNs4uv_4C&lpg=
PP1&ots=BkQZaHcR99&lr&pg=PP1#v=onepage&q&f=false.
https://books.google.ca/books?id=lRtSzQyHf9UC&
lpg=PP1&pg=PP1#v=onepage&q&f=false.
Tarski, Alfred. 1981. The Collected Works of Alfred Tarski, vol. I–IV.
Basel: Birkhäuser.