
Chapter udf

Recursive Functions

These are Jeremy Avigad’s notes on recursive functions, revised and


expanded by Richard Zach. This chapter does contain some exercises,
and can be included independently to provide the basis for a discussion of
arithmetization of syntax.

rec.1 Introduction

In order to develop a mathematical theory of computability, one has to, first of
all, develop a model of computability. We now think of computability as the
kind of thing that computers do, and computers work with symbols. But at the
beginning of the development of theories of computability, the paradigmatic
example of computation was numerical computation. Mathematicians were
always interested in number-theoretic functions, i.e., functions f : Nn → N that
can be computed. So it is not surprising that at the beginning of the theory
of computability, it was such functions that were studied. The most familiar
examples of computable numerical functions, such as addition, multiplication,
exponentiation (of natural numbers) share an interesting feature: they can be
defined recursively. It is thus quite natural to attempt a general definition of
computable function on the basis of recursive definitions. Among the many
possible ways to define number-theoretic functions recursively, one particularly
simple pattern of definition here becomes central: so-called primitive recursion.
In addition to computable functions, we might be interested in computable
sets and relations. A set is computable if we can compute the answer to
whether or not a given number is an element of the set, and a relation is
computable iff we can compute whether or not a tuple ⟨n1 , . . . , nk ⟩ is an element
of the relation. By considering the characteristic function of a set or relation,
discussion of computable sets and relations can be subsumed under that of
computable functions. Thus we can define primitive recursive relations as well,
e.g., the relation “n evenly divides m” is a primitive recursive relation.

Primitive recursive functions—those that can be defined using just primitive
recursion—are not, however, the only computable number-theoretic functions.
Many generalizations of primitive recursion have been considered, but the most
powerful and widely-accepted additional way of computing functions is by un-
bounded search. This leads to the definition of partial recursive functions, and
a related definition of general recursive functions. General recursive functions
are computable and total, and the definition characterizes exactly the partial
recursive functions that happen to be total. Recursive functions can simulate
every other model of computation (Turing machines, lambda calculus, etc.)
and so represent one of the many accepted models of computation.

rec.2 Primitive Recursion


A characteristic of the natural numbers is that every natural number can be
reached from 0 by applying the successor operation +1 finitely many times—
any natural number is either 0 or the successor of . . . the successor of 0.
One way to specify a function h : N → N that makes use of this fact is this:
(a) specify what the value of h is for argument 0, and (b) also specify how to,
given the value of h(x), compute the value of h(x + 1). For (a) tells us directly
what h(0) is, so h is defined for 0. Now, using the instruction given by (b) for
x = 0, we can compute h(1) = h(0 + 1) from h(0). Using the same instructions
for x = 1, we compute h(2) = h(1 + 1) from h(1), and so on. For every natural
number x, we’ll eventually reach the step where we define h(x) from h(x − 1),
and so h(x) is defined for all x ∈ N.
For instance, suppose we specify h : N → N by the following two equations:

h(0) = 1
h(x + 1) = 2 · h(x)

If we already know how to multiply, then these equations give us the infor-
mation required for (a) and (b) above. By successively applying the second
equation, we get that

h(1) = 2 · h(0) = 2,
h(2) = 2 · h(1) = 2 · 2,
h(3) = 2 · h(2) = 2 · 2 · 2,
..
.

We see that the function h we have specified is h(x) = 2^x.
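The two equations can be transcribed directly into a short program. Here is a minimal sketch in Python (the language is our choice, not the text's):

```python
def h(x):
    """h defined by primitive recursion: h(0) = 1, h(x + 1) = 2 * h(x)."""
    if x == 0:
        return 1            # clause (a): the value at 0
    return 2 * h(x - 1)     # clause (b): compute h(x) from the preceding value
```

Calling h(3) works through h(0), h(1), h(2) exactly as in the computation above.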


The characteristic feature of the natural numbers guarantees that there is
only one function h that meets these two criteria. A pair of equations like these
is called a definition by primitive recursion of the function h. It is so-called
because we define h “recursively,” i.e., the definition, specifically the second
equation, involves h itself on the right-hand-side. It is “primitive” because in

recursive-functions rev: 6891b66 (2024-12-01) by OLP / CC–BY


defining h(x + 1) we only use the value h(x), i.e., the immediately preceding
value. This is the simplest way of defining a function on N recursively.
We can define even more fundamental functions like addition and multipli-
cation by primitive recursion. In these cases, however, the functions in question
are 2-place. We fix one of the argument places, and use the other for the recur-
sion. E.g., to define add(x, y) we can fix x and define the value first for y = 0
and then for y + 1 in terms of y. Since x is fixed, it will appear on the left and
on the right side of the defining equations.

add(x, 0) = x
add(x, y + 1) = add(x, y) + 1

These equations specify the value of add for all x and y. To find add(2, 3),
for instance, we apply the defining equations for x = 2, using the first to find
add(2, 0) = 2, then using the second to successively find add(2, 1) = 2 + 1 = 3,
add(2, 2) = 3 + 1 = 4, add(2, 3) = 4 + 1 = 5.
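As a sketch, the defining equations for add translate into Python as follows (our transcription, not part of the text):

```python
def add(x, y):
    """Addition by primitive recursion on the second argument."""
    if y == 0:
        return x                  # add(x, 0) = x
    return add(x, y - 1) + 1      # add(x, y + 1) = add(x, y) + 1
```

Evaluating add(2, 3) unfolds into the same chain add(2, 0), add(2, 1), add(2, 2), add(2, 3) shown above.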
In the definition of add we used + on the right-hand-side of the second
equation, but only to add 1. In other words, we used the successor function
succ(z) = z+1 and applied it to the previous value add(x, y) to define add(x, y+
1). So we can think of the recursive definition as given in terms of a single
function which we apply to the previous value. However, it doesn’t hurt—
and sometimes is necessary—to allow the function to depend not just on the
previous value but also on x and y. Consider:

mult(x, 0) = 0
mult(x, y + 1) = add(mult(x, y), x)

This is a primitive recursive definition of a function mult by applying the func-


tion add to both the preceding value mult(x, y) and the first argument x. It
also defines the function mult(x, y) for all arguments x and y. For instance,
mult(2, 3) is determined by successively computing mult(2, 0), mult(2, 1), mult(2, 2),
and mult(2, 3):

mult(2, 0) = 0
mult(2, 1) = mult(2, 0 + 1) = add(mult(2, 0), 2) = add(0, 2) = 2
mult(2, 2) = mult(2, 1 + 1) = add(mult(2, 1), 2) = add(2, 2) = 4
mult(2, 3) = mult(2, 2 + 1) = add(mult(2, 2), 2) = add(4, 2) = 6

The general pattern then is this: to give a primitive recursive definition of


a function h(x0 , . . . , xk−1 , y), we provide two equations. The first defines the
value of h(x0 , . . . , xk−1 , 0) without reference to h. The second defines the value
of h(x0 , . . . , xk−1 , y + 1) in terms of h(x0 , . . . , xk−1 , y), the other arguments x0 ,
. . . , xk−1 , and y. Only the immediately preceding value of h may be used in
that second equation. If we think of the operations given by the right-hand-
sides of these two equations as themselves being functions f and g, then the



general pattern to define a new function h by primitive recursion is this:

h(x0 , . . . , xk−1 , 0) = f (x0 , . . . , xk−1 )


h(x0 , . . . , xk−1 , y + 1) = g(x0 , . . . , xk−1 , y, h(x0 , . . . , xk−1 , y))

In the case of add, we have k = 1 and f (x0 ) = x0 (the identity function), and
g(x0 , y, z) = z + 1 (the 3-place function that returns the successor of its third
argument):

add(x0 , 0) = f (x0 ) = x0
add(x0 , y + 1) = g(x0 , y, add(x0 , y)) = succ(add(x0 , y))

In the case of mult, we have f (x0 ) = 0 (the constant function always return-
ing 0) and g(x0 , y, z) = add(z, x0 ) (the 3-place function that returns the sum
of its last and first argument):

mult(x0 , 0) = f (x0 ) = 0
mult(x0 , y + 1) = g(x0 , y, mult(x0 , y)) = add(mult(x0 , y), x0 )
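The general pattern can be captured by a higher-order function. The following Python sketch (our own formulation) builds h from given f and g, and then recovers add and mult using exactly the f and g named in the text:

```python
def primitive_recursion(f, g):
    """Return h with  h(x0, ..., xk-1, 0)     = f(x0, ..., xk-1)
                      h(x0, ..., xk-1, y + 1) = g(x0, ..., xk-1, y, h(..., y))."""
    def h(*args):
        *xs, y = args
        if y == 0:
            return f(*xs)
        return g(*xs, y - 1, h(*xs, y - 1))
    return h

# add: f(x0) = x0, g(x0, y, z) = succ(z)
add = primitive_recursion(lambda x0: x0, lambda x0, y, z: z + 1)
# mult: f(x0) = 0, g(x0, y, z) = add(z, x0)
mult = primitive_recursion(lambda x0: 0, lambda x0, y, z: add(z, x0))
```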

rec.3 Composition

If f and g are two one-place functions of natural numbers, we can compose
them: h(x) = g(f (x)). The new function h(x) is then defined by composition
from the functions f and g. We’d like to generalize this to functions of more
than one argument.
Here’s one way of doing this: suppose f is a k-place function, and g0 , . . . ,
gk−1 are k functions which are all n-place. Then we can define a new n-place
function h as follows:

h(x0 , . . . , xn−1 ) = f (g0 (x0 , . . . , xn−1 ), . . . , gk−1 (x0 , . . . , xn−1 ))

If f and all gi are computable, so is h: To compute h(x0 , . . . , xn−1 ), first


compute the values yi = gi (x0 , . . . , xn−1 ) for each i = 0, . . . , k − 1. Then feed
these values into f to compute h(x0 , . . . , xk−1 ) = f (y0 , . . . , yk−1 ).
This may seem like an overly restrictive characterization of what happens
when we compute a new function using some existing ones. For one thing,
sometimes we do not use all the arguments of a function, as when we defined
g(x, y, z) = succ(z) for use in the primitive recursive definition of add. Suppose
we are allowed use of the following functions:

Pin (x0 , . . . , xn−1 ) = xi

The functions Pin are called projection functions: Pin is an n-place function.
Then g can be defined by

g(x, y, z) = succ(P23 (x, y, z)).



Here the role of f is played by the 1-place function succ, so k = 1. And we
have one 3-place function P23 which plays the role of g0 . The result is a 3-place
function that returns the successor of the third argument.
The projection functions also allow us to define new functions by reordering
or identifying arguments. For instance, the function h(x) = add(x, x) can be
defined by
h(x0 ) = add(P01 (x0 ), P01 (x0 )).
Here k = 2, n = 1, the role of f (y0 , y1 ) is played by add, and the roles of g0 (x0 )
and g1 (x0 ) are both played by P01 (x0 ), the one-place projection function (aka
the identity function).
If f (y0 , y1 ) is a function we already have, we can define the function h(x0 , x1 ) =
f (x1 , x0 ) by
h(x0 , x1 ) = f (P12 (x0 , x1 ), P02 (x0 , x1 )).
Here k = 2, n = 2, and the roles of g0 and g1 are played by P12 and P02 ,
respectively.
You may also worry that g0 , . . . , gk−1 are all required to have the same
arity n. (Remember that the arity of a function is the number of arguments;
an n-place function has arity n.) But adding the projection functions provides
the desired flexibility. For example, suppose f and g are 3-place functions and
h is the 2-place function defined by

h(x, y) = f (x, g(x, x, y), y).

The definition of h can be rewritten with the projection functions, as

h(x, y) = f (P02 (x, y), g(P02 (x, y), P02 (x, y), P12 (x, y)), P12 (x, y)).

Then h is the composition of f with P02 , l, and P12 , where

l(x, y) = g(P02 (x, y), P02 (x, y), P12 (x, y)),

i.e., l is the composition of g with P02 , P02 , and P12 .
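Composition and the projection functions can be sketched in Python (the names compose and proj are ours):

```python
def compose(f, *gs):
    """h(x0, ..., xn-1) = f(g0(x0, ..., xn-1), ..., g_{k-1}(x0, ..., xn-1))."""
    def h(*xs):
        return f(*(g(*xs) for g in gs))
    return h

def proj(n, i):
    """The projection P_i^n: an n-place function returning its i-th argument."""
    def p(*xs):
        assert len(xs) == n
        return xs[i]
    return p

succ = lambda z: z + 1
# g(x, y, z) = succ(P_2^3(x, y, z)): the 3-place successor-of-third-argument
g = compose(succ, proj(3, 2))
# reordering arguments: h(x0, x1) = f(x1, x0), via P_1^2 and P_0^2
swap = lambda f: compose(f, proj(2, 1), proj(2, 0))
```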

rec.4 Primitive Recursive Functions

Let us record again how we can define new functions from existing ones using
primitive recursion and composition.

Definition rec.1. Suppose f is a k-place function (k ≥ 1) and g is a (k + 2)-
place function. The function defined by primitive recursion from f and g is
the (k + 1)-place function h defined by the equations

h(x0 , . . . , xk−1 , 0) = f (x0 , . . . , xk−1 )


h(x0 , . . . , xk−1 , y + 1) = g(x0 , . . . , xk−1 , y, h(x0 , . . . , xk−1 , y))



Definition rec.2. Suppose f is a k-place function, and g0 , . . . , gk−1 are k
functions which are all n-place. The function defined by composition from f
and g0 , . . . , gk−1 is the n-place function h defined by

h(x0 , . . . , xn−1 ) = f (g0 (x0 , . . . , xn−1 ), . . . , gk−1 (x0 , . . . , xn−1 )).

In addition to succ and the projection functions

Pin (x0 , . . . , xn−1 ) = xi ,

for each natural number n and i < n, we will include among the primitive
recursive functions the function zero(x) = 0.

Definition rec.3. The set of primitive recursive functions is the set of func-
tions from Nn to N, defined inductively by the following clauses:

1. zero is primitive recursive.

2. succ is primitive recursive.

3. Each projection function Pin is primitive recursive.

4. If f is a k-place primitive recursive function and g0 , . . . , gk−1 are n-place


primitive recursive functions, then the composition of f with g0 , . . . , gk−1
is primitive recursive.

5. If f is a k-place primitive recursive function and g is a (k + 2)-place primitive


recursive function, then the function defined by primitive recursion from
f and g is primitive recursive.

Put more concisely, the set of primitive recursive functions is the smallest
set containing zero, succ, and the projection functions Pin , and which is closed
under composition and primitive recursion.
Another way of describing the set of primitive recursive functions is by
defining it in terms of “stages.” Let S0 denote the set of starting functions:
zero, succ, and the projections. These are the primitive recursive functions of
stage 0. Once a stage Si has been defined, let Si+1 be the set of all functions
you get by applying a single instance of composition or primitive recursion to
functions already in Si . Then
S = ⋃i∈N Si

is the set of all primitive recursive functions.


Let us verify that add is a primitive recursive function.

Proposition rec.4. The addition function add(x, y) = x + y is primitive re-


cursive.



Proof. We already have a primitive recursive definition of add in terms of two
functions f and g which matches the format of Definition rec.1:

add(x0 , 0) = f (x0 ) = x0
add(x0 , y + 1) = g(x0 , y, add(x0 , y)) = succ(add(x0 , y))

So add is primitive recursive provided f and g are as well. f (x0 ) = x0 = P01 (x0 ),
and the projection functions count as primitive recursive, so f is primitive
recursive. The function g is the three-place function g(x0 , y, z) defined by

g(x0 , y, z) = succ(z).

This does not yet tell us that g is primitive recursive, since g and succ are not
quite the same function: succ is one-place, and g has to be three-place. But
we can define g “officially” by composition as

g(x0 , y, z) = succ(P23 (x0 , y, z))

Since succ and P23 count as primitive recursive functions, g does as well, since
it can be defined by composition from primitive recursive functions.

Proposition rec.5. The multiplication function mult(x, y) = x · y is primitive
recursive.

Proof. Exercise.

Problem rec.1. Prove Proposition rec.5 by showing that the primitive recur-
sive definition of mult can be put into the form required by Definition rec.1
and showing that the corresponding functions f and g are primitive recursive.

Example rec.6. Here’s our very first example of a primitive recursive defini-
tion:

h(0) = 1
h(y + 1) = 2 · h(y).

This definition does not fit into the form required by Definition rec.1, since k = 0.
The definition also involves the constants 1 and 2. To get around the first
problem, let’s introduce a dummy argument and define the function h′ :

h′ (x0 , 0) = f (x0 ) = 1
h′ (x0 , y + 1) = g(x0 , y, h′ (x0 , y)) = 2 · h′ (x0 , y).

The function f (x0 ) = 1 can be defined from succ and zero by composition:
f (x0 ) = succ(zero(x0 )). The function g can be defined by composition from
g ′ (z) = 2 · z and projections:

g(x0 , y, z) = g ′ (P23 (x0 , y, z))



and g ′ in turn can be defined by composition as

g ′ (z) = mult(g ′′ (z), P01 (z))

and

g ′′ (z) = succ(f (z)),

where f is as above: f (z) = succ(zero(z)). Now that we have h′ , we can use


composition again to let h(y) = h′ (P01 (y), P01 (y)). This shows that h can be
defined from the basic functions using a sequence of compositions and primitive
recursions, so h is primitive recursive.

rec.5 Primitive Recursion Notations


One advantage to having the precise inductive description of the primitive
recursive functions is that we can be systematic in describing them. For exam-
ple, we can assign a “notation” to each such function, as follows. Use symbols
zero, succ, and Pin for zero, successor, and the projections. Now suppose h
is defined by composition from a k-place function f and n-place functions g0 ,
. . . , gk−1 , and we have assigned notations F , G0 , . . . , Gk−1 to the latter func-
tions. Then, using a new symbol Compk,n , we can denote the function h by
Compk,n [F, G0 , . . . , Gk−1 ].
For functions defined by primitive recursion, we can use analogous nota-
tions. Suppose the (k + 1)-ary function h is defined by primitive recursion
from the k-ary function f and the (k + 2)-ary function g, and the notations
assigned to f and g are F and G, respectively. Then the notation assigned to h
is Reck [F, G].
Recall that the addition function is defined by primitive recursion as

add(x0 , 0) = P01 (x0 ) = x0


add(x0 , y + 1) = succ(P23 (x0 , y, add(x0 , y))) = add(x0 , y) + 1

Here the role of f is played by P01 , and the role of g is played by succ(P23 (x0 , y, z)),
which is assigned the notation Comp1,3 [succ, P23 ] as it is the result of defining a
function by composition from the 1-ary function succ and the 3-ary function P23 .
With this setup, we can denote the addition function by

Rec1 [P01 , Comp1,3 [succ, P23 ]].

Having these notations sometimes proves useful, e.g., when enumerating prim-
itive recursive functions.
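One can sketch a small evaluator for such notations. The tuple encoding below is our own choice of concrete syntax, not the text's:

```python
def evaluate(nota, args):
    """Evaluate a primitive recursive notation.  Our encoding:
    ('zero',), ('succ',), ('proj', n, i),
    ('comp', F, [G0, ..., Gk-1])  for Comp_{k,n}[F, G0, ..., Gk-1],
    ('rec', F, G)                 for Rec_k[F, G]."""
    tag = nota[0]
    if tag == "zero":
        return 0
    if tag == "succ":
        return args[0] + 1
    if tag == "proj":
        return args[nota[2]]
    if tag == "comp":
        _, f, gs = nota
        return evaluate(f, [evaluate(g, args) for g in gs])
    if tag == "rec":
        _, f, g = nota
        *xs, y = args
        if y == 0:
            return evaluate(f, xs)
        prev = evaluate(nota, xs + [y - 1])
        return evaluate(g, xs + [y - 1, prev])
    raise ValueError(f"unknown notation {tag!r}")

# Rec1[P_0^1, Comp_{1,3}[succ, P_2^3]], the notation for addition:
ADD = ("rec", ("proj", 1, 0), ("comp", ("succ",), [("proj", 3, 2)]))
```

Evaluating ADD on [2, 3] unwinds the recursion exactly as the defining equations prescribe.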

Problem rec.2. Give the complete primitive recursive notation for mult.



rec.6 Primitive Recursive Functions are Computable

Suppose a function h is defined by primitive recursion

h(⃗x, 0) = f (⃗x)
h(⃗x, y + 1) = g(⃗x, y, h(⃗x, y))

and suppose the functions f and g are computable. (We use ⃗x to abbreviate x0 ,
. . . , xk−1 .) Then h(⃗x, 0) can obviously be computed, since it is just f (⃗x) which
we assume is computable. h(⃗x, 1) can then also be computed, since 1 = 0 + 1
and so h(⃗x, 1) is just

h(⃗x, 1) = g(⃗x, 0, h(⃗x, 0)) = g(⃗x, 0, f (⃗x)).

We can go on in this way and compute

h(⃗x, 2) = g(⃗x, 1, h(⃗x, 1)) = g(⃗x, 1, g(⃗x, 0, f (⃗x)))


h(⃗x, 3) = g(⃗x, 2, h(⃗x, 2)) = g(⃗x, 2, g(⃗x, 1, g(⃗x, 0, f (⃗x))))
h(⃗x, 4) = g(⃗x, 3, h(⃗x, 3)) = g(⃗x, 3, g(⃗x, 2, g(⃗x, 1, g(⃗x, 0, f (⃗x)))))
..
.

Thus, to compute h(⃗x, y) in general, successively compute h(⃗x, 0), h(⃗x, 1), . . . ,
until we reach h(⃗x, y).
Thus, a primitive recursive definition yields a new computable function if
the functions f and g are computable. Composition of functions also results in
a computable function if the functions f and gi are computable.
Since the basic functions zero, succ, and Pin are computable, and com-
position and primitive recursion yield computable functions from computable
functions, this means that every primitive recursive function is computable.
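This bottom-up procedure can be sketched in Python (our transcription):

```python
def pr_compute(f, g, xs, y):
    """Compute h(xs, y) by computing h(xs, 0), h(xs, 1), ..., h(xs, y) in turn,
    assuming f and g are themselves computable Python functions."""
    value = f(*xs)                 # h(xs, 0) = f(xs)
    for i in range(y):
        value = g(*xs, i, value)   # h(xs, i + 1) = g(xs, i, h(xs, i))
    return value
```

With f(x) = x and g(x, y, z) = z + 1 this computes addition; no step ever needs more than the immediately preceding value.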

rec.7 Examples of Primitive Recursive Functions


We already have some examples of primitive recursive functions: the addition
and multiplication functions add and mult. The identity function id(x) = x is
primitive recursive, since it is just P01 . The constant functions constn (x) = n
are primitive recursive since they can be defined from zero and succ by suc-
cessive composition. This is useful when we want to use constants in primi-
tive recursive definitions, e.g., if we want to define the function f (x) = 2 · x,
we can obtain it by composition from constn (x) and multiplication as f (x) =
mult(const2 (x), P01 (x)). We’ll make use of this trick from now on.

Proposition rec.7. The exponentiation function exp(x, y) = x^y is primitive
recursive.



Proof. We can define exp primitive recursively as

exp(x, 0) = 1
exp(x, y + 1) = mult(x, exp(x, y)).

Strictly speaking, this is not a recursive definition from primitive recursive


functions. Officially, though, we have:

exp(x, 0) = f (x)
exp(x, y + 1) = g(x, y, exp(x, y)).

where

f (x) = succ(zero(x)) = 1
g(x, y, z) = mult(P03 (x, y, z), P23 (x, y, z)) = x · z

and so f and g are defined from primitive recursive functions by composition.
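The official definition can be transcribed as a short sketch (Python, our choice of language):

```python
def exp(x, y):
    """Exponentiation: exp(x, 0) = 1, exp(x, y + 1) = mult(x, exp(x, y))."""
    if y == 0:
        return 1                 # f(x) = succ(zero(x)) = 1
    return x * exp(x, y - 1)     # g(x, y, z) = mult(x, z)
```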

Proposition rec.8. The predecessor function pred(y), defined by

pred(y) = 0 if y = 0, and
pred(y) = y − 1 otherwise,

is primitive recursive.

Proof. Note that

pred(0) = 0 and
pred(y + 1) = y.

This is almost a primitive recursive definition. It does not, strictly speaking, fit
into the pattern of definition by primitive recursion, since that pattern requires
at least one extra argument x. It is also odd in that it does not actually use
pred(y) in the definition of pred(y + 1). But we can first define pred′ (x, y) by

pred′ (x, 0) = zero(x) = 0,


pred′ (x, y + 1) = P13 (x, y, pred′ (x, y)) = y.

and then define pred from it by composition, e.g., as pred(x) = pred′ (zero(x), P01 (x)).

Proposition rec.9. The factorial function fac(x) = x ! = 1 · 2 · 3 · · · · · x is


primitive recursive.

Proof. The obvious primitive recursive definition is

fac(0) = 1
fac(y + 1) = fac(y) · (y + 1).



Officially, we have to first define a two-place function h

h(x, 0) = const1 (x)


h(x, y + 1) = g(x, y, h(x, y))

where g(x, y, z) = mult(P23 (x, y, z), succ(P13 (x, y, z))) and then let

fac(y) = h(P01 (y), P01 (y)) = h(y, y).

From now on we’ll be a bit more laissez-faire and not give the official definitions
by composition and primitive recursion.

Proposition rec.10. Truncated subtraction, x −̇ y, defined by

x −̇ y = 0 if x < y, and
x −̇ y = x − y otherwise,

is primitive recursive.

Proof. We have:

x −̇ 0 = x
x −̇ (y + 1) = pred(x −̇ y)
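Truncated subtraction can be sketched in Python from these two equations (the name monus is our ASCII spelling for −̇):

```python
def pred(y):
    """pred(0) = 0, pred(y + 1) = y."""
    return 0 if y == 0 else y - 1

def monus(x, y):
    """Truncated subtraction: x -. 0 = x, x -. (y + 1) = pred(x -. y)."""
    if y == 0:
        return x
    return pred(monus(x, y - 1))
```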

Proposition rec.11. The distance between x and y, |x − y|, is primitive re-


cursive.

Proof. We have |x − y| = (x −̇ y) + (y −̇ x), so the distance can be defined by


composition from + and −̇, which are primitive recursive.

Proposition rec.12. The maximum of x and y, max(x, y), is primitive re-


cursive.

Proof. We can define max(x, y) by composition from + and −̇ by

max(x, y) = x + (y −̇ x).

If x is the maximum, i.e., x ≥ y, then y −̇ x = 0, so x + (y −̇ x) = x + 0 = x. If


y is the maximum, then y −̇ x = y − x, and so x + (y −̇ x) = x + (y − x) = y.

Proposition rec.13. The minimum of x and y, min(x, y), is primitive re-
cursive.

Proof. Exercise.

Problem rec.3. Prove Proposition rec.13.



Problem rec.4. Show that

f (x, y) = 2^2^···^2^x (a stack of y 2’s)

is primitive recursive.

Problem rec.5. Show that integer division d(x, y) = ⌊x/y⌋ (i.e., division,
where you disregard everything after the decimal point) is primitive recursive.
When y = 0, we stipulate d(x, y) = 0. Give an explicit definition of d using
primitive recursion and composition.

Proposition rec.14. The set of primitive recursive functions is closed under


the following two operations:

1. Finite sums: if f (⃗x, z) is primitive recursive, then so is the function

g(⃗x, y) = ∑_{z=0}^{y} f (⃗x, z).

2. Finite products: if f (⃗x, z) is primitive recursive, then so is the function

h(⃗x, y) = ∏_{z=0}^{y} f (⃗x, z).

Proof. For example, finite sums are defined recursively by the equations

g(⃗x, 0) = f (⃗x, 0)
g(⃗x, y + 1) = g(⃗x, y) + f (⃗x, y + 1).
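The recursions for finite sums and products can be sketched as follows (Python, with the recursion unrolled into a loop; our transcription):

```python
def bounded_sum(f, xs, y):
    """g(xs, y) = f(xs, 0) + ... + f(xs, y), via the recursion
    g(xs, 0) = f(xs, 0), g(xs, y + 1) = g(xs, y) + f(xs, y + 1)."""
    total = f(*xs, 0)
    for z in range(1, y + 1):
        total = total + f(*xs, z)
    return total

def bounded_prod(f, xs, y):
    """h(xs, y) = f(xs, 0) * ... * f(xs, y), by the analogous recursion."""
    result = f(*xs, 0)
    for z in range(1, y + 1):
        result = result * f(*xs, z)
    return result
```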

rec.8 Primitive Recursive Relations

Definition rec.15. A relation R(⃗x) is said to be primitive recursive if its
characteristic function,

χR (⃗x) = 1 if R(⃗x), and
χR (⃗x) = 0 otherwise,

is primitive recursive.

In other words, when one speaks of a primitive recursive relation R(⃗x),


one is referring to a relation of the form χR (⃗x) = 1, where χR is a primitive
recursive function which, on any input, returns either 1 or 0. For example, the



relation IsZero(x), which holds if and only if x = 0, corresponds to the function
χIsZero , defined using primitive recursion by
χIsZero (0) = 1,
χIsZero (x + 1) = 0.
It should be clear that one can compose relations with other primitive
recursive functions. So the following are also primitive recursive:
1. The equality relation, x = y, defined by IsZero(|x − y|)
2. The less-than-or-equal relation, x ≤ y, defined by IsZero(x −̇ y)
Proposition rec.16. The set of primitive recursive relations is closed under
Boolean operations, that is, if P (⃗x) and Q(⃗x) are primitive recursive, so are
1. ¬P (⃗x)
2. P (⃗x) ∧ Q(⃗x)
3. P (⃗x) ∨ Q(⃗x)
4. P (⃗x) → Q(⃗x)

Proof. Suppose P (⃗x) and Q(⃗x) are primitive recursive, i.e., their characteristic
functions χP and χQ are. We have to show that the characteristic functions of
¬P (⃗x), etc., are also primitive recursive.
χ¬P (⃗x) = 0 if χP (⃗x) = 1, and
χ¬P (⃗x) = 1 otherwise

We can define χ¬P (⃗x) as 1 −̇ χP (⃗x).

χP ∧Q (⃗x) = 1 if χP (⃗x) = χQ (⃗x) = 1, and
χP ∧Q (⃗x) = 0 otherwise

We can define χP ∧Q (⃗x) as χP (⃗x) · χQ (⃗x) or as min(χP (⃗x), χQ (⃗x)). Similarly,

χP ∨Q (⃗x) = max(χP (⃗x), χQ (⃗x)) and
χP →Q (⃗x) = max(1 −̇ χP (⃗x), χQ (⃗x)).
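The characteristic-function constructions in this proof can be sketched in Python (the function names are ours):

```python
def monus(a, b):
    """Truncated subtraction on characteristic values."""
    return a - b if a >= b else 0

def chi_not(chi_p):
    return lambda *xs: monus(1, chi_p(*xs))          # 1 -. chi_P

def chi_and(chi_p, chi_q):
    return lambda *xs: chi_p(*xs) * chi_q(*xs)        # chi_P * chi_Q

def chi_or(chi_p, chi_q):
    return lambda *xs: max(chi_p(*xs), chi_q(*xs))    # max

def chi_implies(chi_p, chi_q):
    return lambda *xs: max(monus(1, chi_p(*xs)), chi_q(*xs))

chi_is_zero = lambda x: 1 if x == 0 else 0
```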

Proposition rec.17. The set of primitive recursive relations is closed under


bounded quantification, i.e., if R(⃗x, z) is a primitive recursive relation, then so
are the relations
(∀z < y) R(⃗x, z) and
(∃z < y) R(⃗x, z).
(∀z < y) R(⃗x, z) holds of ⃗x and y if and only if R(⃗x, z) holds for every z less
than y, and similarly for (∃z < y) R(⃗x, z).



Proof. By convention, we take (∀z < 0) R(⃗x, z) to be true (for the trivial reason
that there are no z less than 0) and (∃z < 0) R(⃗x, z) to be false. A bounded
universal quantifier functions just like a finite product or iterated minimum,
i.e., if P (⃗x, y) ⇔ (∀z < y) R(⃗x, z) then χP (⃗x, y) can be defined by

χP (⃗x, 0) = 1
χP (⃗x, y + 1) = min(χP (⃗x, y), χR (⃗x, y)).
Bounded existential quantification can similarly be defined using max. Al-
ternatively, it can be defined from bounded universal quantification, using
the equivalence (∃z < y) R(⃗x, z) ↔ ¬(∀z < y) ¬R(⃗x, z). Note that, for ex-
ample, a bounded quantifier of the form (∃x ≤ y) . . . x . . . is equivalent to
(∃x < y + 1) . . . x . . . .
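Both bounded quantifiers can be sketched in Python, with the recursion unrolled into a loop (our transcription):

```python
def chi_forall_below(chi_r, xs, y):
    """chi for (∀z < y) R(xs, z): chi(xs, 0) = 1,
    chi(xs, y + 1) = min(chi(xs, y), chi_R(xs, y))."""
    value = 1
    for z in range(y):
        value = min(value, chi_r(*xs, z))
    return value

def chi_exists_below(chi_r, xs, y):
    """chi for (∃z < y) R(xs, z): same recursion with max in place of min."""
    value = 0
    for z in range(y):
        value = max(value, chi_r(*xs, z))
    return value
```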

Problem rec.6. Show that the three-place relation x ≡ y mod n (congruence
modulo n) is primitive recursive.

Another useful primitive recursive function is the conditional function,


cond(x, y, z), defined by

cond(x, y, z) = y if x = 0, and
cond(x, y, z) = z otherwise.

This is defined recursively by

cond(0, y, z) = y,
cond(x + 1, y, z) = z.
One can use this to justify definitions of primitive recursive functions by cases
from primitive recursive relations:
Proposition rec.18. If g0 (⃗x), . . . , gm (⃗x) are primitive recursive functions,
and R0 (⃗x), . . . , Rm−1 (⃗x) are primitive recursive relations, then the function f
defined by


f (⃗x) = g0 (⃗x) if R0 (⃗x),
f (⃗x) = g1 (⃗x) if R1 (⃗x) and not R0 (⃗x),
. . .
f (⃗x) = gm−1 (⃗x) if Rm−1 (⃗x) and none of the previous hold,
f (⃗x) = gm (⃗x) otherwise

is also primitive recursive.

Proof. When m = 1, this is just the function defined by


f (⃗x) = cond(χ¬R0 (⃗x), g0 (⃗x), g1 (⃗x)).
For m greater than 1, one can just compose definitions of this form.



rec.9 Bounded Minimization

It is often useful to define a function as the least number satisfying some prop-
erty or relation P . If P is decidable, we can compute this function simply by
trying out all the possible numbers, 0, 1, 2, . . . , until we find the least one
satisfying P . This kind of unbounded search takes us out of the realm of prim-
itive recursive functions. However, if we’re only interested in the least number
less than some independently given bound, we stay primitive recursive. In other
words, and a bit more generally, suppose we have a primitive recursive rela-
tion R(x, z). Consider the function that maps x and y to the least z < y such
that R(x, z). It, too, can be computed, by testing whether R(x, 0), R(x, 1),
. . . , R(x, y − 1). But why is it primitive recursive?

Proposition rec.19. If R(⃗x, z) is primitive recursive, so is the function mR (⃗x, y)


which returns the least z less than y such that R(⃗x, z) holds, if there is one,
and y otherwise. We will write the function mR as

(min z < y) R(⃗x, z).

Proof. Note that there can be no z < 0 such that R(⃗x, z) since there is no
z < 0 at all. So mR (⃗x, 0) = 0.
In case the bound is of the form y + 1 we have three cases:

1. There is a z < y such that R(⃗x, z), in which case mR (⃗x, y+1) = mR (⃗x, y).

2. There is no such z < y but R(⃗x, y) holds, then mR (⃗x, y + 1) = y.

3. There is no z < y + 1 such that R(⃗x, z), then mR (⃗x, y + 1) = y + 1.

So we can define mR (⃗x, y) by primitive recursion as follows:

mR (⃗x, 0) = 0
mR (⃗x, y + 1) = mR (⃗x, y) if mR (⃗x, y) ̸= y,
mR (⃗x, y + 1) = y if mR (⃗x, y) = y and R(⃗x, y),
mR (⃗x, y + 1) = y + 1 otherwise.

Note that there is a z < y such that R(⃗x, z) iff mR (⃗x, y) ̸= y.
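The three-case recursion in the proof can be sketched as a loop (Python; our transcription):

```python
def m_R(chi_r, xs, y):
    """(min z < y) R(xs, z): the least z < y with R(xs, z), or y if none exists,
    computed by the three-case recursion from the proof."""
    value = 0                         # m_R(xs, 0) = 0
    for i in range(y):                # compute m_R(xs, i + 1) from m_R(xs, i)
        if value != i:
            pass                      # a witness < i was found already: keep it
        elif chi_r(*xs, i) == 1:
            value = i                 # no witness < i, but R(xs, i) holds
        else:
            value = i + 1             # no witness < i + 1
    return value
```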

Problem rec.7. Suppose R(⃗x, z) is primitive recursive. Define the function


m′R (⃗x, y) which returns the least z less than y such that R(⃗x, z) holds, if there
is one, and 0 otherwise, by primitive recursion from χR .

rec.10 Primes

Bounded quantification and bounded minimization provide us with a good
deal of machinery to show that natural functions and relations are primitive
recursive. For example, consider the relation “x divides y”, written x | y. The



relation x | y holds if division of y by x is possible without remainder, i.e., if y
is an integer multiple of x. (If it doesn’t hold, i.e., the remainder when dividing
y by x is > 0, we write x ∤ y.) In other words, x | y iff for some z, x · z = y.
Obviously, any such z, if it exists, must be ≤ y. So, we have that x | y iff for
some z ≤ y, x · z = y. We can define the relation x | y by bounded existential
quantification from = and multiplication by

x | y ⇔ (∃z ≤ y) (x · z) = y.

We’ve thus shown that x | y is primitive recursive.


A natural number x is prime if it is neither 0 nor 1 and is only divisible
by 1 and itself. In other words, prime numbers are such that, whenever y | x,
either y = 1 or y = x. To test if x is prime, we only have to check if y | x for
all y ≤ x, since if y > x, then automatically y ∤ x. So, the relation Prime(x),
which holds iff x is prime, can be defined by

Prime(x) ⇔ x ≥ 2 ∧ (∀y ≤ x) (y | x → y = 1 ∨ y = x)

and is thus primitive recursive.
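As a quick illustration, the two bounded definitions can be transcribed into Python almost literally (a sketch; the names divides and is_prime are ours, and generator expressions play the role of the bounded quantifiers):

```python
def divides(x, y):
    # x | y iff (∃z ≤ y) x · z = y
    return any(x * z == y for z in range(y + 1))

def is_prime(x):
    # Prime(x) iff x ≥ 2 and (∀y ≤ x)(y | x → y = 1 ∨ y = x)
    return x >= 2 and all(not divides(y, x) or y == 1 or y == x
                          for y in range(x + 1))
```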


The primes are 2, 3, 5, 7, 11, etc. Consider the function p(x) which returns
the xth prime in that sequence, i.e., p(0) = 2, p(1) = 3, p(2) = 5, etc. (For
convenience we will often write p(x) as px ; so p0 = 2, p1 = 3, etc.)
If we had a function nextPrime(x), which returns the first prime number
larger than x, p can be easily defined using primitive recursion:

p(0) = 2
p(x + 1) = nextPrime(p(x))

Since nextPrime(x) is the least y such that y > x and y is prime, it can be
easily computed by unbounded search. But it can also be defined by bounded
minimization, thanks to a result due to Euclid: there is always a prime number
between x and x ! + 1.

nextPrime(x) = (min y ≤ x ! + 1) (y > x ∧ Prime(y)).

This shows that nextPrime(x) and hence p(x) are (not just computable but)
primitive recursive.
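A direct Python transcription (a sketch with our names; trial division replaces the primitive recursive Prime, and the x! + 1 bound from Euclid's theorem keeps the search bounded):

```python
from math import factorial

def is_prime(x):
    return x >= 2 and all(x % y != 0 for y in range(2, x))

def next_prime(x):
    # bounded minimization: least prime y with x < y ≤ x! + 1
    return next(y for y in range(x + 1, factorial(x) + 2) if is_prime(y))

def p(x):
    # primitive recursion: p(0) = 2, p(x + 1) = nextPrime(p(x))
    return 2 if x == 0 else next_prime(p(x - 1))
```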
(If you’re curious, here’s a quick proof of Euclid’s theorem. Suppose pn
is the largest prime ≤ x and consider the product p = p0 · p1 · · · · · pn of all
primes ≤ x. Either p + 1 is prime or there is a prime between x and p + 1.
Why? Suppose p + 1 is not prime. Then some prime number q | p + 1 where
q < p + 1. None of the primes ≤ x divide p + 1. (By definition of p, each of the
primes pi ≤ x divides p, i.e., with remainder 0. So, each of the primes pi ≤ x
divides p + 1 with remainder 1, and so pi ∤ p + 1.) Hence, q is a prime > x and
< p + 1. And p ≤ x !, so there is a prime > x and ≤ x ! + 1.)

Problem rec.8. Define integer division d(x, y) using bounded minimization.



rec.11 Sequences
cmp:rec:seq: The set of primitive recursive functions is remarkably robust. But we will be
sec
able to do even more once we have developed an adequate means of handling
sequences. We will identify finite sequences of natural numbers with natural
numbers in the following way: the sequence ⟨a0 , a1 , a2 , . . . , ak ⟩ corresponds to
the number
p0^(a0+1) · p1^(a1+1) · p2^(a2+1) · . . . · pk^(ak+1) .
We add one to the exponents to guarantee that, for example, the sequences
⟨2, 7, 3⟩ and ⟨2, 7, 3, 0, 0⟩ have distinct numeric codes. We can take both 0
and 1 to code the empty sequence; for concreteness, let Λ denote 0.
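Here is a sketch of the coding in Python (the names nth_prime and encode are ours, and nth_prime uses naive trial division):

```python
def nth_prime(n):
    # p_0 = 2, p_1 = 3, p_2 = 5, ...
    count, x = -1, 1
    while count < n:
        x += 1
        if all(x % d != 0 for d in range(2, x)):
            count += 1
    return x

def encode(seq):
    # <a0, ..., ak> maps to p0^(a0+1) * ... * pk^(ak+1); the empty sequence to 0
    if not seq:
        return 0
    code = 1
    for i, a in enumerate(seq):
        code *= nth_prime(i) ** (a + 1)
    return code
```

For instance, encode([2, 7, 3]) is 2³ · 3⁸ · 5⁴, while encode([2, 7, 3, 0, 0]) additionally carries factors of 7 and 11, so the two codes differ.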
The reason that this coding of sequences works is the so-called Fundamental
Theorem of Arithmetic: every natural number n ≥ 2 can be written in one and
only one way in the form

n = p0^a0 · p1^a1 · . . . · pk^ak

with ak ≥ 1. This guarantees that the mapping ⟨⟩(a0 , . . . , ak ) = ⟨a0 , . . . , ak ⟩ is


injective: different sequences are mapped to different numbers; to each number
at most one sequence corresponds.
We’ll now show that the operations of determining the length of a sequence,
determining its ith element, appending an element to a sequence, and concate-
nating two sequences, are all primitive recursive.

Proposition rec.20. The function len(s), which returns the length of the se-
quence s, is primitive recursive.

Proof. Let R(i, s) be the relation defined by

R(i, s) iff pi | s ∧ pi+1 ∤ s.

R is clearly primitive recursive. Whenever s is the code of a non-empty se-


quence, i.e.,
s = p0^(a0+1) · . . . · pk^(ak+1) ,
R(i, s) holds if pi is the largest prime such that pi | s, i.e., i = k. The length
of s thus is i + 1 iff pi is the largest prime that divides s, so we can let
len(s) = 0                          if s = 0 or s = 1
       = 1 + (min i < s) R(i, s)    otherwise

We can use bounded minimization, since there is only one i that satisfies R(i, s)
when s is the code of a sequence, and if such an i exists it is less than s itself.

Proposition rec.21. The function append(s, a), which returns the result of
appending a to the sequence s, is primitive recursive.



Proof. append can be defined by:
append(s, a) = 2^(a+1)               if s = 0 or s = 1
             = s · p_len(s)^(a+1)    otherwise.

Proposition rec.22. The function element(s, i), which returns the ith ele-
ment of s (where the initial element is called the 0th), or 0 if i is greater than
or equal to the length of s, is primitive recursive.

Proof. Note that a is the ith element of s iff pi^(a+1) is the largest power of pi
that divides s, i.e., pi^(a+1) | s but pi^(a+2) ∤ s. So:

element(s, i) = 0                             if i ≥ len(s)
              = (min a < s) (pi^(a+2) ∤ s)    otherwise.

Instead of using the official names for the functions defined above, we intro-
duce a more compact notation. We will use (s)i instead of element(s, i), and
⟨s0 , . . . , sk ⟩ to abbreviate

append(append(. . . append(Λ, s0 ) . . . ), sk ).

Note that if s has length k, the elements of s are (s)0 , . . . , (s)k−1 .
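The decoding direction can be sketched the same way (our names seq_len and element; the generator expressions again play the role of bounded minimization):

```python
def nth_prime(n):
    # p_0 = 2, p_1 = 3, p_2 = 5, ...
    count, x = -1, 1
    while count < n:
        x += 1
        if all(x % d != 0 for d in range(2, x)):
            count += 1
    return x

def seq_len(s):
    # len(s) = 1 + (min i < s)[p_i | s and p_{i+1} does not divide s]; 0 and 1 code the empty sequence
    if s in (0, 1):
        return 0
    return 1 + next(i for i in range(s)
                    if s % nth_prime(i) == 0 and s % nth_prime(i + 1) != 0)

def element(s, i):
    # (s)_i = (min a < s)[p_i^(a+2) does not divide s], and 0 if i >= len(s)
    if i >= seq_len(s):
        return 0
    return next(a for a in range(s) if s % nth_prime(i) ** (a + 2) != 0)
```

For s = 2³ · 3⁸ · 5⁴, i.e., the code of ⟨2, 7, 3⟩, seq_len(s) is 3 and element(s, 1) is 7.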

Proposition rec.23. The function concat(s, t), which concatenates two se-
quences, is primitive recursive.

Proof. We want a function concat with the property that

concat(⟨a0 , . . . , ak ⟩, ⟨b0 , . . . , bl ⟩) = ⟨a0 , . . . , ak , b0 , . . . , bl ⟩.

We’ll use a “helper” function hconcat(s, t, n) which concatenates the first n


symbols of t to s. This function can be defined by primitive recursion as
follows:

hconcat(s, t, 0) = s
hconcat(s, t, n + 1) = append(hconcat(s, t, n), (t)n )

Then we can define concat by

concat(s, t) = hconcat(s, t, len(t)).

We will write s ⌢ t instead of concat(s, t).


It will be useful for us to be able to bound the numeric code of a sequence in
terms of its length and its largest element. Suppose s is a sequence of length k,
each element of which is less than or equal to some number x. Then s has at



most k prime factors, each at most pk−1 , and each raised to a power at most
x + 1 in the prime factorization of s. In other words, if we define

sequenceBound(x, k) = pk−1^(k·(x+1)) ,

then the numeric code of the sequence s described above is at most sequenceBound(x, k).
Having such a bound on sequences gives us a way of defining new functions
using bounded search. For example, we can define concat using bounded search.
All we need to do is write down a primitive recursive specification of the object
(number of the concatenated sequence) we are looking for, and a bound on how
far to look. The following works:

concat(s, t) = (min v < sequenceBound(s + t, len(s) + len(t)))


(len(v) = len(s) + len(t) ∧
(∀i < len(s)) ((v)i = (s)i ) ∧
(∀j < len(t)) ((v)len(s)+j = (t)j ))

Problem rec.9. Show that there is a primitive recursive function sconcat(s)


with the property that

sconcat(⟨s0 , . . . , sk ⟩) = s0 ⌢ . . . ⌢ sk .

Problem rec.10. Show that there is a primitive recursive function tail(s) with
the property that

tail(Λ) = 0 and
tail(⟨s0 , . . . , sk ⟩) = ⟨s1 , . . . , sk ⟩.

cmp:rec:seq: Proposition rec.24. The function subseq(s, i, n) which returns the subse-
prop:subseq
quence of s of length n beginning at the ith element, is primitive recursive.

Proof. Exercise.

Problem rec.11. Prove Proposition rec.24.

rec.12 Trees
cmp:rec:tre: Sometimes it is useful to represent trees as natural numbers, just like we can
sec
represent sequences by numbers and properties of and operations on them by
primitive recursive relations and functions on their codes. We’ll use sequences
and their codes to do this. A tree can be either a single node (possibly with a
label) or else a node (possibly with a label) connected to a number of subtrees.
The node is called the root of the tree, and the subtrees it is connected to its
immediate subtrees.
We code trees recursively as a sequence ⟨k, d1 , . . . , dk ⟩, where k is the num-
ber of immediate subtrees and d1 , . . . , dk the codes of the immediate subtrees.



If the nodes have labels, they can be included after the immediate subtrees. So
a tree consisting just of a single node with label l would be coded by ⟨0, l⟩, and
a tree consisting of a root (labelled l1 ) connected to two single nodes (labelled
l2 , l3 ) would be coded by ⟨2, ⟨0, l2 ⟩, ⟨0, l3 ⟩, l1 ⟩.
Proposition rec.25. The function SubtreeSeq(t), which returns the code of cmp:rec:tre:
prop:subtreeseq
a sequence the elements of which are the codes of all subtrees of the tree with
code t, is primitive recursive.
Proof. First note that ISubtrees(t) = subseq(t, 1, (t)0 ) is primitive recursive
and returns the codes of the immediate subtrees of a tree t. Now we can
define a helper function hSubtreeSeq(t, n) which computes the sequence of all
subtrees which are n nodes removed from the root. The sequence of subtrees
of t which is 0 nodes removed from the root—in other words, begins at the root
of t—is the sequence consisting just of t. To obtain a sequence of all level n + 1
subtrees of t, we concatenate the level n subtrees with a sequence consisting of
all immediate subtrees of the level n subtrees. To get a list of all these, note
that if f (x) is a primitive recursive function returning codes of sequences, then
gf (s, k) = f ((s)0 ) ⌢ . . . ⌢ f ((s)k ) is also primitive recursive:
gf (s, 0) = f ((s)0 )
gf (s, k + 1) = gf (s, k) ⌢ f ((s)k+1 )
For instance, if s is a sequence of trees, then h(s) = gISubtrees (s, len(s)) gives
the sequence of the immediate subtrees of the elements of s. We can use it to
define hSubtreeSeq by
hSubtreeSeq(t, 0) = ⟨t⟩
hSubtreeSeq(t, n + 1) = hSubtreeSeq(t, n) ⌢ h(hSubtreeSeq(t, n)).
The maximum level of subtrees in a tree coded by t, i.e., the maximum distance
between the root and a leaf node, is bounded by the code t. So a sequence of
codes of all subtrees of the tree coded by t is given by hSubtreeSeq(t, t).
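The same enumeration strategy is easy to prototype if we use Python tuples in place of numeric codes (a sketch; the coding ⟨k, d1, . . . , dk, label⟩ becomes a tuple whose first entry is the number of immediate subtrees):

```python
def immediate_subtrees(t):
    # in the coding <k, d1, ..., dk, ...> the k immediate subtrees follow the count
    return list(t[1:1 + t[0]])

def subtree_seq(t):
    # level 0 is <t>; level n+1 adds the immediate subtrees of the level-n trees
    result, level = [], [t]
    while level:
        result += level
        level = [s for u in level for s in immediate_subtrees(u)]
    return result
```

For the two-leaf example above, t = (2, (0, 'l2'), (0, 'l3'), 'l1') yields the list [t, (0, 'l2'), (0, 'l3')]. Note that, unlike hSubtreeSeq(t, t), the loop here simply stops once a level has no further subtrees.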
Problem rec.12. The definition of hSubtreeSeq in the proof of Proposition rec.25
in general includes repetitions. Give an alternative definition which guarantees
that the code of a subtree occurs only once in the resulting list.

rec.13 Other Recursions


Using pairing and sequencing, we can justify more exotic (and useful) forms cmp:rec:ore:
sec
of primitive recursion. For example, it is often useful to define two functions
simultaneously, such as in the following definition:
h0 (⃗x, 0) = f0 (⃗x)
h1 (⃗x, 0) = f1 (⃗x)
h0 (⃗x, y + 1) = g0 (⃗x, y, h0 (⃗x, y), h1 (⃗x, y))
h1 (⃗x, y + 1) = g1 (⃗x, y, h0 (⃗x, y), h1 (⃗x, y))



This is an instance of simultaneous recursion. Another useful way of defining
functions is to give the value of h(⃗x, y + 1) in terms of all the values h(⃗x, 0),
. . . , h(⃗x, y), as in the following definition:

h(⃗x, 0) = f (⃗x)
h(⃗x, y + 1) = g(⃗x, y, ⟨h(⃗x, 0), . . . , h(⃗x, y)⟩).

The following schema captures this idea more succinctly:

h(⃗x, y) = g(⃗x, y, ⟨h(⃗x, 0), . . . , h(⃗x, y − 1)⟩)

with the understanding that the last argument to g is just the empty sequence
when y is 0. In either formulation, the idea is that in computing the “successor
step,” the function h can make use of the entire sequence of values computed
so far. This is known as a course-of-values recursion. For a particular example,
it can be used to justify the following type of definition:
h(⃗x, y) = g(⃗x, y, h(⃗x, k(⃗x, y)))    if k(⃗x, y) < y
         = f (⃗x)                      otherwise

In other words, the value of h at y can be computed in terms of the value of h


at any previous value, given by k.
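A Python sketch of the course-of-values scheme (our names; the history list plays the role of the coded sequence ⟨h(⃗x, 0), . . . , h(⃗x, y)⟩). As an illustration not drawn from the text, the Fibonacci function is obtained by letting g look two places back:

```python
def course_of_values(f, g, xs, y):
    # h(xs, 0) = f(xs); h(xs, y + 1) = g(xs, y, <h(xs, 0), ..., h(xs, y)>)
    history = [f(xs)]
    for n in range(y):
        history.append(g(xs, n, history))
    return history[y]

def fib(y):
    # fib(0) = 0, fib(1) = 1, fib(y + 2) = fib(y + 1) + fib(y)
    return course_of_values(
        lambda xs: 0,
        lambda xs, n, h: 1 if n == 0 else h[n] + h[n - 1],
        (), y)
```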

Problem rec.13. Define the remainder function r(x, y) by course-of-values


recursion. (If x, y are natural numbers and y > 0, r(x, y) is the number less
than y such that x = z × y + r(x, y) for some z. For definiteness, let’s say that
if y = 0, r(x, 0) = 0.)

You should think about how to obtain these functions using ordinary prim-
itive recursion. One final version of primitive recursion is more flexible in that
one is allowed to change the parameters (side values) along the way:

h(⃗x, 0) = f (⃗x)
h(⃗x, y + 1) = g(⃗x, y, h(k(⃗x), y))

This, too, can be simulated with ordinary primitive recursion. (Doing so is


tricky. For a hint, try unwinding the computation by hand.)

rec.14 Non-Primitive Recursive Functions


cmp:rec:npr: The primitive recursive functions do not exhaust the intuitively computable
sec
functions. It should be intuitively clear that we can make a list of all the
unary primitive recursive functions, f0 , f1 , f2 , . . . such that we can effectively
compute the value of fx on input y; in other words, the function g(x, y), defined
by
g(x, y) = fx (y)



is computable. But then so is the function

h(x) = g(x, x) + 1
= fx (x) + 1.

For each primitive recursive function fi , the value of h and fi differ at i. So h


is computable, but not primitive recursive; and one can say the same about g.
This is an “effective” version of Cantor’s diagonalization argument.
One can provide more explicit examples of computable functions that are
not primitive recursive. For example, let the notation g^n (x) denote g(g(. . . g(x))),
with n g’s in all; and define a sequence g0 , g1 , . . . of functions by

g0 (x) = x + 1
gn+1 (x) = gn^x (x)

You can confirm that each function gn is primitive recursive. Each successive
function grows much faster than the one before; g1 (x) is equal to 2x, g2 (x) is
equal to 2^x · x, and g3 (x) grows roughly like an exponential stack of x 2’s. The
Ackermann–Péter function is essentially the function G(x) = gx (x), and one
can show that this grows faster than any primitive recursive function.
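The sequence is easy to compute for small inputs (a sketch; beware that the values explode very quickly):

```python
def g(n, x):
    # g_0(x) = x + 1; g_{n+1}(x) applies g_n to x, x times over
    if n == 0:
        return x + 1
    result = x
    for _ in range(x):
        result = g(n - 1, result)
    return result
```

Indeed g(1, 3) is 6 = 2 · 3 and g(2, 3) is 24 = 2³ · 3, matching the claims above, and g(3, 2) is already 2048.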
Let us return to the issue of enumerating the primitive recursive functions.
Remember that we have assigned symbolic notations to each primitive recursive
function; so it suffices to enumerate notations. We can assign a natural number
#(F ) to each notation F , recursively, as follows:

#(0) = ⟨0⟩
#(S) = ⟨1⟩
#(Pin ) = ⟨2, n, i⟩
#(Compk,l [H, G0 , . . . , Gk−1 ]) = ⟨3, k, l, #(H), #(G0 ), . . . , #(Gk−1 )⟩
#(Recl [G, H]) = ⟨4, l, #(G), #(H)⟩

Here we are using the fact that every sequence of numbers can be viewed as
a natural number, using the codes from the last section. The upshot is that
every code is assigned a natural number. Of course, some sequences (and
hence some numbers) do not correspond to notations; but we can let fi be the
unary primitive recursive function with notation coded as i, if i codes such a
notation; and the constant 0 function otherwise. The net result is that we have
an explicit way of enumerating the unary primitive recursive functions.
(In fact, some functions, like the constant zero function, will appear more
than once on the list. This is not just an artifact of our coding, but also a result
of the fact that the constant zero function has more than one notation. We
will later see that one can not computably avoid these repetitions; for example,
there is no computable function that decides whether or not a given notation
represents the constant zero function.)
We can now take the function g(x, y) to be given by fx (y), where fx refers
to the enumeration we have just described. How do we know that g(x, y) is



computable? Intuitively, this is clear: to compute g(x, y), first “unpack” x,
and see if it is a notation for a unary function. If it is, compute the value of
that function on input y.
You may already be convinced that (with some work!) one can write a
program (say, in Java or C++) that does this; and now we can appeal to the
Church–Turing thesis, which says that anything that, intuitively, is computable
can be computed by a Turing machine.
Of course, a more direct way to show that g(x, y) is computable is to de-
scribe a Turing machine that computes it, explicitly. This would, in particular,
avoid the Church–Turing thesis and appeals to intuition. Soon we will have
built up enough machinery to show that g(x, y) is computable, appealing to a
model of computation that can be simulated on a Turing machine: namely, the
recursive functions.

rec.15 Partial Recursive Functions


cmp:rec:par: To motivate the definition of the recursive functions, note that our proof that
sec
there are computable functions that are not primitive recursive actually estab-
lishes much more. The argument was simple: all we used was the fact that it
is possible to enumerate functions f0 , f1 , . . . such that, as a function of x and
y, fx (y) is computable. So the argument applies to any class of functions that
can be enumerated in such a way. This puts us in a bind: we would like to
describe the computable functions explicitly; but any explicit description of a
collection of computable functions cannot be exhaustive!
The way out is to allow partial functions to come into play. We will see
that it is possible to enumerate the partial computable functions. In fact, we
already pretty much know that this is the case, since it is possible to enumerate
Turing machines in a systematic way. We will come back to our diagonal
argument later, and explore why it does not go through when partial functions
are included.
The question is now this: what do we need to add to the primitive recursive
functions to obtain all the partial recursive functions? We need to do two
things:

1. Modify our definition of the primitive recursive functions to allow for


partial functions as well.

2. Add something to the definition, so that some new partial functions are
included.

The first is easy. As before, we will start with zero, successor, and projec-
tions, and close under composition and primitive recursion. The only difference
is that we have to modify the definitions of composition and primitive recur-
sion to allow for the possibility that some of the terms in the definition are not
defined. If f and g are partial functions, we will write f (x) ↓ to mean that f
is defined at x, i.e., x is in the domain of f ; and f (x) ↑ to mean the opposite,



i.e., that f is not defined at x. We will use f (x) ≃ g(x) to mean that either
f (x) and g(x) are both undefined, or they are both defined and equal. We
will use these notations for more complicated terms as well. We will adopt the
convention that if h and g0 , . . . , gk all are partial functions, then
h(g0 (⃗x), . . . , gk (⃗x))
is defined if and only if each gi is defined at ⃗x, and h is defined at g0 (⃗x),
. . . , gk (⃗x). With this understanding, the definitions of composition and prim-
itive recursion for partial functions is just as above, except that we have to
replace “=” by “≃”.
What we will add to the definition of the primitive recursive functions to
obtain partial functions is the unbounded search operator. If f (x, ⃗z) is any
partial function on the natural numbers, define µx f (x, ⃗z) to be
the least x such that f (0, ⃗z), f (1, ⃗z), . . . , f (x, ⃗z) are all defined, and
f (x, ⃗z) = 0, if such an x exists
with the understanding that µx f (x, ⃗z) is undefined otherwise. This defines
µx f (x, ⃗z) uniquely.
Note that our definition makes no reference to Turing machines, or al-
gorithms, or any specific computational model. But like composition and
primitive recursion, there is an operational, computational intuition behind
unbounded search. When it comes to the computability of a partial func-
tion, arguments where the function is undefined correspond to inputs for which
the computation does not halt. The procedure for computing µx f (x, ⃗z) will
amount to this: compute f (0, ⃗z), f (1, ⃗z), f (2, ⃗z) until a value of 0 is returned.
If any of the intermediate computations do not halt, however, neither does the
computation of µx f (x, ⃗z).
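The operational reading can be sketched directly (our name mu; a Python function that never returns models a partial function that is undefined at its argument):

```python
def mu(f, *zs):
    # least x with f(0, zs), ..., f(x, zs) all defined and f(x, zs) = 0;
    # loops forever if no such x exists, or if some earlier f(i, zs) diverges
    x = 0
    while f(x, *zs) != 0:
        x += 1
    return x
```

For example, mu(lambda x, z: 0 if x * x >= z else 1, 10) returns 4, the least x with x² ≥ 10.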
If R(x, ⃗z) is any relation, µx R(x, ⃗z) is defined to be µx (1 −̇ χR (x, ⃗z)). In
other words, µx R(x, ⃗z) returns the least value of x such that R(x, ⃗z) holds.
So, if f (x, ⃗z) is a total function, µx f (x, ⃗z) is the same as µx (f (x, ⃗z) = 0).
But note that our original definition is more general, since it allows for the
possibility that f (x, ⃗z) is not everywhere defined (whereas, in contrast, the
characteristic function of a relation is always total).
Definition rec.26. The set of partial recursive functions is the smallest set of
partial functions from the natural numbers to the natural numbers (of various
arities) containing zero, successor, and projections, and closed under composi-
tion, primitive recursion, and unbounded search.

Of course, some of the partial recursive functions will happen to be total,


i.e., defined for every argument.
Definition rec.27. The set of recursive functions is the set of partial recur- cmp:rec:par:
defn:recursive-fn
sive functions that are total.

A recursive function is sometimes called “total recursive” to emphasize that


it is defined everywhere.



rec.16 The Normal Form Theorem
cmp:rec:nft:
sec
cmp:rec:nft: Theorem rec.28 (Kleene’s Normal Form Theorem). There is a primi-
thm:kleene-nf
tive recursive relation T (e, x, s) and a primitive recursive function U (s), with
the following property: if f is any partial recursive function, then for some e,

f (x) ≃ U (µs T (e, x, s))

for every x.

The proof of the normal form theorem is involved, but the basic idea is
simple. Every partial recursive function has an index e, intuitively, a number
coding its program or definition. If f (x) ↓, the computation can be recorded
systematically and coded by some number s, and the fact that s codes the
computation of f on input x can be checked primitive recursively using only x
and the definition e. Consequently, the relation T , “the function with index e
has a computation for input x, and s codes this computation,” is primitive
recursive. Given the full record of the computation s, the “upshot” of s is the
value of f (x), and it can be obtained from s primitive recursively as well.
The normal form theorem shows that only a single unbounded search is
required for the definition of any partial recursive function. Basically, we can
search through all numbers until we find one that codes a computation of the
function with index e for input x. We can use the numbers e as “names”
of partial recursive functions, and write φe for the function f defined by the
equation in the theorem. Note that any partial recursive function can have
more than one index—in fact, every partial recursive function has infinitely
many indices.

rec.17 The Halting Problem


cmp:rec:hlt: The halting problem in general is the problem of deciding, given the specifica-
sec
tion e (e.g., program) of a computable function and a number n, whether the
computation of the function on input n halts, i.e., produces a result. Famously,
Alan Turing proved that this problem itself cannot be solved by a computable
function, i.e., the function
h(e, n) = 1    if computation e halts on input n
        = 0    otherwise,

is not computable.
In the context of partial recursive functions, the role of the specification of a
program may be played by the index e given in Kleene’s normal form theorem.
If f is a partial recursive function, any e for which the equation in the normal
form theorem holds, is an index of f . Given a number e, the normal form
theorem states that
φe (x) ≃ U (µs T (e, x, s))



is partial recursive, and for every partial recursive f : N → N, there is an e ∈ N
such that φe (x) ≃ f (x) for all x ∈ N. In fact, for each such f there is not just
one, but infinitely many such e. The halting function h is defined by
h(e, x) = 1    if φe (x) ↓
        = 0    otherwise.

Note that h(e, x) = 0 if φe (x) ↑, but also when e is not the index of a partial
recursive function at all.
Theorem rec.29. The halting function h is not partial recursive. cmp:rec:hlt:
thm:halting-problem

Proof. If h were partial recursive, we could define


d(y) = 1             if h(y, y) = 0
     = µx x ̸= x     otherwise.

Since no number x satisfies x ̸= x, there is no µx x ̸= x, and so d(y) ↑ iff


h(y, y) ̸= 0. From this definition it follows that
1. d(y) ↓ iff φy (y) ↑ or y is not the index of a partial recursive function.
2. d(y) ↑ iff φy (y) ↓.
If h were partial recursive, then d would be partial recursive as well. Thus,
by the Kleene normal form theorem, it has an index ed . Consider the value of
h(ed , ed ). There are two possible cases, 0 and 1.
1. If h(ed , ed ) = 1 then φed (ed ) ↓. But φed ≃ d, and d(ed ) is defined iff
h(ed , ed ) = 0. So h(ed , ed ) ̸= 1.
2. If h(ed , ed ) = 0 then either ed is not the index of a partial recursive
function, or it is and φed (ed ) ↑. But again, φed ≃ d, and d(ed ) is undefined
iff φed (ed ) ↓.
The upshot is that ed cannot, after all, be the index of a partial recursive
function. But if h were partial recursive, d would be too, and so our definition
of ed as an index of it would be admissible. We must conclude that h cannot
be partial recursive.

rec.18 General Recursive Functions


There is another way to obtain a set of total functions. Say a total function cmp:rec:gen:
sec
f (x, ⃗z) is regular if for every sequence of natural numbers ⃗z, there is an x
such that f (x, ⃗z) = 0. In other words, the regular functions are exactly those
functions to which one can apply unbounded search, and end up with a to-
tal function. One can, conservatively, restrict unbounded search to regular
functions:



cmp:rec:gen: Definition rec.30. The set of general recursive functions is the smallest set
defn:general-recursive
of functions from the natural numbers to the natural numbers (of various ari-
ties) containing zero, successor, and projections, and closed under composition,
primitive recursion, and unbounded search applied to regular functions.

Clearly every general recursive function is total. The difference between


Definition rec.30 and Definition rec.27 is that in the latter one is allowed to
use partial recursive functions along the way; the only requirement is that
the function you end up with at the end is total. So the word “general,” a
historic relic, is a misnomer; on the surface, Definition rec.30 is less general
than Definition rec.27. But, fortunately, the difference is illusory; though the
definitions are different, the set of general recursive functions and the set of
recursive functions are one and the same.
