(CoSc-4132)
Chapter Three
Computability
Samuel Gudeta (MSc.)
Department of Computer Science
Dilla University
November 2019
Introduction to Recursive Function Theory
◼ Recursive function theory, like the theory of Turing machines, is one way to
make formal and precise the intuitive, informal, and imprecise notion of an
effective method.
◼ It happens to identify the very same class of functions as those
that are Turing computable.
◼ This fact provides informal support for Church's Thesis, which asserts that every
function that is effectively computable in the intuitive sense is computable
in these formal ways.
◼ Like the theory of Turing machines, recursive function theory is vitally
concerned with the converse of Church's Thesis: to prove that all the functions
it identifies as computable are effectively computable, that is, to make clear
that it uses only effective methods.
Cont.
◼ Recursive function theory begins with some very elementary functions that
are intuitively effective. Then it provides a few methods for building more
complicated functions from simpler functions.
◼ The building operations preserve computability in a way that is both
demonstrable and (one hopes) intuitive.
◼ The initial (basic) functions and the building operations are very few in
number, and very simple in character, so that they are entirely open to
inspection.
◼ It is easy to trust the foundation, and by its nature if you trust the foundation
you trust the superstructure it permits you to build.
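◼ As a concrete illustration, the following is a minimal Python sketch, assuming
the standard choice of initial functions in recursive function theory (zero,
successor, projection) and the standard building operation of composition;
the Python names below are illustrative, not part of the lecture text.

# Sketch of the standard initial functions and one building operation
# (names are our own, for illustration only).

def zero(*args):
    """Zero function: returns 0 whatever the arguments are."""
    return 0

def succ(n):
    """Successor function: n -> n + 1."""
    return n + 1

def proj(i):
    """i-th projection: returns the function that picks out the i-th argument."""
    return lambda *args: args[i]

def compose(g, *hs):
    """Composition: builds x -> g(h1(x), ..., hk(x)) from simpler functions."""
    return lambda *args: g(*(h(*args) for h in hs))

# Example: "add two" built by composing successor with itself.
add_two = compose(succ, succ)
print(add_two(3))  # prints 5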
Basic Functions
◼ One of the building operations is recursion itself: a new function is defined by
two equations, one giving its value when the second argument is zero in terms of
a simpler function f, and one reducing its value at a successor to its value at
the previous number. To evaluate the new function we apply the second equation
over and over, and so on; that is the procedure required by the two equations.
But eventually by this means the second argument will equal zero, in which case
we use the first equation, calculate the function f, and are done (see the
sketch below).
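◼ A minimal sketch of this procedure in Python, assuming the standard
primitive-recursion scheme h(x, 0) = f(x) and h(x, y+1) = g(x, y, h(x, y));
the names f, g, h and the example below are illustrative.

def primitive_recursion(f, g):
    """Build h from f and g via the two equations:
         h(x, 0)     = f(x)
         h(x, y + 1) = g(x, y, h(x, y))
    Evaluation steps the second argument down to zero, then applies f."""
    def h(x, y):
        if y == 0:                           # first equation: base case
            return f(x)
        return g(x, y - 1, h(x, y - 1))      # second equation: recurse
    return h

# Example: addition defined by recursion on its second argument.
add = primitive_recursion(lambda x: x,
                          lambda x, y, prev: prev + 1)
print(add(3, 4))  # prints 7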
◼ Let's look at an example. To calculate 5! (5 "factorial"), we multiply 5 × 4 × 3 × 2 × 1.
How would we define the general factorial function recursively? To say that it is
n(n-1)(n-2)...(n-(n-1)) would be correct but not recursive. In that formulation,
the function never calls itself and it contains the mysterious ellipsis (three dots).
Recursive functions not only have the peculiarity of calling themselves, but they
eliminate the need to use the ellipsis. This is a considerable improvement in
clarity, rigor, completeness of specification, and intuitive effectiveness, even if it
is still "weird" from another standpoint.
Cont.
◼ So, the recursive definition of the factorial function would be constructed like
this. Initially, we note that 1! = 1. Now, to compute n!, we do not multiply n by
(n-1), for that would commit us to the non-recursive series we saw above.
Instead we multiply n by (n-1)!. And that's it; there is no further series. But to
calculate (n-1)! we refer to our equations; if we have not reached 1 yet, then we
multiply n-1 by (n-2)!, and so on until we do reach 1, whose factorial value is
already defined. So the general formula would be expressed by these two
equations:
1! = 1
n! = n · (n-1)!
Cont.
◼ A stricter definition of the factorial function, f(n), consists of these two
equations:
f(n) = 1 when n = 1,
f(n) = n · f(n-1) when n > 1.
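◼ A direct Python transcription of these two equations (a sketch; like the text,
it assumes n ≥ 1):

def factorial(n):
    """Factorial exactly as defined by the two equations:
         f(n) = 1            when n = 1
         f(n) = n * f(n - 1) when n > 1
    The function calls itself; no ellipsis is needed."""
    if n == 1:
        return 1
    return n * factorial(n - 1)

print(factorial(5))  # prints 120, i.e. 5 x 4 x 3 x 2 x 1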
◼ Minimization: Let us take a function f(x). Suppose there is at least one value of x
which makes f(x) = 0.
◼ Now suppose that we wanted to find the least value of x which made f(x) = 0.
There is an obviously effective method for doing so.
◼ We know that x is a natural number. So we set x = 0 and compute f(x); if we
get 0 we stop, having found the least x which makes f(x) = 0; but if not, we
try the next natural number, 1. We try 0, 1, 2, 3, ... until we reach the first
value which makes f(x) = 0 (see the sketch below).
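◼ A minimal sketch of this search in Python, assuming (as the text does) that at
least one x makes f(x) = 0; g is written here without an argument, since the
least zero of f does not depend on any further input.

def minimize(f):
    """Return g, which searches 0, 1, 2, ... for the least x with f(x) == 0.
    This is effective only under the assumption that such an x exists."""
    def g():
        x = 0
        while f(x) != 0:   # test each natural number in turn
            x += 1
        return x           # the first (hence least) x with f(x) == 0
    return g

# Example: f has its first zero at x = 3.
f = lambda x: (x - 3) * (x - 7)
g = minimize(f)
print(g())  # prints 3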
Cont.
◼ When we know that there is such a value, then we know that this method
will terminate in a finite time with the correct answer.
◼ If we say that g(x) is a function that computes the least x such that
f(x) = 0, then we know that g is computable.
◼ We will say that g is produced from f by minimization.
◼ The only catch is that we don't always know that there is any x which
makes f(x) = 0. Hence the project of testing 0,1,2,3... may never
terminate.
◼ If we run the test anyway, that is called unbounded minimization. When g is
produced by unbounded minimization, it is not guaranteed to be effectively
computable, because the search may never terminate.
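◼ A small illustration (our own example) of why unbounded minimization need not
terminate: if f never takes the value 0, the search below would run forever, so
it is shown commented out.

# A function with no zero on the natural numbers:
f_no_zero = lambda x: x * x + 1

# Unbounded minimization would test 0, 1, 2, ... without end:
# x = 0
# while f_no_zero(x) != 0:   # never true, so the loop never stops
#     x += 1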
Cont.
◼ If we don't know whether there is any x which makes f(x) = 0, then there is
still a way to make function g effectively computable.
◼ We can start testing 0,1,2,3... but set an upper bound to our search. We
search until we hit the upper bound and then stop.
◼ If we find a value before stopping which makes f(x) = 0, then g returns that
value; if not, then g returns 0. This is called bounded minimization.
◼ Unbounded minimization has the disadvantage of producing a partial function
whose computation may never terminate.
◼ Bounded minimization has the disadvantage of sometimes failing to minimize.
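◼ A sketch of bounded minimization in Python; passing the upper bound in as a
parameter is an illustrative choice, not something fixed by the text.

def bounded_minimize(f, bound):
    """Search x = 0, 1, ..., bound for the least x with f(x) == 0.
    Always terminates: returns that x if found, and 0 otherwise,
    exactly as described in the text."""
    for x in range(bound + 1):
        if f(x) == 0:
            return x
    return 0   # hit the upper bound without finding a zero of f

print(bounded_minimize(lambda x: (x - 3) * (x - 7), 10))  # prints 3
print(bounded_minimize(lambda x: x * x + 1, 10))          # prints 0 (fails to minimize)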
Cont.
Recursive and Recursively Enumerable Languages
(Diagram: the recursive languages form a subset of the recursively enumerable
languages.)
(weak result)
• If a language is recursive, then there is an enumeration procedure for it.
(strong result)
• A language is recursively enumerable if and only if there is an enumeration
procedure for it.
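◼ A hedged sketch of the weak result: given a decider for a recursive language,
generating all strings in canonical order and outputting the accepted ones is
an enumeration procedure. The alphabet and the example decider below are
illustrative assumptions.

from itertools import count, product

def enumerate_language(decides, alphabet=("a", "b")):
    """Enumeration procedure for a recursive language:
    generate strings in canonical (length, then lexicographic) order,
    run the decider on each one, and yield exactly the accepted strings.
    Because the decider always halts, every member is eventually produced."""
    for length in count(0):
        for chars in product(alphabet, repeat=length):
            w = "".join(chars)
            if decides(w):      # total membership test: always halts
                yield w

# Example: the recursive language of strings with equally many a's and b's.
gen = enumerate_language(lambda w: w.count("a") == w.count("b"))
for _ in range(5):
    print(next(gen))   # "", "ab", "ba", "aabb", "abab"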