Theory of Computation
*introduction
In the course of this lesson, we should be able to:
*Explain the concept of computational theory.
*Define a Turing machine and explain its functioning.
*Define classes of problems (P, NP, NP-hard, NP-complete, …).
*Calculate and express the time efficiency of an algorithm in terms of O(n) notation.
*Our interests
*Basic questions to solve problems
You have a task. Before you formulate an algorithm to perform the task, you have to answer three basic questions:
Q1 - What is an algorithm?
• To make an algorithm, you have to know exactly what it is, its components and how to put these components together.
• To ask questions, you have to know what you are asking questions about.
*Q1 - What is an algorithm?
*We notice the term finite.
* Algorithms should lead to an eventual solution.
* The algorithm should eventually halt.
*Sequence of steps
* Each step should correspond to one logical action.
*keywords
Q2 - Will an algorithm be able to compute the task?
*Computability theory
A Turing machine is a theoretical machine that is used in thought experiments to examine the abilities and limitations of computers.
It is imagined to be a simple computer that reads and writes symbols one at a time on an endless tape by strictly following a set of rules.
It determines what action it should perform next according to its internal "state" and what symbol it currently sees. An example of one of a Turing machine's rules might be: "If you are in state 2 and you see an 'A', change it to 'B' and move left."
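A minimal sketch of this idea in code (the rule table, function names and tape encoding here are illustrative assumptions, not part of any standard definition): each rule maps a (state, symbol) pair to a symbol to write, a direction to move, and the next state.

// Hypothetical one-rule machine matching the example rule above:
// in state 2, reading 'A': write 'B', move left, stay in state 2.
var rules = {
  "2,A": { write: "B", move: -1, next: "2" }
};

// Perform one step; returns the new head position and state, or null to halt.
function step(tape, head, state) {
  var rule = rules[state + "," + (tape[head] || "_")];   // "_" stands for a blank cell
  if (!rule) return null;                                 // no matching rule: the machine halts
  tape[head] = rule.write;
  return { head: head + rule.move, state: rule.next };
}

var tape = ["A", "A"];
console.log(step(tape, 1, "2"), tape);                    // head moves to 0; tape becomes A, B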
*The Church-Turing thesis
*A decision problem is a YES or NO algorithmic problem.
*E.g. given that x and y are two natural numbers, can x evenly divide y? (A decider for this is sketched below.)
*A problem which can be solved by an algorithm is called decidable; otherwise it is undecidable (e.g. "what is the meaning of life?").
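The divisibility question above is decidable; a decider is a procedure that always halts with a YES or NO answer (a minimal sketch, assuming natural numbers with x > 0):

// Decides "can x evenly divide y?" and always halts.
function divides(x, y) {
  return y % x === 0;
}

console.log(divides(3, 12));   // true  (YES)
console.log(divides(5, 12));   // false (NO)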
*What is computable or solvable
*The tiling problem
*The halting problem
*Non-computable problems
It is the problem of determining, for an arbitrary computer program and its input, whether the program will finish running (halt) or continue to run forever.
It can be shown that the halting problem is undecidable, hence unsolvable.
Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist. A key part of the proof was a mathematical definition of a computer and program, which became known as a Turing machine; the halting problem is undecidable over Turing machines. It is one of the first decision problems proved to be undecidable.
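The heart of Turing's argument can be sketched as follows (halts() here is a hypothetical oracle; the whole point of the proof is that it cannot actually be written):

// Hypothetical: returns true if program(input) eventually halts.
function halts(program, input) {
  throw new Error("no general implementation can exist");
}

// If halts() existed, we could write this program:
function paradox(program) {
  if (halts(program, program)) {
    while (true) {}            // loop forever exactly when the oracle says "it halts"
  }                            // otherwise halt immediately
}

// paradox(paradox) would then halt if and only if it does not halt --
// a contradiction, so a general halts() cannot exist.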
*Complexity
*Computational complexity is important when we
want to evaluate the efficiency of algorithms.
*We can solve a problem using two or more methods and still arrive at the same solution. However, one of them will be the "best".
*The best algorithm can be determined through
analysis of algorithms.
*The analysis of algorithms is the area of
computer science that provides tools for
contrasting the efficiency of different methods of
solution.
Efficiency analysis of algorithms
The best or most efficient algorithm uses minimum resources to run to completion (halt).
The efficiency of an algorithm is affected by many factors including:
(a) the computer used, the hardware platform
(b) representation of abstract data types (ADTs)
(c) efficiency of the compiler
(d) competence of the implementer (programming skills)
(e) complexity of the underlying algorithm
(f) size of the input
Efficiency of algorithms
*There are generally two criteria used to determine whether one algorithm is "better" than another.
Space requirements (i.e. how much memory is needed to complete the task). Space is needed for instructions, data and the environment stack (used to save information needed to resume execution of partially completed operations).
Time requirements (i.e. how much time will it take to complete the task).
*Algorithms cannot be fairly compared simply by running them on computers, because run time is system dependent.
Space and time complexities
*The exact running time of an algorithm is usually complex, so we estimate it as a number of steps.
*We seek to understand the run time of algorithms as the input size (n) grows.
*Time requirements can be defined as a numerical function T(n), where T(n) is measured as the number of steps, assuming each step consumes constant time.
*Estimating time complexity
Asymptotic analysis of an algorithm refers to defining the mathematical boundary/framing of its run-time performance.
Hence it can be computed in three cases:
Best case (lower bound) - minimum time required for program execution.
Average case - average time required for program execution.
Worst case (upper bound) - maximum time required for program execution.
*Asymptotic analysis
*The run time of an algorithm is always an approximate measure.
*To do this, the following asymptotic notations are used:
Big O notation, Ο
Omega notation, Ω
Theta notation, θ
*Asymptotic notations
The big-O notation Ο(n) is the formal way to express the upper bound of an algorithm's running time.
O(n) means "on the order of n".
It measures the worst-case time complexity, i.e. the longest amount of time an algorithm can possibly take to complete.
*Big O notation, O(n)
The notation Ω(n) is the formal way to express the lower bound of an algorithm's running time.
It measures the best-case time complexity, i.e. the minimum amount of time an algorithm can possibly take to complete.
*Omega notation, Ω(n)
The notation θ(n) is the formal way to express both
the lower bound and the upper bound of an
algorithm's running time.
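For reference, these informal statements correspond to the standard formal definitions (a standard textbook formulation, stated here for completeness):

$$f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 \ \text{such that}\ f(n) \le c\,g(n)\ \text{for all}\ n \ge n_0$$
$$f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 \ \text{such that}\ f(n) \ge c\,g(n)\ \text{for all}\ n \ge n_0$$
$$f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{and}\ f(n) = \Omega(g(n))$$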
*Using Big O notation
For example, the run time of a given algorithm is given as t(n) = 5n^4 + 3n^2 + n + 9, where the input size is n. Express the run time using big O notation.
Solution
The terms are 5n^4, 3n^2, n, 9.
We select the fastest-growing term 5n^4 and ignore the rest.
5 is a constant and is also ignored.
Hence we have g(n) = n^4.
Then t(n) = O(g(n)) = O(n^4).
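To see why dropping the constant and the lower-order terms is justified, note that for all n ≥ 1:

$$t(n) = 5n^4 + 3n^2 + n + 9 \le 5n^4 + 3n^4 + n^4 + 9n^4 = 18n^4$$

so t(n) ≤ 18·n^4 for every n ≥ 1, which is exactly what t(n) = O(n^4) requires (take c = 18 and n₀ = 1 in the formal definition of O).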
*exercises
▪ Functions learnt in mathematics help us to express time complexity.
▪ Functions map a set of input values to a set of output values.
[Diagram: input → function → output]
*Notion of functions
• A function of the form f(x) = ax^p + bx + c is called a polynomial function. In this type of function, as the value of x increases, the value of the function is dominated by the term ax^p (p > 1).
• A function of the form f(x) = a·b^x is called an exponential function. This function grows at a faster rate.
• A function of the form f(x) = a log_n x is called a logarithmic function. For example, log_2 8 = 3, log_2 16 = 4 and log_3 27 = 3.
*Notion of functions
*Thus we see that
*O(n!) >> O(3^n) >> O(2^n) >> … >> O(n^k) >> … >> O(n^2) >> O(n log n) >> O(n) >> O(log n) >> O(1)
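One quick way to see this ordering is to tabulate a few of these functions for growing n (a small illustrative sketch):

// Compare the growth of log2 n, n, n^2 and 2^n for a few input sizes.
var sizes = [1, 2, 4, 8, 16];
for (var k = 0; k < sizes.length; k++) {
  var n = sizes[k];
  console.log("n=" + n,
              "log2(n)=" + Math.log2(n),
              "n^2=" + (n * n),
              "2^n=" + Math.pow(2, n));
}
// For n = 16: log2(n) = 4, n^2 = 256, 2^n = 65536.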
*Notion of functions
*Order of complexity
* Constant time O(1): constant time or space regardless of the input size. E.g. assigning a value to a variable.
* Linear time O(n): the time taken by the algorithm increases linearly with the size of the input data set. For example, printing the numbers from 0 to n-1:
function printNumbers(n) {
  for (var i = 0; i < n; i++) {
    console.log(i)           // one constant-time step per value of i
  }
}
*Order of complexity
Quadratic time O(n^2): the time taken by the algorithm is proportional to the square of the size of the input data set. In general, algorithms with nested loops have time complexity O(n^2). For example, a sketch is given below.
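This hypothetical routine prints every ordered pair of indices; the name is illustrative only:

// The inner statement runs n * n times, so the time complexity is O(n^2).
function printPairs(n) {
  for (var i = 0; i < n; i++) {
    for (var j = 0; j < n; j++) {
      console.log(i, j);
    }
  }
}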
*Order of complexity
▪ Logarithmic time O(log n): the time taken by the algorithm grows very slowly as input data is added to the data set.
▪ For example, printing the numbers that are powers of 2 between 2 and n.
▪ In this algorithm, when n = 100, the numbers printed are 2, 4, 8, 16, 32, 64.
function exLogarithmic(n) {
  for (var i = 2; i < n; i = i * 2) {
    console.log(i)           // i doubles each time, so about log2(n) steps
  }
}
*Order of complexity
Polynomial time, O(n^k): an algorithm is said to run in polynomial time if its execution time is proportional to n raised to some constant power k. A program that contains k nested loops, each with a number of steps proportional to n, will take time proportional to n^k.
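For instance, a program with k = 3 nested loops runs in time proportional to n^3 (a small illustrative sketch; the routine name is hypothetical):

// Three nested loops of n steps each: n * n * n iterations, so O(n^3).
function countTriples(n) {
  var count = 0;
  for (var i = 0; i < n; i++) {
    for (var j = 0; j < n; j++) {
      for (var k = 0; k < n; k++) {
        count += 1;
      }
    }
  }
  return count;               // returns n^3
}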
*Order of complexity
Exponential time, O(k^n): an algorithm is said to run in exponential time if its execution time is proportional to some constant k raised to the nth power. Exponential-time computations are generally not practical.
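As an illustration, enumerating all subsets of a set of n items takes about 2^n steps (a small sketch with a hypothetical counting routine):

// Counts all 2^n subsets of n items by enumerating bit masks -- exponential time.
function countSubsets(n) {
  var count = 0;
  for (var mask = 0; mask < Math.pow(2, n); mask++) {
    count += 1;               // each mask encodes one subset
  }
  return count;               // returns 2^n
}

console.log(countSubsets(20)); // 1048576 -- already over a million steps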
*Order of complexity
Calculating time complexities
• Consider the code given below that finds the highest number in an array of n elements.
max = array[0]
for (var i = 1; i <= n - 1; i++) {
  if (array[i] > max) {
    max = array[i]
  }
}
• The time complexity is found by counting the number of times each line runs. The time complexity of this algorithm is n + 1. Since the 1 is insignificant, this algorithm has linear time complexity O(n).
Calculating time complexities
• Consider the code given below that uses nested for loops.
count = 0
for (var i = 0; i < n; i++) {
  count += 1;
  for (var j = 0; j < 2 * n; j++) {
    count += 1;
  }
}
• The line count+=1 in the inner loop runs 2n times for each of the n iterations of the outer loop (2n^2 times in total), and the count+=1 in the outer loop runs n times.
• Therefore, the running time of this algorithm is 2n^2 + n, and hence its time complexity is O(n^2).
Tractable and intractable problems
[Figure: a weighted graph of towns with the road distances between them]
The traveling salesman's problem
▪ We notice that there are n choices for the first city; once we pick it, we are left with n-1 cities.
▪ When we pick the next, we are left with n-2 choices,
▪ And so on…
▪ Hence the maximum number of routes is
▪ n × (n-1) × (n-2) × … × 2 × 1 = n!
▪ Thus if we have n = 12 towns, we will have 12! routes. On a medium-speed computer, it will take 39 s. If n = 23, on the same computer, it will take 51 years.
▪ This is an intractable problem because the time taken to solve the problem increases dramatically as the size of the input increases.
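A small sketch showing the blow-up: simply counting the routes with a factorial already produces astronomical numbers (illustrative helper only; it counts routes, it does not search them):

// Number of routes a brute-force search would have to examine for n towns.
function routes(n) {
  var r = 1;
  for (var i = 2; i <= n; i++) {
    r = r * i;                 // builds up n!
  }
  return r;
}

console.log(routes(12));       // 479001600 (about 4.8 * 10^8 routes)
console.log(routes(23));       // about 2.6 * 10^22 routes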
Optimization problems
Activity-1
Duration: 10 minutes
Code (c)
function ques3(n) {
  var count = 0
  for (var i = 0; i < n; i++) {
    count += 1;
    for (var j = 0; j < 7 * n; j++) {
      count += 1;
    }
  }
  return count;
}

Code (d)
function ques4(n) {
  var count = 0
  for (var i = 0; i < n * n; i++) {
    count += 1;
  }
  return count;
}
End of topic questions
1. State the factors that determine the efficiency of algorithms.
2. Why is readability important for an efficient algorithm?
3. What is the time measure? What does the parameter 'time' refer to?
4. What is the space measure?
5. What is refactoring? What are its advantages?
6. How can code containing arrays be optimised?
7. List some measures taken to make the program more understandable.
End of topic questions
8. Complete the table below for the values of n given. Consider a problem that can be solved using algorithms A, B and C. The time complexities of algorithms A, B and C are O(log2 n), O(2^n) and O(n) respectively.

Notation   | n=1 | n=2 | n=4 | n=8 | n=16
O(n)       |     |     |     |     |
O(log2 n)  |     |     |     |     |
O(2^n)     |     |     |     |     |

Big O notation