
*The Theory of Computation
Computability and complexity theory


*The theory of computation is the branch that
deals with how efficiently problems can be
solved on a model of computation, using an
algorithm.
*The field is divided into three major
branches: automata theory, computability theory,
and computational complexity theory.
*In order to perform a rigorous study of computation,
computer scientists work with a mathematical
abstraction of computers called a model of
computation. There are several models in use, but the
most commonly examined is the Turing machine.

*Introduction
In the course of this lesson, we should be able to:
*Explain the concept of computational theory.
*Define a Turing machine and explain its functioning.
*Define classes of problems (P, NP, NP-hard, NP-complete, …).
*Calculate and express the time efficiency of an algorithm in terms of big-O notation.

*Our interests
*Basic questions to solve problems
You have a task. Before you formulate an algorithm to perform the task, you have to answer three basic questions:
Q1-What is an algorithm?
• To make an algorithm, you have to know exactly what it is, its components, and how to put these components together.
• To ask questions, you have to know what you are asking questions about.

Q2-Will an algorithm be able to compute the task?
(computability)
• Is the task I want to perform impossible?
• Is it possible to formulate an impossible task?
• What are the consequences for other algorithms?
*Questions (cont…)

Q3-How complex is the algorithm?
* Will the algorithm take too much time? (For cryptography, this is a good thing.)
* Can I make a faster algorithm for a given type of task?
There are several definitions of algorithms; however, let us consider the following:
*An algorithm is a finite, discrete, unambiguous set of instructions for solving a particular problem.
*It has an input and is expected to produce an output.
*Each instruction can be carried out in a finite amount of time in a deterministic way.

*Q1-What is an algorithm?
*We notice the term finite.
* Algorithms should lead to an eventual solution.
* The algorithm should eventually halt.
*Sequence of steps
* Each step should correspond to one logical action.

*Keywords
Q2-Will an algorithm be able to compute the task?
*Computability theory deals primarily with problems which are solvable or unsolvable on a model of computation/abstract machine.
*If a problem is solvable on an abstract machine, then it can be solved on a computer.
*An example of an abstract machine is the Turing machine.

*Computability theory
A Turing machine is a theoretical machine that is used in thought experiments to examine the abilities and limitations of computers.
It is imagined to be a simple computer that reads and writes symbols one at a time on an endless tape by strictly following a set of rules.
It determines what action it should perform next according to its internal "state" and what symbol it currently sees. An example of one of a Turing machine's rules might be: "If you are in state 2 and you see an 'A', change it to 'B' and move left."

*The Turing machine


A Turing machine remembers only one number, called its state.
It moves back and forth along an infinite tape, scanning and writing symbols and changing its state.
Its action at a given step in the calculation is based on only two factors: its current state number and the symbol that it is currently scanning on the tape. It continues in this way until it enters a special state called the halt state.
In spite of their simplicity, Turing machines can perform any calculation that can be performed by any computer.

*The Turing machine
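To make the rule-following behaviour concrete, here is a minimal simulator sketch in JavaScript (not part of the original slides; the rule table, state names and the little machine that rewrites A's as B's are invented for illustration):

// Rules map "state,symbol" to [symbolToWrite, move, nextState].
function runTuringMachine(rules, tape, state) {
  var head = 0;
  while (state !== "halt") {
    var symbol = tape[head] || "_";        // "_" stands for a blank cell
    var rule = rules[state + "," + symbol];
    if (!rule) break;                      // no applicable rule: stop
    tape[head] = rule[0];                  // write
    head += (rule[1] === "R") ? 1 : -1;    // move left or right
    state = rule[2];                       // change state
  }
  return tape.join("");
}

var rules = {
  "1,A": ["B", "R", "1"],    // in state 1 on 'A': write 'B', move right, stay in state 1
  "1,_": ["_", "R", "halt"]  // in state 1 on a blank: halt
};
console.log(runTuringMachine(rules, ["A", "A", "A"], "1")); // "BBB_"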


*The Turing machine
In a deterministic Turing machine, the set of rules prescribes at most one action to be performed for any given situation.
A non-deterministic Turing machine (NTM), by contrast, may have a set of rules that prescribes more than one action for a given situation.
For example, a non-deterministic Turing machine may have both "If you are in state 2 and you see an 'A', change it to a 'B' and move left" and "If you are in state 2 and you see an 'A', change it to a 'C' and move right" in its rule set.
Deterministic and non-deterministic Turing machines
*Alan Turing investigated whether there were some mathematical problems which cannot be solved by any such abstract machine (Turing machine) and concluded that:
* A function is computable if it is computable by one of these abstract machines.

*The Church-Turing thesis
*A decision problem is a YES or NO algorithmic problem.
*E.g. given that x and y are two natural numbers, can x evenly divide y?
*A problem which can be solved by an algorithm is called decidable; otherwise it is undecidable (e.g. what is the meaning of life?).

*The decision problem


*A computation is usually modeled as a mapping from inputs to outputs carried out by a formal machine or program which processes its input in a sequence of steps.
*An effectively computable function is one that can be computed in a finite amount of time using a finite number of steps and resources.
*Using Turing's concept of abstract machines, we would say that a function is non-computable if it is not algorithmic and there exists no Turing machine that could compute it.

*What is computable or solvable?
*The tiling problem
*The halting problem

*Non-computable
problems
It is the problem of determining, given an arbitrary computer program and its input, whether the program will finish running (halt) or continue to run forever.
It can be shown that the halting problem is undecidable, hence unsolvable.
Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist. A key part of the proof was a mathematical definition of a computer and program, which became known as a Turing machine; the halting problem is undecidable over Turing machines. It is one of the first examples of a decision problem proved undecidable.

*The halting problem
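The flavour of Turing's argument can be sketched in code (an illustration, not the formal proof). Suppose, for the sake of contradiction, that a hypothetical function haltsOn(program, input) always answered correctly; the self-referential function below would then defeat it:

// Hypothetical oracle: true if program(input) eventually halts.
// No such total, always-correct function can actually be written.
function haltsOn(program, input) {
  /* assumed to exist only for the sake of contradiction */
}

// Do the opposite of whatever the oracle predicts about us.
function contrary(program) {
  if (haltsOn(program, program)) {
    while (true) {}   // oracle said "halts", so loop forever
  }
  // oracle said "runs forever", so halt immediately
}

// Does contrary(contrary) halt? If the oracle says yes, contrary loops forever;
// if it says no, contrary halts. Either way the oracle is wrong, so it cannot exist.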


*You are given the task of covering a large area using colored tiles. A tile is defined to be a full square divided into quarters by its diagonals. Each quarter is colored with a different color. The tiles are assumed to have a fixed orientation and are not rotatable. The input is a finite set of tile descriptions.
*The problem asks whether any finite area, of any size and with integer dimensions, can be covered using only the given tiles, following the constraint that the colors of any touching edges of any two adjacent tiles must be identical.

The tiling problem
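As a concrete illustration (invented here, not in the slides), a tile and the edge-matching constraint might be represented as follows; the undecidable question is whether a given finite set of such tiles can cover every finite area.

// Each tile records the colour of its four triangular quarters.
var tileA = { top: "red", right: "blue",  bottom: "green", left: "blue" };
var tileB = { top: "red", right: "green", bottom: "blue",  left: "blue" };

// Two tiles may sit side by side only if the touching edges share a colour.
function canPlaceRightOf(leftTile, rightTile) {
  return leftTile.right === rightTile.left;
}

console.log(canPlaceRightOf(tileA, tileB)); // true: blue meets blue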


Q3-How complex is the algorithm?
*Does the algorithm exist?
*How many discrete steps does it have?
*What is the cost of each step (in terms of time and space)?
*How does complexity depend on the size of the input?
*Computational complexity theory is a branch of the theory of computation that investigates the cost of solving computational problems, focusing on time, space and other requirements.

*Complexity
*Computational complexity is important when we want to evaluate the efficiency of algorithms.
*We can solve a problem using two or more methods and still arrive at the same solution. However, one of them will be the "best".
*The best algorithm can be determined through analysis of algorithms.
*The analysis of algorithms is the area of computer science that provides tools for contrasting the efficiency of different methods of solution.

Efficiency analysis
of algorithms
The best or most efficient algorithm uses minimum resources to run to completion (halt).
The efficiency of an algorithm is affected by many factors including:
(a) the computer used, the hardware platform
(b) representation of abstract data types (ADTs)
(c) efficiency of the compiler
(d) competence of the implementer (programming skills)
(e) complexity of the underlying algorithm
(f) size of the input

Efficiency of algorithms
*There are generally two criteria used to determine whether one algorithm is "better" than another:
Space requirements (i.e. how much memory is needed to complete the task). Space is needed for instructions, data and the environment stack (used to save information needed to resume execution of partially completed operations).
Time requirements (i.e. how much time it will take to complete the task).
*Algorithms cannot be compared simply by running them on computers: run time is system dependent.
Space and time complexities
*The exact running time of an algorithm is usually complex, so we estimate it as a number of steps.
*We seek to understand the run time of algorithms as the input size (n) grows.
*Time requirements can be defined as a numerical function T(n), where T(n) can be measured as the number of steps, assuming each step consumes constant time.

*Estimating Time
Complexity
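As a rough illustration (my own sketch, not from the slides), T(n) can be estimated by instrumenting an algorithm with a step counter:

// Count the "steps" taken while summing the first n numbers;
// here each assignment, loop test and addition counts as one step.
function sumWithStepCount(n) {
  var steps = 0;
  var total = 0; steps++;                  // initial assignment
  for (var i = 1; i <= n; i++) {
    steps++;                               // loop test
    total += i; steps++;                   // one addition
  }
  return { total: total, steps: steps };   // T(n) is roughly 2n + 1
}

console.log(sumWithStepCount(10)); // { total: 55, steps: 21 }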
Asymptotic analysis of an algorithm refers to defining the mathematical bounds of its run-time performance.
It can be computed in three cases:
 Best case (lower bound) - minimum time required for program execution.
 Average case - average time required for program execution.
 Worst case (upper bound) - maximum time required for program execution.

*Asymptotic
analysis
*The run time of an algorithm is always an approximate measure.
*To express it, the following asymptotic notations are used:
Big O notation, Ο
Omega notation, Ω
Theta notation, θ

*Asymptotic
notations
The big-O notation Ο(n) is the formal way to express the upper bound of an algorithm's running time.
O(n) means "on the order of n".
It measures the worst-case time complexity, or the longest amount of time an algorithm can possibly take to complete.

*Big O notation, O(n)
The notation Ω(n) is the formal way to express the lower bound of an algorithm's running time.
It measures the best-case time complexity, or the shortest amount of time an algorithm can possibly take to complete.

*Omega notation, Ω(n)
The notation θ(n) is the formal way to express both the lower bound and the upper bound of an algorithm's running time.

*Theta notation, θ(n)


*Given two functions f and g such that g(n) > 0 for all n, we write f(n) = O(g(n)), read as "f(n) is big O of g(n)", if there exist constants c, k > 0 such that f(n) ≤ c·g(n) for all n > k.
*To convert time functions to big-O notation, we concentrate on the term that takes the most time, eliminating constants, coefficients, and terms with insignificant contributions to the total run time.

*Using Big O notation
For example, the run time of a given algorithm is given as t(n) = 5n^4 + 3n^2 + n + 9, where the input size is n. Express the run time using big O notation.
Solution
The terms are 5n^4, 3n^2, n, 9.
We select the largest term, 5n^4, and ignore the rest.
5 is a constant coefficient and is also ignored.
Hence we have g(n) = n^4.
Then t(n) = O(g(n)) = O(n^4).

*Using Big O notation
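A quick numerical sanity check of the definition (an illustration of mine, not part of the slides): with the constants c = 6 and k = 2, we have t(n) ≤ c·n^4 for all n > k, which is exactly what t(n) = O(n^4) asserts.

// Check f(n) <= c*g(n) for all k < n <= upTo.
function bigOHolds(f, g, c, k, upTo) {
  for (var n = k + 1; n <= upTo; n++) {
    if (f(n) > c * g(n)) return false;
  }
  return true;
}

var t = function (n) { return 5 * Math.pow(n, 4) + 3 * n * n + n + 9; };
var g = function (n) { return Math.pow(n, 4); };
console.log(bigOHolds(t, g, 6, 2, 1000)); // true: the bound holds for 2 < n <= 1000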


Take note:
O(k) = O(1): constant times are expressed as O(1).
O(kT) = O(T): constant factors are ignored.
O(J) + O(T) = O(J + T) = max[O(J), O(T)]: when adding two functions together, the bigger of the two is chosen.
O(J)·O(T) = O(JT): the product of two big-O terms gives the big-O of the product of the functions inside.

*Using Big O notation


Express the run time of the following time functions using big O notation.
1. 5n^3 + 3n^2 + n + 1
2. n log n + n
3. n^2 + n
4. √n + log n
5. 20

*Exercises
▪ Functions learnt in mathematics help us to express time complexity. Functions map a set of input values to a set of output values.

[Diagram: a function maps input values to output values]
▪ A function of the form f(x) = ax + b is termed a linear function. As the value of x increases, the ax term plays an important role in determining the value of the function. When a graph of this function is drawn, it is a straight line.

*Notion of functions
• A function of the form f(x) = ax^p + bx + c is called a polynomial function. In this type of function, as the value of x increases, the value of the function is dominated by the ax^p term (p > 1).
• A function of the form f(x) = a·b^x is called an exponential function. This function grows at a very fast rate.
• A function of the form f(x) = a·log_b x is called a logarithmic function. For example, log_2 8 = 3, log_2 16 = 4 and log_3 27 = 3.
*Notion of functions
*Notion of
functions
*Thus we see that
*O(n!) >> O(3^n) >> O(2^n) >> … >> O(n^k) >> … >> O(n^2) >> O(n log n) >> O(n) >> O(log n) >> O(1)

O(n!) is the greatest.

*Notion of
functions
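The ordering can be seen by tabulating a few of these functions for small n (an illustrative snippet added here, not in the original slides):

// Print some common complexity functions side by side.
function factorial(n) {
  return n <= 1 ? 1 : n * factorial(n - 1);
}

[2, 4, 8, 16].forEach(function (n) {
  console.log(
    "n=" + n,
    "| log n:", Math.round(Math.log2(n)),
    "| n log n:", n * Math.round(Math.log2(n)),
    "| n^2:", n * n,
    "| 2^n:", Math.pow(2, n),
    "| n!:", factorial(n)
  );
});
// By n = 16, 2^n is 65536 and n! is about 2.09e13, dwarfing the polynomial terms.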
*Order of
complexity
* Constant time O(1): constant time or space regardless of the input size. E.g. assigning a value to a variable.
* Linear time O(n): the time taken by the algorithm increases linearly with the size of the input data set. For example, printing the numbers from 0 to n-1:
function printNumbers(n) {
  for (var i = 0; i < n; i++) {
    console.log(i);   // one step per value of i, so n steps in total
  }
}

*Order of
complexity
Quadratic time O(n^2): the time taken by the algorithm is proportional to the square of the size of the input data set. In general, algorithms with nested loops have time complexity O(n^2). For example:

function exQuadratic(n) {
  for (var i = 0; i < n; i++) {
    console.log(i);
    for (var j = i; j < n; j++) {
      console.log(j);   // the nested loops give on the order of n^2 steps
    }
  }
}

*Order of
complexity
▪ Logarithmic time O(log n): the time taken by the algorithm grows very slowly with the addition of input data to the data set.
▪ For example, printing the numbers that are powers of 2 between 2 and n.
▪ In this algorithm, when n = 100, the numbers printed are 2, 4, 8, 16, 32, 64.
function exLogarithmic(n) {
  for (var i = 2; i < n; i = i * 2) {
    console.log(i);   // i doubles each time, so roughly log2(n) steps
  }
}
*Order of
complexity
Polynomial time, O(n^k): an algorithm is said to run in polynomial time if its execution time is proportional to n raised to some constant power k. A program that contains k nested loops, each with a number of steps proportional to n, will take time proportional to n^k.

*Order of
complexity
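For instance, a hypothetical routine with k = 3 nested loops over n items takes on the order of n^3 steps:

// Three nested loops of size n: O(n^k) with k = 3.
function countTriples(n) {
  var count = 0;
  for (var i = 0; i < n; i++) {
    for (var j = 0; j < n; j++) {
      for (var k = 0; k < n; k++) {
        count += 1;
      }
    }
  }
  return count; // equals n^3
}

console.log(countTriples(10)); // 1000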
Exponential time, O(k^n): an algorithm is said to run in exponential time if its execution time is proportional to some constant k raised to the nth power. Exponential-time computations are generally not practical.

*Order of
complexity
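A standard illustration (not from the original slides) is enumerating every subset of an n-element set, of which there are 2^n:

// Enumerate all subsets of an n-element set: 2^n of them, so this is O(2^n).
function countSubsets(n) {
  var count = 0;
  var total = Math.pow(2, n);
  for (var mask = 0; mask < total; mask++) {
    count += 1; // each mask encodes one subset
  }
  return count;
}

console.log(countSubsets(10)); // 1024; doubling n to 20 gives 1048576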
Calculating time complexities
• Consider the code given below that finds the highest number in an array of n elements.
var max = array[0];
for (var i = 1; i < n; i++) {
  if (array[i] > max) {
    max = array[i];
  }
}
• The time complexity is found by calculating the number of times each line runs. The loop body runs n-1 times, plus one initial assignment, giving roughly n steps. Since the constant term is insignificant, this algorithm has linear time complexity O(n).
Calculating time
complexities

• Consider the code given below that uses nested for loops.
var count = 0;
for (var i = 0; i < n; i++) {
  count += 1;
  for (var j = 0; j < 2 * n; j++) {
    count += 1;
  }
}
• The line count += 1 in the inner loop runs 2n times for each of the n iterations of the outer loop (2n^2 times in total), and count += 1 in the outer loop runs n times.
• Therefore, the time complexity of this algorithm is 2n^2 + n, and hence O(n^2).
Tractable and intractable problems

▪ Tractable problems are those that are solvable in reasonable (polynomial) time, i.e. O(1), O(n), O(n^2), O(n^3), O(log n), O(n log n), …
▪ Intractable problems are those that cannot be solved in polynomial time; they require exponential time or worse (e.g. O(2^n), O(n^n), O(n!)).
▪ An example of an intractable problem is the travelling salesman problem.
The travelling salesman problem

▪ Consider a map with n towns. Find the cheapest round-trip route that takes the salesman through every town and back to the starting town. The diagram below shows a travelling salesman problem with four towns. This example assumes there is a route between any two towns.

The travelling salesman problem

▪ The shortest route is 1 - 2 - 3 - 4, with a total cost of 260.
[Diagram: four towns (1-4) joined by weighted edges]
The travelling salesman problem
▪ We notice that there are n choices for the first town; having picked it, we are left with n-1 towns.
▪ We pick the next, and are left with n-2 choices,
▪ and so on…
▪ Hence the maximum number of routes is
▪ n*(n-1)*(n-2)* … *2*1 = n!
▪ Thus if we have n = 12 towns, we will have 12! routes. On a medium-speed computer, checking them all takes about 39 s. If n = 23, on the same computer, it would take about 51 years.
▪ This is an intractable problem because the time taken to solve it increases dramatically as the size of the input increases.
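The factorial growth is easy to make concrete (a small sketch of mine, just counting the candidate routes a brute-force search must examine):

// Number of candidate routes for n towns.
function factorial(n) {
  var result = 1;
  for (var i = 2; i <= n; i++) {
    result *= i;
  }
  return result;
}

console.log(factorial(12)); // 479001600 routes for 12 towns
console.log(factorial(23)); // roughly 2.59e22 routes for 23 towns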
Optimization problems

▪ An optimization problem is one that asks: "What is the optimal solution to a problem X?"
Reducibility
▪ A reduction is a way of converting one problem to another such that a solution to the second problem can be used to solve the first.
▪ For example: if you want to find your way in a city, you find a map; the problem of finding your way has been reduced to that of finding a map.
▪ Also, to find the area of a circle you need to know the radius.
▪ Reducibility always involves two problems, A and B. If A reduces to B, we can use a solution to B to solve A.
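As a small, made-up illustration of a reduction: the problem "does this array contain a duplicate?" (problem A) can be reduced to sorting (problem B); once B is solved, equal elements sit next to each other and a single pass answers A.

// Reduce "has duplicates?" (A) to sorting (B): a solution to B is used to solve A.
function hasDuplicates(values) {
  var sorted = values.slice().sort(function (a, b) { return a - b; }); // solve B
  for (var i = 1; i < sorted.length; i++) {
    if (sorted[i] === sorted[i - 1]) return true; // equal neighbours = duplicate
  }
  return false;
}

console.log(hasDuplicates([3, 1, 4, 1, 5])); // true
console.log(hasDuplicates([2, 7, 1, 8]));    // false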
Classes of problems

▪ This is the classification of problems according to their degree of difficulty. We identify four types of problems:
□ P-type (easy)
□ NP-type (medium)
□ NP-hard (hard)
□ NP-complete (difficult)
Classes of problems: P-type

▪ Class P (polynomial time): these are problems with polynomial-time deterministic algorithms.
▪ They are solvable in O(n^k) for some constant k, where n is the input size of the problem.
▪ These algorithms compute the correct answer directly.
▪ For example: all arithmetic operations, linear and binary search, sorting algorithms, etc.
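Binary search, mentioned above, is a typical class P algorithm: on a sorted array of n elements it needs only O(log n) comparisons. A standard version is sketched below.

// Binary search on a sorted array: O(log n) comparisons.
function binarySearch(sorted, target) {
  var low = 0;
  var high = sorted.length - 1;
  while (low <= high) {
    var mid = Math.floor((low + high) / 2);
    if (sorted[mid] === target) return mid;  // found it
    if (sorted[mid] < target) low = mid + 1; // search the right half
    else high = mid - 1;                     // search the left half
  }
  return -1; // not present
}

console.log(binarySearch([2, 5, 8, 12, 16, 23], 16)); // 4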
Classes of problems: NP-type

▪ Class NP (non-deterministic polynomial time): these are decision problems that are solvable on a non-deterministic machine (with a non-deterministic algorithm).
▪ These problems are quick to verify, slow to solve.
▪ A non-deterministic computer is one that can guess the right answer or solution.
▪ NP can also be thought of as the class of problems whose solutions can be verified in polynomial time.
▪ Example: the travelling salesman problem.
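"Quick to verify" can be made concrete: for the decision version of the travelling salesman problem ("is there a round trip costing at most B?"), checking a proposed route takes only polynomial time, even though finding one may not. The distance table and route below are invented for illustration.

// Polynomial-time verifier: does the candidate route visit every town exactly once
// and cost no more than the budget?
function verifyTour(distances, route, budget) {
  var n = distances.length;
  if (route.length !== n) return false;
  var seen = {};
  var cost = 0;
  for (var i = 0; i < n; i++) {
    if (seen[route[i]]) return false;                // a town visited twice
    seen[route[i]] = true;
    cost += distances[route[i]][route[(i + 1) % n]]; // edge to the next town, wrapping to the start
  }
  return cost <= budget;
}

var distances = [
  [0, 10, 15, 20],
  [10, 0, 35, 25],
  [15, 35, 0, 30],
  [20, 25, 30, 0]
];
console.log(verifyTour(distances, [0, 1, 3, 2], 80)); // true: 10 + 25 + 30 + 15 = 80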
Classes of problems: NP-complete type

▪ These are problems which are quick to verify, slow to solve, and to which any other NP problem can be reduced.
▪ They are the problems X in NP for which it is possible to reduce any other NP problem Y to X in polynomial time.
▪ This means that we could solve Y quickly if we knew how to solve X quickly.
▪ Examples: the travelling salesman problem (decision version), graph coloring.
Classes of problems: NP-hard type

▪ These problems may be slow to verify and slow to solve; every NP problem can be reduced to them.
▪ A problem X is NP-hard if there is an NP-complete problem Y such that Y is reducible to X in polynomial time.
▪ Examples: the halting problem, the travelling salesman problem (optimization version), graph coloring.

Activity-1
Duration: 10 minutes

1. Determine the time complexities of the algorithms given.

Code (a)
function ques1(n) {
  var count = 0;
  for (var i = 0; i < n; i++) {
    count += 1;
  }
  count += 5;
  return count;
}

Code (b)
function ques2(n) {
  var count = 0;
  for (var i = 0; i < n; i++) {
    count += 1;
  }
  for (var i = 0; i < 3 * n; i++) {
    count += 1;
  }
  return count;
}

Activity-1
Duration: 10 minutes

1. Determine the time complexities of the algorithms given.

Code (c)
function ques3(n) {
  var count = 0;
  for (var i = 0; i < n; i++) {
    count += 1;
    for (var j = 0; j < 7 * n; j++) {
      count += 1;
    }
  }
  return count;
}

Code (d)
function ques4(n) {
  var count = 0;
  for (var i = 0; i < n * n; i++) {
    count += 1;
  }
  return count;
}
End of topic

Questions
1. State the factors that determine the efficiency of algorithms.
2. Why is readability important for an efficient algorithm?
3. What is the time measure? What does the parameter 'time' refer to?
4. What is the space measure?
5. What is refactoring? What are its advantages?
6. How can code containing arrays be optimised?
7. List some measures taken to make a program more understandable.
End of topic

Questions
8. Complete the table for the values of n given. Consider a problem that can be solved using algorithms A, B and C. The time complexities of algorithms A, B and C are O(log_2 n), O(2^n) and O(n^2). Which of the algorithms is the fastest based on the number of operations?

Notation   | n=1 | n=2 | n=4 | n=8 | n=16
O(n)       |     |     |     |     |
O(log_2 n) |     |     |     |     |
O(n^2)     |     |     |     |     |
O(2^n)     |     |     |     |     |
*Use big O notation to compare the following algorithms:

Big O notation
