
UNIVERSITY OF NORTHEASTERN PHILIPPINES
School of Graduate Studies
City of Iriga

Theory of Computation and Complexity Theory
ED 215 (COMPUTER EDUCATION)

Prepared by:
Stephanie P. Chu
Theory of Computation (TOC)
• is a branch of Computer Science concerned with how problems can be solved using algorithms and how efficiently they can be solved.

• Real-world computers perform computations that by nature run like mathematical models to solve problems in systematic ways. The essence of the theory of computation is to help develop mathematical and logical models that run efficiently and halt. Since all machines that implement logic apply TOC, studying TOC gives learners insight into the limitations of computer hardware and software.
Key considerations of computational problems
• What can and cannot be computed.
• The speed of such computations.
• The amount of memory in use during such computations.

Importance of the Theory of Computation

The theory of computation forms the basis for:
• Writing efficient algorithms that run on computing devices.
• Programming language research and development.
• Efficient compiler design and construction.

The Theory of Computation is made up of three branches. They are:
• Automata Theory
• Computability Theory
• Complexity Theory
Automata Theory
• Mathematicians and Computer Scientists developed this theoretical computer science branch to simplify the logic of computation by using well-defined abstract computational devices (models).
• Automata Theory is the study of abstract computational devices. It forms a formal framework for designing and analyzing computing devices such as biocomputers and quantum computers. These models are essential in several areas of computation (applied and theoretical fields).
• An Automaton is a machine that operates on input and follows a defined pattern or configuration to produce the desired output. Through automata, we learn how problems are solved and functions are computed by machines.
• A basic computation performed on/by an automaton is defined by the following features:
• A set of input symbols.
• The configuration states.
• Output.

• Now, let's understand the basic terminology, which is important and frequently used in the Theory of Computation.

• Symbol: A symbol (often also called a character) is the smallest building block; it can be any letter, digit, or picture.
Alphabet (Σ): An alphabet is a set of symbols, and it is always finite.

String: A string is a finite sequence of symbols from some alphabet. A string is generally denoted as w, and the length of a string is denoted as |w|.

Note: Σ* is the set of all possible strings (of every finite length, including the empty string) over the alphabet Σ. This implies that a language is a subset of Σ*.
• Note – If the number of symbols in alphabet Σ is represented by |Σ|, then the number of strings of length n possible over Σ is |Σ|^n.

• Language: A language is a set of strings chosen from some Σ*, or we can say: 'a language is a subset of Σ*'. A language formed over Σ can be finite or infinite.
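A minimal sketch of these definitions in Python (the binary alphabet and the length-3 choice are illustrative, not from the source): enumerating all strings of length n over Σ confirms the |Σ|^n count, and any subset of them is a language.

```python
from itertools import product

sigma = {"0", "1"}   # an illustrative finite alphabet
n = 3

# Every string of length n over sigma; there are |sigma| ** n of them.
strings = ["".join(w) for w in product(sorted(sigma), repeat=n)]
print(strings)                           # ['000', '001', ..., '111']
print(len(strings) == len(sigma) ** n)   # True: 2 ** 3 == 8

# A language is any subset of Σ*, e.g. the strings ending with '0'.
language = [w for w in strings if w.endswith("0")]
print(language)                          # ['000', '010', '100', '110']
```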
Branches of Automata Theory
• Finite Automata (FA): This is a computer model that is limited in its computational ability. The model fits devices with limited memory. It is a simple abstract machine with five elements that define its functioning and processing of problems.
A Finite Automaton (FA)
• is the simplest machine to recognize patterns.
• A finite automaton, or finite state machine, is an abstract machine that has five elements (a 5-tuple).
• It has a set of states and rules for moving from one state to another, where each move depends upon the applied input symbol. Basically, it is an abstract model of a digital computer.
The following figure shows some essential features of a general automaton.

The figure shows the following features of automata:
• Input
• Output
• States of automata
• State relation
• Output relation
A Finite Automaton consists of the following five elements. The formal specification of the machine is the 5-tuple

M = (Q, Σ, δ, q0, F)

where Q is a finite set of states, Σ is the finite input alphabet, δ : Q × Σ → Q is the transition function, q0 ∈ Q is the initial state, and F ⊆ Q is the set of final (accepting) states.
FA is characterized into two types:

1) Deterministic Finite Automata (DFA)

In a DFA, for a particular input character, the machine goes to one state only. A transition function is defined on every state for every input symbol. Also, in a DFA a null (or ε) move is not allowed, i.e., a DFA cannot change state without reading an input character. For example, a DFA with Σ = {0, 1} that accepts all strings ending with 0 is sketched in the code below.
One important thing to note is that there can be many possible DFAs for a pattern. A DFA with a minimum number of states is generally preferred.
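As a concrete sketch of such a machine (the state names q0 and q1 are illustrative, since the original figure is not reproduced here), the two-state DFA below accepts exactly the strings over {0, 1} that end with 0:

```python
# A two-state DFA over {0, 1} that accepts strings ending with 0.
# Exactly one next state exists for every (state, symbol) pair.
TRANSITIONS = {
    ("q0", "0"): "q1",  # just read a 0: accept if the input ends here
    ("q0", "1"): "q0",
    ("q1", "0"): "q1",
    ("q1", "1"): "q0",
}
START, FINAL = "q0", {"q1"}

def dfa_accepts(w: str) -> bool:
    state = START
    for symbol in w:
        state = TRANSITIONS[(state, symbol)]  # deterministic: one move only
    return state in FINAL

for w in ["0", "10", "110", "1", "01"]:
    print(w, dfa_accepts(w))  # True exactly when w ends with 0
```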
2) Nondeterministic Finite Automata (NFA)

An NFA is similar to a DFA except for the following additional features:
1. A null (or ε) move is allowed, i.e., it can move forward without reading symbols.
2. It has the ability to transition to any number of states for a particular input.

However, these additional features do not add any power to the NFA. If we compare the two in terms of power, both are equivalent.

Due to the additional features above, an NFA has a different transition function; the rest is the same as for a DFA. Its transition function maps a state and an input (including null, or ε) to a set of states, so an NFA can go to any number of states.
For example, an NFA for the same problem (accepting strings ending with 0) is sketched in the code below.

One important thing to note is that in an NFA, if any path for an input string leads to a final state, then the input string is accepted. For example, in that NFA there are multiple paths for the input string "00"; since one of the paths leads to a final state, "00" is accepted by the NFA.
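A minimal simulation of such an NFA (state names are again illustrative, and ε-moves are omitted since this machine does not need them): on reading a 0 the machine may either stay in q0 or guess that this 0 is the last symbol and jump to the final state q1. The simulator tracks the set of all states reachable so far, which is exactly the "any path leads to a final state" rule:

```python
# An NFA over {0, 1} for strings ending with 0. On input 0 it can move to
# two states at once; a wrong guess simply dies on that path.
NFA = {
    ("q0", "0"): {"q0", "q1"},  # stay, or guess this 0 is the last symbol
    ("q0", "1"): {"q0"},
    # q1 has no outgoing moves.
}
START, FINAL = "q0", {"q1"}

def nfa_accepts(w: str) -> bool:
    current = {START}  # every state some path could be in
    for symbol in w:
        current = set().union(*(NFA.get((q, symbol), set()) for q in current))
    return bool(current & FINAL)  # accept if ANY path reached a final state

print(nfa_accepts("00"))  # True: one of the paths for "00" ends in q1
print(nfa_accepts("01"))  # False: no path ends in a final state
```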
Finite Automata are useful in building text editors/text preprocessors. However, FAs are poor models of computers: they can only perform simple computational tasks.

Context-Free Grammars (CFGs): These are more powerful abstract models than FAs and are essentially used in programming language and natural language research work.
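A standard illustration of that extra power (not from the source): the grammar with rules S → 0S1 and S → ε generates the language { 0^n 1^n : n ≥ 0 }, which no finite automaton can recognize, because matching the count of 0s against the count of 1s requires unbounded memory.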

Turing Machines: These are abstract models of real computers, with an infinite memory (in the form of a tape) and a reading head. They form much more powerful computation models than FAs, CFGs, and Regular Expressions.
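A minimal sketch of this model in Python, assuming the usual transition-table formulation (the machine, its state names, and the tape marks X/Y are illustrative, not from the source). It decides the same { 0^n 1^n } language the grammar above generates, which is beyond the reach of any finite automaton:

```python
# A tiny Turing machine simulator. RULES maps (state, read symbol) to
# (next state, symbol to write, head move). The machine repeatedly marks
# a leading 0 as X, finds a matching 1 to mark as Y, and accepts when
# the marks pair up exactly: it decides { 0^n 1^n : n >= 0 }.
BLANK = "_"
RULES = {
    ("q0", "0"): ("q1", "X", "R"),      # mark the next unmatched 0
    ("q0", "Y"): ("q3", "Y", "R"),      # no 0s left: verify only Ys remain
    ("q0", BLANK): ("accept", BLANK, "R"),
    ("q1", "0"): ("q1", "0", "R"),      # scan right for a 1
    ("q1", "Y"): ("q1", "Y", "R"),
    ("q1", "1"): ("q2", "Y", "L"),      # mark the matching 1
    ("q2", "0"): ("q2", "0", "L"),      # scan back to the marked 0
    ("q2", "Y"): ("q2", "Y", "L"),
    ("q2", "X"): ("q0", "X", "R"),
    ("q3", "Y"): ("q3", "Y", "R"),
    ("q3", BLANK): ("accept", BLANK, "R"),
}

def tm_accepts(w: str, max_steps: int = 10_000) -> bool:
    tape = dict(enumerate(w))           # the infinite tape, stored sparsely
    state, head = "q0", 0
    for _ in range(max_steps):
        if state == "accept":
            return True
        key = (state, tape.get(head, BLANK))
        if key not in RULES:            # no applicable rule: halt and reject
            return False
        state, write, move = RULES[key]
        tape[head] = write
        head += 1 if move == "R" else -1
    return False                        # step budget exceeded

for w in ["", "01", "0011", "001", "10"]:
    print(repr(w), tm_accepts(w))       # True only for 0^n 1^n
```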
Computability Theory
• Computability theory defines whether a problem is "solvable" by any abstract machine. Some problems are computable while others are not.
• Computation is done by various computation models, depending on the nature of the problem at hand. Examples of these machines are the Turing machine, finite state machines, and many others.

Complexity theory
•This theoretical computer science branch is all about
studying the cost of solving problems while focusing on
resources (time & space) needed as the metric. The
running time of an algorithm varies with the inputs and
usually grows with the size of the inputs.
Measuring Complexity

Measuring complexity involves analyzing an algorithm to determine how much time it takes to solve a problem (time complexity). To evaluate an algorithm, we focus on relative rates of growth as the size of the input grows.

Since the exact running time of an algorithm is often a complex expression, we usually just estimate it. We measure an algorithm's time requirement as a function of the input size (n) when determining the time complexity of an algorithm.
The time complexity, written T(n), is expressed using Big O notation, where only the highest-order term in the algebraic expression is kept and constant factors are ignored.
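For example, if an analysis yields T(n) = 3n^2 + 5n + 2, the highest-order term is n^2; dropping the lower-order terms and the constant factor gives a running time of O(n^2).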
The common running times when analyzing algorithms are:

• O(1) – Constant time or constant space, regardless of the input size.
• O(n) – Linear time or linear space, where the requirement increases uniformly with the size of the input.
• O(log n) – Logarithmic time, where the requirement increases logarithmically with the input size.
• O(n^2) – Quadratic time, where the requirement increases quadratically with the input size.
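A brief sketch of what each growth rate looks like in code (the choice of binary search for the O(log n) case is illustrative, not from the source):

```python
def constant(items):                 # O(1): one step, whatever the size
    return items[0]

def linear(items):                   # O(n): touches each element once
    return sum(items)

def logarithmic(sorted_items, target):  # O(log n): halves the range each step
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def quadratic(items):                # O(n^2): examines every pair
    return [(a, b) for a in items for b in items]
```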
This analysis is based on two bounds that can be used to define the cost of each algorithm. They are:
• Upper (worst-case scenario)
• Lower (best-case scenario)
The major classifications of complexity include:
• Class P: The class P consists of those problems that are solvable in polynomial time. These are problems that can be solved in time O(n^k) for some constant k, where n is the size of the input to the problem. The class was devised to capture the notion of efficient computation.
• Class NP: This is the class of all problems whose solutions can be achieved in polynomial time by a non-deterministic Turing machine; equivalently, a proposed solution can be verified in polynomial time by a deterministic machine. NP is a complexity class used to classify decision problems.
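To make the verification view of NP concrete, here is a minimal sketch (Subset Sum and the function name are illustrative choices, not from the source): finding a solution may take exponential time, but checking a claimed solution, the "certificate", takes only polynomial time.

```python
# Subset Sum asks: does some subset of nums add up to target?
# Verifying a certificate (a claimed subset, given by its indices)
# takes polynomial time, which is the hallmark of an NP problem.
def verify_subset_sum(nums, target, certificate):
    indices = set(certificate)
    if any(i < 0 or i >= len(nums) for i in indices):
        return False                      # malformed certificate
    return sum(nums[i] for i in indices) == target

print(verify_subset_sum([3, 9, 8, 4], 12, [1, 3]))  # 9 + 4 = 13 -> False
print(verify_subset_sum([3, 9, 8, 4], 12, [2, 3]))  # 8 + 4 = 12 -> True
```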
• A major contributor to complexity theory is the complexity of the algorithm used to solve the problem. The algorithms used to solve computational problems range from fairly complex to very complex.
• The more complex the algorithm, the greater the computational complexity of a given problem.

Factors that influence program efficiency
• The problem being solved.
• The algorithm used to build the program.
• Computer hardware.
• The programming language used.
Conclusion
• Programs are formally written from descriptions of computations for execution on machines. We've learned that TOC is concerned with a formalism that helps build efficient programs. Efficient algorithms lead to better programs that optimally use hardware resources.
• A good understanding of the Theory of Computation helps programmers and developers express themselves clearly and intuitively, thus avoiding potentially incomputable problems when working with computational models.
