COMPUTATIONAL COMPLEXITY
4.1. What is Computation?
All parts of this process (the question, the computation, and the answer) are finite.
Only questions, computations, and answers that can be presented in finite time are
distinguishable by a human, since the limits of human cognition are finite.
A computer is a system that performs the computation.
All known computational models have turned out to be computationally equivalent to a very
simple model, called the Turing machine. ⇒ the Church-Turing thesis
The extended (efficiency) version of the Church-Turing thesis is conjectured to fail for
quantum computational models; the original thesis, which concerns computability, still holds.
4.2. Big-O Notation - Introduction
The efficiency of an algorithm or piece of code is measured in terms of various resources:
CPU (time) usage, memory usage, disk usage, network usage
Performance vs. Complexity
Performance: how much time/memory/disk/... is actually used when a program is run. This depends on the
machine, compiler, etc. as well as the code.
Complexity: how do the resource requirements scale, i.e., What happens as the size of the problem being
solved gets larger?
Complexity affects performance but not the other way around.
The time required by a function is proportional to the number of "basic operations" it performs.
Basic operations: one arithmetic operation (e.g., +, *, -), one assignment (e.g., x := 0), one test (e.g., x = 0),
one read (of a primitive type: integer, float, character, Boolean), one write (of a primitive type: integer,
float, character, Boolean).
4.2. Big-O Notation - Introduction (cont’d)
Some functions perform the same number of operations every time they
are called. For example, StackSize
Other functions may perform different numbers of operations, depending
on the value of a parameter. For example, BubbleSort algorithm.
When we find the complexity of a function/algorithm, the exact number
of operations being performed is not required. Instead, we determine
how the number of operations relates to the problem size.
We find the worst case: the maximum number of operations that
might be performed for a given problem size.
For example, in the worst case, when inserting at the beginning of an array,
all elements in the array must be moved, i.e., in the worst case, the time for
insertion is proportional to the number of elements in the array.
4.2. Big-O Notation (cont’d)
Complexity can be expressed using big-O notation.
Big-O expressions remove constant factors and low-order terms. E.g., for a problem of size N:
A constant-time algorithm is "order 1": O(1)
A linear-time algorithm is "order N": O(N)
A quadratic-time algorithm is "order N squared": O(N²)
A function T(N) = O(F(N)) if T(N) <= c * F(N), for some constant c and for all values of N greater
than some value n0:
F(N) is an upper bound on that complexity (worst case).
E.g., for the time complexity T(N) = 2N + 5, its big-O notation is O(N), i.e., 2N + 5 = O(N),
since 2N + 5 <= 3N for all N >= 5.
4.2. Big-O Notation - (cont’d)
How to determine Complexities?
It depends on the kinds of statements used.
Simple statements:
statement 1;
...
statement k;
The total time is found by adding the times of all statements:
Total time = time (statement 1) + ... + time (statement k)
If each statement is "simple", then the time for each statement is constant
and the total time is also constant: O(1).
4.2. Big-O Notation - How to determine Complexities? (cont’d)
If-Then-Else statement
Example:
if (condition)
    block 1;
else
    block 2;
The worst-case time for the if-else statement is the time of the slower of the
two possibilities: max(time(block 1), time(block 2)).
4.2. Big-O Notation - How to determine Complexities? (cont’d)
While loop statement
Example:
#include <iostream>
using namespace std;
int main()
{
    int a = 1, num;
    cout << "Enter any number: ";
    cin >> num;
    while (a <= num)
    {
        cout << "\nHello...!!";
        a++;
    }
    return 0;
}
The loop executes N times, so the statements inside it also execute N times;
total time = 3N + 3 = O(N).

Do-While loop
Example:
#include <iostream>
using namespace std;
int main()
{
    int a = 1, num;
    cout << "Enter any number : ";
    cin >> num;
    do
    {
        cout << "\nHello...!!";
        a++;
    } while (a <= num);
    return 0;
}
A do-while is the same, except that its body always executes at least once.
4.2. Big-O Notation - How to determine Complexities? (cont’d)
For loop statement
Example:
#include <iostream>
using namespace std;
int main()
{
    int a, num;
    cout << "Enter any number: ";
    cin >> num;
    for (a = 1; a <= num; a++)
        cout << "\nHello...!!";
    return 0;
}
The worst-case time complexity for the for-loop statement is the same
as for the while loop, i.e., 3N + 3 = O(N).
Output:
Enter any number: 24
Hello...!!
Hello...!!
Hello...!!
Hello...!!
Hello...!!
...
4.2. Big-O Notation - How to determine Complexities? (cont’d)
If f, g are two functions from N to N, then:
f = O(g) if there exists a constant c such that f(n) ≤ c · g(n) for every sufficiently large n.
O(g(n)) denotes the set of all functions f(n) such that f(n)/g(n) ≤ c, for some c > 0 and all n ≥ n0.
f = Ω(g) if g = O(f); Ω expresses a lower bound on an algorithm's running time (best-case time).
Ω(g(n)) denotes the set of all functions f(n) such that f(n)/g(n) ≥ c, for some c > 0 and all n ≥ n0.
f = Θ(g) if f = O(g) and g = O(f); Θ expresses both a lower and an upper bound on the running time.
Θ(g(n)) denotes the set of all functions f(n) such that f(n) ∈ O(g(n)) and f(n) ∈ Ω(g(n)).
f = o(g) if for every c > 0, f(n) ≤ c · g(n) for all n ≥ n0; equivalently, iff lim(n→∞) f(n)/g(n) = 0.
f = ω(g) if g = o(f) (for every c > 0, g(n) ≤ c · f(n) for all n ≥ n0); equivalently, iff lim(n→∞) f(n)/g(n) = ∞.
4.2. Big-O Notation - How to determine Complexities? (cont’d)
Examples
1. If f(n) = 100 n log n and g(n) = n², then f = O(g), g = Ω(f), f = o(g), g = ω(f).
2. If f(n) = 100n² + 24n + 2 log n and g(n) = n², then f = O(g), g = O(f), and hence f = Θ(g) and g = Θ(f).
3. If f(n) = min{n, 10⁶} and g(n) = 1, then f = O(g), i.e., f = O(1). Similarly, if h(n) is a
function such that for every c, h(n) > c · g(n) for sufficiently large n, then h = ω(1).
4. If f(n) = 2ⁿ, then for every number c, if g(n) = n^c then g = o(f).
4.3. Complexity Classes P and NP
A decision problem Π belongs to class P if there exists an algorithm that, for every
instance I ∈ Π, determines whether I is a yes-instance or a no-instance in
polynomial time.
Example: The LINEAR PROGRAMMING problem (LP) belongs to class P, even
though the simplex algorithm is not a polynomial-time algorithm for LP.
LP = ‘is there an assignment of rational numbers to the variables u1, . . ., un that
satisfies all the inequalities?’
NP stands for ‘non-deterministic polynomial-time’
A decision problem Π belongs to class NP if every yes instance I ∈ Π admits a
certificate S whose validity can be verified in polynomial time.
4.3. Complexity Classes P and NP(cont’d)
Example: some decision problems in NP are:
Traveling Salesperson
Hamiltonian Cycle
4.4. Polynomial-time Reductions and NP-completeness
A polynomial-time reduction from a decision problem Π1 to a decision problem Π2 is a function ϕ : I1 → I2 that
maps every instance I1 ∈ I1 of Π1 to an instance I2 = ϕ(I1) ∈ I2 of Π2 such that:
1. The mapping can be done in time that is polynomially bounded in the size of I1;
2. I1 is a yes-instance of Π1 if and only if I2 is a yes-instance of Π2.
If there exists such a polynomial-time reduction from Π1 to Π2, then we say that Π1
can be reduced to Π2, and we write Π1 ≼ Π2.
Then, Π2 is at least as difficult to solve as Π1.
Every polynomial-time algorithm ALG2 for Π2 can be used to derive a polynomial-
time algorithm ALG1 for Π1 as follows:
1. Transform the instance I1 of Π1 to a corresponding instance I2 = ϕ(I1) of Π2.
2. Run ALG2 on I2 and report that I1 is a yes-instance if ALG2 concludes that I2 is a
yes-instance.
4.4. Polynomial-time Reductions and NP-completeness (cont’d)
Suppose Π1 ≼ Π2; then the existence of a polynomial-time algorithm for Π1 has no
implications for the existence of a polynomial-time algorithm for Π2, even if we can
compute the inverse of ϕ efficiently.
Because ϕ is not necessarily a one-to-one mapping; it may thus map the instances of
Π1 to a subset of instances of Π2.
Thus, being able to efficiently solve every instance of Π1 reveals nothing about
the problem of solving Π2.
Polynomial-time reductions are transitive: if Π1 ≼ Π2 and Π2 ≼ Π3, then Π1 ≼ Π3.
4.4. Polynomial-time Reductions and NP-completeness (cont’d)
Within NP there is a subclass of the most difficult problems, called the NP-complete problems.
A decision problem Π is NP-complete if
1. Π belongs to NP;
2. Every problem in NP is polynomial-time reducible to Π.
An NP-complete problem is as difficult as any other problem in NP.
Theorem: If: a. Language A is NP-complete
b. Language B is in NP
c. A is polynomial time reducible to B,
Then B is NP-complete
4.5. Examples of NP-completeness Proofs
The SATISFIABILITY (SAT) problem is the problem of determining whether there
exists a consistent TRUE/FALSE assignment to the variables that makes a
given Boolean formula true.
A Boolean formula F of SAT is in conjunctive normal form (CNF): F = C1 ∧ C2 ∧ ... ∧
Cm, where each clause Ci is a disjunction of literals, Ci = (v1 ∨ v2 ∨ … ∨ vn). E.g., C1 = (x1 ∨ ¬x2 ∨ x3 ∨ x4).
Theorem 5.1: SAT is NP-complete. (Cook’s theorem)
Theorem 5.2: 3-SAT is NP-complete. (by SAT ≼ 3-SAT)
4.5. Examples of NP-completeness Proofs (cont’d)
Vertex cover, independent set, and clique of an undirected graph G = (V, E) are
NP-complete.
A vertex cover of G is a subset V′ ⊆ V of vertices such that every edge has at
least one of its two incident vertices in V′. Decide whether G contains a
vertex cover of size at most K.
An independent set of G is a subset V′ ⊆ V of vertices such that no two of them
are incident to the same edge, i.e., for every two vertices u, v ∈ V′, {u, v} ∉ E.
Decide whether G contains an independent set of size at least K.
A clique of G is a subset V′ ⊆ V of the vertices that induces a complete
subgraph, i.e., for every two vertices u, v ∈ V′, {u, v} ∈ E. Decide whether G
contains a clique of size at least K.
4.5. Examples of NP-completeness Proofs (cont’d)