This document discusses computational complexity and algorithm analysis. It begins by defining computation and explaining that computational models are equivalent to Turing machines. It then introduces Big-O notation to analyze algorithms' time complexity by determining how resource requirements scale with problem size. Several examples are provided to illustrate determining time complexities of algorithms using Big-O notation. Finally, it defines the complexity classes P and NP, with P containing problems solvable in polynomial time and NP containing problems verifiable in polynomial time.

Chapter 4

COMPUTATIONAL COMPLEXITY
4.1. What is Computation?

 All parts of this process (the question, the computation, and the answer) are finite.
Only questions, computations, and answers that can be presented in finite time are
distinguishable by a human, since the limits of human cognition are finite.
A computer is a system that performs the computation.
All reasonable computational models have turned out to be computationally equivalent to a very simple
model, called the Turing machine. ⇒ the Church-Turing thesis
The extended (efficiency) version of the Church-Turing thesis is, however, challenged by quantum computational models.
4.2. Big-O Notation - Introduction
The efficiency of an algorithm or piece of code is measured in terms of various resources:

 CPU (time) usage, memory usage, disk usage, network usage
Performance vs. complexity
Performance: how much time/memory/disk/... is actually used when a program is run. This depends on the
machine, compiler, etc., as well as on the code.
Complexity: how do the resource requirements scale, i.e., what happens as the size of the problem being
solved gets larger?
Complexity affects performance, but not the other way around.
The time required by a function is proportional to the number of "basic operations" it performs.

Basic operations: one arithmetic operation (e.g., +, *, -), one assignment (e.g., x := 0), one test (e.g., x = 0),
one read (of a primitive type: integer, float, character, Boolean), one write (of a primitive type: integer,
float, character, Boolean).
4.2. Big-O Notation - Introduction(cont’d)
 Some functions perform the same number of operations every time they
are called; for example, StackSize.
 Other functions may perform different numbers of operations depending
on the value of a parameter; for example, the BubbleSort algorithm.
 When we find the complexity of a function/algorithm, the exact number
of operations being performed is not required. Instead, the
relation of the number of operations to the problem size is determined.
 We find the worst case: the maximum number of operations that
might be performed for a given problem size.
 For example, in the worst case of inserting at the beginning of an array,
all elements in the array must be moved; i.e., in the worst case, the time for
insertion is proportional to the number of elements in the array.
4.2. Big-O Notation (cont’d)
 Complexity can be expressed using big-O notation.
 Big-O expressions discard constants and low-order terms. E.g., for a problem of size N:
 A constant-time algorithm is "order 1": O(1)
 A linear-time algorithm is "order N": O(N)
 A quadratic-time algorithm is "order N squared": O(N²)
 A function T(N) = O(F(N)) if T(N) ≤ c · F(N) for some constant c and for all values of N greater
than some value n₀:
F(N) is an upper bound on that complexity (worst case).
E.g., for the time complexity T(N) = 2N + 5, the big-O notation is O(N), i.e., 2N + 5 = O(N),
since 2N + 5 ≤ 3N for all N ≥ 5.
4.2. Big-O Notation - (cont’d)
How to determine complexities?
It depends on the kinds of statements used.
Simple statements:
statement 1;
...
statement k;
The total time is found by adding the times of all statements:
Total time = time (statement 1) + ... + time (statement k)
If each statement is "simple", then the time for each statement is constant
and the total time is also constant: O(1).
4.2. Big-O Notation - How to determine Complexities? (cont’d)
If-Then-Else statement
The worst-case time for the if-then-else statement is the maximum of the two possibilities:
max(time(block 1), time(block 2))
4.2. Big-O Notation - How to determine Complexities? (cont’d)
While loop statement and Do-While loop statement

While loop example:
#include <iostream>
using namespace std;
int main()
{
    int a = 1, num;
    cout << "Enter any number: ";
    cin >> num;
    while (a <= num)
    {
        cout << "\nHello...!!";
        a++;
    }
    return 0;
}

Do-While loop example:
#include <iostream>
using namespace std;
int main()
{
    int a = 1, num;
    cout << "Enter any number: ";
    cin >> num;
    do
    {
        cout << "\nHello...!!";
        a++;
    } while (a <= num);
    return 0;
}

The loop executes N times, so the sequence of statements inside it also executes N times.
Then, total time = 3N + 3 = O(N).
(The do-while loop executes its body at least once even when the condition is initially false, but its worst-case complexity is the same.)
4.2. Big-O Notation - How to determine Complexities? (cont’d)
For loop statement
Example:
#include <iostream>
using namespace std;
int main()
{
    int a, num;
    cout << "Enter any number: ";
    cin >> num;
    for (a = 1; a <= num; a++)
        cout << "\nHello...!!";
    return 0;
}
The worst-case time complexity of the for-loop statement is the same as that
of the while loop, i.e., 3N + 3 = O(N).
Output (truncated):
Enter any number: 24
Hello...!!
Hello...!!
Hello...!!
Hello...!!
Hello...!!
4.2. Big-O Notation - How to determine Complexities? (cont’d)
If f, g are two functions from N to N, then
 f = O(g) if there exists a constant c such that f(n) ≤ c · g(n) for every sufficiently large n.
O(g(n)) denotes the set of all functions f(n) with f(n)/g(n) ≤ c for some c > 0 and all n ≥ n₀.
 f = Ω(g) if g = O(f); Ω expresses a lower bound on the algorithm's running time (best-case time).
Ω(g(n)) denotes the set of all functions f(n) with f(n)/g(n) ≥ c for some c > 0 and all n ≥ n₀.
 f = Θ(g) if f = O(g) and g = O(f); Θ expresses both a lower and an upper bound on the running time.
Θ(g(n)) denotes the set of all functions f(n) with f(n) ∈ O(g(n)) and f(n) ∈ Ω(g(n)).
 f = o(g) if for every c > 0, f(n) ≤ c · g(n) for all n ≥ n₀, or equivalently iff lim (n→∞) f(n)/g(n) = 0.
 f = ω(g) if g = o(f) (for every c > 0, g(n) ≤ c · f(n) for all n ≥ n₀), or equivalently iff lim (n→∞) f(n)/g(n) = ∞.
4.2. Big-O Notation - How to determine Complexities? (cont’d)
Examples
1. If f(n) = 100n log n and g(n) = n², then f = O(g), g = Ω(f), f = o(g), g = ω(f).
2. If f(n) = 100n² + 24n + 2 log n and g(n) = n², then f = O(g), g = O(f), and hence f = Θ(g) and g = Θ(f).
3. If f(n) = min{n, 10⁶} and g(n) = 1, then f = O(g), or f = O(1). Similarly, if h(n) is a
function such that for every c, h(n) > c · g(n) for sufficiently large n, then h = ω(1).
4. If f(n) = 2^n, then for every number c, if g(n) = n^c then g = o(f).
4.3. Complexity Classes P and NP
A decision problem Π belongs to class P if there exists an algorithm that, for every
instance I ∈ Π, determines whether I is a yes-instance or a no-instance in
polynomial time.
Example: the LINEAR PROGRAMMING problem (LP) belongs to class P, even
though the simplex algorithm is not a polynomial-time algorithm for LP.
LP = 'is there an assignment of rational numbers to the variables u1, . . ., un that
satisfies all the inequalities?'
NP stands for 'non-deterministic polynomial time'.
A decision problem Π belongs to class NP if every yes-instance I ∈ Π admits a
certificate S whose validity can be verified in polynomial time.
4.3. Complexity Classes P and NP(cont’d)
Example: some of the decision problems in NP are:
 Traveling salesperson (TSP)
 Integer linear programming
 Hamiltonian cycle
4.4. Polynomial-time Reductions and NP-completeness
A polynomial-time reduction from a DP Π1 to a DP Π2 is a function ϕ : I1 → I2 that
maps every instance I1 ∈ I1 of Π1 to an instance I2 = ϕ(I1) ∈ I2 of Π2 such that:
1. The mapping can be done in time that is polynomially bounded in the size of I1;
2. I1 is a yes-instance of Π1 if and only if I2 is a yes-instance of Π2.
If there exists such a polynomial-time reduction from Π1 to Π2, then we say that Π1
can be reduced to Π2, and we write Π1 ≤p Π2.
Then, Π2 is at least as difficult to solve as Π1:
Every polynomial-time algorithm ALG2 for Π2 can be used to derive a polynomial-
time algorithm ALG1 for Π1 as follows:
1. Transform the instance I1 of Π1 to a corresponding instance I2 = ϕ(I1) of Π2.
2. Run ALG2 on I2 and report that I1 is a yes-instance if ALG2 concludes that I2 is a
yes-instance.
4.4. Polynomial-time Reductions and NP-completeness (cont’d)
Suppose Π1 ≤p Π2; then the existence of a polynomial-time algorithm for Π1 has no
implications for the existence of a polynomial-time algorithm for Π2, even if we can
compute the inverse of ϕ efficiently.
This is because ϕ is not necessarily a surjective (onto) mapping; it may map the instances of
Π1 to only a subset of the instances of Π2.
Thus, being able to efficiently solve every instance of Π1 reveals nothing about
the problem of solving Π2.
Polynomial-time reductions are transitive: if Π1 ≤p Π2 and Π2 ≤p Π3, then Π1 ≤p Π3.
4.4. Polynomial-time Reductions and NP-completeness (cont’d)
 Within NP there is a subclass of the most difficult problems, called the NP-complete problems.
A decision problem Π is NP-complete if
1. Π belongs to NP;
2. every problem in NP is polynomial-time reducible to Π.
 An NP-complete problem is at least as difficult as any other problem in NP.
 Theorem: If: a. Language A is NP-complete
b. Language B is in NP
c. A is polynomial time reducible to B,
Then B is NP-complete
4.5. Examples of NP-completeness Proofs
The SATISFIABILITY (SAT) problem is the problem of determining whether there
exists a consistent TRUE/FALSE assignment to the variables that makes a
given Boolean formula true.
A Boolean formula F of SAT is in conjunctive normal form (CNF): F = C1 ∧ C2 ∧ ... ∧
Cm, where each clause Ci = (v1 ∨ v2 ∨ … ∨ vn) is a disjunction of literals. E.g., C1 = (x1 ∨ ¬x2 ∨ x3 ∨ x4).
Theorem 5.1. SAT is NP-complete. (Cook’s theorem)
Theorem 5.2. 3-SAT is NP-complete. (by SAT ≤p 3-SAT)
4.5. Examples of NP-completeness Proofs (cont’d)
VERTEX COVER, INDEPENDENT SET, and CLIQUE of an undirected graph G = (V, E) are
NP-complete.
A vertex cover of G is a subset V′ of the vertices such that every edge has at
least one of its two incident vertices in V′. Decide whether G contains a
vertex cover of size at most K.
An independent set of G is a subset V′ of the vertices such that no two of them
are incident to the same edge, i.e., for every two vertices u, v ∈ V′, {u,v} ∉ E.
Decide whether G contains an independent set of size at least K.
A clique of G is a subset V′ of the vertices that induces a complete
subgraph, i.e., for every two vertices u, v ∈ V′, {u,v} ∈ E. Decide whether G
contains a clique of size at least K.
4.5. Examples of NP-completeness Proofs (cont’d)
Theorem 5.3. VERTEX COVER (VC) is NP-complete.
Proof: we prove this theorem by the method of reduction (3-SAT ≤p VC).
To prove that VC is NP-complete, prove that VC ∈ NP and show 3-SAT ≤p VC in
polynomial time.
Whether a subset of vertices is a VC or not can be verified in polynomial time:
for a graph of n vertices it can be checked in O(n²). Thus, VC ∈ NP.
Next, show that 3-SAT ≤p VC.
4.5. Examples of NP-completeness Proofs (cont’d)
Theorem 5.3. VERTEX COVER (VC) is NP-complete (cont’d).
Given a 3-SAT formula F = (x1 ∨ x2 ∨ ¬x3) ∧ (¬x1 ∨ x2 ∨ x4) ∧ (¬x2 ∨ x3 ∨ x4):
1. For each variable x, create a pair of vertices x and ¬x connected by an edge (a
variable-gadget).
2. For each clause ci, create a vertex for each literal in it and connect
them together into a triangle (a clause-gadget).
3. Finally, connect each vertex representing a literal in a clause-gadget to the
corresponding vertex representing the same literal in the variable-gadget.
4. Then, F is satisfiable if and only if the resulting graph has a vertex cover of size k =
n + 2m, where n is the number of variables and m is the number of clauses. This can
be interpreted as depicted on the next slide.
4.5. Examples of NP-completeness Proofs (cont’d)
Theorem 5.3. VERTEX COVER (VC) is NP-complete (cont’d).
Choose an assignment of values for the variables that makes F true, say x1 = T, x2 = F, x3 = T, x4 = T.
Each variable-gadget vertex labeled by a true literal is chosen for the vertex cover.
For each clause c, select one of its true literals and put the other two clause-gadget vertices into the VC.
This yields a VC with k = n + 2m.
This transformation can be done in polynomial time.
Therefore, VC is NP-complete.
4.5. Examples of NP-completeness Proofs (cont’d)
Theorem 5.4. INDEPENDENT SET (IS) is NP-complete.
Proof: we prove this theorem by the method of reduction (VC ≤p IS).
First, prove that IS is in NP.
A certificate for a yes-instance is a subset V2 of V that forms an independent set.
To verify this, check that there is no edge between any pair of vertices in V2. This
can be done in O(n²) time. Thus, IS ∈ NP.
Next, reduce VC ≤p IS:
 Given an instance of a graph G = (V, E) with a VERTEX COVER of size k,
 the corresponding instance of INDEPENDENT SET is the same graph G with the n − k vertices
V2 = V \ V1, such that if V1 is a vertex cover in G, then V2 = V \ V1 is an independent set in G.
 V1 is a vertex cover in G if and only if V \ V1 is an independent set in G.
Note that this mapping can be done in polynomial time, in fact in constant time.
Therefore, IS is NP-complete.
4.5. Examples of NP-completeness Proofs (cont’d)
Theorem 5.5. CLIQUE is NP-complete.
Proof: we prove this theorem by the method of reduction (IS ≤p CLIQUE).
First, prove CLIQUE ∈ NP. The certificate is a subset V1 of V that forms a clique.
To verify this, we just need to check that there is an edge between every pair of
vertices in V1. This can be done in polynomial time. Thus, CLIQUE ∈ NP.
Next, reduce IS ≤p CLIQUE:
 Given an instance of a graph G = (V, E) with an IS of size k,
 create the complement graph of G: let G¯ = (V, E¯) with {u,v} ∈ E¯ iff {u,v} ∉ E.
 An IS of size k in G implies that no two of its vertices share an edge in G, so
all of those vertices share an edge with each other in G¯, forming a clique.
 V′ is an independent set of G if and only if V′ is a clique of G¯.
Note that this mapping can be done in polynomial time (constructing G¯ takes O(n²)).
Therefore, CLIQUE is NP-complete.
4.5. Examples of NP-completeness Proofs (cont’d)
Theorem 5.5. CLIQUE is NP-complete.(cont’d)
The graph on the left shows the transformation of IS to CLIQUE:
G = (V, E), where V is the vertex set and E the edges (in blue).
The blue-filled vertices form an IS of G.
Then G¯ = (V, E¯), where E¯ = all edges \ E (in red).
As shown in G¯, the vertices of the IS form a clique in G¯.
4.5. Examples of NP-completeness Proofs (cont’d)
Theorem 5.6. HAMILTONIAN CYCLE is NP-complete.
Theorem 5.7. TSP is NP-complete.
4.6. Some Other NP-complete Problems
2-PARTITION:
Instance: Integers/rational numbers s1, ..., sn.
Goal: Decide whether there is a partition of {s1, ..., sn} into two sets S1, S2 such that Σ_{s ∈ S1} s = Σ_{s ∈ S2} s.
3-PARTITION:
Instance: Rational numbers s1, ..., sn, where n = 3m for an integer m.
Goal: Determine whether the set {s1, ..., sn} can be partitioned into m triplets S1, ..., Sm such that all
triplets have the same sum.
SET COVER:
Instance: A universe U = {1, ..., n} of n elements, a family of m subsets S1, ..., Sm ⊆ U, and an integer K.
Goal: Determine whether there is a selection of at most K subsets such that their union is U.
4.6. Some Other NP-complete Problems (cont’d)
MAKESPAN SCHEDULING:
Instance: m machines and n jobs; each job j has a processing time pj, j =
1,...,n, and a constant K.
Goal: Determine whether there is a schedule of the jobs on the
machines in which each machine processes at most one job at a time,
each job is processed without interruption by exactly one
machine, and all jobs complete processing before time K.
MAKESPAN SCHEDULING can be interpreted as a load-balancing
problem: divide the total work as equally as possible over the machines,
so that the load of the most heavily loaded machine is minimized.
4.7. NP-hard Problems
If we are unable to prove that a problem Π is in NP but can nevertheless show that all
problems in NP are reducible to Π, we call such a problem NP-hard.
Example: the KTH HEAVIEST SUBSET problem.
 Instance: Integers w1, ..., wn, t, and a parameter K.
Goal: Determine whether the weight of the t-th heaviest subset of {w1, ..., wn} is at
least K. (Formally, determine whether there are t distinct subsets S1, ..., St ⊆
{w1, ..., wn} such that w(Si) ≥ K for every i = 1, ..., t.)
It can be proven that all problems in NP are polynomial-time reducible to the
KTH HEAVIEST SUBSET problem. However, no certificate for yes-instances that can be
verified in polynomial time is known; i.e., the KTH HEAVIEST SUBSET problem is
not known to be in NP. Thus, the KTH HEAVIEST SUBSET problem is NP-hard.
4.8. Complexity Class co-NP
A decision problem Π belongs to the class co-NP iff its complement belongs to the class NP.
A DP belongs to co-NP if every no-instance I ∈ I admits a certificate whose validity can
be verified in polynomial time.
Every problem in P also belongs to co-NP. Thus, P ⊆ NP ∩ co-NP.
Theorem 5.8. If the complement of an NP-complete problem is in NP, then NP = co-NP.
This implies that if the complement of a problem in NP is also in NP, then (unless NP =
co-NP) this problem is not NP-complete. In other words:
a problem that belongs to NP ∩ co-NP is unlikely to be NP-complete.
4.9. Pseudo-polynomiality and Strong NP-completeness
A pseudo-polynomial algorithm is one whose running time is polynomial in the size of the
instance I and in the largest number num(I) appearing in the input:
an algorithm ALG for a problem Π is pseudo-polynomial if it solves every
instance I ∈ I of Π in time bounded by a polynomial function in |I| and num(I).
Problems that remain NP-complete even if the largest integer appearing in the
description is bounded polynomially in the size of I are called strongly NP-complete.
Definition 5.9. A problem Π is strongly NP-complete if the restriction of Π to instances I ∈ I
satisfying that num(I) is polynomially bounded in |I| is NP-complete.
For example, the INTEGER KNAPSACK problem and all graph problems such as HAMILTONIAN CYCLE,
CLIQUE, INDEPENDENT SET, VERTEX COVER, TSP, etc., are strongly NP-complete.
4.9. Pseudo-polynomiality and Strong NP-completeness (cont’d)
Theorem 5.10. There does not exist a pseudo-polynomial algorithm for a
strongly NP-complete problem, unless P = NP.
Meaning, we cannot expect to find a pseudo-polynomial algorithm for a strongly
NP-complete problem (unless P = NP).
Proof. Let Π be a strongly NP-complete problem and suppose that ALG is a
pseudo-polynomial algorithm for Π. Consider the restriction Π¯ of Π to instances I
∈ I that satisfy that num(I) is polynomially bounded in |I|. By Definition 5.9, Π¯ is
NP-complete. But ALG can solve every instance I¯ of Π¯ in time polynomial in |I¯|
and num(I¯), which is polynomial in |I¯|. This is impossible unless P = NP.
End of chapter four.
Thank you
Questions???
