
DESIGN AND ANALYSIS OF ALGORITHMS

UNIT - I
Introduction

ALGORITHM:
The word "algorithm" comes from the Persian mathematician Al-Khwarizmi (c. 825 AD). According to Webster's dictionary, an algorithm is a special method for representing the procedure for solving a given problem.

OR

An Algorithm is any well-defined computational procedure that takes some value or set of values as input and produces some value or set of values as output. Thus an algorithm is a sequence of computational steps that transforms the input into the output.

Formal Definition:

An Algorithm is a finite set of instructions that, if followed, accomplishes a particular task. In addition, all algorithms should satisfy the following criteria.

• Input. Zero or more quantities are externally supplied.
• Output. At least one quantity is produced.
• Definiteness. Each instruction is clear and unambiguous.
• Finiteness. If we trace out the instructions of an algorithm, then for all cases, the algorithm terminates after a finite number of steps.
• Effectiveness. Every instruction must be very basic so that it can be carried out, in principle, by a person using only pencil and paper.

Areas of study of Algorithm:

• How to devise or design an algorithm – This includes the study of various design techniques and helps in writing algorithms using existing design techniques such as divide and conquer.
• How to validate an algorithm – After the algorithm is written it is necessary to check its correctness, i.e. that for each input the correct output is produced; this is known as algorithm validation. The second phase, writing a program, is known as program proving or program verification.
• How to analyze an algorithm – Known as analysis of algorithms or performance analysis, this refers to the task of calculating the time and space complexity of the algorithm.
• How to test a program – This consists of two phases: 1. debugging, the detection and correction of errors; 2. profiling (performance measurement), measuring the actual amount of time the program requires to compute the result.

Algorithm Specification:
An algorithm can be described in three ways:
• Natural language like English.
• Graphic representation called a flowchart: this method works well when the algorithm is small and simple.
• Pseudo-code method: in this method we describe the algorithm as a program that resembles a language like Pascal or ALGOL.

Pseudo-Code for writing Algorithms:

1. Comments begin with // and continue until the end of line.
2. Blocks are indicated with matching braces { and }.
3. An identifier begins with a letter. The data types of variables are not explicitly declared.
4. Compound data types can be formed with records. Here is an example:

Node = Record
{
data-type-1 data-1;
.
.
data-type-n data-n;
node *link;
}

Here link is a pointer to the record type Node. Individual data items of a record can be accessed with → and period.
5. Assignment of values to variables is done using the assignment statement.
<Variable> := <expression>;

6. There are two Boolean values, TRUE and FALSE.
Logical operators: AND, OR, NOT
Relational operators: <, <=, >, >=, =, !=
7. The following looping statements are employed: for, while and repeat-until.

While Loop:
while <condition> do
{
<statement-1>
.
.
<statement-n>
}

For Loop:
for variable := value-1 to value-2 step step do
{
<statement-1>
.
.
<statement-n>
}
Here the first step is a keyword; the second step gives the amount of increment or decrement.

repeat-until:

repeat{
<statement-1>
.
.
<statement-n>
}until<condition>
8. A conditional statement has the following forms:
(1) if <condition> then <statement>
(2) if <condition> then <statement-1> else <statement-2>

Case statement:

Case
{ :<condition-1>:<statement-1>
.
.
:<condition-n>:<statement-n>
:else:<statement-n+1>
}
9. Input and output are done using the instructions read and write.
10. There is only one type of procedure: Algorithm. The heading takes the form

Algorithm Name(<parameter list>)

As an example, the following algorithm finds and returns the maximum of 'n' given numbers:

Algorithm Max(A, n)
// A is an array of size n
{
Result := A[1];
for i := 2 to n do
if A[i] > Result then Result := A[i];
return Result;
}
In this algorithm (named Max), A and n are procedure parameters; Result and i are local variables.
Performance Analysis:
There are many criteria by which to judge an algorithm:
– Is it correct?
– Is it readable?
– How efficiently does it work?
Performance evaluation can be divided into two major phases.

1. Performance Analysis (machine independent)
– Space complexity: The space complexity of an algorithm is the amount of memory it needs to run to completion.
– Time complexity: The time complexity of an algorithm is the amount of computer time it needs to run to completion.
2. Performance Measurement (machine dependent).

Space Complexity:

The space complexity of any algorithm P is given by S(P) = C + SP(I), where C is a constant.

1. Fixed space requirements (C): independent of the characteristics of the inputs and outputs
– instruction space
– space for simple variables, fixed-size structured variables, constants
2. Variable space requirements (SP(I)): depend on the instance characteristic I
– number, size, and values of inputs and outputs associated with I
– recursive stack space, formal parameters, local variables, return address
Examples:
Program 1: Simple arithmetic function
Algorithm abc(a, b, c)
{
return a + b + b * c + (a + b - c) / (a + b) + 4.0;
}
SP(I) = 0; hence S(P) = constant.
Program 2: Iterative function for summing a list of numbers
Algorithm sum(list[], n)
{
tempsum := 0;
for i := 0 to n do
tempsum += list[i];
return tempsum;
}
In the above example list[] is dependent on n, hence SP(I) = n. The remaining variables i, n and tempsum each require one location.

Hence S(P) = 3 + n.

Program 3: Recursive function for summing a list of numbers
Algorithm rsum(list[], n)
{
if (n <= 0) then
return 0.0;
else
return rsum(list, n-1) + list[n];
}

In the above example the recursion stack space includes space for the formal parameters, local variables and return address. Each call to rsum requires 3 locations, i.e. for list[], n and the return address. As the depth of recursion is n+1,

S(P) >= 3(n+1)

Time complexity:

T(P) = C + TP(I)

It is a combination of
– compile time (C), independent of instance characteristics
– run (execution) time TP(I), dependent on instance characteristics
Time complexity is calculated in terms of program steps, as it is difficult to know the cost of individual operations.
Definition: A program step is a syntactically or semantically meaningful program segment whose execution time is independent of the instance characteristics.

Program steps are counted differently for different statements: a comment counts as zero steps; an assignment statement counts as one step; for iterative statements such as for, while and repeat-until, we base the step count on the control expression.

Methods to compute the step count:

1) Introduce a variable count into the program.
2) Tabular method:
– determine the total number of steps contributed by each statement (steps per execution × frequency)
– add up the contributions of all statements
Program 1: with count statements

Algorithm sum(list[], n)
{
tempsum := 0; count++;                  // for assignment
for i := 1 to n do
{
count++;                                // for the for loop
tempsum := tempsum + list[i]; count++;  // for assignment
}
count++;                                // last execution of for
return tempsum; count++;                // for return
}

Hence T(n) = 2n + 3.
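To sanity-check the 2n+3 count, here is a small Python instrumentation of the same loop (an illustrative sketch; the function name is my own, and the counting convention follows the rules stated above):

def sum_with_count(lst, n):
    count = 0
    tempsum = 0; count += 1            # for assignment
    for i in range(n):
        count += 1                     # for the for-loop test
        tempsum += lst[i]; count += 1  # for assignment
    count += 1                         # last execution of for
    count += 1                         # for return
    return tempsum, count

print(sum_with_count([1, 2, 3, 4, 5], 5))   # (15, 13), and 2*5 + 3 = 13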

Program 2: Recursive sum

Algorithm rsum(list[], n)
{
count++;        // for the if conditional
if (n <= 0)
{
count++;        // for return
return 0.0;
}
else
{
count++;        // for return and rsum invocation
return rsum(list, n-1) + list[n];
}
}

T(n) = 2n + 2

II Tabular method.

Complexity is determined using a table which includes steps per execution (s/e), i.e. the amount by which count changes as a result of execution of the statement, and frequency, the number of times a statement is executed.

Statement                          s/e   Frequency   Total steps
Algorithm sum(list[], n)            0       -            0
{                                   0       -            0
tempsum := 0;                       1       1            1
for i := 0 to n do                  1      n+1          n+1
tempsum := tempsum + list[i];       1       n            n
return tempsum;                     1       1            1
}                                   0       -            0

Total                                                   2n+3
Statement                                s/e   Frequency    Total steps
                                               n=0   n>0    n=0   n>0
Algorithm rsum(list[], n)                 0     -     -      0     0
{                                         0     -     -      0     0
if (n <= 0) then                          1     1     1      1     1
return 0.0;                               1     1     0      1     0
else return rsum(list, n-1) + list[n];   1+x    0     1      0    1+x
}                                         0     -     -      0     0

Total                                                        2    2+x

Here x stands for the steps contributed by the recursive invocation rsum(list, n-1).

Complexity of Algorithms

The complexity of an algorithm M is the function f(n) which gives the running time and/or storage space requirement of the algorithm in terms of the size 'n' of the input data. Mostly, the storage space required by an algorithm is simply a multiple of the data size 'n'. Complexity shall refer to the running time of the algorithm.
The function f(n), giving the running time of an algorithm, depends not only on the size 'n' of the input data but also on the particular data. The complexity function f(n) for certain cases is defined as:

1. Best Case: The minimum possible value of f(n) is called the best case.

2. Average Case: The average value of f(n).

3. Worst Case: The maximum value of f(n) over all possible inputs.

The field of computer science which studies the efficiency of algorithms is known as analysis of algorithms.
Algorithms can be evaluated by a variety of criteria. Most often we shall be interested in the rate of growth of the time or space required to solve larger and larger instances of a problem. We will associate with the problem an integer, called the size of the problem, which is a measure of the quantity of input data.

Rate of Growth:
The following notations are commonly use notations in performance analysis and used to characterize
the complexity of an algorithm:

Asymptotic notation

Big oh notation: O
The function f(n) = O(g(n)) (read as "f of n is big oh of g of n") iff there exist positive constants c and n0 such that f(n) ≤ c*g(n) for all n ≥ n0.
The function g(n) gives an upper bound on the value of f(n).
Example:
3n+2 = O(n) as
3n+2 ≤ 4n for all n ≥ 2

Omega notation: Ω
The function f(n) = Ω(g(n)) (read as "f of n is omega of g of n") iff there exist positive constants c and n0 such that f(n) ≥ c*g(n) for all n ≥ n0.
The function g(n) gives a lower bound on the value of f(n).
Example:
3n+2 = Ω(n) as
3n+2 ≥ 3n for all n ≥ 1

Theta notation: θ
The function f(n) = θ(g(n)) (read as "f of n is theta of g of n") iff there exist positive constants c1, c2 and n0 such that c1*g(n) ≤ f(n) ≤ c2*g(n) for all n ≥ n0.
Example:
3n+2 = θ(n) as
3n+2 ≥ 3n and 3n+2 ≤ 4n for all n ≥ 2.
Here c1 = 3, c2 = 4 and n0 = 2.

Little oh: o
The function f(n) = o(g(n)) (read as "f of n is little oh of g of n") iff
lim (n→∞) f(n)/g(n) = 0.
Example:
3n+2 = o(n²) as
lim (n→∞) (3n+2)/n² = 0.

Little omega: ω
The function f(n) = ω(g(n)) (read as "f of n is little omega of g of n") iff
lim (n→∞) g(n)/f(n) = 0.
Example:
3n+2 = ω(1) as
lim (n→∞) 1/(3n+2) = 0.

Analyzing Algorithms
Suppose 'M' is an algorithm, and suppose 'n' is the size of the input data. Clearly the complexity f(n) of M increases as n increases. It is usually the rate of increase of f(n) that we want to examine. This is usually done by comparing f(n) with some standard functions. The most common computing times are:
O(1), O(log n), O(n), O(n log n), O(n²), O(n³), O(2ⁿ), n! and nⁿ

Numerical Comparison of Different Algorithms

The execution time for six of the typical functions is given below:

n     log₂n   n*log₂n   n²       n³           2ⁿ
1     0       0         1        1            2
2     1       2         4        8            4
4     2       8         16       64           16
8     3       24        64       512          256
16    4       64        256      4,096        65,536
32    5       160       1,024    32,768       4,294,967,296
64    6       384       4,096    262,144      Note 1
128   7       896       16,384   2,097,152    Note 2
256   8       2,048     65,536   16,777,216   ????????

Note 1: The value here is approximately the number of machine instructions executed by a 1 gigaflop computer in 5000 years.
Note 2: The value here is about 500 billion times the age of the universe in nanoseconds, assuming a universe age of 20 billion years.
[Figure: graph of log n, n, n log n, n², n³, 2ⁿ, n! and nⁿ]

One way to compare the function f(n) with these standard functions is to use the functional 'O' notation. Suppose f(n) and g(n) are functions defined on the positive integers with the property that f(n) is bounded by some multiple of g(n) for almost all 'n'. Then f(n) = O(g(n)), which is read as "f(n) is of order g(n)". For example, the order of complexity for:
• Linear search is O(n)
• Binary search is O(log n)
• Bubble sort is O(n²)
• Merge sort is O(n log n)

Probabilistic analysis of algorithms is an approach to estimate the computational complexity of an algorithm or a computational problem. It starts from an assumption about a probabilistic distribution of the set of all possible inputs. This assumption is then used to design an efficient algorithm or to derive the complexity of a known algorithm.

Deterministic Algorithms

► Goal: To solve a computational problem correctly and efficiently.
► Behaviour of the algorithm is determined completely by the input.
► Upon reruns, the algorithm executes in exactly the same manner.

Randomized Algorithms

► In addition to the input, the algorithm execution depends on some random bits as well.
► Behaviour of the algorithm is not determined completely by the input.
► Upon reruns, the algorithm can execute in a different manner, even with the same input.

► Deterministic algorithms can have certain "bad inputs".
► These could make the computation go to the worst case running time.
► Most inputs aren't so "bad".
► Randomized algorithms use random bits to change the execution.
► Any given input is now unlikely to be bad.

Deterministic Algorithm
• Always gives the correct answer.
• Always runs within the worst case running time.
Randomized Algorithm
► Gives the right answer,
► in good running time,
► not necessarily always, but with good probability.

Two Types
o Las Vegas Algorithms
o Monte Carlo Algorithms

Las Vegas Algorithms

► Correctness is guaranteed
► May not be fast always.
► Probability of worst case running time is small.
► Expected running time < Worst case running time

Monte Carlo Algorithms

► Running time is fixed.
► Correctness of the algorithm need not be assured.
► Probability of an incorrect output is small.
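To make the contrast concrete, here is a minimal Python sketch (an illustration, not from the source; the function names and parameters are my own): a Las Vegas search that keeps guessing until it finds a target, and a Monte Carlo equality check that samples a fixed number of positions.

import random

def las_vegas_find(a, target):
    # Las Vegas: always returns a correct index, but the number of
    # guesses (the running time) is random. Assumes target is in a.
    while True:
        i = random.randrange(len(a))
        if a[i] == target:
            return i

def monte_carlo_equal(a, b, k=20):
    # Monte Carlo: fixed running time (k samples); may wrongly report
    # equality, but only with small probability. Assumes len(a) == len(b).
    for _ in range(k):
        i = random.randrange(len(a))
        if a[i] != b[i]:
            return False
    return True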
Advantages

 Simplicity
 In some cases, faster than deterministic
 In some cases, no deterministic algorithm exists.
 Adversary cannot choose a bad input

Disadvantages

• Randomness is a resource.
• With some probability, we can get an incorrect output.
• With some probability, the algorithm can perform in worst case time.

Probabilistic Analysis
► Algorithm output/performance can vary depending on the random bits.
► Analysis will yield probabilistic statements.
► We need a mathematical basis to analyze randomized algorithms.
► At times, the analysis could be long and complicated.
► The analysis could use mathematical tools of varying difficulty, but most randomized algorithms are extremely simple to describe and program.
► Probability is over the distribution of the random bits.
► Probability is not over the input distribution.
► For a random variable X, Pr(X = x) denotes the probability with which X takes the value x; E(X) denotes the expectation of the random variable X.

Polynomial Identity Testing

Is a given polynomial P(x) identically equal to 0?
Another form:
Are two given polynomials F(x) and G(x) identically equal to one another, i.e. F(x) ≡ G(x)?
► We want to check if
F(x) = (x − 1)(x + 3)(x − 6) ≡ x³ + 4x² − 12x + 18 = G(x).

Deterministic Algorithm

► Convert the two polynomials to a standard format.
► Check if they are the same.
► If the polynomials are of degree d, this requires Θ(d²) time.
► Always correct.

A Randomized Algorithm

► Choose a value r from a set S of 100d possible values.
► Evaluate F(r) and G(r).
► Check if F(r) = G(r).

Running time: How long does the evaluation take?
► Horner's Method can be used to evaluate F(r) and G(r) in Θ(d) time.


What about correctness?

► If F(x) ≡ G(x), then F(r) = G(r) for any r.
► What if F(x) ≢ G(x)?
► F(r) = G(r) exactly when r is a root of the polynomial F(x) − G(x).

Fundamental Theorem of Algebra

If P(x) is a polynomial of degree d, it has at most d roots.

► Let F(x) ≢ G(x).
► By the above theorem, F(r) − G(r) = 0 for at most d values of r ∈ S.
► There are at most d values of r which can lead to a wrong answer.
► If |S| ≥ 100d, then Pr(F(r) = G(r)) ≤ d/|S| ≤ 1/100.
► Probability of error is ≤ 1/100.
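A minimal Python sketch of one trial of this test (illustrative; the helper names are my own, Horner's rule supplies the Θ(d) evaluation, and the coefficient lists come from expanding the F and G above):

import random

def horner(coeffs, r):
    # Evaluate a polynomial given by coeffs (highest degree first)
    # at the point r in Theta(d) time using Horner's rule.
    value = 0
    for c in coeffs:
        value = value * r + c
    return value

def probably_identical(F, G, d):
    # One trial: if F != G, the error probability is <= d/|S| = 1/100.
    S = range(1, 100 * d + 1)
    r = random.choice(S)
    return horner(F, r) == horner(G, r)

# F(x) = (x-1)(x+3)(x-6) expands to x^3 - 4x^2 - 15x + 18,
# so F and G here actually differ (F(1) = 0 but G(1) = 11):
F = [1, -4, -15, 18]
G = [1, 4, -12, 18]
print(probably_identical(F, G, d=3))   # almost surely False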

► Not happy with the probability of success? Then repeat!

Boosting the Probability of Success
► We saw a Monte Carlo algorithm with Pr(success) ≥ 1 − 1/100.
► Repeating a Monte Carlo algorithm achieves a better probability of success.

Boosted Algorithm

► Choose two values r1, r2 from a set S of 100d possible values.
► Evaluate F(r1), F(r2), G(r1) and G(r2).
► Check if F(r1) = G(r1) and F(r2) = G(r2).
► Report "same" if both F(r1) = G(r1) and F(r2) = G(r2).
► Suppose F(x) ≢ G(x). If we run two independent trials,
Pr(F(r) = G(r) in both trials) ≤ 1/100².
► Boosting is a standard technique for achieving a desired success probability with Monte Carlo algorithms.

Polynomial Identity Testing: Multivariate Case

Is a given polynomial P(x1, x2, . . . , xn) identically equal to 0?
► There is no known deterministic polynomial time algorithm for the multivariate case.
► Multiplying out a polynomial can result in exponentially many terms.
► For the multivariate case, we need a stronger theorem than the Fundamental Theorem of Algebra.

A Randomized Algorithm
► Choose (r1, r2, . . . , rn) ∈ Sⁿ where |S| = 100d.
► Evaluate P(r1, r2, . . . , rn).
► Report that P(x1, x2, . . . , xn) ≡ 0 if P(r1, r2, . . . , rn) = 0.

DeMillo-Lipton-Schwartz-Zippel Lemma
If P ≢ 0, then in the above setting
Pr(P(r1, r2, . . . , rn) = 0) ≤ d/|S|.

• Probability of failure is ≤ 1/100.
• Assumption: P(r1, r2, . . . , rn) can be efficiently evaluated.
• Randomization, indeed, seems to help.
Primality Test
Prime numbers are of immense importance in cryptography, computational number theory, information
science and computer science. There are several algorithms to test if a number is prime. Some of them are fast,
but no fast algorithm to factorize a number is known.

A primality test is deterministic if it outputs True when the number is a prime and False when the input is
composite with probability 1. Otherwise, the primality test is probabilistic. A probabilistic primality test is often
called a pseudoprimality test.

Trial Division
This is one of the simplest deterministic primality tests, naively checking the condition of a number being prime. It uses the fact that a prime number is not divisible by any positive integer other than itself and 1, so we can turn it into its contrapositive: if a number is divisible by some positive integer besides itself and 1, it is composite.

def isPrime(n):
    if n < 2: return False
    for i in range(2, n):   # from 2 to n-1
        if n % i == 0:      # n is divisible by i
            return False
    return True

However, it is not necessary to check all numbers from 2 to n−1.

(Quiz: Your friend has written a program to check whether or not a number n is prime. It is very simple and works by checking if n is divisible by each number from 2 all the way to n−1. When you see this, you curse your friend and tell him he's wasting time. If your friend didn't want to waste time, what is (approximately) the biggest number that he needs to check before he can determine that any number n is prime: n², √n, log n, or ⁴√n?)

Suppose that n is composite, so n = pq for 2 ≤ p, q ≤ n−1. We claim that at least one of p, q is not greater than √n. Indeed, if both were greater than √n, then pq > √n · √n = n, a contradiction. Thus whenever n is composite, one of its factors is not greater than √n, so we can modify the range endpoint above:

from math import sqrt

def isPrime(n):
    if n < 2: return False
    for i in range(2, int(sqrt(n)) + 1):   # from 2 to sqrt(n)
        if n % i == 0:                     # n is divisible by i
            return False
    return True

We loop through i about √n times, so the time complexity is O(√n), multiplied by the time complexity of division (about as fast as multiplication, at around O(lg n lg lg n) using the Newton-Raphson division algorithm). This gives poor performance for large values of n.

Wilson's Theorem
Wilson's theorem is an interesting number theoretic result that can be turned into a primality test.

A positive integer n > 1 is prime if and only if (n−1)! ≡ −1 (mod n).
Thus we can simply compute (n−1)! mod n to check whether n is prime.

def isPrime(n):
    fac = 1
    for i in range(1, n):     # from 1 to n-1
        fac = (fac * i) % n
    return fac == n-1         # true iff (n-1)! ≡ -1 (mod n)

However, this is absurdly slow; it takes O(n) multiplications, even slower than trial division above. (It is comparable to the unmodified trial division, checking up to n−1 instead of √n.)
Fermat Primality Test

This primality test uses the idea of Fermat's little theorem:

Let p be a prime number and a be an integer not divisible by p. Then a^(p−1) − 1 is always divisible by p, i.e. a^(p−1) ≡ 1 (mod p).

The idea of the Fermat primality test is to use the contrapositive: if for some a not divisible by n we have a^(n−1) ≢ 1 (mod n), then n is definitely composite.
However, it's not true that if n is composite, then any a works! For example, consider n = 15, a = 4. We can compute that 4^14 ≡ 1 (mod 15), so if we have n = 15 but we pick a = 4, we cannot conclude that n is composite. So we need to choose an a that allows the test to pass, marking n as composite.

But how do we choose such an a? The solution is simple: make it probabilistic; that is, we pick a randomly. To give high confidence, we repeat the test several times, say k times.

import random

def isPrime(n, k):
    for i in range(k):
        a = random.randrange(2, n)   # 2 <= a <= n-1
        if pow(a, n-1, n) != 1:      # compute a^(n-1) mod n
            return False             # definitely composite
    return True                      # probably prime
If a^(n−1) ≡ 1 (mod n) but n is composite, then a is called a Fermat liar for n, and n is called a Fermat pseudoprime to base a. Otherwise, a is called a Fermat witness for n.

To make things worse, there are numbers which are Fermat pseudoprimes to all bases! That is, a^(n−1) ≡ 1 (mod n) for all a relatively prime to n, but n itself is composite. These numbers are called Carmichael numbers, the first of which is 561.

The good thing is that Carmichael numbers are pretty rare. The bad thing is that there are infinitely many of them. This makes the test rarely used, as it's not feasible to simply store all Carmichael numbers (since there are infinitely many of them). Another good thing is that if n is not a Carmichael number, then at least half of the integers in the range [1, n−1] are Fermat witnesses, so if we can tell that n is not a Carmichael number, every passed iteration of the test halves the probability that n is composite. The probability that a composite number is mistakenly called prime after k iterations is 2^(−k).

The running time of the above algorithm is O(k) for k tests, multiplied by O(lg n) for modular exponentiation, multiplied by some time for multiplication (schoolbook multiplication uses O(lg² n) time). Although this test has flaws (being probabilistic, and Carmichael numbers), it runs in polylogarithmic time, which is fast.
Miller-Rabin Primality Test
The Miller-Rabin primality test is, in some sense, a more advanced form of the Fermat primality test. Here is the algorithm first, with explanation to follow.

import random

def isPrime(n, k):
    if n < 2: return False
    if n < 4: return True
    if n % 2 == 0: return False   # speedup

    # now n is odd, n > 3
    s = 0
    d = n - 1
    while d % 2 == 0:
        s += 1
        d //= 2
    # n - 1 = 2^s * d where d is odd

    for i in range(k):
        a = random.randrange(2, n-1)   # 2 <= a <= n-2
        x = pow(a, d, n)               # a^d mod n
        if x == 1: continue
        for j in range(s):
            if x == n-1: break
            x = (x * x) % n
        else:
            return False
    return True
To understand the algorithm, we need a couple of concepts.

First, suppose p is prime, and consider the modular equation x² ≡ 1 (mod p). What are the solutions to this equation? We know that x² − 1 ≡ 0 (mod p), or (x−1)(x+1) ≡ 0 (mod p). Since p is prime, p has to divide either x−1 or x+1; this leads to x ≡ 1, −1 (mod p).

Now, suppose p is an odd prime and a is an integer relatively prime to p. By Fermat's little theorem, we know that a^(p−1) ≡ 1 (mod p). The idea is that we repeatedly divide the exponent by two: as long as the exponent is even, a^(2e) ≡ 1 (mod p) for some e, we can invoke the above result, giving a^e ≡ −1 (mod p) or a^e ≡ 1 (mod p). In the latter case, we can invoke the result again, until either we get the first case or the exponent becomes odd.

In other words, we have the following theorem:

Let p be an odd prime, and let p − 1 = 2^s · d, where d is an odd integer and s is a positive integer. Also let a be a positive integer coprime to p. Then at least one of the following must hold:

• Some of a^(2^(s−1)·d), a^(2^(s−2)·d), …, a^d is congruent to −1 (mod p).
• a^d ≡ 1 (mod p).
The Miller-Rabin primality test uses the contrapositive of this theorem. That is, if for some a neither of the above holds, then p is clearly not prime. This explains the bulk of the algorithm:

x = pow(a, d, n)           # a^d mod n
if x == 1: continue
for j in range(s):
    if x == n-1: break
    x = (x * x) % n
else:
    return False

Here, we compute x = a^d mod n. (In the algorithm, we name the integer n instead of p, as we don't know yet whether it's a prime.) If x ≡ 1 (mod n), then the second point in the theorem is true. Otherwise, we test if x ≡ −1 (mod n), which means the first point is true; or otherwise we square x, giving a^(2¹·d). We repeat this again, checking if it is congruent to −1 (mod n); if it isn't, we square it to a^(2²·d), and so on up to a^(2^(s−1)·d). Since we know a^(2^s·d) = a^(n−1) ≡ 1 (mod n) if n is prime, there is no need to check one more time. If we ever encountered the if x == n-1: break line, then the first point is true; otherwise, neither point is true and thus n is certainly composite. Note that we're using the for-else construct here: if the for exits because of break, the else clause is not run; otherwise it is run. When the break is not encountered, we know that neither point is true, so n is certainly composite, and thus we go to the else clause and immediately return False.

Now, the problem is finding the value of a. How do we select it? There is no good way to do so with certainty, which means we simply select a at random and repeat for several, say k, times, as in the Fermat primality test. Just like in the Fermat primality test, there are a couple of terms: if for a composite n, a fails to detect the compositeness of n, then a is called a strong liar for n and n is a strong pseudoprime to base a, while if a manages to detect it, a is called a witness for n.

For any composite n, at least 3/4 of the integers in the range [2, n−2] are witnesses for n, which is better than the Fermat primality test: the latter has Carmichael numbers, for which none of the integers are witnesses, and even if n is not Carmichael, only half of the integers are guaranteed to be witnesses. For the Miller-Rabin primality test, the probability that a composite is mistakenly passed as a prime after k iterations is 4^(−k).

However, it's still a flaw that primality is not guaranteed. If we have an upper bound on n, a solution is simply to fix the values of a. For example, by simply testing a = 2, 3, 5, 7, 11, 13, 17, the test is guaranteed correct as long as n < 341,550,071,728,321, so if we can guarantee that n is lower than this, checking those seven values of a is enough. There are many such deterministic variants.
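A sketch of such a deterministic variant (illustrative; it reuses the witness loop above with the seven fixed bases just quoted, and the bound comes from the text):

def isPrimeDeterministic(n):
    # Deterministic Miller-Rabin, valid for n < 341,550,071,728,321.
    if n < 2: return False
    bases = (2, 3, 5, 7, 11, 13, 17)
    if n in bases: return True
    if any(n % p == 0 for p in bases): return False
    s, d = 0, n - 1
    while d % 2 == 0:
        s += 1
        d //= 2
    for a in bases:                    # fixed bases instead of random ones
        x = pow(a, d, n)
        if x == 1 or x == n - 1: continue
        for _ in range(s - 1):
            x = (x * x) % n
            if x == n - 1: break
        else:
            return False
    return True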

The running time of this algorithm is composed of two parts: preparation, when we compute n − 1 = 2^s · d, and testing, when we pick random values of a. The preparation is simply O(lg n). The testing is O(k) for k trials, multiplied by O(lg n) for modular exponentiation, multiplied by the time for multiplication; this is exactly the same running time as the Fermat primality test.
AKS Primality Test

So far, the primality tests discussed are either prohibitively slow (polynomial in n instead of in lg n, the number of digits of n), or only probabilistic (not guaranteed to be correct). This made people wonder whether the problem PRIMES, determining the primality of a number, is actually solvable "quickly" enough (in time polynomial in the number of digits); that is, whether it is in complexity class P or not.
In 1976, Gary Miller (who invented the Miller-Rabin primality test above, together with Michael Rabin) also wrote about the "Miller test", a deterministic variant (actually the original version) of the Miller-Rabin primality test. This test runs in polynomial time, but its correctness depends on the so-far unsettled generalized Riemann hypothesis, and thus this test is unsatisfactory.

However, in 2002, Manindra Agrawal, Neeraj Kayal, and Nitin Saxena at the Indian Institute of Technology solved this problem affirmatively by constructing a deterministic test that runs in polynomial time and doesn't depend on such unproven hypotheses. Their test is called the AKS test, given in a paper titled simply "PRIMES is in P".

The idea of the algorithm relies on the simple fact that (x − a)^n ≡ x^n − a (mod n) for all a coprime to n, if and only if n is prime. Note that the two sides are polynomials; that is, each coefficient must be compared. The proof is simply done by Fermat's little theorem as used in the Fermat primality test, together with the fact that C(n, k) ≡ 0 (mod n) for all 1 ≤ k ≤ n−1 if and only if n is prime.

The above is a primality test by itself, but it takes exponential time in the number of digits, so the paper uses a similar fact: (x − a)^n ≡ x^n − a (mod (n, x^r − 1)), which means (x − a)^n − (x^n − a) = nf + (x^r − 1)g for some polynomials f, g. If r = O(lg n), this fact can be checked in polynomial time. This fact is still satisfied by all primes, but some composites also pass, for particular values of a, r. The paper thus proves that there exist a small enough r and a small enough set A, such that if the relation is satisfied for all a ∈ A, then n cannot be composite, proving that it is prime.

Currently, the running time of this algorithm is Õ(lg⁶ n) (that is, O(lg⁶ n · lg^k lg n) for some k). The paper originally had 12 in the exponent, but this was quickly reduced by the work of various authors. This is still slower than the Miller-Rabin primality test, as well as more complex, which is why the Miller-Rabin test is more widely used at the moment for practical numbers (say, in that 341 · 10^12 range), but theoretically this algorithm is of great interest for being the first algorithm to prove that primality is not a hard problem after all.
UNIT - II

DIVIDE AND CONQUER

General method:
Given a function to compute on 'n' inputs, the divide-and-conquer strategy suggests splitting the inputs into 'k' distinct subsets, 1 < k <= n, yielding 'k' sub-problems. These sub-problems must be solved, and then a method must be found to combine the sub-solutions into a solution of the whole.

If the sub-problems are still relatively large, then the divide-and-conquer strategy can possibly be reapplied. Often the sub-problems resulting from a divide-and-conquer design are of the same type as the original problem. For those cases the reapplication of the divide-and-conquer principle is naturally expressed by a recursive algorithm. DAndC is initially invoked as DAndC(P), where 'P' is the problem to be solved. Small(P) is a Boolean-valued function that determines whether the input size is small enough that the answer can be computed without splitting. If this is so, the function 'S' is invoked. Otherwise, the problem P is divided into smaller sub-problems P1, P2, …, Pk, which are solved by recursive application of DAndC. Combine is a function that determines the solution to P using the solutions to the 'k' sub-problems. If the size of 'P' is n and the sizes of the 'k' sub-problems are n1, n2, …, nk respectively, then the computing time of DAndC is described by the recurrence relation

T(n) = g(n)                                      n small
T(n) = T(n1) + T(n2) + … + T(nk) + f(n)          otherwise

where T(n) is the time for DAndC on any input of size 'n', g(n) is the time to compute the answer directly for small inputs, and f(n) is the time for dividing P and combining the solutions to the sub-problems.

Algorithm DAndC(P)
{
if Small(P) then return S(P);
else
{
divide P into smaller instances P1, P2, …, Pk, k >= 1;
apply DAndC to each of these sub-problems;
return Combine(DAndC(P1), DAndC(P2), …, DAndC(Pk));
}
}

The complexity of many divide-and-conquer algorithms is given by a recurrence relation of the form

T(n) = T(1)               n = 1
T(n) = aT(n/b) + f(n)     n > 1

where a and b are known constants. We assume that T(1) is known and 'n' is a power of b (i.e., n = b^k).

One of the methods for solving any such recurrence relation is called the substitution method. This method repeatedly substitutes the right-hand side for each occurrence of the function T until all such occurrences disappear.
Example:

1) Consider the case in which a = 2 and b = 2. Let T(1) = 2 and f(n) = n. We have

T(n) = 2T(n/2) + n
     = 2[2T(n/4) + n/2] + n
     = 4T(n/4) + 2n
     = 4[2T(n/8) + n/4] + 2n
     = 8T(n/8) + 3n
     *
     *
In general, we see that T(n) = 2^i T(n/2^i) + i·n, for any 1 <= i <= log2 n.

Corresponding to the choice i = log2 n,

T(n) = 2^(log n) T(n/2^(log n)) + n log n
     = n · T(n/n) + n log n
     = n · T(1) + n log n
     = 2n + n log n          [since T(1) = 2]

Thus T(n) = n log n + 2n.

Using the substitution method, the recurrence can be shown to have the solution

T(n) = n^(log_b a) [T(1) + u(n)]

where u(n) is determined by h(n) = f(n)/n^(log_b a) as follows:

h(n)                     u(n)
O(n^r), r < 0            O(1)
Θ((log n)^i), i >= 0     Θ((log n)^(i+1)/(i+1))
Ω(n^r), r > 0            Θ(h(n))

Applications of the divide and conquer rule:
Binary search, Quick sort, Merge sort, Strassen's matrix multiplication.

BINARY SEARCH
Given a list of n elements arranged in increasing order, the problem is to determine whether a given element x is present in the list. If x is present then determine the position of x; otherwise the position is zero.

Divide and conquer is used to solve the problem. Small(P) is true if n = 1; then S(P) = i if x = a[i] (a[] is the array), otherwise S(P) = 0. If P has more than one element it can be divided into sub-problems: choose an index j and compare x with a[j]. There are then 3 possibilities: (i) x = a[j]; (ii) x < a[j] (x is searched for in the list a[1]…a[j-1]); (iii) x > a[j] (x is searched for in the list a[j+1]…a[n]).
The same procedure is applied repeatedly until the solution is found or the sub-list becomes empty.
Algorithm Binsearch(a, n, x)
// Given an array a[1:n] of elements in non-decreasing
// order, n >= 0, determine whether 'x' is present and
// if so, return 'j' such that x = a[j]; else return 0.
{
low := 1; high := n;
while (low <= high) do
{
mid := ⌊(low+high)/2⌋;
if (x < a[mid]) then high := mid - 1;
else if (x > a[mid]) then low := mid + 1;
else return mid;
}
return 0;
}
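A direct Python transcription of this algorithm (a sketch; Python lists are 0-indexed, so the bounds are shifted, and a sentinel value of -1 replaces the return of 0):

def binsearch(a, x):
    # Iterative binary search; returns the index of x in the
    # sorted list a, or -1 if x is not present.
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2
        if x < a[mid]:
            high = mid - 1
        elif x > a[mid]:
            low = mid + 1
        else:
            return mid
    return -1

a = [-15, -6, 0, 7, 9, 23, 54, 82, 101, 112, 125, 131, 142, 151]
print(binsearch(a, 151), binsearch(a, -14), binsearch(a, 9))   # 13 -1 4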
The algorithm above describes this binary search method. The recursive version Binsrch has 4 inputs: a[], i, n and x, and is initially invoked as Binsrch(a, 1, n, x). The non-recursive version Binsearch given above has 3 inputs: a, n and x. The while loop continues processing as long as there are more elements left to check. At the conclusion of the procedure, 0 is returned if x is not present, or 'j' is returned such that a[j] = x. We observe that low and high are integer variables such that each time through the loop either x is found, or low is increased by at least one, or high is decreased by at least one. Thus we have 2 sequences of integers approaching each other, and eventually low becomes greater than high, causing termination in a finite number of steps if 'x' is not present.

Example:

1) Let us select the 14 entries

-15, -6, 0, 7, 9, 23, 54, 82, 101, 112, 125, 131, 142, 151,

place them in a[1:14], and simulate the steps Binsearch goes through as it searches for different values of 'x'. Only the variables low, high and mid need to be traced as we simulate the algorithm. We try the following values for x: 151, -14 and 9, giving 2 successful searches and 1 unsuccessful search. The table below shows the traces of Binsearch for these 3 inputs.
x=151    low   high   mid
          1     14     7
          8     14     11
          12    14     13
          14    14     14
                       Found

x=-14    low   high   mid
          1     14     7
          1     6      3
          1     2      1
          2     2      2
          2     1      Not found

x=9      low   high   mid
          1     14     7
          1     6      3
          4     6      5
                       Found

Theorem: Algorithm Binsearch(a, n, x) works correctly.

Proof: We assume that all statements work as expected and that comparisons such as x > a[mid] are appropriately carried out.

Initially low = 1, high = n, n >= 0, and a[1] <= a[2] <= … <= a[n].

If n = 0, the while loop is not entered and 0 is returned. Otherwise we observe that each time through the loop the possible elements to be checked for equality with x are a[low], a[low+1], …, a[mid], …, a[high]. If x = a[mid], then the algorithm terminates successfully. Otherwise the range is narrowed by either increasing low to mid+1 or decreasing high to mid-1. Clearly this narrowing of the range does not affect the outcome of the search. If low becomes greater than high, then 'x' is not present and hence the loop is exited.

The complexity of binary search for successful searches is:
Worst case: O(log n) or θ(log n)
Average case: O(log n) or θ(log n)
Best case: O(1) or θ(1)

For unsuccessful searches the complexity is θ(log n) in all cases.


Merge Sort
Merge sort is a classic example of divide and conquer. To sort an array, recursively sort its left and right halves separately and then merge them. The time complexity of merge sort in the best case, worst case and average case is O(n log n), and the number of comparisons used is nearly optimal.
This strategy is simple and efficient, but the problem is that there seems to be no easy way to merge two adjacent sorted arrays together in place (the result must be built up in a separate array). The fundamental operation in this algorithm is merging two sorted lists. Because the lists are sorted, this can be done in one pass through the input, if the output is put in a third list.

Algorithm MERGESORT(low, high)
// a(low : high) is a global array to be sorted.
{
if (low < high)
{
mid := ⌊(low + high)/2⌋;   // finds where to split the set
MERGESORT(low, mid);       // sort one subset
MERGESORT(mid+1, high);    // sort the other subset
MERGE(low, mid, high);     // combine the results
}
}
Algorithm MERGE(low, mid, high)
// a(low : high) is a global array containing two sorted subsets
// in a(low : mid) and in a(mid + 1 : high).
// The objective is to merge these sorted sets into a single sorted
// set residing in a(low : high). An auxiliary array b is used.
{
h := low; i := low; j := mid + 1;
while ((h <= mid) and (j <= high)) do
{
if (a[h] <= a[j]) then
{
b[i] := a[h]; h := h + 1;
}
else
{
b[i] := a[j]; j := j + 1;
}
i := i + 1;
}
if (h > mid) then
for k := j to high do
{
b[i] := a[k]; i := i + 1;
}
else
for k := h to mid do
{
b[i] := a[k]; i := i + 1;
}
for k := low to high do
a[k] := b[k];
}
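The same algorithm in runnable Python (a sketch using 0-based slices rather than the global-array convention above; the test input is the array from the tree of calls below):

def merge_sort(a):
    # Recursively sort the left and right halves, then merge.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    # Merge the two sorted halves in one pass into a third list.
    b, h, j = [], 0, 0
    while h < len(left) and j < len(right):
        if left[h] <= right[j]:
            b.append(left[h]); h += 1
        else:
            b.append(right[j]); j += 1
    b.extend(left[h:])   # copy whichever half has elements remaining
    b.extend(right[j:])
    return b

print(merge_sort([310, 285, 179, 652, 351, 423, 861, 254, 450, 520]))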

Example

Tree call of Merge sort:

A[1:10]={310,285,179,652,351,423,861,254,450,520}

1, 10

1, 5 6, 10

1, 3 4, 5 6, 8 9, 10

1, 2 3,3 4, 4 5, 5 6, 7 8, 8 9,9 10, 10

1, 1 2, 2 6, 6 7, 7

Tree call of Merge sort (1, 10)

Analysis of Merge Sort

We will assume that 'n' is a power of 2, so that we always split into even halves; we solve for the case n = 2^k.

For n = 1, the time to merge sort is constant, which we will denote by 1. Otherwise, the time to merge sort 'n' numbers is equal to the time to do two recursive merge sorts of size n/2, plus the time to merge, which is linear. The following equations say this exactly:

T(1) = 1
T(n) = 2T(n/2) + n

This is a standard recurrence relation, which can be solved several ways. We will solve it by repeatedly substituting the recurrence relation on the right-hand side.

We have T(n) = 2T(n/2) + n. Since T(n/2) = 2T(n/4) + n/2,

T(n) = 2(2T(n/4) + n/2) + n
     = 4T(n/4) + 2n

Again, since T(n/4) = 2T(n/8) + n/4,

T(n) = 4(2T(n/8) + n/4) + 2n
     = 8T(n/8) + 3n

Continuing in this manner, we obtain:

T(n) = 2^k T(n/2^k) + k·n

As n = 2^k, k = log2 n; substituting this in the above equation,

T(n) = n·T(1) + n log n
     = n + n log n

Representing this in O-notation, T(n) = O(n log n).

We have assumed that n = 2^k. The analysis can be refined to handle cases when 'n' is not a power of 2; the answer turns out to be almost identical.
Although merge sort's running time is O(n log n), it is hardly ever used for main memory sorts. The main problem is that merging two sorted lists requires linear extra memory, and the additional work spent copying to the temporary array and back, throughout the algorithm, slows the sort down considerably. The best and worst case time complexity of merge sort is O(n log n).

Strassen's Matrix Multiplication:

The matrix multiplication algorithm due to Strassen (1969) is the most dramatic example of the divide and conquer technique.
Let A and B be two n×n matrices. The product matrix C = AB is also an n×n matrix, whose (i, j)th element is formed by taking the elements in the ith row of A and the jth column of B, multiplying them pairwise and summing. The usual way:

C(i, j) = Σ (1 <= k <= n) A(i, k)·B(k, j), where 1 <= i, j <= n.

To compute C(i, j) using this formula, we need n multiplications.

The divide and conquer strategy suggests another way to compute the product of two n×n matrices. For simplicity assume n is a power of 2, that is n = 2^k for some nonnegative integer k. If n is not a power of two then enough rows and columns of zeros can be added to both A and B so that the resulting dimensions are a power of two.
To multiply two n×n matrices A and B, yielding result matrix C, imagine that A and B are each partitioned into four square sub-matrices, each having dimensions n/2 × n/2:

[C11 C12]   [A11 A12]   [B11 B12]
[C21 C22] = [A21 A22] × [B21 B22]

Then the Cij can be found by the usual matrix multiplication algorithm:

C11 = A11·B11 + A12·B21
C12 = A11·B12 + A12·B22
C21 = A21·B11 + A22·B21
C22 = A21·B12 + A22·B22

This leads to a divide-and-conquer algorithm which performs n×n matrix multiplication by partitioning the matrices into quarters and performing eight (n/2)×(n/2) matrix multiplications and four (n/2)×(n/2) matrix additions. Counting scalar multiplications,

T(1) = 1
T(n) = 8T(n/2)

which leads to T(n) = O(n³), where n is a power of 2.

Strassen's insight was to find an alternative method for calculating the Cij, requiring seven (n/2)×(n/2) matrix multiplications and eighteen (n/2)×(n/2) matrix additions and subtractions:
P = (A11 + A22) (B11 + B22)

Q = (A21 + A22)B11

R = A11 (B12 -B22)

S = A22 (B21 - B11)

T = (A11 + A12)B22

U = (A21 – A11) (B11 + B12)

V = (A12 – A22) (B21 + B22)

C11 = P + S – T +V

C12 = R + T

C21 = Q +S

C22 = P + R - Q +U.

When this method is used recursively to perform the seven (n/2)×(n/2) matrix multiplications, the recurrence equation for the number of scalar multiplications performed is:

T(1) = 1
T(n) = 7T(n/2)

Solving this for the case n = 2^k is easy:

T(2^k) = 7T(2^(k-1))
       = 7²T(2^(k-2))
       - - - - - -
       = 7^i T(2^(k-i))

Put i = k:

       = 7^k T(2^0)
       = 7^k

As k = log2 n,

T(n) = 7^(log2 n) = n^(log2 7) = O(n^2.81)

So we conclude that Strassen's algorithm is asymptotically more efficient than the standard algorithm. In practice, however, the overhead of managing the many small matrices does not pay off until 'n' reaches the hundreds.
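A compact Python sketch of this recursion (illustrative; it assumes n is a power of 2, uses numpy for the block additions, and, as the remark about small matrices suggests, falls back to ordinary multiplication below a hypothetical threshold):

import numpy as np

def strassen(A, B, threshold=64):
    # Multiply square matrices A and B (n a power of 2) via Strassen's
    # seven products P..V, matching the formulas in the text above.
    n = A.shape[0]
    if n <= threshold:              # ordinary multiplication on small blocks
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    P = strassen(A11 + A22, B11 + B22, threshold)
    Q = strassen(A21 + A22, B11, threshold)
    R = strassen(A11, B12 - B22, threshold)
    S = strassen(A22, B21 - B11, threshold)
    T = strassen(A11 + A12, B22, threshold)
    U = strassen(A21 - A11, B11 + B12, threshold)
    V = strassen(A12 - A22, B21 + B22, threshold)
    C = np.empty((n, n), dtype=A.dtype)
    C[:h, :h] = P + S - T + V       # C11
    C[:h, h:] = R + T               # C12
    C[h:, :h] = Q + S               # C21
    C[h:, h:] = P + R - Q + U       # C22
    return C

A = np.arange(16, dtype=float).reshape(4, 4)
B = np.eye(4)
print(np.allclose(strassen(A, B, threshold=1), A @ B))   # True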

Quick Sort

The main reason for the slowness of algorithms in which all comparisons and exchanges between keys in a sequence w1, w2, …, wn take place between adjacent pairs is that it takes a relatively long time for a key that is badly out of place to work its way into its proper position in the sorted sequence.
Hoare devised a very efficient way of implementing this idea in the early 1960's that improves the O(n²) behaviour of such algorithms to an expected performance of O(n log n). In essence, the quick sort algorithm partitions the original array by rearranging it into two groups. The first group contains those elements less than some arbitrarily chosen value taken from the set, and the second group contains those elements greater than or equal to the chosen value.
The chosen value is known as the pivot element. Once the array has been rearranged in this way with respect to the pivot, the same partitioning is recursively applied to each of the two subsets. When all the subsets have been partitioned and rearranged, the original array is sorted.
The function partition() makes use of two pointers 'i' and 'j' which are moved toward each other in the following fashion:
1. Repeatedly increase the pointer 'i' until a[i] >= pivot.
2. Repeatedly decrease the pointer 'j' until a[j] <= pivot.
3. If j > i, interchange a[j] with a[i].
4. Repeat steps 1, 2 and 3 until the 'i' pointer crosses the 'j' pointer. When it does, the position for the pivot is found, and the pivot element is placed at the 'j' pointer position.
The program uses a recursive function quicksort(), which sorts all elements in an array 'a' between positions 'low' and 'high'.
It terminates when the condition low >= high is satisfied; this condition will be satisfied only when the array is completely sorted. Here we choose the first element as the pivot, so pivot = x[low]. The partition function is then called to find the proper position j of the element x[low], i.e. the pivot. We then have two sub-arrays x[low], x[low+1], …, x[j-1] and x[j+1], x[j+2], …, x[high]. Quicksort calls itself recursively to sort the left sub-array x[low], …, x[j-1] between positions low and j-1 (where j is returned by the partition function), and again to sort the right sub-array x[j+1], …, x[high] between positions j+1 and high.
Algorithm QUICKSORT(low, high)
// Sorts the elements a(low), …, a(high), which reside in the global array a(1 : n), into
// ascending order. a(n + 1) is considered to be defined and must be greater than all
// elements in a(1 : n); a(n + 1) = +∞.
{
if (low < high) then
{
j := PARTITION(a, low, high+1);
// j is the position of the partitioning element
QUICKSORT(low, j - 1);
QUICKSORT(j + 1, high);
}
}

Algorithm PARTITION(a, m, p)
{
v := a(m); i := m; j := p;
// a(m) is the partition element
do
{
repeat
i := i + 1;
until (a(i) >= v);
repeat
j := j - 1;
until (a(j) <= v);
if (i < j) then INTERCHANGE(a, i, j);
} while (i < j);
a[m] := a[j]; a[j] := v;
return j;
}

Algorithm INTERCHANGE(a, i, j)
{
p := a[i];
a[i] := a[j];
a[j] := p;
}
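An equivalent runnable Python sketch (0-based indexing, first element as pivot, same two-pointer partitioning; a simplified illustration rather than the sentinel-based version above):

def quicksort(a, low, high):
    # Sort a[low..high] in place.
    if low < high:
        j = partition(a, low, high)
        quicksort(a, low, j - 1)
        quicksort(a, j + 1, high)

def partition(a, low, high):
    v = a[low]                          # first element as pivot
    i, j = low, high
    while i < j:
        while i < high and a[i] <= v:   # move i right past small keys
            i += 1
        while a[j] > v:                 # move j left past large keys
            j -= 1
        if i < j:
            a[i], a[j] = a[j], a[i]
    a[low], a[j] = a[j], a[low]         # place pivot at its final position
    return j

a = [38, 8, 16, 6, 79, 57, 24, 56, 2, 58, 4, 70, 45]
quicksort(a, 0, len(a) - 1)
print(a)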

Example
Select the first element as the pivot element. Move the 'i' pointer from left to right in search of an element larger than the pivot. Move the 'j' pointer from right to left in search of an element smaller than the pivot. If such elements are found, they are swapped. This process continues until the 'i' pointer crosses the 'j' pointer; at that point the position for the pivot is found, and the pivot is interchanged with the element at position 'j'.
Let us consider the following example with 13 elements to analyze quicksort:

38 08 16 06 79 57 24 56 02 58 04 70 45        pivot = 38
  i stops at 79, j stops at 04: swap 79 and 04
  i stops at 57, j stops at 02: swap 57 and 02
  j crosses i: place the pivot at position j
(24 08 16 06 04 02) 38 (56 57 58 79 70 45)

Left part, pivot = 24: i and j meet, pivot placed:
(02 08 16 06 04) 24
  pivot = 02: 02 (08 16 06 04)
  pivot = 08: i stops at 16, j stops at 04: swap; then (06 04) 08 (16)
  pivot = 06: (04) 06; the single elements 04 and 16 are already in place
(02 04 06 08 16 24) 38

Right part, pivot = 56: i stops at 57, j stops at 45: swap; then (45) 56 (58 79 70 57)
  pivot = 58: i stops at 79, j stops at 57: swap; then (57) 58 (70 79)
  pivot = 70: 70 (79)
(45 56 57 58 70 79)

Final sorted array:
02 04 06 08 16 24 38 45 56 57 58 70 79

Analysis of Quick Sort:

Like merge sort, quick sort is recursive, and hence its analysis requires solving a recurrence formula. We will do the analysis for a quick sort assuming a random pivot, and take T(0) = T(1) = 1, as in merge sort.
The running time of quick sort is equal to the running time of the two recursive calls plus the linear time spent in the partition (the pivot selection takes only constant time). This gives the basic quick sort relation:

T(n) = T(i) + T(n - i - 1) + cn        - (1)

where i = |S1| is the number of elements in S1.

Worst Case Analysis
The pivot is the smallest element, all the time. Then i = 0, and if we ignore T(0) = 1, which is insignificant, the recurrence is:

T(n) = T(n - 1) + cn,  n > 1        - (2)

Using equation (2) repeatedly,

T(n - 1) = T(n - 2) + c(n - 1)
T(n - 2) = T(n - 3) + c(n - 2)
- - - - - -
T(2) = T(1) + c(2)

Adding up all these equations yields

T(n) = T(1) + c Σ (i=2 to n) i = O(n²)        - (3)

Best Case Analysis
In the best case, the pivot is in the middle. To simplify the math, we assume that the two sub-arrays are each exactly half the size of the original; although this gives a slight overestimate, it is acceptable because we are only interested in a Big-oh answer:

T(n) = 2T(n/2) + cn        - (4)

Divide both sides by n:

T(n)/n = T(n/2)/(n/2) + c

Substituting n/2 for n and repeating,

T(n/2)/(n/2) = T(n/4)/(n/4) + c
T(n/4)/(n/4) = T(n/8)/(n/8) + c
. . .
T(2)/2 = T(1)/1 + c

Adding up all the equations and cancelling the common terms, finally

T(n)/n = T(1)/1 + c log n

which yields T(n) = cn log n + n = O(n log n).

This is exactly the same analysis as merge sort, hence we get the same answer.
Average Case Analysis
The number of comparisons for the first call on partition: assume the left_to_right pointer moves over k smaller elements, making k comparisons; when right_to_left crosses left_to_right it has made n-k+1 comparisons. So the first call on partition makes n+1 comparisons. The average case complexity of quicksort is then

T(n) = comparisons for first call on quicksort
       + {Σ (1<=nleft, nright<=n) [T(nleft) + T(nright)]}/n
     = (n+1) + 2[T(0) + T(1) + T(2) + … + T(n-1)]/n

nT(n) = n(n+1) + 2[T(0) + T(1) + T(2) + … + T(n-2) + T(n-1)]
(n-1)T(n-1) = (n-1)n + 2[T(0) + T(1) + T(2) + … + T(n-2)]

Subtracting the two:

nT(n) - (n-1)T(n-1) = [n(n+1) - (n-1)n] + 2T(n-1) = 2n + 2T(n-1)
nT(n) = 2n + (n-1)T(n-1) + 2T(n-1) = 2n + (n+1)T(n-1)
T(n) = 2 + (n+1)T(n-1)/n

The recurrence relation obtained is:

T(n)/(n+1) = 2/(n+1) + T(n-1)/n

Using the method of substitution:

T(n)/(n+1) = 2/(n+1) + T(n-1)/n
T(n-1)/n = 2/n + T(n-2)/(n-1)
T(n-2)/(n-1) = 2/(n-1) + T(n-3)/(n-2)
T(n-3)/(n-2) = 2/(n-2) + T(n-4)/(n-3)
. .
T(3)/4 = 2/4 + T(2)/3
T(2)/3 = 2/3 + T(1)/2
T(1)/2 = 2/2 + T(0)

Adding both sides and cancelling the common terms:

T(n)/(n+1) = 2[1/2 + 1/3 + 1/4 + … + 1/n + 1/(n+1)] + T(0)

The bracketed harmonic sum is O(log n), so finally we get

T(n) = O(n log n)
GREEDY METHOD

GENERALMETHOD
Greedy is the most straightforward design technique. Most of these problems have n inputs and require us to obtain a subset that satisfies some constraints. Any subset that satisfies these constraints is called a feasible solution. We need to find a feasible solution that either maximizes or minimizes a given objective function. A feasible solution that does this is called an optimal solution.
The greedy method is a simple strategy of progressively building up a solution, one
element at a time, by choosing the best possible element at each stage. At each stage, a
decision is made regarding whether or not a particular input is in an optimal solution. This is
done by considering the inputs in an order determined by some selection procedure. If the
inclusion of the next input, into the partially constructed optimal solution will result in an
infeasible solution then this input is not added to the partial solution. The selection
procedure itself is based on some optimization measure. Several optimization measures are
plausible for a given problem. Most of them, however, will result in algorithms that
generate sub-optimal solutions. This version of greedy technique is called subset paradigm.
Some problems like Knapsack, Job sequencing with deadlines and minimum cost spanning
trees are based on subset paradigm.
For the problems that make decisions by considering the inputs in some order, each
decision is made using an optimization criterion that can be computed using decisions
already made. This version of greedy method is ordering paradigm. Some problems like
optimal storage on tapes, optimal merge patterns and single source shortest path are based
on ordering paradigm.

CONTROL ABSTRACTION

Algorithm Greedy(a, n)
// a(1 : n) contains the 'n' inputs
{
solution := ∅;   // initialize the solution to be empty
for i := 1 to n do
{
x := Select(a);
if Feasible(solution, x) then
solution := Union(solution, x);
}
return solution;
}

Procedure Greedy describes the essential way that a greedy-based algorithm will look, once a particular problem is chosen and the functions Select, Feasible and Union are properly implemented.
The function Select selects an input from 'a', removes it and assigns its value to 'x'. Feasible is a Boolean-valued function which determines whether 'x' can be included into the solution vector. The function Union combines 'x' with the solution and updates the objective function.
KNAPSACK PROBLEM
Let us apply the greedy method to solve the knapsack problem. We are given 'n' objects and a knapsack. Object 'i' has a weight wi and the knapsack has a capacity 'm'. If a fraction xi, 0 <= xi <= 1, of object i is placed into the knapsack, then a profit of pi·xi is earned. The objective is to fill the knapsack in a way that maximizes the total profit earned.
Since the knapsack capacity is 'm', we require the total weight of all chosen objects to be at most 'm'. The problem is stated as:

Maximize    Σ (1<=i<=n) pi·xi
subject to  Σ (1<=i<=n) wi·xi <= m,   0 <= xi <= 1

The profits and weights are positive numbers.

Algorithm
If the objects have already been sorted into non-increasing order of p[i]/w[i], then the algorithm given below obtains solutions corresponding to this strategy.

Algorithm GreedyKnapsack(m, n)
// p[1 : n] and w[1 : n] contain the profits and weights respectively of the
// n objects ordered so that p[i]/w[i] >= p[i+1]/w[i+1].
// m is the knapsack size and x[1 : n] is the solution vector.
{
for i := 1 to n do
x[i] := 0.0;   // initialize the solution vector
U := m;
for i := 1 to n do
{
if (w[i] > U) then break;
x[i] := 1.0;
U := U - w[i];
}
if (i <= n) then x[i] := U / w[i];
}

Running time:
The objects are to be sorted into non-increasing order of the pi/wi ratio. But if we disregard the time to initially sort the objects, the algorithm requires only O(n) time.

Example:
Consider the following instance of the knapsack problem: n = 3, m = 20, (p1, p2, p3) = (25, 24, 15) and (w1, w2, w3) = (18, 15, 10).

1. First, we try to fill the knapsack by selecting the objects in some arbitrary order:

x1    x2    x3    Σwi·xi                            Σpi·xi
1/2   1/3   1/4   18×1/2 + 15×1/3 + 10×1/4 = 16.5   25×1/2 + 24×1/3 + 15×1/4 = 24.25

2. Select the object with the maximum profit first (p = 25). So x1 = 1 and the profit earned is 25. Now only 2 units of space are left; select the object with the next largest profit (p = 24). So x2 = 2/15.

x1   x2     x3   Σwi·xi                   Σpi·xi
1    2/15   0    18×1 + 15×2/15 = 20      25×1 + 24×2/15 = 28.2

3. Consider the objects in order of non-decreasing weights wi:

x1   x2    x3   Σwi·xi                Σpi·xi
0    2/3   1    15×2/3 + 10×1 = 20    24×2/3 + 15×1 = 31

4. Consider the objects in order of the ratio pi/wi:

p1/w1   p2/w2   p3/w3
25/18   24/15   15/10
1.4     1.6     1.5

Sort the objects in non-increasing order of the ratio pi/wi. Select the object with the maximum pi/wi ratio, so x2 = 1 and the profit earned is 24. Now only 5 units of space are left; select the object with the next largest pi/wi ratio, so x3 = 1/2 and the profit earned is 7.5.

x1   x2   x3    Σwi·xi               Σpi·xi
0    1    1/2   15×1 + 10×1/2 = 20   24×1 + 15×1/2 = 31.5

This last solution is the optimal solution.


Subgraphs and Spanning Trees:

Subgraphs: A graph G' = (V', E') is a subgraph of a graph G = (V, E) iff V' ⊆ V and E' ⊆ E.
The undirected graph G is connected, if for every pair of vertices u, v there exists a path
from u to v. If a graph is not connected, the vertices of the graph can be divided into
connected components. Two vertices are in the same connected component iff they are
connected by a path.

A tree is a connected acyclic graph. A spanning tree of a graph G = (V, E) is a tree that contains all vertices of V and is a subgraph of G. A single graph can have multiple spanning trees.

Lemma 1: Let T be a spanning tree of a graph G. Then

1. Any two vertices in T are connected by a unique simple path.
2. If any edge is removed from T, then T becomes disconnected.
3. If we add any edge into T, then the new graph will contain a cycle.
4. The number of edges in T is n - 1.

Minimum Spanning Trees (MST):

A spanning tree for a connected graph is a tree whose vertex set is the same as the vertex set
of the given graph, and whose edge set is a subset of the edge set of the given graph. i.e.,
any connected graph will have a spanning tree.

Weight of a spanning tree w (T) is the sum of weights of all edges in T. The Minimum
spanning tree (MST) is a spanning tree with the smallest possible weight.
[Figure: a graph G, a weighted graph G, and the minimal spanning tree obtained from the weighted graph G.]

Examples:
To explain the Minimum Spanning Tree, let's consider a few real-world examples:
1. One practical application of an MST is in the design of a network. For instance, a group of individuals, separated by varying distances, wish to be connected together in a telephone network. Although an MST cannot do anything about the distance from one connection to another, it can be used to determine the least-cost paths with no cycles in this network, thereby connecting everyone at a minimum cost.
2. Another useful application of MST would be finding airline routes. The vertices of
the graph would represent cities, and the edges would represent routes between the
cities. Obviously, the further one has to travel, the more it will cost, so MST can be
applied to optimize airline routes by finding the least costly paths with no cycles.
To explain how to find a Minimum Spanning Tree, we will look at two algorithms: the
Kruskal algorithm and the Prim algorithm. Both algorithms differ in their methodology, but
both eventually end up with the MST. Kruskal's algorithm uses edges, and Prim’s algorithm
uses vertex connections in determining the MST.

Kruskal’s Algorithm

This is a greedy algorithm. A greedy algorithm chooses a local optimum at each step (i.e., picking an edge with the least weight that can extend the MST).
Kruskal's algorithm works as follows: Take a graph with 'n' vertices, keep on adding the
shortest (least cost) edge, while avoiding the creation of cycles, until (n - 1) edges have been
added. Sometimes two or more edges may have the same cost. The order in which the edges
are chosen, in this case, does not matter. Different MSTs may result, but they will all have
the same total cost, which will always be the minimum cost.
Algorithm:
The algorithm for finding the MST using Kruskal's method is as follows:

Algorithm Kruskal(E, cost, n, t)
// E is the set of edges in G. G has n vertices. cost[u, v] is the
// cost of edge (u, v). t is the set of edges in the minimum-cost
// spanning tree. The final cost is returned.
{
    Construct a heap out of the edge costs using Heapify;
    for i := 1 to n do parent[i] := -1;   // Each vertex is in a different set.
    i := 0; mincost := 0.0;
    while ((i < n - 1) and (heap not empty)) do
    {
        Delete a minimum-cost edge (u, v) from the heap
        and re-heapify using Adjust;
        j := Find(u); k := Find(v);
        if (j ≠ k) then
        {
            i := i + 1;
            t[i, 1] := u; t[i, 2] := v;
            mincost := mincost + cost[u, v];
            Union(j, k);
        }
    }
    if (i ≠ n - 1) then write ("no spanning tree");
    else return mincost;
}
Running time:

 The number of Finds is at most 2e, and the number of Unions at most n - 1. Including the initialization time for the trees, this part of the algorithm has a complexity that is just slightly more than O(n + e).
 We can add at most n - 1 edges to tree T. So, the total time for operations on T is O(n).

Summing up the various components of the computing times, we get O(n + e log e) as the asymptotic complexity.
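A minimal Python sketch of Kruskal's method is given below. It replaces the heap of the pseudocode with a pre-sorted edge list and uses a simple union-find (disjoint set) structure for the Find and Union operations; both substitutions preserve the behaviour described above.

def kruskal(n, edges):
    # edges is a list of (cost, u, v) triples; vertices are 1..n.
    parent = list(range(n + 1))          # each vertex is in its own set

    def find(i):                         # representative of i's set
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    t, mincost = [], 0
    for cost, u, v in sorted(edges):     # non-decreasing cost order
        j, k = find(u), find(v)
        if j != k:                       # (u, v) joins two components
            t.append((u, v))
            mincost += cost
            parent[j] = k                # union of the two sets
        if len(t) == n - 1:
            break
    if len(t) != n - 1:
        raise ValueError("no spanning tree")
    return mincost, t

On the graph of Example 1 below, the sketch accepts the edges of cost 10, 15, 20, 25 and 35, rejects the edge (1, 4) of cost 30, and returns total cost 105.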

Example 1:

[Figure: a weighted graph on vertices 1–6 with the ten edge costs listed in the table below.]
Arrange all the edges in the increasing order of their costs:

Cost 10 15 20 25 30 35 40 45 50 55
Edge (1,2) (3,6) (4,6) (2,6) (1,4) (3,5) (2,5) (1,5) (2,3) (5,6)

The edge set T together with the vertices of G define a graph that has up to n connected
components. Let us represent each component by a set of vertices in it. These vertex sets are
disjoint. To determine whether the edge (u, v) creates a cycle, we need to check whether u
and v are in the same vertex set. If so, then a cycle is created. If not then no cycle is created.
Hence two Finds on the vertex sets suffice. When an edge is included in T, two components
are combined into one and a union is to be performed on the two sets.
Edge     Cost   Edge Sets                        Remarks
—        —      {1}, {2}, {3}, {4}, {5}, {6}     initially, each vertex is in its own set
(1, 2)   10     {1, 2}, {3}, {4}, {5}, {6}       vertices 1 and 2 are in different sets, so the edge is included
(3, 6)   15     {1, 2}, {3, 6}, {4}, {5}         vertices 3 and 6 are in different sets, so the edge is included
(4, 6)   20     {1, 2}, {3, 4, 6}, {5}           vertices 4 and 6 are in different sets, so the edge is included
(2, 6)   25     {1, 2, 3, 4, 6}, {5}             vertices 2 and 6 are in different sets, so the edge is included
(1, 4)   30     rejected                         vertices 1 and 4 are in the same set, so the edge is rejected
(3, 5)   35     {1, 2, 3, 4, 5, 6}               vertices 3 and 5 are in different sets, so the edge is included

MINIMUM-COST SPANNING TREES: PRIM'S ALGORITHM


A given graph can have many spanning trees. From these, we have to select the cheapest one. This tree is called the minimal cost spanning tree.

A minimal cost spanning tree is built on a connected undirected graph G in which each edge is labeled with a number (edge labels may signify lengths, weights, or costs). A minimal cost spanning tree is a spanning tree for which the sum of the edge labels is as small as possible.

A slight modification of the spanning tree algorithm yields a very simple algorithm for finding an MST. In the spanning tree algorithm, any vertex not in the tree but connected to it by an edge can be added. To find a minimal cost spanning tree, we must be selective: we must always add a new vertex for which the cost of the new edge is as small as possible.

This simple modified spanning tree algorithm is called Prim's algorithm for finding a minimal cost spanning tree.

Prim's algorithm is an example of a greedy algorithm.

Algorithm Prim(E, cost, n, t)
// E is the set of edges in G. cost[1:n, 1:n] is the cost
// adjacency matrix of an n-vertex graph such that cost[i, j] is
// either a positive real number or ∞ if no edge (i, j) exists.
// A minimum spanning tree is computed and stored as a set of
// edges in the array t[1:n-1, 1:2]. (t[i, 1], t[i, 2]) is an edge in
// the minimum-cost spanning tree. The final cost is returned.
{
    Let (k, l) be an edge of minimum cost in E;
    mincost := cost[k, l];
    t[1, 1] := k; t[1, 2] := l;
    for i := 1 to n do               // Initialize near.
        if (cost[i, l] < cost[i, k]) then near[i] := l;
        else near[i] := k;
    near[k] := near[l] := 0;
    for i := 2 to n - 1 do           // Find n - 2 additional edges for t.
    {
        Let j be an index such that near[j] ≠ 0 and
        cost[j, near[j]] is minimum;
        t[i, 1] := j; t[i, 2] := near[j];
        mincost := mincost + cost[j, near[j]];
        near[j] := 0;
        for k := 1 to n do           // Update near[].
            if ((near[k] ≠ 0) and (cost[k, near[k]] > cost[k, j]))
                then near[k] := j;
    }
    return mincost;
}

Running time:
We do the same set of operations with dist as in Dijkstra's algorithm (initialize the structure, m times decrease a value, n - 1 times select the minimum). Therefore, we get O(n²) time when we implement dist with an array, and O((n + m) log n) when we implement it with a heap. For each vertex u in the graph we dequeue it and check all its neighbors in O(1 + deg(u)) time.
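A minimal heap-based Python sketch of Prim's algorithm is shown below; it keeps the frontier in a priority queue instead of the near[] array of the pseudocode, which corresponds to the heap variant mentioned above.

import heapq

def prim(n, adj, start=1):
    # adj[u] is a list of (cost, v) pairs of an undirected graph
    # on vertices 1..n.
    visited = [False] * (n + 1)
    heap = [(0, start)]            # (cost of the connecting edge, vertex)
    mincost, taken = 0, 0
    while heap and taken < n:
        cost, u = heapq.heappop(heap)
        if visited[u]:
            continue               # stale entry: u is already in the tree
        visited[u] = True          # add u to the tree
        mincost += cost
        taken += 1
        for c, v in adj[u]:        # edges leaving the tree through u
            if not visited[v]:
                heapq.heappush(heap, (c, v))
    return mincost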
EXAMPLE 1:

Use Prim's algorithm to find a minimal spanning tree for the graph shown below, starting with vertex A.

[Figure: a weighted graph on vertices A–G; the edge costs, which can also be read off from the trace below, include AB = 3, AC = 6, BC = 2, BD = 4, CD = 1, CE = 4, CF = 2, DE = 2, DG = 4, EG = 1 and FG = 1.]

The stepwise progress of Prim's algorithm is as follows:

Step 1:
Vertex  A  B  C  D  E  F  G
Status  0  1  1  1  1  1  1
Dist.   0  3  6  ∞  ∞  ∞  ∞
Next    *  A  A  A  A  A  A

Step 2:
Vertex  A  B  C  D  E  F  G
Status  0  0  1  1  1  1  1
Dist.   0  3  2  4  ∞  ∞  ∞
Next    *  A  B  B  A  A  A

Step 3:
Vertex  A  B  C  D  E  F  G
Status  0  0  0  1  1  1  1
Dist.   0  3  2  1  4  2  ∞
Next    *  A  B  C  C  C  A

Step 4:
Vertex  A  B  C  D  E  F  G
Status  0  0  0  0  1  1  1
Dist.   0  3  2  1  2  2  4
Next    *  A  B  C  D  C  D

Step 5:
Vertex  A  B  C  D  E  F  G
Status  0  0  0  0  1  0  1
Dist.   0  3  2  1  2  2  1
Next    *  A  B  C  D  C  E

Step 6:
Vertex  A  B  C  D  E  F  G
Status  0  0  0  0  0  1  0
Dist.   0  3  2  1  2  1  1
Next    *  A  B  C  D  G  E

Step 7:
Vertex  A  B  C  D  E  F  G
Status  0  0  0  0  0  0  0
Dist.   0  3  2  1  2  1  1
Next    *  A  B  C  D  G  E

GRAPH ALGORITHMS
Basic Definitions:
 A graph G is a pair (V, E), where V is a finite set (the set of vertices) and E is a finite set of pairs from V (the set of edges). We will often denote n := |V|, m := |E|.
 A graph G can be directed, if E consists of ordered pairs, or undirected, if E consists of unordered pairs. If (u, v) ∈ E, then the vertices u and v are adjacent.
 We can assign a weight function to the edges: wG(e) is the weight of edge e ∈ E. A graph with such a function assigned is called a weighted graph.
 The degree of a vertex v is the number of vertices u for which (u, v) ∈ E (denoted deg(v)). The number of incoming edges to a vertex v is called the in-degree of the vertex (denoted indeg(v)). The number of outgoing edges from a vertex is called the out-degree (denoted outdeg(v)).

Representation of Graphs:

Consider a graph G = (V, E), where V = {v1, v2, . . . , vn}.

An adjacency matrix represents the graph as an n × n matrix A = (ai,j), where ai,j = 1 if (vi, vj) ∈ E and ai,j = 0 otherwise.

The matrix is symmetric in the case of an undirected graph, while it may be asymmetric if the graph is directed.

We may consider various modifications. For example, for weighted graphs we may set ai,j = w(vi, vj) if (vi, vj) ∈ E and ai,j = default otherwise, where default is some sensible value based on the meaning of the weight function (for example, if the weight function represents length, then default can be ∞, meaning a value larger than any other value).

Adjacency List: An array Adj[1 . . . n] of pointers, where for 1 ≤ v ≤ n, Adj[v] points to a linked list containing the vertices which are adjacent to v (i.e., the vertices that can be reached from v by a single edge). If the edges have weights, then these weights may also be stored in the linked list elements.

Paths and Cycles:

A path is a sequence of vertices (v1, v2, . . . , vk), where for all i, (vi, vi+1) ∈ E. A path is simple if all vertices in the path are distinct.

A (simple) cycle is a sequence of vertices (v1, v2, . . . , vk, vk+1 = v1), where for all i, (vi, vi+1) ∈ E and all vertices in the cycle are distinct except for the pair v1, vk+1.

Techniques for graphs:
Given a graph G = (V, E) and a vertex v in V(G), traversal can be done in two ways:
1. Depth first search
2. Breadth first search

Connected Component:

A connected component of a graph can be obtained by using BFST (breadth first search and traversal) or DFST (depth first search and traversal); the traversal also yields a spanning tree of the component.

BFST (Breadth first search and traversal):

In BFS we start at a vertex v and mark it as reached (visited). The vertex v is at this time said to be unexplored. A vertex is said to have been explored when all vertices adjacent from it have been visited. All unvisited vertices adjacent from v are visited next. The first vertex on this list is the next to be explored. Exploration continues until no unexplored vertex is left. These operations can be performed by using a queue.

The subgraph traversed in this way is a connected component, and the tree formed is a spanning tree. Spanning trees obtained using BFS are called breadth first spanning trees.
Algorithm BFS(v)
// A BFS of G begins at vertex v.
// For any node i, visited[i] = 1 if i has already been visited.
// The graph G and the array visited[] are global.
// q is a queue of unexplored vertices, initially empty.
{
    u := v;
    visited[v] := 1;
    repeat
    {
        for all vertices w adjacent from u do
        {
            if (visited[w] = 0) then
            {
                Add w to q;            // w is unexplored.
                visited[w] := 1;
            }
        }
        if q is empty then return;     // No unexplored vertex.
        Delete u from q;               // Get the first unexplored vertex.
    } until (false);
}
For a graph G with n vertices and e edges, if the nodes are stored in an adjacency list:

T(n, e) = Θ(n + e)
S(n, e) = Θ(n)

If the nodes are stored in an adjacency matrix:

T(n, e) = Θ(n²)
S(n, e) = Θ(n)
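The same traversal can be written compactly in Python; this sketch returns the order in which vertices are reached, and adj is assumed to map each vertex to the list of vertices adjacent from it.

from collections import deque

def bfs(adj, v):
    visited = {v}
    q = deque([v])            # queue of reached but unexplored vertices
    order = []
    while q:
        u = q.popleft()       # explore the earliest reached vertex
        order.append(u)
        for w in adj[u]:
            if w not in visited:   # w is unexplored
                visited.add(w)
                q.append(w)
    return order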

DFST (Depth first search and traversal):

DFS differs from BFS: the exploration of a vertex v is suspended as soon as a new vertex w adjacent from it is reached. The exploration of the new vertex w then begins; when this new vertex has been fully explored, the exploration of v continues. In other words, exploration always proceeds from the most recently reached vertex that still has unvisited adjacent vertices.

Algorithm DFS(v)
// A DFS of G begins at vertex v.
// Initially the array visited[] is set to zero.
// This algorithm visits all vertices reachable from v.
// The graph G and the array visited[] are global.
{
    visited[v] := 1;
    for each vertex w adjacent from v do
        if (visited[w] = 0) then DFS(w);
}

For a graph G with n vertices and e edges, if the nodes are stored in an adjacency list:

T(n, e) = Θ(n + e)
S(n, e) = Θ(n)

If the nodes are stored in an adjacency matrix:

T(n, e) = Θ(n²)
S(n, e) = Θ(n)
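A corresponding recursive Python sketch of DFS, with adj as in the BFS example:

def dfs(adj, v, visited=None):
    if visited is None:
        visited = set()
    visited.add(v)
    for w in adj[v]:
        if w not in visited:
            dfs(adj, w, visited)   # suspend v, explore w first
    return visited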

Bi-connected Components:

A graph G is biconnected iff (if and only if) it contains no articulation point.

A vertex v in a connected graph G is an articulation point iff the deletion of vertex v, together with all edges incident to v, disconnects the graph into two or more non-empty components.

The presence of articulation points in a connected graph can be an undesirable (unwanted) feature in many cases. For example, if G1 represents a communication network, with the vertices representing communication stations and the edges representing communication lines, then the failure of a communication station i that is an articulation point results in a loss of communication between the stations in the separated components.
There is an efficient algorithm to test whether a connected graph is biconnected. In the case of graphs that are not biconnected, this algorithm will identify all the articulation points. Once it has been determined that a connected graph G is not biconnected, it may be desirable to determine a set of edges whose inclusion makes the graph biconnected.

UNIT – III
Dynamic Programming
Dynamic programming is a name coined by Richard Bellman in 1955. Dynamic programming, like the greedy method, is a powerful algorithm design technique that can be used when the solution to the problem may be viewed as the result of a sequence of decisions. In the greedy method we make irrevocable decisions one at a time, using a greedy criterion. However, in dynamic programming we examine the decision sequence to see whether an optimal decision sequence contains optimal decision subsequences.

When optimal decision sequences contain optimal decision subsequences, we can establish
recurrence equations, called dynamic-programming recurrence equations that enable us to solve the
problem in an efficient way.

Dynamic programming is based on the principle of optimality (also coined by Bellman). The principle of optimality states that whatever the initial state and initial decision are, the remaining decision sequence must constitute an optimal decision sequence with regard to the state resulting from the first decision. The principle implies that an optimal decision sequence is comprised of optimal decision subsequences. Since the principle of optimality may not hold for some formulations of some problems, it is necessary to verify that it does hold for the problem being solved. Dynamic programming cannot be applied when this principle does not hold.

The steps in a dynamic programming solution are:

1. Verify that the principle of optimality holds.
2. Set up the dynamic-programming recurrence equations.
3. Solve the dynamic-programming recurrence equations for the value of the optimal solution.
4. Perform a traceback step in which the solution itself is constructed.

Dynamic programming differs from the greedy method in that the greedy method produces only one feasible solution, which may or may not be optimal, while dynamic programming solves each sub-problem at most once, and one of the resulting solutions is guaranteed to be optimal. Optimal solutions to sub-problems are retained in a table, thereby avoiding the work of recomputing the answer every time a sub-problem is encountered.

The divide and conquer principle solves a large problem by breaking it up into smaller problems which can be solved independently. In dynamic programming this principle is carried to an extreme: when we don't know exactly which smaller problems to solve, we simply solve them all, then store the answers away in a table to be used later in solving larger problems. Care is to be taken to avoid recomputing previously computed values, otherwise the recursive program will have prohibitive complexity. In some cases the solution can be improved, and in other cases the dynamic programming technique is the best approach.

Two difficulties may arise in any application of dynamic programming:


1. It may not always be possible to combine the solutions of smaller problems to form the solution of a larger one.
2. The number of small problems to solve may be unacceptably large.

It has not been characterized precisely which problems can be effectively solved with dynamic programming; there are many hard problems for which it does not seem to be applicable, as well as many easy problems for which it is less efficient than standard algorithms.

5.1 MULTISTAGE GRAPHS

A multistage graph G = (V, E) is a directed graph in which the vertices are partitioned into k ≥ 2 disjoint sets Vi, 1 ≤ i ≤ k. In addition, if <u, v> is an edge in E, then u ∈ Vi and v ∈ Vi+1 for some i, 1 ≤ i < k.

Let vertex s be the source and t the sink. Let c(i, j) be the cost of edge <i, j>. The cost of a path from s to t is the sum of the costs of the edges on the path. The multistage graph problem is to find a minimum-cost path from s to t. Each set Vi defines a stage in the graph. Because of the constraints on E, every path from s to t starts in stage 1, goes to stage 2, then to stage 3, then to stage 4, and so on, and eventually terminates in stage k.
A dynamic programming formulation for a k-stage graph problem is obtained by first noticing that every s-to-t path is the result of a sequence of k - 2 decisions. The ith decision involves determining which vertex in Vi+1, 1 ≤ i ≤ k - 2, is to be on the path. Let cost(i, j) be the cost of a minimum-cost path from vertex j in Vi to vertex t. Then, using the forward approach, we obtain:

cost(i, j) = min {c(j, l) + cost(i + 1, l)},  taken over all l ∈ Vi+1 with <j, l> ∈ E

ALGORITHM:
Algorithm Fgraph(G, k, n, p)
// The input is a k-stage graph G = (V, E) with n vertices
// indexed in order of stages. E is a set of edges and c[i, j]
// is the cost of (i, j). p[1 : k] is a minimum-cost path.
{
    cost[n] := 0.0;
    for j := n - 1 to 1 step -1 do
    {   // compute cost[j]
        let r be a vertex such that (j, r) is an edge of G
        and c[j, r] + cost[r] is minimum;
        cost[j] := c[j, r] + cost[r];
        d[j] := r;
    }
    p[1] := 1; p[k] := n;            // Find a minimum-cost path.
    for j := 2 to k - 1 do
        p[j] := d[p[j - 1]];
}
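The forward approach can be sketched in Python as follows. Edges are given as a dictionary c mapping (j, r) to the edge cost; scanning the whole dictionary for each vertex keeps the sketch short, though a per-vertex adjacency list would be more efficient.

def multistage_forward(n, c):
    # Vertices 1..n are indexed in order of stages; n is the sink t.
    INF = float("inf")
    cost = [INF] * (n + 1)
    d = [0] * (n + 1)          # d[j]: next vertex on a best j-to-t path
    cost[n] = 0.0
    for j in range(n - 1, 0, -1):
        # choose r minimizing c[(j, r)] + cost[r]
        for (a, r), w in c.items():
            if a == j and w + cost[r] < cost[j]:
                cost[j], d[j] = w + cost[r], r
    path, j = [1], 1           # recover the minimum-cost path
    while j != n:
        j = d[j]
        path.append(j)
    return cost[1], path

For the five-stage graph of Example 1 below, this returns cost 16 and one of the two optimal paths (ties are broken by the order in which edges are examined).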
The multistage graph problem can also be solved using the backward approach. Let bp(i, j) be a minimum-cost path from vertex s to vertex j in Vi, and let Bcost(i, j) be the cost of bp(i, j). From the backward approach we obtain:

Bcost(i, j) = min {Bcost(i - 1, l) + c(l, j)},  taken over all l ∈ Vi-1 with <l, j> ∈ E

Algorithm Bgraph(G, k, n, p)
// Same function as Fgraph.
{
    Bcost[1] := 0.0;
    for j := 2 to n do
    {   // Compute Bcost[j].
        Let r be such that (r, j) is an edge of G
        and Bcost[r] + c[r, j] is minimum;
        Bcost[j] := Bcost[r] + c[r, j];
        d[j] := r;
    }
    // Find a minimum-cost path.
    p[1] := 1; p[k] := n;
    for j := k - 1 to 2 step -1 do p[j] := d[p[j + 1]];
}

EXAMPLE 1:

Find the minimum-cost path from s to t in the multistage graph of five stages shown below. Do this first using the forward approach and then using the backward approach.

[Figure: a five-stage graph on vertices 1–12 with s = 1 and t = 12; the edge costs are those used in the computation below.]

FORWARD APPROACH:

We use the following equation to find the minimum-cost path from s to t:

cost(i, j) = min {c(j, l) + cost(i + 1, l)},  l ∈ Vi+1, <j, l> ∈ E

cost(1, 1) = min {c(1, 2) + cost(2, 2), c(1, 3) + cost(2, 3), c(1, 4) + cost(2, 4), c(1, 5) + cost(2, 5)}
           = min {9 + cost(2, 2), 7 + cost(2, 3), 3 + cost(2, 4), 2 + cost(2, 5)}

Now, first starting with:

cost(2, 2) = min {c(2, 6) + cost(3, 6), c(2, 7) + cost(3, 7), c(2, 8) + cost(3, 8)}
           = min {4 + cost(3, 6), 2 + cost(3, 7), 1 + cost(3, 8)}

cost(3, 6) = min {c(6, 9) + cost(4, 9), c(6, 10) + cost(4, 10)}
           = min {6 + cost(4, 9), 5 + cost(4, 10)}

cost(4, 9) = min {c(9, 12) + cost(5, 12)} = min {4 + 0} = 4

cost(4, 10) = min {c(10, 12) + cost(5, 12)} = 2

Therefore, cost(3, 6) = min {6 + 4, 5 + 2} = 7

cost(3, 7) = min {c(7, 9) + cost(4, 9), c(7, 10) + cost(4, 10)}
           = min {4 + cost(4, 9), 3 + cost(4, 10)}

cost(4, 9) = min {c(9, 12) + cost(5, 12)} = min {4 + 0} = 4

cost(4, 10) = min {c(10, 12) + cost(5, 12)} = min {2 + 0} = 2

Therefore, cost(3, 7) = min {4 + 4, 3 + 2} = min {8, 5} = 5

cost(3, 8) = min {c(8, 10) + cost(4, 10), c(8, 11) + cost(4, 11)}
           = min {5 + cost(4, 10), 6 + cost(4, 11)}

cost(4, 11) = min {c(11, 12) + cost(5, 12)} = 5

Therefore, cost(3, 8) = min {5 + 2, 6 + 5} = min {7, 11} = 7

Therefore, cost(2, 2) = min {4 + 7, 2 + 5, 1 + 7} = min {11, 7, 8} = 7

cost(2, 3) = min {c(3, 6) + cost(3, 6), c(3, 7) + cost(3, 7)}
           = min {2 + cost(3, 6), 7 + cost(3, 7)}
           = min {2 + 7, 7 + 5} = min {9, 12} = 9

cost(2, 4) = min {c(4, 8) + cost(3, 8)} = min {11 + 7} = 18

cost(2, 5) = min {c(5, 7) + cost(3, 7), c(5, 8) + cost(3, 8)}
           = min {11 + 5, 8 + 7} = min {16, 15} = 15

Therefore, cost(1, 1) = min {9 + 7, 7 + 9, 3 + 18, 2 + 15}
                      = min {16, 16, 21, 17} = 16

The minimum-cost path has cost 16. The path is

1 → 2 → 7 → 10 → 12   or   1 → 3 → 6 → 10 → 12

All pairs shortest paths

In the all-pairs shortest path problem, we are to find a shortest path between every pair of vertices in a directed graph G. That is, for every pair of vertices (i, j), we are to find a shortest path from i to j as well as one from j to i. These two paths are the same when G is undirected.

When no edge has a negative length, the all-pairs shortest path problem may be solved by using Dijkstra's greedy single-source algorithm n times, once with each of the n vertices as the source vertex.

The all-pairs shortest path problem is to determine a matrix A such that A(i, j) is the length of a shortest path from i to j. The matrix A can be obtained by solving n single-source problems using the algorithm ShortestPaths. Since each application of this procedure requires O(n²) time, the matrix A can be obtained in O(n³) time.

The dynamic programming solution, called Floyd's algorithm, runs in O(n³) time. Floyd's algorithm works even when the graph has negative-length edges (provided there are no negative-length cycles).

The shortest i-to-j path in G, i ≠ j, originates at vertex i, goes through some intermediate vertices (possibly none) and terminates at vertex j. If k is an intermediate vertex on this shortest path, then the subpaths from i to k and from k to j must be shortest paths from i to k and k to j, respectively. Otherwise, the i-to-j path is not of minimum length, so the principle of optimality holds. Letting Ak(i, j) represent the length of a shortest path from i to j going through no vertex of index greater than k, we obtain:

Ak(i, j) = min {Ak-1(i, j), Ak-1(i, k) + Ak-1(k, j)},  k ≥ 1, with A0(i, j) = cost(i, j)

Algorithm AllPaths(cost, A, n)
// cost[1:n, 1:n] is the cost adjacency matrix of a graph with
// n vertices; A[i, j] is the cost of a shortest path from vertex
// i to vertex j. cost[i, i] = 0.0, for 1 ≤ i ≤ n.
{
    for i := 1 to n do
        for j := 1 to n do
            A[i, j] := cost[i, j];     // copy cost into A
    for k := 1 to n do
        for i := 1 to n do
            for j := 1 to n do
                A[i, j] := min (A[i, j], A[i, k] + A[k, j]);
}

Complexity Analysis: A dynamic programming algorithm based on this recurrence involves calculating n + 1 matrices, each of size n × n. Therefore, the algorithm has a complexity of O(n³).
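The recurrence translates directly into three nested loops in Python; cost is assumed to be an n × n matrix with 0 on the diagonal and float('inf') where no edge exists.

def all_paths(cost):
    n = len(cost)
    A = [row[:] for row in cost]   # A0: copy cost into A
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # best i-to-j path through intermediates of index <= k
                A[i][j] = min(A[i][j], A[i][k] + A[k][j])
    return A

With the matrix of the worked example below (INF = float('inf')), all_paths([[0, 4, 11], [6, 0, 2], [3, INF, 0]]) yields [[0, 4, 6], [5, 0, 2], [3, 7, 0]], which is A(3).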

General formula: Ak(i, j) = min {Ak-1(i, j), Ak-1(i, k) + Ak-1(k, j)}, 1 ≤ k ≤ n.

Solve the problem for the different values k = 1, 2 and 3 on a three-vertex graph whose cost adjacency matrix (read off from the c(i, j) values used below) is:

0   4   11
6   0   2
3   ∞   0


Step 1: Solving the equation for k = 1;

A1 (1, 1) = min {(Ao (1, 1) + Ao (1, 1)), c (1, 1)} = min {0 + 0, 0} =0

A1 (1, 2) = min {(Ao (1, 1) + Ao (1, 2)), c (1, 2)} = min {(0 + 4), 4} =4

A1 (1, 3) = min {(Ao (1, 1) + Ao (1, 3)), c (1, 3)} = min {(0 + 11), 11} =11

A1 (2, 1) = min {(Ao (2, 1) + Ao (1, 1)), c (2, 1)} = min {(6 + 0), 6} =6

A1 (2, 2) = min {(Ao (2, 1) + Ao (1, 2)), c (2, 2)} = min {(6 + 4), 0)} =0

A1 (2, 3) = min {(Ao (2, 1) + Ao (1, 3)), c (2, 3)} = min {(6 + 11), 2} =2

A1 (3, 1) = min {(Ao (3, 1) + Ao (1, 1)), c (3, 1)} = min {(3 + 0), 3} =3

A1 (3, 2) = min {(Ao (3, 1) + Ao (1, 2)), c (3, 2)} = min {(3 + 4), ∞} = 7

A1 (3, 3) = min {(Ao (3, 1) + Ao (1, 3)), c (3, 3)} = min {(3 + 11), 0} =0

Step 2: Solving the equation for k = 2;

A2 (1, 1) = min {(A1 (1, 2) + A1 (2, 1)), A1 (1, 1)} = min {(4 + 6), 0} = 0

A2 (1, 2) = min {(A1 (1, 2) + A1 (2, 2)), A1 (1, 2)} = min {(4 + 0), 4} = 4

A2 (1, 3) = min {(A1 (1, 2) + A1 (2, 3)), A1 (1, 3)} = min {(4 + 2), 11} = 6

A2 (2, 1) = min {(A1 (2, 2) + A1 (2, 1)), A1 (2, 1)} = min {(0 + 6), 6} = 6

A2 (2, 2) = min {(A1 (2, 2) + A1 (2, 2)), A1 (2, 2)} = min {(0 + 0), 0} = 0

A2 (2, 3) = min {(A1 (2, 2) + A1 (2, 3)), A1 (2, 3)} = min {(0 + 2), 2} = 2

A2 (3, 1) = min {(A1 (3, 2) + A1 (2, 1)), A1 (3, 1)} = min {(7 + 6), 3} = 3

A2 (3, 2) = min {(A1 (3, 2) + A1 (2, 2)), A1 (3, 2)} = min {(7 + 0), 7} = 7

A2 (3, 3) = min {(A1 (3, 2) + A1 (2, 3)), A1 (3, 3)} = min {(7 + 2), 0} = 0
A(2) =
0   4   6
6   0   2
3   7   0

Step 3: Solving the equation for k = 3;

A3 (1, 1) = min {(A2 (1, 3) + A2 (3, 1)), A2 (1, 1)} = min {(6 + 3), 0} = 0

A3 (1, 2) = min {(A2 (1, 3) + A2 (3, 2)), A2 (1, 2)} = min {(6 + 7), 4} = 4

A3 (1, 3) = min {(A2 (1, 3) + A2 (3, 3)), A2 (1, 3)} = min {(6 + 0), 6} = 6

A3 (2, 1) = min {(A2 (2, 3) + A2 (3, 1)), A2 (2, 1)} = min {(2 + 3), 6} = 5

A3 (2, 2) = min {(A2 (2, 3) + A2 (3, 2)), A2 (2, 2)} = min {(2 + 7), 0} = 0

A3 (2, 3) = min {(A2 (2, 3) + A2 (3, 3)), A2 (2, 3)} = min {(2 + 0), 2} = 2

A3 (3, 1) = min {(A2 (3, 3) + A2 (3, 1)), A2 (3, 1)} = min {(0 + 3), 3} = 3

A3 (3, 2) = min {(A2 (3, 3) + A2 (3, 2)), A2 (3, 2)} = min {(0 + 7), 7} = 7

A3 (3, 3) = min {(A2 (3, 3) + A2 (3, 3)), A2 (3, 3)} = min {(0 + 0), 0} = 0

A(3) =
0   4   6
5   0   2
3   7   0
The Single Source Shortest-Path Problem: DIJKSTRA'S ALGORITHM

In the previously studied graphs, the edge labels were called costs, but here we think of them as lengths. In a labeled graph, the length of a path is defined to be the sum of the lengths of its edges.

In the single-source, all-destinations shortest path problem, we must find a shortest path from a given source vertex to each of the vertices (called destinations) in the graph to which there is a path.

Dijkstra's algorithm is similar to Prim's algorithm for finding minimal spanning trees. Dijkstra's algorithm takes a labeled graph and a pair of vertices P and Q, and finds the shortest path between them (or one of the shortest paths, if there is more than one). The principle of optimality is the basis for Dijkstra's algorithm. Dijkstra's algorithm does not work for negative edges at all.
The figure lists the shortest paths from vertex 1 for a five-vertex weighted digraph.

[Figure: a five-vertex weighted digraph and the tree of shortest paths from vertex 1.]

Algorithm:

Algorithm ShortestPaths(v, cost, dist, n)
// dist[j], 1 ≤ j ≤ n, is set to the length of the shortest path
// from vertex v to vertex j in the digraph G with n vertices.
// cost adjacency matrix cost[1:n, 1:n].
{
    for i := 1 to n do
    {
        S[i] := false;                 // Initialize S.
        dist[i] := cost[v, i];
    }
    S[v] := true; dist[v] := 0.0;      // Put v in S.
    for num := 2 to n - 1 do
    {
        // Determine n - 1 paths from v.
        Choose u from among those vertices not in S
        such that dist[u] is minimum;
        S[u] := true;                  // Put u in S.
        for (each w adjacent to u with S[w] = false) do
            if (dist[w] > (dist[u] + cost[u, w])) then   // Update distances.
                dist[w] := dist[u] + cost[u, w];
    }
}

Running time:

The running time depends on the implementation of the data structure for dist.

 Build a structure with n elements: A.
 At most m = |E| times, decrease the value of an item: m · B.
 n times, select the smallest value: n · C.
 For an array: A = O(n), B = O(1), C = O(n), which gives O(n²) total.
 For a heap: A = O(n), B = O(log n), C = O(log n), which gives O(n + m log n) total.
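The heap version referred to above can be sketched in Python as follows; adj[u] is assumed to be a list of (w, cost) pairs, and the function returns the dist values for all vertices reachable from v.

import heapq

def shortest_paths(adj, v):
    dist = {v: 0}
    done = set()                       # the set S of the pseudocode
    heap = [(0, v)]
    while heap:
        d, u = heapq.heappop(heap)     # closest vertex not yet in S
        if u in done:
            continue                   # stale heap entry
        done.add(u)                    # put u in S
        for w, c in adj[u]:
            if d + c < dist.get(w, float("inf")):   # update distances
                dist[w] = d + c
                heapq.heappush(heap, (d + c, w))
    return dist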

Example 1:

[Figure: the weighted graph on vertices A–G from the Prim example; the source vertex is A.]

The problem is solved by considering the following information:

 Status[v] will be either '0', meaning that the shortest path from v to v0 has definitely been found, or '1', meaning that it hasn't.

 Dist[v] will be a number representing the length of the shortest path from v to v0 found so far.

 Next[v] will be the first vertex on the way to v0 along the shortest path found so far from v to v0.

The progress of Dijkstra's algorithm on the graph shown above is as follows:

Step 1:
Vertex  A  B  C  D  E  F  G
Status  0  1  1  1  1  1  1
Dist.   0  3  6  ∞  ∞  ∞  ∞
Next    *  A  A  A  A  A  A

Step 2:
Vertex  A  B  C  D  E  F  G
Status  0  0  1  1  1  1  1
Dist.   0  3  5  7  ∞  ∞  ∞
Next    *  A  B  B  A  A  A

Step 3:
Vertex  A  B  C  D  E  F  G
Status  0  0  0  1  1  1  1
Dist.   0  3  5  6  9  7  ∞
Next    *  A  B  C  C  C  A

Step 4:
Vertex  A  B  C  D  E  F  G
Status  0  0  0  0  1  1  1
Dist.   0  3  5  6  8  7  10
Next    *  A  B  C  D  C  D

Step 5:
Vertex  A  B  C  D  E  F  G
Status  0  0  0  0  1  0  1
Dist.   0  3  5  6  8  7  8
Next    *  A  B  C  D  C  F

Step 6:
Vertex  A  B  C  D  E  F  G
Status  0  0  0  0  0  0  1
Dist.   0  3  5  6  8  7  8
Next    *  A  B  C  D  C  F

Step 7:
Vertex  A  B  C  D  E  F  G
Status  0  0  0  0  0  0  0
Dist.   0  3  5  6  8  7  8
Next    *  A  B  C  D  C  F

0/1 KNAPSACK

We are given n objects and a knapsack. Each object i has a positive weight wi and a positive profit pi. The knapsack can carry a weight not exceeding m. Fill the knapsack so that the value of the objects in the knapsack is maximized.

A solution to the knapsack problem can be obtained by making a sequence of decisions on the variables x1, x2, . . . , xn. A decision on variable xi involves determining which of the values 0 or 1 is to be assigned to it. Let us assume that decisions on the xi are made in the order xn, xn-1, . . . , x1. Following a decision on xn, we may be in one of two possible states: the capacity remaining is m and no profit has accrued, or the capacity remaining is m - wn and a profit of pn has accrued. It is clear that the remaining decisions xn-1, . . . , x1 must be optimal with respect to the problem state resulting from the decision on xn. Otherwise, xn, . . . , x1 will not be optimal. Hence, the principle of optimality holds.

fn(m) = max {fn-1(m), fn-1(m - wn) + pn}                -- (1)

For arbitrary fi(y), i > 0, this equation generalizes to:

fi(y) = max {fi-1(y), fi-1(y - wi) + pi}                -- (2)

Equation (2) can be solved for fn(m) by beginning with the knowledge that f0(y) = 0 for all y ≥ 0 and fi(y) = -∞ for y < 0. Then f1, f2, . . . , fn can be successively computed using equation (2).

When the wi's are integers, we need to compute fi(y) only for integer y, 0 ≤ y ≤ m. Since fi(y) = -∞ for y < 0, these function values need not be computed explicitly. Since each fi can be computed from fi-1 in Θ(m) time, it takes Θ(mn) time to compute fn. When the wi's are real numbers, fi(y) is needed for real numbers y such that 0 ≤ y ≤ m, so fi cannot be explicitly computed for all y in this range. Even when the wi's are integers, the explicit Θ(mn) computation of fn may not be the most efficient computation. So, we explore an alternative method that works for both cases.

Each fi(y) is an ascending step function; i.e., there are a finite number of y's, 0 = y1 < y2 < . . . < yk, such that fi(y1) < fi(y2) < . . . < fi(yk); fi(y) = -∞ for y < y1; fi(y) = fi(yk) for y ≥ yk; and fi(y) = fi(yj) for yj ≤ y < yj+1. So, we need to compute only fi(yj), 1 ≤ j ≤ k. We use the ordered set Si = {(fi(yj), yj) | 1 ≤ j ≤ k} to represent fi(y). Each member of Si is a pair (P, W), where P = fi(yj) and W = yj. Notice that S0 = {(0, 0)}. We can compute Si+1 from Si by first computing:

Si1 = {(P + pi+1, W + wi+1) | (P, W) ∈ Si}

Now, Si+1 can be computed by merging the pairs in Si and Si1 together. Note that if Si+1 contains two pairs (Pj, Wj) and (Pk, Wk) with the property that Pj ≤ Pk and Wj ≥ Wk, then the pair (Pj, Wj) can be discarded because of equation (2). Discarding or purging rules such as this one are also known as dominance rules. Dominated tuples get purged; in the above, (Pk, Wk) dominates (Pj, Wj).
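The set-merging method, together with the purging rule, can be sketched in Python as follows; tuples whose weight exceeds the capacity m are dropped as they arise, since they can never be part of a feasible solution.

def knapsack_sets(p, w, m):
    s = [(0, 0)]                              # S0
    for pi, wi in zip(p, w):
        s1 = [(P + pi, W + wi) for (P, W) in s if W + wi <= m]
        merged = sorted(s + s1, key=lambda t: (t[1], -t[0]))
        s, best = [], -1
        for P, W in merged:                   # purge dominated tuples:
            if P > best:                      # keep a pair only if its
                s.append((P, W))              # profit beats every pair
                best = P                      # of no greater weight
    return s

For the instance of Example 1 below, knapsack_sets([1, 2, 5], [2, 3, 4], 6) returns [(0, 0), (1, 2), (2, 3), (5, 4), (6, 6)], and the last pair gives the maximum profit f3(6) = 6.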

Example 1:

Consider the knapsack instance n = 3, (w1, w2, w3) = (2, 3, 4), (p1, p2, p3) = (1, 2, 5) and m = 6.

Solution:

Initially, f0(x) = 0 for all x ≥ 0, and fi(x) = -∞ if x < 0.

fn(m) = max {fn-1(m), fn-1(m - wn) + pn}

f3(6) = max {f2(6), f2(6 - 4) + 5} = max {f2(6), f2(2) + 5}

f2(6) = max {f1(6), f1(6 - 3) + 2} = max {f1(6), f1(3) + 2}

f1(6) = max {f0(6), f0(6 - 2) + 1} = max {0, 0 + 1} = 1

f1(3) = max {f0(3), f0(3 - 2) + 1} = max {0, 0 + 1} = 1

Therefore, f2(6) = max {1, 1 + 2} = 3

f2(2) = max {f1(2), f1(2 - 3) + 2} = max {f1(2), -∞ + 2}

f1(2) = max {f0(2), f0(2 - 2) + 1} = max {0, 0 + 1} = 1

f2(2) = max {1, -∞} = 1

Finally, f3(6) = max {3, 1 + 5} = 6

Other Solution:

For the given data we have:

S0 = {(0, 0)};  S01 = {(1, 2)}

S1 = (S0 ∪ S01) = {(0, 0), (1, 2)}

Adding (p2, w2) = (2, 3) to each pair in S1 gives (0 + 2, 0 + 3) = (2, 3) and (1 + 2, 2 + 3) = (3, 5):

S11 = {(2, 3), (3, 5)}

S2 = (S1 ∪ S11) = {(0, 0), (1, 2), (2, 3), (3, 5)}

Adding (p3, w3) = (5, 4) to each pair in S2 gives (5, 4), (6, 6), (7, 7) and (8, 9):

S21 = {(5, 4), (6, 6), (7, 7), (8, 9)}

S3 = (S2 ∪ S21) = {(0, 0), (1, 2), (2, 3), (3, 5), (5, 4), (6, 6), (7, 7), (8, 9)}

By applying the dominance rule (and discarding tuples whose weight exceeds m = 6),

S3 = {(0, 0), (1, 2), (2, 3), (5, 4), (6, 6)}

From (6, 6) we can infer that the maximum profit is ∑ pi xi = 6, with weight ∑ wi xi = 6.

Reliability Design

The problem is to design a system that is composed of several devices connected in series. Let ri be the reliability of device Di (that is, ri is the probability that device i will function properly). Then the reliability of the entire system is ∏ ri. Even if the individual devices are very reliable (the ri's are very close to one), the reliability of the system may not be very good. For example, if n = 10 and ri = 0.99, 1 ≤ i ≤ 10, then ∏ ri = 0.904. Hence, it is desirable to duplicate devices. Multiple copies of the same device type are connected in parallel.

If stage i contains mi copies of device Di, then the probability that all mi have a malfunction is (1 - ri)^mi. Hence the reliability of stage i becomes 1 - (1 - ri)^mi.

The reliability of stage i is given by a function Φi(mi).

Our problem is to use device duplication to maximize the system reliability. This maximization is to be carried out under a cost constraint. Let ci be the cost of each unit of device i and let c be the maximum allowable cost of the system being designed.

Clearly, f0(x) = 1 for all x, 0 ≤ x ≤ c, and fi(x) = -∞ for all x < 0.

Let Si consist of tuples of the form (f, x), where f = fi(x). There is at most one tuple for each different x that results from a sequence of decisions on m1, m2, . . . , mn. The tuple (f1, x1) dominates (f2, x2) if f1 ≥ f2 and x1 ≤ x2; dominated tuples can be discarded from Si.

Dominance Rule:

If Si contains two pairs (f1, x1) and (f2, x2) with the property that f1 ≥ f2 and x1 ≤ x2, then (f1, x1) dominates (f2, x2), and hence by the dominance rule (f2, x2) can be discarded. Discarding or pruning rules such as the one above are known as dominance rules. Dominating tuples remain in Si, and dominated tuples are discarded from Si.

Case 1: if f1 ≤ f2 and x1 > x2, then discard (f1, x1).
Case 2: if f1 > f2 and x1 < x2, then discard (f2, x2).
Case 3: otherwise, neither tuple dominates the other and both are retained.

S2 = {(0.72, 45), (0.864, 60), (0.8928, 75)}

S31 = {(0.63, 105), (0.756, 120), (0.7812, 135)}

If the cost exceeds 105, remove those tuples. After merging and purging we obtain:

S3 = {(0.36, 65), (0.437, 80), (0.54, 85), (0.648, 100)}

The best design has a reliability of 0.648 and a cost of 100. Tracing back through the Si's, we can determine that m3 = 2, m2 = 2 and m1 = 1.

Other Solution:

According to the principle of optimality:

fn(c) = max {Φn(mn) · fn-1(c - cn mn)},  1 ≤ mn ≤ un,  with f0(x) = 1 for 0 ≤ x ≤ c.

Since we can assume each ci > 0, each mi must be in the range 1 ≤ mi ≤ ui.
TRAVELLING SALESPERSON PROBLEM

Let G = (V, E) be a directed graph with edge costs cij. The variable cij is defined such that cij > 0 for all i and j, and cij = ∞ if <i, j> ∉ E. Let |V| = n and assume n > 1. A tour of G is a directed simple cycle that includes every vertex in V. The cost of a tour is the sum of the costs of the edges on the tour. The traveling salesperson problem is to find a tour of minimum cost. The tour is to be a simple cycle that starts and ends at vertex 1.

Let g(i, S) be the length of a shortest path starting at vertex i, going through all vertices in S, and terminating at vertex 1. The function g(1, V - {1}) is the length of an optimal salesperson tour. From the principle of optimality it follows that:

g(i, S) = min {cij + g(j, S - {j})},  j ∈ S

The equation can be solved for g(1, V - {1}) if we know g(k, V - {1, k}) for all choices of k.

Complexity Analysis:

For each value of |S| there are n - 1 choices for i. The number of distinct sets S of size k not including 1 and i is C(n-2, k). Hence, the total number of g(i, S)'s to be computed before computing g(1, V - {1}) is:

∑ (n - 1) C(n-2, k),  summed over 0 ≤ k ≤ n - 2.

To calculate this sum, we use the binomial theorem: it equals (n - 1) 2^(n-2), so there are an exponential number of g(i, S)'s to calculate. Calculating one g(i, S) requires finding the minimum of at most n quantities. Therefore, the entire algorithm is Θ(n² 2^(n-2)). This is better than enumerating all n! different tours to find the best one; we have traded one exponential growth for a much smaller exponential growth. The most serious drawback of this dynamic programming solution is the space needed, which is O(n 2^n). This is too large even for modest values of n.
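The recurrence can be implemented directly by tabulating g over subsets, as in the following Python sketch (vertices are 0-indexed here, with the tour starting and ending at vertex 0):

from itertools import combinations

def tsp(c):
    n = len(c)
    # g[(i, S)]: shortest path from i through all of S, ending at 0.
    g = {(i, frozenset()): c[i][0] for i in range(1, n)}
    for size in range(1, n - 1):
        for S in map(frozenset, combinations(range(1, n), size)):
            for i in range(1, n):
                if i not in S:
                    g[(i, S)] = min(c[i][j] + g[(j, S - {j})] for j in S)
    rest = frozenset(range(1, n))
    return min(c[0][k] + g[(k, rest - {k})] for k in rest)

For the cost matrix of Example 1 below, tsp(...) returns 35, in agreement with the hand computation.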

Example 1:

For the following graph, find a minimum-cost tour for the traveling salesperson problem:

[Figure: a complete directed graph on vertices 1, 2, 3, 4.]

The cost adjacency matrix is:

0    10   15   20
5    0    9    10
6    13   0    12
8    8    9    0
Let us start the tour from vertex 1:

g(1, V - {1}) = min {c1k + g(k, V - {1, k})},  2 ≤ k ≤ n          -- (1)

More generally:

g(i, S) = min {cij + g(j, S - {j})},  j ∈ S                       -- (2)

Clearly, g(i, ∅) = ci1, 1 ≤ i ≤ n.

g(2, ∅) = c21 = 5

g(3, ∅) = c31 = 6

g(4, ∅) = c41 = 8

Using equation (2) we obtain:

g(1, {2, 3, 4}) = min {c12 + g(2, {3, 4}), c13 + g(3, {2, 4}), c14 + g(4, {2, 3})}

g(2, {3, 4}) = min {c23 + g(3, {4}), c24 + g(4, {3})}
             = min {9 + g(3, {4}), 10 + g(4, {3})}

g(3, {4}) = min {c34 + g(4, ∅)} = 12 + 8 = 20

g(4, {3}) = min {c43 + g(3, ∅)} = 9 + 6 = 15

Therefore, g(2, {3, 4}) = min {9 + 20, 10 + 15} = min {29, 25} = 25

g(3, {2, 4}) = min {c32 + g(2, {4}), c34 + g(4, {2})}

g(2, {4}) = min {c24 + g(4, ∅)} = 10 + 8 = 18

g(4, {2}) = min {c42 + g(2, ∅)} = 8 + 5 = 13

Therefore, g(3, {2, 4}) = min {13 + 18, 12 + 13} = min {31, 25} = 25

g(4, {2, 3}) = min {c42 + g(2, {3}), c43 + g(3, {2})}

g(2, {3}) = min {c23 + g(3, ∅)} = 9 + 6 = 15

g(3, {2}) = min {c32 + g(2, ∅)} = 13 + 5 = 18

Therefore, g(4, {2, 3}) = min {8 + 15, 9 + 18} = min {23, 27} = 23

g(1, {2, 3, 4}) = min {c12 + g(2, {3, 4}), c13 + g(3, {2, 4}), c14 + g(4, {2, 3})}
                = min {10 + 25, 15 + 25, 20 + 23} = min {35, 40, 43} = 35

The optimal tour for the graph has length 35.

The optimal tour is 1 → 2 → 4 → 3 → 1.

OPTIMAL BINARY SEARCH TREE

Let us assume that the given set of identifiers is {a1, . . . , an} with a1 < a2 < . . . < an. Let p(i) be the probability with which we search for ai. Let q(i) be the probability that the identifier x being searched for is such that ai < x < ai+1, 0 ≤ i ≤ n (assume a0 = -∞ and an+1 = +∞). We have to arrange the identifiers in a binary search tree in a way that minimizes the expected total access time.

In a binary search tree, the number of comparisons needed to access an element at depth d is d + 1, so if ai is placed at depth di, then we want to minimize ∑ p(i) (di + 1).

Let P(i) be the probability with which we shall be searching for ai. Let Q(i) be the probability of an unsuccessful search. Every internal node represents a point where a successful search may terminate. Every external node represents a point where an unsuccessful search may terminate.

The expected cost contribution for the internal node for ai is:

P(i) × level(ai)

An unsuccessful search terminates at an external node Ei. Hence the cost contribution for this node is:

Q(i) × (level(Ei) - 1)

The expected cost of the binary search tree is the sum of these contributions:

∑ P(i) × level(ai), 1 ≤ i ≤ n,  plus  ∑ Q(i) × (level(Ei) - 1), 0 ≤ i ≤ n.
Given a fixed set of identifiers, we wish to create a binary search tree organization. We may expect different binary search trees for the same identifier set to have different performance characteristics.

The computation of each c(i, j) requires us to find the minimum of m quantities. Hence, each such c(i, j) can be computed in time O(m). The total time for all c(i, j)'s with j - i = m is therefore O(nm - m²).
Example 1: The possible binary search trees for the identifier set (a1, a2, a3) = (do, if, stop) are shown below. Given the equal probabilities p(i) = q(i) = 1/7 for all i, we have:

[Figure: Trees 1–4, four of the possible binary search trees for the identifier set (do, if, stop).]

The Huffman coding tree, solved by a greedy algorithm, has the limitation that the data appear only at the leaves, and it need not preserve any ordering of the keys. Construction of an optimal binary search tree is harder, because the data are not constrained to appear only at the leaves, and the tree must satisfy the binary search tree property: all nodes to the left of the root have keys which are less, and all nodes to the right have keys which are greater.

A dynamic programming solution to the problem of obtaining an optimal binary search tree can be viewed as constructing the tree as the result of a sequence of decisions, by holding to the principle of optimality. A possible approach is to make a decision as to which of the ai's should be assigned to the root node of T. If we choose ak, then it is clear that the internal nodes for a1, a2, . . . , ak-1, as well as the external nodes for the classes E0, E1, . . . , Ek-1, will lie in the left subtree L of the root. The remaining nodes will be in the right subtree R. The structure of an optimal binary search tree is:

The C(i, j) can be computed as:

C(i, j) = min {C(i, k-1) + C(k, j) + P(k) + W(i, k-1) + W(k, j)},  i < k ≤ j
        = min {C(i, k-1) + C(k, j)} + W(i, j),  i < k ≤ j          -- (1)

where W(i, j) = P(j) + Q(j) + W(i, j-1)                            -- (2)

Initially C(i, i) = 0 and W(i, i) = Q(i) for 0 ≤ i ≤ n.

Equation (1) may be solved for C(0, n) by first computing all C(i, j) such that j - i = 1, then all C(i, j) such that j - i = 2, then all C(i, j) with j - i = 3, and so on.

C(i, j) is the cost of the optimal binary search tree Tij. During the computation we record the root R(i, j) of each tree Tij; an optimal binary search tree may then be constructed from these R(i, j). R(i, j) is the value of k that minimizes equation (1).

We solve the problem by finding W(i, i+1), C(i, i+1) and R(i, i+1) for 0 ≤ i < 4; then W(i, i+2), C(i, i+2) and R(i, i+2) for 0 ≤ i < 3; and repeating until W(0, n), C(0, n) and R(0, n) are obtained.

The results are tabulated to recover the actual tree.
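The row-wise computation described above is easily expressed in Python; p and q are indexed as p[1..n] and q[0..n] (p[0] is unused), and the function returns the three tables.

def obst(p, q, n):
    W = [[0] * (n + 1) for _ in range(n + 1)]
    C = [[0] * (n + 1) for _ in range(n + 1)]
    R = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        W[i][i] = q[i]              # W(i, i) = Q(i); C(i, i) = R(i, i) = 0
    for d in range(1, n + 1):       # solve for j - i = 1, 2, ..., n
        for i in range(n - d + 1):
            j = i + d
            W[i][j] = p[j] + q[j] + W[i][j - 1]           # equation (2)
            # equation (1): try every root k with i < k <= j
            C[i][j], R[i][j] = min(
                (C[i][k - 1] + C[k][j] + W[i][j], k)
                for k in range(i + 1, j + 1))
    return W, C, R

For the data of Example 1 below, obst([0, 3, 3, 1, 1], [2, 3, 1, 1, 1], 4) gives C[0][4] = 32 and R[0][4] = 2, matching the table.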

Example 1:

Let n = 4 and (a1, a2, a3, a4) = (do, if, read, while). Let P(1 : 4) = (3, 3, 1, 1) and Q(0 : 4) = (2, 3, 1, 1, 1).

Solution:
Table for recording W(i, j), C(i, j) and R(i, j); the entry in row r, column i is the triple W(i, i+r), C(i, i+r), R(i, i+r):

Row    i = 0        i = 1        i = 2       i = 3       i = 4
0      2, 0, 0      3, 0, 0      1, 0, 0     1, 0, 0     1, 0, 0
1      8, 8, 1      7, 7, 2      3, 3, 3     3, 3, 4
2      12, 19, 1    9, 12, 2     5, 8, 3
3      14, 25, 2    11, 19, 2
4      16, 32, 2

This computation is carried out row-wise from row 0 to row 4. Initially, W(i, i) = Q(i), C(i, i) = 0 and R(i, i) = 0 for 0 ≤ i ≤ 4.

Solving for C(0, n):

First, compute all C(i, j) such that j - i = 1; that is, j = i + 1 for i = 0, 1, 2 and 3, with i < k ≤ j. Start with i = 0; then j = 1 and the only possible value is k = 1.

W(0, 1) = P(1) + Q(1) + W(0, 0) = 3 + 3 + 2 = 8
C(0, 1) = W(0, 1) + min {C(0, 0) + C(1, 1)} = 8
R(0, 1) = 1 (the value of k that minimizes the above equation)

Next, with i = 1, j = 2, the only possible value is k = 2.

W(1, 2) = P(2) + Q(2) + W(1, 1) = 3 + 1 + 3 = 7
C(1, 2) = W(1, 2) + min {C(1, 1) + C(2, 2)} = 7
R(1, 2) = 2

Next, with i = 2, j = 3, the only possible value is k = 3.

W(2, 3) = P(3) + Q(3) + W(2, 2) = 1 + 1 + 1 = 3
C(2, 3) = W(2, 3) + min {C(2, 2) + C(3, 3)} = 3 + (0 + 0) = 3
R(2, 3) = 3

Next, with i = 3, j = 4, the only possible value is k = 4.

W(3, 4) = P(4) + Q(4) + W(3, 3) = 1 + 1 + 1 = 3
C(3, 4) = W(3, 4) + min {[C(3, 3) + C(4, 4)]} = 3 + (0 + 0) = 3
R(3, 4) = 4

Second, compute all C(i, j) such that j - i = 2; that is, j = i + 2 for i = 0, 1, 2, with i < k ≤ j. Start with i = 0; then j = 2 and the possible values for k are 1 and 2.

W(0, 2) = P(2) + Q(2) + W(0, 1) = 3 + 1 + 8 = 12
C(0, 2) = W(0, 2) + min {(C(0, 0) + C(1, 2)), (C(0, 1) + C(2, 2))}
        = 12 + min {(0 + 7), (8 + 0)} = 19
R(0, 2) = 1
Next, with i = 1, j = 3, the possible values for k are 2 and 3.

W(1, 3) = P(3) + Q(3) + W(1, 2) = 1 + 1 + 7 = 9
C(1, 3) = W(1, 3) + min {[C(1, 1) + C(2, 3)], [C(1, 2) + C(3, 3)]}
        = 9 + min {(0 + 3), (7 + 0)} = 9 + 3 = 12
R(1, 3) = 2

Next, with i = 2, j = 4, the possible values for k are 3 and 4.

W(2, 4) = P(4) + Q(4) + W(2, 3) = 1 + 1 + 3 = 5
C(2, 4) = W(2, 4) + min {[C(2, 2) + C(3, 4)], [C(2, 3) + C(4, 4)]}
        = 5 + min {(0 + 3), (3 + 0)} = 5 + 3 = 8
R(2, 4) = 3

Third, compute all C(i, j) such that j - i = 3; that is, j = i + 3 for i = 0, 1, with i < k ≤ j. Start with i = 0; then j = 3 and the possible values for k are 1, 2 and 3.

W(0, 3) = P(3) + Q(3) + W(0, 2) = 1 + 1 + 12 = 14
C(0, 3) = W(0, 3) + min {[C(0, 0) + C(1, 3)], [C(0, 1) + C(2, 3)], [C(0, 2) + C(3, 3)]}
        = 14 + min {(0 + 12), (8 + 3), (19 + 0)} = 14 + 11 = 25
R(0, 3) = 2

Next, with i = 1, j = 4, the possible values for k are 2, 3 and 4.

W(1, 4) = P(4) + Q(4) + W(1, 3) = 1 + 1 + 9 = 11
C(1, 4) = W(1, 4) + min {[C(1, 1) + C(2, 4)], [C(1, 2) + C(3, 4)], [C(1, 3) + C(4, 4)]}
        = 11 + min {(0 + 8), (7 + 3), (12 + 0)} = 11 + 8 = 19
R(1, 4) = 2

Fourth, compute C(i, j) such that j - i = 4; that is, j = i + 4 with i = 0 and i < k ≤ j. With i = 0, j = 4, the possible values for k are 1, 2, 3 and 4.

W(0, 4) = P(4) + Q(4) + W(0, 3) = 1 + 1 + 14 = 16
C(0, 4) = W(0, 4) + min {[C(0, 0) + C(1, 4)], [C(0, 1) + C(2, 4)], [C(0, 2) + C(3, 4)], [C(0, 3) + C(4, 4)]}
        = 16 + min {0 + 19, 8 + 8, 19 + 3, 25 + 0} = 16 + 16 = 32
R(0, 4) = 2

From the table we see that C(0, 4) = 32 is the minimum cost of a binary search tree for (a1, a2, a3, a4). The root of the tree T04 is a2.

Hence the left subtree is T01 and the right subtree is T24. The root of T01 is a1 and the root of T24 is a3.

The left and right subtrees of T01 are T00 and T11, respectively.

The left and right subtrees of T24 are T22 and T34, respectively. The root of T22 is null, and the root of T34 is a4.

[Figure: the optimal binary search tree T04. The root is a2 (if); its left child is a1 (do) and its right child is a3 (read), whose right child is a4 (while).]

Example 2:

Consider four elements a1, a2, a3 and a4 with Q(0) = 1/8, Q(1) = 3/16, Q(2) = Q(3) = Q(4) = 1/16 and P(1) = 1/4, P(2) = 1/8, P(3) = P(4) = 1/16. Construct an optimal binary search tree. (Multiplying all probabilities by 16 so as to work with integers, we use P = (4, 2, 1, 1) and Q = (2, 3, 1, 1, 1) in the computation below.)

Solving for C(0, n):

First, compute all C(i, j) such that j - i = 1; that is, j = i + 1 for i = 0, 1, 2 and 3, with i < k ≤ j. Start with i = 0; then j = 1 and the only possible value is k = 1.

W(0, 1) = P(1) + Q(1) + W(0, 0) = 4 + 3 + 2 = 9
C(0, 1) = W(0, 1) + min {C(0, 0) + C(1, 1)} = 9 + (0 + 0) = 9
R(0, 1) = 1 (the value of k that minimizes the above equation)

Next, with i = 1, j = 2, the only possible value is k = 2.

W(1, 2) = P(2) + Q(2) + W(1, 1) = 2 + 1 + 3 = 6
C(1, 2) = W(1, 2) + min {C(1, 1) + C(2, 2)} = 6 + (0 + 0) = 6
R(1, 2) = 2

Next, with i = 2, j = 3, the only possible value is k = 3.

W(2, 3) = P(3) + Q(3) + W(2, 2) = 1 + 1 + 1 = 3
C(2, 3) = W(2, 3) + min {C(2, 2) + C(3, 3)} = 3 + (0 + 0) = 3
R(2, 3) = 3

Next, with i = 3, j = 4, the only possible value is k = 4.

W(3, 4) = P(4) + Q(4) + W(3, 3) = 1 + 1 + 1 = 3
C(3, 4) = W(3, 4) + min {[C(3, 3) + C(4, 4)]} = 3 + (0 + 0) = 3
R(3, 4) = 4

Second, compute all C(i, j) such that j - i = 2; that is, j = i + 2 for i = 0, 1, 2, with i < k ≤ j. Start with i = 0; then j = 2 and the possible values for k are 1 and 2.

W(0, 2) = P(2) + Q(2) + W(0, 1) = 2 + 1 + 9 = 12
C(0, 2) = W(0, 2) + min {(C(0, 0) + C(1, 2)), (C(0, 1) + C(2, 2))}
        = 12 + min {(0 + 6), (9 + 0)} = 12 + 6 = 18
R(0, 2) = 1
Next, with i = 1, j = 3, the possible values for k are 2 and 3.

W(1, 3) = P(3) + Q(3) + W(1, 2) = 1 + 1 + 6 = 8
C(1, 3) = W(1, 3) + min {[C(1, 1) + C(2, 3)], [C(1, 2) + C(3, 3)]}
        = 8 + min {(0 + 3), (6 + 0)} = 8 + 3 = 11
R(1, 3) = 2

Next, with i = 2, j = 4, the possible values for k are 3 and 4.

W(2, 4) = P(4) + Q(4) + W(2, 3) = 1 + 1 + 3 = 5
C(2, 4) = W(2, 4) + min {[C(2, 2) + C(3, 4)], [C(2, 3) + C(4, 4)]}
        = 5 + min {(0 + 3), (3 + 0)} = 5 + 3 = 8
R(2, 4) = 3

Third, compute all C(i, j) such that j - i = 3; that is, j = i + 3 for i = 0, 1, with i < k ≤ j. Start with i = 0; then j = 3 and the possible values for k are 1, 2 and 3.

W(0, 3) = P(3) + Q(3) + W(0, 2) = 1 + 1 + 12 = 14
C(0, 3) = W(0, 3) + min {[C(0, 0) + C(1, 3)], [C(0, 1) + C(2, 3)], [C(0, 2) + C(3, 3)]}
        = 14 + min {(0 + 11), (9 + 3), (18 + 0)} = 14 + 11 = 25
R(0, 3) = 1

Next, with i = 1, j = 4, the possible values for k are 2, 3 and 4.

W(1, 4) = P(4) + Q(4) + W(1, 3) = 1 + 1 + 8 = 10
C(1, 4) = W(1, 4) + min {[C(1, 1) + C(2, 4)], [C(1, 2) + C(3, 4)], [C(1, 3) + C(4, 4)]}
        = 10 + min {(0 + 8), (6 + 3), (11 + 0)} = 10 + 8 = 18
R(1, 4) = 2

Fourth, compute C(i, j) such that j - i = 4; that is, j = i + 4 with i = 0 and i < k ≤ j. With i = 0, j = 4, the possible values for k are 1, 2, 3 and 4.

W(0, 4) = P(4) + Q(4) + W(0, 3) = 1 + 1 + 14 = 16
C(0, 4) = W(0, 4) + min {[C(0, 0) + C(1, 4)], [C(0, 1) + C(2, 4)], [C(0, 2) + C(3, 4)], [C(0, 3) + C(4, 4)]}
        = 16 + min {0 + 18, 9 + 8, 18 + 3, 25 + 0} = 16 + 17 = 33
R(0, 4) = 2

Table for recording W(i, j), C(i, j) and R(i, j); the entry in row r, column i is the triple W(i, i+r), C(i, i+r), R(i, i+r):

Row    i = 0        i = 1        i = 2       i = 3       i = 4
0      2, 0, 0      3, 0, 0      1, 0, 0     1, 0, 0     1, 0, 0
1      9, 9, 1      6, 6, 2      3, 3, 3     3, 3, 4
2      12, 18, 1    8, 11, 2     5, 8, 3
3      14, 25, 1    11, 18, 2
4      16, 33, 2

From the table we see that C(0, 4) = 33 is the minimum cost of a binary search tree for (a1, a2, a3, a4).

The root of the tree T04 is a2. Hence the left subtree is T01 and the right subtree is T24. The root of T01 is a1 and the root of T24 is a3.

The left and right subtrees of T01 are T00 and T11, respectively.

The left and right subtrees of T24 are T22 and T34, respectively. The root of T22 is null, and the root of T34 is a4.

[Figure: the optimal binary search tree T04 for Example 2, with root a2, left child a1, and right child a3 whose right child is a4.]