DAA UNIT -2

The document provides an overview of greedy algorithms, including their design principles and applications such as the Fractional Knapsack Problem, Job Sequencing with Deadlines, and Huffman Coding. It explains the steps involved in greedy algorithm design, the characteristics of the Fractional Knapsack Problem, and the implementation of Huffman coding for data compression. Additionally, it covers Prim's and Kruskal's algorithms for Minimum Spanning Trees (MST) and the Activity Selection Problem.


Apex Institute of Technology

Department of Computer Science & Engineering


Bachelor of Engineering (Computer Science & Engineering)
Design and Analysis of Algorithms– (21CSH-282)
Prepared By: Mr. Vikas Kumar (E13657)

3/21/2024 DISCOVER . LEARN . EMPOWER


1
Content

Greedy Method: Understanding of the greedy approach

Greedy algorithms for Knapsack Fractional Problem

2
Greedy Algorithm

Steps of Greedy Algorithm Design:

1. Formulate the optimization problem in the form: we make a choice and are left with one subproblem to solve.
2. Show that the greedy choice is always safe, i.e., that it can lead to an optimal solution.
3. Demonstrate that an optimal solution to the original problem = the greedy choice + an optimal solution to the remaining subproblem.

Optimal substructure of this form is a good clue that a greedy strategy will solve the problem.
3
Greedy Algorithm

• Many optimization problems can be solved more quickly using a greedy approach.
• The basic principle is that locally optimal decisions may be used to build a globally optimal solution.
• But the greedy approach does not lead to an optimal solution for all problems.
• The key is knowing which problems will work with this approach and which will not.

We will study
• The activity selection problem
• Element of a greedy strategy
• The problem of generating Huffman codes

4
Greedy Algorithm
Algorithm Greedy(a, n)
// a[1:n] contains the n inputs.
{
    solution := Ø;
    for i := 1 to n do
    {
        x := Select(a);
        if Feasible(solution, x) then
            solution := Union(solution, x);
    }
    return solution;
}
5
Fractional Knapsack Problem

6
Fractional Knapsack Problem
• Knapsack capacity: W

• There are n items: the i-th item has value vi and weight wi

• Goal: find fractions xi, 0 ≤ xi ≤ 1, i = 1, 2, ..., n, such that

    Σ wi·xi ≤ W  and  Σ vi·xi is maximum

7
Fractional Knapsack Problem
Alg.: Fractional-Knapsack (W, v[n], w[n])

1. w ← W   // w is the amount of space remaining in the knapsack

2. while w > 0 and items remain do

3.     pick the item i with maximum vi/wi

4.     xi ← min(1, w/wi)

5.     remove item i from the list

6.     w ← w − xi·wi

• Running time: Θ(n) if items are already ordered by vi/wi; otherwise Θ(n lg n)


8
Fractional Knapsack - Example

E.g. (W = 50):
• Item 1: weight 10, value $60 ($6/pound)
• Item 2: weight 20, value $100 ($5/pound)
• Item 3: weight 30, value $120 ($4/pound)

Greedy by value per pound: take all of Item 1 ($60), all of Item 2 ($100), and 20 pounds of Item 3 ($80), for a total of $240.

9
Fractional Knapsack Problem

• Greedy strategy 1: pick the item with the maximum value
• E.g.: W = 1, w1 = 100, v1 = 2, w2 = 1, v2 = 1
• Taking from the item with the maximum value: total value taken = v1·(W/w1) = 2/100
• This is smaller than what the thief can take by choosing the other item: total value (choose item 2) = v2 = 1
10
Fractional Knapsack Problem

Greedy strategy 2:

• Pick the item with the maximum value per pound vi/wi

• If the supply of that item is exhausted and the thief can carry more, take as much as possible from the item with the next greatest value per pound

• It therefore pays to order the items by their value per pound first

11
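Greedy strategy 2 can be sketched as a short program (a minimal sketch; the function name and item tuples are illustrative, not from the slides):

```python
def fractional_knapsack(W, items):
    """Greedy fractional knapsack.
    items: list of (value, weight) pairs; W: knapsack capacity.
    Returns the maximum total value obtainable."""
    # Sort by value per unit weight, highest first.
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total, remaining = 0.0, W
    for value, weight in items:
        if remaining <= 0:
            break
        take = min(weight, remaining)    # xi = take / weight, so 0 <= xi <= 1
        total += value * take / weight
        remaining -= take
    return total

# The slide example: W = 50, items ($60, 10 lb), ($100, 20 lb), ($120, 30 lb)
print(fractional_knapsack(50, [(60, 10), (100, 20), (120, 30)]))   # → 240.0
```

The sort dominates the cost, matching the Θ(n lg n) bound when the items are not pre-ordered.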
REFERENCES
Text books:
• Cormen, Leiserson, Rivest, Stein, “Introduction to Algorithms”, Prentice Hall of India, 3rd edition 2012.

Websites:
• https://www.tutorialspoint.com/data_structures_algorithms/kruskals_spanning_tree_algorithm.htm
• https://www.geeksforgeeks.org/kruskals-minimum-spanning-tree-algorithm-greedy-algo-2/
THANK YOU
Content

Greedy algorithms for Job Sequencing with Deadlines

Huffman Coding

2
JOB SEQUENCING WITH DEADLINES

There is a set of n jobs. For each job i there is an integer deadline di ≥ 0 and a profit Pi > 0; the profit Pi is earned iff the job is completed by its deadline.
• To complete a job, one has to process the job on a machine for one unit of time. Only one machine is available for processing jobs.
• A feasible solution for this problem is a subset J of jobs such that each job in the subset can be completed by its deadline.
• The value of a feasible solution J is the sum of the profits of the jobs in J, i.e., ∑i∈J Pi.
• An optimal solution is a feasible solution with maximum value.
• The problem involves identifying a subset of jobs that can be completed by their deadlines. It therefore suits the subset methodology and can be solved by the greedy method.

3
JOB SEQUENCING WITH DEADLINES
Algorithm JS(d, j, n)
// d[i] ≥ 1, 1 ≤ i ≤ n, are the deadlines; n = total number of jobs
// the jobs are ordered such that p[1] ≥ p[2] ≥ ... ≥ p[n]
// j[i] is the ith job in the optimal solution, 1 ≤ i ≤ k
{
    d[0] := j[0] := 0;   // sentinel
    j[1] := 1; k := 1;
    for i := 2 to n do
    {
        // find position r for job i and check feasibility of insertion
        r := k;
        while ((d[j[r]] > d[i]) and (d[j[r]] ≠ r)) do r := r - 1;
        if ((d[j[r]] ≤ d[i]) and (d[i] > r)) then
        {
            // insert i into j[], shifting later jobs right
            for q := k to (r + 1) step -1 do j[q + 1] := j[q];
            j[r + 1] := i; k := k + 1;
        }
    }
    return k;
}
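The same greedy rule can be sketched in runnable form. This sketch uses a time-slot array rather than the pseudocode's in-place insertion into j[], but it implements the same policy: consider jobs in non-increasing profit order and schedule each in the latest free slot on or before its deadline (the job data are illustrative):

```python
def job_sequencing(jobs):
    """jobs: list of (profit, deadline) pairs, each needing one unit of time.
    Returns (total_profit, schedule) using the greedy rule."""
    jobs = sorted(jobs, reverse=True)          # highest profit first
    max_d = max(d for _, d in jobs)
    slot = [None] * (max_d + 1)                # slot[t] = job run at time t (1-based)
    total = 0
    for profit, deadline in jobs:
        t = deadline
        while t >= 1 and slot[t] is not None:  # latest free slot <= deadline
            t -= 1
        if t >= 1:
            slot[t] = (profit, deadline)
            total += profit
    schedule = [job for job in slot[1:] if job is not None]
    return total, schedule

# Illustrative instance: profits (100, 19, 27, 25, 15), deadlines (2, 1, 2, 1, 3)
print(job_sequencing([(100, 2), (19, 1), (27, 2), (25, 1), (15, 3)]))
# → (142, [(27, 2), (100, 2), (15, 3)])
```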
Huffman Coding

• Huffman’s algorithm achieves data compression by finding the best variable-length binary encoding scheme for the symbols that occur in the file to be compressed.

• The more frequently a symbol occurs, the shorter the Huffman codeword representing it should be.

• The Huffman code is a prefix-free code: no prefix of a codeword is equal to another codeword.

5
Huffman Coding
• Huffman codes: compressing data (savings of 20% to 90%)
• Huffman’s greedy algorithm uses a table of the frequencies of occurrence of each character to build up an optimal way of representing each character as a binary string

6
Huffman Coding

• Assume we are given a data file that contains only 6 symbols, namely a, b, c, d, e, f, with the frequency table shown on the slide.

• Find a variable-length prefix-free encoding scheme that compresses this data file as much as possible.

7
Huffman Coding

• The left tree represents a fixed-length encoding scheme

• The right tree represents a Huffman encoding scheme

8
Huffman Coding

9
Constructing A Huffman Code

// C is a set of n characters
// Q is a priority queue implemented as a binary min-heap

[Pseudocode figure: BUILD-MIN-HEAP initializes Q from C in O(n); the loop runs n − 1 times, each iteration doing two Extract-Mins and one Insert at O(lg n) each. Total computation time = O(n lg n).]

10
Cost of a Tree T

• For each character c in the alphabet C:
  • let f(c) be the frequency of c in the file
  • let dT(c) be the depth of c in the tree
    • It is also the length of c's codeword. Why?
• Let B(T) be the number of bits required to encode the file (called the cost of T):

    B(T) = Σc∈C f(c) · dT(c)

11
Running time of Huffman's algorithm

• The analysis of Huffman's running time assumes that Q is implemented as a binary min-heap.

• For a set C of n characters, the initialization of Q in line 2 can be performed in O(n) time using BUILD-MIN-HEAP.

• The for loop in lines 3-8 is executed exactly n - 1 times, and since each heap operation requires O(lg n) time, the loop contributes O(n lg n) to the running time. Thus, the total running time of HUFFMAN on a set of n characters is O(n lg n).

12
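A minimal runnable sketch of the construction, with Q as a binary min-heap as the slides assume (the symbol frequencies follow the classic CLRS instance; the tie-breaking counter is an implementation detail, not from the slides):

```python
import heapq

def huffman_codes(freq):
    """Build Huffman codewords from a dict {symbol: frequency}.
    A counter breaks ties so heap comparisons never reach the payloads."""
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)                      # O(n) initialization
    count = len(heap)
    if count == 1:                           # degenerate single-symbol file
        return {heap[0][2]: "0"}
    while len(heap) > 1:                     # executed n - 1 times
        f1, _, left = heapq.heappop(heap)    # two minimum-frequency nodes,
        f2, _, right = heapq.heappop(heap)   # each pop costs O(lg n)
        heapq.heappush(heap, (f1 + f2, count, (left, right)))
        count += 1
    codes = {}
    def walk(node, prefix):                  # read codewords off the tree
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix
    walk(heap[0][2], "")
    return codes

# CLRS-style frequencies (in thousands): a:45 b:13 c:12 d:16 e:9 f:5
codes = huffman_codes({'a': 45, 'b': 13, 'c': 12, 'd': 16, 'e': 9, 'f': 5})
print(sorted(len(c) for c in codes.values()))   # → [1, 3, 3, 3, 4, 4]
```

On this instance the cost B(T) = Σ f(c)·dT(c) works out to 224, versus 300 for a fixed-length 3-bit code.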
Prefix Code

• Prefix(-free) code: no codeword is also a prefix of some other codeword (unambiguous)
• An optimal data compression achievable by a character code can always be achieved with a prefix code
• Prefix codes simplify encoding (compression) and decoding
  • Encoding: abc → 0 . 101 . 100 = 0101100
  • Decoding: 001011101 = 0 . 0 . 101 . 1101 → aabe
• Use a binary tree to represent prefix codes for easy decoding
• An optimal code is always represented by a full binary tree, in which every non-leaf node has two children
  • |C| leaves and |C| - 1 internal nodes
• Cost:

    B(T) = Σc∈C f(c) · dT(c)

  where f(c) is the frequency of c and dT(c) is the depth of c (the length of its codeword)

13
Huffman Code

• Reduces the size of data by 20%-90% in general

• If no characters occur more frequently than others, then there is no advantage over ASCII

• Encoding: given the characters and their frequencies, run the algorithm to generate a code, then write the characters using that code

• Decoding: given the Huffman tree, figure out what each character is (possible because of the prefix property)
14
Application on Huffman code

• Both the .mp3 and .jpg file formats use Huffman coding at one stage of their compression

15
REFERENCES
Text books:
• Cormen, Leiserson, Rivest, Stein, “Introduction to Algorithms”, Prentice Hall of India, 3rd edition 2012.

Websites:
• https://www.tutorialspoint.com/data_structures_algorithms/kruskals_spanning_tree_algorithm.htm
• https://www.geeksforgeeks.org/kruskals-minimum-spanning-tree-algorithm-greedy-algo-2/
THANK YOU
Content

MST Algorithms: Prim's Algorithm

Kruskal's Algorithm

2
MST Algorithm

3
Generic Algorithm
“Grows” a set A.

A is a subset of some MST.

An edge is “safe” if it can be added to A without destroying this invariant.

A := Ø;
while A is not a complete tree do
    find a safe edge (u, v);
    A := A ∪ {(u, v)}
od

4
MST Algorithm

5
Prim's Algorithm

Builds one tree, so A is always a tree.

Starts from an arbitrary “root” r.
At each step, adds a light edge crossing the cut (VA, V - VA) to A.
VA = the set of vertices that A is incident on.

6
Prim's Algorithm

Uses a priority queue Q to find a light edge quickly.

Each object in Q is a vertex in V - VA.
The key of v is the minimum weight of any edge (u, v), where u ∈ VA.
Then the vertex returned by Extract-Min is the v such that there exists u ∈ VA and (u, v) is a light edge crossing (VA, V - VA).
The key of v is ∞ if v is not adjacent to any vertex in VA.

7
Prim’s Algorithm

Q := V[G];
for each u ∈ Q do
    key[u] := ∞
od;
key[r] := 0;
π[r] := NIL;
while Q ≠ Ø do
    u := Extract-Min(Q);
    for each v ∈ Adj[u] do
        if v ∈ Q and w(u, v) < key[v]
        then π[v] := u;          ▷ decrease-key operation
             key[v] := w(u, v)
        fi
    od
od
Note: A = {(v, π[v]) : v ∈ V - {r} - Q}.

Complexity, using binary heaps: O(E lg V).
    Initialization – O(V).
    Building the initial queue – O(V).
    V Extract-Min’s – O(V lg V).
    E Decrease-Key’s – O(E lg V).
Using Fibonacci heaps: O(E + V lg V) (see book).
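A runnable sketch of Prim's algorithm. Instead of Decrease-Key it uses a "lazy" binary heap that skips stale entries, an equivalent O(E lg V) approach; the adjacency list below reconstructs the example graph as far as the figures allow, so treat the data as illustrative:

```python
import heapq

def prim_mst(graph, root):
    """graph: {vertex: [(weight, neighbor), ...]}, undirected.
    Returns (total_weight, tree_edges) of a minimum spanning tree."""
    visited = {root}
    tree_edges = []
    total = 0
    heap = [(w, v, root) for w, v in graph[root]]   # candidate light edges
    heapq.heapify(heap)
    while heap and len(visited) < len(graph):
        w, v, u = heapq.heappop(heap)
        if v in visited:
            continue                      # stale entry: v already in the tree
        visited.add(v)                    # (u, v) is a safe light edge
        tree_edges.append((u, v, w))
        total += w
        for w2, x in graph[v]:
            if x not in visited:
                heapq.heappush(heap, (w2, x, v))
    return total, tree_edges

# Graph reconstructed from the example slides (vertices a-f)
g = {
    'a': [(5, 'b'), (11, 'd')],
    'b': [(5, 'a'), (7, 'c'), (3, 'e')],
    'c': [(7, 'b'), (1, 'e'), (-3, 'f')],
    'd': [(11, 'a'), (0, 'e')],
    'e': [(3, 'b'), (1, 'c'), (0, 'd'), (2, 'f')],
    'f': [(-3, 'c'), (2, 'e')],
}
total, mst = prim_mst(g, 'a')
print(total)   # → 6
```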
Prim's Algorithm

[Slides 9-13: figures tracing successive steps of Prim's algorithm on an example graph.]

13
Example of Prim’s Algorithm

[Slides 14-16: step-by-step run of Prim's algorithm on a graph with vertices a-f and edge weights 5, 7, 11, 3, 1, -3, 0, 2. The vertex labels (a/0, b/5, c/1, d/0, e/3, f/-3) show the final keys as Q shrinks from {f} to Ø; the edges that remain form the MST.]

16
MST Algorithm

17
REFERENCES

Text books:
• Cormen, Leiserson, Rivest, Stein, “Introduction to Algorithms”, Prentice Hall of India, 3rd edition 2012.

Websites:
• https://www.tutorialspoint.com/data_structures_algorithms/kruskals_spanning_tree_algorithm.htm
• https://www.geeksforgeeks.org/kruskals-minimum-spanning-tree-algorithm-greedy-algo-2/

18
THANK YOU
Content

Activity Selection problem

2
Activity-Selection Problem
• Here is a set of start and finish times

• What is the maximum number of activities that can be completed?
• {a3, a9, a11} can be completed
• But so can {a1, a4, a8, a11}, which is a larger set
• But it is not unique; consider {a2, a4, a9, a11}
• We will solve this problem in the following manner:
• Show the optimal substructure property holds
• Solve the problem using dynamic programming
• Show it is greedy and provide a recursive greedy solution
• Provide an iterative greedy solution 3
A Top Down Recursive Solution

• The step-by-step solution is on the next slide

• Assuming the activities have been sorted by finish times, the complexity of this algorithm is Θ(n)
• Developing an iterative algorithm would be even faster
4
Activity-Selection Problem

• Here is a step-by-step solution

• Notice that the solution is not unique
• But the solution is still optimal
• No larger set of activities can be found

5
An Iterative Approach
• The recursive algorithm is almost tail recursive (what is that?) but there is a final union operation
• We let fi be the maximum finishing time for any activity in A

• The loop in lines 4-7 stops when the earliest compatible finishing time is found

• The overall complexity of this algorithm is Θ(n) 6
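The iterative greedy selection can be sketched as follows (the activity pairs are illustrative and assumed pre-sorted by finish time, as the slides require):

```python
def activity_selection(activities):
    """activities: list of (start, finish) pairs sorted by finish time.
    Greedy rule: always pick the compatible activity that finishes first."""
    selected = []
    last_finish = 0                  # assumes all start times are >= 0
    for start, finish in activities:
        if start >= last_finish:     # compatible with everything chosen so far
            selected.append((start, finish))
            last_finish = finish
    return selected

# CLRS-style instance, sorted by finish time
acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9),
        (6, 10), (8, 11), (8, 12), (2, 14), (12, 16)]
print(len(activity_selection(acts)))   # → 4
```

One pass over the sorted list gives the Θ(n) bound quoted above.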


Developing a Dynamic Solution
• Define the subset Sij of activities that can start after ai finishes and finish before aj starts

• Sort the activities according to finish time

• We now define the maximal set of activities from i to j

• Let c[i, j] be the maximal number of activities

• Our recurrence relation for finding c[i, j] becomes: c[i, j] = 0 if Sij is empty, and otherwise the maximum over ak in Sij of c[i, k] + c[k, j] + 1

• We can solve this using dynamic programming, but a simpler approach exists
REFERENCES
Text books:
• Cormen, Leiserson, Rivest, Stein, “Introduction to Algorithms”, Prentice Hall of India, 3rd edition 2012.

Websites:
• https://www.tutorialspoint.com/data_structures_algorithms/kruskals_spanning_tree_algorithm.htm
• https://www.geeksforgeeks.org/kruskals-minimum-spanning-tree-algorithm-greedy-algo-2/
THANK YOU
Content

Dynamic Programming: Understanding of the dynamic programming approach

Algorithms for the 0/1 Knapsack problem

Longest Common Subsequence problem

2
Dynamic Programming

• Dynamic Programming is also used in optimization problems. Like the divide-and-conquer method, Dynamic Programming solves problems by combining the solutions of subproblems. Moreover, a Dynamic Programming algorithm solves each sub-problem just once and then saves its answer in a table, thereby avoiding the work of re-computing the answer every time.
• Two main properties of a problem suggest that it can be solved using Dynamic Programming: overlapping sub-problems and optimal substructure.

3
Steps of Dynamic Programming Approach

• Characterize the structure of an optimal solution.


• Recursively define the value of an optimal solution.
• Compute the value of an optimal solution, typically in a bottom-up
fashion.
• Construct an optimal solution from the computed information.

4
Applications of Dynamic Programming Approach

• Matrix Chain Multiplication


• Longest Common Subsequence
• Travelling Salesman Problem

5
0-1 Knapsack

Problem Statement:
A thief is robbing a store and can carry a maximal weight of W in his knapsack. There are n items; the weight of the ith item is wi and the profit of selecting it is pi. What items should the thief take?

6
Dynamic-Programming Approach

• Let i be the highest-numbered item in an optimal solution S for weight W. Then S' = S - {i} is an optimal solution for weight W - wi, and the value of the solution S is vi plus the value of the sub-problem.
• We can express this fact in the following formula: define c[i, w] to be the value of the solution for items 1, 2, …, i and maximum weight w.
• The algorithm takes the following inputs:
• The maximum weight W
• The number of items n
• The two sequences v = <v1, v2, …, vn> and w = <w1, w2, …, wn>

7
Dynamic-0-1-knapsack (v, w, n, W)

8
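The Dynamic-0-1-Knapsack pseudocode itself is in a figure; a bottom-up sketch of the standard c[i, w] recurrence it describes (the instance data are illustrative):

```python
def knapsack_01(v, w, n, W):
    """Bottom-up 0/1 knapsack.
    c[i][j] = best value using items 1..i with capacity j."""
    c = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(W + 1):
            if w[i - 1] <= j:
                # either skip item i, or take it and solve capacity j - wi
                c[i][j] = max(c[i - 1][j], v[i - 1] + c[i - 1][j - w[i - 1]])
            else:
                c[i][j] = c[i - 1][j]
    return c[n][W]

# Illustrative instance: 3 items, values (60, 100, 120), weights (1, 2, 3), W = 5
print(knapsack_01([60, 100, 120], [1, 2, 3], 3, 5))   # → 220
```

The table has (n+1)(W+1) entries, each filled in O(1), giving O(nW) time and space.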
Longest common Subsequence

• If a set of sequences is given, the longest common subsequence problem is to find a common subsequence of all the sequences that is of maximal length.
• The longest common subsequence problem is a classic computer science problem, the basis of data comparison programs such as the diff utility, and has applications in bioinformatics. It is also widely used by revision control systems, such as SVN and Git, for reconciling multiple changes made to a revision-controlled collection of files.

9
Algorithm

10
Example

LCS-LENGTH on the sequences:

X: A,B,C,B,D,A,B and
Y: B,D,C,A,B,A

11
Algorithm for printing LCS

This procedure prints BCBA (for given example 1)

12
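LCS-LENGTH and the printing procedure can be sketched together (a standard bottom-up implementation; the backtracking walk plays the role of the slides' arrow table):

```python
def lcs(X, Y):
    """Bottom-up LCS-LENGTH: c[i][j] = length of an LCS of X[:i] and Y[:j].
    Reconstructs one LCS by walking the table back from c[m][n]."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    # walk the table back to print one LCS
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1]); i -= 1; j -= 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return ''.join(reversed(out))

# The slide example: X = ABCBDAB, Y = BDCABA
print(lcs("ABCBDAB", "BDCABA"))   # → BCBA
```

This reproduces the slides' result: the procedure prints BCBA for example 1.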
REFERENCES

Text books:
• Cormen, Leiserson, Rivest, Stein, “Introduction to Algorithms”, Prentice Hall of India, 3rd edition 2012.

Websites:
• https://www.javatpoint.com/longest-common-sequence-algorithm

• https://www.tutorialspoint.com/design_and_analysis_of_algorithms/design_an
d_analysis_of_algorithms_longest_common_subsequence.htm

13
THANK YOU
Content

OBST

Matrix Chain Multiplication,

2
Optimal Binary Search Tree

A set of keys is given in sorted order, together with an array freq of frequency counts.
Our task is to create a binary search tree from those keys that minimizes the total cost of all searches.
An auxiliary array cost[n, n] is created to solve and store the solutions of subproblems.
The cost matrix holds the data needed to solve the problem in a bottom-up manner.

3
Example

Input:
The key values as node and the frequency.
Keys = {10, 12, 20}
Frequency = {34, 8, 50}
Output:
The minimum cost is 142.

4
Possible BST from the given values

For case 1, the cost is: (34*1) + (8*2) + (50*3) = 200
For case 2, the cost is: (8*1) + (34*2) + (50*2) = 176
Similarly for case 5, the cost is: (50*1) + (34*2) + (8*3) = 142 (minimum)
5
Algorithm

6
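The algorithm slide is a figure; a bottom-up sketch of the cost computation it describes (the standard O(n³) recurrence over the cost table; the function name is illustrative):

```python
def optimal_bst_cost(freq):
    """freq[i] = search frequency of the i-th sorted key.
    cost[i][j] = minimum search cost of a BST built from keys i..j, using
    cost[i][j] = min over roots r of cost[i][r-1] + cost[r+1][j] + sum(freq[i..j])."""
    n = len(freq)
    cost = [[0] * n for _ in range(n)]
    for i in range(n):
        cost[i][i] = freq[i]                 # single-key tree
    for length in range(2, n + 1):           # interval length, bottom-up
        for i in range(n - length + 1):
            j = i + length - 1
            total = sum(freq[i:j + 1])       # every key moves one level deeper
            best = float('inf')
            for r in range(i, j + 1):        # try each key as the root
                left = cost[i][r - 1] if r > i else 0
                right = cost[r + 1][j] if r < j else 0
                best = min(best, left + right)
            cost[i][j] = best + total
    return cost[0][n - 1]

# Slide example: keys {10, 12, 20}, frequencies {34, 8, 50}
print(optimal_bst_cost([34, 8, 50]))   # → 142
```

This matches case 5 from the example: rooting at the key with frequency 50 gives the minimum cost of 142.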
Matrix Chain Multiplication

It is a method under Dynamic Programming in which the previous output is taken as input for the next step.
Here, "chain" means one matrix's column count is always equal to the next matrix's row count.
In general:
If A = [aij] is a p x q matrix
and B = [bij] is a q x r matrix,
then C = AB = [cij] is a p x r matrix.

7
Example

Let us have 3 matrices A1, A2, A3 of order (10 x 100), (100 x 5) and (5 x 50) respectively.
Three matrices can be multiplied in two ways:
A1, (A2, A3): first multiply A2 and A3, then multiply the result with A1.
(A1, A2), A3: first multiply A1 and A2, then multiply the result with A3.
Number of scalar multiplications in Case 1: (100 x 5 x 50) + (10 x 100 x 50) = 25000 + 50000 = 75000
Number of scalar multiplications in Case 2: (10 x 100 x 5) + (10 x 5 x 50) = 5000 + 2500 = 7500

8
Number of ways for parenthesizing the matrices:

There is a very large number of ways of parenthesizing these matrices. If there are n matrices, there are (n-1) positions at which the outermost pair of parentheses can be placed:
(A1) (A2, A3, A4, ................ An)
or (A1, A2) (A3, A4, ................. An)
or (A1, A2, A3) (A4, ............... An)
........................
or (A1, A2, A3, ............. An-1) (An)
It can be observed that after splitting at the kth matrix, we are left with two parenthesized sequences of matrices: one consisting of k matrices and another consisting of n-k matrices.

9
Development of Dynamic Programming
Algorithm

Characterize the structure of an optimal solution.


Define the value of an optimal solution recursively.
Compute the value of an optimal solution in a bottom-up fashion.
Construct the optimal solution from the computed information.

10
Algorithm

11
ALGORITHM

12
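The algorithm slides are figures; a bottom-up sketch of the standard MATRIX-CHAIN-ORDER recurrence they present:

```python
def matrix_chain_order(p):
    """p: dimension array; matrix Ai has dimensions p[i-1] x p[i].
    m[i][j] = minimum scalar multiplications to compute Ai..Aj."""
    n = len(p) - 1                                  # number of matrices
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):                  # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = float('inf')
            for k in range(i, j):                   # split point: (Ai..Ak)(Ak+1..Aj)
                cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                m[i][j] = min(m[i][j], cost)
    return m[1][n]

# Slide example: A1 (10 x 100), A2 (100 x 5), A3 (5 x 50)
print(matrix_chain_order([10, 100, 5, 50]))   # → 7500
```

The result 7500 matches Case 2 of the worked example, confirming that (A1, A2), A3 is the optimal parenthesization.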
Example

13
References

Text books:
Cormen, Leiserson, Rivest, Stein, “Introduction to Algorithms”, Prentice Hall of India, 3rd edition 2012.

Websites:
https://www.tutorialspoint.com/Optimal-Binary-Search-Tree

14
THANK YOU
Content

2
Change making problem/Coin Change Problem

In the coin change problem, we are provided with coins of different denominations, such as 1¢, 5¢ and 10¢. We have to make an amount using these coins such that a minimum number of coins is used.
Let's take the case of making 10¢ using these coins; we can do it in the following ways:
Using 1 coin of 10¢
Using 2 coins of 5¢
Using 1 coin of 5¢ and 5 coins of 1¢
Using 10 coins of 1¢

3
Example

4
Approach to Solve the Coin Change Problem
Let's say M(n) is the minimum number of coins needed to make change for the value n.
Let's start by picking the first coin, i.e., the coin with value d1. We now need to make the value n - d1, and M(n - d1) is the minimum number of coins needed for that. So the total number of coins needed is 1 + M(n - d1) (1 coin because we already picked the coin with value d1, and M(n - d1) coins to make the rest of the value).
Similarly, we can pick the second coin first and then attempt to get the optimal solution for the value n - d2, which will require M(n - d2) coins, for a total of 1 + M(n - d2).
We can repeat the process with all k coins; the minimum over all of these is our answer, i.e., M(n) = min over i with di ≤ n of {1 + M(n - di)}.

5
Analysis

6
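The recurrence above can be computed bottom-up; a minimal sketch (denominations and amounts are illustrative):

```python
def min_coins(n, denoms):
    """Bottom-up coin change: M[v] = minimum coins to make value v,
    using M[v] = min over d <= v of 1 + M[v - d]."""
    INF = float('inf')
    M = [0] + [INF] * n
    for v in range(1, n + 1):
        for d in denoms:
            if d <= v and M[v - d] + 1 < M[v]:
                M[v] = M[v - d] + 1
    return M[n]

# Slide example: make 10¢ from 1¢, 5¢, 10¢ coins
print(min_coins(10, [1, 5, 10]))   # → 1  (a single 10¢ coin)
```

Filling the table takes O(n·k) time for k denominations, one entry per value.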
Travelling Salesman Problem

Travelling Salesman Problem (TSP): Given a set of cities and the distance between every pair of cities, the problem is to find the shortest possible route that visits every city exactly once and returns to the starting point.

7
Naive Solution

1) Consider city 1 as the starting and ending point.
2) Generate all (n-1)! permutations of the remaining cities.
3) Calculate the cost of every permutation and keep track of the minimum-cost permutation.
4) Return the permutation with minimum cost.

Time Complexity: Θ(n!)

8
Using Dynamic Programming

To calculate cost(i) using Dynamic Programming, we need a recursive relation in terms of sub-problems.
Let us define C(S, i) to be the cost of the minimum cost path visiting each vertex in set S exactly once, starting at 1 and ending at i.
We start with all subsets of size 2 and calculate C(S, i) for all such subsets; then we calculate C(S, i) for all subsets S of size 3, and so on. Note that 1 must be present in every subset.

9
Algorithm

If the size of S is 2, then S must be {1, i}, and

    C(S, i) = dist(1, i)

Else if the size of S is greater than 2:

    C(S, i) = min { C(S - {i}, j) + dist(j, i) } where j belongs to S, j ≠ i and j ≠ 1.

10
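This recurrence is the Held-Karp dynamic program; a sketch using frozensets for S, with city 1 written as index 0 (the 4-city distance matrix is illustrative):

```python
from itertools import combinations

def tsp(dist):
    """Held-Karp: dist[i][j] = distance between cities i and j (0-indexed).
    C[(S, i)] = cost of the cheapest path starting at city 0,
    visiting every city in frozenset S exactly once, and ending at i."""
    n = len(dist)
    C = {}
    for i in range(1, n):                       # subsets of size 2: {0, i}
        C[(frozenset([0, i]), i)] = dist[0][i]
    for size in range(3, n + 1):                # grow the subsets
        for subset in combinations(range(1, n), size - 1):
            S = frozenset(subset) | {0}
            for i in subset:
                C[(S, i)] = min(C[(S - {i}, j)] + dist[j][i]
                                for j in subset if j != i)
    full = frozenset(range(n))
    # close the tour back at city 0
    return min(C[(full, i)] + dist[i][0] for i in range(1, n))

# Classic 4-city instance
d = [[0, 10, 15, 20],
     [10, 0, 35, 25],
     [15, 35, 0, 30],
     [20, 25, 30, 0]]
print(tsp(d))   # → 80
```

This runs in O(2^n · n²) time, a large improvement over the Θ(n!) naive solution, though still exponential.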
Differentiate between Divide & Conquer Method vs Dynamic Programming.

Divide & Conquer Method:
1. It involves three steps at each level of recursion:
   o Divide the problem into a number of subproblems.
   o Conquer the subproblems by solving them recursively.
   o Combine the solutions to the subproblems into the solution for the original problem.
2. It is recursive.
3. It does more work on subproblems and hence has more time consumption.
4. It is a top-down approach.
5. In this, subproblems are independent of each other.
6. For example: Merge Sort & Binary Search, etc.

Dynamic Programming:
1. It involves a sequence of four steps:
   o Characterize the structure of optimal solutions.
   o Recursively define the values of optimal solutions.
   o Compute the value of optimal solutions in a bottom-up fashion.
   o Construct an optimal solution from computed information.
2. It is non-recursive.
3. It solves each subproblem only once and then stores the result in a table.
4. It is a bottom-up approach.
5. In this, subproblems are interdependent.
6. For example: Matrix Chain Multiplication.
11
Differentiate between Dynamic Programming and Greedy Method

Dynamic Programming:
1. Dynamic Programming is used to obtain the optimal solution.
2. In Dynamic Programming, we choose at each step, but the choice may depend on the solutions to sub-problems.
3. Less efficient as compared to the greedy approach.
4. Example: 0/1 Knapsack.
5. It is guaranteed that Dynamic Programming will generate an optimal solution using the Principle of Optimality.

Greedy Method:
1. The Greedy Method is also used to get the optimal solution.
2. In a greedy algorithm, we make whatever choice seems best at the moment and then solve the sub-problems arising after the choice is made.
3. More efficient as compared to Dynamic Programming.
4. Example: Fractional Knapsack.
5. In the Greedy Method, there is no such guarantee of getting an optimal solution.
12
References

Text books:
Cormen, Leiserson, Rivest, Stein, “Introduction to Algorithms”, Prentice Hall of India, 3rd edition 2012.
Horowitz, Sahni and Rajasekaran, “Fundamentals of Computer Algorithms”, University Press (India), 2nd edition.

Websites:
https://www.tutorialspoint.com/design_and_analysis_of_algorithms/design_and_analysis_of_algorithms_01_knapsack.htm
https://algorithms.tutorialhorizon.com/dynamic-programming-subset-sum-problem/
https://www.codesdope.com/course/algorithms-coin-change/

17
THANK YOU
