Design and Analysis of
Algorithms (DAA)
LECTURE 10
DYNAMIC PROGRAMMING APPLICATION
DR. HAMID H. AWAN
2
Review of lecture 09
 Introduction to Dynamic Programming
 Hallmarks of Dynamic Programming
 Fibonacci Series
3
Contents of the lecture
 Topics
 Minimum Coin Problem
 The Longest Common Subsequence
 Pre-requisite knowledge
 C/C++ programming
 Mathematical induction
 Theorem proving taxonomy
4
Min Coin Problem
 Suppose you are given some coins. Each coin
is worth Rs 1, Rs 4, or Rs 5.
 That is, coins = {1, 4, 5}
 Now you are asked to make Rs 13 with the help
of the given coins.
 How will you make Rs 13?
 Can you determine the combination that requires the
minimum number of coins to make Rs 13?
5
Min Coin Problem (2)
 Greedy Approach
 Take the largest coins first to get as close as possible with the
minimum number of coins.
 Solution: 5+5+1+1+1=13
 Total coins used: 5
 Brute Force approach:
 Perform an exhaustive search over all combinations. It
guarantees an optimal solution.
 It turns out that 5+4+4=13 requires only 3 coins to make Rs 13.
 Better than the greedy approach, but at what cost?
6
Min Coin Problem (3) – Brute-Force
7
Min Coin Problem (4) – Brute-Force
 Algorithm-2: min-coin(coins, m) : a
Input:
Coins: set of coins
m: the number of Rupees to make with coins.
Returns:
a: the minimum number of coins used to make m.
[To be written on board].
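As a sketch of what will be written on the board, the recursive brute-force search might look like this in C++ (the function name minCoinBF and its exact interface are illustrative, not from the lecture):

```cpp
#include <vector>
#include <climits>

// Brute force: try every coin as the last coin used to make m,
// and recurse on the remainder. Exponential time in m.
int minCoinBF(const std::vector<int>& coins, int m) {
    if (m == 0) return 0;            // nothing left to make
    int best = INT_MAX;
    for (int c : coins) {
        if (c <= m) {
            int sub = minCoinBF(coins, m - c);
            if (sub != INT_MAX && sub + 1 < best)
                best = sub + 1;      // one more coin than the subproblem
        }
    }
    return best;                     // INT_MAX means m is unreachable
}
```

For coins = {1, 4, 5} and m = 13 this explores all combinations and finds the 3-coin answer 5+4+4.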
8
Min Coin Problem (4) – Dynamic
Programming
 Algorithm-3: min-coinDP(coins, m, mem) :
a
Input:
Coins: set of coins
m: the number of Rupees to make with coins.
mem: memoization array
Returns:
a: the minimum number of coins used to make m.
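A memoized C++ sketch of min-coinDP, under the same illustrative interface (the name minCoinDP and the -1 sentinel for "not yet computed" are assumptions):

```cpp
#include <vector>
#include <climits>

// Top-down DP: mem[r] caches the answer for amount r
// (-1 = not yet computed). Each amount is solved only once.
int minCoinDP(const std::vector<int>& coins, int m, std::vector<int>& mem) {
    if (m == 0) return 0;
    if (mem[m] != -1) return mem[m];          // reuse cached subproblem
    int best = INT_MAX;
    for (int c : coins) {
        if (c <= m) {
            int sub = minCoinDP(coins, m - c, mem);
            if (sub != INT_MAX && sub + 1 < best)
                best = sub + 1;
        }
    }
    return mem[m] = best;                     // memoize before returning
}
```

Usage: initialize mem as a vector of m+1 entries set to -1, then call minCoinDP(coins, m, mem).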
9
Min Coin Problem (5) – Dynamic
Programming
 Time and space complexity?
 T(m) = O(m · |coins|), i.e., O(m) for a fixed coin set
 S(m) = O(m)
10
Longest Common Subsequence
 Consider two strings:
 s1 = “ABCD”
 s2 = “ACFD”
 A, C and D are common, so:
LCS(s1, s2) := “ACD”
|LCS(s1, s2)| := 3
Matching lines between the strings cannot cross.
|s1| is not necessarily equal to |s2|
A B C D
A C F D
11
Longest Common Subsequence (2)
 Brute-Force Approach
 Every character of s1 may have to be compared with every character of s2 in the
worst case.
 If the length of s1 is m and that of s2 is n, then enumerating all subsequences of
s1 and checking each against s2 takes roughly O(n · 2^m) time.
12
Longest Common Subsequence (3)
 The Dynamic Programming approach
 Example:
 Consider s1=“BRANXH” and s2=“CRASH”
B R A N X H
C
R
A
S
H
|s1| = m
|s2| = n
13
Longest Common Subsequence (4)
 B R A N X H
 0 0 0 0 0 0 0
C 0 0
R 0
A 0
S 0
H 0
|s1|=m
|s2|=n
i = 1 to m
j = 1 to n
If s1[i] == s2[j] then map[i, j] := map[i-1, j-1] + 1
Else map[i, j] := max(map[i, j-1], map[i-1, j])
Define a map[m+1, n+1] matrix
Let map[0, :] := 0 and map[:, 0] := 0
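This recurrence can be sketched as a bottom-up C++ function (the name lcsLength is illustrative); it fills the same (m+1) × (n+1) table, with row 0 and column 0 as the zero base case:

```cpp
#include <string>
#include <vector>
#include <algorithm>

// Bottom-up LCS length: map[i][j] = LCS length of s1[0..i) and s2[0..j).
// Row 0 and column 0 stay 0 (empty-prefix base case).
int lcsLength(const std::string& s1, const std::string& s2) {
    int m = s1.size(), n = s2.size();
    std::vector<std::vector<int>> map(m + 1, std::vector<int>(n + 1, 0));
    for (int i = 1; i <= m; ++i)
        for (int j = 1; j <= n; ++j)
            map[i][j] = (s1[i-1] == s2[j-1])          // match: extend diagonal
                        ? map[i-1][j-1] + 1
                        : std::max(map[i][j-1], map[i-1][j]); // mismatch: best neighbor
    return map[m][n];
}
```

On the lecture's example, lcsLength("BRANXH", "CRASH") yields 3.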
14
Longest Common Subsequence (4)
 B R A N X H
 0 0 0 0 0 0 0
C 0 0 0 0 0 0 0
R 0
A 0
S 0
H 0
15
Longest Common Subsequence (4)
 B R A N X H
 0 0 0 0 0 0 0
C 0 0 0 0 0 0 0
R 0 0
A 0
S 0
H 0
16
Longest Common Subsequence (4)
 B R A N X H
 0 0 0 0 0 0 0
C 0 0 0 0 0 0 0
R 0 0 1
A 0
S 0
H 0
s1[i] == s2[j]
17
Longest Common Subsequence (4)
 B R A N X H
 0 0 0 0 0 0 0
C 0 0 0 0 0 0 0
R 0 0 1 1
A 0
S 0
H 0
s1[i] != s2[j]
18
Longest Common Subsequence (4)
 B R A N X H
 0 0 0 0 0 0 0
C 0 0 0 0 0 0 0
R 0 0 1 1 1 1 1
A 0
S 0
H 0
19
Longest Common Subsequence (4)
 B R A N X H
 0 0 0 0 0 0 0
C 0 0 0 0 0 0 0
R 0 0 1 1 1 1 1
A 0 0 1
S 0
H 0
20
Longest Common Subsequence (4)
 B R A N X H
 0 0 0 0 0 0 0
C 0 0 0 0 0 0 0
R 0 0 1 1 1 1 1
A 0 0 1 2
S 0
H 0
21
Longest Common Subsequence (4)
 B R A N X H
 0 0 0 0 0 0 0
C 0 0 0 0 0 0 0
R 0 0 1 1 1 1 1
A 0 0 1 2 2 2 2
S 0 0 1 2 2 2 2
H 0
22
Longest Common Subsequence (4)
 B R A N X H
 0 0 0 0 0 0 0
C 0 0 0 0 0 0 0
R 0 0 1 1 1 1 1
A 0 0 1 2 2 2 2
S 0 0 1 2 2 2 2
H 0 0 1 2 2 2 3
|LCS(s1, s2)| = 3
23
Longest Common Subsequence (5)
 B R A N X H
 0 0 0 0 0 0 0
C 0 0 0 0 0 0 0
R 0 0 1 1 1 1 1
A 0 0 1 2 2 2 2
S 0 0 1 2 2 2 2
B 0 1 1 2 2 2 2
24
Longest Common Subsequence (6)
 Time Complexity?
 T(m, n) = O(mn), since each of the m·n table cells is filled once
 Space Complexity?
 S(m, n)=O(mn)
25
Thank you…!
 End of Lecture 10
Design and Analysis of
Algorithms (DAA)
LECTURE 11
GREEDY ALGORITHM
DR. HAMID H. AWAN
27
Review of lecture 10
 Applications of Dynamic Programming
 Min Coins Problem
 Longest Common Subsequence Problem
28
Contents of the lecture
 Topics
 Greedy Algorithm
 The knapsack problem
 The shortest path finding problem
 Pre-requisite knowledge
 C/C++ programming
 Mathematical induction
 Theorem proving taxonomy
29
The Greedy Approach
 Greedy Algorithm
A technique to build a complete solution by
making a sequence of best “selection” steps.
The selection depends on the actual problem.
The focus is on “What is the best step from this point?”
30
Applications of Greedy Algorithm
 Sorting
 Merging sorted lists
 Knapsack
 Minimum Spanning Trees
 Huffman encoding
31
Applications of Greedy Algorithm (2)
 Sorting
Select the minimum element in the list and
move it to the beginning.
E.g., selection sort, insertion sort.
To what extent is a greedy sorting algorithm
optimal?
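The greedy choice in selection sort can be sketched as follows (illustrative C++, not lecture code): at each step, pick the minimum of the unsorted suffix.

```cpp
#include <vector>
#include <algorithm>

// Selection sort as a greedy algorithm: at each step, greedily pick
// the minimum of the unsorted suffix and move it to the front.
void selectionSort(std::vector<int>& a) {
    for (std::size_t i = 0; i + 1 < a.size(); ++i) {
        std::size_t best = i;
        for (std::size_t j = i + 1; j < a.size(); ++j)
            if (a[j] < a[best]) best = j;   // greedy choice: current minimum
        std::swap(a[i], a[best]);
    }
}
```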
32
Applications of Greedy Algorithm (3)
 Merging Sorted Lists
 Input: n sorted arrays
 A[1], A[2], A[3], …, A[n]
 Problem: To merge all the sorted arrays into one sorted array as fast as possible.
 E.g., Merge Sort
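One common greedy way to merge n sorted lists is a min-heap over the current front elements: always output the smallest available head. This C++ sketch (the name mergeSorted is an assumption) is one possible realization, not necessarily the lecture's algorithm:

```cpp
#include <vector>
#include <queue>
#include <tuple>

// Greedy n-way merge: repeatedly take the smallest front element
// among all lists, using a min-heap of (value, list index, position).
std::vector<int> mergeSorted(const std::vector<std::vector<int>>& lists) {
    using Entry = std::tuple<int, std::size_t, std::size_t>; // value, list, pos
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> heap;
    for (std::size_t i = 0; i < lists.size(); ++i)
        if (!lists[i].empty()) heap.push({lists[i][0], i, 0});
    std::vector<int> out;
    while (!heap.empty()) {
        auto [v, li, pos] = heap.top();   // greedy choice: global minimum head
        heap.pop();
        out.push_back(v);
        if (pos + 1 < lists[li].size())   // advance within the same list
            heap.push({lists[li][pos + 1], li, pos + 1});
    }
    return out;
}
```

With k total elements across n lists this runs in O(k log n).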
Dynamic Programming vs. Greedy Algorithms
 Dynamic programming
 We make a choice at each step
 The choice depends on solutions to subproblems
 Bottom up solution, from smaller to larger subproblems
 Greedy algorithm
 Make the greedy choice and THEN
 Solve the subproblem arising after the choice is made
 The choice we make may depend on previous choices, but not on
solutions to subproblems
 Top down solution, problems decrease in size
33
34
Dynamic Programming vs. Greedy
Algorithms (2)
Dynamic Programming = Brute-Force + Greedy Algorithm
35
Optimization problems
 An optimization problem is one in which you want
to find, not just a solution, but the best solution
 A “greedy algorithm” sometimes works well for
optimization problems
 A greedy algorithm works in phases. At each phase:
 You take the best you can get right now, without regard
for future consequences
 You hope that by choosing a local optimum at each
step, you will end up at a global optimum
36
The Knapsack Problem
 The 0-1 knapsack problem
 A thief robbing a store finds n items: the i-th item is worth vi dollars and
weighs wi pounds (vi, wi integers)
 The thief can only carry W pounds in his knapsack
 Items must be taken entirely or left behind
 Which items should the thief take to maximize the value of his load?
 The fractional knapsack problem
 Similar to above
 The thief can take fractions of items
37
0-1 Knapsack - Dynamic Programming
 P(i, w) – the maximum profit that can be obtained from
items 1 to i, if the knapsack has size w
 Case 1: thief takes item i
P(i, w) = vi + P(i - 1, w - wi)
 Case 2: thief does not take item i
P(i, w) = P(i - 1, w)
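The two cases can be sketched as a bottom-up C++ table (the name knapsack01 is illustrative); taking item i adds its value vi, skipping it carries P(i-1, w) forward:

```cpp
#include <vector>
#include <algorithm>

// Bottom-up 0-1 knapsack: P[i][w] = best profit using items 1..i
// with capacity w, following the two cases above.
int knapsack01(const std::vector<int>& v, const std::vector<int>& wt, int W) {
    int n = v.size();
    std::vector<std::vector<int>> P(n + 1, std::vector<int>(W + 1, 0));
    for (int i = 1; i <= n; ++i)
        for (int w = 0; w <= W; ++w) {
            P[i][w] = P[i-1][w];                       // case 2: skip item i
            if (wt[i-1] <= w)                          // case 1: take item i
                P[i][w] = std::max(P[i][w], v[i-1] + P[i-1][w - wt[i-1]]);
        }
    return P[n][W];
}
```

Time and space are O(nW), the size of the table.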
38
The Fractional Knapsack Problem
 Given: A set S of n items, with each item i having
 bi - a positive benefit
 wi - a positive weight
 Goal: Choose items with maximum total benefit but with weight at
most W.
 If we are allowed to take fractional amounts, then this is the fractional
knapsack problem.
 In this case, we let xi denote the amount we take of item i
 Objective: maximize Σi∈S bi (xi / wi)
 Constraint: Σi∈S xi ≤ W
39
Example
 Given: A set S of n items, with each item i having
 bi - a positive benefit
 wi - a positive weight
 Goal: Choose items with maximum total benefit but with weight at
most W.
Items: 1 2 3 4 5
Weight: 4 ml 8 ml 2 ml 6 ml 1 ml
Benefit: $12 $32 $40 $30 $50
Value ($ per ml): 3 4 20 5 50
“knapsack” capacity: 10 ml
Solution:
• 1 ml of item 5
• 2 ml of item 3
• 6 ml of item 4
• 1 ml of item 2
40
The Fractional knapsack algorithm
 The greedy algorithm:
Step 1: Sort the items by pi/wi in nonincreasing order.
Step 2: Put the objects into the knapsack
according to the sorted sequence, taking as much
of each as we can.
Example: Capacity = 20
S.No Weight Price
1 18 25
2 15 24
3 10 15
41
The Fractional knapsack algorithm
Solution
p1/w1 = 25/18 ≈ 1.39
p2/w2 = 24/15 = 1.6
p3/w3 = 15/10 = 1.5
Sorting in descending order of ratio selects all of item 2, half of item 3,
and none of item 1,
so
Optimal solution: x1 = 0, x2 = 1, x3 = 1/2, with value 24 + 7.5 = 31.5
42
Shortest paths on a special graph
 Problem: Find a shortest path from v0 to v3.
 The greedy method can solve this problem.
 The shortest path: 1 + 2 + 4 = 7.
43
Shortest paths on a multi-stage
graph
 Problem: Find a shortest path from v0 to v3 in the multi-stage
graph.
44
Solution of the above problem
 dmin(i,j): minimum distance between i and j.
 This problem can be solved by the dynamic programming method.
dmin(v0, v3) = min{ 3 + dmin(v1,1, v3),
                    1 + dmin(v1,2, v3),
                    5 + dmin(v1,3, v3),
                    7 + dmin(v1,4, v3) }
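The backward recurrence can be implemented stage by stage. The actual graph appears only on the slide, so the adjacency list in the test below is a hypothetical multi-stage graph chosen purely for illustration (one in which the shortest path happens to be 1 + 2 + 4 = 7, like the slide's):

```cpp
#include <vector>
#include <utility>
#include <climits>
#include <algorithm>

// Backward DP on a multi-stage graph: dist[u] = min distance from u to
// the target. Nodes are numbered so every edge goes from a lower to a
// higher node id, with node n-1 as the target.
int shortestMultiStage(int n, const std::vector<std::vector<std::pair<int,int>>>& adj) {
    std::vector<int> dist(n, INT_MAX);
    dist[n - 1] = 0;                                   // target node
    for (int u = n - 2; u >= 0; --u)                   // process stages backward
        for (auto [v, w] : adj[u])
            if (dist[v] != INT_MAX)
                dist[u] = std::min(dist[u], w + dist[v]);
    return dist[0];
}
```

Each edge is examined once, so the running time is O(V + E).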
45
Thank you.
Questions?