Optimization Problems
• A set of choices must be made in
order to arrive at an optimal (min/max)
solution, subject to some constraints. (There
may be several solutions that achieve the
optimal value.)
• Two common techniques:
– Dynamic Programming (global)
– Greedy Algorithms (local)
Dynamic Programming
• Like divide-and-conquer, it breaks
problems down into smaller problems that are
solved recursively.
• In contrast, DP is applicable when the sub-
problems are not independent, i.e. when sub-
problems share sub-sub-problems. It solves
every sub-sub-problem just once and saves the
results in a table to avoid duplicated
computation.
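The table-saving idea can be seen in a few lines of Python (not part of the original slides; a minimal sketch using Fibonacci, a standard illustration):

```python
from functools import lru_cache

# Naive recursive Fibonacci re-solves shared sub-sub-problems
# exponentially often; caching each result in a table (here, the
# lru_cache dictionary) computes every sub-problem exactly once.
@lru_cache(maxsize=None)
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040, with only 31 distinct sub-problems solved
```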
Elements of DP Algorithms
• Sub-structure: decompose problem into smaller sub-
problems. Express the solution of the original problem
in terms of solutions for smaller problems.
• Table-structure: Store the answers to the sub-problem
in a table, because sub-problem solutions may be used
many times.
• Bottom-up computation: combine solutions on smaller
sub-problems to solve larger sub-problems, and
eventually arrive at a solution to the complete problem.
Applicability to Optimization
Problems
• Optimal sub-structure (principle of optimality): for the global
problem to be solved optimally, each sub-problem must be
solved optimally. This can be violated when sub-problems
overlap: by being “less optimal” on one sub-problem, we
may make big savings on another sub-problem.
• Overlapping of sub-problems: Many NP-hard problems can
be formulated as DP problems, but these formulations are not
efficient, because the number of sub-problems is
exponentially large. Ideally, the number of sub-problems
should be at most a polynomial number.
Optimized Chain Operations
• Determine the optimal sequence for performing a series of
operations. (The general class of problem is important
in compiler design for code optimization and in databases
for query optimization.)
• For example, given a series of matrices A1…An , we can
“parenthesize” this expression however we like, since matrix
multiplication is associative (but not commutative).
• Multiplying a p x q matrix A by a q x r matrix B yields a
p x r matrix C. (The # of columns of A must equal the # of
rows of B.)
Matrix Multiplication
• In particular, for 1 ≤ i ≤ p and 1 ≤ j ≤ r,
C[i, j] = Σk=1..q A[i, k] B[k, j]
• Observe that there are pr total entries in C
and each takes O(q) time to compute, thus
the total time to multiply 2 matrices is pqr.
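The pqr count can be checked directly with a short Python sketch (our own helper, not from the slides):

```python
def mat_mult(A, B):
    """Multiply a p x q matrix A by a q x r matrix B, counting
    scalar multiplications: one per (i, j, k) triple, i.e. p*q*r."""
    p, q, r = len(A), len(B), len(B[0])
    assert len(A[0]) == q, "columns of A must equal rows of B"
    C = [[0] * r for _ in range(p)]
    count = 0
    for i in range(p):
        for j in range(r):
            for k in range(q):
                C[i][j] += A[i][k] * B[k][j]
                count += 1
    return C, count

# A is 3 x 2, B is 2 x 4, so the count is 3 * 2 * 4 = 24.
C, count = mat_mult([[1, 2], [3, 4], [5, 6]], [[1, 0, 0, 1], [0, 1, 1, 0]])
print(count)  # 24
```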
Chain Matrix Multiplication
• Given a sequence of matrices A1 A2…An , and
dimensions p0 p1…pn where Ai is of dimension pi-1
x pi , determine multiplication sequence that
minimizes the number of operations.
• This algorithm does not perform the
multiplication, it just figures out the best order in
which to perform the multiplication.
Example: CMM
• Consider 3 matrices: A1 is 5 x 4, A2 is 4 x 6,
and A3 is 6 x 2.
Mult[((A1 A2)A3)] = (5x4x6) + (5x6x2) = 180
Mult[(A1 (A2A3 ))] = (4x6x2) + (5x4x2) = 88
Even for this small example, considerable savings
can be achieved by reordering the evaluation
sequence.
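The two costs above can be reproduced with a short Python sketch (hypothetical helper names, our own):

```python
def cost_left(p):
    """Cost of ((A1 A2) A3) for dimensions p = [p0, p1, p2, p3]."""
    return p[0] * p[1] * p[2] + p[0] * p[2] * p[3]

def cost_right(p):
    """Cost of (A1 (A2 A3)) for the same dimensions."""
    return p[1] * p[2] * p[3] + p[0] * p[1] * p[3]

p = [5, 4, 6, 2]  # A1: 5x4, A2: 4x6, A3: 6x2
print(cost_left(p), cost_right(p))  # 180 88
```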
DP Solution (I)
• Let Ai…j be the product of matrices i through j; Ai…j is a pi-1 x pj matrix. At the
highest level, we are multiplying two matrices together. That is, for any k,
1 ≤ k ≤ n-1,
A1…n = (A1…k)(Ak+1…n)
• The problem of determining the optimal sequence of multiplication is broken
up into 2 parts:
Q : How do we decide where to split the chain (what k)?
A : Consider all possible values of k.
Q : How do we parenthesize the subchains A1…k & Ak+1…n?
A : Solve by recursively applying the same scheme.
NOTE: this problem satisfies the “principle of optimality”.
• Next, we store the solutions to the sub-problems in a table and build the table
in a bottom-up manner.
DP Solution (II)
• For 1 ≤ i ≤ j ≤ n, let m[i, j] denote the minimum number
of multiplications needed to compute Ai…j .
• Example: minimum number of multiplies for A3…7
• In terms of pi , the product A3…7 has
dimensions p2 x p7.
DP Solution (III)
• The optimal cost can be described as follows:
– i = j ⇒ the sequence contains only 1 matrix, so m[i, j] = 0.
– i < j ⇒ the product can be split by considering each k, i ≤ k < j,
as Ai…k (pi-1 x pk ) times Ak+1…j (pk x pj ).
• This suggests the following recursive rule for computing
m[i, j]:
m[i, i] = 0
m[i, j] = min over i ≤ k < j of (m[i, k] + m[k+1, j] + pi-1 pk pj ) for i < j
Computing m[i, j]
• For a specific k,
(Ai … Ak)(Ak+1 … Aj)
= Ai…k (Ak+1 … Aj)  (m[i, k] mults)
= Ai…k Ak+1…j  (m[k+1, j] mults)
= Ai…j  (pi-1 pk pj mults)
• For the solution, evaluate for all k and take the minimum:
m[i, j] = min over i ≤ k < j of (m[i, k] + m[k+1, j] + pi-1 pk pj )
Matrix-Chain-Order(p)
  n ← length[p] - 1
  for i ← 1 to n  // initialization: O(n) time
      do m[i, i] ← 0
  for L ← 2 to n  // L = length of sub-chain
      do for i ← 1 to n - L + 1
          do j ← i + L - 1
             m[i, j] ← ∞
             for k ← i to j - 1
                 do q ← m[i, k] + m[k+1, j] + pi-1 pk pj
                    if q < m[i, j]
                       then m[i, j] ← q
                            s[i, j] ← k
  return m and s
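A direct Python transcription of the pseudocode above (a sketch; the 1-based indexing is kept by leaving row and column 0 unused):

```python
import math

def matrix_chain_order(p):
    """Bottom-up DP for chain matrix multiplication.
    p[i-1] x p[i] is the dimension of matrix A_i, for i = 1..n.
    Returns (m, s): m[i][j] is the minimum number of scalar
    multiplications for A_i..A_j; s[i][j] is the optimal split k."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for L in range(2, n + 1):              # L = length of sub-chain
        for i in range(1, n - L + 2):
            j = i + L - 1
            m[i][j] = math.inf
            for k in range(i, j):          # try every split point
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k
    return m, s

# The <5, 4, 6, 2, 7> example from a later slide: optimum is 158.
m, s = matrix_chain_order([5, 4, 6, 2, 7])
print(m[1][4])  # 158
```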
Extracting Optimum Sequence
• Leave a split marker indicating where the best split is (i.e.
the value of k leading to the minimum value of m[i, j]). We
maintain a parallel array s[i, j] in which we store the value
of k providing the optimal split.
• If s[i, j] = k, the best way to multiply the sub-chain Ai…j is
to first multiply the sub-chain Ai…k, then the sub-chain
Ak+1…j , and finally multiply them together. Intuitively, s[i, j]
tells us which multiplication to perform last. We only need
to store s[i, j] when there are at least 2 matrices, i.e. j > i.
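Recovering the parenthesization from s can be sketched in Python (our own names; the split table is recomputed here so the snippet runs on its own):

```python
import math

def matrix_chain_split(p):
    """Compute the split table s[i][j]: the k minimizing m[i][j]."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for L in range(2, n + 1):
        for i in range(1, n - L + 2):
            j = i + L - 1
            m[i][j] = math.inf
            for k in range(i, j):
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j], s[i][j] = q, k
    return s

def optimal_parens(s, i, j):
    """s[i][j] = k means: multiply A_i..k and A_k+1..j last."""
    if i == j:
        return f"A{i}"
    k = s[i][j]
    return "(" + optimal_parens(s, i, k) + " " + optimal_parens(s, k + 1, j) + ")"

s = matrix_chain_split([5, 4, 6, 2, 7])
print(optimal_parens(s, 1, 4))  # ((A1 (A2 A3)) A4)
```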
Example: DP for CMM
• The initial set of dimensions is <5, 4, 6, 2, 7>: we are
multiplying A1 (5x4) times A2 (4x6) times A3 (6x2) times
A4 (2x7). The optimal sequence is (A1 (A2A3 )) A4.
Finding a Recursive Solution
• Figure out the “top-level” choice you
have to make (e.g., where to split the
list of matrices)
• List the options for that decision
• Each option should require smaller sub-
problems to be solved
• Recursive function is the minimum (or
max) over all the options
m[i, j] = mini  k < j (m[i, k] + m[k+1, j] + pi-1pkpj )
Longest Common Subsequence
(LCS)
• Problem: Given sequences x[1..m] and
y[1..n], find a longest common
subsequence of both.
• Example: x=ABCBDAB and
y=BDCABA,
– BCA is a common subsequence and
– BCBA and BDAB are two LCSs
LCS
• Writing a recurrence equation
• The dynamic programming solution
Brute force solution
• Solution: for every subsequence of x,
check whether it is a subsequence of y.
Since x has 2^m subsequences, this takes
exponential time.
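A literal Python sketch of this brute force (our own code) makes the exponential cost visible:

```python
from itertools import combinations

def is_subsequence(s, t):
    """Check whether s is a subsequence of t by a greedy left-to-right scan."""
    it = iter(t)
    return all(c in it for c in s)  # 'in' consumes the iterator

def lcs_brute_force(x, y):
    """Try every subsequence of x, longest first: 2^m candidates in the
    worst case, each checked against y in linear time."""
    for length in range(len(x), -1, -1):
        for idx in combinations(range(len(x)), length):
            cand = "".join(x[i] for i in idx)
            if is_subsequence(cand, y):
                return cand
    return ""

print(len(lcs_brute_force("ABCBDAB", "BDCABA")))  # 4
```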
Writing the recurrence
equation
• Let Xi denote the ith prefix x[1..i] of x[1..m],
with X0 denoting the empty prefix;
• define Yj for y[1..n] similarly
• We will first compute the length of an LCS of
Xm and Yn, LenLCS(m, n), and then use
information saved during the computation for
finding the actual subsequence
• We need a recursive formula for computing
LenLCS(i, j).
Writing the recurrence
equation
• If Xi and Yj end with the same character xi=yj,
the LCS must include that character: if it did
not, we could get a longer LCS by appending the
common character.
• If Xi and Yj do not end with the same
character there are two possibilities:
– either the LCS does not end with xi,
– or it does not end with yj
• Let Zk denote an LCS of Xi and Yj
Xi and Yj end with xi = yj
Xi: x1 x2 … xi-1 xi
Yj: y1 y2 … yj-1 yj = xi
Zk: z1 z2 … zk-1 zk = yj = xi
Zk is Zk-1 followed by zk = yj = xi, where
Zk-1 is an LCS of Xi-1 and Yj-1, and
LenLCS(i, j) = LenLCS(i-1, j-1) + 1
Xi and Yj end with xi ≠ yj
Case 1: the LCS does not end with yj:
Xi: x1 x2 … xi-1 xi
Yj: y1 y2 … yj-1 yj
Zk: z1 z2 … zk-1 zk ≠ yj
Zk is an LCS of Xi and Yj-1
Case 2: the LCS does not end with xi:
Xi: x1 x2 … xi-1 xi
Yj: y1 y2 … yj-1 yj
Zk: z1 z2 … zk-1 zk ≠ xi
Zk is an LCS of Xi-1 and Yj
LenLCS(i, j) = max{ LenLCS(i, j-1), LenLCS(i-1, j) }
The recurrence equation

LenLCS(i, j) =
  0                                      if i = 0 or j = 0
  LenLCS(i-1, j-1) + 1                   if i, j > 0 and xi = yj
  max{ LenLCS(i, j-1), LenLCS(i-1, j) }  otherwise
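A bottom-up Python sketch of this recurrence (our own transcription of the slides' formula):

```python
def len_lcs(x, y):
    """Fill the LenLCS table bottom-up; row 0 and column 0 stay 0
    (the base case of the recurrence)."""
    m, n = len(x), len(y)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:                  # xi = yj: extend diagonally
                L[i][j] = L[i - 1][j - 1] + 1
            else:                                     # otherwise take the max
                L[i][j] = max(L[i][j - 1], L[i - 1][j])
    return L[m][n]

print(len_lcs("ABCBDAB", "BDCABA"))  # 4
```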
The dynamic programming
solution
• Initialize the first row and the first column of
the matrix LenLCS to 0
• Calculate LenLCS (1, j) for j = 1,…, n
• Then the LenLCS (2, j) for j = 1,…, n, etc.
• Also store, in a parallel table, an arrow pointing to
the array element that was used in the
computation.
Example (x = ABCB, y = BDCA)

      yj   B   D   C   A
 xi    0   0   0   0   0
 A     0   0   0   0   1
 B     0   1   1   1   1
 C     0   1   1   2   2
 B     0   1   1   2   2

To find an LCS follow the arrows: each
diagonal arrow corresponds to a member of the LCS
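Following the arrows corresponds to backtracking through the length table; a Python sketch (our own, recomputing the arrow directions from the table values instead of storing them):

```python
def lcs(x, y):
    """Fill the length table, then walk back from (m, n): each
    'diagonal' step (a character match) contributes one LCS character."""
    m, n = len(x), len(y)
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i][j - 1], L[i - 1][j])
    out = []
    i, j = m, n
    while i > 0 and j > 0:
        if x[i - 1] == y[j - 1]:          # diagonal arrow: part of the LCS
            out.append(x[i - 1])
            i, j = i - 1, j - 1
        elif L[i - 1][j] >= L[i][j - 1]:  # up arrow
            i -= 1
        else:                             # left arrow
            j -= 1
    return "".join(reversed(out))

print(lcs("ABCB", "BDCA"))  # BC (one LCS of the table above)
```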