The document discusses algorithms and data structures. It begins with two quotes about programming and algorithms. It then provides pseudocode for naive and optimized recursive Fibonacci algorithms, as well as an iterative dynamic programming version. It also covers dynamic programming approaches for calculating Fibonacci numbers, Catalan numbers, the chessboard traversal problem, the rod cutting problem, longest common subsequence, and assembly line traversal. The key ideas are introducing dynamic programming techniques like memoization and bottom-up iteration to improve the time complexity of recursive algorithms from exponential to polynomial.
1. CSC 421: Applied Algorithms and Structures
Week 6
“Programming isn't about what you know; it's about what you can figure out.” – Chris Pine
“An algorithm must be seen to be believed.” – Donald Knuth
2. Fibonacci pseudocode
Naïve recursive version (exponential, O(2ⁿ)):

Fibo(n)
  if n = 0 or n = 1 then return 1
  return Fibo(n-1) + Fibo(n-2)
DP (memoized) recursive version (Θ(𝑛)):

Fibo(n, f)
  if f[n] exists then return f[n]
  if n = 0 or n = 1 then
    f[n] = 1
    return 1
  sum = Fibo(n-1, f) + Fibo(n-2, f)
  f[n] = sum
  return sum
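For concreteness, here is a minimal Python sketch of both versions (an illustration, not part of the original slides); the dictionary f plays the role of the memo table, and the slides' convention Fibo(0) = Fibo(1) = 1 is kept.

# Naive recursion: recomputes the same subproblems exponentially often.
def fibo_naive(n):
    if n == 0 or n == 1:
        return 1
    return fibo_naive(n - 1) + fibo_naive(n - 2)

# Memoized (top-down DP): each subproblem is solved once, so O(n) time.
def fibo_memo(n, f=None):
    if f is None:
        f = {}
    if n in f:
        return f[n]
    if n == 0 or n == 1:
        f[n] = 1
        return 1
    f[n] = fibo_memo(n - 1, f) + fibo_memo(n - 2, f)
    return f[n]

print(fibo_memo(30))   # 1346269 under this indexing convention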
8. Catalan numbers
• Naïve pseudocode:

CATALAN(n)
  if n = 0 then return 1
  sum = 0
  for i = 1 to n
    sum += CATALAN(i-1) * CATALAN(n-i)
  return sum
• Top-down approach using memoization
• Bottom-up approach using tabulation
9. Catalan numbers
• Dynamic programming pseudocode:

Catalan(n)
  table[0] = 1
  for i = 1 to n
    sum = 0
    for j = 0 to i-1
      sum += table[j] * table[i-j-1]
    table[i] = sum
  return table[n]
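A direct Python rendering of the bottom-up pseudocode above (a sketch for illustration):

def catalan(n):
    # table[i] will hold the i-th Catalan number; table[0] = 1 is the base case.
    table = [0] * (n + 1)
    table[0] = 1
    for i in range(1, n + 1):
        # C(i) = sum over j of C(j) * C(i - j - 1)
        table[i] = sum(table[j] * table[i - j - 1] for j in range(i))
    return table[n]

print([catalan(i) for i in range(7)])   # [1, 1, 2, 5, 14, 42, 132]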
11. Chessboard traversal
Given an 𝑛 × 𝑛 chessboard 𝑝 where each square has a profit associated with it, the problem is to find the path from the top row to the bottom row that maximizes the total profit. A move from a square may only go to a square in the next row down that it touches, that is, one down and one to the left, one straight down, or one down and one to the right.
See example on the whiteboard.
12. Chessboard traversal
This is a maximization optimization problem. You are given an 𝑛 × 𝑛 table 𝑝 of profits.
Here is a top-down recursive definition of the maximal profit:

𝑞(𝑖, 𝑗) =
  0                                                       if 𝑗 < 1 or 𝑗 > 𝑛
  𝑝[𝑖, 𝑗]                                                 if 𝑖 = 1
  𝑝[𝑖, 𝑗] + max{ 𝑞(𝑖−1, 𝑗−1), 𝑞(𝑖−1, 𝑗), 𝑞(𝑖−1, 𝑗+1) }   otherwise
q(i,j)
  if j < 1 or j > n then return 0
  else if i = 1 then return p[i,j]
  else return p[i,j] + max(q(i-1,j-1), q(i-1,j), q(i-1,j+1))
This algorithm would take exponential (Ω(2ⁿ)) time. Show this with a recursion tree.
13. Chessboard traversal
Instead of beginning by trying to compute the maximal profit path of length 𝑛, we will
compute maximal profits for paths of length 1, then paths of length 2, and so forth.
We’ll use a 2-dimensional table called 𝑞.
The value of 𝑞[𝑖, 𝑗] is the maximum profit one can earn over all paths that end at square (𝑖, 𝑗).
Represent the computation of 𝑞 with an 𝑛 × 𝑛 table. Fill the table row-by-row. Begin
by filling row 1 with the values from the profit table (𝑝).
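A bottom-up Python sketch of this table-filling scheme (illustrative; the function and variable names are our own). It keeps only the previous row of 𝑞, and it follows the slides' convention that off-board squares contribute 0, which is safe when profits are nonnegative.

def max_profit(p):
    # p: n x n grid of profits, p[0] is the top row.
    n = len(p)
    def q_at(row, j):
        # Off-board squares contribute 0, as in the recurrence on slide 12
        # (safe as long as profits are nonnegative).
        return row[j] if 0 <= j < n else 0
    q = p[0][:]                       # profits for paths of length 1 (row i = 1)
    for i in range(1, n):             # extend to paths ending in the next row
        q = [p[i][j] + max(q_at(q, j - 1), q_at(q, j), q_at(q, j + 1))
             for j in range(n)]
    return max(q)                     # best path ending anywhere in the bottom row

print(max_profit([[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]]))        # 18, via the path 3 -> 6 -> 9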
14. Developing a dynamic programming solution
1. Formulate the problem recursively.
   a) Specification.
   b) Solution.
2. Build solutions to your recurrence from the bottom up.
   a) Identify the sub-problems.
   b) Choose a memoization data structure.
   c) Identify dependencies.
   d) Find a good evaluation order.
   e) Analyze space and running time.
   f) Write down the algorithm.
15. Edit distance
• This is a minimization optimization problem.
• Note the representation showing both strings and the edit operations, for example:

A L G O R   I   T H M
A L   T R U I S T I C
    d s   i   i   s s

(d = delete, s = substitute, i = insert; a blank in a string row marks a gap.)
• Look at the steps in creating a dynamic programming solution to a problem.
• Specify those for the edit distance problem.
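The slides leave the formulation as an exercise; for reference, here is a sketch of the standard table-based edit distance (Wagner-Fischer), with unit cost for each insertion, deletion, and substitution:

def edit_distance(x, y):
    m, n = len(x), len(y)
    # d[i][j] = edit distance between the prefixes x[:i] and y[:j].
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                          # delete all of x[:i]
    for j in range(n + 1):
        d[0][j] = j                          # insert all of y[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if x[i - 1] == y[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # delete x[i]
                          d[i][j - 1] + 1,        # insert y[j]
                          d[i - 1][j - 1] + sub)  # match or substitute
    return d[m][n]

print(edit_distance("ALGORITHM", "ALTRUISTIC"))   # 6: one d, two i, three s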
16. Rod cutting problem
Given a rod of length 𝑛 and a price chart giving the price of a segment of each length up to 𝑛, determine which set of cuts maximizes the total price.
Example: given a rod of length 4, what is the optimal set of cuts?
17. Rod cutting problem
We can simplify the optimization by realizing that we need to know where the leftmost
cut will be in an optimal cutting of the rod. Given that, we get the following recurrence
relation for calculating 𝑟[𝑘], the maximum profit obtainable from a rod of length 𝑘:
𝑟[𝑘] =
  0                                     if 𝑘 = 0
  max{ 𝑝[𝑖] + 𝑟[𝑘−𝑖] : 1 ≤ 𝑖 ≤ 𝑘 }      otherwise
The index 𝑖 in the maximum calculation is the location of the leftmost cut.
Calculating this recurrence from the top-down is done by the pseudocode on the next
slide.
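Slides 18-20 are not reproduced here; as a stand-in, this is a sketch of a standard top-down (memoized) computation of the recurrence, with an illustrative price chart:

def cut_rod(p, n, r=None):
    # p[i] = price of a segment of length i (p[0] = 0 is a placeholder).
    # Returns the maximum revenue r[n] for a rod of length n.
    if r is None:
        r = {0: 0}
    if n in r:
        return r[n]
    # Try each length i for the leftmost piece.
    r[n] = max(p[i] + cut_rod(p, n - i, r) for i in range(1, n + 1))
    return r[n]

p = [0, 1, 5, 8, 9]      # illustrative prices for lengths 1..4
print(cut_rod(p, 4))     # 10: two pieces of length 2 (5 + 5)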
21. Analysis of that solution
Two nested loops:
Σ_{𝑗=1}^{𝑛} Σ_{𝑖=1}^{𝑗} 1 = Σ_{𝑗=1}^{𝑛} 𝑗 = 𝑛(𝑛 + 1)/2 = Θ(𝑛²)
22. How does the bottom-up approach work?
The array element 𝑝[𝑖] holds the price for a segment of length 𝑖. The array element
𝑟[𝑖] holds the optimal price for a rod of length 𝑖.
Compute the optimal solution for a rod of length 1.
Compute the optimal solution for a rod of length 2.
…
Computing the optimal solution for a rod of length 𝑗 means trying each of the 𝑗 possible lengths for the leftmost piece. We think of each candidate cut as being the first (leftmost) one.
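The same computation done bottom-up, filling 𝑟 in order of increasing rod length (a sketch); the inner loop tries each candidate leftmost piece, giving the Θ(𝑛²) behavior analyzed on slide 21:

def cut_rod_bottom_up(p, n):
    # r[k] = optimal revenue for a rod of length k.
    # first[k] = length of the leftmost piece in one optimal cutting.
    r = [0] * (n + 1)
    first = [0] * (n + 1)
    for k in range(1, n + 1):
        for i in range(1, k + 1):            # leftmost piece has length i
            if p[i] + r[k - i] > r[k]:
                r[k] = p[i] + r[k - i]
                first[k] = i
    return r[n], first

revenue, first = cut_rod_bottom_up([0, 1, 5, 8, 9], 4)
print(revenue)    # 10, with first[4] = 2: cut off a piece of length 2 first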
23. Longest common subsequence
In the longest-common-subsequence problem, we are given two sequences:
𝑋 = 〈𝑥1, 𝑥2, … , 𝑥𝑚〉, 𝑌 = 〈𝑦1, 𝑦2, … , 𝑦𝑛〉
We wish to find a maximum-length common subsequence 𝑍 of 𝑋 and 𝑌.
24. LCS: definition of subsequence
Given a sequence
𝑋 = 〈 𝑥1, 𝑥2, … , 𝑥𝑚〉
Another sequence
𝑍 = 〈𝑧1, 𝑧2, … , 𝑧𝑘〉
is a subsequence of 𝑋 if there exists a strictly increasing sequence 〈𝑖1, 𝑖2, … , 𝑖𝑘〉 of indices of 𝑋 such that for all 𝑗 = 1, 2, … , 𝑘, we have 𝑥𝑖𝑗 = 𝑧𝑗 (the character of 𝑋 at index 𝑖𝑗 equals the 𝑗th character of 𝑍).
Notation: 𝑋𝑗 is the first 𝑗 characters of sequence 𝑋.
Also, assume that 𝑋 has 𝑚 characters, 𝑌 has 𝑛 characters, and 𝑍 has 𝑘 characters.
25. LCS: optimal substructure
Given sequences 𝑋 and 𝑌 and longest common subsequence 𝑍:
• If 𝑥𝑚 = 𝑦𝑛, then 𝑧𝑘 = 𝑥𝑚 = 𝑦𝑛 and 𝑍𝑘−1 is an LCS of 𝑋𝑚−1 and 𝑌𝑛−1.
• If 𝑥𝑚 ≠ 𝑦𝑛, then 𝑧𝑘 ≠ 𝑥𝑚 implies that 𝑍 is an LCS of 𝑋𝑚−1 and 𝑌.
• If 𝑥𝑚 ≠ 𝑦𝑛, then 𝑧𝑘 ≠ 𝑦𝑛 implies that 𝑍 is an LCS of 𝑋 and 𝑌𝑛−1.
The LCS problem has optimal substructure.
26. LCS: recursive solution
We can use the optimal substructure formulas to construct a recurrence relation for 𝑐[𝑖, 𝑗], which is the length of the LCS for the pair 𝑋𝑖 and 𝑌𝑗:

𝑐[𝑖, 𝑗] =
  0                                 if 𝑖 = 0 or 𝑗 = 0
  𝑐[𝑖−1, 𝑗−1] + 1                   if 𝑖, 𝑗 > 0 and 𝑥𝑖 = 𝑦𝑗
  max{ 𝑐[𝑖, 𝑗−1], 𝑐[𝑖−1, 𝑗] }       if 𝑖, 𝑗 > 0 and 𝑥𝑖 ≠ 𝑦𝑗
There are many possible subproblems, but the recurrence for each entry consults only a few of them.
27. LCS: computing using DP
Note that the 𝑐 values may be placed into a table of size (𝑚 + 1) × (𝑛 + 1). Index the rows from top to bottom 0 to 𝑚, and the columns from left to right 0 to 𝑛. Note that computing 𝑐[𝑖, 𝑗] requires only table values to its left, above it, or diagonally above and to the left.
So fill the table row-by-row starting at row 0.
On the board: an example with the sequences 𝑋 = 〈𝐴, 𝐵, 𝐶, 𝐵, 𝐷, 𝐴, 𝐵〉 and 𝑌 = 〈𝐵, 𝐷, 𝐶, 𝐴, 𝐵, 𝐴〉.
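A Python sketch of the table fill, together with the usual trace-back to recover one LCS (the slides do this part on the board):

def lcs(x, y):
    m, n = len(x), len(y)
    # c[i][j] = length of an LCS of x[:i] and y[:j]; row 0 and column 0 stay 0.
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i][j - 1], c[i - 1][j])
    # Trace back from c[m][n], following the choices the recurrence made.
    out, i, j = [], m, n
    while i > 0 and j > 0:
        if x[i - 1] == y[j - 1]:
            out.append(x[i - 1])
            i -= 1
            j -= 1
        elif c[i - 1][j] >= c[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return ''.join(reversed(out))

print(lcs("ABCBDAB", "BDCABA"))   # 'BCBA', length 4, for the example above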
31. Assembly line traversal
There are two assembly lines. We want to minimize the cost of going from the entry point to the exit point, that is, the total of the costs associated with each station visited and each transition from one line to the other.
32. Assembly line traversal
With 𝑛 stations on each line, there are 2ⁿ possible paths (once again, the number of 𝑛-bit binary strings comes in handy).
Here’s a recurrence that defines an optimal solution for a station on the first line:
𝑓1[𝑗] =
  𝑒1 + 𝑎1,1                                               if 𝑗 = 1
  min{ 𝑓1[𝑗−1] + 𝑎1,𝑗 , 𝑓2[𝑗−1] + 𝑡2,𝑗−1 + 𝑎1,𝑗 }         if 𝑗 > 1
The first term in the min expression is when the optimal path comes to station 𝑆1,𝑗
from station 𝑆1,𝑗−1; the second term when it comes from station 𝑆2,𝑗−1.
33. Assembly line traversal
Treating the recurrence relation as a way to define values in a pair of tables, fill each
table bottom-up, that is, starting at the table entry at position 1, rather than filling it by
starting at position 𝑛.
Let 𝑓1[1] = 𝑒1 + 𝑎1,1 and 𝑓2[1] = 𝑒2 + 𝑎2,1.
Have a for-loop indexing 𝑗 from 2 to 𝑛. Inside the for-loop compute 𝑓1[𝑗] and 𝑓2[𝑗].
From the recurrence, each of these relies only on values to the left in the tables, that is, values that have already been computed.
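A Python sketch of this scheme for both lines (the names e, a, t, and x are our own; the slides do not show exit costs, so x defaults to zero). Each pass of the loop computes 𝑓1[𝑗] and 𝑓2[𝑗] from the previous column only:

def assembly_line(e, a, t, x=(0, 0)):
    # e[L]    : cost of entering line L                   (L = 0 or 1)
    # a[L][j] : cost of station j on line L               (j = 0 .. n-1)
    # t[L][j] : cost of leaving line L after station j    (j = 0 .. n-2)
    # x[L]    : cost of exiting from line L (0 if not modeled, as on the slides)
    n = len(a[0])
    f = [e[0] + a[0][0], e[1] + a[1][0]]      # f1[1] and f2[1] from slide 33
    for j in range(1, n):
        f = [min(f[0] + a[0][j], f[1] + t[1][j - 1] + a[0][j]),
             min(f[1] + a[1][j], f[0] + t[0][j - 1] + a[1][j])]
    return min(f[0] + x[0], f[1] + x[1])

# Illustrative instance (the classic textbook example); optimal cost is 38.
e = (2, 4)
a = ([7, 9, 3, 4, 8, 4], [8, 5, 6, 4, 5, 7])
t = ([2, 3, 1, 3, 4], [2, 1, 2, 2, 1])
print(assembly_line(e, a, t, x=(3, 2)))   # 38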