Module 2ppt.pptx: Divide and Conquer Method (JyoReddy9)
This document discusses dynamic programming and provides examples of problems that can be solved using dynamic programming. It covers the following key points:
- Dynamic programming can be used to solve problems that exhibit optimal substructure and overlapping subproblems. It works by breaking problems down into subproblems and storing the results of subproblems to avoid recomputing them.
- Examples of problems discussed include matrix chain multiplication, all pairs shortest path, optimal binary search trees, 0/1 knapsack problem, traveling salesperson problem, and flow shop scheduling.
- The document provides pseudocode for algorithms to solve matrix chain multiplication and optimal binary search trees using dynamic programming. It also explains the basic steps and principles of dynamic programming algorithm design.
Dynamic programming is a method for solving complex problems by breaking them down into simpler subproblems. It is applicable when problems exhibit overlapping subproblems that are only slightly smaller. The method involves 4 steps: 1) developing a mathematical notation to express solutions, 2) proving the principle of optimality holds, 3) deriving a recurrence relation relating solutions to subsolutions, and 4) writing an algorithm to compute the recurrence relation. Dynamic programming yields optimal solutions when the principle of optimality holds, without needing to prove optimality. It is used to solve production, scheduling, resource allocation, and inventory problems.
Dynamic programming is an algorithm design technique for optimization problems that reduces time by increasing space usage. It works by breaking problems down into overlapping subproblems and storing the solutions to subproblems, rather than recomputing them, to build up the optimal solution. The key aspects are identifying the optimal substructure of problems and handling overlapping subproblems in a bottom-up manner using tables. Examples that can be solved with dynamic programming include the knapsack problem, shortest paths, and matrix chain multiplication.
Dynamic programming is an algorithm design technique that solves problems by breaking them down into smaller overlapping subproblems and storing the results of already solved subproblems, rather than recomputing them. It is applicable to problems exhibiting optimal substructure and overlapping subproblems. The key steps are to define the optimal substructure, recursively define the optimal solution value, compute values bottom-up, and optionally reconstruct the optimal solution. Common examples that can be solved with dynamic programming include knapsack, shortest paths, matrix chain multiplication, and longest common subsequence.
Introduction to Dynamic Programming, Principle of Optimality (Bhavin Darji)
Introduction
Dynamic Programming
How Dynamic Programming reduces computation
Steps in Dynamic Programming
Dynamic Programming Properties
Principle of Optimality
Problem solving using Dynamic Programming
Dynamic programming, branch and bound, and greedy algorithms (Dr. SURBHI SAROHA)
This document summarizes different optimization algorithms: dynamic programming, branch and bound, and greedy algorithms. It provides details on the steps and properties of dynamic programming, how branch and bound explores the search space to find optimal solutions, and how greedy algorithms select locally optimal choices at each step. Applications discussed include matrix chain multiplication, longest common subsequence, and the travelling salesman problem for dynamic programming and fractional knapsack for greedy algorithms. Advantages and disadvantages are outlined for greedy approaches.
A brief study on linear programming solving methods (MayurjyotiNeog)
This document summarizes linear programming and two methods for solving linear programming problems: the graphical method and the simplex method. It outlines the key components of linear programming problems including decision variables, objective functions, and constraints. It then describes the steps of the graphical method and simplex method in solving linear programming problems. The graphical method involves plotting the feasible region and objective function on a graph to find the optimal point. The simplex method uses an algebraic table approach to iteratively find the optimal solution.
CPT121 - Introduction to Problem Solving - Module 1 - Unit 3.pptx (Agoyi1)
This document provides an overview of different computational approaches to problem solving, including brute force, divide and conquer, dynamic programming, and greedy algorithms. It describes the key characteristics of each approach, provides examples, and discusses their advantages and disadvantages. The objectives are to describe various computational approaches, classify them by paradigm, evaluate the best approach for a given problem, and apply an approach to solve problems.
This document provides an introduction to linear programming. It defines linear programming as a mathematical modeling technique used to optimize resource allocation. The key requirements are a well-defined objective function, constraints on available resources, and alternative courses of action represented by decision variables. The assumptions of linear programming include proportionality, additivity, continuity, certainty, and finite choices. Formulating a problem as a linear program involves defining the objective function and constraints mathematically. Graphical and analytical solutions can then be used to find the optimal solution. Linear programming has many applications in fields like industrial production, transportation, and facility location.
The document provides an overview of linear programming models (LPM), including:
- Defining the key components of an LPM, such as the objective function, decision variables, constraints, and parameters.
- Explaining the characteristics and assumptions of an LPM, such as linearity, divisibility, and non-negativity.
- Describing methods for solving LPMs, including graphical and algebraic (simplex) methods.
- Providing examples of formulating LPMs to maximize profit based on constraints like available resources.
The document discusses dynamic programming and how it can be used to calculate the 20th term of the Fibonacci sequence. Dynamic programming breaks problems down into overlapping subproblems, solves each subproblem once, and stores the results for future use. It explains that the Fibonacci sequence can be calculated recursively with each term equal to the sum of the previous two. To calculate the 20th term, dynamic programming would calculate each preceding term only once and store the results, building up the solution from previously solved subproblems until it reaches the 20th term.
Dynamic programming is a technique for solving complex problems by breaking them down into simpler sub-problems. It involves storing solutions to sub-problems for later use, avoiding recomputing them. Examples where it can be applied include matrix chain multiplication and calculating Fibonacci numbers. For matrix chains, dynamic programming finds the optimal order for multiplying matrices with minimum computations. For Fibonacci numbers, it calculates values in linear time by storing previous solutions rather than exponentially recomputing them through recursion.
This document provides an introduction to the analysis of algorithms. It discusses algorithm specification, performance analysis frameworks, and asymptotic notations used to analyze algorithms. Key aspects covered include time complexity, space complexity, worst-case analysis, and average-case analysis. Common algorithms like sorting and searching are also mentioned. The document outlines algorithm design techniques such as greedy methods, divide and conquer, and dynamic programming. It distinguishes between recursive and non-recursive algorithms and provides examples of complexity analysis for non-recursive algorithms.
The document defines linear programming and its key components. It explains that linear programming is a mathematical optimization technique used to allocate limited resources to achieve the best outcome, such as maximizing profit or minimizing costs. The document outlines the basic steps of the simplex method for solving linear programming problems and provides an example to illustrate determining the maximum value of a linear function given a set of constraints. It also discusses other applications of linear programming in fields like engineering, manufacturing, energy, and transportation for optimization.
Divide and conquer is an algorithm design paradigm where a problem is broken into smaller subproblems, those subproblems are solved independently, and then their results are combined to solve the original problem. Some examples of algorithms that use this approach are merge sort, quicksort, and matrix multiplication algorithms like Strassen's algorithm. The greedy method works in stages, making locally optimal choices at each step in the hope of finding a global optimum. It is used for problems like job sequencing with deadlines and the knapsack problem. Minimum cost spanning trees find subgraphs of connected graphs that include all vertices using a minimum number of edges.
Dynamic programming is a powerful technique for solving optimization problems by breaking them down into overlapping subproblems. It works by storing solutions to already solved subproblems and building up to a solution for the overall problem. Three key aspects are defining the subproblems, writing the recurrence relation, and solving base cases to build up solutions bottom-up rather than top-down. The principle of optimality must also hold for a problem to be suitable for a dynamic programming approach. Examples discussed include shortest paths, coin change, knapsack problems, and calculating Fibonacci numbers.
The document provides an overview of the simplex algorithm, which is used to solve linear programming problems. It defines key terms like standard form, slack variables, basic and non-basic variables, and pivoting. The simplex algorithm involves writing the problem in standard form, selecting a pivot column and row, and performing row operations to find an optimal solution. An example problem is worked through in multiple iterations to demonstrate how the algorithm progresses from an initial tableau to the optimal solution. Potential issues like cycling are also discussed, along with software tools to implement the simplex method.
The document discusses the concept of duality in linear programming. There is a primal linear programming problem (LPP) and its dual LPP. The optimal solution of one problem reveals information about the optimal solution of the other. Every LPP has an associated dual problem that is formed by transposing the constraint coefficients and objective function. Solving the dual may be easier in some cases. Duality ensures that if one problem is feasible and bounded, then so is the other, and they have the same optimal value.
The document summarizes key concepts regarding linear programming problems. It discusses:
1. Linear programming problems aim to optimize an objective function subject to constraints. They can model many practical operations research problems.
2. The document provides an example problem of determining production levels to maximize profit. It demonstrates formulating the problem as a mathematical model and solving it graphically and with the simplex method.
3. The simplex method solves linear programming problems by examining vertex points of the feasible solution space. It involves setting up the problem in standard form and using minimum ratio and pivot element calculations to systematically search for an optimal solution.
2. BASICS OF DYNAMIC PROGRAMMING
• Useful for solving multistage optimization problems.
• An optimization problem deals with the maximization or minimization of the objective function as per the problem requirements.
• In multistage optimization problems, decisions are made at successive stages to obtain a global solution for the given problem.
• Dynamic programming divides a problem into subproblems and establishes a recursive relationship between the original problem and its subproblems.
3. BASICS OF DYNAMIC PROGRAMMING (cont'd)
• A subproblem representing a small part of the original problem is solved to obtain the optimal solution.
• Then the scope of this subproblem is enlarged to find the optimal solution for a new subproblem.
• This enlargement process is repeated until the subproblem's scope encompasses the original problem.
• After that, a solution for the whole problem is obtained by combining the optimal solutions of its subproblems.
4. Dynamic programming vs divide-and-conquer approaches
• The difference between dynamic programming and the top-down (divide-and-conquer) approach is that the subproblems overlap in dynamic programming, whereas in the divide-and-conquer approach the subproblems are independent.
• Because overlapping subproblems would otherwise be recomputed many times, dynamic programming solves each subproblem once and reuses the stored result.
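The contrast between overlapping and independent subproblems can be made concrete by counting how often each subproblem is actually evaluated. Below is a minimal Python sketch (the function names and counters are illustrative, not from the slides) comparing naive divide-and-conquer-style recursion on the Fibonacci recurrence against a memoized version that solves each subproblem exactly once:

```python
# Counting subproblem evaluations: naive recursion recomputes overlapping
# subproblems, while memoization (dynamic programming) solves each one once.
from functools import lru_cache

calls_naive = 0

def fib_naive(n):
    global calls_naive
    calls_naive += 1          # counts every recursive invocation
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

calls_memo = 0

@lru_cache(maxsize=None)
def fib_memo(n):
    global calls_memo
    calls_memo += 1           # runs only on a cache miss
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

fib_naive(20)
fib_memo(20)
print(calls_naive)  # 21891 invocations: overlapping subproblems recomputed
print(calls_memo)   # 21 invocations: one per distinct subproblem F(0)..F(20)
```

The gap grows exponentially with n, which is why the divide-and-conquer strategy alone is inadequate when subproblems overlap.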
5. Dynamic programming approach vs greedy
approach
• Dynamic programming can also be compared with the greedy approach.
• Unlike dynamic programming, the greedy approach fails in many
problems.
• consider the shortest-path problem for the distance between s and t in
the graph shown in Fig. A greedy algorithm such as Dijkstra would take
the route from s to vertex 2, as its path length is shorter than that of the
route from s to ‘1’.
• After that, the path length increases, resulting in a total path length of
1002.
• However, dynamic programming would not make such a mistake, as it
treats the problem in stages.
• For this problem, there are three stages with vertices {s}, {1, 2}, and {t}.
The problem would find the shortest path from stage 2, {1, 2}, to {t} first,
then calculate the final path. Hence, dynamic programming would result
in a path s to 1 and from 1 to t.
• Dynamic programming problems, therefore, yield a globally optimal
result compared to the greedy approach, whose solutions are just locally
optimal.
7. Components of Dynamic Programming
• The guiding principle of dynamic programming is the ‘principle of optimality’. In
simple words, a given problem is split into subproblems.
• Then, this principle helps us solve each subproblem optimally, ultimately leading
to the optimal solution of the given problem. It states that an optimal sequence
of decisions in a multistage decision problem is optimal only if its sub-sequences
are optimal.
• In other words, the Bellman principle of optimality states that “an optimal policy
(a sequence of decisions) has the property that whatever the initial state and
decision are, the remaining decisions must constitute an optimal policy
concerning the state resulting from the first decision”.
• The solution to a problem can be achieved by breaking the problem into
subproblems. The subproblems can then be solved optimally. If so, the original
problem can be solved optimally by combining the optimal solutions of its
subproblems.
9. overlapping subproblems
• Dynamic programs possess two essential properties—
• overlapping subproblems
• optimal substructures.
• Overlapping subproblems One of the main characteristics of dynamic
programming is to split the problem into subproblems, similar to the
divide-and-conquer approach. The sub-problems are further divided into
smaller problems. However, unlike divide and conquer, here, many
subproblems overlap and cannot be treated distinctly.
• This feature is the primary characteristic of dynamic programming. There
are two ways of handling this overlapping problem:
• The memoization technique
• The tabulation method.
10. overlapping subproblems
• The Fibonacci sequence is given as (0, 1, 1, 2, 3, 5, 8, 13, …). The
following recurrence equations can generate this sequence:
F(0) = 0
F(1) = 1
F(n) = F(n − 1) + F(n − 2) for n ≥ 2
• A straightforward pseudo-code for implementing this recursive
equation is as follows:
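The slide's pseudo-code is not reproduced in this text; a minimal Python sketch of the naive recursion (the function name is illustrative) would be:

```python
def fib(n):
    # Direct translation of the recurrence: F(0) = 0, F(1) = 1,
    # F(n) = F(n - 1) + F(n - 2). Each call spawns two further
    # calls, so the same subproblems are recomputed many times.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
```

Calling fib(5) already evaluates fib(1) five times; the recomputation grows exponentially with n.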
• Consider a Fibonacci recurrence tree for n = 5, as shown in Fig. 13.2.
• It can be observed from this tree that there are multiple overlapping
subproblems.
• As n becomes large, subproblems increase exponentially, and
repeated calculation makes the algorithm ineffective. In general, to
compute Fibonacci(n), one requires two terms:
• Fibonacci (n-1) and Fibonacci (n-2)
• Even for computing Fibonacci(50), one must have all the previous 49
terms, which is tedious.
• This exponential complexity of Fibonacci computation is because many
of the subproblems are repetitive, and hence there are chances of
multiple recomputations.
• The complexity analysis of this algorithm yields T(n) = Ω(φⁿ). Here φ
is called the golden ratio, whose value is approximately 1.618.
11. Optimal substructure
• The optimal solution of a problem can be expressed in terms of optimal
solutions to its subproblems.
• In dynamic programming, a problem can be divided into subproblems, and
each subproblem can be solved optimally. The optimal solutions of the
subproblems then lead to the optimal solution of the original problem.
• If the solution to the original problem has stages, then the decision taken
at the current stage depends on the decision taken in the previous stages.
• Therefore, there is no need to consider all possible decisions and their
consequences as the optimal solution of the given problem is built on the
optimal solution of its subproblems
12. Optimal substructure
• Consider the graph shown in Fig. 13.3. Consider the best route to visit the
city v from city u.
• This problem can be broken into the problems of finding a route from u to x
and from x to v. If <u, x> is the optimal route from u to x, and <x, v> is the
optimal route from x to v, then the best route from u to v can be built from
the optimal solutions of these two subproblems. In other words, a route
constructed from optimal subroutes must itself be optimal.
• The general template for solving dynamic programming looks like this:
• Step 1: The given problem is divided into many subproblems, as in the case
of the divide-and-conquer strategy. In divide and conquer, the subproblems
are independent of each other; however, in the dynamic programming
case, the subproblems are not independent of each other, but they are
interrelated and hence are overlapping subproblems.
13. Optimal substructure
• Step 2: A table is created to avoid repeated recomputation of the
overlapping subproblems. Whenever a subproblem is solved, its solution
is stored in the table so that it can be reused.
• Step 3: The solutions of the subproblems are combined in a bottom-up
manner to obtain the final solution of the given problem.
• The essential steps are thus:
1. Break the problem into its subproblems.
2. Create a lookup table that contains the solutions of the subproblems.
3. Reuse the solutions of the subproblems stored in the table to construct
the solution to the given problem.
14. Optimal substructure
Dynamic programming uses the lookup table in both a top-down manner
and a bottom-up manner. The top-down approach is called the
memoization technique, and the bottom-up approach is called the tabulation
method.
1. Memoization technique: This method first looks into the table to check whether
an entry has already been computed. Entries initially hold a marker such as ‘NIL’ or
‘undefined’. If no value is present, it is computed and stored; otherwise, the pre-
computed value is reused. In other words, computation follows a top-down
method similar to the recursive approach.
2. Tabulation method: Here, the problem is solved from scratch. The
smallest subproblem is solved, and its value is stored in the table. Its value is
used later for solving larger problems. In other words, computation follows a
bottom-up method.
15. FIBONACCI PROBLEM
• The Fibonacci sequence is given as (0, 1, 1, 2, 3, 5, 8, 13, …). The following
recurrence equations can generate this sequence:
F(0) = 0
F(1) = 1
F(n) = F(n − 1) + F(n − 2) for n ≥ 2
• The best way to solve this problem is to use the dynamic programming
approach, in which the results of the intermediate problems are stored.
Consequently, results of the previously computed subproblems can be used
instead of recomputing the subproblems repeatedly. Thus, a subproblem is
computed only once, and the exponential algorithm is reduced to a
polynomial algorithm.
• A table is created to store the intermediate results, and its values are
reused.
16. Bottom-up approach
• An iterative loop can be used to modify this Fibonacci number computation
effectively, and the resulting dynamic programming algorithm can be used to
compute the nth Fibonacci number.
• Step 1: Read a positive integer n.
• Step 2: Initialize first to 0.
• Step 3: Initialize second to 1.
• Step 4: Repeat n − 1 times.
• 4a: Compute current as first + second.
• 4b: Update first = second.
• 4c: Update second = current.
• Step 5: Return(current).
• Step 6: End.
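As a sketch, the steps above translate to the following Python (the loop body never runs for n = 0 or n = 1, so those cases are handled separately):

```python
def fib_bottom_up(n):
    # Tabulation: solve the smallest subproblems first and build upward.
    # Only the last two values are needed, so O(1) extra space suffices.
    if n < 2:
        return n
    first, second = 0, 1           # F(0) and F(1)
    for _ in range(n - 1):         # repeat n - 1 times
        first, second = second, first + second
    return second                  # holds F(n)

print(fib_bottom_up(10))  # 55
```

This reduces the exponential recursion to a single O(n) loop.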
18. Top-down approach and Memoization
• Dynamic programming can use the top-down approach for populating and
manipulating the table.
• Step 1: Read a positive integer n
• Step 2: Create a table A with n entries
• Step 3: Initialize table A with the status ‘undefined’, and set the base
entries A[0] = 0 and A[1] = 1.
• Step 4: To evaluate mem_Fib(A, n): if the table entry A[n] is ‘undefined’,
• 4a: Compute recursively
A[n] = mem_Fib(A, n − 1) + mem_Fib(A, n − 2)
else
return the table value A[n]
• Step 5: Return mem_Fib(A, n)
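A Python sketch of this top-down approach, where the table A is filled lazily and None plays the role of ‘undefined’:

```python
def mem_fib(n, A=None):
    # Top-down memoization: each table entry is computed at most
    # once, then reused on every later lookup.
    if A is None:
        A = [None] * (n + 1)       # table initialized to 'undefined'
    if n < 2:
        return n                    # base entries F(0) = 0, F(1) = 1
    if A[n] is None:                # not yet computed
        A[n] = mem_fib(n - 1, A) + mem_fib(n - 2, A)
    return A[n]

print(mem_fib(50))  # 12586269025
```

Unlike the naive recursion, mem_fib(50) performs only O(n) additions because every subproblem is solved once.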
20. COMPUTING BINOMIAL COEFFICIENTS
• Given a positive integer n and any real numbers a and b, the
expansion of (a + b)^n is done as follows:
(a + b)^n = C(n, 0) a^n + C(n, 1) a^(n−1) b + … + C(n, n) b^n
• Binomial coefficients, represented as C(n, k) or nCk, can be used to
represent the coefficients of (a + b)^n.
21. COMPUTING BINOMIAL COEFFICIENTS
• The binomial coefficient can be computed using the following
formula:
• C(n, k) = n! / (k! (n − k)!)
for non-negative integers n and k. And, by
convention, C(0, 0) = 1.
• Compute the binomial coefficient value of C[3, 2] using the dynamic programming
approach.
• Init Row 0:
• Row 0 is computed by setting C[0, 0] = 1. This is required as a start-up of the
algorithm as by convention C(0, 0) = 1.
• Compute the first row:
C[1, 0] = 1; C[1, 1] = 1
• Using row 1, row 2 is computed as follows:
• Compute the second row:
C[2, 0] = 1; C[2, 1] = C[1, 0] + C[1, 1] = 1 + 1 = 2; C[2, 2] = 1
• Compute the third row.
C[3, 1] = C[2, 0] + C[2, 1] = 1 + 2 = 3
C[3, 2] = C[2, 1] + C[2, 2] = 2 + 1 = 3
• As the problem is about the computation of C[3, 2], it can be seen from the
table that C[3, 2] = 3, which tallies with the conventional result
3! / (2! 1!) = 6/2 = 3
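The row-by-row table filling above can be sketched in Python as a Pascal's-triangle computation (the function name is illustrative):

```python
def binomial(n, k):
    # DP recurrence: C[i][j] = C[i-1][j-1] + C[i-1][j],
    # with the boundary values C[i][0] = C[i][i] = 1.
    C = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:
                C[i][j] = 1        # edge of Pascal's triangle
            else:
                C[i][j] = C[i - 1][j - 1] + C[i - 1][j]
    return C[n][k]

print(binomial(3, 2))  # 3
```

Each row depends only on the previous one, exactly as in the worked example.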
25. MULTISTAGE GRAPH PROBLEM (stagecoach problem)
• The multistage graph problem is the problem of finding the shortest path from
the source to the destination in a graph whose vertices are partitioned into stages.
Every edge connects two nodes from adjacent partitions (stages). The initial node
is called the source (it has no indegree), and the last node is called the sink (it has
no outdegree).
26. Forward computation procedure
• Step 1: Read directed graph G = <V, E> with k stages.
• Step 2: Let n be the number of nodes and dist[1… k] be the distance array.
• Step 3: Set the initial cost to zero.
• Step 4: Loop index j from (n − 1) down to 1.
• 4a: Find a vertex v from the next stage such that c(j, v) + cost(v) is
minimum, where c(j, v) is the weight of the edge from j to v.
• 4b: Update cost[j] and store v in the decision array dist[].
• Step 5: Return cost.
• Step 6: End.
• The shortest path can finally be reconstructed by tracing the decision
array dist[].
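A Python sketch of this procedure, assuming vertices are numbered 1..n in stage order with vertex n as the sink (the edge-list representation and names are illustrative):

```python
def multistage_shortest_path(n, edges):
    # edges: dict mapping a vertex to a list of (next_vertex, weight)
    # pairs; vertices are numbered 1..n in stage order, n is the sink.
    INF = float('inf')
    cost = [INF] * (n + 1)   # cost[j] = shortest distance from j to sink
    succ = [0] * (n + 1)     # decision array: best next vertex from j
    cost[n] = 0
    for j in range(n - 1, 0, -1):
        for v, w in edges.get(j, []):
            if w + cost[v] < cost[j]:
                cost[j] = w + cost[v]
                succ[j] = v
    # Reconstruct the path by tracing successors from the source.
    path, v = [1], 1
    while v != n:
        v = succ[v]
        path.append(v)
    return cost[1], path

print(multistage_shortest_path(4, {1: [(2, 1), (3, 2)], 2: [(4, 3)], 3: [(4, 1)]}))
# (3, [1, 3, 4])
```

The small 4-vertex graph used in the call is a made-up example, not the one in the figure.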
29. • Consider the graph shown in Fig. 13.7. Find the
shortest path from the source to the sink using
the forward approach of dynamic programming.
30. Backward computation procedure
• Consider the graph shown in Fig. 13.8. Find the shortest path from
the source to the sink using the backward approach of dynamic
programming.
• Unlike the forward approach, the computation proceeds from stage 1
towards stage 5. The costs of the edges that connect stage 1 to stage 2
are calculated first.
36. • Apply the Warshall algorithm and
find the transitive closure for the
graph shown in Fig. 13.10.
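The graph itself is only in the figure, but Warshall's transitive-closure computation can be sketched as follows (the adjacency matrix in the call is a made-up example):

```python
def warshall(adj):
    # adj: n x n 0/1 adjacency matrix. Returns the transitive closure:
    # t[i][j] = 1 iff there is a path from i to j, computed by allowing
    # one more intermediate vertex k at each outer iteration.
    n = len(adj)
    t = [row[:] for row in adj]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                t[i][j] = t[i][j] or (t[i][k] and t[k][j])
    return t

# Path 0 -> 1 -> 2 exists, so the closure gains the entry (0, 2).
closure = warshall([[0, 1, 0], [0, 0, 1], [0, 0, 0]])
```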
37. • Consider the graph shown in Fig. 13.11 and find the shortest path
using the Floyd–Warshall algorithm.
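Again the graph is in the figure, but the Floyd–Warshall shortest-path computation itself is a three-nested-loop DP over intermediate vertices (the matrix in the test call is a made-up example):

```python
INF = float('inf')

def floyd_warshall(d):
    # d: n x n matrix of edge weights, INF where there is no edge,
    # 0 on the diagonal. Returns the all-pairs shortest distances.
    n = len(d)
    dist = [row[:] for row in d]
    for k in range(n):               # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```

Each pass k refines every pair (i, j) by testing the detour through vertex k, so after all n passes every shortest path is found.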
38. Travelling salesman problem
• Step 1: Read weighted graph G = <V, E>.
• Step 2: Initialize d[i, j] as follows:
d[i, j] = 0 if i = j
d[i, j] = ∞ if edge (i, j) ∉ E(G)
d[i, j] = w_ij if edge (i, j) ∈ E(G)
• Step 3: Compute a function cost(i, S) that gives the length of the shortest path starting from
vertex i, traversing through all the vertices in set S, and terminating at vertex 1, as follows:
3a: cost(i, ∅) = d[i, 1], 1 ≤ i ≤ n
3b: For |S| = 1 to n − 2, where i ≠ 1, 1 ∉ S, i ∉ S, compute recursively
cost(i, S) = min_{j∈S} {d[i, j] + cost(j, S − {j})} and store the value.
• Step 4: Compute the minimum cost of the travelling salesperson tour as follows: Compute
cost(1, V − {1}) = min_{2≤k≤n} {d[1, k] + cost(k, V − {1, k})} using the values computed in Step 3.
• Step 5: Return the value of Step 4.
• Step 6: End.
39. Travelling salesman problem
• Solve the TSP for the graph shown in Fig. 13.17 using dynamic programming.
• cost(2, ∅) = d[2, 1] = 2; cost(3, ∅) = d[3, 1] = 4; cost(4, ∅) = d[4, 1] = 6
• This indicates the distance from vertices 2, 3, and 4 to vertex 1. When |S| = 1, cost(i, S)
can be computed using the recurrence as follows:
cost(2,{3}) = d[2, 3] + cost(3, ∅)=5+4=9
cost(2,{4}) = d[2, 4] + cost(4, ∅) = 7 + 6 = 13
cost(3,{2}) = d[3, 2] + cost(2, ∅) =3+2=5
cost(3,{4}) = d[3, 4] + cost(4, ∅) = 8 + 6 = 14
cost(4,{2}) = d[4, 2] + cost(2, ∅) =5+2=7
cost(4,{3}) = d[4, 3] + cost(3, ∅) = 9 + 4 = 13
40. • Now, cost(i, S) is computed with |S| = 2, i ≠ 1, 1 ∉ S, i ∉ S, that is, set S involves two intermediate nodes.
cost(2,{3,4}) = min{d[2, 3] + cost(3, {4}), d[2, 4] + cost(4, {3})} = min{5 + 14 , 7 + 13} = min{19, 20} = 19
cost(3,{2,4}) = min{d[3, 2] + cost(2, {4}), d[3, 4] + cost(4, {2})} = min{3 + 13, 8 + 7} = min{16, 15} = 15
cost(4,{2,3}) = min{d[4, 2] + cost(2, {3}), d[4, 3] + cost(3, {2})} = min{5 + 9, 9 + 5} = min{14, 14} = 14
• Finally, the total cost of the tour is calculated, which involves three intermediate nodes, that is, |S| = 3. As |S|
= n − 1, where n is the number of nodes, the process terminates.
• Finally, using the equation cost(1, V − {1}) = min_{2≤k≤n} {d[1, k] + cost(k, V − {1, k})}, the cost of the tour,
cost(1, {2, 3, 4}), is computed as follows:
cost(1,{2, 3, 4}) = min{d[1, 2] + cost(2, {3, 4}), d[1, 3] + cost(3, {2, 4}), d[1, 4] + cost(4, {2, 3})} = min{5 + 19, 3 +
15, 10 + 14} = min{24, 18, 24} = 18
Hence, the minimum cost tour is 18. The minimum is attained through vertex 3, so P(1, {2, 3, 4}) = 3 and the tour
goes 1 → 3. Within cost(3, {2, 4}), the minimum term comes from choosing vertex 4 next. Hence, the path is
1 → 3 → 4, and the final TSP tour is given as 1 → 3 → 4 → 2 → 1.
41. Knapsack Problem
• Step 1: Let n be the number of objects.
• Step 2: Let W be the capacity of the knapsack.
• Step 3: Construct the matrix V[i, j] that stores the best profits. Index i
tracks the items considered for the knapsack (i.e., 1 to n), and index j tracks the
capacities (i.e., 0 to W).
• Step 4: Initialize V[0, j] = 0 for j ≥ 0. This implies no profit if no item is in the
knapsack.
• Step 5: Recursively compute the following steps:
5a: Leave object i if w_i > j (i.e., j − w_i < 0). This leaves the knapsack
with the items {1, 2, …, i − 1} and profit V[i − 1, j].
5b: Consider adding item i if w_i ≤ j (i.e., j − w_i ≥ 0). In this case, the
profit is max{V[i − 1, j], v_i + V[i − 1, j − w_i]}. Here v_i is the
profit of the current item.
• Step 6: Return the maximum profit of adding feasible items to the knapsack,
V[n, W].
• Step 7: End.
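The steps above can be sketched in Python as follows (the item weights and values in the call are made-up, not those of Table 13.28):

```python
def knapsack(weights, values, W):
    # V[i][j] = best profit using items 1..i with capacity j.
    n = len(weights)
    V = [[0] * (W + 1) for _ in range(n + 1)]   # row 0: no items, profit 0
    for i in range(1, n + 1):
        for j in range(W + 1):
            V[i][j] = V[i - 1][j]               # step 5a: leave item i
            if weights[i - 1] <= j:             # step 5b: item i fits
                V[i][j] = max(V[i][j],
                              values[i - 1] + V[i - 1][j - weights[i - 1]])
    return V[n][W]

# Items (weight, value): (2, 12), (1, 10), (3, 20); capacity 5.
# Best choice is items 1 and 3 (weight 5, profit 32).
print(knapsack([2, 1, 3], [12, 10, 20], 5))  # 32
```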
43. • Apply the dynamic programming algorithm to the instance of the
knapsack problem shown in Table 13.28. Assume that the knapsack
capacity is W = 3.
Knapsack Problem