The document discusses the 0-1 knapsack problem and presents a dynamic programming algorithm to solve it. The 0-1 knapsack problem aims to maximize the total value of items selected without exceeding the knapsack's weight capacity, where each item must either be fully included or excluded. The algorithm uses a table B to store the maximum value for each sub-problem of filling weight w with the first i items, calculating entries recursively to find the overall optimal value B[n,W]. An example demonstrates filling the table to solve an instance of the problem.
The document discusses the 0-1 knapsack problem and how it can be solved using dynamic programming. It first defines the 0-1 knapsack problem and provides an example. It then explains how a brute force solution would work in exponential time. Next, it describes how to define the problem as subproblems and derive a recursive formula to solve the subproblems in a bottom-up manner using dynamic programming. This builds up the solutions in a table and solves the problem in polynomial time. Finally, it walks through an example applying the dynamic programming algorithm to a sample problem instance.
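The bottom-up table described above can be sketched as follows. This is a minimal Python sketch; the item values, weights, and capacity below are illustrative, not taken from the document's own example.

```python
def knapsack_01(values, weights, W):
    """Bottom-up DP for the 0-1 knapsack problem.

    B[i][w] = maximum value using the first i items with capacity w.
    Runs in O(n*W) time, versus O(2^n) for brute force.
    """
    n = len(values)
    B = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            # Option 1: skip item i (value stays B[i-1][w]).
            B[i][w] = B[i - 1][w]
            # Option 2: take item i, if it fits and improves the value.
            if weights[i - 1] <= w:
                B[i][w] = max(B[i][w],
                              B[i - 1][w - weights[i - 1]] + values[i - 1])
    return B[n][W]

# Illustrative instance: four items, capacity 5.
print(knapsack_01([3, 4, 5, 6], [2, 3, 4, 5], 5))  # → 7
```

The overall optimum B[n][W] is read from the last table cell, exactly as the summary describes.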
The document discusses optimal binary search trees (OBST) and describes the process of creating one. It begins by introducing OBST and noting that the method can minimize the average number of comparisons in a successful search. It then shows the step-by-step process of calculating the costs for different partitions to arrive at the optimal binary search tree for a given sample dataset with keys and frequencies. The process involves computing the cost of each candidate root for every partition and choosing the minimum cost at each step until the optimum is determined.
This document discusses optimal binary search trees and provides an example problem. It begins with basic definitions of binary search trees and optimal binary search trees. It then shows an example problem with keys 1, 2, 3 and calculates the cost as 17. The document explains how to use dynamic programming to find the optimal binary search tree for keys 10, 12, 16, 21 with frequencies 4, 2, 6, 3. It provides the solution matrix and explains how the minimum cost and the optimal tree over the keys 10, 12, 16, 21 are obtained from it.
The document discusses disjoint set data structures and union-find algorithms. Disjoint set data structures track partitions of elements into separate, non-overlapping sets. Union-find algorithms perform two operations on these data structures: find, to determine which set an element belongs to; and union, to combine two sets into a single set. The document describes array-based representations of disjoint sets and algorithms for the union and find operations, including a weighted union algorithm that aims to keep trees relatively balanced by favoring attaching the smaller tree to the root of the larger tree.
Given two integer arrays val[0...n-1] and wt[0...n-1] that represent the values and weights associated with n items respectively, find the maximum-value subset of val[] such that the sum of the weights of this subset is smaller than or equal to the knapsack capacity W. Here the branch and bound algorithm is discussed.
Backtracking is a general algorithm for finding all (or some) solutions to some computational problems, notably constraint satisfaction problems, that incrementally builds candidates to the solutions, and abandons each partial candidate c ("backtracks") as soon as it determines that c cannot possibly be completed to a valid solution.
Binary search is a fast search algorithm that works on sorted data by comparing the middle element of the collection to the target value. It divides the search space in half at each step to quickly locate an element. The algorithm gets the middle element, compares it to the target, and either searches the left or right half recursively depending on if the target is less than or greater than the middle element. An example demonstrates finding the value 23 in a sorted array using this divide and conquer approach.
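A minimal sketch of the halving procedure just described, using an assumed sample array that contains the value 23 (the array contents are illustrative):

```python
def binary_search(arr, target):
    """Iterative binary search on a sorted list; returns index or -1."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1   # target is larger: search the right half
        else:
            hi = mid - 1   # target is smaller: search the left half
    return -1

data = [10, 14, 19, 23, 27, 31, 33, 35, 42, 44]
print(binary_search(data, 23))  # → 3
```

Each iteration halves the remaining search space, giving the O(log n) bound mentioned later in the document.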
The document discusses Strassen's algorithm for matrix multiplication. It begins by explaining traditional matrix multiplication, which has a time complexity of O(n^3). It then explains how the divide and conquer strategy can be applied by dividing the matrices into smaller square sub-matrices. Strassen improved upon this by reducing the number of multiplications from 8 to 7, obtaining a time complexity of O(n^2.81). His key insight was applying different equations to the sub-matrix multiplication formulas to minimize operations.
The document summarizes the disjoint-set data structure and algorithms for implementing union-find operations on disjoint sets. Key points:
- Disjoint sets allow representing partitions of elements into disjoint groups and supporting operations to merge groups and find the group an element belongs to.
- Several implementations are described, including linked lists and rooted trees (forests).
- The weighted-union heuristic improves the naive linked list implementation from O(n^2) to O(m log n) time by merging shorter lists into longer ones.
- In forests, union-by-rank and path compression heuristics combine sets in nearly linear time by balancing tree heights and flattening paths during finds.
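The forest heuristics in the last bullet can be sketched as follows. This is a minimal illustrative implementation, not the document's own code:

```python
class DisjointSet:
    """Disjoint-set forest with union by rank and path compression."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Path compression: point x (and its ancestors) directly at the root.
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])
        return self.parent[x]

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False
        # Union by rank: attach the shorter tree under the taller one.
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        return True

ds = DisjointSet(5)
ds.union(0, 1); ds.union(1, 2)
print(ds.find(0) == ds.find(2))  # → True
print(ds.find(0) == ds.find(4))  # → False
```

Together the two heuristics give the nearly linear amortized bound the summary mentions.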
The document discusses greedy algorithms and their use for optimization problems. It provides examples of using greedy approaches to solve scheduling and knapsack problems. Specifically:
- A greedy algorithm makes locally optimal choices at each step in the hope of finding a globally optimal solution. However, greedy algorithms do not always yield optimal solutions.
- For scheduling jobs, a greedy approach of scheduling the longest jobs first is not optimal, while scheduling the shortest jobs first is also not optimal. An optimal solution can be found by trying all possible assignments.
- For the knapsack problem, different greedy strategies like selecting items with highest value or lowest weight are discussed. The optimal greedy strategy is to select items in order of highest value-to-weight ratio.
The document discusses the all pairs shortest path problem, which aims to find the shortest distance between every pair of vertices in a graph. It explains that the algorithm works by calculating the minimum cost to traverse between nodes using intermediate nodes, according to the equation A_k(i,j) = min{A_{k-1}(i,j), A_{k-1}(i,k) + A_{k-1}(k,j)}. An example is provided to illustrate calculating the shortest path between nodes over multiple iterations of the algorithm.
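The recurrence above is typically implemented as the Floyd-Warshall triple loop. A minimal sketch, with a cost matrix assumed for illustration:

```python
INF = float('inf')

def floyd_warshall(A):
    """All-pairs shortest paths; A is an n x n cost matrix (INF = no edge)."""
    n = len(A)
    D = [row[:] for row in A]           # work on a copy of the cost matrix
    for k in range(n):                  # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

graph = [[0, 3, INF, 7],
         [8, 0, 2, INF],
         [5, INF, 0, 1],
         [2, INF, INF, 0]]
D = floyd_warshall(graph)
print(D[1][3])  # → 3  (path 1 → 2 → 3, cost 2 + 1)
```

After the k-th outer iteration, D[i][j] holds the best distance using only intermediates from {0, ..., k}, matching the A_k notation in the summary.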
This material relates to the Analysis and Design of Algorithms subject. It describes the basics of topological sorting, its algorithm, and a step-by-step process for solving an example of topological sort.
Binary search is an algorithm for finding an element in a sorted array. It works by recursively checking the middle element, dividing the array in half, and searching only one subarray. The time complexity is O(log n) as the array is divided in half in each step.
The document discusses solving the 8 queens problem using backtracking. It begins by explaining backtracking as an algorithm that builds partial candidates for solutions incrementally and abandons any partial candidate that cannot be completed to a valid solution. It then provides more details on the 8 queens problem itself - the goal is to place 8 queens on a chessboard so that no two queens attack each other. Backtracking is well-suited for solving this problem by attempting to place queens one by one and backtracking when an invalid placement is found.
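The place-and-backtrack scheme described above can be sketched for the general n-queens case. This is an illustrative helper (queens stored as one column index per row), not the document's own code:

```python
def solve_n_queens(n):
    """Find all placements of n non-attacking queens via backtracking."""
    solutions = []

    def safe(cols, col):
        row = len(cols)
        for r, c in enumerate(cols):
            # Same column or same diagonal → the new queen is attacked.
            if c == col or abs(c - col) == abs(r - row):
                return False
        return True

    def place(cols):
        if len(cols) == n:
            solutions.append(cols[:])
            return
        for col in range(n):
            if safe(cols, col):
                cols.append(col)   # extend the partial candidate
                place(cols)
                cols.pop()         # backtrack: abandon this placement

    place([])
    return solutions

print(len(solve_n_queens(8)))  # → 92
```

Each partial candidate that cannot be completed is abandoned as soon as `safe` fails, exactly the pruning behavior the summary attributes to backtracking.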
This document discusses the greedy algorithm approach and the knapsack problem. It defines greedy algorithms as choosing locally optimal solutions at each step in hopes of reaching a global optimum. The knapsack problem is described as packing items into a knapsack to maximize total value without exceeding weight capacity. An optimal knapsack algorithm is presented that sorts by value-to-weight ratio and fills highest ratios first. An example applies this to maximize profit of 440 by selecting full quantities of items B and A, and half of item C for a knapsack with capacity of 60.
The document discusses the maximum subarray problem and different solutions to solve it. It defines the problem as finding a contiguous subsequence within a given array that has the largest sum. It presents a brute force solution with O(n^2) time complexity and a more efficient divide and conquer solution with O(n log n) time complexity. The divide and conquer approach recursively finds maximum subarrays in the left and right halves of the array and the maximum crossing subarray to return the overall maximum.
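A sketch of the divide and conquer approach just described, using the classic sample array for illustration:

```python
def max_crossing(arr, lo, mid, hi):
    """Best sum of a subarray forced to cross the midpoint."""
    left_sum, s = float('-inf'), 0
    for i in range(mid, lo - 1, -1):   # extend leftward from mid
        s += arr[i]
        left_sum = max(left_sum, s)
    right_sum, s = float('-inf'), 0
    for i in range(mid + 1, hi + 1):   # extend rightward from mid+1
        s += arr[i]
        right_sum = max(right_sum, s)
    return left_sum + right_sum

def max_subarray(arr, lo=0, hi=None):
    """O(n log n) divide-and-conquer maximum subarray sum."""
    if hi is None:
        hi = len(arr) - 1
    if lo == hi:
        return arr[lo]
    mid = (lo + hi) // 2
    # Answer is the best of: left half, right half, or a crossing subarray.
    return max(max_subarray(arr, lo, mid),
               max_subarray(arr, mid + 1, hi),
               max_crossing(arr, lo, mid, hi))

print(max_subarray([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # → 6
```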
The document discusses insertion sort, a simple sorting algorithm that builds a sorted output list from an input one element at a time. It is less efficient on large lists than more advanced algorithms. Insertion sort iterates through the input, at each step removing an element and inserting it into the correct position in the sorted output list. The best case for insertion sort is an already sorted array, while the worst is a reverse sorted array.
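A minimal sketch of the algorithm just described, with the sorted prefix A[0..j-1] maintained on each pass:

```python
def insertion_sort(A):
    """In-place insertion sort: A[0..j-1] is sorted before each pass."""
    for j in range(1, len(A)):
        key = A[j]
        i = j - 1
        # Shift larger elements one slot right to make room for key.
        while i >= 0 and A[i] > key:
            A[i + 1] = A[i]
            i -= 1
        A[i + 1] = key
    return A

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # → [1, 2, 3, 4, 5, 6]
```

On an already sorted input the inner while loop never runs (the best case); on a reverse-sorted input every element shifts past all previous ones (the worst case).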
The document discusses algorithms and data structures, focusing on binary search trees (BSTs). It provides the following key points:
- BSTs are an important data structure for dynamic sets that can perform operations like search, insert, and delete in O(h) time where h is the height of the tree.
- Each node in a BST contains a key, and pointers to its left/right children and parent. The keys must satisfy the BST property - all keys in the left subtree are less than the node's key, and all keys in the right subtree are greater.
- Rotations are a basic operation used to restructure trees during insertions/deletions. They involve reassigning child and parent pointers while preserving the binary-search-tree property.
Kruskal's algorithm is a minimum spanning tree algorithm that finds the minimum cost collection of edges that connect all vertices in a connected, undirected graph. It works by sorting the edges by weight and then sequentially adding the shortest edge that does not create a cycle. The algorithm uses a disjoint set data structure to track connected components in the graph. It runs in O(E log E) time, where E is the number of edges.
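A compact sketch of Kruskal's algorithm with a simple disjoint-set forest for cycle detection; the edge list and weights below are illustrative:

```python
def kruskal(n, edges):
    """Minimum spanning tree via Kruskal's algorithm.

    n: number of vertices (0..n-1); edges: list of (weight, u, v) tuples.
    """
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):          # edges in increasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:                       # adding (u, v) creates no cycle
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return total, mst

edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
total, mst = kruskal(4, edges)
print(total)  # → 6
```

The initial sort dominates the running time, which is where the O(E log E) bound comes from.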
1. An algorithm is a sequence of unambiguous instructions to solve a problem and obtain an output for any valid input in a finite amount of time. Pseudocode is used to describe algorithms using a natural language format.
2. Analyzing algorithm efficiency involves determining the theoretical and empirical time complexity by counting the number of basic operations performed relative to the input size. Common measures are best-case, worst-case, average-case, and amortized analysis.
3. Important problem types for algorithms include sorting, searching, string processing, graphs, combinatorics, geometry, and numerical problems. Fundamental algorithms are analyzed for correctness and time/space complexity.
Data structures and algorithms involve organizing data to solve problems efficiently. An algorithm describes computational steps, while a program implements an algorithm. Key aspects of algorithms include efficiency as input size increases. Experimental studies measure running time but have limitations. Pseudocode describes algorithms at a high level. Analysis counts primitive operations to determine asymptotic running time, ignoring constant factors. The best, worst, and average cases analyze efficiency. Asymptotic notation like Big-O simplifies analysis by focusing on how time increases with input size.
1. Branch and bound is an algorithm that uses a state space tree to solve optimization problems like the knapsack problem and traveling salesman problem. It works by recursively dividing the solution space and minimizing costs at each step.
2. The document then provides an example of using branch and bound to solve a job assignment problem with 4 jobs and 4 people. It calculates lower bounds at each step of the algorithm and prunes branches that cannot lead to an optimal solution.
3. After exploring the solution space, the algorithm arrives at the optimal assignment of Person A to Job 2, Person B to Job 1, Person C to Job 3, and Person D to Job 4, with a minimum total cost of 21.
The branch-and-bound method is used to solve optimization problems by traversing a state space tree. It computes a bound at each node to determine whether the node is promising. Rather than plain breadth-first traversal, better approaches expand the most promising node first (best-first search) using a bounding heuristic. The traveling salesperson problem is solved with branch-and-bound by finding an initial tour, defining a bounding heuristic as the actual cost so far plus the minimum remaining cost, and expanding promising nodes in best-first order until the minimal tour is found.
The document describes the quicksort algorithm. Quicksort works by:
1) Partitioning the array around a pivot element into two sub-arrays of less than or equal and greater than elements.
2) Recursively sorting the two sub-arrays.
3) Combining the now sorted sub-arrays.
In the average case, quicksort runs in O(n log n) time due to balanced partitions at each recursion level. However, in the worst case of an already sorted input, it runs in O(n^2) time due to highly unbalanced partitions. A randomized version of quicksort chooses pivots randomly to avoid worst case behavior.
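The randomized variant mentioned above might look like this (a sketch, not the document's own pseudocode):

```python
import random

def partition(A, lo, hi):
    """Lomuto partition around a randomly chosen pivot."""
    # Random pivot avoids the O(n^2) behavior on already sorted input.
    r = random.randint(lo, hi)
    A[r], A[hi] = A[hi], A[r]
    pivot = A[hi]
    i = lo - 1
    for j in range(lo, hi):
        if A[j] <= pivot:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[hi] = A[hi], A[i + 1]
    return i + 1

def quicksort(A, lo=0, hi=None):
    """Randomized quicksort; expected O(n log n), worst case O(n^2)."""
    if hi is None:
        hi = len(A) - 1
    if lo < hi:
        p = partition(A, lo, hi)
        quicksort(A, lo, p - 1)   # sort elements <= pivot
        quicksort(A, p + 1, hi)   # sort elements > pivot

data = [9, 3, 7, 1, 8, 2]
quicksort(data)
print(data)  # → [1, 2, 3, 7, 8, 9]
```

No combine step is needed beyond the recursion itself: once both sub-arrays are sorted in place, the whole array is sorted.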
The document discusses the 0-1 knapsack problem and provides an example of solving it using dynamic programming. The 0-1 knapsack problem aims to maximize the total value of items selected from a list that have a total weight less than or equal to the knapsack's capacity, where each item must either be fully included or excluded. The document outlines a dynamic programming algorithm that builds a table to store the maximum value for each item subset at each possible weight, recursively considering whether or not to include each additional item.
The document describes the 0-1 knapsack problem and how to solve it using dynamic programming. The 0-1 knapsack problem involves packing items of different weights and values into a knapsack of maximum capacity to maximize the total value without exceeding the weight limit. A dynamic programming algorithm is presented that breaks the problem down into subproblems and uses optimal substructure and overlapping subproblems to arrive at the optimal solution in O(nW) time, improving on the brute force O(2^n) time. An example is shown step-by-step to illustrate the algorithm.
Knapsack problem: given some items, pack the knapsack to get the maximum total value. Each item has some weight and some value. The total weight that we can carry is no more than some fixed number W, so we must consider the weights of the items as well as their values.
The document describes the 0-1 knapsack problem and provides an example of solving it using dynamic programming. The 0-1 knapsack problem involves packing items into a knapsack of maximum capacity W to maximize the total value of packed items, where each item has a weight and value. The document defines the problem as a recursive subproblem and provides pseudocode for a dynamic programming algorithm that runs in O(nW) time, improving on the brute force O(2^n) time. It then works through an example application of the algorithm to a sample problem with 4 items and knapsack capacity of 5.
This presentation helps develop your knowledge of the knapsack problem and resolve doubts about it. The slides also cover the memory function (memoization) technique; use them to build your understanding of both the knapsack problem and memory functions.
This document discusses graph algorithms and directed acyclic graphs (DAGs). It explains that the edges in a graph can be identified as tree, back, forward, or cross edges based on the color of vertices during depth-first search (DFS). It also defines DAGs as directed graphs without cycles and describes how to perform a topological sort of a DAG by inserting vertices into a linked list based on their finishing times from DFS. Finally, it discusses how to find strongly connected components (SCCs) in a graph using DFS on the original graph and its transpose.
This document discusses string matching algorithms. It begins with an introduction to the naive string matching algorithm and its quadratic runtime. Then it proposes three improved algorithms: FC-RJ, FLC-RJ, and FMLC-RJ, which attempt to match patterns by restricting comparisons based on the first, first and last, or first, middle, and last characters, respectively. Experimental results show that these three proposed algorithms outperform the naive algorithm by reducing execution time, with FMLC-RJ working best for three-character patterns.
The document discusses shortest path problems and algorithms. It defines the shortest path problem as finding the minimum weight path between two vertices in a weighted graph. It presents the Bellman-Ford algorithm, which can handle graphs with negative edge weights but detects negative cycles. It also presents Dijkstra's algorithm, which only works for graphs without negative edge weights. Key steps of the algorithms include initialization, relaxation of edges to update distance estimates, and ensuring the shortest path property is satisfied.
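Of the two algorithms mentioned, Dijkstra's is the simpler to sketch. This minimal version assumes non-negative edge weights; the adjacency-list graph is chosen for illustration:

```python
import heapq

def dijkstra(adj, source):
    """Dijkstra's shortest-path algorithm for non-negative edge weights.

    adj: {u: [(v, weight), ...]}; returns a dict of shortest distances.
    """
    dist = {u: float('inf') for u in adj}
    dist[source] = 0
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue               # stale queue entry, skip it
        for v, w in adj[u]:
            if d + w < dist[v]:    # relaxation: improve the estimate
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

adj = {'a': [('b', 4), ('c', 1)],
       'b': [('d', 1)],
       'c': [('b', 2), ('d', 5)],
       'd': []}
print(dijkstra(adj, 'a'))  # → {'a': 0, 'b': 3, 'c': 1, 'd': 4}
```

The `if d + w < dist[v]` line is the relaxation step the summary refers to; Bellman-Ford applies the same relaxation but over all edges repeatedly, which is what lets it handle negative weights.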
The document discusses strongly connected component decomposition (SCCD) which uses depth-first search (DFS) to separate a directed graph into subsets of mutually reachable vertices. It describes running DFS on the original graph and its transpose to find these subsets in Θ(V+E) time, then provides an example applying the three step process of running DFS on the graph and transpose, finding two strongly connected components.
Red-black trees are self-balancing binary search trees. They guarantee an O(log n) running time for operations by ensuring that no path from the root to a leaf is more than twice as long as any other. Nodes are colored red or black, and properties of the coloring are designed to keep the tree balanced. Inserting and deleting nodes may violate these properties, so rotations are used to restore the red-black properties and balance of the tree.
This document discusses recurrences and the master method for solving recurrence relations. It defines a recurrence as an equation that describes a function in terms of its value on smaller inputs. The master method provides three cases for solving recurrences of the form T(n) = aT(n/b) + f(n). If f(n) is asymptotically smaller than n^(log_b a), the solution is Θ(n^(log_b a)). If f(n) is Θ(n^(log_b a)), the solution is Θ(n^(log_b a) lg n). If f(n) is asymptotically larger and the regularity condition holds, the solution is Θ(f(n)). It provides examples of applying each case.
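Worked examples of the three cases, with recurrences chosen for illustration (not necessarily the document's own examples):

```latex
\begin{align*}
T(n) &= 8\,T(n/2) + n   &&\text{Case 1: } f(n)=n \text{ is smaller than } n^{\log_2 8}=n^3, & T(n) &= \Theta(n^3) \\
T(n) &= 2\,T(n/2) + n   &&\text{Case 2: } f(n)=n = \Theta(n^{\log_2 2}), \text{ e.g. merge sort,} & T(n) &= \Theta(n \log n) \\
T(n) &= 2\,T(n/2) + n^2 &&\text{Case 3: } f(n)=n^2 \text{ dominates } n^{\log_2 2}=n \text{ (regularity holds)}, & T(n) &= \Theta(n^2)
\end{align*}
```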
The document discusses the Rabin-Karp algorithm for string matching. It defines Rabin-Karp as a string search algorithm that compares hash values of strings rather than the strings themselves. It explains that Rabin-Karp works by calculating a hash value for the pattern and text subsequences to compare, and only does a brute force comparison when hash values match. The worst-case complexity is O((n-m+1)m), but the average case is O(n+m) plus processing spurious hits. Real-life applications include bioinformatics to find protein similarities.
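A sketch of the rolling-hash idea just described; the base and modulus are illustrative choices, and a small modulus is used deliberately so spurious hits are possible and must be filtered:

```python
def rabin_karp(text, pattern, base=256, mod=101):
    """Rabin-Karp: compare rolling hashes, verify matches by brute force."""
    n, m = len(text), len(pattern)
    if m > n:
        return []
    h = pow(base, m - 1, mod)          # weight of the leading character
    p_hash = t_hash = 0
    for i in range(m):                 # hash the pattern and first window
        p_hash = (base * p_hash + ord(pattern[i])) % mod
        t_hash = (base * t_hash + ord(text[i])) % mod
    matches = []
    for s in range(n - m + 1):
        # Brute-force check only on hash match filters spurious hits.
        if p_hash == t_hash and text[s:s + m] == pattern:
            matches.append(s)
        if s < n - m:
            # Roll the hash: drop text[s], append text[s + m].
            t_hash = (base * (t_hash - ord(text[s]) * h)
                      + ord(text[s + m])) % mod
    return matches

print(rabin_karp("abracadabra", "abra"))  # → [0, 7]
```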
The document discusses minimum spanning trees (MST) and two algorithms for finding them: Prim's algorithm and Kruskal's algorithm. It begins by defining an MST as a spanning tree (connected acyclic graph containing all vertices) with minimum total edge weight. Prim's algorithm grows a single tree by repeatedly adding the minimum weight edge connecting the growing tree to another vertex. Kruskal's algorithm grows a forest by repeatedly merging two components via the minimum weight edge connecting them. Both algorithms produce optimal MSTs by adding only "safe" edges, edges guaranteed to belong to some minimum spanning tree.
This document discusses the analysis of insertion sort and merge sort algorithms. It covers the worst-case and average-case analysis of insertion sort. For merge sort, it describes the divide-and-conquer technique, the merge sort algorithm including recursive calls, how it works to merge elements, and analyzes merge sort through constructing a recursion tree to prove its runtime is O(n log n).
The document discusses loop invariants and uses insertion sort as an example. The invariant for insertion sort is that at the start of each iteration of the outer for loop, the elements in A[1...j-1] are sorted. It shows that this invariant is true before the first iteration, remains true after each iteration by how insertion sort works, and when the loops terminate the entire array A[1...n] will be sorted, proving correctness.
Linear sorting algorithms like counting sort, bucket sort, and radix sort can sort arrays of numbers in linear O(n) time by exploiting properties of the data. Counting sort works for integers within a range [0,r] by counting the frequency of each number and using the frequencies to place numbers in the correct output positions. Bucket sort places numbers uniformly distributed between 0 and 1 into buckets and sorts the buckets. Radix sort treats multi-digit numbers as strings by sorting based on individual digit positions from least to most significant.
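The counting sort described above, as a short sketch (the range bound r is assumed to be given with the input):

```python
def counting_sort(A, r):
    """Stable counting sort for integers in the range [0, r]; O(n + r)."""
    count = [0] * (r + 1)
    for x in A:
        count[x] += 1                 # frequency of each value
    for i in range(1, r + 1):
        count[i] += count[i - 1]      # prefix sums give final positions
    out = [0] * len(A)
    for x in reversed(A):             # reversed scan keeps the sort stable
        count[x] -= 1
        out[count[x]] = x
    return out

print(counting_sort([4, 1, 3, 4, 3, 0, 2], 4))  # → [0, 1, 2, 3, 3, 4, 4]
```

Stability matters because radix sort, mentioned in the same summary, relies on a stable sort of each digit position from least to most significant.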
The document discusses heap data structures and algorithms. A heap is a binary tree that satisfies the heap property: in a max-heap, each parent is greater than or equal to its children. Common operations on heaps, such as building a heap from an array, are described.
This presentation showcases a detailed catalogue of testing solutions aligned with ISO 4548-9, the international standard for evaluating the anti-drain valve performance in full-flow lubricating oil filters used in internal combustion engines.
Topics covered include:
Digital Crime – Substantive Criminal Law – General Conditions – Offenses – In...ManiMaran230751
Digital Crime – Substantive Criminal Law – General Conditions – Offenses – Investigation Methods for
Collecting Digital Evidence – International Cooperation to Collect Digital Evidence.
MODULE 5 BUILDING PLANNING AND DESIGN SY BTECH ACOUSTICS SYSTEM IN BUILDINGDr. BASWESHWAR JIRWANKAR
: Introduction to Acoustics & Green Building -
Absorption of sound, various materials, Sabine’s formula, optimum reverberation time, conditions for good acoustics Sound insulation:
Acceptable noise levels, noise prevention at its source, transmission of noise, Noise control-general considerations
Green Building: Concept, Principles, Materials, Characteristics, Applications
ISO 4020-6.1 – Filter Cleanliness Test Rig: Precision Testing for Fuel Filter Integrity
Explore the design, functionality, and standards compliance of our advanced Filter Cleanliness Test Rig developed according to ISO 4020-6.1. This rig is engineered to evaluate fuel filter cleanliness levels with high accuracy and repeatability—critical for ensuring the performance and durability of fuel systems.
🔬 Inside This Presentation:
Overview of ISO 4020-6.1 testing protocols
Rig components and schematic layout
Test methodology and data acquisition
Applications in automotive and industrial filtration
Key benefits: accuracy, reliability, compliance
Perfect for R&D engineers, quality assurance teams, and lab technicians focused on filtration performance and standard compliance.
🛠️ Ensure Filter Cleanliness — Validate with Confidence.
Tesia Dobrydnia brings her many talents to her career as a chemical engineer in the oil and gas industry. With the same enthusiasm she puts into her work, she engages in hobbies and activities including watching movies and television shows, reading, backpacking, and snowboarding. She is a Relief Senior Engineer for Chevron and has been employed by the company since 2007. Tesia is considered a leader in her industry and is known to for her grasp of relief design standards.
This presentation provides a detailed overview of air filter testing equipment, including its types, working principles, and industrial applications. Learn about key performance indicators such as filtration efficiency, pressure drop, and particulate holding capacity. The slides highlight standard testing methods (e.g., ISO 16890, EN 1822, ASHRAE 52.2), equipment configurations (such as aerosol generators, particle counters, and test ducts), and the role of automation and data logging in modern systems. Ideal for engineers, quality assurance professionals, and researchers involved in HVAC, automotive, cleanroom, or industrial filtration systems.
May 2025: Top 10 Cited Articles in Software Engineering & Applications Intern...sebastianku31
The International Journal of Software Engineering & Applications (IJSEA) is a bi-monthly open access peer-reviewed journal that publishes articles which contribute new results in all areas of the Software Engineering & Applications. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on understanding Modern software engineering concepts & establishing new collaborations in these areas.
Kevin Corke Spouse Revealed A Deep Dive Into His Private Life.pdfMedicoz Clinic
Kevin Corke, a respected American journalist known for his work with Fox News, has always kept his personal life away from the spotlight. Despite his public presence, details about his spouse remain mostly private. Fans have long speculated about his marital status, but Corke chooses to maintain a clear boundary between his professional and personal life. While he occasionally shares glimpses of his family on social media, he has not publicly disclosed his wife’s identity. This deep dive into his private life reveals a man who values discretion, keeping his loved ones shielded from media attention.
"The Enigmas of the Riemann Hypothesis" by Julio ChaiJulio Chai
In the vast tapestry of the history of mathematics, where the brightest minds have woven with threads of logical reasoning and flash-es of intuition, the Riemann Hypothesis emerges as a mystery that chal-lenges the limits of human understanding. To grasp its origin and signif-icance, it is necessary to return to the dawn of a discipline that, like an incomplete map, sought to decipher the hidden patterns in numbers. This journey, comparable to an exploration into the unknown, takes us to a time when mathematicians were just beginning to glimpse order in the apparent chaos of prime numbers.
Centuries ago, when the ancient Greeks contemplated the stars and sought answers to the deepest questions in the sky, they also turned their attention to the mysteries of numbers. Pythagoras and his followers revered numbers as if they were divine entities, bearers of a universal harmony. Among them, prime numbers stood out as the cornerstones of an infinite cathedral—indivisible and enigmatic—hiding their ar-rangement beneath a veil of apparent randomness. Yet, their importance in building the edifice of number theory was already evident.
The Middle Ages, a period in which the light of knowledge flick-ered in rhythm with the storms of history, did not significantly advance this quest. It was the Renaissance that restored lost splendor to mathe-matical thought. In this context, great thinkers like Pierre de Fermat and Leonhard Euler took up the torch, illuminating the path toward a deeper understanding of prime numbers. Fermat, with his sharp intuition and ability to find patterns where others saw disorder, and Euler, whose overflowing genius connected number theory with other branches of mathematics, were the architects of a new era of exploration. Like build-ers designing a bridge over an unknown abyss, their contributions laid the groundwork for later discoveries.
2. Knapsack 0-1 Problem
• The goal is to maximize the value of a knapsack that can hold at most W units (e.g. lbs or kg) worth of goods from a list of items I0, I1, …, In-1.
– Each item has 2 attributes:
1) Value – let this be vi for item Ii
2) Weight – let this be wi for item Ii
Dr. AMIT KUMAR @JUET
3. Knapsack 0-1 Problem
• The difference between this problem and the fractional knapsack one is that you CANNOT take a fraction of an item.
– You can either take it or not.
– Hence the name Knapsack 0-1 problem.
4. Knapsack 0-1 Problem
• Brute Force
– The naïve way to solve this problem is to cycle through all 2^n subsets of the n items and pick the subset with a legal weight that maximizes the value of the knapsack.
– We can come up with a dynamic programming algorithm that will USUALLY do better than this brute force technique.
5. Knapsack 0-1 Problem
• As we did before, we are going to solve the problem in terms of sub-problems.
– So let's try to do that…
• Our first attempt might be to characterize a sub-problem as follows:
– Let Sk be the optimal subset of elements from {I0, I1, …, Ik}.
• What we find is that the optimal subset from the elements {I0, I1, …, Ik+1} may not correspond to the optimal subset of elements from {I0, I1, …, Ik} in any regular pattern.
– Basically, the solution to the optimization problem for Sk+1 might NOT contain the optimal solution from problem Sk.
6. Knapsack 0-1 Problem
• Let's illustrate that point with an example:

Item  Weight  Value
I0    3       10
I1    8       4
I2    9       9
I3    8       11

• The maximum weight the knapsack can hold is 20.
• The best set of items from {I0, I1, I2} is {I0, I1, I2}.
• BUT the best set of items from {I0, I1, I2, I3} is {I0, I2, I3}.
– In this example, note that this optimal solution, {I0, I2, I3}, does NOT build upon the previous optimal solution, {I0, I1, I2}.
• (Instead it builds upon the solution {I0, I2}, which is really the optimal subset of {I0, I1, I2} with weight 12 or less.)
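The claim on this slide can be checked mechanically by enumerating all subsets. A minimal brute-force sketch in Python, using the item data above (the helper name best_subset is illustrative, not from the slides):

```python
from itertools import combinations

# Items I0..I3 as (weight, value) pairs; capacity W = 20, as on the slide.
items = [(3, 10), (8, 4), (9, 9), (8, 11)]
W = 20

def best_subset(items, W):
    """Try all 2^n subsets; return (value, indices) of the best legal one."""
    best = (0, ())
    n = len(items)
    for r in range(n + 1):
        for combo in combinations(range(n), r):
            weight = sum(items[i][0] for i in combo)
            value = sum(items[i][1] for i in combo)
            if weight <= W and value > best[0]:
                best = (value, combo)
    return best

# Best subset of {I0, I1, I2} uses all three items (value 23)...
print(best_subset(items[:3], W))   # (23, (0, 1, 2))
# ...but the best subset of {I0, I1, I2, I3} is {I0, I2, I3} (value 30).
print(best_subset(items, W))       # (30, (0, 2, 3))
```

This also shows why brute force is O(2^n): the outer loops touch every subset once.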
7. Knapsack 0-1 Problem
• So now we must re-work the way we build upon previous sub-problems…
– Let B[k, w] represent the maximum total value of a subset of Sk with total weight at most w.
– Our goal is to find B[n, W], where n is the total number of items and W is the maximal weight the knapsack can carry.
• So our recursive formula for sub-problems:
B[k, w] = B[k-1, w], if wk > w
B[k, w] = max { B[k-1, w], B[k-1, w-wk] + vk }, otherwise
• In English, this means that the best subset of Sk that has total weight w is either:
1) the best subset of Sk-1 that has total weight w, or
2) the best subset of Sk-1 that has total weight w-wk, plus item k.
8. Knapsack 0-1 Problem – Recursive Formula
• The best subset of Sk that has total weight w either contains item k or it does not.
• First case: wk > w
– Item k can't be part of the solution! If it were, the total weight would be > w, which is unacceptable.
• Second case: wk ≤ w
– Item k can be in the solution, and we choose the case with the greater value.
9. Knapsack 0-1 Algorithm
for w = 0 to W {              // Initialize 1st row to 0's
  B[0,w] = 0
}
for i = 1 to n {              // Initialize 1st column to 0's
  B[i,0] = 0
}
for i = 1 to n {
  for w = 0 to W {
    if wi <= w {              // item i can be in the solution
      if vi + B[i-1,w-wi] > B[i-1,w]
        B[i,w] = vi + B[i-1,w-wi]
      else
        B[i,w] = B[i-1,w]
    }
    else B[i,w] = B[i-1,w]    // wi > w
  }
}
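The pseudocode above can be sketched as a runnable Python function (a minimal translation; the name knapsack_01 and the 0-indexed input lists are illustrative choices):

```python
def knapsack_01(weights, values, W):
    """Bottom-up 0-1 knapsack; returns the full (n+1) x (W+1) table B.
    Item i (1-indexed, as on the slides) maps to weights[i-1], values[i-1]."""
    n = len(weights)
    B = [[0] * (W + 1) for _ in range(n + 1)]   # row 0 and column 0 start at 0
    for i in range(1, n + 1):
        wi, vi = weights[i - 1], values[i - 1]
        for w in range(0, W + 1):
            if wi <= w and vi + B[i - 1][w - wi] > B[i - 1][w]:
                B[i][w] = vi + B[i - 1][w - wi]   # item i is in the solution
            else:
                B[i][w] = B[i - 1][w]             # item i is left out
    return B

# The example instance from the next slide: n = 4, W = 5.
B = knapsack_01([2, 3, 4, 5], [3, 4, 5, 6], 5)
print(B[4][5])  # 7
```

Each row of the returned table matches the rows filled in step by step on the following slides.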
10. Knapsack 0-1 Problem
• Let’s run our algorithm on the following data:
– n = 4 (# of elements)
– W = 5 (max weight)
– Elements (weight, value):
(2,3), (3,4), (4,5), (5,6)
11. Knapsack 0-1 Example
// Initialize the base cases
for w = 0 to W
B[0,w] = 0
for i = 1 to n
B[i,0] = 0
i / w 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0
2 0
3 0
4 0
12. Knapsack 0-1 Example
if wi <= w //item i can be in the solution
if vi + B[i-1,w-wi] > B[i-1,w]
B[i,w] = vi + B[i-1,w- wi]
else
B[i,w] = B[i-1,w]
else B[i,w] = B[i-1,w] // wi > w
Items:
1: (2,3)
2: (3,4)
3: (4,5)
4: (5,6)
i / w 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0
2 0
3 0
4 0
i = 1
vi = 3
wi = 2
w = 1
w-wi = -1
i / w 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0
2 0
3 0
4 0
13. Knapsack 0-1 Example
if wi <= w //item i can be in the solution
if vi + B[i-1,w-wi] > B[i-1,w]
B[i,w] = vi + B[i-1,w- wi]
else
B[i,w] = B[i-1,w]
else B[i,w] = B[i-1,w] // wi > w
Items:
1: (2,3)
2: (3,4)
3: (4,5)
4: (5,6)
i / w 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0
2 0
3 0
4 0
i = 1
vi = 3
wi = 2
w = 2
w-wi = 0
i / w 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 3
2 0
3 0
4 0
14. Knapsack 0-1 Example
if wi <= w //item i can be in the solution
if vi + B[i-1,w-wi] > B[i-1,w]
B[i,w] = vi + B[i-1,w- wi]
else
B[i,w] = B[i-1,w]
else B[i,w] = B[i-1,w] // wi > w
Items:
1: (2,3)
2: (3,4)
3: (4,5)
4: (5,6)
i / w 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 3
2 0
3 0
4 0
i = 1
vi = 3
wi = 2
w = 3
w-wi = 1
i / w 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 3 3
2 0
3 0
4 0
15. Knapsack 0-1 Example
if wi <= w //item i can be in the solution
if vi + B[i-1,w-wi] > B[i-1,w]
B[i,w] = vi + B[i-1,w- wi]
else
B[i,w] = B[i-1,w]
else B[i,w] = B[i-1,w] // wi > w
Items:
1: (2,3)
2: (3,4)
3: (4,5)
4: (5,6)
i / w 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 3 3
2 0
3 0
4 0
i = 1
vi = 3
wi = 2
w = 4
w-wi = 2
i / w 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 3 3 3
2 0
3 0
4 0
16. Knapsack 0-1 Example
if wi <= w //item i can be in the solution
if vi + B[i-1,w-wi] > B[i-1,w]
B[i,w] = vi + B[i-1,w- wi]
else
B[i,w] = B[i-1,w]
else B[i,w] = B[i-1,w] // wi > w
Items:
1: (2,3)
2: (3,4)
3: (4,5)
4: (5,6)
i / w 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 3 3 3
2 0
3 0
4 0
i = 1
vi = 3
wi = 2
w = 5
w-wi = 3
i / w 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 3 3 3 3
2 0
3 0
4 0
17. Knapsack 0-1 Example
if wi <= w //item i can be in the solution
if vi + B[i-1,w-wi] > B[i-1,w]
B[i,w] = vi + B[i-1,w- wi]
else
B[i,w] = B[i-1,w]
else B[i,w] = B[i-1,w] // wi > w
Items:
1: (2,3)
2: (3,4)
3: (4,5)
4: (5,6)
i / w 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 3 3 3 3
2 0
3 0
4 0
i = 2
vi = 4
wi = 3
w = 1
w-wi = -2
i / w 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 3 3 3 3
2 0 0
3 0
4 0
18. Knapsack 0-1 Example
if wi <= w //item i can be in the solution
if vi + B[i-1,w-wi] > B[i-1,w]
B[i,w] = vi + B[i-1,w- wi]
else
B[i,w] = B[i-1,w]
else B[i,w] = B[i-1,w] // wi > w
Items:
1: (2,3)
2: (3,4)
3: (4,5)
4: (5,6)
i / w 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 3 3 3 3
2 0 0
3 0
4 0
i = 2
vi = 4
wi = 3
w = 2
w-wi = -1
i / w 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 3 3 3 3
2 0 0 3
3 0
4 0
19. Knapsack 0-1 Example
if wi <= w //item i can be in the solution
if vi + B[i-1,w-wi] > B[i-1,w]
B[i,w] = vi + B[i-1,w- wi]
else
B[i,w] = B[i-1,w]
else B[i,w] = B[i-1,w] // wi > w
Items:
1: (2,3)
2: (3,4)
3: (4,5)
4: (5,6)
i / w 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 3 3 3 3
2 0 0 3
3 0
4 0
i = 2
vi = 4
wi = 3
w = 3
w-wi = 0
i / w 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 3 3 3 3
2 0 0 3 4
3 0
4 0
20. Knapsack 0-1 Example
if wi <= w //item i can be in the solution
if vi + B[i-1,w-wi] > B[i-1,w]
B[i,w] = vi + B[i-1,w- wi]
else
B[i,w] = B[i-1,w]
else B[i,w] = B[i-1,w] // wi > w
Items:
1: (2,3)
2: (3,4)
3: (4,5)
4: (5,6)
i / w 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 3 3 3 3
2 0 0 3 4
3 0
4 0
i = 2
vi = 4
wi = 3
w = 4
w-wi = 1
i / w 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 3 3 3 3
2 0 0 3 4 4
3 0
4 0
21. Knapsack 0-1 Example
if wi <= w //item i can be in the solution
if vi + B[i-1,w-wi] > B[i-1,w]
B[i,w] = vi + B[i-1,w- wi]
else
B[i,w] = B[i-1,w]
else B[i,w] = B[i-1,w] // wi > w
Items:
1: (2,3)
2: (3,4)
3: (4,5)
4: (5,6)
i / w 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 3 3 3 3
2 0 0 3 4 4
3 0
4 0
i = 2
vi = 4
wi = 3
w = 5
w-wi = 2
i / w 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 3 3 3 3
2 0 0 3 4 4 7
3 0
4 0
22. Knapsack 0-1 Example
if wi <= w //item i can be in the solution
if vi + B[i-1,w-wi] > B[i-1,w]
B[i,w] = vi + B[i-1,w- wi]
else
B[i,w] = B[i-1,w]
else B[i,w] = B[i-1,w] // wi > w
Items:
1: (2,3)
2: (3,4)
3: (4,5)
4: (5,6)
i / w 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 3 3 3 3
2 0 0 3 4 4 7
3 0
4 0
i = 3
vi = 5
wi = 4
w = 1..3
w-wi = -3..-1
i / w 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 3 3 3 3
2 0 0 3 4 4 7
3 0 0 3 4
4 0
23. Knapsack 0-1 Example
if wi <= w //item i can be in the solution
if vi + B[i-1,w-wi] > B[i-1,w]
B[i,w] = vi + B[i-1,w- wi]
else
B[i,w] = B[i-1,w]
else B[i,w] = B[i-1,w] // wi > w
Items:
1: (2,3)
2: (3,4)
3: (4,5)
4: (5,6)
i / w 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 3 3 3 3
2 0 0 3 4 4 7
3 0 0 3 4
4 0
i = 3
vi = 5
wi = 4
w = 4
w-wi = 0
i / w 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 3 3 3 3
2 0 0 3 4 4 7
3 0 0 3 4 5
4 0
24. Knapsack 0-1 Example
if wi <= w //item i can be in the solution
if vi + B[i-1,w-wi] > B[i-1,w]
B[i,w] = vi + B[i-1,w- wi]
else
B[i,w] = B[i-1,w]
else B[i,w] = B[i-1,w] // wi > w
Items:
1: (2,3)
2: (3,4)
3: (4,5)
4: (5,6)
i / w 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 3 3 3 3
2 0 0 3 4 4 7
3 0 0 3 4 5
4 0
i = 3
vi = 5
wi = 4
w = 5
w-wi = 1
i / w 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 3 3 3 3
2 0 0 3 4 4 7
3 0 0 3 4 5 7
4 0
25. Knapsack 0-1 Example
if wi <= w //item i can be in the solution
if vi + B[i-1,w-wi] > B[i-1,w]
B[i,w] = vi + B[i-1,w- wi]
else
B[i,w] = B[i-1,w]
else B[i,w] = B[i-1,w] // wi > w
Items:
1: (2,3)
2: (3,4)
3: (4,5)
4: (5,6)
i / w 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 3 3 3 3
2 0 0 3 4 4 7
3 0 0 3 4 5 7
4 0
i = 4
vi = 6
wi = 5
w = 1..4
w-wi = -4..-1
i / w 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 3 3 3 3
2 0 0 3 4 4 7
3 0 0 3 4 5 7
4 0 0 3 4 5
26. Knapsack 0-1 Example
if wi <= w //item i can be in the solution
if vi + B[i-1,w-wi] > B[i-1,w]
B[i,w] = vi + B[i-1,w- wi]
else
B[i,w] = B[i-1,w]
else B[i,w] = B[i-1,w] // wi > w
Items:
1: (2,3)
2: (3,4)
3: (4,5)
4: (5,6)
i / w 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 3 3 3 3
2 0 0 3 4 4 7
3 0 0 3 4 5 7
4 0 0 3 4 5
i = 4
vi = 6
wi = 5
w = 5
w-wi = 0
i / w 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 3 3 3 3
2 0 0 3 4 4 7
3 0 0 3 4 5 7
4 0 0 3 4 5 7
27. Knapsack 0-1 Example
We're DONE!!
The maximum value that can be carried in this knapsack is 7.
Items:
1: (2,3)
2: (3,4)
3: (4,5)
4: (5,6)
i / w 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 3 3 3 3
2 0 0 3 4 4 7
3 0 0 3 4 5 7
4 0 0 3 4 5 7
28. Knapsack 0-1 Algorithm
• This algorithm only finds the maximum possible value that can be carried in the knapsack – the value stored in B[n,W].
• To know which items make up this maximum value, we need to trace back through the table.
29. Knapsack 0-1 Algorithm
Finding the Items
• Let i = n and k = W
while i > 0 and k > 0
  if B[i, k] ≠ B[i-1, k] then
    mark the ith item as in the knapsack
    i = i-1, k = k-wi
  else
    i = i-1    // the ith item is not in the knapsack
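A sketch of this traceback in Python, applied to the finished table from the worked example (the function name find_items is illustrative; the table B is copied from slide 27):

```python
def find_items(B, weights, W):
    """Walk back through a filled DP table, collecting the 1-indexed items taken."""
    taken = []
    i, k = len(B) - 1, W             # start at B[n, W]
    while i > 0 and k > 0:
        if B[i][k] != B[i - 1][k]:   # value changed, so item i was taken
            taken.append(i)
            k -= weights[i - 1]      # drop item i's weight from the budget
        i -= 1                       # move up a row either way
    return taken

# Finished table for items (2,3), (3,4), (4,5), (5,6) with W = 5:
B = [
    [0, 0, 0, 0, 0, 0],
    [0, 0, 3, 3, 3, 3],
    [0, 0, 3, 4, 4, 7],
    [0, 0, 3, 4, 5, 7],
    [0, 0, 3, 4, 5, 7],
]
print(find_items(B, [2, 3, 4, 5], 5))  # [2, 1]
```

The walk matches the next slides: items 3 and 4 are skipped (their rows don't change B[i][5]), then item 2 and item 1 are taken.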
30. Knapsack 0-1 Algorithm
Finding the Items
Items:
1: (2,3)
2: (3,4)
3: (4,5)
4: (5,6)
i = 4
k = 5
vi = 6
wi = 5
B[i,k] = 7
B[i-1,k] = 7
i / w 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 3 3 3 3
2 0 0 3 4 4 7
3 0 0 3 4 5 7
4 0 0 3 4 5 7
i = n , k = W
while i > 0 and k > 0
if B[i, k] ≠ B[i-1, k] then
mark the ith item as in the knapsack
i = i-1, k = k-wi
else
i = i-1
Knapsack:
31. Knapsack 0-1 Algorithm
Finding the Items
Items:
1: (2,3)
2: (3,4)
3: (4,5)
4: (5,6)
i = 3
k = 5
vi = 5
wi = 4
B[i,k] = 7
B[i-1,k] = 7
i / w 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 3 3 3 3
2 0 0 3 4 4 7
3 0 0 3 4 5 7
4 0 0 3 4 5 7
i = n , k = W
while i > 0 and k > 0
if B[i, k] ≠ B[i-1, k] then
mark the ith item as in the knapsack
i = i-1, k = k-wi
else
i = i-1
Knapsack:
32. Knapsack 0-1 Algorithm
Finding the Items
Items:
1: (2,3)
2: (3,4)
3: (4,5)
4: (5,6)
i = 2
k = 5
vi = 4
wi = 3
B[i,k] = 7
B[i-1,k] = 3
k – wi = 2
i / w 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 3 3 3 3
2 0 0 3 4 4 7
3 0 0 3 4 5 7
4 0 0 3 4 5 7
i = n , k = W
while i > 0 and k > 0
if B[i, k] ≠ B[i-1, k] then
mark the ith item as in the knapsack
i = i-1, k = k-wi
else
i = i-1
Knapsack:
Item 2
33. Knapsack 0-1 Algorithm
Finding the Items
Items:
1: (2,3)
2: (3,4)
3: (4,5)
4: (5,6)
i = 1
k = 2
vi = 3
wi = 2
B[i,k] = 3
B[i-1,k] = 0
k – wi = 0
i / w 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 3 3 3 3
2 0 0 3 4 4 7
3 0 0 3 4 5 7
4 0 0 3 4 5 7
i = n , k = W
while i > 0 and k > 0
if B[i, k] ≠ B[i-1, k] then
mark the ith item as in the knapsack
i = i-1, k = k-wi
else
i = i-1
Knapsack:
Item 1
Item 2
34. Knapsack 0-1 Algorithm
Finding the Items
Items:
1: (2,3)
2: (3,4)
3: (4,5)
4: (5,6)
i = 1
k = 2
vi = 3
wi = 2
B[i,k] = 3
B[i-1,k] = 0
k – wi = 0
i / w 0 1 2 3 4 5
0 0 0 0 0 0 0
1 0 0 3 3 3 3
2 0 0 3 4 4 7
3 0 0 3 4 5 7
4 0 0 3 4 5 7
k = 0, so we’re DONE!
The optimal knapsack should contain:
Item 1 and Item 2
Knapsack:
Item 1
Item 2
35. Knapsack 0-1 Problem – Run Time
for w = 0 to W            // O(W)
  B[0,w] = 0
for i = 1 to n            // O(n)
  B[i,0] = 0
for i = 1 to n            // repeated n times
  for w = 0 to W          // O(W)
    < the rest of the code >

What is the running time of this algorithm?
O(n*W)
Remember that the brute-force algorithm takes O(2^n).
36. Knapsack Problem
1) Fill out the dynamic programming table for the knapsack problem to the right.
2) Trace back through the table to find the items in the knapsack.
37. References
• Slides adapted from Arup Guha's Computer Science II lecture notes:
http://www.cs.ucf.edu/~dmarino/ucf/cop3503/lectures/
• Additional material from the textbook:
Data Structures and Algorithm Analysis in Java (Second Edition) by Mark Allen Weiss
• Additional images: www.wikipedia.com, xkcd.com