
UNIT 5

Divide and Conquer Algorithm


A divide and conquer algorithm is a strategy of solving a large problem by
1. breaking the problem into smaller sub-problems
2. solving the sub-problems, and
3. combining them to get the desired output.
A divide and conquer algorithm is typically implemented using recursion.
How Do Divide and Conquer Algorithms Work?
Here are the steps involved:
1. Divide: Divide the given problem into sub-problems using recursion.
2. Conquer: Solve the smaller sub-problems recursively. If the subproblem is small
enough, then solve it directly.
3. Combine: Combine the solutions of the sub-problems that are part of the recursive
process to solve the actual problem.
Let us understand this concept with the help of an example.
Here, we will sort an array using the divide and conquer approach (i.e., merge sort).

1. Let the given array be:
[Figure: array for merge sort]
2. Divide the array into two halves.
[Figure: divide the array into two subparts]
3. Again, divide each subpart recursively into two halves until you get individual
elements.
[Figure: divide the array into smaller subparts]
4. Now, combine the individual elements in a sorted manner. Here, the conquer and
combine steps go side by side.
[Figure: combine the subparts]

Merging or Merge Sort:
 It divides the input array into two halves, calls itself for each half, and then merges the
two sorted halves.
Example:
 For example, consider the array of elements: 38, 27, 43, 3, 9, 82, 10.
 The array is recursively divided into two halves until each piece has size one,
as shown in the following figure.

[38 27 43 3 9 82 10]

[38 27 43 3]   [9 82 10]

[38 27] [43 3]   [9 82] [10]

[38] [27] [43] [3]   [9] [82] [10]
 Once the size becomes one, the merge process comes into action and starts merging
the sorted subarrays until the complete array is merged.

MergeSort Algorithm
The MergeSort function repeatedly divides the array into two halves until we reach a stage
where we try to perform MergeSort on a subarray of size 1 i.e. p == r.
After that, the merge function comes into play and combines the sorted arrays into larger
arrays until the whole array is merged.
MergeSort(A, p, r):
    if p >= r
        return
    q = (p + r) / 2
    MergeSort(A, p, q)
    MergeSort(A, q+1, r)
    merge(A, p, q, r)

To sort an entire array, we need to call MergeSort(A, 0, length(A)-1).


As shown in the image below, the merge sort algorithm recursively divides the array into
halves until we reach the base case of array with 1 element. After that, the merge function
picks up the sorted sub-arrays and merges them to gradually sort the entire array.

[Figure: merge sort in action]


The Merge Step of Merge Sort
Every recursive algorithm depends on a base case and on the ability to combine the results
from the base cases. Merge sort is no different. The most important part of the merge sort
algorithm is, you guessed it, the merge step.
The merge step is the solution to the simple problem of merging two sorted lists (arrays) to
build one large sorted list (array).
The algorithm maintains three pointers: one for each of the two arrays and one for
maintaining the current index of the final sorted array.

Have we reached the end of either of the arrays?
    No:
        Compare the current elements of both arrays
        Copy the smaller element into the sorted array
        Advance the pointer of the array containing the smaller element
    Yes:
        Copy all remaining elements of the non-empty array
[Figure: merge step]

Writing the Code for Merge Algorithm


A noticeable difference between the merging step we described above and the one we use for
merge sort is that we only perform the merge function on consecutive sub-arrays.
This is why we only need the array, the first index, the last index of the first subarray (we
can calculate the first index of the second subarray) and the last index of the second subarray.
Our task is to merge two subarrays A[p..q] and A[q+1..r] to create a sorted array A[p..r]. So
the inputs to the function are A, p, q and r.
The merge function works as follows:
1. Create copies of the subarrays L <- A[p..q] and M <- A[q+1..r].
2. Create three pointers i, j and k
a. i maintains the current index of L, starting at 0
b. j maintains the current index of M, starting at 0
c. k maintains the current index of A[p..r], starting at p
3. Until we reach the end of either L or M, pick the smaller of the current elements
of L and M and place it in the correct position at A[p..r]
4. When we run out of elements in either L or M, pick up the remaining elements and
put them in A[p..r]
In code, this would look like:
// Merge two subarrays L and M into arr
void merge(int arr[], int p, int q, int r) {

  // Create L ← A[p..q] and M ← A[q+1..r]
  int n1 = q - p + 1;
  int n2 = r - q;

  int L[n1], M[n2];

  for (int i = 0; i < n1; i++)
    L[i] = arr[p + i];
  for (int j = 0; j < n2; j++)
    M[j] = arr[q + 1 + j];

  // Maintain current index of sub-arrays and main array
  int i, j, k;
  i = 0;
  j = 0;
  k = p;

  // Until we reach the end of either L or M, pick the smaller among
  // the elements of L and M and place it in the correct position at A[p..r]
  while (i < n1 && j < n2) {
    if (L[i] <= M[j]) {
      arr[k] = L[i];
      i++;
    } else {
      arr[k] = M[j];
      j++;
    }
    k++;
  }

  // When we run out of elements in either L or M,
  // pick up the remaining elements and put them in A[p..r]
  while (i < n1) {
    arr[k] = L[i];
    i++;
    k++;
  }

  while (j < n2) {
    arr[k] = M[j];
    j++;
    k++;
  }
}

Merge( ) Function Explained Step-By-Step


A lot is happening in this function, so let's take an example to see how this would work.
As usual, a picture speaks a thousand words.

[Figure: merging two consecutive subarrays of the array]
The array A[0..5] contains two sorted subarrays A[0..3] and A[4..5]. Let us see how the
merge function will merge the two arrays.
void merge(int arr[], int p, int q, int r) {
// Here, p = 0, q = 3, r = 5 (the last index of the array)
Step 1: Create duplicate copies of sub-arrays to be sorted
// Create L ← A[p..q] and M ← A[q+1..r]
int n1 = q - p + 1 = 3 - 0 + 1 = 4;
int n2 = r - q = 5 - 3 = 2;

int L[4], M[2];

for (int i = 0; i < 4; i++)
  L[i] = arr[p + i];
// L[0,1,2,3] = A[0,1,2,3] = [1,5,10,12]

for (int j = 0; j < 2; j++)
  M[j] = arr[q + 1 + j];
// M[0,1] = A[4,5] = [6,9]

[Figure: create copies of subarrays for merging]


Step 2: Maintain current index of sub-arrays and main array

int i, j, k;
i = 0;
j = 0;
k = p;
[Figure: maintain indices of the copies of the subarrays and the main array]
Step 3: Until we reach the end of either L or M, pick the smaller among the current elements
of L and M and place it in the correct position at A[p..r]
while (i < n1 && j < n2) {
if (L[i] <= M[j]) {
arr[k] = L[i]; i++;
}
else {
arr[k] = M[j];
j++;
}
k++;
}
[Figure: comparing individual elements of the sorted subarrays until we reach the end of one]
Step 4: When we run out of elements in either L or M, pick up the remaining elements
and put them in A[p..r]
// We exited the earlier loop because j < n2 doesn't hold
while (i < n1)
{
arr[k] = L[i];
i++;
k++;
}

[Figure: copy the remaining elements of the first array into the main array]
// In this example, j == n2 already, so the loop below does not execute
while (j < n2)
{
arr[k] = M[j];
j++;
k++;
}
}

[Figure: copy the remaining elements of the second array into the main array]

This step would have been needed if M still had elements remaining, i.e., if L had been
exhausted before M.
At the end of the merge function, the subarray A[p..r] is sorted.
// Merge sort in C

#include <stdio.h>

// Merge two subarrays L and M into arr
void merge(int arr[], int p, int q, int r) {

  // Create L ← A[p..q] and M ← A[q+1..r]
  int n1 = q - p + 1;
  int n2 = r - q;

  int L[n1], M[n2];

  for (int i = 0; i < n1; i++)
    L[i] = arr[p + i];
  for (int j = 0; j < n2; j++)
    M[j] = arr[q + 1 + j];

  // Maintain current index of sub-arrays and main array
  int i, j, k;
  i = 0;
  j = 0;
  k = p;

  // Until we reach the end of either L or M, pick the smaller among
  // the elements of L and M and place it in the correct position at A[p..r]
  while (i < n1 && j < n2) {
    if (L[i] <= M[j]) {
      arr[k] = L[i];
      i++;
    } else {
      arr[k] = M[j];
      j++;
    }
    k++;
  }

  // When we run out of elements in either L or M,
  // pick up the remaining elements and put them in A[p..r]
  while (i < n1) {
    arr[k] = L[i];
    i++;
    k++;
  }

  while (j < n2) {
    arr[k] = M[j];
    j++;
    k++;
  }
}

// Divide the array into two subarrays, sort them and merge them
void mergeSort(int arr[], int l, int r) {
  if (l < r) {

    // m is the point where the array is divided into two subarrays
    int m = l + (r - l) / 2;

    mergeSort(arr, l, m);
    mergeSort(arr, m + 1, r);

    // Merge the sorted subarrays
    merge(arr, l, m, r);
  }
}

// Print the array
void printArray(int arr[], int size) {
  for (int i = 0; i < size; i++)
    printf("%d ", arr[i]);
  printf("\n");
}

// Driver program
int main() {
  int arr[] = {6, 5, 12, 10, 9, 1};
  int size = sizeof(arr) / sizeof(arr[0]);

  mergeSort(arr, 0, size - 1);

  printf("Sorted array: \n");
  printArray(arr, size);
}

Quicksort Algorithm
Quicksort is a sorting algorithm based on the divide and conquer approach where
1. An array is divided into subarrays by selecting a pivot element (element selected
from the array).

While dividing the array, the pivot element should be positioned in such a way that
elements less than pivot are kept on the left side and elements greater than pivot are
on the right side of the pivot.
2. The left and right subarrays are also divided using the same approach. This process
continues until each subarray contains a single element.
3. At this point, elements are already sorted. Finally, elements are combined to form a
sorted array.
Working of Quicksort Algorithm
1. Select the Pivot Element
There are different variations of quicksort where the pivot element is selected from different
positions. Here, we will be selecting the rightmost element of the array as the pivot element.

[Figure: select a pivot element]


2. Rearrange the Array
Now the elements of the array are rearranged so that elements that are smaller than the pivot
are put on the left and the elements greater than the pivot are put on the right.

[Figure: put all the smaller elements on the left and all the greater elements on the right of the pivot]
Here's how we rearrange the array:
1. A pointer is fixed at the pivot element. The pivot element is compared with the
elements beginning from the first index.
[Figure: comparison of the pivot element with the element at the first index]
2. If the element is greater than the pivot element, a second pointer is set for that
element.
3. Now, the pivot is compared with the other elements. If an element smaller than the pivot
element is reached, the smaller element is swapped with the greater element found
earlier.
[Figure: the pivot is compared with other elements]
4. Again, the process is repeated to set the next greater element as the second pointer,
and it is swapped with another smaller element.
[Figure: the process is repeated to set the next greater element as the second pointer]
5. The process goes on until the second-last element is reached.
6. Finally, the pivot element is swapped with the second pointer.
[Figure: the pivot element is swapped with the second pointer]
3. Divide Subarrays
Pivot elements are again chosen for the left and the right sub-parts separately. And, step 2 is
repeated.

[Figure: select a pivot element in each half and put it at the correct place using recursion]

The subarrays are divided until each subarray is formed of a single element. At this point, the
array is already sorted.

Quick Sort Algorithm

quickSort(array, leftmostIndex, rightmostIndex)
  if (leftmostIndex < rightmostIndex)
    pivotIndex <- partition(array, leftmostIndex, rightmostIndex)
    quickSort(array, leftmostIndex, pivotIndex - 1)
    quickSort(array, pivotIndex + 1, rightmostIndex)

partition(array, leftmostIndex, rightmostIndex)
  set element at rightmostIndex as pivotElement
  storeIndex <- leftmostIndex - 1
  for i <- leftmostIndex to rightmostIndex - 1
    if element[i] <= pivotElement
      storeIndex++
      swap element[i] and element[storeIndex]
  swap pivotElement and element[storeIndex + 1]
  return storeIndex + 1

Quicksort Code
// Quick sort in C

#include <stdio.h>

// function to swap elements
void swap(int *a, int *b) {
  int t = *a;
  *a = *b;
  *b = t;
}

// function to find the partition position
int partition(int array[], int low, int high) {

  // select the rightmost element as pivot
  int pivot = array[high];

  // pointer for greater element
  int i = (low - 1);

  // traverse each element of the array
  // compare them with the pivot
  for (int j = low; j < high; j++) {
    if (array[j] <= pivot) {

      // if element smaller than pivot is found
      // swap it with the greater element pointed by i
      i++;

      // swap element at i with element at j
      swap(&array[i], &array[j]);
    }
  }

  // swap the pivot element with the greater element at i
  swap(&array[i + 1], &array[high]);

  // return the partition point
  return (i + 1);
}

void quickSort(int array[], int low, int high) {
  if (low < high) {

    // find the pivot element such that
    // elements smaller than pivot are on left of pivot
    // elements greater than pivot are on right of pivot
    int pi = partition(array, low, high);

    // recursive call on the left of pivot
    quickSort(array, low, pi - 1);

    // recursive call on the right of pivot
    quickSort(array, pi + 1, high);
  }
}

// function to print array elements
void printArray(int array[], int size) {
  for (int i = 0; i < size; ++i) {
    printf("%d ", array[i]);
  }
  printf("\n");
}

// main function
int main() {
  int data[] = {8, 7, 2, 1, 0, 9, 6};

  int n = sizeof(data) / sizeof(data[0]);

  printf("Unsorted Array\n");
  printArray(data, n);

  // perform quicksort on data
  quickSort(data, 0, n - 1);

  printf("Sorted array in ascending order: \n");
  printArray(data, n);
}

Convex Hull Algorithm

The Convex Hull problem is a fundamental computational geometry problem, where the goal
is to find the smallest convex polygon called convex hull that can enclose a set of points in a
2D plane. This problem has various applications, such as in computer graphics, geographic
information systems, and collision detection.
In this article, we will learn multiple algorithms for solving the Convex Hull problem and
their implementation in the C programming language.
Example
Input:
Point points[] = {{0, 0}, {0, 4}, {-4, 0}, {5, 0}, {0, -6}, {1, 0}};
Output:
The points in the Convex Hull are:
(-4, 0)
(5, 0)
(0, -6)
(0, 4)

What is Convex Hull?


The convex hull of a set of points in a Euclidean space is the smallest convex polygon that
encloses all the points. In 2D space, the convex hull is a convex polygon. If we imagine each
point as a nail sticking out of a board, the convex hull is the shape formed by stretching a
rubber band around the nails.

[Figure: convex hull]
Convex Hull using Divide and Conquer Algorithm in C
In this approach, we recursively divide the set of points into smaller subsets, find the convex
hulls of these subsets, and then merge the results to form the final convex hull. This is
similar to the merge sort algorithm in terms of its approach.
Approach:
 Start dividing the set of points into two halves based on their x-coordinates. The left
half will contain all points with x-coordinates less than or equal to the median x-
coordinate, and the right half will contain the remaining points.
 Recursively apply the divide and conquer approach to find the convex hull for the left
half and the right half.
 After finding the convex hulls for the left and right halves, the next step is to merge
these two convex hulls. To do this, find the upper and lower tangents that connect the
two convex hulls.
o The upper tangent is the line that touches both convex hulls and lies above all
the points in both hulls.
o The lower tangent is the line that touches both convex hulls and lies below all
the points in both hulls.
o The points between these tangents form the final convex hull.
 Finally, combine the points from the left convex hull and right convex hull that lie
between the tangents to form the complete convex hull for the entire set of points.
Algorithm
The steps that we’ll follow to solve the problem are:
1. First, we’ll sort the vector containing points in ascending order (according to their x-
coordinates).
2. Next, we’ll divide the points into two halves S1 and S2. The set S1 contains the
points to the left of the median, whereas the set S2 contains all the points to the
right of the median.
3. We’ll find the convex hulls for the set S1 and S2 individually. Assuming the convex
hull for S1 is C1, and for S2, it is C2.
4. Now, we’ll merge C1 and C2 such that we get the overall convex hull C.
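
Since the document does not include the C implementation, here is a minimal, hedged sketch. Note that it
uses the simpler gift-wrapping (Jarvis march) technique rather than the divide-and-conquer merge with
tangents described above, because the tangent-finding merge is considerably longer; the Point struct and
the orientation and convexHull names are introduced here only for illustration. The driver reuses the
example input from above.

// Convex hull using gift wrapping (Jarvis march) in C
#include <stdio.h>

typedef struct { int x, y; } Point;

// Orientation of the ordered triplet (a, b, c):
// 0 -> collinear, 1 -> clockwise, 2 -> counter-clockwise
int orientation(Point a, Point b, Point c) {
  long long val = (long long)(b.y - a.y) * (c.x - b.x) -
                  (long long)(b.x - a.x) * (c.y - b.y);
  if (val == 0) return 0;
  return (val > 0) ? 1 : 2;
}

// Starting from the leftmost point, repeatedly pick the most
// counter-clockwise point until we wrap around to the start
void convexHull(Point points[], int n) {
  if (n < 3) return;   // the hull needs at least 3 points

  int leftmost = 0;
  for (int i = 1; i < n; i++)
    if (points[i].x < points[leftmost].x)
      leftmost = i;

  int p = leftmost;
  do {
    printf("(%d, %d)\n", points[p].x, points[p].y);
    int q = (p + 1) % n;
    for (int i = 0; i < n; i++)
      if (orientation(points[p], points[i], points[q]) == 2)
        q = i;   // i is more counter-clockwise than the current q
    p = q;
  } while (p != leftmost);
}

int main() {
  Point points[] = {{0, 0}, {0, 4}, {-4, 0}, {5, 0}, {0, -6}, {1, 0}};
  int n = sizeof(points) / sizeof(points[0]);
  printf("The points in the Convex Hull are:\n");
  convexHull(points, n);
}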

Greedy Algorithms
A greedy algorithm decides what to do in each step based only on the current situation,
without considering how the overall problem looks.
In other words, a greedy algorithm makes the locally optimal choice in each step, hoping to
find the global optimum solution in the end.

Two properties must hold for a greedy algorithm to work on a problem:
 Greedy Choice Property: The problem is such that the solution (the global
optimum) can be reached by making greedy choices in each step (locally optimal
choices).
 Optimal Substructure: The optimal solution to the problem is a collection
of optimal solutions to sub-problems. So solving smaller parts of the problem locally
(by making greedy choices) contributes to the overall solution.

Activity Selection Problem


You are given n activities with their start and finish times. Select the maximum number of
activities that can be performed by a single person, assuming that a person can only work on
a single activity at a time.
Examples:
Input: start[] = {10, 12, 20}, finish[] = {20, 25, 30}
Output: 0 2
Explanation: A person can perform at most two activities: activity 0 finishes at 20 and
activity 2 starts at 20, so they are compatible.
Input: start[] = {1, 3, 0, 5, 8, 5}, finish[] = {2, 4, 6, 7, 9, 9};
Output: 0 1 3 4
Explanation: A person can perform at most four activities. The
maximum set of activities that can be executed
is {0, 1, 3, 4} [these are indexes in start[] and finish[]].
Approach: To solve the problem follow the below idea:
The greedy choice is to always pick the next activity whose finish time is the least among the
remaining activities and the start time is more than or equal to the finish time of the
previously selected activity. We can sort the activities according to their finishing time so
that we always consider the next activity as the minimum finishing time activity
Follow the given steps to solve the problem:
 Sort the activities according to their finishing time
 Select the first activity from the sorted array and print it
 Do the following for the remaining activities in the sorted array
o If the start time of this activity is greater than or equal to the finish time of the
previously selected activity then select this activity and print it
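
The steps above can be turned into a short C program. The sketch below is only an illustration (the
Activity struct, byFinish comparator and selectActivities function are names introduced here, not from
the text); it sorts by finish time and greedily picks compatible activities, using the second example
input from above.

// Activity selection using a greedy approach in C
#include <stdio.h>
#include <stdlib.h>

typedef struct { int start, finish, index; } Activity;

// Comparator: sort activities by finish time in ascending order
int byFinish(const void *a, const void *b) {
  return ((const Activity *)a)->finish - ((const Activity *)b)->finish;
}

// Print the indices of a maximum-size set of mutually compatible activities
void selectActivities(Activity acts[], int n) {
  qsort(acts, n, sizeof(Activity), byFinish);

  printf("Selected activities: %d", acts[0].index);   // the first activity is always picked
  int lastFinish = acts[0].finish;

  for (int i = 1; i < n; i++) {
    if (acts[i].start >= lastFinish) {   // compatible with the last selected activity
      printf(" %d", acts[i].index);
      lastFinish = acts[i].finish;
    }
  }
  printf("\n");
}

int main() {
  int start[]  = {1, 3, 0, 5, 8, 5};
  int finish[] = {2, 4, 6, 7, 9, 9};
  int n = sizeof(start) / sizeof(start[0]);

  Activity acts[n];
  for (int i = 0; i < n; i++) {
    acts[i].start = start[i];
    acts[i].finish = finish[i];
    acts[i].index = i;
  }
  selectActivities(acts, n);   // expected output: 0 1 3 4
}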

 The activity selection problem is a mathematical optimization problem. Our first
illustration is the problem of scheduling a resource among several competing activities.
A greedy algorithm provides a well-designed and simple method for selecting a
maximum-size set of mutually compatible activities.
 Suppose S = {1, 2, ..., n} is the set of n proposed activities. The activities share a
resource which can be used by only one activity at a time, e.g., a tennis court, a lecture
hall, etc. Each activity i has a start time si and a finish time fi, where si ≤ fi. If
selected, activity i takes place during the half-open time interval [si, fi). Activities
i and j are compatible if the intervals [si, fi) and [sj, fj) do not overlap (i.e., i and j are
compatible if si ≥ fj or sj ≥ fi). The activity-selection problem chooses a maximum-size
set of mutually compatible activities.
 Algorithm Of Greedy-Activity Selector:

 GREEDY-ACTIVITY-SELECTOR (s, f)
 1. n ← length[s]
 2. A ← {1}
 3. j ← 1
 4. for i ← 2 to n
 5.     do if si ≥ fj
 6.         then A ← A ∪ {i}
 7.              j ← i
 8. return A

 Example: Given 10 activities along with their start and end times as
 S = (A1, A2, A3, A4, A5, A6, A7, A8, A9, A10)
 si = (1, 2, 3, 4, 7, 8, 9, 9, 11, 12)
 fi = (3, 5, 4, 7, 10, 9, 11, 13, 12, 14)

 Compute a schedule where the greatest number of activities takes place.



 Solution: The solution to the above Activity scheduling problem using a greedy
strategy is illustrated below:
 Arranging the activities in increasing order of end time

 Now, schedule A1.
 Next, schedule A3 as A1 and A3 are non-interfering.
 Next, skip A2 as it is interfering.
 Next, schedule A4 as A1, A3 and A4 are non-interfering; then schedule A6 as
A1, A3, A4 and A6 are non-interfering.
 Continuing in the same way, A7, A9 and A10 are also scheduled, giving the final
schedule {A1, A3, A4, A6, A7, A9, A10}.

JOB SCHEDULING ALGORITHM

Job sequencing with deadlines is a problem that involves scheduling a set of jobs to
maximize profit while adhering to their respective deadlines. This approach assumes that
each job can be completed in exactly one unit of time. If jobs have different durations, a more
advanced scheduling algorithm might be necessary. Also, if the deadlines are represented as
relative time (e.g., time units after job release), the algorithm would require adjustments
accordingly.

The prime objective of the Job Sequencing with Deadlines algorithm is to complete the given
order of jobs within respective deadlines, resulting in the highest possible profit. To achieve
this, we are given a number of jobs, each associated with a specific deadline, and completing
a job before its deadline earns us a profit. The challenge is to arrange these jobs in a way that
maximizes our total profit.
It is not always possible to complete all of the assigned jobs within the deadlines. For each
job, denoted as Ji, we have a deadline di and a profit pi associated with completing it on time.
Our objective is to find the best solution that maximizes profit while still ensuring that the
jobs are completed within their deadlines.
Here’s how Job Sequencing with Deadlines algorithm works:
Problem Setup
You’re given a list of jobs, where each job has a unique identifier (job_id), a deadline (by
which the job should be completed), and a profit value (the benefit you gain by completing
the job).
Sort the Jobs by Profit
To ensure we consider jobs with higher profits first, sort the jobs in non-increasing order
based on their profit values.
Initialize the Schedule and Available Time Slots
Set up an array to represent the schedule. Initialize all elements to -1, indicating that no job
has been assigned to any time slot. Also, create a boolean array to represent the availability of
time slots, with all elements set to true initially.
Assign Jobs to Time Slots
Go through the sorted jobs one by one. For each job, find the latest available time slot just
before its deadline. If such a time slot is available, assign the job to that slot. If not, skip the
job.
Calculate Total Profit and Scheduled Jobs
Sum up the profits of all the scheduled jobs to get the total profit. Additionally, keep track of
which job is assigned to each time slot.
Output the Results
Finally, display the total profit achieved and the list of jobs that have been scheduled.
Job Sequencing with Deadlines algorithm
Given jobs J(i) with deadline D(i) and profit P(i) for 1 ≤ i ≤ n, these jobs are arranged in
descending order of profit: p1 ≥ p2 ≥ p3 ≥ … ≥ pn.
Job-Sequencing-With-Deadline (D, J, n, k)
  D(0) := J(0) := 0
  k := 1
  J(1) := 1   // means first job is selected
  for i = 2 … n do
    r := k
    while D(J(r)) > D(i) and D(J(r)) ≠ r do
      r := r – 1
    if D(J(r)) ≤ D(i) and D(i) > r then
      for l = k … r + 1 by -1 do
        J(l + 1) := J(l)
      J(r + 1) := i
      k := k + 1

Input: Four Jobs with following deadlines and profits


JobID Deadline Profit
a 4 20
b 1 10
c 1 40
d 1 30
Output: The maximum profit sequence of jobs is: c, a
Input: Five Jobs with following deadlines and profits
JobID Deadline Profit
a 2 100
b 1 19
c 2 27
d 1 25
e 3 15
Output: The maximum profit sequence of jobs is: c, a, e
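
A hedged C sketch of the slot-based procedure described above is given below: sort the jobs by profit and
place each one in the latest free slot on or before its deadline. The Job struct and jobSequencing
function are illustrative names; the driver uses the five-job example, which yields the sequence c, a, e
(total profit 27 + 100 + 15 = 142).

// Job sequencing with deadlines using a greedy approach in C
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

typedef struct { char id; int deadline, profit; } Job;

// Comparator: sort jobs in non-increasing order of profit
int byProfitDesc(const void *a, const void *b) {
  return ((const Job *)b)->profit - ((const Job *)a)->profit;
}

void jobSequencing(Job jobs[], int n) {
  qsort(jobs, n, sizeof(Job), byProfitDesc);

  int maxDeadline = 0;
  for (int i = 0; i < n; i++)
    if (jobs[i].deadline > maxDeadline)
      maxDeadline = jobs[i].deadline;

  int slot[maxDeadline];      // slot[t] holds the index of the job scheduled in time slot t+1
  bool isFree[maxDeadline];
  for (int t = 0; t < maxDeadline; t++) { slot[t] = -1; isFree[t] = true; }

  int totalProfit = 0;
  for (int i = 0; i < n; i++) {
    // place job i in the latest free slot on or before its deadline; skip it if none is free
    for (int t = jobs[i].deadline - 1; t >= 0; t--) {
      if (isFree[t]) {
        isFree[t] = false;
        slot[t] = i;
        totalProfit += jobs[i].profit;
        break;
      }
    }
  }

  printf("Scheduled jobs:");
  for (int t = 0; t < maxDeadline; t++)
    if (slot[t] != -1)
      printf(" %c", jobs[slot[t]].id);
  printf("\nTotal profit: %d\n", totalProfit);
}

int main() {
  Job jobs[] = {{'a', 2, 100}, {'b', 1, 19}, {'c', 2, 27},
                {'d', 1, 25}, {'e', 3, 15}};
  int n = sizeof(jobs) / sizeof(jobs[0]);
  jobSequencing(jobs, n);   // expected: c a e, total profit 142
}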

Introduction to Knapsack Problem

The Knapsack problem is an example of a combinatorial optimization problem. This
problem is also commonly known as the "Rucksack Problem". The name of the problem comes from
the maximization problem described below:
Given a bag with maximum weight capacity of W and a set of items, each having a weight
and a value associated with it. Decide the number of each item to take in a collection such
that the total weight is less than the capacity and the total value is maximized.

1. Fractional Knapsack Problem

The Fractional Knapsack problem can be defined as follows:
Given the weights and profits of N items, in the form {profit, weight}, put these items in a
knapsack of capacity W to get the maximum total profit in the knapsack. In Fractional
Knapsack, we can break items to maximize the total value of the knapsack.
Input: arr[] = {{60, 10}, {100, 20}, {120, 30}}, W = 50
Output: 240
Explanation: By taking items of weight 10 and 20 kg and 2/3 fraction of 30 kg.
Hence total price will be 60+100+(2/3)(120) = 240
Input: arr[] = {{500, 30}}, W = 10
Output: 166.667

Consider the example: arr[] = {{100, 20}, {60, 10}, {120, 30}}, W = 50.
Sorting: Initially sort the array based on the profit/weight ratio. The sorted array will be
{{60, 10}, {100, 20}, {120, 30}}.
Iteration:
 For i = 0, weight = 10 which is less than W. So add this element in the knapsack.
profit = 60 and remaining W = 50 – 10 = 40.
 For i = 1, weight = 20 which is less than W. So add this element too. profit = 60 +
100 = 160 and remaining W = 40 – 20 = 20.
 For i = 2, weight = 30, which is greater than the remaining W = 20. So add a 20/30 =
2/3 fraction of the element. Therefore profit = 2/3 * 120 + 160 = 80 + 160 = 240 and
the remaining W becomes 0.
So the final profit becomes 240 for W = 50.

ALGORITHM

Follow the given steps to solve the problem using the above approach:
 Calculate the ratio (profit/weight) for each item.
 Sort all the items in decreasing order of the ratio.
 Initialize res = 0, curr_cap = given_cap.
 Do the following for every item i in the sorted order:
o If the weight of the current item is less than or equal to the remaining capacity
then add the value of that item into the result
o Else add the current item as much as we can and break out of the loop.
 Return res.
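
As a rough illustration of these steps (the Item struct and fractionalKnapsack function are names
introduced here, not from the text), the following C sketch sorts by profit/weight ratio, takes whole
items while they fit, and then takes a fraction of the next item; the driver reproduces the answer 240
from the worked example above.

// Fractional knapsack using a greedy approach in C
#include <stdio.h>
#include <stdlib.h>

typedef struct { int profit, weight; } Item;

// Comparator: sort items by profit/weight ratio in decreasing order
int byRatioDesc(const void *a, const void *b) {
  const Item *x = a, *y = b;
  double rx = (double)x->profit / x->weight;
  double ry = (double)y->profit / y->weight;
  return (ry > rx) - (ry < rx);
}

double fractionalKnapsack(Item items[], int n, int capacity) {
  qsort(items, n, sizeof(Item), byRatioDesc);

  double totalProfit = 0.0;
  int remaining = capacity;

  for (int i = 0; i < n && remaining > 0; i++) {
    if (items[i].weight <= remaining) {
      totalProfit += items[i].profit;           // take the whole item
      remaining -= items[i].weight;
    } else {
      // take only the fraction that still fits, then stop
      totalProfit += items[i].profit * ((double)remaining / items[i].weight);
      remaining = 0;
    }
  }
  return totalProfit;
}

int main() {
  Item items[] = {{60, 10}, {100, 20}, {120, 30}};
  int n = sizeof(items) / sizeof(items[0]);
  printf("Maximum profit: %.3f\n", fractionalKnapsack(items, n, 50));   // expected 240.000
}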

2. 0/1 Knapsack Problem


The 0/1 Knapsack problem can be defined as follows:
We are given N items where each item has some weight (wi) and value (vi) associated with it.
We are also given a bag with capacity W. The target is to put the items into the bag such that
the sum of values associated with them is the maximum possible.
Note that here we can either put an item completely into the bag or cannot put it at all.

Examples:
Let, weight[] = {1, 2, 3}, profit[] = {10, 15, 40}, Capacity = 6
 If no element is filled, then the possible profit is 0.

item↓ / weight→   0   1   2   3   4   5   6
0                 0   0   0   0   0   0   0

 For filling the first item in the bag: If we follow the above mentioned procedure, the
table will look like the following.
item↓ / weight→   0   1   2   3   4   5   6
0                 0   0   0   0   0   0   0
1                 0  10  10  10  10  10  10

 For filling the second item:


When jthWeight = 2, then maximum possible profit is max (10, DP[1][2-2] + 15) =
max(10, 15) = 15.
When jthWeight = 3, then the maximum possible profit is max(item 2 not put, item 2 put into
the bag) = max(DP[1][3], 15 + DP[1][3-2]) = max(10, 25) = 25.
item↓ / weight→   0   1   2   3   4   5   6
0                 0   0   0   0   0   0   0
1                 0  10  10  10  10  10  10
2                 0  10  15  25  25  25  25

 For filling the third item:


When jthWeight = 3, the maximum possible profit is max(DP[2][3], 40+DP[2][3-3])
= max(25, 40) = 40.
When jthWeight = 4, the maximum possible profit is max(DP[2][4], 40+DP[2][4-3])
= max(25, 50) = 50.
When jthWeight = 5, the maximum possible profit is max(DP[2][5], 40+DP[2][5-3])
= max(25, 55) = 55.
When jthWeight = 6, the maximum possible profit is max(DP[2][6], 40+DP[2][6-3])
= max(25, 65) = 65.
item↓ / weight→   0   1   2   3   4   5   6
0                 0   0   0   0   0   0   0
1                 0  10  10  10  10  10  10
2                 0  10  15  25  25  25  25
3                 0  10  15  40  50  55  65

Note: Solve numerical problems using the matrix (tabulation) method taught in class; the table above is
just an example.
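
For completeness, here is a hedged C sketch of the tabulation (matrix) method that builds the same DP
table row by row; knapsack01 and the variable names are introduced here for illustration. It returns 65
for the example weight[] = {1, 2, 3}, profit[] = {10, 15, 40}, W = 6.

// 0/1 knapsack using bottom-up dynamic programming in C
#include <stdio.h>

static int max(int a, int b) { return a > b ? a : b; }

// dp[i][w] = best profit using the first i items with capacity w
int knapsack01(int weight[], int profit[], int n, int W) {
  int dp[n + 1][W + 1];

  for (int i = 0; i <= n; i++) {
    for (int w = 0; w <= W; w++) {
      if (i == 0 || w == 0)
        dp[i][w] = 0;                                  // no items or no capacity
      else if (weight[i - 1] <= w)
        dp[i][w] = max(dp[i - 1][w],                   // skip item i
                       profit[i - 1] + dp[i - 1][w - weight[i - 1]]);   // take item i
      else
        dp[i][w] = dp[i - 1][w];                       // item i does not fit
    }
  }
  return dp[n][W];
}

int main() {
  int weight[] = {1, 2, 3};
  int profit[] = {10, 15, 40};
  int n = sizeof(weight) / sizeof(weight[0]);
  printf("Maximum profit: %d\n", knapsack01(weight, profit, n, 6));   // expected 65
}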

Sr. No | 0/1 Knapsack Problem                                         | Fractional Knapsack Problem
1      | Solved using a dynamic programming approach.                 | Solved using a greedy approach.
2      | Has an optimal substructure.                                 | Also has an optimal substructure.
3      | We are not allowed to break items.                           | We can break items to maximize the total value of the knapsack.
4      | Finds the most valuable subset of items whose total weight is at most the capacity W. | Fills the knapsack so that the total weight equals W (when enough items are available).
5      | Items can only be taken in whole (integer) amounts.          | Items can be taken in fractions (floating-point amounts).
6      | Does not have the greedy choice property.                    | Does have the greedy choice property.
