
UNIT-II

Divide and Conquer: General Method, Defective chessboard, Binary Search, finding the
maximum and minimum, Merge sort, Quick sort.
The Greedy Method: The general Method, container loading, knapsack problem, Job
sequencing with deadlines, minimum-cost spanning Trees.
------------------------------------------------------------------------------------------------
Divide-and-conquer method: Divide-and-conquer is probably the best-known general algorithm design technique. The principle behind the divide-and-conquer technique is that it is easier to solve several smaller instances of a problem than one large instance.
The "divide-and-conquer" technique involves solving a particular problem by dividing it into one or more sub-problems of smaller size, recursively solving each sub-problem, and then "merging" the solutions of the sub-problems to produce a solution to the original problem.
Divide-and-conquer algorithms work according to the following general plan.
1. Divide: Divide the problem into a number of smaller sub-problems, ideally of about the same size.
2. Conquer: Solve the smaller sub-problems, typically recursively. If the sub-problem sizes are small enough, just solve the sub-problems in a straightforward manner.
3. Combine: If necessary, combine the solutions obtained for the smaller sub-problems to get the solution to the original problem.
The following figure shows a typical case.

Fig 2.1: Divide and conquer, typical case


Control abstraction for the divide-and-conquer technique:
A control abstraction is a procedure whose flow of control is clear but whose primary operations are specified by other procedures whose precise meanings are left undefined.
Algorithm: Control abstraction for divide-and-conquer
Algorithm DandC(P)
{
    if Small(P) then
        return S(P);
    else
    {
        divide P into smaller instances P1, P2, P3, ..., Pk, k ≥ 1;
        apply DandC to each of these sub-problems;
        return Combine(DandC(P1), DandC(P2), ..., DandC(Pk));
    }
}

DandC(P) is the divide-and-conquer algorithm, where P is the problem to be solved.

Small(P) is a Boolean-valued function (i.e., either true or false) that determines whether the input size is small enough that the answer can be computed without splitting. If this is so, the function S is invoked. Otherwise the problem P is divided into smaller sub-problems P1, P2, P3, ..., Pk, which are solved by recursive applications of DandC.
Combine is a function that combines the solutions of the k sub-problems to get the solution for the original problem P.
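To make the control abstraction concrete, here is a minimal Python sketch, assuming the problem instance is "find the maximum of a list"; the helper names (small, s, divide, combine) mirror the pseudocode above and are illustrative choices, not a fixed API.

# A minimal Python sketch of the DandC control abstraction above.
def dand_c(p):
    if small(p):
        return s(p)                   # small enough: answer directly
    return combine(dand_c(q) for q in divide(p))

def small(p):
    return len(p) <= 1                # a single element is trivially solved

def s(p):
    return p[0]                       # direct answer for the small case

def divide(p):
    mid = len(p) // 2
    return p[:mid], p[mid:]           # two roughly equal halves

def combine(results):
    return max(results)               # merge the sub-solutions

print(dand_c([7, 2, 9, 4]))           # prints 9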
Example: Specify an application where divide-and-conquer should not be applied.
Solution: Consider the problem of computing the sum of n numbers a0, a1, ..., an-1. If n > 1, we divide the problem into two instances of the same problem: compute the sum of the first ⌊n/2⌋ numbers and compute the sum of the remaining numbers. Once each of these two sums is computed (by applying the same method recursively), we add their values to get the sum in question:
a0 + a1 + ... + an-1 = (a0 + a1 + ... + a⌊n/2⌋-1) + (a⌊n/2⌋ + ... + an-1).
For example, the sum of the numbers 1 to 10 is computed as follows:
(1+2+3+4+...+10) = (1+2+3+4+5) + (6+7+8+9+10)
= [(1+2) + (3+4+5)] + [(6+7) + (8+9+10)]
= ...
= (1) + (2) + ... + (10).
This is not an efficient way to compute the sum of n numbers using the divide-and-conquer technique. For this type of problem, it is better to use the brute-force method.
Applications of Divide-and-Conquer: The applications of the divide-and-conquer method are:
1. Binary search
2. Quick sort
3. Merge sort
Binary Search:
Binary search is an efficient searching technique that works only with sorted lists. So the list must be sorted before using the binary search method. Binary search is based on the divide-and-conquer technique.
The process of binary search is as follows:
The method starts with looking at the middle element of the list. If it matches with the key
element, then search is complete. Otherwise, the key element may be in the first half or
second half of the list. If the key element is less than the middle element, then the search
continues with the first half of the list. If the key element is greater than the middle element,
then the search continues with the second half of the list. This process continues until the key
element is found or the search fails indicating that the key is not there in the list.
Consider the list of elements: -4, -1, 0, 5, 10, 18, 27, 32, 33, 98, 147, 154, 198, 250, 500.
Trace the binary search algorithm searching for the element -1.
Sol: The given list of elements is:

Index:    0   1   2   3   4   5   6   7   8   9  10  11  12  13  14
Element: -4  -1   0   5  10  18  27  32  33  98 147 154 198 250 500

Searching for key '-1':

Pass 1: low = 0, high = 14.
First calculate mid:
mid = (low + high)/2 = (0 + 14)/2 = 7.
The search key -1 is less than the middle element a[7] = 32, so the search process continues with the first half of the list.

Pass 2: low = 0, high = 6.
mid = (0 + 6)/2 = 3.
The search key -1 is less than the middle element a[3] = 5, so the search process continues with the first half of the list.

Pass 3: low = 0, high = 2.
mid = (0 + 2)/2 = 1.
a[1] = -1, so the search key -1 is found at position 1.
The following algorithm gives the iterative binary search.
Algorithm BinarySearch(a, n, key)
{
    // a is an array of n elements
    // key is the element to be searched
    // if key is found in array a, then return mid, such that key = a[mid]
    // otherwise return -1
    low := 0;
    high := n - 1;
    while (low ≤ high) do
    {
        mid := (low + high)/2;
        if (key = a[mid]) then
            return mid;
        else if (key < a[mid]) then
            high := mid - 1;
        else
            low := mid + 1;
    }
    return -1;
}
The following algorithm gives the recursive binary search.
Algorithm BinSearch(a, n, key, low, high)
{
    // a is an array of size n
    // key is the element to be searched
    // if key is found then return mid, such that key = a[mid]
    // otherwise return -1
    if (low ≤ high) then
    {
        mid := (low + high)/2;
        if (key = a[mid]) then
            return mid;
        else if (key < a[mid]) then
            return BinSearch(a, n, key, low, mid - 1);
        else
            return BinSearch(a, n, key, mid + 1, high);
    }
    return -1;
}
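Both pseudocode versions translate directly into Python. The following is a minimal runnable sketch of the iterative algorithm, assuming the input list is already sorted in ascending order:

# Iterative binary search: a runnable Python sketch of the algorithm above.
def binary_search(a, key):
    low, high = 0, len(a) - 1
    while low <= high:
        mid = (low + high) // 2
        if a[mid] == key:
            return mid                # key found at index mid
        elif key < a[mid]:
            high = mid - 1            # continue with the first half
        else:
            low = mid + 1             # continue with the second half
    return -1                         # key is not in the list

a = [-4, -1, 0, 5, 10, 18, 27, 32, 33, 98, 147, 154, 198, 250, 500]
print(binary_search(a, -1))           # prints 1, matching the trace above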
Advantages of Binary Search: The main advantage of binary search is that it is faster than sequential (linear) search, because it takes fewer comparisons to determine whether the given key is in the list.
Disadvantages of Binary Search: The disadvantage of binary search is that it can be applied only to a sorted list of elements; binary search fails if the list is unsorted.
Efficiency of Binary Search: To evaluate binary search, count the number of comparisons in the best case, average case, and worst case.
Best Case: The best case occurs if the middle element happens to be the key element. Then only one comparison is needed to find it. Thus the efficiency of binary search in the best case is O(1).
Ex: Let the given list be: 1, 5, 10, 11, 12, and let key = 10.

Index:    0   1   2   3   4
Element:  1   5  10  11  12
         low     mid     high

Since the key is the middle element, it is found on the first attempt.
Worst Case: Assume that in the worst case the key element is not in the list. Then the process of dividing the list in half continues until there is only one item left to check.

Items left to search    Comparisons so far
16                      0
8                       1
4                       2
2                       3
1                       4

For a list of size 16, there are 4 comparisons to reach a list of size one, given that there is one comparison for each division, and each division splits the list size in half.
In general, if n is the size of the list and C is the number of comparisons, then C = log2 n.
∴ Efficiency in the worst case = O(log n).

Average Case: In binary search, the average-case efficiency is close to the worst-case efficiency.
So the average-case efficiency is taken as O(log n).
Efficiency in the average case = O(log n).

Binary Search
Best Case       O(1)
Average Case    O(log n)
Worst Case      O(log n)

Table 2.1: Binary search time complexities (space complexity is O(n))


Quick Sort:
Quick sort is considered to be a fast method to sort the elements. It was developed by C.A.R. Hoare. This method is based on the divide-and-conquer technique: the entire list is divided into various partitions and sorting is applied again and again on these partitions. This method is also called partition exchange sort.
Quick sort can be illustrated by the following example:
12 6 18 4 9 8 2 15
The reduction step of the quick sort algorithm finds the final position of one of the numbers. In this example, we use the first number, 12, which is called the pivot element. This is accomplished as follows.
Let 'i' be the position of the second element and 'j' be the position of the last element, i.e. i = 2 and j = 8 in this example.
Assume that a[n+1] = +∞, where 'a' is an array of size n.

[1] [2] [3] [4] [5] [6] [7] [8] [9]    i  j
12   6  18   4   9   8   2  15  +∞     2  8
First scan the list from left to right (from i to j) and compare each element with the pivot. This process continues until an element is found which is greater than or equal to the pivot element. If such an element is found, then that element's position becomes the value of 'i'.
Now scan the list from right to left (from j to i) and compare each element with the pivot. This process continues until an element is found which is less than or equal to the pivot element. If such an element is found, then that element's position becomes the value of 'j'.
Now compare 'i' and 'j'. If i < j, then swap a[i] and a[j]; otherwise swap the pivot element and a[j].
Continue the above process until the entire list is sorted.
[1] [2] [3] [4] [5] [6] [7] [8] [9]    i  j
12   6  18   4   9   8   2  15  +∞     2  8
12   6  18   4   9   8   2  15  +∞     3  7
12   6   2   4   9   8  18  15  +∞     7  6

Since i = 7 > j = 6, swap the pivot element and the 6th element (the jth element), and we get
8 6 2 4 9 12 18 15
Thus the pivot reaches its final position. The elements to the left of the pivot are smaller than the pivot (12), and those to its right are greater than the pivot (12).

8 6 2 4 9   |12|   18 15
 Sublist 1          Sublist 2

Now take sublist 1 and sublist 2 and apply the above process recursively; at last we get the sorted list.
Ex 2: Let the given list be:
8 18 56 34 9 92 6 2 64

[1] [2] [3] [4] [5] [6] [7] [8] [9] [10]    i  j
 8  18  56  34   9  92   6   2  64   +∞     2  9
 8  18  56  34   9  92   6   2  64   +∞     2  8
 8   2  56  34   9  92   6  18  64   +∞     3  7
 8   2   6  34   9  92  56  18  64   +∞     4  3

Since i = 4 > j = 3, swap the jth element and the pivot element, and we get
6 2   |8|   34 9 92 56 18 64
Sublist 1      Sublist 2

Now take each sublist that has more than one element and follow the same process as above. At last, we get the sorted list:
2 6 8 9 18 34 56 64 92
The following algorithm shows the quick sort:
Algorithm QuickSort(i, j)
{
    // sorts the array from a[i] through a[j]
    if (i < j) then // if there is more than one element
    {
        // divide P into two sub-problems
        k := Partition(a, i, j+1);
        // here k denotes the position of the partitioning element
        // solve the sub-problems
        QuickSort(i, k-1);
        QuickSort(k+1, j);
        // there is no need for combining the solutions
    }
}
Algorithm Partition(a, left, right)
{
    // The elements from a[left] through a[right] are rearranged in such a
    // manner that if initially pivot = a[left], then after completion
    // a[j] = pivot, and j is returned. Here j is the position where the
    // pivot partitions the list into two partitions. Note that a[right] = +∞.
    pivot := a[left];
    i := left; j := right;
    repeat
    {
        repeat
            i := i + 1;
        until (a[i] ≥ pivot);
        repeat
            j := j - 1;
        until (a[j] < pivot);
        if (i < j) then
            Swap(a, i, j);
    } until (i ≥ j);
    a[left] := a[j];
    a[j] := pivot;
    return j;
}
Algorithm Swap(a, i, j)
{
    // exchange a[i] with a[j]
    temp := a[i];
    a[i] := a[j];
    a[j] := temp;
}
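The three routines above can be combined into a single runnable Python sketch. It follows the same plan (first element as pivot, i scanning right, j scanning left, pivot swapped into its final position), but uses ordinary bounds checks instead of the a[right] = +∞ sentinel:

# Quick sort: a runnable Python sketch of the partition scheme above.
def partition(a, left, right):
    pivot = a[left]
    i, j = left + 1, right
    while True:
        while i <= right and a[i] < pivot:
            i += 1                    # stop at an element >= pivot
        while a[j] > pivot:
            j -= 1                    # stop at an element <= pivot
        if i >= j:
            break
        a[i], a[j] = a[j], a[i]       # exchange the out-of-place pair
        i += 1
        j -= 1
    a[left], a[j] = a[j], a[left]     # move the pivot to its final position
    return j

def quicksort(a, left, right):
    if left < right:                  # more than one element
        k = partition(a, left, right)
        quicksort(a, left, k - 1)     # sort the elements before the pivot
        quicksort(a, k + 1, right)    # sort the elements after the pivot

lst = [12, 6, 18, 4, 9, 8, 2, 15]
quicksort(lst, 0, len(lst) - 1)
print(lst)                            # [2, 4, 6, 8, 9, 12, 15, 18]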
Advantages of Quick sort: Quick sort is the fastest sorting method among all the sorting methods, but it is somewhat complex and a little more difficult to implement than other sorting methods.
Efficiency of Quick sort: The efficiency of quick sort depends upon the selection of the pivot element.
Best Case: In the best case, consider the following two assumptions:
1. The pivot which we choose will always be swapped into exactly the middle of the list, and will have an equal number of elements both to its left and to its right.
2. The number of elements in the list is a power of 2, i.e. n = 2^y. This can be rewritten as y = log2 n.

Thus, the total number of comparisons would be
O(n) + O(n) + O(n) + ... (y terms) = O(n * y).
∴ Efficiency in the best case = O(n log n)    (∵ y = log2 n)

Worst Case: In the worst case, assume that the pivot partitions the list so that one of the partitions has no elements while the other has all the remaining elements.
Fig 2.1: Worst case analysis
The total number of comparisons will be
n + (n-1) + (n-2) + ... + 2 = n(n+1)/2 - 1.
Thus, the efficiency of quick sort in the worst case is O(n^2).
Average Case: Let C_A(n) be the average number of key comparisons made by quick sort on a list of n elements. Assuming that the partition split can happen at each position k (1 ≤ k ≤ n) with the same probability 1/n, we get the following recurrence relation.

Fig 2.2: Average case analysis

The left child of each node represents a sub-problem 1/4 as large, and the right child represents a sub-problem 3/4 as large.
There are log_{4/3} n levels, and so the total partitioning time is O(n log_{4/3} n). Now, there is a mathematical fact that
log_a n = log_b n / log_b a
for all positive numbers a, b, and n. Letting a = 4/3 and b = 2, we get
log_{4/3} n = log n / log(4/3).

Quick Sort
Best Case       O(n log n)
Average Case    O(n log n)
Worst Case      O(n^2)

Table 2.2: Quick sort time complexities (space complexity is O(n))

Merge Sort:
Merge sort is based on the divide-and-conquer technique. Merge sort is a two-phase process:
1. Dividing
2. Merging
Dividing Phase: During the dividing phase, each time the given list of elements is divided into two parts. This division process continues until the list is small enough to divide.
Merging Phase: Merging is the process of combining two sorted lists so that the resultant list is also sorted. Suppose A is a sorted list with n1 elements and B is a sorted list with n2 elements. The operation that combines the elements of A and B into a single sorted list C with n = n1 + n2 elements is called merging.
Algorithm (Divide algorithm):
Algorithm Divide(a, low, high)
{
    // a is an array, low is the starting index and high is the end index of a
    if (low < high) then
    {
        mid := (low + high)/2;
        Divide(a, low, mid);
        Divide(a, mid + 1, high);
        Merge(a, low, mid, high);
    }
}
The merging algorithm is as follows:

Algorithm Merge(a, low, mid, high)
{
    h := low;       // h indexes the first sorted sublist a[low..mid]
    j := mid + 1;   // j indexes the second sorted sublist a[mid+1..high]
    k := low;       // k indexes the auxiliary array b
    while (h ≤ mid AND j ≤ high) do
    {
        if (a[h] < a[j]) then
        {
            b[k] := a[h];
            k := k + 1;
            h := h + 1;
        }
        else
        {
            b[k] := a[j];
            k := k + 1;
            j := j + 1;
        }
    }
    while (h ≤ mid) do
    {
        b[k] := a[h];
        k := k + 1;
        h := h + 1;
    }
    while (j ≤ high) do
    {
        b[k] := a[j];
        k := k + 1;
        j := j + 1;
    }
    // copy the elements of b back to a
    for i := low to high do
        a[i] := b[i];
}
Ex: Let the list be: 500, 345, 13, 256, 98, 1, 12, 3, 34, 45, 78, 92.

Dividing:
500 345 13 256 98 1 12 3 34 45 78 92
[500 345 13 256 98 1] [12 3 34 45 78 92]
[500 345 13] [256 98 1] [12 3 34] [45 78 92]
[500 345] [13] [256 98] [1] [12 3] [34] [45 78] [92]
[500] [345] [13] [256] [98] [1] [12] [3] [34] [45] [78] [92]

Merging:
[345 500] [13] [98 256] [1] [3 12] [34] [45 78] [92]
[13 345 500] [1 98 256] [3 12 34] [45 78 92]
[1 13 98 256 345 500] [3 12 34 45 78 92]
1 3 12 13 34 45 78 92 98 256 345 500    Sorted List

Fig 2.3: Merge sort solution


The merge sort algorithm works as follows:
Step 1: If the length of the list is 0 or 1, then it is already sorted; otherwise:
Step 2: Divide the unsorted list into two sub-lists of about half the size.
Step 3: Again sub-divide each sub-list into two parts. This process continues until each element in the list becomes a single element.
Step 4: Apply merging to each pair of sub-lists and continue this process until we get one sorted list, as the sketch below shows.
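The four steps translate into a short recursive Python sketch; this version returns a new sorted list from merge() instead of copying through a shared auxiliary array b:

# Merge sort: a runnable Python sketch of the divide/merge phases above.
def merge(left, right):
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] < right[j]:
            result.append(left[i]); i += 1
        else:
            result.append(right[j]); j += 1
    result.extend(left[i:])           # copy whatever remains
    result.extend(right[j:])
    return result

def merge_sort(a):
    if len(a) <= 1:                   # Step 1: length 0 or 1 is sorted
        return a
    mid = len(a) // 2                 # Step 2: split into two halves
    return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))   # Steps 3-4

lst = [500, 345, 13, 256, 98, 1, 12, 3, 34, 45, 78, 92]
print(merge_sort(lst))
# [1, 3, 12, 13, 34, 45, 78, 92, 98, 256, 345, 500]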
Randomized Algorithms:
An algorithm that uses random numbers to decide what to do next anywhere in its logic is called a randomized algorithm. For example, in randomized quick sort, we use a random number to pick the next pivot (or we randomly shuffle the array). Typically, this randomness is used to reduce the time complexity or space complexity of other standard algorithms.
For example, consider below a randomized version of quick sort.
A central pivot is a pivot that divides the array in such a way that each side has at least 1/4 of the elements.
Algorithm:
// Sorts the array arr[low..high]
randQuickSort(arr[], low, high)

1. If low ≥ high, then EXIT.

2. While the pivot 'x' is not a central pivot:
   (i) Choose uniformly at random a number from [low..high].
       Let the randomly picked number be x.
   (ii) Count the elements in arr[low..high] that are smaller than arr[x].
        Let this count be sc.
   (iii) Count the elements in arr[low..high] that are greater than arr[x].
        Let this count be gc.
   (iv) Let n = (high - low + 1). If sc ≥ n/4 and gc ≥ n/4, then x is a
        central pivot.
3. Partition arr[low..high] around the pivot x.
4. // Recur for the smaller elements
   randQuickSort(arr, low, low + sc - 1)
5. // Recur for the greater elements
   randQuickSort(arr, high - gc + 1, high)

In the worst case, each partition divides the array such that one side has n/4 elements and the other side has 3n/4 elements. The worst-case height of the recursion tree is log_{4/3} n, which is O(log n).

T(n) < T(n/4) + T(3n/4) + O(n)
T(n) < 2T(3n/4) + O(n)

The solution of the above recurrence is O(n log n).
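A runnable Python sketch of randomized quick sort is shown below. For brevity it picks a uniformly random pivot directly rather than retrying until a central pivot is found; the expected O(n log n) behaviour is the same.

# Randomized quick sort: a simplified runnable sketch.
import random

def rand_quicksort(arr, low, high):
    if low >= high:
        return
    x = random.randint(low, high)          # choose a random pivot index
    arr[low], arr[x] = arr[x], arr[low]    # move the pivot to the front
    pivot, j = arr[low], low
    for i in range(low + 1, high + 1):     # Lomuto-style partition
        if arr[i] < pivot:
            j += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[low], arr[j] = arr[j], arr[low]    # pivot lands at its final index j
    rand_quicksort(arr, low, j - 1)        # recur for the smaller elements
    rand_quicksort(arr, j + 1, high)       # recur for the greater elements

lst = [8, 18, 56, 34, 9, 92, 6, 2, 64]
rand_quicksort(lst, 0, len(lst) - 1)
print(lst)                                 # [2, 6, 8, 9, 18, 34, 56, 64, 92]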


The Greedy Method: The general Method, container loading, knapsack problem, Job
sequencing with deadlines, minimum-cost spanning Trees

Greedy method: It is the most straightforward method. It is popular for solving optimization problems.
Optimization Problem: An optimization problem is the problem of finding the best solution (optimal solution) from all the feasible solutions (practical or possible solutions). (OR)
A problem which demands or requires either a maximum or a minimum result is called an optimization problem.
A problem which demands a maximum result is called a maximization problem. Ex: greedy knapsack problem, greedy job sequencing with deadlines problem.
A problem which demands a minimum result is called a minimization problem. Ex: minimum spanning tree problem, single source shortest path problem.
In an optimization problem, we are given a set of constraints and an optimization function.
Solutions that satisfy the constraints are called feasible solutions.
A feasible solution for which the optimization function has the best possible value is called an optimal solution.
Example:
Problem: Finding a minimum spanning tree of a weighted connected graph G.
Constraints: Every time, a minimum-weight edge is added to the tree, and adding the edge does not form a simple circuit.
Feasible solutions: The feasible solutions are the spanning trees of the given graph G.
Optimal solution: An optimal solution is a spanning tree with minimum cost, i.e., a minimum spanning tree.
Question: Find the minimum spanning tree of the following graph.

Of the spanning trees above, figure 4 gives the optimal solution, because it is the spanning tree with the minimum cost, i.e., it is a minimum spanning tree of the graph G.
Characteristics of the greedy algorithm:
Suppose that a problem can be solved by a sequence of decisions. The greedy method requires that each decision be locally optimal. These locally optimal decisions finally add up to a globally optimal solution. Each decision taken at every step must be feasible, locally optimal and irrevocable.
Feasible: The choice which is made has to satisfy the problem's constraints.
Locally optimal: The choice has to be the best local choice among all the feasible choices available at that step.
Irrevocable: The choice once made cannot be changed on subsequent steps of the algorithm (greedy method).
Control Abstraction for the Greedy Method (OR) The General Method (OR) The General Principle of the Greedy Method:
Algorithm GreedyMethod(a, n)
{
    // a is an array of n inputs
    solution := Ø;
    for i := 1 to n do
    {
        s := Select(a);
        if (Feasible(solution, s)) then
            solution := Union(solution, s);
        else
            Reject(); // if the solution is not feasible, reject it
    }
    return solution;
}
In the greedy method, there are three important activities:
1. A selection of a solution from the given input domain is performed, i.e. s := Select(a).
2. The feasibility of the solution is checked using Feasible(solution, s), and then all feasible solutions are obtained.
3. From the set of feasible solutions, the particular solution that minimizes or maximizes the given objective function is obtained. Such a solution is called an optimal solution.
Q: A child buys candy for 42 rupees and gives a 100-rupee note to the cashier. The cashier wishes to return the change using the fewest number of coins. Assume that the cashier has Rs. 1, Rs. 5 and Rs. 10 coins.
Note: This problem can be solved using the greedy method: repeatedly give the largest coin that does not exceed the remaining amount, so the change of Rs. 58 becomes five Rs. 10 coins, one Rs. 5 coin and three Rs. 1 coins (nine coins), as the sketch below shows.
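A few lines of Python make the greedy choice concrete; the function name and the coin-denominations tuple are illustrative choices:

# Greedy change-making: always hand over the largest coin that still fits.
def make_change(amount, coins=(10, 5, 1)):
    handed_out = []
    for coin in coins:                 # denominations in decreasing order
        while amount >= coin:
            handed_out.append(coin)
            amount -= coin
    return handed_out

print(make_change(100 - 42))           # [10, 10, 10, 10, 10, 5, 1, 1, 1]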
Applications of the greedy method: The following are some applications of the greedy method:
1. Greedy job sequencing with deadlines
2. Greedy knapsack problem
3. Minimum cost spanning tree (Prim's, Kruskal's)
4. Single source shortest path (Dijkstra's)
5. Optimal merge patterns
Subset paradigm:
Consider the inputs in an order based on some selection procedure (use some optimization measure for the selection procedure).
At every stage, examine an input to see whether it leads to an optimal solution.
If the inclusion of the input into the partial solution yields an infeasible solution, discard the input; otherwise, add it to the partial solution.
Most of these, however, will result in algorithms that generate sub-optimal solutions.
This version of the greedy technique is called the subset paradigm.
Examples: greedy knapsack problem, job sequencing with deadlines problem, minimum spanning tree.
Ordering paradigm:
Some algorithms do not need a selection of an optimal subset but make decisions by looking at the inputs in some order.

Difference between the subset paradigm and the ordering paradigm: The subset paradigm is one type of greedy method that may generate sub-optimal solutions, whereas the ordering paradigm is another type of greedy method that does not call for sub-optimal solutions. The subset paradigm works based on an optimal subset, whereas the ordering paradigm works based on an optimal ordering.

Difference between Divide and Conquer & Greedy method:

Divide and conquer (DAC):
1. Divides the given problem into many sub-problems, finds the individual solutions and combines them to get the solution for the main problem.
2. Follows a top-down technique.
3. Splits the input only at specific points (midpoint); each sub-problem is independent.
4. Sub-problems are independent of the main problem.

Greedy method (GA):
1. Many decisions and sequences are guaranteed and all the overlapping sub-instances are considered.
2. Follows a bottom-up technique.
3. Splits the input at every possible point rather than at a particular point.
4. Sub-problems are dependent on the main problem.

Table 3.1: Differences between Divide and Conquer & the Greedy Method


APPLICATION - JOB SEQUENCING WITH DEADLINES
This problem consists of n jobs, each associated with a deadline and a profit, and our objective is to earn the maximum profit. We earn the profit of a job only when the job is completed on or before its deadline. We assume that each job takes one unit of time to complete.
In this problem we have n jobs j1, j2, ..., jn with associated deadlines d1, d2, ..., dn and profits p1, p2, ..., pn.
Profit will only be awarded or earned if the job is completed on or before its deadline.
We assume that each job takes unit time to complete.
The objective is to earn the maximum profit when only one job can be scheduled or processed at any given time.
Example: Consider the following 5 jobs and their associated deadlines and profits.

Table:3.2.Jobs and Deadlines

Table:3.3.Jobs and Deadlines


• Find the maximum deadline value (dmax).
• Looking at the jobs, we can say the maximum deadline value is 3. So dmax = 3.
• As dmax = 3, we will have THREE time slots to keep track of free time slots.
Set each time slot's status to EMPTY.
Table 3.4: Time slots and status information (figure omitted)
The total number of jobs is 5, so we can write n = 5.
Note: If we look at job j2, it has a deadline of 1. This means we have to complete job j2 in time slot [0,1] if we want to earn its profit.
Similarly, if we look at job j1, it has a deadline of 2. This means we have to complete job j1 on or before time slot [1,2] in order to earn its profit.
Similarly, if we look at job j3, it has a deadline of 3. This means we have to complete job j3 on or before time slot [2,3] in order to earn its profit.
Our objective is to select the jobs that will give us the highest profit.

TIME SLOT   0-1   1-2   2-3
JOB         J2    J1    J3
PROFIT      100   60    20

Table 3.4: Total profit is 180.

J            ASSIGNED SLOTS      JOB CONSIDERED   ACTION                PROFIT
{}           -                   J2               Assigned slot [0,1]   0
{J2}         [0,1]               J1               Assigned slot [1,2]   100
{J2,J1}      [0,1],[1,2]         J4               Cannot fit; reject    100+60=160
{J2,J1}      [0,1],[1,2]         J3               Assigned slot [2,3]   160
{J2,J1,J3}   [0,1],[1,2],[2,3]   J5               Cannot fit; reject    160+20=180

Table 3.5: Solution trace for the profits
EXAMPLE-2: Find an optimal sequence for the n=5 jobs where profits (P1,P2,P3,P4,P5) = (20,15,10,5,1) and deadlines (d1,d2,d3,d4,d5) = (2,2,1,3,3).

JOBS        J1   J2   J3   J4   J5
PROFITS     20   15   10    5    1
DEADLINES    2    2    1    3    3

SOLUTION: In the given data, the jobs are already in descending order of their profits. So the job sequence is calculated and presented below.

TIME SLOT   0-1   1-2   2-3
JOB         J2    J1    J4
PROFIT      15    20    5

TOTAL PROFIT IS 40.

J            ASSIGNED SLOTS      JOB CONSIDERED   ACTION                PROFIT
{}           -                   J1               Assigned slot [1,2]   0
{J1}         [1,2]               J2               Assigned slot [0,1]   20
{J2,J1}      [0,1],[1,2]         J3               Cannot fit; reject    20+15=35
{J2,J1}      [0,1],[1,2]         J4               Assigned slot [2,3]   35
{J2,J1,J4}   [0,1],[1,2],[2,3]   J5               Cannot fit; reject    35+5=40
EXAMPLE-3: Find an optimal sequence for the n=7 jobs where profits (P1,P2,P3,P4,P5,P6,P7) = (3,5,20,18,1,6,30) and deadlines (d1,d2,d3,d4,d5,d6,d7) = (1,3,4,3,2,1,2).

JOBS        J1   J2   J3   J4   J5   J6   J7
PROFITS      3    5   20   18    1    6   30
DEADLINES    1    3    4    3    2    1    2

SOLUTION: After sorting the jobs in descending order of their profits:

JOBS        J7   J3   J4   J6   J2   J1   J5
PROFITS     30   20   18    6    5    3    1
DEADLINES    2    4    3    1    3    1    2

JOB SEQUENCE:

TIME SLOT   0-1   1-2   2-3   3-4
JOB         J6    J7    J4    J3
PROFIT      6     30    18    20

TOTAL PROFIT IS 74.

J              ASSIGNED SLOTS            JOB CONSIDERED   ACTION                PROFIT
{}             -                         J7               Assigned slot [1,2]   0
{J7}           [1,2]                     J3               Assigned slot [3,4]   30
{J7,J3}        [1,2],[3,4]               J4               Assigned slot [2,3]   30+20=50
{J7,J4,J3}     [1,2],[2,3],[3,4]         J6               Assigned slot [0,1]   50+18=68
{J6,J7,J4,J3}  [0,1],[1,2],[2,3],[3,4]   J2               Cannot fit; reject    68+6=74
{J6,J7,J4,J3}  [0,1],[1,2],[2,3],[3,4]   J1               Cannot fit; reject    74
{J6,J7,J4,J3}  [0,1],[1,2],[2,3],[3,4]   J5               Cannot fit; reject    74

High-level description of the job sequencing algorithm:

JOB SEQUENCING WITH DEADLINES
The computing time taken by greedy job sequencing with deadlines is O(n^2), where n denotes the number of jobs in the given problem.
Considering the n jobs takes O(n) time, and assigning each job a slot within its stipulated deadline takes up to O(n) time, constituting a total of O(n^2) time. A runnable sketch follows.
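The following Python sketch implements the greedy strategy described above (jobs in decreasing profit order, each placed in the latest free slot on or before its deadline); the (name, deadline, profit) tuple format is an illustrative choice:

# Greedy job sequencing with deadlines: a runnable sketch.
def job_sequencing(jobs):
    # jobs: list of (name, deadline, profit) tuples
    jobs = sorted(jobs, key=lambda j: j[2], reverse=True)   # by profit, desc
    dmax = max(deadline for _, deadline, _ in jobs)
    slots = [None] * dmax                  # slots[t] covers time [t, t+1]
    total = 0
    for name, deadline, profit in jobs:
        for t in range(min(deadline, dmax) - 1, -1, -1):
            if slots[t] is None:           # latest free slot before deadline
                slots[t] = name
                total += profit
                break                      # placed; else the job is rejected
    return slots, total

# Example 2 data: profits (20, 15, 10, 5, 1), deadlines (2, 2, 1, 3, 3)
jobs = [("J1", 2, 20), ("J2", 2, 15), ("J3", 1, 10),
        ("J4", 3, 5), ("J5", 3, 1)]
print(job_sequencing(jobs))                # (['J2', 'J1', 'J4'], 40)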
KNAPSACK PROBLEM:
In this problem the objective is to fill the knapsack with items to get the maximum profit without crossing the weight capacity of the knapsack, and we are also allowed to take an item in a fractional part.
Points to remember: In this problem we have a knapsack that has a weight capacity M.
There are n items i1, i2, ..., in, each having a positive weight w1, w2, ..., wn and an associated profit p1, p2, ..., pn.
Our objective is to maximize the profit such that the total weight inside the knapsack is at most M, while we are allowed to take items in fractional parts:

maximize Σ pi·xi   subject to   Σ wi·xi ≤ M   and   0 ≤ xi ≤ 1,

where xi is the fraction of the object 'i' that is placed in the knapsack.

EXAMPLE-1: Find an optimal solution to the knapsack instance n=3, M=20, (P1, P2, P3) = (25, 24, 15) and (W1, W2, W3) = (18, 15, 10).

SOLUTION:

OBJECTS (i)             I1           I2           I3
PROFITS (Pi)            25           24           15
WEIGHTS (Wi)            18           15           10
Pi/Wi                   1.39         1.6          1.5
OPTIMAL SEQUENCE (Xi)   0            1            1/2
WiXi                    (18*0)=0     (15*1)=15    (10*1/2)=5
PiXi                    (25*0)=0     (24*1)=24    (15*1/2)=7.5

Σ WiXi = (0+15+5) = 20 ≤ M(20) AND
(MAXIMUM PROFIT) Σ PiXi = (0+24+7.5) = 31.5
OPTIMAL SEQUENCE: Xi = (0, 1, 1/2).

EXAMPLE-2: Find an optimal solution to the knapsack instance with n=4 objects and knapsack capacity M=15, profits (10, 5, 7, 11) and weights (3, 4, 3, 5).

SOLUTION:

OBJECTS (i)             I1           I2          I3          I4
PROFITS (Pi)            10           5           7           11
WEIGHTS (Wi)            3            4           3           5
Pi/Wi                   3.33         1.25        2.33        2.20
OPTIMAL SEQUENCE (Xi)   1            1           1           1
WiXi                    (3*1)=3      (4*1)=4     (3*1)=3     (5*1)=5
PiXi                    (10*1)=10    (5*1)=5     (7*1)=7     (11*1)=11

Σ WiXi = (3+4+3+5) = 15 ≤ M(15) AND
(MAXIMUM PROFIT) Σ PiXi = (10+5+7+11) = 33
OPTIMAL SEQUENCE: Xi = (1, 1, 1, 1).
EXAMPLE-3: Find an optimal solution to the knapsack instance with n=7 objects and knapsack capacity M=15. The profits and weights of the objects are (P1,P2,P3,P4,P5,P6,P7) = (10,5,15,7,6,18,3) and (W1,W2,W3,W4,W5,W6,W7) = (2,3,5,7,1,4,1).

SOLUTION:

OBJECTS (i)             I1          I2            I3          I4        I5        I6          I7
PROFITS (Pi)            10          5             15          7         6         18          3
WEIGHTS (Wi)            2           3             5           7         1         4           1
Pi/Wi                   5           1.66          3           1         6         4.5         3
OPTIMAL SEQUENCE (Xi)   1           2/3           1           0         1         1           1
WiXi                    (2*1)=2     (3*2/3)=2     (5*1)=5     (7*0)=0   (1*1)=1   (4*1)=4     (1*1)=1
PiXi                    (10*1)=10   (5*2/3)=3.33  (15*1)=15   (7*0)=0   (6*1)=6   (18*1)=18   (3*1)=3

Σ WiXi = (2+2+5+0+1+4+1) = 15 ≤ M(15) AND
(MAXIMUM PROFIT) Σ PiXi = (10+3.33+15+0+6+18+3) = 55.33
OPTIMAL SEQUENCE: Xi = (1, 2/3, 1, 0, 1, 1, 1).
Algorithm:
If the objects have already been sorted into non-increasing order of p[i]/w[i], then the algorithm given below obtains a solution corresponding to this strategy.
Algorithm GreedyKnapsack(m, n)
// p[1:n] and w[1:n] contain the profits and weights respectively of the
// objects, ordered so that p[i]/w[i] ≥ p[i+1]/w[i+1].
// m is the knapsack size and x[1:n] is the solution vector.
{
    for i := 1 to n do x[i] := 0.0; // initialize x
    U := m; // U is the remaining capacity
    for i := 1 to n do
    {
        if (w[i] > U) then break;
        x[i] := 1.0; U := U - w[i]; // take whole objects while they fit
    }
    if (i ≤ n) then x[i] := U/w[i]; // take a fraction of the next object,
    // just enough to fill the remaining space U in the knapsack
}
Running time of the greedy knapsack:
The objects are to be sorted into descending order of the pi/wi ratio. If we disregard the time to initially sort the objects, the algorithm requires only O(n) time, where 'n' refers to the number of objects, as the sketch below shows.
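As a runnable sketch of GreedyKnapsack (with the sorting step included rather than assumed), the Python below reproduces Example 1:

# Greedy fractional knapsack: a runnable sketch of the strategy above.
def greedy_knapsack(profits, weights, m):
    n = len(profits)
    order = sorted(range(n), key=lambda i: profits[i] / weights[i],
                   reverse=True)           # non-increasing p[i]/w[i] order
    x = [0.0] * n                          # solution vector
    u = m                                  # remaining capacity
    for i in order:
        if weights[i] > u:
            x[i] = u / weights[i]          # take a fraction of this object
            break
        x[i] = 1.0                         # take the whole object
        u -= weights[i]
    return x, sum(p * xi for p, xi in zip(profits, x))

# Example 1: n = 3, M = 20, profits (25, 24, 15), weights (18, 15, 10)
print(greedy_knapsack([25, 24, 15], [18, 15, 10], 20))
# ([0.0, 1.0, 0.5], 31.5)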
Minimum-cost spanning trees
Graphs:
1. Definition: A graph G = (V, E) consists of a finite set V, whose elements are called nodes, and a set E, which is a subset of V x V. The elements of E are called edges.
2. Directed vs. undirected graphs: If the directions of the edges are of significance, that is, (x,y) is different from (y,x), the graph is called directed; otherwise, the graph is called undirected.
Graph concepts: If (x,y) is an edge, then x is said to be adjacent to y, and y is adjacent from x. In the case of undirected graphs, if (x,y) is an edge, we just say that x and y are adjacent (or x is adjacent to y, or y is adjacent to x). Also, we say that x is a neighbour of y. The in-degree of a node x is the number of nodes adjacent to x.
A spanning tree of a graph G = (V, E) is a tree that contains all vertices of V and is a sub-graph of G with |V|-1 edges. A single graph can have multiple spanning trees.
Lemma 1: Let T be a spanning tree of a graph G. Then
1. Any two vertices in T are connected by a unique simple path.
2. If any edge is removed from T, then T becomes disconnected.
3. If we add any edge to T, then the new graph will contain a cycle.
4. The number of edges in T is n-1.
Minimum Spanning Trees (MST): The weight of a spanning tree w(T) is the sum of the weights of all edges in T. A minimum spanning tree (MST) is a spanning tree with the smallest possible weight.

Mathematical Properties of Spanning Trees

• A spanning tree has n-1 edges, where n is the number of nodes (vertices).
• From a complete graph, by removing at most e - n + 1 edges, we can construct a spanning tree.
• A complete graph can have at most n^(n-2) spanning trees.
• Thus, we can conclude that spanning trees are a subset of a connected graph G, and disconnected graphs do not have spanning trees.

Applications of Spanning Trees

• A spanning tree is basically used to find a minimum-cost path to connect all nodes in a graph.
Common applications of spanning trees are:
• Civil network planning
• Computer network routing protocols
• Cluster analysis
• Finding airline routes
Minimum Spanning-Tree Algorithms:
• We shall learn about the two most important spanning tree algorithms (greedy algorithms): Kruskal's algorithm and Prim's algorithm.
• Both algorithms differ in their methodology, but both eventually end up with the MST.
• Kruskal's algorithm uses edges, and Prim's algorithm uses vertex connections in determining the MST.
Kruskal's Algorithm:
• This is a greedy algorithm. A greedy algorithm chooses some local optimum (here, picking the edge with the least weight at each step of building the MST).
Kruskal's algorithm works as follows:
• Step-01: Sort all the edges from low weight to high weight.
• Step-02: Take the edge with the lowest weight and use it to connect the vertices of the graph. If adding an edge creates a cycle, then reject that edge and go for the next least-weight edge.
• Step-03: Keep adding edges until all the vertices are connected and a minimum spanning tree (MST) is obtained.
Kruskal's Algorithm:
• Like Prim's algorithm, Kruskal's algorithm is another greedy algorithm used for finding the minimum spanning tree (MST) of a given graph.
• The graph must be weighted, connected and undirected.
• Thumb rule to remember: simply draw all the vertices on paper and connect them using edges with minimum weights such that no cycle gets formed.
• Time complexity: the worst-case time complexity of Kruskal's algorithm is O(E log V) or O(E log E).
Explanation:
• The edges are maintained as a min heap.
• The next edge can be obtained in O(log E) time if the graph has E edges.
• Construction of the heap takes O(E) time.
• So, Kruskal's algorithm takes O(E log E) time.
• The value of E can be at most O(V^2), so O(log V) and O(log E) are the same.
• If the edges are already sorted, then there is no need to construct a min heap, so the deletion-from-min-heap time is saved.
• In this case, the time complexity of Kruskal's algorithm = O(E + V), as the sketch below shows.
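As a concrete sketch of Steps 01-03, the Python below detects cycles with a simple union-find (disjoint-set) structure; the edge-list format and the small example graph are illustrative assumptions, not the problem graph from the figures:

# Kruskal's algorithm: a runnable sketch using union-find for cycle checks.
def kruskal(num_vertices, edges):
    # edges: list of (weight, u, v); vertices are numbered 0..num_vertices-1
    parent = list(range(num_vertices))

    def find(v):                           # root of v's component
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    mst, total = [], 0
    for w, u, v in sorted(edges):          # Step-01: sort by weight
        ru, rv = find(u), find(v)
        if ru != rv:                       # Step-02: reject cycle-forming edges
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
        if len(mst) == num_vertices - 1:   # Step-03: MST is complete
            break
    return mst, total

edges = [(4, 0, 1), (8, 0, 2), (2, 1, 2), (6, 1, 3), (3, 2, 3),
         (5, 2, 4), (9, 3, 4)]
print(kruskal(5, edges))                   # MST weight 14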

Problem-01: Construct the minimum spanning tree (MST) for the given graph using Kruskal's algorithm.
Solution:
Steps 01-07 add the edges in increasing order of weight, rejecting any edge that would form a cycle (the step-by-step figures are omitted here).

Since all the vertices have been included in the MST, we stop.
Weight of the MST = sum of all edge weights
= 10 + 25 + 22 + 12 + 16 + 14 = 99 units.
Prim's Algorithm:
Steps for implementing Prim's algorithm:

Step-01: Randomly choose any vertex. We usually select and start with a vertex that connects to the edge having the least weight.

Step-02: Find all the edges that connect the tree to new vertices, then find the least-weight edge among those edges and include it in the existing tree. If including that edge creates a cycle, then reject that edge and look for the next least-weight edge.

Step-03: Keep repeating Step-02 until all the vertices are included and the minimum spanning tree (MST) is obtained.

Time Complexity:
The worst-case time complexity of Prim's algorithm is
= O(E log V) using a binary heap
= O(E + V log V) using a Fibonacci heap.
Explanation:
If an adjacency list is used to represent the graph, then using breadth first search, all the vertices can be traversed in O(V + E) time.
We traverse all the vertices of the graph using breadth first search and use a min heap for storing the vertices not yet included in the MST.
To get the minimum-weight edge, we use the min heap as a priority queue.
Min heap operations like extracting the minimum element and decreasing a key value take O(log V) time.

So, the overall time complexity
= O(E + V) x O(log V)
= O((E + V) log V)
= O(E log V)
This time complexity can be improved and reduced to O(E + V log V) using a Fibonacci heap. A min-heap-based version is sketched below.
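The sketch uses Python's heapq module as the min heap; the adjacency-list format {vertex: [(weight, neighbour), ...]} and the example graph are illustrative assumptions:

# Prim's algorithm: a runnable sketch using heapq as the min heap.
import heapq

def prim(graph, start):
    visited = {start}
    heap = list(graph[start])              # candidate edges leaving the tree
    heapq.heapify(heap)
    mst, total = [], 0
    while heap and len(visited) < len(graph):
        w, v = heapq.heappop(heap)         # least-weight candidate edge
        if v in visited:
            continue                       # would create a cycle; reject it
        visited.add(v)
        mst.append((w, v))
        total += w
        for edge in graph[v]:
            if edge[1] not in visited:
                heapq.heappush(heap, edge)
    return mst, total

graph = {
    0: [(4, 1), (8, 2)],
    1: [(4, 0), (2, 2), (6, 3)],
    2: [(8, 0), (2, 1), (3, 3), (5, 4)],
    3: [(6, 1), (3, 2), (9, 4)],
    4: [(5, 2), (9, 3)],
}
print(prim(graph, 0))                      # MST weight 14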

Problem-01:
Construct the minimum spanning tree (MST) for the given graph using Prim's algorithm.
Solution:
Steps 01-06 grow the tree one least-weight edge at a time (the step-by-step figures are omitted here).

Since all the vertices have been included in the MST, we stop.
Weight of the MST
= sum of all edge weights
= 10 + 25 + 22 + 12 + 16 + 14
= 99 units
