
Design and Analysis of Algorithms (PC 602 CS)

LECTURE NOTES
UNIT-II
Divide-and-Conquer Method: The general method, Binary search, Finding maximum and
minimum, Merge sort, Quick sort.

Brute Force: Knapsack, Traveling salesman problem, Convex-Hull

Introduction
Divide-and-conquer general method

Both merge sort and quick sort employ a common algorithmic paradigm based on recursion. This
paradigm, divide-and-conquer, breaks a problem into subproblems that are similar to the
original problem, recursively solves the subproblems, and finally combines the solutions to the
subproblems to solve the original problem. Because divide-and-conquer solves subproblems
recursively, each subproblem must be smaller than the original problem, and there must be a
base case for subproblems.

Divide the problem into a number of subproblems that are smaller instances of the same
problem.

Conquer the subproblems by solving them recursively. If they are small enough, solve the
subproblems as base cases.

Combine the solutions to the subproblems into the solution for the original problem.

You can easily remember the steps of a divide-and-conquer algorithm as divide, conquer,
combine. Here is how to view one step, assuming that each divide step creates two subproblems
(though some divide-and-conquer algorithms create more than two):


Fig. Divide and Conquer


Control Abstraction of Divide and Conquer

A control abstraction is a procedure whose flow of control is clear but whose primary operations
are specified by other procedures whose precise meanings are left undefined. The control
abstraction for the divide-and-conquer technique is DANDC(P), where P is the problem to be solved.

DANDC (P)
{
   if SMALL (P) then return S (P);
   else
   {
      divide P into smaller instances P1, P2, ..., Pk, k ≥ 1;
      apply DANDC to each of these subproblems;
      return COMBINE (DANDC (P1), DANDC (P2), ..., DANDC (Pk));
   }
}

SMALL (P) is a Boolean-valued function which determines whether the input size is small
enough that the answer can be computed without splitting. If so, the function S is invoked;
otherwise the problem P is divided into smaller subproblems. These subproblems P1, P2, ..., Pk
are solved by recursive applications of DANDC, and their solutions are combined by COMBINE.
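
To make this abstraction concrete, the following minimal Python sketch mirrors DANDC; here
small, solve_directly, divide and combine are hypothetical problem-specific hooks standing in
for SMALL, S, the splitting step and COMBINE.

def dandc(p, small, solve_directly, divide, combine):
    # small(p): can p be solved without splitting? (SMALL)
    if small(p):
        return solve_directly(p)          # S(P)
    subproblems = divide(p)               # p1, p2, ..., pk with k >= 1
    solutions = [dandc(q, small, solve_directly, divide, combine)
                 for q in subproblems]
    return combine(solutions)             # COMBINE(...)

# Example instantiation: summing a list by splitting it in half.
total = dandc(
    [3, 1, 4, 1, 5, 9],
    small=lambda p: len(p) <= 1,
    solve_directly=lambda p: p[0] if p else 0,
    divide=lambda p: [p[:len(p) // 2], p[len(p) // 2:]],
    combine=sum,
)
print(total)   # 23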

Binary Search

Suppose we have n records which have been ordered by their keys so that x1 < x2 < ... < xn.
Given an element x, binary search is used to find the corresponding element in the list. If x is
present, we have to determine a value j such that a[j] = x (successful search). If x is not in
the list, then j is set to zero (unsuccessful search).


In binary search we jump into the middle of the file, where we find key a[mid], and compare x
with a[mid]. If x = a[mid], the desired record has been found. If x < a[mid], then x must be
in that portion of the file that precedes a[mid], if it is there at all. Similarly, if a[mid] < x,
further search is only necessary in the part of the file which follows a[mid]. If we recursively
apply this procedure to the middle key a[mid] of the unsearched portion of the file, then every
unsuccessful comparison of x with a[mid] eliminates roughly half of the unsearched portion
from consideration.

Since the array size is roughly halved after each comparison between x and a[mid], and since
an array of length n can be halved only about log2 n times before reaching a trivial length, the
worst-case complexity of binary search is about log2 n comparisons.

Algorithm

BINSRCH (a, n, x)
// array a(1 : n) of elements in increasing order, n ≥ 0;
// determine whether x is present, and if so, set j such that x = a(j);
// else return j = 0
{
   low := 1; high := n;
   while (low ≤ high) do
   {
      mid := ⌊(low + high)/2⌋;
      if (x < a[mid]) then high := mid - 1;
      else if (x > a[mid]) then low := mid + 1;
      else return mid;
   }
   return 0;
}
low and high are integer variables such that each time through the loop either x is found, or
low is increased by at least one, or high is decreased by at least one. Thus we have two
sequences of integers approaching each other, and eventually low becomes greater than high,
causing termination in a finite number of steps if x is not present.
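
A runnable Python version of BINSRCH follows; it is a direct translation (a sketch using
0-based indices and returning -1 rather than 0 on an unsuccessful search).

# Iterative binary search: 'a' must be sorted in increasing order.
# Returns the 0-based index of x in a, or -1 if x is absent.
def binsrch(a, x):
    low, high = 0, len(a) - 1
    while low <= high:        # the unsearched portion a[low..high] is non-empty
        mid = (low + high) // 2
        if x < a[mid]:
            high = mid - 1    # x, if present, precedes a[mid]
        elif x > a[mid]:
            low = mid + 1     # x, if present, follows a[mid]
        else:
            return mid        # successful search
    return -1                 # unsuccessful search

a = [-15, -6, 0, 7, 9, 23, 54, 82, 101]
print(binsrch(a, 101))   # 8 (position 9 in the 1-based example below)
print(binsrch(a, 42))    # -1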

Example for Binary Search

Let us illustrate binary search on the following 9 elements:

Index:       1    2    3    4    5    6    7    8    9
Elements:  -15   -6    0    7    9   23   54   82  101


The number of comparisons required for searching different elements is as follows:

1. Searching for x = 101:

   low   high   mid
    1     9      5
    6     9      7
    8     9      8
    9     9      9    found

Number of comparisons = 4

2. Searching for x = 82:

   low   high   mid
    1     9      5
    6     9      7
    8     9      8    found

Number of comparisons = 3

3. Searching for x = 42:

   low   high   mid
    1     9      5
    6     9      7
    6     6      6
    7     6      --   not found

Number of comparisons = 3

4. Searching for x = -14:

   low   high   mid
    1     9      5
    1     4      2
    1     1      1
    2     1      --   not found

Number of comparisons = 3

Continuing in this manner, the number of element comparisons needed to find each of the nine
elements is:

Index:         1    2    3    4    5    6    7    8    9
Elements:    -15   -6    0    7    9   23   54   82  101
Comparisons:   3    2    3    4    1    3    2    3    4

No element requires more than 4 comparisons to be found. Summing the comparisons needed to
find all nine items and dividing by 9 yields 25/9, or approximately 2.77 comparisons per
successful search on the average.

There are ten possible ways that an unsuccessful search may terminate, depending upon the
value of x. If x < a[1], a[1] < x < a[2], a[2] < x < a[3], a[5] < x < a[6], a[6] < x < a[7] or
a[7] < x < a[8], the algorithm requires 3 element comparisons to determine that x is not
present. For all of the remaining possibilities BINSRCH requires 4 element comparisons. Thus
the average number of element comparisons for an unsuccessful search is:

(3 + 3 + 3 + 4 + 4 + 3 + 3 + 3 + 4 + 4) / 10 = 34/10 = 3.4

The time complexity for a successful search is O(log n) and for an unsuccessful search is
Θ(log n).

Successful search: Θ(1) best, Θ(log n) average, Θ(log n) worst.
Unsuccessful search: Θ(log n) in the best, average and worst cases.

Finding Maximum and Minimum

Let P = (n, a[i], ..., a[j]) denote an arbitrary instance of the problem. Here n is the number
of elements in the list a[i], ..., a[j], and we are interested in finding the maximum and
minimum of this list.

If the list has more than two elements, P has to be divided into smaller instances. For example,
we might divide P into the two instances P1 = (⌊n/2⌋, a[1], ..., a[⌊n/2⌋]) and
P2 = (n - ⌊n/2⌋, a[⌊n/2⌋+1], ..., a[n]). After having divided P into two smaller subproblems,
we can solve them by recursively invoking the same divide-and-conquer algorithm.

MaxMin(i, j, max, min)
// a[1:n] is a global array. Parameters i and j are integers,
// 1 ≤ i ≤ j ≤ n. The effect is to set max and min to the
// largest and smallest values in a[i:j].
{


   if (i = j) then max := min := a[i]; // Small(P)
   else if (i = j - 1) then // Another case of Small(P)
   {
      if (a[i] < a[j]) then { max := a[j]; min := a[i]; }
      else { max := a[i]; min := a[j]; }
   }
   else
   {
      // If P is not small, divide P into subproblems.
      // Find where to split the set.
      mid := ⌊(i + j)/2⌋;
      // Solve the subproblems.
      MaxMin(i, mid, max, min);
      MaxMin(mid + 1, j, max1, min1);
      // Combine the solutions.
      if (max < max1) then max := max1;
      if (min > min1) then min := min1;
   }
}
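
The same algorithm in runnable Python (a sketch that returns the pair (max, min) instead of
setting reference parameters):

# Divide-and-conquer maximum and minimum of a[i..j] (0-based).
def max_min(a, i, j):
    if i == j:                            # Small(P): one element
        return a[i], a[i]
    if i == j - 1:                        # Small(P): two elements
        return (a[j], a[i]) if a[i] < a[j] else (a[i], a[j])
    mid = (i + j) // 2                    # divide P into two subproblems
    max1, min1 = max_min(a, i, mid)
    max2, min2 = max_min(a, mid + 1, j)
    return max(max1, max2), min(min1, min2)   # combine the solutions

a = [22, 13, -5, -8, 15, 60, 17, 31, 47]
print(max_min(a, 0, len(a) - 1))          # (60, -8)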

Complexity:
What is the number of element comparisons needed for MaxMin? If T(n) represents this
number, then the resulting recurrence relation is

T(n) = 0                          if n = 1
T(n) = 1                          if n = 2
T(n) = T(⌈n/2⌉) + T(⌊n/2⌋) + 2    if n > 2

When n is a power of two, n = 2^k for some positive integer k, then

T(n) = 2T(n/2) + 2
     = 2(2T(n/4) + 2) + 2
     = 4T(n/4) + 4 + 2
     ...
     = 2^(k-1) T(2) + ∑(1 ≤ i ≤ k-1) 2^i
     = 2^(k-1) + 2^k - 2
     = 3n/2 - 2 = O(n)
Note that 3n/2 - 2 is the best, average and worst-case number of comparisons when n is a
power of two.
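
The closed form can be checked mechanically. A small Python sketch that evaluates the
recurrence directly and compares it against 3n/2 - 2 for powers of two:

# T(1) = 0, T(2) = 1, T(n) = 2T(n/2) + 2 for n > 2 (n a power of two).
def t(n):
    if n == 1:
        return 0
    if n == 2:
        return 1
    return 2 * t(n // 2) + 2

for k in range(1, 11):
    n = 2 ** k
    assert t(n) == 3 * n // 2 - 2         # closed form matches
print("T(n) = 3n/2 - 2 verified for n = 2, 4, ..., 1024")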


Merge Sort
The merge sort algorithm is a classic example of divide and conquer. To sort an array,
recursively sort its left and right halves separately and then merge them. The time complexity
of merge sort in the best, worst and average cases is O(n log n), and the number of comparisons
used is nearly optimal.

This strategy is simple and efficient, but the problem is that there seems to be no easy way to
merge two adjacent sorted arrays in place (the result must be built up in a separate array).

The fundamental operation in this algorithm is merging two sorted lists. Because the lists are
sorted, this can be done in one pass through the input, if the output is put in a third list.

The basic merging algorithm takes two input arrays a and b, an output array c, and three
counters, aptr, bptr and cptr, which are initially set to the beginning of their respective
arrays. The smaller of a[aptr] and b[bptr] is copied to the next entry in c, and the appropriate
counters are advanced. When either input list is exhausted, the remainder of the other list is
copied to c.

To illustrate how the merge process works, consider the array a containing 1, 13, 24, 26 and b
containing 2, 15, 27, 38. First, 1 and 2 are compared; 1 is copied to c, and aptr and cptr are
incremented. (In the snapshots below, brackets mark the current positions of aptr and bptr.)

a:  1 [13] 24  26      b: [2] 15  27  38      c: 1

Then 2 and 13 are compared; 2 is copied to c, and bptr and cptr are incremented.

a:  1 [13] 24  26      b:  2 [15] 27  38      c: 1 2

Then 13 and 15 are compared; 13 is copied to c, and aptr and cptr are incremented.

a:  1  13 [24] 26      b:  2 [15] 27  38      c: 1 2 13


Then 24 and 15 are compared; 15 is copied to c, and bptr and cptr are incremented.

a:  1  13 [24] 26      b:  2  15 [27] 38      c: 1 2 13 15

Then 24 and 27 are compared; 24 is copied to c, and aptr and cptr are incremented.

a:  1  13  24 [26]     b:  2  15 [27] 38      c: 1 2 13 15 24

Then 26 and 27 are compared; 26 is copied to c, and aptr and cptr are incremented.

a:  1  13  24  26      b:  2  15 [27] 38      c: 1 2 13 15 24 26

The a list is now exhausted, so the remainder of the b array is copied to c:

a:  1  13  24  26      b:  2  15  27  38      c: 1 2 13 15 24 26 27 38

Algorithm MERGESORT (low, high)
// a (low : high) is a global array to be sorted.
{
   if (low < high) then
   {
      mid := ⌊(low + high)/2⌋;    // find where to split the set
      MERGESORT(low, mid);        // sort one subset
      MERGESORT(mid + 1, high);   // sort the other subset
      MERGE(low, mid, high);      // combine the results
   }
}

Algorithm MERGE (low, mid, high)
// a (low : high) is a global array containing two sorted subsets,
// in a (low : mid) and in a (mid + 1 : high). The objective is to
// merge these sorted sets into a single sorted set residing in
// a (low : high). An auxiliary array b is used.
{
   h := low; i := low; j := mid + 1;
   while ((h ≤ mid) and (j ≤ high)) do
   {
      if (a[h] ≤ a[j]) then
      {
         b[i] := a[h]; h := h + 1;
      }
      else
      {
         b[i] := a[j]; j := j + 1;
      }
      i := i + 1;
   }
   if (h > mid) then
      for k := j to high do
      {
         b[i] := a[k]; i := i + 1;
      }
   else
      for k := h to mid do
      {
         b[i] := a[k]; i := i + 1;
      }
   for k := low to high do
      a[k] := b[k];
}
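
Together, the two routines translate into the following runnable Python sketch (0-based
indices, with the auxiliary array b allocated once and reused):

# Merge the sorted runs a[low..mid] and a[mid+1..high] using buffer b.
def merge(a, b, low, mid, high):
    h, i, j = low, low, mid + 1
    while h <= mid and j <= high:         # both runs still have elements
        if a[h] <= a[j]:
            b[i] = a[h]; h += 1
        else:
            b[i] = a[j]; j += 1
        i += 1
    for k in range(h, mid + 1):           # copy any leftover of the left run
        b[i] = a[k]; i += 1
    for k in range(j, high + 1):          # copy any leftover of the right run
        b[i] = a[k]; i += 1
    a[low:high + 1] = b[low:high + 1]     # move merged result back into a

def merge_sort(a, low, high, b=None):
    if b is None:
        b = a[:]                          # auxiliary array
    if low < high:
        mid = (low + high) // 2           # find where to split the set
        merge_sort(a, low, mid, b)        # sort one subset
        merge_sort(a, mid + 1, high, b)   # sort the other subset
        merge(a, b, low, mid, high)       # combine the results

a = [26, 5, 77, 1, 61, 11, 59, 15, 48, 19]
merge_sort(a, 0, len(a) - 1)
print(a)   # [1, 5, 11, 15, 19, 26, 48, 59, 61, 77]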

Quick Sort

The main reason for the slowness of algorithms like straight insertion sort (SIS) is that all
comparisons and exchanges between keys in a sequence w1, w2, ..., wn take place between
adjacent pairs. In this way it takes a relatively long time for a key that is badly out of place
to work its way into its proper position in the sorted sequence.

Hoare devised a very efficient way of implementing the partitioning idea in the early 1960s;
it improves on the O(n²) behavior of the SIS algorithm with an expected performance of
O(n log n).

In essence, the quick sort algorithm partitions the original array by rearranging it into two
groups. The first group contains those elements less than some arbitrarily chosen value taken
from the set, and the second group contains those elements greater than or equal to the chosen
value.


The chosen value is known as the pivot element. Once the array has been rearranged in this way
with respect to the pivot, the very same partitioning is recursively applied to each of the two
subsets. When all the subsets have been partitioned and rearranged, the original array is sorted.

The function partition() makes use of two pointers i and j which are moved toward each other
in the following fashion:

1. Repeatedly increase the pointer i until a[i] >= pivot.
2. Repeatedly decrease the pointer j until a[j] <= pivot.
3. If j > i, interchange a[j] with a[i].
4. Repeat steps 1, 2 and 3 until the i pointer crosses the j pointer. When i crosses j, the
   position for the pivot has been found, and the pivot element is placed at position j.

The program uses a recursive function quicksort(). The quicksort function sorts all elements
in an array a between positions low and high.

 It terminates when the condition low >= high is satisfied, i.e., when the sub-array
contains at most one element.

 A pivot is chosen, and the partition function is called to find the proper position j of
the pivot (in the partition routine given below, the last element a[high] is used as the
pivot). We then have two sub-arrays a[low], a[low+1], ..., a[j-1] and a[j+1], a[j+2], ...,
a[high].

 It calls itself recursively to sort the left sub-array a[low], a[low+1], ..., a[j-1]
between positions low and j-1 (where j is returned by the partition function).

 It calls itself recursively to sort the right sub-array a[j+1], a[j+2], ..., a[high]
between positions j+1 and high.

Algorithm

QUICKSORT(low, high)
// Sorts the elements a(low), ..., a(high), which reside in the
// global array a(1 : n), into ascending order.
{
   if (low < high) then
   {
      j := PARTITION(a, low, high);
      // j is the position of the partitioning (pivot) element
      QUICKSORT(low, j - 1);
      QUICKSORT(j + 1, high);
   }
}
partition (a, low, high)
{
   // Lomuto partition scheme: the pivot is the last element,
   // to be placed at its correct final position.
   pivot := a[high];
   i := low - 1;   // index of the last element smaller than the pivot
   for j := low to high - 1 do
   {
      // If the current element is smaller than the pivot,
      // grow the region of smaller elements and move it there.
      if (a[j] < pivot) then
      {
         i := i + 1;
         interchange a[i] and a[j];
      }
   }
   interchange a[i + 1] and a[high];   // place the pivot in its final position
   return i + 1;
}
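
The pseudocode above translates into the following runnable Python sketch (0-based indices,
Lomuto partition with the last element as pivot):

def partition(a, low, high):
    pivot = a[high]                       # pivot is the last element
    i = low - 1                           # boundary of the "< pivot" region
    for j in range(low, high):
        if a[j] < pivot:                  # grow the "< pivot" region
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[high] = a[high], a[i + 1] # place pivot in its final position
    return i + 1

def quicksort(a, low, high):
    if low < high:
        j = partition(a, low, high)       # j is the pivot's final position
        quicksort(a, low, j - 1)          # sort the left sub-array
        quicksort(a, j + 1, high)         # sort the right sub-array

a = [65, 70, 75, 80, 85, 60, 55, 50, 45]
quicksort(a, 0, len(a) - 1)
print(a)   # [45, 50, 55, 60, 65, 70, 75, 80, 85]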

Brute Force
A brute force approach is an approach that tries all the possible solutions in order to find a
satisfactory solution to a given problem. The brute force algorithm tries out all the
possibilities until a satisfactory solution is found. Such an algorithm can be of two types:

Optimizing: In this case, the best solution is found. To find the best solution, it may either
enumerate all the possible solutions, or, if the value of the best solution is known, stop as
soon as that solution is found. For example: finding the best path for the travelling salesman
problem, where the best path visits all the cities at minimum travelling cost.

Satisficing: It stops as soon as a satisfactory solution is found. For example: finding a
travelling salesman path which is within 10% of optimal.

Brute force algorithms often require exponential time. Various heuristics and optimizations
can be used:

Heuristic: A rule of thumb that helps decide which possibilities to look at first.

Optimization: Certain possibilities are eliminated without exploring all of them.


Suppose we have converted the problem into the form of the tree shown below:


Brute force search considers each and every state of the tree, where each state is represented
as a node. From the starting position we have two choices, state A and state B, and we can
generate either of them. From state B, in turn, we have two states, state E and state F.

In a brute force search, each state is considered one by one. As we can observe in the above
tree, the brute force search takes 12 steps to find the solution.

On the other hand, backtracking, which uses depth-first search, considers a state's descendants
only when the state can lead to a feasible solution. In the above tree, start from the root
node, then move to node A and then node C. If node C does not lead to a feasible solution,
there is no point in considering states G and H, so we backtrack from node C to node A. Then
we move from node A to node D. Since node D does not lead to a feasible solution either, we
discard this state and backtrack from node D to node A.

We then move to node B, from node B to node E, and from node E to node K. Since K is a
solution, the search takes 10 steps. In this way we eliminate a greater number of states in a
single iteration. Therefore, we can say that backtracking is faster and more efficient than
the brute force approach.
Advantages of a brute-force algorithm
The following are the advantages of the brute-force algorithm:
This algorithm finds all the possible solutions, and it guarantees that it finds the correct
solution to a problem.
This type of algorithm is applicable to a wide range of domains.
It is mainly used for solving simpler and small problems.
It can serve as a comparison benchmark for solving a simple problem and does not require any
particular domain knowledge.
Disadvantages of a brute-force algorithm

The following are the disadvantages of the brute-force algorithm:

It is an inefficient algorithm, as it requires exploring each and every state.

It is a very slow way to find the correct solution, as it examines each state without
considering whether it can lead to a feasible solution.

The brute force algorithm is neither constructive nor creative as compared to other algorithms.

Knapsack Problem

You are given the following:

 A knapsack (a kind of shoulder bag) with limited weight capacity.
 A few items, each having some weight and profit.

The problem states: which items should be placed into the knapsack such that

 the profit obtained by putting the items into the knapsack is maximum, and
 the weight limit of the knapsack is not exceeded.
Knapsack Problem Variants

The knapsack problem has the following two variants:


1. Fractional Knapsack Problem
2. 0/1 Knapsack Problem
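
For the 0/1 variant, the brute-force approach simply tries every subset of the items and keeps
the best feasible one, as the following Python sketch shows (the weights, profits and capacity
are made-up illustrative data):

from itertools import combinations

# Brute-force 0/1 knapsack: with n items there are 2^n subsets,
# so this runs in O(2^n) time.
def knapsack_brute_force(weights, profits, capacity):
    n = len(weights)
    best_profit, best_subset = 0, ()
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            w = sum(weights[i] for i in subset)
            p = sum(profits[i] for i in subset)
            if w <= capacity and p > best_profit:   # feasible and better
                best_profit, best_subset = p, subset
    return best_profit, best_subset

print(knapsack_brute_force([2, 3, 4, 5], [3, 4, 5, 6], 5))
# (7, (0, 1)) -- items 0 and 1: total weight 5, total profit 7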


Traveling salesman problem

The travelling salesman problem is a graph computational problem where the salesman needs to
visit all cities (represented as nodes in a graph) in a list just once, and the distances
(represented as edge weights in the graph) between all these cities are known. The solution to
be found for this problem is the shortest possible route in which the salesman visits all the
cities and returns to the origin city.

Consider the following graph. The salesperson has to start from vertex a, visit every other
vertex exactly once, and return to vertex a with minimum cost, using the brute-force approach:
enumerate every possible tour and keep the cheapest one.
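
A brute-force Python sketch of this idea fixes the start city and tries all (n-1)! orderings
of the remaining cities; the cost matrix here is illustrative and is not the graph from the
figure.

from itertools import permutations

def tsp_brute_force(cost, start=0):
    n = len(cost)
    best_cost, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):       # all (n-1)! orderings
        tour = (start,) + perm + (start,)        # return to the origin city
        c = sum(cost[tour[i]][tour[i + 1]] for i in range(n))
        if c < best_cost:
            best_cost, best_tour = c, tour
    return best_cost, best_tour

cost = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
print(tsp_brute_force(cost))   # (80, (0, 1, 3, 2, 0))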


Convex-Hull

Suppose we have a set of points. We have to make a polygon from a subset of the points such
that the polygon covers all the given points.

The convex hull of a set of points S is the boundary of the smallest convex region that
contains all the points of S inside it or on its boundary.

Definition: Given a finite set of points P = {p1, ..., pn}, the convex hull of P is the
smallest convex set C such that P ⊆ C.


There are many algorithms for computing the convex hull:

– Brute force: O(n³)
– Gift wrapping (Jarvis march): O(nh), where h is the number of hull vertices; O(n²) in the worst case
– Quickhull: O(n log n) on average, O(n²) in the worst case
– Divide and conquer: O(n log n)

Gift Wrapping (Jarvis March)

Key idea: iteratively grow the edges of the convex hull, turning as little as possible at each
step.

 Given a directed edge on the hull, consider all the vertices the next edge can connect to.
 Choose the one which turns least.
 Repeat the process until the wrap returns to the starting vertex, as in the sketch below.
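
A runnable Python sketch of this procedure (assuming at least two distinct points and no
degenerate, all-collinear input):

# cross(o, a, b) > 0 if o->a->b turns counter-clockwise, < 0 if clockwise.
def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def jarvis_march(points):
    start = min(points, key=lambda p: (p[1], p[0]))   # lowest point is on the hull
    hull, p = [], start
    while True:
        hull.append(p)
        q = points[0] if points[0] != p else points[1]
        for r in points:
            if cross(p, q, r) < 0:    # r lies to the right of edge p->q,
                q = r                 # so p->r "turns less"; wrap to it
        p = q
        if p == start:                # wrapped all the way around
            break
    return hull

pts = [(0, 0), (4, 0), (4, 4), (0, 4), (2, 2), (1, 3)]
print(jarvis_march(pts))   # [(0, 0), (4, 0), (4, 4), (0, 4)]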


Fig. The final convex hull.

