Analysis of Algorithms - 12112014 - 054619AM

This document discusses analysis of algorithms and provides examples. It explains that algorithm analysis is important to determine the most efficient algorithm for a problem. There are two approaches: empirical testing on sample data and theoretical analysis of resource needs. Worst-case, best-case, and average-case complexity are defined. Elementary operations are operations with bounded execution time. Insertion sort is analyzed, showing it has quadratic time complexity.

Uploaded by Russel Patrick

DARSHAN INST. OF ENGG. & TECH.

150703 – Design and Analysis of Algorithm


Computer Engineering Unit 2 Analysis of Algorithms

(1) Explain why analysis of algorithms is important.


 When we have a problem to solve, there may be several suitable algorithms
available. We would obviously like to choose the best one.
 Analyzing an algorithm means predicting the resources that the algorithm
requires.
 Generally, by analyzing several candidate algorithms for a problem, the most
efficient one can be identified.
 Analysis of algorithms is required to decide which of several algorithms is
preferable.
 There are two different approaches to analyze an algorithm.
1. Empirical (posteriori) approach to choose an algorithm: Programming
different competing techniques and trying them on various instances with
the help of computer.
2. Theoretical (priori) approach to choose an algorithm: Determining
mathematically the quantity of resources needed by each algorithm as a
function of the size of the instances considered. The resources of most
interest are computing time (time complexity) and storage space (space
complexity). The advantage of this approach is that it does not depend on
the programmer, the programming language or the computer being used.
 Analysis of algorithm is required to measure the efficiency of algorithm.
 Only after determining the efficiency of various algorithms, you will be able to make
a well informed decision for selecting the best algorithm to solve a particular
problem.
 We will compare algorithms based on their execution time. Efficiency of an
algorithm means how fast it runs.
 If we want to measure the amount of storage that an algorithm uses as a function of
the size of the instances, there is a natural unit available: the bit.
 On the other hand, if we want to measure the efficiency of an algorithm in terms of
the time it takes to arrive at a result, there is no obvious choice of unit.
 This problem is solved by the principle of invariance, which states that two different
implementations of the same algorithm will not differ in efficiency by more than
some multiplicative constant.

[Gopi Sanghani] Page 1



 Suppose that the time taken by an algorithm to solve an instance of size n is never
more than cn seconds, where c is some suitable constant.
 In practice, the size of an instance means any integer that in some way measures the
number of components in the instance.
 Sorting problem: the size is the number of items to be sorted.
 Graph: the size is the number of nodes, the number of edges, or both.
 We say that the algorithm takes a time in the order of n, i.e. it is a linear-time
algorithm.
 If an algorithm never takes more than cn² seconds to solve an instance of size n, we
say it takes time in the order of n², i.e. it is a quadratic-time algorithm.
 Polynomial : nᵏ , Exponential : cⁿ or n!

(2) Explain: Worst Case, Best Case & Average Case Complexity.
Best Case Complexity
 The best case of a given algorithm occurs when its resource usage is at its minimum.
Usually the resource being considered is running time.
 The term best-case performance is used to describe an algorithm's behavior under
optimal conditions.
 The best-case complexity of the algorithm is the function defined by the minimum
number of steps taken on any instance of size n.
 In the best case analysis, we calculate lower bound on running time of an algorithm.
 The best case behavior of an algorithm is not so useful.
 For example, the best case for a simple linear search on a list occurs when the
desired element is the first element of the list. In sorting problem the best case
occurs when the input elements are already sorted in required order.
Average Case Complexity
 The average case of a given algorithm describes its resource usage averaged over all
possible inputs.
 Average performance and worst-case performance are the most used in algorithm
analysis.


 The average-case complexity of the algorithm is the function defined by the average
number of steps taken on any instance of size n.
 In average case analysis, we take all possible inputs and calculate computing time for
all of the inputs. Sum all the calculated values and divide the sum by total number of
inputs.
 For example, the average case for a simple linear search on a list occurs when the
desired element is any element of the list. In sorting problem the average case
occurs when the input elements are randomly arranged.
Worst Case Complexity
 The worst case of a given algorithm occurs when its resource usage is at its maximum.
 The worst-case complexity of the algorithm is the function defined by the maximum
number of steps taken on any instance of size n.
 In the worst case analysis, we calculate upper bound on running time of an
algorithm.
 We must know the case that causes maximum number of operations to be
executed.
 For Linear Search, the worst case happens when the element to be searched is not
present in the array. In sorting problem the worst case occurs when the input
elements are sorted in reverse order.
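For example, the three cases can be made concrete with a short Python sketch of linear search (the comparison-counting harness is illustrative, not part of the text):

```python
def linear_search(arr, x):
    """Return (index of x, number of comparisons); index is -1 if x is absent."""
    comparisons = 0
    for i, v in enumerate(arr):
        comparisons += 1
        if v == x:
            return i, comparisons
    return -1, comparisons

arr = [10, 20, 30, 40, 50]
print(linear_search(arr, 10))   # best case: first element, 1 comparison → (0, 1)
print(linear_search(arr, 99))   # worst case: element absent, n comparisons → (-1, 5)
```

Searching for a middle element gives a cost between these two extremes, which is what the average-case analysis captures.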

(3) Elementary Operations


 An elementary operation is one whose execution time can be bounded above by a
constant depending only on the particular implementation used – the machine, the
programming language used.
 Thus the constant does not depend on either the size or the parameters of the
instance being considered.
 Since we consider execution time only up to a multiplicative constant, it is the
number of elementary operations executed that matters in the analysis, not the
exact time required by each of them.
 For example, suppose that solving some instance of a problem requires a additions, m
multiplications and s assignment instructions.


 Suppose we also know that an addition takes no more than ta microseconds, a
multiplication no more than tm microseconds and an assignment no more than ts
microseconds, where ta, tm and ts are constants depending on the machine used.
 Addition, multiplication and assignment can all therefore be considered
elementary operations.
 The total time t required by the algorithm can be bounded by,
t ≤ a·ta + m·tm + s·ts
t ≤ max(ta, tm, ts) × (a + m + s)
 That is, t is bounded by a constant multiple of the number of elementary operations
to be executed.

Write an algorithm / method for Insertion Sort. Analyze the algorithm and
(4)
find its time complexity.
 Insertion Sort works by inserting an element into its appropriate position during each
iteration.
 Insertion sort works by comparing an element to all its previous elements until an
appropriate position is found.
 Whenever an appropriate position is found, the element is inserted there by shifting down
remaining elements.
Algorithm
Procedure insert (T[1…n])                  cost    times
    for i ← 2 to n do                      C1      n
        x ← T[i]                           C2      n−1
        j ← i − 1                          C3      n−1
        while j > 0 and x < T[j] do        C4      Σi=2..n ti
            T[j+1] ← T[j]                  C5      Σi=2..n (ti − 1)
            j ← j − 1                      C6      Σi=2..n (ti − 1)
        T[j+1] ← x                         C7      n−1
(Here ti is the number of times the while-loop test is executed for that value of i.)

Analysis
 The running time of an algorithm on a particular input is the number of primitive


operations or "steps" executed.


 A constant amount of time is required to execute each line of our pseudo code. One
line may take a different amount of time than another line, but we shall assume that
each execution of the ith line takes time ci , where ci is a constant.
 The running time of the algorithm is the sum of running times for each statement
executed; a statement that takes ci steps to execute and is executed n times will
contribute cin to the total running time.
 Let the time complexity of insertion sort be T(n); then
T(n) = C1·n + C2(n−1) + C3(n−1) + C4·Σi=2..n ti + (C5 + C6)·Σi=2..n (ti − 1) + C7(n−1)
In the worst case (input in reverse order) ti = i, so Σi=2..n ti = n(n+1)/2 − 1
and Σi=2..n (ti − 1) = n(n−1)/2. Collecting terms,
T(n) = n²(C4 + C5 + C6)/2 + n·(C1 + C2 + C3 + C7 + (C4 + C5 + C6)/2) − (C2 + C3 + C7 + C4)
     = C8·n² + C9·n + C10

Thus, T(n) ∈ Θ(n²)
Time complexity of insertion sort is Θ(n²)
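The pseudocode above translates to the following runnable Python sketch (Python lists are 0-based, so the loop bounds are shifted by one relative to the text):

```python
def insertion_sort(T):
    """Sort list T in place by inserting each element into the sorted prefix."""
    for i in range(1, len(T)):          # pseudocode's i = 2..n (1-based)
        x = T[i]
        j = i - 1
        # shift elements larger than x one position to the right
        while j >= 0 and x < T[j]:
            T[j + 1] = T[j]
            j -= 1
        T[j + 1] = x                    # insert x into its correct position
    return T

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # → [1, 2, 3, 4, 5, 6]
```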

Write an algorithm / method for Selection Sort. Analyze the algorithm and find its
(5)
time complexity.

 Selection Sort works by repeatedly selecting elements.


 The algorithm finds the smallest element in the array first and exchanges it with the
element in the first position.
 Then it finds the second smallest element and exchanges it with the element in the
second position and continues in this way until the entire array is sorted.
Algorithm
Procedure select (T[1…n])                      cost    times
    for i ← 1 to n−1 do                        C1      n
        minj ← i ; minx ← T[i]                 C2      n−1
        for j ← i+1 to n do                    C3      Σi=1..n−1 (n−i+1)
            if T[j] < minx then minj ← j       C4      Σi=1..n−1 (n−i)
                minx ← T[j]                    C5      Σi=1..n−1 (n−i)
        T[minj] ← T[i]                         C6      n−1
        T[i] ← minx                            C7      n−1


Analysis
 The running time of an algorithm on a particular input is the number of primitive
operations or "steps" executed.
 A constant amount of time is required to execute each line of our pseudo code. One
line may take a different amount of time than another line, but we shall assume that
each execution of the ith line takes time ci , where ci is a constant.
 The running time of the algorithm is the sum of running times for each statement
executed; a statement that takes ci steps to execute and is executed n times will
contribute cin to the total running time.
 Let the time complexity of selection sort be T(n); then
T(n) = C1·n + C2(n−1) + C3·Σi=1..n−1 (n−i+1) + C4·Σi=1..n−1 (n−i) + C5·Σi=1..n−1 (n−i) + C6(n−1) + C7(n−1)
Since Σi=1..n−1 (n−i) = n(n−1)/2, collecting terms gives
T(n) = n²(C3 + C4 + C5)/2 + n·(C1 + C2 + C6 + C7 + (C3 + C4 + C5)/2) − (C2 + C6 + C7 + C3)
     = C8·n² + C9·n − C10   (of the form an² + bn + c)

Thus, T(n) ∈ Θ(n²)
Time complexity of selection sort is Θ(n²)
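As with insertion sort, the pseudocode translates directly to runnable Python (a sketch with 0-based indices; variable names follow the text):

```python
def selection_sort(T):
    """Repeatedly select the smallest remaining element and swap it into place."""
    n = len(T)
    for i in range(n - 1):
        minj, minx = i, T[i]
        for j in range(i + 1, n):       # scan the unsorted suffix
            if T[j] < minx:
                minj, minx = j, T[j]
        T[minj] = T[i]                  # swap via minx, as in the pseudocode
        T[i] = minx
    return T

print(selection_sort([29, 10, 14, 37, 13]))  # → [10, 13, 14, 29, 37]
```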

(6) Explain different asymptotic notations in brief.


 The following notations are commonly used notations in performance analysis and
used to characterize the complexity of an algorithm.
Θ-Notation (Same order)
 For a given function g(n), we denote by Θ(g(n)) the set of functions
Θ(g(n)) = { f(n) : there exist positive constants c1, c2 and n0 such that
0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0 }
 Because Θ(g(n)) is a set, we write f(n) ∈ Θ(g(n)) to indicate that f(n) is a
member of Θ(g(n)).
 This notation bounds a function to within constant factors. We say f(n) = Θ(g(n)) if
there exist positive constants n0, c1 and c2 such that to the right of n0 the value of
f(n) always lies between c1·g(n) and c2·g(n) inclusive.
 Figure a gives an intuitive picture of functions f(n) and g(n). For all values of n to the


right of n0, the value of f(n) lies at or above c1g(n) and at or below c2g(n). In other
words, for all n ≥ n0, the value of f(n) is equal to g(n) to within a constant factor.
 We say that g(n) is an asymptotically tight bound for f(n).

O-Notation (Upper Bound)


 For a given function g(n), we denote by Ο(g(n)) the set of functions
Ο(g(n)) = { f(n) : there exist positive constants c and n0 such that
0 ≤ f(n) ≤ c·g(n) for all n ≥ n0 }
 We use Ο notation to give an upper bound on a function, to within a constant factor.
For all values of n to the right of n0, the value of the function f(n) is on or below g(n).
 This notation gives an upper bound for a function to within a constant factor. We
write f(n) = O(g(n)) if there are positive constants n0 and c such that to the right of
n0, the value of f(n) always lies on or below cg(n).
 We say that g(n) is an asymptotically upper bound for f(n).


Ω-Notation (Lower Bound)


 For a given function g(n), we denote by Ω(g(n)) the set of functions
Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that
0 ≤ c·g(n) ≤ f(n) for all n ≥ n0 }
 Ω Notation provides an asymptotic lower bound. For all values of n to the right of
n0, the value of the function f(n) is on or above cg(n).
 This notation gives a lower bound for a function to within a constant factor. We
write f(n) = Ω(g(n)) if there are positive constants n0 and c such that to the right of
n0, the value of f(n) always lies on or above cg(n).
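As a worked illustration of the Θ definition (the function and constants below are chosen for this example, not taken from the text): for f(n) = 3n² + 2n, the constants c1 = 3, c2 = 5, n0 = 1 witness f(n) ∈ Θ(n²), since 3n² ≤ 3n² + 2n ≤ 5n² whenever n ≥ 1.

```python
def f(n):
    return 3 * n * n + 2 * n

c1, c2, n0 = 3, 5, 1
# verify c1*g(n) <= f(n) <= c2*g(n), with g(n) = n^2, over a sample range n >= n0
assert all(c1 * n * n <= f(n) <= c2 * n * n for n in range(n0, 1000))
print("3n^2 + 2n lies between 3n^2 and 5n^2 for all tested n >= 1")
```

A finite check like this cannot prove the bound for all n, but it is a useful sanity test of a chosen (c1, c2, n0) triple.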

(7) To compute the greatest common divisor (GCD) of two numbers.


 Let m and n be two positive integers.
 The greatest common divisor of m and n, denoted GCD(m, n), is the largest
integer that divides both m and n exactly.


 When GCD(m, n) = 1, we say that m and n are coprime.


 The obvious algorithm for calculating GCD(m, n) is obtained directly as,
Function GCD(m, n)
i ← min(m, n) + 1
repeat i ← i – 1
until i divides both m and n exactly
return i
Analysis
 The time taken by this algorithm is in the order of the difference between the smaller
of the two arguments and their greatest common divisor.
 In the worst case, when m and n are coprime, the time taken is in Θ(n).
 There exists a much more efficient algorithm for calculating GCD(m, n), known as
Euclid’s algorithm.
Function Euclid(m, n) {Here n ≥ m, if not then swap n and m}
while m > 0 do
t←m
m ← n mod m
n←t
return n
Analysis
 Total time taken by algorithm is in the exact order of the number of trips round the
loop.
 Considering the while-loop execution as a recursive process,
 Let T(k) be the maximum number of times the algorithm goes round the loop on
inputs m and n when m ≤ n ≤ k.
1. If n ≤ 2, the loop is executed either zero times or once.
2. If m = 0, or m divides n exactly (the remainder is zero), the loop is executed
at most twice.
3. If m ≥ 2, then n mod m < n/2, so the value of n is at least halved each time
round the loop. Therefore it takes no more than T(k ÷ 2) additional trips round
the loop to complete the calculation.


 Hence the recurrence for the number of trips round the loop is
T(k) ≤ T(k ÷ 2) + c for some constant c, whence
T(k) ∈ O(log k)
 The time complexity of Euclid's algorithm is in O(log k).
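Both GCD algorithms can be sketched in runnable Python (a direct translation of the pseudocode; the naive version counts down from the smaller argument):

```python
def gcd_naive(m, n):
    """Decrease i from min(m, n) until it divides both m and n exactly."""
    i = min(m, n)
    while m % i != 0 or n % i != 0:
        i -= 1
    return i

def gcd_euclid(m, n):
    """Euclid's algorithm; repeatedly replace (m, n) by (n mod m, m)."""
    while m > 0:
        m, n = n % m, m
    return n

print(gcd_naive(560, 1547), gcd_euclid(560, 1547))  # → 7 7
```

For coprime arguments the naive loop runs roughly min(m, n) times, while Euclid's version needs only a logarithmic number of iterations, matching the analysis above.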

(8) Compare Iterative and Recursive algorithm to find out Fibonacci series.
Iterative Algorithm for Fibonacci series
Function fibiter(n)
i ← 1; j ← 0
for k ← 1 to n do
j←i+j
i←j–i
return j
Analysis
 If we count all arithmetic operations at unit cost, the instructions inside the for loop
take constant time.
 Let the time taken by these instructions be bounded above by some constant c.
 The time taken by the for loop is bounded above by n times this constant, i.e. nc.
 Since the instructions before and after this loop take negligible time, the algorithm
takes time in Θ (n).
 Hence the time complexity of iterative Fibonacci algorithm is Θ (n).
 If the value of n is large, then the time needed to execute an addition grows
linearly with the length of the operands.
 At the end of the kth iteration, the values of i and j are fk−1 and fk.
 By de Moivre's formula, the size of fk is in Θ(k).
 So the kth iteration takes time in Θ(k). Let c be some constant such that this time is
bounded above by ck for all k ≥ 1.
 The time taken by the fibiter algorithm is then bounded above by
Σk=1..n ck = c·n(n+1)/2 ∈ O(n²)

 Hence, for large values of n, the time complexity of the iterative Fibonacci algorithm
is Θ(n²).

Recursive Algorithm for Fibonacci series


Function fibrec(n)
if n < 2 then return n
else return fibrec (n – 1) + fibrec (n – 2)

Analysis
 Let T(n) be the time taken by a call on fibrec(n).
 The recurrence equation for the above algorithm is
T(n) = T(n−1) + T(n−2) + Θ(1) for n ≥ 2, with T(0) and T(1) constant.
 Solving this gives the complexity T(n) ∈ Θ(φⁿ), where φ = (1 + √5)/2.


 The time taken by the algorithm grows exponentially.
 Hence the time complexity of the recursive Fibonacci algorithm is Θ(φⁿ).
 The recursive algorithm is very inefficient because it recalculates the same values many
times.
 The iterative algorithm takes linear time if n is small and quadratic time Θ(n²) if n is
large, which is still far faster than the exponential-time fibrec.
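The two algorithms compare directly in runnable Python (a literal translation of the pseudocode above):

```python
def fibiter(n):
    """Iterative Fibonacci from the text: one addition per loop iteration."""
    i, j = 1, 0
    for _ in range(n):
        j = i + j          # j becomes f(k); i + j uses the old i = f(k-2)... 
        i = j - i          # ...so i recovers f(k-1)
    return j

def fibrec(n):
    """Recursive Fibonacci from the text: exponential number of calls."""
    if n < 2:
        return n
    return fibrec(n - 1) + fibrec(n - 2)

print([fibiter(k) for k in range(8)])  # → [0, 1, 1, 2, 3, 5, 8, 13]
print(fibrec(10) == fibiter(10))       # → True
```

Timing fibrec(30) against fibiter(30) makes the exponential blow-up of the recursive version immediately visible.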

(9) What is Recursion? Give the algorithm for the Tower of Hanoi problem using
recursion.
 Recursion is a method of solving problems that involves breaking a problem down
into smaller and smaller sub-problems until you get to a small enough problem that
it can be solved trivially.
 Usually recursion involves a function calling itself.


 Recursion allows us to write elegant solutions to problems that may otherwise be


very difficult to program.
 All recursive algorithms must obey three important characteristics:
1. A recursive algorithm must have a base case.
2. A recursive algorithm must change its state and move toward the base case.
3. A recursive algorithm must call itself, recursively.
 In a recursive algorithm, there are one or more base cases for which no recursion is
required. All chains of recursion eventually end up at one of the base cases.
 The simplest way to guarantee that these conditions are met is to make sure that
each recursion always occurs on a smaller version of the original problem.
 A very small version of the problem that can be solved without recursion then
becomes the base case.
 The Tower of Hanoi puzzle was invented by the French mathematician Edouard
Lucas in 1883.
 We are given a tower of n disks, initially stacked in increasing size on one of three
pegs. The objective is to transfer the entire tower to one of the other pegs, moving
only one disk at a time and never a larger one onto a smaller.
The Rules:
1. There are n disks (1, 2, 3... n) and three towers. The towers are labeled 'A', 'B',
and 'C'.
2. All the disks are initially placed on the first tower (the 'A' peg).
3. No disk may be placed on top of a smaller disk.
4. You may only move one disk at a time and this disk must be the top disk on a
tower.
 For a given number N of disks, the problem appears to be solved if we know how to
accomplish the following tasks:
 Move the top N - 1 disks from Src to Aux (using Dst as an intermediary tower)
 Move the bottom disk from Src to Dst
 Move N - 1 disks from Aux to Dst (using Src as an intermediary tower)


Algorithm
Hanoi(N, Src, Aux, Dst)
if N is 0
exit
else
Hanoi(N - 1, Src, Dst, Aux)
Move from Src to Dst
Hanoi(N - 1, Aux, Src, Dst)
Analysis
 The number of disk movements t(m) required to solve the Hanoi problem with m
rings is given by the recurrence equation,
t(m) = 2·t(m−1) + 1 for m ≥ 1, with t(0) = 0
 Solving this will give,

t(m) = 2ᵐ − 1
 Hence the time complexity of the Tower of Hanoi problem is Θ(2ᵐ) to solve the
problem with m rings.
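The recursive scheme can be sketched in Python, recording the moves to confirm the 2ᵐ − 1 count (the tower labels are illustrative):

```python
def hanoi(n, src, aux, dst, moves):
    """Move n disks from src to dst via aux, appending each move to `moves`."""
    if n == 0:
        return
    hanoi(n - 1, src, dst, aux, moves)  # move top n-1 disks out of the way
    moves.append((src, dst))            # move the bottom disk
    hanoi(n - 1, aux, src, dst, moves)  # move the n-1 disks on top of it

moves = []
hanoi(3, 'A', 'B', 'C', moves)
print(len(moves))   # → 7, i.e. 2^3 - 1
print(moves[0])     # → ('A', 'C')
```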

(10) Heaps
 A heap data structure is a binary tree with the following properties.
1. It is a complete binary tree; that is each level of the tree is completely filled,
except possibly the bottom level. At this level it is filled from left to right.
2. It satisfies the heap order property: the data item stored in each node is greater
than or equal to the data items stored in its children.
Example:
[figure: a complete binary tree containing the values 8, 4, 6, 3, 2, arranged to satisfy the heap order property]


 Heap can be implemented using an array or a linked list structure. It is easier to


implement heaps using arrays.
 We simply number the nodes in the heap from top to bottom, numbering the nodes
on each level from left to right, and store the ith node in the ith location of the array.
 An array A that represents a heap is an object with two attributes:
i. length[A], which is the number of elements in the array, and
ii. heap-size[A], the number of elements in the heap stored within array A.
 The root of the tree is A[1], and given the index i of a node, the indices of its parent
PARENT(i), left child LEFT(i), and right child RIGHT(i) can be computed simply:
PARENT(i)
return ⌊i/2⌋
LEFT(i)
return 2i
RIGHT(i)
return 2i + 1
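These index computations can be written directly (using 1-based indices, as in the text):

```python
def parent(i):
    return i // 2       # ⌊i/2⌋

def left(i):
    return 2 * i

def right(i):
    return 2 * i + 1

# node 5's parent is node 2; its children are nodes 10 and 11
print(parent(5), left(5), right(5))  # → 2 10 11
```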
Example:
[figure: a heap storing 23 at node 1 (the root); 17 and 14 at nodes 2 and 3; 6, 13, 10 and 1 at nodes 4–7; and 5, 7 and 12 at nodes 8–10]

 The array form for the above heap is,

23 17 14 6 13 10 1 5 7 12
 There are two kinds of binary heaps: max-heaps and min-heaps. In both kinds, the
values in the nodes satisfy a heap property.


 In a max-heap, the max-heap property is that for every node i other than the root,
A[PARENT(i)] ≥ A[i] ,
 That is, the value of a node is at most the value of its parent. Thus, the largest
element in a max-heap is stored at the root, and the sub-tree rooted at a node
contains values no larger than that contained at the node itself.
 A min-heap is organized in the opposite way; the min-heap property is that for
every node i other than the root, A[PARENT(i)] ≤ A[i] .
 The smallest element in a min-heap is at the root.
 For the heap-sort algorithm, we use max-heaps. Min-heaps are commonly used in
priority Queues.
 Viewing a heap as a tree, we define the height of a node in a heap to be the number
of edges on the longest simple downward path from the node to a leaf, and we
define the height of the Heap to be the height of its root.
 The height of an n-element heap based on a binary tree is ⌊lg n⌋.
 The basic operations on heap run in time at most proportional to the height of the
tree and thus take O(lg n) time.
 Since a heap of n elements is a complete binary tree of height k, there is one
node on level k, two nodes on level k−1, and so on: there are 2ᵏ⁻¹ nodes on level 1
and at least 1 and not more than 2ᵏ nodes on level 0.

Building a heap
 For the general case of converting a complete binary tree to a heap, we begin at the
last node that is not a leaf; apply the “percolate down” routine to convert the sub-
tree rooted at this current root node to a heap.
 We then move onto the preceding node and percolate down that sub-tree.
 We continue on in this manner, working up the tree until we reach the root of the
given tree.
 We can use the procedure MAX-HEAPIFY in a bottom-up manner to convert an array
A[1,…,n], where n = length[A], into a max-heap.


 The elements in the sub-array A[(⌊n/2⌋+1),…, n] are all leaves of the tree, and so
each is a 1-element heap to begin with.
 The procedure BUILD-MAX-HEAP goes through the remaining nodes of the tree and
runs MAXHEAPIFY on each one.

Algorithm
BUILD-MAX-HEAP(A)
heap-size[A] ← length[A]
for i ← ⌊length[A]/2⌋ downto 1
do MAX-HEAPIFY(A, i)

Analysis
 Each call to Heapify costs O(lg n) time, and there are O(n) such calls. Thus, the
running time is at most O(n lg n)

Maintaining the heap property


 One of the most basic heap operations is converting a complete binary tree to a
heap. Such an operation is called Heapify.
 Its inputs are an array A and an index i into the array. When MAX-HEAPIFY is called,
it is assumed that the binary trees rooted at LEFT(i) and RIGHT(i) are max-heaps, but
that A[i] may be smaller than its children, thus violating the max-heap property.
 The function of MAX-HEAPIFY is to let the value at A[i] "float down" in the max-heap
so that the sub-tree rooted at index i becomes a max-heap.
Algorithm
MAX-HEAPIFY(A, i)
left ← LEFT(i)
right ← RIGHT(i)
if left ≤ heap-size[A] and A[left] > A[i]


then largest ← left


else largest ← i
if right ≤ heap-size[A] and A[right] > A[largest]
then largest ← right
if largest ≠ i
then exchange A[i] ↔ A[largest]
MAX-HEAPIFY(A, largest)
 At each step, the largest of the elements A[i], A[LEFT(i)], and A[RIGHT(i)] is
determined, and its index is stored in largest.
 If A[i] is largest, then the sub-tree rooted at node i is a max-heap and the procedure
terminates.
 Otherwise, one of the two children has the largest element, and A[i] is swapped with
A[largest], which causes node i and its children to satisfy the max-heap property.
 The node indexed by largest, however, now has the original value A[i], and thus the
sub-tree rooted at largest may violate the max-heap property. Therefore, MAX-HEAPIFY
must be called recursively on that sub-tree.

Analysis
 The running time of MAX-HEAPIFY on a sub-tree of size n rooted at given node i is
the Θ(1) time to fix up the relationships among the elements A[i], A[LEFT(i)], and
A[RIGHT(i)], plus the time to run MAX-HEAPIFY on a sub-tree rooted at one of the
children of node i.
 The children's sub-trees can have size of at most 2n/3 and the running time of MAX-
HEAPIFY can therefore be described by the recurrence
T (n) ≤ T(2n/3) + Θ(1)
 The solution to this recurrence is T (n) = O ( lg n).

The heapsort algorithm


 The heapsort algorithm starts by using BUILD-MAX-HEAP to build a max-heap on the
input array A[1,…, n], where n = length[A].


 Since the maximum element of the array is stored at the root A[1], it can be put into
its correct final position by exchanging it with A[n].
 If we now "discard" node n from the heap (by decrementing heap-size[A]), we
observe that A[1,…, (n - 1)] can easily be made into a max-heap.
 The children of the root remain max-heaps, but the new root element may violate
the max-heap property.
 All that is needed to restore the max heap property, however, is one call to MAX-
HEAPIFY(A, 1), which leaves a max-heap in A[1,…, (n - 1)].
 The heapsort algorithm then repeats this process for the max-heap of size n – 1
down to a heap of size 2.

Algorithm
HEAPSORT(A)
BUILD-MAX-HEAP(A)
for i ← length[A] downto 2
do exchange A[1] ↔ A[i]
heap-size[A] ← heap-size[A] - 1
MAX-HEAPIFY(A, 1)
Analysis
The HEAPSORT procedure takes time O(n lg n), since the call to BUILD-MAX-HEAP takes
time O(n) and each of the n - 1 calls to MAX-HEAPIFY takes time O(lg n).
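The three procedures above can be combined into one runnable Python sketch (a leading dummy element keeps the array 1-based, so the indices match the pseudocode):

```python
def max_heapify(A, i, heap_size):
    """Float A[i] down until the sub-tree rooted at i is a max-heap (1-based)."""
    l, r = 2 * i, 2 * i + 1
    largest = l if l <= heap_size and A[l] > A[i] else i
    if r <= heap_size and A[r] > A[largest]:
        largest = r
    if largest != i:
        A[i], A[largest] = A[largest], A[i]
        max_heapify(A, largest, heap_size)

def heapsort(A):
    """Heapsort as in the text; returns a new sorted list."""
    A = [None] + list(A)                # shift to 1-based indexing
    n = len(A) - 1
    for i in range(n // 2, 0, -1):      # BUILD-MAX-HEAP, bottom-up
        max_heapify(A, i, n)
    for i in range(n, 1, -1):           # move the maximum to the end, shrink heap
        A[1], A[i] = A[i], A[1]
        max_heapify(A, 1, i - 1)
    return A[1:]

print(heapsort([4, 1, 3, 2, 16, 9, 10, 14, 8, 7]))
# → [1, 2, 3, 4, 7, 8, 9, 10, 14, 16]
```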

(11) Explain linear search and binary search methods.


 Let T[1 . . . n] be an array sorted in non-decreasing order; that is, T[i] ≤ T[j]
whenever 1 ≤ i ≤ j ≤ n.
 Let x be some number. The problem consists of finding x in the array T if it is there.
 If x is not in the array, then we want to find the position where it might be inserted.

Sequential (Linear) search algorithm


Function sequential ( T[1,…,n], x )


for i = 1 to n do
if T [i] ≥ x then return index i
return n + 1
Analysis
 Here we look sequentially at each element of T until either we reach the end of the
array or find a number no smaller than x.
 This algorithm clearly takes time in Θ(r), where r is the index returned: Θ(n) in the
worst case and Θ(1) in the best case.

Binary Search Algorithm (Iterative)


 The basic idea of binary search is that for a given element we examine the middle
element of the array. We then continue the search in either the lower or the upper
half of the array, depending on the outcome of the comparison, until the required
(given) element is found.
Function biniter ( T[1,…,n], x )
i ← 1; j ← n
while i < j do
k ← (i + j ) ÷ 2
case x < T [k] : j ← k – 1
x = T [k] : i, j ← k {return k}
x > T [k] : i ← k + 1
return i

Analysis
 To analyze the running time of a while loop, we must find a function of the variables
involved whose value decreases each time round the loop.
 Here it is d = j − i + 1, the number of elements of T still under
consideration.
 Initially d = n.


 Loop terminates when i ≥ j, which is equivalent to d ≤ 1


 Each time round the loop there are three possibilities,
I. Either j is set to k – 1
II. Or i is set to k + 1
III. Or both i and j are set to k
 Let d and d' stand for the value of j − i + 1 before and after the iteration under
consideration; similarly for i, j, i' and j'.
 Case I : if x < T [k]
So, j ← k – 1 is executed.
Thus i’ = i and j’ = k -1 where k = (i + j) ÷ 2
Substituting the value of k, j’ = [(i + j) ÷ 2] – 1
d’ = j’ – i’ + 1
Substituting the value of j’, d’ = [(i + j) ÷ 2] – 1 – i + 1
d’ ≤ (i + j) /2 – i
d’ ≤ (j – i)/2 ≤ (j – i + 1) / 2
d’ ≤ d/2
 Case II : if x > T [k]
So, i ← k + 1 is executed.
Thus i' = k + 1 and j' = j, where k = (i + j) ÷ 2
Substituting the value of k, i' = [(i + j) ÷ 2] + 1
d' = j' – i' + 1
Substituting the value of i', d' = j – [(i + j) ÷ 2] – 1 + 1 = j – [(i + j) ÷ 2]
d' ≤ j – (i + j – 1)/2 = (2j – i – j + 1)/2 = (j – i + 1)/2
d' ≤ d/2
 Case III : if x = T [k]
Both i and j are set to k, so i = j and d' = 1.

 We conclude that whatever the case, d' ≤ d/2, which means that the value of d is
at least halved each time round the loop.
 Let dk denote the value of j − i + 1 at the end of the kth trip round the loop; d0 = n.


 We have already proved that dk ≤ dk−1 / 2.

 For n elements, how many times must d be halved before it reaches or goes
below 1?
 n / 2ᵏ ≤ 1 → n ≤ 2ᵏ
 So after k = ⌈lg n⌉ trips round the loop the search terminates.
 The complexity of iterative binary search is Θ(lg n).
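The iterative algorithm translates to Python as follows (a sketch using 0-based indices; on success it returns the index of x, otherwise the position of the first element no smaller than x, matching the text's convention):

```python
def biniter(T, x):
    """Iterative binary search; T must be sorted in non-decreasing order."""
    i, j = 0, len(T) - 1                # 0-based version of i ← 1, j ← n
    while i < j:
        k = (i + j) // 2                # midpoint of the remaining segment
        if x < T[k]:
            j = k - 1                   # continue in the lower half
        elif x > T[k]:
            i = k + 1                   # continue in the upper half
        else:
            return k                    # found: i, j ← k
    return i

T = [1, 3, 5, 7, 9, 11]
print(biniter(T, 7))   # → 3 (T[3] == 7)
print(biniter(T, 6))   # → 3 (the position where 6 could be inserted)
```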
