Analysis of Algorithms - 12112014 - 054619AM
Suppose that the time taken by an algorithm to solve an instance of size n is never
more than cn seconds, where c is some suitable constant.
Practically, the size of an instance means any integer that in some way measures the
number of components in the instance.
Sorting problem: size is the number of items to be sorted.
Graph: size is the number of nodes or edges (or both) involved.
We say that the algorithm takes a time in the order of n, i.e. it is a linear-time
algorithm.
If an algorithm never takes more than cn² seconds to solve an instance of size n, we
say it takes time in the order of n², i.e. it is a quadratic-time algorithm.
Polynomial: nᵏ, Exponential: cⁿ or n!
(2) Explain: Worst Case, Best Case & Average Case Complexity.
Best Case Complexity
Best case of a given algorithm is considered when the resource usage is at its minimum.
Usually the resource being considered is running time.
The term best-case performance is used to describe an algorithm's behavior under
optimal conditions.
The best-case complexity of the algorithm is the function defined by the minimum
number of steps taken on any instance of size n.
In the best case analysis, we calculate lower bound on running time of an algorithm.
The best case behavior of an algorithm is not so useful.
For example, the best case for a simple linear search on a list occurs when the
desired element is the first element of the list. In sorting problem the best case
occurs when the input elements are already sorted in required order.
Average Case Complexity
Average case of a given algorithm considers the resource usage averaged over all
possible inputs.
Average-case performance and worst-case performance are the most commonly used in
algorithm analysis.
The average-case complexity of the algorithm is the function defined by the average
number of steps taken on any instance of size n.
In average case analysis, we take all possible inputs and calculate computing time for
all of the inputs. Sum all the calculated values and divide the sum by total number of
inputs.
For example, the average case for a simple linear search on a list occurs when the
desired element is equally likely to be at any position in the list. In the sorting
problem, the average case occurs when the input elements are randomly arranged.
Worst Case Complexity
Worst case of a given algorithm is considered when the resource usage is at its maximum.
The worst-case complexity of the algorithm is the function defined by the maximum
number of steps taken on any instance of size n.
In the worst case analysis, we calculate upper bound on running time of an
algorithm.
We must know the case that causes maximum number of operations to be
executed.
For Linear Search, the worst case happens when the element to be searched is not
present in the array. In sorting problem the worst case occurs when the input
elements are sorted in reverse order.
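The three cases can be illustrated with a small comparison-counting linear search (a sketch for illustration; the comparison counter is not part of the algorithm itself):

```python
def linear_search(lst, target):
    """Return (index, comparisons); index is -1 if target is absent."""
    comparisons = 0
    for i, value in enumerate(lst):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

# Best case: target is the first element, so only 1 comparison is made.
# Worst case: target is absent, so all n elements are compared.
```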
Suppose we also know that an addition takes no more than t_a microseconds, a
multiplication takes no more than t_m microseconds, and an assignment takes no more
than t_s microseconds, where t_a, t_m, t_s are constants depending on the machine used.
Addition, multiplication and assignment can all therefore be considered as
elementary operations.
The total time t required by an algorithm can be bounded by,
t ≤ a·t_a + m·t_m + s·t_s
t ≤ max(t_a, t_m, t_s) × (a + m + s)
That is t is bounded by a constant multiple of the number of elementary operations
to be executed.
(4) Write an algorithm / method for Insertion Sort. Analyze the algorithm and
find its time complexity.
Insertion Sort works by inserting an element into its appropriate position during each
iteration.
Insertion sort works by comparing an element to all its previous elements until an
appropriate position is found.
Whenever an appropriate position is found, the element is inserted there by shifting the
remaining elements one position to the right.
Algorithm
Procedure insert (T[1…n])              cost   times
for i ← 2 to n do                      C1     n
    x ← T[i]                           C2     n−1
    j ← i − 1                          C3     n−1
    while j > 0 and x < T[j] do        C4     Σ_{i=2}^{n} t_i
        T[j+1] ← T[j]                  C5     Σ_{i=2}^{n} (t_i − 1)
        j ← j − 1                      C6     Σ_{i=2}^{n} (t_i − 1)
    T[j+1] ← x                         C7     n−1
where t_i denotes the number of times the while-loop test is executed for that value of i.
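A runnable equivalent of the pseudocode (Python, 0-based indices):

```python
def insertion_sort(T):
    """Sort list T in place by inserting each element into the sorted prefix."""
    for i in range(1, len(T)):           # for i <- 2 to n
        x = T[i]
        j = i - 1
        while j >= 0 and x < T[j]:       # while j > 0 and x < T[j]
            T[j + 1] = T[j]              # shift the larger element right
            j -= 1
        T[j + 1] = x
    return T
```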
Analysis
The running time of an algorithm on a particular input is the number of primitive
operations or "steps" executed.
(5) Write an algorithm / method for Selection Sort. Analyze the algorithm and find its
time complexity.
Analysis
The running time of an algorithm on a particular input is the number of primitive
operations or "steps" executed.
A constant amount of time is required to execute each line of our pseudo code. One
line may take a different amount of time than another line, but we shall assume that
each execution of the ith line takes time ci , where ci is a constant.
The running time of the algorithm is the sum of running times for each statement
executed; a statement that takes ci steps to execute and is executed n times will
contribute cin to the total running time.
Let the time complexity of selection sort be T(n); then
T(n) = C1·n + C2(n−1) + C3·Σ_{i=1}^{n−1} (n−i+1) + C4·Σ_{i=1}^{n−1} (n−i) + C5·Σ_{i=1}^{n−1} (n−i) + C6(n−1) + C7(n−1)
     = C1n + C2n + C6n + C7n + C3·Σ(n−i+1) + C4·Σ(n−i) + C5·Σ(n−i) − C2 − C6 − C7
Since each summation is of order n², T(n) has the form an² + bn + c, i.e. T(n) ∈ Θ(n²).
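The selection sort procedure itself does not appear in this copy of the notes; a standard formulation matching the cost structure above can be sketched as:

```python
def selection_sort(T):
    """Sort T in place: repeatedly select the minimum of the unsorted suffix."""
    n = len(T)
    for i in range(n - 1):               # outer loop: n - 1 passes
        min_j = i
        for j in range(i + 1, n):        # scan the unsorted suffix
            if T[j] < T[min_j]:
                min_j = j
        T[i], T[min_j] = T[min_j], T[i]  # move the minimum into position i
    return T
```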
For all n to the right of n0, the value of f(n) lies at or above c1·g(n) and at or
below c2·g(n). In other words, for all n ≥ n0, the value of f(n) is equal to g(n) to
within a constant factor.
We say that g(n) is an asymptotically tight bound for f(n).
T(k) ∈ O(log k)
The time complexity of Euclid's algorithm is in O(log k).
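The algorithm this bound refers to is Euclid's gcd algorithm, which is not reproduced in this excerpt; a minimal sketch:

```python
def gcd(m, n):
    """Euclid's algorithm: repeatedly replace (m, n) by (n, m mod n)."""
    while n != 0:
        m, n = n, m % n
    return m
```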
(8) Compare Iterative and Recursive algorithm to find out Fibonacci series.
Iterative Algorithm for Fibonacci series
Function fibiter(n)
i ← 1; j ← 0
for k ← 1 to n do
j ← i + j
i ← j − i
return j
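A direct Python translation of fibiter:

```python
def fibiter(n):
    """Iterative Fibonacci: after k iterations, i and j hold F(k-1) and F(k)."""
    i, j = 1, 0
    for _ in range(n):
        j = i + j        # j becomes F(k)
        i = j - i        # i becomes F(k-1) (the old value of j)
    return j
```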
Analysis
If we count all arithmetic operations at unit cost; the instructions inside for loop take
constant time.
Let the time taken by these instructions be bounded above by some constant c.
The time taken by the for loop is bounded above by n times this constant, i.e. nc.
Since the instructions before and after this loop take negligible time, the algorithm
takes time in Θ (n).
Hence the time complexity of iterative Fibonacci algorithm is Θ (n).
If the value of n is large, then the time needed to execute an addition increases
linearly with the length of the operands.
At the end of the kth iteration, the values of i and j will be f(k−1) and f(k).
By de Moivre's formula, the size of f(k) is in Θ(k).
So the kth iteration takes time in Θ(k). Let c be some constant such that this time is
bounded above by ck for all k ≥ 1.
The time taken by the fibiter algorithm is bounded above by
Σ_{k=1}^{n} ck = c·n(n+1)/2, which is in Θ(n²).
Hence the time complexity of the iterative Fibonacci algorithm for large values of n is
Θ(n²).
Analysis
Let T(n) be the time taken by a call on fibrec(n).
The recurrence equation for the above algorithm is
T(n) = T(n−1) + T(n−2) + Θ(1) for n ≥ 2,
whose solution grows exponentially in n.
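The fibrec procedure referred to above is not shown in these notes; its standard form is:

```python
def fibrec(n):
    """Recursive Fibonacci: T(n) = T(n-1) + T(n-2) + Theta(1), exponential time."""
    if n < 2:
        return n
    return fibrec(n - 1) + fibrec(n - 2)
```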
Algorithm
Hanoi(N, Src, Aux, Dst)
    if N is 0
        exit
    else
        Hanoi(N − 1, Src, Dst, Aux)
        Move from Src to Dst
        Hanoi(N − 1, Aux, Src, Dst)
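A runnable version of the Hanoi procedure that records the moves (parameter names mirror the pseudocode):

```python
def hanoi(n, src, aux, dst, moves=None):
    """Move n rings from src to dst via aux; return the list of moves."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, src, dst, aux, moves)   # move n-1 rings out of the way
        moves.append((src, dst))             # move the largest ring
        hanoi(n - 1, aux, src, dst, moves)   # move the n-1 rings on top of it
    return moves
```

For n rings this produces 2ⁿ − 1 moves.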
Analysis :
The number of movements of a ring required in the Hanoi problem is given by the
recurrence equation t(n) = 2t(n−1) + 1 for n ≥ 1, with t(0) = 0; its solution is
t(n) = 2ⁿ − 1.
(10) Heaps
A heap data structure is a binary tree with the following properties.
1. It is a complete binary tree; that is each level of the tree is completely filled,
except possibly the bottom level. At this level it is filled from left to right.
2. It satisfies the heap order property; the data item stored in each node is greater
than or equal to the data items stored in its children.
Example:
[Figure: a max-heap drawn as a complete binary tree; numbering the nodes 1 to 10 level
by level, the heap is stored in the array 23 17 14 6 13 10 1 5 7 12]
There are two kinds of binary heaps: max-heaps and min-heaps. In both kinds, the
values in the nodes satisfy a heap property.
In a max-heap, the max-heap property is that for every node i other than the root,
A[PARENT(i)] ≥ A[i] ,
That is, the value of a node is at most the value of its parent. Thus, the largest
element in a max-heap is stored at the root, and the sub-tree rooted at a node
contains values no larger than that contained at the node itself.
A min-heap is organized in the opposite way; the min-heap property is that for
every node i other than the root, A[PARENT(i)] ≤ A[i] .
The smallest element in a min-heap is at the root.
For the heap-sort algorithm, we use max-heaps. Min-heaps are commonly used in
priority Queues.
Viewing a heap as a tree, we define the height of a node in a heap to be the number
of edges on the longest simple downward path from the node to a leaf, and we
define the height of the Heap to be the height of its root.
The height of an n-element heap based on a binary tree is ⌊lg n⌋.
The basic operations on heap run in time at most proportional to the height of the
tree and thus take O(lg n) time.
Since a heap of n elements is a complete binary tree of height k, there is one node on
level k (the root), two nodes on level k−1, and so on.
There are 2^(k−1) nodes on level 1 and at least 1 and not more than 2^k nodes on level
0.
Building a heap
For the general case of converting a complete binary tree to a heap, we begin at the
last node that is not a leaf; apply the “percolate down” routine to convert the sub-
tree rooted at this current root node to a heap.
We then move onto the preceding node and percolate down that sub-tree.
We continue on in this manner, working up the tree until we reach the root of the
given tree.
We can use the procedure MAX-HEAPIFY in a bottom-up manner to convert an array
A[1,…,n], where n = length[A], into a max-heap.
The elements in the sub-array A[(⌊n/2⌋+1),…, n] are all leaves of the tree, and so
each is a 1-element heap to begin with.
The procedure BUILD-MAX-HEAP goes through the remaining nodes of the tree and
runs MAXHEAPIFY on each one.
Algorithm
BUILD-MAX-HEAP(A)
heap-size[A] ← length[A]
for i ← ⌊length[A]/2⌋ downto 1
do MAX-HEAPIFY(A, i)
Analysis
Each call to MAX-HEAPIFY costs O(lg n) time, and there are O(n) such calls. Thus, the
running time is at most O(n lg n).
Analysis
The running time of MAX-HEAPIFY on a sub-tree of size n rooted at given node i is
the Θ(1) time to fix up the relationships among the elements A[i], A[LEFT(i)], and
A[RIGHT(i)], plus the time to run MAX-HEAPIFY on a sub-tree rooted at one of the
children of node i.
The children's sub-trees can have size of at most 2n/3 and the running time of MAX-
HEAPIFY can therefore be described by the recurrence
T (n) ≤ T(2n/3) + Θ(1)
The solution to this recurrence is T (n) = O ( lg n).
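MAX-HEAPIFY and BUILD-MAX-HEAP can be sketched in Python as follows (0-based indices, so the children of node i are 2i+1 and 2i+2):

```python
def max_heapify(A, i, heap_size):
    """Percolate A[i] down until the subtree rooted at i is a max-heap."""
    left, right = 2 * i + 1, 2 * i + 2
    largest = i
    if left < heap_size and A[left] > A[largest]:
        largest = left
    if right < heap_size and A[right] > A[largest]:
        largest = right
    if largest != i:
        A[i], A[largest] = A[largest], A[i]
        max_heapify(A, largest, heap_size)

def build_max_heap(A):
    """Convert A into a max-heap bottom-up, starting at the last non-leaf node."""
    for i in range(len(A) // 2 - 1, -1, -1):
        max_heapify(A, i, len(A))
    return A
```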
Since the maximum element of the array is stored at the root A[1], it can be put into
its correct final position by exchanging it with A[n].
If we now "discard" node n from the heap (by decrementing heap-size[A]), we
observe that A[1,…, (n - 1)] can easily be made into a max-heap.
The children of the root remain max-heaps, but the new root element may violate
the max-heap property.
All that is needed to restore the max heap property, however, is one call to MAX-
HEAPIFY(A, 1), which leaves a max-heap in A[1,…, (n - 1)].
The heapsort algorithm then repeats this process for the max-heap of size n – 1
down to a heap of size 2.
Algorithm
HEAPSORT(A)
BUILD-MAX-HEAP(A)
for i ← length[A] downto 2
do exchange A[1] ↔ A[i]
heap-size[A] ← heap-size[A] - 1
MAX-HEAPIFY(A, 1)
Analysis
The HEAPSORT procedure takes time O(n lg n), since the call to BUILD-MAX-HEAP takes
time O(n) and each of the n - 1 calls to MAX-HEAPIFY takes time O(lg n).
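A self-contained Python sketch of HEAPSORT (0-based indices; the percolate-down step is written iteratively):

```python
def heapsort(A):
    """Sort A in place: build a max-heap, then repeatedly extract the maximum."""
    def sift_down(i, heap_size):
        # Percolate A[i] down within A[:heap_size].
        while True:
            left, right, largest = 2 * i + 1, 2 * i + 2, i
            if left < heap_size and A[left] > A[largest]:
                largest = left
            if right < heap_size and A[right] > A[largest]:
                largest = right
            if largest == i:
                return
            A[i], A[largest] = A[largest], A[i]
            i = largest

    for i in range(len(A) // 2 - 1, -1, -1):   # BUILD-MAX-HEAP: O(n)
        sift_down(i, len(A))
    for i in range(len(A) - 1, 0, -1):         # n - 1 extractions: O(n lg n)
        A[0], A[i] = A[i], A[0]                # move current maximum to the end
        sift_down(0, i)                        # restore the heap on A[:i]
    return A
```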
Analysis
To analyze the running time of a while loop, we must find a function of the variables
involved whose value decreases each time round the loop.
Here it is d = j − i + 1, the number of elements of T still under consideration.
Initially d = n.
We conclude that in every case d′ ≤ d/2, which means that the value of d is at least
halved each time round the loop.
Let d_k denote the value of j − i + 1 at the end of the kth trip around the loop; d_0 = n.
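The while loop being analyzed is that of binary search on a sorted array T; a minimal sketch in which d = j − i + 1 is at least halved on every trip round the loop:

```python
def binary_search(T, x):
    """Return an index of x in sorted list T, or -1 if x is absent."""
    i, j = 0, len(T) - 1
    while i <= j:                # d = j - i + 1 elements still under consideration
        k = (i + j) // 2
        if T[k] == x:
            return k
        if T[k] < x:
            i = k + 1            # discard the left half: d' <= d/2
        else:
            j = k - 1            # discard the right half: d' <= d/2
    return -1
```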