Foundation of Analysis of Algorithm (4 Hrs): Unit 1
Instructor: Tekendra Nath Yogi
[email protected]
College Of Applied Business And Technology
Contents
• Introduction to algorithm
• Properties of algorithms
• Types of Analysis
• RAM model
– Example problems: find out simple interest based on time, principal, and interest rate; graduate from TU.
[Figure: relationship between a problem and the algorithm that solves it]
4. Correctness: A correct set of output values must be produced for each set of input values.
• Analysis:
– Predicting the resources ( memory, time) required to run the algorithm.
– Multiple algorithms exist for solving the same problem. E.g., sorting.
– Analysis helps us determine which of them is efficient in terms of the time and space consumed.
– Choose the most efficient algorithm for problem solving.
• Here, algorithm 1 always makes m additions, whatever the value of m, whereas algorithm 2 always makes min(m, n) additions.
1. Space complexity
2. Time complexity
– Average case.
– The best case gives a lower bound on the running time of the algorithm for any instance of the input(s).
– This indicates that the algorithm can never have a lower running time than the best case for a particular class of problems.
– The worst case gives an upper bound on the running time of the algorithm for all instances of the input(s).
– This ensures that no input can exceed the running time limit posed by the worst-case complexity.
– Generally, we seek upper bounds on the running time, because everybody likes a guarantee.
Figure: Graphical representation of Best case, worst case and average case
• The above algorithm searches for a given value K in a given array A[0..n-1]
and returns the index of the first element in A that matches K, or returns -1 if
there are no matching elements.
In the above algorithms for swapping two numbers, algorithm 1 uses three variables of integer
type, so it takes 3 * 2 = 6 bytes of memory, whereas algorithm 2 uses only two variables a and b of
integer type, so it takes 2 * 2 = 4 bytes of memory.
But the time complexity of algorithm 2 is higher than that of algorithm 1: algorithm 1 performs
just five assignment operations, while algorithm 2 performs the same five assignments plus three
additional arithmetic operations. This shows the space-time tradeoff.
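As a hedged illustration of the two swap strategies contrasted above (the exact operation counts on the slide refer to the author's versions; the names below are assumptions, and overflow is ignored for simplicity):

#include <stdio.h>

/* Approach 1: swap using a third (temporary) variable: more space, fewer operations. */
void swap_with_temp(int *a, int *b) {
    int temp = *a;   /* third integer variable */
    *a = *b;
    *b = temp;
}

/* Approach 2: swap using arithmetic only: less space, extra arithmetic operations. */
void swap_without_temp(int *a, int *b) {
    *a = *a + *b;    /* only two integers, but three extra additions/subtractions */
    *b = *a - *b;
    *a = *a - *b;
}

int main(void) {
    int x = 3, y = 7;
    swap_with_temp(&x, &y);
    printf("%d %d\n", x, y);     /* prints: 7 3 */
    swap_without_temp(&x, &y);
    printf("%d %d\n", x, y);     /* prints: 3 7 */
    return 0;
}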
Algorithm Analysis-- Types of analysis
– Posterior (experimental) analysis: implement the algorithm, run it on sample inputs, and measure the running time.
– Plot the results.
[Figure: measured running time (ms) plotted against input size]
• Limitations:
– Results may not be indicative of the running time on other inputs not included in the
experiment.
– In order to compare two algorithms, the same hardware and software environments must
be used.
So, experimental (posterior) analysis is not good!
Algorithm Analysis-- Prior(Theoretical) Analysis
• Means doing an analysis of an algorithm before running it on the
system.
– Uses a high-level description of the algorithm instead of an implementation
– By Asymptotic analysis
Algorithm Analysis– RAM Model
• The RAM model is the base model for analyzing any algorithm; it allows design and analysis in a machine-independent scenario.
• This model assumes :
– All algorithms are assumed to run on a machine with random access memory and a single processor, so that, in the RAM model, instructions are executed one after another, with no concurrent operations.
– Each basic operation (+, -, *, =, etc.) takes 1 step; loops and subroutines are not basic operations.
– Each memory reference is 1 step.
• But even with these weaknesses, the RAM model is not so bad, because
– we need a comparison, not an absolute analysis, of any algorithm, and
– we have to deal with large inputs, not small ones.
• If the time t needed for one operation is known, then we can state:
– Algorithm X takes (2n² + 100n)·t time units
– Algorithm B takes (0.39n³ + n)·t time units
• But when solving a larger problem, i.e. larger n, the dominating (highest-order) term determines the running time.
• a(n) = ½ n + 4
– Leading term: ½ n
– Example
• f(n) = 2n² + 100n
– Although the actual times will differ because of the different constants, the growth rates of the running times are the same.
– Compared with another algorithm whose leading term is n³, the difference in growth rate is a much more dominating factor.
9. Print avg 1
Total steps = 6n + 7
Therefore, T(n) = O(n)
Analysis of algorithm– Example2
• Analyze the following algorithm to find the smallest element in an array
• Algorithm: Time
1. Set min= A[0] 1
2. Min_index= 0 1
3. For(i=1; i< n; i++) (1+ n + n- 1)
1. If(A[i] < A[min_index]) n-1
2. Set min_index=i n-1
4. Return(A[min_index]) 1
T(n) = 4n+ 1
= O(n)
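A minimal C rendering of the find-the-smallest-element pseudocode above (a sketch; the separate min variable of the pseudocode is folded into min_index):

/* Returns the index of the smallest element in A[0..n-1]; assumes n >= 1. */
int find_min_index(const int A[], int n) {
    int min_index = 0;                    /* steps 1-2: start with A[0] */
    for (int i = 1; i < n; i++) {         /* step 3: scan the rest of the array */
        if (A[i] < A[min_index]) {        /* step 3.1: found a smaller element */
            min_index = i;                /* step 3.2 */
        }
    }
    return min_index;                     /* step 4 */
}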
• Statements time(Repetition)
1. Sum =0 1
4. Return(sum) 1
= O(n)
• Steps Repetition
1. Sum = 0 1
4. Sum++ 1*(n*n), so T(n) = O(n²)
• Only after you have determined the efficiency of the various algorithms
will you be able to make a well informed decision.
• Nevertheless, there are some techniques that are often useful, such as:
• If the condition of the if-else statement is true, then the if part is executed; otherwise, the else part is executed.
• Note: Similarly, for the loop below, the worst-case rate of growth is also O(log n); that is, the same discussion holds for a decreasing sequence as well:
i = i / 2;
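A minimal C sketch of the kind of loop being described, whose control variable is halved each iteration and which therefore runs O(log n) times:

/* The body executes about log2(n) times because i is halved on every pass. */
for (int i = n; i > 1; i = i / 2) {
    /* constant-time work here */
}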
• Explain worst case, best case and average case of algorithm analysis with
an example.
• Describe the best case and worst case complexity of an algorithm. Write
algorithm for insertion sort and estimate the best and worst case
complexity.
• Solving recurrences:
– substitution method
– Master method.
• Why?
• They give a simple characterization of an algorithm’s efficiency.
1) Big O Notation
2)Big Ω Notation
3) Big Θ Notation
• Means that f(n) belongs to O(g(n)), i.e., the growth rate of the algorithm belongs to the set of growth rates that are bounded above by c·g(n).
• Let f(n) be the growth rate of the algorithm’s efficiency and g(n) be an arbitrary
function.
• Means that f(n) belongs to Ω(g(n)), i.e., the growth rate of the algorithm belongs to the set of growth rates that are bounded below by c·g(n).
• From the figure we can say that for all n at or to the right of n0, the value of
f(n ) is on or above cg(n).
• Let f(n) be the growth rate of the algorithm’s efficiency and g(n) be an
arbitrary function.
• Means that f(n) belongs to Θ(g(n)), i.e., the growth rate of the algorithm belongs to the set of growth rates that are bounded above by c2·g(n) and below by c1·g(n).
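The three notations can be stated formally as follows (a standard summary, consistent with the descriptions above, not copied from the slides):

\[
\begin{aligned}
O(g(n))      &= \{\, f(n) : \exists\, c > 0,\ n_0 > 0 \text{ such that } 0 \le f(n) \le c\, g(n) \ \forall n \ge n_0 \,\}\\
\Omega(g(n)) &= \{\, f(n) : \exists\, c > 0,\ n_0 > 0 \text{ such that } 0 \le c\, g(n) \le f(n) \ \forall n \ge n_0 \,\}\\
\Theta(g(n)) &= \{\, f(n) : \exists\, c_1, c_2 > 0,\ n_0 > 0 \text{ such that } 0 \le c_1 g(n) \le f(n) \le c_2 g(n) \ \forall n \ge n_0 \,\}
\end{aligned}
\]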
are as follows.
• where, in each equation above, the logarithm base is 2, not 10. Computer scientists find 2 to be the most natural base for logarithms because so many algorithms and data structures involve splitting a problem into two parts.
– The first term is a1, the common difference is d, and the number of terms is n. The sum of an arithmetic series is found by multiplying the number of terms by the average of the first and last terms.
– Formula: Sn = n(a1 + an)/2
– or, equivalently, Sn = n(2a1 + (n - 1)d)/2
Mathematical Foundation
• Geometric Series
– A series such as 3 + 1 + 1/3 + 1/9 + 1/27 + 1/81, which has a constant ratio between terms. The first term is a1, the common ratio is r, and the number of terms is n.
– Formula: Sn = a1(1 - r^n)/(1 - r), for r ≠ 1.
– A recurrence relation expresses the running time of an algorithm in terms of its value on smaller inputs (as arises naturally in divide and conquer algorithms).
– Solving the recurrence gives the complexity (time or space) of the algorithm.
– We can also express the solution as big oh and big omega notation because big
theta is the combination of both.
• Recurrences as inequalities:
– T(n) <= 2T(n/2) + O(n)
• Because such recurrence relation states only an upper bound
on T(n), so we express its solution using big oh notation.
2. Recurrence relation for the algorithm that divides each sub-problems into
unequal sizes:
E.g., for a recursive algorithm that divides the problem into 2/3 and 1/3 of the original size, if the divide and combine steps take linear time, then the running time of the algorithm can be expressed by the following recurrence relation:
T(n) = T(2n/3) + T(n/3) + O(n)
Note: We neglect certain technical details (such as floors, ceilings, and boundary conditions) when we state and solve recurrences. After omitting these details, the recurrence can be solved by:
– Substitution method
– Master method.
• The series obtained by expansion is summed to obtain the big-oh estimate of the recurrence.
2. Another way to make a good guess is to prove loose upper and lower
bounds on the recurrence and then reduce the range of uncertainty.
– For example, we might start with a lower bound of T(n) = Ω(n) for the recurrence T(n) = 2T(n/2) + O(n),
– since we have the term n in the recurrence, and we can prove an initial upper bound of T(n) = O(n²).
– Then, we can gradually lower the upper bound and raise the lower bound until we converge on the correct, asymptotically tight solution of T(n) = Θ(n log n).
• There are times when we can correctly guess at an asymptotic bound on the
solution of recurrence, but somehow the math does not seem to work out in
the induction.
• The cost of dividing the problem and combining the results of the sub-
problems is described by the function f (n).
• The master method depends on the following theorem called the master
theorem.
Solving Recurrence Relations-- Master Method
• The master theorem gives the asymptotic solution of recurrences of the form T(n) = aT(n/b) + f(n) for large sizes of input.
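As a reminder, the standard statement of the master theorem (summarized here, since the slide's own statement is not reproduced in this text) for T(n) = aT(n/b) + f(n) with a ≥ 1 and b > 1 is:

\[
T(n) =
\begin{cases}
\Theta\!\left(n^{\log_b a}\right) & \text{if } f(n) = O\!\left(n^{\log_b a - \varepsilon}\right) \text{ for some } \varepsilon > 0,\\
\Theta\!\left(n^{\log_b a} \log n\right) & \text{if } f(n) = \Theta\!\left(n^{\log_b a}\right),\\
\Theta\!\left(f(n)\right) & \text{if } f(n) = \Omega\!\left(n^{\log_b a + \varepsilon}\right) \text{ for some } \varepsilon > 0 \text{ and } a\,f(n/b) \le c\,f(n) \text{ for some } c < 1.
\end{cases}
\]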
• Only considers growth rate and eliminates constants and lower order terms.
– Potential method.
• 2.3. Sorting Algorithms: Bubble, Selection, and Insertion Sort and their
Analysis
Algorithm for GCD
• The GCD of two numbers is the largest number that divides both numbers
without leaving a remainder. E.g. GCD (15,20) = 5.
• The most efficient iterative algorithm to find GCD of two natural numbers
is Euclid’s algorithm.
– Solution: Here, A = 270 and B = 192, so A != 0 and B != 0.
R = 270 % 192 = 78
Iteration 1: A = B = 192, B = R = 78, R = A % B = 192 % 78 = 36
Iteration 2: A = B = 78, B = R = 36, R = A % B = 78 % 36 = 6
Iteration 3: A = B = 36, B = R = 6, R = A % B = 36 % 6 = 0
Iteration 4: A = B = 6, B = R = 0. Since B = 0, GCD(270, 192) = A = 6.
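A minimal C sketch of the iterative Euclid's algorithm traced above (the function name is an assumption; inputs are assumed nonnegative and not both zero):

/* Iterative Euclid's algorithm: repeatedly replace (a, b) by (b, a % b) until b is 0. */
int gcd(int a, int b) {
    while (b != 0) {
        int r = a % b;   /* remainder, as in the trace above */
        a = b;
        b = r;
    }
    return a;            /* e.g. gcd(270, 192) returns 6 */
}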
– Note that complexities are always given in terms of the sizes of inputs,
in this case the number of digits.
F(n) = F(n-1) + F(n-2)
• Why searching?
– We know that today's computers store a lot of information.
• The sequential search algorithm searches for a given value, called the key, in a given array from left to right and returns the index of the element if it is found; otherwise it returns "Not Found".
Sequential Search algorithm
• Algorithm:
– Input: array A [0.. n-1], array size n and search key k
– The worst case is when the value is not in the list (or occurs only once at the
end of the list), in which case n comparisons are needed.
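A minimal C sketch of the sequential search described above (returning -1 for "Not Found", as in the earlier description of the algorithm):

/* Scans A[0..n-1] from left to right, comparing each element with the key k. */
int sequential_search(const int A[], int n, int k) {
    for (int i = 0; i < n; i++) {
        if (A[i] == k) {
            return i;    /* index of the first matching element */
        }
    }
    return -1;           /* not found: the worst case, after n comparisons */
}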
Comparison Sorting Algorithms
– Bubble sort
• Insertion Sort
– In each pass, adjacent elements are compared with each other.
– If the element at the lower index is greater than the element at the higher index, the two are swapped.
– This process continues till the list of unsorted elements is exhausted.
• For example,
(c) Compare 52 and 27. Since 52 > 27, swapping is done. 29, 30, 27, 52,
19, 54, 63, 87
(d) Compare 52 and 19. Since 52 > 19, swapping is done. 29, 30, 27, 19,
52, 54, 63, 87
After the end of the third pass, the third largest element is placed at the
third highest index of the array. All the other elements are still unsorted.
(b) Compare 30 and 27. Since 30 > 27, swapping is done. 29, 27, 30, 19,
52, 54, 63, 87
(c) Compare 30 and 19. Since 30 > 19, swapping is done. 29, 27, 19, 30,
52, 54, 63, 87
• After the end of the fourth pass, the fourth largest element is placed at the
fourth highest index of the array. All the other elements are still unsorted.
(a) Compare 29 and 27. Since 29 > 27, swapping is done. 27, 29, 19, 30,
52, 54, 63, 87
(b) Compare 29 and 19. Since 29 > 19, swapping is done. 27, 19, 29, 30,
52, 54, 63, 87
• After the end of the fifth pass, the fifth largest element is placed at the fifth
highest index of the array. All the other elements are still unsorted.
(a) Compare 27 and 19. Since 27 > 19, swapping is done. 19, 27, 29, 30,
52, 54, 63, 87
• After the end of the sixth pass, the sixth largest element is placed at the
sixth largest index of the array. All the other elements are still unsorted.
• Pass 7:
After the end of the seventh pass, the input array becomes a sorted array.
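A minimal C sketch of the bubble sort procedure traced above (variable names are assumptions):

/* In pass p, adjacent elements are compared and swapped, so the p-th largest
   element "bubbles" into its correct position near the end of the array. */
void bubble_sort(int A[], int n) {
    for (int pass = 1; pass <= n - 1; pass++) {       /* N-1 passes in total */
        for (int i = 0; i < n - pass; i++) {          /* N-pass comparisons in this pass */
            if (A[i] > A[i + 1]) {                    /* out of order, so swap */
                int temp = A[i];
                A[i] = A[i + 1];
                A[i + 1] = temp;
            }
        }
    }
}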
Bubble sort --Analysis
• Analysis:
– The complexity of any sorting algorithm depends upon the number of
comparisons. In bubble sort, there are N–1 passes in total. In the first
pass, N–1 comparisons are made to place the highest element in its
correct position. Then, in Pass 2, there are N–2 comparisons and the
second highest element is placed in its position. Therefore, to compute
the complexity of bubble sort, we need to calculate the total number of
comparisons. It can be given as:
T(n) = (n – 1) + (n – 2) + (n – 3) + ..... + 3 + 2 + 1
T(n) = n(n – 1)/2
T(n) = n²/2 – n/2 = O(n²)
– Therefore, the complexity of bubble sort algorithm is O(n2).
Insertion Sort--Idea
• Idea: Insertion sort inserts each item into its proper place in the final sorted
list as follows.
– During each iteration of the algorithm, the first element in the unsorted
set is picked up and inserted into the correct position in the sorted set.
• For example,
– Furthermore, every iteration of the inner loop will have to shift the
elements of the sorted set of the array before inserting the next element.
– Therefore, in the worst case, insertion sort has a quadratic running time
(i.e., O(n2)).
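A minimal C sketch of insertion sort following the idea above (names are assumptions):

/* A[0..i-1] is the sorted set; A[i] is picked up and inserted into its correct
   position by shifting larger sorted elements one place to the right. */
void insertion_sort(int A[], int n) {
    for (int i = 1; i < n; i++) {
        int key = A[i];                  /* first element of the unsorted set */
        int j = i - 1;
        while (j >= 0 && A[j] > key) {   /* shift elements of the sorted set */
            A[j + 1] = A[j];
            j--;
        }
        A[j + 1] = key;                  /* insert the picked-up element */
    }
}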
– First find the smallest value in the array and place it in the first
position.
– Then, find the second smallest value in the array and place it in the
second position.
• For example,
• Note: Red is current min, Yellow is sorted list and Blue is current item.
Selection sort-- Algorithm
• Algorithm:
– Input: Array A and its size N
– Method:
– Therefore,
• T(n) = (n – 1) + (n – 2) + ... + 2 + 1 = n(n – 1)/2 = O(n²)
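A minimal C sketch of selection sort following the idea above (the function name is an assumption):

/* In pass i, the smallest value of the unsorted part A[i..n-1] is found
   and swapped into position i. */
void selection_sort(int A[], int n) {
    for (int i = 0; i < n - 1; i++) {
        int min_index = i;
        for (int j = i + 1; j < n; j++) {
            if (A[j] < A[min_index]) {
                min_index = j;           /* remember the current minimum */
            }
        }
        int temp = A[i];                 /* place the minimum at position i */
        A[i] = A[min_index];
        A[min_index] = temp;
    }
}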
• Trace the bubble sort algorithm for following list of data items: A = { 4, 55,
3, 2, 88, 1, 98, 43, 66,11,93,12, 4,76}.
• Trace the Insertion sort algorithm for following list of data items: A = {3, 4,
55, 11, 2, 34, 33, 23}.
• Trace the Selection sort algorithm for following list of data items: A = {1,
23, 21, 66, 22, 14, 98, 45, 78}.
• 3.2. Sorting Algorithms: Merge Sort and Analysis, Quick Sort and Analysis
(Best Case, Worst Case and Average Case), Heap Sort (Heapify, Build
Heap and Heap Sort Algorithms and their Analysis), Randomized Quick
sort and its Analysis.
– To calculate aⁿ, this algorithm breaks n into two parts i and j such that n = i + j.
– Therefore, from this illustration we can say that divide and conquer
approach is an efficient algorithm design approach.
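One common way to realize this idea is sketched below in C: the exponent is split roughly in half and the half power is computed once and reused (this particular split is an assumption, since the slide's illustration is not reproduced here):

/* Computes a^n by divide and conquer; the half power is computed once,
   so only O(log n) multiplications are needed. Assumes n >= 0. */
long long power(long long a, int n) {
    if (n == 0) return 1;                /* base case */
    long long half = power(a, n / 2);    /* i = n/2 */
    if (n % 2 == 0)
        return half * half;              /* n = i + i */
    else
        return half * half * a;          /* n = i + i + 1 */
}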
• Why searching?
– We know that today's computers store a lot of information.
– Compare the element K with the element at mid position. If it is same, the
searching element is found so, return the index of that element.
– Otherwise, if the element K is smaller than the mid element then repeat the
divide and search process on the left half of the list otherwise on the right half
of the list.
– This process continues until either the element match or the list size becomes
1(list with single element).
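A minimal iterative C sketch of the binary search idea above (assuming the array is sorted in increasing order; names are assumptions):

/* Repeatedly halves the search range [low, high] until the key is found
   or the range becomes empty; returns the index of k, or -1 if absent. */
int binary_search(const int A[], int n, int k) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;   /* middle of the current range */
        if (A[mid] == k)
            return mid;                     /* key found */
        else if (k < A[mid])
            high = mid - 1;                 /* search the left half */
        else
            low = mid + 1;                  /* search the right half */
    }
    return -1;                              /* key not present */
}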
• A[] = {2 , 5 , 7, 9 ,18, 45 ,53, 59, 67, 72, 88, 95, 101, 104}
• For key=2
Low High Mid Condition testing
0 13 6 Key<A[6]
0 5 2 Key<A[2]
• From the above algorithm we can say that the running time is given by the recurrence T(n) = T(n/2) + 1.
• Here a = 1 and b = 2, so n^(log_b a) = n⁰ = 1, and f(n) is also 1.
• This satisfies case 2 of the master theorem, so the solution of this recurrence relation is T(n) = O(log₂n).
• Conquer
– Sort the subsequences recursively using merge sort
– When the size of the sequences is 1 there is nothing more to do because, they
are sorted in themselves.
• Combine
– Merge the two sorted subsequences into a single sorted sequence. This requires extra storage to temporarily hold the merged sequence.
– The complexity of the merge operation is O(n), since it makes at most n comparisons to merge n elements.
– Therefore, the running time of merge sort is given by T(n) = 2T(n/2) + O(n)
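A minimal C sketch of merge sort as described above (the temporary buffer is the extra storage mentioned; helper names are assumptions):

#include <stdlib.h>

/* Merges the two sorted halves A[l..m] and A[m+1..r] using a temporary buffer: O(n). */
static void merge(int A[], int l, int m, int r) {
    int n = r - l + 1;
    int *tmp = malloc(n * sizeof *tmp);   /* extra storage for the merged sequence */
    int i = l, j = m + 1, k = 0;
    while (i <= m && j <= r)
        tmp[k++] = (A[i] <= A[j]) ? A[i++] : A[j++];
    while (i <= m) tmp[k++] = A[i++];
    while (j <= r) tmp[k++] = A[j++];
    for (k = 0; k < n; k++)
        A[l + k] = tmp[k];
    free(tmp);
}

/* Divide, sort each half recursively, then combine: T(n) = 2T(n/2) + O(n). */
void merge_sort(int A[], int l, int r) {
    if (l >= r) return;                   /* a sequence of size 1 is already sorted */
    int m = l + (r - l) / 2;
    merge_sort(A, l, m);
    merge_sort(A, m + 1, r);
    merge(A, l, m, r);
}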
– Conquer, and
– Combine
2. Partitioning around the pivot: Rearrange the elements in the array so,
that pivot is placed in its final position.
2. Set left marker L at beginning of array and right marker R at end of array
i.e., L = 0 and R = (n-1) for array of n element.
3. While(L<R) repeat
• Increment L (move the left marker right) while the element at that position is less than or equal to the pivot element, i.e., stop at the first element greater than the pivot.
• Decrement R (move the right marker left) while the element at that position is greater than the pivot element, i.e., stop at the first element less than or equal to the pivot.
Quick Sort Algorithm
• Quick sort Algorithm:
QuickSort(A, i, j)
{
    If(i < j)
    {
        p = Partition(A, i, j);
        QuickSort(A, i, p-1);
        QuickSort(A, p+1, j);
    }
}

• Partitioning Algorithm:
Partition(A, i, j)
{
    L = i
    R = j
    Pivot = A[i]
    While(L < R)
    {
        While(L <= j && A[L] <= Pivot)
        {
            L++
        }
        While(A[R] > Pivot)
        {
            R--
        }
        if(L < R)
            Swap(A[L], A[R])
    }
    Swap(A[i], A[R])   // places the pivot in its final position (the slide wrote swap(A[R], pivot))
    Return R
}
Quick Sort -- Example
Quick Sort Algorithm-- Analysis
• The running time of quick sort depends on whether the partitioning
is balanced or unbalanced, and this in turn depends on which
elements are used for partitioning.
• If the partitioning is balanced, the algorithm runs fast. If the
partitioning is unbalanced it can run slowly.
• The complexity of partitioning is O(n), because the outer while loop executes at most c·n times.
• Thus the quick sort algorithm's time complexity can be written as the recurrence relation:
T(n) = T(k-1) + T(n-k) + O(n),
• where k is the result of Partition(A, 1, n).
Quick Sort Algorithm-- Analysis
• Best Case:
– Occurs when the division is as balanced as possible at all times.
– So, we have
T(n) = 2T(n/2) + O(n), which solves to O(n log n).
– The recursion tree for the best case is as follows:
Quick Sort Algorithm-- Analysis
• Worst Case:
– Worst case occurs if the partition gives the pivot as first element or last
element at all the time i.e. k =1 or k = n.
– This happens, for example, when the elements are already completely sorted, so we have
T(n) = T(n-1) + O(n), which solves to O(n²).
– The recursion tree for the worst case is as follows:
Quick Sort -- Pros and Cons:
It is faster in practice than simpler algorithms such as bubble sort, selection sort, and insertion sort, and quick sort can be used to sort arrays of small, medium, or large size. On the flip side, quick sort is complex and massively recursive.
• Example2: Trace out the quick sort algorithm for the following array
• A[]={40,66,93,8,12,44,6,88,60}
• Example3: Trace out the quick sort algorithm for the following array
• A[]={16,7,6,13,14,1,8,25,55,32,45,37}
• Example4: Trace out the quick sort algorithm for the following array
• A[]={10,20,30,40,50,60,70,80,90, 100}
• This is the worst case for quick sort. So to improve the performance in
worst case, the randomized partitioning can be applied.
– The only change is to pick a random index k within the subarray and perform swap(A[i], A[k]) before partitioning.
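A hedged C sketch of this randomized partitioning step (the function name and the use of rand() are assumptions; the two-marker partition is the same scheme as in the earlier pseudocode):

#include <stdlib.h>

/* Pick a random pivot position k in [i, j], move it to the front, then partition
   with the usual two-marker scheme; returns the pivot's final index. */
int randomized_partition(int A[], int i, int j) {
    int k = i + rand() % (j - i + 1);          /* random index in [i, j] */
    int t = A[i]; A[i] = A[k]; A[k] = t;       /* swap(A[i], A[k]) as noted above */

    int pivot = A[i], L = i, R = j;
    while (L < R) {
        while (L <= j && A[L] <= pivot) L++;   /* move right past elements <= pivot */
        while (A[R] > pivot) R--;              /* move left past elements > pivot */
        if (L < R) { t = A[L]; A[L] = A[R]; A[R] = t; }
    }
    t = A[i]; A[i] = A[R]; A[R] = t;           /* place the pivot in its final position */
    return R;
}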
• Example2: Trace out the randomized quick sort algorithm for the following
array
• A[]={40,66,93,8,12,44,6,88,60}
• Example3: Trace out the randomized quick sort algorithm for the following
array
• A[]={16,7,6,13,14,1,8,25,55,32,45,37}
• Example4: Trace out the randomized quick sort algorithm for the following
array
• A[]={10,20,30,40,50,60,70,80,90, 100}
– Parent of A[i] = A[⌊i/2⌋] (using 1-based indexing)
– Heapsize[A] ≤ length[A]
[Figure: trace of MAX-HEAPIFY(A, 2, 10); first A[2] violates the max-heap property and is exchanged with A[4], then A[4] violates it and is exchanged with A[9].]
– Intuitively:
• It traces a path from the root to a leaf (longest path length = h).
– In this max heap, the node at index i = 2 does not hold the max-heap property.
– So we need to perform Heapify(A, 2, 10).
Now the node at i = 5 does not hold the max-heap property, so we need to perform the recursive call Heapify(A, 5, 10).
Now perform Heapify(A, 10, 10); this yields no changes in the max heap.
Algorithm: BUILD-MAX-HEAP(A)
1. n = length[A]
2. for i ← n/2 downto 1
3.    do MAX-HEAPIFY(A, i, n)
Example: A = {4, 1, 3, 2, 16, 9, 10, 14, 8, 7}
[Figure: the array drawn as a complete binary tree with node indices 1 to 10]
– n/2 = 5, so the loop starts at i = 5:
• Heapify(A, 5, 10): no violation of the max-heap property, so no change.
  A = {4, 1, 3, 2, 16, 9, 10, 14, 8, 7}
• Heapify(A, 4, 10): 2 is swapped with its larger child 14.
  A = {4, 1, 3, 14, 16, 9, 10, 2, 8, 7}
• Heapify(A, 3, 10): 3 is swapped with its larger child 10.
  A = {4, 1, 10, 14, 16, 9, 3, 2, 8, 7}
• Heapify(A, 2, 10): 1 is swapped with 16, then with 7.
  A = {4, 16, 10, 14, 7, 9, 3, 2, 8, 1}
• Heapify(A, 1, 10): 4 is swapped with 16, then with 14, then with 8, giving the final max-heap.
  A = {16, 14, 10, 8, 7, 9, 3, 2, 4, 1}
Analysis:
BUILD-MAX-HEAP(A)
1. n = length[A]
2. for i ← n/2 downto 1            O(n) iterations
3.    do MAX-HEAPIFY(A, i, n)      O(log n) per call
• Idea:
– Build a max-heap from the array
– Swap the root (the largest element) with the last element in the array
A={7, 4, 3, 1, 2}
MAX-HEAPIFY(A, 1, 2)
HEAPSORT(A)
{
1. BUILD-MAX-HEAP(A)                O(n)
2. for i ← length[A] downto 2
3.    do exchange A[1] ↔ A[i]       n-1 times
4.    MAX-HEAPIFY(A, 1, i - 1)      O(log n)
}
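A minimal C sketch of the heap sort procedure above, using 0-based indexing instead of the 1-based indexing of the pseudocode (an implementation choice, not from the slides):

/* Restores the max-heap property at index i within A[0..n-1]. */
static void max_heapify(int A[], int n, int i) {
    int largest = i;
    int left = 2 * i + 1, right = 2 * i + 2;     /* children in 0-based indexing */
    if (left < n && A[left] > A[largest]) largest = left;
    if (right < n && A[right] > A[largest]) largest = right;
    if (largest != i) {
        int temp = A[i]; A[i] = A[largest]; A[largest] = temp;
        max_heapify(A, n, largest);              /* continue down the affected subtree */
    }
}

/* BUILD-MAX-HEAP, then repeatedly move the root (the maximum) to the end. */
void heap_sort(int A[], int n) {
    for (int i = n / 2 - 1; i >= 0; i--)         /* build the max-heap: O(n) */
        max_heapify(A, n, i);
    for (int i = n - 1; i >= 1; i--) {           /* n-1 extractions */
        int temp = A[0]; A[0] = A[i]; A[i] = temp;
        max_heapify(A, i, 0);                    /* re-heapify the reduced heap: O(log n) */
    }
}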
• Example4: is the given array represents a heap? If not construct the min
heap and sort using heap sort.
– A[] ={ 23,17, 14, 6,13,10,1, 5, 7,12}
– Formally :
• Input: A set of n distinct elements and number i, with 1 <= i <= n.
• Output: The element from the set of elements, that is larger than
exactly (i-1) other elements of the given set.
RandomPartition (A, p, r)
swap(A[p],A[n]);
– = O(n²)
Question: Is there an algorithm that runs in linear time in the worst case?
Answer: Yes, due to Blum, Floyd, Pratt, Rivest, and Tarjan [1973].
– So, to guarantee linear running time in the worst case, the algorithm computes the median of the medians of small groups and uses that median of medians as the partitioning pivot.
Choosing the pivot
1. Divide the n elements into groups of 5. Find the median of each 5-element group by rote.
2. Recursively SELECT the median x of the ⌈n/5⌉ group medians to be the pivot.
[Figure: elements partitioned into those less than or equal to x and those greater than x]
Analysis
• At least half the group medians are ≤ x, which is at least ⌈⌈n/5⌉/2⌉ = ⌈n/10⌉ group medians.
• Therefore, at least 3⌈n/10⌉ elements are ≤ x.
• Similarly, at least 3⌈n/10⌉ elements are ≥ x. (Assume all elements are distinct.)
• We need an "at most" bound on each side of the partition for the worst-case runtime.
Therefore, the running time of the worst-case linear time selection algorithm is T(n) = O(n).
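Filling in the standard recurrence behind this claim (a sketch; the slide's own derivation is not reproduced here): partitioning around the median of medians leaves at most 7n/10 + 6 elements on either side of the pivot, so

\[
T(n) \le T\!\left(\lceil n/5 \rceil\right) + T\!\left(7n/10 + 6\right) + O(n),
\]

which solves to T(n) = O(n).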
Matrix Multiplication
• Given two n-by-n matrices A and B, our aim is to find the product of A and B as C, which is also an n-by-n matrix.
• Algorithm:
MatrixMultiply(A, B, C, n)
{
    for(i = 0; i < n; i++)
    {
        for(j = 0; j < n; j++)
        {
            C[i][j] = 0;          /* initialize the result cell before accumulating */
            for(k = 0; k < n; k++)
            {
                C[i][j] = C[i][j] + A[i][k] * B[k][j];
            }
        }
    }
}
• Analysis: The three nested loops each execute n times, so the running time of this algorithm is O(n³).
• Analysis:
– Now, we can write the recurrence relation for the divide and conquer version of this algorithm as
• T(n) = 1 ; if n ≤ 2
• T(n) = 8T(n/2) + O(n²) ; otherwise, which solves to O(n³).
• What is min-max problem? State and explain the divide and conquer approach
based algorithm for finding min and max of input array with suitable example.
• What is searching? State and explain the binary search algorithm with suitable
example. Also analyze the binary search algorithm in detail.
• What is sorting? Explain the merge sort algorithm with suitable example.
Analyze merge sort algorithm.
• How does the quick sort algorithm work? Explain with a suitable example. Analyze the
best case, average case, and worst case complexity of the quick sort algorithm.