UNIT – I
Analysis of Algorithms:
INTRODUCTION – ANALYZING CONTROL STRUCTURES – AVERAGE CASE ANALYSIS – SOLVING RECURRENCES
ALGORITHM
Informal Definition:
An Algorithm is any well-defined computational procedure that takes some value or set of
values as input and produces some value or set of values as output. Thus an algorithm is a sequence of
computational steps that transforms the input into the output.
Formal Definition:
An Algorithm is a finite set of instructions that, if followed, accomplishes a particular task.
In addition, all algorithms should satisfy the following criteria.
1. Input: zero or more quantities are externally supplied.
2. Output: at least one quantity is produced.
3. Definiteness: each instruction is clear and unambiguous.
4. Finiteness: if we trace out the instructions of the algorithm, it terminates after a finite number of steps.
5. Effectiveness: every instruction must be basic enough to be carried out.
Algorithm Specification:
3. Pseudo-code Method:
In this method, we typically describe algorithms as programs that resemble languages
like Pascal and Algol.
3. An identifier begins with a letter. The data types of variables are not explicitly declared.
Here link is a pointer to the record type node. Individual data items of a record can be
accessed with → and period.
While Loop:
while <condition> do
{
<statement-1>
.
.
.
<statement-n>
}
For Loop:
For variable: = value-1 to value-2 step step do
{
<statement-1>
.
.
.
<statement-n>
}
repeat-until:
repeat
<statement-1>
.
.
.
<statement-n>
until <condition>
Case statement:
Case
{
: <condition-1> : <statement-1>
.
.
.
: <condition-n> : <statement-n>
: else : <statement-n+1>
}
9. Input and output are done using the instructions read and write.
→ As an example, the following algorithm finds and returns the maximum of 'n' given numbers:
1. algorithm Max(A,n)
2. // A is an array of size n
3. {
4. Result := A[1];
5. for I:= 2 to n do
6. if A[I] > Result then
7. Result :=A[I];
8. return Result;
9. }
In this algorithm (named Max), A and n are procedure parameters; Result and i are local variables.
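As a hedged illustration (not part of the original listing), the same procedure can be written in C; the function name, 0-based indexing and the small driver here are assumptions made for this sketch:

#include <stdio.h>

/* Return the maximum of the n values stored in a[0..n-1].
   Mirrors algorithm Max above, shifted to C's 0-based indexing. */
double max_of(const double a[], int n)
{
    double result = a[0];
    for (int i = 1; i < n; i++)
        if (a[i] > result)
            result = a[i];
    return result;
}

int main(void)
{
    double a[] = {3.0, 8.0, 1.0, 5.0, 2.0};
    printf("%g\n", max_of(a, 5));   /* prints 8 */
    return 0;
}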
→ Next we present two examples to illustrate the process of translating a problem into an algorithm.
Selection Sort:
Suppose we must devise an algorithm that sorts a collection of n >= 1 elements of arbitrary type.
( From those elements that are currently unsorted ,find the smallest & place it next in the
sorted list.)
Algorithm:
1. For i:= 1 to n do
2. {
3. Examine a[I] to a[n] and suppose the smallest element is at a[j];
4. Interchange a[I] and a[j];
5. }
t := a[i];
a[i]:=a[j];
a[j]:=t;
The first subtask can be solved by assuming the minimum is a[i], checking a[i] against
a[i+1], a[i+2], ....., and, whenever a smaller element is found, regarding it as the new
minimum. Finally, a[n] is compared with the current minimum.
Putting all these observations together, we get the algorithm Selection sort.
Theorem:
Algorithm SelectionSort(a,n) correctly sorts a set of n >= 1 elements. The result remains in
a[1:n] such that a[1] <= a[2] <= ..... <= a[n].
Selection Sort:
Selection Sort begins by finding the least element in the list. This element is moved to the
front. Then the least element among the remaining elements is found and put into the second position.
This procedure is repeated till the entire list has been sorted.
Example:
LIST L = 3,5,4,1,2
1 is selected → 1,5,4,3,2
2 is selected → 1,2,4,3,5
3 is selected → 1,2,3,4,5
4 is selected → 1,2,3,4,5
Proof:
We first note that for any i, say i = q, following the execution of lines 6 to 9, it is the case that
a[q] <= a[r] for q < r <= n.
Also observe that when i becomes greater than q, a[1:q] is unchanged. Hence, following
the last execution of these lines (i.e. i = n), we have a[1] <= a[2] <= ..... <= a[n].
We observe at this point that the upper limit of the for loop in line 4 can be changed to n-1
without damaging the correctness of the algorithm.
Algorithm:
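As a hedged illustration of the selection sort just described, a minimal C sketch is given below; the function name, 0-based indexing and the small driver are assumptions made for this example, not the original listing:

#include <stdio.h>

/* Selection sort: repeatedly find the smallest remaining element
   and swap it into the next position of the sorted prefix. */
void selection_sort(int a[], int n)
{
    for (int i = 0; i < n; i++) {
        int j = i;                               /* index of current minimum */
        for (int k = i + 1; k < n; k++)
            if (a[k] < a[j])
                j = k;
        int t = a[i]; a[i] = a[j]; a[j] = t;     /* interchange a[i] and a[j] */
    }
}

int main(void)
{
    int a[] = {3, 5, 4, 1, 2};
    selection_sort(a, 5);
    for (int i = 0; i < 5; i++) printf("%d ", a[i]);   /* 1 2 3 4 5 */
    printf("\n");
    return 0;
}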
Recursive Algorithms:
1. Towers of Hanoi:
.
.
.
Algorithm:
1. Algorithm TowersofHanoi(n,x,y,z)
2. //Move the top ‘n’ disks from tower x to tower y.
3. {
.
.
.
4. if (n >= 1) then
5. {
6. TowersofHanoi(n-1, x, z, y);
7. write("move top disk from tower ", x, " to top of tower ", y);
8. TowersofHanoi(n-1, z, y, x);
9. }
10. }
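A hedged C sketch of the same recursion (peg labels and the driver are illustrative assumptions):

#include <stdio.h>

/* Move n disks from peg x to peg y, using peg z as temporary storage. */
void towers_of_hanoi(int n, char x, char y, char z)
{
    if (n >= 1) {
        towers_of_hanoi(n - 1, x, z, y);     /* move n-1 disks out of the way */
        printf("move top disk from tower %c to top of tower %c\n", x, y);
        towers_of_hanoi(n - 1, z, y, x);     /* move them onto the target peg */
    }
}

int main(void)
{
    towers_of_hanoi(3, 'A', 'C', 'B');       /* prints the 7 moves for 3 disks */
    return 0;
}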
2. Permutation Generator:
Given a set of n >= 1 elements, the problem is to print all possible permutations of this set.
For example, if the set is {a,b,c}, then the set of permutations is
{ (a,b,c),(a,c,b),(b,a,c),(b,c,a),(c,a,b),(c,b,a)}
It is easy to see that given ‘n’ elements there are n! different permutations.
A simple algorithm can be obtained by looking at the case of four elements (a,b,c,d).
The answer can be constructed by writing
a followed by all the permutations of (b,c,d),
b followed by all the permutations of (a,c,d),
c followed by all the permutations of (a,b,d), and
d followed by all the permutations of (a,b,c).
Algorithm:
Algorithm perm(a,k,n)
{
if(k=n) then write (a[1:n]); // output permutation
else //a[k:n] has more than one permutation
// Generate this recursively.
for I:=k to n do
{
t:=a[k];
a[k]:=a[I];
a[I]:=t;
perm(a,k+1,n);
//all permutation of a[k+1:n]
t:=a[k];
a[k]:=a[I];
a[I]:=t;
}
}
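A hedged C sketch of the same generator for the set {a,b,c} (the fixed array size, helper names and output format are assumptions for this example):

#include <stdio.h>

#define N 3
static char a[N] = {'a', 'b', 'c'};

static void swap(int i, int j) { char t = a[i]; a[i] = a[j]; a[j] = t; }

/* Print every permutation of a[k..N-1], keeping a[0..k-1] fixed. */
void perm(int k)
{
    if (k == N - 1) {                 /* only one element left: output permutation */
        printf("%.*s\n", N, a);
        return;
    }
    for (int i = k; i < N; i++) {
        swap(k, i);                   /* choose a[i] as the next element */
        perm(k + 1);                  /* generate all permutations of the rest */
        swap(k, i);                   /* restore the original order */
    }
}

int main(void)
{
    perm(0);    /* prints the 3! = 6 permutations of {a, b, c} */
    return 0;
}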
Performance Analysis:
1. Space Complexity:
The space complexity of an algorithm is the amount of memory it needs to run to completion.
2. Time Complexity:
The time complexity of an algorithm is the amount of computer time it needs to run to completion.
Space Complexity:
→ The space needed by each of these algorithms is seen to be the sum of the following components.
1. A fixed part that is independent of the characteristics (e.g. number, size) of the inputs and outputs.
This part typically includes the instruction space (i.e. space for the code), space for simple
variables and fixed-size component variables (also called aggregates), space for constants, and so on.
2. A variable part that consists of the space needed by component variables whose size is dependent
on the particular problem instance being solved, the space needed by referenced variables (to the
extent that this depends on instance characteristics), and the recursion stack space.
The space requirement S(P) of any algorithm P may therefore be written as
S(P) = c + Sp(instance characteristics)
where 'c' is a constant.
Example 2:
Algorithm sum(a,n)
{
s=0.0;
for I=1 to n do
s= s+a[I];
return s;
}
The problem instances for this algorithm are characterized by n, the number of
elements to be summed. The space needed by 'n' is one word, since it is of type
integer.
The space needed by 'a' is the space needed by variables of type array of floating
point numbers.
This is at least 'n' words, since 'a' must be large enough to hold the 'n' elements to be
summed.
So we obtain Ssum(n) >= (n+3)
[ n for a[], one each for n, i and s ]
Time Complexity:
The time T(P) taken by a program P is the sum of the compile time and the run
time (execution time).
→ The compile time does not depend on the instance characteristics. Also, we may assume that a
compiled program will be run several times without recompilation. This run time is denoted
by tp(instance characteristics).
→ The number of steps any program statement is assigned depends on the kind of statement.
Iterative statements such as for, while and repeat-until → the step count is assigned to the control part of the statement.
1. We introduce a variable, count, into the program with initial value 0. Statements to increment
count by the appropriate amount are introduced into the program.
This is done so that each time a statement in the original program is executed, count is
incremented by the step count of that statement.
Algorithm:
Algorithm sum(a,n)
{
s= 0.0;
count = count+1;
for I=1 to n do
{
count =count+1;
s=s+a[I];
count=count+1;
}
count=count+1;
count=count+1;
return s;
}
→ If the count is zero to start with, then it will be 2n+3 on termination. So each invocation of sum
executes a total of 2n+3 steps.
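A hedged C version of the instrumented algorithm (the global counter and driver are assumptions for this sketch) makes the 2n+3 count easy to verify:

#include <stdio.h>

static long count = 0;   /* global step counter, as in the instrumented algorithm */

/* Sum a[0..n-1], incrementing count once per counted step. */
double sum(const double a[], int n)
{
    double s = 0.0;
    count = count + 1;                 /* for s := 0.0 */
    for (int i = 0; i < n; i++) {
        count = count + 1;             /* for one test of the for loop */
        s = s + a[i];
        count = count + 1;             /* for the assignment to s */
    }
    count = count + 1;                 /* for the last (failing) loop test */
    count = count + 1;                 /* for the return */
    return s;
}

int main(void)
{
    double a[] = {1, 2, 3, 4, 5};
    sum(a, 5);
    printf("count = %ld\n", count);    /* prints 13 = 2*5 + 3 */
    return 0;
}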
2. The second method to determine the step count of an algorithm is to build a table in which we
list the total number of steps contributed by each statement.
→ First determine the number of steps per execution (s/e) of the statement and the total number of
times (i.e., frequency) each statement is executed.
→ By combining these two quantities, the total contribution of all statements, the step count for the
entire algorithm, is obtained.

Statement                 s/e   Frequency   Total steps
Algorithm sum(a,n)         0        -            0
{                          0        -            0
  s := 0.0;                1        1            1
  for i := 1 to n do       1       n+1          n+1
    s := s + a[i];         1        n            n
  return s;                1        1            1
}                          0        -            0
Total                                           2n+3
Most of the time, average-case analyses are performed under the more or less realistic
assumption that all instances of any given size are equally likely.
For sorting problems, it is simpler to assume also that all the elements to be sorted are distinct.
Suppose we have 'n' distinct elements to sort by insertion and all n! permutations of these
elements are equally likely.
To determine the time taken on average by the algorithm, we could add the times required to
sort each of the possible permutations, and then divide the answer thus obtained by n!.
An alternative approach, easier in this case, is to analyze directly the time required by the
algorithm, reasoning probabilistically as we proceed.
For any i, 2 <= i <= n, consider the sub array T[1...i].
The partial rank of T[i] is defined as the position it would occupy if the sub array were sorted.
For example, the partial rank of T[4] in [3,6,2,5,1,7,4] is 3 because T[1...4] once sorted is
[2,3,5,6].
Clearly the partial rank of T[i] does not depend on the order of the elements in the
sub array T[1...i-1].
Analysis
Best case:
This analysis places constraints on the input, other than size, resulting in the fastest possible running time.
Worst case:
This analysis places constraints on the input, other than size, resulting in the slowest possible running time.
Average case:
This type of analysis results in the average running time over all inputs of a given size.
Complexity:
Complexity refers to the rate at which the required storage or computing time grows as a function of the
problem size.
Asymptotic notation:
Big 'oh': the function f(n) = O(g(n)) iff there exist positive constants c and n0 such that f(n) ≤ c*g(n) for all
n, n ≥ n0.
Omega: the function f(n) = Ω(g(n)) iff there exist positive constants c and n0 such that f(n) ≥ c*g(n) for all
n, n ≥ n0.
Theta: the function f(n) = Θ(g(n)) iff there exist positive constants c1, c2 and n0 such that c1*g(n) ≤ f(n) ≤
c2*g(n) for all n, n ≥ n0.
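For example, f(n) = 3n + 2 is O(n) since 3n + 2 ≤ 4n for all n ≥ 2; it is Ω(n) since 3n + 2 ≥ 3n for all n ≥ 1; and hence 3n + 2 = Θ(n).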
Recursion:
Recursion may have the following definitions:
- The nested repetition of an identical algorithm is recursion.
- It is a technique of defining an object/process by itself.
- Recursion is a process by which a function calls itself repeatedly until some specified condition has been
satisfied.
Recursion can be used for repetitive computations in which each action is stated in terms of
a previous result. There are two conditions that must be satisfied by any recursive procedure.
1. Each time a function calls itself it should get nearer to the solution.
2. There must be a decision criterion for stopping the process.
In making the decision about whether to write an algorithm in recursive or non-recursive form, it is
always advisable to consider a tree structure for the problem. If the structure is simple then use non-
recursive form. If the tree appears quite bushy, with little duplication of tasks, then recursion is suitable.
The recursion algorithm for finding the factorial of a number is given below,
Algorithm : factorial-recursion
Input : n, the number whose factorial is to be found.
Output : f, the factorial of n
Method : if(n=0)
f=1
else
f=factorial(n-1) * n
if end
algorithm ends.
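A hedged C sketch of the same recursion (the type choices and driver are assumptions for this example):

#include <stdio.h>

/* Recursive factorial: each call gets nearer to the stopping case n == 0. */
unsigned long factorial(unsigned int n)
{
    if (n == 0)
        return 1;                      /* stopping criterion */
    return factorial(n - 1) * n;       /* defined in terms of the previous result */
}

int main(void)
{
    printf("5! = %lu\n", factorial(5));   /* prints 120 */
    return 0;
}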
3. Restore the most recently saved parameters, local variables and return address, and go to the latest
return address.
The indispensable last step when analyzing an algorithm is often to solve a recurrence equation.
With a little experience and intuition, most recurrences can be solved by intelligent guesswork.
However, there exists a powerful technique that can be used to solve certain classes of recurrences
almost automatically.
This is the main topic of this section: the technique of the characteristic equation.
EG : 1
T(n) = 0                    if n = 0
T(n) = 3T(n ÷ 2) + n        otherwise

n      1    2    4    8    16    32
T(n)   1    5   19   65   211   665
Then,
T(4) = 3 * T(2) + 4
     = 3 * (3 * 1 + 2) + 4
     = (3^2 * 1) + (3 * 2) + 4
n      T(n)
1      1
2      3 * 1 + 2
2^2    3^2 * 1 + 3 * 2 + 2^2
2^3    3^3 * 1 + 3^2 * 2 + 3 * 2^2 + 2^3
2^4    3^4 * 1 + 3^3 * 2 + 3^2 * 2^2 + 3 * 2^3 + 2^4
2^5    3^5 * 1 + 3^4 * 2 + 3^3 * 2^2 + 3^2 * 2^3 + 3 * 2^4 + 2^5

T(2^k) = ∑ 3^(k-i) * 2^i        (sum over i = 0 to k)
       = 3^k * ∑ (2/3)^i
Let Sn be the sum of the first n terms of the geometric series a, ar, ar^2, ..... Then
Sn = a(1 - r^n)/(1 - r), except in the special case when r = 1, when Sn = an.
T(2^k) = 3^k * ((3^(k+1) - 2^(k+1)) / 3^(k+1)) * (3 / 1)
       = 3^k * (3^(k+1) - 2^(k+1)) / 3^k
       = 3^(k+1) - 2^(k+1)
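In other words, since n = 2^k, we have 3^k = (2^k)^(lg 3) = n^(lg 3), so T(n) = 3 * n^(lg 3) - 2n; the recurrence therefore grows as O(n^(lg 3)) when n is a power of 2.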
EG : 2
tn = 0                              if n = 0
tn = 5                              if n = 1
tn = 3t(n-1) + 4t(n-2)              otherwise

Characteristic polynomial: x^2 - 3x - 4 = 0
(x - 4)(x + 1) = 0
Roots r1 = 4, r2 = -1
General solution: tn = C1 * 4^n + C2 * (-1)^n
Eqn 1 (n = 0): C1 + C2 = 0, so C1 = -C2
Eqn 2 (n = 1): 4C1 - C2 = 5, so C2 = 5 / (-5) = -1
C2 = -1, C1 = 1
tn = 1 * 4^n + (-1) * (-1)^n
tn = 4^n - (-1)^n
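As a quick check, t2 = 3*5 + 4*0 = 15 = 4^2 - (-1)^2 and t3 = 3*15 + 4*5 = 65 = 4^3 - (-1)^3.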
Given a function to compute on ‘n’ inputs the divide-and-conquer strategy suggests splitting the inputs into ‘k’ distinct subsets,
1<k<=n, yielding ‘k’ sub problems.
These sub problems must be solved, and then a method must be found to combine sub solutions into a solution of the whole.
If the sub problems are still relatively large, then the divide-and-conquer strategy can possibly be reapplied.
Often the sub problems resulting from a divide-and-conquer design are of the same type as the original problem.
For those cases the reapplication of the divide-and-conquer principle is naturally expressed by a recursive algorithm.
DAndC (the divide-and-conquer control abstraction) is initially invoked as DAndC(P), where P is the problem to be solved.
Small(P) is a Boolean-valued function that determines whether the input size is small enough that the answer can be computed
without splitting.
These sub problems P1, P2, ....., Pk are solved by recursive application of DAndC.
Combine is a function that determines the solution to P using the solutions to the 'k' sub problems.
If the size of P is n and the sizes of the 'k' sub problems are n1, n2, ....., nk, respectively, then the computing time of DAndC
is described by the recurrence relation
T(n) = g(n)                                        if n is small
T(n) = T(n1) + T(n2) + ..... + T(nk) + f(n)        otherwise
where T(n) → is the time for DAndC on any input of size 'n',
g(n) → is the time to compute the answer directly for small inputs, and
f(n) → is the time for dividing P and combining the solutions to the sub problems.
Example:
1) Consider the case in which a=2 and b=2. Let T(1)=2 & f(n)=n.
We have,
T(n) = 2T(n/2)+n
= 2[2T(n/2/2)+n/2]+n
= [4T(n/4)+n]+n
= 4T(n/4)+2n
= 4[2T(n/4/2)+n/4]+2n
= 4[2T(n/8)+n/4]+2n
= 8T(n/8)+n+2n
= 8T(n/8)+3n
*
*
*
= n * T(n/n) + n log n        (after k = log n substitutions, since n = 2^k)
= n * T(1) + n log n
= 2n + n log n
BINARY SEARCH
The algorithm BinSrch describes this binary search method; BinSrch has four inputs a[], i, l and x.
It is initially invoked as BinSrch(a, 1, n, x).
A non-recursive version of BinSrch is given below.
This BinSearch has three inputs a, n and x.
The while loop continues processing as long as there are more elements left to check.
At the conclusion of the procedure, 0 is returned if x is not present; otherwise 'j' is returned such that a[j] = x.
Thus we have two sequences of integers approaching each other, and eventually low becomes greater than high and causes termination
in a finite number of steps if 'x' is not present.
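A hedged C sketch of the iterative search (0-based indexing and a return value of -1 for "not present" are assumptions here; the text's convention of returning 0 relies on 1-based indexing):

#include <stdio.h>

/* Iterative binary search in the sorted array a[0..n-1].
   Returns the index of x, or -1 if x is not present. */
int binsearch(const int a[], int n, int x)
{
    int low = 0, high = n - 1;
    while (low <= high) {               /* more elements left to check */
        int mid = low + (high - low) / 2;
        if (x < a[mid])
            high = mid - 1;             /* narrow the range downwards */
        else if (x > a[mid])
            low = mid + 1;              /* narrow the range upwards */
        else
            return mid;                 /* x == a[mid]: successful search */
    }
    return -1;                          /* low > high: x is not present */
}

int main(void)
{
    int a[] = {-15, -6, 0, 7, 9, 23, 54, 82, 101, 112, 125, 131, 142, 151};
    printf("%d %d %d\n", binsearch(a, 14, 151), binsearch(a, 14, -14), binsearch(a, 14, 9));
    /* prints 13 -1 4 */
    return 0;
}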
Example:
1) Let us select the 14 entries
-15, -6, 0, 7, 9, 23, 54, 82, 101, 112, 125, 131, 142, 151.
→ Place them in a[1:14], and simulate the steps BinSearch goes through as it searches for different values of 'x'.
→ Only the variables low, high and mid need to be traced as we simulate the algorithm.
→ We try the following values for x: 151, -14 and 9, for two successful searches and one unsuccessful search.
Proof:
We assume that all statements work as expected and that comparisons such as x > a[mid] are appropriately carried out.
Otherwise, we observe that each time through the loop the possible elements to be checked for equality with x are a[low],
a[low+1], ....., a[mid], ....., a[high].
If x = a[mid], then the algorithm terminates successfully.
Otherwise, the range is narrowed by either increasing low to (mid+1) or decreasing high to (mid-1).
Clearly, this narrowing of the range does not affect the outcome of the search.
If low becomes greater than high, then 'x' is not present and hence the loop is exited.
The problem is to find the maximum and minimum items in a set of ‘n’ elements.
In analyzing the time complexity of this algorithm, we once again concentrate on the number of element comparisons.
More importantly, when the elements in a[1:n] are polynomials, vectors, very large numbers, or strings of characters, the cost of
an element comparison is much higher than the cost of the other operations.
Hence, the time is determined mainly by the total cost of the element comparisons.
Straight MaxMin requires 2(n-1) element comparisons in the best, average and worst cases.
An immediate improvement is possible by realizing that the comparison a[i] < min is necessary only when a[i] > max is false.
Now the best case occurs when the elements are in increasing order.
→ The number of element comparisons is (n-1).
The worst case occurs when the elements are in decreasing order.
→ The number of element comparisons is 2(n-1).
On the average, a[i] is greater than max half the time, and so the average number of comparisons is 3n/2 - 1.
A divide- and conquer algorithm for this problem would proceed as follows:
If the list has more than 2 elements, P has to be divided into smaller instances.
For example , we might divide ‘P’ into the 2 instances, P1=([n/2],a[1],……..a[n/2]) & P2= (n-[n/2],a[[n/2]+1],…..,a[n])
After having divided ‘P’ into 2 smaller sub problems, we can solve them by recursively invoking the same divide-and-conquer
algorithm.
à When ‘n’ is a power of 2, n=2^k for some +ve integer ‘k’, then
T(n) = 2T(n/2) +2
= 2(2T(n/4)+2)+2
= 4T(n/4)+4+2
*
*
*
= 2^(k-1) T(2) + (2^(k-1) + 2^(k-2) + ..... + 2)
= 2^(k-1) + 2^k - 2
= n/2 + n - 2
= ((n + 2n)/2) - 2
T(n) = (3n/2) - 2
*Note that (3n/2) - 2 is the best, average, and worst-case number of comparisons when 'n' is a power of 2.
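A hedged C sketch of the divide-and-conquer MaxMin described above (0-based indexing, pointer outputs and the driver are assumptions made for this example):

#include <stdio.h>

/* Recursively find both the maximum and the minimum of a[i..j],
   dividing the range in half as described above. */
void max_min(const int a[], int i, int j, int *max, int *min)
{
    if (i == j) {                             /* one element */
        *max = *min = a[i];
    } else if (j == i + 1) {                  /* two elements: one comparison */
        if (a[i] > a[j]) { *max = a[i]; *min = a[j]; }
        else             { *max = a[j]; *min = a[i]; }
    } else {
        int max1, min1, mid = (i + j) / 2;
        max_min(a, i, mid, max, min);         /* solve P1 */
        max_min(a, mid + 1, j, &max1, &min1); /* solve P2 */
        if (max1 > *max) *max = max1;         /* combine the two solutions */
        if (min1 < *min) *min = min1;
    }
}

int main(void)
{
    int a[] = {22, 13, -5, -8, 15, 60, 17, 31, 47};
    int max, min;
    max_min(a, 0, 8, &max, &min);
    printf("max = %d, min = %d\n", max, min);   /* max = 60, min = -8 */
    return 0;
}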
MERGE SORT
As another example of divide-and-conquer, we investigate a sorting algorithm that has the nice property that in the worst case its
complexity is O(n log n).
This algorithm is called merge sort.
We assume throughout that the elements are to be sorted in non-decreasing order.
Given a sequence of 'n' elements a[1], ....., a[n], the general idea is to imagine them split into two sets a[1], ....., a[n/2] and
a[[n/2]+1], ....., a[n].
Each set is individually sorted, and the resulting sorted sequences are merged to produce a single sorted sequence of 'n'
elements.
Thus, we have another ideal example of the divide-and-conquer strategy in which the splitting is into two equal-sized sets and the
combining operation is the merging of two sorted sets into one.
1. Algorithm MergeSort(low,high)
2. //a[low:high] is a global array to be sorted
3. //Small(P) is true if there is only one element
4. //to sort. In this case the list is already sorted.
5. {
6. if (low<high) then //if there are more than one element
7. {
8. //Divide P into subproblems
9. //find where to split the set
10. mid = [(low+high)/2];
11. //solve the subproblems.
12. mergesort (low,mid);
13. mergesort(mid+1,high);
14. //combine the solutions .
15. merge(low,mid,high);
16. }
17. }
1. Algorithm merge(low,mid,high)
2. //a[low:high] is a global array containing
3. //two sorted subsets in a[low:mid]
4. //and in a[mid+1:high].The goal is to merge these 2 sets into
5. //a single set residing in a[low:high].b[] is an auxiliary global array.
6. {
7. h=low; I=low; j=mid+1;
8. while ((h<=mid) and (j<=high)) do
9. {
10. if (a[h]<=a[j]) then
11. {
12. b[I]=a[h];
13. h = h+1;
14. }
15. else
16. {
17. b[I]= a[j];
18. j=j+1;
19. }
20. I=I+1;
21. }
22. if (h>mid) then
23. for k=j to high do
24. {
25. b[I]=a[k];
26. I=I+1;
27. }
28. else
29. for k=h to mid do
30. {
31. b[I]=a[k];
32. I=I+1;
33. }
34. for k=low to high do a[k] = b[k];
35. }
Consider the array of 10 elements a[1:10] = (310, 285, 179, 652, 351, 423, 861, 254, 450, 520).
Algorithm MergeSort begins by splitting a[] into two sub arrays each of size five (a[1:5] and a[6:10]).
The elements in a[1:5] are then split into two sub arrays of size 3 (a[1:3]) and 2 (a[4:5]).
Then the items in a[1:3] are split into sub arrays of size 2 (a[1:2]) and 1 (a[3:3]).
The two values in a[1:2] are split a final time into one-element sub arrays, and now the merging begins.
(310 | 285 | 179 | 652, 351 | 423, 861, 254, 450, 520)
→ Repeated recursive calls are invoked producing the following sub arrays.
(179, 285, 310, 351, 652 | 423 | 861 | 254 | 450, 520)
→ Next a[9] and a[10] are merged, and then a[6:8] and a[9:10].
(179, 285, 310, 351, 652| 254, 423, 450, 520, 861 )
→ At this point there are two sorted sub arrays and the final merge produces the fully sorted
result.
(179, 254, 285, 310, 351, 423, 450, 520, 652, 861)
If the time for the merging operation is proportional to n, then the computing time for merge sort is described by the
recurrence relation
T(n) = a                     n = 1, 'a' a constant
T(n) = 2T(n/2) + cn          n > 1, 'c' a constant
→ When 'n' is a power of 2, n = 2^k, we can solve this equation by successive substitution.
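Carrying out the substitution (a sketch, assuming n = 2^k):
T(n) = 2(2T(n/4) + cn/2) + cn
     = 4T(n/4) + 2cn
     = 4(2T(n/8) + cn/4) + 2cn
     = 8T(n/8) + 3cn
     .....
     = 2^k T(1) + kcn
     = an + cn log n
If 2^k < n <= 2^(k+1), then T(n) <= T(2^(k+1)); therefore T(n) = O(n log n).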
QUICK SORT
The divide-and-conquer approach can be used to arrive at an efficient sorting method different from merge sort.
In merge sort, the file a[1:n] was divided at its midpoint into sub arrays which were independently sorted & later merged.
In Quick sort, the division into 2 sub arrays is made so that the sorted sub arrays do not need to be merged later.
This is accomplished by rearranging the elements in a[1:n] such that a[i] <= a[j] for all i between 1 and m and all j between (m+1)
and n, for some m, 1 <= m <= n.
It is assumed that a[p] >= a[m] and that a[m] is the partitioning element. If m = 1 and p-1 = n, then a[n+1] must be defined and must
be greater than or equal to all elements in a[1:n].
The assumption that a[m] is the partitioning element is merely for convenience; other choices for the partitioning element than the
first item in the set are better in practice.
1. Algorithm Partition(a,m,p)
2. //within a[m],a[m+1],…..,a[p-1] the elements
3. // are rearranged in such a manner that if
4. //initially t=a[m],then after completion
5. //a[q]=t for some q between m and
6. //p-1,a[k]<=t for m<=k<q, and
7. //a[k]>=t for q<k<p. q is returned
8. //Set a[p]=infinite.
9. {
10. v=a[m];I=m;j=p;
11. repeat
12. {
13. repeat
14. I=I+1;
15. until(a[I]>=v);
16. repeat
17. j=j-1;
18. until(a[j]<=v);
19. if (I<j) then interchange(a,I,j);
20. }until(I>=j);
21. a[m]=a[j]; a[j]=v;
22. return j;
23. }
1. Algorithm Interchange(a,I,j)
2. //Exchange a[I] with a[j]
3. {
4. p=a[I];
5. a[I]=a[j];
6. a[j]=p;
7. }
1. Algorithm Quicksort(p,q)
2. //Sort the elements a[p],....,a[q] which reside
3. //in the global array a[1:n] into ascending
4. //order; a[n+1] is considered to be defined
5. // and must be >= all the elements in a[1:n]
6. {
7. if(p<q) then // If there are more than one element
8. {
9. // divide p into 2 subproblems
10. j=partition(a,p,q+1);
11. //’j’ is the position of the partitioning element.
12. //solve the subproblems.
13. quicksort(p,j-1);
14. quicksort(j+1,q);
15. //There is no need for combining solution.
16. }
17. }
Partition(int m, int p)
{
    int v, i, j;
    v = a[m];
    i = m;
    j = p;
    do
    {
        do
            i = i + 1;
        while (a[i] < v);
        do
            j = j - 1;
        while (a[j] > v);
        if (i < j)
            interchange(i, j);
    } while (i < j);
    a[m] = a[j];
    a[j] = v;
    return j;
}
Interchange(int i, int j)
{
    int p;
    p = a[i];
    a[i] = a[j];
    a[j] = p;
}
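The listing above omits the surrounding program; a self-contained C sketch that combines these routines with a small driver and would produce a run like the one shown below is given here (the array bound, sentinel value and prompts are assumptions, not the original program):

#include <stdio.h>
#include <limits.h>

#define MAX 100
int a[MAX + 2];     /* global array; a[n+1] acts as the "infinite" sentinel */

void interchange(int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }

/* Partition a[m..p-1] around v = a[m]; return the final position of v. */
int partition(int m, int p)
{
    int v = a[m], i = m, j = p;
    do {
        do { i = i + 1; } while (a[i] < v);
        do { j = j - 1; } while (a[j] > v);
        if (i < j) interchange(i, j);
    } while (i < j);
    a[m] = a[j]; a[j] = v;
    return j;
}

void quicksort(int p, int q)
{
    if (p < q) {
        int j = partition(p, q + 1);   /* j is the position of the partitioning element */
        quicksort(p, j - 1);
        quicksort(j + 1, q);
    }
}

int main(void)
{
    int n;
    printf("Enter the no. of elements ");
    scanf("%d", &n);
    printf("Enter the array elements\n");
    for (int i = 1; i <= n; i++) scanf("%d", &a[i]);
    a[n + 1] = INT_MAX;                /* sentinel >= all elements in a[1:n] */
    quicksort(1, n);
    printf("The sorted elements are,\n");
    for (int i = 1; i <= n; i++) printf("%d\n", a[i]);
    return 0;
}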
Output:
Enter the no. of elements 5
Enter the array elements
3
8
1
5
2
The sorted elements are,
1
2
3
5
8
The divide and conquer method suggests another way to compute the product of two n*n matrices.
We assume that n is a power of 2. In case n is not a power of 2, enough rows and columns of zeros can be added to
both A and B so that the resulting dimensions are a power of two.
If n = 2, then the formulas below are computed using multiplication operations on the elements of A and B.
If n > 2, then the elements are partitioned into n/2 * n/2 sub matrices. Since 'n' is a power of 2, these products can be recursively
computed using the same formulas. The algorithm continues applying itself to smaller sub matrices until 'n' becomes
suitably small (n = 2), so that the product is computed directly.
The formulas are
C11 = A11*B11 + A12*B21
C12 = A11*B12 + A12*B22
C21 = A21*B11 + A22*B21
C22 = A21*B12 + A22*B22
For example (4*4):
2 2 2 2     1 1 1 1     4 4 4 4
2 2 2 2  *  1 1 1 1  =  4 4 4 4
2 2 2 2     1 1 1 1     4 4 4 4
2 2 2 2     1 1 1 1     4 4 4 4
To compute AB using these equations, we need to perform 8 multiplications of n/2 * n/2 matrices and 4 additions of n/2 * n/2
matrices.
The Cij are computed using the formulas in equation 4.
As can be seen, P, Q, R, S, T, U and V can be computed using 7 matrix multiplications and 10 additions or subtractions.
The Cij require an additional 8 additions or subtractions.
Example
4 4       4 4
4 4   *   4 4

P = (4+4)(4+4) = 64
Q = (4+4)4 = 32
R = 4(4-4) = 0
S = 4(4-4) = 0
T = (4+4)4 = 32
U = (4-4)(4+4) = 0
V = (4-4)(4+4) = 0
C11 = 64+0-32+0 = 32
C12 = 0+32 = 32
C21 = 32+0 = 32
C22 = 64+0-32+0 = 32

The product is
32 32
32 32
Since two n/2 * n/2 matrices can be added in time cn^2 for some constant c, the overall computing time T(n) of the resulting divide
and conquer algorithm is given by the recurrence
T(n) = b                     n <= 2
T(n) = 8T(n/2) + cn^2        n > 2
where b and c are constants. That is, T(n) = O(n^3).
* Matrix multiplications are more expensive than matrix additions (O(n^3) versus O(n^2)), so we can attempt to reformulate the equations for the Cij
so as to have fewer multiplications and possibly more additions.
Strassen has discovered a way to compute the Cij of equation (2) using only 7 multiplications and 18 additions or subtractions.
Strassen's formulas are
P = (A11+A22)(B11+B22)
Q = (A21+A22)B11
R = A11(B12-B22)
S = A22(B21-B11)
T = (A11+A12)B22
U = (A21-A11)(B11+B12)
V = (A12-A22)(B21+B22)
C11 = P+S-T+V
C12 = R+T
C21 = Q+S
C22 = P+R-Q+U
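As a hedged illustration of these formulas (a minimal C sketch for the 2*2 case only; the matrix values and names are assumptions matching the worked example above), the seven products can be checked against direct multiplication:

#include <stdio.h>

/* Multiply two 2x2 matrices with Strassen's 7 products and compare
   the result with the ordinary 8-multiplication formula. */
int main(void)
{
    int A[2][2] = {{4, 4}, {4, 4}};
    int B[2][2] = {{4, 4}, {4, 4}};

    int P = (A[0][0] + A[1][1]) * (B[0][0] + B[1][1]);
    int Q = (A[1][0] + A[1][1]) * B[0][0];
    int R = A[0][0] * (B[0][1] - B[1][1]);
    int S = A[1][1] * (B[1][0] - B[0][0]);
    int T = (A[0][0] + A[0][1]) * B[1][1];
    int U = (A[1][0] - A[0][0]) * (B[0][0] + B[0][1]);
    int V = (A[0][1] - A[1][1]) * (B[1][0] + B[1][1]);

    int C[2][2];
    C[0][0] = P + S - T + V;
    C[0][1] = R + T;
    C[1][0] = Q + S;
    C[1][1] = P + R - Q + U;

    /* direct product for comparison */
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++) {
            int direct = A[i][0] * B[0][j] + A[i][1] * B[1][j];
            printf("C%d%d = %d (direct %d)\n", i + 1, j + 1, C[i][j], direct);
        }
    return 0;
}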