(R18A0507) Design and Analysis of Algorithms-6-13
Algorithm:
An algorithm is a finite sequence of instructions, each of which has a clear meaning and can be
performed with a finite amount of effort in a finite length of time. No matter what the input values may
be, an algorithm terminates after executing a finite number of instructions. In addition, every algorithm
must satisfy the following criteria:
1. Input: zero or more quantities are externally supplied.
2. Output: at least one quantity is produced.
3. Definiteness: each instruction is clear and unambiguous.
4. Finiteness: the algorithm terminates after a finite number of steps for all cases.
5. Effectiveness: every instruction must be basic enough to be carried out exactly and in a finite amount of time.
In formal computer science, one distinguishes between an algorithm and a program. A program does not
necessarily satisfy the fourth condition (finiteness). One important example of such a program for a computer is its
operating system, which never terminates (except for system crashes) but continues in a wait loop until
more jobs are entered.
2. Graphic representation called a flowchart: this method works well when the algorithm
is small and simple.
Pseudo-Code Conventions:
3. An identifier begins with a letter. The data types of variables are not explicitly declared.
4. Compound data types can be formed with records. Here is an example:
Node = Record
{
   data type – 1 data – 1;
   .
   .
   .
   data type – n data – n;
   node * link;
}
Here link is a pointer to the record type node. Individual data items of a record can
be accessed with → and a period.
While Loop:
while <condition> do
{
   <statement-1>
   .
   .
   .
   <statement-n>
}
For Loop:
for variable := value-1 to value-2 step step do
{
<statement-1>
.
.
.
<statement-n>
}
repeat-until:
repeat
<statement-1>
.
.
.
<statement-n>
until <condition>
Case statement:
Case
{
: <condition-1>:<statement-1>
.
.
.
: <condition-n>:<statement-n>
: else : <statement-n+1>
}
9. Input and output are done using the instructions read & write.
As an example, the following algorithm finds and returns the maximum of n given
numbers:
Algorithm Max(A, n)
// A is an array of size n
{
   Result := A[1];
   for i := 2 to n do
      if A[i] > Result then
         Result := A[i];
   return Result;
}
In this algorithm (named Max), A and n are procedure parameters; Result and i are local
variables.
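As a sketch, the Max algorithm above translates directly into Python (note that Python lists are 0-based, while the pseudocode's arrays are 1-based):

```python
def find_max(a):
    """Return the maximum of the numbers in the non-empty list a."""
    result = a[0]                   # pseudocode: Result := A[1]
    for i in range(1, len(a)):      # pseudocode: for i := 2 to n do
        if a[i] > result:
            result = a[i]
    return result

print(find_max([3, 9, 2, 7]))  # prints 9
```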
Algorithm:
The following algorithm sorts the array a[1 : n] into non-decreasing order by selection sort:
Algorithm SelectionSort(a, n)
// Sort the array a[1 : n] into non-decreasing order.
{
   for i := 1 to n do
   {
      j := i;
      for k := i + 1 to n do
         if (a[k] < a[j]) then j := k;
      t := a[i];
      a[i] := a[j];
      a[j] := t;
   }
}
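A minimal Python sketch of the same selection sort (0-based indexing in place of the pseudocode's 1-based arrays):

```python
def selection_sort(a):
    """Sort the list a in place by selection sort and return it."""
    n = len(a)
    for i in range(n):
        j = i                        # j tracks the index of the smallest element seen so far
        for k in range(i + 1, n):
            if a[k] < a[j]:
                j = k
        a[i], a[j] = a[j], a[i]      # swap: t := a[i]; a[i] := a[j]; a[j] := t
    return a

print(selection_sort([5, 2, 8, 1]))  # prints [1, 2, 5, 8]
```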
Performance Analysis:
The performance of a program is the amount of computer memory and time needed to
run it. We use two approaches to determine the performance of a program: one
is analytical, and the other experimental. In performance analysis we use analytical
methods, while in performance measurement we conduct experiments.
Time Complexity:
The time needed by an algorithm expressed as a function of the size of a problem is
called the time complexity of the algorithm. The time complexity of a program is the
amount of computer time it needs to run to completion.
The limiting behavior of the complexity as size increases is called the asymptotic time
complexity. It is the asymptotic complexity of an algorithm, which ultimately determines
the size of problems that can be solved by the algorithm.
As an example, the step counts for the algorithm Sum are tabulated below (s/e is the number of steps per execution of a statement; frequency is the number of times it is executed):

Statement                   s/e   Frequency   Total steps
1. Algorithm Sum(a, n)       0       –            0
2. {                         0       –            0
3.  s := 0.0;                1       1            1
4.  for i := 1 to n do       1      n+1          n+1
5.   s := s + a[i];          1       n            n
6.  return s;                1       1            1
7. }                         0       –            0

Total number of steps: 2n + 3
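The frequency counts in the table can be checked empirically. A sketch in Python that increments a counter once per counted step of Sum:

```python
def sum_with_steps(a):
    """Sum the elements of a, counting steps as in the table above."""
    steps = 0
    steps += 1          # statement 3: s := 0.0
    s = 0.0
    for x in a:
        steps += 1      # statement 4: loop test (succeeds n times)
        steps += 1      # statement 5: s := s + a[i]
        s += x
    steps += 1          # statement 4: final failing loop test, the (n+1)-th
    steps += 1          # statement 6: return s
    return s, steps

total, steps = sum_with_steps([1.0, 2.0, 3.0])
print(total, steps)     # for n = 3, steps = 2n + 3 = 9
```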
Space Complexity:
The space complexity of a program is the amount of memory it needs to run to
completion. The space needed by a program has the following components:
Instruction space: Instruction space is the space needed to store the compiled
version of the program instructions.
Data space: Data space is the space needed to store all constant and variable
values. Data space has two components:
Space needed by constants and simple variables in the program.
Space needed by dynamically allocated objects such as arrays and class
instances.
Environment stack space: The environment stack is used to save information
needed to resume execution of partially completed functions.
Instruction Space: The amount of instruction space that is needed depends on
factors such as:
The compiler used to compile the program into machine code.
The compiler options in effect at the time of compilation.
The target computer.
The space requirement S(P) of any algorithm P may therefore be written as
S(P) = c + S_P(instance characteristics),
where c is a constant.
Example 2:
Algorithm Sum(a, n)
{
   s := 0.0;
   for i := 1 to n do
      s := s + a[i];
   return s;
}
The problem instances for this algorithm are characterized by n, the number of
elements to be summed. The space needed by n is one word, since it is of type
integer.
The space needed by a is the space needed by variables of type array of floating
point numbers.
This is at least n words, since a must be large enough to hold the n elements to be
summed.
So we obtain S_Sum(n) >= (n + 3)
(n words for a[], one word each for n, i and s).
Complexity of Algorithms
The complexity of an algorithm M is the function f(n) which gives the running time
and/or storage space requirement of the algorithm in terms of the size n of the input
data. Mostly, the storage space required by an algorithm is simply a multiple of the data
size n, so here complexity shall refer to the running time of the algorithm.
The function f(n), which gives the running time of an algorithm, depends not only on the
size n of the input data but also on the particular data. The complexity function f(n) for
certain cases is defined as follows:
1. Best Case: the minimum possible value of f(n) is called the best case.
2. Average Case: the expected value of f(n) over all possible inputs of size n.
3. Worst Case: the maximum value of f(n) for any possible input.
Asymptotic Notations:
The following notations are commonly use notations in performance analysis and
used to characterize the complexity of an algorithm:
1. Big–OH(O)
2. Big–OMEGA(Ω),
3. Big–THETA (Θ)and
4. Little–OH(o)
Big–OH O (Upper bound)
f(n) = O(g(n)) (pronounced "order of" or "big oh") says that the growth rate of f(n) is less
than or equal to (≤) that of g(n).
Big–OMEGA Ω (Lower bound)
f(n) = Ω(g(n)) (pronounced "omega") says that the growth rate of f(n) is greater than or
equal to (≥) that of g(n).
Big–THETA Θ (Same order)
f(n) = Θ(g(n)) (pronounced "theta") says that the growth rate of f(n) equals (=) the
growth rate of g(n), i.e. f(n) = O(g(n)) and f(n) = Ω(g(n)).
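For reference, the informal readings above correspond to the standard formal definitions (a sketch; the witnesses c and n₀ may be chosen freely and are not unique):

```latex
f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 > 0 : 0 \le f(n) \le c\,g(n) \ \text{for all } n \ge n_0
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 > 0 : 0 \le c\,g(n) \le f(n) \ \text{for all } n \ge n_0
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \text{ and } f(n) = \Omega(g(n))
```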
little-o notation
Definition: A theoretical measure of the execution of an algorithm, usually the time or memory needed,
given the problem size n, which is usually the number of items. Informally, saying some equation f(n) =
o(g(n)) means f(n) becomes insignificant relative to g(n) as n approaches infinity. The notation is read, "f
of n is little oh of g of n".
Formal Definition: f(n) = o(g(n)) means for all c > 0 there exists some k > 0 such that 0 ≤ f(n) < cg(n) for
all n ≥ k. The value of k must not depend on n, but may depend on c.
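Numerically, f(n) = o(g(n)) means the ratio f(n)/g(n) shrinks toward 0 as n grows. A quick Python check illustrating the example n = o(n²) (the functions f and g here are illustrative choices, not from the definition above):

```python
def f(n): return n          # example: f(n) = n
def g(n): return n * n      # example: g(n) = n^2

# The ratio f(n)/g(n) tends to 0 as n grows, consistent with n = o(n^2).
ratios = [f(n) / g(n) for n in (10, 100, 1000, 10000)]
print(ratios)  # [0.1, 0.01, 0.001, 0.0001]
```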
The commonly encountered complexity functions, in increasing order of growth rate, are:
O(1), O(log2 n), O(n), O(n log2 n), O(n^2), O(n^3), O(2^n), O(n!) and O(n^n).
Classification of Algorithms
If n is the number of data items to be processed, the degree of a polynomial, the size of
a file to be sorted or searched, the number of nodes in a graph, etc., then the common
running times are characterized as follows:
1        Most instructions of most programs are executed once or at most only a few
         times. If all the instructions of a program have this property, we say that its
         running time is a constant.
log n    When the running time of a program is logarithmic, the program gets
         slightly slower as n grows. This running time commonly occurs in
         programs that solve a big problem by transforming it into a smaller
         problem, cutting the size by some constant fraction. When n is a million,
         log2 n is about twenty. Whenever n doubles, log n increases by a constant, but
         log n does not double until n increases to n^2.
n        When the running time of a program is linear, it is generally the case that a
         small amount of processing is done on each input element. This is the
         optimal situation for an algorithm that must process n inputs.
n log n  This running time arises for algorithms that solve a problem by breaking it
         up into smaller sub-problems, solving them independently, and then
         combining the solutions. When n doubles, the running time more than
         doubles.
The execution times for six typical functions are given below:

n     log2 n   n*log2 n   n^2      n^3          2^n
1     0        0          1        1            2
2     1        2          4        8            4
4     2        8          16       64           16
8     3        24         64       512          256
16    4        64         256      4,096        65,536
32    5        160        1,024    32,768       4,294,967,296
64    6        384        4,096    262,144      ≈ 1.8 × 10^19
128   7        896        16,384   2,097,152    ≈ 3.4 × 10^38
256   8        2,048      65,536   16,777,216   ≈ 1.2 × 10^77
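The table can be regenerated with a few lines of Python (the column widths here are arbitrary formatting choices):

```python
# Print the six growth functions for n = 1, 2, 4, ..., 256.
print(f"{'n':>4} {'log2 n':>7} {'n*log2 n':>9} {'n^2':>6} {'n^3':>10} {'2^n':>10}")
for k in range(9):          # n = 2^k
    n = 2 ** k
    row = (n, k, n * k, n ** 2, n ** 3, 2 ** n)
    print("{:>4} {:>7} {:>9} {:>6} {:>10} {:>10.3g}".format(*row))
```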
Randomized algorithms:
An algorithm that uses random numbers to decide what to do next anywhere in its logic is called a
randomized algorithm. For example, in randomized quicksort, we use a random number to pick the next
pivot (or we randomly shuffle the array). Quicksort is a familiar, commonly used algorithm in which
randomness can be useful. Any deterministic version of this algorithm requires O(n^2) time to
sort n numbers for some well-defined class of degenerate inputs (such as an already sorted array), with the
specific class of inputs that generate this behavior defined by the protocol for pivot selection. However, if
the algorithm selects pivot elements uniformly at random, it has a provably high probability of finishing
in O(n log n) time regardless of the characteristics of the input. Typically, this randomness is used to
reduce time complexity or space complexity in other standard algorithms.
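A minimal randomized quicksort sketch in Python, choosing the pivot uniformly at random so that no fixed input class (such as an already sorted array) reliably forces the O(n^2) behavior:

```python
import random

def randomized_quicksort(a):
    """Return a sorted copy of a; the pivot is chosen uniformly at random."""
    if len(a) <= 1:
        return a
    pivot = random.choice(a)                          # uniformly random pivot
    less    = [x for x in a if x < pivot]
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)

print(randomized_quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # prints [1, 1, 2, 3, 4, 5, 6, 9]
```

This sketch trades the in-place partitioning of classical quicksort for clarity; the expected O(n log n) running time comes from the random pivot choice, not from the partitioning style.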