(R18A0507) Design and Analysis of Algorithms-6-13


UNIT I:

Introduction- Algorithm definition, Algorithm Specification, Performance Analysis- Space


complexity, Time complexity, Randomized Algorithms.
Divide and conquer- General method, applications - Binary search, Merge sort, Quick sort,
Strassen's Matrix Multiplication.

Algorithm:
An Algorithm is a finite sequence of instructions, each of which has a clear meaning and can be
performed with a finite amount of effort in a finite length of time. No matter what the input values may
be, an algorithm terminates after executing a finite number of instructions. In addition, every algorithm
must satisfy the following criteria:

 Input: there are zero or more quantities, which are externally supplied;
 Output: at least one quantity is produced;
 Definiteness: each instruction must be clear and unambiguous;
 Finiteness: if we trace out the instructions of an algorithm, then for all cases the algorithm will
terminate after a finite number of steps;
 Effectiveness: every instruction must be sufficiently basic that it can in principle be carried out by a
person using only pencil and paper. It is not enough that each operation be definite; it must also
be feasible.

In formal computer science, one distinguishes between an algorithm and a program. A program does not
necessarily satisfy the fourth condition. One important example of such a program is a computer's
operating system, which never terminates (except for system crashes) but continues in a wait loop until
more jobs are entered.

We represent algorithms using a pseudo-language that combines the constructs of a
programming language with informal English statements.

Pseudo-code for expressing algorithms:


Algorithm Specification: An algorithm can be described in three ways.
1. Natural language like English: When this way is chosen, care should be taken to
ensure that each and every statement is definite.

2. Graphic representation called a flowchart: This method works well when the algorithm
is small and simple.

3. Pseudo-code method: In this method, we typically describe algorithms as programs
that resemble languages such as Pascal and Algol.

Pseudo-Code Conventions:

1. Comments begin with // and continue until the end of the line.

2. Blocks are indicated with matching braces { and }.

3. An identifier begins with a letter. The data types of variables are not explicitly declared.

4. Compound data types can be formed with records. Here is an example:

node = record
{
data type – 1 data-1;
.
.
.
data type – n data-n;
node *link;
}

Here link is a pointer to the record type node. Individual data items of a record can
be accessed with → and a period.
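As an illustrative sketch (the class name and fields are hypothetical, chosen to mirror the pseudo-code), a Python class can play the role of the record above, with the link field acting as the pointer:

```python
class Node:
    """Mirror of the pseudo-code record: data fields plus a link pointer."""
    def __init__(self, data, link=None):
        self.data = data   # stands in for the data-1 ... data-n fields
        self.link = link   # pointer to the next Node, or None

# Individual fields are accessed with a period, as in the conventions above.
head = Node(10, Node(20))
first = head.data         # field access on the record itself
second = head.link.data   # follow the link pointer, then access a field
```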

5. Assignment of values to variables is done using the assignment statement.

<Variable> := <expression>;

6. There are two Boolean values, TRUE and FALSE.

 Logical operators: AND, OR, NOT
 Relational operators: <, <=, >, >=, =, !=

7. The following looping statements are employed:

for, while and repeat-until


While Loop:
While < condition > do
{
<statement-1>
.
.
.

<statement-n>
}
For Loop:
For variable: = value-1 to value-2 step step do

{
<statement-1>
.
.
.
<statement-n>
}
repeat-until:

repeat
<statement-1>
.
.
.
<statement-n>
until <condition>
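For comparison, the three looping constructs can be mapped onto Python as follows; Python has no repeat-until, so that form is emulated with an unconditional loop and a break. This mapping is an illustration, not part of the original conventions.

```python
# while loop: the condition is tested before each iteration.
i = 0
while i < 3:
    i += 1

# for loop with a step, as in "for v := 1 to 10 step 3 do".
total = 0
for v in range(1, 11, 3):  # visits 1, 4, 7, 10
    total += v

# repeat-until: the body always runs at least once; Python emulates
# "until <condition>" by breaking out of an unconditional loop.
n = 0
while True:
    n += 1
    if n >= 3:  # until n >= 3
        break
```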

8. A conditional statement has the following forms.

 If <condition> then <statement>


 If <condition> then <statement-1>
Else <statement-2>

Case statement:

Case
{
: <condition-1>:<statement-1>
.
.
.
: <condition-n>:<statement-n>
: else : <statement-n+1>
}

9. Input and output are done using the instructions read & write.

10. There is only one type of procedure: Algorithm. The heading takes the form,

Algorithm <Name> (<Parameter list>)

 As an example, the following algorithm finds and returns the maximum of 'n' given
numbers:

1. Algorithm Max(A, n)
2. // A is an array of size n
3. {
4. Result := A[1];
5. for I := 2 to n do
6. if A[I] > Result then
7. Result := A[I];
8. return Result;
9. }

In this algorithm (named Max), A & n are procedure parameters; Result & I are local
variables.
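A direct Python rendering of Max, as a sketch; note that Python lists are 0-based, while the pseudo-code indexes A[1:n]:

```python
def algorithm_max(a):
    """Return the maximum of the n numbers in a, scanning left to right."""
    result = a[0]        # Result := A[1]
    for x in a[1:]:      # for I := 2 to n do
        if x > result:   # if A[I] > Result then
            result = x   # Result := A[I]
    return result
```

For example, algorithm_max([3, 9, 4]) returns 9.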

Algorithm:

1. Algorithm SelectionSort(a, n)
2. // Sort the array a[1:n] into non-decreasing order.
3. {
4. for i := 1 to n do
5. {
6. j := i;
7. for k := i+1 to n do
8. if (a[k] < a[j]) then j := k;
9. t := a[i]; a[i] := a[j]; a[j] := t;
10. }
11. }
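The same selection sort in runnable Python (a sketch using 0-based indexing): on each pass, the index of the smallest remaining element is found, then swapped into position.

```python
def selection_sort(a):
    """Sort the list a in place into non-decreasing order."""
    n = len(a)
    for i in range(n):
        j = i                      # j := i
        for k in range(i + 1, n):  # for k := i+1 to n do
            if a[k] < a[j]:
                j = k              # remember the smallest element so far
        a[i], a[j] = a[j], a[i]    # t := a[i]; a[i] := a[j]; a[j] := t
    return a
```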

Performance Analysis:
The performance of a program is the amount of computer memory and time needed to
run a program. We use two approaches to determine the performance of a program. One
is analytical, and the other is experimental. In performance analysis we use analytical
methods, while in performance measurement we conduct experiments.

Time Complexity:
The time needed by an algorithm expressed as a function of the size of a problem is
called the time complexity of the algorithm. The time complexity of a program is the
amount of computer time it needs to run to completion.
The limiting behavior of the complexity as size increases is called the asymptotic time
complexity. It is the asymptotic complexity of an algorithm, which ultimately determines
the size of problems that can be solved by the algorithm.

The Running time of a program


When solving a problem we are faced with a choice among algorithms. The basis for this
can be any one of the following:
i. We would like an algorithm that is easy to understand, code, and debug.
ii. We would like an algorithm that makes efficient use of the computer's
resources; especially, one that runs as fast as possible.

Measuring the running time of a program

The running time of a program depends on factors such as:


1. The input to the program.
2. The quality of code generated by the compiler used to create the object
program.
3. The nature and speed of the instructions on the machine used to execute the
program.
4. The time complexity of the algorithm underlying the program.
Statement                   s/e   Frequency   Total

1. Algorithm Sum(a, n)       0       -          0
2. {                         0       -          0
3.   s := 0.0;               1       1          1
4.   for i := 1 to n do      1      n+1        n+1
5.   s := s + a[i];          1       n          n
6.   return s;               1       1          1
7. }                         0       -          0

The total step count is therefore 2n + 3.
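The frequency counts can be checked by instrumenting a Python version of Sum with an explicit step counter; the counter is an illustrative addition, not part of the algorithm itself.

```python
def sum_with_steps(a):
    """Sum the elements of a, counting steps as in the frequency table."""
    steps = 0
    s = 0.0
    steps += 1             # s := 0.0 executes once
    i = 0
    while True:
        steps += 1         # the loop test executes n+1 times
        if i >= len(a):
            break
        s += a[i]
        steps += 1         # the loop body executes n times
        i += 1
    steps += 1             # return s executes once
    return s, steps

# For n = 4 elements, the count is 2n + 3 = 11.
```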

Space Complexity:
The space complexity of a program is the amount of memory it needs to run to
completion. The space needed by a program has the following components:
Instruction space: Instruction space is the space needed to store the compiled
version of the program instructions. The amount of instruction space needed
depends on factors such as:
 The compiler used to compile the program into machine code.
 The compiler options in effect at the time of compilation.
 The target computer.
Data space: Data space is the space needed to store all constant and variable
values. Data space has two components:
 Space needed by constants and simple variables in the program.
 Space needed by dynamically allocated objects such as arrays and class
instances.
Environment stack space: The environment stack is used to save information
needed to resume execution of partially completed functions.

The space requirement S(P) of any algorithm P may therefore be written as
S(P) = c + Sp(instance characteristics),
where 'c' is a constant.

Example 2:

Algorithm Sum(a, n)
{
s := 0.0;
for i := 1 to n do
s := s + a[i];
return s;
}

 The problem instances for this algorithm are characterized by n, the number of
elements to be summed. The space needed by 'n' is one word, since it is of type
integer.
 The space needed by 'a' is the space needed by variables of type array of floating
point numbers. This is at least 'n' words, since 'a' must be large enough to hold the 'n'
elements to be summed.
 So we obtain Ssum(n) >= n + 3
(n words for a[], and one word each for n, i, and s).

Complexity of Algorithms
The complexity of an algorithm M is the function f(n) which gives the running time
and/or storage space requirement of the algorithm in terms of the size 'n' of the input
data. Mostly, the storage space required by an algorithm is simply a multiple of the data
size 'n'. Here, complexity refers to the running time of the algorithm.
The function f(n), the running time of an algorithm, depends not only on the
size 'n' of the input data but also on the particular data. The complexity function f(n) for
certain cases is:
1. Best Case : The minimum possible value of f(n) is called the best case.

2. Average Case : The expected value of f(n).

3. Worst Case : The maximum value of f(n) over all possible inputs.

Asymptotic Notations:
The following notations are commonly used in performance analysis
to characterize the complexity of an algorithm:

1. Big–OH (O)
2. Big–OMEGA (Ω)
3. Big–THETA (Θ) and
4. Little–OH (o)

Big–OH O (Upper Bound)

f(n) = O(g(n)) (pronounced "order of" or "big oh") says that the growth rate of f(n) is less
than or equal to (≤) that of g(n).

Big–OMEGA Ω (Lower Bound)

f(n) = Ω(g(n)) (pronounced omega) says that the growth rate of f(n) is greater than or
equal to (≥) that of g(n).

Big–THETA Θ (Same order)
f(n) = Θ(g(n)) (pronounced theta) says that the growth rate of f(n) equals (=) the
growth rate of g(n) [i.e., f(n) = O(g(n)) and f(n) = Ω(g(n))].

little-o notation
Definition: A theoretical measure of the execution of an algorithm, usually the time or memory needed,
given the problem size n, which is usually the number of items. Informally, saying some equation f(n) =
o(g(n)) means f(n) becomes insignificant relative to g(n) as n approaches infinity. The notation is read, "f
of n is little oh of g of n".
Formal Definition: f(n) = o(g(n)) means for all c > 0 there exists some k > 0 such that 0 ≤ f(n) < cg(n) for
all n ≥ k. The value of k must not depend on n, but may depend on c.
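The definition can be seen numerically: for f(n) = n and g(n) = n log2 n, the ratio f(n)/g(n) = 1/log2 n shrinks toward 0 as n grows, which is exactly what f(n) = o(g(n)) requires. The particular f and g here are illustrative choices.

```python
import math

def ratio(n):
    """f(n)/g(n) for f(n) = n and g(n) = n*log2(n); tends to 0 as n grows."""
    return n / (n * math.log2(n))

ratios = [ratio(n) for n in (10, 10**3, 10**6)]
# The sequence is strictly decreasing, so for every c > 0 some
# threshold k exists beyond which f(n) < c*g(n).
```

By contrast, f(n) = 3n is O(n) but not o(n), since the ratio 3n/n stays fixed at 3 and never falls below c for small c.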

Different time complexities


Suppose 'M' is an algorithm, and suppose 'n' is the size of the input data. Clearly
the complexity f(n) of M increases as n increases. It is usually the rate of increase of
f(n) that we want to examine. This is usually done by comparing f(n) with some standard
functions. The most common computing times are:

O(1), O(log2 n), O(n), O(n log2 n), O(n^2), O(n^3), O(2^n), n! and n^n

Classification of Algorithms

Let 'n' be the number of data items to be processed, the degree of a polynomial, the size
of the file to be sorted or searched, the number of nodes in a graph, etc.

1	Most instructions of most programs are executed once or at most only a few
times. If all the instructions of a program have this property, we say that its
running time is a constant.

log n	When the running time of a program is logarithmic, the program gets
slightly slower as n grows. This running time commonly occurs in
programs that solve a big problem by transforming it into a smaller
problem, cutting the size by some constant fraction. When n is a million,
log2 n is about 20. Whenever n doubles, log n increases by a constant, but
log n does not double until n increases to n^2.
n	When the running time of a program is linear, it is generally the case that a
small amount of processing is done on each input element. This is the
optimal situation for an algorithm that must process n inputs.

n log n	This running time arises for algorithms that solve a problem by breaking it
up into smaller sub-problems, solving them independently, and then
combining the solutions. When n doubles, the running time more than
doubles.

n^2	When the running time of an algorithm is quadratic, it is practical for use
only on relatively small problems. Quadratic running times typically arise
in algorithms that process all pairs of data items (perhaps in a double-nested
loop); whenever n doubles, the running time increases fourfold.

n^3	Similarly, an algorithm that processes triples of data items (perhaps in a
triple-nested loop) has a cubic running time and is practical for use only on
small problems. Whenever n doubles, the running time increases eightfold.

2^n	Few algorithms with exponential running time are likely to be appropriate
for practical use; such algorithms arise naturally as "brute-force" solutions
to problems. Whenever n doubles, the running time squares.

Numerical Comparison of Different Algorithms

The execution time for six of the typical functions is given below:

n      log2 n   n*log2 n   n^2      n^3          2^n
1      0        0          1        1            2
2      1        2          4        8            4
4      2        8          16       64           16
8      3        24         64       512          256
16     4        64         256      4,096        65,536
32     5        160        1,024    32,768       4,294,967,296
64     6        384        4,096    262,144      Note 1
128    7        896        16,384   2,097,152    Note 2
256    8        2,048      65,536   16,777,216   ????????

Note 1: The value here is approximately the number of machine instructions
executed by a 1-gigaflop computer in 5000 years.
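The table entries can be regenerated with a few lines of Python; since n is always a power of two here, log2 n is exact:

```python
rows = []
for k in range(9):  # n = 1, 2, 4, ..., 256
    n = 2 ** k
    # (n, log2 n, n*log2 n, n^2, n^3, 2^n)
    rows.append((n, k, n * k, n ** 2, n ** 3, 2 ** n))

for n, lg, nlg, n2, n3, p2 in rows:
    print(n, lg, nlg, n2, n3, p2)
```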

Randomized algorithms:
An algorithm that uses random numbers to decide what to do next anywhere in its logic is called a
Randomized Algorithm. For example, in Randomized Quick Sort, we use a random number to pick the next
pivot (or we randomly shuffle the array). Quicksort is a familiar, commonly used algorithm in which
randomness can be useful. Any deterministic version of this algorithm requires O(n^2) time to
sort n numbers for some well-defined class of degenerate inputs (such as an already sorted array), with the
specific class of inputs that generate this behavior defined by the protocol for pivot selection. However, if
the algorithm selects pivot elements uniformly at random, it has a provably high probability of finishing
in O(n log n) time regardless of the characteristics of the input. Typically, this randomness is used to
reduce time complexity or space complexity in other standard algorithms.
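A minimal randomized quicksort sketch in Python (list-building rather than in-place partitioning, for clarity; random.choice supplies the uniformly random pivot):

```python
import random

def randomized_quicksort(a):
    """Quicksort with a uniformly random pivot, so no fixed input order
    (e.g. an already sorted array) can reliably force O(n^2) behaviour."""
    if len(a) <= 1:
        return list(a)
    pivot = random.choice(a)  # uniformly random pivot selection
    less    = [x for x in a if x < pivot]
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)
```

With a random pivot, the expected running time is O(n log n) regardless of the input's initial order.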
