Class 10 To Class 12 Additional Notes and Materials
What is an algorithm?
An algorithm is a set of steps to complete a task.
For example,
Task: to make a cup of tea.
Algorithm: boil water, add tea leaves, add milk and sugar, and serve.
More formally, an algorithm is "a set of steps to accomplish or complete a task that is described precisely enough that a computer can run it".
Described precisely: it is very difficult for a machine to know how much water or milk to add, etc., in the tea-making algorithm above.
GPS navigation uses shortest-path algorithms. Online shopping uses cryptography, which relies on the RSA algorithm.
Characteristics of an algorithm:-
• Correctness:-
Correct: An algorithm must produce the correct result.
May produce an incorrect answer: Even if an algorithm fails to give the correct result every time, there can still be control over how often it gives a wrong result. E.g. the Rabin-Miller primality test (used in the RSA algorithm) does not give the correct answer every time: roughly 1 out of 2^50 runs it gives an incorrect result, and the error probability can be made arbitrarily small by repeating the test.
Approximation algorithm: An exact solution is not found, but a near-optimal solution can be found. (Applied to optimization problems.)
• Less resource usage:
Algorithms should use fewer resources (time and space).
• Less resource usage:
Algorithms should use less resources (time and space).
Resource usage:
Here, time is considered the primary measure of efficiency. We are also concerned with how much computer memory an algorithm uses, but mostly time is the resource that is dealt with. The actual running time depends on a variety of factors: the speed of the computer, the language in which the algorithm is implemented, the compiler/interpreter, the skill of the programmer, etc.
So, resource usage can mainly be divided into:
1. Memory (space)
2. Time: how fast the function that characterizes the running time grows with the input size — the "rate of growth of the running time".
The algorithm with the lower rate of growth of running time is considered better.
Algorithms are just like a technology. We all use the latest and greatest processors, but we need to run implementations of good algorithms on that computer in order to properly get the benefit of the money we spent on the latest processor. Let's make this concrete by pitting a faster computer (computer A) running a sorting algorithm whose running time on n values grows like n² against a slower computer (computer B) running a sorting algorithm whose running time grows like n lg n. They each must sort an array of 10 million numbers. Suppose that computer A executes 10 billion instructions per second (faster than any single sequential computer at the time of this writing) and computer B executes only 10 million instructions per second, so that computer A is 1000 times faster than computer B in raw computing power. To make the difference even more dramatic, suppose that the world's craftiest programmer codes in machine language for computer A, and the resulting code requires 2n² instructions to sort n numbers. Suppose further that just an average programmer writes for computer B, using a high-level language with an inefficient compiler, with the resulting code taking 50n lg n instructions. Then computer A needs 2·(10^7)²/10^10 = 20,000 seconds (more than 5.5 hours), while computer B needs about 50·10^7·lg(10^7)/10^7 ≈ 1163 seconds (less than 20 minutes): the slower computer running the better algorithm wins.
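The arithmetic in this example can be checked with a short script (a sketch; the instruction counts 2n² and 50n lg n and the machine speeds are the assumptions stated above):

```python
from math import log2

n = 10**7  # 10 million numbers to sort

# Computer A: 10 billion instructions/second, code takes 2n^2 instructions.
time_a = 2 * n**2 / 10**10          # seconds

# Computer B: 10 million instructions/second, code takes 50 n lg n instructions.
time_b = 50 * n * log2(n) / 10**7   # seconds

print(f"Computer A: {time_a:.0f} s (~{time_a / 3600:.1f} hours)")
print(f"Computer B: {time_b:.0f} s (~{time_b / 60:.1f} minutes)")
```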
Before going on to growth of functions and asymptotic notation, let us see how to analyse an algorithm.
Let us form an algorithm for insertion sort (which sorts a sequence of numbers). The pseudo code for the algorithm is given below.
Pseudo code:
for j = 2 to A.length ---------------------------------------------C1
    key = A[j]-----------------------------------------------------C2
    // Insert A[j] into the sorted sequence A[1..j-1] -------------C3
    i = j - 1------------------------------------------------------C4
    while i > 0 and A[i] > key ------------------------------------C5
        A[i+1] = A[i]----------------------------------------------C6
        i = i - 1--------------------------------------------------C7
    A[i+1] = key---------------------------------------------------C8
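A runnable version of this pseudo code (a sketch in Python, using 0-based indexing instead of the 1-based indexing above):

```python
def insertion_sort(a):
    """Sort list a in place and return it (0-based version of the pseudo code)."""
    for j in range(1, len(a)):          # C1: for j = 2 to A.length
        key = a[j]                      # C2
        # Insert a[j] into the sorted prefix a[0..j-1]   (C3)
        i = j - 1                       # C4
        while i >= 0 and a[i] > key:    # C5
            a[i + 1] = a[i]             # C6
            i -= 1                      # C7
        a[i + 1] = key                  # C8
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # → [1, 2, 3, 4, 5, 6]
```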
Let Ci be the cost of the ith line. Since the comment line will not incur any cost, C3 = 0. Let tj be the number of times the while-loop test on line 5 is executed for that value of j.

line | cost   | times
1    | C1     | n
2    | C2     | n - 1
3    | C3 = 0 | n - 1
4    | C4     | n - 1
5    | C5     | sum over j = 2 to n of tj
6    | C6     | sum over j = 2 to n of (tj - 1)
7    | C7     | sum over j = 2 to n of (tj - 1)
8    | C8     | n - 1

T(n) = C1·n + (C2 + C4 + C8)(n - 1) + C5·Σ tj + (C6 + C7)·Σ (tj - 1)
Best case: The array is already sorted. Then the while-loop test runs only once for each j (tj = 1), and the running time is a linear function of n: T(n) = an + b.
Worst case: The array is in reverse sorted order. Then each A[j] must be compared with every element of the sorted prefix (tj = j), and the running time is a quadratic function of n: T(n) = an² + bn + c.
• The worst-case running time gives a guaranteed upper bound on the running time for any input.
• For some algorithms, the worst case occurs often. For example, when searching, the worst case often occurs when the item being searched for is not present, and searches for absent items may be frequent.
• Why not analyze the average case? Because it’s often about as bad as the worst case.
Order of growth:
It is described by the highest-degree term of the formula for the running time. (Drop lower-order terms. Ignore the constant coefficient in the leading term.)
Example: We found that for insertion sort the worst-case running time is of the form
an² + bn + c.
Drop the lower-order terms: what remains is an². Ignore the constant coefficient: the result is n². But we cannot say that the worst-case running time T(n) equals n²; rather, it grows like n². We say that the running time is Θ(n²) to capture the notion that the order of growth is n².
We usually consider one algorithm to be more efficient than another if its worst-case running time has a smaller order of growth.
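A quick numerical check of why lower-order terms can be dropped (the coefficients a = 2, b = 100, c = 1000 here are made up for illustration):

```python
a, b, c = 2, 100, 1000  # hypothetical coefficients of an^2 + bn + c

for n in [10, 1000, 10**6]:
    exact = a * n**2 + b * n + c
    leading = a * n**2
    ratio = exact / leading
    print(n, ratio)  # the ratio approaches 1 as n grows
```

For small n the lower-order terms matter, but by n = 10^6 they contribute well under 0.01% of the total.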
Recurrences
A divide-and-conquer algorithm follows this pattern:
• If the given instance of the problem is small or simple enough, just solve it.
• Otherwise, reduce the problem to one or more smaller instances of the same problem.
The running time of such an algorithm is naturally described by a recurrence. E.g. the worst-case running time T(n) of the merge sort procedure can be expressed as the recurrence
T(n) = Θ(1)             if n = 1
       2T(n/2) + Θ(n)   if n > 1
whose solution can be found to be T(n) = Θ(n log n).
1. SUBSTITUTION METHOD:
We substitute the guessed solution for the function when applying the inductive
hypothesis to smaller values. Hence the name “substitution method”. This method is powerful,
but we must be able to guess the form of the answer in order to apply it.
First, a rough guess for T(n) = 4T(n/2) + n. Ignoring the +n term:
if f(n) = 4f(n/2), then f(2n) = 4f(n), which is satisfied by f(n) = n².
So we expect T(n) to be of order n².
Guess T(n) = O(n³).
Assume T(k) ≤ ck³ for all k < n. Then
T(n) = 4T(n/2) + n
     ≤ 4c(n/2)³ + n
     = cn³/2 + n
     = cn³ - (cn³/2 - n)
     ≤ cn³,   since cn³/2 - n ≥ 0 for all n ≥ 1 when c ≥ 2.
So the assumption holds, and T(n) = O(n³).
Can we do better? Guess T(n) = O(n²) and assume T(k) ≤ ck². Then
T(n) = 4T(n/2) + n
     ≤ 4c(n/2)² + n
     = cn² + n
which never lets us conclude T(n) ≤ cn². But if we strengthen the inductive hypothesis to T(k) ≤ c1·k² - c2·k, the induction goes through, and we can conclude that T(n) = O(n²).
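A numerical sanity check of the strengthened hypothesis (assuming the base case T(1) = 1, which the notes do not state; with that choice the bound c1·n² - c2·n is exact for c1 = 2, c2 = 1):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """The recurrence T(n) = 4T(n/2) + n with T(1) = 1, for n a power of 2."""
    return 1 if n == 1 else 4 * T(n // 2) + n

for n in [2**k for k in range(11)]:
    assert T(n) == 2 * n * n - n  # matches c1*n^2 - c2*n with c1 = 2, c2 = 1
print("T(n) = 2n^2 - n for all tested powers of 2, so T(n) = O(n^2)")
```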
2. BY ITERATIVE METHOD:
We expand the recurrence repeatedly until a pattern emerges.
e.g. T(n) = 2T(n/2) + n
          = 2²T(n/2²) + 2n
          = 2³T(n/2³) + 3n
          ...
          = 2^k T(n/2^k) + kn
Taking k = log n (so that n/2^k = 1):
T(n) = nT(1) + n log n = Θ(n log n)
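Checking the expansion numerically (assuming the base case T(1) = 0, so the closed form nT(1) + n log n reduces to exactly n log₂ n):

```python
from math import log2

def T(n):
    """The recurrence T(n) = 2T(n/2) + n with T(1) = 0, for n a power of 2."""
    return 0 if n == 1 else 2 * T(n // 2) + n

for n in [2, 8, 64, 1024]:
    assert T(n) == n * log2(n)  # n*T(1) + n*log n, with T(1) = 0
print("T(n) = n log2(n) for all tested powers of 2")
```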
3. BY RECURSION TREE METHOD:
In a recursion tree, each node represents the cost of a single sub-problem somewhere in the set of recursive invocations. We sum the costs within each level of the tree to obtain a set of per-level costs, and then we sum all the per-level costs to determine the total cost of all levels of the recursion.
Example: constructing a recursion tree for the recurrence T(n) = 3T(n/4) + cn². The tree is expanded level by level until it is fully grown; the fully expanded tree has height log₄ n (it has log₄ n + 1 levels). Level i contains 3^i nodes, each costing c(n/4^i)², so the cost of level i is (3/16)^i cn², and the leaves together cost Θ(n^(log₄ 3)).
Summing over all levels:
T(n) = Σ (3/16)^i cn²  (i from 0 to log₄ n - 1)  + Θ(n^(log₄ 3))
     < (1 / (1 - 3/16)) cn² + Θ(n^(log₄ 3))
     = (16/13) cn² + Θ(n^(log₄ 3))
     = O(n²)
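A numerical check of the recursion-tree bound (assuming c = 1 and T(1) = 1, which the notes leave unspecified):

```python
def T(n):
    """The recurrence T(n) = 3T(n/4) + n^2 with T(1) = 1, for n a power of 4."""
    return 1 if n == 1 else 3 * T(n // 4) + n * n

for n in [4, 16, 64, 256, 1024]:
    bound = (16 / 13) * n * n  # geometric-series bound on the level costs
    assert T(n) < bound + n    # + n generously covers the Theta(n^log4(3)) leaf term
print("T(n) stays below (16/13) n^2 plus the leaf term, confirming O(n^2)")
```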
4. BY MASTER METHOD:
The master method applies to recurrences of the form
T(n) = aT(n/b) + f(n)
where a ≥ 1 and b > 1 are constants and f(n) is an asymptotically positive function. Compare f(n) with n^(log_b a):
1. If f(n) = O(n^(log_b a - ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).
2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n).
3. If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and if a·f(n/b) ≤ c·f(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).
e.g. T(n) = 2T(n/2) + n log n. Here n^(log_b a) = n and f(n) = n log n, which is larger than n but not polynomially larger, so case 3 does not apply; by an extension of case 2,
T(n) = Θ(n log² n)
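The claimed Θ(n log² n) growth can be checked numerically (assuming T(1) = 0; with that base case the recurrence works out to exactly n·lg n·(lg n + 1)/2, which is Θ(n log² n)):

```python
from math import log2

def T(n):
    """The recurrence T(n) = 2T(n/2) + n*lg(n) with T(1) = 0, for n a power of 2."""
    return 0 if n == 1 else 2 * T(n // 2) + n * log2(n)

for n in [2, 16, 256, 4096]:
    k = log2(n)
    assert abs(T(n) - n * k * (k + 1) / 2) < 1e-6  # Theta(n log^2 n)
print("T(n) = n lg(n) (lg(n) + 1) / 2 for all tested powers of 2")
```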
Merge sort
It is one of the well-known divide-and-conquer algorithms. It is a simple and very efficient algorithm for sorting a list of numbers.
How can we apply divide-and-conquer to sorting? Here are the major elements of the merge sort algorithm.
Divide: Split A down the middle into two sub-sequences, each of size roughly n/2.
Conquer: Sort each sub-sequence recursively.
Combine: Merge the two sorted sub-sequences into a single sorted list.
The dividing process ends when we have split the sub-sequences down to a single item. A sequence of length one is trivially sorted. The key operation, where all the work is done, is the combine stage, which merges two sorted lists into a single sorted list. It turns out that the merging process is quite easy to implement.
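The three steps above can be sketched in Python as:

```python
def merge_sort(a):
    """Sort a list of numbers with merge sort (divide, conquer, combine)."""
    if len(a) <= 1:                  # a sequence of length one is trivially sorted
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])       # divide and conquer: sort each half recursively
    right = merge_sort(a[mid:])
    return merge(left, right)        # combine: merge the two sorted halves

def merge(left, right):
    """Merge two sorted lists into a single sorted list."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])             # at most one of these still has elements
    out.extend(right[j:])
    return out

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # → [3, 9, 10, 27, 38, 43, 82]
```

Note that all the comparison work happens in merge, matching the observation that the combine stage is where the work is done.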