
1. What is an algorithm? Explain the characteristics of an algorithm.

Answer-

 An algorithm is a finite, step-by-step list of well-defined instructions for solving a particular problem.

Example-

Algorithm: (Linear Search) LINEAR(LA, N, ITEM, LOC)

Here LA is a linear array with N elements and ITEM is a given element to be searched. This algorithm
finds the location LOC of ITEM in LA, or sets LOC = 0 if the search is unsuccessful.

Step-1 [Insert ITEM at the end of LA] Set LA[N+1] = ITEM.

Step-2 [Initialize Counter] Set LOC = 1.

Step-3 [Search for ITEM]

Repeat while LA[LOC] ≠ ITEM

Set: LOC = LOC + 1.

[End of the Loop]

Step-4 [Successful?] If LOC = N + 1, then:

Set: LOC = 0. [ITEM is not in the array.]

Else:

ITEM is in the array at location LOC.

[End of If structure.]

Step-5 Exit.
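For reference, a minimal C sketch of the same sentinel-based linear search (the function name linear_search and the demo array are illustrative additions, not part of the original algorithm):

#include <stdio.h>

/* Sentinel linear search: returns the 1-based location of item in la[1..n],
   or 0 if item is not present. la must have a spare slot at index n+1. */
int linear_search(int la[], int n, int item)
{
    la[n + 1] = item;          /* Step 1: insert item at the end as a sentinel */
    int loc = 1;               /* Step 2: initialize the counter               */
    while (la[loc] != item)    /* Step 3: scan until item (or the sentinel)    */
        loc = loc + 1;
    if (loc == n + 1)          /* Step 4: sentinel reached => unsuccessful     */
        loc = 0;
    return loc;                /* Step 5: exit                                 */
}

int main(void)
{
    /* Index 0 is unused; one extra slot is reserved for the sentinel. */
    int la[9] = {0, 10, 3, 9, 5, 10, 11, 4, 0};
    printf("LOC of 4  = %d\n", linear_search(la, 7, 4));   /* prints 7 */
    printf("LOC of 99 = %d\n", linear_search(la, 7, 99));  /* prints 0 */
    return 0;
}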

Formal Representation of an Algorithm:-

 The format for the formal representation of an algorithm consists of two parts:
a. Paragraph - It states the purpose of the algorithm and lists the input data.
b. Steps - The steps that are to be executed.

Characteristics of an Algorithm: -

o Input: An algorithm should take zero or more inputs.
o Output: An algorithm should produce one or more outputs at the end of its execution.
o Unambiguity: An algorithm should be unambiguous, which means that the instructions in an
algorithm should be clear and simple.
o Finiteness: The algorithm should contain a limited number of instructions or steps.
o Effectiveness: A human should be able to calculate the values involved in the procedure of
the algorithm using pen and paper.
o Termination: An algorithm must terminate after a finite number of steps.

2. What is meant by Algorithm Analysis? Why is it important?

Answer-

 Algorithm analysis is the study of the performance (the speed with which it solves a problem) of an algorithm.

 Algorithm analysis provides theoretical estimation for the required resources by an algorithm to
solve a specific computational problem.

 Analysis of algorithms is the determination of the amount of time and space resources required
to execute them.

Importance of Algorithm Analysis:-


 Algorithm Analysis is used to predict the behaviour of an algorithm without implementing it
on a specific computer.
 The analysis is thus only an approximation; it is not perfect.
 By analysing different algorithms, we can compare them to determine the best one for our
purpose.
3. What do you mean by the complexity of an algorithm?
Answer-
 In order to compare algorithms we must have some criteria to measure the efficiency of our
algorithms.
 The complexity of an algorithm is the function f(n) that gives the running time or storage space
required by the algorithm in terms of the size n of the input data.
 Two main measures for the efficiency of algorithm are:
a. Time complexity
b. Space complexity
Time complexity-
 The time complexity of an algorithm is the amount of time it needs to run for its completion.
 Some of the reasons for analysing time complexity:
a. We may want to know in advance whether the program will provide a
satisfactory real-time response.
b. There may be several possible solutions with different time requirements, and we want to
choose the best one among them.
 Usually, the time required by an algorithm falls under three types −
a. Worst Case Analysis (Mostly used)
b. Best Case Analysis (Very Rarely used)
c. Average Case Analysis (Rarely used)

Worst Case Analysis (Mostly used)


 Maximum time required for program execution.
 In the worst-case analysis, we calculate the upper bound on the running time of an algorithm.
 We must know the case that causes a maximum number of operations to be executed.
Example-
 For Linear Search, the worst case happens when the element to be searched is not present in
the array or is the last element.
 Let LA be a linear array and let the element to be searched be ITEM = 4:
10 3 9 5 10 11 4

 When the element is not present or is the last element, the search function compares ITEM with all
the elements of LA one by one.
 Therefore, the worst-case time complexity of linear search is O(n); here n = 7, so 7 comparisons
are made.
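A small C sketch with an explicit comparison counter (the plain 0-indexed loop and the name count_comparisons are illustrative assumptions, not the sentinel algorithm from question 1) that makes the worst-case count visible:

#include <stdio.h>

/* Counts the comparisons a simple 0-indexed linear search performs. */
int count_comparisons(const int arr[], int n, int item)
{
    int count = 0;
    for (int i = 0; i < n; i++) {
        count++;                 /* one comparison of arr[i] with item */
        if (arr[i] == item)
            break;
    }
    return count;                /* equals n when item is absent or is the last element */
}

int main(void)
{
    int arr[7] = {10, 3, 9, 5, 10, 11, 4};
    printf("%d\n", count_comparisons(arr, 7, 4));    /* 4 is the last element: prints 7 (= n) */
    printf("%d\n", count_comparisons(arr, 7, 99));   /* 99 is absent:          prints 7 (= n) */
    return 0;
}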

Best Case Analysis (Very Rarely used)


 Minimum time required for program execution.
 In the best-case analysis, we calculate the lower bound on the running time of an algorithm.
 We must know the case that causes a minimum number of operations to be executed.
Example-
 In the linear search problem, the best case occurs when ITEM is present at the first location.
 Let LA be a linear array and let the element to be searched be ITEM = 4:
4 10 3 9 5 10 11

 The number of operations in the best case is constant (not dependent on n).
 So time complexity in the best case would be Ω(1)

Average Case Analysis (Rarely used)


 Average time required for program execution.
 In average case analysis, we take all possible inputs and calculate the computing time for all
of the inputs.
 Sum all the calculated values and divide the sum by the total number of inputs.
 We must know (or predict) the distribution of cases.
Example-
 For the linear search problem, let us assume that all cases are uniformly distributed (including
the case of ITEM not being present in the array).
 So, we sum the cost of all the cases and divide the sum by (n + 1).
[ θ(1) + θ(2) + ... + θ(n+1) ] / (n + 1) = [ (n + 1)(n + 2) / 2 ] / (n + 1)
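Evaluating this expression (a short worked check of the formula above): the numerator is 1 + 2 + ... + (n + 1) = (n + 1)(n + 2)/2, and dividing by (n + 1) leaves (n + 2)/2, which grows linearly with n. Hence the average-case time complexity of linear search is θ(n).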
Space complexity-
 The space complexity of an algorithm is the amount of memory it needs to run for its
completion.
 Some of the reasons for analysing space complexity:
a. There may be several possible solutions with different space requirements, and we want
to choose the best one among them.
b. We may want to estimate in advance the size of the largest problem that a program
can solve.
Example-
 The input to Linear Search involves:
a. A list/ array of N elements
b. A variable storing the element to be searched.
 As the amount of extra memory used by Linear Search is fixed (it does not grow with N), the
space complexity of Linear Search is O(1).

4. What is Asymptotic analysis of an algorithm?


Answer-

 Asymptotic analysis of an algorithm refers to defining mathematical bounds on its run-time performance.
 In Asymptotic Analysis, we evaluate the performance of an algorithm in terms of input size
(we don't measure the actual running time).
 Asymptotic analysis is input-bound, i.e., if there is no input to the algorithm, it is concluded to
work in constant time.
 Other than the "input" all other factors are considered constant.
 Using asymptotic analysis, we can very well conclude the best case, average case, and worst
case scenario of an algorithm.
Example: -
 Suppose the running time of one operation is computed as f(n) = n and that of another
operation is computed as g(n) = n².
 This means the running time of the first operation will increase linearly with the increase in n,
while the running time of the second operation will increase quadratically as n increases.
 Similarly, the running times of both operations will be nearly the same if n is significantly
small.
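A tiny C sketch (the loop bounds and printed values are only illustrative) that prints both growth functions for a few input sizes, showing that they are close for small n but diverge quickly:

#include <stdio.h>

int main(void)
{
    /* f(n) = n grows linearly, g(n) = n * n grows quadratically. */
    for (long n = 1; n <= 1000; n *= 10) {
        printf("n = %5ld   f(n) = %7ld   g(n) = %10ld\n", n, n, n * n);
    }
    return 0;
}

For n = 1 both functions are equal, but by n = 1000 the second has already grown to 1,000,000 while the first is still 1,000.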

5. What is an Asymptotic notation? Explain different Asymptotic notations.

Answer-

 Asymptotic notations are mathematical tools to represent the time complexity of algorithms
for asymptotic analysis.

There are mainly three asymptotic notations:-

1. Big-O Notation (O-notation)


2. Omega Notation (Ω-notation)
3. Theta Notation (Θ-notation)

Big-O Notation (O-notation):


 Big-O notation represents the upper bound of the running time of an algorithm.
 Therefore, it gives the worst-case complexity of an algorithm.

 Mathematically-

O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0 }

Where-

f(n) = algorithm runtime

g(n)= an arbitrary time complexity we are trying to relate to our algorithm.

n0 = the point from which the inequality holds, and it continues to hold for all larger n.

c = a positive constant such that c > 0.

n = input size such that n ≥ n0.

 It gives the highest possible growth rate (an upper bound) of the running time for a given input size.
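As an illustrative check of the constants in this definition (the function f(n) = 3n + 2 below is an assumed example, not taken from the algorithm above): let f(n) = 3n + 2 and g(n) = n. Then 3n + 2 <= 4n for all n >= 2, so choosing c = 4 and n0 = 2 gives 0 <= f(n) <= c*g(n) for all n >= n0, and hence f(n) = O(n).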

Example-
 For Linear Search, the worst case happens when the element to be searched is not present in
the array or is the last element.
 Let LA be a linear array and let the element to be searched be ITEM = 4:
10 3 9 5 10 11 4
 When the element is not present or is the last element, the search function compares ITEM with all
the elements of LA one by one.
 Therefore, the worst-case time complexity of linear search is O(n); here n = 7, so 7 comparisons
are made.
Omega Notation (Ω-Notation): -

 Omega notation represents the lower bound of the running time of an algorithm.

 Thus, it provides the best-case complexity of an algorithm.

 It describes the shortest amount of time in which an algorithm can complete its execution.

 Mathematical Representation of Omega notation:-

Ω(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0 }

Where-

f(n) = algorithm runtime

g(n)= an arbitrary time complexity we are trying to relate to our algorithm.

n0 = the point from which the inequality holds, and it continues to hold for all larger n.

c = positive constant such that c > 0.

n = input size such that n ≥ n0.
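As an illustrative check of these constants (again assuming the example f(n) = 3n + 2, not taken from the document): with g(n) = n we have 3n <= 3n + 2 for all n >= 1, so choosing c = 3 and n0 = 1 gives 0 <= c*g(n) <= f(n) for all n >= n0, and hence f(n) = Ω(n).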


Example-

 In the linear search problem, the best case occurs when ITEM is present at the first location.
 Let LA be a linear array and let the element to be searched be ITEM = 4:
4 10 3 9 5 10 11

 The number of operations in the best case is constant (not dependent on n).
 So time complexity in the best case would be Ω(1)

Theta Notation (Θ-Notation):-

 Theta notation encloses the function from above and below.


 Since it represents the upper and the lower bound of the running time of an algorithm, it is
used for analysing the average-case complexity of an algorithm.
 Theta notation captures the exact asymptotic behaviour (a tight bound).
 It provides both an upper and a lower boundary for a given input size.
 Mathematical Representation of Theta notation:-

Θ (g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0 ≤ c2 * g(n) ≤ f(n) ≤ c1
* g(n) for all n ≥ n0}

Where-

f(n) = algorithm runtime

g(n)= an arbitrary time complexity we are trying to relate to our algorithm.

n0 = the point from which the inequality holds, and it continues to hold for all larger n.

c1, c2 = positive constants such that c1, c2 > 0.

n = input size such that n ≥ n0.
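As an illustrative check of these constants (continuing the assumed example f(n) = 3n + 2 with g(n) = n): 3n <= 3n + 2 <= 4n for all n >= 2, so choosing c2 = 3, c1 = 4 and n0 = 2 gives 0 <= c2*g(n) <= f(n) <= c1*g(n) for all n >= n0, and hence f(n) = Θ(n).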


Example-
 For the linear search problem, let us assume that all cases are uniformly distributed (including
the case of ITEM not being present in the array).
 So, we sum the cost of all the cases and divide the sum by (n + 1).
 Following is the value of average-case time complexity.

[ θ(1) + θ(2) + ... + θ(n+1) ] / (n + 1) = [ (n + 1)(n + 2) / 2 ] / (n + 1)

---------------------------------------------------------------------------------------------------------------------------

6. What is the difference between Big Oh, Big Omega and Big Theta?

Answer-

1. Comparison: Big Oh (O) is like (<=) - the rate of growth of an algorithm is less than or equal to a specific value. Big Omega (Ω) is like (>=) - the rate of growth is greater than or equal to a specified value. Big Theta (Θ) is like (==) - the rate of growth is equal to a specified value.

2. Bound given: the asymptotic upper bound is given by Big O notation, the asymptotic lower bound is given by Omega notation, and Theta notation bounds the function from above and below, capturing its exact asymptotic behaviour.

3. In short: Big Oh (O) - Upper Bound; Big Omega (Ω) - Lower Bound; Big Theta (Θ) - Tight Bound.

4. Meaning: the upper bound is the most amount of time an algorithm can require (its worst-case performance); the lower bound is the least amount of time it can require (its most efficient, best-case performance); the tight bound is both an upper and a lower bound, so it fixes the growth rate exactly.

5. Mathematically: Big Oh is 0 <= f(n) <= c*g(n) for all n >= n0; Big Omega is 0 <= c*g(n) <= f(n) for all n >= n0; Big Theta is 0 <= c2*g(n) <= f(n) <= c1*g(n) for all n >= n0.

---------------------------------------------------------------------------------------------------------------------------

7. What are the different properties of Asymptotic Notations?


Answer-
1. General Properties:
 If f(n) is O(g(n)) then a*f(n) is also O(g(n)), where a is a constant.
Example:
f(n) = 2n²+5 is O(n²)
then, 7*f(n) = 7(2n²+5) = 14n²+35 is also O(n²).

Similarly, this property holds for both Θ and Ω notation.

We can say,
If f(n) is Θ(g(n)) then a*f(n) is also Θ(g(n)), where a is a constant.
If f(n) is Ω (g(n)) then a*f(n) is also Ω (g(n)), where a is a constant.

2. Transitive Properties:

 If f(n) is O(g(n)) and g(n) is O(h(n)) then f(n) = O(h(n)).


Example:
If f(n) = n, g(n) = n² and h(n)=n³
n is O(n²) and n² is O(n³) then, n is O(n³)

Similarly, this property holds for both Θ and Ω notation.

We can say,
If f(n) is Θ(g(n)) and g(n) is Θ(h(n)) then f(n) = Θ(h(n)) .
If f(n) is Ω (g(n)) and g(n) is Ω (h(n)) then f(n) = Ω (h(n))

3. Reflexive Properties:

 Reflexive properties are always easy to understand after transitive.


 If f(n) is given, then f(n) is O(f(n)), since the maximum value of f(n) is f(n) itself.
Hence f(n) and O(f(n)) are always tied to each other in a reflexive relation.
Example:
f(n) = n², then f(n) is O(n²), i.e. O(f(n)).
Similarly, this property holds for both Θ and Ω notation.
We can say that,
If f(n) is given then f(n) is Θ(f(n)).
If f(n) is given then f(n) is Ω (f(n)).

4. Symmetric Properties:

 If f(n) is Θ(g(n)) then g(n) is Θ(f(n)).


Example:
If f(n) = n² and g(n) = n²,
then f(n) = Θ(n²) and g(n) = Θ(n²).

This property holds only for Θ notation.

5. Transpose Symmetric Properties:

 If f(n) is O(g(n)) then g(n) is Ω (f(n)).


Example:
If f(n) = n and g(n) = n²,
then n is O(n²) and n² is Ω(n).

This property holds only for O and Ω notations.

6. Some More Properties:

a. If f(n) = O(g(n)) and f(n) = Ω(g(n)) then f(n) = Θ(g(n))


b. If f(n) = O(g(n)) and d(n)=O(e(n)) then f(n) + d(n) = O( max( g(n), e(n) ))
Example:
f(n) = n i.e O(n)
d(n) = n² i.e O(n²)
then f(n) + d(n) = n + n² i.e O(n²)

c. If f(n)=O(g(n)) and d(n)=O(e(n)) then f(n) * d(n) = O( g(n) * e(n))


Example:
f(n) = n i.e O(n)
d(n) = n² i.e O(n²)
then f(n) * d(n) = n * n² = n³ i.e O(n³)
---------------------------------------------------------------------------------------------------------------------------
