Chapter One Part 2
Algorithm analysis refers to the process of determining how much computing time and storage
an algorithm will require. In other words, it is the process of predicting the resource requirements
of an algorithm in a given environment.
To solve a given problem, there are often many possible algorithms. One has to be able to choose the
best algorithm for the problem at hand using some scientific method. To classify data
structures and algorithms as good, we need precise ways of analyzing them in terms of their resource
requirements. The main resources are:
Running Time
Memory Usage
Running time is usually treated as the most important since computational time is the most
precious resource in most problem domains.
Accordingly, we can analyze an algorithm in terms of the number of operations required, rather
than the absolute amount of time involved. This shows how an algorithm's
efficiency changes with the size of the input.
Complexity Analysis
Complexity Analysis is the systematic study of the cost of computation, measured either in time
units or in operations performed, or in the amount of storage space required.
The goal is to have a meaningful measure that permits comparison of algorithms independent of
operating platform.
There is no generally accepted set of rules for algorithm analysis. However, an exact count of
operations is commonly used.
Analysis Rules:
3. Running time of a selection statement (if, switch) is the time for the condition evaluation
+ the maximum of the running times for the individual clauses in the selection.
Example
if (<condition>)
<if-part>
else
<else-part>
where
2. The if-part is a statement that is executed only if the condition is true (the
value of the expression is not zero), and
3. The else-part is a statement that is executed if the condition is false (evaluates
to 0). The else followed by the <else-part> is optional.
Suppose that there are no function calls in the condition, and that the if-part and else-part
have big-O upper bounds f(n) and g(n), respectively. Thus, the running
time of the selection statement is O(max(f(n), g(n))).
switch (<variable name>) {
case <value 1>: <statement>
case <value 2>: <statement>
...
case <value n>: <statement>
default: <statement>
}
Suppose each of the cases (case 1, case 2, …, case n) has running time f(c1), f(c2), …, f(cn)
respectively, and the default case has running time f(default). Then the complexity of the overall
switch statement is O(max(f(c1), f(c2), …, f(cn), f(default))); the sketches below illustrate both forms of selection.
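As a rough illustration of this rule (the functions, conditions, and cases below are made up for this sketch and are not taken from the numbered examples later in this chapter):

int branchExample(int x, int n)
{
    int sum = 0;
    if (x > 0)                          // condition evaluation: 1 unit
    {
        for (int i = 0; i < n; i++)     // if-part: f(n) = O(n)
            sum = sum + i;
    }
    else
    {
        sum = -1;                       // else-part: g(n) = O(1)
    }
    return sum;
}
// Running time of the if statement: 1 + max(O(n), O(1)) = O(n).

int switchExample(int option, int n)
{
    int count = 0;
    switch (option)
    {
    case 1:                             // f(c1) = O(1)
        count = 1;
        break;
    case 2:                             // f(c2) = O(n)
        for (int i = 0; i < n; i++)
            count = count + 1;
        break;
    default:                            // f(default) = O(1)
        count = -1;
    }
    return count;
}
// Overall: O(max(f(c1), f(c2), f(default))) = O(n).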
4. Loops: The running time of a loop is the running time of the statements inside the
loop multiplied by the number of iterations.
The total running time of a statement inside a group of nested loops is the running time of
the statements multiplied by the product of the sizes of all the loops.
For nested loops, analyze inside out.
Always assume that the loop executes the maximum number of iterations possible.
The number of iterations is calculated as follows (assuming the block of statements inside the loop costs f(n) = 1):
number of iterations = (upper bound – lower bound) / (increment or decrement amount)
Example
for(int i = 2; i < n; i = i + 2) {
block of statement
}
f(n) = (n – 2) / 2
For example, if n = 10 the loop iterates (10 – 2)/2 = 4 times before it halts. However, if the block
of statements does not cost a running time of only 1, we must multiply the
number of iterations (4 in our case) by the complexity of the block of statements inside the
loop. Therefore, if the block of statements inside the loop costs 5, the total complexity of
the loop is f(n) = ((n – 2)/2) * 5 = (5/2)n – 5.
A nested for-loop is simply a loop inside another loop. By treating the inner loop as the block of
statements of the outer loop, we first calculate its complexity and then multiply that value by the
complexity of the outer loop. Look at the following example.
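For instance (a sketch written for illustration; nestedSum is not from the original notes, and the bounds are arbitrary):

int nestedSum(int n)
{
    int sum = 0;
    for (int i = 0; i < n; i++)          // outer loop: n iterations
        for (int j = 0; j < n; j++)      // inner loop: n iterations per outer iteration
            sum = sum + 1;               // block of statement: cost 1
    return sum;
}
// Inner loop alone: f(n) = n * 1 = n.
// Treating the inner loop as the block of statement of the outer loop:
// f(n) = n * n = n^2, so the nested loop is O(n^2).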
5. Running time of a function call is 1 for setup + the time for any parameter calculations +
the time required for the execution of the function body.
Example
function(<parameter 1>, <parameter 2>, …, <parameter n>);
Let the function body cost m, let each of the n parameters cost 2 to compute (2n in total), and
add 1 for the function call itself.
Therefore f(n) = m + 2n + 1.
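As a concrete sketch (this call is not in the original notes; it reuses the total() function from Example 2 below and assumes, as derived there, that its body costs 4m + 5 time units when called with an argument of value m):

total(2 * n);   // 1 unit for the call setup
                // + 1 unit to compute the argument 2 * n
                // + 4 * (2n) + 5 units for the body of total()
                // f(n) = 1 + 1 + 8n + 5 = 8n + 7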
Examples:
1. int count(){
       int k=0;
       cout<< "Enter an integer";
       cin>>n;
       for (i=0;i<n;i++)
           k=k+1;
       return 0;
   }
Time Units to Compute
-------------------------------------------------
1 for the function
1 for the assignment statement: int k=0
1 for the output statement.
1 for the input statement.
In the for loop:
1 assignment, n+1 tests, and n increments.
n loops of 2 units for an assignment, and an addition.
1 for the return statement.
-------------------------------------------------------------------
T (n)= 1+1+1+1+(1+n+1+n)+2n+1 = 4n+7
2. int total(int n)
{
int sum=0;
for (int i=1;i<=n;i++)
sum=sum+1;
return sum;
}
Time Units to Compute
-------------------------------------------------
1 for the function
1 for the assignment statement: int sum=0
In the for loop:
1 assignment, n+1 tests, and n increments.
n loops of 2 units for an assignment, and an addition.
1 for the return statement.
-------------------------------------------------------------------
T (n)= 1+1+ (1+n+1+n)+2n+1 = 4n+5
3. void func()
{
int x=0;
int i=0;
int j=1;
cout<< "Enter an Integer value";
cin>>n;
while (i<n){
x++;
i++;
}
while (j<n)
{
j++;
}
}
Time Units to Compute
-------------------------------------------------
1 for the function
1 for the first assignment statement: x=0;
1 for the second assignment statement: i=0;
1 for the third assignment statement: j=1;
1 for the output statement.
1 for the input statement.
In the first while loop:
n+1 tests
n loops of 2 units for the two increment (addition) operations
In the second while loop:
n tests
n-1 increments
-------------------------------------------------------------------
T (n)= 1+1+1+1+1+1+n+1+2n+n+n-1 = 5n+6
4. int sum (int n)
{
int partial_sum = 0;
for (int i = 1; i <= n; i++)
partial_sum = partial_sum +(i * i * i);
return partial_sum;
}
Time Units to Compute
-------------------------------------------------
1 for the function
1 for the assignment.
1 assignment, n+1 tests, and n increments.
n loops of 4 units for an assignment, an addition, and two multiplications.
1 for the return statement.
-------------------------------------------------------------------
T (n)= 1+1+(1+n+1+n)+4n+1 = 6n+5
Categorizing Algorithms
In computing, we try to categorize the algorithms that we use according to their asymptotic run-
time functions. It turns out that the run-time behavior of most algorithms falls into one of only
about seven primary categories: Θ(1) (constant), Θ(log n) (logarithmic), Θ(n) (linear), Θ(n log n),
Θ(n²) (quadratic), Θ(n³) (cubic), and Θ(2ⁿ) (exponential).
These categories are listed in ascending order of complexity, which means that an algorithm
belonging to category Θ(n) is faster than an algorithm belonging to category Θ(n log n). The
notation used here is called theta notation.
Big – O Notation
We say the complexity f(n) of an algorithm is an element of O(g(n)) if f(n) grows no faster than
C*g(n) for some constant C and all sufficiently large n. Put simply, if f(n) is in O(g(n)) then g(n)
is a worst-case bound for f(n): in the worst case, f(n)'s running time will not exceed C*g(n).
Therefore, big-O notation describes the worst-case scenario we can expect when we analyze an
algorithm. As a programmer, whenever you see big-O notation you immediately know that the
worst-case complexity of the algorithm is at most the g(n) given in the notation.
Theorem: f(n) is O(g(n)) if there are positive constants C and k such that f(n) ≤ C*g(n) whenever n > k.
To prove big-O, choose values for C and k and prove that n > k implies f(n) ≤ C*g(n).
1. Choose k = 1.
2. Assuming n > 1, find/derive a C such that
f(n)/g(n) ≤ C*g(n)/g(n) = C
Proving Big-O: Example 1
Show that f(n) = n² + 2n + 1 is O(n²).
Choose k = 1. Assuming n > 1,
f(n)/g(n) = (n² + 2n + 1)/n² < (n² + 2n² + n²)/n² = 4
Choose C = 4. Note that 2n < 2n² and 1 < n² whenever n > 1.
Thus, n² + 2n + 1 is O(n²).

Example 2
Show that f(n) = 3n + 7 is O(n).
Choose k = 1. Assuming n > 1, 3n + 7 < 3n + 7n = 10n, so choose C = 10.
Thus, 3n + 7 is O(n) because 3n + 7 < 10n whenever n > 1.

Example 3
Show that f(n) = (n + 1)³ is O(n³).
Choose k = 1. Assuming n > 1,
f(n)/g(n) = (n + 1)³/n³ < (n + n)³/n³ = 8n³/n³ = 8
Choose C = 8. Note that n + 1 < n + n and (n + n)³ = (2n)³ = 8n³.
Thus, (n + 1)³ is O(n³) because (n + 1)³ < 8n³ whenever n > 1.

Example 4
Show that f(n) = ∑ i for i = 1 to n (that is, 1 + 2 + … + n) is O(n²).
Choose k = 1 and C = 1. Note that each term i ≤ n because n is the upper limit of the sum,
so ∑ i ≤ n * n = n².
Thus, ∑ i for i = 1 to n is O(n²) because the sum is at most n² whenever n > 1.
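As one more worked instance (not part of the original example set; it applies the same recipe to the operation count T(n) = 4n + 7 obtained for the count() function earlier in this chapter): choose k = 1. Assuming n > 1, (4n + 7)/n < (4n + 7n)/n = 11, so choose C = 11; note that 7 < 7n whenever n > 1. Thus, 4n + 7 is O(n) because 4n + 7 < 11n whenever n > 1.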
Big – Ω Notation
We say f is Ω(g) if there exist constants C > 0 and k such that |f(n)| ≥ C|g(n)| for all n > k. Big-Ω is just
like big-O, except that C*g(n) is now a lower bound for f(n) for all large values of n. All of the
same comments and proof techniques as above apply, except that the inequalities run in the other
direction.
Fact: A function f is Ω(g) if and only if g is O(f). You should know how to prove this Fact, and
should also be able to use it in arguments involving big-Ω.
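A brief sketch of one direction of this Fact (the other direction is symmetric): if f is Ω(g), then there are constants C > 0 and k with |f(n)| ≥ C|g(n)| for all n > k; dividing by C gives |g(n)| ≤ (1/C)|f(n)| for all n > k, so g is O(f) with constant 1/C and the same k.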
Big – Θ Notation
We say f is Θ(g) if f is O(g) and f is also Ω(g); that is, C1*g(n) ≤ f(n) ≤ C2*g(n) for some positive
constants C1 and C2 and all n > k. One way to show that f is Θ(g) is to show that the limit of
f(n)/g(n) as n grows exists, is finite, and is not equal to zero. Most often, however, you will be asked to show
f is Θ(g) directly, so this method will not apply.
Here are a couple of facts you should be able to prove:
A polynomial is big-Θ of its largest term.
For any positive integer k, 1^k + 2^k + ... + n^k is Θ(n^(k+1)).
Proofs:
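One way to argue these (a sketch, not necessarily the proofs intended in the original notes):
For the first fact, let p(n) = a_d*n^d + … + a_1*n + a_0 with a_d > 0. For n ≥ 1, every term satisfies |a_i|*n^i ≤ |a_i|*n^d, so p(n) ≤ (|a_d| + … + |a_0|)*n^d and p is O(n^d). For large enough n, the lower-order terms together amount to less than (a_d/2)*n^d in absolute value, so p(n) ≥ (a_d/2)*n^d, giving p is Ω(n^d). Hence p is Θ(n^d).
For the second fact, each term satisfies i^k ≤ n^k, so 1^k + 2^k + … + n^k ≤ n*n^k = n^(k+1), giving O(n^(k+1)). For the lower bound, the last n/2 terms are each at least (n/2)^k, so the sum is at least (n/2)*(n/2)^k = n^(k+1)/2^(k+1), giving Ω(n^(k+1)). Hence the sum is Θ(n^(k+1)).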