FDS Unit I
Fundamentals of Data Structures (Savitribai Phule Pune University, 2019 pattern)

Unit-I

Introduction to Algorithm and


Data Structures

By: Prof. Pradnya K. Bachhav


What is a Problem?
A problem is a state of difficulty that needs to be resolved.

Steps to solve a Problem


1. Identify the problem
2. Understand the problem.
3. Identify alternative solutions
4. List a set of numbered step-by-step instructions to attain
the solution.
5. Test the solution
What is an algorithm?
• Before a computer can perform a task, it must
have an algorithm that tells it what to do.
• Informally: “An algorithm is a set of steps that define how a
task is performed.”
• Formally: “An algorithm is an ordered set of unambiguous
executable steps, defining a terminating process.”
• Ordered set of steps: structure!
• Executable steps: Possible!
• Unambiguous steps: follow the directions!
• Terminating: must have an end!
What is Program?

• A program is executable software that runs on a computer.


• It is a representation of an algorithm in some programming
language.
• It is a sequence of code written in a programming language.
Problem → Solution → Algorithm → Data Structure → Program
Data Structure Terms
• Data: Data is simply a collection of facts and figures. Data are
values or sets of values.

• Information: Information is the refined form of data, which

helps us understand its meaning.

• Knowledge: After processing facts and information,

knowledge is acquired through experience or education.
What is Data Structure?
• Data Structure is a way of collecting and organizing data in
such a way that we can perform operations on these data in
an effective way.
• A data structure can be defined as the collection of
elements and all the possible operations which are required
for that set of elements.
• In other words, a data structure tells us which elements are
required as well as the legal operations on that set of
elements.
Definition : A data structure is a set of domains D, a set of functions F and a
set of axioms A. This triple (D, F, A) denotes the data structure d.
For E.g.

• Consider a set of elements that must be stored in an array.

• Various operations, such as reading elements and storing them
at the appropriate index, can be performed.
• If we want to access any particular element, then that element can
be retrieved from the array.
• Thus reading, printing and searching would be the operations
required to perform these tasks on the elements.
• Thus the data object (integer elements) and the set of operations together form the
data structure: array.
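The same idea can be sketched in Python. This is only an illustrative sketch; the class name IntArray and its method names below are assumptions made for the example, not part of the definition above.

class IntArray:
    def __init__(self, size):
        self.items = [0] * size          # storage for the integer elements

    def store(self, index, value):
        self.items[index] = value        # store an element at a given index

    def read(self, index):
        return self.items[index]         # retrieve the element at an index

    def search(self, key):
        for i in range(len(self.items)):
            if self.items[i] == key:
                return i                 # index where the key was found
        return -1                        # key not present

    def display(self):
        print(self.items)                # print all elements

a = IntArray(5)
a.store(0, 10)
a.store(1, 20)
print(a.read(1))      # 20
print(a.search(20))   # 1
a.display()           # [10, 20, 0, 0, 0]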
Types of Data Structure
Relationship among data, data structure and algorithm
• In computer science, to solve a problem we require data, a
data structure that arranges the data in a specific manner,
and an algorithm that handles this data structure in
some systematic manner.
• The data structure is the means by which we model the
problem.
• E.g. Arrays: an array contains integer elements (data) arranged
in a sequential manner (data structure), and algorithms such as
adding all the elements or sorting the elements in a specific
order are applied to this DS.
Abstract Data Type (ADT)
• The abstract data type is a triple of D – set of domains, F- set
of functions and A- set of axioms in which only what is to be
done is mentioned but how it is to be done is not mentioned.
• The definition of ADT only mentions what operations are to be
performed but not how these operations will be implemented.
It does not specify how data will be organized in memory and
what algorithms will be used for implementing the operations.
• It is called “abstract” because it gives an implementation-
independent view. The process of providing only the
essentials and hiding the details is known as abstraction.
E.g. Figure shows a picture of what an abstract data type is and
how it operates. The user interacts with the interface, using the
operations that have been specified by the abstract data type. The
abstract data type is the shell that the user interacts with. The
implementation is hidden one level deeper. The user is not
concerned with the details of the implementation.
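As a rough illustration of this interface/implementation split (a stack is used here only as an example; the names StackADT and ListStack are assumptions for the sketch), the operations can be declared without fixing how they work:

from abc import ABC, abstractmethod

class StackADT(ABC):
    # The ADT only says WHAT can be done, not HOW.
    @abstractmethod
    def push(self, item): ...        # add an item on top

    @abstractmethod
    def pop(self): ...               # remove and return the top item

    @abstractmethod
    def is_empty(self): ...          # True if the stack holds no items

# One possible hidden implementation; a user of StackADT need not know
# that a Python list is used underneath.
class ListStack(StackADT):
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def is_empty(self):
        return len(self._items) == 0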
Data Structure Classification
Linear and Non Linear Data Structure
• Linear data structures are data structures in which data is
arranged in a list or in a straight sequence.
• E.g. arrays, list
int arr[5];
Linear and Non Linear Data Structure
• Non-linear data structures are data structures in which
data may be arranged in a hierarchical manner.
• E.g. trees, graph
Static and Dynamic Data Structure
• Static data structures are data structures having fixed size
memory utilization. One has to allocate the size of this data
structure before using it.
• E.g. Array is a static data structure.

int mark[5];
Static and Dynamic Data Structure
• A dynamic data structure is a data structure in which one can
allocate memory as per the requirement. If some block of memory
is no longer needed, it can be deallocated.
• E.g. Linked List
class Node:
    # constructor
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

# Creating a single node
first = Node('A')
print(first.data)    # A
Persistent and Ephemeral Data Structure
• Persistent data structures are data structures which
retain their previous state; modifications are done by
performing certain operations on them.
• E.g. Stack
Persistent Data Structure Example
• If we push 10, 20, 30 onto it, then by performing a pop operation
we can regain its previous state.
Persistent and Ephemeral Data Structure
• Ephemeral data structure are the data structure in which we
cannot retain its previous state.
• E.g. Queues
Ephemeral Data Structure Example

State 1: In a queue, elements are inserted at one end and deleted from the
other end. Let us insert 10, 20, 30 in the queue:

(a) 10        (b) 20 10        (c) 30 20 10

State 2: Now if we delete an element, we can delete it only from the other
end. That means 10 can be deleted, and the queue will be:

30 20
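One common way to illustrate the difference in code (this sketch is not from the slides; the tuple-based stack below is just one possible persistent representation):

# Persistent stack: push/pop build new versions, old versions remain usable.
def push(stack, value):
    return (value, stack)            # new version; the old 'stack' is untouched

def pop(stack):
    value, rest = stack
    return value, rest               # 'rest' is exactly the previous version

s0 = None
s1 = push(s0, 10)
s2 = push(s1, 20)
s3 = push(s2, 30)
top, previous = pop(s3)
print(top, previous is s2)           # 30 True: the earlier state is still available

# Ephemeral queue: deleting from the front destroys the earlier state.
queue = [10, 20, 30]
queue.pop(0)                         # delete 10 from the front
print(queue)                         # [20, 30]; states (a) and (b) cannot be recovered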
Algorithms
Problem Solving
Here are six steps for problem solving:

Step 1: Identify the problem.
Step 2: Understand the problem.
Step 3: List the possible solutions to the problem.
Step 4: Select the best solution to the problem.
Step 5: List the instructions using the selected solution.
Step 6: Evaluate the solution.
Problem Solving- Example

An admission charge for the cinemax theatre varies according to


the age of the person. Complete the six problem solving steps to
calculate the ticket charge given the age of the person. The
charges are as follows :
i) Over 55 : Rs. 50.00 ii) 22-54 : Rs. 75.00
iii) 13-20 : Rs. 50.00 iv) 3-12 : Rs. 25.00
v) Under 3: Free
SPPU : Dec-09, Marks 12
Solution
Step 1 : Identify the problem : What is the charge for the person's cinema
ticket?
Step 2 : Understand the problem : Here the age of the person must be known
so that the charge for the ticket can be calculated.
Step 3 : Identify alternatives : No alternative is possible for this problem.
Step 4 : Select the best way to solve the problem : The only way to solve this
problem is to calculate the ticket charge based on the age of the person.
Step 5 : Prepare the list of selected solutions :
i. Enter the age of the person.
ii. If age > 55, the charge for the ticket is Rs. 50.00
iii. If age >= 22 and age <= 54, the charge for the ticket is Rs. 75.00
iv. If age >= 13 and age <= 20, the charge for the ticket is Rs. 50.00
v. If age >= 3 and age <= 12, the charge for the ticket is Rs. 25.00
vi. If age < 3, the charge for the ticket is Rs. 00.00
vii. Print the charge
Step 6 : Evaluate Solution : The charge for the ticket should be according to the
age of the person.
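The list of instructions in Step 5 can be turned into a small Python function, shown below as a sketch. The charges follow the problem statement; ages 21 and 55 are not covered there, so the value returned for them is an assumption of this sketch.

def ticket_charge(age):
    if age > 55:
        return 50.00
    elif 22 <= age <= 54:
        return 75.00
    elif 13 <= age <= 20:
        return 50.00
    elif 3 <= age <= 12:
        return 25.00
    elif age < 3:
        return 0.00
    return 50.00   # assumed charge for the unspecified ages 21 and 55

print(ticket_charge(60))   # 50.0
print(ticket_charge(30))   # 75.0
print(ticket_charge(2))    # 0.0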
Types of Problem
1. Algorithmic:
Solutions that can be reached through a series of known
steps are called Algorithmic Solutions.
E.g. To make a cup of coffee
To find largest of three numbers
2. Heuristic:
Solutions that cannot be reached through a direct set of
steps are called heuristic solutions.
E.g. How to play chess
Difficulties in Problem Solving
Various difficulties in solving a problem are:

• People do not know how to solve a particular problem.

• Many times people are afraid of taking decisions.
• While following the problem-solving steps, people complete one or two
steps inadequately.
• People do not define the problem statement correctly.
• They do not generate a sufficient list of alternatives. Sometimes
good alternatives get eliminated, and sometimes the merits and
demerits of the alternatives are judged hastily.
• The sequence of the solution is not defined logically, or the focus of the
design is sometimes on detailed work before the framework of the solution.
• When solving problems on a computer, writing the
instructions for the computer is the most crucial task. Without an adequate
knowledge base, one cannot write a proper instruction set for the
computer.
Introduction to Algorithms
Definition : An algorithm is a finite set of instructions for performing a particular
task. The instructions are nothing but the statements in simple English language.

Example : Let us take a very simple example of an algorithm which adds two
numbers and stores the result in a third variable.
Step 1 : Start.
Step 2 : Read the first number in variable ‘a’.
Step 3 : Read the second number in variable ‘b’.
Step 4 : Perform the addition of both the numbers, i.e. c = a + b, and store the result in
variable ‘c’.
Step 5 : Print the value of ‘c’ as the result of the addition.
Step 6 : Stop.
Characteristics of Algorithm
1. Unambiguous − Algorithm should be clear and unambiguous. Each of its
steps (or phases), and their inputs/outputs should be clear and must lead to
only one meaning.
2. Input − An algorithm should have 0 or more well-defined inputs.
3. Output − An algorithm should have 1 or more well-defined outputs, and
should match the desired output.
4. Finiteness − Algorithms must terminate after a finite number of steps.
5. Feasibility − Should be feasible with the available resources.
6. Independent − An algorithm should have step-by-step directions, which
should be independent of any programming code.
Algorithm Design Tools
There are various ways by which we can specify an algorithm.

• Pseudo code
This representation of an algorithm is a mix of a programming
language and natural language.
• Flowchart
As the name suggests, these are charts or diagrams which
represent the flow of the algorithm.
Pseudocode
Definition : Pseudo code is an informal way of writing a program. It is a
combination of an algorithm written in simple English and some programming language constructs.
• In pseudo code, there is no restriction to follow the syntax of a
programming language.
• Pseudo code cannot be compiled. It is just a step prior to developing the
code for a given algorithm.
• Sometimes, simply by examining the pseudo code, one can decide which
language to select for the implementation.
• The algorithm is broadly divided into two sections as follows
Algorithm heading
It consists of the name of the algorithm, the problem description,
and the input and output.

Algorithm body
It consists of the logical body of the algorithm, built using
various programming constructs and assignment
statements.

Structure of algorithm
Rules for Writing an Algorithm
let us understand some rules for writing the algorithm.
1. Algorithm is a procedure consisting of heading and body. The heading
consists of keyword Algorithm and name of the algorithm and parameter
list. The syntax is
Algorithm name (p1, p2, …, pn)

Here the keyword Algorithm should be written first, followed by the
name of the algorithm and then the parameters (if any).

2. Then in the heading section we should write following things:

//Problem description :
//Input :
//Output :
3. Then body of an algorithm is written, in which various programming constructs
like if, for, while or some assignment statements may be written.
4. The compound statements should be enclosed within { and } brackets.
5. Single line comments are written using // as beginning of comment.
6. An identifier should begin with a letter, not a digit. An identifier can be a
combination of alphanumeric characters.
7. Using the assignment operator ←, an assignment statement can be written.
For instance : Variable ← expression

8. There are other types of operators, such as Boolean values (true or
false), logical operators (and, or, not) and relational operators (<,
<=, >, >=, =, ≠).
9. Array indices are written within square brackets [ ]. The index of an array usually
starts at zero. Multidimensional arrays can also be used in an algorithm.
10. The inputting and outputting can be done using read and write
For example:

write (“This message will be displayed on console”);


read (val);

11. The conditional statement such as if-then or if-then-else are written in following
form:
if (condition) then statement
if (condition) then statement else statement

If the if-then statement is of compound type then { and } should be used.

12. while statement can be written as :


while (condition) do
{
statement 1
statement 2
.
.
statement n
}
13. The general form for writing for loop is:

for variable ← value1 to valuen do


{
statement 1
statement 2
.
.
statement n
}

Here value1 is initialization condition and valuen is a terminating condition.

for i ← 1 to n step 1
{
    write (i)
}
Here the variable i is incremented by 1 at each iteration.
14. The repeat – until statement can be written as :

repeat
statement 1
statement 2
.
.
statement n
until (condition)

15. The break statement is used to exit from inner loop. The return statement is
used to return control from one point to another.

Note that statements in an algorithm executes in sequential order i.e in the same
order as they appear-one after another.
Sub-Algorithm
A sub-algorithm is complete and independently defined algorithmic module.
This module is actually called by some main algorithm or some another sub –
algorithm.

There are two types of sub-algorithms –


1. Function sub-algorithm 2. Procedure sub-algorithm
Example of Function sub-algorithm:

Function sum (a, b : integer) : integer
{
    //body of function
    //return statement
}

Example of Procedure sub-algorithm:

Procedure sum (a, b : integer)
{
    //body of procedure
}

The difference between a function sub-algorithm and a procedure sub-algorithm is

that a function can return only one value, whereas a procedure can return more
than one value.
Examples
Write a pseudo code to count the sum of n numbers.

Algorithm sum (1, n)

//Problem Description: This algorithm finds the
//sum of the first n numbers
//Input: 1 to n numbers
//Output: The sum of n numbers
result ← 0
for i ← 1 to n do
    result ← result + i
return result
Examples
Write a pseudo code to check whether a given number is even or odd.

Algorithm eventest (val)

//Problem Description: This algorithm tests whether a given
//number is even or odd
//Input : the number to be tested, i.e. val
//Output: Appropriate message indicating evenness or oddness
if (val % 2 = 0) then
    write ("Given number is even")
else
    write ("Given number is odd")
Examples
Write a pseudo code for sorting the elements.

Algorithm sort (a, n)

//Problem Description: sorting the elements in ascending order
//Input : An array a in which the elements are stored and n
//is the total number of elements in the array
//Output: The sorted array
for i ← 1 to n-1 do
    for j ← i+1 to n do
    {
        if (a[i] > a[j]) then
        {
            temp ← a[i]
            a[i] ← a[j]
            a[j] ← temp
        }
    }
Examples
Write a pseudo code to find the factorial of a number n. (n!)

Algorithm fact (n)

//Problem Description: This algorithm finds the factorial
//of a given number n
//Input: The number n whose factorial is to be calculated.
//Output: factorial value of the given number n.
if (n = 1) then
    return 1
else
    return n * fact (n - 1)
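The same recursive definition can be written directly in Python; treating n <= 1 as the base case (so that 0! is also handled) is a small addition to the pseudo code above.

def fact(n):
    if n <= 1:                 # base case: 0! = 1! = 1
        return 1
    return n * fact(n - 1)     # recursive case

print(fact(5))   # 120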
Examples to solve

• Write a pseudo code to find the largest of 3 numbers
• Write a pseudo code for sorting the elements in descending order
• Write a pseudo code to calculate the sum of elements from 1 to n
Examples
• Write a pseudo code to find the largest of 3 numbers

Algorithm largest (a, b, c)

//Problem Description: This algorithm finds the largest
//of 3 numbers
//Input: 3 numbers as a, b, c
//Output: Largest number

if (a > b and a > c) then
    write (“a is the largest number”)
if (b > a and b > c) then
    write (“b is the largest number”)
if (c > a and c > b) then
    write (“c is the largest number”)
Examples
• Write a pseudo code for sorting the elements in descending order

Algorithm sort (a, n)

//Problem Description: sorting the elements in descending order
//Input : An array a in which the elements are stored and n
//is the total number of elements in the array
//Output: The sorted array in descending order
for i ← 1 to n-1 do
    for j ← i+1 to n do
    {
        if (a[i] < a[j]) then
        {
            temp ← a[i]
            a[i] ← a[j]
            a[j] ← temp
        }
    }
Examples
• Write a pseudo code to calculate the sum of elements from 1 to n

Algorithm sum (1, n)

//Problem Description: This algorithm finds the
//sum of the first n numbers
//Input: 1 to n numbers
//Output: The sum of n numbers
result ← 0
for i ← 1 to n do
    result ← result + i
return result
Flowchart
Flowchart
• Flowcharts are the graphical representation of the
algorithms.
• The algorithms and flowcharts are the final steps in
organizing the solutions.
• Using the algorithms and flowcharts the programmers can
find out the bugs in the programming logic and then can go
for coding.
• Flowcharts can show errors in the logic and set of data can
be easily tested using flowcharts.
Flowchart Symbols

Que: What do you mean by flow chart? Give the meaning of each
symbol used in flowchart. Draw flowchart to compute the sum of
elements from a given integer array.
SPPU: May-10, Marks-8
Examples
Design and explain an algorithm to find the sum of the digits of an integer
number.
SPPU Dec-10, Marks-6

Read N
Remainder = 0
Sum = 0
Repeat
Remainder = N mod 10
Sum = Sum + Remainder
N = N/10
Until N = 0
Display Sum
End
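A Python transcription of the algorithm above follows; since Python has no repeat-until loop, a while loop with integer division is used instead, and a non-negative N is assumed.

def digit_sum(N):
    total = 0
    while N > 0:
        remainder = N % 10     # N mod 10: the last digit
        total = total + remainder
        N = N // 10            # drop the last digit
    return total

print(digit_sum(1234))   # 10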
Examples
What is the difference between flowchart and algorithm? Convert the
algorithm for computing factorial of given number into flowchart.
SPPU May-11, Marks-8

Read N
Set i and F to 1
While i <= N
    F = F * i
    increase the value of i by 1
Display F
End
Analysis of Algorithm

• The efficiency of algorithm can be decided by measuring


the performance of algorithm. We can measure the
performance of an algorithm by counting two factors
1. Amount of time required by the algorithm to execute. This
is known as Time complexity.
2. Amount of space required by the algorithm to execute.
This is known as Space complexity.
Suppose we want to find out the time taken by following program
statement
x=x+1
• Determining the amount of time required by the above statement in
terms of clock time is not possible, because the following factors keep changing:
1. The machine that is used to execute the program statement
2. The machine language instruction set
3. The time required by each machine instruction
4. The translation the compiler will make for this statement into machine
language
5. The kind of operating system (multi-programming or time sharing)
• The above factors vary from machine to machine, hence it is not
possible to find an exact figure. Therefore, the performance of an algorithm is
measured in terms of the frequency count or step count method.
Step (Frequency) Count and its Importance
Definition : The frequency count is a count that denotes how many times a
particular statement is executed; it is also known as the step count method.

For Example : Consider following code for counting the frequency count

def fun () :
a = 10 …………………………………….1
print(a)…………………………………….1

The frequency count of above program is 2.


Examples
Obtain the step count for the following code

def fun () :
a = 0 …………………………………….1
for i in range (1, n)……………………….n + 1
a = a + i………………………...n
print (a)…………………………………...1

The step count of above program is 2n + 3


Examples
Obtain the frequency count for the following code

adj = ["red", "big", "tasty"]


fruits = ["apple", "banana", "cherry"]
for x in adj:…………………………………..n+1
for y in fruits:………………………………n(m+1)
print(x, y)………………………………….n.m

The frequency count of the above program is:

n+1 + n(m+1) + nm = n+1 + nm + n + nm = 2nm + 2n + 1 = 2n(m+1) + 1
With n = 3 (adjectives) and m = 3 (fruits) here, this gives 2·3·(3+1) + 1 = 25.
Examples
Obtain the frequency count for the following code

for i in range (1,n):


for j in range (1,n):
c[i][j] = 0
for k in range (1,n):
c[i][j] = c[i][j] + a[i][k]
Statement Step Count
for i in range (1,n): n+1
for j in range (1,n): n.(n+1)
c[i][j] = 0 n.(n)
for k in range (1,n): n.n(n+1)
c[i][j] = c[i][j] + a[i][k] n.n.n
Total 2n3+3n2+2n+1
Examples
Obtain the frequency count for the following code
int i=1, n=5;
do
{
a++;
if (i==5)
break;
i++;
} while(i<=n)
Statement Step Count
int i=1, n=5; 1
a++; 5
if (i==5) 5
break; 1
i++; 4
while(i<=n) 4
Total 20
(i++ and the while test are not executed in the final iteration, since break exits the loop.)
Examples
Obtain the frequency count for the following code
double IterPow (double X, int N)
{
    double Result = 1;
    while (N > 0)
    {
        Result = Result * X;
        N--;
    }
    return Result;
}
SPPU: May-10 Marks-6
Statement Step Count
double Result = 1; 1
while (N > 0) N+1
Result = Result * X; N
N--; N
return Result; 1
Total 3N+3
Examples
Obtain the frequency count for the following code
(i) for (i=1; i<=n; i++)
for (j=1; j<=m; j++)
for (k=1; k<=p; k++)
sum = sum + i;
(ii) i=n;
while (i >=1)
i--;
SPPU: Dec-11, Marks-6 & May-14, Marks-3
Solution (i):
Statement Step Count
for (i=1; i<=n; i++) n+1
for (j=1; j<=m; j++) n(m+1)
for (k=1; k<=p; k++) n.m(p+1)
sum = sum + i; n.m.p
Total 2(n+nm+nmp)+1

Solution (ii):
Statement Step Count
i=n; 1
while (i >=1) n+1
i--; n
Total 2(n+1)
Complexity of Algorithms
Space Complexity
The space complexity can be defined as the amount of memory required by
an algorithm to run.

To compute the space complexity we use two factors: constant and instance
characteristics. The space requirement S(p) can be given as

S(p) = C + Sp

• Where C is a constant, i.e. the fixed part; it denotes the space for inputs and
outputs.
• This space is the amount of space taken by instructions, variables and identifiers.
• And Sp is the space dependent upon instance characteristics.
• This is a variable part whose space requirement depends on the particular problem
instance.

Continue…
Space Complexity
There are two types of components that contribute to the space
complexity – Fixed part and variable part

The fixed part includes space for:


● Instructions ● Variables ● Array size ● Space for constants

The variable part includes space for:


• The variables whose size is dependent upon the particular problem instance
being solved. The control statements (such as for, do, while, choice) are used
to solve such instances.
• Recursion stack for handling recursive call.
Example
Compute the space needed by the following algorithms justify your answer
Algorithm sum (a,n)
{
s : = 0.0 ;
for i : = 1 to n do
s : = s + a[i];
return s
}
In the given code we require space for:
the array a[ ] ← n units
the variables s, i and n ← 1 unit each
So S(p) = n + 3. Hence the space complexity of the given algorithm can be denoted in
terms of big-oh notation: it is O(n).
Time Complexity
The amount of time required by an algorithm to execute is called the
time complexity of that algorithm.

For determining the time complexity of particular algorithm following steps are
carried out -

1. Identify the basic operation of the algorithm


2. Obtain the step count for this basic operation.
3. Consider the order of magnitude of the step count and express it in terms of
big oh notation.

Continue…
Example
Write an algorithm to find smallest element in a array of integers and
analyze its time complexity SPPU-May-13, Marks-8

Algorithm MinValue (int a[n])
{
    min_element = a[0];
    for (i=0; i<n; i++)
    {
        if (a[i] < min_element) then
            min_element = a[i];
    }
    Write (min_element)
}

Statement Step Count
min_element = a[0]; 1
for (i=0; i<n; i++) (n+1)
if (a[i]<min_element) then n
Write (min_element) 1
Total 2n+3

By neglecting the constant terms and by considering the order of magnitude


we can express the step count in terms of Big-oh notation as O(n). Hence the
time complexity of above code is O(n).
Asymptotic Notations
Asymptotic Notations
• To choose the best algorithm, we need to check efficiency of
each algorithm.
• The efficiency can be measured by computing time complexity of
each algorithm.
• Using asymptotic notations we can give time complexity as
“fastest possible”, “slowest possible” or “average time”. (best,
average, worst)
• Various notations such as Ω, Θ, O used are called asymptotic
notations.
Big oh notation (O)

• The big oh notation is denoted by ‘O’.

• It is a method of representing the upper bound of an algorithm’s
running time.

• Using big oh notation we can give the longest amount of time taken

by the algorithm to complete.
Definition :
• Let f(n) and g(n) be non-negative functions.

• Let n0 and c be two constants such that c > 0 and n0 denotes some value of
the input size. If, for all n >= n0,

f(n) <= c*g(n)

• then f(n) is big oh of g(n).

• It is also denoted as f(n) ∈ O (g(n))
• In other words, f(n) is bounded above
by a constant multiple c of g(n).
Fig. Big oh notation
Example
Consider the functions f(n) = 2n + 2 and g(n) = n². We have to find some
constant c so that f(n) <= c*g(n); here we take c = 1.

Solution: As f(n) = 2n + 2 and g(n) = n²,

If n = 1 then f(n) = 2(1) + 2 = 4 and g(n) = (1)² = 1, i.e. f(n) > g(n)
If n = 2 then f(n) = 2(2) + 2 = 6 and g(n) = (2)² = 4, i.e. f(n) > g(n)
If n = 3 then f(n) = 2(3) + 2 = 8 and g(n) = (3)² = 9, i.e. f(n) < g(n) is true

Hence, we can conclude that for n > 2 we obtain

f(n) < g(n)

Thus the upper bound on the running time is always obtained by big oh notation.
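A quick numeric check of this example can be done in Python (purely illustrative, with c = 1):

f = lambda n: 2 * n + 2        # f(n) = 2n + 2
g = lambda n: n ** 2           # g(n) = n^2

for n in range(1, 8):
    print(n, f(n), g(n), f(n) <= g(n))
# The comparison is False for n = 1, 2 and True from n = 3 onwards,
# so f(n) is O(n^2) with c = 1 and n0 = 3.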
Omega notation (Ω)

• The omega notation is denoted by ‘Ω’.


• This notation is used to represent the lower bound of algorithm’s
running time.

• Using omega notation we can denote the shortest amount of time

taken by an algorithm.
Definition :
• A function f(n) is said to be in Ω (g(n)) if f(n) is bounded below by some
positive constant multiple of g(n) such that

f(n) >= c*g(n)

• For all n >= n0


• It is denoted as f(n) ∈ Ω (g(n))

Fig. Omega notation


Example
Consider the functions f(n) = 2n² + 5 and g(n) = 7n.

Solution:

If n = 1 then f(n) = 2(1)² + 5 = 7 and g(n) = 7(1) = 7, i.e. f(n) = g(n)
If n = 2 then f(n) = 2(2)² + 5 = 13 and g(n) = 7(2) = 14, i.e. f(n) < g(n)
If n = 3 then f(n) = 2(3)² + 5 = 23 and g(n) = 7(3) = 21, i.e. f(n) > g(n)

Thus, for n >= 3 we get f(n) > c*g(n) (with c = 1).

It can be represented as 2n² + 5 ∈ Ω (g(n)). Similarly, n³ ∈ Ω (n²).
Theta notation (Θ)

• The theta notation is denoted by ‘Θ’.

• By this method the running time is bounded both above and
below, i.e. it lies between a lower bound and an upper bound.

Definition :
• Let f(n) and g(n) be two non-negative functions.
• There are two positive constants, namely c1 and c2, such that

c1 g(n) <= f(n) <= c2 g(n)

• Then we can say that f(n) grows at the same rate as g(n).

• It is denoted as f(n) ∈ Θ (g(n))

Fig. Theta notation


Example
Consider the functions f(n) = 2n + 8 and g(n) = 7n, where n >= 2.

Solution:

For n = 2:

f(n) = 2(2) + 8 = 12
and g(n) = 7n = 7(2) = 14
∴ f(n) < g(n)
i.e. 2n <= 2n + 8 <= 7n for n >= 2
Here the bounds are c1·n and c2·n, with c1 = 2 and c2 = 7.

Note: The theta notation is more precise than both big oh and omega notation.
Some Example of Asymptotic Notation

1) f(n) = log2n then,

log2n ∈ O(n) ∵ log2n <= c·n; the order of growth of
log2n is slower than n.
log2n ∈ O(n²) ∵ log2n <= c·n²; the order of growth of
log2n is slower than n² as well.
But,
log2n ∉ Ω(n) ∵ a function f(n) belonging to Ω(n) must
satisfy f(n) >= c*g(n), and log2n grows
slower than n, so no such constant exists.
Similarly, log2n ∉ Ω(n²) or Ω(n³)
Some Example of Asymptotic Notation

2) f(n) = n(n-1)/2 then,

n(n-1)/2 ∉ O(n) ∵ f(n) = n(n-1)/2 = (n² – n)/2,
i.e. the highest order term is n², which grows faster than n.
Hence, f(n) ∉ O(n)
But, n(n-1)/2 ∈ O(n²) as f(n) <= c·n²
and n(n-1)/2 ∈ O(n³)
Similarly,
n(n-1)/2 ∈ Ω(n) ∵ f(n) >= c·n for large n
n(n-1)/2 ∈ Ω(n²) ∵ f(n) >= c·n² (e.g. with c = 1/4)
but n(n-1)/2 ∉ Ω(n³) ∵ f(n) grows slower than n³, so
f(n) >= c·n³ cannot hold for large n.
Properties of Order of Growth
1. If f1(n) is of order g1(n) and f2(n) is of order g2(n), then
f1(n) + f2(n) ∈ O(max(g1(n), g2(n)))
2. Polynomials of degree m ∈ Θ(nᵐ).
That means the maximum degree is considered from the polynomial.
For E.g. a1n³ + a2n² + a3n + c has order of growth Θ(n³)
3. O(1) < O(log n) < O(n) < O(n²) < O(2ⁿ)
4. Exponential functions aⁿ have different orders of growth for different
values of a.
Common functions of Big oh
Key points to remember
• O(g(n)) is a class of functions f(n) that grow no faster than g(n);
that means f(n) has a time complexity that is at most a constant
multiple of that of g(n). It is also called worst case analysis.
• Θ(g(n)) is the class of functions f(n) that grow at the same rate as
g(n), which means it is average case analysis.
• Ω(g(n)) is a class of functions f(n) that grow at least as fast as
g(n). That means f(n) is bounded below by a constant multiple of
g(n). It is called best case analysis.
Worst, Best and Average Case
Analysis
Best Case Analysis
• If an algorithm takes minimum amount of time to run to
completion for a specific set of input then it is called best case time
complexity
• E.g. While searching a particular element by using sequential search
we get the desired element at first place itself then it is called best
case time complexity
• Best case time complexity is the time complexity when an algorithm
runs for the shortest time.
Algorithm Seq_search (X[0…n-1], key)
// Problem Description: This algorithm is for searching the
// key element from an array X[0...n-1] sequentially
// Input: An array X[0...n-1] and search key
// Output: Returns the index of X where key value is present
for i ← 0 to n – 1 do
if (X[i] = key) then
return i
Worst Case Analysis
• If an algorithm takes maximum amount of time to run to
completion for a specific set of input then it is called worst case
time complexity.
• E.g. While searching an element by using sequential searching
method. If desired element is placed at end of the list then we get
worst time complexity
• Worst case time complexity is a time complexity when algorithm
runs for longest time.
Algorithm Seq_search (X[0…n-1], key)
// Problem Description: This algorithm is for searching the
// key element from an array X[0...n-1] sequentially
// Input: An array X[0...n-1] and search key
// Output: Returns the index of X where key value is present
for i ← 0 to n – 1 do
if (X[i] = key) then
return i
Average Case Analysis
• The average case time complexity is the time complexity obtained for a
typical (or random) set of inputs of a given size, averaged over those inputs.
• This type of complexity gives information about the behaviour of an
algorithm on specific or random input.

Algorithm Seq_search (X[0…n-1], key)


// Problem Description: This algorithm is for searching the
// key element from an array X[0...n-1] sequentially
// Input: An array X[0...n-1] and search key
// Output: Returns the index of X where key value is present
for i ← 0 to n – 1 do
if (X[i] = key) then
return i
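The sequential search above can be written in Python, with the three cases marked in comments; the sample array is illustrative.

def seq_search(X, key):
    for i in range(len(X)):
        if X[i] == key:
            return i          # index where the key is found
    return -1                 # key not present

data = [25, 31, 42, 57, 90]
print(seq_search(data, 25))   # best case: key at the first position, O(1)
print(seq_search(data, 90))   # worst case: key at the last position, O(n)
print(seq_search(data, 42))   # average case: key somewhere in between, about n/2 comparisons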
Analysis of Programming
Constructs

Constant, Linear, Quadratic, Cubic,


Logarithmic, Linear Logarithmic, Exponential,
Factorial
The Constant Function f(n) = C
• For any argument n, the constant function f(n) assigns the
value C.
• It doesn't matter what the input size n is, f(n) will always be
equal to the constant value C
• The most fundamental constant function is f(n) = 1
• Constant algorithm does not depend on the input size.
• Examples: arithmetic calculation, comparison, variable
declaration, assignment statement, invoking a method or
function.
The Logarithm Function f(n) = logn
• It is one of the interesting and surprising aspects of the analysis of data
structures and algorithms.
• The general form of a logarithm function is f(n) = logb n, for some
constant b > 1.
• This function is defined as follows:
x = logb n, if and only if bˣ = n
• The value b is known as the base of the logarithm.
• Computing the logarithm function for any integer n is not always easy,
but we can easily compute the smallest integer greater than or equal to
logb n, for this number is equal to the number of times we can
divide n by b until we get a number less than or equal to 1.
The Logarithm Function f(n) = logn

• For example, log3 27 is 3, since 27/3/3/3 = 1. Likewise, the smallest integer
greater than or equal to log2 12 is 4, since 12/2/2/2/2 = 0.75 <= 1.
• The most common base for the logarithm in computer science is 2. We
typically leave it off when it is 2.
• Logarithm function gets slightly slower as n grows. Whenever n doubles,
the running time increases by a constant.
• Examples: binary search.
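The repeated-division view described above can be sketched in Python (an illustrative helper, not a library function):

def ceil_log(b, n):
    # count how many times n can be divided by b until it is <= 1
    count = 0
    while n > 1:
        n = n / b
        count += 1
    return count

print(ceil_log(3, 27))   # 3, since 27/3/3/3 = 1
print(ceil_log(2, 12))   # 4, since 12/2/2/2/2 = 0.75 <= 1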
The Linear Function f(n) = n

• Another simple yet important function. Given an input value n, the


linear function f assigns the value n itself.
• This function arises in an algorithm analysis any time we do a single
basic operation for each of n elements.
• For example, comparing a number x to each element of an array of
size n will require n comparisons.
• Whenever n doubles, so does the running time.
• Example: print out the elements of an array of size n.
The Linear Logarithmic Function
f(n) = n log n
• This function grows a little faster than the linear function and a lot
slower than the quadratic function (n²).
• If we can improve the running time of solving some problem from
quadratic to N-Log-N, we will have an algorithm that runs much faster
in general.
• It scales to a huge problem, since whenever n doubles, the running time
more than doubles.
• Example: merge sort, which will be discussed later.
The Quadratic Function f(n) = n²

• It appears a lot in the algorithm analysis, since there are many


algorithms that have nested loops, where the inner loop performs a
linear number of operations and the outer loop is performed a linear
number of times.
• In such cases, the algorithm performs n*n = n² operations.
• The quadratic function can also be used in the context of nested loops
where the first iteration of a loop uses one operation, the second uses
two operations, the third uses three operations, and so on. That is, the
number of operations is 1 + 2 + 3 + ... + (n-1) + n.
The Quadratic Function f(n) = n²

• For any integer n >= 1, we have 1 + 2 + 3 + ... + (n-1) + n = n*(n+1)/2.
• Quadratic algorithms are practical for relatively small problems.
Whenever n doubles, the running time increases fourfold.
• Example: some manipulations of the n by n array.
The Cubic Function f(n) = n³

• The cubic function f(n) = n³.


• This function appears less frequently in the context of the algorithm
analysis than the constant, linear, and quadratic functions.
• It's practical for use only on small problems.
• Whenever n doubles, the running time increases eightfold.
• Example: n by n matrix multiplication.
The Exponential Function f(n) = bⁿ
• In this function, b is a positive constant, called the base, and the
argument n is the exponent.
• In the algorithm analysis, the most common base for the exponential
function is b = 2.
• For instance, if we have a loop that starts by performing one operation
and then doubles the number of operations performed with each
iteration, then the number of operations performed in the nth iteration
is 2ⁿ.
• Exponential algorithm is usually not appropriate for practical use.
The Factorial Function f(n) = n!

• Factorial function is even worse than the exponential


function.
• Whenever n increases by 1, the running time increases by
a factor of n.
• For example, permutations of n elements.
Comparing Growth Rates
Introduction to Algorithm
Design Strategies

Divide-and-Conquer, Greedy Technique


Algorithm Design Strategies
• Algorithm design strategy is general approach by which many problems
can be solved algorithmically.
• These problems may belong to different areas of computing.
• Algorithmic strategies are also called as algorithmic techniques or
algorithm paradigm.
• Various algorithm techniques are-
o Brute Force: This is straightforward technique with naïve approach.
o Divide-and-Conquer: The problem is divided into smaller instances.
o Greedy Technique: To solve problem locally optimal decisions are
made.
o Backtracking: In this method, we start with one possible move out
from many moves out and if the solution is not possible through the
selected move then we backtrack for another move.
Divide and Conquer
• In the divide and conquer method, a given problem is:
1. Divided into smaller sub problems.
2. These sub problems are solved independently.
3. If necessary, the solutions of sub problems are combined to get a
solution to the original problem.
• If the sub problems are large enough, then divide and conquer is
reapplied.
• The generated sub problems are usually of the same type as the main
problem. Hence, recursive algorithms are used in this method.
Example: Merge Sort
• The merge sort is a sorting algorithm that uses the divide and conquer
strategy. In this method division is dynamically carried out.
 Merge sort on an input array with n elements consist of three steps:
Divide : Partition array into two sub list with n/2 elements
Conquer : Then sort sub lists
Combine : Merge sub lists into unique sorted group
• The merge() function is used for merging two sub lists.
• The merge (arr, l, m, r) call is the key process: it assumes that arr[l….m] and
arr[m+1….r] are sorted and merges the two sub lists (sub arrays) into
one. See the following implementation:
Example: Merge Sort

mergeSort(arr[], l, r)
if r > l
    1. Find the middle point to divide the array into two sub lists:
       middle m = (l + r) / 2
    2. Call mergeSort for the first half:
       Call mergeSort (arr, l, m)
    3. Call mergeSort for the second half:
       Call mergeSort (arr, m+1, r)
    4. Merge the two sub lists sorted in steps 2 and 3:
       Call merge (arr, l, m, r)
Example: Merge Sort
• The following diagram shows complete merge sort process for an
example array {38,27,43,3,9,82,10}.
• The array is recursively divided in two lists till the size becomes 1.
• Once the size becomes 1, the merge process comes into action and starts
merging array back till the complete array is merged.
# Pseudo Code
Algorithm mergeSort (arr, l, r)
// Problem Description : This algorithm is for sorting the
// elements using merge sort
// Input: Array arr of unsorted elements, l as beginning (left side)
// pointer of array arr and r as end pointer of array (right side)
// Output: Sorted array arr[0……n-1]
if (l < r) then
{
    mid ← (l + r) / 2 // split the list at mid
    mergeSort (arr, l, mid) // First sublist
    mergeSort (arr, mid+1, r) // Second sublist
    Combine (arr, l, mid, r) // merging of two sublists
}
Algorithm Combine (arr, l, mid, r)
{
    k ← l // k as index for array temp
    i ← l // i as index for left sublist of array arr
    j ← mid + 1 // j as index for right sublist of array arr
    while (i <= mid and j <= r) do
    {
        if (arr[i] <= arr[j]) then // if smaller element is present in left sublist
        {
            temp[k] ← arr[i]
            i ← i + 1
            k ← k + 1
        }
        else // smaller element is present in right sublist
        {
            // copy smaller element to temp array
            temp[k] ← arr[j]
            j ← j + 1
            k ← k + 1
        }
    }
    // copy remaining elements of left sublist to temp
    while (i <= mid) do
    {
        temp[k] ← arr[i]
        i ← i + 1
        k ← k + 1
    }
    // copy remaining elements of right sublist to temp
    while (j <= r) do
    {
        temp[k] ← arr[j]
        j ← j + 1
        k ← k + 1
    }
}

Time complexity of Merge Sort is O(nLogn) in all 3 cases (worst, average and best)
as merge sort always divides the array into two halves and take linear time to merge
two halves.
# Python program for implementation of MergeSort
def mergeSort(arr):
    if len(arr) > 1:
        mid = len(arr)//2     # Finding the mid of the array
        L = arr[:mid]         # Dividing the array elements
        R = arr[mid:]         # into 2 halves

        mergeSort(L)          # Sorting the first half
        mergeSort(R)          # Sorting the second half

        i = j = k = 0

        # Copy data back from temp arrays L[] and R[] into arr[]
        while i < len(L) and j < len(R):
            if L[i] < R[j]:
                arr[k] = L[i]
                i += 1
            else:
                arr[k] = R[j]
                j += 1
            k += 1

        # Checking if any element was left
        while i < len(L):
            arr[k] = L[i]
            i += 1
            k += 1
        while j < len(R):
            arr[k] = R[j]
            j += 1
            k += 1

# Code to print the list
def printList(arr):
    for i in range(len(arr)):
        print(arr[i], end=" ")
    print()

# driver code to test the above code
arr = [12, 11, 13, 5, 6, 7]
print("Given array is", end="\n")
printList(arr)
mergeSort(arr)
print("Sorted array is: ", end="\n")
printList(arr)

Output:
Given array is
12 11 13 5 6 7
Sorted array is:
5 6 7 11 12 13
Step wise execution of Merge Sort
Greedy Strategy
• This method is popular for obtaining optimized solutions.
• In the Greedy technique, the solution is constructed through a sequence of
steps, each expanding a partially constructed solution obtained so far, until
a complete solution to the problem is reached.
• In the Greedy method the following activities are performed :
1. First we select some solution from the input domain.
2. Then we check whether the solution is feasible or not.
3. From the set of feasible solutions, we select the particular solution that
satisfies or nearly satisfies the objective function. Such a solution is
called an optimal solution.
4. The Greedy method works in stages. At each stage only one input is
considered at a time. Based on this input, it is decided whether the
particular input gives the optimal solution or not.
Example of Greedy Strategy
• Dijkstra’s Algorithm is a popular algorithm for finding shortest path using
Greedy method. This algorithm is called single source shortest path
algorithm.
• In this algorithm, for a given vertex called source the shortest path to all
other vertices is obtained.
• In this algorithm the main focus is not to find only one single path but to
find the shortest paths from any vertex to all other remaining vertices.
• This algorithm is applicable to graphs with non-negative weights only.
Consider the weighted connected graph given below, with vertices A, B, C, D, E
and edge weights:
A-B = 4, A-C = 8, B-C = 1, B-D = 3, C-D = 7, C-E = 3, D-E = 8
Continue…
Example of Greedy Strategy
• Now we will consider each vertex as a source and will find the shortest
distance from this vertex to every other remaining vertex.

Source vertex A : A-B path = 4, A-C path = 8, A-D path = ∞, A-E path = ∞
Source vertex B : B-C path = 4 + 1, B-D path = 4 + 3, B-E path = ∞
Source vertex C : C-D path = 5 + 7 = 12, C-E path = 5 + 3 = 8
Source vertex D : D-E path = 7 + 8 = 15
(The corresponding paths are shown in the graph.)

• But we have one shortest distance obtained from A to E, and that is A-B-
C-E with path length = 4 + 1 + 3 = 8. Similarly, other shortest paths can be
obtained by choosing an appropriate source and destination.
Pseudo Code
Time Complexity of the implementation is O(V²).
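The pseudo code slide itself is a figure in the original, so a hedged Python sketch of the simple O(V²) form of Dijkstra's algorithm (adjacency matrix plus a linear scan for the nearest unvisited vertex) is given below; it is an illustration under those assumptions, not the slide's own pseudo code.

INF = float('inf')   # INF marks "no edge"

def dijkstra(cost, source):
    n = len(cost)
    dist = [INF] * n
    visited = [False] * n
    dist[source] = 0
    for _ in range(n):
        # greedily pick the unvisited vertex with the smallest known distance
        u = min((v for v in range(n) if not visited[v]), key=lambda v: dist[v])
        visited[u] = True
        # relax all edges leaving u
        for v in range(n):
            if not visited[v] and cost[u][v] != INF:
                dist[v] = min(dist[v], dist[u] + cost[u][v])
    return dist

# Weighted graph from the example: vertices A, B, C, D, E (indices 0..4)
cost = [
    [0,   4,   8,   INF, INF],   # A
    [4,   0,   1,   3,   INF],   # B
    [8,   1,   0,   7,   3  ],   # C
    [INF, 3,   7,   0,   8  ],   # D
    [INF, INF, 3,   8,   0  ],   # E
]
print(dijkstra(cost, 0))   # [0, 4, 5, 7, 8] -> A-B-C-E gives distance 8 to E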
Assignment No.1
1. Define and explain the following terms :
(i) Data (ii) Data structure (iii) Flowchart.
2. Define algorithms and explain its characteristics.
3. Explain the divide and conquer strategy with suitable example.
Comment on its time complexity.
4. Explain the Asymptotic notation Big O, Omega and Theta with
suitable example.
5. Explain static and dynamic data structures with examples.
6. Differentiate between linear and non-linear data structure with
example.
7. Explain the Greedy strategy with suitable example. Comment on its
time complexity.
8. Define and explain ADT(Abstract Data Type).
9. State Analysis of programming constructs - Linear, Quadratic, Cubic,
Logarithmic
10. Define and explain the following terms : (a) Persistent data structure
(b) Ephemeral data structure (c) Time complexity (d) Space complexity
Case Study

Multiplication technique by the mathematician


Carl Friedrich Gauss and Karatsuba algorithm for
fast multiplication.
Thank You
