Data Structure A-Week2

The document discusses complexity analysis and insertion sort algorithms. It explains how to analyze the time complexity of algorithms and defines common complexity classes like constant, linear, and quadratic time. It also provides details about insertion sort, including the steps and an example algorithm.

Uploaded by

Malik Nadeem

Data Structures &

Algorithms
Week-2
Contents

• Last Lecture Preview


• Complexity Analysis
• Insertion Sort
Last Lecture Preview
Data Structure
Need for Data Structures
How to Select a Data Structure
Cost and Benefits of Data Structure
Data Structure Operations
Algorithms
Categories of Algorithms
Characteristics of an Algorithm
How to Write an Algorithm
Arrays
Complexity Analysis
• An essential aspect of data structures is algorithms.
• Data structures are implemented using algorithms.
• Some algorithms are more efficient than others.
• We would prefer to choose an efficient algorithm, so it would be nice
to have metrics for comparing algorithm efficiency.

Complexity analysis allows us to measure how fast a program is when
it performs computations.
Complexity of an Algorithm
• The complexity of an algorithm is a function describing the efficiency
of the algorithm in terms of the amount of data the algorithm must
process.
• There are two main complexity measures of the efficiency of an
algorithm.

What are they???


Complexity of an Algorithm
• Time complexity
Time complexity is a function describing the amount of time an algorithm takes in
terms of the amount of input to the algorithm.
f(n) = 4 + 6n
"Time" can mean the number of memory accesses performed, the number of
comparisons between integers, the number of times some inner loop is executed.
We try to keep this idea of time separate from "wall clock" time, since many factors
unrelated to the algorithm itself can affect the real time (like the language used,
type of computing hardware, proficiency of the programmer, optimization in the
compiler, etc.).
It turns out that, if we choose the units wisely, all of the other factors don't matter
and we can get an independent measure of the efficiency of the algorithm.
Complexity of an Algorithm
• Space complexity
Space complexity is a function describing the amount of memory (space) an
algorithm takes in terms of the amount of input to the algorithm.
We often speak of "extra" memory needed, not counting the memory needed to
store the input itself.
We use natural (but fixed-length) units to measure this. We can use bytes, but it's
easier to use, say, number of integers used, number of fixed-sized structures, etc.
Space complexity is sometimes ignored because the space used is minimal
and/or obvious, but sometimes it becomes as important an issue as time.
Time Complexity
• Counting Instructions
Let's assume our processor can execute each of the
following operations as one instruction:
Assigning a value to a variable
Looking up the value of a particular element in
an array
Comparing two values
Incrementing a value
Basic arithmetic operations such as addition and
multiplication
Time Complexity

The first statement, m = a[ 0 ];, requires 2 instructions:

One for looking up a[ 0 ] and one for assigning the
value to m.
So instructions are 2.
Time Complexity

The for loop initialization code also has to always run.


This gives us two more instructions; an assignment and
a comparison:
i = 0;
i < n;
So instructions are 2 + 2 (previous step) = 4.
Time Complexity

After each for loop iteration, we need two more


instructions to run, an increment of i and a comparison
to check if we'll stay in the loop:

i++;
i < n;
So these contribute 2 instructions per iteration, or 2n in total
(where n is the number of loop iterations).

So instructions will be 4 (previous) + 2n = 4 + 2n.


Time Complexity

Find 4n???
Time Complexity

Now, looking at the for body, we have an array lookup


operation and a comparison that happen always:
if ( a[ i ] >= m )
So instructions are 2.

But the if-body may run or may not run, depending on
what the array values actually are. If it happens to be so
that a[ i ] >= m, then we'll run these two additional
instructions, an array lookup and an assignment:
m = a[ i ];
So instructions will be 2 + 2 = 4.

Still finding 4n???

Time Complexity

In the worst case, the if-body runs on every one of the n
iterations, so each iteration executes 4 instructions: the
array lookup and comparison that always happen, plus the
extra lookup and assignment.

So the loop body contributes 4n (4 instructions × n iterations).
Time Complexity

2    (first assignment: m = a[ 0 ])
2    (for loop initialization)
2n   (increment and comparison, each iteration)
4n   (loop body, worst case)

So 2 + 2 + 2n + 4n = 4 + 6n

f(n) = 4 + 6n
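The slides count instructions for a find-the-maximum loop but never reproduce the code itself. A minimal C sketch consistent with the counts above (the function name findMax is an assumption; the slides don't name it):

```c
/* Hypothetical reconstruction of the loop analyzed above. */
int findMax(const int a[], int n)
{
    int m = a[0];                /* 2 instructions: lookup + assignment       */
    for (int i = 0; i < n; i++)  /* 2 to initialize; 2 more per iteration     */
    {
        if (a[i] >= m)           /* 2 per iteration: lookup + comparison      */
        {
            m = a[i];            /* 2 more in the worst case: lookup + assign */
        }
    }
    return m;
}
```

Calling findMax on the array {25, 16, 14, 10} returns 25; the worst case for the instruction count is an ascending array, where the if-body runs on every iteration.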
Time Complexity-Exercise
int ArraySum()
{
    int a[3] = {2, 6, 7};
    int sum = 0;
    for (int i = 0; i < n; i++)
    {
        sum = sum + a[i];
    }
    return sum;
}
Time Complexity-Exercise
Hint Find 5n+4???
Time Complexity
• Worst, Average and Best Cases
• Asymptotic Notations
• Common Time Complexities
Worst, Average and Best Cases
• Worst Case Analysis
In the worst case analysis, we calculate an upper bound on the running
time of an algorithm.
Find the upper bound in 4 + 6n??
• Average Case Analysis
In the average case analysis, we take all possible inputs and calculate
the computing time for all of the inputs.
• Best Case Analysis
In the best case analysis, we calculate a lower bound on the running
time of an algorithm.
Asymptotic Notations
1. Θ Notation
The theta notation bounds a function from above and below, so it
defines exact asymptotic behavior.
A simple way to get the Theta notation of an expression is to drop low
order terms and ignore leading constants.
3n³ + 6n² + 6000 = Θ(n³)
Asymptotic Notations
2. Ω Notation
• Ω notation provides an asymptotic lower bound.
• Ω notation can be useful when we have a lower bound on the time complexity of an
algorithm.
• Since the best case performance of an algorithm is generally not useful, the Omega
notation is the least used notation among all three.
• Time complexity of all computer algorithms can be written as Ω(1)

3. Big O Notation
The Big O notation defines an upper bound of an algorithm, it bounds a function only
from above.
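As a worked instance of the definition, the instruction count f(n) = 4 + 6n derived earlier is O(n), because a single linear function bounds it from above for all sufficiently large n:

```latex
f(n) = 4 + 6n \le 4n + 6n = 10n \quad \text{for all } n \ge 1,
\qquad \text{so } f(n) = O(n) \ \text{(witnesses } c = 10,\ n_0 = 1\text{)}.
```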
• Linear, Quadratic and Constant Time
Big O Notation Usage
1. Constant Time
2. Linear Time
3. Quadratic time
Big O Notation Usage
1. Constant Time
• An algorithm is said to be constant time if the value of T(n) is bounded by a value
that does not depend on the size of the input.
int a[4] = {25, 16, 14, 10};
• For example, accessing any single element in an array takes constant time as only
one operation has to be performed to locate it.
2. Linear Time
• An algorithm is said to take linear time if its time complexity is O(n)
• This means that the running time increases at most linearly with the size of the
input.
4 + 6n
Big O Notation Usage
3. Quadratic Time
An algorithm is said to be quadratic time if T(n) = O(n²).
6n² + 4n + 1

Find an example where time complexity can be n2 ?

What about Nested loops???
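One answer to the nested-loops question, sketched under the same counting model as before: an outer loop of n iterations whose body is itself a loop of n iterations executes its innermost statement n × n = n² times.

```c
/* For each of the n values of i, the inner loop runs n times,
   so count++ executes n * n = n^2 times: quadratic time. */
int countPairs(int n)
{
    int count = 0;
    for (int i = 0; i < n; i++)
    {
        for (int j = 0; j < n; j++)
        {
            count++;
        }
    }
    return count;
}
```

For example, countPairs(4) performs 16 increments; doubling n to 8 quadruples the work to 64, which is the signature of quadratic growth.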


Performance Classification
f(n) Classification
1 Constant: run time is fixed, and does not depend upon n. Most instructions are
executed once, or only a few times, regardless of the amount of information being
processed
log n Logarithmic: when n increases, so does run time, but much slower. Common in
programs which solve large problems by transforming them into smaller problems.

n Linear: run time varies directly with n. Typically, a small amount of processing is
done on each element.

n log n When n doubles, run time slightly more than doubles. Common in programs which
break a problem down into smaller sub-problems, solve them independently, then
combine the solutions.

n² Quadratic: when n doubles, runtime increases fourfold. Practical only for small
problems; typically the program processes all pairs of the input (e.g. in a doubly nested
loop).

n³ Cubic: when n doubles, runtime increases eightfold.

2ⁿ Exponential: when n doubles, run time squares. This is often the result of a natural,
“brute force” solution.
Size does matter

What happens if we double the input size N?

N     log₂N   5N      N log₂N   N²       2^N
8     3       40      24        64       256
16    4       80      64        256      65536
32    5       160     160       1024     ~10⁹
64    6       320     384       4096     ~10¹⁹
128   7       640     896       16384    ~10³⁸
256   8       1280    2048      65536    ~10⁷⁷
COMPLEXITY CLASSES

[Figure: growth of time (steps) against input size for the common complexity classes]
Insertion Sort
Insertion Sort
• Insertion sort is a very simple method to sort numbers in an
ascending or descending order.
• It can be compared with the way playing cards are sorted in one's
hand at the time of playing a game.
• The numbers, which are needed to be sorted, are known as keys.
Insertion Sort - Steps
1. Assume the first element is already sorted.
2. Pick the next element as the key.
3. Compare the key with the elements of the sorted portion, moving from right to left.
4. Shift every element greater than the key one position to the right.
5. Insert the key into the gap, then repeat with the next element.
Example
Insertion Sort- Algorithm
for j = 1 to A.length - 1
  key = A[j]
  i = j - 1
  while i >= 0 and A[i] > key
    A[i + 1] = A[i]
    i = i - 1
  A[i + 1] = key

Dry Run??
Insertion Sort- Code
void insertionSort()
{
    int arr[6] = {10, 6, 8, 12, 9, 20};
    int n = 6;
    int key, j;
    for (int i = 1; i < n; i++)
    {
        key = arr[i];
        j = i - 1;

        while (j >= 0 && arr[j] > key)
        {
            arr[j + 1] = arr[j];
            j = j - 1;
        }
        arr[j + 1] = key;
    }
}
Insertion Sort-Time Complexity
Worst Case: O(n²)
Best Case: O(n)

In the worst case (a reverse-sorted array), the inner while loop makes
i shifts for the i-th key, at some constant cost c per iteration:

c⋅1 + c⋅2 + c⋅3 + ⋯ + c⋅(n−1)
= c⋅(1 + 2 + 3 + ⋯ + (n−1))

Formula for an arithmetic series with k terms: k(a₁ + aₖ)/2
Here k = n−1, a₁ = 1, aₖ = n−1, so

1 + 2 + ⋯ + (n−1) = (n−1)(1 + (n−1))/2 = (n² − n)/2

So the total is c⋅(n² − n)/2 = cn²/2 − cn/2

Dropping the constant factor and the low order term leaves n², i.e. O(n²).
In the best case (an already sorted array), the while test fails
immediately on every pass, so the running time is linear: O(n).
Class Work
Sort the following array in descending order through insertion sort

Input:
a[24,8,25,63,14,7]

Output:
a[63,25,24,14,8,7]

Submit the algorithm/code with a dry run in handwritten form
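One possible sketch for the class work (not the only valid answer, and the dry run is still yours to do by hand): insertion sort produces descending order if the while comparison is flipped from > to <, so that smaller elements, rather than larger ones, get shifted to the right.

```c
/* Insertion sort in descending order: the only change from the
   ascending version is arr[j] < key instead of arr[j] > key. */
void insertionSortDesc(int arr[], int n)
{
    for (int i = 1; i < n; i++)
    {
        int key = arr[i];
        int j = i - 1;
        while (j >= 0 && arr[j] < key)  /* shift smaller elements right */
        {
            arr[j + 1] = arr[j];
            j = j - 1;
        }
        arr[j + 1] = key;
    }
}
```

On the input {24, 8, 25, 63, 14, 7} this yields {63, 25, 24, 14, 8, 7}, matching the expected output above.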

You might also like