Introduction
26-06-2023
Objectives
• Analyze the asymptotic performance of algorithms
• Derive the time and space complexity of different sorting algorithms and compare them to
choose an efficient algorithm for a specific application.
• Understand and analyze a problem in order to apply design techniques such as divide and conquer,
dynamic programming, backtracking, and branch and bound, and understand how the choice
of algorithm design method impacts the performance of programs.
• Understand and apply various graph algorithms for finding shortest paths and minimum
spanning trees.
• Synthesize efficient algorithms in common engineering design situations.
• Understand the notations of P, NP, NP-Complete and NP-Hard.
• Well-Defined: Algorithms have a clear, precise, and unambiguous description that can be followed
by anyone to obtain the desired result.
• Input and Output: Algorithms take one or more inputs, process them through a series of steps, and
produce an output or solution.
• Finiteness: Algorithms have a finite number of well-defined steps. They terminate after a finite
number of operations, providing the expected output.
• Deterministic: Algorithms are deterministic, meaning that given the same input, they always
produce the same output. The behavior of an algorithm is predictable and consistent.
• Effective: Algorithms are practical and provide a feasible solution to the problem at hand. They can
be implemented using computational resources, such as a computer or a human following the
instructions.
• Optimality: In some cases, algorithms aim to provide optimal solutions, such as finding the most
efficient or optimal solution among multiple possibilities.
• Algorithms play a crucial role in various fields, including computer science, mathematics,
engineering, and beyond.
• They are the building blocks of software development, data analysis, artificial intelligence,
and many other computational applications.
• By employing algorithms, complex problems can be broken down into simpler, manageable
steps, enabling efficient and effective problem-solving.
• An algorithm provides a method of doing things, from our daily life to complex real-world
problems.
• Example:
• Gather Information
• Explore Route Options
• Evaluate Route Criteria
• Compare Routes
• Select Optimal Route
• Execute the Route
• Arrive at Destination
• Prediction of Performance: Algorithm analysis provides insights into how an algorithm will perform
under different scenarios.
• It allows us to predict and estimate the time and space requirements for various input sizes. This
information is valuable for capacity planning and system design.
• Algorithmic Design and Improvement: Analysis helps us gain a deeper understanding of
algorithms, enabling us to design better algorithms or improve existing ones.
• It allows us to identify inefficiencies, bottlenecks, or areas where optimizations can be applied.
• By analyzing the trade-offs between different algorithmic approaches, we can make informed design
decisions and enhance the overall quality and efficiency of our solutions.
• Algorithm Selection: Algorithm analysis aids in the selection of appropriate algorithms for specific
problem domains.
• Different algorithms may have different performance characteristics, and analysis helps us identify
the most suitable algorithm based on the problem's requirements and constraints.
• By understanding the trade-offs between different algorithms, we can make informed decisions that
result in effective and efficient solutions.
• Theoretical Foundation: Algorithm analysis is an essential component of the theoretical foundation
of computer science.
• Theoretical analysis helps us understand the fundamental limits of problem-solving and develop
algorithms that work efficiently within those limits.
• The analysis of algorithms allows us to evaluate and compare their efficiency, optimize resource
utilization, predict performance, design and improve algorithms, select appropriate algorithms, and
establish theoretical foundations.
• It is a crucial aspect of algorithm design, system optimization, and problem-solving in various fields,
ensuring that we can develop efficient, scalable, and effective solutions to real-world problems.
• Step 4: Implementation
• Decide the programming language to implement the algorithm.
• Step 5: Testing
• Integrate feedback from users, fix bugs, ensure compatibility across different versions
• Step 6: Maintenance
• Release Updates
• The Random Access Machine (RAM) model is a theoretical computational model used to analyze
the complexity and efficiency of algorithms.
• It provides a simplified abstraction of a computer architecture to evaluate the running time and space
complexity of algorithms.
• The term “random access” refers to the ability of the CPU to access an arbitrary memory cell with one
primitive operation.
• It is assumed that the CPU in the RAM model can perform any primitive operation in a constant
number of steps that does not depend on the size of the input.
• Thus, an accurate bound on the number of primitive operations an algorithm performs corresponds
directly to the running time of that algorithm in the RAM model.
• The RAM model assumes that each memory access, arithmetic operation, and control flow
instruction can be executed in constant time, implying that these operations have uniform costs.
• The RAM model supports a basic set of instructions, including arithmetic operations (addition,
subtraction, multiplication, division), memory operations (load, store), control flow
instructions (conditional branching, loops), and function calls.
• Using the RAM model, algorithmic analysis focuses on counting the number of basic operations
performed by an algorithm, such as arithmetic operations and memory accesses.
• This analysis helps in determining the time complexity and space complexity of algorithms.
• The RAM model provides a simplified framework for analyzing algorithms.
• It serves as a fundamental tool for reasoning about algorithmic efficiency and comparing the relative
performance of different algorithms.
• When analyzing the running time of functions or algorithms, we often classify them into
different categories based on their growth rate or time complexity. The most commonly
used classifications are as follows:
• Constant Time (O(1)): Functions with constant time complexity have running times that do
not depend on the input size. Regardless of the input size, the execution time remains
constant.
• Linear Time (O(n)): Functions with linear time complexity have running times that scale
linearly with the input size. As the input size increases, the running time increases
proportionally.
• Logarithmic Time (O(log n)): Functions with logarithmic time complexity have running
times that grow logarithmically with the input size. These functions often divide the input
into smaller parts in each step, resulting in efficient performance even for large inputs.
• Quadratic Time (O(n^2)): Functions with quadratic time complexity have running times
that grow quadratically with the input size.
• These functions often involve nested loops or repetitive operations on the input.
• Polynomial Time (O(n^k)): Functions with polynomial time complexity have running times
that grow with a power of the input size.
• The exponent k indicates the degree of the polynomial. Polynomial time algorithms are
generally considered efficient, but the performance can degrade for large input sizes,
especially for higher degrees.
• Exponential Time (O(2^n)): Functions with exponential time complexity have running
times that grow exponentially with the input size.
• These algorithms are generally considered inefficient and can become infeasible for even
moderate input sizes.
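To make these classes concrete, here is a small Python sketch (the helper name `growth_table` is ours, not from the text) tabulating how each class grows with the input size:

```python
import math

# Rough operation counts for each complexity class at a few input sizes.
def growth_table(sizes):
    rows = []
    for n in sizes:
        rows.append({
            "n": n,
            "O(1)": 1,
            "O(log n)": round(math.log2(n)),
            "O(n)": n,
            "O(n^2)": n * n,
            "O(n^3)": n ** 3,   # polynomial, k = 3
            "O(2^n)": 2 ** n,   # exponential: infeasible very quickly
        })
    return rows

for row in growth_table([4, 8, 16]):
    print(row)
```

Even at n = 16 the exponential column dwarfs the others, which is why exponential-time algorithms become infeasible so early.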
• To calculate the efficiency of algorithms, you can analyze their time complexity and space
complexity.
• These complexities describe how the running time and memory usage of an algorithm scale with the
input size. Here's a general approach to calculate efficiency:
• Analyze Time Complexity:
• Determine the number of operations performed by the algorithm as a function of the input size.
• Count the most significant operations, such as loops, recursive calls, and significant computations.
• Express the count of operations in terms of the input size using Big O notation (O(...)). Simplify the
expression by removing lower-order terms and constant factors.
• Identify the time complexity class based on the simplified expression. For example, O(1) for constant
time, O(n) for linear time, O(n^2) for quadratic time, etc.
• Consider the best-case, worst-case, or average-case scenarios, depending on the nature of the
algorithm.
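A minimal sketch of this counting approach, using a hypothetical all-pairs loop as the algorithm under analysis:

```python
def count_pair_comparisons(a):
    """Count the comparisons made by a doubly nested loop that visits
    every pair (i, j) with i < j -- the classic O(n^2) pattern."""
    count = 0
    n = len(a)
    for i in range(n):
        for j in range(i + 1, n):
            count += 1  # one significant operation per pair
    return count

# The loop body runs n(n-1)/2 times; dropping the constant factor 1/2
# and the lower-order term -n/2 leaves O(n^2).
for n in (4, 10, 100):
    assert count_pair_comparisons(list(range(n))) == n * (n - 1) // 2
```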
• Analysis of algorithms is the process of studying and evaluating the efficiency and performance
characteristics of algorithms.
• It involves analyzing the resources, such as time and space, utilized by an algorithm to solve a
specific problem. The primary goals of algorithm analysis are:
• Efficiency Evaluation: The analysis helps determine how efficient an algorithm is in terms of its
running time and space usage. It provides insights into the algorithm's performance and scalability
as the input size grows.
• Algorithm Comparison: By comparing the performance of different algorithms solving the same
problem, analysis helps identify the most efficient algorithm for a given task. It allows us to select the
best algorithm based on the problem's requirements, constraints, and input characteristics.
• Algorithm Design Improvement: Through analysis, inefficiencies or bottlenecks in an algorithm
can be identified. This helps in optimizing or improving the design of the algorithm to achieve better
performance.
• Space Complexity Analysis: Along with time complexity analysis, it evaluates the amount of
memory required by an algorithm. It considers the auxiliary space used by data structures, recursive
calls, and other variables.
• Trade-off Analysis: Sometimes, algorithms optimize one aspect (such as time complexity) at the
expense of another (such as space complexity).
• Trade-off analysis helps identify the optimal balance between different performance metrics based
on the problem's requirements and available resources.
• Example-1: Let's consider a sorting algorithm such as Quick Sort. The best case occurs when the
pivot chosen at each recursive call splits the array into two roughly equal halves (for instance, when
each pivot happens to be close to the median), leading to balanced partitions.
• In this case the algorithm's performance is optimal, and the time complexity reduces to its best
case, which is typically denoted as O(n log n).
• Example-2: Consider a linear search algorithm that searches for a specific element in an array.
• The best case scenario occurs when the target element is found at the very beginning of the array,
i.e., at index 0. In this case, the algorithm would perform a single comparison and immediately find
the element.
• Therefore, the best case time complexity of linear search is O(1), indicating that it requires a
constant number of operations regardless of the size of the input array.
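The best case described above can be checked with a short Python sketch:

```python
def linear_search(a, target):
    """Return the index of target in a, or -1 if it is absent.
    Best case: target at index 0 -- one comparison, O(1).
    Worst case: target absent -- n comparisons, O(n)."""
    for i, value in enumerate(a):
        if value == target:
            return i
    return -1

assert linear_search([7, 3, 9], 7) == 0   # best case: found immediately
assert linear_search([7, 3, 9], 4) == -1  # worst case: scanned everything
```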
• Example-1: Let's consider a sorting algorithm such as Bubble Sort. In the worst case scenario, the
input array is in reverse sorted order. In each pass of the algorithm, the largest remaining element is
shifted to its correct position by swapping adjacent elements.
• In the worst case, the largest element needs to traverse the entire array in each pass, resulting in
multiple swaps for each element. This leads to a time complexity of O(n^2) for Bubble Sort in the
worst case.
• Example-2: Consider a binary search algorithm that searches for a specific element in a sorted
array. The worst case scenario occurs when the target element is not present in the array, so the
algorithm must keep searching until the search space is empty.
• In this case, the binary search algorithm repeatedly divides the search space in half and checks the
middle element. As the search space is halved at each step, the algorithm requires logarithmic time
to exhaust it. Therefore, the worst case time complexity of binary search is O(log n),
where n is the size of the input array.
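The halving behaviour is easy to see in a Python sketch of binary search:

```python
def binary_search(a, target):
    """Search a sorted list by halving the search space each step: O(log n)."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1   # discard the lower half
        else:
            hi = mid - 1   # discard the upper half
    return -1  # worst case: element absent after ~log2(n) halvings

data = [-5, 1, 2, 5, 12, 14, 16]
assert binary_search(data, 12) == 4
assert binary_search(data, 13) == -1
```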
• Asymptotic notations are mathematical notations used to describe the upper and lower bounds of
the running time or space complexity of algorithms. They provide a way to analyze and compare the
efficiency of algorithms without focusing on specific constant factors or lower-order terms. The most
commonly used asymptotic notations are:
• Big O notation (O): It represents the upper bound or worst-case scenario of the running time or
space complexity of an algorithm.
• The notation O(f(n)) denotes that the algorithm's running time or space usage grows no faster than a
constant multiple of the function f(n) as the input size n approaches infinity.
• Omega notation (Ω): It represents the lower bound or best-case scenario of the running time or
space complexity of an algorithm.
• The notation Ω(f(n)) denotes that the algorithm's running time or space usage is at least a constant
multiple of the function f(n) as the input size n approaches infinity.
• Theta notation (Θ): It represents the tight bound or average-case scenario of the running time or
space complexity of an algorithm.
• The notation Θ(f(n)) denotes that the algorithm's running time or space usage is bounded both from
above and from below by a constant multiple of the function f(n) as the input size n approaches
infinity.
• These notations provide a convenient way to express the growth rates of algorithms and
classify them into different complexity classes.
• For example, if an algorithm has a time complexity of O(n^2), it means that the running time grows
quadratically with the input size. If an algorithm has a space complexity of Ω(log n), it means that the
space usage grows logarithmically with the input size.
• Asymptotic notations allow us to analyze the scalability and efficiency of algorithms, compare
different algorithms, and make informed decisions about which algorithm to use based on their time
or space complexity characteristics.
• Big O notation is the formal way to express the upper bound of an algorithm's running time.
• It measures the worst case time complexity, i.e., the longest amount of time an algorithm can possibly
take to complete.
• For a given function g(n), we denote by O(g(n)) the set of functions:
O(g(n)) = { f(n) : there exist positive constants c and n₀ such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀ }
• An upper bound g(n) of an algorithm defines the maximum time required; we can always solve the
problem in at most g(n) time (up to a constant factor).
• Similarly, f(n) = Ω(g(n)) means there exist positive constants c and n₀ such that
0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀.
• f(n) = Θ(g(n)) means there exist positive constants c₁, c₂, and n₀ such that
0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀.
DEVANG PATEL INSTITUTE OF ADVANCE TECHNOLOGY AND RESEARCH
Fundamentals of Algorithms – Asymptotic Notations
• To analyze the time complexity of an algorithm using a recurrence relation, we typically solve the
recurrence relation to obtain a closed-form solution or an asymptotic upper bound.
• Techniques like substitution method, recursion tree, or master theorem are often used to solve
recurrence relations and determine the time complexity of algorithms.
• Recurrence relations provide a powerful tool for understanding and analyzing algorithms, allowing us
to reason about their time complexity and make predictions about their performance for different
input sizes.
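As a small illustration of this, the recurrence T(n) = 2T(n/2) + n (the divide-and-conquer pattern behind Merge Sort) can be evaluated directly and compared against its closed form; this sketch assumes T(1) = 1 and n a power of two:

```python
def T(n):
    """Recurrence T(n) = 2*T(n/2) + n with base case T(1) = 1."""
    if n <= 1:
        return 1
    return 2 * T(n // 2) + n

# Unrolling the recurrence for n = 2^k gives T(n) = n*log2(n) + n,
# i.e. Theta(n log n).
for k in range(1, 10):
    n = 2 ** k
    assert T(n) == n * k + n
```

The recursion tree has log₂(n) levels, each contributing n work, which is exactly what the closed form records.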
• Arithmetic series: ∑ₖ₌₁ⁿ k = 1 + 2 + ... + n = n(n+1)/2
• Geometric series: ∑ₖ₌₀ⁿ xᵏ = 1 + x + x² + ... + xⁿ = (xⁿ⁺¹ − 1)/(x − 1), for x ≠ 1
• Special case, |x| < 1: ∑ₖ₌₀^∞ xᵏ = 1/(1 − x)
• Harmonic series: ∑ₖ₌₁ⁿ 1/k = 1 + 1/2 + ... + 1/n ≈ ln n
• Other important formulas: ∑ₖ₌₁ⁿ lg k ≈ n lg n, and ∑ₖ₌₁ⁿ kᵖ = 1ᵖ + 2ᵖ + ... + nᵖ ≈ nᵖ⁺¹/(p + 1)
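These closed forms can be spot-checked numerically with a short Python sketch:

```python
import math

n = 50

# Arithmetic series: 1 + 2 + ... + n = n(n+1)/2
assert sum(range(1, n + 1)) == n * (n + 1) // 2

# Geometric series: 1 + x + ... + x^n = (x^(n+1) - 1)/(x - 1), x != 1
x = 3
assert sum(x ** k for k in range(n + 1)) == (x ** (n + 1) - 1) // (x - 1)

# Harmonic series: 1 + 1/2 + ... + 1/n is approximately ln n
assert abs(sum(1 / k for k in range(1, n + 1)) - math.log(n)) < 1
```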
• Sorting is any process of arranging items systematically or arranging items in a sequence ordered by
some criterion.
• The purpose of sorting is to impose order on a set of data, making it easier to search, retrieve,
analyze, and manipulate.
• By organizing the data in a specific order, sorting enables efficient searching and facilitates tasks
such as finding the minimum or maximum element, determining if a value exists in the data, or
performing various operations that rely on the order of the elements.
• Sorting can be performed on various types of data, including numbers, strings, records, or custom
objects.
• The choice of sorting algorithm depends on various factors, including the size of the data, the
distribution of the data, the stability requirement, and the available resources.
Pass 1 of Bubble Sort on [45, 34, 56, 23, 12]:
• Compare 45 and 34 → swap → [34, 45, 56, 23, 12]
• Compare 45 and 56 → no swap
• Compare 56 and 23 → swap → [34, 45, 23, 56, 12]
• Compare 56 and 12 → swap → [34, 45, 23, 12, 56] (56 has bubbled into place)
Subsequent passes work the same way on the shrinking unsorted part:
• Pass 2 → [34, 23, 12, 45, 56]
• Pass 3 → [23, 12, 34, 45, 56]
• Pass 4 → [12, 23, 34, 45, 56] (sorted)
• It is a simple sorting algorithm that works by comparing each pair of adjacent items and swapping
them if they are in the wrong order.
• The pass through the list is repeated until no swaps are needed, which indicates that the list is
sorted.
• As it only uses comparisons to operate on elements, it is a comparison sort.
• Although the algorithm is simple, it is too slow for practical use.
• The time complexity of the bubble sort algorithm is O(n^2), where n is the number of elements in the
list.
• It is not very efficient for large data sets and is mainly used for educational purposes or when dealing
with small or nearly sorted lists.
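A minimal Python sketch of the algorithm described above, including the early-exit check that makes the best case O(n):

```python
def bubble_sort(a):
    """Sort a in place by repeatedly swapping adjacent out-of-order pairs.
    Stops early when a full pass makes no swaps."""
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):       # last i elements are already in place
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:                  # already sorted: one pass suffices, O(n)
            break
    return a

assert bubble_sort([45, 34, 56, 23, 12]) == [12, 23, 34, 45, 56]
```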
• Best Case:
• The best-case scenario occurs when the input array is already sorted in the correct order.
• In each pass, Bubble Sort compares adjacent elements and finds that they are already in the correct
order, so no swaps are needed.
• Using a simple "no swaps were made" flag, Bubble Sort can detect after the first pass that the
array is already sorted, so a single pass suffices.
• The best-case time complexity of Bubble Sort is O(n).
• Average Case:
• The average-case time complexity of Bubble Sort is also O(n^2).
• This is because, on average, Bubble Sort requires approximately the same number of comparisons
and swaps as the worst-case scenario.
• Even if the input array is not in completely reverse order, Bubble Sort still needs to make multiple
passes through the array to sort it completely.
• The average-case time complexity of O(n^2) holds for random or arbitrary input distributions.
• Worst Case:
• The worst-case scenario occurs when the input array is in reverse order, meaning the largest
element is at the beginning.
• In each pass, the largest element "bubbles" its way to the end of the array, requiring n - 1
comparisons and swaps.
• Since there are n elements in the array, Bubble Sort needs to make (n - 1) passes to sort the array
completely.
• The total number of comparisons and swaps in the worst case is approximately (n - 1) + (n - 2) + ... +
2 + 1, which is equal to (n * (n - 1)) / 2.
• Therefore, the worst-case time complexity of Bubble Sort is O(n^2).
# Input: Array A
# Output: Sorted array A
Algorithm: Selection_Sort(A)
for i ← 1 to n-1 do
    minj ← i
    minx ← A[i]
    for j ← i + 1 to n do
        if A[j] < minx then
            minj ← j
            minx ← A[j]
    A[minj] ← A[i]
    A[i] ← minx
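The pseudocode above translates directly into Python (0-indexed rather than 1-indexed):

```python
def selection_sort(a):
    """Repeatedly select the minimum of the unsorted part
    and swap it into position i."""
    n = len(a)
    for i in range(n - 1):
        minj, minx = i, a[i]            # assume the current element is smallest
        for j in range(i + 1, n):
            if a[j] < minx:
                minj, minx = j, a[j]    # remember the new minimum
        a[minj], a[i] = a[i], minx      # swap the minimum into position i
    return a

assert selection_sort([5, 1, 12, -5, 16, 2, 12, 14]) == [-5, 1, 2, 5, 12, 12, 14, 16]
```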
Trace of Selection_Sort on A = [5, 1, 12, -5, 16, 2, 12, 14] (indices 1–8):
Step 1: unsorted array: [5, 1, 12, -5, 16, 2, 12, 14].
Step 2: minj = 1, minx = 5 (assume the current element is the smallest). Scan the remaining
unsorted part A[2..8]: the minimum is -5 at index 4, so swap → [-5, 1, 12, 5, 16, 2, 12, 14].
Step 3: minj = 2, minx = 1. Scan A[3..8]: no smaller value is found, so A[2] stays in place.
(Steps 4–6 proceed in the same way on the shrinking unsorted part.)
Step 7: minj = 6, minx = 16. Scan A[7..8]: the minimum is 12 at index 7, so swap
→ [-5, 1, 2, 5, 12, 12, 16, 14].
Step 8: minj = 7, minx = 16. Scan A[8..8]: the minimum is 14 at index 8, so swap
→ [-5, 1, 2, 5, 12, 12, 14, 16]. The array is sorted.
• It's worth noting that Selection Sort is not considered an efficient sorting algorithm for large datasets,
as its time complexity is quadratic.
• However, it can be useful for small arrays or cases where the number of swaps is a concern, as it
performs a limited number of swaps compared to other quadratic sorting algorithms like Bubble Sort.
• Insertion sort is a simple comparison-based sorting algorithm that builds the final sorted array one
element at a time.
• It works by dividing the input array into two parts: the sorted part and the unsorted part.
• The algorithm iterates over the unsorted part, taking each element and inserting it into its correct
position within the sorted part.
Trace of Insertion Sort on A = [5, 1, 12, -5, 16, 2, 12, 14] (indices 1–8). In each step,
x = A[i] is compared against the sorted prefix, with j starting at i − 1 and elements shifted
down while j > 0 and A[j] > x:
Step 1: unsorted array: [5, 1, 12, -5, 16, 2, 12, 14].
Step 2: i = 2, x = 1; shift 5 down and insert 1 → [1, 5, 12, -5, 16, 2, 12, 14].
Step 3: i = 3, x = 12; 5 < 12, so no shift is needed.
Step 4: i = 4, x = -5; shift 12, 5, and 1 down and insert -5 at the front
→ [-5, 1, 5, 12, 16, 2, 12, 14].
Step 5: i = 5, x = 16; no shift is needed.
Step 6: i = 6, x = 2; shift 16, 12, and 5 down and insert 2 → [-5, 1, 2, 5, 12, 16, 12, 14].
Step 7: i = 7, x = 12; shift 16 down and insert 12 → [-5, 1, 2, 5, 12, 12, 16, 14].
Step 8: i = 8, x = 14; shift 16 down and insert 14 → [-5, 1, 2, 5, 12, 12, 14, 16].
The entire array is now sorted.
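The shift-and-insert procedure traced above can be sketched in Python as:

```python
def insertion_sort(a):
    """Insert each element x = a[i] into the sorted prefix a[0..i-1],
    shifting larger elements down one slot (the "shift down" in the trace)."""
    for i in range(1, len(a)):
        x = a[i]
        j = i - 1
        while j >= 0 and a[j] > x:
            a[j + 1] = a[j]   # shift the larger element down
            j -= 1
        a[j + 1] = x          # insert x into its correct position
    return a

assert insertion_sort([5, 1, 12, -5, 16, 2, 12, 14]) == [-5, 1, 2, 5, 12, 12, 14, 16]
```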
Thank You