Introduction

This document provides an overview of an algorithms course taught at Devang Patel Institute of Advance Technology And Research. The course objectives are to analyze algorithm performance, derive time and space complexity of sorting algorithms, and understand techniques like divide and conquer. It also defines what an algorithm is, lists their key characteristics and properties, and describes common algorithmic paradigms and the typical steps for solving problems using algorithms.


DEVANG PATEL INSTITUTE OF ADVANCE TECHNOLOGY AND RESEARCH

CE355 – Design and Analysis of Algorithms


Chapter 1 - Introduction

Subject Coordinator: Prof. Shraddha Vyas, Assistant Professor
Subject Faculties: Prof. Shraddha Vyas, Prof. Jay Patel


Devang Patel Institute of Advance Technology And Research
Charotar University of Science and Technology

26-06-2023
Objectives
• Analyze the asymptotic performance of algorithms.
• Derive the time and space complexity of different sorting algorithms and compare them to
choose an application-specific, efficient algorithm.
• Understand and analyze a problem in order to apply a design technique from among divide and
conquer, dynamic programming, backtracking, and branch and bound, and understand how
the choice of algorithm design method impacts the performance of programs.
• Understand and apply various graph algorithms for finding shortest paths and minimum
spanning trees.
• Synthesize efficient algorithms in common engineering design situations.
• Understand the notions of P, NP, NP-Complete, and NP-Hard.



Fundamentals of Algorithms
• An algorithm is a step-by-step procedure or a set of well-defined rules that outlines a
computational process to solve a specific problem or accomplish a particular task.
• It is a precise, unambiguous, and systematic approach to problem-solving that provides a
clear sequence of instructions to achieve a desired outcome.
• In other words, an algorithm is a series of logical instructions that takes an input, performs a
sequence of operations or computations on that input, and produces an output or solution.
• It provides a structured methodology for solving problems by breaking them down into
smaller, more manageable subproblems or steps.
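To make the definition concrete, here is a small illustrative example (a sketch added for this chapter, not taken from the original slides) of an algorithm expressed in C: a step-by-step procedure that takes an array as input and produces its largest element as output.

```c
#include <stddef.h>

/* Returns the largest element of arr[0..n-1]; assumes n >= 1. */
int find_max(const int arr[], size_t n) {
    int max = arr[0];                  /* step 1: start with the first element   */
    for (size_t i = 1; i < n; i++)     /* step 2: examine each remaining element */
        if (arr[i] > max)
            max = arr[i];              /* step 3: keep the largest seen so far   */
    return max;                        /* step 4: output the result              */
}
```

For the input {3, 41, 52, 26, 38, 57, 9, 49}, the procedure terminates after one pass and returns 57 — finite, deterministic, and unambiguous, exactly as the characteristics on the next slide require.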



Fundamentals of Algorithms – Characteristics of algorithm

• Well-Defined: Algorithms have a clear, precise, and unambiguous description that can be followed
by anyone to obtain the desired result.
• Input and Output: Algorithms take one or more inputs, process them through a series of steps, and
produce an output or solution.
• Finiteness: Algorithms have a finite number of well-defined steps. They terminate after a finite
number of operations, providing the expected output.
• Deterministic: Algorithms are deterministic, meaning that given the same input, they always
produce the same output. The behavior of an algorithm is predictable and consistent.
• Effective: Algorithms are practical and provide a feasible solution to the problem at hand. They can
be implemented using computational resources, such as a computer or a human following the
instructions.
• Optimality: In some cases, algorithms aim to provide optimal solutions, such as finding the most
efficient or optimal solution among multiple possibilities.



Fundamentals of Algorithms – Characteristics of algorithm

• Algorithms play a crucial role in various fields, including computer science, mathematics,
engineering, and beyond.
• They are the building blocks of software development, data analysis, artificial intelligence,
and many other computational applications.
• By employing algorithms, complex problems can be broken down into simpler, manageable
steps, enabling efficient and effective problem-solving.



Fundamentals of Algorithms – Properties of algorithm
• An algorithm must have a unique name.
• An algorithm takes zero or more inputs.
• An algorithm produces one or more outputs.
• All operations can be carried out in a finite amount of time.
• An algorithm should be efficient and flexible.
• It should use as little memory space as possible.
• An algorithm must terminate after a finite number of steps.
• Each step in the algorithm must be easily understood by someone reading it.
• An algorithm should be concise and compact to facilitate verification of its correctness.



Fundamentals of Algorithms – Algorithmic Paradigm
• There are various algorithmic paradigms or design techniques that guide the development of
algorithms. Some common paradigms include:
• Brute Force: Exhaustively tries all possible solutions.
• Divide and Conquer: Breaks down a problem into smaller subproblems, solves them independently,
and combines the solutions.
• Greedy Algorithms: Makes locally optimal choices at each step to achieve a global optimum.
• Dynamic Programming: Solves a problem by breaking it into overlapping subproblems and solving
each subproblem only once.
• Backtracking: Systematically explores all possible solutions by trying different choices and
backtracks when a choice leads to a dead end.
• Randomized Algorithms: Uses randomness to improve efficiency or provide probabilistic
guarantees.
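As a brief illustration of the divide-and-conquer paradigm (a hypothetical sketch, not part of the original slides), the maximum of an array can be found by splitting the range in half, solving each half independently, and combining the two answers:

```c
/* Divide and conquer: the maximum of arr[lo..hi] is the larger of the
 * maxima of its two halves; a single-element range is the base case. */
int max_dc(const int arr[], int lo, int hi) {
    if (lo == hi)
        return arr[lo];                    /* base case: one element */
    int mid = lo + (hi - lo) / 2;          /* divide the range       */
    int left  = max_dc(arr, lo, mid);      /* solve the left half    */
    int right = max_dc(arr, mid + 1, hi);  /* solve the right half   */
    return left > right ? left : right;    /* combine the solutions  */
}
```

The same divide/solve/combine structure underlies classic algorithms such as merge sort and binary search.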



Fundamentals of Algorithms – Algorithm for Problem Solving

• The main steps for Problem Solving are:


1. Problem definition
2. Algorithm design / Algorithm specification
3. Algorithm analysis
4. Implementation
5. Testing
6. Maintenance



Fundamentals of Algorithms – Algorithm for Problem Solving

• Step 1: Problem Definition


• What is the task to be accomplished?
• Ex: Finding the Shortest Route to Work

• Step 2: Algorithm Design / Specifications:


• Describe the algorithm in natural language, pseudo-code, diagrams, etc.



Fundamentals of Algorithms – Algorithm for Problem Solving

• An algorithm provides a method of doing things, from our daily life to every other
complex problem in the world.
• Example:

Finding the Shortest Route to Work

• Gather Information
• Explore Route Options
• Evaluate Route Criteria
• Compare Routes
• Select Optimal Route
• Execute the Route
• Arrive at Destination



Fundamentals of Algorithms – Algorithm for Problem Solving

• Step3: Algorithm analysis


• Efficiency Evaluation: By analyzing algorithms, we can assess their efficiency in terms of time
complexity and space complexity.
• This evaluation helps us understand how an algorithm's performance scales with input size. It allows
us to compare different algorithms and select the most efficient one for a particular problem.
• Efficient algorithms can save computational resources, reduce execution time, and enhance the
overall performance of systems and applications.
• Resource Optimization: Understanding the resource requirements of an algorithm is essential for
optimizing resource utilization.
• Analysis helps identify areas where improvements can be made, such as reducing memory
consumption or minimizing the number of operations.
• This optimization can lead to more efficient algorithms, which in turn can lead to cost savings,
improved scalability, and better utilization of computational resources.



Fundamentals of Algorithms – Algorithm for Problem Solving

• Prediction of Performance: Algorithm analysis provides insights into how an algorithm will perform
under different scenarios.
• It allows us to predict and estimate the time and space requirements for various input sizes. This
information is valuable for capacity planning and system design.
• Algorithmic Design and Improvement: Analysis helps us gain a deeper understanding of
algorithms, enabling us to design better algorithms or improve existing ones.
• It allows us to identify inefficiencies, bottlenecks, or areas where optimizations can be applied.
• By analyzing the trade-offs between different algorithmic approaches, we can make informed design
decisions and enhance the overall quality and efficiency of our solutions.



Fundamentals of Algorithms – Algorithm for Problem Solving

• Algorithm Selection: Algorithm analysis aids in the selection of appropriate algorithms for specific
problem domains.
• Different algorithms may have different performance characteristics, and analysis helps us identify
the most suitable algorithm based on the problem's requirements and constraints.
• By understanding the trade-offs between different algorithms, we can make informed decisions that
result in effective and efficient solutions.
• Theoretical Foundation: Algorithm analysis is an essential component of the theoretical foundation
of computer science.
• Theoretical analysis helps us understand the fundamental limits of problem-solving and develop
algorithms that work efficiently within those limits.



Fundamentals of Algorithms – Algorithm for Problem Solving

• The analysis of algorithms allows us to evaluate and compare their efficiency, optimize resource
utilization, predict performance, design and improve algorithms, select appropriate algorithms, and
establish theoretical foundations.
• It is a crucial aspect of algorithm design, system optimization, and problem-solving in various fields,
ensuring that we can develop efficient, scalable, and effective solutions to real-world problems.



Fundamentals of Algorithms – Algorithm for Problem Solving

• Step 4: Implementation
• Decide on a programming language in which to implement the algorithm.

• Step 5: Testing
• Integrate feedback from users, fix bugs, ensure compatibility across different versions

• Step 6: Maintenance
• Release Updates



Fundamentals of Algorithms – Performance of Algorithm
• The performance of an algorithm refers to its actual execution time and resource usage when
implemented on a specific platform.
• It involves measuring and evaluating the practical efficiency of an algorithm in real-world scenarios.
• The performance of an algorithm takes into account factors such as processor speed, memory
capacity, input data characteristics, programming language, compiler optimizations, and
hardware constraints.
• It often involves experimentation to assess the algorithm's execution time, memory usage, and other
performance metrics.
• The performance of an algorithm provides practical insights into how it performs in real-world
scenarios and helps identify bottlenecks, optimization opportunities, and trade-offs between different
implementation choices.



Fundamentals of Algorithms – Analysis and Performance

• The analysis of an algorithm focuses on theoretical analysis and mathematical modeling to


understand its efficiency and scalability, while the performance of an algorithm involves empirical
measurements and evaluation of its actual execution time and resource usage in real-world
implementations.
• Both aspects are important for understanding and optimizing algorithms, with analysis providing
theoretical insights and performance evaluation providing practical observations.



Fundamentals of Algorithms – Performance of Algorithm
• Common time complexity classifications include:
• Constant Time (O(1)): The algorithm's runtime remains constant, regardless of the input size.
• Linear Time (O(n)): The algorithm's runtime increases linearly with the input size.
• Quadratic Time (O(n^2)): The algorithm's runtime grows quadratically with the input size.
• Logarithmic Time (O(log n)): The algorithm's runtime grows logarithmically with the input size.
• Exponential Time (O(2^n)): The algorithm's runtime grows exponentially with the input size.
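For example, logarithmic time typically arises when an algorithm halves its remaining work at every step, as binary search does on a sorted array (an illustrative sketch, not from the original slides):

```c
/* Binary search in a sorted array: every comparison halves the
 * remaining range, so at most about log2(n) iterations run — O(log n). */
int binary_search(const int arr[], int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   /* midpoint of current range */
        if (arr[mid] == key)
            return mid;                 /* found the key             */
        if (arr[mid] < key)
            lo = mid + 1;               /* discard the left half     */
        else
            hi = mid - 1;               /* discard the right half    */
    }
    return -1;                          /* key is not present        */
}
```

Doubling the array size adds only one extra iteration, which is what makes logarithmic-time algorithms efficient even for very large inputs.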



Fundamentals of Algorithms – Performance of Algorithm
• Space Complexity: Space complexity measures the amount of memory or space required by an
algorithm to solve a problem as a function of the input size.
• It estimates the maximum amount of memory used by the algorithm during its execution. Similar to
time complexity, space complexity is also expressed using Big O notation.

• Common space complexity classifications include:


• Constant Space (O(1)): The algorithm uses a constant amount of memory, regardless of the input
size.
• Linear Space (O(n)): The algorithm's memory usage grows linearly with the input size.
• Quadratic Space (O(n^2)): The algorithm's memory usage grows quadratically with the input size.
• Logarithmic Space (O(log n)): The algorithm's memory usage grows logarithmically with the input
size.
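The contrast between constant and linear space can be seen in two ways of reversing an array (an illustrative sketch, not from the original slides): one works in place with a single temporary, the other allocates a second array proportional to the input.

```c
#include <stdlib.h>

/* O(1) auxiliary space: swap elements in place using one temporary. */
void reverse_in_place(int arr[], int n) {
    for (int i = 0, j = n - 1; i < j; i++, j--) {
        int tmp = arr[i];
        arr[i] = arr[j];
        arr[j] = tmp;
    }
}

/* O(n) auxiliary space: build a reversed copy in freshly allocated
 * memory; the caller is responsible for free()ing the result.      */
int *reversed_copy(const int arr[], int n) {
    int *out = malloc((size_t)n * sizeof *out);
    if (out != NULL)
        for (int i = 0; i < n; i++)
            out[i] = arr[n - 1 - i];
    return out;
}
```

Both run in O(n) time; they differ only in how much extra memory they consume.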



Fundamentals of Algorithms – Running time of Algorithm
• The running time of an algorithm can depend on several factors. Here are some of the key factors
that can influence the running time of an algorithm:
• Input Size: The running time of an algorithm often increases with the size of the input.
• For example, an algorithm that processes a list of n elements will generally take longer as the
number of elements in the list increases.
• Input Characteristics: The nature and distribution of the input can impact the running time of an
algorithm. Certain inputs may cause the algorithm to take longer to execute than others.
• For example, an algorithm that performs a linear search will take longer to find an element if it is
located at the end of the list rather than at the beginning.
• Algorithm Design: The efficiency of the algorithm design itself plays a significant role in determining
the running time.
• Well-designed algorithms with optimal strategies and data structures can significantly reduce the
number of operations required to solve a problem, leading to faster execution.



Fundamentals of Algorithms – Running time of Algorithm
• Hardware and Software Environment: The underlying hardware and software environment on
which the algorithm runs can affect its running time.
• Factors such as the processing power of the CPU, memory capacity, caching mechanisms, and the
efficiency of the programming language or compiler can impact the overall performance.
• Implementation Details: The specific implementation of the algorithm, including the choice of data
structures, algorithms for subroutines, and low-level optimizations, can influence the running time.
• Different implementation choices can result in variations in the number of operations performed and
memory accesses, leading to different running times.
• Problem Complexity: The inherent complexity of the problem being solved by the algorithm can
affect its running time.
• Some problems inherently require more computational resources to solve, and algorithms
addressing these complex problems may have longer running times compared to simpler problems.



Fundamentals of Algorithms – Running time of Algorithm
• It is important to consider these factors when analyzing and comparing the running time of
algorithms.
• By understanding these factors, you can make informed decisions about algorithm selection,
optimization, and trade-offs based on the specific requirements of your problem and the constraints
of your computing environment.



Fundamentals of Algorithms – The Random Access Machine

• The Random Access Machine (RAM) model is a theoretical computational model used to analyze
the complexity and efficiency of algorithms.
• It provides a simplified abstraction of a computer architecture to evaluate the running time and space
complexity of algorithms.
• The term “random access” refers to the ability of the CPU to access an arbitrary memory cell
with one primitive operation.
• It is assumed that the CPU in the RAM model can perform any primitive operation in a constant
number of steps, independent of the size of the input.
• Thus an accurate bound on the number of primitive operations an algorithm performs corresponds
directly to the running time of that algorithm in the RAM model.
• The RAM model assumes that each memory access, arithmetic operation, and control flow
instruction can be executed in constant time, implying that these operations have uniform costs.



Fundamentals of Algorithms – The Random Access Machine

• The RAM model supports a basic set of instructions, including arithmetic operations (addition,
subtraction, multiplication, division), memory operations (load, store), control flow
instructions (conditional branching, loops), and function calls.
• Using the RAM model, algorithmic analysis focuses on counting the number of basic operations
performed by an algorithm, such as arithmetic operations and memory accesses.
• This analysis helps in determining the time complexity and space complexity of algorithms.
• The RAM model provides a simplified framework for analyzing algorithms.
• It serves as a fundamental tool for reasoning about algorithmic efficiency and comparing the relative
performance of different algorithms.



Fundamentals of Algorithms – Primitive Operations
• Primitive operations, also known as basic operations, are fundamental computational operations that
are considered to have a constant execution time.
• They are the elementary building blocks of algorithms and are used to measure the complexity of
algorithms.
• The specific set of primitive operations is independent of the programming language and
can be identified in pseudo-code. Some common examples include:
• Arithmetic Operations: Basic arithmetic operations such as addition, subtraction, multiplication, and
division.
• Comparison Operations: Operations that compare two values or variables, such as equality checks
(e.g., "==" or "==="), less than ("<"), greater than (">"), less than or equal to ("<="), and greater than
or equal to (">=") comparisons.
• Assignment Operations: Assigning a value to a variable or updating the value of a variable.
• Array Access: Accessing or updating the value of an element in an array or a similar data structure.
• Function Call: Invoking a function or a subroutine.
Fundamentals of Algorithms – Primitive Operations
• Variable Access: Reading or writing the value of a variable.
• Logical Operations: Basic logical operations such as AND, OR, and NOT.
• Bitwise Operations: Operations that manipulate individual bits of binary data, such as bitwise AND,
OR, XOR, and shifting operations.
• Input/ Output Operations: Reading input from a user or a file, and writing output to the console or a
file.



Fundamentals of Algorithms – Primitive Operations
• It's important to note that the choice of primitive operations depends on the level of abstraction and
the programming language being used.
• For example, in low-level programming languages like C or assembly, primitive operations might
include memory access, pointer manipulation, or bitwise operations.
• In high-level languages, primitive operations are typically more abstract and focused on fundamental
computational operations.
• By considering the number of primitive operations performed by an algorithm, we can estimate the
algorithm's time complexity and analyze its efficiency.
• Primitive operations serve as a foundational concept in algorithm analysis and allow us to compare
and evaluate the efficiency of different algorithms.



Fundamentals of Algorithms – Calculate Time Complexity using
Primitive Operations
• To calculate the time complexity of an algorithm using primitive operations in C, you need to analyze
the algorithm's code and determine the number of times each primitive operation is executed as a
function of the input size.
• Here's a step-by-step approach to calculate time complexity:
• Identify Primitive Operations: Identify the specific primitive operations used in your algorithm.
These can include arithmetic operations, comparisons, assignments, array accesses, function calls,
and so on.
• Analyze Loops: Look for loops in your code as they often contribute significantly to the overall time
complexity. Determine the number of iterations performed by each loop.
• Count Operations: Count the number of times each primitive operation is executed within the
algorithm or loop. This can be done by examining the code and identifying the instances where the
operation is performed.



Fundamentals of Algorithms – Calculate Time Complexity using
Primitive Operations
• Express Complexity in Terms of Input Size: Once you have counted the operations, express the
total number of operations as a function of the input size.
• For example, if your algorithm performs a linear search on an array of size n, the number of
comparisons would be n, and the time complexity would be O(n).
• Simplify the Expression: Simplify the expression obtained in the previous step using asymptotic
notation (such as Big O notation) to focus on the dominant term.
• This dominant term represents the growth rate of the algorithm as the input size increases.
• Determine the Final Time Complexity: Based on the simplified expression, determine the final time
complexity of your algorithm.
• This will provide an understanding of how the algorithm's runtime increases with the size of the input.
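The linear search mentioned above can be sketched as follows (illustrative code, not reproduced from the slides); counting its comparisons gives at most n, hence O(n):

```c
/* Linear search: scan arr[0..n-1] left to right for key.
 * At most n comparisons are executed, so the time complexity is O(n). */
int linear_search(const int arr[], int n, int key) {
    for (int i = 0; i < n; i++)
        if (arr[i] == key)
            return i;    /* found: i + 1 comparisons were made     */
    return -1;           /* not found: all n comparisons were made */
}
```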





Fundamentals of Algorithms – Calculate Time Complexity using
Primitive Operations
• Counting Primitive Operations
• To determine the running time t(n) of an algorithm as a function of the input size n, we need
to perform the following steps:
• Identify each primitive operation in the pseudocode
• Count how many times each primitive operation is executed
• Calculate the running time by summing the counts of primitive operations



Fundamentals of Algorithms – Calculate Time Complexity using
Primitive Operations
• Example:

• In this example, we have an array of size n as the input.
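The code for this slide has not survived in this copy. Judging from the operation counts tallied on the next slide (one assignment for sum, one loop over the array, one addition per element, one return), it was presumably a summation function along these lines (a reconstruction, so the exact names are assumptions):

```c
/* Sums the n elements of arr. Primitive-operation tally:
 *   1 assignment (sum = 0), 1 loop initialization (i = 0),
 *   n + 1 loop comparisons, n increments, n additions, 1 return
 *   => 3n + 4 operations in total, i.e. O(n).                  */
int sum_array(const int arr[], int n) {
    int sum = 0;                    /* 1 assignment                      */
    for (int i = 0; i < n; i++)     /* 1 init, n + 1 tests, n increments */
        sum = sum + arr[i];         /* n additions                       */
    return sum;                     /* 1 return                          */
}
```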



Fundamentals of Algorithms – Calculate Time Complexity using
Primitive Operations
• Assignment operation: 1 (for initializing sum)
• Loop initialization: 1 (for initializing i)
• Loop comparison: n + 1 (includes the final comparison when i reaches n and the loop
terminates)
• Loop increment: n (incrementing i in each iteration)
• Addition operation: n (performing the addition of arr[i] to sum in each iteration)
• Return operation: 1 (returning sum)
• Total count = 1 + 1 + (n + 1) + n + n + 1 = 3n + 4
• Based on this count, we can determine that the running time complexity of this algorithm is O(n),
indicating a linear relationship between the input size n and the number of primitive operations.



Fundamentals of Algorithms – Calculate Time Complexity using
Primitive Operations
• Example:
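The code for this example is likewise missing from this copy. Based on the counts that follow (an initialization of count, two nested loops bounded by n, and a single print), it was presumably a doubly nested loop of this shape (a reconstruction; the names are assumptions):

```c
#include <stdio.h>

/* Doubly nested loop: the innermost statement executes n * n times,
 * so the running time grows quadratically with n — O(n^2).          */
int count_pairs(int n) {
    int count = 0;                    /* 1 assignment             */
    for (int i = 0; i < n; i++)       /* outer loop: n iterations */
        for (int j = 0; j < n; j++)   /* inner loop: n per outer  */
            count++;                  /* executes n * n times     */
    printf("%d\n", count);            /* 1 print                  */
    return count;
}
```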



Fundamentals of Algorithms – Calculate Time Complexity using
Primitive Operations
• Example:
• Assignment operation: 1 (for initializing count)
• Loop initialization (outer loop): 1
• Loop comparison (outer loop): n + 1
• Loop increment (outer loop): n
• Loop initialization (inner loop): n (the inner loop restarts once per outer iteration)
• Loop comparison (inner loop): n * (n + 1)
• Loop increment (inner loop): n * n
• Print operation: 1
• Total count = 1 + 1 + (n + 1) + n + n + n * (n + 1) + n * n + 1
• Simplifying the expression, we get:
• Total count = 2n^2 + 4n + 4
• Based on this count, we can determine that the running time complexity of this algorithm is O(n^2).


Fundamentals of Algorithms – Calculate Time Complexity using
Primitive Operations
• Based on this count, we can determine that the running time complexity of this nested loop
is O(n^2), indicating a quadratic relationship between the input size n and the number of
primitive operations.
• By counting the primitive operations, we can analyze the efficiency of the algorithm and
understand how it scales with different input sizes.



Fundamentals of Algorithms – Classifying Running Time of functions

• When analyzing the running time of functions or algorithms, we often classify them into
different categories based on their growth rate or time complexity. The most commonly
used classifications are as follows:
• Constant Time (O(1)): Functions with constant time complexity have running times that do
not depend on the input size. Regardless of the input size, the execution time remains
constant.
• Linear Time (O(n)): Functions with linear time complexity have running times that scale
linearly with the input size. As the input size increases, the running time increases
proportionally.
• Logarithmic Time (O(log n)): Functions with logarithmic time complexity have running
times that grow logarithmically with the input size. These functions often divide the input
into smaller parts in each step, resulting in efficient performance even for large inputs.



Fundamentals of Algorithms – Classifying Running Time of functions

• Quadratic Time (O(n^2)): Functions with quadratic time complexity have running times
that grow quadratically with the input size.
• These functions often involve nested loops or repetitive operations on the input.
• Polynomial Time (O(n^k)): Functions with polynomial time complexity have running times
that grow with a power of the input size.
• The exponent k indicates the degree of the polynomial. Polynomial time algorithms are
generally considered efficient, but the performance can degrade for large input sizes,
especially for higher degrees.
• Exponential Time (O(2^n)): Functions with exponential time complexity have running
times that grow exponentially with the input size.
• These algorithms are generally considered inefficient and can become infeasible for even
moderate input sizes.



Fundamentals of Algorithms – Common orders of magnitude



Fundamentals of Algorithms – Efficiency of Algorithm

• To calculate the efficiency of algorithms, you can analyze their time complexity and space
complexity.
• These complexities describe how the running time and memory usage of an algorithm scale with the
input size. Here's a general approach to calculate efficiency:
• Analyze Time Complexity:
• Determine the number of operations performed by the algorithm as a function of the input size.
• Count the most significant operations, such as loops, recursive calls, and significant computations.
• Express the count of operations in terms of the input size using Big O notation (O(...)). Simplify the
expression by removing lower-order terms and constant factors.
• Identify the time complexity class based on the simplified expression. For example, O(1) for constant
time, O(n) for linear time, O(n^2) for quadratic time, etc.
• Consider the best-case, worst-case, or average-case scenarios, depending on the nature of the
algorithm.



Fundamentals of Algorithms – Efficiency of Algorithm

• Analyze Space Complexity:


• Determine the amount of memory used by the algorithm as a function of the input size. Consider
variables, data structures, recursive calls, and auxiliary space.
• Express the space usage in terms of the input size using Big O notation (O(...)). Simplify the
expression by removing lower-order terms and constant factors.
• Identify the space complexity class based on the simplified expression. For example, O(1) for
constant space, O(n) for linear space, O(n^2) for quadratic space, etc.
• Consider Trade-Offs:
• Evaluate the trade-offs between time complexity and space complexity. Some algorithms may have
a better time complexity but higher space complexity, or vice versa.
• Consider the requirements of your specific problem and the available resources to determine the
most suitable algorithm.



Fundamentals of Algorithms – Analysis of Algorithm

• Analysis of algorithms is the process of studying and evaluating the efficiency and performance
characteristics of algorithms.
• It involves analyzing the resources, such as time and space, utilized by an algorithm to solve a
specific problem. The primary goals of algorithm analysis are:
• Efficiency Evaluation: The analysis helps determine how efficient an algorithm is in terms of its
running time and space usage. It provides insights into the algorithm's performance and scalability
as the input size grows.
• Algorithm Comparison: By comparing the performance of different algorithms solving the same
problem, analysis helps identify the most efficient algorithm for a given task. It allows us to select the
best algorithm based on the problem's requirements, constraints, and input characteristics.
• Algorithm Design Improvement: Through analysis, inefficiencies or bottlenecks in an algorithm
can be identified. This helps in optimizing or improving the design of the algorithm to achieve better
performance.



Fundamentals of Algorithms – Analysis of Algorithm

• The analysis of algorithms involves several techniques and approaches, including:


• Asymptotic Analysis: It focuses on characterizing the growth rate of an algorithm's running time
and space complexity as the input size approaches infinity.
• Asymptotic notation, such as Big O, Omega, and Theta, is used to describe the upper and lower
bounds of an algorithm's efficiency.
• Worst-case, Average-case, and Best-case Analysis: Algorithms can have different performance
characteristics based on the input they receive.
• Analysis considers the worst-case scenario (input that results in maximum time or space usage),
average-case scenario (expected performance on random inputs), and best-case scenario (input
that results in minimum time or space usage).
• Experimental Analysis: In addition to theoretical analysis, experimental analysis involves running
algorithms on actual inputs and measuring their performance. It helps validate theoretical findings
and provides practical insights into the algorithm's behavior in real-world scenarios.



Fundamentals of Algorithms – Analysis of Algorithm

• Space Complexity Analysis: Along with time complexity analysis, it evaluates the amount of
memory required by an algorithm. It considers the auxiliary space used by data structures, recursive
calls, and other variables.
• Trade-off Analysis: Sometimes, algorithms optimize one aspect (such as time complexity) at the
expense of another (such as space complexity).
• Trade-off analysis helps identify the optimal balance between different performance metrics based
on the problem's requirements and available resources.



Fundamentals of Algorithms – Analysis of Algorithm

• Best Case Analysis:


• Best case analysis refers to the analysis of an algorithm's performance under the most favorable or
optimal conditions.
• It involves determining the lower bound on the time complexity or other performance metrics when
the input to the algorithm is in its best possible state.
• In best case analysis, we typically consider the input that results in the fewest number of operations
or the shortest execution time for the algorithm. It represents an ideal scenario where the algorithm
performs exceptionally well.
• It's important to note that best case analysis provides an optimistic view of an algorithm's
performance and does not necessarily reflect its typical or average behavior on random inputs.
• It is useful for understanding the algorithm's lower bound in the best possible situation but may not
represent the average or worst-case performance in practical scenarios.



Fundamentals of Algorithms – Analysis of Algorithm

• Example-1: Let's consider a sorting algorithm such as Quick Sort. The best case scenario occurs when the chosen pivot splits the array into two roughly equal halves at each recursive call.
• With such balanced partitions, the algorithm's performance is optimal, and the time complexity reduces to its best case, which is typically denoted as O(n log n).
• Example-2: Consider a linear search algorithm that searches for a specific element in an array.
• The best case scenario occurs when the target element is found at the very beginning of the array,
i.e., at index 0. In this case, the algorithm would perform a single comparison and immediately find
the element.
• Therefore, the best case time complexity of linear search is O(1), indicating that it requires a
constant number of operations regardless of the size of the input array.
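The contrast between the best and worst case of linear search can be sketched in Python; the instrumented counter is an addition for illustration, not part of the slides:

```python
def linear_search(arr, target):
    """Return (index, comparisons); index is -1 if target is absent."""
    comparisons = 0
    for i, value in enumerate(arr):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

# Best case: target at index 0 -> exactly 1 comparison, regardless of n.
print(linear_search([7, 3, 9, 1], 7))   # (0, 1)
# Worst case: target absent -> all n elements are compared.
print(linear_search([7, 3, 9, 1], 42))  # (-1, 4)
```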



Fundamentals of Algorithms – Analysis of Algorithm

• Worst Case Analysis:


• Worst case analysis refers to the analysis of an algorithm's performance under the most unfavorable
or pessimistic conditions. It involves determining the upper bound on the time complexity or other
performance metrics when the input to the algorithm is in its worst possible state.
• In worst case analysis, we typically consider the input that results in the maximum number of
operations or the longest execution time for the algorithm. It represents a scenario where the
algorithm performs the least efficiently.
• Worst case analysis provides an upper bound on the time complexity or other performance metrics,
guaranteeing that the algorithm will not perform worse than this bound for any input size.
• It helps us understand the maximum resources required by an algorithm, which can be crucial in
scenarios where the input data is expected to be large or the algorithm needs to operate efficiently
under all possible inputs.



Fundamentals of Algorithms – Analysis of Algorithm

• Example-1: Let's consider a sorting algorithm such as Bubble Sort. In the worst case scenario, the input array is in reverse sorted order. In each pass of the algorithm, the largest remaining element is shifted to its correct position by swapping adjacent elements.
• In the worst case, the largest element needs to traverse the entire array in each pass, resulting in
multiple swaps for each element. This leads to a time complexity of O(n^2) for Bubble Sort in the
worst case.
• Example-2: Consider a binary search algorithm that searches for a specific element in a sorted array. The worst case scenario occurs when the target element is not present in the array, so the algorithm must keep searching until the search interval becomes empty.
• In this case, the binary search algorithm repeatedly divides the search space in half and checks the middle element. As the search space is halved at each step, the algorithm requires logarithmic time to exhaust it. Therefore, the worst case time complexity of binary search is O(log n), where n is the size of the input array.
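A minimal Python sketch of binary search (with a step counter added for illustration) shows the logarithmic worst case: for an absent target in an array of 1024 elements, the search space halves ten times before it is empty:

```python
def binary_search(arr, target):
    """Iterative binary search on a sorted list; returns (index, steps)."""
    lo, hi = 0, len(arr) - 1
    steps = 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2          # examine the middle element
        if arr[mid] == target:
            return mid, steps
        elif arr[mid] < target:
            lo = mid + 1              # discard the left half
        else:
            hi = mid - 1              # discard the right half
    return -1, steps

print(binary_search([1, 3, 5, 7], 5))        # (2, 2)
# Worst case: target absent, n = 1024 -> log2(1024) = 10 halvings.
print(binary_search(list(range(1024)), -1))  # (-1, 10)
```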



Fundamentals of Algorithms – Analysis of Algorithm

• Average Case Analysis:


• Average case analysis is a method of analyzing the performance of an algorithm by considering the
average behavior over a range of possible inputs. It involves taking into account the statistical distribution
of inputs and calculating the expected time complexity or other performance metrics based on that
distribution.
• Unlike worst case analysis, which considers the most unfavorable input scenario, and best case analysis,
which considers the most favorable input scenario, average case analysis provides a more realistic
assessment of an algorithm's performance on typical inputs.



Fundamentals of Algorithms – Analysis of Algorithm

• Average Case Analysis:


• Consider a simple linear search algorithm that searches for a specific element in an unsorted array. Let's
assume the probability distribution of the input instances follows a uniform distribution, meaning that each
element in the array is equally likely to be the target element.
• In this case, the average case analysis involves calculating the expected number of comparisons required
to find the target element, taking into account the uniform distribution.
• Let's say we have an array of size n. Since each element has an equal probability of being the target
element, the expected number of comparisons can be calculated as the average position of the target
element in the array. In a uniform distribution, the average position is (n+1)/2.
• Therefore, the average case time complexity of the linear search algorithm would be O(n/2), which
simplifies to O(n).
• This analysis suggests that, on average, the linear search algorithm would need to examine half of the
elements in the array before finding the target element when the input follows a uniform distribution.
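The expected comparison count of (n+1)/2 can be checked empirically; this simulation (the array contents, trial count, and helper name are choices made for this sketch) averages the number of elements examined when the target is drawn uniformly:

```python
import random

def comparisons_to_find(arr, target):
    """Count how many elements a linear search examines before stopping."""
    for i, value in enumerate(arr):
        if value == target:
            return i + 1
    return len(arr)

n = 1000
arr = list(range(n))
# Draw the target uniformly at random; the expected position is (n + 1) / 2.
trials = [comparisons_to_find(arr, random.randrange(n)) for _ in range(20000)]
avg = sum(trials) / len(trials)
print(avg)  # close to (n + 1) / 2 = 500.5
```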



Fundamentals of Algorithms – Asymptotic Notations

• Asymptotic notations are mathematical notations used to describe the upper and lower bounds of
the running time or space complexity of algorithms. They provide a way to analyze and compare the
efficiency of algorithms without focusing on specific constant factors or lower-order terms. The most
commonly used asymptotic notations are:
• Big O notation (O): It represents the upper bound or worst-case scenario of the running time or
space complexity of an algorithm.
• The notation O(f(n)) denotes that the algorithm's running time or space usage grows no faster than a
constant multiple of the function f(n) as the input size n approaches infinity.
• Omega notation (Ω): It represents the lower bound or best-case scenario of the running time or
space complexity of an algorithm.
• The notation Ω(f(n)) denotes that the algorithm's running time or space usage is at least a constant
multiple of the function f(n) as the input size n approaches infinity.



Fundamentals of Algorithms – Asymptotic Notations

• Theta notation (Θ): It represents the tight bound or average-case scenario of the running time or
space complexity of an algorithm.
• The notation Θ(f(n)) denotes that the algorithm's running time or space usage is bounded both from
above and from below by a constant multiple of the function f(n) as the input size n approaches
infinity.
• These notations provide a convenient way to express the growth rates of algorithms and
classify them into different complexity classes.
• For example, if an algorithm has a time complexity of O(n^2), it means that the running time grows
quadratically with the input size. If an algorithm has a space complexity of Ω(log n), it means that the
space usage grows logarithmically with the input size.
• Asymptotic notations allow us to analyze the scalability and efficiency of algorithms, compare
different algorithms, and make informed decisions about which algorithm to use based on their time
or space complexity characteristics.



Fundamentals of Algorithms – Big(𝐎) notation (Upper Bound)

• The notation Ο(g(n)) is the formal way to express the upper bound of an algorithm's running time.
• It measures the worst case time complexity, i.e., the longest amount of time an algorithm can possibly take to complete.
• For a given function g(n), we denote by Ο(g(n)) the set of functions,
  Ο(g(n)) = { f(n) : there exist positive constants c and n₀ such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀ }



Fundamentals of Algorithms – Big(𝐎) Notation

• 𝑔(𝑛) is an asymptotically upper bound for 𝑓(𝑛).
• 𝑓(𝑛) = 𝑂(𝑔(𝑛)) implies: 𝑓(𝑛) “≤” 𝑐·𝑔(𝑛) for all 𝑛 ≥ 𝑛₀.
• An upper bound 𝑔(𝑛) of an algorithm defines the maximum time required; we can always solve the problem in at most 𝑔(𝑛) time.
• Time taken by a known algorithm to solve a problem with worst case input gives the upper bound.

(Figure: 𝑓(𝑛) lies below 𝑐·𝑔(𝑛) for every 𝑛 ≥ 𝑛₀, illustrating 𝑓(𝑛) = 𝑂(𝑔(𝑛)).)
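The definition can be checked numerically for a concrete pair of functions; the witnesses f(n) = 3n² + 10n, g(n) = n², c = 4, and n₀ = 10 are choices made for this sketch, not taken from the slides:

```python
# f(n) = 3n^2 + 10n is O(n^2): the constants c = 4 and n0 = 10 witness
# 0 <= f(n) <= c * g(n) for every n >= n0 (since 10n <= n^2 once n >= 10).
f = lambda n: 3 * n**2 + 10 * n
g = lambda n: n**2
c, n0 = 4, 10

print(all(0 <= f(n) <= c * g(n) for n in range(n0, 10000)))  # True
```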



Fundamentals of Algorithms – 𝛀 Notation (Omega notation) (Lower Bound)
• Big Omega notation (Ω) is used to define the lower bound of any algorithm, or we can say the best case of any algorithm.
• It always indicates the minimum time required by an algorithm for all input values, and therefore its best case.
• When the time complexity of an algorithm is represented in the form of big-Ω, it means that the algorithm will take at least this much time to complete its execution; it can certainly take more time than this.
• For a given function 𝑔(𝑛), we denote by Ω(𝑔(𝑛)) the set of functions,
  Ω(g(n)) = { f(n) : there exist positive constants c and n₀ such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀ }



Fundamentals of Algorithms – 𝛀 Notation

• 𝑔(𝑛) is an asymptotically lower bound for 𝑓(𝑛).
• 𝑓(𝑛) = Ω(𝑔(𝑛)) implies: 𝑓(𝑛) “≥” 𝑐·𝑔(𝑛) for all 𝑛 ≥ 𝑛₀.
• A lower bound 𝑔(𝑛) of an algorithm defines the minimum time required; it is not possible to have any other algorithm (for the same problem) whose time complexity is less than 𝑔(𝑛) for random input.

(Figure: 𝑓(𝑛) lies above 𝑐·𝑔(𝑛) for every 𝑛 ≥ 𝑛₀, illustrating 𝑓(𝑛) = Ω(𝑔(𝑛)).)



Fundamentals of Algorithms – 𝛉-Notation (Theta notation) (Same order)
• The notation θ(g(n)) is the formal way to enclose both the lower bound and the upper bound of an algorithm's running time.
• Since it represents both the upper and the lower bound of the running time of an algorithm, it is often used for analyzing the average case complexity of an algorithm.
• The time complexity represented by the Big-θ notation is the range within which the actual running time of the algorithm will lie.
• So, it defines the exact asymptotic behavior of an algorithm.
• For a given function 𝑔(𝑛), we denote by θ(𝑔(𝑛)) the set of functions,
  θ(g(n)) = { f(n) : there exist positive constants c₁, c₂ and n₀ such that 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀ }



Fundamentals of Algorithms – 𝛉-Notation

• 𝜃(𝑔(𝑛)) is a set; we can write 𝑓(𝑛) ∈ 𝜃(𝑔(𝑛)) to indicate that 𝑓(𝑛) is a member of 𝜃(𝑔(𝑛)).
• 𝑔(𝑛) is an asymptotically tight bound for 𝑓(𝑛).
• 𝑓(𝑛) = 𝜃(𝑔(𝑛)) implies: 𝑐₁·𝑔(𝑛) “≤” 𝑓(𝑛) “≤” 𝑐₂·𝑔(𝑛) for all 𝑛 ≥ 𝑛₀.

(Figure: 𝑓(𝑛) lies between 𝑐₁·𝑔(𝑛) and 𝑐₂·𝑔(𝑛) for every 𝑛 ≥ 𝑛₀, illustrating 𝑓(𝑛) = 𝜃(𝑔(𝑛)).)


Fundamentals of Algorithms – Recurrence Relation

• A recurrence relation is a mathematical equation or formula that defines a sequence or function in


terms of its previous values.
• It is commonly used to describe the time complexity of recursive algorithms or to analyze the
behavior of iterative algorithms with repetitive structures.
• In the context of algorithm analysis, a recurrence relation expresses the running time or resource
usage of an algorithm as a function of the size of the input.
• It allows us to define the time complexity of an algorithm recursively, breaking down the problem into smaller subproblems and expressing the time required to solve the larger problem in terms of the time required to solve the smaller subproblems.
• Recurrence relations are typically defined using a recursive formula that expresses the value of a
function or sequence at a given index in terms of its previous values or indices.
• The base case(s) of the recurrence relation provide the initial values for the sequence or function.



Fundamentals of Algorithms – Recurrence Relation

• To analyze the time complexity of an algorithm using a recurrence relation, we typically solve the
recurrence relation to obtain a closed-form solution or an asymptotic upper bound.
• Techniques like substitution method, recursion tree, or master theorem are often used to solve
recurrence relations and determine the time complexity of algorithms.
• Recurrence relations provide a powerful tool for understanding and analyzing algorithms, allowing us
to reason about their time complexity and make predictions about their performance for different
input sizes.
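As a concrete sketch (not a substitute for a formal proof by the substitution method or master theorem), the merge-sort style recurrence T(n) = 2T(n/2) + n can be evaluated directly and compared against its closed form; the base case T(1) = 1 is an assumption chosen for this example:

```python
from functools import lru_cache

# Merge-sort style recurrence: T(n) = 2*T(n//2) + n, with the assumed
# base case T(1) = 1. lru_cache memoizes the recursive calls.
@lru_cache(maxsize=None)
def T(n):
    if n == 1:
        return 1
    return 2 * T(n // 2) + n

# For n = 2^k the closed form is T(n) = n*lg(n) + n, i.e. Theta(n log n).
for k in range(1, 11):
    n = 2 ** k
    assert T(n) == n * k + n

print(T(1024))  # 11264 = 1024*10 + 1024
```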



Fundamentals of Algorithms – Recurrence Relation

• Arithmetic series:  Σ (k=1 to n) k = 1 + 2 + ... + n = n(n+1)/2

• Geometric series:  Σ (k=0 to n) x^k = 1 + x + x^2 + ... + x^n = (x^(n+1) − 1)/(x − 1),  for x ≠ 1

• Special case (|x| < 1):  Σ (k=0 to ∞) x^k = 1/(1 − x)

• Harmonic series:  Σ (k=1 to n) 1/k = 1 + 1/2 + ... + 1/n ≈ ln n

• Other important formulas:
  Σ (k=1 to n) lg k ≈ n lg n
  Σ (k=1 to n) k^p = 1^p + 2^p + ... + n^p ≈ n^(p+1)/(p+1)



Fundamentals of Algorithms – Sorting Algorithms

• Sorting is any process of arranging items systematically or arranging items in a sequence ordered by
some criterion.
• The purpose of sorting is to impose order on a set of data, making it easier to search, retrieve,
analyze, and manipulate.
• By organizing the data in a specific order, sorting enables efficient searching and facilitates tasks
such as finding the minimum or maximum element, determining if a value exists in the data, or
performing various operations that rely on the order of the elements.
• Sorting can be performed on various types of data, including numbers, strings, records, or custom
objects.
• The choice of sorting algorithm depends on various factors, including the size of the data, the
distribution of the data, the stability requirement, and the available resources.



Fundamentals of Algorithms – Bubble Sort

• Bubble sort is a simple comparison-based sorting algorithm.


• It repeatedly steps through the list to be sorted, compares adjacent elements, and swaps them if
they are in the wrong order.
• The process is repeated until the entire list is sorted.
• It is one of the simplest sorting algorithms to understand and implement, but it is not very efficient for
large or complex data sets.



Fundamentals of Algorithms – Bubble Sort

• How the bubble sort algorithm works:


• Start with an unsorted list of elements.
• Compare the first element with the second element.
• If the first element is greater than the second element, swap them.
• Move to the next pair of elements (second and third) and continue comparing and swapping them if
necessary.
• Repeat this process until you reach the end of the list.
• At this point, the largest element will be at the end of the list.
• Repeat the above steps for all elements in the list, except for the last one.
• After each iteration, the largest remaining element will "bubble" to the correct position.
• Repeat the process until the entire list is sorted.
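The steps above can be sketched in Python; the `swapped` flag is an optimization assumed here (it lets the algorithm stop after a pass with no swaps, which is what gives the O(n) best case on an already sorted list):

```python
def bubble_sort(arr):
    """Sort arr in place; stop early once a pass makes no swaps."""
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        # After pass i, the last i elements are already in place.
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:  # no swaps -> the list is already sorted
            break
    return arr

print(bubble_sort([45, 34, 56, 23, 12]))  # [12, 23, 34, 45, 56]
```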



Fundamentals of Algorithms – Bubble Sort

Sort the following array in ascending order:

45  34  56  23  12

Pass 1:

  45 34 56 23 12 → compare 45 and 34, swap → 34 45 56 23 12
  34 45 56 23 12 → compare 45 and 56, no swap
  34 45 56 23 12 → compare 56 and 23, swap → 34 45 23 56 12
  34 45 23 56 12 → compare 56 and 12, swap → 34 45 23 12 56

The largest element, 56, has bubbled to the end of the array.



Fundamentals of Algorithms – Bubble Sort

Pass 2:

  34 45 23 12 56 → compare 34 and 45, no swap
  34 45 23 12 56 → compare 45 and 23, swap → 34 23 45 12 56
  34 23 45 12 56 → compare 45 and 12, swap → 34 23 12 45 56

Pass 3:

  34 23 12 45 56 → compare 34 and 23, swap → 23 34 12 45 56
  23 34 12 45 56 → compare 34 and 12, swap → 23 12 34 45 56

Pass 4:

  23 12 34 45 56 → compare 23 and 12, swap → 12 23 34 45 56

The array is now sorted.


Fundamentals of Algorithms – Analysis of Bubble Sort

• It is a simple sorting algorithm that works by comparing each pair of adjacent items and swapping
them if they are in the wrong order.
• The pass through the list is repeated until no swaps are needed, which indicates that the list is
sorted.
• As it only uses comparisons to operate on elements, it is a comparison sort.
• Although the algorithm is simple, it is too slow for practical use.
• The time complexity of the bubble sort algorithm is O(n^2), where n is the number of elements in the
list.
• It is not very efficient for large data sets and is mainly used for educational purposes or when dealing
with small or nearly sorted lists.



Fundamentals of Algorithms – Analysis of Bubble Sort

• Best Case:
• The best-case scenario occurs when the input array is already sorted in the correct order.
• In each pass, Bubble Sort compares adjacent elements and finds that they are already in the correct
order, so no swaps are needed.
• Bubble Sort can detect that the array is sorted after the first pass, so it takes only one pass to
determine that no more swaps are required.
• The best-case time complexity of Bubble Sort is O(n).



Fundamentals of Algorithms – Analysis of Bubble Sort

• Average Case:
• The average-case time complexity of Bubble Sort is also O(n^2).
• This is because, on average, Bubble Sort requires approximately the same number of comparisons
and swaps as the worst-case scenario.
• Even if the input array is not in completely reverse order, Bubble Sort still needs to make multiple
passes through the array to sort it completely.
• The average-case time complexity of O(n^2) holds for random or arbitrary input distributions.



Fundamentals of Algorithms – Analysis of Bubble Sort

• Worst Case:
• The worst-case scenario occurs when the input array is in reverse order, meaning the largest
element is at the beginning.
• In each pass, the largest element "bubbles" its way to the end of the array, requiring n - 1
comparisons and swaps.
• Since there are n elements in the array, Bubble Sort needs to make (n - 1) passes to sort the array
completely.
• The total number of comparisons and swaps in the worst case is approximately (n - 1) + (n - 2) + ... +
2 + 1, which is equal to (n * (n - 1)) / 2.
• Therefore, the worst-case time complexity of Bubble Sort is O(n^2).
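The (n · (n − 1)) / 2 count can be verified by instrumenting an unoptimized bubble sort (the counter and the choice n = 50 are additions for this sketch); on reverse-sorted input every comparison also triggers a swap:

```python
def bubble_sort_count(arr):
    """Bubble sort without early exit; return the number of comparisons."""
    comparisons = 0
    n = len(arr)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            comparisons += 1
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    return comparisons

n = 50
reverse_sorted = list(range(n, 0, -1))
# (n - 1) + (n - 2) + ... + 1 = n(n - 1)/2 comparisons.
print(bubble_sort_count(reverse_sorted), n * (n - 1) // 2)  # 1225 1225
```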



Fundamentals of Algorithms – Selection Sort

• Selection sort is a simple comparison-based sorting algorithm.


• It works by dividing the input array into two parts: the sorted part and the unsorted part.
• The algorithm repeatedly selects the smallest (or largest, depending on the sorting order) element
from the unsorted part and moves it to the beginning of the sorted part.
• This process continues until the entire array is sorted.
• How the selection sort algorithm works?
• Start with an unsorted array of elements.
• Find the smallest (or largest) element in the unsorted part of the array.
• Swap the smallest element with the first element of the unsorted part, moving it to the sorted part.
• Increment the boundary of the sorted part and decrement the boundary of the unsorted part.
• Repeat steps 2-4 until the entire array is sorted.



Fundamentals of Algorithms – Selection Sort

# Input: Array A of n elements
# Output: Sorted array A

Algorithm: Selection_Sort(A)
for i ← 1 to n-1 do
    minj ← i;
    minx ← A[i];
    for j ← i + 1 to n do
        if A[j] < minx then
            minj ← j;
            minx ← A[j];
    A[minj] ← A[i];
    A[i] ← minx;
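The pseudocode above translates directly into Python (0-indexed here, and sorting in ascending order); this is a sketch of the same procedure, not an official reference implementation:

```python
def selection_sort(a):
    n = len(a)
    for i in range(n - 1):
        # Find the index of the smallest element in the unsorted part a[i:].
        min_j = i
        for j in range(i + 1, n):
            if a[j] < a[min_j]:
                min_j = j
        # One swap per pass moves the minimum to the sorted boundary.
        a[i], a[min_j] = a[min_j], a[i]
    return a

print(selection_sort([5, 1, 12, -5, 16, 2, 12, 14]))
# [-5, 1, 2, 5, 12, 12, 14, 16]
```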



Fundamentals of Algorithms – Selection Sort
Sort the following elements in ascending order:

5  1  12  -5  16  2  12  14

Step 1: The entire array (indices 1 to 8) is unsorted.

  5  1  12  -5  16  2  12  14

Step 2: Minj denotes the current index and Minx the value stored at that index, so Minj = 1, Minx = 5. Assume Minx is currently the smallest value, then find the smallest value in the remaining unsorted part (indices 2 to 8): it is -5 at index 4. Swap A[1] and A[4].

  -5  1  12  5  16  2  12  14
Fundamentals of Algorithms – Selection Sort

Step 3: Minj = 2, Minx = 1. The smallest value in the remaining unsorted part is 1 at index 2 itself, so no swap occurs, as the minimum is already in the right place.

  -5  1  12  5  16  2  12  14

Step 4: Minj = 3, Minx = 12. The smallest value in the remaining unsorted part is 2 at index 6. Swap A[3] and A[6].

  -5  1  2  5  16  12  12  14
Fundamentals of Algorithms – Selection Sort

Step 5: Minj = 4, Minx = 5. The smallest value in the remaining unsorted part is 5 at index 4 itself, so no swap occurs.

  -5  1  2  5  16  12  12  14

Step 6: Minj = 5, Minx = 16. The smallest value in the remaining unsorted part is 12 at index 6. Swap A[5] and A[6].

  -5  1  2  5  12  16  12  14
Fundamentals of Algorithms – Selection Sort

Step 7: Minj = 6, Minx = 16. The smallest value in the remaining unsorted part is 12 at index 7. Swap A[6] and A[7].

  -5  1  2  5  12  12  16  14

Step 8: Minj = 7, Minx = 16. The smallest value in the remaining part is 14 at index 8. Swap A[7] and A[8].

  -5  1  2  5  12  12  14  16

The entire array is sorted now.
Fundamentals of Algorithms – Selection Sort

• Selection sort divides the array or list into two parts,


The sorted part at the left end
and the unsorted part at the right end.
• Initially, the sorted part is empty and the unsorted part is the entire list.
• The smallest element is selected from the unsorted array and swapped with the leftmost element,
and that element becomes a part of the sorted array.
• Then it finds the second smallest element and exchanges it with the element in the second leftmost
position.
• This process continues until the entire array is sorted.



Fundamentals of Algorithms – Analysis of Selection Sort

• It's worth noting that Selection Sort is not considered an efficient sorting algorithm for large datasets,
as its time complexity is quadratic.
• However, it can be useful for small arrays or cases where the number of swaps is a concern, as it
performs a limited number of swaps compared to other quadratic sorting algorithms like Bubble Sort.



Fundamentals of Algorithms – Insertion Sort

• Insertion sort is a simple comparison-based sorting algorithm that builds the final sorted array one
element at a time.
• It works by dividing the input array into two parts: the sorted part and the unsorted part.
• The algorithm iterates over the unsorted part, taking each element and inserting it into its correct
position within the sorted part.



Fundamentals of Algorithms – Insertion Sort

• How the insertion sort algorithm works:

• Start with an unsorted array of elements.


• Assume the first element is already in the sorted part (considered as the sorted subarray of size 1).
• Iterate over the remaining unsorted elements from the second element (index 1) to the last element
(index n-1), where n is the size of the array.
• In each iteration, compare the current element with the elements in the sorted part from right to left.
• Shift the elements in the sorted part that are greater than the current element one position to the
right.
• Insert the current element into its correct position within the sorted part.
• Repeat steps 3-6 until the entire array is sorted.
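The steps above can be sketched in Python (0-indexed; `x` is the element being inserted, mirroring the shift-and-insert description):

```python
def insertion_sort(a):
    for i in range(1, len(a)):
        x = a[i]                  # element to insert into the sorted part
        j = i - 1
        while j >= 0 and a[j] > x:
            a[j + 1] = a[j]       # shift larger elements one place right
            j -= 1
        a[j + 1] = x              # insert x into its correct position
    return a

print(insertion_sort([5, 1, 12, -5, 16, 2, 12, 14]))
# [-5, 1, 2, 5, 12, 12, 14, 16]
```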





Fundamentals of Algorithms – Insertion Sort

Sort the following elements in ascending order:

5  1  12  -5  16  2  12  14

Step 1: The entire array (indices 1 to 8) is unsorted; the first element alone forms the sorted part.

  5  1  12  -5  16  2  12  14

Step 2: i = 2, x = 1. Starting at j = i − 1, while j > 0 and A[j] > x, shift A[j] one place right, then insert x. Here 5 shifts right and 1 is inserted at index 1.

  1  5  12  -5  16  2  12  14
Fundamentals of Algorithms – Insertion Sort

Step 3: i = 3, x = 12. No element to the left of index 3 is greater than 12, so no shift takes place.

  1  5  12  -5  16  2  12  14

Step 4: i = 4, x = -5. 12, 5, and 1 all shift right, and -5 is inserted at index 1.

  -5  1  5  12  16  2  12  14
Fundamentals of Algorithms – Insertion Sort

Step 5: i = 5, x = 16. No element to the left is greater than 16, so no shift takes place.

  -5  1  5  12  16  2  12  14

Step 6: i = 6, x = 2. 16, 12, and 5 shift right, and 2 is inserted at index 3.

  -5  1  2  5  12  16  12  14
Fundamentals of Algorithms – Insertion Sort

Step 7: i = 7, x = 12. 16 shifts right, and 12 is inserted at index 6.

  -5  1  2  5  12  12  16  14

Step 8: i = 8, x = 14. 16 shifts right, and 14 is inserted at index 7.

  -5  1  2  5  12  12  14  16

The entire array is sorted now.
Thank You

