
BASICS OF ALGORITHMS

DEPARTMENT OF AIDS/AIML/CSD/CSE/CSME

FACULTY OF ENGINEERING
Course outcomes
Topic                                                                  CO
Introduction to an Array, Representation of Arrays                     1
Introduction to Stack, Stack - Definitions & Concepts                  2
Operations on Stacks, Applications of Stacks                           3
Polish Expression, Reverse Polish Expression and Their Compilation     3
Recursion, Tower of Hanoi                                              3
Linked List                                                            3
Applications of Linked List                                            4
Specific Learning Objectives

S/No  Learning objective                              Level      Criteria       Condition
1     Understand the basic concepts of Algorithms     Must know  At least one   -
2     Learn the characteristics of an Algorithm       Must know  At least two   -
3     Understand time and space complexities          Must know  All            -

What are Data Structures?

 Data Structure allows us to understand the organization of data and the management of data flow in order to increase the efficiency of a process or program.
 Data Structure is a particular way of storing and organizing data in the memory of the computer so that the data can be easily retrieved and efficiently utilized later when required.
Types of Data
The scope of a particular data model depends on two factors:

1. First, it must be rich enough in structure to reflect the definite correlation of the data with a real-world object.

2. Second, the structure should be straightforward enough that one can process the data efficiently whenever necessary.
Basic Terminologies related to Data Structures

• Data: We can define data as an elementary value or a collection of values. For


example, the Employee's name and ID are the data related to the Employee.
• Data Items: A Single unit of value is known as Data Item.
• Group Items: Data Items that have subordinate data items are known as Group
Items. For example, an employee's name can have a first, middle, and last name.
• Elementary Items: Data Items that cannot be divided into sub-items are known as Elementary Items. For example, the ID of an Employee.
• Entity and Attribute: A class of certain objects is represented by an Entity. It
consists of different Attributes. Each Attribute symbolizes the specific property of
that Entity.
The term "information" is sometimes utilized for data with given attributes of
meaningful or processed data.

• Field: A single elementary unit of information symbolizing the Attribute of an


Entity is known as Field.
• Record: A collection of related data items is known as a Record. For example, for the employee entity, the name, ID, address, and job title can be grouped to form the record for that employee.
• File: A collection of different Records of one entity type is known as a File. For example, if there are 100 employees, there will be 100 records in the related file, one containing the data about each employee.
Why should we learn Data Structures?

Data Structures and Algorithms are two of the key aspects of Computer
Science.
Data Structures allow us to organize and store data, whereas Algorithms
allow us to process that data meaningfully.
Learning Data Structures and Algorithms will help us become better
Programmers.
We will be able to write code that is more effective and reliable.
We will also be able to solve problems more quickly and efficiently.
Key Features of Data Structures

• Robustness: Generally, all computer programmers aim to produce software that yields
correct output for every possible input, along with efficient execution on all hardware
platforms. This type of robust software must manage both valid and invalid inputs.
• Adaptability: Building software applications such as web browsers, word processors, and Internet search engines involves huge software systems that must work correctly and efficiently for many years. Moreover, software evolves due to emerging technologies and ever-changing market conditions.
• Reusability: The features like Reusability and Adaptability go hand in hand. It is known
that the programmer needs many resources to build any software, making it a costly
enterprise. However, if the software is developed in a reusable and adaptable way, then it
can be applied in most future applications. Thus, by executing quality data structures, it
is possible to build reusable software, which appears to be cost-effective and timesaving.
History of algorithms

The word algorithm originates from the word algorism, which is linked to the name of the Arabic mathematician Abu Jafar Mohammed Ibn Musa Al Khwarizmi (c. 825 CE), who is credited as the first algorithm designer for adding numbers represented in the Hindu numeral system. The algorithm he designed, and still followed today, calls for repeatedly summing the digits occurring at a specific position together with the previous carry digit, moving from the least significant digit to the most significant digit until the digits have been exhausted.
Introduction to Algorithm
 An algorithm is a step-by-step procedure that defines a set of
instructions that must be carried out in a specific order to produce the
desired result.

 Algorithms are generally developed independently of underlying


languages, which means that an algorithm can be implemented in more
than one programming language
What is an Algorithm?
An algorithm is a set of commands that must be followed for a computer to
perform calculations or other problem-solving operations.

According to its formal definition, an algorithm is a finite set of instructions


carried out in a specific order to perform a particular task.

It is not the entire program or code; it is simply the logic of a problem, represented as an informal description in the form of a flowchart or pseudocode.
•Problem: A problem can be defined as a real-world problem, or an instance of one, for which you need to develop a program or set of instructions.
•Algorithm: An algorithm is defined as a step-by-step process that will be designed for a
problem.
•Input: After designing an algorithm, the algorithm is given the necessary and desired
inputs.
•Processing unit: The input will be passed to the processing unit, producing the desired
output.
•Output: The outcome or result of the program is referred to as the output.
A Simple Algorithm

Fibonacci Numbers
• The Fibonacci numbers are very useful for introducing algorithms, so before we
continue, here is a short introduction to Fibonacci numbers.

• The Fibonacci numbers are named after a 13th century Italian mathematician known
as Fibonacci.

• The two first Fibonacci numbers are 0 and 1, and the next Fibonacci number is
always the sum of the two previous numbers, so we get 0, 1, 1, 2, 3, 5, 8, 13, 21, ...
Code:

def F(n):
    # Base cases: F(0) = 0 and F(1) = 1
    if n <= 1:
        return n
    # Recursive case: each Fibonacci number is the sum of the two before it
    else:
        return F(n - 1) + F(n - 2)

print(F(19))   # prints 4181, the 19th Fibonacci number (counting from F(0) = 0)
Find The Lowest Value in an Array

my_array = [7, 12, 9, 4, 11]

minVal = my_array[0]               # Step 1: assume the first value is the lowest
for i in my_array:                 # Step 2: visit every value in the array
    if i < minVal:                 # Step 3: update minVal when a smaller value is found
        minVal = i

print('Lowest value: ', minVal)    # Step 4: print the result
Characteristics of an Algorithm
 Input: An algorithm requires some input values; it may take zero or more inputs.
 Output: At the end of an algorithm, you will have one or more outcomes.
 Unambiguity: A perfect algorithm is defined as unambiguous, which means that its
instructions should be clear and straightforward.
 Finiteness: An algorithm must be finite. Finiteness in this context means that the
algorithm should have a limited number of instructions, i.e., the instructions should be
countable.
 Effectiveness: Each instruction in an algorithm must be basic enough to be carried out and must contribute to the overall process.
 Language independence: An algorithm must be language-independent, which means
that its instructions can be implemented in any language and produce the same results.
Why Do You Need an Algorithm?

 To understand the basic idea of the problem.


 To find an approach to solve the problem.
 To improve the efficiency of existing techniques.
 To understand the basic principles of designing the algorithms.
 To compare the performance of the algorithm with respect to other techniques.
 It is the best method of description without describing the implementation detail.
 The Algorithm gives a clear description of requirements and goal of the problem to
the designer.
How to Write an Algorithm?

 There are no well-defined standards for writing algorithms. It is, however, a problem
that is resource-dependent. Algorithms are never written with a specific programming
language in mind.

 As you know, all programming languages share basic code constructs such as loops (do, for, while) and flow control (if-else), and so on. An algorithm can be written using these common constructs.

 Algorithms are typically written in a step-by-step fashion, but this is not always the
case. Algorithm writing is a process that occurs after the problem domain has been
well-defined. That is, you must be aware of the problem domain for which you are
developing a solution.
Example to learn how to write algorithms.
Problem: Create an algorithm that multiplies two numbers and displays the output.
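A minimal sketch of this algorithm in the two common write-ups (illustrative only; the variable names a, b, and c are assumed):

# First way: step-numbered description (shown here as comments)
# Step 1: START
# Step 2: Read the values of a and b
# Step 3: c <- a * b
# Step 4: Display c
# Step 5: STOP

# Second way: the same logic without step numbers, as runnable Python
a = 6              # assumed example input
b = 7              # assumed example input
c = a * b          # multiply the two numbers
print(c)           # display the output: 42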
• In algorithm design and analysis, the second form (pseudocode without explicit step numbers) is typically used to describe an algorithm.
• It allows the analyst to analyze the algorithm while ignoring all unwanted definitions
easily.
• They can see which operations are being used and how the process is progressing. It
is optional to write step numbers.
• To solve a given problem, you create an algorithm. A problem can be solved in a
variety of ways
Factors of an Algorithm

• Modularity: An algorithm shows modularity when a given problem can be broken down into small modules or small steps, which is the basic idea behind designing an algorithm.
• Correctness: An algorithm is correct when the given inputs produce the desired output, indicating that the algorithm was designed correctly and its analysis has been carried out correctly.
• Maintainability: It means that the algorithm should be designed in a
straightforward, structured way so that when you redefine the algorithm,
no significant changes are made to the algorithm.
• Functionality: It takes into account various logical steps to solve a real-world
problem.
• Robustness: Robustness refers to an algorithm's ability to clearly define and address the problem.
• User-friendliness: An algorithm should be easy to understand; if it is difficult to understand, the designer will not be able to explain it to the programmer.
• Simplicity: If an algorithm is simple, it is easy to understand.
• Extensibility: Your algorithm should be extensible if another algorithm designer or
programmer wants to use it.
Use of the Algorithms

 Computer Science: Algorithms form the basis of computer programming and are used to solve
problems ranging from simple sorting and searching to complex tasks such as artificial intelligence
and machine learning.
 Mathematics: Algorithms are used to solve mathematical problems, such as finding the optimal
solution to a system of linear equations or finding the shortest path in a graph.
 Operations Research: Algorithms are used to optimize and make decisions in fields such as
transportation, logistics, and resource allocation.
 Artificial Intelligence: Algorithms are the foundation of artificial intelligence and machine learning,
and are used to develop intelligent systems that can perform tasks such as image recognition, natural
language processing, and decision-making.
 Data Science: Algorithms are used to analyze, process, and extract insights from large amounts of
data in fields such as marketing, finance, and healthcare.
Types of Algorithms

• Brute Force Algorithm


• Recursive Algorithm
• Backtracking Algorithm
• Searching Algorithm
• Sorting Algorithm
• Hashing Algorithm
• Divide and Conquer Algorithm
Brute Force Algorithm

• It is an intuitive, direct, and straightforward technique of problem-solving in which


all the possible ways or all the possible solutions to a given problem are enumerated.
• Many problems are solved in day-to-day life using the brute force strategy, for example, exploring all the paths to a nearby market to find the shortest one, or arranging books in a rack by trying all the possibilities to optimize the rack space. In fact, many daily-life activities are brute force in nature, even when optimal algorithms are possible.
• The brute force algorithm is a technique that guarantees a solution for problems in any domain, helps in solving simpler problems, and provides a solution that can serve as a benchmark for evaluating other design techniques, but it takes a lot of run time and is inefficient.
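As an illustration only (not from the slides), here is a minimal brute-force sketch in Python that checks every pair of numbers in a list to find one pair that adds up to a target value:

def pair_with_sum(numbers, target):
    # Brute force: enumerate every possible pair and test it
    for i in range(len(numbers)):
        for j in range(i + 1, len(numbers)):
            if numbers[i] + numbers[j] == target:
                return numbers[i], numbers[j]
    return None   # no pair adds up to the target

print(pair_with_sum([7, 12, 9, 4, 11], 16))   # (7, 9)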
Advantages of a brute-force algorithm

 This algorithm finds all the possible solutions, and it also guarantees that it finds the
correct solution to a problem.
 This type of algorithm is applicable to a wide range of domains.
 It is mainly used for solving simpler and small problems.
 It can be considered a comparison benchmark to solve a simple problem and does
not require any particular domain knowledge.
Disadvantages of a brute-force algorithm

 It is an inefficient algorithm as it requires solving each and every state.


 It is a very slow algorithm to find the correct solution as it solves each state without
considering whether the solution is feasible or not.
 The brute force algorithm is neither constructive nor creative compared to other algorithm design techniques.
Recursive Algorithm

• The process in which a function calls itself directly or indirectly is called recursion
and the corresponding function is called a recursive function.
• Using a recursive algorithm, certain problems can be solved quite easily. Examples of such problems are the Tower of Hanoi, factorial computation, and the Fibonacci numbers shown earlier.
• A recursive function solves a particular problem by calling a copy of itself and solving smaller subproblems of the original problem.
• Many more recursive calls can be generated as and when required.
• It is essential to provide a base case in order to terminate the recursion process; every recursive call must work on a simpler version of the original problem so that the base case is eventually reached.
Calculating Factorial using Recursion
def factorial(n):
    # Base case: if n is 0 or 1, the factorial is 1
    if n == 0 or n == 1:
        return 1
    # Recursive case: n! = n * (n-1)!
    else:
        return n * factorial(n - 1)

# Example usage
number = 5
print(f"The factorial of {number} is {factorial(number)}")
Recursion
Recursion is a technique where a function calls itself in order to solve a
smaller instance of the same problem until it reaches a base case.
Structure
Base Case: This is the condition under which the recursion stops.
Without a base case, recursion would continue indefinitely, leading to a
stack overflow.
Recursive Case: The part of the function where it calls itself with a
modified argument, moving towards the base case.
Iteration

Definition
Iteration is a technique that repeatedly executes a set of statements using
control structures like loops (for, while) until a specified condition is met.
Structure
Initialization: Setting up initial conditions.
Condition: The loop continues to execute as long as this condition is true.
Update: Modifying variables that control the loop’s execution to eventually
terminate the loop.
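To make the contrast with recursion concrete, here is a small sketch (not from the slides) that computes the sum 1 + 2 + ... + n both ways in Python:

def sum_iterative(n):
    total = 0                    # Initialization: set up the starting value
    i = 1
    while i <= n:                # Condition: loop while i has not passed n
        total += i
        i += 1                   # Update: move i towards the terminating value
    return total

def sum_recursive(n):
    if n == 0:                   # Base case: the sum of no numbers is 0
        return 0
    return n + sum_recursive(n - 1)   # Recursive case: a smaller instance of the same problem

print(sum_iterative(5), sum_recursive(5))   # 15 15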
Backtracking Algorithms
Backtracking algorithms are like problem-solving strategies that help explore
different options to find the best solution. They work by trying out different
paths and if one doesn’t work, they backtrack and try another until they find the
right one. It’s like solving a puzzle by testing different pieces until they fit
together perfectly.
(Figure: the arrangements of 1, 2, 3 explored by a backtracking search.)
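The arrangements in the figure can be produced with a short backtracking sketch in Python (illustrative, not from the slides): each position is tried in turn, and the algorithm backtracks whenever a partial arrangement cannot be extended.

def permutations(items, current=None, results=None):
    # Backtracking: extend the partial arrangement one item at a time
    if current is None:
        current, results = [], []
    if len(current) == len(items):
        results.append(current[:])                  # a complete arrangement was found
        return results
    for item in items:
        if item not in current:                     # only try items not used yet
            current.append(item)                    # choose
            permutations(items, current, results)   # explore
            current.pop()                           # un-choose (backtrack) and try another item
    return results

print(permutations([1, 2, 3]))   # all 6 orderings of 1, 2, 3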
Searching Algorithm

Searching algorithms are the ones that are used for searching elements or groups of
elements from a particular data structure. They can be of different types based on their
approach or the data structure in which the element should be found.

Searching Algorithms are designed to check for an element or retrieve an


element from any data structure where it is stored .

1. Sequential Search:

2. Interval Search
Linear Search
 Linear search is also called the sequential search algorithm. It is the simplest searching algorithm.
 In Linear search, we simply traverse the list completely and match each
element of the list with the item whose location is to be found. If the match
is found, then the location of the item is returned; otherwise, the algorithm
returns NULL.

 It is widely used to search an element from the unordered list, i.e., the list in
which items are not sorted. The worst-case time complexity of linear search
is O(n).
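A minimal Python sketch of linear search (illustrative; the slides give no code), returning the index of the item or None when it is absent:

def linear_search(items, target):
    # Check every element in turn until the target is found
    for index in range(len(items)):
        if items[index] == target:
            return index          # location of the match
    return None                   # target is not present in the list

print(linear_search([7, 12, 9, 4, 11], 4))    # 3
print(linear_search([7, 12, 9, 4, 11], 5))    # None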
Binary Search

 Binary search is a search algorithm used to find the position


of a target value within a sorted array. It works by repeatedly
dividing the search interval in half until the target value is
found or the interval is empty. The search interval is halved
by comparing the target element with the middle value of the
search space.

To apply Binary Search algorithm:


 The data structure must be sorted.
 Access to any element of the data structure takes constant
time.
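A hedged Python sketch of binary search on a sorted list (not taken from the slides):

def binary_search(sorted_items, target):
    low, high = 0, len(sorted_items) - 1
    while low <= high:                      # the search interval is not yet empty
        mid = (low + high) // 2             # middle of the current interval
        if sorted_items[mid] == target:
            return mid                      # target found at index mid
        elif sorted_items[mid] < target:
            low = mid + 1                   # discard the left half
        else:
            high = mid - 1                  # discard the right half
    return None                             # target is not in the list

print(binary_search([4, 7, 9, 11, 12], 11))   # 3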
Sorting Algorithm

Sorting is arranging a group of data in a particular manner according to the


requirement. The algorithms which help in performing this function are called
sorting algorithms. Generally sorting algorithms are used to sort groups of data in
an increasing or decreasing manner.

A Sorting Algorithm is used to rearrange a given array or list of elements


according to a comparison operator on the elements. The comparison operator is
used to decide the new order of elements in the respective data structure.
For example, a list of characters can be sorted in increasing order of their ASCII values; a character with a lesser ASCII value is placed before a character with a higher ASCII value.
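As a small illustration (not from the slides), Python's built-in sorted() arranges characters by their ASCII/Unicode code points:

chars = ['d', 'B', 'a', 'C']
print(sorted(chars))                        # ['B', 'C', 'a', 'd'] - uppercase letters have lower codes
print(sorted([5, 2, 9, 1], reverse=True))   # [9, 5, 2, 1] - decreasing order via the comparison operator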
Hashing Algorithm

 Hashing is a fundamental data structure that efficiently stores and retrieves data in a
way that allows for quick access.

 It involves mapping data to a specific index in a hash table using a hash function that
enables fast retrieval of information based on its key.

 This method is commonly used in databases, caching systems, and various


programming applications to optimize search and retrieval operations.
Components of Hashing

 Key: A key can be anything, a string or an integer, that is fed as input to the hash function, the technique that determines an index or location for storing an item in a data structure.
 Hash Function: The hash function receives the input key and returns the index of an
element in an array called a hash table. The index is known as the hash index.
 Hash Table: Hash table is a data structure that maps keys to values using a special
function called a hash function. Hash stores the data in an associative manner in an
array where each data value has its own unique index.
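A toy Python sketch of these components (illustrative only; real hash tables such as Python's dict handle collisions and resizing far more carefully):

TABLE_SIZE = 10
hash_table = [None] * TABLE_SIZE             # the hash table: an array of slots

def hash_function(key):
    # Map the key to a hash index in the range 0 .. TABLE_SIZE - 1
    return sum(ord(ch) for ch in str(key)) % TABLE_SIZE

def put(key, value):
    hash_table[hash_function(key)] = (key, value)    # store the pair at its hash index

def get(key):
    entry = hash_table[hash_function(key)]
    return entry[1] if entry and entry[0] == key else None

put("emp101", "Asha")
print(get("emp101"))     # Asha - retrieved directly via the hash index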
Divide and Conquer Algorithm

In Divide and Conquer algorithms, the idea is to solve the problem in two parts: the first part divides the problem into subproblems of the same type, and the second part solves the smaller problems independently and then combines their results to produce the final answer to the problem.

This technique can be divided into the following three parts:


1.Divide: This involves dividing the problem into smaller sub-problems.
2.Conquer: Solve sub-problems by calling recursively until solved.
3.Combine: Combine the sub-problems to get the final solution of the
whole problem.
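A classic divide-and-conquer sketch in Python is merge sort (illustrative; not given in the slides):

def merge_sort(items):
    if len(items) <= 1:
        return items                          # a list of 0 or 1 items is already sorted
    mid = len(items) // 2
    left = merge_sort(items[:mid])            # Divide: sort the left half
    right = merge_sort(items[mid:])           # Divide: sort the right half
    return merge(left, right)                 # Combine: merge the two sorted halves

def merge(left, right):
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # Combine: repeatedly pick the smaller front item
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]      # append whatever remains

print(merge_sort([7, 12, 9, 4, 11]))          # [4, 7, 9, 11, 12]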
EFFICIENCY IN PROGRAMMING

• Efficiency in programming refers to how effectively a program uses resources such as


time (execution speed) and space (memory usage). A more efficient program
performs its tasks faster and/or uses less memory. Here are the key aspects of
efficiency in programming:
• Time Complexity
• Space Complexity
Complexity of Algorithm

 Complexity in algorithms refers to the amount of resources (such as time or memory)


required to solve a problem or perform a task.

 The most common measure of complexity is time complexity, which refers to the
amount of time an algorithm takes to produce a result as a function of the size of the
input.

 Memory complexity refers to the amount of memory used by an algorithm. Algorithm


designers strive to develop algorithms with the lowest possible time and memory
complexities, since this makes them more efficient and scalable.
 O(f) notation, also termed asymptotic notation or "Big O" notation, represents the complexity of an algorithm. Here f is a function of the size of the input data. The asymptotic complexity O(f) describes the order in which resources such as CPU time and memory are consumed by the algorithm, expressed as a function of the size of the input data.
Time Complexity

• Measuring time to execute

• Counting Operations involved

• Abstract notion of Order of Growth


Measuring time to execute

# Example 1: measured at about 0.78 seconds
for i in range(100):
    print("Swapnil")

# Example 2: measured at about 0.75 seconds
count = 0
while count < 100:
    print("Hello")
    count += 1
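The timings above come from running the snippets on one particular machine; a minimal sketch of how such a measurement might be made with Python's standard time module (illustrative only; actual numbers will differ):

import time

start = time.perf_counter()       # record the clock before the work
for i in range(100):
    print("Swapnil")
elapsed = time.perf_counter() - start
print(f"Elapsed: {elapsed:.4f} seconds")   # wall-clock time for this run only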
Issues with Measuring time to execute

• Different algorithms take different amounts of time
• Time varies if the implementation changes
• Different machines give different times
• Does not work well for extremely small inputs
• Time varies with different inputs, but we are unable to form a general relationship between input size and running time
Counting Operations
Version 1 - counting only the arithmetic operations:

def celsius_to_fahrenheit(celsius):
    operations_count = 0

    # Operation 1: Multiply by 9
    fahrenheit = celsius * 9
    operations_count += 1

    # Operation 2: Divide by 5
    fahrenheit /= 5
    operations_count += 1

    # Operation 3: Add 32
    fahrenheit += 32
    operations_count += 1

    return fahrenheit, operations_count

Version 2 - also counting the assignments to intermediate variables:

def celsius_to_fahrenheit(celsius):
    operations_count = 0

    # Operation 1: Multiply by 9
    step1 = celsius * 9
    operations_count += 1

    # Operation 2: Assign result of step1 to a new variable
    intermediate = step1
    operations_count += 1

    # Operation 3: Divide by 5
    step2 = intermediate / 5
    operations_count += 1

    # Operation 4: Assign result of step2 to a new variable
    intermediate = step2
    operations_count += 1

    # Operation 5: Add 32
    fahrenheit = intermediate + 32
    operations_count += 1

    return fahrenheit, operations_count

# Example usage (either version)
celsius_temp = 25
fahrenheit_temp, num_operations = celsius_to_fahrenheit(celsius_temp)
print(f"Celsius: {celsius_temp}, Fahrenheit: {fahrenheit_temp}")
print(f"Number of operations: {num_operations}")
Order of Growth

• We want to evaluate program efficiency when the input is very large
• We want to express the growth of the program's run time as the input size grows
• We want to put an upper bound on growth that is as tight as possible
• We do not need to be precise: "order of" growth, not "exact" growth
• We will look at the largest factors in the run time
• Thus, generally we want a tight upper bound on growth, as a function of the size of the input, in the worst case
Factorial of a number

def factorial_iterative(n):
    result = 1
    for i in range(1, n + 1):
        result *= i
    return result

# When reasoning about order of growth:
#  - Ignore additive constants
#  - Ignore multiplicative constants

# Test the function
number = 5
print(f"Factorial of {number} (iterative): {factorial_iterative(number)}")
Time Complexity

• Time complexity is a function describing the amount of time an algorithm


takes to complete as a function of the length of the input. It is typically
expressed using Big O notation, which describes an upper bound on the
time. Common time complexities include:

• O(1): Constant time – the algorithm takes the same amount of time
regardless of the input size.
• O(log n): Logarithmic time – the time grows logarithmically with the
input size.
• O(n): Linear time – the time grows linearly with the input size.
• O(n log n): Linearithmic time – the time grows in a combination of linear
and logarithmic fashion.
• O(n^2): Quadratic time – the time grows quadratically with the input size.
• O(2^n): Exponential time – the time doubles with each additional element
in the input.
• O(n!): Factorial time – the time grows factorially with the input size.
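A few toy Python functions illustrating some of these classes (illustrative sketches, not from the slides):

def get_first(items):            # O(1): one step regardless of input size
    return items[0]

def total(items):                # O(n): one addition per element
    s = 0
    for x in items:
        s += x
    return s

def all_pairs(items):            # O(n^2): one iteration per pair of elements
    pairs = []
    for a in items:
        for b in items:
            pairs.append((a, b))
    return pairs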
Space Complexity

• Space complexity is a function describing the amount of memory an


algorithm uses in relation to the size of the input. Like time complexity, it
is expressed using Big O notation. Common space complexities include:
• O(1): Constant space – the algorithm uses a fixed amount of space
regardless of the input size.
• O(n): Linear space – the space grows linearly with the input size.
• O(n^2): Quadratic space – the space grows quadratically with the input
size.
Analyzing Complexity

Identify the basic operations that determine the running time or space usage
of the algorithm.

Count the number of times these basic operations are executed as a


function of the input size.

Express the count as a function of the input size, n.

Simplify the function using Big O notation to express the complexity in its
most general form.
Example Analysis
Consider the following simple algorithm to find the maximum element in
an array:
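A minimal Python sketch of such an algorithm (illustrative; the names arr, max_val, and i match the analysis that follows):

def find_max(arr):
    max_val = arr[0]               # start with the first element
    for i in range(1, len(arr)):   # examine the remaining n - 1 elements
        if arr[i] > max_val:       # basic operation: one comparison per element
            max_val = arr[i]
    return max_val

print(find_max([7, 12, 9, 4, 11]))   # 12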
Time Complexity Analysis

• The basic operation here is the comparison arr[i] > max_val.


• This comparison is performed once for each of the n elements in the
array, except the first one.
• Thus, the number of comparisons is n-1.
• Therefore, the time complexity is O(n).
Space Complexity Analysis

• The algorithm uses a fixed amount of extra space: one variable (max_val)
and a loop counter (i).
• This space does not depend on the input size n.
• Therefore, the space complexity is O(1).
Complexity Measures

• Complexity measures the efficiency of an algorithm in terms of time


(time complexity) and space (space complexity) required to solve a
problem.
• These complexities are typically expressed using Big O notation.
• There are three primary types of complexity: best-case, worst-case, and
average-case. Each of these scenarios provides insight into how an
algorithm performs under different conditions.
Best-Case Complexity

The best-case complexity describes the behavior of the algorithm


under the most optimal conditions. It's the minimum amount of time
or space that an algorithm can take to complete its task.

Definition: The scenario where the algorithm performs the fewest


possible steps.
Example: For the bubble sort algorithm, the best-case occurs when the
input array is already sorted. In this case, the complexity is O(n),
where n is the number of elements in the array, because only one pass
is needed to confirm the array is sorted.
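This O(n) best case assumes a bubble sort variant that exits early when a pass makes no swaps; a hedged Python sketch of that variant:

def bubble_sort(items):
    n = len(items)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]   # swap out-of-order neighbours
                swapped = True
        if not swapped:          # no swaps: the list is already sorted
            break                # best case: only one pass over n elements, O(n)
    return items

print(bubble_sort([1, 2, 3, 4, 5]))   # already sorted, finishes after a single pass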
Worst-Case Complexity

The worst-case complexity describes the behavior of the algorithm


under the least favorable conditions. It's the maximum amount of time
or space that an algorithm can take to complete its task.

Definition: The scenario where the algorithm performs the most


possible steps.

Example: For the quicksort algorithm, the worst case occurs when the pivot selection is poor, such as always picking the smallest or largest element as the pivot in an already sorted array, leading to unbalanced partitions. In this case, the complexity is O(n^2).
Average-Case Complexity

The average-case complexity describes the expected behavior of the


algorithm over all possible inputs. It provides a more realistic measure of
the algorithm's performance compared to the best and worst cases.

Definition: The expected number of steps the algorithm takes, averaged


over all possible inputs of a given size.

Example: For the merge sort algorithm, regardless of the input array's order, the average-case complexity is O(n log n), the same as its worst-case complexity. For quicksort, the average-case complexity is also O(n log n), assuming that the pivot splits the array reasonably well on average.
Typical Complexities of an Algorithm

1. Constant Complexity

It imposes a complexity of O(1). It undergoes an execution of a constant number of


steps like 1, 5, 10, etc. for solving a given problem. The count of operations is
independent of the input data size.
2. Logarithmic Complexity

It imposes a complexity of O(log(N)). It undergoes the execution of the order of


log(N) steps. To perform operations on N elements, it often takes the logarithmic
base as 2.
For N = 1,000,000, an algorithm with a complexity of O(log(N)) would take about 20 steps (to within a constant factor). Here, the base of the logarithm does not change the order of the operation count, so it is usually omitted.
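A quick check of the "about 20 steps" claim in Python (illustrative):

import math

N = 1_000_000
print(math.log2(N))               # about 19.93 - roughly 20 halving steps
print(math.ceil(math.log2(N)))    # 20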
Summary
• Basics of Algorithm
• Characteristics of Algorithm
• Design of Algorithm
• Types of Algorithms
• Complexities of Algorithms
Bibliography
1. Data Structures Using C and C++ - A. M. Tanenbaum, Y. Langsam & M. J. Augenstein, Prentice Hall India.
2. Data Structures & Program Design in C - Robert Kruse, Bruce Leung, Pearson Education.
Expected Questions
1) Illustrate the characteristics of algorithms.
2) Illustrate the types of algorithms.
3) Explain the complexities associated with an algorithm.
4) Compare best-case and worst-case analysis.
5) Compare recursion and iteration using a suitable example.
Thank you
