ADSA Lecture 1

Date: 13-08-2024

1. Definition and Significance of Algorithmic Complexity


Algorithmic Complexity measures the efficiency of an algorithm in terms of the resources it
consumes. There are two primary types of complexity:
• Time Complexity: Reflects the amount of time an algorithm takes to complete as a function
of the input size.
• Space Complexity: Reflects the amount of memory an algorithm uses as a function of the
input size.
The significance of algorithmic complexity lies in its ability to provide insights into an
algorithm's performance, helping in comparing algorithms and choosing the most efficient one
for a given problem. It is crucial for scalability and performance optimization.
2. Time and Space Complexity
• Time Complexity: Measures how the running time of an algorithm increases with the size of
the input. It's often expressed as a function T(n), where n is the size of the input.
• Space Complexity: Measures how the memory usage of an algorithm grows with the input
size. It's expressed as a function S(n), where n is the input size.
3. Big O Notation
Big O Notation is used to describe the upper bound of an algorithm's time or space complexity.
It provides an asymptotic analysis of the algorithm's performance in the worst-case scenario.
The notation ignores constant factors and lower-order terms, focusing on the dominant term
as the input size grows.
• Example: For a sorting algorithm with time complexity O(n^2), the time taken grows
quadratically with the size of the input.
4. Omega Notation (Ω)
Omega Notation (Ω) provides the lower bound of an algorithm's time or space complexity. It
describes the best-case scenario, giving a guarantee that the algorithm will take at least this
amount of time or space.
• Example: For a linear search algorithm, the best-case time complexity is Ω(1) (if the target
element is the first one).
5. Theta Notation (Θ)
Theta Notation (Θ) provides both the upper and lower bounds of an algorithm's complexity. It
describes the exact asymptotic behavior of an algorithm, providing a tight bound on its time or
space complexity.
• Example: An algorithm with complexity Θ(nlogn) has a running time that grows
proportionally to n log n; asymptotically, it will perform neither better nor worse than
this bound.
6. Analyzing Basic Algorithms
To analyze the complexity of basic algorithms, follow these steps:


1. Identify the Basic Operations: Determine which operations are most significant in the
algorithm (e.g., comparisons, assignments).
2. Count the Operations: Calculate how many times the basic operations are performed as a
function of the input size.
3. Express Complexity Using Big O Notation: Find the dominant term and express it using Big O
notation.
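Applied to a hypothetical summation function, the three steps might look like this (the function and the operation counts are illustrative):

```python
def sum_array(arr):
    total = 0          # 1 assignment
    for x in arr:      # loop body executes n times
        total += x     # basic operation: one addition per element
    return total

# Step 1: the basic operation is the addition inside the loop.
# Step 2: it runs n times for an input of size n, so T(n) = n + 1.
# Step 3: the dominant term is n, giving O(n).
```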
7. Example Problems on Basic Time Complexity Analysis
1. Linear Search:
o Description: Searches for an element in an unsorted array by checking each element.
o Time Complexity: O(n), where n is the number of elements in the array.
o Explanation: In the worst case, each element must be checked once.
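A minimal sketch of linear search, assuming the conventional return value of -1 when the target is absent:

```python
def linear_search(arr, target):
    """Return the index of target in arr, or -1 if absent."""
    for i, value in enumerate(arr):
        if value == target:   # one comparison per element
            return i
    return -1                 # worst case: all n elements checked -> O(n)
```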
2. Binary Search:
o Description: Searches for an element in a sorted array by repeatedly dividing the search
interval in half.
o Time Complexity: O(log n).
o Explanation: Each step halves the search interval, leading to logarithmic growth.
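One possible iterative implementation of binary search on a sorted array:

```python
def binary_search(arr, target):
    """Return the index of target in sorted arr, or -1 if absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1      # discard the left half
        else:
            hi = mid - 1      # discard the right half
    return -1                 # interval halves each step -> O(log n)
```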
3. Bubble Sort:
o Description: Sorts an array by repeatedly stepping through the list, comparing adjacent
elements, and swapping them if they are in the wrong order.
o Time Complexity: O(n^2).
o Explanation: It involves nested loops, each running up to n times.
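A straightforward sketch of bubble sort, showing the two nested loops that account for the O(n^2) bound:

```python
def bubble_sort(arr):
    """Sort arr in place and return it."""
    n = len(arr)
    for i in range(n - 1):            # outer loop: up to n-1 passes
        for j in range(n - 1 - i):    # inner loop: compare adjacent pairs
            if arr[j] > arr[j + 1]:
                # swap the out-of-order pair
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    return arr                        # nested loops -> O(n^2) comparisons
```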
4. Merge Sort:
o Description: Sorts an array by dividing it into halves, sorting each half, and then merging the
sorted halves.
o Time Complexity: O(n log n).

o Explanation: The array is divided log(n) times, and merging takes linear time.
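A recursive sketch of merge sort; the recursion divides the array log n times and each level of merging takes linear time:

```python
def merge_sort(arr):
    """Return a sorted copy of arr."""
    if len(arr) <= 1:
        return arr                    # base case: already sorted
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])      # sort each half recursively
    right = merge_sort(arr[mid:])
    # merge the two sorted halves in linear time
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])           # append whichever half has leftovers
    merged.extend(right[j:])
    return merged                     # log n levels x O(n) merge -> O(n log n)
```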
These examples cover basic algorithms and their time complexities, providing a foundation for
understanding how different algorithms perform relative to the size of their input.
Time Complexity vs. Execution Time
1. Definition:
• Time Complexity:
o Concept: Time complexity is a theoretical measure of the time an algorithm takes to
run as a function of the input size n. It expresses the efficiency of an algorithm in
terms of the growth rate of its running time, focusing on how the time required by
the algorithm increases as the input size grows.
o Notation: Usually represented using Big O notation (e.g., O(n), O(log n), O(n^2), etc.).
o Purpose: Time complexity allows you to compare algorithms independently of the
hardware or software environment, providing a way to evaluate their scalability.
• Execution Time:
o Concept: Execution time is the actual time taken by an algorithm to run on a specific
machine with a specific input. It is the real-world measure of how long an algorithm
takes to execute, usually expressed in units of time like milliseconds, seconds, etc.
o Factors: Affected by various factors such as the processor speed, memory hierarchy,
compiler optimizations, and the specific input provided to the algorithm.
o Purpose: Execution time provides a concrete measure of an algorithm's performance
in a particular environment, allowing you to observe the actual time it takes to run a
specific piece of code.
2. Relationship Between Time Complexity and Execution Time:
• Abstraction vs. Reality:
o Time Complexity: Abstracts away hardware details and focuses on how the running
time of an algorithm scales with input size.
o Execution Time: Reflects the real performance of an algorithm on a specific system
with specific inputs.

• General vs. Specific:
o Time Complexity: Offers a general understanding of an algorithm’s performance in
different scenarios, especially for large input sizes.
o Execution Time: Provides concrete performance data for specific inputs and environments.
3. Why Time Complexity Matters Despite Execution Time:
• Scalability: Time complexity is crucial for understanding how an algorithm will perform as
the input size increases, which is often more important than the actual time it takes to run
on small inputs.
• Independence: Time complexity is independent of the machine or environment, allowing
for a more objective comparison between algorithms.
• Predictability: By knowing the time complexity, you can predict how an algorithm will
behave for larger inputs, even without running the code.
4. Examples to Illustrate the Difference:
• Example 1: Linear Search vs. Binary Search:
o Linear Search: Has a time complexity of O(n), meaning the time to search grows
linearly with the input size.
o Binary Search: Has a time complexity of O(log n), meaning the time to search grows
logarithmically with the input size.
o Execution Time: On small datasets, linear search might be faster due to lower
overhead, but as the dataset size grows, binary search will outperform linear search
due to its lower time complexity.
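The contrast can be observed directly. The sketch below times both searches for the last element of a large sorted list; the absolute numbers depend entirely on the machine, but the gap between O(n) and O(log n) should be visible.

```python
import time

def linear_search(arr, target):
    for i, v in enumerate(arr):
        if v == target:
            return i
    return -1

def binary_search(arr, target):
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

data = list(range(1_000_000))     # sorted input

start = time.perf_counter()
linear_search(data, 999_999)      # O(n): scans the whole list
t_linear = time.perf_counter() - start

start = time.perf_counter()
binary_search(data, 999_999)      # O(log n): ~20 comparisons
t_binary = time.perf_counter() - start

print(f"linear: {t_linear:.6f}s, binary: {t_binary:.6f}s")
```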
• Example 2: Sorting Algorithms:
o Bubble Sort: Has a time complexity of O(n^2), which can make it very slow on large
datasets.
o Merge Sort: Has a time complexity of O(n log n), making it much more efficient for
larger datasets.
o Execution Time: On small datasets, bubble sort might not be noticeably slower than
merge sort, but for large datasets, the difference in execution time will be significant.
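The same effect can be demonstrated for sorting. In the sketch below, Python's built-in sorted (Timsort, O(n log n)) stands in for merge sort; actual timings vary by machine, but bubble sort's quadratic growth dominates well before a few thousand elements.

```python
import random
import time

def bubble_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    return arr

data = [random.random() for _ in range(5_000)]

start = time.perf_counter()
bubble_sort(data.copy())          # O(n^2): ~12.5 million comparisons
t_bubble = time.perf_counter() - start

start = time.perf_counter()
sorted(data)                      # built-in Timsort, O(n log n)
t_fast = time.perf_counter() - start

print(f"bubble sort: {t_bubble:.4f}s, O(n log n) sort: {t_fast:.4f}s")
```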
5. Practical Considerations:
• Small Inputs: For small input sizes, execution time might be more important, and an
algorithm with higher time complexity might still perform well.
• Large Inputs: For large input sizes, time complexity becomes more critical, as it provides a
better estimate of how the algorithm will scale.
In summary, time complexity provides a broad understanding of an algorithm’s efficiency,
especially in terms of scalability, while execution time gives you practical insights into how the
algorithm performs in a specific environment with specific inputs. Both are important in
evaluating and selecting algorithms, depending on the context and requirements.
