Design and analysis of algorithm in Computer Science
1. UNIT – I
Algorithms and Problem Solving
Fundamentals of the Design and Analysis of Algorithms
2. Algorithms
A tool for solving a well-specified computational problem.
Algorithms must be:
Correct: for each input, produce an appropriate output
Efficient: run as quickly as possible, and use as little memory as possible – more about this later
[Diagram: Input → Algorithm → Output]
3. Algorithms Cont.
A well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output.
Written in pseudocode, which can be implemented in the programming language of the programmer’s choice.
4. Correct and Incorrect Algorithms
An algorithm is correct if, for every input instance, it halts with the correct output. We say that a correct algorithm solves the given computational problem.
An incorrect algorithm might not halt at all on some input instances, or it might halt with an answer other than the desired one.
We shall be concerned only with correct algorithms.
5. Problems and Algorithms
We need to solve a computational problem, e.g. “Convert a weight in pounds to kg”.
An algorithm specifies how to solve it:
1. Read weight-in-pounds
2. Calculate weight-in-kg = weight-in-pounds * 0.455
3. Print weight-in-kg
A computer program is a computer-executable description of an algorithm.
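The three steps above can be written as a short program. This is a minimal sketch (Python and the function name are illustrative; the 0.455 conversion factor is the one used on the slide):

```python
def pounds_to_kg(weight_in_pounds):
    # Step 2: multiply by the slide's conversion factor (0.455 kg per pound).
    weight_in_kg = weight_in_pounds * 0.455
    return weight_in_kg

# Steps 1 and 3: take an input value and print the result.
print(pounds_to_kg(10))  # approximately 4.55
```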
7. From Algorithms to Programs
[Diagram: Problem → Algorithm → C++ Program]
Algorithm: A sequence of instructions describing how to do a task (or process)
8. Algorithms as Technology
Algorithms should be considered a technology, alongside:
•hardware with high clock rates, pipelining, and superscalar architectures
•easy-to-use, intuitive graphical user interfaces (GUIs)
•object-oriented systems
•local-area and wide-area networking
9. Classification of Problems
Types:
1) Problems with algorithmic solutions:
Step-by-step instructions and a series of actions.
eg: Finding the Fibonacci series – a fixed sequence of steps/actions is required.
2) Problems with heuristic solutions:
Trial-and-error form; no fixed sequence of actions or instructions.
eg: Deciding which stock to buy.
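The Fibonacci example of an algorithmic solution can be sketched as a fixed sequence of steps (a minimal sketch; Python and the names are illustrative, not from the slides):

```python
def fibonacci(n):
    """Return the first n Fibonacci numbers as a list."""
    series = []
    a, b = 0, 1
    for _ in range(n):
        series.append(a)   # record the current term
        a, b = b, a + b    # advance the pair of running values
    return series

print(fibonacci(7))  # [0, 1, 1, 2, 3, 5, 8]
```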
11. Problem Solving Strategies
• 1) Divide and conquer: A divide-and-conquer algorithm works by recursively breaking a problem down into two or more sub-problems of the same or related type, until these become simple enough to be solved directly.
• Eg: Binary search, merge sort, quick sort
• 2) Greedy method: The greedy approach is an algorithm strategy that builds a solution in stages, at each stage making the choice that looks best at that moment. Solving a problem with the greedy approach typically involves two stages: scanning the list of items, then optimization.
• Eg: Knapsack problem, job scheduling with deadlines
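The divide-and-conquer idea can be sketched with binary search, one of the examples named above (a minimal sketch; Python and the names are illustrative):

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2      # split the problem in half
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1          # discard the left half
        else:
            hi = mid - 1          # discard the right half
    return -1                     # target not present

print(binary_search([2, 5, 8, 12, 23, 38], 23))  # 4
```

Each iteration discards half of the remaining elements, which is what gives the O(log n) behaviour discussed later in the unit.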
12. Problem Solving Strategies
• 3) Dynamic programming: Dynamic programming is mainly an optimization over plain recursion. Wherever we see a recursive solution that has repeated calls for the same inputs, we can optimize it using dynamic programming. The idea is simply to store the results of subproblems so that we do not have to re-compute them when needed later.
• Eg: 0/1 knapsack problem, optimal binary search tree, travelling salesman problem
• 4) Trial and error: Trial and error is a problem-solving method in which multiple attempts are made to reach a solution. It is a basic method of learning that essentially all organisms use to learn new behaviours: try a method, observe whether it works, and if it doesn't, try a new one.
• Eg: A printer not working – check the ink level, or check whether the paper tray is jammed
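The dynamic-programming idea can be sketched with the 0/1 knapsack problem named above. This is one common table-based formulation, given as an illustration under assumed inputs, not the slides' own code:

```python
def knapsack(values, weights, capacity):
    """Best total value achievable with the given capacity, each item used at most once."""
    # best[c] = best value achievable with capacity c using the items seen so far.
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # Iterate capacities downwards so each item is counted at most once.
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220
```

Each entry of `best` is a stored subproblem result, so no capacity/item combination is recomputed.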
13. Practical Examples
Internet and Networks
The need to access large amounts of information in the shortest time.
Problems of finding the best routes for data to travel.
Algorithms for searching this large amount of data to quickly find the pages on which particular information resides.
Electronic Commerce
The ability to keep information (credit card numbers, passwords, bank statements) private, safe, and secure.
Algorithms involving encryption/decryption techniques.
14. Hard Problems
We can identify the efficiency of an algorithm from its speed (how long the algorithm takes to produce the result).
Some problems have no known efficient solution. These problems are called NP-complete problems.
If we can show that a problem is NP-complete, we can instead spend our time developing an algorithm that gives a good, but not necessarily the best possible, solution.
15. Components of an Algorithm
Variables and values
Instructions
Sequences
A series of instructions
Procedures
A named sequence of instructions
We also use the following words to refer to a “procedure”:
Sub-routine
Module
Function
16. Components of an Algorithm Cont.
Selections
An instruction that decides which of two possible sequences is executed
The decision is based on a true/false condition
Repetitions
Also known as iteration or loop
Documentation
Records what the algorithm does
18. Classification of Time Complexities
•Constant time complexity: O(1)
•Linear time complexity: O(n)
•Logarithmic time complexity: O(log n)
•Quadratic time complexity: O(n²)
•Exponential time complexity: O(2ⁿ)
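The growth classes above can be illustrated as simple operation counts for an input of size n (these sketches are assumed examples, not from the slides):

```python
def constant(n):
    # O(1): the answer does not depend on n.
    return 1

def linear(n):
    # O(n): one unit of work per element.
    return sum(1 for _ in range(n))

def logarithmic(n):
    # O(log n): halve n until it reaches 1, as binary search does.
    count = 0
    while n > 1:
        n //= 2
        count += 1
    return count

def quadratic(n):
    # O(n²): a doubly nested loop over the input.
    return sum(1 for _ in range(n) for _ in range(n))

print(constant(8), linear(8), logarithmic(8), quadratic(8))  # 1 8 3 64
```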
19. Asymptotic Notation ( Ο Ω Θ )
[Big “oh”]: This notation is used to express an upper bound on the computing time of an algorithm. When we say that the time complexity of the selection sort algorithm is Ο(n²), we mean that for sufficiently large values of n, the computation time will not exceed some constant times n², i.e. it is proportional to n² (worst case).
20. [Big “oh”] Ο
• Definition : The function f(n) = Ο( g(n) ) (read as “f of n is
big oh of g of n”) iff (if and only if) there exist positive
constants c and n0 such that f(n) <= c * g(n) for all n >= n0.
• Example:
• The function 3n+2 = Ο(n) as 3n + 2 <= 4n for all n >= 2
• The function 3n+3 = Ο(n) as 3n + 3 <= 4n for all n >= 3
• The function 100n+6 = Ο(n) as 100n + 6 <= 101n for all n >= 6
• The function 10n²+4n+2 = Ο(n²) as 10n²+4n+2 <= 11n² for all n >= 5
• The function 1000n²+100n-6 = Ο(n²) as 1000n²+100n-6 <= 1001n² for all n >= 100
• The function 6*2^n + n² = Ο(2^n) as 6*2^n + n² <= 7*2^n for all n >= 4
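The witness constants c and n0 in these examples can be spot-checked numerically. The sketch below (my own illustration) verifies f(n) <= c * g(n) over a range of values n >= n0 for each (f, g, c, n0) pair from the slide:

```python
# Each tuple is (f, g, c, n0) from the Big-O examples above.
cases = [
    (lambda n: 3*n + 2,          lambda n: n,    4,   2),
    (lambda n: 3*n + 3,          lambda n: n,    4,   3),
    (lambda n: 100*n + 6,        lambda n: n,    101, 6),
    (lambda n: 10*n*n + 4*n + 2, lambda n: n*n,  11,  5),
    (lambda n: 6*2**n + n*n,     lambda n: 2**n, 7,   4),
]

# Check f(n) <= c * g(n) for a range of n starting at n0.
for f, g, c, n0 in cases:
    assert all(f(n) <= c * g(n) for n in range(n0, n0 + 200))
```

A finite check is not a proof, of course, but it is a quick way to catch a wrong constant before attempting the algebra.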
21. [Omega] Ω
This notation is used to express a lower bound on the computing time
of an algorithm. When we say that the best case time complexity of
insertion sort is Ω(n), it means that for sufficiently large values of ‘n’,
the minimum computation time will be some constant * n, i.e.
proportional to n.
22. [Omega] Ω
• Definition : The function f(n) = Ω( g(n) ) (read as “f of n is
omega of g of n”) iff there exist positive constants c and n0
such that f(n) >= c * g(n) for all n >= n0.
• Example:
• The function 3n+2 = Ω(n) as 3n + 2 >= 3n for all n >= 1
• The function 3n+3 = Ω(n) as 3n + 3 >= 3n for all n >= 1
• The function 100n+6 = Ω(n) as 100n + 6 >= 100n for all n >= 1
• The function 10n²+4n+2 = Ω(n²) as 10n²+4n+2 >= n² for all n >= 1
• The function 6*2^n + n² = Ω(2^n) as 6*2^n + n² >= 2^n for all n >= 1
23. [Theta] Θ
•This notation is used to express the time complexity of
an algorithm when it is the same for the worst and best cases.
For example, the best and worst case time complexities
of selection sort are Ο(n²) and Ω(n²), so it can be
expressed as Θ(n²).
•Definition : The function f(n) = Θ( g(n) ) (read as “f of
n is theta of g of n”) iff there exist positive constants
c1, c2 and n0 such that c1 * g(n) <= f(n) <= c2 * g(n)
for all n >= n0.
24. • Example:
• The function 3n+2 = Θ(n) as 3n + 2 >= 3n for all n >= 2 and 3n + 2 <= 4n for all n >= 2.
• The function 3n+3 = Θ(n)
• The function 10n²+4n+2 = Θ(n²)
• The function 6*2^n + n² = Θ(2^n)
25. • Proving the Correctness of Algorithms
• Preconditions and Postconditions
• Loop Invariants
• Induction – Math Review
• Using Induction to Prove Algorithms
26. What does an algorithm do?
•An algorithm is described by:
• Input data
• Output data
• Preconditions: specifies restrictions on input data
• Postconditions: specifies what is the result
•Example: Binary Search
• Input data: a:array of integer;
x:integer;
• Output data: found:boolean;
• Precondition: a is sorted in ascending order
• Postcondition: found is true if x is in a, and found
is false otherwise
27. Correct algorithms
• An algorithm is correct if:
• for any correct input data:
•it stops and
•it produces correct output.
•Correct input data: satisfies precondition
•Correct output data: satisfies postcondition
28. Proving correctness
• An algorithm = a list of actions
• Proving that an algorithm is totally
correct:
1. Proving that it will terminate
2. Proving that the list of actions applied to the
precondition imply the postcondition
• This is easy to prove for simple sequential
algorithms
• This can be complicated to prove for repetitive
algorithms (containing loops or recursion)
• use techniques based on loop invariants and induction
29. Example – a sequential
algorithm
Swap1(x,y):
aux := x
x := y
y := aux
Precondition:
x = a and y = b
Postcondition:
x = b and y = a
Proof: the list of actions
applied to the
precondition imply the
postcondition
1. Precondition:
x = a and y = b
2. aux := x => aux = a
3. x : = y => x = b
4. y := aux => y = a
5. x = b and y = a is
the Postcondition
30. Example – a repetitive
algorithm
Algorithm Sum_of_N_numbers
Input: a, an array of N numbers
Output: s, the sum of the N numbers
in a
s:=0;
k:=0;
While (k<N) do
k:=k+1;
s:=s+a[k];
end
Proof: the list of actions
applied to the
precondition imply
the postcondition
BUT: we cannot
enumerate all the
actions in case of a
repetitive
algorithm !
We use techniques
based on loop
invariants and
induction
31. Loop invariants
•A loop invariant is a logical predicate such
that: if it is satisfied before entering any
single iteration of the loop then it is also
satisfied after the iteration
32. Example: Loop invariant for
Sum of n numbers
Algorithm Sum_of_N_numbers
Input: a, an array of N numbers
Output: s, the sum of the N numbers in a
s:=0;
k:=0;
While (k<N) do
k:=k+1;
s:=s+a[k];
end
Loop invariant = induction
hypothesis: At step k, s holds the
sum of the first k numbers
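The invariant can also be checked mechanically by translating the algorithm into executable form. In the sketch below (my own translation, using 0-based indexing), the assert statements encode the loop invariant and the postcondition:

```python
def sum_of_n_numbers(a):
    """Sum_of_N_numbers from the slide, with the invariant asserted."""
    s = 0
    k = 0
    while k < len(a):
        # Loop invariant: s is the sum of the first k numbers, a[0..k-1].
        assert s == sum(a[:k])
        k += 1
        s += a[k - 1]       # the slides' 1-based a[k]
    assert s == sum(a)      # postcondition: s is the sum of all N numbers
    return s
```

Running the function on sample inputs exercises the invariant at every iteration; an invariant violation would raise an AssertionError immediately.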
33. Using loop invariants in proofs
• We must show the following 3 things
about a loop invariant:
1. Initialization: It is true prior to the first
iteration of the loop.
2. Maintenance: If it is true before an
iteration of the loop, it remains true
before the next iteration.
3. Termination: When the loop terminates,
the invariant gives us a useful property
that helps show that the algorithm is
correct.
34. Example: Proving the
correctness of the Sum
algorithm (1)
• Induction hypothesis: S= sum of the first k
numbers
1. Initialization: The hypothesis is true at the
beginning of the loop:
Before the first iteration: k=0, S=0. The first 0 numbers
have sum zero (there are no numbers) =>
hypothesis true before entering the loop
35. Example: Proving the
correctness of the Sum
algorithm (2)
• Induction hypothesis: S= sum of the first k numbers
2. Maintenance: If hypothesis is true before step k, then it will
be true before step k+1 (immediately after step k is
finished)
We assume that it is true at beginning of step k: “S is the sum of the first k
numbers”
We have to prove that after executing step k, at the beginning of step k+1:
“S is the sum of the first k+1 numbers”
We calculate the value of s at the end of this step:
k:=k+1, s:=s+a[k] => s is the sum of the first k+1 numbers
36. Example: Proving the
correctness of the Sum
algorithm (3)
• Induction hypothesis: S= sum of the first k
numbers
3. Termination: When the loop terminates,
the hypothesis implies the correctness of
the algorithm
The loop terminates when k=n=> s= sum of
first k=n numbers => postcondition of
algorithm, DONE
37. Loop invariants and induction
• Proving loop invariants is similar to mathematical
induction:
• showing that the invariant holds before the first iteration corresponds to the
base case, and
• showing that the invariant holds from iteration to iteration corresponds to
the inductive step.
38. Mathematical induction -
Review
• Let T be a theorem that we want to
prove. T includes a natural parameter n.
• Proving that T holds for all natural
values of n is done by proving following
two conditions:
1. T holds for n=1
2. For every n>1 if T holds for n-1, then T holds for n
Terminology:
T= Induction Hypothesis
1= Base case
2= Inductive step
39. Mathematical induction -
Review
• Strong Induction: a variant of induction
where the inductive step builds up on all
the smaller values
• Proving that T holds for all natural
values of n is done by proving following
two conditions:
1. T holds for n=1
2. For every n>1 if T holds for all k<= n-1, then T holds for n
40. Mathematical induction review –
Example1
• Theorem: The sum of the first n natural
numbers is n*(n+1)/2
• Proof: by induction on n
1. Base case: If n=1, s(1)=1=1*(1+1)/2
2. Inductive step: We assume that s(n)=n*(n+1)/2,
and prove that this implies s(n+1)=(n+1)*(n+2)/2 ,
for all n>=1
s(n+1)=s(n)+(n+1)=n*(n+1)/2+(n+1)=(n+1)*(n+2)/2
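The closed form can be spot-checked against the direct sum for small n. This is an empirical illustration of the theorem, not a substitute for the induction proof:

```python
# Compare the direct sum 1 + 2 + ... + n with the closed form n*(n+1)/2.
for n in range(1, 200):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2
```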
41. Mathematical induction review –
Example2
•Theorem: Every amount of postage that is at
least 12 cents can be made from 4-cent and
5-cent stamps.
•Proof: by induction on the amount of postage
• Postage (p) = m * 4 + n * 5
• Base case:
• Postage(12) = 3 * 4 + 0 * 5
• Postage(13) = 2 * 4 + 1 * 5
• Postage(14) = 1 * 4 + 2 * 5
• Postage(15) = 0 * 4 + 3 * 5
42. Mathematical induction review –
Example2 (cont)
• Inductive step: We assume that we can construct
postage for every value from 12 up to k. We need
to show how to construct k + 1 cents of postage.
Since we have proved base cases up to 15 cents,
we can assume that k + 1 ≥ 16.
• Since k + 1 ≥ 16, (k + 1) − 4 ≥ 12. So by the inductive
hypothesis, we can construct postage for (k + 1) − 4
cents: (k + 1) − 4 = m * 4 + n * 5
• But then k + 1 = (m + 1) * 4 + n * 5. So we can
construct k + 1 cents of postage using (m+1) 4-
cent stamps and n 5-cent stamps
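Because this inductive step is constructive, it translates directly into a recursive procedure. The sketch below (the function name is mine) mirrors the proof: the base cases 12–15 are tabulated, and every larger amount reduces to a smaller one by removing a 4-cent stamp:

```python
def postage(p):
    """Return (m, n) with p == 4*m + 5*n, for any p >= 12,
    following the structure of the strong-induction proof."""
    base = {12: (3, 0), 13: (2, 1), 14: (1, 2), 15: (0, 3)}  # base cases
    if p in base:
        return base[p]
    m, n = postage(p - 4)   # inductive hypothesis applies: p - 4 >= 12
    return m + 1, n         # add back one 4-cent stamp

# Every amount from 12 cents up is representable:
for p in range(12, 100):
    m, n = postage(p)
    assert 4 * m + 5 * n == p
```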
43. Correctness of algorithms
• Induction can be used for proving the correctness of
repetitive algorithms:
• Iterative algorithms:
• Loop invariants
• Induction hypothesis = loop invariant = relationships between the variables during loop
execution
• Recursive algorithms
• Direct induction
• Hypothesis = a recursive call itself ; often a case for applying strong induction
44. Example: Correctness proof
for Decimal to Binary
Conversion
Algorithm Decimal_to_Binary
Input: n, a positive integer
Output: b, an array of bits, the bin repr. of n,
starting with the least significant bits
t:=n;
k:=0;
While (t>0) do
k:=k+1;
b[k]:=t mod 2;
t:=t div 2;
end
It is a repetitive (iterative)
algorithm, thus we use loop
invariants and proof by induction
45. Example: Loop invariant for
Decimal to Binary Conversion
Algorithm Decimal_to_Binary
Input: n, a positive integer
Output: b, an array of bits, the bin repr. of n
t:=n;
k:=0;
While (t>0) do
k:=k+1;
b[k]:=t mod 2;
t:=t div 2;
end
At step k, b holds the k least
significant bits of n, and the value
of t, shifted left by k positions,
corresponds to the remaining bits
[diagram: array b, cells 1..k weighted by 2^0, 2^1, 2^2, …, 2^(k-1)]
46. Example: Loop invariant for
Decimal to Binary Conversion
Algorithm Decimal_to_Binary
Input: n, a positive integer
Output: b, an array of bits, the bin repr. of n
t:=n;
k:=0;
While (t>0) do
k:=k+1;
b[k]:=t mod 2;
t:=t div 2;
end
Loop invariant: If m is the
integer represented by array
b[1..k], then n = t*2^k + m
[diagram: array b, cells 1..k weighted by 2^0, 2^1, 2^2, …, 2^(k-1)]
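Translating the algorithm into executable form and asserting the invariant n = t*2^k + m at the top of every iteration gives a running version of the proof. This is my own sketch, storing the bits in a 0-based Python list (least significant bit first):

```python
def decimal_to_binary(n):
    """Decimal_to_Binary from the slide, with the invariant asserted."""
    t, k, b = n, 0, []
    while t > 0:
        # m = the integer currently represented by the bits in b.
        m = sum(bit * 2**i for i, bit in enumerate(b))
        assert n == t * 2**k + m          # loop invariant
        b.append(t % 2)
        t //= 2
        k += 1
    # Postcondition: b is the binary representation of n (LSB first).
    assert sum(bit * 2**i for i, bit in enumerate(b)) == n
    return b
```

For example, decimal_to_binary(6) returns [0, 1, 1], the bits of 110₂ starting with the least significant bit, and the invariant holds at every iteration along the way.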
47. Example: Proving the
correctness of the
conversion algorithm
• Induction hypothesis=Loop Invariant:
If m is the integer represented by array
b[1..k], then n=t*2^k+m
• To prove the correctness of the
algorithm, we have to prove the 3
conditions:
1. Initialization: The hypothesis is true at the
beginning of the loop
2. Maintenance: If hypothesis is true for step k, then
it will be true for step k+1
3. Termination: When the loop terminates, the
hypothesis implies the correctness of the
algorithm
48. Example: Proving the
correctness of the
conversion algorithm (1)
• Induction hypothesis: If m is the integer
represented by array b[1..k], then
n=t*2^k+m
1. The hypothesis is true at the beginning of
the loop:
k=0, t=n, m=0(array is empty)
n=n*2^0+0
49. Example: Proving the
correctness of the
conversion algorithm (2)
• Induction hypothesis: If m is the integer
represented by array b[1..k], then n=t*2^k+m
2. If hypothesis is true for step k, then it will be true
for step k+1
At the start of step k: assume that n=t*2^k+m, calculate the
values at the end of this step
If t=even then: t mod 2==0, m unchanged, t=t / 2, k=k+1=> (t /
2) * 2 ^ (k+1) + m = t*2^k+m=n
If t=odd then: t mod 2 ==1, b[k+1] is set to 1, m=m+2^k , t=(t-
1)/2, k=k+1 => (t-1)/2*2^(k+1)+m+2^k=t*2^k+m=n
50. Example: Proving the
correctness of the
conversion algorithm (3)
• Induction hypothesis: If m is the integer
represented by array b[1..k], then
n=t*2^k+m
3. When the loop terminates, the hypothesis
implies the correctness of the algorithm
The loop terminates when t=0 =>
n=0*2^k+m=m
n==m, proved
51. Proof of Correctness for
Recursive Algorithms
• In order to prove recursive algorithms,
we have to:
1. Prove the partial correctness (the fact that the
program behaves correctly)
• we assume that all recursive calls with arguments that
satisfy the preconditions behave as described by the
specification, and use it to show that the algorithm
behaves as specified
2. Prove that the program terminates
• any chain of recursive calls eventually ends and all loops, if
any, terminate after some finite number of iterations.
52. Example - Merge Sort
MERGE-SORT(A,p,r)
if p < r
q = ⌊(p+r)/2⌋
MERGE-SORT(A,p,q)
MERGE-SORT(A,q+1,r)
MERGE(A,p,q,r)
Precondition:
Array A has at least 1 element between
indexes p and r (p<=r)
Postcondition:
The elements between indexes p and r are
sorted
[diagram: array segment A[p..r] split at midpoint q]
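A runnable version of the pseudocode above is sketched below, using 0-based indices. The MERGE step is filled in with a standard two-way merge, since the slides leave it unspecified:

```python
def merge_sort(A, p, r):
    """Sort A[p..r] in place, following MERGE-SORT(A,p,r) from the slide."""
    if p < r:
        q = (p + r) // 2          # floor of the midpoint
        merge_sort(A, p, q)       # sort left half A[p..q]
        merge_sort(A, q + 1, r)   # sort right half A[q+1..r]
        merge(A, p, q, r)         # combine the two sorted halves

def merge(A, p, q, r):
    """Merge sorted runs A[p..q] and A[q+1..r] back into A[p..r]."""
    left, right = A[p:q + 1], A[q + 1:r + 1]
    i = j = 0
    for k in range(p, r + 1):
        # Take from left while it has elements and its head is <= right's.
        if j >= len(right) or (i < len(left) and left[i] <= right[j]):
            A[k] = left[i]; i += 1
        else:
            A[k] = right[j]; j += 1

a = [5, 2, 4, 6, 1, 3]
merge_sort(a, 0, len(a) - 1)
# a is now [1, 2, 3, 4, 5, 6]
```

The precondition p <= r holds at every recursive call, and each call operates on a strictly shorter segment, which is exactly the termination argument the proof template on the next slide requires.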
53. Correctness proofs
for recursive algorithms
• Base Case: Prove that RECURSIVE works for n = small_value
• Inductive Hypothesis:
• Assume that RECURSIVE works correctly for n=small_value, ..., k
• Inductive Step:
• Show that RECURSIVE works correctly for n = k + 1
RECURSIVE(n) is
if (n=small_value)
return ct
else
RECURSIVE(n1)
…
RECURSIVE(nr)
some_code
n1, n2, … nr are some
values smaller than n but
bigger than small_value
54. •Proving that an algorithm is totally
correct means:
1.Proving that it will terminate
2.Proving that the list of actions applied to the
precondition imply the postcondition
•How to prove repetitive algorithms:
• Iterative algorithms: use Loop invariants, Induction
• Recursive algorithms: use induction using as
hypothesis the recursive call