What is an Algorithm
Time Complexity
Space Complexity
Asymptotic Notations
Recursive Analysis
Selection Sort
Insertion Sort
Recurrences
Substitution Method
Master Theorem Method
Recursion Tree Method
This document discusses algorithm analysis tools. It explains that algorithm analysis is used to determine which of several candidate algorithms for a problem is most efficient. Theoretical analysis counts primitive operations to approximate runtime as a function of input size. Common complexity classes like constant, linear, quadratic, and exponential time are defined by how quickly runtime grows with input size. Big-O notation represents the asymptotic upper bound of a function's growth rate and is used to classify algorithms.
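A minimal Python sketch (the `growth` helper is my own illustration, not from the document) makes these complexity classes concrete by tabulating operation counts:

```python
def growth(n):
    """Operation counts for the common complexity classes at input size n."""
    return {"constant": 1, "linear": n, "quadratic": n * n,
            "exponential": 2 ** n}

# Doubling n leaves O(1) unchanged, doubles O(n), quadruples O(n^2),
# and squares the operation count for O(2^n).
print(growth(10))
print(growth(20))
```

The jump in the exponential column is what makes O(2^n) algorithms impractical beyond small inputs.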
This document discusses asymptotic analysis and recurrence relations. It begins by introducing asymptotic notations like Big O, Omega, and Theta notation that are used to analyze algorithms. It then discusses recurrence relations, which express the running time of algorithms in terms of input size. The document provides examples of using recurrence relations to find the time complexity of algorithms like merge sort. It also discusses how to calculate time complexity functions like f(n) asymptotically rather than calculating exact running times. The goal of this analysis is to understand how algorithm running times scale with input size.
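For instance, the merge sort recurrence T(n) = 2T(n/2) + n can be evaluated directly and checked against its closed form; a small sketch, assuming T(1) = 1 and n a power of 2:

```python
import math

def T(n):
    """Merge sort recurrence: T(n) = 2*T(n/2) + n, with T(1) = 1."""
    if n == 1:
        return 1
    return 2 * T(n // 2) + n

for n in (2, 8, 64):
    # For powers of 2 this matches n*log2(n) + n exactly, i.e. O(n log n).
    print(n, T(n), int(n * math.log2(n)) + n)
```

Evaluating the recurrence numerically like this is a quick sanity check before proving the bound by substitution.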
This document discusses algorithms and their analysis. It begins by defining an algorithm and analyzing its time and space complexity. It then discusses different asymptotic notations used to describe an algorithm's runtime such as Big-O, Omega, and Theta notations. Examples are provided to illustrate how to determine the tight asymptotic bound of functions. The document also covers algorithm design techniques like divide-and-conquer and analyzes merge sort as an example. It concludes by defining recurrences used to describe algorithms and provides an example recurrence for merge sort.
This document discusses algorithms and their analysis. It begins by defining an algorithm and its key characteristics like being finite, definite, and terminating after a finite number of steps. It then discusses designing algorithms to minimize cost and analyzing algorithms to predict their performance. Various algorithm design techniques are covered like divide and conquer, binary search, and its recursive implementation. Asymptotic notations like Big-O, Omega, and Theta are introduced to analyze time and space complexity. Specific algorithms like merge sort, quicksort, and their recursive implementations are explained in detail.
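The recursive binary search mentioned here can be sketched as follows (an illustrative version; names and conventions are mine, not the slides'):

```python
def binary_search(a, target, lo=0, hi=None):
    """Recursive binary search over a sorted list; returns an index or -1.
    Each call halves the range: T(n) = T(n/2) + c, i.e. O(log n) time."""
    if hi is None:
        hi = len(a) - 1
    if lo > hi:                      # empty range: not found
        return -1
    mid = (lo + hi) // 2
    if a[mid] == target:
        return mid
    if a[mid] < target:              # discard the left half
        return binary_search(a, target, mid + 1, hi)
    return binary_search(a, target, lo, mid - 1)
```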
The document describes the syllabus for a course on design and analysis of algorithms. It covers topics like asymptotic notations, time and space complexities, sorting algorithms, greedy methods, dynamic programming, backtracking, and NP-complete problems. It also provides examples of algorithms like computing the greatest common divisor and the Sieve of Eratosthenes for primes, and discusses pseudocode conventions. Recursive algorithms and examples like Towers of Hanoi and permutation generation are explained. Finally, it outlines the steps for designing algorithms, such as understanding the problem and choosing appropriate data structures and computational devices.
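Two of the examples named here are small enough to sketch in a few lines; an illustrative Python version of Euclid's GCD and the Towers of Hanoi move count (assuming the standard formulations):

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

def hanoi_moves(n):
    """Minimum moves for Towers of Hanoi: M(n) = 2*M(n-1) + 1, M(0) = 0,
    which solves to 2**n - 1."""
    return 0 if n == 0 else 2 * hanoi_moves(n - 1) + 1
```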
How to Calculate Time Complexity of an Algorithm (Sajid Marwat)
This document discusses algorithm analysis and complexity. It defines key terms like asymptotic complexity, Big-O notation, and time complexity. It provides examples of analyzing simple algorithms like a sum function to determine their time complexity. Common analyses include looking at loops, nested loops, and sequences of statements. The goal is to classify algorithms according to their complexity, which is important for large inputs and machine-independent. Algorithms are classified based on worst, average, and best case analyses.
This document discusses algorithm analysis and complexity. It defines key terms like algorithm, asymptotic complexity, Big-O notation, and time complexity. It provides examples of analyzing simple algorithms like summing array elements. The running time is expressed as a function of input size n. Common complexities like constant, linear, quadratic, and exponential time are introduced. Nested loops and sequences of statements are analyzed. The goal of analysis is to classify algorithms into complexity classes to understand how input size affects runtime.
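The kind of loop analysis described here can be shown on a toy sum function; a sketch (function names are illustrative):

```python
def array_sum(a):
    """One loop over n elements: the body runs n times, so T(n) = c*n + d,
    which is O(n)."""
    total = 0                 # 1 primitive operation
    for x in a:               # executes len(a) times
        total += x            # 1 addition per iteration
    return total

def pair_count(a):
    """Nested loops: the inner body runs n*n times, so O(n^2)."""
    count = 0
    for _x in a:
        for _y in a:
            count += 1
    return count
```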
The document discusses analyzing the running time of algorithms using Big-O notation. It begins by introducing Big-O notation and how it is used to generalize the running time of algorithms as input size grows. It then provides examples of calculating the Big-O running time of simple programs and algorithms with loops but no subprogram calls or recursion. Key concepts covered include analyzing worst-case and average-case running times, and rules for analyzing the running time of programs with basic operations and loops.
Introduction to the Analysis and Design of Algorithms (luzenith_g)
The document discusses algorithms and their analysis. It defines an algorithm as a well-defined computational procedure that takes inputs and produces outputs. It discusses analyzing algorithms to determine their time and space complexity, and how this involves determining how the resources required grow with the size of the problem. It provides examples of analyzing simple algorithms and determining whether they have linear, quadratic, or other complexity.
The document discusses algorithms and their analysis. It defines an algorithm as a well-defined computational procedure that takes inputs and produces outputs. It discusses analyzing algorithms based on their time complexity, space complexity, and correctness. It provides examples of analyzing simple algorithms and calculating their complexity based on the number of elementary operations.
Basic Computer Engineering Unit II as per RGPV Syllabus (NANDINI SHARMA)
The document provides an overview of algorithms and computational complexity. It defines an algorithm as a set of unambiguous steps to solve a problem, and discusses how algorithms can be expressed using different languages. It then covers algorithmic complexity and how to analyze the time complexity of algorithms using asymptotic notation like Big-O notation. Specific time complexities like constant, linear, logarithmic, and quadratic time are defined. The document also discusses flowcharts as a way to represent algorithms graphically and introduces some basic programming concepts.
The document discusses data structures and algorithms. It defines key concepts like algorithms, programs, data structures, and asymptotic analysis. It explains how to analyze algorithms to determine their efficiency, including analyzing best, worst, and average cases. Common notations for describing asymptotic running time like Big-O, Big-Omega, and Big-Theta are introduced. The document provides examples of analyzing sorting algorithms like insertion sort and calculating running times. It also discusses techniques for proving an algorithm's correctness like assertions and loop invariants.
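An illustrative insertion sort with its loop invariant stated as a comment (a sketch of the standard algorithm; the document's own pseudocode may differ):

```python
def insertion_sort(a):
    """In-place insertion sort. Loop invariant: at the start of iteration i,
    a[0:i] contains its original elements in sorted order.
    Best case O(n) on sorted input; worst case O(n^2) on reversed input."""
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]       # shift larger elements one slot right
            j -= 1
        a[j + 1] = key
    return a
```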
Order notation is a mathematical method used to analyze algorithms as the problem size increases. It allows comparison of performance independent of machine-specific factors. Common notations include Big-O (upper bound), Big-Omega (lower bound), and Theta (tight bound). These describe the limiting behavior of execution time as the problem size approaches infinity and are used to classify algorithms by their running time growth rates like constant, logarithmic, linear, quadratic, and exponential.
Algorithms required for data structures (basics like Arrays, Stacks, Linked Li...) (DebiPrasadSen)
An algorithm is a well-defined computational procedure that takes input values and produces output values. It has several key properties including being well-defined, producing the correct output for any possible input, and terminating after a finite number of steps. When analyzing algorithms, asymptotic complexity is important as it measures how processing requirements grow with increasing input size. Common complexity classes include constant O(1), logarithmic O(log n), linear O(n), quadratic O(n^2), and exponential O(2^n) time. For large inputs, slower asymptotic growth indicates better performance.
The document discusses algorithms and algorithm analysis. It provides examples to illustrate key concepts in algorithm analysis including worst-case, average-case, and best-case running times. The document also introduces asymptotic notation such as Big-O, Big-Omega, and Big-Theta to analyze the growth rates of algorithms. Common growth rates like constant, logarithmic, linear, quadratic, and exponential functions are discussed. Rules for analyzing loops and consecutive statements are provided. Finally, algorithms for two problems - selection and maximum subsequence sum - are analyzed to demonstrate algorithm analysis techniques.
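For the maximum subsequence sum problem mentioned here, one standard O(n) solution is Kadane's algorithm; a sketch (the document's own treatment may differ):

```python
def max_subsequence_sum(a):
    """Maximum contiguous subsequence sum in a single O(n) pass
    (Kadane's algorithm): extend the current run or restart it at zero."""
    best = current = 0
    for x in a:
        current = max(0, current + x)
        best = max(best, current)
    return best
```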
The document discusses various topics related to algorithms including introduction to algorithms, algorithm design, complexity analysis, asymptotic notations, and data structures. It provides definitions and examples of algorithms, their properties and categories. It also covers algorithm design methods and approaches. Complexity analysis covers time and space complexity. Asymptotic notations like Big-O, Omega, and Theta notations are introduced to analyze algorithms. Examples are provided to find the upper and lower bounds of algorithms.
Ch-2 Final Exam Document: Compiler Design Elements (MAHERMOHAMED27)
The "Project Risk Management" course transformed me from a passive observer of risk to a proactive risk management champion. Here are some key learnings that will forever change my approach to projects:
The Proactive Mindset: I transitioned from simply reacting to problems to anticipating and mitigating them. The course emphasized the importance of proactive risk identification through techniques like brainstorming, SWOT analysis, and FMEA (Failure Mode and Effect Analysis). This allows for early intervention and prevents minor issues from snowballing into major roadblocks.
Risk Assessment and Prioritization: I learned to assess the likelihood and impact of each identified risk. The course introduced qualitative and quantitative risk analysis methods, allowing me to prioritize risks based on their potential severity. This empowers me to focus resources on the most critical threats to project success.
Developing Response Strategies: The course equipped me with a toolbox of risk response strategies. I learned about risk avoidance, mitigation, transference, and acceptance strategies, allowing me to choose the most appropriate approach for each risk. For example, I can now advocate for additional training to mitigate a knowledge gap risk or build buffer time into the schedule to address potential delays.
Communication and Monitoring: The course highlighted the importance of clear communication regarding risks. I learned to effectively communicate risks to stakeholders, ensuring everyone is aware of potential challenges and mitigation plans. Additionally, I gained valuable insights into risk monitoring and tracking, allowing for continuous evaluation and adaptation as the project progresses.
In essence, "Project Risk Management" equipped me with the knowledge and tools to navigate the inevitable uncertainties of projects. By embracing a proactive approach, I can now lead projects with greater confidence, increasing the chances of achieving successful outcomes.
This document discusses analyzing the efficiency and complexity of algorithms. It begins by explaining that running time depends on input size and nature, and is generally measured by the number of steps or operations. Different examples are provided to demonstrate analyzing loops and recursive functions to derive asymptotic complexity bounds. Key points covered include using Big-O notation to classify algorithms according to worst-case running time, analyzing nested loops, sequences of statements, and conditional statements. The document emphasizes that asymptotic complexity focuses on higher-order terms as input size increases.
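A recursive function of the kind analyzed here, with its recurrence noted in a comment (an illustrative sketch, not the document's example):

```python
def recursive_sum(a, n):
    """Sum of the first n elements. The recurrence T(n) = T(n-1) + c,
    T(0) = c0, unrolls to c*n + c0, i.e. O(n) time (and O(n) stack depth)."""
    if n == 0:
        return 0
    return recursive_sum(a, n - 1) + a[n - 1]
```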
Generative Artificial Intelligence and Large Language Model (Shiwani Gupta)
Natural Language Processing (NLP) is a discipline dedicated to enabling computers to comprehend and generate human language.
Word embedding is a technique in NLP that converts words into dense numerical vectors, capturing their semantic meanings and contextual relationships. Analyzing sequential data often requires techniques such as time series analysis and sequence modeling, using machine learning models like Recurrent Neural Networks (RNNs) and Long Short-Term Memory networks (LSTMs).
Encoder-Decoder architecture is an RNN framework designed for sequence-to-sequence tasks. Beam Search is a search algorithm used in sequence-to-sequence models, particularly in natural language processing tasks. BLEU is a popular evaluation metric for assessing the quality of text generated by machine translation systems. Attention mechanism allows models to selectively focus on the most relevant information within large datasets, thereby enhancing efficiency and accuracy in data processing.
The document provides an introduction to unsupervised learning and reinforcement learning. It then discusses eigenvalues and eigenvectors, showing how to calculate them from a matrix. It provides examples of covariance matrices and of using Gaussian elimination to solve for eigenvectors. Finally, it discusses principal component analysis and different clustering algorithms like K-means clustering.
Similar to module1_Introductiontoalgorithms_2022.pdf (20)
Cross validation is a technique for evaluating machine learning models by splitting the dataset into training and validation sets and training the model multiple times on different splits, to reduce variance. K-fold cross validation splits the data into k equally sized folds, where each fold is used once for validation while the remaining k-1 folds are used for training. Leave-one-out cross validation uses a single observation from the dataset as the validation set. Stratified k-fold cross validation ensures each fold has the same class proportions as the full dataset. Grid search evaluates all combinations of hyperparameters specified as a grid, while randomized search samples hyperparameters randomly within specified ranges. Learning curves show training and validation performance as a function of training set size and can diagnose underfitting.
This document provides an overview of supervised machine learning algorithms for classification, including logistic regression, k-nearest neighbors (KNN), support vector machines (SVM), and decision trees. It discusses key concepts like evaluation metrics, performance measures, and use cases. For logistic regression, it covers the mathematics behind maximum likelihood estimation and gradient descent. For KNN, it explains the algorithm and discusses distance metrics and a numerical example. For SVM, it outlines the concept of finding the optimal hyperplane that maximizes the margin between classes.
The document provides information on solving the sum of subsets problem using backtracking. It discusses two formulations - one where solutions are represented by tuples indicating which numbers are included, and another where each position indicates if the corresponding number is included or not. It shows the state space tree that represents all possible solutions for each formulation. The tree is traversed depth-first to find all solutions where the sum of the included numbers equals the target sum. Pruning techniques are used to avoid exploring non-promising paths.
The document discusses the greedy method and its applications. It begins by defining the greedy approach for optimization problems, noting that greedy algorithms make locally optimal choices at each step in hopes of finding a global optimum. Some applications of the greedy method include the knapsack problem, minimum spanning trees using Kruskal's and Prim's algorithms, job sequencing with deadlines, and finding the shortest path using Dijkstra's algorithm. The document then focuses on explaining the fractional knapsack problem and providing a step-by-step example of solving it using a greedy approach. It also provides examples and explanations of Kruskal's algorithm for finding minimum spanning trees.
The document describes various divide and conquer algorithms including binary search, merge sort, quicksort, and finding maximum and minimum elements. It begins by explaining the general divide and conquer approach of dividing a problem into smaller subproblems, solving the subproblems independently, and combining the solutions. Several examples are then provided with pseudocode and analysis of their divide and conquer implementations. Key algorithms covered in the document include binary search (log n time), merge sort (n log n time), and quicksort (n log n time on average).
This document provides an outline for a machine learning syllabus. It includes 14 modules covering topics like machine learning terminology, supervised and unsupervised learning algorithms, optimization techniques, and projects. It lists software and hardware requirements for the course. It also discusses machine learning applications, issues, and the steps to build a machine learning model.
The document discusses problem-solving agents and their approach to solving problems. Problem-solving agents (1) formulate a goal based on the current situation, (2) formulate the problem by defining relevant states and actions, and (3) search for a solution by exploring sequences of actions that lead to the goal state. Several examples of problems are provided, including the 8-puzzle, robotic assembly, the 8 queens problem, and the missionaries and cannibals problem. For each problem, the relevant states, actions, goal tests, and path costs are defined.
The simplex method is a linear programming algorithm that can solve problems with more than two decision variables. It works by generating a series of solutions, called tableaus, where each tableau corresponds to a corner point of the feasible solution space. The algorithm starts at the initial tableau, which corresponds to the origin. It then shifts to adjacent corner points, moving in the direction that optimizes the objective function. This process of generating new tableaus continues until an optimal solution is found.
The document discusses functions and the pigeonhole principle. It defines what a function is, how functions can be represented graphically and with tables and ordered pairs. It covers one-to-one, onto, and bijective functions. It also discusses function composition, inverse functions, and the identity function. The pigeonhole principle states that if n objects are put into m containers where n > m, then at least one container must hold more than one object. Examples are given to illustrate how to apply the principle to problems involving months, socks, and selecting numbers.
The document discusses relations and their representations. It defines a binary relation as a subset of A×B where A and B are nonempty sets. Relations can be represented using arrow diagrams, directed graphs, and zero-one matrices. A directed graph represents the elements of A as vertices and draws an edge from vertex a to b if aRb. The zero-one matrix representation assigns 1 to the entry in row a and column b if (a,b) is in the relation, and 0 otherwise. The document also discusses indegrees, outdegrees, composite relations, and properties of relations like reflexivity.
This document discusses logic and propositional logic. It covers the following topics:
- The history and applications of logic.
- Different types of statements and their grammar.
2. 2
What is an algorithm?
An algorithm is
a sequence of unambiguous instructions
for solving a computational problem,
i.e., for obtaining a required output
for any legitimate input
in a finite amount of time.
Shiwani Gupta
3. 4
Features of Algorithm
• Input : zero or more valid inputs are clearly specified
• Output: produce at least 1 correct output given valid input
• Definiteness: clearly and unambiguously specified
instructions
• Finiteness : terminates after finite steps for all cases
• Effectiveness: steps are sufficiently simple and basic
4. 5
A Brief History of Algorithms
• According to the Oxford English Dictionary, the
word algorithm is a combination of the Middle
English word algorism with arithmetic.
• The word algorism derives from the name of Arabic
mathematician Al-Khwarizmi.
• Al-Khwarizmi wrote a book on solving equations
from whose title the word algebra derives.
• It is commonly believed that the first algorithm was
Euclid’s Algorithm for finding the greatest common
divisor of two integers, m and n (m ≥n).
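Euclid's algorithm itself is short; a minimal Python sketch of the classical remainder formulation (the slide only names the algorithm, so this rendering is illustrative):

```python
def gcd(m, n):
    # Euclid's algorithm: repeatedly replace (m, n) by (n, m mod n)
    # until the remainder is 0; the last nonzero value is the gcd.
    while n != 0:
        m, n = n, m % n
    return m

print(gcd(60, 24))  # 12
```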
5. 6
Rules for writing a Pseudocode
• Head
Algorithm name (<parameter list>)
//Problem Description:
// Input:
// Output:
• Body {includes programming constructs or assignment statements}
Compound statements enclosed in { }
Single line comments //
An identifier can be an alphanumeric string beginning with a letter
Use assignment operator
Boolean, Logical, Relational operators
Array indices [ ] begin with 0
Input and Output using read(val) and write(“Hello”)
if (condition) then statement
if (condition) then statement else statement
6. 7
while (cond) do
{
stmt 1
stmt 2
:
:
stmt n
}
for var ← val1 to valn do
{
stmt 1
stmt 2
:
:
stmt n
}
Rules for writing a Pseudocode
repeat
{
stmt 1
stmt 2
:
:
stmt n
} until (condition)
break stmt to exit from inner loop
return stmt to return control from one
point to another
7. 8
Classification by Design Paradigm
• Brute Force
• Divide and Conquer / Decrease and Conquer
– Merge Sort / Binary Search
• Greedy Method
– Kruskal's algorithm for Minimum Spanning Tree
• Dynamic Programming
– All Pairs Shortest Path
• Search and Enumeration (search algorithms, branch and bound,
backtracking)
– Graph Problems
• Probabilistic, Heuristic, and Genetic algorithms
8. 9
Classification of Algorithm by Implementation
• Recursion / Iteration
– Tower of Hanoi, Fibonacci
• Logical
– Algorithm = logic + control
• Serial / Parallel or Distributed
– Sorting, Iterative
• Deterministic / Non Deterministic
– Exact decision / Guess via heuristics
• Exact / Approximate
– Approximate algorithms are useful for hard problems
9. 10
Performance Analysis of Algorithm
• Determine run-time of a program as function of
input –TIME COMPLEXITY
• Determine total or maximum memory required for
program data – SPACE COMPLEXITY
• Determine total size of program code
• Determine whether program correctly computes
desired result
• Determine complexity of program
– Ease of reading, understanding and modification
• Determine robustness of program
– Dealing with unexpected and erroneous input
10. 11
Space Complexity
• The space S(p) needed by an algorithm is the sum of
a fixed part and a variable part
• The fixed part c includes space for
– Instructions
– Variables
– Identifiers
– Constants
• The variable part Sp includes space for
– Variables whose size is dependent on the particular
problem instance being solved
– Recursion stack space
S(p) = c + Sp
11. 12
Algorithm abc (a, b, c)
{
return a+b+b*c+(a+b-c)/(a+b)+4.0
}
For every instance, three computer words are required to store the variables a, b, and c:
Sp() = 3
12. 13
Algorithm Sum (a, n)
{
s:=0.0
for i ← 1 to n do
s = s + a[i]
return s
}
Every instance needs to store array a[] and n
Space needed to store n = 1 word
Space needed to store a[] = n floating point words
Space needed to store i and s = 2 words
Sp (n) = n+3
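The same word-counting can be read directly off a runnable version; this Python sketch mirrors the pseudocode (Python floats stand in for the slide's "floating point words"):

```python
def array_sum(a, n):
    # Fixed part: one word each for n, i, and s; variable part: the
    # n-element array a, giving Sp(n) = n + 3 words as on the slide.
    s = 0.0
    for i in range(n):
        s = s + a[i]
    return s
```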
13. 14
Time Complexity
• The time complexity of a problem is
– The number of steps that it takes to solve an instance of
the problem as a function of the size of the input (usually
measured in bits), using the most efficient algorithm.
• The time needed by an algorithm T(p) is the sum of
a fixed part and a variable part
– The fixed part includes compile time c which is
independent of problem instance.
– The variable part is the run time tp which is dependent on
problem instance.
T(p) = c + tp
14. 15
Analyzing Running Time
T(n), or the running time of a particular algorithm on input of size n, is
taken to be the number of times the instructions in the algorithm are
executed.
Example pseudo code illustrates the calculation of the mean (average) of
a set of n numbers:

Statement                    Number of times executed
1. read(n)                   1
2. sum ← 0                   1
3. i ← 0                     1
4. while (i < n) do          n+1
5.   read(number)            n
6.   sum ← sum + number      n
7.   i ← i + 1               n
8. mean ← sum / n            1
The computing time for this algorithm in terms of input size n is: T(n) = 4n + 5.
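The same step counting can be checked mechanically. In this Python sketch (`mean_with_count` is a hypothetical helper, not from the slides), a counter is incremented at every executed step and reproduces T(n) = 4n + 5:

```python
def mean_with_count(numbers):
    # Mirrors the pseudo code's step counting.
    n = len(numbers)            # 1. read(n)
    count = 1
    total = 0.0                 # 2. sum <- 0
    count += 1
    i = 0                       # 3. i <- 0
    count += 1
    while True:
        count += 1              # 4. while test: executed n+1 times
        if not (i < n):
            break
        number = numbers[i]     # 5. read(number)
        count += 1
        total = total + number  # 6. sum <- sum + number
        count += 1
        i = i + 1               # 7. i <- i + 1
        count += 1
    mean = total / n            # 8. mean <- sum / n
    count += 1
    return mean, count

print(mean_with_count([2.0, 4.0, 6.0]))  # (4.0, 17) and 4*3 + 5 = 17
```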
15. 16
Sr. No. | Statement            | S/E | Freq. | Total
1       | Algorithm Sum(a, n)  | 0   | –     | 0
2       | {                    | 0   | –     | 0
3       | s ← 0.0              | 1   | 1     | 1
4       | for i ← 1 to n do    | 1   | n+1   | n+1
5       | s ← s + a[i]         | 1   | n     | n
6       | return s             | 1   | 1     | 1
7       | }                    | 0   | –     | 0
16. 17
Sr. No. | Statement                     | S/E | Freq.  | Total
1       | Algorithm Add(a, b, c, n, m)  | 0   | –      | 0
2       | {                             | 0   | –      | 0
3       | for i ← 1 to n do             | 1   | n+1    | n+1
4       | for j ← 1 to m do             | 1   | n(m+1) | n(m+1)
5       | c[i,j] ← a[i,j]+b[i,j]        | 1   | nm     | nm
6       | }                             | 0   | –      | 0
18. 19
Best, average, worst-case Efficiency
• Worst case:
– Efficiency (# of times the basic operation will be executed) for the worst case
input of size n, for which
– The algorithm runs the longest among all possible inputs of size n.
• Best case:
– Efficiency (# of times the basic operation will be executed) for the best case
input of size n, for which
– The algorithm runs the fastest among all possible inputs of size n.
• Average case:
– Efficiency (#of times the basic operation will be executed) for a typical/random
input
– NOT the average of worst and best case.
19. 20
Growth of function (Asymptotics)
Used to formalize that an algorithm has running
time or storage requirements that are
``never more than'' ,
``always greater than” , or
``exactly'' some amount
Less than equal to (“≤”)
Greater than equal to (“≥”)
Equal to (“=“)
20. 21
Big Oh (O) Notation
Asymptotic Upper Bound
Definition 1: Let f(n) and g(n) be two functions. We write:
f(n) = O(g(n)) or f = O(g)
(read "f of n is big oh of g of n" or "f is big oh of g")
if there are a positive constant c and a positive integer n0 such that f(n) <= c * g(n) for all integers n > n0.
The basic idea of big-Oh notation is this: Suppose f and g are both real-valued functions
of a real variable x. If, for large values of x, the graph of f lies closer to the horizontal
axis than the graph of some multiple of g, then f is of order g, i.e., f(x) = O(g(x)). So,
g(x) represents an upper bound on f(x).
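The definition can be probed numerically. This Python sketch (`is_bounded` is an assumed helper name; a finite range can only illustrate the definition, not prove the bound) checks witnesses c and n0:

```python
def is_bounded(f, g, c, n0, upto=1000):
    # Check f(n) <= c * g(n) for all n0 < n <= upto.
    return all(f(n) <= c * g(n) for n in range(n0 + 1, upto + 1))

# 3n + 10 = O(n): the constants c = 4, n0 = 10 witness the bound.
print(is_bounded(lambda n: 3 * n + 10, lambda n: n, c=4, n0=10))  # True
```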
21. 22
Common plots of O( )
O(2ⁿ)
O(n³)
O(n²)
O(n log n)
O(n)
O(√n)
O(log n)
O(1)
22. 23
Omega (Ω) Notation
Asymptotic Lower Bound
Definition 2: Let f(n) and g(n) be two functions. We write:
f(n) = Ω(g(n)) or f = Ω(g)
(read "f of n is omega of g of n" or "f is omega of g")
if there are a positive constant c and a positive integer n0 such that f(n) >= c * g(n) >= 0 for all integers n > n0.
The basic idea of omega notation is this: Suppose f and g are both real-valued functions
of a real variable x. If, for large values of x, the graph of some multiple of g lies closer
to the horizontal axis than the graph of f, then f is of order g, i.e., f(x) = Ω(g(x)). So,
g(x) represents a lower bound on f(x).
23. 24
Theta (Θ) Notation
Asymptotic Tight Bound
Definition 3: Let f(n) and g(n) be two functions. We write:
f(n) = Θ(g(n)) or f = Θ(g)
(read "f of n is theta of g of n" or "f is theta of g")
if there are positive constants c1, c2 and a positive integer n0 such that
c2 * g(n) >= f(n) >= c1 * g(n) >= 0 for all integers n > n0.
The basic idea of theta notation is this: Suppose f and g are both real-valued functions
of a real variable x. If, for large values of x, the graph of some multiple of g lies closer
to the horizontal axis than the graph of f, and the graph of f lies closer to the horizontal
axis than some other multiple of g, then f is of order g, i.e., f(x) = Θ(g(x)). So, g(x)
represents a tight bound on f(x). Thus g(x) is both an upper and a lower bound on f(n).
Thus, Θ(f) = O(f) ∩ Ω(f)
24. 25
Compare n and (n+1)/2
lim( n / ((n+1)/2) ) = 2,
same rate of growth
(n+1)/2 = Θ(n)
rate of growth of a linear function

Compare n² and n² + 6n
lim( n² / (n² + 6n) ) = 1
same rate of growth
n² + 6n = Θ(n²)
rate of growth of a quadratic function

Compare log n and log n²
lim( log n / log n² ) = 1/2
same rate of growth
log n² = Θ(log n)
logarithmic rate of growth

Θ(n³): n³, 5n³ + 4n, 105n³ + 4n² + 6n
Θ(n²): n², 5n² + 4n + 6, n² + 5
Θ(log n): log n, log n², log (n + n³)
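These limit comparisons can be approximated numerically by evaluating the ratio at one large n (an illustration only; `ratio` is an assumed helper, and a single sample point only suggests the limit):

```python
import math

def ratio(f, g, n=10**6):
    # Evaluate f(n) / g(n) at a large n to approximate the limit.
    return f(n) / g(n)

print(round(ratio(lambda n: n, lambda n: (n + 1) / 2), 3))                 # ~2.0
print(round(ratio(lambda n: math.log(n), lambda n: math.log(n ** 2)), 3))  # 0.5
```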
25. 26
Generalizing Running Time (Basic Efficiency classes)

Input   | Θ(1)     | log n | n      | n log n | n²        | n³    | 2ⁿ          | n!
size n  | constant | log   | linear | n-log-n | quadratic | cubic | exponential | factorial
5       | 1        | 3     | 5      | 15      | 25        | 125   | 32          | 120
10      | 1        | 4     | 10     | 33      | 100       | 10³   | 10³         | 3628800
100     | 1        | 7     | 100    | 664     | 10⁴       | 10⁶   | 10³⁰        | 9.33e+157
1000    | 1        | 10    | 1000   | 10⁴     | 10⁶       | 10⁹   | 10³⁰⁰       | 4.02e+2567
10000   | 1        | 13    | 10000  | 10⁵     | 10⁸       | 10¹²  | 10³⁰⁰⁰      | 2.84e+35659

Typical operations: log n – performing binary search; n – performing sequential
search, scanning array elements; n log n – sorting elements using Merge Sort or
Quick Sort; n² – scanning matrix elements; n³ – performing matrix multiplication;
2ⁿ – Towers of Hanoi problem; n! – Traveling Salesman Problem by brute-force search.
32. 33
- If-then-else
if(condition)
i = 0;
else
for ( j = 0; j < n; j++)
a[j] = j;
• Complexity
= O(1) + max( O(1), O(n) )
= O(1) + O(n)
= O(n)
- Sequential Search
• Given an unsorted vector a[], find if the element X occurs in a[]
for (i = 0; i < n; i++) {
if (a[i] == X) return true;
}
return false;
• Complexity = O(n)
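A Python rendering of the same sequential search (the slide uses C-style code; this sketch is equivalent):

```python
def sequential_search(a, x):
    # Worst case examines all n elements, so the complexity is O(n).
    for i in range(len(a)):
        if a[i] == x:
            return True
    return False
```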
33. 34
Mathematical Background for non
recursive algorithm analysis
1. Decide on a parameter(s) indicating an input’s size.
2. Identify algorithm’s basic operation.
3. Check whether the number of times basic operation is executed
depends only on size of input.
4. Investigate worst, average and best case efficiencies.
5. Find no. of times basic operation is executed.
6. Either Find a closed formula for count OR It’s order of growth.
// Largest element in an array
ALGORITHM MaxElement(A[0...n-1])
maxval ← A[0]
for i ← 1 to n-1 do
if A[i] > maxval
maxval ← A[i]
return maxval
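The MaxElement pseudocode above translates directly to Python (a sketch; the basic operation is the comparison, executed n-1 times, giving Θ(n)):

```python
def max_element(a):
    # Basic operation: the comparison a[i] > maxval, done n-1 times.
    maxval = a[0]
    for i in range(1, len(a)):
        if a[i] > maxval:
            maxval = a[i]
    return maxval
```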
34. 35
Mathematical Background for recursive
algorithm analysis
1. Decide on a parameter(s) indicating an input’s size.
2. Identify algorithm’s basic operation.
3. Check whether the number of times basic operation is executed can
vary on different inputs of same size.
4. Investigate worst, average and best case efficiencies.
5. Set up a recurrence relation with an appropriate initial condition for
no. of times the basic operation is executed.
6. Solve recurrence or ascertain order of growth.
// factorial for an arbitrary nonnegative no.
ALGORITHM F(n)
if n=0 return 1
else return F(n-1)*n
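The recursive factorial above in runnable form (a direct transcription; one multiplication per call gives the recurrence M(n) = M(n-1) + 1):

```python
def factorial(n):
    # F(n) = F(n-1) * n with base case F(0) = 1.
    if n == 0:
        return 1
    return factorial(n - 1) * n
```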
35. 36
Algorithm: Fibonacci
Algorithm 1 fib(n)
if n = 0 then
return (0)
if n = 1 then
return (1)
return (fib(n − 1) + fib(n − 2))
Algorithm 2: fib(n)
comment: Initially we create an array A[0: n]
A[0] ← 0, A[1] ← 1
for i = 2 to n do
A[i] = A[i − 1] + A[i − 2]
return (A[n])
Recurrence Relation of Fibonacci Number fib(n):
fib(n) = fib(n − 1) + fib(n − 2), with fib(0) = 0 and fib(1) = 1
{0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, …}
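Both slide algorithms in runnable form (a sketch: Algorithm 1 takes exponential time because each call spawns two more; Algorithm 2 is linear thanks to the array A[0..n]):

```python
def fib_recursive(n):
    # Algorithm 1: two recursive calls per level -> exponential time.
    if n == 0:
        return 0
    if n == 1:
        return 1
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    # Algorithm 2: fill the array A[0..n] bottom-up -> linear time.
    if n == 0:
        return 0
    A = [0] * (n + 1)
    A[1] = 1
    for i in range(2, n + 1):
        A[i] = A[i - 1] + A[i - 2]
    return A[n]
```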
36. TASK
• Determine the running time of a piece of code for the following cases:
(i) Dependent loop (ii) If-then-else statement (iii) Nested For loop
• An algorithm takes 0.5ms for input size 100. How long will it take for input
size 500 if run time is
(i) quadratic (ii) nlogn
• Calculate the running time for following program segment.
i=1
loop(i<=n)
print(i)
i=i+1
• Write procedure to find sum of series and find time complexity.
• Write a routine for finding factorial of a given number using recursion.
• Define notations. State their interrelationship.
37. Selection Sort
• Task: rearrange books on shelf by height
– Shortest book on the left
• Approach:
– Look at books, select shortest book
– Swap with first book
– Look at remaining books, select shortest
– Swap with second book
– Repeat …
39. Iterative Selection Sort
Algorithm selectionSort(a, n)
// Sorts the first n elements of an array a.
for (index = 0; index < n - 1; index++)
{ indexOfNextSmallest = the index of the smallest value among
a[index], a[index+1], . . . , a[n-1]
Interchange the values of a[index] and a[indexOfNextSmallest]
// Assertion: a[0] ≤ a[1] ≤ . . . ≤ a[index], and these are the smallest of
the original array elements.
// The remaining array elements begin at a[index+1].
}
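The iterative selection sort above can be sketched in Python (0-based indexing; the slide's pseudocode leaves finding the smallest index abstract):

```python
def selection_sort(a):
    # At pass `index`, find the smallest value among a[index..n-1]
    # and swap it into position `index`.
    n = len(a)
    for index in range(n - 1):
        smallest = index
        for j in range(index + 1, n):
            if a[j] < a[smallest]:
                smallest = j
        a[index], a[smallest] = a[smallest], a[index]
    return a
```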
40. Recursive Selection Sort
Algorithm selectionSort(a, first, last)
// Sorts the array elements a[first] through a[last] recursively
if (first < last)
{ indexOfNextSmallest = the index of the smallest value among
a[first], a[first+1], . . . , a[last]
Interchange the values of a[first] and a[indexOfNextSmallest]
// Assertion: a[0] ≤ a[1] ≤ . . . ≤ a[first] and these are the smallest of
the original array elements.
// The remaining array elements begin at a[first+1].
selectionSort(a, first+1, last)
}
41. The Efficiency of Selection Sort
• Iterative method: the for loop executes n – 1 times
– For each of the n – 1 passes, indexOfSmallest is invoked; last is n-1,
and first ranges from 0 to n-2.
– Each call to indexOfSmallest performs last – first comparisons
– Total operations: (n – 1) + (n – 2) + … + 1 = n(n – 1)/2 = O(n²)
• The count does not depend on the nature of the data in the array.
• Recursive selection sort performs the same operations
– Also O(n²)
42. Insertion Sort
• If only one book, it is sorted.
• Consider the second book, if shorter than first one
– Remove second book
– Slide first book to right
– Insert removed book into first slot
• Then look at the third book; if it is shorter than the 2nd book
– Remove the 3rd book
– Slide the 2nd book to the right
– Compare with the 1st book; if the 1st is taller than the 3rd, slide
the 1st to the right and insert the 3rd book into the first slot
43. Insertion Sort
• Partitions the array into two parts. One part is sorted and initially
contains the first element.
• The second part contains the remaining elements.
• Removes the first element from the unsorted part and inserts it into
its proper sorted position within the sorted part by comparing with
element from the end of sorted part and toward its beginning.
• The sorted part keeps expanding and unsorted part keeps shrinking
by one element at each pass
44. Insertion Sort
at each iteration, the array is divided into two sub-arrays: a sorted one and an unsorted one.
45. INSERTION-SORT (Iterative)
Alg.: INSERTION-SORT(A)
for j ← 2 to n
do key ← A[ j ]
Insert A[ j ] into the sorted sequence A[1 . . j -1]
i ← j - 1
while i > 0 and A[i] > key
do A[i + 1] ← A[i]
i ← i – 1
A[i + 1] ← key
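The iterative INSERTION-SORT above in runnable form (a sketch; Python is 0-based where the slide's pseudocode is 1-based):

```python
def insertion_sort(a):
    # Shift elements larger than key one slot right, then drop key
    # into the gap; the prefix a[0..j-1] is always sorted.
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key
    return a
```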
(diagram: array cells a1 … a8, positions 1–8, with the key being inserted into the sorted prefix)
46. Analysis of Insertion Sort
INSERTION-SORT(A)                                           cost   times
for j ← 2 to n                                              c1     n
  do key ← A[ j ]                                           c2     n-1
     Insert A[ j ] into the sorted sequence A[1 . . j-1]    0      n-1
     i ← j - 1                                              c4     n-1
     while i > 0 and A[i] > key                             c5     Σ_{j=2..n} tj
        do A[i + 1] ← A[i]                                  c6     Σ_{j=2..n} (tj - 1)
           i ← i - 1                                        c7     Σ_{j=2..n} (tj - 1)
     A[i + 1] ← key                                         c8     n-1

tj: # of times the while statement test is executed at iteration j

T(n) = c1·n + c2(n-1) + c4(n-1) + c5·Σ_{j=2..n} tj
       + c6·Σ_{j=2..n} (tj - 1) + c7·Σ_{j=2..n} (tj - 1) + c8(n-1)

In the worst case this gives about n²/2 comparisons and n²/2 exchanges.
47. Best Case Analysis
• The array is already sorted
– A[i] ≤ key upon the first time the while loop test
"while i > 0 and A[i] > key" is run (when i = j - 1)
– tj = 1
• T(n) = c1n + c2(n - 1) + c4(n - 1) + c5(n - 1) + c8(n - 1)
= (c1 + c2 + c4 + c5 + c8)n - (c2 + c4 + c5 + c8)
= an + b, a linear function of n
T(n) = Θ(n): linear growth
48. Worst Case Analysis
• The array is in reverse sorted order
– Always A[i] > key in the while loop test
"while i > 0 and A[i] > key"
– Have to compare key with all elements to the left of the j-th
position: compare with j - 1 elements, so tj = j
Using Σ_{j=2..n} j = n(n+1)/2 - 1 and Σ_{j=2..n} (j - 1) = n(n-1)/2, we have:
T(n) = c1n + c2(n-1) + c4(n-1) + c5(n(n+1)/2 - 1)
       + c6(n(n-1)/2) + c7(n(n-1)/2) + c8(n-1)
= an² + bn + c, a quadratic function of n
T(n) = Θ(n²): quadratic growth
49. Recursive Insertion Sort
Algorithm insertionSort(a, first, last)
// Sorts the array elements a[first] through a[last] recursively.
if (the array contains more than one element)
{ Sort the array elements a[first] through a[last-1]
Insert the last element a[last] into its correct sorted position
within the rest of the array
}
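The recursive insertion sort above can be sketched in Python (an illustration; the slide leaves the "insert into correct position" step abstract):

```python
def insertion_sort_recursive(a, last=None):
    # Sort a[0..last-1] recursively, then insert a[last] into place.
    if last is None:
        last = len(a) - 1
    if last <= 0:
        return a
    insertion_sort_recursive(a, last - 1)
    key = a[last]
    i = last - 1
    while i >= 0 and a[i] > key:
        a[i + 1] = a[i]
        i -= 1
    a[i + 1] = key
    return a
```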
• Complexity is O(n²).
• Usually implemented nonrecursively.
• The closer the array is to sorted order, the less work insertion sort
does, and hence the more efficient the sort is.
• Insertion sort is acceptable for small array sizes.
50. Recurrence Relations
• Equation or an inequality that characterizes a function by its
values on smaller inputs.
• Solution Methods
– Substitution Method.
– Recursion-tree Method.
– Master Method.
• Recurrence relations arise when we analyze the running
time of iterative or recursive algorithms.
– Ex: Divide and Conquer.
T(n) = Θ(1) if n ≤ d
T(n) = a T(n/b) + D(n) + C(n) otherwise
51. Substitution Method
• Guess the form of the solution, then
use mathematical induction to show it correct.
– Substitute guessed answer for the function when the inductive
hypothesis is applied to smaller values – hence, the name.
• Works well when the solution is easy to guess.
• No general way to guess the correct solution.
53. Tower of Hanoi
A mathematical puzzle where we have three rods and n
disks.
The objective of the puzzle is to move the entire stack to
another rod, obeying the following simple rules:
1) Only one disk can be moved at a time.
2) Each move consists of taking the upper disk from one
of the stacks and placing it on top of another stack i.e. a
disk can only be moved if it is the uppermost disk on a
stack.
3) No disk may be placed on top of a smaller disk.
55.
PseudoCode
TOH(n, x, y, z)
{
if (n >= 1)
{
// put (n-1) disk to z by using y
TOH((n-1), x, z, y)
// move larger disk to right place
move:x-->y
// put (n-1) disk to right place
TOH((n-1), z, y, x)
}
}
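The TOH pseudocode above in runnable form, recording moves so the count can be checked against the 2ⁿ - 1 result derived next (a direct transcription with an added move list):

```python
def toh(n, x, y, z, moves):
    # Move n disks from rod x to rod y using rod z as auxiliary.
    if n >= 1:
        toh(n - 1, x, z, y, moves)   # park the top n-1 disks on z
        moves.append((x, y))         # move the largest disk to y
        toh(n - 1, z, y, x, moves)   # bring the n-1 disks onto it
    return moves

moves = toh(3, 'A', 'B', 'C', [])
print(len(moves))  # 7, i.e. 2**3 - 1
```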
56. Analysis
Recursive Equation: T(n) = 2T(n-1) + 1          Eq-1
Solving it by Backsubstitution:
T(n-1) = 2T(n-2) + 1                            Eq-2
T(n-2) = 2T(n-3) + 1                            Eq-3
Put the value of T(n-2) in Eq-2 with help of Eq-3:
T(n-1) = 4T(n-3) + 3                            Eq-4
Put the value of T(n-1) in Eq-1 with help of Eq-4:
T(n) = 8T(n-3) + 7
After Generalization: T(n) = 2^k T(n-k) + 2^k - 1
Base condition T(1) = 1
n - k = 1
k = n - 1
Put k = n - 1: T(n) = 2^(n-1) + 2^(n-1) - 1 = 1 + 2 + 4 + … + 2^(n-1)
It is a GP series, and the sum is 2^n - 1, or you can say 2ⁿ, which is exponential.
57. Recursion-tree Method
• Making a good guess is sometimes difficult with the
substitution method.
• Use recursion trees to devise good guesses.
• Recursion Trees
– Show successive expansions of recurrences using trees.
– Keep track of the time spent on the subproblems of a
Divide and Conquer algorithm.
59. Master Method
• Many Divide-and-Conquer recurrence equations have the form:
T(n) = c                if n < d
T(n) = aT(n/b) + f(n)   if n ≥ d
where a >= 1, b > 1, and f(n) is an asymptotically positive function.
• The Master Theorem cases (compare f(n) with n^(log_b a)):
1. if f(n) is O(n^(log_b a - ε)) for some ε > 0, then T(n) is Θ(n^(log_b a))
   — f(n) grows asymptotically slower
2. if f(n) is Θ(n^(log_b a) log^k n), then T(n) is Θ(n^(log_b a) log^(k+1) n)
   — same growth rate
3. if f(n) is Ω(n^(log_b a + ε)) for some ε > 0, then T(n) is Θ(f(n)),
   provided a·f(n/b) ≤ δ·f(n) for some δ < 1
   — f(n) grows asymptotically faster
Requires memorization of three cases
60. Master Method
Eg. 1: T(n) = 4T(n/2) + n
Solution: log_b a = 2, so case 1 says T(n) is Θ(n²).
Eg. 2: T(n) = 2T(n/2) + n log n
Solution: log_b a = 1, so case 2 says T(n) is Θ(n log² n).
Eg. 3: T(n) = T(n/3) + n log n
Solution: log_b a = 0, so case 3 says T(n) is Θ(n log n), (δ = 1/3 < 1).
Eg. 4: T(n) = 2T(n/2) + log n (heap construction)
Solution: log_b a = 1, so case 1 says T(n) is Θ(n).
Eg. 5: T(n) = T(n/2) + 1 (binary search)
Solution: log_b a = 0, so case 2 says T(n) is Θ(log n).
Eg. 6: T(n) = 9T(n/3) + n³
Solution: log_b a = 2, so case 3 says T(n) is Θ(n³), (δ = 1/3 < 1).
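A Master Theorem answer can be sanity-checked numerically: for Eg. 1, T(n) = 4T(n/2) + n, a Θ(n²) solution means T(2n)/T(n) should approach 4 on powers of 2. A Python sketch (T(1) = 1 is an assumed base case):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # Eg. 1 from the slides: T(n) = 4T(n/2) + n, base case T(1) = 1,
    # evaluated for n a power of 2.
    if n == 1:
        return 1
    return 4 * T(n // 2) + n

# Ratio close to 4, consistent with T(n) = Θ(n²).
print(T(2 ** 12) / T(2 ** 11))
```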
61. Task
• Explain the terms in the recurrence relation: T(n) = aT(n/b) + f(n)
• Explain Master's Method for solving recurrences to obtain asymptotic bounds.
• Solve the recurrence using Master Method -
a. T(n) = 16T(n/4) + n
b. T(n) = 3T(n/4) + n log n
c. T(n) = 2T(n/4) + √n
d. T(n) = 4T(n/2) + n²
e. T(n) = 2T(n/2) + n³
f. T(n) = 16T(n/4) + n²
g. T(n) = 16T(n/4) - n² log n
h. T(n) = 2T(n/2) + n, n > 1
i. T(n) = T(n/2) + 1
j. T(n) = 9T(n/3) + n
k. T(n) = 3T(n/2) + n
l. T(n) = 64T(n/4) + n
• Arrange the following functions in increasing order: n, log n, n³, n², n log n, 2ⁿ, n!
• Frame and solve the recurrence using Substitution method for -
a. Tower of Hanoi  b. Fibonacci series  c. Factorial
• Solve using recursion tree method
a. T(n) = 3T(n/4) + n²
b. T(n) = 2T(n/2) + n²
c. T(n) = 3T(n/2) + cn²
d. T(n) = T(n/3) + T(2n/3) + n
e. T(n) = 3T(n/2) + n