
Chapter one

Fundamentals of Algorithms

Notion of an algorithm

• What is an algorithm?
– An algorithm is any well-defined computational procedure
that takes some value, or set of values, as input and
produces some value, or set of values, as output.
– An algorithm is thus a sequence of computational steps
that transform the input into the output.
– Computing time is a bounded resource, and so is space in
memory.
– We should use these resources wisely, and algorithms that
are efficient in terms of time or space will help you do so.

Methods of Specifying an Algorithm
• Once you have designed an algorithm, you
need to specify it in some fashion.
– Pseudocode is a mixture of a natural language
and programming-language-like constructs.
– A flowchart expresses an algorithm as a collection
of connected geometric shapes containing
descriptions of the algorithm's steps. This
representation has proved to be inconvenient for
all but very simple algorithms.
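
For illustration, here is a minimal pseudocode sketch (not from the
original slides) of an algorithm that finds the largest element in an
array A[0..n-1]:

ALGORITHM MaxElement(A[0..n-1])
// Input: an array A[0..n-1] of n numbers
// Output: the value of the largest element in A
maxval ← A[0]
for i ← 1 to n-1 do
    if A[i] > maxval
        maxval ← A[i]
return maxval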

Algorithmic problem solving
• The following is the sequence of steps one
typically goes through in designing and
analyzing an algorithm:
Step 1-Understanding the Problem
– The first thing you need to do before designing an
algorithm is to understand completely the
problem given. This includes
• Specify exactly the range of inputs (instances) the
algorithm needs to handle.

Step 2- Ascertaining the Capabilities of a
Computational Device
– Once you completely understand a problem, you
need to ascertain the capabilities of the
computational device the algorithm is intended
for.

Step 3 Choosing between Exact and Approximate Problem
Solving
• We might need to write an exact algorithm or an approximation
algorithm, depending on the kind of problem we are
solving.
• What kinds of problems need approximation algorithms?
– Some problems simply cannot be solved exactly for
most of their instances; examples include:
• extracting square roots, solving nonlinear equations
• Available algorithms for solving a problem exactly can be
unacceptably slow because of the problem's intrinsic
complexity.
• Third, an approximation algorithm can be a part of a more
sophisticated algorithm that solves a problem exactly.
Step 4- Deciding on Appropriate Data
Structures
• Some algorithms do not demand any
ingenuity in representing their inputs. But
others are, in fact, predicated on ingenious
data structures.
• A data structure together with an algorithm
constitutes a program.

Step 5- Proving an Algorithm's Correctness
• Once an algorithm has been specified, you have to
prove its correctness. That is, you have to prove
that the algorithm yields a required result for every
legitimate input in a finite amount of time.
• For an approximation algorithm, we usually would
like to be able to show that the error produced by
the algorithm does not exceed a predefined limit.

Step 6- Analyzing an Algorithm
• When analyzing an algorithm we should check its time
efficiency, space efficiency, simplicity, and generality.
– Time efficiency: indicates how fast the algorithm runs;
– Space efficiency: indicates how much extra memory
the algorithm needs.
– Simplicity: cannot be precisely defined and investigated
with mathematical methods, but we can say that simpler
algorithms are easier to understand and easier to
program;
– Generality: there are two things to consider in this
regard:
• generality of the problem the algorithm solves
• the range of inputs it accepts.
• If you are not satisfied with the algorithm's
efficiency, simplicity, or generality, you must
change or modify the algorithm.
Step 7- Coding an Algorithm
• Most algorithms are destined to be ultimately
implemented as computer programs.

Characteristics of a good algorithm

• Finiteness: An algorithm must terminate after a
finite number of steps, and each step must be
executable in a finite amount of time; the
algorithm must terminate (in a finite number of
steps) on all allowed inputs.
• Definiteness (no ambiguity): Each step must
have a uniquely defined preceding and
succeeding step. The first step (start step) and
last step (halt step) must be clearly noted.
• Feasibility: It must be possible to perform
each instruction.
• Correctness: It must compute the correct answer
for all possible legal inputs.
• Language Independence: It must not depend
on any one programming language.

• Completeness: It must solve the problem
completely.
• Effectiveness: It must be possible to perform
each step exactly and in a finite amount of
time.
• Efficiency: It must solve the problem with the least
amount of computational resources, such as time
and space.

• Generality: Algorithm should be valid on all
possible inputs.
• Input/ Output: There must be a specified
number of input values, and one or more
result values.

Algorithm Analysis Concepts
• Algorithm analysis refers to the process of
determining how much computing time and
storage an algorithm will require.
• In other words, it’s a process of predicting the
resource requirement of algorithms in a given
environment.

• For a given problem, there are often many
possible algorithms.
• One has to be able to choose the best
algorithm for the problem at hand using some
scientific method.
• To classify some data structures and algorithms
as good, we need precise ways of analyzing
them in terms of resource requirement. The
main resources are:
– Running Time
– Memory Usage
– Communication Bandwidth
• Running time is usually treated as the most
important since computational time is the most
precious resource in most problem domains.
• There are two approaches to measure the
efficiency of algorithms:
– Empirical: Programming competing algorithms and
trying them on different instances.
– Theoretical: determining mathematically the quantity
of resources (execution time, memory space, etc.)
needed by each algorithm.
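
As an illustration of the empirical approach, the following is a
minimal C++ sketch (not from the original slides; the linearSearch
function and the chosen input sizes are made up for illustration) that
times one candidate algorithm on inputs of increasing size:

#include <chrono>
#include <iostream>
#include <vector>
using namespace std;

// A candidate algorithm to measure: linear search (an illustrative choice).
int linearSearch(const vector<int>& a, int key) {
    for (int i = 0; i < (int)a.size(); i++)
        if (a[i] == key) return i;
    return -1;
}

int main() {
    // Try the algorithm on instances of increasing size and record wall time.
    for (int n = 1000; n <= 1000000; n *= 10) {
        vector<int> a(n, 0);               // worst case: the key is absent
        auto start = chrono::steady_clock::now();
        linearSearch(a, 1);
        auto stop = chrono::steady_clock::now();
        cout << "n = " << n << ", time = "
             << chrono::duration_cast<chrono::microseconds>(stop - start).count()
             << " us" << endl;
    }
    return 0;
}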
• However, it is difficult to use actual clock-time
as a consistent measure of an algorithm’s
efficiency, because clock-time can vary based
on many things. For example,
– Specific processor speed
– Current processor load
– Specific data for a particular run of the program
• Input Size
• Input Properties
– Operating Environment

• Accordingly, we can analyze an algorithm
according to the number of operations
required, rather than according to an absolute
amount of time involved.
• This can show how an algorithm’s efficiency
changes according to the size of the input.

Complexity Analysis
• Complexity Analysis is the systematic study of
the cost of computation, measured either in
time units or in operations performed, or in
the amount of storage space required.
• The goal is to have a meaningful measure that
permits comparison of algorithms
independent of operating platform.

• There are two things to consider:
– Time Complexity: Determine the approximate number
of operations required to solve a problem of size n.
– Space Complexity: Determine the approximate
memory required to solve a problem of size n.
• Complexity analysis involves two distinct phases:
– Algorithm Analysis: Analysis of the algorithm or data
structure to produce a function T(n) that describes the
algorithm in terms of the operations performed in
order to measure the complexity of the algorithm.
– Order of Magnitude Analysis: Analysis of the function
T (n) to determine the general complexity category to
which it belongs.
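
For instance, the algorithm analysis phase might produce
T(n) = 3n + 6 for a simple counting loop (as in the examples below),
and the order-of-magnitude phase then classifies that function as O(n).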

• There is no generally accepted set of rules for
algorithm analysis. However, an exact count of
operations is commonly used.
• Analysis Rules
– We assume an arbitrary time unit.
– Execution of one of the following operations takes time 1:
• Assignment Operation
• Single Input/ Output Operation
• Single Boolean Operations
• Single Arithmetic Operations
• Function Return
– Running time of a selection statement (if, switch) is the
time for the condition evaluation + the maximum of the
running times for the individual clauses in the selection.
– Loops: Running time for a loop is equal to the running
time for the statements inside the loop multiplied by the
number of loop iterations.
– The total running time of a statement inside a
group of nested loops is the running time of the
statements multiplied by the product of the sizes of
all the loops.
– For nested loops, analyze inside out.
• Always assume that the loop executes the maximum
number of iterations possible.
– Running time of a function call is 1 for setup + the
time for any parameter calculations + the time
required for the execution of the function body.

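To illustrate the nested-loop rule, here is a small sketch (not from
the original slides); the innermost statement executes n * n times, so
the total running time grows as n²:

int pairs(int n) {
    int count = 0;                     // 1
    for (int i = 0; i < n; i++)        // outer loop: n iterations
        for (int j = 0; j < n; j++)    // inner loop: n iterations per outer pass
            count++;                   // executes n * n times in total
    return count;                      // 1
}
// Analyzing inside out: the body costs 1, the inner loop costs about n,
// and the outer loop repeats it n times, so T(n) grows as n², i.e., O(n²).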
• Worst-Case Analysis – The maximum amount of time that an algorithm requires
to solve a problem of size n.
– We try to analyze the algorithm by considering its worst situation.
– This gives an upper bound for the time complexity of an algorithm.
– Normally, we try to find the worst-case behavior of an algorithm.
• Best-Case Analysis – The minimum amount of time that an algorithm requires
to solve a problem of size n.
– We try to analyze the algorithm by considering its best situation.
– The best-case behavior of an algorithm is NOT so useful.
• Average-Case Analysis – The average amount of time that an algorithm
requires to solve a problem of size n.
– Sometimes, it is difficult to find the average-case behavior of an algorithm.
– We have to look at all possible data organizations of a given size n and
the probability distribution of these organizations.
• Worst-case analysis is more common than average-case analysis.

Analysis Examples

int count() {
    int n, i, k = 0;                // 1 (n and i were undeclared in the original)
    cout << "Enter an integer";     // 1 output operation
    cin >> n;                       // 1 input operation
    for (i = 0; i < n; i++)         // 1 init + (n+1) tests + n increments
        k = k + 1;                  // n
    return 0;                       // 1
}
// Treating each simple statement as one time unit (one plausible
// convention under the rules above):
// T(n) = 1+1+1 + (1 + (n+1) + n) + n + 1 = 3n + 6, which is O(n).

int total(int n)
{
    int sum = 0;                    // 1
    for (int i = 1; i <= n; i++)    // 1 init + (n+1) tests + n increments
        sum = sum + 1;              // n
    return sum;                     // 1
}
// T(n) = 1 + (1 + (n+1) + n) + n + 1 = 3n + 4, which is O(n).

void func()
{
    int n;                            // n was undeclared in the original
    int x = 0;                        // 1
    int i = 0;                        // 1
    int j = 1;                        // 1
    cout << "Enter an Integer value"; // 1
    cin >> n;                         // 1
    while (i < n) {                   // (n+1) tests
        x++;                          // n
        i++;                          // n
    }
    while (j < n)                     // n tests (j runs from 1 up to n)
    {
        j++;                          // n - 1
    }
}
// T(n) = 5 + (n+1) + n + n + n + (n-1) = 5n + 5, which is O(n).
int sum(int n)
{
    int partial_sum = 0;                         // 1
    for (int i = 1; i <= n; i++)                 // 1 init + (n+1) tests + n increments
        partial_sum = partial_sum + (i * i * i); // n as one statement, or 4n
                                                 // counting the two multiplications,
                                                 // one addition, and one assignment
    return partial_sum;                          // 1
}
// T(n) = 3n + 4 counting statements, or 6n + 4 counting every
// operation separately; either way it is O(n).

x = 0;
for (int i = 1; i < n; i = i * 2) {  // i takes the values 1, 2, 4, 8, ...
    x = x + 1;                       // executes about log2(n) times
}

The loop body executes k times, where 2^k = n, i.e., k = log₂n:

N        No. of iterations
1        0    (2^0)
2        1    (2^1)
3-4      2    (2^2)
5-8      3    (2^3)
9-16     4    (2^4)
...      ...
n        k    (2^k = n, so k = log₂n)

So the running time of this loop grows as O(log n).

for (int i = 0; i < n; i += 5)
    sum++;

The loop body executes k = n/5 times (i takes the values 0, 5, 10, ...):

N        No. of iterations
1-5      1
6-10     2
...      ...
n        k = n/5    (N = k*5)

So the running time still grows linearly, i.e., O(n).

Asymptotic notation
• The order of growth of the running time of an algorithm gives
a simple characterization of the algorithm’s efficiency and also
allows us to compare the relative performance of alternative
algorithms.
• Once the input size n becomes large enough, an algorithm
with O(n log n) worst-case running time beats an algorithm
whose worst-case running time is O(n²).
• Although we can sometimes determine the exact running
time of an algorithm, as we did for the previous examples, the
extra precision is not usually worth the effort of computing it.
• For large enough inputs, the multiplicative constants and
lower-order terms of an exact running time are dominated by
the effects of the input size itself.
• When we look at input sizes large enough to make only
the order of growth of the running time relevant, we
are studying the asymptotic efficiency of algorithms.
• Asymptotic analysis is concerned with how the running
time of an algorithm increases with the size of the
input in the limit, as the size of the input increases
without bound.
• Usually, an algorithm that is asymptotically more
efficient will be the best choice for all but very small
inputs.

• There are five notations used to describe a
running time function. These are:
– Big-Oh Notation (O)
– Big-Omega Notation (Ω)
– Theta Notation (Θ)
– Little-o Notation (o)
– Little-Omega Notation (ω)

The Big-Oh Notation

• Big-Oh notation is a way of comparing
algorithms and is used for computing the
complexity of algorithms, i.e., the amount of
time that it takes for a computer program to
run.
• It is only concerned with what happens for
very large values of n.

• Therefore only the largest term in the expression
(function) is needed.
• For example, if the number of operations in an
algorithm is n² – n, then n is insignificant compared to
n² for large values of n.
• Big-O expresses an upper bound on the growth
rate of a function, for sufficiently large values of n.
• Formal Definition: f(n) = O(g(n)) if there exist c,
k ∈ ℛ⁺ such that for all n ≥ k, f(n) ≤ c·g(n).
• The following points are facts that you can use
for Big-Oh problems:
• 1 ≤ n for all n ≥ 1
• n ≤ n² for all n ≥ 1
• 2ⁿ ≤ n! for all n ≥ 4
• log₂n ≤ n for all n ≥ 2
• n ≤ n·log₂n for all n ≥ 2
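
For instance, the third fact can be checked at the base case and
extended by induction: 2⁴ = 16 ≤ 24 = 4!, and each time n grows by 1
the left side doubles while the right side is multiplied by
n + 1 ≥ 5, so the inequality is preserved.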

Big-O Theorems
• For all the following theorems, assume that
f(n) is a function of n and that k is an arbitrary
constant.
– Theorem 1: k is O(1)
– Theorem 2: A polynomial is O(the term containing
the highest power of n).
• Polynomial’s growth rate is determined by the leading
term
• If f(n) is a polynomial of degree d, then f(n) is O(nᵈ)
• In general, f(n) is big-O of the dominant term of f(n).
– Theorem 3: k·f(n) is O(f(n))
• Constant factors may be ignored
• E.g. f(n) = 7n⁴ + 3n² + 5n + 1000 is O(n⁴)
– Theorem 4 (Transitivity): If f(n) is O(g(n)) and g(n) is
O(h(n)), then f(n) is O(h(n)).
– Theorem 5: For any base b, logb(n) is O(log n).
• All logarithms grow at the same rate, since
logb(n) = logd(n) / logd(b), a constant multiple.
• logb(n) is O(logd(n)) for all b, d > 1

Examples
1. f(n) = 10n + 5 and g(n) = n. Show that f(n) is
O(g(n)).
2. f(n) = 3n² + 4n + 1. Show that f(n) = O(n²).
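
A quick sketch of the solutions (the constants chosen here are one
valid choice, not the only one):
1. 10n + 5 ≤ 10n + 5n = 15n for all n ≥ 1, so taking c = 15 and
k = 1 gives f(n) ≤ c·g(n), hence f(n) = O(n).
2. 3n² + 4n + 1 ≤ 3n² + 4n² + n² = 8n² for all n ≥ 1, so taking
c = 8 and k = 1 gives f(n) = O(n²).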

Big-Omega Notation

• Just as O-notation provides an asymptotic
upper bound on a function, Ω-notation
provides an asymptotic lower bound.
• Formal Definition: A function f(n) is Ω(g(n))
if there exist constants c and k ∈ ℛ⁺ such that
– f(n) ≥ c·g(n) for all n ≥ k.
– f(n) = Ω(g(n)) means that f(n) is greater than or
equal to some constant multiple of g(n) for all
values of n greater than or equal to some k.
Examples
1. If f(n) = n², then f(n) = Ω(n)
2. If f(n) = 3n + 2, then f(n) = Ω(n)
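
A quick sketch: n² ≥ n for all n ≥ 1 (take c = 1, k = 1), and
3n + 2 ≥ 3n ≥ n for all n ≥ 1 (again c = 1, k = 1 works).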

Theta Notation
• A function f(n) belongs to the set Θ(g(n)) if
there exist positive constants c1 and c2 such
that it can be sandwiched between c1·g(n)
and c2·g(n) for sufficiently large values of n.
• Formal Definition: A function f(n) is Θ(g(n)) if
it is both O(g(n)) and Ω(g(n)). In other
words, there exist constants c1, c2, and k > 0
such that c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ k.
• If f(n) = Θ(g(n)), then g(n) is an asymptotically
tight bound for f(n).
• 1. If f(n) = 2n + 1, then f(n) = Θ(n)
• 2. If f(n) = 2n², then
• f(n) = O(n⁴)
• f(n) = O(n³)
• f(n) = O(n²)
• All of these are technically correct, but the last
expression is the best and tightest one. Since 2n²
and n² have the same growth rate, it can be
written as f(n) = Θ(n²).
Little-o Notation

• f(n) = o(g(n)) means that for all c > 0 there exists some k > 0
such that f(n) < c·g(n) for all n ≥ k.
• Informally, f(n) = o(g(n)) means f(n) becomes
insignificant relative to g(n) as n approaches infinity.
• Example: f(n) = 3n + 4 is o(n²)
– In simple terms, f(n) has a smaller growth rate than
g(n).
– If g(n) = 2n², then g(n) = o(n³) and g(n) = O(n²), but g(n) is
not o(n²), because 2n² < c·n² would have to hold for all
constants c > 0, and it fails for any c ≤ 2.

Little-Omega (ω notation)

• We use ω-notation to denote a lower bound
that is not asymptotically tight.
• Formal Definition: f(n) = ω(g(n)) if for every
constant c > 0 there exists a constant k > 0
such that 0 ≤ c·g(n) < f(n) for all n ≥ k.
• Example: 2n² = ω(n)
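
A quick check: 2n²/n = 2n grows without bound, so for any constant
c > 0 we have 2n² > c·n once n > c/2; hence 2n² = ω(n). By contrast,
2n²/n² = 2 is bounded, so 2n² is not ω(n²).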

