
Algorithm

Algorithm Design Techniques

► 1. Divide and Conquer Approach: This is a top-down approach. Algorithms that follow the divide & conquer technique involve three steps:
► Divide the original problem into a set of subproblems.
► Solve every subproblem individually, recursively.
► Combine the solutions of the subproblems (at the top level) into a solution of the whole original problem, as sketched below.
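
► To make the three steps concrete, here is a minimal merge sort sketch in Java (our own illustration, not part of the original slides; class and method names are assumed):

import java.util.Arrays;

// Merge sort: a classic divide-and-conquer algorithm.
public class MergeSort {

    // Sorts the half-open range a[lo..hi).
    static void sort(int[] a, int lo, int hi) {
        if (hi - lo < 2) return;              // base case: 0 or 1 element is already sorted
        int mid = (lo + hi) / 2;
        sort(a, lo, mid);                     // steps 1 and 2: divide, then solve each half recursively
        sort(a, mid, hi);
        merge(a, lo, mid, hi);                // step 3: combine the two sorted halves
    }

    static void merge(int[] a, int lo, int mid, int hi) {
        int[] tmp = new int[hi - lo];
        int i = lo, j = mid, k = 0;
        while (i < mid && j < hi) tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
        while (i < mid) tmp[k++] = a[i++];
        while (j < hi) tmp[k++] = a[j++];
        System.arraycopy(tmp, 0, a, lo, tmp.length);
    }

    public static void main(String[] args) {
        int[] a = {5, 2, 9, 1, 7};
        sort(a, 0, a.length);
        System.out.println(Arrays.toString(a)); // prints [1, 2, 5, 7, 9]
    }
}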
Continued…

► 2. Greedy Technique: The greedy method is used to solve optimization problems. An optimization problem is one in which we are given a set of input values which are required either to be maximized or minimized (known as the objective), subject to some constraints or conditions.
► A greedy algorithm always makes the choice (the greedy criterion) that looks best at the moment, in order to optimize the given objective.
► The greedy algorithm doesn't always guarantee the optimal solution; however, it generally produces a solution that is very close in value to the optimal one. A small sketch follows.
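
► As a minimal sketch (our own illustration; the coin values are assumed), here is greedy change-making in Java. With canonical denominations such as {25, 10, 5, 1} the greedy choice happens to be optimal; for arbitrary denominations it need not be, which matches the caveat above:

// Greedy change-making: at each step, take the largest coin that still fits.
public class GreedyChange {
    public static void main(String[] args) {
        int[] coins = {25, 10, 5, 1};   // denominations sorted in descending order
        int amount = 63;
        int count = 0;
        for (int coin : coins) {
            while (amount >= coin) {    // greedy criterion: largest coin possible
                amount -= coin;
                count++;
            }
        }
        System.out.println("Coins used: " + count); // 6 (25+25+10+1+1+1)
    }
}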
Continued…

► 3. Dynamic Programming: Dynamic Programming is a bottom-up approach: we solve all possible small problems and then combine them to obtain solutions for bigger problems. This is particularly helpful when the number of repeated (overlapping) subproblems is exponentially large. Dynamic Programming is frequently related to optimization problems. A sketch follows.
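
► A minimal bottom-up sketch in Java (our own illustration): computing Fibonacci numbers by filling a table of smaller subproblems first. A naive recursion would recompute the same subproblems exponentially many times; the table solves each exactly once:

// Bottom-up dynamic programming: Fibonacci numbers.
public class FibonacciDP {
    static long fib(int n) {
        if (n < 2) return n;
        long[] table = new long[n + 1];   // table[i] holds the solution to subproblem i
        table[0] = 0;
        table[1] = 1;
        for (int i = 2; i <= n; i++) {
            table[i] = table[i - 1] + table[i - 2]; // combine the two smaller solutions
        }
        return table[n];
    }

    public static void main(String[] args) {
        System.out.println(fib(50)); // prints 12586269025
    }
}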
► 4. Branch and Bound: In a branch & bound algorithm, a given subproblem which cannot be bounded has to be divided into at least two new restricted subproblems. Branch and bound algorithms are methods for global optimization in non-convex problems. Branch and bound algorithms can be slow: in the worst case they require effort that grows exponentially with problem size, but in some cases we are lucky and the method converges with much less effort. A sketch follows.
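
► A minimal branch and bound sketch in Java (our own illustration, assuming the 0/1 knapsack problem with items pre-sorted by value/weight ratio). A fractional-relaxation bound prunes branches that cannot beat the best solution found so far:

// Branch and bound for 0/1 knapsack.
public class KnapsackBB {
    // Items assumed sorted by value/weight ratio in descending order.
    static int[] values = {60, 100, 120};
    static int[] weights = {10, 20, 30};
    static int capacity = 50;
    static int bestValue = 0;

    // Upper bound: value so far plus the fractional relaxation of remaining items.
    static double bound(int i, int weight, int value) {
        double b = value;
        int w = weight;
        for (int j = i; j < values.length; j++) {
            if (w + weights[j] <= capacity) {
                w += weights[j];
                b += values[j];
            } else {
                b += values[j] * (double) (capacity - w) / weights[j];
                break;
            }
        }
        return b;
    }

    static void branch(int i, int weight, int value) {
        if (weight > capacity) return;                    // infeasible branch
        bestValue = Math.max(bestValue, value);
        if (i == values.length) return;
        if (bound(i, weight, value) <= bestValue) return; // prune: bound cannot beat the best
        branch(i + 1, weight + weights[i], value + values[i]); // branch 1: take item i
        branch(i + 1, weight, value);                          // branch 2: skip item i
    }

    public static void main(String[] args) {
        branch(0, 0, 0);
        System.out.println("Best value: " + bestValue); // prints 220
    }
}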
Continued…

► 5. Randomized Algorithms: A randomized algorithm is defined as an algorithm that is allowed to access a source of independent, unbiased random bits, and it is then allowed to use these random bits to influence its computation.

► 6. Backtracking Algorithm: A backtracking algorithm tries each possibility until it finds the right one. It is a depth-first search of the set of possible solutions. During the search, if an alternative doesn't work, we backtrack to the choice point, the place which presented different alternatives, and try the next alternative. A sketch follows.
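
► A minimal backtracking sketch in Java (our own illustration): counting solutions to the N-queens puzzle by depth-first search, trying each column in turn and moving on to the next alternative when a placement fails:

// Backtracking: count the solutions to the N-queens puzzle.
public class NQueens {
    static int n = 8, count = 0;
    static int[] col = new int[8]; // col[r] = column of the queen placed in row r

    // A queen at (row, c) is safe if no earlier queen shares its column or diagonal.
    static boolean safe(int row, int c) {
        for (int r = 0; r < row; r++)
            if (col[r] == c || Math.abs(col[r] - c) == row - r) return false;
        return true;
    }

    static void place(int row) {
        if (row == n) { count++; return; }   // all rows filled: one complete solution
        for (int c = 0; c < n; c++) {
            if (safe(row, c)) {
                col[row] = c;                // choice point: try this alternative
                place(row + 1);              // search deeper (depth-first)
            }                                // on failure, the loop tries the next column (backtrack)
        }
    }

    public static void main(String[] args) {
        place(0);
        System.out.println(count + " solutions"); // prints 92 for n = 8
    }
}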
Continued…

► 7. Randomized Algorithm: A randomized algorithm uses a random number at least once during its computation to make a decision.

► Example 1: In Quick Sort, using a random number to choose a pivot (sketched below).

► Example 2: Trying to factor a large number by choosing random numbers as possible divisors.
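
► A minimal sketch of Example 1 in Java (our own illustration): quicksort with a randomly chosen pivot, which makes the worst case unlikely for any fixed input:

import java.util.Arrays;
import java.util.Random;

// Randomized quicksort: the pivot is chosen using a random number.
public class RandomizedQuickSort {
    static final Random RNG = new Random();

    static void sort(int[] a, int lo, int hi) { // sorts the closed range a[lo..hi]
        if (lo >= hi) return;
        int p = partition(a, lo, hi);
        sort(a, lo, p - 1);
        sort(a, p + 1, hi);
    }

    static int partition(int[] a, int lo, int hi) {
        int r = lo + RNG.nextInt(hi - lo + 1);  // random bits choose the pivot index
        swap(a, r, hi);                         // move the pivot to the end
        int pivot = a[hi], i = lo;
        for (int j = lo; j < hi; j++)
            if (a[j] < pivot) swap(a, i++, j);
        swap(a, i, hi);
        return i;
    }

    static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }

    public static void main(String[] args) {
        int[] a = {5, 2, 9, 1, 7};
        sort(a, 0, a.length - 1);
        System.out.println(Arrays.toString(a)); // prints [1, 2, 5, 7, 9]
    }
}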
Loop invariants

► This is a justification technique. We use a loop invariant to understand why an algorithm is correct. To prove that a statement S about a loop is correct, define S in terms of a series of smaller statements S0, S1, ..., Sk, where:
► The initial claim S0 is true before the loop begins.
► If Si-1 is true before iteration i begins, then one can show that Si will be true after iteration i is over.
► The final statement Sk implies the statement S that we wish to justify as being true. A small example follows.
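
► A minimal example in Java (our own illustration): finding the maximum of an array, with the invariants S0 ... Sk noted as comments:

// Loop invariant Si: after i iterations, max holds the largest value in a[0..i].
public class LoopInvariant {
    static int max(int[] a) {
        int max = a[0];                     // S0: max is the largest of a[0..0] (true before the loop)
        for (int i = 1; i < a.length; i++) {
            if (a[i] > max) max = a[i];     // if S(i-1) held before this iteration, Si holds after it
        }
        return max;                         // Sk implies S: max is the largest of a[0..n-1]
    }

    public static void main(String[] args) {
        System.out.println(max(new int[]{3, 8, 1, 6})); // prints 8
    }
}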
Asymptotic Analysis of Algorithms
(Growth of functions)
► Resources for an algorithm are usually expressed as a function of the input size. Often this function is messy and complicated to work with. To study function growth efficiently, we reduce the function down to its important part.
► Let f(n) = a·n² + b·n + c.
► In this function, the n² term dominates the function when n gets sufficiently large.
► The dominant term is what we are interested in when reducing a function: we ignore all constants and coefficients and look at the highest-order term in n. The sketch below illustrates this numerically.
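
► A minimal numerical sketch in Java (our own illustration, with arbitrary coefficients a = 3, b = 5, c = 7) showing how the ratio of the leading term to f(n) approaches 1 as n grows:

// Demonstrates that the highest-order term dominates f(n) = a*n^2 + b*n + c.
public class DominantTerm {
    public static void main(String[] args) {
        long a = 3, b = 5, c = 7;               // arbitrary illustrative coefficients
        for (long n = 10; n <= 1_000_000; n *= 100) {
            long f = a * n * n + b * n + c;     // the full function f(n)
            long lead = a * n * n;              // the highest-order term only
            System.out.printf("n=%,d  f(n)=%,d  a*n^2=%,d  ratio=%.4f%n",
                    n, f, lead, (double) lead / f);
        }
        // The printed ratio climbs toward 1.0 as n grows: the n^2 term dominates.
    }
}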
Asymptotic notation:

► The word Asymptotic means approaching a value or curve arbitrarily closely (i.e., as some sort of limit is taken).
Asymptotic analysis

► It is a technique for representing limiting behavior. The methodology has applications across science. It can be used to analyze the performance of an algorithm for some large data set.
► In computer science, it is used in the analysis of algorithms to consider their performance when applied to very large input datasets.
► The simplest example is the function f(n) = n² + 3n; the term 3n becomes insignificant compared to n² when n is very large. The function f(n) is said to be "asymptotically equivalent to n² as n → ∞", written symbolically as f(n) ~ n².
Continued…

► Asymptotic notations are used to express the fastest and slowest possible running times of an algorithm. These are also referred to as 'best case' and 'worst case' scenarios, respectively.
► In asymptotic notations, we derive the complexity in terms of the size of the input (for example, in terms of n).
► These notations are important because, without computing the exact cost of running the algorithm, we can estimate its complexity.
Why is Asymptotic Notation Important?

► 1. They give simple characteristics of an algorithm's efficiency.

► 2. They allow comparison of the performance of various algorithms.

Asymptotic Notations:

► Asymptotic notation is a way of comparing functions that ignores constant factors and small input sizes. Three notations are used to express the running-time complexity of an algorithm:
► 1. Big-oh notation: Big-oh is the formal method of expressing the upper bound of an algorithm's running time. It is a measure of the longest amount of time the algorithm can take. The function f(n) = O(g(n)) [read as "f of n is big-oh of g of n"] if and only if there exist positive constants k and n0 such that

f(n) ≤ k·g(n) for all n > n0.
► Hence, the function g(n) is an upper bound for f(n): g(n) grows at least as fast as f(n), up to the constant factor k.
Continued…

► For Example:
1. 3n+2 = O(n), as 3n+2 ≤ 4n for all n ≥ 2
2. 3n+3 = O(n), as 3n+3 ≤ 4n for all n ≥ 3
► Hence, the complexity of f(n) can be represented as O(g(n)).
Continued…

► 2. Omega (Ω) Notation: The function f(n) = Ω(g(n)) [read as "f of n is omega of g of n"] if and only if there exist positive constants k and n0 such that
► f(n) ≥ k·g(n) for all n ≥ n0.
Continued…

► For Example:
► f(n) = 8n² + 2n − 3 ≥ 8n² − 3
► = 7n² + (n² − 3) ≥ 7n² = 7·g(n) for all n ≥ √3 (taking g(n) = n²)
► Thus, k = 7 and n0 = √3.
► Hence, the complexity of f(n) can be represented as Ω(g(n)).
Continued…

► 3. Theta (θ) Notation: The function f(n) = θ(g(n)) [read as "f of n is theta of g of n"] if and only if there exist positive constants k1, k2 and n0 such that
► k1·g(n) ≤ f(n) ≤ k2·g(n) for all n ≥ n0.
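► For example (a worked bound added here for illustration, continuing f(n) = 8n² + 2n − 3 from the previous slide): the lower bound 7n² ≤ f(n) holds for all n ≥ √3, as shown above, and the upper bound f(n) ≤ 10n² holds for all n ≥ 0, since 2n − 3 ≤ 2n² is always true. Thus k1 = 7, k2 = 10, n0 = √3, and f(n) = θ(n²).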
Continued…

► Hence, the complexity of f(n) can be represented as θ(g(n)).

► The Theta notation is more precise than both the big-oh and omega notations: the function f(n) = θ(g(n)) if g(n) is both an upper and a lower bound of f(n).