UNIT-IV Advanced Algorithms
Problem solving is a creative process which largely defies systematization and mechanization. Even so, there are a number of steps that can be taken to raise the level of one's performance in problem solving.
A problem-solving technique follows certain steps in finding the solution to a problem. Let us
look into the steps one by one:
Success in solving any problem is possible only after the problem has been fully understood; we cannot hope to solve a problem which we do not understand. Understanding the problem is therefore the first step towards its solution. In the problem definition phase, we must emphasize what must be done rather than how it is to be done. That is, we try to extract a precisely defined set of tasks from the problem statement. Inexperienced problem solvers too often gallop ahead with the task of problem solving, only to find that they are either solving the wrong problem or solving just one particular problem.
There are many ways of solving a problem, and there may be several solutions, so it is difficult to recognize immediately which path will be the most productive. Sometimes you have no idea where to begin solving a problem, even after it has been defined. Such a block sometimes occurs because you are overly concerned with the details of the implementation before you have completely understood or worked out a solution. The best advice is not to get bogged down in details; those can come later, once the intricacies of the problem have been understood.
To get started on a problem, we can make use of heuristics, i.e., rules of thumb. This approach allows us to start by picking a specific problem we wish to solve and trying to work out the mechanism that will allow us to solve this particular problem. It is usually much easier to work out the details of a solution to a specific problem because the relationship between the mechanism and the problem is more clearly defined. Focusing on a particular problem can give us the foothold we need for making a start on the solution to the general problem.
One way to make a start is by considering a specific example. Another approach is to bring past experience to bear on the current problem, so it is important to see if there are any similarities between the current problem and past problems we have solved. The more experience one has, the more tools and techniques one can bring to bear in tackling a given problem; sometimes, however, past experience blocks us from discovering a desirable or better solution. A skill that is important to develop in problem solving is the ability to view a problem from a variety of angles. One must be able to metaphorically turn a problem upside down, inside out, sideways, backwards, forwards, and so on. Once this skill is developed, it should be possible to get started on any problem.
There are a few general strategies for solving a given problem. One of the most widely used is divide and conquer, which proceeds in three steps:
1. Divide the problem into a number of subproblems that are smaller instances of the same
problem.
2. Conquer the subproblems by solving them recursively. If they are small enough, solve
the subproblems as base cases.
3. Combine the solutions to the subproblems into the solution for the original problem.
Examples: merge sort (a divide-and-conquer algorithm) and dynamic programming (a related strategy that stores and reuses the solutions of overlapping subproblems).
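As a rough illustration of these three steps (not taken from the text), a merge sort for an integer array might be sketched in C as follows:

#include <string.h>

/* Combine: merge two sorted halves a[lo..mid] and a[mid+1..hi] into ascending order. */
static void merge(int a[], int lo, int mid, int hi)
{
    int tmp[hi - lo + 1];
    int i = lo, j = mid + 1, k = 0;

    while (i <= mid && j <= hi)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= mid)
        tmp[k++] = a[i++];
    while (j <= hi)
        tmp[k++] = a[j++];

    memcpy(&a[lo], tmp, k * sizeof(int));
}

/* Divide the array in two, conquer each half recursively, then combine. */
void merge_sort(int a[], int lo, int hi)
{
    if (lo >= hi)                    /* base case: zero or one element is already sorted */
        return;
    int mid = lo + (hi - lo) / 2;    /* divide */
    merge_sort(a, lo, mid);          /* conquer the left half  */
    merge_sort(a, mid + 1, hi);      /* conquer the right half */
    merge(a, lo, mid, hi);           /* combine the two solutions */
}

A call such as merge_sort(a, 0, n - 1) sorts the n elements of a in ascending order.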
Top-down design is a strategy that we can apply to take the solution of a computer problem from a vague outline to a precisely defined algorithm and program implementation. Top-down design gives us a way of handling the inherent logical complexity and detail frequently encountered in computer algorithms. It allows us to build our solutions to a problem in a stepwise fashion.
The process of repeatedly breaking a task down into subtasks, and then breaking each subtask into still smaller subtasks, must continue until we eventually end up with subtasks that can be implemented as program statements. For most algorithms we only need to go down two or three levels, although for large software projects this is obviously not true. The larger and more complex the problem, the more it will need to be broken down. The breakdown of a problem is shown in the figure below.
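For example (a hypothetical breakdown; the task, array size, and function names are illustrative), the task of computing the average of a set of numbers can be broken into subtasks until each is simple enough to write directly as a C function:

#include <stdio.h>

/* Subtask 1: read values into the array, returning how many were read. */
static int read_values(double a[], int max)
{
    int n = 0;
    while (n < max && scanf("%lf", &a[n]) == 1)
        n++;
    return n;
}

/* Subtask 2: sum the n values. */
static double sum_values(const double a[], int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Top level: the original task expressed in terms of its subtasks. */
int main(void)
{
    double a[100];
    int n = read_values(a, 100);
    if (n > 0)
        printf("average = %f\n", sum_values(a, n) / n);
    return 0;
}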
One of the most important decisions we have to make in formulating computer solutions to problems is the choice of the appropriate data structure. All programs operate on data, and consequently the way the data is organized can have a profound effect on every aspect of the final solution.
In particular, an inappropriate choice of data structure often leads to clumsy, inefficient, and
difficult implementations. On the other hand, an appropriate choice usually leads to a simple,
transparent, and efficient implementation.
There is no hard and fast rule that tells us at what stage in the development of an algorithm we need to make decisions about the associated data structures. A small change in data organization can have a significant influence on the algorithm.
The sorts of things we must, however, be aware of in setting up data structures are questions such as:
1. How can intermediate results be arranged to allow fast access to information that will
reduce the amount of computation required at a later stage?
2. Can the data structure be easily searched?
3. Can the data structure be easily updated?
4. Does the data structure provide a way of recovering an earlier state in the computation?
5. Does the data structure involve the excessive use of storage?
6. Can the problem be formulated in terms of one of the common data structures (e.g. array,
set, queue, stack, tree, graph, list)?
3) Construction of loops
In moving from general statements about the implementation towards subtasks that can be
realized as computations, almost invariably we are led to a series of iterative constructs, or loops,
and structures that are conditionally executed. These structures, together with input/output
statements, computable expressions, and assignments, make up the heart of program
implementations.
To construct any loop we must take into account three things: the initial conditions that need to apply before the loop begins to execute, the invariant relation that must apply after each iteration of the loop, and the conditions under which the iterative process must terminate.
Consider, for example, computing the sum of the first n positive integers using a counter i and a running sum s. The smallest problem, n = 0, is solved by the initial conditions i = 0 and s = 0. The solution for n = 1 can then be built from the solution for n = 0 using the values of i and s when n = 0 and the two expressions i = i + 1 and s = s + i. The same two steps can be used to extend the solution from n = 1 to n = 2, and so on.
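A sketch of the corresponding loop in C (using the variable names i and s as above; n is assumed to hold the number of terms):

int i = 0, s = 0;        /* initial conditions: the solution for n = 0 */
while (i < n)
{
    i = i + 1;           /* move to the next value of i                */
    s = s + i;           /* s now holds the sum 1 + 2 + ... + i        */
}
/* on termination i == n and s == 1 + 2 + ... + n */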
6) Termination of loops
There are a number of ways in which loops can be terminated. In general the termination
conditions are dictated by the nature of the problem.
a) The simplest condition for terminating a loop occurs when it is known in advance how many
iterations need to be made.
For example, a for-loop can be used for such computations:
for(i=0;i<n; i++)
{
…
}
b) A second way in which loops can terminate is when some conditional expression becomes
false. An example is:
while ((x > 0) && (x < 10))
{
…
}
With loops of this type it cannot be determined in advance how many iterations there will be before the loop terminates. In fact, there is no guarantee that loops of this type will terminate at all. In these circumstances the responsibility for making sure that the loop terminates rests with the algorithm designer.
c) Yet another way in which termination of a loop can be set up is by forcing the condition
under which the loop will continue to iterate to become false. This approach to termination
can be very useful for simplifying the test that must be made with each iteration.
Suppose we wish to establish that an array of n elements is in strictly ascending order (i.e. a[1] < a[2] < ... < a[n]). To do this we can use instructions of the following kind.
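A sketch in C, reconstructed from the description below (the array is indexed from 1 and has room for a sentinel element at position n + 1):

a[n + 1] = a[n];            /* append a copy of the last element as a sentinel         */
i = 1;
while (a[i] < a[i + 1])     /* keep going while the order is strictly ascending        */
    i = i + 1;
/* on exit, if i >= n then the first n elements are in strictly ascending order */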
If n was assigned the value 5 and the data set was 2, 3, 5, 11, 14, then the assignment prior to the loop would result in the array configuration 2, 3, 5, 11, 14, 14. The two 14s guarantee that the test a[i] < a[i+1] will be false when i = n, and so the loop will terminate correctly when i = n, if not before.
Implementation of Algorithms
The implementation of an algorithm should follow the top down design process. If an algorithm
has been properly designed the path of execution should flow in a straight line from top to
bottom. It is important that the program implementation adheres to this top-to-bottom rule.
To make the task of detecting logical errors somewhat easier it is a good idea to build into the
program a set of statements that will print out information at strategic points in the computation.
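For example, a statement of the following kind (the variable names are only illustrative) placed at the end of each loop iteration shows how the computation evolves:

printf("iteration %d: i = %d, s = %d\n", count, i, s);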
The actual task of trying to find logical errors in programs can be very testing, and there are no foolproof methods for debugging, but there are some steps we can take to ease the burden of the process. Probably the best advice is to always work the program through by hand before ever attempting to execute it. The simplest way to do this is to draw up a two-dimensional table consisting of the steps executed against all the variables used in the section of program under consideration. We must then execute the statements in the section one by one and update our table of variables as each variable is changed. If the process we are modelling is a loop, it is usually only necessary to check the first couple of iterations and the last couple of iterations before termination.
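For instance, a hand trace of a loop that sums the first n positive integers in variables i and s (with n = 3) might be recorded as follows:

Statement executed        i    s
initial conditions        0    0
after 1st iteration       1    1
after 2nd iteration       2    3
after 3rd iteration       3    6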
5) Program Testing
In attempting to test whether or not a program will handle all variations of the problem it was designed to solve, we must make every effort to be sure that it will cope with limiting and unusual cases. Some of the things we might check are whether the program solves the smallest possible problem, whether it handles the case when all data values are the same, and so on.
Table 1.2 Appropriate data sets for testing a binary search algorithm

(i) Will the algorithm handle the search of an array of one element?
Test value(s) x: 0, 1, 2. Sample data: a[1] = 1, n = 1.
(ii) Will it handle the case where all array values are equal?
Test value(s) x: 0, 1, 2. Sample data: a[1] = 1, a[2] = 1, ..., a[n] = 1.
(iii) Will it handle the case where the element sought equals the first value in the array?
Test value(s) x: 1. Sample data: a[1] = 1, a[2] = 2, ..., a[n] = n.
(iv) Will it handle the case where the value sought equals the last value in the array?
Test value(s) x: n. Sample data: a[1] = 1, a[2] = 2, ..., a[n] = n.
(v) Will it handle the case where the value sought is less than the first element in the array?
Test value(s) x: 0. Sample data: a[1] = 1, a[2] = 2, ..., a[n] = n.
(vi) Will it handle the case where the value sought is greater than the last value in the array?
Test value(s) x: n + 1. Sample data: a[1] = 1, a[2] = 2, ..., a[n] = n.
(vii) Will it handle the case where the value sought is at an even array location?
Test value(s) x: 2. Sample data: a[1] = 1, a[2] = 2, ..., a[n] = n.
(viii) Will it handle the case where the value sought is at an odd array location?
Test value(s) x: 3. Sample data: a[1] = 1, a[2] = 2, ..., a[n] = n.
(ix) Will it handle the case where the value sought is absent but within the range of array values?
Test value(s) x: 5. Sample data: a[1] = 2, a[2] = 4, ..., a[n] = 2n.
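A binary search of the kind this table exercises might be sketched in C as follows (an illustrative version, not the text's own; the array is sorted in ascending order and indexed from 1 to n, to match the sample data above):

/* Return the index of x in a[1..n], or 0 if x is not present.
   The array must be sorted in ascending order. */
int binary_search(const int a[], int n, int x)
{
    int lo = 1, hi = n;
    while (lo <= hi)
    {
        int mid = lo + (hi - lo) / 2;   /* midpoint, written to avoid overflow */
        if (a[mid] == x)
            return mid;                 /* found: report the location          */
        else if (a[mid] < x)
            lo = mid + 1;               /* discard the lower half              */
        else
            hi = mid - 1;               /* discard the upper half              */
    }
    return 0;                           /* not found                           */
}

Each row of the table then corresponds to calls of the form binary_search(a, n, x), using the listed test values x against the listed sample data.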