UNIT-IV Advanced Algorithms


PROBLEM-SOLVING ASPECT

Problem solving is a creative process which largely defies systematization and mechanization. Nevertheless, there
are a number of steps that can be taken to raise the level of one's performance in problem
solving.

A problem-solving technique follows certain steps in finding the solution to a problem. Let us
look at the steps one by one:

1) Problem definition phase

Success in solving any problem is possible only after the problem has been fully understood;
we cannot hope to solve a problem that we do not understand. Understanding the problem is
therefore the first step towards its solution. In the problem definition phase,
we must emphasize what must be done rather than how it is to be done. That is, we try to extract
a precisely defined set of tasks from the problem statement. Inexperienced problem solvers too
often gallop ahead with the task of problem solving, only to find that they are either solving the
wrong problem or solving just one particular problem.

2) Getting started on a problem

There are many ways of solving a problem, and there may be several solutions. So it is difficult
to recognize immediately which path could be more productive. Sometimes you have no
idea where to begin solving a problem, even after the problem has been defined. Such a block
sometimes occurs because you are overly concerned with the details of the implementation even
before you have completely understood or worked out a solution. The best advice is not to get
concerned with the details; those can come later, when the intricacies of the problem have been
understood.

3) The use of specific examples

To get started on a problem, we can make use of heuristics, i.e., rules of thumb. This approach
allows us to start by picking a specific instance of the problem we wish to solve and trying to
work out the mechanism that will solve this particular instance. It is usually much easier
to work out the details of a solution to a specific problem because the relationship between the
mechanism and the problem is more clearly defined. This approach of focusing on a particular
problem can give us the foothold we need for making a start on the solution to the general
problem.

4) Similarities among problems

One way to make a start is by considering a specific example. Another approach is to bring
past experience to bear on the current problem. So it is important to see whether there are any
similarities between the current problem and past problems we have solved. The more experience
one has, the more tools and techniques one can bring to bear in tackling a given problem. But
sometimes past experience blocks us from discovering a desirable or better solution. A skill
that is important to develop in problem solving is the ability to view a problem from a
variety of angles: one must be able to metaphorically turn a problem upside down, inside out,
sideways, backwards, and forwards. Once one has developed this skill, it should be possible
to get started on any problem.

5) Working backwards from the solution


In some cases we can assume that we already have the solution to the problem and then try to
work backwards to the starting point. Even a guess at the solution may be enough
to give us a foothold to start on the problem. We can systematize the investigation and avoid
duplicated effort by writing down the various steps taken and explorations made. Another
practice that helps develop problem-solving skills is, once we have solved a problem, to
consciously reflect on the way we went about discovering the solution.

6) General Problem solving strategies

There are a few general strategies for solving a given problem, namely:

Divide and conquer strategy

This strategy has three parts:

1. Divide the problem into a number of subproblems that are smaller instances of the same
problem.
2. Conquer the subproblems by solving them recursively. If they are small enough, solve
the subproblems as base cases.
3. Combine the solutions to the subproblems into the solution for the original problem.
Examples: Merge sort
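Merge sort follows the three divide-and-conquer parts directly. The following is a minimal sketch in C; the function names merge and merge_sort are our own illustration, not from the text:

```c
#include <string.h>

/* Combine step: merge two sorted halves a[lo..mid] and a[mid+1..hi]. */
static void merge(int a[], int lo, int mid, int hi) {
    int tmp[hi - lo + 1];
    int i = lo, j = mid + 1, k = 0;
    while (i <= mid && j <= hi)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= mid) tmp[k++] = a[i++];
    while (j <= hi)  tmp[k++] = a[j++];
    memcpy(a + lo, tmp, (hi - lo + 1) * sizeof(int));
}

void merge_sort(int a[], int lo, int hi) {
    if (lo >= hi) return;            /* base case: one element is sorted */
    int mid = lo + (hi - lo) / 2;    /* 1. divide into two subproblems */
    merge_sort(a, lo, mid);          /* 2. conquer each half recursively */
    merge_sort(a, mid + 1, hi);
    merge(a, lo, mid, hi);           /* 3. combine the two solutions */
}
```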

Dynamic Programming

Dynamic Programming (DP) is an algorithmic technique for solving an optimization problem by


breaking it down into simpler subproblems and utilizing the fact that the optimal solution to the
overall problem depends upon the optimal solution to its subproblems.

Eg: Fibonacci numbers, shortest-path problems. Other general strategies include greedy search and backtracking.
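As a small illustration of the dynamic-programming idea, the sketch below computes Fibonacci numbers: each subproblem F(i) is solved once and its result reused to build the solution to the overall problem. The example and the function name fib are ours, not from the text:

```c
/* Dynamic programming: F(n) is built bottom-up from the solutions
   to its subproblems F(n-1) and F(n-2), each computed exactly once. */
long fib(int n) {
    long table[n + 1 > 2 ? n + 1 : 2];   /* one slot per subproblem */
    table[0] = 0;
    table[1] = 1;
    for (int i = 2; i <= n; i++)
        table[i] = table[i - 1] + table[i - 2];
    return table[n];
}
```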


TOP-DOWN DESIGN

Top-down design is a strategy that we can apply to take the solution of a computer problem
from a vague outline to a precisely defined algorithm and program implementation. Top-down
design provides us with a way of handling the inherent logical complexity and detail frequently
encountered in computer algorithms. It allows us to build our solutions to a problem in a
stepwise fashion.

1) Breaking a problem into subproblems


Top-down design suggests that we take the general statements that we have about the solution,
one at a time, and break them down into a set of more precisely defined subtasks. These subtasks
should more accurately describe how the final goal is to be reached.

The process of repeatedly breaking a task down into subtasks and then each subtask into still
smaller subtasks must continue until we eventually end up with subtasks that can be
implemented as program statements.
For most algorithms we only need to go down two or three levels, although for
large software projects this is obviously not true. The larger and more complex the problem, the more it
will need to be broken down. The breakdown of a problem is shown in the figure below.

Fig: Schematic breakdown of a problem into subtasks as employed in top-down design.

2) Choice of a suitable data structure

One of the most important decisions we have to make in formulating computer solutions to
problems is the choice of the appropriate data structure. All programs operate on data, and
consequently the way the data is organized can have a profound effect on every aspect of the
final solution.

In particular, an inappropriate choice of data structure often leads to clumsy, inefficient, and
difficult implementations. On the other hand, an appropriate choice usually leads to a simple,
transparent, and efficient implementation.
There is no hard and fast rule that tells us at what stage in the development of an algorithm we
need to make decisions about the associated data structures. A small change in data organization
can have a significant influence on the algorithm.

The sorts of things we must be aware of in setting up data structures are questions
such as:

1. How can intermediate results be arranged to allow fast access to information that will
reduce the amount of computation required at a later stage?
2. Can the data structure be easily searched?
3. Can the data structure be easily updated?
4. Does the data structure provide a way of recovering an earlier state in the computation?
5. Does the data structure involve the excessive use of storage?
6. Can the problem be formulated in terms of one of the common data structures (e.g. array,
set, queue, stack, tree, graph, list)?

3) Construction of loops
In moving from general statements about the implementation towards subtasks that can be
realized as computations, almost invariably we are led to a series of iterative constructs, or loops,
and structures that are conditionally executed. These structures, together with input/output
statements, computable expressions, and assignments, make up the heart of program
implementations.
To construct any loop we must take into account three things: the initial conditions that
need to apply before the loop begins to execute, the invariant relation that must apply after each
iteration of the loop, and the conditions under which the iterative process must terminate.

4) Establishing initial conditions for loops


To establish the initial conditions for a loop, a usually effective strategy is to set the loop
variables to the values that they would have to assume in order to solve the smallest problem
associated with the loop.
Typically a loop makes some number of iterations n, with the loop index i in the range 0 <= i <= n. The
smallest problem usually corresponds to the case where i equals 0 or i equals 1.
Eg: Suppose that we wish to sum a set of numbers in an array using an iterative construct. The
loop variables are i, the array and loop index, and s, the variable for accumulating the sum of the
array elements.

The initial values of i and s must be i = 0 and s = 0, since the sum of zero array elements is zero.
5) Finding the iterative construct
Once we have the conditions for solving the smallest problem, the next step is to try to extend it
to the next smallest problem (in this case, when n = 1).

The solution for n = 1 can be built from the solution for n = 0 by taking the values of i and s when n
= 0 and applying the two expressions

i = i + 1
s = s + a[i]

The same two steps can be used to extend the solution from n = 1 to n = 2, and so on.
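Putting the initial conditions and the iterative step together, the summing example can be sketched as follows. The variable names i and s follow the text, which uses a 1-based array a[1..n]; the function wrapper sum_array is our own:

```c
/* Sum the n elements a[1..n]. The initial values i = 0 and s = 0
   solve the smallest problem (n = 0); each iteration extends the
   solution from i elements to i + 1 elements. */
int sum_array(const int a[], int n) {
    int i = 0, s = 0;     /* initial conditions for the loop */
    while (i < n) {
        i = i + 1;        /* move to the next element */
        s = s + a[i];     /* include it in the running sum */
    }
    return s;
}
```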

6) Termination of loops

There are a number of ways in which loops can be terminated. In general the termination
conditions are dictated by the nature of the problem.

a) The simplest condition for terminating a loop occurs when it is known in advance how many
iterations need to be made.
For example, a for-loop can be used for such computations:
for (i = 0; i < n; i++)
{
    /* loop body */
}

This loop terminates unconditionally after n iterations.

b) A second way in which loops can terminate is when some conditional expression becomes
false. An example is:
while (x > 0 && x < 10)
{
    /* loop body */
}
With loops of this type it cannot be directly determined in advance how many iterations there
will be before the loop will terminate.
In fact there is no guarantee that loops of this type will terminate at all. In these circumstances
the responsibility for making sure that the loop will terminate rests with the algorithm designer.
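A sketch of this responsibility (the halving example is ours, not from the text): the loop below terminates only because each iteration makes progress toward falsifying the condition; remove the update to x and it would loop forever.

```c
/* The designer must guarantee progress toward termination: here x
   strictly decreases on every iteration, so x > 0 must eventually fail. */
int count_halvings(int x) {
    int steps = 0;
    while (x > 0 && x < 1000) {
        x = x / 2;        /* progress step: x strictly decreases */
        steps++;
    }
    return steps;
}
```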

c) Yet another way in which termination of a loop can be set up is by forcing the condition
under which the loop will continue to iterate to become false. This approach to termination
can be very useful for simplifying the test that must be made with each iteration.

The following example best illustrates this method of loop termination.

Suppose we wish to establish that an array of n elements is in strictly ascending order (i.e.
a[i] < a[i+1] for 1 <= i < n). To do this we can use the following instructions:

a[n+1] = a[n];
i = 1;
while (a[i] < a[i+1])
    i = i + 1;

If n was assigned the value 5 and the data set was 2, 3, 5, 11, 14, then the first assignment prior
to the loop would result in the array configuration below:

2  3  5  11  14  14

The two 14s guarantee that the test a[i] < a[i+1] will be false when i = n, and so the loop will
terminate correctly when i = n, if not before.
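The sentinel technique above can be sketched as a function. The array is 1-based as in the text, so the buffer must have room for the extra copy at position n+1; the function name is our own:

```c
/* Check that a[1..n] is strictly ascending. Copying a[n] into a[n+1]
   forces the test a[i] < a[i+1] to fail by i = n at the latest, which
   simplifies the per-iteration test and guarantees termination. */
int is_strictly_ascending(int a[], int n) {
    a[n + 1] = a[n];          /* sentinel element */
    int i = 1;
    while (a[i] < a[i + 1])
        i = i + 1;
    return i == n;            /* stopped at the sentinel: all ascending */
}
```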
IMPLEMENTATION OF ALGORITHMS

The implementation of an algorithm should follow the top down design process. If an algorithm
has been properly designed the path of execution should flow in a straight line from top to
bottom. It is important that the program implementation adheres to this top-to-bottom rule.

1) Use of Procedures to emphasize modularity


• To assist with both the development of the implementation and the readability of the
main program it is usually helpful to modularize the program along the lines that follow
naturally from the top-down design.
• This practice allows us to implement a set of independent procedures to perform specific
and well-defined tasks.
• In applying modularization in an implementation one thing to watch is that the process is
not taken too far, to a point at which the implementation again becomes difficult to read
because of the fragmentation.
• When it is necessary to implement somewhat larger software projects a good strategy is
to first complete the overall design in a top-down fashion. The mechanism for the main
program can then be implemented with calls to the various procedures that will be needed
in the final implementation.
2) Choice of variable names
• Another implementation detail that can make programs more meaningful and easier to
understand is to choose appropriate variable and constant names.
• For example, if we have to make manipulations on days of the week we are much better
off using the variable day rather than the single letter a or some other variable.
• This practice tends to make programs much more self-documenting. In addition, each
variable should only have one role in a given program.
3) Documentation of programs
A good programming practice is to always write programs so that they can be executed and used by
other people unfamiliar with the workings and input requirements of the program. This means that
the program must specify during execution exactly what responses (and their format) it requires
from the user. Considerable care should be taken to avoid ambiguities in these specifications. They
should be concise but accurately specify what is required. Also the program should "catch"
incorrect responses to its requests and inform the user in an appropriate manner.
Another useful documenting practice that can be employed is to use an accurate comment with
each start statement used. This is appropriate because start statements usually signal that some
modular part of the computation is about to follow. A related part of program documentation is the
information that the program presents to the user during the execution phase.
4) Debugging programs
In implementing an algorithm it is always necessary to carry out a number of tests to ensure that the
program is behaving correctly according to its specifications. Even with small programs it is likely
that there will be logical errors that do not show up in the compilation phase.
A logical error is a bug that causes the program to operate incorrectly even though it terminates normally.
Eg:
if (score > 50)
    Print C;
else
if (score > 75)
    Print B;
For this fragment, if the input is score = 80, the program displays the output C, but the appropriate
output is B. The program does not terminate abnormally, yet it works incorrectly because of the
logical error: the second test can never be reached for scores above 75.
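A corrected version tests the more restrictive condition first, so a score of 80 now yields B. The sketch below is ours; the fallback grade F for scores of 50 and below is an assumption, since the text does not specify one:

```c
/* Corrected ordering: the more restrictive condition is tested first,
   so score = 80 reaches the score > 75 branch before score > 50. */
char grade(int score) {
    if (score > 75)
        return 'B';
    else if (score > 50)
        return 'C';
    return 'F';   /* assumed fallback; not specified in the text */
}
```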

To make the task of detecting logical errors somewhat easier it is a good idea to build into the
program a set of statements that will print out information at strategic points in the computation.
The actual task of trying to find logical errors in programs can be very testing, and there are no
foolproof methods for debugging, but there are some steps we can take to ease the burden of the
process. Probably the best advice is to always work through the program by hand before ever
attempting to execute it. The simplest way to do this is to draw up a two-dimensional table
consisting of the steps executed against all the variables used in the section of program under
consideration. We must then execute the statements in the section one by one and update our
table of variables as each variable is changed. If the process we are modelling is a loop, it is
usually only necessary to check the first couple of iterations and the last couple of iterations
before termination.
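The earlier advice about printing information at strategic points can be sketched as follows, using the array-summing loop as the code under test. The trace format and the function name traced_sum are ours:

```c
#include <stdio.h>

/* Sum a[1..n], printing the loop variables after each iteration so the
   hand-drawn table of i and s can be checked against an actual run. */
int traced_sum(const int a[], int n) {
    int i = 0, s = 0;
    while (i < n) {
        i = i + 1;
        s = s + a[i];
        fprintf(stderr, "after iteration %d: i = %d, s = %d\n", i, i, s);
    }
    return s;
}
```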

5) Program Testing
In attempting to test whether or not a program will handle all variations of the problem it was
designed to solve we must make every effort to be sure that it will cope with the limiting and
unusual cases. Some of the things we might check are whether the program solves the smallest
possible problem, whether it handles the case when all data values are the same, and so on.
Table 1.2 Appropriate data sets for testing a binary search algorithm

(i) Will the algorithm handle the search of an array of one element? Test value(s) x: 0, 1, 2. Sample data: a[1] = 1, n = 1.
(ii) Will it handle the case where all array values are equal? Test value(s) x: 0, 1, 2. Sample data: a[1] = 1, a[2] = 1, ..., a[n] = 1.
(iii) Will it handle the case where the element sought equals the first value in the array? Test value x: 1. Sample data: a[1] = 1, a[2] = 2, ..., a[n] = n.
(iv) Will it handle the case where the value sought equals the last value in the array? Test value x: n. Sample data: a[1] = 1, a[2] = 2, ..., a[n] = n.
(v) Will it handle the case where the value sought is less than the first element in the array? Test value x: 0. Sample data: a[1] = 1, a[2] = 2, ..., a[n] = n.
(vi) Will it handle the case where the value sought is greater than the last value in the array? Test value x: n + 1. Sample data: a[1] = 1, a[2] = 2, ..., a[n] = n.
(vii) Will it handle the case where the value sought is at an even array location? Test value x: 2. Sample data: a[1] = 1, a[2] = 2, ..., a[n] = n.
(viii) Will it handle the case where the value sought is at an odd array location? Test value x: 3. Sample data: a[1] = 1, a[2] = 2, ..., a[n] = n.
(ix) Will it handle the case where the value sought is absent but within the range of array values? Test value x: 5. Sample data: a[1] = 2, a[2] = 4, ..., a[n] = 2n.