
Analysis and Design of Algorithms (BCS401)

2. Branch and Bound


Recall that the central idea of backtracking, discussed in the previous section, is to cut off a branch of the problem's state-space tree as soon as we can deduce that it cannot lead to a solution. This idea can be strengthened further if we deal with an optimization problem.
An optimization problem seeks to minimize or maximize some objective function (a tour
length, the value of items selected, the cost of an assignment, and the like), usually subject to
some constraints. An optimal solution is a feasible solution with the best value of the
objective function (e.g., the shortest Hamiltonian circuit or the most valuable subset of items
that fit the knapsack).
Compared to backtracking, branch-and-bound requires two additional items:
1. a way to provide, for every node of a state-space tree, a bound on the best value of
the objective function on any solution that can be obtained by adding further
components to the partially constructed solution represented by the node
2. the value of the best solution seen so far
In general, we terminate a search path at the current node in a state-space tree of a branch-
and-bound algorithm for any one of the following three reasons:
1. The value of the node's bound is not better than the value of the best solution seen so far.
2. The node represents no feasible solutions because the constraints of the problem are
already violated.
3. The subset of feasible solutions represented by the node consists of a single point (and
hence no further choices can be made)—in this case, we compare the value of the
objective function for this feasible solution with that of the best solution seen so far
and update the latter with the former if the new solution is better.

Assignment Problem
Let us illustrate the branch-and-bound approach by applying it to the problem of assigning n
people to n jobs so that the total cost of the assignment is as small as possible.
An instance of the assignment problem is specified by an n × n cost matrix C so that we can
state the problem as follows: select one element in each row of the matrix so that no two
selected elements are in the same column and their sum is the smallest possible. We will demonstrate how this problem can be solved using the branch-and-bound technique by considering a small instance of the problem. Consider the data given below.


How can we find a lower bound on the cost of an optimal selection without actually solving
the problem?
We can do this by several methods. For example, it is clear that the cost of any solution, including an optimal one, cannot be smaller than the sum of the smallest elements in each of the matrix's rows. For the instance here, this sum is 2 + 3 + 1 + 4 = 10. We can and will apply the same thinking to partially constructed solutions. For example, for any legitimate selection that selects 9 from the first row, the lower bound will be 9 + 3 + 1 + 4 = 17.
Rather than generating a single child of the last promising node as we did in backtracking, we
will generate all the children of the most promising node among non-terminated leaves in the
current tree. (Non terminated, i.e., still promising, leaves are also called live.) How can we
tell which of the nodes is most promising? We can do this by comparing the lower bounds of
the live nodes. It is sensible to consider a node with the best bound as most promising,
although this does not, of course, preclude the possibility that an optimal solution will
ultimately belong to a different branch of the state-space tree. This variation of the strategy is
called the best-first branch-and-bound.
We start with the root that corresponds to no elements selected from the cost matrix. The
lower-bound value for the root, denoted lb, is 10. The nodes on the first level of the tree
correspond to selections of an element in the first row of the matrix, i.e., a job for person a.
See the figure given below.

Figure: Levels 0 and 1 of the state-space tree for the instance of the assignment problem being solved with the best-first branch-and-bound algorithm. The number above a node shows the order in which the node was generated. A node's fields indicate the job number assigned to person a and the lower bound value, lb, for this node.

So we have four live leaves—nodes 1 through 4—that may contain an optimal solution. The
most promising of them is node 2 because it has the smallest lower bound value. Following
our best-first search strategy, we branch out from that node first by considering the three different ways of selecting an element from the second row that is not in the second column, i.e., the three different jobs that can be assigned to person b. See the figure given below (Fig 12.7).


Of the six live leaves—nodes 1, 3, 4, 5, 6, and 7—that may contain an optimal solution, we again choose the one with the smallest lower bound, node 5. First, we consider selecting the third column's element from c's row (i.e., assigning person c to job 3); this leaves us with no choice but to select the element from the fourth column of d's row (assigning person d to job 4). This yields leaf 8 (Figure 12.7), which corresponds to the feasible solution {a→2, b→1, c→3, d→4} with the total cost of 13. Its sibling, node 9, corresponds to the feasible solution {a→2, b→1, c→4, d→3} with the total cost of 25. Since its cost is larger than the cost of the solution represented by leaf 8, node 9 is simply terminated. (Of course, if its cost were smaller than 13, we would have to replace the information about the best solution seen so far with the data provided by this node.)

Now, as we inspect each of the live leaves of the last state-space tree—nodes 1, 3, 4, 6, and 7 in Figure 12.7—we discover that their lower-bound values are not smaller than 13, the value of the best selection seen so far (leaf 8). Hence, we terminate all of them and recognize the solution represented by leaf 8 as the optimal solution to the problem.
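The following Python sketch (not Levitin's pseudocode) puts the above steps together for the assignment problem. The 4 × 4 cost matrix is an assumption, since the data table is not reproduced in these notes, but it is chosen to be consistent with the values quoted above (lb = 10 at the root, lb = 17 when 9 is selected from the first row, and the optimal cost 13 for {a→2, b→1, c→3, d→4}).

import heapq

C = [[9, 2, 7, 8],    # person a   (assumed cost matrix; columns are jobs 1..4)
     [6, 4, 3, 7],    # person b
     [5, 8, 1, 8],    # person c
     [7, 6, 9, 4]]    # person d
n = len(C)

def lower_bound(partial):
    # cost of the partially constructed solution plus the smallest element of every remaining row
    cost = sum(C[person][job] for person, job in enumerate(partial))
    return cost + sum(min(C[person]) for person in range(len(partial), n))

def best_first_assignment():
    best_cost, best_assignment = float("inf"), None
    live = [(lower_bound(()), ())]          # live nodes keyed by their lower bound
    while live:
        lb, partial = heapq.heappop(live)   # most promising live node: smallest lb
        if lb >= best_cost:                 # bound not better than the best solution seen so far
            continue
        if len(partial) == n:               # a single feasible solution: update the best seen so far
            best_cost, best_assignment = lb, partial
            continue
        person = len(partial)               # next person to receive a job
        for job in range(n):
            if job not in partial:          # no two selections in the same column
                child = partial + (job,)
                heapq.heappush(live, (lower_bound(child), child))
    return best_cost, best_assignment

print(best_first_assignment())   # (13, (1, 0, 2, 3)), i.e. a→2, b→1, c→3, d→4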

Travelling Salesman Problem


We will be able to apply the branch-and-bound technique to instances of the traveling
salesman problem if we come up with a reasonable lower bound on tour lengths. One very
simple lower bound can be obtained by finding the smallest element in the intercity distance
matrix D and multiplying it by the number of cities n.


But there is a less obvious and more informative lower bound for instances with a symmetric matrix D, which does not require a lot of work to compute. We can compute a lower bound on the length l of any tour as follows. For each city i, 1 ≤ i ≤ n, find the sum si of the distances from city i to the two nearest cities; compute the sum s of these n numbers, divide the result by 2, and, if all the distances are integers, round up the result to the nearest integer:

lb = ⌈s/2⌉... (1)
For example, for the instance in Figure 2.2a, formula (1) yields

Moreover, for any subset of tours that must include particular edges of a given graph, we can modify lower bound (1) accordingly. For example, for all the Hamiltonian circuits of the graph in Figure 2.2a that must include edge (a, d), we get the following lower bound by summing up the lengths of the two shortest edges incident with each of the vertices, with the required inclusion of edges (a, d) and (d, a):

We now apply the branch-and-bound algorithm, with the bounding function given by formula (1), to find the shortest Hamiltonian circuit for the graph in Figure 2.2a.
To reduce the amount of potential work, we take advantage of two observations.
1. First, without loss of generality, we can consider only tours that start at a.
2. Second, because our graph is undirected, we can generate only tours in which b is visited before c. (Refer to the Note at the end of section 2.2 for more details.)
In addition, after visiting n − 1 = 4 cities, a tour has no choice but to visit the remaining unvisited city and return to the starting one. The state-space tree tracing the algorithm's application is given in Figure 2.2b.
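A small Python sketch of formula (1) is given below; the distance matrix is purely hypothetical, since the graph of Figure 2.2a is not reproduced in these notes.

from math import ceil

def tour_lower_bound(D):
    # s = sum over every city i of the distances from i to its two nearest cities
    s = 0
    for i in range(len(D)):
        nearest_two = sorted(D[i][j] for j in range(len(D)) if j != i)[:2]
        s += sum(nearest_two)
    return ceil(s / 2)                 # lb = ⌈s/2⌉, formula (1)

D = [[0, 3, 1, 5],                     # hypothetical symmetric intercity distance matrix
     [3, 0, 6, 7],
     [1, 6, 0, 4],
     [5, 7, 4, 0]]
print(tour_lower_bound(D))             # 14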

Note: An inspection of the graph with 4 nodes (figure given below) reveals three pairs of tours that differ only by their direction. Hence, we could cut the number of vertex permutations by half. We could, for example, choose any two intermediate vertices, say, b and c, and then consider only permutations in which b precedes c. (This trick implicitly defines a tour's direction.)

Figure: Solution to a small instance of the traveling salesman problem by exhaustive search.


Figure 2.2: (a) Weighted graph. (b) State-space tree of the branch-and-bound algorithm to find a shortest Hamiltonian circuit in this graph. The list of vertices in a node specifies a beginning part of the Hamiltonian circuits represented by the node.

Discussion
The strengths and weaknesses of backtracking are applicable to branch-and-bound as well.
The state-space tree technique enables us to solve many large instances of difficult
combinatorial problems. As a rule, however, it is virtually impossible to predict which
instances will be solvable in a realistic amount of time and which will not.
In contrast to backtracking, solving a problem by branch-and-bound has both the challenge
and opportunity of choosing the order of node generation and finding a good bounding
function. Though the best-first rule we used above is a sensible approach, it may or may not
lead to a solution faster than other strategies. (Artificial intelligence researchers are
particularly interested in different strategies for developing state-space trees.)
Finding a good bounding function is usually not a simple task. On the one hand, we want this function to be easy to compute. On the other hand, it cannot be too simplistic; otherwise, it would fail in its principal task of pruning as many branches of the state-space tree as soon as possible. Striking a proper balance between these two competing requirements may require intensive experimentation with a wide variety of instances of the problem in question.


3. 0/1 Knapsack problem


Note: For this topic, as per the syllabus, both textbooks T1 & T2 are suggested.
Here we discuss the concepts from T1 first and then those from T2.
Topic from T1 (Levitin)
Let us now discuss how we can apply the branch-and-bound technique to solving the
knapsack problem. Given n items of known weights wi and values vi, i = 1, 2, . . . , n, and a
knapsack of capacity W, find the most valuable subset of the items that fit in the knapsack.
maximize Σ pᵢxᵢ subject to Σ wᵢxᵢ ≤ W, where xᵢ = 0 or 1 and both sums are taken over 1 ≤ i ≤ n.
It is convenient to order the items of a given instance in descending order by their value-to-
weight ratios.

Each node on the ith level of state space tree, 0 ≤ i ≤ n, represents all the subsets of n items
that include a particular selection made from the first i ordered items. This particular
selection is uniquely determined by the path from the root to the node: a branch going to the
left indicates the inclusion of the next item, and a branch going to the right indicates its
exclusion.
We record the total weight w and the total value v of this selection in the node, along with
some upper bound ub on the value of any subset that can be obtained by adding zero or more
items to this selection. A simple way to compute the upper bound ub is to add to v, the total
value of the items already selected, the product of the remaining capacity of the knapsack W
– w and the best per unit payoff among the remaining items, which is vi+1/wi+1:

ub = v + (W − w)(vi+1/wi+1).
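The bound is a one-line computation; here is a small Python sketch (not the textbook's code). The lists values and weights are assumed to be 0-based and already sorted by value-to-weight ratio, so for a node at level i the "next" item is values[i]; the item data in the usage lines is an assumption chosen to match the node values quoted in the example that follows (W = 10).

def upper_bound(v, w, W, i, values, weights):
    # v, w: total value and weight of the items selected so far; items 0..i-1 already decided
    if i >= len(values) or w > W:                    # nothing left to add (or infeasible node)
        return v
    return v + (W - w) * (values[i] / weights[i])    # values[i]/weights[i] plays the role of v_{i+1}/w_{i+1}

print(upper_bound(0, 0, 10, 0, [40, 42, 25, 12], [4, 7, 5, 3]))    # 100.0, the root's ub below
print(upper_bound(40, 4, 10, 1, [40, 42, 25, 12], [4, 7, 5, 3]))   # 76.0, node 1's ub below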
Example: Consider the following problem. The items are already ordered in descending order
of their value-to-weight ratios.

Let us apply the branch-and-bound algorithm. At the root of the state-space tree (see Figure
12.8), no items have been selected as yet. Hence, both the total weight of the items already
selected w and their total value v are equal to 0. The value of the upper bound is 100.
Node 1, the left child of the root, represents the subsets that include item 1. The total weight
and value of the items already included are 4 and 40, respectively; the value of the upper
bound is 40 + (10 − 4) * 6 = 76.

Node 2 represents the subsets that do not include item 1. Accordingly, w = 0, v = 0, and ub =
0 + (10 − 0) * 6 = 60. Since node 1 has a larger upper bound than the upper bound of node 2,
it is more promising for this maximization problem, and we branch from node 1 first. Its
children—nodes 3 and 4—represent subsets with item 1 and with and without item 2,
respectively. Since the total weight w of every subset represented by node 3 exceeds the
knapsack's capacity, node 3 can be terminated immediately.
Node 4 has the same values of w and v as its parent; the upper bound ub is equal to 40 + (10 − 4) * 5 = 70. Selecting node 4 over node 2 for the next branching (due to its better ub), we get nodes 5 and 6 by respectively including and excluding item 3. The total weights and values as
well as the upper bounds for these nodes are computed in the same way as for the preceding
nodes.
Branching from node 5 yields node 7, which represents no feasible solutions, and node 8, which represents just a single subset {1, 3} of value 65. The remaining live nodes 2 and 6 have smaller upper-bound values than the value of the solution represented by node 8. Hence, both can be terminated, making the subset {1, 3} of node 8 the optimal solution to the problem.
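A minimal best-first branch-and-bound sketch (again, not Levitin's pseudocode) for this instance follows. The item data is assumed, since the table is not reproduced in these notes, but it is consistent with the node values quoted above: W = 10, root ub = 100, node 1 ub = 76, and the optimal subset {1, 3} of value 65.

import heapq

values  = [40, 42, 25, 12]     # assumed item values, ordered by value-to-weight ratio
weights = [4, 7, 5, 3]         # assumed item weights
W, n = 10, 4

def ub(v, w, i):
    # upper bound of a level-i node with value v and weight w
    if i >= n or w > W:
        return v
    return v + (W - w) * (values[i] / weights[i])

def best_first_knapsack():
    best_v, best_subset = 0, []
    live = [(-ub(0, 0, 0), 0, 0, 0, [])]   # max-heap via negated ub: (-ub, level, weight, value, items)
    while live:
        neg_ub, i, w, v, chosen = heapq.heappop(live)
        if -neg_ub <= best_v:              # bound not better than the best value seen so far
            continue
        if v > best_v:                     # every node is itself a feasible subset: update the best
            best_v, best_subset = v, chosen
        if i == n:
            continue
        if w + weights[i] <= W:            # left child: include the next item (only if it still fits)
            heapq.heappush(live, (-ub(v + values[i], w + weights[i], i + 1),
                                  i + 1, w + weights[i], v + values[i], chosen + [i + 1]))
        heapq.heappush(live, (-ub(v, w, i + 1), i + 1, w, v, chosen))   # right child: exclude it
    return best_v, best_subset

print(best_first_knapsack())   # (65, [1, 3]): the subset {1, 3} of value 65

(Note that this sketch updates the best value at every generated node, which is exactly the refinement mentioned in the paragraph below.)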

Solving the knapsack problem by a branch-and-bound algorithm has a rather unusual characteristic. Typically, internal nodes of a state-space tree do not define a point of the problem's search space, because some of the solution's components remain undefined. (See, for example, the branch-and-bound tree for the assignment problem discussed in the preceding subsection.) For the knapsack problem, however, every node of the tree represents a subset of the items given. We can use this fact to update the information about the best subset seen so far after generating each new node in the tree. If we had done this for the instance investigated above, we could have terminated nodes 2 and 6 before node 8 was generated because they both are inferior to the subset of value 65 of node 5.

Concepts from textbook T2 (Horowitz)


Let us understand some of the terminology used in backtracking & branch and bound.
Live node - a node which has been generated and whose children have not yet all been generated.
E-node - a live node whose children are currently being explored. In other words, an E-node is a node currently being expanded.
Dead node - a node that is either not to be expanded further, or for which all of its children have been generated.
Bounding function - used to kill live nodes without generating all their children.
Backtracking - depth-first node generation with bounding functions.
Branch-and-Bound - a method in which an E-node remains the E-node until it is dead.
Breadth-First Search (FIFO branch-and-bound): each new node is placed in a queue; the front of the queue becomes the new E-node (see the sketch below).
Depth Search (D-Search): new nodes are placed into a stack; the last node added is the first to be explored.
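The following Python sketch (not from the textbook) shows how the choice of data structure for the live nodes fixes the search order: a queue gives FIFO branch-and-bound, a stack would give D-search, and a priority queue keyed on a cost or bound function gives LC (least cost) branch-and-bound. Here children, is_answer and bound are hypothetical problem-specific callbacks.

from collections import deque
import heapq, itertools

def fifo_branch_and_bound(root, children, is_answer):
    live = deque([root])                    # live nodes; the front becomes the next E-node
    while live:
        e_node = live.popleft()             # FIFO: the oldest live node is expanded next
        if is_answer(e_node):
            return e_node
        live.extend(children(e_node))       # the E-node stays the E-node until all its children are generated
    return None

def lc_branch_and_bound(root, children, is_answer, bound):
    tie = itertools.count()                 # tie-breaker so the heap never compares nodes directly
    live = [(bound(root), next(tie), root)]
    while live:
        _, _, e_node = heapq.heappop(live)  # LC: the live node with the least bound is expanded next
        if is_answer(e_node):
            return e_node
        for child in children(e_node):
            heapq.heappush(live, (bound(child), next(tie), child))
    return None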


0/1 Knapsack problem - Branch and Bound based solution


As the technique discussed here is applicable to minimization problems, let us convert the knapsack problem (maximizing the profit) into a minimization problem by negating the objective function, i.e., minimize −Σ pᵢxᵢ subject to Σ wᵢxᵢ ≤ W.


LC (Least Cost) Branch and Bound solution



FIFO Branch and Bound solution


Conclusion


4. NP-Complete and NP-Hard problems


Basic concepts
For many of the problems we know and study, the best algorithms for their solution have computing times that can be clustered into two groups:
1. Solutions bounded by a polynomial - examples include binary search O(log n), linear search O(n), sorting algorithms like merge sort O(n log n) and bubble sort O(n²), and matrix multiplication O(n³), or in general O(nᵏ) where k is a constant.
2. Solutions bounded by a non-polynomial - examples include the travelling salesman problem O(n² 2ⁿ) and the knapsack problem O(2^(n/2)). As the time increases exponentially, even moderate-size problems cannot be solved (see the small illustration below).
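As a quick numeric illustration (not from the textbook), compare a cubic step count with an exponential one for a few input sizes:

for n in (10, 20, 40, 60):
    print(n, n**3, 2**n)   # n³ stays modest, while 2⁶⁰ is already about 1.15 × 10¹⁸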
So far, no one has been able to devise an algorithm which is bounded by a polynomial for the problems belonging to the non-polynomial group. However, the impossibility of such an algorithm has not been proved.

Non deterministic algorithms


We also need the idea of two models of computer (Turing machine): deterministic and non-deterministic. A deterministic computer is the regular computer we are used to; a non-deterministic computer is one that is just like the deterministic one except that it has unlimited parallelism, so that any time you come to a branch, you spawn a new "process" and examine both sides.
When the result of every operation is uniquely defined, it is called a deterministic algorithm.
When the outcome is not uniquely defined but is limited to a specific set of possibilities, we call it a nondeterministic algorithm.
We use new statements to specify such nondeterministic algorithms.
• choice(S) - arbitrarily choose one of the elements of set S
• failure - signals an unsuccessful completion
• success - signals a successful completion
The assignment X = choice(1:n) could result in X being assigned any value from the integer range [1..n]. There is no rule specifying how this value is chosen.

"A nondeterministic algorithm terminates unsuccessfully iff there is no set of choices which leads to the success signal."

Example-1: Searching for an element x in a given set of elements A(1:n). We are required to determine an index j such that A(j) = x, or j = 0 if x is not present.

j := choice(1:n)
if A(j) = x then print(j); success endif
print('0'); failure

Example-2: Sorting n positive integers with a nondeterministic algorithm

procedure NSORT(A,n);
//sort n positive integers//
var integer A(n), B(n), n, i, j;
begin
B := 0; //B is initialized to zero//
for i := 1 to n do
begin
j := choice(1:n);
if B(j) <> 0 then failure;
B(j) := A(i);
end;

for i := 1 to n-1 do //verify order//


if B(i) > B(i+1) then failure;
print(B);
success;
end.
"A nondeterministic machine does not make any copies of an algorithm every time a choice is to be made. Instead it has the ability to correctly choose an element from the given set."

A deterministic interpretation of a nondeterministic algorithm can be obtained by allowing unbounded parallelism in the computation. Each time a choice is to be made, the algorithm makes several copies of itself, one copy for each of the possible choices.
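As a small illustration (a sketch, not the textbook's code), the nondeterministic search of Example-1 can be simulated deterministically by trying every possible value of choice(1:n), which is what "one copy per possible choice" amounts to on a sequential machine.

def nd_search(A, x):
    for j in range(len(A)):            # simulate choice(1:n): examine every branch
        if A[j] == x:                  # this branch reaches the success signal
            return j + 1               # 1-based index, as in the pseudocode above
    return 0                           # no branch succeeds: failure

print(nd_search([7, 3, 9, 4], 9))      # 3
print(nd_search([7, 3, 9, 4], 5))      # 0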

Decision vs Optimization algorithms


An optimization problem tries to find an optimal solution.
A decision problem tries to answer a yes/no question. Most of the problems can be specified
in decision and optimization versions.
For example, the traveling salesman problem can be stated in two ways:
• Optimization - find a Hamiltonian cycle of minimum weight.
• Decision - is there a Hamiltonian cycle of weight at most k?
For the graph coloring problem:
• Optimization - find the minimum number of colors needed to color the vertices of a graph so that no two adjacent vertices are colored the same color.
• Decision - does there exist a coloring of the graph's vertices with no more than m colors?
Many optimization problems can be recast into decision problems with the property that the decision problem can be solved in polynomial time if and only if the corresponding optimization problem can.


P, NP, NP-Complete and NP-Hard classes


NP stands for Nondeterministic Polynomial time.

Definition: P is the set of all decision problems solvable by a deterministic algorithm in polynomial time.

Definition: NP is the set of all decision problems solvable by a nondeterministic algorithm in polynomial time. This also implies P ⊆ NP.

Problems known to be in P are trivially in NP: the nondeterministic machine just never troubles itself to fork another process and acts just like a deterministic one. One example of a problem that is in NP but not known to be in P is Integer Factorization.

But there are some problems which are known to be in NP but are not known to be in P. The traditional example is the decision-problem version of the Travelling Salesman Problem (decision-TSP). It is not known whether decision-TSP is in P: there is no known polynomial-time solution, but there is no proof that such a solution does not exist.
There are problems that are known to be in neither P nor NP; a simple example is to enumerate all the bit vectors of length n. No matter what, that takes 2ⁿ steps.
Now, one more concept: given decision problems P and Q, if a polynomial-time algorithm can transform any instance of Q into an instance of P (so that an algorithm solving P can be used to solve Q), Q is said to be polynomial-time reducible (or just reducible) to P.

The most famous unsolved problem in computer science is "whether P = NP or P ≠ NP?"

Figure: Commonly believed relationship between P and NP

Figure: Commonly believed relationship between P, NP, NP-Complete and NP-Hard problems

Definition: A decision problem D is said to be NP-complete if:


1. it belongs to class NP
2. every problem in NP is polynomially reducible to D
The fact that closely related decision problems are polynomially reducible to each other is not very surprising. For example, the Hamiltonian circuit problem is polynomially reducible to the decision version of the traveling salesman problem.


NP-Complete problems have the property that if any one of them can be solved in polynomial time, then every problem in NP can be solved in polynomial time; i.e., if anyone ever finds a polynomial-time solution to one NP-complete problem, they have automatically got one for all the NP-complete problems, and that will also mean that P = NP.
An example of an NP-complete problem is the CNF-satisfiability problem, which deals with boolean expressions; this was proved by Cook in 1971. The CNF-satisfiability problem asks whether or not one can assign the values true and false to the variables of a given boolean expression in its CNF form so as to make the entire expression true.
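As a concrete (though exponential) illustration, here is a small brute-force sketch of the CNF-satisfiability question. The representation is an assumption made for this example only: a formula is a list of clauses, and each clause is a list of integer literals, with 1 meaning x1 and -2 meaning NOT x2.

from itertools import product

def is_satisfiable(clauses, num_vars):
    # try every true/false assignment; a clause is true if at least one of its literals is true
    for bits in product([False, True], repeat=num_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause) for clause in clauses):
            return True
    return False

print(is_satisfiable([[1, -2], [-1, 2]], 2))   # True: (x1 OR NOT x2) AND (NOT x1 OR x2) with x1 = x2 = true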
Over the years many problems in NP have been proved to be in P (like Primality Testing). Still, there are many problems in NP not proved to be in P, i.e., the question still remains whether P = NP. NP-Complete problems help in answering this question. They are a subset of NP problems with the property that all other NP problems can be reduced to any of them in polynomial time. So, they are the hardest problems in NP in terms of running time. If it can be shown that any NP-Complete problem is in P, then all problems in NP will be in P (because of the NP-Complete definition), and hence P = NP = NPC.
NP-Hard problems - these problems need not have any bound on their running time. If any NP-Complete problem is polynomial-time reducible to a problem X, then that problem X belongs to the NP-Hard class. Hence, all NP-Complete problems are also NP-Hard. In other words, if an NP-Hard problem is nondeterministic polynomial-time solvable, it is an NP-Complete problem. An example of an NP-Hard problem that is not NP-Complete is the Halting Problem.
If an NP-Hard problem can be solved in polynomial time, then all NP-Complete problems can be solved in polynomial time.
"All NP-Complete problems are NP-Hard, but not all NP-Hard problems are NP-Complete." NP-Complete problems are a subclass of NP-Hard problems.
The more conventional optimization version of the Traveling Salesman Problem, finding the shortest route, is NP-hard, not strictly NP-complete.

*****


Module-1

1. Find gcd(31415,14142) by applying Euclid’s algorithm. Estimate how many times it is faster when
compared to the algorithm based on consecutive integer checking.
2. Compare the order of growth of ½ n(n−1) and n².
3. Explain the mathematical analysis of Fibonacci recursive algorithm.
4. Write Bruteforce string matching algorithm.
5. Define three asymptotic notations.
6. Design a recursive algorithm for solving tower of Hanoi problem and give the general plan of
analyzing that algorithm. Show that the time complexity of tower of Hanoi algorithm is exponential in
nature.
7. With algorithm and a suitable example, explain how the brute force string matching algorithm works.
Analyse for its complexity.
8. With the help of a flow chart, explain the various steps of algorithm design and analysis process.

9. If f1(n) ∈ O(g1(n)) and f2(n) ∈ O(g2(n)), prove that f1(n) + f2(n) ∈ O(max{g1(n), g2(n)}).
10. Write an algorithm for selection sort and show that the time complexity of this algorithm is quadratic.
11. What is an algorithm? What are the properties of an algorithm? Explain with an example.
12. Express using asymptotic notation: i) n! ii) 6·2ⁿ + n².
13. Give formal definitions of asymptotic notations.
14. Give informal definitions of asymptotic notations.

Module-2
1. Find the upper bound of the recurrences given below by the substitution method: i) T(n) = 2T(n/2) + n ii) T(n) = T(n/2) + 1
2. Sort the following elements using merge sort. Write the recursion tree. 70, 20, 30, 40, 10, 50, 60
3. Write the algorithm for quick sort. Derive the worst case time efficiency of the algorithm.
4. Give the general divide-and-conquer recurrence with necessary explanation. Solve the recurrences: i) T(n) = 2T(n/2) + 1 ii) T(n) = T(n/2) + n
5. Explain with suitable example a sorting algorithm that uses divide and conquer technique which
divides the problem size by considering position. Give the corresponding algorithms and analyse for
time complexity.
6. Give the problem definition of the defective chessboard problem. Explain clearly how the divide and conquer method can be applied to solve the 4 × 4 defective chessboard problem.
7. What is the divide and conquer method? Show that the worst case efficiency of the binary search algorithm is O(log n).
8. What is stable algorithm? Is quick sort stable? Explain with example.
9. Give an algorithm for merge sort.
10. Trace the merge sort algorithm for the data: 60, 50, 25, 10, 35, 25, 75, 30
11. Explain matrix multiplication with respect to divide and conquer technique.


Module-3

1. Write greedy method control abstraction for subset paradigm.


2. Using greedy method, trace the following graph to get shortest path from vertex ‘a’ to all other
vertices

3. What is the solution generated by the function job scheduling (JS) when
n=5,[P1,P2,P3,P4,P5]=[10,15,10,5,1] and [d1,d2,d3,d4,d5]=[2,2,1,3,3]
4. Apply Prim's algorithm for the following graph to find the minimum spanning tree.

5. What is the job sequencing with deadlines problem? Find the solution generated by job sequencing with deadlines for 7 jobs given profits 3, 5, 20, 18, 1, 6, 30 and deadlines 1, 3, 4, 3, 2, 1, 2 respectively.
6. Define minimum cost spanning tree. Give high level description of Prim’s algorithm to find minimum
spanning tree and find minimum spanning tree for graph shown in following figure using Prim’s
algorithm.

7. What is a knapsack problem? Obtain solution for the knapsack problem using greedy method for n=3,
capacity m=20 values 25, 24, 15 and weights 18, 15, 10 respectively.
8. Write Kruskal's algorithm to construct a minimum spanning tree and show that the time efficiency is O(|E| log |E|).

9. Apply Kruskal’s algorithm to find the min spanning tree of the graph.


10. Write Dijkstra’s algorithm to find single source shortest path.


11. Explain the concept of the greedy technique for Prim's algorithm. Obtain the minimum cost spanning tree for the graph below using Prim's algorithm.

12. Solve the following single source shortest path problem assuming vertex 5 as the source.

13. Define the following: i) Optimal solution ii) Feasible solution


14. Design an algorithm for the problem job sequencing with deadlines.
15. Design an algorithm for the knapsack problem that uses the greedy method.

Module-4
16. Use Dynamic programming, compute the shortest path from vertex 1 to all other vertices

17. Solve the Knapsack instance n=3,{W1,W2,W3}={1,2,2} and {P1,P2,P3}={18,16,6} and M=4 by
dynamic programming.
18. For the given graph, obtain optimal cost tour using dynamic programming.


19. What is dynamic programming? Explain how you would solve all pair shortest paths problem using
dynamic programming.
20. Give the necessary recurrence relation used to solve 0/1 knapsack problem using dynamic
programming. Apply it to solve the following instance and show the results n=4, m=5 values
12,10,20,15 and weights are 2,1,3,2 respectively.
21. Solve the following TSP which is represented as a graph shown in the figure using dynamic
programming

22. Write the dynamic programming algorithm to compute binomial co-efficient and obtain its time
complexity.
23. Explain Warshall algorithm to find the transitive closure of a directed graph. Apply this algorithm to
the graph given below.

24. State Floyd’s algorithm. Solve all pairs shortest path problem for the given graph using Floyd
algorithm

25. Using Floyd’s algorithm solve the all pair shortest problem for the graph whose weight matrix is
given below:


26. Using dynamic programming, solve the following knapsack instance.


27. N=4 M=5 (W1, W2, W3, W4) = (2, 1, 3, 2) (P1, P2, P3, P4) = (12, 10, 20, 15).
28. Outline an exhaustive search algorithm to solve the traveling salesman problem.
29. Prove that the time efficiency of Warshall’s algorithm is cubic.
30. Write a pseudocode of the algorithm that finds the composition of an optimal subset from the table
generated by the bottom-up dynamic programming algorithm for the knapsack problem.

31. What are the three variations of the decrease and conquer technique?
32. Conduct DFS for the following graph:

33. Apply DFS based algorithm to solve topological sorting problem for the following graph:

34. Construct the shift table for the pattern EARN and search for the same in the text FAIL-MEANS-FIRST-ATTEMPT-IN-LEARNING using Horspool's algorithm.
35. Explain the working of depth-first search algorithm for the graph shown in following figure.

36. With pseudocode, explain how the searching for a pattern BARBER in the given text
JIM_SAW_ME_IN_BARBER_SHOP is performed using Horspool’s algorithm.
37. With suitable example, explain topological sorting.
38. Explain decrease and conquer method, with a suitable example.
39. Apply the DFS – based algorithm to solve the topological sorting problem for given graph.


40. Write and explain DFS and BFS algorithm with example.
41. Obtain topological sorting for the given digraph using the source removal method.

42. What are the various applications of BFS and DFS?


43. Define the terms a) tree edge b) cross edge c) back edge.
44. How do you obtain topological sort using source removal method? Explain with an example
and write the algorithm for the same.

Module-5

1. Explain the four methods used to establish lower bounds of algorithm.


2. Define decision trees. Write the decision tree for the three-element selection sort.
3. Define P, NP and NP complete problems.
4. What are NP-complete problems and NP-hard problems? Apply four iterations of Newton’s method
to compute √2 and estimate the absolute and relative errors of the computations.
5. What do you mean by lower bound algorithm? What are the advantages of finding the lower bound?
6. Prove that the classic recursive algorithm for the tower of Hanoi puzzle makes the minimum
number of disks moves needed to solve it.
7. Write short notes on:
a. Tight lower bound
b. Trivial lower bound
8. What is numerical analysis? Briefly explain overflow and underflow in numerical analysis algorithms.
9. What are tractable problems and intractable problems?
10. What are decision trees? Obtain the decision tree to find minimum of 3 numbers.
11. Explain how backtracking is used for solving the 4-queens problem. Write the state space tree.
12. Explain Information theoretic lower bound.
13. Apply the twice-around-the-tree algorithm for the travelling salesman problem for the following graph.

14. Explain how the TSP problem can be solved using the branch and bound method.
15. Solve the 8-queens problem for a feasible sequence (6, 4, 7, 1).
16. What is backtracking? Apply backtracking to solve the following instance of the sum of subsets problem: S = {3, 5, 6, 7} and d = 15.
17 Write an algorithm to place all queens on the chess board.
18 What is Nearest-neighbor algorithm to compute TSP problem?
19 Explain the various models for parallel computations.


20. Let the input to the prefix computation be 5, 12, 8, 6, 3, 9, 11, 12, 1, 5, 6, 7, 10, 4, 3, 5 and there are four processors and ⊕ stands for addition. With a diagram, explain how the prefix computation is done by the parallel algorithm.
21. Let the input to the prefix computation be 5, 12, 8, 6, 3, 9, 11, 12, 1, 5, 6, 7, 10, 4, 3, 5 and there are four processors and ⊕ stands for addition. With a diagram, explain how the prefix computation is done by the work-optimal algorithm.
22 Explain how M is calculated using parallel algorithm for given graph.

23. Write short notes on:
a. Hamiltonian problem
b. M-Coloring
24. Explain the prefix computation problem and the list ranking algorithm, with suitable examples.
26. What are the different ways of resolving read and write conflicts?
27. What is the prefix computation problem? Give the algorithms for prefix computation which use: i) n processors ii) n/log n processors. Obtain the time complexities of these algorithms.
29. What is linear speedup? Obtain the maximum speedup when P = 10 and various values of f = 0.5, 0.1, 0.01.
31. What is the fixed connection model? What is the shared memory model?
32. What is EREW PRAM, CREW PRAM, and CRCW PRAM?
33. What is superlinear speedup? Explain.


Institute Vision
"To become a recognized world-class Women Educational Institution, by imparting professional education to the students, creating technical opportunities through academic excellence and technical achievements, with ethical values."

Institute Mission
• To support value-based education with state-of-the-art infrastructure.
• To empower women with the additional skills for a professional future career.
• To enrich students with research blends in order to fulfil international challenges.
• To create multidisciplinary centres of excellence.
• To achieve accreditation standards towards international education recognition.
• To establish more Post Graduate & Research courses.
• To increase the number of Doctorates, strengthening the research quality of academics.
