Lecture Notes on
15CS43
Design and Analysis of
Algorithms
(CBCS Scheme)
Prepared by
Harivinod N
Dept. of Computer Science and Engineering,
VCET Puttur
May 2017
Module-5: Backtracking
Contents
Course Website
www.TechJourney.in
1. Backtracking
Some problems can be solved by exhaustive search. The exhaustive-search technique
suggests generating all candidate solutions and then identifying the one (or the ones) with a
desired property.
Backtracking is a more intelligent variation of this approach. The principal idea is to
construct solutions one component at a time and evaluate such partially constructed
candidates as follows. If a partially constructed solution can be developed further without
violating the problem’s constraints, it is done by taking the first remaining legitimate option
for the next component. If there is no legitimate option for the next component, no
alternatives for any remaining component need to be considered. In this case, the algorithm
backtracks to replace the last component of the partially constructed solution with its next
option.
It is convenient to implement this kind of processing by constructing a tree of choices being
made, called the state-space tree. Its root represents an initial state before the search for a
solution begins. The nodes of the first level in the tree represent the choices made for the first
component of a solution; the nodes of the second level represent the choices for the second
component, and so on. A node in a state-space tree is said to be promising if it corresponds to
a partially constructed solution that may still lead to a complete solution; otherwise, it is
called non-promising. Leaves represent either non-promising dead ends or complete
solutions found by the algorithm.
In the majority of cases, a state-space tree for a backtracking algorithm is constructed in the
manner of depth-first search. If the current node is promising, its child is generated by adding
the first remaining legitimate option for the next component of a solution, and the processing
moves to this child. If the current node turns out to be non-promising, the algorithm
backtracks to the node’s parent to consider the next possible option for its last component; if
there is no such option, it backtracks one more level up the tree, and so on. Finally, if the
algorithm reaches a complete solution to the problem, it either stops (if just one solution is
required) or continues searching for other possible solutions.
We start with the empty board and then place queen 1 in the first possible position of its row,
which is in column 1 of row 1. Then we place queen 2, after trying unsuccessfully columns 1
and 2, in the first acceptable position for it, which is square (2, 3), the square in row 2 and
column 3. This proves to be a dead end because there is no acceptable position for queen 3.
So, the algorithm backtracks and puts queen 2 in the next possible position at (2, 4). Then
queen 3 is placed at (3, 2), which proves to be another dead end. The algorithm then
backtracks all the way to queen 1 and moves it to (1, 2). Queen 2 then goes to (2, 4), queen 3
to (3, 1), and queen 4 to (4, 3), which is a solution to the problem. The state-space tree of this
search is shown in figure.
Figure: State-space tree of solving the four-queens problem by backtracking.
× denotes an unsuccessful attempt to place a queen in the indicated column. The
numbers above the nodes indicate the order in which the nodes are generated.
If other solutions need to be found, the algorithm can simply resume its operations at the leaf
at which it stopped. Alternatively, we can use the board’s symmetry for this purpose.
Finally, it should be pointed out that a single solution to the n-queens problem for any n ≥ 4
can be found in linear time.
Note: The algorithm NQueens() is not in the syllabus. It is given here for interested learners.
The algorithm is taken from textbook T2.
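Since the NQueens() pseudocode itself is not reproduced in these notes, here is a minimal Python sketch of the backtracking process described above (the function and variable names are ours, not T2's):

```python
def n_queens(n):
    """Return the first solution found: col[r] is the column of the queen in row r."""
    col = []

    def safe(r, c):
        # A square is safe if no earlier queen shares its column or a diagonal.
        return all(c != cc and abs(r - rr) != abs(c - cc)
                   for rr, cc in enumerate(col))

    def place(r):
        if r == n:                 # all rows filled: a complete solution
            return True
        for c in range(n):         # try columns left to right
            if safe(r, c):
                col.append(c)      # extend the partial solution
                if place(r + 1):
                    return True
                col.pop()          # dead end: backtrack
        return False               # no legitimate option in this row

    return col if place(0) else None

# The four-queens solution traced above, queens at (1,2), (2,4), (3,1), (4,3),
# appears here as 0-based columns:
print(n_queens(4))   # [1, 3, 0, 2]
```

Following the trace in the text, the search first tries column 1 for queen 1 and backtracks twice before moving queen 1 to column 2, after which the solution completes without further backtracking.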
1.3 Sum of subsets problem
Problem definition: Find a subset of a given set A = {a1, . . . , an } of n positive integers
whose sum is equal to a given positive integer d.
For example, for A = {1, 2, 5, 6, 8} and d = 9, there are two solutions: {1, 2, 6} and {1, 8}.
Of course, some instances of this problem may have no solutions.
It is convenient to sort the set’s elements in increasing order. So, we will assume that
a1 < a2 < . . . < an.
The state-space tree can be constructed as a binary tree like that in the figure shown below for
the instance A = {3, 5, 6, 7} and d = 15.
The number inside a node is the sum of the elements already included in the subsets
represented by the node. The inequality below a leaf indicates the reason for its termination.
Example: Apply backtracking to solve the following instance of the subset sum problem: A
= {1, 3, 4, 5} and d = 11.
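The pruning rules shown under the leaves can be sketched in Python as follows (a minimal illustration of the idea, not the textbook's pseudocode). Because the elements are sorted, a node can be terminated as soon as its sum plus the next element exceeds d, or when even taking all remaining elements cannot reach d:

```python
def subset_sum(a, d):
    """Return all subsets of a (as sorted lists) whose elements sum to d."""
    a = sorted(a)
    solutions = []

    def extend(i, chosen, s, remaining):
        if s == d:                    # complete solution found
            solutions.append(list(chosen))
            return
        if i == len(a):
            return
        if s + a[i] > d:              # later elements are no smaller: prune
            return
        if s + remaining < d:         # even taking everything left falls short
            return
        chosen.append(a[i])           # left branch: include a[i]
        extend(i + 1, chosen, s + a[i], remaining - a[i])
        chosen.pop()
        extend(i + 1, chosen, s, remaining - a[i])   # right branch: exclude a[i]

    extend(0, [], 0, sum(a))
    return solutions

print(subset_sum([1, 2, 5, 6, 8], 9))   # [[1, 2, 6], [1, 8]]
print(subset_sum([1, 3, 4, 5], 11))     # [] – this exercise instance has no solution
```

Note that the exercise instance A = {1, 3, 4, 5}, d = 11 turns out to have no solution: the full backtracking tree terminates every branch without reaching the sum 11.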
Analysis
How can we find a lower bound on the cost of an optimal selection without actually solving
the problem?
We can do this by several methods. For example, it is clear that the cost of any solution,
including an optimal one, cannot be smaller than the sum of the smallest elements in each
of the matrix’s rows. For the instance here, this sum is 2 + 3 + 1 + 4 = 10. We can and will
apply the same thinking to partially constructed solutions. For example, for any legitimate
selection that selects 9 from the first row, the lower bound will be 9 + 3 + 1 + 4 = 17.
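The cost matrix itself is not reproduced in these notes; the sketch below assumes the matrix from Levitin's assignment-problem example, whose row minima (2, 3, 1, 4) match the sums quoted above:

```python
# Lower bound for the assignment problem: cost of the jobs assigned so far
# plus the smallest entry in every not-yet-assigned row.
# The matrix C is an assumption (Levitin's example; row minima are 2, 3, 1, 4).
C = [[9, 2, 7, 8],    # person a
     [6, 4, 3, 7],    # person b
     [5, 8, 1, 8],    # person c
     [7, 6, 9, 4]]    # person d

def lb(C, partial):
    """partial[i] = column chosen in row i, for the first len(partial) rows."""
    fixed = sum(C[i][j] for i, j in enumerate(partial))
    free = sum(min(row) for row in C[len(partial):])
    return fixed + free

print(lb(C, []))    # root: 2 + 3 + 1 + 4 = 10
print(lb(C, [0]))   # select 9 from the first row: 9 + 3 + 1 + 4 = 17
```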
Rather than generating a single child of the last promising node as we did in backtracking, we
will generate all the children of the most promising node among non-terminated leaves in the
current tree. (Non-terminated, i.e., still promising, leaves are also called live.) How can we tell
which of the nodes is most promising? We can do this by comparing the lower bounds of the
live nodes. It is sensible to consider a node with the best bound as most promising, although
this does not, of course, preclude the possibility that an optimal solution will ultimately
belong to a different branch of the state-space tree. This variation of the strategy is called
best-first branch-and-bound.
We start with the root that corresponds to no elements selected from the cost matrix. The
lower-bound value for the root, denoted lb, is 10. The nodes on the first level of the tree
correspond to selections of an element in the first row of the matrix, i.e., a job for person a.
See the figure given below.
But there is a less obvious and more informative lower bound for instances with symmetric
matrix D, which does not require a lot of work to compute. We can compute a lower bound
on the length l of any tour as follows. For each city i, 1 ≤ i ≤ n, find the sum si of the distances
from city i to the two nearest cities; compute the sum s of these n numbers, divide the result
by 2, and, if all the distances are integers, round up the result to the nearest integer:
lb = ⌈s/2⌉ ... (1)
For example, for the instance in Figure 2.2a, formula (1) yields
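As an illustration, here is a small Python sketch of this bound. The edge weights below are an assumption (Levitin's five-city instance), since the graph of Figure 2.2a is not reproduced in these notes:

```python
import math

# Assumed edge weights for the five-city instance of Figure 2.2a (Levitin).
w = {('a', 'b'): 3, ('a', 'c'): 1, ('a', 'd'): 5, ('a', 'e'): 8,
     ('b', 'c'): 6, ('b', 'd'): 7, ('b', 'e'): 9,
     ('c', 'd'): 4, ('c', 'e'): 2, ('d', 'e'): 3}

def dist(u, v):
    return w[(u, v)] if (u, v) in w else w[(v, u)]

def tsp_lower_bound(cities):
    # s_i = sum of the distances from city i to its two nearest cities;
    # lb = ceil(s / 2), where s is the sum of all s_i   ... formula (1)
    s = 0
    for c in cities:
        nearest = sorted(dist(c, o) for o in cities if o != c)
        s += nearest[0] + nearest[1]
    return math.ceil(s / 2)

print(tsp_lower_bound('abcde'))   # ceil(28 / 2) = 14
```

For this assumed instance, s = 4 + 9 + 3 + 7 + 5 = 28, so formula (1) gives lb = 14.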
Moreover, for any subset of tours that must include particular edges of a given graph, we can
modify lower bound (formula 1) accordingly. For example, for all the Hamiltonian circuits of
the graph in Figure 2.2a that must include edge (a, d), we get the following lower bound by
summing up the lengths of the two shortest edges incident with each of the vertices, with the
required inclusion of edges (a, d) and (d, a):
We now apply the branch-and-bound algorithm, with the bounding function given by
formula (1), to find the shortest Hamiltonian circuit for the graph in Figure 2.2a.
To reduce the amount of potential work, we take advantage of two observations.
1. First, without loss of generality, we can consider only tours that start at a.
2. Second, because our graph is undirected, we can generate only tours in which b is
visited before c. (Refer Note at the end of section 2.2 for more details)
In addition, after visiting n − 1 = 4 cities, a tour has no choice but to visit the remaining
unvisited city and return to the starting one. The state-space tree tracing the algorithm’s
application is given in Figure 2.2b.
Figure: Solution to a small instance of the traveling salesman problem by branch-and-bound.
Discussion
The strengths and weaknesses of backtracking are applicable to branch-and-bound as well.
The state-space tree technique enables us to solve many large instances of difficult
combinatorial problems. As a rule, however, it is virtually impossible to predict which
instances will be solvable in a realistic amount of time and which will not.
In contrast to backtracking, solving a problem by branch-and-bound has both the challenge
and opportunity of choosing the order of node generation and finding a good bounding
function. Though the best-first rule we used above is a sensible approach, it may or may not
lead to a solution faster than other strategies. (Artificial intelligence researchers are
particularly interested in different strategies for developing state-space trees.)
Finding a good bounding function is usually not a simple task. On the one hand, we want this
function to be easy to compute. On the other hand, it cannot be too simplistic; otherwise, it
would fail in its principal task to prune as many branches of a state-space tree as soon as
possible. Striking a proper balance between these two competing requirements may require
intensive experimentation with a wide variety of instances of the problem in question.
3. 0/1 Knapsack problem
Note: For this topic, as per the syllabus, both textbooks T1 & T2 are suggested.
Here we discuss the concepts from T1 first and then those from T2.
Topic from T1 (Levitin)
Let us now discuss how we can apply the branch-and-bound technique to solving the
knapsack problem. Given n items of known weights wi and values vi, i = 1, 2, . . . , n, and a
knapsack of capacity W, find the most valuable subset of the items that fit in the knapsack.
It is convenient to order the items of a given instance in descending order by their
value-to-weight ratios.
Each node on the ith level of the state-space tree, 0 ≤ i ≤ n, represents all the subsets of n items
that include a particular selection made from the first i ordered items. This particular
selection is uniquely determined by the path from the root to the node: a branch going to the
left indicates the inclusion of the next item, and a branch going to the right indicates its
exclusion.
We record the total weight w and the total value v of this selection in the node, along with
some upper bound ub on the value of any subset that can be obtained by adding zero or more
items to this selection. A simple way to compute the upper bound ub is to add to v, the total
value of the items already selected, the product of the remaining capacity of the knapsack
W − w and the best per unit payoff among the remaining items, which is vi+1/wi+1:
ub = v + (W − w)(vi+1/wi+1).
Example: Consider the following problem. The items are already ordered in descending
order of their value-to-weight ratios.
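The table of items is not reproduced in these notes; the sketch below assumes Levitin's instance (weights 4, 7, 5, 3; values $40, $42, $25, $12; W = 10), which is consistent with the "subset of value 65" mentioned further below:

```python
# Assumed instance (Levitin's example): items already sorted by value-to-weight ratio.
weights = [4, 7, 5, 3]
values  = [40, 42, 25, 12]   # ratios: 10, 6, 5, 4
W = 10

def ub(i, w, v):
    """Upper bound for a node at level i, with weight w and value v already taken:
       ub = v + (W - w) * (best per-unit payoff among the remaining items)."""
    if i < len(weights):
        return v + (W - w) * values[i] / weights[i]
    return v    # no items left to add

print(ub(0, 0, 0))    # root: 0 + 10 * (40/4) = 100.0
print(ub(2, 4, 40))   # item 1 taken, item 2 skipped: 40 + 6 * (25/5) = 70.0
```

Taking items 1 and 3 gives weight 4 + 5 = 9 ≤ 10 and value 40 + 25 = 65, the best subset for this assumed instance.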
preceding subsection.) For the knapsack problem, however, every node of the tree represents
a subset of the items given. We can use this fact to update the information about the best
subset seen so far after generating each new node in the tree. If we had done this for the
instance investigated above, we could have terminated nodes 2 and 6 before node 8 was
generated because they both are inferior to the subset of value 65 of node 5.
Conclusion
4. NP-Complete and NP-Hard problems
4.1 Basic concepts
For many of the problems we know and study, the best algorithms for their solution have
computing times that can be clustered into two groups:
1. Solutions bounded by a polynomial. Examples include binary search O(log n),
linear search O(n), sorting algorithms like merge sort O(n log n), bubble sort O(n^2)
& matrix multiplication O(n^3), or in general O(n^k) where k is a constant.
2. Solutions bounded by a non-polynomial. Examples include the travelling salesman
problem O(n^2 2^n) & the knapsack problem O(2^(n/2)). As the time increases exponentially,
even moderate size problems cannot be solved.
So far, no one has been able to devise an algorithm which is bounded by a polynomial for
the problems belonging to the non-polynomial group. However, the impossibility of such an
algorithm is not proved.
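The gap between the two groups is easy to see numerically; a quick comparison of n^3 against 2^n (our illustration, not from the notes):

```python
# Polynomial growth (n^3) is quickly dwarfed by exponential growth (2^n).
for n in (10, 20, 30, 40):
    print(n, n ** 3, 2 ** n)
# At n = 40, n^3 is 64,000 while 2^n already exceeds 10^12.
```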
Example-2: Checking whether n integers are sorted or not
procedure NSORT(A,n);
//sort n positive integers//
var integer A(n), B(n), n, i, j;
begin
    B := 0; //B is initialized to zero//
    for i := 1 to n do
    begin
        j := choice(1:n); //nondeterministically pick a position in 1..n//
        if B(j) <> 0 then failure;
        B(j) := A(i);
    end;
    for i := 1 to n-1 do //verify that B is in non-decreasing order//
        if B(i) > B(i+1) then failure;
    print(B);
    success;
end;
Decision vs Optimization algorithms
An optimization problem tries to find an optimal solution.
A decision problem tries to answer a yes/no question. Most of the problems can be specified
in decision and optimization versions.
For example, the Traveling salesman problem can be stated in two ways:
• Optimization - find hamiltonian cycle of minimum weight,
• Decision - is there a hamiltonian cycle of weight ≤ k?
For graph coloring problem,
• Optimization – find the minimum number of colors needed to color the vertices of a
graph so that no two adjacent vertices are colored the same color
• Decision - whether there exists such a coloring of the graph’s vertices with no more
than m colors?
Many optimization problems can be recast into decision problems with the property that the
decision algorithm can be solved in polynomial time if and only if the corresponding
optimization problem can.
4.3 P, NP, NP-Complete and NP-Hard classes
But there are some problems which are known to be in NP but are not known to be in P. The
traditional example is the decision version of the Travelling Salesman Problem
(decision-TSP). It’s not known whether decision-TSP is in P: there’s no known poly-time
solution, but there’s no proof such a solution doesn’t exist.
There are problems that are known to be neither in P nor NP; a simple example is to
enumerate all the bit vectors of length n. No matter what, that takes 2^n steps.
Now, one more concept: given decision problems P and Q, if an algorithm can transform a
solution for P into a solution for Q in polynomial time, it’s said that Q is poly-time
reducible (or just reducible) to P.
The most famous unsolved problem in computer science is “whether P=NP or P≠NP?”
Figure: Commonly believed relationship between P and NP
Figure: Commonly believed relationship between P, NP, NP-Complete and NP-Hard problems
NP-Complete problems have the property that any one of them can be solved in polynomial
time if all other NP-Complete problems can be solved in polynomial time, i.e., if anyone ever
finds a poly-time solution to one NP-complete problem, they’ve automatically got one for all
the NP-complete problems; that will also mean that P=NP.
An example for NP-complete is the CNF-satisfiability problem. The CNF-satisfiability
problem deals with boolean expressions. This was given by Cook in 1971. The
CNF-satisfiability problem asks whether or not one can assign values true and false to
variables of a given boolean expression in its CNF form to make the entire expression true.
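A brute-force check of CNF-satisfiability is easy to state but exponential in the number of variables. A minimal Python sketch (our encoding, not from the notes): a clause is a list of signed integers, where k stands for variable x_k and -k for its negation.

```python
from itertools import product

def cnf_satisfiable(clauses, n_vars):
    """Try all 2^n truth assignments; return True if one satisfies every clause."""
    for bits in product([False, True], repeat=n_vars):
        def truth(lit):
            value = bits[abs(lit) - 1]
            return value if lit > 0 else not value
        # A CNF formula is true when every clause has at least one true literal.
        if all(any(truth(lit) for lit in clause) for clause in clauses):
            return True
    return False

# (x1 or not x2) and (not x1 or x2) is satisfiable (e.g., x1 = x2 = true);
# x1 and (not x1) is not.
print(cnf_satisfiable([[1, -2], [-1, 2]], 2))   # True
print(cnf_satisfiable([[1], [-1]], 1))          # False
```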
Over the years many problems in NP have been proved to be in P (like Primality Testing).
Still, there are many problems in NP not proved to be in P, i.e. the question still remains
whether P=NP. NP-Complete problems help in solving this question. They are a subset
of NP problems with the property that all other NP problems can be reduced to any of them in
polynomial time. So, they are the hardest problems in NP, in terms of running time. If it can
be shown that any NP-Complete problem is in P, then all problems in NP will be in P
(because of the NP-Complete definition), and hence P=NP=NPC.
NP-Hard Problems - These problems need not have any bound on their running time. If
any NP-Complete problem is polynomial time reducible to a problem X, that problem X
belongs to the NP-Hard class. Hence, all NP-Complete problems are also NP-Hard. In other
words, if an NP-Hard problem is non-deterministic polynomial time solvable, it is an
NP-Complete problem. An example of an NP-Hard problem that is not NP-Complete is the
Halting Problem.
If an NP-Hard problem can be solved in polynomial time, then all NP-Complete problems
can be solved in polynomial time.
“All NP-Complete problems are NP-Hard, but not all NP-Hard problems are
NP-Complete.” NP-Complete problems are a subclass of NP-Hard problems.
The more conventional optimization version of the Traveling Salesman Problem, for finding
the shortest route, is NP-hard, not strictly NP-complete.
*****