Unit 3
Structure
3.0 Introduction
3.1 Objectives
3.2 Intelligent Exhaustive Search
3.2.1 Backtracking
3.2.2 Branch and Bound
3.3 Approximation Algorithms Basics
3.4 Summary
3.5 Solution to Check Your Progress
3.0 INTRODUCTION
It has been stated in the previous units that a large class of optimization problems belongs to the NP-hard class of problems. It is widely accepted that NP-hard problems are intractable, i.e., although not yet proven, it appears that such problems do not have polynomial time solutions. Their time complexities are exponential in the worst case. Some examples of intractable problems are the traveling salesman problem, the vertex cover problem and the graph coloring problem. Besides these combinatorial problems, several thousand computational problems in domains such as biology, data science, finance and operations research fall into the NP-hard category.
An exhaustive search can be applied to solve these combinatorial problems, but it works only when problem instances are quite small. An optimization technique such as dynamic programming may be applied to find solutions for some such problems, but it has a similar limitation: the problem instance should be small. Another limitation of dynamic programming is that the problem must follow the principle of optimality.
Problem solving techniques such as backtracking and branch and bound perform better in comparison to exhaustive search. Unlike exhaustive search, these techniques construct the solution step by step (considering only one element at a time) and evaluate the partial solution. If the partial solution cannot lead to a better solution, the remaining elements are not explored further. Backtracking and branch and bound thus apply some intelligence to the exhaustive search. A well-designed algorithm of this kind can be quite efficient, although in the worst case it still takes exponential time. One real advantage of the branch and bound technique is that it can handle large problem instances.
We can also apply Tabu Search or Simulated Annealing, which are heuristic local search methods, or Genetic Algorithms and Particle Swarm Optimization, which are metaheuristic techniques, to find solutions to optimization problems.
However, in spite of their good performance, none of these methods gives a rigorous guarantee on the quality of the solution, i.e., on how far the proposed solution is from the optimal solution. So we should ask whether near optimal solutions (approximate solutions) to combinatorial optimization problems can be found efficiently. Approximation algorithms have been found to be efficient algorithms producing good quality solutions, where quality is measured in terms of the maximum distance between the proposed approximate solution and the optimal solution over all problem instances. What does that mean? It means that approximation algorithms always produce a solution quite close to the optimal solution.
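For a minimization problem, this guarantee is commonly stated through the approximation ratio (the metric mentioned later in the summary of this unit). Using standard notation that is not defined elsewhere in this unit, an algorithm A is called a c-approximation algorithm if, for every problem instance I,

    A(I) ≤ c · OPT(I),

where A(I) is the value of the solution produced by A and OPT(I) is the value of an optimal solution for I. The smaller the constant c, the closer the guaranteed solution is to the optimum.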
The focus of this unit is to discuss a few techniques, namely backtracking, branch and bound and approximation algorithms, to handle intractable problems.
3.1 OBJECTIVES
The main objectives of the unit are to:

• explain how backtracking and branch and bound add intelligence to exhaustive search;
• apply the backtracking technique to problems such as the Hamiltonian circuit and Subset Sum problems;
• apply the branch and bound technique to optimization problems such as the 0/1 knapsack problem;
• describe the basics of approximation algorithms for NP-hard optimization problems.
3.2 INTELLIGENT EXHAUSTIVE SEARCH

3.2.1 Backtracking
Backtracking is a general technique for the design of algorithms. It is applied to solve problems in which components of a solution are selected in sequence from a specified set so that the solution satisfies some criterion or objective. The backtracking procedure performs a depth first search of a state space tree, verifying at each node whether it can lead to a solution (a promising node) or is a dead end (a non-promising node), backtracking to the parent of a non-promising node, and continuing the search with the next child.
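In code, this general procedure can be sketched as follows (a minimal illustration in Python; the parameters is_complete, children and is_promising are placeholders to be supplied for a particular problem and are not names used elsewhere in this unit):

# A generic sketch of the backtracking procedure described above.
# is_complete, children and is_promising are supplied by the caller.
def backtrack(partial, is_complete, children, is_promising, report):
    if is_complete(partial):            # reached a leaf that is a solution
        report(list(partial))
        return
    for choice in children(partial):    # explore children in DFS order
        partial.append(choice)
        if is_promising(partial):       # promising node: search deeper
            backtrack(partial, is_complete, children, is_promising, report)
        partial.pop()                   # non-promising or finished: backtrack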
In this section we will use the backtracking technique to solve two problems: (i) the Hamiltonian Circuit problem and (ii) the Subset Sum problem.
(i) Hamiltonian circuit problem
A Hamiltonian circuit of a graph G is a cycle that visits every vertex of G exactly once and returns to the starting vertex. v₁ is the starting vertex of the cycle, where v₁ ∈ G, and v₁, v₂, …, vₙ₊₁ are the vertices of the cycle, all distinct except v₁ and vₙ₊₁, which are equal.
[Figure: a graph on the vertices V1 to V6 and the state space tree generated by the backtracking search for a Hamiltonian circuit. Branches that cannot be extended further are marked as dead ends; the remaining branch, ending back at V1, gives the final solution.]
In order to explore another Hamiltonian circuit, the process can re-start from the leaf node of the tree through backtracking.
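As an illustration (not part of the original unit), a Python sketch of this backtracking search is given below. The adjacency list g at the end is a hypothetical edge set for the six vertices of the figure, since the figure's edges are not listed in the text:

# Backtracking search for Hamiltonian circuits starting from 'start'.
def hamiltonian_circuits(graph, start):
    n = len(graph)
    circuits = []

    def extend(path):
        current = path[-1]
        if len(path) == n:                        # all vertices are on the path
            if start in graph[current]:           # promising only if the cycle
                circuits.append(path + [start])   # can be closed back to start
            return
        for nxt in graph[current]:
            if nxt not in path:                   # keep the vertices distinct
                extend(path + [nxt])              # DFS on the promising child
            # otherwise this child is a dead end: backtrack automatically

    extend([start])
    return circuits

# Hypothetical edge set for vertices V1..V6 (assumed, for illustration only)
g = {
    'V1': ['V2', 'V3', 'V4'],
    'V2': ['V1', 'V5', 'V6'],
    'V3': ['V1', 'V5', 'V6'],
    'V4': ['V1', 'V5'],
    'V5': ['V2', 'V3', 'V4'],
    'V6': ['V2', 'V3'],
}
print(hamiltonian_circuits(g, 'V1'))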
(ii) Subset Sum problem
Given a positive integer W and a set S of n positive integers, i.e., S = {s₁, s₂, …, sₙ}, the main objective of the Subset Sum problem is to search for all subsets of S whose elements sum to W. As an example, let us take S = {1, 4, 6, 9} and W = 10; there are two solution subsets: {1, 9} and {4, 6}. In some cases a problem instance may not have any solution subset.

We will assume that the elements of the set are in sorted order, i.e., s₁ ≤ s₂ ≤ ⋯ ≤ sₙ.
Design of state space tree for subset sum problem:
S = {4,6,7,8} and W = 18
The state space tree is designed as a binary tree. The root of the tree does not represent any decision; it is simply the starting point of the tree. At every level, the left and right subtrees of a node respectively include and exclude the next element of the set, represented by 1 and 0.

At level 1, the left branch of the tree includes the first element s₁ while the right branch excludes it. A subset of the given set is represented by a path from the root node to a leaf node of the tree. A path from the root of the tree to a node at the i-th level represents the inclusion or exclusion decisions made for the first i numbers of the set.
Let s' be the sum of the numbers included along the path from the root to a node at the i-th level. If s' is equal to W then the problem has a solution at that node. If we want to find all such subsets, we backtrack to the parent of the node, or stop if no further subsets are required.

If the sum s' is not equal to W, then the node can be declared non-promising if either of the following two inequalities holds:

(i) s' + sᵢ₊₁ > W (the sum is too large: even the smallest remaining number overshoots W)
(ii) s' + sᵢ₊₁ + sᵢ₊₂ + ⋯ + sₙ < W (the sum is too small: even adding all the remaining numbers cannot reach W)
[Figure: state space tree for the Subset Sum instance S = {4, 6, 7, 8} and W = 18. Non-promising nodes are marked with x together with the violated inequality, e.g. 17 + 8 > 18 and 11 + 8 > 18; the path that includes 4, 6 and 8 reaches the sum 18 and is the promising solution.]
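A Python sketch of this backtracking search, using the two pruning inequalities (i) and (ii) above, is given below (an illustration, not code from the unit); it assumes the elements are given in non-decreasing order:

# Backtracking search for all subsets of s that sum to W.
def subset_sums(s, W):
    n = len(s)
    suffix = [0] * (n + 1)
    for i in range(n - 1, -1, -1):
        suffix[i] = suffix[i + 1] + s[i]      # sum of the remaining elements s[i..n-1]

    solutions = []

    def extend(i, chosen, total):
        if total == W:                        # the path already sums to W: a solution
            solutions.append(list(chosen))
            return
        if i == n:
            return                            # no elements left to consider
        if total + s[i] > W:                  # inequality (i): smallest remaining item is too big
            return
        if total + suffix[i] < W:             # inequality (ii): even all remaining items fall short
            return
        chosen.append(s[i])                   # left branch: include s[i]
        extend(i + 1, chosen, total + s[i])
        chosen.pop()                          # backtrack
        extend(i + 1, chosen, total)          # right branch: exclude s[i]

    extend(0, [], 0)
    return solutions

print(subset_sums([1, 4, 6, 9], 10))   # [[1, 9], [4, 6]]
print(subset_sums([4, 6, 7, 8], 18))   # [[4, 6, 8]]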
w – total weight of the items selected at a node
p – total profit of the items selected at a node
bound – an upper bound on the total profit of any subset of items obtainable by expanding the node

One of the simplest ways to calculate the bound is given below:

bound = p + (W − w)(pᵢ₊₁/wᵢ₊₁)          … (i)

The bound at a node is the sum of the profit p of the already selected items and the product of the left-over capacity of the knapsack (W − w) with the best profit per weight unit among the remaining items, which is pᵢ₊₁/wᵢ₊₁ (the items being arranged in decreasing order of profit per weight).
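As a quick illustration (the function name and parameters below are my own, not from the unit), the bound formula (i) can be written as a small helper:

# bound formula (i): next_ratio is the best profit per weight unit
# among the items not yet considered, i.e. p_{i+1}/w_{i+1}.
def bound(p, w, W, next_ratio):
    if w >= W:
        return p                   # knapsack already full: nothing more can be added
    return p + (W - w) * next_ratio

# e.g. at the root of the example below: bound(0, 0, 10, 40/4) == 100.0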
Example: Given a knapsack problem instance, apply the branch and bound technique to find a subset which gives the maximum profit. The following is the problem instance:

Knapsack capacity W = 10

Item    Weight    Profit (Rs.)    Profit per weight unit
1       4         40              10
2       7         42              6
3       5         25              5
4       3         12              4
The following is the state space tree (Figure 3.4) representing the given instance of the knapsack problem:
[Figure: node 0: p = 0, w = 0, bound = 100; node 1 (with item 1): p = 40, w = 4, bound = 76; node 2 (without item 1): p = 0, w = 0, bound = 60; node 3 (with item 2): w = 11, non-promising; node 4 (without item 2): p = 40, w = 4, bound = 70; node 5 (with item 3): p = 65, w = 9, bound = 69; node 6 (without item 3): p = 40, w = 4, bound = 64; node 7 (with item 4): w = 12, non-promising; node 8 (without item 4): p = 65, w = 9, bound = 65.]
Figure 3.4: State space tree for knapsack problem using branch and bound technique
The root node indicates that no items have been selected as yet. Therefore p = 0 and w = 0, and the bound is computed as per equation (i), which is Rs. 100 (0 + (10 - 0)(40/4)).
The left branch of the tree includes item 1, which is then the only item in the subset, while the right branch excludes item 1. The total profit p and weight w at node 1 are Rs. 40 and 4 respectively, and the bound is 76 (40 + (10 - 4)(42/7)). All three values are shown at node 1. Node 2 on the right branch excludes item 1; therefore p = 0 and w = 0, because no item has been selected in the subset at node 2, and the bound at this node is 60 (0 + (10 - 0)(42/7)). Compared to node 2, node 1 is more promising for optimization because it has a larger bound. Node 3 and node 4, the child nodes of node 1, represent the subset with item 1 and item 2, and the subset with item 1 but without item 2, respectively. Let us calculate the total profit (p), total weight (w) and bound at node 3 first. Since the total weight w of the subset represented by node 3 exceeds 10, the capacity of the knapsack, this node is considered non-promising. Since item 2 is not included at node 4, the values of p and w are the same as at its parent node 1, i.e., p = 40, w = 4, and the bound is Rs. 70 (40 + (10 - 4)(25/5)). Compared to node 2, node 4 is selected for expanding the state space tree because its bound is larger. Now we move to nodes 5 and 6, which represent subsets including and excluding item 3 respectively. The total profit (p), total weight (w) and bound at node 5 are:
p (item 1 + item 3) = Rs. 40 + Rs. 25 = Rs. 65
w (item 1 + item 3) = 4 + 5 = 9
bound = 65 + (10 - 9)(12/3) = 69
We repeat the computation of p, w and bound at node 6:
p (item 1) = Rs. 40
w (item 1) = 4
bound = 40 + (10 - 4)(12/3) = 64
We will continue with node 5 because of its larger bound. Node 7 and node 8 represent subsets with and without item 4 respectively.
Node 7 is a non-promising node because its total weight, 12 (9 + 3), exceeds the knapsack capacity.
p and w at node 8 (without item 4) are 65 and 9 respectively, the same as at its parent node 5, and

bound = 65 + (10 - 9)(0) = 65

(the second term is 0 because there are no items left to consider).
There is a single subset, {1, 3}, represented by node 8. The other two remaining nodes, node 2 and node 6, have lower bounds than node 8. Hence node 8 gives the final and optimal solution to the knapsack problem.
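The walkthrough above can be reproduced with a short best-first branch and bound sketch (an illustration only; the function and variable names below are my own). At each step it expands the live node with the largest bound, exactly as in the discussion above, and returns the profit Rs. 65 for the subset {1, 3}:

import heapq

# Best-first branch and bound for the 0/1 knapsack problem.
# Items are assumed to be sorted by decreasing profit/weight ratio.
def knapsack_bb(profits, weights, W):
    n = len(profits)

    def bound(i, p, w):
        # bound formula (i): p + (W - w) * best remaining profit-per-weight ratio
        if w > W:
            return 0                              # over capacity: non-promising
        if i < n:
            return p + (W - w) * profits[i] / weights[i]
        return p                                  # no items left to add

    best_profit, best_items = 0, []
    # heapq is a min-heap, so bounds are negated to expand the largest bound first
    heap = [(-bound(0, 0, 0), 0, 0, 0, [])]       # (-bound, level, profit, weight, items)
    while heap:
        neg_b, i, p, w, items = heapq.heappop(heap)
        if -neg_b <= best_profit or i == n:       # cannot improve on the best found so far
            continue
        # left child: include item i+1
        p_in, w_in = p + profits[i], w + weights[i]
        if w_in <= W and p_in > best_profit:
            best_profit, best_items = p_in, items + [i + 1]
        if bound(i + 1, p_in, w_in) > best_profit:
            heapq.heappush(heap, (-bound(i + 1, p_in, w_in), i + 1, p_in, w_in, items + [i + 1]))
        # right child: exclude item i+1
        if bound(i + 1, p, w) > best_profit:
            heapq.heappush(heap, (-bound(i + 1, p, w), i + 1, p, w, items))
    return best_profit, best_items

print(knapsack_bb([40, 42, 25, 12], [4, 7, 5, 3], 10))   # (65, [1, 3])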
3.3 APPROXIMATION ALGORITHMS BASICS
Further suppose that the nodes correspond to points in a Euclidean space, e.g., a 3D room, and that the distance between any two points is their Euclidean distance.
There is an algorithm whose solution has the following guarantee:

approxvalue < 2 × optvalue

where optvalue is the value of an optimal solution to the problem and approxvalue is the value of the approximate solution that the algorithm outputs.
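The unit does not spell out the algorithm here, but a well-known algorithm with exactly this guarantee for points with Euclidean (more generally, metric) distances is the minimum-spanning-tree based "twice around the tree" heuristic for the traveling salesman problem. The Python sketch below is only an illustration of such an algorithm; the example points at the end are assumed:

import math

# MST-based ("twice around the tree") heuristic: build a minimum spanning
# tree, walk it depth-first, and shortcut repeated vertices. The resulting
# tour is guaranteed to be shorter than twice the optimal tour length.
def approx_tsp_tour(points):
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])

    # 1. Build a minimum spanning tree with Prim's algorithm.
    in_tree = {0}
    adj = {i: [] for i in range(n)}
    while len(in_tree) < n:
        u, v = min(((u, v) for u in in_tree for v in range(n) if v not in in_tree),
                   key=lambda e: dist(*e))
        adj[u].append(v)
        adj[v].append(u)
        in_tree.add(v)

    # 2. Depth-first walk of the tree, recording each vertex the first time it
    #    is seen (shortcutting repetitions), then close the cycle at the end.
    tour, seen, stack = [], set(), [0]
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        tour.append(u)
        stack.extend(reversed(adj[u]))
    tour.append(0)
    length = sum(dist(tour[i], tour[i + 1]) for i in range(n))
    return tour, length

# toy usage with four assumed points in the plane
print(approx_tsp_tour([(0, 0), (0, 3), (4, 0), (4, 3)]))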
3.4 SUMMARY
Backtracking and branch and bound techniques apply some intelligence to the exhaustive search. They take exponential time in the worst case on difficult combinatorial problems, but a well-designed algorithm of this kind can be quite efficient. Unlike exhaustive search, these techniques construct the solution step by step (one element at a time) and evaluate the partial solution. If a partial solution cannot lead to a complete solution, the corresponding part of the tree is not expanded further.
The state space tree is the principal mechanism employed by both backtracking and branch and bound techniques; it is a rooted tree whose nodes represent partial solutions. Expansion of the tree is stopped as soon as it is ensured that no solution can be reached through the choices that correspond to a node's descendants.
Approximation algorithms are often applied to find near optimal solutions to NP-hard optimization problems. The approximation ratio is the main metric used to measure the accuracy of such solutions to combinatorial optimization problems.
Ans. There are three distinct features of the branch and bound technique: (i) unlike backtracking, which traverses the tree in a DFS manner, branch and bound does not restrict the traversal to a particular order; (ii) the branch and bound technique computes a bound (value) at a node to decide whether the node is promising or not; and (iii) it is generally used for optimization problems.
Q3 How is backtracking different from the branch and bound technique?
Ans. They differ in the types of problems they can solve and in how nodes in the state space tree are generated:
(i) Backtracking generally does not apply to optimization problems, whereas the branch and bound technique can be used to solve optimization problems because it computes a bound on the possible values of the objective function.
(ii) Depth first search is used to generate the tree in backtracking, whereas no such restriction applies to branch and bound when generating the state space tree.