The document discusses algorithms for combinatorial optimization problems: backtracking, branch-and-bound, and approximation algorithms, applied to problems such as the knapsack problem and the traveling salesman problem (TSP). Backtracking and branch-and-bound build state-space trees to search systematically for solutions; branch-and-bound improves on backtracking by computing bounds that prune non-promising branches early. The NP-hard problems addressed, including the knapsack problem and the TSP, are also treated with approximation heuristics such as nearest neighbor and greedy selection.
BACKTRACKING AND BRANCH-AND-BOUND

Backtracking and branch-and-bound often make it possible to solve at least some large instances of difficult combinatorial problems. Both strategies can be considered an improvement over exhaustive search.

State Space Trees
• Both backtracking and branch-and-bound are based on the construction of a state-space tree whose nodes reflect specific choices made for a solution's components.
• The root represents an initial state before the search for a solution begins. The nodes of the first level in the tree represent the choices made for the first component of a solution, the nodes of the second level represent the choices for the second component, and so on.
• A node in a state-space tree is said to be promising if it corresponds to a partially constructed solution that may still lead to a complete solution; otherwise, it is called non-promising.
• Both techniques terminate a node as soon as it can be guaranteed that no solution to the problem can be obtained by considering choices that correspond to the node's descendants.

Backtracking
• The principal idea is to construct solutions one component at a time and evaluate such partially constructed candidates as follows:
– If a partially constructed solution can be developed further without violating the problem's constraints, it is done by taking the first remaining legitimate option for the next component.
– If there is no legitimate option for the next component, no alternatives for any remaining component need to be considered. In this case, the algorithm backtracks to replace the last component of the partially constructed solution with its next option.

N-Queens Problem

Hamiltonian Circuit Problem
• A Hamiltonian circuit is a circuit that visits every vertex exactly once, with no repeats. Being a circuit, it must start and end at the same vertex.
• A Hamiltonian path also visits every vertex exactly once with no repeats, but does not have to start and end at the same vertex.

Subset-Sum Problem
• Find a subset of a given set A = {a1, . . . , an} of n positive integers whose sum is equal to a given positive integer d.
• For example, for A = {1, 2, 5, 6, 8} and d = 9, there are two solutions: {1, 2, 6} and {1, 8}.
• Of course, some instances of this problem may have no solutions.

Branch-and-Bound
• Branch-and-bound is used to solve optimization problems.
• An optimization problem seeks to minimize or maximize some objective function (a tour length, the value of items selected, the cost of an assignment, and the like), usually subject to some constraints.
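The backtracking scheme described above can be sketched for the subset-sum instance A = {1, 2, 5, 6, 8}, d = 9. This is a minimal Python sketch (names are illustrative); items are pre-sorted in increasing order so that an overfull partial sum prunes all remaining options at once:

```python
def subset_sum(a, d):
    """Backtracking search for all subsets of a that sum to d."""
    a = sorted(a)
    solutions = []

    def backtrack(start, chosen, total):
        if total == d:                      # feasible complete solution
            solutions.append(list(chosen))
            return
        for i in range(start, len(a)):
            if total + a[i] > d:            # non-promising node: sum exceeds d
                break                       # later items are even larger; prune
            chosen.append(a[i])             # include a[i] and develop further
            backtrack(i + 1, chosen, total + a[i])
            chosen.pop()                    # backtrack: try the next option

    backtrack(0, [], 0)
    return solutions

print(subset_sum([1, 2, 5, 6, 8], 9))   # [[1, 2, 6], [1, 8]]
```

Note how the `break` implements the termination rule: once a partial sum exceeds d, no choice for any remaining component can repair it, so the whole subtree is abandoned.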
Feasible Solution
• A feasible solution is a point in the problem's search space that satisfies all the problem's constraints.
• An optimal solution is a feasible solution with the best value of the objective function.
Backtracking versus Branch-and-Bound
• Compared to backtracking, branch-and-bound requires two additional items:
– A way to provide, for every node of a state-space tree, a bound on the best value of the objective function on any solution that can be obtained by adding further components to the partially constructed solution represented by the node.
– The value of the best solution seen so far.

Terminating a Search Path
• In general, we terminate a search path at the current node in a state-space tree of a branch-and-bound algorithm for any one of the following three reasons:
– The value of the node's bound is not better than the value of the best solution seen so far.
– The node represents no feasible solutions because the constraints of the problem are already violated.
– The subset of feasible solutions represented by the node consists of a single point (and hence no further choices can be made); in this case, we compare the value of the objective function for this feasible solution with that of the best solution seen so far and update the latter with the former if the new solution is better.

Assignment Problem
• An assignment problem is specified by an n × n cost matrix C.
• We can state the problem as follows: select one element in each row of the matrix so that no two selected elements are in the same column and their sum is the smallest possible.

Knapsack Problem
• Given n items of known weights wi and values vi, i = 1, 2, . . . , n, and a knapsack of capacity W, find the most valuable subset of the items that fit in the knapsack.
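The bounding and termination rules above can be illustrated on the assignment problem. The sketch below uses a hypothetical 4 × 4 cost matrix (the slides do not give one); the lower bound at a node is the cost committed so far plus, for each remaining row, the smallest cost in a still-free column. A branch is explored only when its bound beats the best solution seen so far:

```python
def assign_min_cost(cost):
    """Branch-and-bound for the assignment problem: pick one element
    per row, no two in the same column, minimizing the sum."""
    n = len(cost)
    best = [float("inf")]               # value of the best solution seen so far

    def bound(row, used, acc):
        # acc plus the cheapest free column in every remaining row
        lb = acc
        for r in range(row, n):
            lb += min(cost[r][c] for c in range(n) if c not in used)
        return lb

    def search(row, used, acc):
        if row == n:                    # single point: a complete assignment
            best[0] = min(best[0], acc)
            return
        for c in range(n):
            if c not in used:
                new_acc = acc + cost[row][c]
                # prune when the bound is not better than the best so far
                if bound(row + 1, used | {c}, new_acc) < best[0]:
                    search(row + 1, used | {c}, new_acc)

    search(0, set(), 0)
    return best[0]

cost = [[9, 2, 7, 8],
        [6, 4, 3, 7],
        [5, 8, 1, 8],
        [7, 6, 9, 4]]
print(assign_min_cost(cost))   # 13
```

For this matrix the minimum-cost assignment picks 2, 6, 1, and 4 (rows 1 to 4), for a total of 13.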
Ordering of Items
• Arrange the given instance in descending order by the items' value-to-weight ratios.
• Then the first item gives the best payoff per weight unit and the last one gives the worst payoff per weight unit, with ties resolved arbitrarily.

State Space Trees
• Each node on the ith level of this tree, 0 ≤ i ≤ n, represents all the subsets of n items that include a particular selection made from the first i ordered items.
• This particular selection is uniquely determined by the path from the root to the node: a branch going to the left indicates the inclusion of the next item, and a branch going to the right indicates its exclusion.
• A simple way to compute the upper bound ub is to add to v, the total value of the items already selected, the product of the remaining capacity of the knapsack W − w and the best payoff per weight unit among the remaining items, which is vi+1/wi+1:
  ub = v + (W − w)(vi+1/wi+1)

Example

Traveling Salesman Problem
• We can apply the branch-and-bound technique to instances of the traveling salesman problem if we come up with a reasonable lower bound on tour lengths.
• We can compute a lower bound on the length l of any tour as follows. For each city i, 1 ≤ i ≤ n, find the sum si of the distances from city i to the two nearest cities; compute the sum s of these n numbers, divide the result by 2, and, if all the distances are integers, round up the result to the nearest integer:
  lb = ⌈s/2⌉

Approximation Algorithms
• The optimization versions of problems such as the TSP fall in the class of NP-hard problems: problems that are at least as hard as NP-complete problems.

Nearest-neighbor algorithm

Example
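Both the lower bound lb = ⌈s/2⌉ and the nearest-neighbor heuristic can be sketched on a small instance. The distance matrix below is hypothetical: a four-city example chosen so that the optimal tour a − b − d − c − a has length 8, consistent with the example in these slides:

```python
from math import ceil

# Assumed symmetric distances between cities a, b, c, d
dist = {("a", "b"): 1, ("a", "c"): 3, ("a", "d"): 6,
        ("b", "c"): 2, ("b", "d"): 3, ("c", "d"): 1}
cities = ["a", "b", "c", "d"]

def d(u, v):
    return dist[min(u, v), max(u, v)]

def lower_bound():
    """lb = ceil(s/2), where s sums, for each city,
    the distances to its two nearest cities."""
    s = 0
    for c in cities:
        nearest = sorted(d(c, o) for o in cities if o != c)
        s += nearest[0] + nearest[1]
    return ceil(s / 2)

def nearest_neighbor(start):
    """Greedy tour: always move to the nearest unvisited city,
    then return to the start."""
    tour, unvisited = [start], set(cities) - {start}
    while unvisited:
        nxt = min(unvisited, key=lambda c: d(tour[-1], c))
        tour.append(nxt)
        unvisited.remove(nxt)
    length = sum(d(tour[i], tour[i + 1]) for i in range(len(tour) - 1))
    return tour + [start], length + d(tour[-1], start)

print(lower_bound())            # 7
print(nearest_neighbor("a"))    # (['a', 'b', 'c', 'd', 'a'], 10)
```

With these assumed distances, nearest neighbor from a produces the tour a − b − c − d − a of length 10, while the optimal tour has length 1 + 3 + 1 + 3 = 8, so the heuristic tour is 25% longer.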
Optimal Solution
• The optimal solution, as can be easily checked by exhaustive search, is the tour s∗: a − b − d − c − a of length 8.
• Thus, the accuracy ratio of this approximation is
  r(sa) = f(sa)/f(s∗) = 10/8 = 1.25

Accuracy of Approximation
• Tour sa is 25% longer than the optimal tour s∗.
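Returning to the knapsack problem, the upper bound ub = v + (W − w)(vi+1/wi+1) described earlier can drive a small branch-and-bound search. The instance below (capacity 10, weight-value pairs) is an assumption for illustration, chosen to match the item weights used in the greedy example that follows:

```python
def knapsack_bb(items, W):
    """Branch-and-bound knapsack. items: (weight, value) pairs.
    A node is pruned when its ub is not better than the best value so far."""
    # Order items in descending value-to-weight ratio
    items = sorted(items, key=lambda wv: wv[1] / wv[0], reverse=True)
    n = len(items)
    best = [0]

    def ub(i, w, v):
        # v plus remaining capacity times best remaining payoff per weight unit
        if i < n:
            return v + (W - w) * items[i][1] / items[i][0]
        return v

    def search(i, w, v):
        best[0] = max(best[0], v)          # update the best solution seen so far
        if i == n or ub(i, w, v) <= best[0]:
            return                          # bound not better than best: prune
        wi, vi = items[i]
        if w + wi <= W:                     # left branch: include item i
            search(i + 1, w + wi, v + vi)
        search(i + 1, w, v)                 # right branch: exclude item i

    search(0, 0, 0)
    return best[0]

items = [(4, 40), (7, 42), (5, 25), (3, 12)]
print(knapsack_bb(items, 10))   # 65
```

For this instance the search finds the subset of weights 4 and 5 with total value 65; the subtree that excludes the weight-5 item is cut off because its bound (64) is not better than 65.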
Approximation Algorithms for the Knapsack Problem

Example

Solution
• The greedy algorithm will select the first item, of weight 4;
• skip the next item, of weight 7;
• select the next item, of weight 5;
• and skip the last item, of weight 3.
• The solution obtained happens to be optimal for this instance.
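The greedy selection above can be sketched as follows. The slides give only the item weights (4, 7, 5, 3 in ratio order); the values and the capacity of 10 below are assumptions chosen to be consistent with that ordering:

```python
def greedy_knapsack(items, W):
    """Greedy approximation: consider items in descending
    value-to-weight ratio, taking each one that still fits."""
    items = sorted(items, key=lambda wv: wv[1] / wv[0], reverse=True)
    taken, total_w, total_v = [], 0, 0
    for w, v in items:
        if total_w + w <= W:        # select the item
            taken.append((w, v))
            total_w += w
            total_v += v
        # otherwise skip it and keep scanning smaller-ratio items
    return taken, total_v

# Assumed instance: capacity 10, (weight, value) pairs
items = [(7, 42), (3, 12), (4, 40), (5, 25)]
print(greedy_knapsack(items, 10))   # ([(4, 40), (5, 25)], 65)
```

This reproduces the selection in the slide: take the item of weight 4, skip 7, take 5, and skip 3, for a total value of 65, which is optimal for this instance.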