Complexity and Approximation in Reoptimization
1 Introduction
In this article we illustrate the role that a new computational paradigm called
reoptimization plays in the solution of NP-hard problems in various practical
circumstances. As is well known, a great variety of relevant optimization problems are intrinsically difficult, and no solution algorithms running in polynomial time are known for such problems. Although the existence of efficient algorithms cannot be ruled out at the present state of knowledge, it is widely believed that no such algorithms exist. The most renowned approach to the solution of NP-hard
problems consists in resorting to approximation algorithms which in polynomial time provide a suboptimal solution whose quality (measured as the ratio between the values of the optimum and the approximate solution) is somehow guaranteed.
⋆ This work was partially supported by the Future and Emerging Technologies Unit of EC (IST priority - 6th FP), under contract no. FP6-021235-2 (project ARRIVAL).
In the last twenty years the definition of better and better approximation algo-
rithms and the classification of problems based on the quality of approximation
that can be achieved in polynomial time have been among the most important
research directions in theoretical computer science and have produced a huge
flow of literature [4, 35].
More recently a new computational approach to the solution of NP-hard
problems has been proposed [1]. This approach can be meaningfully adopted
when the following situation arises: given a problem Π, the instances of Π that
we need to solve are indeed all obtained by means of a slight perturbation of a
given reference instance I. In such a case we can devote enough time to the exact solution of the reference instance I and then, any time the solution for a new instance I′ is required, we can apply a simple heuristic that efficiently provides a good approximate solution to I′. Let us imagine, for example, that
we know that a traveling salesman has to visit a large set S of, say, one thou-
sand cities plus a few more cities that may change from time to time. In such
case it is quite reasonable to devote a conspicuous amount of time to the exact
solution of the traveling salesman problem on the set S and then to reoptimize
the solution whenever the modified instance is known, with a (hopefully) very
small computational effort.
To make the concept more precise let us consider the following simple example
(Max Weighted Sat). Let φ be a Boolean formula in conjunctive normal form,
consisting of m clauses over n variables, and let us suppose we know a truth
assignment τ such that the weight of the clauses satisfied by τ is maximum; let
this weight be W . Suppose that now a new clause c with weight w over the same
set of variables is provided and that we have to find a “good” although possibly
not optimum truth assignment τ ′ for the new formula φ′ = φ ∧ c. A very simple
heuristic can always guarantee a 1/2 approximate truth assignment in constant
time. The heuristic is the following: if W ≥ w then put τ′ = τ; otherwise take as τ′ any truth assignment that satisfies c. In either case the weight achieved is at least max(W, w) ≥ (W + w)/2, and since the optimum weight for φ′ cannot exceed W + w, the weight provided by this heuristic will be at least 1/2 of the optimum.
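In Python, the constant-time heuristic can be sketched as follows; the clause encoding (a list of (variable, polarity) literals) and the function name are our own illustration, not part of the original text:

```python
def reopt_wsat(tau, W, clause, w):
    """Constant-time reoptimization for Max Weighted Sat under clause insertion.

    tau    : optimum assignment for the old formula phi, as {variable: bool}
    W      : weight of the clauses of phi satisfied by tau
    clause : the new clause c, a list of (variable, polarity) literals
    w      : weight of c
    """
    if W >= w:
        return dict(tau)        # achieved weight >= W >= (W + w)/2 >= OPT'/2
    tau2 = dict(tau)
    var, polarity = clause[0]
    tau2[var] = polarity        # satisfies c, so achieved weight >= w >= OPT'/2
    return tau2
```

Since the optimum of φ′ is at most W + w, the heavier of the two candidates weighs at least (W + w)/2, which gives the 1/2 guarantee.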
Actually the reoptimization concept is not new. A similar approach has been
applied since the early 1980s to some polynomial time solvable optimization
problems such as minimum spanning tree [16] and shortest path [14, 31], with the aim of maintaining the optimum solution of the given problem under input modifications (say, elimination or insertion of an edge, or update of an edge weight).
A big research effort devoted to the study of efficient algorithms for the dynamic
maintenance of the optimum solution of polynomial time solvable optimization
problems followed the first results. A typical example of this successful line of
research has been the design of algorithms for the partially or fully dynamic
maintenance of a minimum spanning tree in a graph under edge insertion and/or
edge elimination [12, 22], where at any update the computation of the new optimum solution requires at most O(n^{1/3} log n) amortized time per operation,
much less than recomputing the optimum solution from scratch.
A completely different picture arises when we apply the concept of reoptimization to NP-hard optimization problems: the results are very different from those obtained for polynomial time solvable problems. In the case of NP-hard optimization problems, unless P = NP, polynomial time reoptimization algorithms can only help us to obtain approximate solutions, since if we knew how to maintain an optimum solution under input updates, we could solve the problem optimally in polynomial time (see Section 3.1).
The application of the reoptimization computational paradigm to NP-hard optimization problems is hence aimed at two possible directions: either at achieving an approximate solution of better quality than we would have obtained without knowing the optimum solution of the base instance, or at achieving an approximate solution of the same quality but at a lower computational cost (as is the case in our previous example).
In the first place the reoptimization model has been applied to classical NP-
hard optimization problems such as scheduling (see Bartusch et al. [6], Schäffter
[33], or Bartusch et al. [7] for practical applications). More recently it has been
applied to various other NP-hard problems such as Steiner Tree [9, 13] or the
Traveling Salesman Problem [1, 5, 8]. In this article we will discuss some general
issues concerning reoptimization of NP-hard optimization problems and we will
review some of the most interesting applications.
The article is organized as follows. First in Section 2 we provide basic def-
initions concerning complexity and approximability of optimization problems
and we show simple preliminary results. Then in Section 3 the computational
power of reoptimization is discussed and results concerning the reoptimization of
various NP-hard optimization problems are shown. Finally Section 4 is devoted
to the application of the reoptimization concept to a variety of vehicle routing
problems. While most of the results contained in Section 3 and Section 4 derive
from the literature, it is worth noting that a few of the presented results appear
in this paper for the first time.
Definition 3. An NPO problem Π belongs to the class APX if there exists a polynomial time approximation algorithm A and a rational value r such that, given any instance I of Π, ρ_A(I) ≤ r (resp. ρ_A(I) ≥ r) if Π is a minimization problem (resp. a maximization problem). In such a case A is called an r-approximation algorithm.
For instance, Max Weighted Independent Set, Max Weighted Bipartite Sub-
graph, Max Weighted Planar Subgraph are three famous problems in Hered that
correspond to the three hereditary properties given above.
For all these problems, we have a simple reoptimization strategy that achieves a ratio 1/2, based on the same idea used in the introduction. Note that for some problems this is a huge improvement with respect to their approximability properties; for instance, it is well known that Max Weighted Independent Set is not approximable within any constant ratio, if P ≠ NP^5.
Now, let us try to outperform this trivial ratio 1/2. A first idea that comes
to mind is to improve the solution S1 of the previous proof since it only contains
one vertex. In particular, one can think of applying an approximation algorithm
on the “remaining instance after taking v”. Consider for instance Max Weighted Independent Set, and revisit the proof of the previous property. If S*_{I′} does not take the new vertex v, then our initial solution S* is optimum. If S*_{I′} takes v, then consider the remaining instance I_v after having removed v and its neighbors.
^4 I.e. having no edge.
^5 And not even within ratio n^{1−ε} for any ε > 0, under the same hypothesis [36].
Suppose that we have a ρ-approximate solution S2 on this instance I_v. Then S2 ∪ {v} is a feasible solution of weight:

w(S2 ∪ {v}) ≥ ρ(w(S*_{I′}) − w(v)) + w(v)    (2)

If we output the best solution S among S* and S2 ∪ {v}, then, by adding equations (1) and (2) with coefficients 1 and (1 − ρ), we get:

w(S) ≥ (1/(2 − ρ)) w(S*_{I′})

Note that this ratio is always better than ρ.
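As a sketch of this strategy for Max Weighted Independent Set, assuming adjacency sets for the new graph and using a naive heaviest-first greedy as a stand-in for the ρ-approximation algorithm (both encodings are ours):

```python
def greedy_wis(adj, w):
    """Stand-in for any rho-approximation for Max Weighted Independent Set:
    repeatedly pick the heaviest vertex not yet excluded."""
    chosen, banned = [], set()
    for u in sorted(adj, key=lambda x: w[x], reverse=True):
        if u not in banned:
            chosen.append(u)
            banned.add(u)
            banned |= adj[u]          # exclude the neighbors of u
    return chosen

def reopt_wis(adj, w, S_star, v):
    """adj: adjacency sets of the NEW graph (v included); S_star: old optimum."""
    # Candidate 1: the old optimum, still independent since new edges all touch v.
    S1 = list(S_star)
    # Candidate 2: take v, delete its closed neighborhood, approximate the rest.
    removed = adj[v] | {v}
    rest = {u: adj[u] - removed for u in adj if u not in removed}
    S2 = greedy_wis(rest, w) + [v]
    return max(S1, S2, key=lambda S: sum(w[u] for u in S))
```

Replacing `greedy_wis` by any ρ-approximation algorithm yields the 1/(2 − ρ) guarantee discussed above.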
This technique is actually quite general and applies to many problems (not
only graph problems and maximization problems). We illustrate this on two
well-known problems: Max Weighted Sat (Theorem 2) and Min Vertex Cover
(Theorem 3). We will also use it for Max Knapsack in Section 3.2.
Theorem 2. Under the insertion of a clause, reoptimizing Max Weighted Sat is
approximable within ratio 0.81.
Proof. Let φ be a conjunction of clauses over a set of binary variables, each
clause being given with a weight, and let τ ∗ (φ) be an initial optimum solution.
Let φ′ := φ ∧ c be the final formula, where the new clause c = l1 ∨ l2 ∨ . . . ∨ lk
(where li is either a variable or its negation) has weight w(c).
We consider k solutions τi , i = 1, . . . , k. Each τi is built as follows:
– We set li to true;
– We replace in φ each occurrence of li and of its negation with its value;
– We apply a ρ-approximation algorithm on the remaining instance (note that the clause c is already satisfied); together with li , this is a particular solution τi .
Then, our reoptimization algorithm outputs the best solution τ among τ ∗ (φ)
and the τi ’s.
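The simplification step used to build each τi can be sketched as follows, with a hypothetical weighted-CNF encoding (a list of (literals, weight) pairs, each literal a (variable, polarity) pair):

```python
def fix_literal(clauses, var, value):
    """Fix var := value in a weighted CNF: drop the clauses that become
    satisfied and delete the falsified occurrences of var elsewhere.

    clauses: list of (literals, weight) pairs, literal = (variable, polarity).
    Returns the remaining instance on which the approximation algorithm runs.
    """
    remaining = []
    for lits, wt in clauses:
        if (var, value) in lits:
            continue                                  # clause already satisfied
        remaining.append(([l for l in lits if l[0] != var], wt))
    return remaining
```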
As previously, if the optimum solution τ*(φ′) on the final instance does not satisfy c, then τ*(φ) is optimum. Otherwise, at least one literal in c, say li , is true in τ*(φ′). Then, it is easy to see that

w(τi) ≥ ρ(w(τ*(φ′)) − w(c)) + w(c)

On the other hand, w(τ*(φ)) ≥ w(τ*(φ′)) − w(c), and the following result follows:

w(τ) ≥ (1/(2 − ρ)) w(τ*(φ′))
The fact that Max Weighted Sat is approximable within ratio ρ = 0.77 [3]
concludes the proof. ⊓⊔
It is worth noticing that the same ratio 1/(2 − ρ) is achievable for other satisfiability or constraint satisfaction problems. For instance, using the result of Johnson [24], reoptimizing Max Weighted E3SAT^6 when a new clause is inserted is approximable within ratio 8/9.
Let us now focus on a minimization problem, namely Min Vertex Cover. Given a vertex-weighted graph G = (V, E, w), the goal in this problem is to find a subset V′ ⊆ V such that (i) every edge e ∈ E is incident to at least one vertex in V′, and (ii) the global weight of V′, Σ_{v∈V′} w(v), is minimized.
Theorem 3. Under a vertex insertion, reoptimizing Min Vertex Cover is ap-
proximable within ratio 3/2.
Proof. Let v denote the new vertex and S* the initial given solution. Then, S* ∪ {v} is a vertex cover on the final instance. If S*_{I′} takes v, then S* ∪ {v} is optimum.
From now on, suppose that S*_{I′} does not take v. Then, it has to take all its neighbors N(v). S* ∪ N(v) is a feasible solution on the final instance. Since w(S*) ≤ w(S*_{I′}), we get:

w(S* ∪ N(v)) ≤ w(S*_{I′}) + w(N(v))    (3)
Then, as for Max Weighted Independent Set, consider the following feasible
solution S1 :
– Take all the neighbors N (v) of v in S1 ;
– Remove v and its neighbors from the graph;
– Apply a ρ-approximation algorithm on the remaining graph, and add these
vertices to S1 .
Since we are in the case where S*_{I′} does not take v, it has to take all its neighbors, and finally:

w(S1) ≤ ρ(w(S*_{I′}) − w(N(v))) + w(N(v)) = ρ w(S*_{I′}) − (ρ − 1)w(N(v))    (4)

Of course, we take the best solution S among S* ∪ N(v) and S1. Then, a convex combination of equations (3) and (4) leads to:

w(S) ≤ ((2ρ − 1)/ρ) w(S*_{I′})
The result follows since Min Vertex Cover is well-known to be approximable within ratio 2. ⊓⊔
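A sketch of the whole algorithm, using the classical local-ratio method of Bar-Yehuda and Even as a stand-in 2-approximation (the graph encoding is ours):

```python
def local_ratio_vc(edges, w):
    """Bar-Yehuda/Even style 2-approximation for weighted Vertex Cover."""
    w = dict(w)                       # local copy of the residual weights
    cover = set()
    for u, x in edges:
        if u in cover or x in cover:
            continue
        eps = min(w[u], w[x])         # pay eps on both endpoints
        w[u] -= eps
        w[x] -= eps
        if w[u] == 0:
            cover.add(u)
        if w[x] == 0:
            cover.add(x)
    return cover

def reopt_vc(edges, w, S_star, v, N_v):
    """edges: edge list of the NEW graph; S_star: old optimum; N_v: neighbors of v."""
    def weight(c):
        return sum(w[u] for u in c)
    c1 = set(S_star) | {v}                     # covers every new edge through v
    c2 = set(S_star) | set(N_v)                # covers them from the other side
    removed = set(N_v) | {v}
    rest = [(a, b) for a, b in edges if a not in removed and b not in removed]
    c3 = set(N_v) | local_ratio_vc(rest, w)    # the solution S1 of the proof
    return min((c1, c2, c3), key=weight)
```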
To conclude this section, we point out that these results can be generalized
when several vertices are inserted. Indeed, if a constant number k > 1 of vertices
are added, one can reach the same ratio with similar arguments by considering
all the 2k possible subsets of new vertices in order to find the ones that will
belong to the new optimum solution. This brute force algorithm is still very fast
for small constant k, which is the case in the reoptimization setting with slight
modifications of the instance.
^6 Restriction of Max Weighted Sat when all clauses contain exactly three literals.
Unweighted problems. In the previous subsection, we considered the general
cases where vertices (or clauses) have a weight. It is well-known that all the
problems we focused on are already NP-hard in the unweighted case, i.e. when
all vertices/clauses receive weight 1. In this (very common) case, the previous
approximation results on reoptimization can be easily improved. Indeed, since
only one vertex is inserted, the initial optimum solution has an absolute error of
at most one on the final instance, i.e.:
|S*| ≥ |S*_{I′}| − 1
Then, in some sense we don’t really need to reoptimize since S ∗ is already
a very good solution on the final instance (note also that since the reoptimiza-
tion problem is NP-hard, we cannot get rid of the constant −1). Dealing with
approximation ratio, we derive from this remark, with a standard technique, the
following result.
Proof. Let ε > 0, and set k = ⌈1/ε⌉. We consider the following algorithm:
1. Test all the subsets of V of size at most k, and let S1 be the largest one such that G[S1] satisfies the hereditary property;
2. Output the largest solution S between S1 and S*.
Then, if S*_{I′} has size at most 1/ε, we found it in step 1. Otherwise, |S*_{I′}| ≥ 1/ε and:

|S*| / |S*_{I′}| ≥ (|S*_{I′}| − 1) / |S*_{I′}| ≥ 1 − ε
Of course, the algorithm is polynomial as long as ε is a constant. ⊓⊔
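The two-step algorithm of the proof can be sketched as follows; `feasible` stands for the hereditary property at hand (independence, bipartiteness, planarity, ...), and the encoding is ours:

```python
import math
from itertools import combinations

def reopt_ptas(V, feasible, S_star, eps):
    """Keep the best of the old optimum S_star and an exhaustive search
    over all subsets of size at most k = ceil(1/eps)."""
    k = math.ceil(1 / eps)
    best = list(S_star)
    # scan sizes from the largest downward; the first feasible set found wins
    for size in range(min(k, len(V)), len(best), -1):
        found = next((S for S in combinations(V, size) if feasible(S)), None)
        if found is not None:
            return list(found)
    return best
```

The exhaustive step costs O(n^k) feasibility tests, which is polynomial for constant ε, exactly as in the proof.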
In other words, the PTAS is derived from two properties: the absolute error
of 1, and the fact that problems considered are simple. Following [29], a problem
is called simple if, given any fixed constant k, it is polynomial to determine
whether the optimum solution has value at most k (maximization) or not.
This result easily extends to other simple problems, such as Min Vertex
Cover for instance. It also generalizes when several (a constant number of) ver-
tices are inserted, instead of only 1.
However, it is interesting to notice that, for some other (unweighted) problems, while the absolute error 1 still holds, we cannot derive a PTAS as in Theorem 4 because they are not simple. One of the most famous such problems
is the Min Coloring problem. In this problem, given a graph G = (V, E), one
wishes to partition V into a minimum number of independent sets (called col-
ors) V1 , . . . , Vk . When a new vertex is inserted, an absolute error 1 can be easily
achieved while reoptimizing. Indeed, consider the initial coloring, and add a new color which contains only the newly inserted vertex. Then this coloring has an absolute error of one, since a coloring of the final graph cannot use fewer colors than an optimum coloring of the initial instance.
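This reoptimization can be sketched in a few lines; trying to reuse an existing color class first, as below, is an optional refinement that never hurts the absolute error of one (the encoding is ours):

```python
def reopt_coloring(colors, neighbors_v, v):
    """colors: list of disjoint independent sets covering the old graph.

    Insert v into a color class containing no neighbor of v, or open a new
    class; either way at most one color is added, hence absolute error <= 1.
    """
    for cls in colors:
        if not (cls & neighbors_v):
            cls.add(v)
            return colors
    colors.append({v})
    return colors
```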
However, deciding whether a graph can be colored with 3 colors is an
NP-hard problem. In other words, Min Coloring is not simple. We will discuss
the consequence of this fact in the section on hardness of reoptimization.
To conclude this section, we stress the fact that there exist obviously many
problems that do not involve weights and for which the initial optimum solution
cannot be directly transformed into a solution on the final instance with absolute
error 1. Finding the longest cycle in a graph is such a problem: adding a new
vertex may change considerably the size of an optimum solution.
Proof. The proof is actually quite straightforward. Assume we have such a reoptimization algorithm A within a ratio ρ = 4/3 − ε. Let G = (V, E) be a graph with V = {v1 , · · · , vn }. We consider the subgraphs Gi of G induced by Vi = {v1 , v2 , · · · , vi } (in particular Gn = G). Suppose that we have a 3-coloring of Gi , and insert vi+1 . If Gi+1 is 3-colorable, then A outputs a 3-coloring: indeed, any solution within ratio 4/3 − ε of the optimum value 3 uses fewer than 4 colors. Moreover, if Gi is not 3-colorable, then neither is Gi+1 . Hence, starting from the empty graph, and iteratively applying A, we get a 3-coloring of Gi if and only if Gi is 3-colorable. Eventually, we are able to determine whether G is 3-colorable or not. ⊓⊔
This proof is based on the fact that Min Coloring is not simple (according to
the definition previously given). A similar argument, leading to inapproximabil-
ity results in reoptimization, can be applied to other non simple problems (under
other modifications). It has been in particular applied to a scheduling problem
(see Section 3.2).
For other optimization problems however, such as MinTSP in the metric
case, finding a lower bound in approximability (if any!) seems a challenging task.
Min Steiner Tree. The Min Steiner Tree problem is a generalization of the Min
Spanning Tree problem where only a subset of vertices (called terminal vertices)
have to be spanned. Formally, we are given a graph G = (V, E), a nonnegative
distance d(e) for any e ∈ E, and a subset R ⊆ V of terminal vertices. The goal is to connect the terminal vertices with a minimum global distance, i.e. to find a tree T ⊆ E that spans all vertices in R and minimizes d(T) = Σ_{e∈T} d(e). It
is generally assumed that the graph is complete, and the distance function is
metric (i.e. d(x, y) + d(y, z) ≥ d(x, z) for any vertices x, y, z): indeed, the general
problem reduces to this case by initially computing shortest paths between pairs
of vertices.
Min Steiner Tree is one of the most famous network design optimization
problems. It is NP-hard, and has been studied intensively from an approximation
viewpoint (see [18] for a survey on these results). The best known ratio obtained
so far is 1 + ln(3)/2 ≃ 1.55 [30].
Reoptimization versions of this problem have been studied with modifications
on the vertex set [9, 13]. In Escoffier et al. [13], the modification consists of the
insertion of a new vertex. The authors study the cases where the new vertex is
terminal or non terminal.
Moreover, the result has been generalized to the case in which several
vertices are inserted. Interestingly, when p non terminal vertices are inserted,
then reoptimizing the problem is still 3/2-approximable (but the running time
grows very fast with p). On the other hand, when q terminal vertices are added,
the obtained ratio degrades (but the running time remains very low)^7. The
strategies consist, roughly speaking, of merging the initial optimum solution
with Steiner trees computed on the set of new vertices and/or terminal vertices.
The authors tackle also the case where a vertex is removed from the vertex set,
and provide a lower bound for a particular class of algorithms.
^7 The exact ratio is 2 − 1/(q + 2) when p non terminal and q terminal vertices are added.
Böckenhauer et al. [9] consider a different instance modification. Rather than
inserting/deleting a vertex, the authors consider the case where the status of a
vertex changes: either a terminal vertex becomes non terminal, or vice versa.
The obtained ratio is also 3/2.
Moreover, they exhibit a case where this ratio can be improved. When all
the distances between vertices are in {1, 2, · · · , r}, for a fixed constant r, then
reoptimizing Min Steiner Tree (when changing the status of one vertex) is still
NP-hard but admits a PTAS.
Note that in both cases (changing the status of a vertex or adding a new ver-
tex), no inapproximability results have been achieved, and this is an interesting
open question.
Theorem 8 ([33]). If P ≠ NP, for any ε > 0, reoptimizing the Scheduling with forbidden sets problem is inapproximable within ratio 3/2 − ε under a constraint insertion, and inapproximable within ratio 4/3 − ε under a constraint deletion.
Under a constraint insertion Schäffter also provides a reoptimization strat-
egy that achieves approximation ratio 3/2, thus matching the lower bound of
Theorem 8. It consists of a simple local modification of the initial scheduling,
by shifting one task (at the end of the schedule) in order to ensure that the new
constraint is satisfied.
Max Knapsack. In the Max Knapsack problem, we are given a set of n objects O = {o1 , . . . , on } and a capacity B. Each object oi has a weight wi and a value vi . The goal is to choose a subset O′ of objects that maximizes the global value Σ_{oi∈O′} vi while respecting the capacity constraint Σ_{oi∈O′} wi ≤ B.
This problem is (weakly) NP-hard, but admits an FPTAS [23]. Obviously,
the reoptimization version admits an FPTAS too. Thus, Archetti et al. [2] are
interested in using classical approximation algorithms for Max Knapsack to de-
rive reoptimization algorithms with better approximation ratios but with the
same running time. The modification considered consists of the insertion of a new object in the instance.
Although not a graph problem, the Max Knapsack problem is easily seen to satisfy the heritability properties given in Section 3.1 (paragraph on hereditary problems). Hence, the reoptimization version is 1/2-approximable in constant time; moreover, if we have a ρ-approximation algorithm, then the reoptimization strategy presented in Section 3.1 has ratio 1/(2 − ρ) [2]. Besides, Archetti et al. [2] show that this bound is tight for several classical approximation algorithms for Max Knapsack.
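The constant-time 1/2 strategy can be sketched as follows (the item encoding is ours):

```python
def reopt_knapsack(S_star, items, new_item, B):
    """Constant-time 1/2-approximation for Max Knapsack under object insertion.

    S_star   : indices of the old optimum solution
    items    : {index: (weight, value)} for the old objects
    new_item : (index, weight, value) of the inserted object
    B        : knapsack capacity
    """
    idx, w_new, v_new = new_item
    old_value = sum(items[i][1] for i in S_star)
    if w_new > B or old_value >= v_new:
        # OPT' <= old OPT + v_new, so the better candidate is >= OPT'/2
        return set(S_star)
    return {idx}
```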
Finally, studying the issue of sensitivity presented earlier, they show that any
reoptimization algorithm that does not consider objects discarded by the initial
optimum solution cannot have ratio better than 1/2.
TSP-like problems other than those above have also been considered in the
literature from the point of view of reoptimization; in particular, see Böckenhauer
et al. [8] for a hardness result on the TSP with deadlines.
Given a vehicle routing problem Π from the above list, we will consider
the following reoptimization variants, each corresponding to a different type of
perturbation of the instance: insertion of a node (Π+), deletion of a node (Π−),
and variation of a single entry of the matrix d (Π±).
Definition 10. An instance of Π± is given by a triple (I_n, I′_n, T*_n), where I_n, I′_n are instances of Π of size n, T*_n is an optimum solution of Π on I_n, and I′_n differs from I_n only in one entry of the distance matrix d. A solution for this instance of Π± is a solution to I′_n. The objective function is the same as in Π.
In the following, we will sometimes refer to the initial problem Π as the
static problem. In Table 1 we summarize the approximability results known for
the static and reoptimization versions of the problems above under these types
of perturbations.
Table 1. Best known results on the approximability of the standard and reoptimiza-
tion versions of vehicle routing problems (AR = approximation ratio, Π+ = vertex
insertion, Π− = vertex deletion, Π± = distance variation).
Minimum Metric TSP. In the previous section we have seen that no constant-
factor approximation algorithm exists for reoptimizing the Minimum TSP in its
full generality. To obtain such a result, we are forced to restrict the problem
somehow. A very interesting case for many applications is when the matrix d
is a metric, that is, the Min MTSP. This problem admits a 3/2-approximation
algorithm, due to Christofides [11], and it is currently open whether this factor can be improved. Interestingly, it turns out that the reoptimization version Min MTSP+ is (at least if one considers the currently best known algorithms) easier than the static problem: it allows a 4/3-approximation, although, again, we do not know whether even this factor may be improved via a more sophisticated approach.
Theorem 10 ([5]). Min MTSP+ is approximable within ratio 4/3.
Proof. The algorithm used to prove the upper bound is a simple combination of
Nearest Insertion and of the well-known algorithm by Christofides [11]; namely,
both algorithms are executed and the solution returned is the one having the
lower weight.
Consider an optimum solution T*_{n+1} of the final instance I_{n+1}, and the solution T*_n available for the initial instance I_n. Let i and j be the two neighbors of vertex n + 1 in T*_{n+1}, and let T1 be the tour obtained from T*_n with the Nearest Insertion rule. Furthermore, let v* be the vertex in {1, . . . , n} whose distance to n + 1 is the smallest.
Using the triangle inequality, we easily get w(T1) ≤ w(T*_{n+1}) + 2d(v*, n + 1), where, by definition of v*, d(v*, n + 1) = min{d(k, n + 1) : k = 1, . . . , n}. Thus

w(T1) ≤ w(T*_{n+1}) + 2 max(d(i, n + 1), d(j, n + 1))    (5)
Now consider the algorithm of Christofides applied on I_{n+1}. This gives a tour T2 of length at most (1/2)w(T*_{n+1}) + MST(I_{n+1}), where MST(I_{n+1}) is the weight of a minimum spanning tree on I_{n+1}. Note that MST(I_{n+1}) ≤ w(T*_{n+1}) − max(d(i, n + 1), d(j, n + 1)). Hence

w(T2) ≤ (3/2) w(T*_{n+1}) − max(d(i, n + 1), d(j, n + 1)).    (6)
The result now follows by combining equations (5) and (6), because the weight of the solution given by the algorithm is min(w(T1), w(T2)) ≤ (1/3)w(T1) + (2/3)w(T2) ≤ (4/3)w(T*_{n+1}). ⊓⊔
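The Nearest Insertion half of the algorithm, applied to the old optimum tour, can be sketched as follows (the tour and distance-matrix encodings are ours; the Christofides half is not reproduced here, the algorithm returns the lighter of the two tours):

```python
def tour_weight(tour, d):
    """Total length of a closed tour given as an ordered list of nodes."""
    return sum(d[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def nearest_insertion(tour, d, v):
    """Insert v right after its nearest vertex v* in the old optimum tour.

    By the triangle inequality the extra cost d(v*, v) + d(v, s) - d(v*, s)
    is at most 2*d(v*, v), which is the bound used in equation (5).
    """
    v_star = min(tour, key=lambda u: d[u][v])
    i = tour.index(v_star)
    return tour[:i + 1] + [v] + tour[i + 1:]
```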
The above result can be generalized to the case when more than a single ver-
tex is added in the perturbed instance. Let Min MTSP+k be the corresponding
problem when k vertices are added. Then it is possible to give the following re-
sult, which gives a tradeoff between the number of added vertices and the quality
of the approximation guarantee.
Theorem 11 ([5]). For any k ≥ 1, Min MTSP+k is approximable within ratio
3/2 − 1/(4k + 2).
Reoptimization under variation of a single entry of the distance matrix (that
is, problem Min MTSP±) has been considered by Böckenhauer et al. [9].
Theorem 12 ([9]). Min MTSP± is approximable within ratio 7/5.
Proof. The obvious idea is to skip the deleted node in the new tour, while visiting the remaining nodes in the same order. Thus, if i and j are respectively the nodes preceding and following n + 1 in the tour T*_{n+1}, we obtain a tour T such that

w(T) = w(T*_{n+1}) + d(i, j) − d(i, n + 1) − d(n + 1, j).    (7)

Consider an optimum solution T*_n of the modified instance I_n, and the node l that is consecutive to i in this solution. Since inserting n + 1 between i and l would yield a feasible solution to I_{n+1}, we get, using the triangle inequality:

w(T*_{n+1}) ≤ w(T*_n) + d(i, n + 1) + d(n + 1, l) − d(i, l)
          ≤ w(T*_n) + d(i, n + 1) + d(n + 1, i).
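The node-skipping step of the proof is straightforward to sketch (encodings are ours); on a small metric instance one can check equation (7) directly:

```python
def tour_weight(tour, d):
    """Total length of a closed tour given as an ordered list of nodes."""
    return sum(d[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def delete_node(tour, v):
    """Skip v and visit the remaining nodes in the old order.

    In a metric instance the shortcut d(i, j) never exceeds
    d(i, v) + d(v, j), so the tour can only get shorter.
    """
    return [u for u in tour if u != v]
```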
Maximum Metric TSP. The usual Maximum TSP problem does not admit
a polynomial-time approximation scheme, that is, there exists a constant c such
that it is NP-hard to approximate the problem within a factor better than c.
This result extends also to the Maximum Metric TSP [28]. The best known
approximation for the Maximum Metric TSP is a randomized algorithm with an
approximation guarantee of 7/8 [21].
By contrast, in the reoptimization of Max MTSP under insertion of a ver-
tex, the Best Insertion algorithm turns out to be a very good strategy: it is
asymptotically optimum. In particular, the following holds.
Theorem 16 ([5]). Max MTSP+ is approximable within ratio 1 − O(n^{−1/2}).
Using the above result one can easily prove that Max MTSP+ admits a polynomial-time approximation scheme: if the desired approximation guarantee is 1 − ε, for some ε > 0, just solve by enumeration the instances with O(1/ε²) nodes, and use the result above for the other instances.
4.3 The Minimum Latency Problem
Although superficially similar to the Minimum Metric TSP, the Minimum La-
tency Problem appears to be more difficult to solve. For example, in the special
case when the metric is induced by a weighted tree, the MLP is NP-hard [34]
while the Metric TSP is trivial. One of the difficulties in the MLP is that lo-
cal changes in the input can influence the global shape of the optimum solution.
Thus, it is interesting to notice that despite this fact, reoptimization still helps. In
fact, the best known approximation so far for the static version of the MLP gives
a factor of 3.59 and is achieved via a sophisticated algorithm due to Chaudhuri
et al. [10], while it is possible to give a very simple 3-approximation for MLP+,
as we show in the next theorem.
Proof. We consider the Insert Last algorithm that inserts the new node n + 1 at the “end” of the tour, that is, just before node 1. Without loss of generality, let T*_n = {(1, 2), (2, 3), . . . , (n − 1, n)} be the optimal tour for the initial instance I_n (that is, the kth node to be visited is k). Let T*_{n+1} be the optimal tour for the modified instance I_{n+1}. Clearly ℓ(T*_{n+1}) ≥ ℓ(T*_n), since relaxing the condition that node n + 1 must be visited cannot raise the overall latency.
The quantity ℓ(T*_n) can be expressed as Σ_{i=1}^{n} t_i, where, for i = 1, . . . , n, t_i = Σ_{j=1}^{i−1} d(j, j + 1) can be interpreted as the “time” at which node i is first visited in the tour T*_n.
In the solution constructed by Insert Last, the time at which each node i ≠ n + 1 is visited is the same as in the original tour (t_i), while t_{n+1} = t_n + d(n, n + 1). The latency of the solution is thus Σ_{i=1}^{n+1} t_i = Σ_{i=1}^{n} t_i + t_n + d(n, n + 1) ≤ 2ℓ(T*_n) + ℓ(T*_{n+1}) ≤ 3ℓ(T*_{n+1}), where we have used ℓ(T*_{n+1}) ≥ d(n, n + 1) (any feasible tour must include a subpath from n to n + 1 or vice versa). ⊓⊔
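The Insert Last algorithm and the latency objective can be sketched as follows (encodings are ours):

```python
def latency(tour, d):
    """Sum of the first-visit times of all nodes; tour[0] is the start."""
    t, total = 0, 0
    for a, b in zip(tour, tour[1:]):
        t += d[a][b]        # arrival time at node b
        total += t
    return total

def insert_last(tour, v):
    # visit the new node after all others (in the cyclic view,
    # just before returning to node 1)
    return tour + [v]
```

On the line metric below, inserting node 4 last raises the latency by exactly t_n + d(n, n + 1), as in the proof.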
Bibliography