finalsample
Question 1.
This problem is concerned with using an optimality criterion for linear programming to
decide whether the vector x∗ = (0, 2, 0, 7, 0) is optimal for the linear program
(P) maximize 8x1 − 9x2 + 12x3 + 4x4 + 11x5
subject to 2x1 − 3x2 + 4x3 + x4 + 3x5 ≤ 1
x1 + 7x2 + 3x3 − 2x4 + x5 ≤ 1
5x1 + 4x2 − 6x3 + 2x4 + 3x5 ≤ 22
x1 , . . . , x5 ≥ 0.
2. Noting that the constraints of (P) have the form Ax ≤ b, x ≥ 0, write down the basic
variables at solution x∗ in the canonical form.
3. For the moment, assume that x∗ is an optimal solution of (P). Write down a system of
equations in the dual variables that must hold as a consequence of this assumption.
4. Solve for the dual solution according to the system in part (3) and verify whether the
vector x∗ is optimal for the linear program.
The dual (D) of (P) is
minimize y1 + y2 + 22y3
subject to 2y1 + y2 + 5y3 ≥ 8
−3y1 + 7y2 + 4y3 ≥ −9
4y1 + 3y2 − 6y3 ≥ 12
y1 − 2y2 + 2y3 ≥ 4
3y1 + y2 + 3y3 ≥ 11
y1 , y2 , y3 ≥ 0.
2. Noting that the constraints of (P) have the form Ax ≤ b, x ≥ 0, write down the basic
variables at solution x∗ in the canonical form.
Let the slack variables for the three constraints be x6 , x7 , and x8 from top down. Then
the basic variables at x∗ are x2 , x4 , and x7 : the components x2 = 2 and x4 = 7 are
positive, and the second constraint holds with slack x7 = 1, while the first and third
constraints are tight (x6 = x8 = 0).
3. For the moment, assume that x∗ is an optimal solution of (P). Write down a system of
equations in the dual variables that must hold as a consequence of this assumption.
By complementary slackness, the dual constraints corresponding to the positive primal
variables x2 and x4 must hold with equality, and the dual variable y2 associated with the
slack second primal constraint must vanish:
−3y1 + 7y2 + 4y3 = −9
y1 − 2y2 + 2y3 = 4
y2 = 0.
4. Solve for the dual solution according to the system in part (3) and verify whether the
vector x∗ is optimal for the linear program.
After substituting y2 = 0 into the first two equations, we obtain a system of two equations
in two unknowns. The solution of the latter system is
(y1 , y3 ) = (17/5, 3/10).
So, the assumption that x∗ is optimal leads to the conclusion that the values of the dual
variables must be
(y1 , y2 , y3 ) = (17/5, 0, 3/10).
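As a quick sanity check (a sketch, not part of the original solution), the 2×2 system can be solved in exact rational arithmetic; the two equations are dual constraints 2 and 4 made tight, after setting y2 = 0:

```python
from fractions import Fraction as F

# Solve the 2x2 system obtained from dual constraints 2 and 4 with y2 = 0:
#   -3*y1 + 4*y3 = -9
#    1*y1 + 2*y3 =  4
# by Cramer's rule, using exact rational arithmetic.
a, b, p = F(-3), F(4), F(-9)
c, d, q = F(1), F(2), F(4)
det = a * d - b * c           # -3*2 - 4*1 = -10
y1 = (p * d - b * q) / det    # (-18 - 16) / -10 = 17/5
y3 = (a * q - p * c) / det    # (-12 + 9)  / -10 = 3/10
print(y1, y3)                 # 17/5 3/10
```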
It is tempting to conclude that x∗ is optimal because the primal and dual objective
functions have the same value, namely 10, at
x∗ = (0, 2, 0, 7, 0) and (y1 , y2 , y3 ) = (17/5, 0, 3/10),
respectively. This would be the correct answer if y = (17/5, 0, 3/10) were feasible
in (D). But it is not feasible for (D): the third dual constraint is violated, since
4(17/5) + 3(0) − 6(3/10) = 59/5 < 12. Hence the correct answer is that x∗ is not
optimal for (P).
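The infeasibility claim can also be verified numerically; the following sketch (constraint data copied from (D) above, tolerance chosen arbitrarily) checks each dual constraint at the candidate point:

```python
# Feasibility check of the candidate dual point y = (17/5, 0, 3/10) in (D).
y = (17 / 5, 0.0, 3 / 10)
# The five dual constraints A^T y >= c of (D): (coefficients, right-hand side).
constraints = [
    ((2, 1, 5), 8),
    ((-3, 7, 4), -9),
    ((4, 3, -6), 12),
    ((1, -2, 2), 4),
    ((3, 1, 3), 11),
]
violated = []
for i, (row, rhs) in enumerate(constraints, start=1):
    lhs = sum(a * yj for a, yj in zip(row, y))
    if lhs < rhs - 1e-9:                 # small tolerance for float round-off
        violated.append(i)
    print(f"constraint {i}: {lhs:.2f} >= {rhs}? {lhs >= rhs - 1e-9}")
print("violated:", violated)             # [3]: only the third constraint fails
```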
Question 2.
Consider the problem
max −p1 x1 + p2 x2
(P ) s.t. x1 − x2 = 0
0 ≤ xj ≤ 1, j = 1, 2
Discuss the range of optimal feasible solutions to (P) and its dual (D) in the three cases:
(a) p1 < p2
(b) p1 = p2
(c) p1 > p2
The primal and the dual are
(P) max −p1 x1 + p2 x2
s.t. x1 − x2 = 0
0 ≤ xj ≤ 1, j = 1, 2
(D) min y2 + y3
s.t. y1 + y2 ≥ −p1
y1 − y3 ≤ −p2
0 ≤ yj , j = 2, 3
(a) p1 < p2 . In this case, the optimal solution for the primal is (x1 , x2 ) = (1, 1). The
optimal dual solutions are y2 = t, y1 = −p1 − t, y3 = p2 − p1 − t for any t ∈ [0, p2 − p1 ].
(b) p1 = p2 . In this case, any feasible solution is an optimal solution for the primal. The
optimal dual solution is y2 = y3 = 0, y1 = −p1 = −p2 .
(c) p1 > p2 . In this case, the optimal solution for the primal is (x1 , x2 ) = (0, 0). The
optimal dual solutions are y2 = y3 = 0 and any y1 ∈ [−p1 , −p2 ].
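A small numerical check of the three cases (the sample values of p1, p2, and the dual parameter t below are hypothetical choices): for each case we verify primal feasibility, dual feasibility, and equality of the two objective values, which certifies optimality by strong duality.

```python
def check(p1, p2, x, y, tol=1e-9):
    """Verify that x is optimal for (P) and y for (D) via strong duality."""
    x1, x2 = x
    y1, y2, y3 = y
    # primal feasibility: x1 = x2, box constraints
    assert abs(x1 - x2) <= tol and 0 <= x1 <= 1 and 0 <= x2 <= 1
    # dual feasibility: y1 + y2 >= -p1, y1 - y3 <= -p2, y2, y3 >= 0
    assert y1 + y2 >= -p1 - tol and y1 - y3 <= -p2 + tol
    assert y2 >= -tol and y3 >= -tol
    primal = -p1 * x1 + p2 * x2
    dual = y2 + y3
    assert abs(primal - dual) <= tol     # equal objectives => both optimal
    return primal

# case (a): p1 = 1 < p2 = 3, dual family member with t = 0.5
print(check(1, 3, x=(1, 1), y=(-1 - 0.5, 0.5, 3 - 1 - 0.5)))
# case (b): p1 = p2 = 2, an arbitrary feasible x = (0.7, 0.7)
print(check(2, 2, x=(0.7, 0.7), y=(-2, 0, 0)))
# case (c): p1 = 3 > p2 = 1, y1 anywhere in [-3, -1], here y1 = -2
print(check(3, 1, x=(0, 0), y=(-2, 0, 0)))
```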
Question 3. Consider the polyhedron
P = {y ∈ R^m : A^T y ≤ c},
where A ∈ R^{m×n} with rank m and m < n, and c ∈ R^n , and the linear program
minimize c^T x subject to Ax = 0, x ≥ 0.
Show:
ii. P has a nonempty interior if and only if the linear program has the unique all-0 minimal
solution.
iii. P is bounded if and only if X = {x : Ax = 0, x ≥ 0} has a nonempty interior, that is,
there is x > 0 with Ax = 0.
Proof of iii. Suppose first that X has a nonempty interior, so there is x̄ > 0 with Ax̄ = 0.
Let y be any point of P and let s = c − A^T y ≥ 0 be its slack vector. Since Ax̄ = 0,
c^T x̄ = (c − A^T y)^T x̄ = s^T x̄.
Thus, for every j, sj x̄j ≤ s^T x̄ = c^T x̄, or sj ≤ (c^T x̄)/x̄j . This implies that the dual
slack variables are bounded; since A^T has full column rank, it follows that P is bounded.
On the other hand, if at least one variable in x has to be zero at every feasible solution to
x ≥ 0 and Ax = 0, then, from the strict complementarity partition theorem, there is s̄ ≠ 0
with s̄ ≥ 0 and s̄ = −A^T ȳ. Let y be any feasible point in P; then y(α) = y + α·ȳ is also
in P for any α ≥ 0, which implies that P is unbounded.
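Part (iii) can be illustrated on a tiny instance with m = 1 and n = 2 (the data below are hypothetical, chosen only so that P is an interval on the real line):

```python
# For m = 1, n = 2, P = {y in R : a1*y <= c1, a2*y <= c2} is an interval.
def interval_P(a1, a2, c1, c2):
    """Return P as (lo, hi), using -inf/inf for unbounded ends."""
    lo, hi = float("-inf"), float("inf")
    for a, c in ((a1, c1), (a2, c2)):
        if a > 0:
            hi = min(hi, c / a)
        elif a < 0:
            lo = max(lo, c / a)
    return lo, hi

# A = [1, -1], c = (1, 1): x = (t, t) > 0 satisfies Ax = 0, so X has a
# nonempty interior and, as part (iii) predicts, P is bounded.
print(interval_P(1, -1, 1, 1))   # (-1.0, 1.0): bounded
# A = [1, 1], c = (1, 1): Ax = 0, x >= 0 forces x = 0, so X has an empty
# interior and P should be unbounded.
print(interval_P(1, 1, 1, 1))    # unbounded below
```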
Question 4. Prove Lemma 1 in Lecture Note #10, which is: Given an interior feasible
solution (x > 0, y, s > 0) of the standard form linear program, let the direction
d = (dx , dy , ds ) be generated by the equations
Sdx + Xds = γµe − Xs,
Adx = 0,
−AT dy − ds = 0
with γ = n/(n + ρ), and let
θ = α·sqrt(min(Xs)) / ||(XS)^{−1/2}((x^T s/(n + ρ))e − Xs)||.  (1)
Proof of Lemma 1. Note that dTx ds = 0 by the facts that Adx = 0 and ds = −AT dy . Recall
that
ψn+ρ (x, s) = (n + ρ) log(s^T x) − Σ_{j=1}^n log(sj xj )
and
x+ = x + θdx , y + = y + θdy , and s+ = s + θds .
Then it is clear that
ψn+ρ (x+ , s+ ) − ψn+ρ (x, s) ≤ ((n + ρ)/(s^T x))·θ(x^T ds + s^T dx ) − θ(e^T X^{−1} dx + e^T S^{−1} ds ) + (||θS^{−1} ds ||^2 + ||θX^{−1} dx ||^2)/(2(1 − τ )),  (2)
where τ = max{||θS^{−1} ds ||∞ , ||θX^{−1} dx ||∞ }, and the last inequality follows from Karmarkar's
Lemma. In the following, we first bound the difference of the first two terms and then bound
the third term.
For simplicity, we let β = (n + ρ)/(s^T x). Then we have
γµe = (n/(n + ρ))·(s^T x/n)·e = (1/β)e.
The difference of the first two terms
Therefore,
||θS^{−1} ds ||^2 + ||θX^{−1} dx ||^2
≤ θ^2 ||(XS)^{−1/2} (Xds + Sdx )||^2 / min(Xs)
= θ^2 ||(XS)^{−1/2} ((1/β)e − Xs)||^2 / min(Xs)
= α^2 ,
and the last equality follows from the definition of θ. This also shows that τ ≤ α. Therefore,
we must have
(||θS^{−1} ds ||^2 + ||θX^{−1} dx ||^2)/(2(1 − τ )) ≤ α^2/(2(1 − α)).  (3)
The desired inequality follows from (2) and (3).
To complete the proof of Lemma 1, we should show that
Ax+ = b,  c − A^T y+ = s+ ,  x+ > 0,  and s+ > 0.
The two equalities are easily verified from the definitions of dx , dy , and ds . To prove the
inequalities, we recall that
||θS^{−1} ds ||^2 + ||θX^{−1} dx ||^2 ≤ α^2 < 1
and thus τ ≤ α < 1; hence x+ = X(e + θX^{−1} dx ) > 0 and s+ = S(e + θS^{−1} ds ) > 0.
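The observation dx^T ds = 0 at the start of the proof can be checked on a tiny instance. The sketch below uses hypothetical data with n = 2, m = 1 (A = [1, 1], x = (1, 1), c = (2, 3), y = 0, so s = c − A^T y = (2, 3)) and ρ = 1, and solves the defining equations for (dx , dy , ds ) exactly:

```python
from fractions import Fraction as F

# Hypothetical instance: A = [1, 1], x = (1, 1), s = (2, 3), rho = 1.
n, rho = 2, 1
x = (F(1), F(1))
s = (F(2), F(3))
gamma = F(n, n + rho)
mu = (s[0] * x[0] + s[1] * x[1]) / n
g = gamma * mu                       # gamma*mu = s^T x / (n + rho)

# With A dx = 0 we have dx2 = -dx1, and ds = -A^T dy = (-dy, -dy); the
# system S dx + X ds = gamma*mu*e - Xs reduces to two equations in (dx1, dy):
#    s1*dx1 - x1*dy = g - x1*s1
#   -s2*dx1 - x2*dy = g - x2*s2
a11, a12, r1 = s[0], -x[0], g - x[0] * s[0]
a21, a22, r2 = -s[1], -x[1], g - x[1] * s[1]
det = a11 * a22 - a12 * a21
dx1 = (r1 * a22 - a12 * r2) / det    # Cramer's rule
dy = (a11 * r2 - r1 * a21) / det
dx = (dx1, -dx1)
ds = (-dy, -dy)

print(dx, dy, ds)
# The first fact in the proof: dx^T ds = 0, since A dx = 0 and ds = -A^T dy.
print(dx[0] * ds[0] + dx[1] * ds[1])    # 0
```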