
MS&E 310 Sample Final Examination

Linear Programming Final Sample



Question 1.
This problem is concerned with using an optimality criterion for linear programming to
decide whether the vector x∗ = (0, 2, 0, 7, 0) is optimal for the linear program

maximize 8x1 − 9x2 + 12x3 + 4x4 + 11x5


subject to 2x1 − 3x2 + 4x3 + x4 + 3x5 ≤ 1
(P) x1 + 7x2 + 3x3 − 2x4 + x5 ≤ 1
5x1 + 4x2 − 6x3 + 2x4 + 3x5 ≤ 22
x1 , x2 , x3 , x4 , x5 ≥ 0

1. Write down the dual (D) of the problem (P).

2. Noting that the constraints of (P) have the form Ax ≤ b, x ≥ 0, write down the basic
variables at solution x∗ in the canonical form.

3. For the moment, assume that x∗ is an optimal solution of (P). Write down a system of
equations in the dual variables that must hold as a consequence of this assumption.

4. Solve for the dual solution according to the system in part (3) and verify whether the
vector x∗ is optimal for the linear program.

1. Write down the dual (D) of the problem (P).


The dual problem (D) is

minimize y1 + y2 + 22y3
subject to 2y1 + y2 + 5y3 ≥ 8
−3y1 + 7y2 + 4y3 ≥ −9
4y1 + 3y2 − 6y3 ≥ 12
y1 − 2y2 + 2y3 ≥ 4
3y1 + y2 + 3y3 ≥ 11
y1 , y2 , y3 ≥ 0

2. Noting that the constraints of (P) have the form Ax ≤ b, x ≥ 0, write down the basic
variables at solution x∗ in the canonical form.
Let the slack variables for the three constraints be x6 , x7 , and x8 from top down. Then

The basic variables at x∗ are {x2 , x4 , x7 }.


3. For the moment, assume that x∗ is an optimal solution of (P). Write down a system of
equations in the dual variables that must hold as a consequence of this assumption.
From part (2), we find that the second and fourth of the dual constraints must hold as
equations. Likewise, the second dual variable must be zero. Thus, we have

−3y1 + 7y2 + 4y3 = −9


y1 − 2y2 + 2y3 = 4
y2 = 0

4. Solve for the dual solution according to the system in part (3) and verify whether the
vector x∗ is optimal for the linear program.
After substituting y2 = 0 into the first two equations, we obtain a system of two equations
in two unknowns. The solution of the latter system is
(y1 , y3 ) = (17/5, 3/10).

So, the assumption that x∗ is optimal leads to the conclusion that the values of the dual
variables must be

(y1 , y2 , y3 ) = (17/5, 0, 3/10).

It is tempting to conclude that x∗ is optimal because the primal and dual objective
functions have the same value, namely 10, at

x∗ = (0, 2, 0, 7, 0) and (y1 , y2 , y3 ) = (17/5, 0, 3/10),

respectively. This would be the correct answer if y = (17/5, 0, 3/10) were feasible in (D).
But it is not feasible for (D): the third dual constraint is violated, since
4(17/5) + 3(0) − 6(3/10) = 59/5 < 12. Hence the correct answer is that x∗ is not optimal
for (P).
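As a sanity check, the conclusion can be verified numerically. The sketch below (assuming NumPy and SciPy are available) evaluates the candidate dual vector and then solves (P) directly:

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([8.0, -9.0, 12.0, 4.0, 11.0])
A = np.array([[2.0, -3.0, 4.0, 1.0, 3.0],
              [1.0, 7.0, 3.0, -2.0, 1.0],
              [5.0, 4.0, -6.0, 2.0, 3.0]])
b = np.array([1.0, 1.0, 22.0])

# Objective value at the candidate point x* = (0, 2, 0, 7, 0)
x_star = np.array([0.0, 2.0, 0.0, 7.0, 0.0])
print(c @ x_star)                 # 10.0

# The dual vector forced by complementary slackness violates constraint 3 of (D)
y = np.array([17 / 5, 0.0, 3 / 10])
print(A.T @ y)                    # third entry is 11.8 < 12

# Solve (P) directly (linprog minimizes, so negate c to maximize)
res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 5)
print(-res.fun > 10)              # True: the true optimum exceeds 10
```

Since x∗ is feasible with value 10 but not optimal, the solver's optimal value strictly exceeds 10.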

Question 2.
Consider the problem
max −p1 x1 + p2 x2
(P ) s.t. x1 − x2 = 0
0 ≤ xj ≤ 1, j = 1, 2

Discuss the range of optimal feasible solutions to (P) and its dual (D) in the three cases:
(a) p1 < p2
(b) p1 = p2

(c) p1 > p2
The primal and the dual are

(P)  max −p1 x1 + p2 x2
     s.t. x1 − x2 = 0
          0 ≤ xj ≤ 1, j = 1, 2

(D)  min y2 + y3
     s.t. y1 + y2 ≥ −p1
          y1 − y3 ≤ −p2
          yj ≥ 0, j = 2, 3

(a) p1 < p2 . In this case, the optimal solution for the primal is (x1 , x2 ) = (1, 1). The
optimal dual solutions are y2 = t, y1 = −p1 − t, y3 = p2 − p1 − t for any t ∈ [0, p2 − p1 ].
(b) p1 = p2 . In this case, every feasible solution is an optimal solution for the primal. The
optimal dual solution is y2 = y3 = 0, y1 = −p1 = −p2 .
(c) p1 > p2 . In this case, the optimal solution for the primal is (x1 , x2 ) = (0, 0). The
optimal dual solutions are y2 = y3 = 0 and any y1 ∈ [−p1 , −p2 ].
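The three cases can be checked numerically; a small sketch (assuming SciPy is available) solves (P) for representative values of p1 and p2:

```python
import numpy as np
from scipy.optimize import linprog

def solve_primal(p1, p2):
    # maximize -p1*x1 + p2*x2  <=>  minimize p1*x1 - p2*x2, subject to x1 = x2
    res = linprog([p1, -p2], A_eq=[[1, -1]], b_eq=[0], bounds=[(0, 1), (0, 1)])
    return res.x, -res.fun

print(solve_primal(1.0, 2.0))   # case (a): x = (1, 1), value p2 - p1 = 1
print(solve_primal(2.0, 1.0))   # case (c): x = (0, 0), value 0
```

In case (b), any point on the segment x1 = x2 is returned; the optimal value is always 0.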

Question 3. Consider the linear inequality system

P = {y ∈ R^m : A^T y ≤ c}

where A ∈ R^{m×n} with rank m and m < n, and c ∈ R^n , and the linear program

min c^T x subject to x ∈ X = {x ∈ R^n : Ax = 0, x ≥ 0}

Show

i P is feasible if and only if the minimal value of the linear program is 0.

ii P has a nonempty interior if and only if the linear program has the unique all-zero minimal
solution.

iii P is bounded if and only if X has a nonempty interior, that is, there is x > 0 with Ax = 0.

Proof i This is exactly Farkas' lemma: A^T y ≤ c is feasible if and only if every x with Ax = 0,
x ≥ 0 satisfies c^T x ≥ 0; since x = 0 is always feasible, the latter is equivalent to the minimal
value of the linear program being 0.


Proof ii Let x∗ be any optimal solution and s̄ = c − A^T ȳ > 0 be an interior slack solution.
Then, c^T x∗ = 0 and Ax∗ = 0 imply that s̄^T x∗ = c^T x∗ − ȳ^T Ax∗ = 0, which, together with
s̄ > 0 and x∗ ≥ 0, further implies that x∗ = 0. The reverse direction follows from the strict
complementarity partition theorem.
Proof iii Let x̄ > 0 and Ax̄ = 0. Then, for any point y in P with slack s = c − A^T y ≥ 0 we have

c^T x̄ = (c − A^T y)^T x̄ = s^T x̄,

since y^T Ax̄ = 0. Thus, for every j, sj x̄j ≤ c^T x̄, or sj ≤ (c^T x̄)/x̄j . This implies that the
dual slack variables are bounded, so P is bounded.

On the other hand, if at least one variable in x has to be zero at every feasible solution to
x ≥ 0 and Ax = 0, then, from the strict complementarity partition theorem, there is s̄ ≠ 0,
s̄ ≥ 0 with s̄ = −A^T ȳ. Let y be any feasible point in P; then y(α) = y + α · ȳ is also in P for
any α ≥ 0, since A^T y(α) = A^T y − αs̄ ≤ c. This implies that P is unbounded.
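The bound in the proof of (iii) can be illustrated on a toy instance (the data below are our own choice, not part of the problem): with A = [1, −1], the set P = {y : y ≤ c1, −y ≤ c2} is the bounded interval [−c2, c1], and X contains the interior point x̄ = (1, 1):

```python
import numpy as np

A = np.array([[1.0, -1.0]])      # rank 1, m = 1 < n = 2
c = np.array([2.0, 3.0])

x_bar = np.array([1.0, 1.0])     # x_bar > 0 with A x_bar = 0: interior point of X
assert np.allclose(A @ x_bar, 0.0)

# The proof's bound: s_j <= (c^T x_bar) / x_bar_j for every dual slack s = c - A^T y
bound = (c @ x_bar) / x_bar      # here [5., 5.]

# Check the bound over all of P = [-c2, c1] = [-3, 2]
for y1 in np.linspace(-3.0, 2.0, 11):
    s = c - A.T @ np.array([y1])
    assert np.all(s >= -1e-12) and np.all(s <= bound + 1e-12)
print("dual slacks bounded, hence P bounded")
```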

Question 4. Prove Lemma 1 in Lecture Note #10. It states: Given an interior feasible
solution (x > 0, y, s > 0) for the standard form linear program, let the direction
d = (dx , dy , ds ) be generated by the equations

Sdx + Xds = γµe − Xs,
Adx = 0,
−A^T dy − ds = 0

with γ = n/(n + ρ), and let

θ = α sqrt(min(Xs)) / ||(XS)^{−1/2} ( (x^T s/(n + ρ)) e − Xs )||,        (1)

where α is a positive constant less than 1. Let

x+ = x + θdx , y+ = y + θdy , and s+ = s + θds .

Then, (x+ , y+ , s+ ) remains interior feasible and

ψ_{n+ρ}(x+ , s+ ) − ψ_{n+ρ}(x, s)
≤ −α sqrt(min(Xs)) ||(XS)^{−1/2} (e − ((n + ρ)/(x^T s)) Xs)|| + α²/(2(1 − α)),

where

ψ_{n+ρ}(x, s) = (n + ρ) log(s^T x) − Σ_{j=1}^n log(sj xj ).

Proof of Lemma 1. Note that dx^T ds = 0 by the facts that Adx = 0 and ds = −A^T dy . Recall
that

ψ_{n+ρ}(x, s) = (n + ρ) log(s^T x) − Σ_{j=1}^n log(sj xj )

and

x+ = x + θdx , y+ = y + θdy , and s+ = s + θds .
Then it is clear that

ψ_{n+ρ}(x+ , s+ ) − ψ_{n+ρ}(x, s)

= (n + ρ) log(1 + θ(ds^T x + dx^T s)/(s^T x)) − Σ_{j=1}^n [ log(1 + θ dsj /sj ) + log(1 + θ dxj /xj ) ]

≤ (n + ρ) θ (ds^T x + dx^T s)/(s^T x) − θ e^T (S^{−1} ds + X^{−1} dx )
  + (||θS^{−1} ds ||² + ||θX^{−1} dx ||²)/(2(1 − τ )),

where τ = max{||θS^{−1} ds ||∞ , ||θX^{−1} dx ||∞ }, and the last inequality follows from
Karmarkar's lemma. In the following, we first bound the difference of the first two terms
and then bound the third term.

For simplicity, we let β = (n + ρ)/(s^T x). Then we have

γµe = (n/(n + ρ)) (s^T x/n) e = (1/β) e.
The difference of the first two terms is

βθ(ds^T x + dx^T s) − θe^T (S^{−1} ds + X^{−1} dx )

= βθe^T (Xds + Sdx ) − θe^T (S^{−1} ds + X^{−1} dx )
= θ(βe − (XS)^{−1} e)^T (Xds + Sdx )
= θ(βe − (XS)^{−1} e)^T ((1/β) e − Xs)                  (by Newton's equations)
= −θ(e − βXs)^T (XS)^{−1} ((1/β) e − Xs)
= −θβ((1/β) e − Xs)^T (XS)^{−1} ((1/β) e − Xs)
= −θβ ||(XS)^{−1/2} ((1/β) e − Xs)||²
= −βα sqrt(min(Xs)) ||(XS)^{−1/2} ((1/β) e − Xs)||      (by the definition of θ)
= −α sqrt(min(Xs)) ||(XS)^{−1/2} (e − βXs)||
= −α sqrt(min(Xs)) ||(XS)^{−1/2} (e − ((n + ρ)/(x^T s)) Xs)||.        (2)
Now we bound

(||θS^{−1} ds ||² + ||θX^{−1} dx ||²) / (2(1 − τ )).

Note that

S^{−1} ds = (XS)^{−1/2} (XS)^{−1/2} Xds , and X^{−1} dx = (XS)^{−1/2} (XS)^{−1/2} Sdx .

Therefore,

||θS^{−1} ds ||² + ||θX^{−1} dx ||²

≤ θ² (||(XS)^{−1/2} Xds ||² + ||(XS)^{−1/2} Sdx ||²) / min(Xs)
= θ² ||(XS)^{−1/2} Xds + (XS)^{−1/2} Sdx ||² / min(Xs)
= θ² ||(XS)^{−1/2} (Xds + Sdx )||² / min(Xs)
= θ² ||(XS)^{−1/2} ((1/β) e − Xs)||² / min(Xs)
= α²,

where the first equality holds since the cross term

ds^T X (XS)^{−1} Sdx = ds^T dx = 0,

and the last equality follows from the definition of θ. This also shows that τ ≤ α. Therefore,
we must have

(||θS^{−1} ds ||² + ||θX^{−1} dx ||²) / (2(1 − τ )) ≤ α²/(2(1 − α)).        (3)
The desired inequality follows from (2) and (3).
To complete the proof of Lemma 1, we should show that

x+ > 0 and s+ > 0,
Ax+ = b,
c − A^T y+ = s+ .

The last two equalities are easily verified from the definitions of dx , dy , ds . To prove the
inequalities, we recall that

||θS^{−1} ds ||² + ||θX^{−1} dx ||² ≤ α² < 1,

and thus τ < 1. Hence every component of θX^{−1} dx and θS^{−1} ds is greater than −1, so
x+ = X(e + θX^{−1} dx ) > 0 and s+ = S(e + θS^{−1} ds ) > 0.
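One potential-reduction step of Lemma 1 can be exercised numerically. The sketch below uses a toy instance of our own choosing (min x1 + 2x2 s.t. x1 + x2 = 2, x ≥ 0, with ρ = n and α = 0.5), forms the Newton direction by eliminating dx and ds, takes the step (1), and checks the lemma's two conclusions:

```python
import numpy as np

# Toy standard-form LP (our own instance): min c^T x  s.t.  Ax = b, x >= 0
A = np.array([[1.0, 1.0]])
b = np.array([2.0])
c = np.array([1.0, 2.0])

# An interior feasible triple (x > 0, y, s > 0)
x = np.array([1.0, 1.0])
y = np.array([0.0])
s = c - A.T @ y

n, rho, alpha = 2, 2.0, 0.5

def psi(x, s):
    # potential function psi_{n+rho}(x, s)
    return (n + rho) * np.log(s @ x) - np.sum(np.log(s * x))

# Right-hand side gamma*mu*e - Xs, with gamma = n/(n+rho) and mu = x^T s / n
gamma_mu = (s @ x) / (n + rho)
r = gamma_mu * np.ones(n) - x * s

# Solve the Newton system: S dx + X ds = r, A dx = 0, ds = -A^T dy
M = A @ np.diag(x / s) @ A.T
dy = np.linalg.solve(M, -A @ (r / s))
ds = -A.T @ dy
dx = (r - x * ds) / s

# Step size (1): theta = alpha * sqrt(min(Xs)) / ||(XS)^{-1/2} r||
theta = alpha * np.sqrt(np.min(x * s)) / np.linalg.norm(r / np.sqrt(x * s))

xp, yp, sp = x + theta * dx, y + theta * dy, s + theta * ds
print(np.all(xp > 0) and np.all(sp > 0))     # True: the step remains interior
print(psi(xp, sp) < psi(x, s))               # True: the potential decreases
```

Since α = 0.5, the guaranteed decrease −α·sqrt(min(Xs))·||(XS)^{−1/2}(e − βXs)|| + α²/(2(1 − α)) is strictly negative for this instance, so the check on the last line must succeed.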
