
Fundamentals of Optimization

Lecture 4

Xavier Cabezas

FCNM-ESPOL
April 8, 2021
Outline

Optimality conditions for LP

Simplex method



Recalling the linear programming problem definition

A mathematical model of optimization is a function f : X → R, x ↦ f(x).

• X is called the feasible set.
• x ∈ X is called a decision (in this course x ∈ R^n).
• f(x) is called the cost.
• The aim is to find an x* ∈ X such that f(x*) ≤ f(x), ∀x ∈ X.

A mathematical programming problem P is min { f (x) | x ∈ X }.


• The feasible set X can be represented by a finite number of constraints { x | g_i(x) ≤ 0, i = 1, . . . , m }.
• If f and the g_i are linear for all i, the problem P is called a linear programming problem.



Standard form of a linear programming problem

The linear programming problem can be written as

min { c'x | Ax ≤ b }.

It can also be written as

min { c'x | Ax = b, x ≥ 0 },

by adding slack variables s = b − Ax, s_i ≥ 0. This is called the standard form of the linear program, and c'x is called the objective function.

• The optimal value of the objective function, c'x*, may be finite or unbounded.
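As a concrete illustration, the following minimal NumPy sketch performs this conversion for the data of the unbounded example that appears later in the lecture (constraints x1 − x2 ≤ 10 and 2x1 ≤ 40, cost −2x1 − x2); it assumes the original variables are already required to be nonnegative, and the array names are arbitrary.

import numpy as np

# Inequality-form data: min { c'x | Ax <= b, x >= 0 } (example from later slides).
A = np.array([[1.0, -1.0],
              [2.0,  0.0]])
b = np.array([10.0, 40.0])
c = np.array([-2.0, -1.0])

# Standard form: append one slack variable per row, s = b - Ax >= 0,
# so that [A  I][x; s] = b with every variable nonnegative.
m = A.shape[0]
A_std = np.hstack([A, np.eye(m)])          # [A  I]
c_std = np.concatenate([c, np.zeros(m)])   # slack variables have zero cost

print(A_std)   # [[ 1. -1.  1.  0.]
               #  [ 2.  0.  0.  1.]]
print(c_std)   # [-2. -1.  0.  0.]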



Optimality conditions for linear programming

Theorem
Let L = min { c'x | Ax = b, x ≥ 0 } be a linear programming problem with a nonempty feasible region, where c ∈ R^n, A ∈ R^{m×n}, rank(A) = m and b ∈ R^m. Let x_1, . . . , x_k be the extreme points and d_1, . . . , d_l the extreme directions of the feasible region.

1. L has a finite optimal solution if and only if c'd_j ≥ 0 for all j = 1, . . . , l.
2. If L has a finite optimal solution, then at least one extreme point x_i solves the problem.



Optimality conditions for linear programming

Proof.
By the representation theorem, we can write

L = min { c'( Σ_{i=1}^{k} λ_i x_i + Σ_{j=1}^{l} μ_j d_j ) | Σ_{i=1}^{k} λ_i = 1, λ_i ≥ 0, μ_j ≥ 0 }.

1. ⇒) If c'd_j < 0 for some j, then, since μ_j ≥ 0 can be made arbitrarily large and L is a minimization problem, the solution of L is unbounded. ⇐) Similar.

2. If L's solution is finite, then μ_j = 0 can be chosen for all j, and the solution can be found by letting λ_i = 1 for some i such that c'x_i = min { c'x_i | i = 1, . . . , k }.



LP solutions examples

[Figure: (a) a possible finite solution in a bounded feasible region; (b) a possible finite solution in an unbounded feasible region. Both panels are drawn in the (x1, x2) plane.]



Outline

Optimality conditions for LP

Simplex method

Finding a solution by exhaustive enumeration?

Let L = min { c'x | Ax = b, x ≥ 0 } with extreme points x_1, . . . , x_k in its feasible region. Checking c'x_i for all i = 1, . . . , k, even though it is possible, can be a bad idea (computationally speaking).

The simplex method is a procedure that starts from an extreme point and moves to another one with an equal or better objective function value.

Checking optimality in extreme points

Let L = min { c'x | Ax = b, x ≥ 0 } with rank(A) = m. We will assume the feasible region is nonempty.

• In order to find optimal solutions we will focus on extreme points (even though an optimal solution x* need not be an extreme point, as in the case of infinitely many solutions).

• Let A = [B  N] with B ∈ R^{m×m}, B = [a_B1, . . . , a_Bm] and rank(B) = m. B is called the basis. The variables x_B1, . . . , x_Bm corresponding to B are called basic variables. N ∈ R^{m×(n−m)}, and the variables corresponding to N are called nonbasic variables.
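As a small illustration (a sketch only, using the standard-form data of the unbounded example from the later slides), the basis can be formed in NumPy by selecting columns of A; the index lists below are chosen by hand, with the slack variables as the initial basic variables.

import numpy as np

# Standard-form data of the later example: min { c'x | Ax = b, x >= 0 }.
A = np.array([[1.0, -1.0, 1.0, 0.0],
              [2.0,  0.0, 0.0, 1.0]])
b = np.array([10.0, 40.0])

basic = [2, 3]        # indices of the basic variables (x3, x4)
nonbasic = [0, 1]     # indices of the nonbasic variables (x1, x2)

B = A[:, basic]       # basis matrix B = [a_B1, ..., a_Bm]
N = A[:, nonbasic]
assert np.linalg.matrix_rank(B) == B.shape[0]   # rank(B) = m

x_B = np.linalg.solve(B, b)   # basic part of the extreme point, x_B = B^{-1} b
print(x_B)                    # [10. 40.]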

Checking optimality in extreme points

• Let x̄ be an extreme point. Then x̄' = [x̄_B'  x̄_N'] = [b̄'  0'] with b̄ = B^{-1}b ≥ 0.

• Let x' = [x_B'  x_N'] be a vector in the feasible region of L. Since Ax = Bx_B + Nx_N = b, x_B = B^{-1}b − B^{-1}Nx_N.

• c'x = c_B'x_B + c_N'x_N = c_B'B^{-1}b + (c_N' − c_B'B^{-1}N)x_N = c'x̄ + (c_N' − c_B'B^{-1}N)x_N.

• c'x = c'x̄ + (c_N' − c_B'B^{-1}N)x_N implies that c'x ≥ c'x̄ if c_N' − c_B'B^{-1}N ≥ 0, because x_N ≥ 0. Therefore x* = x̄ (we found a solution!!!).
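Continuing the NumPy sketch above (same example data, initial basis {x3, x4}), the optimality test c_N' − c_B'B^{-1}N ≥ 0 can be checked in a few lines; here both reduced costs are negative, so the starting extreme point is not optimal.

import numpy as np

A = np.array([[1.0, -1.0, 1.0, 0.0],
              [2.0,  0.0, 0.0, 1.0]])
b = np.array([10.0, 40.0])
c = np.array([-2.0, -1.0, 0.0, 0.0])
basic, nonbasic = [2, 3], [0, 1]

B_inv = np.linalg.inv(A[:, basic])
reduced_costs = c[nonbasic] - c[basic] @ B_inv @ A[:, nonbasic]   # c_N' - c_B' B^{-1} N
print(reduced_costs)                # [-2. -1.]
print(np.all(reduced_costs >= 0))   # False -> x̄ is not optimal; some x_j can improve c'x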

What if c_N' − c_B'B^{-1}N ≱ 0?

• Suppose that c_N' − c_B'B^{-1}N ≱ 0. Then at least one component, say the j-th, is negative: c_j − c_B'B^{-1}a_j < 0.

• Consider x = x̄ + λd_j = x̄ + λ[−B^{-1}a_j; e_j], where −B^{-1}a_j fills the basic positions and e_j the nonbasic positions.

• c'x = c'(x̄ + λd_j) = c'x̄ + λc'[−B^{-1}a_j; e_j] = c'x̄ + λ(c_B'(−B^{-1}a_j) + c_N'e_j) = c'x̄ + λ(c_j − c_B'B^{-1}a_j).

• Since c_j − c_B'B^{-1}a_j < 0, we get c'x < c'x̄ for λ > 0.

Checking unbounded solutions
Finding an extreme direction such that c'd_j < 0

• Let y_j = B^{-1}a_j ∈ R^{m×1}.

• Ad_j = [B  N][−B^{-1}a_j; e_j] = −a_j + Ne_j = −a_j + a_j = 0.

Remember: d_j is a direction of a convex set C if x + λd_j ∈ C for all x ∈ C and λ > 0.

• A(x̄ + λd_j) = Ax̄ + λAd_j = b. Then x = x̄ + λd_j is feasible if and only if x ≥ 0, and x ≥ 0 holds for all λ ≥ 0 if y_j ≤ 0; in that case d_j is an extreme direction.

• c'd_j = c_j − c_B'B^{-1}a_j < 0, and therefore the objective function value is unbounded (by the optimality conditions theorem).

By choosing a nonbasic variable with c_j − c_B'B^{-1}a_j < 0 and checking the sign of B^{-1}a_j we can tell whether the objective function value is unbounded; a small numeric sketch follows.
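A minimal sketch of this test (the helper name unbounded_along is hypothetical); the basis used here is the one reached at the last iteration of the worked example below, where the entering column a_3 indeed gives y_3 ≤ 0.

import numpy as np

def unbounded_along(B, a_j):
    # y_j = B^{-1} a_j; if y_j <= 0 (and the reduced cost of column j is
    # negative, checked by the caller), the objective is unbounded along d_j.
    y_j = np.linalg.solve(B, a_j)
    return bool(np.all(y_j <= 0)), y_j

B = np.array([[1.0, -1.0],     # basis columns a_1, a_2 (x1 and x2 basic)
              [2.0,  0.0]])
a3 = np.array([1.0, 0.0])      # column of the entering nonbasic variable x3
print(unbounded_along(B, a3))  # (True, array([ 0., -1.]))  -> unbounded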
Finding an extreme point with a better objective value

• Let y_j = B^{-1}a_j ≰ 0, b̄ = B^{-1}b ∈ R^{m×1} and

  λ = min_{1≤i≤m} { b̄_i / y_ij | y_ij > 0 } = b̄_r / y_rj ≥ 0.

• The components of x = x̄ + λd_j ≥ 0 are given by

  x' = [x_B1, · · · , x_B(r−1), x_Br, x_B(r+1), · · · , x_Bm, 0, · · · , 0, x_j, 0, · · · , 0],

  where x_Bi = b̄_i − (b̄_r / y_rj) y_ij for i = 1, . . . , m, x_j = b̄_r / y_rj, and every other component is zero. Note that x_Br is also equal to zero.

• The columns of A corresponding to the nonzero components of x are linearly independent. Why? By the characterization of extreme points theorem, x is an extreme point and c'x < c'x̄ (see the last line of the previous derivation).

It is said that the basic variable x_Br leaves the basis and the nonbasic variable x_j enters the basis. A sketch of the ratio test appears below.
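A sketch of the minimum-ratio test (the helper ratio_test is hypothetical); the numbers are b̄ and y_1 from the first iteration of the example below, so λ = 10 and the first basic variable (x3) leaves.

import numpy as np

def ratio_test(b_bar, y_j):
    # lambda = min { b̄_i / y_ij : y_ij > 0 }; r is the position of the minimizer.
    ratios = np.full(b_bar.shape, np.inf)
    pos = y_j > 0
    ratios[pos] = b_bar[pos] / y_j[pos]
    r = int(np.argmin(ratios))
    return ratios[r], r

lam, r = ratio_test(np.array([10.0, 40.0]), np.array([1.0, 2.0]))
print(lam, r)   # 10.0 0  -> the basic variable in row 0 (x3) leaves the basis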
The Simplex algorithm for min { c'x | Ax = b, x ≥ 0 }
Initialization: Find a starting extreme point x with basis B.
repeat
    Compute c_N' − c_B'B^{-1}N;
    if c_N' − c_B'B^{-1}N ≥ 0 then
        stop. x is an optimal extreme point;
    else
        Choose the most negative component c_j − c_B'B^{-1}a_j;
        if y_j = B^{-1}a_j ≤ 0 then
            stop. The optimal objective value is unbounded;
        else
            b̄ = B^{-1}b;
            Compute r from min_{1≤i≤m} { b̄_i / y_ij | y_ij > 0 } = b̄_r / y_rj ≥ 0;
            Update B and x = [B^{-1}b; 0] + (b̄_r / y_rj) [−B^{-1}a_j; e_j];
        end
    end
until one of the stop conditions is reached;
The Simplex algorithm for min { c'x | Ax = b, x ≥ 0 } (same algorithm, stated with the opposite sign convention)
Initialization: Find a starting extreme point x with basis B.
repeat
    Compute c_B'B^{-1}N − c_N';
    if c_B'B^{-1}N − c_N' ≤ 0 then
        stop. x is an optimal extreme point;
    else
        Choose the most positive component c_B'B^{-1}a_j − c_j;
        if y_j = B^{-1}a_j ≤ 0 then
            stop. The optimal objective value is unbounded;
        else
            b̄ = B^{-1}b;
            Compute r from min_{1≤i≤m} { b̄_i / y_ij | y_ij > 0 } = b̄_r / y_rj ≥ 0;
            Update B and x = [B^{-1}b; 0] + (b̄_r / y_rj) [−B^{-1}a_j; e_j];
        end
    end
until one of the stop conditions is reached;
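The two slides above state the same algorithm with opposite sign conventions (stop when c_N' − c_B'B^{-1}N ≥ 0, equivalently when c_B'B^{-1}N − c_N' ≤ 0). The following NumPy routine is only a rough sketch of it, under convenient assumptions: a starting basis is supplied, B is inverted explicitly, and degeneracy/anti-cycling rules are ignored; the name simplex and the tolerances are arbitrary.

import numpy as np

def simplex(A, b, c, basic):
    # Revised simplex sketch for min { c'x | Ax = b, x >= 0 }.
    # `basic` holds m column indices forming an invertible B with B^{-1} b >= 0.
    m, n = A.shape
    basic = list(basic)
    while True:
        nonbasic = [j for j in range(n) if j not in basic]
        B_inv = np.linalg.inv(A[:, basic])
        b_bar = B_inv @ b

        # z_j - c_j for the nonbasic columns (convention of the second version above).
        z_minus_c = c[basic] @ B_inv @ A[:, nonbasic] - c[nonbasic]
        if np.all(z_minus_c <= 1e-9):
            x = np.zeros(n)
            x[basic] = b_bar
            return "optimal", x          # optimal extreme point reached

        # Entering variable: most positive z_j - c_j.
        j = nonbasic[int(np.argmax(z_minus_c))]
        y_j = B_inv @ A[:, j]
        if np.all(y_j <= 1e-9):
            return "unbounded", None     # extreme direction d_j with c'd_j < 0

        # Minimum-ratio test: choose the leaving basic variable.
        ratios = np.full(m, np.inf)
        pos = y_j > 1e-9
        ratios[pos] = b_bar[pos] / y_j[pos]
        r = int(np.argmin(ratios))
        basic[r] = j                     # x_{B_r} leaves, x_j enters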
The Simplex Tableau
With a starting basis we can write

Objective row: z − c_B'x_B − c_N'x_N = 0

Constraint rows: Bx_B + Nx_N = b

        z    x_B'     x_N'     RHS
        1    −c_B'    −c_N'    0
        0    B        N        b

All the information for the Simplex method iterations is displayed in the tableau by updating it:

        z    x_B'    x_N'                    RHS
 z      1    0'      c_B'B^{-1}N − c_N'      c_B'b̄
 x_B    0    I       B^{-1}N                 b̄
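As a sketch, the updated tableau above (without the leading z column) can be assembled directly from B^{-1}: the body is [B^{-1}A | b̄] and the objective row holds z_j − c_j = c_B'B^{-1}a_j − c_j together with c_B'b̄. The data below are those of the example on the following slides, with the initial basis {x3, x4}.

import numpy as np

A = np.array([[1.0, -1.0, 1.0, 0.0],
              [2.0,  0.0, 0.0, 1.0]])
b = np.array([10.0, 40.0])
c = np.array([-2.0, -1.0, 0.0, 0.0])
basic = [2, 3]                                  # initial basis {x3, x4}

B_inv = np.linalg.inv(A[:, basic])
b_bar = B_inv @ b
z_row = np.append(c[basic] @ B_inv @ A - c, c[basic] @ b_bar)   # z_j - c_j | c_B' b̄
body = np.hstack([B_inv @ A, b_bar.reshape(-1, 1)])             # B^{-1}A | b̄
print(np.vstack([z_row, body]))   # matches the initial tableau of the example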

The Simplex method examples

The updating of the tableau is done by multiplying the constraint rows by B^{-1} and adding to the objective row c_B' times the new constraint rows.

The update of the tableau can be done (equivalently) by pivoting at the x_Br row and the x_j column, i.e., at y_rj, as follows:
1. Divide the r-th row, corresponding to x_Br, by y_rj.
2. Multiply the new r-th row by y_ij and subtract it from the i-th constraint row, for i = 1, . . . , m, i ≠ r.
3. Multiply the new r-th row by c_B'B^{-1}a_j − c_j and subtract it from the objective row.

It sounds difficult but it is actually quite easy!!! A sketch of the pivot operation is given below.
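A short sketch of this pivot operation (the helper pivot is hypothetical); the example call below reproduces the first pivot of the tableau shown in the unbounded example, with the objective row stored first.

import numpy as np

def pivot(tableau, r, j):
    # Pivot on constraint row r (0-based) and column j; tableau[0] is the objective row.
    T = tableau.astype(float)
    T[1 + r] /= T[1 + r, j]               # 1. divide the pivot row by y_rj
    for i in range(T.shape[0]):
        if i != 1 + r:
            T[i] -= T[i, j] * T[1 + r]    # 2.-3. eliminate column j from the other rows
    return T

# Initial tableau of the example (columns x1..x4, RHS last); pivot on y_11 = 1.
T0 = np.array([[2.0,  1.0, 0.0, 0.0,  0.0],
               [1.0, -1.0, 1.0, 0.0, 10.0],
               [2.0,  0.0, 0.0, 1.0, 40.0]])
print(pivot(T0, r=0, j=0))   # reproduces the tableau after the first iteration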

The Simplex method: An unbounded example

min { −2x1 − x2 | x1 − x2 ≤ 10, 2x1 ≤ 40, x1 ≥ 0, x2 ≥ 0 }

[Figure: the feasible region in the (x1, x2) plane determined by x1 − x2 ≤ 10, 2x1 ≤ 40, x1 ≥ 0, x2 ≥ 0.]

The Simplex method: An unbounded example

By adding slack variables x3 and x4 we get the standard form

min { −2x1 − x2 | x1 − x2 + x3 = 10, 2x1 + x4 = 40, x1, x2, x3, x4 ≥ 0 }

Initialization

B = [1 0; 0 1],  N = [1 −1; 2 0],  b = [10; 40],  c_B' = [0  0],  c_N' = [−2  −1],
x' = [0  0  10  40].

      x1   x2   x3   x4   RHS
z      2    1    0    0     0
x3     1   -1    1    0    10
x4     2    0    0    1    40

The Simplex method: An unbounded example (Iter 1)

Optimal solution?
c_B'B^{-1}N − c_N' = [0  0] [1 0; 0 1] [1 −1; 2 0] − [−2  −1] = [2  1], j = 1, x1 enters.

Objective function value unbounded?
y_1 = B^{-1}a_1 = [1 0; 0 1] [1; 2] = [1; 2] ≰ 0.

Ratio test
b̄ = B^{-1}b = [1 0; 0 1] [10; 40] = [10; 40] → min { 10/1, 40/2 } = 10, r = 3, x3 leaves.

Updating x and the other matrices
x = [0; 0; 10; 40] + 10 [1; 0; −1; −2] = [10; 0; 0; 20].
The Simplex method: An unbounded example (Iter 1)
Tableau form

      x1   x2   x3   x4   RHS
z      2    1    0    0     0
x3     1   -1    1    0    10
x4     2    0    0    1    40

      x1   x2   x3   x4   RHS
z      0    3   -2    0   -20
x1     1   -1    1    0    10
x4     0    2   -2    1    20

The Simplex method: An unbounded example (Iter 2)
Optimal solution?
c_B'B^{-1}N − c_N' = [−2  0] [1 0; −2 1] [−1 1; 0 0] − [−1  0] = [3  −2], j = 2, x2 enters.

Objective function value unbounded?
y_2 = B^{-1}a_2 = [1 0; −2 1] [−1; 0] = [−1; 2] ≰ 0.

Ratio test
b̄ = B^{-1}b = [1 0; −2 1] [10; 40] = [10; 20] → min { −, 20/2 } = 10, r = 4, x4 leaves.

Updating x and the other matrices
x = [10; 0; 0; 20] + 10 [1; 1; 0; −2] = [20; 10; 0; 0].
The Simplex method: An unbounded example (Iter 2)
Tableau form

      x1   x2   x3   x4   RHS
z      0    3   -2    0   -20
x1     1   -1    1    0    10
x4     0    2   -2    1    20

      x1   x2   x3   x4    RHS
z      0    0    1  -3/2   -50
x1     1    0    0   1/2    20
x2     0    1   -1   1/2    10

The Simplex method: An unbounded example (Iter 3)

Optimal solution?
c_B'B^{-1}N − c_N' = [−2  −1] [0 1/2; −1 1/2] [1 0; 0 1] − [0  0] = [1  −3/2], j = 3, x3 enters.

Objective function value unbounded?
y_3 = B^{-1}a_3 = [0 1/2; −1 1/2] [1; 0] = [0; −1] ≤ 0,

then the objective function value is unbounded!!!

The Simplex method: An unbounded example (Iter 3)
Tableau form
      x1   x2   x3   x4    RHS
z      0    0    1  -3/2   -50
x1     1    0    0   1/2    20
x2     0    1   -1   1/2    10

then the objective function value is unbounded!!!

Observation: In the initial tableau (repeated below), x2 could also have been chosen to enter the basis. However, all the coefficients below it are negative or zero, which means that x2 can be increased indefinitely while every constraint still holds.

      x1   x2   x3   x4   RHS
z      2    1    0    0     0
x3     1   -1    1    0    10
x4     2    0    0    1    40
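For completeness, feeding this instance to the simplex sketch given after the pseudocode slides (the hypothetical simplex function defined there, initial basis {x3, x4}) reproduces the same conclusion:

import numpy as np

A = np.array([[1.0, -1.0, 1.0, 0.0],
              [2.0,  0.0, 0.0, 1.0]])
b = np.array([10.0, 40.0])
c = np.array([-2.0, -1.0, 0.0, 0.0])

print(simplex(A, b, c, basic=[2, 3]))   # ('unbounded', None)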
Other issues in the Simplex method

• Initial extreme point (artificial variables):
  – The Big M method.
  – The Two-phase method.

• Infinitely many optimal solutions.

• Degenerate solutions (redundant constraints).

• Cycling (finite convergence).

