
Linear Programming & the Simplex Algorithm

Part III – The Simplex Algorithm


Last Revision Oct 14, 2004
P&S Chapter 2

1
Recall c̄j = cj − Σ_{i=1..m} xi,j cB(i) = cj − zj.

cj is the cost of one unit of xj while zj is the cost of the
equivalent of one unit of xj expressed using elements
of basis B.

c̄j is therefore the cost of bringing in one unit of xj to
replace the equivalent amount of basic variables. This
is the relative cost of column j vis-à-vis basis B.

Since the c̄j are so useful we would like to keep them in
the tableau. This is usually done by adding a row 0 to
the tableau that stores the c̄j .

Before seeing how to maintain this, note that the relative
cost associated with a basic column j is

c̄j = cj − Σ_{i=1..m} xi,j cB(i) = 0

2
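To make this formula concrete, here is a small numerical sketch (Python with NumPy; the variable names are mine). It uses the LP that appears later in these notes with basis B = {A3, A4, A5}, and exploits the fact that the tableau entries xi,j are the columns of B⁻¹A:

```python
import numpy as np

# LP data from the worked example later in these notes:
# minimize z = x1 + ... + x5 subject to Ax = b, x >= 0.
c = np.array([1.0, 1.0, 1.0, 1.0, 1.0])
A = np.array([[3.0, 2.0, 1.0, 0.0, 0.0],
              [5.0, 1.0, 1.0, 1.0, 0.0],
              [2.0, 5.0, 1.0, 0.0, 1.0]])
basis = [2, 3, 4]                 # columns A3, A4, A5, 0-indexed

B = A[:, basis]                   # basis matrix
c_B = c[basis]                    # costs of the basic variables

# The tableau entries x_{i,j} are the columns of B^{-1} A, so
# z_j = sum_i x_{i,j} c_{B(i)} and the relative cost is cbar_j = c_j - z_j.
X = np.linalg.solve(B, A)         # B^{-1} A, one column per variable
cbar = c - c_B @ X

print(cbar)                       # the basic columns give exactly 0
```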
Now write the original 0th row of the tableau as
0 = −z + c1x1 + · · · + cnxn.
Suppose j is a basis column with B(i) = j.
To make the jth entry in row 0 be 0 we need to multiply
row i by cB(i) = cj and subtract it from row 0.

Do this for every basic j. Note that for all nonbasic j
the jth entry of the 0th row will then be exactly the
relative cost

c̄j = cj − Σ_{i=1..m} xi,j cB(i)

Note too that the 0th entry of row 0 will then have value

−z0 = −Σ_{i=1..m} xi,0 cB(i)

where z0 is exactly the cost of the current basic solution.

The 0th row is then

−z0 = −z + Σ_{j=1..n} c̄j xj

(where c̄j = 0 whenever j is basic).
3
The 0th row is then

−z0 = −z + Σ_{j=1..n} c̄j xj

(where c̄j = 0 whenever j is basic).

Consider the LP as having n + 1 variables instead of
n, with the (n + 1)st variable corresponding to −z.

The same pivot rules should work for the 0th row.
That is, when column j enters the basis using pivot
xl,j we need to zero c̄j using the same rules that
we used for the other rows. Recall that, after pivoting,
x′l,j = 1. This means that, after all of the other piv-
oting operations, we need to multiply row l by c̄j and
subtract it from row 0.

Note that this gives us what we want since

• c̄′j = 0
• If q ≠ j is in the basis, c̄′q = c̄q = 0
• −z0′ = −z0 − x′l,0 c̄j (new cost)
• If q is not in the new basis then c̄′q = cq − Σ_{i=1..m} x′i,q cB′(i)
(prove this)
• −z′ = −z (so we don’t need an extra column for −z)

4
We start with the 0th row of the tableau reflecting

−z0 = −z + Σ_{j=1..n} c̄j xj

and this equation is maintained with every pivot. Recall that we
can only pivot on column j if c̄j < 0, and that if ∀j, c̄j ≥ 0, then the
solution is optimal.

We can read the c̄j information off the top row of the tableau. So,
tableaus permit us to choose feasible pivot columns, and allow us
to implement pivots easily. Assuming nondegeneracy of bases,
this gives us

procedure simplex
begin
opt:=‘no’, unbounded:=‘no’;
(comment: when either becomes ‘yes’, algorithm terminates)
while opt = ‘no’ and unbounded = ‘no’ do
if c̄j ≥ 0 for all j then opt:=‘yes’
else begin
choose any j such that c̄j < 0;
if xij ≤ 0 for all i then unbounded:= ‘yes’
else
find θ0 = min{xi0/xij : xij > 0} = xk0/xkj
and pivot on xkj
end
end
5
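Under the stated nondegeneracy assumption, the procedure above can be sketched in code. The following is a minimal NumPy rendering (names and tolerances are mine, not from the notes): T holds the full tableau, row 0 stores −z0 followed by the c̄j, and the column choice here happens to be the most-negative rule.

```python
import numpy as np

def simplex(T, basis):
    """Tableau simplex sketch.  T[0] = [-z0, cbar_1..cbar_n]; T[i] =
    [x_{i,0}, x_{i,1}..x_{i,n}].  Assumes a starting BFS in canonical
    form and nondegenerate bases (otherwise this may cycle)."""
    m = T.shape[0] - 1
    while True:
        cbar = T[0, 1:]
        if np.all(cbar >= -1e-9):
            return 'optimal', T, basis
        j = int(np.argmin(cbar)) + 1          # any column with cbar_j < 0
        col = T[1:, j]
        if np.all(col <= 1e-9):
            return 'unbounded', T, basis
        # ratio test: theta0 = min over x_{i,j} > 0 of x_{i,0} / x_{i,j}
        ratios = np.where(col > 1e-9,
                          T[1:, 0] / np.where(col > 1e-9, col, 1.0), np.inf)
        k = int(np.argmin(ratios)) + 1
        T[k] /= T[k, j]                       # make the pivot element 1
        for r in range(m + 1):
            if r != k:
                T[r] -= T[r, j] * T[k]        # zero column j, incl. row 0
        basis[k - 1] = j

# The example LP from the next page, in canonical form for B = {A3, A4, A5}:
T = np.array([
    [-6.0, -3.0, -3.0, 0.0, 0.0, 0.0],       # row 0
    [ 1.0,  3.0,  2.0, 1.0, 0.0, 0.0],
    [ 2.0,  2.0, -1.0, 0.0, 1.0, 0.0],
    [ 3.0, -1.0,  3.0, 0.0, 0.0, 1.0],
])
status, T, basis = simplex(T, [3, 4, 5])
print(status, -T[0, 0])                       # optimal 4.5
```

This run pivots on column 1 first (the most-negative rule breaks the tie differently from the notes, which choose column 2), but it reaches the same optimal cost and the same final basis.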
Suppose the original LP has cost function
z = x1 + x2 + x3 + x4 + x5 and in tableau form is
x1 x2 x3 x4 x5
0 1 1 1 1 1
1 3 2 1 0 0
3 5 1 1 1 0
4 2 5 1 0 1

Now make basis B = {A3, A4, A5}. When doing this
don’t forget to zero columns 3, 4 and 5 in row 0.
x1 x2 x3 x4 x5
−z= −6 − 3 −3 0 0 0
x3 = 1 3 2 1 0 0
x4 = 2 2 −1 0 1 0
x5 = 3 −1 3 0 0 1

The non-basis columns have c̄1 = c̄2 = −3 so they
are both eligible to enter the basis. We choose column
2. The pivot will then be on x1,2 with θ0 = 1/2. After
pivoting, the basis will be B′ = {A2, A4, A5}.
x1 x2 x3 x4 x5
−z= − 9/2 3/2 0 3/2 0 0
x2 = 1/2 3/2 1 1/2 0 0
x4 = 5/2 7/2 0 1/2 1 0
x5 = 3/2 − 11/2 0 −3/2 0 1
6
x1 x2 x3 x4 x5
−z= − 9/2 3/2 0 3/2 0 0
x2 = 1/2 3/2 1 1/2 0 0
x4 = 5/2 7/2 0 1/2 1 0
x5 = 3/2 − 11/2 0 −3/2 0 1

We now have basis B′ = {A2, A4, A5}. Note that the
nonbasis columns both have c̄j ≥ 0, so the BFS

(0, 1/2, 0, 5/2, 3/2)

with cost z0 = 1/2 + 5/2 + 3/2 = 9/2 is optimal.

7
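As a quick sanity check on this optimum, one can verify feasibility and the cost directly (a sketch; the arrays simply restate the original data of this example):

```python
import numpy as np

# Original data: cost z = x1 + ... + x5, constraints from the first tableau.
A = np.array([[3.0, 2.0, 1.0, 0.0, 0.0],
              [5.0, 1.0, 1.0, 1.0, 0.0],
              [2.0, 5.0, 1.0, 0.0, 1.0]])
b = np.array([1.0, 3.0, 4.0])
x = np.array([0.0, 0.5, 0.0, 2.5, 1.5])   # the BFS found above

assert np.allclose(A @ x, b)              # feasible: Ax = b
assert np.all(x >= 0)
print(np.ones(5) @ x)                     # cost z0 = 4.5
```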
Column Selection

Q. The simplex algorithm allows choosing any non-
basis column j with c̄j < 0 to move into the basis. In
the case that more than one column has c̄j < 0, which
column is best to choose?

A. No one knows. There are many possibilities.
Some common ones are

1. Nonbasic gradient method:
Choose j with the most negative c̄j .
Easy to implement.
Might not give the largest decrease in cost.

2. Greatest increment method:
Choose the j that results in the largest decrease in cost.
This is the j that minimizes θ0 c̄j . Requires more work.

3. All-variable gradient method:
Choose the j that minimizes

c̄j / √(1 + Σ_{i=1..m} xi,j²)

8
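The three selection rules can be sketched as follows (Python with NumPy; function names are mine, and the tableau layout matches the earlier simplex sketch with row 0 holding −z0 and the c̄j):

```python
import numpy as np

def nonbasic_gradient(T):
    """Rule 1: pick the column with the most negative cbar_j."""
    cbar = T[0, 1:]
    j = int(np.argmin(cbar)) + 1
    return j if cbar[j - 1] < 0 else None

def greatest_increment(T):
    """Rule 2: pick the column whose pivot gives the largest cost
    decrease, i.e. minimizes theta0 * cbar_j over eligible columns."""
    best_j, best_drop = None, 0.0
    for j in range(1, T.shape[1]):
        if T[0, j] >= 0:
            continue
        col = T[1:, j]
        if np.all(col <= 0):
            continue                          # unbounded direction; skip
        theta0 = np.min(T[1:, 0][col > 0] / col[col > 0])
        if theta0 * T[0, j] < best_drop:
            best_j, best_drop = j, theta0 * T[0, j]
    return best_j

def all_variable_gradient(T):
    """Rule 3: minimize cbar_j / sqrt(1 + sum_i x_{i,j}^2)."""
    cbar = T[0, 1:]
    norms = np.sqrt(1.0 + np.sum(T[1:, 1:] ** 2, axis=0))
    j = int(np.argmin(cbar / norms)) + 1
    return j if cbar[j - 1] < 0 else None

# On the earlier example tableau the rules disagree: rule 1 takes the
# first of the tied columns, rule 2 prefers column 2 (drop 3/2 vs. 1).
T = np.array([
    [-6.0, -3.0, -3.0, 0.0, 0.0, 0.0],
    [ 1.0,  3.0,  2.0, 1.0, 0.0, 0.0],
    [ 2.0,  2.0, -1.0, 0.0, 1.0, 0.0],
    [ 3.0, -1.0,  3.0, 0.0, 0.0, 1.0],
])
print(nonbasic_gradient(T), greatest_increment(T))   # 1 2
```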
Q. For a given column j, how do we deal with ties in k?
That is, suppose more than one row k has xk,j > 0
and

xk0/xkj = θ0 = min{xi0/xij : xij > 0}

A. This can be a major issue. Note that if we pivot
on xk,j with xk,0 = 0 then θ0 = 0 and we are not
changing the cost at all. We are just moving from one
basis to another basis of a degenerate BFS.

If we are not careful, then after a sequence of such
non-improving “pivots” we can end up back at the orig-
inal basis. This is called cycling and needs to be
avoided.

9
A cycling example

Consider
x1 x2 x3 x4 x5 x6 x7
3 − 3/4 +20 −1/2 +6 0 0 0
0 1/4 −8 −1 9 1 0 0
0 1/2 −12 −1/2 3 0 1 0
1 0 0 1 0 0 0 1

Suppose we use the rule that

• Always select the nonbasis column with the most
negative c̄j to enter the basis.

• In case of a tie, always select the basic variable
with the smallest subscript to leave the basis.

10
x1 x2 x3 x4 x5 x6 x7
3 − 3/4 +20 −1/2 +6 0 0 0
0 1/4 −8 −1 9 1 0 0
0 1/2 −12 −1/2 3 0 1 0
1 0 0 1 0 0 0 1

x1 x2 x3 x4 x5 x6 x7
3 0 −4 −7/2 33 3 0 0
0 1 −32 −4 36 4 0 0
0 0 4 3/2 −15 −2 1 0
1 0 0 1 0 0 0 1

x1 x2 x3 x4 x5 x6 x7
3 0 0 −2 18 1 1 0
0 1 0 8 −84 −12 8 0
0 0 1 3/8 −15/4 −1/2 1/4 0
1 0 0 1 0 0 0 1

x1 x2 x3 x4 x5 x6 x7
3 1/4 0 0 −3 −2 3 0
0 1/8 0 1 −21/2 −3/2 1 0
0 − 3/64 1 0 3/16 1/16 −1/8 0
1 − 1/8 0 0 21/2 3/2 −1 1

11
x1 x2 x3 x4 x5 x6 x7
3 1/4 0 0 −3 −2 3 0
0 1/8 0 1 −21/2 −3/2 1 0
0 − 3/64 1 0 3/16 1/16 −1/8 0
1 − 1/8 0 0 21/2 3/2 −1 1

x1 x2 x3 x4 x5 x6 x7
3 − 1/2 16 0 0 −1 1 0
0 − 5/2 56 1 0 2 −6 0
0 − 1/4 16/3 0 1 1/3 −2/3 0
1 5/2 −56 0 0 −2 6 1

x1 x2 x3 x4 x5 x6 x7
3 − 7/4 44 1/2 0 0 −2 0
0 − 5/4 28 1/2 0 1 −3 0
0 − 1/6 −4 −1/6 1 0 1/3 0
1 0 0 1 0 0 0 1

x1 x2 x3 x4 x5 x6 x7
3 − 3/4 +20 −1/2 +6 0 0 0
0 1/4 −8 −1 9 1 0 0
0 1/2 −12 −1/2 3 0 1 0
1 0 0 1 0 0 0 1

12
We will now see Bland’s anticycling rule
that guarantees that cycles won’t occur.

Theorem:
Suppose in the simplex algorithm we choose the col-
umn to enter the basis by

j = min{j : cj − zj < 0}

(choose the lowest numbered favorable column), and
the row by

B(i) = min{B(i) : xi,j > 0 and xi,0/xi,j ≤ xk,0/xk,j for every k with xk,j > 0}

(in case of a tie, choose the lowest numbered column
to leave the basis). Then the algorithm terminates after
a finite number of pivots.

Note: See proof in P&S textbook.

13
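Bland's two choices can be sketched directly on the tableau (Python with NumPy; names and tolerances are mine). Run on the first tableau of the cycling example, the rule picks the lowest-numbered favorable column x1 and, among the tied zero ratios, the row whose basic variable x5 has the smallest index:

```python
import numpy as np

def blands_rule(T, basis):
    """Entering/leaving choices under Bland's anticycling rule.

    Entering: the lowest-numbered column j with cbar_j < 0.
    Leaving: among rows tied in the ratio test, the one whose basic
    variable has the smallest index.  Returns (j, k) or None at optimum.
    """
    cbar = T[0, 1:]
    neg = np.where(cbar < 0)[0]
    if len(neg) == 0:
        return None                        # optimal
    j = int(neg[0]) + 1                    # lowest numbered favorable column
    col = T[1:, j]
    rows = np.where(col > 0)[0]
    if len(rows) == 0:
        raise ValueError('unbounded')
    ratios = T[1:, 0][rows] / col[rows]
    theta0 = ratios.min()
    tied = rows[np.isclose(ratios, theta0)]
    k = int(min(tied, key=lambda i: basis[i])) + 1   # smallest B(i) in ties
    return j, k

# First tableau of the cycling example, basis {x5, x6, x7}:
T = np.array([
    [3.0, -0.75,  20.0, -0.5, 6.0, 0.0, 0.0, 0.0],
    [0.0,  0.25,  -8.0, -1.0, 9.0, 1.0, 0.0, 0.0],
    [0.0,  0.50, -12.0, -0.5, 3.0, 0.0, 1.0, 0.0],
    [1.0,  0.00,   0.0,  1.0, 0.0, 0.0, 0.0, 1.0],
])
print(blands_rule(T, [5, 6, 7]))           # (1, 1): x1 enters, x5 leaves
```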
The “Real” Simplex Algorithm

Q. Until now we have always assumed we were given a BFS.
In reality this isn’t always provided, so how do we find
one?

A. If we begin with Ax ≤ b
then the slack variables form a BFS.
Otherwise, we can use the artificial variable or two-
phase method. This works by adding m new artificial
variables xai, i = 1, . . . , m to the LP.

The starting BFS will be xai = bi.

Note: We might have to multiply some of the original equations by
−1 to ensure b ≥ 0.

     xa1 · · · xam   x1 · · · xn
 b  [      I       |      A      ]     xj ≥ 0, j = 1, . . . , n
                                       xai ≥ 0, i = 1, . . . , m

14
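Building this augmented tableau is mechanical; here is a sketch (Python with NumPy; names are mine, and the column layout [b, xa1..xam, x1..xn] follows the picture above). On the Phase I example that appears later in these notes it reproduces the ξ row of that slide:

```python
import numpy as np

def phase_one_tableau(A, b):
    """Build the Phase I tableau with artificial variables xa_i.

    Assumes b >= 0 (multiply rows by -1 first if needed).  The
    artificials form the starting basis with xa_i = b_i, and row 0
    holds the reduced costs of xi = sum_i xa_i after zeroing the
    basic columns (subtract every constraint row from row 0).
    """
    m, n = A.shape
    T = np.zeros((m + 1, 1 + m + n))
    T[1:, 0] = b
    T[1:, 1:m + 1] = np.eye(m)            # artificial columns
    T[1:, m + 1:] = A
    T[0, 1:m + 1] = 1.0                   # cost xi: 1 on each artificial
    T[0] -= T[1:].sum(axis=0)             # zero the basic columns of row 0
    basis = list(range(1, m + 1))         # xa_1 .. xa_m are basic
    return T, basis

A = np.array([[3.0, 2.0, 1.0, 0.0, 0.0],
              [5.0, 1.0, 1.0, 1.0, 0.0],
              [2.0, 5.0, 1.0, 0.0, 1.0]])
b = np.array([1.0, 3.0, 4.0])
T, basis = phase_one_tableau(A, b)
print(T[0])    # [-8, 0, 0, 0, -10, -8, -3, -1, -1], matching the -xi row
```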
     xa1 · · · xam   x1 · · · xn
 b  [      I       |      A      ]     xj ≥ 0, j = 1, . . . , n
                                       xai ≥ 0, i = 1, . . . , m

Phase I. Minimize cost function ξ = Σ_{i=1..m} xai

There are three possible outcomes:

1. ξ = 0 and all xai have been driven out of the ba-
sis. The current basis is then a BFS of the original prob-
lem and we can continue (best case!).
When we continue it is with Phase II, which uses the
original cost function.

2. We reach optimality with ξ > 0.
Then there is no feasible solution to the original prob-
lem and we stop.

3. ξ = 0 but some artificial variables remain at zero
level in the basis.

15
Case 3 (cont). ξ = 0 but some artificial variables
remain at zero level.

In this case suppose the ith basis column at the end of Phase
I belongs to an artificial variable and xai = 0.
Then “pivot” on any nonzero (not necessarily positive)
xi,j where j corresponds to a non-artificial variable.
This drives the artificial variable out of the basis. Note
that since xai = 0 we have θ0 = 0, so no infeasibility
occurs and ξ = 0 does not change.

Also note that this might not be what we usually call a
pivot, since it is possible that c̄j ≥ 0 or xi,j < 0. Since
θ0 = 0, this does not matter.

Keep doing this until we get a basis composed of
only non-artificial variables. The only way this can
fail is if some row has zero values in the columns corre-
sponding to all non-artificial variables. This implies
that we have created a zero row in the original matrix using
elementary row operations, i.e., that the original matrix
was not of rank m.

In this case that row is redundant and contributes
nothing to the original LP. So, delete it and continue with
Phase I. In this way we must at some point end up at
case 1.
16
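This Case 3 step can be sketched in code (Python with NumPy; the function name, tolerances, and the column layout with artificials in columns 1..m are assumptions, not from the notes):

```python
import numpy as np

def drive_out(T, basis, m):
    """Case 3 sketch: for each artificial variable still basic at zero
    level, pivot on any nonzero entry in a real (non-artificial) column
    of its row; a row with no such entry is redundant and is dropped.
    Columns 1..m are artificial, m+1.. are real."""
    keep = []
    for i, col in enumerate(basis):
        row = i + 1
        if col > m:                          # already a real variable
            keep.append(row)
            continue
        real = T[row, m + 1:]
        nz = np.flatnonzero(np.abs(real) > 1e-9)
        if len(nz) == 0:                     # zero row: A was not rank m
            continue                         # drop the redundant row
        j = int(nz[0]) + m + 1
        T[row] = T[row] / T[row, j]          # theta0 = 0, xi unchanged
        for r in range(T.shape[0]):
            if r != row:
                T[r] = T[r] - T[r, j] * T[row]
        basis[i] = j
        keep.append(row)
    new_basis = [basis[r - 1] for r in keep]
    return T[[0] + keep], new_basis

# Toy tableau: m = 2 artificials, columns [b, xa1, xa2, x1, x2];
# basis = [1, 3] means xa1 (at zero level) and x1 are basic.
T = np.array([
    [0.0, 0.0,  0.0, 0.0, 0.0],    # -xi row (xi = 0)
    [0.0, 1.0,  0.0, 0.0, 2.0],    # xa1 = 0; pivot possible on x2
    [1.0, 0.0, -1.0, 1.0, 0.0],    # x1 = 1
])
T2, basis2 = drive_out(T, [1, 3], m=2)
print(basis2)                      # [4, 3]: x2 has replaced xa1
```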
Case 3 (cont). ξ = 0 but some artificial variables
remain at zero level.

We just saw that in this case we can always drive the arti-
ficial variables out of the basis and replace them with
real variables, unless at some point we have created a
zero row in the original matrix using elementary row op-
erations, i.e., the original matrix was not of rank m.

In that case the row is redundant and contributes
nothing to the original LP. So, delete it and continue with
Phase I. In this way we must at some point end up at
case 1.

Note that if A was not of rank m then it is impossible
to drive all of the artificial variables out of the basis
(why?), so, if there is a feasible solution, we must end
up at case 3.

Therefore, if there exists a feasible solution, the two-
phase algorithm will always correctly find a BFS from
which to start Phase II, even if the original matrix was
not of full rank.

We have therefore just seen how to get rid of the orig-
inal assumption that A is of full rank.
17
procedure two-phase
begin
infeasible := ‘no’, redundant := ‘no’;
(comment: Phase I may set these to ‘yes’)
Phase I: introduce an artificial basis xa1, . . . , xam;
call simplex with cost ξ = Σ_{i=1..m} xai;
if ξopt > 0 in Phase I then infeasible := ‘yes’
else begin
if an artificial variable is in the basis and
cannot be driven out then redundant := ‘yes’,
and omit the corresponding row;
Phase II: call simplex with original cost
end
end

We now have the final version of the simplex algorithm.

Note that all of our original assumptions are no longer
necessary.

• If A is not of rank m we find out at the end of
Phase I and throw out enough rows so that it be-
comes of full rank.

• If the original problem is infeasible, we find out at
the end of Phase I.

• If the original problem is unbounded, we find out
in Phase II.

18
In order to implement the two-phase method, we can
have two row 0s; the first corresponding to the real
cost and the second corresponding to the artificial vari-
able cost. While pivoting during Phase I, we will main-
tain both of these rows.

xa1 xa2 xa3 x1 x2 x3 x4 x5


−z= 0 0 0 0 1 1 1 1 1 row 0′
−ξ= 0 1 1 1 0 0 0 0 0 row 0
1 1 0 0 3 2 1 0 0
3 0 1 0 5 1 1 1 0
4 0 0 1 2 5 1 0 1

Starting with the above, we subtract rows 1, 2, 3 from
the ξ cost row to set the appropriate c̄j = 0.

xa1 xa2 xa3 x1 x2 x3 x4 x5


−z= 0 0 0 0 1 1 1 1 1
−ξ= −8 0 0 0 − 10 −8 −3 −1 −1
xa1 = 1 1 0 0 3 2 1 0 0
xa2 = 3 0 1 0 5 1 1 1 0
xa3 = 4 0 0 1 2 5 1 0 1
19
xa1 xa2 xa3 x1 x2 x3 x4 x5
−z= 0 0 0 0 1 1 1 1 1
−ξ= −8 0 0 0 − 10 −8 −3 −1 −1
xa1 = 1 1 0 0 3 2 1 0 0
xa2 = 3 0 1 0 5 1 1 1 0
xa3 = 4 0 0 1 2 5 1 0 1
xa1 xa2 xa3 x1 x2 x3 x4 x5
−z= −1/3 −1/3 0 0 0 1/3 2/3 1 1
−ξ= −14/3 10/3 0 0 0 −4/3 1/3 −1 −1
x1 = 1/3 1/3 0 0 1 2/3 1/3 0 0
xa2 = 4/3 −5/3 1 0 0 −7/3 −2/3 1 0
xa3 = 10/3 −2/3 0 1 0 11/3 1/3 0 1

xa1 xa2 xa3 x1 x2 x3 x4 x5
−z= −1/2 −1/2 0 0 −1/2 0 1/2 1 1
−ξ= −4 4 0 0 2 0 1 −1 −1
x2 = 1/2 1/2 0 0 3/2 1 1/2 0 0
xa2 = 5/2 −1/2 1 0 7/2 0 1/2 1 0
xa3 = 3/2 −5/2 0 1 −11/2 0 −3/2 0 1

xa1 xa2 xa3 x1 x2 x3 x4 x5
−z= −3 0 −1 0 −4 0 0 0 1
−ξ= −3/2 7/2 1 0 11/2 0 3/2 0 −1
x2 = 1/2 1/2 0 0 3/2 1 1/2 0 0
x4 = 5/2 −1/2 1 0 7/2 0 1/2 1 0
xa3 = 3/2 −5/2 0 1 −11/2 0 −3/2 0 1

xa1 xa2 xa3 x1 x2 x3 x4 x5
−z= −9/2 5/2 −1 −1 3/2 0 3/2 0 0
−ξ= 0 1 1 1 0 0 0 0 0
x2 = 1/2 1/2 0 0 3/2 1 1/2 0 0
x4 = 5/2 −1/2 1 0 7/2 0 1/2 1 0
x5 = 3/2 −5/2 0 1 −11/2 0 −3/2 0 1

20
The previous example was only implementing Phase I.

At the end of Phase I, ξ = 0, and all of the artificial
variables have been driven out of the basis.

Note, though, that as we start Phase II, we have c̄j ≥ 0
for all of the non-artificial variables, so the starting
BFS of Phase II is already optimal.

21
Consider the following sequence of pivots in a LP

−34 −1 −14 −6 0 0 0 0
4 1 1 1 1 0 0 0
2 1 0 0 0 1 0 0
3 0 0 1 0 0 1 0
6 0 3 1 0 0 0 1

−32 0 −14 −6 0 1 0 0
2 0 1 1 1 −1 0 0
2 1 0 0 0 1 0 0
3 0 0 1 0 0 1 0
6 0 3 1 0 0 0 1

−20 0 −8 0 6 −5 0 0
2 0 1 1 1 −1 0 0
2 1 0 0 0 1 0 0
1 0 −1 0 −1 1 1 0
4 0 2 0 −1 1 0 1

−4 0 0 8 14 −13 0 0
2 0 1 1 1 −1 0 0
2 1 0 0 0 1 0 0
3 0 0 1 0 0 1 0
0 0 0 −2 −3 3 0 1

−4 0 0 −2/3 1 0 0 13/3
2 0 1 1/3 0 0 0 1/3
2 1 0 2/3 1 0 0 −1/3
3 0 0 1 0 0 1 0
0 0 0 −2/3 −1 1 0 1/3

−2 1 0 0 2 0 0 4
1 −1/2 1 0 −1/2 0 0 1/2
3 3/2 0 1 3/2 0 0 −1/2
0 −3/2 0 0 −3/2 0 1 1/2
2 1 0 0 0 1 0 0

22
These pivots actually correspond to moving, in the given
order, from vertex to vertex in the following polytope:

[Figure: the feasible polytope, drawn with axes x1, x2 and x3; the
visited vertices are labeled in order, with “4 = 5” marking the
degenerate pivot that stays at the same vertex.]

23
In the previous example we saw that simplex
moved along edges from vertex to vertex. We now
see that this always happens.

Two vertices x̂, ŷ of a polytope are called adjacent if the
line segment [x̂, ŷ] is an edge of the polytope.

Two BFSs of LP Ax = b, x ≥ 0 are called adjacent if
there exist bases Bx, By such that

By = (Bx − {Aj }) ∪ {Ak }

and

x = Bx−1 b, and y = By−1 b.

24
Theorem:
Let P be a polytope,
F = {x : Ax = b, x ≥ 0} the corresponding feasible
set, and
x̂ = (x1, . . . , xn−m), ŷ = (y1, . . . , yn−m)
be distinct vertices of P .

Then the following are equivalent.

(a) The segment [x̂, ŷ] is an edge of P .

(b) For every ẑ ∈ [x̂, ŷ],
if ẑ = λẑ′ + (1 − λ)ẑ′′ with
0 < λ < 1 and ẑ′, ẑ′′ ∈ P ,
then ẑ′, ẑ′′ ∈ [x̂, ŷ].

(c) The corresponding vectors x, y of F are adjacent
BFSs.

Proof: In the P&S textbook.


25
