
S Delenda

Chapter 3
Primal method for solving a linear program

The Simplex algorithm solves linear programming problems by focusing on basic feasible solutions. The basic idea is to start from some vertex v and look at the adjacent vertices: if moving to one of them improves the cost, we do so. Thus, we start with a bfs corresponding to a basis B and, at each iteration, try to improve the cost of the solution by removing one variable from the basis and replacing it by another.

1 Problem statement

The primal problem of semidefinite programming in standard form is written

(LPP)  min ⟨C, X⟩
       AX = b
       X ∈ Sn+

where b ∈ Rm and A is a linear operator from Sn to Rm such that

AX = (⟨A1, X⟩, ..., ⟨Am, X⟩)T,

with ⟨A, B⟩ = tr(ATB). C and the Ai, i = 1, ..., m, are matrices in Mn(R). Without loss of generality we can assume that the Ai are symmetric.
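The objective ⟨C, X⟩ is the trace inner product tr(CᵀX) = Σij Cij Xij. A minimal sketch (the two matrices below are made-up illustrations, not data from the text):

```python
def trace_inner(C, X):
    """Trace inner product <C, X> = tr(C^T X) = sum over i, j of C_ij * X_ij."""
    n = len(C)
    return sum(C[i][j] * X[i][j] for i in range(n) for j in range(n))

# Hypothetical symmetric matrices (illustration only).
C = [[1, 2], [2, 3]]
X = [[4, 0], [0, 5]]
val = trace_inner(C, X)
print(val)  # 1*4 + 2*0 + 2*0 + 3*5 = 19
```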

2 Characterization of extreme points

The characterization of extreme points is the fundamental result that drives the Simplex method for solving linear programs.
Theorem (Characterization of extreme points).
Let P = {x ∈ Rn : Ax ≤ b} and x ∈ P. Then the following are equivalent:
(i) x is an extreme point of P.
(ii) If A′x ≤ b′ is the subsystem of Ax ≤ b satisfied with equality by x, then rank(A′) = n.
(iii) F = {x} is a face of P (with dim(F) = 0).
(iv) There exists c ∈ Rn such that x is the unique optimal solution of the LP max{cTx : x ∈ P}.

1
S Delenda
Chapter 3
Primal method for solving a linear program

Figure 1: Infeasible solution

3 Optimality at an extreme point


Definition The set

{x ∈ Rn : Ax ≤ b, x ≥ 0}

is called the feasible region of (LP). A point x in the feasible region is called a feasible solution. An LP is said to be feasible if the feasible region is not empty, and infeasible otherwise.

Example For instance, the feasible region of the constraints x1 + x2 ≤ 1, x1, x2 ≥ 0 is the triangle whose vertices are (0,0), (0,1) and (1,0). Now consider the linear program:

max x1 + x2
s.t. x1 + x2 ≤ −1
x1, x2 ≥ 0

The constraints require x1 and x2 to be non-negative, but also on or below the line x1 + x2 = −1 (figure 1); this LP is thus infeasible.

Note: If the objective function is parallel to an edge, then there are other optima along that edge, but there is always an optimum at a corner. If there is an optimal solution, then there is one at a corner (a vertex, or extreme point) of the polyhedron X. A polyhedron has a finite number of extreme points.
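Since an optimum, when one exists, is attained at an extreme point, a bounded LP can in principle be solved by evaluating the objective at every vertex. A sketch for the triangle with vertices (0,0), (0,1), (1,0) and objective x1 + x2 (note the objective is parallel to the edge joining the two maximizing vertices, so every point of that edge is also optimal):

```python
vertices = [(0, 0), (0, 1), (1, 0)]   # extreme points of the triangle

def z(v):
    return v[0] + v[1]                # objective x1 + x2

best = max(z(v) for v in vertices)
optimal_vertices = [v for v in vertices if z(v) == best]
print(best, optimal_vertices)  # 1 [(0, 1), (1, 0)] : two optimal vertices, hence a whole optimal edge
```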


Figure 2: optimal extreme point

The objective function of the problem P reaches its maximum at an extreme point of the set X of feasible solutions.
Theorem: The optimum of z on X is reached at at least one extreme point. If it is reached at several extreme points, it is also reached at any convex combination of these extreme points.

4 Optimality criteria: formula for increasing the objective function, optimality criterion

4.1 formula for increasing the objective function
Theorem The set of extreme points of the convex polyhedron X = {x ∈ Rn : Ax = b, x ≥ 0} is equal to the set of basic feasible solutions.
Definition A feasible solution x is said to be basic if (n − m) of its components are zero and the others, xj1, xj2, ..., xjm, correspond to m columns aj1, aj2, ..., ajm of the matrix A which are linearly independent.
1. JB = {j1, ..., jm} is called the basic index set.
2. JN = J − JB : set of non-basic indices.
In other words, a solution x = x(J) is called a basic solution if xN = x(JN) = 0 and det(AB) ≠ 0, where AB = A(I, JB).
1. The matrix AB is called the "basis matrix".
2. xB = x(JB): basic components.
3. xN = x(JN): non-basic components.
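Given a basis index set JB, the basic components solve AB xB = b while the non-basic components are fixed at zero. A small sketch with a made-up 2×2 system (the basis columns and right-hand side below are hypothetical, chosen only to illustrate the computation):

```python
def basic_solution(AB, b):
    """Solve the 2x2 system AB @ xB = b by Cramer's rule (requires det(AB) != 0)."""
    det = AB[0][0] * AB[1][1] - AB[0][1] * AB[1][0]
    assert det != 0, "AB is not a basis matrix"
    x1 = (b[0] * AB[1][1] - AB[0][1] * b[1]) / det
    x2 = (AB[0][0] * b[1] - b[0] * AB[1][0]) / det
    return [x1, x2]

# Hypothetical basis columns a_j1 = (1, 2), a_j2 = (3, 1) and b = (9, 8).
xB = basic_solution([[1, 3], [2, 1]], [9, 8])
print(xB)  # [3.0, 2.0] -> xB >= 0, so this basic solution is also feasible
```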
Let x be a basic feasible solution and AB = A(I, JB) the associated basis matrix. Let x̄ be another feasible solution, and write x̄ = x + δx. Then

δZ = Z(x̄) − Z(x) = cT x̄ − cT x = cT (x̄ − x) = cT δx.

Since x and x̄ are both feasible solutions:

Ax = b .... (1)
Ax̄ = b .... (2)

Subtracting (1) from (2) gives A δx = 0: every feasible displacement δx lies in the null space of A.

4.2 optimality criterion

Let x be a basic feasible solution associated with the basis AB = A(I, JB). What are the necessary and sufficient conditions for x to be an optimal solution of problem (LPP)?

(LPP)  min ⟨C, X⟩
       AX = b
       X ∈ Sn+

Theorem The inequality EN ≤ 0 is sufficient for the optimality of the basic feasible solution x, i.e.

x is a basic feasible solution and EN ≤ 0  =⇒  x is a solution to problem (LPP).

The inequality EN ≤ 0 is also necessary for the optimality of the basic feasible solution x if it is not degenerate, i.e. if x is a non-degenerate optimal solution, then EN ≤ 0.

5 sufficient condition for the existence of an unbounded solution

The objective function of problem (LPP) does not reach its maximum if, among the non-basic components of the vector of estimates, there exists one which is strictly negative.
Sometimes a linear program has an unbounded solution. In this situation the objective function can achieve a value of positive infinity for a maximization problem or negative infinity for a minimization problem.


For example, consider the problem

Maximize z = A + 2B
Subject to A ≤ 10
2A + B ≥ 5
A, B ≥ 0

As long as A is kept less than or equal to 10, B can be increased without limit, and the objective function increases without limit; there is no finite optimum. Note that unboundedness refers to the objective function value, not the constraint set.
It is true that for the objective function to be unbounded the feasible region must be unbounded in some direction. However, an unbounded feasible set does not imply that there is no finite optimum. To see this, simply change the objective of the preceding example to minimize A + 2B. The feasible set is unaffected, and therefore still unbounded in some direction. However, the optimal solution is (A = 2.5, B = 0, z = 2.5).
Definition A feasible maximum (respectively minimum) LP is said to be unbounded if the objective function can assume arbitrarily large positive (respectively negative) values at feasible points. Otherwise it is said to be bounded.
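The unbounded example above can be checked numerically: every point (0, t) with t ≥ 5 is feasible, and z = A + 2B grows without bound along this ray. A sketch with the example's constraints hard-coded:

```python
def feasible(A, B):
    """Constraints of the example: A <= 10, 2A + B >= 5, A >= 0, B >= 0."""
    return A <= 10 and 2 * A + B >= 5 and A >= 0 and B >= 0

ray = [(0, t) for t in (5, 50, 500, 5000)]      # points (A, B) along the ray A = 0
assert all(feasible(A, B) for (A, B) in ray)
values = [A + 2 * B for (A, B) in ray]
print(values)  # [10, 100, 1000, 10000] -- z grows without bound along the ray
```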

6 Simplex algorithm: improvement of the objective function by passing from one extreme point to another, simplex algorithm in matrix form, finiteness of the simplex algorithm, simplex algorithm and table

6.1 improvement of the objective function by passing from one extreme point to another

To improve the solution it is necessary to generate another basic solution (extreme point) which increases the value of the objective function. That is to say, we must select a non-basic variable and a basic variable and exchange them in such a way that the new solution gives a greater value of the objective function.
Reasoning about the number of optimal solutions:
— If the domain is empty, then there is no optimal solution.
— If the domain is non-empty and bounded: 1 or an infinity of optimal solutions (see the graphic illustration).
— If the domain is non-empty and unbounded: 0, 1 or an infinity of optimal solutions, or an unbounded solution.
A resolution algorithm can therefore restrict its attention to the vertices of the polyhedron. This is what the simplex algorithm does to find out whether we can improve our initial basic feasible solution.

6.2 Simplex algorithm in matrix form

Consider the linear programming problem:

maximize ∑j=1..n cj xj
subject to ∑j=1..n aij xj ≤ bi, i = 1, 2, ..., m
xj ≥ 0, j = 1, 2, ..., n.
Definition A slack variable is a variable added to an inequality constraint in Ax ≤ b to transform it into an equality.
Example Consider the linear program:

max x1 + x2
s.t. x1 + 3x2 ≤ 9
2x1 + x2 ≥ 8
x1, x2 ≥ 0

We can rewrite it as

max x1 + x2
s.t. x1 + 3x2 + s1 = 9
2x1 + x2 − s2 = 8
x1, x2, s1, s2 ≥ 0

where s1, s2 are slack variables. Indeed, 2x1 + x2 − s2 = 8 ⇐⇒ 2x1 + x2 = 8 + s2 ≥ 8 for s2 ≥ 0, and similarly x1 + 3x2 + s1 = 9 ⇐⇒ x1 + 3x2 = 9 − s1 ≤ 9 for s1 ≥ 0.
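Mechanically, this conversion appends one extra column per constraint: +1 for a ≤ row (slack) and −1 for a ≥ row (surplus). A small sketch applied to the example above (the sense strings "<=" and ">=" are my own encoding):

```python
def to_standard_form(A, senses):
    """Append slack/surplus columns: +1 for '<=', -1 for '>='. Returns the augmented matrix."""
    m = len(A)
    aug = []
    for i, row in enumerate(A):
        extra = [0] * m
        extra[i] = 1 if senses[i] == "<=" else -1
        aug.append(row + extra)
    return aug

A = [[1, 3], [2, 1]]                      # coefficients of x1, x2 in the example
aug = to_standard_form(A, ["<=", ">="])
print(aug)  # [[1, 3, 1, 0], [2, 1, 0, -1]] : columns of s1 and s2
```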
With these slack variables, we now write our problem in matrix form:


maximize cT x
subject to Ax = b
x ≥ 0,

where

A = ( a11 a12 ... a1n 1 0 ... 0 )
    ( a21 a22 ... a2n 0 1 ... 0 )
    (  :   :       :  :  :    : )
    ( am1 am2 ... amn 0 0 ... 1 )

b = (b1, b2, ..., bm)T, c = (c1, c2, ..., cn, 0, ..., 0)T, and x = (x1, x2, ..., xn, xn+1, ..., xn+m)T.

As we know, the simplex method is an iterative procedure in which each iteration is characterized by specifying which m of the n + m variables are basic.
As before, we denote by B the set of indices corresponding to the basic variables, and we denote by N the remaining nonbasic indices. In component notation, the ith component of Ax can be broken up into a basic part and a nonbasic part:

∑j=1..n+m aij xj = ∑j∈B aij xj + ∑j∈N aij xj.

We wish to introduce a notation for matrices that will allow us to break up the matrix product Ax analogously. To this end, let B denote an m × m matrix whose columns consist precisely of the m columns of A that are associated with the basic variables. Similarly, let N denote an m × n matrix whose columns are the n nonbasic columns of A. Then we write A in partitioned-matrix form as follows:

A = [B N]


Example We consider the linear program (LP model)

maximize Z = 3x + 5y + 6w
x + 2y + 4w ≤ 70
2x + y + w ≤ 80
3x + 2y + 2w ≤ 60
(x, y, w) ≥ 0

We recall its canonical form and its standard form:

Canonical form:                     Standard form:
(x, y, w) ≥ 0                       (x, y, w, e1, e2, e3) ≥ 0
x + 2y + 4w ≤ 70                    x + 2y + 4w + e1 = 70
2x + y + w ≤ 80             =⇒      2x + y + w + e2 = 80
3x + 2y + 2w ≤ 60                   3x + 2y + 2w + e3 = 60
maximize Z = 3x + 5y + 6w           maximize Z = 3x + 5y + 6w

If we put

A = ( 1 2 4 )     x = ( x )     b = ( 70 )     c = ( 3 )
    ( 2 1 1 )         ( y )         ( 80 )         ( 5 )
    ( 3 2 2 )         ( w )         ( 60 )         ( 6 )

for the canonical form, and

A = ( 1 2 4 1 0 0 )     x = (x, y, w, e1, e2, e3)T,     c = (3, 5, 6, 0, 0, 0)T
    ( 2 1 1 0 1 0 )
    ( 3 2 2 0 0 1 )

for the standard form, the program is written in the following canonical and standard matrix forms:

Ax ≤ b, x ≥ 0                 Ax = b, x ≥ 0
maximize Z = cT x     =⇒      maximize Z = cT x

6.3 finiteness of the simplex algorithm

At each step of the simplex algorithm (in phase 2), there are remarkable cases which all lead to the stopping of the algorithm.
1- If the reduced costs satisfy LH < 0, then the current basic feasible solution is the unique optimum.
2- If the reduced costs satisfy LH ≤ 0:

Figure 3: unique optimum solution

Figure 4: the optimum is not unique


Figure 5: the optimum is unique degenerate

a) if (LH)e = 0 and xe > 0, then the optimum is not unique;
b) if (LH)e = 0 and xe = 0, then the optimum is unique (a priori). In this case, the basis is said to be degenerate, that is to say that there exists a zero basic variable.
3- If (LH)e > 0 and xe is unbounded, then the objective function F is not bounded.
Theorem If during the simplex algorithm no basis encountered is degenerate, then the algorithm ends in a finite number of iterations.

6.4 simplex algorithm and table

Here is the initial problem that we had.

Maximize P = 2x1 + x2
Subject to: x1 + x2 ≤ 5
−2x1 + 3x2 ≤ 6
2x1 − 3x2 ≤ 6
x1, x2 ≥ 0

The initial system

The initial system is found by converting the ≤ constraints into = constraints by adding a slack variable.

Maximize P = 2x1 + x2
Subject to: x1 + x2 + s1 = 5

−2x1 + 3x2 + s2 = 6
2x1 − 3x2 + s3 = 6
x1, x2, s1, s2, s3 ≥ 0

Figure 6: Initial Table of the simplex method

The initial table The first table of the simplex will be constructed as follows (figure 6):
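The tableau method set up here can also be carried out in code. Below is a compact dense-tableau simplex for max cᵀx, Ax ≤ b, x ≥ 0 with b ≥ 0 (a didactic sketch written for this chapter, not an industrial implementation), applied to this problem:

```python
def simplex_max(c, A, b):
    """Dense tableau simplex for: max c^T x s.t. Ax <= b, x >= 0 (assumes b >= 0)."""
    m, n = len(A), len(c)
    # Tableau rows: constraints with a slack identity block, last row = -c (objective).
    T = [A[i] + [1 if j == i else 0 for j in range(m)] + [b[i]] for i in range(m)]
    T.append([-cj for cj in c] + [0] * m + [0])
    basis = list(range(n, n + m))                    # start from the slack basis
    while True:
        # Entering variable: most negative reduced cost.
        piv_col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][piv_col] >= 0:
            break                                    # optimal tableau reached
        # Leaving variable: minimum ratio test over strictly positive pivot entries.
        ratios = [(T[i][-1] / T[i][piv_col], i) for i in range(m) if T[i][piv_col] > 1e-12]
        if not ratios:
            raise ValueError("unbounded LP")
        _, piv_row = min(ratios)
        basis[piv_row] = piv_col
        # Gauss-Jordan pivot: normalize the pivot row, cancel the pivot column elsewhere.
        p = T[piv_row][piv_col]
        T[piv_row] = [v / p for v in T[piv_row]]
        for i in range(m + 1):
            if i != piv_row and T[i][piv_col]:
                f = T[i][piv_col]
                T[i] = [v - f * w for v, w in zip(T[i], T[piv_row])]
    x = [0.0] * n
    for i, j in enumerate(basis):
        if j < n:
            x[j] = T[i][-1]
    return x, T[-1][-1]

# The problem above: max P = 2x1 + x2.
x, P = simplex_max([2, 1], [[1, 1], [-2, 3], [2, -3]], [5, 6, 6])
print(x, P)  # x is approximately [4.2, 0.8], P is approximately 9.2
```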

7 Initialization of the simplex algorithm: case of the linear program in normal form, M method, two-phase method

7.1 case of the linear program in normal form

An LP is in normal form if:
— The n variables are all ≥ 0.
— The m constraints are of type ≤ if it is a maximization problem, and of type ≥ if it is a minimization problem.
In other words, we can write an LP in normal form as

max z = ∑j=1..n cj xj
s.t. ∑j=1..n aij xj ≤ bi, i = 1...m
xj ≥ 0, j = 1...n

or

min z = ∑j=1..n cj xj
s.t. ∑j=1..n aij xj ≥ bi, i = 1...m
xj ≥ 0, j = 1...n
The standard form of the linear program satisfies these two properties:
- Among the variables of the standard problem, there are m variables which each appear with a non-zero coefficient in exactly one constraint (in our example: x3, x4, x5, x6 and x7).
- The values of the right-hand side of the constraints (the components of vector b) are positive.
A basic feasible solution is then obtained by setting the (n − m) decision variables to zero; the values of the slack variables are directly given by the right-hand side. The second property ensures the satisfaction of the non-negativity constraints of the slack variables.
Example
Let us write the model in its standard form; we must add a slack variable in each constraint of type ≤ and subtract a surplus variable in each constraint of type ≥ (figure (7)).

Min Z = 3x1 + 4x2
s.t. 5x1 + 5x2 ≤ 60
x1 + 3x2 ≥ 12
x1 + 2x2 ≥ 2
x1 ≥ 6
x2 ≤ 8
x1, x2 ≥ 0

Then we have the standard form:

Min Z = 3x1 + 4x2 + 0x3 + 0x4 + 0x5 + 0x6 + 0x7
s.t. 5x1 + 5x2 + x3 = 60
x1 + 3x2 − x4 = 12
x1 + 2x2 − x5 = 2
x1 − x6 = 6
x2 + x7 = 8
x1, x2, x3, x4, x5, x6, x7 ≥ 0

We have n = 7 and m = 5; we must set (n − m) = 7 − 5 = 2 variables to zero to hope for a basic feasible solution.
If we set x1 = x2 = 0, we obtain (x3, x4, x5, x6, x7) = (60, −12, −2, −6, 8), the variables x4, x5, x6 being negative.
In the case of constraints of type ≥ or type =, in addition to subtracting a surplus variable for constraints of type ≥, we add an artificial variable in these respective constraints:

Min Z = 3x1 + 4x2 + 0x3 + 0x4 + 0x5 + 0x6 + 0x7 + 1x8 + 1x9 + 1x10
s.t. 5x1 + 5x2 + x3 = 60
x1 + 3x2 − x4 + x8 = 12
x1 + 2x2 − x5 + x9 = 2
x1 − x6 + x10 = 6
x2 + x7 = 8
x1, x2, x3, x4, x5, x6, x7, x8, x9, x10 ≥ 0

We can then obtain a starting solution by setting (n − m) = 10 − 5 = 5 variables to zero, namely the two decision variables x1, x2 and x4, x5, x6 (those which were subtracted). The basic solution is then: x3 = 60, x8 = 12, x9 = 2, x10 = 6, x7 = 8 and x1 = x2 = x4 = x5 = x6 = 0.
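This starting basic solution can be checked by substituting it into the five equality constraints (a quick sketch; the variable order is x1, ..., x10):

```python
# Rows of the standard-form system (coefficients of x1..x10) and right-hand side.
A = [
    [5, 5, 1,  0,  0,  0, 0, 0, 0, 0],   # 5x1 + 5x2 + x3      = 60
    [1, 3, 0, -1,  0,  0, 0, 1, 0, 0],   # x1 + 3x2 - x4 + x8  = 12
    [1, 2, 0,  0, -1,  0, 0, 0, 1, 0],   # x1 + 2x2 - x5 + x9  = 2
    [1, 0, 0,  0,  0, -1, 0, 0, 0, 1],   # x1 - x6 + x10       = 6
    [0, 1, 0,  0,  0,  0, 1, 0, 0, 0],   # x2 + x7             = 8
]
b = [60, 12, 2, 6, 8]
# Basic solution: x3 = 60, x7 = 8, x8 = 12, x9 = 2, x10 = 6, all other variables zero.
x = [0, 0, 60, 0, 0, 0, 8, 12, 2, 6]

residuals = [sum(a * v for a, v in zip(row, x)) - bi for row, bi in zip(A, b)]
print(residuals)  # [0, 0, 0, 0, 0] -- the proposed starting basis is feasible
```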
Remark
- Regular LPP: we use the simplex method directly when all constraint signs are (≤) and the objective function is Max.
- Irregular LPP: we use the two-phase or M method when any constraint sign is allowed and the objective can be Max or Min.

7.2 Simplex method

Solve the following problem using the Simplex method:
Maximize Z = 3x + 2y
subject to: 2x + y ≤ 18
2x + 3y ≤ 42
3x + y ≤ 24
x ≥ 0, y ≥ 0
Consider the following steps:

1. Make a change of variables and normalize the sign of the independent terms.
A change is made to the variable naming, establishing the following correspondences:


Figure 7: Normalize restrictions

x becomes X1
y becomes X2
As the independent terms of all restrictions are positive, no further action is required. Otherwise, both sides of the inequality would be multiplied by "-1" (noting that this operation also affects the type of restriction).

2. Normalize restrictions. The inequalities become equations by adding slack, surplus and artificial variables, as in figure (7).
In this case, a slack variable (e1, e2 and e3) is introduced in each of the restrictions of type ≤, to convert them into equalities, resulting in the system of linear equations:

2X1 + X2 + e1 = 18
2X1 + 3X2 + e2 = 42
3X1 + X2 + e3 = 24

3. Match the objective function to zero.

Z − 3X1 − 2X2 − 0e1 − 0e2 − 0e3 = 0

4. Write the initial tableau of the Simplex method. The initial tableau consists of all the coefficients of the decision variables of the original problem and the slack, surplus and artificial variables added in the second step (in columns, with P0 as the constant term and Pi as the coefficients of the rest of the Xi variables), and the constraints (in rows). The Cb column contains the coefficients of the variables that are in the base.
The first row consists of the objective function coefficients, while the last row contains the objective function value and the reduced costs Zj − Cj.
The last row is calculated as follows: Zj = ∑i=1..m Cbi · Pj, where if j = 0 then P0 = bi and C0 = 0, else Pj = aij. Since this is the first tableau of the Simplex method and all the Cb are null, the calculation simplifies, and at this point Zj = −Cj.

Figure 8: Tableau I. 1st iteration

5. Stopping condition. If the objective is to maximize, the stopping condition is reached when there is no negative value among the reduced costs in the last row (indicator row).
In that case, the algorithm has reached the end, as there is no possibility of improvement. The Z value (RHS column) is the optimal value of the problem.
Another possible scenario is that all values in the column of the input base variable are negative or zero. This indicates that the problem is unbounded: the solution can always be improved.
Otherwise, the following steps are executed iteratively.

6. Choice of the input and output base variables.
First, the input base variable is determined. For this, the column whose value in the Z row is the most negative is chosen. In this example it is the variable X1, with -3 as coefficient.
If two or more coefficients tie for this condition, any of the corresponding variables may be chosen as the input base variable.
The column of the input base variable is called the pivot column.
Once the input base variable is obtained, the output base variable is determined. The decision is based on a simple calculation: divide each independent term (RHS column) by the corresponding value in the pivot column, whenever both values are strictly positive (greater than zero). The row whose quotient is the minimum is chosen.
If a pivot-column value is less than or equal to zero, this quotient is not computed. If no value in the pivot column is strictly positive, the stopping condition is reached and the problem has an unbounded solution.
In this example: 18/2 [= 9], 42/2 [= 21] and 24/3 [= 8].
The term of the pivot column which led to the least positive quotient in the previous division indicates the row of the slack variable leaving the base. In this example it is e3, with 3 as coefficient. This row is called the pivot row.
If two or more quotients tie, one of the corresponding basic variables is chosen.
The intersection of the pivot column and the pivot row marks the pivot value, in this example, 3.

7. Update tableau. The new coefficients of the tableau are calculated as follows:
In the pivot row each new value is calculated as:

New value = Previous value / Pivot

In the other rows each new value is calculated as:

New value = Previous value − (Previous value in pivot column × New value in pivot row)

So the pivot is normalized (its value becomes 1), while the other values of the pivot column are canceled (analogous to the Gauss-Jordan method).
Calculations for the P4 (e2) row are shown below:

Previous P4 row:                    42   2   3    0   1   0
− Previous value in pivot column:    2   2   2    2   2   2
× New value in pivot row:            8   1   1/3  0   0   1/3
= New P4 row:                       26   0   7/3  0   1   -2/3

Table 1: Calculations for the P4 (e2) row

The table corresponding to this second iteration is (figure 9):

Figure 9: Tableau II . 2nd iteration
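The row update of step 7 is one Gauss-Jordan elimination step. A sketch reproducing the P4 (e2) row calculation of Table 1 with exact fractions:

```python
from fractions import Fraction as F

def update_row(row, new_pivot_row, factor):
    """New value = previous value - factor * (new value in pivot row)."""
    return [F(v) - F(factor) * F(w) for v, w in zip(row, new_pivot_row)]

prev_p4   = [42, 2, 3, 0, 1, 0]               # previous e2 (P4) row
new_pivot = [8, 1, F(1, 3), 0, 0, F(1, 3)]    # pivot row already divided by the pivot 3
factor    = 2                                  # P4's entry in the pivot column

new_p4 = update_row(prev_p4, new_pivot, factor)
print([str(v) for v in new_p4])  # ['26', '0', '7/3', '0', '1', '-2/3']
```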

8. Checking the stopping condition, it is observed that it is not fulfilled, since there is one negative value in the last row, -1. So, we continue with steps 6 and 7 again.
6.1- The input base variable is X2, since it is the variable corresponding to the column where the coefficient is -1.
6.2- To calculate the output base variable, the constant terms (RHS column) are divided by the terms of the new pivot column: 2 / (1/3) [= 6], 26 / (7/3) [= 78/7] and 8 / (1/3) [= 24]. As the least positive quotient is 6, the output base variable is e1.
6.3- The new pivot is 1/3.
7. Updating the values of the tableau again, we obtain (figure 10):

Figure 10: Tableau III . 3rd iteration


Figure 11: Tableau IV . 4th iteration

9. Checking the stopping condition again reveals that the last row has one negative value, -1. This means that the optimal solution is not reached yet and we must continue iterating (steps 6 and 7):
6.1. The input base variable is e3, since it is the variable corresponding to the column where the coefficient is -1.
6.2. To calculate the output base variable, the constant terms (RHS) are divided by the terms of the new pivot column: 6/(-2) [= -3], 12/4 [= 3], and 6/1 [= 6]. In this iteration, the output base variable is e2.
6.3. The new pivot is 4.
7. Updating the values of the tableau again, we obtain (figure 11):

10. End of algorithm. It is noted that in the last row all the coefficients are non-negative, so the stopping condition is fulfilled.
The optimal value is given by the value of Z in the constant terms column (RHS column), in the example: 33. In the same column, the point where it is reached is shown, reading the rows corresponding to the input decision variables: X1 = 3 and X2 = 12.
Undoing the name change gives x = 3 and y = 12.
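The reported optimum can be verified directly against the original constraints (a quick check):

```python
x, y = 3, 12
assert 2 * x + y <= 18       # 18: tight
assert 2 * x + 3 * y <= 42   # 42: tight
assert 3 * x + y <= 24       # 21: slack of 3
assert x >= 0 and y >= 0
Z = 3 * x + 2 * y
print(Z)  # 33, the optimal value found by the tableau
```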

7.3 M method (big M)

If an LP has any ≥ or = constraints, a starting BFS may not be readily apparent. When a BFS is not readily apparent, the big M method or the two-phase simplex method may be used to solve the problem.
The big M method is a version of the Simplex Algorithm that first finds a BFS by adding "artificial" variables to the problem. The objective function of the original LP must, of course, be modified to ensure that the artificial variables are all equal to 0 at the conclusion of the simplex algorithm.


Steps
1. Modify the constraints so that the RHS of each constraint is nonnegative (this requires that each constraint with a negative RHS be multiplied by -1; remember that if you multiply an inequality by a negative number, the direction of the inequality is reversed!). After modification, identify each constraint as a ≤, ≥, or = constraint.
2. Convert each inequality constraint to standard form (if constraint i is a ≤ constraint, we add a slack variable si; if constraint i is a ≥ constraint, we subtract an excess variable ei).
3. Add an artificial variable ai to the constraints identified as ≥ or = constraints at the end of Step 1. Also add the sign restriction ai ≥ 0.
4. If the LP is a max problem, add (for each artificial variable) −M ai to the objective function, where M denotes a very large positive number.
5. If the LP is a min problem, add (for each artificial variable) M ai to the objective function.
6. Solve the transformed problem by the simplex method. Since each artificial variable will be in the starting basis, all artificial variables must be eliminated from row 0 before beginning the simplex (in choosing the entering variable, remember that M is a very large positive number!).
If all artificial variables are equal to zero in the optimal solution, we have found the optimal solution to the original problem.
If any artificial variable is positive in the optimal solution, the original problem is infeasible!
Example
Min Z = 60x + 160y
Subject to
2x + 8y ≥ 80
4x + 4y ≥ 80
x + y ≤ 250
x, y ≥ 0
Set x = x1, y = x2 and convert the constraints to equations:
2x1 + 8x2 − s1 + A1 = 80
4x1 + 4x2 − s2 + A2 = 80
x1 + x2 + s3 = 250
x1, x2, s1, s2, s3, A1, A2 ≥ 0
To calculate Zj we have Zj = 2·M + 4·M + 1·0 for x1, Zj = 8·M + 4·M + 1·0 for x2, and so on.

Figure 12: First table

Figure 13: Second table
Remark
1- If the objective function is Max then:
- if some Cj − Zj > 0, we take the column with the biggest such value as the pivot column;
- if Cj − Zj ≤ 0 for all j, the optimal solution is reached.
2- If the objective function is Min then:
- if some Cj − Zj < 0, we take the column with the most negative such value as the pivot column;
- if Cj − Zj ≥ 0 for all j, the optimal solution is reached.
In the first table the pivot value is 3. Since all values of Cj − Zj are then positive or zero, the solution is x2 = 20/3, x1 = 40/3, s3 = 230.
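The solution read from the final table can be verified with exact arithmetic: both ≥ constraints are tight, and s3 = 230 is the slack of the third constraint (a quick check):

```python
from fractions import Fraction as F

x1, x2 = F(40, 3), F(20, 3)
assert 2 * x1 + 8 * x2 == 80          # first constraint tight
assert 4 * x1 + 4 * x2 == 80          # second constraint tight
assert 250 - (x1 + x2) == 230         # slack s3 of the third constraint
Z = 60 * x1 + 160 * x2
print(Z)  # 5600/3, i.e. about 1866.67
```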

7.4 two-phase method

Two-Phase Method Steps (Rule)
Step-1: Phase-1

a. Form a new objective function by assigning zero to every original variable (including slack and surplus variables) and -1 to each of the artificial variables.

Figure 14

e.g. Max Z = −A1 − A2

b. Using the simplex method, try to eliminate the artificial variables from the basis.

c. The solution at the end of Phase-1 is the initial basic feasible solution for Phase-2.

Step-2: Phase-2

a. The original objective function is used, and the coefficient of each artificial variable is 0 (so the artificial variables are removed from the calculation process).

b. Then the simplex algorithm is used to find the optimal solution.


Example Find a solution using the Two-Phase method:
Min Z = x1 + x2
subject to
2x1 + x2 ≥ 4
x1 + 7x2 ≥ 7
and x1, x2 ≥ 0.

Figure 15: Iteration-1 phase 1
Phase-1:

The problem is converted to standard form by adding slack, surplus and artificial variables as appropriate:

1. As constraint 1 is of type '≥' we subtract surplus variable S1 and add artificial variable A1.

2. As constraint 2 is of type '≥' we subtract surplus variable S2 and add artificial variable A2.

After introducing surplus and artificial variables:

Min Z = A1 + A2
subject to
2x1 + x2 − S1 + A1 = 4
x1 + 7x2 − S2 + A2 = 7
and x1, x2, S1, S2, A1, A2 ≥ 0
The negative minimum Cj − Zj is -8 and its column index is 2, so the entering variable is x2.
The minimum ratio is 1 and its row index is 2, so the leaving basis variable is A2.
The pivot element is 7.

Entering = x2, Departing = A2, Key Element = 7

R2(new) = R2(old) / 7


Figure 16: Iteration-2 phase1

Figure 17: Iteration-3 phase1

R1(new) = R1(old) − R2(new)

The negative minimum Cj − Zj is -13/7 and its column index is 1, so the entering variable is x1.
The minimum ratio is 21/13 ≈ 1.62 and its row index is 1, so the leaving basis variable is A1.
The pivot element is 13/7.

Entering = x1, Departing = A1, Key Element = 13/7

R1(new) = R1(old) × (7/13)
R2(new) = R2(old) − (1/7) R1(new)

Since all Cj − Zj ≥ 0, the optimal solution of Phase-1 is reached, with values x1 = 21/13, x2 = 10/13 and Min Z = 0.

Phase-2:


Figure 18: Iteration-1 phase2

We eliminate the artificial variables and change the objective function back to the original one.
Since all Cj − Zj ≥ 0, the optimal solution is reached, with values x1 = 21/13, x2 = 10/13 and Min Z = 31/13.
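Again the result can be confirmed with exact fractions: both constraints are tight at the optimum, so it is the vertex where the two boundary lines intersect:

```python
from fractions import Fraction as F

x1, x2 = F(21, 13), F(10, 13)
assert 2 * x1 + x2 == 4      # constraint 1 tight
assert x1 + 7 * x2 == 7      # constraint 2 tight
Z = x1 + x2
print(Z)  # 31/13
```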

7.5 Special cases

Multiple solutions: when using the simplex method, we identify this case when one of the net effects (Cj − Zj), relative to a variable outside the base, is zero.
Unbounded solution: if we cannot select the outgoing variable (all the ratios are negative or zero), the optimal solution is infinite and the maximum value is infinite.
Infeasible solution: when the "optimal" solution contains artificial variables in the base at non-zero levels.
