
Linear programming

Primal and dual problems


Geometry of the feasible set
Simplex method

Optimization Techniques in Finance


3. Linear programming

Andrew Lesniewski

Baruch College
New York

Fall 2019

A. Lesniewski Optimization Techniques in Finance



Outline

1 Linear programming

2 Primal and dual problems

3 Geometry of the feasible set

4 Simplex method


Example: graphical method

Linear programming (LP) is concerned with optimization problems where the
objective function and the constraint functions are linear. The constraints may be
equalities or inequalities.
Example 1. Consider the following problem with two decision variables
(unknowns) and four (inequality) constraints:

    min f (x) = x1 + x2 , subject to
        x1 − x2 ≥ −4,
        2x1 + x2 ≤ 8,
        xi ≥ 0, for i = 1, 2.

This problem is easy to solve using a graphical method.


The feasible set of this problem is the quadrilateral Q with vertices at (0, 0),
(0, 4), (4/3, 16/3), and (4, 0). Since the objective function is linear, its minimum
can only be achieved at a vertex of Q.
Sliding a level line of the objective function across Q, we see that its minimum is
achieved at x ∗ = (0, 0), where its value is f (x ∗ ) = 0.
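As a quick sanity check (a small Python sketch, not part of the original slides), we can compare the objective values at the four vertices of Q directly:

```python
# Vertex check for Example 1: since the objective is linear, it suffices to
# compare its values at the vertices of the feasible quadrilateral Q.
vertices = [(0, 0), (0, 4), (4 / 3, 16 / 3), (4, 0)]

def f(x):
    x1, x2 = x
    return x1 + x2

def feasible(x, tol=1e-9):
    x1, x2 = x
    return (x1 - x2 >= -4 - tol and 2 * x1 + x2 <= 8 + tol
            and x1 >= -tol and x2 >= -tol)

assert all(feasible(v) for v in vertices)
x_star = min(vertices, key=f)
print(x_star, f(x_star))  # (0, 0) 0
```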


Example: the diet problem

Example 2. There are n types of nutrients contained in m different types of food.


How should a person’s daily diet be structured so that he gets the required
amount of each nutrient at the minimum cost?
Let cj , j = 1, . . . , m denote the unit price of food j, let bi , i = 1, . . . , n denote the
daily minimum required amount of nutrient i, let aij denote the content of nutrient
i in food j, and let xj denote the daily amount of food j.
This leads to the following optimization problem:

    min f (x) = ∑_{j=1}^m cj xj , subject to
        ∑_{j=1}^m aij xj ≥ bi , for i = 1, . . . , n,
        xj ≥ 0, for j = 1, . . . , m.
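To make the formulation concrete, here is a small Python sketch that checks a candidate diet against the constraints; the prices, nutrient contents, and requirements below are hypothetical illustrative numbers, not from the lecture:

```python
# Evaluate the diet-problem objective and constraints for a candidate diet x.
def diet_cost(c, x):
    """Objective f(x) = sum_j c_j x_j."""
    return sum(cj * xj for cj, xj in zip(c, x))

def diet_feasible(a, b, x):
    """Check sum_j a_ij x_j >= b_i for every nutrient i, and x >= 0."""
    n, m = len(b), len(x)
    return all(xj >= 0 for xj in x) and all(
        sum(a[i][j] * x[j] for j in range(m)) >= b[i] for i in range(n))

# Two foods, two nutrients (made-up data):
c = [3.0, 2.0]                 # unit prices c_j
a = [[2.0, 1.0], [1.0, 2.0]]   # a[i][j]: amount of nutrient i per unit of food j
b = [8.0, 7.0]                 # daily requirements b_i
x = [3.0, 2.0]                 # candidate daily amounts x_j
print(diet_feasible(a, b, x), diet_cost(c, x))  # True 13.0
```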


Example: the optimal assignment problem

Example 3. There are p people available to carry out q tasks. How should the
tasks be assigned so that the total value is maximized?
Let cij denote the daily value of person i = 1, . . . , p carrying out task j = 1, . . . , q,
and let xij denote the fraction of person i’s workday to spend on task j.
This leads to the following optimization problem:

    max f (x) = ∑_{i=1}^p ∑_{j=1}^q cij xij , subject to
        ∑_{j=1}^q xij ≤ 1, for i = 1, . . . , p,
        ∑_{i=1}^p xij ≤ 1, for j = 1, . . . , q,
        xij ≥ 0, for i = 1, . . . , p, j = 1, . . . , q.

The first constraint guarantees that no one works more than a day, while the
second condition means that no more than one person is assigned to a task.


Example: the transportation problem

Example 4. There are p factories that supply a certain product, and there are q
sellers to which this product is shipped. How should the market demand be met
at a minimum cost?
Assume that factory i = 1, . . . , p can supply si units of the product, while seller
j = 1, . . . , q requires at least dj units of the product. Let cij be the cost of
transporting one unit of the product from factory i to seller j, and let xij denote the
number of units shipped from factory i to seller j.
This leads to the following optimization problem:

    min f (x) = ∑_{i=1}^p ∑_{j=1}^q cij xij , subject to
        ∑_{j=1}^q xij ≤ si , for i = 1, . . . , p,
        ∑_{i=1}^p xij ≥ dj , for j = 1, . . . , q,
        xij ≥ 0, for i = 1, . . . , p, j = 1, . . . , q.


Example

Example 5. Consider the following set of constraints in R2 :

−x1 + x2 ≤ 1,
x1 , x2 ≥ 0.

These inequalities define an unbounded subset of R2 .


LP problems with these constraints may show very different outcomes:
(i) min f (x) = x1 + x2 has a unique solution at x = (0, 0).
(ii) min f (x) = x1 has infinitely many solutions at x = (0, t), 0 ≤ t ≤ 1. These
solutions are bounded.
(iii) min f (x) = x2 has infinitely many solutions at x = (t, 0), t ≥ 0. These
solutions are unbounded.
(iv) For min f (x) = −x1 − x2 no feasible point is optimal and the optimal cost
is −∞.


Examples

Two-dimensional LP problems with a moderate number of constraints can be
solved using a graphical method.
LP problems (especially when the number of decision variables and constraints is
large) are notoriously difficult to solve.
A breakthrough occurred in 1947, when George Dantzig developed the simplex
method, which proved to be a very effective tool for solving linear programming
problems.
Another major breakthrough occurred in 1984, when Narendra Karmarkar
developed an interior point method bearing his name. It is an efficient,
polynomial runtime algorithm.


Standard form

In general, we will be concerned with linear optimization problems of the form

    min f (x) = c T x, subject to
        aiT x = bi , for i ∈ E,
        aiT x ≥ bi , for i ∈ I,          (1)

where c ∈ Rn , ai ∈ Rn , and bi ∈ R, i = 1, . . . , m, are constant.


Note that there may be no constraints on some of the variables xi , in which case
their values are unrestricted.
Recall that a feasible solution to the problem (1) (or feasible vector ) is any x
satisfying all the constraints. The feasible set is the set of all feasible
solutions.
We will be assuming that the feasible set is nonempty.


Standard form

Any LP problem can be written in the standard form, namely

    min c T x, subject to
        Ax = b,
        xi ≥ 0, for i = 1, . . . , n,          (2)

where c ∈ Rn , A ∈ Matmn (R), and b ∈ Rm .


This form is convenient for describing solution algorithms for LP problems, and
(unless otherwise stated) we will be assuming it in the following.
Every problem of the form (1) can be transformed into the form (2).


Standard form

This can be accomplished by means of two types of operations:
(i) Elimination of inequality constraints: given an inequality of the form

    ∑_{j=1}^n aij xj ≤ bi ,

we introduce a slack variable si , and the standard constraints:

    ∑_{j=1}^n aij xj + si = bi ,
    si ≥ 0.

(ii) Elimination of free variables: if xi is an unrestricted variable, we replace it


by xi = xi+ − xi− , with xi+ , xi− ≥ 0.
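Operation (i) can be sketched as code. The helper below is a hypothetical function (the name and the row-list representation of the constraints are our own) that appends one slack variable per inequality:

```python
# Turn  min c^T x  s.t.  A_ub x <= b_ub, x >= 0  into equality (standard) form
# by appending an identity block of slack variables.
def add_slacks(A_ub, b_ub, c):
    m = len(A_ub)
    A_eq = [row + [1.0 if k == i else 0.0 for k in range(m)]
            for i, row in enumerate(A_ub)]
    c_std = list(c) + [0.0] * m   # slack variables carry zero cost
    return A_eq, b_ub, c_std

# For instance: min -x1 - x2  s.t.  2x1 + x2 <= 12,  x1 + 2x2 <= 9.
A_eq, b, c_std = add_slacks([[2.0, 1.0], [1.0, 2.0]], [12.0, 9.0], [-1.0, -1.0])
print(A_eq)   # [[2.0, 1.0, 1.0, 0.0], [1.0, 2.0, 0.0, 1.0]]
print(c_std)  # [-1.0, -1.0, 0.0, 0.0]
```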


Example

Example 6. Consider the following LP problem written in non-standard form:

    min −x1 − x2 , subject to
        2x1 + x2 ≤ 12,
        x1 + 2x2 ≤ 9,
        x1 ≥ 0, x2 ≥ 0.

Introducing slack variables x3 and x4 , this problem can be written in the standard
form:

    min −x1 − x2 , subject to
        2x1 + x2 + x3 = 12,
        x1 + 2x2 + x4 = 9,
        x1 ≥ 0, x2 ≥ 0, x3 ≥ 0, x4 ≥ 0.


Example

Example 7. Consider the following LP problem written in non-standard form:

    min x2 , subject to
        x1 + x2 ≥ 1,
        x1 − x2 ≤ 0.

Notice that x1 and x2 are unrestricted.
Writing xi = xi+ − xi− , i = 1, 2, and introducing slack variables s1 and s2 , this
problem can be written in the standard form:

    min x2+ − x2− , subject to
        x1+ − x1− + x2+ − x2− − s1 = 1,
        x1+ − x1− − x2+ + x2− + s2 = 0,
        x1+ ≥ 0, x1− ≥ 0, x2+ ≥ 0, x2− ≥ 0, s1 ≥ 0, s2 ≥ 0.


Lower bound for the solution

Let us go back to Example 6 written in the standard form, and consider the
following few feasible solutions:

    x = (0, 9/2, 15/2, 0),  f (x) = −9/2,
    x = (6, 0, 0, 3),       f (x) = −6,
    x = (5, 2, 0, 0),       f (x) = −7.

Is the last solution optimal?
One way to approach this question is to establish a lower bound on the objective
function over the feasible set.
For example, using the first constraint, we find that

    −x1 − x2 ≥ −2x1 − x2 − x3 = −12.


Lower bound for the solution

The second constraint helps a little more:

    −x1 − x2 ≥ −x1 − 2x2 − x4 = −9.

An even tighter bound is obtained if we add both constraints multiplied by −1/3:

    −x1 − x2 ≥ −x1 − x2 − (1/3)x3 − (1/3)x4
             = −(1/3)(2x1 + x2 + x3 ) − (1/3)(x1 + 2x2 + x4 )
             = −7.

The last lower bound means that f (x) ≥ −7 for any feasible solution.
Since we have already found a feasible solution saturating this bound, namely
x = (5, 2, 0, 0), it means that this x is an optimal solution to the problem.
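This saturation can be checked numerically (a small sketch, not part of the original slides):

```python
# Check that x = (5, 2, 0, 0) is feasible for the standard form of Example 6
# and attains the lower bound f(x) >= -7 derived from the constraints.
x1, x2, x3, x4 = 5, 2, 0, 0
assert 2 * x1 + x2 + x3 == 12       # first equality constraint
assert x1 + 2 * x2 + x4 == 9        # second equality constraint
assert min(x1, x2, x3, x4) >= 0     # nonnegativity
f = -x1 - x2
bound = -(2 * x1 + x2 + x3) / 3 - (x1 + 2 * x2 + x4) / 3
print(f, bound)  # -7 -7.0
```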


Lower bound for the solution


Let us formalize this process.
Consider any linear combination of the two main constraints with coefficients y1
and y2 :

y1 (2x1 + x2 + x3 ) + y2 (x1 + 2x2 + x4 ) = (2y1 + y2 )x1 + (y1 + 2y2 )x2 + y1 x3 + y2 x4 .

Since x3 , x4 ≥ 0, this expression will provide a lower bound on the objective
function −x1 − x2 , if y1 and y2 satisfy the following conditions:

2y1 + y2 ≤ −1,
y1 + 2y2 ≤ −1,
y1 , y2 ≤ 0.

Indeed, in this case,

(2y1 + y2 )x1 + (y1 + 2y2 )x2 + y1 x3 + y2 x4 ≤ −x1 − x2 .


Lower bound for the solution

In other words, we showed that

12y1 + 9y2 ≤ −x1 − x2 .

This motivates the following idea.


In order to obtain the largest possible lower bound, we should maximize the
corresponding linear combination of the right hand sides of the constraints of the
original problem, namely 12y1 + 9y2 , provided that y1 , y2 satisfy the constraints
introduced above.
This leads us to the following dual problem:

    max 12y1 + 9y2 , subject to
        2y1 + y2 ≤ −1,
        y1 + 2y2 ≤ −1,
        y1 , y2 ≤ 0.
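Since this dual has only two variables, it can itself be solved by inspecting the vertices of its feasible set. The sketch below (not part of the original slides) enumerates candidate intersection points of the constraint boundaries, worked out by hand, using exact rational arithmetic:

```python
from fractions import Fraction as F

# Candidate vertices: pairwise intersections of the boundaries
# 2y1 + y2 = -1,  y1 + 2y2 = -1,  y1 = 0,  y2 = 0.
candidates = [(F(-1, 3), F(-1, 3)), (F(0), F(-1)), (F(-1), F(0)),
              (F(0), F(-1, 2)), (F(-1, 2), F(0)), (F(0), F(0))]

def dual_feasible(y):
    y1, y2 = y
    return 2 * y1 + y2 <= -1 and y1 + 2 * y2 <= -1 and y1 <= 0 and y2 <= 0

feasible_vertices = [y for y in candidates if dual_feasible(y)]
y_star = max(feasible_vertices, key=lambda y: 12 * y[0] + 9 * y[1])
print(y_star, 12 * y_star[0] + 9 * y_star[1])
```

The maximum is attained at y = (−1/3, −1/3) with objective value −7, matching the tightest lower bound found above.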

This can be generalized to any LP problem in standard form.


Duality

For a general problem (2) (called the primal problem), we consider the
corresponding dual problem:

max bT y, subject to AT y ≤ c. (3)

Introducing the slack variables s, we can state it in the standard form:

    max bT y, subject to
        AT y + s = c,
        si ≥ 0, for i = 1, . . . , n.          (4)

As expected from the motivational example above, there is a relationship
between the solutions of the primal and dual problems.
The objective function of a feasible solution to the primal problem is bounded
from below by the objective function of any feasible solution to the dual problem.


Duality

Weak duality theorem. Let x be a feasible solution to the primal problem, and let
y be a feasible solution to the dual problem. Then

c T x ≥ bT y. (5)

For the proof, observe that, since x ≥ 0 and c − AT y ≥ 0 component-wise, the
inner product of these vectors must be nonnegative:

    0 ≤ (c − AT y)T x
      = c T x − y T Ax
      = c T x − y T b.


Duality

The quantity c T x − y T b is called the duality gap.


It follows immediately from the weak duality theorem that if
(i) x is feasible for the primal problem,
(ii) y is feasible for the dual problem,
(iii) c T x = y T b,
then x is an optimal solution to the primal problem, and y is an optimal solution
to the dual problem.
This thus gives a sufficient condition for optimality.


Duality

It is also necessary. This is the content of the following theorem.


Strong duality theorem. The primal problem has an optimal solution x if and only
if the dual problem has an optimal solution y such that c T x = y T b.
The following statement is a corollary of the strong duality theorem.
It allows us to find an optimal solution to the primal problem given an optimal
solution to the dual problem, and vice versa.
Complementary slackness. Let x be an optimal solution to the primal problem,
and let y be an optimal solution to the dual. Then the following two equations
hold:

    y T (Ax − b) = 0,
    (c − AT y)T x = 0.          (6)
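The conditions (6) can be verified numerically for Example 6, with the primal optimum x = (5, 2, 0, 0) found earlier and the multipliers y = (−1/3, −1/3) used in the lower-bound argument (a sketch, not part of the original slides):

```python
from fractions import Fraction as F

# Complementary slackness check for Example 6 in standard form.
A = [[F(2), F(1), F(1), F(0)], [F(1), F(2), F(0), F(1)]]
b = [F(12), F(9)]
c = [F(-1), F(-1), F(0), F(0)]
x = [F(5), F(2), F(0), F(0)]
y = [F(-1, 3), F(-1, 3)]

Ax = [sum(A[i][j] * x[j] for j in range(4)) for i in range(2)]
ATy = [sum(A[i][j] * y[i] for i in range(2)) for j in range(4)]

gap1 = sum(y[i] * (Ax[i] - b[i]) for i in range(2))    # y^T (Ax - b)
gap2 = sum((c[j] - ATy[j]) * x[j] for j in range(4))   # (c - A^T y)^T x
print(gap1, gap2)  # 0 0
```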


The Lagrange multipliers perspective

We can approach solving LP problems by means of the method of Lagrange
multipliers.
Only the first order conditions, the KKT conditions, will play a role: the Hessian of
the Lagrange function is zero, as the objective function and the constraints are
linear in x.
Convexity of the problem is sufficient for the existence of a global minimum (we
will discuss it later in these lectures).
For an LP problem written in the standard form, the Lagrange function is

L(x, λ, s) = c T x + λT (b − Ax) − sT x. (7)

Here, λ is the vector of Lagrange multipliers corresponding to the equality
constraints Ax = b, and s is the vector of Lagrange multipliers corresponding to
the inequality constraints −xi ≤ 0 (note the sign!).


The Lagrange multipliers perspective


Applying the KKT necessary conditions, we find that

AT λ + s = c,
Ax = b,
x ≥ 0, (8)
s ≥ 0,
xi si = 0, for all i = 1, . . . , n.

Note that, as a consequence of the nonnegativity of the xi ’s, the complementary
slackness condition can equivalently be stated as a single condition x T s = 0.
If (x ∗ , λ∗ , s∗ ) is a solution to this system, then

c T x ∗ = (AT λ∗ + s∗ )T x ∗
= λ∗T Ax ∗ + s∗T x ∗
(9)
= (Ax ∗ )T λ∗ + x ∗T s∗
= bT λ∗ .


The Lagrange multipliers perspective

In other words, the Lagrange multipliers can be identified with the dual variables
y in (4), and bT λ is the objective function for the dual problem!
Conversely, we can apply the KKT conditions to the dual problem (3). The
Lagrange function reads:

L̄(y, x) = bT y + x T (c − AT y), (10)

and the first order conditions are

Ax = b,
AT y ≤ c,
(11)
x ≥ 0,
x T (c − AT y) = 0.

The primal-dual relationship is symmetric: by taking the dual of the dual problem,
we recover the original problem.


Interpretation of the dual problem

The dual problem provides a useful intuitive interpretation of the primal problem.
As an illustration, consider the nutrition problem in Example 2.
The dual constraints read ∑_{i=1}^n aij λi ≤ cj , j = 1, . . . , m, and so λi represents
the unit price of nutrient i.
Therefore, the dual objective function ∑_{i=1}^n λi bi represents the cost of the daily
nutrients that the (imagined) salesman of nutrients is trying to maximize.
The optimal values λ∗i of the dual variables are called the shadow prices of the
nutrients i.
Even though the nutrients cannot be directly purchased, the shadow prices
represent their actual market values.
Another way of interpreting λ∗ is as the sensitivities of the primal function.


Polyhedra

We now turn to the simplex method, a numerical algorithm for solving LP
problems.
Our presentation of the simplex algorithm follows closely [1]. All the details left
out from our discussion can be found in that book.
Key to the formulation of the method is the geometry of the feasible set of an LP
problem.
Each such set forms a polyhedron.
Definition. A polyhedron is a subset of Rn of the form P = {x ∈ Rn : Ax ≥ b},
where A ∈ Matmn (R), and b ∈ Rm .
If the feasible set is presented in standard form P = {x ∈ Rn : Ax = b, x ≥ 0},
the polyhedron is said to be in a standard form representation.


Polyhedra

A polyhedron may extend to infinity or be a bounded set. In the latter case, we
refer to the polyhedron as bounded.
A set S ⊂ Rn is convex, if for any x, y ∈ S and λ ∈ (0, 1), λx + (1 − λ)y ∈ S.
In other words, a convex set has the property that the line segment connecting
any two of its points is contained in the set.
Notice that a polyhedron P is a convex set.
Namely, for x, y ∈ P and λ ∈ (0, 1),

    A(λx + (1 − λ)y) = λAx + (1 − λ)Ay
                     ≥ λb + (1 − λ)b
                     = b.

Polyhedra represented in standard form are, of course, convex as well.


Extreme points

A vector x ∈ P is called an extreme point or vertex, if it is not a convex
combination of two distinct points y, z ∈ P. In other words, x is an extreme point, if

    x = λy + (1 − λ)z, with λ ∈ (0, 1),          (12)

implies y = z = x.
An extreme point thus does not lie on a line segment connecting two other points
of P.
Not every polyhedron has extreme points. For example, a half-space
{x ∈ Rn : aT x ≥ b} has no extreme points.
Drawing a picture of a bounded polygon in the plane we verify that this geometric
definition of a vertex conforms with the intuition.


Extreme points

Let a1T , . . . , amT denote the row vectors of the matrix A. In terms of these vectors,
the feasible set can be characterized by the inequalities aiT x ≥ bi , i = 1, . . . , m.
We say that an inequality constraint ajT x ≥ bj or an equality constraint ajT x = bj
is active at a point x ∗ , if ajT x ∗ = bj .
By A(x ∗ ) we denote the set of (the indices of) all constraints active at x ∗ .
If x ∗ ∈ Rn , then the system

aiT x = bi , for all i ∈ A(x ∗ ),

has a unique solution, if and only if there exist n vectors in the set
{ai : i ∈ A(x ∗ )} which are linearly independent (why?).
We will refer to the constraints as linearly independent, if the vectors ai are
linearly independent.


Basic solutions

We will now define a basic feasible solution (BFS) of an LP problem.


Definition. Consider a polyhedron P, and let x ∗ ∈ Rn (not necessarily in P!).
(a) The vector x ∗ is a basic solution, if
(i) all equality constraints are active,
(ii) n of the active (equality and inequality) constraints at x ∗ are linearly
independent.
(b) If x ∗ is a basic solution that satisfies all of the constraints, it is called a
basic feasible solution.


Extreme points

Theorem. Suppose that the polyhedron P = {x ∈ Rn : aiT x ≥ bi , i = 1, . . . , m}


is nonempty. Then the following conditions are equivalent:
(i) P has at least one extreme point.
(ii) P does not contain a line (i.e. there is no direction d ∈ Rn such that
x + λd ∈ P for all λ ∈ R.)
(iii) There exist n linearly independent vectors among the vectors a1 , . . . , am .
The proof of this theorem is elementary but a bit too lengthy to discuss here, and
it can be found in [1].


Extreme points

Corollary. Every nonempty bounded polyhedron and every nonempty polyhedron
in standard form has at least one extreme point.
Indeed, in neither case does the polyhedron contain a line.
The following theorem justifies why we bother about extreme points.
Theorem. Consider the LP problem minx∈P c T x. Suppose that P has an
extreme point and that there exists an optimal solution. Then, there exists an
optimal solution that is an extreme point of P.
An immediate corollary to this theorem and the fact that every polyhedron in
standard form has an extreme point is the following alternative:
(i) either the optimization problem is unbounded (and the optimal value is
−∞),
(ii) or there is an optimal solution.


Extreme points
The proof of the theorem is fun: Let Q be the (nonempty) set of all optimal
solutions, and let v be the optimal value of the objective function c T x.
Then Q = {x ∈ Rn : Ax ≥ b, c T x = v }, which is also a polyhedron. Since
Q ⊂ P and P does not contain a line, Q does not contain a line, and so Q has
an extreme point x ∗ .
We will show that x ∗ is also an extreme point of P.
Assume that x ∗ = λy + (1 − λ)z, for distinct y, z ∈ P and λ ∈ (0, 1). Then

    v = c T x ∗
      = λ c T y + (1 − λ) c T z.
Since c T y , c T z ≥ v , this is possible only if c T y = c T z = v .


Therefore, y , z ∈ Q. But this contradicts the fact that, by assumption, x ∗ is an
extreme point of Q.
This contradiction shows that x ∗ is an extreme point of P. Also, since x ∗ ∈ Q, it
is optimal.


Degeneracy

A basic solution x ∈ Rn is called degenerate, if more than n of the constraints


are active at x.
Example 8. Consider the polyhedron:

    x1 + x2 + 2x3 ≤ 8,
    x2 + 6x3 ≤ 12,
    x1 ≤ 4,
    x2 ≤ 6,
    x1 , x2 , x3 ≥ 0.

The vector x = (2, 6, 0) is a nondegenerate BFS, because there are exactly
three active, linearly independent constraints: the first, the fourth, and x3 ≥ 0.
The vector x = (4, 0, 2) is degenerate, because there are four active constraints:
the first three and x2 ≥ 0.
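The two claims can be checked by counting active constraints (a small sketch, not part of the original slides):

```python
# Count the constraints of Example 8 active at a point (n = 3, so a basic
# solution is degenerate when more than 3 constraints are active).
def active_count(x):
    x1, x2, x3 = x
    constraints = [x1 + x2 + 2 * x3 == 8,   # first
                   x2 + 6 * x3 == 12,       # second
                   x1 == 4,                 # third
                   x2 == 6,                 # fourth
                   x1 == 0, x2 == 0, x3 == 0]
    return sum(constraints)

print(active_count((2, 6, 0)))  # 3 -> nondegenerate
print(active_count((4, 0, 2)))  # 4 -> degenerate
```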


Degeneracy

Generically, basic solutions are nondegenerate. If we generated the constraint
coefficients purely randomly, we would end up, with probability 100%, with a fully
nondegenerate problem.
Degeneracies occur as a result of additional or coincidental dependencies that
are common in real life situations. For this reason, they need to be addressed in
any solution methodology.
The simplex method becomes a bit hairy if a degenerate BFS is encountered. In
order to keep the discussion straightforward, from now on we assume that all
BFSs are nondegenerate.
We will make a few remarks regarding the degenerate case, following the
presentation of the main outline of the simplex method.


Basic solutions for polyhedra in standard form


From now on we assume that the polyhedron is represented in standard form
P = {x ∈ Rn : Ax = b, x ≥ 0}.
We will also assume that the matrix A has full rank, i.e. all of its m ≤ n rows
are linearly independent.
This is no loss of generality, because if the rank of A is k < m, we can consider
an equivalent problem with a rank k submatrix D of A, with the redundant rows
eliminated.
Example 9. Consider the polyhedron defined by the constraints:

    2x1 + x2 + x3 = 2,
    x1 + x2 = 1,
    x1 + x3 = 1,
    x1 , x2 , x3 ≥ 0.
The corresponding matrix A has rank 2. The first constraint is redundant (it is the
sum of the second and third constraints), and can be eliminated without
changing the problem.


Basic solutions for polyhedra in standard form

At a basic solution of a polyhedron in standard form, the m equality constraints
are always active.
Then A has m columns that are linearly independent. Let Ar1 , . . . , Arm , where
each Ari ∈ Rm , denote a set of m linearly independent columns of A.
We must also have xj = 0, for all j ∉ {r1 , . . . , rm }.
We let B denote the submatrix of A formed by these columns, i.e.

    B = (Ar1 . . . Arm ).

B is called a basis matrix.
Since B is a square matrix of maximal rank, it is invertible.
Clearly, a basic solution for a polyhedron in standard form is degenerate if more
than n − m components of x are zero.


Basic solutions

Permuting the columns of A, we write it in the block form (B N). Under the same
permutation, a vector x can be written in the block form x = (xB , xN ).
Recall that in order for x to be a basic solution, we must have xN = 0.
The equation Ax = b is then equivalent to the block form equation

    (B N)(xB , 0)T = b,

or

    B xB = b.


Basic solutions

Its solution reads

    xB = B −1 b.          (13)

The variables xB are called basic variables, while the variables xN are called
nonbasic variables.
Recall that a basic solution is not guaranteed to be feasible, as it may violate the
nonnegativity condition xB ≥ 0.


Basic solutions

There is a key link between the geometric concept of a vertex of a polyhedron and
the analytic concept of a BFS, given by the theorem below.
Theorem. A vector x ∗ is a BFS, if and only if it is a vertex of P.
For the proof, we first assume that x ∗ is a BFS but not an extreme point of P, i.e. it
can be represented as x ∗ = λy + (1 − λ)z, with 0 < λ < 1, and distinct y, z ∈ P.
But then also xN∗ = λyN + (1 − λ)zN . However, since xN∗ = 0, and y, z ≥ 0
(since they are elements of P), it follows that also yN = 0 and zN = 0.
Since BxB∗ = b, we also must have ByB = b and BzB = b (because
xN∗ = yN = zN = 0).
This implies that xB∗ = yB = zB (= B −1 b), and so x ∗ = y = z. This contradiction
means that x ∗ is extreme.


Basic solutions

In order to prove the converse statement, we suppose that x ∗ is not a BFS.
Therefore, there do not exist n linearly independent active constraints, and thus
the vectors ai , i ∈ A(x ∗ ), span a proper subspace of Rn . We can thus find a
direction d ≠ 0 such that aiT d = 0, for all i ∈ A(x ∗ ).
Consider now the vectors y = x ∗ − εd and z = x ∗ + εd. Note that these vectors
satisfy the active constraints, as aiT (x ∗ ± εd) = aiT x ∗ ± ε aiT d = bi .
Choosing ε sufficiently small, we can assure that they also satisfy the inactive
constraints aiT x ∗ > bi .
Consequently, y, z ∈ P, and x ∗ = (1/2)(y + z), which means that x ∗ is not an
extreme point. This contradiction completes the proof.


Adjacent BFSs

We will now proceed to describing an algorithm for moving from one BFS to
another and decide when to stop the search.
We start with the following definition.
(i) Two BFSs are adjacent, if their basic matrices differ in one basic column
only.
(ii) Let x ∈ P. A vector d ∈ Rn is a feasible direction at x, if there is a positive
number θ such that x + θd ∈ P.
(iii) A vector d ∈ Rn is an improving direction, if c T d < 0.
In other words, moving from x to x + θd along an improving direction d changes the
value of the objective function c T x by θ c T d < 0.


Adjacent BFSs

Note that if the new point x + θd is feasible, then

Ad = 0. (14)

Indeed, with θ > 0,

    θ Ad = A(x + θd) − Ax
         = b − b
         = 0.

The strategy is, starting from a BFS, to find an improving feasible direction
towards an adjacent BFS.


Adjacent BFSs

We move in the j-th basic direction d = (dB , dN ), whose nonbasic part dN has
exactly one nonzero component, corresponding to the nonbasic variable xj .
When moving in this basic direction, the nonbasic variable xj = 0 becomes
positive, while the other nonbasic variables remain zero. We say that xj enters
the basis.
Specifically, we select a nonbasic variable xj and set

    dj = 1,
    di = 0, for every nonbasic index i ≠ j.


Adjacent BFSs

As a result, x changes to x + θd, where the basic part dB of d is determined by

    0 = Ad
      = ∑_{i=1}^n Ai di
      = ∑_{i=1}^m Ari dri + Aj
      = B dB + Aj ,

and so dB = −B −1 Aj .


Adjacent BFSs

We are now facing two cases.


Case 1: x is a nondegenerate BFS. Then xB > 0, which implies that
xB + θdB > 0, and feasibility is assured by choosing θ sufficiently small.
In particular, d is a feasible direction.
Case 2: x is degenerate. In this case, d is not always a feasible direction. It is
possible that a basic variable xri is zero, while the corresponding component dri
is negative.
In this case, if we follow the j-th basic direction, the nonnegativity constraint
xri ≥ 0 is violated, and we are led to infeasible solutions.


Reduced cost

We will now study the effect of moving in the j-th basic direction on the objective
function.
Let x be a basic solution with basis matrix B, and let cB be the vector of the costs
of the basic variables.
For each i = 1, . . . , n the reduced cost c̄i of xi is defined by

c̄i = ci − cBT B −1 Ai . (15)

The j-th basic direction is improving if and only if c̄j < 0.


Example
Example 10. Consider the following problem. For x ∈ R4,

min c^T x subject to  x1 + x2 + x3 + x4 = 2,
                      2x1 + 3x3 + 4x4 = 2,
                      xi ≥ 0, i = 1, . . . , 4.

The first two columns of the matrix A are

A1 = (1, 2)^T  and  A2 = (1, 0)^T.

Since they are linearly independent, we can choose x1 and x2 as our basic
variables, and

    [ 1  1 ]
B = [ 2  0 ] .

We set x3 = x4 = 0 and solve the constraints to find x1 = 1 and x2 = 1. We
have thus constructed a nondegenerate BFS.
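The BFS above can be checked numerically. A minimal sketch in plain Python, inverting the 2x2 basis matrix by hand:

```python
# Numerical check of the BFS in Example 10.
B = [[1.0, 1.0],
     [2.0, 0.0]]
b = [2.0, 2.0]

# Invert the 2x2 basis matrix directly: B^{-1} = (1/det) [[d, -b], [-c, a]].
det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
B_inv = [[ B[1][1] / det, -B[0][1] / det],
         [-B[1][0] / det,  B[0][0] / det]]

# Basic variables: x_B = B^{-1} b, with the nonbasic x3 = x4 = 0.
xB = [B_inv[0][0] * b[0] + B_inv[0][1] * b[1],
      B_inv[1][0] * b[0] + B_inv[1][1] * b[1]]
print(xB)  # [1.0, 1.0], i.e. x1 = 1, x2 = 1: a nondegenerate BFS
```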


Example

The basic direction corresponding to the nonbasic variable x3 is obtained as
follows:

dB = −B^{-1} A3 = − [ 0   1/2 ] [ 1 ] = [ −3/2 ]
                    [ 1  −1/2 ] [ 3 ]   [  1/2 ] .

The cost of moving along this basic direction is c^T d = cB^T dB + c3 = −3c1/2 + c2/2 + c3.
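Continuing the example numerically, the following sketch computes dB = −B^{-1} A3 (with B^{-1} entered by hand from the display above):

```python
# Basic direction for the entering variable x3 in Example 10.
B_inv = [[0.0, 0.5],
         [1.0, -0.5]]    # inverse of B = [[1, 1], [2, 0]]
A3 = [1.0, 3.0]           # third column of A

# d_B = -B^{-1} A_3
dB = [-(B_inv[0][0] * A3[0] + B_inv[0][1] * A3[1]),
      -(B_inv[1][0] * A3[0] + B_inv[1][1] * A3[1])]
print(dB)  # [-1.5, 0.5], i.e. d_B = (-3/2, 1/2)
```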


Optimality condition

The next result provides us with optimality conditions.


Theorem. Let x be a BFS and let c̄ be the corresponding vector of reduced
costs.
(i) If c̄ ≥ 0, then x is optimal.
(ii) If x is optimal and nondegenerate, then c̄ ≥ 0.
To prove (i), we let y be an arbitrary feasible solution, and d = y − x. Feasibility
implies that Ad = 0, and so

B dB + ∑_j Aj dj = 0,

where the summation extends over the nonbasic indices j. Then

dB = − ∑_j dj B^{-1} Aj .


Optimality condition

Therefore,

c^T d = cB^T dB + ∑_j cj dj
      = ∑_j (cj − cB^T B^{-1} Aj) dj
      = ∑_j c̄j dj .

For any nonbasic j we have xj = 0 and yj ≥ 0 (since y is feasible). Therefore,
dj ≥ 0, and c̄j dj ≥ 0.
As a consequence, c^T y − c^T x = c^T d ≥ 0, and x is optimal.
To prove (ii), assume that c̄j < 0. Since the reduced cost of a basic variable is
zero, xj must be nonbasic. Since x is nondegenerate, the j-th basic direction is a
direction of cost decrease, and x cannot be optimal.


Step size

Let d be a basic, feasible, improving direction from the current BFS x, and let B
be the basis matrix for x.
We wish to move by an amount θ > 0 in the direction d in order to find a BFS
x′ adjacent to x. This takes us to the point x + θ∗ d, where

θ∗ = max{θ ≥ 0 : x + θd ∈ P},

and the resulting cost change is θ∗ c^T d.


Since Ad = 0, the equality constraints will never be violated, and x + θd can
become infeasible only if its components become negative.
We want to find the largest possible θ such that xB + θdB ≥ 0.
If d ≥ 0, then x + θd ≥ 0, and θ∗ = ∞.
If di < 0 for some i, then the condition becomes xi + θdi ≥ 0, or θ ≤ −xi /di .
This condition has to hold for all i with di < 0.


Step size

In other words, we are led to the following choice:

θ∗ = min_{i : di < 0} ( −xi / di ) .    (16)

If xi is a nonbasic variable, then either xi is the entering variable and di = 1, or
else di = 0. Therefore, it is sufficient to consider the basic variables only, and thus

θ∗ = min_{i : dri < 0} ( −xri / dri ) .    (17)

Since we are assuming nondegeneracy, xri > 0 and θ∗ > 0.


Once θ∗ has been chosen (and is finite), we move to the next feasible solution.
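The ratio test (17) is straightforward to code. A sketch with illustrative numbers (the xB and dB below are assumptions, not problem data from the text):

```python
# Hedged sketch of the ratio test (17) over the basic components.
def step_size(xB, dB):
    """Largest theta with xB + theta*dB >= 0; returns (theta, leaving index).

    If no component of dB is negative, theta* = infinity (unbounded step)."""
    theta, leaving = float("inf"), None
    for i, (x_i, d_i) in enumerate(zip(xB, dB)):
        if d_i < 0 and -x_i / d_i < theta:
            theta, leaving = -x_i / d_i, i
    return theta, leaving

theta, leaving = step_size([1.0, 1.0], [-1.5, 0.5])
print(theta, leaving)  # theta* = 2/3, and the basic variable at index 0 leaves
```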


Step size

Since xj = 0 and dj = 1, we have yj = θ∗ > 0. Let l be the index saturating the
minimum (17). Then drl < 0 and

xrl + θ∗ drl = 0.

This means that the basic variable xrl has become 0, whereas the nonbasic
variable xj has become positive. This indicates that, in the next iteration, the
index j should replace rl.
In other words, the new basis matrix B̄ is obtained from B by replacing its column
Arl with the column Aj.
The columns Ari, i ≠ l, and Aj are linearly independent and form a new basis
matrix B̄.
The vector y = x + θ∗ d is a BFS corresponding to B̄.


An iteration of the simplex algorithm


We can now summarize a typical iteration of the simplex algorithm as follows:
(i) Start from a basis Ar1 , . . . , Arm of the columns of the matrix A and the
associated BFS x.
(ii) Compute the reduced costs c̄j corresponding to all nonbasic indices j.
If c̄ ≥ 0, then the current BFS x is optimal and the algorithm
exits. Otherwise, choose a nonbasic index j for which c̄j < 0.
(iii) Compute u = B^{-1} Aj. If no component of u is positive, then the optimal
cost is −∞, and the algorithm terminates.
(iv) If some components of u are positive, compute the step size θ∗ using (17).
(v) Let l be the index saturating the minimum in (17). Form a new basis by
replacing Arl with Aj. If y denotes the new BFS, the values of the new basic
variables are yj = θ∗ and yri = xri − θ∗ ui for i ≠ l.
(vi) Replace xrl with xj in the list of basic variables.
If P is nonempty and every BFS is nondegenerate, the simplex method
terminates after finitely many steps. At termination, there are two possibilities:
(i) We have an optimal basis B and an associated BFS that is optimal.
(ii) We have found a d satisfying Ad = 0, d ≥ 0, and c^T d < 0, and the
optimal value is −∞.
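Steps (i)–(vi) can be sketched end-to-end on a tiny problem. The constraint data below match Example 10, but the cost vector c is an assumption chosen for illustration; this is a one-iteration sketch, not a production solver:

```python
# One iteration of the simplex steps (i)-(vi); c is an assumed cost vector.
A = [[1.0, 1.0, 1.0, 1.0],
     [2.0, 0.0, 3.0, 4.0]]
b = [2.0, 2.0]
c = [2.0, 1.0, 1.0, 3.0]
basis = [0, 1]                          # (i) basic indices r_1, r_2

def inv2(M):                            # inverse of a 2x2 matrix
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[ M[1][1] / d, -M[0][1] / d],
            [-M[1][0] / d,  M[0][0] / d]]

def matvec(M, v):
    return [sum(M[i][k] * v[k] for k in range(2)) for i in range(2)]

B_inv = inv2([[A[i][j] for j in basis] for i in range(2)])
xB = matvec(B_inv, b)                   # current BFS: x_B = B^{-1} b
y = [sum(c[basis[i]] * B_inv[i][k] for i in range(2)) for k in range(2)]

# (ii) reduced costs over the nonbasic indices; pick the smallest
# improving index (consistent with Bland's rule)
nonbasic = [j for j in range(4) if j not in basis]
cbar = {j: c[j] - sum(y[i] * A[i][j] for i in range(2)) for j in nonbasic}
j = min(jj for jj in nonbasic if cbar[jj] < 0)

u = matvec(B_inv, [A[0][j], A[1][j]])   # (iii) u = B^{-1} A_j
ratios = [(xB[i] / u[i], i) for i in range(2) if u[i] > 0]
theta, l = min(ratios)                  # (iv)-(v) ratio test (17)

xB = [xB[i] - theta * u[i] for i in range(2)]
xB[l], basis[l] = theta, j              # (vi) x_j enters, x_{r_l} leaves
print(basis, xB)                        # basis [2, 1], x_B approx [2/3, 4/3]
```

With these assumed costs, x3 enters, x1 leaves, and the cost drops by θ∗ c̄3.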


Degenerate problems

Degenerate problems present a number of technical issues, such as:

(i) Which nonbasic variable should enter the basis?
(ii) If more than one basic variable could leave (i.e. more than one basic
variable attains the minimum that gives θ∗), which one should leave?
(iii) The algorithm may cycle forever at the same degenerate BFS.
Various extensions and refinements to the basic method outlined in these notes
have been developed to address the issues above.
For example, Bland’s rule is used to eliminate the risk of cycling:
(i) Among all nonbasic variables that can enter the new basis, select the one
with the minimum index.
(ii) Among all basic variables that can exit the basis, select the one with the
minimum index.
The details are presented in Chapter 3 of [1].


Finding an initial BFS

As usual, starting the iteration may not be easy, and finding an initial
BFS may prove challenging.
One strategy is to solve an auxiliary problem.
For example, if we want a BFS with xi = 0, we set the objective function to xi and
find the optimal solution to this auxiliary problem.
If the optimal value is 0, then we have found a BFS with xi = 0; otherwise, no
such feasible solution exists.


References

[1] Bertsimas, D., and Tsitsiklis, J. N.: Linear Optimization, Athena Scientific
(1997).
[2] Nocedal, J., and Wright, S. J.: Numerical Optimization, Springer (2006).
