Texts in Mathematics
Volume 6
A Mathematical Primer on
Linear Optimization
Volume 1
An Introduction to Discrete Dynamical Systems and their General Solutions
F. Oliveira-Pinto
Volume 2
Adventures in Formalism
Craig Smoryński
Volume 3
Chapters in Mathematics. From π to Pell
Craig Smoryński
Volume 4
Chapters in Probability
Craig Smoryński
Volume 5
A Treatise on the Binomial Theorem
Craig Smoryński
Volume 6
A Mathematical Primer on Linear Optimization
Diogo Gomes, Amílcar Sernadas, Cristina Sernadas, João Rasga, Paulo Mateus
A Mathematical Primer on
Linear Optimization
Diogo Gomes
Amílcar Sernadas
Cristina Sernadas
João Rasga
Paulo Mateus
© Individual author and College Publications 2019. All rights reserved.
ISBN 978-1-84890-315-9
College Publications
Scientific Director: Dov Gabbay
Managing Director: Jane Spurr
Department of Computer Science
King’s College London, Strand, London WC2R 2LS, UK
http://www.collegepublications.co.uk
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system
or transmitted in any form, or by any means, electronic, mechanical, photocopying, recording
or otherwise without prior permission, in writing, from the publisher.
Preface
Our main objective is to provide a self-contained mathematical introduction
to linear optimization for undergraduate students of Mathematics. This book
is equally suitable for Science, Engineering, and Economics students who are
interested in gaining a deeper understanding of the mathematical aspects of the
subject. The linear optimization problem is analyzed from different perspec-
tives: topological, algebraic, geometrical, logical, and algorithmic. Neverthe-
less, no previous knowledge of these subjects is required. The essential details
are always provided in a special section at the end of each chapter. The techni-
cal material is illustrated with multiple examples, problems with fully-worked
solutions, and a range of proposed exercises.
In Chapter 1, the optimization problem is presented along with the con-
cepts of set of admissible vectors and set of optimizers. Then, we discuss the
linear case, including the canonical and the standard optimization problems.
Finally, we relate the general linear optimization problem and the canonical op-
timization problem and analyze the relationship between the canonical and the
standard optimization problems. The relevant background section of this chap-
ter includes some basic algebraic concepts, notation, and preliminary results
namely about groups, fields, and vector spaces.
Chapter 2 explores some topological techniques that provide sufficient con-
ditions for the existence of optimizers in canonical optimization problems. The
chapter starts with the definition of interior and boundary of the set of ad-
missible vectors (later on it is shown that these concepts coincide with the
topological ones) and the proof that optimizers are always on the boundary
of the set of admissible vectors when the objective map is not the zero map.
Then, we provide sufficient conditions for a canonical optimization problem to
have maximizers. In the relevant background section, we provide a modicum
of topological notions and results.
The main objective of Chapter 3 is to provide a way for deciding whether or
not an admissible vector is an optimizer, relying on Farkas’ Lemma. Namely,
we introduce the concept of line active in an admissible vector and prove the
Local Maximizer Theorem and the Maximizer Theorem using convex cones.
Moreover, a technique for deciding whether or not there are admissible vectors
is presented, using Farkas’ Lemma again.
In Chapter 4, linear algebra concepts are used for computing optimizers for
a standard optimization problem. The objective is to find the basic admissible
vectors of the problem at hand. These vectors are relevant since we prove that,
under certain conditions, there is always a basic admissible vector which is
an optimizer. Moreover, the notion of a non-degenerate standard optimization
problem is introduced and a characterization of basic vectors is provided in this
case. In the relevant background section of this chapter, we provide an overview
of notions and results from linear algebra including dimension, span, rank,
determinant of a matrix, singular matrices, and some well-known theorems.
Chapter 5 concentrates on geometrical aspects of linear optimization, namely
it provides a geometrical characterization of the basic admissible vectors of a
standard optimization problem as vertices of an appropriate convex polyhe-
dron. In the relevant background section, we present the relevant results about
affine spaces and subspaces.
Chapter 6 presents several aspects of duality. The objective is to show that
the dual provides yet another technique for finding optimizers of the original prob-
lem. The dual problems are found using the Lagrange Multiplier Technique.
We start by presenting two main results: the Weak and the Strong Duality
Theorems. Then, we concentrate on slacks for the linear optimization problem
and prove the Slack Complementarity Theorem. Afterward, the Equilibrium
Theorem is established as well as the Uniqueness Theorem. Finally, a Logic of
Inequalities is presented and some results are proved relating consistency and
satisfiability of a formula.
Chapter 7 provides an introduction to computational complexity to analyze
the efficiency of linear optimization algorithms. In Section 7.1, we present the
rigorous definition of the decision problems associated with linear optimization,
and in Section 7.2, we discuss the representation of vectors and matrices. In
Section 7.3, we prove that the standard decision problem is in NP. A deter-
ministic algorithm for getting an optimizer whenever there is one, based on
a brute-force method, is discussed in Section 7.4. The complexity of this al-
gorithm is shown not to be polynomial. Finally, in the relevant background
section, we introduce the central notions and techniques that are necessary for
assessing the computational efficiency of an algorithm.
Chapter 8 is targeted at the Simplex Algorithm that allows us to find an op-
timizer, whenever there is one, for the standard optimization problem satisfying
some mild conditions. In Section 8.2, we prove the soundness and completeness
of the Simplex Algorithm and point out that its complexity is not polynomial.
Finally, in Chapter 9, a description of the integer linear optimization prob-
lem is presented along with the respective relaxed problem. Then, we compare
the sets of admissible vectors and optimizers of both problems. The integral-
ity gap is introduced and discussed. Afterward, we discuss totally unimodular
problems and provide a sufficient condition for a matrix to be totally uni-
modular. Furthermore, we give the Assignment Problem as an example. The
importance of this concept is made clear by proving a sufficient condition for
the existence and characterization of the optimizers of totally unimodular prob-
lems. We conclude by presenting and illustrating an algorithm based on the
Branch and Bound Technique for solving integer optimization problems.
Acknowledgements
We would like to express our deepest gratitude to the many undergraduate
math students of Instituto Superior Técnico who attended the Introduction to
Optimization course.
Diogo Gomes was partially supported by King Abdullah University of Sci-
ence and Technology (KAUST) baseline funds and KAUST OSR-CRG2017-
3452. Amílcar Sernadas, Cristina Sernadas and João Rasga acknowledge the
National Funding from Fundação para a Ciência e a Tecnologia (FCT) under
the project UID/MAT/04561/2019 granted to Centro de Matemática, Apli-
cações Fundamentais e Investigação Operacional (CMAFcIO) of Universidade
de Lisboa. Paulo Mateus acknowledges the National Funding from FCT under
project PEst-OE/EEI/LA0008/2019 granted to Instituto de Telecomunicações.
Last but not least, we gratefully acknowledge the excellent working environ-
ment provided by the Department of Mathematics of Instituto Superior Téc-
nico, Universidade de Lisboa.
Contents
1 Optimization Problems
 1.1 General Concepts
 1.2 Linear Optimization
 1.3 Canonical and standard optimization problems
 1.4 Relating Problems
 1.5 Solved Problems and Exercises
 1.6 Relevant Background
2 Optimizers
 2.1 Boundary
 2.2 Existence
 2.3 Solved Problems and Exercises
 2.4 Relevant Background
3 Deciding Optimizers
 3.1 Farkas’ Lemma
 3.2 Using Convex Cones
 3.3 Solved Problems and Exercises
4 Computing Optimizers
 4.1 Basic Vectors
 4.2 Using Basic Vectors
 4.3 Basic Through Counting
 4.4 Solved Problems and Exercises
 4.5 Relevant Background
6 Duality
 6.1 Weak and Strong Duality
 6.2 Complementarity
 6.3 Equilibrium
 6.4 Logic of Inequalities
 6.5 Solved Problems and Exercises
7 Complexity
 7.1 Optimization Decision Problem
 7.2 Representation
 7.3 Non-Deterministic Approach
 7.4 Deterministic Approach
 7.5 Solved Problems and Exercises
 7.6 Relevant Background
Chapter 1
Optimization Problems
The objective of this chapter is to present the optimization problem and its
different formulations.
1.1 General Concepts

Definition 1.1
An (n-dimensional) constraint, with n ∈ N+, is a triple (g, ⋈, b), written
g(x1, . . . , xn) ⋈ b,
where g : Rn → R is a constraint map, ⋈ ∈ {≤, =, ≥} is a binary relation and b ∈ R. When ⋈ is =, we say that the constraint is an equality and when ⋈ is either ≤ or ≥, we say that the constraint is an inequality.
Definition 1.2
An (n-dimensional) description, with n ∈ N+ , is a pair
({(gi, ⋈i, bi) : 1 ≤ i ≤ m}, U)
for some m ∈ N+, where (gi, ⋈i, bi) is an n-dimensional constraint for i = 1, . . . , m and U ⊆ R is a non-empty set.
Definition 1.3
We say that (d1, . . . , dn) ∈ U n is an admissible vector for an n-dimensional description
({gi(x1, . . . , xn) ⋈i bi : 1 ≤ i ≤ m}, U)
when
gi(d1, . . . , dn) ⋈i bi
holds for every i = 1, . . . , m.
Notation 1.1
We denote by
XD
the set of admissible tuples or vectors for a description D.
Definition 1.4
An (n-dimensional) optimization problem P is a tuple
(D, f, ⊴),
where D is an n-dimensional description, f : Rn → R is the objective map and ⊴ ∈ {≤, ≥}. The set of optimizers of P, denoted by SP, is the set of admissible vectors s such that f(x) ⊴ f(s) for every admissible vector x.
Notation 1.2
Given an optimization problem (D, f, ⊴), we denote by
XP
the set XD. Moreover, when ⊴ is ≤, P is a maximization problem and each
element of SP is a maximizer of P . Otherwise, P is a minimization problem
and each element of SP is a minimizer of P .
Remark 1.1
In the sequel, we may use
x 7→ f (x)
for presenting a map f .
Notation 1.3
In the sequel, we present the n-dimensional maximization problem
(D, f, ≤), that is, (({gi(x1, . . . , xn) ⋈i bi : 1 ≤ i ≤ m}, U), f, ≤),
by
max f(x1, . . . , xn)
 (x1,...,xn)
  g1(x1, . . . , xn) ⋈1 b1
  ...
  gm(x1, . . . , xn) ⋈m bm
  x1, . . . , xn ∈ U
or even
max f(x)
 x
  g1(x) ⋈1 b1
  ...
  gm(x) ⋈m bm
  x ∈ U n.
Similarly for ≥ using min instead of max. When U = R, we may omit the
constraint on U .
a1 x1 + · · · + an xn ≤ b
x1, . . . , xn ∈ N.
Definition 1.5
Let ≤ be the binary relation over Rn such that
x ≤ y
whenever xj ≤ yj for every j = 1, . . . , n.
Example 1.7
As an illustration, observe that (0, 1) ≤ (2, 3). Nevertheless, ≤ is not a total relation. For example, neither (1, 0) ≤ (0, 1) nor (0, 1) ≤ (1, 0).
Exercise 1.1
Present x and y in Rn such that
• x ≠ y;
• x ≰ y;
• x ≱ y.
Notation 1.4
Sometimes, it is convenient to group constraints by their associated relational symbol and present a maximization problem as
max f(x1, . . . , xn)
 (x1,...,xn)
  g≤(x1, . . . , xn) ≤ b′
  g≥(x1, . . . , xn) ≥ b′′
  g=(x1, . . . , xn) = b′′′
  x1, . . . , xn ∈ U,
where b′ = (b′1, . . . , b′m′), b′′ = (b′′1, . . . , b′′m′′) and b′′′ = (b′′′1, . . . , b′′′m′′′). Similarly for minimization problems.
1.2 Linear Optimization
Definition 1.6
We say that a maximization problem
max f (x1 , . . . , xn )
(x1 ,...,xn )
g≤(x1, . . . , xn) ≤ b′
g≥(x1, . . . , xn) ≥ b′′
g=(x1, . . . , xn) = b′′′
x1, . . . , xn ∈ U
is linear whenever the objective map f and the constraint maps g≤, g≥ and g= are linear.
Example 1.8
The optimization problem
max 2x1 + x2
(x1 ,x2 )
3x1 − x2 ≤ 6
(x1 − 3x2 , x1 , x2 ) ≥ (−6, 0, 0)
is linear.
Notation 1.5
We denote by
L
the set of all linear optimization problems.
Remark 1.2
From now on, we present the linear maps in an optimization problem by the
induced matrices with respect to the standard basis (see Section 1.6). Moreover,
we assume that each line of the induced matrices is non-null.
Notation 1.6
Taking into account the previous remark, any general linear maximization problem can be presented as
max cx
 x
  A′x ≤ b′
  A′′x ≥ b′′
  A′′′x = b′′′
  x ∈ U n
where
• A′ = (a′ij) ∈ Rm′×n, A′′ = (a′′ij) ∈ Rm′′×n and A′′′ = (a′′′ij) ∈ Rm′′′×n;
• b′ = (b′i) ∈ Rm′×1, b′′ = (b′′i) ∈ Rm′′×1 and b′′′ = (b′′′i) ∈ Rm′′′×1;
• c = (cj) ∈ R1×n.
Similarly for minimization problems. When m′ = 0, then A′ is the empty matrix and similarly for m′′ and m′′′. For simplification, when there is no ambiguity, we omit empty matrices and vectors in the expanded presentation. Furthermore, as before, when U = R, we may omit the constraint on U.
Example 1.9
The linear optimization problem presented in Example 1.8 can be described in expanded matricial form as follows:
max [ 2 1 ] x
 x
  [ 3 −1 ] x ≤ [ 6 ]
  [ 1 −3 ]       [ −6 ]
  [ 1  0 ] x ≥ [  0 ]
  [ 0  1 ]       [  0 ].
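To experiment numerically with small problems such as this one, an off-the-shelf linear programming routine can be used. The sketch below is illustrative only and is not part of the text's development; it assumes NumPy and SciPy are available, and it uses SciPy's linprog, which minimizes and accepts only "≤" and "=" constraints, so the maximization and the "≥" constraint are rewritten first.

```python
# Illustrative sketch (assumes NumPy/SciPy): solve the problem of Example 1.8 numerically.
import numpy as np
from scipy.optimize import linprog

c = np.array([2.0, 1.0])            # objective of the maximization problem
A_ub = np.array([[3.0, -1.0],       # 3x1 - x2 <= 6
                 [-1.0, 3.0]])      # x1 - 3x2 >= -6 rewritten as -x1 + 3x2 <= 6
b_ub = np.array([6.0, 6.0])

# max cx is solved as min (-c)x; the bounds encode x >= 0.
res = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)              # expected optimizer (3, 3) with value 9
```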
• the food components that can be included in a meal range from component 1 to component n;
• cj is the amount of fat in one unit of food component j;
• the nutrients range from nutrient 1 to nutrient m, aij is the amount of nutrient i in one unit of food component j, and bi is the minimum required amount of nutrient i in a meal.
The goal is to minimize the amount of fat in each meal. The only way to do
so is by adjusting the amount of each food component in a meal. Thus, let
xj
be the amount of food component j in the meal. Hence, the total amount of
fat in a meal is given by
c1 x1 + · · · + cn xn.
The goal of the problem is to minimize this quantity taking into account the requirements on the nutrients, which are modeled by
ai1 x1 + · · · + ain xn ≥ bi
for each nutrient i.
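The following sketch assembles a tiny instance of this diet problem and solves it numerically. The data (3 food components, 2 nutrients, and all the coefficients) are made up purely for illustration; NumPy and SciPy are assumed to be available.

```python
# Minimal sketch of the diet problem with hypothetical data; illustrative only.
import numpy as np
from scipy.optimize import linprog

c = np.array([4.0, 2.0, 1.0])          # fat per unit of each food component
A = np.array([[1.0, 2.0, 0.0],         # nutrient 1 per unit of each component
              [0.0, 1.0, 3.0]])        # nutrient 2 per unit of each component
b = np.array([4.0, 6.0])               # minimum required amount of each nutrient

# a_i1 x1 + ... + a_in xn >= b_i is rewritten as -(...) <= -b_i for linprog.
res = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * 3)
print(res.x, res.fun)                  # an optimal meal and its total fat
```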
Exercise 1.2
Let A ∈ Rm×n , b ∈ Rm and f : Rm+n → R be such that
f (x1 , . . . , xn , z1 , . . . , zm ) = z1 + · · · + zm .
(x1 , . . . , xn , z1 , . . . , zm ) ≥ 0
Definition 1.7
A linear maximization problem
max cx
 x
  A′x ≤ b′
  A′′x ≥ b′′
  A′′′x = b′′′
  x ∈ U n
is an integer optimization problem whenever U = N; that is, whenever the last constraint is
x ∈ Nn.

1.3 Canonical and standard optimization problems
Definition 1.8
A linear optimization problem is canonical if it has the form:
max cx
 x
  Ax ≤ b
  x ≥ 0,
where A ∈ Rm×n, b ∈ Rm×1 and c ∈ R1×n.
Notation 1.7
When a problem is known to be canonical, it can simply be presented as a
triple of the form:
(A, b, c).
Notation 1.8
Sometimes, we need to present a canonical optimization problem
max cx
 x
  Ax ≤ b
  x ≥ 0
in pure form, that is, with the non-negativity constraints incorporated in the matricial constraint:
max cx
 x
  [  A ]       [ b ]
  [ −I ] x ≤ [ 0 ].
In this case, when there is no ambiguity, we refer to the matrices [A; −I] and [b; 0] simply as A and b, respectively.
Example 1.12
The following linear optimization problem
max 2x1 + x2
 x
  3x1 − x2 ≤ 6
  −x1 + 3x2 ≤ 6
  x ≥ 0;
that is,
max [ 2 1 ] x
 x
  [  3 −1 ]       [ 6 ]
  [ −1  3 ] x ≤ [ 6 ]
  x ≥ 0,
is canonical.
Exercise 1.3
Let P be a canonical optimization problem. Show that
SP = {s ∈ XP : cs = sup{cx : x ∈ XP}}.
Notation 1.9
We denote by
C
the set of all canonical optimization problems.
Definition 1.9
A linear n-dimensional optimization problem is standard if it has the form
min cx
 x
  Ax = b
  x ≥ 0,
where A ∈ Rm×n, b ∈ Rm×1 and c ∈ R1×n.
Notation 1.10
When a problem is known to be standard, it can simply be presented as a triple
of the form:
(A, b, c).
Example 1.13
The following linear optimization problem
min −2x1 − x2
 x
  3x1 − x2 + x3 = 6
  −x1 + 3x2 + x4 = 6
  x ≥ 0;
that is,
min [ −2 −1 0 0 ] x
 x
  [  3 −1 1 0 ]       [ 6 ]
  [ −1  3 0 1 ] x = [ 6 ]
  x ≥ 0,
is standard.
Notation 1.11
We denote by
S
the set of all standard linear optimization problems.
[Figure 1.1: the set XP of the problem in Example 1.12, together with the level lines 2x1 + x2 = t for t = 8, 9, 10.]
Example 1.14
Denote the canonical optimization problem in Example 1.12 by P. The set XP is depicted in Figure 1.1.
1.4 Relating Problems
Notation 1.12
Given a set Q ⊆ Rk and 1 ≤ i ≤ j ≤ k, we denote by
Q|ij
the set
{(xi, . . . , xj) ∈ Rj−i+1 : (x1, . . . , xi, . . . , xj, . . . , xk) ∈ Q}.
We omit i when i = 1 and j when j = k.
Proposition 1.1
Let LC : L → C be the map taking the general linear optimization problem
min cx
 x
  A′x ≤ b′
  A′′x ≥ b′′
  A′′′x = b′′′
to the canonical optimization problem
max [ −c c ] y
 y
  [  A′  −A′  ]       [  b′  ]
  [ −A′′  A′′ ] y ≤ [ −b′′ ]
  [  A′′′ −A′′′ ]      [  b′′′ ]
  [ −A′′′  A′′′ ]      [ −b′′′ ]
  y ≥ 0.
Then,
XP = XLC(P)|n − XLC(P)|n+1 and SP = SLC(P)|n − SLC(P)|n+1.
Proof:
Assume that A′ is an m′ × n-matrix, A′′ is an m′′ × n-matrix and A′′′ is an m′′′ × n-matrix.
(1) XP = XLC(P)|n − XLC(P)|n+1. Given x ∈ XP, let y ∈ R2n be such that
(yj, yn+j) = (xj, 0) if xj ≥ 0 and (yj, yn+j) = (0, −xj) otherwise,
for each j = 1, . . . , n. Observe that
yj − yn+j = xj.
Hence,
[ A′ −A′ ] y = A′(y1, . . . , yn) − A′(yn+1, . . . , y2n) = A′x ≤ b′.
The other restrictions for y ∈ XLC(P) are proved in a similar way. Thus,
(y1, . . . , yn) ∈ XLC(P)|n and (yn+1, . . . , y2n) ∈ XLC(P)|n+1,
and so x ∈ XLC(P)|n − XLC(P)|n+1. Conversely, given y ∈ XLC(P), a similar computation shows that the vector
(y1 − yn+1, . . . , yn − y2n)
is in XP.
(2) SP = SLC(P)|n − SLC(P)|n+1. Given s ∈ SP, let r ∈ R2n be such that
(rj, rn+j) = (sj, 0) if sj ≥ 0 and (rj, rn+j) = (0, −sj) otherwise,
for each j = 1, . . . , n. Observe that
rj − rn+j = sj.
Let y ∈ XLC(P). Then,
[ −c c ] r = −c(r1, . . . , rn) + c(rn+1, . . . , r2n) = −cs
           ≥ −c((y1, . . . , yn) − (yn+1, . . . , y2n)) = [ −c c ] y.
Thus, r ∈ SLC(P ) and so s ∈ SLC(P ) |n − SLC(P ) |n+1 . The proof of the other
inclusion follows in a similar way. QED
Example 1.15
For instance, LC maps the problem
min [ −3 −5 ] x
 x
  [ −5  2 ]       [ 5 ]
  [  2 −1 ] x ≤ [ 5 ]
  [ 6 1 ] x ≥ [ −7 ]
to the canonical problem
max [ 3 5 −3 −5 ] y
 y
  [ −5  2  5 −2 ]       [ 5 ]
  [  2 −1 −2  1 ] y ≤ [ 5 ]
  [ −6 −1  6  1 ]       [ 7 ]
  y ≥ 0.
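The map LC is easy to mechanize. The sketch below (not the book's notation; NumPy assumed) assembles the canonical data of LC(P) as block matrices and reproduces the computation of Example 1.15; empty constraint groups are passed as arrays with zero rows.

```python
# Illustrative sketch of the map LC of Proposition 1.1 using NumPy block matrices.
import numpy as np

def LC(c, A1, b1, A2, b2, A3, b3):
    """min cx, A1 x <= b1, A2 x >= b2, A3 x = b3  |->  (A, b, c') of
    the canonical problem max c'y, Ay <= b, y >= 0."""
    c, A1, A2, A3 = map(np.atleast_2d, (c, A1, A2, A3))
    b1, b2, b3 = map(np.atleast_1d, (b1, b2, b3))
    A = np.vstack([np.hstack([A1, -A1]),
                   np.hstack([-A2, A2]),
                   np.hstack([A3, -A3]),
                   np.hstack([-A3, A3])])
    b = np.concatenate([b1, -b2, b3, -b3])
    c_new = np.hstack([-c, c]).ravel()
    return A, b, c_new

# Data of Example 1.15 (no equality constraints, hence zero-row A3).
A, b, c_new = LC(c=[-3, -5],
                 A1=[[-5, 2], [2, -1]], b1=[5, 5],
                 A2=[[6, 1]], b2=[-7],
                 A3=np.zeros((0, 2)), b3=np.zeros(0))
print(A)       # rows [-5 2 5 -2], [2 -1 -2 1], [-6 -1 6 1]
print(b)       # [5 5 7]
print(c_new)   # [3 5 -3 -5]
```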
Proposition 1.2
Let SC : S → C be the map taking the standard optimization problem
min cx
 x
  Ax = b
  x ≥ 0
to the canonical optimization problem
max −cx
 x
  [  A ]       [  b ]
  [ −A ] x ≤ [ −b ]
  x ≥ 0.
Then, XP = XSC(P) and SP = SSC(P) for every P ∈ S.
Proof:
Assume that A is an m × n-matrix.
(1) XP = XSC(P). It is enough to observe that, for every i = 1, . . . , m,
ai1 x1 + · · · + ain xn = bi
if and only if
ai1 x1 + · · · + ain xn ≤ bi and −ai1 x1 − · · · − ain xn ≤ −bi.
(2) SP = SSC(P). Immediate from (1), since maximizing −cx over XSC(P) = XP is the same as minimizing cx over XP. QED
Example 1.16
For instance,
the standard problem
min [ −2 −1 0 0 ] x
 x
  [  3 −1 0 1 ]       [ 6 ]
  [ −1  3 1 0 ] x = [ 6 ]
  x ≥ 0
is mapped by SC to
max [ 2 1 0 0 ] x
 x
  [  3 −1  0  1 ]       [  6 ]
  [ −1  3  1  0 ] x ≤ [  6 ]
  [ −3  1  0 −1 ]       [ −6 ]
  [  1 −3 −1  0 ]       [ −6 ]
  x ≥ 0.
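As with LC, the map SC is a simple block construction. The following short sketch (NumPy assumed; illustrative only) applies it to the data of Example 1.16.

```python
# Illustrative sketch of the map SC of Proposition 1.2.
import numpy as np

def SC(A, b, c):
    A, b, c = np.atleast_2d(A), np.atleast_1d(b), np.atleast_1d(c)
    return np.vstack([A, -A]), np.concatenate([b, -b]), -c

A, b, c = SC([[3, -1, 0, 1], [-1, 3, 1, 0]], [6, 6], [-2, -1, 0, 0])
print(A, b, c, sep="\n")   # four <=-rows with right-hand side (6, 6, -6, -6)
```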
Proposition 1.3
Let CS : C → S be the map taking the canonical optimization problem
max cx
 x
  Ax ≤ b
  x ≥ 0
to the standard optimization problem
min [ −c 0 ] x̄
 x̄
  [ A I ] x̄ = b
  x̄ ≥ 0.
Moreover, let f : Rn → Rn+m be the map such that f(x) = (x, b − Ax). Then, f is injective, f(XP) = XCS(P) and f(SP) = SCS(P).
Proof:
Assume that A is an m × n-matrix. It is immediate that f is injective. We
now show that f (XP ) = XCS(P ) :
(⊆) Given x ∈ XP, it is easy to check that f(x) ∈ XCS(P). Indeed,
[ A I ] f(x) = Ax + I(b − Ax) = b,
and f(x) ≥ 0 since x ≥ 0 and b − Ax ≥ 0.
(⊇) Given x̄ ∈ XCS(P), we now show that there exists x ∈ XP such that f(x) = x̄. Take x as the vector with the first n components of x̄ and y the vector with the remaining components. Thus,
x̄ = (x, y).
Then, Ax = b − y ≤ b since y ≥ 0, and x ≥ 0; hence x ∈ XP and f(x) = x̄.
A similar correspondence holds for the optimizers: given s′ ∈ SCS(P), it follows that f−1(s′), that is, the vector with the first n components of s′, is in SP. QED
Example 1.17
For instance,
the canonical problem
max [ 2 1 ] x
 x
  [  3 −1 ]       [ 6 ]
  [ −1  3 ] x ≤ [ 6 ]
  x ≥ 0
is mapped by CS to
min [ −2 −1 0 0 ] x̄
 x̄
  [  3 −1 1 0 ]        [ 6 ]
  [ −1  3 0 1 ] x̄ = [ 6 ]
  x̄ ≥ 0.
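The map CS amounts to appending slack variables. A minimal sketch (NumPy assumed; illustrative only) for the data of Example 1.17:

```python
# Illustrative sketch of the map CS of Proposition 1.3 (slack variables).
import numpy as np

def CS(A, b, c):
    A, b, c = np.atleast_2d(A), np.atleast_1d(b), np.atleast_1d(c)
    m = A.shape[0]
    A_std = np.hstack([A, np.eye(m)])          # [A I]
    c_std = np.concatenate([-c, np.zeros(m)])  # [-c 0]
    return A_std, b, c_std

# Admissible vectors correspond via x |-> (x, b - Ax).
A_std, b_std, c_std = CS([[3, -1], [-1, 3]], [6, 6], [2, 1])
print(A_std, b_std, c_std, sep="\n")
```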
The following result shows that deciding whether or not the set of admissi-
ble vectors of a canonical optimization problem is non-empty, is equivalent to
deciding whether or not the set of optimizers of another canonical optimization
problem is non-empty.
Proposition 1.4
Let P be the n-dimensional canonical optimization problem
max cx
 x
  Ax ≤ b
  x ≥ 0,
with A an m × n-matrix, and let P′ be the (n + m)-dimensional canonical optimization problem
max −ux′
 x′
  [ A −I ] x′ ≤ b
  x′ ≥ 0,
where u = (0, . . . , 0, 1, . . . , 1) ∈ R1×(n+m) has 1 in the last m positions. Then, XP ≠ ∅ if and only if P′ has an optimizer of the form (s′1, . . . , s′n, 0, . . . , 0).
Proof:
Observe that, for every x′ ∈ XP′, −ux′ ≤ 0 since u ≥ 0 and x′ ≥ 0.
(→) Suppose that (x1, . . . , xn) ∈ XP. Then,
(x1, . . . , xn, 0, . . . , 0) ∈ XP′.
Therefore,
(x1, . . . , xn, 0, . . . , 0) ∈ SP′
since −u(x1, . . . , xn, 0, . . . , 0) = 0.
(←) Suppose that (s′1, . . . , s′n, 0, . . . , 0) ∈ SP′. Then, A(s′1, . . . , s′n) ≤ b and (s′1, . . . , s′n) ≥ 0. Therefore, (s′1, . . . , s′n) ∈ XP. QED
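The feasibility test behind this proposition is easy to run numerically. The sketch below assumes the auxiliary problem has the form reconstructed above (with u carrying 1 in the last m positions) and solves it with SciPy's linprog; it is illustrative only and not part of the text.

```python
# Illustrative sketch: is XP non-empty?  Check whether the auxiliary problem of
# Proposition 1.4 has optimal value 0.  Assumes NumPy and SciPy.
import numpy as np
from scipy.optimize import linprog

def admissible_set_nonempty(A, b):
    A, b = np.atleast_2d(A), np.atleast_1d(b)
    m, n = A.shape
    A_aux = np.hstack([A, -np.eye(m)])                 # [A -I]
    u = np.concatenate([np.zeros(n), np.ones(m)])
    # max -u x' is solved as min u x'; the auxiliary problem is always feasible.
    res = linprog(u, A_ub=A_aux, b_ub=b, bounds=[(0, None)] * (n + m))
    return res.status == 0 and np.isclose(res.fun, 0.0)

print(admissible_set_nonempty([[3, -1], [-1, 3]], [6, 6]))   # True
print(admissible_set_nonempty([[1, 0], [-1, 0]], [-1, -1]))  # False: x1 <= -1 and x1 >= 1
```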
Exercise 1.4
Let SP : S → S be the map taking the standard optimization problem
min cx
 x
  Ax = b
  x ≥ 0
to the standard optimization problem
min [ c 0 ] y
 y
  [  A+  0 0 ]       [  b+ ]
  [ −A−  0 0 ] y = [ −b− ]
  [  A0  1 1 ]       [  1  ]
  [  0   1 1 ]       [  1  ]
  y ≥ 0,
where
• A+ and b+ are obtained from A and b by removing the lines i such that bi ≤ 0, respectively;
• A− and b− are obtained from A and b by removing the lines i such that bi ≥ 0, respectively;
• A0 is obtained from A by removing the lines i such that bi ≠ 0.
Exercise 1.5
Define a map for getting a canonical optimization problem with positive re-
striction vector from a given canonical problem in such a way that the sets of
admissible vectors and optimizers are equivalent.
1.5 Solved Problems and Exercises
Solution:
Assume that
• the weekly CO2 emission for the transportation of a unit of the good
between factory i and distribution center j is eij .
The goal is to minimize the total weekly CO2 emission produced by the trans-
portation of the good between the factories and the distribution centers. The
objective is attained by picking up the adequate number of units that should
be transported from factory i to distribution center j that we represent by
yij .
We want to minimize this quantity taking into account the following conditions:
• y1j + · · · + ymj ≥ vj for j = 1, . . . , n; that is, the total weekly amount of units of the good transported to the distribution center j should be at least vj;
• yi1 + · · · + yin ≤ ui for i = 1, . . . , m; that is, the total weekly amount of units of the good transported from factory i is at most ui.
where
aij = 1 if β(j)1 = i, and aij = 0 otherwise,
and
a′ij = 1 if β(j)2 = i, and a′ij = 0 otherwise.
Problem 1.2
Consider the following linear optimization problem P :
max 2x1 + x2
x
−x1 − x2 ≤ −1
x1 + x2 ≤ 3
x1 − x2 ≤ 1
−x1 + x2 ≤ 1
x ≥ 0.
Solution:
(1) The matricial form is as follows:
max [ 2 1 ] x
 x
  [ −1 −1 ]       [ −1 ]
  [  1  1 ]       [  3 ]
  [  1 −1 ] x ≤ [  1 ]
  [ −1  1 ]       [  1 ]
  x ≥ 0.
[Figure: graphical representation of the set XP of Problem 1.2.]
and
[ 2 1 ] (1, 1) = 3.
min −2x1 − x2
 x
  −x1 − x2 + x3 = −1
  x1 + x2 + x4 = 3
  x1 − x2 + x5 = 1
  −x1 + x2 + x6 = 1
  x ≥ 0
or in matricial form
min [ −2 −1 0 0 0 0 ] x
 x
  [ −1 −1 1 0 0 0 ]       [ −1 ]
  [  1  1 0 1 0 0 ]       [  3 ]
  [  1 −1 0 0 1 0 ] x = [  1 ]
  [ −1  1 0 0 0 1 ]       [  1 ]
  x ≥ 0.
Finally, the canonical optimization problem SC(CS(P)) is
max 2x1 + x2
 x
  −x1 − x2 + x3 ≤ −1
  +x1 + x2 + x4 ≤ 3
  +x1 − x2 + x5 ≤ 1
  −x1 + x2 + x6 ≤ 1
  +x1 + x2 − x3 ≤ 1
  −x1 − x2 − x4 ≤ −3
  −x1 + x2 − x5 ≤ −1
  +x1 − x2 − x6 ≤ −1
  x1, . . . , x6 ≥ 0
or in matricial form:
max [ 2 1 0 0 0 0 ] x
 x
  [ −1 −1  1  0  0  0 ]       [ −1 ]
  [  1  1  0  1  0  0 ]       [  3 ]
  [  1 −1  0  0  1  0 ]       [  1 ]
  [ −1  1  0  0  0  1 ] x ≤ [  1 ]
  [  1  1 −1  0  0  0 ]       [  1 ]
  [ −1 −1  0 −1  0  0 ]       [ −3 ]
  [ −1  1  0  0 −1  0 ]       [ −1 ]
  [  1 −1  0  0  0 −1 ]       [ −1 ]
  x ≥ 0.
Observe that SC(CS(P)) ≠ P.
Exercise 1.6
Consider the following linear optimization problem P :
min −3x1 − 4x2
x
−8x1 − 3x2 ≥ −5
−6x1 + 4x2 ≥ 5
2x1 − x2 ≤ 2.
Exercise 1.7
Consider the following canonical optimization problem P :
max 2x1 + x2
x
x1 ≤ 5
4x1 + x2 ≤ 25
x ≥ 0.
1.6 Relevant Background
Definition 1.10
A group is a pair
(G, +),
where
• G is a non-empty set;
• + : G2 → G is a map;
such that
• x + (y + z) = (x + y) + z;
• there is an element 0 ∈ G such that x + 0 = 0 + x = x;
• for each x ∈ G there is y ∈ G such that x + y = y + x = 0;
for every x, y, z ∈ G. A group (G, +) is Abelian whenever x + y = y + x for every x, y ∈ G.
Example 1.18
The pair
(R, +),
where + is the sum of real numbers, is an Abelian group. On the other hand,
(N, +), where + is the sum of natural numbers, is not a group.
Example 1.19
Let (G, +) be a group. Then, the pair
(Gn , +),
where
• Gn = {(x1, . . . , xn) : xj ∈ G, j = 1, . . . , n};
• + : (Gn)2 → Gn is defined componentwise, that is, (x1, . . . , xn) + (y1, . . . , yn) = (x1 + y1, . . . , xn + yn);
is a group.
Remark 1.3
Given a group (G, +) and x ∈ G, we denote by
−x
the unique element of G such that
x + (−x) = 0.
Definition 1.11
A field is a tuple
(K, +, ×),
where
• K is a non-empty set;
• +, × : K 2 → K;
such that
• (K, +) is an Abelian group with identity 0;
• x × (y × z) = (x × y) × z;
• x × y = y × x;
• there is an element 1 ∈ K such that x × 1 = x;
• for each x ∈ K with x ≠ 0 there is y ∈ K such that x × y = 1;
• x × (y + z) = (x × y) + (x × z);
• 0 ≠ 1;
for every x, y, z ∈ K.
Example 1.20
The tuple
(Q, +, ×),
where + and × are the usual operations over rational numbers, is a field.
Furthermore, the tuple
(R, +, ×),
where + and × are the usual operations over real numbers, is a field.
Notation 1.13
In the sequel, we may denote a field (K, +, ×) by K.
Definition 1.12
A vector space over a field K is an Abelian group (V, +) with a map
(α, x) 7→ αx : K × V → V
satisfying the following properties:
• (αµ)x = α(µx);
• (α + µ)x = αx + µx;
• α(x + y) = αx + αy;
• 1x = x, where 1 is the multiplicative identity of K.
The elements of V and the elements of K are called vectors and scalars, re-
spectively.
Example 1.21
Let K be a field, {v} a singleton set and + : {v}2 → {v} such that v + v = v.
Then, ({v}, +) and the map (α, v) 7→ v : K × {v} → {v} is a vector space over
K.
Notation 1.14
Consider the vector space introduced in Example 1.21. We start by observing
that v is the identity of the group and so is usually denoted by 0. Furthermore,
we denote such a vector space by
{0}K
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
1.6. RELEVANT BACKGROUND 33
Definition 1.13
The vector space induced by a field K and n ∈ N+, denoted by
Kn,
is the Abelian group (Kn, +) of Example 1.19 endowed with the multiplication by scalar
(α, (β1, . . . , βn)) ↦ (α × β1, . . . , α × βn).
Its zero vector is
(0, . . . , 0).
Example 1.22
We denote by
Rn
the vector space induced by the field R and n ∈ N+.
Definition 1.14
A subspace S of a vector space V over a field K is a subset of V containing 0,
closed under vector addition and multiplication by scalar.
Proposition 1.5
A subspace S of a vector space V over a field K, with the addition and multiplication by scalar induced by V on S, is a vector space over K.
Example 1.23
The smallest subspace of a vector space V over a field K is {0}K . That is,
{0}K ⊆ V1
for every subspace V1 of V.
Definition 1.15
Let V be a vector space over a field K and V1 , V2 ⊆ V . Then,
V1 + V2,
the sum of V1 and V2, is the set
{v1 + v2 : v1 ∈ V1 and v2 ∈ V2}.
Proposition 1.6
The class of all subspaces of a vector space over a field K is closed under
intersections and finite sums. On the other hand, it is not closed under unions, except in trivial cases.
Definition 1.16
Let V be a vector space over a field K and U ⊆ V . Then, the set
spanV (U ),
the span of U in V , is
the intersection
⋂ {V1 : V1 is a subspace of V and U ⊆ V1}.
Proposition 1.7
Let V be a vector space over a field K and U ⊆ V . Then, spanV (U ) is a
subspace of V .
Example 1.24
Let V be a vector space over a field K. Note that
spanV(∅) = {0}K;
that is, the span of the empty set is the subspace whose only element is the 0 vector of V.
Proposition 1.8
Let V be a vector space over a field K and V1, V2 subspaces of V. Then,
spanV(V1 ∪ V2) = V1 + V2.
Proposition 1.9
Let V be a vector space over a field K and U ⊆ V . Then,
spanV(U) = { α1 u1 + · · · + αk uk : k ∈ N, αj ∈ K, uj ∈ U }.
Hence, the span of a set U is the set of all linear combinations of vectors in U.
We now discuss finite-dimensional vector spaces, and begin by stating their
definition.
Definition 1.17
A vector space V over a field K is finite-dimensional whenever there is a finite
set U ⊆ V such that spanV (U ) = V .
Remark 1.4
From now on, when referring to vector spaces we mean finite-dimensional vector
spaces.
Definition 1.18
A finite set U is linearly independent in a vector space V over a field K whenever
U ⊆ V and for any u1 , . . . , un ∈ U and α1 , . . . , αn ∈ K, if
α1 u1 + · · · + αn un = 0,
then α1 = · · · = αn = 0.
Example 1.25
The sets
{(1, 0), (0, 1)} and {(2, 2), (−3, 2)}
are linearly independent in R2 . On the other hand, the set
{(1, 1), (2, 2)}
is not linearly independent in R2 . Indeed, taking α1 = 2 and α2 = −1 we get
2(1, 1) − (2, 2) = 0.
Example 1.26
The set
∅
is linearly independent in every vector space.
Proposition 1.10
Let V be a vector space over a field K and U1 ⊆ U2 ⊆ V . Then, U1 is linearly
independent in V whenever U2 is linearly independent in V .
Definition 1.19
Let V be a vector space over a field K. A finite set B is a basis of V whenever
V = spanV (B)
and B is a linearly independent set in V .
Example 1.27
The set {(1, 0), (0, 1)} is a basis of R2 .
Definition 1.20
The standard or canonical basis of a vector space K n induced by a field K is
composed by the vectors
e1 = (1, 0, . . . , 0), e2 = (0, 1, 0, . . . , 0), . . . , en = (0, . . . , 0, 1).
Example 1.28
Let K be a field. Observe that
∅ is a basis of {0}K
taking into account Example 1.24 and Example 1.26.
Proposition 1.11
Let V be a vector space over a field K and U ⊆ V a finite set with spanV (U ) =
V . Then, there is a linearly independent set B ⊆ U in V such that spanV (B) =
V.
Proposition 1.12
Every vector space over a field K has a basis. Moreover, all bases have the
same number of elements.
We now define maps between vector spaces over the same field.
Definition 1.21
Let V and W be vector spaces over the same field K. We say that a map
h : V → W is linear whenever the following properties hold:
• h(x + y) = h(x) + h(y);
• h(αx) = αh(x);
for x, y ∈ V and α ∈ K.
Exercise 1.8
Show that h is linear if and only if h(αx + βy) = αh(x) + βh(y).
Example 1.29
Let W be a vector space over K. Then, h : {0}K → W such that h(0) = 0 is a
linear map.
Definition 1.22
Let V and W be vector spaces over the same field K with bases v1 , . . . , vn and
w1 , . . . , wm , respectively. The m × n-matrix, or simply the matrix , induced by
the linear map h : V → W with respect to the given bases, denoted by
M(h, (v1, . . . , vn), (w1, . . . , wm)),
is an m-by-n tuple
(aij) ∈ K m×n
such that
h(vj ) = a1j w1 + · · · + amj wm .
Notation 1.15
When either V or W is {0}K , the matrix induced by the linear map h : V → W
is called the empty matrix , denoted by
[ ].
Notation 1.16
When V and W are the vector spaces Rn and Rm over R, respectively, and
v1 , . . . , vn and w1 , . . . , wm are their standard or canonical bases, we may write
M (h)
for M (h, (v1 , . . . , vn ), (w1 , . . . , wm )).
Example 1.30
Let h : R2 → R3 be such that h(x1 , x2 ) = (x1 − 3x2 , x1 , x2 ). Then,
M (h)
is the matrix
[ 1 −3 ]
[ 1  0 ]
[ 0  1 ].
Remark 1.5
In the sequel, we may refer to a vector (x1, . . . , xn) ∈ Rn as the column vector
[ x1 ]
[ ... ]
[ xn ].
Remark 1.6
In the sequel, we may present a matrix without referring to the underlying
linear map.
Definition 1.23
A submatrix of a matrix is obtained by deleting any collection of rows and/or
columns of the matrix.
Chapter 2
Optimizers
2.1 Boundary
The first insight provided by the topological analysis of the linear optimization
problem is the characterization of the subset of the set of admissible vectors
where we should look for the optimizers. We start by defining the interior and
the boundary of the set of admissible vectors. It turns out that these notions
agree with the corresponding topological notions.
Definition 2.1
The interior of the set of admissible vectors of a canonical n-dimensional op-
timization problem P
max cx
 x
  Ax ≤ b
  x ≥ 0,
denoted by
XP◦,
is the set {x ∈ Rn : Ax < b and x > 0}. The boundary of XP, denoted by
∂XP,
is the set XP \ XP◦.
Example 2.1
Recall Example 1.12 and Example 1.14. Note that
• XP◦ = {(x1, x2) ∈ XP : 3x1 − x2 < 6, −x1 + 3x2 < 6, x1 > 0, x2 > 0};
• ∂XP is the union
{(x1, x2) ∈ XP : 3x1 − x2 = 6}
∪ {(x1, x2) ∈ XP : −x1 + 3x2 = 6}
∪ {(x1, x2) ∈ XP : x1 = 0}
∪ {(x1, x2) ∈ XP : x2 = 0}.
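These membership tests are just strict versus non-strict inequalities, so they are easy to check numerically. The sketch below (NumPy assumed; illustrative only) does so for the problem of Example 1.12.

```python
# Illustrative check of interior/boundary membership for the problem of Example 1.12.
import numpy as np

A = np.array([[3.0, -1.0], [-1.0, 3.0]])
b = np.array([6.0, 6.0])

def in_interior(x):
    x = np.asarray(x, dtype=float)
    return bool(np.all(A @ x < b) and np.all(x > 0))

def on_boundary(x):
    x = np.asarray(x, dtype=float)
    admissible = np.all(A @ x <= b) and np.all(x >= 0)
    return bool(admissible and not in_interior(x))

print(in_interior([1.0, 1.0]))   # True: all constraints strict
print(on_boundary([2.0, 0.0]))   # True: x2 = 0 is active
print(on_boundary([3.0, 3.0]))   # True: both matricial constraints are active
```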
We are now ready to prove that the interior of the set of admissible vectors
of a linear optimization problem coincides with the topological interior (see
Section 2.4). Recall that each line of matrix A is non-null (see Remark 1.2).
The first step is to characterize, from a topological point of view, the constraint
map (as well as the objective map).
Proposition 2.1
Every linear map from Rn to Rm is continuous.
Proof:
Observe that |xj| ≤ ‖x‖, for every j = 1, . . . , n, by Proposition 2.12. Therefore,
|x1| + · · · + |xn| ≤ n‖x‖ (†).
Let e1, . . . , en be the standard basis of Rn and µ = max{‖f(ej)‖ : j = 1, . . . , n}. Then,
‖f(x)‖ = ‖x1 f(e1) + · · · + xn f(en)‖ ≤ |x1| ‖f(e1)‖ + · · · + |xn| ‖f(en)‖ ≤ µ (|x1| + · · · + |xn|) ≤ µn‖x‖ by (†);
that is,
‖f(x)‖ ≤ µn‖x‖ (‡).
Finally, we show that f is continuous. Consider two cases:
(1) µ = 0. Then, f (x) = 0 for every x (by positive definiteness). It is immedi-
ate that f is continuous.
(2) µ ≠ 0. Let {xk}k∈N be a sequence converging to x. We show that {f(xk)}k∈N is a sequence converging to f(x). Take any δ > 0. Note that
δ/(µn) ∈ R+.
Since {xk}k∈N converges to x, there exists j such that
‖xm − x‖ < δ/(µn)
for every m ≥ j. Hence, for every m ≥ j, it follows that
‖f(xm) − f(x)‖ = ‖f(xm − x)‖   (by linearity of f)
              ≤ µn‖xm − x‖     (by (‡))
              < δ.
Thus, {f(xk)}k∈N converges to f(x). QED
Proposition 2.2
The objective and the constraint maps of a (pure) canonical optimization prob-
lem are continuous.
Proof:
Immediate by Proposition 2.1 taking into account that, by definition, the ob-
jective and the constraint maps of a linear optimization problem are linear
maps. QED
Proposition 2.3
Consider a canonical linear optimization problem P . Then,
XP◦ = int(XP ).
Proof:
Assume that P is in pure form and that A is an m × n matrix.
(⊆) Let x ∈ XP◦ . Then,
Ax < b.
We must find ε > 0 such that
Bε (x) ⊂ XP .
Let
δ = min{(bi − (Ax)i ) : 1 ≤ i ≤ m + n}.
Note that δ > 0. Since x 7→ Ax is continuous (by Proposition 2.2), then, by
Proposition 2.23, there exists ε > 0 such that
‖Ay − Ax‖ < δ whenever ‖y − x‖ < ε.
We now show that Bε(x) ⊆ XP◦. Let y ∈ Bε(x). Then, from ‖Ay − Ax‖ < δ, using Proposition 2.12, it follows that
(Ay)i − (Ax)i < δ
for every i = 1, . . . , m + n. Therefore,
(Ay)i < δ + (Ax)i ≤ bi − (Ax)i + (Ax)i = bi.
(⊇) Assume that x ∈ int(XP ). Then, let ε > 0 be such that Bε (x) ⊂ XP . Take
v ∈ Rn such that
• vj ≠ 0 for j = 1, . . . , n;
• ai1 v1 + · · · + ain vn ≠ 0 for i = 1, . . . , m (see Remark 1.2);
• ‖v‖ < ε.
We now show that
x ∈ XP◦.
Indeed:
(1) xj > 0 for j = 1, . . . , n. Observe that x + v and x − v are in Bε(x) ⊂ XP, and so
(†) xj + vj ≥ 0 and (‡) xj − vj ≥ 0.
Hence, xj ≥ |vj| > 0 since vj ≠ 0.
(2) (Ax)i < bi for i = 1, . . . , m. From A(x + v) ≤ b and A(x − v) ≤ b it follows that (Ax)i + |ai1 v1 + · · · + ain vn| ≤ bi, and so (Ax)i < bi since ai1 v1 + · · · + ain vn ≠ 0.
Thus, x ∈ XP◦. QED
Exercise 2.1
Let P be a canonical optimization problem. Show that ∂XP = bn(XP ).
x3
(0,0,3) (0,3,3)
(2,4,3)
(4,0,3)
(0,3,0)
(0,0,0)
x2
(2,4,0)
(4,0,0)
(12,4,3)
(12,4,0)
x1
Example 2.2
Consider the following linear optimization problem P
max 3x1 + 3x2 + 2x3
(x1 ,x2 ,x3 )
x2 ≤ 4
x3 ≤ 3
2x2 − x1 ≤ 6
−2x2 + x1 ≤ 4
x1 , x2 , x3 ≥ 0
with set of admissible vectors depicted in Figure 2.1. We now present examples
of vectors in the interior and in the boundary of XP . We start by showing that
(2, 2, 2) ∈ XP◦ .
To do so, we first prove that
(2, 2, 2) ∈ int(XP)
and then use Proposition 2.3.
such that Bε(2, 2, 2) ⊂ XP. Take ε equal to 1/2. It remains to show that B1/2(2, 2, 2) ⊆ XP. Let y ∈ B1/2(2, 2, 2). Then,
‖(2, 2, 2) − (y1, y2, y3)‖ < 1/2.
That is,
(2 − y1)² + (2 − y2)² + (2 − y3)² < 1/4.
Thus, (2 − y1)² < 1/4, (2 − y2)² < 1/4 and (2 − y3)² < 1/4. Therefore,
|2 − y1| < 1/2, |2 − y2| < 1/2 and |2 − y3| < 1/2.
So,
3/2 < y1 < 5/2, 3/2 < y2 < 5/2 and 3/2 < y3 < 5/2.
Capitalizing on these bounds, it is immediate that y ∈ XP . So, (2, 2, 2) ∈
int(XP ). Hence,
(2, 2, 2) ∉ bn(XP).
Moreover, (2, 2, 2) ∈ XP◦, by Proposition 2.3. Furthermore, (2, 2, 2) ∉ ∂XP, by Exercise 2.1. We now prove that
(12, 4, 2) ∈ ∂XP.
Observe that (12, 4, 2) is in the set
{(x1, x2, x3) ∈ XP : x2 = 4}.
We start by showing that
(12, 4, 2) ∈ bn(XP)
and then use Exercise 2.1. Hence, we must prove that for every real number
ε > 0 there is y ∈ Bε (12, 4, 2) ∩ XP and there is y ∈ Bε (12, 4, 2) ∩ (R3 \ XP ).
Let ε > 0 be a real number. Observe that
(12, 4, 2) ∈ Bε(12, 4, 2) ∩ XP.
On the other hand, (12 + ε/2, 4, 2) ∈ Bε(12, 4, 2) ∩ (R3 \ XP), since −2 × 4 + (12 + ε/2) > 4. Hence, (12, 4, 2) ∈ bn(XP) and so, by Exercise 2.1, (12, 4, 2) ∈ ∂XP.
Proposition 2.4
Let P be a canonical optimization problem. If c 6= 0 then
SP ∩ XP◦ = ∅.
Proof:
The result is established by contradiction. Take s ∈ SP ∩ XP◦ . Then, As < b
and s > 0. Consider the family of vectors
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
2.1. BOUNDARY 47
Then,
csε > cs
for any ε ∈ R+ , because cεcT = εccT > 0, since c 6= 0.
We now show that there is ε such that sε ∈ XP . We concentrate first on
choosing a set D such that sε ≥ 0 for every ε ∈ D. Let
D = {ε ∈ R+ : ε ≤ −sj/cj for j = 1, . . . , n whenever cj < 0}.
Consider two cases:
(1) cj ≥ 0 for every j. Then, for every j, sεj = sj + εcj ≥ 0 since sj > 0 and
cj ≥ 0. So, sε ≥ 0 for every ε ∈ R+ = D.
(2) cj < 0 for some j. Observe that
D = (0, min{−sj/cj : cj < 0}].
Let k ∈ {1, . . . , n}. When ck ≥ 0, then sεk ≥ 0, similarly to (1). Otherwise,
sεk = sk + ε ck ≥ sk + (−sk/ck) ck = 0.
We now identify a set E such that Asε ≤ b for every ε ∈ E. Let
E = {ε ∈ R+ : ε ≤ (bi − (As)i)/((AcT)i) for i = 1, . . . , m whenever (AcT)i > 0}.
Consider two cases:
(1) (AcT )i ≤ 0 for every i = 1, . . . , m. Then, (As)i + ε(AcT )i < bi since
(As)i < bi and (AcT )i ≤ 0. Thus, Asε ≤ b.
(2) (AcT)i > 0 for some i. Observe that
E = (0, min{(bi − (As)i)/((AcT)i) : (AcT)i > 0}].
Let k ∈ {1, . . . , m}. When (AcT)k ≤ 0, then (Asε)k ≤ bk, similarly to (1). Otherwise,
(Asε)k = (As)k + ε (AcT)k ≤ (As)k + ((bk − (As)k)/((AcT)k)) (AcT)k = bk.
In any case,
D ∩ E ≠ ∅.
Pick up ε ∈ D ∩ E. Then, sε is admissible and csε > cs contradicting the
hypothesis that s ∈ SP . QED
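The construction used in this proof is easy to see in action. The short sketch below (NumPy assumed; purely illustrative, not part of the proof) takes an interior admissible vector of the problem of Example 1.12 and verifies that a small step in the direction cT stays admissible while strictly improving the objective.

```python
# Illustrative check of the improving step s^eps = s + eps * c^T from Proposition 2.4.
import numpy as np

A = np.array([[3.0, -1.0], [-1.0, 3.0]])   # problem of Example 1.12
b = np.array([6.0, 6.0])
c = np.array([2.0, 1.0])

s = np.array([1.0, 1.0])                   # interior admissible vector (see Example 2.1)
eps = 0.1
s_eps = s + eps * c

print(np.all(A @ s_eps <= b), np.all(s_eps >= 0))   # still admissible: True True
print(c @ s_eps > c @ s)                             # strictly larger objective value: True
```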
[Figure 2.2: the set XP from Figure 1.1, with the interior and the boundary shown in different colors.]
In other words, if c ≠ 0 then the maximizers of the problem must lie on the
boundary of the set of admissible vectors.
Example 2.3
Consider the canonical optimization problem P in Example 1.12 and the graph-
ical representation of XP in Figure 1.1. In Figure 2.2, we use different colors to
represent the interior and the boundary of XP. Since c ≠ 0, by Proposition 2.4,
any maximizer of P is in the darker part of the representation of XP .
2.2 Existence
The objective of this section is to provide sufficient conditions for the existence
of optimizers. We start by proving a result about the existence of optimizers
of a canonical optimization problem when b is 0.
Proposition 2.5
In a canonical optimization problem, if b = 0 then either 0 is a maximizer or
the objective map has no upper bound on the set of admissible vectors.
Proof:
Let P be a canonical optimization problem. Note that 0 is admissible. If 0 is
a maximizer, the thesis follows immediately. Otherwise, observe that there is y ∈ XP such that cy > 0, since 0 is admissible but not a maximizer. We show that the set
{cx : x ∈ XP}
has no upper bound. Let r ∈ R and take z equal to
((|r| + 1)/(cy)) y.
Observe that z ≥ 0, since y ≥ 0 and cy > 0. Moreover,
Az = A ((|r| + 1)/(cy)) y = ((|r| + 1)/(cy)) Ay ≤ 0 = b,
and so z ∈ XP. Furthermore,
cz = c ((|r| + 1)/(cy)) y = ((|r| + 1)/(cy)) cy = |r| + 1 > r.
Thus, the objective map has no upper bound on XP. QED
Proposition 2.6
The set {x ∈ Rn : x ≥ 0} is closed.
Proof:
Let {xk }k∈N be a sequence in {x ∈ Rn : x ≥ 0} converging to z ∈ Rn . Moreover,
let {ak }k∈N be a sequence in Rn such that ak = 0 for every k ∈ N. Observe
that {ak }k∈N converges to 0 ∈ Rn by Proposition 2.17. On the other hand,
xk ≥ ak for every k ∈ N. Hence, by Proposition 2.16, z ≥ 0. Thus, z ∈ {x ∈
Rn : x ≥ 0}. So, by Proposition 2.18, set {x ∈ Rn : x ≥ 0} is closed. QED
Proposition 2.7
Let A be an m × n-matrix and b ∈ Rm . Then, {x ∈ Rn : Ax ≤ b} is closed.
Proof:
Let {xk}k∈N be a sequence in {x ∈ Rn : Ax ≤ b} converging to z ∈ Rn. Then, Axk ≤ b for every k ∈ N. Since x ↦ Ax is continuous (by Proposition 2.1), the sequence {Axk}k∈N converges to Az. Hence, by Proposition 2.16, Az ≤ b and so z ∈ {x ∈ Rn : Ax ≤ b}. Thus, by Proposition 2.18, the set {x ∈ Rn : Ax ≤ b} is closed. QED
Proposition 2.8
The set of admissible vectors of a canonical optimization problem is closed.
Proof:
Let P be a canonical optimization problem. Observe that
XP = {x ∈ Rn : x ≥ 0} ∩ {x ∈ Rn : Ax ≤ b}.
Hence, XP is closed by Proposition 2.6, Proposition 2.7 and Proposition 2.19. QED
Proposition 2.9
The set of optimizers of a canonical optimization problem is closed.
Proof:
Let P be a canonical optimization problem. If SP = ∅ then SP is closed (see
Remark 2.2). Otherwise, let {sk }k∈N be a sequence in SP converging to s. By
Proposition 2.2, the map x 7→ cx is continuous and so the sequence {csk }k∈N
converges to cs. On the other hand, since each sk ∈ SP , then
csk = max{cx : x ∈ XP }
for every k. Hence, {csk }k∈N is a constant sequence and so, by Proposition 2.15,
{csk }k∈N converges to max{cx : x ∈ XP }. Therefore,
cs = max{cx : x ∈ XP}.
Hence, s ∈ SP. Thus, by Proposition 2.18, SP is closed. QED
Theorem 2.1
When the set of admissible vectors is non-empty and bounded, there exists a
maximizer for a canonical optimization problem.
Proof:
Let P be a canonical optimization problem. The set XP is bounded, by hypoth-
esis, and closed, by Proposition 2.8. Hence, XP is compact (see Definition 2.12).
Observe that, the objective map x 7→ cx is continuous, by Proposition 2.2.
Thus, since XP 6= ∅, by Proposition 2.28, the objective map x 7→ cx has a
maximum in XP . Therefore, P has a maximizer. QED
Example 2.4
Recall the canonical optimization problem P in Example 2.2. We now show
that XP is bounded. We must prove that there is a positive real number µ
such that
||x|| ≤ µ
for every x ∈ XP . Take µ = 13. Let x ∈ XP . Then,
x2 ≤ 4, x3 ≤ 3 and − 2x2 + x1 ≤ 4.
Hence,
x1 ≤ 4 + 2x2 ≤ 12.
Therefore,
‖x‖ = (x1² + x2² + x3²)^(1/2) ≤ (12² + 4² + 3²)^(1/2) = 13.
On the other hand,
XP ≠ ∅
since (2, 2, 2) ∈ XP , as shown in Example 2.2. Thus, by Theorem 2.1,
SP ≠ ∅.
Suppose now that there is a positive real number µ such that
‖x‖ ≤ µ
for every x ∈ XP′. Consider the vector (1, 1, µ). It is immediate that (1, 1, µ) ∈ XP′. On the other hand,
‖(1, 1, µ)‖ = (1 + 1 + µ²)^(1/2) > µ,
which is a contradiction. Hence, XP′ is not a bounded set.
The next proposition uses Theorem 2.1 for standard optimization problems.
Proposition 2.10
Let P be a standard optimization problem with XP 6= ∅. Then, SP 6= ∅
provided that XSC(P ) is a bounded set.
Proof:
Note that XSC(P) ≠ ∅ since XSC(P) = XP (see Proposition 1.2). Because, by
hypothesis, XSC(P ) is a bounded set, then, by Theorem 2.1, SSC(P ) 6= ∅. Thus,
SP 6= ∅, again by Proposition 1.2. QED
2.3 Solved Problems and Exercises

(1) Choose A and b in such a way that XP contains the points (1, 0) and (0, 1).
(2) Show that XP is bounded.
(3) Choose an interior point of XP and show that it is in int(XP).
Solution:
(1) Recall that the equation of a line has the form
x2 = d x1 + e.
Proceeding in this way for each of the three constraints, one obtains a matrix A whose last line is
[ 1 −1 ]
and the vector
      [ 1 ]
b =  [ 3 ]
      [ 1 ].
(2) We must show that there is a positive real number µ such that ||x|| ≤ µ for
every x ∈ XP . Take µ = 5 and let x ∈ XP . Observe that
• x2 ≤ 3 because x1 + x2 ≤ 3 and x1 ≥ 0;
• x1 ≤ 3 because x1 − x2 ≤ 1 and x2 ≥ 0.
Therefore,
‖x‖ ≤ (3² + 3²)^(1/2) ≤ 5.
(3) Consider the vector (1, 1). It is immediate that this vector is in XP◦. Then, (1, 1) is in int(XP) (see Proposition 2.3). /
Exercise 2.2
Consider the following canonical optimization problem P
max x1 + 4x2
 (x1,x2)
  x1 ≤ 2
  2x1 + x2 ≤ 7
  x1, x2 ≥ 0.
Exercise 2.3
Consider the following standard optimization problem P
min −2x1 − 3x2
x
2x1 + x2 + x3 = 6
x ≥ 0.
2.4 Relevant Background

Definition 2.2
A norm on the vector space Rn is a map
N : Rn → R+0
such that
• N (x) = 0 if and only if x = 0;
• N (αx) = |α|N (x), for any α ∈ R;
• N (x + y) ≤ N (x) + N (y).
The first requirement is called positive definiteness, the second positive ho-
mogeneity, and the last triangular inequality. The number
N (x)
is called the norm of vector x.
Definition 2.3
The Euclidean norm (on Rn) is the map ‖ · ‖ : Rn → R+0 such that
‖x‖ = √(x1² + · · · + xn²).
Exercise 2.4
Show that ‖ · ‖ : Rn → R+0 when n = 1 coincides with the absolute value map | · | : R → R+0.
Definition 2.4
An inner product on the vector space Rn is a map
I : (Rn)2 → R
such that
• I(x, y) = I(y, x);
• I(x + y, z) = I(x, z) + I(y, z);
• I(αx, y) = α I(x, y), for any α ∈ R;
• I(x, x) ≥ 0;
• I(x, x) = 0 if and only if x = 0;
for every x, y, z ∈ Rn.
The first requirement is called symmetry, the second and the third are called linearity conditions, and the fourth and the fifth are called positive definiteness.
The usual inner product over Rn is as follows:
Definition 2.5
The Euclidean inner product (on Rn) is the map · : (Rn)2 → R such that
x · y = x1 y1 + · · · + xn yn.
Remark 2.1
Observe that
‖x‖ = √(x · x).
Definition 2.6
The vectors v 1 , v 2 ∈ Rn are perpendicular or orthogonal , indicated by
v1 ⊥ v2 ,
whenever v 1 · v 2 = 0.
Example 2.5
The vectors (1, 1) and (−1, 1) are orthogonal.
Proposition 2.11
Let x, y ∈ Rn. Then, (x · y)² ≤ (x · x)(y · y).
Proof:
Consider two cases:
(1) y = 0. Immediate, by inspection.
(2) y ≠ 0. Then, y · y ≠ 0 by positive definiteness. Let z be the vector
x − ((x · y)/(y · y)) y.
Observe that
z · y = x · y − ((x · y)/(y · y)) (y · y) = 0 and x = z + ((x · y)/(y · y)) y.
Therefore,
x · x = (z + ((x · y)/(y · y)) y) · (z + ((x · y)/(y · y)) y)
      = z · z + 2 ((x · y)/(y · y)) (z · y) + ((x · y)²/(y · y)²) (y · y)
      = z · z + (x · y)²/(y · y)
      ≥ (x · y)²/(y · y)
and so the thesis follows immediately. QED
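A quick numerical illustration of this inequality (equivalently |x · y| ≤ ‖x‖‖y‖) can be run as follows; the random vectors and the tolerance are purely illustrative, and NumPy is assumed.

```python
# Illustrative numerical check of (x . y)^2 <= (x . x)(y . y).
import numpy as np

rng = np.random.default_rng(0)
for _ in range(5):
    x, y = rng.normal(size=4), rng.normal(size=4)
    lhs = (x @ y) ** 2
    rhs = (x @ x) * (y @ y)
    print(lhs <= rhs + 1e-12)   # True in every trial
```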
The next result shows that the norm of a vector is an upper bound of the
absolute value of each component of the vector.
Proposition 2.12
Let x ∈ Rn. Then,
|xj| ≤ ‖x‖
for every j = 1, . . . , n.
Proof:
Let ej be the j-th vector of the standard basis of Rn (see Definition 1.20). Then,
|xj| = |x · ej| ≤ ‖x‖ ‖ej‖ = ‖x‖
by Proposition 2.11. QED
Definition 2.7
The open ball of radius ε > 0 centered on x ∈ Rn, denoted by
Bε(x),
is the set {y ∈ Rn : ‖x − y‖ < ε}. Moreover, the closed ball of radius ε centered on x, denoted by
Bε[x],
is the set {y ∈ Rn : ‖x − y‖ ≤ ε}.
Example 2.6
Observe that
(1/2, 1/2, 1/2) ∈ B1(1, 1, 1)
since
‖(1, 1, 1) − (1/2, 1/2, 1/2)‖ = ‖(1/2, 1/2, 1/2)‖ = √(3/4) < 1.
On the other hand,
(0, 0, 0) ∉ B1/2(2, 2, 2).
Indeed:
‖(2, 2, 2) − (0, 0, 0)‖ = ‖(2, 2, 2)‖ = √12 > 1/2.
Thus, (0, 0, 0) ∉ B1/2(2, 2, 2).
[Figure 2.3: the set {(x1, x2) ∈ R2 : x1 + x2 ≤ 2, x1 ≥ 0, x2 ≥ 0}.]
Definition 2.8
Let x ∈ Rn and U ⊆ Rn . We say that x is an interior point of U if there is
ε > 0 such that:
Bε (x) ⊂ U.
Furthermore, we say that x is a boundary point of U if for every ε > 0 we have
Bε (x) ∩ U 6= ∅ and Bε (x) ∩ (Rn \ U ) 6= ∅.
Example 2.7
Let X be the set
{(x1, x2) ∈ R2 : x1 + x2 ≤ 2, x1 ≥ 0, x2 ≥ 0}
depicted in Figure 2.3. Then,
(1/2, 1/2) is an interior point
and
(0, 1) is a boundary point.
Notation 2.1
The set of all interior points of U is called the interior of U and is denoted by
int(U ).
Moreover, the set of all boundary points of U is called the boundary of U and
is denoted by
bn(U ).
Definition 2.9
We say that a set U ⊆ Rn is open if U = int(U ). The complement of an open
set is called a closed set.
Remark 2.2
Observe that ∅ is an open and a closed set.
Definition 2.10
A sequence {xk}k∈N in Rn converges to x ∈ Rn if for every real number δ > 0 there is k ∈ N such that ‖xm − x‖ < δ for every natural number m ≥ k.
Example 2.8
The sequence
{(−1/k, 1/k)}k∈N
converges to (0, 0) in R2.
Proposition 2.13
Let {xk }k∈N be a sequence in Rn and x ∈ Rn . Then, {xk }k∈N converges to x if
and only if, for each j = 1, . . . , n, the sequence {(xk )j }k∈N converges to xj in
R.
Proposition 2.14
Let {uk }k∈N and {wk }k∈N be sequences in R such that there is m ∈ N with
un ≤ wn for every natural number n > m. Assume that {uk }k∈N and {wk }k∈N
converge to u and w, respectively. Then, u ≤ w.
Proposition 2.15
Let {uk }k∈N be a constant sequence in R. Then, {uk }k∈N converges to u0 .
Proposition 2.16
Let {xk }k∈N and {y k }k∈N be sequences in Rn such that there is m ∈ N with
xn ≤ y n for every natural number n > m. Assume that {xk }k∈N and {y k }k∈N
converge to x and y, respectively. Then, x ≤ y.
Proposition 2.17
Let {xk }k∈N be a constant sequence in Rn . Then, {xk }k∈N converges to x0 .
Proposition 2.18
A set U ⊆ Rn is closed if and only if, for every sequence {xk }k∈N of elements
of U , if {xk }k∈N converges to x in Rn , then x ∈ U .
Proposition 2.19
The class of closed sets is closed under finite union and intersection.
Proposition 2.20
A closed ball is a closed set.
Definition 2.11
We say that a set U ⊆ Rn is bounded if there exists µ ∈ R+ such that ‖u‖ ≤ µ
for every u ∈ U .
Definition 2.12
A set U ⊆ Rn is compact if it is closed and bounded.
Example 2.9
For instance, the closed unit ball in Rn centered in 0 is compact; that is, the
set
{x ∈ Rn : x21 + · · · + x2n ≤ 1}
is compact.
Proposition 2.21
Every sequence taking values on a compact set has a convergent subsequence
in that set.
Proposition 2.22
Let U ⊆ Rn be a set such that every infinite sequence in U has a subsequence
converging to an element of U . Then, U is compact.
We now introduce the notion of continuous map which will be essential for
some results in linear optimization.
Definition 2.13
A map f : Rn → Rm is continuous at x in Rn if, for every sequence {xk }k∈N in
Rn that converges to x, {f (xk )}k∈N converges to f (x). A map is continuous if
it is continuous at every x ∈ Rn .
Proposition 2.23
A map f : Rn → Rm is continuous at x if and only if, for every δ > 0, there exists ε > 0 such that if ‖y − x‖ < ε then ‖f(y) − f(x)‖ < δ.
Proposition 2.24
A map f : Rn → Rm is continuous if and only if the map fi : Rn → R such
that fi (x) = (f (x))i is continuous, for every i = 1, . . . , m.
Proposition 2.25
The map f = u ↦ ‖u − v‖ : Rn → R+0 is continuous, where v ∈ Rn.
Proposition 2.26
The inverse image of a closed set under a continuous map is closed.
Proposition 2.27
Let U ⊆ Rn be a compact set and f : Rn → R a continuous map. Then, f (U )
is compact.
Definition 2.14
A set U ⊆ R has an upper bound r whenever u ≤ r for every u ∈ U . A
map f : Rn → R has an upper bound or is bounded from above whenever
{f (x) : x ∈ Rn } has an upper bound. Furthermore, f has a maximum in
X ⊆ Rn when there is s ∈ X such that f (x) ≤ f (s) for every x ∈ X. Similarly
for lower bound , bounded from below and minimum.
Proposition 2.28
Let X ⊆ Rn be a non-empty compact set and f : Rn → R a continuous map.
Then, f has a maximum and has a minimum in X.
Chapter 3
Deciding Optimizers
The objective of this chapter is to provide a way for deciding whether or not a given admissible vector is an optimizer, relying on Farkas’ Lemma. Furthermore, we also use Farkas’ Lemma to decide whether or not there are admissible vectors for the optimization problem at hand.

3.1 Farkas’ Lemma
Definition 3.1
A subset U of Rn is convex whenever U is closed under convex combinations;
that is,
αx + (1 − α)y ∈ U
whenever x, y ∈ U and α ∈ [0, 1].
Example 3.1
The vector (4/3, 1) is a convex combination of the vectors (2, 3) and (1, 0). Indeed:
(4/3, 1) = (1/3)(2, 3) + (2/3)(1, 0).
Exercise 3.1
Show that ∅ is a convex set.
Definition 3.2
The convex cone generated by {u1 , . . . , uk } ⊂ Rn ,
C({u1 , . . . , uk }),
is the set
{ α1 u1 + · · · + αk uk : αi ∈ R+0, 1 ≤ i ≤ k }.
Exercise 3.2
Show that C({u1 , . . . , uk }) is a convex set.
Example 3.2
Let {e1 , . . . , en } be the canonical basis of Rn (recall Definition 1.20). Then,
C({e1 , . . . , en }) = {x ∈ Rn : x ≥ 0}
as we now show.
(⊆) Let x ∈ C({e1 , . . . , en }). Then,
n
X
x= αj ej
j=1
Definition 3.3
A subset C of Rn is a convex cone if there exists a finite subset U of Rn such
that C = C(U ).
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
3.1. FARKAS’ LEMMA 65
Example 3.3
Recall Example 3.2. Then, the set
{x ∈ Rn : x ≥ 0}
is a convex cone.
Definition 3.4
A convex cone C is primitive if there exists a set U of linearly independent
vectors such that C = C(U ).
Example 3.4
Recall Example 3.3. Then, the convex cone
{x ∈ Rn : x ≥ 0}
is primitive.
Example 3.5
Observe that ∅ is a primitive convex cone since
∅ = C(∅)
We now prove several results that are needed for Farkas’ Lemma.
Proposition 3.1
Every primitive convex cone is closed.
Proof:
Let C ⊆ Rn be a primitive convex cone. Since C is a primitive cone, then
C = C(U ) for some set U = {u1 , . . . , uk } of linearly independent vectors in Rn .
Let h : Rk → Rn be the map
h(x) = x1 u1 + · · · + xk uk .
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
66 CHAPTER 3. DECIDING OPTIMIZERS
x = α1 u1 + · · · + αk uk , for some α1 , . . . , αk ∈ R+
0.
x = h(α)
Therefore, x ∈ C.
We now show that
h is an injective map.
Indeed, assume that h(x) = h(y). Then,
This map is well defined since y = h(x) for some x ∈ Rk and there is only
one such x, for each y ∈ h(Rk ). Observe that g is a bijection. Moreover,
g −1 : Rk → h(Rk ) is such that
g −1 (x) = h(x)
C = g −1 (C({e1 , . . . , ek })).
The next result states that the set of primitive convex cones generates the
class of convex cones.
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
3.1. FARKAS’ LEMMA 67
Proposition 3.2
Every convex cone is a finite union of of primitive convex cones.
Proof:
Let C be a convex cone. Then,
C = C(U )
for some finite set U = {u1 , . . . , uk } ⊂ Rn . We will show that each element x of
C(U ) belongs to a primitive convex cone generated by a subset of U . The thesis
follows from the fact that the number of primitive convex cones generated by
subsets of U is finite (indeed there are 2k subsets of U but not all of them may
be linearly independent sets).
Take x ∈ C(U ). Consider two cases:
(1) x = 0. Then, x ∈ C({u1 }). So, x belongs to a primitive convex cone
generated by a subset of U .
(2) x 6= 0. Let Ux = {ui1 , . . . , ui` } ⊆ U be such that x ∈ C(Ux ) and
|Ux | = min{|V | : V ⊆ U, x ∈ C(V )}.
V
Then,
x = α1 ui1 + · · · + α` ui` ,
where αj > 0 for each j = 1, . . . , `. Moreover, Ux 6= ∅. We now prove that
Ux is a linearly independent set.
Assume, by contradiction, that Ux is not linearly independent. Hence, there
exists β ∈ R` such that β1 ui1 + · · · + β` ui` = 0 and βi 6= 0 for some i = 1, . . . , `.
Pick j such that
αj
= min αi : βi 6= 0 ∧ 1 ≤ i ≤ ` .
βj βi
Take
αj
γ =α− β.
βj
Observe that there exists i = 1, . . . , k such that γi = 0. Indeed:
αj
γj = αj − βj = 0.
βj
We now show that
αj
γi = αi − βi ≥ 0, for every i 6= j.
βj
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
68 CHAPTER 3. DECIDING OPTIMIZERS
(d) βj , βi ∈ R+ . Then,
αj αi
βi ≤ βi
βj βi
and so
αj αi
γi = αi − βi ≥ αi − βi = 0.
βj βi
Thus, γ ≥ 0.
Therefore,
γ1 ui1 + · · · + γ` ui` =
αj αj
(α1 − β1 )ui1 + · · · + (α` − β` )ui` =
βj βj
αj
(α1 ui1 + · · · + α` ui` ) − (β1 ui1 + · · · + β` ui` ) = x,
βj
Proposition 3.3
Every convex cone is closed.
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
3.1. FARKAS’ LEMMA 69
Proof:
Let C be a convex cone. Then, by Proposition 3.2, C is a finite union of
primitive cones. Since, by Proposition 3.1, each primitive cone is a closed set
and any finite union of closed sets is also a closed set (see Proposition 2.19),
we conclude that C is a closed set. QED
Proposition 3.4
Let U ⊆ Rn be a non-empty closed set and v ∈ Rn . Then, there exists at least
one element of U whose distance to v is minimal.
Proof:
Let x ∈ U and
B = {u ∈ U : ku − vk ≤ kx − vk}.
Observe that x ∈ B. We start by proving that B is a closed set. Observe that
B = Bkx−vk [v] ∩ U,
f = u 7→ ku − vk : Rn → R+
0.
The next result is a direct consequence of the proposition above for convex
cones.
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
70 CHAPTER 3. DECIDING OPTIMIZERS
Proposition 3.5
Let U ⊆ Rn be a non-empty convex cone and v ∈ Rn . Then, there exists at
least one element of U whose distance to v is minimal.
Proof:
Let U be a convex cone. Then, by Proposition 3.3, U is a closed set. Hence,
by Proposition 3.4, we conclude the thesis. QED
The next result called the Geometric Variant of the Farkas’ Lemma, is
particularly relevant for optimization. It was stated and proved by Julius Farkas
(see [30]). Its generalization to topological vector spaces is known as the Hahn–
Banach Theorem (see [57]).
Proposition 3.6
Let U = {u1 , . . . , uk } ⊂ Rn and v ∈ Rn . Then, exactly one of the following
alternatives holds:
• v ∈ C(U );
Proof:
Let z be an element of C(U ) such that the distance from z to v is minimal
(such a z exists by Proposition 3.5) and
w = (z − v)T .
wz = 0.
{y α }α∈]0,1]
{y α }α∈R−
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
3.1. FARKAS’ LEMMA 71
yα − v = (1 − α)z − v
= (1 − α)z − (z − wT )
= wT − αz.
Therefore,
2
ky α − vk = (wT − αz)T (wT − αz)
= (w − αz T )(wT − αz)
= wwT − αz T wT − αwz + α2 z T z
2 2
= kwk − 2αwz + α2 kzk .
Observe that αwz > 0. Consider the set composed by the α’s such that
2
2αwz > α2 kzk ;
that is,
2wz
α< 2.
kzk
Hence, if wz > 0 pick up
!
2wz
0 < α < min 1, 2 ,
kzk
otherwise
2wz
α< 2.
kzk
Thus, in both cases,
2 2 2
ky α − vk < kwk = kz − vk
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
72 CHAPTER 3. DECIDING OPTIMIZERS
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
3.1. FARKAS’ LEMMA 73
Notation 3.1
Given a matrix A, we denote by
a•j
the j-th column of A and by
ai•
the i-th row of A.
Notation 3.2
Given an m × n-matrix A, we denote by
C(A)
the convex cone in Rn generated by the set {a1• , . . . , am• } of lines of A.
Proposition 3.7
Let A be an m × n-matrix and v ∈ Rn . Then, exactly one of the following
alternatives holds:
• v ∈ C(A);
• there is a non-null row vector w ∈ Rn with wAT ≥ 0T and wv < 0.
Proof:
Using Proposition 3.6 with U = {a1• , . . . , am• }, exactly one of the following
alternatives holds:
• v ∈ C(U );
• there exists a non-null row vector w ∈ Rn such that waT
i• ≥ 0 for i =
1, . . . , m and wv < 0.
The thesis follows since C(A) = C(U ) and waT
i• ≥ 0 for i = 1, . . . , m is equiva-
lent to saying that wAT ≥ 0T . QED
Exercise 3.3
Let A be an m × n-matrix and v ∈ Rm . Show that exactly one of the following
alternatives holds:
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
74 CHAPTER 3. DECIDING OPTIMIZERS
• v ∈ C(AT );
• there exists a non-null row vector w ∈ Rm such that wA ≥ 0T and wv < 0.
Definition 3.5
Let P be a canonical problem in pure form and x ∈ XP . The i-th line of A is
active in x if
ai• x = bi .
Notation 3.3
We denote by
Ax
the matrix containing the lines of A that are active in x, where x ∈ XP .
Example 3.6
Consider the canonical optimization problem P in Example 1.12 presented in
pure form. The 1st and the 2nd lines of A are active in (3, 3) ∈ XP . Hence,
(3,3) 3 −1
A =
−1 3
On the other hand, the 3rd and the 4th lines of A are active in (0, 0) ∈ XP .
Hence,
−1 0
A(0,0) = .
0 −1
With the example above, the reader can start to infer that for any vector in
the boundary of the set of admissible vectors there exists a line active in that
vector. This is the case as we show now.
Proposition 3.8
Consider a canonical optimization problem P in pure form and x ∈ XP . Then,
x ∈ ∂XP if and only if there exists a line of A active in x.
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
3.2. USING CONVEX CONES 75
Proof:
We just prove one of the implications. The other implication is proved in a
similar way.
(→) Assume that x ∈ ∂XP . There are two cases.
(1) There is i ∈ {1, . . . .m} such that ai• x = bi . Then, the i-th line of A is
active in x.
(2) There is ` ∈ {m + 1, . . . , m + n} such that x`−m = 0. Thus, the `-th line of
A is active in x. QED
The aim is to provide a test for checking whether or not an admissible vector
is a maximizer. For that, we introduce the concept of local maximum for maps
from Rn into R as follows.
Definition 3.6
A vector x is a local maximum of f : Rn → R in D ⊆ Rn if there exists ε > 0
such that, for every y ∈ D,
The following result, called the Local Maximizer Theorem, provides a way
to decide whether or not a boundary vector is a local maximum, using convex
cones.
Theorem 3.1
Let P be a canonical optimization problem in pure form. Then, for every
x ∈ ∂XP ,
Proof:
If c = 0 then the thesis follows immediately. Otherwise, we show the equivalent
statement that exactly one of the following alternatives holds:
• cT ∈ C(Ax ) and x is a local maximum of the map x 7→ cx in XP ;
• cT 6∈ C(Ax ) and x is not a local maximum of the map x 7→ cx in XP .
By Proposition 3.8, there is a line of A active in x. By applying Proposition 3.7
to matrix Ax and vector cT , it follows that exactly one of the following alter-
natives holds:
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
76 CHAPTER 3. DECIDING OPTIMIZERS
(1) cT ∈ C(Ax );
(2) there exists a non-null row vector w ∈ Rn such that w(Ax )T ≥ 0T and
wcT < 0.
In case (1), the thesis is established by contradiction as follows. Assume that
x is not a local maximum of the map x 7→ cx in XP . That is, for every ε > 0
there exists y ∈ XP such that kx − yk ≤ ε and cx < cy. Pick ε > 0 and y in
these conditions.
Since cT ∈ C(Ax ), there exists α ≥ 0 such that (Ax )T α = cT ; that is, αT Ax = c.
Hence, from cx < cy it follows that
αT Ax x < αT Ax y.
as we now show. Assume, by contradiction, that for every j such that αj > 0
we have (Ax x)j ≥ (Ax y)j . Then,
X X
αT Ax x = αj (Ax x)j ≥ αj (Ax y)j = αT Ax y,
j∈{j:αj >0} j∈{j:αj >0}
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
3.2. USING CONVEX CONES 77
cy β = cx − βcwT
= cx − β(wcT )T
> cx
ai• y β = bi − βai• wT .
If ai• wT ≥ 0 then ai• y β ≤ bi . Otherwise ai• wT < 0. In this case, consider the
subfamily
β 0 + bi − ai• x
{y }β∈Bi0 , where Bi = β ∈ R : β ≤ .
|ai• wT |
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
78 CHAPTER 3. DECIDING OPTIMIZERS
Example 3.7
Recall Example 3.6. Then, (3, 3) is a local maximum of x 7→ 2x1 + x2 in XP
since (2, 1)T ∈ C(A(3,3) ), by Theorem 3.1. Indeed,
2 3 −1
= α1 + α2
1 −1 3
7 5
has a positive solution with α1 = and α2 = . On the other hand, (0, 0)
8 8
/ C(A(0,0) ), by
is not a local maximum of x 7→ 2x1 + x2 in XP since (2, 1)T ∈
Theorem 3.1. Indeed,
2 −1 0
= α1 + α2
1 0 −1
Theorem 3.2
Every local maximum of the objective map of a canonical optimization problem
in the set of admissible vectors is a maximizer and vice versa.
Proof:
Let P be a canonical optimization problem. It is immediate that a maximizer is
also a local maximum. The proof of the other implication is by contraposition.
Assume that x ∈ XP is not a maximum of x 7→ cx in XP . Then, there is
z ∈ XP such that cx < cz. We must show that x is not a local maximum
of x 7→ cx in XP . That is, we must show that, for every ε > 0, there exists
y ∈ XP such that kx − yk ≤ ε and cx < cy. Let ε > 0. Consider the family
with
ε
B = β ∈ R+ : β ≤ .
kx − zk
Each y β satisfies kx − y β k ≤ ε. Pick β ∈ B such that β < 1. Then, y β is a
convex combination of elements of XP . Moreover, y β ∈ XP as we show now.
Indeed, it is immediate that y β ≥ 0. Furthermore,
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
3.2. USING CONVEX CONES 79
Finally,
cy β = (1 − β)cx + βcz > (1 − β)cx + βcx = cx
because cz > cx and β is positive. QED
Example 3.8
Recall Example 3.7. Then, (3, 3) is a maximizer of P and (0, 0) is not a maxi-
mizer of P , taking into account Theorem 3.2.
Proposition 3.9
Let P be a standard optimization problem. Then,
XP 6= ∅ if and only if b ∈ C(AT ).
Proof:
(→) Let A be an m × n-matrix and x ∈ XP . Then,
bi = ai1 x1 + . . . ain xn
for every i = 1, . . . , m and xj ≥ 0 for every j = 1, . . . , n. We want to find
α ∈ (R+ n
0 ) such that
b = α1 a•1 + · · · + αn a•n .
It is enough to take αj as xj for j = 1, . . . , n.
(←) This implication follows in a similar way. QED
The following results, for deciding the non-emptiness of the set of admissible
vectors, are called the Standard and the Canonical Variants of the Farkas’
Lemma, respectively.
Proposition 3.10
Let P be a standard optimization problem. Then,
XP 6= ∅ if and only if for every w 6= 0, wb ≥ 0 when wA ≥ 0.
Proof:
Observe that, by Proposition 3.9,
XP 6= ∅ iff b ∈ C(AT ).
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
80 CHAPTER 3. DECIDING OPTIMIZERS
By Exercise 3.3 and since the two possibilities are exclusive, we conclude that
Proposition 3.11
Let P be a canonical optimization problem. Then,
Proof:
Using Proposition 1.3, we have that
XP 6= ∅ iff XCS(P ) 6= ∅.
Observe that
w A I ≥ 0 iff wA ≥ 0 and w ≥ 0
and so the thesis follows. QED
Exercise 3.4
Prove the standard variant from the canonical variant for deciding the non-
emptiness of the set of admissible vectors using the Farkas’ Lemma.
Solution:
Let U = {u1 , . . . , uk }.
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
3.3. SOLVED PROBLEMS AND EXERCISES 81
k
X
x+y = (αi + βi )ui ∈ C(U ),
i=1
since (αi + βi ) ∈ R+
0 for i = 1, . . . , k.
(2) C(U ) is closed under multiplication by non-negative scalars. Let x ∈ C(U ).
Then,
k
X
x= αi ui
i=1
since (βαi ) ∈ R+
0 for i = 1, . . . , k.
(⊆) We show that C(U ) ⊆ V for any V ∈ V such that U ⊆ V .
We start by observing that
k
[ [
C(U ) = C(U 0 ),
`=1 U 0 ⊆ U
|U 0 | = `
We prove that
C(U 0 ) ⊆ V
for every V ∈ V and U 0 such that U 0 ⊆ U ⊆ V and |U 0 | = `, by induction on `.
(Base) ` = 1. Let U 0 = {u0 } be such that U 0 ⊆ U . Then, C(U 0 ) has the form
{αu0 : α ∈ R+
0}
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
82 CHAPTER 3. DECIDING OPTIMIZERS
Then,
`
X
• αi u0 i ∈ V , using the induction hypothesis;
i=1
Problem 3.2
Consider the following canonical optimization problem P :
max
x
2x1 + 3x2
x1 + x2 ≤ 3
−x1 + x2 ≤ −1
x ≥ 0.
Solution:
(1) The pure form of P is:
h i
max 2 3 x
x
1 1 3
−1 1 −1
≤
x .
−1
0 0
0 −1 0
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
3.3. SOLVED PROBLEMS AND EXERCISES 83
in the x1 -axis;
• Ax is the matrix 1 1 . Then, by Proposition 3.8, x ∈ ∂XP . More-
A(2,0) = 0 −1
and
(2,1) 1 1
A = .
−1 1
Using Theorem 3.2, to prove whether or not x ∈ XP is a maximizer, we must
show that x is a local maximum for the objective map in XP . Moreover, by
Theorem 3.1 it is enough to verify whether or not cT ∈ C(Ax ).
• (2, 0): in this case, there is no α ∈ R+
0 such that
2 0
=α .
3 −1
Hence, (2, 0) is not a maximizer.
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
84 CHAPTER 3. DECIDING OPTIMIZERS
Indeed, take
α1 = 5
2
α2 = 1
2.
Problem 3.3
Let P be the following canonical optimization problem
max 2x1 + 5x2
x
− 4 x1 − x2 ≤ −2
3
2x1 + x2 ≤ 10
x ≥ 0.
Solution:
Observe that CS(P ) is as follows:
min −2x1 − 5x2
x
− 4 x1 − x2 + x3 = −2
3
2x1 + x2 + x4 = 10
x ≥ 0.
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
3.3. SOLVED PROBLEMS AND EXERCISES 85
That is, we must find a non-negative solution for the following system of equa-
tions:
− 4 α1 − α2 + α3 = −2
3
2α1 + α2 + α4 = 10.
Exercise 3.5
Let A be an m × n-matrix and x ∈ Rn . Show that exactly one of the following
alternatives holds:
• x ∈ C(A);
• there exists a non-null row vector e ∈ Rn such that eAT ≥ 0T and ex < 0.
Exercise 3.6
Consider the following canonical optimization problem P :
max 2x1 + x2
x
−x1 − x2 ≤ −1
x1 + x2 ≤ 3
x1 − x2 ≤ 1
−x1 + x2 ≤ 1
−x1 ≤ 0
−x2 ≤ 0.
3 1
Decide whether or not (2, 1) and , are maximizers of P .
2 2
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
86 CHAPTER 3. DECIDING OPTIMIZERS
Exercise 3.7
Consider the following canonical optimization problem:
max 2x1 + x2
x
−x + x ≤ 1
1
2
x1 + x2 ≤ 3
x1 − x2 ≤ 1
x ≥ 0.
(1) Verify that (2, 1) is a local maximum and that (1, 2) is not a local maximum.
(2) Find an objective map for which (1, 2) is a local maximum.
(3) Is there an objective map such that all points in an edge are local maxima?
Justify your answer.
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
Chapter 4
Computing Optimizers
Definition 4.1
A standard optimization problem
min
x
cx
Ax = b
x ≥ 0,
1. m < n;
2. rank(A) = m.
87
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
88 CHAPTER 4. COMPUTING OPTIMIZERS
Proposition 4.1
Let P be a canonical optimization problem. Then,
CS(P )
is a restricted standard optimization problem.
Proof:
Assume that P is
max cx
x
Ax ≤ b
x ≥ 0.
Then, CS(P ) is h i
min −c 0 y
y
h i h i
A I y= b
y ≥ 0,
where I is the m × m identity matrix. Note that the dimension of the matrix
of CS(P ) is m × (n+m). Therefore, m < n + m. Moreover, the rank of the
matrix is m because it includes the identity matrix of dimension m. QED
The following result shows that every standard optimization problem has
an equivalent restricted counterpart (recall the definition of map SC in Propo-
sition 1.2).
Proposition 4.2
Let P be a standard optimization problem. Then,
CS(SC(P ))
is a restricted standard optimization problem.
Proof:
It is enough to use Proposition 4.1 over SC(P ). QED
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
4.1. BASIC VECTORS 89
Remark 4.1
From now on, we assume, without loss of generality, that any standard opti-
mization problem is restricted, except when otherwise stated.
Notation 4.1
Given an m × n-matrix, we denote by
Notation 4.2
Given a matrix A and B ⊆ N , we denote by
AB
the submatrix (see Definition 1.23) of A where the columns are the ones with
indices in B.
Definition 4.2
Let A be an m × n matrix of a standard optimization problem. An (index)
basis for A is a subset B of N with cardinality m such that the set of columns
in AB is linearly independent (that is, AB is nonsingular).
Exercise 4.1
Show that
n n!
=
m m!(n − m)!
is an upper bound on the number of basis.
Example 4.1
Consider the following standard optimization problem
h i
min
x −1 −4 0 0 x
" # " #
−1 1 1 0 1
x=
2 1 0 1 6
x ≥ 0.
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
90 CHAPTER 4. COMPUTING OPTIMIZERS
Observe that N = {1, 2, 3, 4}. The subsets of N that are candidates to basis
are the following:
{1, 2}, {1, 3}, {1, 4}, {2, 3}, {2, 4}, {3, 4}
Proposition 4.3
Consider a standard optimization problem. Let B ⊆ N be an index basis for
A. Then, the columns in AB constitute a basis for span(A).
Proof:
Since B is an index basis for A, then the set composed by the columns of AB
is linearly independent. Moreover, rank(A) = rank(AB ). Observe that, by
Proposition 4.24, the dimension of span(A) is rank(A). So, the set composed
by the columns of AB is a basis for span(A), by Proposition 4.12. QED
Definition 4.3
Consider a standard optimization problem. An admissible vector x is basic
whenever there exists B ⊆ N such that:
• B is an (index) basis;
• xj = 0 for every j ∈ (N \ B).
In this case, B is admissible. Moreover, x is degenerate when the number of
zero components of x is strictly greater than |N \ B|.
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
4.1. BASIC VECTORS 91
Example 4.2
Recall the standard optimization problem P introduced in Example 4.1. We
now investigate which bases are admissible.
(1) B = {1, 2}. Let x ∈ R4 be such that x3 = 0 and x4 = 0. Observe that
x1
" # " #
−1 1 1 0 x2
1
=
2 1 0 1 0
6
0
if and only if
8
x2 =
(
−x1 + x2
= 1 3
if and only if
2x1 + x2 = 6
x 5
1 = .
3
Hence, x ∈ XP . So, this basis is admissible with basic admissible vector
5 8
, , 0, 0 .
3 3
if and only if
( (
−x1 + x3 = 1 x3 = 4
if and only if
2x1 = 6 x1 = 3.
(3, 0, 4, 0).
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
92 CHAPTER 4. COMPUTING OPTIMIZERS
if and only if (
−x1 = 1
2x1 + x4 = 6.
So, x ∈
/ XP . Thus, the basis B = {1, 4} is not admissible.
We omit details for the other bases. Anyhow, the admissible bases are:
respectively.
Proposition 4.4
There is at most one basic admissible vector for each basis of a standard opti-
mization problem.
Proof:
For x ∈ Rn to be admissible, it is necessary that Ax = b; that is, AB xB +
AN \B xN \B = b. Then, for B to be a basis for x, it is necessary that AB xB +
AN \B 0 = b; that is, AB xB = b. This system has exactly one solution, since
AB is nonsingular. If this solution (in Rm ) is non-negative, then adding zeros
in the adequate positions yields an admissible vector basic for B. QED
Proposition 4.5
The set of basic admissible vectors of a standard optimization problem is finite.
Proof:
The thesis follows from Exercise 4.1 and Proposition 4.4. QED
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
4.1. BASIC VECTORS 93
Notation 4.3
Given x ∈ Rn , we denote by
Px
the set {j ∈ N : xj > 0}.
Proposition 4.6
Consider a standard optimization problem with matrix A and let x be an
admissible vector. Then, x is basic if and only if the set of columns of A with
indices in Px is linearly independent.
Proof:
(→) Assume that x is basic and B ⊆ N is a basis for x. Thus, xj = 0 for each
j ∈ (N \ B). Hence, Px ⊆ B. Therefore, since the set {a•j : j ∈ B} is linearly
independent, so is the set {a•j : j ∈ Px }.
(←) Assume that the set of columns with indices in Px (that is, the set
{a•j : j ∈ Px }) is linearly independent. We consider two cases:
(1) |Px | = rank(A). Then, Px is an index basis for x. Hence, x is basic.
(2) |Px | < rank(A). By Proposition 4.11, there is a set of columns of A consti-
tuting a basis for span(A) and containing {a•j : j ∈ Px }. Let B be the set of
indices of the columns in that set. Then, B ⊃ Px and |B| = rank(A). Finally,
xj = 0 for each j ∈ (N \ B) follows from (N \ B) ⊂ (N \ Px ). QED
Proposition 4.7
Consider a standard optimization problem and let x be an admissible vector.
Then, x is basic if and only if for every admissible vectors y and z, whenever
there exists α ∈ ]0, 1[ such that x = αy + (1 − α)z then x = y = z.
Proof:
Let A be the matrix of the problem.
(→) Assume that x is basic. Let y and z be admissible vectors and α ∈ ]0, 1[
such that
x = αy + (1 − α)z.
Let B be an admissible basis for x. Hence, xj = 0 for every j ∈ N \ B. Thus,
yj = zj = 0
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
94 CHAPTER 4. COMPUTING OPTIMIZERS
AB xB = AB yB = AB zB = b
since Ax = Ay = Az = b. So,
AB (xB − yB ) = AB (xB − zB ) = 0.
xB − yB = xB − zB
x = αy + (1 − α)z
Hence,
(xPx )j ≥ γβj and (xPx )j ≥ −γβj .
So,
(xPx )j − γβj ≥ 0 and (xPx )j + γβj ≥ 0.
Observe that γβ 6= 0. Let w ∈ Rn be such that
Then, w 6= 0 and
x − w ≥ 0 and x + w ≥ 0
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
4.2. USING BASIC VECTORS 95
and similarly
A(x − w) = b.
Therefore, x + w and x − w are admissible vectors. Observe that
1 1
x= (x + w) + (x − w).
2 2
Thus, x = x + w = x − w and so w = 0 which is a contradiction since we proved
before that w 6= 0. QED
Proposition 4.8
Let P be a standard optimization problem such that the objective map is
bounded from below in XP . Then, for every z ∈ XP , there exists a basic
vector z̃ ∈ XP such that cz̃ ≤ cz.
Proof:
Take z ∈ XP . Consider the set
XPz = {x ∈ XP : cx ≤ cz}.
Observe that XPz 6= ∅. Among the elements of XPz , pick z̃ with the largest
possible number of zero components. That is, if y ∈ XPz then
|N \ Py | ≤ |N \ Pz̃ |.
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
96 CHAPTER 4. COMPUTING OPTIMIZERS
in Pz̃ is linearly dependent. Then, there exists a non-zero vector v ∈ R|Pz̃ | such
that
APz̃ v = 0.
Assume that cPz̃ v < 0 (the other cases follow in a similar way). Let w ∈ Rn
be such that (
wj = vj whenever j ∈ Pz̃
0 otherwise.
That is, wPz̃ = v and, moreover, Aw = 0. Furthermore, cw < 0. Consider the
family
{y t }t∈R+
0
of vectors in Rn , where
y t = z̃ + tw
for each t ∈ R+
0 . Observe that
Ay t = b,
since Az̃ = b and Aw = 0. The proof proceeds by cases:
(a) w ≥ 0. Then, for every t ≥ 0, the vector y t = z̃ + tw is admissible, since
y t ≥ 0, because z̃ ≥ 0, as z̃ is admissible, and tw ≥ 0. On the other hand,
cy t = cz̃ + tcw
z̃k
− ≤ u.
wk
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
4.2. USING BASIC VECTORS 97
Theorem 4.1
Consider a standard optimization problem P . Assume that XP is non-empty
and that the objective map is bounded from below in XP . Then, there is a
basic admissible vector in SP .
Proof:
By Proposition 4.8, for each x ∈ XP there exists a basic vector x̃ ∈ XP such
that cx̃ ≤ cx. Hence,
inf{cx : x ∈ XP } = inf{cx̃ : x ∈ XP }.
Since the set of basic admissible vectors is finite, see Proposition 4.5, the set
{cx̃ : x ∈ XP } is also finite but non-empty. Hence, this set has a minimum.
Then, there is a minimizer which is basic. QED
The constructive nature of this result should be stressed: it provides an
algorithm to find a minimizer when the objective map has a lower bound in
XP and XP 6= ∅. Indeed, since the set of basic admissible vectors is finite (see
Proposition 4.5), it suffices to search for a minimizer in this set. Obviously,
this is an inefficient algorithm (see Proposition 7.10).
Example 4.3
Consider the following standard optimization problem:
min
x
x1
x2 − x3 = 0
x ≥ 0.
Observe that any vector (0, r, r) with r ≥ 0 is a minimizer but only the vector
(0, 0, 0) is a basic admissible vector.
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
98 CHAPTER 4. COMPUTING OPTIMIZERS
Example 4.4
Recall the standard optimimization problem P in Example 4.2. The basic
admissible vectors are
5 8
, , 0, 0 , (3, 0, 4, 0), (0, 1, 0, 5) and (0, 0, 1, 6).
3 3
We now prove that the objective map
is bounded from below in XP . Taking into account the expression of the objec-
tive map, it is enough to provide an upper bound for the first two components
of each admissible vector. It is immediate that x1 ≤ 3 and x2 ≤ 6 because
2x1 + x2 + x4 = 6 and x1 , x2 , x4 ≥ 0. Hence, Theorem 4.1 can be used to
conclude that there is a basic admissible vector which is a minimizer. Since the
minimum value of the objective map among the basic vectors is
37
− ,
3
which is achieved for the vector
5 8
, , 0, 0 ,
3 3
this vector is a minimizer of P .
Theorem 4.2
Consider a canonical optimization problem P . Assume that XP is non-empty
and that the objective map is bounded from above in XP . Then, there is
s ∈ SP such that f (s) ∈ SCS(P ) is a basic admissible vector.
Proof:
Observe that CS(P ) is a standard optimization problem, by Proposition 4.1.
By Proposition 1.3, XCS(P ) 6= ∅ since XP 6= ∅. Let µ be such that
cx ≤ µ
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
4.3. BASIC THROUGH COUNTING 99
Therefore, the objective map of CS(P ) is bounded from below in XCS(P ) . Let
s0 ∈ SCS(P ) be a basic admissible vector (see Theorem 4.1). Hence, s0 |n ∈ SP ,
by Proposition 1.3. QED
Definition 4.4
A standard optimization problem is non-degenerate whenever b is not a linear
combination of less than m columns of A.
Example 4.5
Recall problem P in Example 1.13. We now show that P is non-degenerate.
Because of m = 2, we must check whether or not b is a combination of any of
the columns of A. Indeed, it is not the case that either
6 3
=α
6 −1
for some α ∈ R, or
6 −1
=α
6 3
for some α ∈ R, or
6 1
=α
6 0
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
100 CHAPTER 4. COMPUTING OPTIMIZERS
for some α ∈ R, or
6 0
=α
6 1
for some α ∈ R.
Proposition 4.9
Let P be a non-degenerate standard optimization problem and x ∈ XP . Then,
x is basic if and only if |Px | = m.
Proof:
(→) Assume x is basic. Then, by Definition 4.3 of basic vector, |Px | ≤ m. We
now show that |Px | ≥ m. Indeed, assume, by contradiction, that |Px | < m.
Observe that, taking into account Proposition 4.16,
Ax = APx xPx + A(N \Px ) x(N \Px ) = APx xPx = b.
Hence, X
a•j xj = b.
j∈Px
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
4.3. BASIC THROUGH COUNTING 101
Example 4.6
Consider the standard optimization problem in Example 1.13. There are
4
=6
2
possible basic admissible vectors. The basic vectors are found as follows:
• x3 = x1 = 0: the system
(
−x2 = 6
3x2 + x4 = 6
has the solution (0, −6, 0, 24) which is not a basic admissible vector since
(0, −6, 0, 24) ∈
/ XP .
• x3 = x2 = 0: the system
(
3x1 = 6
−x1 + x4 = 6
has the solution (2, 0, 0, 8) which is a basic admissible vector since (2, 0, 0, 8) ∈
XP .
• x3 = x4 = 0: the system
(
3x1 − x2 = 6
−x1 + 3x2 = 6
has the solution (3, 3, 0, 0) which is a basic admissible vector since (3, 3, 0, 0) ∈
XP .
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
102 CHAPTER 4. COMPUTING OPTIMIZERS
• x4 = x1 = 0: the system
(
−x2 + x3 = 6
3x2 = 6
has the solution (0, 2, 8, 0) which is a basic admissible vector since (0, 2, 8, 0) ∈
XP .
• x4 = x2 = 0: the system
(
3x1 + x3 = 6
−x1 = 6
has the solution (−6, 0, 24, 0) which is not a basic admissible vector since
(−6, 0, 24, 0) ∈
/ XP .
• x1 = x2 = 0: the system (
x3 = 6
x4 = 6
has the solution (0, 0, 6, 6) which is a basic admissible vector since (0, 0, 6, 6) ∈
XP .
• α = maxi∈M,j∈N |aij |;
• β = maxi∈M |bi |.
Solution:
Let x be a basic admissible vector. Then,
Ax = AB xB = b
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
4.4. SOLVED PROBLEMS AND EXERCISES 103
xB = (AB )−1 b.
1
(AB )−1 = (cof AB )T
det AB
where
cof AB = {(−1)i+j det (AB )i,j }i,j=1,...,m .
Let
(AB )i,j = {ek` }k,`=1,...,m−1
and P be the set of all permutations of {1, . . . , m − 1}. Then, by the Leibniz’s
Formula (see Proposition 4.19), we have:
X m−1
Y
det (AB )i,j = sgn(p) ekp(k) .
p∈P k=1
then
m−1
Y
|ekp(k) | ≤ αm−1 .
k=1
Hence,
| det (AB )i,j | ≤ (m − 1)! αm−1 .
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
104 CHAPTER 4. COMPUTING OPTIMIZERS
Finally,
m
X
|(xB )j | = | ((AB )−1 )j,i bi |
i=1
m
X
≤ |((AB )−1 )j,i | |bi |
i=1
m
X 1
= | ((cof AB )T )j,i | |bi |
i=1
det AB
m
X 1
= | (det (AB )i,j )| |bi |
i=1
det AB
m
1 X
≤ | | (m − 1)! αm−1 |bi |
det AB i=1
m
X
≤ (m − 1)! αm−1 |bi |
i=1
Xm
≤ (m − 1)! αm−1 β
i=1
≤ m! αm−1 β,
for each j = 1, . . . , |B|. /
Problem 4.2
Let P be a standard optimization problem. Show that:
Solution:
(1) Let x be a degenerate basic admissible vector of P . Observe that, by
definition of degenerate vector,
|Px | < m.
Since,
APx xPx = b,
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
4.4. SOLVED PROBLEMS AND EXERCISES 105
Problem 4.3
Let P be the following standard optimization problem:
min 2x1 + x2
x
1 − x2 + x3 = −1
−x
x1 + x2 + x4 = 3
x1 − x2 + x5 = 1
−x1 + x2 + x6 = 1
x ≥ 0.
Solution:
Observe that
−1 −1 0 0
3 1 1 0
1 = 1
+ 2 + 2 .
0 0
1 −1 0 1
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
106 CHAPTER 4. COMPUTING OPTIMIZERS
Exercise 4.2
Consider the following canonical optimization problem P :
max 2x1 + 5x2
x
−2x1 + 2x2 ≤ 2
x1 ≤ 2
x ≥ 0.
Exercise 4.3
Consider the following standard optimization problems:
min cx min cx
x x
0 0
P1 = Ax = b P = Ax=b
x ≥ 0 x ≥ 0,
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
4.4. SOLVED PROBLEMS AND EXERCISES 107
Assume that XP1 6= ∅ and that the objective map is bounded from below on
XP1 . Show that SP1 = SP .
Exercise 4.4
Consider the following standard optimization problems
min cx
min cx x
x
Ax = b
P1 = Ax = b P2 =
x≥0
x ≥ 0
x ≤ µ,
where
• P1 is restricted;
• µ is
where
– α = max1≤i≤m,1≤j≤n |aij |;
– β = maxi∈M |bi |.
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
108 CHAPTER 4. COMPUTING OPTIMIZERS
Exercise 4.5
Show that every basic admissible vector with more than one basis is degenerate.
Proposition 4.10
Any two bases of a vector space over a field K have the same the cardinality.
Definition 4.5
The dimension of a vector space V over a field K, denoted by
dim V,
Example 4.7
The dimension of the vector space
{0}K
is 0, by Example 1.28.
Example 4.8
The dimension of the vector space Rn is n taking account the standard basis
of Rn (Definition 1.20).
Proposition 4.11
Let V be a vector space over a field K of dimension n and {v 1 , . . . , v k } ⊆ V
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
4.5. RELEVANT BACKGROUND 109
Proposition 4.12
Let V be a vector space over a field K of dimension n and V 0 ⊆ V a linearly
independent set in V such that |V 0 | = n. Then, V 0 is a basis for V .
Proposition 4.13
Let V1 band V2 be subspaces of a vector space over a field K. Then, dim V1 ≤
dim V2 .
Proposition 4.14
Let V1 and V2 be subspaces of a vector space over a field K. Then,
Definition 4.6
Let V1 be a subspace of the vector space V over a field K. The orthogonal
complement of V1 , denoted by
V1⊥ ,
is the set {v ∈ V : v · v 1 = 0, for each v 1 ∈ V1 }.
Example 4.9
Let
V1 = {(x1 , x2 ) ∈ R2 : x1 = x2 }.
Then, V1 is a subspace of R2 with dim V1 = 1. Moreover,
Proposition 4.15
Let V1 be a subspace of the vector space V over a field K. Then,
• V1⊥ is a subspace of V ;
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
110 CHAPTER 4. COMPUTING OPTIMIZERS
• (V1⊥ )⊥ = V1 .
Proposition 4.16
Let A be an m × n-matrix, x ∈ Rn and b ∈ Rm such that Ax = b. Then,
b = x1 a•1 + · · · + xn a•n .
Definition 4.7
An m × m-matrix A is nonsingular whenever
{a•1 , . . . , a•m }
is linearly independent in Rm .
Example 4.10
Let A be the following matrix
−1 1 1
2 −1 − 23 .
1 3 1
1 3 1
Hence,
−α1 + α2 + α3
= 0
2α1 − α2 − 32 α3 = 0
α1 + 3α2 + α3 = 0.
Thus,
α1
= 12 α3
0α3 = 0
= − 12 α3 .
α2
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
4.5. RELEVANT BACKGROUND 111
1 3 1 0
Therefore, A is singular.
Proposition 4.17
An m × m-matrix A is nonsingular if and only if there is an m × m-matrix B
such that
AB = BA = I,
where I is the identity matrix.
Notation 4.4
Let A be an n × n-matrix and i, j ∈ N . We denote by
Ai,j
the (n − 1) × (n − 1)-matrix obtained from A by removing line i and column j.
Definition 4.8
Let A be an n × n-matrix. The determinant of A denoted by
det A
is
1 if n = 0
Xn
(−1)i+j aij det Ai,j otherwise
j=1
for a fixed i ∈ M .
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
112 CHAPTER 4. COMPUTING OPTIMIZERS
Proposition 4.18
Let A be an n × n-matrix. Then,
1 if n = 0
det A = X n
(−1)i+j aij det Ai,j otherwise,
i=1
for a fixed j ∈ M .
Definition 4.9
Let A be an n × n-matrix and i, j ∈ N . The (i, j)-minor of A, denoted by
A
Mi,j ,
is
det Ai,j .
Moreover, the (i, j)-cofactor of A, denoted by
A
Ci,j ,
is (−1)i+j Mi,j
A
.
Remark 4.2
Observe that, given an n × n-matrix A, we have
1 if n = 0
n
det A = X A
aij Ci,j otherwise,
j=1
for a fixed i ∈ M .
This remark explains why the Laplace’s Expansion Formula is also known
as the Cofactor Expansion. The following characterization of the determinant
is known as the Leibniz’s Formula (see [7]).
Proposition 4.19
Let A be an n × n-matrix. Then,
X n
Y
det A = sgn(p) aip(i) ,
p∈P i=1
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
4.5. RELEVANT BACKGROUND 113
(an even permutation is obtained from the identity permutation as the com-
position of an even number of exchanges of two elements and similarly for odd
permutations).
Proposition 4.20
Let A and B be n × n-matrices and I the identity n × n-matrix. Then,
• det I = 1;
• det(αA) = αn det A;
1
• det A−1 = whenever det A 6= 0.
det A
Proposition 4.21
An n × n-matrix A is nonsingular if and only if det A 6= 0.
Proposition 4.22
Let A be a nonsingular n × n-matrix. Then,
1
A−1 = (cof A)T ,
det A
where
A
cof A = (Ci,j )i,j=1,...,n
is the matrix of cofactors of A.
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
114 CHAPTER 4. COMPUTING OPTIMIZERS
Proposition 4.23
Let A be a nonsingular n × n-matrix and x, b ∈ Rn such that x = A−1 b. Then,
for each j = 1, . . . , n,
det [A]jb
xj = ,
det A
where [A]jb is the matrix obtained from A by replacing column j by b.
Definition 4.10
The rank of a matrix A, denoted by
rank(A),
is the number of elements of the largest linearly independent set of rows, which
coincides with the number of elements of the largest linearly independent set
of columns.
Example 4.11
Let A be the following matrix
1 0
2 3 .
1 2
Then, rank(A) is 2 since the set composed by the two columns is linearly
independent.
Definition 4.11
Given an m × n-matrix A, the set
Proposition 4.24
The span of an m × n-matrix A is a subspace of the vector space Rm and the
dimension of span(A) is rank(A).
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
4.5. RELEVANT BACKGROUND 115
It is possible to establish the relationship between the rank and the number
of columns of a matrix. Before that, we need to introduce the concept of kernel
of a matrix.
Definition 4.12
The kernel of an m × n-matrix A, denoted by
ker(A),
Theorem 4.3
Given a matrix A, then
Example 4.12
Observe that
−2 1
ker = {(x1 , x2 ) ∈ R2 : x2 = 2x1 }.
2 −1
since
−2 1
dim ker =1
2 −1
and the number of columns is 2.
Proposition 4.25
Let A be an m × n-matrix. Then, {a•1 , . . . , a•n }⊥ = ker(A).
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
116 CHAPTER 4. COMPUTING OPTIMIZERS
Proposition 4.26
Let V be a k-dimensional subspace of Rn and B be a basis of V ⊥ . Then,
V = ker(E),
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
Chapter 5
Geometric View
5.1 Admissibility
The goal of this section is to introduce several notions concernbing affine spaces
that are relevant to standard optimization problems.
We start by introducing the affine space (see Definition 5.12) useful for
optimization.
Proposition 5.1
The pair
(Rn , Θ),
where Θ is the map
(u, w) 7→ w − u,
is an affine space over Rn .
Proof:
(1) Θv : Rn → Rn is a bijection for each v ∈ Rn . Indeed:
(a) Injectivity. Let u 6= w. Then, u − v 6= w − v. Therefore, Θv (u) 6= Θv (w).
117
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
118 CHAPTER 5. GEOMETRIC VIEW
The objective now is to define the smallest affine subspace containing a set.
Definition 5.1
The smallest affine subspace of (Rn , Θ) containing W , denoted by
A(W ),
Example 5.1
Observe that A({u}) = {u} since {u} is an affine subspace, by Example 5.9.
Moreover, A(∅) = ∅ since ∅ is an affine subspace (see Definition 5.13).
Definition 5.2
An affine combination of points u, w ∈ Rn is a point of the form
αu + (1 − α)w,
for some α ∈ R.
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
5.1. ADMISSIBILITY 119
Example 5.2
The point (3, 6) is an affine combination of the points (2, 3) and (1, 0). Indeed:
" # " # " #
3 2 1
=2 − .
6 3 0
Definition 5.3
A set W ⊆ Rn is closed under affine combinations if
αu + (1 − α)w ∈ U
whenever u, w ∈ W and α ∈ R.
Exercise 5.1
Show that W ⊆ Rn is closed under affine combinations if and only if
k
X
αi wi ∈ W
i=1
Pk
for every w1 , . . . , wk ∈ W and α1 , . . . , αk ∈ R such that i=1 αi = 1.
Proposition 5.2
Any subset of Rn is closed under affine combinations if and only if it is an affine
subspace of (Rn , Θ).
Proof:
Let U ⊆ Rn .
(→) Assume that U is closed under affine combinations. There are two cases:
(1) U = ∅. Then, U is an affine subspace of (Rn , Θ).
(2) U 6= ∅. Take t ∈ U . We show that
Θt (U ) = {x − t : x ∈ U }
is a subspace of Rn .
(a) 0 ∈ Θt (U ), since 0 = t − t;
(b) If v ∈ Θt (U ), then αv ∈ Θt (U ).
Assume that v ∈ Θt (U ). By definition of Θt (U ), we have v = x − t for some
x ∈ U . Then, αv = α(x − t). Thus,
α(x − t) = α(x − t) + (1 − α)(t − t) = αx + (1 − α)t − t.
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
120 CHAPTER 5. GEOMETRIC VIEW
Observe that
αx + (1 − α)t ∈ U
because U is closed under affine combinations. Hence, α(x − t) ∈ Θt (U ).
(c) If v 1 , v 2 ∈ Θt (U ) then v 1 + v 2 ∈ Θt (U ).
Assume that v 1 , v 2 ∈ Θt (U ). By definition of Θt (U ), there exist x1 , x2 ∈ U
such that v 1 = x1 − t and v 2 = x2 − t. Therefore, we must show that
v 1 + v 2 = (x1 + x2 ) − (t + t) ∈ Θt (U ).
Since
1 1 1 1 1
(x + x2 ) = x + 1− x2 ,
2 2 2
then
1 1
(x + x2 ) ∈ U
2
since U is closed under affine combinations. Hence,
1 1 1
(x + x2 ) − (t + t) ∈ Θt (U ).
2 2
Thus,
1 1 1
(x1 + x2 ) − (t + t) = 2 (x + x2 ) − (t + t) ∈ Θt (U ),
2 2
using (b).
(←) Assume that U is an affine subspace of (Rn , Θ). Then, let t ∈ U be such
that
Θt (U )
is a subspace of Rn . Let u, w ∈ U and α ∈ R. Then,
that is,
αu + (1 − α)w − t ∈ Θt (U ).
Therefore, αu + (1 − α)w ∈ U . QED
Example 5.3
The set {x ∈ Rn : x ≥ 0} is not closed for affine combinations as the following
case shows. Observe that
2e1 − e2 ∈
/ {x ∈ Rn : x ≥ 0},
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
5.1. ADMISSIBILITY 121
where e1 and e2 are vectors in the canonical basis of Rn (see Definition 1.20).
Therefore, by Proposition 5.2, the set {x ∈ Rn : x ≥ 0} is not an affine subspace
of (Rn , Θ).
Definition 5.4
The affine hull of W ⊆ Rn , denoted by
A(W ),
is the set
( k k
)
X X
αi wi : w1 , . . . , wk ∈ W, α1 , . . . , αk ∈ R, αi = 1 .
i=1 i=1
Proposition 5.3
Let W ⊆ Rn . Then
A(W ) = A(W ).
Proof:
(⊆) Let U be an affine subspace containing W . We must show that A(W ) ⊆ U .
Observe that, by Proposition 5.2, U is closed under affine combinations. Since
U contains W then U is closed under affine combinations of elements of W .
(⊇) Observe that W ⊆ A(W ). Moreover, by Proposition 5.2, A(W ) is an affine
subspace, since A(W ) is closed under affine combinations, by Exercise 5.1.
Then, A(W ) ⊆ A(W ), by definition of A(W ). QED
The following result states that the set of solutions of a system of equations
is an affine subspace (see Definition 5.13) over (Rn , Θ).
Proposition 5.4
0
Let A0 be an m0 × n0 -matrix, b0 ∈ Rm ,
0
U = {x ∈ Rn : A0 x = b0 }
0
and t ∈ U . Then, Θt (U ) = ker(A0 ). So, U is an affine subspace of (Rn , Θ).
Proof:
Indeed:
(⊆) Assume that v ∈ Θt (U ). Then, v = x − t, where x ∈ U . Hence, A0 v =
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
122 CHAPTER 5. GEOMETRIC VIEW
Definition 5.5
Let P be a standard optimization problem where A is an m × n-matrix. The
matrix and the vector of implicit equality restrictions of P , denoted by
A= and b= ,
are " # " #
A b
and ,
I= 0
respectively, where I = is the submatrix of I obtained by removing each line
with index in
N > = {j ∈ N : xj > 0 for some x ∈ XP }.
Example 5.4
Consider the following standard optimization problem P :
min −2x1 − 5x2
x
x1 + x2 + x3 = 1
−x1 − x2 + x4 = −1
x ≥ 0.
Observe that
1 1
, , 0, 0 ∈ XP
2 2
and so 1, 2 ∈ {1, 2, 3, 4}> . Then, x3 + x4 = 0. Therefore, x3 = x4 = 0. Hence,
{1, 2, 3, 4}> = {1, 2}.
So,
1 1 1 0 1
−1 −1 0 1 −1
A= = and b= = .
0 0 1 0 0
0 0 0 1 0
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
5.1. ADMISSIBILITY 123
Exercise 5.2
Let P be a standard optimization problem where A is an m × n-matrix. Show
that
XP = {x ∈ Rn : A= x = b= and xj ≥ 0 for every j ∈ N > }.
Proposition 5.5
Let P be a standard optimization problem where A is an m × n-matrix with
XP 6= ∅. Then, there is x ∈ XP such that xj > 0 for every j ∈ N > .
Proof:
Let wj ∈ XP be a vector such that wjj > 0 and αj ∈]0, 1[ for every j ∈ N >
such that X
αj = 1.
j∈N >
Let X
x= αj wj .
j∈N >
Exercise 5.3
Let W1 , W2 ⊆ Rn be such that W1 ⊆ W2 . Show that A(W1 ) ⊆ A(W2 ).
Proposition 5.6
Let P be a standard optimization problem where A is an m × n-matrix with
XP 6= ∅. Then
A(XP ) = {y ∈ Rn : A= y = b= }.
Proof:
(⊆) Observe that
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
124 CHAPTER 5. GEOMETRIC VIEW
A({y ∈ Rn : A= y = b= }) = {y ∈ Rn : A= y = b= },
{k ∈ N > : yk 6= xk } = ∅.
{k ∈ N > : yk 6= xk } 6= ∅.
Then,
β < 0.
There are two cases:
(a) xj − yj < 0. Hence,
(b) xj − yj = 0. Then,
zjβ = xj > 0.
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
5.1. ADMISSIBILITY 125
β > 0.
Thus,
xj
β(yj − xj ) ≥ (yj − xj ).
xj − yj
Therefore,
zjβ > 0.
(c) xj − yj = 0. Then,
zjβ = xj > 0.
It remains to show that
y ∈ A(XP ).
Observe that
β 6= 0
and
1 β β−1
y= z + x.
β β
Thus, y is an affine combination of x and z β in XP . So, y ∈ A(XP ). QED
We now define the dimension of a set in the context of affine spaces. Recall
the concept of dimension of an affine subspace in Definition 5.15.
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
126 CHAPTER 5. GEOMETRIC VIEW
Definition 5.6
The dimension of W ⊆ Rn , denoted by
dim W,
is dim A(W ).
Remark 5.1
Observe that
dim ∅ = −1
taking into account Definition 5.15 and Example 5.1. Moreover, dim ∅ ≤
dim W for every W ⊆ Rn .
Proposition 5.7
Consider the standard optimization problem
min
x
cx
P = Ax = b
x ≥ 0,
Proof:
If XP = ∅ then dim XP = −1 (see Remark 5.1). Otherwise,
dim XP = dim A(XP ) = dim A(XP ) = dim {x ∈ Rn : A= x = b= }
by Proposition 5.3 and Proposition 5.6. Thus,
dim XP = dim ker(A= ) = n − rank(A= ),
by Proposition 5.4 and Theorem 4.3. QED
5.2 Optimizers
The main objective of this section is to show that basic admissible vectors are
vertices of a relevant convex polyhedron.
We now concentrate on the important notion of hyperplane relevant to
define polyhedron and faces.
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
5.2. OPTIMIZERS 127
Definition 5.7
A hyperplane in Rn is an affine subspace of (Rn , Θ) with dimension n − 1.
Exercise 5.4
Show that any hyperplane is non-empty.
Proposition 5.8
Let U 6= ∅ be an affine subspace of (Rn , Θ) with dimension k. Then,
U = {x ∈ Rn : Ex = Et},
Proof:
Observe that, for some t ∈ U , Θt (U ) is a subspace of Rn with dimension k.
Thus,
Θt (U )⊥
is a subspace of Rn with dimension n − k (Proposition 4.15). Let
BΘt (U )⊥
{v ∈ Rn : Ev = 0} = (Θt (U )⊥ )⊥ = Θt (U ). (†)
U = {x ∈ Rn : Ex = Et}.
(⊆) Let u ∈ U . Then, u = Θt (u) + t. Thus, Eu = EΘt (u) + Et. Hence, by (†),
Eu = Et.
(⊇) Take x ∈ Rn such that Ex = Et. Then, E(x − t) = 0. Hence, (x − t) ∈
Θt (U ) by (†). Therefore, x ∈ U . QED
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
128 CHAPTER 5. GEOMETRIC VIEW
Proposition 5.9
Let H ⊆ Rn be a hyperplane and t ∈ H. Then,
H = {x ∈ Rn : v · x = v · t},
Proof:
Observe that Θt (H) is a subspace of Rn with dimension n − 1. Then,
Θt (H)⊥
Proposition 5.10
Let v ∈ Rn and r ∈ R. Then, {x ∈ Rn : v · x = r} is a hyperplane if and only
if v 6= 0.
Proof:
(→) Assume that v = 0. There are two cases:
(1) r = 0. Then, {x ∈ Rn : v · x = r} = Rn is not a hyperplane, since
dim {x ∈ Rn : v · x = r} = dim Rn = n 6= n − 1.
dim {x ∈ Rn : v · x = r} = dim ∅ = −1 6= n − 1.
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
5.2. OPTIMIZERS 129
Proposition 5.11
Let H ⊆ Rn . Then, H is a hyperplane if and only if
H = {x ∈ Rn : v · x = r}
Proof:
Immediate by Proposition 5.9 and Proposition 5.10. QED
Definition 5.8
Let {x ∈ Rn : v · x = r} be a hyperplane. The sets
{x ∈ Rn : v · x ≥ r} and {x ∈ Rn : v · x ≤ r}
are the upper semi-space and the lower semi-space of the hyperplane, respec-
tively.
Proposition 5.12
The semi-spaces of a hyperplane are closed sets.
Proof:
Let {x ∈ Rn : v · x = r} be a hyperplane. The proof that
{x ∈ Rn : v · x ≥ r}
is a closed set is similar to the one in Proposition 2.9, by choosing the linear
map x 7→ v · x. Similarly for {x ∈ Rn : v · x ≤ r}. QED
Example 5.5
The set {x ∈ Rn : xj ≥ 0} is an upper semi-space of the hyperplane
{x ∈ Rn : xj = 0}
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
130 CHAPTER 5. GEOMETRIC VIEW
Exercise 5.5
Show that every semi-space is convex.
Proposition 5.13
The class of convex subsets of Rn is closed under intersection.
Proof:
Let {U k }k∈K be a family of convex sets and
\
U= U k.
k∈K
Exercise 5.6
Show that every hyperplane is convex.
The next example shows that the union of convex sets is not always a convex
set.
Example 5.6
Let H1 and H2 be the hyperplanes
{x ∈ R2 : v · x = 2} and {x ∈ R2 : v · x = 1},
respectively, where v is (0, 1). Then, H1 and H2 are convex sets (see Exer-
cise 5.6). Observe that (1, 2), (1, 1) ∈ H1 ∪ H2 . Then,
3
1,
2
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
5.2. OPTIMIZERS 131
Definition 5.9
A convex polyhedron is a finite intersection of semi-spaces. A convex polytope
is a bounded convex polyhedron.
Exercise 5.7
Show that a convex polyhedron is a convex set.
Proposition 5.14
The set of admissible vectors of a standard optimization problem is a convex
polyhedron and so is a convex set.
Proof:
Let P be a standard optimization problem with m × n-matrix A. Observe that,
for each j = 1, . . . , n,
{x ∈ Rn : xj ≥ 0}
is a semi-space of Rn (see Example 5.5). On the other hand, for every i =
1, . . . , m, the i-th line of matrix A, denoted ai• , is non-zero (see Remark 1.2).
Thus, for each i = 1, . . . , m,
{x ∈ Rn : ai• x = bi }
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
132 CHAPTER 5. GEOMETRIC VIEW
Definition 5.10
Let X be a convex polyhedron in Rn . A set F ⊆ X is a face of X by the
hyperplane H = {x ∈ Rn : v · x = r} whenever
(
v · x = r for every x ∈ F
v · x > r for every x ∈ (X \ F ).
A vertex of X by H is a face of X by H with dimension 0. We say that x ∈ X
is a vertex of X whenever there exists an hyperplane H such that x is a vertex
of X by H.
Proposition 5.15
Let U ⊆ Rn . Then, dim U = 0 if and only if U is a singleton set.
Proof:
(←) Observe that, by Example 4.7 and Example 5.9,
dim {u} = dim A({u}) = dim Θu ({u}) = dim {0}R = 0.
(→) Assume that dim U = 0. Then,
Θt (U ) = {0}R .
Taking into account that Θt is a bijection, U is a singleton set. QED
So, in the sequel, for simplifying the presentation, we may identify each
vertex with its unique element.
Example 5.7
Consider the standard optimization problem
min −2x1 − x2
x
3x1 − x2 + x4 = 6
P =
−x1 + 3x2 + x3 = 6
x≥0
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
5.2. OPTIMIZERS 133
Proposition 5.16
Let P be a standard optimization problem. Then,
x is a vertex of XP iff x is a basic vector of XP .
Proof:
Indeed:
(→) Assume that x is a vertex of XP . Then, there exist a non-zero v ∈ Rn and
r ∈ R such that v · x = r and v · y > r for every y ∈ (XP \ {x}). Consider the
standard optimization problem P1 :
min
v·y
y
Ay = b
y ≥ 0.
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
134 CHAPTER 5. GEOMETRIC VIEW
x̃ = x.
Thus, x is basic.
(←) Assume that x is a basic vector of XP . Let B ⊆ N be a basis for x. Then,
xj = 0 for every j ∈ (N \ B). Let w ∈ Rn be such that
(
0 if j ∈ B
wj =
1 otherwise.
Then,
(1) w · x = 0 by the definition of w.
(2) w · y > 0 for each y ∈ XP \ {x}. Observe that
X
w·y = yj .
j∈N \B
Then, to show that w · y > 0, we must prove that there exists j ∈ N \ B such
that
yj > 0.
Assume, by contradiction, that yj = 0, for every j ∈ N \ B. Hence,
yj = xj , for every j ∈ N \ B.
AB zB = b
yB = xB .
Definition 5.11
A hyperplane {x ∈ Rn : v · x = r} is m-basic, for 0 < m < n, whenever v has
m components with value 0 and n − m components with value 1 and r is 0.
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
5.2. OPTIMIZERS 135
Proposition 5.17
Let P be a standard optimization problem with an m × n-matrix. Then, x is
a vertex of XP if and only if x is a vertex of XP by an m-basic hyperplane.
Proof:
(→) Assume that x is a vertex of XP . Then, by Proposition 5.16, x is a basic
admissible vector. Let B be the corresponding (index) basis. Then, xj = 0 for
every j ∈ N \ B. Take v such that
(
1 if j ∈ N \ B
vj =
0 otherwise.
AB yB = Ay = b = Ax = AB xB .
Example 5.8
Consider the standard optimization problem
min −2x1 − 5x2
x
−2x1 + 2x2 + x3 = 2
P =
x1 + x4 = 2
x ≥ 0.
where v ∈ R4 is a vector having 2 components with value 0 and the others with
value 1. Observe that a possible vertice x of XP by the hyperplane is such that
xj = 0 whenever vj = 1. We have the following candidates for v:
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
136 CHAPTER 5. GEOMETRIC VIEW
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
5.3. SOLVED PROBLEMS AND EXERCISES 137
Theorem 5.1
Let P be a standard optimization problem. Assume that XP is non-empty and
that the objective map is bounded from below in XP . Then, there is a vertex
of XP in SP .
Proof:
By Theorem 4.1, under the assumptions, there is a basic admissible vector in
SP . So, by Proposition 5.16, there is a vertex of XP in SP . QED
Solution:
We start by showing that
A({x ∈ Rn : x ≥ 0}) = Rn .
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
138 CHAPTER 5. GEOMETRIC VIEW
(2) u 6≥ 0. Then,
u = (−1)u + 2u,
where u, u ∈ Rn are such that
(
−uj if uj < 0
uj =
uj otherwise
and (
0 if uj < 0
uj =
uj otherwise
for each j = 1, . . . , n. Since u ≥ 0 and u ≥ 0, we have
u, u ∈ {x ∈ Rn : x ≥ 0} ⊆ A({x ∈ Rn : x ≥ 0}).
u ∈ A({x ∈ Rn : x ≥ 0})
by Proposition 5.2.
Therefore, the dimension of A({x ∈ Rn : x ≥ 0}) is n because dim Rn is n, see
Example 4.8. Hence, dim {x ∈ Rn : x ≥ 0} = n. /
Problem 5.2
Consider the following standard optimization problem
min −2x1 − 5x2
x
−2x1 + 2x2 + x3 = 2
P =
x1 + x4 = 2
x ≥ 0.
Solution:
By Proposition 5.14, XP is a convex polyhedron. We now show that XP is
bounded. Let x ∈ XP . Then,
• x1 ≤ 2 since x1 + x4 = 2 and 0 ≤ x4 ;
• x4 ≤ 2 because x1 + x4 = 2 and 0 ≤ x1 ;
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
5.3. SOLVED PROBLEMS AND EXERCISES 139
Therefore,
p
kxk = x21 + x22 + x23 + x24
√
≤ 22 + 32 + 62 + 22 .
So, XP is a polytope. /
Problem 5.3
Let P be a standard optimization problem and x ∈ XP . Then, x is a vertex of
XP if and only if for every y, z ∈ XP , x = y = z whenever there exists α ∈]0, 1[
such that x = αy + (1 − α)z.
Solution:
Using Proposition 5.16, x is a vertex of XP if and only if x is basic vector of
XP . On the other hand, by Proposition 4.7, x is a basic vector of XP if and
only if for every y, z ∈ XP , x = y = z whenever there exists α ∈]0, 1[ such that
x = αy + (1 − α)z. /
Problem 5.4
Let j ∈ N . Show, using the definition, that
{x ∈ Rn : xj = 0}
is a hyperplane.
Solution:
(1) {x ∈ Rn : xj = 0} is an affine subspace of (Rn , Θ); that is,
This book has been purchased as a PDF version from the Publisher and is for the purchaser's sole use.
140 CHAPTER 5. GEOMETRIC VIEW
(αx + (1 − α)t)j = 0.
Problem 5.5
Let W ⊆ Rn be a non-empty set, t ∈ W and V a subspace of Rn such that
Θt (W ) ⊆ V . Show that there is an affine subspace U such that Θt (U ) = V
and W ⊆ U .
Solution:
Take
U = {v + t : v ∈ V }.
Then, it is immediate that Θt (U ) = V . Moreover, W ⊆ U as we show now.
Assume that w ∈ W . Then, Θt (w) = w − t ∈ V since Θt (W ) ⊆ V . Hence,
w ∈ U because w = (w − t) + t. /
Problem 5.6
Let W ⊆ Rn . Show that
for every t ∈ W .
Solution:
Let t ∈ W .
(⊆) Assume that v ∈ Θt (A(W )). Then, v = u − t for some u in A(W ).
Problem 5.7
Show that the set of minimizers for a standard optimization problem is convex.
Solution:
Let P be a standard optimization problem. If SP = ∅ then SP is a convex set
(see Exercise 3.1). Otherwise, assume that s′, s″ ∈ SP are minimizers. That
is, s′, s″ ∈ XP and
cs′ = cs″ = min{cx : x ∈ XP }.
Take s = αs′ + (1 − α)s″ with α ∈ [0, 1]. Observe that s ∈ XP since XP is a
convex set (see Proposition 5.14). On the other hand, for every x ∈ XP ,
cs = c(αs′ + (1 − α)s″)
   = αcs′ + (1 − α)cs″
   ≤ αcx + (1 − α)cx
   = cx
using the fact that α and 1 − α are non-negative. Therefore, s ∈ SP . /
Exercise 5.8
Show that U ⊆ Rn is convex if and only if U is the set of all linear combinations
of the form
Σ_{j=1}^{k} αj uj ,
where k ∈ N+ , u1 , . . . , uk ∈ U , αj ≥ 0 for j = 1, . . . , k, and
Σ_{j=1}^{k} αj = 1.
Exercise 5.9
Let X be a convex polyhedron in Rn . Show that F ⊆ X is a face of X by
hyperplane {x ∈ Rn : v · x = r} if and only if
v · x = r for every x ∈ F
v · x < r for every x ∈ (X \ F ).
Exercise 5.10
Consider the canonical optimization problem
P = max_x 2x1 + 5x2
    −(4/3)x1 − x2 ≤ −2
    2x1 + x2 ≤ 10
    x ≥ 0.
5.4 Relevant Background
Definition 5.12
Let V be a vector space over a field with characteristic 0. An affine space over
V is a pair
(A, Θ),
where A is a set and Θ : A × A → V is a map such that:
• the map Θa1 : A → V , with Θa1 (a) = Θ(a1 , a), is a bijection, for every
a1 ∈ A;
• Θ(a1 , a3 ) = Θ(a1 , a2 ) + Θ(a2 , a3 ), for every a1 , a2 , a3 ∈ A (known as
Chasles’ Relation).
Proposition 5.18
Let (A, Θ) be an affine space over V . Then,
Proof:
Both equalities are consequence of Chasles’ Relation. Indeed, observe that
Proposition 5.19
Let (A, Θ) be an affine space over V . Then,
Proof:
Using Chasles' Relation, we have that
Θ(a, b) = Θ(a, a′) + Θ(a′, b)
and
Θ(a′, b) = Θ(a′, b′) + Θ(b′, b).
Thus,
Θ(a, b) = Θ(a, a′) + Θ(a′, b′) + Θ(b′, b);
that is,
Θ(a, b) − Θ(a′, b′) = Θ(a, a′) + Θ(b′, b).
Therefore, by Proposition 5.18,
Definition 5.13
Let (A, Θ) be an affine space over V and U ⊆ A. We say that U is an affine
subspace of (A, Θ) if it is either empty or there is t ∈ U such that
Θt (U )
is a subspace of V .
Example 5.9
Observe that any singleton subset {u} of Rn is an affine subspace since
Θu ({u}) = {0}, which is a subspace of Rn .
Remark 5.2
Let V be a subspace of Rn and t ∈ Rn . Then,
Θt−1 (V ) = {v + t : v ∈ V }
We now show that the class of affine subspaces of an affine space is closed
under intersection.
Proposition 5.20
Let {Uj }j∈J be a family of affine subspaces of the affine space (A, Θ) over a
vector space V . Then,
\
U= Uj
j∈J
for each u ∈ U .
Proof:
Let
U = ∩_{j∈J} Uj .
Θu (Uj )
is a subspace of V . Hence,
∩_{j∈J} Θu (Uj )
So,
Θu (U ) ⊆ ∩_{j∈J} Θu (Uj ).
(⊇) Let v ∈ ∩_{j∈J} Θu (Uj ). Then,
On the other hand, the union of affine subspaces is not always an affine
subspace.
Example 5.10
Consider the affine subspaces H1 and H2 introduced in Example 5.6. We
showed that H1 ∪ H2 is not closed under convex combinations. Therefore,
H1 ∪ H2 is not closed under affine combinations and so is not an affine subspace
by Proposition 5.2.
Definition 5.14
The dimension of an affine space (A, Θ) over V is the dimension of V .
Definition 5.15
Let U be an affine subspace of the affine space (A, Θ) over V . The dimension
of U , denoted by
dim U
is the dimension of the subspace Θt (U ) of V when U ≠ ∅ and is set to be −1
otherwise.
Chapter 6
Duality
Definition 6.1
Let P = (A, b, c) be a canonical optimization problem where A is an m × n-
matrix and y ∈ Rm . The parameterized canonical optimization problem over P
and y is defined as follows:
max_x cx + yT (b − Ax)
  x ≥ 0.
Proposition 6.1
Let P = (A, b, c) be a canonical optimization problem where A is an m × n-
matrix and y ∈ Rm . For every z ∈ XP , we have
if y ≥ 0 then cz ≤ max_{x≥0} cx + yT (b − Ax).
Proof:
Let z ∈ XP . Hence,
cz ≤ cz + yT (b − Az) ≤ max_{x≥0} cx + yT (b − Ax),
where the first inequality holds because y ≥ 0 and Az ≤ b, and the second
because z ≥ 0. QED
Notation 6.1
Let P = (A, b, c) be a canonical optimization problem. We denote by
P
the optimization problem
min_y max_{x≥0} cx + yT (b − Ax)
  y ≥ 0.
Proposition 6.2
Let P = (A, b, c) be a canonical optimization problem and Q the linear opti-
mization problem
min_y bT y
  AT y ≥ cT
  y ≥ 0.
Proof:
(1) SQ ⊆ SP . Let r ∈ SQ . Then, AT r ≥ cT , r ≥ 0 and
bT r ≤ bT y
Therefore,
max_{x≥0} (c − yT A)x = +∞.
Then,
max_{x≥0} cx + yT (b − Ax) = bT y + max_{x≥0} (c − yT A)x = +∞.
Thus,
max_{x≥0} cx + rT (b − Ax) ≤ max_{x≥0} cx + yT (b − Ax).
AT r ≥ cT .
Thus,
max_{x≥0} cx + rT (b − Ax) = +∞.
Hence,
bT y + max_{x≥0} (c − yT A)x = max_{x≥0} cx + yT (b − Ax) = +∞.
Therefore,
c − yT A ≰ 0,
which contradicts the hypothesis that y ∈ XQ .
Thus,
bT r = bT r + max_{x≥0} (c − rT A)x
     = max_{x≥0} cx + rT (b − Ax)
     ≤ max_{x≥0} cx + yT (b − Ax)
     = bT y + max_{x≥0} (c − yT A)x
     = bT y.
So, r ∈ SQ . QED
Notation 6.2
In the sequel, we refer to the problem Q introduced in Proposition 6.2 as the
dual of P . The set of admissible vectors for Q is denoted by
YQ
rather than XQ , and the set of optimizers for Q is denoted by
RQ
rather than SQ .
Example 6.1
Consider the canonical optimization problem P introduced in Example 1.12.
The dual problem Q of P is as follows:
min_y 6y1 + 6y2
  3y1 − y2 ≥ 2
  −y1 + 3y2 ≥ 1
  y ≥ 0.
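The primal–dual pair above can also be checked numerically. The following sketch is only an illustration (it assumes NumPy and SciPy are available and uses the data A = [3 −1; −1 3], b = (6, 6), c = (2, 1) of Example 1.12; the variable names are ours, not the book's): it solves P and its dual Q with scipy.optimize.linprog and compares the optimal values.

# A small numerical check of the primal/dual pair of Example 6.1; linprog
# minimizes, so the canonical (max) problem is solved by negating c.
import numpy as np
from scipy.optimize import linprog

A = np.array([[3.0, -1.0], [-1.0, 3.0]])
b = np.array([6.0, 6.0])
c = np.array([2.0, 1.0])

# Primal P: max cx subject to Ax <= b, x >= 0  (solved as min -cx)
primal = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)

# Dual Q: min b^T y subject to A^T y >= c^T, y >= 0  (>= rewritten as <=)
dual = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 2)

print(primal.x, -primal.fun)   # expected optimizer (3, 3) with value 9
print(dual.x, dual.fun)        # expected optimizer (7/8, 5/8) with value 9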
Exercise 6.1
Let Q be the dual of a canonical optimization problem. Show that if YQ ≠ ∅
and the objective map of Q is bounded from below in YQ then RQ ≠ ∅.
The next result, called the Canonical Weak Duality Theorem, shows that
the value of the objective map of Q for each vector in YQ is an upper bound of
the value of the objective map of P for any vector in XP .
Theorem 6.1
Let P be a canonical optimization problem and Q the dual of P . Then,
cx ≤ bT y
for every x ∈ XP and y ∈ YQ .
Proof:
Let y ∈ YQ and x ∈ XP . Then, y ≥ 0 and cT ≤ AT y. Hence, c ≤ y T A. On
the other hand, x ≥ 0 and Ax ≤ b. Thus, cx ≤ y T Ax and y T Ax ≤ y T b. Hence,
cx ≤ y T b. Thus, cx ≤ bT y, since bT y ∈ R. QED
Proposition 6.3
Let P be a canonical optimization problem and Q the dual of P . Then,
• if YQ ≠ ∅ then the objective map of P in XP has an upper bound;
• if XP ≠ ∅ then the objective map of Q in YQ has a lower bound.
Proof:
We only prove the first statement since the other one is similar. Assume that
YQ ≠ ∅. There are two cases to consider:
(1) XP = ∅, in which case {cx : x ∈ XP } = ∅. Hence, each real number is an
upper bound of that set.
(2) XP ≠ ∅, in which case the thesis follows from Theorem 6.1. QED
The next result, called the Optimality Criterion, provides a sufficient con-
dition for the existence of optimizers.
Proposition 6.4
Let P be a canonical optimization problem and Q the dual of P . If x ∈ XP ,
y ∈ YQ and cx = bT y then x ∈ SP and y ∈ RQ .
Proof:
Let x ∈ XP and y ∈ YQ be such that cx = bT y. Then, by Theorem 6.1, the
following hold:
1. cx ≤ bT y′ for every y′ ∈ YQ ;
2. cx′ ≤ bT y for every x′ ∈ XP .
In other words:
1. bT y ≤ bT y′ for every y′ ∈ YQ ;
2. cx′ ≤ cx for every x′ ∈ XP .
Then, y ∈ RQ and x ∈ SP . QED
Example 6.2
Consider the canonical optimization problem P introduced in Example 1.12.
Recall the dual problem Q of P in Example 6.1. It is immediate that (3, 3) ∈
XP . We want to use duality to conclude that (3, 3) ∈ SP . To find an admissible
vector of Q, observe that the system of linear equations:
3y1 − y2 = 2
−y1 + 3y2 = 1
has (7/8, 5/8) as the solution. Hence, (7/8, 5/8) ∈ YQ since this vector is also
non-negative. Moreover,
c (3, 3)T = 9 = bT (7/8, 5/8)T .
Therefore, (3, 3) ∈ SP and (7/8, 5/8) ∈ RQ by Proposition 6.4.
The following result, called the Lemma of Existence of Dual, states that
there is always an admissible vector y of the dual problem with objective value
as close as desired to cs when s is an optimizer of the canonical optimization
problem.
Proposition 6.5
Let P be a canonical optimization problem, Q the dual of P and s ∈ SP . Then,
for each ε > 0 there exists y ∈ YQ such that cs ≤ bT y < cs + ε.
Proof:
Let ε > 0. Given s ∈ SP , observe that the system
Ax ≤ b
cx ≥ cs
XPε = ∅.
• (v, z) ≠ 0;
• (v, z)T Aε ≥ 0T , and so AT v ≥ zcT ;
• (v, z)T bε < 0, and so bT v < z(cs + ε).
On the other hand, the problem
max_x c″x
  Ax ≤ (b, −cs)T
  x ≥ 0
Hence,
bT v ≥ zcs.
Thus,
zcs ≤ bT v < z(cs + ε).
Therefore, z > 0. Take
y = (1/z) v.
Then,
• y ∈ YQ , since
– y ≥ 0, because v ≥ 0 and z > 0;
– AT y ≥ cT , noticing that AT v ≥ zcT .
• cs ≤ bT y < (cs + ε), since zcs ≤ bT v < z(cs + ε).
Hence, for each ε > 0, we are able to find a y ∈ YQ in the conditions required
by the statement. QED
Proposition 6.6
Let P be a canonical optimization problem and Q the dual of P . Then,
• if XP 6= ∅ and YQ = ∅ then the objective map of P in XP has no upper
bound;
Proof:
We only show the first statement since the proof of the second statement is sim-
ilar. Let XP 6= ∅ and YQ = ∅. Assume, by contradiction, that there is an upper
bound of the objective map of P in XP . Then, from Theorem 4.2, we conclude
that SP 6= ∅. Therefore, invoking Proposition 6.5, YQ 6= ∅, contradicting the
hypothesis. QED
Theorem 6.2
Let P be a canonical optimization problem and Q the dual of P . Then,
cs = bT r
Proof:
(1) It is immediate since SP ⊆ XP and RQ ⊆ YQ .
(2) In this case, Proposition 6.6 implies that the objective map of P in XP has
no upper bound. Therefore, SP = ∅. On the other hand, RQ = ∅ because YQ
is empty.
(3) The argument is similar to that of the previous case.
(4) Since YQ 6= ∅, by Theorem 6.1, it follows that the objective map of P in
XP has an upper bound. Hence, by Theorem 4.2, we conclude that SP 6= ∅.
Similarly, we can show that RQ 6= ∅. It remains to prove that cs = bT r for
every s ∈ SP and r ∈ RQ .
Exercise 6.2
Show that the dual of a standard optimization problem is the linear optimization
problem
max_y bT y
  AT y ≤ cT .
Example 6.3
Consider the standard optimization problem P introduced in Example 1.13.
The dual problem Q of P is as follows:
max_{(y1 ,y2 )} 6y1 + 6y2
  3y1 − y2 ≤ −2
  −y1 + 3y2 ≤ −1
  y1 ≤ 0
  y2 ≤ 0.
Exercise 6.3
Let P be a standard optimization problem and Q the dual of P . Show that
bT y ≤ cx
for every x ∈ XP and y ∈ YQ .
Exercise 6.4
Let P be a standard optimization problem and Q the dual of P . Show that for
every x ∈ XP and y ∈ YQ , if
bT y = cx
then x ∈ SP and y ∈ RQ .
Example 6.4
Consider the standard optimization problem P introduced in Example 1.13 and
recall the dual problem Q of P in Example 6.3. We want to use Exercise 6.4 to
conclude that (3, 3, 0, 0) ∈ SP . Recall that (3, 3, 0, 0) ∈ XP (see Example 4.6).
To find an admissible vector of Q, observe that the system of linear equations:
(
3y1 − y2 = −2
−y1 + 3y2 = −1
has (−7/8, −5/8) as the solution. Hence, (−7/8, −5/8) ∈ YQ , since this vector
also satisfies the remaining inequalities y1 ≤ 0 and y2 ≤ 0. Moreover,
c (3, 3, 0, 0)T = −9 = bT (−7/8, −5/8)T .
Hence, (3, 3, 0, 0) ∈ SP and (−7/8, −5/8) ∈ RQ , by Exercise 6.4.
Exercise 6.5
Let P be a standard optimization problem and Q the dual of P . Show that
1. if XP = ∅ and YQ = ∅ then SP = ∅ and RQ = ∅;
2. if XP ≠ ∅ and YQ = ∅ then SP = ∅ and RQ = ∅ and the objective map
of P in XP has no lower bound;
3. if XP = ∅ and YQ ≠ ∅ then SP = ∅ and RQ = ∅ and the objective map
of Q in YQ has no upper bound;
Exercise 6.6
Show that the dual of the linear optimization problem P
max_x 0x
  Ax ≤ b
is the problem Q defined as follows:
min_y bT y
  AT y = 0
  y ≥ 0.
6.2 Complementarity
The objective of this section is to provide techniques for deciding whether or
not an admissible vector is an optimizer, using duality and slack variables.
Definition 6.2
Let P be a canonical optimization problem and Q the dual of P . Then,
d = x ↦ b − Ax : Rn → Rm and e = y ↦ AT y − cT : Rm → Rn
are called the slack maps for P and Q, respectively.
Remark 6.1
Observe that when x ∈ XP and d(x)i = 0, then the i-th line of A is active in
x (see Definition 3.5).
Notation 6.3
When x ∈ XP , d(x) is the slack of x. On the other hand, when y ∈ YQ , e(y)
is the slack of y.
Proposition 6.7
Let P be a canonical optimization problem, Q the dual of P , x ∈ XP and
y ∈ YQ . Then,
• d(x)T y + e(y)T x = bT y − cx;
• d(x)T y ≥ 0;
• e(y)T x ≥ 0.
Proof:
(1) d(x)T y + e(y)T x = bT y − cx since
The following result establishes one of the implications of the Slack Com-
plementarity Theorem.
Proposition 6.8
Let P be a canonical optimization problem, Q the dual of P , s ∈ SP and
r ∈ RQ . Then,
d(s)T r = 0 and e(r)T s = 0.
Proof:
Observe that, by Theorem 6.2, cs = bT r. So, by Proposition 6.7, the thesis
follows. QED
This result implies that if the i-th component of r is non-zero then the i-th
line of A is active in s; that is, d(s)i = 0 (see Remark 6.1). Similarly for e(r).
The following characterization of optimizers is known as the Slack Comple-
mentarity Theorem.
Theorem 6.3
Let P be a canonical optimization problem, Q the dual of P , x ∈ XP and
y ∈ YQ . Then, x ∈ SP and y ∈ RQ if and only if d(x)T y = 0 and e(y)T x = 0.
Proof:
(1) Assume that x ∈ SP and y ∈ RQ . Then, by Proposition 6.8, d(x)T y = 0
and e(y)T x = 0.
(2) Assume that d(x)T y = 0 and e(y)T x = 0. Then, by Proposition 6.7,
cx = bT y. Thus, by Proposition 6.4, x ∈ SP and y ∈ RQ . QED
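The theorem is easy to test numerically on the pair of Example 6.2. The sketch below (illustrative names, assuming NumPy) computes the slacks d(x) = b − Ax and e(y) = AT y − cT for x = (3, 3) and y = (7/8, 5/8) and checks that both complementarity products vanish.

# Checking the Slack Complementarity Theorem (Theorem 6.3) on the optimizers
# of Example 6.2: d(x)^T y and e(y)^T x must both be 0.
import numpy as np

A = np.array([[3.0, -1.0], [-1.0, 3.0]])
b = np.array([6.0, 6.0])
c = np.array([2.0, 1.0])

x = np.array([3.0, 3.0])
y = np.array([7 / 8, 5 / 8])

d = b - A @ x            # slack of x
e = A.T @ y - c          # slack of y
print(d @ y, e @ x)      # both expected to be 0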
Example 6.5
Consider the canonical optimization problem P in Example 1.12 and the dual
Q of P in Example 6.1. We check whether or not (3, 3) ∈ SP . Observe that
d(3, 3) = 0. On the other hand, the system
e(y)T (3, 3)T = 3(3y1 − y2 − 2) + 3(−y1 + 3y2 − 1) = 0
implies that
(∗) y1 = −y2 + 3/2.
Exercise 6.7
Consider the following optimization problem:
max_x 2x1 + x2
  −x1 + x2 ≤ 1
  x1 + x2 ≤ 3
  x1 − x2 ≤ 1
  x ≥ 0.
Show, using duality, that (2, 1) is an optimizer and that (1, 2) is not an opti-
mizer.
6.3 Equilibrium
The objective of this section is to provide another way to check whether or
not an admissible vector is an optimizer. Moreover, we address the problem of
knowing if there is a unique optimizer. Recall Notation 4.3.
Definition 6.3
Let P be a standard optimization problem, Q the dual of P , x ∈ Rn and
y ∈ Rm . The system of equations
(APx )T y = cTPx
is called the Equilibrium Condition for x and y.
Theorem 6.4
Let P be a standard optimization problem, Q the dual of P and x ∈ XP . Then,
the following statements hold:
• if x ∈ SP then RQ 6= ∅ and the Equilibrium Condition for x and y holds
for every y ∈ RQ .
• if there exists y ∈ YQ such that the Equilibrium Condition holds for x
and y then x ∈ SP .
Proof:
(1) Assume that x ∈ SP . Then, SP ≠ ∅. Thus, by Exercise 6.5, RQ ≠ ∅. Let
y ∈ RQ . Using Exercise 6.5, bT y = cx. That is,
cx − xT AT y = 0.
Therefore,
Σ_{j=1}^{n} xj (cj − Σ_{i=1}^{m} yi aij ) = 0.
Hence,
(†) Σ_{j∈Px} xj (cj − Σ_{i=1}^{m} yi aij ) = 0.
Furthermore,
cj − Σ_{i=1}^{m} yi aij = 0, for j ∈ Px .
Hence,
(‡) Σ_{j∈Px} (cj − Σ_{i=1}^{m} yi aij ) = 0.
Exercise 6.8
Let P be a standard optimization problem, Q the dual of P and x ∈ XP . Show
that
• if RQ 6= ∅ and the Equilibrium Condition for x and y holds for every
y ∈ RQ then x ∈ SP ;
• if x ∈ SP then there exists y ∈ YQ such that the Equilibrium Condition
for x and y holds.
Theorem 6.4 provides a new way to conclude that a given admissible vector
x is an optimizer. The idea is to solve the system of equilibrium equations
(APx )T y = cTPx
and after that check whether or not the solution is an admissible vector of the
dual problem.
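For the data of Example 1.13 this recipe takes only a few lines of linear algebra, as in the sketch below (illustrative names, assuming NumPy); Example 6.6 then carries out the same computation by hand.

# Equilibrium-equation recipe for the standard problem of Example 1.13:
# solve (A_{P_x})^T y = c^T_{P_x} and check dual admissibility A^T y <= c^T.
import numpy as np

A = np.array([[3.0, -1.0, 1.0, 0.0],
              [-1.0, 3.0, 0.0, 1.0]])
c = np.array([-2.0, -1.0, 0.0, 0.0])
x = np.array([3.0, 3.0, 0.0, 0.0])      # the admissible vector to be tested

P = [j for j in range(4) if x[j] > 0]   # P_x = {1, 2}
y = np.linalg.solve(A[:, P].T, c[P])    # the equilibrium equations
print(y)                                # expected (-7/8, -5/8)
print(np.all(A.T @ y <= c))             # True means y in Y_Q, hence x in S_P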
Example 6.6
Consider the standard optimization problem P in Example 1.13 and its dual Q
described in Example 6.3. Take the vector (3, 3, 0, 0) ∈ XP (see Example 4.6).
Then, P(3,3,0,0) = {1, 2} and the Equilibrium Condition is as follows:
[ 3 −1 ; −1 3 ] y = (−2, −1)T .
Theorem 6.5
Let P be a non-degenerate standard optimization problem, Q the dual of P ,
s ∈ SP and r ∈ RQ . Assume that:
• s is basic;
• rT a•j < cj for each j ∈ (N \ Ps ).
Then, SP = {s} and RQ = {r}.
Proof:
(1) |SP | = 1. Assume, by contradiction, that there is s′ ∈ SP such that
s′ ≠ s. Take v = s′ − s. Then,
Av = A(s′ − s) = As′ − As = b − b = 0.
Hence,
Σ_{j∈Ps} (rT a•j )vj + Σ_{j∈(N\Ps)} (rT a•j )vj = Σ_{j∈N} (rT a•j )vj = rT (Av) = 0.
rT a•j = cj
So,
cv > 0,
contradicting s, s′ ∈ SP .
(*) We now show that there exists j ∈ (N \ Ps ) such that vj > 0. Assume, by
contradiction, that
(†) vj = 0
for every j ∈ (N \ Ps ). Since the problem is non-degenerate and s is basic, by
Proposition 4.9,
|Ps | = m.
Therefore, by Proposition 4.6,
APs vPs = 0.
(APs )T r = cTPs .
Since the matrix APs is nonsingular, (APs )T is also nonsingular. Hence, the
Equilibrium Condition
(APs )T y = cTPs
Example 6.7
Consider the standard optimization problem P in Example 1.13 and its dual Q
described in Example 6.3. We know, from Example 6.4, that (3, 3, 0, 0) ∈ SP
and (−7/8, −5/8) ∈ RQ . We now show that (3, 3, 0, 0) is the unique optimizer of P .
Since (3, 3, 0, 0) is basic (see Example 4.6), it remains to check, by Theorem 6.5,
that the system of inequalities
(−7/8, −5/8) [ 1 0 ; 0 1 ] < (0, 0)
is satisfied. Indeed, this is the case and so, by Theorem 6.5, we conclude that
(3, 3, 0, 0) is the unique optimizer of P .
Proposition 6.9
Let P be the optimization problem introduced in Exercise 6.6 and Q the dual
of P . Then, XP ≠ ∅ if and only if bT w ≥ 0 for every w ∈ (R+0 )m such that wT A = 0T .
Proof:
(→) Assume that XP ≠ ∅. Let w ∈ (R+0 )m be such that wT A = 0T . Observe
that w ∈ YQ . Then, by Exercise 6.6,
bT w ≥ 0.
Definition 6.4
An m × n-formula is a system of inequalities of the form Ax ≤ b, where b ∈ Rm
and A is an m × n-matrix.
Definition 6.5
Let A0 x ≤ b0 be an m0 × n-formula and Ax ≤ b be an m × n-formula. We say
that A0 x ≤ b0 is derived from Ax ≤ b, written
Ax ≤ b ` A0 x ≤ b0
Example 6.8
We show that from the formula
( 4  3 )        ( 10 )
( −3 1 ) x  ≤   (  6 )
( 0 −1 )        ( −4 )
Definition 6.6
An m × n-formula Ax ≤ b is inconsistent when
Ax ≤ b ⊢ 0T x ≤ −1.
Otherwise, it is consistent.
Exercise 6.9
Show that the following system is inconsistent:
2x1 − 5x2 ≥ 0
x1 ≤ −1
x2 ≥ 3.
We now present the semantics of the logic. For that, we define satisfaction
of a formula in Rn .
Definition 6.7
An m × n-formula Ax ≤ b is satisfied by u ∈ Rn , written
u ⊨ Ax ≤ b,
when Au ≤ b. In this case, u is said to be a model of Ax ≤ b.
Definition 6.8
The m × n-formula Ax ≤ b is satisfiable if it has a model.
Theorem 6.6
Let A′x ≤ b′ be an m′ × n-formula, Ax ≤ b an m × n-formula and u ∈ Rn .
Assume that u ⊨ Ax ≤ b and Ax ≤ b ⊢ A′x ≤ b′ . Then, u ⊨ A′x ≤ b′ .
Proof:
The hypothesis Ax ≤ b ⊢ A′x ≤ b′ yields the existence of an m′ × m-matrix
D with non-negative components such that A′ = DA and b′ = Db. We show
that u ⊨ A′x ≤ b′ . This amounts to proving that
(DAu)k ≤ (Db)k
for k = 1, . . . , m′ . Observe that
• (DAu)k = Σ_{i=1}^{m} dki (Au)i ;
• (Db)k = Σ_{i=1}^{m} dki bi .
Since (Au)i ≤ bi and dki ≥ 0 for i = 1, . . . , m, it follows that
(DAu)k ≤ (Db)k
for k = 1, . . . , m′ . QED
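Theorem 6.6 can be illustrated numerically: choose any matrix D with non-negative components, form the derived formula (DA)x ≤ Db, and check that a model of the original formula satisfies it. The sketch below (illustrative names, assuming NumPy) does this for the formula of Example 6.8 with an arbitrarily chosen D and model u.

# Illustration of Theorem 6.6: if u is a model of Ax <= b and D >= 0, then u is
# also a model of the derived formula (DA)x <= Db.
import numpy as np

A = np.array([[4.0, 3.0], [-3.0, 1.0], [0.0, -1.0]])
b = np.array([10.0, 6.0, -4.0])
D = np.array([[1.0, 0.0, 3.0], [0.5, 2.0, 0.0]])    # non-negative components

u = np.array([-0.5, 4.0])                            # a model of Ax <= b
assert np.all(A @ u <= b)
assert np.all((D @ A) @ u <= D @ b)                  # as guaranteed by Theorem 6.6
print("derived formula satisfied")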
The following result is called the Logical Variant of the Farkas’ Lemma.
Proposition 6.10
Let Ax ≤ b be an m × n-formula. Then, Ax ≤ b is satisfiable if and only if
Ax ≤ b is consistent.
Proof:
(→) Let u ∈ Rn be such that u ⊨ Ax ≤ b. Assume, by contradiction, that
Ax ≤ b is not consistent; that is,
1
wT .
|wT b|
6.5 Solved Problems and Exercises
Problem 6.1
Consider the canonical optimization problem
max_x (1/2)x1 + x2
  x1 + 2x2 ≤ 12
  x1 ≤ 4
  x ≥ 0.
Solution:
Observe that
A = [ 1 2 ; 1 0 ],  b = (12, 4)T  and  c = (1/2, 1).
(1) The dual Q of P is:
min_y bT y
  AT y ≥ cT
  y ≥ 0;
that is,
min_y 12y1 + 4y2
  y1 + y2 ≥ 1/2
  2y1 ≥ 1
  y ≥ 0.
does not have a solution. Taking into account (2), let y ∈ RQ . Therefore, it is
not the case that
d(1, 1)T y = 0 and e(y)T (1, 1)T = 0.
Take y = (1/2, 0) ∈ YQ . Then,
bT y = 6 = cx.
Problem 6.2
Let A be an m × n-matrix and b ∈ Rm . Show that
{x ∈ Rn : Ax ≤ b} = ∅ iff {y ∈ Rm : y T A = 0, y T b < 0, y ≥ 0} 6= ∅.
Solution:
(→) The result is shown by contraposition. Assume that
(†) {y ∈ Rm : y T A = 0, y T b < 0, y ≥ 0} = ∅.
yT A = 0
y ≥ 0
Problem 6.3
Let P be the following canonical optimization problem
max_x −cx
  −Ax ≤ −b
  x ≥ 0,
Solution:
Note that Q is the problem:
min_y −bT y
  −AT y ≥ −cT
  y ≥ 0
and
(‡) 0 = e(r)T s = (−AT r + cT )T s.
From (‡), we have
cs − rT As = (c − rT A)s = (−AT r + cT )T s = 0.
Exercise 6.10
Consider the optimization problem
P = max_x 2x1 + x2
    −x1 + x2 ≤ 1
    x1 + x2 ≤ 3
    x1 − x2 ≤ 1
    x ≥ 0.
Exercise 6.11
Consider the optimization problem
P = max_x x1 + 2x2
    x1 + x2 ≤ 1
    2x1 + x2 ≤ 3/2
    x ≥ 0.
Find the dual of P and show directly that both have the same optimizer.
Exercise 6.12
Prove the Canonical Variant of Farkas’ Lemma in Proposition 3.11, from the
Pure Variant of Farkas’ Lemma in Proposition 6.9.
Exercise 6.13
Consider the standard optimization problem
P = min_x 2x1 + x2
    −x1 − x2 + x3 = −1
    x1 + x2 + x4 = 3
    x1 − x2 + x5 = 1
    −x1 + x2 + x6 = 1
    x ≥ 0.
Let Q be the dual of P . Check whether or not SP and RQ are singleton sets.
Exercise 6.14
Let P be a canonical optimization problem. Relate the duals of P and CS(P ).
Exercise 6.15
Find the dual of the problem introduced in Example 1.1 and interpret it.
Exercise 6.16
Show that the dual of the dual problem is equivalent to the original problem.
Chapter 7
Complexity
Notation 7.1
For each m, n ∈ N+ with m < n, we denote by
DSFmn ,
Notation 7.2
Furthermore, let
DSF = ∪_{m,n∈N+ , m<n} DSFmn
and
Q∪ = ∪_{n∈N+} Qn .
The following result tells us that there exists an optimizer with rational
components for a standard optimization problem provided that the problem
has rational numbers as parameters and a non-empty set of optimizers.
Proposition 7.1
Let P ∈ DSFmn be a standard optimization problem such that SP ≠ ∅. Then,
SP ∩ Qn 6= ∅.
Proof:
Since SP 6= ∅, then x 7→ cx is bounded from below in XP and XP 6= ∅. Hence,
by Theorem 4.1, there is a basic admissible vector s ∈ SP . Then, there is
B ⊂ N such that AB is nonsingular, sN \B = 0, and sB is the unique solution
of
AB z = b.
Thus, by Cramer's Rule (see Proposition 4.23), each component of s is given by
sj = det (AB )jb / det AB .
Clearly, each sj is in Q since det (AB )jb and det AB are rational numbers, as
can be seen by Leibniz's Formula (see Proposition 4.19). QED
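The construction in the proof can be reproduced with exact rational arithmetic. The sketch below (illustrative names) applies Cramer's Rule with Python fractions to the standard problem of Example 1.13 and the basis B = {1, 2}, recovering the basic admissible vector (3, 3, 0, 0).

# Computing a basic admissible vector exactly, following the proof of
# Proposition 7.1: s_j = det((A_B)^j_b) / det(A_B) for j in the basis B.
from fractions import Fraction

def det(M):
    # Laplace expansion along the first row; fine for the small matrices used here
    if len(M) == 1:
        return M[0][0]
    total = Fraction(0)
    for j in range(len(M)):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

A = [[Fraction(3), Fraction(-1), Fraction(1), Fraction(0)],
     [Fraction(-1), Fraction(3), Fraction(0), Fraction(1)]]
b = [Fraction(6), Fraction(6)]
B = [0, 1]                                   # the basis {1, 2} (0-indexed)

AB = [[row[j] for j in B] for row in A]
dB = det(AB)
s = [Fraction(0)] * len(A[0])
for pos, j in enumerate(B):
    # replace column pos of A_B by b and apply Cramer's Rule
    ABj = [[b[i] if k == pos else AB[i][k] for k in range(len(B))]
           for i in range(len(B))]
    s[j] = det(ABj) / dB

print(s)                                     # expected [3, 3, 0, 0]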
Definition 7.1
The Standard Optimization Decision Problem is the map
SDP : DSF → {0, 1}
such that
SDP(P ) = 1 if and only if SP ≠ ∅
for every standard optimization problem P . Similarly for the Canonical Optimization
Decision Problem CDP.
7.2 Representation
Before analyzing the complexity of the standard optimization decision prob-
lem, we need to discuss the representation of vectors and matrices. Recall the
representation of natural and rational numbers in Section 7.6.
Notation 7.3
An n-dimensional vector w of rational numbers is represented by the following
string of bits
ŵ = ŵ1 010 · · · 010 ŵn ,
where ŵj is the encoding of the rational number wj , as explained in Remark 7.3,
for j = 1, . . . , n. Moreover, we choose 010 as the separator of the encodings of
the rational numbers. An m × n-matrix A of rational numbers is represented
by
 = â1• 011 · · · 011 âm• ,
where âi• is the representation of the i-th row of the matrix. Moreover,
(A, b, c) ∈ DSF is represented by
Â 100 b̂ 100 ĉ.
The reader may think of alternative ways of representing vectors and ma-
trices of arbitrary dimensions and ponder on the fact that any reasonably eco-
nomic representation would serve the purpose at hand.1
The representation adopted here is not minimal since it contains some ac-
ceptable redundancy that simplifies the computation of the dimension.
1 Redundancy is allowed as long as it does not grow faster than polynomially on the sum of
input : (k, u)
4. return 1.
Proposition 7.2
There is an algorithm for the decision problem
such that sizeGE(k, u) = 1 if and only if the size of k is greater than or equal
to the size in bits of u, which is polynomial on the first argument.
Proof:
It is immediate that the algorithm in Figure 7.1 computes sizeGE and is poly-
nomial on the first argument. QED
Notation 7.4
In the sequel, we denote the algorithm presented in Figure 7.1 by asizeGE .
Proposition 7.3
There exists a polynomial algorithm for dim : Q∪ → N+ .
Observe that, for the chosen representation of vectors and matrices, for any
and
pAp = (Σ_{i=1}^{m} (3 + pai• p)) − 3,
and
p(A, c)p = pAp + pcp + 3.
The following result establishes a bound on the size of the determinant of
a square matrix with rational entries.
Proposition 7.4
For every non-empty square matrix A with rational entries,
p det Ap ≤ 3 pAp⁴ .
Proof:
First, we show that, for every square matrix Z with integer entries represented
as rational numbers with unit denominator,
p det Zp ≤ 2 pZp² .
p det Zp ≤ 2m pZp.
(Basis) In this case, det Z = z11 . Thus, p det Zp = pZp < 2 pZp.
(Step) Recall Laplace's Expansion Formula for j = 1 (see Definition 4.8),
det Z = Σ_{i=1}^{m} (−1)^(i+1) zi1 det Zi,1 .
Proposition 7.5
For every P = (A, b, c) ∈ DSF such that SP ≠ ∅, there is s ∈ SP with rational
components such that
psp ≤ 9 p(A, b)p⁵ .
Proof:
Recall from the proof of Proposition 7.1 that among the rational optimizers of
P there is a basic vector s with each component given by
sj = det (AB )jb / det AB .
Proposition 7.6
Let P = (A′, A″, A‴, b′, b″, b‴, c) be a linear optimization problem with rational
coefficients in all components such that SP ≠ ∅. Then, there is s ∈ SP with
rational components such that
psp ≤ d(pA′p + pA″p + pA‴p + pb′p + pb″p + pb‴p)⁵
for some d ∈ N+ .
Proof:
Assume that A′ is an m′ × n-matrix, A″ is an m″ × n-matrix and A‴ is an
m‴ × n-matrix. Observe that CS(LC(P )) is the standard optimization problem
min_y [−c c 0] y
  Ay = b
  y ≥ 0,
−b‴
Therefore,
SCS(LC(P )) ≠ ∅
by Proposition 1.1 and Proposition 1.3. Hence, by Proposition 7.1, there is
s ∈ SCS(LC(P )) with rational components such that
psp ≤ 9 p(A, b)p⁵ .
s|{1,...,n} − s|{n+1,...,2n} ∈ SP
p(A, b)p ≤ d′pA′p + d″pA″p + d‴pA‴p + e′pb′p + e″pb″p + e‴pb‴p + e
Proposition 7.7
The standard optimization decision problem SDP is in NP.
Proof:
Take W to be Q∪ × Q∪ × N+ and consider the algorithm d in Figure 7.2 with
Dd = DSF × (Q∪ × Q∪ × N+ ). Let P = (A, b, c) ∈ DSF and Q be the dual of
P . Then,
(1) We show that if SDP(P ) = 1 then the execution of d on (P, x, y, d) returns
RQ ≠ ∅
and
pyp ≤ d(pAT p + pcT p)⁵ ≤ d p(A, c)p⁵ .
Then, the execution of algorithm d on (P, x, y, d) returns 1 since the conditions
of the if in step 1 are true.
(2) We show that if SDP(P ) = 0 then for every (x, y, d) ∈ Q∪ × Q∪ × N+ the
execution of d on (P, x, y, d) returns 0.
Assume that SDP(P ) = 0. Thus, SP = ∅. Assume, by contradiction, that there
is (x, y, d) ∈ Q∪ × Q∪ × N+ such that the execution of d on (P, x, y, d) returns
1. Then, all the conditions of the if in step 1 are true. Therefore, x ∈ XP and
y ∈ YQ . Moreover, by Exercise 6.4, x ∈ SP and y ∈ RQ , contradicting SP = ∅.
(3) Observe that the evaluation of each of the conditions of the if is performed
in polynomial time on the size of (A, b, c):
• asizeGE (6 p(A, b)p⁵ , x) and asizeGE (d p(A, c)p⁵ , y), by Proposition 7.2;
input : (A, b, c)
1. s := 0;
2. o := +∞;
3. for each subset B of {1, . . . , n} with m elements do:
(a) if there is a (unique) solution z of AB z = b and z > 0 then:
• set x such that xj := zj if j ∈ B and xj := 0 otherwise;
• if cx < o then:
o := cx;
s := x;
4. return s.
• the other conditions, by Proposition 7.12, Exercise 7.4 and the first item.
Thus, d is polynomial on the first part of the input; that is, on DSF . QED
Notation 7.5
We say that b in Figure 7.3 is the basic algorithm.
Proposition 7.8
Let P = (A, b, c) ∈ DSF be a standard optimization problem such that SP ≠ ∅.
Then, the basic algorithm when executed on P returns a vector in SP .
Proof:
The algorithm is correct since it finds a basic admissible vector which is an
Definition 7.2
Let µ : N → N be the map defined as follows:
µ(ν) = p(A, b, c)p,
where
• (A, b, c) ∈ DSF ;
• A is an n/2 × n-matrix and n is the (ν + 1)-th positive even number;
• each rational entry of A, b and c has 13 bits.
Note that 13 bits is the minimum size of a rational number (see Remark 7.3).
Notation 7.6
We denote by
Pν
the optimization problem corresponding to µ(ν) for every ν ∈ N. Moreover, we
denote by
nν
the number of columns of the matrix in Pν .
Proposition 7.9
For each ν ∈ N,
nν = √(2(µ(ν) + 21))/4 − 3/2 .
Proof:
Observe that
• pcp = 16nν − 3;
• pbp = 8nν − 3;
• pAp = 8nν² − 3.
Hence, µ(ν) = pAp + pbp + pcp + 6 = 8nν² + 24nν − 3. For each ν ∈ N, the unique
non-negative solution of this equation in nν is
√(2(µ(ν) + 21))/4 − 3/2. QED
Proposition 7.10
The basic algorithm is not efficient.
Proof:
Observe that (see [19])
(nν choose nν /2) ≥ 2^(nν /2)
since nν is a positive even number. Hence, by Proposition 7.9, we obtain:
(nν choose nν /2) ≥ 2^(√(2(µ(ν)+21))/8 − 3/4) .
and, so,
ωb (µ(ν)) ≥ τb (Pν ) ≥ 2^(√(2(µ(ν)+21))/8 − 3/4) .
Hence, limν→∞ ωb (µ(ν))/f (µ(ν)) = +∞. QED
input : (m, n)
1. while scanning m̂ and n̂ simultaneously until both end:
• if the scan of m̂ ends before that of n̂ then return 1;
• if the scan of n̂ ends before that of m̂ then return 0;
2. o := 1;
3. while scanning m̂ and n̂ simultaneously:
4. return o.
Problem 7.1
Show that ≤N ∈ P.
Solution:
Given m, n ∈ N. Consider the algorithm in Figure 7.4 that returns the result
in o. The first loop performs at most
2 min{pmp, pnp} + 3
6 min{pmp, pnp}
input : (m, n)
1. b := 0;
2. o := 0;
3. while scanning m̂ and n̂ simultaneously until both end:
• if the scan of m̂ has ended then
x := 0;
else:
set x to the current bit of m̂;
• if the scan of n̂ has ended then:
y := 0;
else:
set y to the current bit of n̂;
• set the current bit of ô to the xor of b, x and y;
• if there are at least two 1’s among b, x, y then:
b := 1;
else:
b := 0;
4. if b 6= 0 then ô := 1ô;
5. return o;
Problem 7.2
Show that +N ∈ P.
Solution:
Given m, n ∈ N, consider the algorithm in Figure 7.5 that returns the result in
o and uses b for the carry-over bit. The algorithm computes m +N n in at most
12 max{pmp, pnp} + 5
bit operations. Therefore, the algorithm is linear and so +N ∈ P. /
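The algorithm of Figure 7.5 can be transcribed almost literally. The sketch below (illustrative names) works on binary strings written with the most significant bit first and scans them from the least significant bit, exactly as described above.

# A sketch of the addition algorithm of Figure 7.5 on binary strings: output the
# xor of the carry and the two current bits, and carry when at least two are 1.
def add_binary(m_hat: str, n_hat: str) -> str:
    out = []
    carry = 0
    i, j = len(m_hat) - 1, len(n_hat) - 1
    while i >= 0 or j >= 0:
        x = int(m_hat[i]) if i >= 0 else 0        # current bit of m (0 if exhausted)
        y = int(n_hat[j]) if j >= 0 else 0        # current bit of n
        out.append(str(carry ^ x ^ y))            # xor of b, x and y
        carry = 1 if (carry + x + y) >= 2 else 0  # at least two 1's among b, x, y
        i, j = i - 1, j - 1
    if carry:
        out.append("1")
    return "".join(reversed(out))

print(add_binary("1011", "110"))   # 11 + 6 = 17, i.e. "10001"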
Problem 7.3
Show that ×N ∈ P.
input : (m, n)
1. if m = 0 ∨ n = 0 then return 0;
2. o := 0;
3. while scanning n̂:
• if the current bit of n̂ is 1 then o := o +N m;
• m̂ := m̂0;
4. return o.
Solution:
Given m, n ∈ N, consider the algorithm in Figure 7.6. In short, the algorithm
notices that
m × n = m × (Σ_{i∈I} 2^i) = Σ_{i∈I} m × 2^i ,
where
I = {i : 0 ≤ i < pnp and n̂pnp−i = 1}.
Moreover, recall that multiplying m by 2i in binary is equivalent to appending
to m̂ a sequence of size i composed only by 0’s. The multiplication algorithm
calls the algorithm for +N , and therefore, it is relevant to give a description on
how this algorithm is called, to understand its complexity. The main idea is to
copy the inputs of the algorithm to some free area of the computer memory,
and there, perform the +N algorithm. Finally, we copy the output back to the
program being run.2 It is easy to see that the loop in step 3 runs for pnp times.
Moreover, the summation in the loop adds numbers with at most pnp + pmp bits.
Therefore, the number of bit operations in the execution of the algorithm is
at most pnp(pnp + pmp) modulo a multiplicative factor. Hence, the worst-case
execution time of the algorithm is in O(ν ↦ ν²). Thus, it is quadratic and so
×N ∈ P. /
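The shift-and-add scheme of Figure 7.6 admits an equally short transcription; in the sketch below (illustrative names) the auxiliary add_n merely stands in for the +N algorithm.

# A sketch of shift-and-add multiplication: for each 1-bit of n, add a copy of m
# shifted by the corresponding number of positions (appending 0's).
def mul_binary(m_hat: str, n_hat: str) -> str:
    add_n = lambda a, b: bin(int(a, 2) + int(b, 2))[2:]   # stand-in for +N
    if m_hat == "0" or n_hat == "0":
        return "0"
    o = "0"
    shifted = m_hat
    for bit in reversed(n_hat):          # scan n from the least significant bit
        if bit == "1":
            o = add_n(o, shifted)
        shifted = shifted + "0"          # appending a 0 multiplies the term by 2
    return o

print(mul_binary("101", "11"))           # 5 * 3 = 15, i.e. "1111"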
Problem 7.4
Show that ≤ ∈ P.
Solution:
Given q, r ∈ Q, consider the algorithm in Figure 7.7 that returns the required
2 We stress that there are more efficient multiplication algorithms than the one presented
here, namely those based on the Fast Fourier Transform (see for instance [19]).
input : (q, r)
1. if σq = − ∧ σr = + then return 1;
2. if σq = + ∧ σr = − then return (0 ≤N q) ∧ (q ≤N r) ∧ (r ≤N 0);
3. if σq = σr = + then return q ×N r ≤N q ×N r;
4. if σq = σr = − then return q ×N r ≤N q ×N r.
result using the algorithms for ≤N and ×N (see Notation 7.10). The execution
times of the algorithms implementing the operations are polynomial on the size
of the input. Therefore, the algorithm is efficient. So, ≤ ∈ P. /
input : (q, r)
1. if σq = σr then:
σo := +;
else:
σo := −;
2. (o, o) := red(q ×N r, q ×N r);
3. return (σo , o, o).
Problem 7.5
Show that × ∈ P.
Solution:
Consider the algorithm in Figure 7.8 using the algorithms for ×N and red (see
Notation 7.10). Clearly, the algorithm is polynomial. So, × ∈ P. /
Problem 7.6
Show that + ∈ P.
Solution:
Given q, r ∈ Q, consider the algorithm in Figure 7.9 that returns the required
input : (q, r)
1. if σq = σr then:
• σo := σq ;
• (o, o) := red(q ×N r +N q ×N r, q ×N r);
2. if σq = + ∧ σr = − then:
• if q ×N r ≤N q ×N r then:
σo := +;
else:
σo := −;
• if σo = + then:
N
(o, o) := red(q ×N r−̇ q ×N r, q ×N r);
else:
N
(o, o) := red(q ×N r−̇ q ×N r, q ×N r);
3. if σq = − ∧ σr = + then:
• if q ×N r ≤N q ×N r then:
σo := +;
else:
σo := −;
• if σo = − then:
N
(o, o) := red(q ×N r−̇ q ×N r, q ×N r);
else:
N
(o, o) := red(q ×N r−̇ q ×N r, q ×N r);
4. return (σo , o, o).
N
result (σo , o, o) using the algorithms for ≤N , +N , −̇ , ×N and red (see Nota-
tion 7.10). It is immediate that the algorithm is polynomial on the size of the
input. So, + ∈ P. /
Problem 7.7
Consider the decision problem
Prime : N → {0, 1}
such that Prime(x) = 1 if and only if x is a prime number. Show that Prime
is in P (see Definition 7.9).
Solution:
There are several algorithms, usually called primality tests for this problem
based on different characterization of a prime number. The Wilson Theorem
Problem 7.8
Consider the decision problem
Factor : N3 → {0, 1}
such that Factor(d, k1 , k2 ) = 1 if and only if d has a prime factor p with
k1 ≤ p ≤ k2 . Show that
Factor ∈ NP.
Let
W=N
and
aFact : N3 × W → {0, 1}
be the algorithm in Figure 7.10, using the efficient algorithm aPrime mentioned
in Problem 7.7. Assume that
Factor(d, k1 , k2 ) = 1.
input : ((d, k1 , k2 ), p)
1. if p > k2 ∨ p < k1 ∨ aPrime (p) = 0 then:
return 0;
else:
(a) if mod(d, p) = 0 then:
return 1;
else:
return 0.
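A direct transcription of the verifier of Figure 7.10 might look as follows (illustrative names); the trial-division test a_prime is only a stand-in for the efficient algorithm aPrime mentioned in Problem 7.7.

# A sketch of the verifier of Figure 7.10: given (d, k1, k2) and a candidate
# certificate p, accept exactly when p is a prime in [k1, k2] dividing d.
def a_prime(p: int) -> bool:
    # simple trial division; only a stand-in for an efficient primality test
    if p < 2:
        return False
    i = 2
    while i * i <= p:
        if p % i == 0:
            return False
        i += 1
    return True

def a_fact(d: int, k1: int, k2: int, p: int) -> int:
    if p > k2 or p < k1 or not a_prime(p):
        return 0
    return 1 if d % p == 0 else 0

print(a_fact(91, 5, 10, 7))   # 91 = 7 * 13 and 5 <= 7 <= 10, so the certificate works
print(a_fact(91, 5, 10, 13))  # 13 is prime and divides 91 but lies outside [5, 10]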
Problem 7.9
Consider the maps f = ν ↦ an ν^n + · · · + a0 and g = ν ↦ ν^n . Show that
f ∈ O(g).
Solution:
For every ν ≥ 1, we have
an ν^n + · · · + a0 ≤ ν^n (|an | + · · · + |a0 |).
The thesis follows taking α = |an | + · · · + |a0 | and ν0 = 1. /
Exercise 7.1
Show that =N , ÷N , rem, gcd, red ∈ P (see Notation 7.10).
Exercise 7.2
Show that ·, ·, σ· , p · p, =, −, ·−1 ∈ P (see Notation 7.12).
Exercise 7.3
Show that there exists a polynomial algorithm for each of the following opera-
tions:
• w 7→ pwp : Q∪ → N+ ;
• A 7→ pAp : Q∪ × Q∪ → N+ ;
Exercise 7.4
Show that there is a polynomial algorithm for computing each of the following
linear algebra operations over the field Q:
• transpose of a matrix;
Exercise 7.5
Let CVP : DSF × Q∪ → {0, 1} be a map such that
for every pure canonical optimization problem P . Use the active lines technique
(see Section 3.2) to conclude that CVP ∈ P.
Exercise 7.6
Let g : N → R+0 . Show that
O(g) = { f : N → R+0 : lim sup_{ν→∞} f (ν)/g(ν) < ∞ }.
Exercise 7.7
Let f, g, h : N → R+
0 be maps. Show that
Notation 7.7
Given an algorithm a, we denote by
Da
the input data space of a; that is, the set of possible input data of a.
Remark 7.1
In the sequel, we assume that Da is a denumerable set.
Notation 7.8
For each d ∈ Da , we denote by
pdp
the size of d; that is, the number of bits used to represent d.
Notation 7.9
Given an algorithm a, we denote by
τa : Da → R+
0
the map assigning to each d the execution time of a on the input d; that is, the
number of bit operations that a executes on d.
Definition 7.3
The worst-case execution time of an algorithm a is the map
ωa = ν ↦ sup{τa (d) : d ∈ Da , pdp ≤ ν} : N → R+0 .
In general, the idea is not to obtain the map ωa explicitly but rather to
identify a class of maps where this map belongs. In particular, from an efficiency
point of view, it is desirable that ωa belongs to a polynomial class.
For this purpose, we need to define the big O notation (also called the
Landau or the Bachman-Landau notation and in some cases also called the
asymptotic notation, see [19]).
Definition 7.4
The class of maps with asymptotic upper bound g : N → R+0 up to a multiplicative
constant, denoted by
O(g),
is the set of maps f : N → R+0 for which there are α ∈ R+ and ν0 ∈ N such that
f (ν) ≤ α g(ν) for every ν ≥ ν0 .
Definition 7.5
An algorithm a is polynomial or algebraic or efficient (in time), written
ωa ∈ P or even a ∈ P,
if
∃k ∈ N  ωa ∈ O(ν ↦ ν^k ).
In this case, we denote by
ka
the least such k.
Definition 7.6
We say that a polynomial algorithm a is linear when ka = 1 and say that it is
quadratic when ka = 2.
Definition 7.7
An algorithm a is exponential if
∃k ∈ N  ωa ∈ O(ν ↦ 2^(ν^k) ).
Definition 7.8
A computation problem is a map h : D → E where D is a denumerable set
and E is a countable set. A decision problem is a computation problem where
E = {0, 1}.3
Definition 7.9
A computation problem h : D → E is polynomial, written
h ∈ P,
if there is a polynomial algorithm a such that:
• Da = D;
• the execution of a on d returns h(d), for every d ∈ D.
Hilbert in 1928 (see [37]) when asking whether or not Mathematics is decidable.
Definition 7.10
Let a be an algorithm with input data space factorized as follows:
Da = D′a × D″a .
The worst-case execution time of a on the first part of the input, denoted by
ωa1 ,
is the map
ν ↦ sup{τa (d′, d″) : (d′, d″) ∈ Da , pd′p ≤ ν} : N → R+0 .
Definition 7.11
An algorithm a with Da = D′a × D″a is polynomial on D′a if
∃k ∈ N  ωa1 ∈ O(ν ↦ ν^k ).
Definition 7.12
A decision problem h : D → {0, 1} is non-deterministically polynomial, written
h ∈ NP,
where each n̂i ∈ {0, 1}, (n) is the length of the binary representation of n, and
n̂1 is its most significant bit. Recall that either n̂1 = 1 or n = 0. In the latter
case (n) = 1. From this point on, we assume that the representation of n is n̂
and, so, that pnp = (n).
We assume also that there is independent and direct access to the least
significant bit of each of the inputs of an operation and that afterwards each
input can be sequentially scanned from bit to bit until the most significant bit
is reached.
We start by analyzing the time complexity of the following relevant basic
operations on natural numbers.
Notation 7.10
The following operations on natural numbers are relevant in the chapter:
The reader will expect that adding two natural numbers with a few bits will
take less time than making the same operation on two natural numbers with
many bits. The key assumption here is that each single-bit operation, including
scanning a bit, comparing bits, summing two bits, assigning a bit to a variable,
among others, takes a unit of time.
Proposition 7.11
There exists a polynomial algorithm for each operation in Notation 7.10.
The proof of the previous proposition is spread across Problem 7.1, Prob-
lem 7.2, Problem 7.3 and Exercise 7.1.
Actually, for showing that all these operations can be computed in polyno-
mial time, we need only sufficiently good upper bounds on the time complexity
of these operations (see Problem 7.1, Problem 7.2 and Problem 7.3). Never-
theless, when available, we mention en passant better upper bounds that one
can easily find in the literature but that are more difficult to establish.
Capitalizing on the efficient algorithms presented in the proof of Propo-
sition 7.11, it is straightforward to provide efficient algorithms for the basic
operations on rational numbers. To this end, we start with the concept of
coprime natural numbers.
Definition 7.13
Two natural numbers are coprime if their greatest common divisor is 1.
Remark 7.2
Observe that the map
(m, n) ↦ m/n : {(m, n) ∈ N × N+ : m and n are coprime} → Q+0
is a bijection.
Notation 7.11
Given a non-negative rational number q, we refer to its unique coprimal repre-
sentation by
(q, q).
Moreover, each q ∈ Q is identified with the triple
(σq , q, q).
Remark 7.3
The encoding
σ̂q
of the sign is 1 if σq is + and is 0 otherwise. To encode the comma “,”, we use
three bits, namely 001. Finally, for the representation of natural numbers, we
need to distinguish them from the separator. Hence, we triplicate each bit in
its binary representation. If
q = Σ_{i=1}^{pqp} 2^(i−1) bi
Notation 7.12
The following operations on rational numbers are relevant in the chapter:
· : q 7→ q : Q → N;
· : q 7→ q : Q → N+ ;
σ· : q 7→ σq : Q → {+, −};
p · p :7→ pqp : Q → N+ ;
≤ : q, r 7→ q less than or equal to r : Q2 → {0, 1};
= : q, r 7→ q equal to r : Q2 → {0, 1};
+ : q, r 7→ sum of q and r : Q2 → Q;
× : q, r 7→ product of q and r : Q2 → Q;
− : q 7→ symmetric of q : Q → Q;
·−1 : q 7→ 1/q : Q+ ∪ Q− → Q.
Proposition 7.12
There exists a polynomial algorithm for each of the operations in the rational
numbers.
The proof of the previous proposition is spread across Problem 7.4, Prob-
lem 7.5, Problem 7.6 and Exercise 7.2.
Chapter 8
The Simplex Algorithm
8.1 Algorithm
In this section we discuss the Simplex Algorithm. We assume given a standard
optimization problem resulting from a canonical one. This is not a limitation
since we can always get a canonical optimization problem from a general linear
optimization problem (see Proposition 1.1). When an optimization problem is
such that b ≯ 0, we can use either Exercise 1.4 or Exercise 1.5 to obtain an
equivalent optimization problem with a positive restriction vector.
Definition 8.1
Let P ∈ CS(C) be a non-degenerate problem with b > 0. The Simplex Algo-
rithm for P is the algorithm in Figure 8.1.
The variables b and s contain the output results. The variable b is equal
to 0 when the problem has no optimizers and is 1 otherwise. The variable
s returns an optimizer whenever there is one. The variables x, P, y, k, t, o
and j are auxiliary. The auxiliary variable x contains at each point the basic
admissible vector under analysis. The variable P is the set Px . The variable y is
a potential admissible vector of the dual problem. When y is admissible, then
1. x := (0, . . . , 0, b1 , . . . , bm );
2. P := {j ∈ {1, . . . , n + m} : xj > 0};
3. y := solution of the system (AP )T y = (cP )T ;
4. if AT y ≤ cT then:
(a) (b, s) := (1, x);
(b) return (b, s);
else:
(a) k := some k such that (AT y)k > (cT )k ;
(b) t := t with tP = (AP )−1 a•k , tk = −1, t{1,...,n+m}\(P∪{k}) = 0;
(c) if t ≤ 0, then:
i. b := 0;
ii. return (b, x);
else:
i. o := min { xj /tj : tj > 0 };
ii. x := x − ot;
iii. go to 2.
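For experimentation, the steps of Figure 8.1 translate directly into a few lines of linear algebra. The sketch below is only an illustration under the assumptions of the figure (P ∈ CS(C) non-degenerate, b > 0); it assumes NumPy, chooses as k the most violated index (one admissible choice; the gradient choice discussed after Example 8.1 is another), and the names are ours. Run on the data of Example 8.1 it returns (1, (3, 3, 0, 0)).

# A sketch of the Simplex Algorithm of Figure 8.1 for a standard problem
# min cx s.t. Ax = b, x >= 0 whose last m columns are slack columns and b > 0.
import numpy as np

def simplex(A, b, c):
    m, nm = A.shape                          # nm = n + m columns
    x = np.zeros(nm)
    x[nm - m:] = b                           # Step 1: x := (0, ..., 0, b1, ..., bm)
    while True:
        P = [j for j in range(nm) if x[j] > 0]           # Step 2: P := P_x
        y = np.linalg.solve(A[:, P].T, c[P])             # Step 3: (A_P)^T y = (c_P)^T
        if np.all(A.T @ y <= c + 1e-9):                  # Step 4: dual admissibility
            return 1, x                                  # x is an optimizer
        k = int(np.argmax(A.T @ y - c))                  # some k with (A^T y)_k > c_k
        t = np.zeros(nm)
        t[P] = np.linalg.solve(A[:, P], A[:, k])         # t_P := (A_P)^{-1} a_{.k}
        t[k] = -1.0
        if np.all(t <= 1e-9):
            return 0, x                                  # objective unbounded below
        o = min(x[j] / t[j] for j in range(nm) if t[j] > 1e-9)
        x = x - o * t                                    # move to the next basic vector

# Data of Example 8.1:
A = np.array([[3.0, -1.0, 1.0, 0.0],
              [-1.0, 3.0, 0.0, 1.0]])
b = np.array([6.0, 6.0])
c = np.array([-2.0, -1.0, 0.0, 0.0])
print(simplex(A, b, c))   # expected: (1, [3. 3. 0. 0.])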
Example 8.1
Recall the standard optimization problem P
min_x −2x1 − x2
  3x1 − x2 + x3 = 6
  −x1 + 3x2 + x4 = 6
  x ≥ 0
In this case, P = {1, 4}. By solving the system of equations (AP )T y = (cP )T
we get
y = (−2/3, 0)T .
Vector y satisfies
AT y ≤ cT .
Therefore, s outputs b = 1 and s = (3, 3, 0, 0).
As we have seen in the example above, it may be the case that there are 2 or
more inequalities in AT y ≤ cT that do not hold, for a particular y. Therefore,
there are several possible choices for k. One of the most well known choices
is the gradient choice (which was used by George Dantzig in his seminal pa-
per [22]) consisting in choosing k as the smallest element in
{j ∈ N : cj − (AT y)j < 0}.
The choice of k in Example 8.1 was done according to this criterion.
Proposition 8.1
Let P = (A, b, c) ∈ CS(C) be a non-degenerate problem with b > 0. Then,
(0, . . . , 0, b1 , . . . , bm ), in Step 1 of the Simplex Algorithm in Figure 8.1, is a
basic admissible vector.
Proof:
We start by observing that (0, . . . , 0, b1 , . . . , bm ) ∈ XP , since b > 0 by hypothe-
sis. Moreover, (0, . . . , 0, b1 , . . . , bm ) is a basic vector by Proposition 4.9, because
b > 0. QED
Proposition 8.2
Let P = (A, b, c) ∈ CS(C) be a non-degenerate problem with b > 0. When
x ∈ XP and the guard of Step 4 of the Simplex Algorithm in Figure 8.1 holds,
then x ∈ SP .
Proof:
Let Q be the dual of P (see Exercise 6.2). Since AT y ≤ cT , then y ∈ YQ .
Furthermore, after the assignment at Step 3,
(AP )T y = (cP )T .
Hence,
(xP )T (AP )T y = (xP )T (cP )T .
Thus,
(AP xP )T y = (xP )T (cP )T .
Therefore,
(Ax)T y = xT cT .
So,
bT y = cx,
Proposition 8.3
Let P = (A, b, c) ∈ CS(C) be a non-degenerate problem with b > 0. When
x ∈ XP , the guard of Step 4 of the Simplex Algorithm in Figure 8.1 does not
hold and t ≤ 0 then SP = ∅.
Proof:
Consider the family
{wδ }δ∈R+ ,
0
where
wδ = x − δt.
That is,
(AT y)j = (cT )j
cwδ = ck δ + cP xP − δ cP tP
    = cP xP + δ(ck − cP tP )
    = cP xP + δ(ck − yT AP tP )             (∗)
    = cP xP + δ(ck − yT AP (AP )−1 a•k )    (∗∗)
    = cP xP + δ(ck − yT a•k )               (†)
    < cP xP                                 (∗ ∗ ∗)
    = cx,
Then,
cwδ = cP xP + δ(ck − yT a•k )   by (†)
    < cP xP + ((cz − cP xP )/(ck − yT a•k )) (ck − yT a•k )
    = cz.
(2) cz ≥ cP xP . Then, cwδ < cz since cwδ < cx for every δ ∈ R+ , as seen above.
Therefore, SP = ∅. QED
Proposition 8.4
Let P = (A, b, c) ∈ CS(C) be a non-degenerate problem with b > 0. Assume
that x ∈ XP . Then, in Step (4c)(ii) of the Simplex Algorithm in Figure 8.1,
x − ot ∈ XP and c(x − ot) < cx.
Proof:
Let j be such that
o = xj /tj .
Note that tj > 0. Then,
j ∈ P,
because tk = −1 and t{1,...,n+m}\(P∪{k}) = 0. Thus,
xj > 0.
Thus,
o > 0.
Note that
x − ot is wo ,
where wδ was defined in the proof of Proposition 8.3. Then,
x − ot ∈ XP
and
c(x − ot) < cx,
as already established in the proof of Proposition 8.3. QED
Proposition 8.5
Let P = (A, b, c) ∈ CS(C) be a non-degenerate problem with b > 0. Then,
in Step 1 and in each execution of Step (4c)(ii) of the Simplex Algorithm in
Figure 8.1, x is a basic admissible vector.
Proof:
The proof follows by induction on the number k of assignments on x during the
execution of s.
(Base) k = 1. Then, x is basic by Proposition 8.1.
(Step) Assume by induction hypothesis that x is a basic vector in the (k − 1)-th
execution of Step (4c)(ii) of the Simplex Algorithm in Figure 8.1. We must
show that
x − ot
is a basic vector. Observe that x − ot ∈ XP by Proposition 8.4. By Proposi-
tion 4.9, because P is non-degenerate, it suffices to establish
taking into account the proof of Proposition 8.4. Let j be such that
o = xj /tj .
We now prove that
Px−ot = {k} ∪ (Px \ {j}).
Since
k ∉ Px
(see the proof of Proposition 8.3), then
(x − ot)k = xk − otk = o > 0
and so k ∈ Px−ot .
On the other hand, since j ∈ Px (taking into account that o > 0), then
xj − otj = xj − (xj /tj ) tj = 0.
Thus,
j ∉ Px−ot .
Note that
xℓ − otℓ = 0,
ℓ ∈ ({1, . . . , n + m} \ Px−ot )
|{1, . . . , n + m} \ Px−ot | ≥ n − 1.
j ∉ {1, . . . , n + m} \ (Px ∪ {k})
since j ∈ Px . Thus,
|{1, . . . , n + m} \ Px−ot | ≥ n,
since j ∉ Px−ot . So,
|Px−ot | ≤ m.
Assume, by contradiction, that
Proposition 8.6
Let P = (A, b, c) ∈ CS(C) be a non-degenerate problem with b > 0. Then, the
execution of the Simplex Algorithm in Figure 8.1 on P terminates.
Proof:
Let xi be the basic vector of the i-th iteration of the execution of s on P .
Observe that
|{xi : i ≤ j}| = j
for every j ∈ N+ , since
cxi+1 < cxi ,
by Proposition 8.4. Assume, by contradiction, that the execution of s on P
does not terminate. Then,
|{xi : i ∈ N+ }| = |N|.
{xi : i ∈ N+ } ⊆ BP ,
Theorem 8.1
Let P = (A, b, c) ∈ CS(C) be a non-degenerate problem with b > 0. When the
execution of the Simplex Algorithm in Figure 8.1 on P terminates, the following
statements hold: if b = 0 then SP = ∅; if b = 1 then s ∈ SP .
Proof:
(1) Assume that s terminates with b = 0. Then, x ∈ XP , t ≤ 0 and the guard
of the Simplex Algorithm in Step 4 is false. Therefore, by Proposition 8.3,
SP = ∅.
(2) Assume that s terminates with b = 1. Then, x ∈ XP and the guard of Step
4 of the Simplex Algorithm is true. Thus, by Proposition 8.2, x ∈ SP and so
s ∈ SP . QED
Theorem 8.2
Let P = (A, b, c) ∈ CS(C) be a non-degenerate problem with b > 0. Then, the
execution of the Simplex Algorithm in Figure 8.1 on P terminates with b = 0 if
SP = ∅, and with b = 1 and s ∈ SP otherwise.
Let Q be the dual of P (see Exercise 6.2).
(1) Assume that SP = ∅. Note that XP 6= ∅ since the initial x is an admissible
vector, by Proposition 8.1. Using Exercise 6.5, we conclude that:
YQ = ∅.
Figure: the set XP2ε in the (x1 , x2 )-plane, showing the points (0, 0), (1, 0), (1, 1 − 2ε) and (0, 1).
Observe that the guard of Step 4 tests whether or not y is an admissible vector
of Q. Hence, the guard of Step 4 never holds since YQ = ∅. Hence, the execution
of s on P never terminates with (b, s) = (1, x). Since the execution of s on P
terminates, by Proposition 8.6, then b = 0.
(2) Assume that SP 6= ∅. By Theorem 8.1, the execution of s on P does not
terminate with b = 0 but with b = 1 and with s ∈ SP . QED
Definition 8.2
Let 0 < ε ≤ 1/3 and n ∈ N+ . The standard optimization problem Pnε is defined
as follows:
min_x − Σ_{j=1}^{n} ε^(n−j) xj
  x1 + xn+1 = 1
  2 Σ_{j=1}^{i−1} ε^(i−j) xj + xi + xn+i = 1,   i = 2, . . . , n
  x ≥ 0.
Solution:
We start by noting that P ∈ CS(C), P is non-degenerate and b > 0. Hence, we
can apply the Simplex Algorithm. Let Q be the dual of P . The initial basic
admissible vector x is
(0, 0, 1, 1)T .
Moreover, P = {3, 4}. The unique solution of
(AP )T y = (cP )T
[Figure: the set XP in the (x1 , x2 )-plane.]
Then, P = {1, 3}. The unique solution of
(AP )T y = (cP )T
is y = (0, −2). Since

AT y ≰ cT

for the first and the second components, then y ∉ YQ . Hence, take k = 2. Then,

t = (−1, −1, 0, 0).
Since it is the case that t ≤ 0, then b = 0. So, by Theorem 8.1, SP = ∅. /
Problem 8.2
Let P be the following optimization problem:
min_x  −2x1 − x2
x1 + x3 = 5
4x1 + x2 + x4 = 25
x ≥ 0.
Solution:
It is immediate that P is non-degenerate and b > 0. Hence, we can apply the
Simplex Algorithm. Let Q be the dual of P . The initial basic admissible vector
x is
(0, 0, 5, 25).
Then, P = {3, 4}. The unique solution of
(AP )T y = (cP )T
Moreover,

o = min{5, 25/4} = 5.
Therefore,
x = (5, 0, 0, 5).
Then, P = {1, 4}. The solution of
(AP )T y = (cP )T
Moreover,
o = 5.
[Figure: the set XP in the (x1 , x2 )-plane.]
Thus,
x = (5, 5, 0, 0).
Hence, P = {1, 2}. The solution of
(AP )T y = (cP )T
Moreover,
o = 5.
Thus,
x = (0, 25, 5, 0).
Hence, P = {2, 3}. The solution of
(AP )T y = (cP )T
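The outcome of the trace above can be cross-checked numerically. The sketch below solves Problem 8.2 with scipy.optimize.linprog (an assumed dependency; this is not the Simplex Algorithm of Figure 8.1) and should report the last basic vector of the trace, (0, 25, 5, 0), with value −25.

```python
from scipy.optimize import linprog

# Problem 8.2: min -2*x1 - x2  s.t.  x1 + x3 = 5,  4*x1 + x2 + x4 = 25,  x >= 0.
c = [-2, -1, 0, 0]
A_eq = [[1, 0, 1, 0],
        [4, 1, 0, 1]]
b_eq = [5, 25]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 4, method="highs")
print(res.x, res.fun)   # expected: optimizer (0, 25, 5, 0) with value -25
```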
Exercise 8.1
Let P be the following optimization problem:
min_x  3x1 + 3x2 + 2x3
x1 + 2x2 + 3x3 = 3
4x1 + 5x2 + 6x3 = 9
x ≥ 0.
Chapter 9
Integer Optimization
Integer optimization problems are optimization problems in which the components of the admissible vectors are required to be natural numbers. In the standard form, the restrictions are

Ax = b
x ∈ Nn ,

whereas in the canonical form they are

Ax ≤ b
x ∈ Nn .
Example 9.1
A typical integer canonical optimization problem is the knapsack problem

max_x  cx
ax ≤ b
x ∈ Nn ,
where
• xj is the number of objects of kind j;
• aj is the weight of each object of kind j;
• cj is the value of each object of kind j;
• b is the upper bound on the weight;
and aj and cj are positive rational numbers for j = 1, . . . , n and b is a positive
rational number.
Definition 9.1
Let

P =   max_x  cx
      Ax ≤ b
      x ∈ Nn

be an integer canonical optimization problem. The relaxed problem of P , denoted by R(P ), is

max_x  cx
Ax ≤ b
x ≥ 0.
Proposition 9.1
Let P be an integer canonical optimization problem. Then,
1. XP ⊆ XR(P) ;
2. if s ∈ SP and s′ ∈ SR(P) , then cs ≤ cs′ ;
3. if s ∈ SR(P) ∩ Nn , then s ∈ SP and SP ⊆ SR(P) .
Proof:
(1) Let x ∈ XP . Then, x ≥ 0 since x ∈ Nn , and Ax ≤ b. Therefore, x ∈ XR(P ) .
(2) Assume that s ∈ SP and s′ ∈ SR(P) . Observe that, by (1), XP ⊆ XR(P) . Then,

cs′ = max{cx : x ∈ XR(P) } ≥ max{cx : x ∈ XP } = cs.
(3) Assume that s ∈ SR(P) ∩ Nn . Hence, s ∈ XP . Moreover, s ∈ SP by (2). Let s′ ∈ SP . Then, cs = cs′ and so s′ ∈ SR(P) . QED
Another way to compare the sets of optimizers is via the integrality gap.
Definition 9.2
Let P be an integer canonical optimization problem such that there is s′ ∈ SR(P) with cs′ ≠ 0. The integrality gap between SP and SR(P) , denoted by IGP , is

cs/cs′ ,

where s ∈ SP and s′ ∈ SR(P) . Similarly, for integer standard optimization problems.
Observe that

IGP ≤ 1 if P is canonical,   and   IGP ≥ 1 if P is standard.
The integrality gap is 1 when an optimizer of the relaxed problem is also an
optimizer of the integer problem. In Proposition 9.7, we will see that, under some mild conditions, this is the case when the restriction matrix of the problem is totally unimodular.
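For a concrete feel of the definition, here is a minimal sketch with made-up data (a one-kind knapsack, not an example from the text): the relaxed optimizer fills the knapsack fractionally, the integer optimizer rounds down, and the ratio of the two values is the integrality gap.

```python
# Made-up data (not from the text): a knapsack with one kind of object,
# weight a = 2, value c = 3, and capacity b = 5.
a, c, b = 2.0, 3.0, 5.0

relaxed_value = c * (b / a)    # cs' for R(P): x = 2.5 gives value 7.5
integer_value = c * (b // a)   # cs  for P:    x = 2   gives value 6.0

print(integer_value / relaxed_value)   # IG_P = 0.8 <= 1, as P is canonical
```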
The relaxation R(P ) of the knapsack problem P of Example 9.1 is

max_x  cx
ax ≤ b
x ≥ 0

and can be solved using duality techniques. Observe that the dual problem QR(P ) is

min_y  by
aT y ≥ cT
y ≥ 0.
Let k be such that

ck /ak ≥ cj /aj  for every j = 1, . . . , n

(observe that the weight aj and the value cj of each object of kind j are positive, see Example 9.1); that is, k is a kind of an object with the greatest ratio value/weight. Let x be such that

xj = 0 if j ≠ k,   and   xk = b/ak

(note that b is positive, see Example 9.1); that is, we only put in the knapsack objects of kind k. Moreover, let

y = ck /ak .
We show that x ∈ SR(P ) . Indeed:
• x ∈ XR(P ) ;
• y ∈ YQR(P ) ;
• by = bck /ak = cx.
Therefore, by Proposition 6.4, x ∈ SR(P ) and y ∈ RQR(P ) .
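A small numeric sketch of this construction (the weights, values and capacity below are made up for illustration): choose the kind k with the greatest value/weight ratio, fill the knapsack with it, and observe that the primal value cx equals the dual value by, which certifies optimality as in Proposition 6.4.

```python
import numpy as np

# Made-up knapsack data (weights a, values c, capacity b, all positive).
a = np.array([3.0, 5.0, 4.0])
c = np.array([6.0, 8.0, 10.0])
b = 12.0

k = int(np.argmax(c / a))    # kind with the greatest value/weight ratio
x = np.zeros_like(a)
x[k] = b / a[k]              # fill the knapsack with objects of kind k only
y = c[k] / a[k]              # dual admissible vector

print(x, c @ x, b * y)       # cx = by = 30, so x is an optimizer of R(P)
```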
Definition 9.3
An integer square matrix is unimodular if its determinant is either 1 or −1. A
matrix is totally unimodular if each of its nonsingular submatrices (see Defini-
tion 1.23) is unimodular.
Proposition 9.2
The entries of a totally unimodular matrix are either 0 or 1 or −1.
Proof:
Let A be a totally unimodular matrix and aij a coefficient of A. Assume that
aij ≠ 0. Then, [aij ] is a nonsingular submatrix of A with determinant aij . Since A is totally unimodular by hypothesis, then either aij = 1 or aij = −1. Otherwise, aij = 0. QED
Example 9.3
For instance, the matrix

   −1  0  1
    1  1  0
is totally unimodular. Indeed:
(1) all the nonsingular submatrices 1 × 1 have determinants in {−1, 1} since it
is the case that det [1] = 1 and det [−1] = −1.
(2) all the nonsingular submatrices 2 × 2 have determinants in {−1, 1} since it
is the case that the determinant of each of the submatrices
   −1  0        0  1       −1  1
    1  1        1  0        1  0

is −1.
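For small matrices, Definition 9.3 can be checked by brute force. The sketch below is a plain enumeration of square submatrices (exponential, and only meant for experiments; the function name is ours).

```python
import itertools
import numpy as np

def is_totally_unimodular(A, tol=1e-9):
    """True iff every nonsingular square submatrix of A has determinant 1 or -1."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for rows in itertools.combinations(range(m), k):
            for cols in itertools.combinations(range(n), k):
                d = np.linalg.det(A[np.ix_(rows, cols)])
                if abs(d) > tol and abs(abs(d) - 1.0) > tol:
                    return False
    return True

print(is_totally_unimodular([[-1, 0, 1], [1, 1, 0]]))   # True, as in Example 9.3
print(is_totally_unimodular([[1, 1], [1, -1]]))         # False: that determinant is -2
```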
Proposition 9.3
The collection of all unimodular n × n-matrices is a group.
Proof:
(1) Assume that A and B are unimodular n × n-matrices. Hence, det A and
det B are in {−1, 1}. Then, A × B is also unimodular since
det (A × B) = (det A) × (det B),
by Proposition 4.20. Therefore, det (A × B) ∈ {−1, 1}.
(2) I is unimodular since det I = 1.
(3) Assume that A is a unimodular n × n-matrix. Hence, det A ∈ {−1, 1}.
Then, A is nonsingular since det A 6= 0 (see Proposition 4.21). Then, A−1 is
also unimodular since
det A−1 = 1/(det A),
by Proposition 4.20. Therefore, det A−1 ∈ {−1, 1}. QED
Proposition 9.4
Let A be an m × m-matrix with integer coefficients. Then,
A is unimodular iff det A ≠ 0 and, for each b ∈ Zm , if Ax = b then x ∈ Zm .
Proof:
(→) Observe that A is a nonsingular matrix. Since x = A−1 b, then, by Cramer’s
Rule (see Proposition 4.23),
xj = (det [A]jb )/(det A).
Observe that either det A = 1 or det A = −1 and det [A]jb ∈ Z. Therefore,
xj ∈ Z for every j = 1, . . . , m.
(←) Let b be the vector ej (see Definition 1.20) and x = A−1 ej . Then, by
hypothesis, x ∈ Zm . Therefore,
(A−1 )•j = A−1 ej ∈ Zm

for every j = 1, . . . , m, so A−1 is an integer matrix and det A−1 ∈ Z. Moreover, det A ∈ Z and

det A−1 = 1/(det A)
by Proposition 4.20. Therefore, either det A−1 = det A = 1 or det A−1 =
det A = −1. Hence, A is unimodular. QED
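A quick numeric illustration of the (→) direction, with an arbitrary unimodular matrix that is not taken from the text:

```python
import numpy as np

A = np.array([[2, 1],
              [1, 1]])        # integer matrix with det A = 1, hence unimodular
b = np.array([7, -3])         # an arbitrary integer right-hand side

x = np.linalg.solve(A, b)
print(round(np.linalg.det(A)), x)   # 1 and the integer solution [ 10. -13.]
```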
The following result states that total unimodularity is preserved under spe-
cific operations.
Proposition 9.5
Let A be a totally unimodular m×n-matrix with integer coefficients. Then, any
matrix obtained from A by adding a row or a column of the form (0, . . . , 1, . . . , 0)
is again totally unimodular. The same happens if we multiply any row or col-
umn of A by −1.
Proof:
Denote the coefficients of A by aij for any i = 1, . . . , m and j = 1, . . . , n. We
only consider the case where the matrix A0 is obtained from A by adding the
line (0, . . . , 1, . . . , 0) at the top. That is, A0 is the matrix:
    0     . . .    1     . . .    0
   a11    . . .   a1j    . . .   a1n
                  . . .
   am1    . . .   amj    . . .   amn

Let B′ be a nonsingular square submatrix of A′. If B′ contains no entries of the added row, then B′ is also a submatrix of A and, therefore, unimodular. Otherwise, the row of B′ that comes from the added row is either zero, which is impossible since B′ is nonsingular, or of the form (0, . . . , 1, . . . , 0); expanding the determinant along this row gives

det B′ = ± det B,

where B is a nonsingular submatrix of A. Hence, det B′ ∈ {−1, 1}. QED
Proposition 9.6
Let A be a matrix such that:
• each entry of A is either 0 or 1 or −1;
• each column of A has at most two non-zero entries;
• the set of the indices of the rows can be partitioned in two subsets with the following properties:
  – if a column has two entries of the same sign then the indices of the rows of those entries belong to different subsets of the partition;
  – if a column has two entries of different signs then the indices of the rows of those entries belong to the same subset of the partition.
Then, A is totally unimodular.
Proof:
We must show that each nonsingular k × k-submatrix of A is unimodular, for
every k ∈ N. The proof is by induction on k.
Base: k = 0. Observe that there is a unique 0 × 0-submatrix of A, the empty
matrix, which has determinant 1 (see Definition 4.8).
Step: Let k = `+1. Assume, by the induction hypothesis, that each nonsingular
` × `-submatrix of A has determinant either equal to 1 or equal to −1. Let A0
be a nonsingular k × k-submatrix of A. Recall (see Remark 4.2) that

det A′ = ∑_{j=1}^{k} (−1)^{i+j} a′ij M^{A′}_{i,j} ,

where M^{A′}_{i,j} is the (i, j)-minor of A′. There are three cases to consider:

(1) A′ has a column with only zero entries. In this case, det A′ = 0. Hence, A′ is a singular matrix. So, this case is not possible.

(2) A′ has a column j with a unique non-zero entry. So, a′ij ∈ {−1, 1}. Then,

det A′ = ∑_{i=1}^{k} (−1)^{i+j} a′ij M^{A′}_{i,j} = (−1)^{i+j} a′ij M^{A′}_{i,j} .
Since A′ is nonsingular, then M^{A′}_{i,j} ≠ 0 and the matrix A′i,j is nonsingular. Thus, by the induction hypothesis, M^{A′}_{i,j} ∈ {−1, 1}. Therefore, det A′ ∈ {−1, 1}.
(3) All columns of A′ have two non-zero entries. Let {K1 , K2 } be the partition of {1, . . . , k} corresponding to the given partition of A. We now show that

(†)   ∑_{i∈K1} a′ij = ∑_{i∈K2} a′ij

for every j = 1, . . . , k. Assume that a′i1j and a′i2j are the two non-zero entries of column j. Then, there are two cases:

(a) a′i1j and a′i2j have the same sign. Assume, without loss of generality, that i1 ∈ K1 , i2 ∈ K2 and a′i1j = a′i2j = 1. Then,

∑_{i∈K1} a′ij − ∑_{i∈K2} a′ij = a′i1j − a′i2j = 1 − 1 = 0.

(b) a′i1j and a′i2j have different signs. Assume, without loss of generality, that i1 , i2 ∈ K1 , a′i1j = −1 and a′i2j = 1. Then,

∑_{i∈K1} a′ij − ∑_{i∈K2} a′ij = ∑_{i∈K1} a′ij = a′i1j + a′i2j = −1 + 1 = 0.

Hence, by (†),

∑_{i∈K1} a′i• − ∑_{i∈K2} a′i• = 0,

so the rows of A′ are linearly dependent and det A′ = 0, contradicting the fact that A′ is nonsingular. Therefore, this case is not possible, and in all possible cases det A′ ∈ {−1, 1}. QED
Example 9.4
The matrix

    1  0  0
    0 −1  1
    1  1 −1
is totally unimodular. Indeed, it satisfies the first two conditions of Proposi-
tion 9.6. The partition {{1}, {2, 3}} of the indices of rows shows that the third
condition of Proposition 9.6 is also satisfied. Therefore, Proposition 9.6 allows us to conclude that the matrix is totally unimodular.
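The three conditions of Proposition 9.6 are easy to check mechanically for a candidate partition. The helper below is a sketch written for illustration (not from the book); row indices are 0-based, so the partition {{1}, {2, 3}} of Example 9.4 becomes K1 = [0], K2 = [1, 2].

```python
import numpy as np

def satisfies_prop_9_6(A, K1, K2):
    """Check the three conditions of Proposition 9.6 for the row partition {K1, K2}."""
    A = np.asarray(A)
    if not np.all(np.isin(A, (-1, 0, 1))):
        return False                                   # entries must be 0, 1 or -1
    for j in range(A.shape[1]):
        rows = [i for i in range(A.shape[0]) if A[i, j] != 0]
        if len(rows) > 2:
            return False                               # at most two non-zero entries per column
        if len(rows) == 2:
            i1, i2 = rows
            same_sign = (A[i1, j] == A[i2, j])
            same_part = ({i1, i2} <= set(K1)) or ({i1, i2} <= set(K2))
            if same_sign == same_part:                 # partition condition violated
                return False
    return True

A = [[1, 0, 0], [0, -1, 1], [1, 1, -1]]
print(satisfies_prop_9_6(A, K1=[0], K2=[1, 2]))        # True, as in Example 9.4
```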
x : E → {0, 1}
such that,
A graph is a pair (V, E), where V is a set whose elements are called vertices and E ⊆ V × V is a set whose elements are called edges. For each e ∈ E, S(e) gives the first component of
e and T (e) the second component. A finite graph is a graph where V is a finite set. A
bipartite graph (V, E) induced by the partition {V1 , V2 } of V is a finite graph such that:
(1) E ⊆ V1 × V2 ; (2) for each v ∈ V1 there is e ∈ E such that S(e) = v; (3) for each v ∈ V2
there is e ∈ E such that T (e) = v.
ai1 j = ai2 j = 1

and

aij = 0

for every i ∈ {1, . . . , m} \ {i1 , i2 }. Thus, in each column there are exactly two entries equal to 1. So A is totally unimodular by Proposition 9.6, by partitioning the set of indices of rows into those indexed by V1 and those indexed by V2 .
Consider the following sets V1 = {v1 , v2 , v3 } and V2 = {v4 , v5 , v6 } of in-
dividuals and tasks, respectively and the set E = {e1 : v1 → v4 , e2 : v1 →
v5 , e3 : v2 → v5 , e4 : v2 → v6 , e5 : v3 → v6 , e6 : v3 → v4 } of preferences. More-
over, assume that the degree of satisfaction for each possible choice is 1. So,
the corresponding assignment problem is the following 0-1 integer canonical
optimization problem:
max_x  [1 1 1 1 1 1] [xe1 xe2 xe3 xe4 xe5 xe6 ]T

   1 1 0 0 0 0     xe1       1
   0 0 1 1 0 0     xe2       1
   0 0 0 0 1 1     xe3   ≤   1
   1 0 0 0 0 1     xe4       1
   0 1 1 0 0 0     xe5       1
   0 0 0 1 1 0     xe6       1

   xe1 , xe2 , xe3 , xe4 , xe5 , xe6 ∈ {0, 1}.
Its maximizers correspond to the assignments

v1 ↦ v4 , v2 ↦ v5 , v3 ↦ v6    and    v1 ↦ v5 , v2 ↦ v6 , v3 ↦ v4 ,

each with optimum value min(|V1 |, |V2 |) = 3.
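Since the restriction matrix is totally unimodular, the 0-1 problem can be solved through its linear relaxation. The sketch below does so with scipy.optimize.linprog (an assumed dependency, not the book's algorithm) and recovers a 0-1 maximizer of value 3.

```python
from scipy.optimize import linprog

# Rows v1..v6, columns e1..e6 (e1: v1->v4, e2: v1->v5, e3: v2->v5,
# e4: v2->v6, e5: v3->v6, e6: v3->v4), as in the problem above.
A = [[1, 1, 0, 0, 0, 0],
     [0, 0, 1, 1, 0, 0],
     [0, 0, 0, 0, 1, 1],
     [1, 0, 0, 0, 0, 1],
     [0, 1, 1, 0, 0, 0],
     [0, 0, 0, 1, 1, 0]]
b = [1] * 6
c = [-1] * 6      # linprog minimizes, so negate the all-ones objective

res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, 1)] * 6, method="highs")
print(res.x, -res.fun)   # a 0-1 maximizer of value 3 (one of the two assignments above)
```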
The reader interested in assignment problems may consult [13, 28, 56, 16].
Also of interest to this subject are the works [17, 51, 42].
We now provide a sufficient condition for an integer canonical optimization
problem to have optimizers.
Proposition 9.7
Let P = (A, b, c) be an integer canonical optimization problem. Assume that:
Proof:
The restriction matrix of CS(R(P )) is
[A I]
AB sB = b.
The result above tells us that integer optimization problems with totally
unimodular matrices under some mild conditions can be analyzed using generic
linear optimization techniques.
input : (A, b, c)
1. (b, s, o) := a(A, b, c);
2. if b = 0 then return 0;
3. w := {((A, b, c), (b, s, o))};
4. while w ≠ {} do
   (a) let ((A′ , b′ , c), (b, s, o)) be the first element of w;
   (b) remove the first element from w;
   (c) if each component of s is in N then return (1, s);
   (d) let j be such that sj ∉ N;
   (e) let A′≤ , b′≤ be obtained by adding ej to A′ , ⌊sj ⌋ to b′ ;
   (f) (b, s, o) := a(A′≤ , b′≤ , c);
   (g) if b = 1 then add ((A′≤ , b′≤ , c), (b, s, o)) to w sorted by (*);
   (h) let A′≥ , b′≥ be obtained by adding −ej to A′ , −⌈sj ⌉ to b′ ;
   (i) (b, s, o) := a(A′≥ , b′≥ , c);
   (j) if b = 1 then add ((A′≥ , b′≥ , c), (b, s, o)) to w sorted by (*);
5. return 0.
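A possible Python rendering of the worklist procedure above, using scipy.optimize.linprog as the oracle a(A, b, c). Two choices here are assumptions of ours: the sorting criterion (*) is taken to mean "largest bound o first", and the branching component is the most fractional one rather than the first.

```python
import math
import numpy as np
from scipy.optimize import linprog

def lp_oracle(A, b, c):
    """a(A, b, c): solve the relaxation max cx s.t. Ax <= b, x >= 0."""
    res = linprog(-np.asarray(c, float), A_ub=A, b_ub=b,
                  bounds=(0, None), method="highs")
    return (1, res.x, -res.fun) if res.status == 0 else (0, None, None)

def branch_and_bound(A, b, c, tol=1e-7):
    A, b = np.asarray(A, float), np.asarray(b, float)
    ok, s, o = lp_oracle(A, b, c)
    if not ok:
        return 0
    work = [((A, b), (s, o))]
    while work:
        work.sort(key=lambda item: -item[1][1])        # (*): largest bound o first
        (A0, b0), (s, o) = work.pop(0)
        if np.all(np.abs(s - np.round(s)) <= tol):
            return 1, np.round(s)
        j = int(np.argmax(np.abs(s - np.round(s))))    # a component with sj not in N
        ej = np.zeros(len(s))
        ej[j] = 1.0
        for row, rhs in ((ej, math.floor(s[j])), (-ej, -math.ceil(s[j]))):
            A1 = np.vstack([A0, row])
            b1 = np.append(b0, rhs)
            ok, s1, o1 = lp_oracle(A1, b1, c)
            if ok:
                work.append(((A1, b1), (s1, o1)))
    return 0

# Example 9.6: max -2*x1 + 5*x2 with -4x1 + 7x2 <= 21, 2x1 + 5x2 <= 32, x1 <= 6, x in N^2.
print(branch_and_bound([[-4, 7], [2, 5], [1, 0]], [21, 32, 6], [-2, 5]))   # (1, array([2., 4.]))
```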
Example 9.6
Herein, we illustrate the application of the branch and bound technique to the
following integer linear optimization problem
max_x  −2x1 + 5x2

P =    −4x1 + 7x2 ≤ 21
       2x1 + 5x2 ≤ 32
       x1 ≤ 6
       x ∈ N2 .
The relaxed problem R(P ) is

max_x  −2x1 + 5x2
−4x1 + 7x2 ≤ 21
2x1 + 5x2 ≤ 32
x1 ≤ 6
x ≥ 0,

whose maximizer is (7/2, 5).
Since the first component of the maximizer is not a natural number, we consider
the following subproblems:
max_x  −2x1 + 5x2

P1 =   −4x1 + 7x2 ≤ 21
       2x1 + 5x2 ≤ 32
       x1 ≤ 6
       x1 ≤ 3
       x ≥ 0
In the next step consider the first element of w. Since the second component
of the maximizer of P1 is not a natural number, we can consider the following
two subproblems of P1 :
max_x  −2x1 + 5x2

P11 =  −4x1 + 7x2 ≤ 21
       2x1 + 5x2 ≤ 32
       x1 ≤ 3
       x2 ≤ 4
       x ≥ 0
In the next step consider the first element of w. Since the first component of
the maximizer of P11 is not a natural number, we can consider the following
In the next step the execution of the algorithm ends returning (1, (2, 4)).
g : V → {1, . . . , k}
Solution:
Assume that V = {v1 , . . . , vℓ }. An admissible vector has the form
meaning that
xj = 1 if color j is used, and xj = 0 otherwise;
zjv = 1 if vertex v is colored with j, and zjv = 0 otherwise.
models the vertex coloring problem. Observe that when there are no optimizers,
we conclude that the graph is not k-colorable. When there are optimizers, the
value of the objective map for each optimizer is the chromatic number of G
(the smallest number of colors needed to color the graph). /
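The optimization problem itself is omitted in the text above. A standard 0-1 model consistent with the variables xj and zjv just described (our formulation, not necessarily the book's exact one) can be assembled and solved with scipy.optimize.milp (scipy ≥ 1.9, an assumed dependency), as in the sketch below.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def coloring_model(num_vertices, edges, k):
    """Variables: x_1..x_k followed by z_jv (j = color, v = vertex), all 0-1.

    Minimize sum_j x_j subject to: each vertex gets exactly one color,
    adjacent vertices get different colors, and z_jv <= x_j."""
    n = k + k * num_vertices
    zi = lambda j, v: k + j * num_vertices + v     # index of z_jv (0-based)
    c = np.zeros(n)
    c[:k] = 1.0
    rows, lbs, ubs = [], [], []
    for v in range(num_vertices):                  # exactly one color per vertex
        r = np.zeros(n)
        for j in range(k):
            r[zi(j, v)] = 1.0
        rows.append(r); lbs.append(1.0); ubs.append(1.0)
    for (u, v) in edges:                           # endpoints of an edge differ
        for j in range(k):
            r = np.zeros(n)
            r[zi(j, u)] = 1.0
            r[zi(j, v)] = 1.0
            rows.append(r); lbs.append(-np.inf); ubs.append(1.0)
    for v in range(num_vertices):                  # a color must be "used" to be assigned
        for j in range(k):
            r = np.zeros(n)
            r[zi(j, v)] = 1.0
            r[j] = -1.0
            rows.append(r); lbs.append(-np.inf); ubs.append(0.0)
    return c, LinearConstraint(np.array(rows), lbs, ubs)

c, cons = coloring_model(3, [(0, 1), (1, 2), (0, 2)], k=3)   # a triangle
res = milp(c, constraints=cons, integrality=np.ones(len(c)), bounds=Bounds(0, 1))
print(res.fun)   # 3.0: the chromatic number of the triangle
```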
Exercise 9.1
Consider the problem
max_x  2x1 + 3x2

I =    −(2/3) x1 + x2 ≤ 1
       (8/3) x1 + x2 ≤ 16
       x ∈ N2 .
Use the branch and bound algorithm to conclude whether or not this problem
has a maximizer.
Bibliography
[4] T. M. Apostol. Calculus. Vol. II: Multi-Variable Calculus and Linear Al-
gebra, with Applications to Differential Equations and Probability. Wiley,
second edition, 1991.
[7] S. Axler. Linear Algebra Done Right. Springer, second edition, 1997.
[30] J. Farkas. Theorie der einfachen Ungleichungen. Journal für die Reine
und Angewandte Mathematik, 124:1–27, 1902.
[44] V. Klee and G. J. Minty. How good is the simplex algorithm? In Inequal-
ities, III, pages 159–175. Academic Press, 1972.
[60] T. Terlaky and S. Z. Zhang. Pivot rules for linear programming: A sur-
vey on recent theoretical developments. Annals of Operations Research,
46/47(1-4):203–233, 1993.
List of symbols
=, 5
[ ], 38
≥, 5
≤, 5
{0}K , 32
(Rn , Θ), 117
ˆ·, 179, 201, 203
·, 202
p · p, 198
·, 202
A= , 122
AB , 89
Ai,j , 111
A(W ), 121
A(W ), 118
A, 12
Ax , 74
a•j , 73
ai• , 73
Bε (x), 57
Bε [x], 57
bn(U ), 58
b, 12
b= , 122
C^A_{i,j} , 112
CDP, 179
C(A), 73
C, 13
CS, 19
DSF , 178
DSF_{mn} , 177
Da , 197
det A, 111
dim, 108, 126, 146, 180
d(x), 158
e(y), 158
int(U ), 58
IGP , 225
ker(A), 115
L, 7
LC, 15
M , 89
M^A_{i,j} , 112
N , 89
N> , 122
NP, 200
O(g), 198
P, 199
P^i , 148
Q|j , 15
Q∪ , 178
RQ , 150
rank(A), 114
Rn , 33
SP , 3
SDP, 179
span(A), 114
spanV (U ), 34
s, 206
S, 14
sizeGE, 180
SC, 18
SP , 22
τa , 198
V ⊥ , 109
V1 + V2 , 34
ωa1 , 200
ωa , 198
XP , 3
XP◦ , 40
∂XP , 40
YQ , 150
Subject index