
Mathematical Foundations of the Decision-Making Process

Partial Exam Date: November 23, 2011


1 Optimization problems in general setting
Definition 1.1 Given a function $f\colon D \to \mathbb{R}$, defined on some nonempty set $D$, and given a nonempty subset $S$ of $D$, we will use the notations:
\[
\operatorname*{argmin}_{x\in S} f(x) := \{x_0 \in S \mid f(x_0) \le f(x),\ \forall x \in S\}
\]
and
\[
\operatorname*{argmax}_{x\in S} f(x) := \{x_0 \in S \mid f(x_0) \ge f(x),\ \forall x \in S\}.
\]
An element of $\operatorname*{argmin}_{x\in S} f(x)$ is called a minimum point of $f$ on $S$, or an optimal solution of the optimization (i.e., minimization) problem
\[
f(x) \to \min, \quad x \in S. \tag{1}
\]
Similarly, an element of $\operatorname*{argmax}_{x\in S} f(x)$ is called a maximum point of $f$ on $S$, or an optimal solution of the optimization (i.e., maximization) problem
\[
f(x) \to \max, \quad x \in S. \tag{2}
\]
The function f and the set S, involved in a general optimization problem of type (1)
or (2), are called objective function and feasible set, respectively. Thus by a feasible
point of these problems we mean any element of S.
Remark 1.1 It is easily seen that maximization problems can be converted into minimization problems, and vice versa, since
\[
\operatorname*{argmax}_{x\in S} f(x) = \operatorname*{argmin}_{x\in S} (-f)(x) \quad\text{and}\quad \operatorname*{argmin}_{x\in S} f(x) = \operatorname*{argmax}_{x\in S} (-f)(x). \tag{3}
\]
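The identity (3) is easy to check numerically on a finite feasible set. The following minimal Python sketch is not part of the original text; the objective $f$ and the sample set $S$ are arbitrary illustrative choices.

```python
# Illustrative check of identity (3) on a finite feasible set.
# The objective f and the set S below are arbitrary example choices.

def argmin_set(f, S):
    """All points of S attaining the minimal value of f on S."""
    m = min(f(x) for x in S)
    return {x for x in S if f(x) == m}

def argmax_set(f, S):
    m = max(f(x) for x in S)
    return {x for x in S if f(x) == m}

S = {-2, -1, 0, 1, 2, 3}
f = lambda x: (x - 1) ** 2

# argmax f == argmin (-f), and argmin f == argmax (-f), as in (3).
assert argmax_set(f, S) == argmin_set(lambda x: -f(x), S)
assert argmin_set(f, S) == argmax_set(lambda x: -f(x), S)
print(argmin_set(f, S))  # {1}: the unique minimum point of f on S
```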
2 Geometric interpretation of optimal solutions
Definition 2.1 Given a function $f\colon D \to \mathbb{R}$, defined on a nonempty set $D$, and a nonempty subset $S \subseteq D$, we shall use in the sequel the following level sets of $f$ associated with a number $\alpha \in \mathbb{R}$:
\[
\begin{aligned}
S_f(\alpha) &:= \{x \in S \mid f(x) = \alpha\},\\
S_f^{\le}(\alpha) &:= \{x \in S \mid f(x) \le \alpha\},\\
S_f^{<}(\alpha) &:= S_f^{\le}(\alpha) \setminus S_f(\alpha),\\
S_f^{>}(\alpha) &:= S \setminus S_f^{\le}(\alpha),\\
S_f^{\ge}(\alpha) &:= S \setminus S_f^{<}(\alpha).
\end{aligned}
\]
Proposition 2.1 Let $f\colon D \to \mathbb{R}$ be a function defined on a nonempty set $D$. Then, for any nonempty subset $S$ of $D$, the following hold:
\[
\operatorname*{argmin}_{x\in S} f(x) = \{x_0 \in S \mid S \subseteq D_f^{\ge}(f(x_0))\}, \tag{4}
\]
\[
\operatorname*{argmax}_{x\in S} f(x) = \{x_0 \in S \mid S \subseteq D_f^{\le}(f(x_0))\}. \tag{5}
\]
Proof: Let $x_0 \in \operatorname*{argmin}_{x\in S} f(x)$. Then $x_0 \in S$ and $f(x_0) \le f(x)$ for all $x \in S$, which means that $x \in S_f^{\ge}(f(x_0)) \subseteq D_f^{\ge}(f(x_0))$ for all $x \in S$. Thus we have $S \subseteq D_f^{\ge}(f(x_0))$. Hence the inclusion $\subseteq$ in (4) is true. In order to prove the converse inclusion, let $x_0 \in S$ be such that $S \subseteq D_f^{\ge}(f(x_0))$. Then, for every $x \in S$, we have $x \in D_f^{\ge}(f(x_0))$, i.e., $f(x) \ge f(x_0)$, hence $x_0 \in \operatorname*{argmin}_{x\in S} f(x)$. Thus the inclusion $\supseteq$ in (4) is also true.
In view of (3), the equality (5) can be easily deduced from (4), taking into account that $D_{-f}^{\ge}((-f)(x_0)) = D_f^{\le}(f(x_0))$.
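On a finite set, the characterization (4) can be verified by enumeration. Here is a small illustrative Python sketch (the data $D$, $S$ and $f$ are arbitrary example choices, and the level sets are computed directly from Definition 2.1):

```python
# Illustrative check of Proposition 2.1, equality (4), on a finite set.
# D, S and f are arbitrary example data.

D = set(range(-5, 6))
S = {-3, -1, 0, 2, 4}
f = lambda x: x * x

def level_ge(A, f, alpha):
    """A_f^>=(alpha): the points of A where f takes values >= alpha."""
    return {x for x in A if f(x) >= alpha}

argmin_S = {x for x in S if f(x) == min(map(f, S))}
# Right-hand side of (4): points x0 of S whose >=-level set in D contains S.
rhs = {x0 for x0 in S if S <= level_ge(D, f, f(x0))}
assert argmin_S == rhs == {0}
```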
Exercise 2.1 Let $f, g\colon S \to \mathbb{R}$ be two functions defined on a nonempty subset $S$ of $\mathbb{R}^n$. Prove that:
(a) If $g(x_1) \le g(x_2)$ for all $x_1, x_2 \in S$ with $f(x_1) \le f(x_2)$, then
\[
\operatorname*{argmin}_{x\in S} f(x) \subseteq \operatorname*{argmin}_{x\in S} g(x) \quad\text{and}\quad \operatorname*{argmax}_{x\in S} f(x) \subseteq \operatorname*{argmax}_{x\in S} g(x).
\]
(b) If $g(x_1) < g(x_2)$ for all $x_1, x_2 \in S$ with $f(x_1) < f(x_2)$, then
\[
\operatorname*{argmin}_{x\in S} g(x) \subseteq \operatorname*{argmin}_{x\in S} f(x) \quad\text{and}\quad \operatorname*{argmax}_{x\in S} g(x) \subseteq \operatorname*{argmax}_{x\in S} f(x).
\]
(c) If there exists an increasing function $\varphi\colon D \to \mathbb{R}$, defined on a nonempty set $D \subseteq \mathbb{R}$ with $f(S) \subseteq D$, such that $g = \varphi \circ f$, then
\[
\operatorname*{argmin}_{x\in S} f(x) = \operatorname*{argmin}_{x\in S} g(x) \quad\text{and}\quad \operatorname*{argmax}_{x\in S} f(x) = \operatorname*{argmax}_{x\in S} g(x).
\]
(d) If there exists a decreasing function $\varphi\colon D \to \mathbb{R}$, defined on a nonempty set $D \subseteq \mathbb{R}$ with $f(S) \subseteq D$, such that $g = \varphi \circ f$, then
\[
\operatorname*{argmin}_{x\in S} f(x) = \operatorname*{argmax}_{x\in S} g(x) \quad\text{and}\quad \operatorname*{argmax}_{x\in S} f(x) = \operatorname*{argmin}_{x\in S} g(x).
\]
Solution: (a) Assume that for all $x_1, x_2 \in S$ with $f(x_1) \le f(x_2)$ we have $g(x_1) \le g(x_2)$. Then it is easily seen that $S_f^{\ge}(f(x)) \subseteq S_g^{\ge}(g(x))$ and $S_f^{\le}(f(x)) \subseteq S_g^{\le}(g(x))$ for all $x \in S$. By Proposition 2.1 we get
\[
\operatorname*{argmin}_{x\in S} f(x) = \{x_0 \in S \mid S \subseteq S_f^{\ge}(f(x_0))\} \subseteq \{x_0 \in S \mid S \subseteq S_g^{\ge}(g(x_0))\} = \operatorname*{argmin}_{x\in S} g(x);
\]
\[
\operatorname*{argmax}_{x\in S} f(x) = \{x_0 \in S \mid S \subseteq S_f^{\le}(f(x_0))\} \subseteq \{x_0 \in S \mid S \subseteq S_g^{\le}(g(x_0))\} = \operatorname*{argmax}_{x\in S} g(x).
\]
(b) Assume that for all $x_1, x_2 \in S$ with $f(x_1) < f(x_2)$ we have $g(x_1) < g(x_2)$. Then, for all $x_1, x_2 \in S$ such that $g(x_1) \le g(x_2)$, we actually have $f(x_1) \le f(x_2)$. Then, by interchanging the roles of $f$ and $g$ in (a), we get the desired conclusion.
(c) This assertion directly follows from (a) and (b).
(d) By considering $\psi := -\varphi$ (which is increasing), and recalling the property (3), the conclusion easily follows from (c).
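Part (c) is the familiar fact that composing the objective with a strictly increasing transform (e.g., $\exp$) leaves the optimal solutions unchanged. A minimal illustrative Python check, with example data only:

```python
import math

# Illustrative check of Exercise 2.1(c): an increasing transform phi
# does not change argmin/argmax. S and f are arbitrary example data.
S = [-2.0, -0.5, 0.0, 1.0, 3.0]
f = lambda x: (x - 1.0) ** 2
g = lambda x: math.exp(f(x))   # g = phi o f, with phi = exp increasing

argmin = lambda h: {x for x in S if h(x) == min(map(h, S))}
argmax = lambda h: {x for x in S if h(x) == max(map(h, S))}

assert argmin(f) == argmin(g)  # both {1.0}
assert argmax(f) == argmax(g)  # both {-2.0}
```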
Consider now the particular case of a linear optimization problem, where
\[
\emptyset \ne S \subseteq D := \mathbb{R}^2 \quad\text{and}\quad f(x) := \langle x, c\rangle,\ x \in \mathbb{R}^2,
\]
the point $c \in \mathbb{R}^2 \setminus \{0_2\}$ being a priori given.
In this case $f$ is a nonconstant linear objective function. For every point $x_0 \in S$, both level sets $D_f^{\ge}(f(x_0)) = \{x \in \mathbb{R}^2 \mid \langle x, c\rangle \ge \langle x_0, c\rangle\}$ and $D_f^{\le}(f(x_0)) = \{x \in \mathbb{R}^2 \mid \langle x, c\rangle \le \langle x_0, c\rangle\}$ actually are closed halfplanes, $c$ being a normal vector of the straight line $D_f(f(x_0)) = \{x \in \mathbb{R}^2 \mid \langle x, c\rangle = \langle x_0, c\rangle\}$, which indicates the increasing direction for the objective function. Thus, (4) shows that $x_0 \in \operatorname*{argmin}_{x\in S} f(x)$ if and only if the feasible set $S$ is contained in the halfplane $D_f^{\ge}(f(x_0))$ (which is bounded by the straight line $D_f(f(x_0))$ and oriented by the vector $c$). Similarly, (5) shows that $x_0 \in \operatorname*{argmax}_{x\in S} f(x)$ if and only if the feasible set $S$ lies in the halfplane $D_f^{\le}(f(x_0))$ (bounded by the straight line $D_f(f(x_0))$ and oriented by the vector $-c$).
Consider now the particular case of the optimal location problem, where
\[
\emptyset \ne S \subseteq D := \mathbb{R}^2 \quad\text{and}\quad f(x) := \|x - x^*\|,\ x \in \mathbb{R}^2,
\]
the point $x^* \in \mathbb{R}^2 \setminus S$ being a priori given.
In this case, for every point $x \in \mathbb{R}^2$, $f(x)$ represents the Euclidean distance between $x$ and $x^*$. Consider a point $x_0 \in S$. Since $x^* \notin S$, we have $\|x_0 - x^*\| > 0$, hence the level set $D_f^{\le}(f(x_0)) = \{x \in \mathbb{R}^2 \mid \|x - x^*\| \le \|x_0 - x^*\|\} = \bar B(x^*, \|x_0 - x^*\|)$ actually is the closed Euclidean ball (i.e., a closed disk) centered at $x^*$ with radius $\|x_0 - x^*\|$, and $D_f^{\ge}(f(x_0)) = \{x \in \mathbb{R}^2 \mid \|x - x^*\| \ge \|x_0 - x^*\|\} = \mathbb{R}^2 \setminus B(x^*, \|x_0 - x^*\|)$ represents the complement of the open Euclidean ball (i.e., the complement in $\mathbb{R}^2$ of an open disk) centered at $x^*$ with radius $\|x_0 - x^*\|$. Thus (4) shows that $x_0 \in \operatorname*{argmin}_{x\in S} f(x)$ if and only if the feasible set $S$ lies outside the open disk centered at $x^*$ with radius $\|x_0 - x^*\|$. Similarly, (5) shows that $x_0 \in \operatorname*{argmax}_{x\in S} f(x)$ if and only if the feasible set $S$ is contained in the closed disk centered at $x^*$ with radius $\|x_0 - x^*\|$.
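Again this can be illustrated on a finite feasible set: a point of $S$ is a nearest point to $x^*$ exactly when no point of $S$ falls strictly inside the disk through it. A minimal Python sketch with example data (not from the original text):

```python
import math

# Illustrative check of the disk criterion in the optimal location problem.
# x_star and S are arbitrary example data, with x_star outside S.
x_star = (0.0, 0.0)
S = [(1.0, 1.0), (3.0, 0.0), (0.0, 2.0)]
dist = lambda x: math.hypot(x[0] - x_star[0], x[1] - x_star[1])

for x0 in S:
    r = dist(x0)
    outside_open_disk = all(dist(x) >= r for x in S)  # S misses B(x*, r)
    assert outside_open_disk == (r == min(map(dist, S)))
# Nearest point here: (1, 1), at distance sqrt(2) from x_star.
```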
3 Existence and unicity of optimal solutions
Theorem 3.1 (Weierstrass; Existence of optimal solutions) Let $f\colon S \to \mathbb{R}$ be a function, defined on a nonempty set $S \subseteq \mathbb{R}^n$. If $f$ is continuous and $S$ is compact (i.e., bounded and closed), then both optimization problems (1) and (2) have at least one optimal solution.
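Both hypotheses matter: for instance, $f(x) = x$ on the noncompact set $]0, 1[$ has neither a minimum nor a maximum point. The sketch below (illustrative only; the objective and interval are example choices) approximates by grid search the minimum that the theorem guarantees on a compact interval:

```python
import math

# Grid search illustrating Weierstrass' theorem on a compact interval.
# The objective and the interval [a, b] are arbitrary example choices;
# the grid only approximates the minimum whose existence the theorem asserts.
a, b, n = 0.0, 2.0, 10_000
f = lambda x: x * math.exp(-x)      # continuous on the compact set [a, b]

grid = [a + (b - a) * i / n for i in range(n + 1)]
x_best = min(grid, key=f)
print(x_best, f(x_best))            # minimum attained at the endpoint x = 0
```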
Proposition 3.1 (Unicity of optimal solutions) Let $f\colon S \to \mathbb{R}$ be a function defined on a nonempty set $S \subseteq \mathbb{R}^n$. The following assertions are equivalent:
$1^\circ$ The optimization problem (1) has at most one optimal solution.
$2^\circ$ For all $x_1, x_2 \in S$, $x_1 \ne x_2$, there exists $x^* \in S$ such that
\[
f(x^*) < \max\{f(x_1), f(x_2)\}.
\]
Proof: $1^\circ \Rightarrow 2^\circ$. Assume that $\operatorname{card}(\operatorname*{argmin}_{x\in S} f(x)) \le 1$ and suppose to the contrary that there exist two distinct points $x_1, x_2 \in S$ satisfying the inequality $f(x) \ge \max\{f(x_1), f(x_2)\}$ for every $x \in S$. We infer that $f(x_1) \le f(x)$ and $f(x_2) \le f(x)$ for all $x \in S$, i.e., $x_1, x_2 \in \operatorname*{argmin}_{x\in S} f(x)$, contradicting the hypothesis.
$2^\circ \Rightarrow 1^\circ$. Assume that for every distinct points $x_1, x_2 \in S$ there exists some point $x^* \in S$ such that $f(x^*) < \max\{f(x_1), f(x_2)\}$, and suppose to the contrary that $\operatorname{card}(\operatorname*{argmin}_{x\in S} f(x)) > 1$. Then we can choose $x_1, x_2 \in \operatorname*{argmin}_{x\in S} f(x)$, $x_1 \ne x_2$. By hypothesis, we can find $x^* \in S$ such that $f(x^*) < \max\{f(x_1), f(x_2)\}$. We infer that $f(x^*) < f(x_1) = f(x_2) = \inf f(S) \le f(x^*)$, a contradiction. $\square$
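Condition $2^\circ$ can be tested exhaustively on a finite set. In the illustrative Python sketch below (all data are example choices), $f_1(x) = x^2$ has a unique minimum point and satisfies $2^\circ$, while $f_2(x) = (x^2 - 1)^2$ has two minimum points and fails it:

```python
from itertools import combinations

# Exhaustive test of condition 2° of Proposition 3.1 on a finite set.
# S and the two objectives are arbitrary example choices.
S = [-2, -1, 0, 1, 2]

def satisfies_2(f, S):
    return all(any(f(x) < max(f(x1), f(x2)) for x in S)
               for x1, x2 in combinations(S, 2))

unique = lambda f: len([x for x in S if f(x) == min(map(f, S))]) <= 1

f1 = lambda x: x * x               # unique minimum point (x = 0)
f2 = lambda x: (x * x - 1) ** 2    # two minimum points (x = -1 and x = 1)

assert satisfies_2(f1, S) and unique(f1)
assert not satisfies_2(f2, S) and not unique(f2)
```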
4 Convex sets
For any points $x, y \in \mathbb{R}^n$ let
\[
\begin{aligned}
[x, y] &:= \{(1-t)x + ty \mid t \in [0, 1]\},\\
[x, y[ \; &:= \{(1-t)x + ty \mid t \in [0, 1[\,\},\\
]x, y] &:= \{(1-t)x + ty \mid t \in \,]0, 1]\},\\
]x, y[ \; &:= \{(1-t)x + ty \mid t \in \,]0, 1[\,\}.
\end{aligned}
\]
Note that if $x = y$, then $[x, y] = \,]x, y[\, = [x, y[\, = \,]x, y] = \{x\}$; otherwise, if $x \ne y$, then $]x, y[\, = [x, y] \setminus \{x, y\} = [x, y[\, \setminus \{x\} = \,]x, y] \setminus \{y\}$.
Definition 4.1 A subset $S$ of $\mathbb{R}^n$ is said to be convex if $[x, y] \subseteq S$ for all $x, y \in S$. In other words, $S$ is convex if and only if
\[
(1-t)S + tS \subseteq S \quad\text{for all } t \in [0, 1].
\]
Proposition 4.1 If $\mathcal{F}$ is a family of convex sets in $\mathbb{R}^n$, then the following hold:
(a) $\bigcap_{S \in \mathcal{F}} S$ is convex.
(b) If the family $\mathcal{F}$ is directed, i.e.,
\[
\forall A, B \in \mathcal{F},\ \exists C \in \mathcal{F}: A \cup B \subseteq C,
\]
then $\bigcup_{S \in \mathcal{F}} S$ is convex.
Proof: (a) Let $t \in [0, 1]$. For every $M \in \mathcal{F}$, we have $(1-t)\bigcap_{S \in \mathcal{F}} S + t\bigcap_{S \in \mathcal{F}} S \subseteq (1-t)M + tM \subseteq M$, since $M$ is convex. Hence $(1-t)\bigcap_{S \in \mathcal{F}} S + t\bigcap_{S \in \mathcal{F}} S \subseteq \bigcap_{S \in \mathcal{F}} S$, i.e., $\bigcap_{S \in \mathcal{F}} S$ is convex.
(b) Let $x, y \in \bigcup_{S \in \mathcal{F}} S$ and $t \in [0, 1]$. Then there exist $X, Y \in \mathcal{F}$ such that $x \in X$ and $y \in Y$. The family $\mathcal{F}$ being directed, we can choose $Z \in \mathcal{F}$ such that $X \cup Y \subseteq Z$. Since $Z$ is convex and $x, y \in Z$, it follows that $(1-t)x + ty \in Z \subseteq \bigcup_{S \in \mathcal{F}} S$. Thus $\bigcup_{S \in \mathcal{F}} S$ is convex.
Corollary 4.1 Let $(M_i)_{i \in \mathbb{N}^*}$ be a sequence of convex sets in $\mathbb{R}^n$. Then the following hold:
(a) $\bigcup_{i=1}^{\infty} \bigcap_{j=i}^{\infty} M_j$ is convex.
(b) If the sequence $(M_i)_{i \in \mathbb{N}^*}$ is ascending, i.e., $M_i \subseteq M_{i+1}$ for all $i \in \mathbb{N}^*$, then $\bigcup_{i=1}^{\infty} M_i$ is convex.
Proof: (a) For each $i \in \mathbb{N}^*$, consider the set $S_i := \bigcap_{k=i}^{\infty} M_k$. According to Proposition 4.1(a), $S_i$ is convex for every $i \in \mathbb{N}^*$. Moreover, the family $\mathcal{F} := \{S_i \mid i \in \mathbb{N}^*\}$ is directed, since for all $i, j \in \mathbb{N}^*$ we have $S_i \cup S_j \subseteq S_{\max\{i,j\}}$. Thus, by Proposition 4.1(b) we can conclude that $\bigcup_{i=1}^{\infty} S_i$, i.e., $\bigcup_{i=1}^{\infty} \bigcap_{j=i}^{\infty} M_j$, is convex.
(b) Since $(M_i)_{i \in \mathbb{N}^*}$ is ascending, we have $M_i = \bigcap_{j=i}^{\infty} M_j$ for every $i \in \mathbb{N}^*$. The conclusion directly follows from (a).
Definition 4.2 The convex hull of an arbitrary set $M \subseteq \mathbb{R}^n$ is defined by
\[
\operatorname{conv} M := \bigcap \,\{S \subseteq \mathbb{R}^n \mid S \text{ is convex and } M \subseteq S\}.
\]
Note that $\operatorname{conv} M$ is a convex set (as an intersection of a family of convex sets). It is clear that $M$ is convex if and only if $M = \operatorname{conv} M$.
Definition 4.3 Given an arbitrary nonempty set $M \subseteq \mathbb{R}^n$, a point $x \in \mathbb{R}^n$ is said to be a convex combination of elements of $M$, if there exist $k \in \mathbb{N}^*$, $x_1, \ldots, x_k \in M$, and
\[
(t_1, \ldots, t_k) \in \Delta_k := \{(s_1, \ldots, s_k) \in \mathbb{R}^k_+ \mid s_1 + \ldots + s_k = 1\},
\]
such that $x = t_1 x_1 + \ldots + t_k x_k$.
Theorem 4.1 (Characterization of the convex hull by means of convex combinations) The convex hull of a nonempty set $M \subseteq \mathbb{R}^n$ admits the following representation:
\[
\operatorname{conv} M = \Big\{\sum_{i=1}^{k} t_i x_i \;\Big|\; k \in \mathbb{N}^*,\ x_1, \ldots, x_k \in M,\ (t_1, \ldots, t_k) \in \Delta_k\Big\}.
\]
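By Theorem 4.1, drawing a weight vector from $\Delta_k$ and forming $\sum_i t_i x_i$ always produces a point of $\operatorname{conv} M$. A minimal illustrative Python sketch (the set $M$ is an example choice; normalizing random weights is one simple way to land in the simplex):

```python
import random

# Sampling points of conv M via Theorem 4.1. M is an example set in R^2.
M = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]

def convex_combination(points):
    """Return sum_i t_i x_i for a randomly drawn (t_1,...,t_k) in the simplex."""
    w = [random.random() for _ in points]
    s = sum(w)
    t = [wi / s for wi in w]   # t_i >= 0 and sum t_i = 1
    return tuple(sum(ti * p[j] for ti, p in zip(t, points)) for j in range(2))

x = convex_combination(M)
# For this triangle, membership in conv M can be verified directly:
assert x[0] >= 0 and x[1] >= 0 and x[0] / 4.0 + x[1] / 3.0 <= 1.0 + 1e-12
```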
Theorem 4.2 (Carathéodory) If $S$ is a nonempty subset of $\mathbb{R}^n$, then every point $x \in \operatorname{conv} S$ can be expressed as a convex combination of at most $n + 1$ points of $S$.
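The standard proof of Carathéodory's theorem is constructive: while $k > n + 1$, an affine dependence among the points lets one shift the weights until some weight hits zero. Below is a hedged NumPy sketch of that reduction (names such as `caratheodory_reduce` are mine, not from the text; the null vector is obtained from an SVD):

```python
import numpy as np

def caratheodory_reduce(pts, t, tol=1e-12):
    """Reduce the convex combination sum_i t_i pts_i in R^n to one that
    uses at most n + 1 of the points (classical Caratheodory argument)."""
    pts, t = np.asarray(pts, float), np.asarray(t, float)
    n = pts.shape[1]
    while len(pts) > n + 1:
        # Affine dependence: mu != 0 with sum_i mu_i pts_i = 0, sum_i mu_i = 0.
        A = (pts[:-1] - pts[-1]).T          # n x (k-1) matrix with k-1 > n
        lam = np.linalg.svd(A)[2][-1]       # exact null vector of A
        mu = np.append(lam, -lam.sum())
        pos = mu > tol
        alpha = np.min(t[pos] / mu[pos])    # step until one weight hits zero
        t = t - alpha * mu
        keep = t > tol
        pts, t = pts[keep], t[keep] / t[keep].sum()
    return pts, t

# Example: the centre of a square, given as a combination of all 4 vertices,
# rewritten as a combination of at most n + 1 = 3 of them.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
pts, t = caratheodory_reduce(square, [0.25, 0.25, 0.25, 0.25])
assert len(pts) <= 3 and np.allclose(t @ pts, [0.5, 0.5])
```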
5 Convex cones
Definition 5.1 A subset $C \subseteq \mathbb{R}^n$ is called a cone if $C$ is nonempty and $\mathbb{R}_+ C \subseteq C$. If furthermore $C$ is a (closed) convex set, then it is a (closed) convex cone. The cone $C$ is called pointed if $C \cap (-C) = \{0_n\}$.
Theorem 5.1 (Characterization of convex cones) For any cone $C$ in $\mathbb{R}^n$ the following assertions are equivalent:
$1^\circ$ $C$ is convex.
$2^\circ$ $C + C \subseteq C$.
Proof: $1^\circ \Rightarrow 2^\circ$. If $C$ is convex, then $\frac{1}{2}C + \frac{1}{2}C \subseteq C$. Since $C$ is a cone, we get that
\[
C + C = 2\left(\tfrac{1}{2}C + \tfrac{1}{2}C\right) \subseteq 2C \subseteq \mathbb{R}_+ C \subseteq C.
\]
$2^\circ \Rightarrow 1^\circ$. Let $x, y \in C$ and $t \in [0, 1]$. Since the set $C$ is a cone, it follows that $(1-t)x, ty \in \mathbb{R}_+ C \subseteq C$, hence $(1-t)x + ty \in C + C$. Applying $2^\circ$ we obtain that $(1-t)x + ty \in C$. Thus $C$ is convex.
Proposition 5.1 Consider the set of all vectors in $\mathbb{R}^n$ whose first nonzero coordinate (if any) is positive, i.e.,
\[
C_{\mathrm{lex}} := \{0_n\} \cup \{x = (x_1, \ldots, x_n) \in \mathbb{R}^n \mid \exists i \in \{1, \ldots, n\}: x_i > 0,\ \forall j \in \{1, \ldots, n\},\ j < i: x_j = 0\}.
\]
The set $C_{\mathrm{lex}}$ is a pointed convex cone (the so-called lexicographic cone).
Proof: Consider the function $\varphi\colon \mathbb{R}^n \setminus \{0_n\} \to \{1, \ldots, n\}$, defined for all $x = (x_1, \ldots, x_n) \in \mathbb{R}^n \setminus \{0_n\}$ by
\[
\varphi(x) := \min\{i \in \{1, \ldots, n\} \mid x_i \ne 0\}.
\]
Observe that
\[
C_{\mathrm{lex}} = \{0_n\} \cup \{x = (x_1, \ldots, x_n) \in \mathbb{R}^n \setminus \{0_n\} \mid x_{\varphi(x)} > 0\}.
\]
It is easily seen that, for every $x \in C_{\mathrm{lex}}$ and every $t \ge 0$, we have $tx \in C_{\mathrm{lex}}$. Thus $C_{\mathrm{lex}}$ is a cone.
Suppose to the contrary that the cone $C_{\mathrm{lex}}$ is not pointed. Then we can choose a point $x = (x_1, \ldots, x_n) \in C_{\mathrm{lex}} \cap (-C_{\mathrm{lex}}) \setminus \{0_n\}$. It follows that
\[
x, -x \in \{v = (v_1, \ldots, v_n) \in \mathbb{R}^n \setminus \{0_n\} \mid v_{\varphi(v)} > 0\},
\]
hence $x_{\varphi(x)} > 0$ and $(-x)_{\varphi(-x)} > 0$. Since $\varphi(-x) = \varphi(x)$, we infer that $0 < (-x)_{\varphi(x)} = -(x_{\varphi(x)}) < 0$, a contradiction. Thus $C_{\mathrm{lex}}$ is a pointed cone.
In order to prove the convexity of $C_{\mathrm{lex}}$, by Theorem 5.1 it suffices to show that $C_{\mathrm{lex}} + C_{\mathrm{lex}} \subseteq C_{\mathrm{lex}}$. To this end, consider two arbitrary points $x = (x_1, \ldots, x_n), y = (y_1, \ldots, y_n) \in C_{\mathrm{lex}}$. If $x = 0_n$ or $y = 0_n$, then we have $x + y \in \{x, y\} \subseteq C_{\mathrm{lex}}$. Otherwise, if $x \ne 0_n \ne y$, then we have $x_{\varphi(x)} > 0$ and $y_{\varphi(y)} > 0$, hence $x + y \ne 0_n$ and $\varphi(x + y) = \min\{\varphi(x), \varphi(y)\}$. Without loss of generality we can assume that $\varphi(x) \le \varphi(y)$. Then we have $x_{\varphi(x)} > 0$ and $y_{\varphi(x)} \ge 0$, hence $(x + y)_{\varphi(x+y)} = x_{\varphi(x)} + y_{\varphi(x)} > 0$. In both cases we infer $x + y \in C_{\mathrm{lex}}$. Thus we have $C_{\mathrm{lex}} + C_{\mathrm{lex}} \subseteq C_{\mathrm{lex}}$. $\square$
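A membership test for $C_{\mathrm{lex}}$ is a one-liner, and the closure property $C_{\mathrm{lex}} + C_{\mathrm{lex}} \subseteq C_{\mathrm{lex}}$ from Theorem 5.1 can be spot-checked on random vectors. An illustrative Python sketch (the sampling scheme is an example choice, not from the text):

```python
import random

def in_C_lex(x):
    """True iff x = 0_n or the first nonzero coordinate of x is positive."""
    for xi in x:
        if xi != 0.0:
            return xi > 0.0
    return True  # x = 0_n

def random_lex_vector(n):
    """Draw an example element of C_lex: zeros, a positive entry, then noise."""
    i = random.randrange(n)
    x = [0.0] * i + [random.uniform(0.1, 1.0)]
    x += [random.uniform(-1.0, 1.0) for _ in range(n - i - 1)]
    return x

random.seed(0)
for _ in range(1000):
    x, y = random_lex_vector(4), random_lex_vector(4)
    s = [a + b for a, b in zip(x, y)]
    assert in_C_lex(x) and in_C_lex(y) and in_C_lex(s)  # C_lex + C_lex in C_lex
```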