L2
23-2-2019
Daniel Kirschen
Introduction to Optimization
Where can it be used?
• Minimize the cost of building a transformer
• Maximize the profit from a bidding strategy
• Achieve the best possible design
• Unit Commitment
Decision Variables
Objective function
[Figure: an objective function f(x) with its value f(x*) at the optimal decision variable x*.]
Necessary Condition for Optimality (one dimension)
If x = x* maximises f(x), then:

f(x) < f(x*) for x < x*  ⇒  df/dx > 0 for x < x*
f(x) < f(x*) for x > x*  ⇒  df/dx < 0 for x > x*

Therefore, if x = x* maximises f(x), then df/dx = 0 for x = x*.

[Figure: f(x) with a maximum at x*; df/dx > 0 to the left of x* and df/dx < 0 to the right of x*.]
Example
For what values of x is df/dx = 0?
In other words, for what values of x is the necessary condition for optimality satisfied?

• A, B, C, D are stationary points
• A and D are maxima
• B is a minimum
• C is an inflexion point

For x = A and x = D, we have d²f/dx² < 0: the objective function is concave around a maximum.
For x = B, we have d²f/dx² > 0: the objective function is convex around a minimum.
For x = C, we have d²f/dx² = 0: the objective function is flat around an inflexion point.

[Figure: f(x) with stationary points A, B, C and D along the x axis.]
Necessary and Sufficient Conditions of Optimality
• Necessary condition: df/dx = 0
• Sufficient condition:
  – For a maximum: d²f/dx² < 0
  – For a minimum: d²f/dx² > 0
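As a concrete illustration of these conditions (the function below is a hypothetical example, not taken from the slides), they can be checked with the Symbolic Math Toolbox in Matlab:

% Hypothetical example: f(x) = x^3 - 3x
syms x
f = x^3 - 3*x;
xs  = solve(diff(f, x) == 0, x)      % necessary condition df/dx = 0: x = -1 and x = 1
d2f = subs(diff(f, x, 2), x, xs)     % d2f/dx2 = -6 at x = -1 (a maximum), +6 at x = 1 (a minimum)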
Feasible Set
The values that the decision variables can take are usually limited
Examples:
• Physical dimensions of a transformer must be positive
• Active power output of a generator may be limited to a certain range (e.g. 200 MW to 500 MW)
• Reactive power output of a generator may be limited to a certain range (e.g. -100 MVAr to 150 MVAr)
Multi-Dimensional Case

For a function of two variables, a minimum (or maximum) at (x1*, x2*) requires:
∂f(x1, x2)/∂x1 = 0 and ∂f(x1, x2)/∂x2 = 0 at (x1*, x2*)

[Figures: a surface f(x1, x2) with a minimum, and a surface f(x1, x2) with a saddle point.]

More generally, at a maximum or minimum value of f(x1, x2, x3, .., xn) we must have:
∂f/∂x1 = 0
∂f/∂x2 = 0
...
∂f/∂xn = 0

A point where these conditions are satisfied is called a stationary point.
Contours
• A contour is the locus of all the points that give the same value to the objective function

[Figure: a surface f(x1, x2) and its contours F1 and F2 projected onto the (x1, x2) plane; the innermost contour surrounds the minimum or maximum.]
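Contours are easy to draw in Matlab. A minimal sketch for the objective function of Example A later in the lecture, f(x1, x2) = 0.25 x1² + x2² (the grid and contour levels are chosen only for illustration):

% Contours of f(x1,x2) = 0.25*x1^2 + x2^2
[x1, x2] = meshgrid(-5:0.1:5, -5:0.1:5);
f = 0.25*x1.^2 + x2.^2;
contour(x1, x2, f, [1 2 4 6])        % contour lines f = 1, 2, 4, 6
xlabel('x_1'); ylabel('x_2');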
Sufficient Conditions for Optimality
Calculate the Hessian matrix at the stationary point:

H = [ ∂²f/∂x1²     ∂²f/∂x1∂x2   ...  ∂²f/∂x1∂xn
      ∂²f/∂x2∂x1   ∂²f/∂x2²     ...  ∂²f/∂x2∂xn
      ...
      ∂²f/∂xn∂x1   ∂²f/∂xn∂x2   ...  ∂²f/∂xn² ]

Then calculate the eigenvalues of the Hessian matrix at the stationary point:
• If all the eigenvalues are greater than or equal to zero:
  – The matrix is positive semi-definite
  – The stationary point is a minimum
• If all the eigenvalues are less than or equal to zero:
  – The matrix is negative semi-definite
  – The stationary point is a maximum
• If some of the eigenvalues are positive and others are negative:
  – The stationary point is a saddle point
[First example (figure): Hessian H = [2 2; 2 8] at the stationary point, with eigenvalues e = 5 - 13^(1/2) and e = 13^(1/2) + 5. Both eigenvalues are positive, so the stationary point is a minimum, where C = 0.]
Second example: Minimize C = -x1² + 3x2² + 2x1x2

Necessary conditions for optimality:
∂C/∂x1 = -2x1 + 2x2 = 0
∂C/∂x2 = 2x1 + 6x2 = 0
⇒ x1 = 0, x2 = 0 is a stationary point

Sufficient conditions for optimality:
H = [ ∂²C/∂x1²     ∂²C/∂x1∂x2 ]  =  [ -2  2 ]
    [ ∂²C/∂x2∂x1   ∂²C/∂x2²   ]     [  2  6 ]

Checking with Matlab:
syms x1 x2
C = -x1^2 + 3*x2^2 + 2*x1*x2;
H = hessian(C, [x1, x2]);
e = eig(H);

H =
[ -2, 2]
[  2, 6]
e =
2 - 2*5^(1/2)
2*5^(1/2) + 2

One eigenvalue (2 - 2*5^(1/2)) is negative and the other (2*5^(1/2) + 2) is positive, so this stationary point is a saddle point.

[Figure: contours of C (C = 0, ±1, ±4, ±9) in the (x1, x2) plane, showing the saddle point at the origin.]
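A self-contained version of this Matlab check; finding the stationary point with solve is an addition to what is shown on the slide:

% Classify the stationary point of C = -x1^2 + 3*x2^2 + 2*x1*x2
syms x1 x2
C = -x1^2 + 3*x2^2 + 2*x1*x2;
g = gradient(C, [x1, x2]);           % necessary conditions: g = 0
sol = solve(g == 0, [x1, x2]);       % stationary point: x1 = 0, x2 = 0
H = hessian(C, [x1, x2]);            % Hessian matrix [-2 2; 2 6]
e = double(eig(H))                   % approx. -2.47 and 6.47: mixed signs, so a saddle point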
Optimization with Constraints
Optimization with Equality Constraints
• There are usually restrictions on the values that the decision variables can take
Minimise f(x1, x2, .., xn)          (objective function)

subject to:
ω1(x1, x2, .., xn) = 0
...                                  (equality constraints)
ωm(x1, x2, .., xn) = 0

Number of constraints:
• N decision variables
• M equality constraints
• If M > N, the problem is over-constrained
  – There is usually no solution
• If M = N, the problem is fully determined
  – There may be a solution
• If M < N, the problem is under-constrained
  – There is usually room for optimization
Example A

Minimise f(x1, x2) = 0.25 x1² + x2²
subject to: ω(x1, x2) ≡ 5 - x1 - x2 = 0

[Figure: contours of f(x1, x2) = 0.25 x1² + x2² and the constraint line ω(x1, x2) ≡ 5 - x1 - x2 = 0 in the (x1, x2) plane; the constrained minimum lies where a contour touches the line.]

Example B: solution by substitution

Minimise C = a1 + a2 + b1 x1² + b2 x2²
subject to: x1 + x2 = L

⇒ x2 = L - x1
⇒ C = a1 + a2 + b1 x1² + b2 (L - x1)²

dC/dx1 = 2 b1 x1 - 2 b2 (L - x1) = 0
⇒ x1 = b2 L / (b1 + b2),  x2 = b1 L / (b1 + b2)

d²C/dx1² = 2 b1 + 2 b2 > 0 ⇒ minimum

Drawbacks of substitution:
• Difficult
• Usually impossible when the constraints are non-linear
• Provides little or no insight into the solution
⇒ Solution using Lagrange multipliers
Gradient
Consider a function f(x1, x2, .., xn).
The gradient of f is the vector ∇f = ( ∂f/∂x1, ∂f/∂x2, ..., ∂f/∂xn )

Properties of the gradient:
• Each component of the gradient vector indicates the rate of change of the function in that direction
• The gradient indicates the direction in which a function of several variables increases most rapidly
• The magnitude and direction of the gradient usually depend on the point considered
• At each point, the gradient is perpendicular to the contour of the function

Example A: f(x, y) = ax² + by²  ⇒  ∇f = ( 2ax, 2by )
Example B: f(x, y) = ax + by    ⇒  ∇f = ( a, b )

[Figure: contours f = f1, f2, f3 with the gradient vector ∇f drawn at points A, B, C and D, perpendicular to the contours.]
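A quick symbolic check of Example A with the Symbolic Math Toolbox (this snippet is not on the original slide):

% Gradient of f(x,y) = a*x^2 + b*y^2
syms a b x y
f = a*x^2 + b*y^2;
g = gradient(f, [x, y])              % returns [2*a*x; 2*b*y]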
Lagrange multipliers

Minimise f(x1, x2) = 0.25 x1² + x2², subject to ω(x1, x2) ≡ 5 - x1 - x2 = 0

∇f = ( ∂f/∂x1, ∂f/∂x2 )        ∇ω = ( ∂ω/∂x1, ∂ω/∂x2 )

[Figure: the contours f(x1, x2) = 0.25 x1² + x2² = 5 and f(x1, x2) = 0.25 x1² + x2² = 6, together with the constraint line ω(x1, x2) = 5 - x1 - x2 = 0. The vectors ∇f and ∇ω are drawn at several points; at the constrained optimum the contour is tangent to the constraint, so ∇f and ∇ω are parallel.]
Lagrangian function
To simplify the writing of the conditions for optimality,
it is useful to define the Lagrangian function:
ℒ(x1, x2, λ) = f(x1, x2) + λ ω(x1, x2)

The necessary conditions for optimality are then given by the partial derivatives of the Lagrangian:

∂ℒ(x1, x2, λ)/∂x1 = ∂f/∂x1 + λ ∂ω/∂x1 = 0
∂ℒ(x1, x2, λ)/∂x2 = ∂f/∂x2 + λ ∂ω/∂x2 = 0
∂ℒ(x1, x2, λ)/∂λ = ω(x1, x2) = 0
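Applying these conditions to Example A (minimise f = 0.25 x1² + x2² subject to ω ≡ 5 - x1 - x2 = 0); the Matlab code itself is an illustration, not part of the original slide:

% Example A solved through its Lagrangian
syms x1 x2 lambda
f = 0.25*x1^2 + x2^2;
w = 5 - x1 - x2;
Lag = f + lambda*w;                                % Lagrangian function
sol = solve(gradient(Lag, [x1, x2, lambda]) == 0, [x1, x2, lambda]);
[sol.x1, sol.x2, sol.lambda]                       % x1 = 4, x2 = 1, lambda = 2

The optimum x1 = 4, x2 = 1 gives f = 5, which matches the contour f(x1, x2) = 5 that touches the constraint in the figure on the Lagrange multipliers slide.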
Generalization
Minimise f(x1, x2, .., xn)

subject to:
ω1(x1, x2, .., xn) = 0
...
ωm(x1, x2, .., xn) = 0

Lagrangian:
ℒ(x1, .., xn, λ1, .., λm) = f(x1, .., xn) + λ1 ω1(x1, .., xn) + ... + λm ωm(x1, .., xn)

With inequality constraints, the problem becomes:

Minimise f(x1, x2, .., xn)          (objective function)
subject to:
ω1(x1, x2, .., xn) = 0
...                                  (equality constraints)
ωm(x1, x2, .., xn) = 0
and:
g1(x1, x2, .., xn) ≤ 0
...                                  (inequality constraints)
gp(x1, x2, .., xn) ≤ 0
https://www.mathworks.com/help/optim/ug/fmincon.html
Optimization Toolbox
fmincon
• Nonlinear programming solver
• b and beq are vectors, A and Aeq are matrices, c(x) and ceq(x) are functions that return vectors, and f(x) is a function that returns a scalar. f(x), c(x), and ceq(x) can be nonlinear functions.
• x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub) defines a set of lower and upper bounds on the design variables in x, so that the solution is always in the range lb ≤ x ≤ ub. If no equalities exist, set Aeq = [] and beq = []. If x(i) is unbounded below, set lb(i) = -Inf, and if x(i) is unbounded above, set ub(i) = Inf.
[Worked fmincon examples: Example A and Example B.]
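As a minimal sketch of how Example A (minimise 0.25 x1² + x2² subject to x1 + x2 = 5) could be passed to fmincon; this code is an illustration, not the original slide content:

% Example A with fmincon
fun = @(x) 0.25*x(1)^2 + x(2)^2;         % objective function
x0  = [0 0];                             % starting point
Aeq = [1 1];  beq = 5;                   % equality constraint x1 + x2 = 5
x = fmincon(fun, x0, [], [], Aeq, beq)   % returns approximately x = [4 1]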
Applications
Economic dispatch problem

[Figure: three generating units A, B and C supplying a load L.]
Characteristics of the generating units
[Figure: a thermal unit (boiler B, turbine T, generator G); fuel input (J/h) versus electric power output (MW), between Pmin and Pmax.]

• Thermal generating units
• Consider the running costs only
• Input / Output curve
  – Fuel input vs. electric power output
• Fuel consumption measured by its energy content
• Upper and lower limits on the output of the generating unit
Cost Curve
• Multiply fuel input by fuel cost
• No-load cost
  – Cost of keeping the unit running if it could produce zero MW
[Figure: cost curve, cost ($/h) vs. power (MW)]

Incremental Cost Curve
• Incremental cost curve: ΔFuelCost / ΔPower vs. power
• Derivative of the cost curve
• In $/MWh
• Cost of the next MWh
[Figure: incremental cost curve ($/MWh) vs. power (MW)]
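For instance, with a hypothetical quadratic cost curve C(P) = 500 + 8 P + 0.02 P² $/h (coefficients chosen only for illustration), the incremental cost is dC/dP = 8 + 0.04 P $/MWh; at P = 300 MW the next MWh costs 8 + 0.04 × 300 = 20 $/MWh.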
Mathematical formulation
• Objective function
  C = CA(PA) + CB(PB) + CC(PC)

• Constraints
  – Load / generation balance:
    L = PA + PB + PC
  – Unit constraints:
    PAmin ≤ PA ≤ PAmax
    PBmin ≤ PB ≤ PBmax
    PCmin ≤ PC ≤ PCmax

[Figure: units A, B and C supplying the load L.]

This is an optimization problem.
Application to Economic Dispatch
[Figure: two generating units G1 and G2, with outputs x1 and x2, supplying a load L.]

Minimise f(x1, x2) = C1(x1) + C2(x2)
subject to: ω(x1, x2) ≡ L - x1 - x2 = 0

Lagrangian: ℒ(x1, x2, λ) = C1(x1) + C2(x2) + λ (L - x1 - x2)

Necessary conditions for optimality:
∂ℒ/∂x1 = dC1/dx1 - λ = 0
∂ℒ/∂x2 = dC2/dx2 - λ = 0
∂ℒ/∂λ = L - x1 - x2 = 0

⇒ dC1/dx1 = dC2/dx2 = λ : the equal incremental cost solution
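A small numerical sketch of the equal incremental cost solution; the quadratic cost curves C1(x1) = 20 x1 + 0.01 x1², C2(x2) = 15 x2 + 0.02 x2² and the 500 MW load are assumptions chosen only for illustration:

% Equal incremental cost dispatch of two units supplying L = 500 MW
syms x1 x2 lambda
eqs = [0.02*x1 + 20 == lambda;       % dC1/dx1 = lambda (assumed cost curve C1)
       0.04*x2 + 15 == lambda;       % dC2/dx2 = lambda (assumed cost curve C2)
       x1 + x2 == 500];              % load / generation balance
sol = solve(eqs, [x1, x2, lambda]);
double([sol.x1, sol.x2, sol.lambda]) % x1 = 250 MW, x2 = 250 MW, lambda = 25 $/MWh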
Equal incremental cost solution
[Figure: cost curves C1(x1) and C2(x2) of the two units, and the corresponding incremental cost curves dC1/dx1 and dC2/dx2, plotted against x1 and x2.]
Interpretation of this solution
[Figure: incremental cost curves dC1/dx1 and dC2/dx2 with a horizontal line at λ; x1* and x2* are the outputs at which each unit's incremental cost equals λ, and L - x1* - x2* is the remaining mismatch.]

• If L - x1* - x2* < 0, reduce λ
• If L - x1* - x2* > 0, increase λ
Physical interpretation

The incremental cost is the derivative of the cost:
dC/dx = lim (Δx→0) ΔC/Δx

For Δx sufficiently small:
ΔC ≈ (dC/dx) Δx
If Δx = 1 MW:
ΔC ≈ dC/dx

The incremental cost is the cost of one additional MW for one hour. This cost depends on the output of the generator.

dC1/dx1 : cost of one more MW from unit 1
dC2/dx2 : cost of one more MW from unit 2

Suppose that dC1/dx1 > dC2/dx2:
• Decrease the output of unit 1 by 1 MW: decrease in cost = dC1/dx1
• Increase the output of unit 2 by 1 MW: increase in cost = dC2/dx2
• Net change in cost = dC2/dx2 - dC1/dx1 < 0

It pays to increase the output of unit 2 and decrease the output of unit 1 until we have:
dC1/dx1 = dC2/dx2 = λ

The Lagrange multiplier λ is thus the cost of one more MW at the optimal solution.
This is a very important result with many applications in economics.
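As a numerical illustration, using the hypothetical cost curves from the dispatch sketch a few slides earlier: at x1 = 300 MW and x2 = 200 MW, dC1/dx1 = 0.02 × 300 + 20 = 26 $/MWh and dC2/dx2 = 0.04 × 200 + 15 = 23 $/MWh, so shifting 1 MW from unit 1 to unit 2 changes the cost by 23 - 26 = -3 $/h; the shift keeps paying until both incremental costs reach λ = 25 $/MWh at x1 = x2 = 250 MW.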
HW#1
1. Three generating units have the following cost functions:
How should these units be dispatched if the generating units must supply a load of 350 MW at minimum cost?
Solve this problem with the Lagrangian function and with the Optimization Toolbox function fmincon.

2. Suppose a consumer has the utility function C(x1, x2) = A x1^α x2^(1-α) and faces the budget constraint p1 x1 + p2 x2 = M.
1) Find the values of the demand functions x1 and x2 that minimize the utility function C.

3. Solve this problem with the Optimization Toolbox function fmincon.
Thank you