2018-solution
Suggested Solution
Problem 1
Assume that you are newly hired as an automation engineer at a firm. You have been assigned the task
of designing MPC for controlling a process whose nonlinear model is given by,
$$\frac{d\boldsymbol{x}}{dt} = f(\boldsymbol{x}, \boldsymbol{u}, t) \qquad (1)$$
Here $\boldsymbol{x}$ and $\boldsymbol{u}$ represent the process states and the control inputs to the process, respectively. As a first try
you designed a linear MPC based on a linearized model of the plant given by,
$$x_{k+1} = A x_k + B u_k \qquad (2)$$
$$y_k = C x_k \qquad (3)$$
When you applied the linear MPC taking the nonlinear model as the target system, you got the following
simulation results as shown in Figure 1.
Figure 1: Output of the linear MPC when applied to the nonlinear plant model
You and your manager are not satisfied with the simulation results and want to improve the controller’s
performance. To do so, you plan to introduce integral action to your linear MPC. For this, you chose to
introduce output integrators in a systematic way into your MPC formulation by using the output
integrator equation given by equation 4.
$$x_{i,k+1} = x_{i,k} + (r_k - y_k) \qquad (4)$$
Here, 𝑟𝑘 is the reference/setpoint to the controlled variables and 𝑦𝑘 is the measurement (or the
controlled variable). Assume that 𝒓𝒌 is a known signal.
Tasks:
i) [7%] How would you augment the linear model of the process with the output integrators?
Write the augmented model for both the state equation and the measurement equation.
Suggested Solution:
The linear process model is given by,
$$x_{k+1} = A x_k + B u_k \qquad \text{(p1)}$$
$$y_k = C x_k \qquad \text{(p2)}$$
The discrete time equation for the integrators on the outputs is given by,
$$x_{i,k+1} = x_{i,k} + (r_k - y_k) = x_{i,k} - C x_k + r_k \qquad \text{(p3)}$$
Stacking the process states and the integrator states gives the augmented state equation,
$$\underbrace{\begin{bmatrix} x_{k+1} \\ x_{i,k+1} \end{bmatrix}}_{\tilde{x}_{k+1}} = \underbrace{\begin{bmatrix} A & 0 \\ -C & I \end{bmatrix}}_{\tilde{A}} \underbrace{\begin{bmatrix} x_k \\ x_{i,k} \end{bmatrix}}_{\tilde{x}_k} + \underbrace{\begin{bmatrix} B \\ 0 \end{bmatrix}}_{\tilde{B}} u_k + \underbrace{\begin{bmatrix} 0 \\ I \end{bmatrix}}_{B_r} r_k \qquad \text{(p4)}$$
Similarly, the measurement equation has to be extended by taking into account that $x_{i,k}$ is a
measured quantity (in equation p4, $r_k$ is known and $C x_k$ is measured). This gives us,
$$\tilde{y}_k = \underbrace{\begin{bmatrix} C & 0 \\ 0 & I \end{bmatrix}}_{\tilde{C}} \underbrace{\begin{bmatrix} x_k \\ x_{i,k} \end{bmatrix}}_{\tilde{x}_k} \qquad \text{(p5)}$$
The augmented model is written as,
$$\tilde{x}_{k+1} = \tilde{A}\tilde{x}_k + \tilde{B} u_k + B_r r_k \qquad \text{(p6)}$$
$$\tilde{y}_k = \tilde{C}\tilde{x}_k \qquad \text{(p7)}$$
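As a sanity check, the augmented matrices can be assembled numerically, for example with NumPy. This is only a minimal sketch; the matrices A, B and C below are placeholders for the actual linearized plant:

```python
import numpy as np

# Placeholder linearized plant (2 states, 1 input, 1 output); replace with the actual model.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [0.5]])
C = np.array([[1.0, 0.0]])

nx, nu = B.shape          # number of process states and control inputs
ny = C.shape[0]           # number of outputs (= number of output integrators)

# Augmented model x~_{k+1} = A~ x~_k + B~ u_k + B_r r_k,  y~_k = C~ x~_k  (equations p4-p7)
A_tilde = np.block([[A,  np.zeros((nx, ny))],
                    [-C, np.eye(ny)        ]])
B_tilde = np.vstack([B, np.zeros((ny, nu))])
B_r     = np.vstack([np.zeros((nx, ny)), np.eye(ny)])
C_tilde = np.block([[C,                  np.zeros((ny, ny))],
                    [np.zeros((ny, nx)), np.eye(ny)        ]])
```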
ii) [7%] Write down the control problem formulation (the objective function and the constraints)
for designing a linear MPC. Your formulation should be based on the augmented model.
Suggested Solution:
Let us consider the following LQ optimal control problem with a prediction horizon of $N$
samples. The control horizon is also taken to be of $N$ samples.
Quadratic objective function:
$$\min J = \frac{1}{2}\sum_{k=1}^{N} \tilde{e}_k^T Q_k \tilde{e}_k + u_{k-1}^T P_{k-1} u_{k-1} \qquad \text{(p8)}$$
Linear constraints (given by the augmented process model) and the augmented error term:
$$\tilde{x}_{k+1} = \tilde{A}\tilde{x}_k + \tilde{B} u_k + B_r r_k \qquad \text{(p9)}$$
$$\tilde{y}_k = \tilde{C}\tilde{x}_k \qquad \text{(p10)}$$
$$\tilde{e}_k = \tilde{r}_k - \tilde{y}_k \qquad \text{(p11)}$$
It is very important to include the term 𝑒̃𝑘 into the objective function so that the output
integration error is systematically included into the optimization problem. Here, 𝑟̃𝑘 is a vector
that contains the augmented setpoints of the measurements 𝑦𝑘 and the output integrators
𝑥𝑖,𝑘 , i.e.
$$\tilde{r}_k = \begin{bmatrix} y_k^{SP} \\ x_{i,k}^{SP} \end{bmatrix} \qquad \text{(p12)}$$
The setpoints for 𝑥𝑖,𝑘 can be taken to be zeros since at the steady state (𝑟𝑘 − 𝑦𝑘 ) = 0.
Let us suppose that 𝑛𝑥 = no. of states of the augmented model, 𝑛𝑢 = no. of control inputs, 𝑛𝑦
= no. of outputs of the augmented model
$Q_k$ = weighting matrix for $\tilde{e}_k$ and is positive definite. It is a diagonal matrix with $n_y$ weighting
elements on the diagonal, one for each output.
$P_k$ = weighting matrix for $u_k$ and is positive semidefinite. It is a diagonal matrix with $n_u$
weighting elements on the diagonal, one for each input.
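For concreteness, the stacked reference and the weighting matrices could be formed as below. This is a minimal sketch; the setpoint values and weights are illustrative only:

```python
import numpy as np

ny_meas = 1                     # number of measured/controlled outputs y_k (assumed)
y_sp    = np.array([1.0])       # illustrative setpoint for y_k
xi_sp   = np.zeros(ny_meas)     # setpoints for the output integrator states are zero

# Augmented reference r~_k (equation p12)
r_tilde = np.concatenate([y_sp, xi_sp])

# Diagonal weighting matrices (kept constant over the horizon)
Q = np.diag([10.0, 1.0])        # one weight per augmented output (y_k and x_{i,k})
P = np.diag([0.1])              # one weight per control input
```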
iii) In order to solve your MPC problem, you should re-formulate the control problem of task (ii)
into a standard QP problem given by equation 5 in an efficient way (say using Kronecker
products).
$$\min_{z} \; \frac{1}{2} z^T H z + c^T z,$$
$$\text{s.t.} \quad A_e z = b_e, \qquad (5)$$
$$z_L \le z \le z_H.$$
The vector of unknowns, 𝑧, should include the following elements from the augmented model:
control inputs, states, error and measurement.
$$z = \begin{bmatrix} u \\ \tilde{x} \\ \tilde{e} \\ \tilde{y} \end{bmatrix}, \quad \text{i.e.} \quad z^T = (u^T, \tilde{x}^T, \tilde{e}^T, \tilde{y}^T)$$
Based on our choice, the total number of unknowns ($n_z$) in vector $z$ for the whole prediction
horizon length ($N$) is,
$$n_z = N(n_u + n_x + n_y + n_y)$$
If we expand the objective function 𝐽 of equation (p8) for 𝑘 = 1 to 𝑘 = 𝑁 we will get,
$$J = \frac{1}{2}\left[\tilde{e}_1^T Q_1 \tilde{e}_1 + \tilde{e}_2^T Q_2 \tilde{e}_2 + \cdots + \tilde{e}_N^T Q_N \tilde{e}_N + u_0^T P_0 u_0 + u_1^T P_1 u_1 + \cdots + u_{N-1}^T P_{N-1} u_{N-1}\right] \qquad \text{(p13)}$$
Now with our choice of vector $z$, the standard quadratic objective function of equation 5 can be
written as,
$$J = \frac{1}{2} z^T H z + c^T z$$
$$J = \frac{1}{2}\underbrace{\begin{bmatrix} u \\ \tilde{x} \\ \tilde{e} \\ \tilde{y} \end{bmatrix}^T}_{z^T} \underbrace{\begin{bmatrix} H_{11} & 0 & 0 & 0 \\ 0 & H_{22} & 0 & 0 \\ 0 & 0 & H_{33} & 0 \\ 0 & 0 & 0 & H_{44} \end{bmatrix}}_{H} \underbrace{\begin{bmatrix} u \\ \tilde{x} \\ \tilde{e} \\ \tilde{y} \end{bmatrix}}_{z} + \underbrace{\begin{bmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \end{bmatrix}^T}_{c^T} \underbrace{\begin{bmatrix} u \\ \tilde{x} \\ \tilde{e} \\ \tilde{y} \end{bmatrix}}_{z} \qquad \text{(p14)}$$
Expanding the products,
$$J = \frac{1}{2}\left[u^T H_{11} u + \tilde{x}^T H_{22} \tilde{x} + \tilde{e}^T H_{33} \tilde{e} + \tilde{y}^T H_{44} \tilde{y}\right] + c_1^T u + c_2^T \tilde{x} + c_3^T \tilde{e} + c_4^T \tilde{y} \qquad \text{(p15)}$$
Comparing (p13) with (p15), for the control input terms we have,
$$u^T H_{11} u = \begin{bmatrix} u_0 \\ u_1 \\ \vdots \\ u_{N-1} \end{bmatrix}^T \begin{bmatrix} P_0 & 0 & \cdots & 0 \\ 0 & P_1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & P_{N-1} \end{bmatrix} \begin{bmatrix} u_0 \\ u_1 \\ \vdots \\ u_{N-1} \end{bmatrix} \qquad \text{(p16)}$$
Comparing both sides of equation (p16) we finally can write,
$$H_{11} = \begin{bmatrix} P_0 & 0 & \cdots & 0 \\ 0 & P_1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & P_{N-1} \end{bmatrix}$$
If $P_0 = P_1 = \cdots = P_{N-1} = P$ with $P \in \mathbb{R}^{n_u \times n_u}$, then we have
$$H_{11} = \begin{bmatrix} P & 0 & \cdots & 0 \\ 0 & P & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & P \end{bmatrix} = I_N \otimes P$$
Note:
$I_N$ = identity matrix of size $N$
$\otimes$ = Kronecker product
Again, comparing (p13) with (p15) we see that there is no $\tilde{x}$ term in equation (p13). So we have,
$$H_{22} = \begin{bmatrix} 0 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix} = 0_{N\cdot n_x \times N\cdot n_x} = I_N \otimes 0_{n_x \times n_x}$$
Similarly, for the error term, comparing (p13) with (p15) we have,
$$\tilde{e}^T H_{33} \tilde{e} = \tilde{e}_1^T Q_1 \tilde{e}_1 + \tilde{e}_2^T Q_2 \tilde{e}_2 + \cdots + \tilde{e}_N^T Q_N \tilde{e}_N$$
In matrix form,
$$\tilde{e}^T H_{33} \tilde{e} = \begin{bmatrix} \tilde{e}_1 \\ \tilde{e}_2 \\ \vdots \\ \tilde{e}_N \end{bmatrix}^T \begin{bmatrix} Q_1 & 0 & \cdots & 0 \\ 0 & Q_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & Q_N \end{bmatrix} \begin{bmatrix} \tilde{e}_1 \\ \tilde{e}_2 \\ \vdots \\ \tilde{e}_N \end{bmatrix} \qquad \text{(p17)}$$
If $Q_1 = Q_2 = \cdots = Q_N = Q$ with $Q \in \mathbb{R}^{n_y \times n_y}$, then
$$H_{33} = \begin{bmatrix} Q & 0 & \cdots & 0 \\ 0 & Q & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & Q \end{bmatrix} = I_N \otimes Q$$
Again, comparing (p13) with (p15) we see that there is no $\tilde{y}$ term in equation (p13). So we have,
$$H_{44} = \begin{bmatrix} 0 & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix} = 0_{N\cdot n_y \times N\cdot n_y} = I_N \otimes 0_{n_y \times n_y}$$
Then the $H$ matrix of the standard quadratic objective function can be written as,
$$H = \mathrm{blkdiag}(H_{11}, H_{22}, H_{33}, H_{44}) = \mathrm{blkdiag}\left(I_N \otimes P,\; 0_{N\cdot n_x \times N\cdot n_x},\; I_N \otimes Q,\; 0_{N\cdot n_y \times N\cdot n_y}\right)$$
In addition, we can clearly see that in equation (p13) we do not have any linear term. Then
comparing equations (p13) with (p15) we get,
$$c = \begin{bmatrix} c_1 \\ c_2 \\ c_3 \\ c_4 \end{bmatrix} = \begin{bmatrix} 0_{N\cdot n_u} \\ 0_{N\cdot n_x} \\ 0_{N\cdot n_y} \\ 0_{N\cdot n_y} \end{bmatrix} = 0_{(n_z \times 1)}, \qquad n_z = \text{no. of total unknowns}$$
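This structure maps directly onto `numpy.kron` and `scipy.linalg.block_diag`. A minimal sketch follows; the horizon length, dimensions and weights are illustrative only:

```python
import numpy as np
from scipy.linalg import block_diag

N  = 10                    # prediction horizon (illustrative)
nu, nx, ny = 1, 3, 2       # dimensions of the augmented model (illustrative)

P = 0.1 * np.eye(nu)       # input weight
Q = np.diag([10.0, 1.0])   # error weight (one element per augmented output)

# H = blkdiag(I_N (x) P, 0, I_N (x) Q, 0)
H = block_diag(np.kron(np.eye(N), P),
               np.zeros((N * nx, N * nx)),
               np.kron(np.eye(N), Q),
               np.zeros((N * ny, N * ny)))

# No linear term in the objective
nz = N * (nu + nx + ny + ny)
c = np.zeros(nz)
```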
Now let us express the equality constraints of the LQ optimal control problem given by equation
(p9) - (p11) in the standard QP form 𝐴𝑒 𝑧 = 𝑏𝑒 given by equation 5. Let us first organize the
matrix 𝐴𝑒 and vector 𝑏𝑒 as follows,
$$\underbrace{\begin{bmatrix} A_{e,1u} & A_{e,1\tilde{x}} & A_{e,1\tilde{e}} & A_{e,1\tilde{y}} \\ A_{e,2u} & A_{e,2\tilde{x}} & A_{e,2\tilde{e}} & A_{e,2\tilde{y}} \\ A_{e,3u} & A_{e,3\tilde{x}} & A_{e,3\tilde{e}} & A_{e,3\tilde{y}} \end{bmatrix}}_{A_e} \begin{bmatrix} u \\ \tilde{x} \\ \tilde{e} \\ \tilde{y} \end{bmatrix} = \underbrace{\begin{bmatrix} b_{e,1} \\ b_{e,2} \\ b_{e,3} \end{bmatrix}}_{b_e} \qquad \text{(p18)}$$
Each block row of $A_e$ in equation (p18) corresponds to one of the equality constraints of equations (p9) -
(p11). Each block column of $A_e$ corresponds to an element of the unknown vector $z$. Let us first consider
the equality constraint given by equation (p9),
$$\tilde{x}_{k+1} = \tilde{A}\tilde{x}_k + \tilde{B} u_k + B_r r_k$$
$$\Rightarrow \quad \tilde{x}_k - \tilde{A}\tilde{x}_{k-1} - \tilde{B} u_{k-1} = B_r r_{k-1}$$
We should obey the constraints for the whole prediction horizon length. Since we consider a
prediction horizon from 𝑘 = 1 to 𝑘 = 𝑁 we have,
$$\begin{aligned}
\tilde{x}_1 - \tilde{A}\tilde{x}_0 - \tilde{B} u_0 = B_r r_0 \;\;\Rightarrow\;\; \tilde{x}_1 - \tilde{B} u_0 &= \tilde{A}\tilde{x}_0 + B_r r_0 && \text{for } k = 1\\
\tilde{x}_2 - \tilde{A}\tilde{x}_1 - \tilde{B} u_1 &= B_r r_1 && \text{for } k = 2\\
&\;\;\vdots \\
\tilde{x}_N - \tilde{A}\tilde{x}_{N-1} - \tilde{B} u_{N-1} &= B_r r_{N-1} && \text{for } k = N
\end{aligned} \qquad \text{(p19)}$$
(Note that $\tilde{x}_0$ is the known current state, so the term $\tilde{A}\tilde{x}_0$ is moved to the right-hand side for $k = 1$.)
Arranging the set of equations (p19) into a matrix form we get the first row of 𝐴𝑒 in equation
(p18) as,
$$\left[\begin{array}{cccc|ccccc|c|c}
-\tilde{B} & 0 & \cdots & 0 & I & 0 & 0 & \cdots & 0 & & \\
0 & -\tilde{B} & \cdots & 0 & -\tilde{A} & I & 0 & \cdots & 0 & 0 & 0 \\
\vdots & & \ddots & \vdots & & \ddots & \ddots & & \vdots & & \\
0 & 0 & \cdots & -\tilde{B} & 0 & \cdots & 0 & -\tilde{A} & I & &
\end{array}\right]
\begin{bmatrix} u_0 \\ \vdots \\ u_{N-1} \\ \tilde{x}_1 \\ \vdots \\ \tilde{x}_N \\ \tilde{e} \\ \tilde{y} \end{bmatrix}
= \underbrace{\begin{bmatrix} \tilde{A}\tilde{x}_0 + B_r r_0 \\ B_r r_1 \\ \vdots \\ B_r r_{N-1} \end{bmatrix}}_{b_{e,1}}$$
where the four block columns are $A_{e,1u}$, $A_{e,1\tilde{x}}$, $A_{e,1\tilde{e}}$ and $A_{e,1\tilde{y}}$, the full matrix has size $(N\cdot n_x \times n_z)$ and the stacked unknown vector has size $(n_z \times 1)$.
So, we have from the first linear equality constraint,
$$A_{e,1u} = \begin{bmatrix} -\tilde{B} & 0 & \cdots & 0 \\ 0 & -\tilde{B} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & -\tilde{B} \end{bmatrix} = -I_N \otimes \tilde{B}$$
$$A_{e,1\tilde{x}} = \begin{bmatrix} I & 0 & 0 & \cdots & 0 \\ -\tilde{A} & I & 0 & \cdots & 0 \\ 0 & -\tilde{A} & I & \cdots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & \cdots & \cdots & -\tilde{A} & I \end{bmatrix} = I_{N\cdot n_x} - (I_{N,-1} \otimes \tilde{A})$$
where $I_{N,-1}$ denotes the $N \times N$ matrix with ones on the first sub-diagonal. Similarly,
$$A_{e,1\tilde{e}} = 0_{(N\cdot n_x \times N\cdot n_y)}, \qquad A_{e,1\tilde{y}} = 0_{(N\cdot n_x \times N\cdot n_y)}$$
and
$$b_{e,1} = \begin{bmatrix} \tilde{A}\tilde{x}_0 + B_r r_0 \\ B_r r_1 \\ \vdots \\ B_r r_{N-1} \end{bmatrix}_{(N\cdot n_x \times 1)}$$
Next, consider the second equality constraint given by equation (p10), $\tilde{y}_k = \tilde{C}\tilde{x}_k$, i.e. $-\tilde{C}\tilde{x}_k + \tilde{y}_k = 0$. Writing it for the whole prediction horizon gives,
$$\begin{aligned} -\tilde{C}\tilde{x}_1 + \tilde{y}_1 &= 0 && \text{for } k = 1\\ -\tilde{C}\tilde{x}_2 + \tilde{y}_2 &= 0 && \text{for } k = 2\\ &\;\;\vdots \\ -\tilde{C}\tilde{x}_N + \tilde{y}_N &= 0 && \text{for } k = N \end{aligned} \qquad \text{(p20)}$$
Arranging the set of equations (p20) obtained from the second equality constraint into a matrix
form, we get the second row of $A_e$ in equation (p18) as,
$$\left[\begin{array}{c|cccc|c|cccc}
 & -\tilde{C} & 0 & \cdots & 0 & & I & 0 & \cdots & 0 \\
0 & 0 & -\tilde{C} & \cdots & 0 & 0 & 0 & I & \cdots & 0 \\
 & \vdots & & \ddots & \vdots & & \vdots & & \ddots & \vdots \\
 & 0 & 0 & \cdots & -\tilde{C} & & 0 & 0 & \cdots & I
\end{array}\right]
\begin{bmatrix} u \\ \tilde{x}_1 \\ \vdots \\ \tilde{x}_N \\ \tilde{e} \\ \tilde{y}_1 \\ \vdots \\ \tilde{y}_N \end{bmatrix}
= \underbrace{\begin{bmatrix} 0_{n_y \times 1} \\ 0_{n_y \times 1} \\ \vdots \\ 0_{n_y \times 1} \end{bmatrix}}_{b_{e,2}}$$
where the block columns are $A_{e,2u}$, $A_{e,2\tilde{x}}$, $A_{e,2\tilde{e}}$ and $A_{e,2\tilde{y}}$, and the full matrix has size $(N\cdot n_y \times n_z)$.
So we have,
$$A_{e,2u} = 0_{(N\cdot n_y \times N\cdot n_u)}$$
$$A_{e,2\tilde{x}} = \begin{bmatrix} -\tilde{C} & 0 & \cdots & 0 \\ 0 & -\tilde{C} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & -\tilde{C} \end{bmatrix} = -I_N \otimes \tilde{C}$$
$$A_{e,2\tilde{e}} = 0_{(N\cdot n_y \times N\cdot n_y)}$$
$$A_{e,2\tilde{y}} = \begin{bmatrix} I & 0 & \cdots & 0 \\ 0 & I & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & I \end{bmatrix} = I_{N\cdot n_y}$$
$$b_{e,2} = 0_{(N\cdot n_y \times 1)}$$
Finally, consider the third equality constraint given by equation (p11), $\tilde{e}_k = \tilde{r}_k - \tilde{y}_k$, i.e. $\tilde{e}_k + \tilde{y}_k = \tilde{r}_k$. This
equality constraint must also be satisfied over the whole prediction horizon length. Since we consider
a prediction horizon from $k = 1$ to $k = N$ we get,
$$\begin{aligned} \tilde{e}_1 + \tilde{y}_1 &= \tilde{r}_1 && \text{for } k = 1\\ \tilde{e}_2 + \tilde{y}_2 &= \tilde{r}_2 && \text{for } k = 2\\ &\;\;\vdots \\ \tilde{e}_N + \tilde{y}_N &= \tilde{r}_N && \text{for } k = N \end{aligned} \qquad \text{(p21)}$$
Arranging the set of equations (p21) obtained from the third equality constraints in a matrix form we get
the third row of 𝐴𝑒 in equation (p18) as,
$$\left[\begin{array}{c|c|cccc|cccc}
 & & I & 0 & \cdots & 0 & I & 0 & \cdots & 0 \\
0 & 0 & 0 & I & \cdots & 0 & 0 & I & \cdots & 0 \\
 & & \vdots & & \ddots & \vdots & \vdots & & \ddots & \vdots \\
 & & 0 & 0 & \cdots & I & 0 & 0 & \cdots & I
\end{array}\right]
\begin{bmatrix} u \\ \tilde{x} \\ \tilde{e}_1 \\ \vdots \\ \tilde{e}_N \\ \tilde{y}_1 \\ \vdots \\ \tilde{y}_N \end{bmatrix}
= \underbrace{\begin{bmatrix} \tilde{r}_1 \\ \tilde{r}_2 \\ \vdots \\ \tilde{r}_N \end{bmatrix}}_{b_{e,3}}$$
where the block columns are $A_{e,3u}$, $A_{e,3\tilde{x}}$, $A_{e,3\tilde{e}}$ and $A_{e,3\tilde{y}}$, and the full matrix has size $(N\cdot n_y \times n_z)$.
So we have,
$$A_{e,3u} = 0_{(N\cdot n_y \times N\cdot n_u)}, \qquad A_{e,3\tilde{x}} = 0_{(N\cdot n_y \times N\cdot n_x)}$$
$$A_{e,3\tilde{e}} = \begin{bmatrix} I & 0 & \cdots & 0 \\ 0 & I & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & I \end{bmatrix} = I_N \otimes I_{n_y} = I_{N\cdot n_y}$$
$$A_{e,3\tilde{y}} = \begin{bmatrix} I & 0 & \cdots & 0 \\ 0 & I & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & I \end{bmatrix} = I_{N\cdot n_y}$$
$$b_{e,3} = \begin{bmatrix} \tilde{r}_1 \\ \vdots \\ \tilde{r}_N \end{bmatrix}_{(N\cdot n_y \times 1)}$$
Finally, all the three equality constraints can be written in the standard form of 𝐴𝑒 𝑧 = 𝑏𝑒
where,
$$A_e = \begin{bmatrix} -I_N \otimes \tilde{B} & I_{N\cdot n_x} - (I_{N,-1} \otimes \tilde{A}) & 0_{N\cdot n_x \times N\cdot n_y} & 0_{N\cdot n_x \times N\cdot n_y} \\ 0_{N\cdot n_y \times N\cdot n_u} & -I_N \otimes \tilde{C} & 0_{N\cdot n_y \times N\cdot n_y} & I_{N\cdot n_y} \\ 0_{N\cdot n_y \times N\cdot n_u} & 0_{N\cdot n_y \times N\cdot n_x} & I_{N\cdot n_y} & I_{N\cdot n_y} \end{bmatrix}$$
and
$$b_e = \begin{bmatrix} b_{e,1} \\ b_{e,2} \\ b_{e,3} \end{bmatrix} = \begin{bmatrix} \tilde{A}\tilde{x}_0 + B_r r_0 \\ B_r r_1 \\ \vdots \\ B_r r_{N-1} \\ 0_{(N\cdot n_y \times 1)} \\ \tilde{r}_1 \\ \vdots \\ \tilde{r}_N \end{bmatrix}$$
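The whole $A_e$ and $b_e$ can be assembled compactly with Kronecker products. A minimal sketch continuing the notation above is given below; the helper function name is mine and the handling of the reference sequence is illustrative:

```python
import numpy as np

def build_equality_constraints(A_t, B_t, Br, C_t, x0_t, r, N):
    """Assemble A_e and b_e for the equality constraints (p9)-(p11).

    A_t, B_t, Br, C_t : augmented model matrices (A~, B~, B_r, C~)
    x0_t              : current augmented state x~_0 (1-D array)
    r                 : list of references r_0 ... r_{N-1} (1-D arrays)
    """
    nx = A_t.shape[0]
    nu = B_t.shape[1]
    ny = C_t.shape[0]
    n_meas = Br.shape[1]             # number of measured outputs / integrators

    I_N   = np.eye(N)
    I_Nm1 = np.eye(N, k=-1)          # ones on the first sub-diagonal (I_{N,-1})

    Ae1 = np.hstack([-np.kron(I_N, B_t),
                     np.eye(N * nx) - np.kron(I_Nm1, A_t),
                     np.zeros((N * nx, N * ny)),
                     np.zeros((N * nx, N * ny))])
    Ae2 = np.hstack([np.zeros((N * ny, N * nu)),
                     -np.kron(I_N, C_t),
                     np.zeros((N * ny, N * ny)),
                     np.eye(N * ny)])
    Ae3 = np.hstack([np.zeros((N * ny, N * nu)),
                     np.zeros((N * ny, N * nx)),
                     np.eye(N * ny),
                     np.eye(N * ny)])
    Ae = np.vstack([Ae1, Ae2, Ae3])

    be1 = np.concatenate([A_t @ x0_t + Br @ r[0]] +
                         [Br @ r[k] for k in range(1, N)])
    be2 = np.zeros(N * ny)
    # Augmented references r~_k = [y_k^SP ; x_{i,k}^SP], integrator setpoints are zero;
    # for simplicity the same r_k sequence is reused for r~_1 ... r~_N.
    be3 = np.concatenate([np.concatenate([r[k], np.zeros(ny - n_meas)])
                          for k in range(N)])
    be  = np.concatenate([be1, be2, be3])
    return Ae, be
```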
Problem 2
MPC algorithms are computationally demanding. The algorithm is suitable for systems whose sampling
time is larger than or equal to the computational time required for solving the MPC problem at each time
step i.e. for slow processes. For processes with fast system dynamics, real time application of MPC may be
difficult to realize. Failing to properly address the computational time delay in MPC may lead the process
to become unstable.
Assume that you are working on a process where the sampling time 𝑑𝑡 is smaller than an upper bound 𝜏
on the computation time. Let the model of the process be given by,
$$x_{k+1} = A x_k + B u_k \qquad (6)$$
$$\bar{y}_k = D x_k \qquad (7)$$
Tasks
(i) [8%] For obtaining an improved closed loop response, how will you handle the computational
time delay as an output delay? Explain in detail with all necessary equations.
Suggested solution:
If it is possible to define an upper bound $\tau$ on the computation time (i.e. the maximum time
required to solve the MPC problem), then the computational delay $\tau$ can be handled as an
output delay. If $dt$ is the sampling time, then the number of samples corresponding to the
computational delay can be calculated (rounding down) as,
$$n_\tau = \left\lfloor \frac{\tau}{dt} \right\rfloor$$
Figure (a) shows a simplified block diagram of an output delay: the undelayed process output $\bar{y}_k$ passes through an output delay model to give the delayed output $y_k$.
Without the output delay, the state space model of the process can be written as,
$$x_{k+1} = A x_k + B u_k \qquad \text{(p22)}$$
$$\bar{y}_k = D x_k \qquad \text{(p23)}$$
Let us assume that the state space model from $\bar{y}_k$ to $y_k$ (i.e. the delay model) can be written as,
$$x^{\tau}_{k+1} = A^{\tau} x^{\tau}_k + B^{\tau} \bar{y}_k \qquad \text{(p24)}$$
$$y_k = D^{\tau} x^{\tau}_k \qquad \text{(p25)}$$
To handle the computational time delay, the process model without output delay (equations
p22 & p23) should be augmented with the output delay model (equations p24 & p25) as,
$$\underbrace{\begin{bmatrix} x_{k+1} \\ x^{\tau}_{k+1} \end{bmatrix}}_{\tilde{x}_{k+1}} = \underbrace{\begin{bmatrix} A & 0 \\ B^{\tau} D & A^{\tau} \end{bmatrix}}_{\tilde{A}} \underbrace{\begin{bmatrix} x_k \\ x^{\tau}_k \end{bmatrix}}_{\tilde{x}_k} + \underbrace{\begin{bmatrix} B \\ 0 \end{bmatrix}}_{\tilde{B}} u_k \qquad \text{(p26)}$$
$$y_k = \underbrace{\begin{bmatrix} 0 & D^{\tau} \end{bmatrix}}_{\tilde{D}} \underbrace{\begin{bmatrix} x_k \\ x^{\tau}_k \end{bmatrix}}_{\tilde{x}_k} \qquad \text{(p27)}$$
The augmented model is in the standard form,
$$\tilde{x}_{k+1} = \tilde{A}\tilde{x}_k + \tilde{B} u_k \qquad \text{(p28)}$$
$$y_k = \tilde{D}\tilde{x}_k \qquad \text{(p29)}$$
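A sketch of how the delay model and the augmentation could be built numerically is given below; the plant matrices, the timing numbers and the variable names are illustrative assumptions, not part of the exam model:

```python
import math
import numpy as np

# Illustrative plant (2 states, 1 input, 1 output) and timing assumptions
A  = np.array([[0.9, 0.1],
               [0.0, 0.8]])
B  = np.array([[0.0],
               [0.5]])
D  = np.array([[1.0, 0.0]])
dt, tau = 0.1, 0.32              # sampling time and computation-time upper bound

n_tau = math.floor(tau / dt)     # number of delay samples (rounded down), here 3

ny = D.shape[0]
# Shift-register delay model: x^tau_{k+1} = A_tau x^tau_k + B_tau ybar_k,  y_k = D_tau x^tau_k
A_tau = np.kron(np.eye(n_tau, k=-1), np.eye(ny))
B_tau = np.vstack([np.eye(ny), np.zeros(((n_tau - 1) * ny, ny))])
D_tau = np.hstack([np.zeros((ny, (n_tau - 1) * ny)), np.eye(ny)])

# Augmented model (equations p26-p27)
nx = A.shape[0]
A_tilde = np.block([[A,         np.zeros((nx, n_tau * ny))],
                    [B_tau @ D, A_tau                     ]])
B_tilde = np.vstack([B, np.zeros((n_tau * ny, B.shape[1]))])
D_tilde = np.hstack([np.zeros((ny, nx)), D_tau])
```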
(ii) [4%] Show an illustration of your output delay model for a computational delay of 3 samples.
Show clearly all the elements of matrices used in the delay model.
Suggested solution:
For a computational delay of 3 samples i.e. 𝑛𝜏 = 3,
$$\underbrace{\begin{bmatrix} x^1_{k+1} \\ x^2_{k+1} \\ x^3_{k+1} \end{bmatrix}}_{x^{\tau}_{k+1}} = \underbrace{\begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}}_{A^{\tau}} \underbrace{\begin{bmatrix} x^1_k \\ x^2_k \\ x^3_k \end{bmatrix}}_{x^{\tau}_k} + \underbrace{\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}}_{B^{\tau}} \bar{y}_k \qquad \text{(p30)}$$
$$y_k = \underbrace{\begin{bmatrix} 0 & 0 & 1 \end{bmatrix}}_{D^{\tau}} \underbrace{\begin{bmatrix} x^1_k \\ x^2_k \\ x^3_k \end{bmatrix}}_{x^{\tau}_k} \qquad \text{(p31)}$$
Analysing equations (p30) and (p31) we get,
$$x^1_{k+1} = \bar{y}_k$$
$$x^2_{k+1} = x^1_k = \bar{y}_{k-1}$$
$$x^3_{k+1} = x^2_k = \bar{y}_{k-2}$$
$$y_k = x^3_k = \bar{y}_{k-3}$$
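A quick numerical check of the 3-sample delay (a throwaway sketch; the input sequence is arbitrary):

```python
import numpy as np

A_tau = np.array([[0, 0, 0],
                  [1, 0, 0],
                  [0, 1, 0]], dtype=float)
B_tau = np.array([[1.0], [0.0], [0.0]])
D_tau = np.array([[0.0, 0.0, 1.0]])

x_tau = np.zeros((3, 1))
ybar  = [1.0, 2.0, 3.0, 4.0, 5.0]      # arbitrary undelayed outputs ybar_0, ybar_1, ...

for k, yb in enumerate(ybar):
    y_k = (D_tau @ x_tau).item()
    print(f"k={k}: y_k = {y_k}")        # y_k equals ybar_{k-3} (zero for k < 3)
    x_tau = A_tau @ x_tau + B_tau * yb
```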
Problem 3
Tasks
(i) [7%] What is the difference between a state feedback MPC and an output feedback MPC? Draw
the necessary signal flow diagrams (block diagrams) to illustrate your reasoning.
Suggested solution:
A state feedback MPC as the name implies is a controller where the states are directly fed back
to the controller. This is only possible if all the states of the process being controlled are
available or measurable i.e. full state information is a necessity. This makes the state feedback
MPC an ideal case. The block diagram in figure (b) illustrates the state feedback MPC.
Figure (b): State feedback MPC. The reference and the full state $x_k$ are fed to the MPC, which computes $u_k$ for the process; the process output is $y_k = C x_k$.
However, in practice, it may not always be possible to measure all the states of the system. In
such cases, the states of the system should be "ESTIMATED". Estimation of the states can be
performed by utilizing the available measurements. So in this case, the measurements
(outputs) of the process are fed to an estimator (say a Kalman filter), which estimates the
states. These estimated states are then fed to the MPC as shown in the block diagram in Figure
(c).
Figure (c): Output feedback MPC with a full order estimator. The measured output $y_k$ is fed to the estimator, which provides the estimated states $\hat{x}_k$ (and estimated outputs $\hat{y}_k$) to the MPC.
In Figure (c) the estimator used is a full order estimator which estimates all the states of the
system (both measured and unmeasured). It is also possible to design reduced order
estimators (e.g. a reduced order nonlinear observer, for simplifying calculations and proofs of
convergence) that estimate only the unmeasured states. The measured states (possibly after
low pass filtering, if necessary) and the estimated states are fed to the MPC as shown in Figure
(d).
Figure (d): Output feedback MPC with a reduced order estimator. The measured states (possibly low pass filtered) are fed directly to the MPC, while the reduced order estimator supplies only the unmeasured states $\hat{x}_k$.
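The output feedback structure of Figure (c) can be sketched in code as below. This is only a minimal illustration with a steady-state Kalman predictor; the MPC solve is abstracted as a placeholder function and all matrices and numbers are illustrative:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Illustrative plant and noise covariances
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [0.5]])
C = np.array([[1.0, 0.0]])
Qw, Rv = 0.01 * np.eye(2), 0.1 * np.eye(1)

# Steady-state Kalman gain for the estimator
P = solve_discrete_are(A.T, C.T, Qw, Rv)
L = P @ C.T @ np.linalg.inv(C @ P @ C.T + Rv)

def mpc_control(x_hat):
    # Placeholder for the QP-based MPC solve; here just a simple state feedback for illustration
    K = np.array([[0.5, 0.3]])
    return -K @ x_hat

x, x_hat = np.array([[1.0], [0.0]]), np.zeros((2, 1))
for k in range(20):
    y = C @ x                              # measurement from the process
    x_hat = x_hat + L @ (y - C @ x_hat)    # estimator: measurement update
    u = mpc_control(x_hat)                 # controller uses the estimated states
    x = A @ x + B @ u                      # process evolves
    x_hat = A @ x_hat + B @ u              # estimator: time update
```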
(ii) [5%] Explain the goal attainment method for solving a multi objective optimization problem
with necessary diagram.
Suggested solution:
The goal attainment method can be thought of as a relaxation of the ε-constraint method. In
this method, a set of goals ($F_i^*$) is expressed for each $i^{th}$ objective function ($F_i(x)$) of the
multi-objective optimization problem (MOO). Since in MOO the objective functions can be
conflicting with each other, it may be difficult (maybe impossible) to achieve all the goals set
for the objective functions simultaneously. To get around this, a slack variable ($\lambda$) is used
to allow the goals to be violated, and the violation should be kept as small as possible.
Thus we can define the following optimization problem,
$$\min_{x,\lambda} \; \lambda$$
$$\text{s.t.} \quad F_i(x) - w_i \lambda \le F_i^*, \qquad i = 1, 2, \ldots$$
Here, the idea is to achieve the value of each objective function 𝐹𝑖 (𝑥) at least equal to or less
than the defined goal for the objective. The goals are either under or over achieved by making
use of weights 𝑤𝑖 . The weights are used to control the degree of under or over achievement.
It allows the user to express a measure of relative tradeoffs between the conflicting objectives.
The term 𝑤𝑖 𝜆 introduces an element of slackness into the problem. If 𝑤𝑖 = 0, it simply means
that the goal has to be rigidly met. To illustrate the method, let us take two conflicting
objective functions 𝐹1 and 𝐹2 as shown in Figure (e).
The choice of the goals $F_i^*$ defines the goal point $P$. The weighting vector $w$ for the slack variable
defines the direction of search. As the optimization iterations proceed, the slack variable
is adjusted, and as a result the size/shape of the feasible region changes, until the problem
converges to a unique solution point denoted by $F_1^s$ and $F_2^s$ in Figure (e).
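As an illustration, the goal attainment problem can be solved with a generic NLP solver. A minimal sketch with two made-up conflicting objectives follows; the objectives, goals and weights are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

# Two conflicting objectives of a scalar decision variable x
F1 = lambda x: (x - 1.0) ** 2
F2 = lambda x: (x + 1.0) ** 2

F_star = np.array([0.5, 0.5])    # goals F_i*
w      = np.array([1.0, 1.0])    # weights controlling under/over-attainment

# Decision vector v = [x, lam]; minimize lam s.t. F_i(x) - w_i*lam <= F_i*
objective   = lambda v: v[1]
constraints = [{"type": "ineq",
                "fun": lambda v, i=i, F=F: F_star[i] + w[i] * v[1] - F(v[0])}
               for i, F in enumerate([F1, F2])]

res = minimize(objective, x0=[0.0, 1.0], constraints=constraints, method="SLSQP")
x_opt, lam_opt = res.x
print(f"x = {x_opt:.3f}, lambda = {lam_opt:.3f}, "
      f"F1 = {F1(x_opt):.3f}, F2 = {F2(x_opt):.3f}")
```

For this illustrative pair of objectives the solution point is $x \approx 0$ with $\lambda \approx 0.5$, i.e. both goals are under-achieved by the same weighted amount, which is exactly the trade-off behaviour described above.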
Roshan Sharma
19th November 2018, Porsgrunn, Norway