State Estimation with Observers

The document discusses state estimation using observers. It explains that observers estimate unknown state variables of a dynamic system by running a mathematical model of the process in parallel with corrections based on errors between real and estimated measurements. The key points are: 1) An observer derives an estimation error model to describe how errors between real and estimated states evolve over time. 2) The observer gain K determines how strongly corrections are applied and is chosen to make the error model asymptotically stable. 3) With a proper gain K, the estimation errors will converge to zero, providing accurate estimated states even when real states cannot be directly measured.
Copyright © Attribution Non-Commercial (BY-NC)

Chapter 17

State estimation with observers
17.1 Introduction
An observer is an algorithm for estimating the values of state variables of
a dynamic system. Why can such state estimates be useful?
Supervision: State estimates can provide valuable information
about important variables in a physical process, for example feed
composition to a reactor, environmental forces acting on a ship, load
torques acting on a motor, etc.
Control: In general, the more information the controller has about
the process it controls, the better (more accurately) it can control it.
In particular, some control methods assume that the states of the
process to be controlled are known. If the state variables are not
measured, they may be estimated, and the estimates can be used by
the controller as if they were measurements.
Note that even process disturbances and process parameters can be
estimated. The trick is to model the disturbances or parameters as
ordinary state variables.
Relevant control methods that may benefit from state estimators are:
Feedforward control [6], where the feedforward can be based on
estimated disturbances.
Cascade control [6], where the inner loops can be based on
estimated states.
Feedback linearization, see Section 20, where the feedbacks can
be based on estimated states.
LQ (linear quadratic) optimal control, see Section 21, where the
feedbacks can be based on estimated states.
Model-based predictive control (MPC), see Section 22, where the
prediction of future behaviour and optimization can be based on
an estimated present state.
Observers are calculated from specified estimation error dynamics, or in
other words: how fast and stably you want the estimates to converge to the
real values (assuming you could measure them). An alternative to
observers is the Kalman Filter, which is an estimation algorithm based on
stochastic theory. The Kalman Filter produces state estimates that
contain a minimum amount of noise in the assumed presence of random
process disturbances and random measurement noise. In observers,
however, such stochastic signals acting on the system are not in focus. The
theory and implementation of observers are simpler than for Kalman
Filters, and this is beneficial. One particular drawback of observers is
that they are not straightforward to design for systems having more than
one measurement, while this is straightforward for Kalman Filters. The
Kalman Filter is described in Chapter 18, for discrete-time systems (the
discrete-time Kalman Filter is more commonly used than the
continuous-time Kalman Filter).
I have chosen to describe continuous-time, not discrete-time, observers.
This makes the mathematical operations involved in the design of the
observer simpler. In a practical implementation you will use a computer,
which operates in discrete time. Consequently, to obtain an observer ready
for computer implementation, you will need to discretize the observer
algorithm, but that is straightforward using Forward discretization. There
is a potential danger in just discretizing a continuous-time algorithm:
the resulting algorithm may become unstable if the sampling time is too
large. However, with the computing power of today, there is probably no
problem selecting a sufficiently small sampling time in the implementation
for any given application.
As with every model-based algorithm, you should test your observer with a
simulated process before applying it to the real system. You can implement
a simulator in e.g. LabVIEW or MATLAB/Simulink, since you already
have a model (the observer is model-based). In the testing, you start by
testing the observer against the nominal model in the simulator, including
process and measurement noise. This is the model on which you are basing
the observer. Second, you should introduce some reasonable model errors
by making the simulator model somewhat different from the observer
model, and check if the observer still produces usable estimates.
17.2 How the observer works
The purpose of the observer is to estimate assumed unknown states in the
process. Figure 17.1 shows a block diagram of a real system (process) with
observer.

Figure 17.1: Real system (process) with observer

The operating principle of the observer is that the mathematical model of
the process is run, or simulated, in parallel with the process. If the model
is perfect, x_e will be equal to the real states, x. But in practice there are
model errors, so there will be a difference between x and x_e. It is always
assumed that at least one of the states is measured. If there is a difference
between x and x_e, there will also be a difference between the real
measurement y and the estimated measurement y_e. This difference is the
error e of the measurement estimate y_e (it is also denoted the innovation
variable):

e = y - y_e (17.1)
This error is used to update the estimate via the observer gain K. Thus,
the correction of the estimates is error-driven, which is the same principle
as in an error-driven control loop.
The numerical value of the observer gain determines the strength of the
correction. In the following section we will calculate a proper value of K.
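To make the error-driven principle concrete, here is a minimal sketch (an assumed first-order example, not from the text): the process dx/dt = -x + u is simulated alongside its observer, and the innovation e = y - y_e pulls the estimate towards the real state.

```python
def estimation_error(K, Ts=0.01, steps=200):
    """Simulate the process x' = -x + u and its observer; return final |x - x_e|."""
    x, x_e, u = 1.0, 0.0, 0.5      # real state, (wrong) initial estimate, input
    for _ in range(steps):
        y = x                       # measurement (noise-free here)
        e = y - x_e                 # innovation, eq. (17.1)
        x += Ts * (-x + u)                 # real process
        x_e += Ts * (-x_e + u + K * e)     # observer: model + K*e correction
    return abs(x - x_e)

print(estimation_error(K=5.0))   # strong correction: error nearly gone
print(estimation_error(K=0.0))   # no correction: error decays only via the model
```

With K = 0 the observer is a pure simulation and the estimation error decays only as fast as the process itself; a positive K speeds up the convergence, exactly as the error model in the next section predicts.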
17.3 How to design observers
17.3.1 Deriving the estimation error model
We assume that the process model is (the time t is omitted for simplicity)

ẋ = f(x, u, w) (17.2)

where x is the state variable (vector), u is the control variable (vector), and
w is the disturbance (vector). f is a possibly nonlinear function (vector).
Furthermore we assume that the process measurement y is given by

y = Cx + v (17.3)

C is a matrix, and v is measurement noise, which is assumed to be
random, and therefore not predictable. This noise influences the state
estimates, and we will later see how we can reduce or minimize the
influence of the noise. If the sensor (including scaling) produces a value in a
proper engineering unit, e.g. m/s, °C or Pa, the elements of C have
numerical value 1 or 0. For example, if the system has the two states
x_1 = position and x_2 = speed, and only the position is measured with a
sensor which gives the position in units of meters, then

C = [1 0] (17.4)
The state estimates, x_e, are calculated from the model together with a
correction term proportional to the measurement estimate error:

ẋ_e = f(x_e, u, w_k) + Ke (17.5)
    = f(x_e, u, w_k) + K(y - y_e) (17.6)
    = f(x_e, u, w_k) + K(Cx + v - Cx_e) (17.7)
    = f(x_e, u, w_k) + KC(x - x_e) + Kv (17.8)

where w_k are process disturbances that are assumed to have known values,
e.g. from measurements.
The measurement estimate is given by

y_e = Cx_e (17.9)

The measurement noise v is not included in (17.9) because v is assumed
not to be predictable.
It is of course of crucial importance that the error of the state estimate is
small. So, let us derive a model of the state estimation error. We define this
error as

e_x = x - x_e (17.10)
These variables are actually vectors. In detail, (17.10) looks like this:

[e_x1, e_x2, ..., e_xn]^T = [x_1, x_2, ..., x_n]^T - [x_1e, x_2e, ..., x_ne]^T (17.11)
Now, we subtract (17.8) from (17.2):

ẋ - ẋ_e = f(x, u, w) - [f(x_e, u, w_k) + KC(x - x_e) + Kv] (17.12)
        = [f(x, u, w) - f(x_e, u, w_k)] - KC(x - x_e) - Kv (17.13)
Let us assume that the difference between the values of the two functions
in the square bracket in (17.13) is caused by a small difference between x
and x_e. Then we have

f(x, u, w) - f(x_e, u, w_k) ≈ ∂f(·)/∂x |_{x_e(t), u(t), w_k(t)} · (x - x_e) (17.14)

Here we define

A_c ≡ ∂f(·)/∂x |_{x_e(t), u(t), w_k(t)} (17.15)
A_c is the Jacobian (matrix of partial derivatives) of the system function f
(the subindex c in A_c is for continuous-time), and it is the same as the
transition matrix resulting from linearization of the nonlinear state-space
model, cf. Section 1.4. Now, (17.13) can be written
ẋ - ẋ_e = A_c (x - x_e) - KC(x - x_e) - Kv (17.16)

or, using (17.10):

ė_x = A_c e_x - KC e_x - Kv (17.17)
    = (A_c - KC) e_x - Kv (17.18)
which defines the error dynamics of the observer: (17.18) is the estimation
error model of the observer. Now, assume that we disregard the impact that
the measurement noise v has on e_x. Then the estimation error model is

ė_x = (A_c - KC) e_x (17.19)
which is an autonomous system (i.e. not driven by external inputs). If that
system is asymptotically stable, each of the error variables, e_xi, will
converge towards zero from any non-zero initial value. Of course, this is
what we want, namely that the estimation errors become zero. More
specifically, the dynamics (with respect to speed and stability) of e_x are
given by the eigenvalues of the system matrix

A_e ≡ A_c - KC (17.20)

And the observer gain K is a part of A_e! (The next section explains how
we can calculate K.)
Note that the matrices A_c and C in (17.20) are matrices of a linearized
process model, assumed to be on the form

ẋ = A_c x + B_c u (17.21)
y = Cx + Du (17.22)
As pointed out above, A_c can be calculated by linearization of the system
function at the operating point:

A_c ≡ ∂f(·)/∂x |_{x_e(t), u(t), w_k(t)} (17.23)
To calculate the observer gain, the B_c matrix is actually not needed, but
when you use e.g. the LabVIEW function named CD Ackerman.vi to
calculate K, you still need B_c, as demonstrated in Example 17.1. B_c is
found by linearization:

B_c ≡ ∂f(·)/∂u |_{x_e(t), u(t), w_k(t)} (17.24)
In (17.22) the C and D matrices come automatically from the measurement
equation. For example, if the system has the two states x_1 = position and
x_2 = speed, and only the position is measured with a sensor which gives
the position in units of meters, then

C = [1 0] (17.25)

And D is a matrix of proper dimension containing just zeros.
17.3.2 Calculation of the observer gain
Here is a procedure for calculating K:
1. Specify proper error dynamics in terms of the eigenvalues of
(17.20). As explained below, these eigenvalues can be calculated from
the specified response time of the observer.
2. Calculate K from the specied eigenvalues.
These two steps are explained in detail in the following.
Regarding step 1: What are proper eigenvalues of the error dynamics?
There are many options. A good option is Butterworth eigenvalues, and
we will concentrate on this option here. The characteristic equation from
which the eigenvalues are calculated is then a Butterworth polynomial.
Butterworth polynomials are a common way to specify the denominator of
a lowpass filter in the area of signal processing. The step response of such
filters has a slight overshoot, with good damping. (Such step responses will
also occur in an observer if a real state variable changes value abruptly.)
Below are Butterworth polynomials of order 2, 3, and 4, which are the most
relevant orders.¹
B_2(s) = (Ts)² + 1.4142(Ts) + 1 (17.26)

B_3(s) = (Ts)³ + 2(Ts)² + 2(Ts) + 1 (17.27)

B_4(s) = (Ts)⁴ + 2.6131(Ts)³ + 3.4142(Ts)² + 2.6131(Ts) + 1 (17.28)
The parameter T is used to define the speed of the response. (In
normalized Butterworth polynomials T = 1.) The speed is inversely
proportional to T, so the smaller T, the faster the response. (We will
specify T more closely below.) To give an impression of Butterworth
dynamics, Figure 17.2 shows the step responses of Butterworth filters of
order 2, 3, and 4, all with T = 1:
H_2(s) = 1/B_2(s) (17.29)

H_3(s) = 1/B_3(s) (17.30)

H_4(s) = 1/B_4(s) (17.31)
Let us define the observer response time T_r as the time that the step
response needs to reach 63% of the steady-state value of the response.²
From Figure 17.2 we can see that a rough and simple, but still useful,
estimate of T_r is

T_r ≈ nT (17.32)

where n is the order of the transfer function, which is the number of poles
and eigenvalues. T_r will be the only tuning parameter of the observer!

¹ Other orders can be found from the butter function in MATLAB and LabVIEW.

Figure 17.2: Step response of normalized Butterworth filters (with T = 1) of
order 2, 3 and 4.
Once T_r is specified, the T parameter to be used in the appropriate
Butterworth polynomial among (17.26) - (17.28) is

T = T_r / n (17.33)

And once the Butterworth polynomial among (17.26) - (17.28) is
determined, you must calculate the eigenvalues {s_1, s_2, ..., s_n} as the
roots of the polynomial:

{s_1, s_2, ..., s_n} = roots(B_n) (17.34)
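As a quick check of these two steps, the following Python sketch (an illustration, not from the text) computes T from a specified T_r via (17.33) and then the eigenvalues as the roots of the second-order Butterworth polynomial (17.26), per (17.34):

```python
import cmath

def butterworth2_eigenvalues(Tr, n=2):
    """Roots of B_2(s) = (Ts)^2 + 1.4142(Ts) + 1, with T = Tr/n per (17.33)."""
    T = Tr / n
    a, b, c = T * T, 1.4142 * T, 1.0    # coefficients of T^2 s^2 + 1.4142 T s + 1
    d = cmath.sqrt(b * b - 4 * a * c)   # discriminant (complex for Butterworth)
    return ((-b + d) / (2 * a), (-b - d) / (2 * a))

s1, s2 = butterworth2_eigenvalues(Tr=0.2)   # the specification used in Example 17.1
print(s1, s2)   # complex-conjugate pair with negative real parts
```

For T_r = 0.2 s this gives eigenvalues near -7.07 ± 7.07j, which is the well-damped Butterworth pair used in Example 17.1 below.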
Figure 17.3 sums up the procedure of calculating the observer gain K.
² Similar to the time constant of a first-order dynamic system.
Figure 17.3: The procedure for calculating the observer gain K: from the
observer response time T_r and the system order n, determine the Butterworth
polynomial B_n(s) of the error model; calculate the eigenvalues {s_1, s_2, ..., s_n}
as the roots of B_n(s); then, using the system matrices A_c and C, calculate the
observer gain K.
Calculation of the observer gain in MATLAB and LabVIEW

Both MATLAB and LabVIEW have functions to calculate the roots of a
polynomial (e.g. in MATLAB the function is roots).
Step 2 in the procedure list above is to calculate the observer gain K from
the specified eigenvalues {s_1, s_2, ..., s_n} of A - KC. We have

eig(A - KC) = {s_1, s_2, ..., s_n} (17.35)
As is known from mathematics, the eigenvalues are the s-roots of the
characteristic equation:

det[sI - (A - KC)] = (s - s_1)(s - s_2) ··· (s - s_n) = 0 (17.36)

By equating the polynomials on the left and the right side of (17.36) we
can calculate the elements of K. This can be done manually. We can also
use functions in e.g. MATLAB or LabVIEW. In MATLAB and in
MathScript (LabVIEW) you can use the function acker. In LabVIEW you
can also use the function (block) CD Ackerman.vi.
In LabVIEW, using acker and CD Ackerman.vi is straightforward.
In MATLAB, acker is a little tricky: In general, acker (MATLAB)
calculates the gain K_1 so that the eigenvalues of the matrix (A_1 - B_1 K_1)
are as specified. acker is used as follows:

K1 = acker(A1, B1, eigenvalues)

But we need to calculate K so that the eigenvalues of (A - KC) are as
specified. Now, the eigenvalues of A - KC are the same as the eigenvalues
of

(A - KC)^T = A^T - C^T K^T (17.37)

Therefore we use acker as follows:

K1 = acker(A', C', eigenvalues);
K = K1'
Example 17.1 Calculating the observer gain K in MATLAB and
LabVIEW

Given a second-order continuous-time model with the following system
matrices:

A = [0 1; 0 0], B = [0; 1], C = [1 0], D = [0] (17.38)
State variable x_2 shall be estimated with an observer. x_1 = y is measured.
We specify that the response time of the estimator is 0.2 s, which implies
that the parameter T in (17.33) is

T = T_r / n = 0.2 / 2 = 0.1 s (17.39)

The Butterworth polynomial becomes

B_2(s) = (Ts)² + 1.4142(Ts) + 1 = T²s² + 1.4142Ts + 1 (17.40)

from which we can calculate the roots, which are the specified eigenvalues
of the observer.
MATLAB:

The following MATLAB script calculates the estimator gain K:

A = [0,1;0,0];
B = [0;1];
C = [1,0];
D = [0];
n = 2;
Tr = 0.2;
T = Tr/n;
B2 = [T*T, 1.4142*T, 1];
eigenvalues = roots(B2);
K1 = acker(A', C', eigenvalues);
K = K1'
The result is
K =
14.142
100
LabVIEW/MathScript:

The following MathScript script calculates the estimator gain K:

A = [0,1;0,0];
B = [0;1];
C = [1,0];
D = [0];
n = 2;
Tr = 0.2;
T = Tr/n;
B2 = [T*T, 1.4142*T, 1];
eigenvalues = roots(B2);
K = acker(A, C, eigenvalues, 'o') % 'o' for observer

The result is as before

K =
14.142
100
LabVIEW/Block diagram:
The function CD Ackerman.vi on the Control Design and Simulation
Toolkit palette in LabVIEW can also be used to calculate the observer gain
K. Figure 17.4 shows the front panel, and Figure 17.5 shows the block
diagram of a LabVIEW program. The same K as above is obtained.
Figure 17.4: Example 17.1: Front panel of the LabVIEW program to calculate
the observer gain K
[End of Example 17.1]
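For readers without MATLAB or LabVIEW, the numbers above can also be reproduced by matching the characteristic polynomial of A - KC coefficient by coefficient. The sketch below (an illustration, limited to the 2-state, single-measurement case with C = [1, 0]) does exactly that:

```python
def observer_gain_2x2(A, p1, p0):
    """Place eig(A - K C) at the roots of s^2 + p1 s + p0, for C = [1, 0].

    Returns K = [k1, k2]. A sketch for the 2x2, single-measurement case only:
    det(sI - (A - KC)) is expanded symbolically and its coefficients matched."""
    (a11, a12), (a21, a22) = A
    k1 = p1 + a11 + a22                                   # match the s^1 coefficient
    k2 = (p0 - (a11 - k1) * a22 + a12 * a21) / a12        # match the s^0 coefficient
    return [k1, k2]

# Example 17.1: A = [[0,1],[0,0]], desired polynomial s^2 + 14.142 s + 100 (T = 0.1)
K = observer_gain_2x2([[0, 1], [0, 0]], p1=14.142, p0=100.0)
print(K)  # [14.142, 100.0], matching the acker result
```

For the double integrator, A - KC = [[-k1, 1], [-k2, 0]] has characteristic polynomial s² + k1·s + k2, so k1 and k2 can be read off directly from the desired Butterworth polynomial.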
Figure 17.5: Example 17.1: Block diagram of the LabVIEW program to
calculate the observer gain K
What if the estimates are too noisy?

Real measurements contain random noise. This noise propagates
through the observer via the term Ke = K(y - y_e), where y is the
more-or-less noisy measurement. This implies that the state estimate x_e
will contain noise. What can you do if you regard the amount of noise to
be too large? Here are two options:

Reduce the measurement-based updating of the estimate.
This means that the values of the observer gain K must be smaller.
This can be achieved by increasing the specified response time T_r
defined earlier in this section.³

Include a lowpass filter to smooth out the noise in an
estimate that is too noisy. See Figure 17.6. The filter(s) can be
ordinary time-constant filters.

³ Don't just reduce the values of K directly. The consequence can be an unstable
observer loop!
Figure 17.6: A lowpass filter can be used to smooth a noisy estimate (here,
the estimate x_2e is filtered to give x_2e,filt)
A drawback of both these approaches is that the estimates will track real
variations of the states of the process more slowly. If the estimates are
applied in a feedback control system, this lag may reduce the stability of the
control system. How can you find the allowable amount of filtering, for
example the maximum time constant of the lowpass filters?⁴
17.4 Observability test of continuous-time
systems
It can be shown that a necessary condition for placing the eigenvalues of
the observer for a system at arbitrary locations in the complex plane is
that the system is observable. A consequence of non-observability is that
the acker functions in MATLAB and in LabVIEW used to calculate K
(cf. Example 17.1) give an error message.
How is observability defined? A dynamic system given by

ẋ = Ax + Bu (17.41)
y = Cx + Du (17.42)

is said to be observable if every state x(t_0) can be determined from the
observation of y(t) over a finite time interval, [t_0, t_1].

⁴ By simulating the system.
How can you check if a system is non-observable? Let us make a definition:

Observability matrix:

M_obs = [C; CA; ...; CA^(n-1)] (17.43)
The following can be shown:

Observability Criterion:
The system (17.41) - (17.42) is observable if and only if the observability
matrix M_obs has rank equal to n, where n is the order of the system model
(the number of state variables).

The rank can be checked by calculating the determinant of M_obs. If the
determinant is non-zero, the rank is full, and hence the system is
observable. If the determinant is zero, the system is non-observable.
Non-observability has several consequences:

The transfer function from the input variable u to the output variable
y has an order that is less than the number of state variables (n).

There are state variables or linear combinations of state variables
that do not show any response.

The eigenvalues of an observer for the system cannot be placed
freely in the complex plane, and the acker functions in MATLAB
and in LabVIEW, and also the CD Ackerman.vi in LabVIEW, used
to calculate K (cf. Example 17.1) give an error message.
Example 17.2 Observability

Given the following state space model:

[ẋ_1; ẋ_2] = [0 a; 0 0]·[x_1; x_2] + [0; 1]·u (17.44)
(the system matrix is A, the input matrix is B)

y = [c_1 0]·[x_1; x_2] + [0]·u (17.45)
(the measurement matrix is C, and D = [0])
The observability matrix is (n = 2)

M_obs = [C; CA^(2-1) = CA] = [ [c_1 0]; [c_1 0]·[0 a; 0 0] ] = [c_1 0; 0 a·c_1] (17.46)

The determinant of M_obs is

det(M_obs) = c_1 · a·c_1 - 0 · 0 = a·c_1² (17.47)
The system is observable only if a·c_1² ≠ 0.

Assume that a ≠ 0, which means that the first state variable, x_1,
contains some non-zero information about the second state variable,
x_2. Then the system is observable if c_1 ≠ 0, i.e. if x_1 is measured.

Assume that a = 0, which means that x_1 contains no information
about x_2. In this case the system is non-observable despite x_1 being
measured.⁵

[End of Example 17.2]
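The determinant test of this example can be run numerically. The sketch below (an illustration for the 2-state case only) builds the observability matrix (17.43) and checks its determinant:

```python
def obs_matrix_2x2(A, C):
    """Observability matrix [C; C*A] for a 2-state system, eq. (17.43)."""
    (a11, a12), (a21, a22) = A
    c1, c2 = C
    CA = [c1 * a11 + c2 * a21, c1 * a12 + c2 * a22]   # the row vector C*A
    return [list(C), CA]

def is_observable_2x2(A, C):
    """Observability criterion: full rank <=> non-zero determinant of M_obs."""
    M = obs_matrix_2x2(A, C)
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return det != 0

# Example 17.2 with A = [[0, a], [0, 0]] and C = [c1, 0]:
print(is_observable_2x2([[0, 1.0], [0, 0]], [1.0, 0]))   # a = 1, c1 = 1 -> True
print(is_observable_2x2([[0, 0.0], [0, 0]], [1.0, 0]))   # a = 0 -> False
```

For general systems one would use a rank test (e.g. obsv plus rank in MATLAB) rather than the determinant, since M_obs is square only when there is a single measurement.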
17.5 Discrete-time implementation of the observer

The model defining the state estimate is given by (17.5), which is repeated
here:

ẋ_e = f(x_e, u, w_k) + Ke (17.48)
Of course, it is x_e(t) that you want. It can be found easily by solving the
above differential equation numerically. The simplest numerical solver,
which is probably accurate enough given that the sampling (discretization)
time T_s is small enough, is the Forward integration method. This method
can be applied by substituting the time derivative by the forward
difference:

ẋ_e ≈ [x_e(t_k+1) - x_e(t_k)] / T_s = f[x_e(t_k), u(t_k), w_k(t_k)] + Ke(t_k) (17.49)

where the first term on the right-hand side is abbreviated f(·, t_k) below.
⁵ When I tried to calculate an observer gain for this system with a = 0, I got the
following error message from LabVIEW/MathScript: Error in function acker at line 9.
Control Design Toolset: The system model is not observable.
Solving for x_e(t_k+1) gives the observer algorithm, which is ready for being
programmed:

Observer:

x_e(t_k+1) = x_e(t_k) + T_s [f(·, t_k) + Ke(t_k)] (17.50)
An example of discrete-time implementation is given in Example 17.3.
It is important to prevent the state estimates from getting unrealistic
values. For example, the estimate of a liquid level should not be negative.
And it may be useful to give the user the option of resetting an estimate to
a predefined value by clicking a button etc. The following code implements
such limitation and reset of the estimate x_1e:

...
x1e_k1 = x1e_k + Ts*(f_k1 + K1*e); //Normal update of estimate.
if (x1e_k1 > x1_max) {x1e_k1 = x1_max;} //Limit to max.
if (x1e_k1 < x1_min) {x1e_k1 = x1_min;} //Limit to min.
if (reset == 1) {x1e_k1 = x1_reset;} //Reset.
...
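The same update (17.50), including the limiting logic, can be sketched outside LabVIEW. The Python function below is illustrative (the bound lists and the double-integrator demo values are assumptions, not from the text); it handles one scalar measurement y = C·x:

```python
def observer_step(x_e, u, y, K, Ts, f, C_row, x_min, x_max):
    """One step of the observer (17.50) with limiting, scalar measurement.

    x_e, K, C_row, x_min, x_max are lists of length n; f(x_e, u) returns dx/dt."""
    y_e = sum(c * xe for c, xe in zip(C_row, x_e))   # y_e = C x_e, eq. (17.9)
    e = y - y_e                                       # innovation, eq. (17.1)
    dx = f(x_e, u)                                    # system function f(., t_k)
    x_next = [xe + Ts * (dxi + Ki * e)                # forward-Euler update
              for xe, dxi, Ki in zip(x_e, dx, K)]
    return [min(max(x, lo), hi)                       # keep estimates realistic
            for x, lo, hi in zip(x_next, x_min, x_max)]

# Demo: double integrator from Example 17.1 (x1 = measured position, x2 = speed)
x, x_e = [0.0, 1.0], [0.0, 0.0]          # real state moves; estimate starts wrong
for _ in range(5000):
    x_e = observer_step(x_e, 0.0, x[0], [14.142, 100.0], 0.001,
                        lambda s, u: [s[1], 0.0], [1.0, 0.0],
                        [-1e9, -1e9], [1e9, 1e9])
    x = [x[0] + 0.001 * x[1], x[1]]       # real process: constant speed
print(x_e)   # converges to the real state, close to [5.0, 1.0]
```

Note that the speed x_2 is never measured, yet its estimate converges, exactly as the error model (17.19) promises for an observable system.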
17.6 Estimating parameters and disturbances with observers

In some applications it may be useful to estimate parameters and/or
disturbances in addition to the ordinary state variables. One example is
dynamic positioning systems for ship position control, where the
Kalman Filter is used to estimate environmental forces acting on the ship
(these estimates are used in the controller as a feedforward control signal).
These parameters and/or disturbances must be represented as state
variables. They represent additional state variables. The original state
vector is augmented with these new state variables, which we may denote
the augmentative states. The observer is used to estimate the augmented
state vector, which consists of both the original state variables and the
augmentative state variables. But how can you model these augmentative
state variables? The augmentative model must be in the form of
differential equations, because that is the model form used when designing
observers. To set up an augmentative model you must make an
assumption about the behaviour of the augmentative state. Let us look at
some augmentative models.
Augmentative state is (almost) constant, or we do not know
how it varies: Both these assumptions are expressed with the
following differential equation describing the augmentative state
variable x_a:

ẋ_a(t) = 0 (17.51)

This is the most common way to model the augmentative state.

Augmentative state has (almost) constant rate: The
corresponding differential equation is

ẍ_a = 0 (17.52)

or, in state space form, with x_a1 ≡ x_a,

ẋ_a1 = x_a2 (17.53)
ẋ_a2 = 0 (17.54)

where x_a2 is another augmentative state variable.
Once you have defined the augmented model, you can design and
implement the observer in the usual way. The observer then estimates both
the original states and the augmentative states.
The following example shows how state augmentation can be done in
a practical (simulated) application.
Example 17.3 Observer for estimating level and flow

Figure 17.7 shows a liquid tank with a level control system (PI controller)
and an observer. (This system is also used in Example 18.2, where a
Kalman Filter is used instead of an observer.) We will design an observer
to estimate the outflow F_out. The level h is measured.
The mass balance of the liquid in the tank is (the mass is ρ·A_tank·h)

ρ A_tank ḣ(t) = ρ K_p u - ρ F_out(t) (17.55)
             = ρ [K_p u - F_out(t)] (17.56)

After cancelling the density ρ, the model is

ḣ(t) = (1/A_tank) [K_p u - F_out(t)] (17.57)

We assume that we do not know how the outflow actually varies, so we
use the following augmentative model describing its behaviour:

Ḟ_out(t) = 0 (17.58)
Figure 17.7: Example 17.3: Liquid tank with level control system and observer
for estimation of the outflow

The model of the system is given by (17.57) - (17.58). The parameter
values of the tank are displayed (and can be adjusted) on the front panel,
see Figure 17.7. The sampling time is

T_s = 0.1 s (17.59)
Although it is not strictly necessary, it is convenient to rename the state
variables using standard names. So we define

x_1 = h (17.60)
x_2 = F_out (17.61)

The model (17.57) - (17.58) is now

ẋ_1(t) = (1/A_tank) [K_p u(t) - x_2(t)] ≡ f_1(·) (17.62)
ẋ_2(t) = 0 ≡ f_2(·) (17.63)

The measurement equation is

y = x_1 (17.64)
The initial estimates are as follows:

x_1e(0) = x_1(0) = y(0) (from the sensor) (17.65)
x_2e(0) = 0 (assuming no information about the initial value) (17.66)
The observer algorithm is, according to (17.50),

x_1e(t_k+1) = x_1e(t_k) + T_s [f_1(·, t_k) + K_1 e] (17.67)
            = x_1e(t_k) + T_s { (1/A_tank) [K_p u(t_k) - x_2e(t_k)] + K_1 e } (17.68)

x_2e(t_k+1) = x_2e(t_k) + T_s [f_2(·, t_k) + K_2 e] (17.69)
            = x_2e(t_k) + T_s K_2 e (17.70)
To calculate the observer gain K we need a linearized process model on the
form

ẋ = A_c x + B_c u (17.71)
y = Cx + Du (17.72)

Here:
A_c = [ ∂f_1/∂x_1 = 0, ∂f_1/∂x_2 = -1/A_tank ; ∂f_2/∂x_1 = 0, ∂f_2/∂x_2 = 0 ] |_{x_e(k), u(k)} (17.73)

    = [ 0  -1/A_tank ; 0  0 ] (17.74)
B_c = [ ∂f_1/∂u = K_p/A_tank ; ∂f_2/∂u = 0 ] |_{x_e(k), u(k)} (17.75)

    = [ K_p/A_tank ; 0 ] (17.76)
C = [1 0] (17.77)

D = [0] (17.78)
The Butterworth polynomial is (17.26), which is repeated here:

B_2(s) = (Ts)² + 1.4142(Ts) + 1 (17.79)

where T is given by (17.33), which is repeated here:

T = T_r / n (17.80)

where n = 2 (the number of states). I specify the observer response time
T_r to be

T_r = 2 s (17.81)

The observer gain K is calculated using function blocks in LabVIEW, see
Figure 17.8. The result is
Figure 17.8: Example 17.3: While-loop for calculating the observer gain K
K = [K_1; K_2] = [1.414; -0.1] (17.82)
Figure 17.9 shows the responses after a stepwise change of the outflow.
(The level is controlled with a PI controller with settings K_c = 10 and
T_i = 10 s.) The figure shows the real (simulated) and estimated level
and outflow. We see from the lower chart in the figure that the observer
seems to estimate the outflow well, with a response time of
approximately 2 s, as specified, and with zero error in steady state.
Figure 17.9: Example 17.3: The responses after a stepwise change of the
outflow.
Figure 17.10 shows the implementation of the observer with C code in a
Formula Node. (The Formula Node is just one part of the block diagram.
The total block diagram consists of one While loop where the observer
gains are calculated, and one Simulation loop containing the Formula
Node, the PID controller, and the tank simulator.) Limitation of the estimated
states to maximum and minimum values is included in the code. The
input a is used to force the observer to run just as a simulator, which is
very useful at sensor failure, cf. Section 17.8.
Figure 17.10: Example 17.3: Implementation of the observer in a Formula
Node. (The observer gain K is fetched from the While loop in the Block
diagram, see Figure 17.8, using local variables.)
[End of Example 17.3]
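The core of this example can be reproduced outside LabVIEW. The sketch below is illustrative only: the tank parameters (A_tank = 0.1 m², K_p = 0.001) and the constant control signal are assumptions, not values from the text, and the PI controller is left out to keep the focus on the observer equations (17.68) and (17.70):

```python
A_TANK = 0.1    # tank area [m^2] -- assumed value, not from the text
K_P = 0.001     # pump gain [m^3/s per control unit] -- assumed value
TS = 0.1        # sampling time [s], eq. (17.59)
K1, K2 = 1.414, -0.1   # observer gains, cf. (17.82)

def run():
    h, F_out = 0.5, 0.002       # real level and real (unknown) outflow
    h_e, F_out_e = 0.5, 0.0     # estimates; outflow estimate starts at zero
    u = 3.0                     # constant control signal (no level controller here)
    for k in range(1000):
        if k == 500:
            F_out = 0.004       # stepwise change of the outflow
        e = h - h_e                                           # innovation
        h_e += TS * ((K_P * u - F_out_e) / A_TANK + K1 * e)   # eq. (17.68)
        F_out_e += TS * K2 * e                                # eq. (17.70)
        h += TS * (K_P * u - F_out) / A_TANK                  # real tank, (17.57)
    return F_out, F_out_e

F_out, F_out_e = run()
print(F_out_e)   # tracks the real outflow of 0.004 with zero steady-state error
```

After the step, the level drops faster than the observer's level estimate, the innovation e goes negative, and the negative gain K_2 then pushes the outflow estimate upward until the innovation vanishes again.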
17.7 Using observer estimates in controllers

In the introduction to this chapter, several control functions were listed
which basically assume that measurements of states and/or disturbances
(loads) are available. If measurements from hard sensors for some reason
are not available, you can try using an estimate as provided by a
soft sensor, such as an observer (or Kalman Filter), instead. One such control
function is feedforward control. Figure 17.11 shows feedforward from an
estimated disturbance.
Figure 17.11: Control system including feedforward control from estimated
disturbance (with observer or Kalman Filter)
Example 17.4 Level control with feedforward from estimated
disturbance (load)

Figure 17.7 in Example 17.3 shows the front panel of a LabVIEW program
of a simulated level control system. On the front panel is a switch which
can be used to activate feedforward from the estimated outflow, F_out,est. The
estimator for F_out,est, based on an observer, was derived in that example. Let us
now derive the feedforward controller, and then look at simulations of the
control system.
The feedforward controller is derived from a mathematical model of the
process. The model is given by (17.57), which is repeated here:

ḣ(t) = (1/A_tank) [K_p u - F_out(t)] (17.83)

Solving for the control variable u, and substituting the process output variable
h by its setpoint h_SP, gives the feedforward controller:

u_f(t) = A_tank ḣ_SP(t) / K_p + F_out(t) / K_p (17.84)

where the first term (u_f,SP) handles setpoint variations and the second
term (u_f,d) compensates the disturbance.
Let us assume that the level setpoint h_SP is constant. Then ḣ_SP(t) = 0,
and the feedforward controller becomes

u_f(t) = F_out(t) / K_p (17.85)

Assuming that the estimate F_out,est(t) is used instead of F_out, the
feedforward controller becomes

u_f(t) = F_out,est(t) / K_p (17.86)
Let us look at a simulation where the outflow is changed as a step
from 0.002 to 0.008 m³/s. Figure 17.12 shows the level response with
feedforward.

Figure 17.12: Example 17.4: Level response with feedforward from estimated
outflow

Compare with Figure 17.9, which shows the response without
feedforward. There is a substantial improvement from using feedforward
from the outflow, even though the outflow is not measured (only estimated)!

[End of Example 17.4]
17.8 Using observer for increased robustness of
feedback control at sensor failure
If, in a feedback control system, the sensor fails so that the controller (e.g. a
PID controller) receives an erroneous measurement signal, then the
controller will adjust the control signal to a too large or too small value.
For example, assume that the level sensor fails and sends a zero level
measurement signal to the level controller. Then the level controller adjusts
the control signal to its maximum value, causing the tank to become full.
This problem can be solved as follows:

Base the feedback on the estimated measurement, y_e, as calculated
by an observer (or a Kalman Filter).

While the sensor is failing (assuming some kind of measurement error
detection has been implemented, of course): prohibit the estimate
from being updated by the (erroneous) measurement. This can be
done by simply multiplying the term Ke by a factor, say a, so that
the resulting estimator formula is

x_e(t_k+1) = x_e(t_k) + T_s [f(·, t_k) + aKe(t_k)] (17.87)
a = 1 is the default value, to be used when x_e is to be updated
by the measurement (via the measurement estimate error e). a is set
to 0 when x_e shall not be updated, implying that x_e effectively is

x_e(t_k+1) = x_e(t_k) + T_s f(·, t_k) (17.88)

which is just a simulator of the process. So, the controller uses a
more-or-less correct simulated measurement instead of an erroneous
real measurement. This continues the normal operation of the
control system, delaying or preventing dangerous situations.
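The a factor can be sketched as a simple guard in the observer update; in the illustration below, `sensor_ok` stands for the output of whatever error-detection mechanism is assumed to be in place:

```python
def observer_step_with_failover(x_e, u, y, K, Ts, f, C_row, sensor_ok):
    """Observer update (17.87): a = 1 in normal operation, a = 0 at sensor
    failure, in which case (17.88) makes the update a pure process simulator."""
    a = 1.0 if sensor_ok else 0.0
    y_e = sum(c * xe for c, xe in zip(C_row, x_e))   # estimated measurement
    e = y - y_e                                       # innovation
    dx = f(x_e, u)                                    # system function
    return [xe + Ts * (dxi + a * Ki * e) for xe, dxi, Ki in zip(x_e, dx, K)]

# With a failed sensor reporting y = 0, the bogus reading is ignored:
x_e = observer_step_with_failover([0.5, 0.0], 0.0, 0.0, [1.414, -0.1], 0.1,
                                  lambda s, u: [0.0, 0.0], [1.0, 0.0],
                                  sensor_ok=False)
print(x_e)   # [0.5, 0.0] -- unchanged, since f returns zero in this toy case
```

With sensor_ok set to True, the same call would pull the estimate towards the erroneous zero measurement, which is exactly the behaviour Scenario 1 of Example 17.5 warns about.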
Example 17.5 Increased robustness of level control system with
observer at sensor failure
This example is based on the level control system studied earlier in this
chapter.
The following two scenarios are simulated. In both, the level sensor fails by
suddenly producing zero voltage (indicating zero level) at some point in
time.
Scenario 1 (not using the observer): Nothing special has been done
to handle the sensor failure. The level controller continues to control
using the erroneous level measurement. (Actually, the observer is not
in use.) The measurement value of zero causes the controller to act
as if the tank actually were empty, thereby increasing the control signal to
the inlet pump to maximum, causing the tank to become full (which
could be a dangerous situation in certain cases or with other
processes). Figure 17.13 shows the simulated responses.
Figure 17.13: Example 17.5: Scenario 1: Simulation of level control system
with sensor failure.
Scenario 2 (using the observer): The level controller
continuously uses the estimated level as calculated by the observer for
feedback control. When the sensor fails (as detected by some
assumed error-detection algorithm or device), the state estimates are
prevented from being updated by the measurement. This is done by
setting the parameter a in (17.87) equal to zero, and consequently
the observer just runs as a simulator. To illustrate that the control
system continues to work well after the sensor has failed, the level
setpoint is changed from 0.5 to 0.7 m. The outflow is changed
towards the end of the simulation.
The simulations, see Figure 17.14, show that the control system continues
to work despite the sensor failure: the level follows the setpoint. However,
when the outflow is increased, the level decreases. This is because
the estimator is not able to estimate the outflow correctly, since the
observer has no measurement-based update of the estimates. So, the
control system may work well for some time, but not forever, because
unmodelled disturbances can cause the estimated states to diverge from the
true states. Still, the robustness against sensor failure has been greatly
improved!
[End of Example 17.5]
Figure 17.14: Example 17.5: Scenario 2: Simulation of level control system
with sensor failure. Robustness is increased thanks to the observer!