(SICI) 1099-1115(199711)11:7<603::AID-ACS455>3.0.CO;2-H
SUMMARY
A simple, practical and unified method is presented for detecting parameter identifiability problems
caused by non-persistent excitation, overparametrization and/or output feedback within the system to be
identified. All the required information is generated inherently by the multiple-model least-squares (MMLS)
method and/or the augmented UD identification (AUDI) algorithm developed by the authors, so very little
extra computation is required. Several examples are included to illustrate the principles involved and their
application. © 1997 by John Wiley & Sons, Ltd.
Int. J. Adapt. Control Signal Process., 11, 603—619 (1997)
No. of Figures: 2 No. of Tables: 11 No. of References: 12
Key words: system identification; parameter identifiability; multiple-model least-squares (MMLS);
augmented UD identification (AUDI); closed-loop identification
1. INTRODUCTION
Parameter identifiability is a concept that is central to system identification.1, 2 It is crucial to
know whether the model parameters are identifiable with the obtained process input/output data
and within the given model set. This paper presents a simple, practical and unified means to detect
parameter identifiability problems associated with non-persistent input excitation, overpara-
metrization and/or output feedback, which are the main causes of identifiability problems.
In practice, non-identifiability of model parameters results from the singularity of the informa-
tion or covariance matrix, which is usually caused by the following conditions.
1. Input signals are autocorrelated. This is also called non-persistent input excitation.3 In this
case the input variable at any time can be represented by a finite-order combination of its
past values. As a result, model parameters are uniquely identifiable only up to a certain
order. This can occur, for example, when a process is running near steady state and the
process input signal is almost constant. As a result, the order of the input excitation
approaches zero and the input/output data are no longer informative enough for parameter
estimation.
1 Correspondence to: S. S. Niu, Treiber Controls Inc., Suite 1200, 390 Bay Street, Toronto, Ontario M5H 2Y2, Canada.
In this paper the identifiability problems associated with low input excitation, overparametri-
zation and/or output feedback are investigated using the multiple-model least-squares (MMLS)
method6 and the augmented UD identification (AUDI) algorithm,7 which are a fundamental
reformulation and efficient implementation of the widely used least-squares estimator in batch
and recursive form respectively. The MMLS and AUDI approaches simultaneously produce the
model parameter estimates and loss functions of all the process and feedback models from order
1 to a user-specified maximum value n, plus other relevant information such as the process
signal-to-noise ratio. This information provides the basis for evaluation of parameter identifiabil-
ity at the same time as the model parameters are being estimated, either in batch or recursive
mode.
2. MULTIPLE-MODEL LEAST-SQUARES
This section briefly reviews the MMLS and AUDI methods to provide the necessary background
for this paper. Details on MMLS/AUDI can be found in References 6—8.
Assume that the process being investigated is represented by the difference equation model
Note that the input/output variables are arranged in pairs and the current process output z(t) is
included in the augmented data vector. This special structure is the basis of the MMLS approach
and is also the fundamental difference between the MMLS formulation and that of the conven-
tional identification methods.
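For illustration, the pairwise arrangement just described can be sketched in a few lines of Python. The helper name is ours, and the sign convention of the entries is an assumption (some MMLS/AUDI formulations negate the output entries so that parameter estimates come out with conventional signs):

```python
import numpy as np

def augmented_data_vector(z, u, t, n):
    """phi(t) = [z(t-n), u(t-n), ..., z(t-1), u(t-1), z(t)]^T:
    input/output values arranged in pairs, current output z(t) last."""
    phi = [x for j in range(t - n, t) for x in (z[j], u[j])]
    phi.append(z[t])
    return np.array(phi)
```

The current output z(t) closing the vector is exactly what lets a single factorization of the resulting information matrix produce forward and backward models of every order at once.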
Define the augmented information matrix (AIM) as
    S(t) = Σ_{j=1}^{t} φ(j) φᵀ(j)    (3)
and decompose S(t) into the LDLT factored form
PARAMETER IDENTIFIABILITY PROBLEMS 605
                                          0  ⇒  z(t−n)
                                     z(t−n)  ⇒  u(t−n)
                             z(t−n), u(t−n)  ⇒  z(t−n+1)
                  z(t−n), u(t−n), z(t−n+1)  ⇒  u(t−n+1)        (8)
                                          ⋮
        z(t−n), u(t−n), z(t−n+1), …, z(t−1)  ⇒  u(t−1)
z(t−n), u(t−n), z(t−n+1), …, z(t−1), u(t−1)  ⇒  z(t)
where '⇒' means to use the best linear combination (in the least-squares sense) of the variables
on the left-hand side to fit/predict the variable on the right-hand side. That is, each equation in (8)
represents a difference equation model of a particular order 0 ≤ i ≤ n, and hence equation (8) is
referred to as the multiple-model structure. The decomposition in (4) simultaneously provides, in
U(t) (5) and D(t) (6) respectively, all the parameter estimates and corresponding loss functions for
all the models defined in (8).
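The claim that one factorization yields every model at once can be checked numerically. The sketch below is not the authors' code; it is a plain LDLᵀ reading of the AIM via NumPy's Cholesky factor, using the standard interpretation that the ith diagonal element of D is the residual sum of squares of regressing the ith entry of the data vector on the earlier entries:

```python
import numpy as np

def mmls_decompose(S):
    """Factor S = L D L^T and return (U, d) with U = L^{-T}.
    U is unit upper triangular; column i holds the (negated) least-squares
    coefficients fitting variable i from variables 0..i-1, and d[i] is the
    corresponding loss function (residual sum of squares)."""
    Lc = np.linalg.cholesky(S)   # S = Lc Lc^T; requires S positive definite
    d = np.diag(Lc) ** 2         # D = diag(Lc)^2
    L = Lc / np.diag(Lc)         # unit lower triangular factor of S = L D L^T
    U = np.linalg.inv(L).T       # parameter matrix U = L^{-T}
    return U, d
```

Note that a singular AIM, i.e. the identifiability failures discussed below, makes the Cholesky step break down: that is precisely the zero-loss-function signal this paper exploits.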
(a) The odd-numbered columns of the parameter matrix U(t), i.e. from column 3 up to 2n+1,
contain the parameter estimates of the forward models (also called the process models) from
order 1 to n. For example, the third column contains the parameter estimates for the
first-order model

which is the third equation in (8), under the assumption that the process input/output are
stationary. More generally, the (2i+1)th column contains the parameter estimates of the
ith-order process model, i = 1, 2, …, n, which is the (2i+1)th equation in (8).
(b) The even-numbered columns, i.e. from column 2 up to 2n, contain the parameter estimates
of the backward models (also called the feedback models) from order 1 to n. For example, the
fourth column contains the parameter estimates for the second-order feedback model
2. The loss function matrix D(t) contains the loss functions corresponding to all the process and
feedback models defined in the parameter matrix U(t).
(a) The odd-numbered elements J_f(i) in D(t) contain the loss functions for the process models
defined in matrix U(t). For example, the third diagonal element in matrix D(t) is the loss
function of the process model (9), which corresponds to the third column in U(t).
(b) The even-numbered elements J_b(i) in D(t) contain the loss functions of the feedback models
defined in matrix U(t). For example, the fourth diagonal element in matrix D(t) is the loss
function of the feedback model (10), which corresponds to the fourth column of U(t).
    Φ(t) = [φᵀ(1); φᵀ(2); …; φᵀ(t)] = Q [R(t); 0]    (11)

    C(t) = S⁻¹(t) = [Φᵀ(t) Φ(t)]⁻¹    (12)
From (4) it is known that a UDUᵀ decomposition of the augmented covariance matrix (ACM)
C(t) leads to
C(t) contains all the information about the process parameters and loss functions and is the only
matrix that needs to be updated with time, i.e.
Substitute (13) into (14) and apply the rank-one update formula.2 Then at every time interval the
updated parameter matrix U(t) and loss function matrix D(t) are obtained. The stepwise
procedure is given in Table I.
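A naive recursive sketch of that update loop is given below, with helper names of our own choosing. For clarity it refactorizes the AIM at every step; the actual AUDI algorithm instead propagates the U and D factors directly with a numerically cheaper rank-one UD update:

```python
import numpy as np

class NaiveAUDI:
    """Order-n recursive estimator: feed (z, u) pairs, get (U, d) back."""

    def __init__(self, n, forgetting=1.0):
        self.n = n
        self.lam = forgetting                    # exponential forgetting factor
        self.S = 1e-6 * np.eye(2 * n + 1)        # small startup regularization
        self.hist = []                           # last n+1 (z, u) pairs

    def update(self, z_t, u_t):
        self.hist = (self.hist + [(z_t, u_t)])[-(self.n + 1):]
        if len(self.hist) <= self.n:
            return None                          # not enough data yet
        # phi(t) = [z(t-n), u(t-n), ..., z(t-1), u(t-1), z(t)]
        phi = np.array([x for zj, uj in self.hist[:-1] for x in (zj, uj)]
                       + [self.hist[-1][0]])
        self.S = self.lam * self.S + np.outer(phi, phi)  # rank-one AIM update
        Lc = np.linalg.cholesky(self.S)
        d = np.diag(Lc) ** 2                     # loss functions of all models
        U = np.linalg.inv(Lc / np.diag(Lc)).T    # parameter matrix
        return U, d
```

Run on noise-free data from a low-order process, the higher-order forward losses in d collapse to (nearly) zero, which is exactly the overparametrization signature discussed later.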
The parameter matrix U(t) and the loss function matrix D(t) from either the batch MMLS
method or the recursive AUDI algorithm, which contain the parameter estimates and loss
functions of all the process and feedback models, provide the basis for detection of low excitation
and overparametrization. The feedback (backward) models provide the basis for measuring the
amount of output feedback inherent in the process being identified.
In the feedback channel of Figure 1(b), r(t) is the calculated control action from the controller,
based on the process outputs up to time t and process inputs up to time t−1. w(t) is the noise in
the feedback channel, which can be probing noise that is added deliberately for improving
closed-loop identifiability. r_sp(t) is the external test input signal, which usually takes the form of
a pseudorandom binary sequence (PRBS) or simply step changes. Obviously, the backward
(feedback) model discussed in Section 2 represents the correlation of the current input u(t) with
the outputs and past inputs via the feedback channel, i.e.
608 S. S. NIU AND D. G. FISHER
As a special case, when there is no feedback present, the parameter estimates of the backward
models are all zeros, indicating no correlation between inputs and outputs via the feedback loop.
From a mathematical point of view the process and feedback models are identical in structure
(see Figure 1(b)) and thus can be treated similarly. The ideal condition for identification of the
process model is maximum correlation between the output z(t) and the regressor via the
process channel and minimum correlation between the input and the regressor via the feedback
channel.
Now examine the structure of the loss function matrix D(t) in greater detail. The loss function is
a measure of the goodness-of-fit of the regressor to the output (forward model) or the input
(feedback model). Figure 1 shows that the process and feedback channels are identical in structure
for identification, so the interpretations of the parameter and loss functions of both the forward
and backward models are very similar. The trajectories of the loss functions versus the model
order are depicted in Figures 2(a) and 2(b) for the forward and backward models respectively.
Some properties of the loss functions of the forward and backward models are discussed below.
    J_f(0)(t) = Σ_{j=1}^{t} z²(j)
    J_f(n)(t) = Σ_{j=1}^{t} e²(j) = (t − dim θ) σ_v²    for large t
where e(t) is the estimation residual and dim θ stands for the dimension of θ. The following
relationship among the loss functions of different orders holds for t → ∞:

    J_f(0)(t) > J_f(1)(t) > … > J_f(n₀)(t) = J_f(n₀+1)(t) = … = J_f(n)(t) ≥ 0    (15)

where n₀ is the true model order. These are the odd-numbered diagonal elements in the loss
function matrix D(t). The loss functions can be interpreted in terms of the following three cases.
1. Line 1 in Figure 2(a): this is the situation with no process noise v(t), i.e. σ_v² = 0, and implies
maximum correlation via the process channel. The loss function converges to zero at model
order 2 in this case, which indicates that the estimated model order n̂ = 2. The correspond-
ing parameter estimates for the model with order n̂ are exact. Any model with order higher
than n̂ is overparametrized.
2. Line 2 in Figure 2(a): this corresponds to the case where σ_v² > 0 and is the most commonly
encountered case in practice. The loss function converges to a non-zero constant value. The
model order can be determined easily with the loss functions provided. In this example,
n̂ = 2.
3. Line 3 in Figure 2(a): the loss function is a flat line and does not decrease as the model order
increases. This implies that the process output is not correlated with past outputs and
inputs, i.e. the process does not have any dynamics relating u and y.
The correlation via the process channel for the ith-order process model can be measured by the
output correlation coefficient χ_z(i), which is defined as

    χ_z(i) = √(1 − J_f(i)(t) / J_f(0)(t)),    i = 1, 2, …, n    (16)
    0 = χ_z(0) < χ_z(1) < … < χ_z(n₀) = … = χ_z(n) ≤ 1    (17)

Note that χ_z(i) = 1 implies J_f(i)(t) = 0 and corresponds to maximum correlation of the inputs and
outputs via the process channel, which is line 1 in Figure 2(a). On the other hand, χ_z(i) = 0
corresponds to J_f(i)(t) = J_f(0)(t) and implies no correlation between process output and input via
the process channel, i.e. no process dynamics.
    J_b(0)(t) ≜ Σ_{j=1}^{t} u²(j)

and converges to J_b(n)(t), whose value is given by

    J_b(n)(t) = (t − dim α) σ_w²    for large t    (18)

The loss functions of different order satisfy
1. Line 1: this is the ideal case for identification and corresponds to open-loop conditions, i.e.
there is no correlation between process input and output via the feedback channel. The loss
function stays at the value J_b(0)(t) as the model order increases.
2. Line 2: this corresponds to the case where the input is correlated with past outputs and/or
past inputs owing to non-persistent input excitation or output feedback. The order of the
excitation can be determined by investigating the loss functions of the backward models.
3. Line 3: this is the worst case for identification of process model parameters. The input is
completely correlated with past inputs, outputs and/or noise.
The correlation of the input and output via the feedback channel can be measured by the input
correlation coefficient χ_u(i), which is defined, similarly to (16), as

    χ_u(i) = √(1 − J_b(i)(t) / J_b(0)(t)),    i = 1, 2, …, n    (19)

    0 = χ_u(0) < χ_u(1) < … < χ_u(n₀) = … = χ_u(n) ≤ 1    (20)
Clearly, χ_u(i) = 0 corresponds to the open-loop condition, while χ_u(i) = 1 indicates maximum
correlation via the feedback channel, which is not desirable for identification.
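Both coefficient families can be read off the diagonal of D(t) in one pass. A sketch with our own helper name: it assumes the diagonal ordering [J_f(0), J_b(1), J_f(1), …, J_b(n), J_f(n)] described in Section 2, and takes J_b(0) = Σ u² as a separate argument since it is not stored in D(t):

```python
import numpy as np

def correlation_coefficients(d, jb0):
    """Return (chi_z, chi_u), each of length n, from the diagonal d of D(t)
    ordered [J_f(0), J_b(1), J_f(1), ..., J_b(n), J_f(n)]; jb0 = sum of u^2."""
    d = np.asarray(d, dtype=float)
    # clip guards against tiny negative arguments from rounding
    chi_z = np.sqrt(np.clip(1.0 - d[2::2] / d[0], 0.0, None))   # from J_f(1..n)
    chi_u = np.sqrt(np.clip(1.0 - d[1::2] / jb0, 0.0, None))    # from J_b(1..n)
    return chi_z, chi_u
```

Fed the diagonal values of Table III with jb0 = 500, this reproduces the parenthesized coefficients of that table (0.97, 0.98 for the forward models, approximately 0 for the backward models).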
Define a new diagonal matrix Γ, called the (input/output) correlation matrix, with the form

    Γ = diag(1, χ_u(1)(t), χ_z(1)(t), …, χ_u(n)(t), χ_z(n)(t))    (21)

This matrix will be used as the basis for detecting identifiability problems.
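Assembling the matrix of (21) is then a small helper (ours; the leading 1 and the alternating χ_u/χ_z ordering follow the definition above):

```python
import numpy as np

def correlation_matrix(chi_u, chi_z):
    """Gamma = diag(1, chi_u(1), chi_z(1), ..., chi_u(n), chi_z(n))."""
    diag = [1.0]
    for cu, cz in zip(chi_u, chi_z):
        diag += [cu, cz]
    return np.diag(diag)
```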
where D_M ⊂ ℝᵈ is the set in which the d-dimensional parameter vector θ varies.
Φ(t) is defined in (11) and U(t) has the structure defined in (5). If Φ(t) is of full rank, then U(t) from
(5) is the unique least-squares parameter matrix for all the models defined in (8).
Now assume that a subset of the augmented data vector (2) is linearly correlated, i.e.
Assume that any subset smaller than φ₁(t) is not correlated; then there exists a matrix P(t) with
the form

            [ 0 … 0  ∗ … ∗ ]
            [    ⋱   ⋮   ⋮ ]
            [      0 ∗ … ∗ ]
    P(t) =  [        0 … 0 ]
            [          ⋱   ]
            [            0 ]
that satisfies

    Φ(t) P(t) = 0

where each column of asterisks '∗' in P(t) is an arbitrary scalar multiple of the vector c. Now
obviously
Since the matrix P(t) is not unique, neither is the parameter matrix U(t). Bearing in mind that the
augmented data vector φ(t) is the basic component of the above-mentioned matrices, the
following statement regarding parameter identifiability holds.
A necessary condition for the process parameters to be uniquely identifiable within a given
model set is that any subset of the augmented data vector (2) should not be linearly correlated.
A closer look into the parameter matrix reveals that the first m₁ columns of U(t) are unique, since
the first m₁ columns in P(t) can only be zeros. This means that with the MMLS structure,
parameter identifiability should always be associated with model order, i.e. to what order the
models are identifiable. The size of the largest uncorrelated subset of the augmented data vector
determines the maximum order of models that can be uniquely identified.
As shown by the MMLS structure discussed in Section 2, each column of the parameter and
loss function matrices corresponds to a specific model order, which in turn corresponds to
a subset of the data vector with a certain dimension. If this specific subset of the data vector is
autocorrelated, then a zero loss function in the corresponding loss function matrix will result.
This renders all higher-order models (both forward and backward models) not uniquely identifi-
able, as can be easily seen from (24). The unified rule for parameter identifiability with MMLS or
AUDI then becomes the following.
With the MMLS structure, if the loss function of a model is zero, then all the models with
higher orders will not be uniquely identifiable. In other words, in the loss function matrix
(from top left to bottom right corner) the first zero diagonal element indicates the beginning of
identifiability problems. All models to the right of this element are not uniquely identifiable.
This rule covers all situations of overparametrization, non-persistent input excitation and output
feedback. When the loss function of a forward model becomes zero, all the higher-order
models are overparametrized and not uniquely identifiable. On the other hand, when the loss
function of a backward model is zero, the input excitation is non-persistent or output feedback is
present. Checking the parameter estimates gives further information about whether the
identifiability problem is due to output feedback.
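The rule itself amounts to a one-pass scan down the diagonal. A sketch (our own function name and tolerance; the paper's quantitative version uses the correlation coefficients rather than a hard threshold):

```python
import numpy as np

def first_identifiability_break(d, tol=1e-8):
    """Return the index (0-based, along the diagonal of D(t)) of the first
    near-zero loss function, or None if all models are uniquely identifiable.
    Losses are judged relative to d[0] = J_f(0), the raw output energy."""
    d = np.asarray(d, dtype=float)
    small = np.flatnonzero(d <= tol * d[0])
    return int(small[0]) if small.size else None
```

Every column of the parameter matrix at or to the right of the returned index would then be flagged as not uniquely identifiable.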
In practice, however, the elements of the loss function matrix usually converge to a small
positive value instead of zero, owing to the presence of coloured noise, non-linearity, etc. Some
quantitative criteria are therefore needed. To this end the correlation coefficients χ_u(i) and χ_z(i)
can be used. The ideal condition for the identifiability of the ith-order process model is given by

    χ_z(i) = 1    (26)
In summary, for better identifiability and accuracy the following conditions should be satisfied:
4. EXAMPLES
Assume that the process to be identified is represented by the difference equation model
where z(t) and u(t) are the process output and input respectively and v(t) is white noise with zero
mean and standard deviation σ_v. A series of simulations is presented below to illustrate how
MMLS can be used for testing system identifiability.
Table III. Loss function matrix (correlation coefficients in parentheses)
    J_f(0) = 3502.7 (0.00)    J_b(1) = 500.0 (0.00)    J_f(1) = 205.7 (0.97)
    J_b(2) = 499.3 (0.00)     J_f(2) = 137.4 (0.98)    J_b(3) = 499.0 (0.00)
    J_f(3) = 137.0 (0.98)
Constructing the augmented data vector and the augmented information matrix (AIM) accord-
ing to equations (2) and (3), and decomposing the AIM with the LDLT factorization technique,
leads to the parameter matrix U(t) and loss function matrix D(t) shown in Tables II and III
respectively.
From the loss function matrix it is seen that no element is close to zero; therefore no
identifiability problem should occur. Actually, by investigating the loss functions of the backward
(feedback) models (the even-numbered elements in the loss function matrix), it is found that they
are approximately equal to each other and to the sum of the squared inputs (Σ_{j=1}^{500} u²(j) = 500).
This implies an input correlation coefficient χ_u(i) ≈ 0 for i = 1, 2, 3, which suggests at least third-
order excitation. This is consistent with the properties of an RBS. The correlation coefficients given
in parentheses in Table III for each model indicate that there are no identifiability problems due
to either inputs or outputs.
The loss functions of the forward (process) models (the odd-numbered diagonal elements in the
loss function matrix) converge to a constant 137 (≈ t σ_v² = 500 × 0.5² = 125) at order 2. This
clearly suggests that the order of the forward (process) model is 2, which agrees with the actual
process (29).
Since none of the loss functions of any of the models (forward and backward models) is zero, all
the models produced in the parameter matrix, including process and feedback models, are reliable
and unique. For example, although the third-order model (the seventh column in the parameter
matrix) has two extra degrees of freedom, the two extra parameters associated with these two
extra degrees of freedom are close to their true value of zero. Since there is no feedback present in
the system, all the parameter estimates of the backward models are in the vicinity of zero, their
true value.
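The mechanics of this example can be reproduced in a few lines. The sketch below uses our own second-order test model, noise level and RBS seed (not necessarily the paper's exact process (29)), builds the AIM of equation (3) and reads the loss functions off its factorization; the backward losses come out near Σu² and the forward losses level off at order 2, qualitatively as in Table III:

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 500, 3
u = np.where(rng.random(N) < 0.5, -1.0, 1.0)    # random binary input, |u| = 1
v = 0.5 * rng.standard_normal(N)                # white noise, sigma_v = 0.5
z = np.zeros(N)
for t in range(2, N):                           # our assumed 2nd-order process
    z[t] = 1.5 * z[t-1] - 0.7 * z[t-2] + u[t-1] + 0.5 * u[t-2] + v[t]

# Stack the augmented data vectors and form the AIM S = sum phi phi^T
Phi = np.array([np.r_[np.c_[z[t-n:t], u[t-n:t]].ravel(), z[t]]
                for t in range(n, N)])
S = Phi.T @ Phi

# Diagonal of D(t): loss functions of all forward/backward models up to order n
d = np.diag(np.linalg.cholesky(S)) ** 2
print(np.round(d, 1))
```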
Table V. Loss function matrix (no process noise; correlation coefficients in parentheses)
    J_f(0) = 1139.8 (0.00)    J_b(1) = 499.6 (0.00)    J_f(1) = 68.3 (0.97)
    J_b(2) = 495.3 (0.09)     J_f(2) = 0.0 (1.00)      J_b(3) = 492.2 (0.12)
    J_f(3) = 0.0 (1.00)
4.2. Overparametrization
The identification procedure is repeated with no process noise, i.e. v(t) = 0. The resulting
parameter and loss function matrices are shown in Tables IV and V respectively. Now it is seen
that the loss function of the second-order process model converges to zero, which leads to
χ_z(2) = 1. This is the ideal condition for identifying the second-order process model. As can be seen
from the parameter matrix, the parameter estimates of the second-order process model are equal
to their true values. However, identifiability problems should be expected in the overparametrized
models. This is seen by comparing the third-order process model parameters (column 7 in
Table IV) with the true parameters or with those in column 7 of Table II. Actually, if column 5 of
the parameter matrix is multiplied by a constant κ and added to column 7, the resulting column
7 will be a new third-order model that gives exactly the same loss function as the original column
7, no matter what value κ takes. A special value of κ = 1.4026 results in a new column 7 with
the exact values of the third-order model. This clearly indicates that the third-order
models cannot be uniquely identified. Checking the correlation coefficients in Table V, it is seen
that χ_z(2)(t) = 1. According to the rule in (28), models with order higher than 2 will not be
identifiable.
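The κ-shift argument can be verified directly: under noise-free data a lower-order truth leaves an exact linear constraint among the regressors, so shifting the parameter vector along that constraint changes the estimates without changing the residual. A minimal sketch with our own first-order system fitted by a second-order model (not the paper's process):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200
u = rng.standard_normal(N)
z = np.zeros(N)
for t in range(1, N):
    z[t] = 0.8 * z[t-1] + u[t-1]     # noise-free first-order truth

# Second-order regression of z(t) on [z(t-1), z(t-2), u(t-1), u(t-2)]
X = np.column_stack([z[1:N-1], z[0:N-2], u[1:N-1], u[0:N-2]])
y = z[2:N]

theta0 = np.array([0.8, 0.0, 1.0, 0.0])   # one exact solution
# The truth makes z(t-1) - 0.8 z(t-2) - u(t-2) vanish on this data, so the
# direction c below satisfies X @ c = 0 and any kappa yields another solution.
c = np.array([1.0, -0.8, 0.0, -1.0])
kappa = 1.4026                            # arbitrary; any value works
theta1 = theta0 + kappa * c

r0 = y - X @ theta0                       # both residuals are identically zero
r1 = y - X @ theta1
```

Since the loss function cannot distinguish theta0 from theta1, the second-order model is not uniquely identifiable: exactly the situation flagged by a zero diagonal element in D(t).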
Remark 1
Ideal identification conditions, i.e. open-loop with persistent input excitation (χ_u = 0) and with
no process noise (χ_z → 1), produce accurate model parameter estimates for the model with the
correct order. However, the parameter estimates of the overparametrized models are not unique.
Therefore, when the noise level is low, it is very important to always choose the correct model
order. Identifying an overparametrized model with an ordinary least-squares method when the
process noise is very low can lead to serious numerical problems. However, with the MMLS
method, overparametrization is not a problem. MMLS is an order-recursive method from low to
high order. Numerical problems associated with overparametrization only occur in the over-
parametrized models and do not affect the accuracy of models from order 1 up to the correct
order. See Reference 6 for more details. In addition, the correct model order can always be easily
obtained from the loss function matrix D(t).
Remark 2
Non-persistent excitation is common in practice, especially in control applications where the
input/output data are obtained under normal operating conditions. For example, when the
Loss function matrix for the non-persistent-excitation case (correlation coefficients in
parentheses; ε denotes a value close to zero)
    J_f(0) = 2.280×10³    J_b(1) = 4.633×10²    J_f(1) = 1.795×10² (0.96)
    J_b(2) = ε (1.00)     J_f(2) = 1.366×10² (0.97)    J_b(3) = ε
    J_f(3) = 1.3651×10²
process is running near steady state, the process is not fully excited. In this case a common
practice is to stop the identification until informative data are available. Many on/off criteria can
be used to stop the identification; see e.g. References 7 and 11. Monitoring the system identifiabil-
ity, e.g. the order of the input excitation, would identify the potential problems so that appropri-
ate action could be taken, e.g. add extra excitation in the feedback loop or simply stop the
identification.
    r(t) = z(t) + 0.2 z(t−1) + w(t)
Table IX. Loss function matrix (output feedback with no probing noise)
    J_f(0) = 2.340×10³    J_b(1) = 6.024    J_f(1) = 1.377×10² (0.97)
    J_b(2) = ε (1.00)     J_f(2) = 1.361×10² (0.97)    J_b(3) = ε
    J_f(3) = 1.3579×10²
(ε denotes a value close to zero)
( 1997 by John Wiley & Sons, Ltd. Int. J. Adapt. Control Signal Process., 11, 603—619 (1997)
618 S. S. NIU AND D. G. FISHER
Table XI. Loss function matrix (first-order feedback with probing noise)
    J_f(0) = 388.2 (0.00)    J_b(1) = 143.7 (0.84)    J_f(1) = 195.5 (0.70)
    J_b(2) = 124.6 (0.86)    J_f(2) = 136.2 (0.80)    J_b(3) = 123.4 (0.86)
    J_f(3) = 136.5 (0.80)
It is now seen that the loss functions of the feedback models converge to a constant 136, which
is close to the expected value t σ_w² = 500 × 0.5² = 125. All the forward models are now identifiable,
but with reduced accuracy. This can be seen by comparing Table X with Tables II and VIII. In
terms of the correlation coefficients in Table IX, χ_u(2)(t) = 1.0 makes all models with order higher
than 2 non-identifiable. The probing noise w(t) that is added reduces the input correlation
coefficient and thus improves parameter identifiability. Obviously, the larger the magnitude of
w(t), the smaller χ_u(2) will be and thus the better the identifiability that is achieved. In practice,
however, since w(t) is part of the process input but is not the desired input, the magnitude of w(t)
should be kept as small as possible. As a result, an appropriate compromise is needed in choosing
the probing noise.
Remark 3
A low-order output feedback correlates the input with the process output and thus makes the
data vector linearly correlated. This causes identifiability problems with the process parameters.
To improve the parameter identifiability, the principle is to break the linear correlation of the
data vector, i.e. to break equation (23). Common methods include (i) adding probing noise in the
feedback loop (as shown in this example), (ii) increasing the order of the feedback law and (iii)
using a variable structure for the feedback. Increasing the time delay in the backward channel has
the same effect on identifiability as increasing the order of the feedback, but would degrade the
feedback control performance.
If sufficient probing noise is added in the feedback loop, the process parameters can become
identifiable. The quality of the identified process parameters depends on the relative magnitude of
the noise in the feedback loop versus the magnitude of the estimation residual of the process
model. Numerical and practical factors require that the input excitation have sufficient magni-
tude as well as a high enough order. This can be easily checked by examining the signal-to-noise
ratio that can be produced simultaneously by the MMLS algorithm.12
5. CONCLUSIONS
It has been shown that parameter identifiability problems due to factors such as non-persistent
input excitation, correlation of the input/output data from feedback, overparametrization, etc.
can be detected by monitoring the loss function of the process (forward) and feedback (backward)
models of different orders. The test is simple: if the loss function of any forward or backward
model approaches zero at a certain order, then all models with higher orders are not uniquely
identifiable. In other words, if any of the elements in the loss function matrix D(t) is, or is very
close to, zero, then all the estimated model parameters in the parameter matrix which are to the
right of the column corresponding to this zero loss function are not uniquely identifiable.
The multiple-model least-squares (MMLS) method and augmented UD identification (AUDI)
algorithm simultaneously produce all the models, loss functions and associated information
required for monitoring parameter identifiability. Therefore the extra computation required is
minimal. (Implementation using conventional least-squares algorithms is possible but computa-
tionally prohibitive.)
Continuous on-line monitoring of parameter identifiability makes it possible to detect problems
immediately and initiate appropriate corrective actions, e.g. add excitation or stop parameter
updating.
REFERENCES
1. Ljung, L., System Identification: Theory for the User, Prentice-Hall, Englewood Cliffs, NJ, 1987.
2. Söderström, T. and P. Stoica, System Identification, Prentice-Hall, Englewood Cliffs, NJ, 1989.
3. Åström, K. J. and T. Bohlin, 'Numerical identification of linear dynamic systems from normal operating records',
Proc. IFAC Symp. on Self-Adaptive Control Systems, Teddington, 1965, pp. 96–110.
4. Gustavsson, I., L. Ljung and T. Söderström, 'Identification of processes in closed loop – identifiability and accuracy
aspects', Automatica, 13, 59–75 (1977).
5. Gustavsson, I., L. Ljung and T. Söderström, 'Choice and effect of different feedback configurations', in Eykhoff, P. (ed.),
Trends and Progress in System Identification, Pergamon, Oxford, 1981, pp. 367–388.
6. Niu, S., L. Ljung and Å. Björck, 'Decomposition methods for least-squares parameter estimation', IEEE Trans. Signal
Process., 44(11), 1996.
7. Niu, S., D. G. Fisher and D. Xiao, 'An augmented UD identification algorithm', Int. J. Control, 56, 193–211 (1992).
8. Niu, S., D. G. Fisher, L. Ljung and S. L. Shah, 'A tutorial on multiple model least-squares and augmented UD
identification', Tech. Rep. LiTH-ISY-R-1710, Department of Electrical Engineering, Linköping University, 1994.
9. Golub, G. H. and C. F. van Loan, Matrix Computations, 2nd edn, Johns Hopkins University Press, 1989.
10. Björck, Å., Numerical Methods for Least-Squares Problems, Society for Industrial and Applied Mathematics,
Philadelphia, PA, 1995.
11. Yin, G., 'A stopping rule for least-squares identification', IEEE Trans. Automatic Control, AC-34, 659–662 (1989).
12. Niu, S. and D. G. Fisher, 'Simultaneous estimation of process parameters, noise variance and signal-to-noise ratio',
IEEE Trans. Signal Process., 43(7), 1995.