
TA08 11:40
Proceedings of the 37th IEEE Conference on Decision & Control
Tampa, Florida USA, December 1998

A Comparison of Some Subspace Identification Methods

Tohru Katayama†, Shogo Omori† and Giorgio Picci‡
† Department of Applied Mathematics and Physics, Kyoto University,
Kyoto 606-01, Japan; e-mail: [email protected]
‡ Department of Electronics and Informatics, University of Padova,
35131 Padova, Italy; e-mail: [email protected]

Abstract

Recently, we have derived stochastic realization methods for a system with exogenous inputs [4, 1], and the relevance of stochastic realization to subspace identification of state-space systems has been shown in [4]. In this paper, we briefly review the basis of the stochastic subspace identification algorithms [1, 2] and present some simulation results to compare the performance and computational loads of the realization-based algorithms, the CLS algorithm [3], and the basic 4SID [7].

Stochastic Realization with Exogenous Inputs

In this section, we summarize the stochastic realization method based on [1, 2]. Consider a discrete-time stochastic linear system with the m x 1 input vector u(t) and the p x 1 output vector y(t). It is assumed that {u(t), y(t), t = 0, ±1, ...} are jointly wide-sense stationary processes with zero mean and finite covariance matrices.

Let t be the present time and k a positive integer. We then define the stacked vectors of past and future inputs as

    u_-(t) := [u^T(t-1), u^T(t-2), ..., u^T(t-k)]^T,
    u_+(t) := [u^T(t), u^T(t+1), ..., u^T(t+k-1)]^T

and y_-(t) and y_+(t), the stacked vectors of past and future outputs, are defined similarly. For notational simplicity, we also define the past and future vectors

    p(t) := [u_-(t); y_-(t)],    f(t) := y_+(t)

Theorem 1  Suppose that p(t) ∩ u_+(t) = 0. Then the optimal LS predictor f̂(t) of the future output vector f(t), based on the past input-output data p(t) and the future inputs u_+(t), is given by the orthogonal projection

    f̂(t) = f(t) | p(t) ∨ u_+(t) = Π p(t) + Φ u_+(t)    (1)

where Π, Φ are given by

    [Π  Φ] = [Σ_{fp}  Σ_{fu}] [ Σ_{pp}  Σ_{pu} ]^{-1}
                              [ Σ_{up}  Σ_{uu} ]        (2)

It can also be shown that the operators Π and Φ satisfy the discrete Wiener-Hopf type equations

    Π Σ_{pp|u} = Σ_{fp|u},    Φ Σ_{uu|p} = Σ_{fu|p}    (3)

where Σ_{pp|u}, Σ_{uu|p} are the conditional covariance operators of the past vector p(t) given u_+(t) and of the future input u_+(t) given the past p(t), defined by

    Σ_{ab|c} := E{(a|c^⊥)(b|c^⊥)^T} = Σ_{ab} - Σ_{ac} Σ_{cc}^{-1} Σ_{cb}

where a|c^⊥ := a - (a|c).
Let {y(t), u(t)} be jointly stationary, regular, full-rank processes. Suppose that there is no feedback from y to u. Then it can be shown that Φ is block lower-triangular, so that it is a causal operator.

Let rank Σ_{fp|u} = n. Consider the Cholesky factorizations Σ_{pp|u} = L_p L_p^T and Σ_{ff|u} = L_f L_f^T, and define the normalized vectors f̃(t) := L_f^{-1} (f|u_+^⊥)(t) and p̃(t) := L_p^{-1} (p|u_+^⊥)(t). It then follows that E{f̃(t) p̃^T(t)} = L_f^{-1} Σ_{fp|u} L_p^{-T}. Suppose that the SVD of the normalized block Hankel matrix L_f^{-1} Σ_{fp|u} L_p^{-T} is given by

    L_f^{-1} Σ_{fp|u} L_p^{-T} = U Σ V^T    (4)

where U^T U = I_n, V^T V = I_n, and Σ = diag(σ_1, ..., σ_n) is a diagonal matrix with nonzero singular values (1 ≥ σ_1 ≥ ... ≥ σ_n > 0). We see that the σ_i's are the canonical correlation coefficients between the conditional random vectors (f|u_+^⊥)(t) and (p|u_+^⊥)(t).

From the SVD of (4), we define the extended observability and controllability matrices as

    O := L_f U Σ^{1/2},    C := Σ^{1/2} V^T L_p^T    (5)

where rank O = rank C = n. Then the block Hankel matrix Σ_{fp|u} has the decomposition Σ_{fp|u} = OC. Since Π = Σ_{fp|u} Σ_{pp|u}^{-1}, the oblique projection is expressed as

    Π p(t) = O x(t)    (6)
where the state vector is now defined to be the n x 1 vector

    x(t) = C Σ_{pp|u}^{-1} p(t) = Σ^{1/2} V^T L_p^{-1} p(t)    (7)
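The chain (4)-(7) translates almost line for line into numerical code. This sketch continues the previous one (reusing cond_cov and the placeholder matrices p, f, u_plus) and assumes the model order n is known; in practice it would be read off from the gap in the singular values:

import numpy as np

Lp = np.linalg.cholesky(cond_cov(p, p, u_plus))   # Sigma_pp|u = Lp Lp^T
Lf = np.linalg.cholesky(cond_cov(f, f, u_plus))   # Sigma_ff|u = Lf Lf^T

# normalized block Hankel matrix Lf^{-1} Sigma_fp|u Lp^{-T} of (4)
H = np.linalg.solve(Lf, cond_cov(f, p, u_plus)) @ np.linalg.inv(Lp).T

U, s, Vt = np.linalg.svd(H, full_matrices=False)  # s = canonical correlations
n = 2                                             # assumed model order
U, sqrtS, Vt = U[:, :n], np.diag(np.sqrt(s[:n])), Vt[:n, :]

O = Lf @ U @ sqrtS      # extended observability matrix, (5)
C = sqrtS @ Vt @ Lp.T   # extended controllability matrix, (5)

# state estimates (7): x(t) = Sigma^{1/2} V^T Lp^{-1} p(t), all t at once
X = sqrtS @ Vt @ np.linalg.solve(Lp, p)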
Theorem 2  Suppose that there is no feedback from the output y(t) to the input u(t), and assume that rank Σ_{fp|u} = n. Then, in terms of the state vector x(t) of (7), we have a stochastic realization of the form

    x(t+1) = A x(t) + B u(t) + K e(t)    (8)
    y(t) = C x(t) + D u(t) + e(t)        (9)
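Given the state estimates of (7), one standard way to recover the system matrices of (8)-(9) is a least-squares regression of [x(t+1); y(t)] on [x(t); u(t)]. This is a sketch in that spirit, not necessarily the exact estimation step used in the paper; X, u and y are assumed to be available (e.g. X from the previous sketch), with one column per time instant:

import numpy as np

# X: (n, N) state estimates from (7); u: (m, N) inputs; y: (p, N) outputs
Xt, Xt1 = X[:, :-1], X[:, 1:]
Z = np.vstack([Xt, u[:, :-1]])          # regressor [x(t); u(t)]
W = np.vstack([Xt1, y[:, :-1]])         # response  [x(t+1); y(t)]

# least-squares estimate of the block matrix [A B; C D] in (8)-(9)
Theta = np.linalg.lstsq(Z.T, W.T, rcond=None)[0].T
n, m = X.shape[0], u.shape[0]
A, B = Theta[:n, :n], Theta[:n, n:]
Cmat, D = Theta[n:, :n], Theta[n:, n:]  # Cmat is the output matrix C of (9)

# the residuals W - Theta @ Z estimate the noise terms; from their sample
# covariances the Kalman gain K of (8) can be computed (omitted here)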
Simulation Results

Some results of computer simulations are presented to show the performance of five subspace identification algorithms.

Basic 4SID is due to Verhaegen [7], where the Cholesky factorization is used to get the L factor.

COV-a is the algorithm based on the stochastic realization, where the system matrices are estimated by using the estimate of the state vector (see (7)).

COV-b is the algorithm based on the stochastic realization, where the system matrices are estimated by using O and Φ (see (3) and (5)).

CLS-a is the algorithm based on the constrained least-squares algorithm due to Peternell et al. [3], using the estimate of the state vector (see (7)).

CLS-b is the constrained least-squares algorithm using O and Φ (see (3) and (5)).
We consider a 5th-order SISO system shown in Fig. 1 [9], where u(t) is the input, and w(t) and v(t) are white noises with zero mean. The transfer function is given by G(z) = B(z)/A(z), where

    B(z) = 0.0275 z^{-1} + 0.0551 z^{-2}
    A(z) = 1 - 2.3443 z^{-1} + 3.081 z^{-2} - 2.5274 z^{-3} + 1.2415 z^{-4} - 0.3686 z^{-5}

G(z) has a zero at z = -2 and poles at z = 0.9, 0.8 e^{±j}, 0.8 e^{±1.2j}.

Fig. 1 The plant model
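As a quick numerical check (added here, not in the original), the stated zero and pole locations follow directly from the polynomial coefficients:

import numpy as np

b = [0.0275, 0.0551]                               # B(z): coefficients of z^{-1}, z^{-2}
a = [1, -2.3443, 3.081, -2.5274, 1.2415, -0.3686]  # A(z): coefficients of z^0 .. z^{-5}

print(np.roots(b))                       # zero of B(z): approximately -2.00
poles = np.roots(a)                      # roots of z^5 A(z), i.e. the poles
print(np.abs(poles), np.angle(poles))    # moduli ~ {0.9, 0.8}, angles ~ {0, ±1, ±1.2}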
In the present simulation studies, the input is chosen as

    u(t) = u_0 Σ_{i=1}^{10} sin(ω_i t)

where the frequencies ω_i are uniformly spaced in the interval (0.1, 3) rad and u_0 is adjusted to yield σ_u^2 = 1. The noise variances are chosen as σ_w^2 = σ_v^2 = (0.05)^2. It follows from the PE condition for u_{0|2k-1} that k ≤ 10. The performance is evaluated by the mean square error

    ε_j(N) = (1/M) Σ_{l=1}^{M} [θ̂_j(l, N) - θ_j]^2

where N = 200, 400, 1000, 2000, θ_j denotes the true parameter, θ̂_j(l, N) is the estimate of θ_j at the l-th run with N data points, and M denotes the number of simulation runs.
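The experiment is straightforward to reproduce; the following is a minimal sketch under stated assumptions. The exact frequency grid and the entry points of w(t) and v(t) in Fig. 1 are not recoverable from the scan, so an evenly spaced grid including the endpoints is used and only the measurement noise v(t) is included:

import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
N = 1000
t = np.arange(N)

# input: sum of 10 sinusoids with frequencies evenly spaced in (0.1, 3) rad,
# scaled so that the sample variance of u(t) is 1 (u_0 absorbed into the scaling)
freqs = np.linspace(0.1, 3.0, 10)
u = np.sin(np.outer(t, freqs)).sum(axis=1)
u /= u.std()

# plant output y = G(z) u plus white measurement noise v(t) with std 0.05
b = [0, 0.0275, 0.0551]
a = [1, -2.3443, 3.081, -2.5274, 1.2415, -0.3686]
y = lfilter(b, a, u) + 0.05 * rng.standard_normal(N)

# per-parameter MSE over M Monte Carlo runs, matching the criterion above:
# eps_j = np.mean((theta_hat[:, j] - theta_true[j]) ** 2)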

Fig. 2 depicts the performance of the five algorithms, where k = 8 and M = 100. In this case, COV-a and COV-b show similar performance, but the performance of CLS-a and CLS-b is rather different. In order to analyze this fact, we have simulated the CLS-a and CLS-b methods for several different k's, where the input is a sum of 15 sinusoids and M = 50, N = 1000. We see from Fig. 3 that both methods give similar performance for k greater than 10, but for smaller k, CLS-a shows better performance. In Figs. 4 and 5, the pole estimates by the COV-a and CLS-a methods are depicted for k = 8, N = 1000. We see that COV-a gives rather scattered pole estimates, whereas CLS-a yields better pole estimates with smaller variability.

Table 1: The number of flops for 50 simulation runs

            COV-a        CLS-a        Basic 4SID   LQ
    Flops   1.58 x 10^8  2.23 x 10^9  5.26 x 10^7  6.42 x 10^8

Table 1 shows the number of flops of four algorithms, where LQ denotes the algorithm based on the LQ factorization of the Hankel matrix [5, 6, 7]. The number of flops includes all the computations for the whole simulations by each algorithm for k = 8, M = 50, N = 1000. It therefore follows that by using the Cholesky factorization [2], we get a great computational saving over the method based on the LQ factorization. Also, it is rather surprising to find that CLS-a is ten times more expensive than COV-a.

References

[1] T. Katayama and G. Picci, "An Approach to Realization of Stochastic Systems with Exogenous Input," Preprints of the 11th IFAC Symposium on System Identification, Kitakyushu, Japan, July 1997, pp. 1107-1112.
[2] T. Katayama and G. Picci, "Realization of Stochastic Systems with Exogenous Inputs and Subspace Identification Methods," 1998 (submitted).
[3] K. Peternell, W. Scherrer and M. Deistler, "Statistical Analysis of Novel Subspace Identification Methods," Signal Processing, vol. 52, no. 2, July 1996, pp. 161-177.
[4] G. Picci and T. Katayama, "Stochastic Realization with Exogenous Inputs and 'Subspace Methods' Identification," Signal Processing, vol. 52, no. 2, July 1996, pp. 145-160.
[5] P. Van Overschee and B. De Moor, "N4SID: Subspace Algorithms for the Identification of Combined Deterministic-Stochastic Systems," Automatica, vol. 30, no. 1, 1994, pp. 75-93.
[6] P. Van Overschee and B. De Moor, Subspace Identification for Linear Systems, Kluwer Academic Publishers, 1996.
[7] M. Verhaegen and P. Dewilde, "Subspace Model Identification (Parts 1 and 2)," Int. J. Control, vol. 56, 1992, pp. 1187-1210 & pp. 1211-1241.
[8] M. Verhaegen, "Identification of the Deterministic Part of MIMO State Space Models given in Innovations Form from Input-Output Data," Automatica, vol. 30, no. 1, 1994, pp. 61-74.
[9] M. Viberg, "Subspace-based Methods for the Identification of Linear Time-invariant Systems," Automatica, vol. 31, no. 12, 1995, pp. 1835-1851.

Fig. 2 The performance of five algorithms
Fig. 3 The performance vs. number of rows
Fig. 4 The pole estimates by COV-a
Fig. 5 The pole estimates by CLS-a
