
The Interacting Multiple Model Algorithm for Systems with Markovian Switching Coefficients

HENK A. P. BLOM AND YAAKOV BAR-SHALOM

Abstract - An important problem in filtering for linear systems with Markovian switching coefficients (dynamic multiple model systems) is the one of management of hypotheses, which is necessary to limit the computational requirements. A novel approach to hypotheses merging is presented for this problem. The novelty lies in the timing of hypotheses merging. When applied to the problem of filtering for a linear system with Markovian coefficients this yields an elegant way to derive the interacting multiple model (IMM) algorithm. Evaluation of the IMM algorithm makes it clear that it performs very well at a relatively low computational load. These results imply a significant change in the state of the art of approximate Bayesian filtering for systems with Markovian coefficients.

Manuscript received June 24, 1987; revised October 21, 1987. This paper is based on a prior submission of October 20, 1986. The work of the second author was supported by the Air Force Office of Scientific Research under Grant 84-0112. H. A. P. Blom is with the National Aerospace Laboratory NLR, Amsterdam, The Netherlands. Y. Bar-Shalom is with the University of Connecticut, Storrs, CT 06268. IEEE Log Number 8821022.

I. INTRODUCTION

In this contribution we present a novel approach to the problem of filtering for a linear system with Markovian coefficients

x_t = a(\theta_t) x_{t-1} + b(\theta_t) w_t    (1)

with observations

y_t = h(\theta_t) x_t + g(\theta_t) v_t    (2)

Here \theta_t is a finite state Markov chain taking values in \{1, \ldots, N\} according to a transition probability matrix H, and w_t, v_t are mutually independent white Gaussian processes. The exact filter consists of a growing number of linear Gaussian hypotheses, the growth being exponential with time. Obviously, for filtering we need recursive algorithms whose complexity does not grow with time. With this, the main problem is to avoid the exponential growth of the number of Gaussian hypotheses in an efficient way.

This hypotheses management problem is also known for several other filtering situations [10], [5], [6], [9], and [4]. All these problems have stimulated during the last two decades the development of a large variety of approximation methods. For our problem the majority of these are techniques that reduce the number of Gaussian hypotheses by pruning and/or merging of hypotheses. Well-known examples of this approach are the detection estimation (DE) algorithms and the generalized pseudo Bayes (GPB) algorithms. For overviews and comparisons see [14], [7], [12], and [17]. None of the algorithms discussed appeared to have good performance at modest computational load. Because of that, other approaches have also been developed, mainly by way of approximating the model (1), (2). Examples are the modified multiple model (MM) algorithms [20], [7], the modified gain extended Kalman (MGEK) filter of Song and Speyer [13], [7], and residual based methods [19], [2]. These algorithms, however, also lack good performance at modest computational load in too many situations. In view of this unsatisfactory situation and the practical importance of better solutions, the filtering problem for the class of systems (1), (2) needed further study.

One item that has not received much attention in the past is the timing of hypotheses reduction. It is common practice to reduce the number of Gaussian hypotheses immediately after a measurement update. Indeed, on first sight there does not seem to be a better moment. However, in two recent publications [3], [1], this point has been exploited to develop, respectively, the so-called IMM (interacting multiple model) and AFMM (adaptive forgetting through multiple models) algorithms. The latter exploits pruning to reduce the number of hypotheses, while the IMM exploits merging. The IMM algorithm was the reason for a further evaluation of the timing of hypotheses reduction. A novel approach to hypotheses merging is presented for a dynamic MM situation, which leads to an elegant derivation of the IMM algorithm. Next, Monte Carlo simulations are presented to judge the state of the art in MM filtering after the introduction of the IMM algorithm.
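To make the model concrete, the following minimal sketch simulates a scalar two-mode instance of (1)-(2); the coefficient values, transition probabilities, and random seed below are illustrative assumptions, not values taken from this paper.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-mode parameters (assumed for this sketch, not from the paper).
a = np.array([1.0, 0.9])        # a(theta)
b = np.array([1.0, 1.0])        # b(theta)
h = np.array([1.0, 1.0])        # h(theta)
g = np.array([1.0, 1.0])        # g(theta)
H = np.array([[0.95, 0.05],     # H[i, j] = P(theta_t = i | theta_{t-1} = j)
              [0.05, 0.95]])

def simulate(T, x0=0.0, theta0=0):
    """Generate (x_t, y_t, theta_t), t = 1..T, from the switching model (1)-(2)."""
    x, theta = x0, theta0
    xs, ys, thetas = [], [], []
    for _ in range(T):
        theta = rng.choice(2, p=H[:, theta])                   # Markov chain transition
        x = a[theta] * x + b[theta] * rng.standard_normal()    # state equation (1)
        y = h[theta] * x + g[theta] * rng.standard_normal()    # observation equation (2)
        xs.append(x); ys.append(y); thetas.append(theta)
    return np.array(xs), np.array(ys), np.array(thetas)

With N = 2 modes, the exact conditional density after t measurements is a mixture over the N^t possible mode histories, which is exactly the exponential growth that the approximations discussed below are designed to avoid.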
II. TIMING OF HYPOTHESES REDUCTION

To show the possibilities of timing the hypothesis reduction, we start with a filter cycle from one measurement update up to and including the next measurement update. For this, we take a cycle of recursions for the evolution of the conditional probability measure of our hybrid state Markov process (x_t, \theta_t). This cycle reads as follows:
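Only the equation numbers of this cycle, (3) through (7), survive in this copy. A plausible reconstruction, based on the descriptions given further below ((9) represents (3), (11) represents (4), (5) follows from the evolution of system (1), and the Bayes formula represents (6) and (7)), is the following; the exact form in the original may differ.

P\{\theta_t = i \mid Y_{t-1}\} = \sum_j H_{ij} P\{\theta_{t-1} = j \mid Y_{t-1}\}    (3)

p[x_{t-1} \mid \theta_t = i, Y_{t-1}] = \sum_j H_{ij} P\{\theta_{t-1} = j \mid Y_{t-1}\} \, p[x_{t-1} \mid \theta_{t-1} = j, Y_{t-1}] \,/\, P\{\theta_t = i \mid Y_{t-1}\}    (4)

p[x_t \mid \theta_t = i, Y_{t-1}] = \int p[x_t \mid \theta_t = i, x_{t-1}] \, p[x_{t-1} \mid \theta_t = i, Y_{t-1}] \, dx_{t-1}    (5)

P\{\theta_t = i \mid Y_t\} \propto p[y_t \mid \theta_t = i, Y_{t-1}] \, P\{\theta_t = i \mid Y_{t-1}\}    (6)

p[x_t \mid \theta_t = i, Y_t] \propto p[y_t \mid x_t, \theta_t = i] \, p[x_t \mid \theta_t = i, Y_{t-1}]    (7)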

Let us take a closer look at the derivation of the above cycle. As v_t and w_t are mutually independent, the Bayes formula, which represents (6) and (7), follows easily from (2). From the evolution of system (1) follows (5). The Chapman-Kolmogorov equation for the Markov chain \theta_t,

P\{\theta_t = i \mid Y_{t-1}\} = \sum_j H_{ij} P\{\theta_{t-1} = j \mid Y_{t-1}\},    (9)

which represents (3), can be seen as a "mixing." To derive a representation of (4) we first introduce the following equation on the basis of the law of total probability:

p[x_{t-1} \mid \theta_t = i, Y_{t-1}] = \sum_j p[x_{t-1} \mid \theta_{t-1} = j, \theta_t = i, Y_{t-1}] \cdot P\{\theta_{t-1} = j \mid \theta_t = i, Y_{t-1}\}.    (10)

As \theta_t is independent of x_{t-1} if \theta_{t-1} is known, we easily obtain

p[x_{t-1} \mid \theta_{t-1} = j, \theta_t = i, Y_{t-1}] = p[x_{t-1} \mid \theta_{t-1} = j, Y_{t-1}].

Substitution of this and of the following:

P\{\theta_{t-1} = j \mid \theta_t = i, Y_{t-1}\} = H_{ij} P\{\theta_{t-1} = j \mid Y_{t-1}\} / P\{\theta_t = i \mid Y_{t-1}\}

in (10) yields the desired representation of transition (4):

p[x_{t-1} \mid \theta_t = i, Y_{t-1}] = \sum_j H_{ij} P\{\theta_{t-1} = j \mid Y_{t-1}\} \cdot p[x_{t-1} \mid \theta_{t-1} = j, Y_{t-1}] / P\{\theta_t = i \mid Y_{t-1}\}.    (11)

Notice that the mixing of the densities in (11) is explicitly related to the above-mentioned Markov properties of \theta_t and the conditional independence of \theta_t and x_{t-1} given \theta_{t-1}. According to the above filtering cycle there are at any moment in time N densities on R^n and N scalars. The densities on R^n are rarely Gaussian. Even if p[x_0 | Y_0] is Gaussian, then p[x_t | \theta_t = i, Y_t] is in general a sum of N^{t-1} weighted Gaussians (a Gaussian mixture). Explicit recursions for these N^t individual Gaussians and their weights can simply be obtained from the above filter cycle. Obviously, the N times increase of the number of Gaussians during each filter cycle is caused by (4) only.

In the sequence of elementary transitions, (3) through (7), we can apply a hypotheses reduction either after (4), after (5), or after (7). We review these reduction timing possibilities for the fixed depth merging hypotheses reduction. This fixed depth merging approach implies that the Gaussian hypotheses for which the Markov chain paths are equivalent during the recent past of some fixed depth are merged to one moment-matched Gaussian hypothesis. The degrees of freedom in applying this fixed depth merging approach are the choice of the depth, d (>= 1), and the moment of application. If the application is immediately after each measurement update pass (7), it yields the GPB(d+1) algorithms [14], [16]. In the next section we derive the IMM algorithm by applying the fixed depth merging approach with depth d = 1, after each pass of (4). It can easily be verified that all other timing possibilities yield disguised versions of IMM and GPB algorithms. Merging after (5) with d = 1 yields a disguised but more complex IMM algorithm. Merging either after (4) or after (5) with d >= 2 yields a disguised but more complex GPBd algorithm.
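For reference, the moment-matched merging used above replaces a weighted set of Gaussian hypotheses N\{\mu_j, P_j\} with weights w_j by the single Gaussian that has the same first two moments; equations (13), (14), and (19) below are instances of this standard identity:

\bar{\mu} = \sum_j w_j \mu_j, \qquad \bar{P} = \sum_j w_j \left[ P_j + (\mu_j - \bar{\mu})(\mu_j - \bar{\mu})^T \right].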
III. THE IMM ALGORITHM

The IMM algorithm cycle consists of the following four steps, of which the first three steps are illustrated in Fig. 1.

Fig. 1. [Block diagram of the IMM cycle; only the box labels "Interaction" and "Filter" are legible in this copy.]

1) Starting with the N weights \hat{p}_j(t-1), the N means \hat{x}_j(t-1), and the N associated covariances \hat{R}_j(t-1), one computes the mixed initial condition for the filter matched to \theta_t = i according to the following equations:

\bar{p}_i(t) = \sum_j H_{ij} \hat{p}_j(t-1); \quad \text{if } \bar{p}_i(t) = 0 \text{, prune hypothesis } i,    (12)

\bar{x}^i(t-1) = \sum_j H_{ij} \hat{p}_j(t-1) \hat{x}_j(t-1) / \bar{p}_i(t),    (13)

\bar{R}^i(t-1) = \sum_j H_{ij} \hat{p}_j(t-1) \left[ \hat{R}_j(t-1) + [\hat{x}_j(t-1) - \bar{x}^i(t-1)][\hat{x}_j(t-1) - \bar{x}^i(t-1)]^T \right] / \bar{p}_i(t).    (14)

2) Each of the N pairs \bar{x}^i(t-1), \bar{R}^i(t-1) is used as input to a Kalman filter matched to \theta_t = i. Time extrapolation yields \bar{x}_i(t), \bar{R}_i(t), and then measurement updating yields \hat{x}_i(t), \hat{R}_i(t).

3) The N weights \hat{p}_i(t) are updated from the innovations of the N Kalman filters,

\hat{p}_i(t) = c \cdot \bar{p}_i(t) \cdot \|Q_i(t)\|^{-1/2} \exp\{-\tfrac{1}{2} s_i^T(t) Q_i^{-1}(t) s_i(t)\}    (15)

with c denoting a normalizing constant,

s_i(t) = y_t - h(i) \bar{x}_i(t),    (16)

Q_i(t) = h(i) \bar{R}_i(t) h^T(i) + g(i) g^T(i).    (17)

4) For output purposes only, \hat{x}_t and \hat{R}_t are computed according to

\hat{x}_t = \sum_i \hat{p}_i(t) \hat{x}_i(t),    (18)

\hat{R}_t = \sum_i \hat{p}_i(t) \left[ \hat{R}_i(t) + [\hat{x}_i(t) - \hat{x}_t][\hat{x}_i(t) - \hat{x}_t]^T \right].    (19)
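The following is a minimal numpy sketch of one IMM cycle for the scalar form of (1)-(2), written directly from steps 1)-4) and equations (12)-(19); it is an illustration, not the authors' implementation, and the pruning of hypotheses with \bar{p}_i(t) = 0 mentioned in (12) is not handled.

import numpy as np

def imm_cycle(p_hat, x_hat, R_hat, y, a, b, h, g, H):
    """One IMM cycle for the scalar model (1)-(2).

    p_hat, x_hat, R_hat : length-N arrays with the weights, means, and variances
                          of the N hypotheses at time t-1.
    y                   : measurement y_t.
    a, b, h, g          : length-N arrays of per-mode coefficients.
    H                   : H[i, j] = P(theta_t = i | theta_{t-1} = j).
    """
    N = len(p_hat)

    # Step 1: mixing, eqs. (12)-(14).  (The case p_bar[i] == 0 is not handled here.)
    p_bar = H @ p_hat                                     # (12)
    x_mix = np.zeros(N)
    R_mix = np.zeros(N)
    for i in range(N):
        w = H[i, :] * p_hat / p_bar[i]                    # mixing weights H_ij p_hat_j / p_bar_i
        x_mix[i] = w @ x_hat                              # (13)
        R_mix[i] = w @ (R_hat + (x_hat - x_mix[i]) ** 2)  # (14), scalar case

    # Step 2: N Kalman filters, each matched to theta_t = i.
    x_pred = a * x_mix                                    # time extrapolation
    R_pred = a ** 2 * R_mix + b ** 2
    s = y - h * x_pred                                    # innovations, (16)
    Q = h ** 2 * R_pred + g ** 2                          # innovation variances, (17)
    K = R_pred * h / Q                                    # Kalman gains
    x_upd = x_pred + K * s                                # measurement update
    R_upd = (1.0 - K * h) * R_pred

    # Step 3: weight update from the innovations, (15).
    p_new = p_bar * np.exp(-0.5 * s ** 2 / Q) / np.sqrt(Q)
    p_new /= p_new.sum()                                  # c is the normalizing constant

    # Step 4: combined output, for output purposes only, (18)-(19).
    x_out = p_new @ x_upd                                 # (18)
    R_out = p_new @ (R_upd + (x_upd - x_out) ** 2)        # (19)
    return p_new, x_upd, R_upd, x_out, R_out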
Only step 1) is typical for the IMM algorithm. Specifically, the mixing represented by (13) and (14) and by the interaction box in Fig. 1 cannot be found in the GPB algorithms. This is the key of the novel approach to the timing of fixed depth hypotheses merging that yields the IMM algorithm. We give a derivation of the key step 1).

Application of fixed depth merging with d = 1 implies that

p[x_{t-1} \mid \theta_{t-1} = j, Y_{t-1}] \approx N\{\hat{x}_j(t-1), \hat{R}_j(t-1)\}.

Substitution of this in (11) immediately yields (13) and (14), with \bar{x}^i(t-1) the expectation and \bar{R}^i(t-1) the associated covariance of the mixed density. Finally, we introduce the approximation

p[x_{t-1} \mid \theta_t = i, Y_{t-1}] \approx N\{\bar{x}^i(t-1), \bar{R}^i(t-1)\},

which guarantees that all subsequent IMM steps fit correctly.

Remark: The IMM can be approximated by the GPB1 algorithm by replacing \hat{x}_j(t-1) and \hat{R}_j(t-1) in step 1) by \hat{x}_{t-1} and \hat{R}_{t-1}. Together with (12) this approximates (13) and (14) in step 1) by \bar{x}^i(t-1) = \hat{x}_{t-1} and \bar{R}^i(t-1) = \hat{R}_{t-1}. These equations are equivalent to (13) and (14) if each component of H equals 1/N, which implies that \theta_t is a sequence of mutually independent stochastic variables. The latter is hardly ever the case and we conclude that the reduction of the IMM to GPB1 leads to a significant performance degradation. Obviously, the computational loads of IMM and GPB1 are almost equivalent.
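In terms of the sketch above, a hypothetical GPB1-style replacement of step 1 would re-initialize every mode-matched filter from the previous combined estimate (18)-(19) instead of the mixed values (13)-(14):

import numpy as np

def gpb1_mixing(p_hat, x_comb, R_comb, H):
    """GPB1 approximation of step 1: keep (12), but set the mixed initial
    conditions of every mode-matched filter to the combined estimate of the
    previous cycle (x_comb, R_comb) rather than to (13)-(14)."""
    N = len(p_hat)
    p_bar = H @ p_hat                  # (12) unchanged
    x_mix = np.full(N, x_comb)         # x_bar^i(t-1) = x_hat_{t-1} for all i
    R_mix = np.full(N, R_comb)         # R_bar^i(t-1) = R_hat_{t-1} for all i
    return p_bar, x_mix, R_mix

As the remark notes, this coincides with (13)-(14) only when every component of H equals 1/N.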
IV. PERFORMANCE OF THE IMM ALGORITHM

Presently a comparison of the different filtering algorithms for systems with Markovian coefficients with respect to their performance is hampered by the analytical complexity of the problem [16], [15]. Because of this, such comparisons necessarily rely on Monte Carlo simulations for specific examples. For our simulated examples we used the set of 19 cases that have been developed by Westwood [18]. To make the comparison more precise, we specify these cases and summarize the observed performance results. In all 19 cases both x_t and y_t are scalar processes, which satisfy x_t = a(\theta_t) x_{t-1} + b(\theta_t) w_t + u(t) and y_t = h(\theta_t) x_t + g(\theta_t) v_t, with \theta_t: \Omega = \{0, 1\}, u(t) = 10 \cos(2\pi t / 100), x_0 a Gaussian variable with expectation 10 and variance 10, P\{\theta_0 = 1\} = P\{\theta_0 = 0\} = 1/2, while H_{00} = (1 - 1/\tau_0) and H_{11} = (1 - 1/\tau_1). The parameters a, b, h, g and the average sojourn times \tau_0 and \tau_1 of these 19 cases are given in Table I.

TABLE I. The parameters of the 19 cases of Westwood [18]. Columns: case number; the mean sojourn times \tau_0, \tau_1 that define H; and the \theta-dependent values a(0), a(1), b(0), b(1), h(0), h(1), g(0), g(1). [The individual table entries are not legible in this copy.]
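A sketch of the scenario elements shared by the 19 cases (the deterministic input u(t), the transition matrix H built from the mean sojourn times, and the initial conditions) follows; the per-case values of a, b, h, g, tau_0, tau_1 come from Table I and are not reproduced here, and the helper name is of course not from the paper.

import numpy as np

def westwood_scenario(tau0, tau1, T=100, rng=None):
    """Common elements of the 19 simulation cases: u(t) = 10 cos(2 pi t / 100),
    H with H00 = 1 - 1/tau0 and H11 = 1 - 1/tau1, x0 ~ N(10, 10), and
    P{theta0 = 0} = P{theta0 = 1} = 1/2."""
    if rng is None:
        rng = np.random.default_rng()
    t = np.arange(1, T + 1)
    u = 10.0 * np.cos(2.0 * np.pi * t / 100.0)         # deterministic input u(t)
    H = np.array([[1.0 - 1.0 / tau0, 1.0 / tau1],      # H[i, j] = P(theta_t = i | theta_{t-1} = j)
                  [1.0 / tau0,       1.0 - 1.0 / tau1]])
    x0 = rng.normal(10.0, np.sqrt(10.0))               # expectation 10, variance 10
    theta0 = rng.integers(0, 2)                        # each initial mode with probability 1/2
    return u, H, x0, theta0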
The results of Westwood [18] show that in all 19 cases the differences in performance of the GPB2 and the GPB3 algorithms are negligible, while in only seven cases (5, 6, 8, 16, 17, 18, 19) the differences in performance of the GPB1 and the GPB2 algorithms are negligible. For our present comparison the other 12 cases (1, 2, 3, 4, 7, 9, 10, 11, 12, 13, 14, 15) are the interesting ones. For each of these 12 cases we simulated the GPB1, the GPB2, and the IMM algorithms and ran Monte Carlo simulations, consisting of 100 runs from t = 0 to t = 100. For simplicity of interpretation of the results we used one fixed path of \theta during all runs: \theta = 0 on the time interval [0, 30], \theta = 1 on the interval [31, 60], and \theta = 0 on the interval [61, 100].

The results of our simulations for the 12 interesting cases are as follows. In six cases (1, 2, 7, 12, 14, 15) both the IMM and the GPB2 performed slightly better than the GPB1, while the IMM and the GPB2 performed equally well. For typical results, see Fig. 2. In the other six cases both the IMM and the GPB2 performed significantly better than the GPB1. For typical results see Figs. 3 and 4. Of these six cases the IMM and the GPB2 performed four times equally well (cases 3, 4, 11, and 13) and two times significantly different (cases 9 and 10).

Fig. 2. rms error for case 7, illustrative of the six cases (1, 2, 7, 12, 14, 15) where both IMM and GPB2 perform slightly better than GPB1.

Fig. 3. rms error for case 3, illustrative of the four cases (3, 4, 11, 13) where both IMM and GPB2 perform better than GPB1, while IMM and GPB2 perform equally well.

On the basis of these simulations we can conclude that the IMM performs almost as well as the GPB2, while its computational load is about that of GPB1. We can further differentiate this overall conclusion. Increasing the parameters \tau_0 and \tau_1 increases the difference in performance between GPB1 and GPB2, but not between IMM and GPB2.

If a is being switched, then the IMM performs as well as the GPB2, while the GPB1 sometimes stays significantly behind.

If the white noise gains, b or g, are being switched, then the IMM performs as well as the GPB2, while the GPB1 sometimes stays significantly behind.

If only h is being switched, then in some cases the IMM, and even more often the GPB1, tend to diverge while the GPB2 works well.

Another interesting question is how the IMM compares to the modified MM algorithm and the MGEK filter. Apart from the GPB algorithms, Westwood [18] also evaluated four more filters: the MM, the modified MM, the MGEK, and a MGEK with a "postprocessor." For the 19 cases there was only one algorithm that outperformed the GPB1 algorithm in some cases. It was the MGEK filter in the cases 1, 3, and 4. He also found that the MGEK filter performed in these cases marginally or significantly less well than the GPB2 algorithm. As the above experiments showed that for cases 1, 3, and 4 the GPB2 and the IMM algorithm performed equally well, one can conclude that the MM, the modified MM, the MGEK, the MGEK with "postprocessor," and the GPB1 are in all 19 cases outperformed by the IMM algorithm.

On the basis of these comparisons one can conclude that for practical filtering applications with N = 2, the IMM algorithm is the best first choice. As the IMM algorithm has been developed on the basis of some general hypotheses reduction principles, which are N-invariant, one can reasonably expect that this is also true for larger N. But it is unlikely that the IMM performs in all applications almost as well as the exact filter.

Fig. 4. rms error for case 9, illustrative of the two cases (9 and 10) where IMM performs better than GPB1, but slightly worse than GPB2 (in these two cases only h jumps).
Therefore, if the IMM performs not well enough in a particular application one should consider using a suitable GPB (>= 2) or DE algorithm [14], or one might try to design a better algorithm by using adaptive merging techniques [16]. The DE algorithm might possibly be improved by the novel timing of hypotheses reduction [1]. If for a particular application the selected algorithm has too high a computational load, then it is best to try to exploit some geometrical structure of the problem considered [2], [11].

In situations where estimation has to be done outside some time-critical control loop, it is usually preferable to use a smoothing algorithm instead of a filtering algorithm [8], [14], [21]. In view of the above filtering results, this suggests that the ideas that underlie the IMM algorithm can be exploited to develop better smoothing algorithms.

REFERENCES

[1] P. Anderson, "Adaptive forgetting in recursive identification through multiple models," Int. J. Contr., vol. 42, pp. 1175-1193, 1985.
[2] M. Basseville and A. Benveniste, Detection of Abrupt Changes in Signals and Dynamical Systems. New York: Springer-Verlag, 1986.
[3] H. A. P. Blom, "An efficient filter for abruptly changing systems," in Proc. 23rd IEEE Conf. Decision Contr., 1984, pp. 656-658.
[4] H. A. P. Blom, "Overlooked potential of systems with Markovian coefficients," in Proc. 25th IEEE Conf. Decision Contr., Athens, Greece, 1986, pp. 1758-1764.
[5] C. Y. Chong, S. Mori, E. Tse, and R. P. Wishner, "A general theory for Bayesian multitarget tracking and classification," Advanced Decision Systems, Rep. TR-1015-1, 1982.
[6] A. M. Makowski, W. S. Levine, and M. Asher, "The nonlinear MMSE filter for partially observed systems driven by non-Gaussian white noise, with applications to failure estimation," in Proc. 23rd IEEE Conf. Decision Contr., Las Vegas, NV, 1984, pp. 644-650.
[7] S. I. Marcus and E. K. Westwood, "On asymptotic approximation for some nonlinear filtering problems," in Proc. 9th IFAC Triennial World Congress, 1984, pp. 811-816.
[8] V. J. Mathews and J. K. Tugnait, "Detection and estimation with fixed lag for abruptly changing systems," IEEE Trans. Aerosp. Electron. Syst., vol. 19, pp. 730-739, 1983.
[9] S. Mori, C. Y. Chong, E. Tse, and R. P. Wishner, "Tracking and classifying multiple targets without a priori identification," IEEE Trans. Automat. Contr., vol. 31, pp. 401-409, 1986.
[10] K. R. Pattipati and N. R. Sandell, Jr., "A unified view of state estimation in switching environments," in Proc. 1983 Amer. Contr. Conf., 1983, pp. 458-465.
[11] J. Raisch, "Comments on 'A multiple-model adaptive predictor for stochastic processes with Markov switching parameters,'" Int. J. Contr., vol. 45, pp. 1489-1490, 1986.
[12] [Authors and title truncated in this copy] "... linear systems," in Analysis and Optimization of Stochastic Systems, O. L. R. Jacobs et al., Eds. New York: Academic, 1980, pp. 333-345.
[13] T. L. Song and J. L. Speyer, "A stochastic analysis of a modified gain extended Kalman filter with applications to estimation with bearings only measurements," in Proc. 22nd IEEE Conf. Decision Contr., 1983, pp. 1291-1296.
[14] J. K. Tugnait, "Detection and estimation for abruptly changing systems," Automatica, vol. 18, pp. 607-615, 1982.
[15] R. B. Washburn, T. G. Allen, and D. Teneketzis, "Performance analysis for hybrid state estimation problems," in Proc. 1985 Amer. Contr. Conf., 1985, pp. 1047-1053.
[16] J. L. Weiss, "A comparison of finite filtering methods for status directed processes," Master's thesis, Charles Stark Draper Lab., Mass. Inst. Technol., Rep. CSDL-T-819, 1983.
[17] J. L. Weiss, T. N. Upadhyay, and R. Tenney, "Finite computable filters for linear systems subject to time-varying model uncertainty," in Proc. NAECON, 1983, pp. 349-355.
[18] E. K. Westwood, "Filtering algorithms for the linear estimation problem with switching parameters," M.S. thesis, Univ. of Texas at Austin, 1984.
[19] A. S. Willsky, "Detection of abrupt changes in dynamic systems," Rep. MIT-LIDS-P-1351, 1984.
[20] A. S. Willsky, E. Y. Chow, S. B. Gershwin, C. S. Greene, P. K. Houpt, and A. L. Kurkjian, "Dynamic model-based techniques for the detection of incidents on freeways," IEEE Trans. Automat. Contr., vol. 25, pp. 347-360, 1980.
[21] J. W. Woods, S. Dravida, and R. Mediavilla, "Image estimation using doubly stochastic Gaussian random field models," IEEE Trans. Pattern Anal. Machine Intell., vol. PAMI-9, pp. 245-253, 1987.
