
Int. J. Non-Linear Mechanics, Vol. 25, No. 5, pp. 555-568, 1990
Pergamon Press plc. Printed in Great Britain.

A DIRECT DETERMINATION OF ARMA ALGORITHMS
FOR THE SIMULATION OF STATIONARY RANDOM PROCESSES

MARC P. MIGNOLET
Department of Mechanical and Aerospace Engineering, Arizona State University, Tempe,
AZ 85287-6106, U.S.A.

and
POL D. SPANOS
L.B. Ryon Endowed Chair of Engineering, Department of Mechanical Engineering and
Civil Engineering, Brown School of Engineering, Rice University, P.O. Box 1892, Houston,
TX 77251, U.S.A.

(Received 7 August 1989)

Abstract-An efficient technique to determine autoregressive moving average (ARMA) algorithms for simulating realizations of multivariate random processes with a specified (target) spectral matrix is presented. The computation of the parameters of the ARMA model is accomplished by relying on the minimization of frequency domain errors. The need to perform a spectral factorization is discussed and some new algorithms to perform this operation are presented. Finally, the features of this technique are critically assessed and examples of application are given in the context of non-linear mechanics problems.

INTRODUCTION
The usefulness of the Monte Carlo simulation approach to random vibration analyses of
non-linear structures subjected to the effects of a hostile natural environment such as wind
turbulence, ocean waves, and earthquake ground motions has been presented in a series of
recent papers [1-4]. In these articles efficient algorithms for the generation of records of
values of Gaussian multivariate stationary processes with specified (target) spectral matrix
were presented.
In [2] the importance of the availability of an approximation of the target process as the
output to white noise input of autoregressive (AR) systems was pointed out. Specifically, it
was shown that a prior AR model can be the basis to obtain very efficient and reliable
autoregressive moving average (ARMA) algorithms. It was also demonstrated that the
determination of the AR system, which is straightforward for most processes, can become
a quite delicate task when the target spectral matrix belongs to a special class of functions.
An alternative technique for generating the required time histories, which is based on the
development of purely moving average (MA) systems, has also been presented [5-7]. It was
shown that this latter technique can yield in a straightforward manner reliable models of
most processes, even of the pathological ones described in ref. [2]. However, the number of
arithmetic operations involved in the computation of a record of values through this
algorithm can be much larger than the number corresponding to an ARMA simulation
algorithm.
In view of this situation, it is desirable to develop an ARMA determination technique
which does not require a prior reliable AR approximation. The goal of the present paper is
to present such a technique and assess its usefulness for efficient and accurate Monte Carlo
simulation analyses. For completeness of the presentation, the definitions of ARMA
processes and ARMA systems are briefly reviewed.

Contributed by J. N. Reddy.


AUTOREGRESSIVE MOVING AVERAGE APPROXIMATION

Preliminary remarks
An n-variate ARMA process P is a discrete random vector process whose rth sample can be obtained from the other ones in the following manner

$$A_0 P_r = -\sum_{k=1}^{p} A_k P_{r-k} + \sum_{l=0}^{q} B_l W_{r-l} \tag{1}$$

where A_k and B_l are real n x n matrices. The symbol W denotes an n-variate band-limited [-\omega_b, \omega_b] white noise process. The autocorrelation matrix of W is defined as

$$E[W_i W_j^t] = 2\omega_b I_n \delta_{ij} \tag{2}$$

where E[.] and [.]^t are the operators of mathematical expectation and transposition, respectively. The symbols I_n and \delta_{ij} denote the n x n identity matrix and the Kronecker delta. The sampling period T and the cut-off frequency \omega_b are related through the Nyquist relation

$$T = \frac{\pi}{\omega_b}. \tag{3}$$
The process P can also be considered as the output to white noise input of a multi-degree-of-freedom dynamic system whose transfer function matrix is

$$H(z) = D^{-1}(z)\, N(z) \tag{4}$$

with

$$D(z) = \sum_{k=0}^{p} A_k z^{-k} \tag{5}$$

and

$$N(z) = \sum_{l=0}^{q} B_l z^{-l}. \tag{6}$$

For the purpose of simulation, the corresponding spectral matrix

$$S_{PP}(\omega) = H^*(e^{j\omega T})\, H^t(e^{j\omega T}) \tag{7}$$

where [.]^* denotes the operation of complex conjugation, must represent a good approximation of a target expression S_{YY}(\omega).

Formulation of the optimum ARMA approximation


In this section, a selection strategy of the ARMA parameters A_k and B_l that leads to a good matching between the target and ARMA spectral matrices is presented. This approach is based on a separation of the target spectral matrix into its causal and anticausal parts [8]

$$S_{YY}(\omega) = S_C(\omega) + S_A(\omega) \tag{8}$$

where

$$S_C(\omega) = \frac{1}{2\omega_b}\left[\frac{1}{2} R_{YY}(0) + \sum_{k=1}^{\infty} R_{YY}(k)\, e^{-jk\omega T}\right] \tag{9}$$

and

$$S_A(\omega) = \frac{1}{2\omega_b}\left[\frac{1}{2} R_{YY}(0) + \sum_{k=1}^{\infty} R_{YY}(-k)\, e^{jk\omega T}\right]. \tag{10}$$

In the above equations the symbol R_{YY}(k) denotes the autocorrelation at lag k of the target process. It is related to the spectral matrix S_{YY}(\omega) through the Fourier transform

$$R_{YY}(k) = \int_{-\omega_b}^{\omega_b} S_{YY}(\omega)\, e^{jk\omega T}\, d\omega. \tag{11}$$
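For a target spectrum given in closed form, the integral of equation (11) can be approximated on a uniform frequency grid. The following sketch (hypothetical names; a simple rectangle-rule quadrature offered only as an illustration, not the authors' implementation) returns the lags R_YY(0), ..., R_YY(max_lag):

```python
import numpy as np

def autocorrelations(S, omega_b, T, max_lag, n_grid=4096):
    """Approximate R_YY(k) = int_{-w_b}^{w_b} S_YY(w) e^{jkwT} dw, eq. (11).

    S: callable mapping a frequency w to an (n, n) spectral matrix.
    """
    # Uniform grid over one full period [-w_b, w_b) of the integrand
    w = np.linspace(-omega_b, omega_b, n_grid, endpoint=False)
    dw = w[1] - w[0]
    Sw = np.array([S(wi) for wi in w])                 # shape (n_grid, n, n)
    R = []
    for k in range(max_lag + 1):
        phase = np.exp(1j * k * w * T)
        R.append((Sw * phase[:, None, None]).sum(axis=0) * dw)
    return R
```

Because the integrand is periodic over [-w_b, w_b] when T satisfies equation (3), the rectangle rule on a uniform grid is very accurate here.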

Given the matrix S_C(\omega), an ARMA approximation can be sought in the form

$$S_C(\omega) \approx N_0(e^{j\omega T})\, D_0^{-1}(e^{j\omega T}) \tag{12}$$

where

$$D_0(z) = \sum_{k=0}^{p_0} A_k^0 z^{-k} \tag{13}$$

and

$$N_0(z) = \sum_{l=0}^{q_0} B_l^0 z^{-l}. \tag{14}$$
Following an approximation technique similar to the one developed in [2], the ARMA parameters A_k^0 and B_l^0 are selected to minimize the error \varepsilon_C defined as

$$\varepsilon_C = \frac{1}{2\omega_b} \int_{-\omega_b}^{\omega_b} \left| S_C(\omega)\, D_0(e^{j\omega T}) - N_0(e^{j\omega T}) \right|^2 d\omega \tag{15}$$

where the symbol |U|^2 signifies the Euclidean norm of an arbitrary matrix U of elements [U]_{ij}. That is,

$$|U|^2 = \sum_{i=1}^{n} \sum_{j=1}^{n} [U]_{ij}\, [U]_{ij}^*. \tag{16}$$

Denoting by tr(U) the trace of the matrix U, it can be shown that the norm |U|^2 can also be written as

$$|U|^2 = \mathrm{tr}(U^* U^t). \tag{17}$$
The minimum of \varepsilon_C, equation (15), will be sought among the parameters A_k^0 and B_l^0 which satisfy the following constraints (see [2] for a justification)

$$A_0^0 = I_n \tag{18}$$

$$\sum_{k=\max(0,\,l-m)}^{l-1} R_{YY}(l-k)\, A_k^0 + \frac{1}{2} R_{YY}(0)\, A_l^0 = 0 \quad \text{for } l \in [q_0+1,\, q_0+r_0] \cap (-\infty,\, p_0] \tag{19}$$

and

$$\sum_{k=\max(0,\,l-m)}^{p_0} R_{YY}(l-k)\, A_k^0 = 0 \quad \text{for } l \in [q_0+1,\, q_0+r_0] \cap [p_0+1,\, +\infty). \tag{20}$$

The parameter r_0, 0 \le r_0 \le p_0, represents the number of constraints of the type of equations (19)-(20) which are enforced in the minimization step.
It is readily shown that the parameters A_k^0 and B_l^0 which minimize \varepsilon_C as defined by equation (15) and satisfy the constraints, equations (18)-(20), are the solution of

$$\sum_{k=0}^{p_0} \left[ \sum_{u=\max(k,\,i)}^{m+\min(k,\,i)} \hat R_{YY}(u-i)\, \hat R_{YY}^t(u-k) \right] A_k^0 - 2\omega_b \sum_{l=i}^{\min(q_0,\, m+i)} \hat R_{YY}(l-i)\, B_l^0 + \sum_{l=\max(q_0+1,\, i)}^{\min(p_0+r_0,\, m+i)} \hat R_{YY}(l-i)\, \Lambda_l = 0 \quad \text{for } i = 1, \ldots, p_0 \tag{21}$$

and

$$\sum_{k=\max(0,\,l-m)}^{\min(p_0,\, l)} \hat R_{YY}(l-k)\, A_k^0 = 2\omega_b\, B_l^0 \quad l = 0, \ldots, q_0 \tag{22}$$

together with equations (18)-(20). The integer m appearing in the limits of the summations of equations (21) and (22) is the highest lag k for which the autocorrelation R_{YY}(k) is non-negligible. That is, m is such that

$$R_{YY}(k) \approx 0 \quad \text{for } k > m. \tag{23}$$

Moreover, the matrices \hat R_{YY}(k) are defined as

$$\hat R_{YY}(k) = R_{YY}(k) \quad 0 < k \le m \tag{24a}$$

and

$$\hat R_{YY}(0) = \tfrac{1}{2} R_{YY}(0) \tag{24b}$$

and the symbol \Lambda_k, k = p_0+1, \ldots, p_0+r_0, denotes the r_0 Lagrange multipliers associated with the constraints, equations (19) and (20).

Similarly to the causal modeling, an approximation of the anticausal matrix S_A(\omega) can be sought in the form

$$S_A(\omega) \approx D_1^{-*}(e^{j\omega T})\, N_1^{*}(e^{j\omega T}) \tag{25}$$

where

$$D_1(z) = \sum_{k=0}^{p_1} A_k^1 z^{-k} \tag{26}$$

and

$$N_1(z) = \sum_{l=0}^{q_1} B_l^1 z^{-l}. \tag{27}$$

Proceeding similarly to the causal part modeling, an approximation technique based on the frequency domain error \varepsilon_A defined as

$$\varepsilon_A = \frac{1}{2\omega_b} \int_{-\omega_b}^{\omega_b} \left| D_1^{*}(e^{j\omega T})\, S_A(\omega) - N_1^{*}(e^{j\omega T}) \right|^2 d\omega \tag{28}$$

and on a set of constraints similar to equations (18)-(20) could be derived. In fact, this method would appear identical to an approximation of S_A^{t*}(\omega) by N_1^{t}(e^{j\omega T})\, D_1^{-t}(e^{j\omega T}) through the procedure presented above, equations (15), (18)-(20). However, the symmetry property of the autocorrelation sequence

$$R_{YY}^t(-k) = R_{YY}(k) \tag{29}$$

implies that the causal and anticausal matrices are related through the equation

$$S_C^{t*}(\omega) = S_A(\omega) \tag{30}$$

so that it is logical to assume p_0 = p_1 = p, q_0 = q_1 = q and r_0 = r_1 = r. On the basis of the uniqueness of the minimum of the errors \varepsilon_C and \varepsilon_A, equations (15) and (28), it is readily shown that the parameters of the causal and anticausal approximations satisfy the equations

$$A_k^1 = A_k^{0t} \tag{31}$$

and

$$B_l^1 = B_l^{0t}. \tag{32}$$
Upon examining the preceding results regarding the matching of the causal and anticausal parts of the target spectrum, it is seen that the ARMA spectral matrix can be written as

$$S_{PP}(\omega) = N_0(e^{j\omega T})\, D_0^{-1}(e^{j\omega T}) + D_0^{-t*}(e^{j\omega T})\, N_0^{t*}(e^{j\omega T}) \tag{33}$$

or

$$S_{PP}(\omega) = D_0^{-t*}(e^{j\omega T}) \left[ D_0^{t*}(e^{j\omega T})\, N_0(e^{j\omega T}) + N_0^{t*}(e^{j\omega T})\, D_0(e^{j\omega T}) \right] D_0^{-1}(e^{j\omega T}), \tag{34}$$

where the term in brackets represents a Hermitian matrix polynomial in e^{j\omega T} with powers ranging from -\max(p, q) to \max(p, q). Further, introduce the polynomial

$$N_2(z) = \sum_{l=0}^{\max(p,q)} B_l^2 z^{-l} \tag{35}$$

associated with a spectral factorization [9-13] of the term in brackets in equation (34). That is,

$$D_0^{t*}(e^{j\omega T})\, N_0(e^{j\omega T}) + N_0^{t*}(e^{j\omega T})\, D_0(e^{j\omega T}) = N_2^{*}(e^{j\omega T})\, N_2^{t}(e^{j\omega T}). \tag{36}$$

Then, combining equations (4), (7), (34) and (36) yields the final form of the ARMA simulation scheme. Specifically,

$$\sum_{k=0}^{p} A_k^0 P_{r-k} = \sum_{l=0}^{\max(p,q)} B_l^2 W_{r-l}. \tag{37}$$
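Once the matrices A_k^0 and B_l^2 are available, equation (37) is a simple recursion that can be marched forward in time. A minimal sketch (hypothetical function name; here a discrete Gaussian white noise of covariance 2*omega_b*I_n, consistent with equation (2), stands in for the band-limited process W, and the leading samples are discarded to reduce the start-up transient):

```python
import numpy as np

def simulate_arma(A, B2, omega_b, n_steps, rng=None):
    """March the recursion of eq. (37):
       sum_k A_k^0 P_{r-k} = sum_l B_l^2 W_{r-l}.

    A  = [A_0^0..A_p^0] with A_0^0 = I_n per eq. (18); B2 = [B_0^2..B_q^2].
    """
    rng = np.random.default_rng(rng)
    n = A[0].shape[0]
    p, q = len(A) - 1, len(B2) - 1
    m = max(p, q)
    # Gaussian white noise with E[W_r W_r^t] = 2*omega_b*I_n, eq. (2)
    W = rng.standard_normal((n_steps + m, n)) * np.sqrt(2 * omega_b)
    P = np.zeros((n_steps + m, n))
    for r in range(m, n_steps + m):
        rhs = sum(B2[l] @ W[r - l] for l in range(q + 1))
        rhs -= sum(A[k] @ P[r - k] for k in range(1, p + 1))
        P[r] = np.linalg.solve(A[0], rhs)
    return P[m:]
```

In the degenerate case p = q = 0 with identity matrices the recursion simply reproduces the white noise, whose variance must then equal 2*omega_b.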

Note that the discussed technique does not guarantee the positive semidefiniteness of D_0^{t*}(e^{j\omega T}) N_0(e^{j\omega T}) + N_0^{t*}(e^{j\omega T}) D_0(e^{j\omega T}), which is necessary for the existence of a solution to equation (36). If the target spectral matrix S_{YY}(\omega) is strictly positive definite, a close matching between this expression and its ARMA approximation S_{PP}(\omega) will suggest that the latter matrix is also positive definite. Consequently, the matrix

$$D_0^{t*}(e^{j\omega T})\, N_0(e^{j\omega T}) + N_0^{t*}(e^{j\omega T})\, D_0(e^{j\omega T}) = D_0^{t*}(e^{j\omega T})\, S_{PP}(\omega)\, D_0(e^{j\omega T}) \tag{38}$$

will also possess this property and a solution to equation (36) will exist.
However, if the target spectral matrix is only positive semidefinite, or nearly so, for a range of frequencies, the discussed technique may lead to a close ARMA approximation possessing some slightly negative eigenvalues. In this case a simple modification, such as any of the ones described in the Appendix, of the matrix D_0^{t*}(e^{j\omega T}) N_0(e^{j\omega T}) + N_0^{t*}(e^{j\omega T}) D_0(e^{j\omega T}) must be introduced to restore its positive definiteness. As the magnitude of these spurious eigenvalues is very small, the effect of this correction on the quality of matching of the spectral matrices will be negligible. Once the positive definiteness of the left-hand side of equation (36) has been ensured, the associated spectral factorization can be pursued. This computation is discussed in detail in the second part of this paper.

Properties

The properties of the ARMA approximation obtained from equations (18)-(24) and (31)-(32) are investigated next by relying on the results derived in [2, 3] in the context of an AR to ARMA two-stage spectral matching method. Specifically, the matching of q + r + 1 values of the input-output crosscorrelation of the ARMA systems proved in these references implies that

$$R_{PP}(k) = R_{YY}(k) \quad \text{for } k = -(q+r), \ldots, q+r \tag{39}$$

in the present approach. Note that the number, q + r + 1, of samples of the autocorrelation sequence which are matched is maximum when the parameter r is selected as r = p. Moreover, note that the corresponding approximation procedure requires only the values of R_{YY}(k), k = 0, \ldots, p+q, to obtain the ARMA parameters. In addition to the computational saving that this fact represents, the selection of r = p leads to an ARMA approximation of a real target spectral matrix which is also real. Indeed, in this case, the matching property, equation (39), implies that

$$N_0(e^{j\omega T})\, D_0^{-1}(e^{j\omega T}) = \frac{1}{2\omega_b}\left[\frac{1}{2} R_{PP}(0) + \sum_{k=1}^{\infty} R_{PP}(k)\, e^{-jk\omega T}\right] = \frac{1}{2\omega_b}\left[\frac{1}{2} R_{YY}(0) + \sum_{k=1}^{p+q} R_{YY}(k)\, e^{-jk\omega T} + \sum_{k=p+q+1}^{\infty} R_{PP}(k)\, e^{-jk\omega T}\right]. \tag{40}$$

Further, the real character of the target spectral matrix requires that

$$R_{YY}(k) = R_{YY}(-k) = R_{YY}^t(k) \quad \text{for all } k \tag{41}$$

so that

$$N_0(e^{j\omega T})\, D_0^{-1}(e^{j\omega T}) - D_0^{-t*}(e^{j\omega T})\, N_0^{t*}(e^{j\omega T}) = \frac{1}{2\omega_b} \sum_{k=p+q+1}^{\infty} \left[ R_{PP}(k) - R_{PP}^t(k) \right] e^{-jk\omega T}. \tag{42}$$

This last relation can be rewritten in the form

$$D_0^{t*}(e^{j\omega T})\, N_0(e^{j\omega T}) - N_0^{t*}(e^{j\omega T})\, D_0(e^{j\omega T}) = D_0^{t*}(e^{j\omega T}) \left\{ \frac{1}{2\omega_b} \sum_{k=p+q+1}^{\infty} \left[ R_{PP}(k) - R_{PP}^t(k) \right] e^{-jk\omega T} \right\} D_0(e^{j\omega T}). \tag{43}$$

Clearly, the right-hand side of this equation is a polynomial in e^{-j\omega T} whose lowest power is p + q + 1. On the contrary, the highest power in the left-hand side of the same relation is p + q. Thus, the equality is achieved for all \omega if and only if

$$D_0^{t*}(e^{j\omega T})\, N_0(e^{j\omega T}) - N_0^{t*}(e^{j\omega T})\, D_0(e^{j\omega T}) = 0 \quad \text{for all } \omega \tag{44}$$

and

$$R_{PP}(k) - R_{PP}^t(k) = 0 \quad \text{for all } k \ge p + q + 1. \tag{45}$$

Combining the last result with equations (39) and (41), it is readily shown that the ARMA autocorrelation matrices are all symmetric or, equivalently, that the corresponding spectral matrix is real. Specifically, equations (31)-(33) and (44) imply that

$$S_{PP}(\omega) = 2\,\mathrm{Re}\!\left[ N_0(e^{j\omega T})\, D_0^{-1}(e^{j\omega T}) \right] = 2\,\mathrm{Re}\!\left[ D_0^{-t*}(e^{j\omega T})\, N_0^{t*}(e^{j\omega T}) \right]. \tag{46}$$

SPECTRAL FACTORIZATION

Preliminary remarks

The computation of the matrix polynomial N_2(z) satisfying equation (36) is traditionally referred to as spectral factorization. In general terms, this problem can be stated in the following form. Given a Hermitian nonnegative definite matrix function S(\omega), seek a spectral factor Q(\omega) satisfying the relation

$$S(\omega) = Q^*(\omega)\, Q^t(\omega) \tag{47}$$

and having a Fourier expansion which involves only the harmonics e^{-jk\omega T}, k \ge 0. In addition to satisfying this causality condition, the spectral factor Q(\omega) is required to possess only real Fourier coefficients.

The questions of existence and uniqueness of the solution Q(\omega) have been resolved in ref. [9]. Specifically, there exist an infinite number of such spectral factors which differ by postmultiplication by a constant unitary matrix J. In other words, if Q(\omega) is a bona fide solution of equation (47), then Q(\omega)J is also a valid spectral factor provided that J is unitary or, equivalently, satisfies the relation

$$J^* J^t = I_n. \tag{48}$$

Further, if S(\omega) possesses only 2m + 1 non-zero Fourier coefficients

$$R(k) = \int_{-\omega_b}^{\omega_b} S(\omega)\, e^{jk\omega T}\, d\omega \qquad |k| \le m \tag{49}$$

then any spectral factor Q(\omega) is a polynomial of degree m in e^{-j\omega T}. Clearly, this result is quite crucial in solving equation (36) since it shows that the polynomial N_2(z) can be completely specified by the coefficients B_l^2, l = 0, \ldots, \max(p, q) = m.

Various numerical techniques have already been suggested for the computation of the spectral factor Q(\omega). Among these procedures are the method of alternating projections [9], the Cholesky decomposition of the covariance matrix [10], the numerical solution of the non-linear equations [11] and other approaches based on Riccati matrix equations [12, 13]. In the next section a simple factorization technique relying on the powerful autoregressive (AR) spectral approximation will be introduced. Note that some related studies have already been reported in references [14-16]. For completeness the definition and some properties of the AR modeling technique are briefly reviewed.

Autoregressive (AR) modeling

An n-variate autoregressive (AR) process P of order \hat m is a discrete random vector process whose rth sample can be computed from the \hat m previous ones in the following manner

$$P_r = -\sum_{k=1}^{\hat m} \hat A_k P_{r-k} + \hat B_0 W_r \tag{50}$$

where \hat A_k and \hat B_0 are real n x n matrices and W is defined by equation (2). The process defined by equation (50) can be considered as the response vector to white noise excitation of a discrete system whose transfer function matrix is

$$\hat H(z) = \hat D^{-1}(z)\, \hat B_0 \tag{51}$$

where

$$\hat D(z) = I_n + \sum_{k=1}^{\hat m} \hat A_k z^{-k}. \tag{52}$$

Given an arbitrary Hermitian positive definite matrix S(\omega), the AR modeling problem involves the determination of the coefficients \hat A_k and \hat B_0 so that the product \hat H^*(e^{j\omega T})\, \hat H^t(e^{j\omega T}) is close in some sense to S(\omega). It has been shown [17] that a meaningful measure of the error is

$$\varepsilon_{AR} = \frac{1}{2\omega_b} \int_{-\omega_b}^{\omega_b} \left| \hat D(e^{j\omega T})\, Q(\omega) - \hat B_0 \right|^2 d\omega \tag{53}$$

where Q(\omega) is a spectral factor of S(\omega) satisfying equation (47). It can be shown [2] that the minimum error is obtained when the coefficients \hat A_k satisfy the following equations (Yule-Walker equations)

$$R^t(l) + \sum_{k=1}^{\hat m} \hat A_k\, R(k - l) = 0 \quad \text{for } l = 1, \ldots, \hat m \tag{54}$$

where R(k) designates the Fourier coefficients of S(\omega) as defined by equation (49). The parameter \hat B_0 is obtained by equating the means of the matrix S(\omega) and of its AR approximation. That is,

$$\int_{-\omega_b}^{\omega_b} \hat D^{-*}(e^{j\omega T})\, \hat B_0 \hat B_0^t\, \hat D^{-t}(e^{j\omega T})\, d\omega = \int_{-\omega_b}^{\omega_b} S(\omega)\, d\omega \tag{55}$$

or

$$\hat B_0 \hat B_0^t = \frac{1}{2\omega_b} \left[ R(0) + \sum_{k=1}^{\hat m} \hat A_k\, R(k) \right]. \tag{56}$$

It has also been shown that the AR approximation \hat H^*(e^{j\omega T})\, \hat H^t(e^{j\omega T}) becomes an exact representation of the matrix S(\omega) as the system order \hat m tends to infinity, so that

$$\lim_{\hat m \to \infty} \hat H^*(e^{j\omega T})\, \hat H^t(e^{j\omega T}) = S(\omega). \tag{57}$$
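The Yule-Walker equations (54) and (56) form a linear block system for the \hat A_k. A sketch of a direct solver follows (hypothetical names; the Cholesky factor is only one admissible choice of \hat B_0, since equation (56) fixes only the product \hat B_0 \hat B_0^t):

```python
import numpy as np

def yule_walker(R, m_hat, omega_b):
    """Solve the block Yule-Walker equations (54) and (56).

    R: list of (n, n) Fourier coefficients R(0..m); R(-k) is taken as R(k)^t
       per eq. (70).  Returns ([A_1..A_m_hat], B0).
    """
    n = R[0].shape[0]
    def Rk(k):
        if abs(k) >= len(R):
            return np.zeros((n, n))
        return R[k] if k >= 0 else R[-k].T
    # Stack X = [A_1 .. A_m_hat]; eq. (54) reads X M = C with blocks
    # M[k, l] = R(k - l) and C[l] = -R^t(l)
    M = np.block([[Rk(k - l) for l in range(1, m_hat + 1)]
                  for k in range(1, m_hat + 1)])
    C = np.hstack([-Rk(l).T for l in range(1, m_hat + 1)])
    X = np.linalg.solve(M.T, C.T).T
    A_hat = [X[:, k * n:(k + 1) * n] for k in range(m_hat)]
    # eq. (56): B0 B0^t, then one factor via Cholesky
    S0 = (Rk(0) + sum(A_hat[k - 1] @ Rk(k)
                      for k in range(1, m_hat + 1))) / (2 * omega_b)
    B0 = np.linalg.cholesky(S0)
    return A_hat, B0
```

For a scalar AR(1) sequence with known autocorrelations the recovered coefficient and innovation factor can be checked by hand.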

The factorization method presented next is applicable to spectral matrices S(\omega), equation (47), that are exactly represented by limited Fourier series or that can be approximated by such expansions. Denoting by m the highest power of e^{-j\omega T} in this Fourier representation, it can be proved [9] that the spectral factor Q(\omega) can be written in the form

$$Q(\omega) = \sum_{k=0}^{m} Q_k\, e^{-jk\omega T} \tag{58}$$

where

$$Q_k = \frac{1}{2\omega_b} \int_{-\omega_b}^{\omega_b} Q(\omega)\, e^{jk\omega T}\, d\omega. \tag{59}$$

The asymptotic relation, equation (57), provides the basis for the following computationally simple factorization algorithm.
First, perform an AR(\hat m) modeling of the spectral matrix S(\omega) using the Yule-Walker equations, (54) and (56). Upon selecting an appropriate autoregressive order \hat m, the AR spectrum \hat H^*(e^{j\omega T})\, \hat H^t(e^{j\omega T}) represents a good approximation of the target matrix S(\omega), or equivalently

$$\hat Q(e^{j\omega T}) = \hat D^{-1}(e^{j\omega T})\, \hat B_0 \approx Q(\omega). \tag{60}$$

This relation implies that the matrices Q_k and the AR coefficients \hat A_k are approximately related through the following convolution equations

$$\sum_{l=\max(0,\,k-\hat m)}^{\min(k,\,m)} \hat A_{k-l}\, Q_l \approx \hat B_0\, \delta_{k0} \quad \text{for } k = 0, 1, \ldots, m + \hat m \tag{61}$$

where \hat A_0 = I_n in accordance with equation (52).

Finally, a least squares solution of this overdetermined system of equations for the Fourier coefficients Q_k of the spectral factor Q(\omega) is sought. Specifically, select these matrices to minimize the deconvolution error

$$\varepsilon_{dec} = \sum_{k=0}^{\mu} \left| \sum_{l=\max(0,\,k-\hat m)}^{\min(k,\,m)} \hat A_{k-l}\, Q_l - \hat B_0\, \delta_{k0} \right|^2 \tag{62}$$

for a given value of \mu \ge m. It can be shown that the minimum is attained when these matrices satisfy the linear system of equations

$$L_\mu^t L_\mu B = L_\mu^t \Gamma \tag{63}$$

where L_\mu, B and \Gamma are (\mu+1)n x (m+1)n, (m+1)n x n and (\mu+1)n x n matrices the block elements of which are, respectively,

$$[L_\mu]_{kl} = \hat A_{k-l} \quad \text{for } l \le k \le \min(l + \hat m,\, \mu) \text{ and } 0 \le l \le m \tag{64a}$$

$$[L_\mu]_{kl} = 0 \quad \text{otherwise} \tag{64b}$$

$$[B]_k = Q_k \quad \text{for } 0 \le k \le m \tag{65}$$

and

$$[\Gamma]_k = \hat B_0\, \delta_{k0} \quad \text{for } 0 \le k \le \mu. \tag{66}$$

The determination of the solution of the linear system of equations, equation (63), yields the Fourier coefficients Q_k and the spectral factor Q(\omega) using equation (58).
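Equations (63)-(66) amount to an ordinary linear least-squares problem in the stacked blocks Q_0, ..., Q_m. A sketch (hypothetical names; for \mu = m the block system is square, so the least-squares solution is exact):

```python
import numpy as np

def deconvolve_factor(A_hat, B0, m, mu=None):
    """Least-squares solution of the convolution equations (61) via eq. (63).

    A_hat = [A_1..A_m_hat] from the AR model (A_0 = I_n), B0 from eq. (56).
    Returns the blocks Q_0..Q_m of the spectral factor, eq. (58).
    """
    n = B0.shape[0]
    m_hat = len(A_hat)
    mu = m if mu is None else mu
    A = [np.eye(n)] + list(A_hat)
    # L_mu: (mu+1) x (m+1) block matrix with block (k, l) = A_{k-l}, eq. (64)
    def blk(k, l):
        return A[k - l] if 0 <= k - l <= m_hat else np.zeros((n, n))
    L = np.block([[blk(k, l) for l in range(m + 1)] for k in range(mu + 1)])
    # Gamma: block k = delta_{k0} B0, eq. (66)
    G = np.vstack([B0] + [np.zeros((n, n))] * mu)
    Q = np.linalg.lstsq(L, G, rcond=None)[0]
    return [Q[l * n:(l + 1) * n] for l in range(m + 1)]
```

A scalar check: if the AR model of Q(z) = 1 + 0.5 z^{-1} is D(z) = 1 - 0.5 z^{-1} truncated at order one with B_0 = 1, the deconvolution recovers Q_0 = 1 and Q_1 = 0.5.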

Application to the solution of equation (36)

The above technique can be used to provide an approximate solution to equation (36). In this case the parameter m equals \max(p, q), and the matrices S(\omega) and Q(\omega) are defined as

$$S(\omega) = D_0^{t*}(e^{j\omega T})\, N_0(e^{j\omega T}) + N_0^{t*}(e^{j\omega T})\, D_0(e^{j\omega T}) \tag{67}$$

and

$$Q(\omega) = N_2(e^{j\omega T}). \tag{68}$$

The corresponding Fourier coefficients R(k) are readily computed as

$$R(k) = 2\omega_b \left[ \sum_{l=k}^{\min(q,\,p+k)} A_{l-k}^{0t}\, B_l^0 + \sum_{l=0}^{\min(q,\,p-k)} B_l^{0t}\, A_{l+k}^0 \right] \quad \text{for } 0 \le k \le \max(p, q) \tag{69a}$$

and

$$R(k) = 0 \quad \text{for } k > \max(p, q). \tag{69b}$$

Note that the contribution of either of the two summations appearing in equation (69a) should be taken as zero if the starting value of the index l is greater than its final one. Further, the value R(k) for any negative integer k is readily obtained from the preceding formulae according to the symmetry relation

$$R(k) = R^t(-k). \tag{70}$$

From this set of Fourier coefficients, the AR parameters \hat A_k and \hat B_0 corresponding to a system order \hat m can be computed using equations (54) and (56). Finally, the matrices Q_k are obtained by solving the system of linear equations (63) for a given value \mu \ge m of the deconvolution parameter. This step completes the determination of the ARMA simulation algorithm, equation (37), as

$$B_l^2 = Q_l \quad l = 0, \ldots, \max(p, q). \tag{71}$$
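The coefficients of equation (69a) are plain polynomial convolutions of the A_k^0 and B_l^0 and can be checked directly against the Fourier integral (49). A sketch (hypothetical names):

```python
import numpy as np

def bracket_coefficients(A0, B0, p, q, omega_b):
    """Fourier coefficients R(k), eq. (69a), of the Hermitian term
    D_0^{t*} N_0 + N_0^{t*} D_0 to be factorized in eq. (36).

    A0 = [A_0^0..A_p^0], B0 = [B_0^0..B_q^0] (real (n, n) matrices).
    """
    n = A0[0].shape[0]
    R = []
    for k in range(max(p, q) + 1):
        Rk = np.zeros((n, n))
        for l in range(k, min(q, p + k) + 1):   # first sum in eq. (69a)
            Rk += A0[l - k].T @ B0[l]
        for l in range(0, min(q, p - k) + 1):   # second sum in eq. (69a)
            Rk += B0[l].T @ A0[l + k]
        R.append(2 * omega_b * Rk)
    return R
```

In Python an empty `range` silently contributes nothing, which implements the convention stated after equation (69a) for summations whose starting index exceeds the final one.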

NUMERICAL RESULTS

Implementation aspects

The preceding mathematical developments can be summarized in the following algorithmic form.
First, an appropriate value of the parameter m is obtained for a given spectral matrix S_{YY}(\omega) by relying on the condition specified by equation (23).
Next, upon selecting a set of parameters p, q and r, the coefficients A_k^0 and B_l^0 of the approximation of the causal part S_C(\omega) are determined from equations (18)-(22). The matrices A_k^1 and B_l^1 corresponding to the modeling of the anticausal part are then readily computed from equations (31) and (32) and the ARMA spectral matrix can be estimated for all frequencies using equation (34).
Once a satisfactory matching of the target and the ARMA spectral matrices has been obtained by an appropriate selection of the parameters p, q and r, the factorization of equation (36) must be performed. This step is accomplished by first determining the matrices R(k) satisfying equations (69) and (70). Then, the corresponding AR parameters \hat A_k and \hat B_0 can be computed from the Yule-Walker equations (54) and (56) and, upon selecting the parameter \mu, the coefficients B_l^2 are obtained as the solution of the system of linear equations (63)-(66). Note that the minimum value \mu = m has been found to lead to reliable estimates of the coefficients B_l^2.

Examples

The applicability of the present ARMA modeling technique will be demonstrated by considering various spectral shapes encountered in engineering. The notation ARMA(p, q) (r = 0) and ARMA(p, q) (r = p) will be used to denote ARMA approximations obtained on the basis of the selections r = 0 and r = p, respectively, in equations (18)-(24).
First, the dimensionless Pierson-Moskowitz (P-M) spectrum [7] defined as

$$S_{YY}(\omega) = \frac{1}{\omega^5}\, e^{-c/\omega^4} \tag{72}$$

with c = 5/4, which is used to model the elevation of ocean waves, was selected as target expression. This spectrum is particularly pertinent for problems of non-linear stochastic mechanics since it is commonly utilised in dynamic analyses of offshore structures exposed to non-linear drag forces [18]. Further, this spectrum is notoriously hard to approximate by AR algorithms [7]. Observe in Fig. 1 the excellent matching between this curve and the ARMA(7,7) (r = 0) and ARMA(8,8) (r = p) spectra, equation (33). Note, however, that the initial spectral approximations obtained by the present technique for r = 0 and r = p were found to be slightly negative in the neighbourhood of the origin. In fact, the positive spectra shown in Fig. 1 were obtained by relying on a spectral shift by a small constant as described in the Appendix.

Fig. 1. The dimensionless Pierson-Moskowitz spectrum, equation (72), and its ARMA approximations, T = 1.00.

Next, the von Karman model of turbulence was selected to demonstrate the reliability of the present ARMA modeling procedure in the multivariate case. According to this model, the dimensionless spectral matrix of the longitudinal, horizontal and vertical velocity

fluctuations is [19]

$$S_{YY}(\omega) = \begin{bmatrix} S_{uu}(\omega) & 0 & S_{uw}(\omega) \\ 0 & S_{vv}(\omega) & 0 \\ S_{uw}^*(\omega) & 0 & S_{ww}(\omega) \end{bmatrix} \tag{73}$$

with diagonal elements

$$S_{uu}(\omega) = \frac{1}{[1 + (1.339\omega)^2]^{5/6}}, \quad S_{vv}(\omega) = \frac{0.32\,[1 + 8(1.339\omega)^2/3]}{[1 + (1.339\omega)^2]^{11/6}}, \quad S_{ww}(\omega) = \frac{0.125\,[1 + 8(1.339\omega)^2/3]}{[1 + (1.339\omega)^2]^{11/6}}$$

and a complex longitudinal-vertical cross-spectrum S_{uw}(\omega) involving the factors \{[1 + (1.339\omega)^2][1 + 8(1.339\omega)^2/3]\}^{1/2} and 8[1 + 0.1\omega^2].

Figures 2-6 show the matching between the non-zero elements of the target spectral matrix and their ARMA(3,4) (r = 0) and ARMA(5,5) (r = p) approximations, equation (33). The longitudinal-horizontal and horizontal-vertical cross-spectra corresponding to the ARMA models are not shown because, like their target counterparts, they vanish at all frequencies. Further, note that the matching of the imaginary part of the diagonal components is not shown. Indeed, as both matrices S_{YY}(\omega) and S_{PP}(\omega) are Hermitian, these elements are purely real. Clearly, either of the ARMA simulation schemes can be used to produce reliable time histories of the trivariate process.
Note the small number of parameters required to obtain a good ARMA approximation of a target spectral matrix when the present ARMA modeling technique is used. Remarkably, in the preceding examples, the values of the ARMA system orders p and q required to achieve an excellent matching between the target and the ARMA spectra are slightly lower than or equal to the corresponding numbers used in connection with the two-stage AR to ARMA approximation technique [3, 19, 20].

CONCLUDING REMARKS

In this paper the determination of autoregressive moving average (ARMA) systems for the simulation of multivariate random processes with a given target spectral matrix has


Fig. 2. The dimensionless longitudinal von Karman spectrum, equation (73), and its ARMA approximations, T = 0.314.

Fig. 3. The dimensionless horizontal von Karman spectrum, equation (73), and its ARMA approximations, T = 0.314.


Fig. 4. The dimensionless vertical von Karman spectrum, equation (73), and its ARMA approximations, T = 0.314.

been addressed. Specifically, a new technique based on a separate modeling of the causal and anticausal parts of the spectral matrix, equations (9) and (10), as ratios of matrix polynomials has been presented. Both of these approximation problems, causal and anticausal parts, have been approached from the perspective of minimization of frequency domain errors, equations (15) and (28). An associated system of linear equations for the matrix coefficients of the polynomials has been derived, equations (18)-(24). The causal and anticausal parts representations lead to an ARMA approximation of the target spectrum. It has been found that the AR part of this ARMA system can readily be identified. However, the computation of the MA part requires the spectral factorization of equation (36). This fact has motivated the study of the spectral factorization problem, equation (47), and of its possible solution techniques. In particular, an algorithm based on a purely AR modeling of

Fig. 5. The real part of the longitudinal-vertical von Karman cross-spectrum, equation (73), and its ARMA approximations, T = 0.314.


Fig. 6. The imaginary part of the longitudinal-vertical von Karman cross-spectrum, equation (73), and its ARMA approximations, T = 0.314.

the matrix to be factorized has been devised. It was shown that the corresponding AR parameters and the Fourier coefficients of the required spectral factor are related through an overdetermined set of convolution equations, equation (61). A least squares solution of this system has been presented and the set of equations for the Fourier coefficients of the spectral factor has been derived, equations (63)-(66). It has been pointed out that these relations are linear in the unknowns so that this spectral factorization technique requires solving systems of linear algebraic equations only.
Finally, various spectral shapes encountered in problems of linear and non-linear mechanics have been considered. These applications have shown that an excellent matching

between target and ARMA spectral matrices can be achieved by using a very small number of ARMA coefficients. In connection with these results, it has been proved that the target and the corresponding ARMA autocorrelation sequences coincide at q + r + 1 lags, equation (39).

REFERENCES

1. E. Samaras, M. Shinozuka and A. Tsurui, ARMA representation of random processes. J. engng Mech. 111, 449-461 (1985).
2. M. P. Mignolet and P. D. Spanos, Recursive simulation of stationary multivariate random processes - Part I. J. appl. Mech. 109, 674-680 (1987).
3. P. D. Spanos and M. P. Mignolet, Recursive simulation of stationary multivariate random processes - Part II. J. appl. Mech. 109, 681-687 (1987).
4. T. Naganuma, G. Deodatis and M. Shinozuka, ARMA model for two-dimensional processes. J. engng Mech. 113, 234-251 (1987).
5. L. E. Borgman, Ocean wave simulation for engineering design. J. Waterways Harb. Div., Am. Soc. Civ. Engrs, 557-583 (1969).
6. B. G. Burke and J. T. Tighe, A time series model for dynamic behavior of offshore structures. Soc. Pet. Engrs J., 156-170 (1972).
7. P-T. D. Spanos, ARMA algorithms for ocean wave modeling. J. Energy Res. Technol. 105, 300-309 (1983).
8. J. A. Cadzow, Spectral estimation: an overdetermined rational model equation approach. Proc. IEEE 70, 907-939 (1982).
9. N. Wiener and P. Masani, The prediction theory of multivariate stochastic processes, I. The regularity condition. Acta math. 98, 111-150 (1957).
10. D. C. Youla and N. N. Kazanjian, Bauer-type factorization of positive matrices and the theory of matrix polynomials orthogonal on the unit circle. IEEE Trans. Circuits Syst. CAS-25, 57-69 (1978).
11. J. Jezek and V. Kucera, Efficient algorithm for matrix spectral factorization. Automatica 21, 663-669 (1985).
12. W. G. Tuel, Computer algorithm for spectral factorization of rational matrices. IBM J. Res. Dev., 163-170 (1968).
13. B. D. O. Anderson, K. L. Hitz and N. D. Diem, Recursive algorithm for spectral factorization. IEEE Trans. Circuits Syst. CAS-21, 742-750 (1974).
14. J. Durbin, Efficient estimation of parameters in moving-average models. Biometrika 46, 306-316 (1959).
15. K. Steiglitz, On the simultaneous estimation of poles and zeros in speech analysis. IEEE Trans. Acoust. Speech Sig. Process. ASSP-25, 229-234 (1977).
16. J. L. Shanks, Recursion filters for digital processing. Geophysics 32, 33-51 (1967).
17. E. J. Hannan, Multiple Time Series. Wiley, New York (1970).
18. S. K. Chakrabarti, Hydrodynamics of Offshore Structures. Springer, New York (1987).
19. P-T. D. Spanos and K. P. Schultz, Numerical synthesis of trivariate velocity realizations of turbulence. Int. J. Non-Linear Mech. 21, 269-277 (1986).
20. P-T. D. Spanos and K. P. Schultz, Two-stage order-of-magnitude matching for the von Karman turbulence spectrum. Proc. 4th Int. Conf. on Structural Safety and Reliability, Kobe, Japan, 27-29 May, Vol. I, pp. 211-217 (1985).
21. M. P. Mignolet, ARMA simulation of multivariate and multidimensional random processes. Ph.D. Dissertation, Rice University, Houston, Texas (1987).
22. J. A. Cadzow and Y. Sun, Sequences with positive semidefinite Fourier transforms. IEEE Trans. Acoust. Speech Sig. Process. ASSP-34, 1502-1510 (1986).

APPENDIX: POSITIVE DEFINITENESS OF $D_0^{t*}(e^{j\omega T})\, N_0(e^{j\omega T}) + N_0^{t*}(e^{j\omega T})\, D_0(e^{j\omega T})$

It was recognized in the main body of this paper that the positive definiteness of the matrix D_0^{t*}(e^{j\omega T}) N_0(e^{j\omega T}) + N_0^{t*}(e^{j\omega T}) D_0(e^{j\omega T}) is a necessary and sufficient condition for the existence of a polynomial N_2(z) satisfying exactly equation (36). If the above property does not hold, an approximate solution to this equation must be produced before proceeding to the final determination of the ARMA system.
An approach to solving equation (36) approximately is based on seeking N_2(e^{j\omega T}), in the form of equation (35), so as to minimize the error

$$\varepsilon = \frac{1}{2\omega_b} \int_{-\omega_b}^{\omega_b} \left| D_0^{t*}(e^{j\omega T})\, N_0(e^{j\omega T}) + N_0^{t*}(e^{j\omega T})\, D_0(e^{j\omega T}) - N_2^{*}(e^{j\omega T})\, N_2^{t}(e^{j\omega T}) \right|^2 d\omega. \tag{74}$$

This criterion leads to the following system of equations

$$\sum_{k=0}^{\max(p,q)} \left[ 2\omega_b \sum_{u=\max(0,\,l-k)}^{\min(m,\,m+l-k)} B_u^2\, B_{u+k-l}^{2t} - R(k-l) \right] B_k^2 = 0 \quad \text{for } l = 0, \ldots, \max(p,q) \tag{75}$$

where the matrices R(k) are defined by equations (69) and (70).
By solving equation (75), the unknown matrix parameters B_l^2 can be determined. However, this approach is numerically impeded by the fact that equation (75) is non-linear in terms of B_l^2.
Another approach to solving equation (36) approximately can be based on replacing its left-hand side by a positive semidefinite matrix S'(\omega), involving a Hermitian matrix polynomial K(e^{j\omega T}), such that

$$S'(\omega) = D_0^{t*}(e^{j\omega T})\, N_0(e^{j\omega T}) + N_0^{t*}(e^{j\omega T})\, D_0(e^{j\omega T}) + K(e^{j\omega T}) \tag{76}$$

is indeed positive definite for all \omega. To retain the polynomial character of S'(\omega), the powers of e^{j\omega T} in K(e^{j\omega T}) should range from -\max(p, q) to \max(p, q). Further, the eigenvalues of this matrix should be as small as possible in order to minimize the corresponding modifications of the optimum ARMA approximation S_{PP}(\omega) of S_{YY}(\omega).
A convenient choice for K(e^{j\omega T}) is

$$K(e^{j\omega T}) = K_1(e^{j\omega T}) = \nu_1 I_n \tag{77}$$

where \nu_1 is a positive constant. In this case, the eigenvalues of S'(\omega) are equal to the eigenvalues of D_0^{t*}(e^{j\omega T}) N_0(e^{j\omega T}) + N_0^{t*}(e^{j\omega T}) D_0(e^{j\omega T}) increased by the quantity \nu_1. Another reasonable choice is

$$K(e^{j\omega T}) = K_2(e^{j\omega T}) = \nu_2\, D_0^{t*}(e^{j\omega T})\, D_0(e^{j\omega T}) \tag{78}$$

where \nu_2 is a positive scalar. In this case, the eigenvalues of D_0^{-t*}(e^{j\omega T})\, S'(\omega)\, D_0^{-1}(e^{j\omega T}) are equal to the eigenvalues of S_{PP}(\omega) increased by the quantity \nu_2. Thus, appropriate values of these constants making S'(\omega) positive semidefinite are

$$\nu_1 = -\min_{|\omega| \le \omega_b}\; \min_{i=1,\ldots,n}\; \lambda_i\!\left[ D_0^{t*}(e^{j\omega T})\, N_0(e^{j\omega T}) + N_0^{t*}(e^{j\omega T})\, D_0(e^{j\omega T}) \right] \tag{79}$$

and

$$\nu_2 = -\min_{|\omega| \le \omega_b}\; \min_{i=1,\ldots,n}\; \lambda_i\!\left[ S_{PP}(\omega) \right] \tag{80}$$

where \lambda_i[U], i = 1, \ldots, n, denotes the n eigenvalues of an arbitrary n x n matrix U.
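The constant of equation (79) can be obtained by a dense frequency scan of the smallest eigenvalue. A sketch (hypothetical names; the Hermitian matrix is rebuilt from its Fourier coefficients R(k) of equations (69)-(70), with R(-k) taken as the conjugate transpose of R(k)):

```python
import numpy as np

def shift_constant(R, omega_b, T, n_grid=512):
    """Scan eq. (79): nu_1 = -min over |w| <= w_b of the smallest eigenvalue
    of the matrix whose Fourier coefficients, eq. (49), are R(0..m).
    Returns 0 when the matrix is already positive semidefinite."""
    lam_min = np.inf
    for w in np.linspace(-omega_b, omega_b, n_grid):
        S = R[0].astype(complex)
        for k in range(1, len(R)):
            S = S + R[k] * np.exp(-1j * k * w * T)
            S = S + R[k].conj().T * np.exp(1j * k * w * T)
        S = S / (2 * omega_b)   # density associated with the coefficients R(k)
        lam_min = min(lam_min, np.linalg.eigvalsh(S).min())
    return max(0.0, -lam_min)
```

For a scalar density 1 + 2 cos(wT) scanned over a full period, the minimum eigenvalue is -1, so the shift constant is 1.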
A third approach to producing an approximate solution to equation (36) is to replace D_0^{t*}(e^{j\omega T}) N_0(e^{j\omega T}) + N_0^{t*}(e^{j\omega T}) D_0(e^{j\omega T}) by a Hermitian positive semidefinite matrix \tilde S(\omega) so that the error

$$\varepsilon = \frac{1}{2\omega_b} \int_{-\omega_b}^{\omega_b} \left| D_0^{t*}(e^{j\omega T})\, N_0(e^{j\omega T}) + N_0^{t*}(e^{j\omega T})\, D_0(e^{j\omega T}) - \tilde S(\omega) \right|^2 d\omega \tag{81}$$

is minimized. It can be shown [21] that the corresponding matrix \tilde S(\omega) has the same eigenvectors as D_0^{t*}(e^{j\omega T}) N_0(e^{j\omega T}) + N_0^{t*}(e^{j\omega T}) D_0(e^{j\omega T}) and that its eigenvalues are equal to the corresponding values of this matrix or to zero, whichever is greater. Further, if the matrix \tilde S(\omega) is restricted to possess a finite Fourier series representation, an algorithm is available for its computation [22].
