Physical Interpretation of the Proper Orthogonal Modes Using the Singular Value Decomposition (Kerschen, 2002)
Proper orthogonal decomposition (POD) is a procedure for extracting a basis for a modal
decomposition from an ensemble of signals. A very appealing property of the POD is its
optimality. Among all possible decompositions of a random field, the POD is the most
efficient in the sense that for a given number of modes, the projection on the subspace used
for modelling the random field will on average contain the most energy possible. Although
POD has been regularly applied to non-linear problems, it is essential to underline that it is
a linear technique and that it is optimal only with respect to other linear representations.
The applications of this procedure are extensive in modelling of turbulence [1, 2] and image
processing [3], and POD is now emerging as a useful tool in the field of structural
dynamics. For instance, it has been applied to estimate the dimensionality of a system [4],
to build reduced order models [5, 6], and to the identification and updating of non-linear
systems [7-9].
The purpose of this paper is to determine whether a physical interpretation can be
attributed to the modes obtained from the decomposition, i.e., the proper orthogonal modes
(POMs). In particular, it is examined when the POMs are related to the vibration
eigenmodes. This work is closely related to the paper of Feeny and Kappagantu [10].
However, in the present paper, the emphasis is shifted towards the singular value
decomposition of the displacement matrix rather than the eigenvalue problem of the
covariance matrix. Furthermore, the case of linear systems under harmonic and white noise
excitations is discussed in greater detail.
The paper is organized as follows. In Section 2, the POD is briefly introduced. Section
3 gives a brief review of the singular value decomposition and its properties that are relevant
in the context of this paper. Sections 4, 5 and 6 study the physical interpretation of the
POMs of discrete linear systems, respectively, for the free response in the undamped and
damped cases, and for the harmonic response. Section 7 offers a geometric approach to the
comparison between vibration eigenmodes and POMs. It also investigates the relationship
between non-linear normal modes (NNMs) and POMs. Finally, the discussion of the
stationary random response of a linear system to a white noise excitation is included in
Appendix A.
Maximize
λ = (1/N) Σ_{k=1}^{N} ( ∫_Ω φ(x) v_k(x) dΩ )² / ∫_Ω φ(x) φ(x) dΩ,   x ∈ Ω,   (1), (2)
where v_k(x) denotes the kth member of the ensemble of signals and φ(x) is the candidate basis function.
Finally, the optimization problem can be reduced to the following integral eigenvalue
problem [6]:
∫_Ω K(x, x′) φ(x′) dx′ = λ φ(x),   (3)
where the kernel
K(x, x′) = (1/N) Σ_{k=1}^{N} v_k(x) v_k(x′)   (4)
is the averaged auto-correlation function. For a system discretized at m locations x_1, …, x_m, the kernel reduces to the (m×m) matrix
K = [ K(x_1, x_1)  ⋯  K(x_1, x_m) ; ⋮  ⋱  ⋮ ; K(x_m, x_1)  ⋯  K(x_m, x_m) ].   (5)
If the (m×n) matrix of sampled responses
Q = [q(t_1) ⋯ q(t_n)] = [ q_1(t_1)  ⋯  q_1(t_n) ; ⋮  ⋱  ⋮ ; q_m(t_1)  ⋯  q_m(t_n) ]   (6)
is formed, then the POMs are merely the eigenvectors of G = (1/n) Q Q^T and the
corresponding eigenvalues are the proper orthogonal values (POVs). A POV measures the
relative energy of the system dynamics contained in the associated POM.
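As a minimal illustration of this definition (not part of the original paper), the following Python sketch computes the POMs and POVs of a displacement matrix by the eigenvalue route described above; the function name and the use of NumPy are choices made here for the example.

```python
import numpy as np

def pod_from_covariance(Q):
    """POMs and POVs of an (m x n) displacement matrix Q.

    Sketch of the definition in the text: the POMs are the eigenvectors of
    G = (1/n) Q Q^T and the POVs are the corresponding eigenvalues.
    """
    n = Q.shape[1]
    G = Q @ Q.T / n                     # sample covariance matrix
    povs, poms = np.linalg.eigh(G)      # eigh: G is symmetric
    order = np.argsort(povs)[::-1]      # sort by decreasing energy
    return poms[:, order], povs[order]
```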
The objective of this section is to review the singular value decomposition (SVD) and its
features that are relevant in the context of POD. Particularly, it is pointed out that the
POMs are optimal with respect to energy content. For a detailed description of SVD and its
several possible applications in structural dynamics, the reader is referred to references
[14, 15]. Since the matrices considered throughout the paper are built from system
responses, e.g., displacements, the discussion is restricted to real matrices only.
For any real (m×n) matrix A, there exists a real factorization
A = U Σ V^T,   (7)
where U is an (m×m) orthonormal matrix whose columns form the left singular vectors; Σ is an (m×n) pseudo-diagonal and semi-positive-definite matrix whose diagonal entries are the singular values σ_i; and V is an (n×n) orthonormal matrix whose columns form the right singular vectors.
3.1. GEOMETRIC INTERPRETATION
The SVD of a matrix, seen as a collection of column vectors, provides important insight
into the oriented energy distribution of this set of vectors. It is worth recalling that
1. the energy of a vector sequence a_k building an (m×n) matrix A is defined via the Frobenius norm,
E(A) = ||A||_F² = Σ_{i=1}^{m} Σ_{j=1}^{n} a_ij² = Σ_{k=1}^{p} σ_k²,  where p = min(m, n),   (8)
so that the energy of a vector sequence is equal to the energy in its singular spectrum;
2. the oriented energy of a vector sequence in some direction p, with unit vector e_p of the m-dimensional column space, is the sum of squared projections of the vectors on to direction p,
E_p(A) = Σ_{k=1}^{n} (e_p^T a_k)².   (9)
One essential property of SVD is that extrema in this oriented energy distribution occur
at each left singular direction [15]. The oriented energy measured in the direction of the ith
left singular vector is equal to the ith singular value squared. Since the POMs are directly
related to the left singular vectors, it can be stated that they are optimal with respect to
energy content in a least-square sense, i.e., they capture more energy per mode than any
other set of basis functions.
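The two properties recalled above are easy to verify numerically. The short check below is an illustration added here, not taken from the paper; it uses an arbitrary random matrix and confirms that the squared Frobenius norm equals the sum of the squared singular values and that the oriented energy along the ith left singular vector equals σ_i².

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 100))            # arbitrary (m x n) vector sequence

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# equation (8): the energy equals the energy in the singular spectrum
assert np.isclose(np.linalg.norm(A, 'fro')**2, np.sum(s**2))

# oriented energy along the i-th left singular direction equals sigma_i^2
for i in range(len(s)):
    E_i = np.sum((U[:, i] @ A)**2)           # sum of squared projections, equation (9)
    assert np.isclose(E_i, s[i]**2)
```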
3.2.
The SVD of a matrix can be calculated by means of solving two eigenvalue problems, or
even one if only the left or the right singular vectors are required. Indeed,
AA^T = U Σ Σ^T U^T,   A^T A = V Σ^T Σ V^T.   (10)
Consequently, the singular values of A are found to be the square roots of the eigenvalues of
AA^T or A^T A. The left and right singular vectors of A are the eigenvectors of AA^T and A^T A
respectively. Applying this reasoning to POD, it is now clear that the POMs, defined as the
eigenvectors of the covariance matrix G = (1/n) AA^T, are the left singular vectors of A. The
POVs, defined as the eigenvalues of the covariance matrix, are the squares of the singular
values divided by the number of samples n. In conclusion, POD can be carried out directly
by means of an SVD of matrix A.
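The equivalence of the two routes can be checked with a few lines of NumPy. This is an added illustration, assuming a generic (m×n) matrix with distinct singular values so that the directions are determined up to sign.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 200))            # (m x n) response matrix, n samples
n = A.shape[1]

# route 1: eigenvalue problem of the covariance matrix
G = A @ A.T / n
povs, poms = np.linalg.eigh(G)
order = np.argsort(povs)[::-1]
povs, poms = povs[order], poms[:, order]

# route 2: singular value decomposition of A itself
U, s, Vt = np.linalg.svd(A, full_matrices=False)

assert np.allclose(povs, s**2 / n)                          # POVs = sigma^2 / n
assert np.allclose(np.abs(U.T @ poms), np.eye(A.shape[0]))  # same directions up to sign
```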
An interesting interpretation of the eigenvalue problem is that if a matrix is real,
symmetric and positive definite, then the eigenvectors of the matrix are the principal axes of
the associated quadratic form, which is an n-dimensional ellipsoid centered at the origin of
the Euclidean space [16]. Since AA^T is real, symmetric and positive definite, the POMs, as
eigenvectors of the covariance matrix, are the principal axes of the family of ellipsoids
defined by y^T G y = c, where y is a real non-zero vector and c is a positive constant.
It is worth pointing out that Feeny and Kappagantu showed that, if each data point has unit
mass, the POMs are the principal axes of inertia [10].
The aim of this section is to find the existing relationships between the POMs and the
eigenmodes of an undamped and unforced linear system with m d.o.f. The equation of
motion may be written as follows:
M q̈ + K q = 0,   (11)
where M and K are the mass and stiffness matrices, respectively, and q is the vector of
displacement co-ordinates.
The system response due to initial conditions may be expressed as
q(t) = Σ_{i=1}^{m} (A_i cos ω_i t + B_i sin ω_i t) x_i = Σ_{i=1}^{m} e_i(t) x_i,   (12)
where ω_i and x_i are the natural frequencies (in rad/s) and eigenmodes of the system, A_i and B_i are constants depending on the initial conditions, and e_i(t) = A_i cos ω_i t + B_i sin ω_i t represents the time modulation of mode x_i.
The time discretization of the system response leads to n sampled values of the time
functions, which form an (m×n) matrix whose columns are the members of the data ensemble:
Q = [q(t_1) ⋯ q(t_n)] = [ Σ_{i=1}^{m} e_i(t_1) x_i  ⋯  Σ_{i=1}^{m} e_i(t_n) x_i ]   (13)
  = [x_1 ⋯ x_m] [ e_1(t_1)  ⋯  e_1(t_n) ; ⋮  ⋱  ⋮ ; e_m(t_1)  ⋯  e_m(t_n) ]
  = [x_1 ⋯ x_m] [e_1 ⋯ e_m]^T = X E^T = X [I Z] [E R]^T,   (14)
where X is the (m×m) modal matrix whose columns are the eigenmodes of the system; E is an (n×m) matrix whose columns are the functions e_i(t) at times t_1, …, t_n; I is an (m×m) identity matrix; Z is an (m×(n−m)) matrix full of zeros; R is an (n×(n−m)) matrix; and e_i = [e_i(t_1) ⋯ e_i(t_n)]^T.
Attention should be paid to the fact that R does not influence Q since it is multiplied by
a matrix full of zeros. Equation (14) can be expressed in a more familiar form as
Q = X [I Z] [E R]^T ≡ U Σ V^T.   (15)
Accordingly, the above decomposition of Q may be thought of as the SVD of this matrix.
However, this decomposition requires matrices U and V to be orthonormal, as mentioned in
Section 3. The aim now is to find the conditions under which the columns of U (≡ X) and
V (≡ [E R]) are orthogonal.
1. The columns of U are formed by the eigenmodes of the structure. The eigenmodes are
orthogonal to each other in the metrics of the mass and stiffness matrices. If the mass
matrix is proportional to the identity matrix, it turns out that x_i^T x_j = δ_ij.
Consequently, X is orthogonal if the mass matrix is proportional to the identity
matrix.
2. It remains to determine when the columns of V are orthogonal. For this purpose,
equation (14) may be rewritten as follows:
Q = X [I Z] [E R]^T = X [ diag(||e_i||)  Z ] [ E diag(||e_i||^{-1})  R ]^T
  = [x_1 ⋯ x_m] [ diag(||e_1||, …, ||e_m||)  Z ] [ e_1/||e_1|| ⋯ e_m/||e_m||  R ]^T.   (16)
If the natural frequencies ω_i are distinct, it can be easily argued that the columns of
E diag(||e_i||^{-1}) are orthogonal if we consider an infinite set of sampled values, i.e.,
e_i^T e_j / (||e_i|| ||e_j||) → 0  if n → ∞,  i ≠ j.   (17)
Since R does not have an influence on Q, its columns can be computed in order that they are
orthogonal to those of E diag(||e_i||^{-1}). As can also be seen from equation (16), POD is
a bi-orthogonal decomposition that uncouples the spatial and temporal information
contained in the data.
To summarize, if the mass matrix is proportional to the identity matrix and if the number
of samples is infinite, the singular value decomposition of Q is such that
(1) the columns of U are the eigenmodes;
(2) the first m columns of V are the normalized time modulations of the modes.
As stated in Section 3.2, the POD basis vectors are just the columns of the matrix U in the
singular value decomposition of the displacement matrix. Therefore, it can be concluded
that the POMs converge to the eigenmodes of an undamped and unforced linear system
whose mass matrix is proportional to identity if a sufficient number of samples is considered.
Feeny and Kappagantu [10] previously obtained the same conclusion in a different way.
They based their demonstration on the fact that the POMs are the eigenvectors of the
covariance matrix.
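The convergence stated above can be observed numerically. The sketch below is an illustration added here; the stiffness matrix, modal amplitudes and record length are arbitrary choices. It simulates the free response (12) of a three-d.o.f. chain with unit masses and compares the POMs, obtained as the left singular vectors of Q, with the eigenmodes.

```python
import numpy as np

# 3-d.o.f. chain with unit masses (M = I), so the eigenmodes are orthonormal
K = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  2.]])
w2, X = np.linalg.eigh(K)                 # squared natural frequencies and eigenmodes
w = np.sqrt(w2)

t = np.linspace(0.0, 500.0, 20000)        # long record: a "sufficient number of samples"
amp = np.array([1.0, 0.7, 0.3])           # arbitrary initial-condition amplitudes
E = amp[:, None] * np.cos(w[:, None] * t) # modal time modulations e_i(t)
Q = X @ E                                 # free response, equations (12)-(13)

U, s, Vt = np.linalg.svd(Q, full_matrices=False)
print(np.round(np.abs(U.T @ X), 3))       # close to the identity: POMs ~ eigenmodes
```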
In the case of a mass matrix not proportional to identity, the POMs no longer converge
to the eigenmodes, since the former are orthogonal to each other while the latter are
orthogonal with respect to the mass matrix. However, knowing the mass matrix, it is still
possible to retrieve the eigenmodes from the POMs. Equation (11) has to be rewritten
through the co-ordinate transformation q = M^{-1/2} p as
p̈ + M^{-1/2} K M^{-1/2} p = 0.   (18)
In equation (18), the system matrices are still symmetric while the effective mass matrix is
equal to the identity. Thus, the left singular vectors of P = [p(t_1) ⋯ p(t_n)], i.e., the POMs,
converge to the eigenmodes y_i of this system. It is a simple matter to demonstrate that the
eigenmodes x_i of system (11) are related to those of system (18) by the following
relationship:
x_i = M^{-1/2} y_i.   (19)
This section has investigated the discrete case. A detailed study of distributed systems can
be found in reference [17], which shows that the conclusions remain valid if the
distributed system is uniformly discretized.
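A brief numerical sketch of the mass-weighting procedure of equations (18) and (19) is given below. It is added here as an illustration; the matrices and amplitudes are arbitrary, and SciPy is used for the matrix square root and the generalized eigenvalue problem. The POMs of P = M^{1/2} Q are mapped back through M^{-1/2} and compared with the true eigenmodes.

```python
import numpy as np
from scipy.linalg import sqrtm, eigh

M = np.diag([1.0, 2.0, 3.0])              # mass matrix not proportional to identity
K = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  2.]])
w2, X = eigh(K, M)                        # true eigenmodes of (11), mass-normalized
w = np.sqrt(w2)

t = np.linspace(0.0, 500.0, 20000)
amp = np.array([1.0, 0.7, 0.4])
Q = X @ (amp[:, None] * np.cos(w[:, None] * t))   # free response of system (11)

Mh = sqrtm(M).real                        # M^{1/2}
P = Mh @ Q                                # transformed response p = M^{1/2} q
U, s, Vt = np.linalg.svd(P, full_matrices=False)  # POMs of P converge to y_i
X_rec = np.linalg.solve(Mh, U)            # x_i = M^{-1/2} y_i, equation (19)

for i in range(3):                        # recovered modes align with the true ones
    c = abs(X[:, i] @ X_rec[:, i]) / (np.linalg.norm(X[:, i]) * np.linalg.norm(X_rec[:, i]))
    print(f"mode {i + 1}: |cos angle| = {c:.4f}")
```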
Consider now a damped but still unforced linear system with m d.o.f., for which the
equation of motion is given as follows:
M q̈ + C q̇ + K q = 0.   (20)
If the structure is lightly damped or with the assumption of modal damping, the system
response can be readily written as
q(t) = Σ_{i=1}^{m} A_i exp(−ζ_i ω_i t) cos( √(1 − ζ_i²) ω_i t + α_i ) x_i = Σ_{i=1}^{m} e_i(t) x_i.   (21)
The response matrix becomes
Q = [q(t_1) ⋯ q(t_n)] = [ Σ_{i=1}^{m} e_i(t_1) x_i  ⋯  Σ_{i=1}^{m} e_i(t_n) x_i ]
  = [x_1 ⋯ x_m] [e_1 ⋯ e_m]^T = X E^T = X [I Z] [E R]^T
  = [x_1 ⋯ x_m] [ diag(||e_1||, …, ||e_m||)  Z ] [ e_1/||e_1|| ⋯ e_m/||e_m||  R ]^T = U Σ V^T,   (22)
where e_i = [A_i exp(−ζ_i ω_i t_1) cos(√(1 − ζ_i²) ω_i t_1 + α_i)  ⋯  A_i exp(−ζ_i ω_i t_n) cos(√(1 − ζ_i²) ω_i t_n + α_i)]^T.
Again, the columns of U (≡ X) are orthogonal if the mass matrix is proportional to the
identity matrix. The main difference with the undamped case is that the time modulations
e_i(t) → 0 as t → ∞, since the system returns to the equilibrium position in a finite time.
Consequently, it can no longer be affirmed that ||e_i|| → ∞ if n → ∞ and that the columns of
E diag(||e_i||^{-1}) are orthogonal to each other. This causes a set of POMs different from the
eigenmodes.
This section is divided into two parts. Firstly, the harmonic response of a linear system is
considered. By harmonic response, we mean the combination of the free and forced
responses. Secondly, attention is focused only on the forced response of the linear system.
6.1. HARMONIC RESPONSE
The equation of motion of a linear system with m d.o.f. excited by a harmonic force with
a constant amplitude is
M q̈ + K q = f sin ω_e t.   (23)
Equation (23) may be transformed by considering a new variable s = sin ω_e t that accounts
for the harmonic force,
M q̈ + K q = f s,   s̈ + ω_e² s = 0,   with s(0) = 0, ṡ(0) = ω_e,   (24)
which yields
[ M  0 ; 0  1 ] [ q̈ ; s̈ ] + [ K  −f ; 0  ω_e² ] [ q ; s ] = [ 0 ; 0 ],   (25)
where the two block matrices are the augmented mass and stiffness matrices M* and K* respectively.
For the sake of clarity, denote by (ω_i, x_i) the eigensolutions of the initial system (23) and
by (ω_i*, x_i*) the eigensolutions of the transformed system (24). This latter system may be
viewed as an unforced system with m+1 d.o.f. (25). If the mass matrix is proportional to
identity and if the number of samples is large enough, Section 4 allows us to conclude that
the POMs of the transformed system response converge to the eigenmodes of that system.
Let us now compute the eigenmodes of the transformed system. These are the solutions of
(K* − ω*² M*) x* = 0,   (26)
i.e.,
( [ K  −f ; 0  ω_e² ] − ω*² [ M  0 ; 0  1 ] ) x* = 0,   (27)
whose characteristic equation is
det [ K − ω*² M   −f ; 0   ω_e² − ω*² ] = 0.   (28)
As can be seen from equation (28), the transformed system has m+1 eigenvalues.
m eigenvalues are equal to those of the initial system (23),
ω_i* = ω_i  with i = 1, …, m,   (29)
and the additional eigenvalue is equal to the square of the excitation frequency (in rad/s),
ω*_{m+1} = ω_e.   (30)
For i = 1, …, m, the corresponding eigenmodes satisfy
[ K − ω_i*² M   −f ; 0   ω_e² − ω_i*² ] x_i* = 0   (31)
and are therefore given by
x_i* = [ x_i ; 0 ].   (32)
The last eigenmode may be obtained from the matrix
M*^{-1} K* = [ M^{-1}  0 ; 0  1 ] [ K  −f ; 0  ω_e² ] = [ M^{-1}K   −M^{-1}f ; 0   ω_e² ]   (33)
and the corresponding eigenvalue relation
[ M^{-1}K   −M^{-1}f ; 0   ω_e² ] [ x_1* ⋯ x_m*  x*_{m+1} ] = [ x_1* ⋯ x_m*  x*_{m+1} ] [ diag(ω_1*², …, ω_m*²)  0 ; 0  ω*_{m+1}² ],   (34)
which, using equations (29), (30) and (32), may be written in block form as
[ M^{-1}K   −M^{-1}f ; 0   ω_e² ] [ X   x*_{m+1}(1:m) ; 0   x*_{m+1,m+1} ] = [ X   x*_{m+1}(1:m) ; 0   x*_{m+1,m+1} ] [ diag(ω_1², …, ω_m²)  0 ; 0  ω_e² ],   (35)
where x*_{m+1}(1:m) denotes the first m components of the last eigenmode and x*_{m+1,m+1} its last component.
Carrying out the products in equation (35) column by column yields
M^{-1}K X = X diag(ω_1², …, ω_m²),   (36)
M^{-1}K x*_{m+1}(1:m) − M^{-1}f x*_{m+1,m+1} = x*_{m+1}(1:m) ω_e²,   (37)
0 = 0,   (38)
ω_e² x*_{m+1,m+1} = x*_{m+1,m+1} ω_e².   (39)
Equation (37) allows us to calculate the first m components of the last eigenmode x*_{m+1}:
x*_{m+1}(1:m) = [M^{-1}K − ω_e² I]^{-1} M^{-1} f x*_{m+1,m+1} = [K − ω_e² M]^{-1} f x*_{m+1,m+1}.   (40)
[K − ω_e² M]^{-1} is the dynamic influence coefficient matrix and its spectral expansion is [18]
[K − ω_e² M]^{-1} = Σ_{i=1}^{m} x_i x_i^T / (μ_i (ω_i² − ω_e²)),   (41)
where μ_i is the generalized mass of the ith mode. Introducing expansion (41) into equation (40) gives the first m components of the last eigenmode as a combination of the eigenmodes of the initial system,
x*_{m+1}(1:m) = Σ_{i=1}^{m} [ x_i x_i^T f / (μ_i (ω_i² − ω_e²)) ] x*_{m+1,m+1}.   (42), (43)
To summarize, consider a matrix which contains the response of the transformed system
(24), i.e., its first m rows contain the response of the initial system (23) and its (m+1)th row
is the applied force:
Q* = [ q(t_1) ⋯ q(t_n) ; s(t_1) ⋯ s(t_n) ].   (44)
This matrix has m+1 POMs that have m+1 components. The dominant POM is related
to the forced harmonic response of the system and its first m components are given by
equation (43). Furthermore, if the mass matrix is proportional to identity, the first
m components of the remaining POMs are merely the eigenmodes of the linear system. This
perspective should be useful in the context of modal analysis.
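The eigenstructure of the transformed system derived above is easy to verify on a small example. The following sketch is added here with arbitrary example matrices, excitation shape and frequency; it builds M* and K* of equation (25) and checks equations (29), (30) and (40).

```python
import numpy as np

M = np.eye(3)                                # mass matrix proportional to identity
K = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  2.]])
f = np.array([1.0, 0.0, 0.0])                # spatial distribution of the harmonic force
w_e = 1.1                                    # excitation frequency (rad/s)

w2, X = np.linalg.eigh(K)                    # eigensolutions of the initial system (23)

# augmented (m+1)-d.o.f. matrices of equation (25)
M_star = np.block([[M, np.zeros((3, 1))], [np.zeros((1, 3)), np.ones((1, 1))]])
K_star = np.block([[K, -f[:, None]], [np.zeros((1, 3)), np.array([[w_e**2]])]])

lam, X_star = np.linalg.eig(np.linalg.solve(M_star, K_star))

# equations (29)-(30): eigenvalues are the m natural frequencies squared plus w_e^2
print(np.sort(lam.real), np.sort(np.append(w2, w_e**2)))

# equation (40): the eigenvector associated with w_e^2, scaled so its last entry is 1,
# has first m components equal to (K - w_e^2 M)^{-1} f
j = int(np.argmin(np.abs(lam - w_e**2)))
print(X_star[:3, j].real / X_star[3, j].real, np.linalg.solve(K - w_e**2 * M, f))
```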
6.2. FORCED RESPONSE
The forced response is defined as the part of the response synchronous to the excitation,
q(t) = q_f sin ω_e t.   (45)
Introducing equation (45) into the equation of motion (23) gives
q_f = [K − ω_e² M]^{-1} f,   (46)
and, using the spectral expansion (41),
q(t) = Σ_{i=1}^{m} [ x_i x_i^T / (μ_i (ω_i² − ω_e²)) ] f sin ω_e t.   (47)
The response matrix is then
Q = [q(t_1) ⋯ q(t_n)] = [ Σ_{i=1}^{m} x_i x_i^T f sin ω_e t_1 / (μ_i (ω_i² − ω_e²))   ⋯   Σ_{i=1}^{m} x_i x_i^T f sin ω_e t_n / (μ_i (ω_i² − ω_e²)) ].   (48)
This matrix may be factored as
Q = q_f e^T = [ q_f/||q_f||   S ] [ ||q_f|| ||e||  0 ⋯ 0 ; 0  0 ⋯ 0 ; ⋮ ] [ e/||e||   R ]^T = U Σ V^T,   (49)
where e = [sin ω_e t_1 ⋯ sin ω_e t_n]^T and S and R are matrices whose columns are orthogonal to q_f and e respectively. There is a single non-zero singular value, ||q_f|| ||e||, and the corresponding POM is
POM = q_f / ||q_f|| = Σ_{i=1}^{m} [ x_i x_i^T f / (μ_i (ω_i² − ω_e²)) ] / || Σ_{i=1}^{m} x_i x_i^T f / (μ_i (ω_i² − ω_e²)) ||.   (50)
Several remarks can be made.
2. Knowing the structural matrices and the spatial discretization of the excitation, the
POM may be calculated without first simulating the system response, as required in
the definition of the POMs.
3. The expression (50) of the POM is equal, up to a normalization, to the first m components
of the last eigenmode of the transformed system for the harmonic response (43). This
last eigenmode is thus related to the forced harmonic response.
4. The convergence of the dominant POM to an eigenmode is no longer guaranteed. The
POM now appears as a combination of all the eigenmodes. However, if the excitation
frequency ω_e tends to a resonant frequency of the system, ω_j for instance, then the
denominator ω_j² − ω_e² of the jth term of combination (50) tends to zero. It is thus
observed that this term has a much larger amplitude than the others:
POM = Σ_{i=1}^{m} [ x_i x_i^T f / (μ_i (ω_i² − ω_e²)) ] / || Σ_{i=1}^{m} x_i x_i^T f / (μ_i (ω_i² − ω_e²)) ||
    ≈ [ x_j x_j^T f / (μ_j (ω_j² − ω_e²)) ] / || x_j x_j^T f / (μ_j (ω_j² − ω_e²)) || = a x_j   if ω_e → ω_j.   (51)
Since x_j^T f represents a scalar product, the POM has the same direction as the
eigenmode x_j, which means that the POM is equal to the resonating mode shape. This
is consistent with the result obtained in reference [10] using the eigensolution perspective. It
is worth pointing out that the non-resonating mode shapes should not be revealed by
POD.
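Remark 4 can be illustrated with a short computation (added here; the system, force distribution and excitation frequency are arbitrary choices): the forced response matrix (48) has rank one, its single POM is q_f/||q_f||, and when ω_e approaches the second natural frequency this POM aligns with the second mode.

```python
import numpy as np

M = np.eye(3)
K = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  2.]])
f = np.array([1.0, 0.0, 0.0])

w2, X = np.linalg.eigh(K)
w_e = 0.99 * np.sqrt(w2[1])                    # excite just below the second natural frequency

t = np.linspace(0.0, 100.0, 5000)
q_f = np.linalg.solve(K - w_e**2 * M, f)       # forced-response amplitude, equation (46)
Q = np.outer(q_f, np.sin(w_e * t))             # rank-one forced response matrix, equation (48)

U, s, Vt = np.linalg.svd(Q, full_matrices=False)
pom = U[:, 0]                                  # single POM (the other singular values vanish)

print(np.round(s, 3))                          # one dominant singular value
print(f"|cos angle(POM, 2nd mode)| = {abs(pom @ X[:, 1]):.4f}")   # close to 1 near resonance
```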
For the sake of clarity, the eigenmodes of a linear system are called here linear normal
modes (LNMs). The determination of LNMs is reduced to the equivalent problem of
computing the eigensolutions of linear transformations. Obviously, such an approach, as
well as the superposition principle, is inadmissible for non-linear systems. The concept of
synchronous non-linear normal mode (NNM) for discrete conservative oscillators was
introduced for non-linear systems by Rosenberg [19]: "A nonlinear system vibrates in
normal modes when all masses execute periodic motions of the same period, when all of
them pass through the equilibrium position at the same instant, and when, at any time t, the
position of all the masses is uniquely defined by the position of any one of them."
The objective of this section is to examine the geometric interpretation of LNMs, NNMs
and POMs.
7.1. LINEAR SYSTEMS
Consider a linear system consisting of masses and springs (Figure 1). If the displacement
of the ith mass from its equilibrium position is denoted by q_i, then the equations of motion
of the system are
m_i q̈_i = k_i (q_{i−1} − q_i) − k_{i+1} (q_i − q_{i+1}),   i = 1, 2, …, n,   with q_0 = q_{n+1} ≡ 0.   (52)
Introducing the mass-normalized co-ordinates ξ_i = √(m_i) q_i, the equations of motion become
ξ̈_i = (k_i/√m_i) (ξ_{i−1}/√m_{i−1} − ξ_i/√m_i) − (k_{i+1}/√m_i) (ξ_i/√m_i − ξ_{i+1}/√m_{i+1}).   (53)
The transformed equations of motion (53) may be regarded as those of a unit mass which
moves in an n-dimensional space. The right-hand side of equation (53) derives from
a potential function,
ξ̈_i = ∂U/∂ξ_i,   with   U = −Σ_{i=1}^{n+1} (k_i/2) (ξ_{i−1}/√m_{i−1} − ξ_i/√m_i)².   (54)
If no external force is present and if the motion is due to an initial displacement, the system
occupies at time t = 0 a position of maximum potential, U = −U_0. This latter equation
defines an ellipsoid which is symmetric with respect to the origin. This ellipsoid is called the
bounding ellipsoid because all solutions must lie in this domain.
In his definition of a normal mode for a linear system, Rosenberg [19] stated that it is
a straight line in the (ξ_1, …, ξ_n) space which passes through the origin of that space and
which intersects the bounding ellipsoid orthogonally. It follows from this definition that the
LNMs are the principal axes of the bounding ellipsoid in the (ξ_1, …, ξ_n) space. This result
can also be obtained with the interpretation of the eigenvalue problem (Section 3.2). Further
discussion is given in Appendix B.
If the mass matrix is proportional to identity, the LNMs are also the principal axes of the
bounding ellipsoid in the (q_1, …, q_n) space, whose expression is
U_0 = Σ_{i=1}^{n+1} (k_i/2) (q_{i−1} − q_i)² = (1/2) q^T K q.   (55)
Figure 2. LNMs and POMs: principal axes of the similar and similarly placed ellipsoids associated with K and G.
As far as the POMs are concerned, they are the principal axes of the ellipsoid c = q^T G q,
where G is the covariance matrix (cf. Section 3.2). Since, for an unforced system with a mass
matrix proportional to identity, the POMs and the LNMs coincide, it can be concluded
that U_0 = (1/2) q^T K q and c = q^T G q are similar and similarly placed ellipsoids. This is illustrated
in Figure 2 (two-d.o.f. system with an initial displacement).
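This coincidence of principal axes is easy to check numerically. The two-d.o.f. sketch below is an illustration added here, with an arbitrary stiffness matrix and initial displacement: it builds the covariance matrix G of the free response and compares its principal axes with the LNMs, i.e., the eigenvectors of K for a unit mass matrix.

```python
import numpy as np

K = np.array([[ 3., -1.],
              [-1.,  2.]])                    # two-d.o.f. system with unit masses
w2, X = np.linalg.eigh(K)                     # LNMs: principal axes of (1/2) q^T K q
w = np.sqrt(w2)

q0 = np.array([1.0, 0.3])                     # initial displacement, zero initial velocity
t = np.linspace(0.0, 400.0, 40000)
Q = X @ ((X.T @ q0)[:, None] * np.cos(w[:, None] * t))   # free response

G = Q @ Q.T / Q.shape[1]                      # covariance matrix: ellipsoid c = q^T G q
_, axes_G = np.linalg.eigh(G)                 # its principal axes, i.e., the POMs

print(np.round(np.abs(axes_G.T @ X), 3))      # near a permutation of the identity
```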
7.2. NON-LINEAR SYSTEMS
8. CONCLUSION
This paper has presented a new way, based on the singular value decomposition, of
interpreting the POMs in the field of structural dynamics. This work has underlined some
features of POD which might be useful in the future. Since the POMs are related to the
vibration eigenmodes in some cases, POD may provide an alternative means of extracting
the mode shapes of a structure in modal analysis. POMs could also be used to reconstruct
a signal using a minimum number of modes.
ACKNOWLEDGMENTS
Mr Kerschen is supported by a grant from the Belgian National Fund for Scientific
Research, which is gratefully acknowledged. This work presents research results of the
Belgian programme on Inter-University Poles of Attraction, initiated by the Belgian state,
Prime Minister's office, Science Policy Programming. The scientific responsibility for this
paper is assumed by its authors.
REFERENCES
1. P. HOLMES, J. L. LUMLEY and G. BERKOOZ 1996 Turbulence, Coherent Structures, Dynamical Systems and Symmetry. New York: Cambridge University Press.
2. W. CAZEMIER 1997 Ph.D. Thesis, Rijksuniversiteit Groningen. Proper orthogonal decomposition and low dimensional models for turbulent flows.
3. G. UYTTERHOEVEN 1999 Ph.D. Thesis, Katholieke Universiteit Leuven. Wavelets: software and applications.
4. J. P. CUSUMANO, M. T. SHARKADY and B. W. KIMBLE 1993 Aerospace Structures: Nonlinear Dynamics and System Response, American Society of Mechanical Engineers AD-33, 13-22. Spatial coherence measurements of a chaotic flexible-beam impact oscillator.
5. R. KAPPAGANTU and B. F. FEENY 1999 Journal of Sound and Vibration 224, 863-877. An optimal modal reduction of a system with frictional excitation.
6. M. F. A. AZEEZ and A. F. VAKAKIS 1998 Technical Report, University of Illinois at Urbana-Champaign. Proper orthogonal decomposition of a class of vibroimpact oscillations.
7. T. K. HASSELMAN, M. C. ANDERSON and W. G. GAN 1998 Proceedings of the 16th International Modal Analysis Conference, Santa Barbara, U.S.A., 644-651. Principal component analysis for nonlinear model correlation, updating and uncertainty evaluation.
8. V. LENAERTS, G. KERSCHEN and J. C. GOLINVAL 2000 Proceedings of the 18th International Modal Analysis Conference, San Antonio, U.S.A. Parameter identification of nonlinear mechanical systems using proper orthogonal decomposition.
9. V. LENAERTS, G. KERSCHEN and J. C. GOLINVAL 2001 Mechanical Systems and Signal Processing 15, 31-43. Proper orthogonal decomposition for model updating of non-linear mechanical systems.
10. B. F. FEENY and R. KAPPAGANTU 1998 Journal of Sound and Vibration 211, 607-616. On the physical interpretation of proper orthogonal modes in vibrations.
11. D. KOSAMBI 1943 Journal of Indian Mathematical Society 7, 76-88. Statistics in function space.
12. H. HOTELLING 1933 Journal of Educational Psychology 24, 417-441 and 498-520. Analysis of a complex of statistical variables into principal components.
13. B. RAVINDRA 1999 Journal of Sound and Vibration 219, 189-192. Comments on "On the physical interpretation of proper orthogonal modes in vibrations".
14. D. OTTE 1994 Ph.D. Thesis, Katholieke Universiteit Leuven. Development and evaluation of singular value analysis methodologies for studying multivariate noise and vibration problems.
15. J. STAAR 1982 Ph.D. Thesis, Katholieke Universiteit Leuven. Concepts for reliable modelling of linear systems with application to on-line identification of multivariate state space descriptions.
16. L. MEIROVITCH 1980 Computational Methods in Structural Dynamics. Alphen a/d Rijn: Sijthoff and Noordhoff.
17. B. F. FEENY 1997 Proceedings of ASME Design Engineering Technical Conferences, Sacramento, U.S.A. Interpreting proper orthogonal modes in vibrations.
18. M. GERADIN and D. RIXEN 1994 Mechanical Vibrations, Theory and Application to Structural Dynamics. Paris: Masson.
19. R. M. ROSENBERG 1962 Journal of Applied Mechanics 29, 7-14. The normal modes of nonlinear n-degree-of-freedom systems.
20. S. W. SHAW and C. PIERRE 1993 Journal of Sound and Vibration 164, 85-124. Normal modes for non-linear vibratory systems.
21. A. E. BRYSON and Y. C. HO 1975 Applied Optimal Control (Optimization, Estimation and Control). New York: Wiley.
22. D. F. MORRISON 1967 Multivariate Statistical Methods, McGraw-Hill Series in Probability and Statistics. New York: McGraw-Hill.
APPENDIX A
This study concerns linear systems subjected to white noise sequences. With this aim, the
equation of motion is recast in the state variable form
ṙ = A r + B w,   q = D r,   (A1), (A2)
where
A = [ 0   I ; −M^{-1}K   −M^{-1}C ]
is the system matrix, B is the input matrix, D is the output matrix, and w(t) is a vector white
noise process such that
E[w(t)] = 0   and   E[w(t) w(τ)^T] = l δ(t − τ).
It is assumed that the system is stable and time invariant, and that all processes are
Gaussian. In this context, it can be shown [21] that the covariance matrix of the steady state
response, G_r = E[r(t) r(t)^T], satisfies the Lyapunov equation
A G_r + G_r A^T + B l B^T = 0.   (A3)
It is worth pointing out that G_r also corresponds, up to the constant l, to the controllability
grammian W_c of the system.
If only the displacements are considered, then the covariance matrix of the system
response becomes
G_q = E[q(t) q(t)^T] = D G_r D^T.   (A4)
Equation (A4) means that the POMs may be evaluated without first simulating the system.
Indeed, if the structural matrices are assumed to be known, the Lyapunov equation (A3)
may be solved in order to compute the covariance matrix G_r and consequently G_q. The
POMs are then the eigenvectors of G_q. The analytical relationship between the POMs and
the eigenmodes is now obscured.
If all states (displacement and velocity) are measured, the POMs are merely the
eigenvectors of the controllability grammian W_c.
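As a sketch of this procedure (added here; the example system, damping model and noise intensity are arbitrary, and SciPy's Lyapunov solver is used), the POMs of the displacement response can be obtained directly from the structural matrices:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

M = np.eye(3)
K = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  2.]])
C = 0.05 * K                                    # proportional (modal) damping
B = np.zeros((6, 1)); B[3, 0] = 1.0             # white noise force on the first d.o.f.
D = np.hstack([np.eye(3), np.zeros((3, 3))])    # observe the displacements only
l = 1.0                                         # noise intensity

A = np.block([[np.zeros((3, 3)), np.eye(3)],
              [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])

# steady state covariance: A G_r + G_r A^T + B l B^T = 0, equation (A3)
G_r = solve_continuous_lyapunov(A, -l * (B @ B.T))
G_q = D @ G_r @ D.T                             # displacement covariance, equation (A4)

povs, poms = np.linalg.eigh(G_q)                # POMs without simulating the response
print(np.round(poms[:, ::-1], 3))
```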
If the white noise excitation is Gaussian, then the POMs may be geometrically
interpreted. In that case, the response of the linear system is also Gaussian and is
characterized at each d.o.f. by a probability density function equal to
h(q) = (1/(√(2π) σ)) exp( −(q − μ)²/(2σ²) ),   (A5)
where μ = E[q] is the mean and σ = (E[(q − μ)²])^{1/2} is the standard deviation. The joint
probability density function reads
h(q_1, …, q_m) = (1/((2π)^{m/2} σ_1 ⋯ σ_m)) exp( −Σ_{i=1}^{m} (q_i − μ_i)²/(2σ_i²) ).   (A6)
It can be demonstrated [22] that the contours of h(q_1, …, q_m) consist of m-dimensional
ellipsoids and that the POMs are the principal axes of these ellipsoids.
APPENDIX B
The LNMs are the eigenvectors of the matrix M^{-1}K. In order that the LNMs be the
principal axes of the ellipsoid q^T M^{-1}K q = 1, the matrix M^{-1}K must be real, positive
definite and symmetric [16]. This is the case if the mass matrix is proportional to identity,
i.e., M = aI. Accordingly, the LNMs are the principal axes of the ellipsoid (1/a) q^T K q = 1.
This latter expression is, to a constant, the expression of the potential energy in the
(q_1, …, q_n) space. Since it is assumed that the mass matrix is proportional to identity, this is
also the expression, to a constant, of the potential energy in the (ξ_1, …, ξ_n) space. This is
another way to demonstrate that the LNMs are the principal axes of the bounding ellipsoid
in the (ξ_1, …, ξ_n) space.