Structural Dynamics
Joseph C. Slater
March 21, 2006
Contents
1 Concepts of Linear Algebra 4
1.1 Linear Vector Spaces . . . . . . . . . . . . . . . . . . . . . . . 4
1.2 Linear Dependence . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.2 Example . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.3 Bases and Dimension of a Vector Space . . . . . . . . . . . . . 6
1.3.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.4 Inner Products and Orthogonal Vectors . . . . . . . . . . . . . 7
1.4.1 Vector norms . . . . . . . . . . . . . . . . . . . . . . . 8
1.5 Gram-Schmidt Orthogonalization . . . . . . . . . . . . . . 9
1.6 The Eigenvalue Problem . . . . . . . . . . . . . . . . . . . . . 10
1.6.1 Singular Value Decomposition - Principal Component Analysis . . . . . . . . . . . . 12
1.7 Properties of Matrices . . . . . . . . . . . . . . . . . . . . . . 14
2.5.2 Lyapunov Stability Criteria . . . . . . . . . . . . . . . 21
2.5.3 Conservative Systems . . . . . . . . . . . . . . . . . . . 21
2.5.4 Systems with Damping . . . . . . . . . . . . . . . . . . 22
2.5.5 Gyroscopic Systems . . . . . . . . . . . . . . . . . . . . 23
2.5.6 Damped Gyroscopic Systems . . . . . . . . . . . . . . . 23
2.5.7 Circulatory Systems . . . . . . . . . . . . . . . . . . . 24
2.5.8 General Asymmetric Systems . . . . . . . . . . . . . . 24
2.6 Self-Adjoint (Symmetric) Systems . . . . . . . . . . . . . . . . 24
2.7 Maximum-Minimum Characteristics of Eigenvalues . . . . . . 35
2.8 The Inclusion Principle . . . . . . . . . . . . . . . . . . . . . . 37
2.8.1 Example: The inclusion principle . . . . . . . . . . . . 37
2.9 Perturbation of the Symmetric Eigenvalue Problem . . . . . . 38
2.9.1 Example: Perturbation . . . . . . . . . . . . . . . . . . 40
4 Energy Methods 55
4.1 Virtual Work (Shames Solid Mechanics), p. 62 our book (discrete) and p. 369, eqn. 7.20 our book . . . . . . 55
4.1.1 Review from Strength of Materials . . . . . . . . . . . 57
4.1.2 Example . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.2 Derivation of Hamilton’s Principle from Virtual Work . . . . . 61
4.2.1 Example . . . . . . . . . . . . . . . . . . . . . . . . . . 64
4.2.2 Example 2: String with tension T . . . . . . . . . . . . 65
4.2.3 String Boundary Condition Example . . . . . . . . . . 67
4.3 Lagrange’s Equation for a Continuous System . . . . . . . . . 68
4.3.1 Example: Beam is bending on spinning shaft . . . . . . 71
5 The Eigenvalue Problem 73
5.1 Self-adjoint systems . . . . . . . . . . . . . . . . . . . . . . . . 74
5.1.1 Example: Self-adjointness of beam stiffness operator . . 79
5.2 Proof that the eigenfunctions are orthogonal for self-adjoint systems . . . . . . . . . . 80
5.3 Non self-adjoint systems . . . . . . . . . . . . . . . . . . . . . 82
5.4 Repeated eigenvalues . . . . . . . . . . . . . . . . . . . . . . . 83
5.5 Vibration of Rods, Shafts, and Strings . . . . . . . . . . . . 85
5.6 Bending Vibration of a Helicopter Blade . . . . . . . . . . . . 89
5.7 Variational Characterization of the Eigenvalues . . . . . . . . 91
5.8 Integral Formulation of the Eigenvalue Problem . . . . . . . . 94
5.8.1 Example: Cantilever beam. EI=const. . . . . . . . . . 96
5.8.2 Example: String solution using Green’s functions . . . 97
1 Concepts of Linear Algebra
1.1 Linear Vector Spaces
Definition 1 (Field) A set of scalars (F) possessing certain algebraic properties:

1. Commutativity: α + β = β + α and αβ = βα
2. Associativity: (α + β) + γ = α + (β + γ) and (αβ)γ = α(βγ)
3. Distributivity: α(β + γ) = αβ + αγ
4. Identity: α + 0 = α, α · 1 = α

Vectors in a linear space S over F satisfy

1. Commutativity: x + y = y + x
2. Associativity: (x + y) + z = x + (y + z)

and S is closed under the operations:

1. If x and y are in S, then x + y is in S
2. If x is in S and α is in F, then αx is in S
4
1.2 Linear Dependence
A set of vectors xᵢ, i = 1, 2, 3, . . . , n, in a linear space Cⁿ¹ is linearly independent iff

$$\sum_{i=1}^{n} \alpha_i x_i = 0$$

only when all αᵢ = 0.
1.2.1 Example
$$x_1 = \begin{bmatrix}1\\2\end{bmatrix}, \qquad x_2 = \begin{bmatrix}0\\2\end{bmatrix}$$

$$\alpha_1 x_1 + \alpha_2 x_2 = 0$$

$$\alpha_1(1) + \alpha_2(0) = 0 \;\Rightarrow\; \alpha_1 = 0$$
$$\alpha_1(2) + \alpha_2(2) = 0 \;\Rightarrow\; \alpha_2 = 0$$

∴ x₁ and x₂ are independent
¹ Complex vector space.
1.2.2 Example
$$x_1 = \begin{bmatrix}1\\2\\2\end{bmatrix}, \qquad x_2 = \begin{bmatrix}-1\\3\\1\end{bmatrix}, \qquad x_3 = \begin{bmatrix}1\\7\\5\end{bmatrix}$$
α1 = 2, α2 = 1, α3 = −1
satisfies
α1 x1 + α2 x2 + α3 x3 = 0
Thus x1 , x2 , and x3 are not independent.
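The dependence of these vectors can be checked numerically; a minimal NumPy sketch, where X simply stacks the three example vectors as columns:

```python
import numpy as np

# Vectors of example 1.2.2, stacked as columns of a matrix.
X = np.array([[1, -1, 1],
              [2,  3, 7],
              [2,  1, 5]])

# Rank less than the number of columns means the columns are dependent.
print(np.linalg.matrix_rank(X))  # 2, so the three vectors are dependent

# The stated coefficients alpha = (2, 1, -1) indeed give the zero vector.
alpha = np.array([2, 1, -1])
print(X @ alpha)  # [0 0 0]
```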
The subspace S of L spanned by the vectors xᵢ is defined by

$$S = \left\{ \sum_{i=1}^{n} \alpha_i x_i \right\}$$
• If they are independent, then they are basis vectors for the space L.
1.3.1 Example
The vectors of examples 1.2.1 and 1.2.2 each are generating systems for spaces
of dimension (n).
However, they do not span the same space.
The vectors of example 1.2.2 span a 2-D space that can be visualized as the plane ⊥ to

$$x_1 \times x_2 = \begin{bmatrix}-4\\-3\\5\end{bmatrix}$$
The dimension of the space IS NOT the length of the vector!
where ȳi is the complex conjugate of yi . The inner product space Ln defined
over the field of complex numbers is called unitary space.
When x and y are real
(x, y) = x1 y1 + x2 y2 + . . . + xn yn
3. $(x, y) = \overline{(y, x)}$
4. $(\lambda x, y) = \lambda(x, y)$ and $(x, \lambda y) = \bar{\lambda}(x, y)$ for all λ in F
5. Distributive law: $(x, y + z) = (x, y) + (x, z)$ in Lⁿ
1.4.1 Vector norms
A measure of the length of a vector is the norm, ‖x‖.

3. ‖x + y‖ ≤ ‖x‖ + ‖y‖
$$\|x\| = \left( \sum_{i=1}^{n} x_i^2 \right)^{1/2}$$
Two vectors x and y are orthogonal if (x, y) = 0.
If all pairs of vectors in a set are orthogonal, the set is an orthogonal set.
If they have unit length, they are an orthonormal set.
Any set of mutually orthonormal vectors is linearly independent. The converse is not true.
1.5 Gram-Schmidt Orthogonalization
Takes a set of independent vectors and renders them orthogonal (orthonormal
if we choose to normalize).
Independent vectors are given by x1 , x2 , x3 , . . .
Desired orthogonal vectors are y 1 , y 2 , y 3 , . . .
Desired orthonormal set is ŷ 1 , ŷ 2 , ŷ 3 , . . .
1. ŷ₁ = x̂₁ = x₁/‖x₁‖
2. Note that we want y₂ ⊥ ŷ₁ (we can normalize later).
   ∴ (y₂, ŷ₁) = 0
A vector y₂ that satisfies this is

$$y_2 = x_2 - (x_2, \hat{y}_1)\hat{y}_1$$

since

$$(y_2, \hat{y}_1) = (x_2 - (x_2, \hat{y}_1)\hat{y}_1,\ \hat{y}_1) = (x_2, \hat{y}_1) - (x_2, \hat{y}_1)\underbrace{(\hat{y}_1, \hat{y}_1)}_{1} = 0 \tag{1}$$

$$\therefore\ y_2 = x_2 - (x_2, \hat{y}_1)\hat{y}_1$$
3. We now want (y₃, ŷ₁) = 0 and (y₃, ŷ₂) = 0. Taking y₃ = x₃ − (x₃, ŷ₁)ŷ₁ − (x₃, ŷ₂)ŷ₂,

$$(y_3, \hat{y}_1) = (x_3, \hat{y}_1) - (x_3, \hat{y}_1)\underbrace{(\hat{y}_1, \hat{y}_1)}_{1} - (x_3, \hat{y}_2)\underbrace{(\hat{y}_2, \hat{y}_1)}_{0} = 0 \tag{2}$$

and

$$(y_3, \hat{y}_2) = (x_3, \hat{y}_2) - (x_3, \hat{y}_1)\underbrace{(\hat{y}_1, \hat{y}_2)}_{0} - (x_3, \hat{y}_2)\underbrace{(\hat{y}_2, \hat{y}_2)}_{1} = 0 \tag{3}$$
The modified Gram-Schmidt process described next in the book yields better numerical results. It normalizes intermediate results and removes each projection sequentially rather than all at once, which reduces accumulated round-off error.
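The steps above can be sketched in code. A minimal NumPy implementation of both the classical process and the modified variant; the 2-vector input is made up for illustration:

```python
import numpy as np

def gram_schmidt(X, modified=True):
    """Orthonormalize the columns of X.

    Classical GS subtracts projections computed from the original
    vector; modified GS subtracts each projection from the running
    remainder, which is less sensitive to round-off.
    """
    Y = np.array(X, dtype=float)
    n = Y.shape[1]
    for k in range(n):
        for j in range(k):
            if modified:
                # Project out y_j from the current remainder of column k.
                Y[:, k] -= (Y[:, j] @ Y[:, k]) * Y[:, j]
            else:
                # Project out y_j using the original x_k.
                Y[:, k] -= (Y[:, j] @ X[:, k]) * Y[:, j]
        Y[:, k] /= np.linalg.norm(Y[:, k])
    return Y

X = np.array([[1.0, 1.0],
              [2.0, 0.0]])
Q = gram_schmidt(X)
print(np.round(Q.T @ Q, 12))  # identity: columns are orthonormal
```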
can be made in equation (5), and then pre-multiplying by $\left(B_c^T\right)^{-1}$ gives

$$\left(B_c^T\right)^{-1} A B_c Y = Y\Lambda \tag{9}$$
1.6.1 Singular Value Decomposition - Principal Component Analysis
Singular value decomposition, or SVD, is an extension of the eigenvalue de-
composition to non-square matrices. It is often used to identify, consolidate,
and rank contributions of vectors to a large non-square matrix. The SVD of
an m × n matrix A, where m ≥ n is defined by
$$\underbrace{A}_{m\times n} = \underbrace{U}_{m\times n}\,\underbrace{\Sigma}_{n\times n}\,\underbrace{V^T}_{n\times n} \tag{16}$$
where
$$U^T U = I \tag{17}$$
$$V V^T = I \tag{18}$$
and
$$\Sigma = \begin{bmatrix} \sigma_1 & 0 & 0 & \cdots & 0\\ 0 & \sigma_2 & 0 & \cdots & 0\\ 0 & 0 & \sigma_3 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & \sigma_n \end{bmatrix} \tag{19}$$
Some algorithms will alternatively return results satisfying
$$\underbrace{A}_{m\times n} = \underbrace{U}_{m\times m}\,\underbrace{\Sigma}_{m\times n}\,\underbrace{V^T}_{n\times n} \tag{20}$$
The problem of finding the singular values can quickly be transformed into one of finding eigenvalues. Pre-multiplying (16) by its own transpose yields
$$A^T A = V\Sigma U^T U\Sigma V^T \tag{22}$$
$$A^T A = V\Sigma^2 V^T \tag{23}$$
Comparing this to equation (12), one should recognize that the Σ² are the eigenvalues of the matrix AᵀA. Further, the eigenvalue problem is a symmetric one because AᵀA is symmetric, which is easily proven by taking its transpose (likewise for the right side of equation (23)). The solution
of the eigenvalue problem yields
$$\Sigma = \begin{bmatrix} 2.9 & 0.00\\ 0.00 & 0.06 \end{bmatrix} \tag{24}$$

and

$$V^T = \begin{bmatrix} 0.70 & 0.72\\ 0.72 & -0.70 \end{bmatrix} \tag{25}$$
The matrix U can then be found using (16) to be

$$U = A V\Sigma^{-1} = \begin{bmatrix} 0.49 & 0.30\\ 0.49 & 0.30\\ 0.49 & 0.30\\ 0.52 & -0.85 \end{bmatrix} \tag{26}$$
ordered from the highest to lowest value. Note that the first singular value is
much greater than the second, indicating that the nearly constant vector is a
much greater contributor to A. As can be observed in A, the second column
of U has a relatively small contribution to A, and the second singular value
highlights this. Further, σi ≥ 0. The matrix V can then be recognized as an
organizer. It determines how much of each σ-weighted column of U is needed
to produce the corresponding column of A.
In practice, for very large matrices with redundant data, the higher-index
values of σ tend towards 0, and for all practical purposes can be treated as
zero. When this happens, the SVD problem is written as
$$\underbrace{A}_{m\times n} = \underbrace{U}_{m\times n}\underbrace{\begin{bmatrix} \Sigma_p & [0]\\ [0] & [0] \end{bmatrix}}_{n\times n}\underbrace{V^T}_{n\times n} \tag{27}$$
where the near-zero values of Σ have been set to zero leaving only p non-zero
values. As a result, columns p + 1 through n of U and rows p + 1 through n
of V T can be discarded along with all of Σ outside of Σp . The resulting SVD
problem statement then appears as
$$\underbrace{A}_{m\times n} = \underbrace{U}_{m\times p}\,\underbrace{\Sigma_p}_{p\times p}\,\underbrace{V^T}_{p\times n} \tag{28}$$
where m ≥ n ≥ p.
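The decomposition and the rank-p truncation can be sketched with NumPy's SVD routine; the matrix values below are made up for illustration and are not the example from the text:

```python
import numpy as np

# A tall matrix with a nearly constant first column, similar in
# spirit to the 4 x 2 example above (values are hypothetical).
A = np.array([[1.0,  1.1],
              [1.0,  0.9],
              [1.0,  1.0],
              [1.0, -1.0]])

# full_matrices=False gives the "economy" form of equation (16):
# U is m x n and s holds the n singular values in descending order.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
print(s)  # the first singular value dominates

# Rank-p truncation (equation (28)): keep singular values above a tolerance.
p = int(np.sum(s > 1e-10 * s[0]))
A_p = (U[:, :p] * s[:p]) @ Vt[:p, :]
print(np.allclose(A, A_p))  # True here, since this A has full rank 2
```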
Can we solve for B and C in the general case?

$$A^T = B^T + C^T = B - C$$
$$A + A^T = B + C + B - C = 2B \qquad\therefore\ B = \frac{A + A^T}{2}$$
$$A - A^T = B + C - B + C = 2C \qquad\therefore\ C = \frac{A - A^T}{2}$$

Yes.
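A minimal numerical check of this split (the example matrix is arbitrary):

```python
import numpy as np

def sym_skew_split(A):
    """Split a square matrix into symmetric and skew-symmetric parts."""
    B = (A + A.T) / 2   # symmetric part
    C = (A - A.T) / 2   # skew-symmetric part
    return B, C

A = np.array([[1.0, 4.0],
              [2.0, 3.0]])  # arbitrary example matrix
B, C = sym_skew_split(A)
print(B)                        # [[1. 3.] [3. 3.]]
print(C)                        # [[ 0.  1.] [-1.  0.]]
print(np.allclose(A, B + C))    # True
```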
Define AH = ĀT to be the Hermitian adjoint of A. This is what is returned
by Matlab when you take a transpose.
If A is such that
AH = A,
then A is said to be Hermitian.
The real part is symmetric, the imaginary part is skew symmetric.
Because A is equal to its adjoint, Hermitian matrices are said to be self-
adjoint.
2. Perform a coordinate transformation (shift coordinates) to set the equi-
librium to be at coordinates zero.
2.3 Example
For y:

$$m\ddot{y} + 2\Omega m\dot{x} - m\Omega^2 y + k_2 y + \epsilon_2 y^3 = 0$$

In matrix form,

$$M\ddot{x} + (C + G)\dot{x} + (K + H)x = 0$$
H and G are skew-symmetric matrices.

$$M = \begin{bmatrix} m & 0\\ 0 & m \end{bmatrix}, \quad C = \begin{bmatrix} c & 0\\ 0 & 0 \end{bmatrix}, \quad G = \begin{bmatrix} 0 & -2\Omega m\\ 2\Omega m & 0 \end{bmatrix}, \quad K = \begin{bmatrix} k_1 - m\Omega^2 & 0\\ 0 & k_2 - m\Omega^2 \end{bmatrix}$$
Observing T we can expand to

$$T = \frac{1}{2} m\left(\dot{x}^2 + \dot{y}^2\right) + m\Omega\left(x\dot{y} - y\dot{x}\right) + \frac{1}{2} m\Omega^2\left(x^2 + y^2\right) = T_2 + T_1 + T_0$$

$$T_2 = \frac{1}{2}\begin{bmatrix}\dot{x} & \dot{y}\end{bmatrix}\begin{bmatrix} m & 0\\ 0 & m\end{bmatrix}\begin{bmatrix}\dot{x}\\ \dot{y}\end{bmatrix} \quad\text{(quadratic in the generalized velocities)}$$

$$T_1 = \begin{bmatrix}x & y\end{bmatrix}\begin{bmatrix} 0 & m\Omega\\ -m\Omega & 0\end{bmatrix}\begin{bmatrix}\dot{x}\\ \dot{y}\end{bmatrix} \quad\text{(linear in the generalized velocities, due to the time rate of change of the coordinate system; linearized}^6\text{)}$$

⁶ Means curvature, and higher derivatives, of the force are set to zero.
$$\mathcal{F} = \frac{1}{2}\begin{bmatrix}\dot{x} & \dot{y}\end{bmatrix}\begin{bmatrix} c & 0\\ 0 & 0\end{bmatrix}\begin{bmatrix}\dot{x}\\ \dot{y}\end{bmatrix}$$
Definitions:

$$T_2 = \frac{1}{2}\dot{q}^T M\dot{q}, \quad T_1 = q^T G\dot{q}, \quad \mathcal{F} = \frac{1}{2}\dot{q}^T C\dot{q}, \quad U = \frac{1}{2} q^T Kq$$
More definitions:
1. M is positive definite
2. C is positive semi-definite
Pre-multiplying the equation of motion (29) by $\dot{q}^T$:

$$\dot{q}^T M\ddot{q} + \dot{q}^T C\dot{q} + \dot{q}^T Kq + \dot{q}^T Hq = 0$$

With the Hamiltonian $H = T_2 + U$,

$$\frac{d}{dt}H = -2\left(\mathcal{F}' + \mathcal{F}\right), \qquad \mathcal{F}' = \dot{q}^T Hq \text{ (the circulatory dissipation function)}$$

If there are no viscous damping or circulatory forces, H = const.
2.5.1 Example

$$m\ddot{x} + kx = 0$$

$$\|x(t)\| = \left(x^T x\right)^{1/2} = \sqrt{x(t)^2 + \dot{x}(t)^2}$$

Consider initial conditions $x(0) = 0$, $\dot{x}(0) = \sqrt{k/m} = \omega$. The solution is $x(t) = \sin(\omega t)$. The system is Lyapunov stable because for

$$\|x(0)\| = \left(x^2(0) + \dot{x}^2(0)\right)^{1/2} = \omega \quad (\delta)$$

$$\|x(t)\| = \left(\sin^2\omega t + \omega^2\cos^2\omega t\right)^{1/2} < \left(1 + \omega^2\right)^{1/2} \quad (\epsilon)$$

We must then choose $\epsilon > \left(1 + \omega^2\right)^{1/2}$.
This worked only because we knew the solution. The Lyapunov direct method (also called Lyapunov's second method) does not require solution of the equations of motion.
The method consists of devising a suitable scalar testing function which can
be used in conjunction with its total time derivative to determine the char-
acteristics of equilibrium points.
Definition: A function V (x) is said to be positive definite if it is positive for
all values of x 6= 0.
Definition: A function V (x) is said to be positive semi-definite if it is ≥ 0
for all x 6= 0.
Definition: A function V (x) is said to be indefinite or sign-variable if the
sign varies.
Negative semi-definite and negative definite can be defined likewise.
2. If V (x) > 0 for all x ≠ 0, and V̇ (x) < 0, then the system is asymptotically stable
3. If V (x) > 0 for all x 6= 0, and V̇ (x) is indefinite, the stability is not
known.
and
V (x) = xT2 Kx2 > 0, x 6= 0
Let's pick as a Lyapunov function the mechanical energy

$$V(x) = \frac{1}{2}\left(\dot{x}^T M\dot{x} + x^T Kx\right)$$
Since M and K are P.D., V (x) is P.D.
$$\frac{d}{dt}\left(V(x)\right) = \dot{x}^T M\ddot{x} + \dot{x}^T Kx = \dot{x}^T\left(M\ddot{x} + Kx\right) = \dot{x}^T 0 = 0$$
Since V̇ (x) = 0 and V (x) > 0, the system is stable.
2.5.5 Gyroscopic Systems
M ẍ + Gẋ + Kx = 0
Assume M and K are P.D. and G is skew-sym.
$\dot{x}^T G\dot{x} = 0$ for any $\dot{x}$ since G is skew-symmetric, so the previous Lyapunov function still works, and the equilibrium is still stable. If K is P.D., the system is stable.
If K is indefinite, semidefinite, or negative definite, the system may still be
stable.
2.5.7 Circulatory Systems
M ẍ + (K + H) x = 0
H = −H T
Example
The phenomenon occurs in aeroelasticity. Results for stability are not as well
developed.
$$A_2 = T_1 T_2, \qquad A_3 = S_1 S_2$$

If $T_1 = S_1$ exists and is P.D., the system is symmetrizable. Such systems are asymptotically stable if the eigenvalues of $A_2$ and $A_3$ are $> 0$ ($S_2$ and $T_2$ are P.D.).

See Inman '89 for more details.
where M is a positive-definite n × n matrix, C and K are positive-semi-
definite n × n matrices, and all are real and symmetric. The displacement
vector
$$x(t) = \begin{bmatrix} x_1(t)\\ x_2(t)\\ x_3(t)\\ \vdots\\ x_n(t) \end{bmatrix} \tag{31}$$
represents displacements of the n degrees of freedom (generalized coordi-
nates), which may be in any direction, x, y, z, or any combination thereof,
or a rotation about any unit direction vector in three dimensional space. The
force vector
$$f(t) = \begin{bmatrix} f_1(t)\\ f_2(t)\\ f_3(t)\\ \vdots\\ f_n(t) \end{bmatrix} \tag{32}$$
represents forces acting on each of the n generalized coordinates. Such equa-
tions of motion are typical of non-rotating structures without inclusion of
aerodynamic loading.
Solution of equation (30) is most often performed in modal coordinates due
to physical insight obtained and numerical advantages. In order to transform
into modal coordinates, we first consider the un-forced, or homogeneous, and
undamped, C = 0, equation of motion given by
M ẍ + Kx = 0 (33)
x = ψeλt (34)
(M λ2 + K)ψ = 0 (35)
we also have the unknown λ2 to solve for, we have only n equations but
n + 1 unknowns. In addition, the equations are nonlinear in the combined
unknowns λ2 and ψi . A first attempt at solving these equations for ψ would
be to premultiply equation (35) by (M λ2 + K)−1 giving
often learned in introductory courses. Most often methods such as subspace
iteration, the QR method, inverse iteration, or other even more sophisticated
algorithms are used[4, 1] and are beyond the scope of this class.
It is convenient to mass normalize the eigenvectors ψ m such that
ψ Tm M ψ m = 1 (38)
After doing so, the eigenvectors are mass orthonormal and stiffness orthog-
onal. Consider the two unique eigenvalue solutions (l 6= m), written in a
slightly different form
Kψ l = −M λ2l ψ l (39a)
Kψ m = −M λ2m ψ m (39b)
Pre-multiplying each by the alternate eigenvector transposed, transposing the second result, and subtracting (using the symmetry of M and K) gives

$$\psi_m^T K\psi_l - \psi_m^T K\psi_l = -\lambda_m^2\psi_m^T M\psi_l + \lambda_l^2\psi_m^T M\psi_l$$
$$0 = \left(\lambda_l^2 - \lambda_m^2\right)\psi_m^T M\psi_l \tag{41}$$
where δₗₘ is the Kronecker delta function and is as defined above. Considering now equation (40a),

$$\psi_m^T K\psi_l = -\lambda_l^2\psi_m^T M\psi_l = -\lambda_l^2\delta_{lm} = \begin{cases} -\lambda_l^2, & l = m\\ 0, & l \neq m \end{cases} \tag{43}$$
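The mass-orthonormality relations can be verified numerically. A sketch using SciPy's generalized symmetric eigensolver, with a hypothetical 2-DOF M and K (not from the text):

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical 2-DOF system (values chosen for illustration only).
M = np.array([[2.0, 0.0],
              [0.0, 1.0]])
K = np.array([[4.0, -1.0],
              [-1.0, 1.0]])

# scipy's generalized symmetric solver returns eigenvectors that are
# already mass-normalized: Psi.T @ M @ Psi = I.
w2, Psi = eigh(K, M)   # w2 holds the omega_n^2 values, ascending

print(np.round(Psi.T @ M @ Psi, 10))   # identity (mass orthonormality)
print(np.round(Psi.T @ K @ Psi, 10))   # diag(omega_n^2) (stiffness orthogonality)
```

Note that the text writes λ² = −ω², so the diagonalized stiffness here corresponds to −λₙ² in equation (43).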
In the case where there are repeated solutions, i.e. λₘ² = λₘ₊₁² = . . . , substituting the coordinate transformation

$$x = M^{-1/2} q \tag{44}$$

into equation (30) and pre-multiplying by $\left(M^{-1/2}\right)^T$, where $M^{-1/2}$ is defined such that

$$M = M^{1/2}\left(M^{1/2}\right)^T \tag{45}$$

gives

$$\left(M^{-1/2}\right)^T M M^{-1/2}\ddot{q} + \left(M^{-1/2}\right)^T K M^{-1/2} q = \left(M^{-1/2}\right)^T f$$
$$I\ddot{q} + \tilde{K} q = \left(M^{-1/2}\right)^T f \tag{46}$$
$$\psi = M^{-1/2}\upsilon \tag{49}$$

Applying the Euler relation this can be written in the more physically intuitive form

$$x = \sum_{m=1}^{n} R_m\sin(\omega_m t + \phi_m)\psi_m \tag{51}$$
where Rm is a modal amplitude. If we further define
Pre-multiplying (55) by ΨT , and using equations (42) and (43) the equations
are now transformed into individual uncoupled modal equations
and
f˜(t) = ΨT f (t) (58)
is the modal force vector. The resulting decoupled equations can each then
be solved using SDOF methods. Applying equation (53), the solution can
be transformed back into physical coordinates. Also, using equation (53),
the initial conditions can be transformed into modal coordinates for use in
solving equations (56) giving
then if
CM −1 K = KM −1 C (61)
the damping matrix, C, is diagonalized by ΨT CΨ = diag(2ζi ωi )[5]. Consider
substituting
x(t) = M −1/2 q(t) (62)
into equation (60), but with no forcing vector, where
Now let
q(t) = Υr(t) (65)
where Υ is the orthonormal matrix of the eigenvectors of K̃, υ, such that
ΥT Υ = I (66)
ΥT K̃Υ = Ω2 (67)
substituting (65) into equation (64) and pre-multiplying by ΥT yields
Two matrices are simultaneously diagonalized if and only if they commute,
i.e.
C̃ K̃ = K̃ C̃ (69)
The modal equations then are
if
KM −1 K 0 = K 0 M −1 K (73)
then the imaginary part of the stiffness matrix, K 0 , is diagonalized by ΨT K 0 Ψ =
diag(ηi ωi2 ) yielding modal equations of motion
The modal frequency response functions are then obtained by taking the
Fourier transform of the appropriate modal equation of motion, either (56),
(70), or (74), and solving for
$$\tilde{h}_i = \frac{R_i(j\omega)}{\tilde{F}_i(j\omega)} = \begin{cases} \dfrac{1}{\omega_i^2 - \omega^2}, & \text{undamped}\\[2mm] \dfrac{1}{\omega_i^2 + 2\zeta_i j\omega_i\omega - \omega^2}, & \text{viscous damping}\\[2mm] \dfrac{1}{\omega_i^2(1 + \eta_i j) - \omega^2}, & \text{complex stiffness damping} \end{cases} \tag{75}$$
Consider now the full system modal equations in frequency response form
where H̃(jω) = diag(h̃ᵢ). Substituting R(jω) = H̃(jω)ΨᵀF(jω) and X(jω) = ΨR(jω) into equation (76) gives

$$X(j\omega) = \Psi\tilde{H}(j\omega)\Psi^T F(j\omega) = H(j\omega)F(j\omega) \tag{77}$$
where H(jω) is the matrix of transfer functions between forces and displace-
ments in physical coordinates. Expressing Ψ using equation (54), and con-
sidering equation (77)
$$H(j\omega) = \begin{bmatrix}\psi_1 & \psi_2 & \psi_3 & \cdots & \psi_n\end{bmatrix}\tilde{H}(j\omega)\begin{bmatrix}\psi_1^T\\ \psi_2^T\\ \psi_3^T\\ \vdots\\ \psi_n^T\end{bmatrix}$$
$$= \begin{bmatrix}\psi_1\tilde{h}_1(j\omega) & \psi_2\tilde{h}_2(j\omega) & \psi_3\tilde{h}_3(j\omega) & \cdots & \psi_n\tilde{h}_n(j\omega)\end{bmatrix}\begin{bmatrix}\psi_1^T\\ \psi_2^T\\ \psi_3^T\\ \vdots\\ \psi_n^T\end{bmatrix} \tag{78}$$
$$= \sum_{i=1}^{n}\psi_i\psi_i^T\tilde{h}_i(j\omega) = \sum_{i=1}^{n}{}_iA\,\tilde{h}_i(j\omega)$$
Thus an individual frequency response function between an input l and output m is

$$H_{l,m} = \sum_{i=1}^{n}\psi_{l,i}\psi_{m,i}\tilde{h}_i(j\omega) \tag{80}$$
i=1
An alternative representation of the frequency response function can be ob-
tained directly from equations (72) or (60). Taking the Fourier transform of
each equation,
$$\left(-\omega^2 M + j\omega C + K\right)X(j\omega) = F(j\omega), \quad \text{viscous} \tag{81a}$$
$$\left(-\omega^2 M + K + jK'\right)X(j\omega) = F(j\omega), \quad \text{hysteretic} \tag{81b}$$
Since the analysis is the same for either form of damping, we show only the
viscous case for the sake of clarity. The frequency response function matrix
is then
$$X(j\omega) = \left(-\omega^2 M + j\omega C + K\right)^{-1}F(j\omega) = H(j\omega)F(j\omega), \quad \text{viscous} \tag{82}$$

Since

$$H(j\omega) = \left(-\omega^2 M + j\omega C + K\right)^{-1} \tag{83}$$

using the definition of the inverse of a matrix,

$$H_{l,m}(j\omega) = \frac{\operatorname{adj}\left(-\omega^2 M + j\omega C + K\right)_{l,m}}{\det\left(-\omega^2 M + j\omega C + K\right)} \tag{84}$$
When
adj(−ω 2 M + jωC + K)l,m = 0 (85)
the frequency response function will exhibit a zero, or anti-resonance. Recall that the l, m element of the adjoint of a matrix is the determinant of the remaining matrix when the lth row and the mth column are removed, multiplied by $(-1)^{l+m}$, called the l, m cofactor. When l = m, this is equivalent to constraining the equations of motion such that xₗ = 0.
Example 2.1 A system is defined by
M ẍ + C ẋ + Kx = 0 (86)
where

$$M = \begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{bmatrix}, \quad C = \begin{bmatrix}0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0\end{bmatrix}, \quad K = \begin{bmatrix}2 & -1 & -1\\ -1 & 3 & -1\\ -1 & -1 & 4\end{bmatrix} \tag{87}$$
Find the poles and zeros of the frequency response function, as well as the
FRF itself, between the first and third degrees of freedom.
Solution:
The poles (natural frequencies) of the system are given by the solution of

$$\det\left(-M\omega^2 + K\right) = 0$$
$$-\omega^6 + 9\omega^4 - 23\omega^2 + 13 = 0 \tag{88}$$

The zeros are given by

$$\operatorname{adj}_{1,3}\left(K - \omega^2 M\right) = 0$$
$$\det\left(\begin{bmatrix}-1 & 3\\ -1 & -1\end{bmatrix} - \omega^2\begin{bmatrix}0 & 1\\ 0 & 0\end{bmatrix}\right) = 0 \tag{90}$$
$$4 - \omega^2 = 0$$

$$H_{1,3}(j\omega) = \frac{-\omega^2 + 4}{-\omega^6 + 9\omega^4 - 23\omega^2 + 13} \tag{94}$$
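Example 2.1 can also be checked numerically; a sketch that evaluates the FRF directly from the matrix inverse rather than the modal form:

```python
import numpy as np

M = np.eye(3)
K = np.array([[2.0, -1.0, -1.0],
              [-1.0, 3.0, -1.0],
              [-1.0, -1.0, 4.0]])

# Poles: det(K - w^2 M) = 0. With M = I, the w^2 are the eigenvalues of K.
poles_w2 = np.linalg.eigvalsh(K)
print(np.round(poles_w2, 4))  # roots of -w^6 + 9 w^4 - 23 w^2 + 13 = 0

# FRF H_{1,3}(jw), evaluated from H = (K - w^2 M)^{-1} (here C = 0).
def H13(w):
    return np.linalg.inv(K - w**2 * M)[0, 2]

# The zero predicted by the adjoint condition: w^2 = 4.
print(abs(H13(2.0)) < 1e-12)  # True: anti-resonance at w = 2
```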
2.7 Maximum-Minimum Characteristics of Eigenval-
ues
Consider
$$A = \begin{bmatrix}2 & -1\\ -1 & 2\end{bmatrix} \tag{95}$$
The eigenvalues are λ = 1, 3. The first eigenvalue is the minimum value of
f (x) = xT Ax
where x is normalized.
Trial 1: $x = \begin{bmatrix}1\\0\end{bmatrix}$, $f = 2$

Trial 2: $x = \begin{bmatrix}0\\1\end{bmatrix}$, $f = 2$

Trial 3: $x = \frac{1}{\sqrt{2}}\begin{bmatrix}1\\-1\end{bmatrix}$, $f = 3$

Trial 4: $x = \frac{1}{\sqrt{2}}\begin{bmatrix}1\\1\end{bmatrix}$, $f = 1$
This could have been written as

$$\frac{df}{dx_1} = 0, \qquad \|x\| = 1 \tag{96}$$

$$f(x_1) = \begin{bmatrix}x_1 & \sqrt{1 - x_1^2}\end{bmatrix}\begin{bmatrix}2 & -1\\ -1 & 2\end{bmatrix}\begin{bmatrix}x_1\\ \sqrt{1 - x_1^2}\end{bmatrix}$$
$$= 2x_1^2 - 2x_1\sqrt{1 - x_1^2} + 2\left(1 - x_1^2\right) \tag{97}$$
$$= 2\left(1 - x_1\sqrt{1 - x_1^2}\right)$$
Then, setting f′ = 0,

$$\frac{df}{dx_1} = -\sqrt{1 - x_1^2} + x_1^2\left(1 - x_1^2\right)^{-1/2} = 0$$
$$-\left(1 - x_1^2\right) + x_1^2 = 0$$
$$2x_1^2 = 1 \tag{98}$$
$$x_1 = \pm\frac{1}{\sqrt{2}} \quad \text{(by trial, (+) is the correct sign)}$$
2
To find the second eigenvalue, we will pick values of v such that x is con-
strained by
xT v = 0
For each vector v we minimize f(x). The highest value of all of these minima is our second eigenvalue.
In this case, because the system has only two dimensions (R2 ), choosing v
is equivalent to choosing x
Trial 1: $v = \begin{bmatrix}0\\1\end{bmatrix}$, $x = \begin{bmatrix}1\\0\end{bmatrix}$, $f = 2$

Trial 2: $v = \begin{bmatrix}1\\0\end{bmatrix}$, $x = \begin{bmatrix}0\\1\end{bmatrix}$, $f = 2$

Trial 3: $v = \begin{bmatrix}1\\-1\end{bmatrix}$, $x = \frac{1}{\sqrt{2}}\begin{bmatrix}1\\1\end{bmatrix}$, $f = 1$

Trial 4: $v = \begin{bmatrix}1\\1\end{bmatrix}$, $x = \frac{1}{\sqrt{2}}\begin{bmatrix}1\\-1\end{bmatrix}$, $f = 3$
f = 3 is the second eigenvalue.
Extrapolating, the (n+1)th eigenvalue is the highest minimum value of xT Ax
subject to n constraints.
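The trial-vector search above can be sketched by sweeping unit vectors around the circle:

```python
import numpy as np

A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])

def f(x):
    x = x / np.linalg.norm(x)   # normalize the trial vector
    return x @ A @ x

# Sample unit vectors x = (cos t, sin t) over a half circle.
thetas = np.linspace(0, np.pi, 1001)
vals = [f(np.array([np.cos(t), np.sin(t)])) for t in thetas]

# The extremes of x^T A x over unit vectors are the eigenvalues.
print(round(min(vals), 6))  # 1.0, attained near x = (1, 1)/sqrt(2)
print(round(max(vals), 6))  # 3.0, attained near x = (1, -1)/sqrt(2)
```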
2.8 The Inclusion Principle
What is the effect of using fewer degrees of freedom to represent a system?
This is a vital concern when using finite elements and the more general
continuum methods to be discussed later.
Consider a system described by the Hermitian matrix A (n × n). Consider
another system defined by B (n − 1 × n − 1) formed from A by deleting the
last row and column.
The eigenvalues of A are named λ1 , λ2 , λ3 , . . . and the eigenvalues of B are
γ1 , γ2 , γ3 , . . ..
$$y^H By = x^H Ax \quad\text{if } x_i = y_i \text{ and } x_n = 0$$

xₙ = 0 can be considered a constraint such that xᵀêₙ = 0. From Rayleigh's principle,

$$\gamma_1 = \min y^H By, \qquad \|y\| = 1$$

This is equivalent to the constrained minimization

$$\gamma_1 = \tilde{\lambda}_1(\hat{e}_n) = \min x^H Ax, \qquad \|x\| = 1, \quad x^H\hat{e}_n = 0$$

From the min-max principle,

$$\lambda_1 \leq \gamma_1 \leq \lambda_2$$

Successively considering the remaining eigenvalues yields

$$\lambda_1 \leq \gamma_1 \leq \lambda_2 \leq \gamma_2 \leq \lambda_3 \leq \ldots \leq \lambda_n$$

which is called the inclusion principle.
If we constrain degree of freedom 1 to zero amplitude, the new stiffness matrix is

$$B = \begin{bmatrix}2 & -1\\ -1 & 1\end{bmatrix}$$
The eigenvalues of A are 0.1981, 1.5550, 3.2470. The eigenvalues of B are
0.3820, 2.6180. The eigenvalues are interwoven with one another. This ap-
plies for more general constraints as well. For example, setting x1 = x2 . Note
that this also modifies the mass matrix. Verify that the inclusion principle
works for this case for homework.
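The interlacing can be verified numerically. The 3 × 3 matrix below is an assumption: a spring-chain stiffness matrix chosen so its eigenvalues match the values quoted above:

```python
import numpy as np

# Assumed stiffness matrix consistent with the quoted eigenvalues
# 0.1981, 1.5550, 3.2470 (a fixed-free 3-mass spring chain).
A = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 1.0]])
B = A[1:, 1:]   # constrain DOF 1 to zero: delete first row and column

lam = np.sort(np.linalg.eigvalsh(A))   # ~ [0.1981, 1.5550, 3.2470]
gam = np.sort(np.linalg.eigvalsh(B))   # ~ [0.3820, 2.6180]

# Inclusion principle: lam1 <= gam1 <= lam2 <= gam2 <= lam3
print(lam[0] <= gam[0] <= lam[1] <= gam[1] <= lam[2])  # True
```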
Here we would like to estimate the values of xi and λi without repeating the
solution to the eigenvalue problem. Thus
$$\lambda_i(\epsilon) = \lambda_{0i} + \frac{\epsilon}{1!}\frac{d\lambda_i}{d\epsilon} + \frac{\epsilon^2}{2!}\frac{d^2\lambda_i}{d\epsilon^2} + \cdots = \lambda_{0i} + \epsilon\lambda_{1i} + \epsilon^2\lambda_{2i} + \cdots, \quad i = 1, 2, \ldots, n \tag{104a}$$

$$x_i(\epsilon) = x_{0i} + \frac{\epsilon}{1!}\frac{dx_i}{d\epsilon} + \frac{\epsilon^2}{2!}\frac{d^2 x_i}{d\epsilon^2} + \cdots = x_{0i} + \epsilon x_{1i} + \epsilon^2 x_{2i} + \cdots, \quad i = 1, 2, \ldots, n \tag{104b}$$

Substituting (104) and equation (101) into equation (102) gives

$$A_0 x_{0i} + \epsilon A_0 x_{1i} + \epsilon A_1 x_{0i} + \epsilon^2 A_1 x_{1i} + \epsilon^2 A_0 x_{2i} + \epsilon^3 A_1 x_{2i} + \cdots$$
$$= \lambda_{0i}x_{0i} + \epsilon\lambda_{0i}x_{1i} + \epsilon\lambda_{1i}x_{0i} + \epsilon^2\lambda_{2i}x_{0i} + \epsilon^2\lambda_{1i}x_{1i} + \epsilon^2\lambda_{0i}x_{2i} + \cdots \tag{105}$$
Solution of this equation for x1i and λ1i is the solution of the first order
perturbation problem.
To do this, we presume that x1i can be written as a linear combination of
the non-corresponding unperturbed system eigenvectors9 , i.e.
$$x_{1i} = \sum_{j=1}^{n}\kappa_{ij}x_{0j}\tilde{\delta}_{ij}, \quad i = 1, 2, \ldots, n \tag{108}$$
⁹ There is no benefit to perturbing an eigenvector by itself times a constant because, after normalization, it doesn't change.
where

$$\tilde{\delta}_{ij} = \begin{cases}1 & i \neq j\\ 0 & i = j\end{cases} \tag{109}$$

is the opposite of the Kronecker delta.
Substituting equation (108) into equation (107) yields
$$A_0\sum_{j=1}^{n}\kappa_{ij}x_{0j}\tilde{\delta}_{ij} + A_1 x_{0i} = \lambda_{0i}\sum_{j=1}^{n}\kappa_{ij}x_{0j}\tilde{\delta}_{ij} + \lambda_{1i}x_{0i} \tag{110}$$
For k = i,
λ1i = xT0i A1 x0i (114)
The perturbed eigenvalue is then given by substituting equation (114) into
equation (104a). Applying equation (113) for k 6= i
$$\kappa_{ik} = \frac{x_{0k}^T A_1 x_{0i}}{\lambda_{0i} - \lambda_{0k}} \tag{115}$$
and the perturbed eigenvector is obtained by substituting equation (115) into
equation (108) then into equation (104b).
The eigensolution is given by
$$x_{01} = \begin{bmatrix}-0.32799\\ -0.59101\\ -0.73698\end{bmatrix}, \quad x_{02} = \begin{bmatrix}0.73698\\ 0.32799\\ -0.59101\end{bmatrix}, \quad x_{03} = \begin{bmatrix}-0.59101\\ 0.73698\\ -0.32799\end{bmatrix} \tag{117}$$
with eigenvalues
Then
$$A_1 = \begin{bmatrix}0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 1\end{bmatrix} \tag{120}$$
with ε = 0.1.
Applying equation (114),

$$\lambda_{11} = \begin{bmatrix}-0.328 & -0.591 & -0.737\end{bmatrix}\begin{bmatrix}0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix}-0.32799\\ -0.59101\\ -0.73698\end{bmatrix} = 0.543 \tag{121}$$
thus, from equation (104a)
and

$$\kappa_{13} = \frac{\begin{bmatrix}-0.328 & -0.591 & -0.737\end{bmatrix}\begin{bmatrix}0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix}-0.591\\ 0.737\\ -0.328\end{bmatrix}}{0.19806 - 3.24698} = -0.079 \tag{124}$$
Thus, applying equation (108),

$$x_{11} = \begin{bmatrix}-0.18971\\ -0.16371\\ 0.21571\end{bmatrix} \tag{125}$$
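The first-order estimate can be compared against the exact perturbed eigenvalues. The matrix A₀ below is an assumption chosen to reproduce the quoted eigensolution (a 3-DOF spring chain):

```python
import numpy as np

# Assumed unperturbed matrix consistent with the quoted eigensolution.
A0 = np.array([[2.0, -1.0, 0.0],
               [-1.0, 2.0, -1.0],
               [0.0, -1.0, 1.0]])
A1 = np.zeros((3, 3))
A1[2, 2] = 1.0          # the perturbation of equation (120)
eps = 0.1

lam0, X0 = np.linalg.eigh(A0)

# First-order eigenvalue corrections, equation (114): x0i^T A1 x0i.
lam1 = np.array([X0[:, i] @ A1 @ X0[:, i] for i in range(3)])
approx = lam0 + eps * lam1

exact = np.linalg.eigvalsh(A0 + eps * A1)
print(np.round(approx, 4))
print(np.round(exact, 4))
print(np.max(np.abs(approx - exact)) < 1e-2)  # True: error is O(eps^2)
```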
$$\sum F_y = \rho\Delta x\frac{\partial^2 w}{\partial t^2}$$

$$f(x,t)\Delta x + \tau_2\left.\frac{\partial w}{\partial x}\right|_{x_2} - \tau_1\left.\frac{\partial w}{\partial x}\right|_{x_1} = \rho\Delta x\frac{\partial^2 w}{\partial t^2}$$

$$\sum F_x = 0 = \tau_2\cos\theta_2 - \tau_1\cos\theta_1$$

For small θ₁, cos θ₁ ≈ 1 ≈ cos θ₂, so τ₁ = τ₂ = τ:

$$\tau\left(\left.\frac{\partial w}{\partial x}\right|_{x_2} - \left.\frac{\partial w}{\partial x}\right|_{x_1}\right) + \Delta x f(x,t) = \rho\Delta x\frac{\partial^2 w}{\partial t^2}$$

Substituting,

$$\tau\left.\frac{\partial^2 w}{\partial x^2}\right|_{x_1}\Delta x + \Delta x f(x,t) = \Delta x\rho\frac{\partial^2 w}{\partial t^2}$$

Dividing by Δx, and since Δx is infinitesimal,

$$\tau\frac{\partial^2 w}{\partial x^2} + f(x,t) = \rho\frac{\partial^2 w}{\partial t^2}$$

or

$$\tau w_{xx} + f(x,t) = \rho w_{tt}$$

where $c = \sqrt{\tau/\rho}$ is the wave speed.
3.1.1 Finding Mode Shapes and Natural Frequencies of a String
Presume fixed boundary conditions. The equation of motion for a string is

$$c^2 w_{xx} = w_{tt}$$

$$\frac{\ddot{T}}{c^2 T} = -\sigma^2 \tag{129}$$

We call it −σ² because later in the solution we realize that it would have been convenient to do so earlier. However, at this point in the solution process, there is no justification for it. Rearranging (128) yields

$$X'' + \sigma^2 X = 0$$
Summing forces in the vertical direction at the free end, it's clear that the slope must be zero at the right end. The boundary conditions give

$$X(0) = 0 = A, \qquad X'(\ell) = 0 = B\sigma\cos\sigma\ell$$

$$\therefore\ \sigma_n\ell = \frac{\pi n}{2}, \quad n = 1, 3, 5, \ldots$$

or

$$\sigma_n = \frac{\pi(2n - 1)}{2\ell}, \quad n = 1, 2, 3, 4, \ldots$$

and the mode shapes are

$$X(x) = a_n\sin\frac{\pi(2n - 1)}{2\ell}x$$

Substituting into the temporal part:

$$\ddot{T} + c^2\sigma_n^2 T = 0$$

$$\therefore\ \omega_n = \sqrt{\frac{\tau}{\rho}}\,\frac{\pi(2n - 1)}{2\ell}, \quad n = 1, 2, 3, 4, \ldots$$
3.1.2 Free Response of a String
Find the response of a string plucked in the middle.
$$w(x,0) = \frac{2}{\ell}x, \quad 0 < x < \frac{\ell}{2}$$
$$w(x,0) = \frac{2}{\ell}(\ell - x), \quad \frac{\ell}{2} < x < \ell$$

$$w(x,t) = \sum_{n=1}^{\infty}\underbrace{\sin\frac{n\pi x}{\ell}}_{\text{mode shapes}}\,\underbrace{\left(c_n\sin\omega_n t + d_n\cos\omega_n t\right)}_{\text{temporal solution}} \tag{130}$$
cₙ is clearly zero. Multiplying (131) by $\sin\frac{m\pi x}{\ell}$ and integrating over $[0, \ell]$:

$$\int_0^\ell w(x,0)\sin\frac{m\pi x}{\ell}\,dx = \int_0^\ell\left(\sum_{n=1}^{\infty}d_n\sin\frac{n\pi x}{\ell}\right)\sin\frac{m\pi x}{\ell}\,dx \tag{133}$$

From orthogonality,

$$\int_0^\ell\sin\frac{n\pi x}{\ell}\sin\frac{m\pi x}{\ell}\,dx = \begin{cases}0 & m \neq n\\[1mm] \dfrac{\ell}{2} & m = n\end{cases}$$
Substituting, solving for dₙ, and performing the integration,

$$d_n = \frac{8}{\pi^2 n^2}\sin\frac{n\pi}{2}, \quad n = 1, 2, 3, \ldots$$

Thus

$$w(x,t) = \sum_{n=1}^{\infty}\sin\frac{n\pi x}{\ell}\,\frac{8}{\pi^2 n^2}\sin\frac{n\pi}{2}\cos\omega_n t$$

where σₙ = nπ/ℓ.
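The convergence of this series back to the pluck amplitude at the midpoint can be checked numerically; ℓ = 1 is an assumed value:

```python
import numpy as np

ell = 1.0                    # string length (assumed value)

def w0_series(x, terms=500):
    """Partial sum of the series solution evaluated at t = 0."""
    w = np.zeros_like(x)
    for n in range(1, terms + 1):
        dn = 8.0 / (np.pi**2 * n**2) * np.sin(n * np.pi / 2)
        w += dn * np.sin(n * np.pi * x / ell)
    return w

# At t = 0 the series should reproduce the triangular pluck shape,
# whose peak value at the midpoint is (2/ell)(ell/2) = 1.
mid = w0_series(np.array([ell / 2]))[0]
print(abs(mid - 1.0) < 0.01)  # True: the partial sum is close to 1
```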
$$w(x,t) = T(t)X(x)$$
$$\ddot{T}X - c^2 T X'' = 0$$

Using separation of variables,

$$\frac{\ddot{T}}{c^2 T} = \frac{X''}{X} = -\sigma^2$$
$$\therefore\ X(x) = A\cos\sigma x + B\sin\sigma x$$

The boundary conditions give

$$\ddot{T} + c^2\sigma_n^2 T = 0, \qquad \therefore\ \omega_n = c\sigma_n$$
Assume a form of the solution¹⁰

$$w(x,t) = \sum_{n=1}^{\infty}a_n\sin t\,\sin\frac{n\pi x}{\ell}$$

$$-a_n\frac{\ell}{2} + c^2 a_n\left(\frac{n\pi}{\ell}\right)^2\frac{\ell}{2} = 100\sin\frac{n\pi}{2}$$

Recall

$$\int_0^\ell\sin\frac{n\pi x}{\ell}\sin\frac{m\pi x}{\ell}\,dx = \begin{cases}0 & m \neq n\\[1mm] \dfrac{\ell}{2} & m = n\end{cases}$$

Solving gives

$$a_n = \frac{2}{\ell}\,\frac{100\sin\frac{n\pi}{2}}{c^2\left(\frac{n\pi}{\ell}\right)^2 - 1}$$

$$\sin\frac{n\pi}{2} = \begin{cases}-1 & n = 3, 7, 11, \ldots\\ 0 & n = 2, 4, 6, \ldots\\ 1 & n = 1, 5, 9, \ldots\end{cases} = (-1)^{\frac{n-1}{2}} \text{ for odd } n$$

Then

$$w(x,t) = \sum_{n=1}^{\infty}\frac{200\sin\frac{n\pi}{2}}{\ell\left(c^2\left(\frac{n\pi}{\ell}\right)^2 - 1\right)}\sin t\,\sin\frac{n\pi x}{\ell}$$
¹⁰ We know that undamped systems always have a phase difference of 0° or 180°.
3.2 Bending Vibration of a Beam
$$M = -EI(x)\frac{\partial^2 w}{\partial x^2}$$

$$V = \frac{\partial M}{\partial x} = \frac{\partial}{\partial x}\left(-EI(x)\frac{\partial^2 w}{\partial x^2}\right) = -E\frac{\partial I(x)}{\partial x}\frac{\partial^2 w}{\partial x^2} - EI(x)\frac{\partial^3 w}{\partial x^3} \tag{134}$$

$$\sum F_z = \Delta x\,\rho(x)\ddot{w} \qquad (\rho(x)\text{ is the linear density})$$

$$(V + \partial V) - V + p(x,t)\Delta x = \Delta x\,\rho(x)\ddot{w}$$

Dividing by Δx and taking the limit as Δx → 0,

$$\frac{\partial V}{\partial x} + p(x,t) = \rho(x)\ddot{w}$$

Substituting for V,

$$\frac{\partial^2}{\partial x^2}\left(EI(x)\frac{\partial^2 w}{\partial x^2}\right) + \rho(x)\frac{\partial^2 w}{\partial t^2} = p(x,t)$$
Dividing by Δx,

$$\frac{\partial M}{\Delta x} - V + p\frac{\Delta x}{2} = 0$$

Taking the limit as Δx → 0,

$$\frac{\partial M}{\partial x} = V$$
as was stated in equation (134).
Allowable Boundary Conditions:
Clamped: w is known and w0 is known.
Pinned: w is known and M is known.
Free: V is known and M is known.
Sliding end: w0 is known and V is known.
$$X''(x)\big|_{x=l} = 0, \quad \text{moment is zero} \tag{140c}$$
Using equations (141) and (142) and simplifying (by substituting for C and
D)
A (sin(βl) + sinh(βl)) + B (cos(βl) + cosh(βl)) = 0 (144)
Then applying equation (140d)
− cos2 (βl) − 2 cosh(βl) cos(βl) − cosh2 (βl) − sin2 (βl) + sinh2 (βl) = 0 (148)
Since sin2 (βl) + cos2 (βl) = 1 and cosh2 (βl) − sinh2 (βl) = 1, this simplifies
to
cos(βl) cosh(βl) = −1 (149)
Having a value for βl we can use equation (146) to obtain the ratio between A and B as

$$\frac{A}{B} = \sigma_n = \frac{\sin(\beta l) - \sinh(\beta l)}{\cosh(\beta l) + \cos(\beta l)} \tag{150}$$
Note that this could also be solved for using equation (144):

$$\frac{A}{B} = \sigma_n = \frac{\cos(\beta l) + \cosh(\beta l)}{\sin(\beta l) + \sinh(\beta l)} \tag{151}$$

These are equivalent expressions at the values of βl that satisfy equation (149). It is left to the reader to prove this.
Thus the response is the sum of the modal responses:

$$w(x,t) = \sum_{n=1}^{\infty}w_n(x,t) = \sum_{n=1}^{\infty}X_n(x)T_n(t) \tag{153}$$
The mode shapes for a fixed-free beam are given by equation (152), σn are
given by equation (150) and βn l are obtained by solving equation (149).
Natural frequencies are then given by

$$X_n'''' = \frac{\rho}{EI}\omega_n^2 X_n = \beta_n^4 X_n \tag{154}$$
Substituting equation (153) into the equation of motion,

$$\sum_{n=1}^{\infty}\left(\ddot{T}_n + \omega_n^2 T_n\right)X_n = \frac{F_0}{\rho}\delta(x - \ell)\sin\omega t \tag{155}$$
$$\begin{aligned}A_n = 4\beta_n\big(&4\beta_n\ell + 2\sigma_n\cos(2\beta_n\ell) - 2\sigma_n\cosh(2\beta_n\ell)\\&- 4\cosh(\beta_n\ell)\sin(\beta_n\ell) - 4\sigma_n^2\cosh(\beta_n\ell)\\&+ \sin(2\beta_n\ell) - \sigma_n^2\sin(2\beta_n\ell) - 4\cos(\beta_n\ell)\sinh(\beta_n\ell)\\&+ 4\sigma_n^2\cos(\beta_n\ell)\sinh(\beta_n\ell) + 8\sigma_n\sin(\beta_n\ell)\sinh(\beta_n\ell)\\&+ \sinh(2\beta_n\ell) + \sigma_n^2\sinh(2\beta_n\ell)\big)^{-1}\end{aligned}$$
Multiplying the EOM by Xₘ, integrating over x, and using

$$\int_0^\ell X_n X_m\,dx = \begin{cases}0 & m \neq n\\ 1 & m = n\end{cases}$$
$$\ddot{T}_n + \omega_n^2 T_n = \frac{F_0}{\rho}\int_0^\ell X_n(x)\delta(x - \ell)\,dx\,\sin(\omega t) = \frac{F_0}{\rho}X_n(\ell)\sin(\omega t)$$

Solving for Tₙ,

$$T_n = \frac{F_0 X_n(\ell)}{\rho\left(\omega_n^2 - \omega^2\right)}\sin(\omega t)$$
The total solution is

$$w(x,t) = \sum_{n=1}^{\infty}T_n(t)X_n(x)$$
Recall ωₙ was given as

$$\omega_n = \beta_n^2\sqrt{\frac{EI}{\rho}}$$

where βₙ can be found from

$$\cos\beta_n\ell\,\cosh\beta_n\ell = -1$$
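This transcendental equation must be solved numerically for βₙℓ. A sketch using a bracketing root finder; the brackets centered on (2n−1)π/2 are an assumed (standard) choice:

```python
import numpy as np
from scipy.optimize import brentq

# Characteristic equation (149) for the fixed-free beam, as f(bl) = 0.
def char_eq(bl):
    return np.cos(bl) * np.cosh(bl) + 1.0

# Roots lie near odd multiples of pi/2; bracket each and solve.
roots = [brentq(char_eq, (2*k - 1)*np.pi/2 - 0.5, (2*k - 1)*np.pi/2 + 0.5)
         for k in range(1, 4)]
print(np.round(roots, 4))  # approx [1.8751 4.6941 7.8548]
```

The natural frequencies then follow as ωₙ = βₙ²√(EI/ρ) once EI, ρ, and ℓ are specified.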
Let

$$\xi = \frac{x}{\ell}, \qquad\text{then}\qquad \frac{dx}{d\xi} = \ell$$

$$\frac{\partial w}{\partial\xi} = \frac{\partial w}{\partial x}\frac{dx}{d\xi} = \ell\frac{\partial w}{\partial x} \qquad\text{or}\qquad \frac{\partial w}{\partial x} = \frac{1}{\ell}\frac{\partial w}{\partial\xi}$$

Also,

$$\frac{\partial^2 w}{\partial\xi^2} = \frac{\partial}{\partial\xi}\left(\frac{\partial w}{\partial\xi}\right) = \frac{\partial}{\partial x}\left(\frac{\partial w}{\partial\xi}\right)\frac{dx}{d\xi} = \ell\frac{\partial}{\partial x}\left(\frac{\partial w}{\partial\xi}\right) = \ell\frac{\partial}{\partial x}\left(\ell\frac{\partial w}{\partial x}\right) = \ell^2\frac{\partial^2 w}{\partial x^2}$$

So

$$\frac{\partial^2 w}{\partial x^2} = \frac{1}{\ell^2}\frac{\partial^2 w}{\partial\xi^2}$$

Substituting into the E.O.M.,

$$\frac{c^2}{\ell^2}w_{\xi\xi}(\xi, t) = w_{tt}(\xi, t)$$

Similarly, let

$$\gamma = \frac{ct}{\ell} \quad\text{(dimensionless time)}, \qquad \frac{\partial^2 w}{\partial t^2} = \frac{c^2}{\ell^2}\frac{\partial^2 w}{\partial\gamma^2}$$

Substituting into the E.O.M.,

$$w_{\xi\xi}(\xi,\gamma) = w_{\gamma\gamma}(\xi,\gamma)$$

Defining y = w/ℓ,

$$y_{\xi\xi}(\xi,\gamma) = y_{\gamma\gamma}(\xi,\gamma)$$

which represents the nondimensionalized E.O.M.
4 Energy Methods
4.1 Virtual Work (Shames Solid Mechanics), p. 62 our book (discrete) and p. 369, eqn. 7.20 our book
Virtual work is the work done on a particle by all the forces acting on the
particle as the particle is given a small hypothetical displacement, a “virtual
displacement,” which is consistent with the constraints present. Applied
forces are constant during the virtual displacement.
The virtual work acting on a body is

$$\delta W_{virt} = \underbrace{\iiint_V\vec{B}\cdot\delta\vec{u}\,dV}_{\text{body forces}} + \underbrace{\iint_S\vec{T}\cdot\delta\vec{u}\,dA}_{\text{traction forces}}$$

$$\delta W = \iiint_V B_x\,\delta u_x\,dV + \iint_{S_2}T_x\,\delta u_x\,dA - \iint_{S_1}T_x\,\delta u_x\,dA$$
$$= \iiint_V B_x\,\delta u_x\,dV + \iint_S\left((T_x\delta u_x)_2 - (T_x\delta u_x)_1\right)dA$$
Note:

$$(T_x\delta u_x)_2 - (T_x\delta u_x)_1 = \int_1^2\frac{d}{dx}\left(T_x\delta u_x\right)dx$$

Note that Tₓ is force per unit area. Thus Tₓ = σₓₓ (direct stress in the x direction). Thus

$$\delta W = \iiint_V B_x\,\delta u_x\,dV + \iiint_V\frac{d}{dx}\left(\sigma_{xx}\delta u_x\right)dV$$
Carrying out the differentiation and collecting terms,

$$\delta W = \iiint_V\left[\left(B_x + \frac{d\sigma_{xx}}{dx}\right)\delta u_x + \sigma_{xx}\frac{d}{dx}\left(\delta u_x\right)\right]dV$$
4.1.1 Review from Strength of Materials
Consider

$$\sum F_x = (\sigma_{xx} + d\sigma_{xx})\,dydz - \sigma_{xx}\,dydz + (\sigma_{yx} + d\sigma_{yx})\,dxdz - \sigma_{yx}\,dxdz + (\sigma_{zx} + d\sigma_{zx})\,dxdy - \sigma_{zx}\,dxdy + B_x\,dxdydz = 0$$

Dividing by dxdydz,

$$\frac{\partial\sigma_{xx}}{\partial x} + \frac{\partial\sigma_{yx}}{\partial y} + \frac{\partial\sigma_{zx}}{\partial z} + B_x = 0$$
$$\frac{\partial\sigma_{xy}}{\partial x} + \frac{\partial\sigma_{yy}}{\partial y} + \frac{\partial\sigma_{zy}}{\partial z} + B_y = 0$$
$$\frac{\partial\sigma_{xz}}{\partial x} + \frac{\partial\sigma_{yz}}{\partial y} + \frac{\partial\sigma_{zz}}{\partial z} + B_z = 0$$
(Continuing the derivation:) The first term in the integrand is zero; it is the equation of equilibrium for an element. Thus

$$\delta W = \iiint_V\sigma_{xx}\,\underbrace{\delta\frac{du_x}{dx}}_{\delta\epsilon_{xx}}\,dV$$
More generally,

$$\iiint_V\vec{B}\cdot\delta\vec{u}\,dV + \iint_S\vec{T}\cdot\delta\vec{u}\,dA = \iiint_V\sigma_{ij}\,\delta\epsilon_{ij}\,dV$$

where

$$\sigma_{ij}\,\delta\epsilon_{ij} = \sum_{i=1}^{3}\sum_{j=1}^{3}\sigma_{ij}\,\delta\epsilon_{ij}$$
4.1.2 Example
$$\delta\epsilon_{AC} = \frac{\delta v_A}{L}, \qquad \delta\epsilon_{AB} = \frac{\delta v_A\cos\alpha}{\frac{L}{\cos\alpha}} = \frac{\delta v_A\cos^2\alpha}{L}$$

With no body forces, the principle of virtual work is

$$P\,\delta v_A = \int_0^{L/\cos\alpha}\sigma_{AB}\,\delta\epsilon_{AB}\,A\,d\ell + \int_0^L\sigma_{AC}\,\delta\epsilon_{AC}\,A\,d\ell$$

$$P = \sigma_{AB}A\,\frac{\cos^2\alpha}{L}\,\frac{L}{\cos\alpha} + \sigma_{AC}A$$

$$P = \sigma_{AB}\cos\alpha\,A + \sigma_{AC}A \tag{156}$$
2nd: Apply virtual displacement in x direction.

$$\delta\epsilon_{AC} = 0, \qquad \delta\epsilon_{AB} = \frac{\delta u_A\sin\alpha}{\frac{L}{\cos\alpha}} = \frac{\delta u_A}{L}\sin\alpha\cos\alpha$$

The principle of virtual work gives

$$0\cdot\delta u_A = \int_0^{L/\cos\alpha}\sigma_{AB}\frac{\delta u_A}{L}\sin\alpha\cos\alpha\,A\,d\ell + \int_0^L\sigma_{AC}\cdot 0\cdot A\,d\ell$$

$$0 = \sigma_{AB}(\sin\alpha\cos\alpha)A \tag{157}$$

$$\sigma_{AB} = 0, \qquad \sigma_{AC} = \frac{P}{A}$$
Now let's find the movement of the pin. Consider actual displacements ūₐ and v̄ₐ:

$$\epsilon_{AC} = \frac{\bar{v}_A}{L}, \qquad \epsilon_{AB} = \frac{\bar{v}_A\cos^2\alpha}{L} + \frac{\bar{u}_A}{L}\sin\alpha\cos\alpha$$

Using Hooke's Law,

$$\sigma_{AB} = E\epsilon_{AB} = \frac{E}{L}\left(\bar{v}_A\cos^2\alpha + \bar{u}_A\sin\alpha\cos\alpha\right)$$
$$\sigma_{AC} = E\epsilon_{AC} = \frac{E}{L}\bar{v}_A$$
Substituting the stresses into the virtual work equations (156) and (157):

$$P = \frac{E}{L}\left(\bar{v}_A\cos^2\alpha + \bar{u}_A\sin\alpha\cos\alpha\right)\cos\alpha\,A + \frac{E}{L}\bar{v}_A A \tag{158}$$

$$0 = \frac{E}{L}\underbrace{\left(\bar{v}_A\cos^2\alpha + \bar{u}_A\sin\alpha\cos\alpha\right)}_{\text{must be 0 because }\sin\alpha\neq 0,\ \cos\alpha\neq 0,\ A\neq 0}A\sin\alpha\cos\alpha \tag{159}$$
From (158),

$$P = \frac{E}{L}\bar{v}_A A \qquad\Rightarrow\qquad \bar{v}_A = \frac{PL}{EA}$$

From (159),

$$\frac{PL}{EA}\cos\alpha + \bar{u}_A\sin\alpha = 0 \qquad\Rightarrow\qquad \bar{u}_A = \frac{-PL\cos\alpha}{EA\sin\alpha}$$
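The closed-form pin displacements can be checked by solving (158) and (159) directly as a linear system; the material and geometry values below are hypothetical:

```python
import numpy as np

# Hypothetical values for E, A, L, P, and alpha.
E, A, L, P, alpha = 200e9, 1e-4, 1.0, 1e4, np.pi / 6
c, s = np.cos(alpha), np.sin(alpha)

# Equations (158) and (159) as a 2 x 2 system in [v_A, u_A].
lhs = (E * A / L) * np.array([[c**3 + 1.0, s * c**2],
                              [s * c**3,   s**2 * c**2]])
rhs = np.array([P, 0.0])
vA, uA = np.linalg.solve(lhs, rhs)

# Compare with the closed-form results derived above.
print(np.isclose(vA, P * L / (E * A)))             # True
print(np.isclose(uA, -P * L * c / (E * A * s)))    # True
```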
4.2 Derivation of Hamilton’s Principle from Virtual
Work
Consider Newton's Law for a particle:

$$\sum\vec{F} = m\vec{a}$$

Rearrange it as

$$\sum\vec{F} + \left(-m\vec{a}\right) = 0$$

The form is now that of a statics problem. This is called D'Alembert's principle.
For a body, this force is
d2 ui
ZZZ
−m~a = − ρ 2 dV
V dt
and the virtual work due to this force is
d2 ui
ZZZ
− ρ 2 δui dV
V dt
The Principle of Virtual Work is now
d2 ui
ZZZ ZZ ZZZ ZZZ
Bi δui dV + Ti δui dA− ρ 2 δui dV = σij δij dV (160)
V S V dt V
This is also the negative of the “Force Potential”12 . The variation of the
nonconservative work is
ZZZ ZZ
δWnc = Bi δui dV + Ti δui dA
V S
61
then
$$ \iiint_V \sigma_{ij}\,\delta\epsilon_{ij}\,dV = \delta\iiint_V \mathcal{U}\,dV = \delta V $$
So, the principle of virtual work is
$$ -\iiint_V \rho\,\frac{\partial^2 u_i}{\partial t^2}\,\delta u_i\,dV + \delta W_{nc} - \delta V = 0 \tag{161} $$
To make this apply for a range of times we integrate with respect to time:
$$ \int_{t_1}^{t_2}\iiint_V -\rho\,\frac{\partial^2 u_i}{\partial t^2}\,\delta u_i\,dV\,dt + \int_{t_1}^{t_2}\delta W_{nc}\,dt - \int_{t_1}^{t_2}\delta V\,dt = 0 \tag{162} $$
Consider the first term. We can swap the order of the integrations, and thus
$$ \int_{t_1}^{t_2}\iiint_V -\rho\,\frac{\partial^2 u_i}{\partial t^2}\,\delta u_i\,dV\,dt = -\iiint_V \rho\int_{t_1}^{t_2}\frac{d^2 u_i}{dt^2}\,\delta u_i\,dt\,dV $$
Integrating in time by parts,
$$ \int_{t_1}^{t_2}\frac{d^2 u_i}{dt^2}\,\delta u_i\,dt = \frac{du_i}{dt}\,\delta u_i\Big|_{t_1}^{t_2} - \int_{t_1}^{t_2}\frac{du_i}{dt}\,\frac{d}{dt}\left(\delta u_i\right)dt $$
We adopt the rule that the variation $\delta u_i = 0$ at $t = t_1$ and $t = t_2$. Then
$$ \int_{t_1}^{t_2}\frac{d^2 u_i}{dt^2}\,\delta u_i\,dt = -\int_{t_1}^{t_2}\delta\left(\frac{1}{2}\dot u_i^2\right)dt $$
Substituting into (162),
$$ \iiint_V\int_{t_1}^{t_2}\delta\left(\frac{\rho}{2}\dot u_i^2\right)dt\,dV + \int_{t_1}^{t_2}\delta W_{nc}\,dt - \int_{t_1}^{t_2}\delta V\,dt = 0 \tag{163} $$
The first term is the variation of the kinetic energy $T$, and $V$ is the total potential energy. Then
$$ T - V = L \quad\text{(Lagrangian)} $$
Thus
$$ \int_{t_1}^{t_2}\delta\left(L + W_{nc}\right)dt = 0 $$
4.2.1 Example
Hamilton’s Principle (Shames 112-114, 323-329: Energy and Finite Element
Methods in Structural Mechanics)
$$ \int_{t_1}^{t_2}\left(\delta L + \delta W_{nc}\right)dt = 0,\qquad L = T - V = T + W_c $$
Example: SDOF system.
$$ T = \frac{1}{2}m\dot x^2,\qquad \delta T = m\dot x\,\delta\dot x $$
$$ W_c = -\frac{1}{2}kx^2,\qquad \delta W_c = -kx\,\delta x $$
$$ W_{nc} = Fx,\qquad \delta W_{nc} = F\,\delta x $$
$$ \int_{t_1}^{t_2}\left(\delta T + \delta W_c + \delta W_{nc}\right)dt = \int_{t_1}^{t_2}\left(m\dot x\,\delta\dot x - kx\,\delta x + F\,\delta x\right)dt $$
$$ = \underbrace{\int_{t_1}^{t_2} m\dot x\,\delta\dot x\,dt}_{\text{integrate this one by parts}} + \int_{t_1}^{t_2}\left(F - kx\right)\delta x\,dt = 0 $$
With
$$ u = m\dot x,\quad dv = \delta\dot x\,dt,\qquad du = m\ddot x\,dt,\quad v = \delta x $$
$$ \int_{t_1}^{t_2} m\dot x\,\delta\dot x\,dt = m\dot x\,\delta x\Big|_{t_1}^{t_2} - \int_{t_1}^{t_2} m\ddot x\,\delta x\,dt $$
Substituting back,
$$ \int_{t_1}^{t_2}\left(-m\ddot x + F - kx\right)\delta x\,dt = 0 \quad\Rightarrow\quad -m\ddot x + F - kx = 0 $$
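The same equation of motion can be recovered symbolically. A sketch using SymPy (not part of the original notes); the Euler-Lagrange operation here mirrors the integration by parts above:

```python
import sympy as sp

t = sp.symbols('t')
m, k, F = sp.symbols('m k F', positive=True)
x = sp.Function('x')(t)

# Lagrangian L = T - V for the SDOF system in the example
L = sp.Rational(1, 2) * m * sp.diff(x, t)**2 - sp.Rational(1, 2) * k * x**2

# Euler-Lagrange equation with generalized force F (from delta W_nc = F delta x):
#   d/dt(dL/d(xdot)) - dL/dx - F = 0, which equals m*x'' + k*x - F
eom = sp.diff(sp.diff(L, sp.diff(x, t)), t) - sp.diff(L, x) - F
print(sp.simplify(eom))
```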
For a string under tension $T$, the shortening of an element is
$$ dx - \sqrt{dx^2 - dw^2} = dx - dx\sqrt{1 - \left(\frac{dw}{dx}\right)^2} \approx \frac{1}{2}\left(\frac{dw}{dx}\right)^2 dx $$
Per element, then, $W_c = -\frac{1}{2}T\left(\frac{dw}{dx}\right)^2 dx$.
Nonconservative work:
$$ W_{nc} = p(x,t)\,w(x,t) $$
Applying Hamilton's principle,
$$ \int_{t_1}^{t_2}\left(\delta T + \delta W_c + \delta W_{nc}\right)dt = 0 $$
$$ \int_{t_1}^{t_2}\left(\delta\int_{x_1}^{x_2}\frac{1}{2}\rho\,\dot w^2(x,t)\,dx + \delta\int_{x_1}^{x_2}\left(-\frac{1}{2}T\left(\frac{dw}{dx}\right)^2 + p(x,t)\,w\right)dx\right)dt = 0 $$
$$ \int_{t_1}^{t_2}\left(\int_{x_1}^{x_2}\rho\,\dot w\,\delta\dot w\,dx - \int_{x_1}^{x_2}T\,\underbrace{w'}_{\frac{dw}{dx}}\,\delta w'\,dx + \int_{x_1}^{x_2}p(x,t)\,\delta w\,dx\right)dt = 0 $$
Integrating the middle term by parts in $x$, with
$$ u = -Tw',\quad dv = \delta w'\,dx,\qquad du = -Tw''\,dx,\quad v = \delta w $$
$$ -\int_{x_1}^{x_2}T\,w'\,\delta w'\,dx = -Tw'\,\delta w\Big|_{x_1}^{x_2} + \int_{x_1}^{x_2}T\,w''\,\delta w\,dx $$
Substituting and collecting terms,
$$ \int_{t_1}^{t_2}\left(-Tw'\,\delta w\Big|_{x_1}^{x_2} + \int_{x_1}^{x_2}\left(Tw'' - \rho\ddot w + p(x,t)\right)\delta w\,dx\right)dt = 0 $$
If a spring of stiffness $k$ is attached at $x_2$, force balance there gives
$$ \sum F_y = 0 = -T\frac{dw}{dx} - kw \quad\text{at } x_2 $$
Note: small angle approximation, $\tan\theta\approx\sin\theta\approx\theta$. Using the result from Hamilton's principle,
$$ Tw'\big|_{x_2} = -k\,w(x_2) $$
Kinetic energy:
$$ T(t) = \int_0^\ell \underbrace{\hat T(x,t)}_{\text{kinetic energy density}}\,dx $$
Hamilton's principle states
$$ \int_{t_1}^{t_2}\left(\delta T + \delta W\right)dt = 0,\qquad \delta w = 0 \text{ at } t = t_1, t_2 \tag{165} $$
or
$$ \int_{t_1}^{t_2}\int_0^\ell\left(\delta\hat T + \delta\hat W\right)dx\,dt = 0 \tag{166} $$
$$ \int_{t_1}^{t_2}\int_0^\ell\left(\delta\hat L + \delta\hat W_{nc}\right)dx\,dt = 0 \tag{167} $$
For $\hat L = \hat L\left(w, w', w'', \dot w, \dot w'\right)$,
$$ \delta\hat L = \frac{\partial\hat L}{\partial w}\,\delta w + \frac{\partial\hat L}{\partial w'}\,\delta w' + \frac{\partial\hat L}{\partial w''}\,\delta w'' + \frac{\partial\hat L}{\partial\dot w}\,\delta\dot w + \frac{\partial\hat L}{\partial\dot w'}\,\delta\dot w' \tag{168} $$
Substituting (168) into (167) and using the definition of $\delta W_{nc}$,
$$ \int_{t_1}^{t_2}\int_0^\ell\left(\frac{\partial\hat L}{\partial w}\,\delta w + \frac{\partial\hat L}{\partial w'}\,\delta w' + \frac{\partial\hat L}{\partial w''}\,\delta w'' + \frac{\partial\hat L}{\partial\dot w}\,\delta\dot w + \frac{\partial\hat L}{\partial\dot w'}\,\delta\dot w' + p(x,t)\,\delta w\right)dx\,dt = 0 \tag{169} $$
Like we did using Hamilton's principle before, we want the integrand to contain only $\delta w$, so we integrate by parts.
a)
$$ \int_0^\ell \frac{\partial\hat L}{\partial w'}\,\delta w'\,dx = \frac{\partial\hat L}{\partial w'}\,\delta w\Big|_0^\ell - \int_0^\ell\frac{\partial}{\partial x}\left(\frac{\partial\hat L}{\partial w'}\right)\delta w\,dx $$
b)
$$ \int_0^\ell \frac{\partial\hat L}{\partial w''}\,\delta w''\,dx = \frac{\partial\hat L}{\partial w''}\,\delta w'\Big|_0^\ell - \frac{\partial}{\partial x}\left(\frac{\partial\hat L}{\partial w''}\right)\delta w\Big|_0^\ell + \int_0^\ell\frac{\partial^2}{\partial x^2}\left(\frac{\partial\hat L}{\partial w''}\right)\delta w\,dx $$
c)
$$ \int_{t_1}^{t_2}\frac{\partial\hat L}{\partial\dot w}\,\delta\dot w\,dt = \frac{\partial\hat L}{\partial\dot w}\,\delta w\Big|_{t_1}^{t_2} - \int_{t_1}^{t_2}\frac{\partial}{\partial t}\left(\frac{\partial\hat L}{\partial\dot w}\right)\delta w\,dt $$
d)
$$ \int_{t_1}^{t_2}\int_0^\ell \frac{\partial\hat L}{\partial\dot w'}\,\delta\dot w'\,dx\,dt = \int_{t_1}^{t_2}\left(\frac{\partial\hat L}{\partial\dot w'}\,\delta\dot w\Big|_0^\ell - \int_0^\ell\frac{\partial}{\partial x}\left(\frac{\partial\hat L}{\partial\dot w'}\right)\delta\dot w\,dx\right)dt $$
$$ = \frac{\partial\hat L}{\partial\dot w'}\,\delta w\Big|_0^\ell\Big|_{t_1}^{t_2} - \int_{t_1}^{t_2}\frac{\partial}{\partial t}\left(\frac{\partial\hat L}{\partial\dot w'}\right)\delta w\Big|_0^\ell\,dt - \int_0^\ell\frac{\partial}{\partial x}\left(\frac{\partial\hat L}{\partial\dot w'}\right)\delta w\,dx\Bigg|_{t_1}^{t_2} + \int_{t_1}^{t_2}\int_0^\ell\frac{\partial^2}{\partial t\,\partial x}\left(\frac{\partial\hat L}{\partial\dot w'}\right)\delta w\,dx\,dt $$
Substituting into (169) and recalling $\delta w = 0$ at $t = t_1, t_2$,
$$ \int_{t_1}^{t_2}\Bigg[\int_0^\ell\left(\frac{\partial\hat L}{\partial w} - \frac{\partial}{\partial x}\frac{\partial\hat L}{\partial w'} + \frac{\partial^2}{\partial x^2}\frac{\partial\hat L}{\partial w''} - \frac{\partial}{\partial t}\frac{\partial\hat L}{\partial\dot w} + \frac{\partial^2}{\partial x\,\partial t}\frac{\partial\hat L}{\partial\dot w'} + p(x,t)\right)\delta w\,dx $$
$$ + \left(\frac{\partial\hat L}{\partial w'} - \frac{\partial}{\partial x}\frac{\partial\hat L}{\partial w''} - \frac{\partial}{\partial t}\frac{\partial\hat L}{\partial\dot w'}\right)\delta w\Big|_0^\ell + \frac{\partial\hat L}{\partial w''}\,\delta w'\Big|_0^\ell\Bigg]dt = 0 \tag{170} $$
Since $\delta w$ is arbitrary,
$$ \frac{\partial\hat L}{\partial w} - \frac{\partial}{\partial x}\frac{\partial\hat L}{\partial w'} + \frac{\partial^2}{\partial x^2}\frac{\partial\hat L}{\partial w''} - \frac{\partial}{\partial t}\frac{\partial\hat L}{\partial\dot w} + \frac{\partial^2}{\partial x\,\partial t}\frac{\partial\hat L}{\partial\dot w'} + p(x,t) = 0 $$
with the boundary conditions
$$ \frac{\partial\hat L}{\partial w'} - \frac{\partial}{\partial x}\frac{\partial\hat L}{\partial w''} - \frac{\partial}{\partial t}\frac{\partial\hat L}{\partial\dot w'} = 0 \quad\text{or}\quad w \text{ is known} $$
AND
$$ \frac{\partial\hat L}{\partial w''} = 0 \quad\text{or}\quad w' \text{ is known, at each end.} $$
4.3.1 Example: Beam bending on a spinning shaft
The shaft is spinning about a vertical axis at angular velocity $\Omega$. Derive the equations of motion.
$$ T(t) = \underbrace{\frac{1}{2}\int_0^\ell m(x)\,\dot w^2\,dx}_{\text{kinetic energy in translation}} + \underbrace{\frac{1}{2}\int_0^\ell J(x)\,\dot w'^2\,dx}_{\text{kinetic energy in rotation}} $$
$$ V(t) = \frac{1}{2}\int_0^\ell EI(x)\,w''^2\,dx + \int_0^\ell P(x,t)\left(ds - dx\right) $$
$P(x,t)$ is the axial force (centrifugal effects).
Consider $p(x,t)$ (external transverse load) to be the weight of the blade:
$$ \delta W_{nc} = -\int_0^\ell m(x)\,g\,\delta w(x,t)\,dx $$
$\hat L$ is then
$$ \hat L = \frac{1}{2}m\dot w^2 + \frac{1}{2}J\dot w'^2 - \frac{1}{2}EI(x)\,w''^2 - \frac{1}{2}\left(\int_x^\ell m(\xi)\,\Omega^2\xi\,d\xi\right)w'^2 $$
Evaluating each term of the equation of motion:
$$ \frac{\partial\hat L}{\partial w} = 0 $$
$$ \frac{\partial}{\partial x}\left(\frac{\partial\hat L}{\partial w'}\right) = -\frac{\partial}{\partial x}\left(\int_x^\ell m(\xi)\,\Omega^2\xi\,d\xi\; w'\right) = m(x)\,\Omega^2x\,w' - \int_x^\ell m(\xi)\,\Omega^2\xi\,d\xi\; w'' $$
$$ \frac{\partial^2}{\partial x^2}\left(\frac{\partial\hat L}{\partial w''}\right) = -\frac{\partial^2}{\partial x^2}\left(EI(x)\,w''\right) $$
$$ \frac{\partial}{\partial t}\left(\frac{\partial\hat L}{\partial\dot w}\right) = m\ddot w $$
$$ \frac{\partial^2}{\partial x\,\partial t}\left(\frac{\partial\hat L}{\partial\dot w'}\right) = \frac{\partial}{\partial x}\left(J\ddot w'\right) $$
So the E.O.M. is
$$ -m\Omega^2x\,w' + \int_x^\ell m(\xi)\,\Omega^2\xi\,d\xi\; w'' - \frac{\partial^2}{\partial x^2}\left(EI(x)\,w''\right) - m\ddot w + \frac{\partial}{\partial x}\left(J\ddot w'\right) - mg = 0,\qquad 0 < x < \ell $$
To get the B.C., let's again evaluate each term.
$$ \frac{\partial\hat L}{\partial w'} = -\int_x^\ell m\Omega^2\xi\,d\xi\; w' $$
$$ \frac{\partial}{\partial x}\left(\frac{\partial\hat L}{\partial w''}\right) = -\frac{\partial}{\partial x}\left(EI(x)\,w''\right) $$
$$ \frac{\partial}{\partial t}\left(\frac{\partial\hat L}{\partial\dot w'}\right) = J\ddot w' $$
$$ \frac{\partial\hat L}{\partial w''} = -EIw'' $$
So the B.C. are
$$ -\int_x^\ell m\Omega^2\xi\,d\xi\; w' + \frac{\partial}{\partial x}\left(EIw''\right) - J\ddot w' = 0 \quad\text{or}\quad w = 0 $$
AND
$$ -EIw'' = 0 \quad\text{or}\quad w' = 0 $$
At the left end, $w = 0$ and $w' = 0$. At the right end $w \neq 0$ and $w' \neq 0$, so
$$ \frac{\partial}{\partial x}\left(EIw''\right)\Big|_{x=\ell} = J\ddot w'\Big|_{x=\ell} \quad\text{(the integral is zero)} $$
AND
$$ -EIw''\Big|_{x=\ell} = 0 $$
Boundary conditions demanded by geometry ($w = 0$, $w' = 0$) are geometric or essential boundary conditions. These are typically B.C. through $\frac{\partial}{\partial x}$. The other boundary conditions are dynamic boundary conditions. These are B.C. through $\frac{\partial^3}{\partial x^3}$.
Separating variables, $w(x,t) = W(x)F(t)$:
$$ \left(-m\Omega^2x\,W' + \int_x^\ell m\Omega^2\xi\,d\xi\; W'' - \frac{d^2}{dx^2}\left(EIW''\right)\right)F = \left(mW - \frac{d}{dx}\left(JW'\right)\right)\ddot F,\qquad 0 < x < \ell \tag{171} $$
The boundary conditions at the left end are $W(0)F = 0$ and $W'|_{x=0}F = 0$, and the B.C.s at the right end are
$$ \frac{d}{dx}\left(EIW''\right)\Big|_{x=\ell}F = JW'\Big|_{x=\ell}\ddot F,\qquad \left(EIW''\right)\Big|_{x=\ell}F = 0 $$
Solving for $-\ddot F/F$:
$$ \frac{m\Omega^2x\,W' - \int_x^\ell m\Omega^2\xi\,d\xi\; W'' + \frac{d^2}{dx^2}\left(EIW''\right)}{mW - \frac{d\left(JW'\right)}{dx}} = \frac{-\ddot F}{F} = \lambda $$
$$ \ddot F + \lambda F = 0 $$
Assuming $F(t) = Ae^{st}$ and substituting,
$$ s^2 + \lambda = 0 $$
Let's assume $\lambda = \omega^2$, so $s = \pm i\omega$.
Determining $\lambda$ such that the spatial equation and the B.C. are satisfied is called the eigenvalue problem. There is no closed-form solution for this particular eigenvalue problem. A finite difference or finite element method must be employed to get the "complete" solution.
$L$ is a "linear homogeneous differential operator":
$$ L = A_0(x) + A_1(x)\frac{d}{dx} + \ldots + A_{2p}(x)\frac{d^{2p}}{dx^{2p}} $$
Let $M$ be an operator similar to $L$, but of order $2q$, $q < p$, and write
$$ LW = \lambda MW,\qquad 0 < x < \ell $$
where $\lambda$ comes from separation of variables.¹³
The boundary conditions can be written as
$$ B_iW = \lambda C_iW,\qquad i = 1, 2, \ldots, p,\qquad x = 0, \ell $$
$B_i$ and $C_i$ are also linear homogeneous differential operators. The maximum order of $B_i$ is $2p - 1$; the maximum order of $C_i$ is $2q - 1$.
For the previous problem:
$$ L = m\Omega^2x\,\frac{d}{dx} - \int_x^\ell m\Omega^2\xi\,d\xi\,\frac{d^2}{dx^2} + \frac{d^2}{dx^2}\left(EI\frac{d^2}{dx^2}\right)\qquad (p = 2) $$
$$ M = m - \frac{d}{dx}\left(J\frac{d}{dx}\right)\qquad (q = 1) $$
At $x = 0$:
$$ B_1 = 1,\quad C_1 = 0 \tag{172} $$
$$ B_2 = \frac{d}{dx},\quad C_2 = 0 \tag{173} $$
and at $x = \ell$:
$$ B_1 = -\frac{d}{dx}\left(EI\frac{d^2}{dx^2}\right),\quad C_1 = J\frac{d}{dx} \tag{174} $$
$$ B_2 = EI\frac{d^2}{dx^2},\quad C_2 = 0 \tag{175} $$
Define the inner product of functions $f$ and $g$, where $f$ and $g$ are real and piecewise smooth (continuous in the 0th and 1st derivatives):
$$ (f, g) = \int_0^\ell f(x)\,g(x)\,dx $$
¹³$M = -\tau^2$ for a string.
$f(x)$ and $g(x)$ are orthogonal if $(f, g) = 0$. $\|f\|$ is the norm of $f(x)$, defined by
$$ \|f\|^2 = (f, f) = \int_0^\ell f^2(x)\,dx < \infty $$
A set of functions $\phi_r$ is linearly dependent if
$$ c_1\phi_1 + c_2\phi_2 + c_3\phi_3 + \ldots + c_n\phi_n = 0 \tag{176} $$
without all $c_r = 0$, $r = 1, \ldots, n$.
Assume (for the moment) that the $\phi_r$ are orthogonal, i.e. $(\phi_r, \phi_s) = 0$, $r \neq s$. Multiply equation (176) by $\phi_s$ and integrate over $0 \leq x \leq \ell$:
$$ \sum_{r=1}^n c_r\int_0^\ell \phi_r\phi_s\,dx = 0 $$
$$ \sum_{r=1}^n c_r\,\|\phi_s\|^2\,\delta_{rs} = 0 $$
$$ c_s\,\|\phi_s\|^2 = 0 $$
Since $\|\phi_s\| \neq 0$, $c_s$, for any $s$, must be zero to satisfy (176). Therefore, orthogonal functions are always linearly independent. The converse is not always true.
Consider orthonormal $\phi_r$ and an arbitrary function $f(x)$. Let $c_r$ be defined by
$$ c_r = (f, \phi_r) = \int_0^\ell f\phi_r\,dx,\qquad r = 1, 2, \ldots $$
Consider the norm of $\left(f - \sum_{r=1}^n c_r\phi_r\right)$:
$$ \left\|f - \sum_{r=1}^n c_r\phi_r\right\|^2 = \int_0^\ell\left(f - \sum_{r=1}^n c_r\phi_r\right)^2dx \geq 0 $$
$$ = \int_0^\ell f^2\,dx - 2\sum_{r=1}^n c_r\underbrace{\int_0^\ell f\phi_r\,dx}_{c_r} + \sum_{r=1}^n c_r^2 = \|f\|^2 - 2\sum_{r=1}^n c_r^2 + \sum_{r=1}^n c_r^2 \tag{177} $$
$$ = \|f\|^2 - \sum_{r=1}^n c_r^2 \geq 0 $$
Therefore
$$ \|f\|^2 \geq \sum_{r=1}^n c_r^2 $$
Let $f_n = \sum_{r=1}^n c_r\phi_r$ be the series representation of $f$. Then
$$ \lim_{n\to\infty}\|f - f_n\| = 0 $$
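A quick numerical illustration of the inequality (177) and the convergence of the series. The orthonormal set $\phi_r(x) = \sqrt 2\sin r\pi x$ on $[0, 1]$ and the function $f(x) = x(1-x)$ are assumed for illustration; they do not appear in the text:

```python
import numpy as np

# Check ||f||^2 >= sum c_r^2 for the assumed orthonormal set
# phi_r(x) = sqrt(2) sin(r pi x) on [0, 1] and f(x) = x(1 - x)
x = np.linspace(0.0, 1.0, 20001)
f = x * (1.0 - x)
norm_f_sq = np.trapz(f**2, x)                     # ||f||^2 = 1/30

c = np.array([np.trapz(f * np.sqrt(2.0) * np.sin(r * np.pi * x), x)
              for r in range(1, 11)])
partial = np.cumsum(c**2)

# The inequality holds for every truncation, and the gap shrinks as n grows
assert np.all(partial <= norm_f_sq + 1e-6)
print(norm_f_sq, partial[-1])
```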
Consider the helicopter blade problem again.
If J is neglected (it is usually small) the eigenvalue problem is
$$ LW = \lambda MW,\qquad 0 < x < \ell $$
$$ B_iW = 0,\qquad i = 1, 2, \ldots, p,\qquad x = 0, \ell $$
which is much simpler than before.
Consider two comparison functions, u and v. The system is considered to be
self-adjoint iff
(u, Lv) = (v, Lu)
This is the symmetry we often obtain for discrete systems. Discrete systems
with symmetric stiffness matrices (and symmetric damping matrices) are also
self-adjoint. (this means no gyroscopic matrices, . . . ).
Recall $L$ is of order $2p$. Define
$$ [u, v] = \int_0^\ell \sum_{k=0}^p f_k(x)\,\frac{d^ku}{dx^k}\frac{d^kv}{dx^k}\,dx + \sum_{l=0}^{p-1} b_l\,\frac{d^lu}{dx^l}\frac{d^lv}{dx^l}\Big|_0^\ell $$
to be the energy inner product (obtained by integration by parts). The energy inner product is symmetric in $u$ and $v$ and their derivatives (a property of self-adjoint systems).
Note that $[u, v]$ only has derivatives through $p$, and boundary terms only through $p - 1$ (characteristic of geometric boundary conditions).
Question: Can we allow functions outside of $K_B^{2p}$? Yes. We can enlarge this space to include functions with only $p$ derivatives having finite energy. Define the expanded space $K_G^p$. The $G$ indicates that the members of the space satisfy the geometric boundary conditions. This is known as the energy space. Members are energy functions, or admissible functions.
For example, with $L = \frac{d^4}{dx^4}$:
$$ (u, Lv) = \int_0^\ell u\,\frac{d^4v}{dx^4}\,dx = -\int_0^\ell\frac{du}{dx}\frac{d^3v}{dx^3}\,dx + u\,\frac{d^3v}{dx^3}\Big|_0^\ell $$
$$ = \int_0^\ell\frac{d^2u}{dx^2}\frac{d^2v}{dx^2}\,dx \underbrace{- \frac{du}{dx}\frac{d^2v}{dx^2}\Big|_0^\ell + u\,\frac{d^3v}{dx^3}\Big|_0^\ell}_{\text{if } u \text{ and } v \text{ satisfy the B.C.s, these terms are zero}} \tag{179} $$
$$ = \underbrace{[u, v]}_{\text{energy inner product}} $$
Likewise
$$ (v, Lu) = \int_0^\ell\frac{d^2v}{dx^2}\frac{d^2u}{dx^2}\,dx $$
Thus, $L$ is self-adjoint. This indicates that only the $p$th derivative of $u$ and $v$ is required to have finite energy, and the functions must satisfy only the geometric B.C., so the space can be enlarged to $K_G^p$.
Consider $(u, Lu)$. If $(u, Lu) = \int_0^\ell uLu\,dx \geq 0$ for all $u$, and zero only for $u = 0$, then the system is positive definite (compare to $x^TAx \geq 0$ for all non-zero $x$). If the system is positive definite, all $\lambda_r > 0$:
$$ Lu = \lambda Mu $$
$$ \int_0^\ell u\left(Lu\right)dx = \int_0^\ell u\,\lambda Mu\,dx $$
$$ (u, Lu) = \lambda\int_0^\ell u\,Mu\,dx $$
If $(u, Lu) > 0$, since $M > 0$ and $\|u\|^2 > 0$, $\lambda$ must be $> 0$.
Recall $-\ddot F/F = \lambda$ (separation of variables), $\lambda = \omega^2$. If $\lambda > 0$, $s = \pm i\omega$ has no real part, and the system is marginally stable.
∴ Positive definite systems are stable.
∴ Positive semi-definite systems are marginally stable (rigid body rotation).
Consider two solutions $W_r$ and $W_s$ to the eigenproblem:
$$ LW_r = \lambda_r mW_r $$
$$ LW_s = \lambda_s mW_s $$
Multiply by $W_s$ and $W_r$ respectively and integrate:
$$ \int_0^\ell W_sLW_r\,dx = \lambda_r\int_0^\ell mW_sW_r\,dx $$
$$ \int_0^\ell W_rLW_s\,dx = \lambda_s\int_0^\ell mW_rW_s\,dx $$
Subtracting,
$$ \int_0^\ell W_sLW_r\,dx - \int_0^\ell W_rLW_s\,dx = \left(\lambda_r - \lambda_s\right)\int_0^\ell mW_sW_r\,dx $$
For a self-adjoint system the left side is zero, so for $\lambda_r \neq \lambda_s$,
$$ \int_0^\ell mW_sW_r\,dx = 0 $$
Then, from $LW_r = \lambda_r mW_r$,
$$ \int_0^\ell W_sLW_r\,dx = \lambda_r\underbrace{\int_0^\ell mW_sW_r\,dx}_{=0} = 0 $$
For self-adjoint systems the eigenfunctions are orthogonal, and they are also orthogonal in the energy inner product.
5.3 Non self-adjoint systems
Some systems, such as those involving flutter, are not self-adjoint. Consider the linear operator $L$. It can be shown that the operator $L$ has an adjoint $L^*$ such that
$$ (v, Lu) = \left(u, L^*v\right) $$
where $u$ and $v$ are in the domain of $L$ and $L^*$. Consider $u_i$ and $v_j$, eigenfunctions of $L$ and $L^*$:
$$ Lu_i = \lambda_iu_i,\qquad L^*v_j = \lambda_j^*v_j $$
∴ $u_i$ is orthogonal to every function $\left(L^* - \lambda_i\right)v_i$.
Consider: assume $\left(L^* - \lambda_i\right)v_i$ can represent any arbitrary function $f$. Then $(f, u_i) = 0$. This is not true for any function, but only those orthogonal to $u_i$, or those equal to 0 (since that is a solution).
∴ $\left(L^* - \lambda_i\right)v_i = 0$.
Thus, the eigenvalues of $L^*$ are the same as those for $L$. (The eigenfunctions are not the same.)
If $u_i$ and $v_i$, $i = 1, 2, \ldots, \infty$, are normalized, then
$$ \left(u_i, v_j\right) = \int_0^\ell u_iv_j\,dx = \delta_{ij},\qquad i, j = 1, 2, 3, \ldots \tag{180} $$
The sets of eigenfunctions are assumed to be complete, so that for any arbitrary function $f$,
$$ f = \sum_{i=1}^\infty \alpha_iu_i,\qquad \alpha_i = \left(v_i, f\right) $$
from (180). Likewise,
$$ f = \sum_{i=1}^\infty \beta_iv_i,\qquad \beta_i = \left(u_i, f\right) $$
$$ Lf = \sum_{i=1}^\infty \alpha_iLu_i = \sum_{i=1}^\infty \alpha_i\lambda_iu_i $$
$$ L^*f = \sum_{i=1}^\infty \beta_iL^*v_i = \sum_{i=1}^\infty \beta_i\lambda_i^*v_i $$
If $L^* = L$, then the system is self-adjoint. Non-self-adjoint systems are not generally easy to solve closed-form.
• Any linear combination of the eigenfunctions of a repeated eigenvalue is also an eigenfunction.
• They can (and should) be taken so that they are mutually orthogonal:
$$ \left[w_r, w_s\right] = \lambda_r\delta_{rs} $$
and
$$ \left(\sqrt m\,w_r, \sqrt m\,w_s\right) = \int_0^\ell mw_rw_s\,dx = \delta_{rs} $$
Every function $w$ with continuous $Lw$ and satisfying the boundary conditions can be expanded in a convergent series of the eigenfunctions:
$$ w = \sum_{r=1}^\infty c_r\underbrace{w_r}_{\text{eigenfunctions}},\qquad c_r = \int_0^\ell mww_r\,dx $$
This solution form is written in terms of the eigenfunctions. Often they cannot be found closed-form. When closed-form solutions do not exist, approximate methods must be used. The approximations are linear combinations of comparison or admissible functions. Admissible functions are preferred because there are more of them.
5.5 Vibration of Rods, Shafts, and Strings
All are represented by a second-order PDE. Consider
$$ T(t) = \frac{1}{2}\int_0^\ell m(x)\,\dot w(x,t)^2\,dx $$
$$ V(t) = \frac{1}{2}\int_0^\ell\left(s(x)\,w'(x,t)^2 + k(x)\,w(x,t)^2\right)dx + \frac{1}{2}Kw(\ell,t)^2 $$
$$ \delta W_{nc}(t) = \int_0^\ell p(x,t)\,\delta w(x,t)\,dx $$
Hamilton's principle:
$$ \int_{t_1}^{t_2}\left(\int_0^\ell\left(\delta\hat L + \delta\hat W_{nc}\right)dx + \underbrace{\delta L_0}_{\text{discrete Lagrangian}}\right)dt = 0 $$
$$ L_0 = -\frac{1}{2}Kw(\ell,t)^2,\qquad \delta\hat L = \delta\hat T - \delta\hat V $$
Applying Hamilton's principle or the Lagrange equation yields
$$ -k(x)\,w + \frac{\partial}{\partial x}\left(sw'\right) - m\ddot w + p = 0,\qquad 0 < x < \ell $$
$$ w(0,t) = 0,\qquad \text{geometric B.C.} $$
$$ sw' + Kw = 0 \text{ at } x = \ell,\qquad \text{dynamic B.C.} $$
Consider the unforced problem. Separation of variables yields
$$ -\frac{d}{dx}\left(sW'\right) + kW = \omega^2mW,\qquad 0 < x < \ell $$
$$ W(0) = 0,\qquad sW' + KW = 0 \text{ at } x = \ell $$
The operators are
$$ L = -\frac{d}{dx}\left(s\frac{d}{dx}\right) + k $$
$L$ is of order 2, so $p = 1$:
$$ B_1 = 1 \text{ at } x = 0,\qquad B_2 = s\frac{d}{dx} + K \text{ at } x = \ell $$
First, let's see if $L$ is self-adjoint. Consider two comparison functions $u$ and $v$:
$$ \int_0^\ell uLv\,dx = \int_0^\ell u\left(-\frac{d}{dx}\left(s\frac{dv}{dx}\right) + kv\right)dx $$
$$ = -\int_0^\ell u\,\frac{d}{dx}\left(s\frac{dv}{dx}\right)dx + \int_0^\ell ukv\,dx \tag{182} $$
$$ = -us\frac{dv}{dx}\Big|_0^\ell + \int_0^\ell s\frac{du}{dx}\frac{dv}{dx}\,dx + \int_0^\ell ukv\,dx $$
Since $v(0) = 0$ and $sv'(\ell) = -Kv(\ell)$, the boundary term becomes $Ku(\ell)v(\ell)$, so
$$ \int_0^\ell uLv\,dx = Kuv\Big|_{x=\ell} + \int_0^\ell\left(su'v' + kuv\right)dx $$
which is symmetric in $u$ and $v$: $L$ is self-adjoint.
Letting $v = u$,
$$ \int_0^\ell uLu\,dx = Ku^2\Big|_\ell + \int_0^\ell\left(su'^2 + ku^2\right)dx > 0 $$
So the system is positive definite, and all of the eigenvalues are positive. Note: we did not need to solve the E.O.M. to reach these conclusions.
Let $s(x)$, $k(x)$ and $m(x)$ all be constants. Then
$$ -\frac{d}{dx}\left(sW'\right) + kW = \omega^2mW $$
becomes
$$ W'' + \beta^2W = 0,\qquad \beta^2 = \left(\omega^2m - k\right)/s $$
The solution is
$$ W(x) = C_1\sin\beta x + C_2\cos\beta x $$
From the boundary condition at $x = 0$, $C_2 = 0$. The boundary condition at $x = \ell$ then gives the frequency equation
$$ s\beta\cos\beta\ell + K\sin\beta\ell = 0,\qquad\text{i.e.}\qquad \tan\beta\ell = -\frac{s\beta}{K} $$
which is solved graphically or numerically for the roots $\beta_r$.
[Figure: graphical solution, intersections of $\tan\beta\ell$ with $-s\beta/K$.]
The eigenfunctions are $W_r(x) = \sin\beta_rx$.
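The boundary condition $sW' + KW = 0$ at $x = \ell$ with $W = C_1\sin\beta x$ gives the transcendental equation $s\beta\cos\beta\ell + K\sin\beta\ell = 0$, whose roots can be found numerically. A sketch with assumed (hypothetical) values $s = K = \ell = 1$, so the equation reduces to $\tan\beta = -\beta$:

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical values (the text leaves s, K, and l symbolic)
s, K, l = 1.0, 1.0, 1.0

def freq_eq(beta):
    # s*beta*cos(beta*l) + K*sin(beta*l) = 0 is the frequency equation
    return s * beta * np.cos(beta * l) + K * np.sin(beta * l)

# For s = K = l = 1 there is one root per interval (r*pi - pi/2, r*pi)
roots = [brentq(freq_eq, r * np.pi - np.pi / 2 + 1e-6, r * np.pi - 1e-6)
         for r in range(1, 4)]
print(roots)
```

The first three roots of $\tan\beta = -\beta$ are approximately 2.029, 4.913, and 7.979.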
5.6 Bending Vibration of a Helicopter Blade
The differential equation with $J = 0$ is
$$ \frac{d^2}{dx^2}\left(EIW''\right) - \frac{d}{dx}\left(\int_x^\ell m\Omega^2\xi\,d\xi\; W'\right) = \lambda mW $$
with B.C.
$$ W(0) = 0,\qquad W'(0) = 0 $$
$$ EIW''\Big|_{x=\ell} = 0,\qquad \frac{d}{dx}\left(EIW''\right)\Big|_{x=\ell} = 0 $$
There is no known closed-form solution. Let's consider the self-adjointness and positive definiteness. The operator $L$ is
$$ L = \frac{d^2}{dx^2}\left(EI\frac{d^2}{dx^2}\right) - \frac{d}{dx}\left(\int_x^\ell m\Omega^2\xi\,d\xi\,\frac{d}{dx}\right) $$
∴ The mass-weighted eigenfunctions are orthogonal. Since $(u, Lv) = [u, v]$, we can check for positive definiteness by setting $v = u$ and observing the sign:
$$ [u, u] = \int_0^\ell uLu\,dx = \int_0^\ell \underbrace{EI}_{\geq 0}\,\underbrace{u''^2}_{\geq 0}\,dx + \int_0^\ell\underbrace{\int_x^\ell m\Omega^2\xi\,d\xi}_{\geq 0}\,u'^2\,dx > 0 \tag{184} $$
Substituting $W = x$ into the differential equation gives
$$ m\Omega^2x = m\lambda x \quad\Rightarrow\quad \Omega^2 = \lambda $$
Since $\omega^2 = \lambda$, $\omega = \Omega$ is the frequency of the flapping mode.
5.7 Variational Characterization of the Eigenvalues
Consider a self-adjoint system. Define the Rayleigh quotient as
$$ R(u) = \frac{[u, u]}{\left(\sqrt m\,u, \sqrt m\,u\right)} \tag{185} $$
Expanding $u = \sum_r c_rW_r$ in the mass-normalized eigenfunctions and substituting for $R$ gives
$$ R = \left(\sum_{r=1}^\infty c_r^2\lambda_r\right)\left(\sum_{r=1}^\infty c_r^2\right)^{-1} $$
so
$$ \frac{\partial R}{\partial c_i} = \left(\sum_{r=1}^\infty 2c_r\lambda_r\frac{\partial c_r}{\partial c_i}\right)\left(\sum_{r=1}^\infty c_r^2\right)^{-1} + \left(\sum_{r=1}^\infty c_r^2\lambda_r\right)(-1)\left(\sum_{r=1}^\infty 2c_r\frac{\partial c_r}{\partial c_i}\right)\left(\sum_{r=1}^\infty c_r^2\right)^{-2} = 0 \tag{186} $$
is satisfied. Hence the Rayleigh quotient has stationary points at the system eigenfunctions. Letting $u = c_iW_i$ in the Rayleigh quotient,
$$ R\left(W_i\right) = \lambda_i $$
∴ The stationary values of the Rayleigh quotient are the system eigenvalues.
Assume $u$ has the form $W_i + \epsilon v$, where $\epsilon$ is a small value and $v$ is an arbitrary function.
$$ R\left(W_i + \epsilon v\right) = \frac{\left[W_i + \epsilon v, W_i + \epsilon v\right]}{\left(\sqrt m\left(W_i + \epsilon v\right), \sqrt m\left(W_i + \epsilon v\right)\right)} = \frac{\left[W_i, W_i\right] + 2\epsilon\left[W_i, v\right] + \epsilon^2[v, v]}{\left(\sqrt mW_i, \sqrt mW_i\right) + 2\epsilon\left(\sqrt mW_i, \sqrt mv\right) + \epsilon^2\left(\sqrt mv, \sqrt mv\right)} \tag{188} $$
Using the binomial expansion theorem,
$$ R\left(W_i + \epsilon v\right) \approx R\left(W_i\right) + 2\epsilon\,\frac{\left[W_i, v\right]\left(\sqrt mW_i, \sqrt mW_i\right) - \left[W_i, W_i\right]\left(\sqrt mW_i, \sqrt mv\right)}{\left(\sqrt mW_i, \sqrt mW_i\right)^2} + O\left(\epsilon^2\right) $$
$$ \approx \lambda_i + 2\epsilon\,\frac{\left[W_i, v\right] - \lambda_i\left(\sqrt mW_i, \sqrt mv\right)}{\left(\sqrt mW_i, \sqrt mW_i\right)} + O\left(\epsilon^2\right) \tag{189} $$
$$ R(u) \geq \lambda_{s+1} $$
The $v_i$ are arbitrary functions; they are in effect constraints on the function $u$. So $\lambda_{s+1}$ is the max value of ($\min R(u)$ with respect to $u$) with respect to the $v_i$.
To apply this we can use admissible functions $u$, since $[u, u]$ satisfies the dynamic B.C.s automatically and $\left(\sqrt m\,u, \sqrt m\,u\right)$ does not require their consideration.
The influence-function formulation below is the continuous analog of the matrix problem
$$ \left(A^{-1} - \lambda^{-1}I\right)x = 0 $$
Let $a(x, \xi)$ be the displacement at $x$ due to a unit load at point $\xi$. The total displacement $W(x)$ is
$$ W(x) = \int_0^\ell a(x, \xi)\,f(\xi)\,d\xi $$
5.8.1 Example: Cantilever beam, EI = const.
Influence function (static response):
$$ a(x, \xi) = \frac{1}{2EI}x^2\left(\xi - \frac{1}{3}x\right),\qquad x < \xi $$
$$ a(x, \xi) = \frac{1}{2EI}\xi^2\left(x - \frac{1}{3}\xi\right),\qquad \xi < x $$
Pick an initial admissible function $W_0(x) = x^2$. (Picking an admissible function gets us closer to the answer quicker; however, any initial displacement, even one that does not satisfy the boundary conditions, will also work.) Substitution yields
$$ W_1(x) = \frac{\lambda m}{2EI}\left(\int_0^x \underbrace{\xi^2\left(x - \frac{1}{3}\xi\right)}_{a(x,\xi)}\underbrace{\xi^2}_{W_0(\xi)}\,d\xi + \int_x^\ell \underbrace{x^2\left(\xi - \frac{1}{3}x\right)}_{a(x,\xi)}\underbrace{\xi^2}_{W_0(\xi)}\,d\xi\right) $$
$$ W_1'''(\ell) = 0 $$
So, the B.C.s are now satisfied with our current estimate of the first mode.
HW: Apply Rayleigh's quotient and compare to the true $\omega$.
5.8.2 Example: String solution using Green's functions
Assume $T$ and $m$ constant, unit length, fixed at each end:
$$ a(x, \xi) = \begin{cases} x(1 - \xi) & x < \xi \\ \xi(1 - x) & \xi < x \end{cases} $$
Starting from $W_0(x) = 0.25 - (x - 0.5)^2 = x(1 - x)$, the Rayleigh quotient after successive iterations is
$$ R_0 = 10.0000\,c^2,\quad R_1 = 9.87097\,c^2,\quad R_2 = 9.86962\,c^2,\quad R_{\text{true}} = \pi^2c^2 = 9.86960\,c^2 $$
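The iteration behind these numbers can be reproduced symbolically, taking $T = \rho = 1$ (so $c = 1$) on the unit-length string:

```python
import sympy as sp

# Iterate W_{k+1}(x) = int a(x, xi) * rho * W_k(xi) d(xi) for the string
# (T = rho = 1, unit length), starting from W_0 = x(1 - x), and evaluate
# the Rayleigh quotient R = int T W'^2 dx / int rho W^2 dx at each step.
x, xi = sp.symbols('x xi')
W0 = x * (1 - x)

def rayleigh(W):
    num = sp.integrate(sp.diff(W, x)**2, (x, 0, 1))
    den = sp.integrate(W**2, (x, 0, 1))
    return num / den

R0 = rayleigh(W0)  # 10, as in the text (in units of c^2)
W1 = (sp.integrate(xi * (1 - x) * W0.subs(x, xi), (xi, 0, x))
      + sp.integrate(x * (1 - xi) * W0.subs(x, xi), (xi, x, 1)))
R1 = rayleigh(sp.expand(W1))  # 306/31 = 9.87097..., approaching pi^2
print(R0, R1)
```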
6 Discretization of continuous systems
6.1 The Rayleigh-Ritz method
Consider again the eigenvalue problem
$$ LW = m\lambda W,\qquad W = W(x) $$
$$ B_iW = 0,\qquad i = 1, 2, \ldots, p $$
$L$ is self-adjoint of order $2p$. Instead of solving this, we will seek stationary values of the Rayleigh quotient, equation (185):
$$ R(W) = \frac{[W, W]}{\left(\sqrt mW, \sqrt mW\right)} $$
where $W$ is a trial function. The function space considered will be the finite subspace $S^n$ of $K_G^p$. Denote a function approximating $W$ in $S^n$ by $W^n$:
$$ R\left(W^n\right) = \frac{\left[W^n, W^n\right]}{\left(\sqrt mW^n, \sqrt mW^n\right)} $$
Select a sequence of functions $\phi_1(x), \phi_2(x), \phi_3(x), \ldots, \phi_n(x)$ that are linearly independent and complete in energy. That is to say, for any $W(x)$ in $K_G^p$ and any small $\epsilon$, there are enough functions such that
$$ \left\|W - \sum_{i=1}^n a_i\phi_i(x)\right\| < \epsilon $$
∴ $\sum_{i=1}^n a_i\phi_i(x)$ can adequately represent $W(x)$. Let
$$ W^n = \sum_{i=1}^n a_i\phi_i $$
where the $a_i$ need to be determined:
$$ R\left(a_1, a_2, \ldots, a_n\right) = \frac{\left[\sum_{i=1}^n a_i\phi_i, \sum_{i=1}^n a_i\phi_i\right]}{\left(\sqrt m\sum_{i=1}^n a_i\phi_i, \sqrt m\sum_{i=1}^n a_i\phi_i\right)} = \frac{\sum_{i=1}^n\sum_{j=1}^n a_ia_j\left[\phi_i, \phi_j\right]}{\sum_{i=1}^n\sum_{j=1}^n a_ia_j\left(\sqrt m\phi_i, \sqrt m\phi_j\right)} \tag{190} $$
Next, define
$$ \left[\phi_i, \phi_j\right] = K_{ij},\qquad \left(\sqrt m\phi_i, \sqrt m\phi_j\right) = M_{ij} $$
These are the stiffness and mass coefficients (elements of the stiffness and mass matrices). Note that they must be symmetric as a result of $L$ being self-adjoint.
$$ R\left(a_1, a_2, \ldots, a_n\right) = \frac{N\left(a_1, a_2, \ldots, a_n\right)}{D\left(a_1, a_2, \ldots, a_n\right)} $$
$$ N\left(a_1, a_2, \ldots, a_n\right) = \sum_{i=1}^n\sum_{j=1}^n K_{ij}a_ia_j\qquad \text{(compare to } x^TKx\text{)} $$
$$ D\left(a_1, a_2, \ldots, a_n\right) = \sum_{i=1}^n\sum_{j=1}^n M_{ij}a_ia_j\qquad \text{(compare to } x^TMx\text{)} $$
Differentiating,
$$ \frac{\partial N}{\partial a_r} = 2\sum_{i=1}^n a_iK_{ir},\qquad \frac{\partial D}{\partial a_r} = 2\sum_{i=1}^n a_iM_{ir} $$
Setting
$$ \frac{\partial R}{\partial a_r} = 0 $$
gives
$$ \frac{\partial N}{\partial a_r} - \Lambda^n\frac{\partial D}{\partial a_r} = 0 $$
$$ 2\sum_{i=1}^n a_iK_{ir} - \Lambda^n\,2\sum_{i=1}^n a_iM_{ir} = 0 $$
$$ \left(K - \Lambda^nM\right)a = 0 $$
where $K_{ir}$ are the elements of $K$ and $M_{ir}$ are the elements of $M$. The eigenvalue problem is now algebraic. The $n$ eigenvalues $\Lambda^n$ are approximations of the first $n$ eigenvalues of the actual model, the quality of which is determined by the ability of
$$ W^n = \sum_{i=1}^n a_i\phi_i $$
to represent the eigenfunctions with $a_{n+1} = a_{n+2} = \ldots = 0$.
• The higher eigenfunctions are presumably not completely represented by $\sum a_i\phi_i$.
• Often (but not always) the Rayleigh-Ritz method is applied to find only the first (fundamental) natural frequency. A function approximating the first mode shape is tried. This is what was done in the section on Green's functions.
• Note that using additional functions $\phi$ is simple, since each additional term in the series only adds one new row and column to $K$ and $M$:
$$ K^{n+1} = \left[\begin{array}{c|c} K^n & \cdot \\ \hline \cdot & \cdot \end{array}\right],\qquad M^{n+1} = \left[\begin{array}{c|c} M^n & \cdot \\ \hline \cdot & \cdot \end{array}\right] $$
Let $\Lambda_i^n$ be the $i$th eigenvalue of the $n$-term Rayleigh-Ritz solution and $\Lambda_i^{n+1}$ be the $i$th eigenvalue of the $(n+1)$-term Rayleigh-Ritz solution. Then
$$ \Lambda_1^{n+1} \leq \Lambda_1^n \leq \Lambda_2^{n+1} \leq \Lambda_2^n \leq \ldots \leq \Lambda_n^n \leq \Lambda_{n+1}^{n+1} $$
Note the important constraints:
1. The admissible functions must be linearly independent.
6. The best method is often to solve a simpler, similar problem and use its eigenfunctions as admissible functions for the more difficult problem.
6.1.1 Example: Cantilever Beam, Shames, p. 340
Find the first two natural frequencies of a beam free at the left end and clamped at the right end. Assume two admissible functions:
$$ \phi_1(x) = \left(1 - \frac{x}{\ell}\right)^2,\qquad \phi_2(x) = \frac{x}{\ell}\left(1 - \frac{x}{\ell}\right)^2 $$
$$ W^2(x) = a_1\left(1 - \frac{x}{\ell}\right)^2 + a_2\,\frac{x}{\ell}\left(1 - \frac{x}{\ell}\right)^2 $$
$$ L = EI\frac{d^4}{dx^4} $$
$$ K_{ij} = \left[\phi_i, \phi_j\right] = \int_0^\ell \phi_i\,EI\frac{d^4\phi_j}{dx^4}\,dx = \int_0^\ell EI\frac{d^2\phi_i}{dx^2}\frac{d^2\phi_j}{dx^2}\,dx\qquad\text{(other terms are zero)} $$
$$ K_{11} = \int_0^\ell EI\left(\phi_1''\right)^2dx = \int_0^\ell EI\left(\frac{2}{\ell^2}\right)^2dx = \frac{4EI}{\ell^3} $$
$$ K_{12} = K_{21} = \int_0^\ell EI\phi_1''\phi_2''\,dx = \int_0^\ell EI\,\frac{2}{\ell^2}\,\frac{2}{\ell^2}\left(3\frac{x}{\ell} - 2\right)dx = -\frac{2EI}{\ell^3} $$
$$ K_{22} = \int_0^\ell EI\left(\phi_2''\right)^2dx = \int_0^\ell EI\,\frac{4}{\ell^4}\left(3\frac{x}{\ell} - 2\right)^2dx = \frac{4EI}{\ell^3} $$
$$ M_{ij} = \left(\sqrt m\phi_i, \sqrt m\phi_j\right) $$
$$ M_{11} = \int_0^\ell m\left(1 - \frac{x}{\ell}\right)^4dx = \frac{m\ell}{5} $$
$$ M_{12} = M_{21} = \int_0^\ell m\left(1 - \frac{x}{\ell}\right)^2\frac{x}{\ell}\left(1 - \frac{x}{\ell}\right)^2dx = \frac{m\ell}{30} $$
$$ M_{22} = \int_0^\ell m\left(\frac{x}{\ell}\right)^2\left(1 - \frac{x}{\ell}\right)^4dx = \frac{m\ell}{105} $$
$$ K = \frac{EI}{\ell^3}\begin{bmatrix} 4 & -2 \\ -2 & 4 \end{bmatrix},\qquad M = m\ell\begin{bmatrix} \frac{1}{5} & \frac{1}{30} \\ \frac{1}{30} & \frac{1}{105} \end{bmatrix} $$
Solving
$$ \left(K - \Lambda M\right)a = 0 $$
gives
$$ \Lambda_1^2 = 12.48\,\frac{EI}{m\ell^4},\qquad \Lambda_2^2 = 1212\,\frac{EI}{m\ell^4} $$
True values are
$$ \Lambda_1 = 12.30\,\frac{EI}{m\ell^4},\qquad \Lambda_2 = 483\,\frac{EI}{m\ell^4} $$
The second function chosen was a lousy choice, and thus our estimate of the second natural frequency is way too high.
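A numerical check of this example. With $EI = m = \ell = 1$, the generalized eigenvalues of the $K$ and $M$ matrices assembled above come out near 12.48 and 1212:

```python
import numpy as np
from scipy.linalg import eigh

# K and M from the two-term Rayleigh-Ritz solution, with EI = m = l = 1
K = np.array([[4.0, -2.0], [-2.0, 4.0]])
M = np.array([[1.0 / 5.0, 1.0 / 30.0], [1.0 / 30.0, 1.0 / 105.0]])

# Generalized symmetric-definite eigenvalue problem (K - Lambda M) a = 0
lam = eigh(K, M, eigvals_only=True)
print(lam)  # approximately [12.48, 1211.5] in units of EI/(m l^4)
```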
The potential energy can be written in the general form
$$ V = \frac{1}{2}[w, w] $$
Substituting the series approximation $w = \sum_i \phi_i(x)\,q_i(t)$ into $T$ yields
$$ T = \frac{1}{2}\int_0^\ell m\left(\sum_{i=1}^n \phi_i\dot q_i\right)\left(\sum_{j=1}^n \phi_j\dot q_j\right)dx = \frac{1}{2}\sum_{i=1}^n\sum_{j=1}^n \dot q_i\dot q_j\int_0^\ell m\phi_i\phi_j\,dx = \frac{1}{2}\sum_{i=1}^n\sum_{j=1}^n \dot q_i\dot q_jM_{ij} $$
where
$$ M_{ij} = \int_0^\ell m\phi_i\phi_j\,dx,\qquad i, j = 1, 2, 3, \ldots, n $$
Likewise for the potential energy,
$$ V = \frac{1}{2}\sum_{i=1}^n\sum_{j=1}^n q_iq_jK_{ij},\qquad K_{ij} = \left[\phi_i, \phi_j\right],\qquad i, j = 1, 2, 3, \ldots, n $$
Note that the mass and stiffness matrices are identical to those obtained in the Rayleigh-Ritz method. Lagrange's equations for a discrete system (no damping, external loads, or $T_0$ or $T_1$ energy) are
$$ \frac{d}{dt}\frac{\partial L}{\partial\dot q_r} - \frac{\partial L}{\partial q_r} = 0,\qquad r = 1, \ldots, n,\qquad L = T - V $$
Substituting for $L$ yields
$$ \sum_{j=1}^n\left(M_{rj}\ddot q_j + K_{rj}q_j\right) = 0,\qquad r = 1, 2, \ldots, n $$
Assuming $\ddot q_j = -\omega^2q_j = -\Lambda q_j$,
$$ \sum_{j=1}^n\left(K_{rj} - \Lambda M_{rj}\right)q_j = 0,\qquad r = 1, 2, \ldots, n $$
6.3 Weighted Residual Methods
Rayleigh-Ritz: the variation of Rayleigh's quotient is zero. Weighted residual methods are applied directly using the D.E. and B.C.s. Consider a trial function $W(x)$ in the space $K_B^{2p}$ substituted into the differential equation
$$ LW(x) = \lambda mW(x) $$
The residual is the error
$$ R(W, x) = LW - \lambda mW $$
For the exact eigenfunction, $R(W, x) = 0$. For an approximation $W^n = \sum_j a_j\phi_j$ we instead force weighted integrals of
$$ vR = v\left(LW - \lambda mW\right) $$
to zero. As $n \to \infty$, the weighting space $V^n$ fills the entire space, and then the only way for $\left(\psi_i, R\right) = 0$ is for $R = 0$.
$$ \left(\psi_i, R\right) = \sum_{j=1}^n\left(K_{ij} - \lambda^nM_{ij}\right)a_j = 0 $$
$$ K_{ij} = \left(\psi_i, L\phi_j\right) = \int_0^\ell \psi_iL\phi_j\,dx,\qquad M_{ij} = \left(\psi_i, m\phi_j\right) = \int_0^\ell \psi_im\phi_j\,dx $$
$K_{ij}$ is usually not symmetric, regardless of whether or not $L$ is self-adjoint. (It depends on the technique.)
6.3.1 Galerkin's Method (Ritz's Second Method)
Assume the weighting functions are the same as the trial functions:
$$ \psi_i = \phi_i,\qquad i = 1, 2, \ldots, n $$
6.3.2 Example: Clamped-clamped beam (Dimarogonas)
Consider a clamped-clamped beam, length $\ell$. Choose
$$ \psi_1(x) = 1 - \cos\frac{2\pi x}{\ell},\qquad \psi_2(x) = 1 - \cos\frac{4\pi x}{\ell} $$
$$ R_i = \int_0^\ell \psi_i\left(LW - m\omega^2W\right)dx $$
[Figure: the trial functions $\psi_1$ and $\psi_2$ plotted versus $x/\ell$.]
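A sketch of the two-term Galerkin solution with these trial functions, taking $EI = m = \ell = 1$ (the notes do not carry this computation out; here $K_{ij} = (\psi_i, L\psi_j)$ and $M_{ij} = (\psi_i, m\psi_j)$):

```python
import numpy as np
import sympy as sp
from scipy.linalg import eigh

# Trial functions for the clamped-clamped beam, with EI = m = l = 1
x = sp.symbols('x')
psi = [1 - sp.cos(2 * sp.pi * x), 1 - sp.cos(4 * sp.pi * x)]

# K_ij = int psi_i * EI * psi_j'''' dx, M_ij = int psi_i * m * psi_j dx
K = np.array([[float(sp.integrate(pi_ * sp.diff(pj_, x, 4), (x, 0, 1)))
               for pj_ in psi] for pi_ in psi])
M = np.array([[float(sp.integrate(pi_ * pj_, (x, 0, 1)))
               for pj_ in psi] for pi_ in psi])

lam = eigh(K, M, eigvals_only=True)
print(np.sqrt(lam))  # omega_1 near the exact clamped-clamped value 22.37
```

The two-term estimate of $\omega_1^2$ comes out near 505, against the exact clamped-clamped value $(4.730)^4 \approx 500.6$; the second estimate is, as usual for a crude basis, far too high.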
6.3.3 The collocation method
The weighting functions are spatial Dirac delta functions
$$ \psi_i = \delta\left(x - x_i\right) $$
The points $x = x_i$ are chosen in advance.
$$ \left(\psi_i, R\right) = \int_0^\ell \delta\left(x - x_i\right)\left(LW^n - \lambda mW^n\right)dx = \left(LW^n - \lambda^nmW^n\right)\Big|_{x=x_i} = 0 $$
What this means is that the D.E. is satisfied at $n$ preselected locations. As $n$ increases, the equation is satisfied everywhere. Now
$$ K_{ij} = \int_0^\ell \delta\left(x - x_i\right)L\phi_j\,dx = L\phi_j\left(x_i\right) $$
$$ M_{ij} = \int_0^\ell \delta\left(x - x_i\right)m\phi_j\,dx = m\phi_j\left(x_i\right) $$
The coefficients here are not symmetric because $\delta$ is not a comparison function of $L$. This is true even when $L$ is self-adjoint.
Pro: easy to find $K_{ij}$, $M_{ij}$.
Con: difficult to obtain the solution to a non-symmetric eigenvalue problem.
Example:
$$ m(x) = m_0\sin\frac{\pi x}{\ell},\qquad EI = \text{const} $$
with weighting and trial functions
$$ \psi_1 = \delta\left(x - \tfrac{\ell}{4}\right),\qquad \phi_1 = \sin\frac{\pi x}{\ell} $$
$$ \psi_2 = \delta\left(x - \tfrac{\ell}{2}\right),\qquad \phi_2 = \sin\frac{2\pi x}{\ell} $$
$$ \psi_3 = \delta\left(x - \tfrac{3\ell}{4}\right),\qquad \phi_3 = \sin\frac{3\pi x}{\ell} $$
$$ K_{ij} = L\phi_j\left(x_i\right) = EI\frac{d^4}{dx^4}\phi_j\Big|_{x_i} = EI\left(\frac{j\pi}{\ell}\right)^4\sin\frac{j\pi x_i}{\ell} $$
$$ K_{11} = EI\left(\frac{\pi}{\ell}\right)^4\frac{1}{\sqrt 2},\quad K_{12} = EI\left(\frac{2\pi}{\ell}\right)^4,\quad K_{13} = EI\left(\frac{3\pi}{\ell}\right)^4\frac{1}{\sqrt 2} $$
$$ K_{21} = EI\left(\frac{\pi}{\ell}\right)^4,\quad K_{22} = 0,\quad K_{23} = -EI\left(\frac{3\pi}{\ell}\right)^4 $$
$$ K_{31} = EI\left(\frac{\pi}{\ell}\right)^4\frac{1}{\sqrt 2},\quad K_{32} = -EI\left(\frac{2\pi}{\ell}\right)^4,\quad K_{33} = EI\left(\frac{3\pi}{\ell}\right)^4\frac{1}{\sqrt 2} $$
$$ K = EI\left(\frac{\pi}{\ell}\right)^4\begin{bmatrix} \frac{1}{\sqrt 2} & 16 & 57.27 \\ 1 & 0 & -81 \\ \frac{1}{\sqrt 2} & -16 & 57.27 \end{bmatrix} $$
$$ M_{ij} = m\left(x_i\right)\phi_j\left(x_i\right) $$
$$ M_{11} = \frac{1}{2}m_0,\quad M_{12} = \frac{1}{\sqrt 2}m_0,\quad M_{13} = \frac{1}{2}m_0 $$
$$ M_{21} = m_0,\quad M_{22} = 0,\quad M_{23} = -m_0 $$
$$ M_{31} = \frac{1}{2}m_0,\quad M_{32} = -\frac{1}{\sqrt 2}m_0,\quad M_{33} = \frac{1}{2}m_0 $$
Assembling:
$$ M = m_0\begin{bmatrix} \frac{1}{2} & \frac{1}{\sqrt 2} & \frac{1}{2} \\ 1 & 0 & -1 \\ \frac{1}{2} & -\frac{1}{\sqrt 2} & \frac{1}{2} \end{bmatrix} $$
$$ \sqrt{\mathrm{eig}\left(M^{-1}K\right)} = \frac{1}{\ell^2}\sqrt{\frac{EI}{m_0}}\left(11.74,\ 39.48,\ 105.6\right) $$
For comparison, a uniform simply supported beam:
$$ \omega_n = \frac{1}{\ell^2}\sqrt{\frac{EI}{m_0}}\left(9.87,\ 39.48,\ 88.83\right) $$
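The assembly of the collocation matrices can be checked numerically. A sketch with $EI = m_0 = \ell = 1$, reproducing the $K$ and $M$ matrices above from the pointwise formulas:

```python
import numpy as np

# Collocation points x_i and mode numbers j (EI = m0 = l = 1)
xi_pts = np.array([0.25, 0.5, 0.75])
j = np.arange(1, 4)

# K_ij = EI (j pi / l)^4 sin(j pi x_i / l),  M_ij = m(x_i) sin(j pi x_i / l)
K = (j * np.pi) ** 4 * np.sin(np.outer(xi_pts, j) * np.pi)
M = np.sin(np.pi * xi_pts)[:, None] * np.sin(np.outer(xi_pts, j) * np.pi)

# The entries match the matrices assembled above (57.27 = 81/sqrt(2))
assert np.allclose(K / np.pi**4,
                   [[2**-0.5, 16, 81 * 2**-0.5],
                    [1, 0, -81],
                    [2**-0.5, -16, 81 * 2**-0.5]], atol=1e-9)
assert np.allclose(M, [[0.5, 2**-0.5, 0.5], [1, 0, -1], [0.5, -2**-0.5, 0.5]],
                   atol=1e-9)
print(np.round(K / np.pi**4, 2))
```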
6.4 System Response By Approximate Methods: Galerkin's Method - the foundation of Finite Elements
6.4.1 Damped (and undamped) Non-gyroscopic System
Consider
$$ Lw(x,t) + C\frac{\partial w(x,t)}{\partial t} + M\frac{\partial^2w(x,t)}{\partial t^2} = f(x,t) $$
Let
$$ w^n(x,t) = \sum_{j=1}^n \phi_j(x)\,q_j(t) $$
Then
$$ M\ddot q + C\dot q + Kq = F(t) $$
with
$$ M_{ij} = \int_0^\ell \phi_im(x)\phi_j\,dx,\qquad K_{ij} = \int_0^\ell \phi_iL\phi_j\,dx $$
$$ C_{ij} = \int_0^\ell \phi_iC\phi_j\,dx,\qquad F_i = \int_0^\ell \phi_if\,dx $$
The discrete equations can be solved as shown before.
(Householder's method.) More efficient than Givens' method: half as many multiplications. Requires $n - 2$ transformations:
$$ A_k = P_kA_{k-1}P_k,\qquad k = 1, 2, 3, \ldots, n - 2 $$
where
$$ P_k = I - 2v_kv_k^T,\qquad v_k^Tv_k = 1 $$
$A_1$ must have the form
$$ A_1 = \begin{bmatrix} a_{11}^{(1)} & a_{12}^{(1)} & 0 & \cdots & 0 \\ a_{21}^{(1)} & a_{22}^{(1)} & a_{23}^{(1)} & \cdots & a_{2n}^{(1)} \\ 0 & a_{32}^{(1)} & a_{33}^{(1)} & \cdots & a_{3n}^{(1)} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & a_{n2}^{(1)} & a_{n3}^{(1)} & \cdots & a_{nn}^{(1)} \end{bmatrix} $$
Let's pick $v_{1,1} = 0$, so
$$ P_1 = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ 0 & 1 - 2v_{12}^2 & -2v_{12}v_{13} & \cdots & -2v_{12}v_{1n} \\ 0 & -2v_{12}v_{13} & 1 - 2v_{13}^2 & \cdots & -2v_{13}v_{1n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & -2v_{12}v_{1n} & \cdots & \cdots & 1 - 2v_{1n}^2 \end{bmatrix} $$
Then
$$ v_{1,2} = \left[\frac{1}{2}\left(1 \mp \frac{a_{12}}{\alpha_1}\right)\right]^{1/2} $$
where
$$ \alpha_1 = \left(\sum_{j=2}^n a_{1j}^2\right)^{1/2} $$
then
$$ v_{1,j} = \mp\frac{a_{1j}}{2\alpha_1v_{1,2}},\qquad j = 3, \ldots, n $$
Where the signs are chosen to be the same as that of $a_{12}$. The procedure is generalized to
$$ v_{k,k+1} = \left[\frac{1}{2}\left(1 \mp \frac{a_{k,k+1}}{\alpha_k}\right)\right]^{1/2},\qquad v_{k,j} = \mp\frac{a_{kj}}{2\alpha_kv_{k,k+1}},\qquad \alpha_k = \left(\sum_{j=k+1}^n a_{kj}^2\right)^{1/2} $$
For example (Octave output), for
A =
   1   1   1   1
   1   2   2   2
   1   2   3   3
   1   2   3   4
the first step ($k = 1$) gives
alpha = 1.7321
V =
   0.00000
   0.88807
   0.32506
   0.32506
With $P_k = I - 2v_kv_k^T$:
P1 =
A =
k = 2
alpha = 1.2472
V =
   0.00000
   0.00000
   0.84755
   0.53071
Psub =
   0.00000   0.00000  -0.89961   0.43670
A =
The accumulated transformation is
$$ A_k = PA_0P \tag{193} $$
where
$$ P = \prod_{i=1}^k P_i \tag{194} $$
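The first Householder step of this example can be reproduced in Python; a sketch of the same computation, taking the signs to match $a_{12} > 0$ as in the session above:

```python
import numpy as np

# The example matrix from the Octave session
A = np.array([[1.0, 1, 1, 1], [1, 2, 2, 2], [1, 2, 3, 3], [1, 2, 3, 4]])

alpha = np.sqrt(np.sum(A[0, 1:] ** 2))        # 1.7321 = sqrt(3)
v = np.zeros(4)
v[1] = np.sqrt(0.5 * (1 + A[0, 1] / alpha))   # 0.88807
v[2:] = A[0, 2:] / (2 * alpha * v[1])         # 0.32506, 0.32506

P1 = np.eye(4) - 2.0 * np.outer(v, v)
A1 = P1 @ A @ P1

# The first row and column now have the tridiagonal pattern [a11, a12, 0, 0]
assert abs(alpha - np.sqrt(3)) < 1e-12
assert np.allclose(np.abs(v), [0, 0.88807, 0.32506, 0.32506], atol=1e-5)
assert np.allclose(A1[0, 2:], 0) and np.allclose(A1[2:, 0], 0)
print(A1)
```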
7.2 The QR Method
The QR method for obtaining eigenvalues is computationally expensive if the matrix is full. It is very efficient when Givens' method (or the more efficient Householder's method) is applied first.
A1=[2 -1 0;-1 2.5 -1.5;0 -1.5 3]
A1 =
   2.0000  -1.0000        0
  -1.0000   2.5000  -1.5000
        0  -1.5000   3.0000
First, use rotations to get rid of the sub-diagonal elements:
s1=-1/sqrt(2^2+1)
s1 =
  -0.4472
c1=2/sqrt(2^2+1)
c1 =
   0.8944
t1=[c1 s1 0;-s1 c1 0;0 0 1]
A1p1=t1*A1
A1p1 =
   2.2361  -2.0125   0.6708
        0   1.7889  -1.3416
        0  -1.5000   3.0000
s2=A1p1(3,2)/sqrt(A1p1(3,2)^2+A1p1(2,2)^2)
s2 =
  -0.6425
c2=A1p1(2,2)/sqrt(A1p1(3,2)^2+A1p1(2,2)^2)
c2 =
   0.7663
The rotation matrix $\Theta_2$ is
$$ \Theta_2 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & \sin\theta \\ 0 & -\sin\theta & \cos\theta \end{bmatrix} $$
t2=[1 0 0;0 c2 s2;0 -s2 c2]
t2 =
   1.0000        0        0
        0   0.7663  -0.6425
        0   0.6425   0.7663
A1p2=t2*A1p1
A1p2 =
   2.2361  -2.0125   0.6708
        0   2.3345  -2.9556
        0  -0.0000   1.4367
Now we need to post-multiply by our rotations:
Q1=t1'*t2'
Q1 =
   0.8944   0.3427   0.2873
  -0.4472   0.6854   0.5747
        0  -0.6425   0.7663
A2=A1p2*Q1
A2 =
   2.9000  -1.0440   0.0000
  -1.0440   3.4991  -0.9231
   0.0000  -0.9231   1.1009
This is the completion of one step. Note that this is identical to $A_2 = Q_1^TA_1Q_1$.
The beginning of subsequent steps starts with the solution of the eigenvalue problem for the lowest $2\times 2$ principal minor of the original matrix, i.e.
$$ \mathrm{eig}\begin{bmatrix} 2.5 & -1.5 \\ -1.5 & 3 \end{bmatrix} $$
eig(A1(2:3,2:3))
ans =
   1.22931
   4.27069
eig(A2(2:3,2:3))
ans =
   3.8133
   0.7867
$$ \left|\frac{0.7867}{4.27} - 1\right| = 0.81 > \frac{1}{2} $$
Thus, a shift is not in order (we will have one later). So, repeat the process again from the start.
s1=A2(2,1)/sqrt(A2(2,1)^2+A2(1,1)^2)
s1 =
-0.3387
c1=A2(1,1)/sqrt(A2(2,1)^2+A2(1,1)^2)
c1 =
0.9409
t1=[c1 s1 0;-s1 c1 0;0 0 1]
t1 =
0.9409 -0.3387 0
0.3387 0.9409 0
0 0 1.0000
A2p1=t1*A2
A2p1 =
3.0822 -2.1676 0.3127
-0.0000 2.9386 -0.8686
0.0000 -0.9231 1.1009
s2=A2p1(3,2)/sqrt(A2p1(3,2)^2+A2p1(2,2)^2)
s2 =
-0.2997
c2=A2p1(2,2)/sqrt(A2p1(3,2)^2+A2p1(2,2)^2)
c2 =
0.9540
t2=[1 0 0;0 c2 s2;0 -s2 c2]
t2 =
1.0000 0 0
0 0.9540 -0.2997
0 0.2997 0.9540
A2p2=t2*A2p1
A2p2 =
3.0822 -2.1676 0.3127
-0.0000 3.0802 -1.1586
0.0000 0.0000 0.7900
Q2=t1’*t2’
Q2 =
0.9409 0.3232 0.1015
-0.3387 0.8976 0.2820
0 -0.2997 0.9540
A3=A2p2*Q2
A3 =
3.6342 -1.0433 0.0000
-1.0433 3.1121 -0.2368
0.0000 -0.2368 0.7537
This is the end of a second rotation. We already have the eigensolution of the $2\times 2$ minor of $A_2$; we need it for $A_3$.
eig(A3(2:3,2:3))
ans =
   3.1356
   0.7301
$$ \left|\frac{0.7301}{0.7867} - 1\right| = 0.0719 < \frac{1}{2} $$
A shift is thus advisable. The next iteration begins by using A3 − 0.7301I.
A3p=A3-eye(3)*ans(2)
A3p =
2.9041 -1.0433 0.0000
-1.0433 2.3820 -0.2368
0.0000 -0.2368 0.0235
s1=A3p(2,1)/sqrt(A3p(2,1)^2+A3p(1,1)^2)
s1 =
-0.3381
c1=A3p(1,1)/sqrt(A3p(2,1)^2+A3p(1,1)^2)
c1 =
0.9411
t1=[c1 s1 0;-s1 c1 0;0 0 1]
t1 =
0.9411 -0.3381 0
0.3381 0.9411 0
0 0 1.0000
A3p1=t1*A3p
A3p1 =
3.0858 -1.7873 0.0801
-0.0000 1.8889 -0.2228
0.0000 -0.2368 0.0235
s2=A3p1(3,2)/sqrt(A3p1(3,2)^2+A3p1(2,2)^2)
s2 =
-0.1244
c2=A3p1(2,2)/sqrt(A3p1(3,2)^2+A3p1(2,2)^2)
c2 =
0.9922
t2=[1 0 0;0 c2 s2;0 -s2 c2]
t2 =
1.0000 0 0
0 0.9922 -0.1244
0 0.1244 0.9922
A3p2=t2*A3p1
A3p2 =
3.0858 -1.7873 0.0801
-0.0000 1.9037 -0.2240
0.0000 0.0000 -0.0044
Q3=t1’*t2’
Q3 =
0.9411 0.3355 0.0421
-0.3381 0.9338 0.1170
0 -0.1244 0.9922
We now complete the rotation and add back the subtracted approximation
to the eigenvalue.
A4=A3p2*Q3+(eye(3)*.7301)
A4 =
4.2385 -0.6437 -0.0000
-0.6437 2.5356 0.0005
0.0000 0.0005 0.7258
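The first (unshifted) QR step above can be checked against a library QR factorization. A sketch: NumPy's factors may differ from the Givens factors by column signs, so the comparison is on magnitudes:

```python
import numpy as np

# One unshifted QR step: A2 = R1 Q1 = Q1^T A1 Q1, as in the session above
A1 = np.array([[2.0, -1.0, 0.0], [-1.0, 2.5, -1.5], [0.0, -1.5, 3.0]])
Q, R = np.linalg.qr(A1)
A2 = R @ Q

# The session's result; signs of off-diagonals can differ with the
# library's sign convention, so compare absolute values
target = np.array([[2.9, -1.0440, 0.0],
                   [-1.0440, 3.4991, -0.9231],
                   [0.0, -0.9231, 1.1009]])
assert np.allclose(np.abs(A2), np.abs(target), atol=1e-3)
print(A2)
```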
7.3 Subspace Iteration
Consider the eigenvalue problem
$$ K\Phi = \lambda M\Phi $$
for the case where we only need the first $p$ eigenvalues and eigenvectors. Start with an initial guess $X_1$ ($n\times q$), like the Rayleigh-Ritz method:
$$ \underbrace{\bar X_2}_{n\times q} = K^{-1}M\underbrace{X_1}_{n\times q} $$
$$ \underbrace{K_2}_{q\times q} = \bar X_2^TK\bar X_2,\qquad M_2 = \bar X_2^TM\bar X_2 $$
The reduced eigenvalue problem is
$$ K_2Q_2 = M_2Q_2\Lambda_2 $$
which can be solved using other techniques better suited to smaller eigenvalue problems. In general, $q \ll n$. An improved approximation for the eigenvectors is
$$ \underbrace{X_2}_{n\times q} = \underbrace{\bar X_2}_{n\times q}\underbrace{Q_2}_{q\times q} $$
Repeat this:
$$ \bar X_{k+1} = K^{-1}MX_k,\qquad \underbrace{K_{k+1}}_{q\times q} = \bar X_{k+1}^TK\bar X_{k+1},\qquad M_{k+1} = \bar X_{k+1}^TM\bar X_{k+1} $$
Solve
$$ K_{k+1}Q_{k+1} = M_{k+1}Q_{k+1}\Lambda_{k+1} $$
for $Q_{k+1}$, then
$$ \underbrace{X_{k+1}}_{n\times q} = \underbrace{\bar X_{k+1}}_{n\times q}\underbrace{Q_{k+1}}_{q\times q} $$
K =
   2.0000  -1.0000        0
  -1.0000   2.5000  -1.5000
        0  -1.5000   3.0000
M =
   1   0   0
   0   1   0
   0   0   1
x1 =
   1   1
   0   1
   0   1
>> x2bar=K\M*x1
x2bar =
0.70000 1.30000
0.40000 1.60000
0.20000 1.13333
>> k2=x2bar’*K*x2bar
k2 =
0.70000 1.30000
1.30000 4.03333
>> m2=x2bar’*M*x2bar
m2 =
0.69000 1.77667
1.77667 5.53444
>> [q2,lam2]=eig(m2\k2);
>> q2=q2(:,[2,1]),lam2=diag(sort(diag(lam2)))% need to sort eigenvalues
q2 =
-0.02669 0.95230
0.99964 -0.30515
lam2 =
0.72874 0.00000
0.00000 2.34844
x2 =
0.54935 0.81937
0.68141 -0.32580
0.48362 -0.47169
>> x3bar=K\M*x2
x3bar =
0.75384 0.34890
0.95832 -0.12157
0.64037 -0.21801
>> k3=x3bar’*K*x3bar
k3 =
1.37683 0.00339
0.00339 0.42832
>> m3=x3bar'*M*x3bar
m3 =
   1.89671   0.00690
   0.00690   0.18404
>> [q3,lam3]=eig(m3\k3);
>> q3=q3(:,[2,1]),lam3=diag(sort(diag(lam3)))
q3 =
   0.00417  -0.99998
  -0.99999  -0.00548
lam3 =
   0.72590   0.00000
   0.00000   2.32760
>> x3=x3bar*q3;x3=x3/[norm(x3(:,1)) 0;0 norm(x3(:,2))] %normalize eigenvectors
x3 =
  -0.54874  -0.80601
  -0.69534   0.29272
  -0.46410   0.51445
After 9 iterations,
$$ \Lambda = \begin{bmatrix} 0.7258 & 0 \\ 0 & 2.3198 \end{bmatrix},\qquad \Phi = \begin{bmatrix} 0.5480 & 0.7909 \\ 0.6983 & -0.2533 \\ 0.4606 & -0.5571 \end{bmatrix} $$
Initial vectors are chosen to be:
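The session's stiffness matrix does not survive the page extraction, but the printed iterate K\M*x1 = [0.7 0.4 0.2]ᵀ, [1.3 1.6 1.1333]ᵀ is consistent with K = [2 −1 0; −1 2.5 −1.5; 0 −1.5 3] (the matrix of the QR example), so that is assumed here. A sketch of the iteration under that assumption:

```python
import numpy as np
from scipy.linalg import eigh

# Assumed K (consistent with the printed K\M*x1 iterate), M = I
K = np.array([[2.0, -1.0, 0.0], [-1.0, 2.5, -1.5], [0.0, -1.5, 3.0]])
M = np.eye(3)
X = np.array([[1.0, 1.0], [0.0, 1.0], [0.0, 1.0]])  # initial guess x1

for _ in range(15):
    Xbar = np.linalg.solve(K, M @ X)   # Xbar_{k+1} = K^{-1} M X_k
    Kr = Xbar.T @ K @ Xbar             # reduced q x q matrices
    Mr = Xbar.T @ M @ Xbar
    lam, Q = eigh(Kr, Mr)              # reduced eigenvalue problem
    X = Xbar @ Q                       # improved eigenvector estimates

print(lam)  # approaches [0.7258, 2.3198]
```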
7.4 Shifting
Consider the eigenvalue problem
$$ \left(K - \lambda M\right)\psi = 0 $$
Subspace iteration does not work with a singular $K$ (a system capable of rigid body motion). Consider redefining $\lambda = \mu - 1$. Then the eigenvalue problem is restated as
$$ \begin{aligned} \left(K - \lambda M\right)\psi &= 0 \\ \left(K - (\mu - 1)M\right)\psi &= 0 \\ \left(\left(K + M\right) - \mu M\right)\psi &= 0 \\ \left(K' - \mu M\right)\psi &= 0 \end{aligned} \tag{196} $$
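A minimal sketch of the shift, with a hypothetical two-DOF system capable of rigid body motion (not from the notes): $K$ is singular, so $K^{-1}$ does not exist, but $K' = K + M$ does, and $\lambda = \mu - 1$ recovers the original eigenvalues.

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical unconstrained system: K is singular (rigid body mode)
K = np.array([[1.0, -1.0], [-1.0, 1.0]])
M = np.eye(2)

# Shifted problem (K' - mu M) psi = 0 with K' = K + M is nonsingular
mu = eigh(K + M, M, eigvals_only=True)
lam = mu - 1.0  # lambda = mu - 1 gives back the original eigenvalues
assert np.allclose(lam, [0.0, 2.0])
print(lam)
```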
References
[1] Meirovitch, L., Principles and Techniques of Vibrations, Prentice Hall,
Inc., Englewood Cliffs, 1997.
[2] Golub, G. H. and Van Loan, C. F., Matrix Computations, Johns Hopkins
University Press, Baltimore, 1985.
[4] Bathe, K.-J. and Wilson, E. L., Numerical Methods in Finite Element
Analysis, Prentice Hall, Inc., Englewood Cliffs, 1976.