
Notes for Computational Methods in Structural Dynamics

Joseph C. Slater
March 21, 2006

Contents

1 Concepts of Linear Algebra
  1.1 Linear Vector Spaces
  1.2 Linear Dependence
    1.2.1 Example
    1.2.2 Example
  1.3 Bases and Dimension of a Vector Space
    1.3.1 Example
  1.4 Inner Products and Orthogonal Vectors
    1.4.1 Vector norms
  1.5 Gram-Schmidt Orthogonalization
  1.6 The Eigenvalue Problem
    1.6.1 Singular Value Decomposition - Principal Component Analysis
  1.7 Properties of Matrices

2 Vibration of Discrete Systems
  2.1 Lagrange's Equations
  2.2 Small Motions about Equilibrium Points
  2.3 Example
  2.4 Energy Considerations
  2.5 Lyapunov Stability
    2.5.1 Example
    2.5.2 Lyapunov Stability Criteria
    2.5.3 Conservative Systems
    2.5.4 Systems with Damping
    2.5.5 Gyroscopic Systems
    2.5.6 Damped Gyroscopic Systems
    2.5.7 Circulatory Systems
    2.5.8 General Asymmetric Systems
  2.6 Self-Adjoint (Symmetric) Systems
  2.7 Maximum-Minimum Characteristics of Eigenvalues
  2.8 The Inclusion Principle
    2.8.1 Example: The inclusion principle
  2.9 Perturbation of the Symmetric Eigenvalue Problem
    2.9.1 Example: Perturbation

3 Vibration of Continuous Systems
  3.1 Strings and Cables
    3.1.1 Finding Mode Shapes and Natural Frequencies of a String
    3.1.2 Free Response of a String
    3.1.3 Example: Forced response of a string
  3.2 Bending Vibration of a Beam
    3.2.1 Example: Mode shapes of a cantilever beam
    3.2.2 Example: Forced response of a cantilever beam
  3.3 Nondimensionalizing the E.O.M.

4 Energy Methods
  4.1 Virtual Work (Shames, Solid Mechanics; p. 62 (discrete) and p. 369, eqn. 7.20 of our book)
    4.1.1 Review from Strength of Materials
    4.1.2 Example
  4.2 Derivation of Hamilton's Principle from Virtual Work
    4.2.1 Example
    4.2.2 Example 2: String with tension T
    4.2.3 String Boundary Condition Example
  4.3 Lagrange's Equation for a Continuous System
    4.3.1 Example: Beam bending on a spinning shaft

5 The Eigenvalue Problem
  5.1 Self-adjoint systems
    5.1.1 Example: Self-adjointness of the beam stiffness operator
  5.2 Proof that the eigenfunctions are orthogonal for self-adjoint systems
  5.3 Non-self-adjoint systems
  5.4 Repeated eigenvalues
  5.5 Vibration of Rods, Shafts, and Strings
  5.6 Bending Vibration of a Helicopter Blade
  5.7 Variational Characterization of the Eigenvalues
  5.8 Integral Formulation of the Eigenvalue Problem
    5.8.1 Example: Cantilever beam, EI = const.
    5.8.2 Example: String solution using Green's functions

6 Discretization of continuous systems
  6.1 The Rayleigh-Ritz method
    6.1.1 Example: Cantilever Beam (Shames, p. 340)
  6.2 The Assumed-Modes Method
  6.3 Weighted Residual Methods
    6.3.1 Galerkin's Method (Ritz's Second Method)
    6.3.2 Example: Clamped-clamped beam (Dimarogonas)
    6.3.3 The collocation method
    6.3.4 Collocation Method Example
  6.4 System Response by Approximate Methods: Galerkin's Method - the foundation of Finite Elements
    6.4.1 Damped (and undamped) Non-gyroscopic Systems

7 The Computational Eigenvalue Problem
  7.1 Householder's method
  7.2 The QR Method
  7.3 Subspace Iteration
  7.4 Shifting
1 Concepts of Linear Algebra
1.1 Linear Vector Spaces
Definition 1 (Field) A set of scalars possessing certain algebraic properties (denoted F).

Consider the field of real numbers. To be a field, the set must satisfy:

1. Commutativity: α + β = β + α and αβ = βα

2. Associativity:(α + β) + γ = α + (β + γ) and (αβ)γ = α(βγ)

3. Distributivity: α(β + γ) = αβ + αγ

4. Identity: α + 0 = α, α · 1 = α

5. Inverse: α + (−α) = 0, αα⁻¹ = 1 (for α ≠ 0)

∴ The set of real numbers is a field.

Definition 2 (Linear Vector Space L) If vector addition and scalar multiplication are defined, then the set L, together with the field F, forms a linear vector space over F. The vectors satisfy:

1. Commutativity: (x + y = y + x)

2. Associativity: (x + y) + z = x + (y + z)

3. Identity: There exists a vector 0 such that x + 0 = x

4. Inverse: There exists for each x a vector (−x) so that x + (−x) = 0

For any x, there also exists the vector αx in L.


A vector space L whose vectors consist of n elements of F is denoted Lⁿ (vectors of length n).
Let S be a subset of the vectors in L. Then S is a subspace if:

1. If x and y are in S, then x + y is in S

2. If x is in S and α is in F, then αx is in S
1.2 Linear Dependence
A set of vectors

xᵢ, i = 1, 2, 3, …, n

in a linear space Cⁿ (a complex vector space) are linearly independent iff

Σᵢ αᵢxᵢ = 0

cannot be satisfied without all αᵢ = 0, i = 1, …, n.

1.2.1 Example

x₁ = [ 1      x₂ = [ 0
       2 ],          2 ]

α₁x₁ + α₂x₂ = 0
α₁(1) + α₂(0) = 0  ⟹  α₁ = 0
α₁(2) + α₂(2) = 0  ⟹  α₂ = 0

∴ x₁ and x₂ are independent
1.2.2 Example

x₁ = [ 1      x₂ = [ −1      x₃ = [ 1
       2              3              7
       2 ],           1 ],           5 ]

α₁ = 2, α₂ = 1, α₃ = −1

satisfies

α₁x₁ + α₂x₂ + α₃x₃ = 0

Thus x₁, x₂, and x₃ are not independent.
The subspace S of L spanned by the vectors xᵢ is defined by

Σᵢ₌₁ⁿ αᵢxᵢ

for all values of αᵢ.

1.3 Bases and Dimension of a Vector Space


• A vector space L (set of vectors) over F (field of scalars) is finite di-
mensional if there exists a finite set of vectors xi such that xi spans
L.

• The vectors xi are called a generating system.

• If they are independent, then they are basis vectors for the space L.

• The space is of dimension n and is denoted Ln .

The standard basis set is

e₁ = [1, 0, 0, …, 0]ᵀ,  e₂ = [0, 1, 0, …, 0]ᵀ,  …,  eₙ = [0, 0, 0, …, 1]ᵀ

Any vector x may be written

x = x1 ê1 + x2 ê2 + x3 ê3 + . . . xn ên

1.3.1 Example

The vectors of examples 1.2.1 and 1.2.2 are each generating systems for spaces of dimension two. However, they do not span the same space.
The vectors of example 1.2.2 span a 2-D space that can be visualized as the plane ⊥ to

x₁ × x₂ = [ −4
            −3
             5 ]

The dimension of the space IS NOT the length of the vectors!

1.4 Inner Products and Orthogonal Vectors


The complex inner product of x and y is defined by

(x, y) = x1 ȳ1 + x2 ȳ2 + . . . + xn ȳn

where ȳi is the complex conjugate of yi . The inner product space Ln defined
over the field of complex numbers is called unitary space.
When x and y are real

(x, y) = x1 y1 + x2 y2 + . . . + xn yn

is the real inner product.


Note:

1. (x, x) ≥ 0 for all x in Ln

2. (x, x) = 0 only for x = 0

3. (x, y) is the complex conjugate of (y, x)

4. (λx, y) = λ(x, y)
(x, λy) = λ̄(x, y) for all λ in F

5. Distributive law
(x, y + z) = (x, y) + (x, z) in Ln

1.4.1 Vector norms

A measure of the length of a vector is the norm, ‖x‖, which satisfies:

1. ‖x‖ ≥ 0, and ‖x‖ = 0 only if x = 0

2. ‖λx‖ = |λ| ‖x‖ for any λ in F

3. ‖x + y‖ ≤ ‖x‖ + ‖y‖

Definition 3 (Quadratic norm) ‖x‖ = (x, x)^(1/2), the length of x

Definition 4 (Euclidean norm (x real)) ‖x‖ = (Σᵢ₌₁ⁿ xᵢ²)^(1/2)

Definition 5 (Unit Vector) A unit vector is a vector whose quadratic norm is 1.

A vector can be normalized by

x̂ = x/‖x‖

Two vectors are orthogonal if

(x, y) = 0

If all pairs of vectors in a set are orthogonal, the set is an orthogonal set. If they also have unit length, they are an orthonormal set.
Any set of mutually orthonormal vectors is linearly independent. The converse is not true.

1.5 Gram-Schmidt Orthogonalization

Gram-Schmidt orthogonalization takes a set of independent vectors and renders them orthogonal (orthonormal if we choose to normalize).
The independent vectors are given by x₁, x₂, x₃, …
The desired orthogonal vectors are y₁, y₂, y₃, …
The desired orthonormal set is ŷ₁, ŷ₂, ŷ₃, …

1. ŷ₁ = x̂₁ = x₁/‖x₁‖

2. Note that we want y₂ ⊥ ŷ₁ (we can normalize later), i.e. (y₂, ŷ₁) = 0. A vector y₂ that satisfies this is

   y₂ = x₂ − (x₂, ŷ₁)ŷ₁

   since

   (y₂, ŷ₁) = (x₂ − (x₂, ŷ₁)ŷ₁, ŷ₁) = (x₂, ŷ₁) − (x₂, ŷ₁)(ŷ₁, ŷ₁) = 0 (1)

   where (ŷ₁, ŷ₁) = 1.

3. We now want (y₃, ŷ₁) = 0 and (y₃, ŷ₂) = 0;

   y₃ = x₃ − (x₃, ŷ₁)ŷ₁ − (x₃, ŷ₂)ŷ₂

   is a solution, since

   (y₃, ŷ₁) = (x₃, ŷ₁) − (x₃, ŷ₁)(ŷ₁, ŷ₁) − (x₃, ŷ₂)(ŷ₂, ŷ₁) = 0 (2)

   with (ŷ₁, ŷ₁) = 1 and (ŷ₂, ŷ₁) = 0, and

   (y₃, ŷ₂) = (x₃, ŷ₂) − (x₃, ŷ₁)(ŷ₁, ŷ₂) − (x₃, ŷ₂)(ŷ₂, ŷ₂) = 0 (3)

The modified Gram-Schmidt process described next in the book yields better numerical results: it subtracts each projection from the working vector one at a time, normalizing as it goes, rather than projecting the original vector all at once, so roundoff errors propagate less.
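As a quick numerical illustration (our own sketch, not from the original notes), the following Python code implements the classical procedure above with NumPy; the function name gram_schmidt and the reuse of the vectors from example 1.2.1 are our choices.

    import numpy as np

    def gram_schmidt(X):
        # Classical Gram-Schmidt: the columns of X are assumed independent.
        Y = np.zeros_like(X, dtype=float)
        for k in range(X.shape[1]):
            y = X[:, k].astype(float).copy()
            for j in range(k):
                # subtract the projection of x_k onto each earlier unit vector y_j
                y -= (X[:, k] @ Y[:, j]) * Y[:, j]
            Y[:, k] = y / np.linalg.norm(y)   # normalize to obtain y-hat
        return Y

    X = np.array([[1.0, 0.0],
                  [2.0, 2.0]])                # the vectors of example 1.2.1
    Q = gram_schmidt(X)
    print(np.round(Q.T @ Q, 12))              # identity matrix => orthonormal set

The modified variant would use the partially orthogonalized running vector y (rather than the original column) when forming each projection; for two vectors the results coincide.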

1.6 The Eigenvalue Problem


The eigenvalue problem arises in a very wide array of mathematical problems (it is also used for calculating principal stresses and strains, as well as principal axes and moments of inertia of rigid bodies). In vibration testing, it results from modal analysis and in calculation of the Hv frequency response function.
The general eigenvalue problem is one of finding λ and x such that

Ax = λBx (4)

where A and B are square n × n matrices. There are n linearly independent solutions x, called the eigenvectors. Each eigenvector has a corresponding eigenvalue λ. The eigenvalues λ are often distinct, but need not be. The more general form of the eigenvalue problem is
AX = BXΛ (5)
where X = [x1 , x2 , . . . , xn ], and Λ is a diagonal matrix of the eigenvalues
corresponding to the eigenvectors assembled to form X.
Under special circumstances such as equation (137), the matrix B is the
identity matrix and equation (5) becomes
AX = XΛ (6)
which is more familiar to most. Eigen-solvers are more often capable of
handling the eigenvalue problem in the form of equation (6) than the form of
equation (5). Many eigensolvers that can solve the eigenvalue problem form
of equation (5) do so by transformation to the form of equation (6). This
case can be obtained by pre-multiplying (5) by B −1 . For large matrices,
Gauss elimination should be used to obtain B −1 A. Better yet, Cholesky
decomposition [1] should be applied to B such that
B = BcT Bc (7)
where Bc is an upper triangular matrix (all values below the diagonal are zero). The substitution
Y = Bc X (8)

can be made in equation (5), and then pre-multiplying by Bc⁻ᵀ gives

Bc⁻ᵀ A Bc⁻¹ Y = Y Λ (9)

which is an equivalent form to equation (6). Once the solution is obtained, the eigenvectors X can be recovered from X = Bc⁻¹Y.
This methodology has significant advantages over simply inverting B. One is that it is more numerically robust, reducing the propagation of numerical errors. A second is that when A and B are Hermitian (a Hermitian matrix, A, is one whose conjugate transpose, denoted Aᴴ, equals itself), the eigenvectors are orthogonal and the eigenvalues are real. These two pieces of information can be used by the algorithm programmer to improve computational efficiency, and by the user to verify the quality of the results.
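A sketch of this procedure in Python/SciPy follows (our own illustration; the small test matrices are arbitrary). It forms Bc⁻ᵀ A Bc⁻¹ with triangular solves rather than explicit inverses, as suggested above, and checks the result against SciPy's generalized symmetric solver.

    import numpy as np
    from scipy.linalg import cholesky, eigh, solve_triangular

    A = np.array([[ 2.0, -1.0],
                  [-1.0,  2.0]])
    B = np.array([[ 2.0,  0.0],
                  [ 0.0,  1.0]])

    Bc = cholesky(B)                              # upper triangular, B = Bc^T Bc
    T = solve_triangular(Bc, A, trans='T')        # T = Bc^{-T} A
    C = solve_triangular(Bc, T.T, trans='T').T    # C = Bc^{-T} A Bc^{-1}, eq (9)

    lam, Y = np.linalg.eigh(C)                    # standard symmetric problem
    X = solve_triangular(Bc, Y)                   # recover X = Bc^{-1} Y

    lam_ref, X_ref = eigh(A, B)                   # direct generalized solve
    print(np.allclose(lam, lam_ref))              # True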
Equation (6) is often written as a matrix decomposition form of
X −1 AX = Λ (10)
The rows of X⁻¹ are referred to as the left eigenvectors because transposing equation (10) results in the alternate eigenvalue problem

Xᵀ Aᵀ X⁻ᵀ = Λ (11)
For the sake of substitution and solution of linear algebra problems, a matrix is often decomposed using equation (10) such that

A = XΛX⁻¹ (12)
Further, if A is Hermitian, X⁻¹ = Xᴴ (which reduces to Xᵀ when A is real and symmetric), and then

A = XΛXᴴ (13)
The eigenvalue problem can thus be used to identify whether or not a matrix is singular (non-invertible). An invertible matrix cannot have any eigenvalues equal to zero because

A⁻¹ = X Λ⁻¹ X⁻¹ (14)

and, since Λ is diagonal, its inverse is

Λ⁻¹ = diag(λ₁⁻¹, λ₂⁻¹, …, λₙ⁻¹) (15)

so any zero eigenvalue will cause a singularity, precluding an inverse.

1.6.1 Singular Value Decomposition - Principal Component Analysis

Singular value decomposition, or SVD, is an extension of the eigenvalue decomposition to non-square matrices. It is often used to identify, consolidate, and rank contributions of vectors to a large non-square matrix. The SVD of an m × n matrix A, where m ≥ n, is defined by

A = U Σ Vᵀ,  with A: m × n, U: m × n, Σ: n × n, Vᵀ: n × n (16)

where

Uᵀ U = I (17)
V Vᵀ = I (18)

and

Σ = [ σ₁  0   0  ⋯  0
      0   σ₂  0  ⋯  0
      0   0   σ₃ ⋯  0
      ⋮   ⋮   ⋮  ⋱  ⋮
      0   0   0  ⋯  σₙ ] (19)

Some algorithms will alternatively return results satisfying

A = U Σ Vᵀ,  with U: m × m, Σ: m × n, Vᵀ: n × n (20)

by appending zero rows to the matrix Σ and appending additional mutually orthogonal columns to the matrix U. Since the appended columns of U correspond to zero-valued rows of Σ, they have no impact on the matrix A when the product is computed.
Just as in the eigenvalue problem, practical solution methodologies are extensive and sophisticated and will not be addressed here. The interested reader is referred to [2].
For illustration, consider obtaining the SVD of the matrix A defined by

A = [ 1  1
      1  1
      1  1
      1  1.1 ] (21)
The problem of finding the singular values can quickly be transformed into one of finding eigenvalues. Pre-multiplying (16) by its own transpose yields

Aᵀ A = V Σ Uᵀ U Σ Vᵀ (22)

Applying equation (17), this simplifies to

Aᵀ A = V Σ² Vᵀ (23)

Comparing this to equation (12), one should recognize that the diagonal entries of Σ² are the eigenvalues of the matrix Aᵀ A. Further, the eigenvalue problem is a symmetric one because Aᵀ A is symmetric, which is easily proven by taking its transpose (likewise for the right side of equation (23)). The solution of the eigenvalue problem yields

Σ = [ 2.9   0.00
      0.00  0.06 ] (24)

and

Vᵀ = [ 0.70   0.72
       0.72  −0.70 ] (25)

The matrix U can then be found using (16) to be

U = A V Σ⁻¹ = [ 0.49   0.30
                0.49   0.30
                0.49   0.30
                0.52  −0.85 ] (26)

where we have taken advantage of the fact that Vᵀ = V⁻¹.


The numerical results of this example illustrate the application and behavior of the SVD. Observing the original matrix A, one's first impression is that it is almost entirely constructed of values of 1. In fact, only one value deviates, and only slightly, from 1. This is illustrated in the resulting matrix U. The matrix U contains the principal components, or vectors, of the matrix A. The first column of U is a vector with nearly the same direction as a vector of ones, but normalized. Notice then that the second column primarily shows a deviation in the fourth value. This is due to the small contribution of adding 0.1 to A₄,₂. Importantly, Σ illustrates the degree of contribution of each of these vectors to the matrix and is almost always ordered from the highest to lowest value. Note that the first singular value is much greater than the second, indicating that the nearly constant vector is a much greater contributor to A. As can be observed in A, the second column of U has a relatively small contribution to A, and the second singular value highlights this. Further, σᵢ ≥ 0. The matrix V can then be recognized as an organizer: it determines how much of each σ-weighted column of U is needed to produce the corresponding column of A.
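The example is easy to reproduce numerically; the short Python check below (ours, not part of the original notes) uses NumPy's SVD. Note that the signs of the columns of U and V are arbitrary, so a library may return the negatives of the vectors printed in equations (25) and (26).

    import numpy as np

    A = np.array([[1.0, 1.0],
                  [1.0, 1.0],
                  [1.0, 1.0],
                  [1.0, 1.1]])

    U, s, Vt = np.linalg.svd(A, full_matrices=False)   # the m x n form of eq (16)
    print(np.round(s, 2))          # [2.9  0.06], the singular values of eq (24)
    print(np.round(U, 2))          # columns match eq (26) up to sign
    print(np.allclose((U * s) @ Vt, A))                # exact reconstruction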
In practice, for very large matrices with redundant data, the higher-index values of σ tend towards 0, and for all practical purposes can be treated as zero. When this happens, the SVD is written as

A = U [ Σp   [0]
        [0]  [0] ] Vᵀ (27)

where the near-zero values of Σ have been set to zero, leaving only p non-zero values. As a result, columns p + 1 through n of U and rows p + 1 through n of Vᵀ can be discarded along with all of Σ outside of Σp. The resulting SVD then appears as

A = U Σp Vᵀ,  with U: m × p, Σp: p × p, Vᵀ: p × n (28)

where m ≥ n ≥ p.

1.7 Properties of Matrices

A matrix A for which Aᵀ = A is symmetric. A matrix A for which Aᵀ = −A is skew-symmetric. Any arbitrary matrix can be written as the sum of a symmetric and a skew-symmetric matrix.
Proof: Suppose A = B + C, where B is symmetric and C is skew-symmetric. Can we solve for B and C in the general case?

Aᵀ = Bᵀ + Cᵀ = B − C

A + Aᵀ = B + C + B − C = 2B  ∴ B = (A + Aᵀ)/2
A − Aᵀ = B + C − B + C = 2C  ∴ C = (A − Aᵀ)/2

Yes.
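A two-line numerical confirmation of the proof (our own sketch, with an arbitrary random matrix):

    import numpy as np

    A = np.random.default_rng(0).normal(size=(3, 3))   # arbitrary square matrix
    B = (A + A.T) / 2                                  # symmetric part
    C = (A - A.T) / 2                                  # skew-symmetric part
    print(np.allclose(A, B + C), np.allclose(B, B.T), np.allclose(C, -C.T))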
Define AH = ĀT to be the Hermitian adjoint of A. This is what is returned
by Matlab when you take a transpose.
If A is such that
AH = A,
then A is said to be Hermitian.
The real part is symmetric, the imaginary part is skew symmetric.
Because A is equal to its adjoint, Hermitian matrices are said to be self-
adjoint.

2 Vibration of Discrete Systems


2.1 Lagrange's Equations

d/dt(∂L/∂q̇ᵢ) − ∂L/∂qᵢ + ∂F/∂q̇ᵢ = Qᵢ

L = T − V

where T is the kinetic energy and V is the potential energy.

F = (1/2) Σᵢ₌₁ⁿ Σⱼ₌₁ⁿ cᵢⱼ q̇ᵢⱼ²

is Rayleigh's dissipation function, where cᵢⱼ = cⱼᵢ are the damping coefficients and q̇ᵢⱼ = q̇ᵢ − q̇ⱼ. We can:

1. Linearize about equilibrium, or

2. Perform a coordinate transformation (shift coordinates) to place the equilibrium at coordinates zero.

The latter is what is commonly done in linear vibrations when motion is in the direction of gravity.

2.2 Small Motions about Equilibrium Points


A dynamic system moves in what is known as state space. The trajectory
through any point in state space is unique. The state space is defined by the
generalized coordinates and their first time derivatives.
A constant solution qi = qio = const, q̇i = q̇io = 0 defines an equilibrium
point. All accelerations and higher time derivatives are zero as well.
From Lagrange's equation, equilibrium points are found from

∂U/∂qᵢ = 0

where U = V − T₀ is the dynamic potential: V is the potential energy, and T₀ is the kinetic energy that does not depend on the velocity of any generalized coordinate. This is derived by setting q̇ᵢ = 0 and q̈ᵢ = 0.
(An example of T₀ is our kinetic energy due to the spin of the Earth: since the rotational speed of the Earth is presumed not to be affected by our individual motion, it is not a generalized coordinate, and thus its contribution to the kinetic energy belongs in T₀.)
2.3 Example

Derive the equations of motion, linearize, and derive the matrices for a mass on a spinning hub, with coordinates x and y attached to the hub (the original shows a figure).

T = (1/2) m [(ẋ − yΩ)² + (ẏ + xΩ)²]

F = (1/2) c ẋ² is the Rayleigh dissipation function.
From sophomore dynamics, for conservative forces

Fᵢ = −∂V/∂qᵢ

so

V = −∫₀ˣ F_s1(ξ) dξ − ∫₀ʸ F_s2(ξ) dξ
  = (1/2)(k₁x² + (1/2)ε₁x⁴) + (1/2)(k₂y² + (1/2)ε₂y⁴)

Substituting into Lagrange's equations:
For x:

mẍ − 2Ωmẏ − mΩ²x + k₁x + ε₁x³ + cẋ = 0

For y:

mÿ + 2Ωmẋ − mΩ²y + k₂y + ε₂y³ = 0

The (linearized) equations of motion in matrix form are

M ẍ + (C + G) ẋ + (K + H) x = 0
H and G are skew-symmetric matrices.

M = [ m  0
      0  m ]

C = [ c  0
      0  0 ]

G = [ 0     −2Ωm
      2Ωm    0   ]

K = [ k₁ − mΩ²      0
      0         k₂ − mΩ² ]
Observing T, we can expand it to

T = (1/2)m(ẋ² + ẏ²) + mΩ(xẏ − yẋ) + (1/2)mΩ²(x² + y²)
  = T₂ + T₁ + T₀

T₂ = (1/2) [ẋ ẏ] [ m 0; 0 m ] [ẋ; ẏ]

is quadratic in the generalized velocities, due to the time rate of change of the coordinate system.

T₁ = [x y] [ 0 mΩ; −mΩ 0 ] [ẋ; ẏ]

is linear in the generalized velocities. T₁ produces Coriolis-type forces and is the gyroscopic term.

T₀ = (1/2) [x y] [ mΩ² 0; 0 mΩ² ] [x; y]

contains no generalized velocities. T₀ behaves like a potential energy and causes centrifugal forces.
The potential energy is

V = (1/2) [x y] [ k₁ 0; 0 k₂ ] [x; y]

(linearized: the curvature and higher derivatives of the force are set to zero), and

F = (1/2) [ẋ ẏ] [ c 0; 0 0 ] [ẋ; ẏ]
Definitions:

T₂ = (1/2) q̇ᵀ M q̇,   T₁ = qᵀ G q̇
F = (1/2) q̇ᵀ C q̇,   U = (1/2) qᵀ K q

More definitions:

1. A matrix A is positive definite iff f = xᵀAx > 0 for all non-zero vectors x.

2. A matrix A is positive semi-definite iff f = xᵀAx ≥ 0 for all non-zero vectors x.

In the previous example:

1. M is positive definite

2. C is positive semi-definite

3. K is positive definite only for small Ω

2.4 Energy Considerations

M q̈ + (C + G)q̇ + (K + H)q = 0 (29)

Consider the skew-symmetric matrix G pre-multiplied by q̇ᵀ and post-multiplied by q̇:

q̇ᵀGq̇ = (q̇ᵀGq̇)ᵀ   (a scalar equals its own transpose)
      = q̇ᵀGᵀq̇    (note: (AB)ᵀ = BᵀAᵀ)

By definition, since G is skew-symmetric, Gᵀ = −G:

q̇ᵀGq̇ = q̇ᵀGᵀq̇ = q̇ᵀ(−G)q̇ = −(q̇ᵀGq̇)

This is only true if q̇ᵀGq̇ = 0.
Pre-multiplying the equation of motion (29) by q̇ᵀ (the gyroscopic term drops out):

q̇ᵀMq̈ + q̇ᵀCq̇ + q̇ᵀKq + q̇ᵀHq = 0

Recognizing the first and third terms as total time derivatives and rearranging gives

(1/2) d/dt (q̇ᵀMq̇ + qᵀKq) = −q̇ᵀCq̇ − q̇ᵀHq

where the left side is the rate of change of the Hamiltonian, H = T₂ + U, and the right side contains the damping forces (−q̇ᵀCq̇) and the circulatory forces (−q̇ᵀHq). Thus

dH/dt = −2(F′ + F),  where F′ = (1/2)q̇ᵀHq is the circulatory dissipation function.

If there are no viscous damping or circulatory forces,

H = const

and the system is conservative. When T₁ = T₀ = 0,

T = T₂ = (1/2)q̇ᵀMq̇
U = V
H = T + V = E = const

This is known as the principle of conservation of energy.

2.5 Lyapunov Stability


Let x(0) represent the vector of initial conditions (states at time zero) of a given system. The system is said to have a stable equilibrium if for any arbitrary positive number ε there exists a positive number δ(ε) such that whenever

‖x(0)‖ < δ, then ‖x(t)‖ < ε.

2.5.1 Example

mẍ + kx = 0

‖x(t)‖ = (xᵀx)^(1/2) = √(x(t)² + ẋ(t)²)

Consider initial conditions x(0) = 0, ẋ(0) = √(k/m) = ω. The solution is x(t) = sin(ωt).
The system is Lyapunov stable because

‖x(0)‖ = (x²(0) + ẋ²(0))^(1/2) = ω   (this plays the role of δ)
‖x(t)‖ = (sin²ωt + ω²cos²ωt)^(1/2) < (1 + ω²)^(1/2)   (this plays the role of ε)

We must then choose ε > (1 + ω²)^(1/2).
This worked only because we knew the solution. The Lyapunov direct method (or Lyapunov second method) does not require solution of the equations of motion. The method consists of devising a suitable scalar testing function which can be used in conjunction with its total time derivative to determine the characteristics of equilibrium points.
Definition: A function V(x) is said to be positive definite if it is positive for all values of x ≠ 0.
Definition: A function V(x) is said to be positive semi-definite if it is ≥ 0 for all x ≠ 0.
Definition: A function V(x) is said to be indefinite (or sign-variable) if the sign varies.
Negative semi-definite and negative definite can be defined likewise.

2.5.2 Lyapunov Stability Criteria

1. If V(x) > 0 for all x ≠ 0 and V̇(x) ≤ 0, then the system is stable.

2. If V(x) > 0 for all x ≠ 0 and V̇(x) < 0, then the system is asymptotically stable.

3. If V(x) > 0 for all x ≠ 0 and V̇(x) is indefinite, the stability is not known.

2.5.3 Conservative Systems

M ẍ + Kx = 0

Assume M and K are P.D. (positive definite) matrices:

x₁ᵀMx₁ > 0 for x₁ ≠ 0, and x₂ᵀKx₂ > 0 for x₂ ≠ 0

Let's pick as a Lyapunov function the mechanical energy

V(x) = (1/2)(ẋᵀMẋ + xᵀKx)

Since M and K are P.D., V(x) is P.D.

d/dt V(x) = ẋᵀMẍ + ẋᵀKx = ẋᵀ(Mẍ + Kx) = ẋᵀ0 = 0

Since V̇(x) = 0 and V(x) > 0, the system is stable.

2.5.4 Systems with Damping

M ẍ + Cẋ + Kx = 0

Let V(x) > 0 be the mechanical energy, as before. Then, using the equation of motion,

V̇(x) = ẋᵀMẍ + ẋᵀKx = −ẋᵀCẋ

If C is positive definite, then the system is asymptotically stable.
If C is positive semi-definite, the system is asymptotically stable iff

rank [ C
       CK
       CK²
       ⋮
       CK^(n−1) ] = n

where n is the number of DoF (Inman, '89).
This is equivalent to proving that all of the modal damping ratios are greater than zero (of course, if all of the modal damping ratios are greater than zero, then the system is also asymptotically stable).
Alternatively: the system is asymptotically stable if none of the modes are in the null space of the damping matrix. A numerical check of the rank condition is sketched below.
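The sketch below is our own; the 2-DoF matrices are arbitrary, and M = I is chosen so that C and K can be used directly in the criterion as written.

    import numpy as np

    M = np.eye(2)
    K = np.array([[ 2.0, -1.0],
                  [-1.0,  2.0]])
    C = np.array([[ 1.0,  0.0],
                  [ 0.0,  0.0]])     # positive semi-definite: damps only DoF 1

    n = M.shape[0]
    # stack [C; CK; ...; CK^(n-1)] and test its rank
    R = np.vstack([C @ np.linalg.matrix_power(K, i) for i in range(n)])
    print(np.linalg.matrix_rank(R) == n)   # True => asymptotically stable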

2.5.5 Gyroscopic Systems

M ẍ + Gẋ + Kx = 0

Assume M and K are P.D. and G is skew-symmetric. xᵀGx = 0 for any x, so the previous Lyapunov function still works, and the equilibrium is still stable. If K is P.D., the system is stable.
If K is indefinite, semi-definite, or negative definite, the system may still be stable:

1. If K is negative definite, and 4K − GM⁻¹G is negative definite, the system is unstable.

2. Special cases for n = 2 have been examined (Inman, 1989).

3. If 4K − GM⁻¹G is P.D. and (GM⁻¹K − KM⁻¹G) is positive semi-definite, then the system is stable.

4. If GM⁻¹K = KM⁻¹G, the system is stable iff 4K − GM⁻¹G is P.D.

2.5.6 Damped Gyroscopic Systems

M ẍ + (C + G) ẋ + Kx = 0

where M is P.D. From Inman '89:

1. If K and C are P.D., the system is asymptotically stable.

2. If K is not P.D. and C is P.D., the system is unstable. (Note: damping can destabilize a gyroscopic system with rigid body modes.)

3. If K is P.D. and C is positive semi-definite, the system is:

   (a) Asymptotically stable if none of the eigenvectors of the undamped gyroscopic system are in the null space of C (i.e., are an eigenvector of a zero eigenvalue of C)
   (b) Stable if proportionally damped.

2.5.7 Circulatory Systems

M ẍ + (K + H) x = 0,  H = −Hᵀ

This phenomenon occurs in aeroelasticity (the original shows an example figure). Results for stability are not as well developed.

2.5.8 General Asymmetric Systems

ẍ + A₂ẋ + A₃x = 0
A₂ = M⁻¹(C + G),  A₃ = M⁻¹(K + H)

Any real matrix can be written as the product of two symmetric matrices:

A₂ = T₁T₂
A₃ = S₁S₂

If T₁ = S₁ exists and is P.D., the system is symmetrizable. Such systems are asymptotically stable if the eigenvalues of A₂ and A₃ are > 0 (S₂ and T₂ are P.D.).
See Inman '89 for more details.

2.6 Self-Adjoint (Symmetric) Systems


The majority of structures can be modeled as self-adjoint systems with n
degrees of freedom. These systems are governed by equations of motion of
the form
M ẍ(t) + C ẋ(t) + Kx(t) = f (t) (30)

where M is a positive-definite n × n matrix, C and K are positive-semi-
definite n × n matrices, and all are real and symmetric. The displacement
vector

x(t) = [x₁(t), x₂(t), x₃(t), …, xₙ(t)]ᵀ (31)

represents displacements of the n degrees of freedom (generalized coordinates), which may be in any direction, x, y, z, or any combination thereof, or a rotation about any unit direction vector in three-dimensional space. The force vector

f(t) = [f₁(t), f₂(t), f₃(t), …, fₙ(t)]ᵀ (32)
represents forces acting on each of the n generalized coordinates. Such equa-
tions of motion are typical of non-rotating structures without inclusion of
aerodynamic loading.
Solution of equation (30) is most often performed in modal coordinates due
to physical insight obtained and numerical advantages. In order to transform
into modal coordinates, we first consider the un-forced, or homogeneous, and
undamped, C = 0, equation of motion given by

M ẍ + Kx = 0 (33)

where for the sake of simplicity dependence on time is no longer explicitly


shown. A solution for x(t) is then assumed to be

x = ψeλt (34)

Substituting into equation (33) gives

(M λ2 + K)ψ = 0 (35)

Equation (35) should be recognized as an eigenvalue problem with the eigen-


value λ2 and the eigenvector ψ. Since the vector is n elements long, and

we also have the unknown λ2 to solve for, we have only n equations but
n + 1 unknowns. In addition, the equations are nonlinear in the combined
unknowns λ2 and ψi . A first attempt at solving these equations for ψ would
be to premultiply equation (35) by (M λ2 + K)−1 giving

(M λ2 + K)−1 (M λ2 + K)ψ = (M λ2 + K)−1 0


Iψ = (M λ2 + K)−1 0 (36)
ψ = (M λ2 + K)−1 0

At first observation, the only solution appears to be ψ = 0. However, this


is the so-called trivial solution. In fact observing equation (33), x = 0 is a
viable solution, if useless for understanding dynamic response. An alternative
solution is the case where A = (M λ2 + K)−1 does not exist.
The inverse of a matrix A is defined as
adj(A)
A−1 = (37)
det(A)

where adj(A) is the adjoint [2, 1] or classical adjoint [3] matrix of A (the i, j element of the adjoint is the determinant of the matrix remaining when the jth row and ith column are removed, multiplied by (−1)^(i+j); in practice other methods are used to calculate the inverse when necessary), and det(A) is the determinant of A. Of importance is the denominator. If det(A) = 0, then the inverse of A is undefined, and the solution in the form of equation (36) makes no sense. Thus, since we are expecting solutions other than the trivial solution, finding the value(s) of λ² such that det(Mλ² + K) = 0 is the only feasible approach.
Taking the determinant of M λ2 + K and setting it equal to zero results in
an nth order polynomial in λ2 , the solution of which results in n real and
negative values of λ2 . Solving for λ results in n imaginary conjugate pairs
of values λ = ±jω. For each value of λ2m , a unique (linearly independent
from the rest) ψ m can be found. Even when there are repeated values, i.e.
λ2m = λ2m+1 , linearly independent vectors ψ m and ψ m+1 can be found for each
occurrence of a solution, λ2 . In the current case, the vectors ψ m are real and
linearly independent. The vectors ψ m are the mode shape of the structure
and the corresponding values ωm are the natural frequencies in radians/sec8 .
The methods for obtaining λm and ψ m from a given mass matrix, M , and
stiffness matrix, K, pair vary greatly in practice from the elementary methods
often learned in introductory courses. Most often methods such as subspace
iteration, the QR method, inverse iteration, or other even more sophisticated
algorithms are used[4, 1] and are beyond the scope of this class.
It is convenient to mass normalize the eigenvectors ψ m such that

ψ Tm M ψ m = 1 (38)

After doing so, the eigenvectors are mass orthonormal and stiffness orthog-
onal. Consider the two unique eigenvalue solutions (l 6= m), written in a
slightly different form
Kψ l = −M λ2l ψ l (39a)
Kψ m = −M λ2m ψ m (39b)
Pre-multiplying each by the alternate eigenvector transposed gives

ψ Tm Kψ l = −ψ Tm M λ2l ψ l = −λ2l ψ Tm M ψ l (40a)

ψ Tl Kψ m = −ψ Tl M λ2m ψ m = −λ2m ψ Tl M ψ m (40b)


Subtracting (40a) from the transpose of (40b), and again noting that M and K are symmetric, gives

ψₘᵀKψₗ − ψₘᵀKψₗ = −λₘ²ψₘᵀMψₗ + λₗ²ψₘᵀMψₗ
0 = (λₗ² − λₘ²) ψₘᵀMψₗ (41)

If (λₗ² − λₘ²) ≠ 0, then ψₘᵀMψₗ = 0, and since ψₘᵀMψₘ = 1,

ψₘᵀMψₗ = δₗₘ = { 1, l = m
                 0, l ≠ n } (42)

where δₗₘ is the Kronecker delta function, defined as above. Considering now equation (40a),

ψₘᵀKψₗ = −λₗ²ψₘᵀMψₗ = −λₗ²δₗₘ = { −λₗ², l = m
                                   0,    l ≠ m } (43)
In the case where there are repeated solutions, i.e. λₘ² = λₘ₊₁² = …, substitute the coordinate transformation

x = M^(−1/2) q (44)

into equation (30) and pre-multiply by (M^(−1/2))ᵀ, where M^(1/2) is defined such that

M = (M^(1/2))ᵀ M^(1/2) (45)

This gives

(M^(−1/2))ᵀ M M^(−1/2) q̈ + (M^(−1/2))ᵀ K M^(−1/2) q = (M^(−1/2))ᵀ f
I q̈ + K̃ q = (M^(−1/2))ᵀ f (46)
where K̃ is the mass normalized stiffness matrix. Solving the homogeneous


equation by assuming a solution of the form

q(t) = υeλt (47)

yields the single matrix eigenvalue problem

(Iλ2 + K̃)υ = 0 (48)

Since K̃ is symmetric, its eigenvectors, υ m , are orthonormal[2]. Substituting


(34) and (47) into (44) gives

ψ = M −1/2 υ (49)

Since the vectors υ m are orthonormal, and M −1/2 must be non-singular,


the vectors ψ m must be linearly independent even in the case of repeated
eigenvalues.
Given the multiplicity of solutions for ω² and ψ, the solution for x must then be considered the linear summation

x = Σₘ₌₁ⁿ (aₘ e^(+jωₘt) + āₘ e^(−jωₘt)) ψₘ (50)

Applying the Euler relation, this can be written in the more physically intuitive form

x = Σₘ₌₁ⁿ Rₘ sin(ωₘt + φₘ) ψₘ (51)

where Rm is a modal amplitude. If we further define

rm (t) = Rm sin(ωm t + φm ) (52)

to be our modal coordinates, then equation (51) can be written in matrix


form as
x(t) = Ψr(t) (53)
where

Ψ = [ψ₁ ψ₂ ψ₃ ⋯ ψₙ] = [ ψ₁,₁  ψ₁,₂  ⋯  ψ₁,ₘ  ⋯  ψ₁,ₙ
                         ψ₂,₁  ψ₂,₂  ⋯  ψ₂,ₘ  ⋯  ψ₂,ₙ
                         ⋮     ⋮        ⋮        ⋮
                         ψₗ,₁  ψₗ,₂  ⋯  ψₗ,ₘ  ⋯  ψₗ,ₙ
                         ⋮     ⋮        ⋮        ⋮
                         ψₙ,₁  ψₙ,₂  ⋯  ψₙ,ₘ  ⋯  ψₙ,ₙ ] (54)

Substituting equation (53) into equation (30),

M Ψr̈(t) + KΨr(t) = f (t) (55)

Pre-multiplying (55) by ΨT , and using equations (42) and (43) the equations
are now transformed into individual uncoupled modal equations

I r̈(t) + Ω2 r(t) = f˜(t) (56)

where I is the identity matrix,

Ω² = diag(ω₁², ω₂², ω₃², …, ωₙ²) (57)

and

f̃(t) = Ψᵀ f(t) (58)
is the modal force vector. The resulting decoupled equations can each then
be solved using SDOF methods. Applying equation (53), the solution can
be transformed back into physical coordinates. Also, using equation (53),

the initial conditions can be transformed into modal coordinates for use in
solving equations (56) giving

r(0) = Ψ−1 x(0) (59a)

ṙ(0) = Ψ−1 ẋ(0) (59b)
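The whole modal procedure, equations (38) through (59), reduces to a few lines numerically. The following Python sketch (ours; the 2-DoF matrices are arbitrary) uses SciPy's generalized symmetric eigensolver, which returns mass-normalized eigenvectors, and forms the modal initial conditions. Since Ψ⁻¹ = ΨᵀM for mass-normalized mode shapes, the explicit inverse in equations (59) is avoided.

    import numpy as np
    from scipy.linalg import eigh

    M = np.diag([1.0, 2.0])
    K = np.array([[ 3.0, -1.0],
                  [-1.0,  1.0]])

    lam, Psi = eigh(K, M)            # lam = omega^2; Psi is mass-normalized
    omega = np.sqrt(lam)             # natural frequencies (rad/s)

    print(np.allclose(Psi.T @ M @ Psi, np.eye(2)))      # eq (42)
    print(np.allclose(Psi.T @ K @ Psi, np.diag(lam)))   # eq (43)

    x0, v0 = np.array([1.0, 0.0]), np.zeros(2)
    r0, dr0 = Psi.T @ M @ x0, Psi.T @ M @ v0            # eqs (59a), (59b)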


This procedure can also be followed for damped systems under limited cases.
In the case of viscous damping, where the governing equation of motion is

M ẍ(t) + C ẋ(t) + Kx(t) = f (t), (60)

then if
CM −1 K = KM −1 C (61)
the damping matrix, C, is diagonalized by ΨT CΨ = diag(2ζi ωi )[5]. Consider
substituting
x(t) = M −1/2 q(t) (62)
into equation (60), but with no forcing vector, where

(M −1/2 )T M M −1/2 = I (63)

Pre-multiplying by (M −1/2 )T yields

I q̈(t) + C̃ q̇(t) + K̃q(t) = 0 (64)

Now let
q(t) = Υr(t) (65)
where Υ is the orthonormal matrix of the eigenvectors of K̃, υ, such that

ΥT Υ = I (66)

ΥT K̃Υ = Ω2 (67)
substituting (65) into equation (64) and pre-multiplying by ΥT yields

I r̈(t) + ΥT C̃Υṙ(t) + Ω2 r(t) = 0 (68)

In order for the equations of motion to be decoupled it is necessary that


ΥT C̃Υ = 2diag(ζi Ωi ) be diagonal.

Two matrices are simultaneously diagonalized if and only if they commute,
i.e.
C̃ K̃ = K̃ C̃ (69)
The modal equations then are

I r̈(t) + 2diag(ζi Ωi )ṙ(t) + Ω2 r(t) = f˜(t) (70)

A special case of this is Rayleigh damping, often referred to as proportional damping, where C = αM + βK. In this case the damping ratio is given by

ζᵢ = (1/2)(α/Ωᵢ + βΩᵢ) (71)

Similarly, for a complex stiffness model of the form

M ẍ(t) + (K + K′j)x(t) = f(t), (72)

if

K M⁻¹ K′ = K′ M⁻¹ K (73)

then the imaginary part of the stiffness matrix, K′, is diagonalized by ΨᵀK′Ψ = diag(ηᵢωᵢ²), yielding modal equations of motion

I r̈(t) + (Ω² + diag(ηᵢ)Ω² j) r(t) = f̃(t)
I r̈(t) + Ω²(1 + diag(ηᵢ)j) r(t) = f̃(t) (74)

The modal frequency response functions are then obtained by taking the Fourier transform of the appropriate modal equation of motion, either (56), (70), or (74), and solving for

h̃ᵢ = Rᵢ(jω)/F̃ᵢ(jω) = { 1/(ωᵢ² − ω²),             undamped
                        1/(ωᵢ² + 2ζᵢjωᵢω − ω²),   viscous damping
                        1/(ωᵢ²(1 + ηᵢj) − ω²),    complex stiffness damping } (75)

Consider now the full system modal equations in frequency response form

R(jω) = H̃(jω)F̃ (jω) (76)

where H̃(jω) = diag(h̃ᵢ). Pre-multiplying by Ψ and substituting X(jω) = ΨR(jω) and F̃(jω) = ΨᵀF(jω) into equation (76) gives

ΨR(jω) = ΨH̃(jω)ΨᵀF(jω)
X(jω) = ΨH̃(jω)ΨᵀF(jω) = H(jω)F(jω) (77)
where H(jω) is the matrix of transfer functions between forces and displacements in physical coordinates. Expressing Ψ using equation (54) and considering equation (77),

H(jω) = [ψ₁ ψ₂ ψ₃ ⋯ ψₙ] H̃(jω) [ψ₁ᵀ; ψ₂ᵀ; ψ₃ᵀ; ⋯; ψₙᵀ]
      = [ψ₁h̃₁(jω)  ψ₂h̃₂(jω)  ψ₃h̃₃(jω)  ⋯  ψₙh̃ₙ(jω)] [ψ₁ᵀ; ψ₂ᵀ; ψ₃ᵀ; ⋯; ψₙᵀ]
      = Σᵢ₌₁ⁿ ψᵢψᵢᵀ h̃ᵢ(jω)
      = Σᵢ₌₁ⁿ ᵢA h̃ᵢ(jω) (78)

The variable ᵢA = ψᵢψᵢᵀ represents the outer product of ψᵢ with itself and is called the modal constant or residue. That is,

ᵢA = ψᵢψᵢᵀ = [ ψ₁,ᵢψ₁,ᵢ  ψ₁,ᵢψ₂,ᵢ  ⋯  ψ₁,ᵢψₘ,ᵢ  ⋯  ψ₁,ᵢψₙ,ᵢ
               ψ₂,ᵢψ₁,ᵢ  ψ₂,ᵢψ₂,ᵢ  ⋯  ψ₂,ᵢψₘ,ᵢ  ⋯  ψ₂,ᵢψₙ,ᵢ
               ⋮         ⋮            ⋮            ⋮
               ψₗ,ᵢψ₁,ᵢ  ψₗ,ᵢψ₂,ᵢ  ⋯  ψₗ,ᵢψₘ,ᵢ  ⋯  ψₗ,ᵢψₙ,ᵢ
               ⋮         ⋮            ⋮            ⋮
               ψₙ,ᵢψ₁,ᵢ  ψₙ,ᵢψ₂,ᵢ  ⋯  ψₙ,ᵢψₘ,ᵢ  ⋯  ψₙ,ᵢψₙ,ᵢ ] (79)
Thus an individual frequency response function between an input l and an output m is

H_l,m = Σᵢ₌₁ⁿ ψₗ,ᵢψₘ,ᵢ h̃ᵢ(jω) (80)
An alternative representation of the frequency response function can be obtained directly from equations (72) or (60). Taking the Fourier transform of each equation,

(−ω²M + jωC + K) X(jω) = F(jω),   viscous (81a)
(−ω²M + (K + K′j)) X(jω) = F(jω), hysteretic (81b)

Since the analysis is the same for either form of damping, we show only the viscous case for the sake of clarity. The frequency response function matrix is then given by

X(jω) = (−ω²M + jωC + K)⁻¹ F(jω) = H(jω)F(jω),  viscous (82)

Since

H(jω) = (−ω²M + jωC + K)⁻¹ (83)

using the definition of the inverse of a matrix,

H_l,m(jω) = adj(−ω²M + jωC + K)_l,m / det(−ω²M + jωC + K) (84)
When

adj(−ω²M + jωC + K)_l,m = 0 (85)

the frequency response function will exhibit a zero, or anti-resonance. Recall that the l, m element of the adjoint of a matrix is the determinant of the matrix remaining when the lth row and the mth column are removed, multiplied by (−1)^(l+m), called the l, m cofactor. When l = m, this is equivalent to constraining the equations of motion such that xₗ = 0.
Example 2.1 A system is defined by

M ẍ + Cẋ + Kx = 0 (86)

where

M = [ 1 0 0     C = [ 0 0 0     K = [  2 −1 −1
      0 1 0           0 0 0           −1  3 −1
      0 0 1 ],        0 0 0 ],        −1 −1  4 ] (87)

Find the poles and zeros of the frequency response function, as well as the FRF itself, between the first and third degrees of freedom.
Solution:
The poles (natural frequencies) of the system are given by the solution of

det(−Mω² + K) = 0
−ω⁶ + 9ω⁴ − 23ω² + 13 = 0 (88)

the solution of which is

ω ≈ ±0.8864, ±1.8813, ±2.1622 (89)

The zeros of this specific frequency response function are given by

adj₁,₃(K − ω²M) = 0

det( [ −1 −1        [ 0 0
        3 −1 ] − ω²   1 0 ] ) = 0

4 − ω² = 0 (90)

Solving for ω gives zeros at

ω = ±2 (91)
The mass-normalized mode shapes are

ψ₁ = [ 0.7558      ψ₂ = [  0.6318      ψ₃ = [  0.1721
       0.5207             −0.7392              0.4271
       0.3971 ],          −0.2332 ],          −0.8877 ] (92)

Using equation (80), the frequency response function is

H₁,₃(jω) = Σᵢ₌₁³ ψ₁,ᵢψ₃,ᵢ h̃ᵢ(jω)
         = 0.3001/(0.7857 − ω²) − 0.1473/(3.5392 − ω²) − 0.1528/(4.6751 − ω²) (93)

Alternatively, applying equation (84), the FRF can also be represented as

H₁,₃(jω) = (−ω² + 4)/(−ω⁶ + 9ω⁴ − 23ω² + 13) (94)
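Example 2.1 is easy to verify numerically; the sketch below (our own) computes the poles from the eigenvalues of K (M = I here) and compares the modal sum of equation (93) with the direct inverse of equation (84) at a few test frequencies.

    import numpy as np

    M = np.eye(3)
    K = np.array([[ 2.0, -1.0, -1.0],
                  [-1.0,  3.0, -1.0],
                  [-1.0, -1.0,  4.0]])

    lam, Psi = np.linalg.eigh(K)           # M = I: ordinary symmetric problem
    print(np.round(np.sqrt(lam), 4))       # poles: [0.8864 1.8813 2.1622]

    w = np.array([0.5, 1.0, 1.5, 2.5])     # frequencies away from the poles
    H13_modal = sum(Psi[0, i] * Psi[2, i] / (lam[i] - w**2) for i in range(3))
    H13_direct = np.array([np.linalg.inv(K - wk**2 * M)[0, 2] for wk in w])
    print(np.allclose(H13_modal, H13_direct))   # the two FRF forms agree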

2.7 Maximum-Minimum Characteristics of Eigenvalues

Consider

A = [  2 −1
      −1  2 ] (95)

The eigenvalues are λ = 1, 3. The first eigenvalue is the minimum value of

f(x) = xᵀAx

where x is normalized.

Trial 1: x = [1, 0]ᵀ, f = 2
Trial 2: x = [0, 1]ᵀ, f = 2
Trial 3: x = (1/√2)[1, −1]ᵀ, f = 3
Trial 4: x = (1/√2)[1, 1]ᵀ, f = 1

This could have been written as

df/dx₁ = 0,  ‖x‖ = 1 (96)

f(x₁) = [x₁  √(1 − x₁²)] A [x₁; √(1 − x₁²)]
      = 2x₁² − x₁√(1 − x₁²) − x₁√(1 − x₁²) + 2(1 − x₁²)
      = 2(x₁² − x₁√(1 − x₁²) + 1 − x₁²)
      = 2(1 − x₁√(1 − x₁²)) (97)

Then, setting f′ = 0,

df/dx₁ ∝ −√(1 − x₁²) + x₁²(1 − x₁²)^(−1/2) = 0
−(1 − x₁²) + x₁² = 0
2x₁² = 1
x₁ = ±1/√2 (98)

Note: by trial, (+) is the correct sign.
To find the second eigenvalue, we pick vectors v and constrain x by

xᵀv = 0

For each vector v we minimize f(x). The highest value of all of these minimums is our second eigenvalue.
In this case, because the system has only two dimensions (R²), choosing v is equivalent to choosing x.

Trial 1: v = [0, 1]ᵀ, x = [1, 0]ᵀ, f = 2
Trial 2: v = [1, 0]ᵀ, x = [0, 1]ᵀ, f = 2
Trial 3: v = [1, 1]ᵀ, x = (1/√2)[1, −1]ᵀ, f = 3
Trial 4: v = [1, −1]ᵀ, x = (1/√2)[1, 1]ᵀ, f = 1

f = 3 (Trial 3) is the highest of these minimums, and is the second eigenvalue.
Extrapolating, the (n+1)th eigenvalue is the highest minimum value of xᵀAx subject to n constraints.
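A brute-force numerical illustration of this characterization (our own sketch): sample many unit vectors and take the extremes of f(x) = xᵀAx.

    import numpy as np

    A = np.array([[ 2.0, -1.0],
                  [-1.0,  2.0]])

    X = np.random.default_rng(1).normal(size=(2, 100000))
    X /= np.linalg.norm(X, axis=0)               # random unit vectors
    f = np.einsum('ij,ij->j', X, A @ X)          # x^T A x for every sample
    print(round(f.min(), 3), round(f.max(), 3))  # ~1 and ~3, the two eigenvalues

In R² a single orthogonality constraint pins x completely, so the "highest minimum" reduces to the maximum here, consistent with the trials above.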

2.8 The Inclusion Principle
What is the effect of using fewer degrees of freedom to represent a system?
This is a vital concern when using finite elements and the more general
continuum methods to be discussed later.
Consider a system described by the Hermitian matrix A (n × n). Consider
another system defined by B (n − 1 × n − 1) formed from A by deleting the
last row and column.
The eigenvalues of A are named λ1 , λ2 , λ3 , . . . and the eigenvalues of B are
γ1 , γ2 , γ3 , . . ..
yᴴBy = xᴴAx if xᵢ = yᵢ and xₙ = 0.
xₙ = 0 can be considered a constraint such that xᵀêₙ = 0.
From Rayleigh's principle,

γ₁ = min yᴴBy,  ‖y‖ = 1

This is equivalent to the constrained minimization

γ₁ = λ̃₁(êₙ) = min xᴴAx,  ‖x‖ = 1, xᴴêₙ = 0

From the min-max principle,

λ₁ ≤ γ₁ ≤ λ₂

Successively considering the remaining eigenvalues yields

λ₁ ≤ γ₁ ≤ λ₂ ≤ γ₂ ≤ λ₃ ≤ ⋯ ≤ λₙ

which is called the inclusion principle.

2.8.1 Example: The inclusion principle

The original stiffness matrix is

A = [  2 −1  0
      −1  2 −1
       0 −1  1 ]

If we constrain degree of freedom 1 to zero amplitude, the new stiffness matrix is

B = [  2 −1
      −1  1 ]

The eigenvalues of A are 0.1981, 1.5550, 3.2470. The eigenvalues of B are 0.3820, 2.6180. The eigenvalues are interwoven with one another. This applies for more general constraints as well, for example setting x₁ = x₂. Note that such a constraint also modifies the mass matrix. Verify that the inclusion principle works for this case for homework.
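The interlacing in the simple case above is easy to confirm numerically (our own sketch, reusing the matrices above):

    import numpy as np

    A = np.array([[ 2.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  1.0]])
    B = A[1:, 1:]                       # constrain DoF 1: delete its row/column

    lam = np.linalg.eigvalsh(A)         # [0.1981 1.5550 3.2470]
    gam = np.linalg.eigvalsh(B)         # [0.3820 2.6180]
    print(lam[0] <= gam[0] <= lam[1] <= gam[1] <= lam[2])   # True: interlaced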

2.9 Perturbation of the Symmetric Eigenvalue Problem

We have already developed that the symmetric eigenvalue problem can be reduced to the eigenproblem of a single matrix. Define that system A₀ = M₀^(−1/2) K₀ M₀^(−1/2) such that its eigensolution satisfies

A₀x₀ᵢ = λ₀ᵢx₀ᵢ (99)

The eigenvectors satisfy orthogonality, i.e.

x₀ᵢᵀx₀ⱼ = δᵢⱼ,  x₀ᵢᵀA₀x₀ⱼ = λ₀ᵢδᵢⱼ,  i, j = 1, 2, …, n (100)

Presume that the system has been changed by some small amount, εA₁, so that the new system is defined by

A = A₀ + εA₁ (101)

where A₀ is the unperturbed system matrix, ε is the perturbation parameter, and A₁ is the perturbation matrix. It is presumed that the magnitudes of the significant non-zero quantities in A₁ are of the same order as the corresponding quantities of A₀, and that ε < 1. Agreement is much better for values ε ≪ 1. If only the stiffness matrix is changed, then K = K₀ + εK₁ and A₁ = M₀^(−1/2) K₁ M₀^(−1/2). Alternatively, if both stiffness and mass are perturbed, then M = M₀ + εM₁ as well, and εA₁ = (M₀ + εM₁)^(−1/2) (K₀ + εK₁) (M₀ + εM₁)^(−1/2) − A₀. While this may be rather computationally expensive, keep in mind that the eigensolution is much more difficult.
The solution to the perturbed eigenvalue problem is

Axᵢ = λᵢxᵢ (102)

Here we would like to estimate the values of xᵢ and λᵢ without repeating the solution of the eigenvalue problem. Thus

xᵢᵀxⱼ = δᵢⱼ,  xᵢᵀAxⱼ = λᵢδᵢⱼ,  i, j = 1, 2, …, n (103)

The solution of the perturbed eigenvalue problem can then be presumed to be a perturbation of the unperturbed solution. Thus

λᵢ(ε) = λ₀ᵢ + (ε/1!) dλᵢ/dε + (ε²/2!) d²λᵢ/dε² + ⋯
      = λ₀ᵢ + ελ₁ᵢ + ε²λ₂ᵢ + ⋯,  i = 1, 2, …, n (104a)

xᵢ(ε) = x₀ᵢ + (ε/1!) dxᵢ/dε + (ε²/2!) d²xᵢ/dε² + ⋯
      = x₀ᵢ + εx₁ᵢ + ε²x₂ᵢ + ⋯,  i = 1, 2, …, n (104b)

Substituting (104) and equation (101) into equation (102) gives

A₀x₀ᵢ + εA₀x₁ᵢ + εA₁x₀ᵢ + ε²A₁x₁ᵢ + ε²A₀x₂ᵢ + ε³A₁x₂ᵢ + ⋯
  = λ₀ᵢx₀ᵢ + ελ₀ᵢx₁ᵢ + ελ₁ᵢx₀ᵢ + ε²λ₂ᵢx₀ᵢ + ε²λ₁ᵢx₁ᵢ + ε²λ₀ᵢx₂ᵢ + ⋯ (105)

Acknowledging equation (99), this becomes

εA₀x₁ᵢ + εA₁x₀ᵢ + ε²A₁x₁ᵢ + ε²A₀x₂ᵢ + ε³A₁x₂ᵢ + ⋯
  = ελ₀ᵢx₁ᵢ + ελ₁ᵢx₀ᵢ + ε²λ₀ᵢx₂ᵢ + ε²λ₁ᵢx₁ᵢ + ε²λ₂ᵢx₀ᵢ + ⋯ (106)

Considering only first-order terms in ε yields

A₀x₁ᵢ + A₁x₀ᵢ = λ₀ᵢx₁ᵢ + λ₁ᵢx₀ᵢ (107)

Solution of this equation for x₁ᵢ and λ₁ᵢ is the solution of the first-order perturbation problem.
To do this, we presume that x₁ᵢ can be written as a linear combination of the non-corresponding unperturbed system eigenvectors (there is no benefit to perturbing an eigenvector by a constant times itself because, after normalization, it doesn't change), i.e.

x₁ᵢ = Σⱼ₌₁ⁿ κᵢⱼ x₀ⱼ δ̃ᵢⱼ,  i = 1, 2, …, n (108)
where

δ̃ᵢⱼ = { 1, i ≠ j
        0, i = j } (109)

is the opposite of the Kronecker delta. Substituting equation (108) into equation (107) yields

Σⱼ₌₁ⁿ A₀κᵢⱼx₀ⱼδ̃ᵢⱼ + A₁x₀ᵢ = Σⱼ₌₁ⁿ λ₀ᵢκᵢⱼx₀ⱼδ̃ᵢⱼ + λ₁ᵢx₀ᵢ (110)

Applying equation (99) gives

Σⱼ₌₁ⁿ κᵢⱼλ₀ⱼx₀ⱼδ̃ᵢⱼ + A₁x₀ᵢ = Σⱼ₌₁ⁿ λ₀ᵢκᵢⱼx₀ⱼδ̃ᵢⱼ + λ₁ᵢx₀ᵢ (111)

Pre-multiplying by x₀ₖᵀ, and recognizing that

x₀ₖᵀx₀ⱼ = δⱼₖ and x₀ₖᵀA₀x₀ⱼ = λ₀ⱼδⱼₖ (112)

for a self-adjoint system, gives

κᵢₖ(λ₀ₖ − λ₀ᵢ) + x₀ₖᵀA₁x₀ᵢ = λ₁ᵢδᵢₖ (113)

For k = i,

λ₁ᵢ = x₀ᵢᵀA₁x₀ᵢ (114)

The perturbed eigenvalue is then given by substituting equation (114) into equation (104a). Applying equation (113) for k ≠ i,

κᵢₖ = x₀ₖᵀA₁x₀ᵢ / (λ₀ᵢ − λ₀ₖ) (115)

and the perturbed eigenvector is obtained by substituting equation (115) into equation (108) and then into equation (104b).

2.9.1 Example: Perturbation

Presume

A₀ = [  2 −1  0
       −1  2 −1
        0 −1  1 ] (116)

The eigensolution is given by

x₀₁ = [ −0.32799      x₀₂ = [  0.73698      x₀₃ = [ −0.59101
        −0.59101               0.32799               0.73698
        −0.73698 ],           −0.59101 ],           −0.32799 ] (117)

with eigenvalues

λ₀₁ = 0.19806,  λ₀₂ = 1.55496,  λ₀₃ = 3.24698 (118)

We would like to obtain the eigenvalues and eigenvectors of the following matrix without re-solving the eigenvalue problem:

A = [  2.00000 −1.00000  0.00000
      −1.00000  2.00000 −1.00000
       0.00000 −1.00000  1.10000 ] (119)

Then

A₁ = [ 0 0 0
       0 0 0
       0 0 1 ] (120)

with ε = 0.1.
Applying equation (114),

λ₁₁ = x₀₁ᵀ A₁ x₀₁ = (−0.73698)² = 0.543 (121)

thus, from equation (104a),

λ₁ = 0.19806 + 0.1 × 0.54313 = 0.25238 (122)

as compared to the true value of 0.25077, an error of −0.63843%.
Then, applying equation (115),

κ₁₂ = x₀₂ᵀ A₁ x₀₁ / (λ₀₁ − λ₀₂) = (−0.59101)(−0.73698)/(0.19806 − 1.55496) = −0.321 (123)

and

κ₁₃ = x₀₃ᵀ A₁ x₀₁ / (λ₀₁ − λ₀₃) = (−0.32799)(−0.73698)/(0.19806 − 3.24698) = −0.079 (124)

Thus, applying equation (108),

x₁₁ = [ −0.18971
        −0.16371
         0.21571 ] (125)

and using equation (104b),

x₁ = [ −0.32799          [ −0.18971      [ −0.34696
       −0.59101  + 0.1     −0.16371   =    −0.60738
       −0.73698 ]           0.21571 ]      −0.71540 ] (126)

which has errors of

[  0.02762
  −0.05061
  −0.07694 ] % (127)

Thus a 10% change in one value of the matrix has caused less than a 1% change in the first eigenvalue and corresponding eigenvector. This is consistent with the remainder of the eigenvalues and eigenvectors.
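The first-order estimate is cheap to reproduce and compare against the exact answer (our own sketch, reusing A₀, A₁, and ε from above):

    import numpy as np

    A0 = np.array([[ 2.0, -1.0,  0.0],
                   [-1.0,  2.0, -1.0],
                   [ 0.0, -1.0,  1.0]])
    A1 = np.zeros((3, 3)); A1[2, 2] = 1.0
    eps = 0.1

    lam0, X0 = np.linalg.eigh(A0)
    lam1 = np.array([X0[:, i] @ A1 @ X0[:, i] for i in range(3)])  # eq (114)
    print(np.round(lam0 + eps * lam1, 5))    # first-order estimates, eq (104a)
    print(np.round(np.linalg.eigvalsh(A0 + eps * A1), 5))   # exact values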

3 Vibration of Continuous Systems


3.1 Strings and Cables

Consider an infinitesimal element of the string, and neglect gravity (the original shows a figure). Summing transverse forces,

Σ F_y = ρΔx ∂²w/∂t²

f(x,t)Δx + τ₂(∂w/∂x)|ₓ₂ − τ₁(∂w/∂x)|ₓ₁ = ρΔx ∂²w/∂t²

and axial forces,

Σ Fₓ = 0 = τ₂cos θ₂ − τ₁cos θ₁

For small θ₁ and θ₂, cos θ₁ ≈ 1 ≈ cos θ₂, so τ₁ = τ₂ = τ:

τ[(∂w/∂x)|ₓ₂ − (∂w/∂x)|ₓ₁] + Δx f(x,t) = ρΔx ∂²w/∂t²

From a Taylor series expansion,

(∂w/∂x)|ₓ₂ = (∂w/∂x)|ₓ₁ + Δx (∂/∂x)(∂w/∂x)|ₓ₁ + O(Δx²)

Substituting,

τ(∂²w/∂x²)|ₓ₁ Δx + Δx f(x,t) = Δx ρ ∂²w/∂t²

Dividing by Δx, and since Δx is infinitesimal,

τ ∂²w/∂x² + f(x,t) = ρ ∂²w/∂t²

or

τwₓₓ + f(x,t) = ρw_tt

where c = √(τ/ρ) is the wave speed.

3.1.1 Finding Mode Shapes and Natural Frequencies of a String

Presume fixed boundary conditions. The equation of motion for a string is

c²wₓₓ = w_tt

Let's assume w(x,t) = X(x)T(t). Substituting,

c²X″T = XT̈
X″/X = T̈/(c²T)

A function of x alone equals a function of t alone; the only way this can be satisfied is if each side is equal to a constant:

X″/X = a = −σ² (128)

and also

T̈/(c²T) = −σ² (129)

We call the constant −σ² because later in the solution we realize that it would have been convenient to do so earlier. However, at this point in the solution process, there is no justification for it. Rearranging (128) yields

X″ + σ²X = 0

The solution to this is

X(x) = A cos σx + B sin σx

The boundary conditions are

X(0) = 0 = A,  X(ℓ) = 0 = B sin σℓ
∴ σₙ = πn/ℓ

and the mode shapes are

X(x) = A sin(πnx/ℓ)

Substituting into the temporal part, T̈ + c²σₙ²T = 0, so

ωₙ = √(τ/ρ)(πn/ℓ)

Next, consider a string fixed at the left end but free to slide transversely at the right end (the original shows a figure); the only difference is at the right end. Summing moments in the vertical direction, it's clear that the slope must be zero at the right end. The boundary conditions are

X(0) = 0 = A,  X′(ℓ) = 0 = Bσ cos σℓ
∴ σₙℓ = πn/2, n = 1, 3, 5, …

or

σₙ = π(2n − 1)/(2ℓ), n = 1, 2, 3, 4, …

and the mode shapes are

X(x) = aₙ sin(π(2n − 1)x/(2ℓ))

Substituting into the temporal part, T̈ + c²σₙ²T = 0, so

ωₙ = √(τ/ρ) π(2n − 1)/(2ℓ), n = 1, 2, 3, 4, …

3.1.2 Free Response of a String

Find the response of a string plucked in the middle:

w(x,0) = (2/ℓ)x,        0 < x < ℓ/2
w(x,0) = (2/ℓ)(ℓ − x),  ℓ/2 < x < ℓ

The solution is a sum over the mode shapes, each multiplied by its temporal solution:

w(x,t) = Σₙ₌₁^∞ sin(nπx/ℓ)(cₙ sin ωₙt + dₙ cos ωₙt) (130)

Applying the initial conditions,

w(x,0) = Σₙ₌₁^∞ sin(nπx/ℓ) dₙ (131)

where w(x,0) was defined above, and

0 = ẇ(x,0) = Σₙ₌₁^∞ sin(nπx/ℓ) cₙωₙ (132)

cₙ is clearly zero. Multiplying (131) by sin(mπx/ℓ) and integrating over ℓ:

∫₀^ℓ w(x,0) sin(mπx/ℓ) dx = ∫₀^ℓ (Σₙ₌₁^∞ sin(nπx/ℓ) dₙ) sin(mπx/ℓ) dx (133)

From orthogonality,

∫₀^ℓ sin(nπx/ℓ) sin(mπx/ℓ) dx = { 0,   m ≠ n
                                   ℓ/2, m = n }

Substituting, solving for dₙ, and performing the integration,

dₙ = (8/(π²n²)) sin(nπ/2),  n = 1, 2, 3, …

Thus

w(x,t) = Σₙ₌₁^∞ sin(nπx/ℓ)(8/(π²n²)) sin(nπ/2) cos ωₙt

where ωₙ = cσₙ and σₙ = nπ/ℓ.
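The modal series converges quickly and is simple to evaluate; the Python sketch below (ours; ℓ = c = 1 are arbitrary choices) sums the first 50 terms and checks the initial shape at the midpoint.

    import numpy as np

    ell, c = 1.0, 1.0
    x = np.linspace(0.0, ell, 201)

    def w(x, t, nmax=50):
        # truncated modal series for the centrally plucked string
        total = np.zeros_like(x)
        for n in range(1, nmax + 1):
            dn = 8.0 / (np.pi**2 * n**2) * np.sin(n * np.pi / 2)
            wn = c * n * np.pi / ell             # omega_n = c sigma_n
            total += dn * np.sin(n * np.pi * x / ell) * np.cos(wn * t)
        return total

    print(round(w(x, 0.0)[100], 4))   # midpoint at t = 0 -> 1.0, the pluck height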

3.1.3 Example: Forced response of a string

Solve for the steady-state (particular) response of the following system if the boundary conditions are presumed to be fixed-fixed, where c = √(τ/ρ):

w_tt(x,t) − c²wₓₓ(x,t) = 100 sin(t) δ(x − ℓ/2)

Recall that the integral of a Dirac delta function times another function is equal to that other function evaluated where the argument of the Dirac delta function is zero.
For the homogeneous problem, let w(x,t) = X(x)T(t):

T̈X − c²TX″ = 0

Using separation of variables,

T̈/(c²T) = X″/X = −σ²
∴ X(x) = A cos σx + B sin σx

The boundary conditions are

X(0) = 0 = A,  X(ℓ) = 0 = B sin σℓ
∴ σₙ = πn/ℓ

Substituting into the temporal part, T̈ + c²σₙ²T = 0, so ωₙ = cσₙ.
Assume a form of the solution (we know that undamped systems always have a phase difference of 0° or 180°):

w(x,t) = Σₙ₌₁^∞ aₙ sin t sin(nπx/ℓ)

Substituting into the equation of motion,

Σₙ₌₁^∞ [−aₙ sin t sin(nπx/ℓ) + c²aₙ(nπ/ℓ)² sin t sin(nπx/ℓ)] = 100 sin t δ(x − ℓ/2)

Multiplying by sin(mπx/ℓ), integrating over ℓ, and recalling

∫₀^ℓ sin(nπx/ℓ) sin(mπx/ℓ) dx = { 0,   m ≠ n
                                   ℓ/2, m = n }

gives

−aₙ(ℓ/2) + c²aₙ(nπ/ℓ)²(ℓ/2) = 100 sin(nπ/2)

Solving gives

aₙ = (2/ℓ) · 100 sin(nπ/2) / (c²(nπ/ℓ)² − 1)

where

sin(nπ/2) = { −1, n = 3, 7, 11, …
               0, n = 2, 4, 6, …
               1, n = 1, 5, 9, … }

Then

w(x,t) = Σₙ₌₁^∞ [200 sin(nπ/2) / (ℓ(c²(nπ/ℓ)² − 1))] sin t sin(nπx/ℓ)
3.2 Bending Vibration of a Beam

(The moment-curvature relation below can be derived by assuming plane sections remain plane, linear elasticity, and no Poisson effect.)

M = −EI(x) ∂²w/∂x²

V = ∂M/∂x = (∂/∂x)(−EI(x) ∂²w/∂x²)
  = −E(∂I(x)/∂x)(∂²w/∂x²) − EI(x) ∂³w/∂x³ (134)

Summing transverse forces on an element, with ρ(x) the linear density,

Σ F_z = Δx ρ(x)ẅ
(V + ∂V) − V + p(x,t)Δx = Δx ρ(x)ẅ

Dividing by Δx and taking the limit as Δx → 0,

∂V/∂x + p(x,t) = ρ(x)ẅ

Substituting for V,

(∂²/∂x²)(EI(x) ∂²w/∂x²) + ρ(x) ∂²w/∂t² = p(x,t)

Taking moments about the right end of the element,

−M + (M + ∂M) − VΔx + p(x,t)Δx(Δx/2) = 0

Dividing by Δx,

∂M/Δx − V + p(Δx/2) = 0

and taking the limit as Δx → 0,

∂M/∂x = V

as was stated in equation (134).
Allowable boundary conditions:
Clamped: w is known and w′ is known.
Pinned: w is known and M is known.
Free: V is known and M is known.
Sliding end: w′ is known and V is known.

3.2.1 Example: Mode shapes of a cantilever beam

Find the mode shapes of a cantilever beam. By separation of variables, w(x,t) = X(x)T(t):

T̈X + (EI/ρ)TX⁗ = 0 (135)

T̈/T = −(EI/ρ)(X⁗/X) = −(EI/ρ)β⁴ = −ω² (136)

X⁗ = (ρ/EI)ω²X = β⁴X (137)

or

X⁗ − β⁴X = 0 (138)

There must be 4 independent solutions to a 4th-order homogeneous ODE (see the Wronskian in an introductory differential equations text). Four functions that satisfy this equation are sin, cos, sinh, and cosh. The total solution is then

X(x) = A sin(βx) + B cos(βx) + C sinh(βx) + D cosh(βx) (139)

The boundary conditions for a cantilever beam are

X(0) = 0   (displacement is zero) (140a)
X′(0) = 0  (slope is zero) (140b)
X″(ℓ) = 0  (moment is zero) (140c)
X‴(ℓ) = 0  (shear is zero) (140d)

Applying equation (140a),

B + D = 0 (141)

Applying equation (140b),

Aβ + Cβ = 0 (142)

Applying equation (140c),

−A sin(βℓ)β² − B cos(βℓ)β² + C sinh(βℓ)β² + D cosh(βℓ)β² = 0 (143)

Using equations (141) and (142) and simplifying (by substituting for C and D),

A(sin(βℓ) + sinh(βℓ)) + B(cos(βℓ) + cosh(βℓ)) = 0 (144)

Then applying equation (140d),

−A cos(βℓ)β³ + B sin(βℓ)β³ + C cosh(βℓ)β³ + D sinh(βℓ)β³ = 0 (145)

Using equations (141) and (142) and simplifying,

A(cos(βℓ) + cosh(βℓ)) − B(sin(βℓ) − sinh(βℓ)) = 0 (146)

Combining equations (144) and (146),

[ sin(βℓ) + sinh(βℓ)    cos(βℓ) + cosh(βℓ)   ] [ A ]   [ 0 ]
[ cos(βℓ) + cosh(βℓ)   −(sin(βℓ) − sinh(βℓ)) ] [ B ] = [ 0 ] (147)

Setting the determinant of the matrix equal to zero gives

−sin²(βℓ) + sinh²(βℓ) − cos²(βℓ) − 2cosh(βℓ)cos(βℓ) − cosh²(βℓ) = 0 (148)

Since sin²(βℓ) + cos²(βℓ) = 1 and cosh²(βℓ) − sinh²(βℓ) = 1, this simplifies to

cos(βℓ)cosh(βℓ) = −1 (149)

Having a value for βℓ, we can use equation (146) to obtain the ratio between A and B:

A/B = σₙ = (sin(βℓ) − sinh(βℓ))/(cosh(βℓ) + cos(βℓ)) (150)

Note that this could also be solved for using equation (144):

A/B = σₙ = −(cos(βℓ) + cosh(βℓ))/(sin(βℓ) + sinh(βℓ)) (151)

These are equivalent expressions for values of βℓ that satisfy equation (149). It is left to the reader to prove this.
Thus the mode shape is

Xₙ(x) = Aₙ((cos(βx) − cosh(βx)) + σₙ(sin(βx) − sinh(βx))) (152)
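Equation (149) is transcendental and must be solved numerically; a sketch (ours) using SciPy's bracketing root finder follows. The roots approach (2n − 1)π/2 for large n, which provides the brackets; EI = ρ = L = 1 are arbitrary uniform-beam properties.

    import numpy as np
    from scipy.optimize import brentq

    f = lambda bl: np.cos(bl) * np.cosh(bl) + 1.0   # eq (149) as f(beta*l) = 0

    brackets = [(1.5, 2.5), (4.0, 5.5), (7.0, 8.5)]
    roots = [brentq(f, a, b) for a, b in brackets]
    print(np.round(roots, 4))        # [1.8751 4.6941 7.8548]

    EI, rho, L = 1.0, 1.0, 1.0
    omega = [(bl / L)**2 * np.sqrt(EI / rho) for bl in roots]   # from eq (137)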

3.2.2 Example: Forced response of a cantilever beam

Find the response of a uniform cantilever beam to F₀ sin ωt at its free end. The E.O.M. is

ρw_tt + EIwₓₓₓₓ = F₀ δ(x − ℓ) sin ωt

where δ(x − ℓ) is the Dirac delta function. Assume a solution of the form

w(x,t) = Σₙ₌₁^∞ wₙ(x,t) = Σₙ₌₁^∞ Xₙ(x)Tₙ(t) (153)

The mode shapes for a fixed-free beam are given by equation (152), the σₙ are given by equation (150), and the values βₙℓ are obtained by solving equation (149). Natural frequencies are then given by

Xₙ⁗ = (ρ/EI)ωₙ²Xₙ = βₙ⁴Xₙ (154)

Substituting equation (153) into the equation of motion,

Σₙ₌₁^∞ (T̈ₙ + ωₙ²Tₙ)Xₙ = (F₀/ρ) δ(x − ℓ) sin ωt (155)

The mode shapes can be normalized by

∫₀^ℓ Xₙ² dx = (Xₙ, Xₙ) = 1

∫₀^ℓ Aₙ²(cosh βₙx − cos βₙx − σₙ(sinh βₙx − sin βₙx))² dx = 1
52
An = 4βn (4βn ` + 2σn cos (2βn `) − 2σn cosh (2βn `)
− 4 cosh (βn `) sin (βn `) − 4σn2 cosh (βn `)
+ sin 2βn ` − σn2 sin 2βn ` − 4 cos βn ` sinh βn `
+ 4σn2 cos (βn `) sinh (βn `) + 8σn sin (βn `) sinh (βn `)
+ sinh (2βn `) + σn2 sinh (2βn `))−1
Multiply the EOM by Xm , integrating over x, and using
Z ` (
0 m= 6 n
Xn Xm dx =
0 1 m=n

F0 `
Z
T̈n + ωn2 Tn = Xn (x)δ(x − `)dx sin(ωt)
ρ 0
Fo
= Xn (`) sin(ωt)
ρ
Solving for Tn
Fo Xn (`)
Tn = sin(ωt)
ρA ωn2 − ω 2
The total solution is ∞
X
w (x, t) = Tn (t)Xn (x)
n=1
=============
Recall ωn was given as s
EI
ωn = βn2
ρ
where βn can be found from

cos βn ` cosh βn ` = −1

3.3 Nondimensionalizing E.O.M.


Consider the EOM of a rod (extension or torsion), or a string

c2 wxx (x, t) = wtt (x, t)


r
τ
c=
`

53
Let
x dx
ξ= , then =`
` dξ
∂w ∂w dx ∂w
= = `
∂ξ ∂x dξ ∂x
or
∂w 1 ∂w
=
∂x ` ∂ξ
Also,
∂2w
 
∂ ∂w
=
∂ξ 2 ∂ξ ∂ξ
 
∂ ∂w dx
=
∂x ∂ξ dξ
 
∂ ∂w
=`
∂x ∂ξ
 
∂ ∂w
=` `
∂x ∂x
2
∂ w
= `2 2
∂x
So
∂2w 1 ∂2w
=
∂x2 `2 ∂ξ 2
Substituting into E.O.M.
c2
wξξ (ξ, t) = wtt (ξ, t)
`2
Similarly, let
ct
γ= (dimensionless time)
`
∂2w c2 ∂ 2 w
=
∂t2 `2 ∂γ 2
Substituting into the E.O.M.
wξξ (ξ, γ) = wγγ (ξ, γ)
w
define y = `
yξξ (ξ, γ) = yγγ (ξ, γ)
Which represents the nondimensionalized E.O.M.

54
4 Energy Methods
4.1 Virtual Work (Shames Solid Mechanics) , p. 62
our book (discrete) and p. 369, eqn. 7.20 our
book
Virtual work is the work done on a particle by all the forces acting on the
particle as the particle is given a small hypothetical displacement, a “virtual
displacement,” which is consistent with the constraints present. Applied
forces are constant during the virtual displacement.
The virtual work acting on a body is
ZZZ ZZ
δWvirt = ~
B · δ~udV + T~ · δ~udA
V S
| {z } | {z }
Body forces Traction forces

δ~u is a virtual displacement field that satisfies the boundary conditions.


Consider the virtual work for a virtual displacement field (δux (x)) in the
following:

ZZZ Z Z  Z Z 
δW = Bx δux dV + Tx δux dA − Tx δux dA
V S 2 S 1
ZZZ ZZ
= Bx δux dV + ((Tx δux )2 − (Tx δux )1 ) dA
V S
Note: Z 2
d
(Tx δux )2 − (Tx δux )1 =
(Tx δux ) dx
1 dx
Note that Tx is force/unit area. Thus Tx = σxx (Direct stress in the x
direction). Thus
ZZZ ZZZ
d
δW = Bx δux dV + (σxx δux ) dV
V V dx

55
Carrying out the differentiation and collecting terms
Z Z Z   
dσxx d
δW = Bx + δux + σxx δux dV
V dx dx

56
4.1.1 Review from Strength of Materials
Consider

X
Fx = (σxx + dσxx ) dydz − σxx dydz
+ (σyx + dσyx ) dxdz − σyx dxdz
+ (σzx + dσzx ) dxdy − σzx dxdy
+ Bx dxdydz = 0

Dividing by dxdydz
∂σxx ∂σyx ∂σzx
+ + + Bx = 0
∂x ∂y ∂z
∂σxy ∂σyy ∂σzy
+ + + By = 0
∂x ∂y ∂z
∂σxz ∂σyz ∂σzz
+ + + Bz = 0
∂x ∂y ∂z
(Continue derivation) The first term in the integrand is zero. It is the equa-
tion of equilibrium for an element.
Thus   
ZZZ
σxx δ dux  dV
  
δW =   dx 
V |{z}
x

57
More generally
ZZZ ZZ ZZZ
~ · δ~udV +
B T~ · δ~udA = σij δij dV
V S V
where
3 X
X 3
σij δij = σij δij
i=1 j=1

4.1.2 Example

Members AB and AC have the same modulus of elasticity and cross-sectional


area. Find the forces in AB and AC and deflection of pin.
1st: Apply virtual displacement in y direction

58
δvA
δAC =
L
δvA cos α δvA cos2 α
δAB = L
=
cos α
L
With no body forces, the principle of virtual work is
Z L/ cos α Z L
P δvA = σAB δAB Ad` + σAC δAC Ad`
0 0

substituting for the strains


L/ cos α L
cos2 α
Z Z
A
P δvA = σAB δvA Ad` + σAC δvA d`
0 L 0 L
Since δvA is non-zero, integrating gives

cos2 α L
P = σAB A + σAC A
L cos α
P = σAB cos αA + σAC A (156)
2nd: Apply virtual displacement in x direction.

δAC = 0
δuA sin α δuA
δAB = = sin α cos α
L cos α L
The principle of virtual work
Z L/ cos α Z L
δuA
0δuA = σAB sin α cos αAd` + σAC 0Ad`
0 L 0

59
0 = σAB (sin α cos α)A (157)
σAB = 0
P
σAC =
A
Now let’s find the movement of the pin. Consider actual displacements ūA
and v̄A
v̄A v̄A cos2 α ūA
AC = AB = + sin α cos α
L L L
Using Hooke’s Law
E
v̄A cos2 α + ūA sin α cos α

σAB = EAB =
L
E
σAC = EAC = (v̄A )
L
Substituting the stresses into the virtual work equations (156) and (157):
E E
v̄A cos2 α + ūA sin α cos α cos α + v̄A A

P = (158)
L L
E
v̄A cos2 α + ūA sin α cos α A sin α cos α

0= (159)
L | {z }
must be 0 because: sin α6=0, cos α6=0, A6=0

From (158)
E
P = v̄A A
L
PL
v̄A =
EA
From (159)
PL
cos α + ūA sin α = 0
EA
−P L cos α
ūA =
EA sin α

60
4.2 Derivation of Hamilton’s Principle from Virtual
Work
Consider Newton’s Law for a particle
X
F~ = m~a

Rearrange it as X
F~ + −m~a = 0
The form is now that of a statics problem.
This is called D’Alembert’s principle.
For a body, this force is
d2 ui
ZZZ
−m~a = − ρ 2 dV
V dt
and the virtual work due to this force is
d2 ui
ZZZ
− ρ 2 δui dV
V dt
The Principle of Virtual Work is now
d2 ui
ZZZ ZZ ZZZ ZZZ
Bi δui dV + Ti δui dA− ρ 2 δui dV = σij δij dV (160)
V S V dt V

Define the work of the external forces (the Nonconservative Work )


ZZZ ZZ
Wnc = Bi ui dV + Ti ui dA
V S

This is also the negative of the “Force Potential”12 . The variation of the
nonconservative work is
ZZZ ZZ
δWnc = Bi δui dV + Ti δui dA
V S

Assume the existence of a strain energy density function U such that


∂U
σij =
∂ij
12
Potential energy of externally applied loads.

61
then ZZZ ZZZ
σij δij dV = δ UdV = δV
V V
So, the principle of virtual work is
d2 ui
ZZZ
− ρ 2 δui dV + δWnc − δV = 0 (161)
V dt
To make this apply for a range of times we integrate with respect to time.
Z t2 Z Z Z Z t2 Z t2
d2 ui
− ρ 2 δui dV dt + δWnc dt − δV dt = 0 (162)
t1 V dt t1 t1

Consider the first term. We can swap the order of the integrations, and thus
Z t2 Z Z Z Z Z Z Z t2 2
d2 ui

d ui
− ρ 2 δui dV dt = − ρ 2 δui dt dV
t1 V dt V t1 dt
Integrating in time by parts
Z t2 2 Z t2
d ui dui t2 dui d
2
δui dt = δui |t1 − (δui ) dt
t1 dt dt t1 dt dt
We adopt the rules that the variation δui = 0 at t = t1 , and t = t2 . Then
Z t2 2 Z t2  2
d ui 1 du̇i
δui dt = − δ dt
t1 dt2 t1 2 dt
Substituting into (162)
ρ t2 2
ZZZ Z Z t2 Z t2
δ u̇1 dtdV + δ Wnc dt − δ V dt = 0 (163)
V 2 t1 t1 t1

The first term can be written


 
Z t2 ZZZ Z t2 Z t2

δ 1 2 
ρu̇i dV  dt =
 δT dt = δ T dt (164)
V 2

t1 t1 t1
| {z }
T

substituting into (163)


Z t2
δ (T − V + Wnc ) dt = 0
t1

62
V is the total potential energy. Then

T −V =L (Lagrangian)

Thus Z t2
δ L + Wnc dt = 0
t1

This is Hamilton’s Principle


Note
δV = −δWc
That is, the potential energy of the system is negative of the conservative
work that it has done. e.g. compressing a spring a distance x, the spring
stores V = 12 kx2 , but the work done by the spring is Wc = − 12 kx2 .

63
4.2.1 Example
Hamilton’s Principle (Shames 112-114, 323-329: Energy and Finite Element
Methods in Structural Mechanics)
Z t2
(δL + δWnc ) dt = 0
t1

L = T − V = T + Wc
Example: SDOF system

1
T = mẋ2 , δT = mẋδ ẋ
2
1
Wc = − kx2 , δWc = −kxδx
2
Wnc = F x, δWnc = F δx
Z t2 Z t2
(δT + δWc + δWnc ) dt = (mẋδ ẋ − kxδx + F δx) dt
t1 t1
Z t2 Z t2
= mẋδ ẋdt + (F − kx) δxdt = 0
t1 t1
| {z }
integrate this one by parts

Integrating the first term by parts


Z t2
mẋδ ẋdt
t1

u = mẋ dv = δ ẋdt
du = mẍdt v = δx
t2
Z t2
= mẋδx − mẍδxdt
t1 t1

δx is zero at t1 and t2 (x is known, we are looking for the equation describing


the trajectory between these two times)

64
Substituting back Z t2
(−mẍ + F − kx) δxdt = 0
t1

Since δx is arbitrary, this can be satisfied only if

−mẍ + F − kx = 0

for all time


Thus, the EOM is
mẍ + kx = F (t)

4.2.2 Example 2: String with tension T

The Kinetic energy is Z x2


1 2
T = ρẇ (x, t) dx
x1 2
Conservative work


The shortening is dx − dx2 − dw2
s  2
dw
= dx − dx 1−
dx
 2
1 dw
≈ dx
2 dx
dw 2
Per unit length, Wc = − 12

dx
T dx

65
Nonconservative work
Wnc = p (x, t) w (x, t)
Applying Hamilton’s Principle
Z t2
δT + δWc + δWnc dt = 0
t1

2 ! !
Z t2 Z x2 Z x2 
1 2 1 dw
δ ρẇ (x, t) dx + − T + p(x, t)wdx dt = 0
t1 x1 2 x1 2 dx
   
Z t2 Z x2 Z x2
ρẇδ ẇdx + w0 δw0 + p(x, t) δwdx dt = 0
−T |{z}
   

t1 x1 x1 dw
dx

The order of the integrations is not important so the first term is


Z x2 Z t2
ρẇδ ẇdtdx
x1 t1

Integrating by parts with respect to time yields


Z x2  t2
Z t2 
ρẇδw − ρẅδwdt dx
x1 t1 t1

Recall that we prescribe δw = 0 at t = t1 and t2


The total integral is then
Z t2 Z x2 Z x2 Z x2 
0 0
−ρẅδwdx + −T w δw dx + p(x, t)δwdx dt = 0
t1 x1 x1 x1

Let’s now consider the 2nd term


Z x2
−T w0 δw0 dx
x1

u = −T w0 dv = δw0 dx
du = −T w00 dx v = δw
x2
Z x2
0
= −T w δw − −T w00 δwdx
x1 x1

66
Substituting and collecting terms
Z t2  x2
Z x2 
0 00
−T w δw + (T w − ρẅ + p(x, t)) δwdx dt = 0
t1 x1 x1

The first term yields the boundary conditions:


At x1 :
Either δw = 0 ( w is known) or − T w0 = 0
At x2 :
Either δw = 0 (w is known) or − T w0 = 0
Actually, the true boundary conditions are such that − T w0 is known. How
to incorporate this into Hamilton’s principle follows.
The second term must be zero for any arbitrary δw for all time. Thus
d2 w d2 w
T + p (x, t) = ρ
dx2 dt2
(see 6.158 Inman) The ability to have non fixed ends with non-zero forces
is accomplished by adding nonconservative work terms at each end of the
string to the total work. i.e.
Z x2  2 !
1 dw
W = − T + p(x, t)w dx + p∗1 w1 + p∗2 w2
x1 2 dx
Thus the 1st term of the final form of Hamilton’s equation becomes
x2
−T w0 δw + p∗1 δw1 + p∗2 δw2 = 0
x1

(T w10 + p∗1 ) δw1 + (−T w20 + p∗2 ) δw2 = 0


So T w10 = −p∗1 at x1 is a valid B.C.
and T w20 = p∗2 at x2 is a valid B.C.

4.2.3 String Boundary Condition Example

67
X dw
Fy = 0 = −T − kw at x2
dx
Note: small angle approximation tan θ ≈ sin θ ≈ θ
Using the result from Hamilton’s principle

T w20 = −kw2

4.3 Lagrange’s Equation for a Continuous System

Kinetic Energy: Z `
T (t) = T̂ (x, t) dx
0 | {z }
kinetic energy density

T̂ (x, t) will have the form T̂ (ẇ, ẇ0 )


T̂ depends on x and t because w (x, t)
Virtual Work: Z `
δW (t) = δ Ŵ (x, t)dx
0

δ Ŵ (x, t) = δ Ŵc (x, t) + δ Ŵnc (x, t)

δ Ŵc (x, t) = −δ V̂ (x, t) V̂ : potential energy density


V̂ (x, t) = V̂ (w, w0 , w00 )
δ Ŵnc (x, t) = p(x, t)δw(x, t)

68
Hamilton’s Principle states
Z t2
δT + δW dt = 0 δW = 0 @ t = t1 , t2 (165)
t1

or Z t2 Z `
δ T̂ + δ Ŵ dxdt = 0 δW = 0 @ t = t1 , t2 (166)
t1 0
Z t2 Z `
δ L̂ + δWnc dxdt = 0 δW = 0 @ t = t1 , t2 (167)
t1 0

where δ L̂ = δ T̂ − δ V̂ is the variation of the Lagrangian Density

∂ L̂ ∂ L̂ 0 ∂ L̂ 00 ∂L ∂L 0
δ L̂ =
δw + 0
δw + 00
δw + δ ẇ + δ ẇ (168)
∂w ∂w δw ∂ ẇ δ ẇ0
substituting (168) into (167) and using the definition of δWnc
!
Z t2 Z `
∂ L̂ ∂ L̂ 0 ∂ L̂ ∂ L̂ ∂ L̂ 0
δw + 0
δw − 00
δw00 + δ ẇ + δ ẇ + p(x, t)δw dx = 0
t1 0 ∂w ∂w ∂w ∂ ẇ ∂ ẇ0
(169)
Like we did using Hamilton’s Principle before, we want the integrand to
contain only δw, so we integrate by parts.
a) !
Z ` Z `
∂ L̂ 0 ∂ L̂ ` ∂ ∂ L̂
0
δw dx = δw − δwdx
0 ∂w ∂w0 0 0 ∂x ∂w0
b)
! !
` `
∂2
Z Z
∂ L̂ 00 ∂ L̂ 0
` ∂ ∂ L̂ ` ∂ L̂
00
δw dx = 00
δw − δw + δwdx
0 ∂w ∂w 0 ∂x ∂w00 0 0 ∂x2 ∂w00

c) !
Z t2 Z t2
∂ L̂ ∂ L̂ t2 ∂ ∂ L̂
δ ẇdx = δw − δw dt
t1 ∂ ẇ ∂ ẇ t1 t1 ∂t ∂ ẇ

69
d)
! !
Z t2 Z ` Z t2 Z `
∂ L̂ 0 ∂ L̂ ` ∂ ∂ L̂
δ ẇ dxdt = δ ẇ − δ ẇ dt
t1 0 ∂ ẇ0 t1 ∂ ẇ0 0 0 ∂x ∂ ẇ0
Z t2 !
∂ L̂ ` t2 ∂ ∂ L̂ `
= δw − δw dt
∂ ẇ0 0 t1 t1 ∂t ∂ ẇ0 0

Z ` ! !t2
∂ ∂ L̂
− δwdx
0 ∂x ∂ ẇ0
Z t2 Z ` 2 ! t1
∂ ∂ L̂
+ δwdxdt
t1 0 ∂t∂x ∂ ẇ0
Substituting into (169) and recalling δw = 0 at t = t1 , t2
"Z  ! !
t2 `
∂ 2 ∂ L̂ ∂2
Z 
∂ L̂ ∂ ∂ L̂ ∂ ∂ L̂ ∂ L̂
− + − + + p(x, t) δwdx
t1 0 ∂w ∂x ∂w0 ∂x2 ∂w00 ∂t ∂ ẇ ∂x∂t ∂ ẇ0
!  ! #
∂ L̂ ∂ ∂ L̂ ∂ ∂L ` ∂ L̂ 0 `
+ − − δw + δw dt = 0
∂w0 ∂x ∂w00 ∂t ∂ ẇ0 0 δw00 0

(170)
Since δw is arbitrary
!
2 2
∂ L̂ ∂ ∂ L̂ ∂ ∂ L̂ ∂ ∂ L̂ ∂ ∂ L̂
− 0
+ 2 00
− + + p(x, t) = 0
∂w ∂x ∂w ∂x ∂w ∂t ∂ ẇ ∂x∂t ∂ ẇ0
with the boundary conditions
!  
∂ L̂ ∂ ∂ L̂ ∂ ∂L
0
− − =0
∂w ∂x ∂w00 ∂t ∂ ẇ0
or
w is known
AND

∂ L̂
=0
∂w0
or
w0 is known at each end

70
4.3.1 Example: Beam is bending on spinning shaft
Shaft is spinning about vertical axis at angular velocity Ω. Derive the equa-
tions of motion.

Z ` Z `
1 1
T (t) = m(x)ẇ dx 2
+ J(x)ẇ02 dx
2 0 2 0
| {z } | {z }
Kinetic energy in translation Kinetic energy in rotation

1 ` `
Z Z
V (t) = EI(x)w002 dx + P (x, t)(ds − dx)
2 0 0
P (x, t) is axial force (centrifugal effects)

ds − dx is the shortening of the beam


 2 ! 21
∂w 1
2
− dx ≈ w02 dx

ds − dx = dx + dx
∂x 2

The axial load is assumed to be


Z `
P (x, t) = P (x) = m (ξ) Ω2 ξdξ
x

The potential energy can now be written


1 ` 1 `
Z Z Z ` 
002
V (t) = EI(x)w dx + m (ξ) Ω ξdξ w02 dx
2
2 0 2 0 x

71
Consider p(x, t) (external transverse load) to be the weight of the blade
Z `
δWnc = − m(x)gδw(x, t)dx
0

L̂ is then
Z ` 
1 1 2 1 2 1 2
L̂ = mẇ2 + J (ẇ0 ) − EI(x) (w00 ) − m(ξ)Ω ξdξ (w0 )
2
2 2 2 2 x

Evaluating terms of Lagrange’s equation one at a time:

∂ L̂
=0
∂w
! Z x 
∂ ∂ L̂ ∂ 2 0
= m(ξ)Ω ξdξ (w )
∂x ∂w0 ∂x `
Z `
0
= m(x)Ω xw − 2
m(ξ)Ω2 ξdξw00
x
!
∂2 ∂ L̂ ∂2
2 00
= − 2
(EI(x)w00 )
∂x ∂w ∂x
!
∂ ∂ L̂
= mẅ
∂t ∂ ẇ
!
∂2 ∂ L̂ ∂
0
= (J ẅ0 )
∂x∂t ∂ ẇ ∂x
So the E.O.M. is
Z `
∂2 ∂
−mΩ xw + m(ξ)Ω2 ξdξw00 − 2 (EI(x)w00 )−mẅ+ (J ẅ0 )−mg = 0
2 0
0<x<`
x ∂x ∂x
To get the B.C., let’s again evaluate each term.
Z ` 
∂ L̂
0
=− mΩ ξdξ w0
2
∂w x
!
∂ ∂ L̂ ∂
= − (EI(x)w00 )
∂x ∂w00 ∂x

72
!
∂ ∂ L̂
= J ẅ0
∂t ∂ ẇ

∂ L̂
00
= −EIw00
∂w
So the B.C. are
Z ` 

− mΩ ξdξ w0 +
2
(EIw00 ) − J ẅ0 = 0 or w = 0
x ∂x
AND
−EIw00 = 0 or w0 = 0
At the left end, w = 0 and w0 = 0. At the right end w 6= 0 and w0 6= 0, so

(EIw00 ) x=`
= J ẅ0 x=`
(Integral is zero)
∂x
AND
−EIw00 x=`
=0
Boundary conditions demanded by geometry (w = 0, w0 = 0) are geometric

or essential boundary conditions. These are typically BC through ∂x .
The other boundary conditions are dynamic boundary conditions. These are
∂3
BC through ∂x 3

5 The Eigenvalue Problem


Assume a solution of
w(x, t) = W (x)F (t)
substituting into the E.O.M.

`
d2
 Z  
2 0 2 00 00
−mΩ xW + mΩ ξdξ W − 2 (EIW ) F
x dx
  (171)
d 0
= mW − (JW ) F̈ , 0<x<`
dx

The boundary conditions at the left end are W (0)F = 0, and W 0 |x=0 F = 0
and the BCs at the right end are

73
d
(EIW 00 ) x=` F = JW 0 |x=` F , and (EIW 00 ) |x=` F = 0

dx
Solving for −FF̈
R 
` d2
mΩ2 xW 0 − x mΩ2 ξdξ W 00 + dx 2 (EIW )
00
−F̈
d(JW 0 )
= =λ
mW − F
dx

Thus, the temporal equation is

F̈ + λF = 0

f (t) = Aest
substituting
s2 + λ = 0
Let’s assume λ = ω 2 , s = ±iω

F (t) = A1 eiwt + A2 e−iwt

The spacial equation is


Z `   
2 0 2 00 d 00 d 0
mΩ xW − mΩ ξdξ W + 2 (EIW ) = λ mW − (JW )
x dx dx
with the BCs
W (0) = 0, W 0 |x=0 = 0, − dx d
EIW 00 x=` = λ (JW 0 ) |x=` , and EIW 00 |x=` = 0


Determining λ such that the spacial equation and the B.C. are satisfied is
called the eigenvalue problem.
There is no closed form solution for this particular eigenvalue problem. A
finite difference or finite element method must be employed to get the “com-
plete” solusion.

5.1 Self-adjoint systems


Let W be a function of a spacial variable x.
dW d2 W d2p
LW = A0 (x)W + A1 (x) + A2 (x) 2 + . . . A2p 2p
dx dx dx
LW is a linear homogeneous combination of W and its derivatives through
order 2p.

74
L is a “linear homogeneous differential operator”
d d2p
L = A0 (x) + A2 (x) + . . . + A2p (x) 2p
dx dx
Let M be an operator similar to L, but of order 2q, q < p, and write

LW = λM W, 0<x<`
Where λ comes from separation of variables.13
The boundary conditions can be written as
Bi W = λCi W i = 1, 2, . . . , p x = 0, `
Bi and Ci are also linear homogenous differential operators.
The maximum order of Bi is 2p − 1
The maximum order of Ci is 2q − 1
For the previous problem:
Z `
2 d d2 d2 d2 d4
L = mΩ x − mΩ2 ξdξ 2 + E 2 I 2 + EI 4 (p = 2)
dx x dx dx dx dx
d d d2
M =m− J −J 2 q=1
dx dx dx
At x = 0:
B1 = 1, C1 = 0 (172)
d
B2 = , C2 = 0 (173)
dx
and at x = `
dI d2 d3 d
B1 = −E − EI , C1 = J (174)
dx dx2 dx3 dx
d2
B2 = EI 2 , C2 = 0 (175)
dx
Define the inner product of functions f and g where f and g are: real,
piecewise smooth (continuous and in 0th and 1st derivative).
Z `
(f, g) = f (x)g(x)dx
0
13
M = −τ 2 for a string.

75
f (x) and g(x) are orthogonal if (f, g) = 0 k f k is the norm of f (x) defined
by Z `
k f k2 = (f, f ) = f 2 (x)dx < ∞
0

The space of such functions is denoted K 0


Consider a set of functions φi , i = 1, . . . n
The set is linearly dependent if

c1 φ1 + c2 φ2 + c3 φ3 + . . . cn φn = 0 (176)

without all cr = 0, r = 1, . . . n
Assume (for the moment) that φr are orthogonal. i.e. (φr , φs ) = 0 r 6= s
Multiply equation (176) by φs and integrate over 0 ≤ x ≤ `
n
X Z `
cr φr φs dx = 0
r=1 0

n
X
cr k φs k2 δrs = 0
r−1

cs k φs k2 = 0
Since k φs k6= 0, cs , for any s, must be zero to satisfy (176).
Therefore, orthogonal functions are always linearly independent. The con-
verse is not always true.
Consider φr and an arbitrary function f (x). Let cr be defined by
Z `
cr = (f, φr ) = f φr dx r = 1, 2, . . .
0
Pn
Consider the norm of (f − r=1 cr φr )

n n
!2
X Z ` X
kf− cr φr k2 = f− cr φr dx ≥ 0
r=1 0 r=1

Expanding the integral

76
Z ` n
X Z ` n
X
2
f dx − 2 cr f φr dx + c2r
0 r=1 |0 {z } r=1
cr
n n
=k f k −2 2
X
c2r +
X
c2r (177)
r=1 r=1
n
X
=k f k2 − c2r ≥ 0
r=1

Therefore n
X
k f k≥ c2r
r=1

Bessel’s Inequality: The sum of the squares of the coefficients cr is always


bounded. Pn
This is significant when approximatingPf (x) by r=1 dr φr .
If we represent the system by f (x) = nr=1 dr φr then the mean square error
of the approximation is
Z ` n
!2
X
M= f− dr φr dx
0 r=1

To best approximate f , we want to minimize this. First expand the integral


n
X n
X
2
M =k f k −2 dr cr + d2r
r=1 r=1
n
X n
X
=k f k2 − c2r + d2r − 2dr cr + c2r (178)
r=1 r=1
Xn Xn
=k f k2 − c2r + (dr − cr )2
r=1 r=1

which is minimum for dr = cr . (recall we are minimizing with respect to dr ).


An approximation by this method is an “approximation in the mean.” If n
is large enough, the error can be smaller than some arbitrarily small . Such
a set is considered to be complete.

77
Pn
Let fn = r=1 cr φr be the series representation of f . Then

lim k f − fn k= 0
n→∞

The completeness of a set does not require the functions to be orthonormal,


but only that they be independent.
Considering the original eigenvalue problem (in w(x, t)) The space is denoted
KB2p . The 2p derivative must have finite energy and the boundary conditions
must be satisfied.
Call functions in this space KB2p comparison functions. They need not satisfy
the differential equation.
The space of eigenfunctions is a subspace of KB2p .

78
Consider the helicopter blade problem again.
If J is neglected (it is usually small) the eigenvalue problem is

LW = λM W 0<x<`

Bi W = 0 i = 1, 2, . . . p x = 0, `
which is much simpler than before.
Consider two comparison functions, u and v. The system is considered to be
self-adjoint iff
(u, Lv) = (v, Lu)
This is the symmetry we often obtain for discrete systems. Discrete systems
with symmetric stiffness matrices (and symmetric damping matrices) are also
self-adjoint. (this means no gyroscopic matrices, . . . ).
Recall L is of order 2p
Define
Z `Xp p−1
dk u dk v X dl u dl v
[u, v] = f (x) k k dx + bl l l |`0
0 k=0 dx dx l=0
dx dx
to be the energy inner product (obtained by integration by parts).
The energy inner product is symmetric in u and v and their derivatives.
(Property of self-adjoint systems).
Note that (u, v) only has derivatives through p and boundary conditions only
through p − 1 (characteristic of Geometric Boundary Conditions).
Questions: Can we allow functions outside of KB2p ? Yes. We can enlarge this
space to include functions with only p derivatives having finite energy.
Define the expanded space KGp . The G indicates that the members of the
space satisfy the geometric boundary conditions. This is known as the energy
space. Members are energy functions, or admissible functions.

5.1.1 Example: Self-adjointness of beam stiffness operator


d4
L= 4 p=2
dx

79
` Z `
d4 v du d3 v d3 v `
Z
(u, Lv) = u 4 =− dx + u |
0 dx 0 dx dx
3 dx3 0
Z ` 2 2
d ud v du d2 v ` d3 v `
= dx − |
2 0
+u |
3 0
2
0 dx dx
2
|dx dx {z dx } (179)
If u and v satisfy the BCs, these terms are zero

= [u, v]
| {z }
energy inner product

Likewise
`
d2 u d2 v
Z
(v, Lu) = dx
0 dx2 dx2
Thus, L is self-adjoint.
This indicates that only the derivative p of u and v is required to have finite
energy and functions must satisfy only geometric B.C., so the space can be
enlarged by to KGp .
R`
Consider (u, Lu) If (u, Lu) = 0 uLudx ≥ 0 for all u, and zero only for
u = 0, then the system is positive definite (Compare to xT Ax ≥ 0 for all
non-zero x).
If the system is positive definite, all λr > 0.

Lu = λM u
Z ` Z `
u (Lu) dx = uλM udx
0 0
Z `
(u, Lu) = λ u2 M dx
0
2
If (u, Lu) > 0, since M > 0 and k u k > 0, λ must be > 0.
Recall −FF̈ = λ (separation of variables) λ = ω 2
If λ > 0, ω has no real part, and the system is marginally stable.
∴ Positive definite systems are stable.
∴ Positive semi-definite systems are marginally stable (Rigid body rotation)

5.2 Proof that the eigenfunctions are orthogonal for


self-adjoint systems.
Consider two solutions (λr , Wr and λs , Ws )

80
to the eigenproblem
LWr = mλr Wr
LWs = mλs Ws
Multiply and integrate
Z ` Z `
Ws LWr = λr mWs Wr dx
0 0
Z ` Z `
Wr LWs = λs mWr Ws dx
0 0
Subtracting
Z ` Z ` Z `
Ws LWr dx − Wr LWs dx = (λr − λs ) mWs Wr dx
0 0 0

If the system is self-adjoint, the left side is zero by definition. If we assume


λr and λs are distinct.
Z `
0= mWs Wr dx, m= 6 0
0

∴ Ws and Wr are orthogonal with respect to the function m.


Considering the eigenproblem again

LWr = λr mWr
Z ` Z `
Ws LWr dx = λr mWs Wr dx
0 0
| {z }
=0
Z `
Ws LWr dx = 0
0
For self-adjoint systems the eigenfunctions are orthogonal-also in the energy
inner product.

81
5.3 Non self-adjoint systems
Some systems, such as these involving flutter, are not self-adjoint.
Consider the linear operator L. It can be shown that the operator L has and
adjoint L∗ such that
(v, Lu) = (u, L∗ v)
Where u and v are in the domain of L and L∗ .
Consider ui and vj , eigenfunctions of L and L∗ .

Li u i = λi u i , L∗j vj = λ∗j vj

The set of eigenfunctions vj is said to be adjoint to the eigenfunctions ui .


Multiply and integrating
Z ` Z `
(vi , Lui ) = vi Lui dx = λi vj ui dx
0 0
Z ` Z `
∗ ∗
(ui , L vi ) = ui L vj dx = λ∗i ui vj dx
0 0
Subtracting Z `
λ∗j

0 = λi − ui vj dx
0
If λi 6= λ∗j then
Z `
(ui , vj ) = ui vj dx = 0, i, j = 1, 2, . . .
0

∴ The eigenfunction of L corresponding to the eigenvalue λi is orthogonal


to the eigenfunction of L∗ corresponding to λ∗j , where λ∗j is distinct from λi .
This means that there is a biorthogonality relation, (orthogonal to two sets).
Lui = λui can be written
(L − λi ) ui = 0
(vi , (L − λi ) ui ) = 0
For any eigenfunction function vi of L∗

(vi , Lui ) − λi (vi , ui ) = (L∗ vi , ui ) − λi (vi , ui )


= ((L∗ − λ) vi , ui ) = 0

82
∴ ui is orthogonal to every function (L∗ − λi ) vi
Consider: Assume (L∗ − λi ) vi can represent any arbitrary function f . Then
(f, ui ) = 0.
This is not true for any function, but only those orthogonal to ui , or those
equal to 0. Since that is a solution.

∴ (L∗ − λi ) vi = 0

Thus, the eigenvalues of L∗ are the same as those for L. (The eigenfunctions
are not the same.)
If ui and vi , 1 = 1, 2, . . . ∞, are normalized, then
Z `
(ui , vj ) = ui vj dx = δij , i, j = 1, 2, 3, . . . ∞ (180)
0
The sets of eigenfunctions are assumed to be complete, so that for any arbi-
trary function f ,
X∞
f= αi ui
1−1

αi = (v, f )
from (180).
Likewise for ∞
X
f= βi vi , βi = (ui f )
L=1

X ∞
X
Lf = αi Lui = αi λi ui
L=1 L=1

X ∞
X
∗ ∗
Lf= βi L vi = βi λi vi
i=1 L=1

If L = L, then the system is self-adjoint.
Non self-adjoint systems are not generally easy to solve closed-form.

5.4 Repeated eigenvalues


Consider λi has multiplicity mi .

• There are mi eigenfunctions corresponding to λi .

83
• Any linear combination of the eigenfunctions is also an eigenfunction.

• They can (and should) be taken so that they are mutually orthogonal.

They are normalized such that


Z `
m(x)wr2 dx = 1
0

Recall that the energy inner product is


Z `
[wr , ws ] = wr Lws dx
0
p
Z `X p−1
dk wr dk ws X d` wr d` ws ` (181)
= ak k k
dx + b` `
0 K=1
dx dx `=1
dx dx` 0

= λr δrs

and
√ √ 
Z `
mwr , mws = mwr ws dx = δrs
0
Every function w with continuous Lw and satisfying the boundary conditions
can be expanded in a convergent series.

X
w= cr wr
|{z}
r=1 eigenfunctions

where Z `
cr = mwwr dx
0
This solution form is written in terms of the eigenfunctions. Often they
cannot be found closed-form.
When closed-form solutions do not exist, approximate methods must be used.
The approximations are linear combinations of comparison or admissible
functions.
Admissible functions are preferred because there are more of them.

84
5.5 Vibration of Rods Shafts and Strings
All are represented by a second-order PDE.
Consider

Z `
1
T (t) = m(x)ẇ(x, t)2 dx
2 0
Z `
1 1
s(x)w0 (x, t)2 + k(x)w(x, t)2 dx + Kw (`, t)2

V (t) =
2 0 2
Z `
δWnc (t) = p(x, t)δw(x, t)dx
0
Hamilton’s Principle:
 
Z t2 Z `  
 δ L̂ + δ Ŵnc dx + δL0  dt = 0
t1 0
|{z}
discrete Lagrangian

1
L0 = − Kw (`, t)2
2

δL = δ T̂ − δ V̂
Applying Hamilton’s principle or the Lagrange equation yields

−k(x)w + (sw0 ) − mẅ + p = 0 0<x<`
∂x
w (0, t) = 0, Geometric B.C.

85
sw0 + Kw = 0 @x = `, Dynamic B.C.
Consider the unforced problem. Separation of variables yields
d
− (sW 0 ) + kW = ω 2 mW 0<x<`
dx
W (0) = 0, sW 0 + KW = 0 @x = `
The operators are
d d
L = − dx s dx +k
“L” is of order 2
Since p = 1
β1 = 1 @x = 0
d
β2 = s + K @x = `
dx
First: Let’s see if L is self adjoint. Consider two comparison functions u and
v.

Z ` Z `    
d dv
uLvdx = u − s + kv dx
0 0 dx dx
Z `   Z `
d dv
=− u s dx + ukvdx (182)
0 dx dx 0
Z ` Z `
dv ` du dv
= −us | + s dx + ukvdx
dx 0 0 dx dx 0

Recall the boundary conditions

v(0) = 0

(also true for u)


and
sv 0 |x=` = −Kv |`
Substituting Z `Z `
du dv
= uKv |` + s dx + ukvdx
0 dx dx 0
is symmetric in u and v and their first derivatives.
Thus, L (and the system) is self-adjoint. Thus the eigenfunctions are orthog-
onal.

86
Letting v = u
Z ` Z `
2
uLudx = u K |` + u02 s + u2 kdx > 0
0 0

So the system is positive definite, and all of the eigenvalues are positive.
Note: We did not need to solve the E.O.M. to reach these conclusions.
Let s(x), k(x) and m(x) all be constants.
Then
d
− (sW 0 ) + kW = ω 2 mW
dx
becomes
w00 + β 2 w = 0
β 2 = ω 2 m − k /s


The solution is
W (x) = C1 sin βx + C2 cos βx
From the boundary conditions

C2 = 0

sβC1 cos β` + KC1 sin β` = 0


Simplifying
s
tan β` = − β`
K`
which must be solved numerically

tan(x)
4 x
2

-2

-4

0 1 2 3 4 5 6 7 8 9

87
The eigenfunctions are

Wr (x) = Cr sin βrx r = 1, 2, . . .


r
p (sβr2 + k)
ωr = λr = r = 1, 2, . . .
m
The eigenfunctions should now be normalized such that
Z `
1= mWr2 dx
0
r
βr
Cr = 2 (2βr ` − sin 2βr `)
m
Note that the distributed spring does not effect √ the √ eigenvalues β` or the
eigenfunctions. (It does effect ω). (Note ω = λ 6= β`)
The orthogonality of the modes must be checked.
Z `
?
sin βr x sin βs xdx = Cδrs
0
1 `
Z
= cos(βr − βs )x − cos(βr + βs )xdx
2 0
!`
(β + β ) sin(β − β )x (183)
1 r s r s
=
2(βr2 − βs2 ) + (βr − βs ) sin(βr + βs )x
0
1
= 2 (βs sin βr ` cos βs ` − βr sin βs ` cos βr `)
βr − βs2 | {z }
=0 for s=r, but not for s6=r

Recall the characteristic equation


s
tan β` = − β`
K`
thus
s
sin β` = − β` cos β`
K`
Since βs l and βr ` are solutions
1  s s 
∴= 2 −β s cos (βs `) βr ` cos (βr `) + βr cos (βr l) βs ` cos (βs `) =0
βr − βs2 K` K`
for r 6= s

88
5.6 Bending Vibration of a Helicopter Blade
The differential equation with J = 0 is
Z `
d2
 
00 d
(EIW ) − mΩ ξdξ W 0 = λmW
2
dt2 dx x

with B.C.
W (0) = 0, W 0 (0) = 0
EIW 00 |x=` = 0
d
(EIW 00 ) |x=` = 0

dx
There is no known closed-form solution.
Let’s consider the self-adjointness and positive definiteness.
The operator L is
Z `
d2 d2
   
d 2 d
L = 2 EI 2 − mΩ ξdξ
dx dx dx x dx

To consider the self adjointness, consider two comparison functions u and v.


Z `
(u, Lv) = uLvdx
0
Z ` Z  0 ! `
00
= u (EIv 00 ) − mΩ2 ξdξ v 0 dx
0 x
Z ` 
0
=u (EIv 00 ) |`0−u (EIv ) −u0 00
mΩ ξdξ v 0 |`0
|`0 2
x
Z ` Z ` 
+ u00 EIv 00 + u0 mΩ2 ξdξ v 0 dx
0 x

Since u and v satisfy all the B.C., this reduces to


Z ` Z ` 
00 00 0
(u, Lv) = u EIv + u mΩ ξdξ v 0 dx
2
0 x

which means that L is self-adjoint

89
∴ The mass weighted eigenfunctions are orthogonal. Since (u, Lv) = [u, v]
we can check for positive definiteness by setting v = u and observing the
sign.
Z `
[u, u] = uLudx
0
Z ` Z ` Z ` 
002 02 2 (184)
= | {z } dx +
EIu u
|{z} mΩ ξdξ dx > 0
0 ≥0 0 ≥0 | x {z }
≥0

So the eigenvalues are all positive (positive definite system)


Consider the hinged blade (no longer clamped at left end). . . articulated
blade. Then
W (0) = 0 =⇒ EIW 00 |x=0 = 0
(no moment at left end)
Consider: Does W = x cause λ = 0?
Substituting into [u, u] gives
Z ` Z ` 
2
[u, u] = mΩ ξdξ dx > 0
0 x

so the system is still positive definite.


For kicks try W = x in the EOM
Z `
d
− mΩ2 ξdξ = λmx
dx x

mΩ2 x = mλx
Ω2 = λ
Since ω 2 = λ, ω = Ω is the frequency of the flapping mode.

90
5.7 Variational Characterization of the Eigenvalues
Consider a self-adjoint system. Define the Rayleigh quotient as

[u, u]
R (u) = √ √ (185)
( mu, mu)

where u is some arbitrary function (satisfying B.C.)


Recall that we can represent u by u = ∞
P
r=1 r Wr
c
where Wr are the eigenfunctions. Substituting gives
P∞ P∞
r=1 s=1 cr cs [Wr , Ws ]
R (c1 , c2 , . . . .) = P∞ P ∞ √ √
r=1 s=1 cr cs ( mWr , mWs )

Since the eigenfunctions are orthogonal, and


√ √ 
[Wr , Wr ] = λr and mWr , mWr = 1
P∞ 2
c r λr
R (c1 , c2 , . . .) = Pr=1
∞ 2
r=1 cr
The first variation of Rayleigh’s quotient is

∂R ∂R X ∂R
δR = δc1 + δc2 + . . . = δci
∂c1 ∂c2 i=1
∂c i

At a stationary point; δR must vanish


∂R
∴ ∂c i
= 0 (small variations of c don’t change R)

91
Substituting for R gives
 ! ∞ !−1 

∂R ∂  X X
= c2r λr c2r 
∂ci ∂ci r=1 r=1


! ∞
!−1
X ∂cr X
(186)
= 2cr λr c2r
r=1
∂c i r=1

! ∞
! ∞ !−2
X
2
X ∂cr X
+ c r λr −1 2cr c2r =0
r=1 r=1
∂ci r=1

Since cr and ci are independent


∂cr
= δri
∂ci
and
2ci λi ∞
P 2
P∞ 2
∂R r=1 cr − 2ci c r λr
= P∞ 2 2 r=1
∂ci ( c)
P∞ 2 r=1 r (187)
2ci r=1 cr (λi − λr )
= =0
( ∞ 2 2
P
r=1 cr )

If u is an eigenfunction, say u = Wi , then cr = ci δir .

∂R 2c3 (λi − λi ) 2 (λi − λi )


= i 4 = =0
∂ci ci ci

is satisfied.
Hence the Rayleigh quotient has stationary points at the system eigenfunc-
tions.
Letting u = ci Wi in the Rayleigh Quotient,

R(Wi ) = λi

∴ The stationary values of the Rayleigh quotient are the system eigenvalues.

92
Assume u has the form Wi + v where  is a small value and v is an arbitrary
function.
[Wi + v, Wi + v]
R (Wi + v) = √ √
( m (Wi + v) , m (Wi + v))
[Wi , Wi ] + 2 [Wi , v] + 2 [v, v]
= √ √ √ √ √ √
( mWi , mWi , ) + 2 ( mWi , mv) + 2 ( mv, mv)
(188)
Using the Binomial Expansion Theorem
√ √ √ √
[Wi , v] ( mWi , mWi ) − [Wi , Wi ] ( mWi , mv) 2

R (Wi + v) ≈ R (Wi ) + 2 √ √ 2 + O 
( mWi , mWi )
√ √
[Wi , v] − λi ( mWi , mv)
+ O 2

≈ λi + 2 √ √
( mWi , mWi )
(189)

For a fixed v, R depends only on . If R(Wi + v) is stationary, then the


second term must be zero for non-zero .
Z ` Z `
vLWi dx − λi mvWi dx = 0
0 0
Z `
v (LWi − λi mWi ) dx = 0
0
For this to be true, v must be orthogonal to LWi − λi mWi . But since v is
arbitrary
LWi − λi mWi = 0
So, the stationarity of Rayleigh’s quotient is equivalent to the eigenvalue
problem.
If λ1 is the lowest eigenvalue, then R(u) ≥ λ1 .
If u is orthogonal to the first s eigenvalues then

R(u) ≥ λs+1

But this requires the eigenfunctions


A better characterization is

λs+1 = max min R(u) (u, v ) = 0 i = 1, 2, . . . s


| i {z }
s constraints

93
vi are arbitrary functions, they are in effect constraints on the function u.
So λs+1 is the max value of (minR(u) with respect to u) with respect to vi .
To apply this we can use admissible √ functions
√ u since [u, u] satisfies the
dynamic B.C.s automatically and ( mu, mu) does not require their con-
sideration.

5.8 Integral Formulation of the Eigenvalue Problem


For a discrete system with stiffness matrix A (M = I, K = A) with eigen-
values λ and eigenvectors x, the eigenvectors of A−1 (assuming A is non-
singular) are λ−1 and the eigenvectors are x.
(A − λI) x = 0
A−1 (A − λI) x = 0
I − λA−1 x = 0


A−1 − λ−1 I x = 0


∴ the inverse of the eigenvalues of A−1 are the eigenvalues of A. So we


actually had a choice of solving eigenvalue problem of A or A−1 !
A−1 is the compliance matrix or the inverse operator.
Consider the differential equation
LW (x) = f (x) 0<x<`
L is P.D. and self-adjoint.
Does Af (x) = W (x) where A is the inverse operator of L exist?
Yes, it is usually an integral operator.
Assume: Z `
W = Af = a (x, ξ) f (ξ) dξ
0
a(x, ξ) is the kernel of the integrator operator.
a(x, ξ) is known as the Green’s function or influence function.
Consider a cantilever beam

94
Let a(x, ξ) be the displacement at x due to a unit load at point ξ. The total
displacement W (x) is
Z `
W (x) = a(x, ξ)f (ξ)dξ
0

Note that from Maxwell’s reciprocity theorem


a(x, ξ) = a(ξ, x)
Consider the E.O.M.
LW = −mẄ
since
Ẅ = −ω 2 W
LW = mω 2 W = λmW λ = ω2
Comparing to the original equation
f (x) = λmW
so Z `
W (x) = a(x, ξ)λm(ξ)W (ξ)dξ
0
which is the eigenvalue problem in integral form.
Recall from discrete systems
M ẍ + Kx = 0
−ω 2 M x + Kx = 0
x = ω 2 K −1 M x = λAM x =
If M has been diagonalized
n
X
xi = λ aij mj xj
j=1

The continuous integral form can be used in an iterative process.


Z `
Wi+1 (x) = λ a(x, ξ)m(ξ)Wi (ξ)dξ
0

If the first function W0 is an admissible function, then the resulting function


is a comparison function.

95
5.8.1 Example: Cantilever beam. EI=const.
Influence function (static response)
 
1 2 1
a(x, ξ) = x ξ− x x<ξ
2EI 3
 
1 2 1
a(x, ξ) = ξ x− ξ ξ<x
2EI 3
Pick an initial admissible function W0 (x) = x2 (Picking an admissible func-
tion gets us closer to the answer quicker. However any initial displacement,
even one that does not satisfy the boundary conditions, will also work). Sub-
stitution yields
 

λm Z x 
1
 Z ` 
1
 
2 2 2 2
W1 (x) = ξ ξ x − ξ dξ + ξ x ξ − x dξ 
 
2EI 3 3

 0 |{z} x
|{z} 
W (ξ) | {z } W (ξ) | {z }
a(x,ξ) a(x,ξ)

New trial function


   
λm 3 1 2 1 3 1 6
= ` x `− x + x
2EI 4 9 180

Taking the second derivative:


   
λm 1 2 1 4
W100 (x) = `3
`− x + x
2EI 2 3 6

Evaluating at the right end:


W100 (`) = 0
And the shear is proportional to the 3rd derivative:
 
000 λm 2 3 2 3
W1 (x) = − ` + x
2EI 3 3

W1000 (`) = 0
So, the BCs are now satisfied with our current estimate of the first mode.
HW: Apply Rayleigh’s quotient and compare to ω true.

96
5.8.2 Example: String solution using Green’s functions
Assume T and m const, fixed at each end
(
x(1 − ξ) x < ξ
a(x, ξ) =
ξ(1 − x) ξ < x

Pick an initial trial function that satisfies the boundary conditions

W0 (x) = .25 − (x − .5)2


Z x Z ` 
2 2

W1 (x) = λm ξ (1 − x) .25 − (ξ − .5) dξ + x (1 − ξ) .25 − (ξ − .5) dξ
0 x
1 1 1
W1 (x) = x − x3 + x4
12 6 12
continuing
1 1 1 5 1 6
x − x3 +
W2 (x) = x − x
120 72 120 360
The resulting Rayleigh quotients are

T (−2)2
R0 = = 10.0000c
ρ(.25 − (x − .5)2 )2
R1 = 9.87097c2
R2 = 9.86962c2
R = 9.86960c2 T rue

97
6 Discretization of continuous systems
6.1 The Rayleigh-Ritz method
Consider again the eigenvalue problem

LW = mλW W = W (x)

Bi W = 0 i = 1, 2, . . . p
L is self-adjoint of order p.
Instead of solving this, we will seek stationary values of the Rayleigh quotient,
equation (185)
[W, W ]
R(W ) = √ √
( mW, mW )
where W is a trial function.
The function space considered will be the finite subspace S n of KGp
Denote a function approximating W in S n : W n
[W n , W n ]
R(W n ) = √ √
( mW n , mW n )
Select a sequence of functions (φ1 (x), φ2 (x), φ3 (x), . . . , φn (x)) of x that are
linearly independent and are complete in energy.
That is to say for any W (x) in KGp and any small , the are enough functions
such that n
X
kW− ai φi (x) k< 
i=1
PN
∴ i=1 ai φi (x) can adequately represent W (x)
n
X
n
W = ai φi
i=1

ai need to be determined

[ ni=1 ai φi , ni=1 ai φi ]
P P
R(ai , a2 , . . . , an ) = √ Pn √ P
( m i=1 ai φi , m ni=1 ai φi )
Pn Pn (190)
i=1 j=1 ai aj [φi , φj ]
= Pn Pn √ √
i=1 j=1 ai aj ( mφi , mφj )

98
Next, define
[φi , φj ] = Kij
√ √ 
mφi , mφj = Mij
These are the mass and stiffness coefficients (elements of mass and stiffness
matrices).
Note that they must be symmetric as a result of L being self-adjoint.
N (a1 , a2 , . . .)
R(a1 , a2 , . . . , an ) =
D(a1 , a2 , . . .)
n X
X n
N (a1 , a2 , . . . , an ) = Kij ai aj compare to xT Kx
i=1 j=1
n X
X n
D(ai , a2 , . . . , an ) = Mij ai aj compare to xT M x
i=1 j=1

Recall that for stationarity


∂R
=0 r = 1, 2, . . . n
∂ar
   
∂N ∂D
∂R ∂ar
D − ∂ar
N
=
∂ar D2 (191)
∂N ∂D
∂ar
− Λn ∂a r
=
D
N
Note that Λn = D
at the stationary point.
Further,
n n  
∂N XX ∂ai ∂aj
= Kij aj + ai
∂ar i=1 j=1
∂ar ∂ar
n X
X n
= Kij (δir aj + δrj ai )
i=1 j=1
n n
(192)
X X
= aj Krj + ai Kir
j=1 i=1
X n
=2 ai Kir
i=1

99
Likewise n
∂D X
=2 ai Mir
∂aj i=1

∂R
=0
∂ar
gives    
∂N n ∂D
−Λ =0
∂ar ∂ar
n
X n
X
n
2 ai Kir − Λ 2 ai Mir = 0
i=1 i=1

(K − Λn M ) a = 0
where Kir are the elements of K and Mir are the elements of M
The eigenvalue problem is now algebraic. The n eigenvalues Λn are approx-
imations of the first n eigenvalues of the actual model, the quality of which
is determined by the ability of
n
X
n
W = ai φi
i=1

to represent each mode.


n
X
Wrn = air φi
i=1

air is the ith element of the rth vector


Wrn are the Ritz eigenfunctions, Λnr are the Ritz eigenvalues.
Space S n is finite, we can consider W n to be part of the solution for W , with

an+1 = an+2 = . . . = 0

(we ignored higher terms)


Since λ1 is the minimum value of Rayleigh’s quotient over KGP , and Λn1 is the
minimum in the subspace S n of KGP , λ1 ≤ Λn1

Since the higher eigenfunctions are

• orthogonal to the first eigenfunction

100
P
• presumably not completely represented by ai φi .

Then

• The Ritz eigenvalues bound the true eigenvalues from above.

• Usually the lower eigenvalues are best represented.

• The Rayleigh-Ritz method requires trying W 1 = a1 φ1 , W 2 = a1 φ1 +


a2 φ2 , W 3 = a1 φ1 + a2 φ2 + a3 φ3 , . . . and observing convergence of the
eigenvalues.

• Often (but not usually) the Rayleigh-Ritz method is applied to find only
the first (fundamental) natural frequency. A function approximating
the first natural frequency is tried. This is what was done in the section
on Green’s functions.

• Note that using additional functions, φ, is simple since for each addi-
tional term in the series, only 3 new calculations must be made. i.e.
 n   n 
n+1 K | n+1 M |
K = M =
− + − +

Let Λni be the ith eigenvalue of the n-term Rayleigh-Ritz solution and
Λn+1
i be the ith eigenvalue of the n + 1-term Rayleigh-Ritz solution.
Then
Λn+1
1 ≤ Λn1 ≤ Λn+1
2 ≤ Λn2 ≤ . . . Λnn ≤ Λn+1
n+1

This is the p-type FEA method.

Note:

• As n increases, the approximations of the eigenvalues improves mono-


tonically

• As n increases, the approximations of the eigenvalues approach the true


eigenvalues from above

• The rate of convergence depends on the nature of the admissible func-


tions chosen

Important constraints:

101
1. The admissible functions must be linearly independent

2. They must be complete in the energy space KGp

3. Simple functions should be chosen to simplify calculations

4. Example sets: Power series, trig functions, Bessel functions, Legendre


polynomials, Tchebycheff polynomials

5. Orthogonality is not required, but if


√ √ 
mφr , mφs = δrs

the effort will be reduced

6. The best method is often to solve a more simple similar problem and
use the eigenfunctions as admissible functions for the more difficult
problem.

102
6.1.1 Example: Cantilever Beam, Shames, p 340
Find the first two natural frequencies of a beam free at the left end, and
clamped at the right end.
Assume two admissible functions:

x 2

φ1 (x) = 1 − `

x x 2

φ2 (x) = `
1− `

 x 2 x x 2
W 2 (x) = a1 1 − + a2 1−
` ` `
4
d
L = EI 4
dx
Z `
d4
Kij = [φi , φj ] = φi EI 4 φj dx
0 dx
Z ` 2  2 
d d
= EI 2
φi φj dx other terms are zero
0 dx dx2
Z `  2 Z `  2
00 2 4EI
K11 = EI φ dx = EI dx =
0 0 `2 `3
Z `   Z `
00 00 2 2  x  2EI
K12 = K21 = EI φ1 φ2 dx = EI 2 2
3 − 2 dx = − 3
0 0 ` ` ` `
Z `  2 Z `  2 
00 2 x 2 4EI
K22 = EI φ2 dx = EI 2
3 − 2 dx = 3
0 0 ` ` `
√ √ 
Mij = mφi , mφj

103
Z `
x 4 m`
M11 = m 1− dx =
0 ` 5
Z `
x 2 x  x 2 m`
M12 = M21 = m 1− 1− dx =
0 ` ` ` 30
Z `  2 
x x 2 m`
M22 = m 1− dx =
0 ` ` 105
 
EI 4 −2
K= 3
` −2 4
1 1

M = m` 1 5 30
1
30 105

(K − ΛM ) a = 0

EI
Λ21 = 12.60
m`4
EI
Λ22 = 1212 4
m`
True values are
EI
Λ1 = 12.30
m`4
EI
Λ2 = 483 4
m`
The second function chosen was a lousy choice, and thus our estimate of the
second natural frequency is way too high.

6.2 The Assumed-Modes Method


Similar to the Rayleigh-Ritz method except the solution is assumed to be
n
X
wn (x, t) = φi (x)qi (t)
`=1

The kinetic energy can be written


Z `
1
T = mẇ2 dx
2 0

104
and the potential energy can be written in general form
1
V = [w, w]
2
substituting the series approximation into T yields
n
! n !
1 `
Z X X
T = m φi q̇i φj q̇j dx =
2 0 i=1 j=1

n n Z `
1 XX
= q̇i q̇j mφi φj dx
2 i=1 j=1 0

n n
1 XX
= q̇i q̇j Mij
2 i=1 j=1
where Z `
Mij = mφi φj dx, i, j = 1, 2, 3, . . . n
0
Likewise for the potential energy
n n
1 XX
V = qi qj Kij
2 i=1 j=1

where
Kij = [φi , φj ], i, j = 1, 2, 3, . . . n
Note that the mass and stiffness matrices are identical to those obtained in
the Rayleigh-Ritz method.
Lagrange’s equation for a discrete system (No damping, external loads, or T0
or T1 energy) are
 
d ∂L ∂L
− =0 r = 1, ..., n
dt ∂ q̇r ∂qr
L=T −V
Substituting for L yields
n
X
(Mrj q̈j + Krj qj ) = 0 r = 1, 2, ..., n
j=1

105
Assuming q̈r = −ωr2 qr = −λr qr
n
X
(Krj − ΛMrj ) qj = 0 r = 1, 2, ..., n
j=1

which is identical to the Rayleigh-Ritz method.

106
6.3 Weighted Residual Methods
Rayleigh-Ritz: Variation of Rayleigh’s quotient is zero.
Weighted residual methods are applied directly using the DE and BCs.
Consider a trial function W (x) in the space KB2p substituted into the differ-
ential equation.
LW (x) = λmW (x)
The residual is the error

R (W, x) = LW − λmW

and is a function of the trial function W and depends on the position x.


If the trial function is an eigenfunction,

R (W, x) = 0

Next consider a test function v (weighting function) from the space K 0


Define the weighted residual

vR = v (LW − λmW )

If v is orthogonal to R, then (v, R) = 0


Restrictions on the space K 2p can be changed using integration by parts.
Again, assume a solution of the form
n
X
n
W = ai φi (x)
i=1

φi (x) are a complete set of trial functions (comparison function)


φi (x) are a basis for the subspace S n of KB2p
Choose n functions ψi and regard then as a basis for the subspace V n
S n is the trial space
V n is the test space
The coefficients ai are determined by placing the constraint that ψi be or-
thogonal to R (W n , x)
Z `
(ψi , R) = ψi (LW n − mλn W n ) dx = 0 ` = 1, 2, . . . , n
0

Why does this work?

107
As n → ∞, V n fills the entire space,
Then the only way for (ψi , R) = 0 is for R = 0.

lim (LW n − λn mW n ) = LW − λmW = 0


n→∞

Substituting the approximation for W yields


n n
!
Z ` X X
(ψi , R) = ψi aj Lφi − λn aj mφj dx
0 j=1 j=1

n 
X X 
(ψi , R) = Kij − λn Mij aj = 0
j=1
Z `
Kij = (ψi , Lφj ) = ψi Lφj dx
0
Z `
Mij = (ψi , mφj ) = ψi mφj dx
0
Kij is usually not symmetric, regardless of whether or not L is self-adjoint.
(Depends on technique.)

108
6.3.1 Galerkin’s Method (Ritz’s Second Method)
Assume the weighting functions are the same as the trial functions

ψi = φi ` = 1, 2, ...n

Kij = (φi , Lφj )


Mij = (φi , mφj )
Which are the same as the Ritz results if L is self-adjoint.
Consider the non self-adjoint eigenvalue problem
 
d dW dW
− s +r = λmW
dx dx dx
Z `    
d dφj dφj
Kij = (φi , Lφi ) = φi − s +r dx
0 dx dx dx
Integrating by parts and considering the boundary conditions yields
Z `
sφ0i φ0j + rφi φ0j dx

Kij =
0

which is not symmetric in φi and φj . What happened to the boundary


condition terms when integrating by parts?

109
6.3.2 Example: Clamped-clamped beam (Dimarogonas)
Consider a clamped-clamped beam, length `
Choose ψ1 (x) = 1 − cos 2πx
`
, ψ2 (x) = 1 − cos 4πx
`
Z `
Ri = ψi (LW − mω 2 W )dx
0

For the two trial functions


Z `
2 2 2 8EIπ 4 3
R1 = ψ1 (LW − mω W )dx = `(a1 4
− a1 mω 2 − a2 mω 2 )
0 ` 2
Z `
128EIπ 4 3
R2 = ψ2 (LW 2 − mω 2 W 2 )dx = `(a2 4
− a1 mω 2 − a2 mω 2 )
0 ` 2
Where we recall that W 2 is the two term representation of the solution. Thus
 8EIπ4   3  
`4
0 a1 2 2m m a1
128EIπ 4 −ω =0
0 `4
a2 m 23 m a2
or  4   3   
8π 0 a1 1 a1
−λ 2 3 =0
0 128π 4 a2 1 2 a2
where
ω 2 m`4
λ=
EI
λ1/2 22.47 - 124.06
True 22.373 61.673 120.90
The second mode was missed due to a poor selection of functions ψi .
For the first mode
a1 = 23.2, a2 = 1
 
2πx 4πx
φ1 = 23.2 1 − cos + 1 − cos
` `
For the second mode
a1 = −.69, a2 = 1
 
2πx 4πx
φ2 = −.69 1 − cos + 1 − cos
` `

110
2
ψ1
ψ2
1.8

1.6

1.4

1.2

0.8

0.6

0.4

0.2

0
0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
x/l

111
6.3.3 The collocation method
The weighting functions are spatial Dirac delta functions
ψi = δ(x − xi )
The points x = xi are chosen in advance
Z `
(ψi , R) = δ(x − xi ) (LW n − λmW n ) dx = 0
0
= (LW n − λn mW n ) |x=xi = 0
What this means is the D.E. is satisfied at n preselected locations.
As n increases, the equation is satisfied everywhere.
Now Z `
Kij = δ(x − xi )Lφj dx = Lφj (xi )
0
Z `
Mij = δ(x − xi )mφj dx = mφj (xi )
0
The coefficients here are not symmetric because δ is not a comparison func-
tion of L.
This is true even when L is self-adjoint.
Pro: Easy to find Kij , Mij
Con: Difficult to obtain solution to non-symmetric eigenvalue problem.

6.3.4 Collocation Method Example


(interior method)
Simply-supported beam with non-uniform mass distribution.

πx
m(x) = m0 sin EI = const
`
weighting functions trial functions
ψ1 = δ(x − 4` ) φ1 = sin πx
`
ψ2 = δ(x − 2` ) φ2 = sin 2πx
`
ψ3 = δ(x − 3`4 ) φ3 = sin 3πx
`

112
d4
Kij = Lφj (xi ) = EI 4 φj (xi )
dx
 4
jπ jπxi
= EI sin
` `
 π 4 1  4  4
2π 3π 1
K11 = EI √ , K12 = EI , K13 = EI √
` 2 ` ` 2
 π 4   4

K21 = EI , K22 = 0, K23 = −EI
` `
 π 4 1  4  4
2π 3π 1
K31 = EI √ , K32 = −EI , K33 = EI √
` 2 ` ` 2
1
16 57.27

 π 4 √2
K = EI 1 0 − 81
` √1
2
− 16 57.27

113
Mij = m (xi ) φj (xi )
1 1 1
M11 = m0 , M12 = √ m0 , M13 = m0
2 2 2
M21 = m0 , M22 = 0, M23 = −m0
1 1 1
M31 = m0 , M32 = − √ m0 , M33 = m0
2 2 2
Assembling: 1
√1 1 
2 2 2
M = m0  10 − 1
11 1
− √2
2 2
 r
p
−1
1 EI
eig (M K) = 2
(11.74, 39.48, 105.6)
` m0
For comparison, a uniform simply supported beam:
 r
1 EI
ωn = 2
(9.87, 39.48, 88.83)
` m0

114
6.4 System Response By Approximate Methods: Galerkin’s
Method - the foundation of Finite Elements
6.4.1 Damped (and undamped) Non-gyroscopic System
Consider
∂w(x, t) ∂ 2 w(x, t)
Lw(x, t) + C +M = f (x, t)
∂t ∂t2
Let n
X
w(x, t)n = φj (x)qj (t)
j=1

where φj are comparison functions.


Substituting yields
n
X
(Lφj qj + Cφj q̇j + m (x) φj q̈j ) = f (x, t)
j=1

Pre-multiplying by φi , i = 1, 2, . . . n and integrating over `.

M q̈ + C q̇ + Kq = F (t)
Z `
Mij = φi m(x)φj dx
0
Z `
Kij = φi Lφj dx
0
Z `
Cij = φi Cφj dx
0
Z `
Fi = φi f dx
0
The discrete eqns can be solved as shown before.

7 The Computational Eigenvalue Problem


7.1 Householder’s method
Reduced a symmetric matrix to tri-diagonal form

115
More efficient than Given’s method, 1/2 as many multiplications. Requires
n-2 transformations.

Ak = Pk Ak−1 Pk , k = 1, 2, 3, . . . , n − 2
where
Pk = I − 2v k v Tk
A1 must have the form
 (1) (1) 
a11 a12 0 ··· 0
 (1) (1) (1) (1) 
a21 a22 a23 · · · a2n 
 (1) (1) (1) 
 0 a32 a33
A1 =  · · · a3n 
 .. .. .. .. .. 

 . . . . . 
(1) (1) (1)
0 an2 an3 · · · ann

the requirement is then


(1) (1) (1)
a31 = a41 = · · · = an1 = 0

with v Tk v k = 1
Let’s pick v1,1 = 0, so
 
1 0 0 ··· 0
0 1 − 2v 2 − 2v12 v13 ··· − 2v1,2 v1n 
 12 
0 − 2v12 v13 2
P1 =  1 − 2v13 ··· − 2v1,3 v1n 

 .. .. .. .. .. 
. . . . . 
2
0 − 2v12 v1n ··· ··· 1 − 2v1n

Then    12
1 a1,2
v1,2 = 1∓
2 α1
where ! 12
n
X
α1 = a21j
j=2

then
a1j
vi,j = ∓
2α1 v12

116
Where the signs are chosen to be the same as that of a12 .
The procedure is generalized to

v Tk = 0 0 0 · · · vk,k+1 vk,k+2 · · · vk,n


 

where    12
1 ak,k+1
vk,k+1 = 1∓
2 αk
and
akj
vk,j = ∓
2αk vk,k+1
where ! 21
n
X
αk = a2kj
j=k+1

The eigenvectors of the transformed matrix, T = Ak , are then related to


those of the original matrix A, by T = P T AP .
Example:
A =

1 1 1 1
1 2 2 2
1 2 3 3
1 2 3 4

k = 1
n
! 21
X
αk = a2kj
j=k+1

alpha = 1.7321

   12
1 ak,k+1
vk,k+1 = 1∓
2 αk
and
akj
vk,j = ∓
2αk vk,k+1

117
V =

0.00000
0.88807
0.32506
0.32506

Pk = I − 2v k v Tk
P1 =

1.00000 0.00000 0.00000 0.00000


0.00000 -0.57735 -0.57735 -0.57735
0.00000 -0.57735 0.78868 -0.21132
0.00000 -0.57735 -0.21132 0.78868

A =

1.00000 -1.73205 0.00000 0.00000


-1.73205 7.66667 -0.54466 -1.12201
0.00000 -0.54466 0.37799 0.16667
0.00000 -1.12201 0.16667 0.95534

k = 2
alpha = 1.2472
V =

0.00000
0.00000
0.84755
0.53071

Psub =

1.00000 0.00000 0.00000 0.00000


0.00000 1.00000 0.00000 0.00000
0.00000 0.00000 -0.43670 -0.89961

118
0.00000 0.00000 -0.89961 0.43670

A =

1.00000 -1.73205 0.00000 0.00000


-1.73205 7.66667 1.24722 -0.00000
0.00000 1.24722 0.97619 -0.12372
0.00000 0.00000 -0.12372 0.35714

The resulting matrix is a tridiagonal matrix

Ak = P A 0 P (193)

where
k
Y
P = Pi (194)
1

119
7.2 The QR Method
The QR Method for obtaining eigenvalues is computationally expensive if the
matrix is full. It is very efficient when Given’s method (or the more efficient
Householder’s method) is applied first.
A1=[2 -1 0;-1 2.5 -1.5;0 -1.5 3]
A1 =
2.0000 -1.0000 0
-1.0000 2.5000 -1.5000
0 -1.5000 3.0000
First, try to get rid of sub-diagonal elements
s1=-1/sqrt(2^2+1)
s1 =
-0.4472
c1=2/sqrt(2^2+1)
c1 =
0.8944

The rotation matrix Θ1 is


 
cos(θ) sin(θ) 0
Θ1 =  − sin(θ) cos(θ) 0
0 0 1

t1=[c1 s1 0;-s1 c1 0;0 0 1]


t1 =
0.8944 -0.4472 0
0.4472 0.8944 0
0 0 1.0000
We now do a partial rotation, A1p1 = Θ1 A1
A1p1=t1*A1
A1p1 =
2.2361 -2.0125 0.6708
0 1.7889 -1.3416
0 -1.5000 3.0000
Now do a rotation to remove a3,2

120
s2=A1p1(3,2)/sqrt(A1p1(3,2)^2+A1p1(2,2)^2)
s2 =
-0.6425
c2=A1p1(2,2)/sqrt(A1p1(3,2)^2+A1p1(2,2)^2)
c2 =
0.7663
The rotation matrix Θ2 is
 
1 0 0
Θ1 = 0
 cos(θ) sin(θ) 
0 − sin(θ) cos(θ)
t2=[1 0 0;0 c2 s2;0 -s2 c2]
t2 =
1.0000 0 0
0 0.7663 -0.6425
0 0.6425 0.7663
A1p1 =
2.2361 -2.0125 0.6708
0 1.7889 -1.3416
0 -1.5000 3.0000
A1p2=t2*A1p1
A1p2 =
2.2361 -2.0125 0.6708
0 2.3345 -2.9556
0 -0.0000 1.4367
Now we need to post-multiply by our rotations
Q1=t1’*t2’
Q1 =
0.8944 0.3427 0.2873
-0.4472 0.6854 0.5747
0 -0.6425 0.7663
A2=A1p2*Q1
A2 =
2.9000 -1.0440 0.0000
-1.0440 3.4991 -0.9231
0.0000 -0.9231 1.1009

121
This is the completion of one step Note that this is identical to

A2 = Θ2 Θ1 A1 ΘT1 ΘT2 (195)

The beginning of subsequent steps begins with the solution of the eigenvalue
problem for the lowest 2 × 2 principle minor of the original matrix. i.e.
 
2.5 − 1.5
eig
− 1.5 3

eig(A1(2:3,2:3))
ans =
1.22931
4.27069

The eigenvalue closest to 3 (bottom right value) is 4.27.


Next, solve the eigenvalue problem for the lowest 2 × 2 principle minor of the
new matrix.

eig(A2(2:3,2:3))
ans =
3.8133
0.7867

The eigenvalue closest to 1.1009 (bottom right value) is 0.7867.

0.7867 1
− 1 = .81 >
4.27 2

Thus, a shift is not in order (we will have one later). So, repeat the process
again from the start.

122
s1=A2(2,1)/sqrt(A2(2,1)^2+A2(1,1)^2)
s1 =
-0.3387
c1=A2(1,1)/sqrt(A2(2,1)^2+A2(1,1)^2)
c1 =
0.9409
t1=[c1 s1 0;-s1 c1 0;0 0 1]
t1 =
0.9409 -0.3387 0
0.3387 0.9409 0
0 0 1.0000
A2p1=t1*A2
A2p1 =
3.0822 -2.1676 0.3127
-0.0000 2.9386 -0.8686
0.0000 -0.9231 1.1009
s2=A2p1(3,2)/sqrt(A2p1(3,2)^2+A2p1(2,2)^2)
s2 =
-0.2997
c2=A2p1(2,2)/sqrt(A2p1(3,2)^2+A2p1(2,2)^2)
c2 =
0.9540
t2=[1 0 0;0 c2 s2;0 -s2 c2]

123
t2 =
1.0000 0 0
0 0.9540 -0.2997
0 0.2997 0.9540
A2p2=t2*A2p1
A2p2 =
3.0822 -2.1676 0.3127
-0.0000 3.0802 -1.1586
0.0000 0.0000 0.7900
Q2=t1’*t2’
Q2 =
0.9409 0.3232 0.1015
-0.3387 0.8976 0.2820
0 -0.2997 0.9540
A3=A2p2*Q2
A3 =
3.6342 -1.0433 0.0000
-1.0433 3.1121 -0.2368
0.0000 -0.2368 0.7537

124
This is the end of a second rotation. We already have the eigensolution of
the 2 × 2 minor of A2 . We need it for A3 .

eig(A3(2:3,2:3))
ans =
3.1356
0.7301

0.7301 1
− 1 = 0.0719 <
0.7867 2
A shift is thus advisable. The next iteration begins by using A3 − 0.7301I.

A3p=A3-eye(3)*ans(2)
A3p =
2.9041 -1.0433 0.0000
-1.0433 2.3820 -0.2368
0.0000 -0.2368 0.0235
s1=A3p(2,1)/sqrt(A3p(2,1)^2+A3p(1,1)^2)
s1 =
-0.3381
c1=A3p(1,1)/sqrt(A3p(2,1)^2+A3p(1,1)^2)
c1 =
0.9411
t1=[c1 s1 0;-s1 c1 0;0 0 1]

125
t1 =
0.9411 -0.3381 0
0.3381 0.9411 0
0 0 1.0000
A3p1=t1*A3p
A3p1 =
3.0858 -1.7873 0.0801
-0.0000 1.8889 -0.2228
0.0000 -0.2368 0.0235
s2=A3p1(3,2)/sqrt(A3p1(3,2)^2+A3p1(2,2)^2)
s2 =
-0.1244
c2=A3p1(2,2)/sqrt(A3p1(3,2)^2+A3p1(2,2)^2)
c2 =
0.9922
t2=[1 0 0;0 c2 s2;0 -s2 c2]
t2 =
1.0000 0 0
0 0.9922 -0.1244
0 0.1244 0.9922
A3p2=t2*A3p1
A3p2 =
3.0858 -1.7873 0.0801
-0.0000 1.9037 -0.2240
0.0000 0.0000 -0.0044

126
Q3=t1’*t2’
Q3 =
0.9411 0.3355 0.0421
-0.3381 0.9338 0.1170
0 -0.1244 0.9922

We now complete the rotation and add back the subtracted approximation
to the eigenvalue.

A4=A3p2*Q3+(eye(3)*.7301)
A4 =
4.2385 -0.6437 -0.0000
-0.6437 2.5356 0.0005
0.0000 0.0005 0.7258

the 3 × 3 element is almost completely independent, so the first extracted


eigenvalue is approximately 0.7258 (we could, or course, keep going to con-
verge further). This compares to the true value of 0.725817
The entire process is then repeated again for the sub-matrix created from
removing the last row and column.

127
7.3 Subspace Iteration
Consider the eigenvalue problem

KΦ = λM Φ

for the case where we only need the 1st p eigenvalues and eigenvectors
Start with an initial guess X1 (n × q) like the Rayleigh Ritz method

X̄2 = K −1 M X1
|{z} |{z}
n×q n×q

which can be solved for using Gauss elimination


A reduced eigenvalue problem is obtained

K2 = X̄2T K X̄2
|{z}
q×q

M2 = X̄2T M X̄2
The reduced eigenvalue problem is

K2 Q2 = M2 Q2 Λ2

which can be solved for using other techniques better for smaller eigenvalue
problems.
In general, q  n.
An improved approximation for the eigenvectors is

X2 = X̄2 Q2
|{z} |{z} |{z}
n×q n×q q×q

Repeat this
X̄k+1 = K −1 M Xk
| {z } |{z}
n×q n×q

T
Kk+1 = X̄k+1 K X̄k+1
| {z }
q×q
T
Mk+1 = X̄k+1 M X̄k+1
Solve
Kk+1 Qk+1 = Mk+1 Qk+1 Λk+1

128
for Qk+1 , then
Xk+1 = X̄k+1 Qk+1
| {z } | {z } | {z }
n×q n×q q×q

Repeat until convergence.


Example;
K =

2.00000 -1.00000 0.00000


-1.00000 2.50000 -1.50000
0.00000 -1.50000 3.00000

M =

1 0 0
0 1 0
0 0 1

>> x1=[1 1;0 1;0 1]

x1 =

1 1
0 1
0 1

>> x2bar=K\M*x1
x2bar =

0.70000 1.30000
0.40000 1.60000
0.20000 1.13333

>> k2=x2bar’*K*x2bar
k2 =

0.70000 1.30000

129
1.30000 4.03333
>> m2=x2bar’*M*x2bar
m2 =

0.69000 1.77667
1.77667 5.53444
>> [q2,lam2]=eig(m2\k2);
>> q2=q2(:,[2,1]),lam2=diag(sort(diag(lam2)))% need to sort eigenvalues
q2 =

-0.02669 0.95230
0.99964 -0.30515

lam2 =

0.72874 0.00000
0.00000 2.34844

>> x2=x2bar*q2;x2=x2/[norm(x2(:,1)) 0;0 norm(x2(:,2))]% normalize vector lengths

x2 =

0.54935 0.81937
0.68141 -0.32580
0.48362 -0.47169

>> x3bar=K\M*x2

x3bar =

0.75384 0.34890
0.95832 -0.12157
0.64037 -0.21801

>> k3=x3bar’*K*x3bar
k3 =

1.37683 0.00339

130
0.00339 0.42832

>> m3=x3bar’*M*x3bar
m3 =

1.89671 0.00690
0.00690 0.18404

>> [q3,lam3]=eig(m3\k3);q3=q3(:,[2,1]),lam3=diag(sort(diag(lam3)))% need to sort e


q3 =

0.00417 -0.99998
-0.99999 -0.00548

lam3 =

0.72590 0.00000
0.00000 2.32760

131
>> x3=x3bar*q3;x3=x3/[norm(x3(:,1)) 0;0 norm(x3(:,2))] %normalize eigenvectors

x3 =

-0.54874 -0.80601
-0.69534 0.29272
-0.46410 0.51445

After 9 iterations  
0.7258 0
Λ=
0 2.3198
and  
0.5480 0.7909
Φ = 0.6983
 − 0.2533
0.4606 − 0.5571
Initial vectors are chosen to b:

1. Diagonal of mass matrix

2. Vectors of + 1 where mii /kii are maximum.

Only the first p eigenvalues are useful. q is generally min(2p, p + 8).


This will converge to the lowest p eigenvalues given that all of the vectors in
X1 are not orthogonal to any of the corresponding eigenvectors.

132
7.4 Shifting
Consider the eigenvalue problem

(K − λM )ψ = 0

Subspace iteration does not work with a singular K. (rigid body motion
capable system)
Consider redefining λ = µ − 1. Then the eigenvalue problem is restated as

(K − λM )ψ =0
(K − (µ − 1)M )ψ =0
(196)
((K + M ) − µM ) ψ =0
(K 0 − µM ) ψ =0

K 0 is non-singular. The obtained eigenvectors are the same. The eigenvalues


can be obtain by unshifting the eigenvalues using λ = µ − 1.

References
[1] Meirovitch, L., Principles and Techniques of Vibrations, Prentice Hall,
Inc., Englewood Cliffs, 1997.

[2] Golub, G. H. and Van Loan, C. F., Matrix Computations, Johns Hopkins
University Press, Baltimore, 1985.

[3] Horn, R. A. and Johnson, C. R., Matrix Analysis, Cambridge University


Press, New York, 1999.

[4] Bathe, K.-J. and Wilson, E. L., Numerical Methods in Finite Element
Analysis, Prentice Hall, Inc., Englewood Cliffs, 1976.

[5] Caughey, T. K. and O’Kelly, M. E. J., “Classical Normal Modes in


Damped Linear Dynamic Systems,” ASME Journal of Applied Mechan-
ics, Vol. 32, 1965, pp. 583–588.

133

You might also like