Random Vibration: Mechanical, Structural, and Earthquake Engineering Applications

After determining that most textbooks on random vibrations are mathematically intensive and often too difficult for students to fully digest in a single course, the authors of Random Vibration: Mechanical, Structural, and Earthquake Engineering Applications decided to revise the current standard. This text incorporates more than 20 years of research on formulating bridge design limit states. Utilizing the authors’ experience in formulating real-world failure probability-based engineering design criteria and their discovery of relevant examples using the basic ideas and principles of random processes, the text effectively helps students readily grasp the essential concepts. It eliminates the rigorous math-intensive logic training applied in the past, greatly reduces the random process aspect, and works to change a knowledge-based course approach into a methodology-based course approach. This approach underlies the book throughout, and students are taught the fundamental methodologies of accounting for random data and random processes as well as how to apply them in engineering practice.
Advances in Earthquake Engineering
Series Editor: Franklin Y. Cheng
RANDOM VIBRATION
Mechanical, Structural, and Earthquake Engineering Applications
ZACH LIANG
GEORGE C. LEE
MATLAB® is a trademark of The MathWorks, Inc. and is used with permission. The MathWorks does not
warrant the accuracy of the text or exercises in this book. This book’s use or discussion of MATLAB® soft-
ware or related products does not constitute endorsement or sponsorship by The MathWorks of a particular
pedagogical approach or particular use of the MATLAB® software.
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
This book contains information obtained from authentic and highly regarded sources. Reasonable efforts
have been made to publish reliable data and information, but the author and publisher cannot assume
responsibility for the validity of all materials or the consequences of their use. The authors and publishers
have attempted to trace the copyright holders of all material reproduced in this publication and apologize to
copyright holders if permission to publish in this form has not been obtained. If any copyright material has
not been acknowledged please write and let us know so we may rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmit-
ted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented,
including photocopying, microfilming, and recording, or in any information storage or retrieval system,
without written permission from the publishers.
For permission to photocopy or use material electronically from this work, please access www.copyright.
com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood
Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and
registration for a variety of users. For organizations that have been granted a photocopy license by the CCC,
a separate system of payment has been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used
only for identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com
and the CRC Press Web site at
http://www.crcpress.com
Contents
Series Preface...........................................................................................................xix
Preface.....................................................................................................................xxi
Acknowledgments...................................................................................................xxv
Series Editor..........................................................................................................xxvii
Preface
Understanding and modeling a vibration system and measuring and controlling its
oscillation responses are important basic capacities for mechanical, structural, and
earthquake engineers who deal with the dynamic responses of mechanical/structural
systems. Generally speaking, this ability requires three components: the basic theo-
ries of vibrations, experimental observations, and measurement of dynamic systems
and analyses of the time-varying responses.
Among these three components, the first two are comparatively easy for engineering students to learn. The third, however, often requires a mathematical background in random processes, which is rather abstract for students to grasp.
One course covering stochastic processes and random vibrations with engineering
applications is already too much for students to absorb because it is mathematically
intensive and requires students to follow an abstract thinking path through “pure”
theories without practical examples. To carry out a real-world modeling and analy-
sis of specific types of vibration systems while following through the abstract pure
thinking path of mathematical logic would require an additional course; however,
there is no room in curriculums for such a follow-up course. This has been the obser-
vation of the first author during many years of teaching random vibration. He fre-
quently asked himself, How can one best teach the material of all three components
in a one-semester course?
The authors, during the past 20 years, have engaged in an extensive research
study to formulate bridge design limit states; first, for earthquake hazard and, sub-
sequently, expanded to multiple extreme natural hazards for which the time-varying
issue of rare-occurring extreme hazard events (earthquakes, flood, vehicular and
vessel collisions, etc.) had to be properly addressed. This experience of formulat-
ing real-world failure probability–based engineering design criteria provided nice
examples of using the important basic ideas and principles of random process (e.g.,
correlation analysis, the basic relationship of the Wiener–Khinchine formula to
transfer functions, the generality of orthogonal functions and vibration modes, and
the principles and approaches of dealing with engineering random process). We thus
decided to emphasize the methodology of dealing with random vibration. In other
words, we have concluded that it is possible to offer a meaningful course in ran-
dom vibration to students of mechanical and structural engineering by changing the
knowledge-based course approach into a methodology-based approach. The course
will guide them in understanding the essence of vibration systems, the fundamental
differences in analyzing the deterministic and dynamic responses, the way to han-
dle random variables, and the way to account for random process. This is the basic
approach that underlies the material developed in this book. By doing so, we give
up coverage of the rigorous mathematical logic aspect and greatly reduce the portion
of random process. Instead, many real-world examples and practical engineering
issues are used immediately following the abstract concepts and theories. As a result,
students might gain the basic methodology to handle the generality of engineering
projects and develop a certain capability to establish their own logic to systemati-
cally handle the issues facing the theory and application of random vibrations. After
such a course, students are not expected to be proficient in stochastic process and to
model a random process, but they will be able to design the necessary measurement
and observation, to understand the basic steps and validate the accuracy of dynamic
analyses, and to master and apply newly developed knowledge in random vibrations
and corresponding system reliabilities.
With this approach, we believe it is possible to teach students the fundamental
methodology accounting for random data and random process and apply them in
engineering practice. This is done in this book by embedding engineering examples
wherever appropriate to illustrate the importance and approach to deal with random-
ness. The materials are presented in four sections. The first is a discussion of the
scope of random process, including engineering problems requiring the concept of
probability to deal with. The second is the overview of random process, including
the time domain approach to define time-varying randomness, the frequency domain
approach for the spectral analysis, and the statistical approach to account for the
process. The third section is dedicated specifically to random vibrations, a typical
dynamic process with randomness in engineering practice. The fourth section is the
application of the methodology. In recent years, we used typical examples of devel-
oping fatigue design limit states for mechanical components and reliability-based
extreme event design limit states for bridge components in teaching this course. The students’ strong performance and positive responses have encouraged us to prepare this manuscript.
Section I consists of two chapters. Chapter 1 expresses the brief background and
the objectives of this book, followed by a brief review of the theory of probability within the context of its application to engineering. The intent is only to introduce basic concepts and formulas to prepare for discussions of random process. The
review of the theory of probability is continued in Chapter 2, with a focus on treating measured random data using certain basic random distributions in their actual applications. This will also help engineers to gain a deeper
understanding of the randomness in sequences. In this section, the essence of probability as the chance of occurrence in a sample space, the basic treatment of handling one-dimensional random variables by using two-dimensional deterministic probability distribution functions (PDFs), and the tool of averaging (statistics), which changes quantities from random to deterministic, are emphasized.
Two important issues in engineering practice, the uncertainty of data and the prob-
ability of failure, are introduced.
Section II begins with Chapter 3, where the random (also called stochastic) pro-
cess is introduced in the time domain. The nature of time-varying variables is first
explained by joint PDF through the Kolmogorov extension. Because of the existence
of the indices in both the sample space and the time domain, the averages must be well defined; in other words, the statistics must be used under rigorous conditions by identifying whether the process is stationary as well as ergodic. Although the averaged
results of mean and variance are often easily understandable, the essence of correla-
tion analysis is explained through the concept of function/variable orthogonality. In
Chapter 4, random process is further examined in the frequency domain. Based on
Series Editor
Dr. Franklin Cheng earned a BS (1960) at the National
Cheng-Kung University, Taiwan, and an MS (1962) at the
University of Illinois at Urbana-Champaign. He gained indus-
trial experience with C. F. Murphy and Sargent & Lundy in
Chicago, Illinois. Dr. Cheng then earned a PhD (1966) in civil
engineering at the University of Wisconsin, Madison. Dr. Cheng
joined the University of Missouri, Rolla (now named Missouri
University of Science and Technology) as assistant professor
in 1966 and then associate professor and professor in 1969 and
1974, respectively. In 1987, the board of curators of the univer-
sity appointed him curators’ professor, the highest professorial position in the sys-
tem comprising four campuses. He has been Curators’ Professor Emeritus of Civil
Engineering since 2000. In 2007, the American Society of Civil Engineers recog-
nized Dr. Cheng’s accomplishments by electing him to honorary membership, which
is now renamed as distinguished membership. Honorary membership is the highest
award the society may confer, second only to the title of ASCE president. Honorary
members on this prestigious and highly selective list are those who have attained
acknowledged eminence in a branch of engineering or its related arts and sciences.
By 2007, 565 individuals had been elected to this distinguished grade of membership since 1853. For the year 2007, only 10 honorary members were
selected from more than 14,000 members.
Dr. Cheng was honored for his significant contributions to earthquake structural
engineering, optimization, nonlinear analysis, and smart structural control and for
his distinguished leadership and service in the international engineering community,
as well as for being a well-respected educator, consultant, author, editor, and mem-
ber of numerous professional committees and delegations. His cutting-edge research
helped recognize the vital importance of the possibilities of automatic computing
in the future of civil engineering. He was one of the pioneers in allying computing
expertise to the design of large and complex structures against dynamic loads. His
research expanded over the years to include the important topics of structural opti-
mization and design of smart structures. In fact, he is one of the foremost experts in
the world on the application of structural dynamics and optimization to the design of
structures. Due to the high caliber and breadth of his research expertise, Dr. Cheng
has been regularly invited to serve on the review panels for the National Science
Foundation (NSF), hence setting the direction of future structural research. In addi-
tion, he has been instrumental in helping the NSF develop collaborative research
programs with Europe, China, Taiwan, Japan, and South Korea. Major industrial
corporations and government agencies have sought Dr. Cheng’s consultancy. He
has consulted with Martin Marietta Energy Systems, Inc., Los Alamos National Laboratory, Kajima Corporation, Martin & Huang International, Inc., and others.
Dr. Cheng received four honorary professorships from China and chaired 7 of his
24 NSF delegations to various countries for research cooperation. He is the author
of more than 280 publications, including 5 textbooks: Matrix Analysis of Structural
Dynamics: Applications and Earthquake Engineering, Dynamic Structural Analysis,
Smart Structures: Innovative Systems for Seismic Response Control, Structure
Optimization—Dynamic and Seismic Applications, and Seismic Design Aids for
Nonlinear Pushover Analysis of Reinforced Concrete and Steel Bridges. Dr. Cheng
has received numerous honors and awards, including Chi Epsilon, MSM–UMR
Alumni Merit for Outstanding Accomplishments, Faculty Excellence Award,
Halliburton Excellence Award, and recognitions in 21 biographical publications,
such as Who’s Who in Engineering and Who’s Who in the World. He has twice
received the ASCE State-of-the-Art Award, in 1998 and 2004.
Section I
Basic Probability Theory
1 Introduction
1.1.2.1 Concept of Vibration
Vibration is a repetitive motion of objects relative to a stationary frame of reference or nominal position (usually equilibrium). It refers to mechanical oscillations about an equilibrium point. The oscillations may be periodic (such as the motion of a pendulum),
transient (such as the impact response of vehicle collision), or random (such as the
movement of a tire on a gravel road).
The common-sense notion of vibration is that of an object moving back and forth: a swinging mass, a car driving on a bumpy road, an earthquake, a rocking boat, tree branches swaying in the wind, a heartbeat; vibration is everywhere.
Let us consider the generality of the above examples. What these examples have
in common can be seen from the following vibrational responses:
First, all of them possess mass, damping, and stiffness. Second, potential energy
and kinetic energy are exchanged, and third, a vibration system has frequency and
damping ratio. In addition, vibration has a certain shape function. To see these gen-
eral points, let us consider an example shown in Figure 1.1.
Vibration can also be seen as the responses of a system due to certain excitations.
In Figure 1.1a, a flying airplane will be excited by flowing air and its wings will
vibrate accordingly. Because of the uncertainty with how air flows, at a deterministic
time point, it is difficult to predict the exact displacement of a specific location on the
wing. Or we can say that the vibration of the wing is random.
In Figure 1.1, an airplane engine is also shown. Although the engine is working at a
certain rotating speed, the corresponding vibration is mainly periodic, which is concep-
tually shown in Figure 1.1b. In this manuscript, we focus on random vibration responses.
In Figure 1.1a, the vibration of the airplane’s wing is a function of a certain loca-
tion in the air where different excitation input occurs. At virtually the same moment,
the wing vibrates accordingly. The vibration will also be seen as a function of time,
which is a more universal reference. Therefore, we use the time t, instead of other references such as the position on the road, to describe various vibrations. From either
the location in the air or the moment in time, we can realize that the amplitude of
vibration is not a constant but a variable. Generally speaking, it is a time or temporal
variable. As a comparison, the vibration in the airplane engine shown in Figure 1.1b
can be exactly seen as a deterministic function of time.
FIGURE 1.1 Different types of vibration. (a) Random vibration of airplane wings, (b) periodic vibration of airplane engine.
1.1.2.1.1 Deterministic Vibration
Let us now have a closer look at the temporal variable, which is conceptually shown
in Figure 1.1 and referred to as vibration time histories. Specifically, the vibration
time history of the engine vibration, described by displacement x(t), can be represented by a Fourier series, that is,

x(t) = x1 cos(ω1t + θ1) + x2 cos(ω2t + θ2) + …  (1.1)
where x1 is the amplitude of the particular frequency component ω1, θ1 is the phase
shift of this component, and so on. The trigonometric function cos(ωit + θi) indicates
a back-and-forth movement, the vibration.
Once the excitation of a system is removed, the vibrational response will gradu-
ally decay because a realistic system will always dissipate the vibration energy. In
this sense, we can add an energy-decaying term such as e^(−ζ1ω1t) to the above equation so that the vibrational response is written as

x(t) = x1 e^(−ζ1ω1t) cos(ω1t + θ1) + x2 e^(−ζ2ω2t) cos(ω2t + θ2) + …  (1.2)
From the above description, it can be seen that a temporal vibration function is
described by three basic quantities or parameters: namely, the amplitude (xi), the
frequency (ωi), and the phase (θi). If the energy dissipation is also considered, we
should have another term ζi.
Now, let us compare the vibrations described in Equations 1.1 and 1.2, respec-
tively. The essential difference between periodic vibration (Equation 1.1) and tran-
sient vibration (Equation 1.2) is twofold. First, periodic vibration (Equation 1.1) will,
theoretically, last “forever” whereas transient vibration (Equation 1.2) will sooner
or later die out. Second, the vibration in Equation 1.1 will repeat itself periodically
whereas the vibration in Equation 1.2 will not have repeatability.
Note that, if the ratio of frequency ω1 and ω2 is a rational number, the vibration
(Equation 1.1) is periodic. If this is not satisfied, for example, ω1/ω2 = π, we will not
have periodic vibration. Therefore, such a vibration is also referred to as transient
vibration. In this sense, the duration of the vibration is not important. Thus, the major
difference between Equations 1.1 and 1.2 is whether they can repeat themselves or not.
A closer look at both Equations 1.1 and 1.2 unveils their generality. Whenever
we choose a given time, the value of x(t) can be calculated if the frequencies ωi and
damping ratio ζi are known. Therefore, the regular vibration theory devotes most
chapters to formulating methods of how to determine ωi and damping ratio ζi, as
well as how to find the response (Equations 1.1 or 1.2), which is the investigation of
vibration systems.
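As a concrete illustration of evaluating x(t) once ωi, ζi, xi, and θi are known, here is a Python/NumPy sketch of a two-mode free-decay response in the form of Equation 1.2. Every numerical value below is made up for illustration, not taken from the book:

```python
import numpy as np

# Free-decay response in the form of Equation 1.2:
#   x(t) = sum_i x_i * exp(-zeta_i * w_i * t) * cos(w_i * t + theta_i)
# All parameter values are illustrative.
amps   = np.array([1.0, 0.4])                  # amplitudes x_i
omegas = np.array([2*np.pi*1.0, 2*np.pi*3.0])  # frequencies w_i (rad/s)
zetas  = np.array([0.05, 0.10])                # damping ratios zeta_i
thetas = np.array([0.0, np.pi/4])              # phase angles theta_i

def free_decay(t):
    """Superpose the decaying cosine components at time(s) t."""
    t = np.atleast_1d(t)[:, None]              # column of time points
    return np.sum(amps * np.exp(-zetas * omegas * t)
                  * np.cos(omegas * t + thetas), axis=1)

t = np.linspace(0.0, 5.0, 2001)
x = free_decay(t)

# The response dies out: amplitudes beyond t = 4 s fall well below
# the initial level, as expected for transient vibration.
print(np.max(np.abs(x[t > 4.0])) < 0.5 * np.max(np.abs(x)))  # True
```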
1.1.2.1.2 Vibration Systems
In regular vibration, an object that vibrates is seen as a mass system, called a vibra-
tion system. In most cases, we assume the vibration system is linear. Any engineer-
ing object that is subjected to a certain load and has responses due to the load can be
seen to have the relationship of “input-system-output.” When the input and output
are functions of time, the system is dynamic. A vibration system is dynamic. On the
other hand, if both the load and the response will not develop as time goes on, or if
the development with time is sufficiently slow so that it can be treated as constant to
time, the system is considered to be static.
The response of a dynamic system can be considered in two basic categories. The
first is that the system response versus time can continue to grow until the entire
system is broken, or the development of the response versus time will continue to
decrease until the system response dies out. In the first case, the response starts at
an origin and can continue to develop but will never come back to the origin. Such a
dynamic system is not a vibration system.
The second type of dynamic system is that the response will sometimes grow
but other times reduce when it reaches a certain peak value and then grow again,
either along the same direction or the opposite direction. Furthermore, the growing
response reaches the next peak value and starts to decrease. As mentioned previ-
ously, this repetitive motion is called vibration and thus the second type of dynamic
system is the vibration system.
It is seen that the responses of a dynamic system will continue to vary, so that we
need at least two quantities to describe the responses, namely, the amplitude of the
responses and the time at which the amplitude reaches a certain level, such as the
term xi and t in the above equation, respectively.
The responses of vibrational dynamic system, however, need at least one additional
quantity to express how fast the responses can go back and forth. This term is the fre-
quency of vibration, such as the value of ωi in Equation 1.1. From this discussion, we
can realize that the response of a vibration system must go back and forth or it is not
vibration. Therefore, the term that describes how fast the response changes values
from growing to reducing is the fundamental quantity distinguishing if a system is
a vibration system. Thus, frequency is the most important parameter for a vibration
system. In the viewpoint of vibration modal analysis for linear systems, frequency (or
more precisely, natural frequency) is the most important modal parameter.
Also from the input-system-output viewpoint, we can understand that the reason
the system will have a dynamic response is due to the input, or there must be an
amount of energy input to the system. At the same time, a real-world system will
dissipate energy, which can be understood through the second law of thermodynam-
ics. Therefore, we need another term to describe the capability of a given system that
dissipates energy. It can be seen that the larger a system’s capacity for energy dissipation, the more energy input is needed to sustain the same level of vibration. A
quantifiable term to describe the capacity of energy dissipation is called the damping
ratio, which is the second most important modal parameter of a vibration system,
such as the term ζi in the above equation.
When a mass system, such as a car, an airplane, a machine, or a structure vibrates,
we often see that, at different locations of the system, the vibration level can be
different. For example, the vibration at the driver’s seat can be notably different
from the rear passenger seat of a compact car. We thus need another parameter to express the vibration profile at different locations; such a vibration shape function is called the mode shape. Different from natural frequency and damping ratio, which
are scalars, the mode shape function is a vector, which is the third most important
parameter of modal analysis. This can be seen conceptually in Figure 1.1a. Suppose we measure the vibration not only at location 1 but also at location 2 (through location n; see Figure 1.1a); the vibrations at these different locations are likely not identical. In this case, let us assume that the vibration of the airplane wing is
deterministic, which can be expressed by Equation 1.1. In this case of n vibration
locations, the system free-decay responses in Equation 1.2 can be further written as
x1(t) = x11 e^(−ζ1ω1t) cos(ωd1t + θ11) + x12 e^(−ζ2ω2t) cos(ωd2t + θ12)
x2(t) = x21 e^(−ζ1ω1t) cos(ωd1t + θ21) + x22 e^(−ζ2ω2t) cos(ωd2t + θ22)
…
xn(t) = xn1 e^(−ζ1ω1t) cos(ωd1t + θn1) + xn2 e^(−ζ2ω2t) cos(ωd2t + θn2)  (1.3)
where ωdi = √(1 − ζi²) ωi is the damped natural frequency. The amplitudes and phase
angles at different locations actually describe the vibration shapes, which are referred to as mode shapes; the jth component of the ith mode shape contains amplitude xji and phase θji.
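A numerical sketch of Equation 1.3 may help: collecting the amplitudes xji into a mode shape matrix Phi, whose row j holds the modal amplitudes of location j, lets all n responses be formed at once. The matrix, frequencies, and damping ratios below are hypothetical values chosen for illustration:

```python
import numpy as np

# Multi-location free decay per Equation 1.3, written with a mode
# shape matrix Phi: row j holds the amplitudes x_j1, x_j2 of location j.
omegas = np.array([2*np.pi*1.0, 2*np.pi*4.0])  # natural frequencies (rad/s)
zetas  = np.array([0.03, 0.06])                # damping ratios
omegas_d = np.sqrt(1 - zetas**2) * omegas      # damped natural frequencies

Phi = np.array([[1.0,  0.5],    # location 1: x_11, x_12
                [0.8, -0.5],    # location 2: x_21, x_22
                [0.3,  0.9]])   # location 3: x_31, x_32
thetas = np.zeros_like(Phi)     # phase angles theta_ji (taken as 0 here)

def responses(t):
    """Return the vector of displacements at all locations at time t."""
    modal = np.exp(-zetas * omegas * t) * np.cos(omegas_d * t + thetas)
    return np.sum(Phi * modal, axis=1)

# At t = 0 each location starts at the row sum of Phi: 1.5, 0.3, 1.2.
print(responses(0.0))
```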
Again, suppose a system is linear. In this case, the three terms mentioned previ-
ously, natural frequency, damping ratio, and mode shape, are the most important parameters of a vibration system: the set of modal parameters. These parameters are,
in most cases, deterministic values. In this manuscript, we assume that our vibration
systems have deterministic modal parameters.
In a linear system, the ratio between the output and input measured at certain
locations is a function of frequency (as well as damping ratio), which is referred to as
a transfer function, and is the most important parameter describing a linear vibration
system. Therefore, the general relationship of input-system-output can be further
written as input-transfer function-output.
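For a linear SDOF system m·ẍ + c·ẋ + k·x = f(t), the displacement transfer function takes the standard form H(ω) = 1/(k − mω² + jcω). The sketch below (with illustrative parameter values, not from the text) confirms the property discussed above: the magnitude of H peaks near the natural frequency ωn = √(k/m):

```python
import numpy as np

# Transfer function of a linear SDOF system m*x'' + c*x' + k*x = f(t):
#   H(w) = 1 / (k - m*w**2 + 1j*c*w)
m = 1.0
k = (2 * np.pi * 2.0)**2            # stiffness giving w_n = 2*pi*2 rad/s
zeta = 0.05
c = 2 * zeta * np.sqrt(k * m)       # damping coefficient from damping ratio

def H(w):
    """Displacement-per-force transfer function at frequency w (rad/s)."""
    return 1.0 / (k - m * w**2 + 1j * c * w)

w = np.linspace(0.1, 2 * np.sqrt(k / m), 5000)
mag = np.abs(H(w))

# The magnitude peaks near the natural frequency w_n = sqrt(k/m).
w_peak = w[np.argmax(mag)]
print(abs(w_peak - np.sqrt(k / m)) < 0.2)  # True
```

For light damping the peak actually sits at ωn·√(1 − 2ζ²), slightly below ωn, which is why a small tolerance is used in the check.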
1.1.2.1.3 Random Vibrations
Based on the theory of vibration discussed above, once the vibration system is
known, with given forcing functions or initial conditions (or both), the responses of
deterministic vibrations can be calculated as long as the forcing function and initial
conditions are deterministic. Mathematically, the procedure is to find the solution of
a governing equation of motion of a vibration system, namely, the parameters ωi, ζi,
xij, and θij.
On the other hand, for random vibrations, we do not have these deterministic
parameters ωi, ζi, xij, and θij in the closed forms of vibration responses described in
Equations 1.1 or 1.2. The reason is, in general, we do not have deterministic time
history of input or the deterministic initial conditions. In this case, even if the exact
characteristics of the vibration system are known, we will not be able to predict the
amplitude of vibration at a given time. Also, for a given value of vibration amplitude,
we do not know when it will occur.
This does not mean that the vibration is totally uncertain. With the tools of basic
random processes and statistics, we may be able to obtain the rate of occurrence of
a particular value. We may predict the major vibration frequency of a linear system,
with certain knowledge of the input statistics. We may understand the averaged value
or root mean square value of the responses, and so on. In most engineering applica-
tions, these values can be sufficient to design or control the vibration systems, to
predict the fatigue life of a machine or airplane, or to estimate the chance of the
occurrence of some extreme values and the possibility of failures of certain systems.
These are the major motivations for studying random vibration of a mass system.
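The following sketch illustrates the point with a toy record: a sinusoid (standing in for the dominant response of a linear system) buried in Gaussian noise. The instantaneous value is unpredictable, yet the averaged value and root mean square are stable and predictable. The signal and noise levels are illustrative assumptions:

```python
import numpy as np

# A toy "random vibration" record: sinusoid of amplitude 2 at 1 Hz
# plus zero-mean Gaussian noise with standard deviation 0.5.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 100.0, 100_000)
x = 2.0 * np.sin(2 * np.pi * 1.0 * t) + rng.normal(0.0, 0.5, t.size)

mean = np.mean(x)
rms = np.sqrt(np.mean(x**2))

# Theory: mean = 0 and RMS = sqrt(A**2/2 + sigma**2)
#       = sqrt(2.0 + 0.25) = 1.5 for A = 2, sigma = 0.5.
print(abs(mean) < 0.02)       # True: mean is near zero
print(abs(rms - 1.5) < 0.05)  # True: RMS close to the predicted value
```

Even though no single sample of x can be predicted, these averaged quantities converge to deterministic values, which is exactly what makes them usable in design.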
A random vibration, like a deterministic vibration, is a dynamic process.
Therefore, although the vibration responses cannot be written as Equations 1.1
through 1.3, what we do know is that the responses are a function of time. In other
words, random vibration belongs to the category of random process.
Similar to deterministic vibration, in this circumstance, we still stay within the
concept of a mass system. Additionally, in most cases, we have linear systems, and
thus the basic concept or relationship of “input-transfer function-output” are continu-
ously used. The only difference is, in the case of random vibrations, both inputs and
outputs are random processes. Thus, instead of devoting more thought to transfer
function as a deterministic vibration, we will focus more on the time histories of
inputs and outputs. In addition, the main methodology to account for these time
histories is through averaging.
Note that a thorough investigation of random processes can be mathematically
intensive. For the purpose of understanding the engineering of random vibration
and calculating commonly used values of random responses, the authors minimize the necessary knowledge of random process. Thus, this manuscript emphasizes only the introduction of stochastic dynamics. All the material in this manuscript is organized so that graduate students in mechanical and civil engineering can master the basics of random vibration in a one-semester course. Readers who are
interested in more advanced theory may further consult textbooks of random pro-
cess, such as those by Soong and Grigoriu (1992).
1.1.3 Arrangement of Chapters
In Section 1.2, we study the fundamental concepts of probability theory, briefly reviewing the knowledge necessary for random processes and random vibrations. Thus, we discuss only the basics of set theory, the axioms of probability, and conditional probability. In Section 1.3, we consider random variables. In particular, we emphasize the
details of normal distribution. The focus is on single random variables, including con-
tinuous and discrete variables and the important probability density and mass functions.
In Chapter 2, we further discuss the functions of random variables, and the random
distributions of input and output for vibration systems. In Chapter 3, the random pro-
cesses in the time domain are introduced, including the basic definitions and classifi-
cations, the state spaces and index sets, the stationary process, and the conditions and
calculations of ensemble and temporal averages. Correlation analysis is also discussed.
In Chapter 4, random processes in the frequency domain are discussed. The
spectral density function and its relationship with correlation functions are the key
issues. In addition, white noise and band-pass–filtered spectra are also discussed. In
Chapter 5, the random process is further considered with certain statistical proper-
ties, such as the concepts of level crossings and distributions of extrema. By analyzing level crossings, readers can relate to the handling of time-varying randomness based on random processes. This chapter also provides a knowledge base for understanding fatigue processes and engineering failure probabilities.
In Chapter 6, linear single degree of freedom (SDOF) vibration systems are considered with deterministic forcing functions and initial conditions; the emphasis is
on the vibration system itself, including basic vibration models and basic vibration
parameters. In addition, free vibration and simple forced vibration with harmonic
excitation are also considered.
In Chapter 7, the response of linear SDOF systems to random forces is discussed. Deterministic impulse responses are considered first, followed by convolution with arbitrary loading. The relationship between the impulse response function and the transfer function is also discussed as Borel's theorem is introduced. Here, random environments are considered as excitations and are treated as random processes. The mean, the correlation functions, and the spectral density functions of the response process are also discussed.
In Chapter 8, we further extend the discussion to linear multi-degree of freedom
(MDOF) systems. Proportional and nonproportional damping are discussed. The basic treatment of an MDOF system is to decouple it into SDOF systems; because of the different types of damping, the decoupling procedures differ. In Chapter 9, the concept of inverse problems is introduced, including the first and second inverse problems, to help engineers improve their estimations and minimize measurement noise.
In Chapter 10, understanding the failures of single components and total structures is introduced. The 3σ criterion, the first passage failure, and fatigue are discussed. In this chapter, the concept of failure is not only focused on materials but also on the
10 Random Vibration
1.2.1 Set Theory
Modern probability theory is based on set theory. In the following, we briefly review the basic concepts without proofs. An experiment is a procedure that leads to results referred to as outcomes. An outcome is the result of a single trial of an experiment, whereas an event is one or more outcomes of an experiment.
Set. A set is a collection of events. For example, the collection of all the vibration
peak values can be a set, and all these values have the same units. The collection of
the modal parameters, such as natural frequencies, damping ratios, and mode shapes
of the first few modes of a vibration system can be another set. Here, these param-
eters can have different units. However, readers can still find what is in common with
the second set.
Event. Now, consider the event in a set. If an event “a” occurs, it is denoted as ωa;
note that if only “a” occurs, we have ωa. When another event “b” occurs, we have ωb.
Collect those ωi’s, denoted by
ωa ∈ A (1.5)
Space.
5. All the possible events constitute a space of basic events denoted by U; U is an event that must happen. It is also called a universal set, which contains all objects, including itself. Furthermore, in engineering, these events may also be called samples, and U is the space of all the samples. In the literature, the term "space" is often used for continuous variables only.
6. Impossible event Φ. The empty set is the set that has no elements (the empty
set is uniquely determined by this property, as it is the only set that has no
elements—this is a consequence of the understanding that sets are deter-
mined by their elements):
Φ = {} (1.6)
1. A union B (A or B, A + B)
A union B is the collection of all events in either A or B (or both), denoted by
A ∪ B = A + B (1.7)
Example 1.1
2. A intersection B (A and B)
A intersection B is the portion shared by both A and B (Figure 1.3),
denoted by
A ∩ B = AB (1.8)
[Figure 1.3: Venn diagram of the intersection A ∩ B.]
Example 1.2
A ∩ B = Φ (1.9)
A ⊂ B (1.10)
(A is included in B)
or
B ⊃ A (1.11)
a. A ⊂ A (1.12)
b. If A ⊂ B, B ⊂ C then A ⊂ C (1.13)
Example 1.3
d. A ⊃ Φ (1.14)
5. A = B (1.15)
This is the special case in which sets A and B are identical; the condition is:
A = B iff A ⊂ B and B ⊂ A
c. If A = B then B = A (1.16c)
A ∩ B = Φ, A + B = U (1.17)
$A + \overline{A} = U$ (1.18a)

$A \cap \overline{A} = \Phi$ (1.18b)
A − B (1.19)
$A - B = A - A \cap B = A \cap \overline{B}$ (1.20)
A + B = B + A (1.21a)
A ∩ B = B ∩ A (1.21b)
Associativity:
(A + B) + C = A + (B + C) (1.22a)
(A ∩ B) ∩ C = A ∩ (B ∩ C) (1.22b)
Distributive laws:
(A + B) ∩ C = A ∩ C + B ∩ C (1.23a)
A ∩ B + C = (A + C) ∩ (B + C) (1.23b)
A + (B ∩ C) = (A + B) ∩ (A + C) (1.23c)
A ∩ (B + C) = (A ∩ B) + (A ∩ C) (1.23d)
De Morgan's laws:

$\overline{A_1 + A_2 + \cdots + A_n} = \overline{A_1}\,\overline{A_2}\cdots\overline{A_n}$ (1.24a)

$\overline{A_1 A_2 \cdots A_n} = \overline{A_1} + \overline{A_2} + \cdots + \overline{A_n}$ (1.24b)
Example 1.4
1. X1X2X3X4
2. X1X 2X 3X 4
3. X1X 2X 3X 4 + X1X 2X 3X 4 + X1X 2X 3X 4 + X1X 2X 3X 4
4. X1X 2X 3X 4 + X1X 2X 3X 4 + X1X 2X 3X 4 + X1X 2X 3X 4 + X1X 2X3X 4
1.2.2 Axioms of Probability
In modern science, a theory is often built from a few axioms and basic concepts; probability theory can also be systematically established in this fashion. In the above, we introduced the basic concepts; in the following, let us consider the axioms.
1.2.2.1.1 Frequency fN(A)
If, in N tests, event A occurs n times, the frequency of occurrence of A is defined as

$f_N(A) = \frac{n}{N}$ (1.25)
Equation 1.25 provides a very important starting point, which classically expresses the essence of probability: any probability can be seen as a ratio of n to N. If one can successfully and completely count n and N without any overlap, then one has found the corresponding probability. In Equation 1.25, N is the total number of tested samples and n is the total number of occurrences among the tested samples.
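The ratio in Equation 1.25 can be demonstrated with a short script; this is a hedged sketch, and the die experiment and random seed are our own choices, not from the text:

```python
import random

def relative_frequency(event, trials, rng):
    """Estimate P(A) as f_N(A) = n/N (Equation 1.25)."""
    n = sum(1 for _ in range(trials) if event(rng))
    return n / trials

rng = random.Random(0)
# Event A: a fair six-sided die shows an even face; exact P(A) = 1/2.
p_hat = relative_frequency(lambda r: r.randint(1, 6) % 2 == 0, 100_000, rng)
```

As N grows, the frequency settles near the underlying probability, which is the classical motivation for Equation 1.29.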
For the frequency of occurrence, we have the following basic relationships:
1. 0 ≤ f N(A) ≤ 1 (1.26)
2. f N(U) = 1 (1.27)
3. If AB = Φ, then fN(A + B) = fN(A) + fN(B) (1.28)
1.2.2.1.2 Probability
Now, with the help of the viewpoint of occurrence frequency, we have the classic
definition of probability.
n
P( A) = lim (1.29)
N → NU N
Here,
NU: in space U, the total possible number of tests
n: the number of occurrences of A
2. The probabilities of occurrence of all the ωi's are equal, that is,
1.2.2.2 Axiom of Probability
Having reviewed the classic thoughts of probability, let us introduce the axioms.
P(U) = 1 (1.32)
In general
1. P(Φ) = 0 (1.35)
2. If A1, A2, …Am are mutually exclusive, then
$P\!\left(\sum_{i=1}^{m} A_i\right) = \sum_{i=1}^{m} P(A_i)$ (1.36)
$P(\overline{A}) = 1 - P(A)$ (1.37)
If A ⊃ B, then P(A) ≥ P(B).
1.2.3.1 Conditional Probability
A conditional probability is denoted as
$P(A|B) = \frac{P(A \cap B)}{P(B)}$ (1.41)
Equation 1.41 is a normalization generating a new sample space in which the probability of B is 1; namely, B always occurs (because B has already occurred), that is, P(B|B) = 1.
Example 1.5
$P(\text{master}|\text{MAE}) = \frac{11/17}{12/17} = 11/12$
Table 1.1
Students in MAE536/CIE520
PhD Master Subtotal
MAE 1 11 12
CIE 3 2 5
Subtotal 4 13 17
Table 1.2
MAE Students
PhD Master Subtotal
MAE 1 11 12
CIE 3 2 5
Subtotal 4 13 17
Table 1.3
Master Students
PhD Master Subtotal
MAE 1 11 12
CIE 3 2 5
Subtotal 4 13 17
4. “to be from MAE” → the space is shrunk from the “total classmates” to
“MAE classmates”
The above can be expressed in Table 1.2 with bold numbers.
$P(\text{MAE}|\text{master}) = \frac{11/17}{13/17} = 11/13$
1.2.3.2 Multiplicative Rules
Based on the conditional probability, we have

P(A ∩ B) = P(A)P(B|A) (1.42a)

P(A ∩ B) = P(B)P(A|B) (1.42b)
Example 1.6
Note that
and
and
if and only if
P(B) = 1 (1.44b)
1.2.3.3 Independence
The following are useful concepts of variable independence:
A and B are independent when the occurrence of A does not affect the occurrence of B; formally, P(A ∩ B) = P(A)P(B).
In addition,
Proof:
Example 1.7
$A \cap \overline{B} = A - B$

therefore

$P(A \cap \overline{B}) = P(A - B)$
1.2.3.4.1 Total Probability
If B1, B2, … Bn are mutually exclusive, P(Bi) > 0, and A is included in their union, that is,

$A \subset \sum_{i=1}^{n} B_i$ (1.48)

then

$P(A) = \sum_{i=1}^{n} P(B_i)\,P(A|B_i)$ (1.49)
Proof:
Because B1, B2, … Bn are mutually exclusive, so are A ∩ B1, A ∩ B2, … A ∩ Bn.
From Equation 1.48,

$A = A \cap \sum_{i=1}^{n} B_i$

Furthermore,

$P(A) = P\!\left(A \cap \sum_{i=1}^{n} B_i\right) = P(A \cap B_1 + A \cap B_2 + \cdots + A \cap B_n) = \sum_{i=1}^{n} P(A \cap B_i)$

From the multiplicative rule P(A ∩ B) = P(B)P(A|B) (Equation 1.42b), the above equation becomes

$P(A) = \sum_{i=1}^{n} P(B_i)\,P(A|B_i)$
Example 1.8
We see that
A = A ∩ Ba +A ∩ Bb +A ∩ Bc
and
[A ∩ Ba] ∩ [A ∩ Bb] = Φ, [A ∩ Ba] ∩ [A ∩ Bc] = Φ, [A ∩ Bb] ∩ [A ∩ Bc] = Φ
Known
P(Ba) = 20%
P(Bb) = 35%
P(Bc) = 45%
P(A|Ba) = 5%
P(A|Bb) = 7%
P(A|Bc) = 6%
we can calculate P(A) = 0.2 × 0.05 + 0.35 × 0.07 + 0.45 × 0.06 = 0.0615.
The essence of total probability is: (i) to find P(A), event A must be combined with a group of mutually exclusive events Bi; (ii) this group of disjoint events B1, B2, … Bn must satisfy

Bi ∩ Bj = Φ (1 ≤ i < j ≤ n) (1.50)
$A \subset \sum_{i=1}^{n} B_i$ (1.51)

Then Bayes' formula gives

$P(B_i|A) = \frac{P(B_i)\,P(A|B_i)}{\sum_{i=1}^{n} P(B_i)\,P(A|B_i)}$ (1.52)
Example 1.9
In Example 1.8, one product is found to be defective (the condition "|A"); the question is, which inspector is most likely to be responsible? We have

Therefore, we have

1. A has already occurred; we check the probability of Bi, that is, the probability of the event "Bi|A."
2. We also need to find the group of disjoint events B1, B2, … Bn (see Equation 1.50).
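The total-probability and Bayes computations of Examples 1.8 and 1.9 can be sketched numerically; the inspector labels a, b, c follow the examples, while the variable names are ours:

```python
# Production shares and defect rates from Example 1.8.
prior = {"a": 0.20, "b": 0.35, "c": 0.45}
defect_rate = {"a": 0.05, "b": 0.07, "c": 0.06}

# Total probability (Equation 1.49): P(A) = sum_i P(B_i) P(A|B_i).
p_defect = sum(prior[i] * defect_rate[i] for i in prior)

# Bayes (Equation 1.52): P(B_i|A) = P(B_i) P(A|B_i) / P(A).
posterior = {i: prior[i] * defect_rate[i] / p_defect for i in prior}
most_likely = max(posterior, key=posterior.get)
```

The posterior identifies inspector c as the most likely source of the defect, even though c's per-item defect rate is not the largest, because c inspects the largest share of products.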
1.2.4 Engineering Examples
Now, let us consider certain engineering examples of multiple occurrences.
1.2.4.1 Additive Rules
Consider two failure modes A and B that are treated as double extreme events to a
single system, say a car or a bridge,
Practically, we have two or more possible cases: (1) failure modes in parallel and
(2) failure modes in series, which can be seen in Figure 1.8a and b, respectively.
Here, the terms P1, P2, … are the probabilities of occurrences of event 1, 2, …
In Figure 1.8a, the events causing possible bridge failure occur simultaneously, for example, the combined loads of "earthquake + vehicular collision," etc. In Figure 1.8b, the events occur in sequence; say, first an earthquake occurs, and then a vehicular collision occurs.
Consider the case of combined loads on a bridge, which is listed as follows:
Scour + wind
Scour + surge
Scour + vessel collision
Scour + debris flow/landslide
Scour + fire
[Figure 1.8: (a) events P1, P2, …, Pn in parallel; (b) events P1, P2, …, Pn in series.]
Surge + wind
Surge + vessel collision
Fire + wind
3. Triple extreme events
Third, we may have triple events and so on, which are omitted here for brevity. Each individual load above has its own occurrence probability. The practical question is: what is the probability of the combined loads?
If we know the total sample space of all the combined loads, then, based on the theory of total probability and Bayes' formula, we can determine the combined probability.
1.2.4.2 Multiplication Rules
If an operation can be performed in p ways, and if for each of these ways, a second
operation can be performed in q ways, then the two operations can be performed
together in pq ways.
1.2.4.3 Independent Series
An independent series of tests satisfies
Example 1.10
Suppose we have six bulbs. Two events are shown in Figure 1.9 (six bulbs in series) and Figure 1.10 (three sets of two bulbs in series, connected in parallel), respectively. The probability of each bulb being broken is 0.2.
Question: What is the probability of total failure?
Let ωi = {the ith bulb is broken}; we have P(ωi) = 20%.
Then,
1. Event $A = \omega_1 + \omega_2 + \omega_3 + \omega_4 + \omega_5 + \omega_6$, so $\overline{A} = \overline{\omega}_1\,\overline{\omega}_2\,\overline{\omega}_3\,\overline{\omega}_4\,\overline{\omega}_5\,\overline{\omega}_6$
where p is the occurrence rate. For example, within a year, an earthquake with a
return period of 2500 years has the probability
$p = \frac{1}{T_R} = \frac{1}{2500} = 0.0004$ (1.56)

and

$1 - p = 0.9996$ (1.57)

The probability of seeing at least one such event in n years is

$p_n = 1 - (1 - p)^n$ (1.58)
For example, the probability in 100 years of seeing an earthquake with a return period of 2500 years is

$P_{100} = 1 - \left(1 - \frac{1}{2500}\right)^{100} = 0.0392$
More generally, for a duration of tD years,

$p_{t_D} = 1 - \left(1 - \frac{1}{T_R}\right)^{t_D}$ (1.59)

When the return period becomes a large number, the following equation is used to approximate the probability of occurrence:

$p_{t_D} \approx 1 - e^{-\frac{t_D}{T_R}} = 1 - e^{-p\,t_D}$ (1.60)
Figure 1.11 shows this uniform distribution. Employing the uniform probability distribution implies that we have no reason not to: that is, we have no reason to believe that the chance of seeing the event in year i is greater than in year j. In many cases,
[Figure 1.11: uniform annual occurrence probability p = 1/TR over years 1, 2, …, TR.]
using uniform distribution reflects the fact that we have not yet clearly understood
the nature of such events.
From Figure 1.11 and Equation 1.55, we can also see that

$T_R\,p = 1$ (1.61)

and, over the full return period,

$p_n = 1 - \left(1 - \frac{1}{T_R}\right)^{T_R} < 1$ (1.62)

For example, with TR = 2500,

$p_n = 1 - \left(1 - \frac{1}{2500}\right)^{2500} = 0.6322 < 1$ (1.63)
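Equations 1.56 through 1.63, together with the exponential approximation of Equation 1.60, can be verified with a few lines (a sketch; the helper name is ours):

```python
import math

T_R = 2500          # return period, years
p = 1 / T_R         # annual occurrence probability (Equation 1.56)

def p_within(years, T_R):
    """Probability of at least one occurrence in `years` years (Equation 1.58)."""
    return 1 - (1 - 1 / T_R) ** years

p_100 = p_within(100, T_R)                  # ~0.0392
p_100_approx = 1 - math.exp(-100 / T_R)     # large-T_R approximation (Equation 1.60)
p_full = p_within(T_R, T_R)                 # Equation 1.63, ~0.6322
```

Note that over the full return period the probability approaches 1 − e⁻¹ ≈ 0.632, not 1, which is exactly the point of Equation 1.63.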
Example 1.11
Suppose ten balls are in a bucket, two balls are black and eight are white (Figure 1.12).
1. For each test, pick one ball and return it to the bucket
2. Pick one ball and never return it
a. In the first test, the chance of getting a black ball is 2/10. Because the ball is returned, in the second test we also have a chance of 2/10.
b. In the first test, the chance of getting a black ball is, again, 2/10. In the second test, getting a black ball depends on the first test; the probability of picking a ball of a particular color is then a variable.
Now, suppose the chance of occurrence of a hazard follows the probability as described in this second ball game.
$p_n = 1 - \left(1 - \frac{1}{T_R}\right)\left(1 - \frac{1}{T_R - 1}\right)\left(1 - \frac{1}{T_R - 2}\right)\cdots\left(1 - \frac{1}{T_R - n + 1}\right)$ (1.64)

that is,

$p_{t_D} = 1 - \frac{T_R - 1}{T_R}\cdot\frac{T_R - 2}{T_R - 1}\cdot\frac{T_R - 3}{T_R - 2}\cdots\frac{T_R - n}{T_R - n + 1}$ (1.65)

Therefore, we have

$p_n = \frac{n}{T_R}$ (1.66)
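The telescoping of the product in Equations 1.64 and 1.65 down to Equation 1.66 can be confirmed with exact rational arithmetic (a sketch; the function name is ours):

```python
from fractions import Fraction

def p_no_replacement(n, T_R):
    """1 minus the survival product of Equation 1.64, computed exactly."""
    survive = Fraction(1)
    for k in range(n):
        survive *= Fraction(T_R - 1 - k, T_R - k)  # telescoping factors
    return 1 - survive

# The product telescopes to (T_R - n)/T_R, so p_n = n/T_R (Equation 1.66).
p_100 = p_no_replacement(100, 2500)
```

Each factor's numerator cancels the next factor's denominator, leaving (TR − n)/TR, which is why the without-replacement probability is exactly linear in n.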
1.3 Random Variables
In the above, the basic concepts of probability were reviewed. Now, let us consider
random variables, based on the theory of probability.
$x_1 = [x_{11}\; x_{12}\; \cdots\; x_{1n}]$
$x_2 = [x_{21}\; x_{22}\; \cdots\; x_{2n}]$
⋮
$x_m = [x_{m1}\; x_{m2}\; \cdots\; x_{mn}]$ (1.67)
1.3.1.2 “Two-Dimensional” Approach
Suppose variable xij is associated with a probability of occurrence; written row-wise,

$P(x_m) = [P(x_{m1}),\; P(x_{m2}),\; \ldots,\; P(x_{mn})]$ (1.68)
$P_K(k) = \begin{cases} p, & k = 1 \\ 1 - p, & k = 0 \\ 0, & \text{elsewhere} \end{cases}$ (1.70)
[Figure: PMF of the Bernoulli distribution, with mass 1 − p at k = 0 and p at k = 1.]
S = {0, 1} (1.72)
We can also let U = S = {0, 1}, in this case, Equation 1.70 is replaced by
$P_K(k) = \begin{cases} p, & k = 1 \\ 1 - p, & k = 0 \end{cases}$ (1.73)
1.3.1.5 Binomial Distribution
In n tests of a Bernoulli experiment with P(A) = p (see Section 1.2.4.3), the probability that event A occurs exactly m times is given by the binomial distribution

$P_n(m) = \begin{cases} C_n^m\, p^m (1 - p)^{n - m}, & m = 0, 1, 2, 3, \ldots, n \\ 0, & \text{elsewhere} \end{cases}$ (1.75)
[Figure: binomial PMF Pn(m), n = 30, for p = 0.1, 0.3, and 0.7.]
A couple, a father and a mother, both have mixed genes. They have three children.
Find the chance that one of them has the dominant gene.
Let the dominant gene be denoted by d and the recessive gene is denoted by r.
We see that dd is the pure dominant, rr is the pure recessive, and rd is the mixed
gene. Therefore,
1.3.1.6 Poisson Distribution
The Poisson distribution has sample space

S = {0, 1, 2, 3, 4, …} (1.76)
and PMF

$P_\Lambda(k) = \begin{cases} \dfrac{\lambda^k e^{-\lambda}}{k!}, & k = 0, 1, 2, 3, \ldots \\ 0, & \text{elsewhere} \end{cases}$ (1.77)

Note that

$\sum_{k=0}^{\infty} P_\Lambda(k) = e^{-\lambda} \sum_{k=0}^{\infty} \frac{\lambda^k}{k!} = e^{-\lambda}\,e^{\lambda} = 1$ (1.78)
[Figure: Poisson PMF PΛ(k) for λ = 2.5, 5, and 10.]
Example 1.13
The number of times an oscillating mass crosses a certain level in a specified time interval (0, t) can be described as

$P_N(n) = \begin{cases} \dfrac{(\lambda t)^n e^{-\lambda t}}{n!}, & n = 0, 1, 2, 3, \ldots \\ 0, & \text{elsewhere} \end{cases}$ (1.79)
Here, the notation (0, t), or more generally (a, b), stands for an open interval excluding its two ends (0 and t, or a and b), whereas [a, b] is a closed interval including a and b, such as [0, 1] in Equation 1.84a. These two notations will be used throughout this book.
1.3.1.7 Poisson Approximation
In a Bernoulli test, if n is sufficiently large and p is sufficiently small, we can use
Poisson distribution to approximately calculate the corresponding probability, that is,
$C_n^k\, p^k (1 - p)^{n - k} \approx \frac{\lambda^k e^{-\lambda}}{k!}$ (1.80)
where
λ = np (1.81)
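The quality of the approximation in Equation 1.80 can be seen by comparing the two PMFs directly; the n and p values below are our own choice of a "large n, small p" case:

```python
import math

def binom_pmf(k, n, p):
    """Binomial probability C(n, k) p^k (1-p)^(n-k)."""
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

def poisson_pmf(k, lam):
    """Poisson probability lambda^k e^-lambda / k!."""
    return lam**k * math.exp(-lam) / math.factorial(k)

# Equation 1.80 with n = 100, p = 0.01, so lambda = np = 1 (Equation 1.81).
n, p = 100, 0.01
lam = n * p
max_abs_err = max(abs(binom_pmf(k, n, p) - poisson_pmf(k, lam)) for k in range(10))
```

For this case the two PMFs agree to within a few parts in a thousand at every k, which is what justifies the approximation used in Example 1.14.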
Example 1.14
Question (1): How many technicians are needed so that the probability of an instrument not being fixed is less than 0.005?
Question (2): If one technician handles 20 instruments, find the probability of an instrument not being fixed.
Question (3): If three technicians are responsible for 80 instruments, find the probability of an instrument not being fixed.
Answer:
1. Denote the number of broken instruments by x; then x follows the binomial distribution P100(x) with probability p = 0.01.
Now, assume y technicians are available; "x > y" means that x − y instruments cannot be fixed. Thus, what we need is P(x > y) ≤ 0.005.
$P(x > y) = \sum_{k=y+1}^{100} C_{100}^k\, p^k (1-p)^{100-k} = \sum_{k=y+1}^{100} C_{100}^k\, (0.01)^k (0.99)^{100-k} \approx \sum_{k=y+1}^{100} \frac{\lambda^k e^{-\lambda}}{k!} = \sum_{k=y+1}^{100} \frac{(1)^k e^{-1}}{k!}$
Solving

$\sum_{k=y+1}^{100} \frac{(1)^k e^{-1}}{k!} \le 0.005$

gives y = 4. With the Poisson approximation,

$P(x > 4) = \sum_{k=5}^{100} \frac{(1)^k e^{-1}}{k!} = 0.0037 < 0.005$
2. In the group with 20 instruments, if only one is broken, the technician is available and it will be fixed. We need to consider the case of "more than one," that is, the probability P(x ≥ 2).
Using the Poisson distribution to approximate $P(0 \le x \le 1) = \sum_{k=0}^{1} C_{20}^k\, p^k (1-p)^{20-k}$, and noting that λ = np = 20(0.01) = 0.2:

$P(x \ge 2) = 1 - P(0 \le x \le 1) \approx 1 - \sum_{k=0}^{1} \frac{(0.2)^k e^{-0.2}}{k!} = 0.0176$

Note that, if the binomial distribution is used exactly,

$P(x \ge 2) = 1 - P(0 \le x \le 1) = 1 - \sum_{k=0}^{1} C_{20}^k\, p^k (1-p)^{20-k} = 0.0169$
3. With three technicians for 80 instruments, λ = np = 80(0.01) = 0.8, and an instrument cannot be fixed when x ≥ 4:

$P(x \ge 4) = 1 - P(0 \le x \le 3) \approx 1 - \sum_{k=0}^{3} \frac{(0.8)^k e^{-0.8}}{k!} = 0.0091 < 0.0176$
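Parts 2 and 3 of Example 1.14 can be verified numerically (a sketch; the CDF helpers are ours):

```python
import math

def poisson_cdf(m, lam):
    """P(x <= m) for a Poisson variable with mean lam."""
    return sum(lam**k * math.exp(-lam) / math.factorial(k) for k in range(m + 1))

def binom_cdf(m, n, p):
    """P(x <= m) for a binomial variable."""
    return sum(math.comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(m + 1))

# Part 2: one technician, 20 instruments, p = 0.01, lambda = 0.2.
p_poisson = 1 - poisson_cdf(1, 0.2)    # ~0.0175 (the text rounds to 0.0176)
p_exact = 1 - binom_cdf(1, 20, 0.01)   # ~0.0169

# Part 3: three technicians, 80 instruments, lambda = 0.8.
p_three = 1 - poisson_cdf(3, 0.8)      # ~0.0091
```

Pooling 80 instruments under three technicians roughly halves the probability that an instrument goes unfixed, compared with one technician per 20 instruments.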
1.3.1.8.1 Essence of PMF
The purpose of the PMF can be understood as using a two-dimensional deterministic description to treat one-dimensional random variables. First, we sort the outcomes, using integers to index these "cases"; in so doing, the random outcomes are given a deterministic arrangement on the x axis. Then, we assign each position j its fixed probability p(x); in so doing, the randomness is captured by deterministic values on the y axis. That is, the PMF allows us to use a two-dimensional deterministic relationship between values and their probabilities to treat the original one-dimensional random variable. This is the essence of the PMF.
1.3.1.8.2 Basic Property
The basic property of PMF is the basic property of generic probability, that is,
0 ≤ PN(n) ≤ 1 (1.82)
ΣPN(n) = 1 (1.83)
Therefore, any function that satisfies Equations 1.82 and 1.83 is a probability dis-
tribution function for discrete random variables.
1.3.2.1.2 Sample Space
Compare the sample space. It is seen that
Example 1.15
$A(a < X \le b) = \int_a^b f(x)\,dx$ (1.85)
1.3.2.2.2 Probability
The idea implied in Equation 1.85 can be used for the distribution function of continuous variables. Suppose the distribution can be expressed as a PDF fX(x) at location x. The corresponding probability may then be written as an integral (see Figure 1.16):
$P(a < X \le b) = \int_a^b f_X(x)\,dx$ (1.86)
[Figure 1.16: P(a < X ≤ b) equals the area under fX(x) between a and b.]
1.3.2.2.3 Axiom Requirement
To confirm that Equation 1.86 is indeed a probability, let us check the axiom requirements, that is,

1. $f_X(x) \ge 0$ (1.87)

and

2. $\int_{-\infty}^{\infty} f_X(x)\,dx = 1$ (1.88)

For a sample space on the interval [a, b] (e.g., see Equation 1.84a), we alternatively have

$\int_a^b f_X(x)\,dx = 1$ (1.89)
Because the density function satisfies the basic axioms, we realize that the integral of the density function is indeed a probability. We thus call fX(x) the probability density function (PDF).
For an infinitesimal interval,

$P(x < X \le x + dx) = \int_x^{x+dx} f_X(u)\,du = f_X(x)\,dx$ (1.90a)
From now on, we will use both concepts of PMF and PDF as fundamental
approaches to deal with random variables.
1.3.2.2.5 Property of PDF
Similar to the PMF, the PDF has basic properties. First, fX < 0 is impossible. However, fX > 1 is entirely possible at some points; on the other hand, fX → ∞ is not permitted. Therefore, the basic property of the PDF fX(x) can be expressed as
In the following, let us consider some important PDFs. Perhaps normal distribution
is the most important one; however, it will be discussed in Section 1.3.5 separately.
1.3.2.3 Uniform Distribution
The uniform distribution has PDF given by
$f_U(x) = \begin{cases} \dfrac{1}{b - a}, & a < x \le b \\ 0, & \text{elsewhere} \end{cases}$ (1.92)
For example, the possible phase angle of a sinusoidal signal is (see Figure 1.17)
$f_\Theta(\theta) = \begin{cases} \dfrac{1}{2\pi}, & 0 < \theta \le 2\pi \\ 0, & \text{elsewhere} \end{cases}$
Example 1.16
Buses arrive at a bus station at 8:00, 8:15, and 8:30. A passenger arrives at the station at a time uniformly distributed over the 30 minutes from 8:00 to 8:30. Find the probability that (1) he can take a bus within 5 minutes; (2) he must wait for more than 10 minutes.

1. Uniform distribution with density 1/30 (per minute).
To wait less than 5 minutes, he must arrive between 8:10 and 8:15 or between 8:25 and 8:30:
[Figure 1.17: uniform PDF fΘ(θ) = 1/(2π) on (0, 2π].]
P[(8:10 < x < 8:15) ∪ (8:25 < x < 8:30)] = P(8:10 < x < 8:15) + P(8:25 < x < 8:30)
$= \int_{10}^{15} \frac{1}{30}\,dx + \int_{25}^{30} \frac{1}{30}\,dx = 1/3$
2. Only if he arrives between 8:00 and 8:05 or between 8:15 and 8:20 must he wait more than 10 minutes:

P[(8:00 < x < 8:05) ∪ (8:15 < x < 8:20)] = P(8:00 < x < 8:05) + P(8:15 < x < 8:20)

$= \int_{0}^{5} \frac{1}{30}\,dx + \int_{15}^{20} \frac{1}{30}\,dx = 1/3$
Again, the reason for employing the uniform distribution is that we could not find a reason that the distribution is not uniform: between 8:00 and 8:30, we do not know the pattern or rule (if there is one) by which a bus arrives at the station.
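Both answers in Example 1.16 are areas under the constant density 1/30, and can be checked with exact fractions (a sketch; the interval encoding is ours):

```python
from fractions import Fraction

density = Fraction(1, 30)  # uniform PDF over the 30-minute window

def prob(intervals):
    """P(X in a union of disjoint intervals), measured in minutes after 8:00."""
    return sum((b - a) * density for a, b in intervals)

p_short_wait = prob([(10, 15), (25, 30)])  # wait < 5 minutes
p_long_wait = prob([(0, 5), (15, 20)])     # wait > 10 minutes
```

Each event occupies 10 of the 30 minutes, so both probabilities are exactly 1/3.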
1.3.2.4 Exponential Distribution
The exponential distribution has PDF given by

$f_\Lambda(x) = \lambda e^{-\lambda x}, \quad x > 0$

where λ is a positive constant.
Example 1.17
The total number of months a car runs without a mechanical problem is a discrete variable, but the total running time is a continuous variable, with PDF (see Figure 1.18)

$f_\Lambda(x) = \lambda e^{-\frac{x}{120}}, \quad x > 0$

[Figure 1.18: PDF of this exponential distribution over time (0 to 120 years).]
Example 1.18
1. Because

$\int_{-\infty}^{\infty} f_\Lambda(x)\,dx = 1 \;\Rightarrow\; 1 = \int_0^{\infty} \lambda e^{-\frac{x}{120}}\,dx = 120\lambda$

We have

$\lambda = \frac{1}{120}$

Thus,

$P_\Lambda(x < 120) = \int_0^{120} \frac{1}{120}\, e^{-\frac{x}{120}}\,dx = -\left(e^{-1} - e^{0}\right) = 0.63$
Compare with the Poisson distribution for discrete variables (see Equation 1.77):

$P_\Lambda(k) = \frac{\lambda^k}{k!}\, e^{-\lambda}, \quad k = 0, 1, 2, 3, \ldots$
where integer k denotes the kth event.
It is generally acknowledged that the number of telephone calls, the number of passengers at a bus station, and the number of airplanes landing at an airport during a period of given length can be modeled by discrete Poisson distributions. Now, if we count the number of events x occurring during the interval (0, t), then x ~ PΛ(λt). This example shows the essential difference between discrete and continuous distributions; the reader may consider what plays the role of the variable in each case.
$f_\Lambda(t) = \lim_{\Delta t \to 0} \frac{\Delta P}{\Delta t} = \lim_{\Delta t \to 0} \frac{P(\tau \le t + \Delta t) - P(\tau \le t)}{\Delta t} = \frac{d}{dt}P(\tau \le t) = \lambda e^{-\lambda t}$
Let ξ denote the life span of a bridge (or a car, a machine, and so on). If the bridge has already lasted τ years, the probability distribution of lasting another t years has nothing to do with the first τ years; in other words, the first τ years are not "memorized." This is the distribution of "always young."
For real-world buildings, bridges, machines, airplanes, and cars, which are not "always young," care must be taken when using exponential distributions.
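The "always young" (memoryless) property and the 0.63 result of Example 1.18 can both be checked numerically; the values of τ and t below are arbitrary choices of ours:

```python
import math

lam = 1 / 120  # rate from Example 1.17 (mean life of 120 time units)

def cdf(x):
    """Exponential CDF: F(x) = 1 - exp(-lam * x) for x > 0."""
    return 1 - math.exp(-lam * x) if x > 0 else 0.0

p_within_mean = cdf(120)  # = 1 - e**-1, ~0.63 as in Example 1.18

# Memoryless property: P(X > tau + t | X > tau) equals P(X > t).
tau, t = 40, 60
p_conditional = (1 - cdf(tau + t)) / (1 - cdf(tau))
p_fresh = 1 - cdf(t)
```

The conditional survival probability is identical to the unconditioned one, which is precisely the property that makes the exponential model questionable for aging structures.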
[Figure: CDF of the exponential distribution over time (0 to 120 years), with the points t and t + Δt marked.]
Recall that we used

$p_{t_D} = 1 - e^{-\frac{t_D}{T_R}} = 1 - e^{-p\,t_D}$

to approximate the probability of occurrence of an extreme event within tD years, when the return period of such an event is TR; see Equation 1.59.
The occurrence of such an event should be “memoryless.”
τ ≤ t = (t1 < t |τ=t1 ) ∪ (t2 < t |τ=t2 ) ∪ (t3 < t |τ=t3 ) (1.96)
[Figure: probability density function curves for σ = 0.5 and σ = 1.0 versus h.]
[Figure: occurrence times t1, t2, t3 on the interval (0, t).]
P(τ ≤ t ) = P[(t1 < t |τ=t1 ) ∪ (t2 < t |τ=t2 ) ∪ (t3 < t |τ=t3 ) ] (1.97)
Note that event (t1 < t |τ=t1 ) and event (t2 < t |τ=t2 ) are not mutually exclusive so
that using Equation 1.97 to calculate the probability P(τ ≤ t) is extremely challenging.
However, from another angle,
1.3.3.2.1 Definition of CDF
Equation 1.98 computes the probability of cumulative events; the result is referred to as the cumulative distribution function (CDF), denoted by F(t) and written as

$F_\Lambda(t) = \int_0^t f_\Lambda(u)\,du$

In the above equation, the variable "τ" does not appear.
Generally, we can write
$F_X(x) = P(X \le x) = \int_{-\infty}^{x} f_X(u)\,du$ (1.99)
1. Range of CDF
0 ≤ FX(x) ≤ 1 (1.100)
2. Nondecreasing: if x2 ≥ x1, then FX(x2) ≥ FX(x1) (1.101)

The PDF is recovered by differentiation:

$f_X(x) = \frac{dF_X(x)}{dx}$ (1.102)
1.3.3.3.1 Probability Computation
If the PDF is known, we can find the corresponding CDF through integration.
Example 1.20
Given earthquake records in the past 250 years, suppose we have the following
data of PGA as
Table 1.4
Exceedance Peak Ground Acceleration (U.S. Geological Survey)
PGA > 0.01 g PGA > 0.02 g ….
Frequency (probability) p0.01 p0.02
Table 1.5
Nonexceedance Peak Ground Acceleration
PGA < 0.01 g PGA < 0.02 g ….
Frequency (probability) p0.01 p0.01 + p0.02
1.3.3.3.3 Curve Fit
To determine a distribution from statistical data, we can use curve-fitting techniques. Generally, using the CDF is easier than using the PDF.
$\mu_X = \sum_{\text{all } x_i} f_X(x_i)\,x_i$ (1.103)
Recall the average of a set of n samples:

$\bar{x} = \frac{\sum_{i=1}^{n} x_i}{n} = \sum_{i=1}^{n} \frac{1}{n}\,x_i$ (1.105)
In set X, each xi has an equal chance of being chosen; the weighting function is 1/n, so the probability of any value xi showing up is 1/n, that is,

$f_X(x_i) = \frac{1}{n}$ (1.106)
$\mu_X = \int_{-\infty}^{\infty} f_X(x)\,x\,dx$ (1.107)
$\sigma_X^2 = \sum_{\text{all } x_i} f_X(x_i)\,(x_i - \mu_X)^2$ (1.108)

$\sigma_X^2 = \int_{-\infty}^{\infty} f_X(x)\,(x - \mu_X)^2\,dx$ (1.109)
See Figure 1.23 for geometric description of mean and variance. Mean value is
related to the first moment, centroid, or the center of mass. Variance is related to the
second moment or moment of inertia.
1.3.4.3.3 Standard Deviation σX
The standard deviation is the square root of the variance, that is,

$\sigma_X = \sqrt{\sigma_X^2}$ (1.110)
[Figure 1.23: geometric description of the mean (centroid of fX(x) at μX) and the variance (second moment about μX).]
Note that both the variance and the standard deviation can be used to denote the dispersion of a set of variables; however, the standard deviation has the same units as the variables themselves.
1.3.4.3.4 Coefficient of Variation CX
The coefficient of variation is the ratio of standard deviation and the mean value.
$C_X = \frac{\sigma_X}{\mu_X}$ (1.111)
1.3.4.4 Expected Values
Given a function of random variable g(X), the expected value denoted by E[g(X)] is
written as
$E[g(X)] = \int_{-\infty}^{\infty} f_X(x)\,g(x)\,dx$ (1.112)
$\sigma_X^2 = E[(X - \mu_X)^2]$ (1.114)

Furthermore, we have

$\sigma_X^2 = E[X^2] - \mu_X^2$ (1.115)
where g(X) and h(X) are functions of X; and α and β are deterministic scalars.
Example 1.21
Y = a X + b (1.117)
μY = E[aX + b] = a μX + b (1.118)
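The moment identities of Equations 1.114, 1.115, and 1.118 can be checked on a discrete PMF; the PMF values and the constants a, b below are arbitrary choices of ours, not from the text:

```python
# An arbitrary PMF that sums to 1.
pmf = {0: 0.1, 1: 0.4, 2: 0.3, 3: 0.2}

def mean(pmf):
    """Mean as the probability-weighted sum (Equation 1.103)."""
    return sum(p * x for x, p in pmf.items())

def var(pmf):
    """Variance about the mean (Equation 1.108)."""
    m = mean(pmf)
    return sum(p * (x - m) ** 2 for x, p in pmf.items())

mu_x, var_x = mean(pmf), var(pmf)
var_alt = sum(p * x * x for x, p in pmf.items()) - mu_x**2  # Equation 1.115

a, b = 3.0, -2.0
pmf_y = {a * x + b: p for x, p in pmf.items()}  # Y = aX + b (Equation 1.117)
```

The linear map shifts and scales the mean as in Equation 1.118, while the variance scales by a², consistent with Equation 1.115.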
1.3.5.1 Standardized Variables Z
First, consider a standardized variable defined as

$Z = \frac{X - \mu_X}{\sigma_X}$ (1.119)

Its mean and variance are

$\mu_Z = E\!\left[\frac{X - \mu_X}{\sigma_X}\right] = \frac{1}{\sigma_X}E[X - \mu_X] = \frac{1}{\sigma_X}\{E[X] - \mu_X\} = 0$ (1.120)

$\sigma_Z^2 = E\!\left[\left(\frac{X - \mu_X}{\sigma_X}\right)^{\!2}\right] = \frac{1}{\sigma_X^2}E[(X - \mu_X)^2] = \frac{1}{\sigma_X^2}\,\sigma_X^2 = 1$ (1.121)
In the following, let us examine the distribution and additional important proper-
ties of normal random variables.
$f_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma_X}\, e^{-\frac{(x - \mu_X)^2}{2\sigma_X^2}}, \quad -\infty < x < \infty$ (1.123)

$f_Z(z) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{z^2}{2}}, \quad -\infty < z < \infty$ (1.124)
[Figure: normal PDFs with σ = 0.2, 0.4, and 0.8.]
1.3.5.4.1 General Variable
For general variables, we have
$F_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma_X} \int_{-\infty}^{x} e^{-\frac{(\xi - \mu_X)^2}{2\sigma_X^2}}\,d\xi$ (1.125)
[Figure: normal CDFs with σ = 0.2, 0.4, and 0.8.]
1.3.5.4.2 Standardized Variable
For standardized variables, we have
$F_Z(z) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{z} e^{-\frac{\xi^2}{2}}\,d\xi$ (1.126)
Example 1.22
From a hotel to the Buffalo airport, one can take the local road or the expressway. The time needed on the local road is tL ~ N(26, 5); the time needed on the expressway is tE ~ N(31, 2).
Question (1): If one has only 30 minutes, which way is better (less chance of missing the flight)?
Question (2): If one has 35 minutes, which way is better?
For the local road,

$P(t > 30) = 1 - P(t \le 30) = 1 - \frac{1}{\sqrt{2\pi}\,(5)} \int_{-\infty}^{30} e^{-\frac{(\xi - 26)^2}{2(5^2)}}\,d\xi = 1 - 0.7881 = 0.21$

For the expressway,

$P(t > 30) = 1 - P(t \le 30) = 1 - \frac{1}{\sqrt{2\pi}\,(2)} \int_{-\infty}^{30} e^{-\frac{(\xi - 31)^2}{2(2^2)}}\,d\xi = 0.6915$
Thus, the local road gives a lower probability of missing the flight.
[Figure: PDF and CDF of the standard normal distribution.]
Note that one can also use the standard normal distribution. For example, for tL ~ N(26, 5), denote

$T = \frac{t - 26}{5}\bigg|_{t=30} = 0.8, \quad 1 - \Phi(0.8) = 0.21$

2. With 35 minutes, for the local road,

$T = \frac{t - 26}{5}\bigg|_{t=35} = 1.8, \quad 1 - \Phi(1.8) = 0.036$

and for the expressway,

$T = \frac{t - 31}{2}\bigg|_{t=35} = 2, \quad 1 - \Phi(2) = 0.023$
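Example 1.22 can be reproduced with the standard normal CDF expressed through the error function (a sketch; the helper names are ours):

```python
import math

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def p_late(limit, mu, sigma):
    """P(t > limit) for travel time t ~ N(mu, sigma)."""
    return 1 - Phi((limit - mu) / sigma)

late_local_30 = p_late(30, 26, 5)   # ~0.21
late_expwy_30 = p_late(30, 31, 2)   # ~0.69 -> local road is better
late_local_35 = p_late(35, 26, 5)   # ~0.036
late_expwy_35 = p_late(35, 31, 2)   # ~0.023 -> expressway is better
```

The ranking reverses between the 30- and 35-minute budgets: the expressway has a larger mean but a much smaller spread, so it wins once the budget exceeds its mean.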
1.3.6 Engineering Applications
1.3.6.1 Probability-Based Design
Consider a component of a system, which can be a structure, a machine, a vehicle, etc. To design that component to resist a certain load, we basically have two different approaches, namely, allowed-stress design and probability-based design.
R N > QN (1.128)
where
R N and QN are, respectively, the values of nominal resistance and load.
Equation 1.128 can be realized by using a safety factor S, that is,
R N = S QN (1.129)
where RD and QD are, respectively, the design values of resistance and load. Equation 1.130 means that the probability of the event that the resistance is smaller than or equal to the load, referred to as the failure probability pf, must be smaller than the allowed value. In general, we can also write

pf = 1 − pr (1.131)
Q D = γ Q N (1.132)
and
RD = φ RN (1.133)

Here, RN and QN are, respectively, the nominal values of resistance and load; the terms γ and φ are, respectively, the load and resistance factors.
$\beta_Q = \frac{\mu_Q}{Q_N}$ (1.135)

$C_Q = \frac{\sigma_Q}{\mu_Q}$ (1.136)

$\beta_R = \frac{\mu_R}{R_N}$ (1.138a)
[Figure 1.27: normal PDFs of load and resistance versus intensity of force (tons), with the nominal load, mean load, design load, design resistance, nominal resistance, and mean resistance marked.]
$C_R = \frac{\sigma_R}{\mu_R}$ (1.138b)
The relationship between the PDFs of the load and the resistance and the failure probability pf is shown in Figure 1.27, where the darker line is the PDF of the demand fQ and the lighter line is the PDF of the resistance fR. The failure probability pf can be written as

$p_f = \int_{-\infty}^{\infty} f_Q(q) \left[\int_{-\infty}^{q} f_R(r)\,dr\right] dq$ (1.139a)
If both the demand and the resistance are normal, then the random variable R − Q is also normal; with its PDF denoted by fR−Q(z), the failure probability can be further written as

$p_f = \int_{-\infty}^{0} f_{R-Q}(z)\,dz$ (1.139b)
1.3.6.2 Lognormal Distributions
In general, load and resistance cannot be negative; in such cases, we may consider the lognormal distribution. A lognormal random variable is one whose logarithm is normally distributed. If x is a random variable with a normal distribution, then

y = e^x (1.140)

is lognormally distributed, with PDF

$f_Y(y) = \frac{1}{\sqrt{2\pi}\,\sigma_X\, y}\, e^{-\frac{(\ln y - \mu_X)^2}{2\sigma_X^2}}, \quad y > 0$ (1.141)
Here, μX and σX are the mean and standard deviation of the variable’s natural
logarithm, in which the variable’s logarithm is normally distributed.
$F_Y(y) = \Phi\!\left(\frac{\ln(y) - \mu_X}{\sigma_X}\right)$ (1.142)
For the median MY of Y,

$F_Y(M_Y) = \Phi\!\left(\frac{\ln(M_Y) - \mu_X}{\sigma_X}\right) = 0.5$

Note that Φ−1(0.5) = 0; thus

$\frac{\ln(M_Y) - \mu_X}{\sigma_X} = 0$ (1.144)

Therefore, we have

ln(MY) = μX (1.145)
or
$\mu_X = \frac{1}{2} \ln\!\left(\frac{\mu_Y^2}{1 + C_Y^2}\right)$ (1.146b)
and
$\sigma_Y = e^{\mu_X + \frac{1}{2}\sigma_X^2}\left(e^{\sigma_X^2} - 1\right)^{1/2}$ (1.147a)

or

$\sigma_Y^2 = \mu_Y^2\left(e^{\sigma_X^2} - 1\right)$ (1.147b)
F = RD − Q D (1.148)
We must have F > 0 for a safe design. The mean is

μF = μR − μQ (1.150)

and the standard deviation is

$\sigma_F = \left(\sigma_R^2 + \sigma_Q^2\right)^{1/2}$ (1.151)

The limit state is

F = 0 (1.152)

whose standardized value is

$\frac{F - \mu_F}{\sigma_F} = \frac{0 - \mu_F}{\sigma_F} = -\frac{\mu_R - \mu_Q}{\sqrt{\sigma_R^2 + \sigma_Q^2}} \equiv -\beta$ (1.153)
Example 1.23
Table 1.6
Reliability Indices and Failure Probabilities
β 2 2.5 3 3.5
pf 0.0228 0.0062 0.0013 0.0002
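Table 1.6 follows directly from pf = Φ(−β) when R − Q is normal; a short sketch (using the error-function identity for Φ) reproduces it:

```python
import math

def failure_probability(beta):
    """p_f = Phi(-beta) for normal R and Q (see Equation 1.153)."""
    return 0.5 * (1 + math.erf(-beta / math.sqrt(2)))

table = {beta: failure_probability(beta) for beta in (2.0, 2.5, 3.0, 3.5)}
```

Each half-unit increase of the reliability index β cuts the failure probability by roughly a factor of four to six in this range.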
Problems
1. Use a Venn diagram to show that P(A ∪ B) = P(A) + P(B) − P(A ∩ B), using the fact that the set (A ∪ B) can be seen as the union of the disjoint sets $(A \cap \overline{B})$, $(\overline{A} \cap B)$, and $(A \cap B)$.
2. Find the sample spaces of the following random tests:
a. The record of average score of midterm test of class CIE520/MAE536.
(Hint, we have n people and the full score is 40).
b. To continuously have 10 products up to standard, the total number of
checked products.
c. Inspecting products that are marked "C" if certified and "D" if defective; if two "D"s are checked consecutively, the inspection is stopped, or, if four products have been checked, the inspection is also stopped. Record the outcomes of the inspection.
d. The coordinates of point inside a circle.
3. A and B denote two events, respectively.
a. If $A\overline{B} = \overline{A}B$, prove A = B.
b. Suppose either A or B occurs; find the corresponding probability
4. For events A, B, and C, P(A) = 1/2, P(B) = 1/3, P(C) = 1/5, P(A ∩ B) = 1/10,
P(A ∩ C) = 1/15, P(B ∩ C) = 1/20, P(A ∩ B ∩ C) = 1/30, find
a. P(A ∪ B)
b. P( A ∪ B)
c. P(A ∪ B ∪ C)
d. P( A ∩ B ∪ C )
e. P( A ∩ B ∩ C )
f. P( A ∩ B ∩ C )
5. Calculate the probabilities of (a) P(X < 3), (b) P(X > 2), and (c) P(2 < X < 3)
with the following distributions
Poisson (λ = 1)
Uniform (a = 1, b = 4)
Rayleigh (σ = 1)
Normal (μ = 2, σ = 0.5)
6. Find the mean and standard deviation of the following distributions:
Bernoulli
Poisson
Rayleigh
Normal
FIGURE P1.1 (a) and (b)
2 Functions of Random
Variables
In Chapter 1, the basic assumptions and theory of probability were briefly reviewed, together with the concepts of random variables and their distributions. In this chapter, we discuss several important functions of random variables, which are themselves random variables. Because the basic approach to treating random variables is to investigate their distributions, to study functions of random variables we likewise need to consider the corresponding probability density function (PDF) and cumulative distribution function (CDF).
2.1.1 Dynamic Systems
2.1.1.1 Order of Systems
A dynamic system can be modeled as a differential equation. In this manuscript, the
order of a system means the highest order of differentiation.
(Figure: (a) a system mapping an input to an output; (b) a function f mapping a variable to a function value.)
2.1.1.2 Simple Systems
2.1.1.2.1 Subsystems
A system of functions can be rather complex. However, in engineering applications,
we can always break a complex system down into several subsystems. When a sys-
tem is broken down, these subsystems can be in parallel or in series (see Figure 1.8a
and b, respectively, where all the “Pi ” symbols can be replaced by “subsystem i”). In
the following, let us consider basic subsystems.
Y = aX  (2.1)

Y = dX/dt  (2.2)

Y = ∫ X dt + C  (2.3)

Y = a0 + a1X + a2X² + …  (2.4)
(2) Σ_{all j} Σ_{all k} p_jk = 1  (2.8)
See Table 2.1 for a list of joint distributions.
Table 2.1
Joint Distribution

        k = 1   k = 2   …
j = 1   p11     p12
j = 2   p21     p22
…
Example 2.1

Among truck buyers, 50% purchase American pickups, 20% purchase Japanese pickups, and 30% purchase pickups from other countries. Two customers are selected at random; denote by A and J the numbers of American and Japanese pickup buyers among them. Find the joint distribution.

The possible values of A and J are 0, 1, and 2 (Table 2.2). When A = a (a people buy American) and J = j (j people buy Japanese), 2 − a − j people buy others. Therefore,
Table 2.2
Joint Distribution of A and J
J
A j=0 j=1 j=2
a=0 0.09 0.12 0.04
a=1 0.3 0.2 0
a=2 0.25 0 0
Table 2.3
Joint and Marginal Distributions of A and J
J
A j=0 j=1 j=2 pA(a)
a=0 0.09 0.12 0.04 0.25
a=1 0.3 0.2 0 0.5
a=2 0.25 0 0 0.25
pJ(j) 0.64 0.32 0.04 1
Equation 2.11 can be shown graphically as in Figure 2.2. Generally, the ranges of x and y are

−∞ < x < ∞, −∞ < y < ∞  (2.12)

For a small element at (x, y),

P[(x < X ≤ x + dx) ∩ (y < Y ≤ y + dy)] ≈ fXY(x, y) dx dy  (2.13)

(2) ∫∫ fXY(x, y) dx dy = 1  (2.15)

FXY(x, y) = ∫_{−∞}^{y} ∫_{−∞}^{x} fXY(u, v) du dv  (2.17)

fXY(x, y) = ∂²FXY(x, y)/∂x∂y  (2.18)

The marginal CDF of X is

FX(x) = ∫_{−∞}^{x} ∫_{−∞}^{∞} fXY(u, v) dv du

and the marginal CDF of Y is

FY(y) = ∫_{−∞}^{y} ∫_{−∞}^{∞} fXY(u, v) du dv
The PDF of X is

fX(x) = dFX(x)/dx = ∫_{−∞}^{∞} fXY(x, y) dy  (2.21)

The PDF of Y is

fY(y) = dFY(y)/dy = ∫_{−∞}^{∞} fXY(x, y) dx  (2.22)
Example 2.2

Suppose random variables X and Y have joint PDF fXY(x, y) = c e^{−2(x+y)} for x, y > 0 and zero elsewhere. (1) Find c; (2) find FXY(x, y); (3) find P[(X, Y) ∈ C], where C is the triangular region x + y ≤ 1, x, y ≥ 0.

(1) ∫_{−∞}^{∞} ∫_{−∞}^{∞} fXY(u, v) du dv = ∫_0^∞ ∫_0^∞ c e^{−2(x+y)} dx dy = c ∫_0^∞ e^{−2x} dx ∫_0^∞ e^{−2y} dy = c/4 → c = 4

(2) FXY(x, y) = ∫_{−∞}^{y} ∫_{−∞}^{x} fXY(u, v) du dv = ∫_0^y ∫_0^x 4 e^{−2(u+v)} du dv = (1 − e^{−2x})(1 − e^{−2y}),  x, y ≥ 0

(Figure: the region C bounded by the coordinate axes and the line x + y = 1.)

(3) P[(X, Y) ∈ C] = ∫∫_C fXY(x, y) dx dy = 4 ∫_0^1 dx ∫_0^{1−x} e^{−2(x+y)} dy = 1 − 3e^{−2}
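The two closed-form results of Example 2.2 can be checked with a crude midpoint Riemann sum; a minimal sketch (the grid size and truncation of the infinite domain are arbitrary choices, not from the text):

```python
# Numeric check of Example 2.2: with f_XY(x, y) = 4*exp(-2(x+y)) on x, y > 0,
# total probability mass should be 1 and the mass of the triangle
# x + y <= 1 should be 1 - 3*exp(-2).
import math

def f_xy(x, y, c=4.0):
    return c * math.exp(-2.0 * (x + y))

n, upper = 400, 8.0          # truncate the domain; the tail beyond 8 is negligible
h = upper / n
total = 0.0
tri = 0.0
for i in range(n):
    x = (i + 0.5) * h        # midpoint rule in both directions
    for j in range(n):
        y = (j + 0.5) * h
        w = f_xy(x, y) * h * h
        total += w
        if x + y <= 1.0:
            tri += w

exact_tri = 1.0 - 3.0 * math.exp(-2.0)
```

`total` comes out near 1 and `tri` near 1 − 3e⁻² ≈ 0.594, as derived above.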
if P(Y = yj) > 0, we have

pX|Y(xi|yj) = P[(X = xi)|(Y = yj)] = P[(X = xi) ∩ (Y = yj)] / P(Y = yj)  (2.26)

and

(2) Σ_{all i} pX|Y(xi|yj) = 1  (2.28)
That is, the series pX∣Y (xi∣yj) (i = 1, 2, 3…) satisfies the two basic requirements of
probability distributions. This implies that pX∣Y (xi∣yj) (i = 1, 2, 3…) is a probability
distribution, which describes the statistical property of random variable X, under the
condition that Y = yj.
Now, we can ask a question: is the probability distribution pX∣Y (xi∣yj) equal to pX(xi)?
Generally speaking, it is not, because pX∣Y (xi∣yj) needs the condition
Y = yj. (2.29)
Here, pX∣Y (xi∣yj) is the conditional distribution. Similarly, we have

pY|X(yj|xi) = P[(Y = yj)|(X = xi)] = P[(X = xi) ∩ (Y = yj)] / P(X = xi)  (2.30)
Example 2.3
Recall the above-mentioned example shown in Table 2.4 again. Consider the con-
ditional distribution under conditions j = 0, j = 1 and j = 2, we have Table 2.5.
Table 2.4
Joint and Marginal Distributions of the Above-Mentioned Example
J
A j=0 j=1 j=2 pA(a)
a=0 0.09 0.12 0.04 0.25
a=1 0.3 0.2 0 0.5
a=2 0.25 0 0 0.25
pJ(j) 0.64 0.32 0.04 1
Table 2.5
Conditional Distributions

        P(A = a|j = 0)      P(A = a|j = 1)      P(A = a|j = 2)
A       (pJ(0) = 0.64)      (pJ(1) = 0.32)      (pJ(2) = 0.04)
a = 0   0.09/0.64 = 0.14    0.12/0.32 = 0.375   0.04/0.04 = 1
a = 1   0.3/0.64 = 0.47     0.2/0.32 = 0.625    0/0.04 = 0
a = 2   0.25/0.64 = 0.39    0/0.32 = 0          0/0.04 = 0
Sum     1                   1                   1
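The marginal and conditional tables above follow mechanically from the joint PMF; a small illustrative helper (the dictionary layout is my own, not from the text):

```python
# Reproduce Tables 2.3 and 2.5: marginal and conditional PMFs computed from
# the joint PMF of (A, J) given in Table 2.2.
joint = {  # p(A = a, J = j)
    (0, 0): 0.09, (0, 1): 0.12, (0, 2): 0.04,
    (1, 0): 0.30, (1, 1): 0.20, (1, 2): 0.00,
    (2, 0): 0.25, (2, 1): 0.00, (2, 2): 0.00,
}

# Marginals: sum the joint PMF over the other variable.
p_A = {a: sum(p for (ai, _), p in joint.items() if ai == a) for a in (0, 1, 2)}
p_J = {j: sum(p for (_, ji), p in joint.items() if ji == j) for j in (0, 1, 2)}

def p_A_given_J(a, j):
    """Conditional PMF p(A = a | J = j) = p(a, j) / p_J(j), as in Equation 2.26."""
    return joint[(a, j)] / p_J[j]
```

For instance, `p_J[0]` gives 0.64 and `p_A_given_J(1, 1)` gives 0.2/0.32 = 0.625, matching Table 2.5.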
2.1.3.2 Continuous Variables
Similar to discrete variables, Equation 2.30 can be written as

P[(X = x)|(Y = y)] = fX|Y(x|Y = y) dx = fXY(x, y) dx dy / [fY(y) dy]  (2.31)

so that

fX|Y(x|Y = y) = fXY(x, y)/fY(y)  (2.32)

Similarly, we have

fY|X(y|X = x) = fXY(x, y)/fX(x)  (2.33)
2.1.3.3 Variable Independence
Variable independence is an important concept. Treating two independent sets can
significantly reduce the efforts. On the other hand, if two sets of variables are not
independent but we mistakenly treat them as independent, severe errors may be
introduced.
X and Y are independent if

FXY(x, y) = FX(x) FY(y)

furthermore, if the joint PDF exists, this is equivalent to

fXY(x, y) = fX(x) fY(y)

Because, in that case,

fX|Y(x|Y = y) = fXY(x, y)/fY(y) = fX(x)

and, similarly,

fY|X(y|X = x) = fY(y)

conditioning on an independent variable does not change the distribution.
Example 2.4

Mr. A and Ms. B plan to meet at place C at 6:00 pm. Their arrival times are independent and uniformly distributed between 6:00 pm and 7:00 pm. Find the probability that the first arrival must wait for more than 10 minutes.

Denote by X the number of minutes after 6:00 that Mr. A arrives and by Y that for Ms. B, so fXY(x, y) = (1/60)² on the square 0 ≤ x, y ≤ 60. Consider first the event Y > X + 10:

P(X + 10 < Y) = ∫∫_{x+10<y} fXY(x, y) dx dy = ∫_{10}^{60} dy ∫_0^{y−10} (1/60)² dx = 25/72

By symmetry, the required probability is P(|X − Y| > 10) = 2 × 25/72 = 25/36.
(Figure: the square 0 ≤ x, y ≤ 60 with the lines x + 10 = y and y + 10 = x bounding the region where the wait exceeds 10 minutes.)
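Example 2.4 is easy to confirm by Monte Carlo; a minimal sketch (sample size and seed are arbitrary choices):

```python
# Monte Carlo check of Example 2.4: arrival offsets X, Y are independent
# Uniform(0, 60); the first arrival waits more than 10 minutes when
# |X - Y| > 10.  One side, P(Y > X + 10), should approach 25/72.
import random

random.seed(12345)
n = 200_000
one_side = 0
both = 0
for _ in range(n):
    x, y = random.uniform(0, 60), random.uniform(0, 60)
    if y > x + 10:
        one_side += 1
    if abs(x - y) > 10:
        both += 1

p_one_side = one_side / n   # exact value 25/72
p_wait = both / n           # exact value 25/36, by symmetry
```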
E[g(X, Y)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} g(x, y) fXY(x, y) dx dy  (2.44a)

E[X] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x fXY(x, y) dx dy  (2.44b)
E[g(X, Y)|Y = y] = ∫_{−∞}^{∞} g(x, y) fX|Y(x|Y = y) dx  (2.45)
2.1.4.3 Variance
Because we have two sets of variables, we should have two variances to describe the
dispersions, namely, for X, we have
D[X] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} (x − μX)² fXY(x, y) dx dy  (2.46)

and, for Y,

D[Y] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} (y − μY)² fXY(x, y) dx dy  (2.47)

In the following, we will continue to use E[(.)] and D[(.)] to denote the expected value and variance of the set (.).
2.1.4.4 Covariance of X,Y
The covariance of random variables X,Y is given as
σXY = E[(X − μX)(Y − μY)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} (x − μX)(y − μY) fXY(x, y) dx dy  (2.49)
2.1.4.5 Correlation Coefficient
Accordingly, the correlation coefficient is given by
ρXY = σXY/(σX σY)  (2.51)
It is seen that the range of correlation coefficients is in between −1 and 1, that is,
−1 ≤ ρXY ≤ 1 (2.52)
Equivalently,

ρXY = E[(X − μX)(Y − μY)/(σX σY)]  (2.53)
Example 2.5
Suppose X and Y are jointly normal with zero means:

fXY(x, y) = 1/(2πσXσY√(1 − ρXY²)) exp{−1/[2(1 − ρXY²)] [x²/σX² − 2ρXY xy/(σXσY) + y²/σY²]}
2.1.5 Linear Independence
2.1.5.1 Relationship between Random Variables X and Y
For two sets of random variables, the amount of linear independence can be judged
by the correlation coefficient as follows.
Y = aX + b  (2.54)

ρXY = ±1  (2.55)

Because σXY = aσX² and σY = |a|σX, we have

ρXY = { 1, a > 0; −1, a < 0 }  (2.57)
2.1.5.1.2 Independence
Now, we further consider independence by examining the following cases.
ρXY = 0 (2.58)
Because

ρXY = σXY/(σXσY)

and, when X and Y are independent,

E[XY] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} xy fXY(x, y) dx dy = ∫_{−∞}^{∞} x fX(x) dx ∫_{−∞}^{∞} y fY(y) dy = μXμY  (2.59)
Note that Equation 2.62 holds for any pair of X and Y, whether or not they are independent.
system can be seen as a function of the random input variables. Now, the CDF and
PDF of functions Y of random variables X are discussed as follows. Here, Y = f(X).
2.1.6.1 Discrete Variables
First, let us consider the following examples. Through these examples, we can real-
ize how the probability mass functions (PMF) of discrete random variables are
determined.
Example 2.6
Y = {0, 1, 4}
Therefore,
Example 2.7
Table 2.6
Probability Distribution of Random Variable X
X −2 −1 0 1 2
p(X = xi) 0.3 0.2 0.2 0.1 0.2
Table 2.7
Probability Distribution of Random Variable Y = X²
Y 0 1 4
p(Y = yi) 0.2 0.3 0.5
Table 2.8
Probability Distribution of Random Variable X
X          1     2     3     …    n
p(X = xi)  1/2   1/2²  1/2³  …    1/2ⁿ

Let Y = sin(πX/2). Note that

sin(πn/2) = { 1, n = 4k − 3; −1, n = 4k − 1; 0, n = 2k },  k = 1, 2, …

Therefore,

P(Y = −1) = Σ_{k=1}^{∞} P(X = 4k − 1) = 1/2³ + 1/2⁷ + 1/2¹¹ + … = (1/8)/(1 − 1/16) = 2/15

P(Y = 0) = Σ_{k=1}^{∞} P(X = 2k) = 1/2² + 1/2⁴ + 1/2⁶ + … = (1/4)/(1 − 1/4) = 1/3

P(Y = 1) = Σ_{k=1}^{∞} P(X = 4k − 3) = 1/2 + 1/2⁵ + 1/2⁹ + … = (1/2)/(1 − 1/16) = 8/15

Table 2.9
Probability Distribution of Random Variable Y

Y          −1     0     1
p(Y = yj)  2/15   1/3   8/15
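The three geometric series above can be summed exactly by machine; a minimal sketch (the truncation point of the series is an arbitrary choice):

```python
# Check of Example 2.7: X has PMF P(X = n) = 1/2**n and Y = sin(pi*X/2).
# Summing the series with exact rational arithmetic should give
# P(Y = -1) = 2/15, P(Y = 0) = 1/3, P(Y = 1) = 8/15.
from fractions import Fraction

p_y = {-1: Fraction(0), 0: Fraction(0), 1: Fraction(0)}
for n in range(1, 120):          # terms beyond n = 120 are negligibly small
    if n % 2 == 0:               # sin(pi*n/2) = 0
        y = 0
    elif n % 4 == 1:             # n = 4k - 3: sin = 1
        y = 1
    else:                        # n = 4k - 1: sin = -1
        y = -1
    p_y[y] += Fraction(1, 2**n)
```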
2.1.6.2 Continuous Variables
For continuous variables and the corresponding functions, similar to the case of
discrete variables, let us consider the following examples.
Example 2.8

The area X of a circle is uniformly distributed in [a, b]; the PDF is

fX(x) = { 1/(b − a), x ∈ [a, b]; 0, elsewhere },  0 < a < b

Find the PDF fY(y) = d/dy[P(Y ≤ y)] of the radius Y = √(X/π).

1. Let us find the CDF FY(y) = P(Y ≤ y). When y < 0, because

Y = √(X/π) ≥ 0

we have

FY(y) = P(Y ≤ y) = P(∅) = 0

2. Considering y ≥ 0, we have

FY(y) = P(Y ≤ y) = P(√(X/π) ≤ y) = P(X ≤ πy²) = ∫_{−∞}^{πy²} fX(x) dx

= { ∫_{−∞}^{πy²} 0 dx = 0, πy² < a
    ∫_{−∞}^{a} 0 dx + ∫_a^{πy²} 1/(b − a) dx = (πy² − a)/(b − a), a ≤ πy² < b
    ∫_{−∞}^{a} 0 dx + ∫_a^{b} 1/(b − a) dx + ∫_b^{πy²} 0 dx = 1, πy² ≥ b
and

fY(y) = d/dy[FY(y)]
= { (0)′ = 0, 0 ≤ y < √(a/π)
    [(πy² − a)/(b − a)]′ = 2πy/(b − a), √(a/π) ≤ y ≤ √(b/π)
    (1)′ = 0, y > √(b/π)

Thus,

fY(y) = { 2πy/(b − a), √(a/π) ≤ y ≤ √(b/π); 0, elsewhere }
Suppose

fX(x) = { fX(x), x ∈ (a, b); 0, elsewhere }  (2.63)

and g(x) is a differentiable monotonic function within range (a, b); that is, when x ∈ (a, b), y = g(x) falls in (α, β). Then

fY(y) = { fX[g⁻¹(y)] |d[g⁻¹(y)]/dy|, y ∈ (α, β); 0, elsewhere }  (2.65)
Proof:
Therefore,
fY(y) = d[FY(y)]/dy = d{FX[g⁻¹(y)]}/dy = fX[g⁻¹(y)] d[g⁻¹(y)]/dy  (2.70)
Because g(x) monotonically increases in range (a, b), so does g–1(y) in range (α, β).
That is,
d[g−1(y)]/dy ≥ 0
So that
fY(y) = fX[g⁻¹(y)] |d[g⁻¹(y)]/dy|  (2.71)
Therefore,
fY(y) = d[FY(y)]/dy = d{1 − FX[g⁻¹(y)]}/dy = −fX[g⁻¹(y)] d[g⁻¹(y)]/dy  (2.73)
Because g(x) monotonically decreases in range (a, b), so does g–1(y) in range
(α, β). That is,
d[g–1(y)]/dy ≤ 0
We also have

fY(y) = fX[g⁻¹(y)] |d[g⁻¹(y)]/dy|
Consider Y = aX + b with X ~ N(μX, σX). Then

g⁻¹(y) = (y − b)/a,  y ∈ (−∞, ∞)

and

fY(y) = fX[g⁻¹(y)] |d[g⁻¹(y)]/dy|
= 1/(√(2π)σX) exp{−[(y − b)/a − μX]²/(2σX²)} (1/|a|)
= 1/(√(2π)|a|σX) exp[−(y − aμX − b)²/(2a²σX²)],  y ∈ (−∞, ∞)  (2.75)
Example 2.9

Let X be uniformly distributed in (−π/2, π/2) and Y = sin(X). Then

FY(y) = 1/2 + sin⁻¹(y)/π,  −1 ≤ y ≤ 1  (2.77)
2.2.1 Discrete Variables
First, consider discrete variables by studying some examples.
Example 2.10

X1 and X2 are mutually independent and both have Bernoulli distributions with probability p (see Table 2.10). Find the PMF of Y = X1 + X2.

The range of Y is {0, 1, 2}, and P(Y = 0) = (1 − p)², P(Y = 1) = 2p(1 − p), P(Y = 2) = p² (Table 2.11).

Table 2.10
Probability Distribution of Random Variable Xi

Xi          0       1
P(Xi = xj)  1 − p   p

Table 2.11
Probability Distribution of Sums

Y           0          1            2
P(Y = yk)   (1 − p)²   2p(1 − p)    p²
The sum
Z = X + Y (2.80)
has PDF as
pZ(zk) = Σ_{all i} P(X = xi) P(Y = zk − xi),  k = 1, 2, …  (2.81)

To prove Equation 2.81, we see that pZ(zk) = P(Z = zk) = P(X + Y = zk) = Σ_{all i} P(X = xi, Y = zk − xi), k = 1, 2, …
2.2.2 Continuous Variables
Now, we extend the consideration to continuous variables. Similar to the case of
discrete variables, when X and Y are independent random continuous variables with
PDF f X(x) and f Y (y), the PDF of sum
Z = X + Y (2.82)
has PDF as
fZ(z) = ∫_{−∞}^{∞} fX(x) fY(z − x) dx  (2.83a)

or

fZ(z) = ∫_{−∞}^{∞} fX(z − y) fY(y) dy  (2.83b)
FZ(z) = P(Z ≤ z) = P(X + Y ≤ z) = ∫∫_{x+y≤z} fXY(x, y) dx dy = ∫_{−∞}^{∞} ∫_{−∞}^{z−x} fXY(x, y) dy dx
Substituting w = x + y,

FZ(z) = ∫_{−∞}^{∞} ∫_{−∞}^{z} fXY(x, w − x) dw dx = ∫_{−∞}^{z} ∫_{−∞}^{∞} fXY(x, w − x) dx dw

Furthermore,

fZ(z) = d/dz[FZ(z)] = ∫_{−∞}^{∞} fXY(x, z − x) dx  (2.84)

This is the relationship for the sum of general variables X and Y. Note that when X and Y are independent, we have

fZ(z) = ∫_{−∞}^{∞} fX(x) fY(z − x) dx
Suppose X and Y are independent standard normal variables:

fX(x) = (1/√(2π)) e^{−x²/2}  (2.85)

and

fY(y) = (1/√(2π)) e^{−y²/2}  (2.86)

The PDF of

Z = X + Y  (2.87)

is given by

fZ(z) = [1/(√(2π)·√2)] e^{−z²/[2(√2)²]}  (2.88)

That is,

Z ~ N(0, √2)  (2.89)
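Equations 2.85 through 2.89 are easily confirmed by simulation; a minimal sketch (sample size and seed are arbitrary choices):

```python
# Simulation check: the sum of two independent standard normal variables
# should have mean 0 and variance 2 (standard deviation sqrt(2)).
import random

random.seed(7)
n = 100_000
z = [random.gauss(0, 1) + random.gauss(0, 1) for _ in range(n)]

mean_z = sum(z) / n
var_z = sum((v - mean_z) ** 2 for v in z) / (n - 1)
```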
More generally, let

fX(x) = 1/(√(2π)σX) e^{−(x−μX)²/(2σX²)}  (2.90)

and

fY(y) = 1/(√(2π)σY) e^{−(y−μY)²/(2σY²)}  (2.91)

The PDF of

Z = X + Y  (2.92)

is

fZ(z) = 1/[√(2π)√(σX² + σY²)] e^{−(z−μX−μY)²/[2(σX²+σY²)]}  (2.93)

That is,

Z ~ N(μX + μY, √(σX² + σY²))  (2.94)
If

Xi ~ N(0, 1),  i = 1, 2, … n  (2.95)

then we have

Σ_{i=1}^{n} Xi ~ N(0, √n)  (2.96)

If

Xi ~ N(μXi, σXi),  i = 1, 2, … n  (2.97)

then we have

Σ_{i=1}^{n} Xi ~ N(Σ_{i=1}^{n} μXi, √(Σ_{i=1}^{n} σXi²))  (2.98)

More generally, if

Xi ~ N(μXi, σXi),  i = 1, 2, … n  (2.99)

then

Σ_{i=1}^{n} ciXi ~ N(Σ_{i=1}^{n} ciμXi, √(Σ_{i=1}^{n} ci²σXi²))  (2.100)
Suppose X and Y are independent. The product

Z = XY  (2.101)

has PDF

fZ(z) = ∫_{−∞}^{∞} (1/|x|) fX(x) fY(z/x) dx  (2.102)

or

FZ(z) = ∫_{−∞}^{z} ∫_{−∞}^{∞} (1/|x|) fX(x) fY(w/x) dx dw  (2.103)

When X and Y are not independent, they have joint PDF fXY(x, y), and

fZ(z) = ∫_{−∞}^{∞} (1/|x|) fXY(x, z/x) dx  (2.104)

or

FZ(z) = ∫_{−∞}^{z} ∫_{−∞}^{∞} (1/|x|) fXY(x, w/x) dx dw  (2.105)
2.3.2.1 Sample Variance
SX² = 1/(n − 1) Σ_{i=1}^{n} (xi − X̄)²  (2.106a)

or

SX² = 1/(n − 1) [Σ_{i=1}^{n} xi² − nX̄²]  (2.106b)
2.3.2.2 Chi-Square Distribution
Chi-square distribution is defined as

χn² = Σ_{i=1}^{n} zi²  (2.108)

where the zi are independent standard normal variables.
2.3.2.3 CDF of Chi-Square, n = 1
In the case of one degree of freedom, that is, n = 1, we have the CDF as
Fχ1²(u) = P(χ1² ≤ u) = P(z1² ≤ u) = P(−√u < z1 ≤ √u) = Φ(√u) − Φ(−√u)  (2.109)
2.3.2.4 PDF of Chi-Square, n = 1
Furthermore, in the case of one degree of freedom, that is n = 1, we have the PDF as
fχ1²(u) = [1/√(2πu)] e^{−u/2},  u > 0  (2.110)
2.3.2.5 Mean
The mean is given by
μχ1² = 1  (2.111)
2.3.2.6 Variance
The variance is
σ²χ1² = 2  (2.112)
For general n, the PDF is

fχn²(u) = u^{n/2−1} e^{−u/2} / [2^{n/2} Γ(n/2)],  u > 0  (2.113)

where the gamma function is

Γ(n/2) = ∫_0^∞ t^{n/2−1} e^{−t} dt  (2.114)
2.3.2.8 Reproductive
The chi-square distribution is reproductive, which can be written as:
χn² = Σ_{i=1}^{n} zi² = Σ_{i=1}^{k} zi² + Σ_{i=k+1}^{n} zi² = χk² + χ²n−k  (2.115)
2.3.2.9 Approximation
When n is sufficiently large, say
n > 25
and let

Y = χn²  (2.116)

Then, approximately,

(Y − n)/√(2n) ~ N(0, 1)  (2.117)
2.3.2.10 Mean of Y
Consider the mean
μχn² = n  (2.118)
2.3.2.11 Variance of Y
The variance is given by
σ²χn² = 2n  (2.119)
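The defining sum of Equation 2.108 and the moments in Equations 2.118 and 2.119 can be checked together by simulation; a minimal sketch (degrees of freedom, sample size, and seed are arbitrary choices):

```python
# Simulation check: a chi-square variable with n degrees of freedom is a sum
# of n squared standard normals; its mean should be n and variance 2n.
import random

random.seed(42)
n_dof, trials = 5, 100_000
samples = [sum(random.gauss(0, 1) ** 2 for _ in range(n_dof))
           for _ in range(trials)]

mean_chi2 = sum(samples) / trials
var_chi2 = sum((s - mean_chi2) ** 2 for s in samples) / (trials - 1)
```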
2.3.2.12.1 PDF of χ
First, the PDF is given as
fχn(v) = [1/Γ(n/2)] (v²/2)^{n/2−1} v e^{−v²/2},  v > 0  (2.120)

For n = 2, letting H = σχ2 recovers the Rayleigh distribution

fH(h) = (h/σ²) e^{−h²/(2σ²)},  h > 0  (2.121)
2.3.2.12.3 Variance of χ
The variance of χ is
σ²χn = n − μ²χn  (2.123)
2.3.2.13.1 PDF
The PDF of gamma distribution is given by
fX(x) = [λ/Γ(r)] (λx)^{r−1} e^{−λx},  x > 0  (2.124)
2.3.2.13.2 Mean
The mean of gamma distribution is
μX = r/λ  (2.125)
2.3.2.13.3 Variance
The variance of gamma distribution is
σX² = r/λ²  (2.126)
In engineering statistics, we often calculate the mean values and variances of sam-
ples. Let us first consider the sample variance as follows.
Multiplying both sides of Equation 2.106a by (n − 1)/σX² results in

[(n − 1)/σX²] SX² = (1/σX²) Σ_{i=1}^{n} (xi − X̄)²  (2.127)

The sum on the right can be expanded as

Σ_{i=1}^{n} [(xi − μX)/σX − (X̄ − μX)/σX]²
= Σ_{i=1}^{n} {[(xi − μX)/σX]² − 2[(xi − μX)/σX][(X̄ − μX)/σX] + [(X̄ − μX)/σX]²}
= Σ_{i=1}^{n} [(xi − μX)/σX]² − n[(X̄ − μX)/σX]²

Thus,

[(n − 1)/σX²] SX² = Σ_{i=1}^{n} [(xi − μX)/σX]² − n[(X̄ − μX)/σX]² = Σ_{i=1}^{n} [(xi − μX)/σX]² − Σ_{i=1}^{n} [(X̄ − μX)/σX]²  (2.128)

Note that

Σ_{i=1}^{n} [(xi − μX)/σX]² = χn²  (2.129)

and it can be proven that (see Section 2.5.1.2, the Lindeberg–Levy theorem)

Σ_{i=1}^{n} [(X̄ − μX)/σX]² = χ1²  (2.130)
and therefore

[(n − 1)/σX²] SX² = χn² − χ1² = χ²n−1  (2.131)
The quotient

Z = X/Y  (2.132)

has PDF

fZ(z) = ∫_{−∞}^{∞} |y| fXY(yz, y) dy  (2.133)

When X and Y are independent,

fZ(z) = ∫_{−∞}^{∞} |y| fX(yz) fY(y) dy  (2.134)
where f X(x) and f Y (y) are, respectively, the PDF of variables X and Y.
2.3.3.2 Student’s Distribution
2.3.3.2.1 Student’s Random Variable
Random variable with Student’s distribution, denoted by Tn is a ratio of a standard
normal variable Z to the square root of a chi-square variable divided by its degree of
freedom, that is,
Tn = Z/√(χn²/n)  (2.135)

Its PDF is

fT(t) = Γ[(n + 1)/2] / [√(nπ) Γ(n/2)] (1 + t²/n)^{−(n+1)/2}  (2.136)
The mean is μT = 0 (2.137), and the variance is

σT² = n/(n − 2),  n > 2  (2.138)
Consider the statistic

(X̄ − μX)/(σX/√n)

It is seen that the variable Z = (X̄ − μX)/(σX/√n), used to standardize the sample mean X̄ (which has variance σX²/n), has the standard normal distribution. If the standard deviation is known, Z can be used to estimate how close the sample mean is to μX. However, the standard deviation (or variance) is often unknown and must be estimated by the sample variance SX². In this case, we have

(X̄ − μX)/(SX/√n) = [(X̄ − μX)/(σX/√n)]/(SX/σX) = Z/√{[(n − 1)SX²/σX²]/(n − 1)} = Z/√[χ²n−1/(n − 1)] = Tn−1  (2.139)
That is,
Tn−1 = (X̄ − μX)/(SX/√n)  (2.140)
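Equation 2.140 can be demonstrated by simulation: for normal samples of size n, the statistic built with SX has heavier tails than N(0, 1), and its variance approaches the Student value of Equation 2.138. A minimal sketch (sample size and seed are arbitrary choices):

```python
# Simulation check of Equation 2.140: T = (Xbar - mu)/(S_X/sqrt(n)) for normal
# samples of size n has a Student's distribution with n - 1 degrees of
# freedom, so its variance should approach (n-1)/((n-1)-2).
import math
import random
import statistics

random.seed(3)
n, trials = 6, 50_000              # n - 1 = 5 degrees of freedom
t_vals = []
for _ in range(trials):
    x = [random.gauss(0, 1) for _ in range(n)]
    xbar = sum(x) / n
    s = math.sqrt(sum((v - xbar) ** 2 for v in x) / (n - 1))
    t_vals.append(xbar / (s / math.sqrt(n)))

var_t = statistics.variance(t_vals)    # theory: 5/3
```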
2.3.3.3 F Distribution
We now consider another distribution of F random variables as follows
F(u, v) = (χu²/u)/(χv²/v)  (2.141)
Its PDF is

fFu,v(f) = Γ[(u + v)/2] (u/v)^{u/2} f^{u/2−1} / {Γ(u/2) Γ(v/2) [(u/v)f + 1]^{(u+v)/2}},  f > 0  (2.142)

The mean is

μFu,v = v/(v − 2),  v > 2  (2.143)

and the variance is

σ²Fu,v = 2v²(u + v − 2)/[u(v − 2)²(v − 4)],  v > 4  (2.144)
2.4 DESIGN CONSIDERATIONS
With the help of the above-mentioned random variables including definitions, means
and variances, and PDF and CDF, let us consider the design under random loads. The
focus is on the reliability or failure probability of these systems.
pf = P(RD − QD ≤ 0)  (2.146)

By substitution of Equations 1.132 and 1.133 into Equation 2.146, we can evaluate this failure probability. Furthermore, if more than one load is applied, the limit state can be written as

F = Σ_i γiQi − ϕRN = 0  (2.148)
Here, Qi is the ith nominal load, and γi is the corresponding load factor.
Assume all Qi and R are normally distributed. That is,
Qi ~ N(μQi, σQi)  (2.149)
and R ~ N(μR, σR)  (2.150)

Then

μF = μR − Σ_i μQi  (2.151)

and

σF = √(σR² + Σ_i σQi²)  (2.152)

The reliability index is

β = μF/σF = (μR − Σ_i μQi)/√(σR² + Σ_i σQi²)  (2.153)
β is defined as the reliability index. Recalling Equation 1.154, the failure probability can be computed as
pf = Φ(−β) (2.154)
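Equations 2.153 and 2.154 translate directly into code; a minimal sketch with made-up numbers (the resistance and load statistics below are illustrative, not from the text):

```python
# Multi-load reliability index and failure probability:
# beta = (mu_R - sum(mu_Qi)) / sqrt(sigma_R^2 + sum(sigma_Qi^2)),
# pf = Phi(-beta).
import math

def reliability_index(mu_R, sigma_R, mu_Q, sigma_Q):
    """Equation 2.153 for one resistance and a list of loads."""
    return (mu_R - sum(mu_Q)) / math.sqrt(
        sigma_R ** 2 + sum(s ** 2 for s in sigma_Q))

def failure_probability(beta):
    """Equation 2.154: pf = Phi(-beta)."""
    return 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))

# Hypothetical example values:
beta = reliability_index(mu_R=100.0, sigma_R=10.0,
                         mu_Q=[40.0, 20.0], sigma_Q=[8.0, 6.0])
pf = failure_probability(beta)
```

With these numbers, β = 40/√200 ≈ 2.83, which Table 1.6 places near pf ≈ 0.002.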
Here, [.] stands for allowed value of (.). From Equation 2.155, if only one load is
considered and assuming the standard deviation σF can be determined, then
(Figure 2.6: normal PDFs of load Q and resistance R on an intensity axis, with mean values μQ and μR, the design point RD = QD between them, and the offsets κQσQ and κRσR.)
From Figure 2.6, when the distance between the two mean values μR and μQ is fixed, under the requirement of [β], one can always find a middle point RD = QD. Recalling Equations 1.132 through 1.139, we can write

QD = μQ + κQσQ = βQ(1 + κQCQ)QN = γQN

and

RD = μR − κRσR = βR(1 − κRCR)RN = ΦRN
Let

[β]σF − κRσR − κQσQ = [β]√(σR² + Σ_i σQi²) − κRσR − κQσQ = 0  (2.158)
For a given allowed reliability index [β], if the standard deviations σR and σQ are
known, then by choosing the proper value of κ R and κQ, the critical point is deter-
mined at QD = R D or
γQN = ΦR N (2.159)
That is, Equation 2.159 is determined by the required condition of allowed failure
probability. This can be extended to multiple loads. Suppose that there exists n loads.
The limit state is
∑ γ Q − ΦR
i =1
i i N =0 (2.160)
must be determined by a preset allowed failure probability [pf ]. Here, γi and Qi are,
respectively, the ith load factor and the ith nominal load.
2.4.2 Combination of Loads
With the help of Equation 2.160, if the allowed failure probability is given and the resistance is also known, the summation Σ_i γiQi, namely, the load combination, can be determined.
Consider the inverse problem of given resistance and failure probability to deter-
mine the load factor γis. In Equation 2.160, ΦR N is given. Knowing all the nominal
values of load Qis, we now have n unknowns (γi).
Consider the case of two loads.
FIGURE 2.7 Loads and resistance case 1. (a) R-Q1-Q2 three dimensional plot, (b) R-Q2
plan, (c) R-Q1 plan, and (d) Q1-Q2 plan.
A design value of ΦR chosen above the β plan will yield a larger value of β, or a smaller value of failure probability. Thus, we have a safe design region (see Figure 2.7b, for example). Now, let value ΦR be the design value ΦRN for the designed resistance, and γiQi be the ith design load.
Figure 2.7 shows a three-dimensional plot with two loads, Q1 and Q2. In Figure 2.7a, the thick solid line in plan R-Q1, which is also shown in Figure 2.7c, is
Equation 2.162 is obtained under the condition Q2 = 0, when the limit state F = 0 is reached. In Figure 2.7a, the thick broken line in plan R-Q2, which is also shown in Figure 2.7b, is
ΦR = γ2Q2 (2.163)
FIGURE 2.8 Loads and resistance case 2. (a) R-Q1-Q2 three dimensional plot, (b) R-Q2
plan, (c) R-Q1 plan, and (d) Q1-Q2 plan.
Equation 2.163 is obtained under the condition Q1 = 0, when the limit state F = 0
is reached.
Because Equations 2.162 and 2.163 are determined based on a given allowed fail-
ure probability, namely, the given value of [β], these two lines define a special plan
called equal β plan, or simply β plan, shown in Figure 2.7a (plan O-A-C).
When we have different combinations of Q1 and Q2, with given resistance ΦR,
the load factors will be different. Let us consider two possible load combinations.
The first case is denoted by Q1 and Q2 and the second is denoted by Q1′ and Q2′.
Correspondingly, we have γ1 and γ2 as well as γ1′ and γ2′. The second case is shown
in Figure 2.8.
The intersection of the β plan formed by Q1-Q 2 and the ΦR plan forms a
straight line. Equation 2.161 is the corresponding equation (see Figures 2.7d and
2.8d). The intersection of an alternative β plan formed by Q1′-Q2′ and the ΦR plan
forms another straight line. Equation 2.164 is the corresponding equation (see
Figure 2.8d).
Example 2.11
The Q1-Q2 combination can be a large truck load combined with a small earth-
quake load, acting on a bridge. The Q1′-Q2′ combination can be a small truck load
combined with a large earthquake load, acting on the same bridge.
In Figure 2.8, only the area 0-A′-E-C is the safe region.
Consider the sum of n independent random variables Xi, each with mean μi and variance σi²:

Sn = Σ_{i=1}^{n} Xi  (2.165)

with

μSn = Σ_{i=1}^{n} μi  (2.166)

and

σ²Sn = Σ_{i=1}^{n} σi²  (2.167)

In the limit as n goes to infinity, the standardized variable of Sn,

Zn = (Sn − μSn)/σSn  (2.168)

has the standard normal distribution. That is,

lim_{n→∞} P{[Σ_{i=1}^{n} (Xi − μi)]/√(Σ_{i=1}^{n} σi²) < ξ} = (1/√(2π)) ∫_{−∞}^{ξ} e^{−ζ²/2} dζ  (2.170)
E(Zn) = 0  (2.171)

D(Zn) = 1  (2.172)
In the special case where the Xi are identically distributed with common mean μ and standard deviation σ (the Lindeberg–Levy theorem), for

Sn = Σ_{i=1}^{n} Xi  (2.173)

the standardized variable

Zn = (Sn − nμ)/(√n σ)  (2.174)

has, in the limit as n goes to infinity, the standard normal distribution. That is,

lim_{n→∞} P{(Σ_{i=1}^{n} Xi − nμ)/(√n σ) < ξ} = (1/√(2π)) ∫_{−∞}^{ξ} e^{−ζ²/2} dζ  (2.176)
Again, we have

E(Zn) = 0  (2.177)

and

D(Zn) = 1  (2.178)
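The Lindeberg–Levy statement above can be demonstrated numerically; a minimal sketch using Uniform(0, 1) summands (the choice of parent distribution, n, and seed are arbitrary):

```python
# CLT demo: standardized sums Z = (S_n - n*mu)/(sqrt(n)*sigma) of iid
# Uniform(0, 1) variables approach the standard normal distribution.
import math
import random

random.seed(11)
n, trials = 30, 50_000
mu, sigma = 0.5, math.sqrt(1.0 / 12.0)     # mean and std of Uniform(0, 1)

z_vals = []
for _ in range(trials):
    s = sum(random.random() for _ in range(n))
    z_vals.append((s - n * mu) / (math.sqrt(n) * sigma))

mean_z = sum(z_vals) / trials
var_z = sum((z - mean_z) ** 2 for z in z_vals) / (trials - 1)
frac_within_196 = sum(abs(z) < 1.96 for z in z_vals) / trials   # ~0.95 if normal
```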
Consider a Bernoulli variable

K = {0, 1}  (2.179)

The probability of K = 1 is

P(K = 1) = p  (2.180)

and

P(K = 0) = 1 − p  (2.181)

The sum

X = Σ_{i=1}^{n} Ki  (2.182)

is a binomial variable with

μX = np  (2.184a)

and

σX² = np(1 − p)  (2.184b)
When n becomes sufficiently large, the PMF P(X = x) can be written approximately as

pX(x) = [1/(√(2π)σX)] e^{−(x−μX)²/(2σX²)}  (2.185)

Denoting xk = (x − np)/√(np(1 − p)) for convenience, which is a standardized variable, we can see that when n → ∞, x = np + √(np(1 − p)) xk → ∞ and (n − x) → ∞. Furthermore, with Stirling's approximation

n! ≈ √(2π) n^{n+1/2} e^{−n}

we thus have

pX(x) ≈ [1/√(2πnp(1 − p))] (np/x)^{x+1/2} [n(1 − p)/(n − x)]^{n−x+1/2}

with

(np/x)^{x+1/2} ≈ e^{−xk√(np(1−p)) − (1−p)xk²/2}

and

[n(1 − p)/(n − x)]^{n−x+1/2} ≈ e^{xk√(np(1−p)) − pxk²/2}

Therefore,

pX(x) = Cn^x p^x (1 − p)^{n−x} ≈ [1/√(2πnp(1 − p))] e^{−xk²/2} = [1/√(2πnp(1 − p))] e^{−(x−np)²/[2np(1−p)]} = [1/(√(2π)σX)] e^{−(x−μX)²/(2σX²)}
Example 2.12

Suppose there are 170 identical and independent computers in a department and each of them has a 1% failure probability. Calculate the probability that at most two computers fail.
Therefore, with the Poisson approximation,

P(X = k) ≈ Cn^k p^k (1 − p)^{n−k} ≈ λ^k e^{−λ}/k!,  with λ = 170 × 0.01 = 1.7

Alternatively, with the normal approximation,

P(0 ≤ X ≤ 2) ≈ Φ[(2 − np)/√(np(1 − p))] − Φ[(0 − np)/√(np(1 − p))] = Φ(0.2312) − Φ(−1.3104)
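The two approximations of Example 2.12 can be compared against the exact binomial answer; a minimal sketch:

```python
# Example 2.12 numerically: exact binomial P(X <= 2) for n = 170, p = 0.01,
# versus the Poisson (lambda = 1.7) and normal approximations.
import math

n, p = 170, 0.01
lam = n * p                                     # 1.7

binom = sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(3))
poisson = sum(lam**k * math.exp(-lam) / math.factorial(k) for k in range(3))

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

sd = math.sqrt(n * p * (1 - p))
normal = phi((2 - lam) / sd) - phi((0 - lam) / sd)
```

The Poisson value tracks the exact binomial closely (both near 0.76), while the plain normal approximation without continuity correction is poor here because λ is small.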
Consider the product of positive random variables

Y = Π_{i=1}^{n} Xi,  Xi > 0  (2.186)

Taking the logarithm turns the product into a sum,

Z = ln(Y) = Σ_{i=1}^{n} ln(Xi)  (2.187)

Thus,

Y = e^Z  (2.188)

As

n → ∞  (2.189)

we finally have, by the central limit theorem,

[Z − E(Z)]/√(D(Z)) ~ N(0, 1)  (2.190)

so that Y is asymptotically lognormal.
For the largest value Yn in n independent samples of X, the CDF is

FYn(y) = [FX(y)]^n  (2.193)

Thus,

fYn(y) = dFYn(y)/dy = n[FX(y)]^{n−1} fX(y)  (2.194)
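The power rule for the maximum can be checked directly; a minimal sketch using Uniform(0, 1), where the parent CDF is simply FX(y) = y (sample size and seed are arbitrary choices):

```python
# Check of F_Yn(y) = F_X(y)**n: for Uniform(0, 1) parents, the CDF of the
# maximum of n samples evaluated at y should be y**n.
import random

random.seed(9)
n, trials = 10, 50_000
maxima = [max(random.random() for _ in range(n)) for _ in range(trials)]

y = 0.8
empirical = sum(m <= y for m in maxima) / trials
theoretical = y ** n
```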
2.5.4 Special Distributions
2.5.4.1 CDF and PDF of Extreme Value of Rayleigh Distributions
2.5.4.1.1 CDF of Rayleigh Distribution
Recalling Equation 1.94 (Rayleigh distribution)

fH(h) = (h/σ²) e^{−h²/(2σ²)},  h > 0

and with the help of this PDF, the CDF of the Rayleigh distribution can be calculated as

FH(h) = ∫_0^h (u/σ²) e^{−u²/(2σ²)} du = 1 − e^{−h²/(2σ²)}  (2.195)
2.5.4.1.2 CDF
Furthermore, with the help of Equation 2.193, the CDF of the largest value in n independent samples of a Rayleigh distribution is

FYn(y) = [1 − e^{−y²/(2σ²)}]^n,  y ≥ 0  (2.196)

2.5.4.1.3 PDF
And the corresponding PDF is

fYn(y) = n[1 − e^{−y²/(2σ²)}]^{n−1} (y/σ²) e^{−y²/(2σ²)}  (2.197)
2.5.4.2.1 CDF
The CDF of EVI is

FYn(y) = e^{−e^{−α(y−β)}}  (2.198)

2.5.4.2.2 PDF
The PDF of EVI is

fYn(y) = αe^{−α(y−β)} e^{−e^{−α(y−β)}}  (2.199)

The parameters satisfy

FX(β) = 1 − 1/n  (2.200)

or

β = FX⁻¹(1 − 1/n)  (2.201)

and

α = nfX(β)  (2.202)

The mean is

μYn = β + γ/α  (2.203)

The variance is

σ²Yn = π²/(6α²) ≈ 1.645/α²  (2.204)

so that

σYn ≈ 1.282/α  (2.205)
where γ is Euler's constant:

γ = −∫_0^∞ e^{−x} ln x dx = 0.5772157 ≈ 0.577  (2.206)
2.5.4.2.5.1 Values of β and α  First of all, for a normal parent distribution with zero mean and standard deviation σ, the parameters β and α can be calculated as (see Gumbel 1958; Ang and Tang 1984)

β = σ√(2 ln n)  (2.207)

and

α = √(2 ln n)/σ  (2.208)

respectively. The mean is

μYn = σ√(2 ln n) + γσ/√(2 ln n)  (2.209)

and the variance is

σ²Yn = π²σ²/(12 ln n)  (2.210)
Example 2.13
EVI distribution is often used to model peak annual flow of a river. It is measured
that μY = 2000 m3/s and σY = 1000 m3/s.
Question (1): Find the CDF of EVI
Question (2): In a particular year, find the probability of peak flow exceeding
5000 m3/s
1. We have

α = 1.282/σY = 1.282/1000 = 0.00128

and

β = μY − 0.577/α = 2000 − 0.577/0.00128 = 1549.8 (m³/s)

so the CDF is FYn(y) = exp[−e^{−0.00128(y−1549.8)}].

2. We have

P(Y > 5000) = 1 − FYn(5000) = 1 − exp[−e^{−0.00128(5000−1549.8)}] ≈ 0.01

and the corresponding return period is TR = 1/0.01 = 100 (years).
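Example 2.13 is straightforward to evaluate exactly; a minimal sketch following the same parameter fit:

```python
# Numeric companion to Example 2.13: fit EVI parameters from the given mean
# and standard deviation, then evaluate the exceedance probability of
# 5000 m^3/s and the corresponding return period.
import math

mu_Y, sigma_Y = 2000.0, 1000.0
alpha = 1.282 / sigma_Y                 # from sigma_Yn ~ 1.282/alpha
beta = mu_Y - 0.577 / alpha             # from mu_Yn = beta + gamma/alpha

def cdf_ev1(y):
    """EVI CDF, F(y) = exp(-exp(-alpha*(y - beta)))."""
    return math.exp(-math.exp(-alpha * (y - beta)))

p_exceed = 1.0 - cdf_ev1(5000.0)
return_period = 1.0 / p_exceed
```

Without rounding, `p_exceed` is about 0.012 and `return_period` about 84 years; the text rounds the probability to 0.01, giving TR = 100 years.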
For the smallest value Zn of n independent samples,

P(Zn ≤ y) = 1 − P(all Xi > y) = 1 − P[(X1 > y) ∩ (X2 > y) … ∩ (Xn > y)]  (2.211)

Thus,

FZn(y) = 1 − [1 − FX(y)]^n

and

fZn(y) = dFZn(y)/dy = n[1 − FX(y)]^{n−1} fX(y)  (2.214)
Note that if the Xi are independent but each has its own CDF FXi(x), then FZn(y) = 1 − Π_{i=1}^{n} [1 − FXi(y)]  (2.215)

2.5.4.4.1 CDF
First, consider the CDF. Suppose the variable Xi has CDF

FX(x) = 1 − β(1/x)^k,  x ≥ 0  (2.216)

Let X be the random set with n total variables, let Yn be the largest value in n independent samples, and let k > 0 be the shape parameter. The asymptotic distribution of Yn can be written as

FYn(y) = e^{−(u/y)^k},  y ≥ 0  (2.217)

2.5.4.4.2 PDF
The PDF of EVII is

fYn(y) = (k/u)(u/y)^{k+1} e^{−(u/y)^k},  y ≥ 0  (2.218)

2.5.4.4.3 Mean
The mean of EVII is

μYn = uΓ(1 − 1/k)  (2.219)

2.5.4.4.4 Variance
Finally, the variance of EVII is

σ²Yn = u²[Γ(1 − 2/k) − Γ²(1 − 1/k)]  (2.220)
(Figure 2.9: PDFs of the normal, EVI, and EVII distributions plotted for values of y from 0 to 5.)
Figure 2.9 shows comparisons among normal, EVI, and EVII distributions. All these distributions have identical mean = 1 and standard deviation = 0.46. From Figure 2.9, we can see that EVI and EVII have heavier right tails, which means that the chance of having a larger value of y, the extreme value, is comparatively greater.
2.5.4.5.1 CDF
When the largest value of variables Xi falls off to a certain maximum value m, which
has CDF as
Let X be the random set with n total variables. Yn is the largest value in n inde-
pendent samples.
The distribution of Yn is
FYn(y) = e^{−[(m−y)/(m−u)]^k},  y ≤ m  (2.222)

2.5.4.5.2 PDF
The PDF of EVIII is

fYn(y) = [k/(m − u)] [(m − y)/(m − u)]^{k−1} e^{−[(m−y)/(m−u)]^k},  y ≤ m  (2.223)
Problems
1. X and Y are continuous variables with X > 0, Y > 0, and joint PDF

fXY(x, y) = (y²/A) e^{−(x + y/2)}

a. What is the suitable value of parameter A?
b. Determine the marginal PDFs.
c. Find the conditional PDF of X when y = 1.
d. Find the covariance and correlation coefficient.
2. Random variables X and Y are independent with exponential PDFs, respectively, given by

fX(x) = { λe^{−λx}, x > 0; 0, x ≤ 0 }

and

fY(y) = { νe^{−νy}, y > 0; 0, y ≤ 0 }

Find the distribution of

Z = { 1, X ≤ Y; 0, X > Y }
is also a PDF.
4. Derive the general normal PDF based on knowing the standard normal PDF
5. The PMF of random variable X is shown in Table P2.1. Find the PMF of
Z = X2.
Table P2.1
X −2 −1 0 1 5
pk 1/5 1/6 1/5 1/15 11/30
6. Show that the sum of two correlated normal random variables is also normally distributed by using convolution. [Hint: consider a² ± 2ab + b² = (a ± b)².]
7. Suppose random variable X is uniformly distributed in (0,1). Find the PDF
of (a) Y = eX and (b) Z = −2ln(X)
Given the sample space Ω = {e} of a random test, if at any moment t ∈ T there exists a random variable X(t, e), referred to as a temporal set of random variables, then {X(t, e), t ∈ T, e ∈ Ω} is a random process. It is denoted by {X(t), t ∈ T} or, more simply, X(t).
The essence of a random process is that it must be:

1. A random sequence.
2. A sequence inside a set Ω = {e}, the sample space, which is seen as a state space, where e is an independent state used to denote an individual sample.
3. A sequence whose order is tracked by the index t, which is an element of the index set T.
1. Both are random; thus, we need to consider the distribution functions.
2. X does not vary with time, whereas X(t) does; thus, the distribution of X is fixed. In most cases, the first and second moments, mean and variance, are sufficient to describe X. However, the distribution of X(t) is a temporal function. In this case, moments alone are insufficient and a correlation analysis is needed.
3. The independent variable for the random variable X is e, whereas for the random process X(t) the independent variables are (e, t).
3.1.1.3.1 One-Dimensional
It is important to note that the phrase “dimensional” differs from the one used at the
beginning of Chapter 3. Previously, the phrase “dimensional” was used in a phil-
osophical (or general) sense. Here, it is more mathematical (or specific). In other
words, “one-dimensional” refers to only one group of variables X1(t) = X(t) being
considered. Furthermore, “two-dimensional” will involve X1(t) and X2(t), where X1(t)
and X2(t) are different groups of variables.
For each fixed time t ∈ T, the random process X(t) becomes a random variable, with CDF

FX(x; t) = P[X(t) ≤ x]  (3.1)

and PDF

fX(x; t) = ∂FX(x; t)/∂x  (3.2)

Similarly, for a pair of times t1 and t2,

fX(x1, x2; t1, t2) = ∂²FX(x1, x2; t1, t2)/(∂x1∂x2)  (3.4)
3.1.1.3.2.3 Joint Distribution  Generally, X(t1) and X(t2) are different sets of random variables. Thus, X(t1) and X(t2) will have a joint distribution:

FX(x1, x2; t1, t2) = P[(X(t1) ≤ x1) ∩ (X(t2) ≤ x2)]  (3.5)

3.1.1.3.3 N-Dimensional
Similar to joint distributions, we consider n-dimensional distributions.

FX(x1, x2, …xn; t1, t2, …tn) = P[(X(t1) ≤ x1) ∩ (X(t2) ≤ x2) … ∩ (X(tn) ≤ xn)]  (3.6)
3.1.1.3.3.2 PDF  The n-dimensional density function of random process X(t) is:

fX(x1, x2, …xn; t1, t2, …tn) = ∂ⁿFX(x1, x2, …xn; t1, t2, …tn)/(∂x1∂x2…∂xn)  (3.7)
3.1.1.3.4.1 Symmetry  The distribution is unchanged under an arbitrary rearrangement of (t1, t2, …tn), namely, (t1′, t2′, …tn′). Let us use a simple, incomplete example to show this symmetry. Suppose, in the two-dimensional case, x1 = 1, x2 = 2, t1 = 0.1, and t2 = 0.5:

FX(x1, x2; t1, t2) = FX(1, 2; 0.1, 0.5) = P[(X(0.1) ≤ 1) ∩ (X(0.5) ≤ 2)] = P[(X(0.5) ≤ 2) ∩ (X(0.1) ≤ 1)] = FX(2, 1; 0.5, 0.1) = FX(x2, x1; t2, t1)
FX(x1, x2, …xm; t1, t2, …tm) = FX(x1, x2, …xm, ∞|m+1,∞|m+2, …∞|n; t1, t2, …tm, tm+1 …tn)
(3.9)
Proof:

FX(x1, x2, …, xm, ∞|m+1, ∞|m+2, …, ∞|n; t1, t2, …, tm, tm+1, …, tn) = P[(X(t1) ≤ x1) ∩ (X(t2) ≤ x2) ∩ … ∩ (X(tm) ≤ xm) ∩ (X(tm+1) < ∞) ∩ … ∩ (X(tn) < ∞)] = P[(X(t1) ≤ x1) ∩ (X(t2) ≤ x2) ∩ … ∩ (X(tm) ≤ xm)] = FX(x1, x2, …, xm; t1, t2, …, tm)

Note that Equation 3.9 is also known as the consistency condition, which implies that lower-dimensional distributions can always be recovered from higher-dimensional ones.
Example 3.1

Suppose a random process X(t) (−∞ < t < ∞) has only two sample functions, x1(t) = −4 cos t, taken with probability 1/3, and x2(t) = 4 cos t, taken with probability 2/3. Then X(0) has the PMF

X(0):  −4    4
P:     1/3   2/3

and the CDF is

F(0, x) = { 0, −∞ < x ≤ −4;  1/3, −4 < x ≤ 4;  1, x > 4 }

Similarly, X(π/3) has the PMF

X(π/3):  −2    2
P:       1/3   2/3

and the CDF is

F(π/3, x) = { 0, −∞ < x ≤ −2;  1/3, −2 < x ≤ 2;  1, x > 2 }
P{X(π/3) = −2 | X(0) = −4} = 1
P{X(π/3) = 2 | X(0) = −4} = 0
Random Processes in the Time Domain 121
P{X(π/3) = −2 | X(0) = 4} = 0
P{X(π/3) = 2 | X(0) = 4} = 1

Therefore, P{X(0) = −4, X(π/3) = −2} = P{X(0) = −4} P{X(π/3) = −2 | X(0) = −4} = P{X(0) = −4} = 1/3.
Similarly, P{X(0) = 4, X(π/3) = 2} = 2/3, and the joint PMF is

X(0)\X(π/3):   −2     2
−4:            1/3    0
4:             0      2/3

so that the joint CDF is

F(0, π/3; x1, x2) = { 0, x1 ≤ −4 or x2 ≤ −2;  1/3, x1 > −4 and −2 < x2 ≤ 2, or x2 > −2 and −4 < x1 ≤ 4;  1, x1 > 4 and x2 > 2 }
If X(t) is an independent random process, then

FX(x1, x2, …, xn; t1, t2, …, tn) = FX(x1; t1) FX(x2; t2) … FX(xn; tn) (3.10)

or

fX(x1, x2, …, xn; t1, t2, …, tn) = fX(x1; t1) fX(x2; t2) … fX(xn; tn) (3.11)

Equations 3.10 and 3.11 show that, for an independent random process X(t), the distributions at different moments t1, t2, … can be decoupled. Note that decoupling is an important technique for dealing with complex events, functions, and systems.
Example 3.2
For a random process Z(t) = (X 2 + Y 2) t, t > 0, where X~N(0,1), Y~N(0,1), and X,Y are
independent, find the one-dimensional density function.
The joint PDF is

fXY(x, y) = (1/2π) e^{−(x²+y²)/2}, −∞ < x < ∞, −∞ < y < ∞

When z ≥ 0,

FZ(z; t) = P[Z(t) ≤ z] = P[(X² + Y²) ≤ z/t] = ∫∫_{x²+y²≤z/t} (1/2π) e^{−(x²+y²)/2} dx dy
= ∫₀^{2π} (1/2π) dθ ∫₀^{√(z/t)} r e^{−r²/2} dr = 1 − e^{−z/(2t)}

Note that in the above derivation, the Cartesian coordinates are replaced by polar coordinates. When z < 0, FZ(z; t) = 0. Therefore,

FZ(z; t) = { 1 − e^{−z/(2t)}, z ≥ 0;  0, z < 0 }

and the density function is

fZ(z; t) = ∂FZ(z; t)/∂z = { (1/(2t)) e^{−z/(2t)}, z ≥ 0;  0, z < 0 }
3.1.2.1 Concept of Ensembles
To consider the above-mentioned average, we must introduce the concept of ensembles.
3.1.2.1.1 Definition
Ensembles: Set of all possible sample realizations of a random process. The essence
of ensembles is that the process is viewed as a whole. The reason to employ ensem-
bles in the study of a random process is to understand the joint probability distribu-
tions at all times.
Recall marginal distribution: the distribution of X(t) at a given t, denoted by
f X(t)(x) (see Figure 3.1).
[Figure 3.1: An ensemble X(t), with the marginal densities fX(t1)(x) and fX(t2)(x) indicated at two times t1 and t2.]
Generally, we have the following moment functions.
3.1.2.2.1 Mean
The mean value can be calculated as
μX(t) = E[X(t)] = ∫_{−∞}^{∞} x fX(t)(x) dx (3.15)
3.1.2.2.2 Variance
The variance is
σX²(t) = E[{X(t) − μX(t)}²] ≡ D[X(t)] = ∫_{−∞}^{∞} [x − μX(t)]² fX(t)(x) dx (3.16)
3.1.2.2.3 Autocorrelation Function
The autocorrelation function is

RX(t1, t2) = E[X(t1)X(t2)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x1x2 fX(t1)X(t2)(x1, x2) dx1 dx2 (3.17)
Equations 3.15 through 3.17 are averages over the ranges of X, namely, the whole
space of Ω.
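Equations 3.15 through 3.17 can be illustrated with a Monte Carlo ensemble. In this hypothetical sketch, the process X(t) = A cos(2πt) with A ~ N(1, 0.5²) is chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_realizations = 100_000
a = rng.normal(1.0, 0.5, n_realizations)     # random amplitude A ~ N(1, 0.25)

t1, t2 = 0.1, 0.3
x1 = a * np.cos(2 * np.pi * t1)              # ensemble of X(t1)
x2 = a * np.cos(2 * np.pi * t2)              # ensemble of X(t2)

mu1 = x1.mean()                              # Eq. 3.15: ensemble mean at t1
var1 = x1.var()                              # Eq. 3.16: ensemble variance at t1
r12 = np.mean(x1 * x2)                       # Eq. 3.17: autocorrelation R_X(t1, t2)

c1, c2 = np.cos(2 * np.pi * t1), np.cos(2 * np.pi * t2)
assert abs(mu1 - 1.0 * c1) < 0.01            # mu_X(t) = E[A] cos(2*pi*t)
assert abs(var1 - 0.25 * c1**2) < 0.01       # sigma_X^2(t) = Var[A] cos^2(2*pi*t)
assert abs(r12 - (1.0**2 + 0.5**2) * c1 * c2) < 0.01   # R_X = E[A^2] cos cos
```

Each average runs over the realizations (the space Ω), not over time; that distinction is exactly what the next section on ensembles formalizes.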
Example 3.3
Let X(t) = A cos(2πt), where the amplitude A is a random variable whose PDF fA(a) satisfies

σA² + μA² = E[A²] = 1

1. Find fA(a).
Let us assume (note that this assumption is not necessary) that fA(a) = ka for 0 ≤ a ≤ x0. Then

∫₀^{x0} (ka) da = k x0²/2 = 1 → k = 2/x0²

and

fA(a) = (2/x0²) a, 0 ≤ a ≤ x0

The mean is

μA = ∫₀^{x0} (2a/x0²) a da = 2x0/3

and the variance is given by

σA² = ∫₀^{x0} (2a/x0²) (a − 2x0/3)² da = x0²/18

Because

σA² + μA² = 1 = x0²/18 + 4x0²/9 = x0²/2

we have x0 = (2)^{1/2} and

fA(a) = (2/x0²) a = a, 0 ≤ a ≤ √2

2. Find the autocorrelation function.

RX(t1, t2) = ∫_{−∞}^{∞} x1x2 fX(t1)X(t2)(x1, x2) dx = ∫₀^{√2} [a cos(2πt1)][a cos(2πt2)] a da = cos(2πt1) cos(2πt2)
Note that the second result in the above example can also be obtained by using Equation 3.18. In general, the correlation coefficient of a random process is

ρXX(t1, t2) = σXX(t1, t2)/[σX(t1) σX(t2)] (3.19)

where σX(·) = [σX²(·)]^{1/2}. When t1 = t2 = t, the covariance reduces to the variance, σXX(t, t) = σX²(t). In addition, the autocorrelation function is positive semi-definite; that is, for any real numbers α1, α2, …, αn and times t1, t2, …, tn, we have

Σ_{j=1}^{n} Σ_{k=1}^{n} αj αk RX(tj, tk) ≥ 0 (3.21)
Example 3.4
1. Let X(t) = C, where C is a random variable with PDF: fC(c), −∞ < c < ∞, t ∈ [0, T]
In this case, X(t) is not a function of time; therefore, the random process
reduces to a random set.
μX(t) = μC = const.
σX(t) = σC = const.
2. Let X(t) = Bt, where B is a random variable with PDF: fB (b), −∞ < b < ∞, t ∈
[0, T];
The mean is
μX(t) = ∫_{−∞}^{∞} fB(b) tb db = t ∫_{−∞}^{∞} fB(b) b db = μB t
the variance is
σX²(t) = ∫_{−∞}^{∞} fB(b)(bt − μB t)² db = t² ∫_{−∞}^{∞} fB(b)(b − μB)² db = σB² t²
and the autocorrelation is
RX(t1, t2) = E[X(t1)X(t2)] = E[B² t1t2] = t1t2 ∫_{−∞}^{∞} b² fB(b) db = t1t2 (μB² + σB²)
3. Let X(t) = B + t, where B is a random variable with PDF: fB (b), −∞ < b < ∞,
t ∈ [0, T]
The mean is
μX(t) = ∫_{−∞}^{∞} fB(b)(t + b) db = ∫_{−∞}^{∞} fB(b) b db + t ∫_{−∞}^{∞} fB(b) db = μB + t
the variance is
σX²(t) = ∫_{−∞}^{∞} fB(b)(b + t − μB − t)² db = σB²
and the autocorrelation is

RX(t1, t2) = E[(B + t1)(B + t2)] = ∫_{−∞}^{∞} (b + t1)(b + t2) fB(b) db = ∫_{−∞}^{∞} b² fB(b) db + (t1 + t2) ∫_{−∞}^{∞} b fB(b) db + t1t2 = μB² + σB² + t1μB + t2μB + t1t2
4. Let X(t) = cos²(ωt + Θ), where Θ is a random variable uniformly distributed over 0 < Θ ≤ 2π, with PDF fΘ(θ) = 1/(2π); t ∈ [0, T].
The mean is

μX(t) = ∫₀^{2π} fΘ(θ) cos²(ωt + θ) dθ = ∫₀^{2π} fΘ(θ) (1/2)[1 + cos(2ωt + 2θ)] dθ = 1/2

since, for the uniform density, ∫₀^{2π} fΘ(θ) cos(2ωt + 2θ) dθ = 0.
The variance is

σX²(t) = ∫₀^{2π} fΘ(θ) [cos²(ωt + θ) − 1/2]² dθ = ∫₀^{2π} fΘ(θ) [cos(2ωt + 2θ)/2]² dθ = ∫₀^{2π} fΘ(θ) (1/4) [1 + cos(4ωt + 4θ)]/2 dθ = 1/8
The autocorrelation is

RX(t1, t2) = ∫₀^{2π} cos²(ωt1 + θ) cos²(ωt2 + θ) fΘ(θ) dθ
= ∫₀^{2π} (1/2)[1 + cos(2ωt1 + 2θ)] (1/2)[1 + cos(2ωt2 + 2θ)] fΘ(θ) dθ
= (1/4) [ ∫₀^{2π} fΘ(θ) dθ + ∫₀^{2π} cos(2ωt1 + 2θ) fΘ(θ) dθ + ∫₀^{2π} cos(2ωt2 + 2θ) fΘ(θ) dθ + ∫₀^{2π} cos(2ωt1 + 2θ) cos(2ωt2 + 2θ) fΘ(θ) dθ ]
= 1/4 + (1/8) cos 2ω(t2 − t1)

since the two middle integrals vanish and the last integral equals (1/2) cos 2ω(t2 − t1).
It is important to note that in parts (1) and (4) of the above example, the means and variances are constants and the autocorrelation functions depend at most on the time difference t2 − t1, whereas in parts (2) and (3), the means, variances, and autocorrelation functions vary with time.
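Part (4) of Example 3.4 can be checked by simulation. The sketch below assumes, as the derivation requires, that Θ is uniform over (0, 2π]; the frequency ω, the times, the seed, and the sample size are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(2)
omega, n = 3.0, 400_000
theta = rng.uniform(0.0, 2.0 * np.pi, n)     # Theta ~ U(0, 2*pi]

t = 0.7                                      # any fixed time
x = np.cos(omega * t + theta) ** 2           # ensemble of X(t)

assert abs(x.mean() - 0.5) < 0.005           # mu_X = 1/2
assert abs(x.var() - 0.125) < 0.005          # sigma_X^2 = 1/8

t1, t2 = 0.2, 0.9
x1 = np.cos(omega * t1 + theta) ** 2
x2 = np.cos(omega * t2 + theta) ** 2
r = np.mean(x1 * x2)                         # R_X(t1, t2)
expected = 0.25 + 0.125 * np.cos(2 * omega * (t2 - t1))
assert abs(r - expected) < 0.005
```

Changing t leaves the mean and variance unchanged, and shifting t1 and t2 by a common amount leaves r unchanged, which is the stationary behavior noted above.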
Example 3.5
For the process of Example 3.1, X(t) = A cos t with P{A = −4} = 1/3 and P{A = 4} = 2/3,

RX(t1, t2) = E[X(t1)X(t2)] = (4)² cos t1 cos t2 (2/3) + (−4)² cos t1 cos t2 (1/3) = 16 cos t1 cos t2

With μX(t) = [4(2/3) + (−4)(1/3)] cos t = (4/3) cos t, the autocovariance is

σXX(t1, t2) = RX(t1, t2) − μX(t1)μX(t2) = 16 cos t1 cos t2 − (16/9) cos t1 cos t2 = (128/9) cos t1 cos t2
FX(x1, x2, … xn; t1, t2, … tn) = FX(x1, x2, … xn; t1 + h, t2 + h; … tn + h) (3.22)
f X(x1, x2, … xn; t1, t2, … tn) = f X(x1, x2, … xn; t1 + h, t2 + h; … tn + h) (3.23)
Conditions 3.22 and 3.23 imply that the n-dimensional distribution is invariant under an arbitrary time shift h.
1. If X(t) is a strictly stationary process, then the joint PDF of {X(t1), X(t2), …
X(tn)} is identical to that of {X(t1 + h), X(t2 + h), …X(tn + h)}.
2. If X(t) is a strictly stationary process whose second moment is finite,

E[X²(t)] < ∞ (3.24)

then

E[X(t)] = μX = const. (3.25)

and

D[X(t)] = σX² = const. (3.26)
It is important to note that Condition 3.24 is not necessary for strictly stationary
processes. Additionally, note that a strictly stationary process is defined by examin-
ing its distributions.
D[ X (t )] = σ 2X (t ) = σ 2X = const. (3.31)
ρXX(τ) = σXX(τ)/σX² (3.34)
1. If the process is complex valued, then RX(τ) is also complex valued and RX(−τ) = RX*(τ). Here, the symbol (·)* stands for taking the complex conjugate of (·).
2. The autocorrelation function is positive semi-definite; that is, for any complex numbers α1, α2, …, αn and any real numbers t1, t2, …, tn, we have

Σ_{j=1}^{n} Σ_{k=1}^{n} αj αk* RX(tj − tk) ≥ 0 (3.35b)
Example 3.6

Let X(t) = sin(2πAt), where A is uniformly distributed on (0, 1), and consider integer times t = 1, 2, …. The mean is

E[X(t)] = E[sin(2πAt)] = ∫₀¹ sin(2πat) da = −(1/(2πt)) cos(2πat) |₀¹ = (1 − cos 2πt)/(2πt) = 0, t = 1, 2, …

However, the one-dimensional density function,

fX(x, t) = { 1/(πt(1 − x²)^{1/2}), −1 < x < 1;  0, elsewhere }

is a function of time t; therefore, X(t) is not strictly stationary.
From the above-mentioned points, we can make further categorizations. If points (1) and (3) are satisfied, the process is weakly stationary; point (2) is not used often.
Example 3.7
A random process Z(t) = Xcos(2πt) + Ysin(2πt), where both X and Y are random
variables and
EX = EY = 0
DX = DY = 1
As well as
EXY = 0
Therefore, the mean of Z(t) is constant, and it can be readily shown that the
variance of Z(t) is also constant and the autocorrelation function of Z(t) depends
only on the time difference τ, which concludes that Z(t) is a stationary process.
3.1.3.2 Ergodic Process
As mentioned previously, to find the moments, we take the ensemble average. In most instances, determining the ensemble average is difficult. In contrast, the average over the time domain can be significantly simpler to compute.
In fact, in many engineering practices, the temporal average is used to calculate
the mean and variance values. Before using the temporal average, there is a question
that must first be asked. That is, under what conditions can the temporal average be
used? Using the temporal average under the incorrect conditions may have severe
computation errors. Mathematically, the correct condition is that the process must be
ergodic. This issue, however, must be further discussed.
The temporal average of the kth sample function X(t, k), denoted here by ⟨·⟩, is

⟨X(t, k)⟩ = lim_{T→∞} (1/(2T)) ∫_{−T}^{T} X(t, k) dt (3.36)

or, for a one-sided record,

⟨X(t, k)⟩ = lim_{T→∞} (1/(2T)) ∫₀^{2T} X(t, k) dt (3.37)
3.1.3.2.2 Ergodicity
Ergodicity means the temporal average can be used to replace the ensemble average.
A process is ergodic in the mean if

⟨X(t, k)⟩ = E[X(t)] = μX (3.38)

and ergodic in the variance if

⟨{X(t, k) − μX}²⟩ = D[X(t)] = E[{X(t, k) − μX}²] = σX² (3.39)
From Equations 3.38 through 3.40, it is established that an ergodic process must
be stationary. However, it must be remembered that a stationary process is not neces-
sarily an ergodic one.
A weakly ergodic process is one that satisfies these three conditions; a strongly ergodic process is one for which all ensemble averages equal the corresponding temporal averages; whereas a nonergodic process is one that does not satisfy any of these three conditions.
A stationary process X(t) is ergodic in the mean if and only if

lim_{T→∞} (1/T) ∫₀^{2T} (1 − τ/(2T)) [RX(τ) − μX²] dτ = 0 (3.41)

and ergodic in correlation if and only if

lim_{T→∞} (1/T) ∫₀^{2T} (1 − u/(2T)) {E[X(t + τ + u) X(t + τ) X(t + u) X(t)] − RX²(τ)} du = 0 (3.42)
Among the processes of Example 3.4, X(t) = C is stationary but nonergodic, and X(t) = B + t is not stationary and therefore is also nonergodic.
Ergodicity is important because we can use temporal averages to replace ensem-
ble averages, which will be discussed in detail in the next section. Practically, how-
ever, we rarely have exact ergodic processes. Caution must be taken in using the
temporal average. In many engineering applications, taking several temporal aver-
ages to see if the corresponding moments have converged to the correct values may
be advantageous.
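The caution above can be illustrated by contrasting a process that is ergodic in the mean with one that is not. This is an illustrative sketch with arbitrarily chosen processes: a harmonic process with random phase, whose temporal mean matches the ensemble mean of 0, and the frozen process X(t) = C from Example 3.4, whose temporal mean equals the realized C rather than E[C]:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 2000.0, 200_001)

# Ergodic in the mean: a harmonic process with a single random phase per record
theta = rng.uniform(0.0, 2.0 * np.pi)
x_erg = np.sqrt(2.0) * np.cos(t + theta)
assert abs(np.mean(x_erg) - 0.0) < 0.01      # temporal mean ~ ensemble mean 0

# Nonergodic: X(t) = C freezes one random value for all time
c = rng.normal(0.0, 1.0)
x_non = np.full_like(t, c)
assert abs(np.mean(x_non) - c) < 1e-12       # temporal mean is C, not E[C] = 0
```

One long record suffices for the first process; for the second, no record length helps, which is why several independent records should be averaged and compared in practice.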
Example 3.8
Assuming both X~N(0, 1) and Y~N(0, 1) are mutually independent, let us consider
Z(t) = X + Yt
R(t1, t2) = E[(X + Yt1) (X + Yt2)] = E(X2 + XYt1 + XYt2 + Y 2t1t2) = 1 + t1t2
D[Z(t)] = 1 + t2
and, because Z(t) = X + Yt is Gaussian with zero mean and variance 1 + t², the CDF is

FZ(z; t) = (1/(2π(1 + t²))^{1/2}) ∫_{−∞}^{z} e^{−ξ²/(2(1+t²))} dξ
It is determined that Z(t) is Gaussian but not stationary, and therefore nonergodic.
Note that, in Section 3.1.1.3, we use distribution functions to describe a ran-
dom process. In this particular case, we have the Gaussian process.
Example 3.9
Given Gaussian processes {X(t) −∞ < t < ∞} and {Y(t) −∞ < t < ∞}, which are inde-
pendent, prove Z(t) = X(t) + Y(t) −∞ < t < ∞ is also Gaussian.
Consider a nonzero vector q = [q1, q2, …, qn] and the linear combination

q [Z(t1), Z(t2), …, Z(tn)]ᵀ = Σ_{i=1}^{n} qi [X(ti) + Y(ti)]

Because X(t) and Y(t) are independent Gaussian processes, this is a linear combination of jointly Gaussian random variables, and it must be normally distributed, so that Z(t) is Gaussian. This example implies that Gaussian processes are closed under addition. One of the nice properties of the Gaussian process is that if a process is Gaussian, then its derivatives and integrals are also Gaussian.
3.1.4.2 Poisson Process
Before specifically introducing the Poisson process, let us consider various general cases.
Definition: A counting process N(t), t ≥ 0, is a Poisson process with rate λ > 0 if, for a sufficiently small time interval Δt:
a. The probability of exactly one arrival during Δt is

P[N(t + Δt) − N(t) = 1] = λΔt (3.43)

Note that Equation 3.43 implies the meaning of the factor λ, which is a proportionality constant, the mean arrival rate.
b. The probability of no arrivals during Δt is

P[N(t + Δt) − N(t) = 0] = 1 − λΔt (3.44)

c. The count starts at zero:

N(0) = 0 (3.45)
We now explain the nature of the Poisson process. Consider the following prob-
ability equation based on the above-mentioned condition,
pN(n, t + Δt) = P[N(t + Δt) = n] = P{[(N(t) = n) ∩ (no new arrival in Δt)] ∪ [(N(t) =
n − 1) ∩ (one new arrival in Δt)]} = pN(n, t) [1 − λΔt] + pN(n − 1, t) [λΔt] (3.46)
Furthermore, letting Δt → 0 in Equation 3.46 yields the differential equation

dpN(n, t)/dt = −λ pN(n, t) + λ pN(n − 1, t) (3.47)

For n = 0,

dpN(0, t)/dt = −λ pN(0, t) (3.48)

so that pN(0, t) = e^{−λt}. Solving Equation 3.47 recursively then gives

pN(n, t) = ((λt)^n/n!) e^{−λt}, n ≥ 0, t ≥ 0 (3.49)
This is the PMF of the Poisson process. Similar to the above-mentioned Gaussian
process, we can use the distribution function to describe the Poisson process.
In addition, we consider the corresponding moments:
Mean
μN(t) = λt (3.50)
Variance
σ 2N (t ) = λt (3.51)
Autocorrelation function

RN(t1, t2) = { λt1 + λ²t1t2, 0 ≤ t1 ≤ t2;  λt2 + λ²t1t2, 0 ≤ t2 ≤ t1 } (3.52)

Moreover, the increments of a Poisson process are stationary:

P{[N(t) − N(τ)] = k} = ([λ(t − τ)]^k/k!) e^{−λ(t−τ)} (3.53)
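The moments in Equations 3.50 and 3.51 can be verified by building a Poisson process from exponential interarrival times; the rate, horizon, and path count in this sketch are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(4)
lam, t, n_paths = 2.0, 5.0, 100_000

# Interarrival times of a rate-lambda Poisson process are exponential with
# mean 1/lambda; N(t) counts how many arrivals fall in [0, t].  Forty
# interarrivals per path is ample here, since P(N(5) >= 40) is negligible.
arrivals = np.cumsum(rng.exponential(1.0 / lam, size=(n_paths, 40)), axis=1)
counts = (arrivals <= t).sum(axis=1)

assert abs(counts.mean() - lam * t) < 0.05   # Eq. 3.50: mean = lambda*t = 10
assert abs(counts.var() - lam * t) < 0.2     # Eq. 3.51: variance = lambda*t = 10
```

The equality of mean and variance is the usual quick diagnostic for Poisson-like counting data.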
Example 3.10
The number of radiated particles during [0, t] from a source is denoted by N(t).
{N(t), t ≥ 0} is a Poisson process with mean radiation rate λ. Assume each particle
can be recorded with probability p, and the record of an individual particle is independent of other records and of the process N(t). Denote the total number of recorded particles during [0, t] by M(t). Prove that {M(t), t ≥ 0} is also a Poisson process with mean rate λp.
First, it is seen that M(0) = 0.
Second, let Xi be the record indicator of the ith particle, with distribution

P{Xi = 1} = p, P{Xi = 0} = 1 − p
These Xi are mutually independent and identically distributed. Consider the increments of M(t) over disjoint intervals. Because the Xi are mutually independent and identically distributed, and N(t) has independent increments, the increments of M(t) must be independent from each other; thus, M(t) is an independent increment process.
Now, consider P[M(t2) − M(t1) = k], which can be written as

P[M(t2) − M(t1) = k] = Σ_{n=k}^{∞} P{N(t2) − N(t1) = n} P{M(t2) − M(t1) = k | N(t2) − N(t1) = n}
= Σ_{n=k}^{∞} ([λ(t2 − t1)]^n/n!) e^{−λ(t2−t1)} C_n^k p^k (1 − p)^{n−k}
= ([λp(t2 − t1)]^k/k!) e^{−λp(t2−t1)}
Therefore, M(t2) − M(t1) is a Poisson distribution with parameter λp(t2 − t1) and
from the above statement, it is seen that {M(t), t ≥ 0} is a Poisson process with
mean rate λp.
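The thinning result of Example 3.10 lends itself to a direct simulation check; the values of λ, p, and t in this sketch are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(5)
lam, p, t, n_paths = 3.0, 0.4, 2.0, 200_000

n_t = rng.poisson(lam * t, size=n_paths)     # N(t): all radiated particles
m_t = rng.binomial(n_t, p)                   # M(t): each recorded with prob. p

# M(t) should be Poisson with mean lambda*p*t (here 2.4)
assert abs(m_t.mean() - lam * p * t) < 0.02
assert abs(m_t.var() - lam * p * t) < 0.05
# Check the PMF at k = 0 against exp(-lambda*p*t)
assert abs(np.mean(m_t == 0) - np.exp(-lam * p * t)) < 0.005
```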
Example 3.11

Let N1(t) and N2(t) be independent Poisson processes with rates λ1 and λ2.

1. Show that X(t) = N1(t) + N2(t) is a Poisson process with rate λ1 + λ2.
First, X(0) = N1(0) + N2(0) = 0. Second, because the increments of N1(t) and of N2(t) over disjoint intervals are mutually independent, the increments X(t2) − X(t1), X(t3) − X(t2), …, X(tn) − X(tn−1) are mutually independent, which means that {X(t), t ≥ 0} is an independent increment process. Third, when τ < t, we can have

P{X(t) − X(τ) = k} = Σ_{i=0}^{k} P{N1(t) − N1(τ) = i} P{N2(t) − N2(τ) = k − i}
= ((t − τ)^k e^{−(λ1+λ2)(t−τ)}/k!) Σ_{i=0}^{k} C_k^i λ1^i λ2^{k−i}
= ([(λ1 + λ2)(t − τ)]^k/k!) e^{−(λ1+λ2)(t−τ)}

Therefore, the process X(t) − X(τ) is Poisson with parameter (λ1 + λ2)(t − τ). The above discussion illustrates that Poisson processes are closed under addition.
2. Now, consider Y(t) = N1(t) − N2(t). {Y(t), t ≥ 0} is not Poisson: Y(t) takes the value −1 with nonzero probability, whereas for a Poisson variable, P{Y(t) = −1} = 0.
Let Y denote the waiting time to the first arrival. Then

P(Y > y) = P(N(y) = 0) = ((λy)⁰/0!) e^{−λy} = e^{−λy} (3.54)

so that

FY(y) = P(Y ≤ y) = 1 − e^{−λy} (3.55)

That is, the waiting time of a Poisson process is exponentially distributed.
Example 3.12
Suppose vehicles are passing a bridge with the rate of two per minute.
To determine the quantities of interest, a Poisson process is assumed, where V(t) is the number of vehicles in the time interval [0, t] (in minutes), with rate λ = 2:

P{V(t) = n} = ((λt)^n/n!) e^{−λt}, n = 0, 1, 2, …

1. Substituting t = 5, we have

P{V(5) = k} = ((10)^k/k!) e^{−10}

so the mean number of vehicles in 5 minutes is μV(5) = 10.
2. The variance is σV²(5) = 10.
3.1.4.3 Harmonic Process
The concept of the harmonic process is practically very useful in random vibration.
3.1.4.3.1 Definition
X(t) is a harmonic process if it is given by

X(t) = A cos ωt + B sin ωt (3.56)

where A and B are uncorrelated random variables with zero means,

μA = μB = 0 (3.57)

and equal variances,

σA² = σB² = σ² (3.58)

3.1.4.3.2 Mean
We see the mean is

μX(t) = 0 (3.59)

3.1.4.3.3 Autocorrelation
Because E[AB] = 0, the cross terms vanish and

RX(τ) = E[X(t) X(t + τ)] = E[{A cos ωt + B sin ωt}{A cos ω(t + τ) + B sin ω(t + τ)}] = E[A²] cos ωt cos ω(t + τ) + E[B²] sin ωt sin ω(t + τ)

Consequently,

RX(τ) = σ²[cos ωt cos ω(t + τ) + sin ωt sin ω(t + τ)] = σ² cos ωτ
Furthermore, we define a new harmonic process as the sum of all the Xk(t):

X(t) = Σ_{k=1}^{m} Xk(t) = Σ_{k=1}^{m} (Ak cos ωkt + Bk sin ωkt) (3.62)

The variance is

σ² = Σ_{k=1}^{m} σk² (3.63)

and the autocorrelation is

RX(τ) = Σ_{k=1}^{m} RXk(τ) = Σ_{k=1}^{m} σk² cos ωkτ (3.64)

Also, let p(ωk) represent the portion of the total variance contributed by the process with frequency ωk. In other words, let

p(ωk) = σk²/σ² (3.65)

It is seen that

Σ_{k=1}^{m} p(ωk) = 1 (3.66)
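Equation 3.64 can be verified by synthesizing a harmonic process with independent zero-mean amplitudes Ak and Bk of variance σk²; the frequencies, variances, seed, and sample size in this sketch are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(6)
omegas = np.array([1.0, 2.5, 4.0])           # omega_k
sig2 = np.array([1.0, 0.5, 0.25])            # sigma_k^2, total variance 1.75
n_paths = 200_000

a = rng.normal(0.0, np.sqrt(sig2), (n_paths, 3))   # A_k per realization
b = rng.normal(0.0, np.sqrt(sig2), (n_paths, 3))   # B_k per realization

def X(t):
    # one ensemble of X(t) = sum_k (A_k cos w_k t + B_k sin w_k t)
    return (a * np.cos(omegas * t) + b * np.sin(omegas * t)).sum(axis=1)

t, tau = 0.3, 1.1
r_hat = np.mean(X(t) * X(t + tau))           # ensemble estimate of R_X(tau)
r_theo = np.sum(sig2 * np.cos(omegas * tau)) # Eq. 3.64
assert abs(r_hat - r_theo) < 0.03
```

Shifting t does not change r_hat beyond sampling error, consistent with the process being stationary with R depending only on τ.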
Note that when the frequency intervals between ωk+1 and ωk are equal for all k, we can write

p(ωk) = (1/(2π)) g(ωk)Δω (3.68)

In this case, as m → ∞,

ωk+1 − ωk → dω (3.70)

and

RX(τ) = σ² lim_{Δω→0} Σ_{k=1}^{m} (1/(2π)) g(ωk)Δω cos ωkτ = (1/(2π)) ∫₀^{∞} σ²g(ω) cos ωτ dω (3.71)
Equation 3.71 is the Fourier cosine transform of the function σ²g(ω). That is, RX(τ) and σ²g(ω) form a Fourier pair, denoted by

RX(τ) ⇔ σ²g(ω) (3.72)

where the symbol "x(τ) ⇔ g(ω)" denotes a Fourier pair of x(τ) and g(ω).
In Chapter 4, function σ2g(ω) is referred to as the spectral density function because
it distributes the variance of X(t) as a density across the spectrum in the frequency
domain. Note that because the Fourier pair indicated in Equation 3.72 is unique, g(ω)
contains precisely the same information as R X(τ).
Comparing Equation 3.64 (in which a series consists of discrete harmonic terms
cosωk τ) and Equation 3.71 (in which an integral contains harmonic terms cosωτ), we
see that both represent the autocorrelation functions. The autocorrelation described
by Equation 3.71 has a continuous spectrum, with infinitesimal frequency resolution
dω, which implies that at any frequency point ω, the resolution is identical. That is,
the continuous spectrum has an infinite number of spectral lines. On the other hand,
the autocorrelation described by Equation 3.64 has a discrete spectrum and the number of spectral lines is m. Moreover, at two frequencies ωp and ωq of a discrete spectrum, the corresponding frequency intervals are not necessarily equal; that is, in general,

ωp+1 − ωp ≠ ωq+1 − ωq
3.2 CORRELATION ANALYSIS
3.2.1 Cross-Correlation
In Section 3.1, we introduced the concept of autocorrelation without discussing its
physical meaning and engineering applications in detail. In this section, the concept
of correlation of random processes in more specific ways is considered. The focus is
given to stationary processes.
3.2.1.1 Cross-Correlation Function
3.2.1.1.1 Definition
Recalling Equation 3.17, the autocorrelation function is given by
RX(t1, t2) = E[X(t1)X(t2)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x1x2 fX(t1)X(t2)(x1, x2) dx1 dx2
Similar to the term R X(t1, t2), consider the case in which there exists a second pro-
cess Y(t2). In this instance, we would have a cross-correlation, denoted by R XY (t1, t2),
which is the measure of correlation between two random processes X(t) and Y(t).
RXY(t1, t2) = E[X(t1)Y(t2)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x1 y2 fXY(x, y; t1, t2) dx dy (3.74)
If both X(t) and Y(t) are stationary, the cross-correlation function depends only on
the time lag τ, where
τ = t2 − t1 (3.75)
and

RXY(t1, t2) = RXY(τ)

[Figure 3.3: A cross-correlation function RXY(τ), oscillating about the level μXμY and bounded between μXμY − σXσY and μXμY + σXσY.]

Observing that, with the substitution s = t + τ,
R XY (τ) = E[X(t) Y(t + τ)] = E[X(s − τ) Y(s)] = E[Y(s)X(s − τ)] = RYX(−τ) (3.79)
3.2.1.1.3 Bounds
The cross-correlation function has upper and lower bounds. Figure 3.3 shows con-
ceptually the bounds of a cross-correlation function.
In Figure 3.3, RXY(τ) is bounded by

μXμY − σXσY ≤ RXY(τ) ≤ μXμY + σXσY
3.2.1.2 Cross-Covariance Function
3.2.1.2.1 Definition
The cross-covariance function is given by

σXY(t1, t2) = E[{X(t1) − μX(t1)}{Y(t2) − μY(t2)}] = RXY(t1, t2) − μX(t1)μY(t2)

If the cross-covariance is identically zero, the processes are said to be uncorrelated. Note the following:
1. If X(t) and Y(t) are mutually independent, then they are uncorrelated.
2. If X(t) and Y(t) are uncorrelated, they are not necessarily independent.
3. If X(t) and Y(t) are Gaussian processes, however, being uncorrelated is both a sufficient and a necessary condition for mutual independence.
3.2.2 Autocorrelation
By considering autocorrelation functions, the meaning of correlation will be further
explored.
RX(t1, t2) = E[X(t1)X(t2)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x1x2 fX(x1, x2; t1, t2) dx1 dx2 (3.87)
If X(t) is ergodic, it must also be stationary; one can then use temporal averages to replace the ensemble average:

RX(t1, t2) = RX(τ) = lim_{T→∞} (1/(2T)) ∫_{−T}^{T} x(t) x(t + τ) dt (3.89)
In Equation 3.89
t1 = t (3.90a)
t2 = t + τ (3.90b)
That is
τ = t2 − t1 (3.90c)
For a single record,

RX(τ) = lim_{T→∞} (1/(2T)) ∫_{−T}^{T} xk(t) xk(t + τ) dt = E[Xk(t) Xk(t + τ)] (3.91)

In this instance, the subscript k stands for the kth record. It can be shown that the notation "k" in Equation 3.91 is necessary.
Readers may consider under what conditions Equation 3.92 always holds. Due to the orthogonality of sines and cosines, the integration in Equation 3.92 cancels the "uncorrelated" frequency components; only the correlated terms are left. This unveils the physical meaning of correlation analysis. From this point on, let us also refer to random processes as "signals."
In Figure 3.4, the correlation functions of several typical signals are plotted. The first case shows a sinusoidal signal, whose autocorrelation function never decays. Note that the autocorrelation of a signal that does not contain a sine wave always decays. Furthermore, the closer a signal is to sinusoidal, the more slowly its autocorrelation function decays; conversely, the farther it is from sinusoidal, the more quickly it decays. This is also shown in Figure 3.5.
3.2.2.2.1 Bounds
Consider the case in which X(t) is stationary.

[Figure 3.4: Autocorrelation functions of typical signals: a sine wave, a sine wave contaminated by random noise, narrow-band random noise, and broad-band random noise.]
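The behavior sketched in Figure 3.4 is easy to reproduce: the temporal autocorrelation of a sine wave buried in white noise retains the periodic component, while the noise contribution dies out away from zero lag. Sampling step, noise level, and record length in this sketch are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
t = np.arange(n) * 0.01                      # 0.01 s sampling step
x = np.sin(2 * np.pi * 1.0 * t) + rng.normal(0.0, 1.0, n)  # 1 Hz sine in noise

def r_hat(x, lag):
    # temporal autocorrelation estimate at an integer lag (Eq. 3.89 truncated)
    return np.mean(x[:n - lag] * x[lag:])

# At one full period (100 samples) the sine survives: (1/2)cos(2*pi*1) = 0.5;
# at half a period it flips sign: (1/2)cos(pi) = -0.5.  Noise terms average out.
assert abs(r_hat(x, 100) - 0.5) < 0.02
assert abs(r_hat(x, 50) + 0.5) < 0.02
```

Even though the noise variance equals twice the sine power here, the periodic component is recovered cleanly away from zero lag.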
Note that −1 ≤ ρXX(τ) ≤ 1. Thus,

μX² − σX² ≤ RX(τ) ≤ μX² + σX²

Because

RX(0) = E[X²(t)] = σX² + μX² ≥ 0 (3.95)

the autocorrelation function attains its maximum value RX(0) at τ = 0.
3.2.2.2.2 Symmetry
If X(t) is stationary, then

RX(−τ) = RX(τ)

that is, the autocorrelation function is symmetric about

τ = 0 (3.98)
[Figure 3.5: Autocorrelation functions of further typical signals: a constant (RX(τ) = C), a sinusoid, white noise, low-pass white noise, band-pass white noise, an exponential, a cosine exponential, and a sine-cosine exponential.]
Equation 3.99 implies that when the time difference becomes sufficiently large, X(t) and X(t + τ) become uncorrelated and σXX(τ) vanishes, so that RX(τ) approaches μX².
Example 3.13
Given

RX(τ) = 36 + 1/(1 + 36τ²)

find the mean and variance of X(t).
1. Since σXX(τ) vanishes as τ → ∞,

RX(∞) = μX² = 36

thus, the mean is μX = ±6.
2. The variance can be written as

σX² = RX(0) − μX² = 37 − 36 = 1
To quantify how quickly the correlation decays, a correlation time scale can be defined as

θ = lim_{T→∞} (1/T) ∫₀^{T} ρXX(τ) dτ = lim_{T→∞} (1/(σX² T)) ∫₀^{T} RX(τ) dτ (3.100)

Note that for time lags τ considerably longer than θ, little correlation in the random process can be expected.
RX(τ) = σX² sin(ωCτ)/(ωCτ) (3.101)
Figure 3.6 Low-pass filter. (a) Practical low-pass filter, (b) idealized low-pass filter.
Random Processes in the Time Domain 153
R ZX(τ) = E[Z(t) X(t + τ)] = E[{X(t) + Y(t)}{X(t + τ)}] = R X(τ) + R XY (τ) (3.105)
If X(t) and Y(t) are uncorrelated with zero mean, that is, RXY(τ) = 0, then

RZX(τ) = RX(τ)

Therefore, for the case in which X(t) and Y(t) are uncorrelated with zero mean, cross-correlating Z(t) = X(t) + Y(t) with X(t) recovers the autocorrelation of X(t) alone.
[Figure: Measurement across a transmission path: the input X(t) travels a distance d at speed r and is received, together with noise N(t), as the output Y(t). In a nondispersive medium, the propagation speed r is independent of frequency; in a dispersive medium, r depends on frequency.]

For a nondispersive path with attenuation factor a, the received signal is

Y(t) = aX(t − d/r) + N(t) (3.110)

and, if N(t) is uncorrelated with X(t), the cross-correlation function is

RXY(τ) = aRX(τ − d/r) (3.112)

which peaks at the travel time τ = d/r.
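Equation 3.112 underlies a standard engineering use of the cross-correlation function: estimating a propagation delay from the location of the peak of RXY(τ). The following is a minimal sketch with a hypothetical delay of 70 samples, a white-noise input, and arbitrary attenuation and noise levels:

```python
import numpy as np

rng = np.random.default_rng(8)
n, delay, a = 200_000, 70, 0.8               # "true" delay d/r in samples
x = rng.normal(0.0, 1.0, n)                  # input signal X(t)
y = np.zeros(n)
y[delay:] = a * x[:n - delay]                # Y(t) = a X(t - d/r) ...
y += rng.normal(0.0, 0.5, n)                 # ... + N(t), uncorrelated noise

# Estimate R_XY at a range of candidate lags; the peak marks the delay
lags = np.arange(0, 200)
rxy = np.array([np.mean(x[:n - L] * y[L:]) for L in lags])
assert int(lags[np.argmax(rxy)]) == delay
```

Because the input is white, RX(τ) is concentrated at τ = 0 and the peak of RXY lands sharply at d/r; for band-limited inputs the peak broadens but stays centered on the travel time.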
If X(t) is periodic, X(t) = X(t + T), then the autocorrelation function is also periodic: RX(T + τ) = RX(τ).
For a sequence of random variables {Xj}, it is problematic to write the ordinary limit

lim_{j→∞} Xj = X0

because the Xj are random. Using f(Xj) to denote the relative frequency of an event, it is likewise problematic to write

lim_{j→∞} f(Xj) = p

This is because the ordinary limit would require that, for every ε > 0, there exists N > 0 such that for all n > N,

|f(Xn) − p| < ε

However, if we let ε < p, then

P{f(Xn) = 0} = (1 − p)^n ≠ 0

so the event |f(Xn) − p| ≥ ε retains positive probability for every n. Instead, convergence is defined in probability:

lim_{j→∞} P(Xj = X0) = 1 (3.113a)

or

lim_{j→∞} P(|Xj − X0| ≥ ε) = 0 (3.113b)
3.2.3.2 Mean-Square Limit
The second important notion for derivatives in the temporal process is the limit. Because the process is random, a different definition of convergence is needed.
3.2.3.2.1 Definition
Let {Xn} be a real series of random variables, n = 0, 1, 2, 3, …, whose mean-square values exist:

E[Xn²] < ∞ (3.114)

If

lim_{n→∞} E[|Xn − X0|²] = 0 (3.115)

then X0 is called the mean-square limit of {Xn}, denoted by

l.i.m._{n→∞} Xn = X0 (3.117)
3.2.3.2.2 Property
Let {Xn} and {Yn} be two real series of random variables, each with limited mean and variance, such that

l.i.m._{n→∞} Xn = X0 and l.i.m._{n→∞} Yn = Y0

Then, for constants a and b, we have

l.i.m._{n→∞} (aXn + bYn) = aX0 + bY0

and

lim_{n→∞} E[Xn²] = E[X0²] (3.121)
Example 3.15
If, for random variables X and Y, we have EX < ∞ and EY < ∞, then the complex
valued random variable Z = X + jY has its mathematical expectation EZ given by
EZ = EX + jEY
Note that E[cos(tW)] < ∞ and E[sin(tW)] < ∞, so that the characteristic function
of the random variable W, ϕW (t), always exists.
It can be proven that the characteristic function and the PDF of a random variable uniquely determine each other. For example, for a random variable W whose distribution is Poisson with parameter λ,

φW(t) = e^{λ(e^{jt} − 1)}

Now, let us show that the mean-square limit of a Poisson random sequence is a Poisson random variable. Let {Xn, n = 1, 2, …} denote a Poisson random sequence with parameters λn, and suppose

l.i.m._{n→∞} Xn = X

Mean-square convergence implies convergence of the means,

lim_{n→∞} E(Xn) = E(X)

which implies

lim_{n→∞} λn = λ

Consequently,

φX(t) = lim_{n→∞} φXn(t) = lim_{n→∞} e^{λn(e^{jt} − 1)} = e^{λ(e^{jt} − 1)}

which implies that X is a random variable with a Poisson distribution, for its characteristic function is that of a Poisson variable.
3.2.3.3 Mean-Square Continuity
A real process X(t) is mean-square continuous at t ∈ T if

l.i.m._{h→0} X(t + h) = X(t) (3.122a)

For a stationary process, mean-square continuity holds if and only if RX(τ) is continuous at τ = 0, as the following shows.
Proof:
E[{X(t + τ) − X(t)}²] = E[{X(t + τ)}²] + E[{X(t)}²] − 2E[X(t + τ)X(t)] = 2(RX(0) − RX(τ))
Example 3.16
Let {Xn, n ≥ 1} be mutually independent random variables with the distribution

P(Xn = n) = 1/n², P(Xn = 0) = 1 − 1/n²

To check whether the sequence converges in mean square, examine the expectation E(|Xm − Xn|²):

E(|Xm − Xn|²) = E(Xm²) + E(Xn²) − 2E(Xn)E(Xm)

It is seen that

E(Xm) = m(1/m²) = 1/m, E(Xm²) = m²(1/m²) = 1

and

E(Xn) = n(1/n²) = 1/n, E(Xn²) = n²(1/n²) = 1

Therefore, when m ≠ n, by independence,

E(XmXn) = E(Xm)E(Xn) = 1/(mn)

We now have

lim_{m,n→∞} E(|Xm − Xn|²) = lim_{m,n→∞} [2 − 2/(mn)] = 2 ≠ 0

Thus, {Xn, n ≥ 1} does not converge in mean square.
The mean-square derivative of X(t) is defined as

Ẋ(t) = l.i.m._{h→0} [X(t + h) − X(t)]/h (3.122b)

Furthermore, for a stationary process,

d²RX(τ)/dτ² = −RẊẊ(τ) (3.124)

In addition, we have

d³RX(τ)/dτ³ = −RẊẌ(τ) (3.126)

and

d⁴RX(τ)/dτ⁴ = RẌẌ(τ) (3.127)
Example 3.17
The ordinary random walk (also called the binomial process) is the simplest random process. Using Zt to denote the increment from time t − 1 to time t, taking exclusively the values +1 or −1, we have

Zt = Xt − Xt−1

or

Xt = X0 + Σ_{k=1}^{t} Zk, t = 1, 2, …

Thus, X0, Z1, Z2, … are independent and, for all k, the steps are identically distributed, P(Zk = 1) = p and P(Zk = −1) = 1 − p. For a more general case, we can have the binomial process by replacing +1 with f and −1 with b (f stands for "walking" forward and b stands for backward); that is, P(Zk = f) = p and P(Zk = b) = 1 − p.
Consider, in particular, steps recorded as f = 1 and b = 0, so that Xt counts successes, and let the number of steps be controlled by a Poisson random variable N with parameter λ, in the sense that the walk stops at step N. For n > m, conditioning on N,

E(|Xn − Xm|²) = E[E{(Xn − Xm)² | N}] = P(N ≤ m) E{(Xn − Xm)² | N ≤ m} + Σ_{k=m+1}^{n} P(N = k) E{(Xn − Xm)² | N = k} + Σ_{k=n+1}^{∞} P(N = k) E{(Xn − Xm)² | N = k}

= 0 + Σ_{k=m+1}^{n} (λ^k/k!) e^{−λ} (k − m)p[1 + (k − m − 1)p] + Σ_{k=n+1}^{∞} (λ^k/k!) e^{−λ} (n − m)p[1 + (n − m − 1)p]

≤ Σ_{k=m+1}^{n} (λ^{k−2}/(k − 2)!) e^{−λ} λ²p + Σ_{k=n+1}^{∞} (λ^{k−2}/(k − 2)!) e^{−λ} λ²p

= e^{−λ} λ²p Σ_{k=m+1}^{∞} λ^{k−2}/(k − 2)! = e^{−λ} λ²p Σ_{k=m−1}^{∞} λ^k/k!

Furthermore, considering the series

S = Σ_{k=0}^{∞} λ^k/k! = e^λ

and letting

Sn = Σ_{k=0}^{n} λ^k/k!

we can write

S − S_{m−2} = Σ_{k=m−1}^{∞} λ^k/k! → 0, as m → ∞

so that

lim_{m→∞} e^{−λ} λ²p Σ_{k=m−1}^{∞} λ^k/k! = 0

That is,

lim_{m,n→∞} E(|Xn − Xm|²) = 0

and the sequence converges in mean square.
For a stationary, mean-square differentiable process, the derivative of the autocorrelation function vanishes at the origin:

dRX(τ)/dτ |_{τ=0} = RXẊ(0) = 0 (3.130)

and the mean-square derivative of the kth sample function is

Ẋ(t, k) = l.i.m._{h→0} [X(t + h, k) − X(t, k)]/h (3.134)
Because X(t, k) is Gaussian, we see that [X(t + h, k) − X(t, k)] is also Gaussian and
could therefore conclude that X (t , k ) is Gaussian as well.
Example 3.18
Let {X(t), 0 < t ≤ 1} be defined by X(t) = Y(i) for 2^{−i−1} < t ≤ 2^{−i}, i = 0, 1, 2, …, where the Y(i) are given random variables. Within each interval, X(t) is constant, so

Ẋ(t) = 0

there, and at t = 2^{−i} the left mean-square derivative exists:

Ẋ⁻(t) = 0

However, the right mean-square derivative,

Ẋ⁺(t) = l.i.m._{h→0⁺} [X(t + h) − X(t)]/h = l.i.m._{h→0⁺} [Y(i − 1) − Y(i)]/h

does not exist; therefore, at t = 2^{−i} (i = 1, 2, …), {X(t), 0 < t ≤ 1} is not mean-square differentiable.
Problems
1. Identify the state space and index set for the following random process
a. Temperature measured hourly at an airport. Continuous state and dis-
crete index
b. Elevation of sea surface measured continuously at a wave staff. Continu
ous state and continuous index
2. A random process is defined by tossing a coin:

X(t) = { cos πt, for outcome H;  2t, for outcome T },  −∞ < t < ∞ (P3.1)

where H and T stand for the head and tail outcomes when a coin is tossed. Note that P(H) = P(T) = 0.5.
a. Find the one-dimensional distribution F(x; 0.5) and F(x, 1) of the ran-
dom process X(t)
b. Find the two-dimensional distribution F(x1,x2, 0.5, 1) of X(t)
3. A deterministic square wave process Xsquare(t) with period T is shown in Figure P3.1. Find the means, variances, and autocorrelations for the following random process variations of this function.

[Figure P3.1: A square wave Xsquare(t) of unit amplitude and period T.]
pN(n, t) = ((λt)^n/n!) e^{−λt}, n ≥ 0, t ≥ 0

Denote

Δ = max_{0≤k≤n−1} (tk+1 − tk) (4.2)

and choose intermediate points

tk ≤ tk′ ≤ tk+1, k = 0, 1, …, n − 1 (4.3)
If the term Σ_{k=0}^{n−1} X(tk′)(tk+1 − tk) possesses a mean-square limit ∫_a^b X(t) dt, namely,

l.i.m._{n→∞} Σ_{k=0}^{n−1} X(tk′)(tk+1 − tk) = ∫_a^b X(t) dt (4.4)

that is,

lim_{Δ→0} E[ | Σ_{k=0}^{n−1} X(tk′)(tk+1 − tk) − ∫_a^b X(t) dt |² ] = 0

then X(t) is said to be mean-square integrable on [a, b].
E[ ∫_a^b X(s) ds ∫_a^b X(t) dt ] = ∫_a^b ∫_a^b E[X(s)X(t)] ds dt = ∫_a^b ∫_a^b RX(s − t) ds dt = ∫_a^b ∫_a^b RX(τ) ds dt (4.5)

where

τ = s − t (4.6)
Example 4.1
A Wiener process X(t) is a process with stationary independent Gaussian increments, X(0) = 0, zero mean, and variance D[X(t)] = σ²t, with

σ > 0 (4.8)

(If the distributions of the increments depend only on the time differences t2 − t1, t3 − t2, …, tn − tn−1, then X(t) is said to have stationary independent increments; a Wiener process is a stationary independent increment process.) Consider now whether a Wiener process is mean-square integrable. By definition of the Wiener process,

E[X(t)] = 0 (4.9)

Furthermore, the following is also true:

σX(s, t) = E[X(s)X(t)] = { σ²s, s < t;  σ²t, t < s }

or

σX(s, t) = E[X(s)X(t)] = σ² min(s, t) = RX(s, t) (4.10)

Indeed, for s < t,

E[X(s)X(t)] = E[X(s){X(t) − X(s) + X(s)}] = E[X(s){X(t) − X(s)}] + E[X(s)X(s)] = 0 + D[X(s)] = σ²s
To examine integrability, consider

∫₀^u ∫₀^u E[X(s)X(t)] ds dt = ∫₀^u ∫₀^u RX(s, t) dt ds = ∫₀^u ∫₀^u σ² min(s, t) dt ds

For the inner integral ∫₀^u σ² min(s, t) dt, there exist two possibilities: when t < s, the integrand is σ²t, and when t > s, it is σ²s. Therefore,

∫₀^u ∫₀^u σ² min(s, t) dt ds = σ² ∫₀^u [ ∫₀^s t dt + ∫_s^u s dt ] ds = σ² ∫₀^u [s²/2 + s(u − s)] ds = (σ²/3) u³

Since this expectation is finite, ∫₀^u X(t) dt exists in the mean-square sense.
Example 4.2
Using Y(u) to denote the integral given by the above example, find its mean, auto-
correlation function, and variance, where
Y(u) = ∫_a^b X(t) dt = ∫₀^u X(t) dt

For the mean,

E[Y(u)] = ∫₀^u E[X(t)] dt = 0
For the autocorrelation function, first let 0 < v ≤ u. This case is illustrated in Figure 4.2a: the rectangle of integration splits into the square 0 < s, t < v, in which both orderings of s and t occur, and the strip v < t ≤ u, in which min(s, t) = s. Therefore, the correlation function can be written as

RY(u, v) = ∫₀^u ∫₀^v E[X(s)X(t)] ds dt = ∫₀^u ∫₀^v σ² min(s, t) ds dt
= 2 ∫₀^v ds ∫₀^s σ²t dt + ∫₀^v σ²s ds ∫_v^u dt
= 2 ∫₀^v (σ²s²/2) ds + ∫₀^v σ²(u − v)s ds = (σ²v²/6)(3u − v)

[Figure 4.2: Integration domains for RY(u, v): (a) the case 0 < v ≤ u; (b) the case 0 < u ≤ v.]
Similarly, when 0 < u ≤ v (Figure 4.2b, with domains Dl and Dr), we can write

RY(u, v) = (σ²u²/6)(3v − u)

and the variance is

D[Y(u)] = RY(u, u) = σ²u³/3
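The result D[Y(u)] = σ²u³/3 can be checked by integrating simulated Wiener paths with a Riemann sum; the step count, path count, and seed in this sketch are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(9)
sigma, u, n_steps, n_paths = 1.0, 1.0, 1000, 20_000
dt = u / n_steps

# Sample paths of a Wiener process with E[X(s)X(t)] = sigma^2 min(s, t):
# cumulative sums of independent N(0, sigma^2 dt) increments
dW = rng.normal(0.0, sigma * np.sqrt(dt), (n_paths, n_steps))
X = np.cumsum(dW, axis=1)

Y = X.sum(axis=1) * dt                       # Y(u) = integral of X over [0, u]
assert abs(Y.mean()) < 0.02                  # E[Y(u)] = 0
assert abs(Y.var() - sigma**2 * u**3 / 3) < 0.02   # D[Y(u)] = sigma^2 u^3 / 3
```

The sample variance of the integrated paths lands on u³/3 up to discretization and Monte Carlo error, matching the double-integral calculation above.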
[Figure 4.3: Diagram relating the classes of stationary processes: strictly stationary, nth-order stationary, second-order stationary, and first-order stationary, together with the weakly stationary, autocorrelation-stationary, and mean-stationary classes; the stationary Gaussian process belongs to both the strict and weak categories.]

Figure 4.3 shows the relationships between strictly stationary processes and various weakly stationary processes.
It is noted that the stationary Gaussian process is both strictly and weakly stationary. In Figure 4.3, the second-order stationary process and especially the weakly stationary process are of great importance, because in this situation the autocorrelation function is a function only of τ, the time difference t2 − t1; namely, RX(t1, t2) = RX(t2 − t1) = RX(τ). It will be shown that the Fourier transforms of such correlation functions have deterministic spectra, although the processes themselves are random.
Additionally, to analyze random vibration, we need to consider both the input and output processes, namely, the excitation process X(t) and the response process Y(t).
For a linear time-invariant system, if the excitation is stationary, then the response
will also be stationary. In this circumstance, the cross-correlation functions will
also be only functions of τ, the time lag, namely, R XY (t1, t2) = R XY (t2 − t1) = R XY (τ).
Therefore, we will see that the Fourier transform of the cross-correlation functions
also have deterministic spectra.
Additionally, with the help of the Fourier transforms of these correlation functions, we can further obtain the transfer functions, which are among the most fundamental concepts of vibrational systems. Practically speaking, the transfer functions obtained through the Fourier transforms of correlation functions will be notably more accurate than measurements based on the direct definition of transfer functions.
R_X(τ) = Σ_{k=1}^{m} R_{X_k}(τ) = (1/2π) Σ_{k=1}^{m} σ_k² cos ω_k τ   (4.12)

R_X(τ) = σ² lim_{Δω→0} Σ_{k=1}^{m} (1/2π) g(ω_k) Δω cos ω_k τ = (σ²/2π) ∫₀^∞ g(ω) cos ωτ dω   (4.13)
Random Processes in the Frequency Domain 171
That is, R X(τ) and σ2g(ω) are a Fourier pair, denoted by (recall Equation 3.72)
The relationship between R X(τ) and σ2g(ω) can be extended to general cases. This
is one of the fundamental approaches in dealing with random processes.
Example 4.3
Given the following density function σ2g(ω) taken from a stationary process
1. σ²g(ω) = Σ_{p=1}^{n} a_p/(ω² + b_p²),  b_p > 0,  p = 1, 2, …, n

and

2. σ²g(ω) = { a², ω₁ ≤ |ω| ≤ 2ω₁
              0, elsewhere

1. Note the Fourier pair

a_p/(ω² + b_p²) ⇔ (a_p/(2b_p)) e^{−b_p|τ|}

Therefore,

R_X(τ) = Σ_{p=1}^{n} (a_p/(2b_p)) e^{−b_p|τ|}

2. Let

σ²g₁(ω) = { a², |ω| ≤ 2ω₁
            0, elsewhere

and

σ²g₂(ω) = { a², |ω| < ω₁
            0, elsewhere
we have σ²g(ω) = σ²g₁(ω) − σ²g₂(ω). Furthermore,

R₁(τ) = (1/2π) ∫_{−2ω₁}^{2ω₁} a² e^{jωτ} dω = (a²/2π) ∫_{−2ω₁}^{2ω₁} (cos ωτ + j sin ωτ) dω = (2ω₁a²/π) · sin(2ω₁τ)/(2ω₁τ) = a² sin(2ω₁τ)/(πτ)

and

R₂(τ) = (1/2π) ∫_{−ω₁}^{ω₁} a² e^{jωτ} dω = (a²/2π) ∫_{−ω₁}^{ω₁} (cos ωτ + j sin ωτ) dω = a² sin(ω₁τ)/(πτ)

so that R_X(τ) = R₁(τ) − R₂(τ) = a² [sin(2ω₁τ) − sin(ω₁τ)]/(πτ).
S_X(ω) = ∫_{−∞}^{∞} R_X(τ) e^{−jωτ} dτ   (4.15)

R_X(τ) = (1/2π) ∫_{−∞}^{∞} S_X(ω) e^{jωτ} dω   (4.16)
Example 4.4
A stationary process {X(t), −∞ < t < ∞} is zero-mean. It has the PSD function as
S_X(ω) = 6ω²/(ω⁴ + 5ω² + 4)

By partial fractions,

S_X(ω) = 6ω²/(ω⁴ + 5ω² + 4) = A/(ω² + 4) + B/(ω² + 1)

We can obtain

A + B = 6

and

A + 4B = 0

so that A = 8, B = −2, and

S_X(ω) = 8/(ω² + 4) − 2/(ω² + 1)

Furthermore,

R_X(τ) = F⁻¹[S_X(ω)] = F⁻¹[8/(ω² + 4)] + F⁻¹[−2/(ω² + 1)] = 2e^{−2|τ|} − e^{−|τ|}

D[X(t)] = R(0) − μ_X²(t) = 1
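The closed-form result of Example 4.4 can be cross-checked by performing the inversion of Equation 4.16 numerically. A sketch (the frequency grid and truncation limits are arbitrary choices, not from the text):

```python
import numpy as np

# Auto-PSD from Example 4.4
w = np.linspace(-500.0, 500.0, 200001)
dw = w[1] - w[0]
S = 6 * w**2 / (w**4 + 5 * w**2 + 4)

# Equation 4.16: R_X(tau) = (1/2pi) * integral of S_X(w) e^{j w tau} dw
# (S_X is even, so only the cosine part contributes)
tau = np.array([0.0, 0.5, 1.0])
R_num = (S * np.cos(np.outer(tau, w))).sum(axis=1) * dw / (2 * np.pi)

R_closed = 2 * np.exp(-2 * np.abs(tau)) - np.exp(-np.abs(tau))
print(R_num, R_closed)
```

At τ = 0 the numerical inversion recovers the variance D[X(t)] = 1 to within truncation error.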
∫_{−∞}^{∞} |R_X(τ)| dτ < ∞   (4.18)
For a non-zero mean process, X′(t), when τ→∞, Equation 3.99 will be rewritten
with the help of the mean-square limit:
so that
E(X(t)) = 0 (4.21)
Example 4.5
Show that the increment X(t) = W(t + s) − W(t) of a Wiener process {W(t), t ≥ 0} is a stationary process, and find its autocorrelation function as well as its auto-PSD function.
First,
Second,
= RW (t + s, t + s + τ) − RW (t + s, t + τ) − RW (t, t + s + τ) + RW (t, t + τ)
Third,
R_X(τ) = { 0, τ < −s
           σ²(τ + s), −s ≤ τ < 0
           σ²(s − τ), 0 ≤ τ ≤ s
           0, τ > s
       = { σ²(s − |τ|), |τ| ≤ s
           0, elsewhere
Then

S_X(ω) = F[R_X(τ)] = ∫_{|τ|≤s} σ²(s − |τ|) e^{−jωτ} dτ = 4σ² sin²(sω/2)/ω²
Ψ(ω) = ∫_{−∞}^{ω} S_X(ϖ) dϖ = ∫_{−∞}^{∞} R_X(τ) · (e^{−jωτ} − 1)/(jτ) dτ   (4.22)
It can be proven that any stationary random process can be regarded as the superposition of mutually uncorrelated harmonic oscillations of various frequencies with random phases and amplitudes. In this regard, the spectral distribution function Ψ(ω) unveils a different insight into the random time series because Ψ(ω) is also deterministic. Note that in Equation 4.22, the integral limit starts from −∞, which may restrict the existence of Ψ(ω) in certain cases. However, in engineering applications, the lowest frequency of a PSD function starts from zero; namely, for engineering applications, Ψ(ω) always exists.
Example 4.6
A stationary process has auto-PSD function given by the following equation. Find
the corresponding spectral distribution function, Ψ(ω).
S_X(ω) = { S₀,   |ω| < ω_C
           S₀/2, |ω| = ω_C
           0,    |ω| > ω_C

Ψ(ω) = ∫_{−∞}^{ω} S_X(ϖ) dϖ = ∫_{−ω_C}^{ω} S₀ dϖ = S₀(ω + ω_C),  −ω_C ≤ ω ≤ ω_C
4.1.1.5.1 Symmetry
The auto-PSD function is an even function, namely, S_X(−ω) = S_X(ω) (Equation 4.23).

Proof:
Because both R_X(τ) and cos(ωτ) are even functions, in this case, we have

S_X(ω) = 2 ∫₀^∞ R_X(τ) cos(ωτ) dτ   (4.24)

From Equation 4.24, it is easy to see that Equation 4.23 holds; furthermore,

R_X(τ) = (1/π) ∫₀^∞ S_X(ω) cos(ωτ) dω   (4.25)
Proof:
Σ_{j=1}^{n} Σ_{k=1}^{n} α_j α_k R_X(t_j − t_k) ≥ 0   (4.26)

We thus have

∫_{−∞}^{∞} ∫_{−∞}^{∞} g(s) g(t) R_X(t − s) ds dt ≥ 0   (4.27)

By denoting

q(u) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} g(s) g(u + t) R_X(u + t − s) ds dt ≥ 0   (4.28)

Q(ω) = ∫_{−∞}^{∞} q(u) e^{−jωu} du = ∫_{−∞}^{∞} [ ∫_{−∞}^{∞} ∫_{−∞}^{∞} g(s) g(u + t) R_X(u + t − s) ds dt ] e^{−jωu} du   (4.29)

= 2π |G(ω)|² S_X(ω) ≥ 0
R_X(0) = E[X²(t)] = σ_X² = (1/2π) ∫_{−∞}^{∞} S_X(ω) e⁰ dω = (1/2π) ∫_{−∞}^{∞} S_X(ω) dω   (4.30a)

(1/2π) ∫_{−∞}^{∞} S_X(ω) dω = σ_X²   (4.30b)

σ_X² = (1/2π) Ψ(∞)   (4.31)
Example 4.7
Check if the following functions are auto-PSD functions. If the function is an auto-
PSD function, find the corresponding autocorrelation function and mean-square
value.
1. S₁(ω) = (ω² + 9)/[(ω² + 4)(ω + 1)²]

Because

S₁(−ω) = (ω² + 9)/[(ω² + 4)(−ω + 1)²] ≠ S₁(ω)

the function is not even; hence, it is not an auto-PSD function.

2. S₂(ω) = (ω² + 4)/(ω⁴ − 10ω² + 3)

S₂(1) = −0.8333 < 0, so S₂(ω) is not nonnegative; hence, it is not an auto-PSD function.

3. S₃(ω) = e^{−jω²}/(ω² + 6)

This function is complex-valued; hence, it is not an auto-PSD function.

4. S₄(ω) = (ω² + 1)/(ω⁴ + 5ω² + 6)

This function is even, real-valued, and nonnegative, so it is an auto-PSD function. By partial fractions,

S₄(ω) ≡ S_X(ω) = (ω² + 1)/[(ω² + 2)(ω² + 3)] = −1/(ω² + 2) + 2/(ω² + 3)

= −(1/(2√2)) · 2√2/[ω² + (√2)²] + (1/√3) · 2√3/[ω² + (√3)²]

R_X(τ) = F⁻¹[S_X(ω)] = −(1/(2√2)) F⁻¹{2√2/[ω² + (√2)²]} + (1/√3) F⁻¹{2√3/[ω² + (√3)²]}

= −(1/(2√2)) e^{−√2|τ|} + (1/√3) e^{−√3|τ|}

E[X²(t)] = R_X(0) = −1/(2√2) + 1/√3
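As a numerical sanity check of item 4, the mean-square value can also be obtained by integrating S₄(ω)/2π directly (Equation 4.30). The sketch below (an illustration, not from the text) assumes the factored denominator (ω² + 2)(ω² + 3):

```python
import numpy as np

w = np.linspace(-400.0, 400.0, 160001)
dw = w[1] - w[0]
S4 = (w**2 + 1) / ((w**2 + 2) * (w**2 + 3))

# Mean-square value via Equation 4.30: E[X^2] = R_X(0) = (1/2pi) * integral of S_X
ms_num = S4.sum() * dw / (2 * np.pi)
ms_closed = -1 / (2 * np.sqrt(2)) + 1 / np.sqrt(3)
print(ms_num, ms_closed)
```

Both values should agree near 0.224, confirming the partial-fraction inversion.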
is often not satisfied by random processes. To deal with this problem, we introduce the power spectrum instead. In the following, let us discuss this issue in detail.
4.1.2.2 Energy Equation
First, consider the amount of energy contained in a dynamic process X(t).
The left-hand side is the total energy in (−∞, ∞), which is the time domain.
Remember that X(t) is a random process in the time domain.
The Parseval equation is important in signal analyses. Nevertheless, for random pro-
cesses, there may be two problems. The primary issue is that in the domain (−∞, ∞), the
energy can become infinite so that the energy integration does not exist.
X(t) continues forever, and Equation 4.18 will not be satisfied, thus the spectrum
X(ω), as described in Equation 4.34, does not exist. As a result, we need an alterna-
tive approach to consider the average power spectrum.
X_T(t) = { X(t), 0 ≤ t ≤ T
           0,    elsewhere   (4.35)

X(ω, T) = ∫_{−∞}^{∞} X_T(t) e^{−jωt} dt = ∫₀^T X(t) e^{−jωt} dt   (4.36)
Let us denote a function Y(ω) to represent a case with a limited time duration T,
μY (ω) = 0 (4.38)
To further evaluate σY2 (ω ), change the variables in Equation 4.39 by letting (see
Figure 4.4)
τ = t − s (4.40)
Note that the autocorrelation function R X(τ) is even. Exchanging the order of
mathematical expectation and integration, we have
Figure 4.4 Integration domains of variance function. (a) Original coordinates (t, s).
(b) Transformed coordinates (t, τ). (c) In (t, τ) integrating first on t.
σ_Y²(ω) = E[ ∫₀^T ∫_{t−T}^{t} X(t) X(t − τ) e^{−jωτ} dτ dt ] = ∫₀^T ∫_{t−T}^{t} E[X(t) X(t − τ)] e^{−jωτ} dτ dt

[using R_X(−τ) = R_X(τ)]

= ∫₀^T ∫_{t−T}^{t} R_X(τ) e^{−jωτ} dτ dt   (4.41)
Integrating first over t (see Figure 4.4c), with t ∈ [0, τ + T] over Dl (τ ∈ [−T, 0]) and t ∈ [τ, T] over Du (τ ∈ [0, T]),

σ_Y²(ω) = ∫_{−T}^{0} R_X(τ) e^{−jωτ} [ ∫₀^{τ+T} dt ] dτ + ∫₀^{T} R_X(τ) e^{−jωτ} [ ∫_{τ}^{T} dt ] dτ

= ∫_{−T}^{0} R_X(τ) e^{−jωτ} (τ + T) dτ + ∫₀^{T} R_X(τ) e^{−jωτ} (T − τ) dτ = ∫_{−T}^{T} R_X(τ) e^{−jωτ} (T − |τ|) dτ

(4.42)
182 Random Vibration
σ_Y²(ω) → ∞ as T → ∞. Dividing by T,

(1/T) σ_Y²(ω) = ∫_{−T}^{T} R_X(τ) e^{−jωτ} (1 − |τ|/T) dτ   (4.43)
The operation of dividing by T indicates that the resulting term is power, instead
of energy. That is why, in the literature, this function is often called the “power”
spectral density function.
Note that τ is the time difference between s and t, and both are elements inside
(0, T). Thus, when T→∞, it results in
lim_{T→∞} (1 − |τ|/T) = 1   (4.44)

Therefore,

lim_{T→∞} (1/T) σ_Y²(ω) = lim_{T→∞} ∫_{−T}^{T} R_X(τ) e^{−jωτ} dτ = ∫_{−∞}^{∞} R_X(τ) e^{−jωτ} dτ = S_X(ω)   (4.45)
Note that Equation 4.45 is derived from the equation σY2 (ω ) = E[ X (ω , T ) X *(ω , T )].
Thus, we can have a very useful formula to obtain the auto-PSD:
S_X(ω) = lim_{T→∞} (1/T) E[|X(ω, T)|²]   (4.46)
Practically, this can be written as though the following average for each Fourier
transform, say, the kth, denoted by Xk(ω, T) was taken from a sample realization of XT (t):
Ŝ_X(ω, T, n) = (1/T)(1/n) Σ_{k=1}^{n} |X_k(ω, T)|²   (4.47)
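Equation 4.47 is essentially an averaged-periodogram estimator. A minimal sketch for discrete-time white noise (an illustration, not from the text; with Δt = 1 the true auto-PSD is flat at σ², and the record length and number of averages are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_avg, sigma = 1024, 400, 2.0       # record length T = n (dt = 1), 400 sample records
acc = np.zeros(n)
for _ in range(n_avg):
    x = rng.normal(0.0, sigma, n)      # one realization X_k(t)
    acc += np.abs(np.fft.fft(x))**2 / n    # |X_k(w, T)|^2 / T, with dt = 1
S_hat = acc / n_avg                    # Equation 4.47

# For discrete-time white noise with dt = 1, the true auto-PSD is flat: S_X = sigma^2
print(S_hat.mean(), sigma**2)
```

Averaging over many records reduces the scatter of the raw periodogram, which by itself does not converge as T grows.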
4.1.2.3.1.1 Average Power Spectrum To obtain the power, we first divide both sides of the Parseval equation by 2T and then take the limit as T → ∞:

lim_{T→∞} (1/2T) ∫_{−T}^{T} X²(t) dt = (1/2π) ∫_{−∞}^{∞} lim_{T→∞} (1/2T) |Y(ω)|² dω   (4.49)

lim_{T→∞} (1/2T) ∫_{−T}^{T} E[X²(t)] dt = (1/2π) ∫_{−∞}^{∞} lim_{T→∞} (1/2T) E|Y(ω)|² dω   (4.50)

Because the mean of the stationary process is zero, we are able to write

lim_{T→∞} E[(1/2T) ∫_{−T}^{T} X²(t) dt] = lim_{T→∞} (1/2T) ∫_{−T}^{T} E[X²(t)] dt = R_X(0) = E[X²(t)]   (4.51)

From Equation 4.50, the integrand on the right-hand side of Equation 4.51 is S_X(ω); therefore,

R_X(0) = (1/2π) ∫_{−∞}^{∞} S_X(ω) dω   (4.53)
4.1.3.1 White Noise
First, consider the white noise process as shown in Figure 4.5.
The auto-PSD function is
SX(ω) = S 0 (4.54)
Example 4.8
A white noise sequence {Xn, n = 0, ±1, ±2, …} has autocorrelation function given by
R_X(n) = { σ², n = 0
           0,  n ≠ 0
Figure 4.5 White noise. (a) Random process X(t). (b) Auto-PSD, S_X(ω) = S₀. (c) Autocorrelation R_X(τ), an impulse at τ = 0.
E(X_n) = 0

D(X_n) = σ² < ∞

Therefore,

S_X(ω) = Σ_{n=−∞}^{∞} R_X(n) e^{−jnω} = R_X(0) e⁰ = σ²
1. Zero mean
E[X(t)] = 0 (4.56a)
Example 4.9
Consider the spectral distribution function Ψ(ω) of the white noise process.
Recall Equation 4.22. We have
Ψ(ω) = ∫_{−∞}^{ω} S_X(ϖ) dϖ = ∫_{−∞}^{ω} S₀ dϖ = S₀ ϖ |_{−∞}^{ω} = S₀(ω + ∞) = ∞
It is shown that, for white noise, the spectral distribution function does not
exist.
Figure 4.6 Low-pass noise. (a) Random process, abscissa: time. (b) Auto-PSD, abscissa: frequency. (c) Autocorrelation, abscissa: time lag τ. (d) Spectral distribution function, abscissa: frequency.
4.1.3.2 Low-Pass Noise
When the white noise is low-pass filtered (as seen in Figure 4.6), we have the auto-
PSD function given by
S_X(ω) = { S₀,   |ω| < ω_C
           S₀/2, |ω| = ω_C   (4.57)
           0,    |ω| > ω_C

R_X(τ) = σ_X² sin(ω_C τ)/(ω_C τ)   (4.58)

σ_X² = 2ω_C S₀   (4.59)
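The sinc-shaped autocorrelation of Equation 4.58 can be verified by applying Equation 4.16 to the band-limited PSD. The sketch below (an illustration, not from the text; the band edge ω_C = 10 is an arbitrary choice) checks the normalized autocorrelation R_X(τ)/R_X(0):

```python
import numpy as np

S0, wC = 1.0, 10.0
w = np.linspace(-wC, wC, 20001)            # support of the low-pass PSD
dw = w[1] - w[0]
tau = np.linspace(0.05, 2.0, 40)

# Equation 4.16 applied to the band-limited PSD (cosine part only, S_X is even)
R = (S0 * np.cos(np.outer(tau, w))).sum(axis=1) * dw / (2 * np.pi)
R0 = (S0 * np.ones_like(w)).sum() * dw / (2 * np.pi)

# The normalized autocorrelation should follow sin(wC*tau)/(wC*tau), Equation 4.58
sinc_theory = np.sin(wC * tau) / (wC * tau)
err = np.max(np.abs(R / R0 - sinc_theory))
print(err)
```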
Recall Equation 4.22; as obtained in Example 4.6, the corresponding spectral distribution function is Ψ(ω) = S₀(ω + ω_C) within the band.
4.1.3.3 Band-Pass Noise
When the white noise is band-pass filtered (as seen in Figure 4.7), we have the auto-
PSD function given by
S_X(ω) = { S₀,   ω_L < |ω| < ω_U
           S₀/2, |ω| = ω_L, ω_U   (4.60)
           0,    elsewhere

R_X(τ) = σ_X² [sin(Δωτ/2)/(Δωτ/2)] cos ω₀τ   (4.61)
In this case:
Δω = ωU − ωL (4.62)
σ 2X = 2 ∆ωS0 (4.63)
Figure 4.7 Band-pass noise: random process X(t); auto-PSD S_X(ω) = S₀ over ω_L < |ω| < ω_U; autocorrelation R(τ).
Figure 4.8 Narrow-band noise: random process X(t); auto-PSD S_X(ω) concentrated at ω = ±ω₀ with area σ_X²/2 at each impulse; autocorrelation R(τ).
4.1.3.4 Narrow-Band Noise
When the white noise is narrow-band filtered (as shown in Figure 4.8), we can obtain
the following auto-PSD function as
4.2 Spectral Analysis
In the second section of this chapter, the auto-PSD function and cross-PSD function,
which are also related to the Wiener–Khinchine formula, are discussed. The focus is
on the spectral analysis of vibration systems.
4.2.1 Definition
4.2.1.1 Cross-Power Spectral Density Function
4.2.1.1.1 Wiener–Khinchine Relation
Defining the cross-power spectral density function SXY (ω) through the Wiener–
Khinchine relations:
S_XY(ω) = ∫_{−∞}^{∞} R_XY(τ) e^{−jωτ} dτ   (4.67)

Similarly,

S_YX(ω) = ∫_{−∞}^{∞} R_YX(τ) e^{−jωτ} dτ   (4.68)

Additionally,

R_XY(τ) = (1/2π) ∫_{−∞}^{∞} S_XY(ω) e^{jωτ} dω   (4.69)

R_YX(τ) = (1/2π) ∫_{−∞}^{∞} S_YX(ω) e^{jωτ} dω   (4.70)
4.2.1.1.3 Symmetry
4.2.1.1.3.1 Skew Symmetry Unlike the auto-PSD functions, the cross-PSD func-
tions are not symmetric. However, cross spectral density functions do have the fol-
lowing relationship referred to as skew symmetry:
and

S_XY(−ω) = S*_XY(ω)   (4.73)

S_YX(−ω) = S*_YX(ω)   (4.74)
Furthermore,
S_XY(ω) = lim_{T→∞} (1/T) E[X*(ω, T) Y(ω, T)]   (4.80)

and

S_YX(ω) = lim_{T→∞} (1/T) E[X(ω, T) Y*(ω, T)]   (4.81)

Similar to Equation 4.48, when the Fourier transforms of X(t) and Y(t) are obtained through the practicality of the kth measurement, namely, X_k(ω, T) and Y_k(ω, T), the estimated cross-PSD functions can be written as

Ŝ_XY(ω, T, n) = (1/T)(1/n) Σ_{k=1}^{n} X*_k(ω, T) Y_k(ω, T)   (4.82)

and

Ŝ_YX(ω, T, n) = (1/T)(1/n) Σ_{k=1}^{n} Y*_k(ω, T) X_k(ω, T)   (4.83)
Equations 4.47, 4.82, and 4.83 enable us to obtain the cross-PSD functions practi-
cally, which will be discussed further in Chapters 7 and 9.
4.2.2 Transfer Function
The transfer function is an important concept in linear systems, given that it com-
pletely describes the dynamic behavior of the system. In this section, two basic issues
are considered. The first is, given the transfer function and input random excitations,
to find the statistical properties of the random output. The second is, by measuring
both the random input and output, to find the transfer function of a linear system.
The mapping of T[(.)] can be analyzed in both the time and the frequency domain.
Let us consider the operation in the time domain first.
1. Foldable
X(t) + Y(t) + Z(t) = [X(t) + Y(t)] + Z(t) = X(t) + [Y(t) + Z(t)] (4.85)
2. Interchangeable
3. Superposition
Input X(t) → System T[.] → Output Y(t)
and the input forcing function. Thus, based on the convolution integral, the statistical
properties (mean values, etc.) of the output as well as correlation between input and
output can also be considered. In Section 4.3, the properties describing the frequency
domain will be further explored.
4.2.2.1.2.1 Linear Filtering and Convolution When the unit impulse response
function, h(t), is known, the random process X(t) being mean square integrable through
the corresponding linear time-invariant system can be described by convolution:
Y(t) = ∫_{−∞}^{∞} X(τ) h(t − τ) dτ = X(t) * h(t)   (4.88)

or

Y(t) = ∫_{−∞}^{∞} X(t − τ) h(τ) dτ = h(t) * X(t)   (4.89)

μ_Y(t) = E[Y(t)] = E[ ∫_{−∞}^{∞} X(t − τ) h(τ) dτ ]   (4.90)

Because

we have

μ_Y(t) = ∫_{−∞}^{∞} h(τ) μ_X(t − τ) dτ = μ_X(t) * h(t)   (4.93)
R_Y(t, u) = E[Y(t)Y(u)] = E[ ∫_{−∞}^{∞} X(t − ζ) h(ζ) dζ ∫_{−∞}^{∞} X(u − ξ) h(ξ) dξ ]   (4.94)

R_Y(t, u) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} E[X(t − ζ) X(u − ξ)] h(ζ) h(ξ) dζ dξ

= ∫_{−∞}^{∞} ∫_{−∞}^{∞} R_X(t − ζ, u − ξ) h(ζ) h(ξ) dζ dξ   (4.95)

Explicitly, the autocorrelation function of the output process Y(t) can be written as the convolution of the three terms h(τ), h(−τ), and R_X(τ).

R_XY(t, u) = E[X(t)Y(u)] = E[ X(t) ∫_{−∞}^{∞} h(ξ) X(u − ξ) dξ ] = ∫_{−∞}^{∞} h(ξ) E[X(t) X(u − ξ)] dξ

= ∫_{−∞}^{∞} h(ξ) R_X(t, u − ξ) dξ   (4.98)
Then
Because
Input X(ω) → System H(ω) → Output Y(ω)
where H(ω) is the Fourier transform of the unit impulse response function h(t) (in
Chapter 6, we will discuss h(t) in more detail):
H(ω) = ∫₀^T h(t) e^{−jωt} dt   (4.106)
It will be shown later in this section that H(ω) is nothing more than the transfer
function.
μX(t) = μX (4.109)
μY = H(j0) μX (4.111)
L[ RY (τ)] = SY (s) = L[(h(τ) * h(− τ))]L[ RX (τ)] = H (s) H *(s) S X (s) (4.112b)
Example 4.10
R_X(τ) = e^{−2|τ|}

S_X(ω) = ∫_{−∞}^{∞} e^{−2|τ|} e^{−jωτ} dτ = 4/(ω² + 4)

Furthermore,

H(ω) = ∫₀^∞ e^{−τ} e^{−jωτ} dτ = 1/(jω + 1)

Thus,

S_Y(ω) = |1/(jω + 1)|² · 4/(ω² + 4) = 4/[(ω² + 1)(ω² + 4)]

So that

R_Y(τ) = F⁻¹[S_Y(ω)] = (2/3) e^{−|τ|} − (1/3) e^{−2|τ|}
Similarly, we apply Laplace transform on both sides of Equation 4.100, that is,
The above equation is obtained with known PSD function and transfer function;
we thus used inverse Fourier and/or inverse Laplace transform to calculate the cross-
correlation function.
Similarly, by again taking the Fourier transform on both sides of Equation 4.102,
we have
Example 4.11
A linear system has unit impulse response function h(t) = e^{−t}, t > 0. Its input is a random process with autocorrelation function R_X(τ) = (1/2) e^{−2|τ|}, applied from t = −∞. In the following, we describe the process of determining the cross-correlation function R_YX(τ).
Upon checking the autocorrelation function, which is only a function of the time lag τ, the input is a stationary process. Using the inverse Laplace transform, we have
R_YX(τ) = L⁻¹[H*(s) S_X(s)] = L⁻¹[L{e^{−t}} L{(1/2)e^{−2|τ|}}]

= L⁻¹[ (1/(s + 1)) · (1/2)(1/(s + 2) + 1/(−s + 2)) ] = L⁻¹[ 2/((s + 1)(s + 2)(−s + 2)) ]

= L⁻¹[ (2/3)/(s + 1) − (1/2)/(s + 2) + (1/6)/(−s + 2) ]

= [(2/3) e^{−τ} − (1/2) e^{−2τ}] u(τ) + (1/6) e^{2τ} u(−τ)

where

u(τ) = { 1, τ ≥ 0
         0, τ < 0
4.2.2.2.1 Definition
Dividing both sides of Equation 4.105 by X(ω) yields

H(ω) = Y(ω)/X(ω)   (4.116)

4.2.2.2.2 Estimation
Stemming from Equation 4.114a,

H(ω) = S_XY(ω)/S_X(ω)   (4.117)

On the right-hand side of Equation 4.114, both the numerator and the denominator can be multiplied by X*(ω), such that

H(ω) = Y(ω) X*(ω)/[X(ω) X*(ω)]   (4.118)
Practically speaking, it is beneficial to let X_k(ω, T) and Y_k(ω, T) denote the kth sample pair. By taking the average of n total sample realizations, Equation 4.118 can be rewritten as

Ĥ(ω) = [ (1/2nT) Σ_{k=1}^{n} Y_k(ω, T) X*_k(ω, T) ] / [ (1/2nT) Σ_{k=1}^{n} X_k(ω, T) X*_k(ω, T) ] = Ŝ_XY(ω)/Ŝ_X(ω)   (4.119)

In addition, on the right-hand side of Equation 4.114, both the numerator and the denominator can be multiplied by Y*(ω), such that

H(ω) = Y(ω) Y*(ω)/[X(ω) Y*(ω)]   (4.120)

Once more, let X_k(ω, T) and Y_k(ω, T) denote the kth sample pair, and take the average of n total sample realizations. This allows Equation 4.120 to be rewritten as

Ĥ(ω) = [ (1/2nT) Σ_{k=1}^{n} Y_k(ω, T) Y*_k(ω, T) ] / [ (1/2nT) Σ_{k=1}^{n} X_k(ω, T) Y*_k(ω, T) ] = Ŝ_Y(ω)/Ŝ_YX(ω)   (4.121)

Both Equations 4.119 and 4.121 can be used to measure the transfer function in practical measurements. To distinguish these two cases, an estimation of the transfer function through Equation 4.119 will be denoted as H₁(ω) and that through Equation 4.121 as H₂(ω).
Formally, the functions H₁(ω) and H₂(ω) are defined as follows:

H₁(ω) = S_XY(ω)/S_X(ω)   (4.122)

H₂(ω) = S_Y(ω)/S_YX(ω)   (4.123)
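The two estimators can be illustrated with simulated records. In the sketch below (an illustration, not from the text), a known FIR system (the two-tap h = [1, 0.5] is an arbitrary choice) is driven by white noise; with no measurement noise, both H₁ and H₂ recover the true transfer function:

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_avg = 256, 200
h = np.zeros(n)
h[0], h[1] = 1.0, 0.5                      # assumed two-tap FIR system
H_true = np.fft.fft(h)

Sx = np.zeros(n)
Sy = np.zeros(n)
Sxy = np.zeros(n, dtype=complex)
Syx = np.zeros(n, dtype=complex)
for _ in range(n_avg):
    X = np.fft.fft(rng.normal(0.0, 1.0, n))
    Y = H_true * X                         # noise-free output of the linear system
    Sx += (np.conj(X) * X).real
    Sy += (np.conj(Y) * Y).real
    Sxy += np.conj(X) * Y                  # as in Equation 4.82 (common factors cancel)
    Syx += np.conj(Y) * X                  # as in Equation 4.83

H1 = Sxy / Sx                              # Equation 4.122
H2 = Sy / Syx                              # Equation 4.123
print(np.max(np.abs(H1 - H_true)), np.max(np.abs(H2 - H_true)))
```

With measurement noise on the output, H₁ remains unbiased while H₂ is biased upward, which is why the two estimators differ in practice.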
A transfer function can also be stated by its amplitude and phase angle as

θ = ∠H(ω) = tan⁻¹{Im[H(ω)]/Re[H(ω)]}   (4.127)

The phase angle, θ, is the phase difference between the input X(t) and the output Y(t), which can be directly obtained through Equation 4.114. Referring to Equation 4.123, in the expression of H₂(ω), S_Y(ω) is recognized as being real valued, which does not contain information on the phase. Thus, information on the phase is contained in the cross-PSD function S_YX(ω).
4.2.2.3 Stationary Input
Based on the above discussion, we now consider whether an output is stationary
when the input is stationary. In most cases of engineering applications, we do not
distinguish whether the input is strictly stationary or weakly stationary. However, in
this situation, they will be different.
4.2.3 Coherence Analysis
In this section, the coherence function will first be defined. It will then be explained
how the coherence function is related to the aforementioned transfer functions and,
finally, the application of coherence analysis. The practical measurement of coher-
ence functions and applications will be discussed further in Chapter 7.
200 Random Vibration
4.2.3.1 Coherence Function
4.2.3.1.1 Definition
The coherence function of two random processes is defined as

γ²_XY(ω) = |S_XY(ω)|²/[S_X(ω) S_Y(ω)] = S_XY(ω) S*_XY(ω)/[S_X(ω) S_Y(ω)]   (4.128)

0 ≤ γ²_XY(ω) ≤ 1   (4.129)

Note that

S_XY(ω)/S_X(ω) = H₁(ω)   (4.130)

and

S*_XY(ω)/S_Y(ω) = S_YX(ω)/S_Y(ω) = 1/H₂(ω)   (4.131)

so that

γ²_XY(ω) = H₁(ω)/H₂(ω)   (4.132)

|H₁(ω)/H₂(ω)| ≤ 1   (4.133)

Thus,

|H₁(ω)| ≤ |H₂(ω)|   (4.134)
γ 2XY (ω ) = 1 (4.140)
γ²_ZX(ω) = S_X(ω)/[S_X(ω) + S_Y(ω)]   (4.144)
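Equation 4.144 can be illustrated by simulation: for Z(t) = X(t) + Y(t) with Y an uncorrelated noise, the coherence between Z and X flattens at S_X/(S_X + S_Y). A sketch (an illustration, not from the text; signal and noise levels are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_avg = 512, 2000
sx, sn = 1.0, 0.5                          # signal and noise standard deviations
Sx = np.zeros(n)
Sz = np.zeros(n)
Sxz = np.zeros(n, dtype=complex)
for _ in range(n_avg):
    X = np.fft.fft(rng.normal(0.0, sx, n))
    N = np.fft.fft(rng.normal(0.0, sn, n))     # noise, uncorrelated with X
    Z = X + N
    Sx += np.abs(X)**2
    Sz += np.abs(Z)**2
    Sxz += np.conj(X) * Z

gamma2 = np.abs(Sxz)**2 / (Sx * Sz)        # Equation 4.128
gamma2_theory = sx**2 / (sx**2 + sn**2)    # Equation 4.144 for flat spectra
print(gamma2.mean(), gamma2_theory)
```

The drop of coherence below unity thus directly measures the fraction of output power that is noise rather than linearly related signal.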
R_XẊ(τ) = dR_X(τ)/dτ   (4.145)

Here, if the random process X(t) is seen as displacement, then its derivatives Ẋ and Ẍ can be seen as velocity and acceleration, respectively.
The Fourier transform of the derivative of a temporal function g(t) is

F[dg(t)/dt] = jω G(ω)   (4.146)

Thus,

S_XẊ(ω) = F{R_XẊ(τ)} = F{dR_X(τ)/dτ} = jω S_X(ω)   (4.147)

It is known that

S_Ẋ(ω) = ω² S_X(ω)   (4.148)

We thus have

S_XẌ(ω) = −ω² S_X(ω)   (4.149)

and

S_Ẍ(ω) = ω⁴ S_X(ω)   (4.150)
Now, the variance functions of the velocity and the acceleration can be written as

σ_Ẋ² = (1/2π) ∫_{−∞}^{∞} S_Ẋ(ω) dω = (1/2π) ∫_{−∞}^{∞} ω² S_X(ω) dω   (4.151)

and

σ_Ẍ² = (1/2π) ∫_{−∞}^{∞} S_Ẍ(ω) dω = (1/2π) ∫_{−∞}^{∞} ω⁴ S_X(ω) dω   (4.152)
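Equation 4.151 can be checked on a synthetic harmonic process with a line spectrum, for which (1/2π)∫ω²S_X(ω)dω reduces to Σ ω_k²A_k²/2. A sketch (an illustration, not from the text; the frequencies, amplitudes, and record length are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)
T0, dt = 100.0, 0.005
t = np.arange(0.0, T0, dt)
n_k = np.arange(10, 60)                 # integer harmonics of the base 2*pi/T0
omega = 2 * np.pi * n_k / T0
A = rng.uniform(0.1, 1.0, omega.size)
phi = rng.uniform(0.0, 2 * np.pi, omega.size)

X = np.zeros_like(t)
for Ak, wk, pk in zip(A, omega, phi):
    X += Ak * np.cos(wk * t + pk)       # harmonic process with random phases

Xdot = np.gradient(X, dt)               # velocity process
var_theory = np.sum(omega**2 * A**2 / 2)    # omega^2-weighted power of the line spectrum
print(Xdot.var(), var_theory)
```

The ω² weighting means that high-frequency components, even with small amplitudes, dominate the velocity and acceleration variances.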
4.3.1 One-Sided PSD
Initially, in real-world measurements, it is difficult to have negative frequency, in
particular, as the lower integral limit approaches −∞. For this reason, the one-sided
Fourier transform is used.
ω = 2πf (4.153)
ω ~ (rad/s); f ~ (Hz)
The period is
T = 1/f (4.154)
and
T ~ (s)
T = 2π/ω (4.155)
By drawing on the above notations, the one-sided auto-PSD function can be writ-
ten as
G_X(ω) = { 2S_X(ω), ω ≥ 0
           0,        ω < 0   (4.157)

W_X(f) = { 4πS_X(ω), ω, f ≥ 0
           0,         ω, f < 0   (4.158)
Figure 4.11 graphically describes the relationships shown in Equations 4.157 and
4.158.
T = n Δt (4.159)
[Figure: a sampled record X(t) of duration T = nΔt (sampling interval Δt) and its discrete spectrum X(f) with resolution Δf, up to the maximum frequency F = (n/2)Δf.]
Δf = 1/T = 1/(nΔt)   (4.160)

F = (n/2) Δf   (4.161)

X(f) = (2/n) [FFT(x(t))]   (4.162)
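The scaling in Equation 4.162 is what makes an FFT bin value recover a sinusoid's amplitude. A minimal sketch (an illustration, not from the text; the on-bin frequency and amplitude are arbitrary choices):

```python
import numpy as np

n = 1024
dt = 1.0 / n                       # record length T = n*dt = 1 s, so delta_f = 1 Hz
t = np.arange(n) * dt
A, f0 = 3.0, 50.0                  # amplitude and an on-bin frequency (50 * delta_f)
x = A * np.cos(2 * np.pi * f0 * t)

Xf = (2.0 / n) * np.abs(np.fft.fft(x))   # Equation 4.162
k = int(round(f0))                       # bin index, since delta_f = 1 Hz
print(Xf[k])
```

For an on-bin sinusoid the scaled magnitude at bin k equals A; off-bin frequencies leak into neighboring bins and read slightly low.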
n
4.3.2 Signal-to-Noise Ratios
Signal-to-noise ratio (S/N) is one of the fundamental concepts in dealing with sig-
nals. Nevertheless, it is difficult to measure the exact value of a S/N because it is
almost impossible to measure a random process of noises.
4.3.2.1 Definition
Let us consider the definition of S/N as follows.
4.3.2.1.1 Deterministic
In deterministic cases, the signal-to-noise ratio, r, is equal to

r = p_s/p_n   (4.163)

r(τ) = R_S(τ)/R_N(τ)   (4.164)

where R_S and R_N are autocorrelation functions of the signal and noise, respectively.

r(ω) = S_X(ω)/S_N(ω)   (4.165)
In most instances, the region is very close to the region between the so-called
“half-power points” (Figure 4.13), which will be discussed in Chapter 6.
Figure 4.13 Auto-PSD with regions of high, mid, and poor S/N along the frequency axis.
4.3.2.2 Engineering Significances
Practically speaking, there exist other issues that must be taken note of, as follows.
C = r√n   (4.168)

C ≥ C_preset   (4.169)

C_m = r√(n/m)   (4.170)

C_m ≥ C_{m,preset}   (4.171)
When
N(t) can be seen as a small term “added” to the main process. Suppose N(t) can be
represented by the Taylor series such that
X0 = ‖A(t)‖ (4.176)
|μ_Y(t) − μ_X(t)|/μ_X(t) ≤ (1−5)%   (4.179)

|σ_Y²(t) − σ_X²(t)|/σ_X²(t) ≤ (4.180)(1−5)%   — that is, |σ_Y²(t) − σ_X²(t)|/σ_X²(t) ≤ (1−5)%   (4.180)

|σ_Y(t) − σ_X(t)|/σ_X(t) ≤ (1−5)%   (4.181)
The above concept may be summarized as follows: although the amount of noise is often unknown, artificial "noise" that is known can be added. Thus, if a small amount of "random" noise, namely a perturbation, is added to the total measured data and the outcome of the data analysis does not vary in a statistically significant manner, then the total system of signal pickup and processing is considered stable.
denoted by FX(x), which is essentially a calculation of probabilities. That is, the prob-
ability of all the chances that X is smaller than x through averaging.
To deal with a set of random variables, the averaging is simple because the com-
putation is done among the variables themselves, namely, in the sample space. In
the case of a random process X(t,e), however, the average can be far more complex
because we will have not only the sample space Ω = {e} but also another index t,
“time.” As a result, the distribution is defined in an n-dimensional way, denoted by
FX(x1, x2, … xn; t1, t2, …, tn). Therefore, only if the entire n-dimensional distribution
is evaluated would we understand the global properties of X(t,e). It is understandable
that this task can be very difficult.
On the other hand, in many cases of engineering applications, we may only need
two or even one dimensional distribution, in which the corresponding averages can-
not provide global information. Instead, these averages provide local parameters,
such as autocorrelation and cross-correlation functions, variance, and mean.
In Figure 4.14, the global and local properties of random processes are illustrated
by a conceptual block diagram. In showing the relationships between these proper-
ties, we also realize the major topics of random process, discussed mainly in Chapter
3. In Figure 4.14, inside the frame with broken lines is a special relationship between
correlation functions and PSD functions, which is the main topic in this chapter and
is shown in detail in Figure 4.15.
In Figure 4.15, through Fourier transform (practically, also including Laplace
transform), which is a powerful mathematical tool, the correlation functions in the
time domain are transferred into the frequency domain. Analyzing vibration sig-
nals, which is one of the major topics in this manuscript, can be carried out in the
frequency domain. Such analysis is a necessary and powerful tool and can provide
insight into vibration systems, which cannot be obtained through the time domain
only. In Section 4.4.2, we discuss the frequency distributions and spectral presenta-
tions of random process in a more rigorous fashion.
Figure 4.14 Global properties (n-dimensional distributions) and local properties (two-dimensional distributions, correlation functions, variance, and mean) of a random process.
Figure 4.15 Relationship between analyses of the time and the frequency domain.
4.4.2 Stationary Process
4.4.2.1 Dynamic Process in the Frequency Domain
By definition, in general, a stationary process will not grow to infinity nor will it die
out. As a dynamic process, one can realize that the instance value of the process will
fluctuate, namely, it will be up at a certain time point and down at other points. Such
an up-and-down process in the time domain can be represented by various sinusoi-
dal terms, sin(ωit)’s and cos(ωit)’s.
Such an up-and-down process will contain a certain group of frequency components.
To view such processes in the frequency domain is in fact to list these frequency compo-
nents as a spectrum, which unveils important information on the profile of frequencies.
For a nonstationary process, the Fourier spectrum often does not exist. Therefore,
we cannot have the spectrum that a stationary process does. However, for nonsta-
tionary processes, one can also perform frequency analysis by introducing the finite
Fourier transform (recall Equation 4.35). In this circumstance, when the value of T is
not sufficiently long, the corresponding spectra are not deterministic.
R_X(τ) = (1/2π) ∫_{−∞}^{∞} S(ω) e^{jωτ} dω   (4.182)
S_X(ω) = dΨ(ω)/dω   (4.183)

Thus,

then the random process X(t) can be seen as an inverse Fourier transform of the function X(ω), that is,

X(t) = (1/2π) ∫_{−∞}^{∞} X(ω) e^{jωt} dω

X(ω) = dZ(ω)/dω
Similar to Equation 4.185, the random process X(t) can be written as a Fourier–
Stieltjes integral:
X(t) = ∫_{−∞}^{∞} e^{jωt} dZ(ω)   (4.186)
If Equation 4.186 does exist, then it implies that a random process can be replaced
by its spectral representation. In fact, Z(ω) does exist under certain conditions, and
it is defined as the spectral representation function of random process. Now, let us
consider the required conditions. In Chapter 3, we noted that to use Fourier trans-
form to replace the Fourier series of a random process is not always doable because
it requires conditions. In Sections 4.1 and 4.2, we further point out that to let T→∞
also needs conditions. These two issues are closely related.
To see this point, recall the sum of harmonic process (Equation 3.62) with zero
mean, which can be written in an alternative form given by
X(t) = Σ_{k=−m/2}^{m/2} X_k(t) = Σ_{k=−m/2}^{m/2} C_k e^{jω_k t}   (4.187)

where the C_k are complex-valued uncorrelated random variables with zero mean and variance σ². Because X(t) must be real-valued, we have the symmetry C_{−k} = C*_k for the pair of frequencies ω_k and −ω_k.
With help from the notation in Equation 4.187, consider the autocorrelation function

R_X(τ) = E[X*(t) X(t + τ)] = E[ Σ_{p=−m/2}^{m/2} C*_p e^{−jω_p t} Σ_{q=−m/2}^{m/2} C_q e^{jω_q(t+τ)} ]

= Σ_{p=−m/2}^{m/2} Σ_{q=−m/2}^{m/2} E[C*_p C_q] e^{j[−ω_p t + ω_q(t+τ)]}   (4.188)

Note that X is zero-mean, and E[C*_p C_q] = 0 for p ≠ q. Also, based on the Euler equation e^{jθ} = cos θ + j sin θ, the above equation can be rewritten as

R_X(τ) = Re[ Σ_{k=−m/2}^{m/2} E[|C_k|²] e^{jω_k τ} ] = Re[ Σ_{k=1}^{m} E[|C_k|²] e^{jω_k τ} ]   (4.189)

where the symbol Re(.) means taking the real portion of the function (.) only.
where the symbol Re(.) means taking the real portion of function (.) only.
where

Z(ϖ) = Σ_{p=−m/2}^{ω_p ≤ ϖ} C_p   (4.191)
As m → ∞,

R_X(τ) = Re[ (1/2π) ∫_{−∞}^{∞} e^{jωτ} ϒ(dω) ] = (1/2π) ∫_{−∞}^{∞} cos ωτ ϒ(dω)   (4.192a)

R_X(τ) = (1/2π) ∫_{−∞}^{∞} e^{jωτ} ϒ(dω)   (4.192b)
M_p = (σ²/2π) g_p Δω   (4.193)

lim_{m→∞} M_p = lim_{Δω→0} (σ²/2π) g_p Δω   (4.194)
Figure 4.16 Mass and density functions with various frequency intervals. (a) Equal fre-
quency interval Δω. (b) Equal frequency interval dω. (c) Unequal frequency interval Δω.
(d) Unequal frequency interval dω.
Note that when we replace the harmonic random series by integrations (Equation
4.11), it is not necessary to have equally spaced frequency intervals. That is, recall
Equation 3.73, that is, Δωp = ωp+1 − ωp ≠ Δωq = ωq+1 − ωq. The equal and unequal
frequency interval can also be conceptually shown in Figure 4.16.
In Figure 4.16a and b, we show equal frequency intervals whereas in Figure 4.16c and
d, the frequency interval is unequal. From Figure 4.16c, it is seen that to have unequal
frequency interval Δωp and Δωq at point ωp and ωq, respectively, does have advantages
because at point ωp the curve has a steeper slope and at point ωq the curve is flatter.
Mathematically, when equal frequency intervals are used, it implies that the orig-
inal function, say, R X(τ) and/or X(t), is periodic with period T, namely,
2π/T = Δω = ω_T   (4.195)
In this case, the function, say, R X(τ), can be represented by Fourier series with
harmonic terms cosnωT τ, sinnωT τ and/or ejnωTτ, and others, and ωT is the fundamen-
tal frequency. However, we may also use nonperiodic cases, in this situation, we use
the relationship (Equation 4.192)
R_X(τ) = (1/2π) ∫_{−∞}^{∞} e^{jωτ} ϒ(dω)

and, when the auto-PSD function exists,

R_X(τ) = (1/2π) ∫_{−∞}^{∞} e^{jωτ} S_X(ω) dω
The following is a more general description. A Fourier series, the discrete form,
only works for periodic functions. A numerical realization of Fourier transform
in discrete format, with limited recorded length, will inherit certain drawbacks
of the Fourier series. In addition, when we use Y(ω) = X(ω,Τ) in Equation 4.37,
letting T→∞ is not always legitimate. To be mathematically consistent, the above-
mentioned Wiener–Khinchine equation defines auto-PSD function SX(ω), instead
of introducing the concept of PSD first and then proving that it is the Fourier
transform of R X(τ).
Compared with the definition of the spectral distribution function in Equation 4.22, namely, \Psi(\omega) = \int_{-\infty}^{\omega} S_X(\varpi)\,d\varpi, we can realize that if periodic functions do exist, Υ(ϖ) is nothing but the spectral distribution function Ψ, that is, \Upsilon(\omega) = \Psi(\omega), and
Equation 4.197 indicates that the autocorrelation function R_X(τ) and the function d[Ψ(ω)]/dω form a Fourier pair, that is,

R_X(\tau) \Leftrightarrow \frac{d[\Psi(\omega)]}{d\omega} = S_X(\omega) \quad (4.198)
Furthermore, comparing Equation 4.192a with Equation 4.13, for the continuous frequency domain,

\Upsilon(d\omega) = \Psi(d\omega) = \frac{\sigma^2}{2\pi}\,g(\omega)\,d\omega \quad (4.199)

\frac{\sigma^2}{2\pi}\,g(\omega) = S_X(\omega) \quad (4.200)
\sigma_{Z_\alpha Z_\beta} = \Upsilon(\omega_\alpha\,\omega_\beta) \quad (4.202)
and
Second, Υ(ω) does not integrate to unity; rather, it integrates to 2π times the variance \sigma_X^2 of the process, as can be seen from Equation 4.31. We thus have
Both the spectral distribution and the CDF of a random variable are right-
continuous, nondecreasing bounded functions with countable jumps (in the case of
mixtures of discrete and continuous random variables).
\int_{-T}^{T} X^2(t)\,dt \quad \text{and/or} \quad \frac{1}{2\pi}\int_{-\omega_0}^{\omega_0} \left|X(\omega)\right|^2 d\omega \quad (4.206)
In the following, we discuss the finite temporal and frequency domains of random
and transient processes, which are sometimes treated as measured signals.
f_{\max} = \frac{n}{2T} \quad (4.207)
Problems
1. A random process {X(t), −∞ < t < ∞} is given by X(t) = At² + Bt + C, with A, B, and C independent random variables and A ~ N(0,1), B ~ N(0,1), and C ~ N(0,1). Determine whether X(t) is mean-square continuous, mean-square differentiable, and mean-square integrable.
2. Derive autocorrelation functions for the following processes:
   a. White noise:
      S_X(\omega) = S_0
   b. Low pass:
      S_X(\omega) = \begin{cases} S_0, & |\omega| < \omega_C \\ S_0/2, & |\omega| = \omega_C \\ 0, & |\omega| > \omega_C \end{cases}
   c. Band pass:
      S_X(\omega) = \begin{cases} S_0, & \omega_L < |\omega| < \omega_U \\ S_0/2, & |\omega| = \omega_L, \omega_U \\ 0, & \text{elsewhere} \end{cases}
   d. Narrow band

\int_{-T}^{T} R_X(\tau)\,e^{-j\omega\tau}\left(1 - \frac{|\tau|}{T}\right) d\tau
5. Let W(t) = X(t)Y(t), where X(t) and Y(t) are uncorrelated random processes. Find the PSD function of W(t), and the cross-PSD function and coherence of W(t) and X(t).
6. Consider a random binary function with random phasing Θ, which is uniformly distributed between 0 and T, as shown in Figure P4.1.
   a. Model this random process Y(t).
   b. Find the autocorrelation and auto-PSD functions.
   c. Is the process stationary? Ergodic? Hint: the answer still depends on whether or not t₁ and t₂ are in the same time interval, but this now depends on Θ.
7. A local average process Y_T(t) is defined by

Y_T(t) = \frac{1}{T}\int_{t-T/2}^{t+T/2} X(u)\,du

S_{Y_T}(\omega) = S_X(\omega)\left[\frac{\sin(\omega T/2)}{\omega T/2}\right]^2
Figure P4.1 Random binary process Y(t) with amplitude ±1, random phase θ, and interval T.
S_X(\omega) = \frac{\omega^2 + 33}{\omega^4 + 10\omega^2 + 9}
a. Is Z(t) stationary?
b. Find the cross-PSD SZY (ω) and SXZ (ω).
10. Θ is a uniformly distributed random variable over one period of the frequency ω_T. The parameters a_i, b_i, and ω_T are all constants, and the summation satisfies

\sum_{i=1}^{\infty}\left(a_i^2 + b_i^2\right) < \infty

X(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left\{a_n\cos[n\omega_T(t+\Theta)] + b_n\sin[n\omega_T(t+\Theta)]\right\}
222 Random Vibration
problems but also to describe the methodology of how to understand specific random processes in detail. However, the emphasis is still on practical application instead of mathematical rigor. Because of the randomness, statistical surveys are used to carry out the analyses. In so doing, the "3D" processes are reduced to "2D" variables and further to "1D" parameters, in general.
5.1 Level Crossings
To analyze a random time history, a specific preset level is first established. The objective is then to examine the probability that the value of the process exceeds this preset level. Note that the objective is now reduced to a 1D problem.
This approach may be referred to as a special parameterization (see Rice 1944, 1945
and Wirsching et al. 2006).
5.1.1 Background
5.1.1.1 Number of Level Crossings
For a time history x(t), in an arbitrary time interval (t, t + ΔT), with an arbitrary level
x = a (5.1)
the total number for which x(t) > a, namely, the level being crossed, can be expressed
as
Figure: a sample time history x(t) crossing level a at several points (marked a–f), with crossing counts n_a = 1, 2, and 4 for different levels.
\lambda = \text{const.} \quad (5.4)

\mu_Y = 1/\lambda \quad (5.5)

\mu_{N_a}(t) = E[N_a(t)] = \lambda t \quad (5.6)

\sigma^2_{N_a}(t) = \lambda t \quad (5.7)
5.1.1.2.1 Crossing Pair
It is frequently seen that an up-crossing is followed by a down-crossing, as shown
in Figure 5.2.
5.1.1.2.2 Cluster Crossing
As shown in Figure 5.3, an additional case called cluster crossing may also occur.
In this instance, the time history is a narrow-band noise. Unlike the crossing pair, in
this case, many pairs may follow the initial pair.
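Counting crossings from a sampled record is straightforward. The following small sketch (function name and test signal are our own) registers an up-crossing whenever two consecutive samples straddle the level a; down-crossings and crossing pairs can be counted analogously with the inequalities reversed.

```python
import numpy as np

def count_up_crossings(x, a):
    """Count up-crossings of level a: sample pairs going from below a to at/above a."""
    x = np.asarray(x, dtype=float)
    return int(np.sum((x[:-1] < a) & (x[1:] >= a)))

# 1 Hz sine observed for 4 s: exactly one up-crossing of each reachable level per cycle
t = np.linspace(0.0, 4.0, 4001)
x = np.sin(2.0 * np.pi * t - 0.25 * np.pi)

n_zero = count_up_crossings(x, 0.0)    # zero up-crossings
n_half = count_up_crossings(x, 0.5)    # level up-crossings at a = 0.5
```

For the 4 s record above, both counts equal the number of cycles, four, while a level above the amplitude is never crossed.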
5.1.2.1 Stationary Crossing
If the probability distributions of random processes Na(t1, t2) and Na(t1 + s, t2 + s) are
identical, the random process is said to have stationary increments. It is seen that a
Poisson process has stationary increments.
Let X(t) be a zero-mean stationary random process in which, for simplicity, X(t) is interpreted as displacement. Therefore, \dot{X}(t) and \ddot{X}(t) are considered to be the velocity and the acceleration, respectively. For these conditions, the expected value is
5.1.2.2 Up-Crossing
Considering the case of up-crossing only, the rate is

v_a^+ = \frac{1}{2}\,v_a \quad (5.10)
5.1.2.3 Limiting Behavior
When Δt → 0,

P\left[N_a^+(dt)\right] = \begin{cases} P(A), & N_a^+(dt) = 1 \\ 1 - P(A), & N_a^+(dt) = 0 \\ 0, & \text{elsewhere} \end{cases} \quad (5.12)

Then,

E\left[N_a^+(dt)\right] = v_a^+\,dt

such that

v_a^+\,dt = P(A) \quad (5.14)
or
X(t) > a - \dot{X}(t)\,dt \quad (5.18b)
The event in which the above three conditions are met has the probability P(A). Figure 5.5 shows the integration domain in the (x, \dot{x}) coordinates. The probability can then be written as

P(A) = \int_0^{\infty}\int_{a - v\,dt}^{a} f_{X\dot{X}}(u, v)\,du\,dv \quad (5.20)
When dt → 0, the starting point of x must be very close to line a, that is, u → a; in this case, f_{X\dot{X}}(u, v) \to f_{X\dot{X}}(a, v) and

\int_{a - v\,dt}^{a} f_{X\dot{X}}(a, v)\,du = f_{X\dot{X}}(a, v)\int_{a - v\,dt}^{a} du = f_{X\dot{X}}(a, v)\left[a - (a - v\,dt)\right] = f_{X\dot{X}}(a, v)\,v\,dt
P(A) = \int_0^{\infty} f_{X\dot{X}}(a, v)\,(v\,dt)\,dv \quad (5.21)
Integrating over v > 0 only reflects that the slope of X(t) must be positive. Thus, from Equation 5.15, the closed-form formula for the rate v_a^+ is

v_a^+ = \frac{P(A)}{dt} = \int_0^{\infty} v\,f_{X\dot{X}}(a, v)\,dv \quad (5.22)
5.1.3 Specializations
To find the rate v_a^+ in Equation 5.22, the joint density function f_{X\dot{X}}(a, v) must be known. The latter is generally unknown, unless X(t) is Gaussian. If X(t) is Gaussian, then \dot{X}(t) will also be Gaussian. Practically speaking, the assumption of a Gaussian process is reasonably valid.
v_a^+ = \int_0^{\infty} v\,f_{X\dot{X}}(a, v)\,dv = \int_0^{\infty} v\,f_X(a)\,f_{\dot{X}}(v)\,dv = f_X(a)\int_0^{\infty} v\,f_{\dot{X}}(v)\,dv \quad (5.23)

Here, the variable v represents the "velocity" \dot{X}(t), which is zero-mean, that is,

\int_0^{\infty} v\,f_{\dot{X}}(v)\,dv = \frac{\sigma_{\dot{X}}}{\sqrt{2\pi}}
v_a^+ = \frac{1}{2\pi}\,\frac{\sigma_{\dot{X}}}{\sigma_X}\,e^{-\frac{1}{2}\frac{a^2}{\sigma_X^2}} \quad (5.24)
Note that when the crossing threshold a increases, the level up-crossing rate decreases. For a given σ_X, as the RMS velocity \sigma_{\dot{X}} increases, the crossing rate also increases.
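Equation 5.24 is easy to tabulate. A short sketch of this rate formula (the σ values below are arbitrary assumptions, chosen so that the process is dominated by roughly 1 Hz content):

```python
import numpy as np

def rice_up_crossing_rate(a, sigma_x, sigma_xd):
    """Level up-crossing rate of Eq. 5.24 for a zero-mean stationary Gaussian process."""
    return (sigma_xd / (2.0 * np.pi * sigma_x)) * np.exp(-0.5 * (a / sigma_x) ** 2)

sigma_x = 1.0
sigma_xd = 2.0 * np.pi          # RMS velocity for a ~1 Hz dominated process
v0 = rice_up_crossing_rate(0.0, sigma_x, sigma_xd)   # zero up-crossing rate, Eq. 5.26
v1 = rice_up_crossing_rate(1.0, sigma_x, sigma_xd)   # rate at one standard deviation
```

With these numbers, v0 is one crossing per unit time, and raising the level to one standard deviation cuts the rate by the factor e^{1/2}, consistent with Equation 5.57 below.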
5.1.3.2 Zero Up-Crossing
When the level of interest is zero, the up-crossing is termed zero up-crossing:

a = 0 \quad (5.25)

v_0^+ = \frac{1}{2\pi}\,\frac{\sigma_{\dot{X}}}{\sigma_X} \quad (5.26)
Note that v_a^+ is the rate of level up-crossing, namely, the number of crossings in a unit time, from which the angular frequency of crossing is determined to be

\omega_0^+ = 2\pi v_0^+ = \frac{\sigma_{\dot{X}}}{\sigma_X} \quad (5.27)
In Equations 5.26 and 5.27, the standard deviations can be expressed as follows:

\sigma_X^2 = \int_{-\infty}^{\infty} S_X(\omega)\,d\omega = \int_0^{\infty} W_X(f)\,df \quad (5.28)

and

\sigma_{\dot{X}}^2 = \int_{-\infty}^{\infty} \omega^2 S_X(\omega)\,d\omega = 4\pi^2\int_0^{\infty} f^2 W_X(f)\,df \quad (5.29)
Substitution of the above formulas into Equations 5.26 and 5.27, respectively, results in

v_0^+ = \sqrt{\frac{\int_0^{\infty} f^2 W_X(f)\,df}{\int_0^{\infty} W_X(f)\,df}} \quad (5.30)

and

\omega_0^+ = \sqrt{\frac{\int_{-\infty}^{\infty} \omega^2 S_X(\omega)\,d\omega}{\int_{-\infty}^{\infty} S_X(\omega)\,d\omega}} \quad (5.31)
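Equations 5.28 through 5.31 can be checked numerically for an ideal band-limited PSD. A sketch with assumed band edges (we integrate with a plain Riemann sum on a fine grid):

```python
import numpy as np

# Ideal band-limited PSD: S_X(w) = S0 for w1 <= |w| <= w2, zero elsewhere
S0 = 1.0
w1 = 2.0 * np.pi * 5.0       # lower band edge, rad/s
w2 = 2.0 * np.pi * 10.0      # upper band edge, rad/s
w = np.linspace(-w2, w2, 200001)
dw = w[1] - w[0]
S = np.where((np.abs(w) >= w1) & (np.abs(w) <= w2), S0, 0.0)

var_x = float(np.sum(S) * dw)             # sigma_X^2,    Eq. 5.28
var_xd = float(np.sum(w**2 * S) * dw)     # sigma_Xdot^2, Eq. 5.29
w0_plus = float(np.sqrt(var_xd / var_x))  # zero up-crossing frequency, Eq. 5.31
```

For this rectangular band the moments have closed forms (σ_X² = 2S₀(ω₂ − ω₁), σ_Ẋ² = 2S₀(ω₂³ − ω₁³)/3), so the numerical values can be verified directly, and ω₀⁺ must fall inside the band.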
5.1.3.3 Peak Frequency
5.1.3.3.1 General Process
The same level crossing analysis can also be applied to the velocity, \dot{X}(t). A zero down-crossing of \dot{X}(t) corresponds to a change of velocity from positive to negative, that is, a peak of X(t), where the velocity is zero. Comparing this case with the "displacement crossing," we replace a by v and let v = 0 for the velocity, and replace v by ϖ for the "acceleration" in Equation 5.22. In this case, the "acceleration" is negative, so that the peak frequency v_p can be written as

v_p = \int_{-\infty}^{0} (-\varpi)\,f_{\dot{X}\ddot{X}}(0, \varpi)\,d\varpi \quad (5.32)
5.1.3.3.2 Gaussian Process
A special case in which the term f_{\dot{X}\ddot{X}}(0, \varpi) is available is when the process is Gaussian. In that case, similar to the approach used to develop the formula for v_a^+,

v_p = \frac{1}{2\pi}\,\frac{\sigma_{\ddot{X}}}{\sigma_{\dot{X}}} \quad (5.33)

and

\omega_p = 2\pi v_p = \frac{\sigma_{\ddot{X}}}{\sigma_{\dot{X}}} \quad (5.34)
v_p = \sqrt{\frac{\int_0^{\infty} f^4 W_X(f)\,df}{\int_0^{\infty} f^2 W_X(f)\,df}} \quad (5.35)

\omega_p = \sqrt{\frac{\int_{-\infty}^{\infty} \omega^4 S_X(\omega)\,d\omega}{\int_{-\infty}^{\infty} \omega^2 S_X(\omega)\,d\omega}} \quad (5.36)
5.1.3.4.1 Narrow-Band Gaussian
First, suppose the random process X(t) is a narrow-band Gaussian process (refer to
Equation 4.65). For this case, the frequency of zero up-crossing as well as peak fre-
quency will be examined.
For the narrow-band Gaussian process, the auto-spectral density function can be written as

S_X(\omega) = \sigma_X^2\,\frac{\delta(\omega + \omega_m) + \delta(\omega - \omega_m)}{2} \quad (5.37)

where ω_m is the midband frequency of the process. For further clarification, see the description of ω_0 in Figure 4.8.
Substitution of Equation 5.37 into Equation 5.31 yields

\omega_0^+ = \sqrt{\frac{\int_{-\infty}^{\infty} \omega^2\left[\delta(\omega + \omega_m) + \delta(\omega - \omega_m)\right]/2\,d\omega}{\int_{-\infty}^{\infty} \left[\delta(\omega + \omega_m) + \delta(\omega - \omega_m)\right]/2\,d\omega}} = \sqrt{\omega_m^2}
Thus,
ω 0+ = ω m (5.38)
ωp = ωm (5.39)
In conclusion, for narrow-band processes, the zero up-crossing and the peak fre-
quency are identical and equal to the midband frequency.
5.1.3.4.2 Non–Narrow-Band
If the frequency band is not narrow, then the logical subsequent question is “how
wide” it can be. To measure the width of the band-pass filtering, an irregular factor
is introduced.
\alpha = \frac{v_0^+}{v_p} = \frac{\omega_0^+}{\omega_p} \quad (5.40)

\alpha = \frac{E\left[N_0^+(\Delta t)\right]}{E\left[N_p(\Delta t)\right]} \quad (5.41)
When α = 1, there will be one peak for every zero up-crossing (see Figure 5.6a).
This implies that the random process only contains a single frequency, which is the
case with the narrow band. Otherwise, if the process contains higher frequencies, whose mean spectral values are often considerably smaller than that of the lowest frequency (called the fundamental frequency), we will have v_p > v_0^+, so that 0 < α < 1. Specifically, when α → 0, there will be an infinite number of peaks for every zero up-crossing (high-frequency dithering). This is illustrated in Figure 5.6b.
5.1.3.4.2.2 Spectral Width Parameter ε With the help of the irregular factor, the spectral width parameter is further defined as

\varepsilon = \sqrt{1 - \alpha^2} \quad (5.43)
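Since α = ω₀⁺/ω_p, it can be computed from spectral moments m_k = ∫|ω|^k S_X(ω)dω as α = m₂/√(m₀m₄). A sketch using assumed ideal band-pass shapes (the band edges are our own choices):

```python
import numpy as np

w = np.linspace(-200.0, 200.0, 400001)
dw = w[1] - w[0]

def band_psd(w_lo, w_hi):
    """Ideal band-pass PSD, unit height on w_lo <= |w| <= w_hi."""
    return np.where((np.abs(w) >= w_lo) & (np.abs(w) <= w_hi), 1.0, 0.0)

def irregularity(S):
    """alpha = omega_0+ / omega_p = m2 / sqrt(m0 * m4) from spectral moments."""
    m0 = np.sum(S) * dw
    m2 = np.sum(w**2 * S) * dw
    m4 = np.sum(w**4 * S) * dw
    return float(m2 / np.sqrt(m0 * m4))

alpha_narrow = irregularity(band_psd(99.0, 101.0))   # nearly a single frequency
alpha_wide = irregularity(band_psd(10.0, 150.0))     # broadband process
eps_wide = float(np.sqrt(1.0 - alpha_wide**2))       # spectral width parameter
```

The narrow band gives α close to 1 (one peak per zero up-crossing), while the wide band gives a noticeably smaller α and hence a larger width parameter.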
Figure 5.6 Number of peaks. (a) Single peak. (b) Multiple peaks.
5.1.3.4.3 Gaussian
If the process is Gaussian, the expression of the irregular factor can be simplified as follows. Substitution of Equations 5.26 and 5.33 into Equation 5.40 yields

\alpha = \frac{\sigma_{\dot{X}}^2}{\sigma_X\,\sigma_{\ddot{X}}} \quad (5.44)
Thus,

and

\alpha = \frac{-\sigma_{X\ddot{X}}}{\sigma_X\,\sigma_{\ddot{X}}} = -\rho_{X\ddot{X}} \quad (5.47)

Observe that the irregularity factor α is equal to minus the correlation coefficient between the displacement and the acceleration.
Z(t) = \frac{1}{n}\sum_{i=1}^{n} Y_i(t) \quad (5.48)
Z(t) will be a free-decay time history, with the initial conditions

\dot{y}(0) = 0 \quad (5.49)

and

y(0) = a \quad (5.50)
Yi(t) can be seen as a response of the system due to three kinds of excitations:
1. A random input Fi(t) with zero mean, where the corresponding portion of
Yi(t) is denoted as YFi (t ).
2. An initial velocity vi, which is equal to the slope of Yi(t) at ti, where the cor-
responding portion of Yi (t), is denoted as Yvi (t ).
3. An initial displacement a, where the corresponding portion of Yi(t) is
denoted as YDi (t ).
Z(t) = \frac{1}{n}\sum_{i=1}^{n} Y_{F_i}(t) + \frac{1}{n}\sum_{i=1}^{n} Y_{v_i}(t) + \frac{1}{n}\sum_{i=1}^{n} Y_{D_i}(t) \quad (5.52)
Initially, let us consider the first term. For a stationary process, the system is linear and time invariant, with impulse response function denoted by h(t). This can be written as

\sum_{i=1}^{n} Y_{F_i}(t) = \sum_{i=1}^{n} h(t) * F_i(t) = h(t) * \sum_{i=1}^{n} F_i(t) = h(t) * \{0\} = \{0\} \quad (5.53a)
v_1 \approx -v_2
\quad\vdots
v_i \approx -v_{i+1} \quad (5.53b)

Consequently, the responses due to the initial velocities v_i and v_{i+1} will cancel each other. Therefore, this will be reduced to the following:

\sum_{i=1}^{n} Y_{v_i}(t) = \{0\} \quad (5.53c)
Lastly, consider the response due to the initial displacement a. With the same initial displacement a, the same response should result, which is a free-decay time history under the excitation of the step function

u(t) = a \quad (5.54)

Z(t) = \frac{1}{n}\sum_{i=1}^{n} Y_{D_i}(t) \quad (5.55)
Figure 5.8 Example of random decrement. (a) Random process. (b) Free-decay process.
Example 5.1
In this example, we show the original random signal and the free-decay time history obtained using the random decrement method. Figure 5.8a plots the original signal and Figure 5.8b the regenerated time history.
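The random decrement procedure can be sketched in a few lines. In the following, everything is our own illustration, not the book's example: the "measured" record is a lightly damped oscillator simulated under white noise, and we trigger on level up-crossings only, so the averaged signature also contains an initial-velocity (sine) component in addition to the step response; it is still a free decay.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "measured" response: lightly damped SDOF oscillator under white noise,
# integrated with a simple semi-implicit Euler scheme (illustrative only).
dt, n = 0.01, 60000
wn, zeta = 2.0 * np.pi * 2.0, 0.02
f = 50.0 * rng.standard_normal(n)
y = np.zeros(n)
v = 0.0
for k in range(1, n):
    acc = -2.0 * zeta * wn * v - wn**2 * y[k - 1] + f[k]
    v = v + acc * dt
    y[k] = y[k - 1] + v * dt

def random_decrement(y, a, m):
    """Average m-sample segments starting at each up-crossing of level a (cf. Eq. 5.48)."""
    starts = np.where((y[:-1] < a) & (y[1:] >= a))[0] + 1
    starts = starts[starts + m <= len(y)]
    segments = np.stack([y[i:i + m] for i in starts])
    return segments.mean(axis=0), len(starts)

a = float(np.std(y))                 # trigger level: one standard deviation
z, n_seg = random_decrement(y, a, 400)
```

The averaged signature z starts at the trigger level and its envelope decays, which is the free-decay behavior exploited for damping identification.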
Z_+(t) = \frac{1}{m_+}\sum_{i=1}^{m_+} Y_i(t) \quad (5.56a)

where the subscript + stands for zero up-crossing and m_+ is the number of selected time history segments.
Slightly different from the case of random decrement based on level up-crossing, Z(t) can be seen to have two kinds of excitations. The first is due to the random force, whose responses cancel in the summation. The second is due to the initial velocities at the times t_i, because Y_i(t) is taken from points in time for which Y(t_i) > 0.
Directly from the above discussion, the case in which Yj(t), tj ≤ t ≤ T + tj is taken
from Y(tj−1) > 0 and Y(tj) < 0 can also be considered. By changing the sign and placing
the time history inside the sum, the result is
Z_-(t) = \frac{1}{m_-}\sum_{j=1}^{m_-} Y_j(t) \quad (5.56b)

where the subscript − stands for zero down-crossing and m_− is the number of selected time history segments.
Based on the rate of zero up-crossing given by Equation 5.26 and the rate of level up-crossing given by Equation 5.24, we can calculate the ratio v_0^+/v_a^+:

v_0^+/v_a^+ = \left(\frac{1}{2\pi}\,\frac{\sigma_{\dot{X}}}{\sigma_X}\right)\bigg/\left(\frac{1}{2\pi}\,\frac{\sigma_{\dot{X}}}{\sigma_X}\,e^{-\frac{1}{2}\frac{a^2}{\sigma_X^2}}\right) = e^{\frac{1}{2}\frac{a^2}{\sigma_X^2}} \quad (5.57)

Since \frac{1}{2}\frac{a^2}{\sigma_X^2} > 0,

v_0^+ > v_a^+ \quad (5.58)
This ratio is always greater than 1, and it can be seen that such a ratio can be quite large.
Example 5.2
In this example, we show the original random signal and the free-decay time history obtained using the lag superposition method. Figure 5.9a plots the original signal and Figure 5.9b the regenerated time history.
Figure 5.9 Example of lag superposition. (a) Random process. (b) Free-decay process.
Z_+(t) = \frac{1}{m_+}\sum_{i=1}^{m_+} Y_i(t) \quad (5.59a)

where the subscript + stands for the positive value of the peak and m_+ is the number of selected time history segments.
Similarly, we can also pick the pieces of time histories from each valley of the process Y(t). Namely, with the same random time history Y(t), first denote t_i, the ith time point, where Y(t_{i−1}) > Y(t_i) and Y(t_{i+1}) > Y(t_i); that is, a valley is reached between t_{i−1} and t_{i+1}. Next, take Y_i(t) for the range t_i ≤ t ≤ T + t_i.
Last, let
Z_-(t) = \frac{1}{m_-}\sum_{i=1}^{m_-} Y_i(t) \quad (5.59b)

where the subscript − stands for the negative value of the valley and m_− is the number of selected time history segments.
Similar to the case of random decrement based on level up-crossing, Z(t) also has three kinds of excitations. The first is due to the random force, whose responses cancel in the summation. The second is due to the initial velocities; the portions of Z(t) due to the initial velocities will also cancel each other. The third decrement is the summation of the responses due to multiple levels of step functions, each excited by a specific level of the initial displacement at the peaks.
Now, let us consider the numbers of useful pieces of selected time histories based on peak reaching and zero-crossing. It is seen that this ratio is the reciprocal of the irregularity factor α, that is,

\frac{v_p}{v_0^+} = \frac{1}{\alpha} \quad (5.60)
Figure 5.10 Free-decay time histories generated from lag superposition. (a) Peak reaching. (b) Valley reaching.
Because in most cases α < 1, this ratio is usually greater than 1; indeed, it can be rather large:

v_p > v_0^+ \quad (5.61)
Example 5.3
The following example shows free-decay time histories generated through the lag superposition method based on peak/valley reaching. The results are plotted in Figure 5.10a and b.
Here, ω_0 is the center frequency of the narrow band, referring back to Equation 4.66; the random process R(t) is the envelope, and Θ(t) is the random phase. In this case, it is assumed that both R(t) and Θ(t) vary slowly in comparison with X(t). C(t) and S(t) are independent, identically distributed Gaussian processes, with zero mean and variance \sigma_X^2. From Equations 5.66 and 5.67, C(t) and S(t) are determined to be zero-mean.
and
From Equations 5.68 and 5.69, the derivatives of C(t) and S(t) are also determined
to be zero-mean.
5.1.5.1.1.2 Joint PDF of C(t), S(t) and \dot{C}(t), \dot{S}(t) If the one-sided spectral density function, W_X(f), is symmetric about the midband frequency ω_m, then C(t), S(t), \dot{C}(t), and \dot{S}(t) are all independent. Suppose \dot{C}(t) and \dot{S}(t) have a variance \sigma_R^2, which will be determined later; then the joint density function of C(t), S(t), \dot{C}(t), and \dot{S}(t) is given by

f_{CS\dot{C}\dot{S}}(c, s, \dot{c}, \dot{s}) = \frac{1}{4\pi^2\sigma_X^2\sigma_R^2}\exp\left\{-\frac{1}{2}\left[\frac{c^2 + s^2}{\sigma_X^2} + \frac{\dot{c}^2 + \dot{s}^2}{\sigma_R^2}\right]\right\} \quad (5.70)
where
c, s > 0 (5.71)
and
f_{R\dot{R}\Theta\dot{\Theta}}(r, \theta, \dot{r}, \dot{\theta}) = \frac{r^2}{4\pi^2\sigma_X^2\sigma_R^2}\exp\left\{-\frac{1}{2}\left[\frac{r^2}{\sigma_X^2} + \frac{\dot{r}^2 + r^2\dot{\theta}^2}{\sigma_R^2}\right]\right\} \quad (5.73)

where r > 0,

0 < \theta \le 2\pi \quad (5.74)

and

-\infty < \dot{r}, \dot{\theta} < \infty \quad (5.75)
If R(t) has a Rayleigh distribution and Θ(t) has a uniform distribution, then the joint PDF of R(t) and \dot{R}(t) is

f_{R\dot{R}}(r, \dot{r}) = \frac{r}{\sqrt{2\pi}\,\sigma_X^2\,\sigma_R}\exp\left\{-\frac{1}{2}\left[\frac{r^2}{\sigma_X^2} + \frac{\dot{r}^2}{\sigma_R^2}\right]\right\}, \quad r > 0,\ -\infty < \dot{r} < \infty \quad (5.76)
The joint density function is the product of the two marginal PDFs, given by

f_R(r) = \frac{r}{\sigma_X^2}\exp\left(-\frac{1}{2}\frac{r^2}{\sigma_X^2}\right), \quad r > 0 \quad (5.77)

and

f_{\dot{R}}(\dot{r}) = \frac{1}{\sqrt{2\pi}\,\sigma_R}\exp\left(-\frac{1}{2}\frac{\dot{r}^2}{\sigma_R^2}\right), \quad -\infty < \dot{r} < \infty \quad (5.78)
\sigma_{\dot{X}}^2 = \sigma_R^2 + \omega_m^2\sigma_X^2 \quad (5.80)

\sigma_R^2 = \sigma_{\dot{X}}^2 - \omega_m^2\sigma_X^2 \quad (5.81)

where

\sigma_R^2 = \sigma_X^2\,\omega_{0^+}^2 - \omega_m^2\sigma_X^2 = \sigma_X^2\left(\omega_{0^+}^2 - \omega_m^2\right) \quad (5.83)
\sigma_R^2 = \int_{-\infty}^{\infty}\left(\omega^2 - \omega_m^2\right) S_X(\omega)\,d\omega \quad (5.84)
Equation 5.84 indicates that the term \sigma_R^2 can be seen as the moment of the spectral density function S_X(ω) taken about ω_m, the midband frequency of the process. When X(t) approaches its narrow-band limit, \sigma_R^2 tends to zero, indicating that no variation exists: the envelope of the narrow-band process does not vary and thus becomes deterministic. Likewise, X(t) reduces to a pure deterministic sine wave.
v_{R=a}^+ = \int_0^{\infty} \dot{r}\,f_{R\dot{R}}(a, \dot{r})\,d\dot{r} = \int_0^{\infty} \frac{a\dot{r}}{\sqrt{2\pi}\,\sigma_X^2\,\sigma_R}\exp\left\{-\frac{1}{2}\left[\frac{a^2}{\sigma_X^2} + \frac{\dot{r}^2}{\sigma_R^2}\right]\right\} d\dot{r} \quad (5.85)
Consequently,

v_{R=a}^+ = \frac{a\,\sigma_R}{\sqrt{2\pi}\,\sigma_X^2}\exp\left(-\frac{1}{2}\frac{a^2}{\sigma_X^2}\right) \quad (5.86)

v_{R=a}^+ = \frac{a}{\sqrt{2\pi}\,\sigma_X}\sqrt{\omega_{0^+}^2 - \omega_m^2}\,\exp\left(-\frac{1}{2}\frac{a^2}{\sigma_X^2}\right) \quad (5.87)
cs(a) = \frac{v_{X=a}^+}{v_{R=a}^+} \quad (5.88)
Substitution of the two crossing rates expressed in Equations 5.24 and 5.87 into Equation 5.88 yields

cs(a) = \frac{\sigma_X\,\omega_{0^+}}{\sqrt{2\pi}\,a\,\sqrt{\omega_{0^+}^2 - \omega_m^2}} = \frac{\sigma_X}{a\,\sqrt{2\pi}\,\sqrt{1 - \omega_m^2/\omega_{0^+}^2}} \quad (5.89)
The average clump size can be used to estimate the waiting time of envelope
crossing. It is seen that
Equation 5.90 shows that if the clump size is large, there will be a significant waiting time.
5.2 Extrema
In the second section of this chapter, the extreme values of certain random processes
are considered. By collecting the extreme values, which are no longer processes
but variables, and determining the corresponding distributions, the 3D problems are
reduced to 2D distributions. Namely, the targets are the probability density functions
for specific cases.
x = z \quad (5.91)

v_z^+\,dt = P(A) \quad (5.92)

x = z + \Delta z \quad (5.93)

\lim_{\Delta z \to 0}\left(v_z^+ - v_{z+\Delta z}^+\right) = -\frac{d}{dz}\,v_z^+\,dz \quad (5.95)
E[\text{rate of peaking in } (z, z + \Delta z)] = E[\text{total rate of peaking}]\cdot P[\text{peak in } (z, z + \Delta z)] \quad (5.96)

f_Z(z)\,dz = \frac{-\dfrac{d}{dz}v_z^+\,dz}{v_p}

Thus, simplifying the above equation produces

f_Z(z) = \frac{-\dfrac{d}{dz}v_z^+}{v_p} \quad (5.98)
5.2.1.1.2 Narrow-Band Process
5.2.1.1.2.1 General Narrow-Band Process If it is a narrow-band process, the peaking frequency can be replaced by the zero up-crossing frequency:

f_Z(z) = \frac{-\dfrac{d}{dz}v_z^+}{v_0^+} \quad (5.99)

f_Z(z) = \frac{z}{\sigma_X^2}\exp\left(-\frac{1}{2}\frac{z^2}{\sigma_X^2}\right), \quad z > 0 \quad (5.100)
5.2.1.1.2.3 Gaussian Narrow-Band Process, PDF of Height of the Rise The height of the rise is

H = 2Z \quad (5.101)

f_H(h) = \frac{h}{4\sigma_X^2}\exp\left(-\frac{1}{2}\frac{h^2}{4\sigma_X^2}\right), \quad h > 0 \quad (5.102)
5.2.1.2 General Approach
5.2.1.2.1 Conditional Probability
Recall the conditional probability:

P(B \mid C) = \frac{P(B \cap C)}{P(C)} \quad (5.103)

P(\text{peak} = Z \mid X(t)\ \text{is a peak}) = \frac{P[(\text{peak} = Z) \cap (X(t)\ \text{is a peak})]}{P(X(t)\ \text{is a peak})} \quad (5.104)
1. \dot{X}(t) > 0 \quad (5.105)
2. \ddot{X}(t) < 0 \quad (5.106)
3. Correspondingly, at the end of the interval,
\dot{X}(t + dt) < 0 \quad (5.107)
or
Let C denote the event of having a peak of any magnitude; then P(C) can be written as the combination of the above statements. Specifically,

P(C) = P\left\{\left[0 < \dot{X}(t) < 0 - \ddot{X}(t)\,dt\right] \cap \left[\ddot{X}(t) < 0\right]\right\} \quad (5.110)

P(B \cap C) = P\left\{\left[z < X(t) \le z + dz\right] \cap \left[0 < \dot{X}(t) < 0 - \ddot{X}(t)\,dt\right] \cap \left[\ddot{X}(t) < 0\right]\right\} \quad (5.112)
5.2.1.2.3 General Application
Denote the joint PDF of X(t), \dot{X}(t), and \ddot{X}(t) by f_{X\dot{X}\ddot{X}}(u, v, w). Then

f_Z(z)\,dz = \frac{\displaystyle\int_{-\infty}^{0}\int_{0}^{-w\,dt}\int_{z}^{z+dz} f_{X\dot{X}\ddot{X}}(u, v, w)\,du\,dv\,dw}{\displaystyle\int_{-\infty}^{0}\int_{0}^{-w\,dt}\int_{-\infty}^{\infty} f_{X\dot{X}\ddot{X}}(u, v, w)\,du\,dv\,dw} = \frac{dz\displaystyle\int_{-\infty}^{0}(-w\,dt)\,f_{X\dot{X}\ddot{X}}(z, 0, w)\,dw}{\displaystyle\int_{-\infty}^{0}(-w\,dt)\,f_{\dot{X}\ddot{X}}(0, w)\,dw} = \frac{dz\displaystyle\int_{-\infty}^{0}(-w)\,f_{X\dot{X}\ddot{X}}(z, 0, w)\,dw}{\displaystyle\int_{-\infty}^{0}(-w)\,f_{\dot{X}\ddot{X}}(0, w)\,dw} \quad (5.113)

Because the resulting denominator is the peaking frequency v_p, dz can be divided out on both sides, resulting in

f_Z(z) = \frac{\displaystyle\int_{-\infty}^{0}(-w)\,f_{X\dot{X}\ddot{X}}(z, 0, w)\,dw}{v_p} \quad (5.114)
Equation 5.114 is a workable solution for zero-mean process with arbitrary band-
width and distribution.
5.2.1.2.4 Gaussian
The joint PDF f_{X\dot{X}\ddot{X}}(u, v, w) is, in general, quite complex. However, in the event the displacement X(t) is Gaussian, the velocity and the acceleration will also be Gaussian. Additionally, suppose each of the three processes is independent; then the joint PDF can be simplified to f_{X\dot{X}\ddot{X}}(u, v, w) = f_X(u)\,f_{\dot{X}}(v)\,f_{\ddot{X}}(w). Substitution of the PDF into Equation 5.114 yields

f_Z(z) = \sqrt{1-\alpha^2}\,\frac{1}{\sqrt{2\pi}\,\sigma_X}\exp\left[-\frac{1}{2}\frac{z^2}{(1-\alpha^2)\sigma_X^2}\right] + \alpha\,\Phi\left(\frac{\alpha z}{\sqrt{1-\alpha^2}\,\sigma_X}\right)\frac{z}{\sigma_X^2}\exp\left(-\frac{1}{2}\frac{z^2}{\sigma_X^2}\right), \quad -\infty < z < \infty \quad (5.115)

Here, α is the irregularity factor and Φ(·) is the cumulative distribution function of the standard normal distribution.
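Equation 5.115 can be evaluated directly. The following sketch (our own code, with σ_X = 1 assumed) also checks that the density integrates to one for an intermediate α and reduces to the Gaussian density at α = 0:

```python
import numpy as np
from math import erf

_erf = np.vectorize(erf)

def std_normal_cdf(u):
    """Standard normal CDF Phi(u), vectorized via math.erf."""
    return 0.5 * (1.0 + _erf(np.asarray(u, dtype=float) / np.sqrt(2.0)))

def peak_pdf(z, alpha, sigma=1.0):
    """Peak PDF of Eq. 5.115 for a zero-mean Gaussian process, irregularity factor alpha."""
    z = np.asarray(z, dtype=float)
    g = np.sqrt(1.0 - alpha**2)
    term1 = g / (np.sqrt(2.0 * np.pi) * sigma) * np.exp(-0.5 * z**2 / (g**2 * sigma**2))
    term2 = (alpha * z / sigma**2) * std_normal_cdf(alpha * z / (g * sigma)) \
            * np.exp(-0.5 * z**2 / sigma**2)
    return term1 + term2

z = np.linspace(-8.0, 8.0, 16001)
dz = z[1] - z[0]
area = float(np.sum(peak_pdf(z, 0.6)) * dz)    # should be ~1 for 0 <= alpha < 1
```

As α → 1 the first term vanishes and the density tends toward the Rayleigh form of Equation 5.100, while at α = 0 it is pure Gaussian (equal numbers of positive and negative peaks).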
5.2.2 Engineering Approximations
5.2.2.1 Background
In the above discussion, it was assumed that the PDF was known. Realistically, this
assumption is seldom true. When the PDF is not known, the approximate distribu-
tions are then used (see Rice [1964], Yang [1974], Krenk [1978], and Tayfun [1981]).
Random rises and falls are useful in studying fatigue problems, but the corresponding exact PDF in most cases is not known. Fortunately, the average rise can be easily obtained. In this subsection, the issue of rise and fall is examined. For engineering applications, a certain percentage of error may be tolerated to find workable solutions. In the following, comparatively loose assumptions are made.
5.2.2.1.1 Basic Concept
Rise H_i is the difference between the value of X(t) at a valley, denoted by X(t_i), and the value at the next peak, denoted by X(t_i′). The value of h_i can be expressed as (see Figure 5.13)

Figure 5.13 Rise h_i between the valley v_i at time t_i and the subsequent peak p_i at time t_i′.

In this case, t_i is used to denote the time when X(t) reaches a valley. To denote the height, H is once more used.
X(0) = 0 \quad (5.118)
5.2.2.1.2 Average Rise
In the duration Δt, the average traveling distance of |X(t)| is equal to E\left|\dot{X}(t)\right|\Delta t. This distance is also equal to the average height 2\mu_H times the number of ups and downs, denoted by v_p\Delta t. This can be expressed as follows:

\mu_H = \frac{E\left|\dot{X}(t)\right|}{2 v_p} \quad (5.119)

It can be proven that if the process is Gaussian, then the average height is

\mu_H = \alpha\sqrt{2\pi}\,\sigma_X \quad (5.120)
5.2.2.1.3 Shape Function
Now, consider the trajectory between a valley and the subsequent peak, referred to as
the shape or trajectory function. With a simple shape function, the analysis becomes
easier.
where ω_p is the frequency of the assumed sinusoidal fluctuation of X(t). Note that in this case,

H > 0
5.2.2.1.3.2 Velocity To solve for the velocity, the derivative of Equation 5.121 is taken:

\dot{\Psi}(t) = \omega_p\,(H/2)\sin(\omega_p t) \quad (5.122)

\dot{X}(t_i) = 0 \quad (5.123)

Here,

A = \frac{\omega_p^2 H}{2} \quad (5.125)
5.2.2.2.1 PDF of \ddot{X}(t_i)
5.2.2.2.1.1 PDF of A, General Process The distribution of \ddot{X}(t_i), the acceleration at the valley, can be found by using the same approach as in Section 5.1. For this approach, first define the conditions for X(t_i), then calculate the PDF of \ddot{X}(t_i). It can be proven that

f_A(a)\,da = \frac{\displaystyle\int_{a}^{a+da}\int_{-w\,dt}^{0}\int_{-\infty}^{\infty} f_{X\dot{X}\ddot{X}}(u, v, w)\,du\,dv\,dw}{\displaystyle\int_{0}^{\infty}\int_{-w\,dt}^{0}\int_{-\infty}^{\infty} f_{X\dot{X}\ddot{X}}(u, v, w)\,du\,dv\,dw}, \quad a > 0 \quad (5.126)

Thus, simplifying,

f_A(a) = \frac{a\,f_{\dot{X}\ddot{X}}(0, a)}{v_p}, \quad a > 0 \quad (5.127)
f_A(a) = \frac{a}{\sigma_{\ddot{X}}^2}\,e^{-\frac{a^2}{2\sigma_{\ddot{X}}^2}}, \quad a > 0 \quad (5.128)
Note that Equation 5.128 is a Rayleigh distribution with \sigma_{\ddot{X}} as the parameter σ. For further explanation, refer to Equation 1.94. This results in

\sigma_{\ddot{X}} = \sigma \quad (5.129)
H = \frac{2A}{\omega_p^2} \quad (5.130)
Thus,
f_H(h) = \frac{h}{\theta_H^2}\,e^{-\frac{h^2}{2\theta_H^2}}, \quad h > 0 \quad (5.131)

where

\theta_H = \frac{2\sigma_{\ddot{X}}}{\omega_p^2} \quad (5.132)
\theta_H = 2\sigma_X\alpha \quad (5.133)

f_H(h) = \frac{h}{(2\sigma_X\alpha)^2}\,e^{-\frac{h^2}{2(2\sigma_X\alpha)^2}}, \quad h > 0 \quad (5.134)

\alpha = 1 \quad (5.135)
V = {vi} (5.136)
and
P = {pi} (5.137)
For convenience, the subscript i will be omitted in the following equations: con-
sider the example in which the joint distribution of the height, peak, and valley can
be used to count the fatigue cycles that rise above a floor level set by a crack opening
stress (Perng 1989).
f_{HV}(h, v) = \frac{1}{\sqrt{2\pi(1-\alpha^2)}\,\sigma_X}\exp\left[-\frac{1}{2}\frac{(v + h/2)^2}{(1-\alpha^2)\sigma_X^2}\right]\frac{h}{4\alpha^2\sigma_X^2}\exp\left[-\frac{1}{2}\frac{h^2}{4\alpha^2\sigma_X^2}\right]
= \frac{1}{\sqrt{1-\alpha^2}\,\sigma_X}\,\phi\left(\frac{v + h/2}{\sqrt{1-\alpha^2}\,\sigma_X}\right)\frac{h}{4\alpha^2\sigma_X^2}\exp\left[-\frac{1}{2}\frac{h^2}{4\alpha^2\sigma_X^2}\right], \quad 0 < h < \infty,\ -\infty < v < \infty \quad (5.139)

f_{HM}(h, m) = \frac{1}{\sqrt{2\pi(1-\alpha^2)}\,\sigma_X}\exp\left[-\frac{1}{2}\frac{m^2}{(1-\alpha^2)\sigma_X^2}\right]\frac{h}{4\alpha^2\sigma_X^2}\exp\left[-\frac{1}{2}\frac{h^2}{4\alpha^2\sigma_X^2}\right]
= \frac{1}{\sqrt{1-\alpha^2}\,\sigma_X}\,\phi\left(\frac{m}{\sqrt{1-\alpha^2}\,\sigma_X}\right)\frac{h}{4\alpha^2\sigma_X^2}\exp\left[-\frac{1}{2}\frac{h^2}{4\alpha^2\sigma_X^2}\right], \quad 0 < h < \infty,\ -\infty < m < \infty \quad (5.141)

where φ(·) denotes the standard normal density.
The Gaussian term is

\frac{1}{\sqrt{1-\alpha^2}\,\sigma_X}\,\phi\left(\frac{m}{\sqrt{1-\alpha^2}\,\sigma_X}\right) \quad (5.142)

and the Rayleigh term is

\frac{h}{4\alpha^2\sigma_X^2}\exp\left(-\frac{1}{2}\frac{h^2}{4\alpha^2\sigma_X^2}\right) \quad (5.143)
f_{PV}(p, v) = \frac{1}{\sqrt{2\pi(1-\alpha^2)}\,\sigma_X}\exp\left[-\frac{(p+v)^2}{8(1-\alpha^2)\sigma_X^2}\right]\frac{p - v}{4\alpha^2\sigma_X^2}\exp\left[-\frac{(p-v)^2}{8\alpha^2\sigma_X^2}\right]
= \frac{1}{2\sqrt{1-\alpha^2}\,\sigma_X}\,\phi\left(\frac{p+v}{2\sqrt{1-\alpha^2}\,\sigma_X}\right)\frac{p - v}{4\alpha^2\sigma_X^2}\exp\left[-\frac{(p-v)^2}{8\alpha^2\sigma_X^2}\right], \quad -\infty < v < p < \infty \quad (5.144)
In the equations above, each joint PDF is the product of a Gaussian term and a Rayleigh term.
5.3 ACCUMULATIVE DAMAGES
In Sections 5.1 and 5.2, we examined certain important statistical properties rather than only the typical mean, variance, and correlation functions. These studies imply that, for certain types of random processes, we can address issues beyond typical statistics. It is known that, for a general process, the previous conclusions may not be sufficiently accurate; they may not even be workable. Therefore, specific processes such as the stationary Gaussian process are assumed. In the following, we use an engineering problem as an example to show that if certain statistical conclusions are needed, we need to select a proper model of the random process. In this case, the Markov process will be used.
Accumulative damage is often seen in engineering structures under repeated loading, such as vibration displacements or unbalanced forces, which are closely related to the phenomena of level crossing. Material fatigue is a typical example of damage accumulation. As mentioned at the beginning of this chapter, the focus here is given to the time-varying development of the accumulative damage, instead of the resulting damage itself. In Chapter 10, such damages will
5.3.1.1 S–N Curves
When a component of a machine or a structure is subjected to high-cycle loading,
although the load level is smaller than its yielding threshold, after certain cycles, it
may fail to take additional loads. The number of cycles is referred to as fatigue life-
time. Such fatigue is called high-cycle fatigue, or simply fatigue.
Generally speaking, S–N curves are used in a high-cycle fatigue study. An S–N
curve for a material defines alternating stress values versus the number of duty cycles required to cause failure at a given stress ratio. A typical S–N curve is shown in Figure 5.14. The y axis represents the alternating stress (S) and the x axis represents the
number of cycles (N). An S–N curve is based on a stress ratio or mean stress. One can
define multiple S–N curves with different stress ratios for a material. The software
uses linear interpolation to extract data when you define multiple S–N curves for a
material.
S–N curves are based on mean fatigue life or a given probability of failure.
Generating an S–N curve for a material requires many tests to statistically vary
the alternating stress, mean stress (or stress ratio), and count the number of duty
cycles.
5.3.1.2 Miner’s Rule
In 1945, M.A. Miner popularized a rule that had first been proposed by A. Palmgren
in 1924, which is variously called Miner’s rule or the Palmgren–Miner linear dam-
age hypothesis. Consider the S–N curve shown in Figure 5.14. Suppose that it takes
N1 duty cycles at an alternating stress S1 to cause fatigue failure; then the theory
Figure 5.14 A typical S–N curve: alternating stress (MPa) versus number of cycles, leveling off at the fatigue strength.
states that each cycle causes a damage factor D1 that consumes 1/N1 of the life of the
structure.
Moreover, if a structure is subjected to n₁ duty cycles at S₁ alternating stress and n₂ duty cycles at S₂ alternating stress, then the total damage factor D is calculated as

D = \frac{n_1}{N_1} + \frac{n_2}{N_2}

where N₁ is the number of cycles required to cause failure under S₁, and N₂ is the number of cycles required to cause failure under S₂.
The damage factor D, also called usage factor, represents the ratio of the con-
sumed life of the structure. A damage factor of 0.35 means that 35% of the structure’s
life is consumed. Failure due to fatigue occurs when the damage factor reaches 1.0.
The linear damage rule does not consider the effects of load sequence. In other
words, it predicts that the damage caused by a stress cycle is independent of where
it occurs in the load history. It also assumes that the rate of damage accumulation is
independent of the stress level. Observed behavior indicates that cracks initiate in a
few cycles at high stress amplitudes, whereas almost all the life is spent on initiating
the cracks at low stress amplitudes.
The linear damage rule is used in its simple form when you specify that fatigue
events do not interact with each other in the properties of the study. When you set the
interaction between events to random, the program uses the ASME code to evaluate
the damage by combining event peaks.
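The linear damage rule above is a one-line computation. A minimal sketch (the loading blocks below are hypothetical numbers, not data from the text):

```python
def miner_damage(blocks):
    """Palmgren-Miner linear damage: D = sum of n_i / N_i over loading blocks."""
    return sum(n_applied / n_failure for n_applied, n_failure in blocks)

# (applied cycles n_i, cycles-to-failure N_i at that stress level) -- hypothetical values
blocks = [(10_000, 100_000), (2_000, 20_000), (500, 10_000)]
D = miner_damage(blocks)       # 0.10 + 0.10 + 0.05 = 0.25, i.e., 25% of life consumed
failed = D >= 1.0
```

Because the rule is linear, the result is independent of the order of the blocks, which is exactly the load-sequence limitation discussed above.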
5.3.2 Markov Process
To better understand the accumulative damage as a random process, let us describe a useful model, the Markov process (Andrey A. Markov, 1856–1922).
A random process whose future state depends only on its present state, and not on its past history, the Markov process is one of the most important processes and plays an important role in engineering applications. The aforementioned Poisson and Wiener processes (Brownian motion) are all Markovian. Many physical phenomena, including
5.3.2.1 General Concept
5.3.2.1.1 Definition
A random process X(t), 0 ≤ t ≤ T is said to be a Markov process, if for every n and for
t1 < t2 < … < tn ≤ T, we can have the distribution given by
The above equations imply that a Markov process represents a set of trajectories
whose conditional probability distribution at a selected instance, given all past obser-
vations, only depends on the most recent ones. For example, the fatigue damage at a
given time point t2 depends only on the state of time t1; anything before t1, however,
has no influence on the damage level at t2.
Equation 5.146 is equivalent to the following conditional probability:

P\{X(t_n) < x_n \mid X(t_1) < x_1, X(t_2) < x_2, \ldots, X(t_{n-1}) < x_{n-1}\} = P\{X(t_n) < x_n \mid X(t_{n-1}) < x_{n-1}\} \quad (5.148)
Example 5.4
5.3.2.2.2 Transition Probability
Suppose {X(n), n = 0, 1, 2, …} is a discrete Markov chain with state space Ω. The one-step transition probability is defined as

pij(n) = P{X(n + 1) = j | X(n) = i}, i, j ∈ Ω

where pij(n) is the probability of moving from state i at time n to state j at time n + 1.
Example 5.5

Let X(1), X(2), … be mutually independent random variables, where X(k) has the probability mass function

X(k):  1     2     3     …    k
PMF:   pk1   pk2   pk3   …    pkk
Denote

Y(n) = Σ_{k=1}^{n} X(k), n = 1, 2, …
Show that {Y(n), n = 1, 2, …} is a Markov chain and find its transition probability matrix.
From the equation that defines sequence Y(n), it is seen that the increment
[Y(n) − Y(n − 1)], that is, X(n), and increment [Y(m) − Y(m − 1)], that is, X(m), are
independent because X(n) and X(m) are independent. Therefore, {Y(n), n = 1, 2, …}
is an independent increment process, and thus it is a Markov chain.
Furthermore, the entry of the corresponding transition probability matrix can be written as

pij(m, m + k) = P{ Σ_{r=m+1}^{m+k} X(r) = j − i } = Σ_{i1+i2+…+ik = j−i} p_{m+1,i1} p_{m+2,i2} ⋯ p_{m+k,ik}

where

m ≤ i ≤ m(m + 1)/2,  m + k ≤ j ≤ (m + k)(m + k + 1)/2,  j − i ≥ k
5.3.2.2.3 Probability Distribution
The initial distribution of a discrete Markov chain {X(n), n = 0, 1, 2, …} is denoted by P0, given by

P0 = {pi = P[X(0) = i], i ∈ Ω} (5.152)

and the distribution at time n is denoted by Pn, given by

Pn = {pj = P[X(n) = j], j ∈ Ω} (5.153)
5.3.2.2.4 Homogeneity
If the one-step transition probability pij(n) of a discrete Markov chain {X(n), n = 0, 1, 2, …} does not depend on the time n, then the chain is said to be homogeneous. Its k-step transition probability and k-step transition probability matrix are denoted by pij(k) and P(k), respectively, and the corresponding one-step transition probability matrix is denoted by P.
Example 5.6

Let X(1), X(2), … be mutually independent random variables and denote

Y(n) = Σ_{k=1}^{n} X(k)

Show that (1) {Y(n), n = 1, 2, …} is a Markov process, and (2) when each X(k) is normally distributed with zero mean and variance σ², the process is homogeneous.

First, for question (1), consider any instants 0 < m1 < m2 < … < mn, and 0 ≤ i1 ≤ i2 ≤ … ≤ in that satisfy ik ≤ ik+1 ≤ ik + mk+1 − mk for any 1 ≤ k ≤ n − 1. Then

P{Y(mn) = in | Y(m1) = i1, …, Y(mn−1) = in−1}

= P{ Σ_{k=1}^{mn} X(k) = in | Σ_{k=1}^{m1} X(k) = i1, …, Σ_{k=1}^{mn−1} X(k) = in−1 }

= P{ Σ_{k=mn−1+1}^{mn} X(k) = in − in−1 | Σ_{k=1}^{m1} X(k) = i1, Σ_{k=m1+1}^{m2} X(k) = i2 − i1, …, Σ_{k=mn−2+1}^{mn−1} X(k) = in−1 − in−2 }

= P{ Σ_{k=mn−1+1}^{mn} X(k) = in − in−1 }

because sums of X(k) over disjoint index ranges are mutually independent. On the other hand,

P{Y(mn) = in | Y(mn−1) = in−1} = P{ Σ_{k=1}^{mn} X(k) = in | Σ_{k=1}^{mn−1} X(k) = in−1 } = P{ Σ_{k=mn−1+1}^{mn} X(k) = in − in−1 }

Therefore,

P{Y(mn) = in | Y(m1) = i1, …, Y(mn−1) = in−1} = P{Y(mn) = in | Y(mn−1) = in−1}
In terms of conditional distribution functions, similarly,

P{Y(mn) < in | Y(m1) = i1, …, Y(mn−1) = in−1}

= P{ Σ_{k=1}^{mn} X(k) < in | Σ_{k=1}^{m1} X(k) = i1, …, Σ_{k=1}^{mn−1} X(k) = in−1 }

= P{ Σ_{k=mn−1+1}^{mn} X(k) < in − in−1 | Σ_{k=1}^{m1} X(k) = i1, Σ_{k=m1+1}^{m2} X(k) = i2 − i1, …, Σ_{k=mn−2+1}^{mn−1} X(k) = in−1 − in−2 }

= ∫_0^{in−in−1} f(i1, …, in−1 − in−2, xn) / f(i1, …, in−1 − in−2) dxn

= ∫_0^{in−in−1} f(xn) dxn

where f(·) denotes the density of the corresponding sums of increments. On the other hand,

P{Y(mn) < in | Y(mn−1) = in−1}

= P{ Σ_{k=1}^{mn} X(k) < in | Σ_{k=1}^{mn−1} X(k) = in−1 }

= P{ Σ_{k=mn−1+1}^{mn} X(k) < in − in−1 | Σ_{k=1}^{mn−1} X(k) = in−1 }

= ∫_0^{in−in−1} f(in−1, xn) / f(in−1) dxn

= ∫_0^{in−in−1} f(xn) dxn

The two conditional distributions are identical, so {Y(n), n = 1, 2, …} is a Markov process.
Next, for question (2), with each X(k) normally distributed with zero mean and variance σ², we have

P{Y(n + m) < i | Y(n) = j}

= P{ Σ_{k=n+1}^{n+m} X(k) < i | Σ_{k=1}^{n} X(k) = j }

= P{ Σ_{k=n+1}^{n+m} X(k) < i − j }

= ∫_{−∞}^{i−j} 1/(√(2π) σ√m) e^{−x²/(2mσ²)} dx

= Φ( (i − j)/(σ√m) )
It is thus seen that this probability does not relate to the starting time point n; there-
fore, {Y(n), n = 1, 2, …} is a homogeneous Markov process.
5.3.2.2.5 Ergodicity
For a homogeneous discrete Markov chain {X(n), n = 0, 1, 2, …}, if for any states i, j ∈ Ω, there exists a limit independent of i such that

lim_{n→∞} pij(n) = πj (5.155)

then the chain is said to be ergodic, and the limiting probabilities satisfy

Σ_{j∈Ω} πj = 1 (5.156)

A distribution v = {vj, j ∈ Ω} is called a stationary distribution of the chain if

1. vj ≥ 0 (5.157)

2. Σ_{j∈Ω} vj = 1 (5.158)

3. vj = Σ_{i∈Ω} vi pij (5.159)

Writing the stationary distribution as the row vector

v = {vj, j ∈ Ω} (5.160)

condition 3 can be expressed in matrix form as

vP = v (5.161)
1. C–K Equation
For the Markov chain, the Chapman–Kolmogorov (C–K) equation is an identity relating the joint probability distributions of different sets of coordinates on a random process (Sydney Chapman, 1888–1970), given by

pij(k + q) = Σ_{r∈Ω} pir(k) prj(q) (5.162)

2. Probability Distribution at Time n
The distribution at time n follows from the initial distribution as

pj(n) = Σ_{i∈Ω} pi pij(n) (5.164)

or, in matrix form,

Pn = P0 P^n (5.165)

More generally, the joint distribution can be written as

P[X(n1) = i1, X(n2) = i2, …, X(nk) = ik] = Σ_{i∈Ω} pi p_{i,i1}(n1) p_{i1,i2}(n2 − n1) ⋯ p_{ik−1,ik}(nk − nk−1) (5.166)

3. Stationary Distribution
For a chain with s states, the stationary probabilities satisfy

πj = Σ_{i=1}^{s} πi pij, j = 1, 2, …, s (5.168)

and

Σ_{i=1}^{s} πi = 1 (5.170)

For a stationary distribution v, it also follows that

v = vP^n (5.171)
Example 5.7

A homogeneous Markov chain has the one-step transition probability matrix

        1/2  1/3  1/6
P =     1/3  1/3  1/3
        1/3  1/2  1/6

and the initial distribution

X(0):  1    2    3
p:     2/5  2/5  1/5

1. Calculate the second transition probability matrix.
2. Find the probability distribution of X(2).
3. Find the stationary distribution.

Solution:

1.       5/12  13/36  2/9
   P² =  7/18  7/18   2/9
         7/18  13/36  1/4

2. P2 = P0 P² = (2/5, 67/180, 41/180)

3. Because

   v = vP

   and

   Σ vi = 1

   solving this set of equations yields v = (14/35, 13/35, 8/35).
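The computations of Example 5.7 can be checked numerically. A minimal sketch with NumPy (the stationary distribution is extracted as the left eigenvector of P for eigenvalue 1):

```python
import numpy as np

# Transition matrix and initial distribution of Example 5.7.
P = np.array([[1/2, 1/3, 1/6],
              [1/3, 1/3, 1/3],
              [1/3, 1/2, 1/6]])
p0 = np.array([2/5, 2/5, 1/5])

P2 = P @ P            # two-step transition probability matrix
p2 = p0 @ P2          # distribution of X(2)

# Stationary distribution: left eigenvector of P for eigenvalue 1,
# normalized so that its entries sum to one.
w, V = np.linalg.eig(P.T)
v = np.real(V[:, np.argmin(np.abs(w - 1))])
v = v / v.sum()

print(P2)   # [[5/12, 13/36, 2/9], [7/18, 7/18, 2/9], [7/18, 13/36, 1/4]]
print(p2)   # [2/5, 67/180, 41/180]
print(v)    # [14/35, 13/35, 8/35]
```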
5.3.3 Fatigue
5.3.3.1 High-Cycle Fatigue
With the help of the above-mentioned discrete Markov chain, let us consider the
process of accumulative damage, failure due to a material’s high-cycle fatigue (see
Soong and Grigoriu, 1992).
Suppose that, during a fatigue test, a sufficient number of specimens are tested under cyclic loading. Assume that the damage probability can be described by a distribution of the form of Equation 5.155 but, in this case, rewritten as

p0 = {πj, j ∈ Ω} (5.172)

where the term πj is the probability that the test specimens are in the damage state j at time zero, and the probability of the failure state at the initial time is assumed to be zero. It is seen that these probabilities satisfy Equation 5.156, with the dimension 1 × f, where f denotes the final failure state and the states can be denoted by

Ω = {1, 2, …, f} (5.173)
Having denoted the probabilities of being in state j at time zero, let us further denote the state distribution at time x to be

px = {πx(j), j ∈ Ω} (5.174)

It is seen that

Σ_{j=1}^{f} πx(j) = 1 (5.175)

and

px = p0 P^x (5.176)
with P being the one-step transition probability matrix. Now, assume that the damage state cannot increase by more than one unit during a duty cycle; the transition probability matrix P can then be written as

        π1   1−π1  0     0     …   0
        0    π2    1−π2  0     …   0
P =     0    0     π3    1−π3  …   0
        …
        0    0     0     …  πf−1   1−πf−1
        0    0     0     …  0      1     (5.177)

It is noted that this case is a stationary Markov chain because the transition probability matrix P is time-invariant. Note also that every πj in P is smaller than 1 but greater than 0, and that the failure state f is absorbing.
Furthermore, the cumulative distribution function of the time Tf to reach the failure state f is πx(f), that is,

F_Tf(x) = P{Tf ≤ x} = πx(f) (5.178)

When the time becomes sufficiently long, namely, as x → ∞, the failure probability approaches unity, that is,

lim_{x→∞} πx(f) = 1 (5.179)

The corresponding reliability function is

R_Tf(x) = 1 − F_Tf(x) (5.180)
The mean and standard deviation of the time to failure can be calculated as

μ_Tf = E[Tf] = Σ_{x=0}^{∞} R_Tf(x) (5.181)

and

σ_Tf = { E[Tf²] − μ_Tf² }^{1/2} = { 2 Σ_{x=0}^{∞} x R_Tf(x) + μ_Tf − μ_Tf² }^{1/2} (5.182)

We can also directly calculate the mean and standard deviation in terms of the quantity f and the probabilities πq and 1 − πq. Suppose all new specimens start with probability 1 in the initial state 1, that is,

π1 = 1, π2 = 0, π3 = 0, … (5.183)

Then the time to failure can be written as

T_f1 = Σ_{q=1}^{f−1} Tq (5.184)

where Tq stands for the time spent in duty cycles in state q. It can be proven that all Tq are mutually independent with the geometric distribution

P[Tq = k] = πq^{k−1}(1 − πq), k = 1, 2, … (5.185)

The mean and standard deviation of the fatigue problem can be calculated based on the distribution described in Equation 5.185. It can be proven that

μ_Tf1 = E[Tf1] = f − 1 + Σ_{i=1}^{f−1} ri (5.186)

and

σ_Tf1 = { E[Tf1²] − μ_Tf1² }^{1/2} = [ Σ_{i=1}^{f−1} ri(1 + ri) ]^{1/2} (5.187)

where

ri = πi/(1 − πi) (5.188)

In the special case where all the ratios are equal, ri = r, these reduce to

μ_Tf1 = (f − 1)(1 + r) (5.189)

and

σ_Tf1 = [ (f − 1) r (1 + r) ]^{1/2} (5.190)
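The mean time to failure obtained from the reliability sum (Equation 5.181) should agree with the closed form (f − 1)(1 + r) of Equation 5.189. A minimal numerical check, with hypothetical values of f and the stay probability π:

```python
import numpy as np

# High-cycle fatigue as a stationary Markov chain (Equation 5.177),
# with equal stay probability pi in every damage state.
# f and pi below are illustrative assumptions.
f, pi = 5, 0.6
r = pi / (1 - pi)

# Build the (f x f) transition matrix: stay with pi, advance with 1 - pi;
# the failure state f is absorbing.
P = np.zeros((f, f))
for q in range(f - 1):
    P[q, q] = pi
    P[q, q + 1] = 1 - pi
P[f - 1, f - 1] = 1.0

# Mean time to failure from the reliability sum, mu = sum_x R(x),
# with R(x) = 1 - pi_x(f)  (Equations 5.178 through 5.181).
p = np.zeros(f); p[0] = 1.0      # all specimens start in state 1
mu = 0.0
for _ in range(10_000):          # truncation of the infinite sum
    mu += 1.0 - p[f - 1]
    p = p @ P

print(mu)   # approx (f - 1)(1 + r) = 4 * 2.5 = 10
```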
5.3.3.2 Low-Cycle Fatigue
When the stress is sufficiently high, plastic deformation will occur. Accounting for the loading in terms of stress is then less useful, and the strain in the material provides a simpler and more accurate description. This case is referred to as low-cycle fatigue.
One of the widely accepted theories is the Coffin–Manson relation (see Sornette
et al. 1992) given by
∆ε p = f (ε′f , N , c) (5.191)
5.3.3.2.1 Cyclic Test
Unlike high-cycle fatigue, when the load applied on a component or a test specimen
is greater than its yielding point, nonlinear displacement will occur. To study the
material behaviors, a forced cyclic test is often performed. In this case, one type
of material will retain its stiffness for several cycles until reaching the final stage
of broken failure (see Figure 5.15a), where B(t) stands for the broken point. Steel
is a typical material of this kind. Another type of material will reduce its stiffness
continuously until the stiffness is below a certain level at which the total failure is
defined (see Figure 5.15b), where a preset level marked by the dot-dashed line stands
for the failure level. When this level is reached at the nth cycle shown in Figure
5.15b, the corresponding amount of force is denoted by B(n). Reinforced concrete is a typical material of this kind under overloading cycles. In both cases, the number of cycles at which failure occurs is considerably smaller than in the above-mentioned high-cycle fatigue.
Note that during a low-cycle fatigue test, if the stiffness is reduced, the amount of
loading will often be reduced as well; otherwise, the corresponding displacement can
be too large to realize with common test machines. Therefore, instead of applying
Figure 5.15 Low-cycle fatigue. (a) Failure history without significant change in stiffness
of materials. (b) Failure history with decrease of stiffness of materials. (c) Stiffness variation.
equal amplitudes of force, equal displacement amplitudes are used, which is referred to as testing with displacement control. On the other hand, during a cyclic test, if the level of force is controlled, it is called force control. There is another type of low-cycle
fatigue test, which is uncontrolled. Tests with ground excitations on a vibration system
whose stiffness is contributed by the test specimen can be carried out to study uncon-
trolled low-cycle fatigue. Figure 5.15c shows conceptually an uncontrolled cyclic test,
from which we can see that the secant stiffness, which is the ratio of the peak force
and corresponding displacement, marked as k1, k2, and so on, is continuously reduced.
5.3.3.2.2 Remaining Stiffness
From Figure 5.15a, we can see that the first type of low-cycle fatigue occurs without
decaying stiffness, because when using displacement control, the force applied on
a test specimen will remain virtually constant in several cycles until sudden failure
occurs. We may classify the failure of this type of material under overloading condi-
tions with constant stiffness as “type C” low-cycle fatigue.
On the other hand, as seen in Figure 5.15b, the overload failure of materials with
decaying stiffness can be called “type D” low-cycle fatigue. To model these two
types of low-cycle fatigue, we will have rather different random processes.
Consider type D low-cycle fatigue first. We will see that, under controlled cyclic
tests, the remaining stiffness at the qth cycle may be caused by the accumulated
deformation in the previous cycles. The corresponding forces under displacement
control at cycle q measured from different specimens are likely different, that is,
random. On the other hand, under force control, the displacement at cycle q is also
random. Thus, both the displacement and the force can be seen as random processes.
In the following paragraphs, we can see that the sums of these random quantities at
cycle q can be approximated as Markov processes.
Experimental studies show that the force or the displacement of type C material before failure remains constant. Therefore, the corresponding test process cannot be characterized as a random process. However, the amount of force at the broken point is rather random. In addition, the specific cycle at which breakage occurs is also random. That is, Figure 5.15a conceptually shows failure at 10 cycles, but this is only one realization of the test; in reality, failure can happen at 9, 11, or other cycles. In addition, because the level of force is random, the exact time point of the sudden failure is also random. In the following examples, we can see that the failure force B(t) is not Markovian.
Figure 5.16 Different loadings for type C tests. (a) Constant load amplitude. (b) Monotonic
increasing/decreasing load amplitude. (c) Random loading amplitude.
specify the yielding load. Comparing Figure 5.16c with Figure 5.1, we can see that the study of type C low-cycle fatigue can be carried out based on the aforementioned conclusions for the engineering problem of level-crossing.
For type D low-cycle fatigue, the remaining stiffness in cycle n can be written as a function of the inelastic displacement process N(n), that is,

k = f(N(n)) (5.192)
Assume that in the nth cycle, the total displacement is D(n), which can be N(n) longer than the elastic displacement L(n). Note that the inelastic distance is treated as a random process because, when the material is damaged, the allowed linear displacement may vary from cycle to cycle. That is,

N(n) = D(n) − L(n) (5.193)

so that N(n) is the distance of inelastic displacement in cycle n. The accumulated inelastic displacement, up to cycle n, is denoted by Z(n). To simplify the process, let the allowed linear displacement be constant, that is, L(n) = L, and assume that, in each cycle, the forced displacement is greater than L. In this case, we can write

Z(n) = Σ_{q=1}^{n} N(q) = Σ_{q=1}^{n} D(q) − nL (5.194)

It is seen that Z(n) is a continuous-state Markov chain. To simplify the analysis, however, let us consider another Markov chain, Y(n), that differs from Z(n) only by the deterministic quantity nL, such that

Y(n) = Z(n) + nL = Σ_{q=1}^{n} D(q) (5.195)

Assume that the displacement D(q) has a normal distribution with zero mean; then Y(n) is also normally distributed, with the density function

f_Y(n)(n, d) = 1/(√(2πn) σ) e^{−d²/(2nσ²)} (5.197)
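As a quick numerical check of the statement above, a Monte Carlo sketch: if each per-cycle displacement is zero-mean normal, the accumulated sum Y(n) has standard deviation σ√n. The values of σ, n, and the sample size below are illustrative assumptions:

```python
import numpy as np

# Monte Carlo check of Equation 5.197: if each per-cycle displacement
# D(q) is N(0, sigma^2) and Y(n) = D(1) + ... + D(n), then Y(n) is
# N(0, n*sigma^2). sigma, n, and trials are illustrative values.
rng = np.random.default_rng(0)
sigma, n, trials = 0.5, 20, 200_000

Y_n = rng.normal(0.0, sigma, size=(trials, n)).sum(axis=1)

print(Y_n.mean())   # close to 0
print(Y_n.std())    # close to sigma * sqrt(n) = 0.5 * sqrt(20)
```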
5.3.4 Cascading Effect
5.3.4.1 General Background
When a system or a structure is subjected to multiple loads applied in sequence, either of the same kind or of different types, the earlier loads may cause certain damage and the subsequent ones can cause further damage that is far more severe than that of a single load. This sequential loading and damaging is referred to as a cascading effect.
Among load-resilient designs, the cascading effect is one of the least understood issues. This is because engineers are normally trained to design systems for success rather than to analyze them at failure. However, many real-world experiences have witnessed severe failures under cascading effects. Examples include mountain slides after strong earthquakes, bridge scour failure after heavy floods, structural failure due to overload after fatigue, and so on.
Because both the magnitudes and the acting times are random, the cascading effect can be treated as a random process. Although few systematic studies on such effects have been carried out, we discuss possible approaches in this book.
Again, the discussion of such a topic serves only to encourage readers to develop
a knowledge and methodology to understand the nature and essence of random pro-
cesses. It will also serve the purpose of opening the window to engineers who have
been trained in the deterministic world.
One possible approach is to represent the random process X(t) by a series expansion

X(t) = Σ_{i=0}^{∞} Ai φi(t), t ∈ [a, b] (5.198)

such that

l.i.m._{n→∞} Σ_{i=0}^{n} Ai φi(t) = X(t), that is, lim_{n→∞} E[ | Σ_{i=0}^{n} Ai φi(t) − X(t) |² ] = 0 (5.199)
In Equations 5.198 and 5.199, {Ai} is a set of random variables and {φi(t)} is a set of deterministic temporal functions, called basis or coordinate functions. The essence of Equation 5.198 is the methodology of variable separation: in each individual product Ai φi(t), the random variable Ai in state space and the temporal function φi(t) in the time domain are separated. This is similar to the method of separation of variables used to solve partial differential equations, where the spatial and temporal variables are first separated to form a set of ordinary differential equations.
A common representation that satisfies Equation 5.199 takes the following form

X(t) = μ(t) + Σ_{i=1}^{∞} Ai φi(t), t ∈ [a, b] (5.200)

where μ(t) = E[X(t)] is the mean function, and

1. E[Ai] = 0 (5.202)

2. E[Ai Aj] = σi² δij (5.203)

where

σi² = E[Ai²] (5.204a)

and δij is the Kronecker delta

δij = 1, i = j;  δij = 0, i ≠ j (5.204b)
The complete set of temporal functions {φi(t)} should satisfy the following:

1. The covariance of X(t) can be represented by

σXX(t1, t2) = Σ_{i=1}^{∞} σi² φi(t1) φi(t2), t1, t2 ∈ [a, b] (5.205)

2. Each function is square integrable:

∫_a^b |φi(t)|² dt < ∞ (5.206)

3. The functions are orthonormal:

∫_a^b φi(t) φj(t) dt = δij (5.207)
If X(t) is taken to be a measured time history of the random process (see Chapter 9, Inverse Problems), then the coefficient Ai is no longer random and can be calculated by

Ai = ∫_a^b [X(t) − μ(t)] φi(t) dt (5.208)
Practically speaking, only the first n terms in Equation 5.200 are used to approximate the random process X(t), that is,

X̂(t) = μ(t) + Σ_{i=1}^{n} Ai φi(t) (5.209)

with the statistics

μ_X̂(t) = E[X̂(t)] = μ(t) (5.210)

σ²_X̂(t) = Σ_{i=1}^{n} σi² φi²(t) (5.211)

and

R_X̂(t1, t2) = Σ_{i=1}^{n} σi² φi(t1) φi(t2), t1, t2 ∈ [a, b] (5.212)
To represent or reconstruct a random process, the sample range [a, b] must be realized. Suppose the orthonormal functions φi(t) are known; the following integral equation can be used as a trial-and-error approach to specify [a, b]:

∫_a^b R_X̂(t1, t2) φi(t2) dt2 = λi φi(t1), t1 ∈ [a, b] (5.213)

where

λi = E[Ai²] = σi² (5.214)

From Equation 5.214, the calculated parameter λi is a constant if the range [a, b] is chosen correctly. Furthermore, if λi shows a drastic variation over a period of time and/or after the targeted system undergoes significant loading, then the system may have suffered cascading damage.
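The expansion of Equations 5.198 through 5.214 (the Karhunen-Loeve representation) can be sketched in discrete form: eigen-decompose a covariance matrix and verify that the weighted sum of the eigenfunctions reconstructs it. The Wiener-process covariance min(t1, t2) is used below purely as an illustrative choice, not one prescribed by the text:

```python
import numpy as np

# Discrete sketch of the Karhunen-Loeve representation: the eigenvalues
# lambda_i play the role of sigma_i^2 and the eigenvector columns play
# the role of the orthonormal functions phi_i(t) (Equations 5.205-5.214).
n = 50
t = np.linspace(1 / n, 1.0, n)
C = np.minimum.outer(t, t)      # sigma_XX(t1, t2) = min(t1, t2), Wiener process

lam, phi = np.linalg.eigh(C)    # lambda_i = sigma_i^2; columns of phi = phi_i
C_rec = (phi * lam) @ phi.T     # sum_i lambda_i phi_i(t1) phi_i(t2)

print(np.allclose(C, C_rec))                 # True: Equation 5.205 holds
print(np.allclose(phi.T @ phi, np.eye(n)))   # True: orthonormality (5.207)
```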
As an example, let Y1, Y2, … be mutually independent random variables, each uniformly distributed on the integers {1, 2, …, a}, and let X(n) = max[Y1, Y2, …, Yn]. For any instants m1 < m2 < … < mn+1, we have

P{X(mn+1) = in+1 | [X(m1) = i1] ∩ [X(m2) = i2] ∩ … ∩ [X(mn) = in]}

= P{max[X(mn), Ymn+1, Ymn+2, …, Ym(n+1)] = in+1 | [X(m1) = i1] ∩ … ∩ [X(mn) = in]}

= P{max[in, Ymn+1, Ymn+2, …, Ym(n+1)] = in+1 | [X(m1) = i1] ∩ … ∩ [X(mn) = in]}

= 0,                                                in+1 < in
  P{max[Ymn+1, Ymn+2, …, Ym(n+1)] ≤ in},            in+1 = in
  P{max[Ymn+1, Ymn+2, …, Ym(n+1)] = in+1},          in+1 > in          (5.216)

Similarly,

P{X(mn+1) = in+1 | X(mn) = in}

= P{max[X(mn), Ymn+1, Ymn+2, …, Ym(n+1)] = in+1 | X(mn) = in}

= P{max[in, Ymn+1, Ymn+2, …, Ym(n+1)] = in+1 | X(mn) = in}

= 0,                                                in+1 < in
  P{max[Ymn+1, Ymn+2, …, Ym(n+1)] ≤ in},            in+1 = in
  P{max[Ymn+1, Ymn+2, …, Ym(n+1)] = in+1},          in+1 > in          (5.217)

The two conditional probabilities coincide, so {X(n)} is a Markov chain. The one-step transition probability is

P{X(n + 1) = j | X(n) = i}

= P{max[X(n), Yn+1] = j | X(n) = i} = P{max[i, Yn+1] = j | X(n) = i}

= 0,                     j < i
  P{Yn+1 ≤ i} = i/a,     j = i
  P{Yn+1 = j} = 1/a,     j > i          (5.219)
It is seen that the transition probability is not related to instant n; therefore, {X(n),
n ≥ 1} is a homogeneous Markov chain.
In addition, because Yn+1 is uniformly distributed on {1, 2, …, a},

pij = 0,     elsewhere
      i/a,   j = i
      1/a,   a ≥ j > i          (5.220)

Let Ta denote the first time the chain reaches the maximum state a. Its distribution is

P[Ta = k] = Σ_{i=1}^{a−1} P[X(1) = i] P{Ta = k | X(1) = i} = Σ_{i=1}^{a−1} (1/a) ((a − 1)/a)^{k−2} (1/a) = (a − 1)^{k−1}/a^k (5.221)
In this case, the averaged time, denoted by TE, is

E(TE) = Σ_{k=1}^{∞} k P[Ta = k] = Σ_{k=1}^{∞} k (a − 1)^{k−1}/a^k = (1/a) Σ_{k=1}^{∞} k (1 − 1/a)^{k−1} = a (5.222)
Thus, the larger the maximum value is, the longer the average record time will be.
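The series of Equations 5.221 and 5.222 can be checked by direct summation; with the illustrative choice a = 6, the truncated series should converge to a:

```python
# Expected time to first observe the maximum value a among i.i.d. draws
# uniform on {1, ..., a}: E(T_a) = sum_k k (a-1)^(k-1)/a^k = a
# (Equations 5.221 and 5.222), verified by truncating the series.
a = 6
expected = sum(k * ((a - 1) / a) ** (k - 1) / a for k in range(1, 2000))
print(expected)   # approaches a = 6
```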
Problems
1. Show that the peak frequency of a random process can be given by the fol-
lowing equation and find the formula for the Gaussian specialization
ωp = 2π νp = [ ∫_{−∞}^{∞} ω⁴ SX(ω) dω / ∫_{−∞}^{∞} ω² SX(ω) dω ]^{1/2}
2. Show that the velocity of Gaussian process X(t) at a zero up-crossing has a
Rayleigh distribution
3. The RMS velocity and displacement of a narrow-band Gaussian vibration
with zero mean are, respectively, 2.0 m/s and 0.05 m. Calculate
a. the rate of level up-crossing with the level to be 0.03 m
b. zero up-crossing rate
Here, a = 0.03 m; σ_Ẋ = 2.0 m/s and σ_X = 0.05 m
4. A narrow-band Gaussian process with zero mean has RMS displacement of
3.5 cm. Calculate and plot the distribution density function of
a. amplitude for this process and
b. height for this process
5. The joint PDF of a narrow-band process X(t) and its first derivative is given
by a joint Laplace distribution (Lin 1976)
f_XẊ(x, ẋ) = 1/(4ab) e^{−(|x|/a + |ẋ|/b)}, −∞ < x < ∞, −∞ < ẋ < ∞
6. Suppose X(t) is a narrow-band process. What is the value of the peak with a
1% probability of being exceeded?
7. Show that the probability of exceedance P(Z > z0) of Rice’s distribution
of peaks is approximately a times the probability found from the Rayleigh
distribution
8. {X(n), n = 0, 1, 2, …} is a Markov chain. Show that the inverse sequence of
X(n) is also a Markov chain, that is,
P{X(1) = x1∣X(2) = x2, X(3) = x3, …, X(n) = xn} = P{X(1) = x1∣X(2) = x2}
Section III
Vibrations
6 Single-Degree-of-Freedom Vibration Systems
In previous chapters, we showed that a random process is not merely a time history record. Time histories occurring physically in the real world have specific causes, and they are limited by certain conditions. In previous chapters, we mainly focused on the time-varying development itself, rather than on a clear understanding of why a given time history exists. At most, we studied several important characteristics of those time histories. Yet, these studies were limited to how a process behaves, such as whether it is stationary, what the corresponding statistical parameters are, and what frequency spectra it has. Generally speaking, the previous chapters were limited to the mathematical models of random processes, rather than the causes of these models.
Many time-varying processes, or time histories, are purely artificial. For example,
one can use computational software to generate a random signal. These purely artifi-
cial “stochastic” processes, although they seem very random, are actually controlled
by man-made signal generators and therefore we should have prior knowledge of
how they behave.
On the other hand, most real-world temporal developments are not purely or
directly man-made, and thus are not that easily controllable; some of them cannot
be easily measured. Accounting for all of these time-varying processes would be a
huge task—most of them are far beyond the scope of this manuscript. Here, we focus
only on a special type of process, the vibration signals, and mainly on mechanical
vibrations.
To classify linear vibration signals by their degree of certainty, there are typically
three essentially different types. The first is periodic vibration, such as harmonic
steady-state vibration, which contains a limited number of frequency components
and periodically repeats identical amplitudes. The second type is transient vibration,
such as free decay vibration, which is caused only by initial conditions. Although
we could also have a combination of harmonic steady-state vibration and free decay
vibration, the resulting signal is often not treated as the third type because we study
the first two types of vibration separately and simply add them together. Moreover,
both of them are deterministic. The third type is random vibration. Based on the
knowledge gained in the previous chapters, the nature of random signals is that,
at any future moment, their value is uncertain. Therefore, for the first two types of
vibration, with the given initial conditions and known input, we can predict their
future, including amplitudes, frequencies, and phases. However, we cannot predict
the response of random vibrations. For a simple vibrational system, even if we know
the bounds of a random input, the output bound is unpredictable.
Therefore, to handle random signals, we need a basic tool, that is, averaging.
However, to account for random vibrations, we can do something more than the
statistical measure. Namely, we need to study the nature of a vibration system itself.
Here, the main philosophy to support our action is that any real-world vibration sig-
nal must be a result of a certain convolution. That is, vibration is a response caused
by the combination of external excitation and the vibration system. Without any
external excitations, of course, there would be no responses. However, without vibra-
tion systems, the presence of an excitation only will not cause any response either.
We thus need to study the nature of vibration systems to further understand how their
responses behave.
In this chapter, the basics of a single-degree-of-freedom (SDOF) vibration sys-
tem that is linear and time-invariant will be described. The periodic and transient
responses of the SDOF system under harmonic and impulse excitations will also be
examined, respectively. For a more detailed description of vibrations, readers may
consult Meirovitch (1986), Weaver et al. (1990), Chopra (2001), and Inman (2008),
as well as Liang et al. (2012).
6.1 Concept of Vibration
The dynamic behavior of a SDOF system can be characterized by key parameters
through free vibration analysis, such as the natural frequency and the damping ratio.
6.1.1 Basic Parameters
Generally speaking, the background knowledge needed for the study of SDOF
vibration systems can be found in a standard vibration textbook. Examples of vibra-
tion textbooks that provide an ample understanding include Inman’s Engineering
Vibration and Chopra’s Dynamics of Structures. Background knowledge that should
be gained in reading one of these texts include: what vibration is, what the essence
of vibration is versus another form of motion, and why vibration should be studied.
Vibration is a unique type of motion of an object, which is repetitive and relative
to its nominal position. This back-and-forth motion can be rather complex; however,
the motion can often be decoupled into harmonic components. A single harmonic
motion is the simplest motion of vibration, given by

x(t) = d sin(ωt) (6.1a)

In this case, d is the amplitude, ω is the frequency, and t is the time. Thus, x(t) is a deterministic time history. Typically, Equation 6.1a is used to describe the vibration displacement.
The velocity, with the amplitude v = dω, is the derivative of the displacement,
then given by
x (t ) = d ω cos(ωt ) = v cos(ωt ) (6.1b)
The equation of motion of an undamped SDOF system (see Figure 6.1) is

m ẍ + k x = 0 (6.2)

The angular natural frequency is defined as

ωn = √(k/m) (6.3)

so that Equation 6.2 can be written as

ẍ + ωn² x = 0 (6.4)

Figure 6.1 (conceptual) shows the free-body diagram of the system: a mass m on a spring of stiffness k, with restoring force fk = kx and inertial force fm = −mẍ.
Example 6.1

For a system with mass m = 10 kg and stiffness k = 1000 N/m,

ωn = √(1000/10) = 10 (rad/s)
Note that the natural frequency of cycles per second, denoted by fn can be
written as
ωn
fn = = 1.5915 (Hz)
2π
6.1.1.1.4 Solutions
To solve the above equation, the so-called semidefinite method is used. In this approach, it is first assumed that

x(t) = dc cos(ωn t) + ds sin(ωn t) (6.5)

Then, Equation 6.5 is substituted into Equation 6.4. If the proper parameters dc and ds can be determined, being neither infinite nor zero, then the assumption is valid and Equation 6.5 is one of the possible solutions.
Noticeably, with initial conditions x0 and ẋ0, the parameters are

dc = x0 (6.6)

and

ds = ẋ0/ωn (6.7)
Example 6.2

Consider the system of Example 6.1 (ωn = 10 rad/s) with initial displacement x0 = 1 and initial velocity ẋ0 = −2. Letting t = 0 in Equation 6.5 gives x(0) = dc; therefore,

dc = 1

Furthermore, taking the derivative on both sides of Equation 6.5 with respect to time t and then letting t = 0, we have ẋ(0) = ds ωn. Therefore,

ds = −2/10 = −0.2
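The free-vibration solution of Examples 6.1 and 6.2 can be checked numerically; the sketch below confirms that the constants dc and ds recover the assumed initial conditions:

```python
import numpy as np

# Free vibration of the undamped SDOF system of Examples 6.1 and 6.2:
# x(t) = dc*cos(wn*t) + ds*sin(wn*t), with dc = x0 and ds = v0/wn.
m, k = 10.0, 1000.0        # kg, N/m (values of Example 6.1)
wn = np.sqrt(k / m)        # natural frequency, rad/s
x0, v0 = 1.0, -2.0         # initial conditions of Example 6.2

dc, ds = x0, v0 / wn
x = lambda t: dc * np.cos(wn * t) + ds * np.sin(wn * t)
v = lambda t: -dc * wn * np.sin(wn * t) + ds * wn * np.cos(wn * t)

print(wn, dc, ds)          # 10.0, 1.0, -0.2
print(x(0.0), v(0.0))      # recovers x0 = 1.0 and v0 = -2.0
```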
6.1.1.1.5.1 Conservation of Energy First, the energy terms in the SDOF system are defined.

Potential energy:

U(t) = (1/2) k x(t)² (6.8)

Kinetic energy:

T(t) = (1/2) m ẋ(t)² (6.9)

Conservation of energy requires

d[T(t) + U(t)]/dt = 0 (6.10)
(1/2) k x²max = (1/2) m ẋ²max (6.12)

Notice that Equation 6.2 can also be obtained through Equation 6.10. From Equation 6.12, with ẋmax = ωn xmax, it is determined that

(1/2) k x²max = (1/2) m ωn² x²max (6.14)
From Equation 6.14, Equation 6.3 can be calculated. This procedure implies that the
ratio of k and m is seen as a measurement of normalized vibration energy.
Example 6.3
Based on Equation 6.14, we can find the natural frequency of a complex system,
which consists of more than one mass (moment of inertia) but is described by a
single variable x (or rotation angle θ). Using such an energy method can simplify
the procedure for natural frequency analysis.
As shown in Figure 6.2, a system has three gears and a rack that is connected
to the ground through a stiffness k. In this system, the pitch radii of gear 1 to gear
2 are, respectively, R1 and R 2. The pitch radii of gear 2 to gear 3 (also of gear 3
to the rack) are, respectively, r2 and r3. To simplify the problem, let the teeth and
shafts of gears as well as the rack have infinitely strong stiffness and the system is
frictionless.
To use the energy method, Tmax = Umax, we need to find both the potential and the kinetic energies. The kinetic energies of the gears are functions of the moments of inertia Ji, the gear radii, and the velocity ẋ; that is, Ti = Ti(Ji, ri, ẋ). The kinetic energy Track is a function of the mass of the rack and of ẋ. Here, x is the only parameter needed to denote the motion; therefore, the system is of SDOF. The potential energy is given by

U = (1/2) k x²
Figure 6.2 (conceptual): gears 1, 2, and 3, with moments of inertia J1, J2, and J3 and the radii R1, R2, r2, and r3 defined above, drive a rack of mass m connected to the ground through the stiffness k; x denotes the rack displacement.
The total kinetic energy T is contributed by the three gears, denoted by Tgear1, Tgear2, and Tgear3, respectively, and by the rack, denoted by Track. That is,

T = Tgear1 + Tgear2 + Tgear3 + Track

Denote θi as the rotation angle of gear i. The relationship between the translational displacement x and the rotational angle is x = r3θ3, so that for the translational velocity ẋ and the rotational velocity θ̇3 we have ẋ = r3θ̇3. Therefore,

θ3 = x/r3 and θ̇3 = ẋ/r3

With the gear ratio γ32 = r3/r2 and θ2 = γ32θ3,

θ2 = (r3/r2)(x/r3) = x/r2 and θ̇2 = ẋ/r2

Similarly, with γ21 = R2/R1 and θ1 = γ21θ2,

θ1 = (R2/R1)(x/r2) = x R2/(R1r2) and θ̇1 = ẋ R2/(R1r2)
Thus, the total kinetic energy is

T = (1/2) m ẋ² + (1/2)(J3θ̇3² + J2θ̇2² + J1θ̇1²) = (1/2) ẋ² [ m + J3/r3² + J2/r2² + J1R2²/(R1²r2²) ]

Letting Tmax = Umax with ẋmax = ωn xmax gives

(1/2) ωn² x²max [ m + J3/r3² + J2/r2² + J1R2²/(R1²r2²) ] = (1/2) k x²max

so that

ωn = √( k / [ m + J3/r3² + J2/r2² + J1R2²/(R1²r2²) ] )

Alternatively, using the conservation of energy, d[T(t) + U(t)]/dt = 0,

d/dt { (1/2) ẋ² [ m + J3/r3² + J2/r2² + J1R2²/(R1²r2²) ] + (1/2) k x² } = 0

Thus,

[ m + J3/r3² + J2/r2² + J1R2²/(R1²r2²) ] ẍ ẋ + k x ẋ = 0

so that

[ m + J3/r3² + J2/r2² + J1R2²/(R1²r2²) ] ẍ + k x = 0

from which

ωn = √(effective stiffness/effective mass) = (coefficient of disp./coefficient of acc.)^{1/2} = √( k / [ m + J3/r3² + J2/r2² + J1R2²/(R1²r2²) ] )
If there are rotational stiffnesses k1, k2, and k3 associated with the shafts of the three gears, then, due to the rotational deformation of each gear's shaft, there will be additional potential energies:

Ugear1 = (1/2) k1θ1² = (1/2) k1 [R2²/(R1²r2²)] x²

Ugear2 = (1/2) k2θ2² = (1/2) k2 x²/r2²

Ugear3 = (1/2) k3θ3² = (1/2) k3 x²/r3²

The total potential energy is then

U = (1/2) x² [ k + k3/r3² + k2/r2² + k1R2²/(R1²r2²) ]

and the natural frequency becomes

ωn = √( [ k + k3/r3² + k2/r2² + k1R2²/(R1²r2²) ] / [ m + J3/r3² + J2/r2² + J1R2²/(R1²r2²) ] )
6.1.1.1.5.3 Force and Momentum Equation 6.3 can be further obtained using
additional approaches. For example, consider the momentum q = mẋ.
Figure 6.3 conceptually shows the relationships among the forces and momentum
mentioned above, where f X, f M, and q0 represent maximum restoring, inertia, and
momentum, respectively.
q0 = mv (6.16)
then

k = fK/d (6.17)

and

m = q0/v = q0/(ωn d) (6.18)

Thus, the ratio

k/m = (fK/d)/(q0/(ωn d)) = ωn fK/q0

together with k/m = ωn² indicates

ωn = fK/q0 (6.19)
$$x(t) = d\,e^{\pm j\omega_n t} \tag{6.22}$$

Note that

$$\dot x(t) = \lambda x(t) \tag{6.23}$$

With the kinetic energy given by

$$T(t) = \frac12 m\dot x(t)^2$$

we have

$$\frac{dT(t)}{dt} = m\dot x(t)\frac{d}{dt}\left[\dot x(t)\right] = m\dot x(t)\,\lambda\dot x(t) = 2\lambda\left[\frac12 m\dot x(t)^2\right] = 2\lambda T(t) \tag{6.24}$$

so that

$$\frac{dT(t)/dt}{2T(t)} = \lambda \tag{6.25}$$

and

$$\omega_n = |\lambda| = \frac12\left|\frac{dT(t)/dt}{T(t)}\right| \tag{6.26}$$
and

$$\omega_n = |\lambda| = \frac12\left|\frac{dU(t)/dt}{U(t)}\right| \tag{6.27}$$
Equations 6.26 and 6.27 indicate that the angular natural frequency is a unique ratio of energy exchange: it is the absolute value of one-half of the rate of energy exchange over the kinetic (or potential) energy. The higher this rate, the larger the natural frequency. Readers may consider why the factor of one-half (equivalently, twice the kinetic or potential energy) is needed.
Figure 6.4 conceptually shows the potential energy in one vibration cycle.
$$\omega_n = \sqrt{\frac{k}{m}} \tag{6.28}$$

$$\omega_n = |\lambda| \tag{6.29}$$

$$\omega_n = \frac{v}{d} = \frac{a}{v} = \sqrt{\frac{a}{d}} \tag{6.30}$$

$$\omega_n = \frac{f_K}{q_0} \tag{6.31}$$

$$\omega_n = \frac12\left|\frac{dT(t)/dt}{T(t)}\right| = \frac12\left|\frac{dU(t)/dt}{U(t)}\right| \tag{6.32}$$
Readers may consider whether all of the above approaches always apply. Here, we emphasize that, as seen in Equation 6.28, if either m = 0 or k = 0, then the natural frequency does not exist. A stable system given by Equation 6.12 is linear, SDOF, and undamped, and this holds only when k > 0. From Equation 6.8, if we have negative stiffness, then the potential energy U(t) ∝ k becomes negative, which means that a certain source continuously inputs energy to the system; the response will continuously increase, which makes the system unstable. Therefore, taking the absolute value of the ratio does not mean that k can be smaller than zero. Furthermore, if c ≠ 0, we will have a damped system. For a stably damped vibration system to exist, not only the conditions m > 0 and k > 0 are needed but also a condition regarding c, which will be discussed as follows.
$$f_c = c\dot x \tag{6.33}$$
where c is the proportional coefficient, defined as the damping coefficient. The param-
eter c is always greater than or equal to zero: semipositive or nonnegative. In Figure 6.5,
a damper c is added to the SDOF system and the resulting balance of force is
$$\sum f_x = 0 \;\Rightarrow\; f_m + f_c + f_k = 0 \tag{6.34}$$

that is,

$$m\ddot x + c\dot x + kx = 0 \tag{6.35}$$

Assume a solution of the form

$$x = d\,e^{\lambda t} \tag{6.36}$$

so that

$$\dot x = d\lambda e^{\lambda t} \tag{6.37}$$

$$\ddot x = d\lambda^2 e^{\lambda t} \tag{6.38}$$

Substitution into the equation of motion yields the characteristic equation

$$m\lambda^2 + c\lambda + k = 0 \tag{6.39}$$
$$\lambda_{1,2} = \frac{-c \pm \sqrt{c^2 - 4mk}}{2m} \tag{6.40}$$

Dividing Equation 6.39 by m,

$$\lambda^2 + \frac{c}{m}\lambda + \frac{k}{m} = 0 \tag{6.41}$$

Given that

$$\frac{k}{m} = \omega_n^2$$

let

$$\frac{c}{m} = 2\zeta\omega_n \tag{6.42}$$
in which both m and c are positive, or in the case of c, semipositive; ωn should also
be positive, such that ζ is greater than or equal to zero. The critical damping ratio, ζ,
or simply referred to as the damping ratio, is a semipositive number.
$$\zeta = \frac{c}{2m\sqrt{\dfrac{k}{m}}} = \frac{c}{2\sqrt{mk}} \tag{6.43}$$
In the critically damped case,

$$c^2 - 4mk = 0 \tag{6.44}$$

and

$$\lambda_1 = \lambda_2 = -\frac{c}{2m} = -\frac{2\sqrt{mk}}{2m} = -\sqrt{\frac{k}{m}} = -\omega_n \tag{6.45}$$
Thus,
$$c_c = 2\sqrt{mk} \tag{6.46}$$

$$\zeta = \frac{c}{c_c} \tag{6.47}$$

and the critically damped response takes the form

$$x(t) = d\,e^{\lambda t} = d\,e^{-\frac{c}{2m}t} = d\,e^{-\omega_n t} \tag{6.48}$$
c < 2 mk (6.49)
ζ < 1 (6.50a)
ζ = 1 (6.50b)
ζ > 1 (6.50c)
c < 0 (6.51)
ζ < 0 (6.52)
Example 6.4
A car has a mass of 2000 kg, and the total stiffness of its suspension system is 2840 kN/m. The design damping ratio is 0.12; find the total damping coefficient of the suspension system. Suppose five people weighing a total of 5 kN are sitting in this car; calculate the resulting damping ratio (g = 9.8 m/s²).

Based on Equation 6.43, the damping coefficient c can be calculated as

$$c = 2\zeta\sqrt{mk} = 18.1\ \text{kN·s/m}$$

With the additional mass Δm = 5000/9.8 = 510.2 kg, the new damping ratio is

$$\zeta_{\text{new}} = \frac{c}{2\sqrt{(m + \Delta m)k}} = 0.11$$

It is seen that when the mass is increased by about 1/4, the damping ratio is reduced by only about 10%.
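The numbers in Example 6.4 follow directly from Equation 6.43; a short Python check (variable names are ours):

```python
import math

# Example 6.4: m = 2000 kg, k = 2840 kN/m, design damping ratio 0.12
m, k, zeta = 2000.0, 2840e3, 0.12
c = 2 * zeta * math.sqrt(m * k)            # damping coefficient, N·s/m
dm = 5000.0 / 9.8                          # added passenger mass, kg
zeta_new = c / (2 * math.sqrt((m + dm) * k))
print(round(c / 1e3, 1), round(zeta_new, 2))   # 18.1 0.11
```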
6.1.1.2.5 Eigenvalue λ
Rewriting Equation 6.40,

$$\lambda_{1,2} = \frac{-c \pm \sqrt{c^2 - 4mk}}{2m} = -\frac{c}{2m} \pm j\sqrt{\frac{4mk - c^2}{4m^2}} = -\frac{c}{2m} \pm j\sqrt{\frac{k}{m} - \left(\frac{c}{2m}\right)^2} \tag{6.53}$$

Since

$$\frac{c}{2m} = \frac{2\zeta\sqrt{mk}}{2m} = \zeta\sqrt{\frac{k}{m}} = \zeta\omega_n \tag{6.54}$$

the eigenvalues can be written as

$$\lambda_{1,2} = -\zeta\omega_n \pm j\sqrt{1 - \zeta^2}\,\omega_n \tag{6.55}$$
Note that
λ 2 = λ*1 (6.56)
where (.)* denotes complex conjugate of (.). Figure 6.6 illustrates the eigenvalues.
From Figure 6.6, it is shown that

$$\lambda_1\lambda_2 = \lambda_1\lambda_1^* = \frac{k}{m} = \omega_n^2 \tag{6.57}$$
(Figure 6.6: the eigenvalues, with real part −ζωn and imaginary parts ±j√(1 − ζ²)ωn, in the complex plane.)
and

$$\lambda_1 + \lambda_2 = -\frac{c}{m} = -2\zeta\omega_n \tag{6.58}$$

where the damped natural frequency is

$$\omega_d = \omega_n\sqrt{1 - \zeta^2} \tag{6.59}$$

with

$$\omega_d = \omega_n\sqrt{1 - \zeta^2} \le \omega_n \tag{6.60}$$
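Equations 6.57 and 6.58 give a quick numerical self-check on the complex eigenvalues. The following Python sketch (our variable names, an arbitrary underdamped system) verifies both identities:

```python
import cmath
import math

def sdof_eigenvalues(m, c, k):
    """Roots of the characteristic equation m*lam^2 + c*lam + k = 0 (Eq. 6.39)."""
    disc = cmath.sqrt(c * c - 4 * m * k)
    return (-c + disc) / (2 * m), (-c - disc) / (2 * m)

m, c, k = 10.0, 15.0, 2000.0               # an arbitrary underdamped system
lam1, lam2 = sdof_eigenvalues(m, c, k)
wn = math.sqrt(k / m)
zeta = c / (2 * math.sqrt(m * k))
print(abs(lam1 * lam2 - wn**2) < 1e-9,          # Eq. 6.57: product = wn^2
      abs(lam1 + lam2 + 2 * zeta * wn) < 1e-9)  # Eq. 6.58: sum = -2 zeta wn
```

The output is `True True`; the conjugate-pair relation λ2 = λ1* of Equation 6.56 holds as well.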
Example 6.5
The damping ratio can be recovered from an identified eigenvalue: ζ = −Re(λ)/ωn = 0.3. Figure 6.7 plots velocity against displacement, tracing the energy dissipation loops.
FIGURE 6.7 Energy dissipations: (a) steady-state response; (b) free decay response.
For the undamped system, x(t) = d e^{±jωn t}; for the damped system,

$$x(t) = d\,e^{-\zeta\omega_n t}e^{\pm j\sqrt{1-\zeta^2}\,\omega_n t} = d\,e^{-\zeta\omega_n t}e^{\pm j\omega_d t} \tag{6.61}$$
Both equations that describe vibrations share a similar term, e ± jω nt or e ± jω dt; there-
fore, the “j” term must be related to dynamic oscillations. In fact, this term implies
energy exchanges between potential and kinetic energies. If this term is eliminated,
then there will be no energy exchange and no vibration.
Readers may consider how this form compares to that of Equation 6.5.
$$d = \frac{x_0}{\sin\varphi} \tag{6.64}$$

Differentiating the response,

$$\dot x(t) = \frac{d}{dt}x(t) = -\zeta\omega_n d\,e^{-\zeta\omega_n t}\sin(\omega_d t + \varphi) + \omega_d d\,e^{-\zeta\omega_n t}\cos(\omega_d t + \varphi) \tag{6.65}$$

and evaluating at t = 0,

$$\dot x(0) = \frac{x_0}{\sin\varphi}\left(-\zeta\omega_n\sin\varphi + \omega_d\cos\varphi\right) = x_0\left(-\zeta\omega_n + \omega_d\frac{\cos\varphi}{\sin\varphi}\right) = \dot x_0 \tag{6.67}$$

Consequently,

$$\frac{\cos\varphi}{\sin\varphi} = \cot\varphi = \frac{\dot x_0 + x_0\zeta\omega_n}{x_0\omega_d} \tag{6.68}$$

and

$$\varphi = \tan^{-1}\frac{x_0\omega_d}{\dot x_0 + x_0\zeta\omega_n} \tag{6.69}$$

so that

$$\sin\varphi = \frac{x_0\omega_d}{\sqrt{(\dot x_0 + x_0\zeta\omega_n)^2 + (x_0\omega_d)^2}} \tag{6.70}$$

Thus,

$$d = \frac{\sqrt{(\dot x_0 + x_0\zeta\omega_n)^2 + (x_0\omega_d)^2}}{\omega_d} \tag{6.71}$$

and

$$\varphi = \tan^{-1}\frac{\omega_d x_0}{\dot x_0 + \zeta\omega_n x_0} + h_\varphi\pi \tag{6.72}$$
The term hϕπ in Equation 6.72 accounts for the cases ẋ0 + x0ζωn = 0 and ẋ0 + x0ζωn < 0. The period of the tangent function is π; therefore, the arctangent function is multivalued, whereas the period of the sine and cosine functions is 2π. Consequently, the Heaviside term hϕ cannot be chosen arbitrarily. Based on the fact that most computational programs, such as MATLAB®, limit the values of the arctangent to the range −π/2 to +π/2, hϕ is defined as

$$h_\varphi = \begin{cases} 0, & v_0 + \zeta\omega_n x_0 > 0 \\ 1, & v_0 + \zeta\omega_n x_0 < 0 \end{cases} \tag{6.73}$$

As shown in Figure 6.8, there can be four instances of the phase angle ϕ. This is a result of the possible sign combinations of ωd x0 and v0 + ζωn x0. Regardless of the value of ωd x0, it is shown in Figure 6.9 and Equation 6.69 that the sign of v0 + ζωn x0 determines the value of hϕ.
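In code, the two-branch bookkeeping of Equations 6.72 and 6.73 is exactly what a four-quadrant arctangent performs. A Python sketch (the function name is ours) using math.atan2:

```python
import math

def free_decay_amp_phase(x0, v0, wn, zeta):
    """Amplitude d and phase phi of x(t) = d e^{-zeta wn t} sin(wd t + phi).

    atan2(y, x) returns the angle of the point (x, y), so it supplies the
    pi offset of Equation 6.73 automatically when v0 + zeta*wn*x0 < 0.
    """
    wd = wn * math.sqrt(1.0 - zeta**2)
    d = math.hypot(v0 + zeta * wn * x0, x0 * wd) / wd   # Equation 6.71
    phi = math.atan2(x0 * wd, v0 + zeta * wn * x0)      # Equations 6.72-6.73
    return d, phi

# undamped release from rest: pure cosine, so d = x0 and phi = pi/2
print(free_decay_amp_phase(1.0, 0.0, 10.0, 0.0))   # (1.0, 1.5707963267948966)
```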
Example 6.6
A linear system with mass = 100 kg, stiffness = 1000 kN/m, and damping ratio =
0.5 is excited by initial condition x0 = 0.01 m and v0 = −2 m/s; calculate and plot
the free-decay displacement.
The undamped and damped natural frequencies are

$$\omega_n = \sqrt{k/m} = 100\ \text{rad/s}, \qquad \omega_d = \omega_n\sqrt{1 - \zeta^2} = 86.6\ \text{rad/s}$$

The amplitude is

$$d = \frac{\sqrt{(v_0 + x_0\zeta\omega_n)^2 + (x_0\omega_d)^2}}{\omega_d} = 0.02\ \text{m}$$

and, because v0 + ζωn x0 = −1.5 < 0, hϕ = 1, so the phase angle is

$$\varphi = \tan^{-1}\frac{\omega_d x_0}{v_0 + \zeta\omega_n x_0} + \pi = 2.62\ \text{rad}$$
(The resulting free-decay displacement, in millimeters, is plotted against time over 0 to 0.2 s.)
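The Example 6.6 numbers can be verified in a few lines of Python (the book's own plots are produced in MATLAB®):

```python
import math

# Example 6.6: m = 100 kg, k = 1000 kN/m, zeta = 0.5, x0 = 0.01 m, v0 = -2 m/s
m, k, zeta = 100.0, 1000e3, 0.5
x0, v0 = 0.01, -2.0
wn = math.sqrt(k / m)                       # 100 rad/s
wd = wn * math.sqrt(1.0 - zeta**2)          # 86.6 rad/s
d = math.hypot(v0 + zeta * wn * x0, x0 * wd) / wd    # Equation 6.71
phi = math.atan2(x0 * wd, v0 + zeta * wn * x0)       # h_phi = 1 branch
print(round(d, 3), round(phi, 2))           # 0.02 2.62
```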
6.2.1 Harmonic Excitation
6.2.1.1 Equation of Motion
From the graphic description of the damped SDOF system shown in Figure 6.11, the following equation of motion is obtained, with initial conditions x(0) and ẋ(0).
$$m\ddot x + c\dot x + kx = f(t), \qquad \dot x(0) = v_0, \quad x(0) = x_0 \tag{6.74}$$
The total response is the sum

$$x(t) = x_h(t) + x_p(t)$$

in which xh(t) is the response due to the initial displacement and velocity and xp(t) is the particular solution due to the force excitation.
The particular solution can be expressed as

$$x_p(t) = x_{pt}(t) + x_{ps}(t)$$

where xpt(t) is the transient response due to the force f(t) and xps(t) is the steady-state solution. The total transient response, denoted by xt(t), is

$$x_t(t) = x_h(t) + x_{pt}(t)$$
ΔW = ΔE (6.80)
and
xp0 = const
Equation 6.82 linearly combines the two cases, expressed by Equations 6.75a
and 6.75b, by using complex functions. This case does not exist in the real world.
Because the real and the imaginary domains are orthogonal, the responses due to the real and the imaginary portions of the excitation will also be orthogonal. Suppose the response due to the real force, f0 cos(ωt), denoted by $x_{ps}^{(R)}(t)$, can be written as

$$x_{ps}^{(R)}(t) = \mathrm{Re}\left[x_{p0}e^{j(\omega t + \varphi)}\right] = x_{p0}\cos(\omega t + \varphi) \tag{6.84a}$$

and the response due to the imaginary force, f0 sin(ωt), denoted by $x_{ps}^{(I)}(t)$, is

$$x_{ps}^{(I)}(t) = \mathrm{Im}\left[x_{p0}e^{j(\omega t + \varphi)}\right] = x_{p0}\sin(\omega t + \varphi) \tag{6.84b}$$
The total steady-state response is then

$$x_{ps}(t) = x_{p0}e^{j\varphi}e^{j\omega t} = \bar x\,e^{j\omega t} \tag{6.85}$$

where

$$\bar x = x_{p0}e^{j\varphi} \tag{6.86}$$
Taking the first and the second order derivatives of Equation 6.85 with respect to
t, yields
$$\dot x_{ps}(t) = j\omega\bar x\,e^{j\omega t} \tag{6.87a}$$

and

$$\ddot x_{ps}(t) = -\omega^2\bar x\,e^{j\omega t} \tag{6.87b}$$

Substitution of Equations 6.82, 6.85, 6.86, 6.87a, and 6.87b into Equation 6.74 results in

$$-\omega^2 m\bar x\,e^{j\omega t} + j\omega c\bar x\,e^{j\omega t} + k\bar x\,e^{j\omega t} = f_0 e^{j\omega t} \tag{6.88}$$
$$\bar x = \frac{f_0}{-\omega^2 m + j\omega c + k} \tag{6.89}$$

or

$$\bar x = \frac{f_0 \div k}{(-\omega^2 m + j\omega c + k)\div k} = \frac{f_0}{k}\cdot\frac{1}{-\omega^2/\omega_n^2 + j2\zeta\omega/\omega_n + 1} = \frac{f_0}{k}\cdot\frac{1}{1 - r^2 + j2\zeta r} \tag{6.90}$$

where the frequency ratio is

$$r = \frac{\omega}{\omega_n} \tag{6.91}$$
In polar form,

$$\bar x = \frac{f_0}{k}\cdot\frac{1}{\sqrt{(1-r^2)^2 + (2\zeta r)^2}}\;e^{-j\tan^{-1}\frac{2\zeta r}{1-r^2}} \tag{6.92}$$

The absolute value of the complex-valued amplitude is the amplitude of the steady-state response xps(t), that is,

$$x_{p0} = |\bar x| = \left|\frac{f_0}{k}\cdot\frac{1}{\sqrt{(1-r^2)^2 + (2\zeta r)^2}}\;e^{-j\tan^{-1}\frac{2\zeta r}{1-r^2}}\right| = \frac{f_0}{k}\cdot\frac{1}{\sqrt{(1-r^2)^2 + (2\zeta r)^2}} \tag{6.93}$$
The phase angle of the complex-valued amplitude is the phase angle of the steady-state response xps(t), that is,

$$\varphi = \angle(\bar x) = \angle\left(e^{-j\tan^{-1}\frac{2\zeta r}{1-r^2}}\right) = -\tan^{-1}\frac{2\zeta r}{1-r^2} \tag{6.94}$$
Because the tangent function is periodic with period π, the inverse tangent is multivalued. A more precise description of the phase angle is given by

$$\varphi = \angle(\bar x) = -\tan^{-1}\frac{2\zeta r}{1-r^2} + h_\varphi\pi \tag{6.95}$$

where

$$h_\varphi = \begin{cases} 0, & \omega < \omega_n \\ 1, & \omega > \omega_n \end{cases} \tag{6.96}$$
Example 6.7
An m-c-k system with mass = 10 kg, c = 15 N·s/m, and k = 2000 N/m is excited by a harmonic force f1(t) = 100 sin(4t) under zero initial conditions. Calculate and plot the displacement response. If the excitation changes to f2(t) = 100 sin(14t), how does the response change accordingly?
First, the natural frequency and damping ratio are calculated to be 14.14 rad/s and 0.05, respectively.
For f1(t) and f2(t), the frequency ratios are r1 = ω1/ωn = 0.283 and r2 = ω2/ωn = 0.990, respectively.
Let us now consider the steady-state solution xps(t). Its amplitude xp0 can be calculated by taking the absolute value of x̄, that is,

$$x_{p0} = |\bar x| = \frac{f_0}{k}\cdot\frac{1}{\sqrt{(1-r^2)^2 + (2\zeta r)^2}}$$

and the phase angle is

$$\varphi = -\tan^{-1}\frac{2\zeta r}{1-r^2} + 0\cdot\pi$$

with hϕ = 0 because ω < ωn in both cases.
The results are plotted in Figure 6.12, where the dotted line is xp1(t) and the solid
line is xp2(t).
From Figure 6.12, it is seen that with driving frequency = 14 rad/s, a comparatively much larger response amplitude results; this phenomenon is resonance, which will be discussed in detail in Section 6.2.1.3.1. Also from Figure 6.12, we see the amplitudes of the responses jump to their peak values in the first quarter cycle. At least for the excitation f2(t) and the corresponding resonance, in our experience, this direct jump is not typical. Because resonance is a cumulative effect, that is, the amplitude gradually increases over a certain duration before reaching the steady-state peak value, there must be a term in the total solution that describes the transient phenomenon. This is why we have to consider Equation 6.78 with the transient term xpt(t). In the following, using the concept of dynamic magnification factors and the semidefinite method, we can determine the transient response. In addition, we can also use the convolution of the input force and the unit impulse response function to derive the transient response.
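The steady-state amplitudes behind Figure 6.12 can be reproduced from Equation 6.93. A Python sketch of Example 6.7 (the helper name is ours):

```python
import math

# Example 6.7: m = 10 kg, c = 15 N·s/m, k = 2000 N/m, f0 = 100 N
m, c, k, f0 = 10.0, 15.0, 2000.0, 100.0
wn = math.sqrt(k / m)
zeta = c / (2 * math.sqrt(m * k))

def steady_amplitude(w):
    """Steady-state displacement amplitude x_p0 from Equation 6.93."""
    r = w / wn
    return f0 / k / math.sqrt((1 - r**2)**2 + (2 * zeta * r)**2)

x1 = steady_amplitude(4.0)     # r = 0.283, far from resonance
x2 = steady_amplitude(14.0)    # r = 0.990, near resonance (wn = 14.14 rad/s)
print(round(x1, 3), round(x2, 2))   # 0.054 0.47
```

The near-resonant amplitude is almost an order of magnitude larger, consistent with the resonance behavior described above.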
6.2.1.3 Dynamic Magnification
Equation 6.90 implies that the amplitude of x is a function of r and ζ. This func-
tion unveils an important phenomenon: the amplitude of the vibration response can
be magnified or reduced, dependent upon the frequency range and the damping
capacity.
$$x_{p0} = \frac{f_0}{k}\cdot\frac{1}{\sqrt{(1-r^2)^2 + (2\zeta r)^2}} = \frac{f_0}{k}\beta_D \tag{6.97}$$

where the dynamic magnification factor is

$$\beta_D = \frac{x_{p0}k}{f_0} = \frac{1}{\sqrt{(1-r^2)^2 + (2\zeta r)^2}} \tag{6.98}$$
(Curves are shown for damping ratios of 10%, 30%, 70.7%, and 100%.)
FIGURE 6.13 Plot of dynamic magnification factors: (a) amplitudes; (b) phases.
The frequency band where the value of βD is greater than 70.7% of the peak value is defined as the resonance region.
It is seen that when the frequency ratio is much smaller than unity, namely, when
the driving frequency is comparatively much smaller than the natural frequency, the
value of βD is close to unity and its value will gradually increase when the ratio r
becomes larger.
When the frequency ratio is larger than unity, the value of βD will become smaller.
When the frequency ratio is much larger than unity, namely, when the driving fre-
quency is comparatively much larger than the natural frequency, the value of βD is
approaching zero.
The phase angle between the forcing function and the response is

$$\varphi = -\tan^{-1}\frac{2\zeta r}{1-r^2} + h_\varphi\pi \tag{6.99}$$
In this instance, the Heaviside function is given in Equation 6.96. From Figure
6.13b, we can see that when the frequency ratio varies, so will the phase angle. As
the frequency ratio becomes larger, the phase angle decreases from zero toward –π
(or −180°). At exactly the resonant point, the phase angle becomes –π/2.
The steady-state response is

$$x_{ps}(t) = \frac{f_0}{k}\beta_D\sin(\omega t + \varphi)$$

We also know that a transient response is a free-decay vibration, which may take the following form:

$$x_{pt}(t) = a\,e^{-\zeta\omega_n t}\sin(\omega_d t + \theta_t) \tag{6.100}$$

With zero initial conditions, xp(0) = 0 and ẋp(0) = 0 give the following two equations:

$$a = -\frac{f_0}{k}\beta_D\frac{\sin\varphi}{\sin\theta_t} \tag{6.103}$$

and

$$-\frac{f_0}{k}\beta_D\frac{\sin\varphi}{\sin\theta_t}\left[-\zeta\omega_n\sin\theta_t + \omega_d\cos\theta_t\right] + \frac{f_0}{k}\beta_D\,\omega\cos\varphi = 0$$

Therefore, because

$$\tan\varphi = \frac{2\zeta r}{1-r^2}$$

we have

$$\sin\varphi = \frac{2\zeta r}{\sqrt{(1-r^2)^2 + (2\zeta r)^2}} = \beta_D(2\zeta r) \tag{6.105a}$$
as well as

$$\tan\theta_t = \frac{2\zeta\sqrt{1-\zeta^2}}{2\zeta^2 + r^2 - 1} \tag{6.106a}$$

Furthermore,

$$\sin\theta_t = \frac{2\zeta\sqrt{1-\zeta^2}}{\sqrt{(2\zeta r)^2 + (r^2-1)^2}} = \beta_D\left(2\zeta\sqrt{1-\zeta^2}\right) \tag{6.106b}$$

so that

$$\theta_t = (-1)^{h_{\theta t}}\sin^{-1}\left\{\beta_D\left(2\zeta\sqrt{1-\zeta^2}\right)\right\} + h_{\theta t}\,\pi \tag{6.107a}$$

and

$$a = \frac{(f_0/k)\,\beta_D^2(2\zeta r)}{\beta_D\left(2\zeta\sqrt{1-\zeta^2}\right)} = \frac{f_0}{k}\cdot\frac{r\beta_D}{\sqrt{1-\zeta^2}} \tag{6.108}$$
Finally, to complete the semidefinite approach, we can substitute xp(t), ẋp(t), and ẍp(t) into Equation 6.76 to check that it is balanced. With a positive result, it can be shown that Equation 6.101 is indeed a particular solution and thus Equation 6.100 is the transient part of the particular response. In Figure 6.14a, the normalized amplitudes of the transient solution with zero initial conditions (namely, f0/k = 1) are plotted versus the frequency ratio. It is seen that when r = 1, the amplitude reaches its peak value. As with the amplitude of the steady-state response, the smaller the damping ratio, the larger the peak value. In Figure 6.14b, the phase angles are also plotted versus the frequency ratio. It is seen that with different damping ratios, the curves of the phase angle can be rather different.
(Curves are shown for damping ratios of 0.05, 0.30, and 0.70.)
FIGURE 6.14 Amplitudes and phase angles of xp(t): (a) normalized amplitudes; (b) phases.
Note that, unlike the free-decay vibration caused purely by initial velocity or initial displacement (or both), which exists without any other conditions, the transient part of a particular solution cannot exist without the presence of a steady-state solution.
6.2.1.4.1.1 Transfer Function When all the initial conditions are zero, the fol-
lowing Laplace transforms (Pierre-Simon Laplace, 1749–1827) exist:
$$\mathcal{L}[x(t)] = X(s) \tag{6.109}$$

and

$$\mathcal{L}[\dot x(t)] = sX(s) \tag{6.110}$$

Furthermore,

$$\mathcal{L}[\ddot x(t)] = s^2X(s) \tag{6.111}$$

as well as

$$\mathcal{L}[f(t)] = F(s) \tag{6.112}$$

Here, the Laplace variable is

$$s = \sigma + j\omega \tag{6.113}$$
and, as a result,

$$(ms^2 + cs + k)X(s) = F(s)$$

or

$$H(s) = \frac{X(s)}{F(s)} = \frac{1}{ms^2 + cs + k} \tag{6.117}$$
Letting s = jω,

$$H(j\omega) = \frac{X(\omega)}{F(\omega)} = \frac{1}{m(j\omega)^2 + c(j\omega) + k} = \frac{1}{-m\omega^2 + j\omega c + k} \tag{6.118}$$

which can be normalized as

$$H(j\omega) = \frac{1}{k}\cdot\frac{1}{-\dfrac{m}{k}\omega^2 + \dfrac{c}{k}(j\omega) + 1} = \frac{1}{k}\cdot\frac{1}{1 - r^2 + 2j\zeta r} = \frac{1}{k}\beta_D e^{j\varphi} \tag{6.119}$$
This results in

$$H(j\omega) = \frac{1}{k}\beta_D e^{j\varphi} \tag{6.120}$$

with magnitude

$$|H(j\omega)| = \frac{1}{k}\beta_D \tag{6.121}$$

and

$$\angle H(j\omega) = \varphi \tag{6.122}$$
The transfer function and the frequency response function are important concepts
in describing the dynamic behavior of a linear vibration system.
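Equations 6.118 through 6.121 can be confirmed numerically: the magnitude of H(jω) computed from the physical parameters must equal βD/k. A Python sketch (our function name, arbitrary parameter values):

```python
import math

def H(w, m, c, k):
    """Frequency response function H(jw) = 1/(-m w^2 + j w c + k) (Eq. 6.118)."""
    return 1.0 / complex(k - m * w * w, c * w)

m, c, k = 10.0, 15.0, 2000.0         # arbitrary m-c-k values
wn = math.sqrt(k / m)
zeta = c / (2 * math.sqrt(m * k))
w = 9.0
r = w / wn
beta_d = 1.0 / math.sqrt((1 - r**2)**2 + (2 * zeta * r)**2)
print(abs(abs(H(w, m, c, k)) - beta_d / k) < 1e-12)   # True (Eq. 6.121)
```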
where xA is the absolute displacement. Let xg be the base displacement and x the displacement of the mass relative to the base. These are related by

$$x_A = x_g + x \tag{6.124}$$

and, after differentiating twice,

$$\ddot x_A = \ddot x_g + \ddot x \tag{6.125}$$

so that the equation of motion becomes

$$m\ddot x + c\dot x + kx = -m\ddot x_g \tag{6.127}$$

In comparing the above equation with Equation 6.76, the term −mẍg can be seen as a forcing function, denoted by

$$f(t) = -m\ddot x_g \tag{6.128}$$
Taking the Laplace transform on both sides of Equation 6.127 with zero initial
conditions results in
$$\left(s^2 + 2\zeta\omega_n s + \omega_n^2\right)X(s) = -s^2X_g(s)$$

The transfer function between the ground displacement and the relative displacement of the base-isolator is

$$H_{Dr}(s) = \frac{X(s)}{X_g(s)} = \frac{-s^2}{s^2 + 2\zeta\omega_n s + \omega_n^2} \tag{6.130a}$$
Here, the first subscript D stands for displacement, whereas the second stands for
relative. In the instance of a frequency response function, it is given by
$$H_{Dr}(\omega) = \frac{X(\omega)}{X_g(\omega)} = \frac{\omega^2 \div \omega_n^2}{\left(-\omega^2 + 2j\zeta\omega\omega_n + \omega_n^2\right)\div\omega_n^2} = \frac{r^2}{1 - r^2 + 2j\zeta r} \tag{6.130b}$$
Note that the transfer function between the ground acceleration and the relative acceleration of the base-isolator, given by Equation 6.131, is identical to Equation 6.130b:

$$H_{Ar}(j\omega) = \frac{\ddot X(\omega)}{\ddot X_g(\omega)} = \frac{\omega^2 X(\omega)}{\omega^2 X_g(\omega)} = \frac{r^2}{1 - r^2 + 2j\zeta r} \tag{6.131}$$
$$\beta_{Ar} = \left|\frac{\ddot x}{\ddot x_g}\right| = \left|\frac{r^2}{1 - r^2 + 2j\zeta r}\right| = \frac{r^2}{\sqrt{(1-r^2)^2 + (2\zeta r)^2}} \tag{6.133}$$

In the above, βAr is the dynamic magnification factor for the relative acceleration ẍ(t) excited by the harmonic base acceleration ẍg(t). Its value is exactly the dynamic magnification factor for the relative displacement x(t) excited by harmonic base displacement xg(t), denoted by βDr. Namely,

$$\beta_{Dr} = \beta_{Ar}$$
Additionally,
$$\Phi = \tan^{-1}\frac{2\zeta r}{1 - r^2} + h_\varphi\pi \tag{6.135}$$

where hϕ is the Heaviside step function given by Equation 6.96. The transfer function between the ground acceleration and the absolute acceleration of the base-isolator is

$$H_{Aa}(s) = \frac{\ddot X_A(s)}{\ddot X_g(s)} = \frac{2\zeta\omega_n s + \omega_n^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}$$
Here, the second subscript a stands for absolute. In terms of the dynamic magnification factor and phase angle for the steady-state solution, let s = jω, which gives

$$\beta_{Aa} = \left|\frac{\ddot x_A}{\ddot x_g}\right| = \sqrt{\frac{1 + (2\zeta r)^2}{(1-r^2)^2 + (2\zeta r)^2}} \tag{6.138}$$

In Equation 6.138, ẍA/ẍg is the ratio of the amplitudes of the absolute and ground accelerations, and
$$\Phi = \tan^{-1}\frac{2\zeta r^3}{1 - (1 - 4\zeta^2)r^2} + h_\Phi\pi \tag{6.139}$$

where

$$h_\Phi = \begin{cases} 0, & 1 - (1-4\zeta^2)r^2 > 0 \\ 1, & 1 - (1-4\zeta^2)r^2 < 0 \end{cases} \tag{6.140}$$
The dynamic magnification factor and phase angle versus frequency ratio are
plotted in Figure 6.15a and b, respectively. Compared with the dynamic magnifica-
tion factors, which can be seen as normalized amplitude of forced responses shown
in Figure 6.13a, we can observe that they both have a similar trend of ups and downs.
However, for the case of base isolation, when the amplitude is reduced from the resonance value toward zero, no matter what the damping ratio is, the curves will pass through unity at exactly r = √2 ≈ 1.4142.
Note that in Equation 6.81, the phase angle for the sine function is sin(ωt + ϕ).
However, for the cases of base excitation, the phase angle for the sine function is
sin(ωt − Φ), namely, with a minus sign.
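The fixed crossover at r = √2 noted above is easy to confirm from the magnitude of Equation 6.138; a Python check over several damping ratios:

```python
import math

def beta_aa(r, zeta):
    """Absolute-acceleration magnification for base excitation (Eq. 6.138)."""
    return math.sqrt((1 + (2 * zeta * r)**2)
                     / ((1 - r**2)**2 + (2 * zeta * r)**2))

# At r = sqrt(2), (1 - r^2)^2 = 1, so numerator and denominator coincide
# and beta_Aa = 1 for every damping ratio.
for zeta in (0.05, 0.30, 0.707, 1.0):
    assert abs(beta_aa(math.sqrt(2.0), zeta) - 1.0) < 1e-12
print("all curves pass through 1 at r = 1.4142")
```

This is why base isolators are designed with natural frequencies well below the driving frequency: only for r > √2 does the absolute acceleration fall below the ground input.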
FIGURE 6.15 Magnitude (a) and phase (b) of absolute acceleration due to ground excitation.
Example 6.8
The dynamic magnification factors are

$$\beta_{Aa} = \sqrt{\frac{1 + (2\zeta r)^2}{(1-r^2)^2 + (2\zeta r)^2}} = 1.0875$$

and

$$\beta_{Dr} = \beta_{Ar} = \frac{r^2}{\sqrt{(1-r^2)^2 + (2\zeta r)^2}} = 2.048$$
Thus, the absolute acceleration is 1.5 g * 1.0875 = 1.631 g and the relative displace-
ment is 14.9 mm * 2.048 = 30.51 mm, neither of which satisfies the requirements.
It is seen that the displacement, 30.51 mm, exceeds the 20-mm limit, and the acceleration, 1.631 g, exceeds the 1.2-g limit. To reduce the acceleration, we need the dynamic magnification factor to satisfy βAa ≤ 1.2/1.5 = 0.8. If we keep the same damping ratio, the new frequency ratio can be calculated from

$$\sqrt{\frac{1 + (2\zeta r)^2}{(1-r^2)^2 + (2\zeta r)^2}} \le 0.8$$
6.2.2.2 Force Transmissibility
Suppose a forcing function f0sin(ωt) is applied on an m-c-k structure; find the ampli-
tude of the steady-state force transferred from the structure to the ground. In Figure
6.11, the dynamic force transferred to the ground, denoted by fg(t), is the sum of
damping and the stiffness forces, namely,
$$f_g(t) = c\dot x + kx \tag{6.143}$$

From the above discussion, the steady-state displacement and velocity are

$$x(t) = \frac{f_0}{k}\beta_D\sin(\omega t + \varphi), \qquad \dot x(t) = \frac{f_0}{k}\beta_D\,\omega\cos(\omega t + \varphi)$$

Therefore,
Denoting the dynamic magnification factor for the force transmissibility as βT, the amplitude of the ground force can be written as

$$f_G = \beta_T f_0 \tag{6.146}$$
where

$$\beta_T = \sqrt{\frac{1 + (2\zeta r)^2}{(1-r^2)^2 + (2\zeta r)^2}} \tag{6.147}$$
and thus
βT = βAa (6.148)
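The identity βT = βAa, together with Equations 6.146 through 6.150, can be cross-checked by computing the ground-force amplitude two ways: once from βT and once from the complex amplitude (k + jcω)x̄. A Python sketch with arbitrary parameters:

```python
import math

m, c, k, f0 = 10.0, 15.0, 2000.0, 100.0    # arbitrary m-c-k system
wn = math.sqrt(k / m)
zeta = c / (2 * math.sqrt(m * k))
w = 10.0
r = w / wn
xbar = f0 / complex(k - m * w * w, c * w)        # complex amplitude (Eq. 6.89)
fg = abs(complex(k, c * w) * xbar)               # |(k + j c w) xbar| (Eq. 6.149)
beta_t = math.sqrt((1 + (2 * zeta * r)**2)
                   / ((1 - r**2)**2 + (2 * zeta * r)**2))
print(abs(fg - beta_t * f0) < 1e-9)              # True: f_G = beta_T f_0
```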
Alternatively, using the complex representation,

$$f_g(t) = k\bar x\,e^{j\omega t} + jc\omega\bar x\,e^{j\omega t} = (k + jc\omega)\bar x\,e^{j\omega t} \tag{6.149}$$

where the complex-valued displacement amplitude x̄ is given by Equation 6.92. As a result, the steady-state ground force can be further written as

$$f_g(t) = \frac{f_0(k + jc\omega)}{k\sqrt{(1-r^2)^2 + (2\zeta r)^2}}\;e^{-j\tan^{-1}\frac{2\zeta r}{1-r^2}}\,e^{j\omega t} \tag{6.150}$$
From the absolute value of the ground force described in Equation 6.150, we can see that the dynamic magnification of the force transmissibility is indeed the one given in Equation 6.147. Furthermore, the phase difference can be written as

$$\Phi = \angle\left\{\frac{f_0(k + jc\omega)}{k\sqrt{(1-r^2)^2 + (2\zeta r)^2}}\,e^{-j\tan^{-1}\frac{2\zeta r}{1-r^2}}\right\} = \angle\left\{(1 + j2\zeta r)\,\beta_D\,e^{-j\tan^{-1}\frac{2\zeta r}{1-r^2}}\right\} = \tan^{-1}\frac{2\zeta r^3}{1 - r^2 + (2\zeta r)^2} + h_\Phi\pi \tag{6.151}$$
where

$$h_\Phi = \begin{cases} 0, & 1 - r^2 + (2\zeta r)^2 > 0 \\ 1, & 1 - r^2 + (2\zeta r)^2 < 0 \end{cases} \tag{6.152}$$
Comparing Equations 6.151 and 6.139, we see that the phase difference of the absolute acceleration excited by the ground and the phase difference of the ground force due to a force applied on the system are identical, just as the corresponding magnification factors are.
6.2.3 Periodic Excitations
6.2.3.1 General Response
Consider now a linear m-c-k system excited by a periodic forcing function f(t) of
period T, with initial conditions x0 and v0. That is,
$$m\ddot x + c\dot x + kx = f(t), \qquad x(0) = x_0, \quad \dot x(0) = v_0 \tag{6.154}$$
The base frequency

$$\omega_T = \frac{2\pi}{T} \tag{6.155}$$

is used to represent the forcing function. Suppose f(t) can be represented by the Fourier series

$$f(t) = \frac{f_{A0}}{2} + \sum_{n=1}^{\infty}\left[f_{An}\cos(n\omega_T t) + f_{Bn}\sin(n\omega_T t)\right] \tag{6.156}$$

where fA0, fAn, and fBn are Fourier coefficients. Because the system is linear, the responses are first considered individually due to the forcing function

$$f_A = \frac{f_{A0}}{2} \tag{6.157}$$
with the initial conditions, denoted by x0(t), and the steady-state responses due to the harmonic forcing terms fAn cos(nωTt) and fBn sin(nωTt), denoted by xan(t) and xbn(t), respectively. The total response is then

$$x(t) = x_0(t) + \sum_{n=1}^{N}\left[x_{an}(t) + x_{bn}(t)\right] = x_0(t) + \sum_{n=1}^{N}x_n(t) \tag{6.160}$$

where

$$x_n(t) = \frac{f_N}{k}\beta_n\sin(n\omega_T t + \varphi_n) \tag{6.162}$$

Here,

$$f_N = \sqrt{f_{An}^2 + f_{Bn}^2} \tag{6.163}$$
(6.163)
is the amplitude of the nth forcing function and the dynamic magnification βn as well
as phase angle ϕn are
$$\beta_n = \frac{1}{\sqrt{\left(1 - \dfrac{n^2\omega_T^2}{\omega_n^2}\right)^2 + \left(2\zeta\dfrac{n\omega_T}{\omega_n}\right)^2}} = \frac{1}{\sqrt{(1 - n^2r^2)^2 + (2\zeta nr)^2}} \tag{6.164}$$

$$\varphi_n = \tan^{-1}\frac{2\zeta nr}{n^2r^2 - 1} + \tan^{-1}\frac{f_{An}}{f_{Bn}} + (h_\varphi + h_\Phi)\pi \tag{6.165}$$
where

$$h_\varphi = \begin{cases} 0, & n^2r^2 > 1 \\ -1, & n^2r^2 \le 1 \end{cases} \tag{6.166}$$

and

$$h_\Phi = \begin{cases} 0, & f_{Bn} \ge 0 \\ 1, & f_{Bn} < 0 \end{cases} \tag{6.167}$$

with the frequency ratio

$$r = \frac{\omega_T}{\omega_n} \tag{6.168}$$
6.2.3.3 Transient Response
Assume that the transient response due to the initial conditions and the force constant fA0 is

$$x_0(t) = e^{-\zeta\omega_n t}\left[A\cos(\omega_d t) + B\sin(\omega_d t)\right] + \frac{f_{A0}}{k} \tag{6.169}$$

The first term in Equation 6.169 is mainly generated by the initial conditions, and the second term is a particular solution due to the step input described in Equation 6.157. It can be proven that the coefficients A and B are

$$A = x_0 - \frac{f_{A0}}{k} - \sum_{n=1}^{N}\frac{f_N}{k}\beta_n\sin\varphi_n \tag{6.170}$$

and

$$B = \frac{1}{\omega_d}\left[v_0 + A\zeta\omega_n - \sum_{n=1}^{N}\frac{f_N}{k}\beta_n\,n\omega_T\cos\varphi_n\right] \tag{6.171}$$
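The steady-state part of the periodic response (Equations 6.160 through 6.165) is simply a superposition of harmonic solutions. Rather than tracking the Heaviside terms of Equation 6.165, a sketch can carry each harmonic through the complex frequency response function; the function name and the complex bookkeeping are ours:

```python
import cmath
import math

def periodic_steady_state(t, fA0, fA, fB, wT, m, c, k):
    """Steady-state response to the Fourier series of Equation 6.156.

    fA[i], fB[i] are the coefficients for n = i + 1; each harmonic
    a cos(w t) + b sin(w t) = Re[(a - j b) e^{j w t}] is passed
    through H(jw) = 1 / (k - m w^2 + j c w).
    """
    x = fA0 / 2.0 / k                    # static deflection from the mean force
    for i, (a, b) in enumerate(zip(fA, fB)):
        w = (i + 1) * wT
        Hjw = 1.0 / complex(k - m * w * w, c * w)
        x += (Hjw * complex(a, -b) * cmath.exp(1j * w * t)).real
    return x

# mean force alone: response is the static deflection fA0/(2k)
print(round(periodic_steady_state(0.0, 4000.0, [], [], 4.0, 10.0, 15.0, 2000.0), 6))  # 1.0
```

With a single sine harmonic, the amplitude of the result matches f0|H(jω)|, i.e., (f0/k)βn of Equation 6.162.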
6.3.1 Impulse Responses
When a very large force is applied to an SDOF system for a very short duration, the excitation can be seen as an impulse process. This simple excitation can be modeled by a delta function multiplied by the amplitude of the impulse, which is the foundation of the study of arbitrary excitations.
The impulse-momentum relationship gives

$$f_0 = f(t)\Delta t = mv_0 - 0 \tag{6.172}$$

Thus,

$$v_0 = \frac{f(t)\Delta t}{m} = \frac{f_0}{m} \tag{6.173}$$

Then, the effect of an impulse applied to the SDOF m-c-k system is identical to the case of free vibration with zero initial displacement and an initial velocity equal to that described in Equation 6.62, with ẋ(0) = v0 = f0/m and x(0) = 0. Furthermore, with unit impulse f0 = 1, from Equation 6.71 the amplitude d is

$$d = \frac{1}{m\omega_d}$$

and the phase angle is ϕ = 0, so that

$$x(t) = \frac{1}{m\omega_d}e^{-\zeta\omega_n t}\sin(\omega_d t) \tag{6.174}$$
This expression is quite important. A special notation is used to represent this unit impulse response, denoted by h(t). Thus, let

$$h(t) = x(t)\Big|_{\text{unit impulse, zero initial conditions}} = \frac{1}{m\omega_d}e^{-\zeta\omega_n t}\sin(\omega_d t) \tag{6.175}$$

where the quantity h(t) is known as the unit impulse response function. Generally, when f0 ≠ 1, substituting Equation 6.175 into Equation 6.172 results in

$$x(t) = \int_0^t f(\tau)h(t - \tau)\,d\tau \tag{6.178}$$
Figure 6.16 graphically shows the essence of Equation 6.178. With the help of Figure 6.16a, consider a special instant t = ti. The amplitude of the force f(ti) can be seen as the result of the sampling effect of the delta function δ(t − ti) (see Figure 6.16b). That is, a response will occur starting from time ti onward (Figure 6.16c); it can be regarded as a unit impulse response times the amplitude of the instantaneous force, namely [f(ti)Δt][h(t − ti)]. However, before the instant ti, we already had many other impulse responses, each of which can be seen as arising from an impulse occurring at ti−1, ti−2, …, 0. Thus, the total response can be regarded as the sum of all of these impulse responses. Note that the consideration does not end at ti; it lasts until time t (Figure 6.16d), where one additional response at ti+1 is shown. We then have the summation of those impulse responses, which are functions of ti, with ti running from 0 to t. Letting Δt → dt, the summation becomes an integral, and we thus have Equation 6.178.
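The limiting process just described is straightforward to implement: sample the force, multiply by shifted unit impulse responses, and sum. A Python sketch of the convolution of Equation 6.178 using a simple rectangle rule (the function names and the test force are ours):

```python
import math

def impulse_response(t, m, c, k):
    """Unit impulse response h(t) of Equation 6.175."""
    if t < 0.0:
        return 0.0
    wn = math.sqrt(k / m)
    zeta = c / (2 * math.sqrt(m * k))
    wd = wn * math.sqrt(1.0 - zeta**2)
    return math.exp(-zeta * wn * t) * math.sin(wd * t) / (m * wd)

def convolution_response(f, t, m, c, k, dt=1e-3):
    """x(t) = integral_0^t f(tau) h(t - tau) dtau (Equation 6.178)."""
    n = int(t / dt)
    return dt * sum(f(i * dt) * impulse_response(t - i * dt, m, c, k)
                    for i in range(n))

# a suddenly applied constant force settles to the static deflection f0/k
x = convolution_response(lambda tau: 100.0, 10.0, 10.0, 15.0, 2000.0)
print(round(x, 3))   # 0.05
```

Once the transient has decayed, the convolution of a constant force recovers the static deflection f0/k, a simple check on both h(t) and the quadrature.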
Additionally, Equation 6.178 can be rewritten as

$$x(t) = \int_{-\infty}^{t} f(\tau)h(t-\tau)\,d\tau = \int_{-\infty}^{\infty} f(\tau)h(t-\tau)\,d\tau \tag{6.179}$$
FIGURE 6.16 Convolution integral. (a) impulse; (b) forcing function; (c) impulse response;
(d) additional impulse and response.
and

$$x(t) = \int_{-\infty}^{\infty} f(t-\tau)h(\tau)\,d\tau \tag{6.180}$$

For a harmonic force f(t) = sin(ωt), t ≥ 0, this becomes

$$x(t) = \frac{e^{-\zeta\omega_n t}}{m\omega_d}\int_{-\infty}^{\infty} e^{\zeta\omega_n\tau}\sin[\omega_d(t-\tau)]\sin(\omega\tau)\,d\tau \tag{6.182}$$
Note that the response x(t) is due to the forcing function only; that is, it is xp(t). Furthermore, we can write

$$x_p(t) = \frac{e^{-\zeta\omega_n t}}{m\omega_d}\int_0^t e^{\zeta\omega_n\tau}\sin[\omega_d(t-\tau)]\sin(\omega\tau)\,d\tau \tag{6.183}$$
Evaluating Equation 6.183, we obtain the solution (see Equation 6.78) repeated as follows: xp(t) = xps(t) + xpt(t). Whereas xps(t) is the steady-state response for the particular solution xp(t) described above, the transient response xpt(t) can be calculated as

$$x_{pt}(t) = \frac{f_0\,r\beta_D}{k\sqrt{1-\zeta^2}}\,e^{-\zeta\omega_n t}\sin(\omega_d t + \theta_t) \tag{6.185}$$

where

$$\theta_t = \tan^{-1}\frac{2\zeta\sqrt{1-\zeta^2}}{2\zeta^2 + r^2 - 1} + h_\theta\pi \tag{6.186a}$$
with

$$h_\theta = \begin{cases} 0, & 2\zeta^2 + r^2 - 1 > 0 \\ 1, & 2\zeta^2 + r^2 - 1 < 0 \end{cases} \tag{6.186b}$$
Note that in Equation 6.185, generally speaking, xpt(t) ≠ 0. This implies that even under zero initial conditions and a zero initial force (because sin(ω·0) = 0), we still have a nonzero transient response. Equation 6.185 is exactly the same as the formula obtained through Equations 6.108 and 6.186a, which is equivalent to Equation 6.107a. However, based on the method of convolution, we can obtain a complete solution; through the semidefinite method, we do have a solution, but we cannot prove that it is the only one.
Example 6.9
Reconsider the above-mentioned example of the m-c-k system with mass = 10 kg, c = 15 N·s/m, and k = 2000 N/m, excited by a harmonic force f1(t) = 100 sin(4t), now with initial conditions x0 = 2 m and v0 = 1 m/s; calculate and plot the displacement response. If the excitation changes to f2(t) = 100 sin(14t), how does the response change accordingly?
The natural frequency and damping ratio are calculated to be 14.14 rad/s and 0.05, respectively. We can then calculate the corresponding parameters, where the subscripts 1 and 2 stand for the cases of f1(t) and f2(t):
r1 = 0.2828, r2 = 0.9899
With the above parameters, the transient solutions for the particular responses
can be computed. Furthermore, in the previous example, the steady-state
responses were calculated (Figure 6.12). Therefore, the total particular responses
xp(t) = xpt(t) + xps(t) can also be calculated.
The results are plotted in Figure 6.17a with driving frequency = 4 rad/s, and
in Figure 6.17b with driving frequency 14, where the solid lines are the transient
responses and the broken lines are the total particular responses. Compared with the steady-state responses of Figure 6.12, it is seen that with the transient portions included, the total response can be rather different.
Furthermore, considering the initial conditions v0 = 1 m/s and x0 = 2 m, we can calculate the homogeneous solution xh(t) based on Equation 6.62, which is not affected by the forcing function because f1(0) = f2(0) = 0. Including the homogeneous solution, the total responses are plotted in Figure 6.18a with driving frequency = 4 rad/s and in Figure 6.18b with driving frequency = 14 rad/s.
Comparing the total responses with the particular solutions shown in Figure 6.17 and with the steady-state ones shown in Figure 6.12, we can see the differences
FIGURE 6.17 Particular responses: (a) driving frequency = 4 rad/s; (b) driving frequency =
14 rad/s.
FIGURE 6.18 Total forced responses: (a) f1(t) = 100sin(4t); (b) f 2(t) = 100sin(14t).
once more. In the course of deterministic vibration, often the transient responses,
caused by both the initial conditions and the driving force, are ignored. This is
because when the time is sufficiently long, the transient responses will die out.
Only the steady-state response remains. However, in the case of random excita-
tion, the transient portion of the responses should be carefully examined because
with a random excitation, the transient portion will not die out.
$$H(s) = \frac{X(s)}{F(s)} = \frac{1}{ms^2 + cs + k} \tag{6.188}$$
Thus, it can be stated that the unit impulse response function and the transfer function form a Laplace pair. Generally speaking, for a harmonic excitation, when the response reaches steady state, let s = jω; the unit impulse response function and the frequency response function also become a Fourier pair:

$$h(t) = \frac{e^{-\zeta\omega_n t}}{m\omega_d}\sin(\omega_d t) \tag{6.194}$$
Then, for f(t) = e^{jωt},

$$x(t) = f(t)*h(t) = \int_{-\infty}^{\infty} h(\tau)f(t-\tau)\,d\tau = \int_{-\infty}^{\infty} h(\tau)e^{j\omega(t-\tau)}\,d\tau = e^{j\omega t}\int_{-\infty}^{\infty} h(\tau)e^{-j\omega\tau}\,d\tau \tag{6.196}$$

Because

$$\int_{-\infty}^{\infty} h(\tau)e^{-j\omega\tau}\,d\tau = \mathcal{F}[h(t)] = H(\omega) \tag{6.197}$$
For a general response, taking the Fourier transform on both sides of Equation 6.181 gives

$$x(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(\omega)H(\omega)e^{j\omega t}\,d\omega \tag{6.201}$$
For the steady-state harmonic response,

$$\dot x(t) = j\omega x(t), \qquad \ddot x(t) = -\omega^2 x(t)$$

and, accordingly,

$$x(t) = \frac{1}{-m\omega^2 + j\omega c + k}\,e^{j\omega t} = H(\omega)e^{j\omega t}$$

Therefore,

$$H(\omega) = x(t)e^{-j\omega t}\Big|_{f(t) = e^{j\omega t}} = \frac{x(t)}{f(t)} \tag{6.202}$$
$$H(\omega) = \frac{X(\omega)}{F(\omega)}\bigg|_{\text{steady-state response}} \tag{6.203}$$
In the literature, the transfer function obtained through the steady-state response
of harmonic excitation is specifically defined as the frequency response function.
Mathematically, obtaining the frequency response function is equivalent to replacing
s with jω in the generic form of the transfer function.
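This equivalence can be illustrated numerically. The sketch below (Python; the values of m, c, and k are illustrative assumptions, not from the text) evaluates the SDOF transfer function of Equation 6.188 at s = jω to obtain the frequency response function:

```python
import numpy as np

# Illustrative SDOF parameters (assumed, not from the text)
m, c, k = 1.0, 0.4, 100.0
wn = np.sqrt(k / m)          # natural frequency, rad/s

def frf(w):
    """Frequency response function: H(s) of Equation 6.188 with s = j*omega."""
    s = 1j * w
    return 1.0 / (m * s**2 + c * s + k)

print(abs(frf(0.0)))                    # static value, 1/k
print(abs(frf(wn)) / abs(frf(0.0)))     # amplification at resonance, k/(c*wn)
```

The static value 1/k and the resonance amplification k/(c ω_n) follow directly from the closed-form H(ω).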
In this instance, the response X(s) can be seen as the input F(s) being transferred
through H(s). Because Borel’s theorem does not specify the input force, F(s) can be
any forcing function for which a Fourier transform exists.
H(s) = X(s)/F(s)  (6.206)
H(s) = H(m, c, k) = 1/(ms² + cs + k)  (6.207)
F(s) = X(s)/H(s)  (6.208)
The inverse problem will be discussed in further detail in Chapter 9.
Problems
1. A system is excited by an initial displacement x0 = 5 cm only, as shown in Figure P6.1.
Find (a) the natural period, (b) the natural frequency in hertz (fn) and in radians per second (rad/s), (c) the mass in kilograms, (d) the damping ratio ζ, (e) the damping coefficient c, and (f) the stiffness k (take g = 10 m/s²).
2. An SDOF system is shown in Figure P6.1. Suppose the base has a peak ground acceleration ẍg = 0.6 g at a 2.0-Hz driving frequency; the natural frequency is 2.5 Hz and the damping ratio is 0.064.
Find (a) the dynamic magnification factor for the absolute acceleration in the base excitation problem, (b) the amplitude of absolute acceleration,
[Figure P6.1 shows an SDOF system of weight W = 2500 N under base motion ẍg, with free-body forces fm = −mẍ, fk = kx, and fc = cẋ, together with its free-decay displacement history (in centimeters) versus time (in seconds).]
FIGURE P6.1
(c) the dynamic magnification factor for relative displacement, (d) the
amplitude of relative displacement, and (e) compute the amplitude of the
ground force fc + f k.
3. A structure is shown in Figure P6.2 with mass = 12 kg; the structure itself is weightless.
a. Determine the damping ratio and stiffness
b. The system is excited by vertical base motion and amplitude = A; the
driving frequency = 6 Hz. The absolute displacement is measured to be
6.1 mm. Find the value of A.
4. An SDOF structure has weight = 6000 lb, damping ratio = 0.08, and stiff-
ness is k with natural period = 1.15 seconds. The structure has a ground
excitation with amplitude = 0.25 g and driving period is 1.80 seconds. Find
the amplitude of the relative displacement.
[Figure P6.2 shows the weightless structure carrying the 12-kg mass, together with its measured free-decay displacement history (in inches) versus time (in seconds).]
FIGURE P6.2
[Figure P6.3 shows the forcing function F(t), with levels of 90 N and 50 N over the interval from 0 to 5 s.]
FIGURE P6.3
5. An SDOF system with mass = 1.5 kg, c = 8 N·s/m, and k = 120 N/m is excited by the forcing function shown in Figure P6.3. The initial conditions are v0 = −1.5 m/s and x0 = 0.05 m. Calculate the displacement.
6. If the relationship between the log decrement δ and the damping ratio ζ is approximated as δ = 2πζ, what values of ζ can be used if the allowable error is 12%?
7. If the amplitude of response of a system under bounded input grows into
infinity, the system is unstable. Consider an inverted pendulum as shown in
Figure P6.4, where k1 = 0.7 k, both installed at exactly the middle point of
the pendulum. Find the value of k such that the system becomes unstable.
Assume that the damper c is installed on the pendulum parallel to the two
springs. How does this affect the stability properties of the pendulum?
8. In Problem 7, m = 12 kg, ℓ = 2.1 m, k = 110 kN/m, c = 1.1 kN·s/m, and the bar is massless. Suppose the initial angle θ is 2.0°; calculate the response.
9. Consider the system in Figure P6.5; write the equation of motion and calculate the response, assuming that the system is initially at rest, for slope angle = 30°, k = 1200 N/m, c = 95 N·s/m, and m = 42 kg. The amplitude of the vertical force f(t) is 2.1 N with driving frequency = 1.1 Hz.
[Figure P6.4 shows the inverted pendulum of length ℓ with mass m at its tip, damper c, and springs k1 and k attached at height 0.5ℓ.]
FIGURE P6.4
[Figure P6.5 shows a mass m on an incline, restrained by spring k and damper c, under the vertical force f(t).]
FIGURE P6.5
10. A mechanism is modeled as shown in Figure P6.6, with k = 3400 N/m, c = 80 kg/s, and m = 45 kg; the ground has a motion along the 45° line with displacement xg(t) = 0.06 cos(πt) m. Compute the steady-state vertical responses of both relative displacement and absolute acceleration, assuming the system starts from a horizontal position. Assume the rotation angle is small.
FIGURE P6.6
7 Response of SDOF Linear Systems to Random Excitations
In Chapter 6, the linear single-degree-of-freedom (SDOF) system is discussed in terms
of deterministic forcing functions. In this chapter, random excitations will be consid-
ered. That is, the vibration response is a random process. Analyses in both the time and
frequency domain discussed in Chapters 3 and 4 will be applied here. In addition, a
special random process of the time series will also be described for response modeling.
In this chapter and in Chapter 8, which is concerned with multi-degree-of-freedom
(MDOF) vibrations, we focus on linear systems. In Chapter 11, the general concept
of nonlinear vibration and selected nonlinear systems will be introduced. General
references can be found in Schueller and Shinozuka (1987), Clough and Penzien
(1993), Wirsching et al. (2006), Chopra (2003), and Liang et al. (2012).
7.1 STATIONARY EXCITATIONS
The simplest cases of random excitations occur when the forcing process is stationary; specifically, a weakly stationary process is used.
mẍ(t) + cẋ(t) + kx(t) = f(t),  ẋ(0) = v0,  x(0) = x0  (7.1)

x(0) = 0  (7.2)

ẋ(0) = 0  (7.3)
335
336 Random Vibration
X(t) = ∫_{−∞}^{∞} f(t − τ) h(τ) dτ  (7.4)
In this chapter, unless specifically stated otherwise, f(t) is a realization of a random process.
where

τ = t − s  (7.8)
7.1.1.5 Response
Given f(t), x(t) is also a realization of a stationary random process. Equation 7.4 can be seen as the case of a random process passing through an SDOF linear system.
μ_X(t) = E[X(t)] = E[ ∫_{−∞}^{∞} f(t − τ) h(τ) dτ ] = ∫_{−∞}^{∞} E[f(t − τ)] h(τ) dτ = μ_F ∫_{−∞}^{∞} h(τ) dτ  (7.9)
Response of SDOF Linear Systems to Random Excitations 337
Because

∫_{−∞}^{∞} h(τ) dτ = H(0)  (7.11b)
and

H(ω)|_{ω→0} = [1/(−mω² + jcω + k)]|_{ω→0} = 1/k  (7.13)

therefore,

μ_X = μ_F/k  (7.14)
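This result can be checked numerically. The sketch below (Python; the parameter values and the simple noisy input are illustrative assumptions, not from the text) pushes a stationary input with mean μ_F through the unit impulse response of Equation 6.194 via the discretized convolution of Equation 7.4, and compares the steady-state mean of the response with μ_F/k:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative SDOF parameters (assumed, not from the text)
m, c, k = 1.0, 1.0, 50.0
wn = np.sqrt(k / m)
zeta = c / (2 * m * wn)
wd = wn * np.sqrt(1 - zeta**2)

dt = 0.01
t = np.arange(0.0, 60.0, dt)
h = np.exp(-zeta * wn * t) * np.sin(wd * t) / (m * wd)   # Equation 6.194

mu_F = 3.0                                  # mean of the stationary input
f = mu_F + rng.normal(0.0, 1.0, t.size)     # one realization of f(t)

x = np.convolve(f, h)[: t.size] * dt        # discretized Equation 7.4
mu_X = x[2000:].mean()                      # steady-state sample mean

print(mu_X, mu_F / k)                       # both close to mu_F/k = 0.06
```

The agreement reflects Equation 7.14: the mean simply passes through the static gain H(0) = 1/k.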
7.1.3.1 Autocorrelation
The autocorrelation function is given as
R_X(t, s) = E[X(t)X(s)]
= E[ ∫_{−∞}^{t} f(t − u) h(u) du ∫_{−∞}^{s} f(s − v) h(v) dv ]
= ∫_{−∞}^{t} du ∫_{−∞}^{s} E[f(t − u) f(s − v)] h(u) h(v) dv  (7.15)
= ∫_{−∞}^{t} du ∫_{−∞}^{s} R_F(t − u, s − v) h(u) h(v) dv,  t, s ≥ 0
7.1.3.2 Mean Square
In this section, we consider mean square values in general cases and in stationary
processes.
E[X²(t)] = R_X(t, t) = ∫_{−∞}^{t} du ∫_{−∞}^{t} R_F(t − u, t − v) h(u) h(v) dv,  t ≥ 0  (7.16)
R_X(t, s) = ∫_{−∞}^{t} du ∫_{−∞}^{s} R_F(s − t − (u − v)) h(u) h(v) dv,  t, s ≥ 0  (7.17b)
the variance, which is also the mean square value for a zero-mean process, is
σ²_X(t) = R_X(t, t) = ∫_{−∞}^{t} du ∫_{−∞}^{t} R_F(u − v) h(u) h(v) dv,  t ≥ 0  (7.18)
It is observed that

lim_{t→∞} σ²_X(t) / [W0/(4kc)] = 1  (7.19)

when

t, s → ∞  (7.20)
By denoting
τ = s − t (7.21)
then
R_X(τ) = ∫_{−∞}^{∞} du ∫_{−∞}^{∞} R_F(τ − (u − v)) h(u) h(v) dv  (7.22)
Equation 7.20 expresses the practical condition for treating the process as stationary, namely that the times t and s are sufficiently long.
Furthermore, the mean square value in this case is
E[X(t)²] = R_X(0) = ∫_{−∞}^{∞} du ∫_{−∞}^{∞} R_F(u − v) h(u) h(v) dv  (7.23)
Example 7.1
In the previous discussion, zero initial conditions were assumed. In this example, we assume random initial conditions. From Chapter 6, we know that without a forcing function we have free-decay vibration, with the response given by Equation 6.62. With the given initial conditions (see Equation 7.1), we can solve for the coefficients d1 and d2 as
d1 = x0
and
d2 = (v0 + ζω_n x0)/ω_d

Therefore, we have

x(t) = e^{−ζω_n t} [ x0 cos(ω_d t) + ((v0 + ζω_n x0)/ω_d) sin(ω_d t) ]

and furthermore

x(t) = e^{−ζω_n t} x0 [ cos(ω_d t) + (ζ/√(1 − ζ²)) sin(ω_d t) ] + e^{−ζω_n t} (v0/ω_d) sin(ω_d t)
Suppose the initial conditions X0 and V0 are random variables. We can examine
the statistical properties of the free-decay response. For convenience, in the fol-
lowing examples, we use lowercase letters to denote random variables.
μ_X(t) = E[X(t)] = e^{−ζω_n t} { E(X0) [ cos(ω_d t) + (ζ/√(1 − ζ²)) sin(ω_d t) ] + [E(V0)/ω_d] sin(ω_d t) }

that is,

μ_X(t) = μ_X0 e^{−ζω_n t} [ cos(ω_d t) + (ζ/√(1 − ζ²)) sin(ω_d t) ] + μ_V0 (e^{−ζω_n t}/ω_d) sin(ω_d t)
R_X(t1, t2) = E[X(t1)X(t2)]
= e^{−ζω_n(t1+t2)} E(X0²) [cos(ω_d t1) + (ζ/√(1 − ζ²)) sin(ω_d t1)] [cos(ω_d t2) + (ζ/√(1 − ζ²)) sin(ω_d t2)]
+ [E(X0V0)/ω_d] e^{−ζω_n(t1+t2)} { [cos(ω_d t1) + (ζ/√(1 − ζ²)) sin(ω_d t1)] sin(ω_d t2) + [cos(ω_d t2) + (ζ/√(1 − ζ²)) sin(ω_d t2)] sin(ω_d t1) }
+ [E(V0²)/ω_d²] e^{−ζω_n(t1+t2)} sin(ω_d t1) sin(ω_d t2)

or, in terms of the central moments of X0 and V0,

R_X(t1, t2) = E[X(t1)X(t2)]
= e^{−ζω_n(t1+t2)} σ²_X0 [cos(ω_d t1) + (ζ/√(1 − ζ²)) sin(ω_d t1)] [cos(ω_d t2) + (ζ/√(1 − ζ²)) sin(ω_d t2)]
+ (σ_X0V0/ω_d) e^{−ζω_n(t1+t2)} { [cos(ω_d t1) + (ζ/√(1 − ζ²)) sin(ω_d t1)] sin(ω_d t2) + [cos(ω_d t2) + (ζ/√(1 − ζ²)) sin(ω_d t2)] sin(ω_d t1) }
+ (σ²_V0/ω_d²) e^{−ζω_n(t1+t2)} sin(ω_d t1) sin(ω_d t2)
Variance

σ²_X(t) = σ²_X0 e^{−2ζω_n t} [ cos(ω_d t) + (ζ/√(1 − ζ²)) sin(ω_d t) ]²
+ (2σ_X0V0/ω_d) e^{−2ζω_n t} [ cos(ω_d t) + (ζ/√(1 − ζ²)) sin(ω_d t) ] sin(ω_d t)
+ (σ²_V0/ω_d²) e^{−2ζω_n t} sin²(ω_d t)
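A Monte Carlo check of the mean formula in this example can be sketched as follows (Python; the statistics assigned to X0 and V0 are illustrative assumptions, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
zeta, wn = 0.05, 2 * np.pi              # illustrative damping ratio and frequency
wd = wn * np.sqrt(1 - zeta**2)

mu0, s0 = 0.02, 0.005                   # assumed mean and std of X0 (m)
muv, sv = 0.10, 0.030                   # assumed mean and std of V0 (m/s)

t = 0.3                                 # evaluation time (s)
X0 = rng.normal(mu0, s0, 200_000)
V0 = rng.normal(muv, sv, 200_000)

# Free-decay response for each sampled initial condition
x = np.exp(-zeta * wn * t) * (
    X0 * np.cos(wd * t) + (V0 + zeta * wn * X0) / wd * np.sin(wd * t))

# Analytic mean from Example 7.1 (same expression with E[X0] and E[V0])
mu_analytic = np.exp(-zeta * wn * t) * (
    mu0 * np.cos(wd * t) + (muv + zeta * wn * mu0) / wd * np.sin(wd * t))

print(x.mean(), mu_analytic)
```

Because the response is linear in X0 and V0, the sample mean converges to the analytic expression regardless of the distributions chosen.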
S_X(ω) = ∫_{−∞}^{∞} R_X(τ) e^{−jωτ} dτ = ∫_{−∞}^{∞} [ ∫_{−∞}^{∞} du ∫_{−∞}^{∞} R_F(τ + u − v) h(u) h(v) dv ] e^{−jωτ} dτ
= ∫_{−∞}^{∞} ∫_{−∞}^{∞} h(u) h(v) [ ∫_{−∞}^{∞} R_F(τ + u − v) e^{−jωτ} dτ ] du dv  (7.24)
θ = τ + u − v (7.25)
Then, we have
τ = θ − u + v (7.26)
S_X(ω) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} h(u) h(v) [ ∫_{−∞}^{∞} R_F(θ) e^{−jωθ} dθ ] e^{jωu} e^{−jωv} du dv
= ∫_{−∞}^{∞} h(u) e^{jωu} du ∫_{−∞}^{∞} h(v) e^{−jωv} dv ∫_{−∞}^{∞} R_F(θ) e^{−jωθ} dθ  (7.27)
= H(−ω) H(ω) S_F(ω)
Therefore,

S_X(ω) = H(−ω) H(ω) S_F(ω) = |H(ω)|² S_F(ω)  (7.28)

Note that Equation 4.113a from Chapter 4 provides the same information as Equation 7.28. Furthermore, with the frequency expressed in units of hertz,

W_X(f) = |H(f)|² W_F(f)  (7.29)
7.1.4.2 Variance
From Equation 7.28, the variance can be written as
σ²_X = ∫_{−∞}^{∞} |H(ω)|² S_F(ω) dω  (7.30)

or, in hertz,

σ²_X = ∫_{−∞}^{∞} |H(f)|² W_F(f) df  (7.31)
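For a white-noise input S_F(ω) = S0, Equation 7.30 can be integrated numerically and compared with the closed-form value πS0/(kc) obtained later in Equation 7.50. A minimal sketch (Python; the parameter values are illustrative assumptions):

```python
import numpy as np

# Illustrative SDOF parameters and white-noise level (assumed)
m, c, k = 1.0, 0.8, 200.0
S0 = 2.5

w = np.linspace(-400.0, 400.0, 2_000_001)
dw = w[1] - w[0]
H2 = 1.0 / ((k - m * w**2) ** 2 + (c * w) ** 2)   # |H(omega)|^2, from Eq. 7.85

var_numeric = np.sum(H2) * S0 * dw                # Equation 7.30, rectangle rule
var_closed = np.pi * S0 / (k * c)                 # Equation 7.50

print(var_numeric, var_closed)
```

The truncation of the integration band is harmless here because |H(ω)|² decays as 1/ω⁴ beyond the resonance.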
In comparing Equation 7.32 with Equation 7.4, both F(t) and X(t) are found to
be generic random processes. From Equation 7.32, it is understood that if F(t) is
Gaussian, then X(t) will also be Gaussian.
1/ω_n² = m/k  (7.36)

c/k = 2ζω_n m/k = 2ζ/ω_n  (7.37)

and

ω/ω_n = r  (7.38)

S_X(ω) = (S0/k²) / [ (1 − r²)² + (2ζr)² ]  (7.39)
7.2.2.2 Variance
In the following, we describe the variance.
H^(n)(ω) = [ B0 + (jω)B1 + (jω)²B2 + … + (jω)^{n−1} B_{n−1} ] / [ A0 + (jω)A1 + (jω)²A2 + … + (jω)^n A_n ]  (7.43)

Note that the number n is determined by the nature of the system; for example, for an SDOF vibration system, n = 2.
Thus, the integral can be expressed as
I(n) = ∫_{−∞}^{∞} |H^(n)(ω)|² dω  (7.44)
It can be calculated as follows. For n = 1, we have

I(1) = π B0² / (A0 A1)  (7.45)

For n = 2,

I(2) = π (A0 B1² + A2 B0²) / (A0 A1 A2)  (7.46)

The coefficients A(.) and B(.) are computed using the equation

H^(2) = [ B0 + (jω)B1 ] / [ A0 + (jω)A1 + (jω)²A2 ] = 1 / (k + jωc − mω²)  (7.47)

so that

B0 = 1, B1 = 0, A0 = k, A1 = c, and A2 = m  (7.48)

which yields

I(2) = π/(kc)  (7.49)
such that

σ²_X = πS0/(kc)  (7.50)

σ²_X = W0/(4kc) = 0.785 fn W0/(k²ζ)  (7.51)
Similarly, for n = 3 and n = 4,

I(3) = π [ A0A3(2B0B2 − B1²) − A0A1B2² − A2A3B0² ] / [ A0A3(A0A3 − A1A2) ]  (7.52)

I(4) = π [ A0B3²(A0A3 − A1A2) + A0A1A4(2B1B3 − B2²) − A0A3A4(B1² − 2B0B2) + A4B0²(A1A4 − A2A3) ] / [ A0A4(A0A3² + A1²A4 − A1A2A3) ]  (7.53)
However, in the resonance region (the region between the half-power points), the excitation PSD can often be treated as nearly constant. Then, the exact variance given in Equation 7.30 can be approximated by that in Equation 7.51, where the constant W0 is used to estimate the exact variance; that is,

σ²_X(exact) ≈ σ²_X(approx) = 0.785 fn W0/(k²ζ)  (7.56)
In Figure 7.1, the broad line is one of the realizations of the response and, in
the resonance region, the variance is close to constant. Notice the frequency band
between the half-power points as illustrated
ΔF = (f2 − f1) = 2ζfn  (7.57)
[Figure 7.1 plots the response PSD magnitude on log–log axes for damping ratios of 0.05 and 0.3.]
7.3 ENGINEERING EXAMPLES
In this section, specific practical applications of random responses are discussed. As
a comparison, typical deterministic excitations will also be discussed.
7.3.1 Comparison of Excitations
First, we review the commonly used excitations.
7.3.1.1 Harmonic Excitation
As mentioned in Chapter 6, the harmonic forcing function can be used to measure
the transfer function of a linear system.
Note that in the real world, only either f(t) = f0cos(ωt) or f(t) = f0sin(ωt) will exist.
The latter can be written as jf(t) = jf0sin(ωt). Thus, Equation 7.58 is a combination of
these two cases, which is used solely for mathematical convenience. Under the forc-
ing function described by Equation 7.58, the response is given by
7.3.1.1.2.1 Forcing Function Consider the case of a forcing function of sine sweep.
f(t) = f0 Σ_{n=0}^{∞} δ(t − nT) sin(nω0 t)  (7.60)

Note that the Fourier transform of the impulse train is

F[ f0 Σ_{n=−∞}^{∞} δ(t − nT) ] = f0 ω0 Σ_{n=−∞}^{∞} δ(ω − nω0)  (7.61)
7.3.1.1.2.2 Waiting Time Let the waiting time be expressed by the variable, T. In
terms of the number of cycles, T can be written as
T = k (2π/ω)  (7.62)
[Sketch: the forcing function f(t) consists of bursts of amplitude f0 at t = 0, T, 2T, …; its spectrum F(ω) consists of lines of height ω0 f0 at ω = 0, ω0, 2ω0, ….]
Suppose the waiting time for the transient response to decay to 10% is to be calculated. Then

e^{−ζωT} = 10%

and

k = −ln(0.1)/(2πζ) = ln(10)/(2πζ)  (7.63)
Practically speaking, because the response due to initial conditions can be consid-
erably smaller than that caused by the force, k, the number of cycles can be smaller.
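A quick calculation of Equation 7.63 can be sketched as follows (Python; the decay fraction and damping values are illustrative):

```python
import math

def waiting_cycles(fraction, zeta):
    """Cycles needed for the transient to decay to the given fraction
    (Equation 7.63, with the sign written explicitly)."""
    return -math.log(fraction) / (2.0 * math.pi * zeta)

# For a 10% residual transient:
for zeta in (0.01, 0.05, 0.10):
    print(zeta, waiting_cycles(0.10, zeta))   # about 7.3 cycles at 5% damping
```

As expected, lightly damped systems require many more waiting cycles between sweep steps.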
S_F(ω) = S0 = f0²  (7.64)
H(jω) = (f0/k) · 1 / [ 1 − (ω/ω_n)² + 2jζ(ω/ω_n) ]
= (f0/k) · 1 / √{ [1 − (ω/ω_n)²]² + [2ζ(ω/ω_n)]² } · exp{ −j tan⁻¹[ 2ζ(ω/ω_n) / (1 − (ω/ω_n)²) ] }  (7.65)
7.3.1.2 Impulse Excitation
Another commonly used method is impulse excitation.
S_F(ω) = S0 = f0²  (7.69)
Mathematically, the transfer function is equal to the Fourier transform of the unit
impulse response function.
H (ω ) = F [h(t )] (7.70)
7.3.1.2.1.2 Real Case In the real world, because the duration of the impact force
cannot be infinitely short, the impact time history is approximated to be close to a
half sine wave. This is shown in Figure 7.3. Specifically written as
f(t) = { f0 sin(πt/T), 0 < t < T;  0, elsewhere }  (7.71)
[Sketch: (a) the ideal half-sine impact f(t) of amplitude f0 and duration T, with its spectrum F(ω) rolling off at ωC; (b) a distorted impact with levels η1 f0 and η2 f0 and its spectrum.]
Its Fourier transform is

F(ω) = f0 (2T/π) · cos(ωT/2) / (1 − ω²T²/π²) · e^{−jωT/2}  (7.72)

with the cutoff frequency

ωC = π/T  (7.73)
A practical issue (referring to the case given in Figure 7.4b) is that the history of the impact force can differ from a half sine wave. For example, the history may contain a so-called "double hit," resulting in

ωC < π/T  (7.74)
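The half-sine spectrum of Equation 7.72 can be verified by direct numerical evaluation of the Fourier integral (Python sketch; the pulse amplitude and duration are illustrative assumptions):

```python
import numpy as np

f0, T = 1.0, 0.02                  # illustrative impact amplitude and duration
dt = T / 4000.0
t = np.arange(0.0, T, dt)
f = f0 * np.sin(np.pi * t / T)     # half-sine impact, Equation 7.71

def F_num(w):
    # direct Riemann-sum evaluation of the Fourier integral over the pulse
    return np.sum(f * np.exp(-1j * w * t)) * dt

def F_closed(w):
    # Equation 7.72; valid away from the removable point w = pi/T
    return (f0 * (2 * T / np.pi) * np.cos(w * T / 2)
            / (1 - (w * T / np.pi) ** 2) * np.exp(-1j * w * T / 2))

for w in (0.0, 0.4 * np.pi / T, 2.0 * np.pi / T):
    print(abs(F_num(w)), abs(F_closed(w)))
```

The two evaluations agree closely; the spectrum is flat near ω = 0 and rolls off beyond the cutoff region around π/T.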
[Sketch: (a) a step force f(t) of duration T; (b) its spectrum F(ω) with cutoff ωC.]
7.3.1.2.2.1 Ideal Case First, consider the idealized case, where the forcing func-
tion is modeled as a Heaviside step function:
F(ω) = F[f(t)] = f0 [ πδ(ω) + 1/(jω) ]  (7.77)
7.3.1.2.2.2 Real Case In the real world, the excitation as described by Equation
7.76 can be simulated with reasonable accuracy. Equation 7.76 is commonly rewrit-
ten as
f(t) = { f0, 0 < t < T;  0, elsewhere }  (7.78)
F(ω) = F[f(t)] = f0 T · [ sin(ωT/2) / (ωT/2) ] · e^{−jωT/2}  (7.79)

ωC = π/T  (7.80)
and
ωC < π/T  (7.81)
It is important to note that the time duration T here is considerably longer than that of an impact force; therefore, the corresponding value of ωC is much smaller. Again, the auto-power spectral density function takes the same constant form as in Equations 7.64 and 7.69.
7.3.1.3 Random Excitation
As a comparison, consider a random forcing function, which is the most general among these commonly used excitations.
SF(ω) = S 0 (7.83)
Readers may consider how the measurement of ∣H(ω)∣2 can be carried out practically.
From Equation 7.35,
S_X(ω)/S0 = |H(ω)|² = 1 / [ (k − mω²)² + (cω)² ]  (7.85)

S_X(0)/S0 = 1/k²  (7.86)

S0 = k² S_X(0)  (7.87)
Readers may consider the question: How would the measurement of a transfer
function be accomplished?
Similarly, the transfer function can be measured through a vibration test. Mathematically speaking,
H(ω) = X(ω)/F(ω)  (7.88)
As mentioned in Chapter 4, Equations 4.122 and 4.123, the transfer function can
be approximated by
H1(ω) = S_FX(ω)/S_F(ω) = S_FX(ω)/S0  (7.89)
or
H2(ω) = S_X(ω)/S_XF(ω)  (7.90)
7.3.1.3.2.1 Bandwidth Recall from Chapter 4 the bandwidth

Δω = ωU − ωL  (7.91)
σ²_F = 2Δω S0  (7.92)

R_F(τ) = σ²_F [ sin(Δωτ/2) / (Δωτ/2) ] cos(ω0τ)  (7.93)

S_F(ω) = { S0, ωL < ω < ωU;  S0/2, ω = ωL, ωU;  0, elsewhere }  (7.94)
Readers may consider the question: In the case of random excitations, why are R_F(τ) and S_F(ω) used in place of F(ω)?
Because the excitation is confined to a limited frequency band, there will be no excitation energy outside the boundaries ωL and ωU. In other words, the estimated transfer function is also band-limited.
7.3.1.4 Other Excitations
Lastly, we consider additional excitations.
7.3.1.4.1.1 Linear Chirp As shown in Figure 7.5, the chirp is formulated as

f(t) = f0 sin[ ω0 (k^t − 1)/ln(k) ]  (7.96)

S_F(ω) = S0/ω  (7.97)
Generally speaking, the PSD for a color noise can also be written as
S_F(ω) = S0/ω^α  (7.98)
By varying the parameter α in Equation 7.98, the color of the noise is varied. For
white noise, α = 0; for pink noise, α = 1; and for red (brown) noise, α = 2. An alterna-
tive way to write the PSD for a special color noise is
S_F(ω) = 2ηS0 / (ω² + η²)  (7.99)
The corresponding random process f(t) can be seen as a transient solution of the first-order differential equation

(d/dt) f(t) = −η f(t) + √(2η) w(t)  (7.101)

where w(t) is a white noise process with auto-PSD S0. That is, the color noise can be seen as the response of a first-order system excited by white noise. The solution can be represented by the convolution

f(t) = ∫_{0}^{∞} e^{−ητ} √(2η) w(t − τ) dτ  (7.102)
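A discrete simulation of this color noise can be sketched as follows (Python; an exact Ornstein–Uhlenbeck-type recursion, scaled to unit variance, is used as a stand-in for the continuous Equation 7.101, with illustrative parameters). The empirical autocorrelation decays as e^{−η|τ|}, the time-domain counterpart of the Lorentzian PSD in Equation 7.99:

```python
import numpy as np

rng = np.random.default_rng(2)
eta = 2.0                      # bandwidth parameter (illustrative)
dt = 0.001
n = 1_000_000

# Exact discretization of d f/dt = -eta*f + sqrt(2*eta)*w(t),
# scaled so the stationary variance is 1
a = np.exp(-eta * dt)
sig = np.sqrt(1.0 - a * a)
e = rng.normal(0.0, 1.0, n)
f = np.empty(n)
f[0] = 0.0
for i in range(1, n):
    f[i] = a * f[i - 1] + sig * e[i]

tau = 0.25
lag = int(tau / dt)
rho = np.mean(f[lag:] * f[:-lag]) / np.var(f)
print(rho, np.exp(-eta * tau))    # both near 0.6
```

Varying η trades bandwidth against correlation time, exactly as the η in Equation 7.99 suggests.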
7.3.2 Response Spectra
7.3.2.1 Response Spectrum
When the forcing function is a random process, even if the bound of the force is known, it is difficult to determine the bound of the response. The bound, or the peak value, of the response is important. The response spectrum is a tool with which to statistically determine the peak value.
In terms of the natural period and damping ratio, the ground excitation can be
rewritten as
ẍ(t) + (4πζ/Tn) ẋ(t) + (4π²/Tn²) x(t) = −ẍg(t)  (7.103)

ẍij(t) + (4πζ/Ti) ẋij(t) + (4π²/Ti²) xij(t) = −ẍgj(t)  (7.104)
where ẍgj(t) is the realization of the jth ground excitation. Thus, Equation 7.104 can be solved to find the response xij(t). Here, the subscript i indicates a system with period Ti.
Denote

x̄i = ⟨xi⟩ + σi = (1/N) Σ_{j=1}^{N} xij + [ (1/N) Σ_{j=1}^{N} xij² − ( (1/N) Σ_{j=1}^{N} xij )² ]^{1/2}  (7.106)
This is illustrated by the line marked "***" in Figure 7.7.
At each period Ti, the mean value ⟨xi⟩ and the standard deviation σi of the peak responses are computed, where the subscript i corresponds to the statistics taken for the ith period Ti. The sum ⟨xi⟩ + σi is taken as the raw statistical response spectral value x̄i.
In Equation 7.106, N signifies the number of records used. Using this method, the number of measurements is not limited, and x̄i, which is a function of Ti, can achieve a reasonable resolution in Ti. Here, Ti is the ith natural period. Note that, for convenience, the subscript n is omitted from the term Tni in the following text.
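The peak-response computation underlying the spectrum can be sketched in code. The following (Python) computes peak relative displacements of SDOF oscillators with several periods under one synthetic ground-acceleration record, using a Newmark average-acceleration integrator as a stand-in for whatever solver the analyst prefers; the record and parameter values are illustrative assumptions, not from the text:

```python
import numpy as np

rng = np.random.default_rng(3)
dt = 0.005
t = np.arange(0.0, 20.0, dt)
ag = rng.normal(0.0, 1.0, t.size)     # synthetic broadband ground acceleration

def peak_disp(Tn, zeta, ag, dt):
    """Peak relative displacement of a unit-mass SDOF oscillator under base
    acceleration ag (Equation 7.104), via the Newmark average-acceleration
    method (a stand-in for the analyst's preferred solver)."""
    wn = 2.0 * np.pi / Tn
    k, c = wn * wn, 2.0 * zeta * wn
    x = v = 0.0
    a = -ag[0] - c * v - k * x
    kh = k + 2.0 * c / dt + 4.0 / dt**2        # effective stiffness
    xmax = 0.0
    for p in -ag[1:]:
        ph = p + (4.0 / dt**2) * x + (4.0 / dt) * v + a + c * (2.0 / dt * x + v)
        xn = ph / kh
        vn = 2.0 * (xn - x) / dt - v
        x, v, a = xn, vn, p - c * vn - k * xn
        xmax = max(xmax, abs(x))
    return xmax

periods = [0.2, 0.5, 1.0, 2.0]
spectrum = [peak_disp(Tn, 0.05, ag, dt) for Tn in periods]
print(spectrum)       # raw spectral ordinates for one record
```

Repeating this over N records and applying Equation 7.106 at each period yields the statistical spectrum.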
7.3.2.2 Design Spectra
Because the combined quantities of xi form a nonsmooth curve, this result is not
convenient to manage. Therefore, further measures should be taken to smooth the
curve. Additional measurements may be achieved from the envelope of all xi or other
[Figure 7.7 plots acceleration (in g) versus period (in seconds); the smoothed curve is the design spectrum SD.]
FIGURE 7.8 Coherence function (acceleration and displacement). (a) Coherence (linear sys-
tem). (b) Correlation between acceleration and displacement.
measures, which are referred to as the design spectra (see Chopra [2003] and Liang et al. [2012], for instance). The design displacement response spectrum is denoted by SD and is shown by the black line in Figure 7.7.
cẋ(t) = 0  (7.107)
we can write

mẍA(t) = −kxR(t)  (7.108)

or

ẍA(t) = −(4π²/Tn²) xR(t)  (7.110)

so that

ẍA(t) ∝ xR(t)  (7.111)
In these cases, we can see that the correlation of ẍA(t) and xR(t) is unity.
However, with larger damping, Equation 7.107 is no longer valid, and the relationship between the peak values of ẍA(t) and xR(t) is no longer unity. In Figure 7.8a and b, we plot the coherence functions versus the natural period Tn for different damping ratios. From Figure 7.8a, where the damping ratio ranges from 0.01 to 0.05, it can be seen that when the period is greater than 1.6 s and the damping ratio approaches 5%, the coherence becomes smaller and smaller. In Figure 7.8b, the damping ratios are chosen from 0.1 to 0.5; when the damping ratio is larger than 10%, the coherence function is even smaller than 50%. In other words,
the peak values of the absolute acceleration and the relative displacement become
uncorrelated.
This indicates that Equations 7.110 and 7.111 will indeed no longer be valid. In this circumstance, the acceleration and the displacement become independent, and there are two design parameters instead of the single parameter of the small-damping case. To understand the physical significance, we further discuss the coherence function in Section 7.4.
7.4 COHERENCE ANALYSES
In vibration analysis and testing, the major error often comes from the transfer func-
tion measurement. In this section, a method to ensure the accuracy of transfer func-
tions is discussed.
H(s) = X(s)/F(s)  (7.112)
In vibration testing, however, we can have two different methods to measure the
transfer functions, estimated through H1(ω) and H2(ω) given by
H1(ω) = S_fx(ω)/S_f(ω)  (7.113)

and

H2(ω) = S_x(ω)/S_xf(ω)  (7.114)
Ŝ_fixi(ω) = Xi(ω, T) Fi(ω, T)*  (7.115)

and

Ŝ_xifi(ω) = Fi(ω, T) Xi(ω, T)*  (7.116)

Ŝ_xixi(ω) = Xi(ω, T) Xi(ω, T)*  (7.117)

Ŝ_fifi(ω) = Fi(ω, T) Fi(ω, T)*  (7.118)
Here, to emphasize the ith measurement and the complex conjugate, the sub-
scripts are written as xixi and fi fi, instead of xi and fi only.
In the above equations, Xi(ω) and Fi(ω) are the Fourier transforms of response and
the force obtained from the ith test.
Furthermore, suppose we have conducted a total of n tests, for which the average of the cross-PSD functions of the measurements is given by

Ŝ_fx ≈ (1/n) Σ_{i=1}^{n} Ŝ_fixi  (7.119)

or, in unbiased form,

Ŝ_fx ≈ (1/(n−1)) Σ_{i=1}^{n} Ŝ_fixi  (7.120)

We also have the average of the cross-PSD functions

Ŝ_xf ≈ (1/n) Σ_{i=1}^{n} Ŝ_xifi  (7.121)

or

Ŝ_xf ≈ (1/(n−1)) Σ_{i=1}^{n} Ŝ_xifi  (7.122)
In addition, the average of the auto-PSD functions of the outputs is given by

Ŝ_x ≈ (1/n) Σ_{i=1}^{n} Ŝ_xixi  (7.123)

or

Ŝ_x ≈ (1/(n−1)) Σ_{i=1}^{n} Ŝ_xixi  (7.124)

and the average of the auto-PSD functions of the inputs is given by

Ŝ_f ≈ (1/n) Σ_{i=1}^{n} Ŝ_fifi  (7.125)

or

Ŝ_f ≈ (1/(n−1)) Σ_{i=1}^{n} Ŝ_fifi  (7.126)
In the sense of statistics, one should use the unbiased averages. However, because the estimated transfer functions below are ratios of PSD functions, the factor n or n − 1 will eventually cancel.
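The averaged-PSD route to the transfer function can be sketched with standard Welch-type estimators (Python; `scipy.signal.welch` and `scipy.signal.csd` perform the segment averaging described above; the simulated SDOF system and parameter values are illustrative assumptions, not from the text):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(4)

# Illustrative SDOF system (assumed values)
m, c, k = 1.0, 2.0, 400.0
wn = np.sqrt(k / m)
zeta = c / (2 * m * wn)
wd = wn * np.sqrt(1 - zeta**2)

fs = 200.0
dt = 1.0 / fs
t = np.arange(0.0, 200.0, dt)
th = np.arange(0.0, 10.0, dt)                # impulse response support
h = np.exp(-zeta * wn * th) * np.sin(wd * th) / (m * wd)

f_in = rng.normal(0.0, 1.0, t.size)          # broadband random excitation
x = np.convolve(f_in, h)[: t.size] * dt      # simulated response

# H1 estimator (Equation 7.89): averaged cross-PSD over averaged input auto-PSD
freq, Pff = signal.welch(f_in, fs=fs, nperseg=2048)
_, Pfx = signal.csd(f_in, x, fs=fs, nperseg=2048)
H1 = Pfx / Pff

# The low-frequency magnitude approaches the static value 1/k
print(abs(H1[1]), 1.0 / k)
```

The same data could be used for the H2 estimator of Equation 7.90 by dividing the averaged output auto-PSD by the averaged cross-PSD.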
Let us consider the input-system-output relationship shown in Figure 7.9. As men-
tioned in Chapter 4, in many cases, the noise M(ω) and N(ω) are not correlated with
the forcing function, F(ω), nor with the response X(ω). Therefore, we can approxi-
mately have
FIGURE 7.9 Noises and transfer function calculation. (a) Input and (b) output with signifi-
cant noise.
Ŝ_f(ω) = [F(ω) + M(ω)][F(ω) + M(ω)]* = F(ω)F(ω)* + M(ω)F(ω)* + F(ω)M(ω)* + M(ω)M(ω)*
= S_f(ω) + S_fm(ω) + S_mf(ω) + S_m(ω) ≈ S_f(ω) [ 1 + S_m(ω)/S_f(ω) ]  (7.129)

and

Ŝ_x(ω) = [X(ω) + N(ω)][X(ω) + N(ω)]* = X(ω)X(ω)* + N(ω)X(ω)* + X(ω)N(ω)* + N(ω)N(ω)*
= S_x(ω) + S_xn(ω) + S_nx(ω) + S_n(ω) ≈ S_x(ω) [ 1 + S_n(ω)/S_x(ω) ]  (7.130)
Ŝ_xf(ω) = [F(ω) + M(ω)][X(ω) + N(ω)]*
= F(ω)X(ω)* + M(ω)X(ω)* + F(ω)N(ω)* + M(ω)N(ω)*
= S_xf(ω) + S_xm(ω) + S_nf(ω) + S_nm(ω) ≈ S_xf(ω)  (7.131)

and

Ŝ_fx(ω) = [X(ω) + N(ω)][F(ω) + M(ω)]*
= X(ω)F(ω)* + N(ω)F(ω)* + X(ω)M(ω)* + N(ω)M(ω)*
= S_fx(ω) + S_fn(ω) + S_mx(ω) + S_mn(ω) ≈ S_fx(ω)  (7.132)
With the help of the above approximations, the estimated transfer function can be
calculated through the averaged results, that is,
Ĥ1(ω) = Ŝ_fx(ω)/Ŝ_f(ω) = H(ω) [ 1 + S_m(ω)/S_f(ω) ]^{−1}  (7.133)
and
Ĥ2(ω) = Ŝ_x(ω)/Ŝ_xf(ω) = H(ω) [ 1 + S_n(ω)/S_x(ω) ]  (7.134)
where Sm(ω) and Sn(ω) are the auto-PSD functions of the input and output noises.
7.4.2 Coherence Function
As introduced in Chapter 4, the coherence function can be seen as a ratio of the transfer function estimates H1 and H2. To measure the coherence function practically, we consider the values of H1 and H2.
At the resonance point,

Ĥ1(ω) = Ŝ_fx(ω)/Ŝ_f(ω) = H(ω) [ 1 + S_m(ω)/S_f(ω) ]^{−1}  (7.135)

It is seen that

Ĥ1(ω)|_resonance < H(ω)

and

Ĥ2(ω) = Ŝ_x(ω)/Ŝ_xf(ω) = H(ω) [ 1 + S_n(ω)/S_x(ω) ] ≈ H(ω)  (7.136)

At the antiresonance point,

Ĥ1(ω) = Ŝ_fx(ω)/Ŝ_f(ω) = H(ω) [ 1 + S_m(ω)/S_f(ω) ]^{−1} ≈ H(ω)  (7.137)

and

Ĥ2(ω) = Ŝ_x(ω)/Ŝ_xf(ω) = H(ω) [ 1 + S_n(ω)/S_x(ω) ]  (7.138)

It is seen that

Ĥ2(ω)|_antiresonance > H(ω)
Now, with the measured data, let us define the coherence function γ²_fx(ω) as follows:

γ²_fx(ω) = Ĥ1(ω)/Ĥ2(ω)  (7.139)

It is seen that

γ²_fx(ω) = |Ŝ_fx(ω)|² / [ Ŝ_x(ω) Ŝ_f(ω) ]  (7.140)
Using the coherence function, we can evaluate the transfer functions by checking the level of γ²_fx(ω). From the above discussion of the values of Ĥ1 and Ĥ2, it can be realized that the higher the value of the coherence function, the better the accuracy.
Generally speaking, if the coherence function is sufficiently close to unity, then the corresponding peak value can belong to a mode. However, in certain cases, a mode is recognized when the coherence function is only greater than 50%.
In Figure 7.10a, a sample transfer function Ĥ1(ω) is plotted. From this plot, we can see that there exist three modes. However, we do not know how well the transfer function is measured unless the corresponding coherence function in Figure 7.10b is examined.
FIGURE 7.10 Measurement accuracy of (a) a transfer and (b) a coherence function.
Third, more measurements can be used to increase the number of averages. It may be found that, if 30 averages are not sufficient, then around 300 averages may be needed.
7.5 TIME SERIES ANALYSIS
Time series analysis can be used to account for measurement noise, and it is an important method in signal processing. In this section, we briefly discuss the modeling of time series and the corresponding key characteristics.
7.5.1 Time Series
The time series is essentially a random process, generated by the response of a discrete-time system excited by white noise processes. There are many publications on time series analysis; readers may consult, for example, Box et al. (1994), Hamilton (1994), Shumway and Stoffer (2011), and especially Ludeman (2003).
7.5.1.1 General Description
Due to noise contamination, the measured vibration signal, whether deterministically or randomly excited, is considered a random process. Because data acquisition turns analog signals into digital signals, the measured time history is in the format of a time sequence; this is one of the major features of a time series.
More importantly, the time series is arranged in the order of occurrence in time; that is, earlier measured data are placed in earlier positions, and so on. Occurrence in order is the second feature of a time series. Ordered sequences often possess strong correlations, so the data points are generally not mutually independent.
Randomness is the third, but most important feature. In the following, we will
focus on randomness. The main purpose of time series analysis is to find the pattern
and corresponding statistical parameters of the sequence. The basic method of the
analysis is to establish a correct model, in most cases, one of the typical models of
time series, and then analyze the model.
Three common applications of time series analysis include
x(n) = −Σ_{k=1}^{p} a_k x(n − k) + Σ_{k=0}^{q} b_k w(n − k),  n ≥ 0, p > q  (7.143)
where both x(n) and w(n) are generic terms, which can be time points of a random process or realizations of a random process.
If w(n) is a stationary white noise series, and to all j and k, we have
E[w(n)] = 0 (7.144)
and
E[w(j)w(k)] = σ²δ_jk  (7.145)

then the output is called an autoregressive moving-average (ARMA) process with p autoregressive terms and q moving-average terms, denoted ARMA(p, q). The general ARMA model was first described in the 1951 thesis of Whittle (see also Box and Jenkins [1971]). Here, δ_jk is the Kronecker delta (Leopold Kronecker, 1823–1891), and σ² is the variance of w(n).
If a_k = 0 for all k = 1, 2, …, p, the output is called a qth-order moving-average (MA) process, denoted MA(q). In this case,

x(n) = Σ_{k=0}^{q} b_k w(n − k)  (7.146)
Similarly, if b_k = 0 for all k = 1, 2, …, q, the output is called a pth-order autoregressive (AR) process, denoted AR(p). In this case,

x(n) = −Σ_{k=1}^{p} a_k x(n − k) + b_0 w(n)  (7.147)
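Equation 7.143 and its special cases can be simulated directly (Python sketch; the coefficients below are hypothetical but chosen so the poles lie inside the unit circle):

```python
import numpy as np

rng = np.random.default_rng(6)

def simulate_arma(a, b, n, rng):
    """Simulate Equation 7.143:
    x(n) = -sum_{k=1..p} a_k x(n-k) + sum_{k=0..q} b_k w(n-k),
    with w(n) zero-mean, unit-variance white noise."""
    p, q = len(a), len(b) - 1
    w = rng.normal(0.0, 1.0, n)
    x = np.zeros(n)
    for i in range(n):
        ar = sum(-a[j] * x[i - 1 - j] for j in range(p) if i - 1 - j >= 0)
        ma = sum(b[j] * w[i - j] for j in range(q + 1) if i - j >= 0)
        x[i] = ar + ma
    return x

# A hypothetical (stable) ARMA(2, 1) model
x = simulate_arma(a=[-1.2, 0.5], b=[1.0, 0.4], n=50_000, rng=rng)
print(x.var())        # finite: the chosen model is stationary
```

Setting all a_k = 0 in the call reproduces an MA(q) process, and keeping only b0 reproduces an AR(p) process.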
In the following, let us first discuss the characteristics of the ARMA process, such as the mean, variance, correlation functions, and probability density function, which are determined by the statistical properties of the input w(n).
We note that if the time variations of w(n) and x(n) are defined over the entire time domain, the corresponding system will have a steady-state output. If they are defined only for n ≥ 0, we will have transient processes.
7.5.2.1.1 Mean
The mean of MA(q) can be calculated as
E[x(n)] = E[ Σ_{k=0}^{q} b_k w(n − k) ] = Σ_{k=0}^{q} b_k E[w(n − k)],  n ≥ 0  (7.148)
Because E[w(n − k)] = 0, we have

μ_X(n) = E[x(n)] = 0
7.5.2.1.2 Variance
Now, consider the variance of MA(q). We can write
σ²_X(n) = E[x²(n)] = E{ [ Σ_{k=0}^{q} b_k w(n − k) ]² }
= E{ [ Σ_{k=0}^{q} b_k w(n − k) ][ Σ_{j=0}^{q} b_j w(n − j) ] }  (7.151)
Considering n ≥ q, we have

σ²_X(n) = Σ_{k=0}^{q} Σ_{j=0}^{q} b_k b_j E[w(n − k)w(n − j)] = σ² Σ_{k=0}^{q} b_k²,  n ≥ q  (7.152)
This is because in E[w(n − k)w(n − j)] the only nonzero terms are obtained when j = k. Also note that when n ≥ q, the variance of w(n) is the constant σ². Now, in the case of 0 ≤ n < q, we must reconsider which terms w(n − k) and w(n − j) are nonzero. In this case, the variance becomes a function of n; that is,
σ²_X(n) = σ² Σ_{k=0}^{n} b_k²,  0 ≤ n < q  (7.153)
That is,

R_X(k + m, k) = E[x(k + m)x(k)]
= E{ [ Σ_{i=0}^{q} b_i w(k + m − i) ][ Σ_{j=0}^{q} b_j w(k − j) ] },  k, m ≥ 0  (7.155)
In the case that k > q and 0 ≤ m ≤ q, the only nonzero expectations in Equation 7.155 are those terms with k + m − g = k − h, that is, with g = h + m. Therefore,

R_X(k + m, k) = Σ_{h=0}^{q−m} b_h b_{h+m} E[w²(k − h)] = σ² Σ_{h=0}^{q−m} b_h b_{h+m},  0 ≤ m ≤ q  (7.159)
Based on the same observation described in Equation 7.158, when m > q,
R X(k + m, k) = 0 (7.160)
It is seen that when k > q, the autocorrelation function of the random process MA(q) is independent of the time point k and is only a function of the time lag m. Therefore, after time point q, because the corresponding mean is zero and therefore constant, the process is weakly stationary.
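Equation 7.159 can be checked empirically on a simulated MA process (Python sketch; the coefficients are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)
b = np.array([1.0, 0.8, -0.5, 0.3])     # hypothetical MA(3) coefficients
q = len(b) - 1
sigma2 = 1.0

n = 500_000
w = rng.normal(0.0, np.sqrt(sigma2), n)
x = np.convolve(w, b)[:n]               # x(n) = sum_k b_k w(n - k), Equation 7.146

# Compare sample autocorrelations with Equation 7.159
for m_lag in range(q + 2):
    r_theory = sigma2 * sum(b[h] * b[h + m_lag] for h in range(q + 1 - m_lag))
    r_sample = np.mean(x[m_lag:] * x[: n - m_lag])
    print(m_lag, r_sample, r_theory)    # r_theory = 0 once m_lag > q
```

The sample values match the closed form, and the autocorrelation vanishes for lags beyond q, consistent with Equation 7.160.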
Note that for w(n), Equation 7.145 must be satisfied. Now, let us consider the
mean, variance, and autocorrelation functions of the autoregressive process AR(p)
given by Equation 7.161.
7.5.2.2.1 Mean
When n < 0, E[x(n)] = 0 and x(n) = 0, which leads to E[x(n − k)] = 0. In addition, E[w(n)] = 0; therefore, the mean of this autoregressive process can be obtained as

E[x(n)] = −Σ_{k=1}^{p} a_k E[x(n − k)] + b_0 E[w(n)] = 0
7.5.2.2.2 Variance
Because the mean is zero, the variance can be calculated as
$$\sigma_X^2(n) = E[x^2(n)] = E\left[\left(-\sum_{k=1}^{p} a_k x(n-k) + b_0 w(n)\right)\left(-\sum_{k=1}^{p} a_k x(n-k) + b_0 w(n)\right)\right] \qquad (7.163)$$
As in the earlier examination of such products, the term x(n − k) occurs before w(n), so that x(n − k) is not a function of w(n); thus,

E[x(n − k)w(n)] = 0
Therefore,
$$E\left[-\sum_{k=1}^{p} a_k x(n-k)\,w(n)\right] = 0 \qquad (7.164)$$
With the help of Equation 7.164, we further write
$$\sigma_X^2(n) = E\left[\left(-\sum_{k=1}^{p} a_k x(n-k)\right)\left(-\sum_{j=1}^{p} a_j x(n-j)\right)\right] + b_0^2\sigma^2 \qquad (7.165)$$

That is,

$$\sigma_X^2(n) = \sum_{k=1}^{p}\sum_{j=1}^{p} a_k a_j R_X(n-k, n-j) + b_0^2\sigma^2 \qquad (7.166)$$
Response of SDOF Linear Systems to Random Excitations 371
7.5.2.2.3 Autocorrelation
Generally speaking, even if a signal has zero initial conditions, its autocorrelation still exhibits a transient process. This can be seen from the existence of the transient solution x_pt(t) in Chapter 6 (see Equation 6.100), although in that case x(t) is deterministic. We now consider the regime in which the transient portion of the autocorrelation function becomes negligible, that is, n > p. In this case, first consider R_X(n, n − 1), given by

$$R_X(n, n-1) = E[x(n)x(n-1)] = E\left[\left(-\sum_{k=1}^{p} a_k x(n-k) + b_0 w(n)\right)x(n-1)\right] \qquad (7.168)$$
Because x(n − 1) is not a function of w(n), and w(n) is white noise, we have
$$R_X(n, n-1) = -\sum_{k=1}^{p} a_k R_X(n-1, n-k) \qquad (7.169)$$
Now, with the same idea, we can multiply x(n − j), j = 2, 3, …, p, on both sides of Equation 7.161 and take the corresponding mathematical expectation, that is, we will have
$$R_X(n, n-j) = -\sum_{k=1}^{p} a_k R_X(n-j, n-k), \quad j = 2, 3, \ldots, p \qquad (7.170)$$
We should also consider the case of R_X(n, n), which is equal to σ_X²(n). From Equation 7.166, it is seen that
$$R_X(n, n) = -\sum_{j=1}^{p} a_j R_X(n, n-j) + b_0^2\sigma^2 \qquad (7.171)$$
Equations 7.169 through 7.171 provide the formulae to calculate the autocorrela-
tion function for time lag from 0 to p. Note that Equation 7.170 will also be valid
when j > p.
Now, when AR(p) has already reached steady state, then the corresponding auto-
correlation Rx(r, s) is only a function of the time lag
j = r − s (7.172)
$$R_X(j) = -\sum_{k=1}^{p} a_k R_X(k-j), \quad j = 1, 2, 3, \ldots, p \qquad (7.173)$$
Combining Equations 7.171 and 7.173 in matrix form, we have

$$\begin{bmatrix}
1 & a_1 & a_2 & \cdots & a_{p-2} & a_{p-1} & a_p \\
a_1 & 1+a_2 & a_3 & \cdots & a_{p-1} & a_p & 0 \\
a_2 & a_1+a_3 & 1+a_4 & \cdots & a_p & 0 & 0 \\
\vdots & & & & & & \vdots \\
a_p & a_{p-1} & a_{p-2} & \cdots & a_2 & a_1 & 1
\end{bmatrix}
\begin{bmatrix} R_X(0) \\ R_X(1) \\ R_X(2) \\ \vdots \\ R_X(p-1) \\ R_X(p) \end{bmatrix}
=
\begin{bmatrix} b_0^2\sigma^2 \\ 0 \\ 0 \\ \vdots \\ 0 \\ 0 \end{bmatrix} \qquad (7.175)$$
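As a numerical illustration (a Python/NumPy sketch with illustrative coefficient values, not taken from the text), the matrix equation above can be solved for an AR(2) process and cross-checked against a long simulation:

```python
import numpy as np

# Solve Equation 7.175 for AR(2): x(n) = -a1*x(n-1) - a2*x(n-2) + b0*w(n).
# For p = 2 the matrix equation reads
#   [1   a1     a2] [R(0)]   [b0^2*sigma^2]
#   [a1  1+a2   0 ] [R(1)] = [0           ]
#   [a2  a1     1 ] [R(2)]   [0           ]
a1, a2, b0, sigma = -0.5, 0.06, 1.0, 1.0   # illustrative, stable choice

A = np.array([[1.0, a1, a2],
              [a1, 1.0 + a2, 0.0],
              [a2, a1, 1.0]])
R = np.linalg.solve(A, np.array([b0**2 * sigma**2, 0.0, 0.0]))

# Cross-check against a simulation of the same process
rng = np.random.default_rng(1)
N = 200_000
w = rng.normal(0.0, sigma, N)
x = np.zeros(N)
for n in range(2, N):
    x[n] = -a1 * x[n - 1] - a2 * x[n - 2] + b0 * w[n]
x = x[1000:]                               # drop the transient
R_sim = np.array([np.mean(x * x),
                  np.mean(x[1:] * x[:-1]),
                  np.mean(x[2:] * x[:-2])])
print(R, R_sim)
```

The solved values R_X(0), R_X(1), R_X(2) match the sample autocorrelations to within Monte Carlo error.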
7.5.2.3 ARMA(p, q)
Now consider the process of ARMA(p, q), which satisfies Equations 7.143 through
7.145. For convenience, Equation 7.145 is replaced by
E(w2(n)) = σ2 (7.176)
7.5.2.3.1 Mean
Taking the mathematic expectation of Equation 7.143, we have
$$E[x(n)] = -\sum_{k=1}^{p} a_k E[x(n-k)] + \sum_{k=0}^{q} b_k E[w(n-k)], \quad n \ge 0 \qquad (7.177)$$
Because E[w(n − k)] = 0 and E[x(n − k)] = 0 for n < 0, we have

E[x(n)] = 0 (7.178)
7.5.2.3.2 Variance
Because the mean is zero, for the variance we can write

$$\sigma_X^2(n) = R_X(n,n) = -\sum_{k=1}^{p} a_k R_X(n, n-k) + \sum_{k=0}^{q} b_k R_{XW}(n, n-k) \qquad (7.180)$$

where R_{XW} is the cross-correlation function between the response and the white noise input. Similarly, multiplying both sides of Equation 7.143 by x(n − j) and taking the mathematical expectation gives

$$R_X(n, n-j) = -\sum_{k=1}^{p} a_k R_X(n-j, n-k) + \sum_{k=0}^{q} b_k E[w(n-k)x(n-j)], \quad j = 2, 3, \ldots, p \qquad (7.181)$$
Equations 7.180 and 7.181 provide formulae to calculate the autocorrelation func-
tion for time lag from 1 to p.
When ARMA(p, q) has already reached steady state, then the corresponding
autocorrelation Rx(r, s) is only a function of the time lag given by Equation 7.172. In
this case, we replace Equations 7.180 and 7.181 by
$$R_X(j) = -\sum_{k=1}^{p} a_k R_X(k-j) + \Phi_j(\mathbf{a}, \mathbf{b}), \quad j = 1, 2, 3, \ldots, p \qquad (7.182)$$
where Φ_j(a, b) is the term of the second summation in Equations 7.180 and 7.181, which is a nonlinear function of

a = [a_1, a_2, …, a_p], b = [b_1, b_2, …, b_q] (7.183)
These relations can again be assembled in matrix form, analogous to Equation 7.175.
Example 7.2
Find the mean, variance, and autocorrelation of the following process ARMA(1, 1):

$$x(n) = -a_1x(n-1) + b_0w(n) + b_1w(n-1)$$
Mean
We can see from Equation 7.178, E[x(n)] = 0.
Autocorrelation Function
With zero mean, the variance at time n is equal to the value of the autocorrelation R_X(n, n). That is,

$$\sigma^2(n) = E[x^2(n)] = R_X(n,n) = E[x(n)\{-a_1x(n-1) + b_0w(n) + b_1w(n-1)\}] = -a_1R_X(n,n-1) + b_0E[x(n)w(n)] + b_1E[x(n)w(n-1)]$$
The right-hand side of the above equation contains two expected values that need to be evaluated. Consider the last one first. Substituting the expression for x(n − 1) and taking the mathematical expectation, we can write

$$\begin{aligned}
E[x(n)w(n-1)] &= E[\{-a_1x(n-1) + b_0w(n) + b_1w(n-1)\}w(n-1)] \\
&= -a_1E[x(n-1)w(n-1)] + b_1\sigma^2 \\
&= -a_1E[\{-a_1x(n-2) + b_0w(n-1) + b_1w(n-2)\}w(n-1)] + b_1\sigma^2 \\
&= (-a_1b_0 + b_1)\sigma^2
\end{aligned}$$

This result holds because x(n − 2) is not a function of w(n − 1), so E[x(n − 2)w(n − 1)] = 0, and because w(n) is white noise, E[w(n − 2)w(n − 1)] = 0.
The other expected value can be calculated as

$$E[x(n)w(n)] = E[\{-a_1x(n-1) + b_0w(n) + b_1w(n-1)\}w(n)] = b_0\sigma^2$$

because x(n − 1) is not a function of w(n) and E[w(n − 1)w(n)] = 0. In the same way,

$$E[x(n-1)w(n-1)] = E[w(n-1)\{-a_1x(n-2) + b_0w(n-1) + b_1w(n-2)\}] = b_0\sigma^2$$
We thus have

$$R_X(n,n) = -a_1R_X(n, n-1) + \left(b_0^2 - a_1b_0b_1 + b_1^2\right)\sigma^2$$
For the autocorrelation function of the steady-state response, setting the time lag j = 0 and j = 1 gives

$$R_X(0) = -a_1R_X(1) + \left(b_0^2 - a_1b_0b_1 + b_1^2\right)\sigma^2$$

and

$$R_X(1) = -a_1R_X(0) + b_0b_1\sigma^2$$

Solving these two equations yields

$$R_X(0) = \frac{\left(b_0^2 - 2a_1b_0b_1 + b_1^2\right)\sigma^2}{1-a_1^2}$$

and

$$R_X(1) = \frac{\left(a_1^2b_0b_1 - a_1b_0^2 - a_1b_1^2 + b_0b_1\right)\sigma^2}{1-a_1^2}$$
Furthermore, for the steady-state ARMA(1, 1) process with time lag j > 1, we can write

$$R_X(j) = -a_1R_X(j-1), \quad j > 1$$
Variance
Because the mean of ARMA(1, 1) is zero, the variance can be obtained as

$$\sigma_X^2(n) = R_X(0) = \frac{\left(b_0^2 - 2a_1b_0b_1 + b_1^2\right)\sigma^2}{1-a_1^2}$$
It is noted that when the orders p and q are greater than unity, it is difficult to write the autocorrelation function and variance in closed form.
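The closed-form expressions of Example 7.2 can be verified by simulation. The following Python/NumPy sketch uses illustrative parameter values (not from the text) and compares R_X(0) and R_X(1) with sample autocorrelations:

```python
import numpy as np

# Check the closed-form steady-state autocorrelations of ARMA(1,1):
# x(n) = -a1*x(n-1) + b0*w(n) + b1*w(n-1). Parameters are illustrative.
a1, b0, b1, sigma = -0.7, 1.0, 0.4, 1.5

R0 = (b0**2 - 2*a1*b0*b1 + b1**2) * sigma**2 / (1 - a1**2)
R1 = (a1**2*b0*b1 - a1*b0**2 - a1*b1**2 + b0*b1) * sigma**2 / (1 - a1**2)

rng = np.random.default_rng(2)
N = 300_000
w = rng.normal(0.0, sigma, N)
x = np.zeros(N)
for n in range(1, N):
    x[n] = -a1 * x[n - 1] + b0 * w[n] + b1 * w[n - 1]
x = x[1000:]                         # steady state only
print(R0, np.mean(x * x))            # variance R_X(0)
print(R1, np.mean(x[1:] * x[:-1]))   # lag-one autocorrelation R_X(1)
```

Both printed pairs agree to within the Monte Carlo sampling error.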
7.5.3.2 Sampling of Signals
With the help of delta functions, a signal in the continuous time domain denoted by x(t) can be sampled using the following treatment:

$$x_d(t) = \sum_{k=0}^{\infty} x(k)\delta(t - k\Delta t) \qquad (7.185)$$

where Δt is the sampling time interval, x(k) is shorthand for x(kΔt), and the subscript d denotes that x_d(t) is in discrete form.
Note that although xd(t) can only have nonzero values at the moment of sampling,
it is still in the continuous time domain.
Taking the Laplace transform of xd(t), we have
$$X_d(s) = \mathcal{L}[x_d(t)] = \mathcal{L}\left[\sum_{k=0}^{\infty} x(k)\delta(t - k\Delta t)\right] = \sum_{k=0}^{\infty} x(k)\mathcal{L}[\delta(t - k\Delta t)] = \sum_{k=0}^{\infty} x(k)e^{-sk\Delta t} \qquad (7.186)$$
Letting

$$z = e^{s\Delta t} \qquad (7.187)$$

we have

$$X_d(s)\Big|_{z=e^{s\Delta t}} = X(z) = \sum_{k=0}^{\infty} x(k)z^{-k} \qquad (7.188)$$
In Equation 7.188, the series X(z) is a function of the variable z. When z = e^{sΔt} (in particular, z = e^{jω}), X(z) is referred to as the z transform and denoted by

$$X(z) = \mathcal{Z}[x(t)] \qquad (7.189)$$

Here, we omit the subscript d because X(z) is obviously a discrete series.
meaning of z will be discussed in Section 7.5.4.3.
The inverse z transform, denoted by Z −1[ X ( z )] can be calculated by
$$x(t) = \mathcal{Z}^{-1}[X(z)] = \frac{1}{2\pi}\int_{-\pi}^{\pi} X(e^{i\omega})e^{i\omega t}\,d\omega \qquad (7.190)$$
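For a concrete feel for Equation 7.188, the following Python sketch (with illustrative values of α and ω, not from the text) evaluates the z transform of the sampled sequence x(k) = α^k, whose geometric series sums to 1/(1 − α/z) for |α/z| < 1:

```python
import numpy as np

# Equation 7.188 for x(k) = alpha^k: the series X(z) = sum_k alpha^k z^{-k}
# is geometric and sums to 1/(1 - alpha/z) for |alpha/z| < 1.
alpha = 0.5
z = np.exp(1j * 0.7)                      # evaluation point z = e^{j*omega}
partial = sum(alpha**k * z**(-k) for k in range(200))
closed = 1.0 / (1.0 - alpha / z)
print(abs(partial - closed))              # truncation error is negligible
```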
Now consider a system described by the difference equation

$$x(n) = -\sum_{k=1}^{p} a_k x(n-k) + \sum_{k=0}^{q} b_k f(n-k), \quad n \ge 0,\ p > q \qquad (7.191)$$

Taking the z transform of both sides gives

$$X(z) = -\sum_{k=1}^{p} a_k z^{-k}X(z) + \sum_{k=0}^{q} b_k z^{-k}F(z), \quad n \ge 0,\ p > q \qquad (7.192)$$
where
F ( z ) = Z[ f (t )] (7.193)
is the z transform of the input forcing function f(t); and X(z) is the z transform of the
output given by Equation 7.192. From Equation 7.192, we can write the transfer func-
tion in the z domain as
$$H(z) = \frac{X(z)}{F(z)} = \frac{\displaystyle\sum_{k=0}^{q} b_k z^{-k}}{\displaystyle 1 + \sum_{k=1}^{p} a_k z^{-k}}, \quad n \ge 0,\ p > q \qquad (7.194)$$
From Equation 7.194, it is seen that the transfer function H(z) described in the z
domain is a rational function of z−1. If the excitation is white noise, then the response
will be a random process ARMA(p, q). Taking the inverse z transform of the transfer function, we can obtain the unit impulse response function, namely,

$$h(n) = \mathcal{Z}^{-1}[H(z)] \qquad (7.195)$$
An interesting case is when

b_k = 0, k = 1, 2, …, q (7.196)

In this case, the transfer function reduces to

$$H(z) = \frac{X(z)}{F(z)} = \frac{b_0}{\displaystyle 1 + \sum_{k=1}^{p} a_k z^{-k}}, \quad n \ge 0,\ p > q \qquad (7.197)$$
From Equation 7.197, we see that if the input to the system is a white noise, then
the output is an autoregressive process AR(p). Another interesting case is when
ak = 0, k = 1, 2, …, p (7.198)
$$H(z) = \sum_{k=0}^{q} b_k z^{-k} \qquad (7.199)$$
In this case, from Equation 7.199, if the input to the system is a white noise, then
the output is a moving-average process MA(q).
7.5.3.4 PSD Functions
7.5.3.4.1 PSD Function of MA(q)
Based on the transfer function given by Equation 7.199, we can calculate the auto-
PSD function for the process of MA(q). That is,
$$S_X(\omega) = S_X(z)\Big|_{z=e^{j\omega}} = H(z)S_F(z)H(z^{-1})\Big|_{z=e^{j\omega}} = \left[\sum_{k=0}^{q} b_k z^{-k}\right]\sigma^2\left[\sum_{k=0}^{q} b_k z^{k}\right]\Bigg|_{z=e^{j\omega}} = \left|\sum_{k=0}^{q} b_k e^{-jk\omega}\right|^2\sigma^2 \qquad (7.200)$$
7.5.3.4.2 PSD Function of AR(p)
Similarly, based on the transfer function given by Equation 7.197, the auto-PSD function of AR(p) is

$$S_X(\omega) = S_X(z)\Big|_{z=e^{j\omega}} = H(z)S_F(z)H(z^{-1})\Big|_{z=e^{j\omega}} = \frac{b_0}{1+\displaystyle\sum_{k=1}^{p} a_k z^{-k}}\,(\sigma^2)\,\frac{b_0}{1+\displaystyle\sum_{k=1}^{p} a_k z^{k}}\Bigg|_{z=e^{j\omega}} = \frac{b_0^2\sigma^2}{\left|1+\displaystyle\sum_{k=1}^{p} a_k e^{-jk\omega}\right|^2} \qquad (7.202)$$
7.5.3.4.3 PSD Function of ARMA(p, q)
For the general transfer function of Equation 7.194, the auto-PSD function is

$$S_X(\omega) = S_X(z)\Big|_{z=e^{j\omega}} = H(z)S_F(z)H(z^{-1})\Big|_{z=e^{j\omega}} = \frac{\displaystyle\sum_{k=0}^{q} b_k z^{-k}}{1+\displaystyle\sum_{k=1}^{p} a_k z^{-k}}\,(\sigma^2)\,\frac{\displaystyle\sum_{k=0}^{q} b_k z^{k}}{1+\displaystyle\sum_{k=1}^{p} a_k z^{k}}\Bigg|_{z=e^{j\omega}} = \frac{\left|\displaystyle\sum_{k=0}^{q} b_k e^{-jk\omega}\right|^2}{\left|1+\displaystyle\sum_{k=1}^{p} a_k e^{-jk\omega}\right|^2}\,\sigma^2 \qquad (7.204)$$
To describe an SDOF system in discrete time, approximate the derivatives by forward differences with sampling interval Δt:

$$\dot{x}(n) = \frac{x(n+1)-x(n)}{\Delta t} \qquad (7.205)$$

$$\ddot{x}(n) = \frac{\dot{x}(n+1)-\dot{x}(n)}{\Delta t} = \frac{x(n+2)-2x(n+1)+x(n)}{\Delta t^2} \qquad (7.206)$$

so that the equation of motion becomes

$$m\,\frac{x(n+2)-2x(n+1)+x(n)}{\Delta t^2} + c\,\frac{x(n+1)-x(n)}{\Delta t} + kx(n) = f(n) \qquad (7.207)$$
Dividing both sides of Equation 7.207 by m/Δt² and rearranging the resulting equation, we can write

$$x(n+2) + \left(-2 + \frac{c\Delta t}{m}\right)x(n+1) + \left(1 - \frac{c\Delta t}{m} + \frac{k\Delta t^2}{m}\right)x(n) = \frac{\Delta t^2}{m}f(n) \qquad (7.208)$$

For base excitation (see Chapter 6), the equation of motion instead takes the form

$$m\,\frac{x(n+2)-2x(n+1)+x(n)}{\Delta t^2} + c\,\frac{x(n+1)-x(n)}{\Delta t} + kx(n) = -m\,\frac{x_g(n+2)-2x_g(n+1)+x_g(n)}{\Delta t^2}$$

or

$$m\,\frac{x(n+2)-2x(n+1)+x(n)}{\Delta t^2} + c\,\frac{x(n+1)-x(n)}{\Delta t} + kx(n) = c\,\frac{x_g(n+1)-x_g(n)}{\Delta t} + kx_g(n)$$
7.5.4.2 ARMA Models
The above difference equations can be written in the form of typical ARMA models. That is, considering the case of the SDOF system excited by the forcing function f(n), Equation 7.208 can be written as

$$x(n) + a_1 x(n-1) + a_2 x(n-2) = b_2 f(n-2) \qquad (7.210)$$

with
$$a_1 = -2 + \frac{c\Delta t}{m} \qquad (7.212a)$$

$$a_2 = 1 - \frac{c\Delta t}{m} + \frac{k\Delta t^2}{m} \qquad (7.212b)$$

and

$$b_2 = \frac{\Delta t^2}{m} \qquad (7.212c)$$
b 0 = 1 (7.214a)
b1 = –2 (7.214b)
and
b2 = 1 (7.214c)
$$b_1 = \frac{c\Delta t}{m} \qquad (7.216a)$$

and

$$b_2 = -\frac{c\Delta t}{m} + \frac{k\Delta t^2}{m} \qquad (7.216b)$$
It is also seen that
Z [x (n − k )] = X (z )z − k (7.217)
7.5.4.3 Transfer Functions
We now consider the transfer function based on the ARMA model for the excitation f(n). Suppose the forcing function is white noise. Taking the z transform of Equation 7.210 gives

$$H(z) = \frac{X(z)}{F(z)} = \frac{b_2 z^{-2}}{1 + a_1z^{-1} + a_2z^{-2}} = \frac{b_2}{z^2 + a_1z + a_2} \qquad (7.219)$$
$$H(z) = \frac{\dfrac{\Delta t^2}{m}}{z^2 + \left(-2 + \dfrac{c\Delta t}{m}\right)z + 1 - \dfrac{c\Delta t}{m} + \dfrac{k\Delta t^2}{m}} = \frac{\Delta t^2}{m}\,\frac{1}{z^2 + (-2 + 2\zeta\omega_n\Delta t)z + 1 - 2\zeta\omega_n\Delta t + \omega_n^2\Delta t^2} \qquad (7.220)$$
Consider the poles of H(z), which are the zeros of the denominator in Equation 7.220. Noting from Equation 7.187 that z = e^{sΔt}, and that Δt can be made sufficiently small, the poles of H(z) correspond to

$$s = -\zeta\omega_n \pm j\sqrt{1-\zeta^2}\,\omega_n \qquad (7.224)$$

The above shows that the Laplace variable s, evaluated where the transfer function reaches its poles, is equivalent to the eigenvalues of the SDOF system.
Furthermore, we can prove that the transfer function in the Laplace variable s, H(s), and the transfer function in the variable z, H(z), have the same values, provided Δt is sufficiently small. That is,

$$\lim_{\Delta t \to 0} H(z) = H(s) \qquad (7.225)$$
For the cases expressed by Equations 7.73 and 7.75, we can have the same obser-
vations. Therefore, we use the time series of the ARMA model to describe an SDOF
system. In Chapter 8, we will further discuss the utilization of different functions for
MDOF systems.
Example 7.3
FIGURE 7.11 Absolute amplitudes of the exact transfer function H(s) and of H(z) computed with time intervals of 0.0005 and 0.0010 s, plotted against frequency (rad/s).
The results are plotted in Figure 7.11. From these curves, it is seen that when Δt
is sufficiently small, H(z) can be a good approximation of the exact calculation of
H(s). However, when Δt = 0.005 s, which is often used in practical measurement,
we will have larger errors especially in the resonant region.
We note that in this example the damping ratio is 6.3%. When the damping
ratio becomes large or different natural frequencies are used (or both), the situa-
tion will not be improved. Therefore, sufficiently small time intervals need to be
carefully chosen when using time series to directly analyze an SDOF system.
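The convergence of H(z) to H(s) stated in Equation 7.225 can be demonstrated numerically. The Python/NumPy sketch below uses illustrative system parameters (not those of Example 7.3) and confirms that the maximum relative error in |H| shrinks as Δt decreases:

```python
import numpy as np

# Compare H(s) = 1/(m s^2 + c s + k) with the discrete transfer function H(z)
# of Equation 7.220 at z = e^{j*omega*dt}; the error shrinks as dt decreases.
m, c, k = 10.0, 8.0, 4000.0              # omega_n = 20 rad/s, zeta = 0.02

def H_s(w):
    s = 1j * w
    return 1.0 / (m * s**2 + c * s + k)

def H_z(w, dt):
    z = np.exp(1j * w * dt)
    a1 = -2.0 + c * dt / m
    a2 = 1.0 - c * dt / m + k * dt**2 / m
    return (dt**2 / m) / (z**2 + a1 * z + a2)

w = np.linspace(1.0, 60.0, 400)          # covers the resonant region
err = {dt: np.max(np.abs(np.abs(H_z(w, dt)) - np.abs(H_s(w)))
                  / np.abs(H_s(w)))
       for dt in (1e-3, 1e-4)}
print(err)                               # the smaller dt gives a smaller error
```

As in the text, the largest discrepancy occurs in the resonant region, where the forward-difference scheme distorts the effective damping unless Δt is small.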
7.5.4.4 Stability of Systems
7.5.4.4.1 General Description
In Chapter 6, we showed that for an SDOF system, we need c ≥ 0 and k > 0 to achieve stable vibrations. The condition c ≥ 0 is equivalent to the eigenvalues of the system having nonpositive real parts, namely,

Re(λ) ≤ 0 (7.226)
Now, we examine the ARMA model to establish criterion for the system’s sta-
bility. Recall the definitions of ARMA(p, q) and AR(p), respectively, described in
Equations 7.143 and 7.161. It can be seen that both are difference equations with
constant coefficients.
For convenience, let us define a lag operator (backshift operator) B such that

B[x(n)] = x(n − 1) (7.227)

It is seen that

B^k[x(n)] = x(n − k) (7.228)
Thus, the ARMA(p, q) model can be written as

$$\sum_{k=0}^{p} a_k x(n-k) = \sum_{k=0}^{q} b_k w(n-k), \quad a_0 = 1,\ n \ge 0,\ p > q \qquad (7.229)$$

with the corresponding homogeneous equation

$$\sum_{k=0}^{p} a_k x(n-k) = 0, \quad a_0 = 1,\ n \ge 0 \qquad (7.230)$$
It can be seen that the homogeneous difference equation for an AR(p) model can
also be described by Equation 7.230. With the help of the lag operation, Equation
7.230 can be further written as
$$\sum_{k=0}^{p} a_k x(n-k) = A_p(B)[x(n)] = 0, \quad a_0 = 1 \qquad (7.231)$$
where the solutions λ of the characteristic equation are the eigenvalues of ARMA(p, q) or AR(p). By using these eigenvalues, we can factor the operator polynomial as

$$A_p(B) = (1-\lambda_1 B)(1-\lambda_2 B)\cdots(1-\lambda_p B) \qquad (7.233)$$

Therefore, for a second-order system (p = 2), the characteristic equation is

$$\lambda^2 + a_1\lambda + a_2 = 0 \qquad (7.237)$$

whose roots, in the underdamped case, are

$$\lambda_{1,2} = \frac{-a_1 \pm j\sqrt{4a_2 - a_1^2}}{2} \qquad (7.238)$$
Problems
1. A system is shown in Figure P7.1 with white noise excitation f(t).
a. Find the equation of motion for this system.
b. What is the transfer function of this system?
c. Find the PSD matrix.
d. Find the RMS value of the response of x1. (Hint: block B is massless.)
2. A white noise force is applied on the first mass of the system given in Figure
P7.2. Find its governing equation and transfer function. What are the trans-
fer functions measured at the first and the second mass? Find the standard
deviation of the response x1.
FIGURE p7.1 (System with massless block B, displacements x1 and x2, and excitation f(t).)

FIGURE p7.2 (Two-mass system with masses m1 and m2, spring k, and force f.)
FIGURE p7.3 (Member of length L = 0.5 m with blocks B under force F(t).)
FIGURE p7.4 (SDOF system with mass m, spring force f_k = kx, damping force f_c = cẋ, weight mg, and ground motion x_g.)
$$\lambda_{1,2} = \frac{-a_1 \pm j\sqrt{4a_2 - a_1^2}}{2}$$
8 Random Vibration of MDOF Linear Systems
The random responses of multi-degree-of-freedom (MDOF) systems are discussed
in this chapter. General references can be found in Clough and Penzien (1993),
Wirsching et al. (2006), Cheng (2001), Cheng and Truman (2001), Chopra (2003),
Inman (2008), and Liang et al. (2012).
8.1 Modeling
In real-world applications, modeling is often the logical starting point in gaining an
understanding of a system. Therefore, in the study of MDOF systems, similar to the
previous chapter about SDOF systems, a model is first discussed.
8.1.1 Background
Many vibration systems are too complex to be modeled as SDOF systems. For instance, a moving car encounters vertical bumps as well as swaying in the horizontal direction. One cannot use the measure of vertical bumping to determine the degree of rotational rocking, because they are responses to independent events. In this case, the vertical motion and the horizontal rotation are described by distinct degrees of freedom, represented by two independent displacements of the front and rear wheels.
An MDOF system, with n independent displacements, can have n natural fre-
quencies and n linearly independent vibration shape functions.
8.1.1.1 Basic Assumptions
We consider first the following assumptions:
c. Gaussian
This results in linear combinations that yield normal distributions.
8.1.1.2 Fundamental Approaches
One of the following approaches may be used in dealing with an MDOF system:
Complex mode. If the mode shape vector cannot be written as real-valued, then
it is a complex mode. In this case, the following conditions are mutually necessary
and sufficient:
In each case, there is model superposition: the total solution is a linear combina-
tion of the modal solutions.
8.1.2 Equation of Motion
The modeling of an MDOF system is examined in the following.
8.1.2.1 Physical Model
Figure 8.1 shows a typical model of a 2-DOF system.
The equilibrium of forces on the first mass is

$$\sum F = m_1\ddot{x}_1 + c_1\dot{x}_1 + c_2(\dot{x}_1-\dot{x}_2) + k_1x_1 + k_2(x_1-x_2) - f_1 = 0 \qquad (8.1)$$
FIGURE 8.1 A typical model of a 2-DOF system (masses m1 and m2; dampers c1 and c2; springs k1 and k2; displacements x1 and x2; forces f1 and f2).
Similarly, for the second mass,

$$\sum F = m_2\ddot{x}_2 + c_2(\dot{x}_2-\dot{x}_1) + k_2(x_2-x_1) - f_2 = 0 \qquad (8.2)$$
or, collecting the two equations in matrix form,

$$M\ddot{x}(t) + C\dot{x}(t) + Kx(t) = f(t) \qquad (8.5b)$$
Here, for the example shown in Figure 8.1, M is the mass matrix

$$M = \begin{bmatrix} m_1 & 0 \\ 0 & m_2 \end{bmatrix} \qquad (8.6a)$$
In general, M is defined as
$$M = \begin{bmatrix} m_{11} & m_{12} & \cdots & m_{1n} \\ m_{21} & m_{22} & \cdots & m_{2n} \\ \vdots & & & \vdots \\ m_{n1} & m_{n2} & \cdots & m_{nn} \end{bmatrix} \qquad (8.6b)$$
The damping matrix for this example is

$$C = \begin{bmatrix} c_1+c_2 & -c_2 \\ -c_2 & c_2 \end{bmatrix} \qquad (8.7a)$$

Likewise, C is defined in general as

$$C = \begin{bmatrix} c_{11} & c_{12} & \cdots & c_{1n} \\ c_{21} & c_{22} & \cdots & c_{2n} \\ \vdots & & & \vdots \\ c_{n1} & c_{n2} & \cdots & c_{nn} \end{bmatrix} \qquad (8.7b)$$
The stiffness matrix for this example is

$$K = \begin{bmatrix} k_1+k_2 & -k_2 \\ -k_2 & k_2 \end{bmatrix} \qquad (8.8a)$$

and, in general,

$$K = \begin{bmatrix} k_{11} & k_{12} & \cdots & k_{1n} \\ k_{21} & k_{22} & \cdots & k_{2n} \\ \vdots & & & \vdots \\ k_{n1} & k_{n2} & \cdots & k_{nn} \end{bmatrix} \qquad (8.8b)$$
The displacement and forcing function vectors are

$$x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} \qquad (8.9)$$

and

$$f = \begin{bmatrix} f_1 \\ f_2 \\ \vdots \\ f_n \end{bmatrix} \qquad (8.10)$$
8.1.2.2 Stiffness Matrix
Beginning with the stiffness matrix, the physical parameters are considered.
f = Kx (8.11)
$$k_{ji} = f_j\Big|_{x_i = 1,\ x_p = 0\ (p \ne i)} \qquad (8.13)$$
KT = K (8.14b)
8.1.2.2.2.2 Full Rank For full rank to exist, the following conditions must hold
true:
1. rank(K) = n (8.15)
and
2. K is nonsingular
also
3. K−1 exists
8.1.2.2.2.3 Positive Definite The stiffness matrix is positive definite, that is,
K > 0 (8.16)
in which the “>” symbol for a matrix is used to denote the matrix as being positive
definite, meaning all eigenvalues are greater than zero. This is denoted by
K−1 = S (8.18)
ST = S (8.19)
rank(S) = n (8.20)
rank(M) = n (8.21)
M > 0 (8.22)
3. M is symmetric, where
MT = M (8.23)
M = diag(mi) (8.24)
8.1.2.3.1.2 Consistent Mass Exists when M and K share the same “shape func-
tion” D, namely,
K = DKΔ DT (8.25)
and
M = DMΔ DT (8.26)
rank(C) ≤ n (8.27)
2. C is positive semidefinite
C ≥ 0 (8.28)
Here, the symbol “≥” for a matrix is used to denote the matrix as being positive
semidefinite, whose eigenvalues are all greater than or equal to zero. This is denoted
by
eig(C) ≥ 0 (8.29)
where eig(.) stands for the operation of calculating the eigenvalues of the matrix (.), which will be discussed in Sections 8.4.1 and 8.4.5 for proportionally and nonproportionally damped systems, respectively (see also Wilkinson 1965, The Algebraic Eigenvalue Problem).
C is symmetric.
CT = C (8.30)
C = αM + βK (8.32)
Example 8.1
$$M = \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix},\quad C = \begin{bmatrix} 2 & -1 \\ -1 & 1 \end{bmatrix},\quad K = \begin{bmatrix} 30 & -10 \\ -10 & 50 \end{bmatrix}$$
FIGURE 8.2 Relationship between input and output. (a) The time domain. (b) The fre-
quency domain.
$$h(t) = \begin{bmatrix} h_{11}(t) & h_{12}(t) & \cdots & h_{1n}(t) \\ h_{21}(t) & h_{22}(t) & \cdots & h_{2n}(t) \\ \vdots & & & \vdots \\ h_{n1}(t) & h_{n2}(t) & \cdots & h_{nn}(t) \end{bmatrix} \qquad (8.36)$$

$$H(\omega) = \begin{bmatrix} H_{11}(\omega) & H_{12}(\omega) & \cdots & H_{1n}(\omega) \\ H_{21}(\omega) & H_{22}(\omega) & \cdots & H_{2n}(\omega) \\ \vdots & & & \vdots \\ H_{n1}(\omega) & H_{n2}(\omega) & \cdots & H_{nn}(\omega) \end{bmatrix} \qquad (8.38)$$
V(ω)x0 = f0 (8.42)
where

$$V(\omega) = -\omega^2M + j\omega C + K \qquad (8.43)$$

is referred to as the impedance matrix. The impedance matrix is of full rank and symmetric. Its inverse matrix is denoted as

$$H(\omega) = V(\omega)^{-1} \qquad (8.44)$$

Here, H(ω) is the transfer function matrix (frequency response function matrix), so that
x0 = H(ω) f0 (8.45)
Example 8.2
Consider a system with stiffness matrix

$$K = \begin{bmatrix} 30 & -10 \\ -10 & 50 \end{bmatrix}$$
If one can measure the amplitude of the displacement as x0 = [1 0.5]T, find the
vector of forcing function.
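A minimal numerical sketch of this example follows (Python/NumPy; treating the measured amplitudes as static values, i.e., evaluating at ω = 0 where V(0) = K, is an assumption made here for illustration):

```python
import numpy as np

# Example 8.2 sketch: at omega = 0 the impedance matrix reduces to K,
# so the forcing amplitude vector is f0 = V(0) x0 = K x0.
K = np.array([[30.0, -10.0],
              [-10.0, 50.0]])
x0 = np.array([1.0, 0.5])
f0 = K @ x0
print(f0)            # [25. 15.]
```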
8.2.1 Expression of Response
For simplicity, consider that the output is measured at a single location only. Denote
the response at that location due to the ith input Fi(t) as Xi(t), as seen in Figure 8.3.
FIGURE 8.3 Multiple inputs F1(t), F2(t), …, Fn(t) applied at n locations.
In Equation 8.46, h_i(t) is the unit impulse response at the measurement location due to the specific force F_i(t). Given that the system is linear, the total response is the sum of the X_i(t). That is,

$$X(t) = \sum_{i=1}^{n} X_i(t) \qquad (8.47)$$
$$X(t) = \sum_{i=1}^{n} F_i(t)*h_i(t) = \sum_{i=1}^{n}\int_0^t F_i(t-\tau)h_i(\tau)\,d\tau = \sum_{i=1}^{n}\int_{-\infty}^{\infty} F_i(t-\tau)h_i(\tau)\,d\tau \qquad (8.48)$$
In Figure 8.3, the total solution is the sum of all n terms of $\int_{-\infty}^{\infty} F_i(t-\tau)h_i(\tau)\,d\tau$. However, these terms are not calculated individually. In the following, how to compute the corresponding numerical characteristics is described.
8.2.2 Mean Values
The mean value of the responses is first considered for multiple input and single
output, as shown in Figure 8.3. The case of multiple input–multiple output will be
further described later.
8.2.2.1 Single Coordinate
If only a single response at a certain location is considered and the corresponding
integration can be carried out, its mean can be calculated as
$$\mu_X = E[X(t)] = E\left[\sum_{i=1}^{n}\int_{-\infty}^{\infty} F_i(t-\tau)h_i(\tau)\,d\tau\right] \qquad (8.49)$$

Here, X(t) is the response and F_i(t) is the stationary excitation at the ith location, with a mean value of

$$\mu_{F_i} = E[F_i(t)] \qquad (8.50)$$

Thus,

$$\mu_X = \sum_{i=1}^{n}\int_{-\infty}^{\infty} E[F_i(t-\tau)]h_i(\tau)\,d\tau = \sum_{i=1}^{n}\mu_{F_i}\int_{-\infty}^{\infty} h_i(\tau)\,d\tau \qquad (8.51)$$

Finally,

$$\mu_X = \sum_{i=1}^{n}\left\{\mu_{F_i} H_i(0)\right\} \qquad (8.52)$$
8.2.2.2 Multiple Coordinates
Now, the response of all n coordinates is considered. In this case, we have multiple
inputs and multiple outputs.
$$\mu_{X_1} = \sum_{i=1}^{n}\left\{\mu_{F_i} H_{1i}(0)\right\} \qquad (8.53)$$

$$\mu_{X_2} = \sum_{i=1}^{n}\left\{\mu_{F_i} H_{2i}(0)\right\} \qquad (8.54)$$

$$\vdots$$

$$\mu_{X_n} = \sum_{i=1}^{n}\left\{\mu_{F_i} H_{ni}(0)\right\} \qquad (8.55)$$
In Equations 8.53 through 8.55, the term H_ji(0) is the transfer function from the ith input to the jth output evaluated at ω = 0.
The mean values, written in matrix form, are represented by
μX = H(0)μF (8.56)
where matrix H(0) is defined in Equation 8.38 when ω = 0
$$\mu_X = \begin{bmatrix} \mu_{X_1} \\ \mu_{X_2} \\ \vdots \\ \mu_{X_n} \end{bmatrix} \qquad (8.57)$$

and

$$\mu_F = \begin{bmatrix} \mu_{F_1} \\ \mu_{F_2} \\ \vdots \\ \mu_{F_n} \end{bmatrix} \qquad (8.58)$$
Example 8.3
Consider a system with stiffness matrix

$$K = \begin{bmatrix} 30 & -10 \\ -10 & 50 \end{bmatrix}$$
The mean of input is μF = [0 −5]T, find the vector of the output mean.
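This example can be sketched numerically (Python/NumPy; taking H(0) = K⁻¹, i.e., the static value of the transfer function matrix, is an assumption made here for illustration):

```python
import numpy as np

# Example 8.3 sketch: at omega = 0 the transfer function matrix is
# H(0) = K^{-1}, so the output mean is mu_X = H(0) mu_F (Equation 8.56).
K = np.array([[30.0, -10.0],
              [-10.0, 50.0]])
mu_F = np.array([0.0, -5.0])
mu_X = np.linalg.solve(K, mu_F)   # K^{-1} mu_F without forming the inverse
print(mu_X)                       # approximately [-0.0357, -0.1071]
```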
In particular, when

μ_F = 0 (8.59)

we have

μ_X = 0 (8.60)

In general, a zero mean response can always be achieved. That is, if the forcing functions do not have zero mean, then a zero mean excitation can be formed using

f(t) = f_non(t) − μ_F (8.61)
In the above equation, f(t) and fnon(t) are, respectively, zero mean and nonzero
mean random process vectors of forcing functions.
The corresponding response is then

x(t) = x_non(t) − μ_X (8.62)
In this instance, x(t) and xnon(t) are, respectively, zero mean and nonzero mean random
process vectors of responses. Note that, the corresponding forcing function vector is
$$f(t) = \begin{bmatrix} F_1(t) \\ F_2(t) \\ \vdots \\ F_n(t) \end{bmatrix} \qquad (8.63)$$

and the response vector is

$$x(t) = \begin{bmatrix} X_1(t) \\ X_2(t) \\ \vdots \\ X_n(t) \end{bmatrix} \qquad (8.64)$$
Both vectors in Equations 8.63 and 8.64 are random processes. Namely, at least one of the elements F_i(t) or X_j(t) is random, in which case both f(t) and x(t) are random.
8.2.3 Correlation Functions
Next, the correlation functions are considered. The autocorrelation function of the
response measured at a single location due to multiple inputs is written as
$$\begin{aligned}
R_X(\tau) &= E[X(t)X(t+\tau)] = E\left[\sum_{i=1}^{m} X_i(t)\sum_{j=1}^{m} X_j(t+\tau)\right] \\
&= E\left[\sum_{i=1}^{m}\sum_{j=1}^{m}\int_{-\infty}^{\infty} h_i(\xi)F_i(t-\xi)\,d\xi\int_{-\infty}^{\infty} h_j(\eta)F_j(t+\tau-\eta)\,d\eta\right] \\
&= \sum_{i=1}^{m}\sum_{j=1}^{m}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h_i(\xi)h_j(\eta)E[F_i(t-\xi)F_j(t+\tau-\eta)]\,d\xi\,d\eta
\end{aligned} \qquad (8.65)$$
Here, m ≤ n, indicating that forces are applied at m locations; m may be less than n. Given that the forcing functions are stationary, the cross-correlation function of F_i(t) and F_j(t) can be denoted as

$$R_{F_iF_j}(\xi - \eta + \tau) = E[F_i(t-\xi)F_j(t+\tau-\eta)] \qquad (8.66)$$

Thus,

$$R_X(\tau) = \sum_{i=1}^{m}\sum_{j=1}^{m}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h_i(\xi)h_j(\eta)R_{F_iF_j}(\xi - \eta + \tau)\,d\xi\,d\eta \qquad (8.67)$$
Denote the Fourier transforms of the input and output vectors as

$$F(\omega) = \begin{bmatrix} F_1(\omega) \\ F_2(\omega) \\ \vdots \\ F_n(\omega) \end{bmatrix} \qquad (8.68)$$

and

$$X(\omega) = \begin{bmatrix} X_1(\omega) \\ X_2(\omega) \\ \vdots \\ X_n(\omega) \end{bmatrix} \qquad (8.69)$$
The cross-PSD matrix of the input is the Fourier transform of the corresponding correlation matrix:

$$S_F(\omega) = \begin{bmatrix}
S_{F_1}(\omega) & S_{F_1F_2}(\omega) & \cdots & S_{F_1F_n}(\omega) \\
S_{F_2F_1}(\omega) & S_{F_2}(\omega) & \cdots & S_{F_2F_n}(\omega) \\
\vdots & & & \vdots \\
S_{F_nF_1}(\omega) & S_{F_nF_2}(\omega) & \cdots & S_{F_n}(\omega)
\end{bmatrix}
= \mathcal{F}\begin{bmatrix}
R_{F_1}(\tau) & R_{F_1F_2}(\tau) & \cdots & R_{F_1F_n}(\tau) \\
R_{F_2F_1}(\tau) & R_{F_2}(\tau) & \cdots & R_{F_2F_n}(\tau) \\
\vdots & & & \vdots \\
R_{F_nF_1}(\tau) & R_{F_nF_2}(\tau) & \cdots & R_{F_n}(\tau)
\end{bmatrix} \qquad (8.70)$$
where SFj Fk (ω ) and RFj Fk (τ) are, respectively, the cross-PSD and correlation function
of input forcing functions Fj and Fk. If j = k, we obtain auto-PSD and autocorrelation
functions.
Unlike the SDOF case, Equation 8.70 contains off-diagonal entries, which are the cross-PSDs among the input locations. In this case,

$$S_{F_iF_j}(\omega) = \lim_{T\to\infty}\frac{1}{2\pi T}\sum_p\left[F_{ip}(\omega,T)^*F_{jp}(\omega,T)\right] \qquad (8.71)$$
where Fip(ω,T) is the Fourier transform of the pth measurement of a forcing function
applied at location i, with a measurement length of T.
Now, consider the cross-PSD function matrix of the output; its jkth entry can be written as

$$S_{X_jX_k}(\omega) = cE[X_j(\omega)^*X_k(\omega)] \qquad (8.73)$$

that is, the expected value of the product of the Fourier transforms multiplied by a constant c. Substituting the input–output relationship of Equation 8.72 into Equation 8.73, the result becomes
$$S_X(\omega) = \begin{bmatrix}
S_{X_1}(\omega) & S_{X_1X_2}(\omega) & \cdots & S_{X_1X_n}(\omega) \\
S_{X_2X_1}(\omega) & S_{X_2}(\omega) & \cdots & S_{X_2X_n}(\omega) \\
\vdots & & & \vdots \\
S_{X_nX_1}(\omega) & S_{X_nX_2}(\omega) & \cdots & S_{X_n}(\omega)
\end{bmatrix}
= \mathcal{F}\begin{bmatrix}
R_{X_1}(\tau) & R_{X_1X_2}(\tau) & \cdots & R_{X_1X_n}(\tau) \\
R_{X_2X_1}(\tau) & R_{X_2}(\tau) & \cdots & R_{X_2X_n}(\tau) \\
\vdots & & & \vdots \\
R_{X_nX_1}(\tau) & R_{X_nX_2}(\tau) & \cdots & R_{X_n}(\tau)
\end{bmatrix} \qquad (8.79)$$
Both cross-PSD matrices S_F(ω) and S_X(ω), given by Equations 8.70 and 8.79, are useful because we study not only the auto-PSDs S_{F_i} and S_{X_i} but also the relationships of the signals between locations j and k.
8.2.4.4 Variance
When x_i(t) is of zero mean, then

$$\sigma_{X_i}^2 = \int_{-\infty}^{\infty} S_{X_i}(\omega)\,d\omega \qquad (8.81)$$
8.2.4.5 Covariance
Similarly, the covariance is

$$\sigma_{X_iX_j}(0) = \int_{-\infty}^{\infty} S_{X_iX_j}(\omega)\,d\omega \qquad (8.82)$$
8.2.5.1 Single Input
If a single force is applied at location k only, then

$$S_{F_iF_j}(\omega) = \begin{cases} S_{F_k}(\omega), & i = j = k \\ 0, & \text{elsewhere} \end{cases} \qquad (8.83)$$
$$S_X(\omega) = H_k^*(\omega)H_k(\omega)S_{F_k}(\omega) \qquad (8.84)$$

or

$$S_X(\omega) = |H_k(\omega)|^2S_{F_k}(\omega) \qquad (8.85)$$
8.2.5.2 Uncorrelated Input
If all inputs f_i(t) are uncorrelated, then

$$S_{F_iF_j}(\omega) = \begin{cases} S_{F_k}(\omega), & i = j = k = 1, 2, \ldots, n \\ 0, & \text{elsewhere} \end{cases} \qquad (8.86)$$

and

$$S_X(\omega) = \sum_{k=1}^{n}|H_k(\omega)|^2S_{F_k}(\omega) \qquad (8.87)$$

Furthermore, the variance is the sum of the contributions of the individual inputs:

$$\sigma_X^2 = \sum_{k=1}^{n}\sigma_k^2 \qquad (8.88)$$
8.3.1 Proportional Damping
As mentioned previously, for the Caughey criterion to be satisfied, a system must be
proportionally damped. In this section, the mathematical and physical meaning of
the Caughey criterion will be discussed first.
Equation 8.90 implies that the two matrices [M−1C] and [M−1K] commute, if and
only if the matrices share the identical eigenvector matrix Φ. Matrices [M−1C] and
[M−1K] are, respectively, referred to as generalized damping and stiffness matrices.
The eigenvector matrix Φ is the mode shape of the M-C-K system, as discussed
further in Section 8.3.5.4.
Physically, if the distribution of the individual dampers and springs are identical
and the amount of individual damping and stiffness are proportional, then both the
generalized damping and stiffness matrices share identical eigenvectors. In qualita-
tive terms, this means that both damping and stiffness are “regularly” distributed.
8.3.1.2 Monic System
To have generalized damping and stiffness matrices, the monic system must be gen-
erated first.
In which, the mass matrix becomes the identity matrix. It is referred to as a monic
system.
Note that, in Equation 8.91, the monic MDOF vibration system has newly formed
generalized damping matrix, M−1C, and stiffness matrix, M−1K.
or
$$\lambda = -\zeta\omega_n + j\sqrt{1-\zeta^2}\,\omega_n \qquad (8.96)$$
and
The variables, λi, ζi, and ωni are, respectively, referred to as eigenvalue, damping
ratio, and natural frequency of the ith normal mode. The triple < ωni, ζi, ϕi > is called
the ith normal modal parameter. The phrase “normal” means that the eigenvalues
are calculated from the proportionally damped system.
8.3.2 Eigen-Problems
In Equations 8.97a and 8.97b, the damping ratios and the natural frequencies are parts
of eigenvalues. It is of importance that these eigen-problems be further explored.
8.3.2.1 Undamped System
First, consider an undamped system, where
C = 0 (8.98)
and
ζi = 0 (8.99)
Thus,
λi = jωni (8.100)
Furthermore,
$$-\omega_{ni}^2\phi_i + M^{-1}K\phi_i = 0$$

or

$$M^{-1}K\phi_i = \omega_{ni}^2\phi_i \qquad (8.101)$$
8.3.2.2 Underdamped Systems
Similarly, M⁻¹C is also a square matrix with its own eigenvectors and eigenvalues. Because M⁻¹C and M⁻¹K share the same eigenvectors, the corresponding eigenvalue can be denoted as 2ζᵢωₙᵢ, with

$$M^{-1}C\phi_i = 2\zeta_i\omega_{ni}\phi_i \qquad (8.102)$$
Example 8.4
Consider the system with

$$M = \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix},\quad C = \begin{bmatrix} 2 & -1 \\ -1 & 1 \end{bmatrix},\quad K = \begin{bmatrix} 30 & -10 \\ -10 & 30 \end{bmatrix}$$
Check whether this system is proportionally damped and find the correspond-
ing eigenvalues and eigenvectors.
It is seen that CM−1K = KM−1C so that the system is proportionally damped.
From Equation 8.101, it is seen that, ωn1 = 3.4917 and ωn2 = 5.7278. The cor-
responding eigenvectors are ϕ1 = [0.4896 0.8719]T and ϕ2 = [0.9628 −0.2703]T.
From Equation 8.102, we see that M −1Cϕ1 = [0.1073 0.1911]T. Dividing the first
element by 0.4896, namely, the first element in ϕ1 results in 0.2192 (the same
result can be found by dividing the first element by 0.8719). Furthermore, damp-
ing ratio ζ1 = 0.2192/(2ω n1) = 0.0314. Similarly, the damping ratio ζ2 = 0.1991.
Therefore, the eigenvalues are

λ₁ = −0.1096 ± 3.4900j

and

λ₂ = −1.1404 ± 5.6131j
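Example 8.4 can be verified numerically. The following Python/NumPy sketch (MATLAB would be equivalent) checks the Caughey criterion and recovers the stated natural frequencies and damping ratios:

```python
import numpy as np

# Verify Example 8.4: the system is proportionally damped and has the stated
# natural frequencies and damping ratios.
M = np.array([[1.0, 0.0], [0.0, 2.0]])
C = np.array([[2.0, -1.0], [-1.0, 1.0]])
K = np.array([[30.0, -10.0], [-10.0, 30.0]])
Minv = np.linalg.inv(M)

assert np.allclose(C @ Minv @ K, K @ Minv @ C)     # Caughey criterion holds

w2, Phi = np.linalg.eig(Minv @ K)                  # eigenvalues = omega_n^2
order = np.argsort(w2)
wn, Phi = np.sqrt(w2[order]), Phi[:, order]

# M^{-1}C shares the eigenvectors; its eigenvalues are 2*zeta_i*omega_ni
two_zw = np.array([(Minv @ C @ Phi[:, i])[0] / Phi[0, i] for i in range(2)])
zeta = two_zw / (2.0 * wn)
print(wn, zeta)     # ~[3.4917 5.7278] and ~[0.0314 0.1991]
```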
8.3.3 Orthogonal Conditions
The eigenvector can be further used to decouple the MDOF system. Before complet-
ing this calculation, first, consider why this is possible. The answer to this is based
on the orthogonal conditions.
8.3.3.1 Weighted Orthogonality
Equation 8.101 can be rewritten as

$$K\phi_i = \omega_{ni}^2M\phi_i \qquad (8.104)$$

Further multiplying φᵢᵀ on both sides of the jth generalized eigen-equation yields

$$\phi_i^TK\phi_j = \omega_{nj}^2\phi_i^TM\phi_j \qquad (8.105)$$

Because M and K are symmetric, taking the transpose on both sides of the corresponding equation with i and j interchanged gives

$$\phi_i^TK\phi_j = \omega_{ni}^2\phi_i^TM\phi_j \qquad (8.107)$$

Subtracting this result from Equation 8.105 results in

$$\left(\omega_{nj}^2 - \omega_{ni}^2\right)\phi_i^TM\phi_j = 0 \qquad (8.108)$$
Because, in general, the natural frequencies are distinct (ωₙᵢ ≠ ωₙⱼ for i ≠ j), we have

$$\phi_i^TM\phi_j = 0, \quad i \ne j$$

and
$$\phi_i^TM\phi_j = \begin{cases} m_i, & i = j \\ 0, & i \ne j \end{cases} \qquad (8.112)$$

$$\phi_i^TK\phi_j = \begin{cases} k_i, & i = j \\ 0, & i \ne j \end{cases} \qquad (8.113)$$

and likewise

$$\phi_i^TC\phi_j = \begin{cases} c_i, & i = j \\ 0, & i \ne j \end{cases} \qquad (8.114)$$
Here, kᵢ and cᵢ are called the ith modal stiffness and modal damping coefficient, respectively; as with the modal mass, we use italic letters to denote these modal parameters. Equations 8.112 through 8.114 are referred to as the weighted orthogonality conditions.
8.3.3.2 Modal Analysis
8.3.3.2.1 Characteristic Equation
Using the orthogonal conditions, the eigenvector or mode shape can be used to
obtain the SDOF vibration systems mode by mode. In doing so, first consider the
homogeneous equation

$$M\ddot{x}(t) + C\dot{x}(t) + Kx(t) = 0 \qquad (8.115)$$

and assume the ith modal solution in the form

$$x_i(t) = \phi_iq_i(t) \qquad (8.116)$$

Substituting Equation 8.116 into Equation 8.115 and premultiplying φᵢᵀ on both sides yields
The characteristic equation for n SDOF systems has now been obtained.
$$m_i\lambda_i^2 + c_i\lambda_i + k_i = 0, \quad i = 1, \ldots, n \qquad (8.118)$$
$$\omega_{ni} = \sqrt{\frac{k_i}{m_i}}, \quad i = 1, \ldots, n \qquad (8.119)$$

and

$$\zeta_i = \frac{c_i}{2\sqrt{m_ik_i}}, \quad i = 1, \ldots, n \qquad (8.120)$$
When

ζᵢ < 1 (8.121)

the ith mode is underdamped; when

ζᵢ = 1 (8.122)

it is critically damped; and when

ζᵢ > 1 (8.123)
the system is overdamped. In the case of critically damped and overdamped systems,
the ith mode reduces to two real valued subsystems. Thus, the system will no longer
contain vibration modes. Note that again, for a stable system, we need all damping
ratios to be nonnegative, which can be guaranteed by M > 0, C ≥ 0 and K > 0. This
will also be true for nonproportionally damped systems.
qi (t ) = e λit (8.124)
Here, qi(t) is called the ith modal response of the free decay vibration. Furthermore,
in looking at ϕi, it is seen that ϕi contains spatial variable only, written as
$$\phi_i = \begin{bmatrix} \phi_{1i} \\ \phi_{2i} \\ \vdots \\ \phi_{ni} \end{bmatrix} \qquad (8.125)$$
Here, ϕi distributes the modal response qi(t) to different mass from 1 through n.
Equation 8.116 can be rewritten as
where the subscript i in the physical domain xi stands for the response due to the ith
mode only. Substituting Equation 8.126 into Equation 8.115 and premultiplying φiT
on both sides of the result will yield
$$\phi_i^TM\phi_i\,\ddot{q}_i(t) + \phi_i^TC\phi_i\,\dot{q}_i(t) + \phi_i^TK\phi_i\,q_i(t) = 0 \qquad (8.127a)$$

or

$$m_i\ddot{q}_i(t) + c_i\dot{q}_i(t) + k_iq_i(t) = 0, \quad i = 1, \ldots, n \qquad (8.127b)$$
with the initial velocity and displacement vectors

$$v_0 = \begin{bmatrix} v_{01} \\ v_{02} \\ \vdots \\ v_{0n} \end{bmatrix} \quad \text{and} \quad x_0 = \begin{bmatrix} x_{01} \\ x_{02} \\ \vdots \\ x_{0n} \end{bmatrix}$$
Example 8.5

For the system in Example 8.4, suppose the initial velocities are v₀ = [1 2]ᵀ and the initial displacements are x₀ = [−2 2]ᵀ. The corresponding modal initial conditions are

$$\dot{q}(0) = \begin{bmatrix} 0.4896 & 0.9628 \\ 0.8719 & -0.2703 \end{bmatrix}^{-1}\begin{bmatrix} 1 \\ 2 \end{bmatrix} = \begin{bmatrix} 2.2595 \\ -0.1105 \end{bmatrix}$$

and

$$q(0) = \begin{bmatrix} 0.4896 & 0.9628 \\ 0.8719 & -0.2703 \end{bmatrix}^{-1}\begin{bmatrix} -2 \\ 2 \end{bmatrix} = \begin{bmatrix} 1.4250 \\ -2.8021 \end{bmatrix}$$
8.3.4 Modal Superposition
Because the system is linear, once the modal responses are obtained, we can sum them to construct the response in the physical domain. Letting xᵢ(t) = φᵢqᵢ(t), we have

$$x(t) = x_1(t) + x_2(t) + \cdots + x_n(t) = \phi_1q_1(t) + \phi_2q_2(t) + \cdots + \phi_nq_n(t) = [\phi_1\ \phi_2\ \cdots\ \phi_n]\,q(t) \qquad (8.129)$$
Accordingly, the response denoted by x(t) is called the physical response, com-
pared with the modal responses denoted by qi(t). Note that, at a given time t, x(t) is a
vector, the jth element is the displacement measured at the jth location, whereas qi(t)
is a scalar, which is the response of the ith mode.
Example 8.6
In the example from Section 8.3.3, we calculated the modal response q1(t) and
q2(t). Find the response in the physical domain.
The results are plotted in Figure 8.4b; as a comparison, the modal responses calculated in the previous example are plotted in Figure 8.4a. Additionally, because Equation 8.126 contains only the ith mode, it can be rewritten as follows:
Here, the italic symbol ϕi is used to denote the normalized mode shape. Note
that
φ_i = φ_i / √(m_i)    (8.131)
In other words, the mode shape ϕi can be normalized so that the following
product is unity:
φ_i^T M φ_i = 1
Given that the system is linear, the linear combination can be obtained as
x(t) = Σ_{i=1}^{n} a_i x_i(t) = Σ_{i=1}^{n} a_i φ_i q_i(t)    (8.132)
Figure 8.4 (a) Modal displacement responses (first and second modal responses). (b) Physical displacement responses x_1(t) and x_2(t). (Time axis: 0 to 10 s.)
a_i = 1 / √(m_i)    (8.133)
It is noted that there can be several different types of normalization for the
mode shape ϕi. Equation 8.131 is only one of the normalizations.
x_i(t) = q_i(t) φ_i
where qi(t) is the ith modal response. In the case of forced vibration, it is no longer
equal to e λit as described in Equation 8.124. Rather, it becomes the forced modal
response. The modal response qi(t) can be solved as follows.
In solving for qi(t), first substitute Equation 8.133 into Equation 8.5b, the equation
of forced vibration for an M-C-K system. In the same way, premultiplying φiT on
both sides of the resulting equation will yield
This will result in a typical equation of motion for an SDOF vibration system:
m_i q̈_i(t) + c_i q̇_i(t) + k_i q_i(t) = g_i(t)    (8.136)
8.3.5.2 Rayleigh Quotient
Dividing both sides of Equation 8.134 by φ_i^T M φ_i results in the monic modal equation:
It is seen that
(φ_i^T C φ_i) / (φ_i^T M φ_i) = 2 ζ_i ω_{ni}    (8.138)
and
R = (φ^T A φ) / (φ^T φ)    (8.140)
f(t) = −M J ẍ_g(t)    (8.141)
and x(t) becomes the relative displacement vector (see Chapter 6, Base excitation and
Equation 6.127). Here, in Equation 8.141
J = [1  1  …  1]^T    (8.142)
g_i(t) = (φ_i^T f(t)) / (φ_i^T M φ_i) = −(φ_i^T M J ẍ_g(t)) / (φ_i^T M φ_i)    (8.143)

Γ_i = (φ_i^T M J) / (φ_i^T M φ_i)    (8.144)
In Equation 8.143, the term Γ_i ẍ_g(t) is defined as the modal participation factor
for the ith mode, whereas Γi is the unit acceleration load for the ith mode. In the fol-
lowing, for convenience, Γi is also referred to as the modal participation factor. It
will be shown that the value of Γi will depend on the normalization of ϕi.
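As a numerical sketch of Equation 8.144 (with assumed, not textbook, matrices), the participation factors can be computed as below. Note that while each Γ_i depends on how φ_i is normalized, the products Γ_i φ_i are normalization-independent and reassemble the influence vector J:

```python
import numpy as np
from scipy.linalg import eigh

# Assumed base-excited 2-DOF system (illustrative values)
M = np.diag([2.0, 1.0])
K = np.array([[600.0, -200.0],
              [-200.0, 200.0]])
J = np.ones(2)                     # influence vector, Equation 8.142

w2, Phi = eigh(K, M)               # columns of Phi are the mode shapes
num = Phi.T @ M @ J                # phi_i^T M J
den = np.diag(Phi.T @ M @ Phi)     # phi_i^T M phi_i
Gamma = num / den                  # modal participation factors, Equation 8.144

# Completeness of the modal basis: sum_i Gamma_i * phi_i = J
print(Phi @ Gamma)                 # ~ [1, 1]
```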
and
Furthermore,
and
Φ^{−1}[M^{−1}K]Φ = diag(ω_{ni}^2),  i = 1, …, n    (8.150)

Here, diag(2ζ_i ω_{ni}) and diag(ω_{ni}^2) are the eigenvalue matrices of M^{−1}C and M^{−1}K, respectively. Additionally, Φ is the eigenvector matrix.
q(t) = [q_1(t)  q_2(t)  …  q_n(t)]^T    (8.152)
x(t) = [x_1(t); x_2(t); …; x_n(t)] = [φ_11  φ_12  …  φ_1n; φ_21  φ_22  …  φ_2n; …; φ_n1  φ_n2  …  φ_nn] [q_1(t); q_2(t); …; q_n(t)]    (8.153)
and

x_j(t) = Σ_{i=1}^{n} φ_{ji} q_i(t)    (8.154)
8.3.5.5 Modal Truncation
Higher modes contain much less energy. Thus, it is practical to use only the first S modes in an approximation of the solution, written as

x_j(t) ≈ Σ_{i=1}^{S} φ_{ji} q_i(t)    (8.155)
Typically, the number of modes, S, will be considerably smaller than the number
of total modes, n, that is,
S ≪ n (8.156)
x(t)_{n×1} ≈ [φ_1  φ_2  …  φ_S]_{n×S} [q_1(t); q_2(t); …; q_S(t)]_{S×1},  S < n    (8.157)
In matrix form,

x(t) ≈ Φ_C q_C(t)

where Φ_C = [φ_1  φ_2  ⋯  φ_S]_{n×S} is the truncated mode shape matrix and q_C is the truncated modal response.
Additionally,
q_C(t) = [q_1(t)  q_2(t)  …  q_S(t)]^T_{S×1}
In many cases, only the first modal response will be used. This is called the
fundamental modal response, which is used to represent the displacement. This is
written as
m_i q̈_i(t) + c_i q̇_i(t) + k_i q_i(t) = g_i(t) = φ_i^T f(t) = φ_i^T [f_1(t)  f_2(t)  …  f_n(t)]^T    (8.160)
with modal initial velocity q̇_i(0) and modal initial displacement q_i(0) (see Equations 8.128a,b).
Here fj(t) is the physical forcing at the jth location, whereas gi(t) is the ith modal
force. If the forcing function f(t) is Gaussian, it is easy to see that the modal forc-
ing functions should also be Gaussian. For stable MDOF systems, the ith modal
response is also Gaussian. Furthermore, both the jth responses given by the com-
plete or truncated modal superposition (see Equations 8.154 and 8.155) should also
be Gaussian. Thus, the output responses will be completely characterized by the
means and covariance functions. In the following examples, let us use the complete
response for convenience.
Therefore, we first consider the modal response qi(t), which can be written as
q_i(t) = q_i(0) e^{−ζ_i ω_{ni} t} [cos ω_{di} t + (ζ_i ω_{ni}/ω_{di}) sin ω_{di} t] + q̇_i(0) h_i(t) + ∫_0^t h_i(t − τ) g_i(τ) dτ    (8.161)
Here, hi(t) is the ith unit impulse response function with damping ratio ζi and
natural frequency ω ni. In addition,
ω_{di} = √(1 − ζ_i^2) ω_{ni}    (8.162)
is the ith damped natural frequency. With the help of modal superposition, the statistics of the response in the physical domain can then be obtained as follows.
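Equation 8.161 can also be evaluated numerically. The sketch below (assumed modal parameters, unit modal mass) discretizes the Duhamel integral and checks the static limit of a unit-step modal force:

```python
import numpy as np
from scipy.signal import fftconvolve

# Numerical sketch of Equation 8.161 for a single mode; zeta, wn, and the
# step modal force are illustrative assumptions (modal mass m_i taken as 1)
zeta, wn = 0.05, 10.0
wd = wn * np.sqrt(1.0 - zeta**2)          # damped frequency, Equation 8.162
dt = 1e-3
t = np.arange(0.0, 20.0, dt)

h = np.exp(-zeta * wn * t) * np.sin(wd * t) / wd   # unit impulse response h_i(t)

q0, v0 = 1.0, 0.0                         # modal initial conditions
g = np.ones_like(t)                       # modal force g_i(t): a unit step

free = q0 * np.exp(-zeta * wn * t) * (np.cos(wd * t)
       + zeta * wn / wd * np.sin(wd * t)) + v0 * h
forced = fftconvolve(h, g)[:t.size] * dt  # Duhamel integral, rectangle rule
q = free + forced

print(q[-1])   # settles near the static value 1/wn^2 = 0.01
```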
8.3.6.2 Mean
The mean of x(t) is given by
μ_X(t) = Φ diag[ q_i(0) e^{−ζ_i ω_{ni} t} ( cos ω_{di} t + (ζ_i ω_{ni}/ω_{di}) sin ω_{di} t ) + q̇_i(0) h_i(t) ] + ∫_0^t Φ diag[ h_i(t − τ) ] μ_g(τ) dτ    (8.164)
where diag[(·)_i] is a diagonal matrix whose ith diagonal entry is (·)_i. Furthermore,
μ_{g_i}(t) = E[g_i(t)]    (8.165a)
It should be noted that the mean vector of the force in the physical domain is given by
8.3.6.3 Covariance
The covariance matrix of the random response x(t) is given by
σ_XX(t_1, t_2) = ∫_0^{t_1} dτ_1 ∫_0^{t_2} dτ_2 Φ H(t_1 − τ_1) Φ^T E[ {g(τ_1) − μ_g(τ_1)} {g(τ_2) − μ_g(τ_2)}^T ] Φ H(t_2 − τ_2) Φ^T    (8.166)
Here
g(τ_1) − μ_g(τ_1) = [ g_1(τ_1) − μ_{g_1}(τ_1); g_2(τ_1) − μ_{g_2}(τ_1); …; g_n(τ_1) − μ_{g_n}(τ_1) ]    (8.167a)
and
In the examples above, we denote the covariance of the modal forcing process to
be
σ_XX(t_1, t_2) = ∫_0^{t_1} dτ_1 ∫_0^{t_2} dτ_2 Φ H(t_1 − τ_1) Φ^T σ_FF(t_1, t_2) Φ H(t_2 − τ_2) Φ^T    (8.169)
f_{X_i}(x_i) = (1 / (√(2π) σ_{X_i})) exp[ −(x_i − μ_{X_i})^2 / (2σ_{X_i}^2) ]    (8.171)
where the variance σ_{X_i}^2 is the ith diagonal entry of the covariance matrix σ_XX(t,t).
If the fj(t), the forcing function applied at jth location, etc., are jointly normally
distributed, then xj(t) are also jointly normally distributed. The PDF can be given by
f_X(x_1, x_2, …, x_n) = (1 / √((2π)^n det[σ_XX(t, t)])) exp[ −(1/2)(x − μ_X)^T σ_XX^{−1}(t, t)(x − μ_X) ]    (8.172)
Example 8.7
where x = [x1 x2]T is the vector of relative displacement. Find the mean and covari-
ance of the displacement by using the normal mode method.
(Figure: a two-story system with masses m_1 and m_2, stiffnesses k_1 and k_2, dampings c_1 and c_2, subjected to ground motion x_g; x_1 and x_2 are measured relative to the ground.)
If the ground acceleration ẍ_g(t) is a stationary white noise with auto-PSD S_0, then both modal forces g_1(t) and g_2(t) will be proportional to ẍ_g and
D = [ (m_1/(φ_1^T M φ_1))^2  0; 0  (m_2/(φ_2^T M φ_2))^2 ]
σ_XX(t_1, t_2) = S_0 ∫_0^{t_1} dτ_1 ∫_0^{t_2} dτ_2 Φ H(t_1 − τ_1) Φ^T D Φ H(t_2 − τ_2) Φ^T
Based on the above computation, the variance can be calculated (see Equation
8.169). For example, the first entry of σXX(t,t) is
(φ_11^4 D_1 + φ_11^2 φ_21^2 D_2)/(ω_{d1}^2 m_1^2) ∫_0^∞ e^{−2ζ_1ω_{n1}(t−τ)} sin^2 ω_{d1}(t−τ) dτ
+ (2φ_11^2 φ_12^2 D_1 + 2φ_11 φ_12 φ_21 φ_22 D_2)/(ω_{d1} ω_{d2} m_1 m_2) ∫_0^∞ e^{−(ζ_1ω_{n1}+ζ_2ω_{n2})(t−τ)} sin ω_{d1}(t−τ) sin ω_{d2}(t−τ) dτ
+ (φ_12^4 D_1 + φ_12^2 φ_22^2 D_2)/(ω_{d2}^2 m_2^2) ∫_0^∞ e^{−2ζ_2ω_{n2}(t−τ)} sin^2 ω_{d2}(t−τ) dτ
= [ 11.03 ∫_0^∞ e^{−7.339(t−τ)} sin^2 35.65(t−τ) dτ − 148.05 ∫_0^∞ e^{−10.952(t−τ)} sin 35.65(t−τ) sin 49.96(t−τ) dτ + 2.32 ∫_0^∞ e^{−14.566(t−τ)} sin^2 49.96(t−τ) dτ ] × 10^{−8}
8.4 Nonproportionally Damped Systems, Complex Modes
If the Caughey criterion cannot be satisfied, then the system is nonproportionally
damped, or generally damped. In this case, the mode shape function can no longer
be used to decouple the system. However, modal analysis can still be carried out in a
2n space. Generally, this will result in the mode shape being complex in value. The
corresponding decoupling is referred to as the complex mode method (Liang and
Inman 1990; Liang and Lee 1991b).
8.4.1 Nonproportional Damping
Given that the complex mode is the result of damping, the damping will be consid-
ered first.
8.4.1.1 Mathematical Background
The following conditions are mutually necessary and sufficient:
1. The Caughey criterion is not satisfied (Caughey and O’Kelly 1965; Ventura
1985)
In the event that all of the above is true, nonproportional damping exists.
Equation 8.174 is modified into a matrix equation, referred to as the state equation
Furthermore, with the help of the state and the input matrices A and B, Equation
8.175 can be expressed as
Ẏ(t) = A Y(t) + B f(t)    (8.176)
Here, the dimension of the vector Y is 2n × 1; it is referred to as the state vector. Specifically,
Y(t) = [ ẋ(t); x(t) ]_{2n×1}    (8.177)
Remembering
x(t) = [x_1(t)  x_2(t)  …  x_n(t)]^T_{n×1}    (8.178)
Here, I and 0 are the identity and null matrices, respectively, with dimension n × n. Note that the state matrix need not take the form shown in Equation 8.179; another form can be seen in the example in Section 8.4.4. Finally, the input matrix B is
B = [ M^{−1}; 0 ]    (8.180)
Ẏ(t) = A Y(t)    (8.181)
or
Equations 8.184a and 8.184b form the typical eigen-problem. To be exact, if both
equations result in Y = P2n×1 eλt as a solution of Equation 8.181, then Equations 8.184
imply that λ is one of the eigenvalues of the matrix A, with P as the corresponding
eigenvector.
For a system with n DOFs, there are n pairs of eigenvalues and eigenvectors in complex conjugates. Accordingly, Equations 8.184a and 8.184b can be further expanded as
λ_i P_i = A P_i,  i = 1, …, n    (8.185a)
Taking the complex conjugate of both sides of the above equation will yield
and
ω_i = |λ_i|,  i = 1, …, n    (8.187)
and
ζi = −Re(λi)/ωi (8.188)
Up to now, the natural frequency (or angular natural frequency) was all obtained
through the square root of the stiffness k over m or the square root of ki over mi. In
general, this method of calculation cannot be used to obtain the natural frequency
for damped systems. The natural frequency must instead be calculated through
Equations 8.186a,b. To distinguish the natural frequency calculated from Equations
8.186a,b from the previously defined quantities, the italic symbol, ωi is used. In addi-
tion, the normal symbol ωni stands for the ith natural frequency of the corresponding
undamped M-O-K system.
Ri = αPi (8.189)
will also be the eigenvector associated with that λi, where α is an arbitrary nonzero
scalar. The eigenvector, Pi is also associated with the eigenvalue λi. Because Pi is a
2n × 1 vector, it is seen that through the assumption described in Equation 8.182, Pi
can be written as
P_i = [ λ_i p_i; p_i ]    (8.190)
where pi is an n × 1 vector. Given that the system is linear, the solution can have all
the linear combinations of pi’s and p*i ’s as follows:
x(t) = p_1 e^{λ_1 t} + p_2 e^{λ_2 t} + … + p_n e^{λ_n t} + p_1^* e^{λ_1^* t} + p_2^* e^{λ_2^* t} + … + p_n^* e^{λ_n^* t}
= [p_1, p_2, …, p_n] [e^{λ_1 t}; e^{λ_2 t}; …; e^{λ_n t}] + [p_1^*, p_2^*, …, p_n^*] [e^{λ_1^* t}; e^{λ_2^* t}; …; e^{λ_n^* t}]    (8.191)
Denote
P = [p1, p2, …pn] (8.192)
and
E(t) = [ e^{λ_1 t}; e^{λ_2 t}; …; e^{λ_n t} ]    (8.193)
ẋ(t) = P Δ E(t) + P^* Δ^* E^*(t)    (8.195)
and
or
[ ẋ(t); x(t) ] = [ PΔ  P^*Δ^*; P  P^* ] [ E(t); E^*(t) ]    (8.197)
and
[ ẍ(t); ẋ(t) ] = [ PΔ  P^*Δ^*; P  P^* ] [ Δ E(t); Δ^* E^*(t) ]    (8.198)
In this instance, Δ is defined as the diagonal n × n matrix, which contains all the
n-sets of eigenvalues. In addition, Δ can be written as follows:
Δ = diag(λ_i) = diag( −ζ_i ω_i + j√(1 − ζ_i^2) ω_i ) = diag(λ_1, λ_2, …, λ_n)_{n×n}    (8.199)
Substitution of Equation 8.198 into Equation 8.181, with the aid of Equation 8.177, results in
[ PΔ  P^*Δ^*; P  P^* ] [ Δ E(t); Δ^* E^*(t) ] = A [ PΔ  P^*Δ^*; P  P^* ] [ E(t); E^*(t) ]    (8.200)
which can be shown to have full rank 2n. Furthermore, this can be written as
[ PΔ  P^*Δ^*; P  P^* ] [ Δ  0; 0  Δ^* ] E = A [ PΔ  P^*Δ^*; P  P^* ] E    (8.202)
Given that
both sides of Equation 8.202 can be postmultiplied by E+, where the superscript +
stands for the pseudo inverse.
[ PΔ  P^*Δ^*; P  P^* ] [ Δ  0; 0  Δ^* ] = A [ PΔ  P^*Δ^*; P  P^* ]    (8.204)
Equation 8.204 indicates that the state matrix A can be decomposed by the eigen-
value matrix
Λ= ∆ (8.205)
∆*
P = [ PΔ  P^*Δ^*; P  P^* ] = [P_1, P_2, …, P_{2n}]    (8.206)
Note that the eigenvector matrix is now arranged to have the form of a complex
conjugate pair
[ PΔ; P ]  and  [ P^*Δ^*; P^* ]    (8.207)
Accordingly,
and

( [ PΔ; P ] )^* = [ P^*Δ^*; P^* ]    (8.209)
That is,
A = PΛP −1 (8.210)
or
Λ = P −1 AP (8.211)
The matrix Λ in Equation 8.211 will maintain the same eigenvalue format as a
proportionally damped system. Conversely, the eigenvector matrix P can take a different form from the proportionally damped case due to the nonuniqueness of the eigenvector P_i, as previously discussed.
In general, the submatrix P in the eigenvector matrix P is complex valued.
Equations 8.210 and 8.211 can therefore be used to define modal analysis in the 2n
complex modal domain. In this case, P is called the mode shape matrix. Note that, P
contains n vectors as expressed in Equation 8.192.
In this instance, there will be n sets of triples < p_i, ζ_i, ω_i >, along with the n sets of their complex conjugates. It is apparent that the damping ratio ζ_i and the natural frequency
ωi can be obtained through Equation 8.199. In this situation, the triple < pi, ζi, ωi >
and its complex conjugate define the ith complex mode.
The modal energy transfer ratio (ETR) ξi can be approximated as (Liang et al.
1992; Liang et al. 2012)
ξ_i = ln(ω_i / ω_{ni})    (8.213)
From the discussion of SDOF systems, we have established that the natural fre-
quency denotes the corresponding modal energy. Assume that an MDOF system is
originally undamped and then a particular type of damping is gradually added to
the system. If the damping remains proportional, then the natural frequency ωni will
remain unchanged.
Otherwise, it will be changed to ωi. In the event the natural frequency changes,
one of two events will occur. Either a certain amount of energy will be transferred
into the mode, when
ξ_i = E_{Ti} / (4πE_K) > 0    (8.214)
or, a certain amount of energy will be transferred out of the mode, when
ξ_i = E_{Ti} / (4πE_K) < 0    (8.215)
In Equations 8.214 and 8.215, ETi is the ith modal energy transferred during a
cycle and EK is the maximum conservative energy. Comparing this to the modal
damping ratio which relates to energy dissipation EDi yields
ζ_i = E_{Di} / (4πE_{Ki}) ≥ 0    (8.216)
The modal energy transfer ratio can be used in identifying whether a specific
mode is complex. Namely, a nonproportionally damped system may encompass
both complex and normal modes. This scenario cannot be distinguished through the
Caughey criterion because the Caughey criterion can only provide global judgment.
In a nonproportionally damped system, the natural frequency of the first complex
mode will always be greater than that of the undamped one, that is,
For the ith mode with ξi ≠ 0, the mode shape pi will be complex valued, that is,
p_i = [ p_{1i}; p_{2i}; …; p_{ni} ] = [ |p_{1i}| e^{jθ_{1i}}; |p_{2i}| e^{jθ_{2i}}; …; |p_{ni}| e^{jθ_{ni}} ]    (8.218)
Example 8.8
M = [1  0; 0  2] (kg),  C = [6  −3; −3  3] (N·s/m),  and  K = [50  −10; −10  30] (N/m)
Find the natural frequencies, modal energy transfer ratios, damping ratios, and
mode shapes.
Decoupled from the corresponding state matrix, we have eigenvalues −0.4158 ± 3.7208j and −3.3342 ± 6.2306j. The natural frequency of the first mode is ω_1 = |−0.4158 + 3.7208j| = 3.744 rad/s.
Note that the first and second columns are for the second mode, and the third and fourth columns are for the first mode, so the first mode shape is given by
[0.0242 ∓ 0.0850j, 0.0269 ∓ 0.2409j]T
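The eigenvalues quoted in this example can be reproduced directly from the state matrix; the following Python sketch uses the M, C, and K given above:

```python
import numpy as np

# Reproduce Example 8.8: state-matrix eigenvalues of the nonproportionally
# damped 2-DOF system with M, C, K as given in the example
M = np.diag([1.0, 2.0])
C = np.array([[6.0, -3.0], [-3.0, 3.0]])
K = np.array([[50.0, -10.0], [-10.0, 30.0]])

Minv = np.linalg.inv(M)
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-Minv @ K, -Minv @ C]])     # state matrix (Equation 8.221 form)

lam = np.linalg.eigvals(A)                 # two complex-conjugate pairs
omega = np.abs(lam)                        # omega_i = |lambda_i|, Equation 8.187
zeta = -lam.real / np.abs(lam)             # zeta_i = -Re(lambda_i)/omega_i, Eq. 8.188

print(np.sort_complex(lam))
# ~ -3.3342 -/+ 6.2306j and -0.4158 -/+ 3.7208j, as quoted in the text
```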
[ ẋ(t); ẍ(t) ] = [ 0  I; −M^{−1}K  −M^{−1}C ] [ x(t); ẋ(t) ] + [ 0; M^{−1} ] f(t)    (8.219)
Or

Ẏ(t) = A Y(t) + B f(t)    (8.220)
where
Y(t) = [ x(t); ẋ(t) ],  A = [ 0  I; −M^{−1}K  −M^{−1}C ]  and  B = [ 0; M^{−1} ]    (8.221)
where I and 0 are, respectively, identity and null submatrices with proper dimensions.
In this case, the state matrix can also be decoupled as indicated in Equation 8.210
so that the state equation can also be decoupled as mentioned previously.
We can have
U̇(t) = Λ U(t) + P^{−1} F(t),  U(0) = P^{−1} [ x(0); ẋ(0) ]    (8.224)
U(t) = e^{Λt} U(0) + ∫_0^t e^{Λ(t−τ)} P^{−1} F(τ) dτ    (8.225)
From Equation 8.223, we may write the response in the physical domain as
Y(t) = P U(t) = P e^{Λt} P^{−1} Y(0) + ∫_0^t P e^{Λ(t−τ)} P^{−1} F(τ) dτ    (8.227)
P e^{Λt} P^{−1} = e^{At}    (8.228)
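Equation 8.228 can be checked numerically for any diagonalizable state matrix; here the Example 8.8 system is reused as a convenient test case:

```python
import numpy as np
from scipy.linalg import expm

# Check P e^{Lambda t} P^{-1} = e^{A t} (Equation 8.228) on the Example 8.8
# state matrix; any diagonalizable A would do
M = np.diag([1.0, 2.0])
C = np.array([[6.0, -3.0], [-3.0, 3.0]])
K = np.array([[50.0, -10.0], [-10.0, 30.0]])
Minv = np.linalg.inv(M)
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-Minv @ K, -Minv @ C]])

lam, P = np.linalg.eig(A)            # A = P Lambda P^{-1}
t = 0.3
left = P @ np.diag(np.exp(lam * t)) @ np.linalg.inv(P)
right = expm(A * t)

print(np.max(np.abs(left - right)))  # differs only by round-off
```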
Y(t) = e^{At} Y(0) + ∫_0^t e^{A(t−τ)} F(τ) dτ    (8.229)
Note that
(sI − A)^{−1} = adj(sI − A) / det(sI − A) = adj(sI − A) / ∏_{i=1}^{2n} (s − λ_i)    (8.231)

Each entry of (sI − A)^{−1} can thus be expanded into partial fractions over the n eigenvalues λ_i and their complex conjugates λ_i^*.
For the steady-state response, the free decay solution decays to zero. We thus have
Y(t) = ∫_0^t e^{A(t−τ)} F(τ) dτ = ∫_{−∞}^t e^{A(t−τ)} F(τ) dτ = ∫_0^∞ e^{Aτ} F(t − τ) dτ    (8.232)
8.4.4.2 Mean
First consider the total response. Similar to the proportionally damped systems, the
vector of mean μY(t) is given by
μ_Y(t) = P e^{Λt} P^{−1} Y(0) + ∫_0^t P e^{Λτ} P^{−1} μ_F(t − τ) dτ    (8.233)
μ_Y(t) = e^{At} Y(0) + ∫_0^t e^{Aτ} μ_F(t − τ) dτ    (8.234)
8.4.4.3 Covariance
8.4.4.3.1 General Covariance
The covariance of the nonproportionally damped system is given by
σ_YY(t_1, t_2) = ∫_0^{t_1} dτ_1 ∫_0^{t_2} dτ_2 E[ {P e^{Λτ_1} P^{−1} [F(t_1 − τ_1) − μ_F(t_1 − τ_1)]} {P e^{Λτ_2} P^{−1} [F(t_2 − τ_2) − μ_F(t_2 − τ_2)]}^T ]    (8.236)
with the help of Equation 8.219, the above equation can be written as
σ_YY(t_1, t_2) = ∫_0^{t_1} dτ_1 ∫_0^{t_2} dτ_2 { P e^{Λτ_1} P^{−1} σ_FF(t_1 − τ_1, t_2 − τ_2) P^{−T} e^{Λτ_2} P^T }    (8.237)
σ_YY(t_1, t_2) = ∫_0^{t_1} dτ_1 ∫_0^{t_2} dτ_2 { e^{Aτ_1} σ_FF(t_1 − τ_1, t_2 − τ_2) e^{A^T τ_2} }    (8.238)
= E[ ∫_0^∞ e^{Aτ_1} F(t_1 − τ_1) dτ_1 { ∫_0^∞ e^{Aτ_2} F(t_2 − τ_2) dτ_2 }^T ]
= E[ ∫_0^∞ ∫_0^∞ e^{Aτ_1} F(t_1 − τ_1) {F(t_2 − τ_2)}^T e^{A^T τ_2} dτ_1 dτ_2 ]    (8.239)
= ∫_0^∞ ∫_0^∞ e^{Aτ_1} E[ F(t_1 − τ_1) {F(t_2 − τ_2)}^T ] e^{A^T τ_2} dτ_1 dτ_2
Equations 8.238 and 8.239 are essentially the same formula, because for the steady state the integration limits in Equation 8.238 can be extended to infinity. Now, if the force f(t) in F(t) = B f(t) is n-dimensional independent Gaussian, then
where
Example 8.9
Suppose a 3-DOF system is excited by a forcing function f(t) = w(t) [g1 g2 g3]T,
where gi is the amplitude of force fi(t) applied on the ith mass, and w(t) is a white
noise process with PSD equal to S0.
The matrix D δ(τ) can be written as diag(d_i)_{n×n} δ(τ), with d_i = 2πS_0 g_i^2.
When t1 = t2, substitution of Equation 8.240 into Equation 8.239 yields
σ_YY(t_1, t_1) = ∫_0^∞ ∫_0^∞ e^{Aτ_1} B D δ(τ_2 − τ_1) B^T e^{A^T τ_2} dτ_1 dτ_2 = ∫_0^∞ e^{Aτ} B D B^T e^{A^T τ} dτ = σ_YY(0)    (8.242)
e^{Aτ} B D B^T e^{A^T τ} = G(τ)    (8.243)
dG(τ)/dτ = A e^{Aτ} B D B^T e^{A^T τ} + e^{Aτ} B D B^T e^{A^T τ} A^T = A G(τ) + G(τ) A^T    (8.244)
The integral of Equation 8.244, with the limit from 0 to ∞, can be written as
∫_0^∞ (dG(τ)/dτ) dτ = G(∞) − G(0) = A ∫_0^∞ G(τ) dτ + ∫_0^∞ G(τ) dτ A^T    (8.245)
It is seen that
G(∞) = 0 (8.246)
and
G(0) = B D B^T    (8.247)
Therefore, we have

A σ_YY(0) + σ_YY(0) A^T = −B D B^T    (8.248)
σ_YY(0) = E[Y(t) Y^T(t)] = E[ x x^T  x ẋ^T; ẋ x^T  ẋ ẋ^T ] = [ σ_xx  σ_xẋ; σ_ẋx  σ_ẋẋ ]    (8.249)
σ xx
Note that

σ_xx = σ_xx^T    (8.250)

σ_ẋẋ = σ_ẋẋ^T    (8.251)

and

σ_xẋ = σ_ẋx^T    (8.252)
With the help of Equations 8.250 through 8.252, and substituting Equation 8.249 into Equation 8.248, these four partition submatrices may be written as

σ_xẋ + σ_ẋx = 0    (8.253)
From Equations 8.252 and 8.253, we can see that the diagonal entries of the matrices σ_xẋ and σ_ẋx are zeros.
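This property can be verified numerically. The sketch below solves the stationary Lyapunov equation A σ_YY + σ_YY A^T = −B D B^T for the Example 8.8 system, with an assumed white noise of unit intensity on the first mass, and checks that the diagonal of the displacement–velocity partition vanishes:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Example 8.8 system; the excitation (unit-intensity white noise on mass 1)
# is an illustrative assumption
M = np.diag([1.0, 2.0])
C = np.array([[6.0, -3.0], [-3.0, 3.0]])
K = np.array([[50.0, -10.0], [-10.0, 30.0]])
Minv = np.linalg.inv(M)
A = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-Minv @ K, -Minv @ C]])   # state ordering Y = [x; xdot]
B = np.vstack([np.zeros((2, 2)), Minv])
D = np.diag([1.0, 0.0])                  # force intensity matrix

# Solve A*sigma + sigma*A^T = -B*D*B^T for the stationary covariance
sigma = solve_continuous_lyapunov(A, -B @ D @ B.T)

sigma_xv = sigma[:2, 2:]                 # the sigma_{x xdot} partition
print(np.diag(sigma_xv))                 # ~ [0, 0], as stated above
```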
σ_ẋẋ − σ_xx K^T M^{−T} − σ_xẋ C^T M^{−T} = 0    (8.254)
M^{−1} K σ_xx − M^{−1} C σ_xẋ − σ_ẋẋ = 0    (8.255)
σ_xx = σ_ẋẋ M^T K^{−T} − σ_xẋ C^T K^{−T}    (8.258)

or

σ_xx = K^{−1} M σ_ẋẋ − K^{−1} C σ_ẋx    (8.259)
Therefore, σ_xx can be obtained through σ_ẋẋ and σ_xẋ and the corresponding matrix products.
It is noted that the mass, damping, and stiffness matrices are symmetric. With the
help of Equations 8.258 and 8.259, we can write
C σ_xẋ K + K σ_xẋ C + M σ_ẋẋ K − K σ_ẋẋ M = 0    (8.260)

K σ_xẋ M − M σ_xẋ K + M σ_ẋẋ C + C σ_ẋẋ M = D    (8.261)
Equations 8.260 and 8.261 involve a total of (3n² + n)/2 independent unknowns, and they provide (3n² + n)/2 independent equations. Therefore, the 2n × 2n matrix σ_YY(0) is solvable.
Example 8.10
D = [ S_0  0; 0  0 ]
Denote

σ_xx = [ σ_11  σ_12; σ_12  σ_22 ],  σ_xẋ = [ 0  −σ_23; σ_23  0 ],
σ_ẋx = [ 0  σ_23; −σ_23  0 ],  σ_ẋẋ = [ σ_33  σ_34; σ_34  σ_44 ]
Substitution of the above equations into Equations 8.260 and 8.261 results in
the following four equations
as well as
Specifically, denote

w_1 = √(k_1/m_1),  w_2 = √(k_2/m_2),  ζ_1 = c_1/(2√(k_1 m_1)),  ζ_2 = c_2/(2√(k_2 m_2)),  μ = m_2/m_1  and  r = w_2/w_1
in which wi and ζi are used for mathematical convenience: they are not exactly
the natural frequency and damping ratio. The above equations can be replaced by
We therefore have
σ_33 = −A w_1^2 { μ r^3 ζ_1 + ζ_2 [1 − 2r^2 + (1 + μ) r^4] + 4 r ζ_1 ζ_2^2 (1 + r^2) + 4 r^2 ζ_1^2 ζ_2 + 4 r^2 ζ_2^3 }
σ_34 = −A r^2 w_1^2 { ζ_2 [(1 + μ) r^2 − 1] + 4 r ζ_1 ζ_2^2 (1 + r^2) + 4 ζ_2^3 }

and

σ_44 = −A r^2 w_1^2 { r ζ_1 + ζ_2 (1 + μ) r^2 + 4 r ζ_1 ζ_2^2 + 4 ζ_2^3 }
where

A = (S_0/m_1) / (4 w_1^3 B)

and

B = μ r^3 ζ_1^2 + μ r ζ_2^2 + [1 − 2r^2 + (1 + μ)^2 r^4] ζ_1 ζ_2 + 4 r^2 ζ_1 ζ_2 [ζ_1^2 + (1 + μ)^2 ζ_2^2] + 4 r ζ_1^2 ζ_2^2 [1 + (1 + μ) r^2]
Substituting the above solutions into Equation 8.256, we can finally solve for σ_11, σ_12, and σ_22, as given below:
(d/dt) f(t) = −η f(t) + Γ w(t)    (8.262)
where
Equations 8.262 together with the state equation of the motion given by Equation
8.219 can be combined as
(d/dt) [ x(t); ẋ(t); f(t) ] = [ 0  I  0; −M^{−1}K  −M^{−1}C  M^{−1}; 0  0  −η ] [ x(t); ẋ(t); f(t) ] + [ 0; 0; Γ ] w(t)    (8.264)
where I and 0 are, respectively, identity and null submatrices with proper dimensions.
Denoting
x(t ) 0 I 0 0
Z(t ) = x (t ) , A = −1
−M K − M −1C M −1 and B = 0 (8.265)
f (t ) 0 0 −η Γ
σ_ZZ = E[Z(t) Z^T(t)] = E[ x x^T  x ẋ^T  x f^T; ẋ x^T  ẋ ẋ^T  ẋ f^T; f x^T  f ẋ^T  f f^T ]    (8.267)
Because w(t) is white noise, we can directly use the result obtained in the above
subsection, that is,
A σ_ZZ + σ_ZZ A^T = −B D B^T    (8.268)
where D = diag(S_{0i}), and S_{0i} is the corresponding magnitude of the PSD of the white noise applied at the ith location.
8.4.4.4 Brief Summary
Nonproportional damping is caused by the different distributions of damping and stiffness, which is common in practical engineering applications. In this circumstance, some or all of the mode shapes become complex valued. In addition, we will witness nonzero modal energy transfer ratios.
Nonproportionally damped structures will not have principal axes, which may
alter the peak responses of a structure under seismic ground excitations and, in many
cases, the peak responses will be enlarged (Liang and Lee 1998). In this subsection,
how to account for nonproportionally damped systems is discussed. The basic idea is
to use the state space and the state matrix (Gupta and Jaw 1986; Song et al. 2007a,b).
8.5 Modal Combination
8.5.1 Real Valued Mode Shape
8.5.1.1 Approximation of Real Valued Mode Shape
The following computations of variance and RMS are based on modal combinations.
The modes can be either normal or complex. For normal modes, ϕji represents the jth
element of the ith mode shape.
φ_{ji} = (−1)^{δ_{ji}} |p_{ji}|    (8.269)
where
δ_{ji} = { 0,  −π/2 < θ_{ji} ≤ π/2;  1,  π/2 < θ_{ji} ≤ 3π/2 }    (8.270)
As a result, the newly simplified ith mode shape vector can be written as
φ_i = [φ_{1i}  φ_{2i}  …  φ_{ni}]^T    (8.271)
rank(Φ) = n (8.273)
Ψ = Φ−1 (8.274)
In this case, any n × 1 vector can be represented by matrix Φ. Now suppose there
is a vector y, such that
y = [y_1  y_2  …  y_n]^T    (8.275)
y = a_1 φ_1 + a_2 φ_2 + … + a_n φ_n    (8.276)
or
y ≈ΦC aC (8.279)
and
a_C = [a_1  a_2  …  a_m]^T    (8.281)
In the above case, the parameters a_i can be determined through a least-squares approach. Explicitly, denote
and let
∂(e^T e)/∂a_i = 0,  i = 1, …, m    (8.284)
a_C = (Φ_C^T Φ_C)^{−1} Φ_C^T y    (8.285)
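Equation 8.285 is the standard normal-equations solution; a small Python sketch (with randomly generated stand-in mode shapes, not values from the text) illustrates it alongside the numerically preferable `lstsq` equivalent:

```python
import numpy as np

# Least-squares fit a_C = (Phi_C^T Phi_C)^{-1} Phi_C^T y  (Equation 8.285)
rng = np.random.default_rng(0)
PhiC = rng.standard_normal((6, 2))     # n=6 locations, m=2 retained shapes
a_true = np.array([1.5, -0.5])
y = PhiC @ a_true                      # a vector lying in the span of Phi_C

aC = np.linalg.inv(PhiC.T @ PhiC) @ PhiC.T @ y   # normal equations
aC_lstsq = np.linalg.lstsq(PhiC, y, rcond=None)[0]  # better conditioned route

print(aC, aC_lstsq)                    # both recover [1.5, -0.5]
```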
8.5.2 Numerical Characteristics
In the following examples, general references can be found in the book of Wirsching
et al. (1995).
8.5.2.1 Variance
Through modal analysis, the variance of randomly forced response can be estimated by
S
σ 2X j = D(x j ) = D
∑ φ q
i =1
ji i (8.286)
σ_{X_j} = √( Σ_{i=1}^{S} D(φ_{ji} q_i) )    (8.287)
or

σ_{X_j} = √( Σ_{i=1}^{S} (φ_{ji})^2 σ_i^2 )    (8.288)
In this instance, σ i2 is the mean square value of the ith modal response qi(t).
ρ = ±1    (8.289)

Equation 8.289 is used to imply the case of linear dependency. Here, q_i(t) and q_k(t) are fully correlated, meaning that they vary with the same pattern.
σ_{X_j} = Σ_{i=1}^{S} φ_{ji} σ_i    (8.290)
σ_{X_j} = φ_{j1} σ_1 + √( Σ_{i=2}^{S} (φ_{ji} σ_i)^2 )    (8.291)
Closed-space modes:

σ_{X_j} = Σ_{i=1}^{Z} φ_{ji} σ_i + √( Σ_{i=Z+1}^{n} (φ_{ji} σ_i)^2 )    (8.292)
σ_{X_j} = √( Σ_{p=1}^{H} ( Σ_{i=1}^{p} φ_{ji} σ_i )^2 + Σ_{i=S+1}^{n} (φ_{ji} σ_i)^2 )    (8.293)
In this instance, H is the number of sets of equal or close eigenvalues and p is the
number of close modes in a given set (Richard et al. 1988).
σ_{X_j} = √( Σ_{i=1}^{n} Σ_{k=1}^{n} σ_{ji} ρ_{ik} σ_{kj} )    (8.294)
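The combination rules above can be compared on a small set of assumed modal data; with ρ_ik = δ_ik, the double-sum rule of Equation 8.294 reduces to the SRSS value, and the absolute-sum rule bounds both:

```python
import numpy as np

# Illustrative modal data (assumed, not from the text)
phi_j = np.array([0.8, 0.5, 0.2])   # phi_ji for one location j, 3 modes
sig = np.array([2.0, 1.0, 0.5])     # modal RMS values sigma_i

s = phi_j * sig
srss = np.sqrt(np.sum(s ** 2))      # square root of sum of squares, Eq. 8.288
absum = np.sum(np.abs(s))           # absolute-sum rule (upper bound), Eq. 8.290

# Double-sum rule of Equation 8.294 with an assumed correlation matrix rho;
# rho = I (uncorrelated modes) recovers the SRSS result
rho = np.eye(3)
dsum = np.sqrt(s @ rho @ s)

print(srss, absum, dsum)            # srss <= absum, and dsum == srss here
```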
and
In the event that the ith and kth modes are normal, then
r = ω_{nk} / ω_{ni},  k > i    (8.297)
Problems
1. A dynamic absorber can be used to reduce the vibrations of an SDOF sys-
tem subjected to sinusoidal excitation f(t) = F0cos(ωt). Shown in Figure
P8.1, the blue m–k system is the primary SDOF system and the red ma and
ka are additional mass and stiffness. Therefore, with the additional mass,
the system becomes 2-DOF.
Denote ωp = (k/m)1/2, ωa = (ka /ma)1/2, and μ = ma /m.
a. Show that the dynamic magnification factor βdyn for the primary dis-
placement can be written as
β_dyn = Xk/F_0 = (1 − ω^2/ω_a^2) / { [1 + μ(ω_a^2/ω_p^2) − ω^2/ω_p^2](1 − ω^2/ω_a^2) − μ(ω_a^2/ω_p^2) }
b. Suppose m = 6000 kg and the resonant driving frequency = 60 Hz. The
mass of the absorber is chosen to be 1200 kg. Determine the range of
frequencies within which the displacement x(t) is less with the addi-
tional mass than without the additional mass.
2. Given
4 3 −2 0 0
M=
4.6 ,C = −2 4 −2 , and
3 −2 5 −3
5 −3 5
500 −200
− −400
K = 200 600
−400 550 −150
−150 5000
Figure P8.1 Primary mass m (displacement x(t), force f(t)) supported by springs k/2 and k/2, with absorber mass m_a (displacement x_a) attached through spring k_a.
C = [0  0  0  0; 0  0  0  0; 0  0  0  0; 0  0  0  c]
3. With the mass, damping, and stiffness matrices given in Problem 2, use MATLAB to generate 30 random ground motions: t = (0:1:1,999) * 0.01; xga = randn(2000,30), with zero initial conditions. If, on average, the displacement x_4 needs to be reduced by 30%, how should you choose the additional stiffness ΔK, namely, find k in

ΔK = [0  0  0  0; 0  0  0  0; 0  0  0  0; 0  0  0  k]
4. A 2DOF system is shown in Figure P8.2 with a white noise force applied on mass 1. Knowing m_1 = m_2 = 1, k_1 = k_2 = 100, c_1 = c_2 = 2, find the equation of motion and the transfer function; use the normal mode method to find the mean and covariance of the displacement.
5. The system shown in Figure P8.2 is excited by ground white noise motion,
where m1 = m2 = 1, k1 = k2 = 100, c1 = 18 and c2 = 0. Find the transfer
Figure P8.2 Two masses m_1 and m_2 connected in series through spring–damper pairs (k_1, c_1) and (k_2, c_2).
function. Calculate the mean and covariance by using the complex mode
method.
6. Derive a general formula of mean value for a nonproportional system under
excitation of random initial conditions only.
7. In the system given by Figure P8.3, c_1 and c_2 are both zero; k_1 and k_2 are, respectively, 500 and 200; m_1 = 1500, and m_2 = 50 + Δm, where Δm is a random variable with the following distribution:

Δm: 0, 30, 60, 200
p: 1/3, 2/6, 1/6, 1/6
Suppose the ground excitation is white noise with PSD = 1, find the dis-
tribution of the RMS absolute acceleration of m2.
8. Prove that for proportionally damped systems with white noise excitations,
the first entry of σXX(t,t) can be calculated as
(φ_11^4 D_1 + φ_11^2 φ_21^2 D_2)/(ω_{d1}^2 m_1^2) ∫_0^∞ e^{−2ζ_1ω_{n1}(t−τ)} sin^2 ω_{d1}(t−τ) dτ
+ (2φ_11^2 φ_12^2 D_1 + 2φ_11 φ_12 φ_21 φ_22 D_2)/(ω_{d1} ω_{d2} m_1 m_2) ∫_0^∞ e^{−(ζ_1ω_{n1}+ζ_2ω_{n2})(t−τ)} sin ω_{d1}(t−τ) sin ω_{d2}(t−τ) dτ
+ (φ_12^4 D_1 + φ_12^2 φ_22^2 D_2)/(ω_{d2}^2 m_2^2) ∫_0^∞ e^{−2ζ_2ω_{n2}(t−τ)} sin^2 ω_{d2}(t−τ) dτ
Figure P8.3 Two-story system: mass m_2 (displacement x_2) above mass m_1 (displacement x_1), connected through (k_2, c_2) and (k_1, c_1) to the ground motion x_g.
C = [2  −1  0; −1  3  −2; 0  −2  2]  and  K = [150  −50  0; −50  100  −50; 0  −50  70]
is excited by forcing function
f(t) = [0; 2; 1.5] w(t)
where w(t) is a white noise with PSD = 10. Calculate the covariance matrix
σYY (0).
Section IV
Applications and Further Discussions
9 Inverse Problems
In this chapter and Chapters 10 and 11, we present several topics by applying the
knowledge gained previously. These topics do not cover the total applications of
random process and random vibrations. They may be considered as “practical”
applications and utilizations of the concept of the random process. We will also
discuss methods to handle engineering problems that are difficult to be treated as
closed-form mathematical models and/or difficult to be approximated as stationary
processes.
Inverse problems are a relatively broad topic. The field of inverse problems was first introduced by Viktor Ambartsumian (1908–1996). The inverse
problem is a general framework used to convert measured data into information
about a physical object or system, which has broad applications. One of the difficul-
ties in the solution of inverse problems is the existence of measurement noise.
In other words, when working with inverse problems, both random variables as well
as random processes must be considered. In this chapter, inverse problems related to
vibration systems will be briefly outlined. Additionally, key issues in system identifi-
cations as well as vibration testing will be discussed. Special emphasis will be given
to measurement uncertainties.
For more detailed description of inverse problems, readers may consult the works
of Chadan and Sabatier (1977), Press et al. (2007), and Aster et al. (2012).
9.1.1.1 Key Issues
The key issues involved in inverse problems are listed as follows.
9.1.1.1.3 Testing
To solve an inverse problem, conducting vibration testing is often needed. In general,
vibration testing includes (1) actuation and measurement, (2) S/N ratios, (3) data
management and analysis, and (4) test repeatability and reliability.
9.1.1.2 Error
In solving inverse problems, a range of errors are inevitable. It is essential that these
errors be reduced. To rigorously define errors is difficult. Therefore, errors can
approximately be classified as follows:
The action needed to improve the procedure is dependent on the type of error that
occurred. In the following, the nature of errors and the corresponding improvement
will be discussed as it mainly relates to random problems.
9.1.1.3 Applications
Inverse problems have multiple applications such as
1. System identification
2. Trouble shooting
3. Design modification
4. Model confirmation
9.1.2.1 Modeling
Quite often, for one natural phenomenon, there may be more than one model. For
example, suppose that 50 random data values y are measured and indexed from 1 to
50 by the variable x. It can be assumed that the relationship between y and x is linear
or of first order, i.e., y = ax + b. Through the first-order regression, the parameters a
and b can be determined. The quadratic form or second order, i.e., y = ax2 + bx + c,
can also be used, finding the parameters a, b, and c. This can be repeated for the third
order, fourth order, and so on.
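The regression experiment described here is easy to repeat; the sketch below uses synthetic stand-in data (not the figure's actual measurements) and shows that the residual error always shrinks with the order, even though the fitted curves need not converge to each other:

```python
import numpy as np

# Synthetic stand-in for the 50 measured values y indexed by x
rng = np.random.default_rng(1)
x = np.arange(1, 51, dtype=float)
y = np.sin(x / 8.0) + 0.3 * rng.standard_normal(x.size)   # "measured" data

# First- through fourth-order least-squares polynomial regressions
fits = {order: np.polyfit(x, y, order) for order in (1, 2, 3, 4)}
models = {order: np.polyval(c, x) for order, c in fits.items()}

# Residual sum of squares is nonincreasing in the order (nested models),
# which is NOT the same as the models converging to one another
rss = {order: float(np.sum((y - m) ** 2)) for order, m in models.items()}
print(rss)
```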
In Figure 9.1a, plots of the original model and several regressed models are shown,
including the first-order (linear) regression, the second-order (quadratic) regression,
and the third-order regression. While the models are dissimilar, all of them regressed
using the same original data. In Figure 9.1a, it is seen that the second- and third-
order regressions are rather close. This, however, does not necessarily mean that
when the regression order is chosen to be sufficiently high, the models will converge.
(Panels: (a) original data with first-, second-, and third-order regressions; (b) original data with first-, second-, and fourth-order regressions; data index 0 to 50.)
Figure 9.1 Directly measured data and regressed models. (a) Regression including third-
order approach. (b) Regression including fourth-order approach.
In Figure 9.1b, the same original data are shown with the first- and second-order regressions. However, instead of the third-order regressed model, the fourth-order model is plotted. Rather than being close to the second-order regression model, the fourth-order model turns out to be significantly different.
The above example indicates that one should be very careful when using an a priori model.
9.1.2.4 Simulations
Simulation technologies are also based on inverse problems, although a simulation is
generally considered a forward problem.
Numerical simulations and physical simulations are two basic approaches. In gen-
eral, the success of a simulation will not depend solely upon the stability and effec-
tiveness of computational algorithms and the precision of test apparatus; it will also
depend on the accuracy of the models.
9.1.2.5 Practical Considerations
In engineering applications, the following issues are important.
9.1.3.1 General Description
The first inverse problem is often solved through a two-step approach. The first step
is to obtain the transfer function. The second step is to extract the modal or physical
model through the transfer function.
Figure 9.2 repeats the input–system–output relationship, which is the fundamental framework of the inverse problem of systems.
The transfer function can be calculated in one of two ways. First, it can be obtained as the ratio of the Fourier transforms of the output and the input:

H(\omega) = \frac{X(\omega)}{F(\omega)} (9.1)
As noted earlier, the transfer functions can also be obtained through the power spectral density functions, in the forms of H1(ω) and H2(ω).
Here, the uppercase letters represent the Fourier transforms. The temporal pairs
of the Fourier transforms are not necessarily random.
Equation 9.1, the definition of the transfer function, is seldom used in practice, especially in the case of random excitations. The more practical choice is the method based on power spectral density functions, such as H1 and H2, since they provide more accurate and stable estimations.
[Figure 9.2: Input f(t), F(ω) → System h(t), H(ω) → Output x(t), X(ω)]

Extraction of modal and/or physical parameters requires certain in-depth knowledge, which is beyond the scope of random processes and vibration. Interested readers may consult the work of Ewins (2000) or He and Fu (2004) for more detailed descriptions. In
this chapter, a list of fundamental formulas in the frequency and time domains will be
provided. The most commonly used method for estimations of transfer functions is first
summarized.
9.1.3.2 Impulse Response
Consider the impulse response for the single degree-of-freedom (SDOF) system:
Now consider the MDOF system with an input at the jth location that is measured
at the ith location:
Note that the impulse response function is a normalized response x(t) with respect
to the amplitude of the impact force given by
h(t) = \frac{x(t)}{f_{\max}} (9.6)
9.1.3.3 Sinusoidal Response
For sinusoidal excitation, f(ω,t) = f0 sin(ωt) with sweeping frequency ω, the transfer
function for the SDOF system is given by
H(\omega) = \frac{x(\omega, t)}{f(\omega, t)} (9.7)

and, for the MDOF system,

H_{ij}(\omega) = \frac{x_i(\omega, t)}{f_j(\omega, t)} (9.8)
9.1.3.4 Random Response
Again, considering random excitations, repeat the process discussed previously as
follows (see Equation 4.35) in order to practically measure the transfer functions.
and
In this instance, the lowercase letters represent the temporal functions or mea-
sured values in the physical domain. Note that these temporal functions are taken
from random sets; once measured, they become deterministic “realizations.”
and
Also
and furthermore
H_1(\omega, T) = \frac{\sum_{k=1}^{n} X_k(\omega, T)\,F_k^*(\omega, T)}{\sum_{k=1}^{n} \left|F_k(\omega, T)\right|^2} (9.15)

and

H_2(\omega, T) = \frac{\sum_{k=1}^{n} \left|X_k(\omega, T)\right|^2}{\sum_{k=1}^{n} F_k(\omega, T)\,X_k^*(\omega, T)} (9.16)
In both cases, n should be a fairly large number to effectively reduce the noise
contaminations. For most cases,
n > 30 (9.17)
suffices.
The ratio of the two estimates defines the coherence function:

\gamma_{FX}^2(\omega) = \frac{H_1(\omega, T)}{H_2(\omega, T)} (9.18)
In the following, for the sake of simplicity, we will omit the notation T. However, it
is noted that, practically, we will use Equation 9.18 for transfer function measurement.
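At a single frequency line, the estimators of Equations 9.15, 9.16, and 9.18 can be sketched as follows (pure-Python sketch, ours; the record spectra F_k and X_k are assumed to be complex numbers already obtained by Fourier transform):

```python
# Sketch of the H1/H2 transfer function estimators and the coherence
# ratio of Equation 9.18, at one frequency line, from n paired records.

def h1_h2(F, X):
    """Equations 9.15 and 9.16 at a single frequency."""
    h1 = sum(x * f.conjugate() for f, x in zip(F, X)) / \
         sum(abs(f) ** 2 for f in F)
    h2 = sum(abs(x) ** 2 for x in X) / \
         sum(f * x.conjugate() for f, x in zip(F, X))
    return h1, h2

def coherence(F, X):
    """gamma^2 = H1/H2; equals 1 when the data are noise free."""
    h1, h2 = h1_h2(F, X)
    return h1 / h2
```

For noise-free records X_k = H·F_k, both estimators return H exactly and the coherence is 1; measurement noise makes the coherence deviate from 1, which is why it serves as a quality check.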
9.1.3.5 Modal Model
Modal analysis refers to the extraction of the modal parameters from the measured transfer functions.
Here, f(.) stands for a function of (.), ωi and ζi are respectively the ith natural fre-
quency and damping ratio of the system. The ith mode shape ϕi can be determined
from the complex-valued amplitude of Hi(ω), where
H_i(\omega_i) = \begin{bmatrix} H_{1i}(\omega_i) \\ H_{2i}(\omega_i) \\ \vdots \\ H_{ni}(\omega_i) \end{bmatrix} (9.20)
Furthermore,
ϕi = aiHi(ωi) (9.21)
A = \dot{Y}\,Y^{-1} (9.22)

where

Y = \begin{bmatrix} x(t_1) & x(t_2) & \cdots & x(t_{2n}) \\ \dot{x}(t_1) & \dot{x}(t_2) & \cdots & \dot{x}(t_{2n}) \end{bmatrix} (9.23)

and

\dot{Y} = \begin{bmatrix} \dot{x}(t_1) & \dot{x}(t_2) & \cdots & \dot{x}(t_{2n}) \\ \ddot{x}(t_1) & \ddot{x}(t_2) & \cdots & \ddot{x}(t_{2n}) \end{bmatrix} (9.24)
where x(t1) is the displacement measured at the first time point, etc.
For normal modes, the generalized damping and stiffness matrices M^{-1}C and M^{-1}K can be found from the state matrix. Furthermore, from the eigenvalues λ_i of the state matrix, the natural frequencies ω_i and the damping ratios can be calculated; in particular,
ζi = −Re(λi)/ωi (9.27)
From the eigenvector Pi, the mode shape pi can be calculated from:
P_i = \begin{bmatrix} \lambda_i p_i \\ p_i \end{bmatrix} (9.28)
z(t_1) = \begin{bmatrix} z_1(t_1) \\ z_2(t_1) \\ \vdots \\ z_m(t_1) \end{bmatrix}_{m \times 1} (9.29)
where m is the total number of measurement locations, and z_i(t) is a generic term for displacement, velocity, or acceleration. Note that we use lowercase letters to denote measured values, including those measured from random signals, and we continue to use uppercase letters to denote generic random sets.
Construct two Hankel matrices:
\tilde{Y} = \begin{bmatrix} z(t_2) & z(t_3) & \cdots & z(t_{q+1}) \\ z(t_3) & z(t_4) & \cdots & z(t_{q+2}) \\ \vdots & \vdots & & \vdots \\ z(t_{p+1}) & z(t_{p+2}) & \cdots & z(t_{p+q}) \end{bmatrix}_{(mp) \times q} (9.30)
and
Y = \begin{bmatrix} z(t_1) & z(t_2) & \cdots & z(t_q) \\ z(t_2) & z(t_3) & \cdots & z(t_{q+1}) \\ \vdots & \vdots & & \vdots \\ z(t_p) & z(t_{p+1}) & \cdots & z(t_{p+q-1}) \end{bmatrix}_{(mp) \times q} (9.31)
Here,
where Δt is the sampling time interval. Integers p and q are such that
mp ≥ 2n (9.33)
and
q ≥ mp (9.34)
\tilde{Y}\,Y^{+} = \exp(A\Delta t) (9.35)
and
Based on Equation 9.36, the natural frequencies and damping ratios can be calculated. However, the eigenvectors of the matrix exp(AΔt) do not contain the full information of the mode shapes, unless
m ≥ n (9.37)
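As a numerical illustration of the eigenvalue relations above (ours, not from the book): for an SDOF system, the state matrix A = [[0, 1], [−ω₀², −2ζω₀]] has eigenvalues whose magnitude is ω₀ and which satisfy Equation 9.27:

```python
# Sketch: recover the natural frequency and damping ratio of an SDOF
# system from a state-matrix eigenvalue, per Equation 9.27.
import cmath

def sdof_modal(w0, zeta):
    # Roots of the characteristic equation
    # lambda^2 + 2*zeta*w0*lambda + w0^2 = 0
    # of the state matrix A = [[0, 1], [-w0**2, -2*zeta*w0]].
    disc = cmath.sqrt((2 * zeta * w0) ** 2 - 4 * w0 ** 2)
    lam = (-2 * zeta * w0 + disc) / 2
    w_est = abs(lam)              # |lambda| = w0 for an underdamped system
    z_est = -lam.real / w_est     # Equation 9.27
    return w_est, z_est
```

For ω₀ = 10 rad/s and ζ = 0.05, the routine returns the exact values back, which is the forward check of the identification step.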
9.1.4.1 General Background
From Equation 9.1, the following can be written:
Since

\sigma_F^2 = \int_{-\infty}^{\infty} S_F(\omega)\,\mathrm{d}\omega (9.40)

it follows that

\sigma_F^2 = \int_{-\infty}^{\infty} \left|H(\omega)\right|^{-2} S_X(\omega)\,\mathrm{d}\omega (9.41)
In most cases, the above integral of Equation 9.41 does not exist. Therefore,
Equation 9.41 cannot be directly used. However, in selected cases, the autopower
spectral density functions do exist.
9.1.4.2 White Noise
Consider the case of white noise by recalling Equation 7.50. The white noise input
can be written as
S_F = \frac{kc\,\sigma_X^2}{\pi} (9.42)
9.1.4.3 Practical Issues
9.1.4.3.1 Sampling Frequency and Cutoff Band
To satisfy the Nyquist sampling theorem, the signal must be low-pass filtered and the
cutoff frequency ωc or fc is given by
\omega_c = \frac{1}{2}\,\omega_S (9.43)

and

f_c = \frac{1}{2}\,f_S (9.44)
x(t) = x(\omega_c, T) = \mathcal{F}^{-1}\left[X(\omega_c, T)\right] (9.45)
\sigma_F^2(f) = \int_0^{f_C} \left|H(f)\right|^{-2} W_X(f)\,\mathrm{d}f (9.46)
\left|H(\omega)\right|^{-2}\Big|_{\omega_c} = \left[(k - m\omega^2)^2 + (c\omega)^2\right]_{\omega_c} = \left[(k - 4\pi^2mf^2)^2 + 4\pi^2(cf)^2\right]_{f_c} (9.47)
9.2.1.1 Maximum Likelihood
In estimating statistical parameters, there are certain criteria that are used to deal
with random variables and processes based on the condition that all possible vari-
ables in a given set will not be exhausted. It must be decided under what condition
the estimation will be satisfied. Maximum likelihood estimation (MLE) provides
a commonly used criterion. It had been used earlier by Gauss and Laplace, and
was popularized by Fisher between 1912 and 1922. Reviews of the development
of maximum likelihood have been provided by a number of authors (for instance,
see LeCam 1990). This method determines the parameters that maximize the probability, or likelihood, of the sample data. MLE is considered robust, with minimal exceptions, and yields estimators with good statistical properties. ML estimators are versatile, applicable to most models and types of data, and efficient for quantifying uncertainty using confidence bounds. In this section, the focus will be on MLE; for comparison, the method of moments, one of the estimation methods other than MLE, will also be briefly discussed.
\bar{x} = \frac{1}{n}\sum_{j=1}^{n} x_j (9.48)
In Equation 9.48, \bar{x} is the first moment about the origin of all the samples [x_j]. Generally, the parameter to be estimated is simply a certain moment, such as the mean or the RMS value. This simple way of estimating an unknown parameter through sample moments is referred to as moment estimation.
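As a minimal illustration of moment estimation (a pure-Python sketch, ours):

```python
# Sketch: estimate the mean and variance from the first and second
# moments about the origin, as in Equation 9.48 and Example 9.1.

def moment_estimates(samples):
    n = len(samples)
    mu_hat = sum(samples) / n                 # first moment: x_bar
    m2 = sum(x * x for x in samples) / n      # second moment about origin
    var_hat = m2 - mu_hat ** 2                # sigma^2 = E[X^2] - mu^2
    return mu_hat, var_hat
```

For the samples (1, 2, 3, 4) this returns a mean of 2.5 and a variance of 1.25.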
Example 9.1
Suppose X ~ N(μ, σ), where the mean and the standard deviation μ and σ, respec-
tively, are unknown and need to be estimated.
Consider the first and second moments about the origin:
\hat{\mu} = E[X] = \bar{x}

and

\hat{\mu}^2 + \hat{\sigma}^2 = E[X^2] = \overline{x^2}
In the above, and hereafter, the overhead symbol "ˆ" represents the estimated value.
Therefore, Equation 9.48 can be used to estimate the mean µ̂ and further
calculate
\hat{\sigma}^2 = \overline{x^2} - \hat{\mu}^2 = \frac{1}{n}\sum_{j=1}^{n} x_j^2 - \bar{x}^2 = S^2

and

\hat{\sigma} = \sqrt{S^2} = S
Here, S^2 and S stand for the sample variance and standard deviation, respectively. Equation 9.48 provides the basic approach of averaging. It is noted that \bar{x} is the sample mean of all measured x_j, which are the samples taken from the random set X. Typically, n samples will be taken, and the total number of variables of X will be much larger than n. Therefore, a reasonable question is whether \bar{x} can be used to estimate the mean value of all variables in X. Besides the mean value, the variance and the moments will also be considered. An additional reasonable question is whether there is any bias in these parameter estimations based on the average described in Equation 9.48 or, more generally, in the moment estimation.
These questions are analyzed in the following.
f_{X_1 \cdots X_n}(x_1, \ldots, x_n, p_1) = \prod_{j=1}^{n} f_{X_j}(x_j, p_1) (9.49)

In Equation 9.49, p_1 is the parameter to be determined. Under the condition that the unknown parameter is indeed p_1, and that x_1, x_2, …, x_n are independent, Equation 9.49 takes the form of the total product of the f_{X_j}(x_j, p_1).
Example 9.2
L(p_1) = \prod_{j=1}^{n} f_{X_j}(x_j, p_1) (9.50)
Table 9.1
Calculated Chance of Making Unusable Specimens

p      P(x_1 = 1, x_2 = 1, x_3 = 0, x_4 = 0, x_5 = 0) = p^2(1 − p)^3
0.2    0.2^2 × 0.8^3 = 0.02048
0.4    0.4^2 × 0.6^3 = 0.03456
0.6    0.6^2 × 0.4^3 = 0.02304
0.8    0.8^2 × 0.2^3 = 0.00512
\frac{\mathrm{d}\ln(L)}{\mathrm{d}p_1} = 0 (9.52)
Note that the vector p_1 may contain n elements; explicitly,

p_1 = \begin{bmatrix} p_1 \\ p_2 \\ \vdots \\ p_n \end{bmatrix} (9.53)

in which case Equation 9.52 becomes the set of equations

\frac{\partial \ln(L)}{\partial p_1} = 0, \quad \frac{\partial \ln(L)}{\partial p_2} = 0, \quad \ldots, \quad \frac{\partial \ln(L)}{\partial p_n} = 0 (9.54)
Example 9.3
Consider the above-mentioned case of the random variable X with a 0–1 distribu-
tion, expressed as
P(X = x) = p^x(1-p)^{1-x}, \quad x = 0, 1

First, form the likelihood function:

L = \prod_{j=1}^{n} P(X = x_j) = \prod_{j=1}^{n} p^{x_j}(1-p)^{1-x_j} = p^{\sum_{i=1}^{n} x_i}(1-p)^{\,n - \sum_{i=1}^{n} x_i}

Second, take the logarithm:

\ln(L) = \left(\sum_{i=1}^{n} x_i\right)\ln(p) + \left(n - \sum_{i=1}^{n} x_i\right)\ln(1-p)
Third, take the derivative with respect to the unknown parameter p and let the
result be equal to zero.
In this case, "p_1" contains only one parameter, p; therefore,

\frac{\mathrm{d}\ln(L)}{\mathrm{d}p} = \frac{1}{p}\sum_{i=1}^{n} x_i - \frac{1}{1-p}\left(n - \sum_{i=1}^{n} x_i\right) = 0

which is solved by

p = \frac{1}{n}\sum_{i=1}^{n} x_i

At this value, ln(L) reaches its maximum. Therefore, the estimation of p based on the maximum likelihood method, comparing with Equation 9.48, is

\hat{p} = \frac{1}{n}\sum_{i=1}^{n} x_i = \bar{x}
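The conclusion of Example 9.3 can be checked numerically. This pure-Python sketch (ours) evaluates the log-likelihood of the specimen data used with Table 9.1 and confirms that it peaks at the sample mean:

```python
# Sketch: the 0-1 log-likelihood
#   ln L = (sum x) ln p + (n - sum x) ln(1 - p)
# is maximized at p_hat = sample mean.
import math

def log_likelihood(xs, p):
    s = sum(xs)
    return s * math.log(p) + (len(xs) - s) * math.log(1 - p)

xs = [1, 1, 0, 0, 0]          # two unusable specimens out of five
p_hat = sum(xs) / len(xs)     # = 0.4
assert all(log_likelihood(xs, p_hat) >= log_likelihood(xs, p)
           for p in (0.2, 0.3, 0.5, 0.6, 0.8))
```

Taking the exponential of the log-likelihood at p = 0.4 reproduces the largest entry of Table 9.1, 0.03456.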
Example 9.4
Consider the random variable X uniformly distributed over [0, a]:

f_X(x) = \begin{cases} \dfrac{1}{a}, & 0 \le x \le a \\ 0, & \text{otherwise} \end{cases}

The likelihood function is

L = \prod_{i=1}^{n} f_X(x_i) = \begin{cases} a^{-n}, & 0 \le \min_i x_i \le \max_i x_i \le a \\ 0, & \text{otherwise} \end{cases}
so that

\ln(L) = -n\ln(a)

and

\frac{\mathrm{d}\ln(L)}{\mathrm{d}a} = -\frac{n}{a} = 0
The above equation has no meaningful solution. This implies that, when L ≠ 0,

\frac{\mathrm{d}\ln(L)}{\mathrm{d}a} \ne 0

However, this inequality does not necessarily mean that the likelihood function L has no maximum value. In fact, a and L have an inverse relationship: the smaller the value of a, the larger the value of L. However, a cannot be smaller than \max_i x_i. This is written as

\hat{a} = \max_i x_i
Example 9.5
Consider the sample set (x1, x2, x3, x4, x5, x6) = (1,2,3,5,4,9). Given the above condi-
tion, â = 9.
Now, compare the estimation through the maximum likelihood method and
through the moment method.
E[X] = \int_{-\infty}^{\infty} x f_X(x)\,\mathrm{d}x = \int_0^a x\,\frac{1}{a}\,\mathrm{d}x = \frac{a}{2}

so that

\frac{\hat{a}}{2} = E[X]
\hat{a} = 2E[X] = 2\bar{x} = 2 \times 4 = 8 \ne 9
This result implies that the estimation through moment about the origin has a
bias.
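Examples 9.4 and 9.5 can be condensed into a few lines (pure-Python sketch, ours):

```python
# Sketch: for X ~ U(0, a), the maximum likelihood estimate of a is the
# sample maximum, while the moment estimate 2 * x_bar can fall below it.
xs = [1, 2, 3, 5, 4, 9]
a_mle = max(xs)                      # = 9
a_moment = 2 * sum(xs) / len(xs)     # = 2 * 4 = 8, which misses x = 9
```

The moment estimate of 8 is inconsistent with the observed sample of 9, which is the bias noted above.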
\bar{X} = \frac{1}{n}\sum_{j=1}^{n} X_j (9.55)
E[\bar{X}] = E\!\left[\frac{1}{n}\sum_{j=1}^{n} X_j\right] = \frac{1}{n}E\!\left[\sum_{j=1}^{n} X_j\right] = \frac{1}{n}\sum_{j=1}^{n} E[X_j] = \frac{1}{n}\sum_{j=1}^{n} \mu_X = \mu_X (9.56)
Equation 9.56 implies that the estimation of Equation 9.55 is indeed unbiased.
To see if the estimation is consistent, consider
D[\bar{X}] = D\!\left[\frac{1}{n}\sum_{j=1}^{n} X_j\right] = \frac{1}{n^2}D\!\left[\sum_{j=1}^{n} X_j\right] = \frac{1}{n^2}\left\{\sum_{j=1}^{n} D[X_j] + \sum_{j \ne k}\sum \mathrm{cov}[X_j, X_k]\right\} = \frac{1}{n^2}\sum_{j=1}^{n} \sigma_X^2 = \frac{\sigma_X^2}{n} (9.57)

where the covariance terms vanish because the samples are independent.
Equation 9.57 implies that, when n is sufficiently large, the variance of \bar{X} tends to zero; that is, the estimation is consistent.
\hat{\Sigma}_X^2 = \frac{1}{n}\sum_{j=1}^{n}\left(X_j - \mu_X\right)^2 (9.58)
Here, \hat{\Sigma}_X^2 is the random variable from which the variance is estimated. It can be proven that the mean of \hat{\Sigma}_X^2 is \sigma_X^2. This implies that when the mean \mu_X is known, Equation 9.58 provides an unbiased estimation of the variance.
If the mean μX is unknown, then first analyze the following formula, which is used
to estimate the variance:
S_X^2 = \frac{1}{n-1}\sum_{j=1}^{n}\left(X_j - \bar{X}\right)^2 (9.59)
E\!\left[S_X^2\right] = E\!\left[\frac{1}{n-1}\sum_{j=1}^{n}(X_j - \bar{X})^2\right] = \frac{1}{n-1}E\!\left[\sum_{j=1}^{n}\left[(X_j - \mu_X) - (\bar{X} - \mu_X)\right]^2\right]

= \frac{1}{n-1}\sum_{j=1}^{n}\left\{E\!\left[(X_j - \mu_X)^2\right] - 2E\!\left[(X_j - \mu_X)(\bar{X} - \mu_X)\right] + E\!\left[(\bar{X} - \mu_X)^2\right]\right\} (9.60)

= \frac{1}{n-1}\sum_{j=1}^{n}\left(\sigma_X^2 - \frac{2}{n}\sigma_X^2 + \frac{1}{n}\sigma_X^2\right) = \sigma_X^2
S_X^2 = \frac{1}{n-1}\sum_{j=1}^{n}\left(x_j - \bar{x}\right)^2 (9.61)
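The unbiasedness result of Equation 9.60 can also be checked by simulation. The following pure-Python sketch (ours) draws repeated samples from a unit-variance normal distribution and compares the two normalizations:

```python
# Simulation sketch of the result behind Equation 9.60: dividing the sum
# of squared deviations by (n - 1) gives an unbiased variance estimate,
# while dividing by n biases it low by the factor (n - 1)/n.
import random

random.seed(1)
n, trials = 5, 20000
s2_sum = v_sum = 0.0
for _ in range(trials):
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]   # true variance = 1
    xbar = sum(xs) / n
    ss = sum((x - xbar) ** 2 for x in xs)
    s2_sum += ss / (n - 1)   # unbiased form, Equation 9.59
    v_sum += ss / n          # biased form
mean_unbiased = s2_sum / trials   # close to 1.0
mean_biased = v_sum / trials      # close to (n - 1)/n = 0.8
```

With 20,000 trials, the averages land near 1.0 and 0.8, respectively, within ordinary sampling fluctuation.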
9.2.2 Confidence Intervals
9.2.2.1 Estimation and Sampling Distributions
Because estimation is based on samples rather than on entire variable sets, the estimator itself is a random variable, which has its own distribution.
P\!\left(-\infty < \frac{\bar{X} - \mu_X}{\sigma_X/\sqrt{n}} \le z_{1-\alpha}\right) = \Phi(z_{1-\alpha}) = 1 - \alpha (9.62)
(n-1)\frac{S_X^2}{\sigma_X^2} = \frac{1}{\sigma_X^2}\sum_{j=1}^{n}\left(X_j - \bar{X}\right)^2 = \sum_{j=1}^{n}\left(\frac{X_j - \mu_X}{\sigma_X}\right)^2 - \left(\frac{\bar{X} - \mu_X}{\sigma_X/\sqrt{n}}\right)^2 (9.63)
From Equation 9.63, the first term on the right-hand side is chi-square with n
DOF, while the second term is chi-square with one DOF. Due to the regenerative
character of chi-square random variables, the left-hand side is seen to be chi-square
with (n − 1) DOF. Consequently, this can be rewritten as
P\!\left[(n-1)\frac{S_X^2}{\sigma_X^2} > x_{1-\alpha}\right] = 1 - F_{\chi^2_{n-1}}(x_{1-\alpha}) = 1 - \alpha (9.64)

The confidence interval for the mean is obtained by solving the double inequality in the argument on the left-hand side of Equation 9.62. Using the established probabilities of correct estimation yields

\left[\bar{x} - \frac{z_{1-\alpha/2}\,\sigma_X}{\sqrt{n}},\; \bar{x} + \frac{z_{1-\alpha/2}\,\sigma_X}{\sqrt{n}}\right] (9.65)
n n
Similarly, from Equation 9.64, the one-sided confidence interval for the variance is

\left[0,\; \frac{(n-1)S_X^2}{x_{1-\alpha}}\right] (9.66)
t_{n-1} = \frac{\bar{X} - \mu_X}{S_X/\sqrt{n}} (9.67)
Note that Equation 9.67 indicates a t-distribution, also known as Student's distribution, which was previously discussed in Chapter 2. The two-sided confidence interval for the mean is given by
P\!\left(-b_{1-\alpha/2} < \frac{\bar{X} - \mu_X}{S_X/\sqrt{n}} \le b_{1-\alpha/2}\right) = F_{t_{n-1}}(b_{1-\alpha/2}) - F_{t_{n-1}}(-b_{1-\alpha/2}) = 1 - \alpha (9.68)
By solving the double inequalities in the argument on the left-hand side for the
mean μX, the confidence interval is determined to be
\left[\bar{x} - \frac{b_{1-\alpha/2}\,S_X}{\sqrt{n}},\; \bar{x} + \frac{b_{1-\alpha/2}\,S_X}{\sqrt{n}}\right] (9.69)
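For a known σ_X, the interval of Equation 9.65 takes a few lines (pure-Python sketch, ours; z ≈ 1.96 corresponds to a 95% confidence level):

```python
# Sketch of Equation 9.65: two-sided confidence interval for the mean
# when the standard deviation sigma is known.
import math

def mean_ci(xs, sigma, z=1.96):
    n = len(xs)
    xbar = sum(xs) / n
    half = z * sigma / math.sqrt(n)   # z_{1-alpha/2} * sigma / sqrt(n)
    return xbar - half, xbar + half
```

For the readings (9.8, 10.1, 10.0, 9.9, 10.2) with σ = 0.2, the interval is roughly (9.82, 10.18), centered on the sample mean of 10.0.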
9.2.3.1 General Estimation
An unknown random process should not be assumed stationary before its statistical characteristics are analyzed. Unless the process is known to be stationary, the ensemble average must be used.
\bar{x}_j = \frac{1}{M}\sum_{m=1}^{M} x_{mj} (9.70)
At first glance, the mean in Equation 9.70 appears identical to that of a random variable. However, the mean \bar{x}_j carries the subscript j, indicating the jth time point of the random process X(t); that is, the average described in Equation 9.70 is an ensemble average over the M realizations.
S_X^2(t_j) = \frac{1}{M-1}\sum_{m=1}^{M}\left(x_{mj} - \bar{x}_j\right)^2, \quad j = 0, 1, \ldots, n-1 (9.71)

S_X(t_j) = \sqrt{S_X^2(t_j)}, \quad j = 0, 1, \ldots, n-1 (9.72)
9.2.3.1.2 Correlation
Another fundamental difference between a random variable and a random process is
in the analysis of correlations. We now consider the correlation of processes X and Y.
9.2.3.1.2.1 Joint PDF First, consider the joint distribution of X and Y by denoting
the cross-correlation function R as
R = E[XY] (9.73)
f_{XY}(x, y) = \frac{1}{2\pi\sqrt{\sigma_X^2\sigma_Y^2 - R^2}}\exp\!\left[-\frac{\sigma_Y^2x^2 - 2Rxy + \sigma_X^2y^2}{2\left(\sigma_X^2\sigma_Y^2 - R^2\right)}\right] (9.74)
and the corresponding likelihood function of the n measured pairs is

L(\sigma_X^2, \sigma_Y^2, R) = \frac{1}{(2\pi)^n\left(\sigma_X^2\sigma_Y^2 - R^2\right)^{n/2}}\exp\!\left[-\frac{1}{2\left(\sigma_X^2\sigma_Y^2 - R^2\right)}\sum_{j=1}^{n}\left(\sigma_Y^2x_j^2 - 2Rx_jy_j + \sigma_X^2y_j^2\right)\right] (9.75)
First, let

\frac{\partial}{\partial\sigma_X^2}\left[L(\sigma_X^2, \sigma_Y^2, R)\right] = 0 (9.77)

which yields
n + \frac{1}{\hat{\sigma}_Y^2}\sum_{j=1}^{n} y_j^2 = \frac{1}{\hat{\sigma}_X^2\hat{\sigma}_Y^2 - \hat{r}^2}\sum_{j=1}^{n}\left(\hat{\sigma}_Y^2x_j^2 - 2\hat{r}x_jy_j + \hat{\sigma}_X^2y_j^2\right) (9.78)
Next, let
\frac{\partial}{\partial\sigma_Y^2}\left[L(\sigma_X^2, \sigma_Y^2, R)\right] = 0 (9.79)
which yields

n + \frac{1}{\hat{\sigma}_X^2}\sum_{j=1}^{n} x_j^2 = \frac{1}{\hat{\sigma}_X^2\hat{\sigma}_Y^2 - \hat{r}^2}\sum_{j=1}^{n}\left(\hat{\sigma}_Y^2x_j^2 - 2\hat{r}x_jy_j + \hat{\sigma}_X^2y_j^2\right) (9.80)
Additionally, let
\frac{\partial}{\partial R}\left[L(\sigma_X^2, \sigma_Y^2, R)\right] = 0 (9.81)
which yields

n\hat{r} + \sum_{j=1}^{n} x_jy_j = \frac{\hat{r}}{\hat{\sigma}_X^2\hat{\sigma}_Y^2 - \hat{r}^2}\sum_{j=1}^{n}\left(\hat{\sigma}_Y^2x_j^2 - 2\hat{r}x_jy_j + \hat{\sigma}_X^2y_j^2\right) (9.82)
Adding Equations 9.78 and 9.80 and dividing the result by 2 further yields

n + \frac{1}{2}\left(\frac{1}{\hat{\sigma}_Y^2}\sum_{j=1}^{n} y_j^2 + \frac{1}{\hat{\sigma}_X^2}\sum_{j=1}^{n} x_j^2\right) = \frac{1}{\hat{\sigma}_X^2\hat{\sigma}_Y^2 - \hat{r}^2}\sum_{j=1}^{n}\left(\hat{\sigma}_Y^2x_j^2 - 2\hat{r}x_jy_j + \hat{\sigma}_X^2y_j^2\right) (9.83)
Combining Equations 9.82 and 9.83 then gives

\sum_{j=1}^{n} x_jy_j = \frac{\hat{r}}{2}\left(\frac{1}{\hat{\sigma}_X^2}\sum_{j=1}^{n} x_j^2 + \frac{1}{\hat{\sigma}_Y^2}\sum_{j=1}^{n} y_j^2\right) (9.84)
To use Equation 9.84, we need to have both the variances of processes X and Y,
which can be respectively estimated as
\hat{\sigma}_X^2 = \frac{1}{n}\sum_{j=1}^{n} x_j^2 (9.85)
and
\hat{\sigma}_Y^2 = \frac{1}{n}\sum_{j=1}^{n} y_j^2 (9.86)
Equation 9.84 is then satisfied by

\hat{r} = \frac{1}{n}\sum_{j=1}^{n} x_jy_j (9.87)
\hat{R} = \frac{1}{n}\sum_{j=1}^{n} X_jY_j (9.88)
The following equation can be used to check if the estimator for the correlation
R is biased:
E[\hat{R}] = E\!\left[\frac{1}{n}\sum_{j=1}^{n} X_jY_j\right] = \frac{1}{n}\sum_{j=1}^{n} E[X_jY_j] = R (9.89)
D[\hat{R}] = D\!\left[\frac{1}{n}\sum_{j=1}^{n} X_jY_j\right] = \frac{1}{n^2}\sum_{j=1}^{n} D[X_jY_j] = \frac{1}{n}\left(R^2 + \sigma_X^2\sigma_Y^2\right) (9.90)
\hat{r}_X(t_i, t_j) = \frac{1}{M}\sum_{m=1}^{M} x_{mi}x_{mj}, \quad t_i, t_j \in T (9.91)
D\!\left[\hat{r}_X(t_i, t_j)\right] = \frac{R_X^2(t_i, t_j) + \sigma_X^2(t_i)\sigma_X^2(t_j)}{M} (9.93)
\hat{r}_{XY}(t_i, t_j) = \frac{1}{M}\sum_{m=1}^{M} x_{mi}y_{mj}, \quad t_i, t_j \in T (9.94)
and
\hat{C}_{XY}(t_i, t_j) = \frac{1}{M}\sum_{m=1}^{M}\left(x_{mi} - \bar{x}_i\right)\left(y_{mj} - \bar{y}_j\right), \quad t_i, t_j \in T (9.97)
\hat{\rho}_{XY}(t_i, t_j) = \frac{\hat{C}_{XY}(t_i, t_j)}{s_X(t_i)\,s_Y(t_j)} (9.99)
9.2.3.2.1 Mean
9.2.3.2.1.1 Non-Ergodic For a non-ergodic process, the mean can be written as
\bar{x} = \frac{1}{n}\sum_{j=0}^{n-1}\bar{x}_j = \frac{1}{n}\sum_{j=0}^{n-1}\frac{1}{M}\sum_{m=1}^{M} x_{mj} = \frac{1}{Mn}\sum_{j=0}^{n-1}\sum_{m=1}^{M} x_{mj} (9.100)
For an ergodic process, the temporal average may be used instead:

\bar{x} = \frac{1}{n}\sum_{j=0}^{n-1} x_j (9.101)
9.2.3.2.2 Variance
9.2.3.2.2.1 Non-Ergodic Similar to the mean value, if the process is non-ergodic,
the variance can be written as
S_X^2 = \frac{1}{n}\sum_{j=0}^{n-1} S_X^2(t_j) = \frac{1}{n(M-1)}\sum_{j=0}^{n-1}\sum_{m=1}^{M}\left(x_{mj} - \bar{x}_j\right)^2 (9.102)
For an ergodic process,

S_X^2 = \frac{1}{n}\sum_{j=0}^{n-1}\left(x_j - \bar{x}\right)^2 (9.103)

and

S_X = \sqrt{S_X^2} (9.104)
9.2.3.2.4 Autocorrelation
Next, the autocorrelation function of a stationary process will be described.
\hat{r}_X(\tau_j) = \frac{1}{n-j}\sum_{i=0}^{n-1-j}\hat{r}_X(t_i, t_i + \tau_j), \quad 0 \le t_i, \tau_j \le T (9.105)
For the discussion on variance, the expression of rˆX (ti , ti + τ j ) was used. One can
proceed from this point to obtain the expressions.
\hat{r}_X(\tau_j) = \frac{1}{n-j}\sum_{i=0}^{n-1-j} x_ix_{i+j}, \quad 0 \le \tau_j \le T (9.106)
\hat{r}_{XY}(\tau_j) = \frac{1}{n-j}\sum_{i=0}^{n-1-j} x_iy_{i+j}, \quad 0 \le \tau_j \le T (9.107)
\hat{C}_{XY}(\tau_j) = \frac{1}{n-j}\sum_{i=0}^{n-1-j}\left(x_i - \bar{x}\right)\left(y_{i+j} - \bar{y}\right), \quad 0 \le \tau_j \le T (9.108)
\hat{\rho}_{XY}(\tau_j) = \frac{\hat{C}_{XY}(\tau_j)}{s_X\,s_Y}, \quad 0 \le \tau_j \le T (9.109)
9.2.3.3 Nonstationary Process
In real-world applications, processes are often nonstationary. Nonstationary pro-
cesses will be discussed next.
σ 2X (t ) = a 2 (t ) (9.113)
9.2.3.3.1.3 Mean Square Value The estimator of the mean square value for a
nonstationary process can be written as
\hat{\sigma}_X^2(t_j) = \sum_{k=-N}^{N} w_k\,x_{j+k}^2, \quad j = N, \ldots, n-1-N (9.114)
where w_k is an even weight function, n is the number of temporal points, and N sets the half-width of the moving window, with

\sum_{k=-N}^{N} w_k = 1 (9.115)
The weight function is used to emphasize certain values. In most cases, the weights are larger near the central point, which in this case is k = 0. This allows the bias to be minimized. Thus, the expected value of the estimator of variance may be written as
E\!\left[\hat{\sigma}_X^2(t_j)\right] = \sum_{k=-N}^{N} w_k\,\sigma_X^2(t_{j+k}), \quad j = N, \ldots, n-1-N (9.116)
From Equation 9.116, it is seen that when σ²(t_{j+k}) varies evenly (symmetrically) for k between −N and N and the variation is linear, the bias is close to zero; as a result, the estimator is essentially unbiased. Otherwise, it will be biased.
Furthermore, an increase in the value of the weight function wk near zero will lower
the possible bias.
D\!\left[\hat{\sigma}_X^2(t_j)\right] = 2\sum_{k=-N}^{N}\sum_{m=-N}^{N} w_kw_m\,R_X^2(j+k,\, j+m), \quad j = N, \ldots, n-1-N (9.117)
where R X(j,k) is the autocorrelation function of the process X(t) at t = jΔt and s = kΔt.
When t = s, R_X(t, s) is at its maximum. Thus, to minimize the variance of the estimator \hat{\sigma}_X^2(t_j), the values of the weight function w_k near k = 0 must be reduced, certainly not increased. This is contrary to the case of the mean estimator. Thus, different sets of weight functions should be used for estimating the mean and the mean-square values.
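A sketch of the weighted sliding estimator of Equation 9.114 (pure Python, ours; the particular weights are illustrative and satisfy Equation 9.115):

```python
# Sketch of Equation 9.114: weighted sliding estimate of the mean-square
# value of a nonstationary record, with symmetric weights summing to one.

def mean_square_track(xs, weights):
    N = (len(weights) - 1) // 2
    assert abs(sum(weights) - 1.0) < 1e-12     # Equation 9.115
    return [sum(w * xs[j + k] ** 2
                for k, w in zip(range(-N, N + 1), weights))
            for j in range(N, len(xs) - N)]
```

For a constant record of amplitude 2 and weights (0.25, 0.5, 0.25), every estimate equals 4, the true mean-square value.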
where z(t) is the ground displacement and \ddot{x}(t) is the absolute acceleration. If \ddot{z}(t) is a shock with a given amplitude, the peak response of \ddot{x} will depend upon the natural frequency ω, or f = ω/2π, for a given value of the damping ratio ζ. Thus, the averaged shock spectrum can be written as
\bar{B}(f) = \frac{1}{M}\sum_{m=1}^{M} B_m(f) (9.120)
s_B(f) = \left\{\frac{1}{M-1}\sum_{m=1}^{M}\left[B_m(f) - \bar{B}(f)\right]^2\right\}^{1/2} (9.121)
B_c(f) = \bar{B}(f) + Ks_B(f) (9.122)
where
K > 0 (9.123)
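Equations 9.120 through 9.122 amount to a per-frequency mean, standard deviation, and envelope. A pure-Python sketch (ours; the value of K is a user choice subject to Equation 9.123):

```python
# Sketch of Equations 9.120-9.122: conservative shock-spectrum envelope
# B_c(f) = mean + K * sample standard deviation, frequency line by line.
import math

def envelope(spectra, K):
    M = len(spectra)                  # number of measured spectra
    out = []
    for vals in zip(*spectra):        # one frequency line at a time
        mean = sum(vals) / M                                          # Eq. 9.120
        s = math.sqrt(sum((v - mean) ** 2 for v in vals) / (M - 1))   # Eq. 9.121
        out.append(mean + K * s)                                      # Eq. 9.122
    return out
```

For three spectra [1, 2], [3, 2], [2, 2] and K = 3, the first line gives 2 + 3 × 1 = 5 while the second, with no scatter, stays at 2.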
S = \sum_{i=1}^{n} r_i^2 (9.124)
9.2.4.1.2 Residue
The residue r_i is the difference between the measured value y_i and the fitted function f, which is generated from all the y_i values:

r_i = y_i - f(x_i, p) (9.125)
p = {α 0, α1} (9.126)
and
when
\frac{\partial S}{\partial\alpha_j} = 2\sum_{i=1}^{n} r_i\frac{\partial r_i}{\partial\alpha_j} = 0, \quad j = 1, \ldots, m (9.130)
or
\sum_{i=1}^{n} r_i\frac{\partial f(x_i, p)}{\partial\alpha_j} = 0 (9.131)
By solving Equation 9.131, the parameter p that minimizes the sum of squared residues can be determined.
9.2.4.2 Curve Fitting
Curve fitting seeks a mathematical function that best fits the measured data points, possibly subject to constraints. Interpolation allows an exact fit to the data when required. Additionally, the fitted function may be smoothed to produce a "better looking" curve.
Regression is often used for curve fitting through measured data pairs, which consist of independent variables and their corresponding function values. Statistical
inference is often used to deal with any uncertainties and randomness. Extrapolation
can be used to predict results beyond the range of the observed data, although this
implies a greater degree of uncertainty.
We now consider the following function of x and y:
y = f(x) (9.132)
\begin{bmatrix} x_1^n & x_1^{n-1} & \cdots & x_1 & 1 \\ x_2^n & x_2^{n-1} & \cdots & x_2 & 1 \\ \vdots & \vdots & & \vdots & \vdots \\ x_p^n & x_p^{n-1} & \cdots & x_p & 1 \end{bmatrix}\begin{bmatrix} a_n \\ a_{n-1} \\ \vdots \\ a_0 \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_p \end{bmatrix} (9.136)

with the parameter vector

p = \begin{bmatrix} a_n \\ a_{n-1} \\ \vdots \\ a_0 \end{bmatrix} (9.137)
From Equation 9.136, the coefficients of the nonlinear function can be determined as given below:

\begin{bmatrix} a_n \\ a_{n-1} \\ \vdots \\ a_0 \end{bmatrix} = \begin{bmatrix} x_1^n & x_1^{n-1} & \cdots & x_1 & 1 \\ x_2^n & x_2^{n-1} & \cdots & x_2 & 1 \\ \vdots & \vdots & & \vdots & \vdots \\ x_p^n & x_p^{n-1} & \cdots & x_p & 1 \end{bmatrix}^{+}\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_p \end{bmatrix} (9.138)
In Equation 9.138, the superscript + stands for the pseudo inverse of a matrix, say, matrix A, written as
A+ = (ATA)−1AT (9.139)
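For a straight-line fit, the pseudo inverse of Equation 9.139 can be written out explicitly, since A^T A is only 2 × 2 (pure-Python sketch, ours):

```python
# Sketch of Equation 9.139 applied to y = a1*x + a0: the least-squares
# solution p = (A^T A)^{-1} A^T y with A rows [x_i, 1].

def pinv_line_fit(xs, ys):
    sxx, sx, n = sum(x * x for x in xs), sum(xs), len(xs)
    sxy, sy = sum(x * y for x, y in zip(xs, ys)), sum(ys)
    det = sxx * n - sx * sx               # determinant of A^T A
    a1 = (n * sxy - sx * sy) / det
    a0 = (sxx * sy - sx * sxy) / det
    return a1, a0
```

The exact data (0, 1), (1, 3), (2, 5) return a1 = 2 and a0 = 1; with more points than parameters, the same formula delivers the least-squares fit rather than an exact one.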
9.3 Vibration Testing
Vibration testing is an important measure in inverse dynamic problems. The focus
of this section is on random vibration-related issues, as opposed to general vibration
testing. Generally, in vibration testing, the amount of randomness is fairly small in comparison to the desired signals and the measurement noise; for this reason, randomness is typically ignored. However, in some instances, randomness decreases the signal-to-noise ratio significantly. In this section, randomness and uncertainty will be discussed only in qualitative terms, rather than in quantitative detail.
Strictly speaking, randomness and uncertainty are separate concepts. For random variables or processes, even though individual events cannot be predicted, moments and distributions can be estimated from the corresponding patterns. In contrast, uncertain events cannot be so quantified. However, in application to engineering problems, randomness and uncertainty will not be differentiated in most situations.
For a more systematic approach, readers may consult McConnel (1995) for instance.
9.3.1 Test Setup
Test setup is the beginning step of vibration testing. To physically install a structure
that will simulate the system being tested, the system must first be correctly modeled.
9.3.1.1 Mathematical Model
Mathematical models are rather fundamental and are often referred to as an analytic
formulation or a closed-form solution. For example, the SDOF vibration system is
represented by the mathematical model
with a solution of
[Figure 9.3: Simulink block diagram of the SDOF model, built from 1/s integrator blocks, gain blocks C, K, and −inv(M), and an input signal (Simin) read from the workspace.]
[Plots: (a) force (N) vs. time (s); (b) displacement (m) vs. time (s), over 0 to 30 s.]
Figure 9.4 (a) Input plots for a “random” excitation; (b) output plots for a “ran-
dom” excitation.
9.3.1.2 Numerical Model
The use of a computer is often necessary to establish a numerical or computational
model. For example, consider the SDOF vibration model from Simulink as shown
in Figure 9.3.
Figure 9.4 shows the input and output of the numerical model.
9.3.1.3 Experimental Model
If the models described in Section 9.3.1 or Figure 9.3 have no or acceptable error,
then it is not necessary to conduct a vibration test. However, these two models can
be exceedingly “deterministic.” Namely, there are many uncertain aspects in the real
world that cannot be precisely represented by mathematical or numerical models.
Individual examples include boundary conditions, specific properties of elements or
members in vibration systems, the “correct” number of degrees of freedom, and the
distributions of mass, damping, and stiffness, among others.
[Recorded time histories over 0 to 50 s: acceleration and displacement, N-S and soil components.]
In this situation, the setup of a physical test model to deal with the uncertainties
becomes necessary. During the test setup, it becomes imperative to isolate the test
targets in order to best minimize the test uncertainty.
Figures 9.5 and 9.6 show an example of experimental testing on a highway bridge
and recorded time histories on a model bridge, respectively.
9.3.2.1 Actuation
Actuation is completed by using actuators, force hammers, or other measures. Figure 9.7
shows a typical hydraulic actuator, while Figure 9.9 shows an electromagnetic actuator
and Figure 9.10 shows an impact hammer.
One of the advantages of using actuators is that we can directly apply “random”
excitations for testing, such as white and/or color noises. These artificial “random”
9.3.2.1.1 Actuators
Due to uneven frequency responses of actuators, the input demand and the output
force/displacement will not be 100% proportional. Furthermore, due to possible con-
trol loops that can be nonlinear as well as time varying, this phenomenon can be
magnified.
H_O(\omega) = \frac{F(\omega)}{S(\omega)} (9.142)

H_L(\omega) = \frac{F(\omega)}{S(\omega)} (9.143)
In initial comparisons of Equations 9.142 and 9.143, the transfer functions appear identical. However, the curves of magnitude of force (in kips) vs. frequency are significantly different between the two transfer functions. Even in the case of zero load and different displacement levels, the curves are dissimilar, as shown in Figure 9.8a. In this case, the main reason is the limitation of the maximum velocity of the given actuators and, secondarily, nonlinear control.

[Figure 9.8 plots the transfer functions measured at displacement levels of 0.330 in, 0.985 in, and 1.315 in, over 0 to 8 Hz.]

Figure 9.8 Measured transfer functions: (a) magnitude vs. frequency; (b) phase vs. frequency.
For given demands of force and/or displacement, deterministic or random, the
actual force/displacement will be understandably different. Note that the transfer
function shown in Figure 9.8 is measured through the sine sweep test. When random
signals are used, the transfer function should be measured accordingly.
9.3.2.1.1.2 Harmonic Distortion Most actuators (see Figure 9.9a and b) will have
different degrees of harmonic distortion. The ideal forcing function can be written as
Figure 9.9 Electromagnetic actuator. (a) Photo of an actuator and (b) conceptual drawing.
H_T(\omega) = \begin{cases} 1, & \omega = \omega_0 \\ 0, & \text{elsewhere} \end{cases} (9.146)
In the ideal case, the output of the actuator Fpractice(ω) in the frequency domain
will be
In practical applications, the half-sine-like time history of the impact force and the impact duration, characterized by the frequency ω, can be varied through the softness of the hammer head as well as the surface of the test object. This is represented by the following equation:
9.3.2.1.2.2 Force Window The force window can be used to minimize unwanted
noise. The idealized function of the force window in the time domain is
w(t) = \begin{cases} 1, & 0 < t < \dfrac{\pi}{2\omega} \\ 0, & \text{elsewhere} \end{cases} (9.150)
For an ideal case, the force generated by the hammer, f_practice(t), in the time domain is given by
9.3.2.2 Measurement
Both the input forces and the output responses during a vibration test need to be
measured. The necessary instrumentation for these measurements typically con-
tains a sensory system, a data acquisition system, and signal processing, which may
introduce some degree of randomness and uncertainty. The basic method to address
randomness is averaging. For most vibration testing, randomness is ignored and
averaging is not carried out. However, to acquire more precise measurements, averaging becomes necessary (see Bendat and Piersol 2011).
9.3.2.2.1 Sensors
We now consider the basic concept of sensors and analyze the possible randomness.
S_{(.)} = \frac{\text{output}}{\text{input}} = \text{output per unit input} (9.152)
Once a sensor is manufactured, its sensitivity should be calibrated and likely recorded
and included in its packaging. However, the sensitivity will likely experience drifts
and/or fluctuations over time due to many factors, such as temperature, pressure of
the atmosphere, mounting, cabling, and grounding, among others.
For different driving frequencies, the ratio of output and input will vary. This will
be further explained in Section 9.3.2.2.1.5.
q = Sv–qemin (9.153)
Sv–q refers to the sensitivity that transfers the electronic signal to the required
measurement quantity. Furthermore, emin is the minimum electric signal that a mea-
suring device can put out without being contaminated by noises.
9.3.2.2.1.3 Dynamic Range The dynamic range describes the minimum and the
maximum measurable ranges in decibels, as denoted in Equation 9.154.
Here, emax is the maximum electric signal that a measuring device can output.
\frac{e(t) - e(\infty)}{e(0) - e(\infty)} = e^{-t/T} (9.155)
where e(0) is the initial signal and e(∞) is the signal measured after a sufficiently long
period. Also, e(t) is the signal picked up at time t. In Equation 9.155, the parameter T
specifically denotes the time constant given by
T = RC (9.156)
where R in ohms and C in farads are respectively the resistance and capacitance of
the measurement circuit and T is in seconds.
[Plot: normalized frequency response H_{(.)}(ω) vs. ω, with resonances ω_1, ω_2, …, ω_n and cutoff frequencies ω_{c1} and ω_{c2} bounding the working band.]
In the above, the corresponding frequency band is referred to as the working band.
Characteristically for the working band, the phase ϕ vs. the frequency ω plot of the
frequency response function is a straight line. This is represented in the following:
ϕ = aω (9.158)
\bar{H}_{(.)}(\omega) = \frac{H_{(.)}(\omega)}{S_{(.)}} \ne 1 (9.159)
L_s = \frac{e_U - e_m}{e_m} (9.160a)
[Plot: measured signal level e vs. log(f), showing the upper limit e_U, the mean e_m, and the lower limit e_L.]
and/or
L_s = \frac{e_m - e_L}{e_m} (9.160b)
where eU and eL are the upper and lower limits of the measured signal, respectively,
and em is the mean value, with
Ls ≤ 5% (9.161)
s_c = \frac{S_\theta}{S_z} = \tan(\phi) (9.162)
sc ≤ 5% (9.163)
An ideal sensor should be able to pick up physical signals and output electric signals proportionally. Unfortunately, due to the limitation of measurement ranges and the nonlinear nature of sensors in both the frequency and time domains, the output signal will not be purely proportional to the physical signal. Additionally, unwanted noise, including nonzero transfer signals, can contaminate the sensory output.
[Diagram: cross-axis sensitivity, with sensitivities S_z and S_θ along the z and θ axes, total sensitivity S_T, and angle φ between them.]
Here, H(.)(ω) is the normalized frequency response function of the sensor, which
describes the frequency range and the dynamic magnification. The function, N(.)(ω,t),
covers all the randomness due to sensitivity drift and noises at the moment of signal
pickup, among others.
The signal measured and stored in data memory, denoted by Y(ω,t), will differ from the signal to be measured, denoted by x(t). This is due to the sensitivity of the sensor as described in Equation 9.164 and the gain as described in Equation 9.165. A simplified expression including the randomness is given by
Y(ω,t) = N_p(ω,t) ∏_i S_i ∫_0^t x(τ)h_s(t − τ) dτ + N_a(t) (9.166)
In Equation 9.166, hs(t) is the impulse response function of the total measurement
system, which can be regarded as an inverse Fourier transform of the normalized
transfer function of the total measurement system. That is,
h_s(t) ⇔ H_s(ω) = ∏_i H_i(ω) (9.167)
where H1(ω) is the transfer function of the first link of the measurement system,
namely, the sensor’s.
Using the knowledge of transfer functions, each Hi(ω) can be measured instru-
ment by instrument, resulting in Hs(ω). Nevertheless, a more accurate and effective
method is to measure Hs(ω) of the total system as a whole. The objective is to deter-
mine the linear range and allowed error range (refer to Figure 9.13).
H_s(ω) = ∏_i S_i H_i(ω) = S_1 ∏_i H_i(ω) (9.168)
In an ideal case,
∏_i H_i(ω) = 1 (9.169)
and
H_s(ω) = S_1 = const (9.170)
Equation 9.170 is often given in many test menus. However, Equation 9.170
can only be used in the event when the measurement of randomness proves to be
negligible.
N_p(ω,t) is a random process that modifies the output convolution, caused by multiplicative noise contamination. Furthermore, N_a(t) is an additional random process due to additive noise contamination. A simplified expression of N_p(ω,t), observed through gain drift alone, is often modeled by a normal distribution. This is written as

N_p(ω,t) ~ N(µ_N(t), σ_N²(t)) (9.171)
where μN(t) and σN(t) are the corresponding mean and standard deviation, respec-
tively. Equation 9.70 can be used to estimate the sample average x j to approximate
μ(t), while Equation 9.71 can be used to estimate the sample variance S X2 (t j ) to
approximate σ2(t).
According to Equation 9.71, by increasing the number of tests n,

S_X²(t_j) → 0 (9.172)

so that

σ(t) → 0 (9.173)
(Figure: measured gain vs. time over 0 to 200 h; panels (a) and (b) show gain drifting between about 3 and 7.)
ω_0 = 2πf_0 = 2π/T_0 (9.174)
Often, the signal is not sampled over an integer multiple of the period T_0. When the signal is not sampled with period T_0, there will be discontinuities in the magnitude and slope, as seen in Figure 9.17.
Since the total power of the signal should not be changed, the smaller spectral lines, illustrated in Figure 9.18b, will share the power with the central line. As a result, the length of the central line is reduced, which describes the concept of power leakage.
Figure 9.17 Total sampling period. (a) Signal with a frequency multiple of the sampling
period. (b) Signal with a frequency not a multiple of the sampling period.
(Figure 9.18: magnitude vs. frequency. (a) Spectrum when the record length is a multiple of T_0. (b) Spectrum showing power leakage into adjacent lines.)
Because power leakage depends on the particular buffer size and sampling rate, it cannot be minimized by averaging repeated tests. Such error is referred to as systematic error.
1. Rectangular window

w(n) = 1, 0 < n < N; 0, elsewhere (9.175)

2. Hamming window

w(n) = 0.53836 − 0.46164 cos[2nπ/(N − 1)], 0 < n < N; 0, elsewhere (9.176)

3. Hanning window

w(n) = 0.5{1 − cos[2nπ/(N − 1)]}, 0 < n < N; 0, elsewhere (9.177)
The Hanning and Hamming windows are both known as “raised cosine”
windows.
4. Sine window

w(n) = sin[nπ/(N − 1)], 0 < n < N; 0, elsewhere (9.178)

5. Gaussian window

w(n) = exp{−(1/2)[(n − (N − 1)/2)/(σ(N − 1)/2)]²}, 0 < n < N; 0, elsewhere (9.179)
6. Bartlett–Hann window

w(n) = 0.62 − 0.48|n/(N − 1) − 0.5| − 0.38 cos[2nπ/(N − 1)], 0 < n < N; 0, elsewhere (9.180)
7. Blackman window

w(n) = (1 − a)/2 − 0.5 cos[2nπ/(N − 1)] + (a/2) cos[4nπ/(N − 1)], 0 < n < N; 0, elsewhere (9.181)
8. Four-term cosine window

w(n) = 0.3929 − 0.51 cos[2nπ/(N − 1)] + 0.0959 cos[4nπ/(N − 1)] − 0.0012 cos[6nπ/(N − 1)], 0 < n < N; 0, elsewhere (9.182)
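These window definitions can be sketched directly; shown below (a Python sketch, though the text's examples use MATLAB) are the Hamming and Hanning forms of Equations 9.176 and 9.177:

```python
import math

def hamming(N):
    # Eq. 9.176
    return [0.53836 - 0.46164 * math.cos(2.0 * n * math.pi / (N - 1)) for n in range(N)]

def hanning(N):
    # Eq. 9.177
    return [0.5 * (1.0 - math.cos(2.0 * n * math.pi / (N - 1))) for n in range(N)]

def apply_window(y, w):
    # windowed record: y_w(n) = y(n) * w(n), sample by sample
    return [yi * wi for yi, wi in zip(y, w)]

w = hanning(8)
print([round(v, 3) for v in w])  # tapers from 0 up toward 1 and back to 0
```

Tapering the record ends to zero in this way suppresses the magnitude and slope discontinuities that cause power leakage.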
In using the above windows, the original signal y(t) is multiplied by a window function, resulting in the windowed function y_w(t) = y(t)w(t).
(Figure: Nyquist plot; imaginary vs. real part of the frequency response function.)
Denote R as
R = 1/(mω_d) (9.185)
We have
H(ω) = R/{2j[ζω_n + j(ω − ω_d)]} − R/{2j[ζω_n + j(ω + ω_d)]} (9.186)
Note that, in the neighborhood of the damped natural frequency ω_d, the first term on the right-hand side of Equation 9.186 is significantly larger than the second term. Thus, we can rewrite Equation 9.186 as
H(ω) ≈ R/{2j[ζω_n + j(ω − ω_d)]} (9.187)
That is, we can just use the first term to carry out the curve fit.
Rewrite Equation 9.187 as
H(ω) ≈ Re[H(ω)] + j Im[H(ω)] = (R/2)(ω_d − ω)/[(ω_d − ω)² + ζ²ω_n²] + j(R/2)(−ζω_n)/[(ω_d − ω)² + ζ²ω_n²] (9.188)
{Re[H(ω)]}² + {Im[H(ω)] + R/(4ζω_n)}² = [R/(4ζω_n)]² (9.189)

is an equation of a circle. The center is at (0, −R/(4ζω_n)), and the diameter is R/(2ζω_n).
Using this circle to fit the Nyquist plot, the natural frequency is the cross-point of the
circle and the imaginary axis (see Figure 9.20).
The Nyquist plot, as shown in Figure 9.20, is not exactly a circle. However, in the resonant region, the Nyquist plot is very close to a circle; we can thus use the circle fit to identify the corresponding modal parameters.
When the accelerance frequency response function (FRF) is used, it can be proven that
A(ω) ≈ Re[A(ω)] + j Im[A(ω)] = −(R/2)(ω_d − ω)ω²/[(ω_d − ω)² + ζ²ω_n²] + j(R/2)ζω_nω²/[(ω_d − ω)² + ζ²ω_n²] (9.190)
(Figure 9.20: Nyquist plot of H(ω); a circle with center (0, −R/(4ζω_n)) and diameter R/(2ζω_n), with half-power frequencies ω_1 and ω_2 and resonance at ω_d.)
{Re[A(ω)]}² + {Im[A(ω)] − Rω²/(4ζω_n)}² = [Rω²/(4ζω_n)]² (9.191)
9.3.4.2 Circle Fit
To carry out the circle fit for the receptance Nyquist plot, assume that the center is located at point (0, r_jk) and that the circle passes through the origin (0,0). The actual starting point of the Nyquist plot is not at the origin; however, for convenience, we place the starting point of the Nyquist circle exactly at the origin for a normal mode that is not affected by any other modes of the system. Now, rewrite Equation 9.191 as

x² + (y + r_jk)² = r_jk² (9.192)
and furthermore
x² + y² + 2r_jk y = 0 (9.193)
Denote
x_i = Re[_pα_jk(ω_i)] (9.194)

and

y_i = Im[_pα_jk(ω_i)] (9.195)
[y_1; y_2; …; y_m](2r_jk) = −[x_1² + y_1²; x_2² + y_2²; …; x_m² + y_m²] (9.196)

and its least-squares (pseudo-inverse) solution is

r_jk = −(1/2) [Σ_{i=1}^m y_i(x_i² + y_i²)] / [Σ_{i=1}^m y_i²] (9.197)
From the Nyquist plot, if we can further measure the half-power frequencies ω_1 and ω_2, as well as the damped natural frequency ω_dp for the pth mode, then the natural frequency ω_p and the damping ratio ζ_p can be obtained. With the help of the parameter r_jk, the mode shape can also be determined (Figure 9.21).
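The least-squares estimate of r_jk in Equation 9.197 uses only the measured Nyquist points (x_i, y_i). A small numerical sketch (Python, with synthetic noise-free points on a circle through the origin):

```python
import math

def fit_rjk(xs, ys):
    # Least-squares solution of y_i * (2 r_jk) = -(x_i^2 + y_i^2), Eq. 9.197
    num = sum(y * (x * x + y * y) for x, y in zip(xs, ys))
    den = sum(y * y for y in ys)
    return -0.5 * num / den

# synthetic Nyquist points on the circle x^2 + y^2 + 2*r*y = 0 (center (0, -r))
r_true = 2.5
thetas = [k * math.pi / 10 for k in range(1, 10)]
xs = [r_true * math.sin(t) for t in thetas]
ys = [r_true * (math.cos(t) - 1.0) for t in thetas]

r_est = fit_rjk(xs, ys)
print(round(r_est, 6))  # recovers 2.5
```

With measured (noisy) data, the same formula gives the best-fit circle parameter in the least-squares sense.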
Theoretically speaking, the center of the Nyquist circle is located exactly on the imaginary axis. Once the resonant point ω_dp is known, which must also lie on the imaginary axis, the center can be found at the halfway point from the origin to the resonant point. However, in Section 9.3.4.3, we will show that, due to the influences of
(Figure 9.21: Nyquist plot of A(ω); a circle with center (0, Rω²/(4ζω_n)) and diameter Rω²/(2ζω_n), with half-power frequencies ω_1 and ω_2 and resonance at ω_d.)
other modes, the starting point of the Nyquist circle will move away from the origin.
Therefore, it is better to use the above method to locate the center and the origin.
ζ = (ω_2 − ω_1)/(2ω_n) = (f_2 − f_1)/(2f_n) (9.198)

and

ω_n = ω_d/√(1 − ζ²) (9.199)

Therefore,

(1 − ζ²)ω_n² = ω_d² (9.200)
ω_p = [ω_dp² + (ω_2 − ω_1)²/4]^(1/2) (9.201)
Using Equations 9.198 and 9.201, the natural frequency and the damping ratio can
be calculated.
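A sketch of this calculation (Python; inside Equation 9.198, ω_n is approximated by ω_d, which is adequate for light damping, and the numbers are synthetic):

```python
import math

def modal_from_half_power(w1, w2, wd):
    # Eq. 9.198 with omega_n approximated by omega_d (light damping),
    # then Eq. 9.199 to recover the undamped natural frequency
    zeta = (w2 - w1) / (2.0 * wd)
    wn = wd / math.sqrt(1.0 - zeta ** 2)
    return wn, zeta

# synthetic check: a system with wn = 10 rad/s and zeta = 0.05
wn_true, z_true = 10.0, 0.05
wd = wn_true * math.sqrt(1.0 - z_true ** 2)
w1 = wd - z_true * wn_true   # half-power spacing is about 2*zeta*wn
w2 = wd + z_true * wn_true
wn_est, z_est = modal_from_half_power(w1, w2, wd)
print(round(wn_est, 3), round(z_est, 4))
```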
Problems
1. Given an SDOF system with m = 10 kg and k = 100 N/m, whose damping ratio is a random variable uniformly distributed between 0.01 and 0.1. Suppose that this system is excited by a random initial velocity v_0 ~ N(1, 0.2) m/s.
a. Calculate the free decay responses and identify the corresponding damp-
ing coefficient.
b. Use the least squares curve fit to find the relationship of the identified
damping coefficient and generated damping ratio. Explain your results.
2. With the given six earthquake records (El Centro, Kobe, Northridge, in
both S-N and E-W directions):
a. Calculate the means, RMS values, and standard deviations of each
record.
f_N(x) = [1/(√(2π)σ)] exp[−(x − µ)²/(2σ²)]
C = [ 1050  −1000      0
     −1000   2050  −1000
         0  −1000   3200 ] kN·s/m

and K = [ 1000  −1000      0
         −1000   2000  −1000
             0  −1000   3200 ] MN/m
6. Use the Nyquist circle fit to find the corresponding natural frequencies and
damping ratios. Compare your results with that obtained in Problem 5a.
7. a. Estimate the mean and the standard deviation of the following data:
1. Yielding
2. Excessive deformation
3. Brittle fracture
4. Ductile fracture
5. Buckling
6. Fatigue
Failure modes 1–5 involve “level crossing” (level exceeding), while failure mode
6 is primarily for high-cycle fatigue.
10.1 3σ Criterion
When a quantitative description of either crossing levels or fatigue cycles is addressed,
probability is involved. For most cases, 0% failure is not achievable within reason-
able cost. For this reason, a small percentage of failure probability can be allowed.
This does, however, result in two issues. The first is how to establish the allowed
level of failure probability, and the second is how to calculate the failure probability.
R ≥ 3σS (10.1)
Here, R is the strength, that is, the capacity to resist the applied stress without failure; although it is essentially a random variable, it is treated as deterministic.
When the mean value of the stress is nonzero, namely, μS, then
R ≥ μS + 3σS (10.2)
10.1.2 3σ Criterion
Given that S(t) is Gaussian and has zero mean, then
Both Equations 10.3 and 10.4 indicate that the failure probability of the stress
level being greater than 3σS is less than 0.3%.
10.1.3 General Statement
10.1.3.1 General Relationship between S and R
A more general case is when both the stress S and the strength R are random. The
mean of R can then be expressed as
μR ≥ ξσS (10.5)
where the parameter ξ is a function of Q and CR. This is illustrated in Figure 10.1.
Here, Q is the ratio of μS and σS (which are the mean and standard deviation of the
stress process S(t), respectively), which is the inverse of the coefficient of variation
of S(t).
Q = µ_S/σ_S (10.6)

C_R = σ_R/µ_R (10.7)
Note that R is a random variable with a mean of μR and a standard deviation of σR.
The parameter ξ can be determined as
ξ = 3η (10.8)
Failures of Systems 521
(Figure 10.1: a sample of the stress process S(t) with mean µ_S, and the strength R with mean µ_R; the design margin above µ_S is ξσ_S.)
A larger value of the safety factor is taken when larger values of Q (larger than
0.5) and CR (larger than 0.1) are used. Table 10.1 lists the values of η.
Example 10.1
A stress S(t) is Gaussian with σS = 70 MPa and μS = 35 MPa. Design the mean
strength μR of a rod that satisfies the generalized 3σ rule, assuming that CR = 0.1.
Q = µ_S/σ_S = 35/70 = 0.5
Table 10.1
Safety Factor η
                              Q
C_R     0       0.1     0.2     0.3     0.4     0.5     0.6     0.7     0.8
0.00  1.0000  1.0033  1.0167  1.0333  1.0500  1.0833  1.1333  1.1600  1.2000
0.07  1.0267  1.0367  1.0667  1.0833  1.1000  1.1167  1.1433  1.1700  1.2100
0.10  1.0433  1.0667  1.0833  1.1000  1.1167  1.1500  1.1600  1.1733  1.2200
0.15  1.1167  1.1267  1.1400  1.1667  1.2000  1.2167  1.2500  1.3000  1.4733
0.20  1.2233  1.2500  1.2667  1.2833  1.3600  1.3750  1.4167  1.4700  1.5333
From Table 10.1, when C_R = 0.1 and Q = 0.5, the value of η is found to be 1.15, so ξ = 3η = 3.45.
The corresponding strength is

µ_R ≥ ξσ_S = 3.45 × 70 = 241.5 MPa

Now, suppose that the strength of the rods to be designed is random in aspects such as size, yielding stress, etc. To further reduce the chance of failure of the rods, let

µ_R − 1.0σ_R ≥ µ_S + 3σ_S

This results in, with σ_R = C_Rµ_R,

µ_R ≥ (µ_S + 3σ_S)/(1 − C_R) = (35 + 210)/0.9 ≈ 272.2 MPa
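Example 10.1 can be reproduced numerically (a Python sketch; only the C_R = 0.1 row of Table 10.1 is transcribed, with linear interpolation in Q):

```python
# Generalized 3-sigma design (Example 10.1): mu_R >= xi * sigma_S, xi = 3*eta.
# eta taken from the C_R = 0.1 row of Table 10.1
Q_grid = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
eta_row = [1.0433, 1.0667, 1.0833, 1.1000, 1.1167, 1.1500, 1.1600, 1.1733, 1.2200]

def eta_lookup(Q):
    # linear interpolation on the tabulated row
    for i in range(len(Q_grid) - 1):
        if Q_grid[i] <= Q <= Q_grid[i + 1]:
            f = (Q - Q_grid[i]) / (Q_grid[i + 1] - Q_grid[i])
            return eta_row[i] + f * (eta_row[i + 1] - eta_row[i])
    raise ValueError("Q outside table")

sigma_S, mu_S = 70.0, 35.0
Q = mu_S / sigma_S            # 0.5
xi = 3.0 * eta_lookup(Q)      # 3 * 1.15 = 3.45
mu_R = xi * sigma_S           # required mean strength, MPa
print(Q, round(xi, 2), round(mu_R, 1))  # 0.5 3.45 241.5
```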
contributions to study the first passage failure. In this section, the probability of the
first passage failure will be further considered.
10.2.1 Introduction
In general, once the stress S(t) is greater than the strength R, failure will occur. That is,

S(t) ≥ R (10.9)

For example, the first passage failure can be seen in the brittle failure mode.
Now, let T and T_s be the time to failure and the duration of service life, respectively. The failure probability, p_f, of the first passage failure can then be expressed by

p_f = P(T ≤ T_s) (10.10)

Using Y to denote the maximum value of the peaks of the stress process, it can be seen that the event Y > R implies a failure mode. Thus, the failure probability can be written as

p_f = P(Y > R) (10.11)
10.2.2 Basic Formulation
10.2.2.1 General Formulation
Assume that the strength R(t) is characteristically large in comparison to the stress S(t), so that up-crossings of R(t) are rare events; under this assumption, the up-crossings, with rate v_R^+, form a Poisson process.
P(no up-crossing in T_s) = ∏_{i=1}^k exp[−v_R^+(t_i)Δt] = exp[−Σ_{i=1}^k v_R^+(t_i)Δt] (10.14)
Δt → 0 (10.15)
P(no up-crossing in T_s) = exp[−∫_0^{T_s} v_R^+(t) dt] (10.16)

p_f = 1 − exp[−∫_0^{T_s} v_R^+(t) dt] (10.17)
v_R^+(t) = v_0^+ exp{−(1/2)[(R(t) − µ_s)/σ_s]²} (10.18)

Here, v_0^+ is the rate of zero up-crossings, and µ_s and σ_s are the mean and the standard deviation of the stress, respectively.
10.2.2.2 Special Cases
Now consider several special cases of failure probability.
p_f = v_R^+T_s (10.19)

v_0^+ = f_0 (10.20)
In this case, f0 is the center frequency of the narrowband process, and the failure
probability is
p_f = (f_0T_s) exp{−(1/2)[(R(t) − µ_s)/σ_s]²} (10.21)
|S(t)| = R (10.22)

S(t)|_max = R (10.23)

and

S(t)|_min = −R (10.24)

µ_s = 0 (10.25)

v_0^+ = 2f_0 (10.26)
p_f = (2f_0T_s) exp[−(1/2)(R/σ_s)²] (10.27)
10.2.3.1 Exact Distribution
The exact distribution is an additional approach for when R is not a function of time.
The peak Zi, however, forms a random process.
Next, consider the case of a distribution FZ (z).
Suppose that Zi are mutually independent and Bi denotes the ith event for Zi < R;
then
Bi = Zi < R (10.28)
B is the event for which Y, the largest peak in a sample of size n, is less than R.
This is denoted by
B = Y < R (10.29)
B = B_1 ∩ B_2 ∩ ⋯ ∩ B_n = ∩_{i=1}^n B_i (10.30)

P(B) = ∏_{i=1}^n P(B_i) = [P(B_i)]^n (10.31)
Note that
Therefore,
If S(t) is narrowbanded, then peak Z will have a Rayleigh distribution such that
F_Z(z) = 1 − exp[−(1/2)(z/σ_s)²] (10.37)
n = 2f0Ts (10.39)
pf can be approximated as
p_f ≈ (2f_0T_s) exp[−(1/2)(R/σ_s)²] (10.40)
Comparing Equation 10.40 with Equation 10.30, we see that the failure prob-
ability can be obtained through alternative approaches. Apparently, larger resistance
strength R will result in smaller failure probability.
µ_Y = β + 0.577/α (10.42)

σ_Y = 1.283/α (10.43)

α = nf_Z(β) (10.44)

β = F_Z^(−1)(1 − 1/n) (10.45)
n
µ_Y = [√(2 ln n) + 0.577/√(2 ln n)]σ_S (10.46)

σ_Y = 1.283σ_S/√(2 ln n) (10.47)

C_Y = σ_Y/µ_Y = 1/(1.5588 ln n + 0.4497) (10.48)

α = √(2 ln n)/σ_S (10.49)

and

β = √(2 ln n) σ_S (10.50)
Example 10.2
Consider the case with the EVI where n = 1000 and σs = 50. Calculate the mean
and the standard deviation and plot the corresponding PDF of the EVI.
The PDF of the EVI can be obtained as the derivative of F_Y(y) (see Equation 10.41):

f_Y(y) = d[F_Y(y)]/dy = α e^(−e^(−α(y−β))) e^(−α(y−β))

With n = 1000 and σ_s = 50, we have α = 0.074 and β = 185.85. The mean is 193.61 and the standard deviation is 17.26. The PDF is plotted in Figure 10.2.
From Figure 10.2, the plot of the PDF is asymmetric with a heavier right tail, as discussed in Chapter 2 (recall Figure 2.9). Since the EVI has a heavier right tail, the chance of obtaining a larger extreme value y is comparatively greater.
(Figure 10.2: PDF of the EVI, f_Y(y) vs. y, for n = 1000 and σ_s = 50.)
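The numbers in Example 10.2 can be checked directly from Equations 10.49, 10.50, 10.42, and 10.43 (a Python sketch):

```python
import math

# EVI (Gumbel) parameters for the largest of n Rayleigh-distributed peaks,
# with n = 1000 and sigma_s = 50 as in Example 10.2
n, sigma_s = 1000, 50.0
alpha = math.sqrt(2.0 * math.log(n)) / sigma_s   # Eq. 10.49
beta = math.sqrt(2.0 * math.log(n)) * sigma_s    # Eq. 10.50
mu_Y = beta + 0.577 / alpha                      # Eq. 10.42
sigma_Y = 1.283 / alpha                          # Eq. 10.43
print(round(alpha, 4), round(beta, 2), round(mu_Y, 2), round(sigma_Y, 2))
# 0.0743 185.85 193.61 17.26
```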
Tn = n (10.52)
When evenly distributed, the peak having a probability of exceedance equal to 1/n is

P(Z > S_0) = 1 − F_Z(S_0) = 1/n (10.53)
10.3 Fatigue
In Chapter 5, we described the issue of fatigue by classifying it into two categories: high-cycle and low-cycle fatigue. The low-cycle test is further classified into type C and type D low-cycle tests. However, the focus of those studies was on the time-varying development, the model of random processes, and the corresponding statistical parameters. In this section, we further discuss the issue of fatigue, with the focus on fatigue failures.
(Figure: S–N curve; stress from 50 to 300 MPa vs. life from 10^0 to 10^7 cycles on a logarithmic scale.)
Fatigue is a special failure mode, perhaps the most important failure mode in mechanical design, characterized by the following:
Let N denote the cycles of fatigue and S the amplitude of stress. The S–N curve
can be used to describe the phenomenon of fatigue through
N = N(S;A) (10.54)
10.3.2 Strength Models
In the literature, there are several fatigue models available. Fundamentally, the stress-
based approach, the strain-based approach, and the fracture mechanics approach are
the basic considerations.
10.3.2.1 High-Cycle Fatigue
The key parameter obtained through fracture mechanics is the stress intensity factor
range given by
ΔK = Y(a)S√(πa) (10.55)
Here, ΔK is the stress intensity factor range; S is the applied stress range; a is the
crack depth for a surface flaw or half-width for a penetration flaw; and Y(a) is the
geometry correction factor.
The crack growth rate da/dn can be represented as
da/dn = C(ΔK)^m (10.56)
Equation 10.56 is referred to as the Paris law (Paris 1964). In this instance, C and
m are empirical constants. Therefore, integrating Equation 10.56 will yield
N = [1/(S^m Cπ^(m/2))] ∫_{a_0}^{a_f} da/[Y^m(a)a^(m/2)] (10.57)
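Equation 10.57 can be evaluated numerically. In the sketch below (Python), the geometry factor Y(a), the constants C and m, and the crack sizes are illustrative assumptions, not values from the text:

```python
import math

def paris_life(S, a0, af, C, m, Y=lambda a: 1.12, steps=20000):
    """Cycles to failure by Eq. 10.57:
    N = [1/(S^m C pi^(m/2))] * integral over a of da/[Y(a)^m a^(m/2)],
    evaluated here with the midpoint rule."""
    da = (af - a0) / steps
    integral = sum(da / (Y(a) ** m * a ** (m / 2.0))
                   for a in (a0 + (k + 0.5) * da for k in range(steps)))
    return integral / (S ** m * C * math.pi ** (m / 2.0))

# illustrative: S = 100 MPa, crack grows from 1 mm to 25 mm, C and m typical values
N = paris_life(S=100.0, a0=1.0e-3, af=25.0e-3, C=1.0e-11, m=3.0)
print(f"{N:.3e} cycles")
```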
10.3.2.2.2 Assumption
A stress process can be described by discrete events, such as numbers of stress cycles. Based on the spectrum of amplitudes of the stress cycles, the accumulated damage can be defined as
D = Σ_{i=1}^k n_i/N_i (10.58)
Equation 10.58 is a more general description than Equation 5.145. Again, D signi-
fies the damage index, where if
D ≥ 1.0 (10.59)
FIGURE 10.4 Miner’s rule. (a) Variation of stress. (b) Stress level vs. failure cycles.
Markov chain to study the chance of failure at a different level (in fact, a different level of D). In the following, we examine fatigue failure from another angle: whenever Equation 10.59 is reached, failure deterministically occurs, so we now consider the chance of reaching D ≥ 1.0.
10.3.3 Fatigue Damages
The concept of the damage model is important because modeling structural damage is difficult. One specific reason is the fact that the excitation is often a random process. Here, some specific models are described for a better understanding of modeling random damages.
FIGURE 10.5 Probability mass function. (a) Range of ΔSi. (b) PMF vs. ΔSi.
f_i = n_i/n (10.60)

where

n = Σ_i n_i (10.61)
As a result, fi is shown as the probability mass function of the random variable Si.
The total fatigue damage can now be written as
D = Σ_{i=1}^k n_i/N_i = n Σ_{i=1}^k f_i/N_i (10.62)
NS^m = A (10.63)

A = [1/(Cπ^(m/2))] ∫_{a_0}^{a_f} da/[Y^m(a)a^(m/2)] (10.64)

D = (n/A) Σ_{i=1}^k f_iS_i^m (10.65)

E(S^m) = Σ_{i=1}^k f_iS_i^m (10.66)

D = (n/A)E(S^m) (10.67)

D = (n/A)S_e^m (10.68)

S_e = [E(S^m)]^(1/m) (10.69)
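Equations 10.62 through 10.69 can be sketched as follows (Python; the S–N constants A and m and the loading blocks are illustrative):

```python
# Miner's-rule damage with the linear S-N model N*S^m = A (Eqs. 10.62 and 10.63)
m, A = 3.0, 1.0e12           # illustrative S-N constants (stress in MPa)
blocks = [(200.0, 1.0e4),    # (stress level S_i, applied cycles n_i)
          (150.0, 5.0e4),
          (100.0, 2.0e5)]

D = sum(n_i * S_i ** m / A for S_i, n_i in blocks)     # sum of n_i/N_i, N_i = A/S_i^m
n = sum(n_i for _, n_i in blocks)
E_Sm = sum(n_i * S_i ** m for S_i, n_i in blocks) / n  # Eq. 10.66 with f_i = n_i/n
Se = E_Sm ** (1.0 / m)                                 # equivalent stress, Eq. 10.69
print(round(D, 4), round(Se, 1))
```

Failure is predicted when D reaches 1.0 (Equation 10.59); for these illustrative blocks, D is below 1, so additional cycles remain.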
fi ≈ f S (s)Δs (10.70)
In this example, the total fatigue damage is the sum of all incremental damages
in each window ΔS, since
D ≈ n Σ_{i=1}^k f_S(s)Δs/N(s) (10.71)
Δs → 0 (10.72)
D = n ∫_0^∞ [f_S(s)/N(s)] ds (10.73)
D = (n/A) ∫_0^∞ s^m f_S(s) ds (10.74)

D = (n/A)E(S^m) (10.75)
Comparing Equations 10.67 and 10.75, the expected value is calculated by using
the integral described in Equation 10.74.
E(S^m) = S_e^m = (√2 σ)^m Γ(m/2 + 1) (10.76)

E(S^m) = S_e^m = (2√2 σ)^m Γ(m/2 + 1) (10.77)

D = (v_0^+τ/A)(√2 σ)^m Γ(m/2 + 1) (10.78)

D = (v_0^+τ/A)(2√2 σ)^m Γ(m/2 + 1) (10.79)

Here,

n = v_0^+τ (10.80)

F_S(s) = 1 − e^(−(s/δ)^ξ) (10.81)

E(S^m) = δ^m Γ(m/ξ + 1) (10.82)
Let the design stress for static failure modes be S 0. The probability of a “once-in-
a-lifetime” failure is represented by
P(S > S_0) = 1/N_S (10.83)
Here, NS is the total number of stress applications in the service life. Substitution
of Equation 10.81 into Equation 10.83 will yield
S_0 = [ln(N_S)]^(1/ξ)δ (10.84)

D = (N_S/A)S_0^m[ln(N_S)]^(−m/ξ)Γ(m/ξ + 1) (10.85)
D = Σ_{i=1}^k (v_i^+τ_i/A)(√2 σ)^m Γ(m/2 + 1) (10.86)

D = Σ_{i=1}^k (v_i^+τ_i/A)(2√2 σ)^m Γ(m/2 + 1) (10.87)
i =1
A = A_0[1 − µ_S/S_u]^m, µ_S ≥ 0 (10.88)
In the above, W(f_0) is the power spectral density function at the fundamental frequency f_0, and ΣW(f_h) is the sum of the power spectral densities (PSDs) of the rest of the frequency components.
Equation 10.89 can be used as a criterion for using the following linear S–N model:
NS^m = A (10.90)
D = (v_0^+τ/A)(√2 σ)^m Γ(m/2 + 1), S and A based on amplitude (10.91)
FIGURE 10.6 (a) Non-narrowband process that can be treated as narrowband. (b) Narrowband
process.
D = (v_0^+τ/A)(2√2 σ)^m Γ(m/2 + 1), S and A based on range (10.92)
(Figure 10.7: displacement response time histories, panels (a) and (b), over 0 to 10 s.)
show the concept of the rainflow method, the responses in Figures 10.7 and 10.8
are different):
1. A rainflow path starts at a trough, continuing down the roof until it encoun-
ters a trough that is more negative than the origin. (For example, the path
starts at 1 and ends at 5.)
2. A rainflow path is terminated when it encounters a flow from a previous path. (For example, the path that began at 3 is terminated as shown in Figure 10.8.)
3. A new path is not started until the path under consideration is stopped.
4. Trough-generated half-cycles are defined for the entire record. In each
cycle, the stress range Si is the vertical excursion of a path. The mean µ Si is
the midpoint. (For example, see S1 and S2 in Figure 10.8b.)
5. The process is repeated in reverse with peak-generated rainflow paths. For sufficiently long records, each trough-generated half-cycle is matched to a peak-generated half-cycle to form a whole cycle. One may choose to analyze a record only for peak- (or trough-) generated half-cycles, thus assuming that each cycle is a full cycle.
(Figure 10.8: rainflow paths on a stress history S(t); (a) paths starting at points 1 through 5; (b) stress ranges S_1 and S_2.)
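The five rules above can be implemented with the standard stack-based counting algorithm (this Python sketch follows the ASTM E1049-style formulation, which matches the rainflow rules and counts the leftover residue as half cycles):

```python
def turning_points(series):
    """Reduce a record to its sequence of peaks and troughs."""
    tp = [series[0]]
    for x in series[1:]:
        if x == tp[-1]:
            continue
        if len(tp) >= 2 and (tp[-1] - tp[-2]) * (x - tp[-1]) > 0:
            tp[-1] = x          # still rising/falling: extend the excursion
        else:
            tp.append(x)
    return tp

def rainflow(series):
    """Return a list of (stress range, count) pairs; count is 1.0 or 0.5."""
    pts, cycles = [], []
    for x in turning_points(series):
        pts.append(x)
        while len(pts) >= 3:
            X = abs(pts[-1] - pts[-2])
            Y = abs(pts[-2] - pts[-3])
            if X < Y:
                break
            if len(pts) == 3:           # range Y contains the starting point
                cycles.append((Y, 0.5))
                pts.pop(0)
            else:
                cycles.append((Y, 1.0))  # full cycle from pts[-3] to pts[-2]
                del pts[-3:-1]
    for a, b in zip(pts, pts[1:]):       # leftover residue: half cycles
        cycles.append((abs(b - a), 0.5))
    return cycles

# the ASTM E1049 sample history of peaks and troughs
hist = [-2, 1, -3, 5, -1, 3, -4, 4, -2]
totals = {}
for rng, c in rainflow(hist):
    totals[rng] = totals.get(rng, 0.0) + c
print(sorted(totals.items()))  # [(3, 0.5), (4, 1.5), (6, 0.5), (8, 1.0), (9, 0.5)]
```

Each counted range S_i and its count n_i can then be fed directly into the damage sum of Equation 10.62.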
D = λ(ε, m)D_NB (10.93)

Here, λ(ε, m) is a rainflow correction factor, and ε is the spectral width parameter. D_NB is the damage estimated in Equations 10.94 and 10.95 using a narrowband process. Based on the amplitude, with the coefficient of fatigue strength A,
D_NB = (v_0^+τ/A)(√2 σ_S)^m Γ(m/2 + 1) (10.94)

D_NB = (v_0^+τ/A)(2√2 σ_S)^m Γ(m/2 + 1) (10.95)
where empirically
and
F_H(h) = 1 − exp{−(1/2)[h/(√2 β_kσ_S)]²} (10.99)
β_k = M_2M_k/(M_0M_(k+2)) (10.100)

k = 2.0/m (10.101)

In Equation 10.100, M_j is the jth moment of the one-sided spectral density function:

M_j = ∫_0^∞ f^j W_S(f) df (10.102)
D = λ_kD_NB (10.103)

where

λ_k = β_k^m/α (10.104)

D = λ_LD_NB (10.106)

where

λ_L = M_(2/m)^(m/2)/v_0^+ (10.107)
A typical value for the fracture mechanics model or for welded joints is m = 3.
Once such a critical point is reached, the failures occur; for instance, either the stress S(t) exceeds the allowed strength R or the damage index D reaches 1.0. Explicitly, before these critical points, no damage is considered to have occurred.
From the viewpoint of a random process, the process is “memoryless.” The process
is often used to describe the stress time history but not the history of damage growth.
In real-world applications, damages that experience growth over time must be
dealt with. Examples of such damages include crack propagations, incidents of aging
and corrosions, gradual loss of pre-stress in reinforced concrete elements, etc. In
these situations, the earlier damage will not be self-cured and will affect the remain-
ing damage process. Thus, it will no longer be “memoryless.”
Very often, such a damage process is random and is caused by the nonlinear
stiffness of systems. While in Chapter 5, we discussed the low-cycle development
by using a Markov model on the type D fatigue, in the following, we will further
consider the corresponding failure mode and its related parameters.
Δε_p/2 = ε′_f(2N)^c (10.108)
In this instance, Δεp /2 refers to the amplitude of the plastic strain and ε′f denotes
an empirical constant called the fatigue ductility coefficient or the failure strain for
a single reversal. Furthermore, 2N is the number of reversals to failure of N cycles,
while c is an empirical constant named the fatigue ductility exponent, commonly
ranging from −0.5 to −0.7 for metals in time-independent fatigue.
10.3.4.2 Variation of Stiffness
In the course of type D low-cycle fatigue, a structure will have decreased stiffness, which can be seen when the inelastic range is reached repeatedly. Rzhevsky and Lee (1998) studied the decreasing stiffness and found that the decrease can be seen as a function of the accumulation of inelastic deformation. This type of accumulative damage is more complex than pure low-cycle fatigue, because during certain cycle strokes the deformation will be sufficiently large to yield the structure, whereas in other cycles the deformations can be comparatively small.
Empirically, the stiffness and the accumulation can be expressed as
k_n = k_o e^(−a_o sinh^(−1)(γ_n)) (10.109)
(Figure 10.9: stiffness vs. damage factor; steel remains near k_o while RC decreases toward k_0.65.)
Here, ko and kn are, respectively, the original stiffness and the stiffness after n
semicycle inelastic deformations; ao is the peak value of the first inelastic deforma-
tion. The subscripts 0, 1, and n denote the cycles without inelastic, the first, and the
nth cycle of inelastic deformations, respectively. Additionally, the term γn is defined as
γ_n = Σ_{i=1}^n |a_i| (10.110)
The term γn is called the damage factor, which is the summation of the absolute
value of all the peak values of the inelastic deformations.
For an initial stiffness of ko, the values of the inelastic deformation ai can be speci-
fied in an experimental study of certain types of reinforced concrete (RC) components
and structures, comparing with steel, whose stiffness is virtually constant. Figure 10.9
conceptually shows the relation between the decreased stiffness and the damage factor.
For comparison, both the constant and the decreasing stiffness are plotted in Figure 10.9; conceptually, both start at the same value. It is then seen that, as the inelastic deformations accumulate, the constant stiffness maintains its value until its final failure stage, whereas the decreasing stiffness begins to diminish from the first semicycle and eventually reaches total failure; this point is conceptually plotted where the constant stiffness fails.
Example 10.3
k_0.65 = k_0 e^(−a_o sinh^(−1)(γ_0.65)) (10.111)

γ_0.65 = Σ_{i=1}^{0.65n} a_i (10.112)
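A numerical sketch of Equations 10.109 and 10.110 (Python; k_o, a_o, and the peak inelastic deformations a_i are illustrative):

```python
import math

def degraded_stiffness(k0, a0, peaks):
    # Eq. 10.110: damage factor gamma_n = cumulative sum of |a_i|
    # Eq. 10.109: k_n = k0 * exp(-a0 * asinh(gamma_n))
    gamma = 0.0
    ks = []
    for a in peaks:
        gamma += abs(a)
        ks.append(k0 * math.exp(-a0 * math.asinh(gamma)))
    return ks

k0 = 100.0                       # initial stiffness (illustrative units)
peaks = [0.5, 0.8, 0.6, 1.0]     # peak inelastic deformation per semicycle
ks = degraded_stiffness(k0, a0=peaks[0], peaks=peaks)
print([round(k, 2) for k in ks])  # monotonically decreasing from k0
```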
F = R − L (10.113)
where we use L to denote the realistic load. That is, when F = 0 is reached, failure occurs. Similar to Equation 1.154 in Chapter 1, the failure probability is given as

p_f = P(F ≤ 0) = P(R ≤ L) (10.114)

L = L_1 + L_2 + L_3 + … (10.115)

And each hazard load L_i is likely time varying, that is,

L_i = Q_i(t) (10.116)

Therefore, it can be very difficult to obtain the PDF of the combined load, because the loads are actually random processes. In most cases, the loads are not simply additive.
In developing probability-based designs of bridges subjected to MH extreme loads,
researchers have pursued the following. Wen et al. (1994) first provided a compre-
hensive view of multiple loads and divided the total failure probability into several
partial terms. Although there is no detailed model of how to formularize these partial
failure probabilities, Wen et al.’s work pointed to a direction to establish closed-form
analytical results for limit state equations, instead of using empirical or semi-empirical
approaches. Nowak and Collins (2000) discussed alternative ways for several detailed
treatments in load combinations, instead of partial failure probabilities. Ghosn et al.
(2003) provided the first systematic approach on MH loads for establishing the design
limit state equations. They first considered three basic approaches: (1) Turkstra’s rule,
(2) the Ferry–Borges model, and (3) Wen’s method. In addition, they also used Monte
Carlo simulations (which will be discussed in Chapter 11). Among these approaches,
Ghosn et al. focused more on the Ferry–Borges method, a simplified model for MH
load combinations. Hida (2007) believed that Ghosn's suggestion could be too conservative and discussed several engineering examples of less common load combinations.
In the above-mentioned major studies, the loads, including the common live (truck) load and the extreme loads, are assumed independent. Moreover, whether or not they occur in combination, the distributions of their intensity are assumed to remain unchanged. These methods unveiled a key characteristic of MH loads: the challenge of establishing multihazard load and resistance factor design (MH-LRFD) is the time-variable load combination. With simplified models, these approaches provided possible
procedures to calculate the required load and resistance factors for design purposes.
Because of the lack of sufficient statistical data, simplifying assumptions may either underestimate or overestimate certain factors. In some cases, the assumptions can be shown to be accurate, while in other cases, the distributions may vary and the independence may not hold. In this study, we have pursued both theoretical and simplified approaches so that results may be compared and evaluated.
FIGURE 10.10 PDF of time-invariant and time-variable loads and their combinations.
(a) L2: constant load; (b) L2: time varying load; (c) time varying load combination.
and L2, a time-invariant load. Whenever L1 occurs, L2 is there “waiting” for the com-
bination. For example, if we have truck and dead loads only, then we can treat the
combined load as a regular random variable without considering the time variations.
In Figure 10.10b, when combining two (or more) time-variable loads, there is a chance that L1 occurs without L2. There is also a chance that a larger-valued L1 occurs together with a smaller-valued L2, as illustrated in Figure 10.10c, or that a smaller-valued L1 occurs together with a larger-valued L2, for example, at t4, as specifically shown in Figure 10.10c. In these cases, we may have different sample spaces in calculating the corresponding distributions, as well as the L1 and L2 distributions.
L1 + L2 + L3 = L1 (10.117)
L1 + L2+ L3 ≠ L1 (10.118)
ℓ(t) = ℓ_1(t) + ℓ_2(t) (10.119)

where ℓ(t), ℓ_1(t), and ℓ_2(t) are the combined load and the two specific time-varying loads, respectively. At a given time t, they are deterministic.
In most cases, the maximum value of the combined load ℓ(t) does not equal the sum of the maximum or amplitude values of ℓ_1(t) and ℓ_2(t). That is,

max ℓ(t) ≠ max ℓ_1(t) + max ℓ_2(t) (10.120)
In some previous studies, the amplitudes of ℓ_1(t) and ℓ_2(t) are treated as constant when the time duration Δt is taken to be sufficiently small. However, when Δt is indeed sufficiently small for one load at a certain moment, it may not be sufficiently small for the second load. Thus, such a treatment, as in the Ferry–Borges model, may yield inaccurate results.
p_f = ∫_{−∞}^0 f_(R−L)(x) dx (10.121)
If R − L is normally distributed, this reduces to

p_f = Φ(−β) (10.122)

where Φ(−β) is the CDF of the normally distributed standardized variable defined in Chapter 1 (Nowak and Collins 2000).
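For normally distributed, independent R and L, Equation 10.121 reduces to the standard closed form p_f = Φ(−β), with reliability index β = (µ_R − µ_L)/√(σ_R² + σ_L²) (a Python sketch; the numbers are illustrative):

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def pf_normal(mu_R, sig_R, mu_L, sig_L):
    # F = R - L is normal; pf = P(F < 0) = Phi(-beta)
    beta = (mu_R - mu_L) / math.hypot(sig_R, sig_L)
    return beta, phi(-beta)

beta, pf = pf_normal(mu_R=250.0, sig_R=25.0, mu_L=150.0, sig_L=30.0)
print(round(beta, 3), f"{pf:.2e}")
```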
Turkstra and Madsen (1980) suggested a simplified method for load combinations. In many cases, this model is an oversimplified treatment of random processes because it does not handle the load combination at the random-process level at all. Instead, it directly assumes that whenever two loads are combined, the "30% rule" can always be used. On the other hand, this simplifying assumption can sometimes be rather conservative.
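As a numeric illustration of why a percentage-rule combination can be conservative, the sketch below compares the true maximum of a combined two-load history against a max(L1max + 0.3·L2max, 0.3·L1max + L2max) style combination. The distributions, sample counts, and the exact form of the rule are assumptions for illustration, not taken from the text.

```python
# Illustrative sketch (assumed distributions): compare the exact maximum of a
# combined two-load history with a "30% rule" style combination.
import random
random.seed(3)

steps = 10_000
L1 = [abs(random.gauss(0.0, 1.0)) for _ in range(steps)]  # load-effect history 1
L2 = [abs(random.gauss(0.0, 0.6)) for _ in range(steps)]  # load-effect history 2

true_max = max(a + b for a, b in zip(L1, L2))              # exact combined maximum
rule_max = max(max(L1) + 0.3 * max(L2), 0.3 * max(L1) + max(L2))
```

Both quantities are bounded above by max(L1) + max(L2), the worst-case coincidence of the two individual maxima; the rule replaces the joint random process with a simple deterministic combination of the marginal maxima.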
Another method is the Ferry Borges–Castanheta model. A detailed description of the Ferry–Borges model can be found in Ghosn (2003). In order to handle time-variable loads, the Ferry–Borges model breaks down the entire bridge life into sufficiently short time periods, in which these loads can be assumed constant so that the variables can be added. Based on the discussion of extreme distribution in
Chapter 2, the cumulative probability function, FX1,2,max.T, of the maximum value of the load combination, X1,2,max.T, in time T is obtained by

FX1,2,max.T(x) = [FX1,2(x)]^(T/t) (10.123)

Here, FX1,2(x) is the CDF in the short period. In Equation 10.123, the value t is the average duration of a single event, say, several seconds; therefore, the integer ratio T/t can be a very large value. If a certain error exists, generated by using Equation 10.123,
which is unavoidable in many cases, then the resulting CDF FX1,2 ,max.T can contain
unacceptable errors.
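The error amplification can be seen numerically: with T/t on the order of 10^8, a per-period CDF error of only 10^-9 changes the life-span CDF substantially. The values below are illustrative assumptions.

```python
# Sketch of F_max,T(x) = [F(x)]**(T/t): a tiny error in the short-period CDF
# is amplified by the very large exponent T/t. Values are illustrative.
import math

T = 75 * 365.25 * 86400.0   # bridge life span in seconds (75 years)
t = 10.0                    # average duration of a single event, seconds
n = T / t                   # the exponent T/t, about 2.4e8

F_true = 1.0 - 1.0e-9       # short-period CDF at some level x
F_err  = 1.0 - 2.0e-9       # same CDF with an error of only 1e-9

Fmax_true = F_true ** n     # approximately exp(-n * 1e-9)
Fmax_err  = F_err ** n
```

Here Fmax_true is about 0.79 while Fmax_err drops to about 0.62: an error invisible in the short-period distribution becomes dominant after exponentiation.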
To use the Ferry–Borges model, one needs to make several simplifying assumptions, which introduce errors. The largest error, generally speaking, comes from Equation 10.123, which is based on the assumption that the distributions in each individual period t are identical. In addition, the load combinations in different periods t must be independent; that is, the random process of the load combination must be an independent stationary process, which is not true. Because the ratio T/t can be very large, large errors may result.
L1 + L2 + L3 + … ≥ R (10.124)
There are three cases for which Equation 10.124 holds. The first is that an individual load effect is greater than or equal to R. In this case, when this single load occurs, we need to consider only the peak level of this load. The second is that none of the individual load effects is greater than or equal to R, but their combined effect is. In this case, considering only the peak level of a single load is not sufficient. The third case is the combination of the first and second cases, which will be considered in Section 10.4.2.5.
10.4.2.5 Independent Events
Let us continue the discussion on Equation 10.124. Besides the two cases men-
tioned in Section 10.4.2.3.1, the third possibility is the contribution of both of these
cases. Suppose that we now have three different kinds of loads, L1, L2, and L3. From
Equation 10.124, we can write
P(L1 + L2 + L3 ≥ R) = pf (10.125)
To add up the load effects of Equation 10.125 without miscounting, Wen et al. (1994)
suggested the use of total probability. That is, the entire case is dissected into sev-
eral mutually exclusive subcases. Each subcase only contains a single kind of load
(or load combinations). This way, we can deal with the time-variable loads and load
combinations more systematically and reduce the chance of miscounting.
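A minimal sketch of this dissection, with assumed occurrence probabilities and Gaussian intensities (not values from the text): because the "which loads are present" subcases are mutually exclusive, the failure count partitioned over them must sum exactly to the direct count.

```python
# Monte Carlo sketch of the total-probability dissection: partition failures
# P(L1 + L2 + L3 >= R) by which loads are present (mutually exclusive subcases).
import random
random.seed(7)

def one_trial():
    present = tuple(random.random() < p for p in (0.5, 0.3, 0.2))  # load occurrences
    total = sum(random.gauss(4.0, 1.0) for on in present if on)    # combined effect
    R = random.gauss(8.0, 1.0)                                     # resistance
    return present, total >= R

N = 100_000
direct_failures = 0
failures_by_case = {}
for _ in range(N):
    case, failed = one_trial()
    if failed:
        direct_failures += 1
        failures_by_case[case] = failures_by_case.get(case, 0) + 1

p_f = direct_failures / N
```

Summing the subcase counts reproduces the direct failure count exactly, which is the bookkeeping property that prevents miscounting.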
Thus, Equation 10.124 can be further rewritten as follows:
Here, pf is the failure probability; P(.) denotes the probability of event (.) happening; L(.) denotes the load effect due to load (.). The symbol "|(.)" stands for condition (.), and the symbol "∩" stands for simultaneous occurrence. L1 ∩ L2 stands for the condition that loads L1 and L2 occur simultaneously and that there are only these two loads; in other words, no other loads (except the dead load LD) show up during this time interval.
This formula of total probability is theoretically correct but is difficult to realize
practically. The main reason is that, again, these loads L1, L2, and L3 are time vari-
ables and that at a different level of load effect, P(Li only) will be different. In the
following, we introduce the concept of partial failure probability to deal with these
difficulties.
For the sake of simplicity, first consider two loads only. Similarly to Equation 10.129, we can write pfL1L2 and pfL2 in the same format. Here, pfL1, pfL1L2, and pfL2 are used to respectively denote the failure probabilities caused by L1 only, by L1 and L2 simultaneously, and by L2 only.
FIGURE 10.11 Decomposition of the total failure probability pf into the truck-only part pfT, the simultaneous earthquake–truck part pfET, and the earthquake-only part pfE; the corresponding probabilities of conditions sum to unity.
That is, pf = pfE + pfET + pfT.
In addition, when these individual failure probabilities pf(.) are calculated for each
case of the maximum load effect of (.) exceeding the resistance, we need to also con-
sider a restricting condition: no other effect reaches the same level. The detail of this
will be discussed in the following.
FIGURE 10.12 Conceptual PDFs fL(x) of the effect of load 1, the effect of load 2, and the resistance, versus intensity x (MN-m).
We need to count all the chances that the second load is smaller than this given level x. In the figure, for the sake of simplicity, we assume that loads 1 and 2 have no combinations.
In general, the sum of all these chances can be expressed as the integral of the
PDF of the second load effect, fS ( z , x ). Here, the subscript “S” denotes the second
load, and in the following, we use the font of “Ruling Script LT Std” to denote the
conditional PDF, whose condition is the main PDF taking the value of x. That is,
PS(x) = ∫_{−∞}^{x} fS(z, x) dz ≤ 1 (10.132)
Based on the above, when calculating the failure probability, we cannot use the PDF of the first load alone. In other words, the PDF of the first load, fL(x), shown in Figure 10.12, must be modified as fL(x)PS(x). In general, PS(x) < 1, so the resulting failure probability due to the first load only is smaller than that computed from fL(x) alone.
The additional possibility that the value of combined loads 1 and 2 must also be
smaller than level x is given by
PC(x) = ∫_{−∞}^{x} fC(w, x) dw ≤ 1 (10.133)
Here, the subscript C denotes the combined load effect of both the first and second
loads, and the PDF of the combined load effect is denoted by fC. In this circumstance,
the PDF of the first load must be further modified as f L ( x ) PS ( x ) PC ( x ).
P(L1) = ∫_{−∞}^{∞} fR(x) ∫_{x}^{∞} fL1(y) PL2(x) PC2(x) dy dx
      = ∫_{−∞}^{∞} fR(x) ∫_{x}^{∞} fL1(y) ∫_{−∞}^{y} fL2(z, x) ∫_{−∞}^{y} fC(w, x) dw dz dy dx (10.134)
where fL1(y), fL2(z, x), and fC(w, x) are the PDFs of the effect due to load 1, due to load 2, and due to the combination of loads 1 and 2, respectively; PL2(x) and PC2(x) are the conditional probability PS(x) due to load 2 and the conditional probability PC(x) due to the combination of load 1 and load 2, respectively. In the circumstance of multihazards (MHs), a
specific load, say, load 1, is divided into two portions. The first portion is based on
the case of load 1 only, and the second case is that both loads occur simultaneously.
We use f L1 ( y) and fL2 ( z , x ) to respectively denote load 1 in the first case and load 2
in the second case.
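The logic behind Equation 10.134 can be mimicked by a small Monte Carlo experiment: failure is attributed to load 1 only when its effect exceeds the resistance while the load-2 and combined effects stay below that governing level. All distributions below are assumed purely for illustration.

```python
# Monte Carlo sketch of a partial failure probability: L1 exceeds R while the
# L2 and combined effects remain below the L1 level (assumed distributions).
import random
random.seed(1)

def l1_only_failure():
    R  = random.gauss(10.0, 1.0)   # resistance
    L1 = random.gauss(6.0, 2.0)    # effect of load 1
    L2 = random.gauss(3.0, 1.0)    # effect of load 2
    C  = random.gauss(4.0, 1.5)    # combined effect of loads 1 and 2
    return L1 >= R and L2 <= L1 and C <= L1

N = 200_000
p_L1 = sum(l1_only_failure() for _ in range(N)) / N
```

The restricting conditions L2 ≤ L1 and C ≤ L1 play the role of the factors PL2(x) and PC2(x): they remove from the count any trial where failure should be attributed to another (or the combined) load.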
Similarly, the failure probability due to combined load 1 and load 2 should be
written as
p(L1L2) = ∫_{−∞}^{∞} fR(x) ∫_{x}^{∞} fc(y) PL1(x) PL2(x) dy dx
        = ∫_{−∞}^{∞} fR(x) ∫_{x}^{∞} fc(y) ∫_{−∞}^{y} fL1(z, x) ∫_{−∞}^{y} fL2(w, x) dw dz dy dx (10.135)
where fc is the PDF of the combined load (L1 and L2 occur simultaneously); fL1 and fL2 are the PDFs of the effects due to loads 1 and 2 in the first case, respectively; PL1(x) and PL2(x) are the conditional probabilities PS(x) due to loads 1 and 2, respectively. Note that, generally speaking, the conditional PDFs fL1(z, x) and fL2(w, x) are not the same as the unconditional fL1 and fL2.
Similarly, the failure probability due to load 2 should be written as
p(L2) = ∫_{−∞}^{∞} fR(x) ∫_{x}^{∞} fL2(y) PL1(x) PC1(x) dy dx
      = ∫_{−∞}^{∞} fR(x) ∫_{x}^{∞} fL2(y) ∫_{−∞}^{y} fL1(z, x) ∫_{−∞}^{y} fC(w, x) dw dz dy dx (10.136)
10.4.3 General Formulations
10.4.3.1 Total Failure Probability
With the help of Equations 10.134 through 10.136, the total failure probability can then be written as

pf = ∫_{−∞}^{∞} fR(x) ∫_{x}^{∞} fL1(y) ∫_{−∞}^{y} fL2(z, x) ∫_{−∞}^{y} fC(w, x) pL1 dw dz dy dx
   + ∫_{−∞}^{∞} fR(x) ∫_{x}^{∞} fc(y) ∫_{−∞}^{y} fL1(z, x) ∫_{−∞}^{y} fL2(w, x) pL1L2 dw dz dy dx
   + ∫_{−∞}^{∞} fR(x) ∫_{x}^{∞} fL2(y) ∫_{−∞}^{y} fL1(z, x) ∫_{−∞}^{y} fC(w, x) pL2 dw dz dy dx (10.137)
In Equation 10.137, the total failure probability is all-inclusive if only two loads
are present, which is referred to as the comprehensive failure probability.
10.4.3.3 Brief Summary
In this subsection, we presented a proposed methodology on comprehensive bridge
reliability based on the formulation of partial failure probability, in order to deter-
mine the design limit state equations under time-varying and infrequent loads.
Specifically, if the designed value of failure probability is given, then each partial
failure probability is uniquely determined. Then, we can calculate the partial reli-
ability index one by one, according to the classification of the effect of either single
load or combined loads.
For dead and live (truck) loads only, the relationships of the load and the resis-
tance effects are treated as time-invariant random variables with normal distribu-
tions. The corresponding limit state is simply described by the reliability index, from
which the load and resistance factors can be uniquely determined.
Multihazard loads are time variables. The limit state of loads not exceeding the
resistance exists, but the total failure probability is far more complex to calculate.
10.4.4 Probability of Conditions
In the above, we have introduced the concept of partial failure probability to replace
the random process with random variables.
To numerically calculate partial and total failure probabilities, it is necessary to
carry out the aforementioned integrations by considering individual load value x.
Thus, we can further write
In the above, we use uppercase Pf(x), etc., to denote the probability of event (x),
whereas we use the lower case pf, etc., to denote the exact value of failure probabilities.
The above five conditions are mutually independent. Each condition can be seen
as a probability, which can be denoted by
a. P(there is a load)
b. P(T only)
c. P(T > x)
d. P(T = max)
e. P(only T ≥ x) + P(only E ≥ x) + P(T + E ≥ x) = 1
In general, the condition of having a single load effect Li (again, the combined
loads are treated as special kinds of single loads) can be written as a product of P(Li
only)P(Li > x)P(Li = max) (see conditions b, c, and d). These three probabilities are
relatively more complex, whereas the probabilities of conditions a and e are com-
paratively simpler. In the following, we will mainly focus on issues b, c, and d, with
issues a and e only briefly described in a simplified manner.
Therefore, the above-mentioned events are not equal to the existence of all possible truck loads only (and similarly for the earthquake load, etc.). The event of occurrence of such truck loads can be further divided into the intersection of the event "there exist only truck loads" and the event "those loads are greater than or equal to the value x." That is,
(only T ≥ x) = [(there exist only truck loads) ∩ (truck load effect ≥ x) ∩ (truck load
effects = the maximum value)]│[(there must be a load) ∩ (there are only truck and
earthquake loads)]
The events shown on the right-hand side of the above equation are independent.
Therefore, we can have
P(only T ≥ x) = [P(there exist only truck loads)P(truck load effect ≥ x)P(truck load
effect = max)]│[P(there must be a load)P(there are only truck and earthquake loads)]
= {[P(there exist only truck loads)│P(there must be a load)][P(truck load effect ≥ x)
P(truck load effect = max)│P(there are only truck and earthquake loads)]}
In the following, we first discuss the probability of the event of only a single exist-
ing type of load, such as P(there exist only truck loads).
p(L1 exists) = 1 − e^(−λt) = e^(−λt) Σ_{k=1}^{∞} (λt)^k/k! (10.139)
If in a time span, there cannot be an infinite number of certain loads, say L1, using
1 − e−λt to calculate the probability of seeing that load L1 can result in an overestimation.
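A quick numeric check of this point (λt is an illustrative value): capping the possible number of occurrences at m makes the truncated sum of Equation 10.139 smaller than the unbounded 1 − e^(−λt).

```python
# Truncated vs. unbounded Poisson occurrence probability (cf. Eq. 10.139).
import math

lam_t = 0.9  # illustrative mean count λt

def p_exists(lam_t, m):
    # e^{-λt} * Σ_{k=1}^{m} (λt)^k / k!
    return math.exp(-lam_t) * sum(lam_t**k / math.factorial(k) for k in range(1, m + 1))

p_unbounded = 1.0 - math.exp(-lam_t)   # limit m -> infinity
p_capped    = p_exists(lam_t, 2)       # at most m = 2 occurrences are possible
```

With λt = 0.9, the unbounded value is about 0.593 while the m = 2 truncation gives about 0.531, quantifying the overestimation of 1 − e^(−λt).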
The treatment of the load combination is as follows: First, assume that in the life span of a bridge there are a total of up to n occurrences of L2, each with the same duration tL2. We thus have the possible cases of L2 being one occurrence (duration 1 × tL2), two occurrences (duration 2 × tL2), and so on. Second, in the duration tL2, we can calculate the probability of having up to m load effects L1, which is the sum of one L1, two L1, … up to m L1. In the duration 2tL2, we can calculate the probability of having up to 2 × m loads L1, and so on.
Denote the quantity m as the upper limit of the maximum possible number of loads L1. Since the physical length of a given bridge is fixed, there can only be a limited number of vehicles "occurring" on the bridge. The probability that load L1 occurs in tL2 is denoted by 1psL1, where the subscript 1 in front of the symbol psL1 stands for load L1 occurring in one time interval tL2. The corresponding computation is given by
1psL1 = e^(−λL1 tL2) Σ_{k=1}^{m} (λL1 tL2)^k/k! (10.140)

rpsL1 = e^(−λL1 (r tL2)) Σ_{k=1}^{mr} [λL1 (r tL2)]^k/k! (10.141)
The quantity rpsL1 is, in fact, a conditional probability, which is the probability of
the occurrence of L1 under the condition of having L2. It is assumed that L1 and L2
are independent. Therefore, the unconditional occurrence probability of L1 can be
written as rpsL1 · psL2i, where psL2i is the probability of occurrence of load L2i, which
will be discussed in the following.
In the service life span of a bridge, TH, there may be up to n loads L2, with each
having its own occurrence probability. More specifically, let the letter i denote the
said level of load effect L2. We can use the symbol ni to denote the maximum pos-
sible number of such load effects.
More detailed computation should distinguish the simultaneous occurrence of
both loads L1 and L2 during different periods. For example, suppose an earthquake
lasts 60 s. If this 60-s earthquake happens in a rush hour, then we may see more
trucks. Another detailed consideration is to relate the duration t L2 to the level of load
L2. For example, a larger level of earthquake may have a longer duration, and so on.
Here, for the sake of simplicity, we assume that the number m and the length t L2 are
constant. In this case, the total probability of occurrence of load L2i, denoted by psL2i, is given by

psL2i = e^(−λL2i TH) Σ_{r=1}^{ni} (λL2i TH)^r/r! (10.142)
where the first subscript s of psL2i stands for the segment of the uniqueness probability; ni is the total number of loads L2i occurring in TH (75 years); and λL2i is the rate of occurrence, or the reciprocal of the return period, of the specific load L2i.
A more simplified estimation of the occurrence of load L 2 is to take its aver-
age value without distinguishing the detailed levels. In this case, we use n to
denote the average number of occurrences of load L 2. The life span of the bridge
is denoted by TH. The probability of occurrence of such load effect L 2 in the bridge
life span is
psL2 = e^(−λL2 TH) Σ_{r=1}^{n} (λL2 TH)^r/r! (10.143)
where n is the total number of loads L2 occurring in TH (75 years). Now, the prob-
ability of simultaneous occurrence of both loads L1 and L2 with level i is the product
denoted by pL1L2i given by
pL1L2i = e^(−λL2i TH) Σ_{r=1}^{ni} rpsL1 (λL2i TH)^r/r!
       = e^(−λL2i TH) Σ_{r=1}^{ni} e^(−λL1 (r tL2)) Σ_{k=1}^{mr} (λL1 r tL2)^k/k! · (λL2i TH)^r/r! (10.144)
where ni is the number of occurrences of load effect L2 with level i in the duration
of TH.
Applying the simplified treatment of Equation 10.143 to Equation 10.144, we have the probability of simultaneous occurrence of both loads L1 and L2, denoted by pL1L2, given by

pL1L2 = e^(−λL2 TH) Σ_{r=1}^{n} e^(−λL1 (r tL2)) Σ_{k=1}^{mr} (λL1 r tL2)^k/k! · (λL2 TH)^r/r! (10.145)
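Equation 10.145 is straightforward to evaluate numerically. The sketch below uses illustrative rates and durations (not values from the text), with caps n on the L2 count and m·r on the L1 count.

```python
# Numeric sketch of Eq. 10.145: probability of simultaneous occurrence of L1
# and L2, with illustrative rates/durations and caps n (on L2) and m*r (on L1).
import math

TH   = 75 * 365.25 * 86400.0       # life span, s
lam2 = 3.0 / TH                    # rate of L2: three events expected in TH
t2   = 120.0                       # duration of a single L2 event, s
lam1 = 1.0 / 60.0                  # rate of L1: one event per minute
n, m = 10, 5                       # occurrence caps

p = 0.0
for r in range(1, n + 1):
    # probability of 1..m*r occurrences of L1 within the duration r*t2
    inner = math.exp(-lam1 * r * t2) * sum(
        (lam1 * r * t2) ** k / math.factorial(k) for k in range(1, m * r + 1))
    p += inner * (lam2 * TH) ** r / math.factorial(r)
p *= math.exp(-lam2 * TH)
```

Since every inner factor is a probability below unity and the outer sum is truncated, p necessarily stays below the unbounded occurrence probability 1 − e^(−λL2·TH) of L2 itself.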
FIGURE 10.13 The four mutually exclusive load situations formed by trucks (T) and earthquakes (E): neither load (T̄Ē), trucks only (TĒ), earthquakes only (T̄E), and both (TE).
In Figure 10.13 and in the following discussion, the overhead bar stands for “no
existence.” Including situation 1, denoted by T E , the total probability is unity.
Therefore, we can write
Equation 10.146 implies that once the probabilities p(TĒ), p(T̄E), and p(TE) are calculated, they can be normalized under the condition of having a load, denoted by [1 − p(T̄Ē)]. We now consider these probabilities. In order to simplify the notation, the necessary nomenclature is listed as follows:
simplify the notations, the necessary nomenclatures are listed as follows:
TH
TT TT
tT
TE TE
tE
Figure 10.14 shows a special arrangement of the duration T T, TE vs. TH, which is
the base of the following analysis.
1psT = e^(−λT tE) Σ_{k=1}^{m} (λT tE)^k/k! (10.147)

rpsT = e^(−λT (r tE)) Σ_{k=1}^{mr} [λT (r tE)]^k/k! (10.148)

psEi = e^(−λEi TH) Σ_{k=1}^{nEi} (λEi TH)^k/k! (10.149)
where nEi is the total number of earthquakes Ei occurring in 75 years; λEi is the rate
of earthquakes with an effect level i in TH.
pTEi = e^(−λEi TH) Σ_{r=1}^{nEi} e^(−λT (r tE)) Σ_{k=1}^{mr} (λT r tE)^k/k! · (λEi TH)^r/r! (10.150)
The term pTEi is a uniqueness probability denoting the unique chance of the simul-
taneous occurrence of a truck load and an earthquake load with level Ei. At the spe-
cific moment, only these two loads are acting on the bridge.
Now, if we use the simplified approach without considering the level of earth-
quake effects, we have the probability of occurrence of the non-exceedance earth-
quake E in TH, denoted by psE and with nE earthquakes. Then
psE = e^(−λE TH) Σ_{r=1}^{nE} (λE TH)^r/r! (10.151)
where λE is the average rate or the reciprocal of the average earthquake return period.
In this case, we have the average occurrence probability given as
pte = e^(−λE TH) Σ_{r=1}^{nE} e^(−λT (r tE)) Σ_{k=1}^{mr} (λT r tE)^k/k! · (λE TH)^r/r! (10.152)
pt̄e = e^(−λE TH) Σ_{r=1}^{nE} [e^(−λT (r tE))] (λE TH)^r/r! (10.154)
Note that one of the disadvantages of using the above equations is the difficulty
of computing the factorial. If the number of k is large, say, more than 150 for most
personal computers, then the term k! cannot be calculated. Therefore, let us consider
an alternative approach.
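One common workaround (an assumption of ours, not the approach the text develops next) is to evaluate each Poisson term in the logarithmic domain with the log-gamma function, which avoids ever forming k! as a number.

```python
# Log-domain evaluation of Poisson terms: log p_k = -λt + k*ln(λt) - ln(k!),
# with ln(k!) = lgamma(k + 1). Stable even where a direct float evaluation of
# (λt)**k / k! (e.g., 480.0**500) would raise OverflowError.
import math

def poisson_pmf(k, lam_t):
    return math.exp(-lam_t + k * math.log(lam_t) - math.lgamma(k + 1))

p500 = poisson_pmf(500, 480.0)   # a term far beyond the k ~ 150 limit noted above
```

The log-domain form keeps each term well within floating-point range for arbitrarily large k, and the terms still sum to unity over the full support.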
In one duration tE, suppose we have the following event: an earthquake is occurring while simultaneously up to m trucks are crossing the bridge. The probability of that event is

1pte = e^(−λT tE) Σ_{r=1}^{m} (λT tE)^r/r! · e^(−λE TH) (λE TH) (10.155)
1 − 1pte = 1 − e^(−λT tE − λE TH) (λE TH) Σ_{r=1}^{m} (λT tE)^r/r! (10.156)

pte = 1 − (1 − 1pte)^nE = 1 − [1 − e^(−λT tE − λE TH) (λE TH) Σ_{r=1}^{m} (λT tE)^r/r!]^nE (10.157)
and the probability when earthquakes occur with no trucks crossing the bridge is given as

pt̄e = 1 − [1 − e^(−λT tE − λE TH) (λE TH)]^nE (10.158)
FIGURE 10.15 Truck durations TT and earthquake durations TE within the life span TH, with the associated probabilities pte, pt̄e, ptē, and pt̄ē.

These expressions are acceptable approximations when the number nE is sufficiently small. Based on the concepts described in Figures 10.13 and
10.15, the probability of having an earthquake can be calculated as
Therefore,

pe = e^(−λE TH) Σ_{r=1}^{nE} e^(−λT (r tE)) Σ_{k=1}^{mr} (λT r tE)^k/k! · (λE TH)^r/r! + e^(−λE TH) Σ_{r=1}^{nE} [e^(−λT (r tE))] (λE TH)^r/r!
   = e^(−λE TH) Σ_{r=1}^{nE} e^(−λT (r tE)) [1 + Σ_{k=1}^{mr} (λT r tE)^k/k!] (λE TH)^r/r! (10.160)
pt = pte/pe (10.161)

Therefore,

pt = pte/(pte + pt̄e) (10.162)
Example 10.4
Based on the Poisson model, the probabilities of truck load only, earthquake load
only, and simultaneously having both loads in the period of tE seconds can be
obtained. Suppose that the daily average truck rate is 1000.
e^(−λT tE) Σ_{k=1}^{100} (λT tE)^k/k! = 0.6038
Similarly, during an earthquake, the chance of only two trucks crossing a bridge
is 0.5366. It is seen that, comparing 100 trucks with the probability = 0.6038, the
difference is not very large.
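The two figures in this example can be cross-checked against each other: treating the k ≤ 100 sum as effectively complete, λT·tE can be backed out of 0.6038, and the m = 2 truncation then reproduces 0.5366.

```python
# Cross-check of the example's numbers: recover λT*tE from the (effectively
# complete) k <= 100 probability, then recompute the m = 2 truncated value.
import math

p_up_to_100 = 0.6038                      # e^{-λt} Σ_{k=1}^{100} (λt)^k/k! ≈ 1 - e^{-λt}
lam_t = -math.log(1.0 - p_up_to_100)      # recovered λT*tE, about 0.93

p_up_to_2 = math.exp(-lam_t) * (lam_t + lam_t**2 / 2.0)   # k = 1 and k = 2 terms
```

The recomputed two-truck value agrees with the stated 0.5366 to within rounding.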
Note that TH is 2.3668 × 109 (s). Suppose that in 75 years, there are a total of 15
earthquakes (nE = 15) so that λE = 15/TH = 6.3376 × 10 −9. Suppose that the average
number of trucks crossing a bridge during an earthquake is m = 2. The chance of
simultaneously having a truck and an earthquake is
pte = e^(−λE TH) Σ_{r=1}^{nE} e^(−λT (r tE)) Σ_{k=1}^{mr} (λT r tE)^k/k! · (λE TH)^r/r! = 0.5677
pt̄e = e^(−λE TH) Σ_{r=1}^{nE} [e^(−λT (r tE))] (λE TH)^r/r! = 1.1615 × 10^(−4)
and pe = pte + pt̄e = 0.5678. Therefore,

ptē = [(1 − pe)/pe] pte = 0.4321
The normalized probabilities of having a truck only, having both a truck and an earthquake, and having an earthquake only are denoted as pTĒ, pTE, and pT̄E, which can be further defined and calculated according to the next subsections.
Note that, besides the above approach, by using a mixed Poisson distribution, we
could also describe a slightly different formulation of the probability of conditions.
Here, the term (1 − pt̄ē) is the probability of the event (there must be loads). Denoting the probability of having both trucks and earthquakes in TH, under the condition of having a load, as pTE, we have

pTE = pte/(1 − pt̄ē) (10.166)

pTĒ = ptē/(1 − pt̄ē) (10.167)

pT̄E = pt̄e/(1 − pt̄ē) (10.168)
Thus,
That is, considering the probability of the event (truck load only) under the condition of "there must be load(s)," we have

P(truck load only│there must be loads) = P(truck load only)/P(there must be loads) (10.170)
P(truck load effect ≥ x) = ∫_{x}^{∞} ft(u) du (10.171)
where ft is the PDF of the truck load (see Figure 10.16). It shall be noted that this particular PDF is different from the term ∫_{x}^{∞} fL1(y) dy in Equation 10.137, where

fL1 ≡ fT (10.172)
which is the PDF of the maximum value of the truck load effect.
10.4.4.2.5.2 Maximum Values Consider now the probability of the event when
a specific load is a possible maximum load, which has the probability P(load effect =
max).
Using the truck load effect as an example, the conceptual plots of the PDF of the
distributions of the truck loads, denoted by f T, is shown in Figure 10.17, where the
coordinate is the value of the PDF and the abscissa is the intensity of the conceptual
load effect.
From Figure 10.17, the total probability from zero to a given maximum load level with intensity up to x (such as the one shown in Figure 10.16 with a moment of 25,000 kps·ft) can be written as

P(truck load = max) = ∫_{0}^{x} fT(v) dv (10.173)
FIGURE 10.16 PDF fT of the truck load effect versus intensity (kps·ft), with the level x marked.
FIGURE 10.17 PDF fT of the maximum truck load effect versus intensity (kps·ft), with the level x marked.
From Figure 10.17, it is seen that this probability is the CDF of the distribution of
the maximum load up to level x.
= pT̄E ∫_{x}^{∞} fe(u) du ∫_{0}^{x} fE(v) dv (10.175)
In the above, fe and f E are the PDF of the regular and the maximum earthquake
load effects, respectively.
Furthermore, the probability P(T ∩ E ≥ x), denoted by pt∩e, can be written as
where fc and fC are the PDF of the regular and maximum combined truck and earth-
quake load effect, respectively.
The sum of pt, pe, and pt∩e is not necessarily equal to unity. This fact is rather
inconvenient for calculating the total probability. Thus, the terms pt, pe, and pt∩e as
probability of conditions with individual effect values are considered in the following.
Equation 10.177 means that the terms pt, pe, and pt∩e should be normalized to unity. Doing so yields the normalized conditional probability of the maximum truck load only, that of the maximum earthquake load only, and that of the maximum combined load only.
10.4.5 Brief Summary
In the above, the probability of condition for having a specific load only by using
truck and earthquake loads as examples is formulated. (The generic formulations
for other loads are identical.) Since the dead load is time invariant, whenever a time-
variable load occurs, the dead load is “waiting” there for the load combination.
Therefore, the dead load is omitted in this discussion.
Suppose that we have only two loads, L1 and L2. There exist three distinct cases:
Case 1 is only having load effect L1; case 2 is only having load effect L2; and case 3
is only having the combined load effect L1 + L2, which is denoted as L3 (L3 = L1 + L2).
Each case is treated as a single kind of load effect.
The probability of condition for having the case (Li ≥ certain level x only, i = 1, 2,
3) consists of the following events:
To calculate the occurrence of single and/or simultaneous loads, we can use either
Poisson or mixed distributions.
Generally speaking, we introduced a methodology in this section to develop reliability-
based bridge design under the conditions of MH loads. The first step is to formulate
the total bridge failure probability using the approach of partial failure probabilities.
In Section 10.4.2, we introduced the concept of comprehensive bridge reliability, the
essence of which is an all-inclusive approach to address all necessary loads, time-
invariant and time-variable, regular and extreme, on the same platform. In so doing,
all loads, as long as they contribute accountably to bridge failures, are included. The
basic approach to realize the comprehensive reliability is to break down these loads
into separate cases, referred to as partial failure probability.
Each partial failure probability contains only a “pure” load or a “pure” load com-
bination. Technically speaking, these pure loads can be treated as time-invariant
random variables, although most loadings are time-varying random processes. The
key for such separation of variables is to find the condition when the load occurs on
its own, which is referred to as the probability of condition, and this is addressed in
Section 10.4.4. It is seen that, to realize this term, we need to further break down the
probability into five independent subconditions.
Once we calculate all the partial failure probabilities by having the partial con-
ditional probabilities and the probabilities of conditions, the summation will give us
the total bridge failure probability.
To form the design limit equations and to extract the load and resistant factors for
practical bridge design, however, the formulation of the total probability is only the
first step. To obtain the load and resistance factors, additional efforts are necessary,
which are outside the scope of this manuscript.
Problems
1. The bearing of a type of simply supported bridges is subjected to dead load
(DL) and live load (LL); both can be modeled as normally distributed ran-
dom variables. Suppose that DL ~ N(150 kps, 45 kps) and LL ~ N(235 kps,
FIGURE P10.1 Combined load.
FIGURE P10.2 A beam (length 25 in., width b) under load f(t); the force PSD WF(f) is 0.015 k²/Hz over the band 0–10 Hz.
FIGURE P10.3 A plate of width w = 4.5 cm under the stationary force Q(t).
ultimate strength of 45 ksi, both with a COV of 0.1. The natural frequency
is sufficiently high so that there is no dynamic magnification.
a. Design this beam with the 3σ criterion.
b. Design this beam for a first passage failure, with respect to ultimate.
The reliability goal is 98% for the service life.
6. A stationary force Q(t) is applied to the plate shown in Figure P10.3.
The mean and the standard deviation of Q(t) are 15 and 20 kN, respec-
tively. The failure mode is brittle fracture. Fracture toughness is given as
K C = 26 MPa m . The crack size is a = 1.3 mm. No subcritical crack propa-
gation (fatigue) is assumed. The geometry factor is Y = 1.25. Determine the
minimum value required for t using the 3σ criterion. Failure occurs when
the stress intensity factor K = YS(πa)1/2 > KC with stress S.
7. Reconsider the plate shown in Figure P10.3. Assume that Q(t) is narrow-
band with a central frequency of 1.5 Hz. The applying duration is 120 s.
Using the first passage criterion, design the plate so that the probability of
failure is less than 0.002.
8. Suppose that a type of bridge will be constructed in a river where scour
hazards may occur with an average duration of 2 days. In a year, on average,
vessel collision on the bridge may occur three times. Calculate the prob-
ability of two vessel collisions when a bridge scour occurs.
9. Suppose that in 75 years, there are a total of 100 bridge scours. Calculate
the chance of simultaneously having a scour and a vessel collision under the
condition given in Problem 8. Calculate the probability of up to three vessel
collisions.
11 Nonlinear Vibrations and
Statistical Linearization
In previous chapters, we often assumed that a system under consideration is linear.
However, practically speaking, there are many nonlinear systems that are subjected
to random and time-varying loads and deformations. In such a situation, it can be
difficult to treat the response of a nonlinear system as a stationary process. In fact,
analyzing such a random nonlinear system can be rather complex.
In this chapter, the basics of nonlinear dynamic systems are introduced. Monte
Carlo simulation is used as a tool to address the complexity of such problems.
11.1 Nonlinear Systems
In this chapter, it is assumed that all random processes are integrable. This assump-
tion is applicable for most engineering systems.
Generally speaking, we have the following reasons for a dynamic system to be
nonlinear:
Within the above scope of nonlinear vibration, we mainly focus on the first two
cases.
If

g(αX + βY) = αg(X) + βg(Y) (11.1)

then the system is linear. Otherwise, the system is nonlinear. Here, g(.) denotes a function of variable (.), and α and β are scalars.
The essence of Equation 11.1 is twofold. The first is additivity, that is, g(X + Y) = g(X) + g(Y).
11.1.1.1 Nonlinear System
If Equation 11.1 does not hold, we will then have a nonlinear system. In the follow-
ing, we first theoretically consider several typical nonlinear systems. Furthermore,
only certain examples are discussed. These models will be referenced in order to
model a bilinear system.
11.1.1.1.1 Bilinear Model
First consider a general bilinear relationship between Y(t) and X(t) given as
Y(t) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} h2(τ1, τ2) X(t − τ1) X(t − τ2) dτ1 dτ2 (11.2)
then
B = ∫_{−∞}^{∞} ∫_{−∞}^{∞} h2(τ1, τ2) dτ1 dτ2 (11.5)
∞ ∞
H 2 (ω1 , ω 2 ) =
∫ ∫−∞ −∞
h2 (τ1 , τ 2 )e − j (ω1τ1 +ω 2τ2 ) d τ1 d τ 2 (11.6)
Thus, in order to describe a bilinear system, either h2(τ1, τ2) or H2(ω1, ω2) must
be known. In the following, several examples of bilinear systems as well as the cor-
responding kernel functions will be considered.
11.1.1.1.2 Quadratic System
The quadratic system (see Figure 11.1, Y = g(X)) is defined as

Y(t) = X²(t) (11.7)

Since the input can be written as

X(t) = ∫_{−∞}^{∞} X(t − τ1) δ(τ1) dτ1 (11.8)
Y(t) = X²(t) = ∫_{−∞}^{∞} X(t − τ1) δ(τ1) dτ1 · ∫_{−∞}^{∞} X(t − τ2) δ(τ2) dτ2
     = ∫_{−∞}^{∞} ∫_{−∞}^{∞} δ(τ1) δ(τ2) X(t − τ1) X(t − τ2) dτ1 dτ2 (11.9)
Thus,

F[Y(t)] = F[X²(t)] = X(ω) ∗ X(ω) (11.12)
From Equation 11.12, it is shown that a process X(t) through a quadratic system
varies its frequency component.
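A one-line trigonometric check of this frequency shift: squaring cos(ωt) yields a DC term plus a component at 2ω, which is exactly the self-convolution picture of Equation 11.12.

```python
# Y = X^2 doubles the frequency of a sinusoid: cos^2(ωt) = 1/2 + cos(2ωt)/2.
import math

w = 2.0 * math.pi * 3.0                      # 3-Hz input tone
ts = [i / 1000.0 for i in range(1000)]
squared  = [math.cos(w * t) ** 2 for t in ts]
identity = [0.5 + 0.5 * math.cos(2.0 * w * t) for t in ts]
max_diff = max(abs(a - b) for a, b in zip(squared, identity))
```

The squared signal contains no energy at the original 3 Hz, only at 0 Hz and 6 Hz, so a quadratic system cannot be characterized by a single-frequency transfer function.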
Denote

Y(t) = [∫_{−∞}^{∞} h(τ) X(t − τ) dτ]²
     = ∫_{−∞}^{∞} ∫_{−∞}^{∞} h(τ1) h(τ2) X(t − τ1) X(t − τ2) dτ1 dτ2 (11.13)
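A discrete counterpart of this identity (filter taps and signal values assumed for illustration): squaring a filtered signal equals a bilinear sum with the separable kernel h(τ1)h(τ2).

```python
# Discrete check: (Σ_k h[k] x[t-k])^2 == Σ_i Σ_j h[i] h[j] x[t-i] x[t-j].
import random
random.seed(5)

h = [0.5, 0.3, 0.2]                              # illustrative short filter
x = [random.gauss(0.0, 1.0) for _ in range(50)]  # illustrative input samples

t = 10
y_linear_sq = sum(h[k] * x[t - k] for k in range(len(h))) ** 2
y_bilinear  = sum(h[i] * h[j] * x[t - i] * x[t - j]
                  for i in range(len(h)) for j in range(len(h)))
```

The two evaluations agree to machine precision, which is the discrete analog of Equation 11.13.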
11.1.1.2.1 Definition
Denote

Y(t) = g(X(t)) (11.15)
In Equation 11.15, g is a real function with a single variable only. That is, for a given
moment t, Y(t) is defined by g(X(t)) only. In other words, the system is memoryless.
Consider the square-law system Y(t) = X²(t) with a zero-mean Gaussian input whose PDF is

fX(xt) = [1/(√(2π) σt)] e^(−xt²/(2σt²)) (11.16)

Respectively denote xt and yt as the random variables of the input and output processes at time t. It can be proven that

fY(yt) = [1/(√(2π yt) σt)] e^(−yt/(2σt²)) u(yt) (11.17)
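Equation 11.17 can be checked by simulation (σt is an illustrative value here): the empirical CDF of Y = X² at y = σ² must match P(|X| ≤ σ) = 2Φ(1) − 1.

```python
# Monte Carlo check of the square-law output distribution (cf. Eq. 11.17):
# for Gaussian X with std σ, P(X² <= σ²) = P(|X| <= σ) = 2Φ(1) - 1 ≈ 0.6827.
import math, random
random.seed(0)

sigma = 2.0
N = 200_000
y = [random.gauss(0.0, sigma) ** 2 for _ in range(N)]

empirical = sum(v <= sigma**2 for v in y) / N
exact = math.erf(1.0 / math.sqrt(2.0))    # equals 2Φ(1) - 1
```

Integrating the density of Equation 11.17 from 0 to σ² gives the same value, confirming the unit-step factor u(yt) and the 1/√yt weighting.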
11.1.1.2.3 Correlation Function

RY(t1, t2) = E[g(X(t1)) g(X(t2))] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} g(x1) g(x2) f(x1, x2, t1, t2) dx1 dx2 (11.18)
Y(t) = k0 + ∫_{0}^{∞} k1(τ1) X(t − τ1) dτ1 + ∫_{0}^{∞} ∫_{0}^{∞} k2(τ1, τ2) X(t − τ1) X(t − τ2) dτ1 dτ2 + ⋯
     + ∫_{0}^{∞} ⋯ ∫_{0}^{∞} kn(τ1, τ2, …, τn) X(t − τ1) X(t − τ2) ⋯ X(t − τn) dτ1 dτ2 ⋯ dτn (11.19)
The output is a sum of the constant k0 and those integrals taken from one dimen-
sion to n dimensions.
If only the first term (the constant) and the second term (the convolution) exist, the system is linear, with k1 being the impulse–response kernel; otherwise, it is nonlinear, with multiple convolutions. For example, h2 in Equation 11.2 gives the second-order convolution.
11.1.3 Structure Nonlinearity
11.1.3.1 Deterministic Nonlinearity
11.1.3.1.1 Nonlinear Spring
11.1.3.1.1.1 Softening Spring Two types of commonly used nonlinear springs
in engineering systems modeling are softening and hardening springs. Figure 11.3
illustrates a softening spring. In Chapter 10, Equation 10.108 defined one kind of
spring softening mechanism using the failure strain εʹf to denote the yielding point
(dy, f y). Overloading the spring can result in softening the spring’s stiffness. In this
instance, the stress passes the yielding point, which is commonly seen in structural
damage. In Figure 11.3b, the spring is shown to be below the yielding point (dy, f y).
In Figure 11.3a, f m and x0 are the maximum force and deformation, respectively.
When the load f increases, the corresponding stiffness will decrease continuously.
As a result, a certain point, for example, 0.6f m and d 0.6, may be set, with an unload-
ing stiffness ku and an effective stiffness keff at the maximum deformation. Another
FIGURE 11.3 Softening stiffness. (a) Yielding point: 0.6fm. (b) Otherwise defined yielding point.
commonly used action is shown in Figure 11.3b. At the yielding point f y and dy, the
unloading stiffness is defined by ku, and beyond that point, the loading stiffness is
defined as k l = aku.
Generally speaking, when the loading force is sufficiently small, the stiffness
is close to linear and the force and deformation are close to proportional. (Recall
Hooke’s law.) In this case, the use of either stress or strain to denote the deforma-
tion is fairly equivalent, and both stress and force are commonly used. However,
beyond the yielding point, a rather large deformation occurs even under a rather
small force. Consequently, using strain or deformation becomes more convenient
(see, for instance, FEMA 2009, Figure c12.1-1).
However, it must be noted that when an effective linear system is used to represent
a nonlinear vibration system, the effective stiffness should satisfy the following:
$$k_{\mathrm{eff}} = 2E_p/x_0^2 \tag{11.21a}$$

and

$$k_{\mathrm{eff}} = f_c/x_0 \tag{11.21b}$$
Here, Ep is the potential energy restored by the system with displacement x0; fc is
the conservative force, and when the system yields and reaches a displacement of x0,
the maximum force f m will contain both the conservative force and the dissipative
force fd. Specifically, this is written as
f m = fc + fd (11.22)
Nonlinear Vibrations and Statistical Linearization 581
As a result, the effective stiffness keff will be smaller than the secant stiffness ksec.
In Chapter 6, it was shown that a vibration is typically caused by the energy
exchange between potential and kinetic energies. The natural frequency of a linear
system can then be obtained by letting the maximum potential energy equal the maximum kinetic energy, that is,

$$\frac{k x_0^2}{2} = \frac{m v_0^2}{2} = \frac{m\omega_n^2 x_0^2}{2} \tag{11.23}$$

For the effective linear system,

$$\frac{k_{\mathrm{eff}}\, x_0^2}{2} = \frac{m\omega_n^2 x_0^2}{2} \tag{11.24}$$
Here, k_eff is defined in Equations 11.21. The effective frequency ω_n can be calculated by

$$\omega_n = \sqrt{\frac{k_{\mathrm{eff}}}{m}} \tag{11.25}$$

$$\frac{k_{\mathrm{eff}}}{m} = \frac{f_c/x_0}{m} < \frac{f_m/x_0}{m} \tag{11.26}$$
On the right-hand side of Equation 11.26, the term f m /x0 is often used to estimate
the effective stiffness, referred to as the secant stiffness, specifically ksec = f m /x0.
Considering the dynamic property of a nonlinear system, the secant stiffness should
not be used as the effective stiffness. Following this logic, the effective stiffness
should therefore be defined differently.
In the structurally bilinear case, denoted by the shaded regions in Figure 11.4, when the system moves from 0 to x_0, the potential energy is given by

$$E_p = \tfrac{1}{2}\left[k_u d_y^2 + k_d (x_0 - d_y)^2\right] \tag{11.27}$$

Defining

$$k_{\mathrm{eff}} = 2E_p/x_0^2 = \left[k_u d_y^2 + k_d(x_0 - d_y)^2\right]/x_0^2 \tag{11.28}$$
[FIGURE 11.4 Bilinear stiffness: unloading stiffness ku, post-yield stiffness kd, characteristic strength qd, yield displacement dy, and maximum force fm at displacement x0.]
$$\mu = x_0/d_y \tag{11.30}$$

$$k_{\mathrm{eff}} = \frac{1 + a(\mu-1)^2}{\mu^2}\, k_u \tag{11.31}$$

In Equation 11.31, a is the ratio of the post-yield stiffness k_d to the unloading stiffness k_u. Accordingly, the corresponding effective period is

$$T_{\mathrm{eff}} = \sqrt{\frac{\mu^2}{1 + a(\mu-1)^2}}\; T = 2\pi\mu \sqrt{\frac{m}{[1 + a(\mu-1)^2]\,k_u}} \tag{11.32}$$
Comparing the above effective period from Equation 11.32 to the period obtained through the secant stiffness k_sec = f_m/x_0:

$$T'_{\mathrm{eff}} = 2\pi \sqrt{\frac{m}{f_m/x_0}} = 2\pi\sqrt{\frac{m\, x_0}{f_m}} \tag{11.33}$$
11.1.3.1.2 Nonlinear Damping
11.1.3.1.2.1 Timoshenko Damping (Stephen P. Timoshenko, 1878–1972) By using the Timoshenko damping approach (see Liang et al. 2012), it is possible to derive the effective damping ratio of the entire bilinear system by using several different approximations of the effective stiffness. It can be shown that, for the steady-state response of a linear system under sinusoidal excitation, the damping ratio can be calculated through the following equation:

$$\zeta = \frac{E_d}{4\pi E_K} \tag{11.34}$$
where Ed and EK are, respectively, the energy dissipated during a cycle and the maxi-
mum potential (kinetic) energy. For nonlinear damping, the damping ratio will be
denoted by ζeff in the subsequent sections.
$$\zeta_{\mathrm{eff}} = \frac{E_d}{4\pi E_k} = \frac{2(\mu-1)(1-a)}{\pi\mu(1+a\mu-a)} \tag{11.35}$$
Using Equation 11.27 for the maximum potential energy, the damping ratio is written as

$$\zeta_{\mathrm{eff}} = \frac{2q_d(x_0-d_y)}{\pi\left[k_u d_y^2 + k_d(x_0-d_y)^2\right]} \tag{11.36}$$

Alternatively,

$$\zeta_{\mathrm{eff}} = \frac{2q_d(x_0-d_y)}{\pi\left[k_u x_0^2 + k_d(x_0-d_y)^2\right]} \tag{11.37}$$
Given that the characteristic strength q_d for the bilinear system can be written as q_d = (1 − a)k_u d_y, substitution into Equation 11.37 yields

$$\zeta_{\mathrm{eff}} = \frac{E_d}{4\pi E_p} = \frac{2(\mu-1)(1-a)}{\pi[\mu^2 + a(\mu-1)^2]} \tag{11.39}$$
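The two damping-ratio estimates (Equations 11.35 and 11.39) differ only in the energy used in the denominator. The following Python sketch compares them (names are illustrative):

```python
import math

def zeta_eff_secant(mu, a):
    """Eq. 11.35: effective damping ratio based on the secant-stiffness energy."""
    return 2.0 * (mu - 1.0) * (1.0 - a) / (math.pi * mu * (1.0 + a * mu - a))

def zeta_eff_keff(mu, a):
    """Eq. 11.39: effective damping ratio with qd = (1 - a)*ku*dy substituted."""
    return 2.0 * (mu - 1.0) * (1.0 - a) / (math.pi * (mu ** 2 + a * (mu - 1.0) ** 2))
```

Both expressions vanish for μ = 1 (no yielding) and for a = 1 (no bilinearity), as they should.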
$$f_d(t) = c\,|\dot{x}|^{\beta}\,\mathrm{sgn}(\dot{x}) \tag{11.40}$$

$$E_d = c\,\omega_f^{\beta} x_0^{\beta+1} \int_0^{2\pi} |\cos(\omega_f t)|^{\beta+1}\,\mathrm{d}(\omega_f t) = c\,\omega_f^{\beta} x_0^{\beta+1} A_{\beta} \tag{11.41}$$

$$A_{\beta} = \int_0^{2\pi} |\cos(\omega_f t)|^{\beta+1}\,\mathrm{d}(\omega_f t) = \frac{2\sqrt{\pi}\,\Gamma\!\left(\dfrac{\beta+2}{2}\right)}{\Gamma\!\left(\dfrac{\beta+3}{2}\right)} \tag{11.42}$$
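The gamma-function expression for A_β in Equation 11.42 can be verified against direct numerical quadrature; the sketch below is illustrative:

```python
import math

def A_beta(beta):
    """Closed form of Eq. 11.42."""
    return (2.0 * math.sqrt(math.pi) * math.gamma((beta + 2.0) / 2.0)
            / math.gamma((beta + 3.0) / 2.0))

def A_beta_numeric(beta, n=100000):
    """Midpoint-rule value of the integral of |cos t|**(beta + 1) over [0, 2*pi]."""
    h = 2.0 * math.pi / n
    return h * sum(abs(math.cos((i + 0.5) * h)) ** (beta + 1.0) for i in range(n))
```

For β = 1 (linear viscous damping) the closed form reduces to π, the familiar elliptical-loop factor; for β = 0 (friction-like damping) it gives 4.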
Through the use of Equation 11.33, the damping ratio can be expressed accordingly. For Coulomb friction damping, the damping force is

$$f_d(t) = c\,\mathrm{sgn}(\dot{x}) \tag{11.44}$$

and

$$c = \mu N \tag{11.45}$$

where μ is the friction coefficient and N is the normal force. The effective damping ratio is

$$\zeta_{\mathrm{eff}} = \frac{f_d}{2 f_m} \tag{11.46}$$

where f_d and f_m are the amplitudes of the damping and maximum forces as previously defined.
In the following example, we consider the case of bilinear damping by using the alternative approach. Clearly defining the dissipative and restoring forces in a bilinear system can be complex. One possible decomposition gives

$$\zeta_{\mathrm{eff}} = \frac{f_d}{2 f_m} = \frac{(1-a)}{2[1 + a(\mu-1)]} \tag{11.48}$$

It is noted that, with different formulas for the dissipative force, the calculated damping ratio will vary slightly.
11.1.3.2 Random Nonlinearity
11.1.3.2.1 Random Force and Displacement
In the above discussion of nonlinear stiffness and damping, two assumptions were made, the first being that the maximum force f_m and displacement x_0 are fixed values, although in many cases the maximum force and displacement will themselves be random processes.
We now consider the probability distribution of the maximum deformation, and note that the maximum force yields similar results (refer back to Section 5.2). This is understood as a problem of the distributions of extrema.
$$E[X(t)\dot{X}(t)] = \left.\frac{\mathrm{d}R_X(\tau)}{\mathrm{d}\tau}\right|_{\tau=0} \tag{11.49}$$

Substituting

$$R_X(\tau) = \int_{-\infty}^{\infty} S_X(\omega)\, e^{j\omega\tau}\,\mathrm{d}\omega \tag{11.50}$$

will yield

$$E[X(t)\dot{X}(t)] = \left.\frac{\mathrm{d}}{\mathrm{d}\tau}\int_{-\infty}^{\infty} S_X(\omega)e^{j\omega\tau}\,\mathrm{d}\omega\right|_{\tau=0} = \int_{-\infty}^{\infty} j\omega\, S_X(\omega)\,\mathrm{d}\omega = 0 \tag{11.51}$$
The integral in Equation 11.51 is zero because S_X(ω) is an even function, so ωS_X(ω) is odd. By further assuming that the displacement X and the velocity Ẋ are Gaussian, the joint PDF is given by

$$f_{X\dot{X}}(x, \dot{x}) = \frac{1}{2\pi\sigma_X \sigma_{\dot{X}}} \exp\left[-\frac{1}{2}\left(\frac{x^2}{\sigma_X^2} + \frac{\dot{x}^2}{\sigma_{\dot{X}}^2}\right)\right] \tag{11.52}$$

$$f_{X\dot{X}}(x, \dot{x}) = \frac{1}{\sqrt{2\pi}\,\sigma_X} \exp\left(-\frac{x^2}{2\sigma_X^2}\right) \frac{1}{\sqrt{2\pi}\,\sigma_{\dot{X}}} \exp\left(-\frac{\dot{x}^2}{2\sigma_{\dot{X}}^2}\right) = f_X(x)\, f_{\dot{X}}(\dot{x}) \tag{11.53}$$
11.1.3.2.1.2 Level and Zero Up-Crossing Now, we consider the rate at which the displacement up-crosses level a. From Equation 5.24, we can obtain

$$\nu_a^+ = \frac{1}{2\pi}\frac{\sigma_{\dot{X}}}{\sigma_X}\, e^{-a^2/(2\sigma_X^2)} \tag{11.54}$$

When a = 0, this results in the zero up-crossing rate (see Equation 5.26 for additional explanation):

$$\nu_0^+ = \frac{1}{2\pi}\frac{\sigma_{\dot{X}}}{\sigma_X} \tag{11.55}$$

The peaks of the narrow-band process follow the Rayleigh distribution:

$$f_A(a) = \frac{a}{\sigma_X^2} \exp\left(-\frac{a^2}{2\sigma_X^2}\right), \quad a > 0 \tag{11.56}$$
For additional explanation, see Equation 5.100. Figure 11.6 plots an example of
the PDF.
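A minimal sketch of these crossing-rate and peak-distribution formulas (Equations 11.54 through 11.56); the function names are illustrative:

```python
import math

def up_crossing_rate(a, sigma_x, sigma_xdot):
    """Level-a up-crossing rate of a stationary Gaussian process (Eq. 11.54)."""
    return (sigma_xdot / sigma_x) / (2.0 * math.pi) * math.exp(
        -a * a / (2.0 * sigma_x ** 2))

def rayleigh_peak_pdf(a, sigma_x):
    """Rayleigh PDF of the peaks of a narrow-band process (Eq. 11.56)."""
    if a <= 0.0:
        return 0.0
    return a / sigma_x ** 2 * math.exp(-a * a / (2.0 * sigma_x ** 2))
```

Setting a = 0 in up_crossing_rate recovers the zero up-crossing rate of Equation 11.55, and the peak PDF has its mode at a = σ_X, as used later in Equation 11.79.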
stiffness. However, for random processes, the maximum displacement will only be
reached a few times, and in most cases, the magnitude of displacements will be smaller
than the maximum value. To estimate the effective stiffness and damping more realis-
tically, a specific displacement dp, which is smaller than the maximum value, must be
found. Using a proportional coefficient pc, the displacement dp can be written as
dp = pcx0 (11.57)
$$E_p = \tfrac{1}{2} k_u d_y^2 = \tfrac{1}{2} f_m d_y \tag{11.58}$$

When the peak displacement a is larger than d_y, then, referencing Equation 11.28, the stiffness k_n is

$$k_n = 2E_p/a^2 = f_m d_y / a^2 \tag{11.59}$$
Between zero and the maximum peak displacement x_0, the average stiffness k_ave is given by

$$k_{\mathrm{ave}} = \int_0^{x_0} k_n f_A(a)\,\mathrm{d}a = \int_0^{d_y} k_u \frac{a}{\sigma_X^2}\, e^{-a^2/(2\sigma_X^2)}\,\mathrm{d}a + \int_{d_y}^{x_0} \frac{f_m d_y}{a\,\sigma_X^2}\, e^{-a^2/(2\sigma_X^2)}\,\mathrm{d}a \tag{11.60}$$
[FIGURE 11.5 Force vs. peak displacement: yielding point Y at dy, point P at dp, and maximum point M at x0 (normalized to 1), with effective stiffnesses keff and k′eff.]
$$k_{\mathrm{eff}} = k_{\mathrm{ave}} = \frac{f_m}{d_y}\int_0^{d_y}\frac{a}{\sigma_X^2}\, e^{-a^2/(2\sigma_X^2)}\,\mathrm{d}a + f_m d_y \int_{d_y}^{x_0} \frac{1}{a\,\sigma_X^2}\, e^{-a^2/(2\sigma_X^2)}\,\mathrm{d}a \tag{11.61}$$

$$d_p = \frac{k_u d_y}{k_{\mathrm{eff}}} = \frac{f_m}{k_{\mathrm{ave}}} \tag{11.63}$$
pc = dp/x0 (11.64)
Without loss of generality, the maximum possible value of the peak displacement
can be normalized to unity so that
p c = d p (11.65)
It is noted that the distribution of the peak displacement a is from zero to infinity.
Allowing
x0 = 1.0 (11.66)
will result in errors ε in computing the probability of a being beyond 1 (the normal-
ized displacement), denoted by
$$\varepsilon = \int_1^{\infty} \frac{a}{\sigma_X^2}\, e^{-a^2/(2\sigma_X^2)}\,\mathrm{d}a \tag{11.67}$$
such unknown variables are treated as uncertain parameters. To determine how much
error is present, the maximum possible value of the uncertain variables must be deter-
mined. In the case of unknown yielding deformations, when the value of dy is small,
less error will exist. Conversely, when the value of dy is large, more error will exist.
We now consider the maximum allowable dy when Equation 11.60 is used.
Refer again to Figure 11.5. Before the yielding point, the force is proportional
to the displacement. Consequently, the distribution density function of the force vs.
the displacement, denoted by f F(a), is exactly equal to the PDF of the displacement.
Explicitly, this can be written as
$$f_F(a) = \frac{a}{\sigma_X^2}\exp\left(-\frac{a^2}{2\sigma_X^2}\right), \quad 0 \le a < d_y \tag{11.68}$$

After yielding, the force f_m will remain constant, while the displacement varies from d_y to 1. This is denoted by

$$f_F(a) = \frac{d_y}{\sigma_X^2}\exp\left(-\frac{d_y^2}{2\sigma_X^2}\right), \quad d_y \le a < 1 \tag{11.69}$$
Normalization requires

$$\int_0^{d_y} \frac{a}{\sigma_X^2}\, e^{-a^2/(2\sigma_X^2)}\,\mathrm{d}a + \int_{d_y}^{1} \frac{d_y}{\sigma_X^2}\, e^{-d_y^2/(2\sigma_X^2)}\,\mathrm{d}a = 1 \tag{11.70}$$

that is,

$$-e^{-d_y^2/(2\sigma_X^2)} + 1 + \frac{d_y}{\sigma_X^2}\, e^{-d_y^2/(2\sigma_X^2)}\,(1 - d_y) = 1 \tag{11.71}$$

or

$$\sigma_X^2 = d_y(1 - d_y) \tag{11.72}$$
Suppose a displacement process X(t) is given and the variance σ_X² is fixed. Then, the "allowed value" of the yielding displacement d_y is given by

$$d_y = \frac{1 \pm \sqrt{1 - 4\sigma_X^2}}{2} \tag{11.73}$$
$$\mu = 1/d_y \tag{11.74}$$

$$\sigma_X = 1/3 \tag{11.75}$$

Then

$$\int_0^1 f_A(a)\,\mathrm{d}a = \int_0^1 \frac{a}{(1/3)^2} \exp\left[-\frac{a^2}{2(1/3)^2}\right]\mathrm{d}a \approx 99\%$$

or

ε = 1 − 99% = 1%
When considering the negative square root, Table 11.1 can be used to show the
corresponding errors.
From Table 11.1, it is established that, when the standard deviation σX = 0.3,
calculated through Equation 11.67, the error ε = 0.4%, which is sufficiently small.
However, based on Equation 11.72, the “allowed yielding displacement” is also small
at a value of 0.1, with a ductility of 10. It is seen that when the allowed yielding
displacement becomes increasingly larger, the error ε will also be larger. As an
example, when dy = 0.2, the error will become 4.4%.
Next, we calculate the averaged effective stiffness. For example, when σ_X = 0.4 and d_y = 0.2, the first term of Equation 11.61 yields

$$\frac{f_m}{d_y}\int_0^{d_y} \frac{a}{\sigma_X^2}\, e^{-a^2/(2\sigma_X^2)}\,\mathrm{d}a = 0.59 f_m$$
Table 11.1
Errors versus Yielding Displacements
dy 0.100 0.109 0.118 0.127 0.138 0.149 0.160 0.173 0.186 0.2000
μ 10.0 9.21 8.50 7.85 7.27 6.74 6.25 5.80 5.38 5
σX 0.30 0.31 0.32 0.33 0.34 0.36 0.37 0.38 0.39 0.4
ε 0.004 0.006 0.008 0.011 0.015 0.019 0.024 0.030 0.034 0.044
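The tail integral in Equation 11.67 has the closed form ε = exp(−1/(2σ_X²)), which reproduces the entries of Table 11.1; a brief sketch:

```python
import math

def epsilon_error(sigma_x):
    """Closed form of Eq. 11.67: integrating (a/sigma**2)*exp(-a**2/(2*sigma**2))
    from a = 1 to infinity gives exp(-1/(2*sigma**2))."""
    return math.exp(-1.0 / (2.0 * sigma_x ** 2))
```

epsilon_error(0.3) is about 0.004 and epsilon_error(0.4) about 0.044, matching the first and last columns of the table.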
and the second term yields

$$f_m d_y \int_{d_y}^{1} \frac{1}{a\,\sigma_X^2}\, e^{-a^2/(2\sigma_X^2)}\,\mathrm{d}a = 0.95 f_m$$

so that k_ave = 1.54 f_m and

$$d_p = \frac{f_m}{k_{\mathrm{ave}}} = 0.65$$
pc = 0.65 (11.76)
Similarly, other values of dy and σX can be calculated. Table 11.2 lists a number
of computed results. From the table, as dy varies from 0.13 to 0.20, pc will vary from
0.6 to 0.65.
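The averaging procedure of Equations 11.60 through 11.64 can be sketched numerically as below. The quadrature here is a simple midpoint rule, so the resulting pc lands near, though not exactly at, the tabulated 0.60 to 0.65; treat this as an illustration of the procedure rather than a reproduction of Table 11.2.

```python
import math

def penzien_pc(dy, sigma_x, n=20000):
    """Proportional constant pc = dp/x0 = (fm/k_ave)/x0 with x0 normalized to 1.

    k_ave follows Eq. 11.61: elastic part (kn = ku = fm/dy) below dy, and
    kn = fm*dy/a**2 above dy, both weighted by the Rayleigh peak PDF.
    """
    def fA(a):
        return a / sigma_x ** 2 * math.exp(-a * a / (2.0 * sigma_x ** 2))

    h1 = dy / n
    term1 = sum(fA((i + 0.5) * h1) for i in range(n)) * h1 / dy
    h2 = (1.0 - dy) / n
    term2 = 0.0
    for i in range(n):
        a = dy + (i + 0.5) * h2
        term2 += dy * fA(a) / a ** 2 * h2
    k_ave_over_fm = term1 + term2
    return 1.0 / k_ave_over_fm        # dp = fm / k_ave, Eq. 11.63
```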
One approximation takes the sum of the mean peak and the yielding displacement:

$$d_p = \mu_A + d_y \tag{11.77}$$
An additional approximation is done by using the sum of the mean plus 70%
standard deviation represented by
dp = μA + 0.7σA (11.78)
Here, the mean μA and the standard deviation σA will be determined as follows.
Table 11.3 gives a comparison of the calculated results.
Table 11.2
Yielding Displacements dy versus Proportional Constant pc
dy 0.13 0.14 0.15 0.16 0.17 0.18 0.19 0.20
σX 0.34 0.35 0.36 0.37 0.38 0.38 0.39 0.4
pc 0.60 0.61 0.61 0.62 0.63 0.64 0.64 0.65
Table 11.3
Equivalent dp
dy 0.13 0.14 0.15 0.16 0.17 0.18 0.19 0.20
μA 0.42 0.43 0.45 0.46 0.47 0.48 0.49 0.50
σA 0.22 0.23 0.23 0.24 0.25 0.25 0.26 0.26
pc 0.60 0.61 0.61 0.62 0.63 0.64 0.64 0.65
μA + dy 0.55 0.57 0.60 0.62 0.64 0.66 0.68 0.70
μA + 0.7σA 0.58 0.59 0.61 0.63 0.64 0.66 0.67 0.68
Observe from Figure 11.6 that the peak of the PDF (often referred to as the distribution mode) of the Rayleigh distribution is at

$$a = \sigma_X \tag{11.79}$$

with mean and standard deviation

$$\mu_A = \sqrt{\frac{\pi}{2}}\,\sigma_X \tag{11.80}$$

$$\sigma_A = \sqrt{\frac{4-\pi}{2}}\,\sigma_X \tag{11.81}$$
Since a = 1 denotes a normalized maximum peak, Equation 11.79 implies that the
yielding displacement occurs when the corresponding PDF is at its maximum value.
[FIGURE 11.6 PDF of the normalized peak value (Rayleigh distribution), marking the mode, the mean, the mean + 1 STD, and the assumed maximum peak.]
pc = 0.65
Professor J. Penzien (1927–2011) suggested that, during a random process, the chance of reaching x_0 is minimal. To provide a more realistic estimation of the effective stiffness and damping based on this observation, 0.65 times x_0 should be used rather than x_0 itself. The proportional constant 0.65 is accordingly referred to as the Penzien constant.
in the equation

$$k'_{\mathrm{eff}} = f_m/x_0 \tag{11.83}$$
[FIGURE 11.7 Bilinear model with two yielding points Y1 at (f1, d1) and Y2 at (f2, d2), and peak point P.]
The second yielding point is then denoted by (f_2, d_2). If the stiffness is reduced (see Equation 10.108 for further reference), the corresponding effective stiffness will be denoted by k_eff2. Seemingly, the value of k_eff2 will depend not only on the value of d_p but also on the degree to which the unloading stiffness k_u2 is reduced. In general, the computations of k_eff2, as well as similar computations, are rather complex. The Monte Carlo simulation provides a practical method for carrying out this estimation. This will be briefly discussed in Section 11.3.
11.2.1.1.1 Energy Dissipation
Figure 11.8 shows the plot of viscous damping force vs. a deterministic displacement, where the smallest loop is obtained when β = 1; the smaller the value of β, the larger the resulting energy dissipation loop. This form of plot directly indicates the energy dissipation of a system with a steady-state response amplitude of 0.1 m. Because viscous damping forces relate directly to velocities, Figure 11.8 can be seen as several generalized phase planes, among which only the linear velocity vs. displacement curve is an exact ellipse.
As shown in Figure 11.9, given the same displacement for general viscous damping, the smaller the damping exponent β, the larger the amount of energy that can be dissipated. However, this does not necessarily mean that a system with low β will dissipate more energy. As an example, consider a system with a mass of m = 1, a damping coefficient of c = 1, and a stiffness of k = 50, excited by a deterministic sinusoidal force with a driving frequency of ω = 3. Figure 11.10 shows the phase plane for β = 0.1 and for β = 1.0. It can be seen that the area enclosed by the displacement–velocity curve with β = 1.0 is larger than that with β = 0.1.
11.2.1.1.2 Bounds
Figure 11.10 shows the phase plane for the two systems in Figures 11.8 and 11.9. Clearly,
the curves will provide information for the bounds of the velocities and displacements.
[FIGURE 11.8 Damping force (N) vs. displacement (m) loops for β = 0.6 and β = 1.0.]

[FIGURE 11.9 Energy dissipation loops for β = 0.1 and β = 1.0.]

[FIGURE 11.10 Phase planes (velocity vs. displacement) for β = 0.1 and β = 1.0.]
Here m, c, k, and µ are all constant. Let us first consider the undamped case with sinusoidal excitation, and assume a trial solution of

$$x^{(0)} = A\cos(\omega t) \tag{11.88}$$

Substitution of Equation 11.88 into Equation 11.87 and integration of the second-derivative expression twice yields the next iterative approximation

$$x^{(1)} = \frac{1}{\omega^2}\left(\omega_n^2 A - 0.75\,\alpha A^3 - p\right)\cos(\omega t) + \cdots \tag{11.89}$$

Since the amplitude A in Equation 11.89 must equal that of Equation 11.88, ignoring the higher-order terms will yield

$$\frac{0.75\,\alpha A^3}{\omega_n^2} = \left(1 - \frac{\omega^2}{\omega_n^2}\right)A - \frac{p}{\omega_n^2} \tag{11.90}$$
11.2.1.2.2 Numerical Simulations
When the input is random, numerical simulations are often needed. Figure 11.11
illustrates a Simulink diagram for the Duffing equation.
[FIGURE 11.11 Simulink diagram for the Duffing equation, with gain blocks for c, k, and −1/m, two integrators producing velocity and displacement, a power block for the cubic term, and excitation read from the workspace.]
Figure 11.12 shows the phase plane of the above-mentioned Duffing equation, where m = 1, k = 50, c = 0, and μ = 2. Figure 11.13 illustrates the same phase plane but for the unstable case when c = −0.5. Both phase planes are under sinusoidal excitation with p = 1 and ω = 3.
Figure 11.14 demonstrates the phase plane of the above-mentioned Duffing equation under a random excitation, with Figure 11.15 showing the time history of the displacement. From Figure 11.15, the vibration is clearly seen to be nonlinear: the input is deterministic with the single frequency ω, yet the response contains additional frequency components. The time history of the response is comparable to that of a narrow-band process.
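For readers without Simulink, a comparable Duffing response can be produced with a plain fixed-step Runge–Kutta integration. The sketch below assumes the cubic form mẍ + cẋ + kx + μx³ = p cos(ωt); the function and parameter names are chosen here for illustration.

```python
import math

def duffing_response(m=1.0, c=0.0, k=50.0, mu=2.0, p=1.0, w=3.0,
                     dt=0.001, t_end=30.0):
    """Classical fourth-order Runge-Kutta integration of the Duffing equation.

    Returns the displacement and velocity histories as two lists.
    """
    def accel(t, x, v):
        return (p * math.cos(w * t) - c * v - k * x - mu * x ** 3) / m

    xs, vs = [0.0], [0.0]
    t, x, v = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        k1x, k1v = v, accel(t, x, v)
        k2x, k2v = (v + 0.5 * dt * k1v,
                    accel(t + 0.5 * dt, x + 0.5 * dt * k1x, v + 0.5 * dt * k1v))
        k3x, k3v = (v + 0.5 * dt * k2v,
                    accel(t + 0.5 * dt, x + 0.5 * dt * k2x, v + 0.5 * dt * k2v))
        k4x, k4v = v + dt * k3v, accel(t + dt, x + dt * k3x, v + dt * k3v)
        x += dt * (k1x + 2.0 * k2x + 2.0 * k3x + k4x) / 6.0
        v += dt * (k1v + 2.0 * k2v + 2.0 * k3v + k4v) / 6.0
        t += dt
        xs.append(x)
        vs.append(v)
    return xs, vs
```

Plotting vs against xs for c = 0 gives a bounded phase-plane loop of the kind shown in Figure 11.12, while a negative c quickly produces an unbounded (unstable) trajectory as in Figure 11.13.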
[FIGURE 11.12 Phase plane (velocity vs. displacement) of the Duffing equation under sinusoidal excitation.]

[FIGURE 11.13 Phase plane of the unstable case, c = −0.5.]

[FIGURE 11.14 Phase plane of the Duffing equation under random excitation.]

[FIGURE 11.15 Time history of the displacement.]
Although the Duffing equation only depicts one kind of nonlinear vibration, from this example, the following can be approximately stated:
3. Nonlinear vibration also has the capability to dissipate energy. When the effective damping is sufficiently small, for example, ζeff < 0.3, the response time history is similar to that of a narrow-band process. Nevertheless, the damping effect will not be fixed. In certain cases, nonlinear vibration has the potential to become unstable or chaotic.
11.2.1.2.3 Other Examples
Besides the Duffing equation, we provide several other nonlinear engineering vibration problems without detailed discussion.

$$m\ddot{x} + c\,|\dot{x}|^{\beta}\,\mathrm{sgn}(\dot{x}) + kx = f(t) \tag{11.92}$$
11.2.1.2.3.3 Bouc Oscillator The bilinear stiffness shown in Figure 11.4 can be further generalized as a Bouc–Wen model (Wen 1989); an SDOF system with such nonlinear stiffness can be modeled accordingly, where α, η, ν, β, and γ are parameters that describe the shape of the nonlinear stiffness, A is an amplitude parameter, and n is an exponent controlling the sharpness of the yielding transition.
The rocking of a rigid block can be modeled as

$$I\ddot{\theta} + WR\,\sin[\alpha\,\mathrm{sgn}(\theta) - \theta]\,[1 + f(t)] + WR\,\cos[\alpha\,\mathrm{sgn}(\theta) - \theta]\,g(t) = 0,$$
$$\dot{\theta}(t_*^+) = c\,\dot{\theta}(t_*^-), \qquad t_*: \theta(t_*) = 0 \tag{11.94}$$

where θ is the rocking angle, I is the moment of inertia, W is the weight of the block, R and α are respectively the distance and angle of the center of gravity to a corner of the block, and f(t) and g(t) are respectively excitations relating the horizontal and vertical accelerations (see Iyengar and Dash 1978).
11.2.1.2.3.5 Van der Pol Oscillator (Balthasar van der Pol, 1889–1959) The Van der Pol oscillator can be used to model chimney vibration due to a cross-wind (see Vickery and Basu 1983).
Another example involves wave forces on offshore structures:

$$M\ddot{x} + C\dot{x} + Kx = f(t)$$
$$f_i(t) = \tfrac{1}{2} C_D \rho A_i \left[u_i(t) - \dot{x}_i(t)\right]\left|u_i(t) - \dot{x}_i(t)\right| + C_M \rho V_i\, \dot{u}_i(t) - C_M \rho V_i\, \ddot{x}_i(t) \tag{11.96}$$

Here, C_D and C_M are the Morison drag and inertia coefficients; ρ is the fluid density; A_i and V_i are the projected area and volume associated with the ith node; ẋ_i, ẍ_i, u_i, and u̇_i are respectively the structural velocity and acceleration and the wave velocity and acceleration at node i.
$$\omega_i \approx \omega_{ni}, \quad i = 1, \ldots, S \tag{11.97}$$

and

$$\zeta_i = \frac{E_{di}}{4\pi E_{ki}}, \quad i = 1, \ldots, S \tag{11.98}$$

$$\omega_i \neq \omega_{ni} \tag{11.99}$$
11.2.2 Markov Vector
In the above section, we introduced nonlinear vibration with deterministic input. Now let us consider excitations that are random processes. Specifically, a system subjected to the special input of a Gaussian white noise process will have a response that is a diffusion process. Mathematically, a diffusion process is the solution to a stochastic differential equation (SDE); physically, diffusion is the net movement of a substance from a region of high concentration to a region of low concentration. The aforementioned Brownian motion is a classic example of a diffusion process. In general, a diffusion process is a Markov process with continuous sample paths. (For more details, the Fokker–Planck–Kolmogorov [FPK] equation, or Kolmogorov forward equation [Equation 11.104], may be consulted.) Horsthemke and Lefever (1984) and Risken (1989) describe these equations in detail.
where δ_ij is the Kronecker delta function and Δ_j(t) = B_j(t + Δt) − B_j(t).
Equation 11.100 can be used for both multi-degree of freedom (MDOF) linear
and nonlinear discrete systems. The excitations can be both stationary and nonsta-
tionary, both white noise and nonwhite noise, and both external and parameter exci-
tations. The initial conditions can be random as well.
The transitional PDFs that satisfy the Kolmogorov equations, denoted by p(x, t│x₀, t₀), include the following cases, as long as the response process is Markovian.

$$p(\mathbf{x}, t \mid \mathbf{x}_0, t_0) = \int_{-\infty}^{\infty} p(\mathbf{x}, t \mid \mathbf{z}, \tau)\, p(\mathbf{z}, \tau \mid \mathbf{x}_0, t_0)\,\mathrm{d}\mathbf{z} \tag{11.103}$$
$$\frac{\partial p(\mathbf{x}, t \mid \mathbf{x}_0, t_0)}{\partial t} = -\sum_{j=1}^{n} \frac{\partial\left[f_j(\mathbf{x}, t)\, p(\mathbf{x}, t \mid \mathbf{x}_0, t_0)\right]}{\partial x_j} + \sum_{i,j=1}^{n} \frac{\partial^2\left[a_{ij}\, p(\mathbf{x}, t \mid \mathbf{x}_0, t_0)\right]}{\partial x_i\, \partial x_j} \tag{11.104}$$

where

$$a_{ij} = \sum_{k=1}^{n} g_{ik}\, g_{jk} \tag{11.105}$$

and the term a_ij will also be used in Equations 11.106, 11.112, and 11.117.
$$\frac{\partial p(\mathbf{x}, t \mid \mathbf{x}_0, t_0)}{\partial t_0} = -\sum_{j=1}^{n} f_j(\mathbf{x}_0, t_0)\, \frac{\partial p(\mathbf{x}, t \mid \mathbf{x}_0, t_0)}{\partial x_{0j}} - \sum_{i,j=1}^{n} a_{ij}\, \frac{\partial^2 p(\mathbf{x}, t \mid \mathbf{x}_0, t_0)}{\partial x_{0i}\, \partial x_{0j}} \tag{11.106}$$
11.2.2.2.1 Exact Solutions
When stationary solutions of the FPK equation exist, they can be found for all first-order systems but only for a limited set of higher-order systems. Specifically, an SDOF and/or MDOF vibration system with the aforementioned nonlinear damping and nonlinear stiffness excited by a white noise process can have exact solutions.
The basic idea is that, in a time reversal between t and −t, the response vector can be classified as composed of either even or odd functions; the even components will not change their sign, but the odd variables will have the sign changed. Denote the ith response under time reversal as

$$\tilde{x}_i = \delta_i x_i \tag{11.107}$$

where δ_i = 1 and −1 for even and odd variables, respectively.
For the steady-state responses, in terms of the drift coefficient A_i and diffusion coefficient B_ij, this condition can be written as

$$A_i(\mathbf{x})\,p(\mathbf{x}) + \delta_i A_i(\tilde{\mathbf{x}})\,p(\mathbf{x}) - \frac{\partial\left[B_{ij}(\mathbf{x})\,p(\mathbf{x})\right]}{\partial x_j} = 0 \tag{11.109}$$

and the stationary PDF takes the form

$$p(\mathbf{x}) = C \exp[-U(\mathbf{x})] \tag{11.111}$$

where C is a constant and U(x) is the generalized potential; solving for U(x) yields the stationary solution p(x) of Equation 11.111.
11.2.2.2.2 Moment Solutions
When the input to the FPK equation is white noise arising from a non-Gaussian process, the response becomes a nondiffusive Markov process. The solution will still have the Markovian property, but the equation of motion of the transitional PDF will have an infinite number of terms.
Soong (1973) derived the governing equations for the moments based on the FPK equation. The moments of a function h[x(t), t] of the response solution x(t) of Equation 11.100 can be given by
$$\frac{\mathrm{d}E[h(\mathbf{x}, t)]}{\mathrm{d}t} = \sum_{j=1}^{n} E\!\left[f_j \frac{\partial h}{\partial x_j}\right] + \sum_{i,j=1}^{n} E\!\left[a_{ij} \frac{\partial^2 h}{\partial x_i\, \partial x_j}\right] + E\!\left[\frac{\partial h}{\partial t}\right] \tag{11.112}$$

Upon setting

$$h[\mathbf{x}(t), t] = \prod_{i=1}^{n} x_i^{k_i} \tag{11.113}$$
and choosing different values for ki, we can further derive equations for most com-
monly seen moments.
When the Markov property of response is used to study the first passage proper-
ties, either the Kolmogorov forward or backward equations can be solved in conjunc-
tion with appropriate boundary conditions imposed along the critical barriers.
Another approximate solution is to start with the Kolmogorov backward equa-
tion, based on which we can derive equations for moments of the first passage time,
recursively.
Denote the time required by the response trajectory of Equation 11.100, starting at the point

$$\mathbf{x} = \mathbf{x}_0 \tag{11.114}$$

in the phase space at time t_0, to leave a specified safe domain for the first time as T(x_0). The moments

$$M_0 = 1 \tag{11.115}$$

$$M_k = E\left[T^k\right], \quad k = 1, 2, \ldots, n \tag{11.116}$$

satisfy, recursively,

$$-\sum_{j=1}^{n} f_j(\mathbf{x}_0, t)\,\frac{\partial M_k}{\partial x_{0j}} - \sum_{i,j=1}^{n} a_{ij}\,\frac{\partial^2 M_k}{\partial x_{0i}\,\partial x_{0j}} + k M_{k-1} = 0, \quad k = 0, 1, 2, \ldots \tag{11.117}$$
11.2.2.2.3 Approximate Solutions
For nonlinear random vibration problems, it is in general difficult to obtain exact solutions. We often use the iterative approach based on the parametrix method to establish the existence and uniqueness of solutions of the corresponding partial differential equations.
Numerical solution through a computational approach is a second way to obtain approximate solutions, and it often produces useful results. However, one of the important tasks in using numerical solutions is to ensure their uniqueness.
11.2.3 Alternative Approaches
Besides the above-mentioned methods, there are several other alternative approaches
to solve the problems of nonlinear random vibrations in the literature. In the follow-
ing, we only briefly discuss these basic ideas for the nondiffusion Markov process.
11.2.3.1 Linearization
In Section 11.1, several nonlinear damping and stiffness models were discussed. It was shown that one of the approaches is to linearize the nonlinear coefficients. That is, by using certain equivalent linear parameters, the systems become linear. In other words, this approach finds a linear system to approximate the nonlinear responses. The criteria of these approaches include equal force, equal displacement, equal energy, and their weighted combinations.
The linearization method is the most popular approach to dealing with nonlinear systems. It is especially applicable for nonlinear systems under random excitation. Many observations have shown that random excitations themselves tend to linearize the responses, particularly in terms of the aforementioned homogeneity. That is, the bound of the random responses tends to be more nearly proportional to the amplitudes of the input bound than is the case for deterministic excitations.
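As a concrete illustration of the idea (under the usual zero-mean Gaussian response assumption), consider a cubic restoring force g(x) = kx + εx³. Minimizing the mean-square error E[(g(x) − k_eq x)²] gives k_eq = E[x g(x)]/E[x²] = k + 3εσ², since E[x⁴] = 3σ⁴ for a Gaussian variable. The Python sketch below, with illustrative names, includes a Monte Carlo check:

```python
import random

def keq_cubic(k, eps, sigma):
    """Equivalent linear stiffness for g(x) = k*x + eps*x**3, x ~ N(0, sigma**2)."""
    return k + 3.0 * eps * sigma ** 2

def keq_cubic_mc(k, eps, sigma, n=200000, seed=1):
    """Monte Carlo estimate of E[x*g(x)]/E[x**2] for comparison."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        x = rng.gauss(0.0, sigma)
        num += x * (k * x + eps * x ** 3)
        den += x * x
    return num / den
```

Note that k_eq depends on the response variance σ², which itself depends on k_eq, so in practice the linearization is solved iteratively.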
11.2.3.2 Perturbation
When the equations of motion possess nonlinear coefficients that deviate only slightly from linear parameters, the solution can be expanded in a power series in the small parameters. This leads to a set of linear equations, which is suitable for handling polynomial nonlinearities.
The above-discussed small parameter describes the difference between the nonlinear and linear systems. In most cases, this method requires that the random forcing function be additive and/or multiplicative.
When the above-mentioned parameters are not sufficiently small, however, large errors can be expected.
11.2.3.3 Special Nonlinearization
Compared with the method of linearization, this approach uses certain equivalent nonlinear parameters instead of linear ones. The equivalent nonlinear system, however, is chosen so that it has closed-form solutions or is easier to solve. The criteria of this approach are similar to those of linearization: equal force, equal displacement, equal energy, and their weighted combinations.
11.2.3.4 Statistical Averaging
If the nonlinear system has only a low capacity for energy dissipation, statistical averaging may be used to generate diffusion Markov responses. That is, by using averages, the equivalent FPK equation approximates the nondiffusion cases.
The averaging can be over amplitudes and phases, which is typically performed in the frequency domain. It can also be over energy envelopes, as well as combinations of both amplitudes/phases and energy envelopes.
This method often requires broadband excitations.
11.2.3.5 Numerical Simulation
For nonlinear random vibrations, numerical simulations are always powerful tools.
Many above-mentioned approaches also require numerical simulations. The com-
putational vibration solvers as well as related developments on numerical simula-
tions have been well established, and they will be continuously developed and/or
improved. It is worth mentioning that computational tools, such as MATLAB® and
Simulink, can be a good platform to carry out the numerical simulations. In Section
11.3, we will discuss a special numerical simulation.
1. Draw a square on the ground with side length 2, and then inscribe a circle within it (see Figure 11.16). We know that the area of the square is 2 × 2 = 4, and the area of the circle is (2/2)²π = π.
2. Uniformly scatter various objects of uniform size throughout the square, for
example, grains of sand.
3. Given that the two areas exhibit a ratio of π/4, the objects, when randomly
scattered, should fall within the areas by approximately the same ratio.
Thus, counting the number of objects in the circle and dividing by the total
number of objects within the square should yield an approximated ratio
of π/4.
4. Lastly, multiplying the result by 4 will then yield an approximation for π.
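The four steps above can be sketched in a few lines of Python (the book's own examples use MATLAB; this stand-alone version is illustrative):

```python
import random

def estimate_pi(n, seed=0):
    """Monte Carlo estimate of pi: the fraction of uniformly scattered points
    falling inside the unit-radius circle inscribed in [-1, 1] x [-1, 1],
    multiplied by 4 (steps 2 through 4 above)."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n):
        x = rng.uniform(-1.0, 1.0)
        y = rng.uniform(-1.0, 1.0)
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n
```

With a few hundred thousand points, the estimate is typically within a few hundredths of π.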
Although it may be difficult, it is necessary to ensure that the large number of trials converges to the correct results.
11.3.1.1 Applications
Monte Carlo simulations have many applications, specifically in modeling with a
significant number of uncertainties in inputs and system nonlinearities. In the fol-
lowing, some examples of applicable fields are briefly given.
11.3.1.1.1 Mathematics
In general, Monte Carlo simulations are used in mathematics in order to solve vari-
ous problems by generating suitable random numbers and observing the fraction of
the numbers that follow a specified property or properties. This method is useful for
obtaining numerical solutions to problems that are too complicated to solve analyti-
cally. One of the most common applications of Monte Carlo simulations is the Monte
Carlo integration.
$$\begin{aligned} p_f = {} & \int_{-\infty}^{\infty} f_R(x) \int_x^{\infty} f_{L1}(y) \int_{-\infty}^{y} f_{L2}(z, x) \int_{-\infty}^{y} f_C(w, x)\, p_{L1}\,\mathrm{d}w\,\mathrm{d}z\,\mathrm{d}y\,\mathrm{d}x \\ & + \int_{-\infty}^{\infty} f_R(x) \int_x^{\infty} f_C(y) \int_{-\infty}^{y} f_{L1}(z, x) \int_{-\infty}^{y} f_{L2}(w, x)\, p_{L1L2}\,\mathrm{d}w\,\mathrm{d}z\,\mathrm{d}y\,\mathrm{d}x \\ & + \int_{-\infty}^{\infty} f_R(x) \int_x^{\infty} f_{L2}(y) \int_{-\infty}^{y} f_{L1}(z, x) \int_{-\infty}^{y} f_C(w, x)\, p_{L2}\,\mathrm{d}w\,\mathrm{d}z\,\mathrm{d}y\,\mathrm{d}x \end{aligned} \tag{11.118}$$
Equation 11.118 gives the total failure probability pf when the calculation is deter-
ministic, based on rigorously defining those PDFs and conditional probabilities, as
well as establishing exact integral limits. However, both the analysis and the compu-
tation can be rather complex, even though numerical integration can be carried out.
Deterministic numerical integration usually operates by taking a number of evenly spaced samples of a function. In general, such integration works well for functions of one variable. For multiple variables, there will be vector functions, for which deterministic quadrature methods may be very inefficient. For example, to numerically integrate a function of a two-dimensional vector, equally spaced grid points over a two-dimensional surface are required. In this case, a 100 × 100 grid requires 10,000 points. If the vector has 100 dimensions, the same spacing on the grid would require 100^100 points. Note that, in many engineering problems, one dimension is actually a degree of freedom; an MDOF system with 100 degrees of freedom is considered a small-sized problem. A finite-element model can easily contain thousands of DOFs. Therefore, the corresponding computational burden is huge and impractical.
Monte Carlo simulations can sharply reduce the effort of mathematical derivation
on multiple integrals and the demands of integration over multiple dimensions. In
many cases, the integral can be approximated by randomly selecting points within
such a 100-dimensional space and statistically averaging the function values at these
selected points. It is well known that, by the central limit theorem, Monte Carlo
simulation displays N^(−1/2) convergence, which implies that, regardless of the number
of dimensions, quadrupling the number of sampled points will halve the error.
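As a sketch of this idea (written in Python, although the book's examples use MATLAB; the integrand below is an arbitrary illustrative choice, not one from the text):

```python
import numpy as np

def mc_integrate(f, dim, n_samples, rng):
    """Estimate the integral of f over the unit hypercube [0, 1]^dim by
    averaging f at uniformly sampled points (the hypercube volume is 1)."""
    pts = rng.random((n_samples, dim))
    vals = f(pts)
    est = vals.mean()
    # Standard error of the estimate decays as N^(-1/2):
    err = vals.std(ddof=1) / np.sqrt(n_samples)
    return est, err

# Illustrative integrand: f(x) = sum of squared coordinates;
# its exact integral over [0, 1]^d is d/3.
f = lambda x: (x ** 2).sum(axis=1)
rng = np.random.default_rng(0)
est1, err1 = mc_integrate(f, dim=100, n_samples=4_000, rng=rng)
est2, err2 = mc_integrate(f, dim=100, n_samples=16_000, rng=rng)
# Quadrupling the sample size roughly halves the standard error.
```

Note that the cost depends only on the number of sampled points, not on the 100 dimensions, which is exactly why the method scales where deterministic quadrature does not.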
11.3.1.1.2 Physical Sciences
For computational physics, physical chemistry, and related applied fields, Monte
Carlo simulations play an important part and have diverse applications, from com-
plicated quantum chromodynamics calculations to designing heat shields and aero-
dynamic forms. Monte Carlo simulations are very useful in statistical physics; for
example, Monte Carlo molecular modeling works as an alternative for computational
molecular dynamics as well as for computation of statistical field theories of simple
particle and polymer models (Baeurle 2009).
In experimental particle physics, these methods are used for designing detectors,
understanding their behavior, and comparing experimental data to theory, as well as
on the vastly larger scale of galaxy modeling (MacGillivray and Dodd 1982). Monte
Carlo methods are also used in the ensemble models that form the basis of modern
weather forecasting operations.
11.3.1.1.4 Pattern Recognition
In many cases of a random process, it is necessary to identify certain patterns. In
stricter terms, pattern recognition belongs to the category of system identification,
so it is an inverse problem. While the general inverse problem will be discussed
separately, here we consider certain specific engineering patterns.
In Chapter 7, Figure 7.7 shows an earthquake response spectrum and a design
spectrum SD. These spectra are actually generated by using Monte Carlo pattern
recognition. The domain of the input is all possible earthquake ground motions in a
seismic zone of interest, with normalized peak amplitude, say, a PGA of 0.4g. This is
the "definition of a domain of possible inputs or excitations."
[Figure: design spectrum SD, spectral ordinate (0 to 0.35) versus period (0 to 10 s), with control points A, B, C, D, and E]
11.3.1.1.5 Optimization
Optimization can be computationally expensive. Therefore, Monte Carlo simulations
are found to be very helpful for optimization, especially multidimensional
optimization. In most cases, Monte Carlo optimizations are based on random
walks, which were briefly discussed in Chapter 5 (the Markov chains). Also, in most
cases, the optimization program moves a marker around in multidimensional space,
seeking directions that lead to a lower value of the cost function. Although it
sometimes moves against the gradient, the average tendency is to gradually find the
lowest value.
Compared to the “systematic” approach of optimization, the random walks may
reduce the computational burden significantly.
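A minimal Python sketch of such a random-walk search follows; the cost function, step size, and cooling schedule are illustrative assumptions, not the book's algorithm:

```python
import numpy as np

def random_walk_minimize(cost, x0, n_steps, step, rng):
    """Monte Carlo random-walk minimization: propose random moves in
    multidimensional space; accept improvements, and occasionally accept
    moves against the gradient so the walk can escape local traps."""
    x = np.array(x0, dtype=float)
    c = cost(x)
    best_x, best_c = x.copy(), c
    for k in range(n_steps):
        temp = 1.0 / (1.0 + k)                  # decreasing "temperature"
        cand = x + step * rng.standard_normal(x.size)
        c_cand = cost(cand)
        # Metropolis-type rule: sometimes move against the gradient.
        if c_cand < c or rng.random() < np.exp(-(c_cand - c) / temp):
            x, c = cand, c_cand
            if c < best_c:
                best_x, best_c = x.copy(), c
    return best_x, best_c

# Hypothetical cost function with its minimum at (2, 2, 2, 2, 2).
cost = lambda x: float(np.sum((x - 2.0) ** 2))
rng = np.random.default_rng(1)
x_best, c_best = random_walk_minimize(cost, np.zeros(5), 20_000, 0.3, rng)
```

The walk wanders early, when the acceptance temperature is high, and then settles toward the lowest cost it has found.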
11.3.1.1.6 Inverse Problems
In Chapter 9, we discussed the basic concepts and general procedure of engineering
inverse problems and the need to consider randomness and uncertainty. This fact
makes inverse problems more difficult to handle. One of the effective approaches is
to use Monte Carlo simulations.
To solve inverse problems, Monte Carlo simulations are used to probabilistically
formulate inverse problems. To do so, we need to define a probability distribution in
the model space. This probability distribution combines a priori information with new
information obtained by measuring and/or simulating selected observable parameters.
Generally speaking, the function linking data with model parameters can be non-
linear. Therefore, a posteriori probability in the model space may be complex to
describe. In addition, the relationship can be multimodal, and some moments may
not be well defined and/or may have no closed-form expressions.
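One common way to sample such a posterior is the Metropolis algorithm; the following Python sketch uses a hypothetical nonlinear forward model with a Gaussian prior and likelihood purely for illustration:

```python
import numpy as np

def metropolis(log_post, m0, n_samples, step, rng):
    """Metropolis sampling of a posterior defined over model space; works
    even when the posterior is multimodal or has no closed form."""
    samples = np.empty(n_samples)
    m, lp = m0, log_post(m0)
    for i in range(n_samples):
        cand = m + step * rng.standard_normal()
        lp_cand = log_post(cand)
        if np.log(rng.random()) < lp_cand - lp:   # accept/reject step
            m, lp = cand, lp_cand
        samples[i] = m
    return samples

# Hypothetical nonlinear function g(m) linking the model parameter m to
# observable data, with Gaussian measurement noise (all values assumed).
g = lambda m: m + 0.3 * m ** 3
m_true, sigma = 1.0, 0.2
rng = np.random.default_rng(2)
d_obs = g(m_true) + sigma * rng.standard_normal(20)   # simulated measurements

def log_post(m):
    # a priori information (standard normal prior) plus the data likelihood
    return -0.5 * m ** 2 - 0.5 * np.sum((d_obs - g(m)) ** 2) / sigma ** 2

samples = metropolis(log_post, 0.0, 20_000, 0.2, rng)
m_hat = samples[5_000:].mean()          # posterior mean after burn-in
```

The sampler needs only the ability to evaluate the posterior up to a constant, so no closed-form moments are required.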
Here, a, b, and c are specific integers. The modulo operation indicates that (1) the
quantity (aRi−1 + b) is divided by c, (2) the remainder is assigned to Ri, and
(3) the desired uniformly distributed random variable ri is obtained as
ri = Ri/d (11.120)
r = rand(n, 1) (11.121)
$Q = F_Q^{-1}(r)$ (11.122)
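These two steps can be sketched in Python as follows; the LCG constants a, b, and c below are illustrative glibc-style values rather than the text's specific choice, and the exponential target distribution is an arbitrary example:

```python
import numpy as np

def lcg_uniform(seed, n, a=1103515245, b=12345, c=2 ** 31):
    """Linear congruential generator: R_i = (a * R_{i-1} + b) mod c,
    scaled by d = c so that r_i = R_i / c is uniform on (0, 1)."""
    r = np.empty(n)
    R = seed
    for i in range(n):
        R = (a * R + b) % c
        r[i] = R / c
    return r

# Inverse-transform step Q = F_Q^{-1}(r): map the uniform variates to a
# desired distribution, here an exponential one with parameter lam.
r = lcg_uniform(seed=987654321, n=10_000)
lam = 2.0
q = -np.log(1.0 - r) / lam      # inverse of F(x) = 1 - exp(-lam * x)
```

In practice the library generator (MATLAB's rand, or NumPy's Generator) replaces the hand-rolled LCG, but the inverse-transform step is the same.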
Example 11.1
$$F_X(x) = \sum_{i=1}^{n} w_i F_{X_i}(x) \qquad (11.125)$$

$$\sum_{i=1}^{n} w_i = 1 \qquad (11.126)$$
Example 11.2
It is seen that

$$x = F_1^{-1}(r_2) = r_2$$

$$x = F_2^{-1}(r_2) = (8r_2/5)^{1/4}$$

Given that r1 = 0.535 < 3/5 = 0.6, F1(x) applies, and r2 is used to generate the
variate by

$$x_1 = F_1^{-1}(r_2) = 0.181$$

Next, let r1 = 0.722 and r2 = 0.361. Since r1 = 0.722 > 3/5, r2 is used to generate
the variate by following F2(x), or explicitly,

$$x_2 = F_2^{-1}(r_2) = (8 \times 0.361/5)^{1/4} = 0.872$$
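The composition procedure of this example can be sketched in Python as follows (the generic sampler is our illustration, not code from the text):

```python
import numpy as np

def mixture_sample(n, weights, inv_cdfs, rng):
    """Composition method: r1 selects a component according to its weight,
    and r2 is then passed through that component's inverse CDF."""
    cum_w = np.cumsum(weights)
    out = np.empty(n)
    for i in range(n):
        r1, r2 = rng.random(2)
        k = int(np.searchsorted(cum_w, r1))   # component whose interval holds r1
        out[i] = inv_cdfs[k](r2)
    return out

# The two components of Example 11.2: F1(x) = x on [0, 1] and
# F2(x) = (5/8) x^4, with weights 3/5 and 2/5.
inv_cdfs = [lambda r: r, lambda r: (8.0 * r / 5.0) ** 0.25]
rng = np.random.default_rng(3)
x = mixture_sample(50_000, [0.6, 0.4], inv_cdfs, rng)
```

With r2 = 0.361 the second inverse CDF reproduces the hand-calculated variate of the example.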
Here, X1i, X2i, …, Xki are k independent and exponentially distributed random
variables with parameter λ.
Example 11.3
x = −ln(r)/λ (11.129)
As an example, suppose the first four uniformly distributed random numbers are
r1 = 0.7778, r2 = 0.3333, r3 = 0.2222, and r4 = 0.4444. The corresponding exponentially
distributed variates for λ = 2 are then 0.1257, 0.5494, 0.7521, and 0.4054.
Thus, the Gamma-distributed variate is

$$g = -\frac{1}{\lambda}\sum_{i=1}^{4}\ln(r_i) = 0.1257 + 0.5494 + 0.7521 + 0.4054 = 1.8326$$
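A Python sketch of this construction, with a statistical check of the resulting Gamma moments:

```python
import numpy as np

def gamma_variate(k, lam, rng):
    """Gamma(k, lam) variate as the sum of k independent exponential
    variates, each from the inverse transform x = -ln(r)/lam (Eq. 11.129)."""
    r = rng.random(k)
    return float(np.sum(-np.log(r) / lam))

rng = np.random.default_rng(4)
g = np.array([gamma_variate(4, 2.0, rng) for _ in range(20_000)])
# For Gamma(k, lam): mean = k/lam = 2.0 and variance = k/lam^2 = 1.0.
```

Averaging many such variates recovers the theoretical mean k/λ and variance k/λ², which is a useful sanity check on the generator.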
11.3.2.3 Random Process
11.3.2.3.1 Stationary Process
Now we use the aforementioned methods to generate a random process, which is
weakly stationary.
First, generate a random variate with the desired distribution. Then, index the
variate with the temporal parameter t. For example,
11.3.2.3.2 Nonstationary Process
Modify the stationary process with certain desired time-varying characteristics. For example,
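A minimal Python sketch of both steps follows; the Gaussian distribution and the envelope t·e^(−at), along with its parameter a, are illustrative assumptions:

```python
import numpy as np

# Step 1: generate variates with the desired distribution and index them
# by the temporal parameter t; the result is a weakly stationary sequence.
rng = np.random.default_rng(5)
dt, n = 0.01, 5_000
t = np.arange(n) * dt
x_stat = rng.normal(0.0, 1.0, n)          # stationary N(0, 1) process

# Step 2: modulate the stationary process to make it nonstationary; the
# deterministic envelope e(t) = t * exp(-a*t) is a common choice for
# earthquake-like ground motions (a = 1.0 is an assumed parameter).
a = 1.0
x_nonstat = t * np.exp(-a * t) * x_stat
```

The modulated record builds up, peaks near t = 1/a, and decays, so its variance is time dependent even though the underlying sequence is stationary.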
11.3.3 Numerical Simulations
Monte Carlo simulations generate samples from which the PDFs can be statistically
determined. These samples are often obtained through numerical simulations.
Although the totality of numerical simulations is beyond the scope of this textbook,
in the following, we briefly discuss several issues related to random vibrations and
Monte Carlo simulations.
11.3.3.1 Basic Issues
In order to successfully render a Monte Carlo simulation, several basic issues must
be addressed: the establishment of proper models, the mathematical tools for
modeling, the criteria used to judge whether the simulation is acceptable, and the
possible error or error bound between exact results and Monte Carlo simulations.
11.3.3.1.1 Models
In many engineering projects, proper modeling is the starting point and an important
issue, particularly for cases related to engineering vibration in which proper models
must be constructed. In what follows, we use vibration systems to illustrate the issue
of modeling.
For a vibration system, the complete model is the physical model; that is, the
M-C-K vibration model is used to treat the Monte Carlo simulation directly as a
forward problem. For such a model, we need to determine the order of the system;
the mass, damping, and stiffness coefficient matrices; and the forcing functions.
We also need to decide whether the model is linear or nonlinear, and we must
further consider the randomness and uncertainty of the model. Both the response
model and the modal model are incomplete models; thus, to establish a proper
model, the physical model, if possible, should be the first choice.
In many cases, the physical model is difficult to obtain. Practically, response models
can be directly measured. With certain signal processing, we can further obtain
frequency spectra, as well as other parameters such as damping ratios, through
analyses in both the time and frequency domains, as mentioned in Chapter 9,
Section 9.3.1.3, on vibration testing. Based on these measured parameters, we can
further simulate the response models. The response model is incomplete because
the physical parameters, for example the corresponding mass, cannot be determined
from this model alone.
A modal model can also be used. Since each mode is actually an SDOF system,
with modal mass, damping, and stiffness, the discussion on physical models also
applies. However, if the modal model is used, the system is assumed to be linear.
11.3.3.1.2 Criteria
In most cases, the reason for using Monte Carlo simulation is that it is difficult to
compute the results directly, such as the computation of multiple integrals. In this
case, how to judge whether the simulated result is valid is an important issue.
$$\frac{\left|\hat{\pi}_i - \frac{1}{m}\sum_{j=1}^{m}\hat{\pi}_j\right|}{\frac{1}{m}\sum_{j=1}^{m}\hat{\pi}_j} \le \varepsilon_{\hat{\pi}} \qquad (11.133)$$
11.3.3.1.2.2 Unbiased Results  Unbiased results mean that the statistical parameter
$\hat{\pi}_i$ must be as close as possible to the corresponding exact parameter π, that is,
$$\frac{\left|\pi - \frac{1}{m}\sum_{j=1}^{m}\hat{\pi}_j\right|}{\pi} \le \varepsilon_{\pi} \qquad (11.134)$$
Generally speaking, we often do not know the exact value of π, so Equation 11.134
often cannot be evaluated directly.
The response can be calculated. Note that, since the generated forcing function is
in fact known, Equation 11.135 can be rewritten as
$$M\ddot{x}_i + C\dot{x}_i + Kx_i = f_i(t) \qquad (11.136)$$
11.3.3.2.2.3 Crossing Rate, Extrema Distribution The study of the rates of level
up-crossing, zero up-crossing, the distribution of extrema, and the possible bounds
of responses, for example, is possible through X(t).
11.3.3.2.2.4 Mean, Variance, and Correlations The first and second moment
estimations can be calculated as previously discussed.
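For instance, given an ensemble of simulated response records (one per Monte Carlo run), the estimates might be computed as in this Python sketch, where the placeholder responses and array shapes are purely illustrative:

```python
import numpy as np

# Ensemble of simulated response records: one row per Monte Carlo run,
# one column per time step (placeholder Gaussian responses for illustration).
rng = np.random.default_rng(6)
n_runs, n_steps = 400, 1_000
X = rng.normal(0.0, 2.0, (n_runs, n_steps))

mean_t = X.mean(axis=0)               # ensemble mean at each time instant
var_t = X.var(axis=0, ddof=1)         # ensemble variance at each time instant
t1, t2 = 100, 150                     # correlation between two time instants
corr = float(np.corrcoef(X[:, t1], X[:, t2])[0, 1])
```

Averaging across the ensemble (down the columns), rather than along time, is what makes these estimates valid even for nonstationary responses.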
11.3.3.3 Random Systems
11.3.3.3.1 Random Distribution of Modal Parameters
Consider a dynamic system with a set of parameters denoted by πi, i = 1, 2, …. For
certain cases, the ith parameter of πi for the system may vary within a relatively
small range, approximately επi%. Note that a small variation of parameter πi does not
necessarily indicate a small variation in the resulting response. In fact, the response
variation depends upon the stability of the system. When the variation of πi is ran-
dom, the random variable Πi can be written as
Πi = πi (1 + επ%) (11.138)
Ωi = ωi (1 + εω%) (11.139)
εω ~ N(μ, σ) (11.140)
Given that the damping ratios are also assigned as random variables, then Zi can
be written as
and
Here, Kij and Cij are the ijth entries of the stiffness and damping matrices, respec-
tively, which are considered to be random. Likewise, kij and cij are the ijth designed
stiffness and damping, respectively. They are considered to have desired values.
Lastly, ε(.) are random variables with proper distributions.
The variation of kij and cij can be relatively large. Mathematically speaking, the
range of the random variables Kij and Cij can be −∞ to ∞, in which case a normal
distribution may be used to model the corresponding variations. In engineering
applications, however, Kij and Cij cannot be smaller than zero, so a lognormal
distribution is the more appropriate model for the corresponding variation
distributions.
Example 11.4
Consider the stiffness of a steel shaft with length ℓ and cross section A. For a
deterministic design, the stiffness is

$$k = EA/\ell \qquad (11.144)$$

Suppose that the Young's modulus E, area A, and length ℓ are all random variables
with normal distributions as follows:
Note for this instance that (·)0 is the mean value of (·). The resulting stiffness is
then also a random variable, but it is no longer normally distributed.
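This effect is easy to verify by Monte Carlo simulation; in the following Python sketch, the mean values and the 5% coefficients of variation are assumptions chosen only to illustrate the non-normality of k:

```python
import numpy as np

# Monte Carlo sampling of k = E*A/l with normally distributed E, A, and l.
# The means and 5% coefficients of variation below are illustrative choices.
rng = np.random.default_rng(7)
n = 100_000
E = rng.normal(200e9, 0.05 * 200e9, n)   # Young's modulus, Pa
A = rng.normal(1e-4, 0.05 * 1e-4, n)     # cross-sectional area, m^2
L = rng.normal(2.0, 0.05 * 2.0, n)       # length, m

k = E * A / L                            # Eq. 11.144, now a random variable
skew = float(np.mean(((k - k.mean()) / k.std()) ** 3))
# A normal variable has zero skewness; k shows a clearly positive skew,
# confirming that it is no longer normally distributed.
```

The positive skewness arises mainly from dividing by the random length, so the departure from normality grows with the coefficients of variation.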
Problems
1. A nonlinear vibration SDOF system is shown in Figure P11.1, with m, c, and
k and the dry friction coefficient μ. Suppose that it is excited by a force f(t) =
sin(ωt).
a. Find the governing equation for the system.
b. Calculate the energy loss and determine the amplitude and phase of the
forced response of the equivalent linear viscous system.
2. An SDOF system is given by
where the mass is 10 kg, k = 40 N/m, $\ddot{x}_g(t) = -A\sin(2.1t)$ m/s², β = 0.3, and
μ = 15 N·(s/m)^β.
[Figure P11.1: SDOF system with mass m, response x(t), applied force f(t), viscous damper c, and a Coulomb dry friction element]
where fR(t) is the nonlinear restoring force. If $\ddot{x}_g(t) = 10\sin(6t)$, then the
restoring force can be plotted as in Figure P11.2, with c = 0, dy = 0.02, q = 7,
fm = 10, and x0 = 0.1.
[Figure P11.2: nonlinear restoring force fR versus displacement x, with force levels 7 and 10, displacement scale 0.1, and yield step dy]
a. Calculate the effective stiffness by using secant stiffness and the energy
approach given by Equation 11.28.
b. Calculate the effective damping ζeff through Timoshenko damping
(Equation 11.34) and alternative damping (Equation 11.46).
c. Use the MATLAB function lsim to compute the relative displacement
through this linearized model with keff and ζeff, with the El Centro
ground acceleration (E-W) as the excitation. The units of mass,
damping, and stiffness are kg, N·s/m, and N/m, respectively. The PGA
of the El Centro earthquake is normalized to 10 m/s². The linearized
models include the following:
Case 1: Let the effective stiffness and the damping ratio calculated
through the approach of using the secant stiffness be k eff = 100 and
ζeff = 0.14, respectively.
Case 2: Let the effective stiffness and the damping ratio calculated
through the energy approach be keff = 34.4 and ζeff = 0.35, respectively.
8.
a. Create a Simulink model for the system shown in Problem 7.
b. Use the El Centro ground acceleration again to compute the absolute
acceleration and relative displacement based on your Simulink model
with PGA = 10 m/s² for the following cases:
Case 1: Let the effective stiffness and the damping ratio calculated
through the approach of using the secant stiffness be keff = 100 and
ζeff = 0.14.
Case 2: Let the effective stiffness and the damping ratio calculated
through the energy approach be keff = 34.4 and ζeff = 0.35.
Case 3: nonlinear response by Simulink.
c. Use the Simulink model and let the PGA be 2.5, 5, 10, 15, 20, 25, and
30 m/s² to study the linearity of the peak responses of both the accelerations
and the displacements.
9. Write a MATLAB code to carry out a Monte Carlo simulation for determining
the upper-left area inside a square with each side = 6 cm, shown in
Figure P11.3, where the curve has the function y = 0.24x².
[Figure P11.3: a 6 cm × 6 cm square; the curve y = 0.24x² rises from the origin and meets the top edge at x = 5 cm]
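One possible approach to this problem, sketched here in Python rather than the requested MATLAB code, is hit-or-miss sampling:

```python
import numpy as np

# Hit-or-miss Monte Carlo: estimate the area above y = 0.24 x^2
# inside the 6 cm x 6 cm square of Problem 9.
rng = np.random.default_rng(8)
n = 200_000
x = 6.0 * rng.random(n)
y = 6.0 * rng.random(n)
hits = y > 0.24 * x ** 2          # points falling in the upper-left region
area = 36.0 * hits.mean()         # square area times the hit fraction
# Exact value for comparison: integral of (6 - 0.24 x^2) from 0 to 5 = 20 cm^2.
```

The estimate converges to the exact 20 cm² at the usual N^(−1/2) rate.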
10.
a. Write a MATLAB program for a random number generator based on
the following Rayleigh distribution:
$$f_H(y) = \frac{y}{\sigma^2}\, e^{-\frac{1}{2}\left(\frac{y}{\sigma}\right)^2}, \qquad 0 < y < 5,\quad \sigma = 1.5\ \mathrm{m/s^2}$$
b. Use Monte Carlo simulation to generate 500 samples of random ground
accelerations

$$x_{ga}(t) = y\, x_{gn}(t)$$
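A Python sketch of part (a) by the inverse-transform method follows; truncating the Rayleigh CDF at y = 5 is our reading of the stated range:

```python
import numpy as np

def rayleigh_variates(n, sigma=1.5, y_max=5.0, rng=None):
    """Inverse-transform generator for the Rayleigh density
    f_H(y) = (y / sigma^2) exp(-y^2 / (2 sigma^2)), truncated to 0 < y < y_max."""
    rng = np.random.default_rng() if rng is None else rng
    F_max = 1.0 - np.exp(-y_max ** 2 / (2.0 * sigma ** 2))  # CDF at y_max
    r = F_max * rng.random(n)               # uniform over (0, F(y_max))
    return sigma * np.sqrt(-2.0 * np.log(1.0 - r))          # y = F^{-1}(r)

rng = np.random.default_rng(9)
y = rayleigh_variates(500, rng=rng)   # 500 amplitude factors for x_ga = y * x_gn
```

Scaling the uniform variates by F(y_max) before inverting keeps every sample inside the stated range without rejection.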
References
Song, J., Chu, Y.-L., Liang, Z. and Lee, G. C. (2007a). “Estimation of peak relative velocity
and peak absolute acceleration of linear SDOF systems,” Earthquake Eng. Eng. Vib.,
6(1): 1–10.
Song, J., Liang, Z., Chu, Y. and Lee, G. C. (2007b). “Peak earthquake responses of structures
under multi-component excitations,” J. Earthquake Eng. Eng. Vib., 6(4): 1–14.
Soong, T. T. (1973). Random Differential Equations in Science and Engineering, Academic
Press, New York.
Soong, T. T. and Grigoriu, M. (1992). Random Vibration of Mechanical and Structural
Systems, Prentice-Hall International Inc., Englewood Cliffs, NJ.
Sornette, D., Magnin, T. and Brechet, Y. (1992). “The physical origin of the Coffin-Manson
law in low-cycle fatigue,” Europhys. Lett., 20: 433.
Stigler, S. M. (1986). The History of Statistics: The Measurement of Uncertainty before 1900,
Harvard University Press, Cambridge, MA.
Stigler, S. M. (1999). Statistics on the Table: The History of Statistical Concepts and Methods,
Harvard University Press, Cambridge, MA.
Tarantola, A. (2005). Inverse Problem Theory, Society for Industrial and Applied Mathematics,
Philadelphia, PA.
Tayfun, M. A. (1981). “Distribution of crest-to-trough wave heights,” J. Waterways Harbors
Div. ASCE, 107: 149–158.
Tong, M., Liang, Z. and Lee, G. C. (1994). “An index of damping non-proportionality for
discrete vibration systems-reply,” J. Sound Vib., 174(1): 37–55.
Turkstra, C. J. and Madsen, H. (1980). “Load combinations in codified structural design,”
ASCE, J. Struct. Eng., 106(12): 2527–2543.
Vanmarcke, E. (1975). “On the distribution of first passage time for normal stationary random
process,” J. Appl. Mech., 42: 215–220.
Vanmarcke, E. (1984). Random Fields, Analysis and Synthesis, MIT Press, Cambridge, MA.
Ventura, C. E. (1985). “Dynamic analysis of nonclassically damped systems,” Ph.D. Thesis,
Rice University, Houston, TX.
Vickery, B. J. and Basu, R. (1983). “Across wind vibration of structures of circular cross sec-
tion. Part I. Development of a mathematical model for two dimensional conditions,”
J. Wind Eng. Ind. Aerodyn., 12: 49–97.
Villaverde, R. (1988). “Rosenblueth’s modal combination rule for systems with nonclassical
damping,” Earthquake Eng. Struct. Dyn., 16: 315–328.
Vose, D. (2008). Risk Analysis: A Quantitative Guide, 3rd Ed., John Wiley & Sons, Chichester,
UK.
Walter, É. and Pronzato, L. (1997). Identification of Parametric Models from Experimental
Data, Springer, Heidelberg.
Warburton, G. B. and Soni, S. R. (1977). “Errors in response calculations of nonclassically
damped structures,” Earthquake Eng. Struct. Dyn., 5: 365–376.
Weaver, Jr., W., Timoshenko, S. P. and Young, D. H. (1990). Vibration Problems in Engineering,
5th Ed., Wiley.
Wen, Y. K. (1989). “Methods of random vibration for inelastic structures,” Appl. Mech. Rev.,
42(2): 39–52.
Wen, Y. K., Hwang, H. and Shinozuka, M. (1994). Development of Reliability-Based Design
Criteria for Buildings Under Seismic Load, NCEER Tech. Report 94-0023, University
at Buffalo.
Whittle, P. (1951). Hypothesis Testing in Time Series Analysis, Almquist and Wicksell,
Uppsala, Sweden.
Wilkinson, J. H. (1965). The Algebraic Eigenvalue Problem, Oxford University Press, UK.
Wirsching, P. H. and Chen, Y. N. (1988). “Consideration of probability based fatigue design
criteria for marine structures,” Marine Struct., 1: 23–45.
Wirsching, P. H. and Light, M. C. (1980). “Fatigue under wide band random stresses,” ASCE
J. Struct. Div., 106: 1593–1607.
Wirsching, P. H., Paez, T. L. and Ortiz, K. (1995). Random Vibration, Theory and Practice,
Dover Publications, Inc.
Yang, J. N. (1974). “Statistics of random loading relevant to fatigue,” J. Eng. Mech. Div.
ASCE, 100(EM3): 469–475.