
RANDOM VIBRATION
Mechanical, Structural, and Earthquake Engineering Applications

Focuses on the Basic Methodologies Needed to Handle Random Processes

After determining that most textbooks on random vibrations are mathematically
intensive and often too difficult for students to fully digest in a single course, the
authors of Random Vibration: Mechanical, Structural, and Earthquake Engineering
Applications decided to revise the current standard. This text incorporates more than
20 years of research on formulating bridge design limit states. Utilizing the authors’
experience in formulating real-world failure probability-based engineering design
criteria and their discovery of relevant examples using the basic ideas and principles
of random processes, the text effectively helps students readily grasp the essential
concepts. It eliminates the rigorous math-intensive logic training applied in the past,
greatly reduces the random process aspect, and works to change a knowledge-based
course approach into a methodology-based course approach. This approach underlies
the book throughout, and students are taught the fundamental methodologies of
accounting for random data and random processes as well as how to apply them in
engineering practice.

Gain a Deeper Understanding of the Randomness in Sequences


Presented in four sections, the material discusses the scope of random processes,
provides an overview of random processes, highlights random vibrations, and details the
application of the methodology. Relevant engineering examples, included throughout
the text, equip readers with the ability to make measurements and observations,
understand basic steps, validate the accuracy of dynamic analyses, and master
and apply newly developed knowledge in random vibrations and corresponding
system reliabilities.

Random Vibration: Mechanical, Structural, and Earthquake Engineering
Applications effectively integrates the basic ideas, concepts, principles, and theories
of random processes. This enables students to understand the basic methodology and
establish their own logic to systematically handle the issues facing the theory and
application of random vibrations.
RANDOM
VIBRATION
Mechanical, Structural, and
Earthquake Engineering Applications

Advances in Earthquake Engineering
Series Editor: Franklin Y. Cheng

Random Vibration: Mechanical, Structural, and Earthquake Engineering Applications


Zach Liang and George C. Lee

Structural Damping: Applications in Seismic Response Modification


Zach Liang, George C. Lee, Gary F. Dargush, and Jianwei Song

Seismic Design Aids for Nonlinear Pushover Analysis of Reinforced Concrete


and Steel Bridges
Jeffrey Ger and Franklin Y. Cheng

Seismic Design Aids for Nonlinear Analysis of Reinforced Concrete Structures


Srinivasan Chandrasekaran, Luciano Nunziante, Giorgio Serino, and Federico Carannante

RANDOM
VIBRATION
Mechanical, Structural, and
Earthquake Engineering Applications

ZACH LIANG
GEORGE C. LEE

Boca Raton London New York

CRC Press is an imprint of the


Taylor & Francis Group, an informa business

MATLAB® is a trademark of The MathWorks, Inc. and is used with permission. The MathWorks does not
warrant the accuracy of the text or exercises in this book. This book’s use or discussion of MATLAB® soft-
ware or related products does not constitute endorsement or sponsorship by The MathWorks of a particular
pedagogical approach or particular use of the MATLAB® software.

CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742

© 2015 by Taylor & Francis Group, LLC


CRC Press is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works


Version Date: 20141216

International Standard Book Number-13: 978-1-4987-0237-9 (eBook - PDF)

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts
have been made to publish reliable data and information, but the author and publisher cannot assume
responsibility for the validity of all materials or the consequences of their use. The authors and publishers
have attempted to trace the copyright holders of all material reproduced in this publication and apologize to
copyright holders if permission to publish in this form has not been obtained. If any copyright material has
not been acknowledged please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmit-
ted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented,
including photocopying, microfilming, and recording, or in any information storage or retrieval system,
without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.
com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood
Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and
registration for a variety of users. For organizations that have been granted a photocopy license by the CCC,
a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used
only for identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com
and the CRC Press Web site at
http://www.crcpress.com

Contents
Series Preface...........................................................................................................xix
Preface.....................................................................................................................xxi
Acknowledgments...................................................................................................xxv
Series Editor..........................................................................................................xxvii

Section I  Basic Probability Theory


Chapter 1 Introduction...........................................................................................3
1.1 Background of Random Vibration.............................................3
1.1.1 General Description......................................................3
1.1.2 General Theory of Vibration.........................................4
1.1.2.1 Concept of Vibration.....................................4
1.1.3 Arrangement of Chapters..............................................9
1.2 Fundamental Concept of Probability Theory........................... 10
1.2.1 Set Theory................................................................... 10
1.2.1.1 Basic Relationship (Operation).................... 11
1.2.2 Axioms of Probability................................................. 15
1.2.2.1 Random Tests and Classic Probability........ 15
1.2.2.2 Axiom of Probability................................... 17
1.2.3 Conditional Probability and Independence................. 18
1.2.3.1 Conditional Probability............................... 18
1.2.3.2 Multiplicative Rules..................................... 19
1.2.3.3 Independence...............................20
1.2.3.4 Total Probability and Bayes’ Formula......... 21
1.2.3.5 Bayes’ Formula............................................ 23
1.2.4 Engineering Examples................................................ 23
1.2.4.1 Additive Rules............................................. 23
1.2.4.2 Multiplication Rules....................................25
1.2.4.3 Independent Series.......................................25
1.2.4.4 Return Period of Extreme Load...................26
1.3 Random Variables.................................................................... 29
1.3.1 Discrete Random Variables and PMF......................... 29
1.3.1.1 Single Random Variables............................ 29
1.3.1.2 “Two-Dimensional” Approach.................... 30
1.3.1.3 Probability Mass Function........................... 30
1.3.1.4 Bernoulli Distribution (0–1 Distribution).... 30
1.3.1.5 Binomial Distribution.................................. 31
1.3.1.6 Poisson Distribution..................................... 32
1.3.1.7 Poisson Approximation................................ 33
1.3.1.8 Summary of PMF PN(n)............................... 35
1.3.2 Continuous Random Variables and PDF..................... 35
1.3.2.1 Continuous Random Variables.................... 35
1.3.2.2 Probability Density Function....................... 36
1.3.2.3 Uniform Distribution................................... 38
1.3.2.4 Exponential Distribution.............................. 39
1.3.2.5 Rayleigh Distribution................................... 42
1.3.3 Cumulative Distribution Functions............................. 42
1.3.3.1 Probability of Cumulative Event................. 42
1.3.3.2 Cumulative Distribution Function (CDF).... 43
1.3.3.3 Certain Applications of PDF and CDF........44
1.3.4 Central Tendency and Dispersion............................... 45
1.3.4.1 Statistical Expectations and Moments......... 45
1.3.4.2 Central Tendency, Mean Value.................... 45
1.3.4.3 Variation, Variance, Standard
Deviation, and Coefficient of Variation.......46
1.3.4.4 Expected Values.......................................... 47
1.3.4.5 Linearity of Expected Values...................... 47
1.3.5 Normal Random Distributions.................................... 48
1.3.5.1 Standardized Variables Z............................. 48
1.3.5.2 Gaussian (Normal) Random Variables........ 48
1.3.5.3 PDF of Normal Distribution........................ 48
1.3.5.4 Cumulative Distribution Function of
Normal Distribution..................................... 49
1.3.6 Engineering Applications............................................ 51
1.3.6.1 Probability-Based Design............................ 51
1.3.6.2 Lognormal Distributions............................. 53
1.3.6.3 Further Discussion of Probability-
Based Design............................................... 55

Chapter 2 Functions of Random Variables.......................................................... 59


2.1 Systems and Functions............................................................. 59
2.1.1 Dynamic Systems........................................................ 59
2.1.1.1 Order of Systems......................................... 59
2.1.1.2 Simple Systems............................................60
2.1.2 Jointly Distributed Variables....................................... 61
2.1.2.1 Joint and Marginal Distributions of
Discrete Variables........................................ 61
2.1.2.2 Joint and Marginal Distributions of
Continuous Variables................................... 63
2.1.3 Conditional Distribution and Independence...............66
2.1.3.1 Discrete Variables........................................66
2.1.3.2 Continuous Variables................................... 68
2.1.3.3 Variable Independence................................ 68
2.1.4 Expected Value, Variance, Covariance, and
Correlation................................................... 70
2.1.4.1 Expected Value of g(X,Y)............................ 70
2.1.4.2 Conditional Expected Value........................ 71
2.1.4.3 Variance....................................................... 71
2.1.4.4 Covariance of X,Y........................................ 71
2.1.4.5 Correlation Coefficient................................ 71
2.1.5 Linear Independence................................................... 72
2.1.5.1 Relationship between Random
Variables X and Y......................................... 72
2.1.5.2 Expected Value of Sum of Random
Variables X and Y......................................... 73
2.1.6 CDF and PDFs of Random Variables.......................... 73
2.1.6.1 Discrete Variables........................................ 74
2.1.6.2 Continuous Variables................................... 76
2.2 Sums of Random Variables......................................................80
2.2.1 Discrete Variables.......................................................80
2.2.2 Continuous Variables.................................................. 81
2.2.2.1 Sums of Normally Distributed PDF............ 82
2.2.2.2 Sums of n Normally Distributed Variables... 83
2.3 Other Functions of Random Variables.....................................84
2.3.1 Distributions of Multiplication of X and Y..................84
2.3.2 Distributions of Sample Variance, Chi-Square (χ²).... 85
2.3.2.1 Sample Variance.......................................... 85
2.3.2.2 Chi-Square Distribution.............................. 86
2.3.2.3 CDF of Chi-Square, n = 1............................ 86
2.3.2.4 PDF of Chi-Square, n = 1............................ 86
2.3.2.5 Mean............................................................ 86
2.3.2.6 Variance....................................................... 86
2.3.2.7 PDF of Chi-Square, n > 1............................ 86
2.3.2.8 Reproductive................................................ 87
2.3.2.9 Approximation............................................. 87
2.3.2.10 Mean of Y.................................................... 87
2.3.2.11 Variance of Y............................................... 87
2.3.2.12 Square Root of Chi-Square (χ²)................... 88
2.3.2.13 Gamma Distribution and Chi-Square
Distribution.................................................. 88
2.3.2.14 Relation between Chi-Square χₙ² and
Sample Variance Sₓ²..................................... 89
2.3.3 Distributions of Ratios of Random Variables.............90
2.3.3.1 Distribution of Variable Ratios....................90
2.3.3.2 Student’s Distribution..................................90
2.3.3.3 F Distribution.............................................. 91
2.4 Design Considerations..............................................................92
2.4.1 Further Discussion of Probability-Based Design........92
2.4.2 Combination of Loads................................. 95
2.5 Central Limit Theorems and Applications...............................97
2.5.1 Central Limit Theorems.............................................. 98
2.5.1.1 Lyapunov Central Limit Theorem .............. 98
2.5.1.2 Lindeberg–Levy Central Limit Theorem....99
2.5.1.3 De Moivre–Laplace Central Limit
Theorem..................................................... 100
2.5.2 Distribution of Product of Positive Random
Variables.................................................................... 102
2.5.3 Distribution of Extreme Values................................. 103
2.5.3.1 CDF and PDF of Distribution of
Extreme Values.......................................... 103
2.5.4 Special Distributions................................................. 104
2.5.4.1 CDF and PDF of Extreme Value of
Rayleigh Distributions............................... 104
2.5.4.2 Extreme Value Type I Distribution............ 104
2.5.4.3 Distribution of Minimum Values............... 107
2.5.4.4 Extreme Value Type II Distribution.......... 108
2.5.4.5 Extreme Value Type III Distribution......... 109

Section II  Random Process


Chapter 3 Random Processes in the Time Domain........................................... 115
3.1 Definitions and Basic Concepts.............................................. 115
3.1.1 State Spaces and Index Sets...................................... 115
3.1.1.1 Definition of Random Process................... 115
3.1.1.2 Classification of Random Process............. 116
3.1.1.3 Distribution Function of Random Process.... 117
3.1.1.4 Independent Random Process................... 121
3.1.2 Ensembles and Ensemble Averages.......................... 123
3.1.2.1 Concept of Ensembles............................... 123
3.1.2.2 Statistical Expectations and Moments....... 124
3.1.3 Stationary Process and Ergodic Process................... 129
3.1.3.1 Stationary Process..................................... 129
3.1.3.2 Ergodic Process......................................... 133
3.1.4 Examples of Random Process................................... 134
3.1.4.1 Gaussian Process....................................... 135
3.1.4.2 Poisson Process.......................................... 136
3.1.4.3 Harmonic Process...................................... 142
3.2 Correlation Analysis............................................................... 144
3.2.1 Cross-Correlation...................................................... 144
3.2.1.1 Cross-Correlation Function....................... 145
3.2.1.2 Cross-Covariance Function....................... 146
3.2.2 Autocorrelation.......................................................... 147
3.2.2.1 Physical Meaning of Correlation............... 147
3.2.2.2 Characteristics of Autocorrelation
Function..................................................... 148
3.2.2.3 Examples of Autocorrelation Function...... 152
3.2.3 Derivatives of Stationary Process............................. 154
3.2.3.1 Stochastic Convergence............................. 154
3.2.3.2 Mean-Square Limit.................................... 155
3.2.3.3 Mean-Square Continuity........................... 157
3.2.3.4 Mean-Square Derivatives of Random
Process....................................................... 159
3.2.3.5 Derivatives of Autocorrelation Functions..... 159
3.2.3.6 Derivatives of Stationary Process.............. 161
3.2.3.7 Derivatives of Gaussian Process................ 162

Chapter 4 Random Processes in the Frequency Domain................................... 165


4.1 Spectral Density Function...................................................... 165
4.1.1 Definitions of Spectral Density Functions................ 165
4.1.1.1 Mean-Square Integrability of Random
Process....................................................... 165
4.1.1.2 Stationary Process: A Review................... 169
4.1.1.3 Autospectral Density Functions................. 170
4.1.1.4 Spectral Distribution Function Ψ(ω).......... 175
4.1.1.5 Properties of Auto-PSD Functions............ 176
4.1.2 Relationship with Fourier Transform........................ 179
4.1.2.1 Fourier Transformation of Random Process.... 179
4.1.2.2 Energy Equation........................................ 179
4.1.2.3 Power Density Functions........................... 182
4.1.3 White Noise and Band-Pass Filtered Spectra........... 184
4.1.3.1 White Noise............................................... 184
4.1.3.2 Low-Pass Noise......................................... 186
4.1.3.3 Band-Pass Noise........................................ 187
4.1.3.4 Narrow-Band Noise................................... 188
4.2 Spectral Analysis.................................................................... 188
4.2.1 Definition................................................................... 188
4.2.1.1 Cross-Power Spectral Density Function.... 188
4.2.1.2 Estimation of Cross-PSD Function............ 190
4.2.2 Transfer Function...................................................... 191
4.2.2.1 Random Process through Linear
Systems.................................................... 191
4.2.2.2 Estimation of Transfer Functions.............. 197
4.2.2.3 Stationary Input......................................... 199
4.2.3 Coherence Analysis................................................... 199
4.2.3.1 Coherence Function...................................200
4.2.3.2 Attenuation and Delay............................... 201
4.2.3.3 Sum of Two Random Processes................. 201
4.2.4 Derivatives of Stationary Process.............................202
4.3 Practical Issues of PSD Functions.......................... 203
4.3.1 One-Sided PSD......................................................... 203
4.3.1.1 Angular Frequency versus Frequency.......203
4.3.1.2 Two-Sided Spectrum versus Single-
Sided Spectrum..........................................204
4.3.1.3 Discrete Fourier Transform.......................204
4.3.2 Signal-to-Noise Ratios..............................................205
4.3.2.1 Definition...................................................205
4.3.2.2 Engineering Significances.........................207
4.4 Spectral Presentation of Random Process..............................208
4.4.1 General Random Process..........................................208
4.4.2 Stationary Process..................................................... 210
4.4.2.1 Dynamic Process in the Frequency
Domain...................................................... 210
4.4.2.2 Relationship between the Time and the
Frequency Domains................................... 210
4.4.2.3 Spectral Distribution and Representation..... 213
4.4.2.4 Analogy of Spectral Distribution
Function to CDF........................................ 216
4.4.2.5 Finite Temporal and Spectral Domains..... 217

Chapter 5 Statistical Properties of Random Process......................................... 221


5.1 Level Crossings...................................................................... 222
5.1.1 Background............................................................... 222
5.1.1.1 Number of Level Crossings....................... 222
5.1.1.2 Correlations between Level Crossings......224
5.1.2 Derivation of Expected Rate.....................................224
5.1.2.1 Stationary Crossing...................................224
5.1.2.2 Up-Crossing............................................... 225
5.1.2.3 Limiting Behavior...................................... 225
5.1.3 Specializations.......................................................... 227
5.1.3.1 Level Up-Crossing, Gaussian Process....... 227
5.1.3.2 Zero Up-Crossing...................................... 228
5.1.3.3 Peak Frequency.......................................... 229
5.1.3.4 Bandwidth and Irregularity....................... 230
5.1.4 Random Decrement Methods.................................... 232
5.1.4.1 Random Decrement (Level Up-Crossing).... 232
5.1.4.2 Lag Superposition (Zero Up-Crossing)..... 235
5.1.4.3 Lag Superposition (Peak Reaching).......... 237
5.1.5 Level Crossing in Clusters......................................... 238
5.1.5.1 Rice’s Narrow-Band Envelopes................. 238
5.2 Extrema.................................................................................. 243
5.2.1 Distribution of Peak Values....................................... 243
5.2.1.1 Simplified Approach.................................. 243
5.2.1.2 General Approach...................................... 245
5.2.2 Engineering Approximations.................... 247
5.2.2.1 Background................................................ 247
5.2.2.2 Probability Distributions of Height,
Peak, and Valley........................................ 249
5.3 Accumulative Damages.......................................................... 252
5.3.1 Linear Damage Rule: The Deterministic
Approach............................................................. 253
5.3.1.1 S–N Curves................................................ 253
5.3.1.2 Miner’s Rule.............................................. 253
5.3.2 Markov Process......................................................... 254
5.3.2.1 General Concept........................................ 255
5.3.2.2 Discrete Markov Chain.............................. 255
5.3.3 Fatigue....................................................................... 263
5.3.3.1 High-Cycle Fatigue.................................... 263
5.3.3.2 Low-Cycle Fatigue.....................................266
5.3.4 Cascading Effect....................................................... 270
5.3.4.1 General Background.................................. 270
5.3.4.2 Representation of Random Process........... 271
5.3.4.3 Occurrence Instance of Maximum
Load...................................................... 273

Section III  Vibrations


Chapter 6 Single-Degree-of-Freedom Vibration Systems................................. 279
6.1 Concept of Vibration..............................................................280
6.1.1 Basic Parameters.......................................................280
6.1.1.1 Undamped Vibration Systems................... 281
6.1.1.2 Damped SDOF System.............................. 291
6.1.2 Free Decay Response................................................ 297
6.1.2.1 Amplitude d and Phase ϕ.......................... 297
6.2 Periodically Forced Vibration................................................ 301
6.2.1 Harmonic Excitation................................................. 301
6.2.1.1 Equation of Motion.................................... 301
6.2.1.2 Harmonically Forced Response.................302
6.2.1.3 Dynamic Magnification.............................307
6.2.1.4 Transient Response under Zero Initial
Conditions..................................................308
6.2.2 Base Excitation and Force Transmissibility............. 313
6.2.2.1 Model of Base Excitation........................... 313
6.2.2.2 Force Transmissibility............................... 317
6.2.3 Periodic Excitations................................................... 319
6.2.3.1 General Response...................................... 319
6.2.3.2 The nth Steady-State Response................. 320
6.2.3.3 Transient Response.................................... 321
6.3 Response of SDOF System to Arbitrary Forces..................... 321
6.3.1 Impulse Responses.................................................... 321
6.3.1.1 Unit Impulse Response Function............... 322
6.3.2 Arbitrary Loading and Convolution.......................... 323
6.3.2.1 Convolution................................................ 323
6.3.2.2 Transient Response under Harmonic
Excitation f0 sin(ωt).................................... 325
6.3.3 Impulse Response Function and Transfer Function.... 327
6.3.4 Frequency Response and Transfer Functions............ 329
6.3.5 Borel’s Theorem and Its Applications....................... 330
6.3.5.1 Borel’s Theorem......................................... 330

Chapter 7 Response of SDOF Linear Systems to Random Excitations............. 335


7.1 Stationary Excitations............................................................. 335
7.1.1 Model of SDOF System............................................ 335
7.1.1.1 Equation of Motion.................................... 335
7.1.1.2 Zero Initial Conditions.............................. 335
7.1.1.3 Solution in Terms of Convolution.............. 336
7.1.1.4 Nature of Forcing Function....................... 336
7.1.1.5 Response.................................................... 336
7.1.2 Mean of Response Process........................................ 336
7.1.3 Autocorrelation of Response Process........................ 337
7.1.3.1 Autocorrelation.......................................... 337
7.1.3.2 Mean Square.............................................. 338
7.1.4 Spectral Density of Response Process...................... 341
7.1.4.1 Auto-Power Spectral Density Function..... 341
7.1.4.2 Variance..................................................... 342
7.1.5 Distributions of Response Process............................ 342
7.2 White Noise Process............................................................... 342
7.2.1 Definition................................................................... 342
7.2.2 Response to White Noise.......................................... 343
7.2.2.1 Auto-PSD Function.................................... 343
7.2.2.2 Variance..................................................... 343
7.2.3 White Noise Approximation..................................... 345
7.3 Engineering Examples............................................................346
7.3.1 Comparison of Excitations........................................346
7.3.1.1 Harmonic Excitation..................................346
7.3.1.2 Impulse Excitation.....................................348
7.3.1.3 Random Excitation.................................... 351
7.3.1.4 Other Excitations....................................... 353
7.3.2 Response Spectra...................................................... 355
7.3.2.1 Response Spectrum................................... 355
7.3.2.2 Design Spectra........................................... 356
7.3.3 Criteria of Design Values.......................................... 357
7.3.3.1 Pseudo Spectrum....................................... 357
7.3.3.2 Correlation of Acceleration and
Displacement............................................. 358
7.4 Coherence Analyses............................................................... 359
7.4.1 Estimation of Transfer Function................................ 359
7.4.2 Coherence Function................................................... 363
7.4.3 Improvement of Coherence Functions...................... 365
7.5 Time Series Analysis.............................................................. 365
7.5.1 Time Series................................................................ 366
7.5.1.1 General Description................................... 366
7.5.1.2 Useful Models of Time Series................... 366
7.5.2 Characters of ARMA Models................................... 367
7.5.2.1 Moving-Average Process MA(q)............... 367
7.5.2.2 Autoregressive Process AR(p)................... 370
7.5.2.3 ARMA(p, q)............................................... 372
7.5.3 Analyses of Time Series in the Frequency Domain..... 376
7.5.3.1 Z-Transform............................................... 376
7.5.3.2 Sampling of Signals................................... 377
7.5.3.3 Transfer Function of Discrete Time
System........................................................ 378
7.5.3.4 PSD Functions........................................... 379
7.5.4 Time Series of SDOF Systems.................................. 380
7.5.4.1 Difference Equations................................. 380
7.5.4.2 ARMA Models.......................................... 382
7.5.4.3 Transfer Functions..................................... 383
7.5.4.4 Stability of Systems................................... 385

Chapter 8 Random Vibration of MDOF Linear Systems.................................. 391


8.1 Modeling................................................................................. 391
8.1.1 Background............................................................... 391
8.1.1.1 Basic Assumptions..................................... 391
8.1.1.2 Fundamental Approaches.......................... 392
8.1.2 Equation of Motion................................................... 392
8.1.2.1 Physical Model........................................... 392
8.1.2.2 Stiffness Matrix......................................... 395
8.1.2.3 Mass and Damping Matrices..................... 396
8.1.3 Impulse Response and Transfer Functions................ 398
8.1.3.1 Scalar Impulse Response Function and
Transfer Function....................................... 398
8.1.3.2 Impulse Response Matrix and Transfer
Function Matrix......................................... 399
8.1.3.3 Construction of Transfer Functions........... 399
8.1.3.4 Principal Axes of Structures......................400
8.2 Direct Model for Determining Responses.............................400
8.2.1 Expression of Response.............................................400
8.2.2 Mean Values.............................................................. 401
8.2.2.1 Single Coordinate...................................... 401
8.2.2.2 Multiple Coordinates.................................402
8.2.3 Correlation Functions................................................404
8.2.4 Spectral Density Function of Response....................405
8.2.4.1 Fourier Transforms of f(t) and x(t).............405
8.2.4.2 Power Spectral Density Function..............405
8.2.4.3 Mean Square Response..............................407
8.2.4.4 Variance.....................................................407
8.2.4.5 Covariance.................................................407
8.2.5 Single Response Variable: Spectral Cases................407
8.2.5.1 Single Input................................................407
8.2.5.2 Uncorrelated Input.....................................408
8.3 Normal Mode Method............................................................408
8.3.1 Proportional Damping...............................................408
8.3.1.1 Essence of Caughey Criterion...................408
8.3.1.2 Monic System............................................409
8.3.2 Eigen-Problems......................................................... 410
8.3.2.1 Undamped System..................................... 410
8.3.2.2 Underdamped Systems.............................. 411
8.3.3 Orthogonal Conditions.............................................. 411
8.3.3.1 Weighted Orthogonality............................ 412
8.3.3.2 Modal Analysis.......................................... 413
8.3.4 Modal Superposition................................................. 416
8.3.5 Forced Response and Modal Truncation................... 418
8.3.5.1 Forced Response........................................ 418
8.3.5.2 Rayleigh Quotient...................................... 419
8.3.5.3 Ground Excitation and Modal
Participation Factor.................................... 420
8.3.5.4 Modal Superposition, Forced Vibration.... 420
8.3.5.5 Modal Truncation...................................... 422
8.3.6 Response to Random Excitations.............................. 423
8.3.6.1 Modal and Physical Response................... 423
8.3.6.2 Mean.......................................................... 424
8.3.6.3 Covariance................................................. 424
8.3.6.4 Probability Density Function for xᵢ(t)........ 426
8.4 Nonproportionally Damped Systems, Complex Modes......... 428
8.4.1 Nonproportional Damping........................................ 428
8.4.1.1 Mathematical Background......................... 428
8.4.1.2 The Reality of Engineering....................... 429
8.4.2 State Variable and State Equation............................. 429
8.4.3 Eigen-Problem of Nonproportionally Damped
System....................................................................... 430
8.4.3.1 State Matrix and Eigen-Decomposition.... 430
8.4.3.2 Eigenvectors and Mode Shapes................. 432
8.4.3.3 Modal Energy Transfer Ratio.................... 435
8.4.4 Response to Random Excitations.............................. 437
8.4.4.1 Modal and Physical Response................... 438
8.4.4.2 Mean.......................................................... 439
8.4.4.3 Covariance.................................................440
8.4.4.4 Brief Summary..........................................446
8.5 Modal Combination................................................................446
8.5.1 Real Valued Mode Shape..........................................446
8.5.1.1 Approximation of Real Valued Mode
Shape..........................................................446
8.5.1.2 Linear Dependency and Representation.... 447
8.5.2 Numerical Characteristics.........................................449
8.5.2.1 Variance.....................................................449
8.5.2.2 Root Mean Square.....................................449
8.5.3 Combined Quadratic Combination........................... 451

Section IV  Applications and Further Discussions


Chapter 9 Inverse Problems............................................................................... 459
9.1 Introduction to Inverse Problems........................................... 459
9.1.1 Concept of Inverse Engineering................................ 459
9.1.1.1 Key Issues.................................................. 459
9.1.1.2 Error...........................................................460
9.1.1.3 Applications............................................... 461
9.1.2 Issues of Inverse Problems........................................ 461
9.1.2.1 Modeling.................................................... 461
9.1.2.2 Identification, Linear System..................... 463
9.1.2.3 Identification, General System................... 463
9.1.2.4 Simulations................................................464
9.1.2.5 Practical Considerations............................464
9.1.3 The First Inverse Problem of Dynamic Systems.......465
9.1.3.1 General Description...................................465
9.1.3.2 Impulse Response......................................466
9.1.3.3 Sinusoidal Response..................................466
9.1.3.4 Random Response.....................................466
9.1.3.5 Modal Model.............................................468
9.1.4 The Second Inverse Problem of Dynamic Systems.... 471
9.1.4.1 General Background.................................. 471
9.1.4.2 White Noise............................................... 472
9.1.4.3 Practical Issues.......................................... 472
9.2 System Parameter Identification............................................. 472
9.2.1 Parameter Estimation, Random Set.......................... 473
9.2.1.1 Maximum Likelihood................................ 473
9.2.1.2 Bias and Consistency................................. 479
9.2.2 Confidence Intervals................................. 481
9.2.2.1 Estimation and Sampling Distributions..... 481
9.2.3 Parameter Estimation, Random Process................... 482
9.2.3.1 General Estimation.................................... 482
9.2.3.2 Stationary and Ergodic Process................. 487
9.2.3.3 Nonstationary Process............................... 489
9.2.4 Least Squares Approximation and Curve Fitting..... 492
9.2.4.1 Concept of Least Squares.......................... 492
9.2.4.2 Curve Fitting.............................................. 493
9.2.4.3 Realization of Least Squares Method........ 494
9.3 Vibration Testing.................................................................... 495
9.3.1 Test Setup.................................................................. 495
9.3.1.1 Mathematical Model.................................. 495
9.3.1.2 Numerical Model....................................... 496
9.3.1.3 Experimental Model.................................. 496
9.3.2 Equipment of Actuation and Measurement............... 497
9.3.2.1 Actuation.................................................... 497
9.3.2.2 Measurement.............................................. 501
9.3.3 Signal and Signal Processing.................................... 505
9.3.3.1 Data-Acquisition System........................... 505
9.3.3.2 Signal Processing and Window Functions...507
9.3.4 Nyquist Circle............................................................ 511
9.3.4.1 Circle and Nyquist Plot.............................. 511
9.3.4.2 Circle Fit.................................................... 513
9.3.4.3 Natural Frequency and Damping Ratio..... 515

Chapter 10 Failures of Systems........................................................................... 519


10.1 3σ Criterion............................................................................ 519
10.1.1 Basic Design Criteria................................................ 519
10.1.2 3σ Criterion............................................................... 520
10.1.3 General Statement..................................................... 520
10.1.3.1 General Relationship between S and R...... 520
10.1.3.2 System Failure, Further Discussion........... 522
10.2 First Passage Failure............................................................... 522
10.2.1 Introduction............................................................... 523
10.2.2 Basic Formulation..................................................... 523
10.2.2.1 General Formulation.................................. 523
10.2.2.2 Special Cases............................................. 524
10.2.3 Largest among Independent Peaks............................ 525
10.2.3.1 Exact Distribution...................................... 525
10.2.3.2 Extreme Value Distribution....................... 527
10.2.3.3 Design Value Based on Return Period...... 529
10.3 Fatigue.................................................................................... 529
10.3.1 Physical Process of Fatigue....................................... 529
10.3.2 Strength Models........................................ 530
10.3.2.1 High-Cycle Fatigue.................................... 530
10.3.2.2 Miner’s Rule, More Detailed Discussion.... 531
10.3.3 Fatigue Damages....................................................... 532
10.3.3.1 Narrowband Random Stress...................... 532
10.3.3.2 Wideband Random Stress.......................... 538
10.3.4 Damages due to Type D Low Cycle.......................... 542
10.3.4.1 Fatigue Ductility Coefficient..................... 543
10.3.4.2 Variation of Stiffness................................. 543
10.4 Considerations on Reliability Design..................................... 545
10.4.1 Further Discussion of Probability-Based Design...... 545
10.4.1.1 Random Variable vs. Random Process...... 545
10.4.1.2 Necessity of Distinguishing Time-
Invariant and Time-Variable Loads...........546
10.4.1.3 Time-Variable Load at a Given Time
Spot............................................................ 547
10.4.1.4 Combination of Time-Variable Loads
in a Given Time Period.............................. 547
10.4.1.5 Additivity of Distribution Functions......... 548
10.4.2 Failure Probability under MH Load.......................... 549
10.4.2.1 Failure Probability Computation............... 549
10.4.2.2 Time-Invariant and -Variable Loads.......... 549
10.4.2.3 Principles of Determining Load and
Load Combination..................................... 549
10.4.2.4 Total and Partial Failure Probabilities....... 550
10.4.2.5 Independent Events.................................... 550
10.4.2.6 Mutually Exclusive Failures,
the Uniqueness Probabilities..................... 552
10.4.3 General Formulations................................................ 556
10.4.3.1 Total Failure Probability............................ 556
10.4.3.2 Occurrence of Loads in a Given Time
Duration..................................................... 556
10.4.3.3 Brief Summary.......................................... 556
10.4.4 Probability of Conditions.......................................... 557
10.4.4.1 Condition for Occurrence of Partial
Failure Probabilities................................... 558
10.4.4.2 Event of Single Type of Loads................... 558
10.4.5 Brief Summary.......................................................... 571

Chapter 11 Nonlinear Vibrations and Statistical Linearization.......................... 575


11.1 Nonlinear Systems.................................................................. 575
11.1.1 Examples of Nonlinear Systems............................... 575
11.1.1.1 Nonlinear System...................................... 576
11.1.1.2 Memoryless Nonlinear System.................. 578
11.1.2 General Nonlinear System, Volterra Model.............. 579
11.1.3 Structure Nonlinearity.............................................. 579
11.1.3.1 Deterministic Nonlinearity........................ 579
11.1.3.2 Random Nonlinearity................................ 585
11.2 Nonlinear Random Vibrations............................................... 594
11.2.1 General Concept of Nonlinear Vibration.................. 594
11.2.1.1 The Phase Plane......................................... 595
11.2.1.2 Example of Nonlinear Vibration with
Closed-Form Solution................................ 596
11.2.1.3 System with Nonlinear Damping Only..... 601
11.2.1.4 System with Nonlinear Spring................... 601
11.2.2 Markov Vector...........................................................602
11.2.2.1 Itō Diffusion and Kolmogorov Equations....602
11.2.2.2 Solutions of FPK Equation........................603
11.2.3 Alternative Approaches.............................................605
11.2.3.1 Linearization..............................................605
11.2.3.2 Perturbation...............................................606
11.2.3.3 Special Nonlinearization...........................606
11.2.3.4 Statistical Averaging..................................606
11.2.3.5 Numerical Simulation................................606
11.3 Monte Carlo Simulations........................................................607
11.3.1 Basics of Monte Carlo Method..................................607
11.3.1.1 Applications...............................................609
11.3.2 Monte Carlo and Random Numbers......................... 613
11.3.2.1 Generation of Random Numbers............... 613
11.3.2.2 Transformation of Random Numbers........ 614
11.3.2.3 Random Process........................................ 617
11.3.3 Numerical Simulations.............................................. 617
11.3.3.1 Basic Issues................................................ 617
11.3.3.2 Deterministic Systems with Random
Inputs......................................................... 619
11.3.3.3 Random Systems....................................... 619
References.............................................................................................................. 625
Index....................................................................................................................... 631
Series Preface
The editor takes pride in presenting another well-developed manuscript in the series.
This excellent book is the result of the authors’ more than two decades of extensive
research and teaching on the subject of random vibrations related to earthquake
structural response and multiple-hazard mitigation in structural engineering. Because
extreme natural hazards occur rarely, the associated solutions must be based on
probability criteria within the framework of random processes. Current random
vibration books, however, focus on conventional engineering problems, so the authors
coupled traditional solution techniques with their research and teaching experience
to shape this 11-chapter textbook. Their intent is to assist the reader in applying
traditional mathematical logic and then solving more complex problems
comprehensively and effectively.
In earthquake engineering applications, the authors include some difficult and
currently topical structural problems, such as load-and-resistance-factor design
under multiple-hazard load effects and nonlinear vibration with random excitation.
These advanced topics are valuable and should encourage researchers and
practitioners in the constructed-facility community to pursue them further.
The book is aimed at a one-semester graduate class; for the reader to clearly
grasp the concepts and mathematical formulations, the authors developed extensive
homework problems for individual chapters, accompanied by detailed solutions. The
editor strongly suggests that the reader patiently and gradually digest the
material in the book with the assistance of the solution manual. A comprehensive
solution manual is seldom available for books of this nature; its inclusion
reflects the authors’ admirable objective in preparing this manuscript, and the book
should prove useful for years to come.

Preface
Understanding and modeling a vibration system and measuring and controlling its
oscillation responses are important basic capabilities for mechanical, structural, and
earthquake engineers who deal with the dynamic responses of mechanical/structural
systems. Generally speaking, this ability requires three components: the basic theo-
ries of vibration, experimental observation and measurement of dynamic systems,
and analysis of the time-varying responses.
Among these three efforts, the first two are comparatively easy for engineering
students to learn. However, the third component often requires a mathemati-
cal background of random processes, which is rather abstract for students to grasp.
One course covering stochastic processes and random vibrations with engineering
applications is already too much for students to absorb because it is mathematically
intensive and requires students to follow an abstract thinking path through “pure”
theories without practical examples. To carry out a real-world modeling and analy-
sis of specific types of vibration systems while following through the abstract pure
thinking path of mathematical logic would require an additional course; however,
there is no room in curricula for such a follow-up course. This has been the obser-
vation of the first author during many years of teaching random vibration. He fre-
quently asked himself, How can one best teach the material of all three components
in a one-semester course?
The authors, during the past 20 years, have engaged in an extensive research
study to formulate bridge design limit states, first for earthquake hazards and sub-
sequently expanded to multiple extreme natural hazards, for which the time-varying
issue of rarely occurring extreme hazard events (earthquakes, floods, vehicular and
vessel collisions, etc.) had to be properly addressed. This experience of formulat-
ing real-world failure probability–based engineering design criteria provided nice
examples of using the important basic ideas and principles of random processes (e.g.,
correlation analysis, the basic relationship of the Wiener–Khinchine formula to
transfer functions, the generality of orthogonal functions and vibration modes, and
the principles and approaches of dealing with engineering random processes). We thus
decided to emphasize the methodology of dealing with random vibration. In other
words, we have concluded that it is possible to offer a meaningful course in ran-
dom vibration to students of mechanical and structural engineering by changing the
knowledge-based course approach into a methodology-based approach. The course
will guide them in understanding the essence of vibration systems, the fundamental
differences between analyzing deterministic and random dynamic responses, the way
to handle random variables, and the way to account for random processes. This is the basic
approach that underlies the material developed in this book. By doing so, we give
up coverage of the rigorous mathematical logic aspect and greatly reduce the portion
of random process. Instead, many real-world examples and practical engineering
issues are used immediately following the abstract concepts and theories. As a result,
students might gain the basic methodology to handle the generality of engineering
projects and develop a certain capability to establish their own logic to systemati-
cally handle the issues facing the theory and application of random vibrations. After
such a course, students are not expected to be proficient in stochastic processes or to
model a random process, but they will be able to design the necessary measurements
and observations, to understand the basic steps and validate the accuracy of dynamic
analyses, and to master and apply newly developed knowledge in random vibrations
and corresponding system reliabilities.
With this approach, we believe it is possible to teach students the fundamental
methodology for accounting for random data and random processes and to apply it in
engineering practice. This is done in this book by embedding engineering examples
wherever appropriate to illustrate the importance of, and approaches to, dealing with
randomness. The materials are presented in four sections. The first is a discussion of
the scope of random processes, including engineering problems that require the
concept of probability. The second is an overview of random processes, including
the time domain approach to define time-varying randomness, the frequency domain
approach for the spectral analysis, and the statistical approach to account for the
process. The third section is dedicated specifically to random vibrations, a typical
dynamic process with randomness in engineering practice. The fourth section is the
application of the methodology. In recent years, we used typical examples of devel-
oping fatigue design limit states for mechanical components and reliability-based
extreme event design limit states for bridge components in teaching this course. The
students’ good performance and positive responses have encouraged us to prepare
this manuscript.
Section I consists of two chapters. Chapter 1 presents the brief background and
the objectives of this book, followed by a brief review of the theory of probability
within the context of its application to engineering. The intent is to introduce only
basic concepts and formulas to prepare for the discussions of random processes. The
review of the theory of probability is continued in Chapter 2, with a focus on treating
measured random data in terms of certain basic random distributions in their actual
applications. This will also help engineers gain a deeper understanding of the
randomness in sequences. In this section, we emphasize the essence of probability as
the chance of occurrence in a sample space; the basic treatment of one-dimensional
random variables by means of two-dimensional deterministic probability density
functions (PDFs); and the tools of averaging (statistics), which change quantities
from random to deterministic.
Two important issues in engineering practice, the uncertainty of data and the prob-
ability of failure, are introduced.
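
To make the probability-of-failure idea concrete, consider a minimal sketch in
Python (the language and all numerical values here are our illustrative assumptions,
not material from the book) in which both the load effect S and the resistance R are
modeled as independent normal random variables:

# Sketch: failure probability P(R - S < 0) for independent normal
# resistance R and load effect S. All parameter values are hypothetical.
from scipy.stats import norm

mu_R, sigma_R = 30.0, 3.0   # assumed resistance mean and standard deviation
mu_S, sigma_S = 20.0, 4.0   # assumed load-effect mean and standard deviation

# The safety margin M = R - S is itself normally distributed
mu_M = mu_R - mu_S
sigma_M = (sigma_R**2 + sigma_S**2) ** 0.5

beta = mu_M / sigma_M   # reliability index
p_f = norm.cdf(-beta)   # failure probability P(M < 0)
print(f"reliability index beta = {beta:.2f}, P_f = {p_f:.2e}")

Here the randomness of the inputs is carried by the standard deviations, while the
design quantity of interest, the failure probability, is a deterministic number.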
Section II begins with Chapter 3, where the random (also called stochastic) pro-
cess is introduced in the time domain. The nature of time-varying variables is first
explained by joint PDFs through the Kolmogorov extension. Because indices exist
both in the sample space and in the time domain, the averages must be well defined;
in other words, statistics must be applied under rigorous conditions, by identifying
whether the process is stationary as well as ergodic. Although the averaged results
of mean and variance are often easily understood, the essence of correlation analysis
is explained through the concept of function/variable orthogonality. In Chapter 4,
random process is further examined in the frequency domain. Based on the
Wiener–Khinchine relations, spectral analyses of the frequency components of a
random process are carried out through its deterministic power spectral density
function. In these two chapters, several basic and useful models of random process
are discussed. In Chapter 5, a new set of statistics for random processes, different
from averaging over the entire process, is introduced, covering level crossings, peaks,
and maxima. To further understand the important engineering problem of cumulative
damage, the Markov process is introduced, continuing the approach of introducing
random processes from engineering motivations. Because random processes comprise
a broad range of rather different types of mathematical models, introducing each
special process one by one is not an effective way for students to learn. This book
therefore presents important processes within the context of practical engineering
problems, while including the generality of dealing with randomness and the
difference between random variables and random processes. Necessary mathematical
logic, such as limits, differentiation, and integration of random variables, is considered
only for the purpose of understanding the nature of randomness.
Section III of this book focuses on the topic of vibration problems. The basic
concept is reviewed in Chapter 6, where the essence of vibration is emphasized
based on energy exchange. The basic parameters of the linear single-degree-of-freedom
(SDOF) system are discussed, followed by the key issues of dynamic magnification
factors, convolutions, and transfer functions. The topic in Chapter 7 is on SDOF
systems excited by random initial conditions and forcing functions. Together with
the aforementioned correlation and spectral analyses, a new method of random
process referred to as time series is also described. In Chapter 8, the discussion
is extended to linear multi-degree-of-freedom (MDOF) systems. The statistical
analyses of the direct approach, based on modal decoupling of proportionally and
nonproportionally damped systems, are discussed, along with basic knowledge of
eigenparameters, the Rayleigh quotient, state variables, and equation and transfer
function matrices. Engineering examples of how to deal with random excitations, such
as the earthquake response spectrum and various types of white noise, are considered
to help students gain further insight into random processes and, in particular,
random vibrations. Vibration is a special dynamic process that possesses time
histories; a random process is also a dynamic process. In the third section of this
book, we not
only present the generality of these dynamic processes but also treat the vibration
response as the output of a second-order linear system due to the input of a random
process.
Section IV and the last part of the book provides more materials on the applica-
tions of random process and vibration. Chapter 9 is especially dedicated to inverse
problems, which are limited to system and excitation identifications. In engineering
practice, inverse problems can be much more difficult to solve, due both to
possible dimension reductions and to noise contamination. Measurement and test-
ing, especially on vibration systems, should deal with uncertainties. Based on the
methodology learned from previous chapters, statistical studies on random data and
model identifications are discussed. In Chapter 10, the failure of systems is further
discussed in a more systematic fashion, followed by the concept of reliability. For
mechanical engineering applications, high cycle fatigue failure is further considered
as a continuation of the topic in Chapter 5. For structural engineering applications,
the example of load-and-resistance-factor design under multiple hazard load effects
is considered to explain how to deal with load combinations of several random
processes, which is essentially different from the currently used bridge code based
on bridge reliability. In Chapter 11, nonlinear vibration with random excitation is
briefly considered, along with an introduction to the linearization procedure.
Again, the purpose of this chapter is not to systematically describe system and
response nonlinearity. Rather, it is intended to explain the nonlinear phenomena and
the general approach of linearization. In addition, the special method of Monte
Carlo simulation is considered as a tool to study complex systems and their
responses.

MATLAB® is a registered trademark of The MathWorks, Inc. For product information,
please contact:

The MathWorks, Inc.
3 Apple Hill Drive
Natick, MA 01760-2098 USA
Tel: 508-647-7000
Fax: 508-647-7001
E-mail: [email protected]
Web: www.mathworks.com
Acknowledgments
In this book, certain materials are presented following the approach used in Random
Vibrations: Theory and Practice (Wirsching et al. 2006) and Random Vibration
of Mechanical and Structural Systems (Soong and Grigoriu 1995). In preparing
this manuscript, the authors benefited greatly from discussions with their research
collaborators (Drs. John Kulicki, Jerry Shen, Jianwei Song, and Chao Huang) on
the development of bridge design guidelines for earthquake hazard effect and for
multiple extreme hazard load effects for bridges. The research projects were funded
by the Federal Highway Administration and the National Science Foundation. In
particular, we express our appreciation to Dr. Phillip Wen-Huei Yen for his constant
advice and support. We also thank Zhongwang Dou for his very helpful work in
preparing the solution manual.

Series Editor
Dr. Franklin Cheng earned a BS (1960) at the National
Cheng-Kung University, Taiwan, and an MS (1962) at the
University of Illinois at Urbana-Champaign. He gained indus-
trial experience with C. F. Murphy and Sargent & Lundy in
Chicago, Illinois. Dr. Cheng then earned a PhD (1966) in civil
engineering at the University of Wisconsin, Madison. Dr. Cheng
joined the University of Missouri, Rolla (now named Missouri
University of Science and Technology) as assistant professor
in 1966 and then associate professor and professor in 1969 and
1974, respectively. In 1987, the board of curators of the univer-
sity appointed him curators’ professor, the highest professorial position in the sys-
tem comprising four campuses. He has been Curators’ Professor Emeritus of Civil
Engineering since 2000. In 2007, the American Society of Civil Engineers recog-
nized Dr. Cheng’s accomplishments by electing him to honorary membership, which
is now renamed as distinguished membership. Honorary membership is the highest
award the society may confer, second only to the title of ASCE president. Honorary
members on this prestigious and highly selective list are those who have attained
acknowledged eminence in a branch of engineering or its related arts and sciences.
By 2007, 565 individuals had been elected to this distinguished grade of
membership since 1853. For the year 2007, only 10 honorary members were
selected from more than 14,000 members.
Dr. Cheng was honored for his significant contributions to earthquake structural
engineering, optimization, nonlinear analysis, and smart structural control and for
his distinguished leadership and service in the international engineering community,
as well as for being a well-respected educator, consultant, author, editor, and mem-
ber of numerous professional committees and delegations. His cutting-edge research
helped recognize the vital importance of the possibilities of automatic computing
in the future of civil engineering. He was one of the pioneers in allying computing
expertise to the design of large and complex structures against dynamic loads. His
research expanded over the years to include the important topics of structural opti-
mization and design of smart structures. In fact, he is one of the foremost experts in
the world on the application of structural dynamics and optimization to the design of
structures. Due to the high caliber and breadth of his research expertise, Dr. Cheng
has been regularly invited to serve on the review panels for the National Science
Foundation (NSF), hence setting the direction of future structural research. In addi-
tion, he has been instrumental in helping the NSF develop collaborative research
programs with Europe, China, Taiwan, Japan, and South Korea. Major industrial
corporations and government agencies have sought Dr. Cheng’s consultancy. He
has consulted with Martin Marietta Energy Systems, Inc., Los Alamos National
Laboratory, Kajima Corporation, Martin & Huang International, Inc., and others.

Dr. Cheng received four honorary professorships from China and chaired 7 of his
24 NSF delegations to various countries for research cooperation. He is the author
of more than 280 publications, including 5 textbooks: Matrix Analysis of Structural
Dynamics: Applications and Earthquake Engineering, Dynamic Structural Analysis,
Smart Structures: Innovative Systems for Seismic Response Control, Structure
Optimization—Dynamic and Seismic Applications, and Seismic Design Aids for
Nonlinear Pushover Analysis of Reinforced Concrete and Steel Bridges. Dr. Cheng
has received numerous honors and awards, including Chi Epsilon, MSM–UMR
Alumni Merit for Outstanding Accomplishments, Faculty Excellence Award,
Halliburton Excellence Award, and recognitions in 21 biographical publications,
such as Who’s Who in Engineering and Who’s Who in the World. He has twice
received the ASCE State-of-the-Art Award, in 1998 and 2004.
Section I
Basic Probability Theory
1 Introduction

1.1 Background of Random Vibration


1.1.1 General Description
This manuscript provides basic materials for an introductory course level on ran-
dom vibrations, including a review of probability theory; concepts of summation,
multiplication, and general functions of random variables; descriptions of random
processes and their origins; responses of single and multiple degrees-of-freedom lin-
ear systems such as machines and structures due to transient and random excitations;
and analyses in the time and frequency domains for reliability design, system identi-
fications, vibration testing, and control in engineering applications.
The readers are assumed to be familiar with the concepts and theories of regular
vibrations. Thus, these concepts and theories will not be systematically described.
In a regular vibration system, although the signal is a function of time (that is,
the vibration varies from moment to moment), we can predict the exact value of the
vibration at any given time point.
However, fundamentally different from deterministic vibration, random vibration
is a random process, which means, first of all, that the value of the vibration, no mat-
ter if it is denoted by displacement or other quantities, is unpredictable; and second,
the vibration, for example, a displacement, will also be a function of time. It is also
assumed that the readers are familiar with basic knowledge of random variables and
the theory of probability. Therefore, only a very brief review of certain portions
of the theory of probability is given in this manuscript; that is, we discuss only
the necessary ideas and models that closely relate to random processes.
Assume that a set of random variables are of the same type, for example, a vibra-
tion displacement with identical units (e.g., inches). In other words, we can use a
single dimension to place all these variables. The basic understanding of this set
of random variables is to arrange them from the smallest to the largest, and at each
given value of these variables, we try to find the occurrence probability. The cor-
responding idea is referred to as the probability density function (PDF), which is
a deterministic function. Graphically speaking, a PDF is a two-dimensional plot
that describes a random distribution: the abscissa is the value of the variable
and the ordinate is the probability density. Thus, the basic treatment of random
variables is to use a two-dimensional deterministic function to describe the one-
dimensional random variable.
To deal with a random process, we can realize that it is more complex than a random
variable because, at any moment of interest, it has not only an index for the value
of the variable but also another index for time. In this sense, a collection of
random variables is a one-dimensional set, whereas a random process occupies two
dimensions, and the occurrence is random.
This manuscript deals with random process, especially random vibrations, in
which we need three-dimensional deterministic functions of random distribution.
However, determining a three-dimensional distribution alone is insufficient to
describe how a system vibrates. We also need specific knowledge of the vibration. In this
sense, the course of random vibration is a combination of the theory of vibration and
the theory of random process.

1.1.2 General Theory of Vibration


In this chapter, we briefly describe the general idea of the theory of vibration. A
detailed discussion of vibration will begin from Chapter 6.

1.1.2.1 Concept of Vibration
Vibration is a repetitive motion of objects relative to a stationary frame of
reference or nominal position (usually equilibrium). It refers to mechanical oscillations about an equi-
librium point. The oscillations may be periodic (such as the motion of a pendulum),
transient (such as the impact response of vehicle collision), or random (such as the
movement of a tire on a gravel road).
The common-sense notion of vibration is that of an object moving back and forth:
a swinging mass, a car driving on a bumpy road, an earthquake, a rocking boat, tree
branches swaying in the wind, a heartbeat; vibration is everywhere.
Let us consider the generality of the above examples. What these examples have
in common can be seen from the following vibrational responses:
First, all of them possess mass, damping, and stiffness. Second, potential energy
and kinetic energy are exchanged, and third, a vibration system has frequency and
damping ratio. In addition, vibration has a certain shape function. To see these gen-
eral points, let us consider an example shown in Figure 1.1.
Vibration can also be seen as the responses of a system due to certain excitations.
In Figure 1.1a, a flying airplane will be excited by flowing air and its wings will
vibrate accordingly. Because of the uncertainty with how air flows, at a deterministic
time point, it is difficult to predict the exact displacement of a specific location on the
wing. Or we can say that the vibration of the wing is random.
In Figure 1.1, an airplane engine is also shown. Although the engine is working at a
certain rotating speed, the corresponding vibration is mainly periodic, which is concep-
tually shown in Figure 1.1b. In this manuscript, we focus on random vibration responses.
In Figure 1.1a, the vibration of the airplane’s wing is a function of a certain loca-
tion in the air where different excitation input occurs. At virtually the same moment,
the wing vibrates accordingly. The vibration will also be seen as a function of time,
which is a more universal reference. Therefore, we use the time t, instead of other
references such as the position along the road, to describe various vibrations. From either
the location in the air or the moment in time, we can realize that the amplitude of
vibration is not a constant but a variable. Generally speaking, it is a time or temporal
variable. As a comparison, the vibration in the airplane engine shown in Figure 1.1b
can be exactly seen as a deterministic function of time.
FIGURE 1.1  Different types of vibration. (a) Random vibration of airplane wings, (b) periodic vibration of airplane engine.

1.1.2.1.1 Deterministic Vibration
Let us now have a closer look at the temporal variable, which is conceptually shown
in Figure 1.1 and referred to as vibration time histories. Specifically, the vibration
time history of the engine vibration, described by displacement x(t) can be repre-
sented by a Fourier series, that is,

x(t) = x1 cos(ω1t + θ1) + x2 cos(ω2t + θ2)… (1.1)

where x1 is the amplitude of the particular frequency component ω1, θ1 is the phase
shift of this component, and so on. The trigonometric function cos(ωit + θi) indicates
a back-and-forth movement, the vibration.
Once the excitation of a system is removed, the vibrational response will gradu-
ally decay because a realistic system will always dissipate the vibration energy. In
this sense, we can add the energy-decaying term such as e −ζ1ω1t in the above equation
so that the vibrational response is written as

x(t) = x1 e^(−ζ1ω1t) cos(√(1 − ζ1²) ω1t + θ1) + x2 e^(−ζ2ω2t) cos(√(1 − ζ2²) ω2t + θ2) + … (1.2)

From the above description, it can be seen that a temporal vibration function is
described by three basic quantities or parameters: namely, the amplitude (xi), the
frequency (ωi), and the phase (θi). If the energy dissipation is also considered, we
should have another term ζi.
Now, let us compare the vibrations described in Equations 1.1 and 1.2, respec-
tively. The essential difference between periodic vibration (Equation 1.1) and tran-
sient vibration (Equation 1.2) is twofold. First, periodic vibration (Equation 1.1) will,
theoretically, last “forever” whereas transient vibration (Equation 1.2) will sooner
or later die out. Second, the vibration in Equation 1.1 will repeat itself periodically
whereas the vibration in Equation 1.2 will not have repeatability.
Note that, if the ratio of the frequencies ω1 and ω2 is a rational number, the vibration
(Equation 1.1) is periodic. If this is not satisfied, for example, ω1/ω2 = π, we will not
have periodic vibration. Therefore, such a vibration is also referred to as transient
vibration. In this sense, the duration of the vibration is not important. Thus, the major
difference between Equations 1.1 and 1.2 is whether they can repeat themselves or not.
A closer look at both Equations 1.1 and 1.2 unveils their generality. Whenever
we choose a given time, the value of x(t) can be calculated if the frequencies ωi and
damping ratios ζi are known. Therefore, regular vibration theory devotes most of its
chapters to formulating methods of determining ωi and ζi, as well as of finding the
responses (Equations 1.1 and 1.2); this is the investigation of vibration systems.
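To make this concrete, the following short Python sketch evaluates a two-mode free-decay response of the form of Equation 1.2. It is a minimal illustration only; the amplitudes, frequencies, damping ratios, and phases are arbitrary assumed values, not data from the text.

```python
import numpy as np

# A minimal sketch of Equation 1.2: a free-decay response built from two
# exponentially decaying cosines. All parameter values are assumptions
# chosen for illustration.
def free_decay(t, amps, freqs, zetas, phases):
    """x(t) = sum_i x_i exp(-zeta_i*w_i*t) cos(sqrt(1 - zeta_i**2)*w_i*t + theta_i)."""
    x = np.zeros_like(t)
    for x_i, w_i, z_i, th_i in zip(amps, freqs, zetas, phases):
        w_d = np.sqrt(1.0 - z_i**2) * w_i          # damped natural frequency
        x += x_i * np.exp(-z_i * w_i * t) * np.cos(w_d * t + th_i)
    return x

t = np.linspace(0.0, 10.0, 1001)                   # time axis, in seconds
x = free_decay(t, amps=[1.0, 0.5], freqs=[2 * np.pi, 6 * np.pi],
               zetas=[0.02, 0.05], phases=[0.0, np.pi / 4])
print(x[0], x[-1])   # the amplitude decays toward zero; the motion never repeats exactly
```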

1.1.2.1.2 Vibration Systems
In regular vibration, an object that vibrates is seen as a mass system, called a vibra-
tion system. In most cases, we assume the vibration system is linear. Any engineer-
ing object that is subjected to a certain load and has responses due to the load can be
seen to have the relationship of “input-system-output.” When the input and output
are functions of time, the system is dynamic. A vibration system is dynamic. On the
other hand, if the load and the response do not develop as time goes on, or if
the development with time is sufficiently slow that it can be treated as constant in
The response of a dynamic system can be considered in two basic categories. The
first is that the system response versus time can continue to grow until the entire
system is broken, or the development of the response versus time will continue to
decrease until the system response dies out. In the first case, the response starts at
an origin and can continue to develop but will never come back to the origin. Such a
dynamic system is not a vibration system.
In the second type of dynamic system, the response grows until it reaches a certain
peak value, then decreases, and then grows again, either along the same direction or
the opposite direction; the growing response then reaches the next peak value and
starts to decrease. As mentioned previ-
ously, this repetitive motion is called vibration and thus the second type of dynamic
system is the vibration system.
It is seen that the responses of a dynamic system will continue to vary, so that we
need at least two quantities to describe the responses, namely, the amplitude of the
responses and the time at which the amplitude reaches a certain level, such as the
term xi and t in the above equation, respectively.
The responses of vibrational dynamic system, however, need at least one additional
quantity to express how fast the responses can go back and forth. This term is the fre-
quency of vibration, such as the value of ωi in Equation 1.1. From this discussion, we
can realize that the response of a vibration system must go back and forth or it is not
vibration. Therefore, the term that describes how fast the response changes values
from growing to reducing is the fundamental quantity distinguishing if a system is
a vibration system. Thus, frequency is the most important parameter for a vibration
system. From the viewpoint of vibration modal analysis for linear systems, frequency (or
more precisely, natural frequency) is the most important modal parameter.
Also from the input-system-output viewpoint, we can understand that the reason
the system will have a dynamic response is due to the input, or there must be an
amount of energy input to the system. At the same time, a real-world system will
dissipate energy, which can be understood through the second law of thermodynam-
ics. Therefore, we need another term to describe the capability of a given system that
dissipates energy. It can be seen that the larger the capacity of energy dissipation a
system has, the more energy input is needed to maintain the same level of vibration. A
quantifiable term to describe the capacity of energy dissipation is called the damping
ratio, which is the second most important modal parameter of a vibration system,
such as the term ζi in the above equation.
When a mass system, such as a car, an airplane, a machine, or a structure vibrates,
we often see that, at different locations of the system, the vibration level can be
different. For example, the vibration at the driver’s seat can be notably different
from the rear passenger seat of a compact car. We thus need another parameter to
express the vibration profile at different locations; such a vibration shape function is
called the mode shape. Different from natural frequency and damping ratio, which
are scalars, the mode shape function is a vector, which is the third most important
parameter of modal analysis. This can be seen conceptually in Figure 1.1a. Suppose
we can measure the vibration not only at location 1 but also at location 2 (through
location n; see Figure 1.1a); the vibrations at these different locations are likely
not identical. Let us assume that the vibration of the airplane wing is
deterministic, which can be expressed by Equation 1.1. In this case of n vibration
locations, the system free-decay responses in Equation 1.2 can be further written as

x1(t) = x11 e^(−ζ1ω1t) cos(ωd1t + θ11) + x12 e^(−ζ2ω2t) cos(ωd2t + θ12)
x2(t) = x21 e^(−ζ1ω1t) cos(ωd1t + θ21) + x22 e^(−ζ2ω2t) cos(ωd2t + θ22)    (1.3)
⋮
xn(t) = xn1 e^(−ζ1ω1t) cos(ωd1t + θn1) + xn2 e^(−ζ2ω2t) cos(ωd2t + θn2)

where ωdi = √(1 − ζi²) ωi is the damped natural frequency. The amplitudes and phase
angles at different locations in the above equation actually describe the vibration
shapes, which are referred to as mode shapes; the jth component of the ith mode shape
contains amplitude xji and phase θji.
Again, suppose a system is linear. In this case, the three terms mentioned previ-
ously, natural frequency, damping ratio, and mode shape, are the most important
parameters of a vibration system; the set of modal parameters. These parameters are,
in most cases, deterministic values. In this manuscript, we assume that our vibration
systems have deterministic modal parameters.
In a linear system, the ratio between the output and input measured at certain
locations is a function of frequency (as well as damping ratio), which is referred to as
a transfer function, and is the most important parameter describing a linear vibration
system. Therefore, the general relationship of input-system-output can be further
written as input-transfer function-output.

1.1.2.1.3 Random Vibrations
Based on the theory of vibration discussed above, once the vibration system is
known, with given forcing functions or initial conditions (or both), the responses of
deterministic vibrations can be calculated as long as the forcing function and initial
conditions are deterministic. Mathematically, the procedure is to find the solution of
a governing equation of motion of a vibration system, namely, the parameters ωi, ζi,
xij, and θij.
On the other hand, for random vibrations, we do not have these deterministic
parameters ωi, ζi, xij, and θij in the closed forms of vibration responses described in
Equations 1.1 or 1.2. The reason is that, in general, we do not have a deterministic time
history of the input or deterministic initial conditions. In this case, even if the exact
characteristics of the vibration system are known, we will not be able to predict the
amplitude of vibration at a given time. Also, for a given value of vibration amplitude,
we do not know when it will occur.
This does not mean that the vibration is totally uncertain. With the tools of basic
random processes and statistics, we may be able to obtain the rate of occurrence of
a particular value. We may predict the major vibration frequency of a linear system,
with certain knowledge of the input statistics. We may understand the averaged value
or root mean square value of the responses, and so on. In most engineering applica-
tions, these values can be sufficient to design or control the vibration systems, to
predict the fatigue life of a machine or airplane, or to estimate the chance of the
occurrence of some extreme values and the possibility of failures of certain systems.
These are the major motivations for studying random vibration of a mass system.
A deterministic vibration is a dynamic process, and so is a random vibration.
Therefore, although the vibration responses cannot be written as Equations 1.1
through 1.3, what we do know is that the responses are a function of time. In other
words, random vibration belongs to the category of random process.
Similar to deterministic vibration, in this circumstance, we still stay within the
concept of a mass system. Additionally, in most cases, we have linear systems, and
thus the basic concept or relationship of “input-transfer function-output” are continu-
ously used. The only difference is, in the case of random vibrations, both inputs and
outputs are random processes. Thus, instead of devoting more thought to the transfer
function, as with deterministic vibration, we will focus more on the time histories of
inputs and outputs. In addition, the main methodology to account for these time
histories is through averaging.
Note that a thorough investigation of random processes can be mathematically
intensive. For the purpose of understanding the engineering of random vibration
and calculating commonly used values of random responses, the authors minimize
the necessary knowledge of random processes. Thus, this manuscript emphasizes only
the introduction of stochastic dynamics. All the materials in this manuscript are
organized so that graduate students in mechanical and civil engineering can master
the basics of random vibration in a one-semester course. Readers who are
interested in more advanced theory may further consult textbooks on random
processes, such as that by Soong and Grigoriu (1995).

1.1.3 Arrangement of Chapters
In Section 1.2, we study the fundamental concepts of probability theory by briefly
reviewing the knowledge necessary for random processes and random vibrations. Thus,
we only discuss the basics of set theory, axioms of probability, and conditional prob-
ability. In Section 1.3, we consider random variables. In particular, we emphasize the
details of normal distribution. The focus is on single random variables, including con-
tinuous and discrete variables and the important probability density and mass functions.
In Chapter 2, we further discuss the functions of random variables, and the random
distributions of input and output for vibration systems. In Chapter 3, the random pro-
cesses in the time domain are introduced, including the basic definitions and classifi-
cations, the state spaces and index sets, the stationary process, and the conditions and
calculations of ensemble and temporal averages. Correlation analysis is also discussed.
In Chapter 4, random processes in the frequency domain are discussed. The
spectral density function and its relationship with correlation functions are the key
issues. In addition, white noise and band-pass–filtered spectra are also discussed. In
Chapter 5, the random process is further considered with certain statistical proper-
ties, such as the concepts of level crossings and distributions of extrema. By
analyzing level crossings, readers can relate to the handling of time-varying
randomness based on random processes. This chapter also provides the knowledge base
to understand fatigue processes and engineering failure probabilities.
In Chapter 6, linear single-degree-of-freedom (SDOF) vibration systems are con-
sidered with deterministic forcing functions and initial conditions; the emphasis is
on the vibration system itself, including basic vibration models and basic vibration
parameters. In addition, free vibration and simple forced vibration with harmonic
excitation are also considered.
In Chapter 7, the response of linear SDOF systems to random forces will be dis-
cussed. Deterministic impulse responses are considered first, followed by arbitrary
loading convolutions. The relationship between the impulse response function and the
transfer function is also discussed as Borel’s theorem is introduced. Here, the random
environments are considered as excitations and are treated as random processes. The
mean of the response process, the correlations and the spectral density functions of
the response process, are also discussed.
In Chapter 8, we further extend the discussion to linear multi-degree of freedom
(MDOF) systems. Proportional and nonproportional damping are discussed. The
basic treatment of a MDOF system is to decouple it into SDOF systems. Because of
different types of damping, the decoupling procedures are different. In Chapter 9,
the concept of inverse problems is introduced, including the first and second inverse
problems, to help engineers improve their estimations and to help them minimize
measurement noises.
In Chapter 10, understanding the failures of single components and total structures
is introduced. The 3σ criterion, the first passage failure, and fatigue are discussed.
In this chapter, the concept of failure is focused not only on materials but also on the
estimation of a system's reliability under multihazard environments. The probability
of failure is the foundation of the design of limit state equations. Different from the
theory of failure probability, which treats the loading and capacity of a system as
random variables, they are viewed as random processes in this chapter.
In Chapter 11, we briefly introduce nonlinear vibration systems, which are also
subjected to random excitations. Monte Carlo simulations are discussed to provide
better response and parameter estimations under complex situations. In addition,
typical nonlinear vibrations are specifically discussed for engineering practice.
Finally, examples computing the nonlinear random responses are presented.

1.2 Fundamental Concept of Probability Theory


In this section, the basics of probability theory are reviewed for the purpose of
summarizing the necessary knowledge base for random processes.

1.2.1 Set Theory
It is known that modern probability theory is based on set theory. In the follow-
ing, we briefly review basic concepts without further proofs. An experiment is a case
that may lead to results referred to as outcomes. An outcome is the result of a single
trial of an experiment, whereas an event is one or more outcomes of an experiment.
Set. A set is a collection of events. For example, the collection of all the vibration
peak values can be a set, and all these values have the same units. The collection of
the modal parameters, such as natural frequencies, damping ratios, and mode shapes
of the first few modes of a vibration system can be another set. Here, these param-
eters can have different units. However, readers can still find what is in common with
the second set.
Event. Now, consider the event in a set. If an event “a” occurs, it is denoted as ωa;
note that if only “a” occurs, we have ωa. When another event “b” occurs, we have ωb.
Collect those ωi’s, denoted by

A = {ωa, ωb, ωc, …ωn} = {ωi}, i = 1, …n (1.4)

Equation 1.4 indicates

1. Set A is a collection of all the events ωi’s.


2. If certain event ωa in A exists (“a” occurs), then we will have A, or A occurs.
3. The language of set theory is based on a single fundamental relation,
defined as a membership. ωa is a member of set A, denoted by

ωa ∈ A (1.5)

(ωa is a member of A, ωa is an element of A).


4. Relative to ωa, A can also be seen as an event. To distinguish between ωa and
A, ωa is called a basic event, A is called a combined event (collective event).

Space.
5. All the possible events constitute a space of basic events, denoted by U. U is
an event that must happen. It is also called a universal set, which contains
all objects, including itself. Furthermore, in engineering, these events may
also be called samples, and U is the space of all the samples. In the litera-
ture, the phrase “space” is often used for continuous variables only.
6. Impossible event Φ. The empty set is the set that has no elements (the empty
set is uniquely determined by this property, as it is the only set that has no
elements—this is a consequence of the understanding that sets are deter-
mined by their elements):

Φ = {} (1.6)

1.2.1.1 Basic Relationship (Operation)


To graphically illustrate the following relationships, a Venn diagram is a useful tool.
In the following, the basic relationships between events are discussed with the expres-
sion of corresponding Venn diagrams (John Venn, 1834–1923; Figure 1.2, etc.).

1. A union B (A or B, A + B)
A union B is the collection of all events in either A or B (or both), denoted by

A ∪ B = A + B (1.7)

See the gray area in Figure 1.2, Venn diagram of A union B.

Example 1.1

The union of {1, 2, 3} and {2, 3, 5} is the set {1, 2, 3, 5}.

2. A intersection B (A and B)
A intersection B is the portion shared by both A and B (Figure 1.3),
denoted by
A ∩ B = AB (1.8)

See Figure 1.3 shaded area.

FIGURE 1.2  Venn diagram of A union B.


FIGURE 1.3  Venn diagram of A intersection B.

Example 1.2

The intersection of {1, 2, 3} and {3, 4, 5} is the set {3}.


The intersection of {1, 2, 3} and {4, 5, 6} is the set {}, or Φ.
(Note that, in the literature, A ∩ B is also denoted by “AB”)

3. A and B are mutually exclusive (A and B disjoint; Figure 1.4)


The case in which A and B cannot occur together is denoted by

A ∩ B = Φ (1.9)

4. A is contained inside set B, A is a subset of B


The fact that the occurrence of A implies the occurrence of B is denoted by

A ⊂ B (1.10)

(A is included in B)
or

B ⊃ A (1.11)

(B contains A, A is a subset of B; Figure 1.5)

In this case, we have the following:

a. A ⊂ A (1.12)

b. If A ⊂ B, B ⊂ C then A ⊂ C (1.13)

c. A must be contained inside U, A is a subset of U

FIGURE 1.4  Venn diagram of A and B mutually exclusive.


FIGURE 1.5  B contains A.

Example 1.3

{a,b} is a subset of {a,b,c}, but {b,d} is not.

d. A ⊃ Φ (1.14)

5. A = B (1.15)
This is a special case, that set A and set B are identical, the condition is

iff A ⊂ B and B ⊂ A

where the symbol “iff” means “if and only if.”


In this case, we have
a. A = A (1.16a)

b. If A = B and B = C then A = C (1.16b)

c. If A = B then B = A (1.16c)

6. Cartesian product of A and B, denoted by A × B, is the set whose members


are all possible ordered pairs (a,b), where a is a member of A and b is a
member of B.
A Cartesian product is a useful concept to denote a special relationship
between A and B, which will be discussed in detail in Chapter 3.
7. A and B are mutually inverse, A and its complements
That “A and B are mutually inverse” is denoted by

A ∩ B = Φ,  A + B = U (1.17)

The complement of A is denoted by Ā, which can be clearly seen in
Figure 1.6. With the help of Equation 1.18, it is seen that Ā here is nothing
but B described in Equation 1.17.
That is, the following about A and its complement Ā are true:

A + Ā = U (1.18a)

A ∩ Ā = Φ (1.18b)
FIGURE 1.6  Venn diagram of A and its complements.

8. The event that A occurs and B does not occur


This case is denoted by

A − B (1.19)

In Figure 1.7, the two dark gray areas are both A − B.


We can realize that

A − B = A − A ∩ B = A ∩ B̄ (1.20)

9. Operation laws of events


The basic rule of event operation can be summarized as follows:
Commutativity:

A + B = B + A (1.21a)

A ∩ B = B ∩ A (1.21b)

Associativity:

(A + B) + C = A + (B + C) (1.22a)

(A ∩ B) ∩ C = A ∩ (B ∩ C) (1.22b)

FIGURE 1.7  Venn diagram of A − B.



Distributive laws:

(A + B) ∩ C = A ∩ C + B ∩ C (1.23a)

A ∩ B + C = (A + C) ∩ (B + C) (1.23b)

A + (B ∩ C) = (A + B) ∩ (A + C) (1.23c)

A ∩ (B + C) = (A ∩ B) + (A ∩ C) (1.23d)

Duality (De Morgan’s laws [Augustus De Morgan, 1806–1871]):

(A1 + A2 + ⋯ + An)‾ = Ā1 Ā2 ⋯ Ān (1.24a)

(A1 A2 ⋯ An)‾ = Ā1 + Ā2 + ⋯ + Ān (1.24b)

where the trailing overbar denotes the complement of the whole enclosed event.

Example 1.4

The time history of a vibratory mass is sampled at a time interval of 1 s.
Four samples are taken. Use Xi to denote the event that the ith sample of
the vibration level is less than 1 cm. Represent the following cases in terms of the Xi:

1. None of the samples is greater than 1 cm
2. At least one of the samples is greater than 1 cm
3. Only one sample is greater than 1 cm
4. At least three samples are smaller than 1 cm

The answers to the above four cases are

1. X1X2X3X4
2. X̄1 + X̄2 + X̄3 + X̄4, that is, the complement of X1X2X3X4 (by De Morgan's laws)
3. X̄1X2X3X4 + X1X̄2X3X4 + X1X2X̄3X4 + X1X2X3X̄4
4. X̄1X2X3X4 + X1X̄2X3X4 + X1X2X̄3X4 + X1X2X3X̄4 + X1X2X3X4

1.2.2 Axioms of Probability
In most modern sciences, a theory is often described by several axioms and basic
concepts; probability theory can also be systematically established in a similar fashion.
In the above, we actually introduced these basic concepts; in the following, let us
consider the axioms.

1.2.2.1 Random Tests and Classic Probability


First of all, consider the classic expression of probability, starting with a random test.

1.2.2.1.1 Frequency fN(A)
The frequency of occurrence of A can be viewed as follows: in N tests, A occurs n
times, and the frequency is denoted by

fN(A) = n/N (1.25)

Equation 1.25 provides a very important starting point, which classically expresses
the essence of probability. In other words, any case of probability can be seen as a
ratio of n and N. If one can successfully and completely find n and N without any
overlapping, then he or she locates the corresponding probability.
In Equation 1.25, N is referred to as the sample space and n is referred to as the
total occurrence of the tested samples.
For the frequency of occurrence, we have the following basic relationships:

1. 0 ≤ f N(A) ≤ 1 (1.26)
2. f N(U) = 1 (1.27)
3. if AB = Φ, then

f N(A + B) = f N(A) + f N(B) (1.28)

1.2.2.1.2 Probability
Now, with the help of the viewpoint of occurrence frequency, we have the classic
definition of probability.

1.2.2.1.2.1   Classic Definition of Probability


The probability of A (occurrence of A) is

P(A) = lim_{N→NU} n/N (1.29)

Here,
NU: in space U, the total possible number of tests
n: number of occurrences of A

1.2.2.1.2.2   Characteristics of Classic Probability


The classic random tests are characterized by

1. The total possible results are limited (finite), denoted by

ω1, ω2, …ωN

2. The possibilities of occurrence of all the ωi’s are equal, that is,

P(ω1) = P(ω2) = … = P(ωN) (1.30)



1.2.2.2 Axiom of Probability
Having reviewed the classic thoughts of probability, let us introduce the axioms.

1.2.2.2.1 Axiomatic Theory of Probability


The mathematical approach of probability theory is based on the following three axioms

Axiom 1 (essence: nonnegative)


0 ≤ P(A) ≤ 1 (1.31)

Axiom 2 (essence: normalization)

P(U) = 1 (1.32)

Axiom 3 (essence: additivity)


A and B are mutually exclusive (A ∩ B = Φ), then

P(A ∪ B) = P(A) + P(B) (1.33)

In general

P(A ∪ B) = P(A) + P(B) − P(A ∩ B) (1.34)

1.2.2.2.2 Properties of Axiomatic Probability


The three axioms above can be quite clearly understood. We now consider the
associated properties as follows:

1. P(Φ) = 0 (1.35)
2. If A1, A2, …Am are mutually exclusive, then

P(∑_{i=1}^m Ai) = ∑_{i=1}^m P(Ai) (1.36)

3. For any event A

P(Ā) = 1 − P(A) (1.37)

4. For any two events A and B

P(A − B) = P(A) − P(A ∩ B) (1.38)

5. For any two events A and B, if

A⊃B


then

P(A − B) = P(A) − P(B) (1.39)



1.2.3 Conditional Probability and Independence


In the world of probability, any occurrence of an event requires certain conditions.
In most cases, to find the probability of a certain occurrence in a complex situation
is to break down the complicated cases into several “pure” conditions and find the
corresponding probabilities of each one. In this subsection, let us consider the basic
relationships of the conditional events.

1.2.3.1 Conditional Probability
A conditional probability is denoted as

P(A|B) = P(A occurs given that we know B occurs) (1.40)

where the bar “|” stands for “given.”


The following is the basic formula for conditional probability:

P(A|B) = P(A ∩ B)/P(B) (1.41)

Equation 1.41 is a normalization to generate a new sample space, in which the prob-
ability of B is 1; namely, B always occurs (because B already occurs), that is, P(B|B) = 1.

Example 1.5

In 2010, we had twelve students from the Department of Mechanical and


Aerospace Engineering (MAE), with one PhD candidate, and five students from
the Department of Civil Engineering (CIE), with three PhD candidates in a cross-
listed course, which is listed in Table 1.1.

1. Students that are masters known to be from MAE: 11 → P(master|MAE)  =


11/12
2. Students that are from MAE: 12 → P(MAE) = 12/17
3. Students that are from MAE and are masters in our class: 11 → P(master ∩
MAE) = 11/17

P(master|MAE) = (11/17)/(12/17) = 11/12

Table 1.1
Students in MAE536/CIE520
PhD Master Subtotal
MAE 1 11 12
CIE 3 2 5
Subtotal 4 13 17

Table 1.2
MAE Students
PhD Master Subtotal
MAE 1 11 12
CIE 3 2 5
Subtotal 4 13 17

Table 1.3
Master Students
PhD Master Subtotal
MAE 1 11 12
CIE 3 2 5
Subtotal 4 13 17

4. “to be from MAE” → the space is shrunk from the “total classmates” to
“MAE classmates”
The above can be expressed by the MAE row in Table 1.2.

P(MAE|master) = (11/17)/(13/17) = 11/13

This can be expressed by the master column in Table 1.3.
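The counting in this example can be verified with a few lines of Python; the sketch below simply encodes the entries of Table 1.1 and applies Equation 1.41.

```python
from fractions import Fraction

# Counts from Table 1.1: (department, degree) -> number of students.
counts = {("MAE", "PhD"): 1, ("MAE", "master"): 11,
          ("CIE", "PhD"): 3, ("CIE", "master"): 2}
total = sum(counts.values())                            # 17 students in all

p_mae = Fraction(counts[("MAE", "PhD")] + counts[("MAE", "master")], total)
p_master_and_mae = Fraction(counts[("MAE", "master")], total)

# Equation 1.41: P(master|MAE) = P(master ∩ MAE) / P(MAE)
print(p_master_and_mae / p_mae)                         # 11/12
```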

1.2.3.2 Multiplicative Rules
Based on the conditional probability, we have

P(A ∩ B) = P(A) P(B|A) (1.42a)

P(A ∩ B) = P(B) P(A|B) (1.42b)

Example 1.6

In the above example, assume P(MAE) is known, then


P(MAE) = 12/17;
and P(master|MAE) is also known, we have
P(master|MAE) = 11/12.
Therefore,

P(MAE ∩ master) = P(MAE) P(master|MAE) = (12/17)(11/12) = 11/17



Note that

P(MAE ∩ master) = 11/17 < P(MAE) = 12/17

and

P(MAE ∩ master) = 11/17 < P(master|MAE) = 11/12.

That is, we can realize

P(A ∩ B) ≤ P(A) (1.43a)

and we can prove that

P(A ∩ B) ≤ P(B) (1.43b)

and

P(A ∩ B) = P(A) (1.44a)

when

P(B) = 1 (1.44b)

1.2.3.3 Independency
The following are useful concepts of variable independency:

1. Two events A and B, if

P(A ∩ B) = P(A) P(B) (1.45)

A and B are independent, which means the occurrence of A does not affect
the occurrence of B.
In addition,

P(B|A) = P(B) (1.46)

Proof:

From conditional probability

P(B|A) = P(A ∩ B)/P(A) (1.47)

Because A and B are independent, Equation 1.45 holds. Substitution of
Equation 1.45 into Equation 1.47 yields

P(B|A) = P(A) P(B)/P(A) = P(B)

2. If A and B are independent, then Ā and B, A and B̄, and Ā and B̄ are also
independent

Example 1.7

If A and B are independent, then A and B̄ are also independent.

From Equation 1.20, we have

A ∩ B̄ = A − B

therefore

P(A ∩ B̄) = P(A − B)

From the 4th property of axiomatic probability shown in Equation 1.38,

P(A − B) = P(A) − P(A ∩ B)

From Equation 1.45, because A and B are independent, P(A ∩ B) = P(A)P(B); therefore

P(A ∩ B̄) = P(A − B) = P(A) − P(A)P(B) = P(A)(1 − P(B)) = P(A)P(B̄)

Thus, A and B̄ are also independent.

1.2.3.4 Total Probability and Bayes’ Formula


The concept of conditional probability can be further extended to the important
concept of total probability and the corresponding Bayes’ formula (Thomas Bayes,
1701–1761).

1.2.3.4.1 Total Probability
If B1, B2, … Bn are mutually exclusive, P(Bi) > 0, and, in addition, A is included
in the union of the Bi's, that is,

A ⊂ ∑_{i=1}^n Bi (1.48)

then

P(A) = ∑_{i=1}^n P(Bi) P(A|Bi) (1.49)

Proof:

Because B1, B2, … Bn are mutually exclusive, so are A ∩ B1, A ∩ B2, … A ∩ Bn.
From Equation 1.48,

A = A ∩ ∑_{i=1}^n Bi

Furthermore,

P(A) = P(A ∩ ∑_{i=1}^n Bi) = P(A ∩ B1 + A ∩ B2 + ⋯ + A ∩ Bn) = ∑_{i=1}^n P(A ∩ Bi)

From the multiplicative rule P(A ∩ B) = P(B) P(A|B) (Equation 1.42b), the above
equation becomes

P(A) = ∑_{i=1}^n P(Bi) P(A|Bi)

Example 1.8

Consider an example of quality control


Quality inspectors a, b, and c check three different products in a factory, which
are 20%, 35%, and 45% of the total production, respectively. Inspectors may
make mistakes.
Suppose the probabilities of mistakes made by inspectors a, b, and c are 5%, 7%,
and 6%, respectively. Find the probability that a product from this factory is defective.
Let

Ba = {products passed by inspector a}


Bb = {products passed by inspector b}
Bc = {products passed by inspector c}
A = {a defective product taken from those that passed inspection}

We see that

A = A ∩ Ba + A ∩ Bb + A ∩ Bc

and
[A ∩ Ba] ∩ [A ∩ Bb] = Φ, [A ∩ Ba] ∩ [A ∩ Bc] = Φ, [A ∩ Bb] ∩ [A ∩ Bc] = Φ
Known

P(Ba) = 20%
P(Bb) = 35%
P(Bc) = 45%

P(A|Ba) = 5%
P(A|Bb) = 7%
P(A|Bc) = 6%

we can calculate

P(A) = 20% (5%) + 35% (7%) + 45% (6%) = 6.15%

The essence of total probability is (i) to find P(A), event A must be associated with
a group of mutually exclusive events Bi; and (ii) this group of disjoint events B1,
B2, … Bn must be identified, with

Bi ∩ Bj = Φ (1 ≤ i < j ≤ n) (1.50)

1.2.3.5 Bayes’ Formula (Thomas Bayes, 1701–1761)


If B1, B2, … and Bn are mutually exclusive (see Equation 1.50) and P(Bi) > 0, and,
in addition, A is included in the union of the Bi's, that is,

A ⊂ ∑_{i=1}^n Bi (1.51)

then we have Bayes' formula, described in Equation 1.52:

P(Bi|A) = P(Bi) P(A|Bi) / ∑_{j=1}^n P(Bj) P(A|Bj) (1.52)

Example 1.9

In Example 1.8, one product is found to be defective, that is, with the condition “|A,”
the question is, which inspector is more likely to be responsible? We have

P(Ba|A) = P(Ba) P(A|Ba)/[P(Ba) P(A|Ba) + P(Bb) P(A|Bb)+ P(Bc) P(A|Bc)]

Therefore, we have

[P(Ba) P(A|Ba) + P(Bb) P(A|Bb) + P(Bc) P(A|Bc)] = P(A) = 6.15%

P(Ba|A) = P(Ba) P(A|Ba)/[6.15%] = 20% (5%)/[6.15%] = 16.26%

P(Bb|A) = P(Bb) P(A|Bb)/[6.15%] = 35% (7%)/[6.15%] = 39.84%

P(Bc|A) = P(Bc) P(A|Bc)/[6.15%] = 45% (6%)/[6.15%] = 43.90%

The essence of Bayes’ theorem is

1. A has already occurred; we check the probability of Bi, that is, the probability
of the event “Bi|A”
2. We also need to find the group of disjoint events B1, B2, … Bn (see
Equation 1.50)
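The arithmetic of Examples 1.8 and 1.9 can be reproduced with the minimal Python sketch below, which applies the total probability formula (Equation 1.49) and Bayes' formula (Equation 1.52) to the numbers given above.

```python
# Quality-control example: total probability and Bayes' formula.
priors = {"a": 0.20, "b": 0.35, "c": 0.45}    # P(B_i): shares of production
mistake = {"a": 0.05, "b": 0.07, "c": 0.06}   # P(A|B_i): inspector error rates

# Equation 1.49: P(A) = sum_i P(B_i) P(A|B_i)
p_defective = sum(priors[k] * mistake[k] for k in priors)
print(f"P(A) = {p_defective:.4f}")            # 0.0615

# Equation 1.52: P(B_i|A) = P(B_i) P(A|B_i) / P(A)
for k in priors:
    posterior = priors[k] * mistake[k] / p_defective
    print(f"P(B{k}|A) = {posterior:.4f}")     # 0.1626, 0.3984, 0.4390
```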

1.2.4 Engineering Examples
Now, let us consider certain engineering examples of multiple occurrences.

1.2.4.1 Additive Rules
Consider two failure modes A and B that are treated as double extreme events to a
single system, say a car or a bridge,

P(A ∪ B) = P(A) + P(B) − P(A)P(B),

if A and B are independent (cf. Equation 1.34); find the failure probability of the system.



Practically, we have two or more possible cases: (1) failure modes in parallel and
(2) failure modes in series, which can be seen in Figure 1.8a and b, respectively.
Here, the terms P1, P2, … are the probabilities of occurrence of events 1, 2, …
In Figure 1.8a, these events causing possible bridge failure occur simultaneously,
for example, under the combined loads of “earthquake + vehicular collision,” etc. In
Figure 1.8b, these events occur in sequence; say, first an earthquake occurs, and
then a vehicular collision occurs.
Consider the case of combined loads on a bridge, which is listed as follows:

1. Single extreme event


First, the single case, which is omitted
2. Double extreme events
Second, we may have the following combinations:
Example of Collective Set: Combined Loads
Earthquake + vehicular collision
Earthquake + scour
Earthquake + debris flow/land slide
Earthquake + surge
Earthquake + fire
Earthquake + wind
Earthquake + vessel collision

Vehicular collision + scour


Vehicular collision + fire
Vehicular collision + wind

Scour + wind
Scour + surge
Scour + vessel collision
Scour + debris flow/land slide
Scour + fire

FIGURE 1.8  Relationship of (a) parallel and (b) series.



Debris flow/land slide + surge


Debris flow/land slide + wind
Debris flow/land slide + vessel collision

Surge + wind
Surge + vessel collision

Fire + wind
3. Triple extreme events
Third, we may have triple events and so on, which are also omitted due
to the limited space in the manuscript. In the above, each individual load
will have its own occurrence probability. Now, the practical question is,
what is the probability of the combined loads?
If we know the total sample space of all the combined loads, then based
on the theory of total probability and Bayes' formula, we can determine the
combined probability.

1.2.4.2 Multiplication Rules
If an operation can be performed in p ways, and if for each of these ways, a second
operation can be performed in q ways, then the two operations can be performed
together in pq ways.

1.2.4.3 Independent Series
An independent series of tests satisfies

1. In each test, we have only two possible results, A and Ā
2. The probabilities P(A) = p and P(Ā) = 1 − p are constant for each test
3. All the results are mutually independent; these are also called n-Bernoulli
tests (Bernoulli series, n-binomial series)

In n-Bernoulli tests, the probability that A occurs m times is expressed as

Pn(m) = C(n, m) p^m (1 − p)^(n−m) (1.53)

Here, C(n, m) is the number of combinations, given by

C(n, m) = n!/[m!(n − m)!] (1.54)
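Equations 1.53 and 1.54 are easy to check numerically; the following is a minimal Python sketch (the parameter values are arbitrary assumptions for illustration).

```python
from math import comb

def bernoulli_test_prob(n, m, p):
    """P_n(m) = C(n, m) * p**m * (1 - p)**(n - m); Equations 1.53 and 1.54."""
    return comb(n, m) * p**m * (1 - p)**(n - m)

print(bernoulli_test_prob(3, 1, 0.75))    # 0.140625; cf. Example 1.12 below
# The probabilities over m = 0, 1, ..., n sum to 1, as a PMF must:
print(sum(bernoulli_test_prob(10, m, 0.3) for m in range(11)))   # ~1.0
```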

Example 1.10

Suppose we have six bulbs. Two configurations are shown in Figure 1.9 (six bulbs in
series) and Figure 1.10 (three sets of two bulbs in series are in parallel),
respectively. The probability of each bulb being broken is 0.2.
Question: What is the probability of total failure?
Let ωi = {the ith bulb is broken}; we have P(ωi) = 20%
FIGURE 1.9  Six bulbs in series.

FIGURE 1.10  Three sets of two bulbs in series are in parallel.

Then,

ω̄i = {the ith bulb works}, P(ω̄i) = 100% − 20% = 80%

1. Event A = ω1 + ω2 + ω3 + ω4 + ω5 + ω6; by De Morgan's laws, A is the complement of ω̄1ω̄2ω̄3ω̄4ω̄5ω̄6

P(A) = 1 − P(ω̄1ω̄2ω̄3ω̄4ω̄5ω̄6) = 1 − ∏_{i=1}^6 P(ω̄i) = 1 − 0.8^6 = 1 − 0.2621 = 0.7379

Thus, the failure probability is about 74%.

2. Event A = (ω1 + ω2)(ω3 + ω4)(ω5 + ω6)

P(ω1 + ω2) = 1 − P(ω̄1ω̄2) = 1 − 0.8² = 0.36 = P(ω3 + ω4) = P(ω5 + ω6)

P(A) = P(ω1 + ω2) P(ω3 + ω4) P(ω5 + ω6) = 0.36³ = 0.0467

Compared with case 1, the failure probability of about 5% is much smaller.
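Both results can be verified by brute-force enumeration of all 2⁶ broken/working states, as in the Python sketch below (an illustration only; direct enumeration is practical just for small systems).

```python
from itertools import product

p_broken = 0.2   # probability that any one bulb is broken

def prob(state):
    """Probability of one particular tuple of six independent bulb states."""
    out = 1.0
    for broken in state:
        out *= p_broken if broken else (1.0 - p_broken)
    return out

series_fail = parallel_fail = 0.0
for state in product([False, True], repeat=6):
    if any(state):                       # case 1: the series chain fails if any bulb is broken
        series_fail += prob(state)
    branches = [state[0] or state[1], state[2] or state[3], state[4] or state[5]]
    if all(branches):                    # case 2: fails only if all three series pairs fail
        parallel_fail += prob(state)

print(round(series_fail, 4), round(parallel_fail, 4))   # 0.7379 0.0467
```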

1.2.4.4 Return Period of Extreme Load


Now, consider the occurrence of extreme load, which is often described by the return
period.
The basic description of the return period, denoted by TR, is

TR = 1/p (1.55)

where p is the occurrence rate. For example, within a year, an earthquake with a
return period of 2500 years has the probability

p = 1/TR = 1/2500 = 0.0004 (1.56)

The probability of not seeing such an earthquake is

1 − p = 0.9996 (1.57)

The probability of seeing the earthquake in n years, denoted by pn, will be

pn = 1 − (1 − p)^n (1.58)

For example, the probability in 100 years of seeing an earthquake with a return
period of 2500 years is

p100 = 1 − (1 − 1/2500)^100 = 0.0392

The exposure or design service time is denoted by tD. Therefore,

ptD = 1 − (1 − 1/TR)^tD (1.59)

When the return period becomes a large number, the following equation is used
to approximate the probability of occurrence,

ptD = 1 − e^(−tD/TR) = 1 − e^(−p tD) (1.60)

Figure 1.11 shows this uniform distribution. The uniform probability distribution
implies that we have no reason not to use such a distribution; that is, we have no
reason to believe that the chance of seeing the event in year i is greater than in
year j. In many cases,

FIGURE 1.11  Uniform distribution of probability of occurrence in period TR.

using uniform distribution reflects the fact that we have not yet clearly understood
the nature of such events.
From Figure 1.11 and Equation 1.55, we can also see that

TR p = 1 (1.61)

However, from Equation 1.59, we can see that, when tD = TR,

pn = 1 − (1 − 1/TR)^TR < 1 (1.62)

For example, when tD = TR = 2500,

pn = 1 − (1 − 1/2500)^2500 = 0.6322 < 1 (1.63)

1.2.4.4.1 Urn Problems (Ball-Picking Games)


An urn test is a method for “virtual experiments,” in which balls of different
colors represent different events. The total
number of balls provides the sample space. By picking a specific type of ball, we can
realize the corresponding probability. Using this virtual experiment is an important
method to obtain a certain probability mass function (PMF).

Example 1.11

Suppose ten balls are in a bucket; two balls are black and eight are white (Figure 1.12).

1. For each test, pick one ball and return it to the bucket
2. Pick one ball and never return it
a. In the first test, the chance of getting a black ball is 2/10. In the second
test, we also have a chance of 2/10.

FIGURE 1.12  Ball picking.



b. In the first test, the chance of getting a black ball is, again, 2/10. In the
second test, getting a black ball will depend on the first test. The prob-
ability of picking up a specially colored ball is then a variable.
Now, suppose the chance of occurrence of a hazard follows the possibilities
described in the second ball game (picking without replacement):

pn = 1 − (1 − 1/TR)(1 − 1/(TR − 1))(1 − 1/(TR − 2)) ⋯ (1 − 1/(TR − n + 1)) (1.64)

Equation 1.64 can be further written as

pn = 1 − [(TR − 1)/TR][(TR − 2)/(TR − 1)][(TR − 3)/(TR − 2)] ⋯ [(TR − n)/(TR − n + 1)] (1.65)

The product telescopes to (TR − n)/TR; therefore, we have

pn = n/TR (1.66)
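The telescoping result of Equation 1.66 can be confirmed numerically; the following minimal sketch evaluates the without-replacement product of Equation 1.64 in exact rational arithmetic.

```python
from fractions import Fraction

def p_no_replacement(n, TR):
    """Probability of at least one occurrence in n draws without replacement
    from TR equally likely 'years' (Equation 1.64)."""
    miss = Fraction(1)
    for k in range(n):
        miss *= 1 - Fraction(1, TR - k)   # survive the (k + 1)th draw
    return 1 - miss

TR = 2500
for n in (1, 100, 2500):
    assert p_no_replacement(n, TR) == Fraction(n, TR)   # Equation 1.66
print(p_no_replacement(100, TR))                        # 1/25, that is, n/TR
```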

1.3 Random Variables
In the above, the basic concepts of probability were reviewed. Now, let us consider
random variables, based on the theory of probability.

1.3.1 Discrete Random Variables and PMF


First of all, consider the single random variable in the discrete case. Then, we will
discuss continuous variables.

1.3.1.1 Single Random Variables


That a variable x is random means that its value is uncertain or unpredictable.
Suppose we have m sets of random variables written in vector form as follows:

x1 = [x11 x12 … x1n]
x2 = [x21 x22 … x2n]
⋮ (1.67)
xm = [xm1 xm2 … xmn]

where xij means the jth element of the ith set.


It is seen that it is possible to sort all the variables xij in a special arrangement.
A one-dimensional axis can be used for this arrangement.

1.3.1.2 “Two-Dimensional” Approach
Suppose variable xij is associated with the probability of occurrence(s) written as

P(x1) = [P(x11), P(x12), … P(x1n)]
⋮ (1.68)
P(xm) = [P(xm1), P(xm2), … P(xmn)]

where P(.) stands for the probability of variable (.).


Now, on the above-mentioned axis, at each point x, we will have the probability
P(x). In this way, together with the one-dimensional arrangement of sorted values,
we obtain a second dimension representing the corresponding probabilities.
Therefore, we now have a two-dimensional approach.

1.3.1.3 Probability Mass Function


Specifically, a variable set xi is sorted in the following order
xi1 < xi2 < … < xin (1.69)
we can use integers to denote the position of xij, namely, j is an integer. For example,

xi1, xi2, xi3 … xin


1 2 3 … n

Here, the index j starts at 1, and so on.


Another example is to let j start from zero, namely,

xi0, xi1, xi2 … xin


0 1 2 … n
The corresponding distribution of p(xij)’s is referred to as the probability mass
function (PMF).
In the following, let us consider some typical PMFs. By comparing these closed-
form PMFs with one’s real-world problems, one can figure out one’s specific PMF.
To approximately formulate a specific PMF (and PDF in Section 1.3.2), one can use
the technique of curve fitting; an empirical PMF can also be built directly from
data, as sketched below.
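As a simple illustration (a hypothetical sketch; the sample values below are invented for demonstration):

```python
from collections import Counter

# Build an empirical PMF: sort the distinct values (the one-dimensional
# arrangement of Section 1.3.1.1) and attach a normalized frequency to each
# (the second dimension). The sample data are invented for illustration.
samples = [2, 3, 3, 1, 2, 3, 4, 2, 3, 1, 2, 2]
counts = Counter(samples)
n = len(samples)

pmf = {value: counts[value] / n for value in sorted(counts)}
print(pmf)                  # approximately {1: 0.167, 2: 0.417, 3: 0.333, 4: 0.083}
print(sum(pmf.values()))    # 1.0, as any PMF must satisfy
```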

1.3.1.4 Bernoulli Distribution (0–1 Distribution) (Jacob Bernoulli, 1655–1705)


A "true/false" or "yes/no" test, a special two-outcome case, yields a Bernoulli distribution. For example, if the occurrence of a binary value, namely 1 or 0, is random, we have such a distribution. The occurs/does-not-occur probability can be written as

$P_K(k) = \begin{cases} p, & k = 1 \\ 1 - p, & k = 0 \\ 0, & \text{elsewhere} \end{cases}$ (1.70)

FIGURE 1.13  0–1 Distribution.

It is seen that the sum of these two cases is p + (1 − p) + 0 = 1.


Let us consider the sample space S versus the space U of all real numbers.
Suppose in Equation 1.70,

U = real numbers (or nonnegative real) (1.71)

The sample space of a Bernoulli variable is

S = {0, 1} (1.72)

We can also let U = S = {0, 1}; in this case, Equation 1.70 is replaced by

$P_K(k) = \begin{cases} p, & k = 1 \\ 1 - p, & k = 0 \end{cases}$ (1.73)

The distribution can be shown in Figure 1.13.

1.3.1.5 Binomial Distribution
In n independent trials, counting the number m of occurrences of an event A with Bernoulli distribution (P(A) = p) is called a Bernoulli test (see Section 1.2.4.3), which has sample space S given by

S = {0, 1, 2, 3, 4, …n} (1.74)

The corresponding PMF is also referred to as a binomial distribution, given by

$P_n(m) = \begin{cases} C_n^m\, p^m (1-p)^{n-m}, & m = 0, 1, 2, 3, \ldots, n \\ 0, & \text{elsewhere} \end{cases}$ (1.75)


Figure 1.14 shows the PMF with different values of p.


FIGURE 1.14  Binomial distributions. (PMF versus m for p = 0.1, 0.3, and 0.7.)

Example 1.12: Mendel Model

A couple, a father and a mother, both have mixed genes. They have three children.
Find the chance that exactly one of them shows the dominant gene.
Let the dominant gene be denoted by d and the recessive gene is denoted by r.
We see that dd is the pure dominant, rr is the pure recessive, and rd is the mixed
gene. Therefore,

dd = 1/4, rr = 1/4, rd = 1/2, the chance of d: p = (1/4 + 1/2) = 3/4

Then, the chance can be calculated as

$P_3(1) = C_3^1\, p^1 (1-p)^{3-1} = 0.1406$
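
As a quick check of this result, a short Python computation of the binomial PMF (Equation 1.75) is sketched below; the exact value 0.140625 rounds to the 0.1406 quoted above.

from math import comb

p = 3 / 4            # chance that a child shows the dominant gene
n, m = 3, 1          # three children, exactly one dominant

P = comb(n, m) * p**m * (1 - p)**(n - m)   # Equation 1.75
print(P)                                    # 0.140625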


1.3.1.6 Poisson Distribution (Simeon Denis Poisson, 1781–1840)


The sample space of a Poisson variable is a nonnegative integer, given by

S = {0, 1, 2, 3, 4, …} (1.76)

The distribution is given by

$P_\Lambda(k) = \begin{cases} \dfrac{\lambda^k e^{-\lambda}}{k!}, & k = 0, 1, 2, 3, \ldots \\ 0, & \text{elsewhere} \end{cases}$ (1.77)

Figure 1.15 plots the PMF with different values of parameter λ.


It is seen that

$\sum_{k=0}^{\infty} P_\Lambda(k) = e^{-\lambda} \sum_{k=0}^{\infty} \frac{\lambda^k}{k!} = e^{-\lambda} e^{\lambda} = 1$ (1.78)

FIGURE 1.15  Poisson distributions. (PMF versus k for λ = 2.5, 5, and 10.)

Example 1.13

The number of times an oscillating mass travels across a certain level in a special
time interval (0,t) can be described as

$P_N(n) = \begin{cases} \dfrac{(\lambda t)^n e^{-\lambda t}}{n!}, & n = 0, 1, 2, 3, \ldots \\ 0, & \text{elsewhere} \end{cases}$ (1.79)

Here the notation (0,t), or more generally (a,b), stands for an open interval
excluding its two ends 0 and t or a and b; whereas [a b] is a closed interval includ-
ing a and b; such as [0 1] in Equation 1.84a. These two notations will be used
throughout this book.

1.3.1.7 Poisson Approximation
In a Bernoulli test, if n is sufficiently large and p is sufficiently small, we can use
Poisson distribution to approximately calculate the corresponding probability, that is,

$C_n^k\, p^k (1-p)^{n-k} \approx \frac{\lambda^k e^{-\lambda}}{k!}$ (1.80)

where
λ = np (1.81)

Example 1.14

In a workshop, there are 100 instruments working independently. The probability that any one instrument fails is 0.01, and one technician can repair only one instrument at a time.

Question (1): How many technicians are needed to ensure that the probability
of an instrument not being fixed is at most 0.005?
Question (2): If one technician handles 20 instruments, find the probability
of an instrument not being fixed.
Question (3): If three technicians are responsible for 80 instruments, find
the probability of an instrument not being fixed.

Answer:
1. Denote the number of broken instruments by x; then x follows the binomial distribution P_N(x) = P_100(x) with probability p = 0.01.
Now, assume y technicians are needed; "x > y" means that x − y instruments cannot be fixed. Thus, what we need is P(x > y) ≤ 0.005:

$P(x > y) = \sum_{k=y+1}^{100} C_{100}^k\, p^k (1-p)^{100-k} = \sum_{k=y+1}^{100} C_{100}^k (0.01)^k (0.99)^{100-k} \approx \sum_{k=y+1}^{100} \frac{(1)^k e^{-1}}{k!}$

Solving

$\sum_{k=y+1}^{100} \frac{(1)^k e^{-1}}{k!} \le 0.005$

gives y = 5 as a safe choice. In fact, with y = 5, even the probability that five or more instruments fail simultaneously is small:

$P(x \ge 5) = \sum_{k=5}^{100} C_{100}^k\, p^k (1-p)^{100-k} = 0.0034 < 0.005$

and, using the Poisson approximation,

$P(x \ge 5) = \sum_{k=5}^{100} \frac{(1)^k e^{-1}}{k!} = 0.0037 < 0.005$


2. In a group of 20 instruments, if only one instrument is broken, the technician can fix it; we thus need the probability of "more than one," that is, P(x ≥ 2). Using the Poisson approximation with λ = np = 20(0.01) = 0.2 to estimate P(0 ≤ x ≤ 1):

$P(x \ge 2) = 1 - P(0 \le x \le 1) \approx 1 - \sum_{k=0}^{1} \frac{(0.2)^k e^{-0.2}}{k!} = 0.0176$

Note that, if the binomial distribution is used directly,

$P(x \ge 2) = 1 - P(0 \le x \le 1) = 1 - \sum_{k=0}^{1} C_{20}^k\, p^k (1-p)^{20-k} = 0.0169$

3. Consider P(x ≥ 4); here λ = np = 80(0.01) = 0.8. Using the Poisson approximation:

$P(x \ge 4) = 1 - P(0 \le x \le 3) \approx 1 - \sum_{k=0}^{3} \frac{(0.8)^k e^{-0.8}}{k!} = 0.0091 < 0.0176$

Procedure for solving the problem:

a. Denote x as the total number of broken instruments and y as the number of technicians → starting point
b. Consider P(x > y), etc. → describing the corresponding probability
c. Find the formula for P(x > y), etc.
d. Compute P(x > y), etc.
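
To verify the numbers in this example, the following Python sketch evaluates the binomial tail probabilities and their Poisson approximations directly; the helper functions are our own, written only for illustration.

from math import comb, exp, factorial

def binom_tail(n, p, k0):
    """P(X >= k0) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k0, n + 1))

def poisson_tail(lam, k0):
    """P(X >= k0) for X ~ Poisson(lam)."""
    return 1 - sum(lam**k * exp(-lam) / factorial(k) for k in range(k0))

# (1) five or more failures among 100 instruments, p = 0.01 (lambda = 1)
print(binom_tail(100, 0.01, 5), poisson_tail(1.0, 5))   # ~0.0034, ~0.0037
# (2) two or more failures among 20 instruments (lambda = 0.2)
print(binom_tail(20, 0.01, 2), poisson_tail(0.2, 2))    # ~0.0169, ~0.0176
# (3) four or more failures among 80 instruments (lambda = 0.8)
print(poisson_tail(0.8, 4))                              # ~0.0091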

1.3.1.8 Summary of PMF PN(n)


In Sections 1.3.1.4 through 1.3.1.7, we described several important PMFs. Now, we consider their general characteristics.

1.3.1.8.1 Essence of PMF
The reason we study the PMF is that it lets us use a two-dimensional deterministic description to treat one-dimensional random variables. That is, we first sort the random variables, using integers to denote the "cases"; in so doing, we arrange the random variables deterministically along the x axis.
Then, we find the probability p(x), using a fixed value to denote the probability at position j; in so doing, we turn the random values into deterministic values on the y axis. That is, the introduction of the PMF allows us to use a two-dimensional deterministic relationship between the variables and their corresponding probabilities to treat the original one-dimensional random variables. This is the essence of the PMF.

1.3.1.8.2 Basic Property
The basic property of PMF is the basic property of generic probability, that is,

0 ≤ PN(n) ≤ 1 (1.82)

ΣPN(n) = 1 (1.83)

Therefore, any function that satisfies Equations 1.82 and 1.83 is a probability dis-
tribution function for discrete random variables.

1.3.2 Continuous Random Variables and PDF


In Section 1.3.1, we discussed discrete random variables, which can be extended to
continuous variables as follows.

1.3.2.1 Continuous Random Variables


1.3.2.1.1 Discrete versus Continuous
In many cases, variables are not discrete. For example, a variable may take any real number in a given range, instead of integers only.

The values of variables can be used to denote:

Discrete: 0, 1, 2, 3, 4 … used to denote the jth cases


Continuous: x, which can be used to denote virtually any analog values

1.3.2.1.2 Sample Space
Compare the sample space. It is seen that

Discrete: Sample space ≠ space


Continuous: Sample space = space

Example 1.15

Consider the following continuous variables

{real number in the interval [0 1]} (1.84a)

{all nonnegative real numbers} (1.84b)

{all real numbers} (1.84c)

1.3.2.2 Probability Density Function


Similar to discrete variables, the random continuous variable can also have distri-
butions. Mathematically, the expressions of discrete and continuous variables are
slightly different.

1.3.2.2.1 Mass versus Density


Consider the 0–1 distribution as an example: at 1 (or 0), a value of p (or 1 − p) is assigned. A continuous variable, however, can take infinitely many values; if each value were assigned a finite probability, the summation of these values would become infinite. Therefore, we use the concept of density. In physics, the integration of density gives mass; similarly, the integration of f(x) along a line gives an area, which can be written as

$A(a < X \le b) = \int_a^b f(x)\,dx$ (1.85)

1.3.2.2.2 Probability
The idea implied in Equation 1.85 can be used for the distribution function of con-
tinuous variables. Suppose the distribution can be expressed as a PDF f X(x) at loca-
tion x. Consider the corresponding probability, which may also be an integral given
by (see Figure 1.16)

$P(a < X \le b) = \int_a^b f_X(x)\,dx$ (1.86)
FIGURE 1.16  Probability density function.

1.3.2.2.3 Axiom Requirement
To realize that Equation 1.86 is indeed a probability, let us check the axiom require-
ment, that is,

1. $f_X(x) \ge 0$ (1.87)

and

2. $\int_{-\infty}^{\infty} f_X(x)\,dx = 1$ (1.88)

For the sample space in the interval [a b] (e.g., see Equation 1.84a), we alternatively have

$\int_a^b f_X(x)\,dx = 1$ (1.89)

Because the density function satisfies the basic axioms, its integral is indeed a probability; we thus call f_X(x) the probability density function (PDF).

1.3.2.2.4 Probability Density versus Probability Mass


Now, let us compare the PDF and the PMF. In the continuous case, we have

$P(x < X \le x + dx) = \int_x^{x+dx} f_X(u)\,du = f_X(x)\,dx$ (1.90a)

so that, playing the role of the PMF value in the discrete case, we have

$P(X \approx x) = f_X(x)\,dx$ (1.90b)

From now on, we will use both concepts of PMF and PDF as fundamental
approaches to deal with random variables.

1.3.2.2.5 Property of PDF
Similar to PMF, PDF has basic properties.
First, f_X < 0 is impossible. However, f_X > 1 is entirely possible. On the other hand, f_X → ∞ cannot occur. Therefore, the basic property of
PDF f X(x) can be expressed as

0 ≤ f X(x) < ∞ (1.91)

In the following, let us consider some important PDFs. Perhaps the normal distribution is the most important one; however, it will be discussed separately in Section 1.3.5.

1.3.2.3 Uniform Distribution
The uniform distribution has PDF given by

 1
 , a<x≤b
fU ( x ) =  b − a (1.92)
 0, elsewhere

For example, the PDF of the phase angle of a sinusoidal signal is (see Figure 1.17)

$f_\Theta(\theta) = \begin{cases} \dfrac{1}{2\pi}, & 0 < \theta \le 2\pi \\ 0, & \text{elsewhere} \end{cases}$

Example 1.16

Buses arrive at a bus station at 8:00, 8:15, and 8:30. A passenger comes to the station at a random time within the 30-minute interval from 8:00 to 8:30. Find the probability that (1) he can take a bus within 5 minutes; (2) he must wait for more than 10 minutes.

1. The arrival time is uniformly distributed with density 1/30 per minute. To wait less than 5 minutes, he must arrive between 8:10 and 8:15 or between 8:25 and 8:30:

FIGURE 1.17  Uniform distribution.



$P[(8{:}10 < x < 8{:}15) \cup (8{:}25 < x < 8{:}30)] = P(8{:}10 < x < 8{:}15) + P(8{:}25 < x < 8{:}30) = \int_{10}^{15} \frac{1}{30}\,dx + \int_{25}^{30} \frac{1}{30}\,dx = \frac{1}{3}$


2. Only if he arrives between 8:00 and 8:05 or between 8:15 and 8:20 does he need to wait more than 10 minutes:

$P[(8{:}00 < x < 8{:}05) \cup (8{:}15 < x < 8{:}20)] = P(8{:}00 < x < 8{:}05) + P(8{:}15 < x < 8{:}20) = \int_{0}^{5} \frac{1}{30}\,dx + \int_{15}^{20} \frac{1}{30}\,dx = \frac{1}{3}$

Again, the reason for employing the uniform distribution is as follows: we cannot find any reason that the distribution is not uniform. For example, between 8:00 and 8:30, we do not know any pattern or rule (if one exists) governing when the passenger arrives at the station.
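
A brief Monte Carlo sketch (an illustration only, with an arbitrary trial count) confirms both answers: with buses at 8:00, 8:15, and 8:30 and a uniformly random arrival, each probability is about 1/3.

import random

trials = 1_000_000
le5 = gt10 = 0
for _ in range(trials):
    t = random.uniform(0, 30)             # minutes after 8:00; buses at t = 0, 15, 30
    wait = 15 - t if t <= 15 else 30 - t  # time until the next bus
    le5 += wait <= 5
    gt10 += wait > 10
print(le5 / trials, gt10 / trials)        # both close to 1/3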

1.3.2.4 Exponential Distribution
The exponential distribution has PDF given by

$f_\Lambda(x) = \lambda e^{-\lambda x}, \quad x > 0$ (1.93)

where λ is a positive constant.

Example 1.17

The total number of months a car runs without a mechanical problem is a discrete variable, but the total running time is a continuous variable, with PDF (see Figure 1.18)

FIGURE 1.18  Exponential distribution. (PDF versus time.)



$f_\Lambda(x) = \lambda e^{-\frac{x}{120}}, \quad x > 0$

Example 1.18

Consider the car described in Example 1.17.

Question (1): Before the occurrence of a problem, what is the probability that the car runs between 5 and 8 years?
Question (2): Find the probability that the car runs for less than 120 months.

1. Because

$\int_{-\infty}^{\infty} f_\Lambda(x)\,dx = 1 \;\rightarrow\; 1 = \int_0^{\infty} \lambda e^{-\frac{x}{120}}\,dx = 120\lambda$

we have

$\lambda = \frac{1}{120}$

Thus,

$P_\Lambda(60 < x < 96) = \int_{60}^{96} \frac{1}{120}\, e^{-\frac{x}{120}}\,dx = -\left(e^{-\frac{96}{120}} - e^{-\frac{60}{120}}\right) = 0.1572$

2. We also have

$P_\Lambda(x < 120) = \int_0^{120} \frac{1}{120}\, e^{-\frac{x}{120}}\,dx = -\left(e^{-1} - e^{0}\right) = 0.63$
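
The two probabilities above, and the "memoryless" property discussed in Example 1.19 below, can be checked with a few lines of Python using the exponential CDF; this is an illustrative sketch only.

from math import exp

lam = 1 / 120                            # failure rate per month
F = lambda x: 1 - exp(-lam * x)          # exponential CDF
S = lambda x: exp(-lam * x)              # survival function P(X > x)

print(F(96) - F(60))                     # P(60 < x < 96)  -> 0.1572
print(F(120))                            # P(x < 120)      -> 0.6321
# "memoryless" check: P(X > 60 + 24 | X > 60) equals P(X > 24)
print(S(60 + 24) / S(60), S(24))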

Compare with Poisson distribution for discrete variables (see Equation 1.77):

$P_\Lambda(k) = \frac{\lambda^k}{k!}\, e^{-\lambda}, \quad k = 0, 1, 2, 3, \ldots$
where integer k denotes the kth event.
It is generally acknowledged that the number of telephone calls, the number of passengers at a bus station, and the number of airplanes landing at an airport during a period of given length can be modeled by discrete Poisson distributions. Now, if we count the number x of events occurring during a period (0, t), then x ~ P_Λ(λt). This example shows the essential difference between discrete and continuous distributions. The reader may consider what the difference is in terms of the variables in this example.

Example 1.19: Waiting Period

Consider a certain machine installed and "waiting" to be used for special jobs. In the period (0, t), the number x of occurrences of the event "being used" follows the discrete Poisson distribution P_N(λt) (see Equation 1.79). Find the distribution function of the waiting time τ for the second job.

Assume the first job arrives at time t = 0. Consider P(τ > t): first, since τ ≥ 0, for t < 0 we have P(τ ≤ t) = 0. When t > 0, the event {τ > t} = {no jobs in the duration (0, t)} = {x = 0}. Thus,

$P(\tau > t) = P(x = 0) = \frac{(\lambda t)^0}{0!}\, e^{-\lambda t} = e^{-\lambda t}$

Furthermore,

$P(\tau \le t) = 1 - P(\tau > t) = 1 - e^{-\lambda t}$

which is the CDF of the waiting time (see Figure 1.19). The density function can be found as

$f_\Lambda(t) = \lim_{\Delta t \to 0} \frac{\Delta P}{\Delta t} = \lim_{\Delta t \to 0} \frac{P(\tau \le t + \Delta t) - P(\tau \le t)}{\Delta t} = \frac{d}{dt} P(\tau \le t) = \lambda e^{-\lambda t}$

That is, the waiting time follows an exponential distribution.


Note that an important property of the exponential distribution is that it is "memoryless." That is, for any τ > 0 and t > 0, we have

$P(\xi > \tau + t \mid \xi > \tau) = \frac{P(\xi > \tau + t)}{P(\xi > \tau)} = \frac{e^{-\lambda(\tau + t)}}{e^{-\lambda \tau}} = e^{-\lambda t} = P(\xi > t)$

Let ξ denote the life span of a bridge (a car, a machine, and so on). If the bridge has already lasted τ years, the probability distribution of its lasting another t years has nothing to do with the first τ years; in other words, the first τ years are not "memorized." This is a distribution of "always young."
Because real-world buildings, bridges, machines, airplanes, and cars are not "always young," care must be taken when using exponential distributions.

FIGURE 1.19  Waiting time. (CDF of the exponential distribution versus time.)



Recall that Equation 1.60,

$p_{t_D} = 1 - e^{-\frac{t_D}{T_R}}$

is used to approximate the probability of occurrence of an extreme event in year $t_D$, when the return period of such an event is $T_R$ (see Equation 1.59). The occurrence of such an event should be "memoryless."

1.3.2.5 Rayleigh Distribution (Lord Rayleigh, 1842–1919)


The Rayleigh distribution has PDF given by (see Figure 1.20)

$f_H(h) = \frac{h}{\sigma^2}\, e^{-\frac{1}{2}\left(\frac{h}{\sigma}\right)^2}, \quad h > 0$ (1.94)

Note that the symbol σ in Equation 1.94 is not the standard deviation of the Rayleigh distribution.

1.3.3 Cumulative Distribution Functions


Now, from the PDF, we can calculate the cumulative probabilities, which are dis-
cussed as follows.

1.3.3.1 Probability of Cumulative Event


In the above example of “waiting time,” namely,

P(τ ≤ t) = 1 − P(τ > t) = 1 − e −λt (1.95)


Consider the event τ ≤ t (see Figure 1.21)
we have

τ ≤ t = (t1 < t |τ=t1 ) ∪ (t2 < t |τ=t2 ) ∪ (t3 < t |τ=t3 )  (1.96)

FIGURE 1.20  Rayleigh distribution. (PDF versus h for σ = 0.5 and σ = 1.0.)



FIGURE 1.21  Event τ ≤ t.

P(τ ≤ t ) = P[(t1 < t |τ=t1 ) ∪ (t2 < t |τ=t2 ) ∪ (t3 < t |τ=t3 ) ] (1.97)

Note that event (t1 < t |τ=t1 ) and event (t2 < t |τ=t2 ) are not mutually exclusive so
that using Equation 1.97 to calculate the probability P(τ ≤ t) is extremely challenging.
However, from another angle,

$P(\tau \le t) = \lim_{\Delta t \to 0} \left[ f_\Lambda(\Delta t)\Delta t + f_\Lambda(2\Delta t)\Delta t + f_\Lambda(3\Delta t)\Delta t + \cdots \right]$ (1.98)

1.3.3.2 Cumulative Distribution Function (CDF)


From the previous discussion, we can further have the cumulative distribution
function.

1.3.3.2.1 Definition of CDF
Equation 1.98 computes the probability of a cumulative event; as a function of t, this probability is referred to as the cumulative distribution function, denoted by F(t) and written as

$F_\Lambda(t) = \int_0^t f_\Lambda(t)\,dt$

In the above equation, the variable "τ" does not appear. Generally, we can write

$F_X(x) = P(X \le x) = \int_{-\infty}^{x} f_X(x)\,dx$ (1.99)

1.3.3.2.2 Property of Cumulative Distribution Function


Now, consider the properties of CDF.

1. Range of CDF

0 ≤ FX(x) ≤ 1 (1.100)

2. Nondecreasing:
if x2 ≥ x1, then

FX(x2) ≥ FX(x1) (1.101)



1.3.3.2.3 Relation between PDF and Cumulative Distribution Function

$f_X(x) = \frac{dF_X(x)}{dx}$ (1.102)

1.3.3.3 Certain Applications of PDF and CDF


We now review the concepts of PDF and CDF and consider certain applications in
the following.

1.3.3.3.1 Probability Computation
If the PDF is known, we can find the corresponding CDF through integration.

Example 1.20

Given earthquake records in the past 250 years, suppose we have the following
data of PGA as

Day d1 Mon m1 Yr y1, earthquake No. 1 with PGA1


Day d2 Mon m2 Yr y2, earthquake No. 2 with PGA 2
.....
Day dn Mon mn Yr yn, earthquake No. n with PGA n

Generate exceedance peak ground acceleration values (Table 1.4).


Using the data listed in Table 1.4, we can also obtain Figure 1.22. Furthermore, by using Table 1.4, we can also generate Table 1.5 (the nonexceedance PGA).

Table 1.4
Exceedance Peak Ground Acceleration (U.S. Geological Survey)

                          PGA > 0.01 g    PGA > 0.02 g    ...
Frequency (probability)   p_0.01          p_0.02          ...

FIGURE 1.22  Frequency of PGA.


Table 1.5
Nonexceedance Peak Ground Acceleration

                          PGA < 0.01 g    PGA < 0.02 g       ...
Frequency (probability)   p_0.01          p_0.01 + p_0.02    ...

1.3.3.3.2 Find PDF from Cumulative Distribution Function


With the CDF known, we can also find the corresponding PDF through differentiation.

1.3.3.3.3 Curve Fit
To identify a distribution from statistical data, we can use curve-fitting techniques. Generally, fitting the CDF is easier than fitting the PDF.

1.3.4 Central Tendency and Dispersion


1.3.4.1 Statistical Expectations and Moments
Central tendency is the observed behavior of probability distributions when the collection of random variables follows a certain pattern. This kind of observation is also important in understanding the nature of random processes.
The following are statistical models of these observations.

1.3.4.2 Central Tendency, Mean Value


1.3.4.2.1 Discrete Variable Set X
Consider the mean value of a set of discrete variables, which can be written as

$\mu_x = \sum_{\text{all } x_i} f_X(x_i)\, x_i$ (1.103)

Recall a set

x = {x1, x2, …xn} (1.104)

The mean value of x, denoted by $\bar{x}$, is given by

$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i = \sum_{i=1}^{n} \frac{1}{n}\, x_i$ (1.105)

In set X, each xi has an equal chance of being considered; the weighting function is 1/n. We thus take the probability of any value xi showing up to be 1/n, that is,

$f_X(x) = \frac{1}{n}$ (1.106)

1.3.4.2.2 Continuous Variable Set X


Similarly, consider a set of continuous variables. The mean value can be written as

$\mu_x = \int_{-\infty}^{\infty} f_X(x)\, x\,dx$ (1.107)

1.3.4.3 Variation, Variance, Standard Deviation,


and Coefficient of Variation
Furthermore, consider the following measurement of dispersion.

1.3.4.3.1 Variance of Discrete Variable Set X, $\sigma_X^2$
The variance is defined as

$\sigma_X^2 = \sum_{\text{all } x_i} f_X(x_i)(x_i - \mu_X)^2$ (1.108)

1.3.4.3.2 Variance of Continuous Variable Set X, $\sigma_X^2$
For continuous variables, we have

$\sigma_X^2 = \int_{-\infty}^{\infty} f_X(x)(x - \mu_X)^2\,dx$ (1.109)

See Figure 1.23 for geometric description of mean and variance. Mean value is
related to the first moment, centroid, or the center of mass. Variance is related to the
second moment or moment of inertia.

1.3.4.3.3 Standard Deviation σX
The standard deviation is the square root of the variance, that is,

$\sigma_X = \sqrt{\sigma_X^2}$ (1.110)

FIGURE 1.23  Moment and centroid, mean and variance.


Note that both the variance and the standard deviation can be used to describe the dispersion of a set of variables. However, the standard deviation has the same units as the variables themselves.

1.3.4.3.4 Coefficient of Variation CX
The coefficient of variation is the ratio of the standard deviation to the mean value:

$C_X = \frac{\sigma_X}{\mu_X}$ (1.111)
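
As an illustration of Equations 1.105 through 1.111, the following Python sketch computes these statistics for a small, arbitrary sample, using the equal weighting f_X = 1/n.

import numpy as np

x = np.array([2.1, 2.5, 1.9, 2.3, 2.2, 2.6])    # illustrative sample

mu = x.mean()                     # mean, Equation 1.105
var = ((x - mu) ** 2).mean()      # variance, Equation 1.108 with f_X = 1/n
sigma = np.sqrt(var)              # standard deviation, Equation 1.110
C = sigma / mu                    # coefficient of variation, Equation 1.111
print(mu, var, sigma, C)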

1.3.4.4 Expected Values
Given a function g(X) of a random variable, the expected value, denoted by E[g(X)], is written as

$E[g(X)] = \int_{-\infty}^{\infty} f_X(x)\, g(x)\,dx$ (1.112)

It is seen that the mean is the expected value of the set X.


μx = E[X] (1.113)
Thus, the expected value of a function of random variables, g(X), namely E[g(X)], can also be seen as a special operation on g(X). It can be shown that

$\sigma_X^2 = E[(X - \mu_X)^2]$ (1.114)

Furthermore, we have

$\sigma_X^2 = E[X^2] - \mu_X^2$ (1.115)

1.3.4.5 Linearity of Expected Values


One of the important properties of the operation of E[(.)] is linearity, that is,

E[αg(X) + βh(X)] = αE[g(X)] + βE[h(X)] (1.116)

where g(X) and h(X) are functions of X; and α and β are deterministic scalars.

Example 1.21

Suppose Y is a linear function of X, given by

Y = a X + b (1.117)

where a and b are deterministic scalars; we have

μY = E[aX + b] = a μX + b (1.118)

1.3.5 Normal Random Distributions


In the preceding sections, we introduced several probability distributions. In the following, let us consider a particular distribution that plays an important role in understanding both random variables and random processes.

1.3.5.1 Standardized Variables Z
First, consider a standardized variable defined as

$Z = \frac{X - \mu_x}{\sigma_x}$ (1.119)

The mean of Z is given by

$\mu_Z = E\left[\frac{X - \mu_X}{\sigma_X}\right] = \frac{1}{\sigma_X}\, E[X - \mu_X] = \frac{1}{\sigma_X}\{E[X] - \mu_X\} = 0$ (1.120)

and the variance of Z can be expressed as

$\sigma_Z^2 = E\left[\left(\frac{X - \mu_X}{\sigma_X}\right)^2\right] = \frac{1}{\sigma_X^2}\, E[(X - \mu_X)^2] = \frac{\sigma_X^2}{\sigma_X^2} = 1$ (1.121)

1.3.5.2 Gaussian (Normal) Random Variables (Carl F. Gauss, 1777–1855)


The normal random variables are denoted as

X ~ N(μX, σX) (1.122)

In the following, let us examine the distribution and additional important proper-
ties of normal random variables.

1.3.5.3 PDF of Normal Distribution


1.3.5.3.1 General Variable
The PDF of general normal random variables is given as

$f_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma_X}\, e^{-\frac{(x - \mu_X)^2}{2\sigma_X^2}}, \quad -\infty < x < \infty$ (1.123)

which is plotted in Figure 1.24.

1.3.5.3.2 Standardized Variable (See Equation 1.119)


For standardized variables, we have

$f_Z(z) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{z^2}{2}}, \quad -\infty < z < \infty$ (1.124)

FIGURE 1.24  PDF of normal distribution with μX = 1. (Curves for σ = 0.2, 0.4, and 0.8.)

1.3.5.4 Cumulative Distribution Function of Normal Distribution


Now, consider the corresponding CDF.

1.3.5.4.1 General Variable
For general variables, we have

$F_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma_X} \int_{-\infty}^{x} e^{-\frac{(\xi - \mu_X)^2}{2\sigma_X^2}}\,d\xi$ (1.125)

Figure 1.25 plots several CDFs with different standard deviations.

FIGURE 1.25  CDF of normal distribution with μX = 1. (Curves for σ = 0.2, 0.4, and 0.8.)

1.3.5.4.2 Standardized Variable
For standardized variables, we have

$F_Z(z) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{z} e^{-\frac{\xi^2}{2}}\,d\xi$ (1.126)

This CDF is very important, and is specially denoted as


Φ(z) = FZ (z) (1.127)
Figure 1.26 plots both PDF and CDF for standardized normal variables.

Example 1.22

From a hotel to the Buffalo airport, one can take a local road or the expressway. The time needed via the local road is t_L ~ N(26, 5); the time needed via the expressway is t_E ~ N(31, 2) (in minutes).

Question (1): If one only has 30 minutes, which way is better (i.e., gives less chance of missing the flight)?
Question (2): If one has 35 minutes, which way is better?

1. Denote the time needed as t. Via the local road,

$P(t > 30) = 1 - P(t \le 30) = 1 - \frac{1}{\sqrt{2\pi}\,(5)} \int_{-\infty}^{30} e^{-\frac{(\xi - 26)^2}{2(5^2)}}\,d\xi = 1 - 0.7881 = 0.21$

Via the expressway,

$P(t > 30) = 1 - P(t \le 30) = 1 - \frac{1}{\sqrt{2\pi}\,(2)} \int_{-\infty}^{30} e^{-\frac{(\xi - 31)^2}{2(2^2)}}\,d\xi = 0.6915$

Thus, using the local road gives a smaller probability of missing the flight.

FIGURE 1.26  PDF and CDF for standardized normal variables.



Note that one can also use the standard normal distribution. For example, for t_L ~ N(26, 5), denote

$T = \left.\frac{t - 26}{5}\right|_{t=30} = 0.8$

Then P(t ≤ 30) = Φ(0.8) = 0.7881 and 1 − Φ(0.8) = 0.21.

2. Local road:

$T = \left.\frac{t - 26}{5}\right|_{t=35} = 1.8, \quad 1 - \Phi(1.8) = 0.036$

Expressway:

$T = \left.\frac{t - 31}{2}\right|_{t=35} = 2, \quad 1 - \Phi(2) = 0.023$

Using the expressway lowers the chance of missing the flight.
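
The four probabilities in this example can be reproduced with the normal CDF; the following sketch uses scipy.stats.norm as one possible tool.

from scipy.stats import norm

local = norm(26, 5)        # t_L ~ N(26, 5), minutes
express = norm(31, 2)      # t_E ~ N(31, 2), minutes

for deadline in (30, 35):
    print(deadline,
          1 - local.cdf(deadline),      # P(miss flight) via local road
          1 - express.cdf(deadline))    # P(miss flight) via expressway
# 30 minutes: 0.212 vs. 0.691 -> take the local road
# 35 minutes: 0.036 vs. 0.023 -> take the expressway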

1.3.6 Engineering Applications
1.3.6.1 Probability-Based Design
Consider a component of a system, which can be a structure, a machine, a vehicle, etc. To design that component to resist a certain load, we basically have two different approaches, namely, allowed stress design and probability-based design.

1.3.6.1.1 Allowed Stress Design (Deterministic Design)


The allowed stress design is based on the following criterion:

R N > QN (1.128)

where
R N and QN are, respectively, the values of nominal resistance and load.
Equation 1.128 can be realized by using a safety factor S, that is,

R N = S QN (1.129)

1.3.6.1.2 Load and Resistance Factor Design (Probability-Based Design)


Besides the allowed stress design, we also have the probability-based design, that is,

P(RD ≤ Q D) < [pr] (1.130)



where RD and QD are, respectively, the design values of resistance and load. Equation 1.130 means that the probability of the event that the resistance is smaller than or equal to the load, which is referred to as the failure probability pf, must be smaller than the allowed value [pr]. In general, we can also write

pf = 1 − pr (1.131)

where pf and pr are, respectively, the probability of failure and the reliability.


In real-world design, we often use the nominal values. However, it is more direct to use the mean values in probability analysis; we thus have the following relationships:

$Q_D = \gamma\, Q_N$ (1.132)

and

$R_D = \Phi\, R_N$ (1.133)

Here, RN and QN are, respectively, the nominal values of resistance and load; the terms γ and Φ are, respectively, the load and resistance factors.

1.3.6.1.2.1   Load Factor γ


The load factor can be seen as follows

γ = βQ(1 + κ R CQ) (1.134)

where κ R is the number of standard deviations; for example, κ R = 3.


In Equation 1.134, the term βQ is called the bias of load

$\beta_Q = \frac{\mu_Q}{Q_N}$ (1.135)

The term CQ is the coefficient of variation, given by

$C_Q = \frac{\sigma_Q}{\mu_Q}$ (1.136)

1.3.6.1.2.2   Resistance Factor Φ


Now, consider Equation 1.133 and the resistance factor can be written as

Φ = βR(1 − κ R CR) (1.137)

Here, the term βR is the bias of resistance, that is,

$\beta_R = \frac{\mu_R}{R_N}$ (1.138a)

FIGURE 1.27  Load and resistance. (PDFs of load and resistance versus intensity of force, with nominal, mean, and design values indicated.)

The term CR is the coefficient of variation:

$C_R = \frac{\sigma_R}{\mu_R}$ (1.138b)

The relationship between the PDFs of the load and the resistance and the failure probability pf is shown in Figure 1.27, where the darker line is the PDF of the demand, fQ, and the lighter line is the PDF of the resistance, fR. Thus, the failure probability pf can be written as

$p_f = \int_{-\infty}^{\infty} f_Q(q) \left[ \int_{-\infty}^{q} f_R(r)\,dr \right] dq$ (1.139a)

If both the demand and the resistance are normal, then the random variable R − Q is also normal; with its PDF denoted by f_{R−Q}(z), the failure probability can be further written as

$p_f = \int_{-\infty}^{0} f_{R-Q}(z)\,dz$ (1.139b)

1.3.6.2 Lognormal Distributions
In general, load and resistance cannot be negative; in such cases, we may consider the lognormal distribution. A random variable is lognormally distributed if its logarithm is normally distributed. If x is a random variable with a normal distribution, then

y = e^x (1.140)

has a lognormal distribution; likewise, if y is lognormally distributed, then x = log(y) is normally distributed.

1.3.6.2.1 Probability Density Function


The base of the logarithmic function does not matter for a generic lognormal distribution. Using the natural log:

$f_Y(y) = \frac{1}{\sqrt{2\pi}\,\sigma_X\, y}\, e^{-\frac{(\ln y - \mu_X)^2}{2\sigma_X^2}}, \quad y > 0$ (1.141)

Here, μX and σX are the mean and standard deviation of the variable's natural logarithm; that is, the variable's logarithm is normally distributed.

1.3.6.2.2 Cumulative Distribution Function


The CDF is given by

$F_Y(y) = \Phi\left(\frac{\ln(y) - \mu_X}{\sigma_X}\right)$ (1.142)

Here, function Φ(.) is defined previously in Equation 1.127.


Denote MY as the median value of Y (the geometric mean value of Y):

$P(Y \le M_Y) = 0.5$ (1.143)

From Equation 1.142, we have

$F_Y(M_Y) = \Phi\left(\frac{\ln(M_Y) - \mu_X}{\sigma_X}\right) = 0.5$

Note that Φ⁻¹(0.5) = 0; thus

$\frac{\ln(M_Y) - \mu_X}{\sigma_X} = 0$ (1.144)

Therefore, we have

$\ln(M_Y) = \mu_X$ (1.145)

The mean and standard deviation of a lognormal distribution, as functions of the mean and standard deviation of the normally distributed variable set X, are, respectively,

$\mu_Y = e^{\mu_X + \frac{1}{2}\sigma_X^2}$ (1.146a)

or

$\mu_X = \frac{1}{2} \ln\left(\frac{\mu_Y^2}{1 + C_Y^2}\right)$ (1.146b)

and

$\sigma_Y = e^{\mu_X + \frac{1}{2}\sigma_X^2} \sqrt{e^{\sigma_X^2} - 1}$ (1.147a)

or

$\sigma_X^2 = \ln\left(1 + C_Y^2\right)$ (1.147b)

Here, CY is the coefficient of variation of the random variable Y.
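
The forward and inverse relations above can be verified numerically; the sketch below (with arbitrary parameter values) also cross-checks against a large lognormal sample.

import numpy as np

mu_X, sigma_X = 0.5, 0.3                             # parameters of ln(Y)

mu_Y = np.exp(mu_X + sigma_X**2 / 2)                 # Equation 1.146a
sigma_Y = mu_Y * np.sqrt(np.exp(sigma_X**2) - 1)     # Equation 1.147a
C_Y = sigma_Y / mu_Y

mu_X_back = 0.5 * np.log(mu_Y**2 / (1 + C_Y**2))     # Equation 1.146b
sigma_X_back = np.sqrt(np.log(1 + C_Y**2))           # Equation 1.147b
print(mu_X_back, sigma_X_back)                       # recovers 0.5, 0.3

y = np.random.lognormal(mean=mu_X, sigma=sigma_X, size=1_000_000)
print(y.mean(), y.std())                             # close to mu_Y, sigma_Y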

1.3.6.3 Further Discussion of Probability-Based Design


It can be proven that the difference of RD − Q D, if both RD and Q D are normally dis-
tributed, is also normally distributed. That is, let

F = RD − Q D (1.148)

We must have

F ~ N(μF, σF) (1.149)

The mean is

μF = μR − μQ (1.150)

where, μR and μQ are, respectively, the means of RD and Q D.


The standard deviation is

$\sigma_F = \left(\sigma_R^2 + \sigma_Q^2\right)^{1/2}$ (1.151)

where, σR and σQ are, respectively, the standard deviations of RD and Q D.


A critical state is defined at

$F = 0$ (1.152)

which is called the limit state.


Using standardized variables, we can write

$\frac{F - \mu_F}{\sigma_F} = \frac{0 - \mu_F}{\sigma_F} = -\frac{\mu_R - \mu_Q}{\sqrt{\sigma_R^2 + \sigma_Q^2}} \equiv -\beta$ (1.153)

Failure probability is then given by

pf = P(F ≤ 0) = P(R − Q ≤0) = Φ(−β) (1.154)

Example 1.23

See Table 1.6



Table 1.6
Reliability Indices and Failure Probabilities

β     2        2.5      3        3.5
pf    0.0228   0.0062   0.0013   0.0002
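
The table entries follow directly from Equation 1.154; the sketch below reproduces them and also computes β and pf for an arbitrary, assumed pair of load and resistance statistics.

import numpy as np
from scipy.stats import norm

for beta in (2, 2.5, 3, 3.5):                        # reproduces Table 1.6
    print(beta, norm.cdf(-beta))                     # Equation 1.154

# illustrative (assumed) load/resistance statistics
mu_R, sigma_R = 42.0, 2.0
mu_Q, sigma_Q = 36.0, 1.5
beta = (mu_R - mu_Q) / np.hypot(sigma_R, sigma_Q)    # Equation 1.153
print(beta, norm.cdf(-beta))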

Problems
1. Use a Venn diagram to show that P(A ∪ B) = P(A) + P(B) − P(A ∩ B),
by the fact that the set (A ∪ B) can be seen as the union of the disjoint sets
$(A \cap \bar{B})$, $(\bar{A} \cap B)$, and $(A \cap B)$.
2. Find the sample spaces of the following random tests:
a. The record of average score of midterm test of class CIE520/MAE536.
(Hint, we have n people and the full score is 40).
b. To continuously have 10 products up to standard, the total number of
checked products.
c. Inspecting products that are marked "C" if certified and "D" if defective;
if two "D" products are checked consecutively, the inspection is stopped,
or, if four products have been checked, the inspection is also stopped;
the records of the inspection.
d. The coordinates of point inside a circle.
3. A and B denote two events, respectively.
a. If $A\bar{B} = \bar{A}B$, prove A = B.
b. Suppose either A or B occurs; find the corresponding probability.
4. For events A, B, and C, P(A) = 1/2, P(B) = 1/3, P(C) = 1/5, P(A ∩ B) = 1/10,
P(A ∩ C) = 1/15, P(B ∩ C) = 1/20, P(A ∩ B ∩ C) = 1/30, find
a. P(A ∪ B)
b. P( A ∪ B)
c. P(A ∪ B ∪ C)
d. P( A ∩ B ∪ C )
e. P( A ∩ B ∩ C )
f. P( A ∩ B ∩ C )
5. Calculate the probabilities of (a) P(X < 3), (b) P(X > 2), and (c) P(2 < X < 3)
with the following distributions
Poisson (λ = 1)
Uniform (a = 1, b = 4)
Rayleigh (σ = 1)
Normal (μ = 2, σ = 0.5)
6. Find the mean and standard deviation of the following distributions:
Bernoulli
Poisson
Rayleigh
Normal

7. Ten balls marked 1, 2, …, 10, respectively, are picked up by 10 people, one
each. Now, randomly choose three of the people and find the following probabilities:
a. The minimum number among their balls is 5.
b. The maximum number among their balls is 5.
8. The 11 letters of the word “probability” are written in 11 cards, respectively.
Pick up seven cards. Find the probability of the seven cards being “ability.”
9. Box I contains 50 bearings, among which 10 are grade A. Box II contains
30 bearings of the same type, among which 18 are grade A. The rest of the
bearings are grade B. One of the boxes is taken, and bearings are picked from
it twice; each time, one bearing is picked, without replacement. Find the
probability that
a. the first bearing is grade A;
b. given that the first bearing is grade A, the second one is also grade A.
10. Find the reliability of the following two systems:
a. Four independent elements, 1, 2, 3, and 4, with reliabilities at p1, p2, p3,
and p4, respectively, are arranged as shown in Figure P1.1a.
b. Five independent elements, 1, 2, 3, 4, and 5, with identical reliabilities p
are arranged as shown in Figure P1.1b.

FIGURE P1.1
2 Functions of Random Variables
In Chapter 1, the basic assumptions and theory of probability are briefly reviewed
together with the concepts of random variables and their distribution. In this chapter,
we will discuss several important functions of random variables, which are also ran-
dom variables. Because the basic idea to treat random variables is to investigate their
distributions, similarly, to study the functions of random variables, we also need to
consider the corresponding probability density function (PDF) and cumulative dis-
tribution function (CDF).

2.1 SYSTEMS AND FUNCTIONS


In Chapter 1, it is shown that a vibrating object can be a dynamic system. In fact, in Chapter 6, we will further see that a single-degree-of-freedom vibration system is a second-order dynamic system. In Chapter 9, we will see that an n-degree-of-freedom (n-DOF) vibration system is of 2nth order, which can be broken down into n second-order or 2n first-order systems. Figure 2.1a shows a typical block diagram of the input–system–output relationship.
The function of random variables can be seen as the relationship between the
original random variable and the response of a certain system. In these circum-
stances, the original variables are generally not functions of time (see Figure 2.1b).
Furthermore, among random processes, we can have a similar relationship, except
that the input and output become the functions of time.
In this manuscript, because of the limited space, we will not discuss the theory
of the system in detail. Only the necessary concepts will be mentioned. For further
knowledge of the system, especially dynamic systems, interested readers may con-
sult corresponding textbooks, such as the works of Antsaklis and Michel (1997).

2.1.1 Dynamic Systems
2.1.1.1 Order of Systems
A dynamic system can be modeled as a differential equation. In this manuscript, the
order of a system means the highest order of differentiation.

2.1.1.1.1  Zero Order System


If the relationship of input and output involves no differentiation or integration with
respect to time, we have a zero order system.


FIGURE 2.1  (a) System and (b) function.

2.1.1.1.2  First-Order System


If the output of a system is the first derivative of input with respect to time, we have
a first-order system.

2.1.1.1.3  Second-Order System


If the output of a system is the second derivative of input with respect to time, we
have a second-order system.

2.1.1.2 Simple Systems
2.1.1.2.1  Subsystems
A system of functions can be rather complex. However, in engineering applications,
we can always break a complex system down into several subsystems. When a sys-
tem is broken down, these subsystems can be in parallel or in series (see Figure 1.8a
and b, respectively, where all the “Pi ” symbols can be replaced by “subsystem i”). In
the following, let us consider basic subsystems.

2.1.1.2.2  Proportional System


A proportional system is zero ordered. Suppose X is a random set and scalar a is a
proportional constant, we have

Y = aX (2.1)

2.1.1.2.3  Differential System


Suppose X is a random set that is also a function of time and Y is the result of the
first derivative of X with respect to time (in Chapter 5, we will discuss how to take a
derivative of random variables), we have

$Y = \frac{d}{dt} X$ (2.2)

2.1.1.2.4  Integral System


Suppose X is a random set that is also a function of time and Y is the result of the
integration of X with respect to time (in Chapter 5, we will discuss how to take an
integration of random variables), we have

$Y = \int X\,dt + C$ (2.3)

Generally speaking, the integration constant C can also be a random variable.

2.1.1.2.5  Polynomial System


Suppose X is a random set and Y is the result of polynomials of X, we have

Y = a 0 + a1X + a2 X2 + … (2.4)

where a 0, a1, and a2 are constants.

2.1.2 Jointly Distributed Variables


Besides the above-mentioned relationship between X and Y, in view of distributions,
we have several special cases that are important for understanding the nature of
random variables. Especially when considering the relationships between input and
corresponding output variable, and between variables and the corresponding func-
tions, the concept of jointly distributed variables plays an important role. They are
then discussed as follows.

2.1.2.1 Joint and Marginal Distributions of Discrete Variables


Let us consider discrete variables first.

2.1.2.1.1  Joint Distributions


Suppose we have two sets of random variables, J and K, which are jointly considered.
We can denote them as
(J, K) = {j, k},  j, k = 1, 2, … (2.5)

To describe the intersection of J and K, we can write

pJK(j, k) = P[(J = j) ∩ (K = k)] (2.6)

where pJK is the joint distribution, which satisfies

(1) $p_{JK} \ge 0$ (2.7)

(2) $\sum_{\text{all } j} \sum_{\text{all } k} p_{JK} = 1$ (2.8)

See Table 2.1 for a list of joint distributions.

Table 2.1
Joint Distribution

          K
J         k = 1   k = 2   ...
j = 1     p11     p12
j = 2     p21     p22

Example 2.1

Among truck buyers, 50% purchase American pickups, 20% purchase Japanese pickups, and 30% buy other countries' pickups. Randomly select two customers, and denote by A and J the numbers of American and Japanese buyers among them. Find the joint distribution.
The possible values of A and J are 0, 1, and 2 (Table 2.2). When A = a (a people buy American) and J = j (j people buy Japanese), 2 − a − j people buy others. Therefore,
$P[(A = a) \cap (J = j)] = C_2^a\, C_{2-a}^j\, (0.5)^a (0.2)^j (0.3)^{2-a-j}$

$P[(A = 0) \cap (J = 0)] = C_2^0 C_2^0 (0.5)^0 (0.2)^0 (0.3)^2 = 0.09$
$P[(A = 0) \cap (J = 1)] = C_2^0 C_2^1 (0.5)^0 (0.2)^1 (0.3)^1 = 0.12$
$P[(A = 0) \cap (J = 2)] = C_2^0 C_2^2 (0.5)^0 (0.2)^2 (0.3)^0 = 0.04$
$P[(A = 1) \cap (J = 0)] = C_2^1 C_1^0 (0.5)^1 (0.2)^0 (0.3)^1 = 0.3$
$P[(A = 1) \cap (J = 1)] = C_2^1 C_1^1 (0.5)^1 (0.2)^1 (0.3)^0 = 0.2$
$P[(A = 1) \cap (J = 2)] = 0$
$P[(A = 2) \cap (J = 0)] = C_2^2 C_0^0 (0.5)^2 (0.2)^0 (0.3)^0 = 0.25$
$P[(A = 2) \cap (J = 1)] = 0$
$P[(A = 2) \cap (J = 2)] = 0$

Table 2.2
Joint Distribution of A and J

          J
A         j = 0   j = 1   j = 2
a = 0     0.09    0.12    0.04
a = 1     0.3     0.2     0
a = 2     0.25    0       0
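
The nine probabilities above can be generated in one loop; the following sketch evaluates the trinomial formula for all (a, j) pairs.

from math import comb

pA, pJ, pO = 0.5, 0.2, 0.3       # American, Japanese, other

for a in range(3):
    for j in range(3 - a):
        p = comb(2, a) * comb(2 - a, j) * pA**a * pJ**j * pO**(2 - a - j)
        print(a, j, p)           # reproduces Table 2.2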

Table 2.3
Joint and Marginal Distributions of A and J

          J
A         j = 0   j = 1   j = 2   pA(a)
a = 0     0.09    0.12    0.04    0.25
a = 1     0.3     0.2     0       0.5
a = 2     0.25    0       0       0.25
pJ(j)     0.64    0.32    0.04    1

2.1.2.1.2  Marginal Distributions


From the above example, we obtain Table 2.3, which lists the joint and marginal distributions of A and J. That is, of the two random variables, we consider only one, no matter what value the other variable takes (which means that the other variable may take any possible value):

$p_J(j) = P(J = j) = P_{JK}[(J = j) \cap (K \text{ takes all possible values})] = \sum_{\text{all } k} P_{JK}[(J = j) \cap (K = k)] = \sum_{\text{all } k} p_{JK}, \quad j = 1, 2, 3, \ldots$ (2.9)

$p_K(k) = P(K = k) = P_{JK}[(J \text{ takes all possible values}) \cap (K = k)] = \sum_{\text{all } j} P_{JK}[(J = j) \cap (K = k)] = \sum_{\text{all } j} p_{JK}, \quad k = 1, 2, 3, \ldots$ (2.10)

2.1.2.2 Joint and Marginal Distributions of Continuous Variables


Similar to the case of discrete variables, suppose we have two sets of continuous
variables and consider the joint distributions. That is, we have

(X, Y) = {x, y} (2.11)

Equation 2.11 can be graphically shown in Figure 2.2. Generally, the ranges of x
and y are
−∞ < x < ∞,  −∞ < y < ∞ (2.12)

FIGURE 2.2  Ranges of variables.



2.1.2.2.1  Joint Distributions 1 (Bivariate PDF)


Now, the joint distribution of PDF can be written as

P[( X = x ) ∩ (Y = y)] = P[( x < X ≤ x + dx ) ∩ ( y < Y ≤ y + dy)]

= f XY ( x , y) dx dy (2.13)

where f XY (x, y) is the bivariate density function, which satisfies

(1) f XY (x, y) ≥ 0 (2.14)


(2)
∫∫ f XY ( x , y) d x d y = 1 (2.15)

2.1.2.2.2  Joint Distributions 2 (Bivariate CDF)


The joint distribution of CDF can be written as

FXY (x, y) = P[(X ≤ x) ∩ (Y ≤ y)] (2.16)

which can be further written as

$F_{XY}(x, y) = \int_{-\infty}^{y} \int_{-\infty}^{x} f_{XY}(u, v)\,du\,dv$ (2.17)

The relationship of the PDF and CDF is

$f_{XY}(x, y) = \frac{\partial^2}{\partial x\,\partial y}[F_{XY}(x, y)]$ (2.18)

2.1.2.2.3  Marginal Distributions


Similar to case of discrete variables, marginal distributions can be obtained through
joint distributions:

$F_X(x) = P(X \le x) = P[(X \le x) \cap (Y < \infty)] = F_{XY}(x, \infty) = \int_{-\infty}^{x} \int_{-\infty}^{\infty} f_{XY}(u, v)\,dv\,du$ (2.19)

and

$F_Y(y) = P(Y \le y) = P[(X < \infty) \cap (Y \le y)] = F_{XY}(\infty, y) = \int_{-\infty}^{y} \int_{-\infty}^{\infty} f_{XY}(u, v)\,du\,dv$ (2.20)

The PDF of X is

$f_X(x) = \frac{dF_X(x)}{dx} = \int_{-\infty}^{\infty} f_{XY}(x, y)\,dy$ (2.21)

The PDF of Y is

$f_Y(y) = \frac{dF_Y(y)}{dy} = \int_{-\infty}^{\infty} f_{XY}(x, y)\,dx$ (2.22)

Example 2.2

A two-dimensional random variable (X, Y) has the following PDF

f XY (x,y) = ce−2(x+y),  0 < x <∞, 0 < y < ∞

Question (1): Find constant c.


Question (2): Find the CDF.
Question (3): Find the probability of area C shown in Figure 2.3.

(1) $\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{XY}(u, v)\,du\,dv = \int_0^{\infty}\int_0^{\infty} c\, e^{-2(x+y)}\,dx\,dy = c\left[\int_0^{\infty} e^{-2x}\,dx\right]\left[\int_0^{\infty} e^{-2y}\,dy\right] = \frac{c}{4} \;\rightarrow\; c = 4$

(2) $F_{XY}(x, y) = \int_{-\infty}^{y}\int_{-\infty}^{x} f_{XY}(u, v)\,du\,dv = \int_0^{y}\int_0^{x} 4e^{-2(u+v)}\,du\,dv = (1 - e^{-2x})(1 - e^{-2y}), \quad 0 < x < \infty,\; 0 < y < \infty$


FIGURE 2.3  Probability of area C. (The triangular region bounded by x + y = 1 in the first quadrant.)


Thus,

$F_{XY}(x, y) = \begin{cases} (1 - e^{-2x})(1 - e^{-2y}), & 0 < x < \infty,\; 0 < y < \infty \\ 0, & \text{elsewhere} \end{cases}$

(3) $P[(X, Y) \in C] = \iint_C f_{XY}(x, y)\,dx\,dy = 4\int_0^1\left[\int_0^{1-x} e^{-2(x+y)}\,dy\right]dx = 1 - 3e^{-2}$
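
Both the normalization constant and the probability of region C can be confirmed by numeric double integration; the sketch below uses scipy.integrate.dblquad, with upper limits of 50 standing in for infinity.

from math import exp
from scipy.integrate import dblquad

f = lambda y, x: 4 * exp(-2 * (x + y))   # joint PDF; dblquad expects f(y, x)

total, _ = dblquad(f, 0, 50, lambda x: 0, lambda x: 50)
print(total)                             # ~1.0, confirming c = 4

pC, _ = dblquad(f, 0, 1, lambda x: 0, lambda x: 1 - x)   # triangle x + y <= 1
print(pC, 1 - 3 * exp(-2))                               # both ~0.5940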

2.1.3 Conditional Distribution and Independence


2.1.3.1 Discrete Variables
From Section 2.1.2, it is seen that marginal distributions can be obtained through
joint distributions. However, we cannot always determine joint distributions through
marginal distributions. If we also know the probability of event (X = xi) under the
condition (Y = yj), denoted by

P[(X = xi)∣(Y = yj)] = pX∣Y (xi∣yj) (2.23)

then, from the conditional probability

P[(X = xi) ∩ (Y = yj)] = P[(X = xi)∣(Y = yj)] P(Y = yj) (2.24)

if

P(Y = yj) ≠ 0 (2.25)

we have

P ( X = xi ) ∩ (Y = y j ) 
pXY ( xiy j ) = P[( X = xi )(Y = y j )] = (2.26)
P(Y = y j )

Fixing j and letting i vary, we have a series pX∣Y (xi∣yj) (i = 1, 2, 3…)


It is seen that

(1)  pX∣Y (xi∣yj) ≥ 0 (i = 1, 2, 3…) (2.27)

and

(2) $\sum_{\text{all } i} p_{X|Y}(x_i \mid y_j) = 1$ (2.28)

That is, the series pX∣Y (xi∣yj) (i = 1, 2, 3…) satisfies the two basic requirements of
probability distributions. This implies that pX∣Y (xi∣yj) (i = 1, 2, 3…) is a probability
distribution, which describes the statistical property of random variable X, under the
condition that Y = yj.
Now, we can ask a question: is the probability distribution pX∣Y (xi∣yj) equal to pX(xi)?
Generally speaking, it is not, because pX∣Y (xi∣yj) needs the condition

Y = yj. (2.29)

Here, pX∣Y (xi∣yj) is the conditional distribution. Similarly, we have

$p_{Y|X}(y_j \mid x_i) = P[(Y = y_j) \mid (X = x_i)] = \frac{P[(X = x_i) \cap (Y = y_j)]}{P(X = x_i)}$ (2.30)

Example 2.3

Recall the example summarized in Table 2.4. Considering the conditional distributions under the conditions j = 0, j = 1, and j = 2, we obtain Table 2.5.

Table 2.4
Joint and Marginal Distributions of the Above-Mentioned Example

          J
A         j = 0   j = 1   j = 2   pA(a)
a = 0     0.09    0.12    0.04    0.25
a = 1     0.3     0.2     0       0.5
a = 2     0.25    0       0       0.25
pJ(j)     0.64    0.32    0.04    1

Table 2.5
Conditional Distributions

          P(A = a | j = 0)     P(A = a | j = 1)     P(A = a | j = 2)
A         (pJ(0) = 0.64)       (pJ(1) = 0.32)       (pJ(2) = 0.04)
a = 0     0.09/0.64 = 0.14     0.12/0.32 = 0.375    0.04/0.04 = 1
a = 1     0.3/0.64 = 0.47      0.2/0.32 = 0.625     0/0.04 = 0
a = 2     0.25/0.64 = 0.39     0/0.32 = 0           0/0.04 = 0
Σ pA(a)   1                    1                    1

2.1.3.2 Continuous Variables
Similar to the discrete case, Equation 2.30 can be written as

$P[(X = x) \mid (Y = y)] = f_{X|Y}(x \mid Y = y)\,dx = \frac{f_{XY}(x, y)\,dx\,dy}{f_Y(y)\,dy}$ (2.31)

Thus, the conditional PDF of variable X is

$f_{X|Y}(x \mid Y = y) = \frac{f_{XY}(x, y)}{f_Y(y)}$ (2.32)

Similarly, we have

$f_{Y|X}(y \mid X = x) = \frac{f_{XY}(x, y)}{f_X(x)}$ (2.33)

2.1.3.3 Variable Independence
Variable independence is an important concept. Treating two independent sets as such can significantly reduce the effort required. On the other hand, if two sets of variables are not independent but are mistakenly treated as independent, severe errors may be introduced.

2.1.3.3.1  Discrete Variables


First, consider discrete cases. Recall Equation 1.45: two events, A and B, are inde-
pendent, if

P(A ∩ B) = P(A)P(B) (2.34)

furthermore, if

P(B∣A) = P(B) (2.35)

Now, consider variables. Similarly, random variable J and K are independent, if

pJK(j, k) = pJ(j)pK(k) (2.36)

Because

pJK(j, k) = P[(J = j) ∩ (K = k)] (2.37)

Because J and K are independent,

P[(J = j) ∩ (K = k)] = P(J = j)P(K = k) = pJ(j)pK(k) (2.38)



Thus, we can also have

pX∣Y (xi∣yj) = pX(xi) (2.39)

and

pY∣X(yj∣xi) = pY (yj) (2.40)

2.1.3.3.2  Continuous Variables


Similarly, for the independence of continuous variables, we have

f XY (x,y) = f X(x) f Y (y) (2.41)

and

f X∣Y (x∣Y = y) = f X(x) (2.42)

also

f Y∣X(y∣X = x) = f Y (y) (2.43)

Example 2.4

Mr. A and Ms. B plan to meet at place C at 6:00 pm. The arrival times of the two are independent and uniformly distributed between 6:00 pm and 7:00 pm. Find the probability that the first arrival must wait for more than 10 minutes.
Denote Mr. A's arrival as X minutes after 6:00 and Ms. B's as Y minutes after 6:00. The required probability is

P(X + 10 < Y) + P(Y + 10 < X)

Note that P(X + 10 < Y) = P(Y + 10 < X) by symmetry (see Figure 2.4):

$P(X + 10 < Y) = \iint_{x+10<y} f_{XY}(x, y)\,dx\,dy = \int_{10}^{60}\left[\int_0^{y-10}\left(\frac{1}{60}\right)^2 dx\right] dy = \frac{25}{72}$

So, P = P(X + 10 < Y) + P(Y + 10 < X) = 0.69.
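
A short Monte Carlo check of this answer follows; the trial count is arbitrary.

import random

trials = 1_000_000
count = 0
for _ in range(trials):
    x = random.uniform(0, 60)    # Mr. A's arrival, minutes after 6:00
    y = random.uniform(0, 60)    # Ms. B's arrival
    count += abs(x - y) > 10     # the first arrival waits more than 10 minutes
print(count / trials)            # close to 2 * 25/72 = 0.694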


Readers may consider the question what if the distribution of Mr. A is that
shown in Figure 2.5.
FIGURE 2.4  Regions of integration. (The regions above x + 10 = y and below y + 10 = x within the 60 × 60 square.)

FIGURE 2.5  Distribution of Mr. A's probability.

2.1.4 Expected Value, Variance, Covariance, and Correlation


We now begin to consider a function of both random variables X and Y, denoted by
g(X,Y) and consider the central tendency, dispersion. We will also determine how
these two sets of variables are related. Here, we only consider continuous variables.
Readers can develop their own formulae for discrete variables. Or they can consult
typical textbooks of probability, for example, see Saeed (2004).

2.1.4.1 Expected Value of g(X,Y)


The expected value can be written as

$E[g(X, Y)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{XY}(x, y)\, g(x, y)\,dx\,dy$ (2.44a)

In particular, the expected value of variable X is

$E[X] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{XY}(x, y)\, x\,dx\,dy$ (2.44b)

2.1.4.2 Conditional Expected Value


The conditional expected value is given by

$E[g(X, Y) \mid Y = y] = \int_{-\infty}^{\infty} f_{X|Y}(x \mid Y = y)\, g(x, y)\,dx$ (2.45)

2.1.4.3 Variance
Because we have two sets of variables, we should have two variances to describe the dispersions; namely, for X, we have

$D[X] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{XY}(x, y)(x - \mu_X)^2\,dx\,dy$ (2.46)

and for Y, we also have

$D[Y] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{XY}(x, y)(y - \mu_Y)^2\,dx\,dy$ (2.47)

and

$D[X] = E[X^2] - (E[X])^2$ (2.48)

In the following, we will continue to use E[(.)] and D[(.)] to denote the expected value and variance of the set (.).

2.1.4.4 Covariance of X,Y
The covariance of random variables X, Y is given as

$\sigma_{XY} = E[(X - \mu_X)(Y - \mu_Y)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{XY}(x, y)(x - \mu_X)(y - \mu_Y)\,dx\,dy$ (2.49)

The covariance can also be calculated as

$\sigma_{XY} = E[XY] - \mu_X \mu_Y$ (2.50)

2.1.4.5 Correlation Coefficient
Accordingly, the correlation coefficient is given by

$\rho_{XY} = \frac{\sigma_{XY}}{\sigma_X \sigma_Y}$ (2.51)

It is seen that the correlation coefficient lies between −1 and 1, that is,

$-1 \le \rho_{XY} \le 1$ (2.52)

because the correlation coefficient can be written as

$\rho_{XY} = E\left[\frac{(X - \mu_X)(Y - \mu_Y)}{\sigma_X \sigma_Y}\right]$ (2.53)

Example 2.5

The joint distribution of X ~ N(0, σX) and Y ~ N(0, σY) is given by

$f_{XY}(x, y) = \frac{1}{2\pi\sigma_X\sigma_Y\sqrt{1 - \rho_{XY}^2}} \exp\left\{ -\frac{1}{2(1 - \rho_{XY}^2)} \left[ \frac{x^2}{\sigma_X^2} - \frac{2\rho_{XY}\, xy}{\sigma_X\sigma_Y} + \frac{y^2}{\sigma_Y^2} \right] \right\}$

where ρXY is the correlation coefficient.

2.1.5 Linear Independence
2.1.5.1 Relationship between Random Variables X and Y
For two sets of random variables, the amount of linear independence can be judged
by the correlation coefficient as follows.

2.1.5.1.1  Linear Function


If we have a linear function given by

Y = aX + b (2.54)

Then, the correlation coefficient is

ρXY = ±1 (2.55)

Because

$\rho_{XY} = E\left[\frac{(X - \mu_X)(Y - \mu_Y)}{\sigma_X \sigma_Y}\right] = E\left[\frac{(X - \mu_X)\{(aX + b) - (a\mu_X + b)\}}{\sigma_X (|a|\sigma_X)}\right] = \frac{a}{|a|}\, E\left[\frac{(X - \mu_X)^2}{\sigma_X^2}\right] = \frac{a}{|a|}$ (2.56)

That is,

$\rho_{XY} = \begin{cases} 1, & a > 0 \\ -1, & a < 0 \end{cases}$ (2.57)


2.1.5.1.2  Independence
Now, we further consider independence by examining the following cases.

1. If X and Y are independent, then X and Y are not correlated:

ρXY = 0 (2.58)

Because

σ XY
ρXY =
σ X σY

and

σXY = E[XY] − μXμY

Furthermore, the expected value of the product XY can be written as

$E[XY] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{XY}(x, y)\, xy\,dx\,dy = \int_{-\infty}^{\infty} f_X(x)\, x\,dx \int_{-\infty}^{\infty} f_Y(y)\, y\,dy = \mu_X \mu_Y$ (2.59)

Thus, σXY = 0 and Equation 2.58 holds.

2. If X and Y are independent, then

E[XY] = E[X]E[Y] (2.60)

3. If X and Y are independent, then

D[X + Y] = D[X] + D[Y] (2.61)

2.1.5.2 Expected Value of Sum of Random Variables X and Y

E[X + Y] = E[X] + E[Y] (2.62)

Note that Equation 2.62 holds for any pair of X and Y, whether or not they are independent.

2.1.6 CDF and PDFs of Random Variables


In many cases, we want to learn the distribution of the output based on knowledge of the input. From Figure 2.1b, we may study the distributions of specific functions of random variables. For example, the random responses of a vibration system can be seen as functions of the random input variables. Now, the CDF and PDF of a function Y of random variables X, with Y = f(X), are discussed as follows.

2.1.6.1 Discrete Variables
First, let us consider the following examples. Through these examples, we can real-
ize how the probability mass functions (PMF) of discrete random variables are
determined.

Example 2.6

The probability distribution of random variable X is shown in Table 2.6.


Let us find the distribution Y = X2
Consider the possible values of Y:
Because X = {−2, −1, 0, 1, 2} we have

Y = {0, 1, 4}

Therefore,

P(Y = 0) = P(X = 0) = 0.2  (X = 0^{1/2} = 0)

P(Y = 1) = P(X = 1) + P(X = −1) = 0.3  (X = ±1^{1/2} = ±1)

P(Y = 4) = P(X = 2) + P(X = −2) = 0.5  (X = ±4^{1/2} = ±2)

The results are listed in Table 2.7.

Example 2.7

Let us further consider another example. The probability distribution of random


variable X is shown in Table 2.8.

Table 2.6
Probability Distribution of Random Variable X

X           −2     −1     0      1      2
p(X = xi)   0.3    0.2    0.2    0.1    0.2

Table 2.7
Probability Distribution of Random Variable X²

Y           0      1      4
p(Y = yi)   0.2    0.3    0.5

Table 2.8
Probability Distribution of Random Variable X

X           1      2      3      …     n
p(X = xi)   1/2    1/2²   1/2³   …     1/2ⁿ

Let us find the distribution of Y = sin(0.5πX). Consider the possible values of Y:

$\sin\left(\frac{\pi n}{2}\right) = \begin{cases} -1, & n = 4k - 1 \\ 1, & n = 4k - 3 \\ 0, & n = 2k \end{cases} \quad k = 1, 2, \ldots$

Thus, sin(0.5πX) can take only three values: −1, 0, and 1.


2 2 −1 
P(Y = −1) = ∑ P[X = 4k − 1] = 21 + 21 + 21 +  =
3 7 11

1
=  X = sin (−1)
1  15  π
k =1 8  1− 
 16 


1 

1
P(Y = 0) =
1  3 ∑ P[X = 2k] = 21 + 21 + 21 +  =
2
=  X = sin−1(0)
 π  2 4 6
k =1 4  1− 
 4


8  2 −1 

1
P(Y = 1) =
1  ∑ P[X = 4k − 3] = 12 + 21 + 21 +  =
=
15
 X = sin (1)
π 5 9
k =1 2 1− 
 16 

The results are listed in Table 2.9.


These two examples describe how to find the PMF of Y and the function of
random variable X as follows:
First, list all the possible values of Y, arranging them from the smallest to the
largest. Second, at each Yi, find the sums of probabilities P(Yi) = Σ (all Xj corre-
sponding to Yi). Note that to systematically find those Xj, we have X = f –1(Yi).

Table 2.9
Probability Distribution of Random Variable Y

Y           −1     0      1
p(Y = yj)   2/15   1/3    8/15
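
The two-step recipe above amounts to a "pushforward" of the PMF; a minimal Python sketch (the function name is our own) applies it to Example 2.6.

from collections import defaultdict

pX = {-2: 0.3, -1: 0.2, 0: 0.2, 1: 0.1, 2: 0.2}   # Table 2.6

def pmf_of_function(pX, g):
    """PMF of Y = g(X): sum P(X = x) over all x with g(x) = y."""
    pY = defaultdict(float)
    for x, p in pX.items():
        pY[g(x)] += p
    return dict(pY)

print(pmf_of_function(pX, lambda x: x**2))   # {4: 0.5, 1: 0.3, 0: 0.2}, Table 2.7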

2.1.6.2 Continuous Variables
For continuous variables and the corresponding functions, similar to the case of
discrete variables, let us consider the following examples.

Example 2.8

The area X of a circle has a uniform distribution in [a, b]; the PDF is

$f_X(x) = \begin{cases} \dfrac{1}{b - a}, & x \in [a, b] \\ 0, & \text{elsewhere} \end{cases}, \quad 0 < a < b$

Find the PDF $f_Y(y) = \frac{d}{dy}[P(Y \le y)]$ of the radius $Y = \sqrt{\dfrac{X}{\pi}}$.
1. Let us find the CDF FY (y) = P(Y ≤ y). When y < 0, because

$Y = \sqrt{\frac{X}{\pi}} \ge 0$

we have

$F_Y(y) = P(Y \le y) = P(\Phi) = 0$

and

$f_Y(y) = \frac{d}{dy}[F_Y(y)] = 0$

2. Considering y ≥ 0, we have

$F_Y(y) = P(Y \le y) = P\left(\sqrt{\frac{X}{\pi}} \le y\right) = P[X \le \pi y^2] = \int_{-\infty}^{\pi y^2} f_X(x)\,dx$

$= \begin{cases} \int_{-\infty}^{\pi y^2} 0\,dx = 0, & \pi y^2 < a \\ \int_{-\infty}^{a} 0\,dx + \int_a^{\pi y^2} \frac{1}{b-a}\,dx = \frac{\pi y^2 - a}{b - a}, & a \le \pi y^2 < b \\ \int_{-\infty}^{a} 0\,dx + \int_a^{b} \frac{1}{b-a}\,dx + \int_b^{\pi y^2} 0\,dx = 1, & \pi y^2 \ge b \end{cases}$

and

$f_Y(y) = \frac{d}{dy}[F_Y(y)] = \begin{cases} 0, & 0 \le y < \sqrt{a/\pi} \\ \dfrac{2\pi y}{b - a}, & \sqrt{a/\pi} \le y \le \sqrt{b/\pi} \\ 0, & y > \sqrt{b/\pi} \end{cases}$

Thus,

$f_Y(y) = \begin{cases} \dfrac{2\pi y}{b - a}, & \sqrt{a/\pi} \le y \le \sqrt{b/\pi} \\ 0, & \text{elsewhere} \end{cases}$

2.1.6.2.1  General Approach


Having examined the above examples, more general cases can be considered. Assume X is a continuous random variable with PDF

$f_X(x) = \begin{cases} f_X(x), & x \in (a, b) \\ 0, & \text{elsewhere} \end{cases}$ (2.63)

and g(x) is a differentiable monotonic function within the range (a, b); that is, when

$x \in (a, b) \rightarrow g(x) \in (\alpha, \beta)$ (2.64)

Denote the inverse function of g(x) as g⁻¹(y), so that y ∈ (α, β). The PDF of Y = g(X) can be written as

$f_Y(y) = \begin{cases} f_X[g^{-1}(y)] \left|\dfrac{d[g^{-1}(y)]}{dy}\right|, & y \in (\alpha, \beta) \\ 0, & \text{elsewhere} \end{cases}$ (2.65)

Proof:

To obtain the PDF f Y (y), find the corresponding CDF FY (y).

(1)  When y ≤ α, FY (y) = P(Y ≤ y) = P(Φ) = 0, → f Y (y) = 0 (2.66)

(2)  When y ≥ β, FY (y) = P(Y ≤ y) = P(U) = 1, → f Y (y) = 0 (2.67)

(3)  When α < y < β, FY (y) = P(Y ≤ y) = P (g(X) ≤ y) (2.68)



Suppose g(x) is monotonically increasing; then

$F_Y(y) = P(Y \le y) = P(g(X) \le y) = P(X \le g^{-1}(y)) = F_X[g^{-1}(y)]$ (2.69)

Therefore,

$f_Y(y) = \frac{d[F_Y(y)]}{dy} = \frac{d\{F_X[g^{-1}(y)]\}}{dy} = f_X[g^{-1}(y)]\, \frac{d[g^{-1}(y)]}{dy}$ (2.70)

Because g(x) monotonically increases in the range (a, b), so does g⁻¹(y) in the range (α, β); that is, d[g⁻¹(y)]/dy ≥ 0, so that

$f_Y(y) = f_X[g^{-1}(y)] \left|\frac{d[g^{-1}(y)]}{dy}\right|$ (2.71)

Suppose g(x) is monotonically decreasing; then

$F_Y(y) = P(Y \le y) = P(g(X) \le y) = P(X \ge g^{-1}(y)) = 1 - F_X[g^{-1}(y)]$ (2.72)

Therefore,

$f_Y(y) = \frac{d[F_Y(y)]}{dy} = \frac{d\{1 - F_X[g^{-1}(y)]\}}{dy} = -f_X[g^{-1}(y)]\, \frac{d[g^{-1}(y)]}{dy}$ (2.73)

Because g(x) monotonically decreases in (a, b), so does g⁻¹(y) in (α, β); that is, d[g⁻¹(y)]/dy ≤ 0. We again have

$f_Y(y) = f_X[g^{-1}(y)] \left|\frac{d[g^{-1}(y)]}{dy}\right|$


2.1.6.2.2  Linear Function of Normally Distributed Variables


Given random variable X ~ N(μX, σX), find the PDF of Y = aX + b. Let

$g(X) = aX + b, \quad x \in (-\infty, \infty)$ (2.74)

The inverse function is

$g^{-1}(y) = \frac{y - b}{a}, \quad y \in (-\infty, \infty)$

and

$\frac{d[g^{-1}(y)]}{dy} = \frac{d[(y - b)/a]}{dy} = \frac{1}{a}$

Substitution of the above into Equation 2.65 yields

$f_Y(y) = f_X[g^{-1}(y)] \left|\frac{d[g^{-1}(y)]}{dy}\right| = \frac{1}{\sqrt{2\pi}\,\sigma_X}\, e^{-\frac{\left(\frac{y - b}{a} - \mu_X\right)^2}{2\sigma_X^2}}\, \frac{1}{|a|} = \frac{1}{\sqrt{2\pi}\,|a|\,\sigma_X}\, e^{-\frac{(y - a\mu_X - b)^2}{2a^2\sigma_X^2}}, \quad y \in (-\infty, \infty)$ (2.75)

Thus, Y is also normally distributed, with

$Y \sim N(a\mu_X + b, |a|\sigma_X)$ (2.76)
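
Equation 2.76 can be verified by sampling; the following sketch (with arbitrary parameter values) draws a large normal sample and transforms it linearly.

import numpy as np

mu_X, sigma_X, a, b = 1.0, 0.5, -3.0, 2.0
x = np.random.normal(mu_X, sigma_X, 1_000_000)
y = a * x + b
print(y.mean(), y.std())     # ~ a*mu_X + b = -1.0 and |a|*sigma_X = 1.5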

Example 2.9

Y = sin(X)

Given X is uniformly distributed between (0, 2π). Find the PDF of Y.


Note that y = sin(x) is not monotonic. However, from −π/2 to π/2, y is monotonically increasing, and from π/2 to 3π/2, y is monotonically decreasing. We can obtain

$F_Y(y) = \frac{1}{2} + \frac{\sin^{-1}(y)}{\pi}, \quad -1 \le y \le 1$ (2.77)

Furthermore, from Equation 2.77, f Y (y) can be determined.



2.2 SUMS OF RANDOM VARIABLES


Now, let us consider the sums of random variables, which play important roles in
load combinations.

2.2.1 Discrete Variables
First, consider discrete variables by studying some examples.

Example 2.10

X1 and X2 are mutually independent and both have Bernoulli distributions with success probability p (see Table 2.10). Find the PMF of Y = X1 + X2.
The range of Y is {0, 1, 2}. We thus have

P(Y = 0) = P[(X1 = 0) ∩ (X2 = 0)] = P(X1 = 0) P(X2 = 0) = (1 − p)²

P(Y = 1) = P[(X1 = 1) ∩ (X2 = 0)] + P[(X1 = 0) ∩ (X2 = 1)] = 2(1 − p)p

P(Y = 2) = P[(X1 = 1) ∩ (X2 = 1)] = P(X1 = 1) P(X2 = 1) = p²

The results are listed in Table 2.11.


Generally, let X and Y be independent discrete random variables with PMFs

pX(xi) = P(X = xi) (2.78)

and

pY(yj) = P(Y = yj) (2.79)

Table 2.10  Probability Distribution of Random Variable Xi

    Xi            0        1
    P(Xi = xj)    1 − p    p

Table 2.11  Probability Distribution of Sums

    Y            0           1            2
    P(Y = yk)    (1 − p)²    2(1 − p)p    p²

The sum

Z = X + Y (2.80)

has the PMF
$$p_Z(z_k) = \sum_{\text{all } i} P(X = x_i)\,P(Y = z_k - x_i), \quad k = 1, 2, \ldots \quad (2.81)$$
To prove Equation 2.81, we see that
$$p_Z(z_k) = P(Z = z_k) = P(X + Y = z_k) = \sum_{\text{all } i} P(X = x_i,\ Y = z_k - x_i), \quad k = 1, 2, \ldots$$
Because X and Y are independent, the above summation equals
$$\sum_{\text{all } i} P(X = x_i)\,P(Y = z_k - x_i), \quad k = 1, 2, \ldots$$
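Because Equation 2.81 is a discrete convolution, the PMF of a sum can be computed mechanically. Below is a minimal sketch (assuming NumPy) that reproduces Table 2.11 for an illustrative value p = 0.3:

```python
import numpy as np

# PMF of Y = X1 + X2 by discrete convolution (Eq. 2.81);
# both Xi are Bernoulli with success probability p (Table 2.10).
p = 0.3                         # illustrative value
p_X = np.array([1 - p, p])      # [P(Xi = 0), P(Xi = 1)]

p_Y = np.convolve(p_X, p_X)     # [P(Y = 0), P(Y = 1), P(Y = 2)]
print(p_Y)                      # [0.49 0.42 0.09] = [(1-p)^2, 2(1-p)p, p^2]
```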

2.2.2 Continuous Variables
Now, we extend the consideration to continuous variables. Similar to the case of discrete variables, when X and Y are independent continuous random variables with PDFs fX(x) and fY(y), the sum

Z = X + Y (2.82)

has PDF
$$f_Z(z) = \int_{-\infty}^{\infty} f_X(x)\,f_Y(z-x)\,dx \quad \text{(2.83a)}$$
or
$$f_Z(z) = \int_{-\infty}^{\infty} f_X(z-y)\,f_Y(y)\,dy \quad \text{(2.83b)}$$
Equations 2.83a and 2.83b are convolution integrals.


To prove Equations 2.83a and 2.83b, let us consider the CDF of the sum Z, FZ(z), as follows:
$$F_Z(z) = P(Z \le z) = P(X + Y \le z) = \iint_{x+y\le z} f_{XY}(x, y)\,dx\,dy = \int_{-\infty}^{\infty}\left[\int_{-\infty}^{z-x} f_{XY}(x, y)\,dy\right]dx$$
Denote w = x + y; then y = w − x, and therefore
$$F_Z(z) = \int_{-\infty}^{\infty}\left[\int_{-\infty}^{z} f_{XY}(x, w-x)\,dw\right]dx = \int_{-\infty}^{z}\left[\int_{-\infty}^{\infty} f_{XY}(x, w-x)\,dx\right]dw$$
Furthermore,
$$f_Z(z) = \frac{d}{dz}F_Z(z) = \int_{-\infty}^{\infty} f_{XY}(x, z-x)\,dx \quad (2.84)$$
This is the relationship for the sum of general variables X and Y. Note that when X and Y are independent, we have
$$f_Z(z) = \int_{-\infty}^{\infty} f_X(x)\,f_Y(z-x)\,dx$$

2.2.2.1 Sums of Normally Distributed Variables

2.2.2.1.1  Standardized Variables
First, consider the standardized variables X ~ N(0, 1) and Y ~ N(0, 1), whose PDFs are
$$f_X(x) = \frac{1}{\sqrt{2\pi}}\,e^{-\frac{x^2}{2}} \quad (2.85)$$
and
$$f_Y(y) = \frac{1}{\sqrt{2\pi}}\,e^{-\frac{y^2}{2}} \quad (2.86)$$

The PDF of

Z = X + Y (2.87)

is given by
$$f_Z(z) = \frac{1}{\sqrt{2\pi}\,\sqrt{2}}\,e^{-\frac{z^2}{2(\sqrt{2})^2}} \quad (2.88)$$
That is, we have
$$Z \sim N\left(0, \sqrt{2}\right) \quad (2.89)$$
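Equation 2.88 can also be verified by discretizing the convolution integral of Equation 2.83a. A minimal sketch assuming NumPy (the grid bounds and spacing are arbitrary choices):

```python
import numpy as np

# Numerical convolution of two standard normal PDFs (Eq. 2.83a),
# compared with the closed-form N(0, sqrt(2)) density of Eq. 2.88.
dx = 0.01
x = np.arange(-10.0, 10.0, dx)
f = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # N(0, 1) PDF, Eqs. 2.85/2.86

f_Z = np.convolve(f, f) * dx                 # Riemann-sum approximation
z = -20.0 + dx * np.arange(f_Z.size)         # support of the convolution
f_exact = np.exp(-z**2 / 4) / np.sqrt(4 * np.pi)

print(np.abs(f_Z - f_exact).max())           # tiny: the two curves agree
```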

2.2.2.1.2  General Variables

Now, consider the general case of X ~ N(μX, σX) and Y ~ N(μY, σY), whose PDFs are
$$f_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma_X}\,e^{-\frac{(x-\mu_X)^2}{2\sigma_X^2}} \quad (2.90)$$
and
$$f_Y(y) = \frac{1}{\sqrt{2\pi}\,\sigma_Y}\,e^{-\frac{(y-\mu_Y)^2}{2\sigma_Y^2}} \quad (2.91)$$
The PDF of

Z = X + Y (2.92)

is
$$f_Z(z) = \frac{1}{\sqrt{2\pi}\,\sqrt{\sigma_X^2+\sigma_Y^2}}\,e^{-\frac{(z-\mu_X-\mu_Y)^2}{2\left(\sigma_X^2+\sigma_Y^2\right)}} \quad (2.93)$$

That is,
$$Z \sim N\left(\mu_X + \mu_Y,\ \sqrt{\sigma_X^2 + \sigma_Y^2}\right) \quad (2.94)$$

2.2.2.2 Sums of n Normally Distributed Variables

The above can be expanded to more than two normally distributed variables.

2.2.2.2.1  Standard Variables

If n mutually independent standard variables Xi are normally distributed, that is,
$$X_i \sim N(0, 1), \quad i = 1, 2, \ldots n \quad (2.95)$$
then we have
$$\sum_{i=1}^n X_i \sim N\left(0, \sqrt{n}\right) \quad (2.96)$$

2.2.2.2.2  General Variables

If n mutually independent general variables Xi are normally distributed,
$$X_i \sim N\left(\mu_{X_i}, \sigma_{X_i}\right), \quad i = 1, 2, \ldots n \quad (2.97)$$
then we have
$$\sum_{i=1}^n X_i \sim N\left(\sum_{i=1}^n \mu_{X_i},\ \sqrt{\sum_{i=1}^n \sigma_{X_i}^2}\right) \quad (2.98)$$

2.2.2.2.3  Linear Combination of Normally Distributed Variables

Suppose n mutually independent general variables Xi are normally distributed,
$$X_i \sim N\left(\mu_{X_i}, \sigma_{X_i}\right), \quad i = 1, 2, \ldots n \quad (2.99)$$
With a set of constants c1, c2, …cn, we have
$$\sum_{i=1}^n c_i X_i \sim N\left(\sum_{i=1}^n c_i\mu_{X_i},\ \sqrt{\sum_{i=1}^n c_i^2\sigma_{X_i}^2}\right) \quad (2.100)$$

2.3 OTHER FUNCTIONS OF RANDOM VARIABLES


We now consider other functions of random variables.

2.3.1 Distributions of the Multiplication of X and Y

Generally, when X and Y are independent continuous random variables with PDFs fX(x) and fY(y), the product

Z = XY (2.101)

has PDF
$$f_Z(z) = \int_{-\infty}^{\infty} \frac{f_X(x)\,f_Y(z/x)}{|x|}\,dx \quad (2.102)$$
or
$$F_Z(z) = \int_{-\infty}^{z}\int_{-\infty}^{\infty} \frac{f_X(x)\,f_Y(w/x)}{|x|}\,dx\,dw \quad (2.103)$$
When X and Y are not independent, with joint PDF fXY(x, y),
$$f_Z(z) = \int_{-\infty}^{\infty} \frac{f_{XY}(x, z/x)}{|x|}\,dx \quad (2.104)$$
or
$$F_Z(z) = \int_{-\infty}^{z}\int_{-\infty}^{\infty} \frac{f_{XY}(x, w/x)}{|x|}\,dx\,dw \quad (2.105)$$

2.3.2 Distributions of Sample Variance, Chi-Square (χ²)

We now consider the distribution of sums of squares.

2.3.2.1 Sample Variance

The sample variance $S_X^2$ is used to estimate the variance $\sigma_X^2$ of random variable X:
$$S_X^2 = \frac{1}{n-1}\sum_{i=1}^n \left(x_i - \bar{X}\right)^2 \quad \text{(2.106a)}$$
It is also seen that
$$S_X^2 = \frac{1}{n-1}\left(\sum_{i=1}^n x_i^2 - n\bar{X}^2\right) \quad \text{(2.106b)}$$
Here, $\bar{X}$ is the sample mean of X. If X is normal, the scaled sample variance follows an exact chi-square distribution; if X is not normal, the chi-square result still holds approximately.

We can have the following. Let Z be a set of standard normal random variables,

Z = {z1, z2, z3, …} (2.107)

2.3.2.2 Chi-Square Distribution
The chi-square distribution is defined as
$$\chi_n^2 = \sum_{i=1}^n z_i^2 \quad (2.108)$$

2.3.2.3 CDF of Chi-Square, n = 1
In the case of one degree of freedom, that is, n = 1, we have the CDF
$$F_{\chi_1^2}(u) = P\left(\chi_1^2 \le u\right) = P\left(z_1^2 \le u\right) = P\left(-\sqrt{u} < z_1 \le \sqrt{u}\right) = \Phi\left(\sqrt{u}\right) - \Phi\left(-\sqrt{u}\right) \quad (2.109)$$

2.3.2.4 PDF of Chi-Square, n = 1
Furthermore, in the case of one degree of freedom, that is, n = 1, we have the PDF
$$f_{\chi_1^2}(u) = \frac{1}{\sqrt{2\pi u}}\,e^{-\frac{u}{2}}, \quad u > 0 \quad (2.110)$$

2.3.2.5 Mean
The mean is given by
$$\mu_{\chi_1^2} = 1 \quad (2.111)$$

2.3.2.6 Variance
The variance is
$$\sigma_{\chi_1^2}^2 = 2 \quad (2.112)$$

2.3.2.7 PDF of Chi-Square, n > 1

Using the convolution integration, it can be proven that
$$f_{\chi_n^2}(u) = \frac{u^{n/2-1}}{2^{n/2}\,\Gamma(n/2)}\,e^{-\frac{u}{2}}, \quad u > 0 \quad (2.113)$$
where Γ(n/2) is the gamma function of n/2. That is,
$$\Gamma(n/2) = \int_0^{\infty} t^{n/2-1}e^{-t}\,dt \quad (2.114)$$

2.3.2.8 Reproductive
The chi-square distribution is reproductive, which can be written as
$$\chi_n^2 = \sum_{i=1}^n z_i^2 = \sum_{i=1}^k z_i^2 + \sum_{i=k+1}^n z_i^2 = \chi_k^2 + \chi_{n-k}^2 \quad (2.115)$$

2.3.2.9 Approximation
When n is sufficiently large, say

n > 25

the chi-square distribution approaches the normal distribution. Let
$$Y = \chi_n^2 \quad (2.116)$$
Then, approximately,
$$\frac{Y - n}{\sqrt{2n}} \sim N(0, 1) \quad (2.117)$$

2.3.2.10 Mean of Y
Consider the mean
$$\mu_{\chi_n^2} = n \quad (2.118)$$

2.3.2.11 Variance of Y
The variance is given by
$$\sigma_{\chi_n^2}^2 = 2n \quad (2.119)$$


2.3.2.12 Square Root of Chi-Square (χ²)

Now, let us consider the square root of chi-square, denoted as χ.

2.3.2.12.1  PDF of χ
First, the PDF is given as
$$f_{\chi_n}(v) = \frac{1}{\Gamma(n/2)}\,v\left(\frac{v^2}{2}\right)^{n/2-1} e^{-\frac{v^2}{2}}, \quad v > 0 \quad (2.120)$$
Note that when n = 2, it reduces to a special Rayleigh distribution whose σ = 1:
$$f_H(h) = \frac{h}{\sigma^2}\,e^{-\frac{1}{2}\left(\frac{h}{\sigma}\right)^2}, \quad h > 0 \quad (2.121)$$

2.3.2.12.2  Mean Value of χ

The mean of χ is
$$\mu_{\chi_n} = \sqrt{2}\,\frac{\Gamma\left(\dfrac{n+1}{2}\right)}{\Gamma\left(\dfrac{n}{2}\right)} \quad (2.122)$$

2.3.2.12.3  Variance of χ
The variance of χ is
$$\sigma_{\chi_n}^2 = n - \mu_{\chi_n}^2 \quad (2.123)$$

2.3.2.13 Gamma Distribution and Chi-Square Distribution

Let us consider a new type of distribution called the gamma distribution, which is closely related to the chi-square distribution.

2.3.2.13.1  PDF
The PDF of the gamma distribution is given by
$$f_X(x) = \frac{\lambda}{\Gamma(r)}(\lambda x)^{r-1}e^{-\lambda x}, \quad x > 0 \quad (2.124)$$

2.3.2.13.2  Mean
The mean of the gamma distribution is
$$\mu_X = \frac{r}{\lambda} \quad (2.125)$$

2.3.2.13.3  Variance
The variance of the gamma distribution is
$$\sigma_X^2 = \frac{r}{\lambda^2} \quad (2.126)$$
Here, λ and r are positive numbers.

2.3.2.14 Relation between Chi-Square $\chi_n^2$ and Sample Variance $S_X^2$

In engineering statistics, we often calculate the mean values and variances of samples. Let us first consider the sample variance as follows.
Multiplying both sides of Equation 2.106a by $\dfrac{n-1}{\sigma_X^2}$ results in
$$\frac{n-1}{\sigma_X^2}S_X^2 = \frac{1}{\sigma_X^2}\sum_{i=1}^n \left(x_i - \bar{X}\right)^2 \quad (2.127)$$

Inserting $0 = -\mu_X + \mu_X$ between $x_i$ and $-\bar{X}$, we further have
$$\sum_{i=1}^n\left(\frac{x_i - \mu_X}{\sigma_X} - \frac{\bar{X} - \mu_X}{\sigma_X}\right)^2 = \sum_{i=1}^n\left[\left(\frac{x_i - \mu_X}{\sigma_X}\right)^2 - 2\left(\frac{x_i - \mu_X}{\sigma_X}\right)\left(\frac{\bar{X} - \mu_X}{\sigma_X}\right) + \left(\frac{\bar{X} - \mu_X}{\sigma_X}\right)^2\right]$$
$$= \sum_{i=1}^n\left(\frac{x_i - \mu_X}{\sigma_X}\right)^2 - n\left(\frac{\bar{X} - \mu_X}{\sigma_X}\right)^2$$
Thus,
$$\frac{n-1}{\sigma_X^2}S_X^2 = \sum_{i=1}^n\left(\frac{x_i - \mu_X}{\sigma_X}\right)^2 - n\left(\frac{\bar{X} - \mu_X}{\sigma_X}\right)^2 = \sum_{i=1}^n\left(\frac{x_i - \mu_X}{\sigma_X}\right)^2 - \sum_{i=1}^n\left(\frac{\bar{X} - \mu_X}{\sigma_X}\right)^2 \quad (2.128)$$

Note that
$$\sum_{i=1}^n\left(\frac{x_i - \mu_X}{\sigma_X}\right)^2 = \chi_n^2 \quad (2.129)$$
and it can be proven (see Section 2.5.1.2, the Lindeberg–Levy theorem) that
$$\sum_{i=1}^n\left(\frac{\bar{X} - \mu_X}{\sigma_X}\right)^2 = \chi_1^2 \quad (2.130)$$


and therefore
$$\frac{n-1}{\sigma_X^2}S_X^2 = \chi_n^2 - \chi_1^2 = \chi_{n-1}^2 \quad (2.131)$$
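Equation 2.131 is easy to check by simulation: for normal samples of size n, the scaled sample variance should have the chi-square moments of Equations 2.118 and 2.119 with n − 1 degrees of freedom. A minimal sketch assuming NumPy, with arbitrary illustrative parameters:

```python
import numpy as np

# Monte Carlo check of (n-1) * S_X^2 / sigma_X^2 ~ chi-square, n-1 DOF (Eq. 2.131).
n, trials = 10, 200_000            # illustrative sample size and repetitions
mu_X, sigma_X = 3.0, 2.0           # arbitrary normal parameters

rng = np.random.default_rng(1)
samples = rng.normal(mu_X, sigma_X, size=(trials, n))
s2 = samples.var(axis=1, ddof=1)   # unbiased sample variance, Eq. 2.106a
u = (n - 1) * s2 / sigma_X**2

print(u.mean(), n - 1)             # chi-square mean n - 1 = 9       (Eq. 2.118)
print(u.var(), 2 * (n - 1))        # chi-square variance 2(n-1) = 18 (Eq. 2.119)
```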

2.3.3 Distributions of Ratios of Random Variables

2.3.3.1 Distribution of Variable Ratios
If random variables X and Y have joint PDF fXY(x, y), the ratio
$$Z = \frac{X}{Y} \quad (2.132)$$
has the PDF
$$f_Z(z) = \int_{-\infty}^{\infty} |y|\,f_{XY}(yz, y)\,dy \quad (2.133)$$
If X and Y are independent, then
$$f_Z(z) = \int_{-\infty}^{\infty} |y|\,f_X(yz)\,f_Y(y)\,dy \quad (2.134)$$

where f X(x) and f Y (y) are, respectively, the PDF of variables X and Y.

2.3.3.2 Student's Distribution
2.3.3.2.1  Student's Random Variable
A random variable with Student's distribution, denoted by Tn, is the ratio of a standard normal variable Z to the square root of a chi-square variable divided by its degrees of freedom, that is,
$$T_n = \frac{Z}{\sqrt{\chi_n^2/n}} \quad (2.135)$$

2.3.3.2.2  PDF of Student's Distribution

The PDF of Student's distribution is given by
$$f_T(t) = \frac{\Gamma\left(\dfrac{n+1}{2}\right)}{\sqrt{n\pi}\,\Gamma\left(\dfrac{n}{2}\right)\left(\dfrac{t^2}{n}+1\right)^{(n+1)/2}} \quad (2.136)$$

2.3.3.2.3  Mean of Student's Distribution

The mean is

μT = 0,  n > 1 (2.137)

2.3.3.2.4  Variance of Student's Distribution

The variance is
$$\sigma_T^2 = \frac{n}{n-2}, \quad n > 2 \quad (2.138)$$

2.3.3.2.5  Relation with the Standard Normal Distribution

It is seen that the variable $Z = \dfrac{\bar{X} - \mu_X}{\sigma_X/\sqrt{n}}$, which standardizes the sample mean $\bar{X}$ (whose variance is $\sigma_X^2/n$), has the standard normal distribution. If the standard deviation is known, Z can be used to estimate how close the sample mean is to the true mean. However, the standard deviation (or variance) is often unknown and must be estimated by the sample variance $S_X^2$. In this case, we have
$$\frac{\bar{X} - \mu_X}{S_X/\sqrt{n}} = \frac{\left(\bar{X} - \mu_X\right)\big/\left(\sigma_X/\sqrt{n}\right)}{S_X/\sigma_X} = \frac{\left(\bar{X} - \mu_X\right)\big/\left(\sigma_X/\sqrt{n}\right)}{\sqrt{\dfrac{(n-1)S_X^2}{(n-1)\sigma_X^2}}} = \frac{Z}{\sqrt{\dfrac{\chi_{n-1}^2}{n-1}}} = T_{n-1} \quad (2.139)$$
That is,
$$T_{n-1} = \frac{\bar{X} - \mu_X}{S_X/\sqrt{n}} \quad (2.140)$$

Here, Tn−1 is Student's t distributed with n − 1 degrees of freedom.
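The statistic of Equation 2.140 is also straightforward to simulate. The sketch below (assuming NumPy; all parameter values are illustrative) checks that its sample variance matches Equation 2.138 with n replaced by the n − 1 degrees of freedom:

```python
import numpy as np

# Monte Carlo check of T_{n-1} = (Xbar - mu_X) / (S_X / sqrt(n))  (Eq. 2.140).
n, trials = 8, 400_000             # illustrative sizes
mu_X, sigma_X = 5.0, 2.0           # arbitrary normal parameters

rng = np.random.default_rng(2)
x = rng.normal(mu_X, sigma_X, size=(trials, n))
t = (x.mean(axis=1) - mu_X) / (x.std(axis=1, ddof=1) / np.sqrt(n))

nu = n - 1                         # degrees of freedom
print(t.mean())                    # ~0           (Eq. 2.137)
print(t.var(), nu / (nu - 2))      # ~1.4 vs. 7/5 (Eq. 2.138 with n -> nu)
```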

2.3.3.3 F Distribution
We now consider the distribution of F random variables as follows.

2.3.3.3.1  Definition of the F Random Variable

First, consider the definition of the F random variable, given by
$$F(u, v) = \frac{\chi_u^2/u}{\chi_v^2/v} \quad (2.141)$$


2.3.3.3.2  PDF of the F Random Variable

The PDF is
$$f_{F_{u,v}}(f) = \frac{\Gamma\left(\dfrac{u+v}{2}\right)\left(\dfrac{u}{v}\right)^{u/2} f^{\,u/2-1}}{\Gamma\left(\dfrac{u}{2}\right)\Gamma\left(\dfrac{v}{2}\right)\left(\dfrac{u}{v}f + 1\right)^{(u+v)/2}}, \quad f > 0 \quad (2.142)$$

2.3.3.3.3  Mean of the F Random Variable

The mean is
$$\mu_{F_{u,v}} = \frac{v}{v-2}, \quad v > 2 \quad (2.143)$$

2.3.3.3.4  Variance of the F Random Variable

The variance is
$$\sigma_{F_{u,v}}^2 = \frac{2v^2(u+v-2)}{u(v-2)^2(v-4)}, \quad v > 4 \quad (2.144)$$

2.4 DESIGN CONSIDERATIONS
With the help of the above-mentioned random variables, including their definitions, means, variances, PDFs, and CDFs, let us consider design under random loads. The focus is on the reliability, or failure probability, of these systems.

2.4.1 Further Discussion of Probability-Based Design


From Equation 1.154, the failure probability of a system is given as

pf = P(F ≤ 0) = P(R − Q ≤ 0) (2.145)

By specifying the design value, we have

pf = P(RD − Q D ≤ 0) (2.146)

By substitution of Equations 1.132 and 1.133 into Equation 2.146, we can have

pf = P(γQN − φR N > 0) (2.147)


Functions of Random Variables 93

Furthermore, if more than one load is applied, the limit state can be written as
$$F = -\left(\sum_i \gamma_i Q_i - \varphi R_N\right) = 0 \quad (2.148)$$
Here, Qi is the ith nominal load, and γi is the corresponding load factor.
Assume all Qi and R are normally distributed. That is,
$$Q_i \sim N\left(\mu_{Q_i}, \sigma_{Q_i}\right) \quad (2.149)$$
Then F will also be normally distributed:

F ~ N(μF, σF) (2.150)

It may be shown that
$$\mu_F = \mu_R - \sum_i \mu_{Q_i} \quad (2.151)$$
and
$$\sigma_F = \sqrt{\sigma_R^2 + \sum_i \sigma_{Q_i}^2} \quad (2.152)$$

Using the standard variable (see Equation 1.153),
$$\beta = \frac{\mu_F}{\sigma_F} = \frac{\mu_R - \displaystyle\sum_i \mu_{Q_i}}{\sqrt{\sigma_R^2 + \displaystyle\sum_i \sigma_{Q_i}^2}} \quad (2.153)$$
β is defined as the reliability index. Recalling Equation 1.154, the failure probability can be computed as

pf = Φ(−β) (2.154)

From Equation 2.154, if the allowed failure probability is given, then

[β] = −Φ−1([pf ]) (2.155)

Here, [.] stands for allowed value of (.). From Equation 2.155, if only one load is
considered and assuming the standard deviation σF can be determined, then

μR − μQ = [β]σF = const. (2.156)
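The chain from Equation 2.153 to Equation 2.154 can be evaluated directly. A minimal sketch using only the Python standard library; the resistance and load statistics are invented solely for illustration:

```python
import math

def Phi(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Reliability index (Eq. 2.153) and failure probability (Eq. 2.154)
# for one resistance R and two loads; all statistics are illustrative.
mu_R, sigma_R = 100.0, 10.0
mu_Q = [40.0, 20.0]
sigma_Q = [8.0, 6.0]

beta = (mu_R - sum(mu_Q)) / math.sqrt(sigma_R**2 + sum(s**2 for s in sigma_Q))
p_f = Phi(-beta)
print(beta, p_f)   # beta ~ 2.83, p_f ~ 2.3e-3
```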



FIGURE 2.6  Relationship between mean values (intensity axis showing μQ, RD = QD, and μR, with the gaps κQσQ and κRσR inside the distance μR − μQ).

From Figure 2.6, when the distance between the two mean values μR and μQ is fixed, under the requirement of [β], one can always find a middle point RD = QD. Recalling Equations 1.132 through 1.139, we can write

QD = μQ + κQσQ = βQ(1 + κQCQ)QN = γQN

and

RD = μR − κRσR = βR(1 − κRCR)RN = φRN

From Equation 2.156, we further have

μR − κRσR − (μQ + κQσQ) = [β]σF − κRσR − κQσQ = φRN − γQN (2.157)

Let
$$[\beta]\sigma_F - \kappa_R\sigma_R - \kappa_Q\sigma_Q = [\beta]\sqrt{\sigma_R^2 + \sum_i \sigma_{Q_i}^2} - \kappa_R\sigma_R - \kappa_Q\sigma_Q = 0 \quad (2.158)$$

For a given allowed reliability index [β], if the standard deviations σR and σQ are known, then by choosing proper values of κR and κQ, the critical point is determined at QD = RD, or

γQN = φRN (2.159)

That is, Equation 2.159 is determined by the required condition of allowed failure probability. This can be extended to multiple loads. Suppose that there exist n loads. The limit state
$$\sum_{i=1}^n \gamma_i Q_i - \varphi R_N = 0 \quad (2.160)$$
must be determined by a preset allowed failure probability [pf]. Here, γi and Qi are, respectively, the ith load factor and the ith nominal load.

2.4.2 Combination of Loads
With the help of Equation 2.160, if the allowed failure probability is given and the resistance is also known, the summation $\sum_i \gamma_i Q_i$, namely, the load combination, can be determined.
Consider the inverse problem: given the resistance and the failure probability, determine the load factors γi. In Equation 2.160, φRN is given. Knowing all the nominal values of the loads Qi, we now have n unknowns (the γi).
Consider the case of two loads:

γ1Q1 + γ2Q2 = φRN (2.161)

When the value φR is specified, it determines an equal-φR plane, or φR-plane, shown in Figure 2.7a as plane A-B-C-D, which is parallel to plane O-Q1-Q2.

FIGURE 2.7  Loads and resistance, case 1. (a) R-Q1-Q2 three-dimensional plot showing the φR plane and the β plane, (b) R-Q2 plane with the safe design region, (c) R-Q1 plane, and (d) Q1-Q2 plane.

A design value of φR chosen above the β plane will yield a larger value of β, that is, a smaller value of failure probability. Thus, we can have a safe design region (see Figure 2.7b, for example). Now, let the value φR be the design value φRN for the designed resistance, and γiQi be the ith design load.
Figure 2.7 shows a three-dimensional plot with two loads, Q1 and Q2. In Figure 2.7a, the thick solid line in plane R-Q1, which is also shown in Figure 2.7c, is

φR = γ1Q1 (2.162)

Equation 2.162 is obtained under the condition Q2 = 0, when the limit state F = 0 is reached. In Figure 2.7a, the thick broken line in plane R-Q2, which is also shown in Figure 2.7d, is

φR = γ2Q2 (2.163)

FIGURE 2.8  Loads and resistance, case 2. (a) R-Q1-Q2 three-dimensional plot, (b) R-Q2 plane, (c) R-Q1 plane, and (d) Q1-Q2 plane showing the two load combination lines γ1Q1 + γ2Q2 = φRN and γ1′Q1′ + γ2′Q2′ = φRN.

Equation 2.163 is obtained under the condition Q1 = 0, when the limit state F = 0 is reached.
Because Equations 2.162 and 2.163 are determined based on a given allowed failure probability, namely, the given value of [β], these two lines define a special plane called the equal-β plane, or simply the β plane, shown in Figure 2.7a (plane O-A-C).
When we have different combinations of Q1 and Q2, with given resistance φR, the load factors will be different. Let us consider two possible load combinations. The first case is denoted by Q1 and Q2, and the second is denoted by Q1′ and Q2′. Correspondingly, we have γ1 and γ2 as well as γ1′ and γ2′. The second case is shown in Figure 2.8.
The intersection of the β plane formed by Q1-Q2 and the φR plane forms a straight line; Equation 2.161 is the corresponding equation (see Figures 2.7d and 2.8d). The intersection of an alternative β plane formed by Q1′-Q2′ and the φR plane forms another straight line; Equation 2.164 is the corresponding equation (see Figure 2.8d).

γ1′Q1′ + γ2′Q2′ = φRN (2.164)


Example 2.11

The Q1-Q2 combination can be a large truck load combined with a small earthquake load, acting on a bridge. The Q1′-Q2′ combination can be a small truck load combined with a large earthquake load, acting on the same bridge.
In Figure 2.8, only the area O-A′-E-C is the safe region.

2.5 CENTRAL LIMIT THEOREMS AND APPLICATIONS


Although random events are individually unpredictable, obvious patterns can be found through a large number of repeated observations. In this section, we study these patterns based on the theory of limits. There are two fundamental types of laws: the laws of large numbers and the central limit theorems.
The law of large numbers unveils the stability of mean values of random events.
In the literature, there are several theorems in this regard that describe the results of
virtually the same experiments repeated a large number of times. In these situations,
the average of the results tends to be close to the expected value, and will become
closer as more experiments are carried out. The law of large numbers explains the
stable long-term results for the averages of random events. In this section, however,
we skip the corresponding formulae of these theorems. Interested readers may con-
sult typical textbooks to study the details.
Practically speaking, quite often, when the pool of random variables becomes
sufficiently large, many variables and/or their functions can be approximated by
normal distributions. The corresponding theorems about these normal distributions
are called central limit theorems.

2.5.1 Central Limit Theorems


Specifically, central limit theorems provide conditions under which the distribution of sums of a large number of independent random variables tends to become normal. In this section, the focus is on central limit theorems because they are more relevant to the main topics of random processes.

2.5.1.1 Lyapunov Central Limit Theorem

(Aleksandr Mikhailovich Lyapunov, 1857–1918)
Let X1, X2, …Xn be a sequence of independent random variables with means μ1, μ2, …μn and variances $\sigma_1^2, \sigma_2^2, \ldots \sigma_n^2$, respectively. Sn is the sum of the Xi:
$$S_n = \sum_{i=1}^n X_i \quad (2.165)$$

The mean and variance of Sn are given by
$$\mu_{S_n} = \sum_{i=1}^n \mu_i \quad (2.166)$$
and
$$\sigma_{S_n}^2 = \sum_{i=1}^n \sigma_i^2 \quad (2.167)$$

In the limit as n goes to infinity, the standardized variable of Sn, Zn, has the standard normal distribution:
$$Z_n = \frac{S_n - \mu_{S_n}}{\sigma_{S_n}} \quad (2.168)$$
$$\lim_{n\to\infty} F_{Z_n}(\xi) = \Phi(\xi) \quad (2.169)$$
That is,

 n 


∑ ( X i − µ )i 
 1 ξ ζ2



lim P  i =1
< ξ = e 2 dζ (2.170)
n→∞

n
 2π −∞



∑σ
i =1
2
i 


Functions of Random Variables 99

E(Sn) = 0 (2.171)

D(Sn) = 1 (2.172)

The above statement has conditions, called the Lyapunov conditions:

1. Individual terms in the sum make negligible contributions.
2. It is very unlikely that any single term contributes disproportionately.

2.5.1.2 Lindeberg–Levy Central Limit Theorem (Jarl W. Lindeberg, 1876–1932; Paul P. Lévy, 1886–1971)
Let X1, X2, …Xn be a sequence of independent random variables with identical mean μ and variance σ². Sn is the sum of the Xi:
$$S_n = \sum_{i=1}^n X_i \quad (2.173)$$

In the limit as n goes to infinity, the standardized variable of Sn, Zn, has the standard normal distribution:
$$Z_n = \frac{S_n - n\mu}{\sqrt{n}\,\sigma} \quad (2.174)$$

That is,
$$\lim_{n\to\infty} F_{Z_n}(\xi) = \Phi(\xi) \quad (2.175)$$
or
$$\lim_{n\to\infty} P\left(\frac{\displaystyle\sum_{i=1}^n X_i - n\mu}{\sqrt{n}\,\sigma} < \xi\right) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\xi} e^{-\frac{\zeta^2}{2}}\,d\zeta \quad (2.176)$$

Again, we have

E(Zn) = 0 (2.177)

and

D(Zn) = 1 (2.178)
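The convergence stated in Equations 2.174 through 2.176 is easy to visualize numerically: standardized sums of even strongly non-normal variables rapidly approach N(0, 1). A minimal sketch assuming NumPy:

```python
import numpy as np

# Lindeberg-Levy check: standardized sums of i.i.d. exponential variables
# (mu = sigma = 1) approach the standard normal distribution.
n, trials = 100, 200_000
rng = np.random.default_rng(3)

s = rng.exponential(1.0, size=(trials, n)).sum(axis=1)
z = (s - n * 1.0) / (np.sqrt(n) * 1.0)   # Z_n of Eq. 2.174

print(z.mean(), z.var())                 # ~0 and ~1 (Eqs. 2.177-2.178)
print(np.mean(z < 1.0))                  # ~Phi(1) = 0.8413, per Eq. 2.176
```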

2.5.1.3 De Moivre–Laplace Central Limit Theorem (Abraham De Moivre, 1667–1754; Pierre-Simon Laplace, 1749–1827)
Let K be the observation indicator of n Bernoulli tests (also called n 0–1 tests, n binomial tests, etc.; see Equation 1.53).
First, the random variable is

K = {0, 1} (2.179)

The probability of K = 1 (a success) is

P(K = 1) = p (2.180)

and the probability of K = 0 is

P(K = 0) = 1 − p (2.181)

Let X be the sum of these observations:
$$X = \sum_{i=1}^n K_i \quad (2.182)$$
As we see from Equation 2.182, X is the number of successes (denoted by 1) in these n 0–1 tests. Therefore, X has the binomial distribution:
$$p_X(x) = C_n^x\,p^x(1-p)^{n-x}, \quad x = 0, 1, 2, \ldots n \quad (2.183)$$


Note that the mean and variance of n-Bernoulli distribution are

μX = np (2.184a)

and

σ 2X = np(1 − p)
(2.184b)

When n becomes sufficiently large, the PMF P(X = x) can be written approximately as
$$p_X(x) = \frac{1}{\sqrt{2\pi}\,\sigma_X}\,e^{-\frac{(x-\mu_X)^2}{2\sigma_X^2}} \quad (2.185)$$

Although the proofs of the Lyapunov and Lindeberg–Levy central limit theorems are skipped, we show in the following how to prove Equation 2.185 as an example.

Denoting $x_k = \dfrac{x - np}{\sqrt{np(1-p)}}$ for convenience, which is a standardized variable, we can see that when n → ∞, $x = np + \sqrt{np(1-p)}\,x_k \to \infty$ and (n − x) → ∞. Furthermore, with Stirling's approximation,
$$n! \approx \sqrt{2\pi}\,n^{n+\frac{1}{2}}e^{-n}$$
we thus have
$$p_X(x) \approx \frac{1}{\sqrt{2\pi np(1-p)}}\left(\frac{np}{x}\right)^{x+\frac{1}{2}}\left(\frac{n(1-p)}{n-x}\right)^{n-x+\frac{1}{2}}$$
It can be proven that when n → ∞,
$$\left(\frac{np}{x}\right)^{x+\frac{1}{2}} \approx e^{-x_k\sqrt{np(1-p)}\,-\,\frac{(1-p)x_k^2}{2}}$$
and
$$\left(\frac{n(1-p)}{n-x}\right)^{n-x+\frac{1}{2}} \approx e^{x_k\sqrt{np(1-p)}\,-\,\frac{p\,x_k^2}{2}}$$
Therefore,
$$p_X(x) = C_n^x\,p^x(1-p)^{n-x} = \frac{1}{\sqrt{2\pi np(1-p)}}\,e^{-\frac{x_k^2}{2}} = \frac{1}{\sqrt{2\pi np(1-p)}}\,e^{-\frac{(x-np)^2}{2np(1-p)}} = \frac{1}{\sqrt{2\pi}\,\sigma_X}\,e^{-\frac{(x-\mu_X)^2}{2\sigma_X^2}}$$

This is the de Moivre–Laplace central limit theorem, which provides an approximation of the binomial PMF with a normal PDF when n becomes large.

Example 2.12

Suppose there are 170 identical and independent computers in a department, and each of them has a 1% failure probability. Calculate the probability that no more than two computers fail.

1. Assume there are X malfunctioning computers. It is seen that X has a typical binomial distribution:
$$P(X = x) = C_n^x\,p^x(1-p)^{n-x} = C_{170}^x\,0.01^x\,0.99^{170-x}$$
Therefore,

P(0 ≤ X ≤ 2) = P(X = 0) + P(X = 1) + P(X = 2) = 1 × 1 × 0.1811 + 170 × 0.0100 × 0.1830 + 14,365 × 0.0001 × 0.1848 = 0.1811 + 0.3110 + 0.2655 = 0.7576

2. Use the Poisson approximation (see Equation 1.80):
$$P(X = x) \approx C_n^x\,p^x(1-p)^{n-x} \approx \frac{\lambda^x e^{-\lambda}}{x!}, \quad \text{with } \lambda = 170 \times 0.01 = 1.7$$
P(0 ≤ X ≤ 2) = P(X = 0) + P(X = 1) + P(X = 2) = (1.7⁰/0!)e⁻¹·⁷ + (1.7¹/1!)e⁻¹·⁷ + (1.7²/2!)e⁻¹·⁷ = 0.1827 + 0.3106 + 0.2640 = 0.7573

3. Use the de Moivre–Laplace central limit theorem, with np = 1.7 and $\sqrt{np(1-p)} = 1.2973$:
$$P(0 \le X \le 2) \approx \Phi\left(\frac{2 - np}{\sqrt{np(1-p)}}\right) - \Phi\left(\frac{0 - np}{\sqrt{np(1-p)}}\right) = \Phi(0.2312) - \Phi(-1.3104) = 0.5914 - 0.0950 = 0.4964$$
Considering that the number of failed computers cannot be smaller than 0, we can also use P(X ≤ 2) to approximate the result, that is,

P(X ≤ 2) ≈ Φ(0.2312) = 0.5914

Comparing the above approaches, we see that when the probability p is very small (in this case, p = 0.01), the Poisson approximation yields better results.
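The three numbers in Example 2.12 can be reproduced directly; below is a sketch using only the Python standard library:

```python
import math

def Phi(x):  # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

n, p = 170, 0.01

# (1) Exact binomial sum for P(0 <= X <= 2)
exact = sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(3))

# (2) Poisson approximation with lambda = n*p
lam = n * p
poisson = sum(lam**k * math.exp(-lam) / math.factorial(k) for k in range(3))

# (3) De Moivre-Laplace (normal) approximation
sd = math.sqrt(n * p * (1 - p))
normal = Phi((2 - lam) / sd) - Phi((0 - lam) / sd)

print(exact, poisson, normal)   # ~0.7576, ~0.7573, ~0.4964
```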

2.5.2 Distribution of the Product of Positive Random Variables

We now consider the distribution of the product of random variables that are positive. That is,
$$Y = \prod_{i=1}^n X_i, \quad X_i > 0 \quad (2.186)$$


Taking the natural logarithm of Y, we have
$$Z = \ln(Y) = \sum_{i=1}^n \ln(X_i) \quad (2.187)$$
Thus,

Y = e^Z (2.188)

As

n → ∞ (2.189)

we finally have
$$\frac{Z - E(Z)}{\sqrt{D(Z)}} \sim N(0, 1) \quad (2.190)$$
That is, Z is asymptotically normal, and Y is therefore asymptotically lognormally distributed.

2.5.3 Distribution of Extreme Values

2.5.3.1 CDF and PDF of the Distribution of Extreme Values
Consider the distribution of the largest possible values of n independent samples taken from a random variable set.
Let X be the random set with a total of n variables. Yn is the largest value in n independent samples.
The event {max(X1, X2, …Xn) ≤ y} is equivalent to {X1 ≤ y, X2 ≤ y, …Xn ≤ y}, or {all Xi ≤ y}. Therefore, we can write

P(Yn ≤ y) = P(all Xi ≤ y) = P[(X1 ≤ y) ∩ (X2 ≤ y) … ∩ (Xn ≤ y)] (2.191)

Because the Xi are independent, with identical CDF FX(x),
$$P(\text{all } X_i \le y) = P(X_1 \le y)\,P(X_2 \le y)\cdots P(X_n \le y) = F_{X_1}(y)\,F_{X_2}(y)\cdots F_{X_n}(y) = [F_X(y)]^n \quad (2.192)$$
Thus,
$$P(Y_n \le y) = F_{Y_n}(y) = [F_X(y)]^n \quad (2.193)$$
and
$$f_{Y_n}(y) = \frac{d\left[F_{Y_n}(y)\right]}{dy} = n[F_X(y)]^{n-1}f_X(y) \quad (2.194)$$

2.5.4 Special Distributions
2.5.4.1 CDF and PDF of the Extreme Value of Rayleigh Distributions
2.5.4.1.1  CDF of the Rayleigh Distribution
Recalling Equation 1.94 (the Rayleigh distribution),
$$f_H(h) = \frac{h}{\sigma^2}\,e^{-\frac{1}{2}\left(\frac{h}{\sigma}\right)^2}, \quad h > 0$$
and with the help of this PDF, the CDF of the Rayleigh distribution can be calculated (substituting u = s²/(2σ²)) as
$$F_H(h) = \int_0^h \frac{s}{\sigma^2}\,e^{-\frac{1}{2}\left(\frac{s}{\sigma}\right)^2}ds = \int_0^{\frac{h^2}{2\sigma^2}} e^{-u}\,du = 1 - e^{-\frac{h^2}{2\sigma^2}} \quad (2.195)$$

2.5.4.1.2  CDF
Furthermore, with the help of Equation 2.193, we can write the CDF of the largest value in n independent samples of the Rayleigh distribution as
$$F_{Y_n}(y) = \left(1 - e^{-\frac{y^2}{2\sigma^2}}\right)^n, \quad y \ge 0 \quad (2.196)$$
2.5.4.1.3  PDF
And the corresponding PDF is
$$f_{Y_n}(y) = n\left(1 - e^{-\frac{y^2}{2\sigma^2}}\right)^{n-1}\frac{y}{\sigma^2}\,e^{-\frac{y^2}{2\sigma^2}} \quad (2.197)$$

2.5.4.2 Extreme Value Type I Distribution

Many asymptotic distributions have the form of a double exponential, known as the extreme value type I (EVI) distribution, also known as the Gumbel distribution (Emil Julius Gumbel, 1891–1966).

2.5.4.2.1  CDF
The CDF of EVI is
$$F_{Y_n}(y) = e^{-e^{-\alpha(y-\beta)}} \quad (2.198)$$


2.5.4.2.2  PDF
The PDF of EVI is
$$f_{Y_n}(y) = \alpha\,e^{-\alpha(y-\beta)\,-\,e^{-\alpha(y-\beta)}} \quad (2.199)$$

Here, β is the characteristic largest value of Yn, and α is inversely related to the standard deviation of Yn. β and α are related to the number of samples n and the distribution of the Xi by
$$F_X(\beta) = 1 - \frac{1}{n} \quad (2.200)$$
or
$$\beta = F_X^{-1}\left(1 - \frac{1}{n}\right) \quad (2.201)$$
and

α = nfX(β) (2.202)

2.5.4.2.3  Mean of the Asymptotic Distribution

With the help of the parameters α and β, together with γ (Euler's constant, given in Equation 2.206), the mean of Yn, the largest value in n independent samples, can be written as
$$\mu_{Y_n} = \beta + \frac{\gamma}{\alpha} \quad (2.203)$$

2.5.4.2.4  Variance of the Asymptotic Distribution

The variance of Yn is
$$\sigma_{Y_n}^2 = \frac{\pi^2}{6\alpha^2} \approx \frac{1.645}{\alpha^2} \quad (2.204)$$
or the standard deviation is
$$\sigma_{Y_n} \approx \frac{1.282}{\alpha} \quad (2.205)$$

Here, γ is Euler's constant:
$$\gamma = -\int_0^{\infty} e^{-x}\ln x\,dx = 0.5772157\ldots \approx 0.577 \quad (2.206)$$

2.5.4.2.5 Approximation of the Distribution of the Extreme Value of the Rayleigh Distribution
We can also approximate the distribution of the maximum value among n Rayleigh-distributed samples.

2.5.4.2.5.1   Values of β and α.  First of all, the parameters β and α can be calculated as (see Gumbel 1958; Ang and Tang 1984)
$$\beta = \sigma\sqrt{2\ln n} \quad (2.207)$$
and
$$\alpha = \frac{\sqrt{2\ln n}}{\sigma} \quad (2.208)$$
respectively.

2.5.4.2.5.2   Mean.  The mean in this case is then calculated as
$$\mu_{Y_n} = \sigma\left(\sqrt{2\ln n} + \frac{\gamma}{\sqrt{2\ln n}}\right) \quad (2.209)$$

2.5.4.2.5.3   Variance.  It follows that the variance is given by
$$\sigma_{Y_n}^2 = \frac{\sigma^2\pi^2}{12\ln n} \quad (2.210)$$

Example 2.13

The EVI distribution is often used to model the peak annual flow of a river. It is measured that μY = 2000 m³/s and σY = 1000 m³/s.
Question (1): Find the CDF of the EVI distribution.
Question (2): In a particular year, find the probability of the peak flow exceeding 5000 m³/s.

1. We have
$$\alpha = \frac{1.282}{\sigma_Y} = 0.00128$$
$$\beta = \mu_Y - \frac{0.577}{\alpha} = 2000 - \frac{0.577}{0.00128} = 1549.8\ (\text{m}^3/\text{s})$$
So the CDF is
$$F_{Y_n}(y) = e^{-e^{-0.00128(y-1549.8)}}$$
2. We have
$$P(Y \ge 5000) = 1 - F_{Y_n}(5000) = 1 - e^{-e^{-0.00128(5000-1549.8)}} \approx 0.01$$
In other words, the corresponding return period TR (see Equation 1.55) is

TR = 1/0.01 = 100 (years)
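The arithmetic of Example 2.13 can be restated as a sketch using only the Python standard library (before rounding, the exceedance probability comes out near 0.012; the example rounds it to 0.01, giving TR = 100):

```python
import math

# EVI (Gumbel) parameters fitted from the measured mean and standard
# deviation of the peak annual flow (Example 2.13).
mu_Y, sigma_Y = 2000.0, 1000.0            # m^3/s
alpha = 1.282 / sigma_Y                   # from Eq. 2.205
beta = mu_Y - 0.577 / alpha               # from Eq. 2.203

F = lambda y: math.exp(-math.exp(-alpha * (y - beta)))   # Eq. 2.198
p_exceed = 1.0 - F(5000.0)
print(alpha, beta, p_exceed)   # ~0.00128, ~1549.9, ~0.012 (rounded to 0.01 above)
```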

2.5.4.3 Distribution of Minimum Values

In Sections 2.5.3, 2.5.4.1, and 2.5.4.2, we discussed the distribution of maximum values. Now, we consider the distribution of minimum values. That is, consider the distribution of the smallest possible values of n independent samples taken from a random variable set.
Let X be the random set with n total variables. Zn is the smallest value in n independent samples.
The event {min(X1, X2, …Xn) ≤ y} has the complement event {min(X1, X2, …Xn) > y}, which is equivalent to {X1 > y, X2 > y, …Xn > y}, or {all Xi > y}. Therefore, we can write

P(Zn ≤ y) = 1 − P(all Xi > y) = 1 − P[(X1 > y) ∩ (X2 > y) … (Xn > y)] (2.211)

Because the Xi are independent with identical CDF FX(x),
$$P(Z_n \le y) = 1 - P(X_1 > y)\,P(X_2 > y)\cdots P(X_n > y) = 1 - \left[1 - F_{X_1}(y)\right]\left[1 - F_{X_2}(y)\right]\cdots\left[1 - F_{X_n}(y)\right] = 1 - [1 - F_X(y)]^n \quad (2.212)$$
Thus,
$$P(Z_n \le y) = F_{Z_n}(y) = 1 - [1 - F_X(y)]^n \quad (2.213)$$
and
$$f_{Z_n}(y) = \frac{d\left[F_{Z_n}(y)\right]}{dy} = n[1 - F_X(y)]^{n-1}f_X(y) \quad (2.214)$$
dy

Note that if the Xi are independent but each has its own CDF FXi(x), then
$$F_{Z_n}(y) = 1 - \prod_{i=1}^n \left[1 - F_{X_i}(y)\right] \quad (2.215)$$

2.5.4.4 Extreme Value Type II Distribution

In Section 2.5.4.2, we discussed the EVI distribution. Now, we consider the extreme value type II (EVII) distribution, also known as the Fréchet distribution (Maurice Fréchet, 1878–1973), which is often used to model annual maximum wind velocity (see Gumbel 1958; Coles 2001).

2.5.4.4.1  CDF
First, consider the CDF. Suppose the variable Xi has CDF
$$F_X(x) = 1 - \beta\left(\frac{1}{x}\right)^k, \quad x \ge 0 \quad (2.216)$$
Let X be the random set with n total variables. Yn is the largest value in n independent samples, and k > 0 is the shape parameter. The asymptotic distribution of Yn can be written as
$$F_{Y_n}(y) = e^{-\left(\frac{u}{y}\right)^k}, \quad y \ge 0 \quad (2.217)$$

2.5.4.4.2  PDF
The PDF of EVII is
$$f_{Y_n}(y) = \frac{k}{u}\left(\frac{u}{y}\right)^{k+1}e^{-\left(\frac{u}{y}\right)^k}, \quad y \ge 0 \quad (2.218)$$

2.5.4.4.3  Mean
The mean of EVII is
$$\mu_{Y_n} = u\,\Gamma\left(1 - \frac{1}{k}\right) \quad (2.219)$$

2.5.4.4.4  Variance
Finally, the variance of EVII is
$$\sigma_{Y_n}^2 = u^2\left[\Gamma\left(1 - \frac{2}{k}\right) - \Gamma^2\left(1 - \frac{1}{k}\right)\right] \quad (2.220)$$

FIGURE 2.9  Several PDFs (normal, EVI, and EVII; PDF versus value of y).

Figure 2.9 shows comparisons among the normal, EVI, and EVII distributions. All these distributions have identical mean = 1 and standard deviation = 0.46. From Figure 2.9, we can see that EVI and EVII have heavier right tails, which means that the chance of obtaining a larger value of y, the extreme value, is comparatively greater.

2.5.4.5 Extreme Value Type III Distribution

Besides the extreme value type I and II distributions, we also use a third type of extreme value distribution, the extreme value type III (EVIII) distribution, also known as the Weibull distribution (Ernst Hjalmar Waloddi Weibull, 1887–1979), which is also used to model annual maximum wind velocity (see Gumbel 1958; Coles 2001).

2.5.4.5.1  CDF
Suppose the largest value of the variables Xi falls off toward a certain maximum value m, with CDF

FX(x) = 1 − c(m − x)^k,  x ≤ m, k > 0 (2.221)

Let X be the random set with n total variables. Yn is the largest value in n independent samples.
The distribution of Yn is
$$F_{Y_n}(y) = e^{-\left(\frac{m-y}{m-u}\right)^k}, \quad y \le m \quad (2.222)$$


2.5.4.5.2  PDF
The PDF of EVIII is
$$f_{Y_n}(y) = \frac{k}{m-u}\left(\frac{m-y}{m-u}\right)^{k-1}e^{-\left(\frac{m-y}{m-u}\right)^k}, \quad y \le m \quad (2.223)$$

Problems
1. X and Y are continuous variables with X > 0, Y > 0, and joint PDF
$$f_{XY}(x, y) = \frac{y^2}{A}\,e^{-(x+y/2)}$$
a. What is the suitable value of the parameter A?
b. Determine the marginal PDFs.
c. Find the conditional PDF of X when y = 1.
d. Find the covariance and correlation coefficient.
2. Random variables X and Y are independent with exponential PDFs, respectively given by
$$f_X(x) = \begin{cases} \lambda e^{-\lambda x}, & x > 0 \\ 0, & x \le 0 \end{cases}$$
and
$$f_Y(y) = \begin{cases} \nu e^{-\nu y}, & y > 0 \\ 0, & y \le 0 \end{cases}$$
Z is another random variable defined as
$$Z = \begin{cases} 1, & X \le Y \\ 0, & X > Y \end{cases}$$
a. Find the conditional PDF f_{X|Y}(x|y).
b. Find the PDF (PMF) and CDF of Z.
3. Suppose both g(x) and h(x) are PDFs. Show that

f(x) = αg(x) + (1 − α)h(x),  0 ≤ α ≤ 1

is also a PDF.
4. Derive the general normal PDF based on knowing the standard normal PDF.
5. The PMF of random variable X is shown in Table P2.1. Find the PMF of Z = X².

Table P2.1

    X     −2     −1     0     1      5
    pk    1/5    1/6    1/5   1/15   11/30

6. Show that the sum of two correlated normal random variables is also normally distributed by using convolution. [Hint: consider a² ± 2ab + b² = (a ± b)².]
7. Suppose random variable X is uniformly distributed in (0, 1), that is,
$$f(x) = \begin{cases} 1, & 0 < x < 1 \\ 0, & \text{elsewhere} \end{cases}$$
Find the PDFs of (a) Y = e^X and (b) Z = −2 ln(X).
8. Random variable X has PDF given by
$$f_X(x) = \begin{cases} \dfrac{2x}{\pi^2}, & 0 < x < \pi \\ 0, & \text{elsewhere} \end{cases}$$
Find the PDF of Y = sin(X).


9. An insurance company has 10,000 auto insurance policies. The mean compensation is $280 with a standard deviation of $800. Calculate the probability of the total compensation being greater than $2,700,000.
10. Show that, similar to the de Moivre–Laplace approximation, the normal distribution is the limiting case of the Poisson distribution as the expected value goes to infinity.
Section II
Random Process
3 Random Processes in
the Time Domain
In Chapter 2, it was shown that for every number in a random space, there is an occurrence probability, creating a "two-dimensional" observation. In Chapter 3, an additional dimension is added, which is predominantly the dimension of time. The concept of extending the two-dimensional observation to one of three dimensions is based on the concept of a random process.

3.1 DEFINITIONS AND BASIC CONCEPTS


To understand the main properties of the three-dimensional world, we must first
have an in-depth understanding of random processes.

3.1.1 State Spaces and Index Sets


3.1.1.1 Definition of Random Process
In Chapter 2, random events were expressed by one or more limited numbers of
random variables. Although the idea of random series was mentioned, it was based
on the assumption of independent variables. We now consider the case in which each
variable will not only be associated with its probability density function (mass func-
tion) but also be indexed by an additional parameter.
First, we mathematically explain the essential difference between a two-dimensional
random variable and a three-dimensional random process.

3.1.1.1.1  Random Variable

In the sample space of a random test, Ω = {e}, where e denotes a sample point, if for any e there exists a unique real number X(e), then X(e) is referred to as a random variable.
Comparing the function f(x) with the random variable X(e), it is seen that

1. They are similar.
2. In the function f(x), x is an independent variable. In the random variable X(e), e is also an independent variable; however, it is a sample point.
3. The domain and range of f(x) are both real axes, whereas the domain of a random variable is a sample space and the range is a real axis.

3.1.1.1.2  Random Process

Now, let us consider the process of development and variation of random variables, which can be infinite or uncountable.

In the sample space of a random test, Ω = {e}, if at any moment t ∈ T there exists a random variable X(t, e), referred to as the temporal set of random variables, then {X(t, e), t ∈ T, e ∈ Ω} is a random process. It is denoted by {X(t), t ∈ T} or, more simply, X(t).
The essence of a random process is that it must be

1. A random sequence.
2. A sequence inside a set Ω = {e}, the sample space, which is seen as a state space, where e is an independent state used to denote an individual sample.
3. Ordered so that the sequence is trackable by the index t, which is an element of the index set T.

By comparing a random process with a random variable, we find that

1. Both are random; thus, we need to consider the distribution functions.
2. X is time-invariant, whereas X(t) varies with time; thus, the distribution of X is fixed. In most cases, the first and second moments, the mean and variance, are sufficient to describe X. However, the distribution of X(t) is a temporal function. In this case, moments alone are insufficient, and a correlation analysis is needed.
3. The independent variable for the random variable X is e; for the random process X(t), we have the pair (e, t).

3.1.1.1.3  Basic Cases


There are four basic cases of X(t):

1. Given e and t: X(t) is deterministic.


2. Given t, e varying: X(t) is a random variable.
3. Given e, t varying: X(t) is a deterministic function of t (sample function).
4. Both e and t vary: X(t) is a random process.

3.1.1.2 Classification of Random Process


The following are four basic classifications of a random process with examples:

1. Continuous state and continuous index: continuous random process


Example: The voltage difference of a resistor can have a fluctuation due
to the random movement of free electrons; the voltage, denoted by {V(t, e)},
is called thermal noise.
2. Continuous state and discrete index: continuous random sequence
Example: Use Yn to denote the nth observation of the maximum PGA of
seismic ground motion in a certain location; {Yn, n = 1, 2, 3…} is a continu-
ous random sequence.
3. Discrete state and continuous index: discrete random process
Example: In time duration [0, t], count the number of times that the peak
acceleration of a car exceeds a preset level. For this example, this number
will not be definite and will vary according to time t.

4. Discrete state and discrete index: discrete random sequence


Example: In an airport, count the number of passengers boarding and
deboarding every 10 minutes, calculating the difference.

3.1.1.3 Distribution Function of Random Process


In Chapter 2, we established that the statistical property of random variables could
be described by distribution functions. We now describe a random process.
In this case, we use a group of distributions with limited dimensions to describe
the statistical pattern of random process, where limiting the dimension means the
associated probability distribution is not infinite. Recall, in Chapter 2, the case of a random variable for which the corresponding distributions were known: the set of random variables was then understood fairly well. Now, for a sequence of random variables, the random process X(t), one may feel that if the finite-dimensional distributions at any ti are all known, then the probabilities of X(ti) for any choice of e can be calculated. For engineering applications, this idea is approximately true.

3.1.1.3.1  One-Dimensional
It is important to note that the phrase “dimensional” differs from the one used at the
beginning of Chapter 3. Previously, the phrase “dimensional” was used in a phil-
osophical (or general) sense. Here, it is more mathematical (or specific). In other
words, “one-dimensional” refers to only one group of variables X1(t) = X(t) being
considered. Furthermore, “two-dimensional” will involve X1(t) and X2(t), where X1(t)
and X2(t) are different groups of variables.
For each fixed time t ∈ T, the random process X(t) becomes a random variable,
its CDF

FX(x; t) = P[X(t) ≤ x] (3.1)

is the one-dimensional distribution of random process X(t).


If the partial derivative of FX(x; t) with respect to x exists, then
$$f_X(x; t) = \frac{\partial F_X(x; t)}{\partial x} \quad (3.2)$$
is the one-dimensional density function of the random process X(t).


The one-dimensional distribution describes each individual time. It does not
reflect the relationship among different time points. From Equations 3.1 and 3.2, we
see that the basic approach in which to manage a random process is similar to that
for random variables, considering the distributions. Doing so, “three dimensions”
are reduced to “two dimensions.” Generally, the fundamental approach on random
events is to reduce the number of “dimensions.”

3.1.1.3.2  Two Dimensional


We now consider “two dimensional” using two groups of variables: X1(t) and X2(t),
taken from the same random process. Note that, in many cases, we will denote X1(t) =
X(t1) and X2(t) = X(t2).

3.1.1.3.2.1   CDF.  For the relationship of a random process at arbitrary moments t1 and t2, consider the distribution function of the two-dimensional random variables (X(t1), X(t2)):

FX(x1, x2; t1, t2) = P[(X(t1) ≤ x1) ∩ (X(t2) ≤ x2)] (3.3)

which is the two-dimensional distribution of the random process X(t).

3.1.1.3.2.2   PDF.  If the second-order partial derivative of FX(x1, x2; t1, t2) with respect to x1 and x2 exists, then
$$f_X(x_1, x_2; t_1, t_2) = \frac{\partial^2 F_X(x_1, x_2; t_1, t_2)}{\partial x_1\,\partial x_2} \quad (3.4)$$
is the two-dimensional density function of the random process X(t).

3.1.1.3.2.3   Joint Distribution  Generally, X(t1) and X(t2) are different sets of ran-
dom variables. Thus, X(t1) and X(t2) will have a joint distribution:

$$f_{X(t_1)X(t_2)}(x_1, x_2) \quad (3.5)$$


3.1.1.3.3  N-Dimensional
Similar to joint distributions, we consider n-dimensional distributions.

3.1.1.3.3.1   CDF.  The n-dimensional distribution function of the random process X(t) is

FX(x1, x2, …xn; t1, t2, …tn) = P[(X(t1) ≤ x1) ∩ (X(t2) ≤ x2) … ∩ (X(tn) ≤ xn)] (3.6)

3.1.1.3.3.2   PDF.  The n-dimensional density function of the random process X(t) is
$$f_X(x_1, x_2, \ldots x_n; t_1, t_2, \ldots t_n) = \frac{\partial^n F_X(x_1, x_2, \ldots x_n; t_1, t_2, \ldots t_n)}{\partial x_1\,\partial x_2\cdots\partial x_n} \quad (3.7)$$

3.1.1.3.4 Property of N-Dimensional Distribution


Functions (Finite-Dimensional)
The basic properties of finite-dimensional distribution functions of a random process
are given in the following.

3.1.1.3.4.1   Symmetry.  For an arbitrary rearrangement of (t1, t2, …tn), namely $(t_1', t_2', \ldots t_n')$,
$$F_X(x_1, x_2, \ldots x_n; t_1, t_2, \ldots t_n) = F_X(x_1', x_2', \ldots x_n'; t_1', t_2', \ldots t_n') \quad (3.8)$$


Let us use a simple, incomplete example to show this symmetry. Suppose, in the two-dimensional case, x1 = 1, x2 = 2, t1 = 0.1, and t2 = 0.5. Then

FX(x1, x2; t1, t2) = FX(1, 2; 0.1, 0.5) = P[(X(0.1) ≤ 1) ∩ (X(0.5) ≤ 2)] = P[(X(0.5) ≤ 2) ∩ (X(0.1) ≤ 1)] = FX(2, 1; 0.5, 0.1) = FX(x2, x1; t2, t1)

3.1.1.3.4.2   Compatibility.  When m < n,

FX(x1, x2, …xm; t1, t2, …tm) = FX(x1, x2, …xm, ∞, ∞, …∞; t1, t2, …tm, tm+1, …tn) (3.9)

Proof:

FX(x1, x2, …xm, ∞, ∞, …∞; t1, t2, …tm, tm+1, …tn) = P[(X(t1) ≤ x1) ∩ (X(t2) ≤ x2) … ∩ (X(tm) ≤ xm) ∩ (X(tm+1) < ∞) … ∩ (X(tn) < ∞)] = P[(X(t1) ≤ x1) ∩ (X(t2) ≤ x2) … ∩ (X(tm) ≤ xm)] = FX(x1, x2, …xm; t1, t2, …tm)

Note that Equation 3.9 is also known as the consistency condition, which implies
$$F_X(x_1, x_2, \ldots x_{k-1}; t_1, t_2, \ldots t_{k-1}) = \lim_{x_k\to\infty} F_X(x_1, x_2, \ldots x_k; t_1, t_2, \ldots t_k)$$

3.1.1.3.5 Kolmogorov Extension Theorem


(Andrey N. Kolmogorov, 1903–1987)
If there exists a group of distribution functions F, which are symmetric and com-
patible, then there exists a random process whose n-dimensional distribution func-
tion is F.
It is seen that the two conditions (symmetric and compatible) required by this the-
orem are trivially satisfied by any random process, which means that the statement
of the theorem is that no other conditions are required. Therefore, for any reasonable
(i.e., consistent) group of finite-dimensional distributions, there must exist a random
process with these distributions. The Kolmogorov extension theorem guarantees the
existence of a random process with a given family of finite-dimensional probability
distributions under the above-mentioned conditions.
This theorem is important: It states that we can construct a random process with
given finite dimensional distributions. Mathematically speaking, a random process
is denoted by{X(t, e), t ∈ T, e ∈ Ω}. A reasonable question is, first, whether T is finite
or infinite. Generally, we are dealing with an infinitely long time duration T, at least
theoretically. That is, for finite duration T, the time development process is often
referred to as a transient process, instead of a random process.
Having realized that T → ∞, the next reasonable question is when the evaluation
values, say, xm+1, …xn, of a random process at certain time points, say, tm+1, …tn,
become infinite, how can we “handle” (or more precisely, “model”) this process? The
Kolmogorov extension theorem answers this question.

Example 3.1

Suppose a random process X(t, x) (−∞ < t < ∞) has only two sample functions:

X(t, x1) = 4 cos t with P(x1) = 2/3

and

X(t, x2) = −4 cos t with P(x2) = 1/3

Let us find the following:

1. The one-dimensional distribution functions F(0, x) and F(π/3, x)
2. The two-dimensional distribution function F(0, π/3; x1, x2)

1. X(0) can be either −4 or 4; thus, we have

P{X(0) = −4} = 1/3 and P{X(0) = 4} = 2/3

The corresponding PMF is

    X(0)    −4     4
    P       1/3    2/3

and the CDF is
$$F(0, x) = \begin{cases} 0, & x < -4 \\ 1/3, & -4 \le x < 4 \\ 1, & x \ge 4 \end{cases}$$
Similarly, X(π/3) has PMF

    X(π/3)    −2     2
    P         1/3    2/3

and CDF
$$F(\pi/3, x) = \begin{cases} 0, & x < -2 \\ 1/3, & -2 \le x < 2 \\ 1, & x \ge 2 \end{cases}$$
2. Calculate the following conditional probabilities first:

P{X(π/3) = −2 | X(0) = −4} = 1
P{X(π/3) = 2 | X(0) = −4} = 0
P{X(π/3) = −2 | X(0) = 4} = 0
P{X(π/3) = 2 | X(0) = 4} = 1

Therefore, P{X(0) = −4, X(π/3) = −2} = P{X(0) = −4} P{X(π/3) = −2 | X(0) = −4} = P{X(0) = −4} = 1/3.
Similarly,

P{X(0) = 4, X(π/3) = 2} = 2/3
P{X(0) = −4, X(π/3) = 2} = P{X(0) = −4} P{X(π/3) = 2 | X(0) = −4} = 0
P{X(0) = 4, X(π/3) = −2} = 0

The two-dimensional PMF of {X(0), X(π/3)} is

    X(0) \ X(π/3)    −2     2
    −4               1/3    0
    4                0      2/3

and the two-dimensional CDF of {X(0), X(π/3)} is
$$F(0, \pi/3; x_1, x_2) = \begin{cases} 0, & x_1 < -4 \text{ or } x_2 < -2 \\ 1/3, & x_1 \ge -4,\ x_2 \ge -2,\ \text{and } (x_1 < 4 \text{ or } x_2 < 2) \\ 1, & x_1 \ge 4 \text{ and } x_2 \ge 2 \end{cases}$$


3.1.1.4 Independent Random Process

The independence of a random process is an important concept. Although in the real world two events can never be exactly independent, mathematically we can assume such a case to describe two events that are not "tangled" with each other.

3.1.1.4.1  Independent Process

If

FX(x1, x2, …xn; t1, t2, …tn) = FX(x1; t1) FX(x2; t2) …FX(xn; tn) (3.10)

or

f X(x1, x2, …xn; t1, t2, …tn) = f X(x1; t1) f X(x2; t2) …f X(xn; tn) (3.11)

then the random process X(t) is an independent process.



Equations 3.10 and 3.11 show that for an independent random process X(t), the distributions at different moments t1, t2, … can be decoupled.
Note that decoupling is an important measure for dealing with complex events, functions, systems, and so on.

3.1.1.4.2  Mutually Independent Processes


Two random processes, X(t) and Y(t), have the joint distribution function
$$F_{XY}(x_1, \ldots x_m; y_1, \ldots y_n; t_1, \ldots t_m, t_1', \ldots t_n') = P[(X(t_1) \le x_1) \cap \cdots \cap (X(t_m) \le x_m) \cap (Y(t_1') \le y_1) \cap \cdots \cap (Y(t_n') \le y_n)] \quad (3.12)$$
If
$$F_{XY}(x_1, \ldots x_m; y_1, \ldots y_n; t_1, \ldots t_m, t_1', \ldots t_n') = F_X(x_1, \ldots x_m; t_1, \ldots t_m)\,F_Y(y_1, \ldots y_n; t_1', \ldots t_n') \quad (3.13)$$

then X(t) and Y(t) are mutually independent.

Example 3.2

For the random process Z(t) = (X² + Y²)t, t > 0, where X ~ N(0, 1), Y ~ N(0, 1), and X, Y are independent, find the one-dimensional density function.
The joint PDF is
$$f_{XY}(x, y) = \frac{1}{2\pi}\,e^{-\frac{x^2+y^2}{2}}, \quad -\infty < x < \infty,\ -\infty < y < \infty$$
When z ≥ 0,
$$F_Z(z; t) = P[Z(t) \le z] = P[(X^2 + Y^2) \le z/t] = \iint_{x^2+y^2\le z/t} \frac{1}{2\pi}\,e^{-\frac{x^2+y^2}{2}}\,dx\,dy = \int_0^{2\pi}\frac{d\theta}{2\pi}\int_0^{\sqrt{z/t}} r\,e^{-\frac{r^2}{2}}\,dr = 1 - e^{-\frac{z}{2t}}$$
Note that in the above derivation, the Cartesian coordinates are replaced by polar coordinates.
When z < 0, FZ(z; t) = 0.


Therefore,
$$F_Z(z; t) = \begin{cases} 1 - e^{-\frac{z}{2t}}, & z \ge 0 \\ 0, & z < 0 \end{cases}$$
and the PDF is
$$f_Z(z; t) = \frac{\partial F_Z(z; t)}{\partial z} = \begin{cases} \dfrac{1}{2t}\,e^{-\frac{z}{2t}}, & z \ge 0 \\ 0, & z < 0 \end{cases}$$

3.1.2 Ensembles and Ensemble Averages


It has been established that our basic measure for dealing with random variables is averaging. In Chapter 2, the average was carried out over the range of X, in the instance of "two-dimensional" space. In Chapter 3, we are confronted with the problem that the average is performed over Ω or T.
From Figure 3.1, it can be seen that the average can be taken at any given t = ti,
based on the definition of CDF mentioned in Chapter 2.

3.1.2.1 Concept of Ensembles
To consider the above-mentioned average, we must introduce the concept of ensembles.

3.1.2.1.1  Definition
Ensembles: Set of all possible sample realizations of a random process. The essence
of ensembles is that the process is viewed as a whole. The reason to employ ensem-
bles in the study of a random process is to understand the joint probability distribu-
tions at all times.
Recall marginal distribution: the distribution of X(t) at a given t, denoted by
f X(t)(x) (see Figure 3.1).

Figure 3.1  Statistics at different time points (marginal PDFs of X(t) at times t1 and t2).



Generally, we have
$$f_{X(t_1)}(x) \ne f_{X(t_2)}(x), \quad t_1 \ne t_2 \quad (3.14)$$

3.1.2.2 Statistical Expectations and Moments


Now, we consider the “should-be” average or the ensemble average as follows.

3.1.2.2.1  Mean
The mean value can be calculated as
$$\mu_X(t) = E[X(t)] = \int_{-\infty}^{\infty} x\,f_{X(t)}(x)\,dx \quad (3.15)$$

3.1.2.2.2  Variance
The variance is
$$\sigma_X^2(t) = E\left[\{X(t) - \mu_X(t)\}^2\right] \equiv D[X(t)] = \int_{-\infty}^{\infty} [x - \mu_X(t)]^2 f_{X(t)}(x)\,dx \quad (3.16)$$
Here, the symbol "≡" stands for "is defined as."

3.1.2.2.3  Autocorrelation Function

The autocorrelation function is given by
$$R_X(t_1, t_2) = E[X(t_1)X(t_2)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x_1 x_2\,f_{X(t_1)X(t_2)}(x_1, x_2)\,dx_1\,dx_2 \quad (3.17)$$

Equations 3.15 through 3.17 are averages over the ranges of X, namely, the whole
space of Ω.
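In practice, such ensemble averages are estimated by averaging across sample realizations at fixed times. A minimal sketch assuming NumPy; the process X(t) = B·t with B ~ N(0, 1) is chosen purely for illustration, since its moments are known in closed form:

```python
import numpy as np

# Ensemble averages (Eqs. 3.15-3.17) estimated across sample realizations.
# Illustrative process: X(t) = B*t with B ~ N(0, 1), so that
# mu_X(t) = 0 and R_X(t1, t2) = E[B^2]*t1*t2 = t1*t2.
trials = 500_000
rng = np.random.default_rng(5)
B = rng.normal(0.0, 1.0, size=trials)   # one B per realization

t1, t2 = 0.5, 2.0
x1, x2 = B * t1, B * t2                 # the ensemble at times t1 and t2

print(x1.mean())                        # ~0      (Eq. 3.15)
print(x1.var(), t1**2)                  # ~0.25   (Eq. 3.16)
print(np.mean(x1 * x2), t1 * t2)        # ~1.0    (Eq. 3.17)
```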

Example 3.3

X(t) = A cos 2πt, A > 0, t > 0, in which A is a random variable with
$$\sigma_A^2 + \mu_A^2 = 1$$
Consider the joint PDFs of X(t) at two pairs of times, (τ0, τ1) and (τ0, τ2).
For the PDF of A:

1. Find fA(a).
Let us assume (note that this assumption is not necessary)

fA(a) = ka,  0 < a < x0

Then
$$\int_0^{x_0} ka\,da = k\,\frac{a^2}{2}\bigg|_0^{x_0} = 1 \Rightarrow k = \frac{2}{x_0^2}$$
and
$$f_A(a) = \frac{2}{x_0^2}\,a$$
2. Calculate μA and $\sigma_A^2$, and determine the value of x0.
The mean is given by
$$\mu_A = \int_0^{x_0} \frac{2a}{x_0^2}\,a\,da = \frac{2x_0}{3}$$
and the variance is given by
$$\sigma_A^2 = \int_0^{x_0} \frac{2a}{x_0^2}\left(a - \frac{2x_0}{3}\right)^2 da = \frac{x_0^2}{18}$$
Because
$$\sigma_A^2 + \mu_A^2 = 1 = \left(\frac{2x_0}{3}\right)^2 + \frac{x_0^2}{18} = \frac{x_0^2}{2}$$
we have

x0 = √2

Thus, we can have
$$f_A(a) = \frac{2}{x_0^2}\,a = a$$
3. The autocorrelation function is
$$R_X(t_1, t_2) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x_1 x_2\,f_{X(t_1)X(t_2)}(x_1, x_2)\,dx_1\,dx_2 = \int_0^{\sqrt{2}} [a\cos(2\pi t_1)][a\cos(2\pi t_2)]\,a\,da = \cos(2\pi t_1)\cos(2\pi t_2)$$

3.1.2.2.4  Autocovariance Function


The autocovariance function is given by

σXX(t1, t2) = E[{X(t1) − μX(t1)}{X(t2) − μX(t2)}] = E[X(t1)X(t2)] − μX(t1)μX(t2) = RX(t1, t2) − μX(t1)μX(t2) (3.18)

Note that part (3) of the above example can also be obtained by using Equation 3.18 as follows:
$$R_X(t_1, t_2) = E[X(t_1)X(t_2)] = \sigma_{XX}(t_1, t_2) + \mu_X(t_1)\mu_X(t_2) = \sigma_A^2\cos(2\pi t_1)\cos(2\pi t_2) + \mu_A^2\cos(2\pi t_1)\cos(2\pi t_2)$$
$$= \left[\sigma_A^2 + \mu_A^2\right]\cos(2\pi t_1)\cos(2\pi t_2) = \cos(2\pi t_1)\cos(2\pi t_2)$$

3.1.2.2.5  Autocorrelation Coefficient Function

The autocorrelation coefficient function is given by
$$\rho_{XX}(t_1, t_2) = \frac{\sigma_{XX}(t_1, t_2)}{\sigma_X(t_1)\sigma_X(t_2)} \quad (3.19)$$
where $\sigma_X(\cdot) = \sqrt{\sigma_X^2(\cdot)}$.
3.1.2.2.6  Property of Autocorrelation Functions


3.1.2.2.6.1   Variance and Autocovariance  Let

t1 = t 2 = t

we have

σ XX (t , t ) = E[{X (t ) − µ X (t )}{X (t ) − µ X (t )}] = σ 2X (t ) (3.20)

That is, when


t=0

the autocovariance is equal to the variance.

3.1.2.2.6.2   Autocorrelation Functions: Positive Semidefinite.  The autocorrelation function is positive semidefinite, which can be expressed as
$$\sum_{j=1}^n\sum_{k=1}^n \alpha_j\alpha_k R_X(t_j, t_k) \ge 0 \quad (3.21)$$

Example 3.4

1. Let X(t) = C, where C is a random variable with PDF fC(c), −∞ < c < ∞, t ∈ [0, T].
In this case, X(t) is not a function of time; therefore, the random process reduces to a random set.
That is, the mean is given by

μX(t) = μC = const.

The standard deviation is then

σX(t) = σC = const.

Furthermore, the autocorrelation is
$$R_X(t_1, t_2) = \mu_C^2 + \sigma_C^2 = \text{const.}$$
2. Let X(t) = Bt, where B is a random variable with PDF fB(b), −∞ < b < ∞, t ∈ [0, T].
The mean is
$$\mu_X(t) = \int_{-\infty}^{\infty} f_B(b)\,tb\,db = t\int_{-\infty}^{\infty} f_B(b)\,b\,db = \mu_B t$$
the variance is
$$\sigma_X^2(t) = \int_{-\infty}^{\infty} f_B(b)(bt - \mu_B t)^2\,db = t^2\int_{-\infty}^{\infty} f_B(b)(b - \mu_B)^2\,db = \sigma_B^2 t^2$$
and the autocorrelation is
$$R_X(t_1, t_2) = E[(Bt_1)(Bt_2)] = t_1 t_2\int_{-\infty}^{\infty} b^2 f_B(b)\,db = t_1 t_2\left(\mu_B^2 + \sigma_B^2\right)$$
3. Let X(t) = B + t, where B is a random variable with PDF fB(b), −∞ < b < ∞, t ∈ [0, T].
The mean is
$$\mu_X(t) = \int_{-\infty}^{\infty} f_B(b)(t + b)\,db = \int_{-\infty}^{\infty} f_B(b)\,b\,db + t\int_{-\infty}^{\infty} f_B(b)\,db = \mu_B + t$$
the variance is
$$\sigma_X^2(t) = \int_{-\infty}^{\infty} f_B(b)(b + t - \mu_B - t)^2\,db = \sigma_B^2$$
and the autocorrelation can be calculated as
$$R_X(t_1, t_2) = E[(B + t_1)(B + t_2)] = \int_{-\infty}^{\infty} (b + t_1)(b + t_2)\,f_B(b)\,db = \mu_B^2 + \sigma_B^2 + t_1\mu_B + t_2\mu_B + t_1 t_2$$

4. Let X(t) = cos²(ωt + Θ), where Θ is a random variable uniformly distributed over 0 < Θ ≤ 2π, so that fΘ(θ) = 1/(2π), t ∈ [0, T].
The mean is
$$\mu_X(t) = \int_0^{2\pi} f_\Theta(\theta)\cos^2(\omega t + \theta)\,d\theta = \int_0^{2\pi} f_\Theta(\theta)\,\frac{1}{2}\left[1 + \cos(2\omega t + 2\theta)\right]d\theta = \frac{1}{2}$$
since
$$\int_0^{2\pi} f_\Theta(\theta)\cos(2\omega t + 2\theta)\,d\theta = \frac{1}{2\pi}\int_0^{2\pi} \cos(2\omega t + 2\theta)\,d\theta = 0$$
The variance is
$$\sigma_X^2(t) = \int_0^{2\pi} f_\Theta(\theta)\left[\cos^2(\omega t + \theta) - \frac{1}{2}\right]^2 d\theta = \int_0^{2\pi} f_\Theta(\theta)\left[\frac{\cos(2\omega t + 2\theta)}{2}\right]^2 d\theta = \int_0^{2\pi} f_\Theta(\theta)\,\frac{1 + \cos(4\omega t + 4\theta)}{8}\,d\theta = \frac{1}{8}$$
The autocorrelation is
$$R_X(t_1, t_2) = \int_0^{2\pi} \cos^2(\omega t_1 + \theta)\cos^2(\omega t_2 + \theta)\,f_\Theta(\theta)\,d\theta = \frac{1}{4}\int_0^{2\pi} \left[1 + \cos(2\omega t_1 + 2\theta)\right]\left[1 + \cos(2\omega t_2 + 2\theta)\right]f_\Theta(\theta)\,d\theta$$
$$= \frac{1}{4}\left[\int_0^{2\pi} f_\Theta\,d\theta + \int_0^{2\pi}\cos(2\omega t_1 + 2\theta)\,f_\Theta\,d\theta + \int_0^{2\pi}\cos(2\omega t_2 + 2\theta)\,f_\Theta\,d\theta + \int_0^{2\pi}\cos(2\omega t_1 + 2\theta)\cos(2\omega t_2 + 2\theta)\,f_\Theta\,d\theta\right]$$
$$= \frac{1}{4}\left[1 + 0 + 0 + \frac{1}{2}\cos 2\omega(t_1 - t_2)\right] = \frac{1}{4} + \frac{1}{8}\cos 2\omega(t_1 - t_2)$$

It is important to note that in parts (1) and (4) of the above example, the means and variances are constants and the autocorrelation functions do not depend on the absolute times (in part (4), RX depends only on the time difference t1 − t2), whereas in parts (2) and (3), the means, variances, and autocorrelation functions vary with time.

Example 3.5

For the random process given in Example 3.1:

1. Find the mean function μX(t).
2. Find the autocorrelation function RX(t1, t2).
3. Find the autocovariance function σXX(t1, t2).

1. The mean μX(t) is

μX(t) = E[X(t)] = (2/3)(4 cos t) + (1/3)(−4 cos t) = (4/3) cos t

2. The autocorrelation can be calculated as
$$R_X(t_1, t_2) = E[X(t_1)X(t_2)] = \begin{cases} \dfrac{2}{3}(4)^2\cos^2 t_1 + \dfrac{1}{3}(-4)^2\cos^2 t_1 = 16\cos^2 t_1, & t_1 = t_2 \\[1.5ex] \dfrac{2}{3}(4\cos t_1)(4\cos t_2) + \dfrac{1}{3}(-4\cos t_1)(-4\cos t_2) = 16\cos t_1\cos t_2, & t_1 \ne t_2 \end{cases}$$
Thus, RX(t1, t2) = 16 cos t1 cos t2.

3. The autocovariance function σXX(t1, t2) is given by

σXX(t1, t2) = RX(t1, t2) − μX(t1)μX(t2) = 16 cos t1 cos t2 − (16/9) cos t1 cos t2 = (128/9) cos t1 cos t2

3.1.3 Stationary Process and Ergodic Process


3.1.3.1 Stationary Process
Generally speaking, the mean and variance of a random process will depend on the
index t, whereas the autocorrelation function will depend on t1 and t2. However, there
are instances in which for a random process, the average will not depend on time. In this
case, the processes are said to be stationary. A stationary process is considerably simpler
to deal with. In the following, we will consider various types of stationary processes.

3.1.3.1.1  Strictly Stationary Process


For any real number h, if the n-dimensional distribution of the random process X(t) satisfies

FX(x1, x2, …xn; t1, t2, …tn) = FX(x1, x2, …xn; t1 + h, t2 + h, …tn + h) (3.22)

then X(t) is a strictly stationary process.

If the PDF exists, then Condition 3.22 can be replaced by

f X(x1, x2, … xn; t1, t2, … tn) = f X(x1, x2, … xn; t1 + h, t2 + h; … tn + h) (3.23)

Conditions 3.22 and 3.23 imply that the n-dimensional distribution does not
evolve over the time intervals.

A strictly stationary process has the following properties:

1. If X(t) is a strictly stationary process, then the joint PDF of {X(t1), X(t2), …
X(tn)} is identical to that of {X(t1 + h), X(t2 + h), …X(tn + h)}.
2. If X(t) is a strictly stationary process, and if the expectation of X(t) is

E[X(t)] < ∞ (3.24a)

and the variance of X(t) is

D[X(t)] < ∞ (3.24b)

also the expectation of X(t)2, the mean square value is

E[X(t)2] < ∞ (3.24c)

Then,

E[X(t)] = μX = const. (3.25)

and

$$D[X(t)] = \sigma_X^2 = \text{const.} \quad (3.26)$$

Thus, the mean square is

E[X(t)2] = const. (3.27)

and the autocorrelation is

R X(t + τ, t) = R X(τ) (3.28)

It is important to note that Condition 3.24 is not necessary for strictly stationary
processes. Additionally, note that a strictly stationary process is defined by examin-
ing its distributions.

3.1.3.1.2 Weakly Stationary Process


Distribution functions of processes are sometimes difficult to obtain. One approach
is to use the moments.
X(t) is a weakly stationary process, if:

1. The mean square value of X(t) is not infinite

E[X(t)2] < ∞ (3.29)


Random Processes in the Time Domain 131

2. The mean of X(t) is constant

E[X(t)] = μX = const. (3.30)

3. The variance of X(t) is constant

D[ X (t )] = σ 2X (t ) = σ 2X = const. (3.31)

4. The autocorrelation function of X(t) depends only on the time difference τ

R X(τ) = R X(t, t + τ) = E[X(t)X(t + τ)] (3.32)

5. In this case, the autocovariance is equal to

σ XX (τ) = RX (τ) − µ 2X (3.33)

6. The autocorrelation coefficient function is

σ XX (τ)
ρXX (τ) = (3.34)
σ 2X

The main properties of real valued autocorrelation functions of weakly stationary


processes will be listed in the next section. Here, we show the following points for
general autocorrelation functions.

1. If the process is complex valued, then R X(τ) is also complex valued and

R X(−τ) = R X(τ)* (3.35a)

Here, the symbol (.)* stands for taking the complex conjugate of (.)
2. The autocorrelation function is positive semi-definite, that is, for any com-
plex number α1, α2, …,α n and any real number t1, t2, …,tn, we have

n n

∑ ∑ α α*R (t − t ) ≥ 0
j =1 k =1
j k X j k (3.35b)

Example 3.6

A random sequence X(t) = sin(2πAt) with t = 1, 2,... and A is a uniformly distributed


random variable in [0 1]. Let us consider whether X(t) is stationary.
The mean is given by

1
−1
E[ X (t )] = E[sin( 2πAt )] =
∫ sin(2πAt )da = 2πt cos(2πAt )
0
1
0 =0

132 Random Vibration

which satisfies the first condition described in Equation 3.30.


Second, check the autocorrelation function

RX (t + τ , t ) = E {[sin( 2π A(t + τ ))sin( 2π At )] = E[sin( 2π At1)sin( 2π At 2 )]


1
1 1  1/ 2, t1 = t 2
=

0
sin 2π At1 sin 2π At 2 da =
2 ∫
0
[cos 2π A(t1 − t 2 ) − cos 2π A(t1 + t 2 )]da = 

0, t1 ≠ t 2

Therefore, X(t) is weakly stationary.


However, the PDF of X(t) can be written as

 1
 , −1 < x < 1
fX ( x ,t ) =  πt 1− x 2
 0, elsewhere

which is a function of time t; therefore, X(t) is not strictly stationary.

3.1.3.1.3  Strictly Process versus Weakly Process


1. A weakly process is when the first and second moments are specified. How­
ever, a weakly process is not necessarily a strictly process.
2. For a strictly process, the entire distribution must be stationary, but the
first and/or second moments do not necessarily have to exist. Therefore, a
strictly process is not necessarily a weakly process.
3. If the second moment of a strictly random process exists, then it is also a
weakly process.
4. It is difficult to check the conditions of a strictly process. However, for a
Gaussian process, its second moment completely defines the entire distri-
bution. In this case, it is both a strictly and weakly process (the Gaussian
process is further explained in the next subsection).
5. In the real world, purely stationary processes rarely exist. In engineering
applications, it is acceptable within some limits to simplify a process to a
weakly process (from now on, we will consider all stationary processes to
be equal to weakly stationary processes in this book).

From the above-mentioned points, we can make further categorizations. For points
(1) and (3) satisfied, the processes are stationary. Point (2) is not used often.

Example 3.7

A random process Z(t) = Xcos(2πt) + Ysin(2πt), where both X and Y are random
variables and

EX = EY = 0

DX = DY = 1
Random Processes in the Time Domain 133

As well as

EXY = 0

Prove Z(t) is a stationary process.


First, the mean

E[Z(t)] = EXcos(2πt) + EYsin(2πt) = 0

Then, the autocorrelation function can be written as

R X(t + τ,t) = E{[Xcos(2π(t + τ)) + Ysin(2π(t + τ))][Xcos(2πt) + Ysin(2πt)]} =


EX2cos[2π(t + τ)] cos2πt + EY 2sin[2π(t + τ)] sin2πt + EXYcos[2π(t + τ)] sin2πt +
EXYsin[2π(t + τ)] cos2πt = cos[2π(t + τ)] cos2πt + sin[2π(t + τ)] sin2πt = cos2πτ

Therefore, the mean of Z(t) is constant, and it can be readily shown that the
variance of Z(t) is also constant and the autocorrelation function of Z(t) depends
only on the time difference τ, which concludes that Z(t) is a stationary process.

3.1.3.2 Ergodic Process
As mentioned previously, to find the moments, we take the ensemble average. In
most instances, determining the ensemble average is difficult. On the contrary, the
average over the time domain can be significantly simpler to compute.
In fact, in many engineering practices, the temporal average is used to calculate
the mean and variance values. Before using the temporal average, there is a question
that must first be asked. That is, under what conditions can the temporal average be
used? Using the temporal average under the incorrect conditions may have severe
computation errors. Mathematically, the correct condition is that the process must be
ergodic. This issue, however, must be further discussed.

3.1.3.2.1  Ensemble Average and Temporal Average


First, let us consider the two types of averages.
All of the above noted averages are ensembles, denoted by E[(.)]
The temporal average is denoted by 〈X(t, k)〉. Here, X(t, k) is the kth sample real-
ization of random process X(t).

T
1
X (t , k ) = lim
T →∞ 2T ∫ −T
X (t , k ) dt (3.36)

However, it is often more practical to use

2T
1
X (t , k ) = lim
T →∞ 2T ∫
0
X (t , k ) dt (3.37)

It is reasonable to deduce, T → ∞ as taking a sufficiently long time. From this, we


use Equation 3.37, because in the real world, there is no negative time.
134 Random Vibration

3.1.3.2.2 Ergodicity
Ergodicity means the temporal average can be used to replace the ensemble average.
A process is ergodic in the mean, if

〈X(t, k)〉 = E[X(t)] = μX (3.38)

A process is ergodic in the variance, if

{X (t , k ) − µ X }2 = D[ X (t )] = E[{X (t , k ) − µ X }2 ] = σ 2X (3.39)

Furthermore, a process is ergodic in the autocorrelation function if

〈X(t, k) X(t + τ, k)〉 = E[X(t, k) X(t + τ, k)] = R X(τ) (3.40)

From Equations 3.38 through 3.40, it is established that an ergodic process must
be stationary. However, it must be remembered that a stationary process is not neces-
sarily an ergodic one.
A weakly ergodic process is one that satisfies these three conditions, a strongly ergo-
dic process is one that satisfies all ensemble averages to be equal to temporal averages,
wheras a nonergodic process is one that does not satisfy any of these three conditions.

3.1.3.2.2.1   Condition of Ergodicity  A real stationary process is ergodic in mean,


if and only if

1 2T
 τ 
lim
T →∞ T ∫ 0
 1 −   RX (τ) − µ X  dt = 0
2T 
2
(3.41)

A real stationary process is ergodic in autocorrelation function, if and only if

 u 
{ }
2T
1
lim
T →∞ T ∫
0
 1 − 2T  E[ X (t + τ + u) X (t + τ) X (t + u) X (t )] − RX dt = 0 (3.42)
2

In this example, X(t) = C is stationary but nonergodic, and X(t) = B + t is not station-
ary, and therefore is also nonergodic.
Ergodicity is important because we can use temporal averages to replace ensem-
ble averages, which will be discussed in detail in the next section. Practically, how-
ever, we rarely have exact ergodic processes. Caution must be taken in using the
temporal average. In many engineering applications, taking several temporal aver-
ages to see if the corresponding moments have converged to the correct values may
be advantageous.

3.1.4 Examples of Random Process


Now, let us consider several examples of random processes.
Random Processes in the Time Domain 135

3.1.4.1 Gaussian Process (Carl F. Gauss, 1777–1855)


If the n-dimensional distribution function of a random process is normal, then X(t)
is a Gaussian process. Gaussian processes can be a good approximation for many
processes. This in turn can greatly reduce the computational burden.

Example 3.8

Assuming both X~N(0, 1) and Y~N(0, 1) are mutually independent, let us consider

Z(t) = X + Yt

In the initial calculations, we have E(X) = E(Y) = 0; D(X) = D(Y) = 1; E(XY) = 0.


From further calculations, we receive the expected value of Z(t) as

E[Z(t)] = E(X) + tE(Y) = 0

and the autocorrelation is

R(t1, t2) = E[(X + Yt1) (X + Yt2)] = E(X2 + XYt1 + XYt2 + Y 2t1t2) = 1 + t1t2

Furthermore, the variance is

D[Z(t)] = 1 + t2

and the CDF can be calculated as

ξ2
z −
1
FZ ( x; t ) =
2π(1+ t 2 ) ∫ −∞
e 2(1+t 2 )

It is determined that Z(t) is Gaussian but not stationary, and therefore nonergodic.
Note that, in Section 3.1.1.3, we use distribution functions to describe a ran-
dom process. In this particular case, we have the Gaussian process.

Example 3.9

Given Gaussian processes {X(t) −∞ < t < ∞} and {Y(t) −∞ < t < ∞}, which are inde-
pendent, prove Z(t) = X(t) + Y(t) −∞ < t < ∞ is also Gaussian.
Consider a nonzero vector q = [q1, q2, …qn] and the nonzero linear combina-
tion of Z(t), that is,

 Z(t )   X (t1)   Y (t1) 


 1
    
 Z(t 2 )   X (t 2 )   Y (t 2 ) 
q  = q   + q  
 ...   ...   ... 
 Z(tn )   X (tn )   Y (tn ) 
     

Because X(t) is a Gaussian process, then [X(t1), X(t2), …X(tn)] is n-dimensional


normal distribution; furthermore, q1X(t1) + q2X(t2) +, … + qnX(tn) is one-dimensional
normal.
136 Random Vibration

Similarly, q1 Y(t1) + q2Y(t2) +, … + qnY(tn) is also one-dimensional normal.


Now, because X(t) and Y(t) are independent, q1X(t1) + q2X(t2) +, … + qnX(tn) and
q1Y(t1) + q2Y(t2) +, … + qnY(tn) must also be independent. Due to the additivity of
normal distribution, the following term

 Z(t ) 
 1 
 Z(t ) 
q 2 
 ... 
 Z(tn ) 
 

must be normally distributed, so that Z(t) is Gaussian. This example implies that the
Gaussian process is addible. One of the nice properties of the Gaussian process is,
if a process is Gaussian, then its derivatives and integrals are also Gaussian.

3.1.4.2 Poisson Process
Before specifically introducing the Poisson process, let us consider various general cases.

3.1.4.2.1  Independent Increments (Nonoverlapping)


The number of occurrences counted in disjointed intervals are independent from each
other; therefore, X(t) is a random process. For any t1 < t2 ≤ t3 < t4 ∈ T, X(t2) − X(t1) and
X(t4) − X(t3) are mutually independent, then X(t) is an independent increment process.

3.1.4.2.2  Homogenous Process


The increment X(t + τ) − X(t) of the random process X(t) in time interval [t, t + τ] is
independent to t because it only depends on τ.

3.1.4.2.3  Poisson Process (Simeon-Denis Poisson, 1781–1840)


In the following, let us examine the Poisson process. Using the Poisson process as an
example, we can realize how to model a random process in detail.

Definition

A Poisson process is a homogenous continuous process with independent incre-


ments. With zero initial condition N(0) = 0, the increment N(t) − N(τ) is a Poisson
process distributed with parameter λ(t − τ).
This definition implies the following facts:

1. The number of arrivals in nonoverlapping intervals is an independent ran-


dom process.
2. There exists a positive quantity λ > 0, such that in a short time interval Δt,
the following are true:
a. The probability of exactly one arrival in Δt is proportional to the length
of Δt, that is,

P[N(t + Δt) = n + 1 | N(t) = n] = λΔt (3.43)


Random Processes in the Time Domain 137

Note that Equation 3.43 implies the meaning of factor λ, which is a propor-
tional constant.
b. The probability of no arrivals in Δt is

P[N(t + Δt) = n | N(t) = n] = 1 − λΔt (3.44)

c. The probability of more than one arrival in Δt is negligible and van-


ishes as Δt → 0
3. The zero initial condition is

N(0) = 0 (3.45)

We now explain the nature of the Poisson process. Consider the following prob-
ability equation based on the above-mentioned condition,

pN(n, t + Δt) = P[N(t + Δt) = n] = P{[(N(t) = n) ∩ (no new arrival in Δt)] ∪ [(N(t) =
n − 1) ∩ (one new arrival in Δt)]} = pN(n, t) [1 − λΔt] + pN(n − 1, t) [λΔt] (3.46)

Rearranging Equation 3.46, we have

pN(n, t + Δt) − pN(n, t) = − pN(n, t) [λΔt] + pN(n − 1, t) [λΔt]

Furthermore,

pN (n, t + ∆t ) − pN (n, t ) dpN (n, t )


lim = = − λpN (n, t ) + λpN (n − 1, t ) (3.47)
∆t →0 ∆t dt

Under the initial condition n = 0, pN(−1, t) = 0, therefore,

dpN (0, t )
= −λpN (0, t ) (3.48)
dt

The solution for Equation 3.48 is given as follows:

pN (0, t ) = e −λt

Furthermore, we can have

(λt )n − λt
pN (n, t ) = e , n ≥ 0, t ≥ 0 (3.49)
n!

This is the PMF of the Poisson process. Similar to the above-mentioned Gaussian
process, we can use the distribution function to describe the Poisson process.
In addition, we consider the corresponding moments:

Mean

μN(t) = λt (3.50)
138 Random Vibration

Variance

σ 2N (t ) = λt (3.51)

Autocorrelation function

 λt + λ 2 t t , 0 ≤ t1 ≤ t2

RN (t1 , t2 ) =  1 1 2
(3.52)
 λt2 + λ t1t2 , 0 ≤ t2 ≤ t1
2

Furthermore, we can have

[λ(t − τ)]k − λ (t − τ )
P{[ N (t ) − N (τ)] = k} = e (3.53)
k!

Example 3.10

The number of radiated particles during [0, t] from a source is denoted by N(t).
{N(t), t ≥ 0} is a Poisson process with mean radiation rate λ. Assume each particle
can be recorded with probability p, and the record of an individual particle is
independent from other records, and also to the process N(t). Denote the total
number of particles during [0, t] to be M(t). Prove {M(t), t ≥ 0} is also a Poisson
process with mean rate λp.
First, it is seen that M(0) = 0.
Second, let Xi be the record of the ith particle, the distribution is

 1, p
Xi = 
0, 1− p

These Xis are mutually independent and with identical distributions. Consider
a set of increments denoted as

M(t2) − M(t1), M(t3) − M(t2), …., M(tn) − M(tn−1)

Because Xis are mutually independent and have identical distributions, the
increments of M(t) must be independent from each other, thus, the process of M(t)
is an independent increment.
Now, consider P[M(t2) − M(t1) = n], which can be written as

 
{ }

P[M(t 2 ) − M(t1) = n] =  ∑
 n= k
P N(t 2 ) − N(t1) = n   P M(t 2 ) − M(t1) = k

 N (t 2 ) − N (t1) = n

∑ [λ(t n−! t )] e
n
− λ (t 2 −t1)
= 2 1
Cnk p k (1− p)n− k

n= k

[ λp(t 2 − t1)]k
= e − λp(t2 −t1)
k!
Random Processes in the Time Domain 139

Therefore, M(t2) − M(t1) is a Poisson distribution with parameter λp(t2 − t1) and
from the above statement, it is seen that {M(t), t ≥ 0} is a Poisson process with
mean rate λp.

Example 3.11

The process {N1(t), t ≥ 0} is a Poisson with parameter λ1, whereas {N2(t), t ≥ 0} is


another Poisson process with parameter λ2 and they are independent. Let

X(t) = N1(t) + N2(t)

and

Y(t) = N1(t) − N2(t)

Show that

1. X(t) is Poisson with parameter λ1 + λ2


2. Y(t) is not Poisson

1. First, X(0) = N1(0) + N2(0) = 0


Next, assume that X(t1), X(t2), …X(tn) where (t1 < t2 <, …< tn) is arbitrarily
n random variables. We can see that the increments

X(t2) − X(t1), X(t3) − X(t2), …, X(tn) − X(tn−1)

has joint distribution as

P {X (t 2 ) − X (t1) = i1, X (t3 ) − X (t 2 ) = i2 ,, X (tn ) − X (tn−1) = in−1}


{
= P N1(t 2 ) − N1(t1) + N2 (t 2 ) − N2 (t1) = i1, Ni (t3 ) − N1(t 2 ) + N2 (t3 ) − N2 (t 2 ) = i2 ,
N1(tn ) − N1(tn−1) + N2 (tn ) − N2 (tn−1) = in−1 }
i1

= ∑ P{N (t ) − N (t ) = j ,N (t ) − N (t ) = i − j ,N (t ) − N (t ) + N (t ) − N (t ) = i ,...
j1= 0
1 2 1 1 1 2 2 1 1 1 1 1 3 1 2 2 3 2 2 2

N1(tn ) − N1(tn−1) + N2 (tn ) − N2 (tn−1) = in−1}


i1 i2 in−1

= ∑ ∑... ∑ P{N (t ) − N (t ) = j ,N (t ) − N (t ) = i − j ,...,


j1= 0 j2 = 0 jn−1= 0
1 2 1 1 1 2 2 1 1 1 1

N1(tn ) − N1(tn−1) = jn−1, N2 (tn ) − N2 (tn−1) = in−1 − jn−1}


i1 i2 in−1

= ∑ ∑ ∑ P{N (t ) − N (t ) = j } P{N (t ) − N (t ) = i − j },...,


j1= 0 j2 = 0
...
jn−1= 0
1 2 1 1 1 2 2 1 1 1 1

... P {N1(tn ) − N1(tn−1) = jn−1} P {N2 (tn ) − N2 (tn−1) = in−1 − jn−1}


140 Random Vibration

On the other hand, we can see that

P {X (t 2 ) − X (t1) = i1}.... P {X (tn ) − X (tn −1) = in −1}


i1

= ∑ P{N (t ) − N (t ) = j } P{N (t ) − N (t ) = i − j } P{X(t ) − X(t ) = i }


j1 = 0
1 2 1 1 1 2 2 1 1 1 1 3 2 2

... P {X (tn ) − X (tn −1) = in −1}


i1 i2 in−1

= ∑ ∑ ∑ P{N (t ) − N (t ) = j } P{N (t ) − N (t ) = i − j },...,


j1 = 0 j2 = 0
...
jn−1 = 0
1 2 1 1 1 2 2 1 1 1 1

... P {N1(tn ) − N1(tn −1) = jn −1} P {N2(tn ) − N2(tn −1) = in −1 − jn −1}


Therefore,

P {X (t 2 ) − X (t1) = i1,..., X (tn ) − X (tn−1) = in−1}


= P {X (t 2 ) − X (t1) = i1},... P {X (tn ) − X (tn−1) = in−1}

in which, X(t2) − X(t1), X(t3) − X(t2), ….,X(tn) − X(tn−1) are mutually indepen-
dent, which means that {X(t), t ≥ 0} is an independent increment process.
Third, when τ < t, we can have

P {X (t ) − X (τ ) = k} = P {N1(t ) − N1(τ ) + N2(t ) − N2(τ) = k}


k
λ k2− i (t − τ)k − i − λ 2 (t − τ )
∑ λ (ti−! τ) e
i i
− λ1(t − τ )
= 1
e
i =0
(k − i )!
k
λ1i λ k2− i (t − τ)k −( λ1 + λ 2 )(t − τ )

= ∑ i =0
i ! (k − i )!
e

(t − τ)k e ( 1 2 )( )
− λ +λ t−τ k

=
k! ∑C λ λ
i =0
i i k−i
k 1 2

[(λ1 + λ 2 )(t − τ)]k e −( λ1 + λ 2 )(t − τ )


=
k!

Therefore, the process X(t) − X(τ) is Poisson with parameter (λ1 + λ2)(t − τ).
The above discussion illustrates that the Poisson processes are addible.
2. Now, consider Y(t) = N1(t) − N2(t)

P {Y (t ) = −1} = P {N1(t ) − N2(t ) = −1} = P {N2(t ) − N1(t ) = 1}


= ∑ P{N (t ) = i} P{N (t ) = i + 1} > P{N (t ) = 0} P{N (t ) = 1} > 0


i =0
1 2 1 2

Therefore, {Y(t), t ≥ 0)} is not Poisson, because if Y(t) is Poisson, P{Y(t) = −1} = 0.

Random Processes in the Time Domain 141

3.1.4.2.4  Application of Poisson Process


3.1.4.2.4.1   Arrival Time  Time between arrivals in a Poisson process has expo-
nential distribution, where Y is denoted as the random time of the first arrival.

(λy)0 − λy
P(Y > y) = P( N ( y) = 0) = e = e − λy (3.54)
0!

3.1.4.2.4.2   The CDF of Y

−λy
FY ( y) = P(Y ≤ y) = 1 − e (3.55)

Example 3.12

Suppose vehicles are passing a bridge with the rate of two per minute.

Question (1): In 5 minutes, what is the average number of vehicles?


Question (2): What is the variance in 5 minutes?
Question (3): What is the probability of at least one vehicle passing the bridge
in that 5 minutes?

To determine the above, the Poisson process is assumed, where V(t) is the num-
ber of vehicles in time interval [0, t], with a rate of λ = 2.

(λt )n − λt
P {V (t ) = n} = e , n = 1, 2, 3...
n!

1. Substituting t = 5, we have

(10)k −10
P {V (t ) = 5} = e
k!

Therefore, the mean is

μV (5) = 10

2. The variance is

σ v2(5) = 10


3. To calculate the probability, we can write

P[V(5) ≥ 1] = 1 − P[V(5) = 0] = 1 − e−10 ≈ 1.0


142 Random Vibration

3.1.4.3 Harmonic Process
The concept of harmonic process is practically very useful in random variation.

3.1.4.3.1  Definition
X(t) is a harmonic process given by

X(t) = A cosωt + B sinωt (3.56)


where A and B are independent random variables with an identical PDF and with
respect to the following conditions:
The mean is

μA = μB = 0 (3.57)

The variance is

σ 2A = σ 2B = σ 2 (3.58)

Furthermore, we have ω as a given frequency.


It is important to note that if A and B are normally distributed, X(t) is a Gaussian
process. Ordinarily, X(t) is not a Gaussian process.

3.1.4.3.2  Mean
We see the mean is

μX(t) = 0 (3.59)

3.1.4.3.3  Autocorrelation Function


The autocorrelation function is calculated as

R X(τ) = E[X(t) X(t + τ)] = E[{A cosωt + B sinωt}{A cosω(t + τ) + B sinω(t + τ)}] =
E[A2] cosωt cosω(t + τ) + E[B2] sinωt sinω(t + τ)

Note that E[AB] = 0, therefore, we have E[A2] = E[A2] = σ2.


Substituting for σ2 into the above equation, we have

σ2 cos[ωt − ω(t + τ)] = σ2 cos[− ωτ]

Consequently, resulting in

R X(τ) = σ2 cosωτ (3.60)

3.1.4.3.4  Sum of Harmonic Process


Now consider the sum of harmonic process. Let A1, A2, …Am, B1, B2, …Bm be inde-
pendent random variables, and ω1, ω2, …ωm be m distinct frequencies. We then define
a harmonic process as

Xk(t) = Ak cosωkt + Bk sinωkt (3.61)


Random Processes in the Time Domain 143

Furthermore, we define a new harmonic process as the sum of all the Xk(t):

m m

X (t ) = ∑
k =1
X k (t ) = ∑ ( A cosω t + B sinω t)
k =1
k k k k (3.62)

The variance is

σ2 = ∑σ
k =1
2
k (3.63)

and the autocorrelation is

m m

RX (τ) = ∑R
k =1
Xk (τ) = ∑σ
k =1
2
k cosω k τ (3.64)

Also, let p(ωk) represent the portion of the total variance contributed by the pro-
cess with frequency ωk. In other words, let

σ 2k
p(ω k ) = (3.65)
σ2

It is seen that

∑ p(ω ) = 1
k =1
k (3.66)

The autocorrelation function is then rewritten as

RX (τ) = σ 2 ∑ p(ω ) cosω τ


k =1
k k (3.67)

Note that when the frequency interval between ωk+1 and ωk for all k are equal, we
can write

1
p(ω k ) = g(ω k )∆ω (3.68)

In this case,

Δω = ωk+1 − ωk, k = 1, …m − 1 (3.69)

and g(ωk) is density function.


144 Random Vibration

Under a certain condition, which will be explained in Chapter 4, the frequency


interval can become infinitesimal, that is,

ωk+1 − ωk → dω (3.70)

and the autocorrelation function becomes

 m 1  1


RX (τ) = σ 2 lim 
 k =1 2π
∆ω→0 
g(ω k )∆ω cosω k τ  =
 2π ∫
0
σ 2 g(ω ) cosωτ dω (3.71)

Equation 3.71 is the Fourier cosine transform of function σ2g(ω). That is, R X(τ)
and σ2g(ω) is a Fourier pair, denoted by

R X(τ) ⇔ σ2g(ω) (3.72)

where the symbol “x(τ) ⇔ g(ω)” denotes a Fourier pair of x(τ) and g(ω).
In Chapter 4, function σ2g(ω) is referred to as the spectral density function because
it distributes the variance of X(t) as a density across the spectrum in the frequency
domain. Note that because the Fourier pair indicated in Equation 3.72 is unique, g(ω)
contains precisely the same information as R X(τ).
Comparing Equation 3.64 (in which a series consists of discrete harmonic terms
cosωk τ) and Equation 3.71 (in which an integral contains harmonic terms cosωτ), we
see that both represent the autocorrelation functions. The autocorrelation described
by Equation 3.71 has a continuous spectrum, with infinitesimal frequency resolution
dω, which implies that at any frequency point ω, the resolution is identical. That is,
the continuous spectrum has an infinite number of spectral lines. On the other hand,
the autocorrelation described by Equation 3.64 has a discrete spectrum and the num-
ber of the spectral lines is m. However, at frequency ωp and ωq, the corresponding
frequency intervals are not necessarily equal. That is, in general

Δωp = ωp+1 − ωp ≠ Δωq = ωq+1 − ωq (3.73)

The advantage of continuous spectrum and the disadvantage of equal frequency


resolution will be further discussed in Chapter 4.

3.2 CORRELATION ANALYSIS
3.2.1 Cross-Correlation
In Section 3.1, we introduced the concept of autocorrelation without discussing its
physical meaning and engineering applications in detail. In this section, the concept
of correlation of random processes in more specific ways is considered. The focus is
given to stationary processes.
Random Processes in the Time Domain 145

3.2.1.1 Cross-Correlation Function
3.2.1.1.1  Definition
Recalling Equation 3.17, the autocorrelation function is given by


RX (t1 , t2 ) = E[ X (t1 ) X (t2 )] =
∫ −∞
x1x2 f X (t1 ) X (t2 ) ( x1 , x 2 ) dxx

Similar to the term R X(t1, t2), consider the case in which there exists a second pro-
cess Y(t2). In this instance, we would have a cross-correlation, denoted by R XY (t1, t2),
which is the measure of correlation between two random processes X(t) and Y(t).

∞ ∞
RXY (t1 , t2 ) = E[ X (t1 )Y (t2 )] =
∫ ∫
−∞ −∞
x1 y2 f XY ( x , y, t1 , t2 ) d x d y (3.74)

If both X(t) and Y(t) are stationary, the cross-correlation function depends only on
the time lag τ, where

τ = t2 − t1 (3.75)

This is further illustrated in Figure 3.2.


Under these conditions, it is seen that

R XY (τ) = E[X(t) Y(t + τ)] (3.76)

and

RYX(τ) = E[Y(t) X(t + τ)] (3.77)

Observing that

R XY (τ) ≠ RYX(τ) (3.78)

RXY (τ)

Figure 3.2  Conceptual cross-correlation function.


146 Random Vibration

RXY (τ)
µX µY + σX σY
0.6
0.4
0.2 µX µY
0 τ
τ0
–0.2
µX µY – σX σY
–0.4
0 100 200 300 400 500 600

Figure 3.3  Bounds of cross-correlation function.

3.2.1.1.2  Skew Symmetry (Antisymmetric)


In the instance of skew symmetry, we have the following:

R XY (τ) = E[X(t) Y(t + τ)] = E[X(s − τ) Y(s)] = E[Y(s)X(s − τ)] = RYX(−τ) (3.79)

3.2.1.1.3  Bounds
The cross-correlation function has upper and lower bounds. Figure 3.3 shows con-
ceptually the bounds of a cross-correlation function.
In Figure 3.3, R XY (τ) is bounded by

−σXσY + μXμY ≤ R XY (τ) ≤ σXσY + μXμY (3.80)

These bounds can be graphically shown in Figure 3.3.

3.2.1.2 Cross-Covariance Function
3.2.1.2.1  Definition
The cross-covariance function is given by

σXY (t1, t2) = E[{X(t1) − μX(t1)} {Y(t2) − μY (t2)}] (3.81)

It can be realized that

σXY (t1, t2) = E[X(t1) Y(t2)] − μX(t1) μY (t2) (3.82)

Thus, through substitution, we have

σXY (t1, t2) = R XY (t1, t2) − μX(t1) μY (t2) (3.83)

3.2.1.2.2  Orthogonal Processes


If the cross-correlation function of two random processes X(t) and Y(t):

R XY (t1, t2) = 0 (3.84)

then X(t) and Y(t) are orthogonal.


Random Processes in the Time Domain 147

3.2.1.2.3  Uncorrelated Processes


If the cross-covariance function of two random processes X(t) and Y(t):

σXY (t1, t2) = 0 (3.85)

then X(t) and Y(t) are uncorrelated.


In the case of an uncorrelated process:

E[X(t1) Y(t2)] = E[X(t1)] E[Y(t2)] (3.86)

1. If X(t) and Y(t) are mutually independent, then they are uncorrelated.
2. If X(t) and Y(t) are uncorrelated, they are not necessarily independent.
3. If X(t) and Y(t) are Gaussian processes, then the condition of being “mutu-
ally independent” is sufficient and a necessary condition of “uncorrelated.”

3.2.1.2.4  General Meaning of Correlation


In general, the value of a random process over duration T will vary randomly above
and below the expected value. To count the number of instances the value steps
above and/or below in a unit of time, the concept of frequency is used. In the case
of the unit of time measured in seconds, the frequency would be measured in hertz.
If X(t) and Y(t) are correlated, then they share identical frequency components. If
this holds untrue, then they are uncorrelated.

3.2.2 Autocorrelation
By considering autocorrelation functions, the meaning of correlation will be further
explored.

3.2.2.1 Physical Meaning of Correlation


3.2.2.1.1  Ensemble Average versus Temporal Average
Referring to Equation 3.74, let Y(t) be X(t) and remembering the cross-correlation
function reduces to autocorrelation function (which was first introduced in Equation
3.17), Equations 3.17 and 3.74 are rewritten as follows:

∞ ∞
RX (t1 , t2 ) = E[ X (t1 ) X (t2 )] =
∫ ∫
−∞ −∞
x1x 2 f X ( x , y, t1 , t2 ) d x d y (3.87)

Recalling Equation 3.40, we obtain

〈X(t, k) X(t + τ, k)〉 = E[X(t, k) X(t + τ, k)] = R X(τ) (3.88)

From this equation, it is shown that if X(t) is ergodic, it must also be stationary,
then one can use temporal averages to replace the ensemble average as

T
1
RX (t1 , t2 ) = RX (τ) = lim
T →∞ 2T ∫
−T
x (t ) x (t + τ) dt (3.89)
148 Random Vibration

In Equation 3.89

t1 = t (3.90a)

t2 = t + τ (3.90b)

That is

τ = t2 − t1 (3.90c)

Generally, this is written as

T
1
RX (τ) = lim
T →∞ 2T ∫
−T
x k (t ) x k (t + τ) dt = E[ X k (t ) X k (t + τ)] (3.91)

In this instance, the subscription k stands for the kth record. It can be shown that the
notation “k” in Equation 3.91 is necessary.

3.2.2.1.2  Correlation Analysis


Suppose the time history xk(t) can be represented by Fourier series. Most engineering
signals, if not all, can be written in such form. Then, we have

xk(t) = a 0 + a1 cos(ω1t) + b1 sin(ω1t) + a2 cos(ω2t) + b2sin(ω2t) + …. (3.92)

Readers may consider the condition that we can always have Equation 3.92.
Due to the orthogonality of the cosine and sines from Equation 3.91 in the integra-
tion of Equation 3.92, the result will cancel the “uncorrelated” frequency components.
In this case, only the correlated terms will be left. This unveils the physical meaning
of correlation analysis. From this point on, let us denote random processes as “signals.”
In Figure 3.4, several correlation functions of typical signals have been plotted.
In the first case, a sinusoidal signal with an autocorrelation function that will never
decay is shown. Note that a signal that does not contain a sine wave will always
decay. Furthermore, the autocorrelation function of a signal that is closer to sinusoi-
dal will have a slower decaying rate, or on the contrary, it will decay rather quickly.
This is also shown in Figure 3.5.

3.2.2.2 Characteristics of Autocorrelation Function


In this section, we will further explore useful properties of autocorrelation functions
of real valued processes.

3.2.2.2.1  Bounds
For the case: X(t) is stationary.

RX (τ) = E[ X (t ) X (t + τ)] = σ XX (τ) + µ 2X = ρXX (τ)σ 2X + µ 2X (3.93)


Random Processes in the Time Domain 149

1 1
0.8
0.6
0.4 0.5
0.2
0
–0.2
–0.4 0
–0.6
–0.8
–1 –0.5
0 100 200 300 400 500 600 –10 –8 –6 –4 –2 0 2 4 6 8 10
Sine wave Sine wave contaminated by random noises

1 1.2
0.8 1
0.6
0.4 0.8
0.2 0.6
0
–0.2 0.4
–0.4 0.2
–0.6
0
–0.8
–1 –0.2
–10 –8 –6 –4 –2 0 2 4 6 8 10 –10 –8 –6 –4 –2 0 2 4 6 8 10
Narrow band random noise Broad band random noise

Figure 3.4  Examples of autocorrelation functions.

Note that

−1 ≤ ρXX(τ) ≤ 1

Thus,

−σXσY + μXμY ≤ R XY (τ) ≤ σXσY + μXμY (3.94)

Because

RX (0) = E[ X 2 (t )] = σ 2X + µ 2X ≥ 0 (3.95)

R X(0) is the maximum value and

|R X(τ)| ≤ R X(0) (3.96)

3.2.2.2.2  Symmetry
If

R X(τ) = E[X(t) X(t + τ)] = E[X(s − τ) X(s)] = R X(−τ) (3.97)

Then, it is symmetric at

τ = 0 (3.98)
150 Random Vibration

1
0.8
0.6
0.4
0.2
0
–0.2
–0.4
–0.6
–0.8
Constant: RX(t) = C –1
0 100 200 300 400 500 600
Sinusoidal

0.15

0.1

0.05

–0.05
–6 –4 –2 0 2 4 6
White noise Low-pass white noise

0.1 1
0.08 0.8
0.06
0.04 0.6
0.02 0.4
0
0.2
–0.02
–0.04 0
–6 –4 –2 0 2 4 6 –6 –4 –2 0 2 4 6
Band-pass white noise Exponential

1 3
2.5
2
0.5 1.5
1
0 0.5
0
–0.5
–0.5 –1
–1.5
–1 –2
–6 –4 –2 0 2 4 6 –6 –4 –2 0 2 4 6
Cosine exponential Sine-cosine exponential

Figure 3.5  Typical autocorrelation functions.


Random Processes in the Time Domain 151

3.2.2.2.3  Limiting Values

lim RX (τ) = µ 2X (3.99)


τ→∞

Equation 3.99 implies that when the time difference becomes sufficiently large,
X(t) and X(t + τ) becomes uncorrelated, and σXX(τ) vanishes.

Example 3.13

Given the autocorrelation of a stationary process X(t) to be

1
Rx (τ ) = 36 +
1+ 36τ 2

find the mean and variance of X(t).

1. According to Equation 3.99, it is seen that

RX (∞) = µ X2 = 36


thus, the mean is

μX = ±6


2. The variance can be written as

σ 2X = RX (0) − µ 2X = 37 − 36 = 1

3.2.2.2.4  Scales of Fluctuation


The autocorrelation function also measures the perseverance of the correlation. The
faster the decay of R X(τ), the less the sample realization of the process will remain
correlated.
The scale of fluctuation θ is defined as

1 T
1  1 T 
θ = lim
T →∞ T ∫
0
ρXX (τ) d τ = lim
2  T →∞
σX  T ∫
0
RX (τ) d τ 

(3.100)

Note that when θ is considerably longer than time lag τ, little correlation in the ran-
dom process can be expected.
152 Random Vibration

3.2.2.3 Examples of Autocorrelation Function


3.2.2.3.1  Low-Pass Random Process
In Figure 3.6, the plot of real and idealized low-pass filters is shown. A random
process passing through a low-pass filter is referred to as a low-pass random process.
In this case, the corresponding autocorrelation function can be written as

sin(ω C τ)
RX (τ) = σ 2X (3.101)
ωC τ

3.2.2.3.2  Delay and Attenuation


X(t) is a random process. Let Y(t) denote the delay of t by δ and the attenuation by
factor α on X(t), that is,

Y(t) = α X(t − δ) (3.102)

In this example, consider the cross-correlation function of

R XY (τ) = E[X(t) Y(t + τ)] = E[X(t) {α X(t − δ + τ)}] = α R X(τ − δ) (3.103)

Note that in this scenario

E[X(t) {X(t − δ)}] = R X(t − δ)

Gain

1
0.707

ω
0 ωC
(a)

Gain

ω
0 ωC
(b)
 

Figure 3.6  Low-pass filter. (a) Practical low-pass filter, (b) idealized low-pass filter.
Random Processes in the Time Domain 153

3.2.2.3.3  Sum of Two Processes


If X(t) and Y(t) are stationary and the sum is given by

Z(t) = X(t) + Y(t) (3.104)

then the cross-correlation RZX(τ) is

R ZX(τ) = E[Z(t) X(t + τ)] = E[{X(t) + Y(t)}{X(t + τ)}] = R X(τ) + R XY (τ) (3.105)

If X(t) and Y(t) are uncorrelated with zero mean, that is,

R XY (τ) = RYX(τ) = 0 (3.106)

then

R ZX(τ) = R X(τ) (3.107)

The autocorrelation function of Z(t) is

RZ (τ) = E[Z(t) Z(t + τ)] = E[{X(t) + Y(t)}{X(t + τ) + Y(t + τ)}]


= R X(τ) + R XY (τ) + RYX(τ) + RY (τ) (3.108)

Therefore, for the cases of X(t) and Y(t) are uncorrelated with zero mean

RZ (τ) = R X(τ) + RY (τ) (3.109)

3.2.2.3.4  Nondispersive Propagation


Suppose X(t) is stationary and is transmitted as a signal, and the propagation is non-
dispersive. The concept of dispersive and nondispersive is illustrated in Figure 3.7.

N(t)

X(t) r Y(t)

Nondispersive
R independent to frequency
t

Dispersive
R independent to frequency
t
 

Figure 3.7  Dispersion.


154 Random Vibration

Denoting d as the distance, r as the wave speed, and a as an attenuation factor, we


obtain

 d
Yt = aX  t −  + N (t ) (3.110)
 r

In Figure 3.7 and Equation 3.110, N(t) is noise.


The autocorrelation function of Y(t) is

RY (τ) = a2 R X(τ) + R N(τ) (3.111)

The cross-correlation function of X(t) and Y(t) is

 d
RXY (τ) = aRX  τ −  (3.112)
 r

Example 3.14:  Periodic Stationary Process

If a random process X(t) satisfies

X(t) = X(t + T)

X is referred to as periodic stationary.


Show that, in the case of periodic stationary process, the autocorrelation func-
tion is also a periodic function, that is

R X(T + τ) = R X(τ)

It is seen that

R X(T + τ) = E[X(t) X(t + T + τ)] = E[X(t) X(t + τ)] = R X(τ)

3.2.3 Derivatives of Stationary Process


We are familiar with derivatives of deterministic processes. Now, we will consider
the case of stationary random process.

3.2.3.1  Stochastic Convergence


The first concept in the study of derivatives is convergence. Consider a sequence of
random variables denoted by Xj, where j = 0, 1, 2,…
It is impossible to write

lim X j = X 0
j →∞

for Xj are random sets.
Using f( X ) to denote the frequency, it is also problematic to have
j

lim f( X j ) = p
j →∞
Random Processes in the Time Domain 155

This holds true because the above equation implies that there exists an ε > 0, for
no matter how large a number N > 0, one can always find n > N, such that

f( Xn ) − p < ε

This is given that if we let ε < p, then

{ }
P f( Xn ) = 0 = (1 − p)n ≠ 0

that is, the event f( Xn ) = 0 is possible. In other words, it is possible for f( Xn ) ≠ p,


which is against the observation based on the classical theory of probability. We thus
consider the convergence from a different angle as follows.

3.2.3.1.1  Convergence with Probability 1


This angle is the consideration of chance, or probability, of convergence. First,
understand the following:

lim P( X j = X 0 ) = 1 (3.113a)
j →∞

3.2.3.1.2  Convergence in Probability


Because the above requirement is strong, we may in turn consider


j →∞
(
lim P X j − X 0 ≥ ε = 0 ) (3.113b)

3.2.3.1.3  Convergence in Distribution


Another angle is the convergence in distribution function, namely,

lim FX j ( x ) = FX0 ( x ) (3.113c)


j →∞

3.2.3.2 Mean-Square Limit
The second importance of derivatives in the temporal process is the limit. Similarly,
because the process is random, we will need to consider some different approaches.

3.2.3.2.1  Definition
{Xn} is a real series of random variables, where N = 0, 1, 2, 3. For {Xn}, its mean
square values exist, given by

E  X n2  < ∞ (3.114)

If

lim X n − X 0 = 0 (3.115)
n→∞
156 Random Vibration

or

lim E ( X n − X 0 )2  = 0 (3.116)


n→∞

then, X0 is the mean square limit of {Xn}, denoted by

l.i.m X n = X 0 (3.117)
n→∞

3.2.3.2.2  Property
{Xn} and {Yn} are two real series of random variables, where both have a limited
mean and variance:

l.i.m X n = X 0
n→∞
and

l.i.m Yn = Y0
n→∞
Written with constants a and b, we have

1. l.i.m(aX n + bYn ) = aX 0 + bY0 (3.118)


n→∞

2. E ( X ) = E  l.i.m X n  = lim E[ X n ] (3.119)


 n→∞  n→∞
3. lim E[ X nYm ] = E[ X 0Y0 ] (3.120)
n→∞
m →∞

For the case, m = n, we will have

lim E  X n2  = E  X 02  (3.121)
n→∞

Example 3.15

If, for random variables X and Y, we have EX < ∞ and EY < ∞, then the complex
valued random variable Z = X + jY has its mathematical expectation EZ given by

EZ = EX + jEY

We can further define the characteristic function of a real-valued random vari-


able W, denoted by ϕ(t) and

ϕW (t) ≡ E ejtW = E[cos(tW)] + j E[sin(tW)]

Note that E[cos(tW)] < ∞ and E[sin(tW)] < ∞, so that the characteristic function
of the random variable W, ϕW (t), always exists.
Random Processes in the Time Domain 157

It can be proven that the characteristic function and the PDF of a random vari-
able are uniquely determined by each other. For example, for random variables
whose distribution is Poisson, then

( jt )
φW (t ) = e λ e −1

Now, let us consider showing the mean-square limit of a Poisson random sequence
is Poisson random variable.
Let {X n , n = 1, 2,} to denote the Poisson random sequence, and we have

l.i.m X n = X
n→∞

In this case, it is seen that

lim E ( X n ) = E ( X )
n→∞

which implies

lim λ n = λ
n→∞

Therefore, we further have

( jt )
φ X (t ) = lim φ Xn (t ) = lim e n ( e −1) = e λ e −1
λ jt

n→∞ n→∞

which implies that X is the random variable with Poisson distribution, for its char-
acteristic function is Poisson.

3.2.3.3 Mean-Square Continuity
X(t) is a real process, if for t ∈ T,

l.i.m X (t + h) = X (t ) (3.122a)
h→0

X(t) is continuous in mean square at t.


If X(t) is continuous in mean square at every point of t ∈ T, then X(t) is continuous
in mean square on T.
Stationary process X(t) is continuous in mean square at t, if and only if R X(τ) is
continuous at τ = 0.

Proof:

Because X(t) is stationary, we have the following:

E[{X(t + τ) − X(t)}2] = E[{X(t + τ)}2] + E [{X(t)}2] − 2E[X(t + τ) X(t)}] = 2(R X(0) − R X(τ))

lim E[ X (t + τ) − X (t )] = lim[2( RX (0) − RX (τ))]


τ→0 τ→0
158 Random Vibration

Example 3.16

A mutually independent sequence of random variables denoted as {Xn, n ≥ 1} has


the following distribution

Xn 0 n
1 1
P( X n ) 1− 2
n n2

Check whether Xn is a mean-square convergence.


Note that Equation 3.121 implies that

l.i.m[ X (t + h) − X (t )] = 0
h→0

Based on the above equation, let us check the expectation E(|Xm − Xn|2).

(
E Xm − Xn
2
) = E (X 2
m ) + E ( X ) − 2E(X )E(X
2
n n m )

It is seen that

E( X m ) = m
1 1 1
( )
= , E X m2 = m2 2 = 1
m2 m m

and

E( X n ) = n
1 1
n 2
n
( )
n
1
= , E X n2 = n2 2 = 1

Therefore, when m ≠ n

1
E ( X m X n ) = E ( X m )E ( X n ) =
mn

We now have

(
lim E X m − X n
m→∞
2
) = lim  2 − 2 mn1  = 2 ≠ 0
m→∞
n→∞ n→∞

Thus, if {Xn, n ≥ 1} is not continuous in mean square, it will not converge in a mean
square.
Random Processes in the Time Domain 159

3.2.3.4 Mean-Square Derivatives of Random Process

X(t) is a random process. If X (t ) exists such that, for t ∈ T,

X (t + h) − X (t )
X (t ) = l.i.m (3.122b)
h→0 h

then X(t) is mean-square differentiable at t, and X (t ) is the mean-square derivative


of X(t). If any t ∈ T, X(t) is mean-square differentiable, then X(t) is mean-square dif-
ferentiable on T.

3.2.3.5 Derivatives of Autocorrelation Functions


In the following, suppose X(t) is mean-square differentiable, we have

dRX (τ) dE[ X (t ) X (1 + τ)]  dX (t + τ) 


= = E  X (t )  = E[ X (t ) X (t + τ) = RXX (τ)] (3.123)
dτ dτ  dτ 

Furthermore,

d 2 RX (τ)
= RXX (τ) (3.124)
dτ 2

It is equally true that

d  dRX (τ)  d  dE[ X (t ) X (t + τ)]  d  dE[ X (t − τ) X (t )] 


 =  =  
dτ  dτ  dτ  dτ  dτ  dτ 
 dX (t − τ) dX (t )  (3.125)
= E −  = RX (− τ)
 dτ dτ 

In addition, we have

d 3 RX (τ)
= − RXX
  (τ) (3.126)
dτ 3

and

d 4 RX (τ)
= RX (τ) (3.127)
dτ 4

X(t) is mean-square differentiable on T, if and only if the following exists:

∂2 RX (s, t ) R (t + h, t + h′) − RX (t , t + h′) − RX (t + h, t ) + RX (t , t )


= lim X (3.128)
∂s∂t h→ 0
h′→ 0
hh′

160 Random Vibration

Example 3.17

The ordinary random walk (also called binomial process) is the simplest random
process. Using Zt to denote the increments from time t − 1 to time t, taking exclu-
sively the values +1 or −1, we have

Zt = Xt − X−t−1

Assuming Zt is independent from the initial value X0, we can write

Xt = X 0 + ∑ Z , t = 1, 2....
k
k =1

Thus, X0, Z1, Z2, … are independent and for all k, we have

P(Zk = 1) = p  and  P(Zk = −1) = 1 − p

For a more general case, we can have the binomial process, if replacing 1 by f and
replacing −1 by b (f stands for “walking” forward and b stands for backward), that is,

P(Zk = f) = p  and  P(Zk = b) = 1 − p for all k

where f and b are constants.


Now consider the amount of charge in a given value and the location of cloud
before a random time N is binomial whereas after time N it remains constant.
Here N is Poisson random variable with parameter λ and is independent to the
binomial process. Let Xn be the amount of charge at time n; show that as n → ∞,
Xn is mean-square converged to X.
Letting m ≤ n, we have

(
E Xn − Xm
2
) = E E {(X n − X m )2 N } = P(N ≤ m) E {(X n − X m )2 N ≤m }
∑ P(N = k)E {( X } + ∑ P(N = k)E {( X − X }
n ∞

) )
2 2
+ n − Xm N=k n m N=k
k = m+1 k = n+1
n ∞

= 0+ ∑
k = m+1
λk −λ
k!
e (k − m) p[1+ (k − m − 1)p] +
k = n+1
λk −λ
k! ∑
e (n − m)p[1+ (n − m − 1)p]

n ∞
λ k−2 λ k−2 −λ 2
≤ ∑
k = m+1
(k − 2)!
e−λ λ 2p +
k = n+1

(k − 2)!
e λ p

∞ ∞
λ k−2
= e−λ λ 2p ∑
k = m+1
(k − 2)!
= e−λ λ 2p
k = m−1
λk
k! ∑
Furthermore, considering a series S

∑ λk ! = e
k
λ
S=
k =0

Random Processes in the Time Domain 161

and letting
n

∑ λk !
k
Sn =
k =0

we can write

∑ λk ! → 0, as m → ∞
k
S − Sm− 2 =
k = m−1

Therefore, when m ≤ n, we have


lim e − λ λ 2p
m→∞ ∑
k = m−1
λk
k!
=0

That is,

(
lim E X n − X m
m→∞
2
)=0
n→∞

Therefore, {Xn, n ≥ 1} is mean-square converged to X.

3.2.3.6 Derivatives of Stationary Process


Because

R X(0) = max (3.129)

we have the following:

dRX (τ)
τ= 0 = RXX (0) = 0 (3.130)

and

RXX (τ) τ> 0 >0 (3.131)

RXX (τ) τ< 0 <0 (3.132)

Combining Equations 3.130 through 3.132, we conclude that

RXX (τ) = odd (3.133)

Therefore, RXX (τ) is skew symmetric (antisymmetric).


162 Random Vibration

3.2.3.7 Derivatives of Gaussian Process


Suppose X(t) is Gaussian, then X(t, k) denotes the kth sample realization

X (t + h, k ) − X (t , k )
X (t , k ) = l.i.m (3.134)
h→0 h

Because X(t, k) is Gaussian, we see that [X(t + h, k) − X(t, k)] is also Gaussian and
could therefore conclude that X (t , k ) is Gaussian as well.

Example 3.18

A random sequence of Bernoulli distributed variables, which are mutually inde-


pendent with identical PMF, is denoted by {Y(n), n ≥ 1}. Let

X(t) = Y(i), 2−i < t < 21−i, i = 1, 2, …

Consider if {X(t), 0 < t ≤ 1} is derivable.


First, when 2−i < t < 21−i, i = 1, 2, …, there must exist a small neighborhood of
t, in which

X(t) = Y(i)

Therefore,

X (t ) = 0

Second, when t = 1, we can also realize that

X − (t ) = 0

Third, when t = 2−i (i = 1, 2, …), we have X − (t ) = 0.


However,

X (t + h) − X (t ) Y (i − 1) − Y (i )
X + (t ) = l.i.m = l.i.m
h→0 + h h→0 + h

which means that the right mean-square derivative does not exist; therefore, at
t = 2−i, (i = 1, 2, …), {X(t), 0 < t ≤ 1} is not mean-square derivable.

Problems
1. Identify the state space and index set for the following random process
a. Temperature measured hourly at an airport. Continuous state and dis-
crete index
b. Elevation of sea surface measured continuously at a wave staff. Continu­
ous state and continuous index
Random Processes in the Time Domain 163

c. Daily closing price of a stock. Discrete state and discrete index


d. Number of significant defects in an optical fiber starting at one end, dis-
crete state and continuous index
2. A random process X(t) is given as

cos πt H
X (t ) =  −∞<t<∞ (P3.1)
 2t , T

where H and T stands for the case of head and tail when a coin is tossed.
Note that P(H) = P(T) = 0.5.
a. Find the one-dimensional distribution F(x; 0.5) and F(x, 1) of the ran-
dom process X(t)
b. Find the two-dimensional distribution F(x1,x2, 0.5, 1) of X(t)
3. A deterministic square wave process Xsquare(t) with period T. Find means,
variations, and autocorrelations for the following random process variation
of this function (Figure P3.1)

1, 0 < t < T / 2



−1, T / 2 < t < T

X square (t ) = 
 X square (t + T ), t ≤ 0

 X square (t − T ), t ≥ T

a. Square-wave process with random amplitude


b. Square-wave process with random phase
(Hint: Denote the random variable set as Θ. Assume the range of Θ is
from 0 to T, fΘ(θ) = 1/Τ)
c. Square-wave process with random amplitude and phase
4. The mean and autocovariance of random process X(t), t ∈ T are denoted by
μX(t) and σXX(t1, t2). In addition, g(t) is a deterministic function of t. Find the
mean and covariance of random process Y(t) = X(t) + g(t).
5. For the process X(t) = a cos(ωt − Θ), derive the stationary autocorrelation
function from general autocorrelation function.

Xsquare
1

t
–T/2 0 T/2 T

Figure P3.1
164 Random Vibration

6. Find the scale of fluctuation of low-pass random process


sin ω C τ
RX (τ) = σ 2X
ωC τ

7. Find the autocorrelation and cross-correlation functions of W(t) and X(t)


X(t) and Y(t) are uncorrelated [Hint: σXY (t) = 0 and W(t) = X(t) Y(t)]
8. Verify the mean, variance, and autocorrelation given for the Poisson process
Poisson process N(n, t): its density function is

(λt )n − λt
pN (n, t ) = e , n ≥ 0, t ≥ 0
n!

9. Prove if l.i.m X n = X then the characteristic function of Xn will converge to


n→∞
the characteristic function of X.
10. Use MATLAB® to calculate the temporal average of the sample realization
from the computer-generated process. Additionally, calculate the value of
the sample autocorrelation function for t1 = 0 and t2 = 1.
4 Random Processes in
the Frequency Domain
Chapter 3 introduced random processes, the variables of which are indexed by the
temporal parameter t. A random process is defined as a certain kind of temporal
function. For deterministic temporal functions, the corresponding spectra can be
obtained by their Fourier transform, which provides a different angle to describe the
natures of temporal functions.
In Chapter 4, it will be shown that Fourier transforms can also be used on many
types of random processes. This allows random processes to be operated in the
frequency domain, and also provides simplified formulas of the expected values.
Mathematically, these processes need to be stationary. However, for engineering
applications, this requirement may be relaxed to a certain degree, which will also be
discussed hereafter.

4.1 Spectral Density Function


The spectral density function is a useful tool in the application of the Fourier trans-
form and describes the frequency components of a stationary process.

4.1.1 Definitions of Spectral Density Functions


In this subsection, we will first discuss the necessary mathematical background use-
ful to the main topic—the definitions of spectral density functions—which by them-
selves are powerful tools to study random processes in the frequency domain.

4.1.1.1 Mean-Square Integrable of Random Process


In Chapter 3, the properties of the derivatives of random processes were discussed.
Here, the integral of random processes are now considered. This is necessary because
Fourier transforms are based on integrations. However, similar to the difficulty of direct
derivation of random process X(t), special treatment of its integral is also necessary.
Consider X(t) as a random process and a, b ∈ T. Divide the interval [a, b] into an
n + 1 moment such that

a = t0 < t1 < … tn = b (4.1)

Denote

∆ = max (t k +1 − t k ) (4.2)
0 ≤ k ≤ n −1

165
166 Random Vibration

t k′ is a point inside [tk, tk+1], that is,

t k ≤ t k′ ≤ t k +1 , k = 0,1,  n − 1 (4.3)


n −1 b
If the term
namely,
k =0
X (t k′ )(t k +1 − t k ) possesses a mean-square limit

a
X (t ) dt ,

 n−1 

b
l.i.m 
n→∞ 
 k =0
X (t k′ )(t k +1 − t k ) −
∫ a
X (t ) dt 

(4.4)
 n −1
2 
∑ X (t′ )(t
b
= lim E 
∆→0 
 k =0
k k +1 − tk ) −
∫ a
X (t ) dt  = 0

then, X(t) is a mean-square integrable in interval [a, b] (Riemann integrable, Bernhard


Riemann, 1826–1866).
Random process X(t) is a mean-square integrable in interval [a, b], if and only if
RX(s, t) is Riemann integrable in space [a, b] × [a, b]. Stationary process X(t) is a mean-
square integrable in interval [a, b] if its autocorrelation function RX(τ) is integrable, and

 b b  b b


E
 ∫ a
X (s ) d s
∫ a
X (t ) dt  =
 ∫∫
a a
E[ X (s) X (t )] d s dt
(4.5)
b b b b
=
∫∫
a a
RX (s − t ) d s dt =
∫∫
a a
RX (τ) d s dt

where

τ = s − t (4.6)

Example 4.1

Suppose X(t) is a homogeneous continuous process with independent increments


and with zero mean. If

|X(s) − X(t)|~N(0, σ2|s − t|) (4.7)

where,

σ > 0 (4.8)

then X(t) is referred to as a Wiener process (also known as Brownian motion


process; Norbert Wiener, 1894–1964; Robert Brown, 1773–1858). An instance of
a Brownian motion process is illustrated in Figure 4.1.
For an independent increment random process X(t), t ≥ 0, if the probability
distribution of its increment ΔX(t1, t2), ΔX(t2, t3), …ΔX(tn−1, tn) depends only on
Random Processes in the Frequency Domain 167

Figure 4.1  Brownian motion.

the time difference t2 − t1, t3 − t2, tn − tn−1, then X(t) is said to have a stationary
independent increment (a Wiener process is a stationary independent increment).
Consider now a case in which a Wiener process is a mean-square integrable.
By definition of the Wiener process,

E[X(t)] = 0 (4.9)
Furthermore, the following is also true:

 σ 2s , s<t
σ X ( s ,t ) = E[ X ( s )X (t )] =  2
 σ t , t<s

or
σX(s,t) = E[X(s)X(t)] = σ2min(s,t) = R X(s,t) (4.10)

Equations 4.9 and 4.10 can be realized as follows: when t ≥ τ,

E[X(s)X(t)] = E[X(s) {X(t) − X(s) + X(s)}] = E[X(s) {X(t) – X(s)}] + E[X(s)X(s)] = 0 + D[X(s)] = σ2s.

In interval [0, u], it is seen that

u u u u u u
∫∫ 0 0
E[ X ( s )X (t )]d s dt =
∫ ∫
0 0
RX ( s ,t )dt ds =
∫ ∫ 0 0
σ 2 min( s ,t )dt ds
s t s t

u
For integral
∫ 0
t
σ 2 min( s ,t )dt , there exists two possibilities: if t < s, then the limit
s

∫ t dt; if t > s, then the limit of t is from s to u, we


of t is from 0 to s and we have
0
u u u   σ u s u 2


have  s dt. Therefore,
s ∫ ∫ σ min(s,t ) ds dt = σ ∫  ∫ t dt + ∫ s dt  ds = 3 u .
u
0 0
2 2

0 0 s
3


We can see that X (t )dt exists.
0
168 Random Vibration

Example 4.2

Using Y(u) to denote the integral given by the above example, find its mean, auto-
correlation function, and variance, where

b u
Y (u ) =
∫ a
X (t ) dt =

0
X (t ) dt

For mean,

u
E[Y (u )] =
∫ 0
E[ X (t )]dt = 0

For autocorrelation function, assume the case in which a timepoint v exists


such that

0<v ≤u

This case is illustrated in Figure 4.2a, wherein the lower domain, denoted by
Dl, the time point s can be either shorter or longer than t; and in the upper domain,
denoted by Du, s is always smaller than t. Therefore, the correlation function can
be written as

u v u v
RY (u ,v ) =
∫∫
0 0
E[ X ( s )X (t )]d s dt =
∫∫
0 0
σ 2 min( s ,t ) d s dt

v s v t
=
∫∫ σ min(s,t )ds dt + ∫∫ σ s ds dt = ∫
Dl
2
∫ 
∫ ∫ 
d s σ t dt + dt σ s d s
Du
2

0 0
2

0 0
2

Dl

v u
+
∫ σ s ds ∫ dt

0
2

v
Du

v
σ 2s 2 v
σ 2s 2
=2
∫ 0 2
ds +

0
σ 2(u − v )s d s =
6
(3u − v )

t t
u
Du Upper Left Right
v u

Dl Lower Dl Dr

s s
0 v 0 u v
(a) (b)

Figure 4.2  Integral domains. (a) Lower–upper. (b) Left–right.


Random Processes in the Frequency Domain 169

Similarly, when, 0 < u ≤ v, see Figure 4.2b with domains Dl and Dr, we can
write

σ 2u 2
RY (u ,v ) = (3v − u )
6

Additionally, it can be shown that

σ 2u 3
D[Y (u )] = RY (u ,u ) =
3

4.1.1.2 Stationary Process: A Review


In Chapter 3, we introduced a special group of random processes, the stationary
process. Although it is not necessary for a random process to be stationary to ana-
lyze it in the frequency domain, stationary processes, especially stationary random
vibrations, are more conveniently studied in the frequency domain than in the time
domain.
In Chapter 3, we also roughly divided stationary processes into strictly stationary
and weakly stationary processes. Recall that a strictly stationary process is defined
by its finite dimensional distributions, whereas a weakly stationary process is char-
acterized by its mean, variance, and autocorrelation functions, which are considered
to be the first and second moments. In fact, we may have more detailed classifica-
tions by considering their n-order of moments. In Figure 4.3, a block diagram is used

Strictly
stationary

Gaussian
nth-order process
stationary

Second-order Weakly
stationary stationary

Autocorrelated
First-order stationary
stationary

Mean
stationary

Figure 4.3  Stationary processes.


170 Random Vibration

to show the relationships between strictly stationary processes and various weakly
stationary processes.
It is noted that the stationary Gaussian process is both strictly and weakly sta-
tionary. In Figure 4.3, the second-order stationary process and specially the weakly
stationary process are of great importance. Because in this situation, the autocorrela-
tion functions are only a function of τ, the time difference t2 − t1. Namely, R X(t1, t2) =
R X(t2 − t1) = R X(τ). It will be shown that the Fourier transform of such correlation func-
tions will have deterministic spectra, although the processes themselves are random.
Additionally, to operate random vibration, we need to consider both the input and
output processes, namely, the excitation process X(t) and the response process Y(t).
For a linear time-invariant system, if the excitation is stationary, then the response
will also be stationary. In this circumstance, the cross-correlation functions will
also be only functions of τ, the time lag, namely, R XY (t1, t2) = R XY (t2 − t1) = R XY (τ).
Therefore, we will see that the Fourier transform of the cross-correlation functions
also have deterministic spectra.
Additionally, with the help of the Fourier transforms of these correlation func-
tions, we can further obtain the transfer functions, which is one of the most funda-
mental concepts of vibrational systems. Practically speaking, the transfer functions
obtained through the Fourier transforms of correlation functions will be notably
more accurate than measurements through the direct definition of transfer functions.

4.1.1.3 Autospectral Density Functions


Suppose a stationary random process is mean-square integrable, then consider its
spectral density functions, the function in the frequency domain. In the literature,
this is more rigorously referred to as the power spectral density (PSD) function.

4.1.1.3.1  Further Discussion on Sum of Individual Harmonic Process


First, consider the sum of the harmonic process, the Fourier series, which will give
us some insight into the above-mentioned Fourier transform (Jean B.J. Fourier,
1768–1830). Recall Equation 3.62:

m m

X (t ) = ∑ X (t) = ∑ ( A cos ω t + B sin ω t)


k =1
k
k =1
k k k k (4.11)

Repeating Equation 3.64, we have

m m

RX (τ) = ∑
k =1
RX k (τ) =
1
2π ∑σ
k =1
2
k cos ω k τ (4.12)

When m→∞ (recall Equation 3.71),

 m 1  σ2


RX (τ) = σ 2 lim 
∆ω → 0 
 k =1

g(ω k )∆ω cos ω k τ  =
 2π ∫
0
g(ω ) cos ωτ dω (4.13)
Random Processes in the Frequency Domain 171

That is, R X(τ) and σ2g(ω) are a Fourier pair, denoted by (recall Equation 3.72)

R X(τ) ⇔ σ2g(ω) (4.14)

The relationship between R X(τ) and σ2g(ω) can be extended to general cases. This
is one of the fundamental approaches in dealing with random processes.

Example 4.3

Given the following density function σ2g(ω) taken from a stationary process

n
ap
1. σ 2 g (ω ) = ∑
p =1
ω + bp2
2
, bp > 0, p = 0,1, 2, ,n

and

 a2 , ω1 ≤ ω ≤ 2ω1
2. σ 2 g (ω ) = 
0, elsewhere

find the corresponding autocorrelation functions R X(τ).

1. It is known that the Fourier pair exists

ap ap − bp τ
⇔ e
ω 2 + bp2 2bp

Therefore,

n
ap
RX (τ ) = ∑ 2b
p =1 p
e
− bp τ


2. Let

a2 , ω ≤ 2ω1
σ 2g1(ω ) = 
0, elsewhere


and

 a2 , ω < ω1
σ 2 g 2 (ω ) = 
0, elsewhere

172 Random Vibration

we have

σ2g(ω) = σ2g1(ω) − σ2g2(ω)

Furthermore,

∞ 2ω1
1 a2
R1(τ ) =
2π ∫
−∞
a2e jωτ dω =
2π ∫−2ω1
(cos ωτ + j sin ωτ) dω

2ω1a2  sin 2ω1τ  a2 sin 2ω1τ
= =
π  2ω1τ  πτ

and

∞ ω1
1 a2
R2(τ ) =
2π ∫ −∞
a2e jωτ dω =
2π ∫ − ω1
(cos ωτ + j sin ωτ) dω

ω a2  sin ω1τ  a2 sin ω1τ


= 1 
=
π  ω1τ  πτ

Therefore, we can obtain

a2(sin 2ω1τ − sin ω1τ)


σ 2 g (ω ) =
πτ

4.1.1.3.2 Wiener–Khinchine Relations (Norbert Wiener, 1894– 1964;


Aleksandr Y. Khinchine, 1894–1959)
We now extend the relationship between the autocorrelation function and the density
function into a broader view. In fact, the Fourier pair of the autocorrelation function
is called the auto-PSD function.
Formally, the auto-PSD function was defined as


S X (ω ) =
∫ −∞
RX (τ)e − jωτ d τ (4.15)

with the inverse transform as


1
RX (τ) =
2π ∫−∞
S X (ω )e jωτ dω (4.16)

The Wiener–Khinchine relation is now written as

R X(τ) ⇔ SX(ω) (4.17)


Random Processes in the Frequency Domain 173

Note that in Equation 4.15, the auto-PSD function is introduced as a “definition.”


In Section 4.4.2, we will compare Equation 4.14 [R X(τ) ⇔ σ2g(ω)] with Equation 4.17
[R X(τ) ⇔ SX(ω)]. As described in Chapter 3, the autocorrelation function R X(τ) of a
stationary random process becomes a deterministic function of time lag τ only. From
Equation 4.17, we can further realize that the auto-PSD function is also deterministic.
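As a quick numerical illustration, the deterministic character of the auto-PSD can be checked by direct quadrature. The sketch below is not from the text: it assumes the autocorrelation R_X(τ) = e^{−2|τ|}, whose transform by Equation 4.15 is S_X(ω) = 4/(ω² + 4), and the grid sizes are arbitrary choices.

```python
# Numerical check of the Wiener-Khinchine pair (Eqs. 4.15 and 4.16), assuming
# R_X(tau) = exp(-2|tau|), whose auto-PSD is S_X(omega) = 4/(omega^2 + 4).
import numpy as np

tau = np.linspace(-50.0, 50.0, 200_001)   # lag grid, wide enough for R to decay
R = np.exp(-2.0 * np.abs(tau))            # autocorrelation R_X(tau)

for w in (0.0, 1.0, 3.0):
    # Eq. 4.15: S_X(w) = integral of R_X(tau) exp(-j w tau) d tau
    S_num = np.trapz(R * np.exp(-1j * w * tau), tau).real
    print(f"omega = {w:.1f}: numeric = {S_num:.6f}, exact = {4.0/(w**2+4.0):.6f}")
```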

Example 4.4

A stationary process {X(t), −∞ < t < ∞} is zero-mean. It has the PSD function as

$$S_X(\omega) = \frac{6\omega^2}{\omega^4 + 5\omega^2 + 4}$$

Find the correlation function R_X(τ) and the variance D[X(t)].


To obtain the correlation function, let

$$S_X(\omega) = \frac{6\omega^2}{\omega^4 + 5\omega^2 + 4} = \frac{A}{\omega^2 + 4} + \frac{B}{\omega^2 + 1}$$

We can obtain

6ω2 = Aω2 + A + Bω2 + 4B

By comparing the coefficients on both sides of the above equation,

A+B=6

and

A + 4B = 0

Therefore, we have A = 8 and B = −2, and then
$$S_X(\omega) = \frac{8}{\omega^2 + 4} + \frac{-2}{\omega^2 + 1}$$

Furthermore,
$$R_X(\tau) = \mathcal{F}^{-1}[S_X(\omega)] = \mathcal{F}^{-1}\left[\frac{8}{\omega^2 + 4}\right] + \mathcal{F}^{-1}\left[\frac{-2}{\omega^2 + 1}\right] = 2e^{-2|\tau|} - e^{-|\tau|}$$

The variance can be further calculated as
$$D[X(t)] = R_X(0) - \mu_X^2(t) = (2 - 1) - 0 = 1$$
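The partial-fraction step of Example 4.4 can also be reproduced symbolically. The following sketch assumes SymPy as a tool choice (not part of the text) and verifies the decomposition and the variance R_X(0) = 1:

```python
# Symbolic verification of Example 4.4 (SymPy assumed as an illustrative tool).
import sympy as sp

w, tau = sp.symbols('omega tau', real=True)
S = 6*w**2 / (w**4 + 5*w**2 + 4)
print(sp.apart(S, w))    # -> 8/(omega**2 + 4) - 2/(omega**2 + 1)

# The inverse pair a/(w^2 + b^2) <-> (a/2b) e^{-b|tau|} assembles R_X(tau):
R = 2*sp.exp(-2*sp.Abs(tau)) - sp.exp(-sp.Abs(tau))
print(R.subs(tau, 0))    # variance D[X(t)] = R_X(0) = 1
```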


4.1.1.3.3  Existence of Fourier Transform


To have the Wiener–Khinchine relationship, as stated in Equation 4.17, the Fourier
transform must exist. Therefore, the corresponding condition should be considered:
R X(τ) must be absolutely integrable, that is,



$$\int_{-\infty}^{\infty}\left|R_X(\tau)\right|d\tau < \infty \qquad (4.18)$$

For a non-zero mean process, X′(t), when τ→∞, Equation 3.99 will be rewritten
with the help of the mean-square limit:

$$\lim_{\tau\to\infty} R_{X'}(\tau) = \mu_{X'}^2 \qquad (4.19)$$

This makes R X′(τ) not absolutely integrable. Now let

X(t) = X′(t) − μX′ (4.20)

so that

E(X(t)) = 0 (4.21)

In this operation, the Fourier transform of X(t) will not be divergent.

Example 4.5

{W(t), t ≥ 0} is a Wiener process with parameter σ2. Let

X(t) = W(t + s) − W(t), s > 0

Show that the increment {X(t), t ≥ 0} is a stationary process and find the autocor-
relation function as well as auto-PSD function.
First,

μX(t) = E[X(t)] = E[W(t + s) − W(t)] = 0

Second,

R_X(t, t + τ) = E[{W(t + s) − W(t)}{W(t + s + τ) − W(t + τ)}]

= R_W(t + s, t + s + τ) − R_W(t + s, t + τ) − R_W(t, t + s + τ) + R_W(t, t + τ)

= σ² min{t + s, t + s + τ} − σ² min{t + s, t + τ} − σ² min{t, t + s + τ} + σ² min{t, t + τ}

= σ² min{s, s + τ} − σ² min{s, τ} − σ² min{0, s + τ} + σ² min{0, τ} = R_X(τ)




Third,

E[X2(t)] = R(t, t) = sσ2 < ∞

Therefore, the increment {X(t), t ≥ 0} of the Wiener process is stationary. It is noted


that a Wiener process itself is not stationary. To find the auto-PSD function, consider

$$R_X(\tau) = \sigma^2\min\{s, s+\tau\} - \sigma^2\min\{s,\tau\} - \sigma^2\min\{0, s+\tau\} + \sigma^2\min\{0,\tau\}$$
$$= \begin{cases} 0, & \tau < -s \\ \sigma^2(\tau + s), & -s \le \tau < 0 \\ \sigma^2(s - \tau), & 0 \le \tau \le s \\ 0, & \tau > s \end{cases} \;=\; \begin{cases} \sigma^2\left(s - |\tau|\right), & |\tau| \le s \\ 0, & \text{elsewhere} \end{cases}$$

Then
$$S_X(\omega) = \mathcal{F}[R_X(\tau)] = \int_{|\tau|\le s}\sigma^2\left(s - |\tau|\right)e^{-j\omega\tau}\,d\tau = \frac{4\sigma^2\sin^2(\omega s/2)}{\omega^2}$$

4.1.1.4 Spectral Distribution Function Ψ(ω)


We now define the direct integral of the auto-PSD function SX(ω) as a spectral distri-
bution function, to be used in a later section.

$$\Psi(\omega) = \int_{-\infty}^{\omega} S_X(\varpi)\,d\varpi = \int_{-\infty}^{\infty} R_X(\tau)\,\frac{e^{-j\omega\tau} - 1}{-j\tau}\,d\tau \qquad (4.22)$$

It can be proven that any stationary random process can be regarded as the super­
position of mutually uncorrelated harmonic oscillations of various frequencies and with
random phases and amplitudes. In this regard, the spectral distribution function Ψ(ω)
unveils a different insight into the random time series because Ψ(ω) is also deterministic.
Note that in Equation 4.22, the integral limit starts from −∞, which may restrict the existence of Ψ(ω) in certain cases. However, in engineering applications, the lowest frequency of a PSD function starts from zero; namely, for engineering applications, Ψ(ω) always exists.

Example 4.6

A stationary process has auto-PSD function given by the following equation. Find
the corresponding spectral distribution function, Ψ(ω).

$$S_X(\omega) = \begin{cases} S_0, & |\omega| < \omega_C \\ S_0/2, & |\omega| = \omega_C \\ 0, & |\omega| > \omega_C \end{cases}$$


It may be shown that, for |ω| ≤ ω_C,
$$\Psi(\omega) = \int_{-\infty}^{\omega} S_X(\varpi)\,d\varpi = \int_{-\omega_C}^{\omega} S_0\,d\varpi = S_0(\omega + \omega_C)$$

4.1.1.5 Properties of Auto-PSD Functions


The important properties of the auto-PSD function are as follows:

4.1.1.5.1  Symmetry

SX(–ω) = SX(ω) (4.23)

Proof:

Because both R_X(τ) and cos(ωτ) are even functions, in this case, we have
$$S_X(\omega) = 2\int_0^{\infty} R_X(\tau)\cos(\omega\tau)\,d\tau \qquad (4.24)$$

From Equation 4.24, it is easy to see that Equation 4.23 holds; furthermore,
$$R_X(\tau) = \frac{1}{\pi}\int_0^{\infty} S_X(\omega)\cos(\omega\tau)\,d\omega \qquad (4.25)$$

4.1.1.5.2  Real and Positive Values


Generally speaking, a Fourier transform of a function is complex valued. However,
the autopower spectrum function is real, and in addition, positive.

Proof:

Recall Equation 3.35b:
$$\sum_{j=1}^{n}\sum_{k=1}^{n}\alpha_j\alpha_k R_X(t_j - t_k) \ge 0 \qquad (4.26)$$

We thus have
$$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(s)\,g(t)\,R_X(t - s)\,ds\,dt \ge 0 \qquad (4.27)$$

By denoting
$$q(u) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(s)\,g(u+t)\,R_X(u+t-s)\,ds\,dt \ge 0 \qquad (4.28)$$

we can see that
$$Q(\omega) = \int_{-\infty}^{\infty} q(u)\,e^{-j\omega u}\,du = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(s)\,g(u+t)\,R_X(u+t-s)\,ds\,dt\;e^{-j\omega u}\,du = 2\pi\left|G(\omega)\right|^2 S_X(\omega) \ge 0 \qquad (4.29)$$

4.1.1.5.3  Mean-Square Value


A useful expression of mean-square value in terms of integration of the auto-PSD
function can be seen as follows:

$$R_X(0) = E\left[X^2(t)\right] = \sigma_X^2 = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_X(\omega)\,e^{0}\,d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_X(\omega)\,d\omega \qquad (4.30a)$$

This can be further simplified as


$$\frac{1}{2\pi}\int_{-\infty}^{\infty} S_X(\omega)\,d\omega = \sigma_X^2 \qquad (4.30b)$$

Evaluating this with Equation 4.22, we have
$$\sigma_X^2 = \frac{1}{2\pi}\Psi(\infty) \qquad (4.31)$$

Example 4.7

Check if the following functions are auto-PSD functions. If the function is an auto-
PSD function, find the corresponding autocorrelation function and mean-square
value.

1. $$S_1(\omega) = \frac{\omega^2 + 9}{(\omega^2 + 4)(\omega + 1)^2}$$

An auto-PSD function must be an even function. Here,
$$S_1(-\omega) = \frac{\omega^2 + 9}{(\omega^2 + 4)(-\omega + 1)^2} \ne S_1(\omega)$$
so S₁(ω) is not an auto-PSD function.

2. $$S_2(\omega) = \frac{\omega^2 + 4}{\omega^4 - 10\omega^2 + 3}$$
An auto-PSD function must be greater than zero. Here, S₂(1) = −0.8333 < 0; therefore, S₂(ω) is not an auto-PSD function.

3. $$S_3(\omega) = \frac{e^{-j\omega^2}}{\omega^2 + 6}$$
An auto-PSD function must be real valued. S₃(ω) is complex valued; therefore, S₃(ω) is not an auto-PSD function.

4. $$S_4(\omega) = \frac{\omega^2 + 1}{\omega^4 + 5\omega^2 + 6}$$
It is seen that S₄(ω) is an auto-PSD function:
$$S_4(\omega) \equiv S_X(\omega) = \frac{\omega^2 + 1}{\omega^4 + 5\omega^2 + 6} = \frac{-1}{\omega^2 + 2} + \frac{2}{\omega^2 + 3} = -\frac{1}{2\sqrt{2}}\cdot\frac{2\sqrt{2}}{\omega^2 + (\sqrt{2})^2} + \frac{1}{\sqrt{3}}\cdot\frac{2\sqrt{3}}{\omega^2 + (\sqrt{3})^2}$$

The autocorrelation function is
$$R_X(\tau) = \mathcal{F}^{-1}[S_X(\omega)] = \mathcal{F}^{-1}\left[-\frac{1}{2\sqrt{2}}\cdot\frac{2\sqrt{2}}{\omega^2 + (\sqrt{2})^2}\right] + \mathcal{F}^{-1}\left[\frac{1}{\sqrt{3}}\cdot\frac{2\sqrt{3}}{\omega^2 + (\sqrt{3})^2}\right] = -\frac{1}{2\sqrt{2}}e^{-\sqrt{2}|\tau|} + \frac{1}{\sqrt{3}}e^{-\sqrt{3}|\tau|}$$

The mean square is given by
$$E\left[X^2(t)\right] = R_X(0) = -\frac{1}{2\sqrt{2}} + \frac{1}{\sqrt{3}} \approx 0.22$$

4.1.2 Relationship with Fourier Transform


4.1.2.1 Fourier Transform of a Random Process
The Fourier transform is an important tool in engineering applications for analyzing time-varying processes in the frequency domain. One reasonable question is, can a random process have a Fourier transform as given by the following equations?
$$X(\omega) \overset{?}{=} \int_{-\infty}^{\infty} X(t)\,e^{-j\omega t}\,dt \quad\text{and}\quad X(t) \overset{?}{=} \frac{1}{2\pi}\int_{-\infty}^{\infty} X(\omega)\,e^{j\omega t}\,d\omega$$

Generally speaking, the answer is negative. The primary reason is that the absolute-integrability requirement for the Fourier transform to exist, namely,
$$\int_{-\infty}^{\infty} |X(t)|\,dt < \infty \qquad (4.32)$$
is often not satisfied by random processes. To deal with this problem, we introduce the power spectrum instead. In the following, let us discuss this issue in detail.

4.1.2.2 Energy Equation
First, consider the amount of energy contained in a dynamic process X(t).

4.1.2.2.1  Parseval Equation


The following Parseval equation implies energy conservation:
$$\int_{-\infty}^{\infty} X^2(t)\,dt = \frac{1}{2\pi}\int_{-\infty}^{\infty}\left|X(\omega)\right|^2 d\omega \qquad (4.33)$$

The left-hand side is the total energy in (−∞, ∞), which is the time domain.
Remember that X(t) is a random process in the time domain.
The Parseval equation is important in signal analyses. Nevertheless, for random pro-
cesses, there may be two problems. The primary issue is that in the domain (−∞, ∞), the
energy can become infinite so that the energy integration does not exist.

4.1.2.2.2  Average Power


Because Equation 4.32 is not satisfied by many engineering temporal functions, which carry an infinite amount of energy in (0, T) as T→∞, the concept of average power is used instead of energy:
$$\lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} X^2(t)\,dt$$

This is why the PSD function is used (this approach is described in the following section). The power spectrum is used because when "defining" the Fourier transform of a random process X(t) as
$$X(\omega) = \int_{-\infty}^{\infty} X(t)\,e^{-j\omega t}\,dt \qquad (4.34)$$

X(t) continues forever, and Equation 4.18 will not be satisfied, thus the spectrum
X(ω), as described in Equation 4.34, does not exist. As a result, we need an alterna-
tive approach to consider the average power spectrum.

4.1.2.2.3  Finite Fourier Transform and Corresponding Complex-Valued Process
We try to solve the above-mentioned problem as follows:
First, define a special function XT (t) as

$$X_T(t) = \begin{cases} X(t), & |t| \le T \\ 0, & |t| > T \end{cases} \qquad (4.35)$$

As seen above, the Fourier transform exists as
$$X(\omega, T) = \int_{-\infty}^{\infty} X_T(t)\,e^{-j\omega t}\,dt = \int_0^{T} X(t)\,e^{-j\omega t}\,dt \qquad (4.36)$$

Let us denote a function Y(ω) to represent a case with a limited time duration T,

Y(ω) = X(ω,Τ) (4.37)

By focusing on Y(ω), we can calculate the mean and variance as follows.

4.1.2.2.3.1   Mean  Because the underlying process is zero-mean,
$$\mu_Y(\omega) = 0 \qquad (4.38)$$

4.1.2.2.3.2   Variance  Determining the variance, we have
$$\sigma_Y^2(\omega) = E\left[\left|Y(\omega)\right|^2\right] = E\left[X(\omega,T)\,X^*(\omega,T)\right] = E\left[\int_0^T X(t)\,e^{-j\omega t}\,dt\int_0^T X(s)\,e^{j\omega s}\,ds\right] = E\left[\int_0^T\!\!\int_0^T X(t)\,X(s)\,e^{-j\omega(t-s)}\,ds\,dt\right] \qquad (4.39)$$

To further evaluate σY2 (ω ), change the variables in Equation 4.39 by letting (see
Figure 4.4)

τ = t − s (4.40)

Note that the autocorrelation function R X(τ) is even. Exchanging the order of
mathematical expectation and integration, we have

[Figure 4.4  Integration domains of the variance function. (a) Original coordinates (t, s). (b) Transformed coordinates (t, τ), with boundaries τ = t and τ = t − T. (c) In (t, τ), integrating first on t over the domains D_l and D_u.]

$$\sigma_Y^2(\omega) = E\left[\int_0^T\!\int_{t-T}^{t} X(t)\,X(t-\tau)\,e^{-j\omega\tau}\,d\tau\,dt\right] = \int_0^T\!\int_{t-T}^{t} E[X(t)X(t-\tau)]\,e^{-j\omega\tau}\,d\tau\,dt \overset{R_X(-\tau)=R_X(\tau)}{=} \int_0^T\!\int_{t-T}^{t} R_X(\tau)\,e^{-j\omega\tau}\,d\tau\,dt \qquad (4.41)$$

In Equation 4.41, it is seen that when s = 0, τ = t, and when s = T, τ = t − T. Additionally, because ds = −dτ, the limits of the inner integral must be exchanged, namely, from t − T to t (see Figure 4.4b).
Evaluating the integral on τ first in Equation 4.41 is difficult because R_X(τ) is a generic term. However, the integrand is independent of time t, yet the integration limits of τ depend on t. Thus, we change the order of the double integration, integrating first on t and then on τ. In Figure 4.4c, for the integration domain D_l, τ ∈ (−T, 0), the interval of integration of t is 0 to τ + T; and for the domain D_u, τ ∈ (0, T), the interval is τ to T. Explicitly, we can write

$$\sigma_Y^2(\omega) = \underbrace{\int_{-T}^{0} R_X(\tau)\,e^{-j\omega\tau}\left[\int_0^{\tau+T}\!dt\right]d\tau}_{D_l} + \underbrace{\int_{0}^{T} R_X(\tau)\,e^{-j\omega\tau}\left[\int_{\tau}^{T}\!dt\right]d\tau}_{D_u}$$
$$= \int_{-T}^{0} R_X(\tau)\,e^{-j\omega\tau}(\tau+T)\,d\tau + \int_{0}^{T} R_X(\tau)\,e^{-j\omega\tau}(T-\tau)\,d\tau = \int_{-T}^{T} R_X(\tau)\,e^{-j\omega\tau}\left(T - |\tau|\right)d\tau \qquad (4.42)$$

The above expression parallels the Wiener–Khinchine relation that defines the auto-power spectral density function; it shows that the variance of the function Y(ω) is directly related to the autocorrelation function of X(t). In the following, this connection is examined in more detail.

4.1.2.3 Power Density Functions


To obtain the Fourier transform mathematically, because the integration interval is (−∞, ∞), we need to further let T→∞ in Equation 4.42. This attempt, however, is not always possible because, in Equation 4.37, the process {X(t), t > T} is not defined beyond time T. Because X(t) is random, when t > T, we may have unexpected circumstances. In the following, however, we limit the topic to those processes for which T→∞ is possible; in Section 4.4, we provide a more rigorous expression of the required conditions. Now, in Equation 4.42, when T→∞,
$$\sigma_Y^2(\omega)\Big|_{T\to\infty} \to \infty$$

To keep the limit finite, we must divide both sides of Equation 4.42 by T, resulting in
$$\frac{1}{T}\sigma_Y^2(\omega) = \int_{-T}^{T} R_X(\tau)\,e^{-j\omega\tau}\left(1 - \frac{|\tau|}{T}\right)d\tau \qquad (4.43)$$

The operation of dividing by T indicates that the resulting term is power, instead
of energy. That is why, in the literature, this function is often called the “power”
spectral density function.
Note that τ is the time difference between s and t, both of which are elements of (0, T). Thus, when T→∞, it results in
$$\left.\left(1 - \frac{|\tau|}{T}\right)\right|_{T\to\infty} = 1 \qquad (4.44)$$

Therefore,
$$\lim_{T\to\infty}\left[\frac{1}{T}\sigma_Y^2(\omega)\right] = \lim_{T\to\infty}\left[\int_{-T}^{T} R_X(\tau)\,e^{-j\omega\tau}\,d\tau\right] = \int_{-\infty}^{\infty} R_X(\tau)\,e^{-j\omega\tau}\,d\tau = S_X(\omega) \qquad (4.45)$$

Note that Equation 4.45 is derived from the equation σ_Y²(ω) = E[X(ω,T)X*(ω,T)]. Thus, we have a very useful formula to obtain the auto-PSD:
$$S_X(\omega) = \lim_{T\to\infty}\frac{1}{T}E\left[\left|X(\omega,T)\right|^2\right] \qquad (4.46)$$

Practically, this can be written as the following average, where the kth Fourier transform, denoted by X_k(ω,T), is taken from a sample realization of X_T(t):
$$\hat{S}_X(\omega, T, n) = \frac{1}{T}\,\frac{1}{n}\sum_{k=1}^{n}\left|X_k(\omega,T)\right|^2 \qquad (4.47)$$
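A minimal sketch of Equation 4.47 in discrete form is given below. The record sizes, sampling interval, and white-noise test signal are illustrative assumptions; the factor Δt enters because the FFT sum approximates the finite Fourier integral of Equation 4.36.

```python
# Averaged-periodogram estimate of the auto-PSD (a sketch of Eq. 4.47).
import numpy as np

def psd_estimate(records, dt):
    """Average |X_k(omega, T)|^2 / T over n sample records (Eq. 4.47)."""
    n_rec, n_pts = records.shape
    T = n_pts * dt
    X = np.fft.rfft(records, axis=1) * dt         # X_k(omega, T), approximately
    S_hat = np.mean(np.abs(X)**2, axis=0) / T     # (1/n) sum |X_k|^2 / T
    f = np.fft.rfftfreq(n_pts, dt)                # frequency axis in Hz
    return f, S_hat

rng = np.random.default_rng(0)
dt, n_pts, n_rec = 0.01, 1024, 200                # illustrative sizes
records = rng.standard_normal((n_rec, n_pts))     # white noise, sigma^2 = 1
f, S_hat = psd_estimate(records, dt)
print(S_hat[1:6])   # roughly flat, near sigma^2 * dt = 0.01 (two-sided, in Hz)
```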

4.1.2.3.1  Parseval Equation, Further Discussion (Marc-Antoine Parseval, 1755–1836)
With the help of the specially defined function X_T(t) in Equation 4.35, the Parseval equation can now be written as
$$\int_{-\infty}^{\infty} X_T^2(t)\,dt = \int_{-T}^{T} X^2(t)\,dt = \frac{1}{2\pi}\int_{-\infty}^{\infty}\left|Y(\omega)\right|^2 d\omega \qquad (4.48)$$

4.1.2.3.1.1   Average Power Spectrum  To obtain the power function, we first divide both sides of Equation 4.48 by 2T and then take the limit:
$$\lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} X^2(t)\,dt = \frac{1}{2\pi}\int_{-\infty}^{\infty}\lim_{T\to\infty}\frac{1}{2T}\left|Y(\omega)\right|^2 d\omega \qquad (4.49)$$

Additionally, by taking the expected value, it results in
$$\lim_{T\to\infty}E\left[\frac{1}{2T}\int_{-T}^{T} X^2(t)\,dt\right] = \frac{1}{2\pi}\int_{-\infty}^{\infty}\lim_{T\to\infty}E\left[\frac{1}{2T}\left|Y(\omega)\right|^2\right]d\omega \qquad (4.50)$$

Because the process is stationary, we are able to write
$$\lim_{T\to\infty}E\left[\frac{1}{2T}\int_{-T}^{T} X^2(t)\,dt\right] = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} E\left[X^2(t)\right]dt = R_X(0) = E\left[X^2(t)\right] \qquad (4.51)$$

Thus, the average power is the value of autocorrelation function at τ = 0.


Additionally, from Equation 4.50, the following can be concluded.

4.1.2.3.1.2   Mean Square (Further Discussion)
$$R_X(0) = \lim_{T\to\infty}\int_{-\infty}^{\infty}\frac{1}{4\pi T}E\left[\left|X(\omega,T)\right|^2\right]d\omega \qquad (4.52)$$

From Equation 4.50, the integrand on the right-hand side of Equation 4.52 tends to S_X(ω)/(2π); therefore,
$$R_X(0) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_X(\omega)\,d\omega \qquad (4.53)$$

More detailed issues on practical applications will be discussed in Chapter 9.



4.1.3 White Noise and Band-Pass Filtered Spectra


Having introduced the general definition of SX(ω), in this section, additional specific
PSD functions will be introduced.

4.1.3.1 White Noise
First, consider the white noise process as shown in Figure 4.5.
The auto-PSD function is

SX(ω) = S 0 (4.54)

Thus, the autocorrelation function is given by

R X(τ) = S 0 δ(τ) (4.55)

Example 4.8

A white noise sequence {X_n, n = 0, ±1, ±2, …} has the autocorrelation function given by
$$R_X(n) = \begin{cases}\sigma^2, & n = 0 \\ 0, & n \ne 0\end{cases}$$
Find the auto-power spectral density function.

[Figure 4.5  White noise: a sample time history X(t); the flat auto-PSD S_X(ω) = S_0; and the autocorrelation R_X(τ), concentrated at τ = 0.]



Because X_n is a white noise sequence, we have E(X_n) = 0 and D(X_n) = σ² < ∞. Therefore,
$$S_X(\omega) = \sum_{n=-\infty}^{\infty} R_X(n)\,e^{-jn\omega} = R_X(0)\,e^{0} = \sigma^2$$

In the above example, the autocorrelation function R_X(n) of the discrete-time sequence X(n) can be understood with the help of Equation 4.55: only at τ = 0 (that is, n = 0) is δ(τ) ≠ 0, so the variance is nonzero only there. Generally speaking, Equation 4.55 implies that a white noise X(t), also referred to as a white process, possesses the following properties:

1. Zero mean

E[X(t)] = 0 (4.56a)

2. Orthogonality of X(t1) and X(t2) at different time points, that is,

E[X(t1) X(t2)] = 0, t1 ≠ t2 (4.56b)

Because the mean is zero, X(t₁) and X(t₂) must be uncorrelated.


Note that white noise is not necessarily Gaussian, as long as it has the auto-
PSD as shown in Equation 4.54. However, if it is Gaussian, then X(t1) and X(t2) must
also be independent.
As seen from Equation 4.55, the average power of white noise is infinity;
therefore, it cannot be realized in realistic practical observations. However, in
engineering measurements, we can observe the output of measuring equipment
with a white noise input because the output is a filtered (results of convolution)
signal. In the following, we will discuss mathematical models of these filtered
signals.
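The delta-like autocorrelation of Equation 4.55 can be observed directly on a simulated record. In the sketch below (sample size and seed are arbitrary assumptions), the sample autocorrelation of a discrete white sequence is essentially zero at every nonzero lag, as in Example 4.8:

```python
# Sample autocorrelation of a discrete white noise sequence (cf. Eqs. 4.54-4.56).
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
x = rng.standard_normal(n)        # zero-mean white sequence with sigma^2 = 1

R = [np.mean(x[:n - k] * x[k:]) for k in range(5)]   # biased estimate of R_X(k)
print(np.round(R, 3))             # approx [1, 0, 0, 0, 0]
```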

Example 4.9

Consider the spectral distribution function Ψ(ω) of the white noise process. Recall Equation 4.22. We have
$$\Psi(\omega) = \int_{-\infty}^{\omega} S_X(\varpi)\,d\varpi = \int_{-\infty}^{\omega} S_0\,d\varpi = S_0\varpi\Big|_{-\infty}^{\omega} = S_0(\omega + \infty) = \infty$$

It is shown that, for white noise, the spectral distribution function does not
exist.

[Figure 4.6  Low-pass noise. (a) Random process; abscissa: time. (b) Auto-PSD; abscissa: frequency. (c) Autocorrelation; abscissa: time lag τ. (d) Spectral distribution function; abscissa: frequency.]

4.1.3.2 Low-Pass Noise
When the white noise is low-pass filtered (as seen in Figure 4.6), we have the auto-
PSD function given by

$$S_X(\omega) = \begin{cases} S_0, & |\omega| < \omega_C \\ S_0/2, & |\omega| = \omega_C \\ 0, & |\omega| > \omega_C \end{cases} \qquad (4.57)$$

Additionally, the autocorrelation function is
$$R_X(\tau) = \sigma_X^2\,\frac{\sin(\omega_C\tau)}{\omega_C\tau} \qquad (4.58)$$

Specifically, the variance is
$$\sigma_X^2 = \frac{\omega_C S_0}{\pi} \qquad (4.59)$$

Recalling Equation 4.22 and Example 4.6, the spectral distribution function is, for |ω| ≤ ω_C,
$$\Psi(\omega) = S_0(\omega + \omega_C)$$

4.1.3.3 Band-Pass Noise
When the white noise is band-pass filtered (as seen in Figure 4.7), we have the auto-PSD function given by
$$S_X(\omega) = \begin{cases} S_0, & \omega_L < |\omega| < \omega_U \\ S_0/2, & |\omega| = \omega_L, \omega_U \\ 0, & \text{elsewhere} \end{cases} \qquad (4.60)$$

The autocorrelation function is
$$R_X(\tau) = \sigma_X^2\,\frac{\sin(\Delta\omega\tau/2)}{\Delta\omega\tau/2}\,\cos\omega_0\tau \qquad (4.61)$$
In this case:
$$\Delta\omega = \omega_U - \omega_L \qquad (4.62)$$
$$\sigma_X^2 = \frac{\Delta\omega\,S_0}{\pi} \qquad (4.63)$$
$$\omega_0 = (\omega_U + \omega_L)/2 \qquad (4.64)$$

[Figure 4.7  Band-pass noise: a sample time history X(t); the auto-PSD S_X(ω), flat between ω_L and ω_U; and the autocorrelation R_X(τ).]



[Figure 4.8  Narrow-band noise: a sample time history X(t); the auto-PSD S_X(ω), concentrated at ±ω_0; and the cosine-shaped autocorrelation R_X(τ).]

4.1.3.4 Narrow-Band Noise
When the white noise is narrow-band filtered (as shown in Figure 4.8), we can obtain the following auto-PSD function:
$$S_X(\omega) = \pi\sigma_X^2\left[\delta(\omega + \omega_0) + \delta(\omega - \omega_0)\right] \qquad (4.65)$$


Additionally, the autocorrelation function is
$$R_X(\tau) = \sigma_X^2\cos(\omega_0\tau) \qquad (4.66)$$
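Equation 4.61, together with the variance of Equation 4.63, can be verified by numerically inverse-transforming the ideal band-pass PSD. The sketch below assumes illustrative values S₀ = 1, ω_L = 2, and ω_U = 4 rad/s:

```python
# Numerical check of the band-pass autocorrelation (Eqs. 4.60 through 4.64).
import numpy as np

S0, wL, wU = 1.0, 2.0, 4.0                 # assumed band-pass parameters
dW = wU - wL                               # Delta omega (Eq. 4.62)
w0 = 0.5 * (wU + wL)                       # center frequency (Eq. 4.64)
var = dW * S0 / np.pi                      # sigma_X^2 (Eq. 4.63)

w = np.linspace(-50.0, 50.0, 400_001)
S = np.where((np.abs(w) > wL) & (np.abs(w) < wU), S0, 0.0)

for tau in (0.0, 0.5, 1.0):
    R_num = np.trapz(S * np.exp(1j * w * tau), w).real / (2.0 * np.pi)  # Eq. 4.16
    # np.sinc(x) = sin(pi x)/(pi x), hence the argument rescaling below
    R_cf = var * np.sinc(dW * tau / (2.0 * np.pi)) * np.cos(w0 * tau)   # Eq. 4.61
    print(f"tau = {tau:.1f}: numeric = {R_num:.6f}, closed form = {R_cf:.6f}")
```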

4.2 Spectral Analysis
In the second section of this chapter, the auto-PSD function and cross-PSD function,
which are also related to the Wiener–Khinchine formula, are discussed. The focus is
on the spectral analysis of vibration systems.

4.2.1 Definition
4.2.1.1 Cross-Power Spectral Density Function
4.2.1.1.1  Wiener–Khinchine Relation
Defining the cross-power spectral density function S_XY(ω) through the Wiener–Khinchine relations:
$$S_{XY}(\omega) = \int_{-\infty}^{\infty} R_{XY}(\tau)\,e^{-j\omega\tau}\,d\tau \qquad (4.67)$$

Similarly,

$$S_{YX}(\omega) = \int_{-\infty}^{\infty} R_{YX}(\tau)\,e^{-j\omega\tau}\,d\tau \qquad (4.68)$$

Additionally,


$$R_{XY}(\tau) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_{XY}(\omega)\,e^{j\omega\tau}\,d\omega \qquad (4.69)$$
$$R_{YX}(\tau) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_{YX}(\omega)\,e^{j\omega\tau}\,d\omega \qquad (4.70)$$

4.2.1.1.2  Properties of Cross-Power Spectral Density Function


Having defined the cross-PSD functions, select properties can now be examined.

4.2.1.1.3  Symmetry
4.2.1.1.3.1   Skew Symmetry  Unlike the auto-PSD functions, the cross-PSD func-
tions are not symmetric. However, cross spectral density functions do have the fol-
lowing relationship referred to as skew symmetry:

SXY (ω) = SYX(−ω) (4.71)

and

SYX(ω) = SXY (−ω) (4.72)

4.2.1.1.3.2   Hermitian Symmetry  Cross-PSD functions exhibit Hermitian symmetry, which is given by
$$S_{XY}(-\omega) = S_{XY}^*(\omega) \qquad (4.73)$$
In this instance, S*_XY(ω) is the complex conjugate of S_XY(ω), and so on. Similarly,
$$S_{YX}(-\omega) = S_{YX}^*(\omega) \qquad (4.74)$$

4.2.1.1.3.3   Real and Imaginary Portions  Furthermore, the real parts are even functions of frequency:
$$\text{Re}[S_{XY}(\omega)] = \text{Re}[S_{XY}(-\omega)] \qquad (4.75)$$
$$\text{Re}[S_{YX}(\omega)] = \text{Re}[S_{YX}(-\omega)] \qquad (4.76)$$
whereas the imaginary parts are odd:
$$\text{Im}[S_{XY}(\omega)] = -\text{Im}[S_{XY}(-\omega)] \qquad (4.77)$$
$$\text{Im}[S_{YX}(\omega)] = -\text{Im}[S_{YX}(-\omega)] \qquad (4.78)$$

4.2.1.1.3.4   Bounds  The cross-PSD function has bounds described by
$$\left|S_{XY}(\omega)\right|^2 \le S_X(\omega)\,S_Y(\omega) \qquad (4.79)$$

4.2.1.2 Estimation of Cross-PSD Function


Similar to Equation 4.47, a practical approach to estimating the cross-PSD functions is needed; it can be proven that
$$S_{XY}(\omega) = \lim_{T\to\infty}\frac{1}{T}E\left[X^*(\omega,T)\,Y(\omega,T)\right] \qquad (4.80)$$
and
$$S_{YX}(\omega) = \lim_{T\to\infty}\frac{1}{T}E\left[Y^*(\omega,T)\,X(\omega,T)\right] \qquad (4.81)$$

Similar to Equation 4.47, when the Fourier transforms of X(t) and Y(t) are obtained from the kth measurement, namely, X_k(ω,T) and Y_k(ω,T), the estimated cross-PSD functions can be written as
$$\hat{S}_{XY}(\omega, T, n) = \frac{1}{T}\,\frac{1}{n}\sum_{k=1}^{n}\left[X_k^*(\omega,T)\,Y_k(\omega,T)\right] \qquad (4.82)$$
and
$$\hat{S}_{YX}(\omega, T, n) = \frac{1}{T}\,\frac{1}{n}\sum_{k=1}^{n}\left[Y_k^*(\omega,T)\,X_k(\omega,T)\right] \qquad (4.83)$$

Equations 4.47, 4.82, and 4.83 enable us to obtain the cross-PSD functions practi-
cally, which will be discussed further in Chapters 7 and 9.

4.2.2 Transfer Function
The transfer function is an important concept in linear systems, given that it com-
pletely describes the dynamic behavior of the system. In this section, two basic issues
are considered. The first is, given the transfer function and input random excitations,
to find the statistical properties of the random output. The second is, by measuring
both the random input and output, to find the transfer function of a linear system.

4.2.2.1 Random Process through Linear Systems


Let us consider the first question, the nature of the random output. Generally,
Figure 4.9 shows the relationship of the input-system-output. From Figure 4.9, the output
can be seen as the result of the input being transferred through mapping T[.], that is,

Y(t) = T[X(t)] (4.84)

The mapping of T[(.)] can be analyzed in both the time and the frequency domain.
Let us consider the operation in the time domain first.

4.2.2.1.1  Linearity (Linear Superposition)


The transfer function exists only if the system is linear. Linearity can be explained
as follows:
Suppose there exist random processes X(t), Y(t), and Z(t), with constants a and b. In the event that the system is linear, then the following properties must also hold true:

1. Foldable (associative)

X(t) + Y(t) + Z(t) = [X(t) + Y(t)] + Z(t) = X(t) + [Y(t) + Z(t)] (4.85)

2. Interchangeable (commutative)

X(t) + Y(t) = Y(t) + X(t) (4.86)

3. Superposition

T[aX1(t) + bX2(t)] = a T[X1(t)] + b T[X2(t)] (4.87)

4.2.2.1.2  Input–Output Relationship in the Time Domain


First, consider the input–output relationship in the time domain. In this instance, the
output is seen as the result of convolution between the impulse response function

[Figure 4.9  System and input–output: input X(t) passes through the mapping T[.] to produce output Y(t).]



and the input forcing function. Thus, based on the convolution integral, the statistical
properties (mean values, etc.) of the output as well as correlation between input and
output can also be considered. In Section 4.3, the properties describing the frequency
domain will be further explored.

4.2.2.1.2.1   Linear Filtering and Convolution  When the unit impulse response function, h(t), is known, the random process X(t), being mean-square integrable, passing through the corresponding linear time-invariant system can be described by convolution:
$$Y(t) = \int_{-\infty}^{\infty} X(\tau)\,h(t-\tau)\,d\tau = X(t)*h(t) \qquad (4.88)$$
or
$$Y(t) = \int_{-\infty}^{\infty} X(t-\tau)\,h(\tau)\,d\tau = h(t)*X(t) \qquad (4.89)$$

Here, the symbol * denotes convolution. The system denoted by h(t) can be seen as a linear filter. Filtering is one of the central concepts that will be discussed further in Chapter 6. Along with filtering and convolution, the impulse response function will also be discussed there in more detail.

4.2.2.1.2.2   Mean  Consider the mean value of the output:
$$\mu_Y(t) = E[Y(t)] = E\left[\int_{-\infty}^{\infty} X(t-\tau)\,h(\tau)\,d\tau\right] \qquad (4.90)$$
If E[Y(t)] is integrable, then
$$\mu_Y(t) = \int_{-\infty}^{\infty} E[X(t-\tau)\,h(\tau)]\,d\tau = \int_{-\infty}^{\infty} h(\tau)\,E[X(t-\tau)]\,d\tau \qquad (4.91)$$
Because
$$E[X(t-\tau)] = \mu_X(t-\tau) \qquad (4.92)$$
we have
$$\mu_Y(t) = \int_{-\infty}^{\infty} h(\tau)\,\mu_X(t-\tau)\,d\tau = \mu_X(t)*h(t) \qquad (4.93)$$

4.2.2.1.2.3   Autocorrelation  Consider the autocorrelation functions R_X(t, u) and R_Y(t, u):
$$R_Y(t,u) = E[Y(t)Y(u)] = E\left[\int_{-\infty}^{\infty} X(t-\zeta)\,h(\zeta)\,d\zeta\int_{-\infty}^{\infty} X(u-\xi)\,h(\xi)\,d\xi\right] \qquad (4.94)$$
For a deterministic linear system, we are able to rewrite Equation 4.94 as
$$R_Y(t,u) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} E[X(t-\zeta)X(u-\xi)]\,h(\zeta)\,h(\xi)\,d\zeta\,d\xi = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} R_X(t-\zeta, u-\xi)\,h(\zeta)\,h(\xi)\,d\zeta\,d\xi \qquad (4.95)$$

For the stationary system,

R X(t − ζ, u − ξ) = R X(τ + ξ − ζ) (4.96)

Furthermore, substitution of Equation 4.96 into Equation 4.95 yields

RY (τ) = h(τ)* R X(τ)* h(−τ) = h(τ)*h(−τ)*R X(τ) (4.97)

Explicitly, the autocorrelation function of the output process Y(t) can be written as
the convolution of the three terms h(τ), h(−τ), and R X(τ).

4.2.2.1.2.4   Cross-Correlation  Consider the cross-correlation of the input and output processes X(t) and Y(t):
$$R_{XY}(t,u) = E[X(t)Y(u)] = E\left[X(t)\int_{-\infty}^{\infty} h(\xi)\,X(u-\xi)\,d\xi\right] = \int_{-\infty}^{\infty} h(\xi)\,E[X(t)X(u-\xi)]\,d\xi = \int_{-\infty}^{\infty} h(\xi)\,R_X(t, u-\xi)\,d\xi \qquad (4.98)$$

For the stationary process

R X(t, u − ξ) = R X(τ − ξ) (4.99)

Then

R XY (τ) = R X(τ) * h(τ) (4.100)

Because

R XY (τ) = RYX(−τ) (4.101)

Using −τ to replace τ, Equation 4.100 can be written as

RYX(τ) = R X(τ) * h(−τ) (4.102)

Here, we have used the fact that R_X(τ) is an even function.
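A classical consequence of Equation 4.100 is worth a quick numerical sketch: for a (discrete) white input, R_X(τ) is an impulse, so the input–output cross-correlation directly recovers the impulse response. The exponential h and all sizes below are illustrative assumptions:

```python
# For white input, R_XY = R_X * h = sigma^2 h (a sketch of Eq. 4.100).
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
x = rng.standard_normal(n)                 # white input, sigma^2 = 1
h = np.exp(-0.5 * np.arange(30))           # assumed impulse response h[k]
y = np.convolve(x, h)[:n]                  # discrete convolution (Eq. 4.89)

Rxy = [np.mean(x[:n - k] * y[k:]) for k in range(8)]   # E[X(t) Y(t + k)]
print(np.round(Rxy, 3))                    # approx h[0..7]
print(np.round(h[:8], 3))
```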



[Figure 4.10  System in the frequency domain: input X(ω) passes through H(ω) to produce output Y(ω).]

4.2.2.1.3  Input–Output Relationship in the Frequency Domain (Linear Time-Invariant)
Switching attention from the time domain to the frequency domain, the input–output relationship can be re-examined. This is illustrated in Figure 4.10. Recalling Equation 4.32, it is known that the following generally does not exist for a random input:
$$X(\omega) = \int_{-\infty}^{\infty} X(t)\,e^{-j\omega t}\,dt$$
With the aid of Equation 4.35, this can be redefined as
$$X(\omega) \equiv X(\omega, T) = \int_0^T X(t)\,e^{-j\omega t}\,dt \qquad (4.103)$$

In this instance, the symbol "≡" stands for "defined as." Similarly, for the output, we have
$$Y(\omega) \equiv Y(\omega, T) = \int_0^T Y(t)\,e^{-j\omega t}\,dt \qquad (4.104)$$

Taking the Fourier transform on both sides of Equation 4.88 and applying Borel's (convolution) theorem results in
$$Y(\omega) = H(\omega)\,X(\omega) \qquad (4.105)$$

where H(ω) is the Fourier transform of the unit impulse response function h(t) (in
Chapter 6, we will discuss h(t) in more detail):
$$H(\omega) = \int_0^T h(t)\,e^{-j\omega t}\,dt \qquad (4.106)$$

It will be shown later in this section that H(ω) is nothing more than the transfer
function.

4.2.2.1.3.1   Laplace and Fourier Transforms of Mean  Assuming zero initial conditions, take the Laplace transform on both sides of Equation 4.93. This yields
$$\mathcal{L}[\mu_Y(t)] = H(s)\,\mathcal{L}[\mu_X(t)] \qquad (4.107)$$



Here, L[(.)] denotes the Laplace transform of (.); otherwise written as
$$\mu_Y(t) = \mathcal{L}^{-1}\left\{H(s)\,\mathcal{L}[\mu_X(t)]\right\} \qquad (4.108)$$
For a stationary process, the mean is constant:
$$\mu_X(t) = \mu_X \qquad (4.109)$$
From Equation 4.108, it follows that
$$\mu_Y(t) = \mathcal{L}^{-1}\{H(s)\}\,\mu_X = \mu_Y = \text{constant} \qquad (4.110)$$

Through the Fourier transform, this can also be written as

μY = H(j0) μX (4.111)

Here, H(j0) is the gain of the filter at ω = 0, namely, the DC gain.

4.2.2.1.3.2   Auto-Power Spectral Density Function  Taking the Fourier trans-


form on both sides of Equation 4.97,

F [ RY (τ)] = SY (ω ) = F [(h(τ) * h(− τ))]F [ RX (τ)] = H (ω ) H *(ω ) S X (ω ) (4.112a)

where F [(.)] denotes the Fourier transform of (.).


Or, we can use the Laplace transform on both sides of Equation 4.97,

L[ RY (τ)] = SY (s) = L[(h(τ) * h(− τ))]L[ RX (τ)] = H (s) H *(s) S X (s) (4.112b)

Thus, the auto-PSD function of Y(t) can be obtained as

SY (ω) = |Η(ω)|2 SX(ω) (4.113a)

By using Laplace transform, we have

SY (s) = |Η(s)|2 SX(s) (4.113b)

Example 4.10

Suppose a linear system has an impulse response of

h(t) = 2 e−t, t > 0

A random process X(t) is applied from t = −∞, which has an autocorrelation of
$$R_X(\tau) = e^{-2|\tau|}$$


Find the autocorrelation function R_Y(τ).
The auto-spectral density function of X(t) is (refer to Equation 4.15, the Wiener–Khinchine theorem)
$$S_X(\omega) = \int_{-\infty}^{\infty} e^{-2|\tau|}\,e^{-j\omega\tau}\,d\tau = \frac{4}{\omega^2 + 4}$$

Furthermore,
$$H(\omega) = \int_0^{\infty} 2e^{-t}\,e^{-j\omega t}\,dt = \frac{2}{j\omega + 1}$$

Thus,
$$S_Y(\omega) = \left|\frac{2}{j\omega + 1}\right|^2\frac{4}{\omega^2 + 4} = \frac{16}{(\omega^2 + 1)(\omega^2 + 4)}$$
Finally, by partial fractions and the inverse transform,
$$R_Y(\tau) = \mathcal{F}^{-1}\left[\frac{16/3}{\omega^2 + 1} - \frac{16/3}{\omega^2 + 4}\right] = \frac{8}{3}e^{-|\tau|} - \frac{4}{3}e^{-2|\tau|}$$

4.2.2.1.3.3  Cross-Power Spectral Density Function  Taking the Fourier transform


on both sides of Equation 4.100

F [ RXY (τ)] = F [ RX (τ)]F [h(τ)]

So that

SXY (ω) = H(ω)SX(ω) (4.114a)

By using Laplace transform, we have

SXY (s) = H(s) SX(s) (4.114b)

Similarly, we apply Laplace transform on both sides of Equation 4.100, that is,

L[ RXY (τ)] = L[ RX (τ)]L[h(τ)]

With the PSD function and the transfer function known, the inverse Fourier and/or inverse Laplace transforms can thus be used to calculate the cross-correlation function.
Similarly, by again taking the Fourier transform on both sides of Equation 4.102,
we have

SYX(ω) = H* (ω) SX(ω) (4.115a)

By using Laplace transform, we have

SYX(s) = H* (s) SX(s) (4.115b)



Example 4.11

A linear system has the unit impulse response function h(t) = e^{−t}, t > 0. Its input is a random process applied from t = −∞ with autocorrelation function R_X(τ) = (1/2)e^{−2|τ|}. In the following, we describe the process of determining the cross-correlation function R_YX(τ).
Upon checking the autocorrelation function, which is only a function of the time lag τ, the input is a stationary process. Using the inverse Laplace transform, we have
$$R_{YX}(\tau) = \mathcal{L}^{-1}\left[H^*(s)\,S_X(s)\right] = \mathcal{L}^{-1}\left[\frac{1}{s+1}\cdot\frac{1}{2}\left(\frac{1}{s+2} + \frac{1}{-s+2}\right)\right] = \mathcal{L}^{-1}\left[\frac{2}{(s+1)(s+2)(-s+2)}\right]$$
$$= \mathcal{L}^{-1}\left[\frac{2/3}{s+1} + \frac{-1/2}{s+2} + \frac{1/6}{-s+2}\right] = \left(\frac{2}{3}e^{-\tau} - \frac{1}{2}e^{-2\tau}\right)u(\tau) + \frac{1}{6}e^{2\tau}\,u(-\tau)$$

where u(τ) is the Heaviside step function:
$$u(\tau) = \begin{cases}1, & \tau \ge 0 \\ 0, & \tau < 0\end{cases}$$

4.2.2.2 Estimation of Transfer Functions


Now, consider the second issue, the estimation of the transfer function by measuring
the input and output.

4.2.2.2.1  Definition
Dividing both sides of Equation 4.105 by X(ω) yields
$$H(\omega) = \frac{Y(\omega)}{X(\omega)} \qquad (4.116)$$

H(ω) is referred to as the transfer function.

4.2.2.2.2  Estimation
Stemming from Equation 4.114a,
$$H(\omega) = \frac{S_{XY}(\omega)}{S_X(\omega)} \qquad (4.117)$$

On the right-hand side of Equation 4.116, both the numerator and the denominator can be multiplied by X*(ω), such that
$$H(\omega) = \frac{Y(\omega)\,X^*(\omega)}{X(\omega)\,X^*(\omega)} \qquad (4.118)$$

Practically speaking, it is beneficial to let X_k(ω,T) and Y_k(ω,T) denote the kth sample pair. By taking the average over n total sample realizations, Equation 4.118 can be rewritten as
$$\hat{H}(\omega) = \frac{\dfrac{1}{2nT}\displaystyle\sum_{k=1}^{n} Y_k(\omega,T)\,X_k^*(\omega,T)}{\dfrac{1}{2nT}\displaystyle\sum_{k=1}^{n} X_k(\omega,T)\,X_k^*(\omega,T)} = \frac{\hat{S}_{XY}(\omega)}{\hat{S}_X(\omega)} \qquad (4.119)$$

In addition, on the right-hand side of Equation 4.116, both the numerator and the denominator can be multiplied by Y*(ω), such that
$$H(\omega) = \frac{Y(\omega)\,Y^*(\omega)}{X(\omega)\,Y^*(\omega)} \qquad (4.120)$$
Once more, let X_k(ω,T) and Y_k(ω,T) denote the kth sample pair, and take the average over n total sample realizations. This allows Equation 4.120 to be rewritten as

$$\hat{H}(\omega) = \frac{\dfrac{1}{2nT}\displaystyle\sum_{k=1}^{n} Y_k(\omega,T)\,Y_k^*(\omega,T)}{\dfrac{1}{2nT}\displaystyle\sum_{k=1}^{n} X_k(\omega,T)\,Y_k^*(\omega,T)} = \frac{\hat{S}_Y(\omega)}{\hat{S}_{YX}(\omega)} \qquad (4.121)$$

Both Equations 4.119 and 4.121 can be used to measure the transfer function in practical measurements. To distinguish the two cases, an estimate of the transfer function through Equation 4.119 will be denoted as H₁(ω), and one through Equation 4.121 as H₂(ω). Formally, the functions H₁(ω) and H₂(ω) are defined as follows:

$$H_1(\omega) = \frac{S_{XY}(\omega)}{S_X(\omega)} \qquad (4.122)$$
$$H_2(\omega) = \frac{S_Y(\omega)}{S_{YX}(\omega)} \qquad (4.123)$$
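The H₁ and H₂ estimates translate directly into standard spectral routines. The sketch below uses SciPy's welch/csd helpers on an assumed second-order low-pass system with assumed output noise to illustrate Equations 4.122 and 4.123:

```python
# H1 and H2 transfer function estimates (a sketch of Eqs. 4.122 and 4.123).
import numpy as np
from scipy import signal

rng = np.random.default_rng(3)
fs, n = 100.0, 200_000
x = rng.standard_normal(n)                          # random input
b, a = signal.butter(2, 10.0, fs=fs)                # assumed linear system
y = signal.lfilter(b, a, x) + 0.05 * rng.standard_normal(n)   # output + noise

f, Sxy = signal.csd(x, y, fs=fs, nperseg=1024)      # cross-PSD S_XY
_, Sx = signal.welch(x, fs=fs, nperseg=1024)        # auto-PSD S_X
_, Sy = signal.welch(y, fs=fs, nperseg=1024)        # auto-PSD S_Y
_, Syx = signal.csd(y, x, fs=fs, nperseg=1024)      # cross-PSD S_YX

H1 = Sxy / Sx            # Eq. 4.122
H2 = Sy / Syx            # Eq. 4.123
print(np.abs(H1[:4]), np.abs(H2[:4]))   # |H1| <= |H2|, as in Eq. 4.134
```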

4.2.2.2.3  Amplitude and Phase of Transfer Functions


In general, a transfer function is a complex value, which can be expressed as follows

H(ω) = Re[H(ω)] + j Im[H(ω)] (4.124)



A transfer function can also be stated by its amplitude and phase angle as
$$H(\omega) = |H(\omega)|\,e^{j\theta} \qquad (4.125)$$
Here, the amplitude is
$$|H(\omega)| = \left\{\text{Re}^2[H(\omega)] + \text{Im}^2[H(\omega)]\right\}^{1/2} \qquad (4.126)$$
and the phase angle is
$$\theta = \angle H(\omega) = \tan^{-1}\frac{\text{Im}[H(\omega)]}{\text{Re}[H(\omega)]} \qquad (4.127)$$

The phase angle, θ, is the phase difference between the input X(t) and the output Y(t), which can be directly obtained through Equation 4.114. Referring to Equation 4.123, the expression of H₂(ω), S_Y(ω) is recognized as being real valued, and thus does not contain information on the phase; the phase information is instead contained in the cross-PSD function S_YX(ω).

4.2.2.3 Stationary Input
Based on the above discussion, we now consider whether an output is stationary
when the input is stationary. In most cases of engineering applications, we do not
distinguish whether the input is strictly stationary or weakly stationary. However, in
this situation, they will be different.

4.2.2.3.1  Strictly Stationary Input


When a strictly stationary random process is input into a linear time-invariant sys-
tem, we have shown that the mean of output is a constant. Namely, the mean is
time-invariant. In addition, the autocorrelation function is only a function of time
difference τ = t2 − t1.
These two facts imply that when a strictly stationary random process is input
into a linear time-invariant system, the output is also a strictly stationary random
process.

4.2.2.3.2  Weakly Stationary Input


When a weakly stationary random process is input into a linear time-invariant sys-
tem, however, it is not guaranteed to have time-invariant mean. Additionally, it is not
guaranteed to have an autocorrelation function to be the function of time difference
only. Therefore, the output is not necessarily stationary.

4.2.3 Coherence Analysis
In this section, the coherence function will first be defined. It will then be explained
how the coherence function is related to the aforementioned transfer functions and,
finally, the application of coherence analysis. The practical measurement of coher-
ence functions and applications will be discussed further in Chapter 7.

4.2.3.1 Coherence Function
4.2.3.1.1  Definition
The coherence function of two random processes is defined as
$$\gamma_{XY}^2(\omega) = \frac{\left|S_{XY}(\omega)\right|^2}{S_X(\omega)\,S_Y(\omega)} = \frac{S_{XY}(\omega)\,S_{XY}^*(\omega)}{S_X(\omega)\,S_Y(\omega)} \qquad (4.128)$$

4.2.3.1.2  Property of Coherence Function
$$0 \le \gamma_{XY}^2(\omega) \le 1 \qquad (4.129)$$

On the right-hand side of Equation 4.128, it is seen that
$$\frac{S_{XY}(\omega)}{S_X(\omega)} = H_1(\omega) \qquad (4.130)$$
and
$$\frac{S_{XY}^*(\omega)}{S_Y(\omega)} = \frac{S_{YX}(\omega)}{S_Y(\omega)} = \frac{1}{H_2(\omega)} \qquad (4.131)$$
Substituting Equations 4.130 and 4.131 into Equation 4.128,
$$\gamma_{XY}^2(\omega) = \frac{H_1(\omega)}{H_2(\omega)} \qquad (4.132)$$
Furthermore, from Equation 4.129,
$$\left|\frac{H_1(\omega)}{H_2(\omega)}\right| \le 1 \qquad (4.133)$$
Thus,
$$\left|H_1(\omega)\right| \le \left|H_2(\omega)\right| \qquad (4.134)$$

Because γ 2XY (ω ) is real valued

∠H1(ω) = ∠H2(ω) (4.135)

4.2.3.1.3  Application of Coherence Function
When a forcing function is applied to a system, whether deterministic or random, the input will always contain a certain amount of noise, which is random. In other words, in the real world, there will always be random input; consequently, the output will also always be random.
From the previous discussion on the measurement of transfer functions, the H₁ and H₂ estimates were defined. This implies that the measured transfer function will always be altered by unavoidable noise. In addition, in attempting to measure a transfer function, it must be assumed that the system is perfectly linear, which is not always true.
The presence of noise and nonlinearity will inevitably introduce errors. Thus, there is a need to judge whether the error in a measurement is practically significant. The coherence function, as the ratio of the H₁ and H₂ functions, can be used for this purpose. The higher the value of the coherence function, the better the measurement is. Generally speaking, the value of the coherence function should be greater than 60% for the measurement to be considered dependable. In certain applications, the requirement will be a value greater than 70% or possibly 80%.
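In practice, the coherence of Equation 4.128 is computed with the same averaged spectra. A minimal sketch follows; the system and noise level are illustrative assumptions:

```python
# Coherence check (a sketch of Eq. 4.128) using scipy.signal.coherence.
import numpy as np
from scipy import signal

rng = np.random.default_rng(4)
fs, n = 100.0, 100_000
x = rng.standard_normal(n)
b, a = signal.butter(2, 10.0, fs=fs)                # assumed linear system
y = signal.lfilter(b, a, x) + 0.2 * rng.standard_normal(n)

f, gamma2 = signal.coherence(x, y, fs=fs, nperseg=1024)
print(gamma2[:6])    # near 1 in the passband, lower where noise dominates
```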

4.2.3.2 Attenuation and Delay
The following examples show how the statistical properties of certain random processes can be analyzed in the frequency domain. The first case considered is that of attenuation/delay; the sum of two random processes is discussed afterward.
Recalling attenuation and delay, let X(t) be a random process and

Y(t) = aX(t − δ) (4.136)


The auto-PSD function is

S_Y(ω) = a²S_X(ω) (4.137)

The cross-PSD function between input and output is

S_XY(ω) = aS_X(ω)e^{−jωδ} (4.138)

The transfer functions are

H₁(ω) = S_XY(ω)/S_X(ω) = ae^{−jωδ} = H₂(ω) (4.139)

Finally, the coherence function is

γ²_XY(ω) = 1 (4.140)

Note that, in this case, we do not have any noises.

4.2.3.3 Sum of Two Random Processes


Now, consider two random processes, not necessarily independent, denoted by X(t)
and Y(t). Their sum is denoted by Z(t), thus,

Z(t) = X(t) + Y(t) (4.141)

The cross-PSD function between Z(t) and X(t) is

SZX(ω) = SX(ω) + SYX(ω) (4.142)



If X(t) and Y(t) are not correlated, the auto-PSD of Z(t) is

S_Z(ω) = S_X(ω) + S_Y(ω) (4.143)

In this case, finally, the coherence function is
$$\gamma_{ZX}^2(\omega) = \frac{S_X(\omega)}{S_X(\omega) + S_Y(\omega)} \qquad (4.144)$$

4.2.4 Derivatives of Stationary Process
In Chapter 3, the derivative of the autocorrelation function of a random process X(t) resulted in the cross-correlation function of X(t) and its derivative Ẋ(t), given by
$$\frac{d}{d\tau}R_X(\tau) = R_{X\dot{X}}(\tau) \qquad (4.145)$$
Here, if the random process X(t) is seen as displacement, then its derivatives Ẋ and Ẍ can be seen as velocity and acceleration, respectively.
The Fourier transform of the derivative of a temporal function g(t) is
$$\mathcal{F}\left[\frac{d}{dt}g(t)\right] = j\omega\,G(\omega) \qquad (4.146)$$
Thus,
$$S_{X\dot{X}}(\omega) = \mathcal{F}\{R_{X\dot{X}}(\tau)\} = \mathcal{F}\left[\frac{d}{d\tau}R_X(\tau)\right] = j\omega\,S_X(\omega) \qquad (4.147)$$
It is known that
$$S_{\dot{X}}(\omega) = \omega^2 S_X(\omega) \qquad (4.148)$$
We thus have
$$S_{X\ddot{X}}(\omega) = -\omega^2 S_X(\omega) \qquad (4.149)$$
and
$$S_{\ddot{X}}(\omega) = \omega^4 S_X(\omega) \qquad (4.150)$$

Now, the variance functions of the velocity and the acceleration can be written as
$$\sigma_{\dot{X}}^2 = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_{\dot{X}}(\omega)\,d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty}\omega^2 S_X(\omega)\,d\omega \qquad (4.151)$$
and
$$\sigma_{\ddot{X}}^2 = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_{\ddot{X}}(\omega)\,d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty}\omega^4 S_X(\omega)\,d\omega \qquad (4.152)$$
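Equations 4.151 and 4.152 reduce to spectral-moment integrals. A sketch by direct quadrature, reusing the ideal band-pass PSD of Equation 4.60 with assumed parameters, follows:

```python
# Velocity and acceleration variances as spectral moments (Eqs. 4.151, 4.152).
import numpy as np

S0, wL, wU = 1.0, 2.0, 4.0                         # assumed band-pass PSD
w = np.linspace(-wU, wU, 400_001)
S = np.where((np.abs(w) > wL) & (np.abs(w) < wU), S0, 0.0)

var_x     = np.trapz(S, w) / (2.0 * np.pi)         # sigma_X^2
var_xdot  = np.trapz(w**2 * S, w) / (2.0 * np.pi)  # Eq. 4.151
var_xddot = np.trapz(w**4 * S, w) / (2.0 * np.pi)  # Eq. 4.152
print(var_x, var_xdot, var_xddot)
```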

4.3 PRACTICAL ISSUES OF PSD FUNCTIONS
In this section, several practical issues regarding PSD functions are discussed.

4.3.1 One-Sided PSD
Initially, in real-world measurements, it is difficult to have negative frequency, in
particular, as the lower integral limit approaches −∞. For this reason, the one-sided
Fourier transform is used.

4.3.1.1 Angular Frequency versus Frequency


Frequency is more commonly expressed in hertz than in radians per second. As a
result, we have

ω = 2πf (4.153)

ω ~ (rad/s);  f ~ (Hz)

The period is

T = 1/f (4.154)

and

T ~ (s)

The relationship between the period and the angular frequency is

T = 2π/ω (4.155)

By drawing on the above notations, the auto-PSD function expressed in terms of frequency f (in hertz) can be written as

S_X(f) = 2π S_X(ω) (4.156)



[Figure 4.11  Double-sided and single-sided spectra: relative magnitudes S_X(ω) : 1, G_X(ω) : 2, S_X(f) : 2π, and W_X(f) : 4π, with peaks at ±ω_0 and ±f_0 = ω_0/2π.]

4.3.1.2 Two-Sided Spectrum versus Single-Sided Spectrum
Let us now compare the two-sided and the one-sided functions. If the angular frequency ω is used, then
$$G_X(\omega) = \begin{cases} 2S_X(\omega), & \omega \ge 0 \\ 0, & \omega < 0 \end{cases} \qquad (4.157)$$
If the frequency f is used, then alternatively we have the one-sided auto-PSD W_X(f) written as
$$W_X(f) = \begin{cases} 4\pi S_X(\omega), & \omega, f \ge 0 \\ 0, & \omega, f < 0 \end{cases} \qquad (4.158)$$
Figure 4.11 graphically describes the relationships shown in Equations 4.157 and 4.158.

4.3.1.3 Discrete Fourier Transform


In practice, the signal picked up through a data acquisition system is digitized with a given sampling interval, Δt, and a limited duration, T. The corresponding Fourier transform will thus become discrete. In this subsection, issues related to this discrete type of operation are discussed.

4.3.1.3.1  Scale of Frequency
Let us first consider the scale of the frequency domain. When a computer program is used to conduct the fast Fourier transform (FFT), an efficient algorithm to compute the discrete Fourier transform (DFT), a common question concerns the total scale of the frequency domain.
As seen in Figure 4.12,

T = n Δt (4.159)

where n is the total number of samples.



[Figure 4.12  Relationship between T and F: a record X(t) of duration T = nΔt, sampled at interval Δt, transforms to a spectrum X(f) with resolution Δf over the frequency range F = (n/2)Δf.]

The frequency resolution can then be written as
$$\Delta f = 1/T = \frac{1}{n\,\Delta t} \qquad (4.160)$$
Thus, the frequency limit F is
$$F = (n/2)\,\Delta f \qquad (4.161)$$

4.3.1.3.2  Scale of Amplitude
The second question of computational FFT is the scale of the amplitude. By denoting X(f) as the resulting DFT of the time history x(t), we have
$$\left|X(f)\right| = \frac{2}{n}\left|\text{FFT}(x(t))\right| \qquad (4.162)$$

In this case, FFT(.) means the fast Fourier transform of x(t).
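The scalings of Equations 4.159 through 4.162 are easy to confirm on a synthetic record; the unit-amplitude 5-Hz sinusoid below is an illustrative assumption:

```python
# Frequency and amplitude scaling of the DFT (Eqs. 4.159 through 4.162).
import numpy as np

dt, n = 0.001, 2000
t = np.arange(n) * dt
x = np.sin(2.0 * np.pi * 5.0 * t)         # amplitude 1 at f = 5 Hz

df = 1.0 / (n * dt)                       # Eq. 4.160: frequency resolution
F = (n / 2) * df                          # Eq. 4.161: frequency limit
X = (2.0 / n) * np.abs(np.fft.rfft(x))    # Eq. 4.162: amplitude scaling

i = int(round(5.0 / df))
print(df, F, X[i])                        # 0.5 Hz, 500.0 Hz, amplitude ~ 1.0
```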

4.3.2 Signal-to-Noise Ratios
Signal-to-noise ratio (S/N) is one of the fundamental concepts in dealing with signals. Nevertheless, it is difficult to measure the exact value of an S/N because it is almost impossible to measure a random process of noise.

4.3.2.1 Definition
Let us consider the definition of S/N as follows.

4.3.2.1.1  Deterministic
In deterministic cases, the signal-to-noise ratio, r, is equal to
$$r = \frac{p_s}{p_n} \qquad (4.163)$$
where p_s and p_n are the powers of the signal and noise, respectively.

4.3.2.1.2  Random Process
In random cases, the ratio will depend on the time delay τ, namely,
$$r(\tau) = \frac{R_S(\tau)}{R_N(\tau)} \qquad (4.164)$$
where R_S and R_N are the autocorrelation functions of the signal and noise, respectively.

4.3.2.1.3  Random Process in the Frequency Domain
The S/N can also be described in the frequency domain. That is,
$$r(\omega) = \frac{S_X(\omega)}{S_N(\omega)} \qquad (4.165)$$

If the noise is close to white noise, then

SN(ω) ≈ S 0 = const. (4.166)

In this case, the signal-to-noise ratio is proportional to SX(ω)

r(ω) ∝ SX(ω) (4.167)

In most instances, the high-S/N region is very close to the region between the so-called "half-power points" (Figure 4.13), which will be discussed in Chapter 6.

[Figure 4.13  S/N ratios and PSD: regions of high, mid, and poor S/N along the spectrum S_X(ω).]



4.3.2.2 Engineering Significance
Practically speaking, there exist other issues that must be noted, as follows.

4.3.2.2.1  Confidence of Measurement
When signals are picked up, we always need to know how reliable a specific measurement is. In general, the confidence factor can be defined as
$$C = r\sqrt{n} \qquad (4.168)$$
where n is the size of the measurement and r is the S/N ratio.


An accepted criterion is

C ≥ C_preset (4.169)

where Cpreset is a preset value of confidence.


If m parameters are estimated, the overall confidence factor can be written as
$$C_m = r\sqrt{\frac{n}{m}} \qquad (4.170)$$

Similarly, there is also the following judgment

Cm ≥ Cm,preset (4.171)

Here, the second subscript preset indicates a preset value.

4.3.2.2.2  Sensitivity Analysis in Dealing with Normal Uncertainty


An additional uncertainty relating to measurement is the stability or repeatability of
the corresponding signal processing. Often, the method of sensitivity testing is used
in determining this unknown.

4.3.2.2.2.1   Perturbation  To explain the essence of sensitivity analysis, the per-


turbation phenomenon must first be considered. A deterministic process A(t), due to
noise N(t), is commonly measured to be random.

X(t) = A(t) + N(t) (4.172)

When

‖A(t)‖ ≫ ‖N(t)‖ (4.173)

N(t) can be seen as a small term “added” to the main process. Suppose N(t) can be
represented by the Taylor series such that

X(t) = A(t) + ε0 N0(t) + ε1N1(t) + … (4.174)

where ε0 and ε1, and so on, are small numbers and

ε0 > ε1 > … (4.175)



4.3.2.2.2.2   Stability of Realization  Assume N₀(t) is Gaussian so that it is "known." In this case, ε₁N₁(t) is considered to be a (first-order) perturbation. By adding a small amount of perturbation in the region of ε = (1–5)% of white noise, whose level is equal to the peak level of X(t), the mean and standard deviation of X(t) will vary. If the variation is within (1–5)%, the measurement or realization is considered to be "stable."
Practically, this can be expressed as follows. First, let the initial condition

X0 = ‖A(t)‖ (4.176)

Then, generate an artificial Gaussian process ε(t) and let

‖ε(t)‖ ≤ (1–5)% (4.177)

Subsequently, by adding noise to the process X(t), it results in

Y(t) = X(t) + X0 ε(t) (4.178)

By checking the mean,
$$\frac{\left|\mu_Y(t) - \mu_X(t)\right|}{\left|\mu_X(t)\right|} \le (1\text{--}5)\% \qquad (4.179)$$
and the variance,
$$\frac{\left|\sigma_Y^2(t) - \sigma_X^2(t)\right|}{\sigma_X^2(t)} \le (1\text{--}5)\% \qquad (4.180)$$
or the standard deviation,
$$\frac{\left|\sigma_Y(t) - \sigma_X(t)\right|}{\sigma_X(t)} \le (1\text{--}5)\% \qquad (4.181)$$

The above concept may be summarized as follows. Though the actual amount of noise is often unknown, an artificial "noise," which is known, can be added. If a small amount of such random noise (namely, a perturbation) is added to the measured data and the outcome of the data analysis does not vary in a statistically significant manner, then the total system of signal pickup and processing is considered to be stable.
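A compact sketch of this stability test follows; the stand-in record, the 2% noise level, and the 5% tolerance are illustrative assumptions:

```python
# Perturbation-based stability check (a sketch of Eqs. 4.176 through 4.181).
import numpy as np

rng = np.random.default_rng(5)
x = rng.standard_normal(10_000) + 3.0           # stand-in for measured data X(t)

x0 = np.max(np.abs(x))                          # Eq. 4.176: X0 = ||A(t)||
eps = 0.02 * rng.standard_normal(x.size)        # Eq. 4.177: ~2% Gaussian noise
y = x + x0 * eps                                # Eq. 4.178: perturbed record

d_mean = abs(y.mean() - x.mean()) / abs(x.mean())      # Eq. 4.179
d_std = abs(y.std() - x.std()) / x.std()               # Eq. 4.181
print(d_mean, d_std, d_mean < 0.05 and d_std < 0.05)   # "stable" if both small
```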

4.4 Spectral Presentation of Random Process


4.4.1 General Random Process
To understand an object, one of the most common methods is to mathematically
model the target, which is also referred to as a mathematical construction. Recall
the case of random variables. The modeling is done by the distribution functions,

denoted by F_X(x), which is essentially a calculation of probabilities, that is, the probability, obtained through averaging, of all the chances that X is smaller than x.
To deal with a set of random variables, the averaging is simple because the com-
putation is done among the variables themselves, namely, in the sample space. In
the case of a random process X(t,e), however, the average can be far more complex
because we will have not only the sample space Ω = {e} but also another index t,
“time.” As a result, the distribution is defined in an n-dimensional way, denoted by
FX(x1, x2, … xn; t1, t2, …, tn). Therefore, only if the entire n-dimensional distribution
is evaluated would we understand the global properties of X(t,e). It is understandable
that this task can be very difficult.
On the other hand, in many cases of engineering applications, we may only need
two or even one dimensional distribution, in which the corresponding averages can-
not provide global information. Instead, these averages provide local parameters,
such as autocorrelation and cross-correlation functions, variance, and mean.
In Figure 4.14, the global and local properties of random processes are illustrated
by a conceptual block diagram. In showing the relationships between these proper-
ties, we also realize the major topics of random process, discussed mainly in Chapter
3. In Figure 4.14, inside the frame with broken lines is a special relationship between
correlation functions and PSD functions, which is the main topic in this chapter and
is shown in detail in Figure 4.15.
In Figure 4.15, through Fourier transform (practically, also including Laplace
transform), which is a powerful mathematical tool, the correlation functions in the
time domain are transferred into the frequency domain. Analyzing vibration sig-
nals, which is one of the major topics in this manuscript, can be carried out in the
frequency domain. Such analysis is a necessary and powerful tool and can provide
insight into vibration systems, which cannot be obtained through the time domain
only. In Section 4.4.2, we discuss the frequency distributions and spectral presenta-
tions of random process in a more rigorous fashion.

[Figure 4.14  Properties of random process: a block diagram relating global properties (reconstruction of the entire random process, n-dimensional distributions) to local properties (two-dimensional and one-dimensional distributions, correlation functions, power spectral density, higher-order moments, variance, and mean).]



[Figure 4.15  Relationship between analyses of the time and the frequency domain: a process in the time domain is described by correlation functions (autocorrelation, cross-correlation), which map through Fourier/Laplace (inverse) transforms to the PSD functions (auto-power and cross-power spectral density) in the frequency domain; transfer functions relate the input PSD to the output PSD.]

4.4.2 Stationary Process
4.4.2.1 Dynamic Process in the Frequency Domain
By definition, in general, a stationary process will neither grow to infinity nor die out. As a dynamic process, its instantaneous value will fluctuate, namely, it will be up at certain time points and down at others. Such an up-and-down process in the time domain can be represented by various sinusoidal terms, sin(ω_it)'s and cos(ω_it)'s.
Such an up-and-down process will contain a certain group of frequency components.
To view such processes in the frequency domain is in fact to list these frequency compo-
nents as a spectrum, which unveils important information on the profile of frequencies.
For a nonstationary process, the Fourier spectrum often does not exist. Therefore,
we cannot have the spectrum that a stationary process does. However, for nonsta-
tionary processes, one can also perform frequency analysis by introducing the finite
Fourier transform (recall Equation 4.35). In this circumstance, when the value of T is
not sufficiently long, the corresponding spectra are not deterministic.

4.4.2.2 Relationship between the Time and the Frequency Domains
The above viewpoint, that a stationary process can be represented by harmonic oscillations, is the key to further exploring random vibrations, the main topic of this manuscript. To realize this point clearly, let us reconsider the autocorrelation R_X(τ) as the result of the inverse Fourier transform of the auto-power spectral density function (recall Equation 4.17), that is,
$$R_X(\tau) = \frac{1}{2\pi}\int_{-\infty}^{\infty} S(\omega)\,e^{j\omega\tau}\,d\omega \qquad (4.182)$$

Furthermore, based on the formula for the spectral distribution function Ψ(ω), it is seen that
$$S_X(\omega) = \frac{d\Psi(\omega)}{d\omega} \qquad (4.183)$$
Thus,
$$d\Psi(\omega) = S_X(\omega)\,d\omega \qquad (4.184)$$
Substitution of Equation 4.184 into Equation 4.182 results in
$$R_X(\tau) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{j\omega\tau}\,d\Psi(\omega) \qquad (4.185)$$

Equation 4.185 is called a Fourier–Stieltjes integral or Fourier–Stieltjes transform (Thomas J. Stieltjes, 1856–1894).
In Equation 4.185, the functions R_X(τ), S_X(ω), and Ψ_X(ω) are all deterministic; therefore, as long as the auto-PSD function is absolutely integrable, namely,
$$\int_{-\infty}^{\infty}\left|S(\omega)\right|d\omega < \infty$$
Equations 4.182 through 4.185 exist.
Now, if Equation 4.34 exists, that is, if
$$X(\omega) = \int_{-\infty}^{\infty} X(t)\,e^{-j\omega t}\,dt$$

then the random process X(t) can be seen as the inverse Fourier transform of the function X(ω), that is,
$$X(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} X(\omega)\,e^{j\omega t}\,d\omega$$
Similar to the above-mentioned derivation of Equation 4.185, the Fourier transform of X(t), namely X(ω), can be seen as the derivative of a function Z(ω) (see Equation 4.183), that is,
$$X(\omega) = \frac{dZ(\omega)}{d\omega}$$
Additionally, the function Z(ω) is the integration of X(ω):
$$Z(\omega) = \int_{-\infty}^{\omega} X(\varpi)\,d\varpi$$


Similar to Equation 4.185, the random process X(t) can be written as a Fourier–Stieltjes integral:
$$X(t) = \int_{-\infty}^{\infty} e^{j\omega t}\,dZ(\omega) \qquad (4.186)$$

If Equation 4.186 does exist, then it implies that a random process can be replaced
by its spectral representation. In fact, Z(ω) does exist under certain conditions, and
it is defined as the spectral representation function of random process. Now, let us
consider the required conditions. In Chapter 3, we noted that to use Fourier trans-
form to replace the Fourier series of a random process is not always doable because
it requires conditions. In Sections 4.1 and 4.2, we further point out that to let T→∞
also needs conditions. These two issues are closely related.
To see this point, recall the sum of the harmonic process (Equation 3.62) with zero mean, which can be written in an alternative form given by
$$X(t) = \sum_{k=-m/2}^{m/2} X_k(t) = \sum_{k=-m/2}^{m/2} C_k\,e^{j\omega_k t} \qquad (4.187)$$

where the C_k are complex-valued, uncorrelated random variables with zero mean and variance σ². Because X(t) must be real valued, we have symmetry for the pair of frequencies ω_k and −ω_k: C_{−k} = C_k^*.
With the help of the notation in Equation 4.187, consider the autocorrelation function:
$$R_X(\tau) = E[X^*(t)\,X(t+\tau)] = E\left[\left(\sum_{p=-m/2}^{m/2} C_p e^{j\omega_p t}\right)^{\!*}\left(\sum_{q=-m/2}^{m/2} C_q e^{j\omega_q(t+\tau)}\right)\right] = \sum_{p=-m/2}^{m/2}\sum_{q=-m/2}^{m/2} E\left[C_p^* C_q\right]e^{j[-\omega_p t + \omega_q(t+\tau)]} \qquad (4.188)$$

Note that X is zero-mean, and E[C_p^*C_q] = 0 for p ≠ q. Also, based on the Euler equation e^{jθ} = cos θ + j sin θ, the above equation can be rewritten as
$$R_X(\tau) = \text{Re}\left[\sum_{k=-m/2}^{m/2} E\left[C_k^2\right]e^{j\omega_k\tau}\right] = \text{Re}\left[\sum_{k=1}^{m} E\left[C_k^2\right]e^{j\omega_k\tau}\right] \qquad (4.189)$$
where the symbol Re(.) means taking only the real portion of the function (.).

When the frequency resolution becomes infinitely small, namely, when in between ω_k and ω_{k+1} we insert an infinitely large number of frequency components, the series in Equation 4.187 will be replaced by an integral:
$$X(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{j\omega t}\,Z(d\omega) \qquad (4.190)$$
where
$$Z(\varpi) = \left.\sum_{p=-m/2}^{\omega_p\le\varpi} C_p\right|_{m\to\infty} \qquad (4.191)$$
and Equation 4.189 can be replaced by
$$R_X(\tau) = \text{Re}\left[\frac{1}{2\pi}\int_{-\infty}^{\infty} e^{j\omega\tau}\,\Upsilon(d\omega)\right] = \frac{1}{2\pi}\int_{-\infty}^{\infty}\cos\omega\tau\;\Upsilon(d\omega) \qquad (4.192a)$$
Here, Υ is a special function, which will be expressed in detail in Section 4.4.2.3. In more general cases, a random process X(t) can be complex valued, so that the corresponding autocorrelation function is complex in general. In such a situation, Equation 4.192a can be rewritten as
$$R_X(\tau) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{j\omega\tau}\,\Upsilon(d\omega) \qquad (4.192b)$$

4.4.2.3 Spectral Distribution and Representation
In Figure 4.16a and b, we first show the operation of using finer frequency resolutions, in which the magnitude M_p is used to represent either the amplitude of the function Υ(ϖ) or the function Z(ϖ). Note that M_p is the area at frequency ω_p. Additionally, the rectangular-shaped M_p can be seen as the product of the height (σ²/2π)g_p and the frequency interval Δω. That is,
$$M_p = \frac{\sigma^2}{2\pi}\,g_p\,\Delta\omega \qquad (4.193)$$

Thus, in Equation 4.193, using M_p to represent the magnitude of the function Υ(ϖ) results in
$$\Upsilon(\varpi) = E\left[C_p^2\right] = \underset{n\to\infty}{\mathrm{l.i.m.}}\;\frac{1}{n}\sum_{p=-m/2}^{\varpi} C_p^2 = \lim_{m\to\infty} M_p = \lim_{\Delta\omega\to 0}\frac{\sigma^2}{2\pi}\,g_p\,\Delta\omega \qquad (4.194)$$

[Figure 4.16  Mass and density functions with various frequency intervals. (a) Equal frequency interval Δω. (b) Equal frequency interval dω. (c) Unequal frequency intervals Δω_p, Δω_q. (d) Unequal frequency intervals dω, with the solid curve a deterministic function (e.g., the mean) and the dotted line a random realization X_i(t).]

To conceptually show the operation described in Equation 4.194, Figure 4.16a has fewer spectral lines, whereas Figure 4.16b has considerably more. Although in (b) the spectrum is still discrete, its frequency interval is marked as dω for comparison; one can realize that when the resolution Δω becomes finer and finer, until it reaches an infinitesimally small value dω, a continuous spectrum will be obtained. Note that in this process the frequency intervals become finer and finer, with all of them having equal length.
We see that M_p is the "mass" function and the height (σ²/2π)g_p is the density function. For the specific case of the function Υ(ϖ), we use M_p to represent the accumulative average E[C_p²], and in a later discussion we can compare Υ(ϖ) to the CDF, the accumulated probability. Because M_p is the magnitude and g_p is the "height" shown in Figure 4.16, when Δω→dω, g(ω) is used to replace g_p, and g(ω) becomes a density function. Here, the term 1/2π is used only for mathematical convenience.


Note that when we replace the harmonic random series by integrations (Equation 4.11), it is not necessary to have equally spaced frequency intervals. That is, recalling Equation 3.73, Δω_p = ω_{p+1} − ω_p ≠ Δω_q = ω_{q+1} − ω_q. The equal and unequal frequency intervals are also conceptually shown in Figure 4.16.
Figure 4.16a and b show equal frequency intervals, whereas in Figure 4.16c and d, the frequency intervals are unequal. From Figure 4.16c, it is seen that having unequal frequency intervals Δω_p and Δω_q at points ω_p and ω_q, respectively, does have advantages, because at point ω_p the curve has a steeper slope and at point ω_q the curve is flatter.
Mathematically, when equal frequency intervals are used, it is implied that the original function, say, R_X(τ) and/or X(t), is periodic with period T, namely,

\frac{2\pi}{T} = \Delta\omega = \omega_T   (4.195)

In this case, the function, say, R_X(τ), can be represented by a Fourier series with harmonic terms cos nω_T τ, sin nω_T τ and/or e^{jnω_T τ}, where ω_T is the fundamental frequency. However, we may also encounter nonperiodic cases; in this situation, we use the relationship (Equation 4.192)

R_X(\tau) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{j\omega\tau}\, \Upsilon(d\omega)

instead of using the Fourier transform

R_X(\tau) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{j\omega\tau}\, S_X(\omega)\, d\omega

The following is a more general description. A Fourier series, the discrete form, only works for periodic functions. A numerical realization of the Fourier transform in discrete format, with limited record length, will inherit certain drawbacks of the Fourier series. In addition, when we use Y(ω) = X(ω,T) in Equation 4.37, letting T→∞ is not always legitimate. To be mathematically consistent, the above-mentioned Wiener–Khinchine equation defines the auto-PSD function S_X(ω) directly, instead of introducing the concept of PSD first and then proving that it is the Fourier transform of R_X(τ).
Compared with the definition of the spectral distribution function in Equation 4.22, namely, \Psi(\omega) = \int_{-\infty}^{\omega} S_X(\varpi)\, d\varpi, we can realize that if periodic functions do exist, Υ(ϖ) is nothing but the spectral distribution function Ψ, that is,

Υ(ϖ) ≡ Ψ(ϖ)   (4.196)

and

d[Υ(ϖ)] = d[Ψ(ϖ)] = SX(ϖ)dϖ (4.197)



Equation 4.197 indicates that the autocorrelation function R_X(τ) and the function d[Ψ(ω)]/dω are Fourier pairs, that is,

R_X(\tau) \;\Leftrightarrow\; \frac{d[\Psi(\omega)]}{d\omega} = S_X(\omega)   (4.198)

Furthermore, comparing Equation 4.192a with Equation 4.13, for the continuous frequency domain,

\Upsilon(d\omega) = \Psi(d\omega) = \sigma^2 g(\omega)\, d\omega   (4.199)

Therefore, in this case

\sigma^2 g(\omega) = S_X(\omega)   (4.200)

The quantity Cp in Equation 4.188 is the magnitude of the frequency component


at frequency ωp, so that it is the magnitude of the spectral line of the specific point of
X(t)’s spectrum. Therefore, the summation function Z(ϖ) is an accumulated spectral
value up to frequency ϖ. In the literature, Z(ϖ) is referred to as the spectral repre-
sentation function.
Although in Figure 4.16 Mp can represent both the magnitude of Z(ω) in Equation 4.191 and that of Υ(ω) in Equation 4.194, the natures of Z(ω) in Equation 4.191 and of Υ(ω) in Equation 4.194 are completely different. The former is a random value, whereas the latter is deterministic. As shown in Figure 4.16d, the solid curve stands for a deterministic function, say, the mean of the random process X(t), whereas the dotted line represents a random function, say, X(t). For a random process, its spectral amplitude, if it exists, is C_p, which can be random. Therefore, using unequal frequency intervals Δω_p and Δω_q at points ω_p and ω_q, respectively, shows another advantage in accounting for general random processes.
It can be proven that the variance of Z(ω) is

D[Z(ω)] = Υ(ω) (4.201)

The covariance of Z(ω_α) and Z(ω_β) is

\sigma_{Z_\alpha Z_\beta} = \Upsilon\big(\min(\omega_\alpha, \omega_\beta)\big)   (4.202)

and

Z(−ω) = Z(ω)* (4.203)

4.4.2.4 Analogy of Spectral Distribution Function to CDF


The spectral distribution Υ(ω) is essentially a rescaled CDF. First, it is easy to see that

ϒ(0) ≡ Ψ(0) = 0 (4.204)


Random Processes in the Frequency Domain 217

Second, ϒ(ω) does not tend to unity; instead, it tends to 2π times the variance σ²_X of the process, which can be seen from Equation 4.31. We thus have

\Upsilon(\infty) = \Psi(\infty) = 2\pi\sigma_X^2 = 2\pi R_X(0)   (4.205)

Both the spectral distribution and the CDF of a random variable are right-
continuous, nondecreasing bounded functions with countable jumps (in the case of
mixtures of discrete and continuous random variables).

4.4.2.5 Finite Temporal and Spectral Domains


Up to now, infinite time t, including the time lag τ, has been used most of the time. That is, the domain of time is (−∞, ∞). Similarly, the domain of frequency is also (−∞, ∞). Although we realized that letting T→∞ was not always legitimate and the PSD was introduced through direct definition to avoid a mathematical dilemma, insights about this problem have not yet been discussed. The only improvement is to use the frequency distribution and representation functions to deal with nonperiodic processes. Although a rigorous treatment of infinite domains is beyond the scope of this manuscript, in the following, let us consider these functions and the corresponding Fourier–Stieltjes integral in the case of finite domains of time and frequency. This is because, in practical applications, when the time becomes sufficiently large, the temporal functions often become sufficiently small (see Figure 4.12 for examples).
Consider that in Equation 4.33, the Parseval energy equation, the domains of the integrands X²(t) and |X(ω)|² become finite, that is, (−T, T), T < ∞, and (−ω₀, ω₀), ω₀ < ∞. We have

\int_{-T}^{T} X^2(t)\, dt \quad \text{and/or} \quad \frac{1}{2\pi}\int_{-\omega_0}^{\omega_0} \big|X(\omega)\big|^2\, d\omega   (4.206)

In the following, we discuss the finite temporal and frequency domains of random
and transient processes, which are sometimes treated as measured signals.

4.4.2.5.1  Finite Time Duration


Generally speaking, with limited duration (0, T), a dynamic process X(t) is better classified as a transient process instead of a random process. However, the amplitudes of a transient process at different time points t_i and t_j, 0 ≤ t_i < t_j ≤ T, can be random variables. For example, an earthquake ground motion is often treated as a random signal.
In practical computations, the Fourier integral of the limited time domain is treated as a periodic signal with period T; that is, the transient signal is repeated continuously over multiple durations [(n − 1)T, nT], n = 1, 2, … Very often, a random process is sampled with limited time duration (see Figure 4.12). In this case, we have a forced finite time duration. Such sampling may introduce errors referred to as power leakage, which will be discussed in Chapter 9.

4.4.2.5.2  Finite Frequency Domain


In engineering applications, we always have dynamic processes with finite frequency domains. That is, the highest frequency is limited. This is partly because, when a signal is picked up, due to the limited frequency response of the instrumentation, frequencies higher than a certain level cannot be measured. Another reason is that, to avoid signal aliasing, low-pass filters are used so that frequencies higher than the cut-off threshold are removed.
Additionally, to measure a random signal, the total number of samples and the
duration T are also limited. Suppose in a measurement, n samples are taken, then,
with the help of Equations 4.37 and 4.39, the upper limit of the frequency domain or
the maximum frequency fmax can be determined by

f_{\max} = \frac{n}{2T}   (4.207)
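As a quick numerical check (a minimal sketch using NumPy; the sample count n and duration T are hypothetical values), Equation 4.207 reproduces the last frequency line of the discrete Fourier transform of such a record:

```python
import numpy as np

# Hypothetical measurement: n samples taken over a duration T (seconds)
n = 1024          # number of samples
T = 4.0           # record duration, s
dt = T / n        # sampling interval, s

f_max = n / (2 * T)   # Equation 4.207: maximum resolvable frequency, Hz
df = 1.0 / T          # frequency resolution (DFT line spacing), Hz

# The DFT of a record of this length can only populate the frequencies
# 0, df, 2*df, ..., f_max, as np.fft.rfftfreq confirms:
freqs = np.fft.rfftfreq(n, d=dt)
print(f_max, df, freqs[-1], freqs[1])   # freqs[-1] == f_max, freqs[1] == df
```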

Problems
1. A random process {X(t), −∞ < t < ∞} is given by X(t) = At2 + Bt + C with
A, B, and C to be independent random variables and A ~ N(0,1), B ~ N(0,1),
and C ~ N(0,1). Find if X(t) is mean-square continuous, mean-square dif-
ferentiable, or mean-square integrable.
2. Derive autocorrelation functions for the following processes:
a. White noise

S_X(\omega) = S_0

b. Low pass

S_X(\omega) = \begin{cases} S_0 & |\omega| < \omega_C \\ S_0/2 & |\omega| = \omega_C \\ 0 & |\omega| > \omega_C \end{cases}

c. Band pass

S_X(\omega) = \begin{cases} S_0 & \omega_L < |\omega| < \omega_U \\ S_0/2 & |\omega| = \omega_L,\ \omega_U \\ 0 & \text{elsewhere} \end{cases}

d. Narrow band

S_X(\omega) = \sigma_X^2\big[\delta(\omega + \omega_0) + \delta(\omega - \omega_0)\big]/2


Random Processes in the Frequency Domain 219

3. Derive PSD functions for

a. R_X(\tau) = e^{-\alpha|\tau|}
b. R_X(\tau) = e^{-\alpha|\tau|}\cos(\omega_0\tau)
4. A low-pass random process X(t) has a cut-off frequency ω C (or 2π fC). It is
proposed to estimate the PSD function of this process using Equation 4.58
and sample records of length T = 10/fC. Is T long enough? What if T is 10
times longer? How long would you make it and why? Hint: Consider the
rate at which the following ratio approaches unity.

\frac{\displaystyle\int_{-T}^{T} R_X(\tau)\, e^{-j\omega\tau}\left(1 - \frac{|\tau|}{T}\right) d\tau}{\displaystyle\int_{-\infty}^{\infty} R_X(\tau)\, e^{-j\omega\tau}\, d\tau} = 1

5. Let W(t) = X(t)Y(t), with X(t) and Y(t) being uncorrelated random processes. Find the PSD function of W(t) and the cross-PSD function and coherence of W(t) and X(t).
6. Consider a random binary function with random phasing Θ, which is uniformly distributed between 0 and T, shown in Figure P4.1.
a. Model this random process Y(t).
b. Find the autocorrelation and auto-PSD.
c. Is the process stationary? Ergodic? Hint: It still depends on whether or not t1 and t2 are in the same time interval, but this now depends on Θ.
7. A local average process Y_T(t) is defined by

Y_T(t) = \frac{1}{T}\int_{t-T/2}^{t+T/2} X(u)\, du

X(t) is a random process. Show that the PSD function is given by

S_{Y_T}(\omega) = S_X(\omega)\left[\frac{\sin(\omega T/2)}{\omega T/2}\right]^2


Figure P4.1  Random phasing Θ.



8. A stationary process {X(t), −∞ < t < ∞} has PSD given by

S_X(\omega) = \frac{\omega^2 + 33}{\omega^4 + 10\omega^2 + 9}

Find its autocorrelation function and variance.


9. X(t) and Y(t) are non–zero-mean and non–cross-correlated stationary ran-
dom processes

Z(t) = X(t) + Y(t)

a. Is Z(t) stationary?
b. Find the cross-PSD SZY (ω) and SXZ (ω).
10. Θ is a uniformly distributed random variable over one period of the frequency ω_T. The parameters a_i, b_i, and ω are all constant. The summation satisfies

\sum_{i=1}^{\infty}\big(a_i^2 + b_i^2\big) < \infty

A random process is given as

X(t) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\big\{a_n\cos[n\omega_T(t + \Theta)] + b_n\sin[n\omega_T(t + \Theta)]\big\}

Find the auto-PSD of X(t).


5 Statistical Properties
of Random Process
Up to now, we have reviewed the basic knowledge on the theory of probability, which
is the fundamental tool for the study of random processes. In addition, we have
learned the approach of using two-dimensional (2D) deterministic probability distri-
bution functions to handle one-dimensional (1D) random events. We also introduced
random processes in Chapter 3, the time domain approach, where the nature of such
a dynamic process was unveiled. From the viewpoint of probability distributions,
a random process can have many, if not infinite, pieces of random distributions,
instead of a single one as a set of random variables does. Although it is not neces-
sary, in most cases, we will use time as new indices to describe and handle these
multiple distribution functions and refer to the methodology as a three-dimensional
(3D) approach. Furthermore, in Chapter 4, the functions in the time domain, either
the time-varying process itself, which is random, or the correlation functions, which
become deterministic through the operation of mathematical expectation, were
transferred into the frequency domain. Although it is still a “3D” approach, the cor-
responding Fourier transforms provided spectral analysis. The latter is a powerful
tool to describe the frequency components of a random process.
Different from typical textbooks on random processes, which deal with important models of the processes in detail through rigorous mathematical derivations, the previous chapters only provide materials on general descriptions and explanations. Only a few specific random processes are discussed in a more systematic fashion, such as how to master the nature of a dynamic process, how to model a process with both state and time indices, how to use a 3D approach to understand and calculate the mean, variance, and correlation functions, and how to use the concept of random processes to understand our random world. To achieve the above objectives, the previous chapters attempt to outline an overall picture instead of dealing with particular features of individual processes. It is also noted that certain useful mathematical tools, such as detailed Fourier and Laplace transforms, how to determine integral limits for sample spaces, and others, which can occupy many sections in typical textbooks, are also not described.
To compensate for these drawbacks, before systematically presenting the main topic of this manuscript (random vibrations), three unique issues of random processes, namely level crossing, peak values, and fatigue, will be further considered. Here, especially with fatigue, the focus is on when to account for the occurrence process, instead of how to evaluate failures caused by the random process and what their results will be (this will be discussed in detail in Chapter 10). To study the time-varying developments, certain important random processes, such as the Rayleigh process and Markov chains, will be studied as tools not only to handle these engineering problems but also to describe the methodology of how to understand specific random processes in detail. However, the emphasis is still given to practical application instead of mathematical rigorousness. Due to the randomness, statistical surveys are used to carry out the analyses. In so doing, the “3D” processes are reduced into “2D” variables and, furthermore, into “1D” parameters, in general.

5.1 Level Crossings
To analyze a random time history, a specific preset level is first established. The objective is then to examine the probability that the value of the process is greater than the preset level. Note that the objective is now reduced to a 1D problem. This approach may be referred to as a special parameterization (see Rice 1944, 1945 and Wirsching et al. 2006).

5.1.1 Background
5.1.1.1 Number of Level Crossings
For a time history x(t), in an arbitrary time interval (t, t + Δt), with an arbitrary level

x = a   (5.1)

the total number of times for which x(t) > a, namely, the level being crossed, can be expressed as

n_a = N_a(t, t + \Delta t)   (5.2)

where n_a is a random variable. For convenience, we do not use a capital letter to denote this random variable, to avoid the risk of confusion (see Figures 5.1 and 5.2).
From Figure 5.1, a and b are points where the curve crosses the zero line, whereas c and d are points where line a is crossed. In addition, f is the peak value. Generally speaking, Na(0, t) is a nonstationary random process beginning with

Na(0, 0)|t = 0 = 0 (5.3)


Figure 5.1  Level a and crossing points c and d.




Figure 5.2  Number crossing.

If na, which is considered to be the number of “arrivals,” is independent and the


arrival rate λ is constant, then

λ = const. (5.4)

It follows that Na(t) is a Poisson process.


The waiting time until the first arrival, denoted by Y, is a random variable, expo-
nentially distributed. The mean can then be written as

μY = 1/λ (5.5)

Furthermore, the mean of the random process Na(0, t) is

\mu_{N_a}(t) = E[N_a(t)] = \lambda t   (5.6)

and the variance is

\sigma_{N_a}^2(t) = \lambda t   (5.7)
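The Poisson crossing model above is easy to exercise numerically. The following minimal sketch (NumPy; the arrival rate λ, horizon, and repetition count are hypothetical values) builds arrivals from exponential waiting times and checks Equations 5.5 through 5.7:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, t_end, trials = 2.0, 10.0, 5000   # hypothetical rate (1/s), horizon (s), repetitions

counts = np.empty(trials, dtype=int)
first_wait = np.empty(trials)
for k in range(trials):
    # Exponential inter-arrival times with mean 1/lam build one realization
    waits = rng.exponential(1.0 / lam, size=int(5 * lam * t_end) + 50)
    arrivals = np.cumsum(waits)
    counts[k] = np.searchsorted(arrivals, t_end)   # N_a(0, t_end)
    first_wait[k] = waits[0]                       # waiting time Y to first crossing

print(first_wait.mean())            # ~ 1/lam = 0.5        (Equation 5.5)
print(counts.mean(), counts.var())  # both ~ lam*t_end = 20 (Equations 5.6 and 5.7)
```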

5.1.1.2 Correlations between Level Crossings


Now, let us consider the correlations between the level crossings as follows.

5.1.1.2.1 Crossing Pair
It is frequently seen that an up-crossing is followed by a down-crossing, as shown
in Figure 5.2.

5.1.1.2.2 Cluster Crossing
As shown in Figure 5.3, an additional case called cluster crossing may also occur.
In this instance, the time history is a narrow-band noise. Unlike the crossing pair, in
this case, many pairs may follow the initial pair.

5.1.2 Derivation of Expected Rate


First, let us consider the case of crossing pairs.

5.1.2.1 Stationary Crossing
If the probability distributions of the random processes N_a(t₁, t₂) and N_a(t₁ + s, t₂ + s) are identical, the random process is said to have stationary increments. It is seen that a Poisson process has stationary increments.
Let X(t) be a zero-mean stationary random process in which, for simplicity, X(t) is interpreted as displacement. Therefore, Ẋ(t) and Ẍ(t) are considered to be the velocity and the acceleration, respectively. For these conditions, the expected value is

E[Na(t, t + Δt)] = E[Na(Δt)] (5.8)


In addition,
E[Na(Δt)] = vaΔt (5.9)
where va is the expected rate of level crossing per unit time. The unit of va is hertz (Hz).


Figure 5.3  Cluster crossing.



The concept of crossing rate is helpful. Because X(t) is a random process, we cannot predict where or when a crossing occurs. What can be done, statistically, is to calculate how often crossings happen. Intuitively, the rate should be related to the oscillating frequency of X(t) and the level a.

5.1.2.2 Up-Crossing
Considering the case of up-crossing only, the rate is

v_{a+} = \frac{1}{2} v_a   (5.10)

The next objective is to find the rate va+.

5.1.2.3 Limiting Behavior
When

Δt → 0

there will be either zero or one up-crossing in the interval Δt.


Consider event A denoted by

A = {x = a is crossed with positive slope in dt} (5.11)

We then have the following probability:

P\big[N_a^+(dt)\big] = \begin{cases} P(A), & N_a^+(dt) = 1 \\ 1 - P(A), & N_a^+(dt) = 0 \\ 0, & \text{elsewhere} \end{cases}   (5.12)

Then,

E\big[N_a^+(dt)\big] = 1 \times P(A) + 0 \times [1 - P(A)] = P(A)   (5.13)

From Equation 5.9, where Δt → dt, the following can be written:

E\big[N_a^+(dt)\big] = v_{a+}\, dt

such that

v_{a+}\, dt = P(A)   (5.14)

To further calculate v_{a+}, consider that, to have an up-crossing of x = a from t to t + Δt, the conditions must be

1. X(t) < a   (5.15)
2. \dot{X}(t) > 0   (5.16)
3. X(t + \Delta t) > a   (5.17)

From Figure 5.4, the resultant relationship is illustrated:

X(t) + \dot{X}(t)\, dt > a   (5.18a)

or

X(t) > a - \dot{X}(t)\, dt   (5.18b)

The event in which the above three conditions are met has the single probability P(A):

P(A) = P\big\{\big[a - \dot{X}(t)\,dt < X(t) < a\big] \cap \big[\dot{X}(t) > 0\big]\big\}   (5.19)

Figure 5.5 shows the integral domain in the (x, ẋ) coordinates. The probability can then be written as

P(A) = \int_0^{\infty}\int_{a - v\,dt}^{a} f_{X\dot{X}}(u, v)\, du\, dv   (5.20)

When dt → 0, the starting point of x must be very close to line a, that is, u → a; in this case, f_{XẊ}(u, v) → f_{XẊ}(a, v) and

\int_{a - v\,dt}^{a} f_{X\dot{X}}(a, v)\, du = f_{X\dot{X}}(a, v)\int_{a - v\,dt}^{a} du = f_{X\dot{X}}(a, v)\,[a - (a - v\,dt)] = f_{X\dot{X}}(a, v)\, v\, dt


Figure 5.4  Up-crossing.




Figure 5.5  Integral domain.

Thus, the probability is

P(A) = \int_0^{\infty} f_{X\dot{X}}(a, v)\,(v\, dt)\, dv   (5.21)

The absolute value of v indicates that the slope of X(t) must be positive. Thus, from Equation 5.14, the closed-form formula of the rate v_{a+} is

v_{a+} = \frac{P(A)}{dt} = \int_0^{\infty} v\, f_{X\dot{X}}(a, v)\, dv   (5.22)

5.1.3 Specializations
To find the rate v_{a+} in Equation 5.22, the joint density function f_{XẊ}(a, v) must be known. The latter is generally unknown, unless X(t) is Gaussian. In the event that X(t) is Gaussian, then Ẋ(t) will also be Gaussian. Practically speaking, the assumption of a Gaussian process is reasonably valid.

5.1.3.1 Level Up-Crossing, Gaussian Process


Suppose X(t) is Gaussian. It can be proven that if X(t) and Ẋ(t) are uncorrelated, then to calculate the joint distribution, they can be treated as independent processes. Then, the rate can be written as

v_{a+} = \int_0^{\infty} v\, f_{X\dot{X}}(a, v)\, dv = \int_0^{\infty} v\, f_X(a)\, f_{\dot{X}}(v)\, dv = f_X(a)\int_0^{\infty} v\, f_{\dot{X}}(v)\, dv   (5.23)

Here, the variable v represents the “velocity” Ẋ(t), which is zero-mean, that is,

\int_0^{\infty} v\, f_{\dot{X}}(v)\, dv = \frac{\sigma_{\dot{X}}}{\sqrt{2\pi}}


and based on Equation 5.23, we have

v_{a+} = \frac{1}{2\pi}\frac{\sigma_{\dot{X}}}{\sigma_X}\, e^{-\frac{1}{2}\frac{a^2}{\sigma_X^2}}   (5.24)

Note that when the crossing threshold a increases, the level up-crossing rate decreases. For a given σ_X, as the RMS velocity σ_Ẋ increases, the crossing rate also increases.
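Equation 5.24 can also be checked by direct simulation. The sketch below is a hedged illustration, not part of the derivation: the frequency band, amplitudes, level a, and record length are all hypothetical. It synthesizes an approximately Gaussian process from random-phase harmonics, counts level up-crossings, and compares the empirical rate with Equation 5.24:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical flat band of random-phase harmonics between 1 and 5 Hz; by the
# central limit theorem the superposition is approximately Gaussian and stationary.
freqs = np.linspace(1.0, 5.0, 200)
omega = 2 * np.pi * freqs
amps = np.full(freqs.size, 0.1)              # equal component amplitudes
phases = rng.uniform(0, 2 * np.pi, freqs.size)

dt = 0.005
t = np.arange(0.0, 1000.0, dt)
x = np.zeros(t.size)
for A, w, p in zip(amps, omega, phases):
    x += A * np.cos(w * t + p)

sigma_x = np.sqrt(0.5 * np.sum(amps**2))             # theoretical sigma_X
sigma_v = np.sqrt(0.5 * np.sum((omega * amps)**2))   # theoretical sigma_Xdot

a = 0.8
n_up = np.sum((x[:-1] < a) & (x[1:] >= a))           # counted up-crossings of level a
rate_mc = n_up / (t.size * dt)
rate_eq = (sigma_v / sigma_x) / (2 * np.pi) * np.exp(-0.5 * (a / sigma_x)**2)  # Eq. 5.24
print(rate_mc, rate_eq)    # the two rates should agree within sampling error
```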

5.1.3.2 Zero Up-Crossing
When the level of interest is zero, we have the case of zero up-crossing, with

a = 0   (5.25)

Substitution of Equation 5.25 into Equation 5.24 results in

v_{0+} = \frac{1}{2\pi}\frac{\sigma_{\dot{X}}}{\sigma_X}   (5.26)

Note that v_{a+} is the rate of level up-crossing, namely, the number of crossings in a unit time, from which the angular frequency of crossing is determined to be

\omega_{0+} = 2\pi v_{0+} = \frac{\sigma_{\dot{X}}}{\sigma_X}   (5.27)

In Equations 5.26 and 5.27, the terms of standard deviations can be expressed as follows:

\sigma_X^2 = \int_{-\infty}^{\infty} S_X(\omega)\, d\omega = \int_0^{\infty} W_X(f)\, df   (5.28)

and

\sigma_{\dot{X}}^2 = \int_{-\infty}^{\infty} \omega^2 S_X(\omega)\, d\omega = 4\pi^2\int_0^{\infty} f^2 W_X(f)\, df   (5.29)

Substitution of the above formulas into Equations 5.26 and 5.27, respectively, results in

v_{0+} = \sqrt{\frac{\displaystyle\int_0^{\infty} f^2 W_X(f)\, df}{\displaystyle\int_0^{\infty} W_X(f)\, df}}   (5.30)

and

\omega_{0+} = \sqrt{\frac{\displaystyle\int_{-\infty}^{\infty} \omega^2 S_X(\omega)\, d\omega}{\displaystyle\int_{-\infty}^{\infty} S_X(\omega)\, d\omega}}   (5.31)

5.1.3.3 Peak Frequency
5.1.3.3.1 General Process
The same level crossing analysis can also be applied to the velocity, Ẋ(t). A zero down-crossing of Ẋ(t) corresponds to a change of velocity from positive to negative at a peak of X(t), where the velocity is zero. Comparing this case with the “displacement crossing,” we replace a by v and let v = 0 for the velocity, and replace v by ϖ for the “acceleration” in Equation 5.22. In this case, the “acceleration” is negative, so that the peak frequency v_p can be written as

v_p = \int_{-\infty}^{0} -\varpi\, f_{\dot{X}\ddot{X}}(0, \varpi)\, d\varpi   (5.32)

5.1.3.3.2 Gaussian Process
A special case in which the term f_{ẊẌ}(0, ϖ) is available is when the process is Gaussian. In this case, similar to the approach used in developing the formula for v_{a+},

v_p = \frac{1}{2\pi}\frac{\sigma_{\ddot{X}}}{\sigma_{\dot{X}}}   (5.33)

and

\omega_p = 2\pi v_p = \frac{\sigma_{\ddot{X}}}{\sigma_{\dot{X}}}   (5.34)

Additionally, through substitution,

v_p = \sqrt{\frac{\displaystyle\int_0^{\infty} f^4 W_X(f)\, df}{\displaystyle\int_0^{\infty} f^2 W_X(f)\, df}}   (5.35)

\omega_p = \sqrt{\frac{\displaystyle\int_{-\infty}^{\infty} \omega^4 S_X(\omega)\, d\omega}{\displaystyle\int_{-\infty}^{\infty} \omega^2 S_X(\omega)\, d\omega}}   (5.36)

5.1.3.4 Bandwidth and Irregularity


Now, let us consider the following special cases of narrow-band and non–narrow-
band processes.

5.1.3.4.1 Narrow-Band Gaussian
First, suppose the random process X(t) is a narrow-band Gaussian process (refer to
Equation 4.65). For this case, the frequency of zero up-crossing as well as peak fre-
quency will be examined.
For the narrow-band Gaussian process, the auto-spectral density function can be
written as

S_X(\omega) = \sigma_X^2\big[\delta(\omega + \omega_m) + \delta(\omega - \omega_m)\big]/2   (5.37)

where ωm is the midband (normal) frequency of the process. For further clarification,
see description of ω 0 in Figure 4.8.
Substitution of Equation 5.37 into Equation 5.31 yields

\omega_{0+} = \sqrt{\frac{\displaystyle\int_{-\infty}^{\infty} \omega^2\big[\delta(\omega + \omega_m) + \delta(\omega - \omega_m)\big]/2\, d\omega}{\displaystyle\int_{-\infty}^{\infty} \big[\delta(\omega + \omega_m) + \delta(\omega - \omega_m)\big]/2\, d\omega}} = \sqrt{\frac{\omega_m^2}{1}}

Thus,

\omega_{0+} = \omega_m   (5.38)

Similarly, it is also true that

ωp = ωm (5.39)

In conclusion, for narrow-band processes, the zero up-crossing and the peak fre-
quency are identical and equal to the midband frequency.

5.1.3.4.2 Non–Narrow-Band
If the frequency band is not narrow, then the logical subsequent question is “how
wide” it can be. To measure the width of the band-pass filtering, an irregular factor
is introduced.

5.1.3.4.2.1   Irregularity Factor  The measure of the bandwidth can be described by an irregularity factor α, defined as the ratio of the zero up-crossing frequency and the peak frequency (Ortiz 1985), that is,

\alpha = \frac{v_{0+}}{v_p} = \frac{\omega_{0+}}{\omega_p}   (5.40)

By definition, this can be further written as

\alpha = \frac{E[N_{0+}(\Delta t)]}{E[N_p(\Delta t)]}   (5.41)

It is understandable that the irregularity factor has the following range:

0 < \alpha < 1   (5.42)

When α = 1, there will be one peak for every zero up-crossing (see Figure 5.6a). This implies that the random process only contains a single frequency, which is the case with the narrow band. Otherwise, if the process contains higher frequencies, whose mean spectral values are often considerably smaller than that of the lowest frequency (called the fundamental frequency), we will have v_p > v_{0+}, so that 0 < α < 1. Specifically, when α → 0, there will be an infinite number of peaks for every zero up-crossing (high-frequency dithering). This is illustrated in Figure 5.6b.

5.1.3.4.2.2   Spectral Width Parameter ε  With the help of the irregularity factor, the spectral width parameter is further defined as

\varepsilon = \sqrt{1 - \alpha^2}   (5.43)


Figure 5.6  Number of peaks. (a) Single peak. (b) Multiple peaks.

5.1.3.4.3 Gaussian
If the process is Gaussian, the expression of the irregularity factor can be simplified as follows. Substitution of Equations 5.26 and 5.33 into Equation 5.40 yields

\alpha = \frac{\sigma_{\dot{X}}^2}{\sigma_X \sigma_{\ddot{X}}}   (5.44)

Given that the process is Gaussian, then

E\big[\dot{X}(t)\dot{X}(t)\big] = \sigma_{\dot{X}}^2   (5.45)

Thus,

E\big[X(t)\ddot{X}(t)\big] = -\sigma_{\dot{X}}^2   (5.46)

and

\alpha = \frac{-\sigma_{X\ddot{X}}}{\sigma_X \sigma_{\ddot{X}}} = -\rho_{X\ddot{X}}   (5.47)

Observe that the irregularity factor α is equal to minus the correlation coefficient between the displacement and the acceleration.
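In practice, v_{0+}, v_p, α, and ε are computed from spectral moments of a measured one-sided PSD. The following is a minimal sketch (NumPy; the flat 4–12 Hz band is a hypothetical PSD) of Equations 5.30, 5.35, 5.40, and 5.43:

```python
import numpy as np

# Hypothetical one-sided PSD W_X(f): flat over a 4-12 Hz band
f = np.linspace(0.0, 50.0, 5001)           # Hz
W = np.where((f >= 4.0) & (f <= 12.0), 1.0, 0.0)

# Spectral moments lambda_k = integral of f^k W_X(f) df
lam0 = np.trapz(W, f)
lam2 = np.trapz(f**2 * W, f)
lam4 = np.trapz(f**4 * W, f)

v0 = np.sqrt(lam2 / lam0)       # zero up-crossing rate, Equation 5.30 (Hz)
vp = np.sqrt(lam4 / lam2)       # peak rate, Equation 5.35 (Hz)
alpha = v0 / vp                 # irregularity factor, Equation 5.40
eps = np.sqrt(1.0 - alpha**2)   # spectral width parameter, Equation 5.43
print(v0, vp, alpha, eps)
```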

5.1.4 Random Decrement Methods


As an application of level up-crossing and zero up-crossing, consider a special measurement that can generate a free-decay time history from a random process. To identify vibration parameters such as natural frequencies, damping ratios, and mode shapes from random vibration responses, we basically have two approaches. The first approach is to use correlation analysis, which is discussed in Chapter 3. The second approach is to use free-decay vibrations. However, the random responses are usually not transient signals; that is, they will not decay as time becomes longer. The random decrement method (Cole 1968) is a way to obtain a free-decay time history from random signals.

5.1.4.1 Random Decrement (Level Up-Crossing)


Suppose a random process is the response of a linear time-invariant system due to unknown excitations. To analyze the random process, in this section, the method of random decrement is discussed as follows. First, we suppose that a zero-mean random process Y(t) is stationary. This can be seen as a response time history, which is conceptually shown in Figure 5.7. To obtain a free-decay time history, in Figure 5.7, line a is drawn; here, a = 0.5.
To use the random decrement method, select the measurement duration T. For
example, in Figure 5.7, T = 3.15 seconds. The first initial up-crossing point is denoted
by t1. This can be seen in Figure 5.7, where t1 = 0.1 seconds. A time history of Y(t),
t1 ≤ t ≤ T is denoted by Y1(t).


Figure 5.7  Random decrement method.

Furthermore, the second crossing point, the first down-crossing, for example, is denoted by t₂; the third crossing point, the second up-crossing, is denoted by t₃.
The time history taken from Y(t₂), t₂ ≤ t ≤ T + (t₂ − t₁), is denoted by Y₂(t). Note that Y₂(t) and Y₁(t) have the same measurement length, T − t₁, and so on. The time history taken from Y(t₃), t₃ ≤ t ≤ T + (t₃ − t₁), is denoted by Y₃(t), and so on. Suppose a total of n time histories are taken in this manner.
A new time history Z(t) is generated as the average of all the Y_i(t), that is,

Z(t) = \frac{1}{n}\sum_{i=1}^{n} Y_i(t)   (5.48)

Z(t) will be a free-decay time history, with the initial conditions

\dot{y}(0) = 0   (5.49)

and

y(0) = a   (5.50)

Y_i(t) can be seen as a response of the system due to three kinds of excitations:

1. A random input F_i(t) with zero mean, where the corresponding portion of Y_i(t) is denoted as Y_{Fi}(t).
2. An initial velocity v_i, which is equal to the slope of Y_i(t) at t_i, where the corresponding portion of Y_i(t) is denoted as Y_{vi}(t).
3. An initial displacement a, where the corresponding portion of Y_i(t) is denoted as Y_{Di}(t).

Explicitly, this can be written as

Y_i(t) = Y_{Fi}(t) + Y_{vi}(t) + Y_{Di}(t)   (5.51)

Substitution of Equation 5.51 into Equation 5.48 yields

Z(t) = \frac{1}{n}\sum_{i=1}^{n} Y_{Fi}(t) + \frac{1}{n}\sum_{i=1}^{n} Y_{vi}(t) + \frac{1}{n}\sum_{i=1}^{n} Y_{Di}(t)   (5.52)

Initially, let us consider the first term. The system's impulse response function, denoted by h(t), is linear time-invariant for a stationary process. This can be written as

\sum_{i=1}^{n} Y_{Fi}(t) = \sum_{i=1}^{n} h(t) * F_i(t) = h(t) * \left[\sum_{i=1}^{n} F_i(t)\right] = h(t) * \{0\} = \{0\}   (5.53a)

Here, {0} is a null time history of length T − t₁.


Next, consider the second term: from Figure 5.6, it is seen that the following is
approximately true

v1 ≈ −v2

and it is understandable that

vi ≈ –vi+1 (5.53b)

Consequently, the responses due to the initial velocity vi and vi+1 will cancel each
other. Therefore, this will be reduced to the following:

∑Y (t) = {0}
i =1
vi (5.53c)

Lastly, consider the response due to the initial displacement a. With the same initial displacement a, the same response should result, which is a free-decay time history under the excitation of the step function

u(t) = a   (5.54)

As a result, the following is achieved:

Z(t) = \frac{1}{n}\sum_{i=1}^{n} Y_{Di}(t)   (5.55)

where n is the number of selected time-history segments.




Figure 5.8  Example of random decrement. (a) Random process. (b) Free-decay process.

Example 5.1

In this example, we show the original random signal and the free-decay time history obtained using the random decrement method. Figure 5.8a plots the original signal, and Figure 5.8b shows the regenerated time history.
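A minimal implementation sketch of the random decrement procedure described above is given below (NumPy; the trigger level, segment length, and the synthetic narrow-band record are hypothetical choices):

```python
import numpy as np

def random_decrement(y, a, seg_len):
    """Average of signal segments starting wherever y up-crosses level a.

    A sketch of the random decrement method of Section 5.1.4.1: every
    up-crossing of the level a supplies one segment of seg_len samples, and
    the ensemble average approximates a free-decay response with initial
    displacement a and (on average) zero initial velocity.
    """
    starts = np.where((y[:-1] < a) & (y[1:] >= a))[0] + 1
    starts = starts[starts + seg_len <= y.size]      # keep only complete segments
    if starts.size == 0:
        raise ValueError("no level up-crossings found")
    segs = np.stack([y[s:s + seg_len] for s in starts])
    return segs.mean(axis=0), starts.size

# Hypothetical record: narrow-band noise around 2 Hz built from random-phase harmonics
rng = np.random.default_rng(2)
dt = 0.01
t = np.arange(0.0, 600.0, dt)
y = np.zeros(t.size)
for fk, ph in zip(rng.uniform(1.8, 2.2, 50), rng.uniform(0, 2 * np.pi, 50)):
    y += 0.2 * np.cos(2 * np.pi * fk * t + ph)

z, n_seg = random_decrement(y, a=0.5, seg_len=400)   # 4-second free-decay estimate
print(n_seg, z[0])   # z[0] should be near the trigger level a = 0.5
```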

5.1.4.2 Lag Superposition (Zero Up-Crossing)


The above-mentioned random decrement method provides a free-decay time history by manipulating the random process, which is crucial for many applications of vibration analysis. However, it requires a significantly large amount of measurement data, and the resulting free-decay time history can be too short to obtain good results. The following method of lag superposition can significantly reduce the required measurements while yielding virtually the same accuracy.
With the same random time history Y(t), first denote by t_i the ith time point where Y(t_i) > 0 with Y(t_{i−1}) < 0; that is, a zero up-crossing happens between t_{i−1} and t_i.
Next, take Y_i(t) for the range t_i ≤ t ≤ T + t_i.
Last, let

m+

Z + (t ) =
1
m+ ∑Y (t)
i =1
i (5.56a)

where the subscript + stands for zero up-crossing and m+ is the number of fraction
pieces of selected time histories.
Slightly different from the case of random decrement based on level up-crossing, Z(t) can be seen to have two kinds of excitations. The first is due to the random force, whose contributions cancel in the summation. The second is due to the initial velocities at times t_i, because Y_i(t) is taken from the points in time for which Y(t_i) > 0.
Directly from the above discussion, the case in which Y_j(t), t_j ≤ t ≤ T + t_j, is taken from Y(t_{j−1}) > 0 and Y(t_j) < 0 can also be considered. By changing the sign and placing the time history inside the sum, the result is

Z_-(t) = \frac{1}{m_-}\sum_{j=1}^{m_-} Y_j(t)   (5.56b)

where the subscript − stands for zero down-crossing and m_− is the number of selected time-history segments.
Based on the rate of zero up-crossing given by Equation 5.26 and the rate of level up-crossing given by Equation 5.24, we can calculate the ratio v_{0+}/v_{a+}:

\frac{v_{0+}}{v_{a+}} = \left(\frac{1}{2\pi}\frac{\sigma_{\dot{X}}}{\sigma_X}\right)\bigg/\left(\frac{1}{2\pi}\frac{\sigma_{\dot{X}}}{\sigma_X}\, e^{-\frac{1}{2}\frac{a^2}{\sigma_X^2}}\right) = e^{\frac{1}{2}\frac{a^2}{\sigma_X^2}}   (5.57)

Since \frac{1}{2}\frac{a^2}{\sigma_X^2} > 0,

v_{0+} > v_{a+}   (5.58)

This ratio is always greater than 1, and it can be seen that such a ratio can be quite large.

Example 5.2

In this example, we show the original random signal and the free-decay time history obtained using the lag superposition method. Figure 5.9a plots the original signal, and Figure 5.9b shows the regenerated time history.


Figure 5.9  Example of lag superposition. (a) Random process. (b) Free-decay process.

5.1.4.3 Lag Superposition (Peak Reaching)


The above-mentioned lag superposition can provide a considerably larger amount of averaging from the same piece of random time history Y(t) compared with the random decrement method, because the crossing rate v_{0+} is generally much higher than the crossing rate v_{a+}. In addition, because we can use both Equations 5.56a and 5.56b, the total number of selected time histories is doubled.
We can also have an additional choice by picking up the pieces of time histories from each peak of the process Y(t). Namely, with the same random time history Y(t), first denote by t_i the ith time point where Y(t_{i−1}) < Y(t_i) and Y(t_i) > Y(t_{i+1}); namely, a peak is reached between t_{i−1} and t_{i+1}.
Next, take Y_i(t) for the range t_i ≤ t ≤ T + t_i.
Last, let

Z_+(t) = \frac{1}{m_+}\sum_{i=1}^{m_+} Y_i(t)   (5.59a)

where the subscript + stands for the positive value of the peak and m_+ is the number of selected time-history segments.
Similarly, we can also pick up the pieces of time histories from each valley of the process Y(t). Namely, with the same random time history Y(t), first denote by t_i the ith time point where Y(t_{i−1}) > Y(t_i) and Y(t_{i+1}) > Y(t_i); namely, a valley is reached between t_{i−1} and t_{i+1}.
Next, take Yi(t), for the range ti ≤ t ≤ T + ti.
Last, let

Z_-(t) = \frac{1}{m_-}\sum_{i=1}^{m_-} Y_i(t)   (5.59b)

where the subscript − stands for the negative value of the valley and m_− is the number of selected time-history segments.
Similar to the case of random decrement based on level up-crossing, Z(t) also has three kinds of excitations. The first is due to the random force, whose contributions cancel in the summation. The second is due to the initial velocities; the portions of Z(t) due to the initial velocities will also cancel each other. The third is the summation of the responses due to multiple levels of step functions, each excited by the specific level of the initial displacement at a peak.
Now, let us consider the numbers of useful pieces of selected time histories based on peak reaching and zero-crossing. It is seen that this ratio is the reciprocal of the irregularity factor α, that is,

\frac{v_p}{v_{0+}} = \frac{1}{\alpha}   (5.60)


Figure 5.10  Free-decay time histories generated from lag superposition (peak reaching).
(a) Peak reaching. (b) Valley reaching.

Because in most cases α < 1, this ratio is usually greater than 1, and it is seen that such a ratio can be rather large:

v_p > v_{0+}   (5.61)

Example 5.3

The following example shows free-decay time histories generated through the lag superposition method based on peak/valley reaching. The results are plotted in Figure 5.10a and b.
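A sketch of the peak-reaching variant follows (hypothetical helper and record; only the triggering rule differs from the random decrement sketch in Example 5.1):

```python
import numpy as np

def lag_superposition_peaks(y, seg_len):
    """Average segments of y triggered at every local peak (Equation 5.59a).

    A sketch of lag superposition by peak reaching: a trigger is placed at each
    sample exceeding both neighbors; the ensemble average retains the step-like
    initial displacements at the peaks and approximates a free-decay history.
    """
    peaks = np.where((y[1:-1] > y[:-2]) & (y[1:-1] > y[2:]))[0] + 1
    peaks = peaks[peaks + seg_len <= y.size]      # keep only complete segments
    segs = np.stack([y[p:p + seg_len] for p in peaks])
    return segs.mean(axis=0), peaks.size

# Hypothetical record: narrow-band noise around 2 Hz, as in the earlier sketches
rng = np.random.default_rng(4)
t = np.arange(0.0, 300.0, 0.01)
y = np.zeros(t.size)
for fk, ph in zip(rng.uniform(1.8, 2.2, 50), rng.uniform(0, 2 * np.pi, 50)):
    y += 0.2 * np.cos(2 * np.pi * fk * t + ph)

z_plus, n = lag_superposition_peaks(y, seg_len=400)
print(n, z_plus[0])   # typically many more triggers than level crossing provides
```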

5.1.5 Level Crossing in Clusters


In Sections 5.1.1 through 5.1.4, the main focus was on the crossing pair. Now, cluster
crossing will be considered.

5.1.5.1 Rice’s Narrow-Band Envelopes (Stephen O. Rice, 1907–1986)


Before discussing cluster crossing, a useful approach to deal with narrow-band pro-
cessing is described. It is used in comparing the aforementioned to “crossing pairs”
because we likely will have two arrivals per cycle in a cluster. Subsequently, there
can be many arrivals. However, between clusters, there will be zero arrivals. This
results in waiting times not being exponentially distributed (see Figure 5.11).
For the purpose of engineering design, often the first time that X(t) crosses the
level x = a is of interest. From this, the random process of the “envelope” will be
defined and the up-cross rate of the envelope will be determined.
For this case, X(t) is a stationary narrow-band Gaussian process. Suppose

X(t) = R(t) cos[ω 0 t + Θ(t)] (5.62)




Figure 5.11  Example of Rice’s narrow-band envelope.

Here, ω 0 is the center frequency of the narrow band, referring back to Equation
4.66 and the frequency ω 0; the random process R(t) is the envelope, and

R(t) > 0 (5.63)

The phase angle Θ(t) is an additional random process, where

0 < Θ(t) ≤ 2π (5.64)

In this case, it is assumed that both R(t) and Θ(t) vary slowly in comparison with X(t).

5.1.5.1.1 Joint Density Function

To use the level up-crossing formula in Equation 5.23, the joint density function of R(t) and Ṙ(t) is needed; it will be discussed in the following.

5.1.5.1.1.1   Trigonometry Equation  First, X(t) will be rewritten as

X(t) = C(t) cos(ωmt) + S(t) sin(ωmt) (5.65)

C(t) and S(t) are independent, identically distributed Gaussian processes, with zero mean and variance σ²_X.

C(t) = R(t) cos[Θ(t)] (5.66)

S(t) = R(t) sin[Θ(t)] (5.67)

From Equations 5.66 and 5.67, C(t) and S(t) are determined to be zero-mean.

The derivatives of C(t) and S(t) are

\dot{C}(t) = \dot{R}(t)\cos[\Theta(t)] - R(t)\sin[\Theta(t)]\,\dot{\Theta}(t)   (5.68)

and

\dot{S}(t) = \dot{R}(t)\sin[\Theta(t)] + R(t)\cos[\Theta(t)]\,\dot{\Theta}(t)   (5.69)

From Equations 5.68 and 5.69, the derivatives of C(t) and S(t) are also determined to be zero-mean.

5.1.5.1.1.2   Joint PDF of C(t), S(t) and Ċ(t), Ṡ(t)  If the one-sided spectral density function, W_X(f), is symmetric about the midband frequency ω_m, then C(t), S(t), Ċ(t), and Ṡ(t) are all independent. Suppose Ċ(t) and Ṡ(t) have a variance σ²_R, which will be determined later; then the joint density function of C(t), S(t), Ċ(t), and Ṡ(t) is given by

f_{C\dot{C}S\dot{S}}(c, \dot{c}, s, \dot{s}) = \frac{1}{4\pi^2 \sigma_X^2 \sigma_R^2}\exp\left\{-\frac{1}{2}\left[\frac{c^2 + s^2}{\sigma_X^2} + \frac{\dot{c}^2 + \dot{s}^2}{\sigma_R^2}\right]\right\}   (5.70)

where

c, s > 0   (5.71)

and

-\infty < \dot{c}, \dot{s} < \infty   (5.72)

5.1.5.1.1.3   Joint PDF of R(t), Θ(t), Ṙ(t), and Θ̇(t)  By variable transformation, the joint probability density function of R(t), Θ(t), Ṙ(t), and Θ̇(t) is

f_{R\dot{R}\Theta\dot{\Theta}}(r, \theta, \dot{r}, \dot{\theta}) = \frac{r^2}{4\pi^2 \sigma_X^2 \sigma_R^2}\exp\left\{-\frac{1}{2}\left[\frac{r^2}{\sigma_X^2} + \frac{\dot{r}^2 + r^2\dot{\theta}^2}{\sigma_R^2}\right]\right\}   (5.73)

where r > 0,

0 < \theta \le 2\pi   (5.74)

and

-\infty < \dot{r}, \dot{\theta} < \infty   (5.75)

If R(t) has a Rayleigh distribution and Θ(t) has a uniform distribution, then the joint PDF of R(t) and Ṙ(t) is

f_{R\dot{R}}(r, \dot{r}) = \frac{r}{\sqrt{2\pi}\,\sigma_X^2 \sigma_R}\exp\left\{-\frac{1}{2}\left[\frac{r^2}{\sigma_X^2} + \frac{\dot{r}^2}{\sigma_R^2}\right]\right\}, \quad r > 0,\ -\infty < \dot{r} < \infty   (5.76)

This joint density function is the product of the two marginal PDFs, given by

f_R(r) = \frac{r}{\sigma_X^2}\exp\left[-\frac{1}{2}\frac{r^2}{\sigma_X^2}\right], \quad r > 0   (5.77)

and

f_{\dot{R}}(\dot{r}) = \frac{1}{\sqrt{2\pi}\,\sigma_R}\exp\left[-\frac{1}{2}\frac{\dot{r}^2}{\sigma_R^2}\right], \quad -\infty < \dot{r} < \infty   (5.78)

As a result, R(t) and Ṙ(t) are independent.

5.1.5.1.1.4   Determination of Variance σ²_R  As previously noted, the variance σ²_R should be determined.
The derivative of X(t) is

\dot{X}(t) = \dot{C}(t)\cos(\omega_m t) - C(t)\omega_m\sin(\omega_m t) + \dot{S}(t)\sin(\omega_m t) + S(t)\omega_m\cos(\omega_m t)   (5.79)

Equation 5.79 shows a linear combination of four independent processes. The variance of Ẋ(t) is determined by

\sigma_{\dot{X}}^2 = \sigma_R^2 + \omega_m^2 \sigma_X^2   (5.80)

The above equation can be rewritten as

\sigma_R^2 = \sigma_{\dot{X}}^2 - \omega_m^2 \sigma_X^2   (5.81)

By further analysis and substitution of Equation 5.27, this becomes

\sigma_{\dot{X}}^2 = \sigma_X^2 \omega_{0+}^2   (5.82)

so that

\sigma_R^2 = \sigma_X^2 \omega_{0+}^2 - \omega_m^2 \sigma_X^2 = \sigma_X^2\big(\omega_{0+}^2 - \omega_m^2\big)   (5.83)

Substitution of Equations 5.28 and 5.29 into Equation 5.80 results in

\sigma_R^2 = \int_{-\infty}^{\infty}\big(\omega^2 - \omega_m^2\big)\, S_X(\omega)\, d\omega   (5.84)

Equation 5.84 indicates that the term σ²_R can be seen as the moment of the spectral density function S_X(ω) taken about ω_m, the midband frequency of the process. When X(t) approaches its narrow-band limit, σ²_R tends to zero; this indicates that no variation exists. Distinctively, the envelope of the narrow-band process does not vary, thus becoming deterministic; likewise, X(t) reduces to a purely deterministic sine wave.

5.1.5.1.2 Expected Rate of Level Up-Crossing

Finally, the rate of level up-crossing for the envelope can be stated as follows. Substitution of the joint PDF of R(t) and Ṙ(t) into the formula of level up-crossing, Equation 5.23, results in

v_{R=a+} = \int_0^{\infty} \dot{r}\, f_{R\dot{R}}(a, \dot{r})\, d\dot{r} = \int_0^{\infty} \frac{a\dot{r}}{\sqrt{2\pi}\,\sigma_X^2 \sigma_R}\exp\left\{-\frac{1}{2}\left[\frac{a^2}{\sigma_X^2} + \frac{\dot{r}^2}{\sigma_R^2}\right]\right\} d\dot{r}   (5.85)

Consequently,

v_{R=a+} = \frac{a\sigma_R}{\sqrt{2\pi}\,\sigma_X^2}\exp\left[-\frac{1}{2}\frac{a^2}{\sigma_X^2}\right]   (5.86)

To eliminate σ_R, Equation 5.83 is substituted into Equation 5.86, producing the rate

v_{R=a+} = \frac{a}{\sqrt{2\pi}\,\sigma_X}\sqrt{\omega_{0+}^2 - \omega_m^2}\,\exp\left[-\frac{1}{2}\frac{a^2}{\sigma_X^2}\right]   (5.87)

5.1.5.1.3 Average Clump Size


In using Equation 5.87, the expected rate of envelope crossing of R(t) and that of the original Gaussian process X(t) can be compared through their ratio, which is referred to as the average clump size (R.H. Lyon, 1961), given by

cs(a) = \frac{v_{X=a+}}{v_{R=a+}}   (5.88)

Substitution of the two crossing rates given by Equations 5.24 and 5.87 into Equation 5.88 yields

cs(a) = \frac{\omega_{0+}\sigma_X}{\sqrt{2\pi}\, a\, \sqrt{\omega_{0+}^2 - \omega_m^2}} = \frac{\sigma_X}{a\sqrt{2\pi}\sqrt{1 - \omega_m^2/\omega_{0+}^2}}   (5.89)

The average clump size can be used to estimate the waiting time of envelope crossing. It is seen that

E[time of first crossing of x = a by R(t)] = ⟨cs(a)⟩ E[time of first crossing of x = a by X(t)]   (5.90)

Equation 5.90 shows that if the clump size is large, there will be a significant waiting time.
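As a quick numerical illustration of Equation 5.89 (a sketch; all parameter values below are hypothetical):

```python
import math

# Hypothetical narrow-band parameters
sigma_x = 1.0              # RMS of the process
w0 = 2 * math.pi * 2.0     # zero up-crossing angular frequency, rad/s
wm = 2 * math.pi * 1.9     # midband angular frequency, rad/s
a = 2.0                    # crossing level

cs = sigma_x / (a * math.sqrt(2 * math.pi) * math.sqrt(1 - (wm / w0)**2))  # Eq. 5.89
print(cs)   # the clump size grows as wm approaches w0 (narrower band)
```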

5.2 Extrema
In the second section of this chapter, the extreme values of certain random processes
are considered. By collecting the extreme values, which are no longer processes
but variables, and determining the corresponding distributions, the 3D problems are
reduced to 2D distributions. Namely, the targets are the probability density functions
for specific cases.

5.2.1 Distribution of Peak Values


5.2.1.1 Simplified Approach
5.2.1.1.1 General Process
To study the phenomena of peaks, consider the case of double line crossing (see
Powell 1958 and Wirsching et al. 2006).
As seen in Figure 5.12, crossing the first level

x = z   (5.91)

is denoted as event A, which has probability P(A), where

v_{z+}\, dt = P(A)   (5.92)


Figure 5.12  Double line crossing.



In crossing the second level,

x = z + \Delta z   (5.93)

E[\text{rate of peaking in } (z, z + \Delta z)] = v_{z+} - v_{(z+\Delta z)+}   (5.94)

\lim_{\Delta z\to 0}\big(v_{z+} - v_{(z+\Delta z)+}\big) = -\frac{d}{dz} v_{z+}\, dz   (5.95)

E[\text{rate of peaking in } (z, z + \Delta z)] = E[\text{total rate of peaking}]\cdot P[\text{peak in } (z, z + \Delta z)]   (5.96)

\lim_{\Delta z\to 0} P[\text{peak in } (z, z + \Delta z)] = f_Z(z)\, dz   (5.97)

Combining Equations 5.95 and 5.97 results in

f_Z(z)\, dz = \frac{-\dfrac{d}{dz} v_{z+}\, dz}{v_p}

Thus, simplifying the above equation produces

f_Z(z) = \frac{-\dfrac{d}{dz} v_{z+}}{v_p}   (5.98)

5.2.1.1.2 Narrow-Band Process
5.2.1.1.2.1   General Narrow-Band Process  If the process is narrow band, the peaking frequency can be replaced by the zero up-crossing frequency:

f_Z(z) = \frac{-\dfrac{d}{dz} v_{z+}}{v_{0+}}   (5.99)

5.2.1.1.2.2   Gaussian Narrow-Band Process, PDF of Peaks  If X(t) is narrow-band Gaussian, replace a with the variable z in Equation 5.24 and take the derivative with respect to z. Furthermore, substituting Equation 5.26 into the result, we can write the PDF of the peaks as

f_Z(z) = \frac{z}{\sigma_X^2}\exp\left[-\frac{1}{2}\frac{z^2}{\sigma_X^2}\right], \quad z > 0   (5.100)

Note that Equation 5.100 is a Rayleigh distribution.



5.2.1.1.2.3   Gaussian Narrow-Band Process, PDF of Height of the Rise  The height of the rise,

H = 2Z   (5.101)

can be calculated for a zero-mean narrow-band process by the corresponding substitution in Equation 5.100; the PDF of the height is given by

f_H(h) = \frac{h}{4\sigma_X^2}\exp\left[-\frac{1}{2}\frac{h^2}{4\sigma_X^2}\right], \quad h > 0   (5.102)

5.2.1.2 General Approach
5.2.1.2.1 Conditional Probability
Recall the conditional probability:

P(B \mid C) = \frac{P(B \cap C)}{P(C)}   (5.103)

P(\text{peak} = Z \mid X(t)\ \text{is a peak}) = \frac{P[(\text{peak} = Z) \cap (X(t)\ \text{is a peak})]}{P(X(t)\ \text{is a peak})}   (5.104)

5.2.1.2.2  Events in Terms of X(t), Ẋ(t), and Ẍ(t)

Conditions for a peak of any magnitude are the same as for a zero down-crossing of Ẋ(t); specifically, at the start of this interval:

1. \dot{X}(t) > 0   (5.105)
2. \ddot{X}(t) < 0   (5.106)
3. Correspondingly, at the end of the interval,

\dot{X}(t + dt) < 0   (5.107)

In terms of Ẍ(t), the following is true:

\dot{X}(t) + \ddot{X}(t)\, dt < 0   (5.108)

or

\dot{X}(t) < 0 - \ddot{X}(t)\, dt   (5.109)



Let C denote the event of having a peak of any magnitude; then P(C) can be written as the combination of the above statements. Specifically,

P(C) = P\big\{\big[0 < \dot{X}(t) < 0 - \ddot{X}(t)\,dt\big] \cap \big[\ddot{X}(t) < 0\big]\big\}   (5.110)

Let B denote the additional constraint that X(t) = Z, that is,

z < X(t) \le z + dz   (5.111)

Recognizing B to be a subset of C, the joint probability is

P(B \cap C) = P\big\{[z < X(t) \le z + dz] \cap \big[0 < \dot{X}(t) < 0 - \ddot{X}(t)\,dt\big] \cap \big[\ddot{X}(t) < 0\big]\big\}   (5.112)

5.2.1.2.3 General Application
Denote the joint PDF of X(t), Ẋ(t), and Ẍ(t) by f_{XẊẌ}(u, v, w). Then

P(\text{peak} = Z \mid X(t)\ \text{is a peak}) = f_Z(z)\,dz = \frac{\displaystyle\int_{-\infty}^{0}\int_{0}^{0-w\,dt}\int_{z}^{z+dz} f_{X\dot{X}\ddot{X}}(u, v, w)\, du\, dv\, dw}{\displaystyle\int_{-\infty}^{0}\int_{0}^{0-w\,dt}\int_{-\infty}^{\infty} f_{X\dot{X}\ddot{X}}(u, v, w)\, du\, dv\, dw}

= \frac{\displaystyle\int_{-\infty}^{0} dz\,(-w\,dt)\, f_{X\dot{X}\ddot{X}}(z, 0, w)\, dw}{\displaystyle\int_{-\infty}^{0} (-w\,dt)\, f_{\dot{X}\ddot{X}}(0, w)\, dw} = \frac{\displaystyle\int_{-\infty}^{0} (-w)\, f_{X\dot{X}\ddot{X}}(z, 0, w)\, dw}{\displaystyle\int_{-\infty}^{0} (-w)\, f_{\dot{X}\ddot{X}}(0, w)\, dw}\, dz   (5.113)

Because the resulting denominator is the peaking frequency v_p, dz can be divided out on both sides, resulting in

f_Z(z) = \frac{\displaystyle\int_{-\infty}^{0} -w\, f_{X\dot{X}\ddot{X}}(z, 0, w)\, dw}{v_p}   (5.114)

Equation 5.114 is a workable solution for zero-mean process with arbitrary band-
width and distribution.

5.2.1.2.4 Gaussian
The joint PDF f_{XẊẌ}(u, v, w) is, in general, quite complex. However, in the event that the displacement X(t) is Gaussian, the velocity and the acceleration will also be Gaussian. Additionally, supposing each of the three processes is independent, the joint PDF can be simplified to f_{XẊẌ}(u, v, w) = f_X(u) f_Ẋ(v) f_Ẍ(w). Substitution of this PDF into Equation 5.114 yields

f_Z(z) = \frac{\sqrt{1-\alpha^2}}{\sqrt{2\pi}\,\sigma_X}\exp\left[-\frac{1}{2}\frac{z^2}{(1-\alpha^2)\sigma_X^2}\right] + \alpha\,\frac{z}{\sigma_X^2}\,\Phi\!\left(\frac{\alpha z}{\sqrt{1-\alpha^2}\,\sigma_X}\right)\exp\left[-\frac{1}{2}\frac{z^2}{\sigma_X^2}\right], \quad -\infty < z < \infty   (5.115)

Here, α is the irregularity factor and Φ(·) is the cumulative distribution function of the standard normal distribution.
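Equation 5.115 is straightforward to evaluate numerically. The sketch below (standard library plus NumPy; the chosen σ_X and α are hypothetical) also checks that the PDF integrates to unity and collapses to the Rayleigh form of Equation 5.100 as α → 1:

```python
import math
import numpy as np

def peak_pdf(z, sigma_x, alpha):
    """Peak PDF of Equation 5.115 for a zero-mean Gaussian process.

    A direct transcription (a sketch; sigma_x and alpha are user-supplied):
    the first term is the wide-band Gaussian part, the second the
    Rayleigh-like part weighted by the irregularity factor alpha.
    """
    u = z / sigma_x
    phi_cdf = 0.5 * (1.0 + math.erf(alpha * u / math.sqrt(2.0 * (1.0 - alpha**2))))
    term1 = (math.sqrt(1.0 - alpha**2) / (math.sqrt(2.0 * math.pi) * sigma_x)
             * math.exp(-0.5 * u**2 / (1.0 - alpha**2)))
    term2 = alpha * z / sigma_x**2 * phi_cdf * math.exp(-0.5 * u**2)
    return term1 + term2

# Sanity checks: the PDF integrates to ~1 over the real line, and
# alpha -> 1 recovers the Rayleigh PDF of Equation 5.100.
zs = np.linspace(-5, 8, 2601)
vals = [peak_pdf(z, 1.0, 0.6) for z in zs]
print(np.trapz(vals, zs))                                   # ~ 1.0
print(peak_pdf(2.0, 1.0, 0.999999), 2.0 * math.exp(-2.0))   # ~ Rayleigh value
```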

5.2.2 Engineering Approximations
5.2.2.1 Background
In the above discussion, it was assumed that the PDF was known. Realistically, this
assumption is seldom true. When the PDF is not known, the approximate distribu-
tions are then used (see Rice [1964], Yang [1974], Krenk [1978], and Tayfun [1981]).
Random rises and falls are useful in studying fatigue problems, but the corresponding exact PDFs are in most cases unknown. Fortunately, the average rise can be easily obtained. In this subsection, the issue of rise and fall will be examined. For engineering applications, a certain percentage of error may be tolerated to find workable solutions. In the following, comparatively loose assumptions are made.

5.2.2.1.1 Basic Concept
The rise H_i is the difference between X(t) at a valley, denoted by X(t_i), and X(t) at the next peak, denoted by X(t_i′). The value of h_i can be expressed as (see Figure 5.13)

h_i = \max X(t_i') - \min X(t_i)   (5.116)


Figure 5.13  Peaks and valleys.



In this case, t_i is used to denote the event when X(t) reaches a valley. The height is once more denoted by H:

H = \{h_i\}   (5.117)

Here, H is a random set.
For a stationary process, the PDF of the rise and fall is a symmetric function. Typically, the peak distribution of a Gaussian process is known, but the PDF of the rise is unknown. Determining the time of the next peak is a first-passage time problem, given

\dot{X}(0) = 0   (5.118)

Finding the next zero crossing is more difficult.

5.2.2.1.2 Average Rise
In a duration Δt, the average distance traveled by |X(t)| is equal to E|Ẋ(t)|Δt. This distance is also equal to the average height 2μ_H times the number of ups and downs, denoted by v_pΔt. This can be expressed as follows:

\mu_H = \frac{E\big|\dot{X}(t)\big|}{2 v_p}   (5.119)

It can be proven that if the process is Gaussian, then the average height is

\mu_H = \alpha\sqrt{2\pi}\,\sigma_X   (5.120)
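Equation 5.120 can also be checked empirically. The sketch below (hypothetical narrow-band record; α is estimated from the record itself via Equation 5.41) compares the mean valley-to-peak rise with α√(2π)σ_X:

```python
import numpy as np

rng = np.random.default_rng(3)
dt = 0.002
t = np.arange(0.0, 400.0, dt)
x = np.zeros(t.size)
# Hypothetical band of random-phase harmonics around 2 Hz (approximately Gaussian)
for fk, ph in zip(rng.uniform(1.5, 2.5, 80), rng.uniform(0, 2 * np.pi, 80)):
    x += 0.158 * np.cos(2 * np.pi * fk * t + ph)

peaks = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
valleys = np.where((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:]))[0] + 1
n_zero_up = np.sum((x[:-1] < 0) & (x[1:] >= 0))
alpha = n_zero_up / peaks.size            # irregularity factor, Equation 5.41

# Pair each valley with the next peak and measure the rise
idx = np.searchsorted(peaks, valleys)
ok = idx < peaks.size
rises = x[peaks[idx[ok]]] - x[valleys[ok]]

print(rises.mean(), alpha * np.sqrt(2 * np.pi) * x.std())  # should be comparable
```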

5.2.2.1.3 Shape Function
Now, consider the trajectory between a valley and the subsequent peak, referred to as
the shape or trajectory function. With a simple shape function, the analysis becomes
easier.

5.2.2.1.3.1   Displacement Trajectory  The trajectory function is assumed to be


sinusoidal, which is one of the simplest functions but can be a good approximation
of the trajectory path, especially for narrow-band processes, that is,

\Psi(t) = -\frac{H}{2}\cos(\omega_p t)   (5.121)

where ωp is the frequency of the assumed sinusoidal fluctuation of X(t). Note that in
this case,

H > 0

5.2.2.1.3.2   Velocity  To solve for the velocity, the derivative of Equation 5.121 is taken:

\dot{\Psi}(t) = \omega_p\frac{H}{2}\sin(\omega_p t)   (5.122)

At the valley, the conditional distribution of X(t) is f_Z(−z), and

\dot{X}(t_i) = 0   (5.123)

5.2.2.1.3.3   Acceleration  In addition, the acceleration can be calculated as

\ddot{\Psi}(t) = \omega_p^2\frac{H}{2}\cos(\omega_p t) = A\cos(\omega_p t)   (5.124)

Here,

A = \frac{\omega_p^2 H}{2}   (5.125)

where A is a random variable representing the amplitude of the acceleration.

5.2.2.2 Probability Distributions of Height, Peak, and Valley


In the following, the distributions of the height, peak, and valley are considered, based on the above-mentioned simplified shape function.

5.2.2.2.1  PDF of Ẍ(t_i)
5.2.2.2.1.1   PDF of A, General Process  The distribution of Ẍ(t_i), the acceleration at the valley, can be found by using the same approach as used in Section 5.1. For this approach, first define the conditions for X(t_i), then calculate the PDF of Ẍ(t_i).
It can be proven that

f_A(a)\, da = \frac{\displaystyle\int_a^{a+da}\int_{0-w\,dt}^{0}\int_{-\infty}^{\infty} f_{X\dot{X}\ddot{X}}(u, v, w)\, du\, dv\, dw}{\displaystyle\int_0^{\infty}\int_{0-w\,dt}^{0}\int_{-\infty}^{\infty} f_{X\dot{X}\ddot{X}}(u, v, w)\, du\, dv\, dw}, \quad a > 0   (5.126)

Thus, in simplifying,

f_A(a) = \frac{a\, f_{\dot{X}\ddot{X}}(0, a)}{v_p}, \quad a > 0   (5.127)

The PDF of A for a Gaussian process is

f_A(a) = \frac{a}{\sigma_{\ddot{X}}^2}\, e^{-\frac{a^2}{2\sigma_{\ddot{X}}^2}}, \quad a > 0   (5.128)

Note that Equation 5.128 is a Rayleigh distribution with σ_Ẍ as the parameter σ; for further explanation, refer to Equation 1.94. This results in

\sigma_{\ddot{X}} = \sigma   (5.129)

From Equation 5.125,

H = \frac{2A}{\omega_p^2}   (5.130)

Thus,

f_H(h) = \frac{h}{\theta_H^2}\, e^{-\frac{h^2}{2\theta_H^2}}, \quad h > 0   (5.131)

where

\theta_H = \frac{2\sigma_{\ddot{X}}}{\omega_p^2}   (5.132)

The above parameter can also be written as

\theta_H = 2\sigma_X\alpha   (5.133)

Furthermore, f_H(h) can be written as

f_H(h) = \frac{h}{(2\sigma_X\alpha)^2}\, e^{-\frac{h^2}{2(2\sigma_X\alpha)^2}}, \quad h > 0   (5.134)

For Equation 5.134, when

\alpha = 1   (5.135)

which is the case of a narrow-band process, refer to Equation 5.102.

5.2.2.2.2 Joint Distribution of Height, Peak, and Valley

5.2.2.2.2.1   Peak and Valley  As seen in Figure 5.13, the random variables valley V and peak P can be denoted explicitly as

V = {vi} (5.136)

and

P = {pi} (5.137)

The midpoint between adjoining valleys and peaks is denoted by

M = 1/2(V + P) = {1/2(vi + pi)} = {mi} (5.138)

For convenience, the subscript i will be omitted in the following equations: con-
sider the example in which the joint distribution of the height, peak, and valley can
be used to count the fatigue cycles that rise above a floor level set by a crack opening
stress (Perng 1989).

5.2.2.2.2.2   Joint PDF of H and V

f_{HV}(h, v) = \frac{1}{\sqrt{2\pi(1-\alpha^2)}\,\sigma_X}\exp\left[-\frac{1}{2}\frac{(v + h/2)^2}{(1-\alpha^2)\sigma_X^2}\right]\frac{h}{4\alpha^2\sigma_X^2}\exp\left[-\frac{1}{2}\frac{h^2}{4\alpha^2\sigma_X^2}\right]
= \frac{1}{\sqrt{1-\alpha^2}\,\sigma_X}\,\varphi\!\left(\frac{v + h/2}{\sqrt{1-\alpha^2}\,\sigma_X}\right)\frac{h}{4\alpha^2\sigma_X^2}\exp\left[-\frac{1}{2}\frac{h^2}{4\alpha^2\sigma_X^2}\right], \quad 0 < h < \infty,\ -\infty < v < \infty   (5.139)

where φ(·) denotes the standard normal density function.
5.2.2.2.2.3   Joint PDF of H and P


1  1 ( p − h / 2)2  h  1 h2 
f HP (h, p) = exp  −  2 2 exp − =
 2 (1 − α )σ X  4α σ X  2 4α σ X
2 2 2 2
2π(1 − α 2 ) σ X 

1  p − h/2  h  1 h2 
Φ  exp − 2 2 
, 0 < h < ∞, −∞ < p < ∞
1 − α 2 σ X  1 − α 2 σ X  4α σ X  2 4α σ X 
2 2

(5.140)

5.2.2.2.2.4   Joint PDF of H and M

f_{HM}(h, m) = \frac{1}{\sqrt{2\pi(1-\alpha^2)}\,\sigma_X}\exp\left[-\frac{1}{2}\frac{m^2}{(1-\alpha^2)\sigma_X^2}\right]\frac{h}{4\alpha^2\sigma_X^2}\exp\left[-\frac{1}{2}\frac{h^2}{4\alpha^2\sigma_X^2}\right]
= \frac{1}{\sqrt{1-\alpha^2}\,\sigma_X}\,\varphi\!\left(\frac{m}{\sqrt{1-\alpha^2}\,\sigma_X}\right)\frac{h}{4\alpha^2\sigma_X^2}\exp\left[-\frac{1}{2}\frac{h^2}{4\alpha^2\sigma_X^2}\right], \quad 0 < h < \infty,\ -\infty < m < \infty   (5.141)

Notice that the term

\frac{1}{\sqrt{1-\alpha^2}\,\sigma_X}\,\varphi\!\left(\frac{m}{\sqrt{1-\alpha^2}\,\sigma_X}\right)   (5.142)

does not contain the variable h. Furthermore, the term

\frac{h}{4\alpha^2\sigma_X^2}\exp\left[-\frac{1}{2}\frac{h^2}{4\alpha^2\sigma_X^2}\right]   (5.143)

does not contain the variable m. Hence, H and M are independent.

5.2.2.2.2.5   Joint PDF of P and V

f_{PV}(p, v) = \frac{1}{\sqrt{2\pi(1-\alpha^2)}\,\sigma_X}\exp\left[-\frac{(p + v)^2}{8(1-\alpha^2)\sigma_X^2}\right]\frac{p - v}{4\alpha^2\sigma_X^2}\exp\left[-\frac{(p - v)^2}{8\alpha^2\sigma_X^2}\right]
= \frac{1}{\sqrt{1-\alpha^2}\,\sigma_X}\,\varphi\!\left(\frac{p + v}{2\sqrt{1-\alpha^2}\,\sigma_X}\right)\frac{p - v}{4\alpha^2\sigma_X^2}\exp\left[-\frac{(p - v)^2}{8\alpha^2\sigma_X^2}\right], \quad -\infty < v < p < \infty   (5.144)

In the equation above, each joint PDF is the product of a Gaussian term and a
Rayleigh term.

5.3 Accumulative Damages
In Sections 5.1 and 5.2, we examined certain important statistical properties rather than only the typical mean, variance, and correlation functions. These studies imply that, for certain types of random processes, we can have issues beyond typical statistics. It is known that, for a general process, the previous conclusions may not be sufficiently accurate; they may not even be workable. Therefore, specific processes, such as the stationary Gaussian process, are assumed. In the following, we will use an engineering problem as an example to show that if certain statistical conclusions are needed, we need to select a proper model of the random process. In this case, the Markov process will be used.
Accumulative damages are often seen in engineering structures with repeated
loading, such as vibration displacements or unbalanced forces, which are closely
related to the phenomena of level crossing. Material fatigue is a typical example
of damage accumulation. As mentioned at the beginning of this chapter, the
focus here is given to the time-varying developments of the accumulative dam-
age, instead of the resulting damage itself. In Chapter 10, such damages will

be further analyzed to study the nature of accumulative failures. To discuss the time-varying process, two major approaches are considered: deterministic models and random processes.
The theory based on the deterministic approach assumes that a stress cycle with an alternating stress above the endurance limit inflicts a measurable permanent damage. It also assumes that the total damage caused by a number of stress cycles is equal to the summation of the damages caused by the individual stress cycles.
Although the deterministic approach is simple and widely used with fairly
accepted accuracy, it cannot assess uncertainty in fatigue life and dependency
between the current and future damages (the cascading effect). In this section, we
will first introduce the Markov process, a useful model of accumulative damage.
Then, the problem of fatigue based on the approach of random process will be dis-
cussed. Finally, the concept of cascading damage will be briefly mentioned.
In the literature, many excellent works have been published. Among them, Dowling (1993) summarized basic approaches to fatigue, and Collins (1981) comprehensively reviewed the models used to estimate fatigue life.

5.3.1 Linear Damage Rule: The Deterministic Approach


Let us first review the widely accepted damage rule based on a deterministic
approach for the purpose of comparison.

5.3.1.1 S–N Curves
When a component of a machine or a structure is subjected to high-cycle loading,
although the load level is smaller than its yielding threshold, after certain cycles, it
may fail to take additional loads. The number of cycles is referred to as fatigue life-
time. Such fatigue is called high-cycle fatigue, or simply fatigue.
Generally speaking, S–N curves are used in high-cycle fatigue studies. An S–N curve for a material defines alternating stress values versus the number of duty cycles required to cause failure at a given stress ratio. A typical S–N curve is shown in Figure 5.14. The y axis represents the alternating stress (S) and the x axis represents the number of cycles (N). An S–N curve is based on a stress ratio or mean stress, and one can define multiple S–N curves with different stress ratios for a material. Analysis software typically uses linear interpolation to extract data when multiple S–N curves are defined for a material.
S–N curves are based on mean fatigue life or a given probability of failure.
Generating an S–N curve for a material requires many tests to statistically vary
the alternating stress, mean stress (or stress ratio), and count the number of duty
cycles.

5.3.1.2 Miner’s Rule
In 1945, M.A. Miner popularized a rule that had first been proposed by A. Palmgren
in 1924, which is variously called Miner’s rule or the Palmgren–Miner linear dam-
age hypothesis. Consider the S–N curve shown in Figure 5.14. Suppose that it takes N1 duty cycles at an alternating stress S1 to cause fatigue failure; then the theory states that each cycle causes a damage factor D1 that consumes 1/N1 of the life of the structure.

Figure 5.14  Conceptual S–N curve. (Alternating stress, in MPa, versus number of cycles on a logarithmic scale from 10^4 to 10^7; the horizontal asymptote marks the fatigue strength.)
Moreover, if a structure is subjected to n1 duty cycles at S1 alternating stress and n2 duty cycles at S2 alternating stress, then the total damage factor D is calculated as

D = n1/N1 + n2/N2 (5.145)

where N1 is the number of cycles required to cause failure under S1, and N2 is the number of cycles required to cause failure under S2.
The damage factor D, also called usage factor, represents the ratio of the con-
sumed life of the structure. A damage factor of 0.35 means that 35% of the structure’s
life is consumed. Failure due to fatigue occurs when the damage factor reaches 1.0.
The linear damage rule does not consider the effects of load sequence. In other
words, it predicts that the damage caused by a stress cycle is independent of where
it occurs in the load history. It also assumes that the rate of damage accumulation is
independent of the stress level. Observed behavior indicates that cracks initiate in a
few cycles at high stress amplitudes, whereas almost all the life is spent on initiating
the cracks at low stress amplitudes.
The linear damage rule is used in its simple form when fatigue events are specified as not interacting with each other. When the interaction between events is treated as random, analysis programs typically use the ASME code to evaluate the damage by combining event peaks.
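To make Equation 5.145 concrete, the following is a minimal computational sketch (ours, not from the original text; the load spectrum and S–N values are hypothetical). All later sketches in this chapter use Python in the same spirit.

    # Linear (Palmgren-Miner) damage accumulation: D = sum over blocks of n_i / N_i.
    # Hypothetical load spectrum: (stress level in MPa, applied cycles n_i,
    # cycles to failure N_i read from the S-N curve at that stress).
    load_blocks = [
        (200.0, 1.0e4, 5.0e4),   # severe block
        (100.0, 2.0e5, 1.0e6),   # mild block
    ]

    D = sum(n / N for _, n, N in load_blocks)   # damage (usage) factor
    print(f"Damage factor D = {D:.2f}")          # 0.40, i.e., 40% of life consumed
    print("Fatigue failure predicted" if D >= 1.0 else "No failure predicted")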

5.3.2 Markov Process
To better understand when the accumulative damage occurs as a random process,
let us describe a useful model, the Markov process (Andrey A. Markov, 1856–1922).
A random process whose future states depend on its past history only through the most recent state, the Markov process is one of the most important processes and plays a significant role in engineering applications. The aforementioned Poisson and Wiener processes (Brownian motion) are all Markovian. Many physical phenomena, including communication networks, signal processing, transportation arrangements, structural failures by multiple natural hazards, and others, can be approximated by Markov processes.
In this subsection, we introduce and discuss the basic concepts and properties
of Markov processes, mainly focusing on the discrete Markov chain, which can be
mathematically intensive. However, to study the main objective of this section, accu-
mulative damage, readers can skip these mathematical descriptions and directly con-
sider the resulting conclusions in Sections 5.3.3 and 5.3.4.

5.3.2.1 General Concept
5.3.2.1.1 Definition
A random process X(t), 0 ≤ t ≤ T is said to be a Markov process, if for every n and for
t1 < t2 < … < tn ≤ T, we can have the distribution given by

F(xn, tn∣xn–1, …, x1; tn–1, …, t1) = F(xn, tn∣xn–1, tn–1) (5.146)

If the process is continuous, then the corresponding PDF is given by

f(xn, tn∣xn–1, …, x1; tn–1, …, t1) = f(xn, tn∣xn–1, tn–1) (5.147)

The above equations imply that a Markov process represents a set of trajectories
whose conditional probability distribution at a selected instance, given all past obser-
vations, only depends on the most recent ones. For example, the fatigue damage at a
given time point t2 depends only on the state of time t1; anything before t1, however,
has no influence on the damage level at t2.
Equation 5.146 is equivalent to the following conditional probability

P{X(tn) < xn ∣ X(t1) < x1, X(t2) < x2, …, X(tn−1) < xn−1} = P{X(tn) < xn ∣ X(tn−1) < xn−1} (5.148)

5.3.2.2 Discrete Markov Chain


5.3.2.2.1 Definition
If the process is a discrete random sequence (discrete state, discrete time), it is
referred to as a discrete Markov chain. If the process has continuous time but the
state is discrete, it is called a continuous Markov chain.

Example 5.4

Suppose {X(n), n = 1, 2, …} is a mutually independent random sequence, and

Y(tq) = Σ_{k=1}^{q} Xk,  t1 < t2 < …

Show that Y(t) is a Markov process.
To prove that Y(t) is Markovian, write the sequence in a recursive form given by

Y(tq) = Y(tq−1) + Xq,  q = 2, 3, …

Because the Xq are independent variables, the properties of Y(t) at tq are a function of those at tq−1 only. Therefore, Y(t) is Markovian. This example implies that a mutually independent random process is Markovian. Furthermore, an independent increment random process is also a Markov process. Denote X(t), t > 0 with P{X(0) = 0} = 1. At any tq ≥ 0, we have

X(tq) = X(tq) − X(0) = Σ_{k=1}^{q} [X(tk) − X(tk−1)] = Σ_{k=1}^{q} ΔX(tk−1, tk),  t0 = 0

Note that ΔX(tk−1, tk), k = 1, …, q, are independent random variables. Therefore, the above equation shows that their sum, the independent increment process, is Markovian. The Poisson process is an independent increment process, so it is a Markov process.

5.3.2.2.2 Transition Probability
Suppose {X(n), n = 0, 1, 2, …} is a discrete Markov chain. The following probability

pij(n,k) = P{X(n + k) = j ∣ X(n) = i},  n ≥ 0, k ≥ 1 (5.149)

is called the k-step (kth power) transition probability of {X(n), n = 0, 1, 2, …} at instant n. Furthermore, the k-step transition probability matrix of {X(n), n = 0, 1, 2, …} at instant n is defined as

P(n,k) = [pij(n,k)]i,j∈Ω (5.150)

where

Ω = {x: X(t) = x, 0 < t < T} (5.151)

is called the state space of the Markov process.


Particularly, when k = 1, at time instant n, the one-step transition probability and
transition probability matrix are denoted by pij(n) and P(n).
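As an illustration of the k-step transition probability (our sketch; the two-state matrix below is hypothetical), one can simulate a homogeneous chain and compare a Monte Carlo estimate of pij(k) with the kth matrix power of the one-step matrix (cf. Equation 5.163 below):

    import numpy as np

    rng = np.random.default_rng(0)
    P = np.array([[0.9, 0.1],      # hypothetical one-step transition matrix
                  [0.4, 0.6]])     # states Omega = {0, 1}

    def run_chain(start, k):
        """Advance the homogeneous chain k steps from the given state."""
        s = start
        for _ in range(k):
            s = rng.choice(2, p=P[s])
        return s

    k, trials = 3, 200_000
    hits = sum(run_chain(0, k) == 1 for _ in range(trials))
    print("Monte Carlo estimate of p_01(3):", hits / trials)          # about 0.175
    print("Matrix power P^3[0, 1]:        ", np.linalg.matrix_power(P, k)[0, 1])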

Example 5.5

{X(n), n = 1, 2, …} is an independent random sequence of positive integers, whose distribution is given by

X(k):   1     2     3     …    k
PMF:    pk1   pk2   pk3   …    pkk


Denote

Y(n) = Σ_{k=1}^{n} X(k),  n = 1, 2, …

Show that {Y(n), n = 1, 2, …} is a Markov chain and find its transition probability matrix.
From the equation that defines the sequence Y(n), it is seen that the increment [Y(n) − Y(n − 1)], that is, X(n), and the increment [Y(m) − Y(m − 1)], that is, X(m), are independent because X(n) and X(m) are independent. Therefore, {Y(n), n = 1, 2, …} is an independent increment process, and thus it is a Markov chain.
Furthermore, the entry of the corresponding transition probability matrix can be written as

pij(m, k) = P{Y(m + k) = j | Y(m) = i}
= P{Y(m) = i, Y(m + k) = j}/P{Y(m) = i}
= P{Y(m) = i} P{Y(m + k) − Y(m) = j − i}/P{Y(m) = i}
= P{Σ_{r=m+1}^{m+k} X(r) = j − i}
= Σ_{i1+i2+⋯+ik = j−i} p_{m+1,i1} p_{m+2,i2} ⋯ p_{m+k,ik}

where

m ≤ i ≤ m(m + 1)/2,  m + k ≤ j ≤ (m + k)(m + k + 1)/2,  j − i ≥ k

5.3.2.2.3 Probability Distribution
The initial distribution of a discrete Markov chain {X(n), n = 0, 1, 2, …} is denoted by P̃0, given by

P̃0 = {pi = P[X(0) = i], i ∈ Ω} (5.152)

In addition, the absolute distribution of a discrete Markov chain {X(n), n = 0, 1, 2, …} is denoted by P̃n, given by

P̃n = {pj = P[X(n) = j], j ∈ Ω} (5.153)

5.3.2.2.4 Homogeneity
If the one-step transition probability pij(n) of a discrete Markov chain {X(n), n = 0, 1, 2, …} is not related to the initial time n, then such a discrete Markov chain is homogeneous; its k-step transition probability and transition probability matrix are denoted by pij(k) and P(k), respectively, and the corresponding one-step transition probability matrix is denoted by P.

Example 5.6

{X(n), n = 1, 2, …} is a random sequence with independent and identical distributions, and

Y(n) = Σ_{k=1}^{n} X(k)

Show that the following {Y(n), n = 1, 2, …} are homogeneous Markov chains.

1. {X(n), n = 1, 2, …} is a Bernoulli random sequence with P{X(n) = 0} = q, P{X(n) = 1} = p, 0 < q < 1, p + q = 1, n = 1, 2, …, and
2. X(n) ~ N(μ, σ²), n = 1, 2, …

First, for question (1), consider any instants 0 < m1 < m2 < … < mn and 0 ≤ i1 ≤ i2 ≤ … ≤ in satisfying ik ≤ ik+1 ≤ ik + mk+1 − mk for any 1 ≤ k ≤ n − 1. Then

P{Y(mn) = in | Y(m1) = i1, …, Y(mn−1) = in−1}
= P{Σ_{k=1}^{mn} X(k) = in | Σ_{k=1}^{m1} X(k) = i1, …, Σ_{k=1}^{mn−1} X(k) = in−1}
= P{Σ_{k=mn−1+1}^{mn} X(k) = in − in−1 | Σ_{k=1}^{m1} X(k) = i1, Σ_{k=m1+1}^{m2} X(k) = i2 − i1, …, Σ_{k=mn−2+1}^{mn−1} X(k) = in−1 − in−2}
= P{Σ_{k=mn−1+1}^{mn} X(k) = in − in−1}

and, similarly,

P{Y(mn) = in | Y(mn−1) = in−1} = P{Σ_{k=1}^{mn} X(k) = in | Σ_{k=1}^{mn−1} X(k) = in−1} = P{Σ_{k=mn−1+1}^{mn} X(k) = in − in−1}

The above equations imply that

P{Y(mn) = in | Y(m1) = i1, …, Y(mn−1) = in−1} = P{Y(mn) = in | Y(mn−1) = in−1}

Therefore, {Y(n), n = 1, 2, …} is a Markov chain, with transition probability

pij(n, k) = P{Y(n + k) = j | Y(n) = i} = P{Y(n + k) − Y(n) = j − i | Y(n) = i}
= P{Y(k) = j − i} = P{Σ_{m=1}^{k} X(m) = j − i}

It is seen that this probability is not related to n; therefore, {Y(n), n = 1, 2, …} is a homogeneous Markov chain.
Second, for question (2), consider any instants 0 < m1 < m2 < … < mn and i1 ≤ i2 ≤ … ≤ in ∈ Ω (taking μ = 0, consistent with the final expression below). Because the increments Σ_{k=mq−1+1}^{mq} X(k) are mutually independent, the same argument as above gives

P{Y(mn) < in | Y(m1) = i1, …, Y(mn−1) = in−1}
= P{Σ_{k=1}^{mn} X(k) < in | Σ_{k=1}^{m1} X(k) = i1, …, Σ_{k=1}^{mn−1} X(k) = in−1}
= P{Σ_{k=mn−1+1}^{mn} X(k) < in − in−1}
= ∫_{−∞}^{in − in−1} f(xn) dxn

where f(·) is the PDF of Σ_{k=mn−1+1}^{mn} X(k). Furthermore, we have

P{Y(mn) < in | Y(mn−1) = in−1}
= P{Σ_{k=1}^{mn} X(k) < in | Σ_{k=1}^{mn−1} X(k) = in−1}
= P{Σ_{k=mn−1+1}^{mn} X(k) < in − in−1}
= ∫_{−∞}^{in − in−1} f(xn) dxn
Therefore, {Y(n), n = 1, 2, …} is a Markov chain. In addition, we have

P{Y(n + m) < i | Y(n) = j} = P{Σ_{k=n+1}^{n+m} X(k) < i − j | Σ_{k=1}^{n} X(k) = j}
= P{Σ_{k=n+1}^{n+m} X(k) < i − j}
= ∫_{−∞}^{i−j} [1/(√(2πm)σ)] e^{−x²/(2mσ²)} dx
= Φ((i − j)/(σ√m))

It is thus seen that this probability does not relate to the starting time point n; there-
fore, {Y(n), n = 1, 2, …} is a homogeneous Markov process.

5.3.2.2.5 Ergodicity
For a homogeneous discrete Markov chain {X(n), n = 0, 1, 2, …}, if for any states i, j ∈ Ω there exists a limit independent of i, such that

lim_{n→∞} pij(n) = πj > 0,  i, j ∈ Ω (5.154)

then such a Markov chain is ergodic.


It is seen that a discrete Markov chain can be defined by its state probabilities and transition probabilities. The probability distribution of the states is described by the vector

p = [π1, π2, …, πj, …],  j ∈ Ω (5.155a)

For convenience, Equation 5.155a can also be denoted as

p = {πj, j ∈ Ω} (5.155b)

Note that, based on the nature of probability,

Σ_{j∈Ω} πj = 1 (5.156)

that is, if the vector p denotes a probability distribution, it must satisfy Equation 5.156.

5.3.2.2.6 Stationary Distribution of a Homogeneous Discrete Markov Chain
A homogeneous discrete Markov chain, {X(n), n = 0, 1, 2, …}, is stationary, provided that there exists {vj, j ∈ Ω} such that

1. vj ≥ 0 (5.157)
2. Σ_{j∈Ω} vj = 1 (5.158)
3. vj = Σ_{i∈Ω} vi pij (5.159)

In this case, the vector

v = {vj, j ∈ Ω} (5.160)

is the stationary distribution of this Markov chain, and we can write

vP = v (5.161)

5.3.2.2.7 Main Properties of a Homogeneous Discrete Markov Chain

1. C–K equation. For the Markov chain, the Chapman–Kolmogorov (C–K) equation is an identity relating the joint probability distributions of different sets of coordinates on a random process (Sydney Chapman, 1888–1970), given by

pij(k + q) = Σ_{r∈Ω} pir(k) prj(q) (5.162)

2. P(n) = P^n (5.163)

3. The absolute distribution is determined by the initial distribution and the transition probability, and the following equation is satisfied

pj(n) = Σ_{i∈Ω} pi pij(n) (5.164)

Equation 5.164 can be expressed in matrix form, that is,

P̃n = P̃0 P^n (5.165)

4. The finite-dimensional distributions can be determined by the initial distribution and the transition probability, and the following equation is satisfied

P{X(n1) = i1, X(n2) = i2, …, X(nk) = ik} = Σ_{i∈Ω} pi p_{i i1}(n1) p_{i1 i2}(n2 − n1) ⋯ p_{ik−1 ik}(nk − nk−1) (5.166)

5. Suppose the state space Ω = {1, 2, …, s} of a homogeneous discrete Markov chain is a finite set. If there exists an integer n0 > 0, such that for any i, j ∈ Ω, we have

pij(n0) > 0 (5.167)

then this Markov chain is ergodic, and its limit distribution πj (j ∈ Ω) is the unique solution of the equation

πj = Σ_{i=1}^{s} πi pij,  j = 1, 2, …, s (5.168)

under the conditions

πj > 0,  j = 1, 2, …, s (5.169)

and

Σ_{i=1}^{s} πi = 1 (5.170)

6. The limit distribution of an ergodic homogeneous discrete Markov chain is stationary.

7. Denote the stationary limit distribution of an ergodic homogeneous discrete Markov chain by v = {vj, j ∈ Ω}. For any integer n, we have

v = vP^n (5.171)

Example 5.7

Ω = {1, 2, 3} is the state space of the homogeneous Markov chain {X(n), n = 1, 2, …}, which has the transition probability matrix

P = [ 1/2  1/3  1/6
      1/3  1/3  1/3
      1/3  1/2  1/6 ]

and with the initial distribution being

X(0):  1    2    3
p:     2/5  2/5  1/5


1. Calculate the two-step transition probability matrix.
2. Find the probability distribution of X(2).
3. Find the stationary distribution.

1. P² = [ 5/12  13/36  2/9
          7/18   7/18  2/9
          7/18  13/36  1/4 ]

2. P̃2 = P̃0 P² = (2/5, 67/180, 41/180)

3. Because

v = vP

and

Σ vi = 1

the stationary distribution is

v = (2/5, 13/35, 8/35)
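The numbers above are easily verified numerically; a short sketch of ours using numpy:

    import numpy as np

    P = np.array([[1/2, 1/3, 1/6],
                  [1/3, 1/3, 1/3],
                  [1/3, 1/2, 1/6]])
    p0 = np.array([2/5, 2/5, 1/5])

    P2 = np.linalg.matrix_power(P, 2)   # two-step matrix, Equation 5.163
    print(P2)                           # rows: [5/12 13/36 2/9], [7/18 7/18 2/9], [7/18 13/36 1/4]
    print(p0 @ P2)                      # distribution of X(2): (2/5, 67/180, 41/180)

    # Stationary distribution: solve v = vP together with sum(v) = 1
    A = np.vstack([P.T - np.eye(3), np.ones(3)])
    b = np.array([0.0, 0.0, 0.0, 1.0])
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(v)                            # [0.4, 0.3714..., 0.2285...] = (2/5, 13/35, 8/35)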

5.3.3 Fatigue
5.3.3.1 High-Cycle Fatigue
With the help of the above-mentioned discrete Markov chain, let us consider the
process of accumulative damage, failure due to a material’s high-cycle fatigue (see
Soong and Grigoriu, 1992).
Suppose that, during a fatigue test, a sufficient number of specimens are used
under cyclic loading. Assume that the damage probability can be described by
Equation 5.155 but, in this case, rewritten as

p0 = [π1, π2, …, πj, … πf–1, 0]1×f (5.172)

where the term πj is the probability that the test specimens are in damage state j at time zero, and the probability of being in the failure state at the initial time is assumed to be zero. It is seen that these probabilities satisfy Equation 5.156, with the dimension 1 × f, where f denotes the final failure state and the states can be denoted by

Ω = {1, 2, …, f} (5.173)

Having denoted the state probabilities at time zero, let us further denote those at time x by

px = [πx(1), πx(2), …, πx(j), …, πx(f)],  x = 0, 1, 2, … (5.174)

It is seen that

Σ_{j=1}^{f} πx(j) = 1 (5.175)

Based on Equation 5.165, we have

px = p0 P^x (5.176)

with P being the one-step transition probability matrix. Now, assuming that the damage state does not change (i.e., does not increase) by more than one unit during a duty cycle, the transition probability matrix P can be written as

P = [ π1  1−π1  0     0     ⋯  0      0
      0   π2    1−π2  0     ⋯  0      0
      0   0     π3    1−π3  ⋯  0      0
      ⋮                               ⋮
      0   0     0     0     ⋯  πf−1   1−πf−1
      0   0     0     0     ⋯  0      1     ] (5.177)

It is noted that this is a homogeneous Markov chain because the transition probability matrix P is time-invariant; note also that every πj in P is smaller than 1 but greater than 0. (The final state f is absorbing, so the last row has a single unit entry.)
Furthermore, the cumulative distribution function of the time Tf to the failure state f is πx(f), that is,

F_{Tf}(x) = P{Tf ≤ x} = πx(f) (5.178)

When the time becomes sufficiently long, namely, letting x → ∞, the failure probability approaches unity, that is,

lim_{x→∞} πx(f) = 1 (5.179)

Let us denote the reliability function as

R_{Tf}(x) = 1 − F_{Tf}(x) (5.180)

The mean and standard deviation of the fatigue problem can be calculated as

μ_{Tf} = E[Tf] = Σ_{x=0}^{∞} R_{Tf}(x) (5.181)

and

σ_{Tf} = {E[Tf²] − μ²_{Tf}}^{1/2} = {2 Σ_{x=0}^{∞} x R_{Tf}(x) + μ_{Tf} − μ²_{Tf}}^{1/2} (5.182)

We can also directly calculate the mean and standard deviation in terms of the quantity f and the probabilities πq and 1 − πq. Suppose all new specimens start with probability 1 in the initial state 1, that is,

π1 = 1, π2 = 0, π3 = 0, … (5.183)

The fatigue failure time Tf∣1 can then be written as

Tf∣1 = T1 + T2 + ⋯ + Tq + ⋯ + Tf−1 (5.184)

where Tq stands for the time (in duty cycles) spent in state q. It can be proven that all Tq are mutually independent with the following distribution

P{Tq = x} = (1 − πq) πq^{x−1},  q = 1, 2, …, f − 1,  x = 1, 2, … (5.185)

The mean and standard deviation of the fatigue problem can be calculated based on the distribution described in Equation 5.185. It can be proven that

μ_{Tf∣1} = E[Tf∣1] = f − 1 + Σ_{i=1}^{f−1} ri (5.186)

and

σ_{Tf∣1} = {E[T²f∣1] − μ²_{Tf∣1}}^{1/2} = {Σ_{i=1}^{f−1} ri(1 + ri)}^{1/2} (5.187)

where the ratio is defined as

ri = πi/(1 − πi) (5.188)

When the ratio becomes constant, that is, ri = r, we will have the simplest case as follows:

μ_{Tf∣1} = (f − 1)(1 + r) (5.189)

and

σ_{Tf∣1} = [(f − 1) r (1 + r)]^{1/2} (5.190)
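Equations 5.185 through 5.190 lend themselves to a quick numerical check. In the sketch below (ours; the state-holding probabilities πq are hypothetical), the analytic mean and standard deviation are compared with a Monte Carlo simulation that draws each Tq from the geometric distribution of Equation 5.185:

    import numpy as np

    rng = np.random.default_rng(1)
    pi = np.array([0.9, 0.8, 0.7, 0.6])    # hypothetical holding probabilities; f = 5
    f = len(pi) + 1
    r = pi / (1.0 - pi)                     # Equation 5.188

    mu = (f - 1) + r.sum()                  # Equation 5.186
    sigma = np.sqrt((r * (1.0 + r)).sum())  # Equation 5.187
    print(f"analytic:  mean = {mu:.2f}, std = {sigma:.2f}")

    # Each Tq is geometric: P{Tq = x} = (1 - pi_q) * pi_q**(x - 1), x = 1, 2, ...
    T = sum(rng.geometric(1.0 - p, size=100_000) for p in pi)
    print(f"simulated: mean = {T.mean():.2f}, std = {T.std():.2f}")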

5.3.3.2 Low-Cycle Fatigue
When the stress is sufficiently high, plastic deformation will occur. Accounting for the loading in terms of stress is then less useful, and the strain in the material can be a simpler and more accurate description. This case is known as low-cycle fatigue.
One of the widely accepted theories is the Coffin–Manson relation (see Sornette et al. 1992) given by

Δεp = f(ε′f, N, c) (5.191)

In Equation 5.191, Δεp is the range of plastic strain, which is a function of ε′f, an empirical constant; N, the number of half-reversals to failure (N cycles); and c, an empirical constant known as the fatigue ductility exponent. In Chapter 10, we will further explore the essence of Equation 5.191. In the following paragraphs, let us qualitatively yet briefly introduce low-cycle fatigue.

5.3.3.2.1 Cyclic Test
Unlike high-cycle fatigue, when the load applied on a component or a test specimen
is greater than its yielding point, nonlinear displacement will occur. To study the
material behaviors, a forced cyclic test is often performed. In this case, one type
of material will retain its stiffness for several cycles until reaching the final stage
of broken failure (see Figure 5.15a), where B(t) stands for the broken point. Steel
is a typical material of this kind. Another type of material will reduce its stiffness
continuously until the stiffness is below a certain level at which the total failure is
defined (see Figure 5.15b), where a preset level marked by the dot-dashed line stands
for the failure level. When this level is reached at the nth cycle shown in Figure
5.15b, the corresponding amount of force is denoted by B(n). Reinforced concrete typically shows this behavior under overloading cycles. In both cases, the number of cycles at which failure occurs is considerably smaller than in the above-mentioned high-cycle fatigue.
Note that during a low-cycle fatigue test, if the stiffness is reduced, the amount of
loading will often be reduced as well; otherwise, the corresponding displacement can
be too large to realize with common test machines. Therefore, instead of applying
equal amplitudes of force, equal displacement is used, which is referred to as a test with displacement control. On the other hand, during a cyclic test, if the level of force is controlled, it is called force control. There is another type of low-cycle fatigue test, which is uncontrolled. Tests with ground excitations on a vibration system whose stiffness is contributed by the test specimen can be carried out to study uncontrolled low-cycle fatigue. Figure 5.15c conceptually shows an uncontrolled cyclic test, from which we can see that the secant stiffness, the ratio of the peak force to the corresponding displacement, marked as k1, k2, and so on, is continuously reduced.

Figure 5.15  Low-cycle fatigue. (a) Failure history without significant change in stiffness of materials (normalized load versus number of cycles; B(t) marks the broken point). (b) Failure history with decrease of stiffness of materials (normalized load versus number of cycles; the preset failure level is reached at B(n)). (c) Stiffness variation (normalized load versus displacement in cm; secant stiffnesses k1, k2, …).

5.3.3.2.2 Remaining Stiffness
From Figure 5.15a, we can see that the first type of low-cycle fatigue occurs without
decaying stiffness, because when using displacement control, the force applied on
a test specimen will remain virtually constant in several cycles until sudden failure
occurs. We may classify the failure of this type of material under overloading condi-
tions with constant stiffness as “type C” low-cycle fatigue.
On the other hand, as seen in Figure 5.15b, the overload failure of materials with
decaying stiffness can be called “type D” low-cycle fatigue. To model these two
types of low-cycle fatigue, we will have rather different random processes.
Consider type D low-cycle fatigue first. We will see that, under controlled cyclic
tests, the remaining stiffness at the qth cycle may be caused by the accumulated
deformation in the previous cycles. The corresponding forces under displacement
control at cycle q measured from different specimens are likely different, that is,
random. On the other hand, under force control, the displacement at cycle q is also
random. Thus, both the displacement and the force can be seen as random processes.
In the following paragraphs, we can see that the sums of these random quantities at
cycle q can be approximated as Markov processes.
Experimental studies show that the force or the displacement of a type C material before failure remains constant. Therefore, the corresponding test process cannot be characterized as a random process. However, the amount of force at the broken point is rather random, and the specific cycle at which breakage occurs is also random. That is, Figure 5.15a conceptually shows the break point at 10 cycles, which is only one realization of the test; in reality, it can happen at 9, 11, or other cycles. In addition, because the level of force is random, the exact time point of the sudden failure is also random. In the following examples, we can see that the failure force process B(t) is not Markovian.

5.3.3.2.3 Type C Low-Cycle Fatigue


Suppose the sudden failure of type C low-cycle fatigue occurs at time point tf, the amount of force is B, and B(t) is used to denote the process. Also, suppose the failure happens at cycle q, the cyclic test uses sinusoidal displacement control, and the period of the loading process is T. The reason the above-mentioned process B(t) is not Markovian is that the forces measured at t − T, t − 2T, and so on, are all constant; the peak value of the force before the failure point is also constant. Therefore, we cannot say that B(t) depends only on the nearest recent cycle. In fact, the failure depends on the entire previous overloading process. In this sense, the corresponding materials memorize the previous process.
Liu et al. (2005) reported that, under their test conditions, the fatigue point only
depends on the total number of previous cycles and the amount of peak forces. That
is, the failure point has little relation with how long the test force is kept at a loading
level. The material memory has little relation with the temporal loading path, which is
conceptually shown in Figure 5.16a, where the solid lines are sinusoidal loading paths
and the break lines are triangular loading paths. Under Liu’s test conditions, as long
as the peak value of the load is controlled to be constant, different load paths result
in little change. However, the level of peak load plays much more significant roles.
More importantly, when using different overloading levels F1, F2, … at cycle
1, 2, …, the fatigue failure B(t) can be rather different between the increasing load test
(F1 < F2 < …) and the decreasing load test (F1 > F2 > …). Figure 5.16b conceptually
shows increasing and decreasing loading processes. More detailed observations unveil the reason for such phenomena through metallographic examinations. It is seen that when the specimen is overloaded to yield inelastic deformation, the local metallographic microstructure will be altered. Therefore, although no obvious cracks are found on the specimen, the specimen memorizes the metallographic defects, and the larger the inelastic deformation is, the heavier the metallographic defects will be. Where a large deformation has previously occurred, the microstructure becomes susceptible to damage when subjected to further overloading. On the other hand, when the overloading is increased cycle by cycle, the previous smaller defects will result in fewer alterations of the metallographic microstructure.
Because the loading path has few effects on fatigue life, one can use the above-
mentioned method with ground excitations for uncontrolled low-cycle fatigue tests,
which is conceptually shown in Figure 5.16c, where the upper and lower broken lines
specify the yielding load. Comparing Figure 5.16c with Figure 5.1, we can see that the study of type C low-cycle fatigue can be carried out by applying the aforementioned conclusions to the engineering problem of level crossing.

Figure 5.16  Different loadings for type C tests. (a) Constant load amplitude, at higher and lower peak-load levels, with failure at B(n). (b) Monotonically increasing/decreasing load amplitude (load-increasing and load-decreasing tests). (c) Random loading amplitude over the test cycles.

5.3.3.2.4 Type D Low-Cycle Fatigue


Now, consider the second kind of fatigue, the type D low-cycle test. One useful model for the decrease of stiffness is to relate stiffness to the accumulative inelastic displacement N(n); that is (see Chapter 10, Section 10.3.4),

k = f(N(n)) (5.192)

Assume that in the nth cycle, the total displacement is D(n), which can be N(n) longer than the elastic displacement L(n). Note that the inelastic distance is treated as a random process because, when the material is damaged, the allowed linear displacement may vary from cycle to cycle. That is,

N(n) = D(n) − L(n) (5.193)

where N(n) is the distance of inelastic displacement in cycle n. The accumulated inelastic displacement, up to cycle n, is denoted by Z(n). To simplify the process, let the

allowed linear displacement be constant, that is, L(n) = L, and assume that, in each cycle, the forced displacement is greater than L. In this case, we can write

Z(n) = Σ_{q=1}^{n} N(q) = Σ_{q=1}^{n} D(q) − nL (5.194)

It is seen that Z(n) is a continuous state Markov chain. To simplify the analysis, however, let us consider another Markov chain, Y(n), that differs from Z(n) only by the deterministic quantity nL, such that

Y(n) = Z(n) + nL = Σ_{q=1}^{n} D(q) (5.195)

Assume the displacement D(q) has a normal distribution with zero mean, that is,

D(n) ~ N(0, σ²) (5.196)

From the above-mentioned example on sums of a random sequence with a normal distribution, it is seen that the probability density of Y(n) is given by

f_{Y(n)}(n, d) = [1/(√(2πn)σ)] e^{−d²/(2nσ²)} (5.197)
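A brief simulation sketch (ours; σ and the cycle counts are arbitrary) of the Gaussian random walk Y(n) in Equation 5.195, checking that its spread grows as √n σ in accordance with Equation 5.197:

    import numpy as np

    rng = np.random.default_rng(2)
    sigma, n_cycles, n_specimens = 0.1, 50, 20_000

    D = rng.normal(0.0, sigma, size=(n_specimens, n_cycles))  # D(q) ~ N(0, sigma^2)
    Y = D.cumsum(axis=1)                                      # Y(n) = sum of D(q), q = 1..n

    for n in (10, 50):
        print(f"n = {n:2d}: sample std of Y(n) = {Y[:, n - 1].std():.4f}, "
              f"theory sqrt(n)*sigma = {np.sqrt(n) * sigma:.4f}")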

5.3.4 Cascading Effect
5.3.4.1 General Background
When a system or a structure is subjected to multiple loads applied in sequence, either of the same kind or of different types, the earlier loads may cause certain damage and the subsequent ones will cause further damage that can be far more severe than that of a single load. This sequential loading and damaging is referred to as a cascading effect.
Among load-resilient designs, the cascading effect is one of the least understood issues. This is because engineers are normally trained to design systems to succeed, rather than to analyze how they fail. However, many real-world experiences have witnessed severe failures under cascading effects. Examples include mountain slides after strong earthquakes, bridge scour failure after heavy floods, structural failure due to overload after fatigue, and so on.
Because both the magnitudes and the acting times are random, the cascading effect can be treated as a random process. Although few systematic studies of such effects have been carried out, we discuss possible approaches in this manuscript.
Again, the discussion of such a topic serves only to encourage readers to develop
a knowledge and methodology to understand the nature and essence of random pro-
cesses. It will also serve the purpose of opening the window to engineers who have
been trained in the deterministic world.

5.3.4.2 Representation of Random Process


The cascading effect consists of two components, the magnitude and the occurrence
time, which are also two of the basic elements of a random process X(t, e), e ∈ Ω, and
0 < t < T. Here, Ω is the total state space and T is the total service time of the system.
It is often helpful to separate the variables in the state space and in the time domain.
Generally speaking, any record of a random process can be seen as a determin-
istic temporal function, which can be represented by summations of orthogonal
functions with corresponding parameters. The aforementioned Fourier series is one
of the popular examples using orthogonal sinusoidal temporal functions, namely,
sin(nωT t) and cos(nωT t). In this regard, the random process itself can also be repre-
sented by certain sets of orthogonal functions under conditions.
Specifically, under mean square convergence conditions, a random process X(t) can be represented in a ≤ t ≤ b by the following series:

X(t) = Σ_{i=0}^{∞} Ai φi(t),  t ∈ [a, b] (5.198)

such that

l.i.m._{n→∞} [Σ_{i=0}^{n} Ai φi(t) − X(t)] = lim_{n→∞} E{[Σ_{i=0}^{n} Ai φi(t) − X(t)]²} = 0 (5.199)

In Equations 5.198 and 5.199, {Ai} is a set of random variables and {φi(t)} is a set of deterministic temporal functions, called the basis or coordinate functions.
The essence of Equation 5.198 is the methodology of variable separation, because in each individual product Ai φi(t), the random variable Ai in the state space and the temporal function φi(t) in the time domain are separated. This is similar to the method of variable separation used to solve partial differential equations, where the spatial and temporal variables are first separated to form a set of ordinary differential equations.
A common representation that satisfies Equation 5.199 takes the following form

X(t) = μ(t) + Σ_{i=1}^{∞} Ai φi(t),  t ∈ [a, b] (5.200)

where

μ(t) = E[X(t)] (5.201)

and for all i, j the set {Ai} satisfies

1. E[Ai] = 0 (5.202)
2. E[Ai Aj] = σi² δij (5.203)

where

σi² = E[Ai²] (5.204a)

and δij is the Kronecker delta function such that

δij = { 1, i = j
      { 0, i ≠ j (5.204b)

The complete set of temporal functions {φi(t)} should satisfy the following:

1. The covariance of X(t) can be represented by

σ_XX(t1, t2) = Σ_{i=1}^{∞} σi² φi(t1) φi(t2),  t1, t2 ∈ [a, b] (5.205)

2. ∫_a^b |φi(t)|² dt < ∞ (5.206)

and

3. ∫_a^b φi(t) φj(t) dt = δij (5.207)

If X(t) is taken to be a measured time history of the random process (see Chapter 9, Inverse Problems), then the coefficient Ai is no longer random and can be calculated by

Ai = ∫_a^b [X(t) − μ(t)] φi(t) dt (5.208)

Practically speaking, only the first n terms in Equation 5.200 are used to approximate the random process X(t), that is,

X̂(t) = μ(t) + Σ_{i=1}^{n} Ai φi(t) ≈ X(t) (5.209)

In this case, X̂(t) is the approximated process and

μ_X̂(t) = E[X̂(t)] (5.210)

σ²_X̂(t) = Σ_{i=1}^{n} σi² φi²(t) (5.211)

and

R_X̂(t1, t2) = Σ_{i=1}^{n} σi² φi(t1) φi(t2),  t1, t2 ∈ [a, b] (5.212)

To represent or reconstruct a random process, the sample range [a, b] must be realized. Supposing the orthogonal functions φi(t) are known, the following integral can be used in a trial-and-error approach to specify [a, b]:

∫_a^b R_X̂(t1, t2) φi(t2) dt2 = λi φi(t1),  t1, t2 ∈ [a, b] (5.213)

where λi is the corresponding eigenvalue and

λi = E[Ai²] = σi² (5.214)

From Equation 5.214, the calculated parameter λi is a constant if the range [a,b] is
chosen correctly. Furthermore, if λi has a drastic variation over a period of time and/
or after the targeted system undergoes significant loading, then the system may have
a cascading damage.
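The representation above is essentially a Karhunen–Loève expansion. A discretized sketch (ours; the exponential covariance kernel is an assumption for illustration) obtains the basis functions φi and the eigenvalues λi = σi² from Equation 5.213:

    import numpy as np

    a, b, n = 0.0, 1.0, 200
    t = np.linspace(a, b, n)
    dt = t[1] - t[0]
    R = np.exp(-np.abs(t[:, None] - t[None, :]) / 0.3)   # assumed covariance sigma_XX(t1, t2)

    # Discrete analogue of the integral eigenvalue problem, Equation 5.213
    lam, phi = np.linalg.eigh(R * dt)
    lam, phi = lam[::-1], phi[:, ::-1] / np.sqrt(dt)     # sort descending; normalize per Eq. 5.207

    print("leading eigenvalues lambda_i = sigma_i^2:", lam[:4].round(4))
    print("orthonormality <phi_0, phi_0> =", round(np.sum(phi[:, 0] ** 2) * dt, 6))
    print("orthogonality  <phi_0, phi_1> =", round(np.sum(phi[:, 0] * phi[:, 1]) * dt, 10))

Tracking how the leading eigenvalues drift over successive measurement windows is one plausible way to flag the "drastic variation" of λi mentioned above.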

5.3.4.3 Occurrence Instance of Maximum Load


The peak value of the loading magnitude must be calculated not only when considering the cascading effect but also for most designs of systems that resist external loads. In the cases of cascading loads as well as low-cycle fatigue, both treated as random processes, when the maximum load occurs is an important issue. For example, in the case of random loading of a system with type C low-cycle fatigue, whether the maximum load occurs earlier or later will cause rather different results. In this subsection, let us consider this issue through a simplified Markovian model.
Suppose a system is subjected to random loads from time to time. The amplitude
of these loads possesses state space denoted by {1, 2, …, a} with uniform distribu-
tions. Here, using uniform distribution simplifies the analysis. Generally speaking,
lognormal distributions can be more realistic. If one can measure these loads in
every unit time, then the maximum values of these loads can be recorded as a ran-
dom sequence {X(n), n ≥ 1}.
We shall see that {X(n), n ≥ 1} is Markovian. We will analyze its one-step transition probability matrix and find the average time until the maximum value a is recorded. Denote the kth record of the amplitude by Yk (k = 1, 2, …). It is seen that Y1, Y2, …, Yn, … are mutually independent and have identical distributions. Note that X(k) is the maximum value recorded in the first k measurements, that is, X(k) = max_{1≤i≤k} Yi, and that Yn (n > k) and X(k) are mutually independent.
Now, consider any instant 0 < m1 < m2 < … < mn < mn+1, and i1, i2, …, in+1 ∈ Ω. When

P{[X(m1) = i1] ∩ [X(m2) = i2] ∩ … ∩ [X(mn) = in]} > 0 (5.215)



we have

P{X(m_{n+1}) = i_{n+1} | [X(m1) = i1] ∩ [X(m2) = i2] ∩ ⋯ ∩ [X(mn) = in]}
= P{max[X(mn), Y_{mn+1}, Y_{mn+2}, …, Y_{m_{n+1}}] = i_{n+1} | [X(m1) = i1] ∩ ⋯ ∩ [X(mn) = in]}
= P{max[in, Y_{mn+1}, Y_{mn+2}, …, Y_{m_{n+1}}] = i_{n+1} | [X(m1) = i1] ∩ ⋯ ∩ [X(mn) = in]}
= { 0,  i_{n+1} < in
  { P{max[Y_{mn+1}, Y_{mn+2}, …, Y_{m_{n+1}}] ≤ i_{n+1}},  i_{n+1} = in
  { P{max[Y_{mn+1}, Y_{mn+2}, …, Y_{m_{n+1}}] = i_{n+1}},  i_{n+1} > in (5.216)

Furthermore, we also have

P{X(m_{n+1}) = i_{n+1} | X(mn) = in}
= P{max[X(mn), Y_{mn+1}, Y_{mn+2}, …, Y_{m_{n+1}}] = i_{n+1} | X(mn) = in}
= P{max[in, Y_{mn+1}, Y_{mn+2}, …, Y_{m_{n+1}}] = i_{n+1} | X(mn) = in}
= { 0,  i_{n+1} < in
  { P{max[Y_{mn+1}, Y_{mn+2}, …, Y_{m_{n+1}}] ≤ i_{n+1}},  i_{n+1} = in
  { P{max[Y_{mn+1}, Y_{mn+2}, …, Y_{m_{n+1}}] = i_{n+1}},  i_{n+1} > in (5.217)

Therefore, we can write

P{X(m_{n+1}) = i_{n+1} | [X(m1) = i1] ∩ [X(m2) = i2] ∩ ⋯ ∩ [X(mn) = in]} = P{X(m_{n+1}) = i_{n+1} | X(mn) = in} (5.218)

which indicates that {X(n), n ≥ 1} is Markovian. The one-step transition probability is

P{X(n + 1) = j | X(n) = i} = P{max[X(n), Y_{n+1}] = j | X(n) = i} = P{max[i, Y_{n+1}] = j | X(n) = i}
= { 0,  j < i
  { P{Y_{n+1} ≤ j} = i/a,  j = i
  { P{Y_{n+1} = j} = 1/a,  j > i (5.219)

It is seen that the transition probability is not related to the instant n; therefore, {X(n), n ≥ 1} is a homogeneous Markov chain.
In addition, because

pij = { 0,  elsewhere
      { i/a,  j = i
      { 1/a,  a ≥ j > i (5.220)

the one-step transition probability matrix is

P = [ 1/a  1/a  1/a  …  1/a
      0    2/a  1/a  …  1/a
      0    0    3/a  …  1/a
      …
      0    0    0    …  1   ]

Now, to consider the average time, denote the first time the maximum value a is recorded by Ta. We see that

P{Ta = k} = Σ_{i=1}^{a−1} P{[X(1) = i] ∩ [Ta = k]}
= Σ_{i=1}^{a−1} P[X(1) = i] P{Ta = k | X(1) = i}
= Σ_{i=1}^{a−1} (1/a) [(a − 1)/a]^{k−2} (1/a)
= (a − 1)^{k−1}/a^k (5.221)

In this case, the average time, denoted by TE, is

E(TE) = Σ_{k=1}^{∞} k P[Ta = k] = Σ_{k=1}^{∞} k (a − 1)^{k−1}/a^k = (1/a) Σ_{k=1}^{∞} k (1 − 1/a)^{k−1} = a (5.222)

Thus, the larger the maximum value is, the longer the average record time will be.
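Equations 5.221 and 5.222 are easily checked by simulation; a sketch of ours with a = 10 assumed:

    import numpy as np

    rng = np.random.default_rng(3)
    a, trials = 10, 100_000

    # Y_k are i.i.d. uniform on {1, ..., a}; T_a is the first k with Y_k = a, so
    # P{T_a = k} = (a - 1)**(k - 1) / a**k, a geometric law with p = 1/a.
    T = rng.geometric(1.0 / a, size=trials)
    print(f"simulated E[T_a] = {T.mean():.2f}, theory = {a}")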

Problems
1. Show that the peak frequency of a random process can be given by the following equation, and find the formula for the Gaussian specialization:

ωp = 2πνp = [∫_{−∞}^{∞} ω⁴ S_X(ω) dω / ∫_{−∞}^{∞} ω² S_X(ω) dω]^{1/2}


2. Show that the velocity of Gaussian process X(t) at a zero up-crossing has a
Rayleigh distribution
3. The RMS velocity and displacement of a narrow-band Gaussian vibration with zero mean are, respectively, 2.0 m/s and 0.05 m. Calculate
a. the rate of level up-crossings for the level a = 0.03 m
b. the zero up-crossing rate
Here, a = 0.03 m, σ_Ẋ = 2.0 m/s, and σ_X = 0.05 m.
4. A narrow-band Gaussian process with zero mean has RMS displacement of
3.5 cm. Calculate and plot the distribution density function of
a. amplitude for this process and
b. height for this process
5. The joint PDF of a narrow-band process X(t) and its first derivative is given by a joint Laplace distribution (Lin 1976)

f_{XẊ}(x, ẋ) = [1/(4ab)] e^{−(|x|/a + |ẋ|/b)},  −∞ < x < ∞,  −∞ < ẋ < ∞

where a > 0, b > 0.
Show that the PDF of the peak magnitude can be approximated by

f_Z(z) = (1/a) e^{−z/a},  z ≥ 0

6. Suppose X(t) is a narrow-band process. What is the value of the peak with a
1% probability of being exceeded?
7. Show that the probability of exceedance P(Z > z0) of Rice’s distribution
of peaks is approximately a times the probability found from the Rayleigh
distribution
8. {X(n), n = 0, 1, 2, …} is a Markov chain. Show that the inverse sequence of
X(n) is also a Markov chain, that is,

P{X(1) = x1∣X(2) = x2, X(3) = x3, …, X(n) = xn} = P{X(1) = x1∣X(2) = x2}
Section III
Vibrations
6 Single-Degree-of-Freedom Vibration Systems
In previous chapters, we showed that a random process need not be tied to any particular physical time history. Time histories occurring physically in the real world have specific causes and are limited by certain conditions. In previous chapters, we mainly focused on the time-varying development itself, rather than on a clear understanding of why a time history exists. At most, we studied several important conditions on those time histories. Yet these studies were limited to how a process behaves, such as whether it is stationary, what the corresponding statistical parameters are, and what frequency spectra it has. Generally speaking, the previous chapters were limited to the mathematical models of random processes, rather than the causes underlying these models.
Many time-varying processes, or time histories, are purely artificial. For example,
one can use computational software to generate a random signal. These purely artifi-
cial “stochastic” processes, although they seem very random, are actually controlled
by man-made signal generators and therefore we should have prior knowledge of
how they behave.
On the other hand, most real-world temporal developments are not purely or
directly man-made, and thus are not that easily controllable; some of them cannot
be easily measured. Accounting for all of these time-varying processes would be a
huge task—most of them are far beyond the scope of this manuscript. Here, we focus
only on a special type of process, the vibration signals, and mainly on mechanical
vibrations.
To classify linear vibration signals by their degree of certainty, there are typically
three essentially different types. The first is periodic vibration, such as harmonic
steady-state vibration, which contains a limited number of frequency components
and periodically repeats identical amplitudes. The second type is transient vibration,
such as free decay vibration, which is caused only by initial conditions. Although
we could also have a combination of harmonic steady-state vibration and free decay
vibration, the resulting signal is often not treated as the third type because we study
the first two types of vibration separately and simply add them together. Moreover,
both of them are deterministic. The third type is random vibration. Based on the
knowledge gained in the previous chapters, the nature of random signals is that,
at any future moment, their value is uncertain. Therefore, for the first two types of
vibration, with the given initial conditions and known input, we can predict their
future, including amplitudes, frequencies, and phases. However, we cannot predict


the response of random vibrations. For a simple vibrational system, even if we know
the bounds of a random input, the output bound is unpredictable.
Therefore, to handle random signals, we need a basic tool, that is, averaging.
However, to account for random vibrations, we can do something more than the
statistical measure. Namely, we need to study the nature of a vibration system itself.
Here, the main philosophy to support our action is that any real-world vibration sig-
nal must be a result of a certain convolution. That is, vibration is a response caused
by the combination of external excitation and the vibration system. Without any
external excitations, of course, there would be no responses. However, without vibra-
tion systems, the presence of an excitation only will not cause any response either.
We thus need to study the nature of vibration systems to further understand how their
responses behave.
In this chapter, the basics of a single-degree-of-freedom (SDOF) vibration sys-
tem that is linear and time-invariant will be described. The periodic and transient
responses of the SDOF system under harmonic and impulse excitations will also be
examined, respectively. For a more detailed description of vibrations, readers may
consult Meirovitch (1986), Weaver et al. (1990), Chopra (2001), and Inman (2008),
as well as Liang et al. (2012).

6.1 Concept of Vibration
The dynamic behavior of a SDOF system can be characterized by key parameters
through free vibration analysis, such as the natural frequency and the damping ratio.

6.1.1 Basic Parameters
Generally speaking, the background knowledge needed for the study of SDOF
vibration systems can be found in a standard vibration textbook. Examples of vibra-
tion textbooks that provide an ample understanding include Inman’s Engineering
Vibration and Chopra’s Dynamics of Structures. Background knowledge that should
be gained in reading one of these texts include: what vibration is, what the essence
of vibration is versus another form of motion, and why vibration should be studied.
Vibration is a unique type of motion of an object, which is repetitive and relative
to its nominal position. This back-and-forth motion can be rather complex; however,
the motion can often be decoupled into harmonic components. A single harmonic
motion is the simplest motion of vibration given by

x(t) = d sin(ωt) (6.1a)

In this case, d is the amplitude, ω is the frequency, and t is the time. Thus, x(t) is a deterministic time history. Typically, Equation 6.1a is used to describe the vibration displacement.
The velocity, with amplitude v = dω, is the derivative of the displacement, given by

ẋ(t) = dω cos(ωt) = v cos(ωt) (6.1b)
Subsequently, the acceleration, with amplitude a = −dω², is the derivative of the velocity, given by

ẍ(t) = −dω² sin(ωt) = a sin(ωt) (6.1c)

6.1.1.1 Undamped Vibration Systems


The previous examples can be seen as the responses of a SDOF system. In the following, the system will be examined using physical models.

6.1.1.1.1  Equation of Motion
Figure 6.1 shows an undamped SDOF system consisting of a mass m and a spring k. Consider the motion in the x direction only (the vertical gravity force mg and the supporting force n are then ignored). Thus, there are two forces in balance, the inertial force fm = mẍ and the spring force fk = kx. This can be expressed as

mẍ + kx = 0 (6.2)

6.1.1.1.2  Natural Frequency
The quantity defined below is the angular natural frequency, often simply referred to as the natural frequency; its unit is radians per second.

ωn = √(k/m) (6.3)

6.1.1.1.3  Monic Equation
Dividing both sides of Equation 6.2 by m, with the help of the notation in Equation 6.3, results in the monic equation

ẍ + ωn² x = 0 (6.4)

 
FIGURE 6.1  SDOF system. (Mass m on a spring k; displacement x; inertial force fm = −mẍ and spring force fk = kx; the gravity force mg is balanced by the supporting forces.)



Example 6.1

An undamped SDOF system has a mass of 10 kg and a stiffness of 1000 N/m; find the angular natural frequency.

ωn = √(1000/10) = 10 (rad/s)

Note that the natural frequency in cycles per second, denoted by fn, can be written as

fn = ωn/(2π) = 1.5915 (Hz)

6.1.1.1.4  Solutions
To solve the above equation, the so-called semidefinite method is used. In this approach, it is first assumed that

x(t) = dc cos ωn t + ds sin ωn t (6.5)

Then, Equation 6.5 is substituted into Equation 6.4. If proper parameters dc and ds can be determined that are neither infinite nor zero, then the assumption is valid and Equation 6.5 is one of the possible solutions.
Noticeably, with initial conditions x0 and ẋ0, the parameters are

dc = x0 (6.6)

and

ds = ẋ0/ωn (6.7)

Accordingly, x(t) = dc cos ωn t + ds sin ωn t is indeed a solution, which implies that the vibration displacement of the mass is harmonic, with frequency ωn and certain amplitudes. Here, it is also realized why ωn is called the natural frequency.

Example 6.2

Suppose a monic undamped vibration system with natural frequency ωn = 10 has an initial displacement x(0) = 1 and initial velocity ẋ(0) = −2; calculate the response.
From Equation 6.5, it is seen that

x(0) = dc cos ωn(0) + ds sin ωn(0)

therefore

dc = 1

Furthermore, taking the derivative of both sides of Equation 6.5 with respect to time t and then letting t = 0, we have

ẋ(0) = −dc ωn sin(ωn·0) + ds ωn cos(ωn·0) = ds ωn

Therefore,

ds = −2/10 = −0.2
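A tiny numerical sketch (ours) of the response in Example 6.2, confirming that Equation 6.5 with dc = 1 and ds = −0.2 reproduces the initial conditions:

    import numpy as np

    wn, dc, ds = 10.0, 1.0, -0.2
    t = np.linspace(0.0, 1.0, 1001)

    x = dc * np.cos(wn * t) + ds * np.sin(wn * t)              # Equation 6.5
    v = -dc * wn * np.sin(wn * t) + ds * wn * np.cos(wn * t)   # its derivative

    print("x(0) =", x[0], "  xdot(0) =", v[0])                 # recovers 1 and -2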

6.1.1.1.5  Essence of Vibration


To answer the important question: “Why is there vibration?,” consider the following
discussion.

6.1.1.1.5.1   Conservation of Energy  First, the energy terms in the SDOF system are defined.

Potential energy:

U(t) = (1/2) k x(t)² (6.8)

Kinetic energy:

T(t) = (1/2) m ẋ(t)² (6.9)

Total energy conservation:

d/dt [T(t) + U(t)] = 0 (6.10)

6.1.1.1.5.2   Energy Exchange  During the vibration, an exchange of potential and kinetic energy occurs. Furthermore,

Tmax = Umax (6.11)

Explicitly, this can be expressed as

(1/2) k x²max = (1/2) m ẋ²max (6.12)

Notice that Equation 6.2 can also be obtained through Equation 6.10. From Equation 6.12, it is determined that

ẋmax = ωn xmax (6.13)

Substitution of Equation 6.13 into Equation 6.12 results in

(1/2) k x²max = (1/2) m ωn² x²max (6.14)

From Equation 6.14, Equation 6.3 can be calculated. This procedure implies that the ratio of k and m can be seen as a measurement of normalized vibration energy.

Example 6.3

Based on Equation 6.14, we can find the natural frequency of a complex system,
which consists of more than one mass (moment of inertia) but is described by a
single variable x (or rotation angle θ). Using such an energy method can simplify
the procedure for natural frequency analysis.
As shown in Figure 6.2, a system has three gears and a rack that is connected
to the ground through a stiffness k. In this system, the pitch radii of gear 1 to gear
2 are, respectively, R1 and R 2. The pitch radii of gear 2 to gear 3 (also of gear 3
to the rack) are, respectively, r2 and r3. To simplify the problem, let the teeth and
shafts of gears as well as the rack have infinitely strong stiffness and the system is
frictionless.
To use the energy method, Tmax = Umax, we need to find both the potential and the kinetic energies. The kinetic energy of each gear is a function of its moment of inertia Ji, the relevant pitch radii, and the displacement x; that is, Ti = Ti(Ji, ri, x). The kinetic energy Track is a function of the mass of the rack as well as the displacement x. Here, x is the only parameter needed to denote the motion; therefore, the system is of SDOF.
The potential energy is given by

U = (1/2) k x²

FIGURE 6.2  A complex system. (Gear 1: moment of inertia J1, pitch radius R1; gear 2: J2, with pitch radii R2, meshing gear 1, and r2, meshing gear 3; gear 3: J3, pitch radius r3, rotation angle θ3, meshing the rack; rack of mass m with displacement x, connected to the ground through stiffness k; gear ratios γ21 and γ32.)


The total kinetic energy T is contributed by the three gears, denoted by Tgear1, Tgear2, and Tgear3, respectively, and by the rack, denoted by Track. That is,

T = Tgear1 + Tgear2 + Tgear3 + Track

Denote θi as the rotation angle of gear i. The relationship between the translational displacement x and the rotational angle is x = r3θ3, so that for the translational velocity ẋ and the rotational angular velocity θ̇3, we have ẋ = r3θ̇3. Therefore,

θ3 = x/r3
θ̇3 = ẋ/r3

Gear 3 to gear 2 has gear ratio γ32 given by

γ32 = r3/r2

so that the rotation angle of gear 2 is

θ2 = γ32θ3

Furthermore,

θ2 = (r3/r2)(x/r3) = x/r2

and

θ̇2 = ẋ/r2

Similarly, the gear 2 to gear 1 ratio is

γ21 = R2/R1

so that the rotation angle of gear 1 is

θ1 = γ21θ2

and

θ1 = (R2/R1)(x/r2) = x R2/(r2R1)

Therefore,

θ̇1 = R2ẋ/(r2R1)

With the above defined notations, the kinetic energy can be summarized as

T = (1/2)mẋ² + (1/2)(J3θ̇3² + J2θ̇2² + J1θ̇1²) = (1/2)ẋ² [m + J3/r3² + J2/r2² + J1R2²/(R1²r2²)]

Note that T is represented by the variable x only. Furthermore, the maximum velocity is

ẋmax = ωn xmax

Then, based on Equation 6.14, we have

(1/2)ẋ²max [m + J3/r3² + J2/r2² + J1R2²/(R1²r2²)] = (1/2) k x²max

The natural frequency can be written as

ωn = √( k / [m + J3/r3² + J2/r2² + J1R2²/(R1²r2²)] )

Alternatively, we can also use Equation 6.10. We have

d/dt { (1/2)ẋ² [m + J3/r3² + J2/r2² + J1R2²/(R1²r2²)] + (1/2)kx² } = 0

Thus,

ẍẋ [m + J3/r3² + J2/r2² + J1R2²/(R1²r2²)] + kxẋ = 0

The differential equation is then given by

ẍ [m + J3/r3² + J2/r2² + J1R2²/(R1²r2²)] + kx = 0

Furthermore, the natural frequency is

ωn = √(stiffness/mass) = (coefficient of displacement/coefficient of acceleration)^{1/2} = √( k / [m + J3/r3² + J2/r2² + J1R2²/(R1²r2²)] )

If there are rotational stiffnesses associated with each gear, k1, k2, and k3, then, due to the rotational deformation of each gear's shaft, there will be additional potential energies:

Ugear1 = (1/2) k1θ1² = (1/2) k1 R2²x²/(R1²r2²)

Ugear2 = (1/2) k2θ2² = (1/2) k2 x²/r2²

Ugear3 = (1/2) k3θ3² = (1/2) k3 x²/r3²

Thus, the total potential energy is given by

U = (1/2) x² [k + k3/r3² + k2/r2² + k1R2²/(R1²r2²)]

The natural frequency can finally be calculated as

ωn = √( [k + k3/r3² + k2/r2² + k1R2²/(R1²r2²)] / [m + J3/r3² + J2/r2² + J1R2²/(R1²r2²)] )
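A computational sketch (ours; all numerical values hypothetical) of the equivalent mass and natural frequency derived in Example 6.3:

    import numpy as np

    m, k = 5.0, 2.0e4                 # rack mass (kg) and ground stiffness (N/m)
    J1, J2, J3 = 0.02, 0.01, 0.005    # gear moments of inertia (kg m^2)
    R1, R2 = 0.10, 0.08               # pitch radii at the gear 1 / gear 2 mesh (m)
    r2, r3 = 0.05, 0.04               # pitch radii, gear 2 / gear 3 and gear 3 / rack (m)

    # Equivalent mass from T = (1/2) m_eq xdot^2
    m_eq = m + J3 / r3**2 + J2 / r2**2 + J1 * R2**2 / (R1**2 * r2**2)
    wn = np.sqrt(k / m_eq)
    print(f"m_eq = {m_eq:.2f} kg, wn = {wn:.2f} rad/s")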

6.1.1.1.5.3   Force and Momentum  Equation 6.3 can be further obtained using additional approaches. For example, consider the momentum q, where

q(t) = mẋ(t) (6.15)

Figure 6.3 conceptually shows the relationships among the forces and momentum mentioned above, where fK, fM, and q0 represent the maximum restoring force, inertial force, and momentum, respectively.
FIGURE 6.3  Maximum potential and kinetic energy. (Rotating-vector representation: displacement amplitude d, velocity amplitude v leading by 90°, and acceleration amplitude a leading by 180°, with the maximum restoring force fK, inertial force fM, and momentum q0.)

Denoting the amplitude of momentum by q0, where

q0 = mv (6.16)

then

k = fK/d (6.17)

and

m = q0/v = q0/(ωn d) (6.18)

Thus, the ratio k/m = (fK/d)/[q0/(ωn d)] = fK ωn/q0 = ωn² indicates

ωn = fK/q0 (6.19)

Here, Equation 6.19 implies that

natural frequency = maximum restoring force/maximum momentum (6.20)

6.1.1.1.5.4   Rate of Energy Exchange  Additionally, consider the rate of energy exchange and, for convenience, denote

x(t) = de^{λt} (6.21)

Furthermore, letting λ = ±jωn yields

x(t) = de^{±jωn t} (6.22)

Note that

ẋ(t) = λx(t) (6.23)

With

T(t) = (1/2) m ẋ(t)²

we have

dT(t)/dt = m ẋ(t) (d/dt)ẋ(t) = m ẋ(t) (d/dt)[λd e^{λt}] = m ẋ(t) λ ẋ(t) = 2λ [(1/2) m ẋ(t)²] = 2λT(t) (6.24)

This results in the following:

[dT(t)/dt]/(2T(t)) = λ (6.25)

Furthermore, it is seen that

ωn = ∣λ∣ = ∣[dT(t)/dt]/(2T(t))∣ (6.26)
FIGURE 6.4  Potential energy in one vibration cycle. (Spring force fk versus displacement x between −d and d; the potential energy U(t) = (1/2)kx(t)² reaches Umax at x = ±d.)

and

ωn = ∣λ∣ = ∣[dU(t)/dt]/(2U(t))∣ (6.27)

Equations 6.26 and 6.27 indicate that the angular natural frequency is a unique ratio of energy exchange. It is characterized by the absolute value of one-half of the rate of energy exchange over the kinetic (potential) energy. Furthermore, the higher the rate is, the larger the value of the natural frequency will be. Readers may consider why the factor is one-half, or equivalently why twice the kinetic (potential) energy is needed.
Figure 6.4 conceptually shows the potential energy in one vibration cycle.

6.1.1.1.6  Natural Frequency: A Brief Review
Natural frequency is one of the most important concepts in vibration, given that it unveils the essence of repetitive motion both qualitatively and quantitatively. There exist several angles from which to examine the natural frequency ωn:

ωn = √(k/m) (6.28)

ωn = ∣λ∣ (6.29)

ωn = v/d = a/v = (a/d)^{1/2} (6.30)

ωn = fK/q0 (6.31)

ωn = (1/2)∣[dT(t)/dt]/T(t)∣ = (1/2)∣[dU(t)/dt]/U(t)∣ (6.32)

Readers may consider whether all of the above approaches always apply. Here, we just emphasize that, as seen in Equation 6.28, if either m = 0 or k = 0, then the natural frequency will not exist. The stable system given by Equation 6.2 is linear, SDOF, and undamped, with k > 0; this will not always be true. From Equation 6.8, if we had negative stiffness, the potential energy U(t) ∝ k would become negative, which means that a certain source would continuously input energy to the system and the response would continuously increase, making the system unstable. Therefore, taking the absolute value of the ratio does not mean that k can be smaller than zero. Furthermore, if c ≠ 0, we will have a damped system. For the existence of a stably damped vibration system, not only are the conditions m > 0 and k > 0 needed, but also a condition regarding c, which will be discussed as follows.

6.1.1.2 Damped SDOF System


From Equation 6.5, the response of an undamped system will vibrate indefinitely.
In the real world, energy dissipation will always exist, causing the free vibration to
eventually die out. This energy dissipation mechanism is referred to as damping.

6.1.1.2.1  Viscous Damping
The viscous damping force is

fc = cẋ (6.33)

where c is the proportionality coefficient, defined as the damping coefficient. The parameter c is always greater than or equal to zero: semipositive or nonnegative. In Figure 6.5, a damper c is added to the SDOF system and the resulting balance of forces is

Σ fx(·) = 0 → fm + fc + fk = 0 (6.34)

As a result, the equation of motion becomes

mẍ + cẋ + kx = 0 (6.35)

 
FIGURE 6.5  SDOF damped system. (Mass m with spring k and damper c; displacement x; forces fm = −mẍ, fc = cẋ, and fk = kx; gravity mg balanced by the supports.)

6.1.1.2.2  Semidefinite Method
Similarly, to solve Equation 6.35, let

x = de^{λt} (6.36)

ẋ = dλe^{λt} (6.37)

ẍ = dλ²e^{λt} (6.38)

Substitution of Equations 6.36 through 6.38 into Equation 6.35 yields

mdλ²e^{λt} + cdλe^{λt} + kde^{λt} = 0

Given that m ≠ 0, d ≠ 0, and e^{λt} ≠ 0, this results in the characteristic equation.

6.1.1.2.3  Characteristic Equation


The characteristic equation of a SDOF system is defined by

mλ2 + cλ + k = 0 (6.39)

To find the solution of Equation 6.39, the following is used:

\lambda_{1,2} = \frac{-c \pm \sqrt{c^2 - 4mk}}{2m}  (6.40)
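As a quick numerical companion to Equation 6.40 (a minimal sketch, with hypothetical values m = 10 kg, c = 15 N·s/m, and k = 2000 N/m), the roots can be obtained in Python with NumPy:

import numpy as np

m, c, k = 10.0, 15.0, 2000.0                       # hypothetical SDOF parameters

# Roots of the characteristic equation m*lam^2 + c*lam + k = 0 (Equation 6.39)
lam = np.roots([m, c, k])
print(lam)                                         # a complex conjugate pair when c^2 < 4mk

# Closed form of Equation 6.40; adding 0j forces the complex square root
lam_closed = (-c + np.sqrt(c**2 - 4*m*k + 0j)) / (2*m)
print(np.isclose(lam_closed, lam).any())           # True: matches one of the numpy roots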

6.1.1.2.4  Damping Ratio


To analyze Equation 6.40, the concept of the critical damping ratio must be introduced. Dividing both sides by m, Equation 6.41 yields

\lambda^2 + \frac{c}{m}\lambda + \frac{k}{m} = 0  (6.41)

Given that

\frac{k}{m} = \omega_n^2

let

\frac{c}{m} = 2\zeta\omega_n  (6.42)

in which both m and c are positive, or in the case of c, semipositive; ωn should also be positive, such that ζ is greater than or equal to zero. The critical damping ratio, ζ, or simply the damping ratio, is a semipositive number:

\zeta = \frac{c}{2m\sqrt{k/m}} = \frac{c}{2\sqrt{mk}}  (6.43)

In Equation 6.40, if c^2 = 4mk, or c = 2\sqrt{mk}, then

\sqrt{c^2 - 4mk} = 0  (6.44)

In addition, the two roots are equal:

\lambda_1 = \lambda_2 = -\frac{c}{2m} = -\frac{2\sqrt{mk}}{2m} = -\sqrt{\frac{k}{m}} = -\omega_n  (6.45)

Thus,

c_c = 2\sqrt{mk}  (6.46)

Twice the geometric mean of m and k is referred to as the critical damping coefficient.


In terms of the critical damping coefficient, the damping ratio can be written as

\zeta = \frac{c}{c_c}  (6.47)

When c = c_c, we can write

x(t) = d e^{\lambda t} = d e^{-\frac{c}{2m}t} = d e^{-\omega_n t}  (6.48)

In this instance, Equation 6.48 no longer describes a motion of vibration, so that c = c_c = 2\sqrt{mk} is a critical point. A system is a vibratory system only if

c < 2\sqrt{mk}  (6.49)

For this reason, ζ is called the critical damping ratio.


Condition 6.49 can be rewritten as

\zeta < 1  (6.50a)

In this case, we have an underdamped system. When

\zeta = 1  (6.50b)

we have a critically damped system, and when

\zeta > 1  (6.50c)

we have an overdamped system.

However, if

c < 0  (6.51)

we have negative damping, which means that a certain energy is continuously input to the system so that the response continuously increases and the system becomes unstable. This phenomenon is similar to negative stiffness. The difference is that with k < 0, the input energy is proportional to the displacement, whereas with c < 0, the input is proportional to the velocity.
From Equation 6.43, we see that the damping ratio is proportional to the damping coefficient, so that when Equation 6.51 holds, the unstable condition can also be written as

\zeta < 0  (6.52)

Example 6.4

A car has a mass of 2000 kg, and the total stiffness of its suspension system is 2840 kN/m. The design damping ratio is 0.12; find the total damping coefficient of the suspension system. Suppose five people weighing 5 kN in total are sitting in this car. Calculate the resulting damping ratio (g = 9.8 m/s2).
Based on Equation 6.43, the damping coefficient c can be calculated as

c = 2\zeta\sqrt{mk} = 18.1 kN·s/m

With the additional mass Δm = 5000/9.8 = 510.2 kg, the new damping ratio is

\zeta_{new} = \frac{c}{2\sqrt{(m + \Delta m)k}} = 0.11

It is seen that when the mass is increased by about 1/4, the reduction of the damping ratio is only about 10%.
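The arithmetic above is easily verified in Python (a sketch of Equation 6.43 only; nothing beyond the example's data is assumed):

import math

m, k, zeta = 2000.0, 2840.0e3, 0.12        # mass (kg), stiffness (N/m), design damping ratio
c = 2.0 * zeta * math.sqrt(m * k)          # Equation 6.43 solved for c; ~18.1 kN*s/m
dm = 5000.0 / 9.8                          # added passenger mass (kg)
zeta_new = c / (2.0 * math.sqrt((m + dm) * k))   # Equation 6.43 with the new mass
print(c / 1e3, zeta_new)                   # ~18.1, ~0.107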

6.1.1.2.5  Eigenvalue λ
Rewrite Equation 6.40 as

\lambda_{1,2} = \frac{-c \pm \sqrt{c^2 - 4mk}}{2m} = -\frac{c}{2m} \pm \sqrt{(-1)\left[\frac{4mk}{4m^2} - \left(\frac{c}{2m}\right)^2\right]} = -\frac{c}{2m} \pm j\sqrt{\frac{k}{m} - \left(\frac{c}{2m}\right)^2}  (6.53)

With the substitution of ζ, this results in

\frac{c}{2m} = \frac{2\zeta\sqrt{mk}}{2m} = \zeta\sqrt{\frac{k}{m}} = \zeta\omega_n  (6.54)

Thus, the eigenvalue λ can be expressed as

\lambda_{1,2} = -\zeta\omega_n \pm j\sqrt{1-\zeta^2}\,\omega_n  (6.55)

Note that

\lambda_2 = \lambda_1^*  (6.56)

where (.)* denotes the complex conjugate of (.). Figure 6.6 illustrates the eigenvalues. From Figure 6.6, it is shown that

\lambda_1\lambda_2 = \lambda_1\lambda_1^* = \frac{k}{m} = \omega_n^2  (6.57)

FIGURE 6.6  Eigenvalues of an underdamped system (the conjugate pair −ζωn ± j√(1 − ζ²)ωn in the complex plane).



and

\lambda_1 + \lambda_2 = -\frac{c}{m} = -2\zeta\omega_n  (6.58)

6.1.1.2.6  Damped Natural Frequency

Equation 6.55 can be further simplified to

\lambda_{1,2} = -\zeta\omega_n \pm j\omega_d

where

\omega_d = \omega_n\sqrt{1-\zeta^2}  (6.59)

Here, ωd is called the damped natural frequency. Because 0 ≤ ζ < 1,

\omega_d = \omega_n\sqrt{1-\zeta^2} \le \omega_n  (6.60)

Example 6.5

A system has an eigenvalue of λ = −3.0000 + 9.5394j; find the corresponding undamped and damped natural frequencies and the damping ratio.
The damped natural frequency ωd is

ωd = Im(λ) = 9.5394 (rad/s)

The undamped natural frequency ωn is

ωn = |λ| = [Re(λ)2 + Im(λ)2]1/2 = 10.0 (rad/s)

The damping ratio ζ is

ζ = −Re(λ)/ωn = 0.3
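The same identities translate directly into code; a minimal sketch using the eigenvalue of this example:

lam = complex(-3.0, 9.5394)     # the given eigenvalue
wd = lam.imag                   # damped natural frequency (rad/s)
wn = abs(lam)                   # undamped natural frequency, per Equation 6.57
zeta = -lam.real / wn           # damping ratio, per Equation 6.55
print(wd, wn, zeta)             # 9.5394, 10.0, 0.3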

6.1.1.2.7  Energy Dissipation


In Figure 6.7a, the area of the damping force–displacement loop, ΔE, is the energy dissipated during a cycle. If the amplitude of the steady-state displacement remains constant, then there must be an energy input equal to ΔE. How to input this amount of energy will be discussed in Section 6.2.
In the free decay case, the energy input is zero, so that with the capacity of energy dissipation, the amplitude of the response continuously decays. This is shown in Figure 6.7b. Note that for the decaying response, the curve is traced clockwise.

FIGURE 6.7  Energy dissipations: (a) steady-state response (damping force fc vs. displacement x); (b) free decay response (velocity vs. displacement, v0 = 1, x0 = 0).

6.1.1.2.8  Essence of Symbol j


First, compare Equation 6.36 to the following:

x(t) = d e^{\pm j\omega_n t}

Given this, Equation 6.36 can be rewritten as

x(t) = d e^{-\zeta\omega_n t} e^{\pm j\sqrt{1-\zeta^2}\,\omega_n t} = d e^{-\zeta\omega_n t} e^{\pm j\omega_d t}  (6.61)

Both equations that describe vibrations share a similar term, e^{\pm j\omega_n t} or e^{\pm j\omega_d t}; therefore, the "j" term must be related to dynamic oscillations. In fact, this term implies energy exchange between the potential and kinetic energies. If this term is eliminated, then there will be no energy exchange and no vibration.

6.1.2 Free Decay Response


Now consider the vibration due to initial condition only, the free decay response, in
detail. We can have an alternative form of the solution of Equation 6.35, which can
be written as

x(t) = d e^{-\zeta\omega_n t}\sin(\omega_d t + \phi)  (6.62)

Readers may consider how this form compares to that of Equation 6.5.

6.1.2.1 Amplitude d and Phase ϕ


Similar to undamped systems, with initial conditions

x(0) = x_0 \quad \text{and} \quad \dot{x}(0) = \dot{x}_0



the amplitude and phase can be calculated as follows.

When t = 0,

x(0) = d e^{-\zeta\omega_n \cdot 0}\sin(\omega_d \cdot 0 + \phi) = d\sin(\phi) = x_0  (6.63)

Rewriting the above equation results in

d = \frac{x_0}{\sin\phi}  (6.64)

Taking the derivative of Equation 6.62,

\dot{x}(t) = \frac{d}{dt}x(t) = -\zeta\omega_n d e^{-\zeta\omega_n t}\sin(\omega_d t + \phi) + \omega_d d e^{-\zeta\omega_n t}\cos(\omega_d t + \phi)  (6.65)

Substituting the initial conditions in,

\dot{x}(0) = -\zeta\omega_n d e^{-\zeta\omega_n \cdot 0}\sin(\omega_d \cdot 0 + \phi) + \omega_d d e^{-\zeta\omega_n \cdot 0}\cos(\omega_d \cdot 0 + \phi) = d(-\zeta\omega_n\sin\phi + \omega_d\cos\phi)  (6.66)

Furthermore, substitution of Equation 6.64 into Equation 6.66 yields

\dot{x}(0) = \frac{x_0}{\sin\phi}(-\zeta\omega_n\sin\phi + \omega_d\cos\phi) = x_0\left(-\zeta\omega_n + \omega_d\frac{\cos\phi}{\sin\phi}\right) = \dot{x}_0  (6.67)

Consequently,

\frac{\cos\phi}{\sin\phi} = \cot\phi = \frac{\dot{x}_0 + x_0\zeta\omega_n}{x_0\omega_d}  (6.68)

and

\phi = \tan^{-1}\left(\frac{x_0\omega_d}{\dot{x}_0 + x_0\zeta\omega_n}\right)  (6.69)

Refer to Figure 6.8 for reference. Further calculation yields

\sin\phi = \frac{x_0\omega_d}{\sqrt{(\dot{x}_0 + x_0\zeta\omega_n)^2 + (x_0\omega_d)^2}}  (6.70)

FIGURE 6.8  Phase angle (right triangle with legs x0ωd and ẋ0 + x0ζωn).

Thus, resulting in

d = \frac{\sqrt{(\dot{x}_0 + x_0\zeta\omega_n)^2 + (x_0\omega_d)^2}}{\omega_d}  (6.71)

and

\phi = \tan^{-1}\left(\frac{\omega_d x_0}{\dot{x}_0 + \zeta\omega_n x_0}\right) + h_\phi\pi  (6.72)

The reason we have the term hϕπ in Equation 6.72 is to account for the cases of \dot{x}_0 + x_0\zeta\omega_n = 0 as well as \dot{x}_0 + x_0\zeta\omega_n < 0. The period of the tangent function is π; therefore, the arctangent function is multivalued. The period of the sine and cosine functions is 2π. Consequently, the Heaviside function hϕ cannot be chosen arbitrarily. Based on the fact that most computational programs, such as MATLAB®, calculate the arctangent by limiting the values from −π/2 to +π/2, hϕ is defined as

h_\phi = \begin{cases} 0, & v_0 + \zeta\omega_n x_0 > 0 \\ 1, & v_0 + \zeta\omega_n x_0 < 0 \end{cases}  (6.73)

As shown in Figure 6.9, there can be four instances of the phase angle ϕ. This is a result of the possible combinations of ωd x0 and v0 + ζωn x0, each of which can be either positive or negative. Regardless of the value of ωd x0, it is seen from Figure 6.9 and Equation 6.69 that the sign of v0 + ζωn x0 determines the value of hϕ.
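In code, this case analysis is exactly what a two-argument arctangent performs. The sketch below (the function name free_decay_params is ours, introduced only for illustration) evaluates Equations 6.71 through 6.73; Python's math.atan2 returns an angle that agrees with Equation 6.72 up to an immaterial multiple of 2π:

import math

def free_decay_params(x0, v0, wn, zeta):
    """Amplitude d and phase phi of x(t) = d*exp(-zeta*wn*t)*sin(wd*t + phi)."""
    wd = wn * math.sqrt(1.0 - zeta**2)       # damped natural frequency, Equation 6.59
    num = x0 * wd                            # numerator of Equation 6.72
    den = v0 + zeta * wn * x0                # denominator of Equation 6.72
    d = math.sqrt(den**2 + num**2) / wd      # Equation 6.71
    phi = math.atan2(num, den)               # quadrant-aware arctangent (h_phi built in)
    return d, phi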

FIGURE 6.9  Determination of phase angle ϕ (the four sign combinations of ωd x0 and ẋ0 + ζωn x0 in the complex plane).

Example 6.6

A linear system with mass = 100 kg, stiffness = 1000 kN/m, and damping ratio = 0.5 is excited by the initial conditions x0 = 0.01 m and v0 = −2 m/s; calculate and plot the free decay displacement.
The undamped and damped natural frequencies are given by

ωn = (k/m)1/2 = 100 (rad/s), and

ωd = (1 − ζ2)1/2 ωn = 86.6 (rad/s)

The amplitude is

d = \frac{\sqrt{(\dot{x}_0 + x_0\zeta\omega_n)^2 + (x_0\omega_d)^2}}{\omega_d} = 0.02 (m)

Because \dot{x}(0) + \zeta\omega_n x(0) = -1.5 < 0, the phase angle is

\phi = \tan^{-1}\left(\frac{\omega_d x_0}{\dot{x}_0 + \zeta\omega_n x_0}\right) + \pi = 2.62

Therefore, the response is

x(t) = d e^{-\zeta\omega_n t}\sin(\omega_d t + \phi) = 0.02 e^{-50t}\sin(86.6t + 2.62)

The time history is plotted in Figure 6.10.



FIGURE 6.10  Free decay vibration.
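The example can be reproduced and the time history of Figure 6.10 regenerated with a short script (a sketch; matplotlib is assumed to be available):

import numpy as np
import matplotlib.pyplot as plt

wn, zeta = 100.0, 0.5                     # from Example 6.6
wd = wn * np.sqrt(1.0 - zeta**2)          # 86.6 rad/s
d, phi = 0.02, 2.62                       # amplitude (m) and phase (rad) found above

t = np.linspace(0.0, 0.2, 2000)
x = d * np.exp(-zeta * wn * t) * np.sin(wd * t + phi)

plt.plot(t, 1e3 * x)                      # plot in millimeters, as in Figure 6.10
plt.xlabel('Time (s)')
plt.ylabel('Displacement (mm)')
plt.show()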

6.2 Periodically Forced Vibration


This section describes periodically forced vibration. A periodic forcing function can
be represented by a Fourier series, which consists of a group of harmonic functions.
We first consider the harmonic excitation followed by using linear combinations to
obtain the responses of periodic forced vibrations.

6.2.1 Harmonic Excitation
6.2.1.1 Equation of Motion
From the graphic description of damped SDOF systems shown in Figure 6.11, the following equation of motion is obtained with initial conditions \dot{x}(0) and x(0).

 
FIGURE 6.11  Damped SDOF system with excitation force.



mx + cx + kx = f (t )

 x (0) = v0 (6.74)
 x (0) = x
 0

Equation 6.74 is a complete form to illustrate a SDOF vibration, often referred to as


the m-c-k equation.

6.2.1.2 Harmonically Forced Response


To examine the solution of the m-c-k equation, we first consider the instance of a
sinusoidal excitation, which is referred to as the harmonically forced response.

6.2.1.2.1  Forcing Function


The forcing function for harmonic excitation is sinusoidal, specifically

f(t) = f0sin(ωt) (6.75a)

or

f(t) = f0cos(ωt) (6.75b)

In the above, f0 is the deterministic amplitude of force and ω is a driving frequency. Thus, this results in

m\ddot{x} + c\dot{x} + kx = f(t) = f_0\sin(\omega t)  (6.76)

6.2.1.2.2  Solution, Forced Response


The general solution of Equation 6.76 contains two portions:

x(t) = x h(t) + xp(t) (6.77)

in which x h(t) is the response due to the initial displacement and velocity and xp(t) is
the particular solution due to the force excitation.
The particular solution can be expressed as

xp(t) = xpt(t) + xps(t) (6.78)

where xpt(t) is the transient response due to the force f(t) and xps(t) is the steady-state
solution. The total transient response, denoted by xt(t), is

xt(t) = x h(t) + xpt(t) (6.79)

Now, we first consider the steady-state response.


The condition of steady state signifies the work done by the external force during
a cycle, ΔW, is equal to the energy dissipated by the vibration system, ΔE, that is,

ΔW = ΔE (6.80)
Single-Degree-of-Freedom Vibration Systems 303

and

xps(t) = xp0sin(ωt + ϕ) (6.81)

In this case, the amplitude is (refer to Figure 6.7a)

xp0 = const

6.2.1.2.3  Complex Response Method


There are several methods to solve xp0. In the complex response method, it is assumed
that

f(t) = f_0(\cos(\omega t) + j\sin(\omega t)) = f_0 e^{j\omega t}  (6.82)

Equation 6.82 linearly combines the two cases, expressed by Equations 6.75a
and 6.75b, by using complex functions. This case does not exist in the real world.
Because the real and the imaginary domain are orthogonal, the response due to the
real and the imaginary portions of the excitation will also be orthogonal. Suppose
the response can be written as

x_{ps}(t) = x_{p0} e^{j(\omega t + \phi)}  (6.83)

The response due to the real force, f_0\cos(\omega t), denoted by x_{ps}^{(R)}(t), is

x_{ps}^{(R)}(t) = \mathrm{Re}\left[x_{p0} e^{j(\omega t + \phi)}\right] = x_{p0}\cos(\omega t + \phi)  (6.84a)

and the response due to the imaginary force, f_0\sin(\omega t), denoted by x_{ps}^{(I)}(t), is

x_{ps}^{(I)}(t) = \mathrm{Im}\left[x_{p0} e^{j(\omega t + \phi)}\right] = x_{p0}\sin(\omega t + \phi)  (6.84b)

In addition, Equation 6.83 can be written as

x_{ps}(t) = x_{p0} e^{j\phi} e^{j\omega t} = \tilde{x} e^{j\omega t}  (6.85)

where \tilde{x} is a complex valued amplitude and

\tilde{x} = x_{p0} e^{j\phi}  (6.86)

Taking the first and the second order derivatives of Equation 6.85 with respect to t yields

\dot{x}_{ps}(t) = j\omega\tilde{x} e^{j\omega t}  (6.87a)

and

\ddot{x}_{ps}(t) = -\omega^2\tilde{x} e^{j\omega t}  (6.87b)

Substitution of Equations 6.82, 6.85, 6.86, 6.87a, and 6.87b into Equation 6.74 results in

-\omega^2 m\tilde{x} e^{j\omega t} + j\omega c\tilde{x} e^{j\omega t} + k\tilde{x} e^{j\omega t} = f_0 e^{j\omega t}  (6.88)

This gives a solution of

\tilde{x} = \frac{f_0}{-\omega^2 m + j\omega c + k}  (6.89)

or

\tilde{x} = \frac{f_0 \div k}{(-\omega^2 m + j\omega c + k) \div k} = \frac{f_0}{k}\,\frac{1}{-\omega^2/\omega_n^2 + j2\zeta\omega/\omega_n + 1} = \frac{f_0}{k}\,\frac{1}{1 - r^2 + j2\zeta r}  (6.90)

where r is the frequency ratio

r = \frac{\omega}{\omega_n}  (6.91)

Equation 6.89 can also be expressed as

\tilde{x} = \frac{f_0}{k}\,\frac{1}{\sqrt{(1-r^2)^2 + (2\zeta r)^2}}\, e^{-j\tan^{-1}\frac{2\zeta r}{1-r^2}}  (6.92)

The absolute value of the complex valued amplitude is the amplitude of the steady-state response xps(t), that is,

x_{p0} = |\tilde{x}| = \frac{f_0}{k}\,\frac{1}{\sqrt{(1-r^2)^2 + (2\zeta r)^2}}  (6.93)

The phase angle of the complex valued amplitude is the phase angle of the steady-state response xps(t), that is,

\phi = \angle(\tilde{x}) = \angle\left(e^{-j\tan^{-1}\frac{2\zeta r}{1-r^2}}\right) = -\tan^{-1}\frac{2\zeta r}{1-r^2}  (6.94)

Because the tangent function is periodic with period π, the inverse tangent is multivalued. A more precise description of the phase angle is given by

\phi = \angle(\tilde{x}) = -\tan^{-1}\frac{2\zeta r}{1-r^2} + h_\phi\pi  (6.95)

where hϕ is the Heaviside step function given by

h_\phi = \begin{cases} 0, & \omega < \omega_n \\ 1, & \omega > \omega_n \end{cases}  (6.96)


Example 6.7

An m-c-k system with mass = 10 kg, c = 15 N·s/m, and k = 2000 N/m is excited by a harmonic force f1(t) = 100 sin(4t) under zero initial conditions. Calculate and plot the response of displacement. If the excitation changes to f2(t) = 100 sin(14t), how does the response change accordingly?
First, the natural frequency and damping ratio are, respectively, calculated to be 14.14 rad/s and 0.05.
For f1(t) and f2(t), the frequency ratios are, respectively, r1 = ω1/ωn = 0.283 and r2 = ω2/ωn = 0.990.
Let us now consider the steady-state solution xps(t). Its amplitude, xp0, can be calculated by taking the absolute value of \tilde{x}. That is,

x_{p0} = |\tilde{x}| = \frac{f_0}{k}\,\frac{1}{\sqrt{(1-r^2)^2 + (2\zeta r)^2}}

The amplitude xp0 due to f1(t) is 100/2000/0.9205 = 0.05
The amplitude xp0 due to f2(t) is 100/2000/0.1068 = 0.468
The phase angle of the steady-state solution xps(t) can be calculated by using the angle of \tilde{x}, that is,

\phi = \angle(\tilde{x}) = -\tan^{-1}\frac{2\zeta r}{1-r^2} + 0\cdot\pi

where hϕ = 0 because ω < ωn in both cases.
The phase ϕ due to f1(t) is −tan−1(0.03/0.92) = −0.0326
The phase ϕ due to f2(t) is −tan−1(0.105/0.02) = −1.383
Therefore, based on Equation 6.81, for the steady-state response due to f1(t), we have

xp1(t) = xp0 sin(ωt + ϕ) = 0.05 sin(4t − 0.033)

and for the steady-state response due to f2(t), we have

xp2(t) = 0.468 sin(14t − 1.383)

The results are plotted in Figure 6.12, where the dotted line is xp1(t) and the solid line is xp2(t).
From Figure 6.12, it is seen that with driving frequency = 14 rad/s, a comparatively much larger response amplitude is obtained; this phenomenon is resonance, which will be further discussed in detail in Section 6.2.1.3.1. Also from Figure 6.12, we see that the amplitudes of the responses jump to their peak values in the first quarter cycle. At least for the excitation f2(t) and the corresponding resonance, in our experience, this direct jump is not typical. Because resonance is a cumulative effect, namely, the amplitude must gradually increase over a certain duration to reach the peak value of the steady state, there must be a term in the total solution that describes the transient phenomenon. This is why we have to consider Equation 6.78 with the transient term xpt(t). In the following, using the concept of dynamic magnification factors and the semidefinite method, we can determine the transient response. In addition, we can also use the convolution of the input force and unit impulse response functions to derive the transient response.
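The steady-state numbers of this example follow in a few lines from the complex amplitude of Equation 6.89 (a sketch; only the example's data are used):

import cmath

m, c, k, f0 = 10.0, 15.0, 2000.0, 100.0
for w in (4.0, 14.0):                             # the two driving frequencies (rad/s)
    x_tilde = f0 / (-m * w**2 + 1j * w * c + k)   # complex amplitude, Equation 6.89
    print(w, abs(x_tilde), cmath.phase(x_tilde))  # ~0.054, -0.033 and ~0.468, -1.383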

FIGURE 6.12  Responses under f1(t) and f2(t).



6.2.1.3 Dynamic Magnification
Equation 6.90 implies that the amplitude of \tilde{x} is a function of r and ζ. This function unveils an important phenomenon: the amplitude of the vibration response can be magnified or reduced, depending on the frequency range and the damping capacity.

6.2.1.3.1  Dynamic Magnification Factor

In comparing Equation 6.90 with Equation 6.86, the absolute value of \tilde{x} can alternatively be written as

x_{p0} = \frac{f_0}{k}\,\frac{1}{\sqrt{(1-r^2)^2 + (2\zeta r)^2}} = \frac{f_0}{k}\,\beta_D  (6.97)

In the above, the term βD is referred to as the dynamic magnification factor because the amplitude of the response is magnified from the static response f0/k. This is shown in Figure 6.13a, and the dynamic magnification factor can be written as

\beta_D = \frac{x_{p0} k}{f_0} = \frac{1}{\sqrt{(1-r^2)^2 + (2\zeta r)^2}}  (6.98)

In the term βD, the subscript D stands for displacement.

In Figure 6.13a, it is seen that when the ratio r of the driving frequency to the natural frequency approaches unity, the value of the dynamic magnification factor βD becomes comparatively much larger than in the rest of the frequency region; this is referred to as resonance, and it is especially pronounced when the damping ratio is small. In the previous example, we saw this phenomenon in the time domain. Now, the plot in Figure 6.13a shows the variation of the dynamic magnification factor βD in the frequency domain. At the resonant point, βD reaches its peak value.
FIGURE 6.13  Plot of dynamic magnification factors: (a) amplitudes; (b) phases.

The frequency band where the value of βD is greater than 70.7% of the peak value is defined as the resonance region.
It is seen that when the frequency ratio is much smaller than unity, namely, when the driving frequency is comparatively much smaller than the natural frequency, the value of βD is close to unity, and it gradually increases as the ratio r becomes larger.
When the frequency ratio is larger than unity, the value of βD becomes smaller. When the frequency ratio is much larger than unity, namely, when the driving frequency is comparatively much larger than the natural frequency, the value of βD approaches zero.
The phase angle between the forcing function and the response is

\phi = -\tan^{-1}\frac{2\zeta r}{1-r^2} + h_\phi\pi  (6.99)

In this instance, the Heaviside function is given in Equation 6.96. From Figure 6.13b, we can see that when the frequency ratio varies, so does the phase angle. As the frequency ratio becomes larger, the phase angle decreases from zero toward −π (or −180°). At exactly the resonant point, the phase angle becomes −π/2.
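A short numerical sketch of Equation 6.98: for ζ < 1/√2, differentiating Equation 6.98 shows the peak of βD occurs at r = √(1 − 2ζ²) with value 1/(2ζ√(1 − ζ²)), which the code below confirms by direct evaluation:

import numpy as np

def beta_D(r, zeta):
    """Dynamic magnification factor of Equation 6.98."""
    return 1.0 / np.sqrt((1.0 - r**2)**2 + (2.0 * zeta * r)**2)

for zeta in (0.05, 0.10, 0.30):
    r_peak = np.sqrt(1.0 - 2.0 * zeta**2)                 # resonant frequency ratio
    print(zeta, r_peak, beta_D(r_peak, zeta),
          1.0 / (2.0 * zeta * np.sqrt(1.0 - zeta**2)))    # the last two values agree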

6.2.1.4 Transient Response under Zero Initial Conditions

With zero initial conditions, we have particular solutions. The steady-state response is only a portion of the particular solution. To obtain the transient response, it is not necessary to use the above-mentioned dynamic magnification factor. However, with the help of such a concept, we can gain deeper insight into how the steady state is reached, especially into how resonance accumulates.
Now, let us consider the transient part of the particular solution xpt(t) based on Equations 6.76 and 6.78, namely,

xp(t) = xpt(t) + xps(t)

From the above discussion, it is known that

x_{ps}(t) = (f_0/k)\beta_D\sin(\omega t + \phi)

We also know that a transient response is a free decay vibration, which may take the following form:

x_{pt}(t) = a e^{-\zeta\omega_n t}\sin(\omega_d t + \theta_t)  (6.100)

where both the amplitude a and the phase θt are to be determined. Now, the total particular solution of displacement may be written as

x_p(t) = a e^{-\zeta\omega_n t}\sin(\omega_d t + \theta_t) + (f_0/k)\beta_D\sin(\omega t + \phi)  (6.101a)



We use the semidefinite method to see if this is a true response. To do so, we first determine the parameters a and θt, then check whether Equation 6.101a satisfies the m-c-k equation with a sinusoidal excitation f0 sin(ωt) and zero initial conditions.
Taking the derivative of both sides of Equation 6.101a with respect to time t, the possible velocity can be written as

\dot{x}_p(t) = a e^{-\zeta\omega_n t}\left[-\zeta\omega_n\sin(\omega_d t + \theta_t) + \omega_d\cos(\omega_d t + \theta_t)\right] + (f_0/k)\beta_D\,\omega\cos(\omega t + \phi)  (6.101b)

With zero initial conditions, we thus have the following two equations:

x_p(0) = 0 = a\sin(\theta_t) + (f_0/k)\beta_D\sin(\phi)  (6.102a)

and

\dot{x}_p(0) = 0 = a\left[-\zeta\omega_n\sin(\theta_t) + \omega_d\cos(\theta_t)\right] + (f_0/k)\beta_D\,\omega\cos(\phi)  (6.102b)

From Equation 6.102a, we can write

a = -(f_0/k)\beta_D\sin(\phi)/\sin(\theta_t)  (6.103)

Substituting Equation 6.103 into Equation 6.102b results in

-\frac{(f_0/k)\beta_D\sin(\phi)}{\sin(\theta_t)}\left[-\zeta\omega_n\sin(\theta_t) + \omega_d\cos(\theta_t)\right] + (f_0/k)\beta_D\,\omega\cos(\phi) = 0

Therefore, we have

-\zeta\omega_n\sin(\phi) + \sqrt{1-\zeta^2}\,\omega_n\cot(\theta_t) + \omega\cos(\phi) = 0  (6.104)

Because

\tan(\phi) = \frac{2\zeta r}{1-r^2}

we have

\sin(\phi) = \frac{2\zeta r}{\sqrt{(1-r^2)^2 + (2\zeta r)^2}} = \beta_D(2\zeta r)  (6.105a)

as well as

\cos(\phi) = \beta_D(1-r^2)  (6.105b)

Substitution of Equations 6.105a and 6.105b into Equation 6.104 yields

\tan(\theta_t) = \frac{2\zeta\sqrt{1-\zeta^2}}{2\zeta^2 + r^2 - 1}  (6.106a)

Furthermore,

\sin(\theta_t) = \frac{2\zeta\sqrt{1-\zeta^2}}{\sqrt{(2\zeta r)^2 + (r^2-1)^2}} = \beta_D\left(2\zeta\sqrt{1-\zeta^2}\right)  (6.106b)

Comparing Equation 6.106b with 6.106a, we can write

\theta_t = (-1)^{h_{\theta t}}\sin^{-1}\left[\beta_D\left(2\zeta\sqrt{1-\zeta^2}\right)\right] + h_{\theta t}\,\pi  (6.107a)

where the Heaviside step function hθt is given by

h_{\theta t} = \begin{cases} 0, & 2\zeta^2 + r^2 - 1 > 0 \\ 1, & 2\zeta^2 + r^2 - 1 < 0 \end{cases}  (6.107b)

Substitution of Equations 6.105a and 6.106b into Equation 6.103 results in

a = \frac{(f_0/k)\beta_D^2(2\zeta r)}{\beta_D\left(2\zeta\sqrt{1-\zeta^2}\right)} = \frac{f_0\, r\beta_D}{k\sqrt{1-\zeta^2}}  (6.108)

Finally, to complete the semidefinite approach, we can substitute x_p(t), \dot{x}_p(t), and \ddot{x}_p(t) into Equation 6.76 to see if it is balanced. With a positive result, it can be shown that Equation 6.101a is indeed a particular solution and thus Equation 6.100 is the transient part of the particular response. In Figure 6.14a, the normalized amplitudes of the transient solution with zero initial conditions (namely, f0/k = 1) versus the frequency ratio are plotted. It is seen that when r = 1, the amplitude reaches its peak value. Similar to the amplitude of the steady-state responses, the smaller the damping ratio, the larger the peak value. In Figure 6.14b, the phase angles versus the frequency ratio are also plotted. It is seen that with different damping ratios, the curves of the phase angle can be rather different.

FIGURE 6.14  Amplitudes and phase angles of xp(t): (a) normalized amplitudes; (b) phases.

It is noted that, unlike the free decay vibration caused purely by initial velocity or initial displacement (or both), which exists without any other conditions, the transient part of a particular solution cannot exist without the presence of a steady-state solution.

6.2.1.4.1  Transfer Function Method


To study the steady-state response, we can also use the transfer function method,
which was described in Chapter 4 and will be further discussed as follows.

6.2.1.4.1.1   Transfer Function  When all the initial conditions are zero, the following Laplace transforms (Pierre-Simon Laplace, 1749–1827) exist:

L[x(t)] = X(s)  (6.109)

and

L[\dot{x}(t)] = sX(s)  (6.110)

Furthermore,

L[\ddot{x}(t)] = s^2 X(s)  (6.111)

as well as

L[f(t)] = F(s)  (6.112)

In general, s is the Laplace variable, where

s = \sigma + j\omega  (6.113)

Taking the Laplace transform of both sides of Equation 6.76 yields

L[m\ddot{x}(t) + c\dot{x}(t) + kx(t)] = L[f(t)]  (6.114)

and as a result,

ms^2 X(s) + csX(s) + kX(s) = F(s)  (6.115)

or

[ms^2 + cs + k]X(s) = F(s)  (6.116)

The transfer function for the SDOF system can be defined as

H(s) = \frac{X(s)}{F(s)} = \frac{1}{ms^2 + cs + k}  (6.117)

6.2.1.4.1.2   Frequency Response Function  In the case of the steady-state response, σ = 0 in Equation 6.113, that is,

s = j\omega

and the frequency response function is

H(j\omega) = \frac{X(\omega)}{F(\omega)} = \frac{1}{m(j\omega)^2 + c(j\omega) + k} = \frac{1}{-m\omega^2 + j\omega c + k}  (6.118)

H(j\omega) = \frac{1}{k}\,\frac{1}{-\frac{m}{k}\omega^2 + \frac{c}{k}(j\omega) + 1} = \frac{1}{k}\,\frac{1}{1 - r^2 + 2j\zeta r} = \frac{1}{k}\beta_D e^{j\phi}  (6.119)

This results in

H(j\omega) = \left[\frac{1}{k}\beta_D\right] e^{j\phi}  (6.120)

|H(j\omega)| = \frac{1}{k}\beta_D  (6.121)

and

\angle H(j\omega) = \phi  (6.122)
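These relations can be cross-checked against SciPy's LTI utilities; a minimal sketch (assuming SciPy is available, with hypothetical m, c, and k):

import numpy as np
from scipy import signal

m, c, k = 10.0, 15.0, 2000.0
sys = signal.TransferFunction([1.0], [m, c, k])   # H(s) of Equation 6.117

w = np.array([4.0, 14.0, 20.0])                   # sample frequencies (rad/s)
_, H = signal.freqresp(sys, w=w)                  # H(jw) of Equation 6.118

wn = np.sqrt(k / m)
zeta = c / (2.0 * np.sqrt(m * k))
r = w / wn
beta = 1.0 / np.sqrt((1.0 - r**2)**2 + (2.0 * zeta * r)**2)
print(np.allclose(np.abs(H), beta / k))           # True: Equation 6.121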

The transfer function and the frequency response function are important concepts
in describing the dynamic behavior of a linear vibration system.

6.2.2 Base–Excitation and Force Transmissibility


6.2.2.1 Model of Base Excitation
We now consider a special case of base excitation, modeled by

m\ddot{x}_A + c\dot{x} + kx = 0  (6.123)

where xA is the absolute displacement. Let xg be the base displacement and x be the displacement of the mass relative to the base. This gives

x_A = x_g + x  (6.124)

\dot{x}_A = \dot{x}_g + \dot{x}  (6.125)

\ddot{x}_A = \ddot{x}_g + \ddot{x}  (6.126)

Consequently, the case of base excitation is written as

m\ddot{x} + c\dot{x} + kx = -m\ddot{x}_g  (6.127)

In comparing the above equation with Equation 6.76, the term -m\ddot{x}_g can be seen as a forcing function denoted by

f(t) = -m\ddot{x}_g  (6.128)

Taking the Laplace transform of both sides of Equation 6.127 with zero initial conditions results in

[ms^2 + cs + k]X(s) = -ms^2 X_g(s)  (6.129)

The transfer function between the ground displacement and the relative displacement of the base-isolator is

H_{Dr}(s) = \frac{X(s)}{X_g(s)} = \frac{-s^2}{s^2 + 2\zeta\omega_n s + \omega_n^2}  (6.130a)

Here, the first subscript D stands for displacement, whereas the second stands for relative. In the instance of a frequency response function, it is given by

H_{Dr}(\omega) = \frac{X(\omega)}{X_g(\omega)} = \frac{(\omega^2) \div \omega_n^2}{\left(-\omega^2 + 2j\zeta\omega\omega_n + \omega_n^2\right) \div \omega_n^2} = \frac{r^2}{1 - r^2 + 2j\zeta r}  (6.130b)

Note that the transfer function between the ground acceleration and the relative acceleration of the base-isolator, given by Equation 6.131, is identical to Equation 6.130b:

H_{Ar}(j\omega) = \frac{\ddot{X}(\omega)}{\ddot{X}_g(\omega)} = \frac{\omega^2 X(\omega)}{\omega^2 X_g(\omega)} = \frac{r^2}{1 - r^2 + 2j\zeta r}  (6.131)

In Equation 6.131, the first subscript A stands for acceleration. In terms of the dynamic magnification factor and phase angle,

H_{Ar}(j\omega) = \beta_{Ar} e^{j\Phi}  (6.132)

where |(.)| denotes the magnitude of the function (.) and

\beta_{Ar} = \left|\frac{\ddot{x}}{\ddot{x}_g}\right| = \left|\frac{r^2}{1 - r^2 + 2j\zeta r}\right| = \frac{r^2}{\sqrt{(1-r^2)^2 + (2\zeta r)^2}}  (6.133)

In the above, βAr is the dynamic magnification factor for the relative acceleration \ddot{x}(t) excited by the harmonic base acceleration \ddot{x}_g(t). Its value is exactly the dynamic magnification factor for the relative displacement x(t) excited by the harmonic base displacement xg(t), denoted by βDr. Namely,

\beta_{Dr} = \beta_{Ar}  (6.134)

Additionally,

\Phi = \tan^{-1}\left(\frac{2\zeta r}{1-r^2}\right) + h_\phi\pi  (6.135)

where hϕ is the Heaviside step function given by Equation 6.96. The transfer function between the ground acceleration and the absolute acceleration of the base-isolator is

H_{Aa}(s) = \frac{\ddot{X}(s) + \ddot{X}_g(s)}{\ddot{X}_g(s)} = \frac{r^2}{1 - r^2 + 2j\zeta r} + 1 = \frac{1 + 2j\zeta r}{1 - r^2 + 2j\zeta r}  (6.136)

Here, the second subscript a stands for absolute. In terms of the dynamic magnification factor and phase angle, as well as for the steady-state solution, let s = jω:

H_{Aa}(j\omega) = \beta_{Aa} e^{j\Phi}  (6.137)

where

\beta_{Aa} = \left|\frac{\ddot{x}_A}{\ddot{x}_g}\right| = \left|\frac{1 + 2j\zeta r}{1 - r^2 + 2j\zeta r}\right| = \frac{\sqrt{1 + (2\zeta r)^2}}{\sqrt{(1-r^2)^2 + (2\zeta r)^2}}  (6.138)

In Equation 6.138, \ddot{x}_A/\ddot{x}_g is the ratio of the amplitudes of the absolute and ground accelerations, and

\Phi = \tan^{-1}\left[\frac{2\zeta r^3}{1 - (1 - 4\zeta^2)r^2}\right] + h_\Phi\pi  (6.139)

where hΦ is the Heaviside step function given by

h_\Phi = \begin{cases} 0, & 1 - (1 - 4\zeta^2)r^2 > 0 \\ 1, & 1 - (1 - 4\zeta^2)r^2 < 0 \end{cases}  (6.140)

Therefore, suppose the normalized ground excitation is

\ddot{x}_g(t) = \sin(\omega t)  (6.141)

The steady-state absolute acceleration can then be written as

\ddot{x}_A(t) = \beta_{Aa}\sin(\omega t - \Phi)  (6.142)

The dynamic magnification factor and phase angle versus the frequency ratio are plotted in Figure 6.15a and b, respectively. Compared with the dynamic magnification factors for the normalized amplitudes of forced responses shown in Figure 6.13a, we can observe that both have a similar trend of ups and downs. However, for the case of base isolation, when the amplitude is reduced from the resonance value toward zero, no matter what the damping ratio is, the curves all reach unity at exactly r = 1.4142 (= √2).
Note that in Equation 6.81, the phase angle for the sine function is sin(ωt + ϕ). However, for the cases of base excitation, the phase angle for the sine function is sin(ωt − Φ), namely, with a minus sign.

Dynamic magnification factors Phase angles between absolute acceleration


of absolute acceleration and ground acceleration
2 3
10
Damping ratio = 5% Damping ratio = 5%
Damping ratio = 10% Damping ratio = 10%
Damping ratio = 30% 2.5 Damping ratio = 30%
Damping ratio = 70% Damping ratio = 70%
Damping ratio = 100% Damping ratio = 100%

101 2

Phase (rad)
Magnitude

1.5

100 1

0.5

10–1 0
0 0.5 1 1.5 2 2.5 0 0.5 1 1.5 2 2.5
(a) Frequency ratio (b) Frequency ratio

FIGURE 6.15  Magnitude (a) and phase (b) of absolute acceleration due to ground excitation.

Example 6.8

A machine with mass = 1500 kg is supported by a stiffness k = 1480.450 kN/m with a damping ratio of 0.01 and is excited by a ground acceleration with a driving frequency of 7 Hz and an amplitude of 1.5 g (1 g = 9.8 m/s2). If the absolute acceleration needs to be reduced to less than 1.2 g and the relative displacement must be limited to less than 14.0 mm, design the required stiffness and damping ratio.
The ground displacement is y = 1.5 × 9.8/(5 × 2π)2 = 14.9 mm.
The natural frequency is (k/m)1/2/2π = 5.0 (Hz), so that the frequency ratio is r = 7/5 = 1.40. Then, the dynamic magnification factors for the absolute acceleration and the relative displacement are, respectively, given by

\beta_{Aa} = \frac{\sqrt{1 + (2\zeta r)^2}}{\sqrt{(1-r^2)^2 + (2\zeta r)^2}} = 1.0875

and

\beta_{Dr} = \beta_{Ar} = \frac{r^2}{\sqrt{(1-r^2)^2 + (2\zeta r)^2}} = 2.048

Thus, the absolute acceleration is 1.5 g × 1.0875 = 1.631 g and the relative displacement is 14.9 mm × 2.048 = 30.51 mm, neither of which satisfies the requirements.
It is seen that the displacement is more than 20 mm and the acceleration is more than 1.2 g. To reduce the acceleration, we need the dynamic magnification factor to be

\beta_{Aa} \le 1.2/1.5 = 0.8

If we keep the same damping ratio, the new frequency ratio must satisfy

\frac{1 + (2\zeta r)^2}{(1-r^2)^2 + (2\zeta r)^2} \le 0.8^2

so that we have r > 1.45. Let us choose r = 1.46. However, in this case, βDr is calculated to be 1.9, so that the displacement is 14.9 × 1.9 mm > 14.0 mm. Next, we choose the damping ratio to be 0.68. In this case, βDr is calculated to be 0.93, so that the displacement is 13.9 mm < 14 mm. The final design is a natural frequency of 5/1.46 = 3.42 (Hz) and a damping ratio of 0.68.

6.2.2.2 Force Transmissibility
Suppose a forcing function f0 sin(ωt) is applied to an m-c-k structure; find the amplitude of the steady-state force transferred from the structure to the ground. In Figure 6.11, the dynamic force transferred to the ground, denoted by fg(t), is the sum of the damping and stiffness forces, namely,

f_g(t) = c\dot{x} + kx  (6.143)

From the above discussion, it is seen that the steady-state displacement is given by

x(t) = (f_0/k)\beta_D\sin(\omega t + \phi)

and the velocity is given by

\dot{x}(t) = (f_0/k)\beta_D\,\omega\cos(\omega t + \phi)

Therefore,

f_g(t) = (f_0/k)\beta_D\left[k\sin(\omega t + \phi) + c\omega\cos(\omega t + \phi)\right]  (6.144)

Let the amplitude be denoted by fG, which can be written as

f_G = |f_g(t)| = (f_0/k)\beta_D\left[k^2 + (c\omega)^2\right]^{1/2} = f_0\left[\frac{k^2 + c^2\omega^2}{k^2\left[(1-r^2)^2 + (2\zeta r)^2\right]}\right]^{1/2} = f_0\sqrt{\frac{1 + (2\zeta r)^2}{(1-r^2)^2 + (2\zeta r)^2}}  (6.145)

Denoting the dynamic magnification factor for the force transmissibility as βT, the amplitude of the ground force can be written as

f_G = \beta_T f_0  (6.146)

where

\beta_T = \sqrt{\frac{1 + (2\zeta r)^2}{(1-r^2)^2 + (2\zeta r)^2}}  (6.147)

and thus

\beta_T = \beta_{Aa}  (6.148)
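A quick check of Equations 6.147 and 6.148 (a sketch) confirms the property noted with Figure 6.15: the transmissibility equals unity at r = √2 for every damping ratio:

import math

def beta_T(r, zeta):
    """Force transmissibility, Equation 6.147 (equal to beta_Aa by Equation 6.148)."""
    return math.sqrt((1.0 + (2.0 * zeta * r)**2) /
                     ((1.0 - r**2)**2 + (2.0 * zeta * r)**2))

r = math.sqrt(2.0)
print([round(beta_T(r, z), 6) for z in (0.05, 0.10, 0.30, 0.70, 1.00)])  # all 1.0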

The term βT is often simply referred to as the force transmissibility.
To find the phase between the forcing function and the steady-state ground force fg(t), let us again apply the complex method by letting the forcing function be f(t) = f0 e^{jωt}. In this case, the ground force can be written as

f_g(t) = k\tilde{x}e^{j\omega t} + jc\omega\tilde{x}e^{j\omega t} = (k + jc\omega)\tilde{x}e^{j\omega t}  (6.149)

where the complex valued displacement amplitude \tilde{x} is given by Equation 6.92. As a result, the steady-state ground force can be further written as

f_g(t) = \frac{f_0(k + jc\omega)}{k\sqrt{(1-r^2)^2 + (2\zeta r)^2}}\, e^{-j\tan^{-1}\frac{2\zeta r}{1-r^2}}\, e^{j\omega t}  (6.150)

From the absolute value of the ground force described in Equation 6.150, we can verify that the dynamic magnification of the force transmissibility is indeed the one given in Equation 6.147. Furthermore, the phase difference can be written as

\Phi = \angle\left[\frac{f_0(k + jc\omega)}{k\sqrt{(1-r^2)^2 + (2\zeta r)^2}}\, e^{-j\tan^{-1}\frac{2\zeta r}{1-r^2}}\right]
= \angle\left[(1 + j2\zeta r)\beta_D\left\{\cos\left(\tan^{-1}\frac{2\zeta r}{1-r^2}\right) - j\sin\left(\tan^{-1}\frac{2\zeta r}{1-r^2}\right)\right\}\right]
= \angle\left[(1 + j2\zeta r)\left\{\beta_D(1-r^2) - j\beta_D(2\zeta r)\right\}\right]
= \angle\left[(1-r^2) + (2\zeta r)^2 + j\left\{(2\zeta r)(1-r^2) - (2\zeta r)\right\}\right]
= \tan^{-1}\frac{-2\zeta r^3}{1 - r^2 + (2\zeta r)^2} + h_\Phi\pi  (6.151)

where the Heaviside function hΦ is given by

h_\Phi = \begin{cases} 0, & 1 - r^2 + (2\zeta r)^2 > 0 \\ 1, & 1 - r^2 + (2\zeta r)^2 < 0 \end{cases}  (6.152)

The ground force can then be written as

f_g(t) = f_0\beta_T\sin(\omega t + \Phi)  (6.153)

Comparing Equations 6.151 and 6.139, we realize that the phase difference of the absolute acceleration excited by the ground and the phase difference of the ground force due to a force applied on the system are identical, similar to the corresponding magnification factors.

6.2.3 Periodic Excitations
6.2.3.1 General Response
Consider now a linear m-c-k system excited by a periodic forcing function f(t) of period T, with initial conditions x0 and v0. That is,

m\ddot{x} + c\dot{x} + kx = f(t), \quad x(0) = x_0, \quad \dot{x}(0) = v_0  (6.154)

Because the excitation is periodic, a Fourier series with a basic frequency of

\omega_T = 2\pi/T  (6.155)

is used to represent the forcing function. Suppose f(t) can be represented by the Fourier series

f(t) = \frac{f_{A0}}{2} + \sum_{n=1}^{\infty}\left[f_{An}\cos(n\omega_T t) + f_{Bn}\sin(n\omega_T t)\right]  (6.156)

where fA0, fAn, and fBn are the Fourier coefficients. Because the system is linear, the responses are first considered individually due to the forcing function

f_A = \frac{f_{A0}}{2}  (6.157)

with the initial conditions, denoted by x0(t) and the steady-state responses due to the
forcing functions

f_{an}(t) = f_{An}\cos(n\omega_T t)  (6.158)

f_{bn}(t) = f_{Bn}\sin(n\omega_T t)  (6.159)

denoted by xan(t) and xbn(t), respectively.
The total response can then be seen as the summation of x0(t) and all of the xan(t) and xbn(t). Thus,

x(t) = x_0(t) + \sum_{n=1}^{N}\left[x_{an}(t) + x_{bn}(t)\right] = x_0(t) + \sum_{n=1}^{N} x_n(t)  (6.160)

where

x_n(t) = x_{an}(t) + x_{bn}(t)  (6.161)

6.2.3.2 The nth Steady-State Response

The steady-state response corresponding to the nth excitation component described by Equation 6.161 can also be written as

x_n(t) = \frac{f_N}{k}\,\beta_n\sin(n\omega_T t + \phi_n)  (6.162)

Here,

f_N = \sqrt{f_{An}^2 + f_{Bn}^2}  (6.163)

is the amplitude of the nth forcing function, and the dynamic magnification βn as well as the phase angle ϕn are

\beta_n = \frac{1}{\sqrt{\left(1 - \frac{n^2\omega_T^2}{\omega_n^2}\right)^2 + \left(2\zeta\frac{n\omega_T}{\omega_n}\right)^2}} = \frac{1}{\sqrt{(1 - n^2 r^2)^2 + (2\zeta n r)^2}}  (6.164)

\phi_n = \tan^{-1}\left(\frac{2\zeta n r}{n^2 r^2 - 1}\right) + \tan^{-1}\left(\frac{f_{An}}{f_{Bn}}\right) + (h_\phi + h_\Phi)\pi  (6.165)

where

h_\phi = \begin{cases} 0, & n^2 r^2 > 1 \\ -1, & n^2 r^2 \le 1 \end{cases}  (6.166)

and

h_\Phi = \begin{cases} 0, & f_{Bn} \ge 0 \\ 1, & f_{Bn} < 0 \end{cases}  (6.167)

In Equations 6.164 through 6.166, r is the frequency ratio,

r = \frac{\omega_T}{\omega_n}  (6.168)

6.2.3.3 Transient Response
Assume that the transient response due to the initial conditions and the force constant fA0 is

x_0(t) = e^{-\zeta\omega_n t}\left[A\cos(\omega_d t) + B\sin(\omega_d t)\right] + \frac{f_{A0}}{k}  (6.169)

The first term in Equation 6.169 is mainly generated by the initial conditions, and the second term is a particular solution due to the step input described in Equation 6.157. It can be proven that the coefficients A and B are

A = x_0 - \frac{f_{A0}}{k} - \sum_{n=1}^{N}\frac{f_N}{k}\beta_n\sin\phi_n  (6.170)

B = \frac{1}{\omega_d}\left[v_0 + A\zeta\omega_n - \sum_{n=1}^{N}\frac{f_N}{k}\beta_n\, n\omega_T\cos\phi_n\right]  (6.171)
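As an alternative to the term-by-term bookkeeping of βn, ϕn, A, and B above, the steady-state part of the periodic response can be assembled directly from the complex frequency response of Equation 6.118. The following sketch (with a hypothetical square-wave force; all values are assumptions) computes the Fourier coefficients numerically and superposes the harmonics:

import numpy as np

m, c, k, T = 10.0, 15.0, 2000.0, 2.0           # hypothetical system and period
wT = 2.0 * np.pi / T                            # basic frequency, Equation 6.155

t = np.linspace(0.0, T, 4096, endpoint=False)
f = 100.0 * np.sign(np.sin(wT * t))             # square-wave forcing over one period

F = np.fft.rfft(f)                              # harmonic content of f(t)
n = np.arange(F.size)
H = 1.0 / (-m * (n * wT)**2 + 1j * c * (n * wT) + k)   # H(j n wT), Equation 6.118
x = np.fft.irfft(F * H, t.size)                 # steady-state periodic response
print(x.min(), x.max())

Each harmonic of the force is simply scaled and phase-shifted by H(jnωT); the n = 0 term carries the static offset.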

6.3 Response of SDOF System to Arbitrary Forces


In this section, arbitrary excitations will be considered.

6.3.1 Impulse Responses
In the case when a very large force is applied to a SDOF system for a very short dura-
tion, the excitation can be seen as an impulse process. This simple excitation can be

modeled by a delta function multiplied by the amplitude of the impulse, which is the
foundation of the study of arbitrary excitations.

6.3.1.1 Unit Impulse Response Function

With zero initial conditions and an impulse excitation, the equation of motion can be given as

m\ddot{x} + c\dot{x} + kx = f_0\,\delta(t), \quad x(0) = \dot{x}(0) = 0  (6.172)

Here, f0 is the amplitude of the impulse. In the case f0 = 1, the response can be calculated as follows.
First, when the initial displacement is zero, the mass is considered to be at rest just shortly prior to the application of the impulse f0. At the moment when f0 is applied, the system gains the momentum mv0. That is,

f_0 = f(t)\Delta t = m v_0 - 0

Thus,

v_0 = \frac{f(t)\Delta t}{m} = \frac{f_0}{m}  (6.173)

Then, the effect of an impulse applied to the SDOF m-c-k system is identical to the case of a free vibration with zero initial displacement and an initial velocity equal to that described in Equation 6.173, that is, Equation 6.62 with \dot{x}(0) = v_0 = f_0/m and x(0) = 0. Furthermore, with a unit impulse f0 = 1, from Equation 6.71 the amplitude d is

d = \frac{1}{m\omega_d}

and from Equation 6.72, the phase angle is

ϕ = 0

Therefore, when f0 = 1, the response due to the unit impulse is

x(t) = \frac{1}{m\omega_d}\left[e^{-\zeta\omega_n t}\sin(\omega_d t)\right]  (6.174)

This expression is quite important. A special notation, h(t), is used to represent this unit impulse response. Thus, let

h(t) = x(t)\big|_{\text{unit impulse, zero initial conditions}} = \frac{1}{m\omega_d}\left[e^{-\zeta\omega_n t}\sin(\omega_d t)\right]  (6.175)

where the quantity h(t) is known as the unit impulse response function.
Substitution of Equation 6.175 into Equation 6.172 yields

m\ddot{h} + c\dot{h} + kh = \delta(t)  (6.176)

Generally, when f0 ≠ 1, this results in

x(t) = f_0 h(t)  (6.177)
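Equation 6.175 is a one-line function; a sketch for an underdamped system (t is assumed nonnegative):

import numpy as np

def unit_impulse_response(t, m, c, k):
    """h(t) of Equation 6.175 for an underdamped SDOF system; t >= 0 assumed."""
    wn = np.sqrt(k / m)
    zeta = c / (2.0 * np.sqrt(m * k))          # Equation 6.43
    wd = wn * np.sqrt(1.0 - zeta**2)           # Equation 6.59
    return np.exp(-zeta * wn * t) * np.sin(wd * t) / (m * wd)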

6.3.2 Arbitrary Loading and Convolution


6.3.2.1 Convolution
It is known that the response of an SDOF system under an arbitrary excitation f(t) can be expressed as the following convolution:

x(t) = \int_0^t f(\tau)h(t-\tau)\,d\tau  (6.178)

Figure 6.16 graphically shows the essence of Equation 6.178. With the help of Figure 6.16a, let us consider a special instant t = ti. The amplitude of the force f(ti) can be seen as the result of the sampling effect of the delta function δ(t − ti) (see Figure 6.16b). That is, a response will occur starting from time ti onward (Figure 6.16c). It can be regarded as a unit impulse response times the amplitude of the instantaneous force; that is, the response is [f(ti)Δt][h(t − ti)]. However, before the instant ti, we already had many other impulse responses. Each can be seen as the response to an impulse that occurred at ti−1, ti−2, …, 0. Thus, the total response can be regarded as the sum of all of these impulse responses. Note that the consideration of the response does not end at ti; it may last until time t (Figure 6.16d), where we show an additional response at ti+1. We then have the summation of those impulse responses, which are functions of ti, with ti starting at 0 and ending at t. Letting Δt → dt, the summation becomes an integral, and we thus have Equation 6.178.
Additionally, Equation 6.178 can be rewritten as

x(t) = \int_{-\infty}^{t} f(\tau)h(t-\tau)\,d\tau = \int_{-\infty}^{\infty} f(\tau)h(t-\tau)\,d\tau  (6.179)

FIGURE 6.16  Convolution integral: (a) impulse; (b) forcing function; (c) impulse response; (d) additional impulse and response.

and

x(t) = \int_{-\infty}^{\infty} f(t-\tau)h(\tau)\,d\tau  (6.180)

Combining Equations 6.179 and 6.180, the response can be simply denoted by

x(t) = f(t) * h(t) = h(t) * f(t)  (6.181)
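Discretized, Equation 6.181 becomes a single call to np.convolve. The sketch below reuses the data of Example 6.7 and recovers the steady-state amplitude found there:

import numpy as np

m, c, k = 10.0, 15.0, 2000.0
wn = np.sqrt(k / m)
zeta = c / (2.0 * np.sqrt(m * k))
wd = wn * np.sqrt(1.0 - zeta**2)

dt = 0.001
t = np.arange(0.0, 10.0, dt)
h = np.exp(-zeta * wn * t) * np.sin(wd * t) / (m * wd)   # Equation 6.175
f = 100.0 * np.sin(4.0 * t)                              # f1(t) of Example 6.7

x = dt * np.convolve(f, h)[:t.size]                      # Equation 6.178, discretized
print(x[-2000:].max())                                   # ~0.054, as in Example 6.7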

6.3.2.2 Transient Response under Harmonic Excitation f0 sin(ωt)

With the help of Equation 6.180, we consider the transient response under a unit harmonic excitation f(t) = sin ωt, namely, the term xpt(t) in Equation 6.78, under zero initial conditions.
Substituting the forcing function sin ω(t − τ) into Equation 6.180 and with the help of the unit impulse response function of SDOF vibration systems (see Equation 6.175), we have, for t ≥ 0,

x(t) = \frac{e^{-\zeta\omega_n t}}{m\omega_d}\int_{-\infty}^{\infty} e^{\zeta\omega_n\tau}\sin[\omega_d(t-\tau)]\sin(\omega\tau)\,d\tau  (6.182)

Note that the response x(t) is due to the forcing function only, that is, it is xp(t). Furthermore, we can write

x_p(t) = \frac{e^{-\zeta\omega_n t}}{m\omega_d}\int_0^t e^{\zeta\omega_n\tau}\sin[\omega_d(t-\tau)]\sin(\omega\tau)\,d\tau  (6.183)

Evaluating Equation 6.183, we have the solution (see Equation 6.78), repeated as follows:

x_p(t) = x_{pt}(t) + x_{ps}(t)  (6.184)

Evaluating Equation 6.183, we can have the solution (see Equation 6.78) repeated
as follows:

xp(t) = xpt(t) + xps(t) (6.184)

whereas xps(t) is the steady-state response for the particular solution xp(t) described
above, the transient response, xpt(t) can be calculated as

rβ D
x pt (t ) = e − ζω nt sin(ω dt + θ t ) (6.185)
k 1− ζ 2

where the phase angle θt is given by

2ζ 1 − ζ2
θ t = tan −1 + hθπ (6.186a)
2ζ2 + r 2 − 1

where hθ is the Heaviside step function given by

h_\theta = \begin{cases} 0, & 2\zeta^2 + r^2 - 1 > 0 \\ 1, & 2\zeta^2 + r^2 - 1 < 0 \end{cases}  (6.186b)

Note that in Equation 6.185, generally speaking, xpt(t) ≠ 0. This implies that even under zero initial conditions and a zero initial force (because sin ω0 = 0), we still have a nonzero transient response. Equation 6.185 is exactly the same as the formula obtained through Equations 6.108 and 6.186a, which is equivalent to Equation 6.107a. However, based on the method of convolution, we can obtain a complete solution. Through the semidefinite method, we do obtain a solution, but we cannot prove that it is the only solution.

Example 6.9

Reconsider the above-mentioned example of the m-c-k system with mass = 10 kg, c = 15 N·s/m, and k = 2000 N/m, now excited by the harmonic force f1(t) = 100 sin(4t) under the initial conditions x0 = 2 m and v0 = 1 m/s; calculate and plot the response of displacement. If the excitation changes to f2(t) = 100 sin(14t), how does the response change accordingly?
The natural frequency and damping ratio are, respectively, calculated to be 14.14 rad/s and 0.05. We can then calculate the following parameters, where the subscripts 1 and 2 stand for the cases of f1(t) and f2(t), respectively:

r1 = 0.2828,  r2 = 0.9899

βD1 = 1.0864,  βD2 = 9.3556

and

θt1 = 3.0263,  θt2 = 1.7057

With the above parameters, the transient solutions for the particular responses can be computed. Furthermore, in the previous example, the steady-state responses were calculated (Figure 6.12). Therefore, the total particular responses xp(t) = xpt(t) + xps(t) can also be calculated.
The results are plotted in Figure 6.17a for driving frequency = 4 rad/s and in Figure 6.17b for driving frequency = 14 rad/s, where the solid lines are the transient responses and the broken lines are the total particular responses. Compared with the steady-state responses of Figure 6.12, it is seen that with the transient portions, the total response can be rather different.
Furthermore, considering the initial conditions v0 = 1 m/s and x0 = 2 m, we can calculate the homogeneous solution xh(t) based on Equation 6.62, which is not affected by the forcing function because f1(0) = f2(0) = 0. Including the homogeneous solution, the total responses are plotted in Figure 6.18a for driving frequency = 4 rad/s and in Figure 6.18b for driving frequency = 14 rad/s.
Comparing the total responses with the particular solutions shown in Figure 6.17 and with the steady-state ones shown in Figure 6.12, we can see the differences

FIGURE 6.17  Particular responses: (a) driving frequency = 4 rad/s; (b) driving frequency = 14 rad/s.

FIGURE 6.18  Total forced responses: (a) f1(t) = 100sin(4t); (b) f2(t) = 100sin(14t).

once more. In the course of deterministic vibration, the transient responses, caused by both the initial conditions and the driving force, are often ignored, because when the time is sufficiently long, the transient responses die out and only the steady-state response remains. However, in the case of random excitation, the transient portion of the responses should be carefully examined, because with a random excitation, the transient portion will not die out.

6.3.3 Impulse Response Function and Transfer Function


Recalling Equations 6.116 and 6.117, and additionally Equation 6.129 for base isolation,

[ms^2 + cs + k]X(s) = F(s)  (6.187)

From Equation 6.187, the transfer function is given by

H(s) = \frac{X(s)}{F(s)} = \frac{1}{ms^2 + cs + k}  (6.188)

Furthermore, it is seen that

X(s) = H(s)F(s)  (6.189)

In the case of the unit impulse response,

[ms^2 + cs + k]\,L[h(t)] = L[\delta(t)] = 1  (6.190)

Thus,

L[h(t)] = H(s)(1) = H(s)  (6.191)

Specifically, it can be stated that the unit impulse response function and the transfer function are a Laplace transform pair:

h(t) \Leftrightarrow H(s)  (6.192)

Generally speaking, for a harmonic excitation, when the response reaches steady state, letting s = jω, the unit impulse response function and the transfer function also become a Fourier pair:

h(t) \Leftrightarrow H(j\omega)  (6.193a)

For convenience, Equation 6.193a is often rewritten as

h(t) \Leftrightarrow H(\omega)  (6.193b)

Here, the unit impulse response function is given by

h(t) = \frac{e^{-\zeta\omega_n t}}{m\omega_d}\sin(\omega_d t)  (6.194)

Now, consider the case of harmonic excitation. Let

f(t) = e^{j\omega t}  (6.195)

Then,

x(t) = f(t) * h(t) = \int_{-\infty}^{\infty} h(\tau)f(t-\tau)\,d\tau = \int_{-\infty}^{\infty} h(\tau)e^{j\omega(t-\tau)}\,d\tau = e^{j\omega t}\int_{-\infty}^{\infty} h(\tau)e^{-j\omega\tau}\,d\tau  (6.196)

Because

\int_{-\infty}^{\infty} h(\tau)e^{-j\omega\tau}\,d\tau = \mathcal{F}[h(t)] = H(\omega)  (6.197)

as a result, we obtain a useful relationship representing the harmonic response:

x(t) = H(\omega)e^{j\omega t}  (6.198)

For a general response, take the Fourier transform of both sides of Equation 6.181:

\mathcal{F}[x(t)] = \mathcal{F}[f(t) * h(t)] = \mathcal{F}[f(t)]\,\mathcal{F}[h(t)]  (6.199)

X(\omega) = F(\omega)H(\omega) = H(\omega)F(\omega)  (6.200)

Furthermore, taking the inverse Fourier transform of both sides of Equation 6.200 results in

x(t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(\omega)H(\omega)e^{j\omega t}\,d\omega  (6.201)
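Equation 6.201 suggests a discrete frequency-domain solution with the FFT. A sketch (the load is hypothetical; zero-padding is used because the discrete transform would otherwise wrap the response around circularly):

import numpy as np

m, c, k = 10.0, 15.0, 2000.0
dt, N = 0.001, 8192
t = np.arange(N) * dt
f = np.where(t < 1.0, 100.0 * np.sin(4.0 * t), 0.0)   # a transient load (hypothetical)

npad = 2 * N                                          # zero-padding against wrap-around
w = 2.0 * np.pi * np.fft.rfftfreq(npad, dt)
H = 1.0 / (-m * w**2 + 1j * c * w + k)                # H(w), Equation 6.118

x = np.fft.irfft(np.fft.rfft(f, npad) * H, npad)[:N]  # Equation 6.201, discretized
print(float(np.max(np.abs(x))))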

6.3.4 Frequency Response and Transfer Functions

We now compare Equations 6.200 and 6.198. Equation 6.198 is obtained through harmonic excitation. Because

\dot{x}(t) = j\omega x(t), \quad \ddot{x}(t) = -\omega^2 x(t)

accordingly,

(-m\omega^2 + j\omega c + k)x(t) = e^{j\omega t}

and

x(t) = \frac{1}{-m\omega^2 + j\omega c + k}\, e^{j\omega t} = H(\omega)e^{j\omega t}

Therefore,

H(\omega) = x(t)e^{-j\omega t} = \left.\frac{x(t)}{f(t)}\right|_{f(t) = e^{j\omega t}}  (6.202)

In contrast, through arbitrary excitation we obtain

H(\omega) = \left.\frac{X(\omega)}{F(\omega)}\right|_{\text{steady-state response}}  (6.203)

In the literature, the transfer function obtained through the steady-state response to harmonic excitation is specifically defined as the frequency response function. Mathematically, obtaining the frequency response function is equivalent to replacing s with jω in the generic form of the transfer function.

6.3.5 Borel’s Theorem and Its Applications


6.3.5.1 Borel’s Theorem
In rewriting the general form as described in Equation 6.200, produces Borel’s theo-
rem, which is the convolution theorem described in the Laplace domain:

X(s) = H(s)F(s) (6.204)

In this instance, the response X(s) can be seen as the input F(s) being transferred
through H(s). Because Borel’s theorem does not specify the input force, F(s) can be
any forcing function for which a Fourier transform exists.

6.3.5.1.1  Forward Problem

Borel's theorem can be used to find the solution of an SDOF system. In this event, H(s) and F(s) are known. First, X(s) is found; then, through the inverse Laplace transform, x(t) can be found:

x(t) = L^{-1}[X(s)] = L^{-1}[H(s)F(s)]  (6.205)

6.3.5.1.2  First Inverse Problem

An additional application is the case when X(s) and F(s) are known. They can be used to find the transfer function, referred to as the first inverse problem, using the equation

H(s) = \frac{X(s)}{F(s)}  (6.206)

Furthermore, the transfer function can be used to find the physical parameters m, c, and k:

H(s) = H(m, c, k) = \frac{1}{ms^2 + cs + k}  (6.207)

6.3.5.1.3  Second Inverse Problem

The second inverse problem is referred to when X(s) and H(s) are known. Here, to find the forcing function:

F(s) = \frac{X(s)}{H(s)}  (6.208)

The inverse problem will be discussed in further detail in Chapter 9.

Problems
1. A system is excited by an initial displacement x0 = 5 cm only, as shown in Figure P6.1.
   Find (a) the natural period, (b) the natural frequency in Hertz (fn) and in radians per second (rad/s), (c) the mass in kilograms, (d) the damping ratio (ζ), (e) the damping coefficient c, and (f) the stiffness k (g = 10 m/s2).
2. An SDOF system is shown in Figure P6.1. Suppose the base has a peak ground acceleration \ddot{x}_g = 0.6 g with a 2.0-Hz driving frequency; the natural frequency is 2.5 Hz and the damping ratio is 0.064.
   Find (a) the dynamic magnification factor for the absolute acceleration in the base excitation problem, (b) the amplitude of the absolute acceleration, (c) the dynamic magnification factor for the relative displacement, (d) the amplitude of the relative displacement, and (e) the amplitude of the ground force fc + fk.
 
FIGURE P6.1  (SDOF system with weight W = 2500 N, damper c, and base acceleration ẍg; measured free decay displacement record in centimeters.)
3. A structure is shown in Figure P6.2 with mass = 12 kg; the structure itself is weightless.
   a. Determine the damping ratio and stiffness.
   b. The system is excited by a vertical base motion with amplitude A and driving frequency = 6 Hz. The absolute displacement is measured to be 6.1 mm. Find the value of A.
4. An SDOF structure has weight = 6000 lb, damping ratio = 0.08, and stiffness k with natural period = 1.15 seconds. The structure undergoes a ground excitation with amplitude = 0.25 g and driving period = 1.80 seconds. Find the amplitude of the relative displacement.

FIGURE P6.2  (Weightless frame carrying the 12-kg mass; measured free decay displacement record in inches.)

FIGURE P6.3  (Forcing function F(t) taking values between 50 N and 90 N over the interval 0–5 s.)

5. An SDOF system with mass = 1.5 kg, c = 8 N·s/m, and k = 120 N/m is excited by the forcing function shown in Figure P6.3. The initial conditions are v0 = −1.5 m/s and x0 = 0.05 m. Calculate the displacement.
6. If the relationship between the log decrement δ and the damping ratio ζ is approximated as δ = 2πζ, for what values of ζ can this approximation be used if the allowable error is 12%?
7. If the amplitude of the response of a system under bounded input grows to infinity, the system is unstable. Consider the inverted pendulum shown in Figure P6.4, where k1 = 0.7k; both springs are installed at exactly the middle point of the pendulum. Find the value of k such that the system becomes unstable. Assume that the damper c is installed on the pendulum parallel to the two springs. How does this affect the stability properties of the pendulum?
8. In Problem 7, m = 12 kg, ℓ = 2.1 m, k = 110 kN/m, and c = 1.1 kN·s/m; the bar is massless. Suppose the initial angle θ is 2.0°; calculate the response.
9. Consider the system in Figure P6.5; write the equation of motion and calculate the response, assuming that the system is initially at rest, for slope angle = 30°, k = 1200 N/m, c = 95 N·s/m, and m = 42 kg. The amplitude of the vertical force f(t) is 2.1 N with driving frequency = 1.1 Hz.

c
θ m

k1 k

0.5 ℓ

FIGURE P6.4 

k
f(t)
c m

Initial position, spring is not elongated


by force – mgsin30°

Origin of the coordinate, static equilibrium point

FIGURE P6.5 

10. A mechanism is modeled as shown in Figure P6.6, with k = 3400 N/m, c = 80 kg/s, and m = 45 kg; the ground moves along a 45° line with displacement xg(t) = 0.06 cos πt (m). Compute the steady-state vertical responses of both the relative displacement and the absolute acceleration, assuming the system starts from a horizontal position. Assume the rotation angle is small.

FIGURE P6.6  (Rigid bar in three equal segments of 1/3 m carrying mass m, damper c, spring k, and force f(t); the support moves along a 45° line with xg(t) = 0.06 cos(πt).)
7  Response of SDOF Linear Systems to Random Excitations
In Chapter 6, the linear single-degree-of-freedom (SDOF) system was discussed in terms of deterministic forcing functions. In this chapter, random excitations will be considered; that is, the vibration response is a random process. Analyses in both the time and frequency domains, discussed in Chapters 3 and 4, will be applied here. In addition, a special random process, the time series, will also be described for response modeling.
In this chapter and in Chapter 8, which is concerned with multi-degree-of-freedom (MDOF) vibrations, we focus on linear systems. In Chapter 11, the general concept of nonlinear vibration and selected nonlinear systems will be introduced. General references can be found in Schueller and Shinnozuka (1987), Clough and Penzien (1993), Wirsching et al. (2006), Chopra (2003), and Liang et al. (2012).

7.1 STATIONARY EXCITATIONS
The simplest cases of random excitations occur when a forcing process is stationary,
for which a weak stationary process is specifically used.

7.1.1 Model of SDOF System


7.1.1.1 Equation of Motion
First, we recall the equation of motion:

m\ddot{x} + c\dot{x} + kx = f(t), \quad \dot{x}(0) = v_0, \quad x(0) = x_0  (7.1)

7.1.1.2 Zero Initial Conditions


In most practical cases, zero initial conditions are assumed.

x(0) = 0  (7.2)

\dot{x}(0) = 0  (7.3)

Furthermore, the forced transient response will also be assumed to be zero.


7.1.1.3 Solution in Terms of Convolution

Here, the basic approach remains the method of convolution. Recalling the convolution equation (Equation 6.180):

X(t) = \int_{-\infty}^{\infty} f(t-\tau)h(\tau)\,d\tau  (7.4)

In this chapter, unless specifically stated otherwise, f(t) is the realization of a random process.

7.1.1.4 Nature of Forcing Function


7.1.1.4.1  Initial Force
The forcing function f(t) is initially zero, that is,

f(t) = 0,  t ≤ 0 (7.5)

7.1.1.4.2  Stationary Random Process


Furthermore, f(t) is a stationary random process. Thus,

μF(t) = μF = const. (7.6)

and

RF(t, s) = RF(τ) (7.7)

where

τ = t − s (7.8)

7.1.1.5 Response
Given f(t), x(t) is also a realization of a stationary random process. Equation 7.4 can be seen as a case of a random process passing through an SDOF linear system.
7.1.2 Mean of Response Process


Similar to a general random process as discussed in Chapter 3, for a random response
x(t), the numerical characteristics of the response will first be examined. Initially, the
mean of the response in general is considered. Note that we now use lowercase letters
to denote random process in the time domain for convenience.

 ∞ 
µ X (t ) = E[ x (t )] = E 
 ∫−∞
f (t − τ)h(τ) d τ 

(7.9)
∞ ∞
=
∫−∞
E[ f (t − τ)]h(t ) d τ = µ F
∫−∞
h(t ) d τ
Response of SDOF Linear Systems to Random Excitations 337

Recalling Equation 6.193,

H(\omega) = \int_{-\infty}^{\infty} h(\tau)e^{-j\omega\tau}\,d\tau  (7.10)

Because

\lim_{\omega\to 0}\left(e^{-j\omega\tau}\right) = 1  (7.11a)

we have

\int_{-\infty}^{\infty} h(\tau)\,d\tau = H(0)  (7.11b)

and therefore

\mu_X(t) = \mu_F H(0) = \text{const.}  (7.12)

Thus,

H(\omega)\big|_{\omega\to 0} = \left.\frac{1}{-m\omega^2 + jc\omega + k}\right|_{\omega\to 0} = \frac{1}{k}  (7.13)

Finally, the mean of the response can be written as

\mu_X = \frac{\mu_F}{k}  (7.14)
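Equation 7.14 lends itself to a direct Monte Carlo check: convolve realizations of a stationary forcing (a constant mean plus noise) with h(t) and compare the post-transient mean of the response with μF/k. All numbers in the sketch below are hypothetical:

import numpy as np

rng = np.random.default_rng(0)
m, c, k = 10.0, 150.0, 2000.0                      # heavier damping shortens the transient
wn = np.sqrt(k / m)
zeta = c / (2.0 * np.sqrt(m * k))
wd = wn * np.sqrt(1.0 - zeta**2)

dt, N, mu_F = 0.002, 2000, 50.0
t = np.arange(N) * dt
h = np.exp(-zeta * wn * t) * np.sin(wd * t) / (m * wd)   # Equation 6.175

means = []
for _ in range(100):                               # 100 realizations of the forcing
    f = mu_F + rng.normal(0.0, 100.0, N)           # stationary forcing with mean mu_F
    x = dt * np.convolve(f, h)[:N]                 # Equation 7.4, discretized
    means.append(x[N // 2:].mean())                # discard the start-up transient
print(np.mean(means), mu_F / k)                    # both ~0.025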

7.1.3 Autocorrelation of Response Process


Assuming mean-square convergence, consider the autocorrelation function of the response.

7.1.3.1 Autocorrelation
The autocorrelation function is given as

$R_X(t,s) = E[X(t)X(s)] = E\left[\int_{-\infty}^{t} f(t-u)h(u)\,du\int_{-\infty}^{s} f(s-v)h(v)\,dv\right] = \int_{-\infty}^{t}du\int_{-\infty}^{s} R_F(t-u,\,s-v)h(u)h(v)\,dv, \quad t,s \ge 0$  (7.15)

7.1.3.2 Mean Square
In this section, we consider mean square values in general cases and in stationary
processes.

7.1.3.2.1  General Case
In the general case, using the autocorrelation, the mean square value of the response can be written as

$E[X^2(t)] = R_X(t,t) = \int_{-\infty}^{t}du\int_{-\infty}^{t} R_F(t-u,\,t-v)h(u)h(v)\,dv, \quad t \ge 0$  (7.16)

7.1.3.2.2  Stationary Process
Now, consider the special case where the excitation is stationary, that is,

$R_F(t,s) = R_F(s-t)$  (7.17a)

In this event, because

$R_X(t,s) = \int_{-\infty}^{t}du\int_{-\infty}^{s} R_F\big(s-t-(u-v)\big)h(u)h(v)\,dv, \quad t,s \ge 0$  (7.17b)

the variance, which is also the mean square value for a zero-mean process, is

$\sigma_X^2(t) = R_X(t,t) = \int_{-\infty}^{t}du\int_{-\infty}^{t} R_F(u-v)h(u)h(v)\,dv, \quad t \ge 0$  (7.18)

It is observed that

$\lim_{t\to\infty}\frac{\sigma_X^2(t)}{W_0/4kc} = 1$  (7.19)

when

$t, s \to \infty$  (7.20)

By denoting

$\tau = s - t$  (7.21)

then

$R_X(\tau) = \int_{-\infty}^{\infty}du\int_{-\infty}^{\infty} R_F\big(\tau-(u-v)\big)h(u)h(v)\,dv$  (7.22)

Equation 7.20 expresses the practical consideration that the process becomes stationary when the times t and s are sufficiently long.
Furthermore, the mean square value in this case is

$E[X^2(t)] = R_X(0) = \int_{-\infty}^{\infty}du\int_{-\infty}^{\infty} R_F(u-v)h(u)h(v)\,dv$  (7.23)

Example 7.1

In the previous discussion, zero initial conditions were assumed. In this example, we assume random initial conditions. From Chapter 6, we know that without a forcing function we have free-decay vibration, with the response given by Equation 6.62. That is,

$x(t) = d\,e^{-\zeta\omega_n t}\sin(\omega_d t + \phi)$

which can be further written as

$x(t) = e^{-\zeta\omega_n t}\left[d_1\cos(\omega_d t) + d_2\sin(\omega_d t)\right]$

With the given initial conditions (see Equation 7.1), we can solve for the coefficients $d_1$ and $d_2$ as

$d_1 = x_0$

and

$d_2 = \frac{v_0 + \zeta\omega_n x_0}{\omega_d}$

Therefore, we have

$x(t) = e^{-\zeta\omega_n t}\left[x_0\cos(\omega_d t) + \frac{v_0 + \zeta\omega_n x_0}{\omega_d}\sin(\omega_d t)\right]$

and furthermore

$x(t) = e^{-\zeta\omega_n t}\left\{x_0\left[\cos(\omega_d t) + \frac{\zeta}{\sqrt{1-\zeta^2}}\sin(\omega_d t)\right] + \frac{v_0}{\omega_d}\sin(\omega_d t)\right\}$

Suppose the initial conditions $X_0$ and $V_0$ are random variables. We can then examine the statistical properties of the free-decay response.

$\mu_X(t) = E[X(t)] = e^{-\zeta\omega_n t}\left\{E(X_0)\left[\cos(\omega_d t) + \frac{\zeta}{\sqrt{1-\zeta^2}}\sin(\omega_d t)\right] + \frac{E(V_0)}{\omega_d}\sin(\omega_d t)\right\}$

Denoting $E(X_0) = \mu_{X_0}$ and $E(V_0) = \mu_{V_0}$, we write

$\mu_X(t) = \mu_{X_0}e^{-\zeta\omega_n t}\left[\cos(\omega_d t) + \frac{\zeta}{\sqrt{1-\zeta^2}}\sin(\omega_d t)\right] + \mu_{V_0}\frac{e^{-\zeta\omega_n t}}{\omega_d}\sin(\omega_d t)$

$R_X(t_1,t_2) = E[X(t_1)X(t_2)]$
$= e^{-\zeta\omega_n(t_1+t_2)}E\!\left(X_0^2\right)\left[\cos(\omega_d t_1) + \frac{\zeta}{\sqrt{1-\zeta^2}}\sin(\omega_d t_1)\right]\left[\cos(\omega_d t_2) + \frac{\zeta}{\sqrt{1-\zeta^2}}\sin(\omega_d t_2)\right]$
$\quad + \frac{E(X_0V_0)}{\omega_d}e^{-\zeta\omega_n(t_1+t_2)}\left\{\left[\cos(\omega_d t_1) + \frac{\zeta}{\sqrt{1-\zeta^2}}\sin(\omega_d t_1)\right]\sin(\omega_d t_2) + \left[\cos(\omega_d t_2) + \frac{\zeta}{\sqrt{1-\zeta^2}}\sin(\omega_d t_2)\right]\sin(\omega_d t_1)\right\}$
$\quad + \frac{E\!\left(V_0^2\right)}{\omega_d^2}e^{-\zeta\omega_n(t_1+t_2)}\sin(\omega_d t_1)\sin(\omega_d t_2)$

Denoting $E\!\left(X_0^2\right) = \sigma_{X_0}^2$, $E\!\left(V_0^2\right) = \sigma_{V_0}^2$, and $E(X_0V_0) = \sigma_{XV_0}$ (zero-mean initial conditions are assumed, so these second moments are the variances and covariance), we write

$R_X(t_1,t_2) = e^{-\zeta\omega_n(t_1+t_2)}\sigma_{X_0}^2\left[\cos(\omega_d t_1) + \frac{\zeta}{\sqrt{1-\zeta^2}}\sin(\omega_d t_1)\right]\left[\cos(\omega_d t_2) + \frac{\zeta}{\sqrt{1-\zeta^2}}\sin(\omega_d t_2)\right]$
$\quad + \frac{\sigma_{XV_0}}{\omega_d}e^{-\zeta\omega_n(t_1+t_2)}\left\{\left[\cos(\omega_d t_1) + \frac{\zeta}{\sqrt{1-\zeta^2}}\sin(\omega_d t_1)\right]\sin(\omega_d t_2) + \left[\cos(\omega_d t_2) + \frac{\zeta}{\sqrt{1-\zeta^2}}\sin(\omega_d t_2)\right]\sin(\omega_d t_1)\right\}$
$\quad + \frac{\sigma_{V_0}^2}{\omega_d^2}e^{-\zeta\omega_n(t_1+t_2)}\sin(\omega_d t_1)\sin(\omega_d t_2)$

Variance

$\sigma_X^2(t) = \sigma_{X_0}^2 e^{-2\zeta\omega_n t}\left[\cos(\omega_d t) + \frac{\zeta}{\sqrt{1-\zeta^2}}\sin(\omega_d t)\right]^2 + \frac{2\sigma_{XV_0}}{\omega_d}e^{-2\zeta\omega_n t}\left[\cos(\omega_d t) + \frac{\zeta}{\sqrt{1-\zeta^2}}\sin(\omega_d t)\right]\sin(\omega_d t) + \frac{\sigma_{V_0}^2}{\omega_d^2}e^{-2\zeta\omega_n t}\sin^2(\omega_d t)$
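These statistics are easy to verify numerically. The following Python sketch (all parameter values are illustrative assumptions, not taken from the text) draws zero-mean, uncorrelated random initial conditions, generates the free-decay ensemble, and compares the ensemble variance with the closed form above; with uncorrelated initial conditions the $\sigma_{XV_0}$ cross term vanishes.

```python
import numpy as np

# Illustrative parameters (assumed, not from the text)
wn, zeta = 2 * np.pi * 1.0, 0.05            # natural frequency (rad/s), damping ratio
wd = wn * np.sqrt(1 - zeta**2)              # damped natural frequency
t = np.linspace(0.0, 5.0, 501)

rng = np.random.default_rng(0)
n = 20000
x0 = rng.normal(0.0, 0.02, n)               # zero-mean random initial displacements
v0 = rng.normal(0.0, 0.10, n)               # zero-mean random initial velocities

# Free-decay response x(t) for each sampled initial condition
env = np.exp(-zeta * wn * t)
shape = np.cos(wd * t) + zeta / np.sqrt(1 - zeta**2) * np.sin(wd * t)
resp = env * (x0[:, None] * shape + (v0[:, None] / wd) * np.sin(wd * t))

# Ensemble variance versus the closed form (cross term drops: X0, V0 uncorrelated)
var_mc = resp.var(axis=0)
var_cf = env**2 * (x0.var() * shape**2 + v0.var() / wd**2 * np.sin(wd * t) ** 2)
print(np.max(np.abs(var_mc - var_cf)))      # small for large n
```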

7.1.4 Spectral Density of Response Process


Thus far, numerical characteristics in the time domain have been considered. Now,
the properties of the SDOF response in the frequency domain will be taken into
consideration.

7.1.4.1 Auto-Power Spectral Density Function
The auto-power spectral density (PSD) function of the response is

$S_X(\omega) = \int_{-\infty}^{\infty} R_X(\tau)e^{-j\omega\tau}\,d\tau = \int_{-\infty}^{\infty}\left[\int_{-\infty}^{\infty}du\int_{-\infty}^{\infty} R_F(\tau+u-v)h(u)h(v)\,dv\right]e^{-j\omega\tau}\,d\tau$
$\quad = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h(u)h(v)\left[\int_{-\infty}^{\infty} R_F(\tau+u-v)e^{-j\omega\tau}\,d\tau\right]du\,dv$  (7.24)
where the autocorrelation function $R_X(\tau)$ is represented using Equation 7.22.


Now, let

$\theta = \tau + u - v$  (7.25)

Then, we have

$\tau = \theta - u + v$  (7.26)

From this, it can be seen that

$S_X(\omega) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h(u)h(v)\left[\int_{-\infty}^{\infty} R_F(\theta)e^{-j\omega\theta}\,d\theta\right]e^{j\omega u}e^{-j\omega v}\,du\,dv$
$\quad = \left[\int_{-\infty}^{\infty} h(u)e^{j\omega u}\,du\right]\left[\int_{-\infty}^{\infty} h(v)e^{-j\omega v}\,dv\right]\left[\int_{-\infty}^{\infty} R_F(\theta)e^{-j\omega\theta}\,d\theta\right] = H(-\omega)H(\omega)S_F(\omega)$  (7.27)


Therefore,

$S_X(\omega) = |H(\omega)|^2 S_F(\omega)$  (7.28)

Note that Equation 4.113a in Chapter 4 provides the same information as Equation 7.28.
Furthermore, with the frequency expressed in hertz,

$W_X(f) = |H(f)|^2 W_F(f)$  (7.29)

7.1.4.2 Variance
From Equation 7.28, the variance can be written as

$\sigma_X^2 = \int_{-\infty}^{\infty} |H(\omega)|^2 S_F(\omega)\,d\omega$  (7.30)

Additionally, in terms of the frequency in hertz, the variance becomes

$\sigma_X^2 = \int_{0}^{\infty} |H(f)|^2 W_F(f)\,df$  (7.31)

7.1.5 Distributions of Response Process

To consider the distributions of the responses, the process must first be checked for stationarity and ergodicity. For an ergodic excitation, when the duration of the excitation process is sufficiently long, the response is also ergodic. In this case, the temporal average can be used to estimate the distribution.
For a nonergodic response, if F(t) is Gaussian, then the response X(t) of the linear SDOF system is also Gaussian. In contrast, if F(t) is non-Gaussian, then the distribution of X(t) is generally unknown. In studying X(t), the convolution can be used:

$X(t) = \int_{-\infty}^{t} F(t-\tau)h(\tau)\,d\tau$  (7.32)

Comparing Equation 7.32 with Equation 7.4, both F(t) and X(t) are generic random processes. From Equation 7.32, it is understood that if F(t) is Gaussian, then X(t) will also be Gaussian.

7.2 WHITE NOISE PROCESS

7.2.1 Definition
A white noise random process is defined by its auto-PSD function, given by Equation 4.54 and repeated as follows:

$S_F(\omega) = S_0, \quad -\infty < \omega < \infty$  (7.33)

The autocorrelation function is given by Equation 4.55, repeated as follows:

$R_F(\tau) = \mathcal{F}^{-1}[S_F(\omega)] = S_0\delta(\tau)$  (7.34)

7.2.2 Response to White Noise

7.2.2.1 Auto-PSD Function
By applying Equation 7.28, the auto-PSD function $S_X(\omega)$ can be written as

$S_X(\omega) = |H(\omega)|^2 S_F(\omega) = \left|\frac{1}{-m\omega^2 + jc\omega + k}\right|^2 S_0 = \frac{S_0}{(k-m\omega^2)^2 + (c\omega)^2}$  (7.35)

Recalling the equations

$\frac{1}{\omega_n^2} = \frac{m}{k}$  (7.36)

$\frac{c}{k} = \frac{2\zeta\omega_n m}{k} = \frac{2\zeta}{\omega_n}$  (7.37)

and

$\frac{\omega}{\omega_n} = r$  (7.38)

with these notations, Equation 7.35 can be expressed as

$S_X(\omega) = \frac{S_0/k^2}{(1-r^2)^2 + (2\zeta r)^2}$  (7.39)

7.2.2.2 Variance
In the following, we describe the variance.

7.2.2.2.1  General Description
First, recall the equation

$\sigma_X^2 = \int_{-\infty}^{\infty} S_X(\omega)\,d\omega$  (7.40)

Substitution of Equation 7.35 into Equation 7.40 yields

$\sigma_X^2 = \int_{-\infty}^{\infty}\left|\frac{1}{-m\omega^2 + jc\omega + k}\right|^2 S_0\,d\omega = S_0\int_{-\infty}^{\infty}\left|\frac{1}{-m\omega^2 + jc\omega + k}\right|^2 d\omega$  (7.41)

7.2.2.2.2  Alternative Form of Transfer Function
On the right-hand side of Equation 7.41, it is essential to evaluate the integral. First, denote

$I = \int_{-\infty}^{\infty}\left|\frac{1}{-m\omega^2 + jc\omega + k}\right|^2 d\omega = \int_{-\infty}^{\infty}|H(\omega)|^2\,d\omega$  (7.42)

for Equation 7.41. Then, consider an alternative form of the transfer function,

$H^{(n)}(\omega) = \frac{B_0 + (j\omega)B_1 + (j\omega)^2B_2 + \cdots + (j\omega)^{n-1}B_{n-1}}{A_0 + (j\omega)A_1 + (j\omega)^2A_2 + \cdots + (j\omega)^nA_n}$  (7.43)

Note that the order n is determined by the nature of the system; for example, for an SDOF vibration system, n = 2.
Thus, the integral can be expressed as

$I^{(n)} = \int_{-\infty}^{\infty}\left|H^{(n)}(\omega)\right|^2 d\omega$  (7.44)

It can be calculated as follows. For n = 1, we have

$I^{(1)} = \pi\frac{B_0^2}{A_0A_1}$  (7.45)

For n = 2, we further have

$I^{(2)} = \pi\frac{A_0B_1^2 + A_2B_0^2}{A_0A_1A_2}$  (7.46)

The coefficients $A_{(\cdot)}$ and $B_{(\cdot)}$ are identified from

$H^{(2)}(\omega) = \frac{B_0 + (j\omega)B_1}{A_0 + (j\omega)A_1 + (j\omega)^2A_2} = \frac{1}{k + j\omega c - m\omega^2}$  (7.47)

The results obtained are

$B_0 = 1,\quad B_1 = 0,\quad A_0 = k,\quad A_1 = c,\quad A_2 = m$  (7.48)

Substitution of Equation 7.48 into Equation 7.46 yields

$I^{(2)} = \frac{\pi}{kc}$  (7.49)

such that

$\sigma_X^2 = \frac{\pi S_0}{kc}$  (7.50)

In terms of the engineering spectral density function $W_0$,

$\sigma_X^2 = \frac{W_0}{4kc} = \frac{0.785f_nW_0}{k^2\zeta}$  (7.51)

For the case n = 3, we have

$I^{(3)} = \pi\frac{A_0A_3\left(2B_0B_2 - B_1^2\right) - A_0A_1B_2^2 - A_2A_3B_0^2}{A_0A_3(A_0A_3 - A_1A_2)}$  (7.52)

Finally, for the case n = 4, we have

$I^{(4)} = \pi\frac{A_0B_3^2(A_0A_3 - A_1A_2) + A_0A_1A_4\left(2B_1B_3 - B_2^2\right) - A_0A_3A_4\left(B_1^2 - 2B_0B_2\right) + A_4B_0^2(A_1A_4 - A_2A_3)}{A_0A_4\left(A_0A_3^2 + A_1^2A_4 - A_1A_2A_3\right)}$  (7.53)

7.2.3 White Noise Approximation

Now, we consider approximations of the variance. In general,

$S_F(\omega) \ne \text{const.}$  (7.54)

However, in the resonance region, the region between the half-power points,

$S_F(\omega) \approx \text{const.}, \quad \omega_1 < \omega < \omega_2$  (7.55)

Then, the exact variance given in Equation 7.30 can be approximated by that in Equation 7.51, where the constant $W_0$ is used to estimate the exact variance, that is,

$\sigma_{X(\text{exact})}^2 \approx \sigma_{X(\text{approx})}^2 = \frac{0.785f_nW_0}{k^2\zeta}$  (7.56)

In Figure 7.1, the broad line is one of the realizations of the response; in the resonance region, the variance is close to constant. Notice the frequency band between the half-power points, as illustrated:

$\Delta F = (f_2 - f_1) = 2\zeta f_n$  (7.57)

FIGURE 7.1  Approximation of variance. (Log-log plot of the response PSD versus frequency for damping ratios 0.05 and 0.3.)

The requirements for the approximation are as follows (a numerical check is sketched after the list):

1. $W_F(f)$ is relatively smooth in the resonance region.
2. $f_n$ is comparatively low.
3. $\zeta$ is small.
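As a numerical check of Equations 7.50, 7.51, and 7.56, the following Python sketch (with assumed, illustrative system parameters) integrates $S_0|H(\omega)|^2$ on a fine grid and compares the result with the two closed forms.

```python
import numpy as np

# Illustrative SDOF parameters (assumed values, not from the text)
m, k, zeta = 1.0, 1.0e4, 0.02
wn = np.sqrt(k / m)                      # natural frequency, rad/s
fn = wn / (2 * np.pi)                    # natural frequency, Hz
c = 2 * zeta * wn * m                    # damping coefficient

S0 = 1.0e-3                              # two-sided white-noise PSD level
W0 = 4 * np.pi * S0                      # corresponding one-sided PSD in hertz

# "Exact" variance by numerical quadrature of S0*|H(w)|^2 (Equations 7.40 and 7.41)
w = np.linspace(0.0, 50 * wn, 400_001)
H2 = 1.0 / ((k - m * w**2) ** 2 + (c * w) ** 2)
sigma2_exact = 2 * S0 * np.trapz(H2, w)  # factor 2: the integrand is even in w

# Closed forms: pi*S0/(kc) (Equation 7.50) and 0.785*fn*W0/(k^2 zeta) (Equation 7.56)
print(sigma2_exact, np.pi * S0 / (k * c), 0.785 * fn * W0 / (k**2 * zeta))
```

All three printed values agree closely, confirming that for white noise the approximation of Equation 7.56 reproduces the exact variance.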

7.3 ENGINEERING EXAMPLES
In this section, specific practical applications of random responses are discussed. As
a comparison, typical deterministic excitations will also be discussed.

7.3.1 Comparison of Excitations
First, we review the commonly used excitations.

7.3.1.1 Harmonic Excitation
As mentioned in Chapter 6, the harmonic forcing function can be used to measure the transfer function of a linear system.

7.3.1.1.1  Sine Dwell
When the driving frequency of the sinusoidal excitation is fixed, the forcing function is

$f(t) = f_0e^{j\omega t}$  (7.58)

Note that in the real world, only $f(t) = f_0\cos(\omega t)$ or $f(t) = f_0\sin(\omega t)$ will exist; the latter can be written as $jf(t) = jf_0\sin(\omega t)$. Thus, Equation 7.58 is a combination of these two cases, used solely for mathematical convenience. Under the forcing function described by Equation 7.58, the response is given by

$x(t) = H(\omega)f_0e^{j\omega t}$  (7.59)



7.3.1.1.2  Sine Sweep
To use Equation 7.59 to measure the transfer function of an SDOF system at all frequencies of interest, say n frequencies, n forcing functions are needed. A commonly used method is the sine sweep; namely, the driving frequency is stepped from the lowest to the highest, from the highest to the lowest, or both.

7.3.1.1.2.1   Forcing Function  Consider the case of a forcing function of sine sweep:

$f(t) = f_0\sum_{n=0}^{\infty}\delta(t-nT)\sin(n\omega_0 t)$  (7.60)

Using Figure 7.2 for $f_0\sum_{n=-\infty}^{\infty}\delta(t-nT)$, with $\omega_0 = 2\pi/T$, the Fourier transform is shown to be

$\mathcal{F}\left[f_0\sum_{n=-\infty}^{\infty}\delta(t-nT)\right] = f_0\omega_0\sum_{n=-\infty}^{\infty}\delta(\omega-n\omega_0)$  (7.61)

7.3.1.1.2.2   Waiting Time  Let the waiting time be expressed by the variable T. In terms of the number of cycles k, T can be written as

$T = k\frac{2\pi}{\omega}$  (7.62)

FIGURE 7.2  Impulse series. (Top: the impulse train f(t) of amplitude $f_0$ and period T; bottom: its Fourier transform $F(\omega)$, an impulse train of amplitude $\omega_0f_0$ spaced $\omega_0$ apart.)



Suppose the waiting time for the transient response to decay to 10% is to be calculated. Then

$e^{-\zeta\omega T} = 10\%$

and

$k = -\frac{\ln(0.1)}{2\pi\zeta} = \frac{\ln 10}{2\pi\zeta}$  (7.63)

Practically speaking, because the response due to initial conditions can be considerably smaller than that caused by the force, the number of cycles k can be smaller.

7.3.1.1.2.3   Number of Cycles  At each driving frequency, p cycles are needed. The integer p is determined by the criterion that the response has reached a steady state. Readers may consider the question of how to judge steady state.

7.3.1.1.2.4   Auto-PSD  The auto-PSD function is

$S_F(\omega) = S_0 = f_0^2$  (7.64)

7.3.1.1.2.5   Transfer Function  The transfer function is given by

$H(j\omega) = \frac{f_0}{k}\,\frac{1}{1-\left(\dfrac{\omega}{\omega_n}\right)^2 + 2j\zeta\dfrac{\omega}{\omega_n}} = \frac{f_0/k}{\sqrt{\left[1-\left(\dfrac{\omega}{\omega_n}\right)^2\right]^2 + \left[2\zeta\dfrac{\omega}{\omega_n}\right]^2}}\exp\left[-j\tan^{-1}\frac{2\zeta\dfrac{\omega}{\omega_n}}{1-\left(\dfrac{\omega}{\omega_n}\right)^2}\right]$  (7.65)

7.3.1.2 Impulse Excitation
Another commonly used method is impulse excitation.

7.3.1.2.1  Impact Force
First, the impulse, or the impact force, must be considered.

7.3.1.2.1.1   Ideal Case  As mentioned in Chapter 6, for an ideal case

$f(t) = f_0\delta(t)$  (7.66)

Given that the unit impulse response is

$h(t) = \frac{1}{m\omega_d}e^{-\zeta\omega_n t}\sin(\omega_d t), \quad t > 0$  (7.67)

the response is repeated as

$x(t) = f_0h(t)$  (7.68)

Here, the auto-PSD function is

$S_F(\omega) = S_0 = f_0^2$  (7.69)

Mathematically, the transfer function is equal to the Fourier transform of the unit impulse response function:

$H(\omega) = \mathcal{F}[h(t)]$  (7.70)

7.3.1.2.1.2   Real Case  In the real world, because the duration of the impact force cannot be infinitely short, the impact time history is approximated as being close to a half sine wave, as shown in Figure 7.3. Specifically,

$f(t) = \begin{cases} f_0\sin\left(\dfrac{\pi t}{T}\right), & 0 < t < T \\ 0, & \text{elsewhere} \end{cases}$  (7.71)

FIGURE 7.3  Concept of realistic impulse. (a) An ideal half-sine pulse of amplitude $f_0$ and duration T, with its Fourier magnitude $F(\omega)$ rolling off beyond $\omega_C$; (b) a distorted pulse, for which the effective $\omega_C$ is lower.



The Fourier transform is

$F(\omega) = f_0\frac{2T}{\pi}e^{-j\frac{\omega T}{2}}\frac{\cos\left(\dfrac{\omega T}{2}\right)}{1-\dfrac{\omega^2T^2}{\pi^2}}$  (7.72)

Here, referring to Figure 7.3 for the frequency $\omega_C$,

$\omega_C = \frac{\pi}{T}$  (7.73)

A practical issue (referring to the case given in Figure 7.3b) is that the history of the impact force can differ from a half sine wave. For example, the history may contain the so-called "double hit," resulting in

$\omega_C < \frac{\pi}{T}$  (7.74)

Note that the auto-PSD function in this case is written as

$S_F(\omega) = F(\omega)F(\omega)^*$  (7.75)

7.3.1.2.2  Step Force


The step function is another frequently used excitation. This function can be realized
through quick-cable-releasing, weight-dropping, and so on.

FIGURE 7.4  Concept of realistic step function. (a) A step f(t) of duration T; (b) its Fourier magnitude $F(\omega)$, rolling off beyond $\omega_C$.



7.3.1.2.2.1   Ideal Case  First, consider the idealized case, where the forcing function is modeled as a Heaviside step function:

$f(t) = f_0u(t)$  (7.76)

The Fourier transform is

$F(\omega) = \mathcal{F}[f(t)] = f_0\left[\pi\delta(\omega) + \frac{1}{j\omega}\right]$  (7.77)

7.3.1.2.2.2   Real Case  In the real world, the excitation described by Equation 7.76 can be simulated with reasonable accuracy. Equation 7.76 is commonly rewritten as

$f(t) = \begin{cases} f_0, & 0 < t < T \\ 0, & \text{elsewhere} \end{cases}$  (7.78)

The corresponding Fourier transform is

$F(\omega) = \mathcal{F}[f(t)] = f_0T\,\frac{\sin\left(\dfrac{\omega T}{2}\right)}{\dfrac{\omega T}{2}}\,e^{-j\frac{\omega T}{2}}$  (7.79)

Similar to the impact force, the frequency $\omega_C$ exhibits one of two conditions:

$\omega_C = \frac{\pi}{T}$  (7.80)

and

$\omega_C < \frac{\pi}{T}$  (7.81)

It is important to note that the time duration T is considerably longer than that of the impact force; therefore, the corresponding value of $\omega_C$ is much smaller. Again, the auto-power spectral function is

$S_F(\omega) = F(\omega)F(\omega)^*$  (7.82)

7.3.1.3 Random Excitation
As a comparison, consider a random forcing function, which is the most general among these commonly used excitations.

7.3.1.3.1  White Noise (Ideal Case)
First, look at the case of white noise excitation f(t). The auto-PSD function is

$S_F(\omega) = S_0$  (7.83)

Readers may consider how the measurement of $|H(\omega)|^2$ can be carried out practically. From Equation 7.35,

$S_X(\omega) = |H(\omega)|^2S_0$  (7.84)

Rearranging the terms in the equation yields

$\frac{S_X(\omega)}{S_0} = |H(\omega)|^2 = \frac{1}{(k-m\omega^2)^2 + (c\omega)^2}$  (7.85)

Readers may also consider how to estimate $S_0$ practically.
In conducting a vibration test, the auto-spectral density level $S_0$ will be known for both a deterministic and a white noise random process. Measuring the auto-PSD $S_X(\omega)$ at $\omega = 0$ results in

$\frac{S_X(0)}{S_0} = \frac{1}{k^2}$  (7.86)

Equation 7.86 is used in vibration testing to determine unknown stiffness.


In tangible applications, the value of $S_0$ is often unknown. Consider the example in which a bridge is excited by ambient vibrations. If a sufficiently long measurement is taken, then the auto-spectral density function can be treated as a constant in the frequency range of interest. Additionally, if the static stiffness k can be obtained through finite element analysis and a measurement of the auto-spectral density function of the response is obtained, $S_0$ can be estimated as

$S_0 = k^2S_X(0)$  (7.87)

Readers may consider the question: How would the measurement of a transfer function be accomplished?
Similarly, the transfer function can be measured through a vibration test. Mathematically speaking,

$H(\omega) = \frac{X(\omega)}{F(\omega)}$  (7.88)

As mentioned in Chapter 4 (Equations 4.122 and 4.123), the transfer function can be approximated by

$H_1(\omega) = \frac{S_{FX}(\omega)}{S_F(\omega)} = \frac{S_{FX}(\omega)}{S_0}$  (7.89)

or

$H_2(\omega) = \frac{S_X(\omega)}{S_{XF}(\omega)}$  (7.90)
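The $H_1$ estimate of Equation 7.89 can be exercised on simulated data. The following Python sketch (system and test parameters are assumed, for illustration only) drives an SDOF system with a white-noise sample, forms Welch-averaged auto- and cross-PSDs, and compares $|\hat{H}_1|$ against the analytic magnitude from Equation 7.85.

```python
import numpy as np
from scipy import signal

# Illustrative SDOF system (assumed values): m x'' + c x' + k x = f(t)
m, k, zeta = 1.0, 1.0e4, 0.05
c = 2 * zeta * np.sqrt(k * m)
sdof = signal.lti([1.0], [m, c, k])           # H(s) = 1/(m s^2 + c s + k)

fs = 200.0                                    # sampling rate (Hz)
t = np.arange(0.0, 400.0, 1.0 / fs)
rng = np.random.default_rng(1)
f = rng.normal(0.0, 1.0, t.size)              # broadband forcing realization

x = signal.lsim(sdof, f, t)[1]                # simulated response

# H1 estimate (Equation 7.89): averaged cross-PSD over averaged input auto-PSD
freq, Sfx = signal.csd(f, x, fs=fs, nperseg=4096)
_, Sff = signal.welch(f, fs=fs, nperseg=4096)
H1 = Sfx / Sff

w = 2 * np.pi * freq
H_true = 1.0 / (k - m * w**2 + 1j * c * w)    # analytic transfer function
band = (freq > 1.0) & (freq < 30.0)
print(np.max(np.abs(np.abs(H1[band]) / np.abs(H_true[band]) - 1.0)))  # small
```

Because csd and welch perform the segment averaging internally, the n versus n − 1 normalization discussed later in Section 7.4.1 indeed cancels in the ratio.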

7.3.1.3.2  Band-Pass White Noise (Real Case)
Generally, excitations as broadband white noise do not occur. Rather, natural excitation is closer to band-pass white noise, for which the frequency range of the excitation is limited to $\omega_L$ to $\omega_U$.

7.3.1.3.2.1   Band Width  Recalled from Chapter 4, the bandwidth is

$\Delta\omega = \omega_U - \omega_L$  (7.91)

7.3.1.3.2.2   Variance  The variance is repeated as

$\sigma_F^2 = 2\Delta\omega S_0$  (7.92)

7.3.1.3.2.3   Autocorrelation and Auto-PSD  The corresponding autocorrelation function is repeated as

$R_F(\tau) = \sigma_F^2\,\frac{\sin(\Delta\omega\tau/2)}{\Delta\omega\tau/2}\,\cos\omega_0\tau$  (7.93)

where $\omega_0$ is the center frequency of the band. The auto-spectral density function is repeated as

$S_F(\omega) = \begin{cases} S_0, & \omega_L < \omega < \omega_U \\ S_0/2, & \omega = \omega_L,\ \omega_U \\ 0, & \text{elsewhere} \end{cases}$  (7.94)

Readers may consider the question: In the case of random excitations, why are $R_F(\tau)$ and $S_F(\omega)$ used in place of $F(\omega)$?
Because the excitation is confined to a limited frequency band, there is no excitation energy outside the boundaries $\omega_L$ and $\omega_U$. In other words, the estimated transfer function is also band-limited.

7.3.1.4 Other Excitations
Lastly, we consider additional excitations.

7.3.1.4.1  Fast Sine Sweep (Chirp)


For a fast sine sweep excitation, linear and exponential chirps are commonly used.

FIGURE 7.5  Linear chirp. (Waveform plotted over 0 to 5 s.)

7.3.1.4.1.1   Linear Chirp  As shown in Figure 7.5, the linear chirp is formulated as

$f(t) = f_0\sin[(\omega_0 + k\pi t)t]$  (7.95)

7.3.1.4.1.2   Exponential Chirp  As shown in Figure 7.6, the exponential chirp is formulated as

$f(t) = f_0\sin\left(\omega_0\frac{k^t - 1}{\ln k}\right)$  (7.96)
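Both sweep laws are one-liners to generate. The following sketch uses assumed values ($f_0 = 1$, $\omega_0 = 2\pi$ rad/s) and arbitrary sweep rates; scipy.signal.chirp offers equivalent "linear" and "logarithmic" methods.

```python
import numpy as np

t = np.linspace(0.0, 5.0, 5001)
w0 = 2 * np.pi * 1.0                      # starting angular frequency (assumed)

k_lin = 2.0                               # linear sweep rate (assumed)
f_linear = np.sin((w0 + k_lin * np.pi * t) * t)                # Equation 7.95

k_exp = 2.0                               # frequency ratio per unit time (assumed)
f_exponential = np.sin(w0 * (k_exp**t - 1.0) / np.log(k_exp))  # Equation 7.96
```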

7.3.1.4.2  Pink Noise etc.
In addition to white noise, colored noise can exist. Pink noise is defined as

$S_F(\omega) = \frac{S_0}{\omega}$  (7.97)

Generally speaking, the PSD for a colored noise can also be written as

$S_F(\omega) = \frac{S_0}{\omega^\alpha}$  (7.98)

FIGURE 7.6  Exponential chirp. (Waveform plotted over 0 to 5 s.)



By varying the parameter α in Equation 7.98, the color of the noise is varied: for white noise, α = 0; for pink noise, α = 1; and for red (brown) noise, α = 2. An alternative way to write the PSD for a special colored noise is

$S_F(\omega) = \frac{2\eta S_0}{\omega^2 + \eta^2}$  (7.99)

and the autocorrelation function is

$R_F(\tau) = S_0e^{-\eta|\tau|}$  (7.100)

The corresponding random process f(t) can be seen as a transient solution of the following first-order differential equation,

$\frac{d}{dt}f(t) = -\eta f(t) + \sqrt{2\eta}\,w(t)$  (7.101)

where w(t) is a white noise process with auto-PSD $S_0$. That is, the colored noise can be seen as the response of a first-order system excited by white noise. The solution can be represented by the convolution

$f(t) = \int_0^{\infty}e^{-\eta\tau}\sqrt{2\eta}\,w(t-\tau)\,d\tau$  (7.102)
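A discrete-time sketch of Equation 7.101 follows: an Euler-Maruyama step of size dt turns the first-order equation into a one-pole filter, and the sample autocorrelation of the output can be checked against Equation 7.100. All parameter values are assumptions for illustration, with $S_0$ playing the role of the stationary variance.

```python
import numpy as np
from scipy.signal import lfilter

# df/dt = -eta f + sqrt(2 eta) w   ->   f[i] = (1 - eta dt) f[i-1] + sqrt(2 eta dt S0) xi[i]
eta, S0 = 2.0, 1.0                      # assumed values
dt, n = 1.0e-3, 2_000_000
rng = np.random.default_rng(2)
xi = rng.normal(0.0, 1.0, n)            # unit-variance discrete white noise

f = lfilter([np.sqrt(2.0 * eta * dt * S0)], [1.0, -(1.0 - eta * dt)], xi)

# Sample autocorrelation versus R_F(tau) = S0 exp(-eta |tau|) (Equation 7.100)
for tau in (0.0, 0.25, 0.5, 1.0):
    lag = int(tau / dt)
    r = np.mean(f[: n - lag] * f[lag:])
    print(tau, r, S0 * np.exp(-eta * tau))
```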

7.3.1.4.3  Ambient Excitation F(t)
Ambient excitation is a common occurrence, initiated by instances such as a bumpy road, car riding, ground tremor, and bridge response. In ambient excitation,

1. The forcing function f(t) is often a random process.
2. The time history of the forcing function is difficult to measure.
3. Measurements of the responses, however, are comparatively much easier.
4. Correlation analyses, in both the time and the frequency domains, are the primary form of measures.
5. However, random decrement analysis is also used.
6. A lightly damped SDOF system will often behave as a narrow-band filter, so its response to ambient excitation is narrow band.

7.3.2 Response Spectra
7.3.2.1 Response Spectrum
When the forcing function is a random process, even if the bound of the force is known, it is difficult to determine the bound of the response. The bound, or the peak value, of the responses is nevertheless important. The response spectrum is a tool with which to statistically determine the peak value.

Consider a realization of ground excitation:

$m\ddot{x}(t) + c\dot{x}(t) + kx(t) = -m\ddot{x}_g(t)$

In terms of the natural period and damping ratio, the equation under ground excitation can be rewritten as

$\ddot{x}(t) + \frac{4\pi\zeta}{T_n}\dot{x}(t) + \frac{4\pi^2}{T_n^2}x(t) = -\ddot{x}_g(t)$  (7.103)

Now, consider the jth input to a system whose natural period is $T_i$:

$\ddot{x}_{ij}(t) + \frac{4\pi\zeta}{T_i}\dot{x}_{ij}(t) + \frac{4\pi^2}{T_i^2}x_{ij}(t) = -\ddot{x}_{g_j}(t)$  (7.104)

where $\ddot{x}_{g_j}(t)$ is the realization of the jth ground excitation. Thus, Equation 7.104 can be solved to find the response $x_{ij}(t)$. Here, the subscript i indicates a system with period $T_i$.
Denote

$x_{ij} = \max(|x_{ij}(t)|)$  (7.105)

Taking the mean value plus one standard deviation results in

$\overline{x}_i = \langle x_i\rangle + \sigma_i = \frac{1}{N}\sum_{j=1}^{N}x_{ij} + \sqrt{\frac{1}{N}\sum_{j=1}^{N}x_{ij}^2 - \left(\frac{1}{N}\sum_{j=1}^{N}x_{ij}\right)^2}$  (7.106)

This is illustrated by the line marked with the symbol "***" in Figure 7.7. At each period $T_i$, the mean value $\langle x_i\rangle$ and the standard deviation $\sigma_i$ of the peak responses are computed, in which the subscript i corresponds to the statistics taken in accordance with the ith period $T_i$. The sum $\langle x_i\rangle + \sigma_i$ is taken as the raw data for the statistical response spectral value $\overline{x}_i$.
Referring to Equation 7.106, N signifies the number of records used. With this method, the number of periods considered is not limited, so $\overline{x}_i$, which is a function of $T_i$, can be obtained with a reasonable resolution in $T_i$. Here, $T_i$ is the ith natural period. Note that, for convenience, the subscript n is omitted from the term $T_{ni}$ in the following text.
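The procedure of Equations 7.104 through 7.106 translates directly into code. In the following sketch the ground-motion ensemble is a synthetic stand-in (exponentially modulated noise), so only the mechanics, not the spectral values, should be taken from it.

```python
import numpy as np
from scipy import signal

fs, dur, zeta = 50.0, 20.0, 0.05
t = np.arange(0.0, dur, 1.0 / fs)
rng = np.random.default_rng(3)
records = [rng.normal(0, 1, t.size) * np.exp(-0.2 * t) for _ in range(20)]  # N = 20

periods = np.linspace(0.1, 5.0, 50)
spectrum = []
for Ti in periods:
    wn = 2 * np.pi / Ti
    # x'' + 2 zeta wn x' + wn^2 x = -xg''  (relative displacement, Equation 7.104)
    sdof = signal.lti([-1.0], [1.0, 2 * zeta * wn, wn**2])
    peaks = np.array([np.max(np.abs(signal.lsim(sdof, xg, t)[1])) for xg in records])
    spectrum.append(peaks.mean() + peaks.std())   # Equation 7.106: mean + 1 std
spectrum = np.asarray(spectrum)                   # raw (nonsmooth) spectral values
```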

7.3.2.2 Design Spectra
Because the combined quantities $\overline{x}_i$ form a nonsmooth curve, the result is not convenient to manage. Therefore, further measures should be taken to smooth the curve. Smoothed values may be obtained from the envelope of all $\overline{x}_i$ or from other

FIGURE 7.7  Earthquake response spectra, $S_D$, ζ = 5%. (Spectral acceleration (g) versus period (s) for a 0.4-g input; the raw mean-plus-one-standard-deviation data and the smoothed design spectrum $S_D$.)


FIGURE 7.8  Coherence function (acceleration and displacement) versus period (s). (a) Coherence for a lightly damped linear system; (b) correlation between acceleration and displacement as damping increases.

measures, which are referred to as the design spectra (see Chopra [2003] and Liang et al. [2012], for instance). The design displacement response spectrum is denoted by $S_D$ and is shown by the black line in Figure 7.7.

7.3.3 Criteria of Design Values

7.3.3.1 Pseudo Spectrum
To use the above description of the response spectrum, it is seen that if the damping force is sufficiently small, then the inertia force is approximately balanced by the restoring force alone. That is, in Equation 6.123, let

$c\dot{x}(t) = 0$  (7.107)

we can write

$m\ddot{x}_A(t) = -kx_R(t)$  (7.108)

where the subscript R is used to emphasize the relative displacement or relative acceleration, and

$\ddot{x}_A(t) = \ddot{x}_R(t) + \ddot{x}_g(t)$  (7.109)

where the subscript A stands for absolute acceleration.
Dividing both sides of Equation 7.108 by m yields

$\ddot{x}_A(t) = -\frac{4\pi^2}{T_n^2}x_R(t)$  (7.110)

In the literature, the spectrum of the peak absolute acceleration is generated through Equation 7.110 based on the response spectrum of relative displacement, which is referred to as the pseudo spectrum. From Equation 7.110, it seems that a designer needs to consider only either the absolute acceleration or the relative displacement, because they are related by the proportional factor $4\pi^2/T_n^2$. In other words, in this circumstance, we have only a single parameter as the design criterion: if the acceleration satisfies the design criterion, then the displacement can be checked accordingly, and no extra effort is needed.
However, the pseudo spectrum rests on the condition described in Equation 7.107. When damping becomes large, Equation 7.107 no longer holds. In the following, we consider the correlation between the absolute acceleration and the relative displacement when the system is subjected to random excitations.

7.3.3.2 Correlation of Acceleration and Displacement
It is understandable that, if Equation 7.110 holds, then the peak values of the absolute acceleration and the relative displacement satisfy

$\ddot{x}_A(t) \propto x_R(t)$  (7.111)

In these cases, the correlation of $\ddot{x}_A(t)$ and $x_R(t)$ is unity.
However, with larger damping Equation 7.107 is no longer valid, and the correlation between the peak values of $\ddot{x}_A(t)$ and $x_R(t)$ is no longer unity. In Figure 7.8a and b, the coherence functions are plotted versus the natural period $T_n$ for different damping ratios. In Figure 7.8a, where the damping ratio ranges from 0.01 to 0.05, it can be seen that when the period is greater than 1.6 s and the damping ratio grows toward 5%, the coherence becomes smaller and smaller. In Figure 7.8b, the damping ratios are chosen from 0.1 to 0.5; when the damping ratio is larger than 10%, the coherence function is even smaller than 50%. In other words,

the peak values of the absolute acceleration and the relative displacement become uncorrelated.
This indicates that Equation 7.110 or 7.111 will indeed no longer be valid. In this circumstance, the acceleration and the displacement become independent, and the design parameters become two, instead of the single parameter sufficient for small damping. To understand the physical significance, the coherence function is discussed further in Section 7.4.

7.4 COHERENCE ANALYSES
In vibration analysis and testing, the major error often comes from the transfer func-
tion measurement. In this section, a method to ensure the accuracy of transfer func-
tions is discussed.

7.4.1 Estimation of Transfer Function


In any realistic test, measurement noise is unavoidable. Using noise-contaminated
data to generate transfer functions and extract modal parameters may result in inac-
curate modal estimation. In certain cases, a mode may not be properly measured. In
other cases, we will have the so-called “noise mode.” That is, certain peak values
look like transfer function magnifications at some frequencies and seem to be vibra-
tion modes. However, these peak values are caused by measurement noises.
To make sure a peak value measured is a true mode, we need an approach to
evaluate the results of the transfer functions. In the previous chapter, we mentioned
a method using orthogonality to check the reciprocal modal vector. The procedure
to check or evaluate the modal estimation is often referred to as the procedure of
modal confidence.
In the frequency domain, the coherence function is a commonly used method, which can be more convenient and accurate. In Chapter 6, we mentioned that the transfer function can be calculated as the ratio of the transforms of the output and input, that is,

$H(s) = \frac{X(s)}{F(s)}$  (7.112)

In vibration testing, however, we can have two different methods to measure the transfer functions, estimated through $H_1(\omega)$ and $H_2(\omega)$, given by

$H_1(\omega) = \frac{S_{fx}(\omega)}{S_f(\omega)}$  (7.113)

and

$H_2(\omega) = \frac{S_x(\omega)}{S_{xf}(\omega)}$  (7.114)

Equations 7.113 and 7.114 are mathematically correct. In practical measurements, the auto- and cross-spectral density functions are obtained through the fast Fourier transform of signals picked up during the test, and a great number of their averages. That is, the cross-PSD functions of the ith measurement can be calculated as

$\hat{S}_{f_ix_i}(\omega) = X_i(\omega,T)F_i(\omega,T)^*$  (7.115)

and

$\hat{S}_{x_if_i}(\omega) = F_i(\omega,T)X_i(\omega,T)^*$  (7.116)

As mentioned previously, T is the sample length. The auto-PSD function of the ith output is given by

$\hat{S}_{x_ix_i}(\omega) = X_i(\omega,T)X_i(\omega,T)^*$  (7.117)

The auto-PSD function of the ith input is given by

$\hat{S}_{f_if_i}(\omega) = F_i(\omega,T)F_i(\omega,T)^*$  (7.118)

Here, to emphasize the ith measurement and the complex conjugate, the subscripts are written as $x_ix_i$ and $f_if_i$, instead of $x_i$ and $f_i$ only. In the above equations, $X_i(\omega)$ and $F_i(\omega)$ are the Fourier transforms of the response and the force obtained from the ith test.
Furthermore, suppose we have conducted a total of n tests, for which we can have the average of the cross-PSD function given by

$\hat{S}_{fx} \approx \frac{1}{n}\sum_{i=1}^{n}\hat{S}_{f_ix_i}$  (7.119)

or the unbiased average written as

$\hat{S}_{fx} \approx \frac{1}{n-1}\sum_{i=1}^{n}\hat{S}_{f_ix_i}$  (7.120)

We also have the average of the other cross-PSD function as

$\hat{S}_{xf} \approx \frac{1}{n}\sum_{i=1}^{n}\hat{S}_{x_if_i}$  (7.121)

or the unbiased average written as

$\hat{S}_{xf} \approx \frac{1}{n-1}\sum_{i=1}^{n}\hat{S}_{x_if_i}$  (7.122)

In addition, the average of the auto-PSD function of the output is given by

$\hat{S}_x \approx \frac{1}{n}\sum_{i=1}^{n}\hat{S}_{x_ix_i}$  (7.123)

or the unbiased average written as

$\hat{S}_x \approx \frac{1}{n-1}\sum_{i=1}^{n}\hat{S}_{x_ix_i}$  (7.124)

and the average of the auto-PSD function of the input is given by

$\hat{S}_f \approx \frac{1}{n}\sum_{i=1}^{n}\hat{S}_{f_if_i}$  (7.125)

or the unbiased average written as

$\hat{S}_f \approx \frac{1}{n-1}\sum_{i=1}^{n}\hat{S}_{f_if_i}$  (7.126)

In the statistical sense, one should use the unbiased averages. However, as the following equations show, because the estimated transfer functions are ratios of PSD functions, the factor n or n − 1 ultimately cancels.
Let us consider the input-system-output relationship shown in Figure 7.9. As mentioned in Chapter 4, in many cases the noises M(ω) and N(ω) are correlated neither with the forcing function F(ω) nor with the response X(ω). Therefore, we approximately have

$S_{mf}(\omega) = S_{fm}(\omega) = S_{nx}(\omega) = S_{xn}(\omega) \approx 0$  (7.127)

In addition, the input and output noises are not correlated,

$S_{nm}(\omega) = S_{mn}(\omega) \approx 0$  (7.128)

FIGURE 7.9  Noises and transfer function calculation. (a) Input and (b) output with significant noise; the measured input is F(ω) + M(ω) and the measured output is X(ω) + N(ω), where α(ω) denotes the system.

That is, from Figure 7.9, we can see that

$\hat{S}_f(\omega) = [F(\omega)+M(\omega)][F(\omega)+M(\omega)]^* = F(\omega)F(\omega)^* + M(\omega)F(\omega)^* + F(\omega)M(\omega)^* + M(\omega)M(\omega)^*$
$\quad = S_f(\omega) + S_{fm}(\omega) + S_{mf}(\omega) + S_m(\omega) \approx S_f(\omega)\left[1 + \frac{S_m(\omega)}{S_f(\omega)}\right]$  (7.129)

and

$\hat{S}_x(\omega) = [X(\omega)+N(\omega)][X(\omega)+N(\omega)]^* = X(\omega)X(\omega)^* + N(\omega)X(\omega)^* + X(\omega)N(\omega)^* + N(\omega)N(\omega)^*$
$\quad = S_x(\omega) + S_{xn}(\omega) + S_{nx}(\omega) + S_n(\omega) \approx S_x(\omega)\left[1 + \frac{S_n(\omega)}{S_x(\omega)}\right]$  (7.130)

Here, for convenience, we omit the sample length T. Furthermore,

$\hat{S}_{xf}(\omega) = [F(\omega)+M(\omega)][X(\omega)+N(\omega)]^* = F(\omega)X(\omega)^* + M(\omega)X(\omega)^* + F(\omega)N(\omega)^* + M(\omega)N(\omega)^*$
$\quad = S_{xf}(\omega) + S_{xm}(\omega) + S_{nf}(\omega) + S_{nm}(\omega) \approx S_{xf}(\omega)$  (7.131)

$\hat{S}_{fx}(\omega) = [X(\omega)+N(\omega)][F(\omega)+M(\omega)]^* = X(\omega)F(\omega)^* + N(\omega)F(\omega)^* + X(\omega)M(\omega)^* + N(\omega)M(\omega)^*$
$\quad = S_{fx}(\omega) + S_{fn}(\omega) + S_{mx}(\omega) + S_{mn}(\omega) \approx S_{fx}(\omega)$  (7.132)

With the help of the above approximations, the estimated transfer functions can be calculated through the averaged results, that is,

$\hat{H}_1(\omega) = \frac{\hat{S}_{fx}(\omega)}{\hat{S}_f(\omega)} = H(\omega)\left[1 + \frac{S_m(\omega)}{S_f(\omega)}\right]^{-1}$  (7.133)

and

$\hat{H}_2(\omega) = \frac{\hat{S}_x(\omega)}{\hat{S}_{xf}(\omega)} = H(\omega)\left[1 + \frac{S_n(\omega)}{S_x(\omega)}\right]$  (7.134)

where $S_m(\omega)$ and $S_n(\omega)$ are the auto-PSD functions of the input and output noises.

7.4.2 Coherence Function
Introduced in Chapter 4, the coherence function can be seen as a ratio of the transfer function estimates $\hat{H}_1$ and $\hat{H}_2$. To measure the coherence function practically, we consider the values of $\hat{H}_1$ and $\hat{H}_2$. (Near resonance the response is large, so $S_n/S_x$ is negligible, while the measured force tends to be small, so $S_m/S_f$ is significant; at antiresonance the converse holds.)
At the resonance point,

$\hat{H}_1(\omega)\big|_{\text{resonance}} = \frac{\hat{S}_{fx}(\omega)}{\hat{S}_f(\omega)} = H(\omega)\left[1 + \frac{S_m(\omega)}{S_f(\omega)}\right]^{-1}$  (7.135)

It is seen that

$\hat{H}_1(\omega)\big|_{\text{resonance}} < H(\omega)$

and

$\hat{H}_2(\omega)\big|_{\text{resonance}} = \frac{\hat{S}_x(\omega)}{\hat{S}_{xf}(\omega)} = H(\omega)\left[1 + \frac{S_n(\omega)}{S_x(\omega)}\right] \approx H(\omega)$  (7.136)

At the antiresonance point,

$\hat{H}_1(\omega)\big|_{\text{antiresonance}} = \frac{\hat{S}_{fx}(\omega)}{\hat{S}_f(\omega)} = H(\omega)\left[1 + \frac{S_m(\omega)}{S_f(\omega)}\right]^{-1} \approx H(\omega)$  (7.137)

and

$\hat{H}_2(\omega)\big|_{\text{antiresonance}} = \frac{\hat{S}_x(\omega)}{\hat{S}_{xf}(\omega)} = H(\omega)\left[1 + \frac{S_n(\omega)}{S_x(\omega)}\right]$  (7.138)

It is seen that

$\hat{H}_2(\omega)\big|_{\text{antiresonance}} > H(\omega)$

Now, with the measured data, let us define the coherence function $\gamma_{fx}^2(\omega)$ as follows:

$\gamma_{fx}^2(\omega) = \frac{\hat{H}_1(\omega)}{\hat{H}_2(\omega)}$  (7.139)

It is seen that

$\gamma_{fx}^2(\omega) = \frac{\left|\hat{S}_{fx}(\omega)\right|^2}{\hat{S}_x(\omega)\hat{S}_f(\omega)}$  (7.140)

Using the coherence function, we can evaluate the transfer functions by checking the level of $\gamma_{fx}^2(\omega)$. From the discussion of the values of $\hat{H}_1$ and $\hat{H}_2$, it can be realized that

$100\% \ge \gamma_{fx}^2(\omega) \ge 0\%$  (7.141)

In addition, the higher the value of the coherence function, the better the accuracy.
Generally speaking, if

$\gamma_{fx}^2(\omega) > 70\%$  (7.142)

then the corresponding peak value can be taken to belong to a mode. However, in certain cases, a mode is recognized when the coherence function is greater than 50%.
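The coherence of Equation 7.140 is straightforward to estimate from test records. The following Python sketch (system, test, and noise levels are all assumed for illustration) adds independent noise to simulated input and output channels and computes $\gamma_{fx}^2(\omega)$ with Welch averaging.

```python
import numpy as np
from scipy import signal

# Illustrative SDOF system and noisy measurement channels (assumed values)
m, k, zeta = 1.0, 1.0e4, 0.05
c = 2 * zeta * np.sqrt(k * m)
sdof = signal.lti([1.0], [m, c, k])

fs = 200.0
t = np.arange(0.0, 400.0, 1.0 / fs)
rng = np.random.default_rng(4)
f = rng.normal(0.0, 1.0, t.size)                 # true force
x = signal.lsim(sdof, f, t)[1]                   # true response

f_meas = f + 0.2 * f.std() * rng.normal(0.0, 1.0, t.size)   # input noise m(t)
x_meas = x + 0.2 * x.std() * rng.normal(0.0, 1.0, t.size)   # output noise n(t)

# Coherence estimate: |S_fx|^2 / (S_f S_x), averaged over segments (Equation 7.140)
freq, coh = signal.coherence(f_meas, x_meas, fs=fs, nperseg=2048)
# coh is close to 1 near resonance (high signal-to-noise ratio) and drops where
# the response, and hence the signal-to-noise ratio, is poor.
```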
In Figure 7.10a, a sample transfer function $\hat{H}_1(\omega)$ is plotted. From this plot, we can see that there exist three modes. However, we do not know how well the transfer

FIGURE 7.10  Measurement accuracy of (a) a transfer and (b) a coherence function. (Amplitude versus frequency, 0 to 50 Hz, in both panels.)

function function is measured. In Figure 7.10b, the corresponding coherence function $\gamma_{fx}^2(\omega)$ is also plotted. It is seen that in the neighborhoods of 4, 12, and 25 Hz, which are the resonant regions of these three modes, the corresponding values of the coherence function are about 70%, 67%, and 65%, respectively. Therefore, at least the transfer function of the first mode can be regarded as an accurate measurement.

7.4.3 Improvement of Coherence Functions

From the previous discussion, it is seen that, to improve the measurement accuracy, we need higher values of the coherence function; practically speaking, during a test, we need methods to improve it. First of all, the measurement location plays an important role. If the location is close to a nodal point of a certain mode, then the corresponding coherence will have a relatively low value. In this case, we need to vary the location; to judge whether the measurement of that mode has improved, we can again check the coherence.
Second, if the location cannot be changed, then we need a finer frequency resolution. With a limited measurement buffer size, the zoom Fourier transform method can be used. A detailed discussion of the zoom Fourier transform is beyond the scope of this manuscript; interested readers may consult Clark (2005), for instance.
Generally speaking, the zoom Fourier transform, or zoom fast Fourier transform (zoom FFT), is a signal processing technique used to analyze a portion of a spectrum at high resolution. The basic steps to apply the zoom FFT to a frequency region are as follows (a minimal sketch is given after the list):

1. Frequency-translate to shift the frequency range of interest down to near 0 Hz (DC).
2. Low-pass filter to prevent aliasing when the data are subsequently sampled at a lower sample rate.
3. Resample at the lower rate.
4. FFT the resampled data. Multiple blocks of data are needed to form an FFT of the same length. The resulting spectrum will have a much smaller resolution bandwidth, compared with an FFT of nontranslated data.
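The four steps translate into a few lines of Python. The sketch below uses an assumed two-tone test signal near 120 Hz and scipy.signal.decimate for the anti-alias filtering and resampling of the complex, frequency-shifted data.

```python
import numpy as np
from scipy import signal

# Assumed test signal: two closely spaced tones near 120 Hz
fs, f0, decim = 1024.0, 120.0, 16                 # sample rate, band center, decimation
t = np.arange(0.0, 8.0, 1.0 / fs)
x = np.sin(2 * np.pi * 119.7 * t) + np.sin(2 * np.pi * 120.4 * t)

x_shift = x * np.exp(-2j * np.pi * f0 * t)        # step 1: translate the band to DC
x_low = signal.decimate(x_shift, decim, ftype="fir")  # steps 2-3: low-pass + resample
X = np.fft.fftshift(np.fft.fft(x_low))            # step 4: FFT of the resampled data
freqs = f0 + np.fft.fftshift(np.fft.fftfreq(x_low.size, d=decim / fs))
# For a fixed FFT length, the resolution bandwidth is now fs/(decim*N): 16x finer here.
```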

Third, more measurements can be used to increase the number of averages. It may be found that, if 30 averages are not sufficient, then around 300 averages may be needed.

7.5 TIME SERIES ANALYSIS

The previous discussion described that when the input of a linear system is a random process, the output will also be a random process. When we attempt to measure such an output, however, additional noise is often inevitable, so the measured time history will be the sum of two random series. Our goal now is to reduce the effect of noise contamination on the measurement.
Time series analysis can be used to account for the measurement noise and is an important method in signal processing. In this section, we briefly discuss the modeling of time series and the corresponding key characteristics.

7.5.1 Time Series
The time series is essentially a random process, generated as the response of a discrete-time system excited by a white noise process. There are many publications on time series analysis; readers may consult, for example, Box et al. (1994), Hamilton (1994), and Shumway and Stoffer (2011), and especially Ludeman (2003).

7.5.1.1 General Description
Due to noise contamination, the measured vibration signal, whether it is deterministically or randomly excited, will be considered a random process. Because data acquisition turns analogue signals into digital signals, the measured time history is in the format of a time sequence; this is one of the major features of a time series.
More importantly, a time series is arranged in the order of occurrence in time; namely, the earlier measured data are arranged in earlier positions, and so on. Occurrence in temporal order is the second feature of a time series. Unlike an independent sequence, whose ordering is immaterial, an ordered sequence often possesses strong correlations.
Randomness is the third, but most important, feature, and in the following we will focus on it. The main purpose of time series analysis is to find the pattern and corresponding statistical parameters of the sequence. The basic method of the analysis is to establish a correct model, in most cases one of the typical models of time series, and then analyze the model.
Three common applications of time series analysis include

1. Description. By using a specific model, the statistical properties of the measured random process can be described, such as covariance functions for unveiling the features of the correlation, PSD functions for understanding the frequency components, Green's functions to measure the eigenparameters of a system, and others.
2. Prediction. Based on established models and data picked up in measurements, we may predict future values of the time series and the tendency of its time-varying development.
3. Control. Based on modeling and prediction, we can further adjust the parameters of the system and control its output with the proper amount of feedback.

7.5.1.2 Useful Models of Time Series
Suppose w(n) is an input and x(n) is the corresponding output of a discrete system, and there exists the following relation:

$x(n) = -\sum_{k=1}^{p}a_kx(n-k) + \sum_{k=0}^{q}b_kw(n-k), \quad n \ge 0,\ p > q$  (7.143)

where both x(n) and w(n) are generic terms, which can be time points of random
process, or realizations of random process.
If w(n) is a stationary white noise series, and to all j and k, we have
E[w(n)] = 0 (7.144)
and
E[w(j)w(k)] = σ2δij (7.145)
then the output is called the autoregressive–moving-average (ARMA) process with
p autoregressive terms and q moving-average terms, denoted by ARMA(p, q). The
general ARMA model was first described in the 1951 thesis of Whittle (also see Box
and Jenkins [1971]). Here, δij is a Kronecker delta function (Leopold Kronecker,
1823–1891), and σ2 is the variance of w(n).
If all k = 1, 2, … q, ak = 0, the output is called qth moving-average (MA) process,
denoted by MA(q). In this case,
q

x (n) = ∑ b w(n − k )
k =0
k (7.146)

If all k = 1, 2, … q, bk = 0, the output is called pth autoregressive (AR) process,


denoted by AR(p). In this case,
p

x (n) = − ∑ a x (n − k ) + b w(n)
k =1
k 0 (7.147)

In the following, let us first discuss the characters of the ARMA process, such as
mean, variance and correlation functions, and probability density function, which
are decided by the statistical properties of input w(n).
We note that if the time variation of w(n) and x(n) are defined in the entire time
domain, the corresponding system will have a steady state output. If they are only
defined when n ≥ 0, we will have transient processes.
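With the sign convention of Equation 7.143, all three model types are one call to a digital filter. The following Python sketch (coefficient values are assumed, chosen to be stable) simulates MA, AR, and ARMA realizations from the same white-noise input.

```python
import numpy as np
from scipy.signal import lfilter

# x(n) = -sum(a_k x(n-k)) + sum(b_k w(n-k))  <=>  lfilter(b, [1, a_1, ..., a_p], w)
rng = np.random.default_rng(5)
sigma = 1.0
w = rng.normal(0.0, sigma, 200_000)          # stationary white-noise input

a = [0.6, -0.2]                              # a_1, a_2 (assumed, stable choice)
b = [1.0, 0.4]                               # b_0, b_1 (assumed)

x_arma = lfilter(b, [1.0] + a, w)            # ARMA(2, 1), Equation 7.143
x_ar = lfilter([b[0]], [1.0] + a, w)         # AR(2), Equation 7.147
x_ma = lfilter(b, [1.0], w)                  # MA(1), Equation 7.146

# MA(1) variance check against sigma^2 * sum(b_k^2), cf. Equation 7.152 below
print(x_ma.var(), sigma**2 * sum(bk**2 for bk in b))
```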

7.5.2 Characteristics of ARMA Models

7.5.2.1 Moving-Average Process MA(q)
The moving-average process MA(q) is modeled by Equation 7.146, where w(n) is a stationary white noise process; it is not necessarily Gaussian.

7.5.2.1.1  Mean
The mean of MA(q) can be calculated as

$E[x(n)] = E\left[\sum_{k=0}^{q}b_kw(n-k)\right] = \sum_{k=0}^{q}b_kE[w(n-k)], \quad n \ge 0$  (7.148)

Because

$E[w(n-k)] = 0, \quad k = 0, 1, 2, \ldots$  (7.149)

we have

$E[x(n)] = 0, \quad n \ge 0$  (7.150)

7.5.2.1.2  Variance
Now, consider the variance of MA(q). We can write

$\sigma_X^2(n) = E[x^2(n)] = E\left[\left(\sum_{k=0}^{q}b_kw(n-k)\right)\left(\sum_{j=0}^{q}b_jw(n-j)\right)\right]$  (7.151)

Considering n ≥ q, we have

$\sigma_X^2(n) = \sum_{k=0}^{q}\sum_{j=0}^{q}b_kb_jE[w(n-k)w(n-j)] = \sigma^2\sum_{k=0}^{q}b_k^2, \quad n \ge q$  (7.152)

This is because the only nonzero terms of E[w(n − k)w(n − j)] are obtained when j = k. Also note that when n ≥ q, the variance of w(n) is the constant $\sigma^2$. In the case of 0 ≤ n < q, we must keep only the terms for which w(n − k) and w(n − j) are nonzero. In this case, the variance becomes a function of n, that is,

$\sigma_X^2(n) = \sigma^2\sum_{k=0}^{n}b_k^2, \quad 0 \le n < q$  (7.153)

Note that, in this case, we witness a transient process.

7.5.2.1.3  Autocorrelation Function
When k < 0 and j < 0, the autocorrelation function is identically zero, that is,

$R_X(j,k) \equiv 0, \quad k, j < 0$  (7.154)

Generally speaking, the value of the autocorrelation function depends on the time variables k + m and k, where both k and m are integers. That is,

$R_X(k+m,k) = E[x(k+m)x(k)] = E\left[\left(\sum_{i=0}^{q}b_iw(k+m-i)\right)\left(\sum_{j=0}^{q}b_jw(k-j)\right)\right], \quad k, m \ge 0$  (7.155)

In the case that k > q, and for m such that 0 ≤ m ≤ q, we can rewrite Equation 7.155 as

$R_X(k+m,k) = E\big[\{b_0w(k+m) + b_1w(k+m-1) + \cdots + b_qw(k+m-q)\}\{b_0w(k) + b_1w(k-1) + \cdots + b_qw(k-q)\}\big]$  (7.156)

Examine the resulting products, such as $E[\{b_0w(k+m)\}\{b_0w(k)\}]$ and $E[\{b_gw(k+m-g)\}\{b_hw(k-h)\}]$. It is seen that if m = 0, then k + m = k, so that

$E[\{b_0w(k+m)\}\{b_0w(k)\}] = \sigma^2b_0^2$

However, if m ≠ 0, then

$E[\{b_0w(k+m)\}\{b_0w(k)\}] = 0$

Similarly, we see that if k + m − g = k − h, or g = h + m, then

$E[\{b_gw(k+m-g)\}\{b_hw(k-h)\}] = \sigma^2b_gb_h = \sigma^2b_{h+m}b_h$  (7.157)

and if k + m − g ≠ k − h, or g ≠ h + m, then

$E[\{b_gw(k+m-g)\}\{b_hw(k-h)\}] = 0$  (7.158)

Therefore, Equation 7.156 can be replaced by

$R_X(k+m,k) = \sum_{h=0}^{q-m}b_hb_{h+m}E[w^2(k-h)] = \sigma^2\sum_{h=0}^{q-m}b_hb_{h+m}, \quad 0 \le m \le q$  (7.159)

Based on the same observation described in Equation 7.158, when m > q,

$R_X(k+m,k) = 0$  (7.160)

It is seen that when k > q, the autocorrelation function of the random process MA(q) is independent of the time point k and is only a function of the time lag m. Therefore, after time point q, because the corresponding mean is zero and is therefore constant, the process is weakly stationary.

7.5.2.2 Autoregressive Process AR(p)
For n ≥ 0, the excitation of the pth-order autoregressive process AR(p) is a stationary white noise. Let us focus on this case, namely,

$x(n) = -\sum_{k=1}^{p}a_kx(n-k) + b_0w(n), \quad n \ge 0$  (7.161)

Note that w(n) must satisfy Equation 7.145. Now, let us consider the mean, variance, and autocorrelation functions of the autoregressive process AR(p) given by Equation 7.161.

7.5.2.2.1  Mean
When n < 0, x(n) = 0, which leads to E[x(n − k)] = 0 for the starting steps; in addition, E[w(n)] = 0. Therefore, the mean of this autoregressive process can be obtained recursively as

$E[x(n)] = -\sum_{k=1}^{p}a_kE[x(n-k)] + b_0E[w(n)] = 0, \quad n \ge 0$  (7.162)

7.5.2.2.2  Variance
Because the mean is zero, the variance can be calculated as

$\sigma_X^2(n) = E[x^2(n)] = E\left[\left(-\sum_{k=1}^{p}a_kx(n-k) + b_0w(n)\right)\left(-\sum_{j=1}^{p}a_jx(n-j) + b_0w(n)\right)\right]$  (7.163)

Similar to the observation used to examine the products discussed previously, the term x(n − k) occurs before w(n), so x(n − k) is not a function of w(n); thus, E[x(n − k)w(n)] = 0. Therefore,

$E\left[\left(-\sum_{k=1}^{p}a_kx(n-k)\right)w(n)\right] = 0$  (7.164)

With the help of Equation 7.164, we further write

$\sigma_X^2(n) = E\left[\left(-\sum_{k=1}^{p}a_kx(n-k)\right)\left(-\sum_{j=1}^{p}a_jx(n-j)\right)\right] + b_0^2\sigma^2$  (7.165)

Furthermore, taking the mathematical expectation term by term, we write

$\sigma_X^2(n) = \sum_{k=1}^{p}\sum_{j=1}^{p}a_ka_jR_X(n-k,n-j) + b_0^2\sigma^2$  (7.166)

7.5.2.2.3  Autocorrelation
Generally speaking, even if a signal has zero initial conditions, the autocorrelation still has a transient process. This point can be realized by observing the existence of the transient solution $x_{pt}(t)$ in Chapter 6 (see Equation 6.100), although in that case x(t) is deterministic. We now consider the case in which the transient portion of the autocorrelation function has become negligible, that is, when n > p. In this case, first consider $R_X(n, n-1)$, given by

$R_X(n,n-1) = E[x(n)x(n-1)] = E\left[\left(-\sum_{k=1}^{p}a_kx(n-k) + b_0w(n)\right)x(n-1)\right], \quad n > p$  (7.167)

Because x(n − 1) is not a function of w(n), and w(n) is white noise, we have

$E[w(n)x(n-1)] = 0$  (7.168)

Therefore, Equation 7.167 can be replaced by

$R_X(n,n-1) = -\sum_{k=1}^{p}a_kR_X(n-1,n-k)$  (7.169)

Now, with the same idea, we can multiply both sides of Equation 7.161 by x(n − j), j = 2, 3, …, p, and take the corresponding mathematical expectation; that is, we will have

$R_X(n,n-j) = -\sum_{k=1}^{p}a_kR_X(n-j,n-k), \quad j = 2, 3, \ldots, p$  (7.170)

We should also consider the case of $R_X(n,n)$, which is equal to $\sigma_X^2(n)$. From Equation 7.166, it is seen that

$R_X(n,n) = -\sum_{j=1}^{p}a_jR_X(n,n-j) + b_0^2\sigma^2$  (7.171)

Equations 7.169 through 7.171 provide the formulae to calculate the autocorrelation function for time lags from 0 to p. Note that Equation 7.170 remains valid when j > p.
Now, when AR(p) has reached steady state, the corresponding autocorrelation $R_X(r,s)$ is only a function of the time lag

$j = r - s$  (7.172)

In this case, Equation 7.170 can be rewritten as

$R_X(j) = -\sum_{k=1}^{p}a_kR_X(k-j), \quad j = 1, 2, 3, \ldots, p$  (7.173)

Because autocorrelation functions satisfy $R_X(-j) = R_X(j)$, we can rewrite Equation 7.173 in the form of a matrix equation called the Yule-Walker equation (George Udny Yule [1871–1951], Gilbert Thomas Walker [1868–1958]), that is,

$\begin{bmatrix} R_X(0) & R_X(1) & \cdots & R_X(p-1) \\ R_X(1) & R_X(0) & \cdots & R_X(p-2) \\ \vdots & \vdots & & \vdots \\ R_X(p-1) & R_X(p-2) & \cdots & R_X(0) \end{bmatrix}\begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_p \end{bmatrix} = -\begin{bmatrix} R_X(1) \\ R_X(2) \\ \vdots \\ R_X(p) \end{bmatrix}$  (7.174)

where the minus sign on the right-hand side follows from the sign convention of Equation 7.143. If the autocorrelation functions $R_X(0), \ldots, R_X(p)$ can be obtained, we can solve for the coefficients $a_1, \ldots, a_p$ to determine the process AR(p).
On the other hand, if the coefficients $a_1, \ldots, a_p$ are known and the autocorrelation functions are to be obtained, we can use the following matrix equation:

$\begin{bmatrix} 1 & a_1 & a_2 & \cdots & a_{p-2} & a_{p-1} & a_p \\ a_1 & 1+a_2 & a_3 & \cdots & a_{p-1} & a_p & 0 \\ a_2 & a_1+a_3 & 1+a_4 & \cdots & a_p & 0 & 0 \\ \vdots & & & & & & \vdots \\ 0 & a_p & a_{p-1} & \cdots & a_3 & 1+a_2 & a_1 \\ a_p & a_{p-1} & a_{p-2} & \cdots & a_2 & a_1 & 1 \end{bmatrix}\begin{bmatrix} R_X(0) \\ R_X(1) \\ R_X(2) \\ \vdots \\ R_X(p-1) \\ R_X(p) \end{bmatrix} = \begin{bmatrix} b_0^2\sigma^2 \\ 0 \\ 0 \\ \vdots \\ 0 \\ 0 \end{bmatrix}$  (7.175)
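Equation 7.174 is the basis of a standard identification procedure: estimate the autocorrelations from data and solve for the AR coefficients. The following Python sketch (AR(2) coefficients are assumed) recovers them from a simulated record, with the minus sign of Equation 7.174 reflecting the book's sign convention.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.signal import lfilter

# Simulate an AR(2) record with assumed coefficients under the book's convention:
# x(n) = -a1 x(n-1) - a2 x(n-2) + b0 w(n)
rng = np.random.default_rng(6)
a_true = np.array([0.6, -0.2])
x = lfilter([1.0], np.r_[1.0, a_true], rng.normal(0.0, 1.0, 500_000))

# Sample autocorrelations R(0), ..., R(p), then solve R a = -r (Equation 7.174)
p = 2
R = np.array([np.mean(x[: x.size - j] * x[j:]) for j in range(p + 1)])
a_hat = np.linalg.solve(toeplitz(R[:p]), -R[1 : p + 1])
print(a_hat)                                  # close to a_true
```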

7.5.2.3 ARMA(p, q)
Now consider the process ARMA(p, q), which satisfies Equations 7.143 through 7.145. For convenience, Equation 7.145 is written as

$E[w^2(n)] = \sigma^2$  (7.176)

Let us consider the mean, variance, and autocorrelation functions.

7.5.2.3.1  Mean
Taking the mathematical expectation of Equation 7.143, we have

$E[x(n)] = -\sum_{k=1}^{p}a_kE[x(n-k)] + \sum_{k=0}^{q}b_kE[w(n-k)], \quad n \ge 0$  (7.177)

Because

E[w(n − k)] = 0

Equation 7.177 can be replaced by

E[ x (n)] = − ∑ a E[x(n − k )],


k =1
k n≥0

Assume zero initial conditions, we further have

E[x(n)] = 0,  n ≥ 0 (7.178)

7.5.2.3.2  Variance
Because the mean is zero, for the variance we can write

$\sigma_X^2(n) = E[x^2(n)] = R_X(n,n) = E\left[\left(-\sum_{k=1}^{p}a_kx(n-k) + \sum_{k=0}^{q}b_kw(n-k)\right)x(n)\right]$
$\quad = -\sum_{k=1}^{p}a_kR_X(n,n-k) + \sum_{k=0}^{q}b_kE[w(n-k)x(n)] = -\sum_{k=1}^{p}a_kR_X(n,n-k) + \sum_{k=0}^{q}b_kR_{XW}(n,n-k)$  (7.179)

7.5.2.3.3  Autocorrelation Function
Similar to the case of AR(p), the autocorrelation will have a transient process. We consider the case in which the transient portion of the autocorrelation function has become negligible, namely, n > p. First, consider $R_X(n, n-1)$, given by

$R_X(n,n-1) = E[x(n)x(n-1)] = E\left[\left(-\sum_{k=1}^{p}a_kx(n-k) + \sum_{k=0}^{q}b_kw(n-k)\right)x(n-1)\right]$
$\quad = -\sum_{k=1}^{p}a_kR_X(n-1,n-k) + \sum_{k=0}^{q}b_kE[w(n-k)x(n-1)]$  (7.180)

Similarly, considering E[x(n)x(n − j)], j = 2, 3, …, p, we have

$R_X(n,n-j) = E[x(n)x(n-j)] = -\sum_{k=1}^{p}a_kR_X(n-j,n-k) + \sum_{k=0}^{q}b_kE[w(n-k)x(n-j)], \quad j = 2, 3, \ldots, p$  (7.181)

Equations 7.180 and 7.181 provide formulae to calculate the autocorrelation function for time lags from 1 to p.
When ARMA(p, q) has reached steady state, the corresponding autocorrelation $R_X(r,s)$ is only a function of the time lag given by Equation 7.172. In this case, we replace Equations 7.180 and 7.181 by

$R_X(j) = -\sum_{k=1}^{p}a_kR_X(k-j) + \Phi_j(\mathbf{a},\mathbf{b}), \quad j = 1, 2, 3, \ldots, p$  (7.182)

where $\Phi_j(\mathbf{a},\mathbf{b})$ denotes the term given by the second summations in Equations 7.180 and 7.181, which is a nonlinear function of

$\mathbf{a} = [a_1, a_2, \ldots, a_p], \quad \mathbf{b} = [b_1, b_2, \ldots, b_q]$  (7.183)

In matrix form, consistent with Equation 7.182, we have

$-\begin{bmatrix} R_X(0) & R_X(1) & \cdots & R_X(p-1) \\ R_X(1) & R_X(0) & \cdots & R_X(p-2) \\ \vdots & \vdots & & \vdots \\ R_X(p-1) & R_X(p-2) & \cdots & R_X(0) \end{bmatrix}\begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_p \end{bmatrix} + \begin{bmatrix} \Phi_1(\mathbf{a},\mathbf{b}) \\ \Phi_2(\mathbf{a},\mathbf{b}) \\ \vdots \\ \Phi_p(\mathbf{a},\mathbf{b}) \end{bmatrix} = \begin{bmatrix} R_X(1) \\ R_X(2) \\ \vdots \\ R_X(p) \end{bmatrix}$  (7.184)

Note that Equation 7.184 is nonlinear.

Example 7.2

Find the mean, variance, and autocorrelation of the following process ARMA(1, 1):

$x(n) = -a_1x(n-1) + b_0w(n) + b_1w(n-1), \quad n \ge 0$

Mean
From Equation 7.178, E[x(n)] = 0.

Autocorrelation Function
With zero mean, the variance at time n is equal to the value of the autocorrelation $R_X(n,n)$. That is,

$\sigma_X^2(n) = E[x^2(n)] = R_X(n,n) = E\big[x(n)\{-a_1x(n-1) + b_0w(n) + b_1w(n-1)\}\big]$
$\quad = -a_1R_X(n,n-1) + b_0E[x(n)w(n)] + b_1E[x(n)w(n-1)]$

The right-hand side of the above equation contains two expectations that need to be evaluated. Consider the last one first. Substituting the expression for x(n), rearranging, and taking the mathematical expectation, we can write

$E[x(n)w(n-1)] = E\big[\{-a_1x(n-1) + b_0w(n) + b_1w(n-1)\}w(n-1)\big] = -a_1E[x(n-1)w(n-1)] + b_1\sigma^2$
$\quad = -a_1E\big[\{-a_1x(n-2) + b_0w(n-1) + b_1w(n-2)\}w(n-1)\big] + b_1\sigma^2 = (-a_1b_0 + b_1)\sigma^2$

This result holds because x(n − 2) is not a function of w(n − 1), so E[x(n − 2)w(n − 1)] = 0, and because w(n) is white noise, E[w(n − 2)w(n − 1)] = 0.
The other expectation can be calculated as

$E[x(n)w(n)] = E\big[\{-a_1x(n-1) + b_0w(n) + b_1w(n-1)\}w(n)\big] = b_0\sigma^2$

Therefore, the variance is

$\sigma_X^2(n) = -a_1R_X(n,n-1) + \left(b_0^2 - a_1b_0b_1 + b_1^2\right)\sigma^2$

The autocorrelation function can be written as

$R_X(n,n-1) = E[x(n)x(n-1)] = E\big[\{-a_1x(n-1) + b_0w(n) + b_1w(n-1)\}x(n-1)\big]$
$\quad = -a_1R_X(n-1,n-1) + b_0E[x(n-1)w(n)] + b_1E[x(n-1)w(n-1)]$

Again, because x(n − 1) is not a function of w(n), E[x(n − 1)w(n)] = 0. Considering

$E[x(n-1)w(n-1)] = E\big[w(n-1)\{-a_1x(n-2) + b_0w(n-1) + b_1w(n-2)\}\big] = b_0\sigma^2$

we thus have

$R_X(n,n-1) = -a_1R_X(n-1,n-1) + b_0b_1\sigma^2$

To obtain the autocorrelation function for the steady-state response, with time lags j = 0 and j = 1, we have

$R_X(0) = -a_1R_X(1) + \left(b_0^2 - a_1b_0b_1 + b_1^2\right)\sigma^2$

and

$R_X(1) = -a_1R_X(0) + b_0b_1\sigma^2$

Solving the above two equations, we can further write

$R_X(0) = \frac{\left(b_0^2 - 2a_1b_0b_1 + b_1^2\right)\sigma^2}{1 - a_1^2}$

and

$R_X(1) = \frac{\left(a_1^2b_0b_1 - a_1b_0^2 - a_1b_1^2 + b_0b_1\right)\sigma^2}{1 - a_1^2}$

In addition, let us consider time lag j = 2. Because E[x(n − 2)w(n)] = 0 and E[x(n − 2)w(n − 1)] = 0, we can write

$R_X(n,n-2) = E[x(n)x(n-2)] = E\big[\{-a_1x(n-1) + b_0w(n) + b_1w(n-1)\}x(n-2)\big] = -a_1R_X(n-1,n-2)$

Similarly, for any j ≥ 2 (that is, for any lag j > q), we can write

$R_X(n,n-j) = E[x(n)x(n-j)] = E\big[\{-a_1x(n-1) + b_0w(n) + b_1w(n-1)\}x(n-j)\big] = -a_1R_X(n-1,n-j)$

Furthermore, for the steady-state process ARMA(1, 1) with time lag greater than 1, we can write

$R_X(j) = (-a_1)^{j-1}R_X(1), \quad j \ge 2$

Variance
Because the mean of ARMA(1, 1) is zero, the steady-state variance can be obtained as

$\sigma_X^2 = R_X(0) = \frac{\left(b_0^2 - 2a_1b_0b_1 + b_1^2\right)\sigma^2}{1 - a_1^2}$

It is noted that when the orders p and q are greater than unity, it is difficult to write the autocorrelation function and variance in closed form.
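The closed forms of Example 7.2 can be checked numerically. The following Python sketch (coefficient values are assumed) simulates a long ARMA(1, 1) record and compares the sample moments with $R_X(0)$ and $R_X(1)$.

```python
import numpy as np
from scipy.signal import lfilter

# Numerical check of Example 7.2 (assumed coefficient values)
a1, b0, b1, sigma = 0.5, 1.0, 0.3, 1.0
rng = np.random.default_rng(7)
w = rng.normal(0.0, sigma, 2_000_000)
x = lfilter([b0, b1], [1.0, a1], w)           # x(n) = -a1 x(n-1) + b0 w(n) + b1 w(n-1)

R0 = (b0**2 - 2 * a1 * b0 * b1 + b1**2) * sigma**2 / (1 - a1**2)
R1 = (a1**2 * b0 * b1 - a1 * b0**2 - a1 * b1**2 + b0 * b1) * sigma**2 / (1 - a1**2)

print(x.var(), R0)                            # R_X(0): sample vs. closed form
print(np.mean(x[:-1] * x[1:]), R1)            # R_X(1): sample vs. closed form
```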

7.5.3 Analyses of Time Series in the Frequency Domain

7.5.3.1 Z-Transform
The Fourier transform maps a signal in the continuous time domain into the continuous frequency domain. To handle signals in the discrete time domain, the discrete Fourier transform (DFT) can be used (see Chapter 4). However, the DFT is not convenient for analytical work; it was developed for computational applications. An analytical transform is therefore needed for signals in the discrete time domain, such as the above-mentioned time series of the ARMA type. The z-transform maps a signal from the discrete time domain into the z-domain. From a mathematical viewpoint, the z-transform can be seen as a Laurent series, in which the sequence of numbers under consideration forms the Laurent expansion of an analytic function. The basic idea now known as the z-transform was known to Laplace and was reintroduced in 1947 by W. Hurewicz as a tractable way to solve linear, constant-coefficient difference equations. It was later dubbed "the z-transform" by Ragazzini and Zadeh in a sampled-data control group in 1952. In the following, we briefly introduce the z-transform without a detailed discussion of its main properties and conditions of existence.

7.5.3.2 Sampling of Signals
With the help of delta functions, a signal in the continuous time domain denoted by
x(t) can be sampled using the following treatment:

x d (t ) = ∑ x(k )δ(t − k∆t)


k =0
(7.185)

where Δt is the sampling time interval. The subscript d denotes xd(t) is in discreet
form.
Note that although xd(t) can only have nonzero values at the moment of sampling,
it is still in the continuous time domain.
Taking the Laplace transform of x_d(t), we have

$$X_d(s) = \mathcal{L}[x_d(t)] = \mathcal{L}\left[\sum_{k=0}^{\infty} x(k)\,\delta(t-k\Delta t)\right] = \sum_{k=0}^{\infty} x(k)\,\mathcal{L}[\delta(t-k\Delta t)] = \sum_{k=0}^{\infty} x(k)\,e^{-sk\Delta t} \tag{7.186}$$

Now, let a variable z be

$$z = e^{s\Delta t} \tag{7.187}$$

Substitution of Equation 7.187 into Equation 7.186 gives

$$X_d(s)\big|_{z=e^{s\Delta t}} = X(z) = \sum_{k=0}^{\infty} x(k)\,z^{-k} \tag{7.188}$$

In Equation 7.188, the series X(z) is a function of the variable z. When z = e^{sΔt} (especially z = e^{jω}), X(z) is referred to as the z-transform, denoted by

$$X(z) = \mathcal{Z}[x(t)] \tag{7.189}$$

Here, we omit the subscript d because X(z) is obviously a discrete series. The physical meaning of z will be discussed in Section 7.5.4.3.
The inverse z-transform, denoted by $\mathcal{Z}^{-1}[X(z)]$, can be calculated by

$$x(t) = \mathcal{Z}^{-1}[X(z)] = \frac{1}{2\pi}\int_{-\pi}^{\pi} X(e^{j\omega})\,e^{j\omega t}\,\mathrm{d}\omega \tag{7.190}$$
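As a small numerical aside (a sketch under stated assumptions, not from the text): for a finite-length sequence with Δt = 1, evaluating Equation 7.188 on the unit circle at ω = 2πm/N reproduces the DFT of Chapter 4, which is one way to see the connection between the two transforms.

```python
import numpy as np

x = np.array([1.0, 2.0, 0.5, -1.0])   # an arbitrary short sequence, Δt = 1
N, k = len(x), np.arange(len(x))

# X(z) = Σ x(k) z^{-k} evaluated at z = e^{j 2π m / N}
X_z = [np.sum(x * np.exp(-1j * 2 * np.pi * m * k / N)) for m in range(N)]

print(np.allclose(X_z, np.fft.fft(x)))   # True: z-transform on the unit circle = DFT
```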

7.5.3.3 Transfer Function of Discrete Time System
Now, we discuss the case in which an SDOF system is excited by a time series of white noise.
First, let f(n) and x(n) be the input and output of a general system with zero initial conditions. We can write

$$x(n) = -\sum_{k=1}^{p} a_k x(n-k) + \sum_{k=0}^{q} b_k f(n-k), \quad n \ge 0,\; p > q \tag{7.191}$$

Taking the z-transform on both sides of Equation 7.191, we have

$$X(z) = -\sum_{k=1}^{p} a_k z^{-k}X(z) + \sum_{k=0}^{q} b_k z^{-k}F(z), \quad n \ge 0,\; p > q \tag{7.192}$$

where

$$F(z) = \mathcal{Z}[f(t)] \tag{7.193}$$

is the z-transform of the input forcing function f(t), and X(z) is the z-transform of the output given by Equation 7.192. From Equation 7.192, we can write the transfer function in the z-domain as

$$H(z) = \frac{X(z)}{F(z)} = \frac{\displaystyle\sum_{k=0}^{q} b_k z^{-k}}{1 + \displaystyle\sum_{k=1}^{p} a_k z^{-k}}, \quad n \ge 0,\; p > q \tag{7.194}$$

From Equation 7.194, it is seen that the transfer function H(z) described in the z-domain is a rational function of z⁻¹. If the excitation is white noise, then the response will be a random process ARMA(p, q). Taking the inverse z-transform of the transfer function, we can obtain the unit impulse response function, namely,

$$h(n) = \mathcal{Z}^{-1}[H(z)] \tag{7.195}$$

If the coefficients b_k are zero except b₀, that is,

$$b_k = 0, \quad k = 1, 2, \ldots, q \tag{7.196}$$

then the transfer function takes the following form:

$$H(z) = \frac{X(z)}{F(z)} = \frac{b_0}{1 + \displaystyle\sum_{k=1}^{p} a_k z^{-k}}, \quad n \ge 0,\; p > q \tag{7.197}$$

From Equation 7.197, we see that if the input to the system is white noise, then the output is an autoregressive process AR(p). Another interesting case is when

$$a_k = 0, \quad k = 1, 2, \ldots, p \tag{7.198}$$

Then, the transfer function takes the following form:

$$H(z) = \sum_{k=0}^{q} b_k z^{-k} \tag{7.199}$$

In this case, from Equation 7.199, if the input to the system is white noise, then the output is a moving-average process MA(q).

7.5.3.4 PSD Functions
7.5.3.4.1  PSD Function of MA(q)
Based on the transfer function given by Equation 7.199, we can calculate the auto-PSD function of the process MA(q). That is,

$$S_X(\omega) = S_X(z)\big|_{z=e^{j\omega}} = H(z)S_F(z)H(z^{-1})\big|_{z=e^{j\omega}} = \left(\sum_{k=0}^{q} b_k z^{-k}\right)(\sigma^2)\left(\sum_{k=0}^{q} b_k z^{k}\right)\Bigg|_{z=e^{j\omega}} = \left|\sum_{k=0}^{q} b_k e^{-jk\omega}\right|^2\sigma^2 \tag{7.200}$$

7.5.3.4.2  PSD Function of AR(p)
Here, suppose the process AR(p) has input f(n) with zero mean, whose autocorrelation function can be written as

$$R_F(k+m, k) = \sigma^2\delta(m) \tag{7.201}$$

Based on the transfer function given by Equation 7.197, we have

$$S_X(\omega) = S_X(z)\big|_{z=e^{j\omega}} = H(z)S_F(z)H(z^{-1})\big|_{z=e^{j\omega}} = (\sigma^2)\left[\frac{b_0}{1+\sum_{k=1}^{p} a_k z^{-k}}\cdot\frac{b_0}{1+\sum_{k=1}^{p} a_k z^{k}}\right]_{z=e^{j\omega}} = \frac{b_0^2\sigma^2}{\left|1+\displaystyle\sum_{k=1}^{p} a_k e^{-jk\omega}\right|^2} \tag{7.202}$$

7.5.3.4.3  PSD Function of ARMA(p, q)
Assume the input f(n) is zero mean and its autocorrelation function is given as

$$R_F(k+m, k) = \sigma^2\delta(m) \tag{7.203}$$

The auto-PSD function of ARMA(p, q) is (see Equation 7.194)

$$S_X(\omega) = S_X(z)\big|_{z=e^{j\omega}} = H(z)S_F(z)H(z^{-1})\big|_{z=e^{j\omega}} = (\sigma^2)\left[\frac{\sum_{k=0}^{q} b_k z^{-k}}{1+\sum_{k=1}^{p} a_k z^{-k}}\cdot\frac{\sum_{k=0}^{q} b_k z^{k}}{1+\sum_{k=1}^{p} a_k z^{k}}\right]_{z=e^{j\omega}} = \frac{\left|\displaystyle\sum_{k=0}^{q} b_k e^{-jk\omega}\right|^2}{\left|1+\displaystyle\sum_{k=1}^{p} a_k e^{-jk\omega}\right|^2}\,\sigma^2 \tag{7.204}$$
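Equation 7.204 can be verified numerically. Below is a sketch using scipy (the ARMA coefficients and noise level are illustrative assumptions, not from the text): the theoretical PSD |H(e^{jω})|²σ², evaluated with scipy.signal.freqz, is compared against a Welch estimate from a simulated series; the factor 2 converts the two-sided result to the one-sided density returned by welch.

```python
import numpy as np
from scipy import signal

a = [1.0, -0.6, 0.25]        # denominator 1 + a1 z^-1 + a2 z^-2
b = [1.0, 0.5]               # numerator b0 + b1 z^-1
sigma = 1.0

rng = np.random.default_rng(1)
w = rng.normal(0.0, sigma, 400_000)
x = signal.lfilter(b, a, w)                          # ARMA(2, 1) driven by white noise

f, S_welch = signal.welch(x, fs=1.0, nperseg=4096)   # one-sided PSD estimate
_, H = signal.freqz(b, a, worN=f, fs=1.0)            # H(e^{j2πf})
S_theory = 2.0 * np.abs(H)**2 * sigma**2             # one-sided form of Eq. 7.204

print(np.median(S_welch[1:-1] / S_theory[1:-1]))     # close to 1.0
```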

7.5.4 Time Series of SDOF Systems

7.5.4.1 Difference Equations
It is known that we can use difference equations to approximate differential equations. With sufficiently small time intervals Δt, the velocity may be written as

$$\dot{x}(n) = \frac{x(n+1) - x(n)}{\Delta t} \tag{7.205}$$

and the acceleration can be written as

$$\ddot{x}(n) = \frac{\dot{x}(n+1) - \dot{x}(n)}{\Delta t} = \frac{x(n+2) - 2x(n+1) + x(n)}{\Delta t^2} \tag{7.206}$$

Suppose the excitation of an SDOF system can be written as f(n). Substitution of Equations 7.205 and 7.206 into Equation 7.1 yields

$$m\,\frac{x(n+2) - 2x(n+1) + x(n)}{\Delta t^2} + c\,\frac{x(n+1) - x(n)}{\Delta t} + k\,x(n) = f(n) \tag{7.207}$$

Dividing both sides of Equation 7.207 by m/Δt² and rearranging the resulting equation, we can write

$$x(n+2) = -\left(-2 + \frac{c\Delta t}{m}\right)x(n+1) - \left(1 - \frac{c\Delta t}{m} + \frac{k\Delta t^2}{m}\right)x(n) + \frac{\Delta t^2}{m}\,f(n) \tag{7.208}$$

Equation 7.207 is the discretized governing equation of motion of an SDOF system. Similarly, for the case of ground excitation, we have

$$m\ddot{x} + c\dot{x} + kx = -m\ddot{x}_g$$

in which the variable x is the relative displacement and x_g is the ground displacement.
Substitution of Equations 7.205 and 7.206 into the above equation yields

$$m\,\frac{x(n+2) - 2x(n+1) + x(n)}{\Delta t^2} + c\,\frac{x(n+1) - x(n)}{\Delta t} + k\,x(n) = -m\,\frac{x_g(n+2) - 2x_g(n+1) + x_g(n)}{\Delta t^2}$$

or

$$x(n+2) = -\left(-2 + \frac{c\Delta t}{m}\right)x(n+1) - \left(1 - \frac{c\Delta t}{m} + \frac{k\Delta t^2}{m}\right)x(n) - \big[x_g(n+2) - 2x_g(n+1) + x_g(n)\big] \tag{7.209}$$

Furthermore, for ground excitation, we also have

$$m\ddot{x} + c\dot{x} + kx = c\dot{x}_g + kx_g$$

in which the variable x is the absolute displacement and x_g is again the ground displacement.
Substitution of Equations 7.205 and 7.206 into the above equation yields

$$m\,\frac{x(n+2) - 2x(n+1) + x(n)}{\Delta t^2} + c\,\frac{x(n+1) - x(n)}{\Delta t} + k\,x(n) = c\,\frac{x_g(n+1) - x_g(n)}{\Delta t} + k\,x_g(n)$$

or

$$x(n+2) = -\left(-2 + \frac{c\Delta t}{m}\right)x(n+1) - \left(1 - \frac{c\Delta t}{m} + \frac{k\Delta t^2}{m}\right)x(n) + \frac{c\Delta t}{m}\,x_g(n+1) + \left(-\frac{c\Delta t}{m} + \frac{k\Delta t^2}{m}\right)x_g(n) \tag{7.210}$$

7.5.4.2 ARMA Models
The above difference equations can be written in the form of typical ARMA models. That is, for the case of the SDOF system excited by the forcing function f(n), Equation 7.208 can be written as

$$x(n) = -a_1x(n-1) - a_2x(n-2) + b_2f(n-2) \tag{7.211}$$

with

$$a_1 = -2 + \frac{c\Delta t}{m} \tag{7.212a}$$

$$a_2 = 1 - \frac{c\Delta t}{m} + \frac{k\Delta t^2}{m} \tag{7.212b}$$

and

$$b_2 = \frac{\Delta t^2}{m} \tag{7.212c}$$

In Equation 7.208, without loss of generality, we have used n to replace n + 2; the same shift will also be used in the following examples.
Similarly, Equation 7.209, which describes the excitation due to ground acceleration, can be written as

$$x(n) = -a_1x(n-1) - a_2x(n-2) + b_0x_g(n) + b_1x_g(n-1) + b_2x_g(n-2) \tag{7.213}$$

where a₁ and a₂ are given by Equations 7.212a and 7.212b. However,

$$b_0 = -1 \tag{7.214a}$$

$$b_1 = 2 \tag{7.214b}$$

and

$$b_2 = -1 \tag{7.214c}$$

Furthermore, Equation 7.210, describing the excitation transmitted through the ground damping and restoring forces, can be written as

$$x(n) = -a_1x(n-1) - a_2x(n-2) + b_1x_g(n-1) + b_2x_g(n-2) \tag{7.215}$$

where a₁ and a₂ are given by Equations 7.212a and 7.212b, and

$$b_1 = \frac{c\Delta t}{m} \tag{7.216a}$$

and

$$b_2 = -\frac{c\Delta t}{m} + \frac{k\Delta t^2}{m} \tag{7.216b}$$

It is also seen that

$$\mathcal{Z}[x(n-k)] = X(z)\,z^{-k} \tag{7.217}$$
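As a quick check of these difference-equation models, the following sketch (with illustrative m, c, k, Δt, and test force, which are not from the text) marches Equation 7.211 and compares the result with a standard ODE solution of the same SDOF system.

```python
import numpy as np
from scipy.integrate import odeint

m, c, k, dt = 1.0, 4.0, 1000.0, 0.0005
a1 = -2 + c * dt / m                    # Eq. 7.212a
a2 = 1 - c * dt / m + k * dt**2 / m     # Eq. 7.212b
b2 = dt**2 / m                          # Eq. 7.212c

N = 4000
t = np.arange(N) * dt
f = np.sin(20.0 * t)                    # a deterministic test force

x = np.zeros(N)
for n in range(2, N):                   # Eq. 7.211
    x[n] = -a1 * x[n - 1] - a2 * x[n - 2] + b2 * f[n - 2]

def rhs(y, t):
    # state y = [displacement, velocity] of m*x'' + c*x' + k*x = sin(20 t)
    return [y[1], (np.sin(20.0 * t) - c * y[1] - k * y[0]) / m]

x_ode = odeint(rhs, [0.0, 0.0], t)[:, 0]
print(np.max(np.abs(x - x_ode)))        # small when Δt is sufficiently small
```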

7.5.4.3 Transfer Functions
We now consider the transfer function based on the model for the excitation f(n). Suppose the forcing function is white noise. Taking the z-transform of Equation 7.211 gives

$$X(z) = -a_1X(z)z^{-1} - a_2X(z)z^{-2} + b_2F(z)z^{-2} \tag{7.218}$$

The transfer function can then be written as

$$H(z) = \frac{X(z)}{F(z)} = \frac{b_2z^{-2}}{1 + a_1z^{-1} + a_2z^{-2}} = \frac{b_2}{z^2 + a_1z + a_2} \tag{7.219}$$

Substitution of Equations 7.212 into Equation 7.219 yields

$$H(z) = \frac{\Delta t^2/m}{z^2 + \left(-2 + \dfrac{c\Delta t}{m}\right)z + 1 - \dfrac{c\Delta t}{m} + \dfrac{k\Delta t^2}{m}} = \frac{\Delta t^2}{m}\cdot\frac{1}{z^2 + (-2 + 2\zeta\omega_n\Delta t)z + 1 - 2\zeta\omega_n\Delta t + \omega_n^2\Delta t^2} \tag{7.220}$$

Consider the poles of H(z), which are the zeroes of the denominator in Equation 7.220. That is, let

$$z^2 + (-2 + 2\zeta\omega_n\Delta t)z + 1 - 2\zeta\omega_n\Delta t + \omega_n^2\Delta t^2 = 0 \tag{7.221}$$



which has solutions given by

$$z = 1 - \zeta\omega_n\Delta t \pm j\big(1-\zeta^2\big)^{1/2}\omega_n\Delta t \tag{7.222}$$

Note that Δt can be sufficiently small. As seen in Equation 7.187, z = e^{sΔt}; by letting

$$e^{s\Delta t} = 1 - \zeta\omega_n\Delta t \pm j\big(1-\zeta^2\big)^{1/2}\omega_n\Delta t \tag{7.223}$$

for sufficiently small Δt, we have

$$s = -\zeta\omega_n \pm j\big(1-\zeta^2\big)^{1/2}\omega_n \tag{7.224}$$

The above shows that the Laplace variable s, evaluated at the poles of the transfer function, is equivalent to the eigenvalues of the SDOF system. Furthermore, we can prove that the transfer function using the Laplace variable s, H(s), and the transfer function using the variable z, H(z), have the same values, provided Δt is sufficiently small. That is,

$$H(z) \xrightarrow{\Delta t \to 0} H(s) \tag{7.225}$$

For the cases expressed by Equations 7.73 and 7.75, we can make the same observations. Therefore, we can use time series of the ARMA type to describe an SDOF system. In Chapter 8, we will further discuss the use of transfer functions for MDOF systems.

Example 7.3

Given an SDOF system with m = 1, c = 4, and k = 1000, plot the transfer functions H(s) as well as H(z) based on Equation 7.220 with Δt = 0.0005, 0.0010, and 0.0050 s.

[Figure 7.11 plots the absolute amplitudes versus frequency (rad/s) of the exact transfer function and of H(z) for time intervals of 0.0005, 0.0010, and 0.0050 s.]

FIGURE 7.11  Exact and approximated transfer functions.



The results are plotted in Figure 7.11. From these curves, it is seen that when Δt is sufficiently small, H(z) is a good approximation of the exact H(s). However, when Δt = 0.005 s, which is often used in practical measurements, we will have larger errors, especially in the resonant region.
We note that in this example the damping ratio is 6.3%. When the damping ratio becomes larger or different natural frequencies are used (or both), the situation is not improved. Therefore, sufficiently small time intervals need to be carefully chosen when using time series to directly analyze an SDOF system.
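Example 7.3 can be reproduced with a short script. The sketch below (assuming Python with numpy and matplotlib) evaluates the exact H(jω) = 1/(k − mω² + jcω) and the discrete H(z) of Equation 7.220 at z = e^{jωΔt}:

```python
import numpy as np
import matplotlib.pyplot as plt

m, c, k = 1.0, 4.0, 1000.0
w = np.linspace(0.1, 250.0, 2000)                 # rad/s

plt.plot(w, np.abs(1.0 / (k - m * w**2 + 1j * c * w)), label="Exact transfer function")

for dt in (0.0005, 0.0010, 0.0050):               # Eq. 7.220 evaluated at z = e^{jωΔt}
    z = np.exp(1j * w * dt)
    Hz = (dt**2 / m) / (z**2 + (-2 + c*dt/m) * z + 1 - c*dt/m + k*dt**2/m)
    plt.plot(w, np.abs(Hz), label=f"Time interval = {dt:.4f}")

plt.xlabel("Frequency (rad/s)")
plt.ylabel("Absolute amplitude")
plt.legend()
plt.show()
```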

7.5.4.4 Stability of Systems
7.5.4.4.1  General Description
In Chapter 6, we showed that for an SDOF system, we need c ≥ 0 and k > 0 to achieve stable vibrations. The condition c ≥ 0 is equivalent to having a nonpositive real part of the system's eigenvalues, namely,

$$\mathrm{Re}(\lambda) \le 0 \tag{7.226}$$

Now, we examine the ARMA model to establish a criterion for the system's stability. Recall the definitions of ARMA(p, q) and AR(p), respectively described in Equations 7.143 and 7.161. It can be seen that both are difference equations with constant coefficients.
For convenience, let us define a lag operator (backshift operator) B such that

$$B[x(n)] = x(n-1) \tag{7.227}$$

It is seen that

$$B^k[x(n)] = x(n-k) \tag{7.228}$$

Rewrite an ARMA(p, q) model as

$$\sum_{k=0}^{p} a_k x(n-k) = \sum_{k=0}^{q} b_k w(n-k), \quad a_0 = 1,\; n \ge 0,\; p > q \tag{7.229}$$

Considering the corresponding homogeneous difference equation, we write

$$\sum_{k=0}^{p} a_k x(n-k) = 0, \quad a_0 = 1,\; n \ge 0 \tag{7.230}$$

It can be seen that the homogeneous difference equation for an AR(p) model can also be described by Equation 7.230. With the help of the lag operator, Equation 7.230 can be further written as

$$\sum_{k=0}^{p} a_k x(n-k) = A_p(B)[x(n)] = 0, \quad a_0 = 1 \tag{7.231}$$

where A_p(B) is an operator polynomial, which will be discussed shortly after the introduction of the characteristic equation. In addition, the corresponding characteristic equation of ARMA(p, q) and AR(p) is written as (recall the characteristic equation described in Equation 6.39)

$$\lambda^p + a_1\lambda^{p-1} + \cdots + a_{p-1}\lambda + a_p = \sum_{k=0}^{p} a_k\lambda^{p-k} = 0, \quad a_0 = 1 \tag{7.232}$$

where the solutions λ of the characteristic equation are the eigenvalues of ARMA(p, q) or AR(p). By using these eigenvalues, we can write the operator polynomial as

$$A_p(B) = (1 - \lambda_1B)(1 - \lambda_2B)\cdots(1 - \lambda_pB) \tag{7.233}$$

Letting A_p(B) = 0, we can find the relationship between the coefficients a_k and the eigenvalues λ_i. That is, these eigenvalues must satisfy the factorization of the operator polynomial set equal to zero.

7.5.4.4.2  Stability of the Process AR(2)
Now considering the special case of the homogeneous equation of AR(2), we have

$$x(n) + a_1x(n-1) + a_2x(n-2) = 0 \tag{7.234}$$

The factorization of the operator polynomial is

$$1 + a_1B + a_2B^2 = (1 - \lambda_1B)(1 - \lambda_2B) = 0 \tag{7.235}$$

Therefore, we have

$$\lambda_1 + \lambda_2 = -a_1 \quad\text{and}\quad \lambda_1\lambda_2 = a_2 \tag{7.236}$$

The characteristic equation is

$$\lambda^2 + a_1\lambda + a_2 = 0 \tag{7.237}$$

Thus, we can solve Equation 7.237 to obtain the eigenvalues

$$\lambda_{1,2} = \frac{-a_1 \pm \sqrt{a_1^2 - 4a_2}}{2} \tag{7.238}$$

It can be proven that if the following criterion is satisfied,

$$|\lambda_i| < 1 \tag{7.239}$$

the system is stable.
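In practice, the criterion of Equation 7.239 is checked directly from the roots of the characteristic polynomial. A minimal sketch (the coefficient values are illustrative assumptions):

```python
import numpy as np

def ar2_is_stable(a1, a2):
    lam = np.roots([1.0, a1, a2])            # roots of λ² + a1 λ + a2 = 0 (Eq. 7.237)
    return bool(np.all(np.abs(lam) < 1.0))   # Eq. 7.239

print(ar2_is_stable(-1.2, 0.35))   # True:  eigenvalues 0.5 and 0.7
print(ar2_is_stable(-2.5, 1.0))    # False: an eigenvalue equals 2.0
```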



Problems
1. A system is shown in Figure P7.1 with white noise excitation f(t).
   a. Find the equation of motion for this system.
   b. What is the transfer function of this system?
   c. Find the PSD matrix.
   d. Find the RMS value of the response of x₁. (Hint: block B is massless.)
2. A white noise force is applied on the first mass of the system given in Figure P7.2. Find its governing equation and transfer function. What are the transfer functions measured at the first and the second mass? Find the standard deviation of the response x₁.

[Figure P7.1 shows a massless block B with displacements x₁ and x₂ and applied force f(t).]

FIGURE P7.1

[Figure P7.2 shows two masses m₁ and m₂ connected through a spring k, with force f applied.]

FIGURE P7.2

3. A system shown in Figure P7.2 is excited by ground white noise motion. Denote the stiffness connecting m₁ and the ground to be k₁; there is an additional stiffness k₂ connecting m₁ and m₂. Write the equation of motion. If mass m₁ is sufficiently small (m₁ = 0), what is the transfer function between the ground displacement and the relative displacement? What are the power spectral density functions of the relative displacement and the absolute acceleration of the responses measured at the second mass?
4. An SDOF system with mass m, stiffness k, and damping ratio ζ vibrates due to a white noise excitation with level W₀.
   a. Compute the RMS absolute acceleration by applying a quasistatic dynamic load.
   b. Find the relative displacement and calculate the RMS restoring force, and with your calculations, compute the RMS absolute acceleration again. Then, explain any difference in the results.
5. A white noise force F(t) with level 15 N²/Hz is applied to the system shown in Figure P7.3. The mass is 3.5 kg. Design a steel rod to find the minimum b so that the strength R = 360 MPa will be greater than three times the RMS stress. The damping ratio is 0.05. The modulus of elasticity of steel is 207 GPa. Determine the resulting natural frequency as well.
6. An SDOF system is shown in Figure P7.4 with mass = 0.259 lb·s²/in., stiffness k = 63.0 lb/in., and damping ratio ζ = 0.064, excited by white noise acceleration = 0.02 g²/Hz with a duration of 10 s (g = 386 in/s²).

[Figure P7.3 shows a rod of length L = 0.5 m and square cross section b × b carrying a block B, with force F(t) applied at the tip.]

FIGURE P7.3

[Figure P7.4 shows an SDOF system (mass m, spring k, damper c) under ground acceleration ẍ_g, with forces f_m = −mẍ, f_k = kx, and f_c = cẋ.]

FIGURE P7.4

   a. Find the RMS of the absolute acceleration of the mass.
   b. The response of mass m is a Gaussian narrow band. Assume all the peaks and valleys are independent. Find the median maximum acceleration of the mass.
   c. Calculate the RMS of the relative displacement.
   d. Compute the median maximum ground force.
7. A camera is mounted on a base with white noise motion; the base acceleration is 0.04 g²/Hz. The camera has a natural frequency of 27.4 Hz and a damping ratio of 12%. In a sufficiently long period with more than 1000 peaks, the expected maximum peak will be approximately four times the RMS. Estimate the peaks of the absolute acceleration and relative displacement of the camera.
8. Find the mean, variance, and autocorrelation functions of the stationary processes MA(1), MA(2), and MA(3), all of which have zero-mean white noise input w(n). Suppose k > q.
9. Calculate the mean, variance, and autocorrelation functions of the stationary process AR(1), given by

x(n) = −a₁x(n − 1) + b₀w(n),  n ≥ 0

10. Prove that AR(2), described by

x(n) + a₁x(n − 1) + a₂x(n − 2) = 0

is stable when |λ_i| < 1, where

$$\lambda_{1,2} = \frac{-a_1 \pm \sqrt{a_1^2 - 4a_2}}{2}$$
8  Random Vibration of MDOF Linear Systems
The random responses of multi-degree-of-freedom (MDOF) systems are discussed
in this chapter. General references can be found in Clough and Penzien (1993),
Wirsching et al. (2006), Cheng (2001), Cheng and Truman (2001), Chopra (2003),
Inman (2008), and Liang et al. (2012).

8.1 Modeling
In real-world applications, modeling is often the logical starting point in gaining an
understanding of a system. Therefore, in the study of MDOF systems, similar to the
previous chapter about SDOF systems, a model is first discussed.

8.1.1 Background
Many vibration systems are too complex to be modeled as SDOF systems. For
instance, a moving car will encounter vertical bumps as well as swaying in the horizontal direction. One cannot use the measure of vertical bumping to determine the degree of rotational rocking, because they are responses to independent events. In this case, the vertical motion and the horizontal rotation are described by distinct degrees of freedom, represented by two independent displacements of the front and rear wheels.
An MDOF system with n independent displacements can have n natural frequencies and n linearly independent vibration shape functions.

8.1.1.1 Basic Assumptions
We consider first the following assumptions:

1. Linear system with time-invariant physical parameters
   Assume the MDOF system is linear. The previously discussed requirements of a linear system also apply to MDOF vibrations.
2. Forcing function
   For purposes of simplification, the excitation forcing function may be assumed to be stationary and, in many cases, ergodic as well as Gaussian.
   a. Stationary
      Stationarity occurs when the first and second moments are independent of time t.
   b. Ergodic
      The system responds impartially to the initial states because of the significant lapse in time from the initial state. In other words, the system "forgets" its initial conditions.


   c. Gaussian
      This results in linear combinations that yield normal distributions.

8.1.1.2 Fundamental Approaches
One of the following approaches may be used in dealing with an MDOF system:

1. Direct method (direct integration)
2. Modal analysis

An MDOF system can be decoupled into n second-order SDOF vibrations, or it can be decoupled into 2n first-order SDOF subsystems. The decoupled vibrators are often referred to as vibration modes. Each mode will have a single natural frequency and a single damping ratio, equivalent to an SDOF system. Moreover, in each mode there exist n displacement locations; the ratios of these displacements are fixed and can be described by an n × 1 vector, referred to as a mode shape.
Normal mode. If the mode shape vector can be written as real-valued, then it is a normal mode. If an MDOF system has real modes only, then the following conditions are mutually necessary and sufficient:

• The damping is proportional
• The n-DOF system decouples into n normal modes in modal space
• Each mode is a second-order SDOF vibration system

Complex mode. If the mode shape vector cannot be written as real-valued, then it is a complex mode. In this case, the following conditions are mutually necessary and sufficient:

• The damping is nonproportional
• The n-DOF system decouples into 2n modes in state space
• At least two modes are complex-valued and are of the first-order system

In each case, there is modal superposition: the total solution is a linear combination of the modal solutions.

8.1.2 Equation of Motion
The modeling of an MDOF system is examined in the following.

8.1.2.1 Physical Model
Figure 8.1 shows a typical model of a 2-DOF system.
The equilibrium of forces on the first mass is

$$\sum F_1 = m_1\ddot{x}_1 + c_1\dot{x}_1 + c_2(\dot{x}_1 - \dot{x}_2) + k_1x_1 + k_2(x_1 - x_2) - f_1 = 0 \tag{8.1}$$

[Figure 8.1 shows two masses m₁ and m₂ connected by spring–damper pairs (k₁, c₁) and (k₂, c₂), with displacements x₁, x₂ and applied forces f₁, f₂.]

FIGURE 8.1  A 2-DOF system.

The balance of forces on the second mass is

$$\sum F_2 = m_2\ddot{x}_2 + c_2(\dot{x}_2 - \dot{x}_1) + k_2(x_2 - x_1) - f_2 = 0 \tag{8.2}$$

Rearranging both equations yields

$$m_1\ddot{x}_1 + (c_1 + c_2)\dot{x}_1 + (-c_2)\dot{x}_2 + (k_1 + k_2)x_1 + (-k_2)x_2 = f_1 \tag{8.3}$$

$$m_2\ddot{x}_2 + (-c_2)\dot{x}_1 + (c_2)\dot{x}_2 + (-k_2)x_1 + (k_2)x_2 = f_2 \tag{8.4}$$

In matrix form, this is generally written as

$$[m]\{\ddot{x}\} + [c]\{\dot{x}\} + [k]\{x\} = \{f\} \tag{8.5a}$$

or

$$\mathbf{M}\ddot{\mathbf{x}}(t) + \mathbf{C}\dot{\mathbf{x}}(t) + \mathbf{K}\mathbf{x}(t) = \mathbf{f}(t) \tag{8.5b}$$

Here, for the example shown in Figure 8.1, M is the mass matrix,

$$\mathbf{M} = \begin{bmatrix} m_1 & 0 \\ 0 & m_2 \end{bmatrix} \tag{8.6a}$$

In general, M is defined as

$$\mathbf{M} = \begin{bmatrix} m_{11} & m_{12} & \cdots & m_{1n} \\ m_{21} & m_{22} & \cdots & m_{2n} \\ & & \cdots & \\ m_{n1} & m_{n2} & \cdots & m_{nn} \end{bmatrix} \tag{8.6b}$$

C is the damping matrix; for the example shown in Figure 8.1,

$$\mathbf{C} = \begin{bmatrix} c_1 + c_2 & -c_2 \\ -c_2 & c_2 \end{bmatrix} \tag{8.7a}$$

Likewise, C is defined as

$$\mathbf{C} = \begin{bmatrix} c_{11} & c_{12} & \cdots & c_{1n} \\ c_{21} & c_{22} & \cdots & c_{2n} \\ & & \cdots & \\ c_{n1} & c_{n2} & \cdots & c_{nn} \end{bmatrix} \tag{8.7b}$$

K is the stiffness matrix; for the example shown in Figure 8.1,

$$\mathbf{K} = \begin{bmatrix} k_1 + k_2 & -k_2 \\ -k_2 & k_2 \end{bmatrix} \tag{8.8a}$$

In the same way, K is defined as

$$\mathbf{K} = \begin{bmatrix} k_{11} & k_{12} & \cdots & k_{1n} \\ k_{21} & k_{22} & \cdots & k_{2n} \\ & & \cdots & \\ k_{n1} & k_{n2} & \cdots & k_{nn} \end{bmatrix} \tag{8.8b}$$

As a general rule, x corresponds to the displacement vector,

$$\mathbf{x} = \begin{Bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{Bmatrix} \tag{8.9}$$

Lastly, f signifies the force vector,

$$\mathbf{f} = \begin{Bmatrix} f_1 \\ f_2 \\ \vdots \\ f_n \end{Bmatrix} \tag{8.10}$$

8.1.2.2 Stiffness Matrix
Beginning with the stiffness matrix, the physical parameters are considered.

8.1.2.2.1  Determination of K Matrix (Force Method, Displacement Method)
Consider static force, defined by

$$\mathbf{f} = \mathbf{K}\mathbf{x} \tag{8.11}$$

The jth row can be written as

$$f_j = k_{j1}x_1 + k_{j2}x_2 + \cdots + k_{ji}x_i + \cdots + k_{jn}x_n \tag{8.12}$$

From the above equation, this can be perceived as

$$k_{ji} = f_j\big|_{x_i = 1,\; x_p = 0\,(p \ne i)} \tag{8.13}$$

Equation 8.13 can then be used to construct a stiffness matrix.

8.1.2.2.2  Property of Stiffness Matrix


The following are the major properties of a stiffness matrix.

8.1.2.2.2.1   Symmetry  The stiffness matrix is symmetric

kji = kij (8.14a)

KT = K (8.14b)

8.1.2.2.2.2   Full Rank  The stiffness matrix is of full rank; equivalently, the following statements hold true:

1. rank(K) = n (8.15)
2. K is nonsingular
3. K⁻¹ exists

8.1.2.2.2.3   Positive Definite  The stiffness matrix is positive definite, that is,

K > 0 (8.16)

in which the “>” symbol for a matrix is used to denote the matrix as being positive
definite, meaning all eigenvalues are greater than zero. This is denoted by

eig(K) > 0 (8.17)



8.1.2.2.2.4   Flexibility Matrix  The inverse of the stiffness matrix is the flexibility matrix,

K⁻¹ = S (8.18)

The flexibility matrix is symmetric,

Sᵀ = S (8.19)

and the flexibility matrix is of full rank,

rank(S) = n (8.20)

8.1.2.3 Mass and Damping Matrices


8.1.2.3.1  Mass Matrix
M is the mass coefficient matrix with the following characteristics:

1. M is full rank, where

rank(M) = n (8.21)

2. M is positive definite, where

M > 0 (8.22)

3. M is symmetric, where

MT = M (8.23)

8.1.2.3.1.1   Lumped Mass

M = diag(mi) (8.24)

8.1.2.3.1.2   Consistent Mass  Exists when M and K share the same “shape func-
tion” D, namely,

K = DKΔ DT (8.25)

and

M = DMΔ DT (8.26)

In Equations 8.25 and 8.26, D is an n × n square matrix describing the displace-


ment “shape functions.” K Δ and MΔ are diagonal matrices.

8.1.2.3.2  Damping Matrix


C is the damping coefficient matrix, which exhibits the following characteristics:

1. C is not necessarily full rank

rank(C) ≤ n (8.27)

2. C is positive semidefinite

C ≥ 0 (8.28)

Here, the symbol "≥" for a matrix is used to denote the matrix as being positive semidefinite, that is, its eigenvalues are all greater than or equal to zero. This is denoted by

eig(C) ≥ 0 (8.29)

where eig(.) stands for the operation of calculating the eigenvalues of the matrix (.), which will be discussed in Sections 8.4.1 and 8.4.5 for proportionally and nonproportionally damped systems, respectively (see also Wilkinson 1965, The Algebraic Eigenvalue Problem).
C is symmetric,

CT = C (8.30)

8.1.2.3.2.1   Proportional Damping (Caughey Criterion)  If the following matri-


ces commute, then the system is proportionally damped. This is referred to as the
Caughey criterion (Thomas K. Caughey, 1927–2004)

CM−1K = KM−1C (8.31)

8.1.2.3.2.2   Rayleigh Damping  The following proportional combination of M and K is one of the most commonly used forms of proportional damping, where α and β are scalars:

C = αM + βK (8.32)

8.1.2.3.2.3   Nonproportional Damping  If the Caughey criterion does not hold,


namely,

CM−1K ≠ KM−1C (8.33)

then the system is nonproportionally (nonclassically) damped.



Example 8.1

A system has mass, damping, and stiffness matrices given by

$$\mathbf{M} = \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix}, \quad \mathbf{C} = \begin{bmatrix} 2 & -1 \\ -1 & 1 \end{bmatrix}, \quad\text{and}\quad \mathbf{K} = \begin{bmatrix} 30 & -10 \\ -10 & 50 \end{bmatrix}$$

Determine whether the system is proportionally damped.

$$\mathbf{C}\mathbf{M}^{-1}\mathbf{K} = \begin{bmatrix} 65 & -45 \\ -35 & 35 \end{bmatrix}, \quad \mathbf{K}\mathbf{M}^{-1}\mathbf{C} = \begin{bmatrix} 65 & -35 \\ -45 & 35 \end{bmatrix}$$

so that CM⁻¹K ≠ KM⁻¹C and the system is nonproportionally damped.
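The check in Example 8.1 is a one-liner with numpy; the following sketch simply repeats the computation above:

```python
import numpy as np

M = np.array([[1.0, 0.0], [0.0, 2.0]])
C = np.array([[2.0, -1.0], [-1.0, 1.0]])
K = np.array([[30.0, -10.0], [-10.0, 50.0]])

Minv = np.linalg.inv(M)
A, B = C @ Minv @ K, K @ Minv @ C      # Caughey criterion, Eq. 8.31
print(A)                               # [[ 65. -45.] [-35.  35.]]
print(B)                               # [[ 65. -35.] [-45.  35.]]
print(np.allclose(A, B))               # False -> nonproportionally damped
```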

8.1.3 Impulse Response and Transfer Functions


Consider impulse forcing functions being applied at a certain location of an MDOF
system.

8.1.3.1 Scalar Impulse Response Function and Transfer Function


Figure 8.2a shows an input at location j and the response at location i. Denote hij(t) as
a unit impulse response function for coordinate i due to a unit force at j. Furthermore,
denote Hij(ω) as a transfer function, with the ratio of output, Xi(ω), and input Fj(ω). This
is illustrated in Figure 8.2b. From Figure 8.2b, the transfer function can be defined as

Hij(ω) = Xi(ω)/Fj(ω) (8.34)

Note that Xi(ω) is caused by Fj(ω) only.


Hij(ω) and hij(t) are a Fourier transform pair, denoted by

Hij(ω) ⇔ hij(t) (8.35)

[Figure 8.2 sketches the input at the jth location and the output at the ith location: (a) input f_j = δ(t) and output h_ij(t) in the time domain; (b) input F_j(ω), output X_i(ω), and transfer function H(ω) in the frequency domain.]

FIGURE 8.2  Relationship between input and output. (a) The time domain. (b) The frequency domain.

8.1.3.2 Impulse Response Matrix and Transfer Function Matrix
The input–output unit impulse responses can be collected and arranged in the h matrix as

$$\mathbf{h}(t) = \begin{bmatrix} h_{11}(t) & h_{12}(t) & \cdots & h_{1n}(t) \\ h_{21}(t) & h_{22}(t) & \cdots & h_{2n}(t) \\ & & \cdots & \\ h_{n1}(t) & h_{n2}(t) & \cdots & h_{nn}(t) \end{bmatrix} \tag{8.36}$$

It is seen that the h matrix is symmetric,

$$\mathbf{h}(t) = \mathbf{h}(t)^T \tag{8.37}$$

The Fourier transform matrix is denoted by H as

$$\mathbf{H}(\omega) = \begin{bmatrix} H_{11}(\omega) & H_{12}(\omega) & \cdots & H_{1n}(\omega) \\ H_{21}(\omega) & H_{22}(\omega) & \cdots & H_{2n}(\omega) \\ & & \cdots & \\ H_{n1}(\omega) & H_{n2}(\omega) & \cdots & H_{nn}(\omega) \end{bmatrix} \tag{8.38}$$

which is also symmetric,

$$\mathbf{H}(\omega) = \mathbf{H}(\omega)^T \tag{8.39}$$

8.1.3.3 Construction of Transfer Functions


Let the forcing function vector be of a harmonic excitation written as

f(t) = f0 ejωt (8.40)

Furthermore, let us denote the response vector as

x(t) = x0 ejωt (8.41)

Substitution of Equations 8.40 and 8.41 into Equation 8.5b results in

V(ω)x0 = f0 (8.42)

where

V(ω) = −ω2M + jωC + K (8.43)



V(ω) is referred to as the impedance matrix. The impedance matrix is of full rank
and symmetric. Its inverse matrix is denoted as

V−1(ω) = H(ω) (8.44)

Here, H(ω) is the transfer function matrix (frequency response function matrix),
where
x0 = H(ω) f0 (8.45)

Example 8.2

A system has mass, damping, and stiffness matrices M, C, and

$$\mathbf{K} = \begin{bmatrix} 30 & -10 \\ -10 & 50 \end{bmatrix}$$

If one measures the amplitude of the displacement as x₀ = [1 0.5]ᵀ, find the vector of the forcing function.

f₀ = H⁻¹(0) x₀ = V(0) x₀ = K x₀ = [25 15]ᵀ

8.1.3.4 Principal Axes of Structures
In many cases, the responses of a structure can be physically decoupled into two perpendicular directions, say, the east–west and south–north directions. In this case, the system is said to have principal axes, that is, an x-axis and a y-axis. Input along the x-axis will not produce any response along the y-axis, and vice versa. Decoupling structural responses along principal axes can reduce the computational burden, so that fewer degrees of freedom need to be considered.
Note that when a structure is nonproportionally damped, it will not have principal axes. However, even if a structure is proportionally damped, it may still have no principal axes (Liang and Lee 2002, 2003). For structures without principal axes, one cannot decouple the responses into X or Y directions.

8.2 Direct Method for Determining Responses
The direct method is used when the number of degrees of freedom is not significantly large. In the following, we will find the statistical properties by ensemble averaging; where the input forcing functions are ergodic, temporal averages can also be used, and this will be required in specific cases.

8.2.1 Expression of Response
For simplicity, consider that the output is measured at a single location only. Denote the response at that location due to the ith input Fi(t) as Xi(t), as seen in Figure 8.3.

[Figure 8.3 sketches a multiple-input, single-output system: inputs F₁(t), F₂(t), …, F_n(t) are applied at n locations, and the output at the location of interest is the sum of the terms F_i(t) * h_i(t).]

FIGURE 8.3  Multiple-input, single-output.

$$X_i(t) = F_i(t) * h_i(t) \tag{8.46}$$

In Equation 8.46, h_i(t) is the unit impulse response at the location of interest due to the specific force F_i(t). Given that the system is linear, the total response is the sum of the X_i(t). That is,

$$X(t) = \sum_{i=1}^{n} X_i(t) \tag{8.47}$$

Furthermore, substitution of Equation 8.46 into Equation 8.47 will yield

$$X(t) = \sum_{i=1}^{n} F_i(t) * h_i(t) = \sum_{i=1}^{n}\left[\int_{0}^{t} F_i(t-\tau)h_i(\tau)\,\mathrm{d}\tau\right] = \sum_{i=1}^{n}\left[\int_{-\infty}^{\infty} F_i(t-\tau)h_i(\tau)\,\mathrm{d}\tau\right] \tag{8.48}$$

In Figure 8.3, the total solution is the sum of all n terms of $\int_{-\infty}^{\infty} F_i(t-\tau)h_i(\tau)\,\mathrm{d}\tau$. However, these terms are not calculated individually. In the following, how to compute the corresponding numerical characteristics will be described.

8.2.2 Mean Values
The mean value of the responses is first considered for multiple input and single
output, as shown in Figure 8.3. The case of multiple input–multiple output will be
further described later.

8.2.2.1 Single Coordinate
If only a single response at a certain location is considered and the corresponding
integration can be carried out, its mean can be calculated as

$$\mu_X = E[X(t)] = E\left[\sum_{i=1}^{n}\int_{-\infty}^{\infty} F_i(t-\tau)h_i(\tau)\,\mathrm{d}\tau\right] \tag{8.49}$$

Here, X(t) is the response and F_i(t) is the stationary excitation at the ith location, with a mean value of

$$E[F_i(t)] = \mu_{F_i} \tag{8.50}$$

Thus,

$$\mu_X = \sum_{i=1}^{n}\left[\int_{-\infty}^{\infty} E[F_i(t-\tau)]\,h_i(\tau)\,\mathrm{d}\tau\right] = \sum_{i=1}^{n}\mu_{F_i}\left[\int_{-\infty}^{\infty} h_i(\tau)\,\mathrm{d}\tau\right] \tag{8.51}$$

Finally,

$$\mu_X = \sum_{i=1}^{n}\big\{\mu_{F_i}H_i(0)\big\} \tag{8.52}$$

8.2.2.2 Multiple Coordinates
Now, the responses at all n coordinates are considered. In this case, we have multiple inputs and multiple outputs.

8.2.2.2.1  Vector of Means
In the case of multiple coordinates, the mean values are collected into a vector. The mean at the first location will be the first element of the vector, determined by

$$\mu_{X_1} = \sum_{i=1}^{n}\big\{\mu_{F_i}H_{1i}(0)\big\} \tag{8.53}$$

The second location is determined by

$$\mu_{X_2} = \sum_{i=1}^{n}\big\{\mu_{F_i}H_{2i}(0)\big\} \tag{8.54}$$

Similarly, this is repeated to the nth location, which can be calculated by

$$\mu_{X_n} = \sum_{i=1}^{n}\big\{\mu_{F_i}H_{ni}(0)\big\} \tag{8.55}$$

In Equations 8.53 through 8.55, the term H_{ji}(0) is the transfer function of the ith input and jth output when ω = 0.
The mean values, written in matrix form, are represented by

$$\boldsymbol{\mu}_X = \mathbf{H}(0)\boldsymbol{\mu}_F \tag{8.56}$$

where the matrix H(0) is defined in Equation 8.38 with ω = 0,

$$\boldsymbol{\mu}_X = \begin{Bmatrix} \mu_{X_1} \\ \mu_{X_2} \\ \vdots \\ \mu_{X_n} \end{Bmatrix} \tag{8.57}$$

and

$$\boldsymbol{\mu}_F = \begin{Bmatrix} \mu_{F_1} \\ \mu_{F_2} \\ \vdots \\ \mu_{F_n} \end{Bmatrix} \tag{8.58}$$

Example 8.3

A system has mass, damping, and stiffness matrices M, C, and

$$\mathbf{K} = \begin{bmatrix} 30 & -10 \\ -10 & 50 \end{bmatrix}$$

The mean of the input is μ_F = [0  −5]ᵀ; find the vector of the output means.

μ_X = H(0)μ_F = K⁻¹μ_F = [−0.036  −0.107]ᵀ

It is seen that, although F₁(t) is zero mean, neither element of the output mean vector is zero.
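Example 8.3 can be verified numerically; recall that H(0) = V(0)⁻¹ = K⁻¹ for this system. A minimal sketch:

```python
import numpy as np

K = np.array([[30.0, -10.0], [-10.0, 50.0]])
mu_F = np.array([0.0, -5.0])

mu_X = np.linalg.solve(K, mu_F)    # μ_X = K^{-1} μ_F, Eq. 8.56 at ω = 0
print(mu_X)                        # [-0.0357 -0.1071]
```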

8.2.2.2.2  Zero Mean


For a zero-mean forcing process,

μ_F = 0 (8.59)

where 0 is an n × 1 null vector. Correspondingly, it is seen that

μ_X = 0 (8.60)

In general, a zero-mean response can always be achieved. That is, if the forcing functions are not of zero mean, then a zero-mean response can be obtained using

f(t) = f_non(t) − μ_F (8.61)

In the above equation, f(t) and f_non(t) are, respectively, the zero-mean and nonzero-mean random process vectors of the forcing functions.
The corresponding response is

x(t) = x_non(t) − μ_X (8.62)

In this instance, x(t) and x_non(t) are, respectively, the zero-mean and nonzero-mean random process vectors of the responses. Note that the corresponding forcing function vector is

$$\mathbf{f}(t) = \begin{Bmatrix} F_1(t) \\ F_2(t) \\ \vdots \\ F_n(t) \end{Bmatrix} \tag{8.63}$$

and the response vector is

$$\mathbf{x}(t) = \begin{Bmatrix} X_1(t) \\ X_2(t) \\ \vdots \\ X_n(t) \end{Bmatrix} \tag{8.64}$$

Both vectors in Equations 8.63 and 8.64 are random processes; namely, as long as at least one of the elements F_i(t) or X_j(t) is random, both f(t) and x(t) are random.

8.2.3 Correlation Functions
Next, the correlation functions are considered. The autocorrelation function of the response measured at a single location due to multiple inputs is written as

$$\begin{aligned} R_X(\tau) &= E[X(t)X(t+\tau)] = E\left[\sum_{i=1}^{m} X_i(t)\sum_{j=1}^{m} X_j(t+\tau)\right] \\ &= E\left[\sum_{i=1}^{m}\sum_{j=1}^{m}\int_{-\infty}^{\infty} h_i(\xi)F_i(t-\xi)\,\mathrm{d}\xi\int_{-\infty}^{\infty} h_j(\eta)F_j(t+\tau-\eta)\,\mathrm{d}\eta\right] \\ &= \sum_{i=1}^{m}\sum_{j=1}^{m}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h_i(\xi)h_j(\eta)\,E[F_i(t-\xi)F_j(t+\tau-\eta)]\,\mathrm{d}\xi\,\mathrm{d}\eta \end{aligned} \tag{8.65}$$

Here, m ≤ n; the forces are applied at m locations, which may be fewer than the n response coordinates.
Given that the forcing functions are stationary, the cross-correlation function of F_i(t) and F_j(t) can be denoted as

$$R_{F_iF_j}(\tau) = E[F_i(t)F_j(t+\tau)] \tag{8.66}$$

Substitution of Equation 8.66 into Equation 8.65 results in

$$R_X(\tau) = \sum_{i=1}^{m}\sum_{j=1}^{m}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h_i(\xi)h_j(\eta)\,R_{F_iF_j}(\xi - \eta + \tau)\,\mathrm{d}\xi\,\mathrm{d}\eta \tag{8.67}$$

8.2.4 Spectral Density Function of Response

8.2.4.1 Fourier Transforms of f(t) and x(t)
Suppose the Fourier transforms of f(t) and x(t) exist. One may consider how and under what conditions these will occur.
Suppose

$$\mathbf{F}(\omega) = \begin{Bmatrix} F_1(\omega) \\ F_2(\omega) \\ \vdots \\ F_n(\omega) \end{Bmatrix} \tag{8.68}$$

with a response vector of

$$\mathbf{X}(\omega) = \begin{Bmatrix} X_1(\omega) \\ X_2(\omega) \\ \vdots \\ X_n(\omega) \end{Bmatrix} \tag{8.69}$$

8.2.4.2 Power Spectral Density Function
The cross-power spectral density (PSD) function matrix of the input can be written as

$$\mathbf{S}_F(\omega) = \begin{bmatrix} S_{F_1}(\omega) & S_{F_1F_2}(\omega) & \cdots & S_{F_1F_n}(\omega) \\ S_{F_2F_1}(\omega) & S_{F_2}(\omega) & & S_{F_2F_n}(\omega) \\ & & \cdots & \\ S_{F_nF_1}(\omega) & S_{F_nF_2}(\omega) & & S_{F_n}(\omega) \end{bmatrix} = \mathcal{F}^{-1}\begin{bmatrix} R_{F_1}(\tau) & R_{F_1F_2}(\tau) & \cdots & R_{F_1F_n}(\tau) \\ R_{F_2F_1}(\tau) & R_{F_2}(\tau) & & R_{F_2F_n}(\tau) \\ & & \cdots & \\ R_{F_nF_1}(\tau) & R_{F_nF_2}(\tau) & & R_{F_n}(\tau) \end{bmatrix} \tag{8.70}$$

where $S_{F_jF_k}(\omega)$ and $R_{F_jF_k}(\tau)$ are, respectively, the cross-PSD and cross-correlation functions of the input forcing functions F_j and F_k. If j = k, we obtain the auto-PSD and autocorrelation functions.
Unlike for SDOF systems, expect off-diagonal entries in Equation 8.70; the off-diagonal entries contain the cross-PSDs among the input locations. In this case,

$$S_{F_iF_j}(\omega) = \lim_{T\to\infty}\frac{1}{2\pi T}\sum_{p}\big[F_{ip}(\omega,T)^*F_{jp}(\omega,T)\big] \tag{8.71}$$

where F_{ip}(ω, T) is the Fourier transform of the pth measurement of the forcing function applied at location i, with a measurement length of T,

$$S_{F_jF_k}(\omega) = cE[F_j(\omega)^*F_k(\omega)] \tag{8.72}$$

Now, considering the cross-PSD function matrix of the output, the jkth entry can be written as

$$S_{X_jX_k}(\omega) = cE[X_j(\omega)^*X_k(\omega)] \tag{8.73}$$

In Equations 8.72 and 8.73, c is a constant.


Consider the equation

$$\mathbf{X}(\omega) = \mathbf{H}(\omega)\mathbf{F}(\omega) \tag{8.74}$$

Postmultiplying by the Hermitian transposes on both sides of Equation 8.74 results in

$$\mathbf{X}(\omega)\mathbf{X}(\omega)^H = \mathbf{H}(\omega)\mathbf{F}(\omega)[\mathbf{H}(\omega)\mathbf{F}(\omega)]^H \tag{8.75}$$

Here, the Hermitian transpose of a complex-valued matrix A is given by

$$\mathbf{A}^H = (\mathbf{A}^*)^T = (\mathbf{A}^T)^* \tag{8.76}$$

Next, take the expected value of both sides and multiply by the constant c:

$$cE[\mathbf{X}(\omega)\mathbf{X}(\omega)^H] = cE\{\mathbf{H}(\omega)\mathbf{F}(\omega)[\mathbf{H}(\omega)^*\mathbf{F}(\omega)^*]^T\} = \mathbf{H}(\omega)\,cE\{\mathbf{F}(\omega)\mathbf{F}(\omega)^H\}\,\mathbf{H}(\omega)^H \tag{8.77}$$

With Equations 8.72 and 8.73, the result becomes

$$\mathbf{S}_X(\omega) = \mathbf{H}(\omega)\mathbf{S}_F(\omega)\mathbf{H}(\omega)^H \tag{8.78}$$



Here, S_X(ω) is the cross-PSD matrix, given by

$$\mathbf{S}_X(\omega) = \begin{bmatrix} S_{X_1}(\omega) & S_{X_1X_2}(\omega) & \cdots & S_{X_1X_n}(\omega) \\ S_{X_2X_1}(\omega) & S_{X_2}(\omega) & & S_{X_2X_n}(\omega) \\ & & \cdots & \\ S_{X_nX_1}(\omega) & S_{X_nX_2}(\omega) & & S_{X_n}(\omega) \end{bmatrix} = \mathcal{F}^{-1}\begin{bmatrix} R_{X_1}(\tau) & R_{X_1X_2}(\tau) & \cdots & R_{X_1X_n}(\tau) \\ R_{X_2X_1}(\tau) & R_{X_2}(\tau) & & R_{X_2X_n}(\tau) \\ & & \cdots & \\ R_{X_nX_1}(\tau) & R_{X_nX_2}(\tau) & & R_{X_n}(\tau) \end{bmatrix} \tag{8.79}$$

Both cross-PSD matrices S_F(ω) and S_X(ω), given by Equations 8.70 and 8.79, are useful, because we will study not only the auto-PSDs S_{F_i} and S_{X_i} but also the relationships of the signals between locations j and k.

8.2.4.3 Mean Square Response
The mean square response at the ith location is

$$E\big[x_i^2(t)\big] = X_{ri}^2 = \int_{-\infty}^{\infty} S_{X_i}(\omega)\,\mathrm{d}\omega \tag{8.80}$$

where $X_{ri}^2$ is a constant.

8.2.4.4 Variance
When x_i(t) is of zero mean, then

$$\sigma_{X_i}^2 = X_{ri}^2 \tag{8.81}$$

8.2.4.5 Covariance

$$\sigma_{X_iX_j}(0) = \int_{-\infty}^{\infty} S_{X_iX_j}(\omega)\,\mathrm{d}\omega \tag{8.82}$$

8.2.5 Single Response Variable: Special Cases

8.2.5.1 Single Input
As an example, if only a single input f_k(t) exists, then

$$S_{F_iF_j}(\omega) = \begin{cases} S_{F_k}(\omega), & i = j = k \\ 0, & \text{elsewhere} \end{cases} \tag{8.83}$$

$$S_X(\omega) = H_k^*(\omega)H_k(\omega)S_{F_k}(\omega) \tag{8.84}$$

or

$$S_X(\omega) = |H_k(\omega)|^2S_{F_k}(\omega) \tag{8.85}$$

8.2.5.2 Uncorrelated Input
If all inputs f_i(t) are uncorrelated, then

$$S_{F_iF_j}(\omega) = \begin{cases} S_{F_k}(\omega), & i = j = k = 1, 2, \ldots, n \\ 0, & \text{elsewhere} \end{cases} \tag{8.86}$$

and

$$S_X(\omega) = \sum_{k=1}^{n}\big[|H_k(\omega)|^2S_{F_k}(\omega)\big] \tag{8.87}$$

$$\sigma_X^2 = \sum_{i=1}^{n}\sigma_i^2 \tag{8.88}$$

In the above, σ_i² is the variance of x(t) attributed solely to f_i(t).
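A numerical sketch of Equations 8.80, 8.87, and 8.88 follows (the transfer function and white-noise levels are illustrative assumptions; for simplicity, both uncorrelated inputs act through the same SDOF transfer function):

```python
import numpy as np

m, c, k = 1.0, 4.0, 1000.0
w = np.linspace(-200.0, 200.0, 400_001)       # rad/s grid
dw = w[1] - w[0]

H = 1.0 / (k - m * w**2 + 1j * c * w)         # H_k(ω) for both inputs
S_levels = [0.5, 2.0]                         # two-sided white-noise levels

# σ_i² = ∫ |H|² S_i dω  (Eq. 8.80 applied to each term of Eq. 8.87)
var_parts = [np.sum(np.abs(H)**2 * S) * dw for S in S_levels]
print(var_parts, sum(var_parts))              # σ_X² = Σ σ_i², Eq. 8.88
```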

8.3 Normal Mode Method
When the number of degrees of freedom of an MDOF system is considerably large, the above computation can be extensive. In this case, modal analysis can greatly reduce the computational burden while maintaining reasonable accuracy. First, we consider the proportionally damped system and the normal mode. Although such systems rarely exist in the real world, when the damping ratio is comparatively small, in the region of less than 5%, the normal mode method can provide reliable approximations (see Cheng [2001] and Chopra [2003]).

8.3.1 Proportional Damping
As mentioned previously, a system is proportionally damped if and only if the Caughey criterion is satisfied. In this section, the mathematical and physical meaning of the Caughey criterion will be discussed first.

8.3.1.1 Essence of Caughey Criterion


Recall the Caughey criterion:

CM−1K = KM−1C (8.89)



Multiplying both sides of the equation by M⁻¹ gives

[M⁻¹C][M⁻¹K] = [M⁻¹K][M⁻¹C] (8.90)

Equation 8.90 states that the two matrices [M⁻¹C] and [M⁻¹K] commute, which holds if and only if they share an identical eigenvector matrix Φ. The matrices [M⁻¹C] and [M⁻¹K] are, respectively, referred to as the generalized damping and stiffness matrices. The eigenvector matrix Φ contains the mode shapes of the M-C-K system, as discussed further in Section 8.3.5.4.
Physically, if the distributions of the individual dampers and springs are identical and the amounts of individual damping and stiffness are proportional, then the generalized damping and stiffness matrices share identical eigenvectors. In qualitative terms, this means that both damping and stiffness are "regularly" distributed.

8.3.1.2 Monic System
To obtain the generalized damping and stiffness matrices, the monic system must be generated first.

8.3.1.2.1  Concept of Monic Mass
By multiplying both sides of Equation 8.5b by M⁻¹, the homogeneous form can be considered:

$$\mathbf{I}\ddot{\mathbf{x}}(t) + \mathbf{M}^{-1}\mathbf{C}\dot{\mathbf{x}}(t) + \mathbf{M}^{-1}\mathbf{K}\mathbf{x}(t) = \mathbf{0} \tag{8.91}$$

in which the mass matrix becomes the identity matrix. The result is referred to as a monic system.
Note that, in Equation 8.91, the monic MDOF vibration system has the newly formed generalized damping matrix, M⁻¹C, and stiffness matrix, M⁻¹K.

8.3.1.2.2  Solution of Monic Systems
Similar to previous examples, the semidefinite method is used. Assume that

$$\mathbf{x}(t) = \boldsymbol{\phi}e^{\lambda t} \tag{8.92}$$

The characteristic equation is written as

$$\lambda^2\boldsymbol{\phi} + \lambda\mathbf{M}^{-1}\mathbf{C}\boldsymbol{\phi} + \mathbf{M}^{-1}\mathbf{K}\boldsymbol{\phi} = \mathbf{0} \tag{8.93}$$

or

$$[\mathbf{I}\lambda^2 + \lambda\mathbf{M}^{-1}\mathbf{C} + \mathbf{M}^{-1}\mathbf{K}]\boldsymbol{\phi} = \mathbf{0} \tag{8.94}$$

Because ϕ ≠ 0 (otherwise the solution is trivial), we require

$$\det[\mathbf{I}\lambda^2 + \lambda\mathbf{M}^{-1}\mathbf{C} + \mathbf{M}^{-1}\mathbf{K}] = 0 \tag{8.95}$$



The matrix [Iλ² + λM⁻¹C + M⁻¹K] is referred to as a λ matrix, whose determinant is a polynomial of order 2n; its 2n solutions form the n complex conjugate pairs λ_i and λ_i*. In general, 2n corresponding vectors, ϕ_i and ϕ_i*, will also exist. However, if the Caughey criterion is satisfied, then the vectors ϕ_i will be real-valued. In this case, they are referred to as normal mode shapes.
Consider the SDOF system:

$$\lambda = -\zeta\omega_n + j\sqrt{1-\zeta^2}\,\omega_n \tag{8.96}$$

Here, we further have

$$\lambda_i = -\zeta_i\omega_{ni} + j\sqrt{1-\zeta_i^2}\,\omega_{ni}, \quad i = 1, \ldots, n \tag{8.97a}$$

and

$$\lambda_i^* = -\zeta_i\omega_{ni} - j\sqrt{1-\zeta_i^2}\,\omega_{ni}, \quad i = 1, \ldots, n \tag{8.97b}$$

The variables λ_i, ζ_i, and ω_{ni} are, respectively, referred to as the eigenvalue, damping ratio, and natural frequency of the ith normal mode. The triple <ω_{ni}, ζ_i, ϕ_i> is called the ith normal modal parameter. The phrase "normal" means that the eigenvalues are calculated from the proportionally damped system.

8.3.2 Eigen-Problems
In Equations 8.97a and 8.97b, the damping ratios and the natural frequencies are parts of the eigenvalues. It is of importance that these eigen-problems be further explored.

8.3.2.1 Undamped System
First, consider an undamped system, where

C = 0 (8.98)

and

ζ_i = 0 (8.99)

Thus,

λ_i = jω_{ni} (8.100)

Furthermore,

$$-\omega_{ni}^2\boldsymbol{\phi}_i + \mathbf{M}^{-1}\mathbf{K}\boldsymbol{\phi}_i = \mathbf{0}$$


or

$$\omega_{ni}^2\boldsymbol{\phi}_i = \mathbf{M}^{-1}\mathbf{K}\boldsymbol{\phi}_i \tag{8.101}$$

Equation 8.101 is referred to as the eigen-problem of the matrix M⁻¹K, with scalar eigenvalue ω_{ni}² and eigenvector ϕ_i: a square matrix multiplied by its eigenvector equals the corresponding scalar eigenvalue multiplied by the same eigenvector.

8.3.2.2 Underdamped Systems
Similarly, M⁻¹C is also a square matrix and will have eigenvectors and eigenvalues. Because M⁻¹C and M⁻¹K share the same eigenvectors, the corresponding eigenvalue can be denoted as 2ζ_iω_{ni}, with

$$2\zeta_i\omega_{ni}\boldsymbol{\phi}_i = \mathbf{M}^{-1}\mathbf{C}\boldsymbol{\phi}_i, \quad i = 1, \ldots, n \tag{8.102}$$

Example 8.4

A system has mass, damping, and stiffness matrices given by

$$\mathbf{M} = \begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix}, \quad \mathbf{C} = \begin{bmatrix} 2 & -1 \\ -1 & 1 \end{bmatrix}, \quad\text{and}\quad \mathbf{K} = \begin{bmatrix} 30 & -10 \\ -10 & 30 \end{bmatrix}$$

Check whether this system is proportionally damped and find the corresponding eigenvalues and eigenvectors.
It is seen that CM⁻¹K = KM⁻¹C, so the system is proportionally damped.
From Equation 8.101, it is seen that ω_{n1} = 3.4917 and ω_{n2} = 5.7278. The corresponding eigenvectors are ϕ₁ = [0.4896 0.8719]ᵀ and ϕ₂ = [0.9628 −0.2703]ᵀ.
From Equation 8.102, we see that M⁻¹Cϕ₁ = [0.1073 0.1911]ᵀ. Dividing the first element by 0.4896, namely, the first element of ϕ₁, results in 2ζ₁ω_{n1} = 0.2192 (the same result is found by dividing the second element by 0.8719). Furthermore, the damping ratio is ζ₁ = 0.2192/(2ω_{n1}) = 0.0314. Similarly, the damping ratio ζ₂ = 0.1991. Therefore, the eigenvalues are

$$\lambda_1 = -\zeta_1\omega_{n1} \pm j\sqrt{1-\zeta_1^2}\,\omega_{n1} = -0.1096 \pm 3.4900j$$

and

$$\lambda_2 = -1.1404 \pm 5.6131j$$
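Example 8.4 is easily reproduced with numpy. The sketch below computes the undamped modal parameters from Equation 8.101 and the damping ratios from Equation 8.102 (note that numpy may return the eigenpairs in a different order or with different signs):

```python
import numpy as np

M = np.array([[1.0, 0.0], [0.0, 2.0]])
C = np.array([[2.0, -1.0], [-1.0, 1.0]])
K = np.array([[30.0, -10.0], [-10.0, 30.0]])

Minv = np.linalg.inv(M)
w2, Phi = np.linalg.eig(Minv @ K)             # Eq. 8.101: eigenvalues are ω_ni²
wn = np.sqrt(w2)

# Eq. 8.102: the same eigenvectors diagonalize M⁻¹C with eigenvalues 2ζ_i ω_ni
two_zeta_wn = np.diag(np.linalg.inv(Phi) @ (Minv @ C) @ Phi)
zeta = two_zeta_wn / (2.0 * wn)
lam = -zeta * wn + 1j * np.sqrt(1.0 - zeta**2) * wn

print(wn)       # [3.4917 5.7278] (order may differ)
print(zeta)     # [0.0314 0.1991]
print(lam)      # -0.1096+3.4900j  -1.1404+5.6131j
```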

8.3.3 Orthogonal Conditions
The eigenvector can be further used to decouple the MDOF system. Before complet-
ing this calculation, first, consider why this is possible. The answer to this is based
on the orthogonal conditions.

8.3.3.1 Weighted Orthogonality
Equation 8.101 can be rewritten as

$$\omega_{ni}^2\mathbf{M}\boldsymbol{\phi}_i = \mathbf{K}\boldsymbol{\phi}_i \tag{8.103}$$

which is referred to as the ith generalized eigen-equation.
Multiplying both sides of Equation 8.103 by $\boldsymbol{\phi}_j^T$ results in

$$\omega_{ni}^2\boldsymbol{\phi}_j^T\mathbf{M}\boldsymbol{\phi}_i = \boldsymbol{\phi}_j^T\mathbf{K}\boldsymbol{\phi}_i \tag{8.104}$$

Further multiplying both sides of the jth generalized eigen-equation by $\boldsymbol{\phi}_i^T$ will yield

$$\omega_{nj}^2\boldsymbol{\phi}_i^T\mathbf{M}\boldsymbol{\phi}_j = \boldsymbol{\phi}_i^T\mathbf{K}\boldsymbol{\phi}_j \tag{8.105}$$

Because M and K are symmetric, taking the transpose of both sides of Equation 8.104 gives

$$\omega_{ni}^2\big(\boldsymbol{\phi}_j^T\mathbf{M}\boldsymbol{\phi}_i\big)^T = \big(\boldsymbol{\phi}_j^T\mathbf{K}\boldsymbol{\phi}_i\big)^T \tag{8.106}$$

Here, note that

$$\big(\boldsymbol{\phi}_j^T\mathbf{M}\boldsymbol{\phi}_i\big)^T = \boldsymbol{\phi}_i^T\mathbf{M}\boldsymbol{\phi}_j, \quad \big(\boldsymbol{\phi}_j^T\mathbf{K}\boldsymbol{\phi}_i\big)^T = \boldsymbol{\phi}_i^T\mathbf{K}\boldsymbol{\phi}_j \tag{8.107}$$

Substitution of Equations 8.107 into Equation 8.106 and subtraction of the subsequent result from Equation 8.105 results in

$$\big(\omega_{nj}^2 - \omega_{ni}^2\big)\boldsymbol{\phi}_i^T\mathbf{M}\boldsymbol{\phi}_j = 0 \tag{8.108}$$

Because, in general,

$$\omega_{ni}^2 \ne \omega_{nj}^2 \tag{8.109}$$

we thus have

$$\boldsymbol{\phi}_i^T\mathbf{M}\boldsymbol{\phi}_j = 0, \quad i \ne j \tag{8.110}$$

and

$$\boldsymbol{\phi}_i^T\mathbf{M}\boldsymbol{\phi}_i = m_i, \quad i = j \tag{8.111}$$

where m_i is called the ith modal mass.



Combining Equations 8.110 and 8.111 results in the orthogonal condition

$$\boldsymbol{\phi}_i^T\mathbf{M}\boldsymbol{\phi}_j = \begin{cases} m_i, & i = j \\ 0, & i \ne j \end{cases} \tag{8.112}$$

Similarly, the following can also be proved:

$$\boldsymbol{\phi}_i^T\mathbf{K}\boldsymbol{\phi}_j = \begin{cases} k_i, & i = j \\ 0, & i \ne j \end{cases} \tag{8.113}$$

and likewise

$$\boldsymbol{\phi}_i^T\mathbf{C}\boldsymbol{\phi}_j = \begin{cases} c_i, & i = j \\ 0, & i \ne j \end{cases} \tag{8.114}$$

Here, k_i and c_i are called the ith modal stiffness and modal damping coefficient, respectively; similar to the modal mass, we use italic letters to denote these modal parameters. Equations 8.112, 8.113, and 8.114 are referred to as the weighted orthogonal conditions.

8.3.3.2 Modal Analysis
8.3.3.2.1  Characteristic Equation
Using the orthogonal conditions, the eigenvectors or mode shapes can be used to obtain the SDOF vibration systems mode by mode. In doing so, first consider the homogeneous equation

$$\mathbf{M}\ddot{\mathbf{x}}(t) + \mathbf{C}\dot{\mathbf{x}}(t) + \mathbf{K}\mathbf{x}(t) = \mathbf{0} \tag{8.115}$$

Next, assume the following:

$$\mathbf{x}(t) = \boldsymbol{\phi}_ie^{\lambda_it} \tag{8.116}$$

Substituting Equation 8.116 into Equation 8.115 and premultiplying both sides by $\boldsymbol{\phi}_i^T$ yields

$$\lambda_i^2\boldsymbol{\phi}_i^T\mathbf{M}\boldsymbol{\phi}_ie^{\lambda_it} + \lambda_i\boldsymbol{\phi}_i^T\mathbf{C}\boldsymbol{\phi}_ie^{\lambda_it} + \boldsymbol{\phi}_i^T\mathbf{K}\boldsymbol{\phi}_ie^{\lambda_it} = 0, \quad i = 1, \ldots, n \tag{8.117}$$

The characteristic equations for the n SDOF systems have now been obtained:

$$m_i\lambda_i^2 + c_i\lambda_i + k_i = 0, \quad i = 1, \ldots, n \tag{8.118}$$

In comparing the characteristic equations for the n SDOF systems to that of a single SDOF system, it is determined that

$$\omega_{ni} = \sqrt{\frac{k_i}{m_i}}, \quad i = 1, \ldots, n \tag{8.119}$$

and

$$\zeta_i = \frac{c_i}{2\sqrt{m_ik_i}}, \quad i = 1, \ldots, n \tag{8.120}$$

Similar to an SDOF system, when

ζ_i < 1 (8.121)

the ith mode is underdamped. When

ζ_i = 1 (8.122)

the ith mode is critically damped. Lastly, when

ζ_i > 1 (8.123)

the ith mode is overdamped. In the case of critically damped and overdamped modes, the ith mode reduces to two real-valued subsystems; thus, it no longer represents a vibration mode. Note again that, for a stable system, we need all damping ratios to be nonnegative, which is guaranteed by M > 0, C ≥ 0, and K > 0. This is also true for nonproportionally damped systems.

8.3.3.2.2  Vibration Modes
8.3.3.2.2.1   The Essence of Equation 8.116, Separation of Variables  In Equation 8.116, the assumption x(t) = ϕ_i e^{λ_it} implies the separation of variables. In comparing Equation 8.116 to Equation 6.36, the term e^{λ_it} can be seen as the free decay response of an SDOF vibration system with an amplitude of unity. Thus, e^{λ_it} is a scalar temporal function, which describes the ith modal vibration. The unity vibration response can then be denoted as follows:

$$q_i(t) = e^{\lambda_it} \tag{8.124}$$

Here, q_i(t) is called the ith modal response of the free decay vibration. Furthermore, looking at ϕ_i, it is seen that ϕ_i contains the spatial variables only, written as

$$\boldsymbol{\phi}_i = \begin{Bmatrix} \phi_{1i} \\ \phi_{2i} \\ \vdots \\ \phi_{ni} \end{Bmatrix} \tag{8.125}$$

Here, ϕ_i distributes the modal response q_i(t) to the different masses from 1 through n. Equation 8.116 can be rewritten as

$$\mathbf{x}_i(t) = \boldsymbol{\phi}_iq_i(t) \tag{8.126}$$

where the subscript i of x_i in the physical domain stands for the response due to the ith mode only. Substituting Equation 8.126 into Equation 8.115 and premultiplying both sides of the result by $\boldsymbol{\phi}_i^T$ will yield

$$\boldsymbol{\phi}_i^T\mathbf{M}\boldsymbol{\phi}_i\ddot{q}_i(t) + \boldsymbol{\phi}_i^T\mathbf{C}\boldsymbol{\phi}_i\dot{q}_i(t) + \boldsymbol{\phi}_i^T\mathbf{K}\boldsymbol{\phi}_iq_i(t) = 0, \quad i = 1, \ldots, n \tag{8.127a}$$

or

$$m_i\ddot{q}_i(t) + c_i\dot{q}_i(t) + k_iq_i(t) = 0, \quad i = 1, \ldots, n \tag{8.127b}$$

Suppose the system is excited by the initial conditions

$$\mathbf{v}_0 = \begin{Bmatrix} v_{01} \\ v_{02} \\ \vdots \\ v_{0n} \end{Bmatrix} \quad\text{and}\quad \mathbf{x}_0 = \begin{Bmatrix} x_{01} \\ x_{02} \\ \vdots \\ x_{0n} \end{Bmatrix}$$

The modal initial conditions can be found as

$$\dot{\mathbf{q}}(0) = [\dot{q}_1(0), \dot{q}_2(0), \ldots, \dot{q}_n(0)]^T = [\boldsymbol{\phi}_1\;\boldsymbol{\phi}_2\;\cdots\;\boldsymbol{\phi}_n]^{-1}\mathbf{v}_0 \tag{8.128a}$$

$$\mathbf{q}(0) = [q_1(0), q_2(0), \ldots, q_n(0)]^T = [\boldsymbol{\phi}_1\;\boldsymbol{\phi}_2\;\cdots\;\boldsymbol{\phi}_n]^{-1}\mathbf{x}_0 \tag{8.128b}$$

As a result, a set of n SDOF vibration systems has been attained. This procedure is referred to as modal decoupling or modal analysis.

Example 8.5

In the previous example in Section 8.3.2, the eigenvectors were calculated to be ϕ₁ = [0.4896 0.8719]ᵀ and ϕ₂ = [0.9628 −0.2703]ᵀ. Find the modal response functions of the system and calculate the free decay vibration of the modal responses due to the initial velocity v₀ = [1, 2]ᵀ and initial displacement x₀ = [−2, 2]ᵀ.
The modal initial conditions are

$$\dot{\mathbf{q}}(0) = \begin{bmatrix} 0.4896 & 0.9628 \\ 0.8719 & -0.2703 \end{bmatrix}^{-1}\begin{Bmatrix} 1 \\ 2 \end{Bmatrix} = \begin{Bmatrix} 2.2595 \\ -0.1105 \end{Bmatrix}$$

$$\mathbf{q}(0) = \begin{bmatrix} 0.4896 & 0.9628 \\ 0.8719 & -0.2703 \end{bmatrix}^{-1}\begin{Bmatrix} -2 \\ 2 \end{Bmatrix} = \begin{Bmatrix} 1.4250 \\ -2.8021 \end{Bmatrix}$$

The first modal mass, damping, and stiffness are, respectively,

$$\boldsymbol{\phi}_1^T\mathbf{M}\boldsymbol{\phi}_1 = 1.7603, \quad \boldsymbol{\phi}_1^T\mathbf{C}\boldsymbol{\phi}_1 = 0.3859, \quad \boldsymbol{\phi}_1^T\mathbf{K}\boldsymbol{\phi}_1 = 21.4615$$

Thus, the first modal equation is

$$1.7603\,\ddot{q}_1(t) + 0.3859\,\dot{q}_1(t) + 21.4615\,q_1(t) = 0$$

with an initial modal velocity of 2.2595 and a modal displacement of 1.4250. Similarly, the second modal equation is written as

$$1.0731\,\ddot{q}_2(t) + 2.4474\,\dot{q}_2(t) + 35.2052\,q_2(t) = 0$$

with an initial modal velocity of −0.1105 and a modal displacement of −2.8021.
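The numbers in Example 8.5 can be confirmed with a few lines of numpy (the mode shapes are taken from Example 8.4):

```python
import numpy as np

M = np.array([[1.0, 0.0], [0.0, 2.0]])
C = np.array([[2.0, -1.0], [-1.0, 1.0]])
K = np.array([[30.0, -10.0], [-10.0, 30.0]])
Phi = np.array([[0.4896, 0.9628], [0.8719, -0.2703]])

print(np.linalg.solve(Phi, [1.0, 2.0]))    # modal velocities [ 2.2595 -0.1105], Eq. 8.128a
print(np.linalg.solve(Phi, [-2.0, 2.0]))   # modal displacements [ 1.4250 -2.8021], Eq. 8.128b

for i in range(2):                          # modal mass, damping, stiffness
    phi = Phi[:, i]
    print(phi @ M @ phi, phi @ C @ phi, phi @ K @ phi)
# 1.7603 0.3859 21.4615   and   1.0731 2.4474 35.2052
```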

8.3.4 Modal Superposition
Because the system is linear, once the modal responses are obtained, we can sum them to construct the response in the physical domain. Letting x_i(t) = ϕ_iq_i(t), we have

$$\mathbf{x}(t) = \mathbf{x}_1(t) + \mathbf{x}_2(t) + \cdots + \mathbf{x}_n(t) = \boldsymbol{\phi}_1q_1(t) + \boldsymbol{\phi}_2q_2(t) + \cdots + \boldsymbol{\phi}_nq_n(t) = [\boldsymbol{\phi}_1\;\boldsymbol{\phi}_2\;\cdots\;\boldsymbol{\phi}_n]\,\mathbf{q}(t) \tag{8.129}$$

Accordingly, the response denoted by x(t) is called the physical response, as compared with the modal responses denoted by q_i(t). Note that, at a given time t, x(t) is a vector whose jth element is the displacement measured at the jth location, whereas q_i(t) is a scalar, which is the response of the ith mode.

Example 8.6

In the example from Section 8.3.3, we calculated the modal responses q₁(t) and q₂(t). Find the response in the physical domain.

x(t) = [ϕ₁ ϕ₂, …, ϕ_n] q(t)

The results are plotted in Figure 8.4b; as a comparison, the modal responses calculated in the previous example are plotted in Figure 8.4a. Additionally, because Equation 8.126 contains only the ith mode, it can be rewritten as follows:

x_i(t) = ϕ_iq_i(t) (8.130)

Here, the italic symbol ϕ_i is used to denote the normalized mode shape. Note that

$$\phi_i = \frac{\boldsymbol{\phi}_i}{\sqrt{m_i}} \tag{8.131}$$

In other words, the mode shape ϕ_i can be normalized so that the following product is unity:

$$\phi_i^T\mathbf{M}\phi_i = 1$$

Given that the system is linear, the linear combination can be obtained as

$$\mathbf{x}(t) = \sum_{i=1}^{n} a_i\mathbf{x}_i(t) = \sum_{i=1}^{n} a_i\phi_iq_i(t) \tag{8.132}$$

[Figure 8.4 plots (a) the first and second modal responses q₁(t) and q₂(t) (modal displacement versus time) and (b) the physical displacements x₁(t) and x₂(t) versus time, over 0 to 10 s.]

FIGURE 8.4  Modal and physical responses.



Equation 8.132 is also referred to as modal superposition with the normalized mode shapes ϕ_i. The scalar a_i is called the modal participation factor for the free decay vibration mode, and

$$a_i = \frac{1}{\sqrt{m_i}} \tag{8.133}$$

It is noted that there can be several different types of normalization for the mode shape ϕ_i; Equation 8.131 is only one of them.

8.3.5 Forced Response and Modal Truncation

8.3.5.1 Forced Response
The concept of modal superposition can also be used in representing the solutions of forced responses.
Again, it is assumed that

x_i(t) = q_i(t)ϕ_i

where q_i(t) is the ith modal response. In the case of forced vibration, it is no longer equal to e^{λ_it} as described in Equation 8.124; rather, it becomes the forced modal response. The modal response q_i(t) can be solved as follows.
In solving for q_i(t), first substitute the above assumption into Equation 8.5b, the equation of forced vibration for an M-C-K system. In the same way, premultiplying both sides of the resulting equation by $\boldsymbol{\phi}_i^T$ will yield

$$\boldsymbol{\phi}_i^T\mathbf{M}\boldsymbol{\phi}_i\ddot{q}_i(t) + \boldsymbol{\phi}_i^T\mathbf{C}\boldsymbol{\phi}_i\dot{q}_i(t) + \boldsymbol{\phi}_i^T\mathbf{K}\boldsymbol{\phi}_iq_i(t) = \boldsymbol{\phi}_i^T\mathbf{f}(t) \tag{8.134}$$

The scalar forcing function is denoted as

$$\boldsymbol{\phi}_i^T\mathbf{f}(t) = g_i(t) \tag{8.135}$$

This results in a typical equation of motion for an SDOF vibration system:

$$m_i\ddot{q}_i(t) + c_i\dot{q}_i(t) + k_iq_i(t) = g_i(t) \tag{8.136}$$

where g_i(t) is the ith modal forcing function.
Based on our knowledge of SDOF systems, q_i(t) is now solvable from Equation 8.136.

8.3.5.2 Rayleigh Quotient
Dividing both sides of Equation 8.134 by $\boldsymbol{\phi}_i^T\mathbf{M}\boldsymbol{\phi}_i$ results in the monic modal equation

$$\ddot{q}_i(t) + \frac{\boldsymbol{\phi}_i^T\mathbf{C}\boldsymbol{\phi}_i}{\boldsymbol{\phi}_i^T\mathbf{M}\boldsymbol{\phi}_i}\dot{q}_i(t) + \frac{\boldsymbol{\phi}_i^T\mathbf{K}\boldsymbol{\phi}_i}{\boldsymbol{\phi}_i^T\mathbf{M}\boldsymbol{\phi}_i}q_i(t) = \frac{\boldsymbol{\phi}_i^T\mathbf{f}(t)}{\boldsymbol{\phi}_i^T\mathbf{M}\boldsymbol{\phi}_i} \tag{8.137}$$

It is seen that

$$\frac{\boldsymbol{\phi}_i^T\mathbf{C}\boldsymbol{\phi}_i}{\boldsymbol{\phi}_i^T\mathbf{M}\boldsymbol{\phi}_i} = 2\zeta_i\omega_{ni} \tag{8.138}$$

and

$$\frac{\boldsymbol{\phi}_i^T\mathbf{K}\boldsymbol{\phi}_i}{\boldsymbol{\phi}_i^T\mathbf{M}\boldsymbol{\phi}_i} = \omega_{ni}^2 \tag{8.139}$$

Now, consider a generic notation denoted by

$$R = \frac{\boldsymbol{\phi}^T\mathbf{A}\boldsymbol{\phi}}{\boldsymbol{\phi}^T\boldsymbol{\phi}} \tag{8.140}$$

In this instance, A is an n × n positive definite or positive semidefinite matrix, and ϕ is an n × 1 vector.
The ratio described in Equation 8.140 is referred to as the Rayleigh quotient, denoted by R. When the vector ϕ varies, the Rayleigh quotient R will vary accordingly. It can be proven that only when ϕ becomes an eigenvector of A does R reach a stationary point; the value of R at a stationary point is the corresponding eigenvalue.
In Equations 8.138 and 8.139, the terms $\boldsymbol{\phi}_i^T\mathbf{C}\boldsymbol{\phi}_i/\boldsymbol{\phi}_i^T\boldsymbol{\phi}_i$ and $\boldsymbol{\phi}_i^T\mathbf{K}\boldsymbol{\phi}_i/\boldsymbol{\phi}_i^T\boldsymbol{\phi}_i$ are Rayleigh quotients. Because ϕ_i is the ith eigenvector, the terms 2ζ_iω_i and ω_i² are, respectively, the eigenvalues of the damping and stiffness matrices of the monic system I-C-K. Furthermore, the terms $\boldsymbol{\phi}_i^T\mathbf{C}\boldsymbol{\phi}_i/\boldsymbol{\phi}_i^T\mathbf{M}\boldsymbol{\phi}_i$ and $\boldsymbol{\phi}_i^T\mathbf{K}\boldsymbol{\phi}_i/\boldsymbol{\phi}_i^T\mathbf{M}\boldsymbol{\phi}_i$ are generalized Rayleigh quotients. When ϕ reaches the ith mode shape ϕ_i, the corresponding generalized Rayleigh quotients 2ζ_iω_i and ω_i² reach, respectively, the eigenvalues of the damping matrix M⁻¹C and the stiffness matrix M⁻¹K of the system M-C-K.
Also note that ϕ reaches the ith eigenvector of M⁻¹C and M⁻¹K simultaneously. Therefore, the system M-C-K can be decoupled mode by mode. In other words, the Rayleigh quotient is the basis of normal modal analysis.

8.3.5.3 Ground Excitation and Modal Participation Factor
For the case of ground excitation,

$$\mathbf{f}(t) = -\mathbf{M}\mathbf{J}\ddot{x}_g(t) \tag{8.141}$$

and x(t) becomes the relative displacement vector (see Chapter 6, base excitation, and Equation 6.127). Here, in Equation 8.141,

$$\mathbf{J} = \begin{Bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{Bmatrix} \tag{8.142}$$

Thus, the generic modal force in Equation 8.137 can be replaced by

$$g_i(t) = \frac{\boldsymbol{\phi}_i^T\mathbf{f}(t)}{\boldsymbol{\phi}_i^T\mathbf{M}\boldsymbol{\phi}_i} = -\frac{\boldsymbol{\phi}_i^T\mathbf{M}\mathbf{J}\,\ddot{x}_g(t)}{\boldsymbol{\phi}_i^T\mathbf{M}\boldsymbol{\phi}_i} \tag{8.143}$$

The scalar Γ_i can be denoted as

$$\Gamma_i = \frac{\boldsymbol{\phi}_i^T\mathbf{M}\mathbf{J}}{\boldsymbol{\phi}_i^T\mathbf{M}\boldsymbol{\phi}_i} \tag{8.144}$$

In Equation 8.143, the term $\Gamma_i\ddot{x}_g(t)$ is defined as the modal participation factor for the ith mode, whereas Γ_i is the unit acceleration load for the ith mode. In the following, for convenience, Γ_i is also referred to as the modal participation factor. It will be shown that the value of Γ_i depends on the normalization of ϕ_i.
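A short sketch of Equation 8.144 follows (reusing the M and mode shapes of Example 8.4; the Γ_i values shown in the comments follow from those inputs):

```python
import numpy as np

M = np.array([[1.0, 0.0], [0.0, 2.0]])
Phi = np.array([[0.4896, 0.9628], [0.8719, -0.2703]])
J = np.ones(2)

for i in range(2):
    phi = Phi[:, i]
    Gamma = (phi @ M @ J) / (phi @ M @ phi)   # Eq. 8.144
    print(Gamma)                              # 1.2688 and 0.3934
```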

8.3.5.4 Modal Superposition, Forced Vibration


8.3.5.4.1  Eigenvalue and Eigenvector Matrices
Combining the eigenvalues and eigenvectors in matrix form, we denote

Φ = [ϕ1 ϕ2 … ϕn] (8.145)

It can be shown that

ΦTMΦ = diag(mi),  for i = 1, … n (8.146)

ΦTCΦ = diag(ci),  for i = 1, … n (8.147)

and

ΦTKΦ = diag(ki),  for i = 1, … n (8.148)



Furthermore,

$$\boldsymbol{\Phi}^{-1}[\mathbf{M}^{-1}\mathbf{C}]\boldsymbol{\Phi} = \mathrm{diag}(2\zeta_i\omega_{ni}), \quad i = 1, \ldots, n \tag{8.149}$$

and

$$\boldsymbol{\Phi}^{-1}[\mathbf{M}^{-1}\mathbf{K}]\boldsymbol{\Phi} = \mathrm{diag}\big(\omega_{ni}^2\big), \quad i = 1, \ldots, n \tag{8.150}$$

Here, diag(2ζ_iω_{ni}) and diag(ω_{ni}²) are the eigenvalue matrices of the matrices M⁻¹C and M⁻¹K, respectively. Additionally, Φ is the eigenvector matrix.
In Equations 8.146 through 8.150, the eigenvector matrix can be normalized in many respects. When calculating the solution of a specific system, once Φ is chosen to have fixed modal mass, damping, and stiffness as defined in Equations 8.146 through 8.148, the value of any ϕ_i should not be changed. Thus, the value of the modal participation factor Γ_i is fixed, as is the modal response q_i(t). The solution in the physical domain can then be determined.

8.3.5.4.2  Solution of the Forced M-C-K System
Using the matrix form, with a fixed value of Φ, the solution of the forced MDOF system in the physical domain can be written as

$$\mathbf{x}(t) = \boldsymbol{\Phi}\mathbf{q}(t) \tag{8.151}$$

Equation 8.151 can be seen as a linear transform or mapping. In this instance, the modal response q(t) is transferred by the mode shape matrix Φ to a physical response. Generally, q(t) is said to be in the modal domain or modal space, whereas x(t) is in the physical domain or physical space. If the number of degrees of freedom is n, then the modal domain is n-dimensional.
In Equation 8.151,

$$\mathbf{q}(t) = \begin{Bmatrix} q_1(t) \\ q_2(t) \\ \vdots \\ q_n(t) \end{Bmatrix} \tag{8.152}$$

Therefore, Equation 8.151 can be rewritten as

$$\mathbf{x}(t) = \begin{Bmatrix} x_1(t) \\ x_2(t) \\ \vdots \\ x_n(t) \end{Bmatrix} = \begin{bmatrix} \phi_{11} & \phi_{12} & \cdots & \phi_{1n} \\ \phi_{21} & \phi_{22} & & \phi_{2n} \\ & & \cdots & \\ \phi_{n1} & \phi_{n2} & \cdots & \phi_{nn} \end{bmatrix}\begin{Bmatrix} q_1(t) \\ q_2(t) \\ \vdots \\ q_n(t) \end{Bmatrix} \tag{8.153}$$

and

$$x_j(t) = \sum_{i=1}^{n}\phi_{ji}q_i(t) \tag{8.154}$$

8.3.5.5 Modal Truncation
Higher modes contain much less energy. Thus, it is practical to use only the first S modes in an approximation of the solution, written as

$$x_j(t) = \sum_{i=1}^{S}\phi_{ji}q_i(t) \tag{8.155}$$

Typically, the number of retained modes, S, will be considerably smaller than the total number of modes, n, that is,

S ≪ n (8.156)

Specifically, this is expressed as

$$\mathbf{x}(t)_{n\times1} \approx [\boldsymbol{\phi}_1\;\boldsymbol{\phi}_2\;\cdots\;\boldsymbol{\phi}_S]_{n\times S}\begin{Bmatrix} q_1(t) \\ q_2(t) \\ \vdots \\ q_S(t) \end{Bmatrix}_{S\times1}, \quad S < n \tag{8.157}$$

In matrix form,

$$\mathbf{x}(t) \approx \boldsymbol{\Phi}_C\mathbf{q}_C(t) \tag{8.158}$$

where Φ_C = [ϕ₁ ϕ₂ ⋯ ϕ_S]_{n×S} is the truncated mode shape matrix and q_C is the truncated modal response,

$$\mathbf{q}_C(t) = \begin{Bmatrix} q_1(t) \\ q_2(t) \\ \vdots \\ q_S(t) \end{Bmatrix}_{S\times1}$$
Random Vibration of MDOF Linear Systems 423

In many cases, only the first modal response will be used. This is called the
fundamental modal response, which is used to represent the displacement. This is
written as

x(t) ≈ ϕ1q1(t) (8.159)

8.3.6 Response to Random Excitations


Next, we consider random responses. Generally speaking, it is rare to have a rigor-
ously defined stationary response for two reasons even though our vibration system
is stable. First, when the forcing function is not stationary. Second, even the input
is stationary, for a limited operating time, we will have transient response, which is
not stationary. Therefore, in practical applications, a stationary process of vibration
responses should not be assumed until we can prove it. In addition, many random
processes are not zero mean either. In such circumstances, we consider covari-
ance function of (t1, t2), instead of using the correlation function of (τ), as a general
approach. Furthermore, for averaging, we should consider ensemble average instead
of temporal average in many practical applications. On the other hand, however,
many engineering random processes are mean square integrable. In the following,
we assume all the signals are mean square integrable throughout the chapter, and we
use lowercase letters f, g, q, and x, etc., for random processes for simplicity.

8.3.6.1 Modal and Physical Response


We now consider the mean and covariance of the response of proportionally damped
systems through normal mode decoupling. Suppose that in Equation 8.5b, the forc-
ing function is a random process. After decoupling, we can have (see Equation 8.136)

 f1 (t ) 
 
 f2 (t ) 
mi qi (t ) + ci q i (t ) + ki qi (t ) = gi (t ) = φiT f ( t) = φiT   (8.160)
 … 
 fn (t ) 
 

with modal initial velocity q i (0) and modal initial displacement qi(0) (see Equation
8.128a,b).
Here fj(t) is the physical forcing at the jth location, whereas gi(t) is the ith modal
force. If the forcing function f(t) is Gaussian, it is easy to see that the modal forc-
ing functions should also be Gaussian. For stable MDOF systems, the ith modal
response is also Gaussian. Furthermore, both the jth responses given by the com-
plete or truncated modal superposition (see Equations 8.154 and 8.155) should also
be Gaussian. Thus, the output responses will be completely characterized by the
means and covariance functions. In the following examples, let us use the complete
response for convenience.
424 Random Vibration

Therefore, we first consider the modal response qi(t), which can be written as

 ζω  T
qi (t ) = qi (0) e − ζiω nit  cos ω dit + i ni sin ω dit  + q i (0)hi (t ) +
 ω di  ∫
0
hi (t − τ) gi (τ) d τ

(8.161)

Here, hi(t) is the ith unit impulse response function with damping ratio ζi and
natural frequency ω ni. In addition,

ω di = 1 − ζi2 ω ni (8.162)

is the ith damped natural frequency. With the help of modal superposition, we fur-
ther have

x(t) = Φq(t) (8.163)

8.3.6.2 Mean
The mean of x(t) is given by

  ζω  
µ X (t ) = Φ diag  qi (0)e − ζi ω ni t  cos ω di t + i ni sin ω di t  + q i (0)hi (t ) 
  ω di  
T (8.164)
+
∫0
Φ diag  hi (t − τ)µ gi (τ)  dτ

where diag[(.)i] is a diagonal matrix with its iith entry equivalent to (.)i. Furthermore

µ gi (t ) = E[ gi (t )] (8.165a)

It should be note that the mean vector of the force in the physical domain is given
by

μf(t) = E[f(t)] (8.165b)

8.3.6.3 Covariance
The covariance matrix of the random response x(t) is given by

σXX(t1,t2) = E[{x(t1) − μX(t1)} [{x(t2) − μX(t2)}T]


Random Vibration of MDOF Linear Systems 425

Substituting from Equation 8.163 yields

σ XX (t1 ,t2 ) =


{ }{ }
t1 t2 T 



0
d τ1

0
d τ 2ΦH (t1 − τ1 )Φ T E diag  g(τ1 ) − µ gi (τ1 ) g(t2 ) − µ gi (τ 2 )  ΦH (t2 − τ 2 )Φ T
  
(8.166)

Here

 g (τ ) − µ (τ ) 
g1 1
 1 1 
 g2 (τ1 ) − µ g2 (τ1 ) 
g(τ1 ) − µ g (τ1 ) =   (8.167a)
  
 gn (τ1 ) − µ g (τ1 ) 
 n

and

diag{[ g(τ1 ) − µ g (τ1 )][ g(τ 2 ) − µ g (τ 2 )]T}


(8.167b)
= diag{[gi (τ1 ) − µ gi (τ1 )][ gi (τ 2 ) − µ gi (τ 2 )]}

In the examples above, we denote the covariance of the modal forcing process to
be

σFF (t1,t2) = E[{f(τ1) − μf(τ1)}{f(τ2) − μf(τ2)}T] (8.168)

Substitution of Equation 8.167b into Equation 8.166 results in

t1 t2
σ XX (t1 , t2 ) =
∫ 0
d τ1

0
d τ 2 Φ H (t1 − τ1 )Φ T σ FF (t1 , t2 )Φ H (t2 − τ 2 )Φ T (8.169)

For convenience, in Equations 8.166 and 8.169, a special diagonal H matrix is


given by

H[(.)] = diag[hi(.)] (8.170)


426 Random Vibration

8.3.6.4  Probability Density Function for xi(t)


If the forcing function is Gaussian, the response is also Gaussian, then the PDF of
xi(t) is given by

( x −µ )
2
i Xi

1 2 σ 2X
f Xi ( xi ) = e i
(8.171)
2πσ Xi

where the variance σX2 i is the iith entry of the covariance matrix σXX(t,t).
If the fj(t), the forcing function applied at jth location, etc., are jointly normally
distributed, then xj(t) are also jointly normally distributed. The PDF can be given by

1
1 − ( x − µ X )T σ FF ( t ,t )( x −µ
µX)
f X ( x1 , x 2 , … x n ) = e 2
(8.172)
n
2π det[σ FF (t , t )]

Example 8.7

Suppose an automobile suspension system can be modeled as shown in Figure 8.5.


For the ground excitation system, we have the equation of motion written as

Mx + Cx + Kx = − MJ xg


where x = [x1 x2]T is the vector of relative displacement. Find the mean and covari-
ance of the displacement by using the normal mode method.

x2
m2

k2 c2

x1
m1

k1 c1

xg

FIGURE 8.5  Model of automobile suspension system.


Random Vibration of MDOF Linear Systems 427

Suppose this system is proportionally damped. The equation of motion can be


decoupled as

q1(t ) + 2 ζ1ω n1q1(t ) + ω n21 q1(t ) = g1(t )


q2(t ) + 2 ζ 2ω n 2q 2(t ) + ω n22 q2(t ) = g2(t )



If the ground acceleration xg (t ) is a stationary white noise with auto-PSD S0,
then both the modal force g1(t) and g2(t) will be proportional to xg and

σFF(t1,t2) = DS0 δ(τ)

where D is a 2 × 2 diagonal matrix with corresponding proportional factors. In


this case, we can have

2
 m 
 T 1 
 φ Mφ 
D=  1 1
m2 
 
 φ2T Mφ2 

and S0 is the auto-PSD of the ground acceleration

t1 t2
σ XX (t1,t 2 ) = S0

0
dτ1
∫0
dτ 2 Φ H(t1 − τ1)Φ T D Φ H(t 2 − τ 2 )Φ T

Suppose we have m1 = 1400 (kg); m2 = 120 (kg); c1 = 12.571 (kN/m-s); c2 =


1.429 (kN/m-s); k1 = 2200 (kN/m); k2 = 250 (kN/m). We thus can calculate the nat-
ural frequencies, damping ratios, and damped natural frequencies as ωn1 = 35.838
(rad/s), ωn2 = 50.487 (rad/s); ζ1 = 0.1024, ζ2 = 0.1442; ωd1 = 35.650 (rad/s), ωd2 =
49.960 (rad/s); the modal mass of the first and second mode are, respectively, m1 =
284.1181 and m2 = 180.8987.
The diagonal H matrix is

H = 1000 diag([e−3.6696t sin(35.6497t)/10.129 e−7.2828t sin(49.9593t)/9.038])

Suppose S0 = 25.26, the matrix D S0 is

D = diag([D1 D2]) = diag([613.3, 11.11])

The mode shape matrix is

φ φ12   0.3581 0.2181 


Φ =  11 =  
 φ 21 φ 22   0.9337 −0.9759
9
428 Random Vibration

Based on the above computation, the variance can be calculated (see Equation
8.169). For example, the first entry of σXX(t,t) is

φ11
4
D1 + φ11φ 21D2
2 2 ∞

ω d1m12
2 ∫ 0
e −2ζ1ω n1(t − τ ) sin2 ω d1(t − τ)dτ

2φ11φ 21D1 + 2φ11φ12φ 21φ 22D2


2 2 ∞
+
ω d1ω d 2m1m2 ∫ 0
e −(ζ1ω n1+ζ2ω n 2 )(t − τ ) sin ω d1(t − τ)sin ω d 2 (t − τ)dτ

φ12
4
D1 + φ12φ 22D2
2 2 ∞
+
ω d 2m22
2 ∫ 0
e −2ζ2ω n 2 (t − τ ) sin2 ω d 2 (t − τ)dτ

 ∞
= 11.03
 ∫ 0
e −7.339(t − τ ) sin2 35.65(t − τ)dτ


− 148.05
∫ 0
e −10.952(t − τ ) sin 35.65(t − τ)sin 49.96(t − τ)dτ


+ 2.32
∫ 0
e −14.566(t − τ ) sin2 49.96(t − τ)dτ] × 10 −8

8.4 Nonproportionally Damped
Systems, Complex Modes
If the Caughey criterion cannot be satisfied, then the system is nonproportionally
damped, or generally damped. In this case, the mode shape function can no longer
be used to decouple the system. However, modal analysis can still be carried out in a
2n space. Generally, this will result in the mode shape being complex in value. The
corresponding decoupling is referred to as the complex mode method (Liang and
Inman 1990; Liang and Lee 1991b).

8.4.1 Nonproportional Damping
Given that the complex mode is the result of damping, the damping will be consid-
ered first.

8.4.1.1 Mathematical Background
The following are both mutually sufficient and necessary:

1. The Caughey criterion is not satisfied (Caughey and O’Kelly 1965; Ventura
1985)

CM−1K ≠ KM−1C (8.173)


2. The Rayleigh quotients of M C and M K do not reach the standing point
−1 −1

simultaneously for, at minimum, one mode


3. M−1C and M−1K do not share the same eigenvector matrix
4. The mode shape is not the eigenvector of M−1K or the eigenvector of M−1C
Random Vibration of MDOF Linear Systems 429

5. The M-C-K system cannot be decoupled in n-dimensional modal space


6. At least two modal energy transfer ratios are nonzero

In the event that all of the above is true, nonproportional damping exists.

8.4.1.2 The Reality of Engineering


1. It is very rare to have proportional damping (Liang and Lee 1991a).
2. If the damping force is small, then portional damping can be used as a good
approximation.
3. If the damping force is sufficiently large, then using proportional damping
can introduce large error.

8.4.2 State Variable and State Equation


Remembering that nonproportionally damped systems require the use of modal
analysis, a 2n space must be generated (Warburton and Soni 1977; Villaverde 1988).
Rewriting Equation 8.5b:

(t ) + Cx (t ) + Kx(t ) = f (t )


Mx (8.174)

Equation 8.174 is modified into a matrix equation, referred to as the state equation

 x   − M −1C − M −1K   x   M −1f 


 =    +   (8.175)
 x   I 0   x   0 

Furthermore, with the help of the state and the input matrices A and B, Equation
8.175 can be expressed as

 (t ) = AY(t ) + Bf (t )
Y (8.176)

Here, the dimension of the vector Y is 2n × 1, referred to the state vector, specifically

 x (t ) 
Y(t ) =   (8.177)
 x(t ) 2 n×1

Remembering

 x (t ) 
 1 
 x (t ) 
x(t ) =  2  (8.178)
  
 x n (t ) 
  n×1
430 Random Vibration

the state matrix in time can be written as

 [− M −1C] [− M −1K ]n× n 


A=  n× n
 (8.179)
 I n× n 0 n× n 
  2n× 2 n

Here, I and 0 are the identity and null matrices, respectively, with the dimen-
sion n × n. Note that the state matrix is not necessarily expressed in Equation 8.179,
another form can be seen in the example in Section 8.4.4. Finally, the input matrix
B is

 −1 
B= M  (8.180)
 0 

In Equation 8.180, 0 is also the n × n null matrix.

8.4.3 Eigen-Problem of Nonproportionally Damped System


8.4.3.1 State Matrix and Eigen-Decomposition
The homogeneous form of Equation 8.176 is (Tong et al. 1994)

 (t ) = AY(t )
Y (8.181)

To evaluate the eigen-properties of these systems, it must first be assumed that

Y = P2n×1 eλt (8.182)

where, Y is the solution of the homogeneous Equation 8.181. Specifically expressed,


this may result in one of the following being true:

λP2 n×1e λt = AP2 n×1e λt (8.183)

or

λP2 n×1 = AP2 n×1 (8.184a)

where λ is a scalar and P2n×1 is a 2n × 1 vector.


If Y = P2n×1 eλt is a solution of Equation 8.181, then Equation 8.184a should hold.
Additionally, if Equation 8.184a holds, then Y = P2n×1 eλt will also be a solution of
Equation 8.181. Thus, from the theory of linear algebra and the theory of vibra-
tion systems, it can be proved that these necessary and sufficient conditions are
maintained.
Random Vibration of MDOF Linear Systems 431

Further assume the system to be underdamped. Because A is an asymmetric


matrix, this will conventionally result in λ and P2n×1 being complex valued. Taking
the complex conjugate of Equation 8.184a yields

λ * P2*n×1 = AP2*n×1 (8.184b)

Equations 8.184a and 8.184b form the typical eigen-problem. To be exact, if both
equations result in Y = P2n×1 eλt as a solution of Equation 8.181, then Equations 8.184
implies that λ is one of the eigenvalues of the matrix A with P as the corresponding
eigenvector.
Suppose a system that has n DOFs, thus yielding n pairs of eigenvalues and eigen-
vectors in the complex conjugates. Accordingly, Equations 8.184a and 8.184b can be
further expanded as

λ i Pi = APi i = 1,  n (8.185a)

Taking the complex conjugate of both sides of the above equation will yield

λ*i Pi* = APi* i = 1,  n (8.185b)

It is known that the eigen-problem as described in Equations 8.185a and 8.185b


implies that all the eigenvectors, Pi’s and Pi* ’s are linearly independent. Additionally,
each eigenvector is individually associated with a unique eigenvalue λi (or λ*i ). This
is expressed as

λ i = −ζiω i + j 1 − ζi2 ω i , i = 1 n (8.186a)

and

λ*i = −ζiω i − j 1 − ζi2 ω i , i = 1… n (8.186b)

In the case of a nonproportionally damped system, the natural frequency ωi and


the damping ratio ζi are derived from the above equation. Thus, for a nonproportion-
ally damped system:

ωi = │λi│, i = 1, … n (8.187)

and

ζi = −Re(λi)/ωi (8.188)
432 Random Vibration

Up to now, the natural frequency (or angular natural frequency) was all obtained
through the square root of the stiffness k over m or the square root of ki over mi. In
general, this method of calculation cannot be used to obtain the natural frequency
for damped systems. The natural frequency must instead be calculated through
Equations 8.186a,b. To distinguish the natural frequency calculated from Equations
8.186a,b from the previously defined quantities, the italic symbol, ωi is used. In addi-
tion, the normal symbol ωni stands for the ith natural frequency of the corresponding
undamped M-O-K system.

8.4.3.2 Eigenvectors and Mode Shapes


The eigenvalue λi (or λ*i ) can have an infinite number of eigenvectors. That is, suppose
Pi is one of the corresponding eigenvectors proportional to vector Ri, then for vector Ri,

Ri = αPi (8.189)

will also be the eigenvector associated with that λi, where α is an arbitrary nonzero
scalar. The eigenvector, Pi is also associated with the eigenvalue λi. Because Pi is a
2n × 1 vector, it is seen that through the assumption described in Equation 8.182, Pi
can be written as
 λ p 
Pi =  i i  (8.190)
p
 i 

where pi is an n × 1 vector. Given that the system is linear, the solution can have all
the linear combinations of pi’s and p*i ’s as follows:

* * *
x(t ) = p1e λ1t + p2e λ 2t +… pne λ nt + p1*e λ1 t + p*2 e λ 2t +… p*n e λ nt

 e λ1t   λ*1 t 
 λt  e  (8.191)
 2  * *   e λ*2t 
= [ p1 , p2 , … pn ]  e  + p1 , p2 , … p*n  
…  … 
 e λ nt   λ*nt 
e 

Denote
P = [p1, p2, …pn] (8.192)
and

 e λ1t 
 λt 
 2 
E (t ) =  e  (8.193)
… 
 e λ nt 
Random Vibration of MDOF Linear Systems 433

From this, the following can be obtained:

x(t) = P E(t) + P * E * (t) (8.194)

 t ) = P∆E (t ) + P * ∆ * E * (t )
x( (8.195)

and

x(t ) = P∆2 E (t ) + P * ∆ *2 E * (t ) (8.196)

or

 x (t )   P∆ P*∆*   E (t ) 
 =     (8.197)
 x(t )   P P*   E * (t ) 

and

 x(t )   P∆ P*∆*   ∆   E (t ) 
 =       (8.198)

 x(t )   P P*   ∆*   E * (t ) 

In this instance, Δ is defined as the diagonal n × n matrix, which contains all the
n-sets of eigenvalues. In addition, Δ can be written as follows:

 λ 
 1 
( 
∆ = diag(λ i ) = diag −ζiω i + j 1 − ζi ω i 
2 λ2
)...

 (8.199)
 
 λn 
n× n

Substitution of Equation 8.198 into Equation 8.180 with the aid of Equation 8.177
results in

 P∆ P* ∆*   ∆   E (t )   P∆ P* ∆*   E (t ) 
     =A   
 P P*   ∆*   E * (t )   P P*   E * (t ) 

(8.200)

The 2n × u matrix E can further be defined as

 E (t )   E (t + ∆t )   E[t + (u − 1)∆t ] 


E =  ,     , u ≥ 2n (8.201)
 E * (t )   E * (t + ∆t )   E *[t + (u − 1)∆t ] 

434 Random Vibration

which can be shown to have full rank 2n. Furthermore, this can be written as

 P∆ P* ∆*   ∆   P∆ P* ∆*  E
   E= A  (8.202)
 P P*   ∆*   P P* 

Given that

EE+ = I2n×2n (8.203)

both sides of Equation 8.202 can be postmultiplied by E+, where the superscript +
stands for the pseudo inverse.

 P∆ P* ∆*   ∆   P∆ P* ∆* 
   =A  (8.204)
 P P*   ∆*   P P* 

Equation 8.204 indicates that the state matrix A can be decomposed by the eigen-
value matrix

 
Λ= ∆  (8.205)
 ∆* 

and the eigenvector matrix P

 P * ∆ *  = [P , P ,  P ]
P =  P∆  1 2 2n (8.206)
 P P* 

Note that the eigenvector matrix is now arranged to have the form of a complex
conjugate pair

 P∆   P∆  *
  and   (8.207)
 P   P 

Accordingly,

P = [P1 , P2 ,  Pn ] , [P1 , P2 ,  Pn ]*  (8.208)

and,

 P∆  *  P * ∆ * 
  =   (8.209)
 P   P* 
Random Vibration of MDOF Linear Systems 435

That is,

A = PΛP −1 (8.210)

or

Λ = P −1 AP (8.211)

The matrix Λ in Equation 8.211 will maintain the same eigenvalue format as a
proportionally damped system. Conversely, the eigenvector P can obtain a different
form from the proportionally damped case due to the nonuniqueness of the eigenvec-
tor Pi as previously discussed.
In general, the submatrix P in the eigenvector matrix P is complex valued.
Equations 8.210 and 8.211 can therefore be used to define modal analysis in the 2n
complex modal domain. In this case, P is called the mode shape matrix. Note that, P
contains n vectors as expressed in Equation 8.192.
In this instance, there will be n set of triples < pi, ζi, ωi >, along with n set of its
complex conjugates. It is apparent that the damping ratio ζi and the natural frequency
ωi can be obtained through Equation 8.199. In this situation, the triple < pi, ζi, ωi >
and its complex conjugate define the ith complex mode.

8.4.3.3 Modal Energy Transfer Ratio


In a generally damped MDOF system, if by design and letting C = 0, then the cor-
responding natural frequency ωni can be calculated through the eigen-equation as
mentioned in Chapter 7, which is repeated as follows:

ωni ϕi = M−1K ϕi (8.212)

The modal energy transfer ratio (ETR) ξi can be approximated as (Liang et al.
1992; Liang et al. 2012)

ω 
ξ i = ln  i  (8.213)
 ω ni 

From the discussion of SDOF systems, we have established that the natural fre-
quency denotes the corresponding modal energy. Assume that an MDOF system is
originally undamped and then a particular type of damping is gradually added to
the system. If the damping remains proportional, then the natural frequency ωni will
remain unchanged.
Otherwise, it will be changed to ωi. In the event the natural frequency changes,
one of two events will occur. Either a certain amount of energy will be transferred
into the mode, when

ETi
ξi = >0 (8.214)
4πEK
436 Random Vibration

or, a certain amount of energy will be transferred out of the mode, when

ETi
ξi = < 0 (8.215)
4πEK

In Equations 8.214 and 8.215, ETi is the ith modal energy transferred during a
cycle and EK is the maximum conservative energy. Comparing this to the modal
damping ratio which relates to energy dissipation EDi yields

EDi
ζi = ≥0 (8.216)
4πEK i

The modal energy transfer ratio can be used in identifying whether a specific
mode is complex. Namely, a nonproportionally damped system may encompass
both complex and normal modes. This scenario cannot be distinguished through the
Caughey criterion because the Caughey criterion can only provide global judgment.
In a nonproportionally damped system, the natural frequency of the first complex
mode will always be greater than that of the undamped one, that is,

ω1 > ωn1 (8.217)

For the ith mode with ξi ≠ 0, the mode shape pi will be complex valued, that is,

  jθ1i 
p1i   p1i e 
 
 p2i   p2i e jθ2i 
pi =  =   (8.218)
  
  
 pni   pni e jθni 
   

Here θji is the corresponding phase angle.

Example 8.8

A 2-DOF system is given by

 0  6 −3   50 −10 
M= 1  (kg), C =   (N/m-s), and K =   (N/m)
0 2  −3 3   −10 30 

Find the natural frequencies, modal energy transfer ratios, damping ratios, and
mode shapes.
Decoupled from the corresponding state matrix, we have eigenvalues −0.4158 ±
3.7208j and −3.3342 + 6.2306j. The natural frequency of the first mode is ω1 =
Random Vibration of MDOF Linear Systems 437

[(–0.4158)2 + (3.7208)2]1/2 = 3.7440 (rad/s), and that of the second mode is ω2 =


7.0666 (rad/s).
Note that the natural frequencies of the undamped M-O-K system is ωn1 =
3.6913 and ωn2 = 7.1676. Thus, the corresponding modal energy transfer ratios are
 ω 
ξ i = ln  1  = 0.0142 and ξ2 = −0.0142. It is seen that, due to the nonproportional
 ω n1 
damping, the first natural frequency becomes larger by receiving energy so that
the ETR is positive, whereas for the second mode, the natural frequency becomes
smaller by giving up energy and the ETR is thus negative. In addition, for energy
conservation, we see that ξ1 + ξ2 = 0.
The damping ratio of the first mode is ζ1 = 0.4158/3.7440 = 0.1111, and that of
the second mode is ζ2 = 0.4119 (rad/s).
From the state matrix, we also have the eigenvectors

 −0.9596 −0.9596 0.3061+ 0.1255i 0.3061− 0.1255j 


 
 0.2138 + 0.1176j 0.2138 − 0.1176j 0.9077 0.9077 
P=  
 0.0641+ 0.1197j 0.0641− 0.1197j 0.0242 − 0.0850j 0.0242 + 0.0850j 
 0.0004 − 0.0345j 0.0004 + 0.0345j −0.0269 − 0.2409j −0.0269 + 0.2409j 
 

Note that the first and the second columns are for the second mode, and the third
and the fourth columns are for the first mode. So that the first mode shapes are given
by
[0.0242 ∓ 0.0850j,  0.0269 ∓ 0.2409j]T

In addition, the second mode shape is [0.0641 ± 0.00197j  0.0004 ∓ 0.00345j]T.

8.4.4 Response to Random Excitations


Similar to a proportionally damped system, we now consider the response to random
excitations for nonproportionally damped systems. Here, for convenience, we rewrite
the equation of motion of a generally damped MDOF system as

 x (t )   0 I   x(t )   0 
 =     +  −1  f (t ) (8.219)
 −1 −1 
 x(t )   − M K − M C   x(t )   M 

Or
 (t ) = A Y(t ) + B f (t )
Y (8.220)

where

 x( t)   0 I   0 
Y(t) =  , A =   and B =  −1  (8.221)
 x ( t) 
−1
 − M K − M −1C   M 

where I and 0 are, respectively, identity and null submatrices with proper dimensions.
438 Random Vibration

In this case, the state matrix can also be decoupled as indicated in Equation 8.210
so that the state equation can also be decoupled as mentioned previously.

8.4.4.1 Modal and Physical Response


If the input is Gaussian, the forcing function F (t ) = Bf (t ) of the equation of motion
described in the state equation (Equation 8.220), can have a mean and µ F (t ) and
covariance function given by

σ FF (t1 , t2 ) = E {F (t1 ) − µ F (t1 )}{F (t2 ) − µ F (t2 )}T  (8.222)

Furthermore, we know that the state equation can be decoupled by premultiplying


P −1 on both sides of Equation 8.220, that is, by denoting

U(t ) = P −1Y(t ) (8.223)

We can have

 U (t ) = ΛU(t ) + P −1F (t )

  
−1  x(0)  (8.224)
 U ( 0 ) = P  
  x (0) 

The solution of Equation 8.223 can be written as

t
U(t ) = e Λt U(0) +
∫e
0
Λ (t − τ )
P −1F (τ) d τ (8.225)

In Equation 8.225, eΛt is a 2n × 2n diagonal matrix, such that

eΛt = diag[exp(λit)] (8.226)

From Equation 8.223, we may write the response in the physical domain as

t
Y(t ) = P U(t ) = P e Λt P −1Y(0) +
∫ Pe
0
Λ(t−τ)
P −1F (τ) d τ (8.227)

It can be proven that

P e Λt P −1 = e A t (8.228)
Random Vibration of MDOF Linear Systems 439

Therefore, the response Y(t) can be further written as

t
Y(t ) = e A t Y(0) +
∫e0
A (t − τ )
F (τ) d τ (8.229)

In Equations 8.228 and 8.229, the term e At is a 2n × 2n state transition matrix.


To see the relationship between the state transition matrix and the characteristic
matrix of a generally damped MDOF system, defined by sI − A , taking the inverse
Laplace transform of the inverse of characteristic matrix yields

L−1[(sI − A)−1 ] = L−1 (s −1I + s −2 A + s −3 A 2 + )


(8.230)
= I + At + At 2 / 2! + At 3 / 3 +  = e At

Note that
adj(sI − A) adj(sI − A)
(sI − A)−1 = = 2n (8.231)
det[(sI − A)]
(s − λ i ) ∏ i =1

n n

−1
∑ i a jk ∑ a* i jk

The jkth entry of the inverse matrix (sI − A) can be written as i =1


+ i =1
.
n s − λi s − λ*i
Therefore, the jkth entry of L−1[(sI − A)−1 ] is ∑i =1
i a jk e
λit * *
+ i a jk e λi t .

For the steady state response, the free decay solution dissipated to zero. We thus
have

t t ∞
Y(t ) =
∫ 0
e A (t − τ )F (τ) d τ =

−∞
e A (t − τ )F (τ) d τ =
∫ 0
e A τ F (t − τ) d τ (8.232)

8.4.4.2 Mean
First consider the total response. Similar to the proportionally damped systems, the
vector of mean μY(t) is given by

t
µ Y (t ) = P e Λt P −1Y(0) +
∫ Pe
0
Λ(τ)
P −1µ F (t − τ) d τ (8.233)

Or using the state transition matrix, we have

t
µ Y (t ) = e A t Y(0) +
∫e0

µ F (t − τ) d τ (8.234)
440 Random Vibration

8.4.4.3 Covariance
8.4.4.3.1  General Covariance
The covariance of the nonproportionally damped system is given by

σYY (t1, t2) = E[{Y(t1) − μY(t1)}{Y(t2) − μY(t2)}T] (8.235)

Substituting from Equation 8.233 yields

σ YY (t1 , t2 ) =

{ }{ }
t1 t2
 T

∫ 0
d τ1

0
d τ 2 E  P e Λτ1 P −1[F (t1 − τ1 ) − µ F (t1 − τ1 )] P e Λτ 2 P −1[F (t2 − τ 2 ) − µ F (t2 − τ 2 )] 

(8.236)

with the help of Equation 8.219, the above equation can be written as

t1 t2
σ YY (t1 , t2 ) =
∫0
d τ1

0
d τ 2{P e Λτ1 P −1σ FF (t1 − τ1 ,t2 − τ 2 )P − T e Λτ 2 P T} (8.237)

Otherwise, using the state transition matrix, we have

t1 t2
σ YY (t1 , t2 ) =
∫ 0
d τ1
∫ 0
d τ 2{e A τ1 σ FF (t1 − τ1 , t2 − τ 2 )e A τ 2 } (8.238)

To evaluate Equation 8.238, we need to calculate the covariance function of


σ FF (t1,t2), which might not always be easy. In the following examples, let us ­consider
a special case when the excitation is stationary and the responses reach steady states.

8.4.4.3.2  Steady State Covariance under White Noise Excitation


Based on Equation 8.232, for the steady state response with zero mean, the covari-
ance can be written as

σ YY (t1 , t2 ) = E[ Y(t1 ) Y(t2 )T ]


 ∞  ∞  
T

= E 
  ∫ 0
eA τ1
F (t1 − τ1 ) d τ1  
0 ∫
e F (t2 − τ 2 ) d τ 2  
A τ2

 
 ∞ ∞  (8.239)
∫ ∫
T
= E e A τ1 F (t1 − τ1 ){F (t2 − τ 2 )}T e A τ2 d τ1 d τ 2 
 0 0 
∞ ∞

∫ ∫e
T
= A τ1
E {F
F (t1 − τ1 )}{F (t2 − τ 2 )}T  e A τ2
d τ1 d τ 2
0 0
Random Vibration of MDOF Linear Systems 441

Thus far, Equations 8.239 and 8.238 are essentially the same formulae for the
integration limits in Equation 8.238 and can be expanded to infinity. Now, if the
force F(t) in F (t ) = Bf (t ) is n-dimensional independent Gaussian, then

E {F (t1 − τ1 )}{F (t2 − τ 2 )}T  = BDδ(τ)BT (8.240)

where

Dδ(τ) = diag(di)n×n δ(τ) (8.241)

is the covariance matrix of the forcing function.

Example 8.9

Suppose a 3-DOF system is excited by a forcing function f(t) = w(t) [g1 g2 g3]T,
where gi is the amplitude of force fi(t) applied on the ith mass, and w(t) is a white
noise process with PSD equal to S0.
The matrix Dδ(τ) can be written as diag(di)n×n δ(τ) and di = 2πS0 gi2.
When t1 = t2, substitution of Equation 8.240 into Equation 8.239 yields

∞ ∞

∫ ∫
T
σ YY (t1,t1) = e Aτ1BDδ(τ 2 − τ1)BTe A τ2
dτ1 dτ 2
0 0

∞ ∞

∫ ∫
T
= e Aτ1BD δ(τ 2 − τ1)BTe A τ2
dτ 2 dτ1 (8.242)
0 0


T
= e AτBDBTe A τ dτ = σ Y (τ )
0

For convenience, denote

T
e AτBDBTe A τ = G(τ ) (8.243)

Take the derivative of G(τ) with respect to τ,

dG(τ ) T T
= Ae AτBDBTe A τ + e AτB DBTe A τ A T
dτ (8.244)
= AG(τ ) + G(τ ) A T

The integral of Equation 8.244, with the limit from 0 to ∞, can be written as

∞ ∞ ∞
dG(τ )

∫0 dτ
dτ = G(∞) − G(0) = A
∫ 0
G(τ ) dτ +
∫ 0
G(τ) dτ A T (8.245)
442 Random Vibration

It is seen that

G(∞) = 0 (8.246)

and

G(0) = B DB T (8.247)

Therefore, we have

Aσ YY (0) + σ YY (0) A T = −BDB T (8.248)

Equation 8.248 can be seen as an algebraic equation with unknown of the


covariance matrix σYY(0). Because the input and output are stationary, we have
σYY(0) = σYY(t,t).

8.4.4.3.3  Computations of Covariance


To solve σYY(0) through Equation 8.248, consider the definition of the covariance,
namely,

 T xx T   σ xx σ xx 
σ YY(0) = E[ Y(t ) Y T (t )] = E  xx T =   (8.249)

 xx  T
xx   σ xx
 σ xx
 

Note that

σ xx = σ Txx (8.250)

σ xx = σ Txx
 (8.251)

and

σ xx
  = σ xx
T
 (8.252)

With the help of Equations 8.250 through 8.251, and substituting Equation 8.249
into Equation 8.248, these four partition submatrices may be written as

σ xx + σ xx
 = 0 (8.253)

From Equations 8.251 and 8.253, we can see that the diagonal entries of matrices
σ xx and σ xx
 are zeros.

−T
σ xx
  − σ xxK M
T
− σ xx C T M − T = 0 (8.254)
Random Vibration of MDOF Linear Systems 443

Taking the transpose of Equation 8.254 results in

M −1Kσ xx − M −1Cσ xx
 − σ xx
 = 0 (8.255)

In addition, we can further obtain

M −1Kσ xx + M −1Cσ xx


  + σ xx
−1
 KM + σ xx
  CM
−1
= M −1DM − T (8.256)

From Equations 8.250 and 8.253, we have

σ xx = − σ xx (8.257)

in which the σ xx is an antisymmetric matrix. Furthermore, from Equations 8.254 or


8.255 and Equation 8.256, we can write

σ x = σ x M TK − T − σ xx C TK − T (8.258)

or

σ xx = K −1Mσ xx −1
  − K Cσ xx (8.259)

Therefore, σxx can be obtained through σ xx   , σ xx and the corresponding matrix
productions.
It is noted that the mass, damping, and stiffness matrices are symmetric. With the
help of Equations 8.258 and 8.259, we can write

Cσ xx K + Kσ xx C + Mσ xx
  K − Kσ xx
M = 0 (8.260)

and premultiply and postmultiply M on both sides of Equation 8.256 yielding

Kσ xx M − Mσ xx K + Mσ xx
  C + Cσ xx
M = D (8.261)

Equations 8.260 and 8.261 have a total of (3n2 + n)/2 independent unknown vari-
ables and they provide (3n2 + n)/2 totally independent equations. Therefore, in the
2n × 2n matrix σYY(τ) is solvable.

Example 8.10

Consider a 2-DOF system shown in Figure 8.1, where f2(t) = 0.


In this case, f(t) = [f1(t)  0]T and

S 0
D=  0 
 0 0 
444 Random Vibration

Denote

σ σ12   0 − σ 23 
σ xx =  11  σ xx =  
 σ12 σ 22   σ 23 0 

 0 σ 23  σ σ 34 
 =   σ xx
 =  
33
σ xx
 − σ 23 0 
  σ
 34 σ 44 

Substitution of the above equations into Equations 8.260 and 8.261 results in
the following four equations

(k1c2 + k2c1)σ23 + m1c2σ33 + [m1(k1 + k2) − k2m1]σ34 − m2k2σ44 = 0

−m1k2σ23 + 2m1(c1 + c2)σ33 − 2m1c2σ34 = S0


[m1k2 − m2(k1 + k2)]σ23 − m1c2σ33 + [m1c2 + m2(c1 + c2)]σ34 − m2c2σ44 = 0


as well as

2m2k2σ23 − 2m2c2σ34 + 2m1c2σ44 = 0

Specifically, denote

k1 k2 c1 c2 m w
w1 = , w2 = , ζ1 = , ζ2 = , µ = 2 and r = 2
m1 m2 2 k1m1 2 k2m2 m1 w1

in which wi and ζi are used for mathematical convenience: they are not exactly
the natural frequency and damping ratio. The above equations can be replaced by

2rw1(rζ1 + ζ2)σ23 + r2σ33 + (μ − 1)r2σ34 − μr2σ44 = 0

−w1μr2σ23 + 2(ζ1 + ζ2rμ)σ33 − 2μrζ2σ34 = S0/2m2w2


−w1[1 + (μ − 1)r2]σ23 − 2ζ2rσ33 + 2[ζ1 + (1 + μ) rζ2]σ34 − 2μrζ2σ44 = 0


w1rσ23 − 2ζ2σ34 + 2ζ2σ44 = 0


We therefore have

σ23 = −A[2rw1ζ2(rζ1 + ζ2)]


{
σ 33 = − Aw12 µr 3ζ1 + ζ 2 1− 2r 2 + (1+ µ )r 4  + 4rζ1ζ 22 (1+ r 2 ) + 4r 2ζ12ζ 2 + 4r 2ζ32 }
Random Vibration of MDOF Linear Systems 445


{
σ 34 = − Ar 2w12 ζ 2 (1+ µ )r 2 − 1 + 4rζ1ζ 22 (1+ r 2 ) + 4ζ32 }
and
σ 44 = − Ar 2w12  rζ1 + ζ 2 (1+ µ )r 2 + 4rζ1ζ 22 + 4ζ32 

where

S0 /m1
A=
4w13B

and

B = µr 3ζ12 + µrζ 22 + 1− 2r 2 + (1+ µ )2 r 2  ζ1ζ 2 + 4r 2ζ1ζ 2 ζ12 + (1+ µ )2ζ 22  + 4rζ12ζ 22 1+ (1+ µ )r 2 

Substituting the above solutions into Equation 8.256, we can finally solve σ11,
σ12, and σ13 as given below:

σ11 = (σ11 + µσ11)/w12

σ12 = (σ 34 + µσ 44 )/w12 − 2ζ1σ 23 /w1

σ13 = [ σ 34 + (1+ µr 2 )/r 4 σ 44 ]/w12 + 2(ζ 2 − µζ 2 )σ 23 /(w1r )


8.4.4.3.4  Steady State Covariance under Color Noise Excitation


Next, consider the excitation to be stationary color noise, which can be seen as a
solution of Equation 7.100 in Chapter 7. In this case, for multiple input, the scalar
equation is extended to a vector equation given by

d
f(t ) = − η f (t ) + Γw(t ) (8.262)
dt

where

η = diag(ηi) and Γ = diag [(2ηi)1/2] (8.263)

Equations 8.262 together with the state equation of the motion given by Equation
8.219 can be combined as

 x (t )   0 I 0   x(t )   
      0
 x(t )  = −1
 −M K − M −1C M −1   x (t )  +  0  w(t ) (8.264)
 f (t )   0 0 −η   f (t )   Γ 
       

where I and 0 are, respectively, identity and null submatrices with proper dimensions.
446 Random Vibration

Denoting

 x(t )   0 I 0  0
       
Z(t ) =  x (t )  , A = −1
 −M K − M −1C M −1  and B =  0  (8.265)
 f (t )   0 0 −η  Γ 
     

we have the state equation written as


 
Z (t ) = AZ(t ) + B w(t ) (8.266)

The covariance matrix in this circumstance is given by

 xx T xx T xf T 
 T 
σ ZZ = E[Z(t )Z (t )] = E  xx
T
  T
xx  T
xf  (8.267)
 fx T fx T ff T 
 

Because w(t) is white noise, we can directly use the result obtained in the above
subsection, that is,
   
Aσ ZZ + σ YY A T = −B DB T (8.268)


where D = diag( S0 i ), and S 0i is the corresponding magnitude of PSD of white noise
applied at the ith location.

8.4.4.4 Brief Summary
Nonproportional damping is caused by the different distributions of damping and
stiffness, which is common in practical engineering applications. In this circum-
stance, certain or total modal shapes will become complex-valued. In addition, we
will witness nonzero modal energy transfer ratios.
Nonproportionally damped structures will not have principal axes, which may
alter the peak responses of a structure under seismic ground excitations and, in many
cases, the peak responses will be enlarged (Liang and Lee 1998). In this subsection,
how to account for nonproportionally damped systems is discussed. The basic idea is
to use the state space and the state matrix (Gupta and Jaw 1986; Song et al. 2007a,b).

8.5 Modal Combination
8.5.1 Real Valued Mode Shape
8.5.1.1 Approximation of Real Valued Mode Shape
The following computations of variance and RMS are based on modal combinations.
The modes can be either normal or complex. For normal modes, ϕji represents the jth
element of the ith mode shape.
Random Vibration of MDOF Linear Systems 447

For complex modes,

φ ji = (−1)δji p ji (8.269)

where

 π π
 0, − < θ ji ≤
 2 2
δ ji =  (8.270)
 1, π 3π
< θ ji ≤
 2 2

As a result, the newly simplified ith mode shape vector can be written as

φ 
 1i 
φ 
φ i =  2i  (8.271)

 φni 
 

The newly simplified mode shape matrix can be written as

Φ = [ϕ1  ϕ2  ...  ϕn] (8.272)

8.5.1.2 Linear Dependency and Representation


8.5.1.2.1  Linear Independent
Using the above approximation will result in n mode shape functions. If the MDOF
system is proportionally damped, then the corresponding mode shape matrix is of
full rank. This is represented as

rank(Φ) = n (8.273)

It is noted that, when using the approximation as described in Equation 8.269,


the full rank condition represented in Equation 8.273 is not necessarily satisfied.
This is practically true when a complex mode shape is simplified into a real valued
approximation. In this situation, the full rank condition should be checked to see if
Equation 8.273 holds.

8.5.1.2.2  Full Rank Representation


Suppose Equation 8.273 is satisfied, then all the column vector ϕi will be linear
independent and the inverse of matrix Φ will exists. Specifically, there will always
be a matrix Ψ such that

Ψ = Φ−1 (8.274)
448 Random Vibration

In this case, any n × 1 vector can be represented by matrix Φ. Now suppose there
is a vector y, such that
y 
 1
y 
y =  2  (8.275)

 yn 
 

Here, y can be written as

y = a1 ϕ1 + a2 ϕ2... + an ϕn (8.276)

In the above equation, instances of ai are scalars to be determined. Thus, to use


this mode shape vector to represent Y, all values of scalar ai must be determined.
Namely,
a 
 1
 a2 
  = Ψ y (8.277)

 an 
 

8.5.1.2.3  Truncated Representation


In special cases, we may not have n mode shapes. In that event, only m modes exist
and m < n, then Equation 8.276 will become (Liang et al. 2012)

y ≈ a1ϕ1 + a2ϕ2 ... + amϕm (8.278)

or

y ≈ΦC aC (8.279)

where the truncated mode shape matrix is

ΦC = [ϕ1, ϕ2 ... ϕm] (8.280)

and

a 
 1 
a 
a C =  2  (8.281)

 am 
 
Random Vibration of MDOF Linear Systems 449

In the above case, all the parameters ai can be determined through a least square
approach. Explicitly, denote

e = y − a1ϕ1 + a2ϕ2 ... + amϕm (8.282)

and let

eTe = min (8.283)

To achieve Equation 8.280, consider the equation

∂e T e
= 0, i = 1,  m (8.284)
∂ai

Equation 8.281 will result in

( )
−1
a C = ΦCT ΦC ΦCT y (8.285)

8.5.2 Numerical Characteristics
In the following examples, general references can be found in the book of Wirsching
et al. (1995).

8.5.2.1 Variance
Through modal analysis, the variance of randomly forced response can be estimated by

 S 
σ 2X j = D(x j ) = D 
 ∑ φ q 
i =1
ji i (8.286)

In this instance, S is the number of truncated modes.

8.5.2.2 Root Mean Square


In the following, specific formulae will be listed to estimate the root mean square
value of the response taken from a signal location j, without detailed derivations.

8.5.2.2.1  Square Root of the Sum of Square


Suppose all modal response are widely separated. That is, they do not interfere with
each other, then (Singh 1980)

σXj = ∑ (φ q )
i =1
ji i
2
(8.287)

450 Random Vibration

or

σXj = ∑ (φ ) σ
i =1
ji
2 2
i (8.288)

In this instance, σ i2 is the mean square value of the ith modal response qi(t).

8.5.2.2.2  Absolute Method


Let us assume that all modal responses have a correlation coefficient represented by

ρ = ±1 (8.289)

Equation 8.279 is used to imply the case of linear dependency. Here, qi(t1) and
qi(t2) are correlated, meaning that they vary with the same pattern.

σXj = ∑φ σ
i =1
ji i (8.290)

Equation 8.290 is the equation of standard deviation, where σi is the standard


deviation of the ith modal response.

8.5.2.2.3  Naval Research Laboratory Method


Assuming the first modal response can be correlated with the square root of the sum
of square (SRSS) of the rest of the modes, then

σ X j = φ11σ1 + ∑ (φ σ )
i=2
ji i
2
(8.291)

8.5.2.2.4  Closed-Space Modes


In certain systems, some modes are in a relatively closed space. These are referred to
as closed-space modes. The difference between closed-space modes and modes that
are separated is conceptually shown in Figure 8.6. Consider the case where there is
one set of closed-space modes. This case can be represented in the equation:

Closed-space modes

FIGURE 8.6  Closed space modes.


Random Vibration of MDOF Linear Systems 451

Z n− Z

σXj = ∑ j =1
φ ji σ i + ∑ (φ σ )
j = Z +1
ji i
2
(8.292)

Here, Z is the total number of modes deemed to be close.

8.5.2.2.5  Modified Square Root of the Sum of Square Method


If more than one group of closed-space modes exists, then the following equation is
used:

2
H  p  n− S

σXj = ∑∑
p=1

 i =1

φ ji σ i + ∑
 i = S +1
(φ ji σ i )2 (8.293)

In this instance, H is the number of sets of equal or close eigenvalues and p is the
number of close modes in a given set (Richard et al. 1988).

8.5.3 Combined Quadratic Combination


If the cross-effect among modes needs to be taken into account, then the method of
combined quadratic combination (CQC) should be used (Der Kiureghian 1980, 1981;
Sinha and Igusa 1995):

n n

σXj = ∑ ∑ (σ ρ σ
i =1 k =1
ji ik kj ) (8.294)

For Equation 8.294,

σji = ϕjiσI (8.295)

and

8 ζiζ k (ζi + rζ k )r 3/2


ρik =

(1 − r 2 )2 + 4ζiζ k r (1 + r 2 ) + 4 ζi2 + ζ2k r 2 ( ) (8.296)

In the event that the ith and kth modes are normal, then

ω nk
r= , k > i (8.297)
ω ni
452 Random Vibration

For the case of complex modes,


ωk
r= , k > i (8.298)
ωi

Problems
1. A dynamic absorber can be used to reduce the vibrations of an SDOF sys-
tem subjected to sinusoidal excitation f(t) = F0cos(ωt). Shown in Figure
P8.1, the blue m–k system is the primary SDOF system and the red ma and
ka are additional mass and stiffness. Therefore, with the additional mass,
the system becomes 2-DOF.
Denote ωp = (k/m)1/2, ωa = (ka /ma)1/2, and μ = ma /m.
a. Show that the dynamic magnification factor βdyn for the primary dis-
placement can be written as
ω2
1−
Xk ω a2
β dyn = =
F0   ω2  ω2   ω2   ω2 
1 − µ  2a  − 2  1 − 2  − µ  2a 
  ω p  ω p   ω p   ωp 

b. Suppose m = 6000 kg and the resonant driving frequency = 60 Hz. The
mass of the absorber is chosen to be 1200 kg. Determine the range of
frequencies within which the displacement x(t) is less with the addi-
tional mass than without the additional mass.
2. Given
4   3 −2 0 0 
   
M= 
4.6  ,C =  −2 4 −2  , and
 3   −2 5 −3 
 5   −3 5 
 
 500 −200 
 
− −400
K =  200 600 
 −400 550 −150 
 −150 5000 

x(t), f(t)
m

k/2 ka k/2
xa
ma

Figure P8.1
Random Vibration of MDOF Linear Systems 453

a. Check if the system is proportionally damped and calculate the damp-


ing ratios and natural frequencies
b. Generate a proportionally damped system with damping matrix Cnew
such that it can have the identical natural frequencies and damping ratio
as the system by using the eigenvector matrix of K matrix
c. Suppose this system is excited by ground acceleration 10 sin(ω1t) + 2
sin(ω2t + 0.5) and have the displacement resonant at the first and the
acceleration second natural frequencies, what will these driving fre-
quencies be?
d. To reduce the response of displacement of x1 of 30%, how should you
increase the damping matrix by adding ΔC, namely, find c in

0 0 0 0
 
C= 0 0 0 0
0 0 0 0
0 0 0 c 

3. With the mass, damping, and stiffness matrices given by Problem 2, using
MATLAB to generate 30 random ground motions: t = (0:1:1,999) * 0.01;
xga = randn(2000,30) with zero initial condition. If, on average, the dis-
placement of x4 is needed to reduce 30%, how can you choose additional
stiffness ΔK, namely, find k in

0 0 0 0
 
∆K =  0 0 0 0
0 0 0 0
0 0 0 k 

4. A 2DOF system is shown in Figure P8.2 with a force being white noise
process applied on mass 1. Knowing m1 = m2 = 1, k1 = k2 = 100, c1 = c2 = 2,
find the equation of motion, the transfer function; using the normal mode
method to find the mean and covariance of the displacement.
5. The system shown in Figure P8.2 is excited by ground white noise motion,
where m1 = m2 = 1, k1 = k2 = 100, c1 = 18 and c2 = 0. Find the transfer

k1 k2

m1 m2

c1 c2

Figure P8.2
454 Random Vibration

function. Calculate the mean and covariance by using the complex mode
method.
6. Derive a general formula of mean value for a nonproportional system under
excitation of random initial conditions only.
7. In the system given by Figure P8.3; c1 and c2 are, respectively, zero k1 and
k2 are, respectively, 500 and 200; m1 = 1500. m2 = 50 + Δm where Δm is a
random variable with the following distribution:

Δm 0 30 60 200
p 1/2 2/6 1/6 1/6

Suppose the ground excitation is white noise with PSD = 1, find the dis-
tribution of the RMS absolute acceleration of m2.
8. Prove that for proportionally damped systems with white noise excitations,
the first entry of σXX(t,t) can be calculated as


φ11
4
D1 + φ11φ21D2
2 2

ω d 1m12
2 ∫ 0
e −2ζ1ω n1 (t − τ ) sin 2 ω d 1 (t − τ)dτ


2φ11φ21D1 + 2φ11φ12φ21φ22 D2
2 2
+
ω d 1ω d 2m1m2 ∫ 0
e − (ζ1ω n1 +ζ2ω n 2 )(t − τ ) sin ω d 1 (t − τ)sin ω d 2 (t − τ)dτ


φ12
4
D1 + φ12 φ22 D2
2 2


+
ω d 2 m2
2 2 ∫ 0
e −2ζ2ω n 2 (t − τ )sin 2ω d 2 (t − τ)dτ

9. For a system given by Figure P8.3 with


a. Using the method of SRSS to calculate the variance of x1
b. Using the method of CQC to calculate the variance of x1
c. Using the method of closed-space modes to calculate the variance of x1
d. Compare and explain your results

x2
m2

k2 c2

x1
m1

k1 c1

xg

Figure P8.3
Random Vibration of MDOF Linear Systems 455

10. An MDOF system with M = diag ([1  1.5  2])

 2 −1 0   150 −50 0 
C =  −1 3

−2  and K =

 −50 100

−50 
 0 −2 2   0 −50 70 


is excited by forcing function

 0 
 
f(t ) =  2  w(t )
 1.5 
 

where w(t) is a white noise with PSD = 10. Calculate the covariance matrix
σYY (0).
Section IV
Applications and
Further Discussions
9 Inverse Problems

In this chapter and Chapters 10 and 11, we present several topics by applying the
knowledge gained previously. These topics do not cover the total applications of
random process and random vibrations. They may be considered as “practical”
applications and utilizations of the concept of the random process. We will also
discuss methods to handle engineering problems that are difficult to be treated as
closed-form mathematical models and/or difficult to be approximated as stationary
processes.
Inverse problems are a relatively broad topic. The field of inverse problems was
first discovered and introduced by Viktor Ambartsumian (1908–1996). The inverse
problem is a general framework used to convert measured data into information
about a physical object or system, which has broad applications. One of the difficul-
ties in solution of inverse problems is due to the existence of measurement noises.
In other words, when working with inverse problems, both random variables as well
as random processes must be considered. In this chapter, inverse problems related to
vibration systems will be briefly outlined. Additionally, key issues in system identifi-
cations as well as vibration testing will be discussed. Special emphasis will be given
to measurement uncertainties.
For more detailed description of inverse problems, readers may consult the works
of Chadan and Sabatier (1977), Press et al. (2007), and Aster et al. (2012).

9.1 Introduction to Inverse Problems


9.1.1 Concept of Inverse Engineering
In Chapter 6, inverse problems as related to vibration systems were introduced. In
engineering applications, inverse problems can be rather complex. It is necessary to
identify projects with inverse problems in order to avoid treating forward problems
as inverse problems. However, in certain applications, it may be necessary to solve
inverse problems.

9.1.1.1 Key Issues
The key issues involved in inverse problems are listed as follows.

9.1.1.1.1   Modeling  Modeling as a fundamental approach is often a starting point.


Generally, the following should be considered:

1. The type of models—Is it static or dynamic? Is it linear or nonlinear?


2. The number of degrees of freedom (DOFs) or order of models.
3. The type of damping—Is it proportionally or nonproportionally damped?

459
460 Random Vibration

9.1.1.1.2  Boundary Conditions


Boundary conditions are key issues, yet are difficult to be accurately defined.
Identical models with different boundary conditions can have very different modal
parameters and physical responses. The types of boundary conditions and the time
varying and nonlinearity of the conditions need to be identified.

9.1.1.1.3  Testing
To solve an inverse problem, conducting vibration testing is often needed. In general,
vibration testing includes (1) actuation and measurement, (2) S/N ratios, (3) data
management and analysis, and (4) test repeatability and reliability.

9.1.1.1.4  Regression and Realization


Quite often, parameters are obtained through regression procedures for analytical
and empirical models. When using regression procedures, the following must be
considered: (1) the criteria of regression or realizations, (2) computational algo-
rithms, (3) the stability and robustness of the identified model, and (4) error and
sensitivity analysis involved in the regression.

9.1.1.2 Error
In solving inverse problems, a range of errors are inevitable. It is essential that these
errors be reduced. To rigorously define errors is difficult. Therefore, errors can
approximately be classified as follows:

1. Bias or systematic errors


Repeated errors regardless of the test conditions
Errors due to incorrect calibrations
Errors due to inaccurate modeling
Errors due to loading nonlinearity
Errors due to insufficient measurement resolution
Errors due to limited measurement dynamic range
Errors due to insufficient testing duration or insufficient frequency
response
2. Precision or random errors
Errors due to random ambient conditions
Human errors
Errors due to power leakage
Errors due to insufficient analog-to-digital convention (A/D)
Errors due to environmental noises
3. Accident or illegitimate errors
Incorrect modeling
Mistakes
Unexpected accident
Chaotic measurement conditions
Incorrect numerical simulations
Inverse Problems 461

The action needed to improve the procedure is dependent on the type of error that
occurred. In the following, the nature of errors and the corresponding improvement
will be discussed as it mainly relates to random problems.

9.1.1.3 Applications
Inverse problems have multiple applications such as

1. System identification
2. Trouble shooting
3. Design modification
4. Model confirmation

9.1.2 Issues of Inverse Problems


We now briefly consider important issues of inverse problems.

9.1.2.1 Modeling
Quite often, for one natural phenomenon, there may be more than one model. For
example, suppose that 50 random data values y are measured and indexed from 1 to
50 by the variable x. It can be assumed that the relationship between y and x is linear
or of first order, i.e., y = ax + b. Through the first-order regression, the parameters a
and b can be determined. The quadratic form or second order, i.e., y = ax2 + bx + c,
can also be used, finding the parameters a, b, and c. This can be repeated for the third
order, fourth order, and so on.
In Figure 9.1a, plots of the original model and several regressed models are shown,
including the first-order (linear) regression, the second-order (quadratic) regression,
and the third-order regression. While the models are dissimilar, all of them regressed
using the same original data. In Figure 9.1a, it is seen that the second- and third-
order regressions are rather close. This, however, does not necessarily mean that
when the regression order is chosen to be sufficiently high, the models will converge.

2.5 3
2 Original data Original data
First-order regression First-order regression
1.5 Second-order regression 2
Third-order regression Second-order regression
1
1 Fourth-order regression
0.5
0 0
–0.5
–1 –1
–1.5
–2
–2
–2.5 –3
0 5 10 15 20 25 30 35 40 45 50 0 5 10 15 20 25 30 35 40 45 50

(a) (b)

Figure 9.1  Directly measured data and regressed models. (a) Regression including third-
order approach. (b) Regression including fourth-order approach.
462 Random Vibration

In Figure 9.1b, the same original data are shown, with the first- and second-order
regressions. However, instead of the third-order regressed model, the fourth-order
model is plotted. Instead of the fourth-order regressed model being rather close to
the second-order regression model, it is shown to be significantly different.
The above example indicates that one should be very careful to use an a priori model.

9.1.2.1.1  Physical Model


There are several kinds of models. The most commonly used model is the physical
model, which is established based on mechanical analysis.
A static model, which can be either linear or nonlinear, is often described by
algebraic equations, which is time invariable.
A dynamic model, which can also be either linear or nonlinear, is commonly
described by differential equations. For example, the M-C-K model, with either
lumped or finite-element parameters, is a second-order ordinary differential equa-
tion in matrix form.
The first inverse problem is completely solved after the physical model is confirmed.

9.1.2.1.2  Response Model


The second most commonly used model is the response model. The direct response,
by definition, consists of the response time histories. As described in previous chapters
(started from Section 4.2.2.1.2), a response is resulted from a convolution of an impulse
response function and a forcing function. Therefore, the response time history will con-
tain information of both the dynamic behavior of a given vibration system and infor-
mation of excitations. However, because the amount of data of response time histories
is often overwhelmingly large, this model is easier to measure but difficult to execute.
When excitation is a random process, the response model is also a random pro-
cess. When the excitation is deterministic, mathematically, the response is often
modeled as a deterministic process. Furthermore, due to various levels of noise con-
taminations, the time history will likely be a random process as well.

9.1.2.1.3  Modal Model


Chapter 8 has shown the transfer of the M-C-K model into the modal domain using
eigenvector matrices, and the use of the set of natural frequencies, damping ratios,
and mode shapes to represent a multidegree-of-freedom (MDOF) system (the modal
model). There are multiple advantages in using the modal model. The amount of
information in the modal model, compared to the physical model, particularly the
response model, is greatly reduced. Additionally, in computing the possible response
of the MDOF system, the modal model typically provides a more accurate result.
Also, in many cases, for an existing MDOF vibration system, an exact physical
model will not exist. Consequently, through modal testing, a significantly accurate
modal model can be achieved.
The modal model can be extracted from a physical model; however, a physical
model often cannot be obtained from the modal model. In other words, in comparison
to the physical and response models, the modal model is a dimension-reducing model.
Generally speaking, the modal model is only practical for a linear system, which can
be either in normal mode or complex mode, dependent upon the natural of damping.
Inverse Problems 463

9.1.2.1.4  Input–Output (Force–Response) Model


In addition to the above-mentioned three basic models, there also exists the input–
output model. From the viewpoint of system theory, a response model is an output
model. Practically, due to inevitable noises, solely using the response to precisely
determine the modal or physical parameters is a challenging task. By also using
the input, the transfer function can be measured, which, in turn, provides a greater
understanding of the dynamic behaviors of a system. Lastly, it is noted that the
input–output model can be either static or dynamic.

9.1.2.1.5  Geometric Model


The geometric model used to describe the topology configuration of an engineer-
ing object mainly consists of dimensions and drawings, i.e., an AutoCAD drawing.
In some instances, the motion mechanisms of the geometric model may need to be
specified.

9.1.2.1.6  Statistical Models


A statistical model is established through statistical survey. A commonly used model
consists of simple statistical parameters, such as numerical characteristics (mean
values, variance, etc.) and distribution functions. Neural networks and fuzzy logic
are other examples of the statistical model.

9.1.2.1.7  Other Models


In addition to the above models, other types of models exist, such as the pseudo-
dynamic model and the real-time hybrid model, among others.

9.1.2.2 Identification, Linear System


The first inverse problem is to identify the best-fit model. For linear MDOF vibra-
tion systems, system identification can be either in the physical or the modal model.
The targets include (1) deterministic parameters, such as M, C, and K, modal param-
eters, transfer functions, and coherence functions; (2) degree of linearity of the system;
(3) order of the system; and (4) stability of the system, such as time invariant or variant.

9.1.2.2.1  Input Identifications


The second inverse problem is to identify the best-fit input, such as forcing functions,
ground excitations in addition to possible noises.

9.1.2.2.2  System Monitoring


Three major applications, (1) system health monitoring, (2) nondestructive evalua-
tion, and (3) damage assessment of system monitoring, have received more attention
in the past decades.

9.1.2.3 Identification, General System


A system can often be nonlinear and/or time variant. Identification of such systems
may consist of (1) linearity range, (2) similitude analysis such as dimensionless anal-
ysis and scaling effect, and (3) nonlinear models such as nonlinear stiffness, nonlin-
ear damping, and geometric nonlinearity.
464 Random Vibration

9.1.2.3.1  Nonlinear Dynamics


Nonlinear dynamics, specifically random dynamics, is one of the most difficult fields to handle. Nevertheless, it is often inevitable. Linearization is commonly used in nonlinear dynamic analysis. Separating the total duration of a nonstationary process into a limited number of "relatively" stationary pieces is another commonly used measure. This separation is not limited to the time domain: a nonlinear deformation can also be divided into piecewise linear ranges.

9.1.2.3.2  Material Identification


One more inverse problem is that of material identification. In this case, the strength of materials is measured and identified to determine elastic or inelastic stress–strain relations, fatigue and aging problems, and surface and contact mechanics, as well as other properties. For example, properties such as chemical components, chemical functions, and conditions of rust and corrosion can be determined.

9.1.2.4 Simulations
Simulation technologies are also based on inverse problems, although a simulation is
generally considered a forward problem.
Numerical simulations and physical simulations are two basic approaches. In gen-
eral, the success of a simulation will not depend solely upon the stability and effec-
tiveness of computational algorithms and the precision of test apparatus; it will also
depend on the accuracy of the models.

9.1.2.5 Practical Considerations
In engineering applications, the following issues are important.

9.1.2.5.1  Accuracy of Models


Accuracy of models is perhaps the most important issue in modeling. It is not only the starting point but also a critical task throughout the entire problem-solving project. Very often, an iterative, trial-and-error approach is needed. A criterion for judging whether the iteration converges, and whether it converges to the right point, is often needed; this criterion should be independent of the model being established.

9.1.2.5.2  Measurement S/N Ratio


In vibration testing, having a sufficiently high S/N ratio is often an unavoidable step
in solving inverse problems. Correct modeling, effective testing and precise mea-
surement, sophisticated signal processing, and accurate parameter extraction all con-
tribute to an increased S/N ratio.

9.1.2.5.3  Randomness and Uncertainty


Measurements must be designed to manage the randomness of data and the uncertainty of parameter identification; this is another key issue in solving inverse problems. Adequate sample sizes and correct statistical surveys are necessary to reduce this uncertainty, and methods to judge the randomness and uncertainty are often needed.
In the following, the above issues will be considered in more detail.

9.1.3 The First Inverse Problem of Dynamic Systems


The measurement of transfer functions, the key issue of identification for dynamic
systems, is examined first.

9.1.3.1 General Description
The first inverse problem is often solved through a two-step approach. The first step
is to obtain the transfer function. The second step is to extract the modal or physical
model through the transfer function.
Figure 9.2 repeats the relationship among input, system, and output, the fundamental picture for the inverse problem of systems.
The transfer function can be calculated in one of two ways. First, it can be obtained as the ratio of the Fourier transforms of output and input:
of Fourier transforms of output and input:

$$H(\omega) = \frac{X(\omega)}{F(\omega)} = \frac{\text{Fourier transform of output}}{\text{Fourier transform of input}} \tag{9.1}$$

As noted earlier, the transfer functions can also be obtained through the power
spectral density functions:

$$H(\omega) = \frac{X(\omega)X(\omega)^*}{F(\omega)X(\omega)^*} = \frac{S_X(\omega)}{S_{XF}(\omega)} = \frac{\text{auto PSD of output}}{\text{cross PSD of input--output}} \tag{9.2}$$

and

$$H(\omega) = \frac{X(\omega)F(\omega)^*}{F(\omega)F(\omega)^*} = \frac{S_{FX}(\omega)}{S_F(\omega)} = \frac{\text{cross PSD of output--input}}{\text{auto PSD of input}} \tag{9.3}$$

Here, the uppercase letters represent the Fourier transforms. The temporal pairs
of the Fourier transforms are not necessarily random.
Equation 9.1, the definition of the transfer function, is seldom used in practice, especially in the case of random excitations. The more practical choice is the method through power spectral density functions, such as H1 and H2, since it provides more accurate and stable estimations.
Extraction of modal and/or physical parameters requires certain in-depth knowledge,
which is beyond the scope of random process and vibration. Interested readers may

Figure 9.2  System and input–output: input f(t), F(ω); system h(t), H(ω); output x(t), X(ω).



consult the work of Ewins (2000) or He and Fu (2004) for more detailed descriptions. In this chapter, a list of fundamental formulas in the frequency and time domains will be provided. The most commonly used method for estimating transfer functions is summarized first.

9.1.3.2 Impulse Response
Consider the impulse response for the single degree-of-freedom (SDOF) system:

H(ω) = F [h(t)] (9.4)

Now consider the MDOF system with an input at the jth location that is measured
at the ith location:

Hij(ω) = F [hij(t)] (9.5)

Note that the impulse response function is a normalized response x(t) with respect to the amplitude of the impact force, given by

$$h(t) = \frac{x(t)}{f_{\max}} \tag{9.6}$$

9.1.3.3 Sinusoidal Response
For sinusoidal excitation, f(ω,t) = f₀ sin(ωt) with sweeping frequency ω, the transfer function for the SDOF system is given by

$$H(\omega) = \frac{x(\omega,t)}{f(\omega,t)} \tag{9.7}$$

For the MDOF system, the transfer function is denoted by

$$H_{ij}(\omega) = \frac{x_i(\omega,t)}{f_j(\omega,t)} \tag{9.8}$$

9.1.3.4 Random Response
Again, considering random excitations, repeat the process discussed previously as
follows (see Equation 4.35) in order to practically measure the transfer functions.

9.1.3.4.1  Fourier Transform of Output and Input

X(ω,T) = F [xT (t)] (9.9)

and

F(ω,T) = F [f T (t)] (9.10)

In this instance, the lowercase letters represent the temporal functions or mea-
sured values in the physical domain. Note that these temporal functions are taken
from random sets; once measured, they become deterministic “realizations.”

9.1.3.4.2  Power Spectral Density Functions


The auto- and cross-power spectral density functions are repeated as follows:

$$S_{FX}(\omega) = \sum_{k=1}^{n} X_k(\omega,T)F_k(\omega,T)^* \tag{9.11}$$

and

$$S_{XF}(\omega) = \sum_{k=1}^{n} F_k(\omega,T)X_k(\omega,T)^* \tag{9.12}$$

Also

$$S_X(\omega) = \sum_{k=1}^{n} \left|X_k(\omega,T)\right|^2 \tag{9.13}$$

and furthermore

$$S_F(\omega) = \sum_{k=1}^{n} \left|F_k(\omega,T)\right|^2 \tag{9.14}$$

9.1.3.4.3  Transfer Functions


The transfer functions are estimated as

$$H_1(\omega,T) = \frac{\displaystyle\sum_{k=1}^{n} X_k(\omega,T)F_k(\omega,T)^*}{\displaystyle\sum_{k=1}^{n} \left|F_k(\omega,T)\right|^2} \tag{9.15}$$

when the output contains a high level of noise and as

$$H_2(\omega,T) = \frac{\displaystyle\sum_{k=1}^{n} \left|X_k(\omega,T)\right|^2}{\displaystyle\sum_{k=1}^{n} F_k(\omega,T)X_k(\omega,T)^*} \tag{9.16}$$

when the input contains a high level of noise.



In both cases, n should be a fairly large number to effectively reduce the noise
contaminations. For most cases,

n > 30 (9.17)

suffices.

9.1.3.4.4  Coherence Function


The coherence function is used to reject "noise" modes, with the preset criteria mentioned previously.

$$\gamma_{FX}^2(\omega) = \frac{H_1(\omega,T)}{H_2(\omega,T)} \tag{9.18}$$

In the following, for the sake of simplicity, we will omit the notation T. However, it is noted that, practically, Equation 9.18 is used in transfer function measurement.
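As an illustration of Equations 9.15, 9.16, and 9.18, the following sketch estimates H1, H2, and their ratio from n simulated input–output records. The SDOF parameters, noise level, and record sizes are illustrative assumptions, not values from the text.

```python
# Sketch: H1/H2 transfer-function estimation from n measured records
# (Equations 9.15, 9.16, and 9.18). The simulated SDOF system and the
# output-noise level are assumptions for demonstration only.
import numpy as np

rng = np.random.default_rng(0)
n_rec, N, dt = 50, 1024, 0.01            # records, samples per record, time step
freq = np.fft.rfftfreq(N, dt)

m, c, k = 1.0, 0.4, 400.0                # assumed SDOF parameters
H_true = 1.0 / (k - m * (2 * np.pi * freq) ** 2 + 1j * c * 2 * np.pi * freq)

S_xf = np.zeros(len(freq), complex)      # accumulator for sum of X F*
S_ff = np.zeros(len(freq))               # accumulator for sum of |F|^2
S_xx = np.zeros(len(freq))               # accumulator for sum of |X|^2

for _ in range(n_rec):
    F = np.fft.rfft(rng.standard_normal(N))            # random input record
    X = H_true * F                                     # ideal response
    X = X + 0.1 * np.fft.rfft(rng.standard_normal(N))  # output noise
    S_xf += X * np.conj(F)
    S_ff += np.abs(F) ** 2
    S_xx += np.abs(X) ** 2

H1 = S_xf / S_ff             # Equation 9.15: preferred when the output is noisy
H2 = S_xx / np.conj(S_xf)    # Equation 9.16: preferred when the input is noisy
coherence = np.abs(H1 / H2)  # Equation 9.18: drops below 1 where noise dominates
```

Frequency lines where the coherence falls well below unity are dominated by noise and may be rejected, consistent with the criterion above.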

9.1.3.5 Modal Model
Modal analysis refers to the extraction of the modal parameters from the transfer function once it has been measured.

9.1.3.5.1  Frequency Domain Method


In the frequency domain, the transfer function Hij(ω) is directly used to provide infor-
mation about natural frequencies and damping ratios through the relationships of the
following curves: amplitude/phase vs. frequencies, real/imaginary vs. frequencies,
and real vs. imaginary. This is possible because the transfer function Hij(ω) can be
written as a function of the corresponding natural frequencies and damping ratios,
namely,

Hij(ω) = f(ωi, ζi) (9.19)

Here, f(.) stands for a function of (.), and ωi and ζi are, respectively, the ith natural frequency and damping ratio of the system. The ith mode shape ϕi can be determined from the complex-valued amplitudes of Hi(ωi), where

$$\mathbf{H}_i(\omega_i) = \begin{bmatrix} H_{1i}(\omega_i) \\ H_{2i}(\omega_i) \\ \vdots \\ H_{ni}(\omega_i) \end{bmatrix} \tag{9.20}$$

Furthermore,

ϕi = aiHi(ωi) (9.21)

where ai is a proportional factor.



9.1.3.5.2  Time Domain Method


In an ideal case, if the measurements of the acceleration, velocity, and displacement vectors (ẍ, ẋ, and x) could be determined, then the state matrix would exist:

$$\mathbf{A} = \tilde{\mathbf{Y}}\,\mathbf{Y}^{-1} \tag{9.22}$$

In the above equation,

$$\tilde{\mathbf{Y}} = \left[\begin{bmatrix} \ddot{x}(t_1) \\ \dot{x}(t_1)\end{bmatrix}\;\begin{bmatrix} \ddot{x}(t_2) \\ \dot{x}(t_2)\end{bmatrix}\;\cdots\;\begin{bmatrix} \ddot{x}(t_{2n}) \\ \dot{x}(t_{2n})\end{bmatrix}\right] \tag{9.23}$$

and

$$\mathbf{Y} = \left[\begin{bmatrix} \dot{x}(t_1) \\ x(t_1)\end{bmatrix}\;\begin{bmatrix} \dot{x}(t_2) \\ x(t_2)\end{bmatrix}\;\cdots\;\begin{bmatrix} \dot{x}(t_{2n}) \\ x(t_{2n})\end{bmatrix}\right] \tag{9.24}$$

where x(t₁) is the displacement measured at the first time point, etc.
For normal modes, the generalized damping and stiffness matrices M⁻¹C and M⁻¹K can be found from the state matrix. Furthermore, the natural frequencies, damping ratios, and mode shapes can also be obtained.
Note that the mass matrix M remains unknown. Having the measured generalized damping and stiffness matrices does not mean that the physical parameters can be obtained.
For complex modes, the eigendecomposition can be directly carried out from the
state matrix and the modal parameters can be obtained. Recall the eigenproblem:

λiPi = APi (9.25)

From the eigenvalue λi, the natural frequency is

ωi = │λi│,  i = 1,…, n (9.26)

and the damping ratio is

ζi = −Re(λi)/ωi (9.27)

From the eigenvector Pi, the mode shape pi can be calculated from

$$\mathbf{P}_i = \begin{bmatrix} \lambda_i \mathbf{p}_i \\ \mathbf{p}_i \end{bmatrix} \tag{9.28}$$
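As a numeric sketch of Equations 9.25 through 9.27, the following builds the state matrix of an assumed 2-DOF system and recovers natural frequencies and damping ratios from its eigenvalues; the M, C, and K values are illustrative assumptions, not from the text.

```python
# Sketch: modal parameters from a state matrix A (Equations 9.25 to 9.27).
import numpy as np

M = np.diag([1.0, 2.0])                            # assumed mass matrix
K = np.array([[300.0, -100.0], [-100.0, 100.0]])   # assumed stiffness matrix
C = 0.01 * K                                       # light proportional damping

Minv = np.linalg.inv(M)
A = np.block([[-Minv @ C, -Minv @ K],
              [np.eye(2), np.zeros((2, 2))]])      # state matrix for [xdot; x]

lam, P = np.linalg.eig(A)                          # Equation 9.25
lam = lam[np.imag(lam) > 0]                        # one of each conjugate pair
omega = np.abs(lam)                                # Equation 9.26: frequencies
zeta = -np.real(lam) / omega                       # Equation 9.27: damping
print(omega, zeta)
```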

Practically, it is not always possible to simultaneously measure the acceleration, velocity, and displacement. Unless the signals at all n locations can be measured concurrently, Equation 9.22 exists only mathematically.
The signal can be measured at a limited number of locations, represented by

$$\mathbf{z}(t_1) = \begin{bmatrix} z_1(t_1) \\ z_2(t_1) \\ \vdots \\ z_m(t_1) \end{bmatrix}_{m\times 1} \tag{9.29}$$

where m is the total number of measurement locations, and zi(t) is a generic term for displacement, velocity, or acceleration. Note that lowercase letters denote measured values, including those measured from random signals, while uppercase letters continue to denote generic random sets.
Construct two Hankel matrices:

$$\tilde{\mathbf{Y}} = \begin{bmatrix} \mathbf{z}(t_2) & \mathbf{z}(t_3) & \cdots & \mathbf{z}(t_{q+1}) \\ \mathbf{z}(t_3) & \mathbf{z}(t_4) & \cdots & \mathbf{z}(t_{q+2}) \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{z}(t_{p+1}) & \mathbf{z}(t_{p+2}) & \cdots & \mathbf{z}(t_{p+q}) \end{bmatrix}_{(mp)\times q} \tag{9.30}$$

and

$$\mathbf{Y} = \begin{bmatrix} \mathbf{z}(t_1) & \mathbf{z}(t_2) & \cdots & \mathbf{z}(t_q) \\ \mathbf{z}(t_2) & \mathbf{z}(t_3) & \cdots & \mathbf{z}(t_{q+1}) \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{z}(t_p) & \mathbf{z}(t_{p+1}) & \cdots & \mathbf{z}(t_{p+q-1}) \end{bmatrix}_{(mp)\times q} \tag{9.31}$$

Here,

ti+1 − ti = Δt,  i = 1,…, p + q (9.32)

where Δt is the sampling time interval. Integers p and q are such that

mp ≥ 2n (9.33)

and

q ≥ mp (9.34)

It can be proven that

$$\tilde{\mathbf{Y}}\,\mathbf{Y}^{+} = \exp(\mathbf{A}\,\Delta t) \tag{9.35}$$

and

$$\mathrm{eig}\left(\exp(\mathbf{A}\,\Delta t)\right) = \mathrm{diag}\left(e^{\lambda_i \Delta t}\right) \tag{9.36}$$

Based on Equation 9.36, the natural frequencies and damping ratios can be calculated. However, the eigenvectors of the matrix exp(AΔt) do not contain the full information of the mode shapes, unless

m ≥ n (9.37)
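A minimal numeric sketch of the Hankel-matrix procedure of Equations 9.30 through 9.36 follows, using a single-channel free-decay record (m = 1); the signal, sampling interval, and the choices of p and q are illustrative assumptions.

```python
# Sketch: Hankel-matrix identification (Equations 9.30 through 9.36)
# from an assumed single-channel free-decay record.
import numpy as np

dt, n_pts = 0.02, 400
omega_n, zeta = 12.0, 0.03                        # assumed modal parameters
omega_d = omega_n * np.sqrt(1.0 - zeta ** 2)
t = np.arange(n_pts) * dt
z = np.exp(-zeta * omega_n * t) * np.cos(omega_d * t)    # free decay, m = 1

p, q = 4, 300         # mp >= 2n (Eq. 9.33) and q >= mp (Eq. 9.34), with n = 1
Y  = np.array([z[i     : i + q    ] for i in range(p)])  # Equation 9.31
Yt = np.array([z[i + 1 : i + 1 + q] for i in range(p)])  # Equation 9.30 (shift)

expA = Yt @ np.linalg.pinv(Y)                     # Equation 9.35
mu = np.linalg.eigvals(expA)
mu = mu[np.abs(mu) > 0.1]                         # discard spurious zero modes
lam = np.log(mu) / dt                             # Equation 9.36: mu = e^(lam dt)
lam = lam[np.imag(lam) > 0]                       # one of each conjugate pair
print(np.abs(lam), -np.real(lam) / np.abs(lam))   # recovered omega_n and zeta
```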

9.1.4 The Second Inverse Problem of Dynamic Systems


In specific cases, the second inverse problem of dynamic systems is used to identify
an unknown input.

9.1.4.1 General Background
From Equation 9.1, the following can be written:

$$F(\omega) = H(\omega)^{-1}X(\omega) \tag{9.38}$$

The autopower spectral density function of the input is calculated as

$$S_F(\omega) = \left|H(\omega)\right|^{-2}S_X(\omega) \tag{9.39}$$

Since

$$\sigma_F^2 = \int_{-\infty}^{\infty} S_F(\omega)\,\mathrm{d}\omega \tag{9.40}$$

this will yield


$$\sigma_F^2 = \int_{-\infty}^{\infty} \left|H(\omega)\right|^{-2}S_X(\omega)\,\mathrm{d}\omega \tag{9.41}$$

In most cases, the above integral of Equation 9.41 does not exist. Therefore,
Equation 9.41 cannot be directly used. However, in selected cases, the autopower
spectral density functions do exist.

9.1.4.2 White Noise
Consider the case of white noise by recalling Equation 7.50. The white noise input
can be written as

$$S_F = \frac{kc\,\sigma_X^2}{\pi} \tag{9.42}$$

9.1.4.3 Practical Issues
9.1.4.3.1  Sampling Frequency and Cutoff Band
To satisfy the Nyquist sampling theorem, the signal must be low-pass filtered and the
cutoff frequency ωc or fc is given by

ωc = 1/2ωS (9.43)

and

fc = 1/2 f S (9.44)

Here, ωS or f S is the sampling frequency.


In addition, the total length of sampling is given by T, yielding

$$x(t) = x(\omega_c, T) = \mathcal{F}^{-1}\left[X(\omega_c, T)\right] \tag{9.45}$$

9.1.4.3.2  Estimation of RMS Value

$$\sigma_F^2(f) = \int_0^{f_C} \left|H(f)\right|^{-2}W_X(f)\,\mathrm{d}f \tag{9.46}$$

For SDOF systems,

$$\left|H(\omega)\right|^{-2}\Big|_{\omega\le\omega_c} = (k - m\omega^2)^2 + (c\omega)^2 = (k - 4\pi^2 m f^2)^2 + 4\pi^2(cf)^2\Big|_{f\le f_c} \tag{9.47}$$
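Numerically, Equations 9.46 and 9.47 can be combined as in the sketch below; the SDOF parameters and the response PSD W_X(f) are hypothetical stand-ins for measured quantities.

```python
# Sketch: RMS of an unknown input from the response PSD of an SDOF system
# (Equations 9.46 and 9.47). W_x below is a hypothetical measured PSD.
import numpy as np

m, c, k = 1.0, 0.8, 1.0e4                 # assumed SDOF parameters
f_c = 40.0                                # cutoff frequency (Hz)
f = np.linspace(0.1, f_c, 2000)
df = f[1] - f[0]

# Equation 9.47: inverse-squared transfer function magnitude
H_inv2 = (k - 4 * np.pi**2 * m * f**2) ** 2 + 4 * np.pi**2 * (c * f) ** 2
W_x = 1e-8 / (1.0 + (f / 10.0) ** 4)      # hypothetical response PSD

sigma_F2 = np.sum(H_inv2 * W_x) * df      # Equation 9.46 (rectangle rule)
print("estimated input RMS:", np.sqrt(sigma_F2))
```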

9.2 System Parameter Identification


System identification is an important subfield of inverse problems; it establishes mathematical models of dynamical systems from measured data. System identification also includes the optimal design of experiments for efficiently generating informative data for fitting such models, as well as model reduction. System parameter identification is the identification of required properties, parameters, and models through measurement of the random input and output of a system. Due to the randomness and uncertainties of measured data, system identification is carried out using statistical analysis (see Goodwin and Payne 1977; Walter and Pronzato 1997).

9.2.1 Parameter Estimation, Random Set


First, the estimate of numerical characteristics of a random set will be considered.

9.2.1.1 Maximum Likelihood
In estimating statistical parameters, certain criteria are used to deal with random variables and processes, based on the condition that not all possible variables in a given set can be exhausted. It must be decided under what condition the estimation will be satisfactory. Maximum likelihood estimation (MLE) provides a commonly used criterion. It was used earlier by Gauss and Laplace, and was popularized by Fisher between 1912 and 1922. Reviews of the development of maximum likelihood have been provided by a number of authors (for instance, see LeCam 1990). This method determines the parameters that maximize the probability, or likelihood, of the sample data. MLE is considered robust, with minimal exceptions, and yields estimators with accurate statistical properties. Maximum likelihood estimators are versatile, can be applied to most models and types of data, and are efficient for quantifying uncertainty using confidence bounds. In this section, the focus will be on MLE; as a comparison, the method of moments, one of the estimation methods other than MLE, will also be briefly discussed.

9.2.1.1.1  Mean and Averaging


Consider the mean value given by

$$\bar{x} = \frac{1}{n}\sum_{j=1}^{n} x_j \tag{9.48}$$

In Equation 9.48, x̄ is the first moment about the origin of all the samples [xj]. Generally, the parameter to be estimated is simply a certain moment, such as the mean or the RMS value. This simple method of estimating the unknown parameter is referred to as moment estimation.

Example 9.1

Suppose X ~ N(μ, σ), where the mean μ and the standard deviation σ are unknown and need to be estimated.
Consider the first and second moments about the origin:

$$\hat{\mu} = E[X] = \bar{x}$$

and

$$\hat{\mu}^2 + \hat{\sigma}^2 = E[X^2] = \overline{x^2}$$

In the above, and hereafter, the overhead symbol "^" (hat) represents an estimated value.
Therefore, Equation 9.48 can be used to estimate the mean μ̂, and further

$$\hat{\sigma}^2 = \overline{x^2} - \hat{\mu}^2 = \frac{1}{n}\sum_{j=1}^{n} x_j^2 - \bar{x}^2 = S^2$$

and

$$\hat{\sigma} = \sqrt{S^2} = S$$

Here, S² and S stand, respectively, for the sample variance and standard deviation. Equation 9.48 provides the basic approach of averaging. It is noted that x̄ is the sample mean of all measured xj, which are the samples taken from the random set X. Typically, n samples are taken, and the total number of variables in X will be much larger than n. Therefore, a reasonable question is, can x̄ be used to estimate the mean value of all variables in X? Besides the mean value, the variance and the moments will also be considered. An additional reasonable question is, is there any bias in these parameter estimations based on the average described in Equation 9.48 or, more generally, in moment estimation?
These questions are analyzed in the following.

9.2.1.1.2  Probability Density Function


Suppose that n independent observations x₁, x₂, …, xₙ are measured, whose probability density (or probability mass) function is unknown. Both the observed variables xᵢ and the parameter p₁ can be vectors. The probability density function (PDF) f_X(.) is considered to belong to a certain family of distributions {f_X(x, p), p ∈ P}, called the parametric model, such that f_X corresponds to p = p₁, which is called the true value of the parameter. It is desirable to find an estimator p̂ that is as close to the true value p₁ as possible.
The parameter vector of the PDF of a single observation is explicitly denoted by p₁, while the parameter vector of the joint PDF of all n observations is denoted by pₙ. Since the xⱼ are independent, the joint PDF can be written as

$$f_{X_1\cdots X_n}(x_1,\ldots,x_n,\mathbf{p}_n) = \prod_{j=1}^{n} f_{X_j}(x_j,\mathbf{p}_1) \tag{9.49}$$

In Equation 9.49, p₁ is the parameter to be calculated. Under the condition that the unknown parameter is indeed p₁ and that x₁, x₂, …, xₙ are independent, Equation 9.49 takes the form of the total product of the f_{Xj}(xⱼ, p₁).

9.2.1.1.3  Likelihood Function

Example 9.2

A machine is used to manufacture certain specimens. Of the specimens produced, some are usable, while others are not. The random variable X is used to denote these two cases: when usable, X = 0; otherwise, X = 1. Therefore, X is 0–1 distributed.
The probability of X = 1 is p: P(X = 1) = p; thus, P(X = 0) = 1 − p. This can be written in the uniform formula (see the Bernoulli distribution described in Equation 1.70):

$$P(X = k) = p^k(1-p)^{1-k}, \quad k = 0, 1$$

In checking the effectiveness of this machine, five specimens are chosen. It is found that the first two are unusable, while the remaining three are usable. The probability p is estimated as follows.
It is known that 0 < p < 1; thus, p can be any value between 0 and 1. The following values of p will be examined: 0.2, 0.4, 0.6, and 0.8. For each, the chance P(x₁ = 1, x₂ = 1, x₃ = 0, x₄ = 0, x₅ = 0) = p²(1 − p)³ will be calculated. The results are listed in Table 9.1.
From this table, it is seen that p = 0.4 gives the largest calculated chance of producing the observed pattern of unusable specimens. This implies that, among 0.2, 0.4, 0.6, and 0.8, p = 0.4 is most likely the targeted probability. This is an example of the maximum likelihood method.
A joint PDF can be studied by viewing the observed values x₁, x₂, …, xₙ as fixed "parameters," whereas the value of pₙ is allowed to vary freely. From this point of view, the PDF is called the likelihood function. In using this method, the focus is on pₙ; in other words, the value of pₙ is sought such that the PDF is likely to be the "best" estimation.
The joint PDF is thus called the likelihood function, denoted as

$$L(\mathbf{p}_1) = \prod_{j=1}^{n} f_{X_j}(x_j,\mathbf{p}_1) \tag{9.50}$$

The next step is to maximize the likelihood.

Table 9.1
Calculated Chance of Making Unusable Specimens

p      P(x₁ = 1, x₂ = 1, x₃ = 0, x₄ = 0, x₅ = 0) = p²(1 − p)³
0.2    0.2²(0.8)³ = 0.02048
0.4    0.4²(0.6)³ = 0.03456
0.6    0.6²(0.4)³ = 0.02304
0.8    0.8²(0.2)³ = 0.00512

9.2.1.1.4  Log-Likelihood Function


For mathematical convenience, the log-likelihood function is used:

$$\mathcal{L}(\mathbf{p}_1) = \ln\left[L(\mathbf{p}_1)\right] \tag{9.51}$$

9.2.1.1.5  Maximum Likelihood Method


In this method, the derivative is taken with respect to the unknown variable p1 and
the result is set equal to zero. By solving this equation, the proper value of p1 is
found. That is,

$$\frac{\mathrm{d}\ln(L)}{\mathrm{d}\mathbf{p}_1} = 0 \tag{9.52}$$

Note that from Equation 9.52, the vector p1 may contain n elements, explicitly,

$$\mathbf{p}_1 = \begin{bmatrix} p_1 \\ p_2 \\ \vdots \\ p_n \end{bmatrix} \tag{9.53}$$

In this case, Equation 9.52 is realized by

$$\frac{\partial\ln(L)}{\partial p_1} = 0,\qquad \frac{\partial\ln(L)}{\partial p_2} = 0,\qquad \ldots,\qquad \frac{\partial\ln(L)}{\partial p_n} = 0 \tag{9.54}$$

Example 9.3

Consider the above-mentioned case of the random variable X with a 0–1 distribution, expressed as

$$P(X = k) = p^k(1-p)^{1-k}, \quad k = 0, 1$$

In this instance, p is the unknown parameter to be estimated through the maximum likelihood method. The samples of X are (x₁, x₂, …, xₙ).

First, the likelihood function is given by

$$L = \prod_{j=1}^{n} P(X = x_j) = \prod_{j=1}^{n} p^{x_j}(1-p)^{1-x_j} = p^{\sum_{i=1}^{n} x_i}\,(1-p)^{\,n-\sum_{i=1}^{n} x_i}$$

Second, take the logarithm on both sides of the above equation:

$$\ln(L) = \sum_{i=1}^{n} x_i\ln(p) + \left(n - \sum_{i=1}^{n} x_i\right)\ln(1-p)$$

Third, take the derivative with respect to the unknown parameter p and let the
result be equal to zero.
In this case, "p₁" contains only one parameter p; therefore,

$$\frac{\mathrm{d}\ln(L)}{\mathrm{d}p} = \frac{1}{p}\sum_{i=1}^{n} x_i - \frac{1}{1-p}\left(n - \sum_{i=1}^{n} x_i\right) = 0$$

It is found that, when

$$p = \frac{1}{n}\sum_{i=1}^{n} x_i$$

ln(L) reaches its maximum value. Therefore, the estimation of p based on the maximum likelihood method, comparing with Equation 9.48, is

$$\hat{p} = \frac{1}{n}\sum_{i=1}^{n} x_i = \bar{x}$$
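As a quick numeric check (a sketch; the data are the five specimens of Example 9.2), the log-likelihood indeed peaks at the sample mean:

```python
# Sketch: the Bernoulli log-likelihood peaks at the sample mean (Example 9.3).
import numpy as np

x = np.array([1, 1, 0, 0, 0])             # the five specimens of Example 9.2
p = np.linspace(0.01, 0.99, 981)
lnL = x.sum() * np.log(p) + (len(x) - x.sum()) * np.log(1.0 - p)
print(p[np.argmax(lnL)], x.mean())        # both approximately 0.4
```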

Example 9.4

Suppose that a random variable X is uniformly distributed between 0 and a, with the PDF

$$f_X(x) = \begin{cases} \dfrac{1}{a}, & 0 \le x \le a \\ 0, & \text{otherwise} \end{cases}$$

Estimate the unknown parameter a.



First, the likelihood function is given by

$$L = \prod_{i=1}^{n} f_X(x_i) = \begin{cases} \displaystyle\prod_{i=1}^{n}\frac{1}{a} = a^{-n}, & 0 \le \min_i x_i \le \max_i x_i \le a \\ 0, & \text{otherwise} \end{cases}$$

Second, consider the case L ≠ 0 and take the logarithm:

ln(L) = −nln(a)

Third, take the derivative with respect to a, yielding

$$\frac{\mathrm{d}\ln(L)}{\mathrm{d}a} = -\frac{n}{a} = 0$$

The above equation has no meaningful solution. This implies that, when L ≠ 0,

$$\frac{\mathrm{d}\ln(L)}{\mathrm{d}a} \neq 0$$

However, this inequality does not mean that the likelihood function L has no maximum. In fact, a and L have an inverse relationship: the smaller the value of a, the larger the value of L. However, a cannot be smaller than max_i x_i. This is written as

$$\hat{a} = \max_i x_i$$

Example 9.5

Consider the sample set (x₁, x₂, x₃, x₄, x₅, x₆) = (1, 2, 3, 5, 4, 9). Given the above condition, â = 9.
Now, compare the estimation through the maximum likelihood method with that through the moment method.

$$E[X] = \int_{-\infty}^{\infty} x f_X(x)\,\mathrm{d}x = \int_0^a x\,\frac{1}{a}\,\mathrm{d}x = \frac{a}{2}$$

However, it is also true that the moment condition requires

$$\frac{\hat{a}}{2} = E[X] = \bar{x}$$

For this example,

$$\hat{a} = 2E[X] = 2\bar{x} = 2 \times 4 = 8 \neq 9$$

This result implies that the estimation through the moment about the origin has a bias.
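A two-line numeric companion to Examples 9.4 and 9.5 (a sketch using the same sample):

```python
# Sketch: MLE vs. moment estimate of a for U(0, a), Examples 9.4 and 9.5.
import numpy as np

x = np.array([1, 2, 3, 5, 4, 9])
a_mle = x.max()            # maximum likelihood: a_hat = max x_i = 9
a_mom = 2 * x.mean()       # moment estimate: 2 * 4 = 8, below max(x) -- biased
print(a_mle, a_mom)
```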

9.2.1.2 Bias and Consistency


Statistical estimation is the estimation of certain parameters and/or distribution
functions of random sets through samples, whose size can be considerably smaller
than the entire space of the random sets. As a result, the estimation can be biased
and/or inconsistent.
To judge whether an estimator is unbiased, check whether the expected value of the estimation is equal to the "true" value. A maximum likelihood estimator should be unbiased, or at least asymptotically so. Additionally, it should be consistent; that is, with a sufficiently large number of observations n, it should be possible to find the value of p with arbitrary precision, meaning that as n approaches infinity, the estimator p̂ converges in probability to its true value.
The following is true for the maximum likelihood estimator (MLE):

1. MLE is consistent: estimation converges.


2. MLE is asymptotically unbiased: it converges to a “correct” value.
3. MLE is efficient: the correct value yields the minimum variance among all
unbiased estimates.
4. MLE is sufficient: it uses all measured data.
5. MLE is invariant.
6. MLE is asymptotically normally distributed.

9.2.1.2.1  Mean Estimator


Consider the formula used to calculate the mean value of the random set X, where each Xj is a conceptual realization taken from the generic random set:

$$\bar{X} = \frac{1}{n}\sum_{j=1}^{n} X_j \tag{9.55}$$

To check that it is unbiased, take the mathematical expectation of Equation 9.55 as follows:

$$E[\bar{X}] = E\left[\frac{1}{n}\sum_{j=1}^{n} X_j\right] = \frac{1}{n}E\left[\sum_{j=1}^{n} X_j\right] = \frac{1}{n}\sum_{j=1}^{n} E[X_j] = \frac{1}{n}\sum_{j=1}^{n}\mu_X = \mu_X \tag{9.56}$$

Equation 9.56 implies that the estimation of Equation 9.55 is indeed unbiased.
To see whether the estimation is consistent, consider

$$D[\bar{X}] = D\left[\frac{1}{n}\sum_{j=1}^{n} X_j\right] = \frac{1}{n^2}D\left[\sum_{j=1}^{n} X_j\right] = \frac{1}{n^2}\left\{\sum_{j=1}^{n} D[X_j] + \mathop{\sum\sum}_{j\neq k}\mathrm{cov}[X_j, X_k]\right\} = \frac{1}{n^2}\sum_{j=1}^{n}\sigma_X^2 = \frac{\sigma_X^2}{n} \tag{9.57}$$

where the covariance terms vanish because the samples are independent. Equation 9.57 implies that, when n is sufficiently large, the variance of X̄ tends to zero. This implies that the estimation is consistent.

9.2.1.2.2  Variance Estimator


We now consider the bias of the variance estimator

$$\hat{\Sigma}_X^2 = \frac{1}{n}\sum_{j=1}^{n}(X_j - \mu_X)^2 \tag{9.58}$$

Here, Σ̂²_X is the random variable from which the variance is estimated. It can be proven that the mean of Σ̂²_X is σ²_X. This implies that when the mean μ_X is known, Equation 9.58 provides an unbiased estimation of the variance.
If the mean μ_X is unknown, then first analyze the following formula, which is used to estimate the variance:

$$S_X^2 = \frac{1}{n-1}\sum_{j=1}^{n}(X_j - \bar{X})^2 \tag{9.59}$$

Consider the corresponding mean value:

$$E[S_X^2] = E\left[\frac{1}{n-1}\sum_{j=1}^{n}(X_j-\bar{X})^2\right] = E\left[\frac{1}{n-1}\sum_{j=1}^{n}\left[(X_j-\mu_X) - (\bar{X}-\mu_X)\right]^2\right]$$
$$= \frac{1}{n-1}\sum_{j=1}^{n}\left\{E[(X_j-\mu_X)^2] - 2E[(X_j-\mu_X)(\bar{X}-\mu_X)] + E[(\bar{X}-\mu_X)^2]\right\} \tag{9.60}$$
$$= \frac{1}{n-1}\sum_{j=1}^{n}\left(\sigma_X^2 - \frac{2}{n}\sigma_X^2 + \frac{1}{n}\sigma_X^2\right) = \sigma_X^2$$

Consequently, Equation 9.59, with the factor (n − 1) instead of n, provides an unbiased estimation:

$$S_X^2 = \frac{1}{n-1}\sum_{j=1}^{n}(x_j - \bar{x})^2 \tag{9.61}$$
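A Monte Carlo sketch of this result follows; the sample size, trial count, and true variance are illustrative assumptions.

```python
# Sketch: dividing by n biases the variance estimate when the mean is unknown,
# while dividing by (n - 1) does not (Equations 9.59 through 9.61).
import numpy as np

rng = np.random.default_rng(1)
n, trials, sigma2 = 10, 200_000, 4.0
x = rng.normal(0.0, np.sqrt(sigma2), size=(trials, n))

dev2 = (x - x.mean(axis=1, keepdims=True)) ** 2
print(dev2.sum(axis=1).mean() / n)        # ~3.6 = sigma2 * (n - 1) / n, biased
print(dev2.sum(axis=1).mean() / (n - 1))  # ~4.0, unbiased
```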

9.2.2 Confidence Intervals
9.2.2.1 Estimation and Sampling Distributions
Because estimation is based on samples rather than entire variable sets, the estimator itself is a random variable, which also has a distribution.

9.2.2.1.1  Probability of Correct Estimation


9.2.2.1.1.1   Mean  The mean must fall within all possible values. Because it is
impossible to obtain all the random variables, there exists a chance that the “true”
mean is outside the sampling space. We now consider the probability of correct
estimation.
Assume that the distribution of the mean estimator X̄ is normal, and that the variance of X̄ is known; in the form of the standard normal distribution, the following is true:

$$P\left(-\infty < \frac{\bar{X} - \mu_X}{\sigma_X/\sqrt{n}} \le z_{1-\alpha}\right) = \Phi(z_{1-\alpha}) = 1 - \alpha \tag{9.62}$$

9.2.2.1.1.2   Variance  We next consider the variance estimator by writing

$$(n-1)\frac{S_X^2}{\sigma_X^2} = \frac{1}{\sigma_X^2}\sum_{j=1}^{n}(X_j - \bar{X})^2 = \sum_{j=1}^{n}\left(\frac{X_j - \mu_X}{\sigma_X}\right)^2 - \left(\frac{\bar{X} - \mu_X}{\sigma_X/\sqrt{n}}\right)^2 \tag{9.63}$$

From Equation 9.63, the first term on the right-hand side is chi-square with n DOF, while the second term is chi-square with one DOF. Due to the regenerative character of chi-square random variables, the left-hand side is seen to be chi-square with (n − 1) DOF. Consequently, this can be rewritten as

$$P\left[(n-1)\frac{S_X^2}{\sigma_X^2} > x_{1-\alpha}\right] = 1 - F_{\chi^2_{n-1}}(x_{1-\alpha}) = 1 - \alpha \tag{9.64}$$

9.2.2.1.2  Confidence Intervals


9.2.2.1.2.1   Confidence Interval of Mean  The (1 − α), two-sided confidence interval for the mean can be obtained by solving the double inequality in the argument on the left-hand side of Equation 9.62. Using the probabilities of correct estimation established above yields

$$\left(\bar{x} - \frac{z_{1-\alpha/2}\,\sigma_X}{\sqrt{n}},\; \bar{x} + \frac{z_{1-\alpha/2}\,\sigma_X}{\sqrt{n}}\right) \tag{9.65}$$

9.2.2.1.2.2   Confidence Interval of Variance  Similar to Equation 9.64, the (1 − α) confidence interval for the variance can be written as

$$\left(0,\; \frac{(n-1)S_X^2}{x_{1-\alpha}}\right) \tag{9.66}$$

9.2.2.1.2.3   Mean with Unknown Variance  The variance in Equation 9.65 is assumed to be known. However, it is in fact unknown. In this specific instance, the variance estimator S²_X is used to approximate σ²_X. For this reason, there is a need to study the distribution of

$$t_{n-1} = \frac{\bar{X} - \mu_X}{S_X/\sqrt{n}} \tag{9.67}$$

Note that Equation 9.67 indicates a t-distribution, also known as a Student dis-
tribution. This was previously discussed in Chapter 2. The two-sided confidence
interval for the mean is given as

$$P\left(-b_{1-\alpha/2} < \frac{\bar{X} - \mu_X}{S_X/\sqrt{n}} \le b_{1-\alpha/2}\right) = F_{t_{n-1}}(b_{1-\alpha/2}) - F_{t_{n-1}}(-b_{1-\alpha/2}) = 1 - \alpha \tag{9.68}$$

By solving the double inequality in the argument on the left-hand side for the mean μ_X, the confidence interval is determined to be

$$\left(\bar{x} - \frac{b_{1-\alpha/2}\,S_X}{\sqrt{n}},\; \bar{x} + \frac{b_{1-\alpha/2}\,S_X}{\sqrt{n}}\right) \tag{9.69}$$
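As a sketch, the t-based interval of Equation 9.69 can be computed as follows; the data vector and α are illustrative assumptions.

```python
# Sketch: two-sided confidence interval for the mean with unknown variance
# (Equations 9.67 and 9.69).
import numpy as np
from scipy import stats

x = np.array([4.8, 5.1, 4.9, 5.3, 5.0, 4.7, 5.2, 5.1])   # assumed sample
alpha = 0.05
xbar, s, n = x.mean(), x.std(ddof=1), len(x)
b = stats.t.ppf(1.0 - alpha / 2.0, df=n - 1)              # b_{1 - alpha/2}
print((xbar - b * s / np.sqrt(n), xbar + b * s / np.sqrt(n)))
```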

9.2.3 Parameter Estimation, Random Process


Based on the discussion of the parameter of a random process, the random process
will now be further described in the following (see Stigler 1986, 1999).

9.2.3.1 General Estimation
An unknown random process should not be assumed stationary before analyzing
its statistical characteristics. Unless the process is known to be stationary, ensemble
average must be used.

9.2.3.1.1  Mean and Variance


The following formulae are used to estimate the mean and the variance of a random
process.

9.2.3.1.1.1   Mean  Suppose that M samples are taken; then

$$\bar{x}_j = \frac{1}{M}\sum_{m=1}^{M} x_{mj} \tag{9.70}$$

At first glance, Equation 9.70 appears identical to the mean of a random variable. However, the mean x̄_j has a subscript j, indicating the jth time point of the random process X(t); the average described in Equation 9.70 is an ensemble average over the M sample records.

9.2.3.1.1.2   Variance  Similarly, the unbiased estimation of the variance is given by

$$S_X^2(t_j) = \frac{1}{M-1}\sum_{m=1}^{M}(x_{mj} - \bar{x}_j)^2, \quad j = 0, 1, \ldots, n-1 \tag{9.71}$$

9.2.3.1.1.3   Standard Deviation  From Equation 9.71, the standard deviation is determined to be

$$S_X(t_j) = \sqrt{S_X^2(t_j)}, \quad j = 0, 1, \ldots, n-1 \tag{9.72}$$

9.2.3.1.2  Correlation
Another fundamental difference between a random variable and a random process is
in the analysis of correlations. We now consider the correlation of processes X and Y.

9.2.3.1.2.1   Joint PDF  First, consider the joint distribution of X and Y by denoting
the cross-correlation function R as

R = E[XY] (9.73)

The joint PDF can then be written as

$$f_{XY}(x,y) = \frac{1}{2\pi\sqrt{\sigma_X^2\sigma_Y^2 - R^2}}\exp\left[-\frac{1}{2(\sigma_X^2\sigma_Y^2 - R^2)}\left(\sigma_Y^2x^2 - 2Rxy + \sigma_X^2y^2\right)\right], \quad -\infty < x, y < \infty \tag{9.74}$$



9.2.3.1.2.2   Likelihood Function  Next, consider the likelihood function of n pairs of realizations of X and Y, where the likelihood function is

$$L(\sigma_X^2,\sigma_Y^2,R) = \frac{1}{(2\pi)^n\left(\sigma_X^2\sigma_Y^2 - R^2\right)^{n/2}}\exp\left[-\frac{1}{2(\sigma_X^2\sigma_Y^2 - R^2)}\sum_{j=1}^{n}\left(\sigma_Y^2x_j^2 - 2Rx_jy_j + \sigma_X^2y_j^2\right)\right] \tag{9.75}$$

9.2.3.1.2.3   Log-Likelihood Function  Furthermore, the log-likelihood function is

$$\ln L(\sigma_X^2,\sigma_Y^2,R) = -n\ln(2\pi) - \frac{n}{2}\ln\left(\sigma_X^2\sigma_Y^2 - R^2\right) - \frac{1}{2(\sigma_X^2\sigma_Y^2 - R^2)}\sum_{j=1}^{n}\left(\sigma_Y^2x_j^2 - 2Rx_jy_j + \sigma_X^2y_j^2\right) \tag{9.76}$$

9.2.3.1.2.4   Maximum Log-Likelihood Function  The maximum log-likelihood function can be used to estimate the variances and the joint PDF. First, maximize the log-likelihood function by letting

$$\frac{\partial}{\partial\sigma_X^2}\left[\ln L(\sigma_X^2,\sigma_Y^2,R)\right] = 0 \tag{9.77}$$

thus yielding the following equation:

$$n + \frac{1}{\hat{\sigma}_Y^2}\sum_{j=1}^{n}y_j^2 = \frac{1}{\hat{\sigma}_X^2\hat{\sigma}_Y^2 - \hat{r}^2}\sum_{j=1}^{n}\left(\hat{\sigma}_Y^2x_j^2 - 2\hat{r}x_jy_j + \hat{\sigma}_X^2y_j^2\right) \tag{9.78}$$

Next, let

$$\frac{\partial}{\partial\sigma_Y^2}\left[\ln L(\sigma_X^2,\sigma_Y^2,R)\right] = 0 \tag{9.79}$$

The following second equation results:

$$n + \frac{1}{\hat{\sigma}_X^2}\sum_{j=1}^{n}x_j^2 = \frac{1}{\hat{\sigma}_X^2\hat{\sigma}_Y^2 - \hat{r}^2}\sum_{j=1}^{n}\left(\hat{\sigma}_Y^2x_j^2 - 2\hat{r}x_jy_j + \hat{\sigma}_X^2y_j^2\right) \tag{9.80}$$

Additionally, let

$$\frac{\partial}{\partial R}\left[\ln L(\sigma_X^2,\sigma_Y^2,R)\right] = 0 \tag{9.81}$$

We can obtain a third equation as given below:

$$n\hat{r} + \sum_{j=1}^{n}x_jy_j = \frac{\hat{r}}{\hat{\sigma}_X^2\hat{\sigma}_Y^2 - \hat{r}^2}\sum_{j=1}^{n}\left(\hat{\sigma}_Y^2x_j^2 - 2\hat{r}x_jy_j + \hat{\sigma}_X^2y_j^2\right) \tag{9.82}$$

Adding Equations 9.78 and 9.80 and dividing the result by 2 will further yield

$$n + \frac{1}{2}\left(\frac{1}{\hat{\sigma}_Y^2}\sum_{j=1}^{n}y_j^2 + \frac{1}{\hat{\sigma}_X^2}\sum_{j=1}^{n}x_j^2\right) = \frac{1}{\hat{\sigma}_X^2\hat{\sigma}_Y^2 - \hat{r}^2}\sum_{j=1}^{n}\left(\hat{\sigma}_Y^2x_j^2 - 2\hat{r}x_jy_j + \hat{\sigma}_X^2y_j^2\right) \tag{9.83}$$

Therefore, combining with Equation 9.82, we can write

$$\sum_{j=1}^{n}x_jy_j = \frac{\hat{r}}{2}\left(\frac{1}{\hat{\sigma}_X^2}\sum_{j=1}^{n}x_j^2 + \frac{1}{\hat{\sigma}_Y^2}\sum_{j=1}^{n}y_j^2\right) \tag{9.84}$$

To use Equation 9.84, we need both variances of the processes X and Y, which can be estimated respectively as

$$\hat{\sigma}_X^2 = \frac{1}{n}\sum_{j=1}^{n}x_j^2 \tag{9.85}$$

and

$$\hat{\sigma}_Y^2 = \frac{1}{n}\sum_{j=1}^{n}y_j^2 \tag{9.86}$$

Thus, the correlation function is

$$\hat{r} = \frac{1}{n}\sum_{j=1}^{n}x_jy_j \tag{9.87}$$

which is generally written as

$$\hat{R} = \frac{1}{n}\sum_{j=1}^{n}X_jY_j \tag{9.88}$$

The following equation can be used to check whether the estimator for the correlation R is biased:

$$E[\hat{R}] = E\left[\frac{1}{n}\sum_{j=1}^{n}X_jY_j\right] = \frac{1}{n}\sum_{j=1}^{n}E[X_jY_j] = R \tag{9.89}$$

Equation 9.89 indicates that this estimation is unbiased.
Furthermore, to see whether the estimator is consistent, we consider the following:

$$D[\hat{R}] = D\left[\frac{1}{n}\sum_{j=1}^{n}X_jY_j\right] = \frac{1}{n^2}\sum_{j=1}^{n}D[X_jY_j] = \frac{1}{n}\left(R^2 + \sigma_X^2\sigma_Y^2\right) \tag{9.90}$$

For this example, the estimator of the correlation is consistent.

9.2.3.1.2.5   Autocorrelation Function  The estimator of the autocorrelation function is

$$\hat{r}_X(t_i,t_j) = \frac{1}{M}\sum_{m=1}^{M}x_{mi}\,x_{mj}, \quad t_i, t_j \in T \tag{9.91}$$

It can be proven that

E[rˆX (ti , t j )] = RX (ti , t j ) (9.92)

Namely, the estimator described in Equation 9.91 is unbiased.
In addition,

$$D[\hat{r}_X(t_i,t_j)] = \frac{R_X^2(t_i,t_j) + \sigma_X^2(t_i)\,\sigma_X^2(t_j)}{M} \tag{9.93}$$

which can be used to check the consistency.



9.2.3.1.2.6   Cross-Correlation Function  The estimator of a cross-correlation function is

$$\hat{r}_{XY}(t_i,t_j) = \frac{1}{M}\sum_{m=1}^{M}x_{mi}\,y_{mj}, \quad t_i, t_j \in T \tag{9.94}$$

It can be derived from Equation 9.94 that

E[rˆXY (ti , t j )] = RXY (ti , t j ) (9.95)

and

$$D[\hat{r}_{XY}(t_i,t_j)] = \frac{R_{XY}^2(t_i,t_j) + \sigma_X^2(t_i)\,\sigma_Y^2(t_j)}{M} \tag{9.96}$$

Thus, Equation 9.94 has an unbiased and consistent estimator.

9.2.3.1.2.7   Covariance  The cross-covariance function is estimated, based on the maximum likelihood method, as

$$\hat{C}_{XY}(t_i,t_j) = \frac{1}{M}\sum_{m=1}^{M}(x_{mi} - \bar{x}_i)(y_{mj} - \bar{y}_j), \quad t_i, t_j \in T \tag{9.97}$$

Note that, for this case,

E[Cˆ XY (ti , t j )] ≠ C XY (ti , t j ) (9.98)

or explicitly speaking, Equation 9.97 is a biased estimator.

9.2.3.1.2.8   Correlation Coefficient Function  The estimator of the cross-correlation coefficient can be written as

$$\hat{\rho}_{XY}(t_i,t_j) = \frac{\hat{C}_{XY}(t_i,t_j)}{s_X(t_i)\,s_Y(t_j)} \tag{9.99}$$

Due to the inequality in Equation 9.98, this estimator is also biased.

9.2.3.2 Stationary and Ergodic Process
The above estimators are for general processes and are based on the MLE and the ensemble average. In the following, stationary processes and ergodic processes are examined.

9.2.3.2.1  Mean
9.2.3.2.1.1   Non-Ergodic  For a non-ergodic process, the mean can be written as

$$\bar{x} = \frac{1}{n}\sum_{j=0}^{n-1}\bar{x}_j = \frac{1}{n}\sum_{j=0}^{n-1}\left(\frac{1}{M}\sum_{m=1}^{M}x_{mj}\right) = \frac{1}{Mn}\sum_{j=0}^{n-1}\sum_{m=1}^{M}x_{mj} \tag{9.100}$$

9.2.3.2.1.2   Ergodic  For an ergodic process, the mean is represented by a much simpler formula:

$$\bar{x} = \frac{1}{n}\sum_{j=0}^{n-1}x_j \tag{9.101}$$

9.2.3.2.2  Variance
9.2.3.2.2.1   Non-Ergodic  Similar to the mean value, if the process is non-ergodic, the variance can be written as

$$S_X^2 = \frac{1}{n}\sum_{j=0}^{n-1}S_X^2(t_j) = \frac{1}{n(M-1)}\sum_{j=0}^{n-1}\sum_{m=1}^{M}(x_{mj}-\bar{x}_j)^2 \tag{9.102}$$

9.2.3.2.2.2   Ergodic  If the process is ergodic, the variance is

$$S_X^2 = \frac{1}{n}\sum_{j=0}^{n-1}(x_j - \bar{x})^2 \tag{9.103}$$

9.2.3.2.3  Standard Deviation


Once the variance is estimated, the standard deviation can be calculated based on
the following relationship:

$$S_X = \sqrt{S_X^2} \tag{9.104}$$

9.2.3.2.4  Autocorrelation
Next, the autocorrelation function of a stationary process will be described.

9.2.3.2.4.1   Non-Ergodic  For a non-ergodic process,

$$\hat{r}_X(\tau_j) = \frac{1}{n-j}\sum_{i=0}^{n-1-j}\hat{r}_X(t_i, t_i + \tau_j), \quad 0 \le t_i, \tau_j \le T \tag{9.105}$$

For the discussion on variance, the expression of rˆX (ti , ti + τ j ) was used. One can
proceed from this point to obtain the expressions.

9.2.3.2.4.2   Ergodic  For an ergodic process,

$$\hat{r}_X(\tau_j) = \frac{1}{n-j}\sum_{i=0}^{n-1-j}x_i\,x_{i+j}, \quad 0 \le \tau_j \le T \tag{9.106}$$
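A short sketch of the estimator in Equation 9.106 follows; the correlated test record is an illustrative assumption.

```python
# Sketch: ergodic autocorrelation estimator of Equation 9.106.
import numpy as np

rng = np.random.default_rng(2)
x = np.convolve(rng.standard_normal(2000), np.ones(5) / 5, mode="same")

def r_hat(x, j):
    """r_X(tau_j) = 1/(n - j) * sum_{i=0}^{n-1-j} x_i x_{i+j}."""
    n = len(x)
    return np.dot(x[: n - j], x[j:]) / (n - j)

print([round(r_hat(x, j), 4) for j in range(5)])   # decays with lag j
```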

9.2.3.2.5  Cross-Correlation, Ergodic

We now consider two ergodic processes X and Y; the cross-correlation function is estimated as

$$\hat{r}_{XY}(\tau_j) = \frac{1}{n-j}\sum_{i=0}^{n-1-j}x_i\,y_{i+j}, \quad 0 \le \tau_j \le T \tag{9.107}$$

9.2.3.2.6  Covariance, Ergodic

For two ergodic processes X and Y, the covariance function is estimated as

$$\hat{C}_{XY}(\tau_j) = \frac{1}{n-j}\sum_{i=0}^{n-1-j}(x_i-\bar{x})(y_{i+j}-\bar{y}), \quad 0 \le \tau_j \le T \tag{9.108}$$

9.2.3.2.7  Cross-Correlation Coefficient, Ergodic

For two ergodic processes X and Y, the cross-correlation coefficient is estimated as

$$\hat{\rho}_{XY}(\tau_j) = \frac{\hat{C}_{XY}(\tau_j)}{s_X\,s_Y}, \quad 0 \le \tau_j \le T \tag{9.109}$$

9.2.3.3 Nonstationary Process
In real-world applications, processes are often nonstationary. Nonstationary pro-
cesses will be discussed next.

9.2.3.3.1  Direct Analysis


The first method for working with nonstationary processes is the direct method, in
which a nonstationary process is made “stationary.”

9.2.3.3.1.1   Product of Deterministic and Random Processes  The product of


deterministic and random processes is achieved by allowing

X(t) = a(t)U(t)  −∞ < t < ∞ (9.110)

Here, a(t) is deterministic, and U(t) is an (at least approximately) stationary random process with zero mean and unit variance.
In many cases, the process a(t) is also a function of the driving frequency, which can be seen as a systematic error in measurement.

9.2.3.3.1.2   Autocorrelation  In Equation 9.110, if a(t) and U(t) can be successfully separated from a nonstationary process X(t), then the autocorrelation function can be written as

R_X(t,s) = a(t)a(s)R_U(t,s)  for  −∞ < t,s < ∞  (9.111)

If U(t) is stationary, the autocorrelation function becomes

R_X(t,s) = a(t)a(s)R_U(s − t)  for  −∞ < t,s < ∞  (9.112)

Since U(t) has unit variance, the mean square value for t = s is

σ²_X(t) = a²(t)  (9.113)

9.2.3.3.1.3   Mean Square Value  The estimator of the mean square value for a nonstationary process can be written as

$$\hat{\sigma}_X^2(t_j) = \sum_{k=-N}^{N} w_k\,x_{j+k}^2, \quad j = N, \ldots, n-1-N \tag{9.114}$$

where w_k is an even weight function, n is the number of temporal points, and N sets the half-width of the averaging window, and we have

$$\sum_{k=-N}^{N} w_k = 1 \tag{9.115}$$

The weight function is used to emphasize certain values. In most cases, a denser distribution will exist near the central point, which in this specific case is near k = 0; this allows the bias to be minimized. Thus, the expected value of the estimator of the variance may be written as

$$E\left[\hat{\sigma}_X^2(t_j)\right] = \sum_{k=-N}^{N} w_k\,\sigma^2(t_{j+k}), \quad j = N, \ldots, n-1-N \tag{9.116}$$

From Equation 9.116, it is seen that when σ²(t_{j+k}) varies evenly (symmetrically) for k between −N and N and the variation is linear, the bias is close to zero; as a result, the estimator is nearly unbiased. Otherwise, it will be biased. Furthermore, an increase in the value of the weight function w_k near zero will lower the possible bias.

9.2.3.3.1.4   Variance of Mean-Square Estimator  The corresponding variance of the mean-square estimator is

$$D\left[\hat{\sigma}_X^2(t_j)\right] = 2\sum_{k=-N}^{N}\sum_{m=-N}^{N} w_k w_m R_X^2(j+k,\,j+m), \quad j = N, \ldots, n-1-N \tag{9.117}$$

where R_X(j,k) is the autocorrelation function of the process X(t) at t = jΔt and s = kΔt. When t = s, R_X(t,s) is at its maximum. Thus, to minimize the variance of the estimator σ̂²_X(t_j), the value of the weight function w_k near zero must be reduced, certainly not enlarged; this is contrary to the case of the mean estimator. Thus, different sets of weight functions should be used for estimating the mean and the mean-square values.

9.2.3.3.2  Indirect Analysis of Shock Response Spectrum

Assessing the severity and accounting for the effects of nonstationary random processes can be achieved through the shock response spectrum, in contrast to the power spectral density function. The response spectrum is given in Section 7.3.2; the shock response spectrum is discussed in the following.
Recall the base excitation:

$$\ddot{x} + 2\zeta\omega\dot{x} + \omega^2 x = 2\zeta\omega\dot{z} + \omega^2 z \tag{9.118}$$

where z(t) is the ground displacement and ẍ(t) is the absolute acceleration. If z̈(t) is a shock with a given amplitude, the peak response of ẍ will depend upon the natural frequency ω, or f = ω/2π, for a given value of the damping ratio ζ. Thus, the shock spectrum can be written as

$$B(f) = \max_t\left|\ddot{x}(t)\right| \tag{9.119}$$

Denote a series of shocks measured from the ensemble of a single nonstationary random process as z̈_mj, m = 1, …, M, j = 0, …, n − 1, where m indexes the measurement within the ensemble, while j represents time. From Equation 9.119, a collection of shock spectra denoted by B_m(f), m = 1, …, M, is used to obtain a representative shock response spectrum B_c(f). This representative shock response spectrum is employed to indicate a possible spectrum with a controlled level of conservatism. Recall from the discussion given in Chapter 7 that the earthquake response spectrum is the sum of the mean value plus one standard deviation.
The mean of the shock spectrum can be written as

$$\bar{B}(f) = \frac{1}{M}\sum_{m=1}^{M}B_m(f) \tag{9.120}$$

and the standard deviation can be written as

$$s_B(f) = \left\{\frac{1}{M-1}\sum_{m=1}^{M}\left[B_m(f) - \bar{B}(f)\right]^2\right\}^{1/2} \tag{9.121}$$

The representative shock response spectrum is now given by

$$B_c(f) = \bar{B}(f) + K\,s_B(f) \tag{9.122}$$

where

K > 0 (9.123)

to stay on the conservative side.
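The following sketch assembles a representative shock response spectrum per Equations 9.118 through 9.122; the half-sine base shocks, damping ratio, and K = 1 are illustrative assumptions, not data from the text.

```python
# Sketch: representative shock response spectrum (Equations 9.118 to 9.122).
# The ensemble of half-sine base shocks is an assumed stand-in for data.
import numpy as np
from scipy.signal import lsim

def shock_spectrum(z_ddot, dt, freqs, zeta=0.05):
    """B(f) = max |absolute acceleration| of an SDOF oscillator (Eq. 9.119)."""
    t = np.arange(len(z_ddot)) * dt
    B = []
    for f in freqs:
        w = 2.0 * np.pi * f
        # relative coordinate y = x - z:  y'' + 2 zeta w y' + w^2 y = -z''
        A  = np.array([[0.0, 1.0], [-w**2, -2.0 * zeta * w]])
        Bu = np.array([[0.0], [-1.0]])
        C  = np.array([[-w**2, -2.0 * zeta * w]])   # x'' = -w^2 y - 2 zeta w y'
        _, acc, _ = lsim((A, Bu, C, [[0.0]]), z_ddot, t)
        B.append(np.max(np.abs(acc)))
    return np.array(B)

dt, freqs = 0.002, np.linspace(1.0, 30.0, 30)
rng = np.random.default_rng(3)
spectra = []
for _ in range(8):                                  # M = 8 measured shocks
    n_p = int(rng.uniform(0.04, 0.08) / dt)         # shock duration in samples
    z_ddot = np.zeros(500)
    z_ddot[:n_p] = rng.uniform(2.0, 5.0) * np.sin(np.pi * np.arange(n_p) / n_p)
    spectra.append(shock_spectrum(z_ddot, dt, freqs))

spectra = np.array(spectra)
B_bar = spectra.mean(axis=0)                        # Equation 9.120
s_B = spectra.std(axis=0, ddof=1)                   # Equation 9.121
B_c = B_bar + 1.0 * s_B                             # Equation 9.122 with K = 1
```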

9.2.4 Least Squares Approximation and Curve Fitting


While the MLE is most commonly used, the least squares approximation is also a universal approach, often used as the criterion for data regression or curve fitting. The method was the culmination of several advances during the eighteenth century, built on the idea that the combination of different observations gives the best estimate of the true value and that errors decrease with aggregation. The method was first expressed by Cotes in 1722. A clear and concise exposition was published by Legendre in 1805, which described the method as an algebraic procedure for fitting linear equations to data; Legendre demonstrated the method by analyzing the same data as Laplace for the shape of the earth (see Charnes et al. 1976).

9.2.4.1 Concept of Least Squares


First, let us consider the concept of least squares.

9.2.4.1.1  Sum of Squares


Suppose that there exist n residues denoted as r_i, for i = 1, …, n. The sum S of squares is given by

$$S = \sum_{i=1}^{n} r_i^2 \tag{9.124}$$

9.2.4.1.2  Residue
The residue r_i is the difference between the measured value y_i and the fitted function f, which is generated from all the y_i values:

ri = yi – f(xi, p) (9.125)

In Equation 9.125, f is a function of the independent variable x_i and the parameter p.

9.2.4.1.3  Linear Least Squares


Consider f as a linear function of xi; then

p = {α 0, α1} (9.126)

and

f(x, p) = α 0 + α1x (9.127)

9.2.4.1.4  Nonlinear Least Squares


A polynomial may be used to describe the function f as

f(x, p) = α₀ + α₁x + α₂x² + ⋯  (9.128)

when

p = {α0, α1, α2, …} (9.129)

The sum of squares is minimized by setting

$$\frac{\partial S}{\partial\alpha_j} = 2\sum_{i=1}^{n} r_i\frac{\partial r_i}{\partial\alpha_j} = 0, \quad j = 1, \ldots, m \tag{9.130}$$

or

$$\sum_{i=1}^{n} r_i\frac{\partial f(x_i,\mathbf{p})}{\partial\alpha_j} = 0 \tag{9.131}$$

By solving Equation 9.131, the parameter p that minimizes the residue can be determined.

9.2.4.2 Curve Fitting
Curve fitting is often expressed by a mathematical function. The aim is to best fit
the measured data points, which are possibly subject to constraints. Interpolation
technology will allow for an exact fit to the data when it is required. Additionally, the
fitted function may be smoothed to result in a “better looking” curve.
Regression is often used for curve fitting through measured data pairs, which are
believed to be independent variables and their corresponding functions. Statistical

inference is often used to deal with any uncertainties and randomness. Extrapolation
can be used to predict results beyond the range of the observed data, although this
implies a greater degree of uncertainty.
We now consider the following function of x and y:

y = f(x) (9.132)

which will be curve-fitted through least squares regression.

9.2.4.3 Realization of Least Squares Method

9.2.4.3.1  Linear Function


In the situation when the function is linear,

y = f(x) = a1x + a 0 (9.133)

9.2.4.3.2  Nonlinear Function


In the situation when the function is nonlinear,

y = f(x) = a_n xⁿ + a_{n−1}x^{n−1} + ⋯ + a₀  (9.134)

In general, y_i can be written as

$$y_i = a_n x_i^n + a_{n-1}x_i^{n-1} + \cdots + a_0, \quad i = 1, \ldots, p, \; p > n \tag{9.135}$$

In matrix form, this can be written as

$$\begin{bmatrix} x_1^n & x_1^{n-1} & \cdots & 1 \\ x_2^n & x_2^{n-1} & \cdots & 1 \\ \vdots & \vdots & & \vdots \\ x_p^n & x_p^{n-1} & \cdots & 1 \end{bmatrix}\begin{bmatrix} a_n \\ a_{n-1} \\ \vdots \\ a_0 \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_p \end{bmatrix} \tag{9.136}$$

where the parameter vector is

$$\mathbf{p} = \begin{bmatrix} a_n \\ a_{n-1} \\ \vdots \\ a_0 \end{bmatrix} \tag{9.137}$$

From Equation 9.136, the coefficient vector can be determined as given below:

$$\begin{bmatrix} a_n \\ a_{n-1} \\ \vdots \\ a_0 \end{bmatrix} = \begin{bmatrix} x_1^n & x_1^{n-1} & \cdots & 1 \\ x_2^n & x_2^{n-1} & \cdots & 1 \\ \vdots & \vdots & & \vdots \\ x_p^n & x_p^{n-1} & \cdots & 1 \end{bmatrix}^{+}\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_p \end{bmatrix} \tag{9.138}$$

In Equation 9.138, the superscript + stands for the pseudo-inverse of a matrix; for a matrix A, it is written as

$$\mathbf{A}^{+} = \left(\mathbf{A}^{\mathrm{T}}\mathbf{A}\right)^{-1}\mathbf{A}^{\mathrm{T}} \tag{9.139}$$
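A compact numeric sketch of Equations 9.136 through 9.139 follows; the noisy quadratic data are an illustrative assumption.

```python
# Sketch: polynomial least squares via the pseudo-inverse
# (Equations 9.136 through 9.139).
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x**2 - 1.0 * x + 0.5 + 0.01 * rng.standard_normal(x.size)

n = 2
V = np.vander(x, n + 1)        # columns x^n, ..., x, 1, as in Equation 9.136
p = np.linalg.pinv(V) @ y      # A+ = (A^T A)^(-1) A^T, Equation 9.139
print(p)                       # approximately [2.0, -1.0, 0.5] = [a_n, ..., a_0]
```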

9.3 Vibration Testing
Vibration testing is an important measure in inverse dynamic problems. The focus
of this section is on random vibration-related issues, as opposed to general vibration
testing. Generally in vibration testing, the amount of randomness is fairly small in
comparison to the desired signals and the measurement noises. For this reason, ran-
domness is typically ignored. However, in some instances, the randomness decreases
the signal-to-noise ratio significantly. In this section, randomness and uncertainty
will be discussed only in qualitative terms, rather than quantitative details.
Strictly speaking, randomness and uncertainty are separate concepts. For random variables or processes, even though individual events cannot be predicted, moments and distributions can be estimated from the corresponding patterns. In contrast, uncertain events cannot be measured in this way. However, in application to engineering problems, randomness and uncertainty need not be differentiated in most situations.
For a more systematic approach, readers may consult McConnel (1995) for instance.

9.3.1 Test Setup
Test setup is the beginning step of vibration testing. To physically install a structure
that will simulate the system being tested, the system must first be correctly modeled.

9.3.1.1 Mathematical Model
Mathematical models are rather fundamental and are often referred to as an analytic
formulation or a closed-form solution. For example, the SDOF vibration system is
represented by the mathematical model

mx(t ) + cx (t ) + kx (t ) = f (t ) (9.140)

with a solution of

x(t) = h(t) * f(t) (9.141)



Figure 9.3  Simulink model of an SDOF system (integrators 1/s, gains C, K, and −inv(M), with input Simin from the workspace).

Figure 9.4  (a) Input (force, N) and (b) output (displacement, m) time histories for a "random" excitation.

9.3.1.2 Numerical Model
The use of a computer is often necessary to establish a numerical or computational
model. For example, consider the SDOF vibration model from Simulink as shown
in Figure 9.3.
Figure 9.4 shows the input and output of the numerical model.
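A numerical counterpart to the Simulink model is sketched below; the parameter values and the pseudo-random excitation are assumptions chosen only to mirror Figures 9.3 and 9.4.

```python
# Sketch: SDOF system of Equation 9.140 in state-space form, driven by a
# pseudo-random force, as a stand-in for the Simulink model of Figure 9.3.
import numpy as np
from scipy.signal import lsim

m, c, k = 1.0, 0.6, 40.0                    # assumed physical parameters
A = np.array([[0.0, 1.0], [-k / m, -c / m]])
B = np.array([[0.0], [1.0 / m]])
C = np.array([[1.0, 0.0]])                  # output: displacement x(t)

t = np.arange(0.0, 30.0, 0.01)
rng = np.random.default_rng(5)
f = rng.standard_normal(t.size)             # "random" excitation (Figure 9.4a)

_, x, _ = lsim((A, B, C, [[0.0]]), f, t)    # response history (Figure 9.4b)
```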

9.3.1.3 Experimental Model
If the models described in Section 9.3.1 or Figure 9.3 have no or acceptable error,
then it is not necessary to conduct a vibration test. However, these two models can
be exceedingly “deterministic.” Namely, there are many uncertain aspects in the real
world that cannot be precisely represented by mathematical or numerical models.
Individual examples include boundary conditions, specific properties of elements or
members in vibration systems, the “correct” number of degrees of freedom, and the
distributions of mass, damping, and stiffness, among others.

Figure 9.5  Photo of bridge test.

Figure 9.6  Recorded time histories: acceleration and displacement (N–S direction, soil).

In this situation, the setup of a physical test model to deal with the uncertainties
becomes necessary. During the test setup, it becomes imperative to isolate the test
targets in order to best minimize the test uncertainty.
Figures 9.5 and 9.6 show an example of experimental testing on a highway bridge
and recorded time histories on a model bridge, respectively.

9.3.2 Equipment of Actuation and Measurement


To use vibration testing as a measure of system identification, the input and output
are needed. The input is often a force, or a displacement, which is referred to as an
actuation. Both the input and output responses of the test model must be measured.
In using this method, it becomes possible to reduce the degree of uncertainties.

9.3.2.1 Actuation
Actuation is completed by using actuators, force hammers, or other measures. Figure 9.7
shows a typical hydraulic actuator, while Figure 9.9 shows an electromagnetic actuator
and Figure 9.10 shows an impact hammer.
One of the advantages of using actuators is that we can directly apply "random" excitations for testing, such as white and/or color noises. These artificial "random" processes, often called pseudo-random, can simulate true random environments; examples are experimental studies using earthquake shaking tables to provide simulated earthquake ground motions.

Figure 9.7  Photos of actuators and shaking tables.

It is noted that impact hammers cannot provide random process excitations; the response to impulse excitation is a free-decay vibration.
However, whether actuators or hammers are used, noise contamination is unavoidable; such noises are random processes in nature, and they should be accounted for in vibration testing.

9.3.2.1.1  Actuators
Due to the uneven frequency responses of actuators, the input demand and the output force/displacement will not be perfectly proportional. Furthermore, due to possible control loops that can be nonlinear as well as time varying, this phenomenon can be magnified.

9.3.2.1.1.1   Transfer Function of an Actuator  Therefore, in experimental studies


using an actuator, its transfer functions must first be considered. Practically speak-
ing, there are two kinds of functions: the first is the transfer function without load,
which is expressed in Equation 9.142 and shown in Figure 9.8. The second is the
transfer function with load, which is described in Equation 9.143.

$$H_O(\omega) = \frac{F(\omega)}{S(\omega)} \tag{9.142}$$

$$H_L(\omega) = \frac{F(\omega)}{S(\omega)} \tag{9.143}$$

At first glance, Equations 9.142 and 9.143 appear identical. However, the curves of force magnitude (in kips) vs. frequency differ significantly between the two transfer functions; even at zero load, the curves are dissimilar for different displacement amplitudes, as shown in Figure 9.8a. In this case, the main reason is the limitation of the maximum velocity of the given actuators, with nonlinear control as a secondary factor.

Figure 9.8  Measured transfer functions: (a) magnitude vs. frequency; (b) phase vs. frequency.
For given demands of force and/or displacement, deterministic or random, the
actual force/displacement will be understandably different. Note that the transfer
function shown in Figure 9.8 is measured through the sine sweep test. When random
signals are used, the transfer function should be measured accordingly.

9.3.2.1.1.2   Harmonic Distortion  Most actuators (see Figure 9.9a and b) will have
different degrees of harmonic distortion. The ideal forcing function can be written as

f(t) = f0sin(ω 0 t) (9.144)

Figure 9.9  Electromagnetic actuator: (a) photo of an actuator and (b) conceptual drawing.

This can be written in a more practical form:

f(t) = f₀sin(ω₀t) + f₁sin(2ω₀t) + f₂sin(3ω₀t) + ⋯ + n(Ωt + Θ)  (9.145)

In Equation 9.145, n(Ωt + Θ) denotes uncertainty and unwanted noises.


In comparing Equation 9.145 with Equation 9.144, it is observed that given a pure
sinusoidal signal for the forcing function to an actuator, the real-world signal is dis-
torted to have other frequency components, such as f1sin(2ω 0 t), f 2sin(3ω 0 t), etc. This
signifies that the actuator is behaving nonlinearly.

9.3.2.1.1.3   Tracking Filter  The nonlinearity can be minimized by using tracking


filters, which have the following transfer function:

$$H_T(\omega) = \begin{cases} 1, & \omega = \omega_0 \\ 0, & \text{elsewhere} \end{cases} \tag{9.146}$$

In the ideal case, the output of the actuator Fpractice(ω) in the frequency domain
will be

Fpractice(ω) = F(ω)HT (ω) (9.147)

9.3.2.1.2  Impact Hammers


Impulse-like forces are introduced by an impact hammer. An example of this is
shown in Figure 9.10.

9.3.2.1.2.1   Forcing Function  An example of the forcing function of an impact


hammer for an ideal case is

f(t) = f0 δ(t) (9.148)

Figure 9.10  Impact hammer. (Photo courtesy of PCB Piezotronics.)



Figure 9.11  Recorded forces, before and after applying the force window.

In practical applications, the half-sine-like time history of the impact force and the impact duration, characterized by the frequency ω, can be varied through the softness of the hammer head as well as the surface of the test object. This is represented by the following equation:

f(t) = f₀sin(ωt) + n(Ωt + Θ),  0 < ωt < π/2  (9.149)

In Equation 9.149, n(Ωt + Θ) denotes uncertainty and unwanted noises.

9.3.2.1.2.2   Force Window  The force window can be used to minimize unwanted
noise. The idealized function of the force window in the time domain is

$$w(t) = \begin{cases} 1, & 0 < t < \dfrac{\pi}{2\omega} \\ 0, & \text{elsewhere} \end{cases} \tag{9.150}$$

In the ideal case, the windowed force generated by the hammer, fpractice(t), is given in the time domain by

fpractice(t) = f(t)w(t) (9.151)

This is generally shown in Figure 9.11.

9.3.2.2 Measurement
Both the input forces and the output responses during a vibration test need to be
measured. The necessary instrumentation for these measurements typically con-
tains a sensory system, a data acquisition system, and signal processing, which may
introduce some degree of randomness and uncertainty. The basic method to address
randomness is averaging. For most vibration testing, randomness is ignored and
averaging is not carried out. However, to acquire more precise measurements, it will
be necessary (see Bendat and Piersol 2011).

9.3.2.2.1  Sensors
We now consider the basic concept of sensors and analyze the possible randomness.

9.3.2.2.1.1   Sensitivity  The sensitivity of a sensor is defined as the output of the


unit input of the sensor. Typically, the signals are measured at the lowest frequency
point.

$$S_{(.)} = \frac{\text{output}}{\text{input}} = \text{output per unit input} \tag{9.152}$$

Once a sensor is manufactured, its sensitivity should be calibrated and likely recorded
and included in its packaging. However, the sensitivity will likely experience drifts
and/or fluctuations over time due to many factors, such as temperature, pressure of
the atmosphere, mounting, cabling, and grounding, among others.
For different driving frequencies, the ratio of output and input will vary. This will
be further explained in Section 9.3.2.2.1.5.

9.3.2.2.1.2   Resolution, q  Resolution is the smallest possible measurement that


determines the ability to observe fine details in the measurement.

q = Sv–q emin  (9.153)

Sv–q refers to the sensitivity that converts the electronic signal into the required measurement quantity, and emin is the minimum electric signal that a measuring device can output without being contaminated by noise.

9.3.2.2.1.3   Dynamic Range  The dynamic range describes the minimum and the
maximum measurable ranges in decibels, as denoted in Equation 9.154.

DD = 20log(emax/emin) (dB) (9.154)

Here, emax is the maximum electric signal that a measuring device can output.

9.3.2.2.1.4   Time Constant  A time constant is described as

$$\frac{e(t) - e(\infty)}{e(0) - e(\infty)} = e^{-t/T} \tag{9.155}$$

where e(0) is the initial signal and e(∞) is the signal measured after a sufficiently long
period. Also, e(t) is the signal picked up at time t. In Equation 9.155, the parameter T
specifically denotes the time constant given by

T = RC (9.156)

where R in ohms and C in farads are respectively the resistance and capacitance of
the measurement circuit and T is in seconds.

Figure 9.12  Frequency response function.

9.3.2.2.1.5   Frequency Response  It is desirable for the frequency response function to remain constant over the measuring frequency range, as shown in Figure 9.12:

H(.)(ω) = const  (9.157)

In the above, the corresponding frequency band is referred to as the working band.
Characteristically for the working band, the phase ϕ vs. the frequency ω plot of the
frequency response function is a straight line. This is represented in the following:

ϕ = aω (9.158)

In this instance, a is the slope of the line.


However, as seen in Figure 9.12, the dynamic magnification curve will not be an
exact horizontal line. Denoting the normalized frequency response function as H̄(.)(ω),
then

H̄(.)(ω) = H(.)(ω)/S(.) ≠ 1    (9.159)

9.3.2.2.1.6   Linearity  The measurement linearity can be defined by Figure 9.13.


Here,

Ls = (eU − em)/em    (9.160a)

Figure 9.13  Linearity (signal level e vs. log f, with levels eU, em, and eL).



and/or

Ls = (em − eL)/em    (9.160b)

where eU and eL are the upper and lower limits of the measured signal, respectively,
and em is the mean value, with

Ls ≤ 5% (9.161)

9.3.2.2.1.7   Cross-Sensitivity  The cross-sensitivity is described by (see Figure 9.14)

sc = Sθ/Sz = tan(φ)    (9.162)


Characteristically, the cross-sensitivity is required to be less than or equal to 5%,
that is,

sc ≤ 5% (9.163)

An ideal sensor should have the linear capability to pick up physical signals and
output electric signals proportionally. Unfortunately, due to the limitation of measurement
ranges and the nature of nonlinearity in both the frequency and time domains,
the output signal will not be purely proportional to the physical signal. Additionally,
unwanted noise, including nonzero transfer signals, can contaminate the sensory
output.

Figure 9.14  Cross-sensitivity (sensitivities Sz, Sθ, and ST, with angle φ).



As a result, the actual sensitivity of a sensor (.) can be written as

S(.)(ω,t) = S(.)H(.)(ω)N(.)(ω,t) (9.164)

Here, H(.)(ω) is the normalized frequency response function of the sensor, which
describes the frequency range and the dynamic magnification. The function, N(.)(ω,t),
covers all the randomness due to sensitivity drift and noises at the moment of signal
pickup, among others.

9.3.3 Signal and Signal Processing


9.3.3.1 Data-Acquisition System
Figure 9.15 shows a typical data-acquisition system, which consists of several por-
tions of instrumentation.
As shown in Figure 9.15, there are many links in the signal chain. In each link,
there is a probability that the signals will be distorted by the above-mentioned non-
linear response or contaminated by external noises.
The gain of each link can be denoted by Si(ω,t). An example of such a gain is
the case of amplifiers. Similar to the case of the above-mentioned sensor, the corre-
sponding gain is also a function of the driving frequency and furthermore a random
process.

Si(ω,t) = SiHi(ω)Ni(ω,t),  i = 1, … (9.165)

The signal measured and stored in data memories denoted by Y(ω,t) will be dif-
ferent from the signal to be measured, denoted by x(t). This is due to the sensitivity

Figure 9.15  Example of a data acquisition system (sensor, cabling, power supply, amplifier, anti-aliasing filter, A/D–D/A converter, oscilloscope, computer, and grounding).



of the sensor as described in Equation 9.164 and the gain as described in Equation
9.165. The expression of randomness is simplified and is given by

Y(ω,t) = Np(ω,t) [∏i Si] ∫₀^t x(τ) hs(t − τ) dτ + Na(t)    (9.166)

In Equation 9.166, hs(t) is the impulse response function of the total measurement
system, which can be regarded as an inverse Fourier transform of the normalized
transfer function of the total measurement system. That is,

hs(t) ⇔ Hs(ω) = ∏i Hi(ω)    (9.167)

where H1(ω) is the transfer function of the first link of the measurement system,
namely, the sensor’s.
Using the knowledge of transfer functions, each Hi(ω) can be measured instru-
ment by instrument, resulting in Hs(ω). Nevertheless, a more accurate and effective
method is to measure Hs(ω) of the total system as a whole. The objective is to deter-
mine the linear range and allowed error range (refer to Figure 9.13).

Hs(ω) = ∏i S1Hi(ω) = S1 ∏i Hi(ω)    (9.168)

In an ideal case,

∏i Hi(ω) = 1    (9.169)

and
Hs(ω) = S1 = const    (9.170)

Equation 9.170 is often given in test manuals. However, Equation 9.170
can only be used when the measurement randomness proves to be
negligible.
Np(ω,t) is a random process that modifies the output convolution, caused by multiplicative
noise contamination. Furthermore, Na(t) is an additional random process
due to additive noise contamination. A simplified expression of Np(ω,t) can be observed
when only gain drift is present, which is often modeled by normal distributions.
This is written as

Np(ω,t) ~ N(μN(t), σN(t)) (9.171)



where μN(t) and σN(t) are the corresponding mean and standard deviation, respectively.
Equation 9.70 can be used to estimate the sample average x̄j to approximate
μ(t), while Equation 9.71 can be used to estimate the sample variance S²X(tj) to
approximate σ²(t).
According to Equation 9.71, by increasing the number of tests n,

S²X(tj) → 0    (9.172)

so that

σ(t) → 0 (9.173)

In so doing, the influence of Np(t) is significantly reduced. Nevertheless, systematic
error will certainly remain, given that it cannot be removed through averaging
(see Equation 9.110).
A simplified expression of Na(t) is the result of zero-drift measurement, among
many sources of noise. Averaging can reduce the influence of Na(t). The first and
second types of transfer function estimators, H1(ω) and H2(ω), are the specific
methods available to treat the additive noises.
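As a minimal sketch of how such estimators can be formed by averaging (the repeated input and output records x and y, stored row-wise, are assumed hypothetical data):

    % Averaged H1/H2 transfer function estimators (sketch)
    % x, y: n_avg-by-N matrices of repeated input/output records
    X = fft(x, [], 2);                    % FFT of each input record
    Y = fft(y, [], 2);                    % FFT of each output record
    Gxx = mean(conj(X).*X, 1);            % averaged input auto-spectrum
    Gyy = mean(conj(Y).*Y, 1);            % averaged output auto-spectrum
    Gxy = mean(conj(X).*Y, 1);            % averaged cross-spectrum
    H1 = Gxy ./ Gxx;                      % suppresses additive output noise
    H2 = Gyy ./ conj(Gxy);                % suppresses additive input noise
    coh = abs(Gxy).^2 ./ (Gxx .* Gyy);    % coherence, a check on noise level

With noise-free data, H1 and H2 coincide and the coherence equals 1; additive noise drives the coherence below 1.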
Figure 9.16a shows a measured gain of a data acquisition system whose nominal
value would be 5 if the system were not contaminated by the noises Np(ω,t) and Na(t).
Through averaging, the noise Na(t) can be reduced as shown in Figure 9.16b. On
the other hand, Np(ω,t) is a systematic error, which cannot be removed through aver-
aging. However, if the fluctuation is deterministic, then the previously mentioned
curve fit method can be used to obtain the approximated function.

9.3.3.2 Signal Processing and Window Functions


In the situation when a measurement error due to Np(t) exists, further steps will be
necessary.

Figure 9.16  (a) Raw and (b) averaged gain time histories (gain vs. time in hours).



9.3.3.2.1  Power Leakage and Frequency Windows


Consider a system excited by the driving frequency ω 0:


ω0 = 2πf0 = 2π/T0    (9.174)

Often, the total sampling period is not an integer multiple of the signal period T0.
When the signal is not sampled over whole periods T0, there will be discontinuities
in the magnitude and slope, which can be seen in Figure 9.17.
Since the total power of the signal should not be changed, the smaller spectral lines
illustrated in Figure 9.18b will share the power with the central line. As a result, the
length of the central line is reduced, which describes the concept of power leakage.

Figure 9.17  Total sampling period. (a) Signal with a frequency multiple of the sampling period. (b) Signal with a frequency not a multiple of the sampling period.

Figure 9.18  (a) Original spectrum. (b) Power leakage (magnitude vs. frequency).



Because the power leakage depends on the particular buffer size and sampling rate,
it occurs in the same manner in every repetition. Such error cannot be minimized by
the averaging of repeated tests and is therefore referred to as systematic error.

9.3.3.2.2  Window Functions, Minimizing Power Leakage


In order to minimize the effect of power leakage, window functions can be used.
Window functions are also referred to as apodization functions or tapering functions
and are functions zero valued outside of some chosen time interval.
A function that is constant inside the interval and zero elsewhere is referred to as a
rectangular window, also known as a boxcar window or Dirichlet window, because
the shape of its graphical representation is rectangular. In fact, when samples from
the signals within a certain measurement period are taken, the signal is shown to
be multiplied by the rectangular window function. In other words, the product is
also zero valued outside the time interval. All that means is the “view” through the
window.
To minimize the effectiveness of power leakage, window functions should be
nonnegative smooth “bell-shaped” curves, which greatly reduces the discontinuity.
Unlike the low-pass filter used for anti-aliasing, which is used before sampling
(A/D conversion), window function, also known as windowing, is typically used
after sampling. Denoting the total number of samples as N, the commonly used win-
dow functions in modal testing can be listed as follows:

1. Rectangular window

w(n) = 1, 0 < n < N; 0, elsewhere    (9.175)

2. Hamming window (Richard W. Hamming, 1915–1998)

w(n) = 0.53836 − 0.46164 cos[2nπ/(N − 1)], 0 < n < N; 0, elsewhere    (9.176)

3. Hanning window (Julius F. von Hann, 1839–1921)

w(n) = 0.5{1 − cos[2nπ/(N − 1)]}, 0 < n < N; 0, elsewhere    (9.177)

The Hanning and Hamming windows are both known as “raised cosine”
windows.

4. Cosine window (sine window)

w(n) = sin[nπ/(N − 1)], 0 < n < N; 0, elsewhere    (9.178)

5. Gauss window (σ ≤ 0.5)

 − 1  n − ( N −1)/ 2 2
 2  σ ( N −1)/ 2  0 < n < N
w ( n ) = e (9.179)

 0, elsewherre

6. Bartlett–Hann window

w(n) = 0.62 − 0.48∣n/(N − 1) − 0.5∣ − 0.38 cos[2nπ/(N − 1)], 0 < n < N; 0, elsewhere    (9.180)

7. Blackman window (α = 0.16) (Blackman and Tukey, 1959)

w(n) = (1 − α)/2 − 0.5 cos[2nπ/(N − 1)] + (α/2) cos[4nπ/(N − 1)], 0 < n < N; 0, elsewhere    (9.181)

8. Kaiser–Bessel window (Kaiser and Schafer 1980; Friedrich W. Bessel,


1784–1846)

w(n) = 0.3929 − 0.51 cos[2nπ/(N − 1)] + 0.0959 cos[4nπ/(N − 1)] − 0.0012 cos[6nπ/(N − 1)], 0 < n < N; 0, elsewhere    (9.182)


In using the above windows, the original signal y(t) is multiplied
by a window function, resulting in the function yw(t) denoted by

yw(t) = y(t)w(t) (9.183)



In the discrete time domain, this becomes

yw(n) = y(n)w(n) (9.184)
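As a minimal numerical sketch (the sample length N and the deliberately off-bin tone frequency are assumed for illustration), applying the Hanning window of Equation 9.177 before the FFT greatly reduces the leakage caused by the discontinuity:

    % Effect of windowing on power leakage (sketch)
    N = 1024;  n = (0:N-1).';
    w = 0.5*(1 - cos(2*pi*n/(N-1)));      % Hanning window, Equation 9.177
    y = sin(2*pi*10.3*n/N);               % tone not an integer number of cycles
    Yr = abs(fft(y));                     % rectangular window: power leaks
    Yw = abs(fft(y.*w));                  % windowed: leakage greatly reduced
    semilogy(0:N/2, [Yr(1:N/2+1) Yw(1:N/2+1)])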

9.3.4 Nyquist Circle (Harry Nyquist, 1889–1976)


In Section 9.3.3, we discussed the transfer function by using the absolute value vs.
frequency plot, as well as the angle vs. frequency plot.
If there existed no noise, theoretically, we could use the above-mentioned modal testing
to measure the natural frequency and the damping ratio. However, due to noise,
such curves will be altered; to find more accurate data, we therefore use a curve fit of
the Nyquist plot, for two reasons. First, the Nyquist plot is very close to a pure circle,
whose equation is simple and straightforward. Second, because the resonant region
of the Nyquist plot occupies the entire half circle, the corresponding curve fit can
provide more accurate results.

9.3.4.1 Circle and Nyquist Plot


The transfer function consists of both real and imaginary parts; both are functions
of driving frequencies. The Nyquist plot is a special form of the transfer function,
where the imaginary vs. real part is plotted. Figure 9.19 shows a typical Nyquist plot.
From Figure 9.19, it is seen that the Nyquist plot of an SDOF system is quite close
to a circle, which is referred to as the Nyquist circle, so that we can use the function
of a circle to fit the Nyquist plot.

Figure 9.19  Nyquist plot (imaginary vs. real part).



Denote R as

R = 1/(mωd)    (9.185)

We have

H(ω) = R/{2j[ζωn + j(ω − ωd)]} − R/{2j[ζωn + j(ω + ωd)]}    (9.186)

Note that, in the neighborhood of the natural frequency ωd, the first term on the
right-hand side of Equation 9.186 will be significantly larger than the second term.
Thus, we can rewrite Equation 9.186 as

H(ω) ≈ R/{2j[ζωn + j(ω − ωd)]}    (9.187)

That is, we can just use the first term to carry out the curve fit.
Rewrite Equation 9.187 as

H(ω) ≈ Re[H(ω)] + j Im[H(ω)] = (R/2)(ωd − ω)/[(ωd − ω)² + ζ²ωn²] + j(R/2)(−ζωn)/[(ωd − ω)² + ζ²ωn²]    (9.188)

It is seen that the following equation

{Re[H(ω)]}² + [Im[H(ω)] + R/(4ζωn)]² = [R/(4ζωn)]²    (9.189)

is an equation of a circle. The center is at (0, −R/(4ζωn)), and the diameter is R/(2ζωn).
Using this circle to fit the Nyquist plot, the natural frequency is the cross-point of the
circle and the imaginary axis (see Figure 9.20).
The Nyquist plot, as shown in Figure 9.20, is not exactly a circle. However, in the
resonant region, the Nyquist plot is very close to a circle; we thus can use the circle
fit to identify the corresponding modal parameters.
When accelerance frequency response function (FRF) is used, it can be proven that

A(ω) ≈ Re[A(ω)] + j Im[A(ω)] = −(R/2)(ωd − ω)ω²/[(ωd − ω)² + ζ²ωn²] + j(R/2)ζωnω²/[(ωd − ω)² + ζ²ωn²]    (9.190)

Figure 9.20  Using a circle to approximate the receptance Nyquist plot (center on the imaginary axis at distance R/(4ζωn), diameter R/(2ζωn); half-power frequencies ω1 and ω2; resonance at ωd).

In this case, we have the circle equation written as

{Re[A(ω)]}² + [Im[A(ω)] − Rω²/(4ζωn)]² = [Rω²/(4ζωn)]²    (9.191)

This circle is conceptually shown in Figure 9.21.

9.3.4.2 Circle Fit
To carry out the circle fit for the receptance Nyquist plot, assume that the center is
located at point (0, rjk), and the circle starts from the origin (0,0). In practice, the starting point
of the Nyquist plot is not at the origin (0,0); however, for convenience, we place
the starting point of the Nyquist circle exactly at the origin for a normal mode
that is not affected by any other modes of the system. Now, rewrite
Equation 9.189 in the form

x² + (y + rjk)² = rjk²    (9.192)

and furthermore

x² + y² + 2rjk y = 0    (9.193)

Denote

xi = Re[pαjk(ωi)]    (9.194)

and

yi = Im[pαjk(ωi)]    (9.195)

with m measurement points of the FRF; we can have

2 [y1, y2, …, ym]ᵀ rjk = −[x1² + y1², x2² + y2², …, xm² + ym²]ᵀ    (9.196)

Therefore, the parameter rjk can be calculated, in the least squares sense, as

rjk = −(1/2) [Σ_{i=1}^{m} yi(xi² + yi²)] / [Σ_{i=1}^{m} yi²]    (9.197)
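A minimal sketch of this least squares estimate (xi and yi are assumed to be column vectors of the measured real and imaginary receptance values):

    % Least squares circle parameter r_jk from m FRF points (Equation 9.197)
    rjk = -(yi.' * (xi.^2 + yi.^2)) / (2 * (yi.' * yi));
    % fitted Nyquist circle: center (0, rjk), radius |rjk|, passing the origin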

From the Nyquist plot, if we can further measure the half power frequencies ω1
and ω2, and furthermore the damped natural frequency ωdp, for the pth mode, then
the natural frequency ωp and the damping ratio ζ p can be obtained. And, with the
help of the parameter rjk, the mode shape can also be determined (Figure 9.21).
Theoretically speaking, the center of the Nyquist circle is exactly located at the
imaginary axis. Once the resonant point ωdp is known, which must also be located at
the imaginary axis, the center can be found at the halfway point from the origin to the
resonant point. However, in Section 9.3.4.3, we will show that, due to the influences of

other modes, the starting point of the Nyquist circle will move away from the origin.
Therefore, it is better to use the above method to locate the center and the origin.

Figure 9.21  Using a circle to approximate the accelerance Nyquist plot (center on the imaginary axis at distance Rω²/(4ζωn), diameter Rω²/(2ζωn); half-power frequencies ω1 and ω2; resonance at ωd).

9.3.4.3 Natural Frequency and Damping Ratio


As shown in the Nyquist plot, if we can measure the half-power frequencies ω1 and
ω2, and furthermore the damped natural frequency ωd, then the natural frequency ωn
and the damping ratio ζ can be obtained.
It is known that

ζ = (ω2 − ω1)/(2ωn) = (f2 − f1)/(2fn)    (9.198)

and

ωn = ωd/(1 − ζ²)^(1/2)    (9.199)

Therefore,

(1 − ζ²)ωn² = ωd²    (9.200)

Substituting Equation 9.198 into Equation 9.200, we have

ωp = [ωdp² + (ω2 − ω1)²/4]^(1/2)    (9.201)

Using Equations 9.198 and 9.201, the natural frequency and the damping ratio can
be calculated.
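As a brief numerical sketch (the half-power and damped frequencies below are assumed measured values):

    % Natural frequency and damping ratio from Nyquist measurements
    w1 = 62.1;  w2 = 64.7;  wd = 63.4;    % rad/s, hypothetical measurements
    wn = sqrt(wd^2 + (w2 - w1)^2/4);      % Equation 9.201
    zeta = (w2 - w1)/(2*wn);              % Equation 9.198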

Problems
1. Given an SDOF system with m = 10 kg, k = 100 N/m, and its damping
ratio is a random variable with a uniform distribution between 0.01 and 0.1.
Suppose that this system is excited by random initial velocity v0 ~ N(1, 0.2)
m/s.
a. Calculate the free decay responses and identify the corresponding damp-
ing coefficient.
b. Use the least squares curve fit to find the relationship of the identified
damping coefficient and generated damping ratio. Explain your results.
2. With the given six earthquake records (El Centro, Kobe, Northridge, in
both S-N and E-W directions):
a. Calculate the means, RMS values, and standard deviations of each
record.

b. Generate the time histories of ground displacement through the ground


accelerations. At t = 0.5, 1.0, and 10.0 s, find the distributions of the
instantaneous ground accelerations. Suppose that the earthquake ground
displacements are random processes. Check your results at the three
instances. If at a given place, say, an earthquake shaking table, these
time histories can be seen as realizations of a random process, is such a
process stationary and/or ergodic? Explain your judgment.
3. Use the given six earthquake records (El Centro, Kobe, Northridge, in both
S-N and E-W directions).
a. To generate your own pseudo response spectra of acceleration, with
PGA = 0.4 g, damping ratio = 0.05 and 0.20.
b. To generate your own response spectra of absolute acceleration, with
PGA = 0.4 g, damping ratio = 0.05 and 0.20. Compare your results with
the pseudo spectra.
c. Plot the standard deviations at each period vs. the period for the pseudo
spectra.
4. Use the maximum likelihood method to write formulae for estimation of
a. An exponentially distributed variable with a PDF given by

fX(x) = λe^(−λx),  λ > 0

b. A normally distributed random variable with a PDF given by

fN(x) = [1/(√(2π) σ)] exp[−(x − μ)²/(2σ²)]

5. A 3-DOF structure has mass, stiffness, and damping matrices given by M = diag([500, 500, 2000]) × 10³ kg,

C = [1050, −1000, 0; −1000, 2050, −1000; 0, −1000, 3200] kN/m-s

and K = [1000, −1000, 0; −1000, 2000, −1000; 0, −1000, 3200] MN/m

a. Calculate the natural frequencies and damping ratios.


b. Use MATLAB® ">>t=(0:1:1999)*0.02;Ag = 4 *randn(2000,30);" to
generate ground excitation and compute the absolute acceleration
ẍA = [ẍA1; ẍA2; ẍA3]ᵀ, of size 2000 × 30.
c. Calculate the corresponding transfer function H1, H2, and H3 through
30 averages.

6. Use the Nyquist circle fit to find the corresponding natural frequencies and
damping ratios. Compare your results with that obtained in Problem 5a.
7. a. Estimate the mean and the standard deviation of the following data:

X = [−0.1011 0.1918 0.3092 0.5513 0.5584


0.5436 0.7443 0.6915 0.8632 0.9183
0.9654 0.9594 1.1095 0.8044 1.0095
1.0146 0.9408 0.8533 0.7138 0.6796
0.5605 0.3612 0.2255 0.1057 −0.1425]

Y = [−0.1011 0.0614 0.0508 0.1692 0.0591


−0.0644 0.0380 −0.1009 −0.0020 −0.0048
0.0000 −0.0318 0.1095 −0.1874 0.0428
0.0896 0.0731 0.0578 0.0040 0.0677
0.0569 −0.0256 −0.0377 −0.0296 −0.1475]

b. Use MATLAB to generate t = (0:1:24)* 0.1307. Use MATLAB to plot


(t,x,t,x-y,t,y). Explain your results.
8. Use the given time history of ground excitation to study the peak and RMS
values of the vibration of an automobile; its basic weight is 15.0 kN, with
additional weight Wp to be a uniformly distributed random variable from
0.5 up to 5.0 kN. Assume that the stiffness of the suspension system is
2.0 MN/m. Assume that the damping coefficient is 12.5 kN/m-s. (Hint: the
automobile is modeled as an SDOF system, g = 10 m/s².)
9. Design an experimental test to investigate the relationship of the normal
pressure vs. friction coefficient under a sinusoidal vibration environment of
pairs of specimens with certain materials. The outputs of this study are
a. The curve fitted PDF of the friction coefficient under a given amplitude
of vibration with fixed normal pressure
b. The curve fitted PDF of the friction coefficient under a given amplitude
of normal pressure with a fixed level of vibration
c. The curve fitted joint PDF of the friction coefficient under given ampli-
tudes of normal pressure and vibration
10. Design an experimental test to investigate the relationship of the normal
pressure vs. friction coefficient under a white noise random vibration envi-
ronment of pairs of specimens with certain materials. The outputs of this
study are
a. The curve fitted PDF of the friction coefficient under a given bound of
vibration with fixed normal pressure
b. The curve fitted joint PDF of the friction coefficient under given ampli-
tudes of normal pressure and vibration
c. The curve fitted PDF of the friction coefficient that is greater than a given
level μm, under a given bound of vibration with fixed normal pressure.
10 Failures of Systems

In most cases, a system, such as a structure, a machine, a vehicle, etc., is designed to


be reliable. That is, the structure and/or components will not fail under given con-
ditions. The conditions include given levels of loads, deformations, environmental
conditions, etc., and the service periods. However, systems do fail and the failures
are likely to be random processes. In addition, these random processes are likely to
be nonstationary.
The fundamental approach of reliability-based design for structures is to deter-
mine the correct size of an element given its material and shape so that the failure
probability is at an acceptable level.
There are six different failure modes of structural components:

1. Yielding
2. Excessive deformation
3. Brittle fracture
4. Ductile fracture
5. Buckling
6. Fatigue

Failure modes 1–5 involve “level crossing” (level exceeding), while failure mode
6 is primarily for high-cycle fatigue.

10.1 3σ Criterion
When a quantitative description of either crossing levels or fatigue cycles is addressed,
probability is involved. For most cases, 0% failure is not achievable within reason-
able cost. For this reason, a small percentage of failure probability can be allowed.
This does, however, result in two issues. The first is how to establish the allowed
level of failure probability, and the second is how to calculate the failure probability.

10.1.1 Basic Design Criteria


A commonly used criterion is to require that the design strength R be greater than a speci-
fied level, for example,

R ≥ 3σS (10.1)

In this instance, σS is the standard deviation (RMS value) of a stationary Gaussian


stress process S(t). Note that the strength of a material is its ability to withstand an


applied stress without failure; this strength is essentially a random variable, although
here it is treated as deterministic.
When the mean value of the stress is nonzero, namely, μS, then

R ≥ μS + 3σS (10.2)

10.1.2 3σ Criterion
Given that S(t) is Gaussian and has zero mean, then

P[∣S(t) ∣ > 3σS] = 0.0026 (10.3)

or for nonzero mean stress,

P[∣S(t) ∣ > μS + 3σS] = 0.0026 (10.4)

Both Equations 10.3 and 10.4 indicate that the failure probability of the stress
level being greater than 3σS is less than 0.3%.
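As a quick numerical check of this probability (a sketch using the complementary error function):

    % Two-sided 3-sigma exceedance probability of a Gaussian process
    pf = erfc(3/sqrt(2))    % = 2*(1 - Phi(3)) ≈ 0.0027, i.e., less than 0.3%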

10.1.3 General Statement
10.1.3.1 General Relationship between S and R
A more general case is when both the stress S and the strength R are random. The
mean of R can then be expressed as

μR ≥ ξσS (10.5)

where the parameter ξ is a function of Q and CR. This is illustrated in Figure 10.1.
Here, Q is the ratio of μS and σS (which are the mean and standard deviation of the
stress process S(t), respectively), which is the inverse of the coefficient of variation
of S(t).

Q = μS/σS    (10.6)

In addition, CR is the coefficient of variation of R, the strength:

CR = σR/μR    (10.7)

Note that R is a random variable with a mean of μR and a standard deviation of σR.
The parameter ξ can be determined as

ξ = 3η (10.8)

FIGURE 10.1  Variable stress S and strength R.

where η is a safety factor and usually

η = 1.0 – 2.0 (10.9)

A larger value of the safety factor is taken when larger values of Q (larger than
0.5) and CR (larger than 0.1) are used. Table 10.1 lists the values of η.

Example 10.1

A stress S(t) is Gaussian with σS = 70 MPa and μS = 35 MPa. Design the mean
strength μR of a rod that satisfies the generalized 3σ rule, assuming that CR = 0.1.

Q = μS/σS = 35/70 = 0.5

Table 10.1
Safety Factor η
            Q
CR       0        0.1      0.2      0.3      0.4      0.5      0.6      0.7      0.8
0.00   1.0000   1.0033   1.0167   1.0333   1.0500   1.0833   1.1333   1.1600   1.2000
0.07   1.0267   1.0367   1.0667   1.0833   1.1000   1.1167   1.1433   1.1700   1.2100
0.10   1.0433   1.0667   1.0833   1.1000   1.1167   1.1500   1.1600   1.1733   1.2200
0.15   1.1167   1.1267   1.1400   1.1667   1.2000   1.2167   1.2500   1.3000   1.4733
0.20   1.2233   1.2500   1.2667   1.2833   1.3600   1.3750   1.4167   1.4700   1.5333

From Table 10.1, when CR = 0.1, the value of η is found to be 1.15 and ξ is
calculated to be 3.45.
The corresponding strength is

μR = ξσS = 3.45 × 70 = 241.5 MPa

For a comparison, consider R to be deterministic; then

R ≥ μS + 3σS = 35 + 3 × 70 = 245 MPa

Now, suppose that the strength of the rods to be designed is random for aspects
such as size, yielding stress, etc. To further reduce the chance of failure of the rods, let

μR – 1.0σR ≥ μS + 3σS

This results in

μR = 245/0.9 = 272 MPa

10.1.3.2 System Failure, Further Discussion


In Chapter 1, when the concept of failure probability, pf, is first introduced, it is shown
that the value of pf depends upon two probability density function (PDF) curves: the
PDF of load and the PDF of resistance. In Chapter 2, the concept of pf was further
extended to the examination of another set of two PDFs: the PDFs of realistic and
expected maximum frictions. The probabilities described in Equations 10.3 and 10.4
are actually failure probabilities. Although in Equations 10.3 and 10.4 only the value
of S(t) varies while the values of 3σS and/or μS + 3σS are deterministic, in general both
stress S and strength R can be random variables or even random processes.
Generally speaking, the failure probability is the collection of all the chances that
the real demands, QD in Equation 1.130, S in Equations 10.3 and 10.4, μQi in Equation
1.153, etc., exceed the expected resistance, R D in Equation 1.130, R in Equations 10.1
and 10.2, μR in Equation 1.153, etc. Since both the demands and the resistance can be
random processes, the basic equation of the double integration, see Equation 1.139a
for example, is the key model to compute the failure probability. However, the double
integration such as that given in Equation 1.139a can only handle random variables
and not a random process. In Sections 10.2 and 10.4, we will further discuss how to
estimate the failure probability when at least one of the demands or resistances is
time variable, namely, a random process.

10.2 First Passage Failure


As mentioned previously, there are six types of failures in general. Most of them
(except failure mode 4) can be modeled as a certain breaking of the limit state when
the demand S reaches the resistance R, namely, R = S. The first passage failure means
that the failure occurs the first time that S exceeds R. In the literature, Crandall et
al. (1966), Lin (1970), Vanmarcke (1975, 1984), and Marley (1991) provide excellent

contributions to study the first passage failure. In this section, the probability of the
first passage failure will be further considered.

10.2.1 Introduction
In general, once the stress S(t) is greater than the strength R, failure will occur. That is,

S(t) > R (10.10)

For example, the first passage failure can be seen in the brittle failure mode.
Now, let T and Ts be the time to failure and the duration of service life, respec-
tively. The failure probability, pf, of the first passage failure can then be expressed by

pf = P(T < Ts) (10.11)

Using Y to denote the maximum value of the peaks of the stress process, it can
be seen that the event Y > R implies a failure mode. Thus, the failure probability can
be written as

pf = P(Y > R) = 1 − FY (R) (10.12)

where FY (R) is the cumulative distribution function (CDF) of R.

10.2.2 Basic Formulation
10.2.2.1 General Formulation
Assume that the strength R(t) is characteristically large in comparison to the stress S(t),
so that the event S(t) > R(t) is rare, under the assumption that the up-crossing of R(t),
with a rate of vR+, is a Poisson process.

10.2.2.1.1  Failure Probability


Recall from Chapter 5 that

P(no up-crossing in Δt) = exp[−vR+(t)Δt]    (10.13)

For service life Ts, this becomes

P(no up-crossing in Ts) = ∏_{i=1}^{k} exp[−vR+(ti)Δt] = exp[−Σ_{i=1}^{k} vR+(ti)Δt]    (10.14)


From Equation 10.14, when

Δt → 0 (10.15)

Equation 10.14 can be reduced to

P(no up-crossing in Ts) = exp[−∫₀^Ts vR+(t) dt]    (10.16)

Thus, the failure probability of the level R being crossed is

pf = 1 − exp[−∫₀^Ts vR+(t) dt]    (10.17)

10.2.2.1.2  Rate of Crossing


To take advantage of Equation 10.17, the formula of v R+ must first be obtained. This,
in general, is not easy. However, in the case in which the process S(t) is Gaussian, the
formula of v R+ can be achieved by recalling Equations 5.24 and 5.26.

 1  R(t ) − µ  2 
v R+ (t ) = v0+ exp  −  s  (10.18)
 2  σ s  

Here, v0+ (t ) is the rate of zero up-crossing, and σs and σs are the mean and the
standard deviation of the stress, respectively.

10.2.2.2 Special Cases
Now consider several special cases of failure probability.

10.2.2.2.1  vR+ Being Constant


When R is constant, then from Equation 10.18, v R+ will also be constant and

pf = vR+Ts    (10.19)

10.2.2.2.2  Gaussian and Narrowband


If S(t) is Gaussian and a narrowband process, then

v0+ = f0    (10.20)

In this case, f0 is the center frequency of the narrowband process, and the failure
probability is

 1  R(t ) − µ  2 
pf = ( f0Ts ) exp  −  s  (10.21)
 2  σ s  

10.2.2.2.3  Constant Symmetric Bounded with Zero Mean


If the process has constant boundary and it is symmetric, namely, has zero mean,
then the following are required.

∣S(t)∣ = R (10.22)

S(t)∣max = R (10.23)

and

S(t)∣min = −R (10.24)

Since there is zero mean

μs = 0 (10.25)

and S(t) is a narrowband, consequently,

v0+ = 2f0    (10.26)

In this particular case, the failure probability is

 1  R 2
pf = (2 f0Ts ) exp  −    (10.27)
 2  σ s  

10.2.3 Largest among Independent Peaks


To consider the first passage failure, an alternative approach is to examine the PDF of
the peaks of the stress process Zi by assuming the strength R to be constant.

10.2.3.1 Exact Distribution
The exact distribution is an additional approach for when R is not a function of time.
The peak Zi, however, forms a random process.
Next, consider the case of a distribution FZ (z).
Suppose that Zi are mutually independent and Bi denotes the ith event for Zi < R;
then

Bi = Zi < R (10.28)

B is the event for which Y, the largest peak in a sample of size n, is less than R.
This is denoted by

B = Y < R (10.29)

Furthermore, B is found to be the intersection of all values of Bi’s, denoted by

B = B1 ∩ B2 ∩ … ∩ Bn = ∩_{i=1}^{n} Bi    (10.30)

All Bi’s are independent in general. Thus, P(B) can be written as

P(B) = ∏_{i=1}^{n} P(Bi) = [P(Bi)]^n    (10.31)

Conversely, P(B), when it is a CDF, can be written as

P(B) = P(Y < R) = FY (R) (10.32)

Additionally, P(Bi) as a CDF is

P(Bi) = P(Z ≤ R) = FZ (R) (10.33)

Note that

FY (R) = [FZ (R)]n (10.34)

Therefore,

P(no peak exceeds R) = P(B) = [FZ (R)]n (10.35)

or, alternatively, Equation 10.35 can be written as

pf = 1 − [FZ (R)]n = 1 − FY (R) (10.36)

If S(t) is narrowbanded, then peak Z will have a Rayleigh distribution such that

 1  z 2
FZ ( z ) = 1 − exp  −    (10.37)
 2  σ s  

Similar to the process of obtaining a CDF described in Equation 10.35, it is determined that

FY(R) = {1 − exp[−(1/2)(R/σs)²]}^n    (10.38)


Furthermore, since the failure probability is presumed small, the exponential
term in Equation 10.38 must also be small. Through the binomial series expansion
and by allowing

n = 2f0Ts (10.39)

pf can be approximated as

 1  R 2
pf ≈ (2 f0Ts ) exp  −    (10.40)
 2  σ s  

Comparing Equation 10.40 with Equation 10.27, we see that the failure prob-
ability can be obtained through alternative approaches. Apparently, larger resistance
strength R will result in smaller failure probability.

10.2.3.2 Extreme Value Distribution


10.2.3.2.1  CDF
A stationary peak process has a distribution of peak heights FZ (z). As n → ∞, FY (y)
will approach extreme-value distributions. For most commonly used distributions,
FY(y) approaches the extreme value type I distribution (EVI); for further reference,
refer to Equation 2.198, which is repeated as

FY(y) = P(Y ≤ y) = e^(−e^(−α(y−β)))    (10.41)

10.2.3.2.2  Mean and Standard Deviation


The mean and standard deviation of Y can be written as

μY = β + 0.577/α    (10.42)

σY = 1.283/α    (10.43)

10.2.3.2.3  Parameters of EVI


To take advantage of Equations 10.42 and 10.43, the parameters α and β must first
be calculated. These parameters are determined through the distribution of the indi-
vidual peak Z from the following equations:

α = n fZ(β)    (10.44)

β = FZ^(−1)(1 − 1/n)    (10.45)

10.2.3.2.4  Special Case of Narrowband Gaussian


In the special case that the stress process S(t) is a narrowband Gaussian, it follows that
the mean is given by

 0.577 
µY =  2 ln n + σS (10.46)
 2 ln n 

the standard deviation is given by

σY = 1.283 σS/√(2 ln n)    (10.47)

and the coefficient of variation (COV) CY is

CY = σY/μY = 1/(1.5588 ln n + 0.4497)    (10.48)

Furthermore, the parameters α and β are calculated by

α = √(2 ln n)/σS    (10.49)

and

β = √(2 ln n) σS    (10.50)

Example 10.2

Consider the case with the EVI where n = 1000 and σs = 50. Calculate the mean
and the standard deviation and plot the corresponding PDF of the EVI.
The PDF of the EVI can be obtained by taking the derivative of FY(y) (see Equation 10.41):

fY(y) = d[FY(y)]/dy = α e^(−e^(−α(y−β))) e^(−α(y−β))

With n = 1000 and σs = 50, we have α = 0.074 and β = 185.85. The mean is
193.61 and, from Equation 10.43, the standard deviation is 17.3. The PDF is plotted in Figure 10.2.
From Figure 10.2, the plot of the PDF is asymmetric and has a larger right tail, as
discussed in Chapter 2 (recall Figure 2.9). Since the EVI has a larger right tail, the
chance of obtaining a larger value of y, the extreme value, is comparatively greater.

FIGURE 10.2  PDF of EVI.
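A minimal sketch reproducing these values (following Equations 10.42, 10.43, 10.49, and 10.50):

    % EVI (Gumbel) parameters and PDF for Example 10.2
    n = 1000;  sigma_s = 50;
    alpha = sqrt(2*log(n))/sigma_s;       % Equation 10.49 -> 0.0743
    beta = sqrt(2*log(n))*sigma_s;        % Equation 10.50 -> 185.85
    muY = beta + 0.577/alpha;             % Equation 10.42 -> 193.6
    sigY = 1.283/alpha;                   % Equation 10.43 -> 17.3
    y = 0:0.5:400;
    fY = alpha*exp(-exp(-alpha*(y - beta))).*exp(-alpha*(y - beta));
    plot(y, fY)                           % reproduces Figure 10.2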

10.2.3.3 Design Value Based on Return Period


For the design value S 0, FZ (S 0) is the CDF for a stationary peak process, where

design life ≥ service life (10.51)

The total service period of S0 is Tn such that

Tn = n    (10.52)

When evenly distributed, the peak having a probability of exceedance equal to 1/n satisfies

P(Z > S0) = 1 − FZ(S0) = 1/n    (10.53)

10.3 Fatigue
In Chapter 5, we described the issue of fatigue by classifying it into two categories:
high cycle and low-cycle fatigue. The low-cycle test is further classified as type C and
type D low-cycle tests. However, the focus of those studies is on the time-varying devel-
opment, the model of random processes, and the corresponding statistical param-
eters. In this section, we will further discuss the issue of fatigue, with the focus on
fatigue failures.

10.3.1 Physical Process of Fatigue


In the following, we emphasize high-cycle fatigue, first introduced in Chapter 5. The
physical process of fatigue occurs, under high-cycle oscillatory tensile stress with
a large amplitude, when a small crack initiates at a certain location with stress concentration.
When the total number of cycles to make the material fatigue is greater than 10³,
generally 10⁵, it is said to be high-cycle fatigue (see Figure 10.3 and also Figure 5.14).

FIGURE 10.3  S–N curve for brittle aluminum with a UTS of 320 MPa (stress in MPa vs. life in cycles).

Fatigue is a special failure mode, perhaps the most important failure mode in
mechanical design, characterized by the following:

1. Randomness and inherent unpredictability
2. Difficulty in using laboratory test data to determine practical material behavior
3. Difficulty in establishing an accurate model of mechanical environments
4. Environmental effects producing fatigue-sensitive stress distributions

Let N denote the cycles of fatigue and S the amplitude of stress. The S–N curve
can be used to describe the phenomenon of fatigue through

N = N(S;A) (10.54)

Here, A is a vector of parameters related to the S–N curve.

10.3.2 Strength Models
In the literature, there are several fatigue models available. Fundamentally, the stress-
based approach, the strain-based approach, and the fracture mechanics approach are
the basic considerations.

10.3.2.1 High-Cycle Fatigue
The key parameter obtained through fracture mechanics is the stress intensity factor
range given by

ΔK = Y(a) S √(πa)    (10.55)

Here, ΔK is the stress intensity factor range; S is the applied stress range; a is the
crack depth for a surface flaw or half-width for a penetration flaw; and Y(a) is the
geometry correction factor.
The crack growth rate da/dn can be represented as

da/dn = C(ΔK)^m    (10.56)

Equation 10.56 is referred to as the Paris law (Paris 1964). In this instance, C and
m are empirical constants. Therefore, integrating Equation 10.56 will yield

N = [1/(S^m Cπ^(m/2))] ∫_a0^af da/[Y^m(a) a^(m/2)]    (10.57)

where a0 is the initial crack length and af is the failure crack length.


Note that when the level of crack is less than a certain threshold, ΔKth, the crack
will not propagate. Wirsching and Chen (1988) developed a model to estimate the
random fatigue with ΔKth > 0.

10.3.2.2 Miner’s Rule, More Detailed Discussion


10.3.2.2.1  Constant-Amplitude Tests vs. Random Amplitude
In Chapter 5, we briefly mentioned Miner’s rule. Now, let us further study it by
specifying the stress of a test specimen. Recall the S–N curve shown in Figure 5.14;
for practicality, the stress S is assumed to be constant, as in fatigue tests. In reality, S
varies as a random process. Thus, there is a need to simplify the description of S in
order to obtain a workable formula. In Figure 10.4, we show the Miner’s rule (Figure
10.4a) with conceptual plots describing variations of stress Si (Figure 10.4b).

10.3.2.2.2  Assumption
A stress process can be described by discrete events, such as numbers of cycles. The
spectrum of amplitude for stress cycles can be defined as

D = Σ_{i=1} ni/Ni    (10.58)

Equation 10.58 is a more general description than Equation 5.145. Again, D signi-
fies the damage index, where if

D ≥ 1.0 (10.59)

then the specimen is considered to be damaged or be in a failure mode.


In reality, the event that fatigue failure occurs is random. That is, we may see
a failure even if D < 1, or no failure when D > 1. In Chapter 5, we used a Markov
chain to study the chance of failure at different levels (in fact, different levels of D).
In the following, we examine fatigue failure from another angle: whenever Equation
10.59 is reached, the failure deterministically occurs; we now consider the chance of
reaching D ≥ 1.0.

FIGURE 10.4  Miner’s rule. (a) Variation of stress. (b) Stress level (amplitude or range) vs. cycles to failure N.

10.3.3 Fatigue Damages
The concept of the damage model is important because of the difficulty of modeling
structural damage. One specific reason is the fact that the excitation is often a random
process. Here, some specific models will be described for a better understanding
of modeling random damage.

10.3.3.1 Narrowband Random Stress


In the following discussion, we assume that the stress S(t) is a narrowband random
process.

10.3.3.1.1  Discrete Stress Spectrum, Fundamental Case


The following method reduces the random process to a discrete random variable.
Generally speaking, the amplitude of S(t) is a continuous number. By artificially
drawing lines as shown in Figure 10.5a, the stress within the two lines is denoted as
Si and the corresponding window as ΔSi. In the range ΔSi, the number of peaks is
counted as ni (see Figure 10.5b).

FIGURE 10.5  Probability mass function. (a) Range of ΔSi. (b) PMF vs. ΔSi.

The fraction of stress at level Si is written as

fi = ni/n    (10.60)

where

n = Σ_i ni    (10.61)

As a result, fi is shown as the probability mass function of the random variable Si.
The total fatigue damage can now be written as

D = Σ_{i=1}^{k} ni/Ni = n Σ_{i=1}^{k} fi/Ni    (10.62)

10.3.3.1.2  Discrete Stress Spectrum, Linear Model for S–N Curve


By assuming the fatigue strength to be a linear function in the log–log plot, the
fatigue analysis can be significantly simplified. In so doing, the fatigue strength A is
equated to

NS^m = A    (10.63)

Comparing Equations 10.57 and 10.63 will yield

A = [1/(Cπ^(m/2))] ∫_a0^af da/[Y^m(a) a^(m/2)]    (10.64)

With the linear relationship of Equation 10.63, damage D can be written as

D = (n/A) Σ_{i=1} fi Si^m    (10.65)
In Equation 10.65, Si indicates the ith amplitude or range. As a result, S is a dis-


crete random variable. Thus, consider that its expected value may be written as

E (S m ) = ∑fS i =1
m
i i (10.66)

The substitution of Equation 10.66 into Equation 10.65 will result in

n
D= E ( S m ) (10.67)
A

In the case that S is constant, Equation 10.65 will further be reduced to

n m
D= S (10.68)
A

An equivalent constant-amplitude stress is obtained by comparing Equations


10.67 and 10.68 yielding

Se = [E(Sm)]1/m (10.69)

Here, Se is referred to as Miner’s stress or equivalent stress.


Failures of Systems 535

10.3.3.1.3  Continuous Stress Spectra


Previously, S was taken to be discrete. However, for a continuous model, the prob-
ability mass function will become a PDF, such as a Rayleigh distribution. For exam-
ple, the probability mass function in the range (s, s + ΔS) is

fi ≈ f S (s)Δs (10.70)

In this example, the total fatigue damage is the sum of all incremental damages
in each window ΔS, since

k
D≈n ∑ f N(s()s∆) s
i =1
S (10.71)

Now, consider a continuous stress as

Δs → 0 (10.72)

The damage will be equated to


f S ( s ) ds
D=n
∫ 0 N (s )
(10.73)

For a linear S–N curve, Equation 10.73 becomes


n

D= s m fS (s ) d s (10.74)
A 0

The solution of Equation 10.74 is given by

n
D= E ( S m ) (10.75)
A

Comparing Equations 10.67 and 10.75, the expected value is calculated by using
the integral described in Equation 10.74.

10.3.3.1.4  Continuous Spectra, Special Cases


Now, suppose that S(t) is stationary and a narrowband Gaussian.

10.3.3.1.4.1   Rayleigh Distribution of S(t)  When the stress amplitude or range is


of Rayleigh distribution, then based on the amplitude of the stress S and the strength
A, we have

( ) 1 
m
E ( S m ) = Sem = 2σ Γ  m + 1 (10.76)
2 

and based on range, we can write

( ) 1 
m
E ( S m ) = Sem = 2 2 σ Γ  m + 1 (10.77)
2 

where Γ(.) is the gamma function.


Fatigue damage at time t is obtained by combining Equations 10.76 and 10.77
with Equation 10.75 yielding the following, when S and A are based on amplitude,

D = (v0+τ/A)(√2 σ)^m Γ(m/2 + 1)    (10.78)

and S and A are based on range,

D = (v0+τ/A)(2√2 σ)^m Γ(m/2 + 1)    (10.79)

Here,

n = v0+ τ    (10.80)

where v0+ is the zero up-crossing rate (positive slope).

10.3.3.1.4.2   Weibull Distribution of S(t) (Waloddi Weibull, 1887–1979)  When


stress amplitude or range is Weibull, the CDF of S is written as

FS(s) = 1 − e^(−(s/δ)^ξ)    (10.81)

Here ξ is dimensionless and is referred to as the Weibull modulus or shape param-


eters, often used to describe variability in measured material strength of brittle mate-
rials. When measurements show high variation, ξ will be small, which indicates that
flaws are clustered inconsistently and the measured strength is generally weak and
variable. Products made from components of low Weibull modulus will exhibit low
reliability and their strengths will be broadly distributed.
The parameter δ is the scale parameter of the distribution and to be later elimi-
nated. In this case, the expected value of Sm is

E(S^m) = δ^m Γ(m/ξ + 1)    (10.82)

Let the design stress for static failure modes be S 0. The probability of a “once-in-
a-lifetime” failure is represented by

P(S > S0) = 1/NS    (10.83)

Here, NS is the total number of stress applications in the service life. Substitution
of Equation 10.81 into Equation 10.83 will yield

S0 = [ln(NS)]^(1/ξ) δ    (10.84)

The elimination of variable δ is achieved by substituting Equation 10.84 into


Equation 10.82, producing

D = (NS/A) S0^m [ln(NS)]^(−m/ξ) Γ(m/ξ + 1)    (10.85)

10.3.3.1.5  Blocks of Continuous Spectra


In the previous example, the Weibull distribution is used to model long-term fatigue
stress. An alternative approach is to make a nonstationary process stationary by
separating the total process into k blocks. For each block, the duration becomes
comparatively short; thus, the variation can be sufficiently small that a stationary
process can be approximately assumed.
In each block, the Rayleigh peak distributions can be assumed. The RMS of each
block is σi, the rate of zero up-crossing is vi + , and the time of application is τi. The
total damage can then be written as follows, in which S and A are based on amplitude:

D = Σ_{i=1}^{k} (vi+τi/A)(√2 σi)^m Γ(m/2 + 1)    (10.86)

When A is the range (peak-to-valley), the damage becomes

D = Σ_{i=1}^{k} (vi+τi/A)(2√2 σi)^m Γ(m/2 + 1)    (10.87)

10.3.3.1.6  Mean Stress


In the previous example, the stress process was assumed to be zero mean.
Nevertheless, cases with a nonzero mean of μS do occur. In this instance, the coef-
ficient of fatigue strength A is adjusted by

A = A0 (1 − μS/Su)^m,  μS ≥ 0    (10.88)

Here, A0 is the coefficient of fatigue strength based on zero-mean tests, while Su


is the ultimate strength of the material.

10.3.3.2 Wideband Random Stress


10.3.3.2.1  Equivalent Narrowband Process
In real occurrences, processes are frequently non-narrowbanded, although some can
be treated as narrowband. This is seen in Figure 10.6a and b.
In engineering practice, a judgment described by the following equation can be
made on the amount of jittering:

W( f0) > 20ΣW( f h ) (10.89)

In the above, W(f0) is the power spectral density function at the fundamental fre-
quency f0, and ΣW( f h ) is the sum of the power spectral densities (PSDs) of the rest of
the frequency components.
Equation 10.89 can be used as a criterion for using the following linear S–N model:

NSm = A (10.90)

To treat the process as an equivalent narrowband, let

D = (v0+τ/A)(√2 σ)^m Γ(m/2 + 1),  S and A based on amplitude    (10.91)

FIGURE 10.6  (a) Non-narrowband process that can be treated as narrowband. (b) Narrowband process.

D = (v0+τ/A)(2√2 σ)^m Γ(m/2 + 1),  S and A based on range    (10.92)

10.3.3.2.2  Rainflow Algorithm


In actual processes, there exist many non-narrowbanded processes that cannot
be treated as narrowband processes. When the Miner’s rule is used, we see that
the key issue is to determine the amplitude of stress Si and to count the number of
cycles ni associated with Si. For narrowband responses, there is only one “up and
down” in one cycle, so that the amplitude between the up and the down is obvious
and easy to count. For a broadband system, the amplitude is more difficult to real-
ize. We thus need certain methods or algorithms to further explore these Si and ni.
Among the algorithms for identifying stress cycles in a given wideband record, the
rainflow method is widely used to estimate the stress range and mean values (see
Dowling 1972).
Figure 10.7a and b conceptually plots the narrowband and broadband systems,
respectively. These responses are calculated by assuming m = 10 and k = 400. When
the damping ratio is small, say, ζ = 0.04, the response is narrowband. When the
damping ratio is large, say, ζ = 0.8, the response becomes broadband.
The rainflow algorithm, also referred to as the “rainflow counting method,” is
used in the analysis of fatigue data in order to reduce a spectrum of varying stress
into a set of simple stress reversals. It allows the application of Miner’s rule to conse-
quently assess the fatigue life of a structure subject to complex loading.
This algorithm is applied by first transforming the stress S(t) to a process illus-
trated with dotted lines as shown in Figure 10.7b and the solid line in Figure 10.7a
with peaks and troughs. In Figures 10.7b and 10.8a, at t = 0, the direction of the
process trajectory is shown by the thick dotted arrow. Then, the graph is rotated
90° as illustrated in Figure 10.8b. Note that the original graph is shown in Figure
10.8a. In Figure 10.8 at t = 0, the direction of the process trajectory is also shown
by the thick dotted arrow. The tensile stresses are expressed as the water sources at
both peaks. The compressive stresses are expressed as the troughs.

FIGURE 10.7  (a) Narrowband and (b) broadband systems (displacement vs. time).

The downward water flows are considered according to the following rules (note that, in order to show the concept of the rainflow method, the responses in Figures 10.7 and 10.8 are different):

1. A rainflow path starts at a trough, continuing down the roof until it encoun-
ters a trough that is more negative than the origin. (For example, the path
starts at 1 and ends at 5.)
2. A rainflow path is terminated when it encounters a flow from a previous
path. (For example, the path that began at 3 was terminated as shown in
Figure 10.8.)
3. A new path is not started until the path under consideration is stopped.
4. Trough-generated half-cycles are defined for the entire record. In each
cycle, the stress range Si is the vertical excursion of a path. The mean µ Si is
the midpoint. (For example, see S1 and S2 in Figure 10.8b.)
5. The process is repeated in reverse with peak-generated rainflow paths. For
sufficiently long records, each trough-generated half-cycle is matched to
the peak-generated half-cycle to form a whole cycle. One may choose only
to analyze a record for peak (through) generated half-cycles, thus assuming
that each cycle is a full cycle.

FIGURE 10.8  (a) Process of stress and (b) rainflow.



10.3.3.2.3  Closed-Form Expression for Rainflow Damages


The closed-form formula for fatigue damage under a wideband stress process can be
developed through the rainflow algorithm.

10.3.3.2.3.1   Empirical Model  Wirsching and Light (1980) developed an empiri-


cal model of a general expression for fatigue damage, which can be written as

D = λ(ε, m)D NB (10.93)

Here, λ(ε, m) is a rainflow correction factor, and ε is the spectral width parameter. DNB
is the damage estimated in Equations 10.94 and 10.95 using a narrowband process.
When S and A are based on the amplitude,

DNB = (v0+τ/A)(√2 σS)^m Γ(m/2 + 1)    (10.94)

where v0+ is the equivalent frequency, the rate of zero up-crossing. When S and A are based on the range,

DNB = (v0+τ/A)(2√2 σS)^m Γ(m/2 + 1)    (10.95)

The rainflow correction factor can be obtained by simulating processes contain-


ing a range of spectral shapes such as

λ(ε, m) = a(m) + [1 − a(m)](1 − ε)^b(m)    (10.96)

where empirically

a(m) = 0.926 − 0.033m (10.97)

and

b(m) = 1.587m − 2.323 (10.98)
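A minimal sketch of this correction (the S–N exponent and spectral width values are assumed):

    % Wirsching-Light rainflow correction factor, Equations 10.96-10.98
    m = 3;  eps_w = 0.5;                  % assumed S-N exponent and spectral width
    a = 0.926 - 0.033*m;                  % Equation 10.97
    b = 1.587*m - 2.323;                  % Equation 10.98
    lambda = a + (1 - a)*(1 - eps_w)^b;   % Equation 10.96
    % wideband damage estimate: D = lambda * D_NB (Equation 10.93)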

10.3.3.2.3.2   Refined Model of Effective Stress Range H  A more refined model


is achieved by counting the effective stress range H, considered as the Rayleigh dis-
tribution, (see Ortiz and Chen 1987). The CDF of H is then written as

 1  h 2
FH (h) = 1 − exp  −   (10.99)
 2  2β k σ S  

Here, βk is the generalized spectral bandwidth given by

βk = [M2Mk/(M0Mk+2)]^(1/2)    (10.100)

and the parameter k is

k = 2.0/m    (10.101)

In Equation 10.100, Mj is the jth moment of the one-sided spectral density function:

Mj = ∫₀^∞ f^j WS(f) df    (10.102)

In this approach, the general expression for the wideband damage is

D = λkD NB (10.103)

where

λk = βk^m/α    (10.104)

Here, α is the irregular factor, as defined in Equation 5.40. It is repeated as follows:


α = v0+/vp = ω0+/ωp    (10.105)

10.3.3.2.3.3   Alternative Approach  An alternative approach (Lutes and Larson


1990) can be written as

D = λLD NB (10.106)

where

λL = [M(2/m)]^(m/2)/v0+    (10.107)

where M(2/m) denotes the spectral moment of order 2/m (see Equation 10.102).

A typical value for the fracture mechanics model or for welded joints is m = 3.

10.3.4 Damages due to Type D Low Cycle


In the above discussion, two types of failure are considered: the first passage and the
high-cycle fatigue. The commonality of these two cases is that once the critical point

is reached, the failures occur. For instance, either the stress S(t) exceeds the allowed
strength R or the damage index D reaches 1.0. Explicitly, before these critical points,
no artificially defined damage occurred.
From the viewpoint of a random process, the process is “memoryless.” The process
is often used to describe the stress time history but not the history of damage growth.
In real-world applications, damages that experience growth over time must be
dealt with. Examples of such damages include crack propagations, incidents of aging
and corrosions, gradual loss of pre-stress in reinforced concrete elements, etc. In
these situations, the earlier damage will not be self-cured and will affect the remain-
ing damage process. Thus, it will no longer be “memoryless.”
Very often, such a damage process is random and is caused by the nonlinear
stiffness of systems. While in Chapter 5, we discussed the low-cycle development
by using a Markov model on the type D fatigue, in the following, we will further
consider the corresponding failure mode and its related parameters.

10.3.4.1 Fatigue Ductility Coefficient


As mentioned in Chapter 5, when the stress becomes sufficiently high such that the
material yields, the deformation becomes plastic. In this case, low-cycle fatigue may
occur and the failure mode is defined differently. In general, the cycles to fatigue are
less, or considerably less, than 1000. It can be a loss of the entire function of systems,
the collapse of the structure, or the introduction of severe structural damage. When low-
cycle fatigue occurs, the account in terms of stress is less revealing and the strain in the
material will offer more enhanced information. In Chapter 11, this point of concept will
be discussed in a more detailed fashion. Low-cycle fatigue is regularly characterized
by the aforementioned Coffin–Manson relation (see Sornette et al. 1992) defined by

Δεp/2 = ε′f (2N)^c    (10.108)

In this instance, Δεp /2 refers to the amplitude of the plastic strain and ε′f denotes
an empirical constant called the fatigue ductility coefficient or the failure strain for
a single reversal. Furthermore, 2N is the number of reversals to failure of N cycles,
while c is an empirical constant named the fatigue ductility exponent, commonly
ranging from −0.5 to −0.7 for metals in time-independent fatigue.

10.3.4.2 Variation of Stiffness
In the course of type D low-cycle fatigue, a structure will have decreased stiffness,
which can be seen when the inelastic range is reached and then repeated. Rzhevsky
and Lee (1998) studied the decreasing stiffness and found that the stiffness decrease
can be seen as a function of the accumulation of inelastic deformation. This type of
accumulative damage is more complex than pure low-cycle fatigue. This is because
during certain cycle strokes, the deformation will be sufficiently too large to yield
the structure, whereas in other cycles, the deformations can respectively be small.
Empirically, the stiffness and the accumulation can be expressed as
kn = ko e^(−ao sh^(−1)(γn))    (10.109)

FIGURE 10.9  Stiffness versus accumulation of inelastic deformation (damage factor γn; steel vs. RC).

Here, k_o and k_n are, respectively, the original stiffness and the stiffness after n semicycles of inelastic deformation; a_o is the peak value of the first inelastic deformation. The subscripts 0, 1, and n denote the cycle without inelastic deformation, the first, and the nth cycle of inelastic deformation, respectively. Additionally, the term γ_n is defined as

γ_n = Σ_{i=1}^{n} a_i    (10.110)

The term γn is called the damage factor, which is the summation of the absolute
value of all the peak values of the inelastic deformations.
For an initial stiffness k_o, the values of the inelastic deformations a_i can be specified in an experimental study of certain types of reinforced concrete (RC) components and structures, in comparison with steel, whose stiffness is virtually constant. Figure 10.9 conceptually shows the relation between the decreased stiffness and the damage factor. For comparison, both the constant and the decreasing stiffness are plotted in Figure 10.9; conceptually, both are given the same initial value. It is then seen that, as the inelastic deformations accumulate, the constant stiffness maintains its value until its final failure stage, whereas the decreasing stiffness begins to diminish from the first semicycle. The decreasing stiffness will also eventually reach total failure; it is conceptually plotted at the point when the constant stiffness fails.

Example 10.3

At 65% of the accumulation of the total inelastic deformation, an individual stiffness, denoted by k_0.65, is considered to be the minimum allowed value. This is written as

k_0.65 = k_0 e^{−a_o sinh⁻¹(γ_0.65)}    (10.111)

where the term γ_0.65 is defined as

γ_0.65 = Σ_{i=1}^{0.65n} a_i    (10.112)
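The degradation model of this example can be evaluated numerically. The following Python sketch is ours, not from the text; the initial stiffness k_o, the value a_o, and the sequence of semicycle peaks a_i are all assumed for illustration, and sh⁻¹ is interpreted as the inverse hyperbolic sine.

import numpy as np

# Sketch of the stiffness-degradation model k_n = k_o * exp(-a_o * asinh(gamma_n))
# (Equation 10.109) with the damage factor gamma_n of Equation 10.110.
# All numerical values are assumed.

k_o = 1.0e8                                       # initial stiffness, assumed units
a_o = 0.1                                         # peak of the first inelastic deformation
a_i = np.abs(np.random.default_rng(0).normal(0.05, 0.02, 40))  # semicycle peak values

gamma_n = np.cumsum(a_i)                          # Equation 10.110
k_n = k_o * np.exp(-a_o * np.arcsinh(gamma_n))    # Equation 10.109

# minimum allowed stiffness at 65% of the total accumulation (Equation 10.111)
k_065 = k_o * np.exp(-a_o * np.arcsinh(0.65 * gamma_n[-1]))
print(k_n[-1], k_065)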

10.4 Considerations on Reliability Design


In Section 10.2, the random relationship between the stress process S(t) and the ran-
dom strength R was studied. It was indicated that the probability of S > R is seldom
zero. Chapter 2 exemplified the fundamental approach of the probability- or reliability-​
based design to ensure that the probability of S > R is smaller than the allowed level.
Practically, the stress S can be the result of more than one load, all of which can be
random in nature. In this section, the formula of calculating the failure probability
and the surface of load combinations will be further examined.
In the following, we use a case study to explain the necessary procedure of reli-
ability design on systems whose loads are combinations of random processes. The
case study is a reliability design of highway bridges, which are subjected to multi-
hazards (MHs). The basic idea is to use the method of variable separation to approxi-
mate random processes by random variables.

10.4.1 Further Discussion of Probability-Based Design


10.4.1.1 Random Variable vs. Random Process
Recalling the concept of the limit state introduced in Chapter 1 (see Equation 1.152),
we can write (also see Ellingwood et al. 1980, Nowak 1999)

F = R − L (10.113)

where we use L to denote the realistic load. That is, when F = 0 is reached, failure
occurs. Similar to Chapter 1, Equation 1.154, the failure probability is given as

pf = P(F ≤ 0) = P(R – L ≤ 0), for most cases. (10.114)

Here, instead of demand Q, symbol L is used to denote multiple loads.


In Equation 10.114, both R and L are random variables, which are essentially time
invariant. However, consider the term L in reality, which can be MH loads, and these
loads are likely time varying in nature. That is, we often have

L = L1 + L2 + L3 + … (10.115)

And each hazard load L_i is likely time varying; that is,

Li = Qi(t) (10.116)

Therefore, it can be very difficult to obtain the PDF of the combined load, because the loads are actually random processes. In most cases, the loads are not directly additive.
In developing probability-based designs of bridges subjected to MH extreme loads,
researchers have pursued the following. Wen et al. (1994) first provided a compre-
hensive view of multiple loads and divided the total failure probability into several
partial terms. Although there is no detailed model of how to formularize these partial
failure probabilities, Wen et al.’s work pointed to a direction to establish closed-form
analytical results for limit state equations, instead of using empirical or semi-empirical

approaches. Nowak and Collins (2000) discussed alternative ways for several detailed
treatments in load combinations, instead of partial failure probabilities. Ghosn et al.
(2003) provided the first systematic approach on MH loads for establishing the design
limit state equations. They first considered three basic approaches: (1) Turkstra’s rule,
(2) the Ferry–Borges model, and (3) Wen’s method. In addition, they also used Monte
Carlo simulations (which will be discussed in Chapter 11). Among these approaches,
Ghosn et al. focused more on the Ferry–Borges method, a simplified model for MH
load combinations. Hida (2007) believed that Ghosn’s suggestion could be too conser-
vative and discussed several engineering examples of less common load combinations.
In the above-mentioned major studies, the loads including common live (truck) load
and extreme loads are assumed independent. Moreover, no matter whether they occur with or without combinations, the distributions of their intensities are assumed to remain
unchanged. These methods unveiled a key characteristic of MH loads—the challenge
of establishing multihazard load and resistance factor design (MH-LRFD) is the time-
variable load combination. With simplified models, these approaches provided possible
procedures to calculate the required load and resistance factors for design purposes.
Because of the lack of sufficient statistical data, simplifying assumptions may
either underestimate or overestimate certain factors. Cases in which the assumptions
are accurate can be shown, while in other cases, the distributions may vary and the
independency may not exist. In this study, we have pursued both theoretical and
simplified approaches so that results may be compared and evaluated.

10.4.1.2 Necessity of Distinguishing Time-Invariant and Time-Variable Loads
To illustrate that combining a time-variable load with a time-invariant load poses no problem, but that it is difficult to consider more than one time-variable load, Figure 10.10 plots the conceptual PDFs of time-invariant and time-variable loads and their combinations. Figure 10.10a shows the combination of L1, a time-variable load,

FIGURE 10.10  PDF of time-invariant and time-variable loads and their combinations. (a) L2: constant load; (b) L2: time-varying load; (c) time-varying load combination.

and L2, a time-invariant load. Whenever L1 occurs, L2 is there “waiting” for the com-
bination. For example, if we have truck and dead loads only, then we can treat the
combined load as a regular random variable without considering the time variations.
In Figure 10.10b, when combining two (or more) time-variable loads, there is a chance that L1 occurs without L2. There is also the chance that a larger-valued L1 occurs together with a smaller-valued L2, as illustrated in Figure 10.10c. We may also have the chance that a smaller-valued L1 occurs together with a larger-valued L2, for example, at t4, which is specifically shown in Figure 10.10c. In these cases, we may have different sample spaces in calculating the corresponding combined distributions, as well as the L1 and L2 distributions.

10.4.1.3 Time-Variable Load at a Given Time Spot


Suppose a bridge is subjected to two time-variable loads L1 and L2 only. If a larger-
valued load, L2, occurs before the smaller load, L1, then in order to count the maxi-
mum load effect, L1 must be rejected. In other words, to count L1, we need to consider
a precondition: in terms of time sequence, at the amplitude of L1, the load effect due
to L2 as well as that due to the combination L1 and L2 cannot occur before L1. These
situations are discussed in more detail in the following.
Truck, vessel and vehicle collisions, earthquakes, as well as wind gusts are time
variables (random process). In many cases, these loads cannot be directly combined;
when one load, L1, occurs, others may not occur simultaneously. In general, we have

L1 + L2 + L3 = L1 (10.117)

If, at a given moment, all these loads occur simultaneously, then

L1 + L2+ L3 ≠ L1 (10.118)

The distributions of the combined load L1 + L2 + L3 described in Equations 10.117


and 10.118 can be different. To obtain the correct corresponding random distributions,
both single load and combined load distributions, several methods are available. We
will use a method of event separation. That is, the failure probability is separated under
the condition of a certain “pure” load or “pure” load combination only.
In what follows, the combination of a single time-variable load and the time-invariant dead load is taken as a single time-variable load. When L is only a single time-variable load and it is given a special value x, then there is no concern about other loads. However, under MH loads, in Equation 10.114, the event (L ≥ R) must also be analyzed by examining the load L.

10.4.1.4 Combination of Time-Variable Loads in a Given Time Period


Consider now at a given time duration in which there exists two different kinds of
time-variable loads simultaneously,

ℓ(t) = ℓ1(t) + ℓ2(t) (10.119)

where ℓ(t), ℓ1(t), and ℓ2(t) are the combined load and the individual time-varying loads, respectively. At a given time t, they are deterministic.

In most cases, the maximum value of the combined load ℓ(t) does not equal the sum of the maximum or amplitude values of ℓ1(t) and ℓ2(t). That is,

max[ℓ(t)] ≠ max[ℓ1(t)] + max[ℓ2(t)]    (10.120)

In some previous studies, the amplitudes of ℓ1(t) and ℓ2(t) are treated as constant
when the time duration Δt is taken to be sufficiently small. However, when Δt is
indeed sufficiently small for one load at a certain moment, it may not be sufficiently
small for the second load. Thus, such a treatment, such as the Ferry–Borges model,
may yield inaccurate results.
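A two-line numerical check makes Equation 10.120 concrete. The sketch below is ours; the two pulse-like load histories are assumed shapes, chosen only so that their peaks occur at different times.

import numpy as np

# Numerical illustration of Equation 10.120: the maximum of a combined
# time-varying load generally differs from the sum of the individual maxima.
# The two load histories are assumed pulse shapes.

t = np.linspace(0.0, 10.0, 2001)
l1 = 3.0 * np.exp(-0.5 * (t - 2.0) ** 2)   # load 1 peaks near t = 2
l2 = 4.0 * np.exp(-0.5 * (t - 7.0) ** 2)   # load 2 peaks near t = 7

print(np.max(l1 + l2))           # peak of the combination, ~4.0
print(np.max(l1) + np.max(l2))   # sum of the individual peaks, 7.0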

10.4.1.5 Additivity of Distribution Functions


Consider again the failure probability of P(L ≥ R). Similar to Equation 1.139b, if both
L and R are normally distributed, then the distribution of L–R can be calculated by
considering the PDF of R–L, denoted by f R–L(x). Namely, all the cases of R–L ≤ 0
mean that there is a chance of failure. That is,

p_f = ∫_{−∞}^{0} f_{R−L}(x) dx    (10.121)

For a standardized variable, Equation 10.121 can be rewritten as

pf = Φ(Z│Z=−β) = Φ(−β) (10.122)

where Φ(−β) is the CDF of the normally distributed standardized variable defined
in Chapter 1 (Nowak and Collins 2000).
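For normally distributed R and L, Equations 10.121 and 10.122 can be evaluated in a few lines. The sketch below is ours and assumes SciPy is available; the means and standard deviations are assumed for illustration.

from scipy.stats import norm

# Sketch of Equations 10.121 and 10.122: for normal R and L,
# p_f = P(R - L <= 0) = Phi(-beta). All statistics are assumed.

mu_R, sig_R = 10.0, 1.0      # resistance mean and standard deviation
mu_L, sig_L = 6.0, 1.5       # load-effect mean and standard deviation

beta = (mu_R - mu_L) / (sig_R**2 + sig_L**2) ** 0.5   # reliability index
p_f = norm.cdf(-beta)                                  # Equation 10.122
print(beta, p_f)              # beta ~ 2.22, p_f ~ 0.013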
Turkstra and Madsen (1980) suggested a simplified method for load combinations.
In many cases, this model is an oversimplified treatment of random processes because it does not handle the load combination at the random process level at all. Instead, it
directly assumes that whenever two loads are combined, the “30% rule” can always
be used. On the other hand, this simplifying assumption can sometimes be rather
conservative.
Another method is the Ferry Borges–Castanheta model. Detailed description of
the Ferry–Borges’s model can be found in Ghosn (2003). In order to handle time-
variable loads, the Ferry–Borges’s model breaks down the entire bridge life into
significantly short time periods, in which these loads can be assumed constant so
that the variables can be added. Based on the discussion of extreme distribution in
Chapter 2, the cumulative probability function, F_{X1,2,max,T}(x), of the maximum value of the load combination, X_{1,2,max,T}, in time T is obtained by

F_{X1,2,max,T}(x) = [F_{X1,2}(x)]^{T/t}    (10.123)

Here, F_{X1,2}(x) is the CDF in the short period. In Equation 10.123, the value t is the average duration of a single event, say, several seconds; therefore, the integer ratio T/t can be a very large value. If a certain error exists, generated by using Equation 10.123,

which is unavoidable in many cases, then the resulting CDF F_{X1,2,max,T} can contain unacceptable errors.
To use the Ferry–Borges’s model, one needs to make several simplifying assump-
tions, which introduce errors. The largest error, generally speaking, comes from
Equation 10.123, which is based on the assumption that the distributions in each
individual period t are identical. In addition, the load combinations in a different
period t must be independent; that is, the random process of the load combination
must be an independent stationary process, which is not true. Because the ratio T/t
can be very large, large errors may result.
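This error amplification can be illustrated numerically. In the sketch below (ours; the life span, event duration, and short-period CDF values are all assumed), a tiny perturbation of the short-period CDF visibly shifts the lifetime CDF once raised to the power T/t.

# Sketch of the sensitivity of Equation 10.123, F_max,T(x) = F(x)**(T/t).
# All numbers are assumed for illustration.

T = 75 * 365.25 * 24 * 3600.0    # life span in seconds
t = 60.0                          # average duration of a single event, seconds
n = T / t                         # number of repetitions, about 3.9e7

F = 1.0 - 1.0e-8                  # short-period non-exceedance probability
F_perturbed = F - 1.0e-9          # the same CDF with a tiny error

print(F**n)              # ~0.67
print(F_perturbed**n)    # ~0.65: a 1e-9 error visibly shifts the lifetime CDF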

10.4.2 Failure Probability under MH Load


10.4.2.1 Failure Probability Computation
Below, two distinct approaches to establish the design limit state equations are described.
The first approach is as follows: (1) formulation of closed-form partial failure prob-
abilities consisting of two portions of a term, namely, the conditional probabilities of
failure and the probability of corresponding conditions; (2) formulation of the probabil-
ity of conditions; (3) computation of the conditional probabilities; (4) establishment of
a criterion to reject unnecessary loads; (5) normalization of load effects; and (6) deter-
mination of limit state equations. These six tasks are logically related in sequence.
The second approach is to compute the partial failure probabilities directly. Since each direct computation will involve single loads and load combinations, simplifications, such as Turkstra's and/or the Ferry–Borges models, will be necessary. With design examples, the limit state equations will be established by the least squares method.
The first approach is described in the following.

10.4.2.2 Time-Invariant and -Variable Loads


Failure probabilities of a structure are the sums of the chances whenever load effects
exceed resistance. Under MH loads, the total failure probability of a bridge con-
sists of several partial failure probabilities due to dependent load effects and differ-
ent PDFs. To evaluate the partial failure probability of combined loads exceeding
bridge resistance, one needs to calculate the effects of load combinations. Besides
dead load, most bridge loads are time variables so that the probabilities for these
loads with specific magnitudes occurring simultaneously are likely not 100% but
will depend on the nature of these loads, which is a major condition of load occur-
rences. Furthermore, the event that the amplitude of a load effect exceeds the resistance carries an additional restricting condition, which requires that, before such a load occurs, no effects caused by other hazards reach the same level. This method is defined as the separation of partial bridge failure probabilities.

10.4.2.3 Principles of Determining Load and Load Combination


As mentioned above, the dead load LD can be reasonably assumed time invariant.
When a single time-varying load occurs, it can be directly added to the dead load.
The method to deal with such load combination has been well established. In the fol-
lowing, we will consider the load combination for time-varying loads only.

10.4.2.3.1  Maximum Possible Loads


When considering loads, no matter how frequently or rarely they occur, we only consider their maximum possible values, which are random variables and roughly have bell-type distributions. In a Cartesian coordinate system, the x-axis of the random distribution curve is the intensity of the maximum possible load effects, instead of individual load effects.
To extract maximum valued loads from a load pool, a method or a criterion of
picking up the loads is needed. In general, the maximum value of a load is picked
up from the pool of total possible loads occurring in the bridge lifespan. For certain
loads with the same type that can occur simultaneously, an additional consideration
is the maximum possible combination of the simultaneous load, for example, two
or more trucks crossing a given bridge at the same time. These trucks must load the
bridge at different locations of the bridge, i.e., the combined load effect, which is
often different from the direct sum of the load effects due to the peak value of these
truck loads.
Consider the event that the combined load effect is greater than resistance:

L1 + L2 + L3 + … ≥ R (10.124)

There are three cases for which Equation 10.124 holds. The first is that individual
load effects are greater than or equal to R. In this case, when this single load occurs,
we need to consider only the peak level of this load. The second is that none of these
individual load effects are greater than or equal to R, but their combined effect is.
In this case, only the peak level of a load may not be sufficient. The third case is the
combination of the first and second cases, which will be considered in Section 10.4.2.5.

10.4.2.4 Total and Partial Failure Probabilities


We now discuss the method to divide the total failure probability into several terms.
For each term, we will have a comparatively simpler relationship between a single
type of load (or load combination) and resistance. Therefore, it is easier to not under-
estimate or overestimate the corresponding chance of bridge reliability. Details of
how to reassemble these partial terms together to formulate the entire limit state
equation for all possible loads will not be discussed here.

10.4.2.5 Independent Events
Let us continue the discussion on Equation 10.124. Besides the two cases men-
tioned in Section 10.4.2.3.1, the third possibility is the contribution of both of these
cases. Suppose that we now have three different kinds of loads, L1, L2, and L3. From
Equation 10.124, we can write

P(L1 + L2 + L3 ≥ R) = pf (10.125)

To add up the load effects of Equation 10.125 without miscounting, Wen et al. (1994)
suggested the use of total probability. That is, the entire case is dissected into sev-
eral mutually exclusive subcases. Each subcase only contains a single kind of load

(or load combinations). This way, we can deal with the time-variable loads and load
combinations more systematically and reduce the chance of miscounting.
Thus, Equation 10.124 can be further rewritten as follows:

P(L1 ≥ RL1 only ) P( L1 only) +

P(L 2 ≥ RL2 only ) P( L2 only) +

P(L 3 ≥ RL3 only ) P( L3 only) +

P(L1 + L 2 ≥ RL1 L2 only ) P( L1  L2 only) +


(10.126)
P(L1 + L 3 ≥ RL1 L3 only ) P( L1  L3 only) +

P(L 2 + L 3 ≥ RL2  L3 only ) P( L2  L3 only) +

P(L1 + L 2 + L3 ≥ RL1 L2  L3 only ) P( L1  L2  L3 only) = pf

Here, pf is the failure probability; P(.) means the probability of event (.) happen-
ing; the normal letter of L(.) means the load effect due to load L (.). The symbol “|(.)”
stands for condition (.). The symbol “∩” stands for occurring simultaneously. L1 ∩ L2
stands for the condition that loads L1 and L2 occur simultaneously, but that there are
only these two loads; in other words, no other loads (except the dead load LD) show
up during this time interval.
This formula of total probability is theoretically correct but is difficult to realize
practically. The main reason is that, again, these loads L1, L2, and L3 are time vari-
ables and that at a different level of load effect, P(Li only) will be different. In the
following, we introduce the concept of partial failure probability to deal with these
difficulties.
For the sake of simplicity, first consider two loads only. Denote

P(L1 ≥ R|L1 only) ≡ P(L1) (10.127)

The probability of only having load L1 is given as

P(L1 only) ≡ p_{L1}    (10.128)

Therefore, the failure probability due to load L1 only can be written as

p_f^{L1} = P(L1 ≥ R│L1 only) P(L1 only) = P(L1) p_{L1}    (10.129)

Similarly, we can rewrite p_f^{L1L2} and p_f^{L2} in the same format as described in Equation 10.129. Here, p_f^{L1}, p_f^{L1L2}, and p_f^{L2} are used to respectively denote the failure probabilities caused by L1 only, L1 and L2 simultaneously, and L2 only.

Thus, we can write

p_f^{L1} + p_f^{L1L2} + p_f^{L2} = P(L1) p_{L1} + P(L1L2) p_{L1L2} + P(L2) p_{L2} = p_f    (10.130)

Equation 10.130 is defined as the separation of partial bridge failure probabili-


ties. The intensity distributions of a load can be different in different regions. If the
occurrence of certain loads, say, truck load L1, is very frequent, then we may assume
that the distribution of L1 remains the same, no matter whether L1 occurs with or without L2.
If the occurrence of other loads, say, earthquake load L2, is very rare, then we may
assume that the distribution of L2 when it is together with L1 will be different from
the distribution of L2 when it occurs solely without L1.
The essence of Equation 10.130 is that the failure probability pf can be written as a
sum of several partial failure probabilities, pfL1, pfL1L2, and pfL2. It will be further seen
that, to realize Equation 10.130, we will have rather complicated situations, because
the PDFs of the load effects in events L1, L1 ∩ L2, and L2 are not identical. In dif-
ferent events, the random distribution functions can be rather different. In addition,
although the time variable loads L1 and L2 are independent, the corresponding load
effects on a given bridge will no longer be independent. This is the major difference
between the proposed method and that described in Equation 10.126. That is, the
terms P(L1) and p_{L1} will vary due to different levels.
Furthermore, in Equation 10.126, each partial failure probability is a product of two terms. For example, p_f^{L1} = P(L1) p_{L1}. Here, the first term, P(L1), is referred to as the conditional probability, and the second term, p_{L1}, is referred to as the probability of condition.
In the following, the general principle of formulation on these partial probabilities
will be discussed. The focus is on the conditional probability. Detailed computation
of the second term will be discussed in Section 10.4.4 that follows the description of
conditional probability.

10.4.2.6 Mutually Exclusive Failures, the Uniqueness Probabilities


For the sake of simplicity, in the following discussion of load combination, we pur-
posely ignore the dead load because it is time invariant.
In Figure 10.11, we use a Venn diagram to show time-variable load combinations and use earthquake and truck load effects as an example to describe the concept. In the figure, p_f and p̄_f are the probability of failure and nonfailure of a bridge (p_f + p̄_f = 1), respectively, subjected to earthquake and truck loads. In Equation 10.130, let p_f^E, p_f^{ET}, and p_f^T denote the failure probabilities due to the earthquake effect only, the combined effect of both earthquake and truck, and the truck effect only, respectively. The events of the earthquake effect, the combined effect, and the truck effect being greater than or equal to the resistance are mutually exclusive events.

FIGURE 10.11  Venn diagram of failure probabilities (the total probability of unity comprises the nonfailure probability p̄_f and the mutually exclusive failure probabilities p_f^E, p_f^{ET}, and p_f^T: earthquake only, both, and truck only).

That is,

p_f^E + p_f^{ET} + p_f^T = p_f    (10.131)

In addition, when these individual failure probabilities pf(.) are calculated for each
case of the maximum load effect of (.) exceeding the resistance, we need to also con-
sider a restricting condition: no other effect reaches the same level. The detail of this
will be discussed in the following.

10.4.2.6.1  Condition of Maximum Load Effect


In Equation 10.130, those first probabilities, for example, P(L1), exist only under
rigorous conditions. To count the maximum load effect, we need to evaluate the
condition that guarantees that what is counted is a true maximum effect. As seen
in Figure 10.12, the x-axis of the probability density curve of the load effect is the
maximum value of the load effect. Consider the case that a bridge is subjected to
the time-invariant dead load and a single time-variable load only. When the sum
of the time-variable load and the dead load reaches a certain maximum level, say, x,
there is no chance for other loads to reach this specific level.
However, if other loads exist when the first time-variable load reaches a certain
level, then there can be a chance that the second time-variable load had already
reached and/or exceeded the same level. In this case, the effect due to the first load
cannot be recognized as the maximum value. In other words, for a time-variable load
effect to be recognized as the maximum value at intensity x, we need a restricting
condition that no other loads can reach the same level.
Figure 10.12 conceptually shows this condition. In the figure, the solid and dotted
lines are the PDFs of the first load effect and the resistance, respectively. The dash-
dotted line represents the second time-variable load effect, for convenience, which is
placed below the first load effect curve in the figure. The shadowed area represents
all the chances that the second load is smaller than this given level x. In the figure, for the sake of simplicity, we assume that loads 1 and 2 have no combinations.

FIGURE 10.12  Condition of load 1 reaching a certain maximum level x (PDFs of load effect 1, load effect 2, and the resistance versus intensity in MN·m).
In general, the sum of all these chances can be expressed as the integral of the conditional PDF of the second load effect, 𝒻_S(z, x). Here, the subscript "S" denotes the second load, and in the following, we use script letters (set in the font "Ruling Script LT Std" in print) to denote the conditional PDF, whose condition is the main PDF taking the value of x. That is,

𝒫_S(x) = ∫_{−∞}^{x} 𝒻_S(z, x) dz ≤ 1    (10.132)

Based on the above, when calculating the failure probability, we cannot use the PDF of the first load only. In other words, the PDF of the first load, f_L(x), shown in Figure 10.12, must be modified as f_L(x) 𝒫_S(x). In general, 𝒫_S(x) < 1, so that the resulting failure probability is smaller than that computed by considering the first load alone.
The additional possibility that the value of combined loads 1 and 2 must also be
smaller than level x is given by

𝒫_C(x) = ∫_{−∞}^{x} 𝒻_C(w, x) dw ≤ 1    (10.133)

Here, the subscript C denotes the combined load effect of both the first and second loads, and the PDF of the combined load effect is denoted by 𝒻_C. In this circumstance, the PDF of the first load must be further modified as f_L(x) 𝒫_S(x) 𝒫_C(x).

Therefore, the conditional failure probability due to load 1 should be written as

P(L1) = ∫_{−∞}^{∞} f_R(x) ∫_x^{∞} f_{L1}(y) 𝒫_{L2}(x) 𝒫_{C2}(x) dy dx
      = ∫_{−∞}^{∞} f_R(x) ∫_x^{∞} f_{L1}(y) ∫_{−∞}^{y} 𝒻_{L2}(z, x) ∫_{−∞}^{y} 𝒻_C(w, x) dw dz dy dx    (10.134)

where f_{L1}(y), 𝒻_{L2}(z, x), and 𝒻_C(w, x) are the PDFs of the effect due to load 1, due to load 2, and due to the combination of loads 1 and 2, respectively; 𝒫_{L2}(x) and 𝒫_{C2}(x) are the condition probability 𝒫_S(x) due to load 2 and the condition probability 𝒫_C(x) due to the combination of loads 1 and 2, respectively. In the circumstance of MHs, a specific load, say, load 1, is divided into two portions. The first portion is based on the case of load 1 only, and the second case is that both loads occur simultaneously. We use f_{L1}(y) and 𝒻_{L2}(z, x) to respectively denote load 1 in the first case and load 2 in the second case.
Similarly, the failure probability due to combined loads 1 and 2 should be written as

p(L1L2) = ∫_{−∞}^{∞} f_R(x) ∫_x^{∞} f_c(y) 𝒫_{L1}(x) 𝒫_{L2}(x) dy dx
        = ∫_{−∞}^{∞} f_R(x) ∫_x^{∞} f_c(y) ∫_{−∞}^{y} 𝒻_{L1}(z, x) ∫_{−∞}^{y} 𝒻_{L2}(w, x) dw dz dy dx    (10.135)

where f_c is the PDF of the combined load (L1 and L2 occurring simultaneously); 𝒻_{L1} and 𝒻_{L2} are the PDFs of the effects due to loads 1 and 2 in the first case, respectively; 𝒫_{L1}(x) and 𝒫_{L2}(x) are the condition probability 𝒫_S(x) due to loads 1 and 2, respectively. Note that, generally speaking, 𝒻_{L1} ≠ f_{L1} and 𝒻_{L2} ≠ f_{L2}.
Similarly, the failure probability due to load 2 should be written as

p(L2) = ∫_{−∞}^{∞} f_R(x) ∫_x^{∞} f_{L2}(y) 𝒫_{L1}(x) 𝒫_{C1}(x) dy dx
      = ∫_{−∞}^{∞} f_R(x) ∫_x^{∞} f_{L2}(y) ∫_{−∞}^{y} 𝒻_{L1}(z, x) ∫_{−∞}^{y} 𝒻_C(w, x) dw dz dy dx    (10.136)

10.4.3 General Formulations
10.4.3.1 Total Failure Probability
With the help of Equations 10.134 through 10.136, the total failure probability can then be written as

p_f = p_f^{L1} + p_f^{L1L2} + p_f^{L2}
    = ∫_{−∞}^{∞} f_R(x) ∫_x^{∞} f_{L1}(y) ∫_{−∞}^{y} 𝒻_{L2}(z, x) ∫_{−∞}^{y} 𝒻_C(w, x) p_{L1} dw dz dy dx
    + ∫_{−∞}^{∞} f_R(x) ∫_x^{∞} f_c(y) ∫_{−∞}^{y} 𝒻_{L1}(z, x) ∫_{−∞}^{y} 𝒻_{L2}(w, x) p_{L1L2} dw dz dy dx
    + ∫_{−∞}^{∞} f_R(x) ∫_x^{∞} f_{L2}(y) ∫_{−∞}^{y} 𝒻_{L1}(z, x) ∫_{−∞}^{y} 𝒻_C(w, x) p_{L2} dw dz dy dx    (10.137)

In Equation 10.137, the total failure probability is all-inclusive if only two loads
are present, which is referred to as the comprehensive failure probability.
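The separation idea can be checked numerically. The following Monte Carlo sketch is ours; it is an alternative to evaluating the integrals of Equation 10.137 directly, and all occurrence probabilities and distributions are assumed. It confirms that the three mutually exclusive partial failure probabilities of Equation 10.130 sum to the total failure probability.

import numpy as np

# Monte Carlo sketch of Equation 10.130: p_f = p_f^L1 + p_f^L1L2 + p_f^L2,
# with mutually exclusive events "L1 only," "L1 and L2," and "L2 only."
# All occurrence probabilities and distributions are assumed.

rng = np.random.default_rng(1)
n = 1_000_000

occ1 = rng.random(n) < 0.30                        # L1 occurs in the period
occ2 = rng.random(n) < 0.02                        # L2 occurs in the period
L1 = np.where(occ1, rng.normal(5.0, 1.0, n), 0.0)  # load effect of L1
L2 = np.where(occ2, rng.normal(7.0, 2.0, n), 0.0)  # load effect of L2
R = rng.normal(12.0, 1.0, n)                       # resistance

fail = L1 + L2 >= R
p1 = np.mean(fail & occ1 & ~occ2)    # partial failure probability, L1 only
p12 = np.mean(fail & occ1 & occ2)    # L1 and L2 simultaneously
p2 = np.mean(fail & ~occ1 & occ2)    # L2 only

print(p1, p12, p2)
print(p1 + p12 + p2, np.mean(fail))  # the partial terms sum to the total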

10.4.3.2 Occurrence of Loads in a Given Time Duration


In Equation 10.137, we purposely write the terms p_{L1}, p_{L1L2}, and p_{L2} at the end of each of the partial failure probabilities p_f^{L1}, p_f^{L1L2}, and p_f^{L2}. These terms are occurrence
probabilities of events L1, L1 ∩ L2, and L2 being greater than or equal to the value
x, respectively, which denote the exclusive nature of occurrence of L1 only, L1 ∩ L2,
and L2 only when these load effects are taken at the value x. For convenience, these
occurrences of loads are referred to as the probability of condition, as mentioned
previously. These probabilities are functions of the value x. The detailed discussions
of these occurrence probabilities will be presented in Section 10.4.4.1.

10.4.3.3 Brief Summary
In this subsection, we presented a proposed methodology on comprehensive bridge
reliability based on the formulation of partial failure probability, in order to deter-
mine the design limit state equations under time-varying and infrequent loads.
Specifically, if the designed value of failure probability is given, then each partial
failure probability is uniquely determined. Then, we can calculate the partial reli-
ability index one by one, according to the classification of the effect of either single
load or combined loads.
For dead and live (truck) loads only, the relationships of the load and the resis-
tance effects are treated as time-invariant random variables with normal distribu-
tions. The corresponding limit state is simply described by the reliability index, from
which the load and resistance factors can be uniquely determined.
Multihazard loads are time variables. The limit state of loads not exceeding the
resistance exists, but the total failure probability is far more complex to calculate.

In order to establish the American Association of State Highway and Transportation Officials (AASHTO) LRFD-compatible MH-LRFD, the concept of comprehensive
bridge reliability is introduced. To achieve the objective, one of the feasible treat-
ments is to use the sum of several partial failure probabilities to represent the total
failure probability by separation of mutually exclusive loading events. The partial
failure probability results in a partial limit state used to form the total limit state. In
this situation, each partial limit state defines the exceedance of the resistance due to a specific kind of load, including simultaneous combinations of two or more loads.
This process requires that the partial limit states be mutually exclusive events.
When dealing with one partial failure probability, the interference of other loads/load combinations need not be considered. Thus, the consideration of the failure
probability is simplified so that the evaluation of the total failure probability can be
carried out with sufficient accuracy. However, the procedure of determination of the
load and resistance factors remains a more complex issue. In order to determine the
partial reliability index, the relationship between load and resistance must follow a
normal distribution. The authors have shown quantitatively that such “normaliza-
tion” is doable with acceptable accuracy for engineering practice. The details are not
elaborated herein.
Once the distributions are normalized, with the help of partial reliability indices,
the ratios of mean values among the loads/load combinations and resistance can
be fully determined, uniquely and accurately. By means of using commonly used
values of load and resistance biases, the load and resistance factors of the unified design limit state for highway bridges can finally be calculated. In order to be fully
compatible with currently used AASHTO LRFD (2004) and with the experiences
of bridge engineers, further calibrations may be needed to ensure the accuracy of
these factors.
In this procedure, each partial failure probability is further formulated as a prod-
uct of a conditional probability and a probability of condition. So far, the main focus
is given to the conditional probability. The probability of condition will be discussed
in Section 10.4.4.

10.4.4 Probability of Conditions
In the above, we have introduced the concept of partial failure probability to replace
the random process with random variables.
To numerically calculate partial and total failure probabilities, it is necessary to
carry out the aforementioned integrations by considering individual load value x.
Thus, we can further write

P_f(x) = P_f^T(x) + P_f^E(x) + P_f^{ET}(x) = P(T ≥ R│R = x, only T ≥ x) P(only T ≥ x)
+ P(E ≥ R│R = x, only E ≥ x) P(only E ≥ x) + P(T + E ≥ R│R = x, T + E ≥ x) P(T + E ≥ x)    (10.138)

In the above, we use uppercase Pf(x), etc., to denote the probability of event (x),
whereas we use the lower case pf, etc., to denote the exact value of failure probabilities.

10.4.4.1 Condition for Occurrence of Partial Failure Probabilities


In Equation 10.138, there are three partial failure probabilities. Each term contains
a condition, described as the probability of event (only the specific load ≥ x), such as
P(only T ≥ x), P(only E ≥ x), and P(T + E ≥ x). In Section 10.4.3, they were referred
to as the probability of condition. For example, the probability of condition for a
generic load L_i is denoted as p_{Li}.
To further realize the general condition of load separation, we first consider the special case of having a truck load only, in which the truck load effect T is greater than or equal to R; this gives the partial failure probability p_f^T.
In this circumstance, the condition of maximum truck load only, which exceeds
the level x, consists of five independent events:

a. There must be a load (excluding the case of no load).


b. The load is truck load only, denoted by T only (no other load occurring
simultaneously).
c. The load is greater than or equal to x, denoted by T ≥ x (not considering the
case of T < x).
d. The load is the maximum load denoted by T = max (not considering other
intensity).
e. The total types of loads are truck and earthquake only (no other time vari-
able loads occurring).

The above five conditions are mutually independent. Each condition can be seen
as a probability, which can be denoted by

a. P(there is a load)
b. P(T only)
c. P(T ≥ x)
d. P(T = max), with
e. P(only T ≥ x) + P(only E ≥ x) + P(T + E ≥ x) = 1

In general, the condition of having a single load effect Li (again, the combined
loads are treated as special kinds of single loads) can be written as a product of P(Li
only)P(Li ≥ x)P(Li = max) (see conditions b, c, and d). These three probabilities are
relatively more complex, whereas the probabilities of conditions a and e are com-
paratively simpler. In the following, we will mainly focus on issues b, c, and d, with
issues a and e only briefly described in a simplified manner.

10.4.4.2 Event of Single Type of Loads


10.4.4.2.1  Nature of Occurrences of Load T and E
In condition b, T and E are the maximum truck and earthquake loads, respectively.
Therefore, the event (only T ≥ x) should be the existence of truck loads only, which is
greater than or equal to a certain value x. The value of truck load effect in event (only
T ≥ x) cannot be arbitrary. That is, among all possible truck load effects, we need to
consider the value being greater than or equal to x.

Therefore, the above-mentioned events are not equal to the existence of all pos-
sible truck loads only (same for earthquake load, etc.). The event of existence of
occurrence of such truck loads can be further divided as the intersection of event
“the existence of all truck loads” and the event “those loads must be greater than or
equal to value x.” That is,

(only T ≥ x) = [(there exist only truck loads) ∩ (truck load effect ≥ x) ∩ (truck load
effects = the maximum value)]│[(there must be a load) ∩ (there are only truck and
earthquake loads)]

The events shown on the right-hand side of the above equation are independent.
Therefore, we can have

P(only T ≥ x) = [P(there exist only truck loads)P(truck load effect ≥ x)P(truck load
effect = max)]│[P(there must be a load)P(there are only truck and earthquake loads)]
= {[P(there exist only truck loads)│P(there must be a load)][P(truck load effect ≥ x)
P(truck load effect = max)│P(there are only truck and earthquake loads)]}

In the following, we first discuss the probability of the event of only a single exist-
ing type of load, such as P(there exist only truck loads).

10.4.4.2.2  Existence of All Possible Loads


We now discuss the uniqueness probability starting with single load and then con-
tinuing to load combinations. There can be several approaches. We chose only the
Poisson distribution model here.

10.4.4.2.3  Modeling of the Probability of Condition


10.4.4.2.3.1   Occurrence of Loads in Given Duration, General Model  The probability of having no occurrence of a given load, say L1, in period t is e^{−λt}; hence, the probability that L1 exists is 1 − e^{−λt}, which is equivalent to the sum of the probabilities of the cases k = 1, 2, …, ∞. That is,

p(L1 exists) = 1 − e^{−λt} = e^{−λt} Σ_{k=1}^{∞} (λt)^k / k!    (10.139)

Since in a finite time span there cannot be an infinite number of occurrences of a certain load, say L1, using 1 − e^{−λt} to calculate the probability of seeing load L1 can result in an overestimation.

10.4.4.2.3.2   Occurrence of Combined Time Variable Load Effect  With two


loads, L1 and L2 only, the corresponding hazards are likely to have different loading
duration, denoted by t(.)i. To simplify the computation, we assume that during the life
span of a bridge, all the loading duration of L (.) can be represented by its average as a
fixed value, t(.). In general, the average loading duration due to different loads will be
different. Without loss of generality, suppose L2 has longer duration, denoted by t L2,
and that L1 is the load with the shorter duration. The basic idea on the chance of load

combination is as follows: First, assume that in the life span of a bridge there are a
total of up to n occurrences of L2; each has the same duration t L2. We thus have the
possible cases of L2 to be one occurrence (duration is 1 × t L2), two occurrences (dura-
tion is 2 × t L2), and so on. Secondly, in the duration t L2, we can calculate the probability
of having up to m load effect L1, which is the sum of one L1, two L1,… up to mL1. In
the duration 2t L2, we can calculate the probability of having up to 2 × m load L1, etc.
Denote the quantity m to be the upper limit of the maximum possible number of
loads L1. Since the physical length of a given bridge is fixed, there must be a limited
number of vehicles "occurring" on the bridge. The probability that load L1 appears in t_{L2} is denoted by ₁p_{sL1}, where the subscript 1 in front of the symbol p_{sL1} stands for load L1 occurring in one time interval t_{L2}. The corresponding computation is given by

₁p_{sL1} = e^{−λ_{L1} t_{L2}} Σ_{k=1}^{m} (λ_{L1} t_{L2})^k / k!    (10.140)

where λ L1 is the average loading rate of L1.


If r time intervals exist, the total duration is r t_{L2}. The probability of load L1 showing up in r t_{L2} is denoted by ᵣp_{sL1}, where the subscript r in front of the symbol p_{sL1} stands for load L1 occurring in duration r t_{L2}. We have

ᵣp_{sL1} = e^{−λ_{L1}(r t_{L2})} Σ_{k=1}^{mr} [λ_{L1}(r t_{L2})]^k / k!    (10.141)

The quantity ᵣp_{sL1} is, in fact, a conditional probability, which is the probability of the occurrence of L1 under the condition of having L2. It is assumed that L1 and L2 are independent. Therefore, the unconditional occurrence probability of L1 can be written as ᵣp_{sL1} p_{sL2i}, where p_{sL2i} is the probability of occurrence of load L2i, which will be discussed in the following.
In the service life span of a bridge, TH, there may be up to n loads L2, with each
having its own occurrence probability. More specifically, let the letter i denote the
said level of load effect L2. We can use the symbol ni to denote the maximum pos-
sible number of such load effects.
More detailed computation should distinguish the simultaneous occurrence of
both loads L1 and L2 during different periods. For example, suppose an earthquake
lasts 60 s. If this 60-s earthquake happens in a rush hour, then we may see more
trucks. Another detailed consideration is to relate the duration t L2 to the level of load
L2. For example, a larger level of earthquake may have a longer duration, and so on.
Here, for the sake of simplicity, we assume that the number m and the length t_{L2} are constant. In this case, the total probability of occurrence of load L2i, denoted by p_{sL2i}, is given by

p_{sL2i} = e^{−λ_{L2i} T_H} Σ_{r=1}^{n_i} (λ_{L2i} T_H)^r / r!    (10.142)

where the first subscript s of p_{sL2i} stands for the segment of the uniqueness probability; n_i is the total number of loads L2i occurring in T_H (75 years); and λ_{L2i} is the rate of occurrence, or the reciprocal of the return period, of the specific load L2i.
A more simplified estimation of the occurrence of load L 2 is to take its aver-
age value without distinguishing the detailed levels. In this case, we use n to
denote the average number of occurrences of load L 2. The life span of the bridge
is denoted by TH. The probability of occurrence of such load effect L 2 in the bridge
life span is

p_{sL2} = e^{−λ_{L2} T_H} Σ_{r=1}^{n} (λ_{L2} T_H)^r / r!    (10.143)

where n is the total number of loads L2 occurring in T_H (75 years). Now, the probability of simultaneous occurrence of both loads L1 and L2 with level i is the product denoted by p_{L1L2i}, given by

p_{L1L2i} = e^{−λ_{L2i} T_H} Σ_{r=1}^{n_i} ᵣp_{sL1} (λ_{L2i} T_H)^r / r!
         = e^{−λ_{L2i} T_H} Σ_{r=1}^{n_i} [e^{−λ_{L1}(r t_{L2})} Σ_{k=1}^{mr} (λ_{L1} r t_{L2})^k / k!] (λ_{L2i} T_H)^r / r!    (10.144)

where n_i is the number of occurrences of load effect L2 with level i in the duration T_H.
With the simplified treatment described in Equation 10.143, we have the probability of simultaneous occurrence of both loads L1 and L2, denoted by p_{L1L2}, given by

p_{L1L2} = e^{−λ_{L2} T_H} Σ_{r=1}^{n} [e^{−λ_{L1}(r t_{L2})} Σ_{k=1}^{mr} (λ_{L1} r t_{L2})^k / k!] (λ_{L2} T_H)^r / r!    (10.145)

10.4.4.2.3.3   Uniqueness Probability of Truck and Earthquake Loads  We now


consider first the existence of all possible loads of the same kind, for example, all
truck loads only, all earthquake loads only, and all the load combinations only.
Based on Equations 10.144 and/or 10.145, we use truck load T and earthquake
load E as examples to replace loads L1 and L2. Figure 10.13 shows the Venn diagram
of situation 1: no trucks, no earthquakes; situation 2: occurrence of trucks; and situ-
ation 3: occurrence of earthquakes. We are interested in three mutually exclusive
cases, namely, trucks only (no earthquake), denoted by T ∩ Ē; trucks and earthquakes simultaneously, denoted by T ∩ E; and earthquakes only (no trucks), denoted by T̄ ∩ E.

FIGURE 10.13  Venn diagram of occurrence events of trucks and earthquakes (four mutually exclusive regions: T̄ ∩ Ē, T ∩ Ē, T ∩ E, and T̄ ∩ E).

In Figure 10.13 and in the following discussion, the overhead bar stands for "no existence." Including situation 1, denoted by T̄ ∩ Ē, the total probability is unity. Therefore, we can write

p(T ∩ Ē)/[1 − p(T̄ ∩ Ē)] + p(T ∩ E)/[1 − p(T̄ ∩ Ē)] + p(T̄ ∩ E)/[1 − p(T̄ ∩ Ē)] = 1    (10.146)

Equation 10.146 implies that once the probabilities p(T ∩ Ē), p(T ∩ E), and p(T̄ ∩ E) are calculated, they can be normalized under the condition of having a load, denoted by [1 − p(T̄ ∩ Ē)]. We now consider these probabilities. In order to simplify the notations, the necessary nomenclatures are listed as follows:

Total time, T_H: the period of 75 years measured in seconds; T_H = 75 × 365.25 × 24 × 3600
Truck crossing time, t_T: average duration of a truck crossing a bridge
Average duration of earthquakes, t_E
Number of trucks in T_H, n_T
Number of earthquakes in T_H, n_E
Total time of having trucks, T_T
Total time of not having trucks, T̄_T
Total time of having quakes, T_E
Total time of not having quakes, T̄_E
Occurrence probability of trucks in T_H, p_t
Occurrence probability of earthquakes in T_H, p_e
Occurrence probability of both trucks and earthquakes in T_H, p_{t∩e}
Probability of having both trucks and quakes in T_H, p_{te}
Probability of having trucks in T_H without earthquakes, p_{tē}
Probability of having quakes in T_H without trucks, p_{t̄e}
Probability of having neither trucks nor quakes in T_H, p_{t̄ē}

FIGURE 10.14  Load durations (the life span T_H divided into periods with and without trucks, T_T and T̄_T, and with and without earthquakes, T_E and T̄_E; single events have durations t_T and t_E).

Figure 10.14 shows a special arrangement of the durations T_T and T_E vs. T_H, which is the basis of the following analysis.

10.4.4.2.3.4   Uniqueness Probability of Simultaneous Truck and Earthquake Loads  Consider the simultaneous loads of both trucks and earthquakes.
Note that we may consider different levels of earthquakes instead of the overall occurrence of earthquakes. Suppose that earthquakes with the same ith effect, denoted by E_i, have the average duration t_{Ei}. First, to simplify the analysis, we use the average duration of all t_{Ei}, denoted by t_E, in order to study the probability of trucks showing up in t_{Ei}, denoted by ₁p_{sT}. Suppose that in the duration t_E, there are up to m trucks crossing the bridge,

₁p_{sT} = e^{−λ_T t_E} Σ_{k=1}^{m} (λ_T t_E)^k / k!    (10.147)

where λT is the average truck rate.


The conditional probability of trucks showing up in r time intervals, r t_{Ei}, denoted by ᵣp_{sT}, with mr trucks crossing the bridge, is given by

ᵣp_{sT} = e^{−λ_T(r t_E)} Σ_{k=1}^{mr} [λ_T(r t_E)]^k / k!    (10.148)

Next, we consider the above-mentioned particular time interval. The probability


of occurrence of the non-exceedance earthquake Ei in TH is

p_{sEi} = e^{−λ_{Ei} T_H} Σ_{k=1}^{n_{Ei}} (λ_{Ei} T_H)^k / k!    (10.149)

where nEi is the total number of earthquakes Ei occurring in 75 years; λEi is the rate
of earthquakes with an effect level i in TH.

In this case, the simultaneous occurrence probability of both earthquakes with the effect amplitude E_i and truck loads can be written as

p_{TEi} = e^{−λ_{Ei} T_H} Σ_{r=1}^{n_i} [e^{−λ_T(r t_E)} Σ_{k=1}^{mr} (λ_T r t_E)^k / k!] (λ_{Ei} T_H)^r / r!    (10.150)

The term pTEi is a uniqueness probability denoting the unique chance of the simul-
taneous occurrence of a truck load and an earthquake load with level Ei. At the spe-
cific moment, only these two loads are acting on the bridge.
Now, if we use the simplified approach without considering the level of earth-
quake effects, we have the probability of occurrence of the non-exceedance earth-
quake E in TH, denoted by psE and with nE earthquakes. Then
p_{sE} = e^{−λ_E T_H} Σ_{r=1}^{n_E} (λ_E T_H)^r / r!    (10.151)

where λE is the average rate or the reciprocal of the average earthquake return period.
In this case, we have the average occurrence probability given as

p_{te} = e^{−λ_E T_H} Σ_{r=1}^{n_E} [e^{−λ_T(r t_E)} Σ_{k=1}^{mr} (λ_T r t_E)^k / k!] (λ_E T_H)^r / r!    (10.152)

Suppose that we have r earthquakes in 75 years. The probability of having no trucks in the duration r t_E is

p_{t̄} = e^{−λ_T(r t_E)}    (10.153)

The probability of having an earthquake only (without trucks) can be written as

p_{t̄e} = e^{−λ_E T_H} Σ_{r=1}^{n_E} [e^{−λ_T(r t_E)}] (λ_E T_H)^r / r!    (10.154)

Note that one of the disadvantages of using the above equations is the difficulty
of computing the factorial. If the number of k is large, say, more than 150 for most
personal computers, then the term k! cannot be calculated. Therefore, let us consider
an alternative approach.
In one duration t_E, suppose we have the following event: an earthquake is occurring while simultaneously up to m trucks are crossing the bridge. The probability of that event is

₁p_{te} = [e^{−λ_T t_E} Σ_{r=1}^{m} (λ_T t_E)^r / r!] [e^{−λ_E T_H} (λ_E T_H)]    (10.155)

In that duration, the probability of no such event occurring is

1 − ₁p_{te} = 1 − e^{−λ_T t_E − λ_E T_H} (λ_E T_H) Σ_{r=1}^{m} (λ_T t_E)^r / r!    (10.156)

Suppose that in the life span, a total of n_E earthquakes occurred. Therefore, the event that an earthquake occurs simultaneously with up to m trucks has the uniqueness occurrence probability written as

p_{te} = 1 − (1 − ₁p_{te})^{n_E} = 1 − [1 − e^{−λ_T t_E − λ_E T_H} (λ_E T_H) Σ_{r=1}^{m} (λ_T t_E)^r / r!]^{n_E}    (10.157)

and the probability when earthquakes occur with no trucks crossing the bridge is given as

p_{t̄e} = 1 − [1 − e^{−λ_T t_E − λ_E T_H} (λ_E T_H)]^{n_E}    (10.158)

Furthermore, instead of directly calculating the occurrence probabilities p_{te}, p_{tē}, and p_{t̄e} using the above equations, a method based on Figure 10.15 may be used. The figure gives the three probabilities, of which two are determined through the above approach.
When the number n or ni in the Poisson distribution is too large, the correspond-
ing occurrence probabilities cannot be easily computed. We thus offer the following
simplification.
In general, the number of occurrences of earthquakes is considerably smaller
than that of trucks. Thus, assume that the probability of having both trucks and
earthquakes and the probability of having earthquakes only can be calculated when the number n_E is sufficiently small.

FIGURE 10.15  Probabilities of occurrences (the intersections of the earthquake periods T_E and T̄_E with the truck periods T_T and T̄_T in T_H give p_{te}, p_{tē}, p_{t̄e}, and p_{t̄ē}).

Based on the concepts described in Figures 10.13 and
10.15, the probability of having an earthquake can be calculated as

p_e = p_{te} + p_{t̄e}    (10.159)

Therefore,

p_e = e^{−λ_E T_H} Σ_{r=1}^{n_E} [e^{−λ_T(r t_E)} Σ_{k=1}^{mr} (λ_T r t_E)^k / k!] (λ_E T_H)^r / r! + e^{−λ_E T_H} Σ_{r=1}^{n_E} [e^{−λ_T(r t_E)}] (λ_E T_H)^r / r!
    = e^{−λ_E T_H} Σ_{r=1}^{n_E} e^{−λ_T(r t_E)} [1 + Σ_{k=1}^{mr} (λ_T r t_E)^k / k!] (λ_E T_H)^r / r!    (10.160)

Thus, the probability of having trucks is

p_t = p_{te} / p_e    (10.161)

Therefore,

p_t = p_{te} / (p_{te} + p_{t̄e})    (10.162)

Furthermore, the probability of having trucks only can be written as

p_{tē} = p_t − p_{te}    (10.163)

To further calculate the occurrence probabilities under the condition of having a load only, the relationship among these conditional occurrence probabilities can be described by Figure 10.15. That is, by considering only the dark "areas" in the Venn diagram shown in Figure 10.15, we can let the sum of all three conditional probabilities equal unity. This Poisson model can be shown by using the following numerical example.

Example 10.4

Based on the Poisson model, the probabilities of truck load only, earthquake load
only, and simultaneously having both loads in the period of tE seconds can be
obtained. Suppose that the daily average truck rate is 1000.

In a given duration, λ_T = 1000/(24 × 3600) = 0.0116 per second. In the duration of an earthquake, t_E = 80 s. The probability of having up to 100 trucks is

e^{−λ_T t_E} Σ_{k=1}^{100} (λ_T t_E)^k / k! = 0.6038

Similarly, during an earthquake, the chance of up to two trucks crossing a bridge is 0.5366. It is seen that, compared with the case of up to 100 trucks (probability 0.6038), the difference is not very large.
Note that T_H is 2.3668 × 10^9 s. Suppose that in 75 years, there are a total of 15 earthquakes (n_E = 15), so that λ_E = 15/T_H = 6.3376 × 10^{−9}. Suppose that the average number of trucks crossing a bridge during an earthquake is m = 2. The chance of simultaneously having a truck and an earthquake is

p_{te} = e^{−λ_E T_H} Σ_{r=1}^{n_E} [e^{−λ_T(r t_E)} Σ_{k=1}^{mr} (λ_T r t_E)^k / k!] (λ_E T_H)^r / r! = 0.5677


The probability of the occurrence of an earthquake without a truck is

p_{t̄e} = e^{−λ_E T_H} Σ_{r=1}^{n_E} [e^{−λ_T(r t_E)}] (λ_E T_H)^r / r! = 1.1615 × 10^{−4}

and

p_e = p_{te} + p_{t̄e} = 0.5678


Therefore,

p_{tē} = [(1 − p_e)/p_e] p_{te} = 0.4321

The normalized probabilities of having a truck only, having both a truck and an earthquake, and having an earthquake only are denoted as p_{TĒ}, p_{TE}, and p_{T̄E}, which can be further defined and calculated according to the next subsections. Note that, besides the above approach, by using a mixed Poisson distribution, we could also describe a slightly different formulation of the probability of conditions.
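The numbers in this example can be reproduced directly. The following Python sketch is ours; it simply codes the truncated Poisson sums of Equations 10.147 through 10.154 with the rates of the example and recovers the values 0.6038, 0.5366, 0.5677, 1.1615 × 10⁻⁴, and 0.4321.

import math

# Reproducing the truncated-Poisson computations of Example 10.4.

T_H = 75 * 365.25 * 24 * 3600.0       # life span, ~2.3668e9 s
lam_T = 1000.0 / (24 * 3600)          # truck rate, ~0.0116 per second
t_E = 80.0                             # earthquake duration, s
n_E = 15                               # earthquakes in the life span
lam_E = n_E / T_H
m = 2                                  # trucks per earthquake duration

def trunc_poisson(mean, k_max):
    """P(1 <= K <= k_max) for a Poisson variable with the given mean."""
    return math.exp(-mean) * sum(mean**k / math.factorial(k)
                                 for k in range(1, k_max + 1))

print(trunc_poisson(lam_T * t_E, 100))  # ~0.6038, up to 100 trucks
print(trunc_poisson(lam_T * t_E, 2))    # ~0.5366, up to 2 trucks

# simultaneous truck and earthquake occurrence (form of Equation 10.152)
p_te = math.exp(-lam_E * T_H) * sum(
    trunc_poisson(lam_T * r * t_E, m * r) * (lam_E * T_H)**r / math.factorial(r)
    for r in range(1, n_E + 1))

# earthquake with no trucks (form of Equation 10.154)
p_tbar_e = math.exp(-lam_E * T_H) * sum(
    math.exp(-lam_T * r * t_E) * (lam_E * T_H)**r / math.factorial(r)
    for r in range(1, n_E + 1))

p_e = p_te + p_tbar_e
p_t_ebar = (1.0 - p_e) / p_e * p_te      # truck only, as in the example
print(p_te, p_tbar_e, p_e, p_t_ebar)     # ~0.5677, ~1.16e-4, ~0.5678, ~0.4321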

10.4.4.2.4 Existence of All Possible Loads under the Condition with Loads Only
We are now interested in the situation of having loads. Thus,

p_{te} + p_{tē} + p_{t̄e} = 1 − p_{t̄ē}    (10.164)



Here, the term (1 − p_{t̄ē}) is the probability of the event (there must be loads); that is,

P(there must be loads) = 1 − p_{t̄ē}    (10.165)

Denoting the probability of having both trucks and earthquakes in T_H under the condition of having a load only as p_{TE}, we have

p_{TE} = p_{te} / (1 − p_{t̄ē})    (10.166)

Moreover, denoting the probability of having trucks in T_H without earthquakes, under the condition of having a load only, as p_{TĒ}, we also have

p_{TĒ} = p_{tē} / (1 − p_{t̄ē})    (10.167)

In addition, denote the probability of having quakes in T_H without trucks, under the condition of having a load only, as p_{T̄E}. We have

p_{T̄E} = p_{t̄e} / (1 − p_{t̄ē})    (10.168)

Thus,

p_{TE} + p_{TĒ} + p_{T̄E} = 1    (10.169)

That is, considering the probability of the event (truck load only) under the condition "there must be load(s)," we have

P(truck load only) = P(truck load only│there must be loads)/P(there must be loads)    (10.170)
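A short numerical sketch of Equations 10.165 through 10.169 may be helpful. It is ours; the inputs are the probabilities of Example 10.4, and p_{t̄ē} is backed out from the unity condition, an assumption for illustration.

# Sketch of normalizing the occurrence probabilities under the condition
# that at least one load exists (Equations 10.165-10.169).
# Input values follow Example 10.4; p_tbar_ebar is inferred, an assumption.

p_te, p_t_ebar, p_tbar_e = 0.5677, 0.4321, 1.1615e-4
p_tbar_ebar = 1.0 - (p_te + p_t_ebar + p_tbar_e)   # neither load occurs

denom = 1.0 - p_tbar_ebar                 # Equation 10.165
p_TE = p_te / denom                       # Equation 10.166
p_T_Ebar = p_t_ebar / denom               # Equation 10.167
p_Tbar_E = p_tbar_e / denom               # Equation 10.168
print(p_TE + p_T_Ebar + p_Tbar_E)         # = 1, Equation 10.169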

10.4.4.2.5  Maximum Loads


Now consider the probability of a specific load being the maximum load. This
requires the load effect L to be the maximum value. It is a value extracted from all
the possible loads applied to the entire spectrum of possible bridges to be designed
under this load. For example, let L be the truck load T only. T must satisfy two conditions: First, among all events of a truck crossing, T must be equal to or exceed a
certain level x, so that we have the term P(truck load effect ≥ x). Second, among all
the peak values, T must reach the maximum value, as described by the term P(truck
load effect = max).

10.4.4.2.5.1   Exceeding Values  The probability P(truck load effect ≥ x) implies


the chance that the load effect exceeds the level of the chosen resistance effect, that is,

P(truck load effect ≥ x) = ∫_x^{∞} f_t(u) du    (10.171)

where f_t is the PDF of the truck load (see Figure 10.16). It shall be noted that this particular PDF is different from the term ∫_x^{∞} f_{L1}(y) dy in Equation 10.137, where

f_{L1} ≡ f_T    (10.172)

which is the PDF of the maximum value of the truck load effect.

10.4.4.2.5.2   Maximum Values  Consider now the probability of the event when
a specific load is a possible maximum load, which has the probability P(load effect =
max).
Using the truck load effect as an example, the conceptual plot of the PDF of the distribution of the truck loads, denoted by f_T, is shown in Figure 10.17, where the ordinate is the value of the PDF and the abscissa is the intensity of the conceptual load effect.
From Figure 10.17, the total probability from zero to a given maximum load level with intensity up to x (such as the one shown in Figure 10.16 with a moment of 25,000 kps ft) can be written as

P(truck load = max) = ∫_0^{x} f_T(v) dv    (10.173)

FIGURE 10.16  Loads ≥ level x (conceptual PDF f_T of the truck load effect versus intensity in kps ft; the tail beyond x is shaded).



FIGURE 10.17  PDF of maximum load (conceptual PDF f_T versus intensity in kps ft; the area from zero to level x gives the CDF of the maximum load).

From Figure 10.17, it is seen that this probability is the CDF of the distribution of
the maximum load up to level x.

10.4.4.2.5.3   Probability of Total Conditions, Individual Cases  By using the


example of truck loads, and considering P(T only) denoted by pt, the above discus-
sion on the probability of conditions is summarized as follows:

p_t = P(truck load only) P(truck load ≥ x) P(truck load = max)
    = p_{TĒ} ∫_x^{∞} f_t(u) du ∫_0^{x} f_T(v) dv    (10.174)

Similarly, consider P(E ≥ x only), denoted by pe:

$p_e = P(\text{earthquake load only})\,P(\text{earthquake load effect} \geq x)\,P(\text{earthquake load} = \max) = p_{\bar{T}E} \int_x^{\infty} f_e(u)\,\mathrm{d}u \int_0^{x} f_E(v)\,\mathrm{d}v$ (10.175)

In the above, fe and f E are the PDF of the regular and the maximum earthquake
load effects, respectively.
Furthermore, the probability P(T ∩ E ≥ x), denoted by $p_{t\cap e}$, can be written as

$p_{t\cap e} = P(\text{combined truck and quake load})\,P(\text{combined load effect} \geq x)\,P(\text{combined load} = \max) = p_{TE} \int_x^{\infty} f_c(u)\,\mathrm{d}u \int_0^{x} f_C(v)\,\mathrm{d}v$ (10.176)

where fc and fC are the PDF of the regular and maximum combined truck and earth-
quake load effect, respectively.
The sum of pt, pe, and pt∩e is not necessarily equal to unity. This fact is rather
inconvenient for calculating the total probability. Thus, the terms pt, pe, and pt∩e as
probability of conditions with individual effect values are considered in the following.

10.4.4.2.5.4   Probability of Conditions, Unity Space  As a total probability, consider all the terms of the partial failure probabilities $p_{fT}$, $p_{fE}$, and $p_{fTE}$. All the loads, T, E, and/or T + E, are compared with a unified value, the resistance value R. That is, in Equation 10.1, when R is given, the values of T, E, and T + E cannot be chosen individually. Accordingly, in these conditional probabilities, the amplitudes of T, E, and T + E are actually of the same value. In other words, the sum of all three conditional probabilities must be unity. That is,

P(only T ≥ x) + P(only E ≥ x) + P(T ∩ E ≥ x) = 1 (10.177)

Equation 10.177 means that the terms pt, pe, and pt∩e should be normalized accord-
ing to unity. This can be done by

pt/(pt + pe + pt∩e) + pe/(pt + pe + pt∩e) + pt∩e/(pt + pe + pt∩e) = 1 (10.178)

We thus have the normalized or uniformed conditional probabilities as follows:

P(only T ≥ x) = pt/(pt + pe + pt∩e) (10.179)

which is the normalized conditional probability of the maximum truck load only,

P(only E ≥ x) = pe/(pt + pe + pt∩e) (10.180)

which is the normalized conditional probability of the maximum earthquake load


only, and

P(T ∩ E ≥ x) = pt∩e/(pt + pe + pt∩e) (10.181)

which is the normalized conditional probability of the maximum combined load only.
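As a numerical illustration of Equations 10.171 through 10.181, the following Python sketch evaluates pt, pe, and pt∩e by direct quadrature and then normalizes them to the unity space. The lognormal PDFs and the condition probabilities below are placeholder assumptions chosen only to make the script run; they are not values prescribed by this section.

import numpy as np
from scipy import integrate, stats

# Placeholder assumptions: lognormal PDFs for the regular (f) and
# maximum (F) load effects, and condition probabilities per Equation 10.169.
f_t = stats.lognorm(0.5, scale=1.0).pdf    # regular truck load effect
F_T = stats.lognorm(0.4, scale=1.5).pdf    # maximum truck load effect
f_e = stats.lognorm(0.8, scale=0.8).pdf    # regular earthquake load effect
F_E = stats.lognorm(0.6, scale=1.2).pdf    # maximum earthquake load effect
f_c = stats.lognorm(0.6, scale=1.6).pdf    # regular combined load effect
F_C = stats.lognorm(0.5, scale=2.0).pdf    # maximum combined load effect
p_T_only, p_E_only, p_TandE = 0.70, 0.25, 0.05   # sum to 1 (Equation 10.169)

def p_case(p_cond, f_reg, f_max, x):
    """Equation 10.174 pattern: p_cond * P(effect >= x) * P(effect = max)."""
    exceed, _ = integrate.quad(f_reg, x, np.inf)   # Equation 10.171
    is_max, _ = integrate.quad(f_max, 0.0, x)      # Equation 10.173
    return p_cond * exceed * is_max

x = 1.0                                    # chosen resistance-effect level
p_t = p_case(p_T_only, f_t, F_T, x)        # Equation 10.174
p_e = p_case(p_E_only, f_e, F_E, x)        # Equation 10.175
p_te = p_case(p_TandE, f_c, F_C, x)        # Equation 10.176

total = p_t + p_e + p_te                   # normalization, Equation 10.178
print("P(only T >= x) =", p_t / total)     # Equation 10.179
print("P(only E >= x) =", p_e / total)     # Equation 10.180
print("P(T and E >= x) =", p_te / total)   # Equation 10.181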

10.4.5 Brief Summary
In the above, the probability of condition for having a specific load only by using
truck and earthquake loads as examples is formulated. (The generic formulations
for other loads are identical.) Since the dead load is time invariant, whenever a time-
variable load occurs, the dead load is “waiting” there for the load combination.
Therefore, the dead load is omitted in this discussion.
Suppose that we have only two loads L1 and L2. There exist three distinct cases:
Case 1 is only having load effect L1; case 2 is only having load effect L2; and case 3
is only having the combined load effect L1 + L2, which is denoted as L3 (L3 = L1 + L2).
Each case is treated as a single kind of load effect.

The probability of condition for having the case (Li ≥ certain level x only, i = 1, 2,
3) consists of the following events:

a. There must be a load (the corresponding probability is denoted as pA).
b. The load is Li only (the corresponding probability is denoted as pBi).
c. The load effect Li is greater than or equal to x (the corresponding probability is denoted as pCi).
d. The load Li is of the maximum value (the corresponding probability is denoted as pDi).
e. The total types of loads only include L1, L2, and L3 (the corresponding probability is denoted as pE).

The resulting condition probability for load Li can then be written as

P(Li ≥ x) = [(pBi/pA)(pCipDi)]/pE (10.182)

To calculate the occurrence of single and/or simultaneous loads, we can use either
Poisson or mixed distributions.
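A minimal Poisson sketch of such an occurrence calculation follows, in the spirit of Problems 8 and 9 below; the rate and duration used here are assumptions for illustration, not prescribed values.

from math import exp, factorial

lam = 3.0           # assumed vessel-collision rate, events per year
d = 2.0 / 365.0     # assumed scour duration, in years

def poisson_pmf(k, mean):
    # P(N = k) for a Poisson count with the given mean
    return mean**k * exp(-mean) / factorial(k)

mean_in_window = lam * d     # expected collisions while a scour is active
for k in range(4):
    print(k, poisson_pmf(k, mean_in_window))
print("P(at least one simultaneous collision) =",
      1.0 - poisson_pmf(0, mean_in_window))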
Generally speaking, we introduced a methodology in this section to develop reliability-based bridge design under the conditions of multiple-hazard (MH) loads. The first step is to formulate
the total bridge failure probability using the approach of partial failure probabilities.
In Section 10.4.2, we introduced the concept of comprehensive bridge reliability, the
essence of which is an all-exclusive approach to address all necessary loads, time-
invariant and time-variable, regular and extreme, on the same platform. In so doing,
all loads, as long as they contribute accountably to bridge failures, are included. The
basic approach to realize the comprehensive reliability is to break down these loads
into separate cases, referred to as partial failure probability.
Each partial failure probability contains only a “pure” load or a “pure” load com-
bination. Technically speaking, these pure loads can be treated as time-invariant
random variables, although most loadings are time-varying random processes. The
key for such separation of variables is to find the condition when the load occurs on
its own, which is referred to as the probability of condition, and this is addressed in
Section 10.4.4. It is seen that, to realize this term, we need to further break down the
probability into five independent subconditions.
Once we calculate all the partial failure probabilities by having the partial con-
ditional probabilities and the probabilities of conditions, the summation will give us
the total bridge failure probability.
To form the design limit equations and to extract the load and resistant factors for
practical bridge design, however, the formulation of the total probability is only the
first step. To obtain the load and resistance factors, additional efforts are necessary,
which are outside the scope of this manuscript.

Problems
1. The bearing of a type of simply supported bridges is subjected to dead load
(DL) and live load (LL); both can be modeled as normally distributed ran-
dom variables. Suppose that DL ~ N(150 kips, 45 kips) and LL ~ N(235 kips, 480 kips). If the strength of the bearing is considered as deterministic, use the 3σ criterion to calculate the design load acting on the bearing.

FIGURE P10.1  Combined load on a rectangular bearing of width w.
2. A rectangular-shaped bridge bearing is subjected to the loads given in
Problem 1. The size of w is 5.5 in. The resistance stress R is considered to
be a normally distributed random variable with COV = 0.15. Use Equation
10.5 and Table 10.1 to determine the required strength (see Figure P10.1).
3. A rectangular-shaped bridge bearing is subjected to loads so that the
demanding load is L ~ N(1000 kips, 300 kips). The resistance stress SR is
considered to be a normally distributed random variable also, and SR ~
N(40 ksi, 3 ksi). Suppose that we need to let the failure probability be less
than 0.0002. Calculate the size w of the bearing (see Figure P10.1).
4. A shaft is subjected to a cyclic load up to 106 cycles. The corresponding
standard deviation of the stress of the shaft is 3.8 ksi. Suppose that the mean
value of the design strength of the shaft is 12.5 ksi. Use the 3σ criterion to
design the shaft and calculate the failure probability by using the model of
EVI. (Hint: The load is a narrowband process.)
5. A cantilever beam subjected to stationary Gaussian narrowband excitation
f(t) is shown in Figure P10.2. The auto-PSD of f(t) is also given in Figure
P10.2. The central frequency of the excitation is 5 Hz. The loading duration
is 20 min. It is required to design the minimum dimension b. The mate-
rial is 6061-T6 aluminum with a mean yield strength of 40 ksi and a mean

FIGURE P10.2  Cantilever beam (length 25 in., cross-sectional dimension b) under f(t); the auto-PSD WF(f) is flat at 0.015 k²/Hz from 0 to 10 Hz.

FIGURE P10.3  Plate of width w = 4.5 cm under stationary force Q(t).

ultimate strength of 45 ksi, both with a COV of 0.1. The natural frequency
is sufficiently high so that there is no dynamic magnification.
a. Design this beam with the 3σ criterion.
b. Design this beam for a first passage failure, with respect to ultimate.
The reliability goal is 98% for the service life.
6. A stationary force Q(t) is applied to the plate shown in Figure P10.3.
The mean and the standard deviation of Q(t) are 15 and 20 kN, respec-
tively. The failure mode is brittle fracture. Fracture toughness is given as
$K_C = 26\ \mathrm{MPa\,\sqrt{m}}$. The crack size is a = 1.3 mm. No subcritical crack propa-
gation (fatigue) is assumed. The geometry factor is Y = 1.25. Determine the
minimum value required for t using the 3σ criterion. Failure occurs when
the stress intensity factor $K = YS(\pi a)^{1/2} > K_C$ with stress S.
7. Reconsider the plate shown in Figure P10.3. Assume that Q(t) is narrow-
band with a central frequency of 1.5 Hz. The applying duration is 120 s.
Using the first passage criterion, design the plate so that the probability of
failure is less than 0.002.
8. Suppose that a type of bridge will be constructed in a river where scour
hazards may occur with an average duration of 2 days. In a year, on average,
vessel collision on the bridge may occur three times. Calculate the prob-
ability of two vessel collisions when a bridge scour occurs.
9. Suppose that in 75 years, there are a total of 100 bridge scours. Calculate
the chance of simultaneously having a scour and a vessel collision under the
condition given in Problem 8. Calculate the probability of up to three vessel
collisions.
11 Nonlinear Vibrations and
Statistical Linearization
In previous chapters, we often assumed that a system under consideration is linear.
However, practically speaking, there are many nonlinear systems that are subjected
to random and time-varying loads and deformations. In such a situation, it can be
difficult to treat the response of a nonlinear system as a stationary process. In fact,
analyzing such a random nonlinear system can be rather complex.
In this chapter, the basics of nonlinear dynamic systems are introduced. Monte
Carlo simulation is used as a tool to address the complexity of such problems.

11.1 Nonlinear Systems
In this chapter, it is assumed that all random processes are integrable. This assump-
tion is applicable for most engineering systems.
Generally speaking, we have the following reasons for a dynamic system to be
nonlinear:

1. Nonlinear damping: the damping force is not proportional to velocity
2. Nonlinear spring: the restoring force is not proportional to displacement
3. Nonlinear boundary conditions
4. Nonlinear feedback control
5. Nonlinear complex systems, such as fluid–structure interaction

Within the above scope of nonlinear vibration, we mainly focus on the first two
cases.

11.1.1 Examples of Nonlinear Systems


If the following equation is satisfied:

g[αX(t) + βY(t)] = αg[X(t)] + βg[Y(t)] (11.1)

then the system is linear. Otherwise, the system is nonlinear. Here, g(.) denotes a
function of variable (.) and α and β are scalars.
The essence of Equation 11.1 is twofold. The first is additivity, that is,

g(x + y) = g(x) + g(y)


The second is homogeneity, that is,

g(αx) = αg(x) for all α

11.1.1.1 Nonlinear System
If Equation 11.1 does not hold, we have a nonlinear system. In the following, we first theoretically consider several typical nonlinear systems; only certain examples are discussed. These models will later be referenced in order to model a bilinear system.

11.1.1.1.1 Bilinear Model
First consider a general bilinear relationship between Y(t) and X(t) given as

$Y(t) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h_2(\tau_1, \tau_2)\,X(t-\tau_1)X(t-\tau_2)\,\mathrm{d}\tau_1\,\mathrm{d}\tau_2$ (11.2)

Here h2(τ1, τ2) is referred to as the temporal kernel function.


If

τ1, τ2 < 0 (11.3)

then

h2(τ1, τ2) = 0 (11.4)

When the input is bounded, the output will also be bounded; that is, a scalar B < ∞ exists, where

$B = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \left|h_2(\tau_1, \tau_2)\right| \mathrm{d}\tau_1\,\mathrm{d}\tau_2$ (11.5)


The double Fourier transform of h2(τ1, τ2) is referred to as the frequency kernel:

$H_2(\omega_1, \omega_2) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h_2(\tau_1, \tau_2)\, e^{-j(\omega_1\tau_1 + \omega_2\tau_2)}\,\mathrm{d}\tau_1\,\mathrm{d}\tau_2$ (11.6)

Thus, in order to describe a bilinear system, either h2(τ1, τ2) or H2(ω1, ω2) must
be known. In the following, several examples of bilinear systems as well as the cor-
responding kernel functions will be considered.

11.1.1.1.2 Quadratic System
The quadratic system (see Figure 11.1) is defined as

$Y(t) = X^2(t)$ (11.7)


FIGURE 11.1  Quadratic system: Y(t) = g(X) = X²(t).

To see why a quadratic system is bilinear, denote


$X(t) = \int_{-\infty}^{\infty} X(t-\tau_1)\,\delta(\tau_1)\,\mathrm{d}\tau_1$ (11.8)

Substitution of Equation 11.8 into Equation 11.2 yields

$Y(t) = X^2(t) = \left[\int_{-\infty}^{\infty} X(t-\tau_1)\delta(\tau_1)\,\mathrm{d}\tau_1\right]\left[\int_{-\infty}^{\infty} X(t-\tau_2)\delta(\tau_2)\,\mathrm{d}\tau_2\right] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \delta(\tau_1)\delta(\tau_2)\,X(t-\tau_1)X(t-\tau_2)\,\mathrm{d}\tau_1\,\mathrm{d}\tau_2$ (11.9)

Consequently, a quadratic system is bilinear and the kernel function is

h2(τ1, τ2) = δ(τ1) δ(τ2) (11.10)

In this case, it is seen that

H2(ω1, ω2) = 1 (11.11)

Thus,

$\mathcal{F}[Y(t)] = \mathcal{F}[X^2(t)] = X(\omega) * X(\omega)$ (11.12)

From Equation 11.12, it is seen that a process X(t) passing through a quadratic system changes its frequency content.
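This frequency-shifting property can be checked numerically with a short sketch (the sampling rate and input frequency below are arbitrary choices): since cos²(ωt) = 1/2 + cos(2ωt)/2, the output spectrum of Y = X² shows lines at 0 and 2ω rather than at ω.

import numpy as np

fs, f0 = 100.0, 5.0                    # sampling rate and input frequency, Hz
t = np.arange(0.0, 10.0, 1.0 / fs)
x = np.cos(2 * np.pi * f0 * t)
y = x**2                               # quadratic system, Equation 11.7

freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
Y = np.abs(np.fft.rfft(y)) / t.size
print("output lines (Hz):", freqs[Y > 0.1])   # approximately 0 and 10 Hz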

11.1.1.1.3 Linear and Quadratic Systems in Series


When a linear and a quadratic system are in series, another bilinear system may exist
(see Figure 11.2).

FIGURE 11.2  Linear and quadratic systems in series: X(t) → h(t) → (·)² → Y(t).



Denote

$Y(t) = \left[\int_{-\infty}^{\infty} h(\tau)X(t-\tau)\,\mathrm{d}\tau\right]^2 = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h(\tau_1)h(\tau_2)\,X(t-\tau_1)X(t-\tau_2)\,\mathrm{d}\tau_1\,\mathrm{d}\tau_2$ (11.13)

From Equation 11.13, it can be determined that

$h_2(\tau_1, \tau_2) = h(\tau_1)\,h(\tau_2)$ (11.14)

Additionally, it can be proven that

H2(ω1, ω2) = H(ω1)H(ω2)

11.1.1.2 Memoryless Nonlinear System


In the following, examples of memoryless systems are used to show the probability
density function (PDF) and correlation functions.

11.1.1.2.1 Definition
Denote

Y(t) = g(X(t)) (11.15)

In Equation 11.15, g is a real function with a single variable only. That is, for a given
moment t, Y(t) is defined by g(X(t)) only. In other words, the system is memoryless.

11.1.1.2.2 PDF of Quadratic System


The aforementioned quadratic system is in fact memoryless. Consider the PDF of

$Y(t) = X^2(t)$

Suppose that X(t) is Gaussian; then

$f_X(x_t) = \frac{1}{\sqrt{2\pi}\,\sigma_t}\, e^{-\frac{x_t^2}{2\sigma_t^2}}$ (11.16)

Respectively denote xt and yt as the random variables of the input and output processes at time t. It can be proven that

$f_Y(y_t) = \frac{1}{\sqrt{2\pi y_t}\,\sigma_t}\, e^{-\frac{y_t}{2\sigma_t^2}}\, u(y_t)$ (11.17)

where u(.) is a step function.



11.1.1.2.3 Correlation Function

$R_Y(t_1, t_2) = E[Y(t_1)Y(t_2)] = E[g(X(t_1))\,g(X(t_2))] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} g(x_1)\,g(x_2)\, f(x_1, x_2; t_1, t_2)\,\mathrm{d}x_1\,\mathrm{d}x_2$ (11.18)

11.1.2 General Nonlinear System, Volterra Model


(Vito Volterra, 1860–1940)
In Section 11.1.1, we have introduced several nonlinear systems. To have a general
description of system nonlinearity, let us consider the Volterra series, which provides
a general model for nonlinear analytical systems that are both time-invariant and with limited memory (Volterra 1959). The Volterra series for system analysis was origi-
nated by Wiener (1958). The series was used for an approximate analysis on radar
noise in a nonlinear receiver circuit.

$Y(t) = k_0 + \int_0^{\infty} k_1(\tau_1)X(t-\tau_1)\,\mathrm{d}\tau_1 + \int_0^{\infty}\!\int_0^{\infty} k_2(\tau_1, \tau_2)X(t-\tau_1)X(t-\tau_2)\,\mathrm{d}\tau_1\,\mathrm{d}\tau_2 + \cdots + \int_0^{\infty}\!\cdots\!\int_0^{\infty} k_n(\tau_1, \tau_2, \ldots, \tau_n)X(t-\tau_1)X(t-\tau_2)\cdots X(t-\tau_n)\,\mathrm{d}\tau_1\,\mathrm{d}\tau_2\cdots\mathrm{d}\tau_n$ (11.19)
The output is a sum of the constant k0 and those integrals taken from one dimen-
sion to n dimensions.
If only the first term (the constant) and the second term (the convolution) exist, the system is linear, with k1 being the impulse–response function; otherwise, it is nonlinear with multiple convolutions. For example, h2 in Equation 11.2 is the second-order convolution kernel.

11.1.3 Structure Nonlinearity
11.1.3.1 Deterministic Nonlinearity
11.1.3.1.1 Nonlinear Spring
11.1.3.1.1.1   Softening Spring  Two types of commonly used nonlinear springs
in engineering systems modeling are softening and hardening springs. Figure 11.3
illustrates a softening spring. In Chapter 10, Equation 10.108 defined one kind of
spring softening mechanism using the failure strain εʹf to denote the yielding point
(dy, f y). Overloading the spring can result in softening the spring’s stiffness. In this
instance, the stress passes the yielding point, which is commonly seen in structural
damage. In Figure 11.3b, the spring is shown to be below the yielding point (dy, f y).
In Figure 11.3a, f m and x0 are the maximum force and deformation, respectively.
When the load f increases, the corresponding stiffness will decrease continuously.
As a result, a certain point, for example, 0.6f m and d 0.6, may be set, with an unload-
ing stiffness ku and an effective stiffness keff at the maximum deformation. Another

FIGURE 11.3  Softening stiffness. (a) Yielding point: 0.6fm. (b) Otherwise defined yielding point. (Force–deformation curves showing ku, aku, keff (ksec), fm, fy, d0.6, dy, and x0.)

commonly used definition is shown in Figure 11.3b. At the yielding point fy and dy, the
unloading stiffness is defined by ku, and beyond that point, the loading stiffness is
defined as k l = aku.
Generally speaking, when the loading force is sufficiently small, the stiffness
is close to linear and the force and deformation are close to proportional. (Recall
Hooke’s law.) In this case, the use of either stress or strain to denote the deforma-
tion is fairly equivalent, and both stress and force are commonly used. However,
beyond the yielding point, a rather large deformation occurs even under a rather
small force. Consequently, using strain or deformation becomes more convenient
(see, for instance, FEMA 2009, Figure c12.1-1).

11.1.3.1.1.2   Estimation of Effective Stiffness  In the literature and as accepted by many building codes, the effective stiffness is calculated by using the secant line
shown in Figure 11.3, namely,

keff = ksec = f m /x0 (11.20)

However, it must be noted that when an effective linear system is used to represent
a nonlinear vibration system, the effective stiffness should satisfy the following:

$k_{\text{eff}} = 2E_p / x_0^2$ (11.21a)

and

keff = fc/x0 (11.21b)

Here, Ep is the potential energy restored by the system with displacement x0; fc is
the conservative force, and when the system yields and reaches a displacement of x0,
the maximum force f m will contain both the conservative force and the dissipative
force fd. Specifically, this is written as

f m = fc + fd (11.22)

As a result, the effective stiffness keff will be smaller than the secant stiffness ksec.
In Chapter 6, it was shown that a vibration is typically caused by the energy
exchange between potential and kinetic energies. The natural frequency of a linear
system can then be obtained by letting the maximum potential energy to be equal to
the maximum kinetic energy, that is,

$\frac{k x_0^2}{2} = \frac{m v_0^2}{2} = \frac{m \omega_n^2 x_0^2}{2}$ (11.23)

For a nonlinear system, the above equation should be modified as

$\frac{k_{\text{eff}}\, x_0^2}{2} = \frac{m\, \bar{\omega}_n^2\, x_0^2}{2}$ (11.24)

Here, keff is defined in Equations 11.21. The effective frequency $\bar{\omega}_n$ can be calculated by

$\bar{\omega}_n = \sqrt{k_{\text{eff}}/m}$ (11.25)

However, it is seen that (Liang et al. 2012)

$\frac{k_{\text{eff}}}{m} = \frac{f_c/x_0}{m} < \frac{f_m/x_0}{m}$ (11.26)

On the right-hand side of Equation 11.26, the term f m /x0 is often used to estimate
the effective stiffness, referred to as the secant stiffness, specifically ksec = f m /x0.
Considering the dynamic property of a nonlinear system, the secant stiffness should
not be used as the effective stiffness. Following this logic, the effective stiffness
should therefore be defined differently.
In the structurally bilinear case, denoted by the shadowed regions in Figure 11.4, when the system moves from 0 to x0, the potential energy is given by

$E_p = \tfrac{1}{2}\left[k_u d_y^2 + k_d (x_0 - d_y)^2\right]$ (11.27)

Defining

$k_{\text{eff}} = 2E_p/x_0^2 = \left[k_u d_y^2 + k_d (x_0 - d_y)^2\right]/x_0^2$ (11.28)

and by using the displacement ductility, Equation 11.28 can be rewritten as

$k_{\text{eff}} = \mu^{-2}\left[k_u + k_d(\mu - 1)^2\right]$ (11.29)



FIGURE 11.4  Maximum potential energy of a bilinear system. (Force–displacement curve with unloading stiffness ku, post-yield stiffness kd, characteristic strength qd, maximum force fm at x0, and yield displacement dy; the shaded regions represent the stored potential energy.)

where the ductility is

μ = x0/dy (11.30)

Further calculations yield an effective stiffness of

$k_{\text{eff}} = k_u\, \frac{1 + a(\mu - 1)^2}{\mu^2}$ (11.31)

In Equation 11.31, a is the ratio of the yield stiffness kd and the unload stiffness ku.
Accordingly, the corresponding effective period is

$T_{\text{eff}} = \sqrt{\frac{\mu^2}{1 + a(\mu - 1)^2}}\; T_1 = 2\pi\mu\sqrt{\frac{m}{\left[1 + a(\mu - 1)^2\right] k_u}}$ (11.32)

Compare the above effective period from Equation 11.32 with the period obtained through the secant stiffness ksec:

$T'_{\text{eff}} = 2\pi\sqrt{\frac{m}{f_m/x_0}} = 2\pi\sqrt{\frac{m\, x_0}{f_m}}$ (11.33)

By comparison, it is observed that Teff, based on the energy-consistent effective stiffness, is a more consistent estimate than T′eff.
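The comparison can also be sketched numerically via Equations 11.31 through 11.33 (the values of m, ku, and a below are arbitrary illustrations):

import numpy as np

def bilinear_effective(mu, a, ku, m):
    # Energy-based effective stiffness and period, Equations 11.31 and 11.32
    keff = ku * (1 + a * (mu - 1) ** 2) / mu**2
    return keff, 2 * np.pi * np.sqrt(m / keff)

m, ku, a = 1.0, 400.0, 0.05
for mu in (2.0, 4.0, 8.0):
    keff, Teff = bilinear_effective(mu, a, ku, m)
    ksec = ku * (1 + a * (mu - 1)) / mu       # secant stiffness fm/x0
    Tsec = 2 * np.pi * np.sqrt(m / ksec)      # basis of Equation 11.33
    print(f"mu={mu}: keff={keff:.1f} < ksec={ksec:.1f}; "
          f"Teff={Teff:.3f} s > T'eff={Tsec:.3f} s")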

11.1.3.1.2 Nonlinear Damping
11.1.3.1.2.1   Timoshenko Damping (Stephen P. Timoshenko, 1878–1972)  By using the Timoshenko damping approach (see Liang et al. 2012), it is possible to derive the effective damping ratio of the entire bilinear system by using several different approximations of the effective stiffness. It can be shown that for the steady-state response of a linear system under sinusoidal excitation, the damping ratio can be calculated through the following equation:

$\zeta = \frac{E_d}{4\pi E_K}$ (11.34)

where Ed and EK are, respectively, the energy dissipated during a cycle and the maxi-
mum potential (kinetic) energy. For nonlinear damping, the damping ratio will be
denoted by ζeff in the subsequent sections.

11.1.3.1.2.2   Bilinear Damping  For the Timoshenko damping, initially the approach based on the lines ku and aku can be used to obtain

$\zeta_{\text{eff}} = \frac{E_d}{4\pi E_k} = \frac{2(\mu - 1)(1 - a)}{\pi\mu(1 + a\mu - a)}$ (11.35)

Using Equation 11.27 for the maximum potential energy, the damping ratio is written as

$\zeta_{\text{eff}} = \frac{2 q_d (x_0 - d_y)}{\pi\left[k_u d_y^2 + k_d (x_0 - d_y)^2\right]}$ (11.36)

From the viewpoint of the entire system, this can be rewritten as

$\zeta_{\text{eff}} = \frac{2 q_d (x_0 - d_y)}{\pi\left[k_u x_0^2 + k_d (x_0 - d_y)^2\right]}$ (11.37)

Given that the characteristic strength qd for the bilinear system can be written as

qd = (ku – kd)dy (11.38)

the damping ratio can be denoted as

$\zeta_{\text{eff}} = \frac{E_d}{4\pi E_p} = \frac{2(\mu - 1)(1 - a)}{\pi\left[\mu^2 + a(\mu - 1)^2\right]}$ (11.39)

11.1.3.1.2.3   Sublinear Damping  When damping is viscous, the damping force will generally be expressed as

$f_d(t) = c\,|\dot{x}|^{\beta}\,\mathrm{sgn}(\dot{x})$ (11.40)

where β is the damping exponent.



The energy dissipated by the damping force during a cycle is

$E_d = c\,\omega_f^{\beta} x_0^{\beta+1} \int_0^{2\pi} \left|\cos(\omega_f t)\right|^{\beta+1}\,\mathrm{d}(\omega_f t) = c\,\omega_f^{\beta} x_0^{\beta+1} A_{\beta}$ (11.41)

In Equation 11.41, Aβ denotes

$A_{\beta} = \int_0^{2\pi} \left|\cos(\omega_f t)\right|^{\beta+1}\,\mathrm{d}(\omega_f t) = \frac{2\sqrt{\pi}\,\Gamma\!\left(\frac{\beta+2}{2}\right)}{\Gamma\!\left(\frac{\beta+3}{2}\right)}$ (11.42)

Through the use of Equation 11.33, the damping ratio becomes

$\zeta_{\text{eff}} = \frac{c\, x_0^{\beta-1}\, \omega_{\text{eff}}^{\beta} A_{\beta}}{2\pi m\, \omega_{\text{eff}}^2} = \frac{c\, x_0^{\beta-1}\, \omega_{\text{eff}}^{\beta} A_{\beta}}{2\pi k_{\text{eff}}}$ (11.43)

When β = 0, the nonlinear viscous damping reduces to dry-friction damping. For this instance,

$f_d(t) = c\,\mathrm{sgn}(\dot{x})$ (11.44)

and

c = μN (11.45)

As mentioned above, friction damping can be modeled as a special case of bilinear damping.
In the event β = 1, the viscous damping will reduce to that of linear damping.

11.1.3.1.2.4   Alternative Damping  An additional way to calculate the effective damping ratio is through the force-based, or alternative, damping. Namely, this is achieved through the equation

$\zeta_{\text{eff}} = \frac{f_d}{2 f_m}$ (11.46)

where fd and f m are the amplitude of the damping and maximum forces as previously
defined.
In the following example, we consider the case of bilinear damping by using the alternative approach. Clearly defining the dissipative and restoring forces in a bilinear system can be complex. A bilinear system may have one of the following cases—case 1: friction dampers installed in a linear structure; case 2: bilinear dampers installed in a linear structure; case 3: friction dampers installed in a bilinear structure; and case 4: bilinear dampers installed in a bilinear structure. For each case, the dissipative force fd may be different due to the nonlinearity of the total system.
In order to simplify the study, assume that the damping force is $f_d = q_d = f_y - a k_u d_y$, with an equivalent restoring force of

$f_m = (k_u - k_d)\, d_y + k_d\, x_0$ (11.47)

and thus yielding a damping ratio of

$\zeta_{\text{eff}} = \frac{f_d}{2 f_m} = \frac{1 - a}{2\left[1 + a(\mu - 1)\right]}$ (11.48)

It is noted that, with different formulas for the dissipative force, the calculated damping ratio will vary slightly.
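The energy-based and force-based damping ratios can be compared directly; a minimal sketch (with an assumed post-yield stiffness ratio a) follows:

import numpy as np

def zeta_energy(mu, a):
    # Timoshenko (energy-based) damping ratio for a bilinear loop, Equation 11.39
    return 2 * (mu - 1) * (1 - a) / (np.pi * (mu**2 + a * (mu - 1) ** 2))

def zeta_force(mu, a):
    # Alternative (force-based) damping ratio, Equation 11.48
    return (1 - a) / (2 * (1 + a * (mu - 1)))

a = 0.05
for mu in (1.5, 2.0, 4.0, 8.0):
    print(f"mu={mu}: energy {zeta_energy(mu, a):.3f}, force {zeta_force(mu, a):.3f}")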

11.1.3.2 Random Nonlinearity
11.1.3.2.1 Random Force and Displacement
In the above discussion of nonlinear stiffness and damping, two assumptions existed, the first being that the maximum force fm and displacement x0 are fixed values, although in many cases the maximum force and displacement form a random process.
We now consider the probability of the maximum deformation, and note that
We now consider the probability of the maximum deformation, and note that
the probability of maximum force will have similar results. (Referring back to
Section 5.2.) This is understood as a problem of the distributions of extrema.

11.1.3.2.1.1   Joint PDF of Displacement and Velocity  Consider the correlation between the displacement X and the velocity $\dot{X}$. Both are assumed to be stationary and narrow-banded with a zero mean. Among these three assumptions, the case of zero mean is the most realistic, while the case of narrow-banded is a sensible approximation. This will be further discussed in Section 11.1.3.2.1.7. The assumption of stationarity is a rather rough estimation. However, a nonstationary process can be modified through separation of a(t) and U(t) (see Equation 9.108).
For a joint PDF,

$E[X(t)\dot{X}(t)] = \left.\frac{\mathrm{d}R_X(\tau)}{\mathrm{d}\tau}\right|_{\tau=0}$ (11.49)

Substituting


$R_X(\tau) = \int_{-\infty}^{\infty} S_X(\omega)\, e^{j\omega\tau}\,\mathrm{d}\omega$ (11.50)

will yield

$E[X(t)\dot{X}(t)] = \left.\frac{\mathrm{d}}{\mathrm{d}\tau}\left[\int_{-\infty}^{\infty} S_X(\omega)\, e^{j\omega\tau}\,\mathrm{d}\omega\right]\right|_{\tau=0} = j\int_{-\infty}^{\infty} \omega\, S_X(\omega)\,\mathrm{d}\omega = 0$ (11.51)

The integral is zero given that SX(ω) is an even function, so that the integrand ωSX(ω) is odd. By further assuming that the displacement X and the velocity $\dot{X}$ are Gaussian, a joint PDF given by

$f_{X\dot{X}}(x, \dot{x}) = \frac{1}{2\pi\sigma_X\sigma_{\dot{X}}}\exp\left[-\frac{1}{2}\left(\frac{x^2}{\sigma_X^2} + \frac{\dot{x}^2}{\sigma_{\dot{X}}^2}\right)\right]$ (11.52)

results. Equation 11.52 can be rewritten as

$f_{X\dot{X}}(x, \dot{x}) = \frac{1}{\sqrt{2\pi}\,\sigma_X}\exp\left(-\frac{x^2}{2\sigma_X^2}\right)\cdot\frac{1}{\sqrt{2\pi}\,\sigma_{\dot{X}}}\exp\left(-\frac{\dot{x}^2}{2\sigma_{\dot{X}}^2}\right) = f_X(x)\, f_{\dot{X}}(\dot{x})$ (11.53)

11.1.3.2.1.2   Level and Zero Up Crossing  Now, we consider the rate at which the displacement up-crosses level a. From Equation 5.24, we can obtain

$v_a^+ = \frac{1}{2\pi}\frac{\sigma_{\dot{X}}}{\sigma_X}\, e^{-\frac{a^2}{2\sigma_X^2}}$ (11.54)

When a = 0, this results in the zero up-crossing rate (see Equation 5.26 for additional explanation):

$v_0^+ = \frac{1}{2\pi}\frac{\sigma_{\dot{X}}}{\sigma_X}$ (11.55)

11.1.3.2.1.3   Distribution of Peak Value  Furthermore, again referring back to Chapter 5, the PDF of the peak value a of displacements is given by

$f_A(a) = \frac{a}{\sigma_X^2}\exp\left[-\frac{1}{2}\left(\frac{a^2}{\sigma_X^2}\right)\right], \quad a > 0$ (11.56)

For additional explanation, see Equation 5.100. Figure 11.6 plots an example of
the PDF.

11.1.3.2.1.4   Proportional Constant  In order to calculate the above-mentioned effective stiffness and damping, the maximum displacement and therefore the maximum force must be known. This is true by using either the energy method or the secant

stiffness. However, for random processes, the maximum displacement will only be
reached a few times, and in most cases, the magnitude of displacements will be smaller
than the maximum value. To estimate the effective stiffness and damping more realis-
tically, a specific displacement dp, which is smaller than the maximum value, must be
found. Using a proportional coefficient pc, the displacement dp can be written as

dp = pcx0 (11.57)

To simplify engineering applications, a fixed constant pc is more appropriate.


Next, a proper value of this constant will be considered.
Using a simple bilinear model of nonlinear stiffness, the elastic perfectly plastic model, we can find the point dp for this particular model. The nonlinear force–displacement relationship is shown in Figure 11.5, where fm is the maximum force
and dy is the yielding deformation.
The above has shown that the nonlinear stiffness, denoted by kn, is a function of
the peak displacement a.
When the peak displacement a is less than dy, the system remains linear and the
stiffness kn is equal to ku. As the peak displacement reaches dy, the total conservative
energy becomes

$E_p = \tfrac{1}{2} k_u d_y^2 = \tfrac{1}{2} f_m d_y$ (11.58)

When the peak displacement is larger than dy, then referencing Equation 11.28,
the stiffness kn is

$k_n = 2E_p/a^2 = f_m d_y/a^2$ (11.59)

In between zero and the maximum peak displacements x0, the average stiffness
kave is given by

$k_{\text{ave}} = \int_0^{x_0} k_n f_A(a)\,\mathrm{d}a = \int_0^{d_y} k_u\, \frac{a}{\sigma_X^2}\, e^{-\frac{a^2}{2\sigma_X^2}}\,\mathrm{d}a + \int_{d_y}^{x_0} \frac{f_m d_y}{a\,\sigma_X^2}\, e^{-\frac{a^2}{2\sigma_X^2}}\,\mathrm{d}a$ (11.60)

FIGURE 11.5  Elastic perfectly plastic deformation. (Force vs. peak displacement: yielding point Y at dy, equivalent point P at dp, maximum point M at x0 (normalized to 1), with slopes k′eff and keff.)



Use the average stiffness to represent the effective stiffness:

$k_{\text{eff}} = k_{\text{ave}} = \int_0^{d_y} \frac{f_m}{d_y}\, \frac{a}{\sigma_X^2}\, e^{-\frac{a^2}{2\sigma_X^2}}\,\mathrm{d}a + \int_{d_y}^{x_0} \frac{f_m d_y}{a\,\sigma_X^2}\, e^{-\frac{a^2}{2\sigma_X^2}}\,\mathrm{d}a$ (11.61)

Figure 11.5 illustrates the relationship

$k_u d_y = k_{\text{eff}}\, d_p$ (11.62)

In this instance, the displacement dp can be written as

$d_p = k_u d_y / k_{\text{eff}} = \frac{f_m}{d_y}\, d_y / k_{\text{ave}} = \frac{f_m}{k_{\text{ave}}}$ (11.63)

Furthermore, the proportional constant pc can be determined by rearranging Equation 11.57 to

pc = dp/x0 (11.64)

Without loss of generality, the maximum possible value of the peak displacement
can be normalized to unity so that

$p_c = d_p$ (11.65)

It is noted that the distribution of the peak displacement a is from zero to infinity.
Allowing

x0 = 1.0 (11.66)

will result in errors ε in computing the probability of a being beyond 1 (the normal-
ized displacement), denoted by

$\varepsilon = \int_1^{\infty} \frac{a}{\sigma_X^2}\, e^{-\frac{a^2}{2\sigma_X^2}}\,\mathrm{d}a$ (11.67)

Seemingly, the error is a function of the variance $\sigma_X^2$ of the process X(t).

11.1.3.2.1.5   Allowed Uncertainty  In using Equation 11.64, Equation 11.61 must first be evaluated, which contains an unknown variable dy. It is found that the yield-
ing deformation dy is the property of the structure, which is virtually independent
of the displacement process X(t). Given that Equation 11.61 is a generic formula, the
value of parameter dy is uncertain. Equation 11.61 can be evaluated through statisti-
cal study of the random process X(t) only.
In engineering applications, it is uncommon to thoroughly investigate the possible
distribution of certain unknown variables, such as yielding deformations. Rather,

such unknown variables are treated as uncertain parameters. To determine how much
error is present, the maximum possible value of the uncertain variables must be deter-
mined. In the case of unknown yielding deformations, when the value of dy is small,
less error will exist. Conversely, when the value of dy is large, more error will exist.
We now consider the maximum allowable dy when Equation 11.60 is used.
Refer again to Figure 11.5. Before the yielding point, the force is proportional
to the displacement. Consequently, the distribution density function of the force vs.
the displacement, denoted by f F(a), is exactly equal to the PDF of the displacement.
Explicitly, this can be written as

$f_F(a) = \frac{a}{\sigma_X^2}\exp\left[-\frac{1}{2}\left(\frac{a^2}{\sigma_X^2}\right)\right], \quad 0 \leq a < d_y$ (11.68)

After yielding, the force f m will remain constant, while the displacement will vary
from dy to 1. This is denoted by

$f_F(a) = \frac{d_y}{\sigma_X^2}\exp\left[-\frac{1}{2}\left(\frac{d_y^2}{\sigma_X^2}\right)\right]\left(\frac{a}{1 - d_y}\right), \quad d_y \leq a < 1$ (11.69)

As the peak displacement a varies from 0 to 1, given that x0 is normalized to be 1, the cumulative distribution function (CDF) of the force will become unity, which is explicitly written as
is explicitly written as

$\int_0^{d_y} \frac{a}{\sigma_X^2}\, e^{-\frac{a^2}{2\sigma_X^2}}\,\mathrm{d}a + \int_{d_y}^{1} \frac{d_y}{\sigma_X^2}\, e^{-\frac{d_y^2}{2\sigma_X^2}}\left(\frac{a}{1 - d_y}\right)\mathrm{d}a = 1$ (11.70)

Evaluating Equation 11.70 will result in

$-e^{-\frac{d_y^2}{2\sigma_X^2}} + 1 + \frac{d_y}{\sigma_X^2}\, e^{-\frac{d_y^2}{2\sigma_X^2}}\,(1 - d_y) = 1$ (11.71)
σ 2X

or

$\sigma_X^2 = d_y(1 - d_y)$ (11.72)

Suppose a displacement process X(t) is given and the variance $\sigma_X^2$ is fixed. Then, the “allowed value” of the yielding displacement dy is given by

$d_y = \frac{1 \pm \sqrt{1 - 4\sigma_X^2}}{2}$ (11.73)

Note that, with x0 normalized to be unity, the displacement ductility μ is

μ = 1/dy (11.74)

At this instance, consider the error of letting x0 = 1.


For example, suppose that

σx = 1/3 (11.75)

Then

$\int_0^1 f_A(a)\,\mathrm{d}a = \int_0^1 \frac{a}{(1/3)^2}\exp\left[-\frac{1}{2}\frac{a^2}{(1/3)^2}\right]\mathrm{d}a \approx 99\%$

or

$\varepsilon = 1 - 99\% = 1\%$

When considering the negative square root, Table 11.1 can be used to show the
corresponding errors.
From Table 11.1, it is established that, when the standard deviation σX = 0.3,
calculated through Equation 11.67, the error ε = 0.4%, which is sufficiently small.
However, based on Equation 11.72, the “allowed yielding displacement” is also small
at a value of 0.1, with a ductility of 10. It is seen that when the allowed yielding
displacement becomes increasingly larger, the error ε will also be larger. As an
example, when dy = 0.2, the error will become 4.4%.
Next we calculate the averaged effective stiffness. For example, when σX = 0.4
and dy = 0.2, the first term of Equation 11.61 yields

$\frac{f_m}{d_y}\int_0^{d_y} \frac{a}{\sigma_X^2}\, e^{-\frac{a^2}{2\sigma_X^2}}\,\mathrm{d}a = 0.59 f_m$

Table 11.1
Errors versus Yielding Displacements
dy 0.100 0.109 0.118 0.127 0.138 0.149 0.160 0.173 0.186 0.2000
μ 10.0 9.21 8.50 7.85 7.27 6.74 6.25 5.80 5.38 5
σX 0.30 0.31 0.32 0.33 0.34 0.36 0.37 0.38 0.39 0.4
ε 0.004 0.006 0.008 0.011 0.015 0.019 0.024 0.030 0.034 0.044

The second term will yield

$\int_{d_y}^{1} \frac{f_m d_y}{a\,\sigma_X^2}\, e^{-\frac{a^2}{2\sigma_X^2}}\,\mathrm{d}a = 0.95 f_m$

and the average or effective stiffness is given by

keff = kave = 0.59f m + 0.95f m = 1.54f m

As a result, the displacement is

$d_p = \frac{f_m}{k_{\text{ave}}} = 0.65$

Moreover, given that x0 = 1,

pc = 0.65 (11.76)

Similarly, other values of dy and σX can be calculated. Table 11.2 lists a number
of computed results. From the table, as dy varies from 0.13 to 0.20, pc will vary from
0.6 to 0.65.

11.1.3.2.1.6   Simplified Approach  Additionally, from Tables 11.1 and 11.2, it is seen that for the simplified bilinear model, when the value of dy is small, very large
seen that for the simplified bilinear model, when the value of dy is small, very large
errors do not occur. Thus, a more simplified approach can be used by assuming that
the equivalent displacement of dp can be computed at

dp = μA + dy (11.77)

An additional approximation is done by using the sum of the mean plus 70%
standard deviation represented by

dp = μA + 0.7σA (11.78)

Here, the mean μA and the standard deviation σA will be determined as follows.
Table 11.3 gives a comparison of the calculated results.

Table 11.2
Yielding Displacements dy versus Proportional Constant pc
dy 0.13 0.14 0.15 0.16 0.17 0.18 0.19 0.20
σX 0.34 0.35 0.36 0.37 0.38 0.38 0.39 0.4
pc 0.60 0.61 0.61 0.62 0.63 0.64 0.64 0.65

Table 11.3
Equivalent dp
dy 0.13 0.14 0.15 0.16 0.17 0.18 0.19 0.20
μA 0.42 0.43 0.45 0.46 0.37 0.48 0.49 0.50
σA 0.22 0.23 0.23 0.24 0.25 0.25 0.26 0.26
pc 0.60 0.61 0.61 0.62 0.63 0.64 0.64 0.65
μA + dy 0.55 0.57 0.60 0.62 0.64 0.66 0.68 0.70
μA + 0.7σA 0.58 0.59 0.61 0.63 0.64 0.66 0.67 0.68

Observe from Figure 11.6 that the peak of the PDF (often referred to as the distri-
bution mode) of the Rayleigh distribution is at

a = σX (11.79)

while the mean value is at

$\mu_A = \sqrt{\frac{\pi}{2}}\, \sigma_X$ (11.80)

Furthermore, the standard deviation is

$\sigma_A = \sqrt{\frac{4 - \pi}{2}}\, \sigma_X$ (11.81)

Since a = 1 denotes a normalized maximum peak, Equation 11.79 implies that the
yielding displacement occurs when the corresponding PDF is at its maximum value.

FIGURE 11.6  Rayleigh distribution. (PDF vs. normalized peak value, indicating the mode, the mean, the mean + 1 STD, and the assumed maximum peak.)



11.1.3.2.1.7   Penzien Constant (Joseph Penzien, 1924–2011)  When low cycle fatigue occurs, the unloading stiffness ku may be reduced. When this occurs, the displacement at keff will be significant, resulting in a larger proportional constant.
Furthermore, for a displacement process that is not narrow-band Gaussian, Equation 11.56 may not be sufficient to accurately describe the distribution of the peak value. For different models, different accuracy assignments and values of yielding points will result in varying values of the proportional parameter pc. The proportional parameter pc will also vary due to the reduction of the unloading stiffness. For simplification of engineering applications, however, Equation 11.76 is often used, namely,

pc = 0.65

Professor J. Penzien suggested that, during a random process, the chance of reaching x0 is minimal. To provide a more realistic estimation of the effective stiffness and damping based on observation, 0.65 times x0 should be used rather than x0 alone. The proportional constant 0.65 is accordingly referred to as the Penzien constant.

11.1.3.2.1.8   Effective Stiffness and Damping Estimation under Random Process  In using the Penzien constant, the value x0 is replaced by (pcx0) or (0.65x0) in the aforementioned formulas to calculate the effective stiffness and damping, as well as the effective period, among others.
For example, refer to Figure 11.5 for the values of keff and k′eff. To better estimate the effective stiffness, substitute

$k_{\text{eff}} = f_m/(0.65 x_0)$ (11.82)

for the equation

$k'_{\text{eff}} = f_m/x_0$ (11.83)

Additionally, the equation

$\zeta_{\text{eff}} = \frac{c\,(p_c x_0)^{\beta-1}\, \omega_{\text{eff}}^{\beta} A_{\beta}}{2\pi m\, \omega_{\text{eff}}^2} = \frac{c\,(0.65 x_0)^{\beta-1}\, \omega_{\text{eff}}^{\beta} A_{\beta}}{2\pi k_{\text{eff}}}$ (11.84)

is used to replace Equation 11.43 for a better estimation of effective damping.
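A direct evaluation of Equation 11.84, with Aβ computed from Equation 11.42, can be sketched as follows; all numerical parameter values are assumed for illustration only.

import numpy as np
from scipy.special import gamma

def A_beta(beta):
    # Equation 11.42
    return 2 * np.sqrt(np.pi) * gamma((beta + 2) / 2) / gamma((beta + 3) / 2)

def zeta_eff(c, beta, x0, m, keff, pc=0.65):
    # Equation 11.84 with the Penzien constant pc
    w_eff = np.sqrt(keff / m)            # Equation 11.25
    return (c * (pc * x0) ** (beta - 1) * w_eff**beta * A_beta(beta)
            / (2 * np.pi * keff))

# Assumed sublinear damper (beta = 0.4) on a unit-mass system:
print(zeta_eff(c=2.0, beta=0.4, x0=0.05, m=1.0, keff=400.0))

As a check, for β = 1 the expression reduces to the familiar linear result ζ = c/(2√(km)), since A₁ = π.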

11.1.3.2.2 Variable Path of Loading/Unloading
The above descriptions are for the cases when the loading and unloading paths are deterministic. In actuality, due to the stiffness deterioration described by Equation 10.108 and resulting from low cycle fatigue, these paths may be altered. In this case, if the loading process is random, the response will no longer be a memoryless random process. That is to say, previous stiffness damage in a system will affect the subsequent force–deformation relationship.
For Figure 11.7, suppose that (f1, d1) is the first yielding point and the corresponding effective stiffness keff1 is calculated based on the initial unloading stiffness ku1.

FIGURE 11.7  Reduction of effective stiffness. (Force–displacement paths with yielding points Y1 at (f1, d1) and Y2 at (f2, d2), unloading stiffnesses ku1 and ku2, and effective stiffnesses keff1 and keff2 at the point P, dp.)

The second yielding point is then denoted by (f 2, d2). If the stiffness is reduced (see
Equation 10.108 for further reference), the corresponding effective stiffness will be
denoted by keff2 such that

keff2 < keff1 (11.85)

Seemingly, the value of keff2 will depend not only on the value of dp but also on the degree to which the unloading stiffness ku2 is reduced. In general, the computations of keff2, as well as similar computations, are rather complex. The Monte Carlo
simulation provides a practical method for carrying out this estimation. This will be
briefly discussed in Section 11.3.
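A minimal Monte Carlo sketch in the spirit of this suggestion follows: peak demands are drawn from the Rayleigh distribution of Equation 11.56, and the peak-dependent stiffness of Equations 11.58 and 11.59 is averaged over the samples. The parameter values are assumed for illustration, and no deterioration rule (Equation 10.108) is included; adding one would simply modify the unloading stiffness sample by sample.

import numpy as np

rng = np.random.default_rng(2024)

sigma_x, d_y, k_u = 0.4, 0.2, 100.0      # assumed process and system values
f_m = k_u * d_y                          # elastic perfectly plastic yield force

# Peak displacements from the Rayleigh PDF of Equation 11.56; the small
# tail beyond the normalized maximum is retained here for simplicity.
a = rng.rayleigh(scale=sigma_x, size=100_000)

# Peak-dependent stiffness: k_u below yield, f_m*d_y/a^2 above (Equation 11.59)
k_n = np.where(a < d_y, k_u, f_m * d_y / a**2)

k_ave = k_n.mean()                       # Monte Carlo estimate of Equation 11.60
print("k_ave ~", round(k_ave, 2), "  d_p ~", round(f_m / k_ave, 3))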

11.2 Nonlinear Random Vibrations


In Chapters 7 and 8, we discussed the vibration problems of linear systems. Generally speaking, for time-varying excitations and responses, stochastic differential equations (SDEs) are used. An SDE is a differential equation in which one or more of the terms are a random process, so that its solution is also a random process. SDEs are used to model diverse phenomena, not limited to typical vibration problems. In 1905, Einstein explained that Brownian motion is due to statistical fluctuations, which leads to an SDE involving the derivative of the Wiener process (Bichteler 1998). Besides white noise excitations, we can have many other types of random processes as excitations. For more detailed information, readers may consult Monohar (1995), for instance.

11.2.1 General Concept of Nonlinear Vibration


Relationships between two sets of variables, particularly between the input and the out-
put of random processes, can be linear. Otherwise, the system is said to be nonlinear.
For linear systems, the transfer function has been the key link to deal with a linear
input–output process. However, for a general nonlinear system, a transfer function does
not exist. Therefore, an alternative approach to describe the relationship is needed.
While a linear system can be described through the classical method, nonlin-
ear systems can have many different characteristics, most of which do not have

closed-form solutions. In the following, general approaches to study nonlinear random vibrations will be discussed briefly.

11.2.1.1 The Phase Plane
The phase plane plots velocity vs. displacement; it is a common tool that provides a general approach to study the nature of specific nonlinear vibrations, including system stability, bounds of peak values, and others.

11.2.1.1.1 Energy Dissipation
Figure 11.8 shows the plot of viscous damping force vs. a deterministic displacement, where the smallest loop is obtained when β = 1; the smaller the value of β, the larger the energy dissipation loop. It is known that this form of plot directly indicates the energy dissipation of a system with steady-state responses of 0.1 m. Because viscous damping forces relate directly to velocities, Figure 11.8 can be seen as several generalized phase planes, among which only the linear case (velocity vs. displacement) traces an exact ellipse.
As shown in Figure 11.9, given the same displacement for general viscous damping,
the smaller the damping exponent β is, the larger the amount of energy that can be dis-
sipated. However, this does not necessarily mean that a system with low β will dissipate
more energy. As an example, consider a system with a mass of m = 1, a damping coef-
ficient of c = 1, and a stiffness of k = 50, in which the system is excited by a determin-
istic sinusoidal force with a driving frequency of ω = 3. Figure 11.10 shows the phase
plane of each case when β = 0.1 and β = 1.0. This shows that the area enclosed in the
displacement–velocity curve with β = 1.0 is larger than that in the case with β = 0.1.

11.2.1.1.2 Bounds
Figure 11.10 shows the phase plane for the two systems in Figures 11.8 and 11.9. Clearly,
the curves will provide information for the bounds of the velocities and displacements.

FIGURE 11.8  Viscous damping forces and generalized phase planes. (Damping force, N, vs. displacement, m, for β = 0.0, 0.3, 0.6, and 1.0.)



FIGURE 11.9  Phase plane. (Normalized velocity vs. normalized displacement for β = 0.1 and β = 1.0.)

FIGURE 11.10  Simulated phase plane. (Velocity vs. displacement for β = 0.1 and β = 1.0.)

11.2.1.2 Example of Nonlinear Vibration with Closed-Form Solution


In order to see the fundamental difference between linear and nonlinear vibration,
consider an example of the Duffing equation as described in the following.

11.2.1.2.1 Duffing Equation (Georg Duffing, 1861–1944)


The Duffing equation is used to describe a typical nonlinear vibration system such
that (see Lin 1967)

mx + cx + kx ± µx 3 = f (t ) (11.86)



Here m, c, k, and µ are all constants. Let us first consider the undamped case with sinusoidal excitation given by

$\ddot{x} + \omega_n^2 x \pm \alpha x^3 = p\cos(\omega t)$ (11.87)

where $\omega_n = (k/m)^{1/2}$, $\alpha = \mu/m$, and p is a constant.


Equation 11.87 can be solved by the method of iteration, and the first solution is
assumed to be

$x^{(0)} = A\cos(\omega t)$ (11.88)

Substitution of Equation 11.88 into Equation 11.87 and integration of the second derivative expression twice yields the next iterative approximation of

$x^{(1)} = \frac{1}{\omega^2}\left(\omega_n^2 A \pm 0.75\alpha A^3 - p\right)\cos(\omega t) \pm \cdots$ (11.89)

Since the amplitude A in Equation 11.89 must be equal to that of Equation 11.88,
ignoring the higher order terms will yield

$\frac{0.75\alpha A^3}{\omega_n^2} = \left(1 - \frac{\omega^2}{\omega_n^2}\right)A - \frac{p}{\omega_n^2}$ (11.90)

11.2.1.2.2 Numerical Simulations
When the input is random, numerical simulations are often needed. Figure 11.11
illustrates a Simulink diagram for the Duffing equation.

FIGURE 11.11  Simulink model (Beta = 3). (Block diagram realizing the Duffing equation with two integrators, feedback gains C, K, and −1/M, the cubic term, and input from the workspace; outputs are displacement, velocity, and acceleration.)
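Equivalently to the Simulink diagram, the Duffing equation can be integrated directly in Python; the sketch below uses the parameter values quoted for Figure 11.12 (m = 1, k = 50, c = 0, µ = 2, p = 1, ω = 3):

import numpy as np
from scipy.integrate import solve_ivp

m, c, k, mu = 1.0, 0.0, 50.0, 2.0
p, w = 1.0, 3.0

def rhs(t, y):
    # Duffing equation (11.86): m x'' + c x' + k x + mu x^3 = p cos(w t)
    x, v = y
    return [v, (p * np.cos(w * t) - c * v - k * x - mu * x**3) / m]

sol = solve_ivp(rhs, (0.0, 30.0), [0.0, 0.0], max_step=0.01)
x, v = sol.y
print("displacement bounds:", x.min(), x.max())

Plotting v against x reproduces a phase plane of the kind shown in Figure 11.12.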



Figure 11.12 shows the phase plane of the above-mentioned Duffing equation,
where m = 1, k = 50, c = 0, and μ = 2. Figure 11.13 illustrates the same phase plane
but for the unstable case when c = −0.5. Both phase planes are under sinusoidal exci-
tation with values of p = 1 and ω = 3.
Figure 11.14 demonstrates the phase plane of the above-mentioned Duffing equa-
tion under a random excitation, with Figure 11.15 showing the time history of the dis-
placement. From Figure 11.15, the vibration is clearly seen to be nonlinear, for the input has only the frequency ω and is deterministic, but the response contains more frequency components.
The time history of the response is comparable to that of a narrow-band process.

FIGURE 11.12  Duffing responses. (Phase plane: velocity, m/s, vs. displacement, m.)

FIGURE 11.13  Unstable system. (Phase plane: velocity, m/s, vs. displacement, m.)



FIGURE 11.14  Random Duffing response. (Phase plane: velocity, m/s, vs. displacement, m.)

FIGURE 11.15  Duffing displacement. (Time history of displacement, m, over 30 s.)

Although the Duffing equation only depicts one kind of nonlinear vibration, from
this example, the following can be approximately stated:

1. Nonlinear vibration is also a repetitive motion. However, while linear vibration will have a single equilibrium point, nonlinear vibration may have multiple points.
2. Similarly, nonlinear vibration has its own motion frequencies. However, the frequency will vary due to several factors, for instance, the input levels.

3. Nonlinear vibration also has the capability to dissipate energy. When the effective damping is sufficiently small, for example, ζeff < 0.3, the response time history is similar to that of a narrow-band process. Nevertheless, the damping effect will not be fixed. In certain cases, nonlinear vibration has the potential to become unstable or chaotic.

11.2.1.2.3 Other Examples
Besides the Duffing equation, we provide several other nonlinear engineering vibra-
tion problems without detailed discussion.

11.2.1.2.3.1   Coulomb Damping (Charles-Augustin de Coulomb, 1736–1806)  As seen in Equation 11.40, when the damping exponent β = 0, we have dry-friction damping, which is also called Coulomb damping. A single degree-of-freedom (SDOF) system with dry-friction damping can be modeled as (see Ahmadi and Su 1987)

$m\ddot{x} + \mu m g\,\mathrm{sgn}(\dot{x}) + kx = f(t)$ (11.91)

where μ is the friction coefficient.

11.2.1.2.3.2   Nonlinear Viscous and Sublinear Damping  In Equation 11.40, when the damping exponent β ≠ 1, we have nonlinear viscous damping; especially, when β < 1, we have sublinear damping. An SDOF system with nonlinear viscous damping can be modeled as

$m\ddot{x} + c\,|\dot{x}|^{\beta}\,\mathrm{sgn}(\dot{x}) + kx = f(t)$ (11.92)

11.2.1.2.3.3   Bouc Oscillator  The bilinear stiffness shown in Figure 11.4 can be further generalized as a Bouc–Wen model (Wen 1989); an SDOF system with such nonlinear stiffness can be modeled as

$m\ddot{x} + c\dot{x} + \alpha kx + (1 - \beta)kz = f(t)$
$\dot{z} = \frac{1}{\eta}\left\{A\dot{x} - \nu\left(\beta\,|\dot{x}|\,|z|^{n-1} z - \gamma\,\dot{x}\,|z|^n\right)\right\}$ (11.93)

where α, η, ν, β, γ are parameters that describe the shape of the nonlinear stiffness; A is the amplitude and n is the number of cycles.

11.2.1.2.3.4   Rectangular Rocking  A rectangular rocking block is a vibration-impact system, which can be modeled as

$I\ddot{\theta} + WR\sin[\alpha\,\mathrm{sgn}(\theta) - \theta]\,[1 + f(t)] + WR\cos[\alpha\,\mathrm{sgn}(\theta) - \theta]\,g(t) = 0$
$\dot{\theta}(t_*^+) = c\,\dot{\theta}(t_*^-), \quad t_*: \theta(t_*) = 0$ (11.94)

where θ is the rocking angle, I is the rotational inertia of the block, W is the weight of the block, R and α are respectively the distance and angle of the center of gravity to a corner of the block, and f(t) and g(t) are respectively excitations relating to the horizontal and vertical accelerations (see Iyenger and Dash 1978).

11.2.1.2.3.5   Van der Pol Oscillator (Balthasar van der Pol, 1889–1959)  The Van der Pol oscillator can be used to model chimney vibration due to a cross-wind (see Vickery and Basu 1983). That is,

$m\ddot{x} + c\dot{x} + kx + \alpha\dot{x} + \beta\dot{x}^3 = f(t)$ (11.95)

where m, c, k, α, and β are scalars.

11.2.1.2.3.6   Modified Morison Problem  The modified Morison model can be used to describe the vibration of an offshore structure due to dynamic sea wave motions (see Taylor and Rajagopalan 1983). That is,

$M\ddot{x} + C\dot{x} + Kx = f(t)$
$f_i(t) = \tfrac{1}{2} C_D \rho A_i \left[u_i(t) - \dot{x}_i(t)\right]\left|u_i(t) - \dot{x}_i(t)\right| + C_M \rho V_i\, \dot{u}_i(t) - C_M \rho V_i\, \ddot{x}_i(t)$ (11.96)

Here, CD and CM are the Morison drag and inertia coefficients; ρ is the fluid density; Ai and Vi are the projected area and volume associated with the ith node; $\dot{x}_i$, $\ddot{x}_i$, $u_i$, and $\dot{u}_i$ are respectively the structural and wave velocity and acceleration at node i.

11.2.1.3 System with Nonlinear Damping Only
When a dynamic system has linear stiffness with nonlinear damping, it will exhibit nonlinear vibration. However, in this case, we may make the following approximation:

$\omega_i \approx \omega_{ni}, \quad i = 1, \ldots, S$ (11.97)

and

$\zeta_i = \frac{E_{di}}{4\pi E_{ki}}, \quad i = 1, \ldots, S$ (11.98)

11.2.1.4 System with Nonlinear Spring


When a dynamic system has nonlinear stiffness, with or without nonlinear damping,
it will result in nonlinear vibration. The effective natural frequency will vary such
that

ωi ≠ ωni (11.99)

11.2.2 Markov Vector
In the above section, we introduced nonlinear vibration with deterministic input. Now let us consider excitations that are random processes. Specifically, a system due to a special input of a Gaussian white noise process will have as its response a diffusion process. A diffusion process is the solution to an SDE; physically, diffusion is the net movement of a substance from a region of high concentration to a region of low concentration, and the aforementioned Brownian motion is a good example of a diffusion process. In general, a diffusion process is a Markov process with continuous sample paths. (For more details, the Fokker–Planck–Kolmogorov [FPK] equation or Kolmogorov forward equation [Equation 11.104] may be considered.) Horsthemke and Lefever (1984) and Rishen (1989) describe these equations in detailed fashion.

11.2.2.1 Itō Diffusion and Kolmogorov Equations


An Itō diffusion (Kiyoshi Itō, 1915–2008) is a solution to a specific type of SDE. The associated transitional PDF will satisfy the Kolmogorov equations. The Itō-type equation of motion can be given as follows:

$\mathrm{d}\mathbf{x}(t) = \mathbf{f}\{\mathbf{x}(t), t\}\,\mathrm{d}t + \mathbf{g}\{\mathbf{x}(t), t\}\,\mathrm{d}\mathbf{B}(t), \qquad \mathbf{x}(t_0) = \mathbf{x}_0$ (11.100)

where x(t) is an n × 1 response vector; x0 is an initial condition vector, independent of B(t); f{x(t), t} = [fj] is an n × 1 vector and g{x(t), t} = [gij] is an n × m matrix. B(t) is an m × 1 vector of the Brownian motion process, with the following properties:
is an m × 1 vector of the Brownian motion process, with the following properties:

E[Bj(t + Δt) – Bj(t)] = 0 (11.101)

E[Δi(t)Δj(t)] = δijΔt (11.102)

where δij is a Kronecker delta function and Δj(t) = Bj(t + Δt) – Bj(t).
Equation 11.100 can be used for both multi-degree of freedom (MDOF) linear
and nonlinear discrete systems. The excitations can be both stationary and nonsta-
tionary, both white noise and nonwhite noise, and both external and parameter exci-
tations. The initial conditions can be random as well.
The transitional PDFs that satisfy Kolmogorov equations, denoted by p(x, t│x0, t0),
include the following cases, as long as the response process is Markovian.

11.2.2.1.1 Chapman–Kolmogorov–Smoluchowski Integral (CKS) Equations
(Sydney Chapman, 1888–1970; Marian Smoluchowski, 1872–1917)
The CKS equation is an identity relating the joint probability distributions of different sets of coordinates on a stochastic process, which is given by

$p(\mathbf{x}, t \mid \mathbf{x}_0, t_0) = \int_{-\infty}^{\infty} p(\mathbf{x}, t \mid \mathbf{z}, \tau)\, p(\mathbf{z}, \tau \mid \mathbf{x}_0, t_0)\,\mathrm{d}\mathbf{z}$ (11.103)

11.2.2.1.2 Fokker–Planck–Kolmogorov (FPK) Equation


(Kolmogorov Forward Equation)
(Adriaan D. Fokker, 1887–1972; Max K.E.L. Planck, 1858–1947)
Suppose the state x of a system at time t is known in probability; namely, a probability distribution p(x, t) is given. We want to know the probability distribution of the state at a later time. The phrase “forward” means that p(x, t) serves as the initial condition and the FPK equation is integrated forward in time. This equation is given by

$\frac{\partial p(\mathbf{x}, t \mid \mathbf{x}_0, t_0)}{\partial t} = -\sum_{j=1}^{n} \frac{\partial\left[f_j(\mathbf{x}, t)\, p(\mathbf{x}, t \mid \mathbf{x}_0, t_0)\right]}{\partial x_j} + \sum_{i,j=1}^{n} \frac{\partial^2\left[a_{ij}\, p(\mathbf{x}, t \mid \mathbf{x}_0, t_0)\right]}{\partial x_i\,\partial x_j}$ (11.104)

where

$a_{ij} = \sum_{k=1}^{m} g_{ik}\, g_{jk}$ (11.105)
ik jk (11.105)

and the term aij will also be used in Equations 11.106, 11.112, and 11.117.

11.2.2.1.3 Kolmogorov Backward Equation
Assume that a system state x(t) evolves according to the SDE given by Equation 11.100; this equation is given by

$\frac{\partial p(\mathbf{x}, t \mid \mathbf{x}_0, t_0)}{\partial t_0} = -\sum_{j=1}^{n} f_j(\mathbf{x}_0, t)\,\frac{\partial p(\mathbf{x}, t \mid \mathbf{x}_0, t_0)}{\partial x_{0j}} + \sum_{i,j=1}^{n} a_{ij}\,\frac{\partial^2 p(\mathbf{x}, t \mid \mathbf{x}_0, t_0)}{\partial x_{0i}\,\partial x_{0j}}$ (11.106)

11.2.2.2 Solutions of FPK Equation


It is noted that the FPK equation discussed to date is the only approach to have exact
solutions of nonlinear random vibrations. Besides the exact solutions, the FPK equa-
tion can also be used to carry out approximations. The drawback of this method is
that the responses must be Markovian.

11.2.2.2.1 Exact Solutions
In the case of existence of stationary solutions of the FPK equation, solutions can be found for all first-order systems and for a limited set of higher-order systems. Specifically, an SDOF and/or MDOF vibration system with the aforementioned nonlinear damping and nonlinear stiffness excited by a white noise process can have exact solutions.
The basic idea is that, in a time reversal between t and −t, the components of the response vector can be classified as either even or odd functions; the even components will not change their sign, but the odd variables will have the sign changed. Denote the time-reversed counterpart of the ith response as

$\bar{x}_i = \delta_i x_i$ (11.107)

where δi = 1 and −1 give the even and odd variables, respectively.

Furthermore, we define the state of detailed balance as

$p(\mathbf{x}, t \mid \mathbf{x}_0, t_0) = p(\bar{\mathbf{x}}_0, t \mid \bar{\mathbf{x}}, t_0), \quad t > t_0$ (11.108)

For the steady-state responses, in terms of the drift coefficient Ai and diffusion coefficient Bij, this condition can be written as

$A_i(\mathbf{x})\, p(\mathbf{x}) + \delta_i A_i(\bar{\mathbf{x}})\, p(\bar{\mathbf{x}}) - \frac{\partial\left[B_{ij}(\mathbf{x})\, p(\mathbf{x})\right]}{\partial x_j} = 0$ (11.109)

and

$B_{ij}(\mathbf{x}) - \delta_i\delta_j B_{ij}(\bar{\mathbf{x}}) = 0$ (11.110)

In Equations 11.109 and 11.110, summation on repeated indices is necessary.


When the above conditions are satisfied, the stationary solution can be expressed
as

p(x) = Ce−U(x) (11.111)

where C is a constant, U(x) is the generalized potential; by solving U(x), we can have
the solution p(x) described in Equation 11.111.
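As an illustration of this procedure, consider a classical special case, stated here for reference rather than derived in this text; the white noise convention $E[w(t)w(t+\tau)] = 2D\delta(\tau)$ is assumed. For the oscillator

$m\ddot{x} + c\dot{x} + V'(x) = w(t)$

the generalized potential is $U(x, \dot{x}) = (c/D)\left[m\dot{x}^2/2 + V(x)\right]$, so that Equation 11.111 gives

$p(x, \dot{x}) = C\exp\left\{-\frac{c}{D}\left[\frac{m\dot{x}^2}{2} + V(x)\right]\right\}$

For the Duffing spring, $V(x) = kx^2/2 + \mu x^4/4$, the marginal displacement PDF $p(x) \propto \exp\left[-(c/D)\left(kx^2/2 + \mu x^4/4\right)\right]$ is visibly non-Gaussian, even though the excitation is Gaussian.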

11.2.2.2.2 Moment Solutions
When the FPK equation has a white noise input arising out of a non-Gaussian process, the solution becomes a nondiffusive Markov process. The solution will still have the Markovian property, but the equation of motion of the transitional PDF will have an infinite number of terms.
Soong (1973) derived the governing equations for the moments based on the
FPK equation. The moments of the function h[x(t), t] of the response solution x(t) of
Equation 11.100 can be given by

$\frac{\mathrm{d}E[h(\mathbf{x}, t)]}{\mathrm{d}t} = \sum_{j=1}^{n} E\!\left[f_j\,\frac{\partial h}{\partial x_j}\right] + \sum_{i,j=1}^{n} E\!\left[a_{ij}\,\frac{\partial^2 h}{\partial x_i\,\partial x_j}\right] + E\!\left[\frac{\partial h}{\partial t}\right]$ (11.112)

Upon setting

$h[\mathbf{x}(t), t] = \prod_{i=1}^{n} x_i^{k_i}$ (11.113)

and choosing different values for ki, we can further derive equations for most com-
monly seen moments.

When the Markov property of response is used to study the first passage proper-
ties, either the Kolmogorov forward or backward equations can be solved in conjunc-
tion with appropriate boundary conditions imposed along the critical barriers.
Another approximate solution is to start with the Kolmogorov backward equa-
tion, based on which we can derive equations for moments of the first passage time,
recursively.
Denote the time required by the response trajectory of Equation 11.100, starting at the point

$\mathbf{x} = \mathbf{x}_0$ (11.114)

in the phase space at time t0, to cross a specified safe domain for the first time as $T(\mathbf{x}_0)$. The moments

$M_0 = 1$ (11.115)

$M_k = E\left[T^k\right], \quad k = 1, 2, \ldots, n$ (11.116)

can be shown to have the governing equation given by

$$-\sum_{j=1}^{n} f_j(\mathbf{x}_0, t)\,\frac{\partial M_k}{\partial x_{0j}} - \sum_{i,j=1}^{n} a_{ij}\,\frac{\partial^2 M_k}{\partial x_{0i}\,\partial x_{0j}} + k M_{k-1} = 0, \quad k = 0, 1, 2, \ldots \tag{11.117}$$

These equations are referred to as the generalized Pontriagin–Vitt (GPV) equations


(Lev Semenovich Pontriagin 1908–1988).

11.2.2.2.3 Approximate Solutions
For nonlinear random vibration problems, it is in general difficult to obtain exact solutions. One approach is iteration based on the parametrix method, which establishes the existence and uniqueness of solutions of the corresponding partial differential equations.
Numerical solution through a computational approach is a second way to obtain approximate results. One of the important tasks when using numerical solutions, however, is to ensure the uniqueness of the solutions.

11.2.3 Alternative Approaches
Besides the above-mentioned methods, there are several other alternative approaches in the literature for solving problems of nonlinear random vibrations. In the following, we briefly discuss their basic ideas for the nondiffusion Markov process.

11.2.3.1 Linearization
In Section 11.1, several types of nonlinear damping and stiffness were discussed. It was shown that one approach is to linearize the nonlinear coefficients; that is, with certain equivalent linear parameters, the system becomes linear. In other words, this approach seeks a linear system that approximates the nonlinear responses. The criteria of these approaches include equal force, equal displacement, equal energy, and their weighted combinations.
The linearization method is the most popular approach for dealing with nonlinear systems. It is especially applicable to nonlinear systems under random excitation. Many observations have shown that random excitations themselves tend to linearize the responses, particularly in terms of the aforementioned homogeneity; that is, the bound of the random responses tends to be more nearly proportional to the bound of the input amplitudes than is the case under deterministic excitations.

11.2.3.2 Perturbation
When the equations of motion possess nonlinear coefficients that deviate only slightly from linear parameters, the solution can be expanded in a power series in a small parameter. This leads to a set of linear equations and is suitable for handling polynomial nonlinearities.
The small parameter describes the difference between the nonlinear and linear systems. In most cases, this method requires that the random forcing function be additive and/or multiplicative.
When the above-mentioned parameters are not sufficiently small, however, large errors can be expected.

11.2.3.3 Special Nonlinearization
Compared with the method of linearization, this approach uses certain equivalent nonlinear parameters instead of linear parameters. The equivalent nonlinear system, however, either has closed-form solutions or is easier to solve. The criteria of this approach are similar to those of linearization: equal force, equal displacement, equal energy, and their weighted combinations.

11.2.3.4 Statistical Averaging
If the nonlinear system dissipates only a small amount of energy, statistical averaging may be used to generate diffusive Markov responses. That is, by averaging, an equivalent FPK equation approximates the nondiffusive cases.
The averaging can be over amplitudes and phases, which is typically performed in the frequency domain. It can also be over energy envelopes, as well as over combinations of amplitudes/phases and energy envelopes.
This method often requires broadband excitations.

11.2.3.5 Numerical Simulation
For nonlinear random vibrations, numerical simulations are always powerful tools.
Many above-mentioned approaches also require numerical simulations. The com-
putational vibration solvers as well as related developments on numerical simula-
tions have been well established, and they will be continuously developed and/or
improved. It is worth mentioning that computational tools, such as MATLAB® and
Simulink, can be a good platform to carry out the numerical simulations. In Section
11.3, we will discuss a special numerical simulation.

11.3 Monte Carlo Simulations


11.3.1 Basics of Monte Carlo Method
Monte Carlo simulations, also known as Monte Carlo methods or Monte Carlo experiments, use repeated random samples to obtain numerical results for engineering problems as well as problems in many other fields, such as physics and mathematics. These simulations are carried out many times over, so that the distribution of the targeted probabilistic quantity can be statistically determined.
In the 1930s, Fermi first experimented with the Monte Carlo method while studying neutron diffusion. Since the work was kept secret, the similar work of von Neumann and Ulam at the Los Alamos Scientific Laboratory required a code name, and von Neumann chose the name Monte Carlo. In many cases, if a dynamic system involves random variables or random processes, a closed-form expression can be difficult to obtain because of the uncertainty; Monte Carlo simulations then become useful tools to obtain statistical results numerically. In other cases, although the objects are deterministic, it can be difficult and/or costly to carry out the corresponding experiments or computations; Monte Carlo simulations again may be used to provide good approximations.
Monte Carlo methods are mainly used in four distinct problem classes: pattern
recognition, optimization, numerical integration, and generation of PDFs.
In engineering applications involving random variables and processes, determin-
ing the desired values can be exceedingly difficult. The above-mentioned estimation
of effective stiffness due to random low cycle fatigue is an example. In this case, the
value of the effective stiffness is based on the maximum displacement and unloading
stiffness. The resulting unloading stiffness is a function of the yielding displacement, and the displacement is in turn a function of the stiffness. Additionally,
both the loading and the displacement are random. Thus, finding a closed-form for-
mula to calculate both the stiffness and the displacement within the current methods
studied is not a feasible approach. For such a complicated circumstance, either experi-
mental study or numerical simulation is needed to determine the effective stiffness.
The Monte Carlo method is one of the most practical computational simulations based
on repeated random samplings. This method can simulate physical as well as mathemati-
cal systems, such as the previously mentioned effective stiffness for nonlinear vibrations.
Monte Carlo simulation methods are particularly useful in dealing with large numbers of coupled degrees of freedom and nonlinear responses, in addition to random environments. These
methods are also widely used in mathematics—classically used for the evaluation of
definite integrals, particularly multidimensional integrals with complex boundary condi-
tions. When compared with alternative methods or human intuition, it is shown to be a
widely successful method in risk analysis and reliability designs.
We now use an example, the computation of the value of π, to explain how a Monte Carlo simulation works.

1. Draw a square on the ground with the length of its side to be 2, and then
inscribe a circle within it (see Figure 11.16). We know that the area of the
square is 2 × 2 = 4, and the area of the circle is (2/2)²π = π.
2. Uniformly scatter various objects of uniform size throughout the square, for
example, grains of sand.

FIGURE 11.16  Square and circle.

3. Given that the two areas exhibit a ratio of π/4, the objects, when randomly
scattered, should fall within the areas by approximately the same ratio.
Thus, counting the number of objects in the circle and dividing by the total
number of objects within the square should yield an approximated ratio
of π/4.
4. Lastly, multiplying the result by 4 will then yield an approximation for π.
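The four steps above translate directly into a few lines of code. The following is a minimal MATLAB sketch (the sample size N is an assumed value):

N = 1e6;                                   % number of scattered "grains" (assumed)
xy = 2*rand(N, 2) - 1;                     % uniform points in the 2-by-2 square
inCircle = xy(:,1).^2 + xy(:,2).^2 <= 1;   % step 3: inside or outside the circle
piEstimate = 4*sum(inCircle)/N             % step 4: ratio of counts times 4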

This method of π approximation illustrates the general pattern of Monte Carlo simulations. First, the domain of inputs is defined, in the same way as the square that circumscribes the circle. Second, the inputs are generated randomly with specified distributions, such as scattering individual grains within the square; in the above example, the distribution was uniform. Third, each input is counted as falling either within or outside of the circle. Finally, the results of all tests are aggregated to determine the desired result, which is the approximation of π.
From this example, we can also realize three basic requirements. First, we need
a comparatively large number of experiments: in the above example, inputting the
grains. Second, these experiments should be uniformly carried out. That is, in the
above example, the area of the grains should be the same, and the grains must be uni-
formly distributed. Third, the experiments must cover the entire area. In the above
example, the area of the square must be fully covered.
Practically, the first two requirements are easier to satisfy. The third requirement needs pre-knowledge of the targeted domain, which can be difficult to obtain. In the above example, we knew the domain exactly: the square. In engineering applications, however, there may be uncertainty about what exactly the domain is.
Generally speaking, there is not a distinct Monte Carlo method but a large and
widely used class of approaches. These common approaches for applying the Monte
Carlo method consist of the following steps:

1. Define a domain of possible inputs or excitations.


2. Generate inputs randomly from the domain based on a specified probability
distribution.
3. Perform a deterministic computation using the inputs.
4. Statistically aggregate the individual computations into the final result.

Although difficult, it is necessary to ensure that the large number of tests con-
verge to the correct results.

11.3.1.1 Applications
Monte Carlo simulations have many applications, specifically in modeling with a
significant number of uncertainties in inputs and system nonlinearities. In the fol-
lowing, some examples of applicable fields are briefly given.

11.3.1.1.1 Mathematics
In general, Monte Carlo simulations are used in mathematics in order to solve vari-
ous problems by generating suitable random numbers and observing the fraction of
the numbers that follow a specified property or properties. This method is useful for
obtaining numerical solutions to problems that are too complicated to solve analyti-
cally. One of the most common applications of Monte Carlo simulations is the Monte
Carlo integration.

11.3.1.1.1.1   Integration  Let us first consider an integral expressed in Chapter


10—the total failure probability of a bridge subjected to multiple hazard loads. By
just considering two loads, L1 and L2, the formula is repeated as follows:

$$\begin{aligned}
p_f &= p_{fL_1} + p_{fL_1L_2} + p_{fL_2}\\[4pt]
&= \left[\int_{-\infty}^{\infty} f_R(x)\int_{x}^{\infty} f_{L_1}(y)\int_{-\infty}^{y} f_{L_2}(z, x)\int_{-\infty}^{y} f_C(w, x)\,p_{L_1}\,\mathrm{d}w\,\mathrm{d}z\,\mathrm{d}y\,\mathrm{d}x\right]\\[4pt]
&\quad + \left[\int_{-\infty}^{\infty} f_R(x)\int_{x}^{\infty} f_C(y)\int_{-\infty}^{y} f_{L_1}(z, x)\int_{-\infty}^{y} f_{L_2}(w, x)\,p_{L_1L_2}\,\mathrm{d}w\,\mathrm{d}z\,\mathrm{d}y\,\mathrm{d}x\right]\\[4pt]
&\quad + \left[\int_{-\infty}^{\infty} f_R(x)\int_{x}^{\infty} f_{L_2}(y)\int_{-\infty}^{y} f_{L_1}(z, x)\int_{-\infty}^{y} f_C(w, x)\,p_{L_2}\,\mathrm{d}w\,\mathrm{d}z\,\mathrm{d}y\,\mathrm{d}x\right]
\end{aligned} \tag{11.118}$$
Equation 11.118 gives the total failure probability pf when the calculation is deter-
ministic, based on rigorously defining those PDFs and conditional probabilities, as
well as establishing exact integral limits. However, both the analysis and the compu-
tation can be rather complex, even though numerical integration can be carried out.
Deterministic numerical integration is usually performed by taking a number of evenly spaced samples of a function. In general, such integration works well for functions of one variable. For multiple variables, there will be vector functions, for which deterministic quadrature methods may be very inefficient. For example, to numerically integrate a function of a two-dimensional vector, equally spaced grid points over a two-dimensional surface are required; in this case, a 100 × 100 grid requires 10,000 points. If the vector has 100 dimensions, the same spacing on the grid would require 100^100 points. Note that, in many engineering problems, one dimension is actually a degree of freedom; an MDOF system with 100 degrees of freedom is considered a small-sized problem, and a finite-element model can easily contain thousands of DOFs. Therefore, the corresponding computational burden is huge and impractical.

Monte Carlo simulations can sharply reduce both the mathematical derivation of multiple integrals and the demands of integration over multiple dimensions. In many cases, the integral can be approximated by randomly selecting points within this 100-dimensional space and statistically taking the average of the function values at these selected points. It is well known that, by the law of large numbers, Monte Carlo simulation displays N^(−1/2) convergence, which implies that, regardless of the number of dimensions, quadrupling the number of sampled points will halve the error.
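As a hedged illustration, the following MATLAB sketch estimates an integral over a high-dimensional unit cube by averaging function values at random points; the integrand, dimension, and sample size are assumed for illustration only:

d = 100; N = 1e5;                 % dimension and sample size (assumed)
f = @(x) exp(-sum(x.^2, 2));      % illustrative integrand, not from the text
x = rand(N, d);                   % N random points in the unit hypercube
I = mean(f(x));                   % the cube has unit volume, so the mean estimates the integral
stdErr = std(f(x))/sqrt(N);       % standard error decays like N^(-1/2)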

11.3.1.1.1.2   Improved Integration  In many cases, the accuracy of numerical integration can be improved. Consider the Rayleigh distribution shown in Figure 11.17, where at point C the PDF reaches its peak value of 0.3033. The coordinates of A, B, D, and E are, respectively, (Ax, Ay) = (0.25, 0.062), (Bx, By) = (1.75, 0.2983), (Dx, Dy) = (6.75, 0.053), and (Ex, Ey) = (7.0, 0.038). Suppose that at each point, along the x-axis, we have an equal increment of 0.25. At point A, the change of the PDF is 20%; from point B to C, the change is 1.6%; and from point D to E, the change is 0.6%. This phenomenon is seen in all nonuniformly distributed probability curves. The integration accuracy may be improved by using unevenly spaced samples from the distribution function: in the neighborhood of point A, we may choose a smaller increment along the x-axis, and in the neighborhood of point D, we may choose a larger one.
On the other hand, it is also seen that, at points A, D, and E, the value of the PDF is much smaller than in the region near points B and C. That is, the weighting of the integrand will be different in different regions. When performing a Monte Carlo simulation, we can make the samples more likely to come from regions of high contribution to the integral, such as the neighborhoods of points B and A, and use fewer samples from regions of low contribution, such as the neighborhood of point D. Namely, the points should be drawn from a distribution similar in form to the integrand; note that the samples are still taken randomly. Accomplishing this improvement can be difficult, however, because the PDF or related functions are often unknown or uncertain. For this reason, approximate methods are used.
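For instance, if the integrand is dominated by a Rayleigh-shaped weight, the sample points can be drawn directly from a Rayleigh distribution by the inverse transformation described later in Section 11.3.2.2. A minimal MATLAB sketch, with σ = 2 assumed:

sigma = 2; N = 1e5;               % assumed scale parameter and sample size
r = rand(N, 1);                   % uniform random numbers
y = sigma*sqrt(-2*log(1 - r));    % inverse of the Rayleigh CDF F(y) = 1 - exp(-y^2/(2*sigma^2))
% the samples y now cluster where the Rayleigh PDF is large, near points B and C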

11.3.1.1.2 Physical Sciences
For computational physics, physical chemistry, and related applied fields, Monte
Carlo simulations play an important part and have diverse applications, from com-
plicated quantum chromodynamics calculations to designing heat shields and aero-
dynamic forms. Monte Carlo simulations are very useful in statistical physics; for
example, Monte Carlo molecular modeling works as an alternative for computational
molecular dynamics as well as for computation of statistical field theories of simple
particle and polymer models (Baeurle 2009).
In experimental particle physics, these methods are used for designing detectors, understanding their behavior, and comparing experimental data with theory, as well as for large-scale galaxy modeling (MacGillivray and Dodd 1982). Monte Carlo methods are also used in the ensemble models that form the basis of modern weather forecasting operations.

11.3.1.1.3 Monte Carlo Simulation versus “What If” Scenarios


From Figure 11.17, we can see that the occurrences of samples are not equally weighted when the PDF of occurrence is not uniformly distributed.
Consider deterministic modeling using single-point estimates, which can be seen, at least philosophically, as the opposite of Monte Carlo simulation. In this deterministic modeling method, each uncertain variable within a model is assigned a "best estimate." Various combinations of the input variables are manually chosen; for example, for each variable, we can select the best, the worst, and the most likely cases. The results are then calculated for each so-called "what if" scenario (see Vose 2008).
On the other hand, Monte Carlo simulation utilizes random samples with PDFs as model inputs, which produce hundreds or even thousands of possible outcomes, compared with a few discrete "what if" scenarios. The results of Monte Carlo simulation therefore provide the probabilities of different outcomes occurring (also see Vose 2008).

11.3.1.1.4 Pattern Recognition
In many cases of a random process, it is necessary to identify certain patterns. Strictly speaking, pattern recognition belongs to the category of system identification, so it is an inverse problem. While the general inverse problem is discussed separately, certain specific engineering patterns are considered here.
In Chapter 7, Figure 7.7 shows an earthquake response spectrum and a design spectrum SD. These spectra are actually generated by using Monte Carlo pattern recognition. The domain of the input is all possible earthquake ground motions in a seismic zone of interest with normalized peak amplitude, say, a PGA of 0.4g. This is the "definition of a domain of possible inputs or excitations."

FIGURE 11.17  Rayleigh distribution, with points A through E marked along the curve.



If each ground motion time history is treated as a realization of a random process, then at a given natural period Ti, the output is the peak response, which can be seen as a sample obtained through the "given" random excitation. Note that these time histories can be generated through numerical simulations; they can also be picked "randomly" from a database of seismic records. From the mathematical point of view, both methods have the same meaning, namely, "generation of inputs randomly from the domain based on a specified probability distribution."
Then, each piece of ground excitation will be used to calculate the response of
an SDOF system with period Ti, which is “to perform a deterministic computation
using the input.”
To do the above computations, a large number of ground time histories are used. Collecting all possible peak values for the given period Ti and "statistically aggregating the individual computations into the final result" provides the mean as well as the standard deviation of these responses. In addition, when the value of the period Ti is varied over a proper range, we finally obtain a specific pattern, the response spectrum; based on the response spectrum, the design spectrum can also be generated.
Through this example, the procedure of pattern recognition can be explained.
Practically speaking, we may have different kinds of patterns other than response
spectra. However, the basic four-step procedure of Monte Carlo simulation essen-
tially remains the same.

11.3.1.1.5 Optimization
Optimization can be computationally expensive. Monte Carlo simulations are therefore found to be very helpful for optimization, especially multidimensional optimization. In most cases, Monte Carlo optimizations are based on random walks, which were briefly discussed in Chapter 5 (the Markov chains). Also in most cases, the optimization program moves a marker around in multidimensional space, seeking directions that lead to a lower value of the cost function. Although it sometimes moves against the gradient, the average tendency is to gradually find the lowest value.
Compared with a "systematic" approach to optimization, random walks may reduce the computational burden significantly.

11.3.1.1.6 Inverse Problems
In Chapter 9, we discussed the basic concepts and general procedure of engineering
inverse problems and the need to consider randomness and uncertainty. This fact
makes inverse problems more difficult to handle. One of the effective approaches is
to use Monte Carlo simulations.
To solve inverse problems, Monte Carlo simulations are used to probabilistically
formulate inverse problems. To do so, we need to define a probability distribution in
the model space. This probability distribution combines a priori information with new
information obtained by measuring and/or simulating selected observable parameters.
Generally speaking, the function linking data with model parameters can be nonlinear, so the a posteriori probability in the model space may be complex to describe. In addition, the relationship can be multimodal, and some moments may not be well defined and/or may have no closed-form expressions.

It is noted that, to analyze an inverse problem, obtaining a maximum likelihood model is often insufficient; typically, information on the resolution power of the data also needs to be known. In the general case, there may be a large number of model parameters, and an inspection of the marginal probability densities of interest may be impractical or even useless. However, it is possible to pseudo-randomly generate a large collection of models according to the posterior probability distribution and then to analyze the models in such a way that information on the relative likelihood of model properties is conveyed to the observer.
This task can be accomplished by means of Monte Carlo simulation, especially in the case where no explicit formula for the a priori distribution is available.
In the literature, the best-known sampling method of significance is the Metropolis
algorithm, which can be generalized, allowing the analysis of possible highly non-
linear inverse problems with complex a priori information and data with an arbi-
trary noise distribution. For more detailed information, the work of Mosegaard and
Tarantola (1995) or Tarantola (2005) may be consulted.

11.3.2 Monte Carlo and Random Numbers


Intriguingly, Monte Carlo simulation methods do not always require true random numbers to be useful. However, for some applications, such as primality testing, unpredictability is vital (see Davenport 1992). Many of the most useful techniques use deterministic, pseudo-random sequences, making testing and re-running of simulations unproblematic. The only quality typically necessary to make good simulations is for the pseudo-random sequence to appear "random enough" in a certain sense; what this means depends on the application. In most cases, the pseudo-random sequence must pass a series of statistical tests, most commonly verifying that, when a sufficient number of elements are present, the numbers are uniformly distributed or follow another desired distribution.

11.3.2.1 Generation of Random Numbers


As mentioned previously, Monte Carlo simulation needs to generate random sam-
ples. One of the key issues is to ensure that the occurrences of these samples are suf-
ficiently “random.” To do so, we need to use artificially generated random numbers.

11.3.2.1.1 Linear Congruential Generator


The following formula can be used to generate uniformly distributed random variables between 0 and 1:

Ri = (aRi−1 + b) modulo (c) (11.119)

Here, a, b, and c are specific integers. The modulo operation indicates that (1) the quantity (aRi−1 + b) is divided by c, (2) the remainder is assigned to Ri, and (3) the desired uniformly distributed random variable ri is obtained as

ri = Ri/c (11.120)

The initial value R0 is referred to as the seed.


614 Random Vibration

For example, first let a = 7, b = 8, and c = 9, and take the seed R0 = 5.
Then, (7 × 5 + 8) modulo (9) → 43/9 → R1 = 43 − 9 × 4 = 7, r1 = 7/9 = 0.7778; secondly, (7 × 7 + 8) modulo (9) → 57/9 → R2 = 57 − 9 × 6 = 3, r2 = 3/9 = 0.3333; thirdly, (7 × 3 + 8) modulo (9) → 29/9 → R3 = 29 − 9 × 3 = 2, r3 = 2/9 = 0.2222; fourthly, (7 × 2 + 8) modulo (9) → 22/9 → R4 = 22 − 9 × 2 = 4, r4 = 4/9 = 0.4444; etc.
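A minimal MATLAB sketch of this generator, which reproduces the sequence above:

a = 7; b = 8; c = 9; R = 5;       % constants and seed from the example
r = zeros(4, 1);
for i = 1:4
    R = mod(a*R + b, c);          % Equation 11.119
    r(i) = R/c;                   % Equation 11.120
end
% r = [0.7778; 0.3333; 0.2222; 0.4444]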

To generate uniformly distributed random numbers between 0 and 1, the following MATLAB code can be used:

r = rand(n, 1) (11.121)

11.3.2.2 Transformation of Random Numbers


To conduct a Monte Carlo simulation of a specific process, the statistical properties of the computer-generated samples should be identical to those of that process. However, the above-mentioned random numbers are uniformly distributed; therefore, they must be converted to the desired probability distribution.

11.3.2.2.1 Continuous Random Variables


11.3.2.2.1.1   Inverse Transformation Method  If the inverse form of the desired
CDF can be expressed analytically, then the inverse transformation method can be
used.
First, a set of uniformly distributed random numbers r is generated as mentioned above. Next, suppose that FQ(q) is the desired CDF and that its inverse $F_Q^{-1}$ is known.
The variable Q can then be generated as

$$Q = F_Q^{-1}(r) \tag{11.122}$$

Example 11.1

Consider the CDF of the exponential distribution:

$$F_X(x) = 1 - e^{-\lambda x} \tag{11.123}$$

The inverse function is

$$x = F_X^{-1}(r) = -\ln(1 - r)/\lambda \tag{11.124}$$

For r = {0.7778, 0.3333, 0.2222, 0.4444, …} and λ = 2, this will yield

Q = {0.7521, 0.2027, 0.1256, 0.2939, …}
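The same calculation in MATLAB, as a direct transcription of Equation 11.124:

lambda = 2;
r = [0.7778; 0.3333; 0.2222; 0.4444];   % uniform numbers from the generator above
Q = -log(1 - r)/lambda                  % yields 0.7521, 0.2027, 0.1256, 0.2939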



11.3.2.2.1.2   Composition Method  Certain random variables have composite


probability distributions such as

$$F_X(x) = \sum_{i=1}^{n} w_i F_{X_i}(x) \tag{11.125}$$

where the wi are weighting factors, the $F_{X_i}(x)$ are CDFs, and

$$\sum_{i=1}^{n} w_i = 1 \tag{11.126}$$

To obtain the corresponding distribution, two sets of uniformly distributed random numbers r1 and r2 must first be generated. The random numbers r1 are used to select the CDF $F_{X_i}(x)$ from which to generate a variate; the random numbers r2 are then used to determine the variate from the selected CDF.

Example 11.2

Generate a random variable set that satisfies the following PDF:

fX(x) = 3/5 + x³,  0 ≤ x ≤ 1

The PDF can be decomposed as

fX(x) = (3/5)f1(x) + (1 − 3/5)f2(x)

It is seen that

f1(x) = 1 and F1(x) = x = r2,  for 0 ≤ x ≤ 1

The variate is given by

$x = F_1^{-1}(r_2) = r_2$

for f2(x) = (5/2)x³, yielding F2(x) = (5/8)x⁴ = r2, for 0 ≤ x ≤ 1.


Thus, the resulting variate is

$x = F_2^{-1}(r_2) = (8 r_2 / 5)^{1/4}$

Subsequently, two uniformly distributed numbers are generated. Suppose that


the numbers are randomly chosen as 0.530 and 0.181.

Given that r1 = 0.530 < 3/5 = 0.6, r2 is used, by following F1(x), to generate the variate:

$x_1 = F_1^{-1}(r_2) = 0.181$

Next, let r1 = 0.722 and r2 = 0.361. Since r1 = 0.722 > 3/5, r2 is used to generate the variate by following F2(x), or explicitly

$x_2 = F_2^{-1}(r_2) = (8 \times 0.361/5)^{1/4} = 0.872$

This is similarly repeated.
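A minimal MATLAB sketch of the composition method, implementing the formulas of this example as given (the sample size is assumed):

N = 1e4;                                % number of variates (assumed)
r1 = rand(N, 1); r2 = rand(N, 1);       % two independent sets of uniform numbers
x = zeros(N, 1);
useF1 = r1 < 3/5;                       % select F1 with probability w1 = 3/5
x(useF1)  = r2(useF1);                  % x = F1^(-1)(r2) = r2
x(~useF1) = (8*r2(~useF1)/5).^(1/4);    % x = F2^(-1)(r2) = (8 r2/5)^(1/4)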

11.3.2.2.1.3   Function-Based Method  In specified distributions, random vari-


ables are expressed as functions of other random variables, which can be easier to
generate. For example, the Gamma distribution is the sum of exponential distribu-
tions. If Y is a Gamma distribution with the parameter (k, λ), then

Yi = X1i + X2i + … + Xki (11.127)

Here, X1i, X2i, …, Xki are k independent, exponentially distributed random variables with parameter λ and mean 1/λ:

$$f_{X_j}(x) = \lambda e^{-\lambda x} \tag{11.128}$$

Example 11.3

Generate a Gamma-distributed random variate with the parameters k = 4 and λ = 2.


First, generate uniformly distributed random numbers r within the range [0, 1]. Next, let

x = −ln(1 − r)/λ (11.129)

As an example, take the first four uniformly distributed random numbers r1 = 0.7778, r2 = 0.3333, r3 = 0.2222, and r4 = 0.4444. The corresponding exponentially distributed random variates for λ = 2 are 0.7521, 0.2027, 0.1256, and 0.2939.
Thus, the Gamma-distributed variate is

$$g = -\frac{1}{\lambda}\sum_{i=1}^{4}\ln(1 - r_i) = 0.7521 + 0.2027 + 0.1256 + 0.2939 = 1.3743$$
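The same computation in MATLAB, as a direct transcription of this example:

lambda = 2;                             % k = 4 exponential variates are summed
r = [0.7778; 0.3333; 0.2222; 0.4444];
g = -sum(log(1 - r))/lambda             % = 1.3743, one Gamma-distributed variate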

11.3.2.2.2 Discrete Random Variables


To generate discrete random variables is to transform a uniformly distributed random number to the desired probability mass function. One way to accomplish this is to use the CDF: select the value xj such that

FX(xj−1) < r ≤ FX(xj) (11.130)
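A hedged MATLAB sketch of Equation 11.130; the support values and probability mass function below are hypothetical:

xv = [0 1 2 3];                  % hypothetical discrete values
pm = [0.2 0.3 0.4 0.1];          % hypothetical probability mass function
F  = cumsum(pm);                 % CDF values FX(xj)
r  = rand;                       % one uniform random number
j  = find(r <= F, 1, 'first');   % smallest j with FX(x(j-1)) < r <= FX(xj)
x  = xv(j)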



11.3.2.3 Random Process
11.3.2.3.1 Stationary Process
Now we use the aforementioned methods to generate a random process that is weakly stationary.
First, generate random variates with the desired distributions; then, index the variates with the temporal parameter t. For example,

F(t) = A cos(Ωt + Θ) (11.131)

Here, A, Ω, and Θ are random variables, and F(t) is a random process.
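A minimal MATLAB sketch of Equation 11.131; the distributions assigned to A, Ω, and Θ are assumptions for illustration:

t = 0:0.01:10;                   % time axis
A     = 1 + 0.1*randn;           % random amplitude (assumed normal)
Omega = 2*pi*(1 + 0.05*randn);   % random frequency (assumed normal)
Theta = 2*pi*rand;               % random phase, uniform on [0, 2*pi]
F = A*cos(Omega*t + Theta);      % one realization of the weakly stationary process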

11.3.2.3.2 Nonstationary Process
A nonstationary process can be obtained by modifying a stationary process in certain desired ways. For example,

G(t) = B(t)F(t) + μY (t) (11.132)

11.3.3 Numerical Simulations
Monte Carlo simulations generate samples from which the PDFs can be statistically
determined. These samples are often obtained through numerical simulations.
Although the totality of numerical simulations is beyond the scope of this textbook,
in the following, we briefly discuss several issues related to random vibrations and
Monte Carlo simulations.

11.3.3.1 Basic Issues
In order to successfully render a Monte Carlo simulation, several basic issues include
the establishment of proper models, the mathematical tools for modeling, the criteria
used to judge if the simulation is acceptable, and the possible error or error bound
between exact resulted and Monte Carlo simulations.

11.3.3.1.1 Models
In many engineering projects, proper modeling is the starting point, and it is an important issue, specifically for cases related to engineering vibration where proper models must be constructed. In what follows, we use vibration systems to illustrate the issue of modeling.
For a vibration system, the complete model is the physical model; that is, the M-C-K vibration model is used to treat the Monte Carlo simulation directly as a forward problem. For such a model, we need to determine the order of the system; the mass, damping, and stiffness coefficient matrices; and the forcing functions. We also need to decide whether the models are linear or nonlinear. Furthermore, we need to consider certain randomness and uncertainty of the models. Both the response model and the modal model are incomplete models; that is, to establish a proper model, the physical model, if available, should be the first choice.
In many cases, the physical model is difficult to obtain. Practically, response models can be measured directly. With certain signal processing, we can further obtain frequency spectra, as well as other parameters such as damping ratios, through analyses in both the time and frequency domains, as mentioned in Chapter 9 (Section 9.3.1.3, vibration testing). Based on these measured parameters, we can further simulate the response models. The reason the response model is incomplete is that the physical parameters, for example the corresponding mass, cannot be determined from this model alone.
A modal model can also be used. Since each mode is actually an SDOF system with modal mass, damping, and stiffness, the discussion on physical models also applies. However, if the modal model is used, the system is assumed to be linear.

11.3.3.1.2 Criteria
In most cases, the reason for using Monte Carlo simulation is that it is difficult to compute the results directly, for example, in the computation of integrals. In this case, how to judge whether the simulated result is valid is an important issue.

11.3.3.1.2.1   Consistency  Consistency means that the results obtained through different groups of Monte Carlo simulations agree with each other within an acceptable difference.
The consistency criterion is comparatively easy to check, and in many cases, it is necessary to verify that different groups of Monte Carlo simulations agree with each other.
For convenience, denote a specific parameter of the system obtained through the ith Monte Carlo simulation by $\hat{\pi}_i$. Then the criterion of consistency of the parameter $\hat{\pi}_i$ over a total of m simulations can be written as

$$\frac{\left|\hat{\pi}_i - \dfrac{1}{m}\displaystyle\sum_{i=1}^{m}\hat{\pi}_i\right|}{\dfrac{1}{m}\displaystyle\sum_{i=1}^{m}\hat{\pi}_i} \le \varepsilon_{\hat{\pi}} \tag{11.133}$$

where $\varepsilon_{\hat{\pi}}$ is a preset small number specific to the parameter $\hat{\pi}_i$.


A different group can simply be a different run of the Monte Carlo simulation with the same number of samples; it can also be a Monte Carlo simulation with a different number of generated samples, etc.
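As a sketch of how Equation 11.133 might be checked in practice (the estimates and tolerance below are hypothetical):

pihat = [1.02 0.98 1.01 0.99 1.00];   % hypothetical estimates from m = 5 groups
mbar  = mean(pihat);                  % sample mean of the estimates
consistent = all(abs(pihat - mbar)/mbar <= 0.05)   % tolerance 0.05 assumed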

11.3.3.1.2.2   Unbiased Results  Unbiased results mean that the statistical parameter $\hat{\pi}_i$ must be as close as possible to the corresponding exact parameter π, that is,

$$\frac{\left|\pi - \dfrac{1}{m}\displaystyle\sum_{i=1}^{m}\hat{\pi}_i\right|}{\pi} \le \varepsilon_{\pi} \tag{11.134}$$

where $\varepsilon_{\pi}$ is a preset small number specific to the parameter π.



Generally speaking, we often do not know the exact value of π, so Equation 11.134 often cannot be checked directly.

11.3.3.2 Deterministic Systems with Random Inputs


11.3.3.2.1 Response Computation
Suppose that the mass, damping, and stiffness matrices of a system are given. With
the generated forcing process Fi(t),

MXi + CX i + KX i = Fi (t ) (11.135)

The response can be calculated. Note that, since the generated forcing function is
in fact known, Equation 11.135 can be rewritten as

$$M\ddot{x}_i + C\dot{x}_i + Kx_i = f_i(t) \tag{11.136}$$

Equation 11.136 can be solved as discussed in Chapter 8.
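For a single-degree-of-freedom instance of Equation 11.136, one hedged MATLAB sketch, using the Control System Toolbox functions ss and lsim with assumed numerical values, is:

m = 10; c = 4; k = 40;                 % assumed SDOF parameters
A = [0 1; -k/m -c/m]; B = [0; 1/m];    % state-space form of m*x'' + c*x' + k*x = f
C = [1 0]; D = 0;                      % output the displacement
sys = ss(A, B, C, D);
t = (0:0.01:20)'; f = randn(size(t));  % one generated forcing sample
x = lsim(sys, f, t);                   % displacement response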


When the system is linear, the statistical properties of the responses can have closed-form expressions, as mentioned in Chapters 7 and 8. However, in many cases, C and K can be nonlinear, as previously mentioned; moments and additional statistical studies may then be needed to complete the estimation of the PDF.

11.3.3.2.2 Statistical Estimation of Random Parameters


11.3.3.2.2.1   Simulated Random Process  Once the responses xi(t) are calculated from the generated fi(t), they can be collected as a random process, that is,

X(t) = {xi(t)} (11.137)

11.3.3.2.2.2   PDF  By means of X(t), the distribution function can be studied. As


previously discussed, a proper model of the PDF or the CDF must first be specified.
Then, through the maximum likelihood estimation of parameters and curve fitting,
the exact function can be determined.

11.3.3.2.2.3   Crossing Rate, Extrema Distribution  The study of the rates of level
up-crossing, zero up-crossing, the distribution of extrema, and the possible bounds
of responses, for example, is possible through X(t).

11.3.3.2.2.4   Mean, Variance, and Correlations  The first and second moment
estimations can be calculated as previously discussed.

11.3.3.3 Random Systems
11.3.3.3.1 Random Distribution of Modal Parameters
Consider a dynamic system with a set of parameters denoted by πi, i = 1, 2, …. In certain cases, the ith parameter πi may vary within a relatively small range, approximately ±επi%. Note that a small variation of the parameter πi does not necessarily indicate a small variation of the resulting response; in fact, the response variation depends upon the stability of the system. When the variation of πi is random, the random variable Πi can be written as

Πi = πi (1 + επ%) (11.138)

Here, the resulting variations in πi form a set of random variables Πi.


Assume that the system being analyzed remains linear; for unspecified reasons,
its modal parameters may vary. For the corresponding natural frequency and damp-
ing ratio, Monte Carlo simulations can be used as follows:

Ωi = ωi (1 + εω%) (11.139)

Here, Ωi is the ith natural frequency and is considered to be random, whereas


ωi is the ith designed natural frequency and is considered to have a desired value.
Additionally, εω is a random variable with a proper distribution. For example,

εω ~ N(μ, σ) (11.140)

If the damping ratios are also assigned as random variables, then Zi can be written as

Zi = ζi(1 + εζ%) (11.141)

In this instance, Zi is the ith damping ratio and is considered to be random.


Similarly, ζi is the ith designed damping ratio and is considered to have a desired
value, while εζ is a random variable with proper distributions.
Given a system with artificially assigned variable modal parameters, the responses
X(t) can be calculated, and further statistical study can be completed as mentioned
above.
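A hedged MATLAB sketch of Equations 11.139 through 11.141; the designed values and the distributions of εω and εζ are assumptions:

nRuns = 300;                     % number of Monte Carlo runs (assumed)
w0 = 2*pi; z0 = 0.02;            % designed natural frequency and damping ratio (assumed)
epsw = 1*randn(nRuns, 1);        % eps_omega in percent, N(0, 1) assumed
epsz = 5*randn(nRuns, 1);        % eps_zeta in percent, N(0, 5) assumed
Omega = w0*(1 + epsw/100);       % Equation 11.139
Z     = z0*(1 + epsz/100);       % Equation 11.141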

11.3.3.3.2 Random Distribution of Physical Parameters


For random physical parameters, the following can be written:

Kij = kij(1 + εk%) (11.142)

and

Cij = cij(1 + εc%) (11.143)

Here, Kij and Cij are the ijth entries of the stiffness and damping matrices, respec-
tively, which are considered to be random. Likewise, kij and cij are the ijth designed
stiffness and damping, respectively. They are considered to have desired values.
Lastly, ε(.) are random variables with proper distributions.
The variation of kij and cij can be relatively large. Mathematically speaking, the range of the random variables Kij and Cij is −∞ to ∞, in which case a normal distribution may be used to model the corresponding variation distributions. In engineering applications, however, the variables Kij and Cij cannot be smaller than zero; in this case, the use of a lognormal distribution to model the corresponding variation distributions is acceptable.

Example 11.4

Consider the stiffness of a steel shaft with length ℓ and cross section A. For a deterministic design, the stiffness is given by

k = EA/ℓ (11.144)

Suppose that the Young's modulus E, the area A, and the length ℓ are all random variables with normal distributions as follows:

E ~ N(E0, 0.2E0) (11.145)

A ~ N(A0, 0.05A0) (11.146)

ℓ ~ N(ℓ0, 0.08ℓ0) (11.147)

Note for this instance that (.)0 is the mean value of (.). The resulting stiffness is then also a random variable; note that it is no longer normally distributed.
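A minimal MATLAB sketch of this example; the mean values E0, A0, and ℓ0 are assumed:

N  = 1e5;                        % number of samples (assumed)
E0 = 2.0e11; A0 = 1e-4; L0 = 1;  % assumed means in Pa, m^2, and m
E  = E0*(1 + 0.20*randn(N, 1));  % Equation 11.145
A  = A0*(1 + 0.05*randn(N, 1));  % Equation 11.146
L  = L0*(1 + 0.08*randn(N, 1));  % Equation 11.147
k  = E.*A./L;                    % Equation 11.144; a histogram of k is skewed, not normal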

Problems
1. A nonlinear vibration SDOF system is shown in Figure P11.1, with m, c, and
k and the dry friction coefficient μ. Suppose that it is excited by a force f(t) =
sin(ωt).
a. Find the governing equation for the system.
b. Calculate the energy loss and determine the amplitude and phase of the
forced response of the equivalent linear viscous system.
2. An SDOF system is given by

10 x + µ x syn ( x ) + 40 x = −10 xg (t )


β

where the mass is 10 kg; k = 40 N/m, xg (t ) = − A sin(2.1t ) m/s2, β = 0.3, and

μ = 15 N (s/m)β.

Figure P11.1  SDOF system with mass m, viscous damping c, and Coulomb dry friction, with response x(t) and excitation f(t).

a. Calculate the responses by using Simulink; plot the absolute accelera-


tions, relative displacements, and the energy dissipation loops when A =
0.2, 0.15, 0.1, and 0.05 m/s2. Plot the steady-state displacement vs. the
bound of ground acceleration.
b. Calculate the effective damping ζeff through Timoshenko damping
(Equation 11.34) when A = 0.2, 0.15, 0.1, and 0.05 m/s2. Plot the damp-
ing ratio vs. ground acceleration.
c. Use the effective damping ratios obtained in (b) to linearize the system and calculate the responses; compare the results with those of (a) when A = 0.2, 0.15, 0.1, and 0.05 m/s2.
3. Use x = randn(SampNum,1)*Amp and let Amp = 10, 20, 30, and 40 m/s2 to generate 300 pieces of excitations acting on the system given in Problem 2.
a. Calculate the relative displacements accordingly. Find the mean and
standard deviation of these displacements.
b. Use the above-mentioned random input to study the relationship
between the input amplitude and the mean peak responses.
4.
a. Calculate the linearized FRF by using H(ω) = Σ Xi(ω)/Fi(ω). The input is randn(SampNum,1)*Amp; let Amp = 5, 10, 20, 30, and 40 m/s2 to generate 300 pieces of excitations acting on the system given in Problem 2.
b. Use the half-power point method to calculate the linearized damping ratio; compare the results with those obtained in Problem 2.
c. Examine the linearity.
5.
a. Find the effective stiffness keff by using the approach of secant stiffness
of the linearized model for the system given in Problem 2 with linear-
ized damping ratio ζeff = 0.29.
b. Discuss the method of using secant stiffness to approximate the effec-
tive stiffness of a sublinear system, such as the one given in Problem 2
with 50 randn(SampNum, 1).
6. Use the MATLAB code of “lsim” to compute the absolute acceleration and
relative displacement through this linearized model based on parameters
keff and ζeff, with the El Centro ground acceleration (E-W) as the excitation.
The units of mass, damping, and stiffness are kg, N·s/m, and N/m, respectively. The peak ground acceleration (PGA) of the El Centro earthquake is
normalized to 4 m/s2.
7. An SDOF system is given by

10 x(t ) + cx (t ) + f R (t ) = 10 xg (t )

where fR(t) is the nonlinear restoring force. If $\ddot{x}_g(t) = 10\sin(6t)$, then the restoring force can be plotted as in Figure P11.2, with c = 0, dy = 0.02, q = 7, fm = 10, and x0 = 0.1.

Figure P11.2  Nonlinear restoring force fR vs. displacement x (yield displacement dy, force levels 7 and 10, and displacement amplitude 0.1).

a. Calculate the effective stiffness by using secant stiffness and the energy
approach given by Equation 11.28.
b. Calculate the effective damping ζeff through Timoshenko damping
(Equation 11.34) and alternative damping (Equation 11.46).
c. Use the MATLAB code of "lsim" to compute the relative displacement through this linearized model with keff and ζeff, with the El Centro ground acceleration (E-W) as the excitation. The units of mass, damping, and stiffness are kg, N·s/m, and N/m, respectively. The PGA of the El Centro earthquake is normalized to 10 m/s2. The linearized models include the following:
Case 1: Let the effective stiffness and the damping ratio calculated through the approach of using the secant stiffness be keff = 100 and ζeff = 0.14, respectively.
Case 2: Let the effective stiffness and the damping ratio calculated
through the energy approach be keff = 34.4 and ζeff = 0.35, respectively.
8.
a. Create a Simulink model for the system shown in Problem 7.
b. Use the El Centro ground acceleration again to compute the absolute
acceleration and relative displacement based on your Simulink model
with PGA = 10 m/s2 for the following cases:
Case 1: Let the effective stiffness and the damping ratio calculated
through the approach of using the secant stiffness be keff = 100 and
ζeff = 0.14.
Case 2: Let the effective stiffness and the damping ratio calculated
through the energy approach be keff = 34.4 and ζeff = 0.35.
Case 3: nonlinear response by Simulink.
c. Use the Simulink model and let the PGA be 2.5, 5, 10, 15, 20, 25, and
30 m/s2 to study the linearity of the peak responses of both the accelera-
tions and the displacements.
9. Write a MATLAB code to carry out a Monte Carlo simulation for determining the upper-left area inside a square with side length 6 cm, shown in Figure P11.3, where the curve has the function y = 0.24x².

Figure P11.3  Square region and the curve y = 0.24x², with a 5-cm dimension marked.

10.
a. Write a MATLAB program for a random number generator based on
the following Rayleigh distribution:
$$f_H(y) = \frac{y}{\sigma^2}\,e^{-\frac{1}{2}\left(\frac{y}{\sigma}\right)^2}, \quad 5 > y > 0, \ \sigma = 1.5 \text{ m/s}^2$$


b. Use Monte Carlo simulation to generate 500 pieces of random ground
accelerations

xga(t) = y xgn(t)

where y is the amplitude and xgn(t) is the normalized earthquake record. The normalized time histories are chosen to be El Centro.
c. With the generated ground accelerations, calculate the peak responses (both absolute acceleration and relative displacement) of the system shown in Problem 7, with the linearized model (keff, ζeff) of Case 2 obtained in Problem 7.
d. With the generated ground accelerations, calculate the peak responses (both the absolute acceleration and the relative displacement) of the system shown in Problem 7, with the Simulink model obtained in Problem 8.
References
AASHTO. (2004). AASHTO LRFD Bridge Design Specifications, 3rd Ed., American
Association of State Highway and Transportation Officials, Washington, DC.
Ahmadi, G. and Su, L. (1987). “Equivalence of single term Wiener-Hermite and equivalent
linearization techniques,” J. Sound Vib., 118: 307–311.
Ang, A. H. S. and Tang, W. H. (1984). Probability Concepts in Engineering Planning and
Design v. II. Decision, Risk and Reliability, Wiley, New York.
Antsaklis, P. J. and Michel, A. N. (1997). Linear Systems, McGraw Hill, Reading, MA.
Aster, R., Borchers, B. and Thurber, C. (2012). Parameter Estimation and Inverse Problems,
2nd Ed., Elsevier, Waltham, MA.
Bendat, J. S. and Piersol, A. G. (2011). Random Data: Analysis and Measurement Procedures,
4th Ed., John Wiley & Sons, Hoboken, NJ.
Blackman, R. B. and Tukey, J. W. (Eds.) (1959). “Particular pairs of windows,” in The Measure­
ment of Power Spectra, From the Point of View of Communications Engineering, Dover
Publications, New York, pp. 98–99.
Box, G., Jenkins, G. M. and Reinsel, G. C. (1994). Time Series Analysis: Forecasting and
Control, 3rd Ed., Prentice-Hall, Englewood Cliffs, NJ.
Caughey, T. K. and O’Kelly, M. E. J. (1965). “Classical normal modes in damped linear
dynamic systems,” J. Appl. Mech., ASME, 32: 583–588.
Chadan, K. and Sabatier, P. C. (1977). Inverse Problems in Quantum Scattering Theory,
Springer-Verlag, Berlin.
Charnes, A., Frome, E. L. and Yu, P. L. (1976). “The equivalence of generalized least squares
and maximum likelihood estimates in the exponential family,” J. Am. Stat. Assoc.,
71(353): 169.
Cheng, F. Y. (2001). Matrix Analysis of Structural Dynamics—Applications and Earthquake
Engineering, CRC Press, Boca Raton, FL.
Cheng, F. Y. and Truman, K. Z. (2001). Structural Optimization—Dynamic and Seismic
Applications, CRC Press, Boca Raton, FL.
Chopra, A. K. (2001). Dynamics of Structures: Theory and Applications to Earthquake
Engineering, 2nd Ed., Prentice-Hall, Englewood Cliffs, NJ.
Clark, C. (2005). LabVIEW Digital Signal Processing and Digital Communications, McGraw-
Hill, New York.
Clough, R. W. and Penzien, J. (1993). Dynamics of Structures, 2nd Ed., McGraw-Hill, New York.
Cole, H. A. (1968). “On-the-analysis of random vibrations,” Paper No. 68-288, American
Institute of Aeronautics and Astronautics.
Coles, S. (2001). An Introduction to Statistical Modeling of Extreme Values, Springer-Verlag,
London.
Collins, J. A. (1981). Fatigue of Materials in Mechanical Design, McGraw-Hill, New York.
Crandall, S. H., Chandiramani, K. L. and Cook, R. G. (1966). "Some first passage problems in random vibration," J. Appl. Mech., 33(3): 532–538.
Davenport, J. H. (1992). “Primality testing revisited,” Proceeding ISSAC ’92 Papers from the
International Symposium on Symbolic and Algebraic Computation, 123: 129.
Der Kiureghian, A. (1980). “Structural response to stationary excitation,” J. Eng. Mech. Div.,
ASCE, 106: 1195–1213.
Der Kiureghian, A. (1981). “A response spectrum method for random vibration analysis of
MDF systems,” Earthquake Eng. Struct. Dyn., 9: 419–435.
Dowling, N. E. (1972). “Fatigue failure predictions of complicated stress-strain histories,”
ASTM J. Mater., 7(1): 71–87.


Dowling, N. E. (1993). Mechanical Behavior of Materials, Prentice-Hall, Englewood Cliffs, NJ.


Ellingwood, B., Galambos, T. V., MacGregor, J. G. and Comell, C. A. (1980). Development
of a Probability Based Load Criterion for American National Standard A58, National
Bureau of Standards, Washington, DC.
FEMA (Federal Emergency Management Agency) NEHRP Recommended Seismic Provisions
for New Buildings and Other Structures. (2009). FEMA P-750/2009 Document.
Ghosn, M., Moses, F. and Wang, J. (2003). NCHRP Report 489: Design of Highway Bridges
for Extreme Events, Transportation Research Board of the National Academies,
Washington, DC.
Goodwin, G. C. and Payne, R. L. (1977). Dynamic System Identification: Experiment Design
and Data Analysis, Academic Press, New York.
Gumbel, E. J. (1958). Statistics of Extremes, Columbia University Press, New York.
Gupta, A. K. and Jaw, J.-W. (1986). “Response spectrum method for nonclassically damped
systems,” Nucl. Eng. Des., 91: 161–169.
Hamilton, J. (1994). Time Series Analysis, Princeton University Press, Princeton, NJ.
Hida, S. E. (2007). "Statistical significance of less common load combinations," J. Bridge Eng., ASCE, 12(3): 389–393.
Horsthemke, W. and Lefever, R. (1984). Noise Induced Transitions, Theory and Applications
in Physics, Chemistry and Biology, Springer-Verlag, Berlin.
Inman, D. J. (2008). Engineering Vibration, 3rd Ed., Pearson Prentice-Hall, Upper Saddle
River, NJ.
Iyenger, R. N. and Dash, P. K. (1978). “Study of the random vibration of non-linear systems
by the Gaussian closure technique,” J. Appl. Mech., 45: 393–399.
Kaiser, J. F. and Schafer, R. W. (1980). “On the use of the I0-sinh window for spectrum analy-
sis,” IEEE Trans. Acoust. Speech Signal Process., ASSP-28(1): 105–107.
Krenk, S. (1978). A Double Envelope for Stochastic Process, Report No. 134, Structural
Research Laboratory, Technical University of Denmark, Lyngby, Denmark.
Liang, Z. and Inman, D. J. (1990). “Matrix decomposition methods in experimental modal
analysis,” J. Vib. Acoust., Trans. ASME, 112(3): 410–413.
Liang, Z. and Lee, G. C. (1991a). “Representation of damping matrix,” J. Eng. Mech., ASCE,
117(5): 1005–1020.
Liang, Z. and Lee, G. C. (1991b). Damping of Structures, Part I: Theory of Complex Damping,
National Center for Earthquake Engineering Research, Tech. Report NCEER-91-0004,
State University of New York at Buffalo.
Liang, Z. and Lee, G. C. (1998). “On cross effects of seismic response of structures,” J. Eng.
Struct., 20(4–6): 503–509.
Liang, Z. and Lee, G. C. (2002). “On principal axes of M-DOF structures: Static loading,”
J. Earthquake Eng. Eng. Vib., 1(2): 293–302.
Liang, Z. and Lee, G. C. (2003). “On principal axes of M-DOF structures: Dynamic loading,”
J. Earthquake Eng. Eng. Vib., 2(1): 39–50.
Liang, Z., Lee, G. C., Dargush, G. F. and Song, J. (2012). Structural Damping: Applications
in Earthquake Response Modification, CRC Press, Boca Raton, FL.
Liang, Z., Tong, M. and Lee, G. C. (1992). “Complex modes in damped linear dynamic sys-
tems,” Int. J. Anal. Exp. Modal Anal., 7(1): 1–20.
Lin, Y. K. (1970). “First excursion failure of randomly excited structures,” AIAA J., 8(4):
720–728.
Ludeman, L. C. (2003). Random Process, Filtering, Estimation, and Detection. Wiley-
Interscience, Hoboken, NJ.
Lutes, L. D. and Larson, C. E. (1990). “Improved spectral method for variable amplitude
fatigue prediction,” ASCE J. Struct. Eng., 116(4): 1149–1164.
Lyon, R. H. (1961). “On the vibration statistics of a randomly excited hand-spring oscillator,”
J. Acoust. Soc. Am., 33: 1395–1403.

Marley, M. J. (1991). Time Variable Reliability under Fatigue Degradation, Norweigian


Institute of Technology, Trondheim, Norway.
McConnell, K. G. (1995). Vibration Testing: Theory and Practice, John Wiley & Sons, New York.
Meirovitch, L. (1986). Elements of Vibration Analysis, McGraw-Hill International Editions,
New York.
Manohar, C. S. (1995). "Methods of nonlinear random vibration analysis," Sadhana, 20(2–4): 345–371.
Mosegaard, K. and Tarantola, A. (1995). “Monte Carlo sampling of solutions to inverse prob-
lems,” J. Geophys. Res., 100(B7): 12431–12447.
Nowak, A. S. (1999). NCHRP Report 368: Calibration of LRFD Bridge Design Code,
Transportation Research Board of the National Academies, Washington, DC.
Nowak, A. S. and Collins, K. R. (2000). Reliability of Structures, McGraw Hill, New York.
Ortiz, K. (1985). “On the stochastic modeling of fatigue crack growth,” Ph.D. Dissertation,
Stanford University, Stanford, CA.
Ortiz, K. and Chen, N. K. (1987). “Fatigue damage prediction for stationary wideband stresses,”
ICASP 5, Presented at the Fifth International Conference on the Applications of Statistics
and Probability in Civil Engineering, Vancouver, Canada.
Paris, P. C. (1964). “The fracture mechanics approach to fatigue,” in Fatigue, An Interdisciplinary
Approach, J. J. Burke, N. L. Reed and V. Weiss (Eds.), Syracuse University Press, New
York, pp. 107–132.
Perng, H.-L. (1989). “Damage accumulation in random loads,” Ph.D. Dissertation University
of Arizona, Tucson, AZ.
Powell, A. (1958). "On the fatigue failure of structures due to vibration excited by random pressure fields," J. Acoust. Soc. Am., 30(12): 1130–1135.
Press, W. H., Teukolsky, S. A., Vetterling, W. T. and Flannery, B. P. (Eds.) (2007). “Section
19.4. Inverse Problems and the Use of A Priori Information,” in Numerical Recipes: The
Art of Scientific Computing, 3rd Ed., Cambridge University Press, New York.
Ragazzini, J. R. and Zadeh, L. A. (1952). “The analysis of sampled-data systems,” Trans. Am.
Inst. Elec. Eng., 71(II): 225–234.
Rice, J. R. (1964). “Theoretical prediction of some statistical characteristics of random loadings
relevant of fatigue and fracture,” Ph.D. Dissertation, Lehigh University, Bethlehem, PA.
Rice, S. O. (1944, 1945). “Mathematical analysis of random noise,” Bell Syst. Tech. J., 23:
282–332; 24: 46–156. Reprinted in Wax, N. (1954). Selected Papers on Noise and
Stochastic Processes, Dover, New York.
Richard, R. M., Cho, M. and Pollard, W. (1988). “Dynamic analysis of the SIRTF one-meter
mirror during launch,” Proc. Int. Soc. Opt. Eng., 973: 86–99.
Risken, H. (1989). The Fokker–Planck Equation: Methods of Solution and Applications, 2nd Ed., Springer-Verlag, Berlin.
Rzhevsky, V. and Lee, G. C. Quantification of Damage Accumulation of Moment Resisting Frames under Earthquake Ground Motions, MCEER, University at Buffalo, Buffalo, NY (unpublished manuscript).
Saeed, G. (2004). Fundamentals of Probability, with Stochastic Processes, 3rd Ed., Pearson
Education Limited, Harlow, UK.
Schueller, G. I. and Shinnozuka, M. (Eds.) (1987). Stochastic Methods in Structural Dynamics,
Martinus Nijhjoff, Boston.
Shumway, R. H. and Stoffer, D. S. (2011). Time Series Analysis and Its Applications, Springer,
New York.
Singh, M. P. (1980). “Seismic response by SRSS for nonproportional damping,” J. Eng. Mech.
Div., ASCE, 106(6): 1405–1419.
Sinha, R. and Igusa, T. (1995). “CQC and SRSS methods for non-classically damped struc-
tures,” Earthquake Eng. Struct. Dyn., 24: 615–619.

Song, J., Chu, Y.-L., Liang, Z. and Lee, G. C. (2007a). “Estimation of peak relative velocity
and peak absolute acceleration of linear SDOF systems,” Earthquake Eng. Eng. Vib.,
6(1): 1–10.
Song, J., Liang, Z., Chu, Y. and Lee, G. C. (2007b), “Peak earthquake responses of structures
under multi-component excitations,” J. Earthquake Eng. Eng. Vib., 6(4): 1–14.
Soong, T. T. (1973). Random Differential Equations in Science and Engineering, Academic
Press, New York.
Soong, T. T. and Grigoriu, M. (1992). Random Vibration of Mechanical and Structural
Systems, Prentice-Hall International Inc., Englewood Cliffs, NJ.
Sornette, D., Magnin, T. and Brechet, Y. (1992). “The physical origin of the Coffin-Manson
law in low-cycle fatigue,” Europhys. Lett., 20: 433.
Stigler, S. M. (1986). The History of Statistics: The Measurement of Uncertainty before 1900,
Harvard University Press, Cambridge, MA.
Stigler, S. M. (1999). Statistics on the Table: The History of Statistical Concepts and Methods,
Harvard University Press, Cambridge, MA.
Tarantola, A. (2005). Inverse Problem Theory, Society for Industrial and Applied Mathematics,
Philadelphia, PA.
Tayfun, M. A. (1981). “Distribution of crest-to-trough wave heights,” J. Waterways Harbors
Div. ASCE, 107: 149–158.
Tong, M., Liang, Z. and Lee, G. C. (1994). “An index of damping non-proportionality for
discrete vibration systems-reply,” J. Sound Vib., 174(1): 37–55.
Turkstra, C. J. and Madsen, H. (1980). “Load combinations in codified structural design,”
ASCE, J. Struct. Eng., 106(12): 2527–2543.
Vanmarcke, E. (1975). “On the distribution of first passage time for normal stationary random
process,” J. Appl. Mech., 42: 215–220.
Vanmarcke, E. (1984). Random Fields, Analysis and Synthesis, MIT Press, Cambridge, MA.
Ventura, C. E. (1985). “Dynamic analysis of nonclassically damped systems,” Ph.D. Thesis,
Rice University, Houston, TX.
Vickery, B. J. and Basu, R. (1983). “Across wind vibration of structures of circular cross sec-
tion. Part I. Development of a mathematical model for two dimensional conditions,”
J. Wind Eng. Ind. Aerodyn., 12: 49–97.
Villaverde, R. (1988). “Rosenblueth’s modal combination rule for systems with nonclassical
damping,” Earthquake Eng. Struct. Dyn., 16: 315–328.
Vose, D. (2008). Risk Analysis: A Quantitative Guide, 3rd Ed., John Wiley & Sons, Chichester,
UK.
Walter, É. and Pronzato, L. (1997). Identification of Parametric Models from Experimental
Data, Springer, Heidelberg.
Warburton, G. B. and Soni, S. R. (1977). “Errors in response calculations of nonclassically
damped structures,” Earthquake Eng. Struct. Dyn., 5: 365–376.
Weaver, Jr., W., Timoshenko, S. P. and Young, D. H. (1990). Vibration Problems in Engineering,
5th Ed., Wiley.
Wen, Y. K. (1989). “Methods of random vibration for inelastic structures,” Appl. Mech. Rev.,
42(2): 39–52.
Wen, Y. K., Hwang, H. and Shinozuka, M. (1994). Development of Reliability-Based Design
Criteria for Buildings Under Seismic Load, NCEER Tech. Report 94-0023, University
at Buffalo.
Whittle, P. (1951). Hypothesis Testing in Time Series Analysis, Almquist and Wicksell,
Uppsala, Sweden.
Wilkinson, J. H. (1965). The Algebraic Eigenvalue Problem, Oxford University Press, UK.
Wirsching, P. H. and Chen, Y. N. (1988). “Consideration of probability based fatigue design
criteria for marine structures,” Marine Struct., 1: 23–45.
References 629

Wirsching, P. H. and Light, M. C. (1980). “Fatigure under wide band random stresses,” ASCE
J. Struct. Div., 106: 1593–1607.
Wirsching, P. H., Paez, T. L. and Ortiz, K. (1995). Random Vibration, Theory and Practice,
Dover Publications, Inc.
Yang, J. N. (1974). “Statistics of random loading relevant to fatigue,” J. Eng. Mech. Div.
ASCE, 100(EM3): 469–475.