
MODAL ANALYSIS BASED ON THE
RANDOM DECREMENT TECHNIQUE
- APPLICATION TO CIVIL ENGINEERING STRUCTURES

by

John Christian Asmussen

Department of Building Technology and Structural Engineering


University of Aalborg
Sohngaardholmsvej 57
9000 Aalborg
Denmark
www.civil.auc.dk/i6
Preface
The present thesis, Modal Analysis Based on the Random Decrement Technique - Application to Civil Engineering Structures, has been written as part of my Ph.D. study programme carried out in the period September 1994 - August 1997 at the Department of Building Technology and Structural Engineering, Aalborg University, Denmark. The Ph.D. work is part of the frame programme Dynamics of Structures sponsored by the Danish Technical Research Council.
The proof-reading was done by Kirsten Aakjær, Senior Secretary at my department. Her careful work is greatly appreciated.
I would like to thank Professor G.C. Manos from the Department of Civil Engineering, Aristotle University, Thessaloniki, Greece, for making my visit at his department in the period 1/6 1995 - 1/9 1995 possible and enjoyable. Furthermore, special thanks to the staff at his department for solving a lot of logistic problems.
Several persons from the staff at the laboratory have performed excellent work in connection with my experimental tests and the preparation of the Bridge Measurement System used for ambient testing of bridges. Thanks to Henning Andersen, engineering assistant, Carl Carstens, electrician, and Jørgen Hasselgren, engineering assistant. The valuable advice from Peter Mossing, Senior Engineer, from the Danish Building Research Institute in connection with the design of the Bridge Measurement System is greatly appreciated.
Thanks to Palle Andersen, Ph.D., for discussions and help during the Ph.D. study. Especially his simulation tools based on ARMAV models have increased the quality of most of the simulation studies performed in this thesis.
Thanks also to my advisers, Associate Professor Rune Brincker, Department of Building Technology and Structural Engineering, Aalborg University, and Professor Sam Ibrahim, Department of Mechanical Engineering, Old Dominion University, Virginia, USA, for sharing their ideas and experience in identification and Random Decrement.
Finally, the support from my wife Pernille has been invaluable. She has never complained, although she had to spend many weekends and holidays on her own. And she never asked the evident question: How can you spend so much time on estimating constants in linear differential equations?

John Christian Asmussen


Aalborg, 5 August 1997
Nomenclature
Abbreviations
ARMAV = Vector auto regressive moving average.
ASD = Average Spectral Density.
DOF = Degrees of freedom.
FFT = Fast Fourier transform.
FRF/FRM = Frequency response function/matrix.
IRF/IRM = Impulse response function/matrix.
ITD = Ibrahim time domain.
ln = Natural logarithm.
MAC = Modal assurance criterion.
MPF = Modal participation factor.
PTD = Polyreference time domain.
RD = Random decrement.
rms = Root mean square.
SIC = Shape invariance criterion.
VRD = Vector random decrement.

Roman
a, ã = Triggering level vector.
A = State matrix.
Ã = State matrix.
b, b̃ = Triggering level vector.
B = State matrix.
B̃ = State matrix.
C = Damping matrix. Symmetric and positive definite.
COV[·] = Covariance operator.
D = Observation matrix.
D_XX = Auto Random Decrement function.
D_XY = Cross Random Decrement function.
D_XXi = Random Decrement functions in vector form.
D_Xi = Vector Random Decrement function.
D_X = Vector Random Decrement functions in vector form.
E[·] = Mean value operator.
E = Error function.
f = Force vector.
F = State force vector.
h = Impulse response matrix/function.
H = Frequency response matrix/function.
i, j = Integers or √−1.
I = Identity matrix.
k = Constant.
K = Stiffness matrix. Symmetric and positive definite.
m, m = Modal masses, modal mass matrix.
M = Mass matrix. Diagonal and positive definite.
n = Integer, e.g. DOFs or vector/matrix size.
N = Integer, e.g. number of time points, triggering points.
p = Density function (multivariate).
P = Distribution function (multivariate).
P = Probability.
q_0 = Modal initial condition vector.
q(t) = Modal response vector.
R_XX = Auto correlation matrix/function.
R_XY = Cross correlation matrix/function.
R′, R″ = First and second time derivative of R.
t, τ = Time variables.
S_XX(ω) = Auto spectral density.
S_YX(ω) = Cross spectral density.
T^E = Local extremum triggering condition.
T^GT = Theoretical general triggering condition.
T^GA = Applied general triggering condition.
T^L = Level crossing triggering condition.
T^P = Positive point triggering condition.
T^Z = Zero crossing triggering condition.
T^V = Vector triggering condition.
U(t) = Noise process.
u(t) = Realization of noise process.
V = Covariance matrix/function.
x = Displacement response vector.
ẋ = Velocity response vector.
ẍ = Acceleration response vector.
x_0 = Initial displacement condition.
ẋ_0 = Initial velocity condition.
X, Y = Stochastic vector processes.
Ẋ, Ẏ = Time derivatives of X, Y.
Ẍ, Ÿ = Double time derivatives of X, Y.
x, y = Realizations of the stochastic vector processes X, Y.
ẋ, ẏ = Realizations of the stochastic vector processes Ẋ, Ẏ.
ẍ, ÿ = Realizations of the stochastic vector processes Ẍ, Ÿ.
z = State vector.
ż = Time derivative of state vector.
z_0 = Initial state condition.
Z_XY = Fourier transform of D_XY.
(·)^T = Matrix transpose.
(·)^{−1} = Matrix inverse.
(·)* = Matrix complex conjugate.
(·)^ = Estimate of (·).
Greek
ΔT = Sampling interval.
Δt = Time shift.
Δt = Time shifts in vector form.
μ = Discrete-time eigenvalue.
γ²_xy = Coherence function.
λ = Continuous-time eigenvalue.
μ = Mean value - vector/scalar.
ω = Cyclic eigenfrequency.
ω_d = Damped cyclic eigenfrequency.
Φ_i, Φ = Mode shape vector/matrix.
Ψ_i, Ψ = Eigenvector/matrix or modal matrix.
σ = Standard deviation.
τ = Time variable.
ζ = Modal damping ratio.
Λ = Diagonal matrix with eigenvalues, λ.
[μ] = Diagonal matrix with discrete eigenvalues.
Contents
1 Introduction  1
1.1 Background and Motivation  1
1.1.1 Data Analysis  4
1.1.2 The Random Decrement Technique  5
1.2 Review of the Random Decrement Technique  6
1.2.1 Development of the Random Decrement Technique  7
1.2.2 Theoretical Aspects of the Random Decrement Technique  9
1.2.3 Application of the Random Decrement Technique  10
1.3 Scope of Work  12
1.4 Thesis Outline  13
1.5 Reader's Guide  14
2 Theoretical Background for Linear Structures  23
2.1 Lumped Mass Parameter System  24
2.1.1 Modal Decomposition of Free Decays  25
2.1.2 Forced Vibration  27
2.2 Identification of Modal Parameters From Free Decays  28
2.2.1 General Equations  29
2.2.2 Pseudo Measurements  30
2.2.3 Modelling of Noise  31
2.2.4 Extraction of Eigenfrequencies and Damping Ratios  32
2.2.5 Modal Participation Factors  32
2.2.6 Practical Application  32
2.2.7 Separation of Noise Modes and Physical Modes  33
2.3 Structures Loaded by Gaussian White-Noise  35
2.3.1 Correlation Functions  35
2.3.2 Load Modelling  36
2.3.3 Correlation Functions of the Response  38
2.4 Summary  40
3 The Random Decrement Technique  43
3.1 Definition of Random Decrement Functions  44
3.2 Applied General Triggering Condition  46
3.2.1 Example 1: Illustration of Triggering Conditions  47
3.2.2 Example 2: 2DOF System  47
3.3 Level Crossing Triggering Condition  48
3.3.1 Example 1: Illustration of Triggering Conditions  49
3.3.2 Example 2: 2DOF System  50
3.4 Local Extremum Triggering Condition  52
3.4.1 Example 1: Illustration of Triggering Conditions  53
3.4.2 Example 2: 2DOF System  54
3.5 Positive Point Triggering Condition  55
3.5.1 Example 1: Illustration of Triggering Conditions  57
3.5.2 Example 2: 2DOF System  57
3.6 Zero Crossing Triggering Condition  59
3.6.1 Example 1: Illustration of Triggering Conditions  60
3.6.2 Example 2: 2DOF System  60
3.7 Quality Assessment of RD Functions  62
3.7.1 Shape Invariance Test  62
3.7.2 Symmetry Test  65
3.8 Choice of Triggering Levels  67
3.9 Comparison of Different Approaches  70
3.9.1 Traditional Approaches  71
3.9.2 Triggering Conditions  72
3.9.3 Results of Simulation Study  72
3.9.4 Conclusions  77
3.10 Summary  78
4 Vector Triggering Random Decrement  81
4.1 Definition of VRD Functions  82
4.2 Mathematical Basis of VRD  88
4.2.1 Choice of Time Shifts  91
4.3 Variance of VRD Functions  91
4.4 Quality Assessment  92
4.5 Examples - 2DOF Systems  94
4.5.1 Example 1  94
4.5.2 Example 2  96
4.6 Example - 4DOF System  98
4.7 Summary  102
5 Variance of RD Functions  103
5.1 Variance of RD Functions  103
5.2 Example 1: Level Crossing - SDOF  107
5.3 Example 2: Positive Point - SDOF  110
5.4 Example 3: Positive Point - 2DOF  115
5.5 Example 4: Positive Point - 5 DOF  117
5.6 Summary  119
6 Bias Problems and Implementation  121
6.1 Bias of RD Functions  121
6.1.1 Bias due to Discretization  121
6.1.2 Bias due to Sorting of Triggering Points  124
6.1.3 Bias due to High Damping  126
6.2 Implementation of RD Functions  128
6.2.1 RD and VRD Functions in HIGH-C  128
6.2.2 MATLAB Utility Functions  132
6.2.3 Example  137
6.3 Summary  139
7 Estimation of FRF by Random Decrement  141
7.1 Traditional FFT Based Approach  141
7.2 Random Decrement Based Approach  143
7.3 Case Studies  146
7.3.1 Basic Case - SDOF System  146
7.3.2 Experimental Study - Laboratory Bridge Model  151
7.4 Summary  156
8 Ambient Testing of Bridges  159
8.1 Case Study 1: Queensborough Bridge  160
8.1.1 Data Analysis Methodology  161
8.1.2 Results  162
8.1.3 Conclusion  164
8.2 Case Study 2: Laboratory Bridge Model  164
8.2.1 Data Analysis Methodology  165
8.2.2 Results  170
8.2.3 Conclusions  173
8.3 Case Study 3: Vestvej Bridge  173
8.4 Bridge Description  174
8.5 Measurement Setup  174
8.5.1 Data Analysis Methodology  177
8.5.2 Results  180
8.5.3 Conclusions  183
8.6 Summary  183
9 Conclusions  187
9.1 Summary  187
9.1.1 Chapter 1  187
9.1.2 Chapter 2  187
9.1.3 Chapter 3  188
9.1.4 Chapter 4  188
9.1.5 Chapter 5  189
9.1.6 Chapter 6  189
9.1.7 Chapter 7  189
9.1.8 Chapter 8  189
9.2 General Conclusions  190
9.3 Perspectivation and Future Work  192
9.3.1 Non-Gaussian Processes  192
9.3.2 Non-Linear Structures  192
9.3.3 Improvement of the Variance Model  193
9.3.4 Extraction of Modal Parameters  193
9.3.5 Damage Detection by On-the-Line Continuous Surveyance  193
10 Summary in Danish  195
A Random Decrement and Correlation Functions  199
A.1 Multivariate Gaussian Variables  200
A.2 Conditional Densities  201
A.3 Definition of Random Decrement Functions  201
A.4 General Theoretical Triggering Condition  202
A.5 Applied General Triggering Condition  204
A.6 Level Crossing Triggering Condition  207
A.6.1 Expected Number of Triggering Points  208
A.7 Local Extremum Triggering Condition  209
A.7.1 Expected Number of Triggering Points  210
A.8 Positive Point Triggering Condition  211
A.8.1 Expected Number of Triggering Points  212
A.9 Zero Crossing Triggering Condition  213
A.9.1 Expected Number of Triggering Points  214
A.10 Summary  214
Chapter 1
Introduction
The purpose of this chapter is to introduce the topic of this thesis: modal analysis based on the Random Decrement technique. The chapter gives a short background and motivation for the work presented in this thesis, a clear delimitation of the work, and an overview of the contents of the thesis.
Section 1.1 describes the basic procedures in vibration testing and some of the traditional applications. In this section some main choices are presented which delimit this work to the Random Decrement technique. Section 1.2 contains a review of the Random Decrement technique. This is a natural starting point for identifying the advantages and disadvantages of the technique and for detecting any gaps in the knowledge about it. Section 1.3 delimits the work presented in this thesis in detail and raises questions which will be investigated throughout the thesis. Section 1.4 summarizes the contents of each chapter in order to make selective reading possible. Finally, a reader's guide finishes this introduction in section 1.5.

1.1 Background and Motivation


There are several purposes for performing vibration testing of a structure. This could e.g. be comparison of the measured response of a structure with the theoretically predicted response, updating of a theoretical model of the structure, inspection of a structure based on its dynamic characteristics, force identification etc., see e.g. Ewins [1], or the purpose could simply be to estimate the vibration levels of a structure in its natural environment. Common to most of the arguments for carrying out a vibration test of a structure is that identification of the dynamic characteristics of the structure is either the main purpose or a necessary intermediate result. This thesis deals only with the part of a vibration test that concerns identification of the dynamic characteristics of the structure. Further applications of these dynamic characteristics are not considered.
Vibration tests differ from each other by the forces bringing the structure into vibration. Many different types of forces exist: step excitation, impact excitation, harmonic excitation, random or white noise excitation, ambient excitation etc., see e.g. Ewins [1]. This thesis only considers vibrations of structures subjected to loads which can be modelled by white noise. The loads might be created artificially, using e.g. a frequency analyser together with a shaker attached to the structure, or they might be ambient. Ambient loads are the natural loads on a structure, such as wind, waves, traffic etc. It is not unusual that these forces are modelled using white noise. A mathematical description of the load models is given in chapter 2.
If the aim of a vibration test of a structure is to identify the dynamic characteristics of the structure, the following three processes should all be carried out with high accuracy in order to make the test successful, see e.g. Ewins [1]:
- The mathematical modelling of the structural vibrations.
- Measurement of the vibrations.
- Data analysis of the measurements.
These three processes are in practice coherent, so a separation followed by an individual solution and performance is not possible. This is illustrated by fig. 1.1, where it is indicated that results of the data analysis could lead to a change of the mathematical model or a change in the experimental setup.

Figure 1.1: Illustration of the processes in vibration testing. The data analysis may result
in a reformulation of the mathematical model or a change in the experimental setup.
This thesis mainly deals with the data analysis procedure in vibration testing. This is the process where the measurements of the vibrations of a structure are used to calibrate or identify the parameters of the mathematical model of the structure. The parameters of the mathematical model describe the dynamic characteristics of the structure.
Collecting the measurements of the vibrating structure is the fundamental basis for a successful test. If the measurements are not collected carefully, it is impossible to obtain satisfactorily accurate dynamic characteristics of the structure, regardless of the mathematical modelling and data analysis. Although this thesis also includes experimental work, the process of measuring the vibrations of a structure is not described or reported in detail. This is beyond the scope of this work.
Structures are continuous or distributed systems. The mass, damping and stiffness properties are distributed throughout the spatial definition of the structure. In this thesis the mathematical model used to describe the vibrations of a structure is a discrete model: the linear lumped mass parameter model. The damping forces, which model all energy dissipation from the structure, are assumed to be proportional to the velocities of the lumped masses, and the stiffness forces are assumed to be proportional to the displacements of the lumped masses. The principle is illustrated in fig. 1.2.

Figure 1.2: Example of a continuous structure modelled by a lumped mass parameter system. The displacements at the top of the monopile offshore platform are modelled by a lumped mass parameter system. The loads on the structure, wind and waves, are modelled by f(t), the damping forces are modelled by c·ẋ(t) and the stiffness forces are modelled by k·x(t). x(t) is the displacement and ẋ(t) is the velocity of the top of the platform.
In principle any material point of the continuous structure has a mass, damping and stiffness property attached, so a structure intuitively has infinitely many such dynamic degrees of freedom. Since the structure is assumed to be linear, the properties of the dynamic degrees of freedom, which are local properties, can be converted to the dynamic characteristics called modal parameters, which are global parameters of the structure. The modal parameters consist of eigenfrequencies, damping ratios and mode shape vectors. Sometimes the term structural mode, or just mode, is used. A structural mode is described by an eigenfrequency, a damping ratio and a mode shape vector which belong together. An eigenfrequency can be interpreted as a frequency at which the structure will have a high response level to a harmonic force with that frequency. The eigenfrequencies of a structure are therefore sometimes also denoted resonance frequencies. A damping ratio gives information about how fast the vibrations of the structure at the corresponding eigenfrequency dissipate. The mode shape vectors describe the relative displacements of the material points of the structure at the corresponding eigenfrequency. These properties make the modal parameters global parameters.
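As a minimal illustration of these definitions, stated here as a standard textbook relation rather than a result of the thesis, the single degree of freedom version of the lumped mass parameter model in fig. 1.2, with mass m, damping c and stiffness k, reads

m·ẍ(t) + c·ẋ(t) + k·x(t) = f(t)

and its modal parameters are the eigenfrequency ω = √(k/m), the damping ratio ζ = c/(2√(k·m)) and the damped eigenfrequency ω_d = ω·√(1 − ζ²). For a multi degree of freedom model the same quantities are defined mode by mode, together with a mode shape vector.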
Theoretically structures have infinitely many modes and thereby infinitely many modal parameters. In vibration testing the number of modes that can be identified is limited by the force. The forces act in a limited frequency range, so only structural modes with an eigenfrequency within that range are brought into vibration and can therefore be identified. This also means that the lumped mass parameter model of the structure can be limited; it does not have to contain an infinite number of degrees of freedom. These are the arguments for using the linear lumped mass parameter model with a finite number of masses and thereby a finite number of degrees of freedom. This is the mathematical model used to describe the vibrations of the structures considered in the present thesis.
The above assumptions of the mathematical modelling of the structure might seem very restrictive. But it is a very general mathematical model and has been used to model various structures, from large civil engineering structures such as high-rise buildings, bridges and offshore platforms to small mechanical systems. It is important to note that the mathematical model assumes that the dynamic characteristics of the structures are time-invariant during the measurement period. This is also widely assumed.
1.1.1 Data Analysis
The data analysis process is denoted identification or modal analysis, since the aim is to identify the modal parameters of the linear lumped mass parameter system. The main topic of this thesis is how these modal parameters can be identified from the measurements. Since the late 1960s, when the Fast Fourier Transform (FFT) algorithm became well known, see Cooley & Tukey [2], the main part of the data analysis of measured vibrations and forces has been based on this algorithm.
Consider two stochastic processes X(t) and Y(t). They could e.g. describe the measured response and force of a structure. The Fourier transforms of the processes are defined as
X(ω) = (1/2π) ∫_{-∞}^{∞} X(t) e^{-iωt} dt,    Y(ω) = (1/2π) ∫_{-∞}^{∞} Y(t) e^{-iωt} dt    (1.1)
The data analysis is based on the cross and auto spectral density functions. Several different methods have been developed to extract the modal parameters of the structure from the spectral densities. They are formulated in both the frequency and the time domain. So the spectral densities constitute a connection between the measured vibrations and the parameters of the mathematical model. The cross spectral density function is defined as, see e.g. Bendat & Piersol [3], Schmidt [4]
S_XY(ω) = lim_{T→∞} (1/T) E[X*(ω,T) Y(ω,T)]    (1.2)
where E[·] denotes the mean value operator, superscript * denotes the complex conjugate, and X(ω,T), Y(ω,T) are defined from eq. (1.1) by replacing the integration limits ±∞ with ±T. The auto spectral densities are special cases of the cross spectral density and are defined by replacing X with Y or vice versa. The FFT algorithm is a fast and reliable method for estimating X(ω,T) and Y(ω,T) used in eq. (1.2). The above approach (and similar approaches) has by far been the most frequently used data analysis procedure since the introduction of the FFT algorithm. In practice the above cross spectral density function can never be calculated, because infinitely long measurements are impossible to obtain. Therefore the estimate of the cross spectral density function is
Ŝ_XY(ω,T) = (1/T) E[X*(ω,T) Y(ω,T)]    (1.3)
The mean value operation is calculated from the finite FFTs of pairs of realizations x(t) and y(t) of the processes X(t) and Y(t). As indicated in eq. (1.3), the estimated spectral density becomes a function of the finite time record length, T. The practical limitation to realizations of finite length can be modelled mathematically by multiplying a realization of virtually infinite length by a window function. The simplest possible window function is the boxcar or rectangular window

W(t,T) = 1 for |t| < T/2,    W(t,T) = 0 for |t| > T/2    (1.4)
So the finite FFTs used in the estimate of the spectral densities are calculated from records of infinite length multiplied by a window function, ending up with time records of finite length. This is one of the major disadvantages of data analysis based on the FFT algorithm. In practice the necessary introduction of a window function results in a bias problem, usually denoted leakage, since the FFT is calculated from the measurements multiplied by the window function. This bias error results in an overestimation of the energy dissipation of the different modes of a system: the window functions leak energy from one frequency to another. The bias problem can be reduced by choosing another and more complicated window function than the rectangular window in eq. (1.4), but in general it can never be removed.

Another disadvantage is that only a finite frequency resolution of at least Δω = 2π/T is obtained, see e.g. Bendat & Piersol [3]. Furthermore, in practice the mean value operation cannot be calculated exactly, since only a finite number of realizations exists. Sometimes only a single realization of the processes exists, but assuming that the processes are ergodic the above procedure can still be followed.
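As an illustration of how eqs. (1.2)-(1.4) are used in practice, the following sketch estimates an auto spectral density by averaging windowed FFTs of finite, non-overlapping records (a Bartlett/Welch-type estimate). It is a minimal example under assumed names and parameters, not the procedure used later in the thesis; a Hanning window is chosen instead of the boxcar window merely to illustrate that smoother windows reduce, but do not remove, leakage.

```python
import numpy as np

def auto_spectral_density(x, fs, segment_length):
    # Split the record into finite segments, window each one (cf. eq. (1.4)),
    # FFT it and average the magnitudes -- the averaging over segments plays
    # the role of the mean value operation in eq. (1.3).
    w = np.hanning(segment_length)          # smoother than the boxcar window
    scale = fs * np.sum(w ** 2)
    n_seg = len(x) // segment_length
    spectra = []
    for k in range(n_seg):
        seg = x[k * segment_length:(k + 1) * segment_length] * w
        X = np.fft.rfft(seg)
        spectra.append((X * np.conj(X)).real / scale)
    freqs = np.fft.rfftfreq(segment_length, d=1.0 / fs)
    return freqs, np.mean(spectra, axis=0)

# Example usage (sampling frequency and segment length are illustrative):
# freqs, Sxx = auto_spectral_density(x, fs=100.0, segment_length=1024)
```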

It should be made clear that despite these problems, data analysis based on Fourier transformation is a fast and reliable method. One of the reasons is that persons with experience in modal analysis and vibration testing can immediately extract valuable information from a plot of the data transformed into the frequency domain, such as spectral densities, compared to a time domain plot of the data. An area which illustrates the popularity of data analysis based on Fourier transformations is ambient testing of bridges. Ambient testing means testing of a structure which vibrates due to natural loads such as wind, traffic, waves etc. In a review of ambient testing of bridges, see Farrar et al. [6], a bibliography of about 100 papers is listed. At least 95% of this work is based on the FFT algorithm.

1.1.2 The Random Decrement Technique


Although data analysis based on the FFT algorithm has many advantages, the disadvantages motivated the development of alternative approaches. One of these alternative approaches, the Random Decrement (RD) technique, was introduced by H.A. Cole at NASA during the late 1960s and early 1970s, see e.g. Cole [7] - [10]. The topic of this thesis is the RD technique. The RD technique is a simple and easily implemented method for analysis of vibrations of structures loaded by stochastic forces, but the technique has never been widely used. Figure 1.3 shows the number of papers, reports, articles, theses etc. which have been published each year, from the introduction of the technique in 1968 to 1997.

Figure 1.3: The literature published each year (conference papers, reports, articles, Ph.D. theses etc.) concerning the RD technique.
The numbers in the figure are based on the bibliography at the end of this chapter. Only a handful of people have published several papers concerning either theoretical work or application work. One of the reasons for the lack of interest in this technique is perhaps that for many years the theoretical background did not include a statistical description in the same sense as the methods based on Fourier transformations. During the late 1980s the mathematical background of the RD technique was extended to include nearly a full statistical description.
This motivates the work reported in the present thesis, where the topic is modal analysis based on the RD technique. The results of applying the RD technique will several times be compared with analysis based on the FFT algorithm. The RD technique could be compared with other time domain algorithms, but the FFT algorithm is chosen, since it is the most well-known and well-documented method. This will give a solid basis for comparison.

1.2 Review of the Random Decrement Technique


The natural starting point of this work is to review the development and the progress of the RD technique up to the 1990s. Thereby a proper introduction to the technique is given. Theoretical work and applications of the technique published in this period are reviewed. Since some of the results obtained in the 1990s are described in detail later in this thesis, only selected papers from this period are reviewed. It is the intention that the bibliography of this chapter presents a reference list of the published literature dealing with the RD technique. Any paper, thesis, report etc. dealing with the RD technique which is not included in the bibliography is unknown to the author.
Throughout the literature several names, such as Random Dec, Randomdec, RD etc., have been used. In this thesis the Random Decrement technique is abbreviated the RD technique.

1.2.1 Development of the Random Decrement Technique


The RD technique was developed by H.A. Cole at NASA during the late 1960s and early 1970s. Cole was working with analysis of the dynamic response of space structures subjected to ambient loads. His main tasks were identification of the dynamic characteristics and in-service damage detection of space structures from the measured response. The first papers on the RD technique were published in the period 1968-1973, see Cole [7], [8], [9] and [10]. Cole was looking for "a simple and direct method for translating the time history into a form meaningful to the observer", Cole [7]. The damping ratios estimated from the half power bandwidth of the spectral densities of the random time series, estimated using the FFT algorithm, were found to have a large variance. Furthermore, no approach was known for detecting non-linearities from the spectral densities of the response to the unmeasurable ambient loads. Instead, Cole used the sample estimates of the auto correlation functions of the time series for identification and damage detection. Damping ratios and eigenfrequencies of the structure were extracted from the envelope of the auto correlation function. Damage detection was suggested to be based on changes in the auto correlation functions. But problems arose, since the auto correlation function was found to change with variations in the ambient loads. These problems with both auto correlation functions and spectral densities motivated the development of the RD technique. Basically, Cole introduced the RD technique as a method to transform a random time series into a free decay of the structure in question. Free decays only contain information about the structure and not about the random loads. At first the modal parameters (especially the damping ratios) were extracted from the decay (or decrement). This is probably the explanation for the name given to the technique.

To explain the concept of the RD technique and to argue for its validity, Cole used the following explanation. The random response of a structure at the time t₀ + t is composed of three parts: 1) the step response from the initial displacement at the time t₀; 2) the impulse response from the initial velocity at the time t₀; 3) a random part due to the load applied to the structure in the period t₀ to t₀ + t.

What happens if a time segment is picked out every time the random response, x(t), has a given initial displacement, say x(t) = a, and these time segments are averaged?

This question indicates the first concept of the RD technique and was answered by: as the number of averages increases, the random part due to the random load will eventually average out and become negligible. Furthermore, the sign of the initial velocity is expected to vary randomly with time, so the resulting initial velocity will be zero. The only part left is the free decay response from the initial displacement, a. The principle is illustrated in figure 1.4.
Figure 1.4: Concept of the RD technique. Random time series, x, (top figure) with triggering points, and averaging process (subfigures) with both the resulting RD function (full line) and the current time segment (dashed line).
In figure 1.4 the initial displacement is chosen to be a = 1.5·σ_x. It is commonly accepted to denote the initial displacement, a, as the triggering level and the time points, t₀, where x = a, as triggering points. Furthermore, the triggering level a is usually given as a multiple of the standard deviation, σ_x, of the time series. The process of estimating RD functions illustrated in figure 1.4 can be formulated as a sum of time segments picked out from the response on the condition that the time segments have the value a at the start.

D̂_XX(τ) = (1/N) Σ_{i=1}^{N} x(t_i + τ) | x(t_i) = a    (1.5)

where D̂_XX is the estimated RD function, τ is the time variable in D̂_XX as illustrated in fig. 1.4, and N is the total number of triggering points. The simplicity of the estimation process is obvious, since only detection of triggering points and averaging of the corresponding time segments are performed.
In applications of the RD technique the measurements, x_j, are discrete (time series). This means that a problem arises, since the probability of having the value x(t_i) = a in the time series is zero, unless a is chosen carefully. The problem is solved by implementing the triggering condition as a level crossing problem. Therefore the name adopted for the condition illustrated in fig. 1.4 and eq. (1.5) is the level crossing triggering condition.
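To make the estimation process concrete, the following sketch implements eq. (1.5) with the level crossing triggering condition. It is only an illustration of the averaging idea, not the HIGH-C or MATLAB implementations described in chapter 6; the function name, the AR(2) test series and the parameter choices (a = 1.5·σ_x, 200 lags) are assumptions made for this example.

```python
import numpy as np

def rd_function(x, trigger_level, n_lags):
    # Collect and average the time segments that start where x crosses the
    # triggering level a, cf. eq. (1.5) and the level crossing condition.
    a = trigger_level
    segments = []
    for i in range(len(x) - n_lags):
        # A discrete series rarely equals a exactly, so the condition is
        # implemented as a crossing of the level between samples i and i+1.
        if (x[i] - a) * (x[i + 1] - a) <= 0.0:
            segments.append(x[i:i + n_lags])
    if not segments:
        raise ValueError("no triggering points found")
    return np.mean(segments, axis=0), len(segments)

# Synthetic test series resembling a lightly damped oscillator (AR(2) model)
rng = np.random.default_rng(0)
u = rng.standard_normal(20000)
x = np.zeros_like(u)
for k in range(2, len(u)):
    x[k] = 1.95 * x[k - 1] - 0.96 * x[k - 2] + u[k]
x = (x - x.mean()) / x.std()

D_xx, n_trig = rd_function(x, trigger_level=1.5, n_lags=200)  # a = 1.5 sigma_x
```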
The introduction of the technique was followed up by a simulation study by Chang [11]. Chang investigated the significance of the length of the RD function, τ, and the number of ensemble averages (triggering points). Based on simulations of the response of 1 and 2 DOF systems loaded by white noise, he recommended about 2000 ensemble averages in order to extract accurate damping ratios (and eigenfrequencies) from the RD functions. The length of the RD functions was suggested to be in the range of 50% to 125% of the beat period of the two eigenfrequencies.
Chang investigated the level crossing triggering condition and the zero crossing with positive slope triggering condition. The latter triggering condition was introduced by Cole, see Cole [10]. A time segment is picked out and used in the averaging process if the time series crosses zero with a positive slope at the start of the time segment. The resulting RD function was believed to be equal to the impulse response function of the structure, since the initial displacement is zero. Houbolt, see Houbolt [12], improved this triggering condition by also picking out time segments where the time series crosses zero with a negative slope. The sign of these time segments is changed before they are averaged with the time segments picked out at positive slopes. This approach can be adopted for any triggering condition, see Brincker et al. [57].
1.2.2 Theoretical Aspects of the Random Decrement Technique
Although the RD technique has been applied to a broad range of structures, only a few papers have considered theoretical aspects of the RD technique. This also includes the solution of implementation problems, bias problems etc.
In 1977 Ibrahim introduced the concept of auto and cross RD functions, see Ibrahim [22], [23]. Up to this point the RD technique had only been applied to single channel measurements, which resulted in estimation of eigenfrequencies and damping ratios, but with no possibility of extracting mode shape information. The concept of the auto and cross RD functions is based on multi-channel measurements. Consider two measurements x(t) and y(t). The auto, D_XX, and cross, D_YX, RD functions are estimated using the level crossing triggering condition as:
D̂_XX(τ) = (1/N) Σ_{i=1}^{N} x(t_i + τ) | x(t_i) = a    (1.6)

D̂_YX(τ) = (1/N) Σ_{i=1}^{N} y(t_i + τ) | x(t_i) = a    (1.7)
The first subscript refers to the measurement from which the time segments are picked out and averaged, whereas the second subscript refers to the measurement in which the triggering points are detected. Alternatively, zero upcrossings or downcrossings could be used as the triggering condition. The approach made it possible to estimate mode shapes corresponding to the measurement points by combining it with a method for determination of modal parameters from free decays or impulse response functions, such as e.g. the Ibrahim Time Domain method, see Ibrahim [22], [23]. This was an important improvement of the RD technique.
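A corresponding sketch of the cross RD estimator in eq. (1.7), again purely illustrative and not the implementation used later in the thesis (the function name and structure are assumptions):

```python
import numpy as np

def cross_rd_function(y, x, trigger_level, n_lags):
    # Triggering points are detected in x (second subscript), while the
    # averaged time segments are taken from the simultaneous series y.
    a = trigger_level
    segments = [y[i:i + n_lags]
                for i in range(min(len(x), len(y)) - n_lags)
                if (x[i] - a) * (x[i + 1] - a) <= 0.0]
    return np.mean(segments, axis=0)
```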
In Reed [26] it is suggested that the RD estimation process is also applied backwards in the time series. The RD functions should be the same. This corresponds to using both positive and negative time lags, τ, in the time segments. This new approach raised a problem, since it is difficult to describe a negative time lag in terms of free decays. This problem was partly solved by Vandiver et al. [32] in 1982.
Vandiver et al. published a paper where it was proven that an auto RD function, estimated using the level crossing triggering condition, is proportional to the auto correlation function of a stationary process, if the process has a zero mean Gaussian distribution. The paper is important, since the proportionality between auto RD functions and auto correlation functions made the RD technique directly comparable with the spectral density functions estimated using FFT. This is due to the Wiener-Khintchine relations, which describe the correlation functions as the inverse Fourier transform of the spectral density functions. A description in terms of correlation functions has the advantage that negative time lags can be interpreted without any problems.
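In the notation of eqs. (1.1)-(1.5), and stated here only as a summary sketch of the cited results, the link reads

D_XX(τ) ∝ R_XX(τ),    R_XX(τ) = ∫_{-∞}^{∞} S_XX(ω) e^{iωτ} dω

so an RD function estimated from a stationary, zero mean Gaussian process carries the same modal information as the correlation function and, through the Wiener-Khintchine relation, as the spectral density.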
Brincker et al. [54] and [60] extended the link between RD functions and correlation functions. By introducing a theoretical general triggering condition it was proven that auto and cross RD functions are proportional to the auto and cross correlation functions, respectively. This interpretation of RD functions will be introduced, discussed and further generalized in later chapters, so a detailed description of the results is omitted at this stage. It should be noted that Bedewi [45] and Bedewi & Yang [52], [56] also contain some theoretical considerations about the link between RD functions and free decays and/or correlation functions.
The papers reviewed above are the most important papers concerning the theoretical aspects of the RD technique. A couple of papers concerning the implementation of the technique have also been published, see e.g. Chang [11], Caldwell [25], Kiraly [33], Nasir & Sunder [34], Brincker et al. [53], [54], [57], [58]. These problems and their solutions will also be discussed in this thesis.
Finally, it is worth mentioning that a recent paper by Desforges et al. [64] concludes that the RD technique is an accurate way of estimating spectral densities and modal parameters. They compared the RD technique with several other methods for estimating correlation functions/spectral density functions.

1.2.3 Application of the Random Decrement Technique


After its introduction and development, the RD technique was mainly used for estimation of damping ratios and eigenfrequencies in flutter testing. This is illustrated by the references [12] - [19]. In these papers the RD technique is applied to experimentally obtained data, and damping ratios and eigenfrequencies are extracted from the RD functions using a least squares approach. The assumption is that the auto RD functions are equivalent to free decays.
The two main applications of the RD technique are damage detection (on-line) and estimation of modal parameters. The work on these topics drove the development of the RD technique. In damage detection applications the RD functions are used as a basis for detecting incipient cracks and flaws. The basic idea is that an incipient failure will change the stiffness and the damping characteristics of a structure. Since the RD functions of a random time series are interpreted as free decays, these functions will change with incipient failure. The triggering level a, see eq. (1.5), is kept constant, which means that the RD functions are independent of the level of the input. Several authors, Cole [10], Yang [24], conclude that any non-linearity will change the auto correlation function with a change in the input level, whereas the RD functions remain unchanged.
Damage detection based on RD functions has been applied to several structures, such as aerospace structures, Cole [10], laboratory beams, Yang & Caldwell [21], piping systems, Yang & Caldwell [24], offshore platform models, Yang et al. [29], Yang et al. [36], and simulations of MDOF systems, Ibrahim [43], etc. One of the main problems in damage detection is to decide whether or not an identified change in structural characteristics such as eigenfrequencies and damping ratios is due to a decrease in stiffness (cracks). A different mass loading or changes in the environmental conditions would also result in a change of the modal parameters. Furthermore, the identified modal parameters should not be interpreted as deterministic parameters but as stochastic variables, since there will always be uncertainty connected to the identification process. An identified change in e.g. an eigenfrequency could simply be an adverse realization of the stochastic variable.
In Yang et al. [29] this uncertainty is taken into account. During the reference or virgin state of the structure a reference RD function is estimated by averaging a number of RD functions. A standard deviation of the reference RD function is estimated at the same time. Figure 1.5 shows a typical reference RD function and the 95% confidence intervals.
Figure 1.5: Reference RD function and 95% confidence intervals.
The next time an inspection is performed by measuring the random response of the structure, a new RD function is estimated. If this RD function at a certain time, e.g. τ = 1.5 seconds, is within the 95% confidence bounds, no damage is detected. Otherwise the decision is that damage is detected. This approach was also used in Kummer et al. [30] and Yang et al. [36]. One major problem in this technique is how to choose the value of the confidence bound to obtain a proper balance between detecting a non-existing crack and disregarding an existing crack. Although this approach takes the statistical nature of the problem into consideration, it is not independent of changes of the load on the structure.
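A minimal sketch of this inspection scheme, assuming that a set of RD functions estimated in the reference state is available; the names, the 1.96 factor for an approximate 95% band and the pointwise test are illustrative assumptions, not the exact procedure of Yang et al. [29]:

```python
import numpy as np

def reference_bounds(rd_estimates, z=1.96):
    # Reference RD function (mean of repeated estimates) and an approximate
    # 95% confidence band, following the averaging idea described above.
    rd = np.asarray(rd_estimates)        # shape: (number of estimates, lags)
    mean_rd = rd.mean(axis=0)
    std_rd = rd.std(axis=0, ddof=1)
    return mean_rd, mean_rd - z * std_rd, mean_rd + z * std_rd

def inspect(new_rd, lower, upper):
    # Flag possible damage if the newly estimated RD function leaves the band.
    return bool(((new_rd < lower) | (new_rd > upper)).any())
```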
The second main application of the RD technique is identification of structures. The technique has mainly been applied to measurements of offshore structures and aeroplanes subjected to ambient loads, see Yang et al. [31] and Ibrahim [27]. But the technique has also been applied to identification of railway vehicle kinematic behaviour, see Siviter & Pollard [41], and soil testing, see Al-Sannad et al. [44]. One of the advantages of identification by RD is the simplicity of the approach. Basically, RD functions are estimated by a simple averaging process of time segments. This should make the technique especially favourable in connection with large structures such as bridges, where the experiments include a large number of measurements.
Finally, the RD technique has also been used to identify non-linear structures, see e.g. Ibrahim et al. [49], Caldwell [25] and Haddara [59]. The fundamental idea is to estimate several RD functions at different initial conditions or triggering levels, a. From the RD functions, parameters (e.g. modal parameters) can be extracted as a function of the triggering level. The non-linearity can then be detected from changes in the modal parameters with the triggering level, a.

1.3 Scope of Work


The delimitations made in the previous sections are:
- The vibrations of the structure can be described by a linear lumped mass parameter model.
- The loads on the structure are stationary and can be modelled as random white noise.
- The measurements are assumed to be carried out as carefully as possible.
- The data analysis is based on the RD technique.
- The aim of the vibration test is only to estimate the modal parameters.
From these delimitations the scope of this thesis can be defined as:
The objective of the present Ph.D. thesis is a description, implementation and further development of the theory behind the RD technique, as well as a comparison of the performance of the RD technique with the FFT algorithm.
From this definition it follows that a natural starting point is to describe the linear lumped mass parameter system mathematically in terms of modal parameters. Furthermore, the modelling of the load should be described in order to be able to describe the response of the system to the loads. This will make it possible to derive the correlation functions of the systems, which are important for interpreting the RD functions.
The next natural task is to define the RD functions mathematically and establish the relations to the correlation functions of the measurements. This step includes a description of the presently known relations, which will lead to detection of the strengths and weaknesses of these relations. From this point the technique can be further developed.
Another task is implementation and application of the technique. The technique should be implemented in a general way and applied in the data analysis of complicated and large structures. The reason is that, since it is already known that the technique is fast due to the simple averaging process, this advantage is best exploited in the analysis of structures with a large quantity of data. By analysing complicated structures the RD technique is tested properly.
1.4 Thesis Outline
The following presents an overview of the contents of this thesis and allows selective reading.
Linear vibration theory and linear stochastic vibration theory are briefly described in chapter 2. The linear lumped mass parameter model is defined, the modal parameters are defined and the load modelling is described. The main purpose is to show that under certain circumstances the free decays of the model and the correlation functions of the stochastic response of the model have the same mathematical description. This allows modal parameters to be extracted from correlation functions using methods which were originally developed to extract modal parameters from free decays. These methods have been used for decades and their performance is thereby well tested. The chapter summarizes how, in practice, modal parameters are extracted from free decays or correlation functions.
In chapter 3 the RD technique is introduced. RD functions are defined mathematically and it is shown how RD functions are estimated. The RD functions are linked to correlation functions, under the assumption that the stochastic processes are stationary and zero mean Gaussian distributed, by introducing the applied general triggering condition. Different formulations of the technique are explained. It is suggested how the quality of an estimated RD function can be assessed. The chapter finishes with a simulation study of the quality and estimation time of the RD technique versus two other well-known methods. The chapter is supported by appendix A, where the theory of the RD technique is derived in detail. The procedure and theory described do not assume that the loads are measured.
Based on experience with the RD technique some weaknesses were discovered. This leads to the introduction in chapter 4 of the Vector Random Decrement (VRD) technique, which can be interpreted as a generalization of the RD technique. The VRD technique can be used with advantage on data with several measurements collected simultaneously. The theory of this new technique is described in detail and several examples are given. As with the RD technique, the VRD technique does not assume that the loads are measured. The results of the VRD technique are compared with results from traditional formulations of the RD technique.
In chapter 5 a new approach for predicting the variance of RD functions is suggested. The performance of the approach is compared with the performance of a simple method for predicting the variance of RD functions based on the assumption of uncorrelated time segments. The simple method is derived in appendix A and chapter 3.
Chapter 6 discusses bias and implementation problems. The chapter is of special interest for persons who are going to implement the RD technique. The implementations made as part of this Ph.D. work are described.
Chapter 7 introduces the RD technique in combination with the FFT algorithm for estimation of the frequency response function from measured response and force. The previous chapters mainly concern the application of the RD technique in situations with unmeasured forces, such as e.g. ambient testing. This application of the RD technique has the advantage that leakage errors are suppressed. This can never be achieved if pure FFT is used. The approach is illustrated on both simulated and experimentally obtained data.
Different experimental studies are described in chapter 8: the ambient vibration testing of the Queensborough Bridge and the Vestvej Bridge. Furthermore, a vibration study of a laboratory bridge model is reported. These studies document the applicability of the RD technique. In the analysis of the Vestvej Bridge the software developed to analyse data from ambient excited bridges is illustrated.
The thesis finishes in chapter 9 with conclusions on the present work and perspectives on future tasks. Chapter 10 contains a summary in Danish.

1.5 Reader's Guide


Each chapter in this thesis finishes with a bibliography where the references are listed in the order they are quoted. The exception is this chapter, where the references are listed in chronological order in the bibliography. The references are quoted as NAME [1], NAME1 & NAME2 [1] or NAME et al. [1] depending on whether there are one, two, or three and more authors. Tables, figures and equations are referred to as table NUMBER, fig. NUMBER and eq. (NUMBER). NUMBER could be e.g. 4.7, referring to the seventh table, figure or equation in chapter 4.
The investigations in this thesis are based on both simulated data and experimentally obtained data. In the situations where the data have been simulated, this has always been performed using auto regressive moving average vector (ARMAV) models. The interested reader is referred to Andersen [5] for a description of this technique and the software used. This method has been chosen since it preserves the covariance function of the continuous-time response in the simulated discrete-time response.

Bibliography
[1] Ewins, D.J. Modal Testing: Theory and Practice. Research Studies Press, Ltd.,
Staunton, Somerset, England, 1984 (reprinted 1995). ISBN 0 86380 017 3.
[2] Cooley, J.W. & Tukey, J.W. An Algorithm for the Machine Calculation of Complex
Fourier Series. Mathematics of Computation, Vol. 19, pp. 297-301, April 1965.
[3] Bendat, J.S. & Piersol, A.G. Random Data - Analysis and Measurement Procedures.
John Wiley & Sons, 1986. ISBN 0-471-04000-2.
[4] Schmidt, H. Resolution Bias Errors in Spectral Density, Frequency Response and
Coherence Function Measurements. I-V. Journal of Sound and Vibration, 101(3), pp.
347-427, 1985.
[5] Andersen, P. Identification of Civil Engineering Structures using Vector ARMA Models.
Ph.D.-thesis, Aalborg University, 1997.
[6] Farrar, C.R., Baker, W.E., Bell, T.M., Cone, K.M., Darling, T.W., Duffey, T.A.,
Eklund, A. & Migliori, A. Dynamic Characterization and Damage Detection in the
I-40 Bridge Over the Rio Grande. Los Alamos National Laboratories. LA-12767-MS.
UC-906, June 1994.
1968
[7] Cole, H.A. On-The-Line Analysis of Random Vibrations. AIAA Paper No. 68-288.
1968.
1971
[8] Cole, H.A. Method and Apparatus for Measuring the Damping Characteristics of a
Structure. United States Patent No. 3, 620,069, Nov. 16. 1971.
[9] Cole, H.A. Failure Detection of a Space Shuttle Wing by Random Decrement. NASA
TMX-62,041, May 1971.
1973
[10] Cole, H.A. On-Line Failure Detection and Damping Measurements of Aerospace
Structures By Random Decrement Signature. NASA CR-2205, 1973.
1975
[11] Chang, C.S. Study of Dynamic Characteristics of Aeroelastic Systems Utilizing Ran-
domdec Signatures. NASA-CR-132563, Feb. 1975.
[12] Houbolt, J.C. On Identifying Frequencies and Damping in Subcritical Flutter Testing.
Proc. NASA Symposium on Flutter Testing Techniques, Edwards, California, Oct.
9-10, 1975. NASA SP-415, pp. 1-41.
[13] Bennet, R.M. & Desmarais, R.N. Curve Fitting of Aeroelastic Transient Response
Data with Exponential Functions. Proc. NASA Symposium on Flutter Testing Tech-
niques, Edwards, California, Oct. 9-10, 1975. NASA SP-415, pp. 43-57.
[14] Hammond, C.E. & Dogget, R.V. Determination of Subcritical Damping by Moving
Block/Randomdec Applications. Proc. NASA Symposium on Flutter Testing Tech-
niques, Edwards, California, Oct. 9-10, 1975. NASA SP-415, pp. 59-76.
[15] Huttsell, L.J. & Noll, T.E. Wind Tunnel Investigation of Supersonic Wing-Tail Flut-
ter. Proc. NASA Symposium on Flutter Testing Techniques, Edwards, California,
Oct. 9-10, 1975. NASA SP-415, pp. 193-211.
[16] Lenz, R.W. & McKeever, B. Time Series Analysis in Flight Flutter Testing at the
Air Force Flight Test Center: Concepts and Results. Proc. NASA Symposium on
Flutter Testing Techniques, Edwards, California, Oct. 9-10, 1975. NASA SP-415, pp.
287-317.
[17] Perangelo, H.J. & Milordi, F.W. Flight Flutter Testing Technology at Grumman.
Proc. NASA Symposium on Flutter Testing Techniques, Edwards, California, Oct.
9-10, 1975. NASA SP-415, pp. 319-375.
[18] Abla, M.A. The Application of Recent Techniques in Flight Flutter Testing. Proc.
NASA Symposium on Flutter Testing Techniques, Edwards, California, Oct. 9-10,
1975. NASA SP-415, pp. 395-411.
[19] Brignac, W.J., Ness, H.B., Johnson, M.K. & Smith, L.M. YF-16 Flight Flutter Test
Procedures. Proc. NASA Symposium on Flutter Testing Techniques, Edwards, Cali-
fornia, Oct. 9-10, 1975. NASA SP-415, pp. 433-457.
[20] Reed, R.E. & Cole, H.A. Applicability of Randomdec Technique to Flight Simulator
for Advanced Aircraft. NASA CR-137609, 1975.
[21] Yang, J.C.S. & Caldwell, D.W. The Measurement of Damping and the Detection of
Damages in Structures by the Random Decrement Technique. 46th Shock and Vibra-
tion Symposium and Bulletin, San Diego, California, Nov. 1975.

1977
[22] Ibrahim, S.R. Random Decrement Technique for Modal Identification of Structures.
Journal of Spacecraft and Rockets, Vol. 14, No. 11, Nov. 1977, pp. 696-700.
[23] Ibrahim, S.R. The Use of Random Decrement Technique for Identification of Structural
Modes of Vibration. AIAA paper, Vol. 77, 1977, pp. 1-9.

1978
[24] Yang, J.C.S. & Caldwell, D.W. A Method for Detecting Structural Deterioration in
Piping Systems. ASME Probabilistic Analysis and Design of Nuclear Power Plant
Structures Manual PVB-PB-030, 1978, pp. 97-117.
[25] Caldwell, D.W. The Measurement of Damping and the Detection of Damage in Linear
and Nonlinear Systems by the Random Decrement Technique. Ph.D.-Thesis, Univer-
sity of Maryland, 1978.

1979
[26] Reed, R.E. Analytical Aspects of Randomdec Analysis. AIAA/ASME/AHS 20th
Structures, Structural Dynamics and Materials Conf. St. Louis, Mo. April 1979, pp.
404-409.
[27] Ibrahim, S.R. Application of Random Time Domain Analysis to Dynamic Flight Mea-
surements. The Shock and Vibration Bulletin, Bulletin 49, Part 2 of 3, Sept. 1979,
pp. 165-170.

1980
[28] Ibrahim, S.R. Limitations on Random Input Forces in Randomdec Computation for
Modal Identi cation. The Shock and Vibration Bulletin. Bulletin 50 (Part 3 of 4),
Dynamic Analysis, Design Techniques. Sept. 1980, pp. 99-112.
[29] Yang, J.C.S., Dagalakis, N. & Hirt, M. Application of the Random Decrement Tech-
nique in the Detection of an Induced Crack on an Offshore Platform Model. Computer
Methods for Offshore Structures. Winter Annual Meeting of ASME, Nov 165-21, 1980,
pp. 55-67.
1981
[30] Kummer, E., Yang, J.C.S. & Dagalakis, N. Detection of Fatigue Cracks in Struc-
tural Members. Proc. 2nd ASCE/EMD Specialty Conference on Dynamic Response
of Structures. Atlanta, Georgia, Jan. 1981, pp. 445-460.
[31] Yang, J.C.S., Aggour, M.S., Dagalakis, N. & Miller, F. Damping of an Offshore
Platform Model by Random Dec Method. Proc. 2nd ASCE/EMD Specialty Conference
on Dynamic Response of Structures. Atlanta, Georgia, Jan. 1981, pp. 819-832.
1982
[32] Vandiver, J.K., Dunwoody, A.B., Campbell, R.B. & Cook, M.F. A Mathematical
Basis for the Random Decrement Vibration Signature Analysis Technique. Journal of
Mechanical Design, Vol. 104, April 1982, pp. 307-313.
[33] Kiraly, L.J. A High Speed Implementation of the Random Decrement Algorithm.
NASA Technical Memorandum 82853, NASA-TA-82853. (Prepared for the 1982
Aerospace/Test Measurement Symposium, Las Vegas, Nevada, May 2-6 1982.)
[34] Nasir, J. & Sunder, S.S. An Evaluation of the Random Decrement Technique of Vibra-
tion Signature Analysis for Monitoring Offshore Platforms. Massachusetts Institute
of Technology, Department of Civil Engineering, Research Report R82-52. Sept. 1982.
1983
[35] Huan,S.-L., McInnis, B.C. & Denman, E.D. Analysis of the Random Decrement
Method. Int. J. Systems Sci., 1983, Vol. 14, No. 4, 417-423.
1984
[36] Yang, J.C.S., Chen, J. & Dagalakis, N.G. Damage Detection in Offshore Structures by
the Random Decrement Technique. ASME, Journal of Energy Resources Technology,
March 1984, Vol. 106, pp. 38-42.
[37] Ibrahim, S.R. Time-Domain Quasilinear Identification of Nonlinear Dynamic Sys-
tems. AIAA Journal, Vol. 6, No. 6, June, 1984, pp. 817-823.
1985
[38] Tsai, T., Yang, J.C.S. & Chen, R.Z. Detection of Damages in Structures by the Cross
Random Decrement Technique. Proc. 3rd International Modal Analysis Conference,
Jan. 28-31, Orlando, Florida, 1985, pp. 691-700.
[39] Yang, J.C.S., Tsai, T., Tsai, W.H. & Chen, Z. Detection and Identification of Structural
Damage from Dynamic Response Measurements. Proc. 4th International Offshore
Mechanics and Arctic Engineering Symposium, Dallas, Texas, 1985, pp. 496-
504.
[40] Yang, J.C.S., Tsai, T., Pavlin, V., Chen, J. & Tsai, W.H. Structural Damage De-
tection by the System Identi cation Technique. The Shock and Vibration Bulletin,
Bulletin 55, Part 3 of 5, June 1985, pp. 57-66.
[41] Siviter, R. & Pollard, M.G. Measurement of Railway Vehicle Kinematic Behaviour
using the Random Decrement Technique. Vehicle Systems Dynamics, Vol. 14. No. 1-3
June 1985, pp. 136-140.
[42] Yang, J.C.S., Marks, C.H., Jiang, J., Chen, D., Elahi, A. & Tsai, W.-H. Determina-
tion of Fluid Damping using Random Excitation. ASME Journal of Energy Resources
Technology, Vol. 107, June 1985, pp. 220-225.
1986
[43] Ibrahim, S.R. Incipient Failure Detection from Random-Decrement Time Functions.
The International Journal of Analytical and Experimental Modal Analysis. Vol. 1,
No. 2, April 1986, pp. 1-9.
[44] Al-Sannad, H.A., Aggour, M.S. & Amer, M.I. Use of Random Loading in Soil Testing.
Indian Geotechnical Journal, Vol. 16, No. 2, April 1986, pp. 126-135
[45] Bedewi, N.E. The Mathematical Foundation of the Auto and Cross Random Decre-
ment Technique and the Development of a System Identification Technique for Detec-
tion of Structural Deterioration. Ph.D.-Dissertation, University of Maryland, 1986.
1987
[46] Bedewi, N.E., Kung, D.-N., Qi, G.-Z. & Yang, J.C.S. Use of the Random Decre-
ment Technique for Detecting Flaws and Monitoring the Initiation and Propagation
of Fatigue Cracks in High-Performance Materials. Proc. Nondestructive Testing of
High-Performance Ceramics, Boston, MA, August 25-27, 1987, pp. 424-441.
[47] Bedewi, N.E. & Yang, J.C.S. A System Identification Technique Based on the Ran-
dom Decrement Signatures. Part 1: Theory and Simulation. Proc. 58th Shock and
Vibration Symposium, Huntsville, Alabama, October 13-15, 1987, Vol.1 pp. 257-273.
[48] Bedewi, N.E. & Yang, J.C.S. A System Identification Technique Based on the Random
Decrement Signatures. Part 2: Experimental Results. Proc. 58th Shock and Vibration
Symposium, Huntsville, Alabama, October 13-15, 1987, Vol.1 pp. 275-287.
[49] Ibrahim, S.R., Wentz, K.R. & Lee, J. Damping Identification from Non-Linear Random
Responses using a Multi-Triggering Random Decrement Technique. Mechanical
Systems and Signal Processing (1987) 1(4), pp. 389-397.
1988
[50] Kung, D.-N., Qi, G.-Z., Yang, J.C.S. & Bedewi, N. Fatigue Life Characterization of
Composite Structures using the Random Decrement Modal Analysis Technique. Proc.
6th International Modal Analysis Conference, Kissimmee, Florida, Feb. 1-4, 1988, pp.
350-356.
[51] Bernard, P. Identification de Grandes Structures: Une Remarque sur la Méthode du
Décrément Aléatoire. Journal of Theoretical and Applied Mechanics. Vol. 7, No. 3,
1988, pp. 269-280. (In French).
1990
[52] Bedewi, N.E. & Yang, J.C.S. The Random Decrement Technique: A More Efficient
Estimator of the Correlation Function. Proc. 1990 ASME International Conference
and Exposition, Boston, MA, USA, Aug. 5-9, pp. 195-201.
[53] Brincker, R., Jensen, J.L. & Krenk, S. Spectral Estimation by the Random Dec
Technique. Proc. 9th International Conference on Experimental Mechanics, Lyngby,
Copenhagen, Aug. 20-24, 1990.
[54] Brincker, R., Krenk, S. & Jensen, J.L. Estimation of Correlation Functions by the
Random Dec Technique. Proc. Skandinavisk Forum for Stokastisk Mekanik, Lund,
Sweden, Aug. 30-31, 1990.
[55] Yang, J.C.S., Qi, G.Z. & Kan, C.D. Mathematical Base of Random Decrement Tech-
nique. Proc. 8th International Modal Analysis Conference, Kissimmee, Florida, USA,
1990, pp. 28-34.
[56] Bedewi, N.E. & Yang, J.C.S. The Relationship Between the Random Decrement Sig-
nature and the Free Decay Response of Multidegree-Of-Freedom Systems. Proc. 1990
ASME International Comp. In Engineering Conference and Exposition. Boston, MA,
USA, Aug 5-9 1990 pp. 77-86.
1991
[57] Brincker, R., Kirkegaard, P.H. & Rytter, A. Identification of System Parameters
by the Random Decrement Technique. Proc. 16th International Seminar on Modal
Analysis, Florence, Italy, Sept. 9-12, 1991.
[58] Brincker, R., Krenk, S. & Jensen, J.L. Estimation of Correlation Functions by the
Random Decrement Technique. Proc. 9th International Modal Analysis Conference
and Exhibit, Firenze, Italy, April 14-18, 1991.
1992
[59] Haddara, M.R. On the Random Decrement for Nonlinear Rolling Motion. 1992
OMAE, Vol. 2, Safety and Reliability, ASME 1992, pp. 321-324.
[60] Brincker, R., Krenk, S., Kirkegaard, P.H. & Rytter, A. Identification of Dynamical
Properties from Correlation Function Estimates. Bygningsstatiske Meddelelser, Vol.
63, No. 1, 1992, pp. 1-38.
1993
[61] Bodruzzaman, M., Li, X., Wang, C. & Devgan, S. Identifying Modes of Vibratory
System Excited by Narrow Band Random Excitations. Proc. Southeastcon '93, 1993.
[62] Tamura, Y., Sasaki, A. & Tsukagoshi, H. Evaluation of Damping Ratios of Randomly
Excited Buildings Using the Random Decrement Technique. Journal of Structural and
Construction Engineering, AIJ, No. 454, Dec. 1993. (In Japanese)
1994
[63] Brincker, R., Demosthenous, M. & Manos, G.C. Estimation of the Coefficient of
Restitution of Rocking Systems by the Random Decrement Technique. Proc. 12th
International Modal Analysis Conference, Honolulu, Hawaii, Jan 31 - Feb 3 1994.
1995
[64] Desforges, M.J., Cooper, J.E. & Wright, J.R. Spectral and Modal Parameter Esti-
mation From Output-Only Measurements. Journal of Mechanical Systems and Signal
Processing, Vol 9, No. 2, March 1995, pp. 169-186.
1996
[65] Asmussen, J.C. & Brincker, R. Estimation of Frequency Response Functions by Ran-
dom Decrement. Proc. 14th International Modal Analysis Conference, Dearborn,
Michigan, USA, February 1996, Vol I, pp. 246-252.
[66] Ibrahim, S.R., Asmussen, J.C. & Brincker, R. Modal Parameter Identification from
Responses of General Unknown Random Inputs. Proc. 14th International Modal Anal-
ysis Conference, Dearborn, Michigan, USA, February 1996, Vol I, pp. 446-452.
[67] Asmussen, J.C., Ibrahim, S.R. & Brincker, R. Random Decrement and Regression
Analysis of Traffic Responses of Bridges. Proc. 14th International Modal Analysis
Conference, Dearborn, Michigan, USA, February 1996, Vol I, pp. 453-458.
[68] Asmussen, J.C. & Brincker, R. Estimation of Correlation Functions by Random
Decrement. Proc. ISMA21 - Noise and Vibration Engineering, Leuven, Belgium,
September 18-20 1996, Vol II, pp. 1215-1224.
1997
[69] Chalko, T.J. & Haritos, N. Scaling Eigenvectors obtained from Ambient Excita-
tion Modal Testing. Proc. 15th International Modal Analysis Conference, Orlando,
Florida, USA, February 3-6 1997, Vol. I, pp. 13-19.
[70] Fasana, A., Garibaldi, L., Giorcelli, E., Ruzzene, M. & Sabia, D. Analysis of a Mo-
torway Bridge Under Random Traffic Excitation. Proc. 15th International Modal
Analysis Conference, Orlando, Florida, USA, February 3-6 1997, Vol. I, pp. 293-300.
[71] Brincker, R. & Asmussen, J.C. Random Decrement Based FRF Estimation. Proc.
15th International Modal Analysis Conference, Orlando, Florida, USA, February 3-6
1997, Vol. II, pp. 1571-1576.
[72] Ibrahim, S.R., Asmussen, J.C. & Brincker, R. Theory of Vector Triggering Random
Decrement. Proc. 15th International Modal Analysis Conference, Orlando, Florida,
USA, February 3-6 1997, Vol. I, pp. 502-509.
[73] Asmussen, J.C., Ibrahim, S.R. & Brincker, R. Application of Vector Triggering
Random Decrement. Proc. 15th International Modal Analysis Conference, Orlando,
Florida, USA, February 3-6 1997, Vol. II pp. 1165-1171.
Chapter 2
Theoretical Background for Linear
Structures
The purpose of this chapter is to establish the tools that are used to extract the modal
parameters from the RD functions. The following is assumed:
- The structures are time-invariant during the measurement period.
- The loads are stationary and Gaussian distributed.
- The vibrations of the structure can be modelled by a linear lumped mass parameter
system.
The response of a structure to non-zero initial conditions, denoted free decays, is discussed
in detail. These free decays will be derived, and the different approaches to extract the
modal parameters from free decays are described. Especially the practical application of
these methods is considered. The argument for introducing these principles is that under
certain assumptions the RD functions are proportional to the correlation functions of the
measurements. Under the same assumptions the correlation functions of the response of the
lumped mass parameter system to Gaussian white noise loading have exactly the same
relations as the response of the model to non-zero initial conditions only. This means that
the approaches developed to extract modal parameters from free decays can be used to
extract modal parameters from the RD functions. This is an advantage, since methods
for extracting modal parameters from free decays are well developed. The procedure does
not assume that the loads are measured. The theory presented in this chapter can be found
in parts in most books on linear vibration theory and linear stochastic vibration theory,
see e.g. references [1] - [7], so references are as a rule omitted throughout this chapter.
Section 2.1 establishes the mathematical model of the vibrations of a linear lumped mass
parameter system. The modal parameters are defined. The equations for free vibrations
and forced vibrations of the structure are given. Free vibrations (or decays) correspond
to the response of the structure to some non-zero initial conditions only. The forced
vibrations are calculated in both time and frequency domain based on knowledge of the
loads and the modal parameters.
Section 2.2 concerns identification of the parameters in the linear lumped mass parameter
system from measured free decays of a structure. Two different techniques are implemented
and used: the Ibrahim Time Domain (ITD) technique and the Polyreference
Time Domain (PTD) technique. A detailed mathematical derivation is not given; instead
the implementation and the practical applications of the techniques are discussed.
Section 2.3 introduces the concepts of correlation, covariance and spectral densities. These
considerations are important, since correlation functions are the link between the mathe-
matical model and the RD functions. It is assumed that the lumped parameter system
is loaded by filtered Gaussian white noise. Section 2.3.2 defines this load modelling. In
section 2.3.3 it is shown that the characteristics of the filter and the system are preserved in
the response of the system and remain uniquely identifiable from the correlation functions.
It is shown that the correlation functions have exactly the same relations as free decays.

2.1 Lumped Mass Parameter System


This section introduces the linear lumped mass parameter systems. The main results
of this section are the definition of the modal parameters, relations for calculating the
forced response of the system and relations for calculating the response to non-zero initial
conditions.
A lumped mass parameter system is characterized by a number of discrete masses. The
number of masses is also denoted the number of dynamic Degrees Of Freedom (DOF).
Each mass has stiffness and damping properties attached. An example of a 2 DOF system
is shown in fig. 2.1.

Figure 2.1: Example of 2 DOF system.


The system is assumed to behave linearly. The energy dissipation from the system is
proportional to the velocity of the masses, and the stiffness of the structure is proportional
to the displacements of the masses. The structure is thereby characterized by having viscous
damping and elastic stiffness. For a general n DOF system the equations of motion
become

$M\ddot{x}(t) + C\dot{x}(t) + Kx(t) = f(t), \qquad x(0) = x_0, \quad \dot{x}(0) = \dot{x}_0$   (2.1)

This expresses a force equilibrium between external and internal forces. All matrices have
the dimension $n \times n$ and the response vectors and force vector have the dimension $n \times 1$,
where n is the number of DOFs. M is a diagonal and positive definite mass matrix.
The stiffness matrix, K, is symmetric and positive definite. The dissipation of energy
is modelled by the symmetric and positive semi-definite damping matrix C. Although
several other damping models exist, see e.g. Nashif et al. [2], the linear viscous damping
model is the only one considered in this thesis. This is a widely used assumption in the
modal analysis community. As indicated in eq. (2.1) the system is assumed time-invariant,
since M, C and K do not depend on time. The acceleration, velocity and displacement
response are denoted $\ddot{x}(t)$, $\dot{x}(t)$ and $x(t)$, respectively.
2.1.1 Modal Decomposition of Free Decays
In order to find a solution to eq. (2.1) a state space formulation is applied

$A\dot{z}(t) + Bz(t) = F(t), \qquad z(0) = z_0, \qquad x(t) = Dz(t)$   (2.2)

where the state matrices, A and B, the force vector, F, and the state vector, z, are given by

$A = \begin{bmatrix} C & M \\ M & 0 \end{bmatrix}, \quad B = \begin{bmatrix} K & 0 \\ 0 & -M \end{bmatrix}, \quad D = \begin{bmatrix} I & 0 \end{bmatrix}$   (2.3)

$z(t) = \begin{bmatrix} x(t) \\ \dot{x}(t) \end{bmatrix}, \quad z_0 = \begin{bmatrix} x_0 \\ \dot{x}_0 \end{bmatrix}, \quad F(t) = \begin{bmatrix} f(t) \\ 0 \end{bmatrix}$   (2.4)

The last relation in eq. (2.2) is denoted the observation equation, since it picks out the
response x(t) which corresponds to the measurements or observations. The matrix D
consists of an $n \times n$ identity matrix and an $n \times n$ zero matrix. The state matrices A and
B are symmetric, but not positive definite. The solution to the homogeneous part of eq.
(2.2) is assumed to be of the form

$z(t) = \psi e^{\lambda t}$   (2.5)

which inserted into eq. (2.2) yields the standard eigenvalue problem of dimension $2n \times 2n$

$(\lambda A + B)\psi = 0$   (2.6)
The solution of the eigenvalue problem gives 2n eigenvalues, $\lambda_i$, and 2n corresponding
eigenvectors, $\psi_i$, i = 1, 2, ..., 2n. It is assumed that all eigenvalues are distinct, which means
that there are no repeated eigenvalues. Most structures are characterized by being
critically underdamped. This means that the eigenvalues and the eigenvectors will be
complex. The structures are assumed to be dissipative, so the real part of the eigenvalues
must be negative. The eigenvalues and the eigenvectors, which can only be calculated
up to a constant scaling factor, appear in complex conjugate pairs. The complex
eigenvalue, $\lambda_i$, is physically interpreted by considering the complex numbers arising from
the solution of the homogeneous part of eq. (2.1) for a critically underdamped SDOF
system.

$\lambda_i = -\zeta_i\omega_i - \omega_i\sqrt{1-\zeta_i^2}\,j, \qquad \lambda_{i+1} = -\zeta_i\omega_i + \omega_i\sqrt{1-\zeta_i^2}\,j$   (2.7)

where $j = \sqrt{-1}$, $\omega_i$ is the undamped cyclic eigenfrequency and $\zeta_i$ is the modal damping
ratio associated with the ith mode. The system is critically underdamped if $\zeta_i < 1$.
The eigenvectors have the following form due to eq. (2.5)

$\psi_i = \begin{bmatrix} \phi_i \\ \lambda_i\phi_i \end{bmatrix}, \qquad \phi_i = D\psi_i$   (2.8)
The vector $\phi_i$ is denoted the mode shape vector. Equations (2.7) and (2.8) define the
modal parameters of the lumped mass parameter system. Estimation of these parameters
is the main task in modal analysis. The displacements, x(t), are taken as the upper half of
the state vector, z(t), computed as the sum of all solutions to the homogeneous equation
given by eq. (2.5), where the parameters are given by the solutions to the eigenvalue problem
in eq. (2.6).

$z(t) = \Psi e^{\Lambda t}q_0$   (2.9)

The following matrices have been introduced

$\Psi = [\psi_1\ \psi_2\ \dots\ \psi_{2n}], \quad e^{\Lambda t} = \mathrm{diag}([e^{\lambda_1 t}\ e^{\lambda_2 t}\ \dots\ e^{\lambda_{2n}t}]), \quad q_0 = [q_{1,0}\ q_{2,0}\ \dots\ q_{2n,0}]^T$   (2.10)

where $q_0$ contains the modal initial conditions and $\Psi$ is denoted the modal matrix.
The following orthogonality relations of the state matrices A and B can be shown using
the symmetry relations of the state matrices only. These relations are introduced in order
to derive the response of the system

$\Psi^T A\Psi = m, \qquad m = \mathrm{diag}([m_1\ m_2\ \dots\ m_{2n}])$   (2.11)

$\Psi^T B\Psi = -\Lambda m, \qquad -\Lambda m = \mathrm{diag}(-[\lambda_1 m_1\ \lambda_2 m_2\ \dots\ \lambda_{2n}m_{2n}])$   (2.12)

$\Lambda = \mathrm{diag}([\lambda_1\ \lambda_2\ \dots\ \lambda_{2n}])$   (2.13)

The constants $m_i$ are denoted modal masses. The modal initial conditions, $q_0$, can be
calculated from eq. (2.9) by combining the initial conditions from eq. (2.2) and the
orthogonality condition in eq. (2.11).

$q_0 = m^{-1}\Psi^T A z_0$   (2.14)

The displacement, velocity and acceleration response of the lumped mass parameter system
to initial conditions become

$x(t) = \Phi e^{\Lambda t}q_0$   (2.15)

$\dot{x}(t) = \Phi e^{\Lambda t}\Lambda q_0$   (2.16)

$\ddot{x}(t) = \Phi e^{\Lambda t}\Lambda^2 q_0$   (2.17)

As seen, the difference between the free decay displacement response, velocity response
and acceleration response is only a complex scaling factor in the form of the eigenvalue matrix,
$\Lambda$. This scaling factor changes the amplitude and the phase of the exponentially damped
sinusoidal free decay response. The relations are important, since the modal parameters
can be extracted from all free decay responses in eqs. (2.15) - (2.17) using the same
algorithm, which will be illustrated in section 2.2.
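As an illustration of eqs. (2.2) - (2.8), the following minimal sketch (not part of the original work; the 2 DOF mass, damping and stiffness values are assumed purely for illustration) assembles the state matrices, solves the eigenvalue problem of eq. (2.6) numerically and reads off eigenfrequencies, damping ratios and mode shapes:

    # Minimal sketch (illustrative, not thesis code) of eqs. (2.2)-(2.8) for an
    # assumed 2 DOF system.
    import numpy as np
    from scipy.linalg import eig

    # Hypothetical system matrices (kg, N/(m/s), N/m), chosen only for illustration.
    M = np.diag([2.0, 1.0])
    K = np.array([[3000.0, -1000.0], [-1000.0, 1000.0]])
    C = 0.001 * K + 0.2 * M            # light proportional damping, assumed

    n = M.shape[0]
    A = np.block([[C, M], [M, np.zeros((n, n))]])                       # eq. (2.3)
    B = np.block([[K, np.zeros((n, n))], [np.zeros((n, n)), -M]])

    # (lambda*A + B)*psi = 0  ->  generalized eigenvalue problem -B*psi = lambda*A*psi
    lam, psi = eig(-B, A)

    omega = np.abs(lam)                # undamped cyclic eigenfrequencies, cf. eq. (2.7)
    zeta = -lam.real / np.abs(lam)     # modal damping ratios
    phi = psi[:n, :]                   # mode shapes, phi = D*psi, eq. (2.8)

    for i in np.argsort(omega):
        if lam[i].imag > 0:            # report one eigenvalue of each conjugate pair
            print(f"f = {omega[i]/(2*np.pi):.3f} Hz, zeta = {zeta[i]:.4f}")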
2.1.2 Forced Vibration
The forced vibration of a linear lumped parameter system is calculated using a reformulation
of the state response by inserting the modal matrix

$z(t) = \Psi q(t)$   (2.18)

where q(t) is defined equivalently to $q_0$ in eq. (2.10)

$q(t) = [q_1(t)\ q_2(t)\ \dots\ q_{2n}(t)]^T$   (2.19)

Inserting the above relation in eq. (2.2), pre-multiplying by $\Psi^T$ and
using the orthogonality relations in eqs. (2.11) - (2.12) yields

$\dot{q}(t) - \Lambda q(t) = m^{-1}\Psi^T F(t)$   (2.20)

The solution to these decoupled differential equations is given by the convolution integral
and the initial condition

$q(t) = \int_{-\infty}^{t} e^{\Lambda(t-\tau)}m^{-1}\Psi^T F(\tau)\,d\tau + e^{\Lambda t}q_0$   (2.21)

Using eq. (2.18) the response of the system becomes

$z(t) = \int_{-\infty}^{t} h(t-\tau)F(\tau)\,d\tau + \Psi e^{\Lambda t}q_0$   (2.22)

where the Impulse Response Matrix (IRM) has been defined as

$h(t) = \Psi e^{\Lambda t}m^{-1}\Psi^T,\ t \geq 0; \qquad h(t) = 0,\ t < 0$   (2.23)

Equation (2.22) is transformed into the frequency domain using the Fourier transformation.
It is assumed that the response due to the non-zero initial conditions can be neglected

$z(\omega) = H(\omega)F(\omega)$   (2.24)

where

$z(\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} z(t)e^{-i\omega t}\,dt$   (2.25)

$F(\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} F(t)e^{-i\omega t}\,dt$   (2.26)

$H(\omega) = \int_{-\infty}^{\infty} h(t)e^{-i\omega t}\,dt = \int_{0}^{\infty} h(t)e^{-i\omega t}\,dt$   (2.27)

$H(\omega) = \Psi m^{-1}(i\omega I - \Lambda)^{-1}\Psi^T$   (2.28)
The matrix, $H(\omega)$, is denoted the Frequency Response Matrix (FRM). The relation in eq.
(2.24) is important, since it establishes a simple way of estimating the FRM of the system
if the response and the load are measured and transformed into the frequency domain. As
seen, the FRM contains exactly the same information in the form of modal parameters
as the IRF. This relation is used as a basis for the RD technique applied to measured
response and load. This issue is investigated in chapter 6. The modal parameters can be
extracted from the FRM or the IRM. The IRM and FRM are constructed so that it is
only necessary to know a row or a column in order to extract modal parameters.
The IRM and FRM transferring the load, f(t), into the response x(t) are given directly
from the IRM and FRM in eqs. (2.23) and (2.28)

$h(t) = \Phi e^{\Lambda t}m^{-1}\Phi^T,\ t \geq 0; \qquad h(t) = 0,\ t < 0; \qquad h(t) = h^T(t)$   (2.29)

$H(\omega) = \Phi(i\omega I - \Lambda)^{-1}m^{-1}\Phi^T, \qquad H(\omega) = H^T(\omega)$   (2.30)
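A corresponding sketch (again with assumed matrices, not taken from the thesis) evaluates the FRM of eq. (2.30) by modal superposition, with the modal masses obtained from the orthogonality relation in eq. (2.11):

    # Sketch (illustrative, assumed system) of the FRM by modal superposition, eq. (2.30).
    import numpy as np
    from scipy.linalg import eig

    def modal_frm(omegas, M, C, K):
        """Return H(w) of shape (len(omegas), n, n) assembled mode by mode."""
        n = M.shape[0]
        A = np.block([[C, M], [M, np.zeros((n, n))]])
        B = np.block([[K, np.zeros((n, n))], [np.zeros((n, n)), -M]])
        lam, psi = eig(-B, A)
        m = np.diag(psi.T @ A @ psi)          # modal masses, eq. (2.11)
        phi = psi[:n, :]                      # mode shapes, eq. (2.8)
        H = np.zeros((len(omegas), n, n), dtype=complex)
        for k, w in enumerate(omegas):
            # H(w) = Phi (i w I - Lambda)^-1 m^-1 Phi^T with diagonal inverses
            H[k] = phi @ np.diag(1.0 / ((1j * w - lam) * m)) @ phi.T
        return H

    # Example use with the hypothetical 2 DOF system from the previous sketch:
    M = np.diag([2.0, 1.0])
    K = np.array([[3000.0, -1000.0], [-1000.0, 1000.0]])
    C = 0.001 * K + 0.2 * M
    H = modal_frm(np.linspace(0.1, 60.0, 500), M, C, K)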

Until now it has been assumed that the response of all masses in the lumped parameter
system is measured or observed. Usually the number of modes is higher than the number
of known responses. This can be modelled by changing the observation equation. If e.g.
a system with 2n degrees of freedom is measured at m locations the observation equation
becomes

$x(t) = Dz(t), \qquad D = [I\ \ 0\ \ 0]$   (2.31)

where the identity matrix has the dimensions $m \times m$, the first zero matrix has the
dimensions $m \times (n - m)$ and the second zero matrix has the dimensions $m \times n$. The relation
between the modal matrices $\Phi$ and $\Psi$ still holds

$\Phi = D\Psi$   (2.32)
Equations (2.15) - (2.17), (2.23) and (2.27) all contain information about the modal para-
meters. This means that if any of these functions/matrices are known the modal parame-
ters can be extracted. In the next section it is described how modal parameters can be
extracted from free decays using eqs. (2.15) - (2.17) in practice. The motivation is that
these methods can be used to extract modal parameters from the RD functions. This
relation will be shown in section 2.3.

2.2 Identification of Modal Parameters From Free Decays


This section introduces the principles behind two different algorithms for extracting modal
parameters from free decays. Several different algorithms have been developed. In Fladung
et al. [8] the different algorithms are reviewed and compared. In this thesis two different
algorithms, the Ibrahim Time Domain (ITD) technique, see e.g. Ibrahim [9], and the
Polyreference Time Domain (PTD) technique, see e.g. Vold et al. [10], are implemented
and applied. In the following, the algorithms are described in general terms and the
practical use of these algorithms for identification of the modal parameters from free decays
is explained. It is not the intention to present a detailed mathematical derivation of the
algorithms. The interested reader is referred to the original papers, where the algorithms
have been introduced. The aim is to present the philosophy behind the implementation
and the application of these algorithms.
Figure 2.2 shows a diagram for extraction of modal parameters from free decays using
ITD or PTD.

Figure 2.2: Diagram for extracting modal parameters from free decay measurements.
The first step is to rearrange the measurements to obtain an overdetermined system for
estimation of the $\Gamma$-matrix. The eigenvalues of the $\Gamma$-matrix are directly related to the
eigenvalues of the continuous-time system, see eq. (2.7). The $\Gamma$-matrix is defined and the
procedure is shown in sections 2.2.1 and 2.2.2. In section 2.2.3 it is discussed how the noise
present in the free decay measurements is modelled and in section 2.2.4 the extraction of
eigenfrequencies and damping ratios from the $\Gamma$-matrix is shown. Section 2.2.5 describes
how the mode shapes are estimated from the measurements by introduction of the Modal
Participation Factors (MPF). Sections 2.2.6 and 2.2.7 describe the practical application
and the methods used to separate noise modes from physical modes.
2.2.1 General Equations
The difference between the two algorithms is that the ITD technique has its starting
point in eqs. (2.15) - (2.17) and the PTD technique has its starting point in eq. (2.23).
However, for convenience the following description has its starting point in eqs. (2.15) -
(2.17), but this choice has no influence on the principles. Common to both algorithms is
that the measured free decays are assumed to be instantaneously sampled at equidistant
time points. The interval between the sampling points is denoted the sampling period,
T. If the response of a structure is sampled simultaneously at n different channels in N
time points, the measured response, x, is assumed to be of the following form

$x(kT) = \Phi e^{\Lambda kT}q_0 = \Phi\Gamma^k q_0, \qquad k = 0, 1, 2, \dots, N-1$   (2.33)

$\dot{x}(kT) = \Phi e^{\Lambda kT}\Lambda q_0 = \Phi\Gamma^k\Lambda q_0, \qquad k = 0, 1, 2, \dots, N-1$   (2.34)

$\ddot{x}(kT) = \Phi e^{\Lambda kT}\Lambda^2 q_0 = \Phi\Gamma^k\Lambda^2 q_0, \qquad k = 0, 1, 2, \dots, N-1$   (2.35)

where the matrix $\Gamma$ has been introduced

$\Gamma = e^{\Lambda T} = \mathrm{diag}([e^{\lambda_1 T}\ e^{\lambda_2 T}\ \dots\ e^{\lambda_{2n}T}])$   (2.36)
In order to model a system where the number of modes, 2m, differs from the number of
known responses, n, the size of the modal matrix, $\Phi$, is given by the size of the observation
matrix D in the observation equation, see eqs. (2.2) and (2.32). However, eqs. (2.33) -
(2.35) can be rewritten in order to obtain a relation for the responses l time points later

$x((k+l)T) = \Phi\Gamma^l\Gamma^k q_0$   (2.37)

$\dot{x}((k+l)T) = \Phi\Gamma^l\Gamma^k\Lambda q_0$   (2.38)

$\ddot{x}((k+l)T) = \Phi\Gamma^l\Gamma^k\Lambda^2 q_0$   (2.39)

In general the above relations form the basis of the ITD technique and the PTD technique.
They illustrate that the present free decay response can be expressed as a function of past
free decay response using the time difference, lT, and the $\Gamma$-matrix, which contains information
about the frequencies and damping ratios of the modes. This relation is valid no
matter whether the displacement, velocity or acceleration responses are measured. The difference
in form between the three responses is simply interpreted as another set of initial conditions.
This makes the algorithms versatile.
In the formulation of the ITD and PTD algorithms on the basis of eqs. (2.37) - (2.39),
different demands on the ratio between the number of measurement points and the number
of modes exist. This is a clear limitation of these techniques, since the number of modes
is dependent on the number of measurement points. In order to lift this restriction the
concept of pseudo measurements was introduced, see Ibrahim [9].

2.2.2 Pseudo Measurements


The idea is to create pseudo measurements by time delaying a number of true measurements
in order to increase the total number of measurements. Using this approach the
number of modes which it is possible to use in the model becomes totally independent
of the actual number of measurements.
Consider an m DOF system, where the free decay response has been measured at $n_1$ points
and $n_1 + n_2 = 2m$. If e.g. the displacement response is measured, eq. (2.33) becomes

$x(kT) = \Phi\Gamma^k q_0$   (2.40)

with the dimensions $(n_1 \times 1) = (n_1 \times 2m)(2m \times 2m)(2m \times 1)$.
A second measurement vector, denoted the pseudo measurements, is constructed

$x_1((k+l)T) = \Phi_1\Gamma^l\Gamma^k q_0$   (2.41)

with the dimensions $(n_2 \times 1) = (n_2 \times 2m)(2m \times 2m)(2m \times 1)$.
The $x_1$ vector is only the time delayed version of the first $n_2$ points of x. Stacking these
measurements yields an equation where the modal matrix has the same dimensions as the
$\Gamma$-matrix

$\begin{bmatrix} x(kT) \\ x_1((k+l)T) \end{bmatrix} = \begin{bmatrix} \Phi \\ \Phi_1\Gamma^l \end{bmatrix}\Gamma^k q_0 = \tilde{\Phi}\Gamma^k q_0$   (2.42)

where $\tilde{\Phi}$ is $2m \times 2m$, $\Gamma^k$ is $2m \times 2m$ and $q_0$ is $2m \times 1$.
The example illustrates how the concept of pseudo measurements can be used to fulfil any
demand of the algorithms on the ratio between the size of the measurement vector and the
number of modes. It is possible to use any number of modes independently of the actual
number of measurements.
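The stacking operation itself is simple array manipulation. The following sketch (illustrative only; the record length, the delay l and the channel counts are arbitrary assumptions) builds the augmented data block of eq. (2.42) from a set of measured free decays:

    # Sketch (illustrative) of the pseudo-measurement stacking in eq. (2.42).
    import numpy as np

    def stack_pseudo(y, n2, l):
        """y: (n1, N) measured free decays; returns an (n1 + n2, N - l) data block.

        The extra block holds the first n2 channels of y delayed by l samples."""
        n1, N = y.shape
        original = y[:, : N - l]          # x(kT),            k = 0 .. N-l-1
        pseudo = y[:n2, l:]               # x1((k+l)T), first n2 channels shifted by l
        return np.vstack([original, pseudo])

    # Example: two measured channels turned into a four-row block (l = 3 samples).
    y = np.random.randn(2, 1000)          # stand-in for measured free decays
    y_aug = stack_pseudo(y, n2=2, l=3)
    print(y_aug.shape)                    # (4, 997)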

2.2.3 Modelling of Noise


Since any free decay measurement contains noise, the theoretical expression for a free
decay has to be reformulated to include the differences between the mathematical model
and the measurements.

$\hat{x}(kT) = x(kT) + e(kT) = \Phi\Gamma^k q_0 + e(kT)$   (2.43)

where the vector e(kT) is added in order to model the differences between the measurements
and the theoretical predictions. So the process e(kT) consists of measurement
noise, differences arising if the mathematical model is incorrect, etc. This means that the
differences e(kT) should be modelled as a stochastic process. This is not the case for the
ITD and PTD algorithms. The influence of e(t) is minimized by extending the dimensions
of the problem by assuming that a part of the noise behaves exactly as the free decays of
the structure. The principle is that the model in eq. (2.43) is extended by a number of
noise modes

$\hat{x}(kT) = \tilde{\Phi}\tilde{\Gamma}^k\hat{q}_0 = [\Phi\ \ \Phi_n]\begin{bmatrix} \Gamma^k & 0 \\ 0 & \Gamma_n^k \end{bmatrix}\begin{bmatrix} q_0 \\ q_{0,n} \end{bmatrix} = \Phi\Gamma^k q_0 + \Phi_n\Gamma_n^k q_{0,n}$   (2.44)

Subscript n indicates that the modal parameters originate from noise modes. So the
differences at any time step are assumed to be modelled as

$e(kT) = \Phi_n\Gamma_n^k q_{0,n}$   (2.45)
This modelling of noise results in estimation of both physical and computational noise
modes. In practice noise modes which have negative damping ratios and/or do not appear
as complex conjugated pairs are often observed. A method or procedure to separate the
physical modes from the computational modes has to be applied. This issue is discussed
in section 2.2.7. A major problem in the application of the ITD and PTD algorithms is
to choose the number of noise modes.
2.2.4 Extraction of Eigenfrequencies and Damping Ratios
Once the $\Gamma$-matrix has been estimated the eigenvalues can be calculated. These eigenvalues
are denoted the discrete-time eigenvalues, $\mu_i$. The continuous-time eigenvalues are
calculated from the discrete-time eigenvalues and the time difference between the measurements

$\Gamma = \mathrm{diag}([e^{\lambda_1 T}\ e^{\lambda_2 T}\ \dots\ e^{\lambda_{2n}T}]) \;\Rightarrow\; \lambda_i = \frac{\ln(\mu_i)}{T}$   (2.46)

The damping ratios and the eigenfrequencies are extracted from the continuous-time eigenvalues
using eq. (2.7).
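A small sketch of this step (assumed helper code, not the implementation used in the thesis) converts the eigenvalues of an estimated Γ-matrix into eigenfrequencies and damping ratios via eqs. (2.46) and (2.7):

    # Sketch (illustrative): discrete-time eigenvalues -> frequencies and damping ratios.
    import numpy as np

    def modal_from_gamma(Gamma, T):
        """Gamma: estimated system matrix, T: sampling period in seconds."""
        mu = np.linalg.eigvals(Gamma)         # discrete-time eigenvalues
        lam = np.log(mu) / T                  # continuous-time eigenvalues, eq. (2.46)
        omega = np.abs(lam)                   # undamped cyclic eigenfrequencies [rad/s]
        zeta = -lam.real / np.abs(lam)        # damping ratios, from eq. (2.7)
        return lam, omega / (2.0 * np.pi), zeta

    # Example with a Gamma-matrix built from one known conjugate pair
    # (approximately 1 Hz, 2 % damping; values assumed):
    T = 0.01
    lam_true = np.array([-0.13 + 6.28j, -0.13 - 6.28j])
    Gamma = np.diag(np.exp(lam_true * T))
    print(modal_from_gamma(Gamma, T))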
2.2.5 Modal Participation Factors
The mode shapes of the structure are calculated from eqs. (2.37) - (2.39) using regression.
The approach is to reformulate the equations and use the already estimated eigenvalues.
For example, for the displacement response

$x((k+l)T) = \Phi\Gamma^{k+l}q_0 = \Phi\tilde{q}_0\,\tilde{\Gamma}((k+l)T)$   (2.47)

where

$\tilde{q}_0 = \mathrm{diag}([q_{1,0}\ q_{2,0}\ \dots\ q_{2n,0}]), \qquad \tilde{\Gamma}(t) = [e^{\lambda_1 t}\ e^{\lambda_2 t}\ \dots\ e^{\lambda_{2n}t}]^T$   (2.48)

and it has been utilized that $\Gamma$ is a diagonal matrix. Equation (2.47) constitutes a basis
for the determination of the MPFs using regression, since both the measurements on the
left-hand side and the eigenvalue vector, $\tilde{\Gamma}$, on the right-hand side are known. The MPFs
are defined as follows for the displacement, velocity and acceleration response, respectively

$\mathrm{MPF}_x = \Phi\tilde{q}_0$   (2.49)

$\mathrm{MPF}_{\dot{x}} = \Phi\Lambda\tilde{q}_0$   (2.50)

$\mathrm{MPF}_{\ddot{x}} = \Phi\Lambda^2\tilde{q}_0$   (2.51)

As seen, each mode has an MPF value at each measurement location. The mode shapes
are extracted from the MPFs using column-wise normalization, since $\Lambda$ and $\tilde{q}_0$ only
represent a scaling of the mode shapes. The absolute value of the MPFs is not only a function of
the mode shapes, but also of the modal initial conditions and the eigenvalues.
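The regression in eq. (2.47) reduces to a linear least-squares problem once the eigenvalues are known. The following sketch (illustrative; the test signal, sampling period and eigenvalues are assumed) estimates the MPFs for a set of free decay channels:

    # Sketch (illustrative) of the MPF regression behind eq. (2.47)/(2.49).
    import numpy as np

    def modal_participation(y, lam, T):
        """y: (n_channels, N) free decays, lam: (2m,) eigenvalues, T: sampling period.

        Returns MPFs of shape (n_channels, 2m); column-wise normalization gives shapes."""
        N = y.shape[1]
        k = np.arange(N)
        E = np.exp(np.outer(k * T, lam))           # N x 2m matrix of e^(lambda_j k T)
        mpf, *_ = np.linalg.lstsq(E, y.T, rcond=None)
        return mpf.T                               # n_channels x 2m

    # Example: a noisy single-channel decay with one conjugate eigenvalue pair (assumed).
    T, N = 0.01, 500
    lam = np.array([-0.13 + 6.28j, -0.13 - 6.28j])
    y = np.real(np.exp(lam[0] * np.arange(N) * T))[None, :] + 0.01 * np.random.randn(1, N)
    print(modal_participation(y, lam, T))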
2.2.6 Practical Application
In the application of the algorithms for extraction of modal parameters from free decays
several questions must always be answered: How many modes do the measurements con-
tain? How many noise modes should be added in order to model the noise? How many
points from the measured free decays should be used? In order to investigate the influence
of the chosen number of modes and the number of points from the free decay measurements,
different combinations of the number of modes and the number of points are used
in the modal parameter identification algorithms. The following flow chart illustrates the
approach using different models.
- Loop 1 (i):
- Number of modes = function(i)
  - Loop 2 (j):
  - Number of points = function(j)
  - Calculate modal parameters (function(i), function(j))
  - End Loop 2 (j)
- End Loop 1 (i).
Using a stabilization diagram, which is a plot of the estimated frequencies versus the
identification number, the structural modes can be detected and a final appropriate identification
number (number of points and number of modes) can be found. A structural
mode should be represented in all identifications, so a trend will be visible at the structural
modes.
If it is necessary to use many modes it is not always sufficient to use a stabilization diagram
in order to detect structural modes. Therefore, different methods to separate noise modes
from physical modes are applied.
2.2.7 Separation of Noise Modes and Physical Modes
Noise modes are distinguished from physical modes by a combined application of five
different approaches:
- Complex conjugated pairs.
- Damping ratios.
- Modal Participation Factors (MPFs).
- Modal Confidence Factors (MCFs).
- Modal Assurance Criteria (MAC).
The first four approaches are most frequently applied since they can separate noise modes
from physical modes from a single model, whereas the MAC uses comparison of mode
shapes of different models. In the following the different methods are described.
Complex conjugated pairs
As described previously, structural modes are usually critically underdamped so they
appear as complex conjugated pairs. If any eigenvalue does not have a complex conjugate
it is interpreted as a noise mode.
Damping ratios
Usually, structural modes have low damping. This information can be used to separate
noise modes from structural modes. Modes with e.g. $\zeta > 0.05$ or $\zeta > 0.1$ can be characterized
as noise modes.
Modal Participation Factors
The MPFs defined in eqs. (2.49) - (2.51) can also be used to separate noise modes from
physical modes. If a mode has a low MPF at all measurement points, it indicates that
the mode is a noise mode. Physical modes can have low MPFs if the mode shape has a
low amplitude, but not at all measurement points. The MPFs should, however, be used
carefully in connection with measurements having a low signal-to-noise ratio. In such a
situation the noise modes might generally have as high MPFs as the structural modes.
Modal Confidence Factors
Perhaps the most efficient approach to distinguish between noise modes and physical modes
is to use the MCFs. The philosophy behind this approach was presented in Ibrahim [11]
and extended to the PTD technique by Vold et al. [12]. Assume that the eigenvalues have
been estimated. In order to calculate the MCFs the measurements are time delayed and
stacked

$\begin{bmatrix} x(kT) \\ x((k+l)T) \end{bmatrix} = \begin{bmatrix} \Phi \\ \Phi\Gamma^l \end{bmatrix}\Gamma^k q_0 = \begin{bmatrix} \Phi \\ \Phi_l \end{bmatrix}\Gamma^k q_0$   (2.52)

The eigenvalue matrix $\Gamma$ is known, so the mode shape matrix can be calculated using
regression as described in section 2.2.5. The modal initial conditions only involve a column-wise
scaling of the mode shapes. The MCF of the ith component of the jth mode shape
is calculated as

$\mathrm{MCF}_{i,j} = \frac{\hat{\phi}_{i,j}\,\Gamma^l_{j,j}}{\hat{\phi}^l_{i,j}}; \qquad \text{if}\ |\mathrm{MCF}_{i,j}| > 1,\ \text{then}\ \mathrm{MCF}_{i,j} = \frac{1}{\mathrm{MCF}_{i,j}}$   (2.53)

where $\hat{\phi}_{i,j}$ and $\hat{\phi}^l_{i,j}$ are the estimates of the (i, j) elements of $\Phi$ and $\Phi_l$, respectively, and
$\Gamma^l_{j,j} = e^{\lambda_j lT}$.
Theoretically all the MCFs should be unity. In practice the MCFs will be approximately
unity for structural modes and lower for non-structural modes. If the magnitude of an
MCF is higher than unity, the reciprocal value is used. In general the MCFs are complex
numbers, so a phase and a magnitude, which should be zero and unity, respectively, are
defined and used.
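The following sketch (illustrative only, using the notation of eq. (2.53); the numerical values are assumed) computes MCF magnitudes and phases from two mode shape estimates and the corresponding eigenvalues:

    # Sketch (illustrative) of the Modal Confidence Factors of eq. (2.53).
    import numpy as np

    def mcf(phi, phi_l, lam, l, T):
        """phi, phi_l: shape estimates from the original and the l-step delayed block,
        lam: estimated eigenvalues, T: sampling period.
        Returns MCF magnitudes and phases (degrees), one per shape component."""
        ratio = phi * np.exp(lam[None, :] * l * T) / phi_l      # eq. (2.53)
        mag = np.abs(ratio)
        mag = np.where(mag > 1.0, 1.0 / mag, mag)               # use reciprocal if > 1
        phase = np.degrees(np.angle(ratio))
        return mag, phase

    # Example: a perfectly consistent pair of estimates gives magnitude 1, phase 0.
    lam = np.array([-0.13 + 6.28j, -0.13 - 6.28j])
    phi = np.array([[1.0 + 0.1j, 1.0 - 0.1j], [0.7, 0.7]])
    phi_l = phi * np.exp(lam[None, :] * 5 * 0.01)
    print(mcf(phi, phi_l, lam, l=5, T=0.01))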
In a practical situation the following restrictions could be applied in order to characterize
a mode as a structural mode.
- The eigenvalue of the mode should have a complex conjugate.
- The damping ratio should be below 10 %.
- The MCF magnitude should be above 90 %.
- The MCF phase should be below 10°.
Notice that the numbers are a result of the experience obtained by analysing the structures
described in this thesis and can only be considered as guidelines. Especially the MCFs
are capable of separating noise and structural modes. The criterion applied
to the damping ratios should be chosen based on experience with the structure at hand.
To separate the physical modes from the noise modes the approaches described above
are applied first. Then a stabilization diagram is used. Such a diagram is a simple plot
of the estimated frequencies versus the model number. A structural mode should not
depend on the model chosen (the number of modes and the number of points used from
the free decays), so a trend is visible at the frequency of a structural mode. Stabilization
diagrams are usually a very efficient method in combination with the other methods to
extract the structural modes. Stabilization diagrams also give an indication of the optimal
choice of model structure.
Modal Assurance Criteria
The last-mentioned approach used for selecting structural modes is the Modal Assurance
Criteria (MAC). This is a correlation coefficient between two different mode shapes

$\mathrm{MAC} = \frac{|\phi_j^T\phi_i|^2}{|\phi_j|^2|\phi_i|^2}$   (2.54)

The idea is that the mode shape of a structural mode should not change significantly with
a small change in the model structure. A noise mode may change with just a slight change
in the model structure, so noise modes will have a low correlation. The MAC can also be
used to compare two mode shapes estimated from two different approaches such as the
RD technique or a technique based on the FFT algorithm. This option will be used later
in this thesis.
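A short sketch of eq. (2.54) is given below; it uses the Hermitian (conjugate transpose) inner product, which is the variant commonly applied to complex mode shapes and reduces to eq. (2.54) for real shapes:

    # Sketch (standard MAC formula, not tied to the thesis implementation).
    import numpy as np

    def mac(phi_j, phi_i):
        """MAC between two (possibly complex) mode shape vectors."""
        num = np.abs(phi_j.conj() @ phi_i) ** 2
        return float(num / ((phi_j.conj() @ phi_j).real * (phi_i.conj() @ phi_i).real))

    # Example: identical shapes give MAC = 1, orthogonal shapes give MAC = 0.
    a = np.array([1.0, 0.5 + 0.1j, -0.3])
    print(mac(a, a), mac(np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])))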

2.3 Structures Loaded by Gaussian White-Noise


The purpose of this section is to describe the modelling of the loads on the linear lumped
mass system. The concept of correlation functions is also defined. This makes it possible
to derive the correlation functions of the response of the linear lumped mass parameter
system subjected to the loads. It is shown that these correlation functions are constructed
corresponding to the free decays of the system. This means that on the given assump-
tions of the load and the structure, the modal parameters can be extracted without any
knowledge of the realization of the load. Only the response has to be measured.
2.3.1 Correlation Functions
Consider a stationary stochastic vector process X(t). The correlation functions of the
vector process are defined as

$R_{XX}(\tau) = E[X(t+\tau)X^T(t)]$   (2.55)

It will be assumed that the vector process has zero mean value vector. This implies that
there is no difference between the correlation functions and the covariance functions

$V_{XX}(\tau) = R_{XX}(\tau) - E[X(t)]E[X(t)]^T = R_{XX}(\tau)$   (2.56)

The term correlation function is preferred. For stationary processes the following symmetry
relation is valid

$R_{XX}(\tau) = R_{XX}^T(-\tau)$   (2.57)

The correlation functions of the time derivatives of the process X(t) can be calculated
from the correlation functions of X(t) using the following differential rules

$R_{\dot{X}\dot{X}}(\tau) = -\frac{d^2 R_{XX}(\tau)}{d\tau^2}, \qquad R_{\ddot{X}\ddot{X}}(\tau) = \frac{d^4 R_{XX}(\tau)}{d\tau^4}$   (2.58)

The spectral densities are defined by the Wiener-Khintchine relations. These relations link
the correlation functions and the power spectral densities using Fourier transformation.
The spectral densities $S_{XX}(\omega)$ can be calculated from the correlation functions by

$S_{XX}(\omega) = \frac{1}{2\pi}\int_{-\infty}^{\infty} R_{XX}(\tau)e^{-i\omega\tau}\,d\tau$   (2.59)

$R_{XX}(\tau) = \int_{-\infty}^{\infty} S_{XX}(\omega)e^{i\omega\tau}\,d\omega$   (2.60)

The definition of spectral densities using eq. (2.59) and the definition via finite Fourier
transforms presented in chapter 1 are equivalent, see e.g. Bendat & Piersol [3].
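A hedged numerical sketch of eq. (2.59) is given below: a sampled one-sided estimate of an auto correlation function is extended by the symmetry relation (2.57) and transformed with the FFT. The discrete scaling is only indicative, since conventions for approximating the integral vary:

    # Sketch (illustrative): discrete approximation of eq. (2.59) for an auto correlation.
    import numpy as np

    def spectral_density(R, dt):
        """R: R_XX(k*dt) for k = 0..K-1 (auto correlation), dt: sampling period."""
        # two-sided circular sequence using R(-tau) = R(tau) for a scalar process
        R_two_sided = np.concatenate([R, R[-2:0:-1]])
        S = np.fft.fft(R_two_sided) * dt / (2.0 * np.pi)    # approximates eq. (2.59)
        omega = 2.0 * np.pi * np.fft.fftfreq(R_two_sided.size, d=dt)
        return omega, S.real

    # Example: exponentially damped cosine correlation function of an SDOF-like response.
    dt = 0.01
    tau = np.arange(0, 10, dt)
    R = np.exp(-0.2 * tau) * np.cos(2.0 * np.pi * tau)
    omega, S = spectral_density(R, dt)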
2.3.2 Load Modelling
The load which excites the lumped mass parameter system is assumed to be a stationary
zero mean Gaussian distributed vector process, see eq. (A.1). In order to generalize the
load process it is assumed that the load process can be described as a white noise vector
process passed through a linear shaping filter. This is an extension of the traditional white
noise assumption. The idea behind this approach and the theory are presented in Ibrahim
et al. [13], where the filter is referred to as a pseudo-force filter. The principle is shown
in figure 2.3.

Figure 2.3: Outline diagram for modelling of loads using a shaping filter. $h_F(t)$ and $H_F(\omega)$
are the IRM and FRM of the filter, respectively.

W is a Gaussian white noise process having the following statistical relations

$E[W(t)] = 0$   (2.61)

$E[W(t+\tau)W^T(t)] = R_{WW}\,\delta(\tau)$   (2.62)

$S_{WW}(\omega) = \frac{1}{2\pi}R_{WW}$   (2.63)

The FRM and IRM of the shaping filter are assumed to be given by eqs. (2.64) - (2.65).
Subscript F indicates that the matrices describe the filter characteristics

$H_F(\omega) = \Phi_F(i\omega I - \Lambda_F)^{-1}m_F^{-1}\Phi_F^T, \qquad H_F(\omega) = H_F^T(\omega)$   (2.64)

$h_F(t) = \Phi_F e^{\Lambda_F t}m_F^{-1}\Phi_F^T, \qquad h_F(t) = h_F^T(t)$   (2.65)

where $m_F$ is an $m \times m$ normalization matrix corresponding to the modal masses in eq. (2.11),
$\Phi_F$ is an $n \times m$ matrix containing the modal vectors corresponding to eq. (2.8) and $\Lambda_F$
is an $m \times m$ diagonal matrix containing the eigenvalues. The real part of all eigenvalues
is negative. This ensures that the resulting force is stationary. The force exciting the
structure is modelled by

$f(\omega) = H_F(\omega)W(\omega)$   (2.66)

$f(t) = \int_{-\infty}^{t} h_F(t-\tau)W(\tau)\,d\tau$   (2.67)

In order to illustrate the significance of a shaping filter, the auto spectral density and
auto correlation function of the white noise process and of the resulting force obtained by
filtering the Gaussian white noise process are given in fig. 2.4.
[Figure 2.4 consists of four plots: the auto spectral density (in N²/s, versus ω) and the
auto correlation function (in N², versus τ) of the white noise excitation and of the
resulting filtered excitation.]
Figure 2.4: The effect of modelling the excitation as a filtered white noise process, where
the filter has an SDOF.
Since W(t) is Gaussian distributed it also follows that the load applied to the structure,
f(t), is Gaussian. The filter performs linear operations on the white noise process W(t),
so the distribution of the force is still Gaussian. This is important, since it follows from
an equivalent argument that the response will also be Gaussian. The effect of applying
a pseudo-force filter is that the filter characteristics are identified together with the
characteristics of the structure.
The number of output channels from the filter should be equal to the number of modes of
the structural system, n. In order to make the filter versatile it will be assumed that the
filter can have more modes than output and input channels, m > n.
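The following sketch illustrates the shaping filter idea numerically (it is not the pseudo-force filter of the thesis; the band-pass filter, sampling rate and record length are arbitrary assumptions): Gaussian white noise is passed through a linear filter, which colours its spectrum while the output remains Gaussian:

    # Sketch (illustrative): white noise coloured by an assumed linear shaping filter.
    import numpy as np
    from scipy import signal

    rng = np.random.default_rng(0)
    fs, duration = 100.0, 600.0                       # assumed sampling rate and length
    w = rng.standard_normal(int(fs * duration))       # discrete Gaussian white noise

    # second-order band-pass around 1-5 Hz acting as the shaping filter (assumed)
    b, a = signal.butter(2, [1.0, 5.0], btype="bandpass", fs=fs)
    f = signal.lfilter(b, a, w)                       # filtered (coloured) excitation

    # the coloured spectrum of f replaces the flat spectrum of w, cf. fig. 2.4
    freq, Sww = signal.welch(w, fs=fs, nperseg=1024)
    freq, Sff = signal.welch(f, fs=fs, nperseg=1024)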
2.3.3 Correlation Functions of the Response
The response of a lumped parameter system with 2n DOF measured at n points to the
load described in section 2.3.2 is calculated using eq. (2.24)

$X(\omega) = H(\omega)f(\omega) = H(\omega)H_F(\omega)W(\omega) = H_c(\omega)W(\omega)$   (2.68)

In order to calculate the response the FRM of the combined system, $H_c(\omega)$, consisting of
the structure and the shaping filter is defined as

$H_c(\omega) = H(\omega)H_F(\omega)$
$\quad = \Phi m^{-1}(i\omega I - \Lambda)^{-1}\Phi^T\,\Phi_F m_F^{-1}(i\omega I - \Lambda_F)^{-1}\Phi_F^T$
$\quad = \sum_{j=1}^{2n}\frac{\phi_j\phi_j^T}{m_j(i\omega-\lambda_j)}\sum_{l=1}^{m}\frac{\phi_{Fl}\phi_{Fl}^T}{m_{Fl}(i\omega-\lambda_{Fl})} = \sum_{j=1}^{2n}\frac{B_j}{(i\omega-\lambda_j)}\sum_{l=1}^{m}\frac{B_{Fl}}{(i\omega-\lambda_{Fl})}$
$\quad = \sum_{j=1}^{2n}\sum_{l=1}^{m}\left[\frac{B_j B_{Fl}}{(i\omega-\lambda_j)(\lambda_j-\lambda_{Fl})} + \frac{B_j B_{Fl}}{(\lambda_{Fl}-\lambda_j)(i\omega-\lambda_{Fl})}\right]$
$\quad = \sum_{j=1}^{2n}\frac{B_j a_j}{(i\omega-\lambda_j)} + \sum_{l=1}^{m}\frac{b_l B_{Fl}}{(i\omega-\lambda_{Fl})}$   (2.69)

where the following quantities, $a_j$ and $b_l$, have been introduced

$a_j = \sum_{l=1}^{m}\frac{B_{Fl}}{(\lambda_j-\lambda_{Fl})}$   (2.70)

$b_l = \sum_{j=1}^{2n}\frac{B_j}{(\lambda_{Fl}-\lambda_j)}$   (2.71)

The last term in eq. (2.69) shows that all eigenvalues of the shaping filter and the structural
system are uniquely preserved. Furthermore, the mode shapes of the structural system
can be reconstructed from the columns of the FRM, and the mode shapes of the filter can
be reconstructed from the rows of the FRM. The above proof was first given in Ibrahim
et al. [13].
Using matrix notation the FRM and the IRM of the combined system become

$H_c(\omega) = \tilde{\Phi}\tilde{m}^{-1}(i\omega I - \tilde{\Lambda})^{-1}\tilde{b}^T = \sum_{j=1}^{2n+m}\frac{\tilde{\phi}_j\tilde{b}_j^T}{\tilde{m}_j(i\omega - \tilde{\lambda}_j)}$   (2.72)

$h_c(t) = \tilde{\Phi}\tilde{m}^{-1}e^{\tilde{\Lambda}t}\tilde{b}^T = \sum_{j=1}^{2n+m}\frac{\tilde{\phi}_j\tilde{b}_j^T}{\tilde{m}_j}\,e^{\tilde{\lambda}_j t}$   (2.73)

where the following matrices have been introduced

$\tilde{\Phi} = [\phi_1\ \phi_2\ \dots\ \phi_{2n}\ \ b_1\phi_{F1}\ b_2\phi_{F2}\ \dots\ b_m\phi_{Fm}]$
$\tilde{b} = [a_1^T\phi_1\ \ a_2^T\phi_2\ \dots\ a_{2n}^T\phi_{2n}\ \ \phi_{F1}\ \phi_{F2}\ \dots\ \phi_{Fm}]$   (2.74)

$\tilde{m} = \mathrm{diag}([m_1\ m_2\ \dots\ m_{2n}\ \ m_{F1}\ m_{F2}\ \dots\ m_{Fm}])$   (2.75)

$\tilde{\Lambda} = \mathrm{diag}([\lambda_1\ \lambda_2\ \dots\ \lambda_{2n}\ \ \lambda_{F1}\ \lambda_{F2}\ \dots\ \lambda_{Fm}])$   (2.76)

$e^{\tilde{\Lambda}t} = \mathrm{diag}([e^{\lambda_1 t}\ e^{\lambda_2 t}\ \dots\ e^{\lambda_{2n}t}\ \ e^{\lambda_{F1}t}\ e^{\lambda_{F2}t}\ \dots\ e^{\lambda_{Fm}t}])$   (2.77)

The time response is given by

$X(t) = \int_{-\infty}^{t} h_c(t-\tau)W(\tau)\,d\tau$   (2.78)
Since this is a linear operation on the Gaussian distributed white noise excitation it follows
that the response is Gaussian distributed. The correlation functions of the structural
system loaded by filtered Gaussian white noise can be calculated using the above results

$R_{XX}(\tau) = E[X(t+\tau)X^T(t)]$
$\quad = E\left[\int_{-\infty}^{t+\tau}\int_{-\infty}^{t} h_c(t+\tau-\tau_1)W(\tau_1)W^T(\tau_2)h_c^T(t-\tau_2)\,d\tau_1 d\tau_2\right]$
$\quad = \int_{0}^{\infty} h_c(t+\tau)R_{WW}h_c^T(t)\,dt$
$\quad = \sum_{i=1}^{2n+m}\sum_{j=1}^{2n+m}\frac{\tilde{\phi}_i\tilde{b}_i^T}{\tilde{m}_i}R_{WW}\frac{\tilde{b}_j\tilde{\phi}_j^T}{\tilde{m}_j}\,e^{\tilde{\lambda}_i\tau}\int_{0}^{\infty}e^{(\tilde{\lambda}_i+\tilde{\lambda}_j)t}\,dt$
$\quad = \sum_{i=1}^{2n+m}\frac{\tilde{\phi}_i e^{\tilde{\lambda}_i\tau}\tilde{b}_i^T}{\tilde{m}_i}R_{WW}\sum_{j=1}^{2n+m}\frac{\tilde{b}_j\tilde{\phi}_j^T}{\tilde{m}_j}\,\frac{-1}{\tilde{\lambda}_i+\tilde{\lambda}_j}$
$\quad = \tilde{\Phi}e^{\tilde{\Lambda}\tau}\tilde{m}^{-1}\tilde{c}$   (2.79)

where the matrix $\tilde{c}$ has been introduced

$\tilde{c} = [c_1\ c_2\ \dots\ c_{2n+m}]^T, \qquad c_i = \tilde{b}_i^T R_{WW}\sum_{j=1}^{2n+m}\frac{\tilde{b}_j\tilde{\phi}_j^T}{\tilde{m}_j}\,\frac{-1}{\tilde{\lambda}_i+\tilde{\lambda}_j}$   (2.80)

From the last statement of eq. (2.79) it is seen that any column in the correlation function
matrix can be written as

$R_{XX}^i(\tau) = \tilde{\Phi}e^{\tilde{\Lambda}\tau}\tilde{m}^{-1}\tilde{c}_i$   (2.81)

where $R_{XX}^i$ is an abbreviation of the ith column in the correlation matrix and the scaling
vector $\tilde{c}_i$ is the ith column of the matrix $\tilde{c}$. Equation (2.81) is of exactly the same
form as eq. (2.15), which is the standard equation for a free decay due to some initial
conditions. This means that the mode shape matrix $\tilde{\Phi}$ and the matrix $\tilde{\Lambda}$ containing the
eigenvalues can be extracted from the correlation functions using methods described in
section 2.2. If the velocities or the accelerations of the system are measured instead of the
displacements, the correlation functions can be calculated using eq. (2.81) and the results
of eq. (2.58).

$R_{\dot{X}\dot{X}}^i(\tau) = -\tilde{\Phi}e^{\tilde{\Lambda}\tau}\tilde{\Lambda}^2\tilde{m}^{-1}\tilde{c}_i$   (2.82)

$R_{\ddot{X}\ddot{X}}^i(\tau) = \tilde{\Phi}e^{\tilde{\Lambda}\tau}\tilde{\Lambda}^4\tilde{m}^{-1}\tilde{c}_i$   (2.83)

Equations (2.81) - (2.83) correspond exactly to eqs. (2.15) - (2.17), so in the modal
parameter extraction procedure there is no reason to distinguish between the correlation
functions of the displacements, velocities or accelerations of the structure.

2.4 Summary
The lumped mass parameter model has been introduced and the modal parameters of this
model have been defined in section 2.1. The response of this system to initial conditions
and to arbitrary loads has been derived. Thereby the impulse response matrix and the frequency
response matrix have been defined. In section 2.2 the process of extracting modal
parameters from free decays has been discussed. The basic principles of the two algorithms,
the Ibrahim Time Domain and the Polyreference Time Domain, which have been
implemented, are described. Furthermore, the practical applications of these techniques
have been the main issue. In section 2.3 the response of the lumped mass parameter system
subjected to white noise passed through a shaping filter is discussed. This modelling
assures a versatile description of the loads and that the response is Gaussian distributed.
The correlation functions of the response are defined and it is shown that these correlation
functions can be described equivalently to free decays. This means that modal parameters
can be extracted from the correlation functions of the lumped mass parameter system
loaded by filtered white noise using methods developed in connection with free decays.
Since RD functions are interpreted in terms of correlation functions, this allows extraction
of the modal parameters from the RD functions using methods like the Ibrahim Time
Domain or the Polyreference Time Domain. This approach is used throughout this thesis.
The chosen modelling of the loads also ensures that it is not necessary to measure the
loads of a structure in order to extract modal parameters.

Bibliography
[1] Caughey, T.K. & O'Kelley, M.E.J. Classical Normal Modes in Damped Linear Systems.
ASME Journal of Applied Mechanics, Vol. 49, pp. 867-870, 1965.
[2] Nashif, A.D., Jones, D.I.G., Henderson, J.P. Vibration Damping. 1985 John Wiley &
Sons. ISBN 0-471-86772-1.
[3] Bendat, J. & Piersol, A. Random Data - Analysis and Measurement Procedures. John
Wiley & Sons, Inc. ISBN 0-471-04000-2.
[4] Pandit, S.M. Modal and Spectrum Analysis: Data Dependent Systems in State Space.
1991 John Wiley & Sons USA, Inc. ISBN 0-471-63705-X.
[5] Ewins, D.J. Modal Testing: Theory and Practice. 1995 Research Studies Press Ltd,
England. ISBN 0-86380-017-3.
[6] Inman, D.J. Engineering Vibration. 1996 Prentice-Hall USA, Inc. ISBN 0-13-518531-
9.
[7] Wirsching, P.H., Paez, T.L. & Ortiz K. Random Vibrations. Theory and Practice.
1995 John Wiley & Sons, Inc. ISBN 0-471-58579-3.
[8] Fladung, W.J., D.L. Brown & R.J. Allemang. Modal Parameter Estimation - A Unified
Matrix Polynomial Approach. Proc. 12th International Modal Analysis Confer-
ence, Honolulu, Hawaii, USA, 1994.
[9] Ibrahim, S.R. An Upper Hessenberg Sparse Matrix Algorithm for Modal Identification
on Minicomputers. Journal of Sound and Vibration (1987) 113(1) pp. 47-57.
[10] Vold, H., Kundrat, J., Rocklin, G.T. & Russel, R. A Multi-Input Modal Estimation
Algorithm for Minicomputers. SAE Paper No. 820194, 1982.
[11] Ibrahim, S.R. Modal Confidence Factor in Vibration Testing. Journal of Spacecraft,
Sept.-Oct. 1978 Vol. 15, No. 5, pp. 313-316.
[12] Vold, H. & Crowley, J. A Modal Confidence Factor for the Polyreference Time Domain
Technique. Proc. 3rd International Modal Analysis Conference, 1985, pp. 305-310.
[13] Ibrahim, S.R., Brincker, R. & Asmussen, J.C. Modal Parameter Identification From
Responses of General Unknown Random Inputs. Proc. 14th International Modal Ana-
lysis Conference, Dearborn, Michigan, USA, Feb. 12-15, 1996, Vol. I, pp. 446-452.
Chapter 3
The Random Decrement
Technique
The purpose of this chapter is to introduce the RD technique, present the mathematical
background and illustrate the applicability of this technique. It is not the intention that
the mathematical background of the RD technique should be derived in detail. Only the
final results are presented and discussed. The interested reader is referred to appendix
A, where a detailed derivation and resume of the mathematical background of the RD
technique are given.
Section 3.1 defines the RD functions theoretically and illustrates how the RD functions
are estimated. The RD functions are in general defined for stationary processes, but
the estimation of RD functions demands that the processes are assumed to be ergodic.
Thus the RD technique is restricted to deal with ergodic processes. Section 3.2 introduces
the link between the RD functions defined on the applied general triggering condition
and the correlation functions of stationary zero mean Gaussian distributed processes. The
assumptions of stationary zero mean Gaussian distributed processes are sufficient in terms
of describing the RD technique mathematically. It is not necessary to assume anything
about the physical system describing the processes. In spite of this the processes will be
interpreted as the response of a linear lumped mass parameter system loaded by Gaussian
white noise or filtered Gaussian white noise as described in chapter 2.
Sections 3.3 - 3.6 describe the four most well-known triggering conditions. The relation
between the RD functions and the correlation functions is given and approximate formulas
for the variance of the RD functions are presented. In order to illustrate the different
triggering conditions two examples are described in each section. First, a very simple
example using a very short time series is described. The purpose is to illustrate the
estimation process and the different triggering conditions. Secondly, an example based
on the response of a 2DOF system loaded by Gaussian white noise is described in each
section.
Section 3.7 introduces the concept of quality assessment of RD functions. The shape
invariance relation of the RD functions and the symmetry relations for the correlation
functions of stationary processes are used as a basis for quality assessment. Section 3.8
illustrates how the triggering levels should be chosen for the different triggering conditions.
In section 3.9 the RD technique is compared with other approaches for estimation of
correlation functions. The comparison is based on the speed and accuracy of the different
approaches. Several different triggering conditions are considered. The examples illustrate
advantages and disadvantages of the RD technique.

3.1 Definition of Random Decrement Functions


The RD technique is a method which transforms the stochastic processes X(t) and Y(t)
into RD functions. It is assumed that X(t) and Y(t) are stationary processes. The index t
is interpreted as time. The auto RD functions are defined as the mean value of a stochastic
process on condition, T, of the process itself

$D_{XX}(\tau) = E[X(t+\tau)\,|\,T_{X(t)}]$   (3.1)

$D_{YY}(\tau) = E[Y(t+\tau)\,|\,T_{Y(t)}]$   (3.2)

An RD function is referred to as e.g. $D_{YY}(\tau)$. The first subscript refers to the process
from which the mean value is calculated and the second subscript refers to the process
where the condition is fulfilled. The conditions $T_{X(t)}$ and $T_{Y(t)}$ are denoted triggering
conditions. Equivalent to eqs. (3.1) and (3.2), the cross RD functions are defined as the
mean value of a stochastic process on condition of another stochastic process

$D_{XY}(\tau) = E[X(t+\tau)\,|\,T_{Y(t)}]$   (3.3)

$D_{YX}(\tau) = E[Y(t+\tau)\,|\,T_{X(t)}]$   (3.4)

The definitions of the auto RD functions in eqs. (3.1) and (3.2) are immediately obtained
by interchanging the triggering conditions in the definitions of the cross RD functions in
eqs. (3.3) - (3.4).
Example:
Consider a 3 x 1-dimensional stochastic vector process. It could e.g. describe the response at three different points of a vibrating structure. The stochastic process is given by $\mathbf{X}(t) = [X_1(t)\ X_2(t)\ X_3(t)]^T$. From this stochastic process it is possible to define nine RD functions, three auto RD functions and six cross RD functions

$\begin{bmatrix} D_{X_1X_1}(\tau) & D_{X_1X_2}(\tau) & D_{X_1X_3}(\tau) \\ D_{X_2X_1}(\tau) & D_{X_2X_2}(\tau) & D_{X_2X_3}(\tau) \\ D_{X_3X_1}(\tau) & D_{X_3X_2}(\tau) & D_{X_3X_3}(\tau) \end{bmatrix} = \begin{bmatrix} E[X_1(t+\tau) \mid T_{X_1(t)}] & E[X_1(t+\tau) \mid T_{X_2(t)}] & E[X_1(t+\tau) \mid T_{X_3(t)}] \\ E[X_2(t+\tau) \mid T_{X_1(t)}] & E[X_2(t+\tau) \mid T_{X_2(t)}] & E[X_2(t+\tau) \mid T_{X_3(t)}] \\ E[X_3(t+\tau) \mid T_{X_1(t)}] & E[X_3(t+\tau) \mid T_{X_2(t)}] & E[X_3(t+\tau) \mid T_{X_3(t)}] \end{bmatrix}$   (3.5)

A column in eq. (3.5) is denoted an RD setup, and the process in which the triggering condition is fulfilled is denoted the reference measurement or triggering measurement.

In practical applications of the RD technique only a single realization of the stochastic process is available. Or, in other words, usually only a single measurement at each chosen location of a vibrating structure is collected. In order to estimate the conditional mean value correctly from a single observation it is necessary to assume that the stochastic process is not only stationary but also ergodic. In this case the auto RD functions can be estimated as the empirical conditional mean value from a single realization

$\hat{D}_{XX}(\tau) = \frac{1}{N}\sum_{i=1}^{N} x(t_i+\tau) \mid T_{x(t_i)}$   (3.6)

$\hat{D}_{YY}(\tau) = \frac{1}{N}\sum_{i=1}^{N} y(t_i+\tau) \mid T_{y(t_i)}$   (3.7)

where N is the number of points in the process which fulfil the triggering condition and y(t) and x(t) are realizations of X(t) and Y(t). Correspondingly, the cross RD functions are estimated as

$\hat{D}_{XY}(\tau) = \frac{1}{N}\sum_{i=1}^{N} x(t_i+\tau) \mid T_{y(t_i)}$   (3.8)

$\hat{D}_{YX}(\tau) = \frac{1}{N}\sum_{i=1}^{N} y(t_i+\tau) \mid T_{x(t_i)}$   (3.9)

The absolutely decisive variable in the estimation of RD functions is the number of triggering points, N. N has to be large enough to ensure that eqs. (3.6) - (3.9) have converged sufficiently towards eqs. (3.1) - (3.4).
It is important that the estimates of the RD functions in eqs. (3.6) - (3.9) are unbiased, i.e.

$E[\hat{D}_{XY}(\tau)] = \frac{1}{N}\sum_{i=1}^{N} E[x(t_i+\tau) \mid T_{y(t_i)}] = D_{XY}(\tau)$   (3.10)

Until now no restriction or formulation of the triggering condition has been made. Obviously the formulation of the triggering condition controls the actual number of triggering points. This means that the convergence of the estimates in eqs. (3.6) - (3.9) is controlled by the triggering condition and, of course, by the absolute length of the observations of the processes. In the next sections different formulations of the triggering conditions are described.
The definitions of the RD functions in eqs. (3.1) - (3.4) assume that the index of the processes X(t) and Y(t) is continuous time. Measurements of the response of a structure consist of simultaneously sampled values of the response at equidistant time points with the sampling interval $\Delta T$. So the variables $t_i$ and $\tau$ used in the estimation of RD functions are discrete-time variables and functions of the sampling rate.
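To make the estimation procedure of eqs. (3.6) - (3.9) concrete, a minimal sketch in Python/NumPy is given below. It is an illustration only and not part of the original derivation: the function name rd_estimate and the callable trigger that marks the triggering points of the reference series are assumptions introduced here.

```python
import numpy as np

def rd_estimate(y, x, trigger, n_lags):
    """Empirical RD function, cf. eq. (3.9): average of segments of y
    around the points where the reference series x fulfils the
    triggering condition. `trigger(x)` must return a boolean array."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    idx = np.flatnonzero(trigger(x))
    # keep only triggering points with room for a full +/- n_lags segment
    idx = idx[(idx >= n_lags) & (idx < len(x) - n_lags)]
    if idx.size == 0:
        raise ValueError("no triggering points found")
    segments = np.stack([y[i - n_lags:i + n_lags + 1] for i in idx])
    return segments.mean(axis=0), idx.size   # RD function and N

# Example: positive point triggering with bounds [a1, a2), cf. eq. (3.39):
# D, N = rd_estimate(x, x, lambda z: (z >= a1) & (z < a2), n_lags=128)
```

The auto RD functions are obtained by passing the same realization as both y and x.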
3.2 Applied General Triggering Condition
This section introduces the applied general triggering condition. The results are not derived, only presented. The interested reader is referred to appendix A for a detailed derivation. In a practical estimation situation this triggering condition is not of direct interest, but the results obtained using it are important. The link between the RD functions of any triggering condition of practical interest and the correlation functions of stationary zero mean Gaussian distributed processes can be derived directly from the results of this triggering condition. The applied general triggering condition, $T^{GA}_{X(t)}$, of the stochastic process X(t) is defined as

$T^{GA}_{X(t)} = \{a_1 \le X(t) < a_2,\; b_1 \le \dot{X}(t) < b_2\}$   (3.11)
Superscript GA refers to the General Applied triggering condition. Using this general triggering condition the RD functions become a weighted sum of the correlation functions and the time derivative of the correlation functions

$D_{XX}(\tau) = \frac{R_{XX}(\tau)}{\sigma_X^2}\,\tilde{a} - \frac{R'_{XX}(\tau)}{\sigma_{\dot{X}}^2}\,\tilde{b}$   (3.12)

$D_{YX}(\tau) = \frac{R_{YX}(\tau)}{\sigma_X^2}\,\tilde{a} - \frac{R'_{YX}(\tau)}{\sigma_{\dot{X}}^2}\,\tilde{b}$   (3.13)

where the triggering levels $\tilde{a}$ and $\tilde{b}$ are functions of the triggering bounds and the density functions

$\tilde{a} = \frac{\int_{a_1}^{a_2} x\,p_X(x)\,dx}{\int_{a_1}^{a_2} p_X(x)\,dx} \qquad \tilde{b} = \frac{\int_{b_1}^{b_2} \dot{x}\,p_{\dot{X}}(\dot{x})\,d\dot{x}}{\int_{b_1}^{b_2} p_{\dot{X}}(\dot{x})\,d\dot{x}}$   (3.14)

Equations (3.12) and (3.13) illustrate how versatile the RD technique is. By adjusting the triggering bounds $a_1, a_2$ and/or $b_1, b_2$ the contribution of the correlation functions and the time derivative of the correlation functions to the resulting RD functions can be changed. In the limit, by choosing one of the triggering level sets, $[a_1\ a_2]$ or $[b_1\ b_2]$, as $[-\infty\ \infty]$ or $[0\ 0]$, the resulting RD functions become proportional to either the correlation functions or their time derivatives.
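For a zero mean Gaussian density the integrals in eq. (3.14) have a simple closed form, since $\int_{a_1}^{a_2} x\,p_X(x)\,dx = \sigma_X^2\,(p_X(a_1) - p_X(a_2))$. The short sketch below (an illustration, not taken from the thesis) evaluates $\tilde{a}$ for given bounds; $\tilde{b}$ follows from the same expression with $\sigma_{\dot{X}}$ and the bounds $b_1$, $b_2$.

```python
import math

def gauss_pdf(x, sigma):
    return math.exp(-0.5 * (x / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def gauss_cdf(x, sigma):
    return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))

def a_tilde(a1, a2, sigma):
    """Triggering level of eq. (3.14) for a zero mean Gaussian density:
    the mean of X restricted to the band a1 <= X < a2."""
    num = sigma ** 2 * (gauss_pdf(a1, sigma) - gauss_pdf(a2, sigma))
    den = gauss_cdf(a2, sigma) - gauss_cdf(a1, sigma)
    return num / den

# e.g. a_tilde(sigma, 1e9, sigma) gives approximately 1.525*sigma for the
# band [sigma, infinity), i.e. the positive point condition of section 3.5
```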
Another important aspect of the RD technique is the actual number of triggering points. The RD functions are estimated as the empirical mean from a single realization of the stochastic processes. This assumes that the processes are ergodic

$\hat{D}_{XX}(\tau) = \frac{1}{N}\sum_{i=1}^{N} x(t_i+\tau) \mid \{a_1 \le x(t_i) < a_2,\; b_1 \le \dot{x}(t_i) < b_2\}$   (3.15)

$\hat{D}_{YX}(\tau) = \frac{1}{N}\sum_{i=1}^{N} y(t_i+\tau) \mid \{a_1 \le x(t_i) < a_2,\; b_1 \le \dot{x}(t_i) < b_2\}$   (3.16)

The number of triggering points, N, can be adjusted by changing the triggering levels $[a_1\ a_2]$ and $[b_1\ b_2]$. This means that the convergence of the estimates in eqs. (3.15) and (3.16) is controlled by the triggering levels.
In applications of the RD technique only special formulations of the applied general triggering condition are used. The explanation is that usually only the correlation functions or the time derivative of the correlation functions is needed. Furthermore, since the technique is applied to discrete measurements, noise will be introduced by calculating the time derivative of the measurements numerically. It is usually necessary to calculate the time derivative, since only realizations of the process itself are available. This will lead to erroneous triggering points corresponding to erroneous values of the time derivative of the measurement. This leads to the formulation of the four triggering conditions described in sections 3.3 - 3.6. In order to demonstrate the applicability of the RD technique and the different triggering conditions, two thorough examples are given in each section.
The first mathematical description of RD functions in terms of correlation functions was presented in Vandiver et al. [1]. They proved that for stationary zero mean Gaussian distributed processes the RD functions obtained using the level crossing triggering condition are proportional to the auto correlation function. They also derived an approximate formula for the variance of the RD functions. Brincker et al. [2] extended this concept by deriving a proportional relationship between the cross RD functions of a theoretical general triggering condition and the cross correlation functions and the time derivative of the cross correlation functions. They also derived approximate formulas for the variance of the cross RD functions in terms of cross correlation functions. The results obtained using the applied general triggering condition are a generalization of the results obtained by Vandiver and Brincker. From this triggering condition the result of any particular triggering condition can be derived directly, see appendix A.

3.2.1 Example 1: Illustration of Triggering Conditions
The time series shown in fig. 3.1 will be analysed in sections 3.3 - 3.6. The continuous line shows the original process. The discrete markers show where the process has been sampled at equidistant time points.


Figure 3.1: Continuous process and the corresponding sampled discrete time series.
The discrete time series will be analysed applying the different triggering conditions in order to illustrate the basic idea of the RD technique with a simple example.

3.2.2 Example 2: 2DOF System


The system, which will be analysed in sections 3.3 - 3.6 for illustration purposes, is a 2DOF system loaded by uncorrelated Gaussian white noise processes at each mass. The mass, damping and stiffness matrices are

$\mathbf{M} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad \mathbf{C} = 1.5\begin{bmatrix} 0.9 & -1 \\ -1 & 1.8 \end{bmatrix}, \quad \mathbf{K} = \begin{bmatrix} 700 & -200 \\ -200 & 500 \end{bmatrix}$   (3.17)
The units of the matrices are kg, Ns/m and N/m. All matrices are symmetric. The modal
parameters of this system are listed in table 3.1:
         f [Hz]   ζ [%]   |φ|₁   |φ|₂   ∠φ₁ [°]   ∠φ₂ [°]
Mode 1   3.09     1.69    1.00   1.61   0         4.7
Mode 2   4.56     3.56    1.00   0.62   0         173.0

Table 3.1: Modal parameters of the general example.
The sampling rate is chosen as 120 Hz. The system is highly oversampled in order to
obtain a better graphical illustration of the estimated and the theoretical RD functions.
Figure 3.2 shows the simulated displacement response (10000 points) of each mass.

Figure 3.2: Measurement 1 and measurement 2. The displacement response of mass 1 and
mass 2, respectively.
As described in the introduction the response is simulated using ARMAV-models. The
approach is to simulate the Gaussian white noise loads, which together with the M, C, K
matrices and the sampling rate are the inputs to the algorithm.
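The ARMAV-based simulation tools are not reproduced in this thesis chapter. As an illustrative substitute only, the sketch below integrates the 2DOF system of eq. (3.17) directly under independent Gaussian white noise loads using a velocity Verlet scheme; the noise intensity q and the function name are choices made here and are not part of the original study.

```python
import numpy as np

def simulate_2dof(n=10000, fs=120.0, q=1.0, seed=0):
    """Displacement response of the 2DOF system of eq. (3.17) under
    independent Gaussian white noise at each mass (velocity Verlet
    time stepping; an illustrative substitute for the ARMAV simulation)."""
    dt = 1.0 / fs
    C = 1.5 * np.array([[0.9, -1.0], [-1.0, 1.8]])
    K = np.array([[700.0, -200.0], [-200.0, 500.0]])   # M is the identity
    rng = np.random.default_rng(seed)
    f = q * rng.standard_normal((n, 2))                # white noise loads
    u = np.zeros((n, 2))                               # displacements
    v = np.zeros(2)                                    # velocities
    a = f[0] - C @ v - K @ u[0]                        # acceleration (M = I)
    for i in range(n - 1):
        v_half = v + 0.5 * dt * a
        u[i + 1] = u[i] + dt * v_half
        a = f[i + 1] - C @ v_half - K @ u[i + 1]       # damping evaluated at v_half
        v = v_half + 0.5 * dt * a
    return u                                           # columns: mass 1 and mass 2

# x1, x2 = simulate_2dof().T
```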

3.3 Level Crossing Triggering Condition


The level crossing triggering condition is the most popular triggering condition in applications of the RD technique. Cole used this condition when he introduced the RD technique. Level crossing triggering, $T^L$, states that a triggering point is detected if the process is equal to the chosen triggering level, a

$T^{L}_{X(t)} = \{X(t) = a\}$   (3.18)

The condition is reformulated to be of the same form as the applied general triggering condition, see eq. (3.11)

$T^{L}_{X(t)} = \{a \le X(t) < a + \Delta a,\; -\infty \le \dot{X}(t) < \infty\}$   (3.19)

From eq. (3.19) and the results of eqs. (3.12), (3.13) and (3.14) it follows that the RD functions are proportional to the correlation functions

$D_{XX}(\tau) = \frac{R_{XX}(\tau)}{\sigma_X^2}\,a$   (3.20)

$D_{YX}(\tau) = \frac{R_{YX}(\tau)}{\sigma_X^2}\,a$   (3.21)
The RD functions defined by the level crossing triggering condition are calculated as the empirical mean. The processes are assumed to be ergodic

$\hat{D}_{XX}(\tau) = \frac{1}{N}\sum_{i=1}^{N} x(t_i+\tau) \mid \{x(t_i) = a\}$   (3.22)

$\hat{D}_{YX}(\tau) = \frac{1}{N}\sum_{i=1}^{N} y(t_i+\tau) \mid \{x(t_i) = a\}$   (3.23)

where x(t) and y(t) are realizations of X(t) and Y(t), respectively. If the time segments in the averaging process are assumed to be uncorrelated, the variance of the estimated RD functions can be estimated as, see appendix A

$\mathrm{Var}[\hat{D}_{XX}(\tau)] \approx \frac{\sigma_X^2}{N}\left(1 - \left(\frac{R_{XX}(\tau)}{\sigma_X^2}\right)^2\right)$   (3.24)

$\mathrm{Var}[\hat{D}_{YX}(\tau)] \approx \frac{\sigma_Y^2}{N}\left(1 - \left(\frac{R_{YX}(\tau)}{\sigma_X \sigma_Y}\right)^2\right)$   (3.25)

The results of eqs. (3.20) - (3.25) constitute the mathematical basis of the level crossing triggering condition. The estimate of the variance of the estimated RD functions should be used cautiously, since the assumption of uncorrelated time segments can be highly violated. Notice that the variance is independent of the chosen triggering level. It is interesting that the variance of the auto RD functions predicted by eq. (3.24) is zero at time lag zero. The variance of the RD functions predicted by eq. (3.24) will converge towards $\sigma_X^2/N$ for $|\tau| \rightarrow \infty$, since $R_{XX}(\tau) \rightarrow 0$ for $|\tau| \rightarrow \infty$. The strength of the relations in eqs. (3.24) - (3.25) is that the variance is only a function of the number of triggering points and the correlation functions (which are known from a scaling of the RD functions). The estimate of the variance can be calculated without increasing the computational time significantly.
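As an illustration (not code from the thesis), the level crossing condition of eq. (3.18) can be implemented on a sampled record by detecting sign changes of x(t) - a, and the variance approximation of eq. (3.24) follows directly once N and the scaled correlation function are available.

```python
import numpy as np

def level_crossing_indices(x, a):
    """Indices of samples just after the series crosses the level a
    (up- and downcrossings), a sampled-data version of eq. (3.18)."""
    s = np.sign(np.asarray(x, dtype=float) - a)
    return np.flatnonzero(s[:-1] * s[1:] < 0) + 1

def rd_variance_level(Rxx, sigma_x, N):
    """Approximate variance of the auto RD function, eq. (3.24),
    assuming uncorrelated time segments."""
    rho = np.asarray(Rxx, dtype=float) / sigma_x ** 2
    return sigma_x ** 2 / N * (1.0 - rho ** 2)
```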
3.3.1 Example 1: Illustration of Triggering Conditions
The time series in fig. 3.3 is considered.

Figure 3.3: Continuous process and the corresponding sampled discrete time series.
The RD function is estimated using level crossing with the triggering condition $T^L_{x(t)} = \{x(t) = a\}$. It is chosen to use 3 points in each RD function.

Figure 3.4 shows the time segments which have been picked out using the level crossing triggering condition and the resulting RD function, i.e. the average of the time segments. The time axis of the time segments corresponds to the time axis in fig. 3.3.


Figure 3.4: The time segments and the resulting RD function estimated using level crossing
triggering with a = 1.
As illustrated, two upcrossings and two downcrossings are detected.

3.3.2 Example 2: 2DOF System


The purpose of this analysis is to illustrate the level crossing triggering condition. The system described in section 3.2.2 is considered. The triggering condition is chosen as

$T^L_{X(t)} = \{X(t) = \sqrt{2}\,\sigma_X\}$   (3.26)

The triggering levels are usually expressed as a multiple of the standard deviation of the process. The RD functions of the response shown in fig. 3.2 are calculated. The triggering condition is first applied to the response of the first mass and then to the second mass. The estimated RD functions and the theoretical RD functions (see eqs. (3.20) and (3.21)) are shown in figs. 3.5 - 3.6.
Figure 3.5: Normalized RD functions $D_{X_1X_1}(\tau)$ and $D_{X_2X_1}(\tau)$ estimated using level crossing triggering applied to the response of the first mass. [———]: Theoretical RD functions, [· · ·]: Estimated RD functions.
Figure 3.6: Normalized RD functions $D_{X_1X_2}(\tau)$ and $D_{X_2X_2}(\tau)$ estimated using level crossing triggering applied to the response of the second mass. [———]: Theoretical RD functions, [· · ·]: Estimated RD functions.

The expected numbers of triggering points predicted by eq. (A.56) were 191 and 172, and the actual numbers of triggering points were 212 and 182. The figures illustrate how the accuracy of the RD functions decreases with increasing time lag, $|\tau|$. It is a standard observation that the estimates of the RD functions do not decay as fast as the theoretical RD functions.
3.4 Local Extremum Triggering Condition
The local extremum triggering condition, $T^E_{X(t)}$, is not a commonly used triggering condition compared to the level crossing triggering condition. Nevertheless, this triggering condition is attractive, since it requires that the contribution from the time derivative of the process is zero, instead of averaging out the contributions as the level crossing triggering condition does. A triggering point is detected if the time series has a local extremum

$T^E_{X(t)} = \{a_1 \le X(t) < a_2,\; \dot{X}(t) = 0\}$   (3.27)

It is extremely important that the triggering condition states that both local maxima and local minima are triggering points. If only the local maxima are used as triggering points, the following equations are not valid. The condition is reformulated to be of the same form as the applied general triggering condition, see eq. (3.11)

$T^E_{X(t)} = \{a_1 \le X(t) < a_2,\; 0 \le \dot{X}(t) < 0 + \Delta b\}, \quad \Delta b \rightarrow 0$   (3.28)
It follows from eqs. (3.12) - (3.14) that the RD functions are proportional to the correlation functions

$D_{XX}(\tau) = \frac{R_{XX}(\tau)}{\sigma_X^2}\,\tilde{a}$   (3.29)

$D_{YX}(\tau) = \frac{R_{YX}(\tau)}{\sigma_X^2}\,\tilde{a}$   (3.30)

where the triggering level $\tilde{a}$ is given by $a_1$, $a_2$ and the density function

$\tilde{a} = \frac{\int_{a_1}^{a_2} x\,p_X(x)\,dx}{\int_{a_1}^{a_2} p_X(x)\,dx}$   (3.31)

The bounds $a_1$ and $a_2$ should be chosen to have equal signs in order to extract maximum information from each time segment. If $a_1 < 0$ and $a_2 > 0$ with $a_2 > |a_1|$, the resulting RD function corresponds to the RD function estimated using $[|a_1|\ a_2]$ as triggering levels. The contribution from $[a_1\ |a_1|]$ is zero, so the only difference is the computational time wasted.
The RD functions defined by the local extremum triggering condition are estimated as the empirical mean. The processes are assumed to be ergodic

$\hat{D}_{XX}(\tau) = \frac{1}{N}\sum_{i=1}^{N} x(t_i+\tau) \mid \{a_1 \le x(t_i) < a_2,\; \dot{x}(t_i) = 0\}$   (3.32)

$\hat{D}_{YX}(\tau) = \frac{1}{N}\sum_{i=1}^{N} y(t_i+\tau) \mid \{a_1 \le x(t_i) < a_2,\; \dot{x}(t_i) = 0\}$   (3.33)
where x(t) and y(t) are realizations of X(t) and Y(t). The time segments are assumed to be uncorrelated. Thereby the variance of the estimated RD functions can be approximated by, see appendix A

$\mathrm{Var}[\hat{D}_{XX}(\tau)] \approx \frac{\sigma_X^2}{N}\left(1 - \left(\frac{R_{XX}(\tau)}{\sigma_X^2}\right)^2 - \left(\frac{R'_{XX}(\tau)}{\sigma_X \sigma_{\dot{X}}}\right)^2\right) + \frac{k^E}{N}\left(\frac{R_{XX}(\tau)}{\sigma_X^2}\right)^2$   (3.34)

$\mathrm{Var}[\hat{D}_{YX}(\tau)] \approx \frac{\sigma_Y^2}{N}\left(1 - \left(\frac{R_{YX}(\tau)}{\sigma_Y \sigma_X}\right)^2 - \left(\frac{R'_{YX}(\tau)}{\sigma_Y \sigma_{\dot{X}}}\right)^2\right) + \frac{k^E}{N}\left(\frac{R_{YX}(\tau)}{\sigma_Y \sigma_X}\right)^2$   (3.35)

where $k^E$ is given by the triggering levels and the density function

$k^E = \frac{\int_{a_1}^{a_2} x^2 p_X(x)\,dx}{\int_{a_1}^{a_2} p_X(x)\,dx} - \left(\frac{\int_{a_1}^{a_2} x\,p_X(x)\,dx}{\int_{a_1}^{a_2} p_X(x)\,dx}\right)^2$   (3.36)

The results of eqs. (3.29), (3.30) and eqs. (3.34), (3.35) constitute the mathematical basis of the local extremum triggering condition. The assumption of uncorrelated time segments can be violated, so eqs. (3.34) and (3.35) should be used cautiously. In contrast to the level crossing triggering condition, the variance of the auto RD functions predicted by eq. (3.34) is not zero at time lag zero

$\mathrm{Var}[\hat{D}_{XX}(0)] = \frac{k^E}{N} \neq 0$   (3.37)

The variance will converge towards $\sigma_X^2/N$, corresponding to the result for the level crossing triggering condition, since $R_{XX}(\tau), R'_{XX}(\tau) \rightarrow 0$ for $|\tau| \rightarrow \infty$. The variance predicted by eqs. (3.34) and (3.35) is not independent of the chosen triggering levels.
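On sampled data a local extremum can be detected as a sign change of the discrete slope, so the condition of eq. (3.27) can be sketched as below (illustrative only); both maxima and minima inside the band [a1, a2) are kept, as required above.

```python
import numpy as np

def local_extremum_indices(x, a1, a2):
    """Indices of local maxima and minima of the sampled series x whose
    values lie in the band a1 <= x < a2, cf. eq. (3.27)."""
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)
    ext = np.flatnonzero(dx[:-1] * dx[1:] < 0) + 1   # slope changes sign
    return ext[(x[ext] >= a1) & (x[ext] < a2)]
```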

3.4.1 Example 1: Illustration of Triggering Conditions
The time series in fig. 3.7 is considered again.


Figure 3.7: Continuous process and the corresponding sampled discrete time series.

The RD function is estimated using local extremum triggering with the triggering levels $T^E_{x(t)} = \{0 \le x(t) < \infty\}$. It is chosen to use 3 points in each RD function. Figure 3.8 shows the time segments which have been picked out using the local extremum triggering condition and the resulting RD function. The time axis of the time segments corresponds to the time axis in fig. 3.7.

Figure 3.8: The time segments and the resulting RD function estimated using local extremum triggering with $[a_1\ a_2] = [0\ \infty]$.

As illustrated, two local maxima and no local minima are detected.

3.4.2 Example 2: 2DOF System
The results of the analysis of the 2DOF system illustrate the local extremum triggering condition. The triggering condition is chosen as

$T^E_{X(t)} = \{\sigma_X \le X(t) < \infty,\; \dot{X}(t) = 0\}$   (3.38)

The estimated and the theoretical RD functions are shown in figs. 3.9 - 3.10.

Figure 3.9: Normalized RD functions $D_{X_1X_1}(\tau)$ and $D_{X_2X_1}(\tau)$ estimated using local extremum triggering applied to the response of the first mass. [———]: Theoretical RD functions, [· · ·]: Estimated RD functions.
Figure 3.10: Normalized RD functions $D_{X_1X_2}(\tau)$ and $D_{X_2X_2}(\tau)$ estimated using local extremum triggering applied to the response of the second mass. [———]: Theoretical RD functions, [· · ·]: Estimated RD functions.
The figures illustrate that the estimation errors of the RD functions increase with increasing distance from the triggering condition. Again it is seen that the errors result in an overestimation of the RD functions. The number of triggering points was 162 and 152 for the RD functions shown in figures 3.9 and 3.10, respectively. This is an important difference compared to the results of the level crossing triggering condition. Although the triggering conditions theoretically estimate the same functions, there is a significant difference, since the number of triggering points differs.

3.5 Positive Point Triggering Condition


The positive point triggering condition, $T^P_{X(t)}$, is perhaps the simplest of all triggering conditions, since a triggering point is detected simply if the time series has a value between two bounds $a_1$ and $a_2$, which are usually chosen to have equal (positive) signs

$T^P_{X(t)} = \{a_1 \le X(t) < a_2\}$   (3.39)

Although the positive point triggering condition is simple, the condition is versatile. If e.g. the triggering levels are chosen as $[0\ \infty]$, half of the points in the time series will be triggering points. On the other hand, if the triggering levels are chosen as $[a\ a+\Delta a]$ with $\Delta a \rightarrow 0$, then the level crossing triggering condition is obtained. So the positive point triggering condition can be interpreted as a generalization of the level crossing triggering condition. The condition is reformulated to be of the same form as the applied general triggering condition

$T^P_{X(t)} = \{a_1 \le X(t) < a_2,\; -\infty \le \dot{X}(t) < \infty\}$   (3.40)


These bounds are inserted in eqs. (3.12) - (3.14). The RD functions are proportional to the correlation functions

$D_{XX}(\tau) = \frac{R_{XX}(\tau)}{\sigma_X^2}\,\tilde{a}$   (3.41)

$D_{YX}(\tau) = \frac{R_{YX}(\tau)}{\sigma_X^2}\,\tilde{a}$   (3.42)

where the triggering level $\tilde{a}$ is given by the triggering levels $a_1$, $a_2$ and the density function

$\tilde{a} = \frac{\int_{a_1}^{a_2} x\,p_X(x)\,dx}{\int_{a_1}^{a_2} p_X(x)\,dx}$   (3.43)

Equations (3.41) - (3.43) are exactly the same as eqs. (3.29) - (3.31) for the local extremum triggering condition. The difference between the two triggering conditions is that the local extremum triggering condition demands that the time derivative of the time series is zero, whereas the positive point triggering condition averages the contribution of the time derivative towards zero. This also means that the number of triggering points is significantly different. The RD functions are estimated as the empirical mean. The processes are assumed to be ergodic

$\hat{D}_{XX}(\tau) = \frac{1}{N}\sum_{i=1}^{N} x(t_i+\tau) \mid \{a_1 \le x(t_i) < a_2\}$   (3.44)

$\hat{D}_{YX}(\tau) = \frac{1}{N}\sum_{i=1}^{N} y(t_i+\tau) \mid \{a_1 \le x(t_i) < a_2\}$   (3.45)
where x(t) and y(t) are realizations of X(t) and Y(t). If the time segments are uncorrelated, the variance of the estimated RD functions is given by, see appendix A

$\mathrm{Var}[\hat{D}_{XX}(\tau)] \approx \frac{\sigma_X^2}{N}\left(1 - \left(\frac{R_{XX}(\tau)}{\sigma_X^2}\right)^2\right) + \frac{k^P}{N}\left(\frac{R_{XX}(\tau)}{\sigma_X^2}\right)^2$   (3.46)

$\mathrm{Var}[\hat{D}_{YX}(\tau)] \approx \frac{\sigma_Y^2}{N}\left(1 - \left(\frac{R_{YX}(\tau)}{\sigma_Y \sigma_X}\right)^2\right) + \frac{k^P}{N}\left(\frac{R_{YX}(\tau)}{\sigma_Y \sigma_X}\right)^2$   (3.47)

where $k^P$ is given by

$k^P = \frac{\int_{a_1}^{a_2} x^2 p_X(x)\,dx}{\int_{a_1}^{a_2} p_X(x)\,dx} - \left(\frac{\int_{a_1}^{a_2} x\,p_X(x)\,dx}{\int_{a_1}^{a_2} p_X(x)\,dx}\right)^2$   (3.48)

The variances predicted by eqs. (3.46) and (3.47) should be used with utmost care. In situations where broad triggering levels such as $[a_1\ a_2] = [0\ \infty]$ have been applied, eqs. (3.46) and (3.47) are invalid, since they will highly underestimate the variance. Only in situations where $a_1 \approx a_2$ will eqs. (3.46) and (3.47) predict reasonable variances. This problem is one of the topics of chapter 5.
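The positive point condition of eq. (3.39) is the simplest to implement; a one-line NumPy version is sketched below (an illustration only) and its output can be passed directly to an averaging routine like the one sketched in section 3.1.

```python
import numpy as np

def positive_point_indices(x, a1, a2):
    """Indices of all samples with a1 <= x(t_i) < a2, cf. eq. (3.39)."""
    x = np.asarray(x, dtype=float)
    return np.flatnonzero((x >= a1) & (x < a2))
```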
3.5.1 Example 1: Illustration of Triggering Conditions
The time series in fig. 3.11 is considered again.


Figure 3.11: Continuous process and the corresponding sampled discrete time series.
The RD function is estimated using positive point triggering with the triggering levels $T^P_{x(t)} = \{2 \le x(t) < \infty\}$. It is chosen to use 3 points in each RD function. Figure 3.12 shows the time segments which have been picked out using the positive point triggering condition and the resulting RD function. The time axis of the time segments corresponds to the time axis in fig. 3.11.


Figure 3.12: The time segments and the resulting RD function estimated using positive point triggering with $[a_1\ a_2] = [2\ \infty]$.

As illustrated, 4 positive points exist in the chosen interval. This small example indicates
that the positive point triggering condition results in more triggering points than the other
triggering conditions.

3.5.2 Example 2: 2DOF System


The response of the 2DOF system described in section 3.2.2 is analysed. The triggering condition is chosen as

$T^P_{X(t)} = \{\sigma_X \le X(t) < \infty\}$   (3.49)

The estimated and the theoretical RD functions are shown in figures 3.13 - 3.14.
Figure 3.13: Normalized RD functions $D_{X_1X_1}(\tau)$ and $D_{X_2X_1}(\tau)$ estimated using positive point triggering applied to the response of the first mass. [———]: Theoretical RD functions, [· · ·]: Estimated RD functions.

Figure 3.14: Normalized RD functions $D_{X_1X_2}(\tau)$ and $D_{X_2X_2}(\tau)$ estimated using positive point triggering applied to the response of the second mass. [———]: Theoretical RD functions, [· · ·]: Estimated RD functions.
The number of triggering points is 1464 in both situations, and the expected number of triggering points is 1397 in both situations. Compared to the results obtained using level crossing and local extremum triggering, this condition illustrates the versatility of the RD technique. The number of triggering points can be adjusted by selecting the triggering condition and bounds carefully. Figures 3.13 and 3.14 again illustrate that the estimation errors of the RD functions increase with the distance from time lag zero.

3.6 Zero Crossing Triggering Condition


The zero crossing with positive slope triggering condition, $T^Z_{X(t)}$, was the second triggering condition introduced. Originally the resulting RD functions obtained from this triggering condition were interpreted as impulse response functions. A triggering point is detected if the process crosses the zero line with positive slope

$T^Z_{X(t)} = \{X(t) = 0,\; \dot{X}(t) > 0\}$   (3.50)

The condition is reformulated to be of the same form as the applied general triggering condition

$T^Z_{X(t)} = \{0 \le X(t) < 0 + \Delta a,\; 0 \le \dot{X}(t) < \infty\}, \quad \Delta a \rightarrow 0$   (3.51)
The bounds are inserted in eqs. (3.12) - (3.14) and the RD functions become proportional to the time derivative of the correlation functions

$D_{XX}(\tau) = -\frac{R'_{XX}(\tau)}{\sigma_{\dot{X}}^2}\,\tilde{b}$   (3.52)

$D_{YX}(\tau) = -\frac{R'_{YX}(\tau)}{\sigma_{\dot{X}}^2}\,\tilde{b}$   (3.53)

The triggering level $\tilde{b}$ is given by eq. (3.54), since all positive time derivatives are used

$\tilde{b} = \frac{\int_0^\infty \dot{x}\,p_{\dot{X}}(\dot{x})\,d\dot{x}}{\int_0^\infty p_{\dot{X}}(\dot{x})\,d\dot{x}} = \sqrt{\frac{2}{\pi}}\,\sigma_{\dot{X}}$   (3.54)

The triggering levels are always chosen as $[b_1\ b_2] = [0\ \infty]$. The argument is that usually only a realization of X(t) is available. If the triggering levels are refined, numerical differentiation has to be applied, which will result in false triggering points. The RD functions are estimated as the empirical mean. This assumes that the processes are ergodic

$\hat{D}_{XX}(\tau) = \frac{1}{N}\sum_{i=1}^{N} x(t_i+\tau) \mid \{x(t_i) = 0,\; \dot{x}(t_i) > 0\}$   (3.55)

$\hat{D}_{YX}(\tau) = \frac{1}{N}\sum_{i=1}^{N} y(t_i+\tau) \mid \{x(t_i) = 0,\; \dot{x}(t_i) > 0\}$   (3.56)

where x(t) and y(t) are realizations of X(t) and Y(t). An estimate of the variance of the estimated RD functions is derived in appendix A. The assumption is that the time segments in the averaging process, see eqs. (3.55) - (3.56), are independent

$\mathrm{Var}[\hat{D}_{XX}(\tau)] \approx \frac{\sigma_X^2}{N}\left(1 - \left(\frac{R_{XX}(\tau)}{\sigma_X^2}\right)^2 - \frac{2}{\pi}\left(\frac{R'_{XX}(\tau)}{\sigma_X \sigma_{\dot{X}}}\right)^2\right)$   (3.57)

$\mathrm{Var}[\hat{D}_{YX}(\tau)] \approx \frac{\sigma_Y^2}{N}\left(1 - \left(\frac{R_{YX}(\tau)}{\sigma_Y \sigma_X}\right)^2 - \frac{2}{\pi}\left(\frac{R'_{YX}(\tau)}{\sigma_Y \sigma_{\dot{X}}}\right)^2\right)$   (3.58)

Since $R_{XX}(0) = \sigma_X^2$ and $R'_{XX}(0) = 0$ it follows that $\mathrm{Var}[\hat{D}_{XX}(0)] = 0$, corresponding to the result for the level crossing triggering condition.
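On sampled data the condition of eq. (3.50) reduces to detecting a sign change from negative to non-negative between two consecutive samples, so no numerical differentiation is required; a minimal sketch is given below (illustrative only).

```python
import numpy as np

def zero_upcrossing_indices(x):
    """Indices of samples just after the series crosses zero with
    positive slope, a sampled-data version of eq. (3.50)."""
    x = np.asarray(x, dtype=float)
    return np.flatnonzero((x[:-1] < 0.0) & (x[1:] >= 0.0)) + 1
```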
3.6.1 Example 1: Illustration of Triggering Conditions
The time series in fig. 3.15 is considered again.


Figure 3.15: Continuous process and the corresponding sampled discrete time series.

The RD function is estimated using zero crossing with positive slope triggering. It is chosen to use 3 points in each RD function. Figure 3.16 shows the time segments which have been picked out using the zero crossing triggering condition and the resulting RD function. The time axis of the time segments corresponds to the time axis in fig. 3.15.


Figure 3.16: The time segments and the resulting RD function estimated using zero crossing triggering with positive slope.

As illustrated, 2 zero crossings with positive slope are detected.

3.6.2 Example 2: 2DOF System


The response of the 2DOF system described in section 3.2.2 is analysed. The estimated and theoretical RD functions are shown in figs. 3.17 and 3.18.
Figure 3.17: Normalized RD functions $D_{X_1X_1}(\tau)$ and $D_{X_2X_1}(\tau)$ estimated using zero crossing triggering applied to the response of the first mass. [———]: Theoretical RD functions, [· · ·]: Estimated RD functions.

Figure 3.18: Normalized RD functions $D_{X_1X_2}(\tau)$ and $D_{X_2X_2}(\tau)$ estimated using zero crossing triggering applied to the response of the second mass. [———]: Theoretical RD functions, [· · ·]: Estimated RD functions.

The auto RD functions illustrate why these functions are interpreted as impulse response
functions. The number of triggering points was 263 and 236, respectively. The expected
number of triggering points predicted by eq. (A.97) was 261 and 233, respectively.
3.7 Quality Assessment of RD Functions
In applications of the RD technique it is an advantage to have some standard methods to assess the quality of the estimated RD functions. The advantage of standard methods is that valuable experience with the technique can be transferred between different data sets obtained from different physical systems. The purpose of quality assessment of RD functions is to answer the following questions:

- How well do the data approximate the assumption of stationarity and Gaussianity?
- Has the empirical mean value converged sufficiently towards the true mean value (or: is the number of triggering points adequate)?
- How many points in the RD functions are estimated with sufficient accuracy?

These questions are correlated in some sense. They cannot be answered individually. Whether or not the data are realizations of a stationary process can often be judged with pre-knowledge of the physical system and the loads. It is a common assumption that the physical system is time-invariant during the period in which the measurements are collected. Then, if the load is stationary, the measurements will also be stationary. How well the data approximate a Gaussian distribution can be analysed by e.g. a normal probability plot or various tests. Usually the measurements are only approximately Gaussian distributed, but the RD technique is applied anyway under critical observation. Two different types of tests or investigations are suggested as standard methods for quality assessment of RD functions: the shape invariance test and the symmetry test.

3.7.1 Shape Invariance Test
Testing the shape invariance of RD functions is based on several different estimates of a correlation function using different triggering levels for the same triggering condition

$R^1_{YX}(\tau) = D^{a_1}_{YX}(\tau)\,\frac{\sigma_X^2}{a_1}, \quad R^2_{YX}(\tau) = D^{a_2}_{YX}(\tau)\,\frac{\sigma_X^2}{a_2}, \quad \ldots$   (3.59)

where the superscripts 1, 2, ... refer to the different choices of triggering levels. If different estimates of a correlation function are calculated, two different approaches exist to evaluate the shape invariance of the RD functions. First, a plot of the different correlation functions is usually sufficient to validate the different estimates of the correlation functions. If a single estimate differs significantly from the rest, the corresponding triggering levels should not be used. If all estimates differ significantly, the data should be analysed carefully with the RD technique. So the shape invariance test can also be used in a pre-analysis to select proper triggering levels for a full analysis.
If more than e.g. 5 or 6 different RD functions are estimated it might be difficult to assess the different triggering levels graphically. Instead it is suggested to calculate the correlation between the different RD functions. This corresponds to calculating the MAC values for different estimates of a mode shape in modal analysis, see eq. (2.54). Instead of using the MAC as an abbreviation it is chosen to denote the correlation coefficient the Shape Invariance Criteria (SIC).
The result of calculating the SIC values between all different estimates of the correlation functions is a matrix with unity on the diagonal. The off-diagonal elements all have values between 0 and 1. If the value is 1 the RD functions are fully correlated and if the value is 0 the RD functions are uncorrelated. The shape invariance test is illustrated in the following examples.
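Before turning to the examples, a minimal sketch of how the SIC matrix could be evaluated is given below. The thesis does not prescribe an implementation; the MAC-like squared correlation coefficient used here is an assumption consistent with the description above.

```python
import numpy as np

def sic_matrix(rd_estimates):
    """SIC values between RD estimates (one estimate per row).
    Returns a symmetric matrix with unity on the diagonal, analogous
    to the MAC matrix used for mode shape estimates."""
    D = np.asarray(rd_estimates, dtype=float)
    G = D @ D.T                                   # inner products
    norm = np.sqrt(np.outer(np.diag(G), np.diag(G)))
    return (G / norm) ** 2

# rows: RD estimates for different triggering levels, optionally with the
# theoretical correlation function appended as the last row (cf. table 3.2)
```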
Example:
The system described in section 3.2.2 is considered again. In order to check the shape invariance of the RD functions only the response of mass 1 is considered. The positive point triggering condition with the triggering levels below is used

$[a_1\ a_2] = [0\ 0.5]\,\sigma_{X_1} \qquad [a_1\ a_2] = [0.5\ 1.5]\,\sigma_{X_1} \qquad [a_1\ a_2] = [1.5\ \infty]\,\sigma_{X_1}$   (3.60)

The RD functions are estimated and normalized to be theoretically equal to the correlation functions. Figure 3.19 shows the normalized RD functions and the theoretical correlation function.

Figure 3.19: Shape invariance of RD functions $D_{X_1X_1}(\tau)$. [———]: Theoretical correlation function. [· · ·]: Estimated RD function with $[a_1\ a_2] = [0\ 0.5]\,\sigma_{X_1}$. [– – –]: Estimated RD function with $[a_1\ a_2] = [0.5\ 1.5]\,\sigma_{X_1}$. [- - -]: Estimated RD function with $[a_1\ a_2] = [1.5\ \infty]\,\sigma_{X_1}$.

Visually it seems as if the RD function estimated using $[a_1\ a_2] = [0\ 0.5]\,\sigma_X$ is the most inaccurate compared to the theoretical function. The SIC values are calculated between all estimates and the theoretical correlation function. The results are seen in table 3.2. The different functions are ordered so that the first three components correspond to the three RD functions and the fourth component is the theoretical correlation function.

[a1 a2]         [0 0.5]σ_X   [0.5 1.5]σ_X   [1.5 ∞]σ_X   Theo
[0 0.5]σ_X       1.00         0.88           0.88         0.89
[0.5 1.5]σ_X     0.88         1.00           0.97         0.97
[1.5 ∞]σ_X       0.88         0.97           1.00         0.99
Theo             0.89         0.97           0.99         1.00

Table 3.2: Shape invariance criteria between three estimated RD functions and the theoretical correlation function.

The numbers of triggering points for the different estimates of the RD functions were N = [1712 2346 675]. The shape invariance test shows that all three RD functions have a reasonable agreement both visually and in terms of the SIC values. It is seen that the correlation between the RD functions and the theoretical correlation function is not a simple function of the number of triggering points. The estimate with the lowest number of triggering points has the highest correlation with the theoretical function. It is not unusual to observe that the accuracy of the RD functions increases with increasing triggering level, although the number of triggering points is decreasing.

Example:
The system described in section 3.2.2 is considered again. The auto RD function of the response of the first mass is calculated using the level crossing triggering condition. Two different triggering levels, $a = \sqrt{2}\,\sigma_X$ and $a = -\sqrt{2}\,\sigma_X$, are used. The estimated correlation functions and the theoretical correlation function are shown in fig. 3.20.

Figure 3.20: Shape invariance of RD functions $D_{X_1X_1}(\tau)$. [———]: Theoretical correlation function. [· · ·]: Estimated correlation function with $a = 1.5\,\sigma_{X_1}$. [- - -]: Estimated correlation function with $a = -1.5\,\sigma_{X_1}$.

The above figure illustrates that there is only a small difference between the two RD functions estimated using the same triggering level but with changed sign. If the average of these two functions is used, maximum information is extracted from the measurements using the level crossing triggering condition.


3.7.2 Symmetry Test
The second approach to quality assessment of the RD functions is based on the symmetry relations for correlation functions of stationary stochastic processes. This approach generally assumes that all possible RD functions are estimated, corresponding to estimating the full correlation matrix of the measurements at each time step. The approach can still be used if only a part of the correlation matrix is estimated, but this is considered to be a special case. The symmetry relation is

$R_{YX}(\tau) = R_{XY}(-\tau)$   (3.61)

If the estimated RD functions are scaled to be equal to the correlation functions (normalized with the triggering levels), an error function can be defined as

$E(\tau) = \frac{\hat{R}_{YX}(\tau) - \hat{R}_{XY}(-\tau)}{2}$   (3.62)

A final estimate of the correlation functions can be calculated as the average value of the above RD functions

$\hat{R}^{final}_{YX}(\tau) = \frac{\hat{R}_{YX}(\tau) + \hat{R}_{XY}(-\tau)}{2}$   (3.63)

If the above procedure is applied, the number of RD functions is still the same, but there is only an estimate for the positive time lags together with the corresponding error function. The quality of the RD functions can now be evaluated by plotting the final RD functions versus the error function. This procedure can also be used to choose the number of points used in the estimation of modal parameters.
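A compact sketch of the symmetry check in eqs. (3.62) - (3.63) is shown below (illustrative only). It assumes that the two scaled, double sided RD estimates are stored with lag zero in the middle, so that $\hat{R}_{XY}(-\tau)$ for $\tau \ge 0$ is simply the first half of $\hat{R}_{XY}$ reversed.

```python
import numpy as np

def symmetry_check(Ryx, Rxy):
    """Final RD estimate and error function, eqs. (3.62) - (3.63).
    Ryx, Rxy: double sided estimates of odd length with lag zero in
    the middle. Returns arrays for the non-negative lags only."""
    Ryx = np.asarray(Ryx, dtype=float)
    Rxy = np.asarray(Rxy, dtype=float)
    m = len(Ryx) // 2
    pos = Ryx[m:]                 # R_YX(tau), tau >= 0
    mir = Rxy[m::-1]              # R_XY(-tau), tau >= 0
    return 0.5 * (pos + mir), 0.5 * (pos - mir)   # final, error
```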

Example:
The system described in section 3.2.2 is considered again. The RD functions of the responses are calculated using the positive point triggering condition. The triggering levels are chosen as $[0.5\,\sigma_X\ \infty]$. From the RD functions the error functions and the final RD functions are calculated. Figures 3.21 and 3.22 show the error functions and the final RD functions.
Figure 3.21: Time averaged RD functions $D_{X_1X_1}(\tau)$ and $D_{X_2X_1}(\tau)$. [———]: Final RD function. [· · ·]: Error function.

Figure 3.22: Time averaged RD functions $D_{X_1X_2}(\tau)$ and $D_{X_2X_2}(\tau)$. [———]: Final RD function. [· · ·]: Error function.
The figures show that the error functions are small compared to the RD functions. The errors increase with increasing time lag, which is expected, since the errors naturally grow with increasing distance from the triggering condition. The variances predicted by e.g. eq. (3.46) also state that the uncertainty of the RD functions increases with increasing time lag.

If the number of measurements becomes too high, it becomes very tedious to evaluate the final RD functions and the error functions graphically. If e.g. 8 measurements are available, 64 different RD and error functions have to be evaluated. This is not an adequate approach. Instead it is suggested that a proper measure for the quality of the RD functions is the ratio between the norm of the error function and the norm of the final RD function

$E(k) = \sqrt{\frac{\sum_{i=1}^{n} E(\tau_i)^2}{\sum_{i=1}^{n} D^{final}_{YX}(\tau_i)^2}}$   (3.64)

where k varies from 1 to the square of the number of measurements. This allows the quality of all the RD functions to be assessed in a single graph. Such a graph will illustrate if one of the measurements is unusable as triggering measurement.
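The aggregate measure of eq. (3.64) can be evaluated as sketched below (illustrative only), giving one number per RD setup so that all setups can be compared in a single plot.

```python
import numpy as np

def quality_measure(final, error):
    """Relative symmetry error of one RD estimate, eq. (3.64)."""
    final = np.asarray(final, dtype=float)
    error = np.asarray(error, dtype=float)
    return np.sqrt(np.sum(error ** 2) / np.sum(final ** 2))
```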

3.8 Choice of Triggering Levels
One of the difficulties in applications of the RD technique is how to choose the triggering levels $[a_1\ a_2]$ for a given triggering condition. From a user point of view it is important to know how to choose a proper triggering level and how sensitive the results are to this choice. This section discusses how the choice can be made. The optimal choice of triggering level is defined as the choice which minimizes the variance of the RD functions normalized with the triggering level

$\min\left(\mathrm{Var}\left[\frac{\hat{D}_{XX}(\tau)}{\tilde{a}}\right]\right) \rightarrow [a_1\ a_2]$   (3.65)

For the level crossing triggering condition eq. (3.65) has been solved using eqs. (3.24) and (A.56). The solution is a triggering level of $a = \sqrt{2}\,\sigma_X$. This result was derived by Hummelshøj et al. [11]. The assumptions are that the processes are stationary zero mean Gaussian distributed and that the time segments used in the averaging process are independent. The latter assumption is of course violated in some sense. However, the above result is considered to be a good basis for selecting the triggering level for the level crossing triggering condition. The result has been supported by a simulation study in Brincker et al. [3]. The simulation study indicates that triggering levels between $\sigma_X$ and $2\sigma_X$ are appropriate. The simulation study takes the correlation between the segments into account. This supports the choice of $a = \sqrt{2}\,\sigma_X$.
For the local extremum triggering condition an equivalent study can be performed. The variance of the estimated RD functions using the local extremum triggering condition can be approximated by eq. (3.34)

$\mathrm{Var}[\hat{D}_{XX}(\tau)] = \frac{\sigma_X^2}{N}\left(1 - \left(\frac{R_{XX}(\tau)}{\sigma_X^2}\right)^2 - \left(\frac{R'_{XX}(\tau)}{\sigma_X \sigma_{\dot{X}}}\right)^2\right) + \frac{k^E}{N}\left(\frac{R_{XX}(\tau)}{\sigma_X^2}\right)^2$   (3.66)

The constant $k^E$ is a function of the triggering levels

$k^E = \frac{\int_{a_1}^{a_2} x^2 p_X(x)\,dx}{\int_{a_1}^{a_2} p_X(x)\,dx} - \left(\frac{\int_{a_1}^{a_2} x\,p_X(x)\,dx}{\int_{a_1}^{a_2} p_X(x)\,dx}\right)^2$   (3.67)

In order to calculate the expected number of triggering points an SDOF system is considered. The system is loaded by white noise. The natural eigenfrequency and the damping ratio are f = 1 Hz and ζ = 1 %. The expected number of triggering points can be calculated using eq. (A.70). The variance for different combinations of the triggering levels $[a_1\ a_2]$ is calculated for each time lag of the correlation function. The triggering levels which minimize eq. (3.65) are taken as the optimal choice. Figure 3.23 shows the optimal upper triggering level and the optimal lower triggering level as a function of the time lag.

Figure 3.23: Theoretically predicted optimal lower and upper triggering level (in units of $\sigma_X$) for an SDOF system using local extremum triggering, and the corresponding auto correlation function.
Figure 3.23 illustrates a new result. The optimal choice of triggering levels is not always the one which maximizes the number of triggering points. Theoretically, it is predicted that the triggering levels should be chosen as $[a_1\ a_2] = [\sigma_X\ \infty]$. The above result can only be a guideline to the user, since it depends on the correlation function and thereby on the physical system. The conclusion is that the triggering levels for the local extremum triggering condition should not uncritically be chosen as $[a_1\ a_2] = [0\ \infty]$, which maximizes the number of triggering points.
The above theoretical prediction does not take the correlation between the time segments in the averaging process into account. The prediction assumes uncorrelated time segments.
In order to check the above result a simulation study is performed. 500 responses of the SDOF system loaded by Gaussian white noise are simulated. Each time series contains 5000 points and is sampled at 15 Hz. Estimates of the correlation function are calculated for each response using the local extremum triggering condition with different triggering levels. The triggering levels are chosen as all possible combinations of $a_1 = [0, 0.2, \ldots, 3]\,\sigma_X$ and $a_2 = [0, 0.2, \ldots, 3]\,\sigma_X$ under the constraint that $a_2 > a_1$. The maximum upper level is chosen as $3\sigma_X$, since it is very unlikely to find realizations of the response beyond $3\sigma_X$. A higher maximum upper level would demand simulation of extremely long time series. A similar argument is used to select the resolution of the triggering levels as $0.2\sigma_X$. A higher resolution would also demand simulation of extremely long time series. The optimal triggering levels are chosen as the levels with minimum error, calculated as the sum of the absolute values of the difference between the simulated and theoretical correlation functions. The result of the simulation study is shown in figure 3.24.

Figure 3.24: Simulated optimal lower and upper triggering level (in units of $\sigma_X$) for an SDOF system using local extremum triggering, and the corresponding auto correlation function.

The results of fig. 3.24 show good agreement with the theoretical predictions in fig. 3.23. It is recommended that the triggering levels for the local extremum triggering condition be chosen around $[a_1\ a_2] = [\sigma_X\ \infty]$. The best way to select the optimal triggering levels is to perform a sensitivity study. The lower triggering level could be chosen as e.g. $[0\ 0.5\ 1\ 1.5]\,\sigma_X$ with the upper triggering level as infinity, and the RD functions with the lowest errors calculated using the symmetry relations decide which triggering levels are optimal.
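Such a sensitivity study can be automated as sketched below (illustrative only). The callback estimate_pair is a hypothetical user-supplied routine that returns the scaled estimates $\hat{R}_{YX}(\tau)$ and $\hat{R}_{XY}(-\tau)$ on the same non-negative lag axis for the given bounds, cf. section 3.7.2; the candidate levels follow the suggestion above.

```python
import numpy as np

def select_lower_level(estimate_pair, sigma_x, candidates=(0.0, 0.5, 1.0, 1.5)):
    """Pick the lower triggering level (in units of sigma_x) with the
    smallest symmetry error; the upper level is taken as infinity."""
    errors = []
    for c in candidates:
        Ryx, Rxy_mirror = estimate_pair(c * sigma_x, np.inf)
        err = 0.5 * (np.asarray(Ryx) - np.asarray(Rxy_mirror))       # eq. (3.62)
        fin = 0.5 * (np.asarray(Ryx) + np.asarray(Rxy_mirror))       # eq. (3.63)
        errors.append(np.sqrt(np.sum(err ** 2) / np.sum(fin ** 2)))  # eq. (3.64)
    return candidates[int(np.argmin(errors))]
```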

A simulation study corresponding to the above is performed using the positive point triggering condition. The aim is to investigate how the triggering levels should be chosen. A theoretical prediction can be calculated, but the assumption of uncorrelated time segments is highly violated, so the prediction is excluded. The results are shown in fig. 3.25.
Figure 3.25: Optimal triggering levels for positive point triggering for an SDOF system estimated by simulation, together with the corresponding auto correlation function.
The simulation study indicates that the triggering levels for the positive point triggering condition should be chosen as about $[a_1\ a_2] = [\sigma_X\ \infty]$.
In this section new information on how to select the triggering levels for the different triggering conditions has been obtained. It is shown that it can be appropriate to exclude triggering points between 0 and $\sigma_X$. Although this is only a guideline, it is important new information. The reason is that the accuracy of the estimates is increased and at the same time the estimation time is decreased, since triggering points are excluded.

3.9 Comparison of Different Approaches
The purpose of this section is to compare the RD technique with other unbiased methods for estimation of correlation functions. The results shown in this section are part of a study presented in Asmussen et al. [11]. The study illustrates the advantages and the disadvantages of the RD technique. The comparison of the different techniques is based on simulations of the displacement response of a 2DOF system loaded by independent white noise at each mass. The modal parameters of the system are listed in table 3.3.

         f [Hz]   ζ [%]   |φ|₁   |φ|₂   ∠φ₁ [°]   ∠φ₂ [°]
Mode 1   3.17     1.22    1      1.62   0         1.5
Mode 2   4.92     3.50    1      0.62   0         178

Table 3.3: Modal parameters of the 2DOF system.

The accuracy of the estimates of the (double sided) correlation functions is defined by the error, E

$E = \frac{1}{2M+1}\sum_{i=-M}^{M}\left(R(i\,\Delta T) - \hat{R}(i\,\Delta T)\right)^2$   (3.68)

where R and $\hat{R}$ denote the theoretical correlation function and the estimate of the correlation function, respectively. The accuracy calculated using eq. (3.68) contains contributions from both random and bias errors. To reduce simulation errors all results are based on 250 averages (or simulations).
Section 3.9.1 describes the two traditional approaches which are used to estimate the correlation functions. Section 3.9.2 describes the chosen triggering conditions and levels for the RD technique. Section 3.9.3 presents the results of the simulation study. The simulation study is concluded in section 3.9.4.

3.9.1 Traditional Approaches
The RD technique will be compared with two other traditional and well-known approaches: the direct approach and an approach based on Fourier and inverse Fourier transforms. For the direct approach (or the sample estimate) the cross correlation functions in a finite continuous time interval are estimated by

$\hat{R}_{YX}(\tau) = \begin{cases} \dfrac{1}{T-\tau}\displaystyle\int_0^{T-\tau} y(t+\tau)x(t)\,dt & 0 \le \tau < T \\[3mm] \dfrac{1}{T-|\tau|}\displaystyle\int_{|\tau|}^{T} y(t+\tau)x(t)\,dt & -T < \tau \le 0 \end{cases}$   (3.69)

In a finite discrete time interval consisting of N points the cross correlation functions are estimated by

$\hat{R}_{YX}(r\,\Delta T) = \begin{cases} \dfrac{1}{N-r}\displaystyle\sum_{i=0}^{N-r} y((i+r)\Delta T)\,x(i\Delta T) & 0 \le r < N \\[3mm] \dfrac{1}{N-|r|}\displaystyle\sum_{i=|r|}^{N} y((i-|r|)\Delta T)\,x(i\Delta T) & -N \le r < 0 \end{cases}$   (3.70)
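A direct (sample) estimate corresponding to eq. (3.70) for non-negative lags can be sketched in a few lines (illustrative only; the deliberately simple loop is also what makes the direct approach slow, as discussed below).

```python
import numpy as np

def direct_xcorr(y, x, max_lag):
    """Unbiased direct estimate of R_YX(r*dT) for r = 0..max_lag, eq. (3.70)."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    n = len(x)
    return np.array([np.dot(y[r:], x[:n - r]) / (n - r)
                     for r in range(max_lag + 1)])
```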

The correlation functions estimated by the direct approach are unbiased. The second approach is based on Fourier and inverse Fourier transformations. The estimation procedure is unbiased and described in Bendat & Piersol [10] and Brincker et al. [2]. The computational steps are

- Divide the measurements into a number of sub-segments.
- Pad each sub-segment with zeroes to obtain double length.
- Calculate a sub-estimate of the cross spectral density by performing Fourier transformations of the sub-segments and multiplying.
- Average all sub cross spectral densities.
- Perform an inverse Fourier transformation of the averaged cross spectral density to obtain a biased estimate of the correlation function.
- Remove the bias by division by the basic lag window.

The estimation time of both methods is of course dependent on the size of the time series, T (N), and the maximum time lag, $\tau_{max}$. But the estimation time of neither method depends on the statistical nature of the time series. This is a significant difference compared to the RD technique.
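The FFT-IFFT steps listed above can be sketched compactly with NumPy (illustrative only). For brevity a single segment is used instead of segment averaging, and the basic lag window bias is removed by dividing by N - r.

```python
import numpy as np

def fft_xcorr(y, x, max_lag):
    """Unbiased estimate of R_YX for lags 0..max_lag via zero padding,
    FFT, cross spectrum and inverse FFT (single-segment sketch)."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    n = len(x)
    nfft = 2 * n                               # zero padding to double length
    S = np.fft.rfft(y, nfft) * np.conj(np.fft.rfft(x, nfft))   # cross spectrum
    r = np.fft.irfft(S, nfft)[:max_lag + 1]    # biased correlation, lags >= 0
    return r / (n - np.arange(max_lag + 1))    # remove triangular window bias
```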

3.9.2 Triggering Conditions
For the RD technique 3 different triggering conditions are used: level crossing, local extremum and positive point triggering. The triggering level for the level crossing condition is chosen as $\sqrt{2}\,\sigma_X$, where $\sigma_X$ is the standard deviation of the triggering time series. This triggering level minimizes the variance of the estimate of the RD functions in eq. (3.24). For the local extremum triggering condition the triggering levels are chosen as $[a_1\ a_2] = [1.5\,\sigma_X\ \infty]$. For the positive point triggering condition the triggering levels are chosen as $[a_1\ a_2] = [0\ \infty]$. This ensures that the maximum number of triggering points is obtained.
It is important to note that the estimation time of the RD functions using the level crossing and local extremum triggering conditions depends on the statistical nature of the time series. Two processes with different correlation structures would result in different estimation times. So the results for these two triggering conditions can only be guidelines. But the estimation time for the positive point triggering condition is independent of the statistical nature of the measurement. The results for the estimation time of this condition can be considered to be as general as the results for the direct and the Fourier-inverse Fourier approach. Since the triggering levels are $[0\ \infty]$, the estimation times for the positive point condition will be the slowest possible for the RD technique. This is the reason for choosing these levels instead of e.g. $[\sigma_X\ \infty]$, which might result in more accurate RD functions.

3.9.3 Results of Simulation Study
Figure 3.26 shows the computational time of the different approaches used to estimate the correlation functions with a varying number of points in the time series. The number of time lags in the correlation functions is 128.
Figure 3.26: Total estimation time of the 4 correlation functions with varying number of points in the time series and 128 time lags in the correlation functions. Direct: the direct approach. FFT-IFFT: the approach based on Fourier and inverse Fourier transformations. $RD^P$: positive point triggering. $RD^E$: local extremum triggering. $RD^L$: level crossing triggering.

As expected, the direct approach is clearly the slowest approach. The reduction in estimation time obtained by using the FFT-IFFT approach is obvious. The estimation time obtained using the positive point triggering condition is nearly equal to the estimation time obtained using the FFT-IFFT approach. Level crossing and local extremum triggering result in shorter estimation times compared to the FFT-IFFT approach. The estimation times of these two formulations depend on the chosen triggering levels and the correlation structure of the processes. Even though the estimation time would differ by using other triggering levels or a different physical system, the results indicate how fast the RD technique can be compared to the FFT-IFFT approach if a strict triggering condition is chosen.
Figure 3.27 shows the estimation time of the correlation functions with a varying number of time lags in the correlation functions. The number of points in the time series is 8000.
Figure 3.27: Total estimation time of the 4 correlation functions with varying number of time lags and 8000 points in the time series. Direct: the direct approach. FFT-IFFT: the approach based on Fourier and inverse Fourier transformations. $RD^P$: positive point triggering. $RD^E$: local extremum triggering. $RD^L$: level crossing triggering.

Figure 3.27 shows the same tendency as figure 3.26. If a strict RD formulation, such as level crossing or local extremum triggering, is used, the RD technique becomes faster than the FFT-IFFT approach. If only a small number of time lags is needed, the RD technique becomes significantly faster than the FFT-IFFT approach. The difference in the estimation time between the different triggering conditions reduces with a decreasing number of time lags in the correlation functions. This can be explained by the fact that the averaging process and the detection of triggering points both contribute to the total estimation time. For estimates with a large number of time lags the averaging process is dominant. With a decreasing number of time lags in the estimates, the detection of triggering points becomes correspondingly more dominant. On average the number of triggering points for local extremum was 224 and for level crossing 430. For a large number of time lags local extremum triggering is faster, but for a small number of time lags level crossing triggering is faster. This is expected, since detecting a local extremum is more time consuming than detecting a level crossing.
Figure 3.28 shows the accuracy (see eq. (3.68)) of the different methods as a function of the number of points in the time series. Figure 3.29 shows the results of the same investigations performed on the time series with 20 % noise added. 20 % is the ratio between the standard deviation of the noise and the standard deviation of the time series. The direct approach has been omitted due to the high estimation times.
Figure 3.28: Accuracy, E, of the correlation functions (panels $D_{X_1X_1}$, $D_{X_1X_2}$, $D_{X_2X_1}$, $D_{X_2X_2}$) with varying time series length. 128 points in the correlation functions. [———]: FFT-IFFT, [· · ·]: local extremum, [– · –]: positive point, [– – –]: level crossing.

Figure 3.29: Accuracy, E, of the correlation functions (panels $D_{X_1X_1}$, $D_{X_1X_2}$, $D_{X_2X_1}$, $D_{X_2X_2}$) with varying time series length. 128 points in the correlation functions. 20 % independent Gaussian white noise added. [———]: FFT-IFFT, [· · ·]: local extremum, [– · –]: positive point, [– – –]: level crossing.
As expected, the results show that the error of all four approaches converges towards zero with increasing length of the time series. The effect of adding noise is a slower convergence rate, but the results still have a high accuracy compared with the noise free analysis. The positive point triggering condition gives the lowest error, whereas the local extremum triggering condition has the highest error.

Figures 3.30 and 3.31 show the quality of the 4 approaches as a function of the number
of points in the correlation functions. The results are based on 8000 points in the time
series.

Figure 3.30: Accuracy, E, of the correlation functions (panels $D_{X_1X_1}$, $D_{X_1X_2}$, $D_{X_2X_1}$, $D_{X_2X_2}$) with increasing number of time lags. 8000 points in the time series. [———]: FFT-IFFT, [· · ·]: local extremum, [– · –]: positive point, [– – –]: level crossing.
Figure 3.31: Accuracy, E, of the correlation functions (panels $D_{X_1X_1}$, $D_{X_1X_2}$, $D_{X_2X_1}$, $D_{X_2X_2}$) with increasing number of time lags. 8000 points in the time series. 20 % independent Gaussian white noise added. [———]: FFT-IFFT, [· · ·]: local extremum, [– · –]: positive point, [– – –]: level crossing.

All 4 approaches have increasing accuracy with an increasing number of points in the time series. In general the RD technique with the local extremum triggering condition gives the highest error. The general difference between the results from the noise free responses and the responses with 20 % noise added is only a slight increase in the error. It is interesting that although the absolute error differs between the 4 approaches, the rate of increase in the error with an increasing number of points in the correlation functions is the same.

3.9.4 Conclusions
Three different unbiased approaches for non-parametric estimation of correlation functions have been investigated: the direct approach, the FFT-IFFT approach and the RD technique. The main issue was the illustration of advantages and disadvantages of different formulations of the RD technique. The investigations are based on the simulated response of a 2DOF system loaded by Gaussian white noise.
The direct approach was clearly the most time-consuming approach. For a small number of points in the correlation functions the RD technique is the fastest approach no matter how it is formulated. For a large number of points only a strict formulation of the RD technique is faster than the FFT-IFFT approach.
Triggering using positive points and the FFT-IFFT approach produce the most accurate estimates of the modal parameters compared to the level crossing and the local extremum triggering conditions. More information about the influence of applying bounds to the positive point triggering condition is needed. Applying bounds will decrease the estimation time and most probably also decrease the uncertainty.
3.10 Summary
The purpose of this chapter is to present and illustrate the RD technique. The main assumptions of the results are that the stochastic processes are stationary zero mean Gaussian distributed processes. In section 3.1 the RD functions are defined and the estimation procedure is presented. It is shown that the estimates of the RD functions are unbiased.
Section 3.2 introduces the applied general triggering condition. It is shown that using this triggering condition the RD functions become a weighted sum of the correlation functions and the time derivative of the correlation functions. Furthermore, an approximate equation for calculation of the variance of the RD functions is given. The applied general triggering condition is introduced, since the results for the four most well-known triggering conditions can be derived directly from the results of the applied general triggering condition. These four conditions, level crossing, local extremum, positive point and zero crossing with positive slope, are presented in sections 3.3 - 3.6. The relation between the RD functions and the correlation functions is given and an approximate relation for the variance of the RD functions is stated. The triggering conditions are illustrated by a simulation study.
Section 3.7 introduces the concept of quality assessment of RD functions. It is illustrated how the shape invariance relation of RD functions and the symmetry relations of correlation functions of stationary processes can be used as a basis for quality assessment. Section 3.8 indicates how the triggering levels of RD functions should be chosen. Guidelines for the user are given.
The chapter is concluded with a comparison of the speed and quality of different unbiased methods for estimation of correlation functions of stationary processes. It is shown that the RD technique can be superior in speed and/or quality if the triggering conditions and levels are selected carefully.

Chapter 4
Vector Triggering Random
Decrement
This chapter introduces a new concept: Vector triggering Random Decrement (VRD).
The argumentation for developing this technique is that experience has revealed some
problems with the RD technique applied to a large number of measurements (e.g. > 4)
collected simultaneously. This is often the situation in ambient testing of bridges.

- If all measurements are used as reference measurements, which will result in the maximum number of RD functions (full correlation matrix estimated), the estimation time will increase proportionally to the number of measurements compared to the situation where only a single measurement is used as reference measurement. Since speed is one of the main advantages of this technique, a decision to estimate all possible RD setups should be reconsidered. On the other hand, if not all possible RD functions are estimated, valuable information can be lost.
- If only some of the RD setups are estimated, how should the actual reference measurements be chosen? By assuming the signal-to-noise ratio to be highest at the measurements with the highest standard deviation, the standard deviation could be used as a criterion. But this is only a guideline, not a full solution.
- Usually the uncertainty of the cross RD functions is higher than the uncertainty of the auto RD functions. So only auto RD functions should be used for modal parameter estimation. But this excludes the possibility to estimate mode shapes.

The above problems are the motivation for developing the VRD technique. The target
is to estimate equivalent auto RD functions which, in contrast to ordinary auto RD
functions, contain phase information and thus the possibility to estimate mode shapes.
The solution is to apply a vector triggering condition instead of the scalar triggering
conditions used in chapter 3. The VRD concept is first presented in Ibrahim et al. [1], [2]
and Asmussen et al. [3]. The purpose of this chapter is to present and illustrate the VRD
technique.
VRD functions are defined in section 4.1 corresponding to the definition of the RD functions in section 3.1. A mathematical basis of the VRD technique is presented in section 4.2. In Ibrahim et al. [1], [2] a mathematical basis is presented, where the VRD functions are interpreted as free decays. In section 4.2 the VRD functions are interpreted in terms of correlation functions. The link between the correlation functions and the VRD functions is derived for the first time. The results will be equivalent to the results of chapter 3. In section 4.3 the variance of the VRD functions is derived. Section 4.4 discusses quality assessment of VRD functions. Sections 4.5 and 4.6 present simulation studies of the VRD technique. The purpose is to investigate the performance of the VRD technique compared to the traditional RD technique. The examples illustrate the advantages of the VRD technique.

4.1 Definition of VRD Functions

Consider an n-dimensional stochastic vector process, X(t). The VRD functions are defined
equivalently to the traditional RD functions

    D_X(\tau) = E[X(t+\tau) \mid T^v_{X(t+\Delta t)}]    (4.1)

where the vector triggering condition, T^v_{X(t+\Delta t)}, is defined as

    T^v_{X(t+\Delta t)} = T^v_{X_1(t+\Delta t_1), X_2(t+\Delta t_2), \ldots, X_m(t+\Delta t_m)}, \qquad 2 \le m \le n    (4.2)

The size of the vector triggering condition has to be between 2 and n. If the condition is
scalar, a traditional RD triggering condition is formulated. A triggering point is detected
if m of the elements in the stochastic vector process fulfil individually formulated triggering
conditions at a time t plus the individual time shifts Δt_i, Δt = [Δt_1, Δt_2, ..., Δt_m].
At this stage the triggering conditions could be any of the conditions discussed in chapter 3.
But as explained in section 4.2 only the positive point triggering condition is of practical interest.

The VRD functions are estimated as the empirical mean by assuming the processes to be
ergodic

    \hat{D}_X(\tau) = \frac{1}{N} \sum_{i=1}^{N} X(t_i+\tau) \mid T^v_{X(t_i+\Delta t)}    (4.3)

Another way of estimating the VRD functions is

    \hat{D}_X(\tau+\Delta t) = \frac{1}{N} \sum_{i=1}^{N} X(t_i+\tau+\Delta t) \mid T^v_{X(t_i+\Delta t)}    (4.4)

Of course the VRD functions \hat{D}_X(\tau+\Delta t) should be shifted backwards according to the
time shift vector, Δt. Which of these implementation possibilities to use is for the user
to decide. In the implementation performed during this work eq. (4.3) is preferred, since
the time shift problem is solved during the estimation process. The time shift problem is
thereby solved once and for all. Estimation of the VRD functions using eqs. (4.3) or (4.4)
provides unbiased estimates, corresponding to the RD technique.

The following examples illustrate the VRD technique, and some of the possibilities in the
formulation of the triggering condition, T^v_{X(t+\Delta t)}, are discussed. The last example illustrates
the algorithm on a simple system corresponding to the examples in chapter 3.
Example - Situation with 2 Measurements

Consider a 2 x 1-dimensional stochastic vector process, X(t). The VRD functions obtained
from a vector triggering condition of size two are given by

    \begin{bmatrix} D_{X_1}(\tau) \\ D_{X_2}(\tau) \end{bmatrix}
    = E\Big[ \begin{bmatrix} X_1(t+\tau) \\ X_2(t+\tau) \end{bmatrix} \Big| T^v_{X_1(t+\Delta t_1), X_2(t+\Delta t_2)} \Big]    (4.5)

The estimate of the VRD functions is calculated as

    \begin{bmatrix} \hat{D}_{X_1}(\tau) \\ \hat{D}_{X_2}(\tau) \end{bmatrix}
    = \frac{1}{N} \sum_{i=1}^{N} \begin{bmatrix} x_1(t_i+\tau) \\ x_2(t_i+\tau) \end{bmatrix} \Big| T^v_{x_1(t_i+\Delta t_1), x_2(t_i+\Delta t_2)}    (4.6)

where x_i(t) is a realization of X_i(t).
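As a complement to the example above, the following minimal sketch shows how the empirical mean in eq. (4.3) could be implemented for simultaneously recorded time series with a vector positive point triggering condition. The sketch is written in Python/NumPy and is not part of the original work; the function name, its arguments and the restriction to non-negative time shifts are illustrative assumptions only.

```python
import numpy as np

def vrd_functions(x, dt_shifts, a, b, n_lags):
    """Minimal sketch of the VRD estimate in eq. (4.3).

    x         : (m, N) array of m simultaneously recorded time series
    dt_shifts : (m,) non-negative time shifts Delta t_k in samples
    a, b      : (m,) lower/upper positive point triggering bounds for each series
    n_lags    : the VRD functions are returned for lags -n_lags ... n_lags
    """
    m, N = x.shape
    lags = np.arange(-n_lags, n_lags + 1)
    # Candidate time points where all time segments and shifted samples exist.
    valid = range(n_lags, N - n_lags - int(max(dt_shifts)))
    # Vector triggering condition: a_k <= x_k(t_i + Delta t_k) <= b_k for all k.
    hits = [i for i in valid
            if all(a[k] <= x[k, i + dt_shifts[k]] <= b[k] for k in range(m))]
    if not hits:
        raise ValueError("no triggering points detected")
    # Empirical mean of the time segments of every series around the triggering points.
    D = np.zeros((m, lags.size))
    for i in hits:
        D += x[:, i + lags]
    return lags, D / len(hits), len(hits)
```

With m = 2, Δt = [0, 0], a = [0, 0] and b = [∞, ∞] (np.inf) the sketch corresponds to the triggering condition used in the last example of this section, while non-zero entries in Δt reproduce the time shifted condition of eq. (4.16).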
Example - Situation with 4 Measurements

Consider a 4 x 1-dimensional stochastic vector process, X(t). The VRD functions obtained
from a vector triggering condition of size four are given by

    \begin{bmatrix} D_{X_1}(\tau) \\ D_{X_2}(\tau) \\ D_{X_3}(\tau) \\ D_{X_4}(\tau) \end{bmatrix}
    = E\Big[ \begin{bmatrix} X_1(t+\tau) \\ X_2(t+\tau) \\ X_3(t+\tau) \\ X_4(t+\tau) \end{bmatrix}
    \Big| T^v_{X_1(t+\Delta t_1), X_2(t+\Delta t_2), X_3(t+\Delta t_3), X_4(t+\Delta t_4)} \Big]    (4.7)

The estimate of the VRD functions is calculated as

    \begin{bmatrix} \hat{D}_{X_1}(\tau) \\ \hat{D}_{X_2}(\tau) \\ \hat{D}_{X_3}(\tau) \\ \hat{D}_{X_4}(\tau) \end{bmatrix}
    = \frac{1}{N} \sum_{i=1}^{N} \begin{bmatrix} x_1(t_i+\tau) \\ x_2(t_i+\tau) \\ x_3(t_i+\tau) \\ x_4(t_i+\tau) \end{bmatrix}
    \Big| T^v_{x_1(t_i+\Delta t_1), x_2(t_i+\Delta t_2), x_3(t_i+\Delta t_3), x_4(t_i+\Delta t_4)}    (4.8)

where x_i(t) are realizations of X_i(t). It is obvious that the number of triggering points N
will always decrease with an increasing number of measurements or elements in the vector
triggering condition. With a high number of measurements, e.g. four, it is not obvious
that the triggering condition should be applied to all measurements. This could lead to
a low number of triggering points, resulting in an estimate of the VRD functions which
has not converged sufficiently.

The solution to this problem is to estimate one or several sets of VRD functions, each
containing a number of VRD functions corresponding to the number of measurements.
This is illustrated in the situation with four measurements by defining two sets of VRD
functions.

    \begin{bmatrix} D^1_{X_1}(\tau) \\ D^1_{X_2}(\tau) \\ D^1_{X_3}(\tau) \\ D^1_{X_4}(\tau) \end{bmatrix}
    = E\Big[ \begin{bmatrix} X_1(t+\tau) \\ X_2(t+\tau) \\ X_3(t+\tau) \\ X_4(t+\tau) \end{bmatrix}
    \Big| T^v_{X_1(t+\Delta t_1), X_2(t+\Delta t_2)} \Big]    (4.9)

    \begin{bmatrix} D^2_{X_1}(\tau) \\ D^2_{X_2}(\tau) \\ D^2_{X_3}(\tau) \\ D^2_{X_4}(\tau) \end{bmatrix}
    = E\Big[ \begin{bmatrix} X_1(t+\tau) \\ X_2(t+\tau) \\ X_3(t+\tau) \\ X_4(t+\tau) \end{bmatrix}
    \Big| T^v_{X_3(t+\Delta t_3), X_4(t+\Delta t_4)} \Big]    (4.10)

In the general case with n measurements the maximum number of VRD setups which can
be estimated is the lowest integer of n/2. Furthermore, the size of the vector triggering
condition does not have to be equal for every setup. Consider e.g. five simultaneously
recorded measurements. Two VRD setups could be calculated by the triggering conditions

    T^v_{X_1(t+\Delta t_1), X_2(t+\Delta t_2)}, \qquad T^v_{X_3(t+\Delta t_3), X_4(t+\Delta t_4), X_5(t+\Delta t_5)}    (4.11)

or alternatively

    T^v_{X_1(t+\Delta t_1), X_2(t+\Delta t_2), X_3(t+\Delta t_3)}, \qquad T^v_{X_4(t+\Delta t_4), X_5(t+\Delta t_5)}    (4.12)
Example - Illustration of the Algorithm

The purpose of this example is to illustrate the VRD algorithm applied to a simple system.
The resulting VRD function does not have any interpretation. Consider two continuous-time
processes, X_1(t) and X_2(t). The continuous-time processes are shown in fig. 4.1
together with two discrete-time processes corresponding to sampling X_1(t) and X_2(t)
simultaneously at equidistant time points.
Figure 4.1: The continuous-time processes and the corresponding discrete-time processes.
The VRD functions of the two processes are calculated using the following vector triggering
condition and the following size of the time segments

    T^v_{X(t+\Delta t)} = \{X_1(t) > 0, \; X_2(t) > 0\}    (4.13)

    D_X(\tau) = E[X(t+\tau) \mid T^v_{X(t+\Delta t)}], \qquad -1 \le \tau \le 1    (4.14)

where X(t) = [X_1(t) X_2(t)]. As seen, the time shift vector is Δt = [0 0]. Four triggering
points are detected in the two processes. Figures 4.2 and 4.3 show the time segments at
each triggering point and the resulting VRD functions. The time axis of the time segments
corresponds to the time segments of the processes in fig. 4.1.

Figure 4.2: The time segments picked out in the averaging process and the resulting VRD functions for X_1(t).

Figure 4.3: The time segments picked out in the averaging process and the resulting VRD functions for X_2(t).
Figures 4.2 and 4.3 show that the VRD functions of the discrete-time process correspond
to the VRD functions of the continuous-time process at the discrete time points. The
vector triggering condition could also have been formulated as

    T^v_{X(t+\Delta t)} = \{X_1(t) > 0, \; X_2(t) < 0\}    (4.15)

where the time shift vector still is Δt = [0 0]. Figures 4.4 and 4.5 show the time
segments picked out by the triggering condition and the corresponding resulting VRD
functions.
Figure 4.4: The time segments picked out in the averaging process and the resulting VRD functions for X_1(t).

Figure 4.5: The time segments picked out in the averaging process and the resulting VRD functions for X_2(t).

Four triggering points are detected using this triggering condition. In order to increase
the number of triggering points the time shift vector can be chosen to be different from
Δt = [0 0]. A new triggering condition is formulated as

    T^v_{X(t+\Delta t)} = \{X_1(t) > 0 \;\wedge\; X_2(t+3) > 0\}    (4.16)

A time shift has been introduced at X_2(t), since Δt = [0 3]. Figures 4.6 and 4.7 show the
time segments picked out for the averaging process and the resulting VRD functions.
Figure 4.6: The time segments picked out in the averaging process and the resulting VRD functions for X_1(t).

Figure 4.7: The time segments picked out in the averaging process and the resulting VRD functions for X_2(t).

By introducing a proper time shift the number of triggering points has been increased
from four to eight. This illustrates how versatile the VRD technique is. The number of
triggering points is not only controlled by the triggering bounds but also by the choice of
the time shift vector. Notice that in figs. 4.6 and 4.7 the time segments are all picked out at
the time t although the triggering condition is fulfilled at the time t for X_1(t) and at the
time t + 3 for X_2(t).
4.2 Mathematical Basis of VRD

This section develops the relation between the VRD functions and the correlation functions
of stationary zero mean Gaussian distributed processes. Consider an n x 1-dimensional
stochastic vector process, X(t)

    X(t) = [X_1(t), X_2(t), \ldots, X_n(t)]^T    (4.17)

The correlation matrix of X at any time difference τ is given by

    R_{XX}(\tau) = E[X(t+\tau) X^T(t)] =
    \begin{bmatrix}
      R_{X_1 X_1}(\tau) & R_{X_1 X_2}(\tau) & \cdots & R_{X_1 X_n}(\tau) \\
      R_{X_2 X_1}(\tau) & R_{X_2 X_2}(\tau) & \cdots & R_{X_2 X_n}(\tau) \\
      \vdots            & \vdots            &        & \vdots            \\
      R_{X_n X_1}(\tau) & R_{X_n X_2}(\tau) & \cdots & R_{X_n X_n}(\tau)
    \end{bmatrix}    (4.18)

For simplicity the correlation matrix can be written as follows

    R_{XX}(\tau) = [R_{X_1}(\tau) \;\; R_{X_2}(\tau) \;\; \ldots \;\; R_{X_n}(\tau)]    (4.19)

where the correlation vectors R_{X_i} are given by

    R_{X_i} = [R_{X_1 X_i}(\tau) \;\; R_{X_2 X_i}(\tau) \;\; \ldots \;\; R_{X_n X_i}(\tau)]^T    (4.20)
To describe the VRD technique two vector processes, X_v(t) and Y_v(t), are defined

    X_v(t) = \begin{bmatrix} X_1(t+\tau+\Delta t_1) \\ X_2(t+\tau+\Delta t_2) \\ X_3(t+\tau+\Delta t_3) \\ \vdots \\ X_n(t+\tau+\Delta t_n) \end{bmatrix}
    \qquad
    Y_v(t) = \begin{bmatrix} X_1(t+\Delta t_1) \\ X_2(t+\Delta t_2) \\ X_3(t+\Delta t_3) \\ \vdots \\ X_m(t+\Delta t_m) \end{bmatrix}    (4.21)

where X_i refers to elements in the vector process defined in eq. (4.17). The time shifts
Δt_i could be both positive and negative. The stochastic vector process Y_v(t) is of course
restricted by m ≤ n. It is important that Y_v is contained as the first m elements of
X_v with τ = 0. This is only a question of rearranging the elements in X_v and does not
influence the final result or any application.
The auto correlation matrix of Y_v is given by

    R_{Y_v Y_v} = E[Y_v(t) Y_v^T(t)] =
    \begin{bmatrix}
      R_{X_1 X_1}(0)                       & R_{X_1 X_2}(\Delta t_1 - \Delta t_2) & \cdots & R_{X_1 X_m}(\Delta t_1 - \Delta t_m) \\
      R_{X_2 X_1}(\Delta t_2 - \Delta t_1) & R_{X_2 X_2}(0)                       & \cdots & R_{X_2 X_m}(\Delta t_2 - \Delta t_m) \\
      \vdots                               & \vdots                               &        & \vdots                               \\
      R_{X_m X_1}(\Delta t_m - \Delta t_1) & R_{X_m X_2}(\Delta t_m - \Delta t_2) & \cdots & R_{X_m X_m}(0)
    \end{bmatrix}    (4.22)

The cross correlation matrix between X_v and Y_v is given by

    R_{X_v Y_v} = E[X_v(t) Y_v^T(t)] =
    \begin{bmatrix}
      R_{X_1 X_1}(\tau)                          & R_{X_1 X_2}(\tau + \Delta t_1 - \Delta t_2) & \cdots & R_{X_1 X_m}(\tau + \Delta t_1 - \Delta t_m) \\
      R_{X_2 X_1}(\tau + \Delta t_2 - \Delta t_1) & R_{X_2 X_2}(\tau)                          & \cdots & R_{X_2 X_m}(\tau + \Delta t_2 - \Delta t_m) \\
      \vdots                                     & \vdots                                      &        & \vdots                                      \\
      R_{X_n X_1}(\tau + \Delta t_n - \Delta t_1) & R_{X_n X_2}(\tau + \Delta t_n - \Delta t_2) & \cdots & R_{X_n X_m}(\tau + \Delta t_n - \Delta t_m)
    \end{bmatrix}    (4.23)

The definition of the VRD functions is now restricted to be the mean value of X_v on
condition that Y_v = y_v (see eq. (4.1) for the general definition). This could be interpreted
as a vector level crossing triggering condition

    T_{Y_v(t+\Delta t)} = \{X_1(t+\Delta t_1) = y_1, \ldots, X_m(t+\Delta t_m) = y_m\}    (4.24)

Since X(t) has been assumed to have a zero mean vector (see eq. (4.17)) the conditional
mean value can be calculated from eq. (A.5)

    E[X_v \mid T_{Y_v(t+\Delta t)}] = E[X_v \mid Y_v = y_v] = R_{X_v Y_v} \, R^{-1}_{Y_v Y_v} \, y_v    (4.25)

A triggering level vector, \tilde{a}, is defined as

    \tilde{a} = R^{-1}_{Y_v Y_v} \, y_v, \qquad \tilde{a} = [\tilde{a}_1 \; \tilde{a}_2 \; \ldots \; \tilde{a}_m]^T    (4.26)

The VRD functions can be rewritten using eq. (4.26)

    D^v_X(\tau + \Delta t) = E[X_v \mid T^v_{Y_v(t+\Delta t)}] = R_{X_v Y_v} \, \tilde{a}    (4.27)

Before any attempt is made to extract the modal parameters each VRD function is shifted
-Δt_i. This leads to the shifted VRD functions

    D^v_X(\tau) =
    \begin{bmatrix}
      R_{X_1 X_1}(\tau - \Delta t_1) & R_{X_1 X_2}(\tau - \Delta t_2) & \cdots & R_{X_1 X_m}(\tau - \Delta t_m) \\
      R_{X_2 X_1}(\tau - \Delta t_1) & R_{X_2 X_2}(\tau - \Delta t_2) & \cdots & R_{X_2 X_m}(\tau - \Delta t_m) \\
      \vdots                         & \vdots                         &        & \vdots                         \\
      R_{X_n X_1}(\tau - \Delta t_1) & R_{X_n X_2}(\tau - \Delta t_2) & \cdots & R_{X_n X_m}(\tau - \Delta t_m)
    \end{bmatrix}
    \begin{bmatrix} \tilde{a}_1 \\ \tilde{a}_2 \\ \vdots \\ \tilde{a}_m \end{bmatrix}    (4.28)

The same result would have been obtained if X_v(t) in eq. (4.21) had been defined without
the Δt time shift vector. The computational effort in the estimation procedure is
independent of the chosen formulation. Using eq. (4.19), eq. (4.28) can be rewritten as

    D^v_X(\tau) = R_{X_1}(\tau - \Delta t_1) \, \tilde{a}_1 + R_{X_2}(\tau - \Delta t_2) \, \tilde{a}_2 + \ldots + R_{X_m}(\tau - \Delta t_m) \, \tilde{a}_m    (4.29)

Notice that if m = 1 and Δt_i = 0 the traditional RD formulation for the level crossing
triggering condition is obtained

    D_{X_1}(\tau) = R_{X_1}(\tau) \, \tilde{a}_1 = \frac{R_{X_1 X_1}(\tau)}{\sigma^2_{X_1}} \, y_1    (4.30)
The probability that Y_v = y_v occurs is very small. This means that the expected number
of triggering points can easily be too small to achieve a VRD function which has converged
acceptably. Instead a vector positive point triggering condition is used

    T^v_{Y_v(t+\Delta t)} = \{a_1 \le X_1(t+\Delta t_1) \le b_1, \ldots, a_m \le X_m(t+\Delta t_m) \le b_m\}    (4.31)

The maximum number of triggering points is always obtained by choosing a = 0 and
b = ∞. In order to link this triggering condition with the results from eq. (4.25) or eq.
(4.27), eq. (A.8) is used

    D_X(\tau + \Delta t) = E[X_v \mid T^v_{Y_v(t+\Delta t)}] = E[X_v \mid a < Y_v \le b]
    = \int_{-\infty}^{\infty} x_v \, p_{X_v \mid T^v_{Y_v(t+\Delta t)}}(x_v \mid T_{y_v(t+\Delta t)}) \, dx_v
    = \frac{1}{k_1} \int_a^b \int_{-\infty}^{\infty} x_v \, p_{X_v \mid Y_v}(x_v \mid y_v) \, p_{Y_v}(y_v) \, dx_v \, dy_v
    = R_{X_v Y_v} \, R^{-1}_{Y_v Y_v} \, \frac{1}{k_1} \int_a^b y_v \, p_{Y_v}(y_v) \, dy_v
    = R_{X_v Y_v} \, \tilde{a}    (4.32)

where the triggering level \tilde{a} is now defined as

    \tilde{a} = \frac{R^{-1}_{Y_v Y_v}}{k_1} \int_a^b y_v \, p_{Y_v}(y_v) \, dy_v, \qquad
    k_1 = \int_a^b p_{Y_v}(y_v) \, dy_v    (4.33)

So the vector positive point triggering condition gives results equivalent to the vector level
crossing triggering condition

    D^v_X(\tau) = R_{X_1}(\tau - \Delta t_1) \, \tilde{a}_1 + R_{X_2}(\tau - \Delta t_2) \, \tilde{a}_2 + \ldots + R_{X_m}(\tau - \Delta t_m) \, \tilde{a}_m    (4.34)

Only the weights, \tilde{a}_i, of the correlation functions have changed. In practice only the vector
positive point triggering condition is of interest, since this is the only triggering condition
which results in a reasonable number of triggering points.
In conclusion the VRD functions are a sum of a number of correlation functions corresponding
to the size of the vector triggering condition. The result in eq. (4.34) is important since the
modal parameters can be extracted from the VRD functions using the methods described
in chapter 2.

A single problem arises when methods developed to extract modal parameters from free
decays are used to extract modal parameters from VRD functions. These methods can
only deal with positive time lag correlation functions. The decays should dissipate with
increasing time lags. This is not the case for the part of the correlation functions which
has negative time lags. This also means that the VRD functions cannot be used directly
as input to ITD or PTD. First of all, only the part of the VRD functions with τ ≥ 0 can
be used. Furthermore, a number of points corresponding to max(Δt_i) should be removed
from all functions. Otherwise, a part of the correlation functions with negative time lags
is used in the modal parameter extraction procedure. This can result in highly erroneous
damping ratios.
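A small sketch of this bookkeeping is given below (Python/NumPy; illustrative only, not the thesis implementation): the negative lags are discarded and the first points corresponding to max(Δt_i) are removed before the functions are passed to a modal parameter extraction routine such as ITD or PTD.

```python
import numpy as np

def trim_vrd(lags, D, dt_shifts, dt_sample):
    """Keep only the part of the VRD functions usable as free decays.

    lags      : (L,) lag axis in seconds, symmetric around zero
    D         : (m, L) VRD functions estimated with the time shifts dt_shifts
    dt_shifts : time shifts (in seconds) used in the vector triggering condition
    dt_sample : sampling interval in seconds
    """
    # Only non-negative time lags can be treated as decaying free responses ...
    keep = lags >= 0.0
    lags_pos, D_pos = lags[keep], D[:, keep]
    # ... and the first max(dt_shifts) of them still contain negative-lag
    # correlation information and are therefore removed as well.
    n_skip = int(np.ceil(max(dt_shifts) / dt_sample))
    return lags_pos[n_skip:], D_pos[:, n_skip:]
```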
4.2.1 Choice of Time Shifts
Another problem is to choose the time shifts Δt_1, Δt_2, ..., Δt_m in a way to obtain the
maximum number of triggering points. The obvious possibility is to estimate a column
in the correlation matrix at several positive as well as negative time points using the
traditional RD technique. To obtain the maximum number of triggering points for the
VRD technique the time shifts Δt_i can be chosen from

    \max_\tau \left( |D_{X_i X_j}(\tau)| \right) \;\Rightarrow\; \Delta t_i = \tau    (4.35)

Notice that the time shift corresponding to i = j will always be Δt_i = 0, which is always
the time lag with maximum value for the auto correlation functions of a stationary process.
If D_{X_i X_j}(Δt_i) is negative, the triggering levels a_i and b_i should also be negative.
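The selection rule in eq. (4.35) can be sketched as follows (Python/NumPy, illustrative names): from one estimated column of RD functions the lag with the largest absolute value is picked for each measurement, and the triggering bounds are mirrored where the RD function is negative at that lag.

```python
import numpy as np

def choose_time_shifts(lags, D_col, a, b):
    """Pick time shifts and triggering bounds for the VRD condition, cf. eq. (4.35).

    lags  : (L,) lag axis of the RD functions, containing lag zero
    D_col : (m, L) one column of estimated RD functions D_{X_i X_j}(tau)
    a, b  : (m,) positive point triggering bounds to be adapted
    """
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    idx = np.argmax(np.abs(D_col), axis=1)             # lag of maximum |D_{X_i X_j}|
    dt = lags[idx]                                     # time shifts Delta t_i
    sign = np.sign(D_col[np.arange(D_col.shape[0]), idx])
    # Negative correlation at the chosen lag -> negative triggering levels,
    # returned as properly ordered (lower, upper) bounds.
    lo = np.minimum(sign * a, sign * b)
    hi = np.maximum(sign * a, sign * b)
    return dt, lo, hi
```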

4.3 Variance of VRD Functions

The variance of the VRD functions is derived corresponding to the principles used to
derive the variance of the RD functions, see appendix A. Two vector processes X_v(t) and
Y_v(t) are constructed from the stationary Gaussian distributed zero mean vector process
X(t) corresponding to eq. (4.21)

    X_v(t) = \begin{bmatrix} X_1(t+\tau+\Delta t_1) \\ X_2(t+\tau+\Delta t_2) \\ X_3(t+\tau+\Delta t_3) \\ \vdots \\ X_n(t+\tau+\Delta t_n) \end{bmatrix}
    \qquad
    Y_v(t) = \begin{bmatrix} X_1(t+\Delta t_1) \\ X_2(t+\Delta t_2) \\ X_3(t+\Delta t_3) \\ \vdots \\ X_m(t+\Delta t_m) \end{bmatrix}    (4.36)

By assuming that the stochastic vector process X(t) is ergodic and that the individual
time segments in the averaging process (see eq. (4.3)) are uncorrelated, the variance of
the VRD functions can be calculated from

    \mathrm{Var}[\hat{D}^v_X(\tau)] = \frac{1}{N} \mathrm{Var}[X_v \mid T^v_{Y_v(t+\Delta t)}]    (4.37)

where N is the number of triggering points. \mathrm{Var}[X_v \mid T^v_{Y_v(t+\Delta t)}] are the diagonal elements of
the covariance matrix \mathrm{Cov}[X_v \mid T^v_{Y_v(t+\Delta t)}]

    \mathrm{Cov}[X_v \mid T^v_{Y_v(t+\Delta t)}] = \int_{-\infty}^{\infty} x_v x_v^T \, p_{X_v \mid T^v_{Y_v(t+\Delta t)}}(x_v \mid T_{y_v(t+\Delta t)}) \, dx_v
    - E[X_v \mid T^v_{Y_v(t+\Delta t)}] \, E[X_v \mid T^v_{Y_v(t+\Delta t)}]^T    (4.38)

Using the results in appendix A, see eq. (A.8), the above integral can be calculated

    \mathrm{Cov}[X_v \mid T^v_{Y_v(t+\Delta t)}] = R_{X_v X_v} - R_{X_v Y_v} R^{-1}_{Y_v Y_v} R^T_{X_v Y_v}
    - \frac{1}{k_1^2} R_{X_v Y_v} R^{-1}_{Y_v Y_v} \left( \int_a^b y_v \, p_{Y_v}(y_v) \, dy_v \right)
      \left( \int_a^b y_v \, p_{Y_v}(y_v) \, dy_v \right)^T R^{-1,T}_{Y_v Y_v} R^T_{X_v Y_v}
    + \frac{1}{k_1} R_{X_v Y_v} R^{-1}_{Y_v Y_v} \left( \int_a^b y_v y_v^T \, p_{Y_v}(y_v) \, dy_v \right) R^{-1,T}_{Y_v Y_v} R^T_{X_v Y_v}    (4.39)

This allows the variance of the VRD functions to be calculated. The variance is not only a
function of the VRD functions but a function of the correlation functions. This means that
eq. (4.37) can only be used in theoretical applications, e.g. to investigate the significance
of the number of triggering points. It is also seen to be a more complicated expression
compared to the results for the RD technique.

4.4 Quality Assessment

In applications of the RD technique the RD functions are assumed to be proportional to
the correlation functions. For e.g. the positive point triggering condition the following
relation is valid

    D_{XY}(\tau) = \frac{R_{XY}(\tau)}{\sigma^2_Y} \, \tilde{a}, \qquad
    \tilde{a} = \frac{\int_{a_1}^{b_1} y \, p_Y(y) \, dy}{\int_{a_1}^{b_1} p_Y(y) \, dy}    (4.40)

The proportionality relation is independent of any choice of the triggering levels a_1 and
b_1. This is denoted shape invariance. By choosing different triggering levels a number
of different RD functions can be calculated. By normalizing these functions properly a
number of theoretically identical RD functions can be obtained. A quality measure based
on eq. (4.40) can be defined on the basis of the difference between the estimated RD
functions. This approach was developed for the RD technique in section 3.7.
It would be obvious to extend this approach to the VRD technique. The VRD functions
corresponding to eq. (4.40) are given by

    D^v_X(\tau) = R_{X_1}(\tau - \Delta t_1) \, \tilde{a}_1 + R_{X_2}(\tau - \Delta t_2) \, \tilde{a}_2 + \ldots + R_{X_m}(\tau - \Delta t_m) \, \tilde{a}_m    (4.41)

    \tilde{a} = R^{-1}_{Y_v Y_v} \frac{\int_a^b y_v \, p_{Y_v}(y_v) \, dy_v}{\int_a^b p_{Y_v}(y_v) \, dy_v}    (4.42)

If VRD functions are shape invariant, the following relation should be fulfilled

    \tilde{a}(a_1, b_1) = k \cdot \tilde{a}(a_2, b_2)    (4.43)

where k is an arbitrary constant. This demand can be reformulated using eq. (4.42)

    \int_{a_1}^{b_1} y_v \, p_{Y_v}(y_v) \, dy_v = k \int_{a_2}^{b_2} y_v \, p_{Y_v}(y_v) \, dy_v    (4.44)

This relation cannot in general lead to a guideline to show how the choices of a_i and b_i
can secure shape invariance of VRD functions.
Another method to assess the quality of RD functions is based on the following relation
for stationary processes

    R_{XY}(\tau) = R_{YX}(-\tau)    (4.45)

which leads to the error function

    \varepsilon = \hat{D}_{XY}(\tau) - \hat{D}_{YX}(-\tau)    (4.46)

where the error function contains information about the quality of the RD functions. This
approach was also developed in section 3.7 for the RD technique. Again, it would be
obvious to extend this approach to VRD functions. The VRD functions are a sum of
correlation functions, see eq. (4.34)

    D^v_{X_i}(\tau) = \sum_{j=1}^{m} R_{X_i X_j}(\tau - \Delta t_j) \, \tilde{a}_j    (4.47)

First of all it is clear that in general the VRD functions are not symmetric since

    \sum_{j=1}^{m} R_{X_i X_j}(\tau - \Delta t_j) \, \tilde{a}_j - \sum_{j=1}^{m} R_{X_i X_j}(\Delta t_j - \tau) \, \tilde{a}_j \ne 0    (4.48)

This holds even for Δt_j = 0, j = 1, ..., m, since the cross correlation functions in eq.
(4.48) are not symmetric. The next attempt would be to combine (add or subtract) the n
VRD functions, so that the final result is theoretically zero. This can be done if and only
if m = n, since both R_{X_i X_j} and R_{X_j X_i} should be represented in the VRD functions. If
m < n only the first m VRD functions can be used. Next assume that m = n. Then the
following relation should hold

    R_{X_i X_j}(\tau - \Delta t_j) \, \tilde{a}_j = R_{X_j X_i}(\tau - \Delta t_i) \, \tilde{a}_i    (4.49)

In general this cannot be fulfilled since Δt_i ≠ Δt_j and ã_i ≠ ã_j are both unknown. So in
general the relation in eq. (4.45) does not constitute a basis for quality assessment of VRD
functions.
It is a disadvantage of the VRD technique that the quality assessment cannot be performed
using the methods developed for the RD technique. The only approach to quality assessment
of VRD functions is to choose two triggering conditions, where the only difference
is the sign of the triggering levels

    T^{v,1}_{Y_v(t+\Delta t)} = \{a_1 \le Y_1(t+\Delta t_1) < b_1, \ldots, a_m \le Y_m(t+\Delta t_m) < b_m\}    (4.50)

    T^{v,2}_{Y_v(t+\Delta t)} = \{-b_1 < Y_1(t+\Delta t_1) \le -a_1, \ldots, -b_m < Y_m(t+\Delta t_m) \le -a_m\}    (4.51)

The estimated VRD functions have the following relation

    \hat{D}^{v,1}_{X_v} = -\hat{D}^{v,2}_{X_v}    (4.52)

This can be used to define an error function, which should theoretically be zero and thereby
constitute a basis for quality assessment.
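A sketch of such an error function is given below (Python/NumPy). It reuses the hypothetical vrd_functions routine sketched in section 4.1 and is only meant to illustrate eqs. (4.50) - (4.52); the normalization of the error is an arbitrary choice.

```python
import numpy as np

def vrd_sign_mirror_quality(x, dt_shifts, a, b, n_lags):
    """Relative error based on eq. (4.52): D1 + D2 should theoretically be zero."""
    # vrd_functions: see the sketch in section 4.1 (assumed available here).
    _, D1, _ = vrd_functions(x, dt_shifts, np.asarray(a), np.asarray(b), n_lags)    # eq. (4.50)
    _, D2, _ = vrd_functions(x, dt_shifts, -np.asarray(b), -np.asarray(a), n_lags)  # eq. (4.51)
    err = D1 + D2
    # Normalize by the average size of the two estimates to obtain a relative measure.
    return np.linalg.norm(err) / (0.5 * (np.linalg.norm(D1) + np.linalg.norm(D2)))
```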
4.5 Examples - 2DOF Systems
In order to test the performance of the VRD technique a simulation study of two 2DOF
systems loaded by independent white noise at each mass is performed in sections 4.5.1
and 4.5.2. Since these are the first results obtained using the VRD technique, it
is natural to have a starting point with a simple system such as a 2DOF system. The
efficiency of the VRD technique is compared with the RD technique by comparing modal
parameters estimated from the VRD functions with modal parameters estimated from the
RD functions. Modal parameters are extracted from the RD and VRD functions using
the ITD technique, see chapter 2.
Three different quality measures are defined in order to compare the accuracy of the
different approaches: ε_1, ε_2 and ε_3, characterizing the bias, the scatter and the relative
error of the estimates. λ_i is a theoretical parameter (frequency, damping ratio or mode
shape component), λ̂_i is the corresponding estimate, and the bar and σ̂_i denote the mean
and the standard deviation of the estimates over the simulations

    \varepsilon_1 = \sum_{i=1}^{2} \frac{|\lambda_i - \bar{\hat{\lambda}}_i|}{\lambda_i}    (4.53)

    \varepsilon_2 = \sum_{i=1}^{2} \frac{\hat{\sigma}_i}{\lambda_i}    (4.54)

    \varepsilon_3 = \sum_{i=1}^{2} \frac{|\lambda_i - \hat{\lambda}_i|}{\lambda_i}    (4.55)
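A minimal Python/NumPy sketch of these measures is given below, assuming they are evaluated over the ensemble of simulation results as described above; the exact normalization is an assumption based on the bias/variance/relative-error labels used in the figure captions of this section.

```python
import numpy as np

def quality_measures(lam_true, lam_est):
    """Sketch of eqs. (4.53) - (4.55) under the bias/scatter/relative-error reading.

    lam_true : (2,) theoretical modal parameters (e.g. the two eigenfrequencies)
    lam_est  : (n_sim, 2) estimates of the same parameters from n_sim simulations
    """
    lam_true = np.asarray(lam_true, dtype=float)
    lam_est = np.asarray(lam_est, dtype=float)
    eps1 = np.sum(np.abs(lam_true - lam_est.mean(axis=0)) / lam_true)      # bias
    eps2 = np.sum(lam_est.std(axis=0) / lam_true)                          # scatter
    eps3 = np.mean(np.sum(np.abs(lam_true - lam_est) / lam_true, axis=1))  # relative error
    return eps1, eps2, eps3
```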
10% independent Gaussian white noise is added to each response (10% is the standard
deviation of the noise divided by the standard deviation of the noise free response). The
purpose is to model a real life situation.
4.5.1 Example 1
The modal parameters of the system are shown in table 4.1.

            f [Hz]   ζ [%]   |φ₁|   |φ₂|   ∠φ₁ [°]   ∠φ₂ [°]
  Mode 1    1.29     3.39    1.00   0.68   0.00      1
  Mode 2    2.10     1.09    1.00   1.48   0.00      179

Table 4.1: Modal parameters of the 2DOF system for example 1.
The quality measures in eqs. (4.53) - (4.55) are calculated on the basis of 200 independent
simulations of the response of the system by estimating VRD and RD functions from
each simulated response and extracting the modal parameters using ITD. The final quality
measures are the mean values of the measures obtained from each of the simulations. 500
points are generated in each response time series and the sampling frequency is 10 Hz. The
system is simple, so no more than 500 points are necessary in order to estimate reasonable
modal parameters.
Figure 4.8 shows typical RD functions using triggering at a single measurement. Positive
point triggering has been used with a = 0.5σ_X and b = ∞. These triggering levels are
chosen for all RD and VRD functions. The (*) and the (o) on the cross RD functions
indicate optimum time delays for vector triggering if the triggering levels are selected to
be positive for (*) and negative for (o).

Figure 4.8: RD functions using measurement 1 (left) and measurement 2 (right) for triggering. The (*) and (o) on the cross RD functions designate optimum time delays for VRD estimation. The RD functions have been normalized column-wise so the auto RD functions are correlation coefficient functions.
Figure 4.9 shows typical VRD functions of the simulated response based on Δt_1 = 0 and
Δt_2 equal to the (*) time delay. The VRD functions are not symmetric around τ = 0, which
illustrates the difference between auto RD functions and VRD functions.

Figure 4.9: VRD functions estimated using the (*) time delay from fig. 4.8.
The quality measures calculated from the 200 independent simulations are shown in fig.
4.10. Five different bars are plotted in each sub-figure. Bars 1 and 2 correspond to results
where the modal parameters have been extracted from RD functions using triggering
only at the response of the first and second mass, respectively (or results based on the
first column and the second column of fig. 4.8 only). Bar 3 corresponds to
results where all four RD functions have been used in the modal parameter extraction
procedure. Bars 4 and 5 correspond to results from VRD functions using the time shifts
Δt_1 = 0, Δt_2 = (*) and Δt_1 = 0, Δt_2 = (o), see fig. 4.8.

Figure 4.10: Bias, variance and relative error for different RD functions (bars 1-3) and different VRD functions (bars 4-5).
The results indicate the importance of choosing time shifts with high positive correlation
combined with positive triggering levels, or with high negative correlation combined with
negative triggering levels. Bar 5 shows that a time shift with high negative correlation and
positive triggering levels results in poor quality measures. The results for a correct choice
of time shifts and triggering levels (bar 4) document the mathematical basis. On average
the VRD technique can produce results with the same high accuracy as the RD technique
according to the quality measures.
If the VRD technique is to be a genuine alternative to the traditional RD technique, the
difficulties in choosing proper time shifts have to be paid off with either a higher accuracy
or a lower computational time compared to the RD technique.
4.5.2 Example 2
The purpose of this example is to document that under certain observable and realistic
conditions the VRD technique can be more accurate than the RD technique. The modal
parameters of the system are shown in table 4.2.

            f [Hz]   ζ [%]   |φ₁|   |φ₂|   ∠φ₁ [°]   ∠φ₂ [°]
  Mode 1    1.56     2.19    1.00   0.08   0.00      2
  Mode 2    4.22     1.83    1.00   12.0   0.00      174

Table 4.2: Modal parameters of a 2DOF system for example 2.

The major difference from example 1 is that there is practically no information between
the two masses (the cross mode shape component is relatively small). The results are based
on 200 independent simulations of the response of the system with the modal parameters
in table 4.2 loaded by independent Gaussian white noise. 2000 points are generated in
each time series and the sampling frequency is 20 Hz. 10 % independent Gaussian white
noise is added (standard deviation of the noise divided by the standard deviation of the
noise-free process). Figure 4.11 shows typical RD functions.

Figure 4.11: RD functions using measurement 1 (left) and measurement 2 (right) for triggering. The RD functions have been normalized column-wise so the auto RD functions are correlation coefficient functions.
Figure 4.12 shows the VRD functions. The time shifts are chosen as Δt_1 = 0, Δt_2 = 0.05
s, which are the time shifts with maximum correlation, see fig. 4.11.

Figure 4.12: VRD functions estimated using the time shift vector Δt = [0 0.05] s.
Figure 4.13 shows the quality measures calculated on the basis of the 200 independent
simulations. Bars 1 and 2 correspond to RD functions using triggering only at the first
and second mass, respectively. Bar 3 corresponds to results from all four RD functions
and bar 4 corresponds to results from VRD functions.
Figure 4.13: Bias, variance and relative error for different RD functions (bars 1-3) and different VRD functions (bar 4).

These results illustrate that the VRD technique is superior with respect to the accuracy
compared to the RD technique, when only a single setup is used in the modal parameter
extraction procedure. The VRD technique should be compared with the RD technique
where all RD setups are used to extract the modal parameters.

4.6 Example - 4DOF System

The purpose of this example is further documentation of the applicability of the VRD
technique by performing a simulation study on a more complicated system compared to
the 2DOF systems used in section 4.5. The eigenfrequencies and the damping ratios of
the chosen system are shown in table 4.3.

  Mode     1      2      3      4
  f [Hz]   1.62   4.61   6.86   9.00
  ζ [%]    3.70   2.07   1.16   1.52

Table 4.3: Modal parameters of the 4DOF system.

The mode shapes of the system are illustrated by plotting their absolute values in fig. 4.14.
The mode shapes are approximately in or out of phase.

Figure 4.14: Absolute value of the mode shapes of the 4DOF system.

In order to describe the accuracy of the different methods statistically, the simulation and
estimation process is repeated 500 times. The sampling frequency is 50 Hz and 8000
points are simulated in each time series. 10 % independent Gaussian white noise is added
to each response. The quality measures in eqs. (4.53) - (4.55) are calculated on the basis
of the 500 simulations.

Four different approaches are compared:

- Approach 1: RD using positive point triggering at the measurement with the highest standard deviation, so only a single RD setup is estimated. The triggering levels are chosen as a = 0, b = ∞.
- Approach 2: RD using level crossing triggering estimating the full correlation matrix. The triggering level is chosen as a = 1.4σ_X.
- Approach 3: RD using positive point triggering estimating the full correlation matrix. The triggering levels are all chosen as a_i = 0, b_i = ∞, i = 1, ..., 4.
- Approach 4: VRD using a vector triggering condition of size four. The triggering levels are chosen as a_i = 0, b_i = ∞, i = 1, ..., 4.

Figure 4.15 shows typical RD functions estimated by approach 1. The RD functions are
an estimate of a single column in the correlation function matrix.
Figure 4.15: RD functions estimated using approach 1.

From fig. 4.15 it is seen that the choice of VRD time shifts Δt_i = 0, i = 1, ..., 4 gives
maximum correlation. Figure 4.16 shows typical VRD functions.

Figure 4.16: VRD functions estimated using the time shifts Δt_i = 0, i = 1, ..., 4.

The VRD functions are not symmetric around τ = 0. Figure 4.17 shows the quality
measures. The four bars correspond to the 4 different approaches in the order they were
described previously.
Figure 4.17: Bias, variance and relative error for the different RD approaches (bars 1-3) and the VRD approach (bar 4).
The quality measures show that the VRD technique and the RD technique using positive
point triggering estimating the full correlation matrix (approach 3) in general have higher
quality compared to RD approaches 1 and 2. The result is expected, since approaches 1
and 2 can be interpreted as sub-approaches of the more general approach 3. The estimation
times and the numbers of triggering points of the 4 different approaches are shown in table
4.4. Column 5 shows the results from the necessary initial estimation using level crossing
triggering.

  Approach   1      2      3      4      5
  Time       1.39   1.40   5.56   1.05   0.31
  N          3950   470    3950   1590   470

Table 4.4: Estimation time [CPU] and number of triggering points, N.
As seen, the VRD technique has the same estimation time as approaches 1 and 2 and is
about 4-5 times faster than approach 3. The results of this section are summarized in
table 4.5 by giving the different approaches grades in the form of a number of + signs.

  Approach   1    2    3      4
  Quality    +    ++   ++++   +++
  CPU Time   ++   ++   +      ++

Table 4.5: Evaluation of the different approaches.
The simulation study indicates that the VRD approach is efficient in the sense of having a
low estimation time and a high accuracy. The VRD technique can replace the traditional RD
technique. Only in the case of measurements with a high content of noise is it recommended
to use the traditional RD technique with all possible RD setups.
4.7 Summary
In this chapter a new concept has been introduced: Vector triggering Random Decrement.
The VRD technique differs from the RD technique in the formulation of the triggering
condition. In the formulation of the traditional RD technique the triggering condition should
only be fulfilled in a single process, whereas in the formulation of the VRD technique the
triggering condition should be fulfilled in several processes. This makes it a vector triggering
condition.
In section 4.1 the VRD functions are defined and the estimation process, which provides
unbiased estimates, is described. The relation between the VRD functions and the correlation
functions of a stationary zero mean Gaussian distributed vector process is derived in
section 4.2 for the first time. The VRD functions are a weighted sum of the correlation
functions of the vector process. The number of correlation functions is equal to the size
of the vector triggering condition. In section 4.3 an approximate relation for the variance
of the estimate of the VRD functions is derived, corresponding to the relations for the
estimate of the RD functions. Section 4.4 discusses quality assessment of VRD functions.
It is shown that the VRD functions are not shape invariant and that the symmetry relations
for correlation functions of stationary processes can only be used in a very special
situation. Instead quality assessment is suggested to be based on the difference between
the VRD functions estimated by shifting the sign of the processes.
Sections 4.5 and 4.6 describe different simulation studies, which document the applicability
of the VRD technique. Section 4.5 illustrates that the VRD technique is superior in
accuracy compared to the RD technique using a single set of RD functions, if a physical
system with low correlation between the measurements is analysed. In section 4.6 it is
illustrated that the accuracy or the speed of the VRD technique can be superior to the
RD technique. In conclusion the VRD technique can be used with a lower estimation time
and as high accuracy as the RD technique, unless measurements with a high noise content
are considered. The VRD technique is an attractive alternative to the RD technique.
A final comparison between the RD and the VRD technique is performed in chapter 7,
where a laboratory bridge model loaded by Gaussian white noise through a shaker has
been analysed.

Bibliography
[1] Ibrahim, S.R., Asmussen, J.C. & Brincker, R. Theory of Vector Triggering Random
Decrement Technique. Proc. 15th International Modal Analysis Conference, Orlando,
Florida, USA, 1997, Vol. I, pp. 502-510.
[2] Ibrahim, S.R., Asmussen, J.C. & Brincker, R. Vector Triggering Random Decrement
for High Identification Accuracy. Accepted for publication in Journal of Vibration
and Acoustics.
[3] Asmussen, J.C., Ibrahim, S.R. & Brincker, R. Application of the Vector Triggering
Random Decrement Technique. Proc. 15th International Modal Analysis Conference,
Orlando, Florida, USA, 1997, Vol. II, pp. 1165-1171.
Chapter 5
Variance of RD Functions
The purpose of this chapter is to investigate the variance of RD functions. Knowledge of
the variance of RD functions could be used in the modal parameter extraction procedure
or to indicate the duration of the RD functions which is not too uncertain.
The relations for the variance of the estimate of the RD functions given in chapter 3 are
considered. The decisive assumption behind these relations is that the different time segments
in the averaging process are uncorrelated. This assumption can be violated to such a degree
that the relations shown in chapter 3 are unusable. Instead a new approach for obtaining
more accurate estimates of the variance of RD functions is suggested. This new approach
takes the correlation between the different time segments into account by considering the
relative time distribution of the triggering points.

5.1 Variance of RD Functions

This section investigates how the variance of the estimates of the RD functions can be
calculated. In chapter 3 approximate equations for the variance of the estimates of the
RD functions are given for all triggering conditions. The strength of these relations is
that the approximate variance can be calculated from the estimated RD functions and the
number of triggering points only. A simpler and faster approach is impossible. The important
assumption behind the relations is that the time segments in the averaging process are
uncorrelated. This assumption is in theory always violated. In practice the consequence is
that the relations for the variance of the RD functions cannot be used for e.g. the positive
point triggering condition, since it is obvious that the time segments are not uncorrelated
for this triggering condition. The relations can only serve as a guideline, also for conditions
other than positive point triggering.
The purpose of this section is to introduce a new method for estimating the variance of the
RD functions. It will be assumed that the measurements are realizations of stationary zero
mean Gaussian distributed processes. A new method should fulfil the following demands.

- The method should be valid for all triggering conditions. This means that the correlation between the time segments should be taken into account and that the method is independent of the chosen triggering condition.
- The method should be accurate and consistent. Consistent means that the accuracy of the method should be independent of the physical system describing the measured responses.
- The method should be fast, otherwise the absolute main advantage of the RD technique is wasted and the method will only have theoretical interest.
- The requirement for the speed of the method is fulfilled if the variance can be predicted from the RD functions only.
In the following a new method is proposed and it is investigated whether it fulfils the above
demands. Only cross RD functions and the variance of the estimated cross RD functions
are considered, in order to simplify the derivations and still preserve generality. The RD
functions are always estimated as the empirical mean

    \hat{D}_{YX}(\tau) = \frac{1}{N} \sum_{i=1}^{N} y(t_i+\tau) \mid T^{GA}_{x(t_i)}
    = \frac{1}{N} \sum_{i=1}^{N} y(t_i+\tau) \mid x(t_i) = x_i, \; \dot{x}(t_i) = \dot{x}_i
    = \frac{1}{N} \sum_{i=1}^{N} y(t_i+\tau) \mid T_{x_i}    (5.1)

where the applied general triggering condition is used to preserve generality, since this
condition contains any particular condition. When the RD functions have been estimated,
the applied general triggering condition can be replaced by N alternative triggering conditions,
which are formulated from the observable values of x and \dot{x} at the already detected
time points t_i. If these triggering conditions are applied to the measurements, exactly the
same RD functions would be estimated, since the same triggering points are detected.
Although this is impossible in practice, since the values of x(t_i) and \dot{x}(t_i) and the
corresponding time points, t_i, are unknown in advance, the conditions will be used as a basis
for the model presented in the following. The idea is that the information of x(t_i), \dot{x}(t_i)
at the triggering time points t_i can be obtained from the estimation procedure of the RD
functions and used in an estimation method for the variance of the RD functions.
The variance of the estimated RD functions can be calculated as

    \mathrm{Var}[\hat{D}_{YX}(\tau)] = \frac{1}{N^2} \mathrm{Var}\Big[ \sum_{i=1}^{N} y(t_i+\tau) \mid T^{GA}_{x(t_i)} \Big]
    = \frac{1}{N^2} \sum_{i=1}^{N} \sum_{j=1}^{N} \mathrm{Cov}\big[ y(t_i+\tau) \mid T^{GA}_{x(t_i)}, \; y(t_j+\tau) \mid T^{GA}_{x(t_j)} \big]    (5.2)

If all cross terms in eq. (5.2) are neglected, the relations for the variance of the estimated
RD functions described in chapter 3 are obtained. The variance of the RD functions could
as well be calculated on the basis of the alternative triggering conditions introduced in
parts 2 and 3 of eq. (5.1)

    \mathrm{Var}[\hat{D}_{YX}(\tau)] = \frac{1}{N^2} \mathrm{Var}\Big[ \sum_{i=1}^{N} y(t_i+\tau) \mid T_{x_i} \Big]
    = \frac{1}{N^2} \sum_{i=1}^{N} \sum_{j=1}^{N} \mathrm{Cov}\big[ y(t_i+\tau) \mid T_{x_i}, \; y(t_j+\tau) \mid T_{x_j} \big]    (5.3)
By keeping track of the real time points, t_i, i = 1, 2, ..., N, in the estimation of the
RD functions, the variance of the RD functions can be rewritten using this information
without loss of generality

    \mathrm{Var}[\hat{D}_{YX}(\tau)] = \frac{1}{N^2} \Big( \sum_{i=1}^{N} \mathrm{Cov}\big[ y(t_i+\tau) \mid T_{x_i}, \; y(t_i+\tau) \mid T_{x_i} \big]
    + \sum_{j=1}^{m} \sum_{i=1}^{N_j} \mathrm{Cov}\big[ y(t_i+\tau) \mid T_{x_i}, \; y(t_i + j\Delta T + \tau) \mid T_{x_{i+j}} \big]
    + \sum_{j=1}^{m} \sum_{i=1}^{N_j} \mathrm{Cov}\big[ y(t_i + j\Delta T + \tau) \mid T_{x_{i+j}}, \; y(t_i+\tau) \mid T_{x_i} \big] \Big)    (5.4)

where m is the maximum number of time lags between any triggering points. In eq. (5.4)
some of the N_j can be zero. The general requirement for the number of covariance
terms at each time step is

    N + 2 N_1 + 2 N_2 + \ldots + 2 N_m = N^2    (5.5)

Since the covariance of the conditional processes is independent of the chosen initial
conditions, T_{x_i}, which will be shown later, eq. (5.4) can be rewritten as

    \mathrm{Var}[\hat{D}_{YX}(\tau)] = \frac{1}{N^2} \Big( \sum_{i=1}^{N} \mathrm{Cov}\big[ Y(t+\tau) \mid T^G_{X(t)}, \; Y(t+\tau) \mid T^G_{X(t)} \big]
    + \sum_{j=1}^{m} N_j \, \mathrm{Cov}\big[ Y(t+\tau) \mid T^G_{X(t)}, \; Y(t + j\Delta T + \tau) \mid T^G_{X(t+j\Delta T)} \big]
    + \sum_{j=1}^{m} N_j \, \mathrm{Cov}\big[ Y(t + j\Delta T + \tau) \mid T^G_{X(t+j\Delta T)}, \; Y(t+\tau) \mid T^G_{X(t)} \big] \Big)    (5.6)

where T^G_{X(t)} is of the same form as the theoretical general triggering condition

    T^G_{X(t)} = \{X(t) = a, \; \dot{X}(t) = b\}    (5.7)
The major problem is to calculate the general covariance between Y(t+\tau) \mid T^G_{X(t)} and
Y(t + j\Delta T + \tau) \mid T^G_{X(t+j\Delta T)}. Consider the following two Gaussian distributed stochastic
vectors

    X_1 = [Y(t+\tau) \;\; Y(t+t_1+\tau) \;\; \dot{Y}(t+\tau) \;\; \dot{Y}(t+t_1+\tau)]^T    (5.8)

    X_2 = [X(t) \;\; X(t+t_1) \;\; \dot{X}(t) \;\; \dot{X}(t+t_1)]^T    (5.9)

The covariance of X_1 on condition of X_2 is calculated using eq. (A.4)

    \mathrm{Cov}[X_1 \mid X_2] = R_{X_1 X_1} - R_{X_1 X_2} R^{-1}_{X_2 X_2} R^T_{X_1 X_2}    (5.10)

Using the definition of the correlation functions in eq. (2.55) the correlation matrices in
(5.10) can be calculated from eqs. (5.11) - (5.13) (X and Y are assumed to have zero mean
value)

    R_{X_1 X_1} =
    \begin{bmatrix}
      R_{YY}(0)    & R_{YY}(-t_1)  & -R'_{YY}(0)    & -R'_{YY}(-t_1)  \\
      R_{YY}(t_1)  & R_{YY}(0)     & -R'_{YY}(t_1)  & -R'_{YY}(0)     \\
      R'_{YY}(0)   & R'_{YY}(-t_1) & -R''_{YY}(0)   & -R''_{YY}(-t_1) \\
      R'_{YY}(t_1) & R'_{YY}(0)    & -R''_{YY}(t_1) & -R''_{YY}(0)
    \end{bmatrix}    (5.11)

    R_{X_2 X_2} =
    \begin{bmatrix}
      R_{XX}(0)    & R_{XX}(-t_1)  & -R'_{XX}(0)    & -R'_{XX}(-t_1)  \\
      R_{XX}(t_1)  & R_{XX}(0)     & -R'_{XX}(t_1)  & -R'_{XX}(0)     \\
      R'_{XX}(0)   & R'_{XX}(-t_1) & -R''_{XX}(0)   & -R''_{XX}(-t_1) \\
      R'_{XX}(t_1) & R'_{XX}(0)    & -R''_{XX}(t_1) & -R''_{XX}(0)
    \end{bmatrix}    (5.12)

    R_{X_1 X_2} =
    \begin{bmatrix}
      R_{YX}(\tau)        & R_{YX}(\tau - t_1)  & -R'_{YX}(\tau)        & -R'_{YX}(\tau - t_1)  \\
      R_{YX}(\tau + t_1)  & R_{YX}(\tau)        & -R'_{YX}(\tau + t_1)  & -R'_{YX}(\tau)        \\
      R'_{YX}(\tau)       & R'_{YX}(\tau - t_1) & -R''_{YX}(\tau)       & -R''_{YX}(\tau - t_1) \\
      R'_{YX}(\tau + t_1) & R'_{YX}(\tau)       & -R''_{YX}(\tau + t_1) & -R''_{YX}(\tau)
    \end{bmatrix}    (5.13)

The covariance between Y(t+\tau) \mid T^G_{X(t)} and Y(t + j\Delta T + \tau) \mid T^G_{X(t+j\Delta T)} can be calculated
by inserting the results of eqs. (5.11), (5.12) and (5.13) in eq. (5.10), with t_1 = j\Delta T. The
covariance is taken as the element [1,2] of the 4 x 4 dimensional covariance matrix \mathrm{Cov}[X_1 \mid X_2].
It is important that the only information which should be available is R_{YX}(τ), R'_{YX}(τ)
and R''_{YX}(τ). Since the estimated RD functions are proportional to the correlation functions,
the information can be obtained by scaling the RD functions and then calculating the
time derivative and the double time derivative of the correlation functions using numerical
differentiation. This is considered to be a simple and computationally fast requirement. The
disadvantage is that numerical differentiation of the correlation functions demands that
the measurements are oversampled. Otherwise the terms R'_{YX}(τ) and R''_{YX}(τ) should be
obtained by differentiating the measurements and then estimating the corresponding
correlation functions using the RD technique. This might be the best solution if the system
is not sufficiently oversampled for numerical differentiation of the RD functions.
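The conditional covariance in eq. (5.10) is a standard result for jointly Gaussian zero mean vectors and is straightforward to evaluate numerically. The following Python/NumPy sketch (illustrative only) returns the element [1,2] of Cov[X1|X2], i.e. the covariance between the two conditional responses entering eq. (5.6).

```python
import numpy as np

def conditional_covariance(R11, R12, R22):
    """Cov[X1|X2] = R11 - R12 R22^{-1} R12^T, cf. eq. (5.10)."""
    return R11 - R12 @ np.linalg.solve(R22, R12.T)

def conditional_cross_term(R11, R12, R22):
    """Element [1,2]: Cov[Y(t+tau)|T, Y(t+j*dT+tau)|T] used in eq. (5.6)."""
    return conditional_covariance(R11, R12, R22)[0, 1]
```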
What now remains is to make the weighting numbers, N_1, N_2, ..., N_m, of the different
covariance terms available. Instead of making theoretical considerations of the distribution
of the triggering points, it is decided to use the sample distribution. This means that the
weighting numbers, N_1, N_2, ..., N_m, are obtained by picking out the time of each
triggering point in the estimation of the RD functions. By sorting the time differences
between the triggering points the weighting numbers are obtained.
The estimate of the variance of the RD functions involves the following computational
steps, combined in the sketch below.

- Sampling the time of each triggering point in the estimation of the RD functions.
- Sorting the time differences between the triggering points.
- Numerical (two-time) differentiation of the RD functions (scaled to be the correlation functions).
- Calculating the variance estimate according to eq. (5.6).

None of these computational steps is extremely time consuming. The sampling of the
time points for each triggering point is free, since these time points are identified in
the estimation process of the RD functions. In the following sections the accuracy of the
method for estimating the variance of RD functions is investigated by different simulation
studies.
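Combining the steps, a possible implementation could look as sketched below (Python/NumPy, not the thesis code). The time differences between the recorded triggering points are sorted into the weighting numbers N_j, and eq. (5.6) is accumulated with a user-supplied function cond_cov(j) returning the covariance of two conditional responses separated by j sample intervals, for example evaluated from eq. (5.10) as sketched above. Separations beyond the correlation length are neglected.

```python
import numpy as np

def rd_variance(trigger_idx, m_max, cond_cov):
    """Sketch of the variance estimate in eq. (5.6).

    trigger_idx : sample indices of the triggering points found during RD estimation
    m_max       : largest separation j (in samples) taken into account
    cond_cov    : cond_cov(j) -> (L,) covariance between conditional responses
                  separated by j samples; cond_cov(0) is the conditional variance
    """
    trigger_idx = np.asarray(trigger_idx)
    N = trigger_idx.size
    # Weighting numbers N_j: number of triggering point pairs exactly j samples apart.
    diffs = np.abs(trigger_idx[:, None] - trigger_idx[None, :])
    pair_diffs = diffs[np.triu_indices(N, k=1)]
    Nj = np.bincount(pair_diffs, minlength=m_max + 1)
    var = N * cond_cov(0)                  # the N equal diagonal terms of eq. (5.6)
    for j in range(1, m_max + 1):
        if Nj[j]:
            # The two cross sums in eq. (5.6) are equal lag by lag, hence the factor 2.
            var = var + 2 * Nj[j] * cond_cov(j)
    return var / N**2
```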

5.2 Example 1: Level Crossing - SDOF

An SDOF system loaded by Gaussian white noise is considered. The system has an
eigenfrequency of 1 Hz and a damping ratio of 5 %. The response is sampled with ΔT = 1/(30 f)
at 5000 time points. 30000 simulations of this system are performed. For each simulated
response an RD function with 601 points corresponding to -10/f ≤ τ ≤ 10/f is estimated
using level crossing triggering with a triggering level of a = √2 σ_X. The time points of
each triggering point are picked out and the distribution of the time points is obtained
by sorting the time differences. The response has unit standard deviation. The average
number of triggering points was 185.
Figure 5.1 shows the average distribution of the triggering points for all simulations. The
distribution is taken as the true distribution of triggering points for the system considered.
The distribution corresponds to the weighting numbers in eq. (5.6).

Figure 5.1: Average distribution of triggering points using level crossing triggering obtained from simulations, and the theoretical auto correlation function of the system.
The figure illustrates that it is not correct to assume that the time segments in the
averaging process are uncorrelated, since many triggering points are within the correlation
length, τ_max, of the system. The correlation length is defined as |R_XX(τ_max)| ≤ ε, where
ε is a small number, say e.g. 0.1. The true variance of the RD functions is calculated on
the basis of the 30000 independently estimated RD functions. Furthermore, the variance of
the RD function estimated using the new method is calculated. The true distribution from
fig. 5.1 is used together with the theoretical correlation functions. This means that the
variance predicted by the method is as accurate as possible, since the true distributions
and not the sample distributions are used. This procedure is used in order to control the
validity of the method.

Figure 5.2: The simulated and the predicted variance of the RD functions using level crossing triggering with a = √2 σ_X, and the auto correlation function of the system. Solid line: theoretical (simulated) variance of the RD function. Dotted line: predicted variance of the RD function using the new method.

As seen, the method predicts the variance of the estimated RD functions extremely well.
The next step is to investigate how well the variance of the RD functions is predicted by
the method if a sample distribution of the triggering points and the estimated correlation
functions from a single realization of the response are used.

Figure 5.3 shows the simulated variance of the RD functions together with the variance
predicted by the new method and the variance predicted by the relation in eq. (3.24).
Figure 5.3: Simulated and predicted variance of the RD functions using level crossing triggering with a = √2 σ_X. Solid line: theoretical (simulated) variance of the RD function. Dotted line: predicted variance of the RD function using the new method. Dashed line: predicted variance using eq. (3.24).

The situation where only a single realization of the response is available corresponds to
the real life situation. So it is very important that the method predicts the true variance
as well as shown in fig. 5.3 from a single realization. The estimated RD function,
the theoretical RD function and the distribution of the triggering points for the single
realization are shown in fig. 5.4.
Figure 5.4: The theoretical and the estimated RD function using level crossing triggering with a = √2 σ_X, and the sample distribution of the triggering points. Solid line: theoretical RD function. Dotted line: estimated RD function.

Even though the accuracy of the estimated RD function is not excellent, and the distribution of the triggering points differs significantly from the true distribution shown in fig. 5.1, the new method shows a promising result.
5.3 Example 2: Positive Point - SDOF
In order to further investigate the accuracy of the method for predicting the variance of the estimated RD functions, and thereby further document its validity, the positive point triggering condition is considered. The system is the same as in section 5.2. The distribution of the triggering points is estimated on the basis of 30000 simulations of the response, followed by identification and sorting of the triggering points for each response. Correspondingly, the variance of the RD function is obtained on the basis of the simulations. Two different sets of triggering levels are investigated. First [a1 a2] = [0 ∞] is used, since this maximizes the number of triggering points and there is a high correlation between the triggering points.
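A minimal sketch of how such a sample distribution can be obtained is given below (an illustration only, not the thesis implementation; the function name, the bin width equal to the sampling interval dt and the way the levels are passed are assumptions). The triggering points are detected with the positive point condition, and the time differences between all pairs of triggering points closer than the maximum lag considered are sorted into a histogram:

function [tau, counts] = trig_time_diff_dist(x, a1, a2, dt, taumax)
  % x: response vector, [a1 a2]: triggering levels, dt: sampling interval,
  % taumax: maximum time difference considered (e.g. the correlation length)
  x = x(:);
  t = find(x >= a1 & x <= a2)*dt;          % time points of the triggering points
  counts = zeros(round(taumax/dt), 1);
  for i = 1:length(t)-1
    j = i + 1;
    while j <= length(t) && t(j) - t(i) < taumax
      bin = round((t(j) - t(i))/dt);       % sort the time difference into a bin
      counts(bin) = counts(bin) + 1;
      j = j + 1;
    end
  end
  tau = (1:length(counts))'*dt;            % time differences corresponding to the bins
end

The levels [0 ∞] can be mimicked in such a sketch by a1 = 0 and a very large value of a2.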

Figure 5.5 shows the distribution of the triggering points and the theoretical auto corre-
lation function.

[Plot: distribution of triggering points (counts) and auto correlation function [m*m], both against τ (0-8).]

Figure 5.5: Average distribution of triggering points using positive point triggering, [a1 a2] = [0 ∞]σ_X, obtained by simulations and the auto correlation function of the system.

The difference in the distribution of the triggering points compared with the result in fig. 5.1 is obvious. Figure 5.6 shows the simulated and the predicted variance obtained using the theoretical values.
[Plot: simulated and predicted variance [m*m] (scale 10^-3) and auto correlation function [m*m], both against τ (-10 to 10).]

Figure 5.6: The simulated and the predicted variance of the RD functions using positive point triggering, [a1 a2] = [0 ∞]σ_X, and the auto correlation function of the system. [----------]: Theoretical (simulated) variance of the RD function. [· · · ·]: Predicted variance of the RD function using the new method.

The variance predicted using eq. (3.46) and the variance predicted by the new method are shown in fig. 5.7, where a single estimate of the RD function and the distribution of the triggering points have been used.

[Plot: simulated and predicted variance [m*m] (scale 10^-3) against τ (-10 to 10).]

Figure 5.7: The simulated and the predicted variance of the RD functions using positive point triggering, [a1 a2] = [0 ∞]σ_X. [----------]: Theoretical (simulated) variance of the RD function. [· · · ·]: Predicted variance of the RD function using the new method. [- - -]: Predicted variance using eq. (3.46).

The estimated RD function, the theoretical RD function and the distribution of the triggering points for the single realization are shown in fig. 5.8.
[Plot: theoretical and estimated RD function [m*m] against τ (-10 to 10), and sampled distribution of triggering points against τ (0-10).]

Figure 5.8: The theoretical and the estimated RD function using positive point triggering, [a1 a2] = [0 ∞]σ_X, and the sample distribution of the triggering points. [----------]: Theoretical RD function. [· · · ·]: Estimated RD function.

The results of the simulations show that the method can also be used for the positive point triggering condition. Usually the triggering levels for this condition are not chosen as [a1 a2] = [0 ∞]. As shown in chapter 3, [a1 a2] = [σ_X ∞] can increase the accuracy of the RD functions and decrease the computational time. In the following, the system described above is investigated again using the positive point triggering condition with [a1 a2] = [σ_X ∞]. Figure 5.9 shows the distribution of the triggering points obtained by simulations together with the auto correlation function.
[Plot: distribution of triggering points (counts) and auto correlation function [m*m], both against τ (0-8).]

Figure 5.9: Average distribution of triggering points obtained by simulation using positive point triggering, [a1 a2] = [σ_X ∞], and the auto correlation function of the system.
Figure 5.10 shows the simulated and predicted variance obtained using the theoretical
values.
[Plot: simulated and predicted variance [m*m] and auto correlation function [m*m], both against τ (-10 to 10).]

Figure 5.10: The simulated and the predicted variance of the RD functions using positive point triggering, [a1 a2] = [σ_X ∞], and the auto correlation function of the system. [----------]: Theoretical (simulated) variance of the RD function. [· · · ·]: Predicted variance of the RD function using the new method.
The variance predicted using eq. (3.46) and the variance predicted by the new method are shown in fig. 5.11, where the RD function and the distribution of triggering points from a single realization have been used.

[Plot: simulated and predicted variance [m*m] against τ (-10 to 10).]

Figure 5.11: The simulated and the predicted variance of the RD functions using positive point triggering, [a1 a2] = [σ_X ∞]. [----------]: Theoretical (simulated) variance of the RD function. [· · · ·]: Predicted variance of the RD function using the new method. [- - - -]: Predicted variance using the relation from chapter 3.

The estimated RD function, the theoretical RD function and the distribution of the triggering points for the single realization are shown in fig. 5.12.

[Plot: theoretical and estimated RD function [m*m] against τ (-10 to 10), and sampled distribution of triggering points against τ (0-10).]

Figure 5.12: The theoretical and the estimated RD function using positive point triggering, [a1 a2] = [σ_X ∞], and the sample distribution of the triggering points. [----------]: Theoretical RD function. [· · · ·]: Estimated RD function.
5.4 Example 3: Positive Point - 2DOF
This section investigates how the method works on a 2DOF system loaded by uncorrelated white noise at each mass. It is a natural continuation of the work performed with the SDOF system. The modal parameters are printed in table 5.1.
          f [Hz]   ζ [%]   |φ|_1    |φ|_2    ∠φ_1    ∠φ_2
Mode 1     3.74    4.10    1.000    1.005    0.00    177.7
Mode 2     6.27    4.50    1.000    0.995    0.00      1.3

Table 5.1: Modal parameters of a 2DOF system.
The theoretical correlation (scaled RD) functions of the 2DOF system are shown in figure 5.13.
[Plot: the four correlation (scaled RD) functions D_X1X1, D_X1X2, D_X2X1 and D_X2X2 [m*m] against τ (-2 to 2).]
Figure 5.13: Theoretical correlation (scaled RD) functions of 2DOF system.
The investigations are based on 50000 simulations of the response of the system loaded by white noise, followed by estimation of the RD functions using the positive point triggering condition with the triggering levels [a1 a2] = [σ_X ∞]. Figure 5.14 shows the simulated distribution of the triggering points and a single realization of the distribution of triggering points. This realization is used later to predict the variance of the RD functions from a single set of measurements only.
[Plot: distribution of triggering points (counts) against τ (0-3), simulated (top) and from a single realization (bottom).]

Figure 5.14: Simulated distribution of triggering points and distribution of triggering points
from a single realization.

Figure 5.14 illustrates that the distribution of the triggering points is well described by a
single realization of the measurements.

[Plot: predicted and simulated variance [m*m] against τ (-3 to 3), two panels.]

Figure 5.15: Variance of RD functions. [----------]: Simulated variance. [- - - -]: Variance predicted from a single realization. [· · · ·]: Variance predicted from the theoretical RD function and the simulated distribution of triggering points. [-·-·-]: Variance predicted by eq. (3.46).
[Plot: predicted and simulated variance [m*m] against τ (-3 to 3), two panels.]

Figure 5.16: Variance of RD functions. [----------]: Simulated variance. [- - - -]: Variance predicted from a single realization. [· · · ·]: Variance predicted from the theoretical RD function and the simulated distribution of triggering points. [-·-·-]: Variance predicted by eq. (3.46).

The investigation of this 2DOF system shows that the new method is superior to the predictions by eq. (3.46). Whether or not the new method is accurate enough to pay off the extra computational time is an open question. The accuracy of the method is highest around zero time lag and at the higher time lags.

5.5 Example 4: Positive Point - 5 DOF


The investigation is concluded with the analysis of a lumped mass parameter system with 5 DOF loaded by white noise. The eigenfrequencies and the damping ratios of the system are listed in table 5.2 and the autocorrelation function of the response of the first mass is shown in fig. 5.17.

Mode      1      2      3      4      5
f [Hz]   2.18   4.18   5.81   6.69   7.73
ζ [%]    3.46   3.57   3.72   3.80   4.70

Table 5.2: Modal parameters of the 5 DOF system.
[Plot: auto correlation function [m*m] against τ (-3 to 3).]

Figure 5.17: The auto correlation function of the response of the first mass.

It is a very time-consuming process to investigate the performance of the method for all 25 RD functions. Instead only the response of a single mass is considered. Figure 5.18 shows the distribution of triggering points obtained from simulation and from a single realization.
[Plot: distribution of triggering points (counts) against τ (0-3).]

Figure 5.18: [----------]: Simulated distribution of triggering points. [· · · ·]: Distribution of triggering points from a single realization.

Figure 5.19 shows the variance calculated by simulation, by eq. (3.46), from the theoretical RD function with the simulated distribution of triggering points, and from the RD function and distribution of triggering points obtained from a single realization.
[Plot: predicted and simulated variance [m*m] against τ (-3 to 3).]

Figure 5.19: Variance of RD functions. [----------]: Simulated variance. [- - - -]: Variance predicted from a single realization. [· · · ·]: Variance predicted from the theoretical RD function and the simulated distribution of triggering points. [-·-·-]: Variance predicted by eq. (3.46).

Again it is concluded that the method predicts the variance well, especially around zero
time lag and for time lags where the variance becomes constant.
5.6 Summary
An approach to estimate the variance of RD functions has been suggested. The method takes the correlation between the time segments into account by using the sampled time points of the triggering points. The method has been tested by simulation of different systems. The method seems to predict the variance well at the zero time lag and for time lags where the variance has converged. It is superior to the method for predicting the variance which is based on the assumption of uncorrelated time segments in the averaging process. It is an open question whether this increase in accuracy can pay off the increased computational time. Further investigations in order to understand the approach are recommended.

Bibliography
[1] Asmussen, J.C. & Brincker, R. A New Approach for Predicting the Variance of
Random Decrement Functions. Proc. 16th International Modal Analysis Conference,
Santa Barbara, California, USA, February 2-5 1998.
[2] Asmussen, J.C. & Brincker, R. A New Approach for Predicting the Variance of Ran-
dom Decrement Functions. Submitted for publication in Journal of Mechanical Sy-
stems and Signal Processing.
Chapter 6
Bias Problems and
Implementation
The purpose of this chapter is to discuss the practical problems which arise in applications and implementations of the RD technique. Most of these problems are due to the sampling of continuous-time processes into discrete-time processes. The aim is to point out these problems and to describe when and how to be attentive to them.

Section 6.1 describes different bias problems which arise in application of the RD technique. The bias problems are discussed and illustrated. Special attention is given to their importance for both the estimated correlation functions and the modal parameters extracted from these correlation functions. The solutions to the bias problems are explained and it is described when to be attentive to these problems.
Section 6.2 illustrates the different implementations of the RD technique which have been programmed and used during this work. The RD functions have been programmed in HIGH-C, see the reference manual [1], and linked to MATLAB, see the user guide [2], using MATLAB's external interface facilities, see MATLAB [3].

6.1 Bias of RD Functions


Theoretically, the RD functions are unbiased, as described in chapter 3. In applications of the RD technique three different types of bias are introduced. These are:

• Bias due to the discretization of the continuous-time processes.
• Bias due to false sorting of triggering points.
• Bias due to high damping.

In the following sections these three types of bias are discussed and illustrated.

6.1.1 Bias due to Discretization


Since the measured responses of any structure are always converted from an analog to a digital signal, bias is introduced in the RD functions. This bias problem is dominant if any triggering condition corresponding to the theoretical general triggering condition is used

T_{X(t)}^{G} = \{ X(t) = a ,\ \dot{X}(t) = b \}   (6.1)

In continuous time it is not a problem to use the above triggering condition, but if the sampled measurements are considered, the event that x(t_i) = a and/or ẋ(t_i) = b will in general never occur. This introduces bias. The effect is illustrated using the level crossing triggering condition. Figure 6.1 shows a realization of a continuous-time process and the corresponding discrete-time process sampled at equidistant time points with the sampling interval ΔT.

Figure 6.1: Illustration of the effect of sampling a continuous process at equidistant time points.
The triggering level a is indicated by the horizontal line. As seen, the continuous-time process fulfils the condition X(t) = a at two time points, but the sampled process never fulfils this condition. In order to detect the triggering points it is necessary to use a crossing condition. An implementation of a level crossing triggering condition in MATLAB could look like

if ((x(k+1)>a & x(k)<a) | (x(k+1)<a & x(k)>a)), ...

The discrete-time process fulfils this condition twice, corresponding to the continuous-time process. The problem is: which time point should be used as the centre of the time segment picked out and used in the averaging process? Three possibilities exist. The left-hand point (x(k)) could be used, the right-hand point (x(k+1)) could be used, or both points could be used, corresponding to using the average of the two time segments. In Brincker et al. [1] the latter approach is denoted a symmetric window. The left-hand (l) and right-hand (r) points are shown in fig. 6.1.
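A minimal MATLAB sketch of a level crossing RD estimator using the symmetric window is given below (an illustration only, not the HIGH-C implementation used in this work; the function name and the argument list are assumptions). Both the left-hand and the right-hand sample of every crossing are used as triggering points, and the two corresponding time segments are accumulated:

function D = rd_level_symmetric(x, a, n)
  % x: response vector, a: triggering level, n: odd number of time lags
  x  = x(:);
  nh = (n-1)/2;                       % number of lags on each side of the trigger
  D  = zeros(n,1);
  Ntrig = 0;
  for k = nh+1 : length(x)-nh-1
    if (x(k) < a & x(k+1) > a) | (x(k) > a & x(k+1) < a)   % up- or downcrossing
      D = D + x(k-nh:k+nh) + x(k+1-nh:k+1+nh);             % left-hand and right-hand segments
      Ntrig = Ntrig + 2;
    end
  end
  D = D/Ntrig;                        % average over all time segments
end

Keeping only the first term in the sum corresponds to using the left-hand point, and keeping only the second term corresponds to the right-hand point.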
In order to illustrate the different possibilities, an SDOF system loaded by Gaussian white noise is considered. The eigenfrequency is f = 1 Hz and the damping ratio is ζ = 1%. The response of this system is simulated with a sampling interval of ΔT = 1/(5f) at 8000 time points. Figure 6.2 shows the estimated RD functions (level crossing) using the left-hand point, the right-hand point and both points as triggering points.
[Plot: estimated RD functions using the left-hand point, the right-hand point and the symmetric window, against τ (-2 to 2).]

Figure 6.2: Illustration of bias problems. RD functions of an SDOF system with ΔT = 1/(5f). [----------]: Theoretical RD function. [· · · ·]: Estimated RD function.
The bias introduced by the sampling of the continuous process is illustrated in figure 6.2. If the left-hand point is used as triggering point, the estimate of the RD function is shifted to the right, and if the right-hand point is used, the estimate of the RD function is shifted to the left. If both the left-hand and the right-hand point are used as triggering points, the estimated RD function is not shifted, only scaled.

If the aim is to estimate modal parameters, the above bias problems can almost always be neglected. It is assumed that methods such as ITD and PTD are used to extract the modal parameters from the RD functions. These methods are only capable of extracting correct modal parameters from either the positive time lags or the negative time lags of the correlation functions. Assume that only the positive time lags of the estimated RD functions in fig. 6.2 are used. The RD functions estimated using the right-hand point or using both the right-hand and the left-hand point will result in correct modal parameters. But the RD functions estimated using the left-hand point will result in erroneous modal parameters, unless a number of points corresponding to the time shift is omitted. The reason is that the part of the RD function from zero to the time shift corresponds to the part of the true RD function from minus the time shift to zero. The discontinuity at time lag zero cannot be described using the methods for extracting modal parameters from free decays introduced in chapter 3.
The ratio between the natural eigenfrequency and the sampling frequency influences the bias problem. The system used above is considered again. The response of the system is sampled with a smaller sampling interval, ΔT = 1/(15f), again at 8000 time points. Figure 6.3 shows the estimated RD functions using the left-hand point, the right-hand point, and both the left-hand and the right-hand point as triggering points. The triggering level is a = √2·σ_X.
[Plot: estimated RD functions using the left-hand point, the right-hand point and the symmetric window, against τ (-2 to 2).]

Figure 6.3: Illustration of bias problems. RD functions of an SDOF system with ΔT = 1/(15f). [----------]: Theoretical RD function. [· · · ·]: Estimated RD function.

As seen, the bias problem cannot be neglected, but it has decreased with increasing sampling frequency. This means that for highly oversampled systems the bias problem can be neglected.

The above bias problems have been discussed in detail by Brincker et al. [5], [6] and [7]. If the level crossing triggering condition or the zero crossing with positive slope triggering condition is used, the above problems should be considered. If the positive point or the local extremum triggering condition is used, the above problems do not exist. Only if the triggering levels a1 and a2 are formulated so that a1 ≈ a2 can the above problems arise in application of these conditions.

6.1.2 Bias due to Sorting of Triggering Points


Another bias problem is sorting of triggering points. If long records are analysed, it might be tempting to perform some kind of selection among all the detected triggering points in order to keep the estimation time low. It is very difficult to exclude some of the triggering points without introducing bias. This is illustrated in the following two examples.

Consider an SDOF system loaded by Gaussian white noise. The natural eigenfrequency is f = 1 Hz and the damping ratio is ζ = 3%. The system is sampled with ΔT = 1/(15f) at 40000 time points. In order to restrict the number of triggering points, a time jump of 100·ΔT is performed each time a triggering point is detected, before detection of the next triggering point starts. The theoretical RD function and the RD function estimated using level crossing with a = √2·σ_X and the time jump are shown in fig. 6.4.
[Plot: theoretical and estimated RD function against τ (-4 to 4).]

Figure 6.4: Illustration of bias due to false triggering point sorting using the level crossing triggering condition. Every time a triggering point is detected, a jump of 100 points is performed before the search for a new triggering point continues. [----------]: Theoretical RD function. [· · · ·]: Estimated RD function.

As seen, this makes the RD functions highly biased. The bias consists of a time shift of the RD functions and an underestimation of the damping ratio for the positive time lags and an overestimation of the damping ratio for the negative time lags. The explanation is that the probability of an upcrossing after a time jump is much higher than the probability of a downcrossing, so a hidden condition stating that the velocity of the process is positive is introduced. If the average of the negative and positive time lags is used, this bias does not influence the modal parameters.

Consider another SDOF system loaded by Gaussian white noise. The natural eigenfrequency is f = 1 Hz and the damping ratio is ζ = 10%. The system is sampled with ΔT = 1/(10f) at 10000 time points. The local extremum triggering condition is used to estimate the RD function from the response. The damping ratio is chosen high in order to ensure that the response contains both local minima and local maxima at positive response levels. The RD functions are estimated using the local maxima as triggering points only. The result is shown in figure 6.5. The top figure shows the non-normalized estimated RD function and the bottom figure shows the normalized RD function together with the theoretical RD function.
[Plot: non-normalized and normalized RD function against τ (-3 to 3).]

Figure 6.5: Illustration of bias due to false triggering point sorting using the local extremum triggering condition. Only local maxima are used in the averaging process. [----------]: Theoretical RD function. [· · · ·]: Estimated RD function.
As seen, the difference is not only a simple scaling factor; extensive bias is also introduced. Even the frequency of the RD function is changed. The explanation is that since only local maxima are used, a hidden condition is imposed on the acceleration of the process (Ẍ(t) ≤ 0). The acceleration of the process is not independent of the process itself, and thereby extensive bias is introduced. This bias cannot be removed. The example illustrates that sorting or selection of triggering points is very difficult. It is recommended to avoid selection of triggering points and instead choose another triggering condition if the estimation time is too high.
6.1.3 Bias due to High Damping
The last-mentioned bias problem is due to high damping. High levels of damping seldom occur in mechanical systems or civil engineering structures, so this phenomenon is only illustrated and not discussed in detail. In section 6.1.1 it is illustrated how the discretization of a continuous-time process introduces bias into the RD functions. The solution to these bias problems is to use a symmetric window, which means that the average of the left-hand and right-hand triggering point is used. If the system is heavily damped it is not possible to remove the bias by using the average of the two time segments. The reason is that a typical upcrossing and a typical downcrossing are not symmetric: a downcrossing is not a mirror image of an upcrossing. This introduces bias.

Consider an SDOF system loaded by Gaussian white noise. The natural eigenfrequency is f = 1 Hz and the damping ratio is ζ = 20%. The system is sampled with ΔT = 1/(3f) at 40000 time points. The RD function is calculated from the response using the level crossing triggering condition. All three choices of triggering points described in section 6.1.1 are used.
[Plot: estimated RD functions using the left-hand point, the right-hand point and the symmetric window, against τ (-2 to 2).]

Figure 6.6: Illustration of bias due to heavy damping using the level crossing triggering condition. ΔT = 1/(3f). [----------]: Theoretical RD function. [· · · ·]: Estimated RD function.

The RD functions become biased. The bias is reduced by choosing a higher sampling frequency. This is illustrated in fig. 6.7, where the RD function is calculated from the response sampled with ΔT = 1/(10f) at 40000 time points.

[Plot: estimated RD functions using the left-hand point, the right-hand point and the symmetric window, against τ (-2 to 2).]

Figure 6.7: Illustration of bias due to high damping using the level crossing triggering condition. ΔT = 1/(10f). [----------]: Theoretical RD function. [· · · ·]: Estimated RD function.

The figure illustrates that a high sampling rate removes the bias for highly damped systems.
6.2 Implementation of RD Functions
This section describes the implementation of the RD technique in MATLAB. The functions described here have been used throughout this thesis for estimation of RD functions. One of the disadvantages of MATLAB is that for-do-end loops are performed extremely slowly compared to most other programming languages. The reason is that MATLAB is a programming language constructed specially for matrix operations. In a for-do-end loop the purpose is to perform operations at index level. Loops with operations at index level should not be programmed in MATLAB, since they are extremely slow, see the MATLAB User's Guide [2]. For this reason all RD functions have been implemented in HIGH-C [1] and linked to MATLAB using MATLAB's external interface facilities, see MATLAB [3]. Using this approach the RD functions can be used as if they were programmed in MATLAB. The HIGH-C language has been chosen, since it does not impose any restriction on the size of vectors and matrices. This is equivalent to MATLAB and is convenient for time series analysis.

6.2.1 RD and VRD Functions in HIGH-C


For each of the four triggering conditions described in chapter 3, eight different implementations have been programmed. The purpose of programming eight different functions for each condition, instead of a single general function for each condition, is to ensure the highest possible speed of the technique for user applications. In the following, the names of the functions for the different triggering conditions are given together with an example of a function call in MATLAB.

Level crossing triggering condition (lev)


RD1alev RD2alev RD3alev RD4alev
RD1plev RD2plev RD3plev RD4plev
>>[Ntrig RD]=RD1alev(X,n,no,a);

The input/output variables, which are common to all the above RD functions, are

Input:    X       The measurement matrix.
          n       The number of points in the RD functions.
          no      The number of the triggering measurement.
          a       The triggering level.
Output:   Ntrig   The number of triggering points.
          RD      The RD functions (not normalized with Ntrig).
Local extremum triggering condition (loc)
RD1aloc RD2aloc RD3aloc RD4aloc
RD1ploc RD2ploc RD3ploc RD4ploc
>>[Ntrig RD]=RD3aloc(X,n,no,a1,a2);

The input/output variables, which are common to all the above RD functions, are
Input:    X       The measurement matrix.
          n       The number of points in the RD functions.
          no      The number of the triggering measurement.
          a1      The lower triggering level.
          a2      The upper triggering level.
Output:   Ntrig   The number of triggering points.
          RD      The RD functions (not normalized with Ntrig).

Positive point triggering condition (pos)


RD1apos RD2apos RD3apos RD4apos
RD1ppos RD2ppos RD3ppos RD4ppos
>>[Ntrig RD]=RD3apos(X,n,no,a1,a2);

The input/output variables, which are common to all the above RD functions, are
Input:    X       The measurement matrix.
          n       The number of points in the RD functions.
          no      The number of the triggering measurement.
          a1      The lower triggering level.
          a2      The upper triggering level.
Output:   Ntrig   The number of triggering points.
          RD      The RD functions (not normalized with Ntrig).

Zero Crossing triggering condition (zer)


RD1azer RD2azer RD3azer RD4azer
RD1pzer RD2pzer RD3pzer RD4pzer
>>[Ntrig RD]=RD3azer(X,n,no);

The input/output variables, which are common to all the above RD functions, are
Input:    X       The measurement matrix.
          n       The number of points in the RD functions.
          no      The number of the triggering measurement.
Output:   Ntrig   The number of triggering points.
          RD      The RD functions (not normalized with Ntrig).

The different triggering conditions have been implemented so that the only difference in the input to these functions is the triggering levels. The argument for not normalizing the RD functions is that if the aim is to estimate modal parameters from a single set of RD functions, it is not necessary to normalize them. This means less computational time.
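If scaled RD functions are needed, for instance for comparison with correlation function estimates, the output can simply be divided by the number of triggering points afterwards. A hedged usage sketch (the parameter values are only illustrative; a large upper triggering level plays the role of ∞):

>>[Ntrig RD]=RD1apos(X,201,1,1,80);
>>D=RD/Ntrig;

Here 201 points per RD function are estimated with measurement no. 1 as the triggering measurement and triggering levels a1 = 1 and a2 = 80.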

The difference between e.g. RD1apos and RD1ppos is that RD1apos estimates RD functions with both positive and negative time lags, whereas RD1ppos only uses positive time lags. The difference between RD1apos - RD4apos and RD1ppos - RD4ppos is the speed and accuracy of the different implementations. The numbers refer to the following restrictions on the data matrix X.

1: X in double precision. No restriction on the size of X.
2: X in single precision. No restriction on the size of X.
3: X in double precision. Maximum size of X is 65535 points.
4: X in single precision. Maximum size of X is 65535 points.

The fastest functions are RD4a??? or RD4p??? and the slowest functions are RD1a??? or RD1p???. For all of the above functions the uncompiled file name has the extension .c and the compiled versions have the extension .mex (MATLAB Executable), since they can be called directly from the MATLAB environment. Common to all functions is that online help is available by typing the function name in the MATLAB environment. The help functions have the extension .m and are simple editable files. The online help utility is simple to use. If the following command is given in the MATLAB environment, the help file will respond as shown below.

>>RD1apos

will result in the following help statement.


************************************************************************
** RD1aPOS.C : Random Decrement Function **
* Positive Point Triggering Condition **
* **
* Purpose : To calculate the Random Decrement functions using **
* triggering at positive points. The function uses **
* both positive and negative time delays. This **
* function can deal with infinitely many points in **
* the data matrix and works with DOUBLE precision. **
* Remember to use an unequal number of points in **
* the functions. **
* **
* Call : RD1aPOS(X,n,no,a1,a2) **
* X: Matrix with time series. Time series must be **
* at the columns. **
* n: Number of points in RD functions. Choose an **
* unequal number. **
* no: Number of the reference (triggering) measure- **
* ment. **
* a1: Trig level lower bound. Lower triggering le- **
* vel. **
* a2: Trig level upper bound. Upper triggering le- **
* vel. **
* **
* Return : 1: Number of triggering points. **
* 2: Estimated RD functions from matrix. **
* **
* Computed : JCA 1/06-96 **
* Edited : JCA 1/06-96 **
************************************************************************

This finishes the description of the computational RD algorithms in HIGH-C. For the VRD technique the computational algorithm has also been programmed in HIGH-C and linked to MATLAB using MATLAB's external interface facilities. Only the positive point triggering condition has been implemented, since this is the only triggering condition which gives sufficient triggering points. Furthermore, only a single type, corresponding to type three of the RD functions, has been implemented.

Vector triggering condition

vectora vectorp

>>[Ntrig VRD]=vectora(X,n,a1,a2,vec1,m,k);

The input/output variables are


Input:    X       The measurement matrix.
          n       The number of points in the RD functions.
          a1      The lower triggering level.
          a2      The upper triggering level.
          vec1    Time shifts for the vector triggering condition.
          m       Size of the vector triggering condition.
          k       Maximum time shift of the vector condition.
Output:   Ntrig   The number of triggering points.
          VRD     The VRD functions (not normalized with Ntrig).
The difference between vectora and vectorp is that vectorp only estimates positive time lags in the VRD functions, whereas vectora estimates positive and negative time lags. The uncompiled versions of the functions have the extension .c and the compiled versions have the extension .mex. Online help is also available for the VRD functions. The following MATLAB command will result in the help statement
>>vectora
*************************************************************************
** VECTORa.C : Vector Random Decrement Function **
* Positive point triggering condition **
* **
* Purpose : To estimate the Vector Random Decrement func- **
* tions using positive point triggering. The function **
* uses both positive and negative time delays. All **
* variables are of size DOUBLE. Remember to use an **
* unequal number of points in the functions. **
* **
* Call : vectora(X,n,a1,a2,vec1,m,k) **
* x: Data matrix with measurements. **
* n: Number of points in RD functions. Choose an un- **
* equal number. **
* a1: Lower triggering level vector. **
* a2: Upper triggering level vector. au>al>0. **
* vec1: Vector with triggering time shifts. **
* m: Size of vector condition. **
* k: Maximum time shift. **
* **
* Return : 1 : Number of triggering points. **
* 2 : Estimated VRD functions from matrix. **
* **
* Computed by : JCA 01/06-96 **
* Edited by : JCA 01/09-96 **
*************************************************************************

6.2.2 MATLAB Utility Functions


This section describes functions programmed in MATLAB which use the basic *.mex functions, i.e. the RD/VRD functions, for further analysis. Usually, in the analysis of several simultaneously recorded measurements, the full covariance (correlation) matrix is estimated. This means that the above functions have to be called several times, since each function call only results in an estimated set of RD functions corresponding to a column of the correlation matrix. In order to standardize this computational routine, two MATLAB functions, fullcova.m and fullcovp.m, have been programmed. These functions return the full covariance matrix at positive and negative time lags (a), or at positive time lags (p) only. The online help for fullcova is

***************************************************************************
** FULLCOVA.M : Random Decrement utility functions **
* **
* Purpose Estimation of FULL COVariance matrices using RD **
* functions. Both positive and negative time lags are **
* used. Notice that the RD functions are scaled so **
* that the true covariance matrix are returned. **
* **
* Call : fullcova(X,Type1,Type2,n,a1,a2) **
* X: Data matrix with measurements. **
* Type1: Type of implementation of the RD technique. **
* 1: Double precision, 0-4294967295 points. **
* 2: Single precision, 0-4294967295 points. **
* 3: Double precision, 0-65535 points. **
* 4: Single precision, 0-65535 points. **
* Type2: Type of triggering condition used for esti- **
* mation of covariance functions. **
* 1: Level crossing triggering condition. **
* 2: Local extremum triggering condition. **
* 3: Every positive point triggering condition. **
* 4: Zero crossing with positive slope trig condition. **
* !!This triggering condition estimates the time deriva- **
* tive of the covariance functions. **
* N: Number of points in correlation functions. Must **
* be unequal in order to take the zero time lag into **
* account. **
* a1: If Type2=1, a1 is the triggering level. **
* If Type2=2,3 a1 is the lower triggering bound **
* a2: If Type2=1, a2 is not used. **
* If Type2=2,3 a2 is the upper triggering bound. **
* **
* Return : Ntrig: Vector with number of triggering points. **
* : Cfull: Covariance matrix. C11=row1. C21=Row2. **
* Cn1=Rown. C12=rown+1. C22=Rown+2 ... **
* **
* Computed by : JCA 01/09-96 **
* Edited by : JCA 01/09-96 **
***************************************************************************

Although the above functions make it easy to estimate the correlation matrix of the measurements, the triggering level and triggering condition should be chosen carefully, as described in chapter 3. Furthermore, the positive point triggering condition and the local extremum triggering condition are implemented so that the triggering levels a1, a2 should not be chosen with a1 ≈ a2. Consider the function RD1apos. The computational code for the detection of triggering points and the averaging process in HIGH-C looks like

for (i = N1; i < Row - N1; i++) {
  if (*(X+(No-1)*Row+i) > *Al && *(X+(No-1)*Row+i) < *Au) {
    (*Po_trig)++;                      /* count the triggering point */
    for (j = 0; j < Col; j++) {
      p = X + j*Row + i - N1;          /* start of the time segment in measurement j */
      for (k = 0; k < N; k++) {
        *(RD + j*N + k) += *(p + k);   /* accumulate the segment into the RD functions */
      }
    }
  }
}

where X is the measurement matrix with Row points and Col measurements, Al and Au are the lower and upper triggering levels, respectively, and Po_trig is the number of triggering points. If a1 → a2, a bias problem arises, since the discrete process might be sampled so that X(k) < a1 and X(k+1) > a2. This will occur for high values of the time derivative. In order to take this situation into account, the condition in the above code should be combined with a crossing condition. So, in conclusion, for the positive point triggering condition and the local extremum triggering condition the triggering levels should not be chosen with a1 ≈ a2.
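A minimal sketch of such a combined condition in MATLAB (an illustration only; whether the HIGH-C functions use exactly this form is not shown here): a sample is accepted as a triggering point either if it lies inside the band [a1, a2], or if the process jumps across the whole band between two consecutive samples:

if ((x(k)>=a1 & x(k)<=a2) | (x(k)<a1 & x(k+1)>a2) | (x(k)>a2 & x(k+1)<a1)), ...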
In order to use information from both positive and negative time lags, and to obtain a quality measure of the RD functions, a MATLAB function which uses the symmetry relations for correlation/covariance functions of stationary processes, see chapter 2, has been programmed. The name of the function is avgcov.m; it returns the averaged estimated covariance functions and the error of the averaged covariance functions. avgcov.m can only be used in combination with the output of fullcova.m. The online help for avgcov.m is

************************************************************************
** AVGCOV.M : Random Decrement utility function. **
* **
* Purpose : From a full covariance matrix with both positive **
* and negative time lags an averaged covariance ma- **
* trix with only positive time lags is returned. Fur- **
* thermore, an error matrix with the difference be- **
* tween positive and negative time lags is returned. **
* The averaging process is based on the validity of **
* the following eqs. for the covariance matrices for **
* stationary processes. **
* Cxy(t)=Cyx(-t) Cyx(t)=Cxy(-t) **
* Call avgcov(Cfull); **
* Cfull: Full covariance matrix with positive and **
* negative time lags (output from fullcova.m). **
* Return CfullP: Averaged covariance matrix. All negative **
* time lags averaged with corresponding positive time **
* lags after Cxy(t)=Cyx(-t) **
* Cerror: Error matrix with difference between posi- **
* tive and negative time lags, theoretically zero **
* matrix. **
* **
* Computed by : JCA 22/3-1996 **
* Edited by : JCA 22/3-1996 **
************************************************************************
The matrix CfullP can be used as input to ITD or PTD and the matrix Cerror can be
used for quality assessment of CfullP.
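As a hedged illustration of the symmetry-based averaging performed by avgcov.m (a sketch only, not the actual implementation; the storage of the two estimates as plain vectors is an assumption), the averaging of one pair of cross covariance estimates could look like:

% Dxy, Dyx: estimated cross covariance (scaled RD) functions at the lags
% -m*dt, ..., 0, ..., m*dt, stored as vectors of length 2*m+1 with the zero
% lag in the middle. For stationary processes Cxy(t) = Cyx(-t), so the
% negative lags of Dyx carry the same information as the positive lags of Dxy.
Dxy  = Dxy(:); Dyx = Dyx(:);
m    = (length(Dxy)-1)/2;
pos  = m+1 : 2*m+1;                 % indices of the lags 0, dt, ..., m*dt
neg  = m+1 : -1 : 1;                % indices of the lags 0, -dt, ..., -m*dt
Cavg = 0.5*(Dxy(pos) + Dyx(neg));   % averaged estimate at positive lags
Cerr = Dxy(pos) - Dyx(neg);         % theoretically zero; used as a quality measure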

For the VRD technique a general MATLAB function has been programmed. The function uses vectora.mex and vectorp.mex. The online help for this function is

****************************************************************************
** VECTRICA.M : Random Decrement utility function. **
* **
* Purpose : To calculate the Random Decrement functions using **
* vector positive point triggering. Assuming more mea- **
* surements than channels. The vector triggering con- **
* dition is assumed to be applied to the first n mea- **
* surements. **
* **
* Call : VECTRICA(X,n,a1,a2,Type,Vec1,Vec2) **
* X: Data matrix with measurements. **
* n: Number of points in the RD functions. If positive **
* and negative time lags are used (Type=1) n is unequal **
* a1: Lower triggering level vector (which is multi- **
* plied by the standard deviation of the corresponding **
* measurement). **
* a2: Upper triggering level vector (which is multi- **
* plied by the standard deviation of the corresponding **
* measurement). **
* Type: If Type=1, both positive and negative time de- **
* lays are used in the RD functions. If Type=2, only **
* positive time delays are used. **
* Vec1: Vector with time delays for the triggering **
* condition. **
* Vec2: Vector with sign of the triggering condition. **
* 1 or -1. **
* **
* Return : Ntrig: Number of triggering points. **
* VRD: Vector Random Decrement functions. **
* **
* Computed by : JCA 01/09-96 **
* Edited : JCA 01/09-96 **
****************************************************************************

The modal parameters of the system can be extracted from the output of either vectrica, avgcov, fullcova, fullcovp or the basic RD and VRD functions programmed in HIGH-C. For this purpose the ITD and the PTD algorithms have been programmed in MATLAB. The online help for these functions is as follows.
*****************************************************************************
** ITD_UHSM.M : Ibrahim Time Domain function. **
* **
* Purpose To estimate eigenfrequencies, damping ratios and mode **
* shapes from free decay measurements from an MDOF **
* structural system. **
* **
* Call ITD_UHSM(X,Dt,N1,N2,N3,N4,N5,Vecmeas,Avg,Name) **
* X: Data matrix containing the free decay measure- **
* ments or the RD/VRD functions. **
* Dt: The sampling interval. **
* N1: The number of points omitted from the input ma- **
* trix in the estimation procedure. The first points **
* might be biased due to noise. **
* N2: Only every N2-th point is used in the identifica- **
* tion, which increases the sampling interval to Dt*N2. **
* N3: The number of physical modes to be identified. **
* N4: The matrix aspect ratio for linear regression. **
* N5: The number of time delays for calculation of MCFs **
* Vecmeas: Vector with information about the free de- **
* cays or RD/VRD functions. The vector informs about **
* the rows of the reference measurements. The reference **
* measurement should always be the top measurement. **
* Avg: Indicates if the mode shapes from different set- **
* ups should be averaged. Useful for e.g. RD func- **
* tions corresponding to a full covariance matrix. **
* This option can only be used if the setups have the **
* same number of free decays. **
* Avg=1 averages, other numbers discards the averaging. **
* Name: Name of file to which the results are written. **
* **
* Return RES: Matrix with results. Matrix has 2*N4 rows. **
* Column 1: Estimated eigenfrequencies. **
* Column 2: Estimated damping ratios. **
* Column 3: Averaged MCF magnitude. **
* Column 4: Averaged MCF phase. **
* Column 5: RMS ABS MPF. **
* Column 6:6+N Complex mode shapes. **
* Column 6+N+1:6+2*N: MCF magnitude. **
* Column 6+2*N+1:6+3*N MCF phase. **
* Column 6+3*N+1:6+4*N Absolute Value of MPF **
* **
* Computed by JCA 1/7 1996 **
* Edited by JCA 1/7 1996 **
*****************************************************************************
***************************************************************************
** POLYREF.M : Polyreference Time Domain function. **
* **
* Purpose This is an implementation of the PTD technique. **
* The results are modal parameters and modal confi- **
* dence factors for magnitude and phase. **
* **
* Call polyref(X,Dt,N1,N2,N3,N4,N5,N6,Name); **
* X: Data matrix with free decays or RD/VRD func- **
* tions. **
* Dt: The sampling interval. **
* N1: The number of points omitted from the input ma- **
* trix in the estimation procedure. The first couple **
* of points might be biased due to noise. **
* N2: Every N2-th point is only used in the identifica- **
* tion, increasing the sampling interval to N2*Dt. **
* N3: The number of physical modes to be identified. **
* N4: The number of time delays for MCFs. **
* N5: The number of setups (inputs). **
* N6: The number of measurement locations. **
* Name: Name of file to which the results are written. **
* **
* Return RES: Matrix with results. Matrix has 2*N4 rows. **
* Column 1: Estimated eigenfrequencies. **
* Column 2: Estimated damping ratios. **
* Column 3: Averaged MCF magnitude. **
* Column 4: Averaged MCF phase. **
* Column 5: RMS ABS MPF. **
* Column 6:6+N Complex mode shapes. **
* Column 6+N+1:6+2*N: MCF magnitude. **
* Column 6+2*N+1:6+3*N MCF phase. **
* Column 6+3*N+1:6+4*N Absolute Value of MPF **
* **
* Computed by JCA 1/9 1996 **
* Edited by JCA 1/9 1996 **
***************************************************************************

6.2.3 Example
Consider the 2DOF system from chapter 3 again. The system has the following modal parameters

          f [Hz]   ζ [%]   |φ|_1   |φ|_2   ∠φ_1    ∠φ_2
Mode 1     3.09    1.69    1.00    1.61    0         4.7
Mode 2     4.56    3.56    1.00    0.62    0       173.0

Table 6.1: Modal parameters of the 2DOF system.
The response of this system to Gaussian white noise is simulated and saved in the data matrix X. 10000 points are simulated at a sampling frequency of 120 Hz. In order to calculate the correlation matrix the following commands are given

>>Cfull=fullcova(X,3,3,501,1,80);
>>[Cavg Cerr]=avgcov(Cfull);

The number of points in the scaled RD functions in Cfull is 501. The positive point triggering condition has been used and the triggering levels are chosen as [a1 a2] = [σ_X ∞] (a1 = 1 and a2 = 80 in the call). In order to evaluate the estimate of the correlation functions, the functions in Cavg are plotted together with Cerr.
[Plot: the four averaged correlation functions and the corresponding errors against τ (0-2), on scales of 10^-4 and 10^-3.]

Figure 6.8: [----------]: Average correlation functions. [· · · ·]: Errors of the average of the correlation functions.
The figure illustrates that the errors of the averaged estimate of the correlation functions are small. In order to extract the modal parameters the PTD technique is used. The following command is given

>>RES=polyref(Cavg,1/120,5,1,2,1,2,2,'result')

The number of modes is 2, five points have been removed from the beginning of the correlation functions, and a single time shift is used for the calculation of the MCFs. The RESult matrix is saved in the file 'result'. The absolute values of the first 7 columns of the RES matrix are
RES matrix are
3.10 2.02 0.999 0.11 11.66 1.00 1.61
4.52 3.00 0.998 0.23 20.10 1.00 0.63
which can be compared with the theoretical values in table 6.1. The columns contain the following estimates: 1: frequencies, 2: damping ratios, 3: MCF magnitude, 4: MCF phase, 5: MPFs, 6: mode shape component 1, 7: mode shape component 2.

This example shows how simple it is to estimate modal parameters using MATLAB in combination with the HIGH-C functions.
6.3 Summary
In this chapter considerations concerning the implementation of the RD technique have been discussed. Section 6.1 illustrates the bias problems which can occur in applications of the RD technique. In general the bias problems can be avoided by proper implementation. In the situation where the sampling frequency is much higher than the maximum eigenfrequency of the system, the bias problems vanish.

Section 6.2 describes the different implementations of the RD functions made during this work, and it is illustrated how to use these functions. The RD functions are implemented in HIGH-C and linked to MATLAB using MATLAB's external interface facilities in order to make the technique as fast as possible. Several different implementations of each triggering condition are made in order to be able to select the fastest and most accurate function depending on the size and the precision of the available data.

Bibliography
[1] HIGH-C/C++ Tools, Library and Program manuals 1992. MetaWare Inc.
[2] MATLAB User's Guide (August 1992). MathWorks, Inc.
[3] MATLAB External Interface Guide (January 1992). MathWorks, Inc.
[4] Brincker, R., Jensen, J.L. & Krenk, S. Spectral Estimation by the Random Dec
Technique. Proc. 9th International Conference on Experimental Mechanics, Lyngby,
Copenhagen, Aug. 20-24, 1990.
[5] Brincker, R., Krenk, S. & Jensen, J.L. Estimation of Correlation Functions by the
Random Dec Technique. Proc. Skandinavisk Forum for Stokastisk Mekanik, Lund,
Sweden, Aug. 30-31, 1990.
[6] Brincker, R., Kirkegaard, P.H. & Rytter, A. Identification of System Parameters
by the Random Decrement Technique. Proc. 16th International Seminar on Modal
Analysis, Florence, Italy, Sept. 9-12, 1991.
[7] Brincker, R., Krenk, S. & Jensen, J.L. Estimation of Correlation Functions by the
Random Decrement Technique. Proc. 9th International Modal Analysis Conference
and Exhibit, Firenze, Italy, April 14-18, 1991.
[8] MATLAB Reference Guide (October 1992). MathWorks, Inc.
Chapter 7
Estimation of FRF by Random
Decrement
In this chapter a new method for estimating FRFs is tested. The method is based on the RD functions of the load applied to and the response from a linear system. It is assumed that the input is stochastic and stationary. Traditionally, the measured response of and load on a linear system have been analysed using the FFT algorithm in order to obtain the FRFs. As described in the introduction to this thesis, such an approach will in general always result in biased estimates of the FRM. Using the RD technique as the basis for the estimation of the FRFs, the bias can under the right circumstances be removed. This is an important property of the RD technique. This method was first tested in Brincker et al. [1] and Asmussen et al. [2].

Section 7.1 describes the traditional FFT-based approaches for estimating the FRFs. The different approaches which are used to minimize bias and random errors are described. In section 7.2 the theoretical background for estimating FRFs using the RD technique is given, together with a description of the expected advantages and disadvantages. Section 7.3 presents an illustrative simulation study and an experimental test performed in order to validate the performance of the method.

7.1 Traditional FFT Based Approach


This section gives a short review of the traditional FFT based approach for estimation of FRFs. As described in the introduction to this thesis, the spectral densities can be defined using Fourier transformation, see e.g. Bendat & Piersol [3], Schmidt [4]. Consider two stationary stochastic processes X(t) and Y(t). The spectral densities are defined as

S_{XX}(\omega) = \lim_{T \to \infty} E\left[\frac{1}{T} X_k^{*}(\omega,T)\, X_k(\omega,T)\right]   (7.1)

S_{YX}(\omega) = \lim_{T \to \infty} E\left[\frac{1}{T} Y_k^{*}(\omega,T)\, X_k(\omega,T)\right]   (7.2)

S_{XY}(\omega) = \lim_{T \to \infty} E\left[\frac{1}{T} X_k^{*}(\omega,T)\, Y_k(\omega,T)\right]   (7.3)

S_{YY}(\omega) = \lim_{T \to \infty} E\left[\frac{1}{T} Y_k^{*}(\omega,T)\, Y_k(\omega,T)\right]   (7.4)

where superscript * denotes complex conjugate and

X(\omega,T) = \frac{1}{2\pi} \int_0^T X(t)\, e^{-i\omega t}\, dt , \quad Y(\omega,T) = \frac{1}{2\pi} \int_0^T Y(t)\, e^{-i\omega t}\, dt   (7.5)
The mean value operation in eqs. (7.1) - (7.4) is over the statistical ensemble of the different realizations, x_k(t) and y_k(t), of the processes X(t) and Y(t). In practice the limit T → ∞ can never be obtained. This is modelled by multiplying the realizations of the processes by a window function, which delimits the time period of the realizations to T

x(t,T) = W(t,T) \cdot x(t) , \quad y(t,T) = W(t,T) \cdot y(t) , \quad W(t,T) = \begin{cases} w(t) & |t| < T/2 \\ 0 & |t| \geq T/2 \end{cases}   (7.6)

If w(t) = 1 the boxcar or rectangular window is used. The limitation of the time period of the realizations forces the frequency resolution to be finite, \Delta\omega = \frac{2\pi}{T}. The spectral densities in eqs. (7.1) - (7.4) can only be estimated as e.g. eq. (7.2)

\hat{S}_{YX}(\omega) = E\left[\frac{1}{T} Y_k^{*}(\omega,T)\, X_k(\omega,T)\right]   (7.7)

This estimate will always be biased due to the finite record period or window effects. These bias errors are usually denoted leakage errors, since energy (or signal power) in a given frequency band is moved to the surrounding frequency bands. Random errors can also be introduced if only a finite number of realizations of the processes X(t) and Y(t) are available. The bias errors can be reduced by using more complicated window functions than the boxcar window, such as the Hanning or the Hamming window, see e.g. Schmidt [4]. In a real-life situation only a single realization of each process is available (the measurements). This problem is solved by assuming that the processes are ergodic. Then the realizations can be divided into a number of segments which represent the statistical properties of the ergodic process. The above description follows Bendat & Piersol [3] and Schmidt [4].
The stochastic process X(ω) is interpreted as a load applied to a linear structure at location i, and Y(ω) is the stochastic process describing the corresponding response of the structure at location j. The FRF which transfers X(ω) into Y(ω), H_{ji}(ω), will for simplicity be denoted H(ω) without any subscripts

Y(\omega) = H(\omega)\, X(\omega)   (7.8)

Using the definition of the spectral densities in eqs. (7.1) - (7.4), the FRF can be expressed in terms of spectral densities

S_{XY}(\omega) = H(\omega)\, S_{XX}(\omega)   (7.9)

S_{YY}(\omega) = H(\omega)\, S_{YX}(\omega)   (7.10)

Equations (7.9) and (7.10) constitute the basis of the two basic estimators of the FRF, H_1(ω) and H_2(ω)

\hat{H}(\omega) = H_1(\omega) = \frac{\hat{S}_{XY}(\omega)}{\hat{S}_{XX}(\omega)}   (7.11)

\hat{H}(\omega) = H_2(\omega) = \frac{\hat{S}_{YY}(\omega)}{\hat{S}_{YX}(\omega)}   (7.12)
Other more complicated estimators of the FRF exist, such as H3 and H4, see e.g. Fabunmi et al. [5] and Yun et al. [6], but H1 and H2 are the basic estimators. Assume that the input measurement or load X(t) is measured without the introduction of any noise, while the response or output Y(t) is measured with a noise process added. If this noise process is uncorrelated with the response, the H1 estimator will result in a true FRF, in the sense of being independent of the noise added to the response. On the other hand, assume that the input measurement is collected with a noise process added and the output measurement is noise free. If the noise process is independent of the input or load, the H2 estimator will result in a true FRF, in the sense of being independent of the noise added to the input. For more complicated noise situations other estimators have been developed, as mentioned previously.
The ordinary coherence function is defined as

\gamma_{xy}^{2}(\omega) = \frac{|S_{XY}(\omega)|^{2}}{S_{XX}(\omega)\, S_{YY}(\omega)} , \quad 0 \leq \gamma_{xy}^{2}(\omega) \leq 1   (7.13)

The estimate of the coherence function is calculated using the estimates of the spectral densities. The coherence function can be used for quality assessment of the estimate of the FRF. A high coherence means a good estimate of the FRF at the corresponding frequency and a low coherence means a poor estimate of the FRF. Low coherence could also be interpreted as an indicator of non-linearities. The coherence should be high (≈ 1) around the peaks of an FRF.
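As a hedged illustration of the estimators in eqs. (7.11) - (7.13) (a sketch only; the segment length, the Hanning window and the use of non-overlapping segments are arbitrary choices made here, not values used in this thesis), the spectral densities can be averaged over segments of a measured load x and response y before the ratios are formed:

% x, y: measured load and response. The scale factors 1/T etc. are omitted,
% since they cancel in the ratios H1, H2 and in the coherence.
x = x(:); y = y(:);
nseg = 1024;                                      % segment length (assumption)
w    = 0.5*(1 - cos(2*pi*(0:nseg-1)'/(nseg-1)));  % Hanning window
nsg  = floor(length(x)/nseg);                     % number of segments
Sxx = zeros(nseg,1); Syy = Sxx; Sxy = Sxx; Syx = Sxx;
for k = 1:nsg
  idx = (k-1)*nseg + (1:nseg)';
  Xk  = fft(w.*x(idx));
  Yk  = fft(w.*y(idx));
  Sxx = Sxx + conj(Xk).*Xk;                       % cf. eq. (7.1)
  Syy = Syy + conj(Yk).*Yk;                       % cf. eq. (7.4)
  Sxy = Sxy + conj(Xk).*Yk;                       % cf. eq. (7.3)
  Syx = Syx + conj(Yk).*Xk;                       % cf. eq. (7.2)
end
H1  = Sxy./Sxx;                                   % eq. (7.11)
H2  = Syy./Syx;                                   % eq. (7.12)
coh = abs(Sxy).^2./(Sxx.*Syy);                    % eq. (7.13)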
The problems with an approach for estimating the FRFs based on the FFT algorithm can be summarized as:

• the choice of a proper window function.
• the choice of a proper estimator.

In the next section an approach for estimating FRFs based on the RD technique is introduced. The main difference from the traditional methods presented briefly in this section is that the averaging process, see e.g. eqs. (7.1) - (7.4), is performed in the time domain before the Fourier transformation is applied. This makes it possible to obtain unbiased estimates of the FRF.

7.2 Random Decrement Based Approach


Two ergodic stochastic processes X(t) and Y(t) are considered. As in section 7.1, X(t) will be interpreted as the load at point i on a structure and Y(t) as the corresponding response of the structure at point j. The IRF and FRF will be abbreviated h(t) and H(ω) instead of h_{ji}(t) and H_{ji}(ω). The structure is assumed to be linear and time-invariant as described in chapter 2. The response of the structure is given by the convolution integral

Y(t) = \int_{-\infty}^{t} h(t-\eta)\, X(\eta)\, d\eta   (7.14)

where the influence of the initial conditions has been neglected. Substituting variables, t → t + τ and η = t + ξ, eq. (7.14) can be rewritten as

Y(t+\tau) = \int_{-\infty}^{\tau} h(\tau-\xi)\, X(t+\xi)\, d\xi   (7.15)

Taking the conditional mean value of eq. (7.15) yields

E[Y(t+\tau) \mid T_{X(t)}^{GA}] = \int_{-\infty}^{\tau} h(\tau-\xi)\, E[X(t+\xi) \mid T_{X(t)}^{GA}]\, d\xi   (7.16)

or

E[Y(t+\tau) \mid T_{Y(t)}^{GA}] = \int_{-\infty}^{\tau} h(\tau-\xi)\, E[X(t+\xi) \mid T_{Y(t)}^{GA}]\, d\xi   (7.17)

Using the definition of the RD functions introduced in chapter 3, eqs. (7.16) and (7.17) become

D_{YX}(\tau) = \int_{-\infty}^{\tau} h(\tau-\xi)\, D_{XX}(\xi)\, d\xi   (7.18)

D_{YY}(\tau) = \int_{-\infty}^{\tau} h(\tau-\xi)\, D_{XY}(\xi)\, d\xi   (7.19)

Equations (7.18) - (7.19) establish relations for estimating the IRF. The original processes are transformed into the RD functions, but the input-output relation is preserved. The advantage is that noise has been averaged out in the estimation process of the RD functions. Also, the size of the problem has been reduced, since the RD functions contain a considerably smaller number of points than the original time series. If the response of a structure is measured at several points, the relation in eq. (7.19) could be extended to cross RD functions only, by using the RD functions D_{Y_i Y_j}(τ) and D_{X Y_j}(τ). By introducing the Fourier transform of an RD function, Z_{XY}(ω), defined as

Z_{XY}(\omega) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-i\omega\tau}\, D_{XY}(\tau)\, d\tau   (7.20)

eqs. (7.18) and (7.19) can be transformed into the frequency domain as

Z_{YX}(\omega) = H(\omega)\, Z_{XX}(\omega)   (7.21)

and

Z_{YY}(\omega) = H(\omega)\, Z_{XY}(\omega)   (7.22)

These two equations constitute a basis for estimating H(ω), corresponding to the H1 and H2 estimators in eqs. (7.11) and (7.12). The approach in eq. (7.21) is denoted H_1^{RD} and the approach in eq. (7.22) is denoted H_2^{RD}

H(\omega) = H_1^{RD} = \frac{Z_{YX}(\omega)}{Z_{XX}(\omega)}   (7.23)

H(\omega) = H_2^{RD} = \frac{Z_{YY}(\omega)}{Z_{XY}(\omega)}   (7.24)

Corresponding to eq. (7.13), a coherence function for the RD functions can be defined as

\gamma_{xy,RD}^{2} = \frac{H_1^{RD}}{H_2^{RD}} = \frac{Z_{YX}(\omega)\, Z_{XY}(\omega)}{Z_{XX}(\omega)\, Z_{YY}(\omega)}   (7.25)

There is a main difference between the two coherence functions defined in this chapter. In eq. (7.13) the coherence function is based on a number of averages in the frequency domain, whereas eq. (7.25) is only based on two different estimates in the frequency domain; here the averaging process is performed in the time domain. Alternatively, the coherence function for the estimates based on the RD technique could be based on several RD functions estimated with different triggering levels.
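To make the estimation procedure concrete, a minimal sketch of the H_1^{RD} estimator of eq. (7.23) is given below. It assumes a single realization of the load x and the response y, positive point triggering with band levels [a1, a2], two-sided RD functions and no window; the function names and arguments are illustrative only and are not taken from the thesis.

```python
import numpy as np

def rd_function(trig, resp, a1, a2, n):
    """Two-sided RD function D_{resp,trig}(tau), tau = -n*dt, ..., n*dt, estimated
    with the positive point triggering condition a1 <= trig(t) < a2."""
    idx = np.flatnonzero((trig[n:-n] >= a1) & (trig[n:-n] < a2)) + n
    segments = np.stack([resp[i - n:i + n + 1] for i in idx])
    return segments.mean(axis=0)

def h1_rd(x, y, dt, a1, a2, n):
    """Sketch of the H1^RD estimator, eqs. (7.21)/(7.23): trigger on the measured
    input x and divide the Fourier transformed RD functions."""
    Dxx = rd_function(x, x, a1, a2, n)
    Dyx = rd_function(x, y, a1, a2, n)
    # Discrete version of eq. (7.20); the factor 1/(2*pi) cancels in the ratio.
    Zxx = np.fft.rfft(np.fft.ifftshift(Dxx)) * dt
    Zyx = np.fft.rfft(np.fft.ifftshift(Dyx)) * dt
    freqs = np.fft.rfftfreq(2 * n + 1, dt)
    return freqs, Zyx / Zxx

# Example use with levels corresponding to [0.5*sigma_X, infinity[:
# freqs, H = h1_rd(x, y, dt=1/5.8, a1=0.5*np.std(x), a2=np.inf, n=512)
```

Triggering on the response y instead, and dividing Z_YY by Z_XY, would give the corresponding H_2^{RD} estimator of eq. (7.24).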
It will now be assumed that the load is Gaussian white noise. This means that the response will also be Gaussian distributed. It also means that the RD functions will be proportional to the correlation functions of the processes. Since R_{YX}(τ), R_{XY}(τ), R_{YY}(τ) and R_{XX}(τ) all satisfy R → 0 for |τ| → ∞, it follows that all RD functions in eqs. (7.18) and (7.19) dissipate towards zero with increasing time distance from zero. The result of this relation is that the bounds in the Fourier transformation do not have to be -∞ and ∞, and thereby no leakage errors will occur. This assumes that R(τ_max) ≈ 0, where τ_max is the maximum time lag in the correlation function.
Assume that the input to the system is measured with a noise process, U(t), added. The noise process is assumed to be Gaussian distributed and uncorrelated with the measured output, which is free of noise. Subscript M denotes the measured realizations of the different processes

y_M(t) = y(t), \qquad x_M(t) = x(t) + u(t)    (7.26)

The RD functions are proportional to the correlation functions, since the processes are Gaussian distributed

R_{Y_M Y_M}(\tau) = E[y_M(t + \tau) y_M(t)] = R_{YY}(\tau)    (7.27)

R_{X_M Y_M}(\tau) = E[y_M(t + \tau) x_M(t)] = R_{XY}(\tau) + R_{UY}(\tau) = R_{XY}(\tau)    (7.28)

In this situation the estimation of the FRF should be based on eq. (7.22). Correspondingly, if a Gaussian distributed noise process is added to the output of the system and the measured input is noise free, the estimation of the FRF should be based on eq. (7.21).
Two main advantages are expected from using the RD based method for estimation of FRFs compared to the traditional method based on pure FFT. The computational time is expected to decrease, since the estimation of RD functions only involves averaging, whereas the estimation using pure FFT involves multiplication. In general this question cannot be answered, since the estimation time for the RD technique depends on the statistical description of the processes. This issue was discussed in chapter 3. RD functions are estimated without bias and dissipate towards zero for increasing absolute time lags. This is an advantage, since no leakage errors are introduced.
7.3 Case Studies
This method was first tested in an introductory simulation study of a 3DOF system loaded by white noise, see Asmussen & Brincker [2]. In Brincker & Asmussen [1] the method was tested and compared with the FFT based approach using experimentally obtained data. This section starts with an illustrative example of an SDOF system. The purpose is to illustrate the advantages and disadvantages of the method and to describe the different problems which arise in the application of this technique. The method is further investigated by analysing the vibrations of a laboratory bridge model loaded by Gaussian white noise through a shaker.

7.3.1 Basic Case - SDOF System


Consider an SDOF system with an eigenfrequency f = 1 Hz and a low damping ratio of ζ = 0.6%. The system is loaded by Gaussian white noise. Firstly, the measurements consist of 10000 points sampled at 5.8 Hz. The FRF is calculated using the traditional method based on the FFT algorithm and the H_1 estimator. 1024 points are used in each time segment for each Fourier transformation, and each time segment is multiplied by the Hanning window. The FRF is also estimated using the H_1^{RD} estimator. The positive point triggering condition is used with 1025 points in each RD function, and the triggering bounds are chosen as [a_1, a_2] = [0.5σ_X, ∞[. From the FRFs the IRFs are calculated using inverse FFT. The eigenfrequencies and the damping ratios are estimated from the IRFs using the ITD algorithm. In order only to have bias errors, the modal parameters are estimated using the two approaches from 100 independent simulations of the Gaussian white noise load and the corresponding response. Figure 7.1 shows a typical auto and cross RD function for the Gaussian white noise load, D_XX(τ), and the response, D_YX(τ).

Figure 7.1: Typical auto (load) and cross (response) RD functions estimated using positive
point triggering.
The RD functions in fig. 7.1 illustrate the idea of using the RD technique for estimation of FRFs. The auto RD function is seen to be almost identical to the auto correlation function of the white noise load. An appropriate choice of window function can increase the accuracy of the RD functions. The window functions can be chosen as e.g. symmetric exponentials or a force window for the auto RD functions, according to the standard approach used in impact testing.
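A minimal sketch of how such windows could be applied to already estimated two-sided RD functions is shown below; the decay constant beta and the force-window width are assumed values, not taken from the text.

```python
import numpy as np

def exponential_window(taus, beta):
    """Symmetric exponential window exp(-beta*|tau|) for a two-sided RD function."""
    return np.exp(-beta * np.abs(taus))

def force_window(taus, width):
    """Simple rectangular 'force' window keeping only the lags near tau = 0."""
    return (np.abs(taus) <= width).astype(float)

# Hypothetical use on RD functions Dxx, Dyx sampled at the lags in `taus`,
# e.g. the symmetric exponential window applied to both functions as in fig. 7.2:
# w = exponential_window(taus, beta=0.02)
# Dxx_w, Dyx_w = Dxx * w, Dyx * w
```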
Figure 7.2 shows the RD functions from fig. 7.1, where a symmetric exponential window has been applied to both functions.
Figure 7.2: Typical auto, D_XX(τ), and cross, D_YX(τ), RD functions estimated using positive point triggering and a symmetric exponential window.

The effect of the exponential window is clear. The RD functions dissipate to zero more efficiently, which means that leakage errors are controlled by the exponential window.

Figure 7.3 shows the absolute value of a typical FRF calculated theoretically and using the H_1 and the H_1^{RD} estimators. The result for the H_1^{RD} estimator is based on the RD functions shown in fig. 7.1. The different estimates of the FRF are very alike except for the ragged curve, which is the FRF from the H_1^{RD} estimator without applying any window function, see fig. 7.1 (no windowing corresponds to using the boxcar window). The result of applying a symmetric exponential window is that the estimates become smooth on account of artificially introduced damping. The estimated modal parameters can be corrected for this artificial damping using principles developed for impact testing.
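As a brief illustration of that correction (standard impact-testing reasoning, not spelled out in the text), an exponential window w(\tau) = e^{-\beta|\tau|} adds the decay rate \beta to every mode, so a damping ratio estimated from the windowed RD functions can be corrected as

\zeta \approx \hat{\zeta} - \frac{\beta}{\omega_n} = \hat{\zeta} - \frac{\beta}{2\pi f_n}

where \hat{\zeta} is the estimate obtained from the windowed data, f_n the corresponding eigenfrequency and \beta the window decay constant.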
Figure 7.3: Absolute value of FRFs. [-------]: Theoretical. [- - -]: FFT, Hanning window, H_1. [· · ·]: RD-FFT, no windowing, H_1^{RD}. [- · -]: RD-FFT, exponential window, H_1^{RD}.
Figure 7.4 shows a zoom of fig. 7.3 around the resonance frequency.

Figure 7.4: Zoom of absolute value of the FRFs. [-------]: Theoretical. [- - -]: FFT, Hanning window, H_1. [· · ·]: RD-FFT, no windowing, H_1^{RD}. [- · -]: RD-FFT, exponential window, H_1^{RD}.
The curve from the H_1^{RD} estimator based on the RD functions in fig. 7.1 (no windowing) is almost identical to the theoretical curve. If the symmetric exponential window is used, the FRF is smooth over the entire frequency band, but in the vicinity of f the artificial damping is very clear.
In order to investigate the influence of the length of the record period, the number of measurement points is increased to 40000. Figure 7.5 shows the estimated RD functions without windowing.
Figure 7.5: Auto, D_XX(τ), and cross, D_YX(τ), RD functions estimated from 40000 points without windowing.


The figure illustrates that the increase of the record length has increased the accuracy of the RD functions. The noise content is lower and the RD functions dissipate more clearly towards zero with increasing time lags. Figure 7.6 shows the estimated FRFs.

Figure 7.6: FRFs estimated from a record length of 40000 points. [-------]: Theoretical. [- - -]: FFT, Hanning window, H_1. [· · ·]: RD-FFT, no windowing, H_1^{RD}.
Figure 7.6 shows that the H_1^{RD} estimates become smooth with increasing record length, even though no window is applied. Figure 7.7 shows a zoom of the FRFs in fig. 7.6 around the resonant frequency.

Figure 7.7: Zoom of absolute value of the FRFs. [-------]: Theoretical. [- - -]: FFT, Hanning window, H_1. [· · ·]: RD-FFT, no windowing, H_1^{RD}.
The increase of the record length has increased the accuracy of the H1RD estimate, but
the H1 estimate is still biased (an increase in the length of the time segments from 1024
to 2048 would decrease the bias, but for illustration purposes this is omitted).
The estimation of the FRFs using the 5 different approaches is performed 100 times. The mean values and the standard deviations are shown in table 7.1.
Approach                   f [Hz]   σ_f          ζ [%]   σ_ζ
FFT - 10000 pts.           1.000    1.14·10^-4   0.68    0.0054
FFT - 40000 pts.           1.000    0.53·10^-4   0.67    0.0026
RD - 10000 pts.            1.000    6.71·10^-4   0.59    0.0100
RD - 10000 pts., window    1.000    4.97·10^-4   0.60    0.0073
RD - 40000 pts.            1.000    2.69·10^-4   0.60    0.0032

Table 7.1: Average values of the modal parameters and the corresponding standard deviations based on 100 simulations. The theoretical values are f = 1 Hz and ζ = 0.6%.
The results show that all approaches provide unbiased estimates of the eigenfrequencies, but the FFT approach provides biased estimates of the damping ratio. On the other hand, the standard deviation of the estimates using the RD technique is higher than for the estimates using the FFT approach. If no window function is applied and the system has low damping, the ragged spectral densities will result in modal parameters affected by high random errors. The high standard deviations of the modal parameters estimated based on the RD technique are a result of the difficulties which arise in the Fourier transformation of the RD functions.
7.3.2 Experimental Study - Laboratory Bridge Model
This case study is based on a laboratory bridge model loaded by Gaussian white noise. The model consists of a simply supported steel plate with 3 spans. The steel plate has the dimensions 3.0 x 0.35 m. The length of each span is 1 m. A shaker is attached at the right-hand span. The shaker excites the bridge model with Gaussian white noise in the frequency span 0-60 Hz. The measurements consist of 32000 points sampled at 150 Hz. The measurements are analog and digitally filtered to avoid aliasing and to suppress high frequency noise. Figure 7.8 shows an outline draft of the bridge; the sensor locations are also indicated.

Figure 7.8: Laboratory bridge model and sensor locations.

The measurements are collected in 3 setups consisting of 7, 7 and 6 response records in each setup and the corresponding record of the load. Two different approaches are used to estimate the FRFs of the structure. The approach based on the FFT algorithm with the H_1 estimator is used. Each time segment has a length of 1024 points and is multiplied by the Hanning window to reduce leakage errors. The second approach is based on the H_1^{RD} estimator. The positive point triggering condition is used with triggering levels chosen as [0.5σ_X, ∞[. Each RD function has a length of 825 points. Common to both approaches is that the FRFs are transformed into IRFs using inverse FFT. The modal parameters are extracted from the IRFs using the PTD algorithm. Structural modes are separated from noise modes using the MCF, the MPF and a requirement for low damping ratios, as described in chapter 2. Table 7.2 shows the estimated eigenfrequencies and the corresponding damping ratios.

f [Hz] (RD)   f [Hz] (FFT)   ζ [%] (RD)   ζ [%] (FFT)
11.64         11.63          1.14         0.73
15.45         15.44          0.34         0.56
21.51         21.50          0.29         0.47
45.09         45.08          0.10         0.18
47.97         47.97          0.17         0.24
49.96         50.11          0.36         0.53
50.30         50.17          0.21         0.15
51.77         51.74          0.15         0.14
61.60         61.61          0.22         0.28
65.47         65.45          0.43         0.50
Table 7.2: Modal parameters estimated using FFT and RD-FFT.
There is good agreement between the eigenfrequencies estimated using the two approaches. Only at the two closely spaced modes at about 50 Hz is there a small disagreement. All the damping ratios are small, and it seems that the damping ratios estimated using the RD approach are generally smaller than the damping ratios estimated using the FFT approach. It is the experience obtained during this analysis that the modal parameters of the RD technique are far more sensitive to the choice of model order and the number of points used from the IRFs as input to the PTD algorithm. Compared to the RD technique, the results of the FFT algorithm are more stable and not very sensitive to these choices.

The reason can be that the FRFs estimated using the RD technique are more ragged than the FRFs estimated using the FFT algorithm. It is very difficult to calculate the FFT of the RD functions with a satisfactory result. The result is very sensitive to the number of points in the RD functions. Figures 7.9 and 7.10 show two typical FRFs estimated using the H_1^{RD} and the H_1 estimators, respectively.

Figure 7.9: Typical |H(ω)| estimated using H_1^{RD}.

The figure illustrates that the FRFs estimated using H_1^{RD} are very ragged. Around the peaks the shape of the curve is very convincing, since the curve is smooth and it is clear that this system has low damping. The FRF shown in fig. 7.10 is, on the other hand, very smooth. The experience from this experimental study is that the modal parameters estimated based on the RD technique are more uncertain than the modal parameters estimated based on the FFT technique. The model order should be higher and the fluctuation of the modal parameters is also higher. This difference in the FRFs is believed to be the explanation for the high uncertainty (or higher random errors) of the modal parameters estimated using the RD technique.
Figure 7.10: Typical |H(ω)| estimated using H_1.

The higher uncertainty of the modal parameters estimated from H_1^{RD} compared to the results from the H_1 estimator is supported by the estimated mode shapes, which are shown in figs. 7.11 - 7.15. Except for the two closely spaced modes at about 50 Hz, the absolute value of the mode shapes from the H_1 estimator is the most convincing result.

[Mode shape plots - Mode 1: FFT 11.63 Hz, Mode 2: FFT 15.44 Hz (top); Mode 1: RD 11.64 Hz, Mode 2: RD 15.55 Hz (bottom).]

Figure 7.11: Mode shapes 1 and 2 estimated using H1RD and H1. MAC=0.98 and
MAC=0.81, respectively.
[Mode shape plots - Mode 3: FFT 21.5 Hz, Mode 4: FFT 45.09 Hz (top); Mode 3: RD 21.51 Hz, Mode 4: RD 45.09 Hz (bottom).]

Figure 7.12: Mode shapes 3 and 4 estimated using H1RD and H1. MAC=1.00 and
MAC=1.00, respectively.

[Mode shape plots - Mode 5: FFT 47.97 Hz, Mode 6: FFT 50.1 Hz (top); Mode 5: RD 47.97 Hz, Mode 6: RD 49.96 Hz (bottom).]

Figure 7.13: Mode shapes 5 and 6 estimated using H1RD and H1. MAC=1.00 and
MAC=0.54, respectively.
[Mode shape plots - Mode 7: FFT 50.17 Hz, Mode 8: FFT 51.74 Hz (top); Mode 7: RD 50.3 Hz, Mode 8: RD 51.77 Hz (bottom).]

Figure 7.14: Mode shapes 7 and 8 estimated using H1RD and H1. MAC=0.56 and
MAC=0.99, respectively.

[Mode shape plots - Mode 9: FFT 61.61 Hz, Mode 10: RD 65.47 Hz (top); Mode 9: RD 61.6 Hz, Mode 10: RD 65.47 Hz (bottom).]

Figure 7.15: Mode shapes 9 and 10 estimated using H1RD and H1 . MAC=0.96 and
MAC=0.93, respectively.

The MAC values show that in general there is a high correlation between the mode shapes. There is only major disagreement at the closely spaced modes. It is clear that the FFT based estimates are more convincing, since the mode shapes look very smooth.
The estimation time for the two approaches was almost equal. The reason is that the RD functions contain relatively many points and that there is a high number of averages. 1025 points in an RD function is in general a very high number. For a system with higher damping ratios, fewer points in the RD functions are necessary and the estimation time thereby decreases.
It is very difficult to use the RD technique for estimation of FRFs for lightly damped systems. One of the major problems is that although the RD functions are estimated with high accuracy, it is very difficult to calculate the FRF from the RD functions. If systems with higher damping are considered, fewer points in the RD functions are needed. This will make the RD technique much faster than the FFT approach, and the accuracy will also increase. The RD functions are always more accurate close to the centre, and the functions will dissipate towards zero more clearly if a system with higher damping is analysed.

7.4 Summary
A new method for estimating FRFs based on the RD technique has been introduced. The idea is based on the fact that the RD functions of the input and output of a linear system are related, since the RD function of the response is the convolution integral of the RD function of the load and the IRF. This relation is valid regardless of the type of load applied to the structure.
In this chapter it has been assumed that the load is Gaussian white noise. This means that the RD functions will dissipate to zero with increasing absolute time lags. Intuitively, the RD functions are therefore well suited for Fourier transformation, since the restriction of finite time record length is insignificant. The influence of the load can be removed in the frequency domain using either the H_1^{RD} or the H_2^{RD} estimator, both of which are based on plain division. Based on the experience from the simulation study and the analysis of the bridge model in section 7.3.2 and in Brincker et al. [1], it is recommended to use the H_1^{RD} estimator.
Compared with the traditional method based on the FFT algorithm, the RD-FFT approach is not as stable. The auto and cross RD functions can be calculated with satisfactory accuracy if the guidelines given in chapter 3 are followed. It is the transformation of the RD functions into the frequency domain which creates trouble. The experience obtained in this study is that it is necessary to introduce a window function. Otherwise the FRFs will be dominated by the noise introduced with the Fourier transformation. Furthermore, the result depends on a proper choice of the length of the RD functions. If the RD functions contain too many points, the FRFs will be dominated by noise.
The problems are particularly dominant for lightly damped structures. If the damping ratio increases from 0.1%-1% to 3%-5%, the stability of the RD-FFT approach will improve, since the RD functions will dissipate faster. The result is that shorter and thereby more accurate RD functions are obtained.
Further work with this approach is recommended to focus on the relations in eqs. (7.18)
and (7.19). The modal parameters can be extracted directly from these relations by
using algorithms which are based on the input-output relation in the time domain. This
approach would also not require that the input to the structure is white noise.
Bibliography
[1] Brincker, R. & Asmussen, J.C. Random Decrement Based FRF Estimation. Proc. 15th
International Modal Analysis Conference, Orlando, Florida, USA, Feb. 3-6, 1997, Vol.
II, pp. 1571-1576.
[2] Asmussen, J.C. & Brincker, R. Estimation of Frequency Response Functions by Ran-
dom Decrement. Proc. 14th International Modal Analysis Conference, Dearborn,
Michigan, USA, Feb. 12-15, 1996, Vol. I, pp. 246-252.
[3] Bendat, J. & Piersol, A. Random Data: Analysis and Measurement Procedures. John
Wiley & Sons, Inc. 1986. ISBN 0-471-04000-2.
[4] Schmidt, H. Resolution Bias Errors in Spectral Density, Frequency Response and
Coherence Function Measurement, I: General Theory. Journal of Sound and Vibration
(1986) 101(3) pp. 347-362.
[5] Fabunmi, J.A. & Tasker, F.A. Advanced Techniques for Measuring Structural Mobilities. Journal of Vibration, Acoustics, Stress, and Reliability in Design. July 1988, Vol. 110, pp. 345-349.
[6] Yun, C.-B. & Hong, K.-S. Improved Frequency Domain Identifications of Structures. Structural Safety and Reliability, ICOSSAR '93, Vol. 2. 1994 Balkema, Rotterdam, ISBN 90 5410 357 4. pp. 859-865.
Chapter 8
Ambient Testing of Bridges
The main advantages of the RD technique are the low estimation time and the simple estimation algorithm. The advantage of a low estimation time can be utilized in identification of large structures, where the response of the structure is collected at many locations. In general this is the situation in ambient testing of bridges. The purpose of this chapter is to document the applicability of the RD technique for ambient testing of bridges. Ambient testing of bridges refers to measurements of the vibrations of bridges due to ambient loads such as traffic, wind, waves and micro tremors.
In ambient testing of bridges a special terminology is used. Usually the number of measurement locations is higher than the number of measurement channels available from the measurement system. The number of measurement channels available in a bridge measurement system is limited by the number of cables available, the number of accelerometers available and the analog/digital conversion of the measurements. This means that the measurements have to be collected by applying the measurement system several times. A single set of these measurements is denoted a setup. It is necessary to have one or several measurement locations represented in each of the setups. Otherwise it is not possible to link the mode shapes estimated from the different setups together. The measurements collected at the locations which are represented in all setups are denoted reference measurements. In principle a single reference measurement is sufficient, but it is common to use two or more reference measurements. This ensures a high probability that all modes are well represented in one of the reference measurements.
Section 8.1 deals with identification of the Queensborough bridge from ambient vibrations. This work was a pre-investigation of the performance of the RD technique compared to other well-known techniques, such as FFT and ARMAV based approaches. The results of this study encouraged a continuation with the RD technique as a tool for analyzing ambient measurements of bridges.
In section 8.2, a laboratory bridge model is considered. The purpose of this work was to compare the speed and accuracy of the RD and the VRD technique. This study concluded the development and documentation of the VRD technique.
Ambient testing of the Vestvej bridge is reported in section 8.3. These measurements were collected using a bridge measurement system developed as a part of this Ph.D.-project. The purpose of this work, besides testing the performance of the RD technique, is to check the bridge measurement system. The analysis is also a pre-investigation for a demonstration project of vibration based inspection using the RD technique.

8.1 Case Study 1: Queensborough Bridge


This case study presents the results of an application of the RD technique for identification of bridges. The modal parameters of the Queensborough bridge are estimated from ambient responses. The study was performed in order to obtain experience with the RD technique applied to measurements of large civil engineering structures. The ambient data have been analysed using four different techniques by different authors, see Felber et al. [2] - Brincker et al. [5]. The four different approaches are:
- FFT - Spectral densities estimated using FFT, see Felber et al. [2].
- ARMAV - Auto Regressive Moving Average models, see Giorcelli et al. [3].
- TFD - Time-Frequency Domain models, see De Stefano et al. [4].
- RD - Random Decrement technique, see Asmussen et al. [6].
The results of the different approaches were discussed and compared at a single session at the 14th International Modal Analysis Conference. The first analysis of the data was presented in Ventura et al. [1]. The data analysis methodology which constitutes the basis for the RD technique is described in the next section. The data analysis methodology used for the other approaches and their background can be seen in the corresponding papers.
The Queensborough bridge crosses the Fraser river near Vancouver, B.C., Canada. The bridge has a length of 200 m and has 3 spans. A typical cross-section and an outline draft of the bridge are shown in figures 8.1 and 8.2. The illustrations are taken from Ventura et al. [1] and Felber et al. [2].

Figure 8.1: Outline draft of the cross-section of the Queensborough bridge.


Figure 8.2: Outline draft of the Queensborough bridge.
During the measurement period the bridge was mainly loaded by the traffic on the bridge. The vibrations of the bridge were collected using 8 accelerometers at 46 locations on the bridge (including supports). The locations are chosen at equidistant distances on both sides of the bridge. The data are collected in 7 setups with 8 measurements in each setup and a single setup with 4 measurements. Each setup contains the response at two reference locations. This makes it possible to assemble the mode shapes from the different setups by normalizing the mode shapes with one of the reference location components. The data are analog filtered before sampling at 40 Hz, and 32000 points are collected at each location. A description of the test equipment is given in Ventura et al. [1].
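As an illustration of how the mode shape parts from the individual setups can be linked through the reference locations, a minimal sketch is given below; the data layout (a list of location/component pairs per setup) and the function name are assumptions made here, not the processing code used in the study.

```python
import numpy as np

def assemble_mode_shape(setups, ref_location):
    """Link mode shape parts from several setups into one global mode shape by
    scaling each part with its component at a common reference location, so that
    the reference component equals 1 in every setup."""
    global_shape = {}
    for locations, components in setups:
        ref_value = components[locations.index(ref_location)]
        for loc, comp in zip(locations, components):
            global_shape[loc] = comp / ref_value
    return global_shape

# Example with two hypothetical setups sharing reference location 1:
# shape = assemble_mode_shape([([1, 2, 3], np.array([0.5, 1.0, -0.2])),
#                              ([1, 4, 5], np.array([0.25, 0.4, 0.1]))], ref_location=1)
```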
8.1.1 Data Analysis Methodology
An ambient vibration study of a large civil engineering structure usually starts with a pre-analysis of the data. The purpose is to have an idea about the number of modes represented in the data, the noise contents etc. One of the most frequently applied approaches is to average the spectral densities of all the measurements. This will give an idea about the frequencies of the modes and the average contribution from each mode to the measured response. Such an analysis is omitted here, since a full analysis of the data has already been presented in Ventura et al. [1].
It is chosen to use the positive point triggering condition. This condition makes it possible to obtain sufficient triggering points for an accurate estimate of the RD functions. In order to investigate the influence of the triggering level, 4 different conditions are formulated.
T^{P,1}_{X(t)} = \{\sigma_X < x(t) < 2\sigma_X\} \qquad T^{P,2}_{X(t)} = \{2\sigma_X < x(t) < 3\sigma_X\}    (8.1)

T^{P,3}_{X(t)} = \{3\sigma_X < x(t) < 4\sigma_X\} \qquad T^{P,4}_{X(t)} = \{4\sigma_X < x(t) < 5\sigma_X\}    (8.2)
The auto RD functions are calculated from one of the reference measurements. The 4 different RD functions, which have been normalized at the zero time lag, are shown in figure 8.3. The number of triggering points for the four triggering levels was approximately 2000, 600, 300 and 50. There is no significant difference between the 4 RD functions, except for T^{P,4}_{X(t)}, which simply produces too few triggering points. The accuracy of the RD functions increases with the number of triggering points. If noise is present in the triggering measurement it will introduce false triggering points, especially at small triggering levels. To avoid these false triggering points and to have a reasonable estimation time, the triggering bounds are chosen as [a_1, a_2] = [3σ_X, 4σ_X].
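A small sketch of how such band triggering conditions could be evaluated on a record is given below; the function name and the loop over the four bands of eqs. (8.1)-(8.2) are illustrative only.

```python
import numpy as np

def triggering_points(x, a1, a2):
    """Indices where the positive point triggering condition a1 < x(t) < a2 holds,
    with the levels given in multiples of the standard deviation of x."""
    s = np.std(x)
    return np.flatnonzero((x > a1 * s) & (x < a2 * s))

# Hypothetical count of triggering points for the four bands of eqs. (8.1)-(8.2):
# for a1, a2 in [(1, 2), (2, 3), (3, 4), (4, 5)]:
#     print((a1, a2), len(triggering_points(x, a1, a2)))
```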
Figure 8.3: 4 different auto RD functions. [· · ·]: [a_1, a_2] = [σ_X, 2σ_X]. [- · -]: [a_1, a_2] = [2σ_X, 3σ_X]. [- -]: [a_1, a_2] = [3σ_X, 4σ_X]. [-------]: [a_1, a_2] = [4σ_X, 5σ_X].

All RD functions are estimated in each setup, which corresponds to estimating the full
correlation matrix. The RD functions are only estimated for positive time lags. The modal
parameters are extracted using ITD.

8.1.2 Results
In order to extract the modal parameters from the RD functions, the ITD technique is applied to each set of RD functions. The number of sets of RD functions is equal to the number of measurements collected. Figure 8.4 shows the estimated eigenfrequencies as a function of the identification number. It was assumed that 64 modes were sufficient to model both physical and computational modes. Only the modes with ∠MCF < 10 deg, |MCF| > 90% and damping ratios < 10% are selected.
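A minimal sketch of this mode selection step is given below; the mode data structure is an assumption made for illustration, not the actual output format of the ITD implementation.

```python
def select_structural_modes(modes):
    """Keep only modes satisfying the criteria used in the text: damping ratio
    below 10 %, MCF magnitude above 0.9 and MCF phase below 10 degrees.
    Each mode is assumed to be a dict with keys 'f', 'zeta', 'mcf_mag', 'mcf_phase'."""
    return [m for m in modes
            if m['zeta'] < 0.10
            and m['mcf_mag'] > 0.90
            and abs(m['mcf_phase']) < 10.0]
```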

Figure 8.4: Stabilization diagram with estimated natural frequencies from identification of different RD setups.
The selected modes are indicated with a vertical line. The estimated eigenfrequencies and
damping ratios, reported in Felber et al. [2] - Brincker et al. [5], are shown in table 8.1.

Mode    FFT      ARMAV    TFD      RD       TFD      RD
        f [Hz]   f [Hz]   f [Hz]   f [Hz]   ζ [%]    ζ [%]
1^t     1.11     1.12     1.10     1.10     1.94     7.36
2^t     1.87     1.87     1.87     1.88     0.87     1.45
3^r     2.28     2.29     2.28     2.28     0.49     1.86
4^t     2.42     2.42     2.42     2.42     0.84     2.15
5^r     3.20     3.20     3.20     3.20     0.78     1.51
6^r     3.43     3.43     3.45     3.44     0.58     1.08
7^t     3.71     3.70     3.74     3.73     1.40     1.68
8^r     5.16     5.15     5.16     5.15     0.25     0.64
9^t     5.33     -        -        -        -        -
10^t    5.80     5.78     5.84     5.74     0.96     1.15
11^t    7.01     7.04     7.12     7.01     0.65     0.77
12^t    7.52     7.52     7.52     7.53     0.82     0.69
13^t    8.59     8.56     8.64     -        0.32     -

Table 8.1: Natural frequencies and damping ratios of the Queensborough bridge estimated using different approaches. Superscript t = translational mode and superscript r = rotational mode.

Table 8.1 illustrates that there is good agreement between the eigenfrequencies and the damping ratios estimated using the different approaches. For the approaches based on FFT and ARMAV models the estimated damping ratios have not been reported. It is interesting to note that only the FFT based approach identifies a structural mode at 5.33 Hz. The explanation could be that the measurements at opposite positions of the bridge have been subtracted and added in order to separate translational and rotational modes for the results of the FFT analysis. This data analysis strategy has not been applied in the analysis using the other approaches. The argument is that it is not correct to assume all modes to be either purely translational or purely rotational. Usually, the modes will contain contributions from both a translational and a rotational part.

The first rotational mode shape and the second translational mode shape are shown in figure 8.5. The mode shape components are almost in or out of phase.
Figure 8.5: First rotational mode (2.28 Hz) and second translational mode (1.88 Hz).
The estimated mode shapes indicate that the identification of the modal parameters using the RD technique in combination with ITD seems to have been successful.

8.1.3 Conclusion
The results of this introductory application of the RD technique for ambient testing of bridges encouraged continued investigations. It was possible to identify modal parameters with a satisfactory accuracy compared to the results of the other techniques. One of the reasons for this conclusion is that the accuracy of the RD technique can still be improved. The following improvements are recommended:
- Estimate RD functions with both positive and negative time lags. This opens an opportunity for averaging and quality assessment of the RD functions.
- Shift the sign of the time series to obtain the maximum number of triggering points for the chosen triggering levels.
- Use broader triggering levels, e.g. [a_1, a_2] = [σ_X, ∞[, to obtain more triggering points.

8.2 Case Study 2: Laboratory Bridge Model


The purpose of this case study is to make the final development and documentation of the VRD technique. The case study is based on the response of a laboratory bridge model loaded by Gaussian white noise. The bridge model is a 3-span simply supported 0.01 m thick steel plate with a total length of 3 m and a width of 0.35 m. Figure 8.6 illustrates the laboratory bridge model (the bridge model is basically the same as the model analysed in chapter 3).
Figure 8.6: Laboratory bridge model and sensor locations.
The bridge is excited by a shaker attached at the right-hand span. Only identification of the mid span is considered. The 16 different locations of the accelerometers at the mid span are indicated in figure 8.6. The acceleration responses of the bridge are collected at a sampling rate of 150 Hz. The white noise load excites the structure in the frequency range 0-60 Hz. The acceleration response is analog filtered and digitally filtered after sampling. Each measurement consists of 32000 points, corresponding to a sampling period of approximately 3.5 minutes. The measurements are collected using three setups with 7, 7 and 6 measurements in each setup, since two reference points are used.

8.2.1 Data Analysis Methodology


The data have been analysed using the traditional RD technique and the VRD technique. In the following, the approaches for both methods are discussed. Figure 8.7 shows a typical acceleration record of the bridge.


Figure 8.7: Typical acceleration record from the laboratory bridge model.
For the RD technique the positive point triggering condition has been chosen, since this condition ensures that sufficient triggering points can be obtained. The triggering levels are chosen as [a_1, a_2] = [0.5σ_X, ∞[. Any triggering point between 0 and 0.5σ_X is omitted to avoid false triggering points, since this level is expected to be dominated by noise. Figure 8.8 shows a typical auto RD function, where the average function and the error function have been calculated using the symmetry relation as described in chapter 3.

Figure 8.8: Average and error RD function calculated by the symmetry relations. The error RD function is the curve fluctuating around zero.

There are 301 points in the average RD and error RD functions in fig. 8.8. From fig. 8.8 it is concluded that the error RD function is so small that all 301 points can be used in the modal parameter extraction procedure if necessary. The number of triggering points was approximately 5000.
All RD functions are calculated for each setup, corresponding to estimating the full correlation matrix. The average and the error RD functions are estimated using the symmetry relations. The question is whether all sets of RD functions (columns in the correlation matrix) should be used in the modal parameter extraction procedure. In order to investigate this problem, the approach suggested in section 3.7 is used. For each RD function an error measure is calculated as the standard deviation of the error RD function divided by the standard deviation of the average RD function, see eq. (3.64). The result of this analysis for the setup containing six measurements is shown in fig. 8.9. The record number indicates the measurement from which the RD functions are calculated and the reference number indicates the triggering measurement.
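A sketch of the average/error split and the error measure is given below, assuming only the symmetry D(τ) = D(-τ) of an auto RD function; the exact definition and normalization of eq. (3.64) in chapter 3 may differ.

```python
import numpy as np

def split_symmetric(D):
    """Split a two-sided auto RD function (odd length, zero lag at the centre)
    into an average part and an error part using D(tau) = D(-tau)."""
    n = len(D) // 2
    pos = D[n:]            # D(tau) for tau >= 0
    neg = D[n::-1]         # D(-tau) mirrored onto tau >= 0
    return 0.5 * (pos + neg), 0.5 * (pos - neg)

def error_measure(D):
    """Standard deviation of the error part divided by that of the average part."""
    average, error = split_symmetric(D)
    return np.std(error) / np.std(average)
```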
Figure 8.9: Fraction between the standard deviations of the error and average RD functions calculated by the symmetry relations, with 301 points in each function.

All fractions are small and there is no significant difference between them, so all sets of RD functions can be used in the modal parameter extraction procedure.
To illustrate the principle of the modal parameter extraction procedure, the 3 full correlation matrices, estimated using the RD technique as described above, are considered. The PTD technique is used, and for each setup the number of modes is varied from 26 to 28 and the number of points used from the RD functions is varied from 140 to 180. Figure 8.10 shows a stabilization diagram of the estimated frequencies without applying any restriction to the results except that the eigenvalues should appear in complex conjugate pairs.
Figure 8.10: Stabilization diagram.
The following restrictions are applied to the different modes: ζ < 0.05, |MCF| > 0.9 and ∠MCF < 10 deg. This results in the following stabilization diagram.

Figure 8.11: Stabilization diagram.
As seen, it is much simpler to detect structural modes from fig. 8.11 than from the stabilization diagram shown in fig. 8.10. This illustrates the advantage of applying different mode selection criteria. The final resulting modal parameters are shown in section 8.2.2.
In order to apply the VRD technique, the time shifts between the elements of the vector triggering condition should be chosen. As an example, the setup with 6 measurements is considered. Figure 8.12 shows the initial estimate of the RD functions using the level crossing triggering condition. The triggering level is 1.4σ_X.
[Plots of the initial RD functions D_{X_i X_1}(τ), i = 1, ..., 6, used for selecting the time shifts.]
Figure 8.12: Initial RD functions for selection of vector triggering time delays.
The correlation is maximized by choosing the time vector as t = [0 0 0.06 0.06 0.0533 0.0533] seconds or, expressed in numbers of time lags, t = [0 0 9 9 8 8]. A time shift vector chosen as t = [0 0 9 9 -3 -3] would also have maximized the correlation.
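One simple automatic version of this choice, consistent with the discussion of fig. 8.12, is sketched below: each time shift is taken as the lag where the initial cross RD function with the triggering channel attains its largest absolute value. The data layout is assumed, and the thesis itself selects the shifts from the plotted functions.

```python
import numpy as np

def choose_time_shifts(initial_rd, dt):
    """Pick one time shift per channel for the vector triggering condition from a
    list of two-sided initial RD functions (zero lag at the centre)."""
    shifts = []
    for D in initial_rd:
        n = len(D) // 2
        shifts.append(int(np.argmax(np.abs(D))) - n)   # lag index of the extreme value
    return shifts, [s * dt for s in shifts]
```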
The size of the vector triggering condition does not have to be equal to the number of measurements. Table 8.2 shows the actual number of triggering points as a function of the size of the vector triggering condition. The elements of the triggering levels are all chosen as [a_{1i}, a_{2i}] = [0.5σ_{X_i}, ∞[ in order to omit false triggering points introduced in the band [0, 0.5σ_{X_i}].
Size 1 2 3 4 5 6
N 9900 8200 4600 4200 2600 2400
Table 8.2: Size of vector triggering condition and the corresponding number of triggering
points.
The number of triggering points decreases with the size of the vector triggering condition. About 2000 triggering points are sufficient for a reasonable convergence in the averaging process. So the vector triggering condition is chosen to be of maximum size, which corresponds to the number of measurements in each setup. For each setup a single set of VRD functions is calculated. In order to calculate an error function for the VRD functions, the sign of the triggering levels is shifted and the VRD functions are estimated again. A typical average and a typical error VRD function are shown in fig. 8.13.
Figure 8.13: Average and error VRD function calculated by shifting the sign of the triggering levels (301 points).
The error VRD functions are seen to be small compared to the average VRD function. But it is also clear that the significance of the error function increases with the number of time lags. It is not recommended to use all 300 points in the modal parameter extraction procedure. The number of points used should not exceed 150.
The modal parameters are extracted from the VRD functions using the same approach as described for the RD technique. The final results are presented in section 8.2.2.
8.2.2 Results
The modal parameters are extracted from the VRD and the RD functions using PTD. A stabilization diagram with restrictions on the damping ratios (ζ < 10%) and the MCF (magnitude > 90%, phase < 10 deg) is used to select the structural modes from the computational modes. Table 8.3 shows the estimated modal parameters for the two approaches.

Approach   Parameter   1       2       3       4       5       6       7       8
RDAP       f [Hz]      12.31   16.31   21.72   45.14   48.11   49.91   51.53   61.60
VRD        f [Hz]      12.39   16.54   21.89   45.14   48.11   50.09   51.46   61.60
RDAP       ζ [%]       1.90    3.65    1.51    0.22    0.34    0.67    0.54    0.47
VRD        ζ [%]       1.82    4.80    1.54    0.22    0.36    0.47    0.47    0.24
Table 8.3: Estimated natural eigenfrequencies and damping ratios for the laboratory bridge
model.
From table 8.3 it is seen that there is a high correlation between the estimated modal parameters, even for the damping ratios. The mode shapes are shown in figs. 8.14 - 8.17.
[Mode shape plots - Mode 1: VRD 12.37 Hz, Mode 2: VRD 16.29 Hz (top); Mode 1: RD 12.31 Hz, Mode 2: RD 16.31 Hz (bottom).]

Figure 8.14: First and second mode shape estimated using RD and VRD. MAC=0.99 and
0.54, respectively.

[Mode shape plots - Mode 3: VRD 21.87 Hz, Mode 4: VRD 45.13 Hz (top); Mode 3: RD 21.72 Hz, Mode 4: RD 45.14 Hz (bottom).]

Figure 8.15: Third and fourth mode shape estimated using RD and VRD. MAC=0.99 and
0.99, respectively.
[Mode shape plots - Mode 5: VRD 48.09 Hz, Mode 6: VRD 50.07 Hz (top); Mode 5: RD 48.11 Hz, Mode 6: RD 49.91 Hz (bottom).]

Figure 8.16: Fifth and sixth mode shape estimated using RD and VRD. MAC=0.99 and 0.10, respectively.

[Mode shape plots - Mode 7: VRD 51.49 Hz, Mode 8: VRD 61.61 Hz (top); Mode 7: RD 51.53 Hz, Mode 8: RD 61.6 Hz (bottom).]

Figure 8.17: Seventh and eighth mode shape estimated using RD and VRD. MAC=0.99 and 1.00, respectively.

There is a high correlation between the mode shapes, except for the modes at about 50 Hz. The explanation is that there are two closely spaced modes which are only weakly excited. It is not possible to obtain reasonable estimates of these modes with the RD or VRD technique from the available measurements of the response only.
8.2.3 Conclusions
The estimation time for the RD technique and the VRD technique including the estimation
time for the initial RD functions is shown in table 8.4.
Method RD VRD Initial
Time 565 120 5
N 8700 1700 1700
Table 8.4: Estimation Time (CPU-time [sec]) and number of triggering points (average),
N
The initial estimation of the RD functions, used for selection of the time shift vector, is seen to be extremely fast.
The application of the VRD technique is justified through an analysis of the acceleration response of a laboratory bridge model. The analysis resulted in a high correlation between the modal parameters estimated from RD and VRD functions. An approach to estimate the optimal time shifts for the formulation of the vector triggering condition is illustrated. The advantage of the VRD technique is illustrated through a reduction of the computational time by a factor of about 5.

8.3 Case Study 3: Vestvej Bridge


The ambient vibration study of the Vestvej bridge forms the initial part of a demonstration project concerning the application of vibration based inspection to bridges. The Vestvej bridge is shown in fig. 8.18.

Figure 8.18: The Vestvej bridge.


The aim of the project is to perform a continuous on-the-line surveyance of the bridge
using the RD technique. The RD technique is capable of handling large data quantities,
since it transforms the response into short data segments, correlation functions. In order
to obtain information about the modal parameters of the bridge, ambient vibration tests
are carried out. The tests are described in this section. The ambient vibrations have been
collected on 25/03 1997 and 04/06 1997. In this section the analysis of the data collected
on 04/06 1997 is described in detail, whereas only some of the results of the data analysis
of the data collected the 25/03 1997 are presented. The analysis are also reported in
Asmussen et al. [9], [10] and summarized in [11].

8.4 Bridge Description


The Vestvej bridge crosses the highway from Aalborg to Frederikshavn between Vodskov and Langholt just north of Aalborg, Denmark. The main geometry of the bridge is shown in fig. 8.19. The western 2/3 of the bridge was erected in 1986 and the remaining 1/3 in 1996. The bridge deck is made of post-tensioned concrete.

Figure 8.19: The Vestvej bridge.


The bridge is mainly loaded by the vehicles on the bridge, but the vehicles passing under the bridge on the highway can also contribute to the vibrations of the bridge. Furthermore, wind can generate vibrations of the bridge. Since all these forces are unmeasurable and ambient, the vibration study is denoted ambient.

8.5 Measurement Setup


The acceleration response of the bridge is measured using STDI-BMS (Structural Time Domain Identification - Bridge Measurement System), currently developed as a part of this Ph.D.-project at Aalborg University. The measurement system is an 8 channel system, which means that the bridge response can be recorded, A/D-converted and saved to disk from 8 channels or measurement locations simultaneously. In order to measure the vibrations of the bridge at more than 8 locations and still preserve the possibility of estimating mode shapes, several setups with two reference locations, for the purpose of safety and flexibility, are collected. The chosen measurement locations and the reference locations are seen in fig. 8.20. Only 7 accelerometers are available at the moment, so each setup contains 7 measurements. The accelerometers are of Schaewitz type at 20 V/g, with a full range of ±0.25 g, and secured in watertight steel boxes.

Figure 8.20: Outline draft of the Vestvej bridge and the measurement locations.
Since 26 measurement locations are chosen, 5 different setups with data are collected. The number of the accelerometer (internal AU number) and the corresponding channel can be seen in table 8.5. Furthermore, the measurement locations defined in fig. 8.20 are linked to the different setups.

Acc. Setup 1 2 3 4 5
No. Channel LB LB LB LB LB
7214 1 2 2 2 2 2
7706 2 3 3 3 3 3
9967 3 14 15 16 1 1
9969 4 25 23 21 19 17
10354 5 26 24 22 20 18
10355 6 12 10 8 6 4
14838 7 13 11 9 7 5
Table 8.5: Setup and measurement location overview. LB=Location Bridge, see g. 8.20.
The measurements were sampled at 160 Hz for 900 seconds. The data were detrended and decimated twice (this includes lowpass digital filtering) before being saved to disk. The resulting sampling frequency is thereby reduced to 80 Hz and the number of points in each measurement is 72000. Detrending removes any linear trend from the data, and decimation reduces the sampling frequency and suppresses the noise in the records.
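A sketch of this pre-processing step with standard tools is given below; a single overall decimation factor of 2 is assumed here, consistent with the stated reduction from 160 Hz to 80 Hz.

```python
from scipy import signal

def preprocess(record, decimation_factor=2):
    """Remove a linear trend and decimate with a built-in anti-aliasing lowpass
    filter, as described in the text for the Vestvej bridge records."""
    detrended = signal.detrend(record, type='linear')
    return signal.decimate(detrended, decimation_factor)
```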
Figures 8.21 and 8.22 show the data collection equipment and data acquisition equipment
on site, respectively.

Figure 8.21: Data collection equipment on site.

Figure 8.22: Data acquisition equipment on site.

Figure 8.23 shows a typical acceleration record from the Vestvej bridge collected on 04/06
1997.

Figure 8.23: Typical acceleration record from the Vestvej bridge.

8.5.1 Data Analysis Methodology


The RD technique is applied for the analysis of the acceleration measurements. The data have a high noise content, as seen in fig. 8.23. The reason is that only a few vehicles are crossing the bridge during the 900 second record period. This means that the accelerations are mainly due to the wind and the vehicles passing under the bridge. Furthermore, long cabling is used (20 m - 100 m) and the sensitivity of the accelerometers is high compared to the small accelerations of the concrete bridge, see fig. 8.23. This makes the analysis of the data challenging.
Due to the high noise content it is chosen to use the positive point triggering condition, so that sufficient triggering points are available. On the other hand, using triggering bounds close to zero might introduce false triggering points, which will increase the uncertainty of the RD functions. The triggering levels should fulfil these three conditions:
- No false triggering points should be introduced.
- The number of triggering points should be sufficient to average out the contribution from the noise.
- The number of triggering points should be restricted so that the estimation time is reasonable.
Two different sets of triggering bounds are investigated: [a_1, a_2] = [0, 1.5σ_X] and [a_1, a_2] = [1.5σ_X, ∞[. The auto RD function is calculated from one of the reference measurements. The number of triggering points was 33200 and 2780, and the estimation times in CPU seconds were 2.69 and 0.44, respectively. As expected, [a_1, a_2] = [1.5σ_X, ∞[ has the lowest number of triggering points and is thereby fastest. In order to evaluate the different estimates, the symmetry relation is used to calculate the error and the average estimate of the auto correlation function. The result for the triggering bounds [a_1, a_2] = [0, 1.5σ_X] is shown in fig. 8.24 and the result for the triggering bounds [a_1, a_2] = [1.5σ_X, ∞[ is shown in fig. 8.25.

Figure 8.24: The average and error estimate of an auto correlation function using [a_1, a_2] = [0, 1.5σ_X].

Figure 8.25: The average and error estimate of an auto correlation function using [a_1, a_2] = [1.5σ_X, ∞[.
The difference is significant, and the error function shows that although there are only 0.1 times as many triggering points using [a_1, a_2] = [1.5σ_X, ∞[, the estimate is far more accurate. This is positive, since the most accurate approach is thereby also the fastest approach. So the triggering levels are chosen as [a_1, a_2] = [1.5σ_X, ∞[.

For each of the five setups the full correlation matrix is estimated using the positive point triggering condition. After the estimation of the RD functions, and before the modal parameters are extracted, it is common in ambient testing of bridges to perform some kind of pre-analysis to get an idea about the number of structural modes present in the measurements. Such an analysis has been developed for ambient testing based on FFT estimated spectral densities. The method is denoted Average Spectral Densities (ASD), and the idea is simply to average the spectral densities of all measurements. The ASD will strongly indicate the number of modes and the corresponding frequencies. Figure 8.26 shows the ASD calculated using FFT of the measurements.


Figure 8.26: The ASD calculated from the spectral densities of the measurements.

The idea behind this approach is adapted to the RD technique. Instead of averaging the spectral densities of the measurements, the Fourier transforms of the estimated RD functions are calculated and averaged. The estimates of the RD based ASD are shown in fig. 8.27.
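A minimal sketch of the RD based ASD is given below; it assumes a list of equally long (one-sided) RD functions and leaves out any windowing, so it is only meant to show the averaging of the transformed RD functions.

```python
import numpy as np

def rd_based_asd(rd_functions, dt):
    """Average Spectral Density from RD functions: Fourier transform each
    estimated RD function and average the magnitudes over all functions."""
    spectra = [np.abs(np.fft.rfft(D)) for D in rd_functions]
    freqs = np.fft.rfftfreq(len(rd_functions[0]), dt)
    return freqs, np.mean(spectra, axis=0)
```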
Figure 8.27: The ASD calculated from the FFT of the RD functions of the measurements.

From fig. 8.27 the structural modes with the following frequencies are detected.

5.00   6.40   6.62    6.81    7.80    8.13
8.44   9.02   13.06   13.37   14.39   14.61

Table 8.6: Natural frequencies in Hz of the Vestvej Bridge.

It seems as if the ASD based on the RD functions provides a better basis for detecting structural modes. The reason is that the ASD from RD functions is based on averaging in both the time and frequency domain, whereas the ASD based on the spectral densities of the measurements is only based on averaging in the frequency domain. This is an important property of the RD technique.

8.5.2 Results
The modal parameters are extracted from the RD functions using the PTD algorithm. The aim is to estimate the mode shapes corresponding to the frequencies in table 8.6, but the noise content in the data is high, so not all mode shapes may be estimated at a high confidence level. The influence of the number of points used from the RD functions and the model order is investigated by changing the number of modes from 25 to 30 and varying the number of points in the RD functions from 100 to 120. Corresponding to the analysis of the laboratory bridge model, the following restrictions have been applied: ζ < 0.05, |MCF| > 0.9 and ∠MCF < 10 deg. Table 8.7 shows the estimated eigenfrequencies and damping ratios.
Parameter   Date    1      2      3      4       5
f [Hz]      25/03   5.16   6.64   8.31   14.01   15.38
f [Hz]      04/06   5.02   6.58   8.04   13.23   14.56
ζ [%]       25/03   1.85   4.07   3.04   2.15    2.42
ζ [%]       04/06   1.21   3.52   2.45   1.20    1.17
Table 8.7: Eigenfrequencies and damping ratios of the Vestvej Bridge.
The corresponding mode shapes are shown in figs. 8.28 - 8.32.


Figure 8.28: First mode shape of the Vestvej bridge.


Figure 8.29: Second mode shape of the Vestvej bridge.



Figure 8.30: Third mode shape of the Vestvej bridge.


Figure 8.31: Fourth mode shape of the Vestvej bridge.



Figure 8.32: Fifth mode shape of the Vestvej bridge.


As seen, it was not possible to estimate the mode shapes of all the modes with sufficient accuracy. This is mainly due to the high noise content in the measurements. The result is that not all modes are sufficiently represented in all setups, so it becomes impossible to estimate the corresponding mode shapes.
8.5.3 Conclusions
An ambient test of the Vestvej bridge has been performed. It was possible to detect the natural frequencies of the bridge using the RD technique, but it was only possible to estimate some of the mode shapes of the bridge. Especially the modes at 5.02, 6.58 and 14.56 Hz were estimated with accurate mode shapes.
It is recommended to base a continuous on-line surveillance of the bridge on the RD functions or only on the estimated natural eigenfrequencies. The RD functions should be calculated from longer time records than the 900 seconds used in the above analysis. This will increase the accuracy of the RD technique, and due to the speed of the technique this is feasible.

8.6 Summary
In this chapter the application of the RD technique has been demonstrated using different experimental tests. In general the technique has resulted in accurate modal parameters, but it has also been demonstrated that the technique has its limits when closely spaced modes are present in the data. This corresponds to the experience with the FFT algorithm. But the lack of accuracy should always be weighed against the speed of this technique.
The analysis of the Queensborough bridge with the RD technique was a pre-investigation of the performance of the RD technique compared to other techniques. For this purpose the data from the Queensborough bridge were an obvious choice, since the data have been analysed using different techniques by different authors. The performance of the RD technique motivated the further work with this technique.
The VRD technique was tested in competition with the RD technique using data from a laboratory bridge model. This investigation was a natural continuation of the test of the performance of the VRD technique. The result of the VRD technique was high quality modal parameters, highly correlated with the results of the RD technique. At the same time the VRD technique was 4-5 times faster than the RD technique. This investigation concludes the introduction and justification of the VRD technique.
The chapter is concluded with an ambient vibration study of the Vestvej bridge. The purpose of this investigation is to obtain information about the modal parameters of the bridge and to investigate whether the RD technique can be used as a basis for an on-line surveillance of the bridge. It is concluded that due to the low level accelerations of the bridge a surveillance should be based on long-term records of the accelerations.

Bibliography
[1] Ventura, C.E., Felber, J.A. & Prion, H.G.L. Evaluation of a Long Span Bridge
by Modal Testing. Proc. 12th International Modal Analysis Conference, Honolulu,
Hawaii, USA, 1994, Vol. II, pp. 1309-1315.
[2] Felber, A.J. & Ventura, C.E. Frequency Domain Analysis of the Ambient Vibration
Data of the Queensborough Bridge Main Span. Proc. 14th International Modal Anal-
ysis Conference, Dearborn, Michigan, USA, February 1996, Vol. I, pp. 459-465.
[3] Giorcelli, E., Garibaldi, L., Riva, A. Fasana, A. ARMAV Analysis of Queensborough
Bridge Ambient Data. Proc. 14th International Modal Analysis Conference, Dear-
born, Michigan, USA, February 1996, Vol. I, pp. 466-469.
[4] De Stefano, A., Knaflitz, M., Bonato, P., Ceravolo, R. & Gagliati, G. Analysis of Ambient Vibration Data From Queensborough Bridge Using Cohen Class Time-Frequency Distributions. Proc. 14th International Modal Analysis Conference, Dearborn, Michigan, USA, February 1996, Vol. I, pp. 470-476.
[5] Brincker, R., De Stefano, A. & Piombo, B. Ambient Data to Analyse the Dynamic Behaviour of Bridges: A First Comparison Between Different Techniques. Proc. 14th International Modal Analysis Conference, Dearborn, Michigan, USA, February 1996, Vol. I, pp. 477-482.
[6] Asmussen, J.C., Ibrahim, S.R. & Brincker, R. Random Decrement and Regression Analysis of Traffic Responses of Bridges. Proc. 14th International Modal Analysis Conference, Dearborn, Michigan, USA, February 1996, Vol. I, pp. 453-458.
[7] Asmussen J.C., Ibrahim, S.R. & Brincker, R. Application of the Vector Triggering
Random Decrement Technique. Proc. 15th International Modal Analysis Conference,
Orlando, Florida, USA, February 1997, Vol II, pp. 1165-1171.
[8] Ibrahim, S.R., Asmussen, J.C. & Brincker, R. Theory of Vector Triggering Random
Decrement. Proc. 15th International Modal Analysis Conference, Orlando, Florida,
USA, February 1997, Vol I, pp. 502-509.
[9] Asmussen, J.C., Brincker, R. Rytter, A., Hededal,P., Stoltzner, E. & Lauridsen, J.
Ambient Vibration Testing of the Vestvej Bridge - Full Virgin Measurement: 1st
Study. Internal note to RAMBOLL and The Danish Road Directorate, April 1997.
[10] Asmussen, J.C., Brincker, R. Rytter, A., Hededal,P., Stoltzner, E. & Lauridsen, J.
Ambient Vibration Testing of the Vestvej Bridge - Full Virgin Measurement: 2nd
Study. Internal note to RAMBOLL and The Danish Road Directorate, June 1997.
[11] Asmussen, J.C., Brincker, R. Rytter, A., Hededal,P., Stoltzner, E. & Lauridsen, J.
Application of the RD technique to Ambient Testing of The Vestvej Bridge. To be
Presented at the 16th International Modal Analysis Conference, Santa Barbara, Cal-
ifornia, USA, February 1998.
Chapter 9
Conclusions
This chapter concludes the thesis with the final comments on the investigations of the RD
technique. First the contents and the results of each chapter are reviewed in section 9.1.
The purpose is to give an overview before the general conclusions are given in section
9.2. The general conclusions contain a step-by-step recipe for the application of the RD
technique to identify the dynamic characteristics of structures from ambient data. The
chapter is concluded in section 9.3 with perspectives on the work, including topics and
areas on which future work can be based.
9.1 Summary
9.1.1 Chapter 1
This chapter introduces and delimits the work concerning vibrations of civil engineering
structures presented in this thesis. The work is restricted to the RD technique
for estimation of modal parameters of structures where the vibrations can be modelled
by a time-invariant linear lumped mass parameter system. Furthermore, it is assumed
that the loads can be described using filtered stationary Gaussian white noise. The loads
can be created artificially or be ambient. A review of the RD technique is performed as a
natural starting point. The major result is that it is chosen to interpret RD functions in
terms of correlation functions. The scope of the work is formulated as:

The objective of the present Ph.D. thesis is a description, implementation and further
development of the theory behind the RD technique as well as a comparison of the
performance of the RD technique with the FFT algorithm.

The introduction is concluded with a description of the contents of each chapter.
9.1.2 Chapter 2
This chapter contains a review of linear and linear stochastic vibration theory. The purpose
is to describe how modal parameters can be extracted from correlation functions under
the previously mentioned assumptions. It is shown in chapter 3 and appendix A that the
RD functions are proportional to the correlation functions of the response. The lumped
mass parameter system is introduced and the modal parameters are defined. The practical
application and implementation of two algorithms, the Ibrahim Time Domain (ITD) and
the Polyreference Time Domain (PTD), for extraction of modal parameters from the free
decays of a structure are described. The load modelling is defined, and it is shown that
the correlation functions of the response of the linear lumped mass parameter system
subjected to this load have exactly the same mathematical form as the free decays of the
structure. This means that the ITD and PTD algorithms can be used to extract modal
parameters from RD functions.
9.1.3 Chapter 3
The RD functions are defined as conditional mean values of a stationary stochastic process,
and the unbiased estimation of RD functions is shown. The condition is denoted the
triggering condition. The applied general triggering condition is introduced. Using this
condition it is shown that the RD functions are a weighted sum of the correlation functions
and the time derivatives of the correlation functions, where the weights depend on the
formulation of the condition. From this condition the link between the correlation functions
and four particular triggering conditions can be derived. These triggering conditions are:
level crossing, local extremum, positive point and zero crossing. The main difference between
these triggering conditions is the resulting number of triggering points in the estimation
process of the RD functions. The main assumption behind these results is that the stochastic
processes are stationary and Gaussian distributed with zero mean. The results are derived
in detail in appendix A. The response of the structures subjected to the filtered Gaussian
white noise load described in chapter 2 fulfils these assumptions.

Quality assessment of an estimated RD function is an important problem in any application.
Two different approaches to quality assessment of RD functions are suggested.
The first approach is based on the shape invariance relation of the RD functions and the
second approach is based on the symmetry relation for correlation functions of stationary
processes. The strength of the suggested methods is that experience can be transferred from
one structure to another. Another problem is to choose triggering levels for the different
triggering conditions. Some guidelines are given, but it is extremely difficult to obtain
general and consistent rules. Chapter 3 is concluded with a comparison of the speed
and accuracy of the RD technique for estimation of correlation functions with an FFT
based approach. The result is that the RD technique can be faster and still estimate
correlation functions as accurately as the FFT based approach. This underlines the absolute
main advantage of the RD technique: speed.
9.1.4 Chapter 4
This chapter introduces a new technique: Vector triggering Random Decrement (VRD).
The motivation for developing this technique is that if the RD technique is applied to a
large number of measurements, it becomes time consuming to estimate the full correlation
matrix. Instead a vector triggering condition is formulated. It is shown that the VRD
functions are a sum of correlation functions corresponding to the size of the vector condition.
The assumptions are the same: the stochastic processes are stationary and Gaussian
distributed with zero mean. The advantages of the VRD technique and its application are
illustrated by different simulation studies. This includes the solution of the problem of
formulating the vector triggering condition. It is concluded that the VRD technique is
an attractive alternative to the RD technique in the analysis of data setups with many
measurements.
9.1.5 Chapter 5
The chapter contains a proposal for a new method to predict the variance of the RD
functions. A simple method exists to predict the variance of RD functions which only
uses the RD functions and the number of triggering points, see appendix A. The main
assumption of this method is that the time segments used in the averaging process are
uncorrelated. The validity of this assumption has never been investigated. The new
method takes the correlation between the time segments into account. The method seems
to perform well on simple systems, and it illustrates how difficult it is to estimate the
variance, since usually only a single realization (measurement) of the processes is available.
In the vicinity of zero time lag and far away from zero the method predicts the variance well. It
is much more accurate than the existing simple method, but it also uses more
computational time. Whether or not this increase in accuracy can pay off the increase in
computational time is an open question.
9.1.6 Chapter 6
This chapter considers the problems which arise in practical applications of the RD technique.
Several bias problems are described. It is shown how these bias problems can be
avoided and how they influence the estimates of the correlation functions and the estimates
of the modal parameters. The different implementations of the RD and VRD techniques
performed as a part of this Ph.D. project are discussed. It is described how the RD
technique and the VRD technique should be implemented in MATLAB. The chapter is
concluded with an example of the use of the implemented functions.
9.1.7 Chapter 7
A new method for estimating the frequency response matrix (FRM) of the linear lumped
mass parameter system is investigated. The method is based on the RD technique and
assumes that both the response and the loads are measured. If the RD functions of the
response and the load are calculated and Fourier transformed, the FRM can be estimated
by simple division. The advantage is that the RD functions dissipate towards zero for
increasing positive and negative time lags if the load is white noise. This means that
leakage free estimates of the FRM are obtained, since it is not necessary to apply any
other window in the time domain than the exponential window. The influence of this
window is well-known from investigations in impact testing: it increases the damping
ratio corresponding to the exponential decay of the window. The performance of the
method is investigated by a simulation study and by analysis of a laboratory bridge model
loaded by white noise through a shaker. The method can remove the leakage error, but at
the expense of higher random uncertainty. It is very difficult to use the method on lightly
damped systems, since the results are very sensitive to the number of points in the RD
functions and the choice of window function.
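As a sketch of the idea, the MATLAB fragment below estimates the frequency response function of a simulated single degree of freedom system (a discrete-time filter) from its white noise load and response, using two-sided RD functions with positive point triggering on the load. The filter coefficients, the record length and the number of time lags are arbitrary example values, and the exponential window mentioned above is omitted; the fragment is an illustration only, not the implementation used in chapter 7.

```matlab
% FRF estimation from RD functions of the measured load f and response x.
% Here the 'measurements' are simulated with a known discrete-time system.
b = 1;  a = [1 -1.6 0.9];                       % example system
f = randn(200000,1);                            % Gaussian white noise load
x = filter(b,a,f);                              % response

nrd = 256;                                      % number of time lags on each side
idx = find(f > 0);                              % positive point triggering on the load
idx = idx(idx > nrd & idx <= numel(f)-nrd);
Dxf = zeros(2*nrd+1,1);  Dff = zeros(2*nrd+1,1);
for i = 1:numel(idx)
    Dxf = Dxf + x(idx(i)-nrd:idx(i)+nrd);       % cross RD function of response and load
    Dff = Dff + f(idx(i)-nrd:idx(i)+nrd);       % auto RD function of the load
end
Dxf = Dxf/numel(idx);  Dff = Dff/numel(idx);

% FRF estimate: ratio of the Fourier transforms of the two-sided RD functions
H = fft(ifftshift(Dxf)) ./ fft(ifftshift(Dff));
w = 2*pi*(0:2*nrd)'/(2*nrd+1);                  % digital frequency [rad/sample]
Htrue = (exp(-1i*w*(0:numel(b)-1))*b(:)) ./ (exp(-1i*w*(0:numel(a)-1))*a(:));
semilogy(w,abs(H), w,abs(Htrue),'--'), xlim([0 pi])
legend('RD based estimate','analytical FRF')
```

Because the RD functions of the white noise loaded example decay towards zero within the chosen number of time lags, no additional time window is applied in the sketch; for lightly damped systems an exponential window would be applied before the Fourier transform, as discussed above.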
9.1.8 Chapter 8
This chapter describes the different analyses of structures which have been performed.
The ambient measurements of the Queensborough bridge have been analysed using a
combination of the RD technique and the ITD algorithm. The results were promising,
since the RD technique produced results which were comparable with the results obtained
by different authors using several other well-known algorithms such as FFT, ARMAV
and time-frequency domain algorithms. These results encouraged the continuation of the work
with the RD technique. As a final documentation of the VRD technique, the response
of a laboratory bridge model subjected to Gaussian white noise has been analysed using
both the VRD and the RD technique. The VRD technique was faster, and the modal
parameters were identical to those obtained using the RD technique. This underlines that
the VRD technique is an attractive alternative to the RD technique for systems with a
high number of measurements.

The work in this thesis is concluded with the ambient vibration study of the Vestvej bridge.
The ambient vibrations have been collected using a bridge measurement system developed
as a part of this Ph.D. project. The study constitutes the initial investigations for a demonstration
project on vibration based inspection. The data are analysed using the RD technique. The
future perspective of the project is to perform continuous on-line surveillance using the
RD technique in order to utilize the advantages of the technique.
9.2 General Conclusions

During this work the RD technique has been investigated intensively through analysis of
a broad range of different structures: from simulations of a simple SDOF system loaded
by white noise, over a laboratory bridge model, to the analysis of ambient measurements
from existing bridges.
The theory of the RD technique has been summarized and further developed, especially
with the results of the applied general triggering condition, the new VRD technique and
the new approach for predicting the variance of RD functions. An analytical relation
between the RD functions and the modal parameters of a linear lumped mass parameter
system loaded by filtered white noise has been established.

Based on the presented theoretical background combined with the experimental experience,
it has been decided to give a recipe for the application of the RD technique for
identification of modal parameters from response measurements. The purpose is to give
a guideline for potential users with no or little experience with the technique. In the
following the recipe is given step by step.
1. Validation of the measurements: The first step is to validate the measurements.
   The purpose is to check that the measurements are stationary and Gaussian
   distributed. The user should determine if the assumptions of a time-invariant linear
   structure and stationary Gaussian distributed loads are fulfilled. It can be checked by
   e.g. a normal probability plot or various statistical tests whether the measured response
   is Gaussian distributed.

   It is standard procedure to apply the above assumptions if the loads are ambient,
   such as wind, waves or traffic. If the loads are deterministic, the RD technique should
   be used carefully. An example could be a structure loaded by a few impulses. In that
   case the response should be analysed with care using the RD technique, and the
   guidelines given below are not valid.
2. Choice of triggering condition and levels: The first problem is to choose a
   triggering condition. It is recommended to use the positive point triggering condition.
   Only if the records contain many data points should the level crossing or the local
   extremum triggering condition be used. It is not recommended to use the
   zero crossing triggering condition. The reason is that noise has a large influence on
   the response around zero, so false triggering points will be detected, resulting in RD
   functions with slow convergence. Throughout this thesis it has been emphasized repeatedly
   by experimental results that low level triggering points should not be used.
   This illustrates the disadvantages of the zero crossing triggering condition.

   The second problem is the choice of triggering levels. From all the measurements pick
   out a single reference measurement or a measurement with a relatively high standard
   deviation. Choose different triggering levels and calculate the RD functions and the
   estimation time. An appropriate division would be [a1 a2] = [0.5 1]σ_X, [a1 a2] =
   [1 1.5]σ_X and [a1 a2] = [1.5 ∞]σ_X, or [a1 a2] = [1 ∞]σ_X and [a1 a2] = [1.5 ∞]σ_X,
   etc. The SIC can be used to check the shape invariance of the RD functions. Any
   levels without the value SIC ≈ 1 should be omitted. If all RD functions have a high
   SIC value and the computational time is not important, the minimum and maximum
   levels which have been investigated should be used. If the computational time is
   important, the levels which give the lowest number of triggering points and a high
   SIC value should be chosen. The number of points used in the RD functions can also
   be determined from this initial investigation by taking a number of points which
   ensures that the RD functions have just dissipated sufficiently. Remember to use
   positive and negative time lags.
3. Validation of the RD functions:
   After choosing the triggering condition and determining the triggering levels and the
   number of points, the RD functions can be calculated. Calculate all RD functions
   corresponding to estimating the full correlation matrix (a minimal sketch of steps 1-3
   for a single measurement is given after this recipe). In order to extract maximum
   information from the measurements, the sign of the triggering levels should be shifted
   and the RD functions calculated again. The averages of the RD functions, normalized
   to be the correlation functions, are the final estimate of the correlation functions with
   positive and negative time lags.

   The validation of the quality of the RD functions is very important. Use the symmetry
   relation to calculate an estimate of the correlation functions for positive time
   lags and the error function for the correlation functions for positive time lags, see
   section 3.7.2. By plotting the averaged correlation functions and the error function,
   the quality can be assessed and the number of points used in the modal parameter
   extraction procedure can be determined. To check whether all RD functions should be used
   in the modal parameter extraction procedure, the fraction between the standard deviation
   of the estimated correlation function and the error function can be calculated
   using eq. (3.64). This approach makes it possible to omit the RD functions with a
   high noise content.

   The last procedure in the validation of the RD functions is to calculate the absolute
   values of the FFT of all RD functions and average them. The final spectral density
   contains information about the number of modes present in the RD functions and
   the corresponding natural frequencies. This gives an opportunity to select a proper
   model order in the modal parameter extraction procedure.
4. Extraction of modal parameters:
   From the validation process the approximate number of physical modes and the appropriate
   number of points from the RD functions are known. Using this information,
   the model order and the number of points should be varied in order to investigate
   the sensitivity of the modal parameters to these choices. By using stabilization
   diagrams in combination with the MCF and a restriction to small damping ratios,
   a proper model order can be selected and the eigenfrequencies, damping ratios and
   the mode shapes can be extracted. Usually the mode shapes indicate how accurately
   the modal parameters are estimated.

This recipe can of course only be a guideline for the application of the RD technique.
Experience with the technique will make it easier to select proper triggering levels in order
to obtain accurate RD functions and a low estimation time.
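To make steps 1-3 concrete for a single measurement, a minimal MATLAB sketch is given below. It uses a simulated response, the positive point triggering condition with the example bounds [a1 a2] = [1 2]σ_X and an arbitrary number of time lags; the sign-shifted averaging and the SIC check of the recipe are not included, and the sketch is not the implementation described in chapter 6.

```matlab
% Steps 1-3 in minimal form for a single measurement x (simulated here).
x = filter(1,[1 -1.6 0.9],randn(50000,1));     % stand-in for a measured response
x = x - mean(x);

% Step 1: visual check of the Gaussian assumption
normplot(x)                                    % normal probability plot (Statistics Toolbox)

% Step 2: positive point triggering with bounds [a1 a2] = [1 2]*std(x)
a1 = 1.0*std(x);  a2 = 2.0*std(x);
nrd = 200;                                     % number of time lags on each side
idx = find(x > a1 & x <= a2);                  % triggering points
idx = idx(idx > nrd & idx <= numel(x)-nrd);    % keep segments inside the record

% Step 3: average the time segments (positive and negative time lags)
D = zeros(2*nrd+1,1);
for i = 1:numel(idx)
    D = D + x(idx(i)-nrd:idx(i)+nrd);
end
D = D/numel(idx);

% Spectral density of the RD function for selection of the model order
S  = abs(fft(ifftshift(D)));
fr = (0:nrd)/(2*nrd+1);                        % frequency axis [cycles/sample]
subplot(2,1,1), plot(-nrd:nrd,D), xlabel('time lag [samples]'), ylabel('D_{XX}')
subplot(2,1,2), plot(fr,S(1:nrd+1)), xlabel('frequency [cycles/sample]'), ylabel('|FFT(D_{XX})|')
```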
9.3 Perspectives and Future Work

There are several different aspects of the RD technique which can be further developed
and investigated. The following sections describe these aspects and how they can be
investigated.
9.3.1 Non-Gaussian Processes
The main assumption behind the link between the RD functions and the correlation
functions of a stochastic vector process is that the process is Gaussian
distributed. It would be interesting to investigate how sensitive the estimates of the RD
functions are to this assumption. Will the RD functions differ significantly from the
correlation functions if the vector process is non-Gaussian distributed, and how sensitive
will the modal parameters be?

A natural starting point for an investigation would be to consider an ideal linear system
loaded by Gaussian white noise, where the response will be Gaussian distributed.
By adding non-Gaussian measurement noise to the realizations of the response, the final
process becomes non-Gaussian distributed. By changing the distribution of the measurement
noise and its contribution (signal-to-noise ratio), the sensitivity can be investigated
by comparing the estimated RD functions with the correlation functions of the noise-free
system and the estimated modal parameters with the theoretical modal parameters.
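A simulation set-up of this kind could be sketched in MATLAB as below, where a Gaussian response is polluted by uniformly distributed (non-Gaussian) measurement noise at a chosen signal-to-noise ratio. The system, the noise distribution and the signal-to-noise ratio are example choices only, and the comparison is qualitative.

```matlab
% Sensitivity sketch: Gaussian response polluted by non-Gaussian measurement noise.
x0  = filter(1,[1 -1.6 0.9],randn(100000,1));   % noise-free Gaussian response
x0  = x0 - mean(x0);
snr = 5;                                        % signal-to-noise ratio (ratio of standard deviations)
e   = (rand(size(x0)) - 0.5)*sqrt(12);          % zero mean, unit variance uniform noise
x   = x0 + e*std(x0)/snr;                       % polluted measurement

nrd = 100;  lev = std(x0);
D  = local_rd(x,  lev, nrd);                    % RD function of the polluted process
D0 = local_rd(x0, lev, nrd);                    % RD function of the noise-free process
plot(-nrd:nrd,D, -nrd:nrd,D0,'--'), legend('with non-Gaussian noise','noise-free')

function D = local_rd(x, lev, nrd)              % positive point triggering, bounds [lev inf)
    idx = find(x > lev);
    idx = idx(idx > nrd & idx <= numel(x)-nrd);
    D = zeros(2*nrd+1,1);
    for i = 1:numel(idx), D = D + x(idx(i)-nrd:idx(i)+nrd); end
    D = D/numel(idx);
end
```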
9.3.2 Non-Linear Structures
Further to the above suggestion for future work, non-linear structures can be considered.
The response of non-linear structures subjected to Gaussian loads will be non-Gaussian
distributed, so this can be considered as an extension of the previous situation, where only
the measurement noise is non-Gaussian. It may also be possible to use amplitude
dependent RD functions for identification of non-linear structures.
9.3.3 Improvement of the Variance Model
The variance model suggested in chapter 5 can probably be improved. The applicability
of this model should be tested using real data, and the model should be used as information
for improving the accuracy of the modal parameters. The information in the form of variance
should be used in the modal parameter extraction process.
9.3.4 Extraction of Modal Parameters
Estimating modal parameters from RD functions is as difficult a task as estimating accurate
RD functions. The main problem is that methods developed to extract modal parameters
from free decays have been used. The noise present in the RD functions is thereby modelled
by adding noise or computational modes. This means that a high number of modes
has to be used. This raises two main problems, namely the book-keeping of all the modes
and the separation of physical or structural modes from the noise modes.

The solution could be to use methods where the noise is modelled as a stochastic process.
This might result in more accurate modal parameters and also remove the book-keeping
problem. The number of different models (model order, number of points) which have to
be tested is also reduced.
9.3.5 Damage Detection by Continuous On-line Surveillance
Using the RD technique for continuous on-line surveillance of civil engineering structures
is considered to be the most promising application of the technique. The reason is
that the RD technique can transform long-term observations into a small amount of data,
the RD functions. From these RD functions the modal parameters can be extracted and
used as input to a vibration based inspection scheme - or the RD functions themselves
can even be used as input. Although the RD technique is not the most accurate technique for
estimation of modal parameters, the technique can compensate for the lack of accuracy
by extracting the RD functions from the response continuously. The RD functions can
thus be estimated from a huge amount of information.
Chapter 10
Summary in Danish
This chapter gives a short summary in Danish of the contents and the partial conclusions of the
individual chapters of the thesis.

Chapter 1: The chapter introduces and delimits the work on vibrations of civil engineering
structures reported in this thesis. It is assumed that the vibration measurements have been
carried out as carefully as possible. The work is delimited to the random decrement (RD)
technique for estimation of modal parameters of structures where the vibrations can be
modelled by a system of lumped masses with time-invariant and linear properties. Any
subsequent use of the estimated modal parameters is not considered. It is assumed that the
load can be modelled by white noise or by white noise shaped by a linear filter. In all cases
the load is Gaussian distributed and stationary. The load on the structures can be natural
(wind, waves, vehicles) or artificial (shaker). The chapter also contains a review of the
existing literature on the RD technique. The main result of the review is that the RD
functions are chosen to be interpreted in terms of correlation functions. The aim of the work
in this thesis can briefly be described as:

The purpose of this Ph.D. thesis is to describe, implement and further develop the theory
behind the RD technique and to compare the performance of the RD algorithm with the
well-known FFT algorithm.

The introduction is concluded with a presentation of the contents of the individual chapters.

Chapter 2: This chapter reviews linear vibration theory and linear stochastic vibration
theory. The aim of the chapter is to describe how the modal parameters can be estimated
from the correlation functions of the response under the previously mentioned assumptions.
It is shown later, in chapter 3 and appendix A, that under these assumptions the RD
functions become proportional to the correlation functions. The linear lumped mass model
is described and the modal parameters are defined from it. The practical application of two
different algorithms, the Ibrahim Time Domain (ITD) and the Polyreference Time Domain
(PTD), is described. These algorithms are used to estimate modal parameters from
measurements of free decays of structures. The chosen modelling and description of the loads
on the structures is reviewed, and it is shown that the correlation functions of the response
of the lumped mass model subjected to these loads have the same properties as free decays
of the lumped mass model. Therefore algorithms such as ITD and PTD can be used to
estimate modal parameters from correlation functions. The response becomes Gaussian
distributed, since it is assumed that the structures have linear properties and that the load
is Gaussian distributed.

Chapter 3: The RD functions are defined as conditional mean values using a triggering
condition, and it is shown how RD functions are estimated from time series, e.g. the
measured response of a civil engineering structure. The estimation of RD functions is
performed without systematic errors. A general triggering condition is defined. From this
condition it is shown that the RD functions become a weighted sum of the correlation
functions and the time derivatives of the correlation functions under the assumption that
the time series are Gaussian distributed. Thereby a link is established between the modal
parameters of the linear lumped mass model and the RD functions of the response of the
linear lumped mass model loaded by white noise.

The chapter also describes two different methods proposed for quality assessment of the RD
functions. The methods are general, so that experience with the methods can be carried
from one structure to another. Furthermore, it is indicated how triggering levels can be
chosen so that the RD functions are estimated as accurately as possible. The chapter is
concluded with a comparison of different methods for estimation of correlation functions.
It is shown, by simulation, that the RD technique can be just as accurate as a method
based on the FFT algorithm and at the same time have a significantly faster computation
time.

Chapter 4: This chapter introduces a new technique: Vector triggering Random Decrement
(VRD). The motivation for developing this technique is that, when a large number of
time series is analysed with the RD technique, the estimation time can become relatively
large if the full correlation matrix is estimated. Instead the RD technique is extended from
being defined by a scalar triggering condition to being defined by a vector triggering
condition (VRD). Thereby the maximum number of functions is reduced corresponding to
the size of the vector condition. It is shown that the VRD functions are a sum of correlation
functions corresponding to the size of the vector condition. The assumptions are that the
time series are stationary and Gaussian distributed. The advantages and the practical
application of the VRD technique are illustrated by different simulation studies. This
includes the solution of the problems which arise in the formulation of the vector condition.
The VRD technique stands out as an attractive and reliable alternative to the RD technique
in cases where a large number of measurements has been collected simultaneously.

Chapter 5: The chapter proposes a new method for prediction of the variance of RD
functions. A simple method exists for predicting the variance from the RD functions alone,
as described in chapter 3 and appendix A. The assumption of this method is that the time
segments used in the averaging process are uncorrelated. This assumption is not fulfilled,
and its significance has not been tested. The new method proposed takes the correlation
between the individual time segments into account. The methods are investigated by
different simulation studies. The new method is computationally more demanding than
the old method. In return, the accuracy is greatly improved. The method is able to predict
the variance of the RD functions accurately around the centre and far away from the centre
of double-sided cross and auto RD functions. It is an open question whether this gain in
accuracy can pay for the extra computation time. Further development and testing of the
method is proposed.

Chapter 6: The chapter deals with the practical application of the RD technique. Although
the technique is theoretically free of systematic errors, such errors nevertheless arise
because the technique is applied to discrete time series (measurements). The systematic
errors are illustrated by simulation studies of different systems. It is pointed out when one
should be aware of the problems and what influence they have on the estimates of both the
correlation functions and the modal parameters. A systematic error on the correlation
functions does not necessarily have any influence on subsequent estimates of the modal
parameters. In general it can be said that the systematic errors decrease with increasing
sampling frequency relative to the highest natural frequency of the structure. Besides the
description of systematic errors, it is described how the RD and VRD functions are
implemented. The different functions are programmed in HIGH-C and linked to MATLAB
using MATLAB's external interface facilities.

Chapter 7: Here a new method for estimation of frequency response functions is investigated.
From the measured response and load of a structure the auto and cross RD functions
are calculated. By Fourier transforming these, a simple method for estimation of the
frequency response functions is obtained. The advantage of this method, compared with the
traditional method based on Fourier transforms alone, is less computation time and
frequency response functions estimated without systematic errors. The reason is that the RD
functions dissipate towards zero far away from the centre. This means that the requirement
of infinitely long time series for the Fourier transform can be dropped. The method is
investigated by a simulation study of a single-degree-of-freedom system and by analysis of
a vibration test on a laboratory bridge model. The investigation underlines the advantages
of the method, but especially for lightly damped systems the method will not be more
suitable than methods based on the FFT algorithm unless window functions are applied.

Chapter 8: The chapter describes the different analyses of vibration measurements from
existing structures. The first structure is the Queensborough bridge in Canada.
Measurements of this bridge, which is mainly loaded by traffic, have been analysed by different
authors using different methods: FFT, ARMAV and combined time-frequency domain
methods. The results obtained with the RD technique are reported and compared with the
results obtained with the other methods. It turns out that the application of the RD
technique results in modal parameters which are comparable with results obtained with
other methods. Among other things, this result motivated the further work with the RD
technique.

Besides the measurements of the Queensborough bridge, a laboratory bridge model and the
Vestvej bridge located north of Aalborg have been analysed. The purpose of the investigation
of the laboratory bridge model is to conclude the comparison between the RD and the VRD
technique. It turns out that the VRD technique can be applied in connection with a large
number of measurements. The result is a reduction of the estimation time without the
accuracy decreasing significantly. Especially in the analysis of time series with a large
number of points the VRD technique will be advantageous. The analysis of the Vestvej
bridge is based on measurements made with a bridge measurement system developed as
part of this Ph.D. project. The modal parameters of the bridge are estimated using the RD
technique. The investigations will form the basis for a demonstration project on vibration
based inspection of the bridge.

Chapter 9: The conclusion rounds off this thesis. The contents and results of the individual
chapters are summarized here to give a final overview. Then the final conclusion is summed
up, consisting of a step-by-step recipe for how the RD technique can be applied to the
analysis of vibration measurements of civil engineering structures. The conclusion ends with
an outlook, where among other things possible future topics for the RD technique are
described.
Appendix A
Random Decrement and
Correlation Functions
The purpose of this appendix is to derive the relations between RD functions and correlation
functions of stationary zero mean Gaussian distributed stochastic processes. These
relations are considered to be the fundamental mathematical description and basis of the
technique. So the main assumption behind the results presented in this appendix is that the
stochastic processes have a zero mean Gaussian distribution and are stationary. Only a
few authors have interpreted RD functions in terms of correlation functions. Vandiver et
al. [1] derived the proportional relation between the auto RD functions defined by the
level crossing triggering condition and the auto correlation functions using the above-mentioned
assumptions. This result was extended by deriving a proportional relation between cross
RD functions defined using the theoretical general triggering condition and cross correlation
functions, see Brincker et al. [2], [3]. This proof is important, since the mathematical
description in terms of correlation functions was extended not only to cover the analysis
of single measurements but also to cover multivariate measurements. Bedewi et al. [4]
and Yang et al. [5] also have some theoretical considerations concerning RD functions
and correlation functions. This appendix describes the state of the art of interpreting
RD functions in terms of correlation functions and derives the relations necessary to fill
in some remaining parts. This is done by introducing the applied general triggering
condition, which is a generalization of the theoretical general triggering condition.
Sections A.1 and A.2 describe the mathematical tools that are applied in the derivations
in later sections. The density function of a conditional multivariate Gaussian distributed
stochastic variable is described in section A.1. In section A.2 a relation between the density
functions of two stochastic variables on different conditions is derived.
In section A.3 the definition of RD functions is given. Furthermore, the estimates of the
RD functions are introduced. Based on these definitions, the theoretical general triggering
condition introduced by Brincker et al. [2], [3] is described in section A.4. In section A.5
the condition is generalized to the applied general triggering condition. This triggering
condition is important, since the relation between the RD functions of any particular
triggering condition and the correlation functions can be extracted directly from the results
of the applied general triggering condition.
Sections A.6 - A.9 describe four commonly used triggering conditions. The relation between
the RD functions using these triggering conditions and the correlation functions is
derived directly from the results of section A.5. Furthermore, an approximate expression
for the variance of the estimated RD functions is derived. The expected number of triggering
points, which can be obtained by applying the different triggering conditions, is also
given.
A.1 Multivariate Gaussian Variables

A multivariate (n × 1-dimensional) Gaussian distributed stochastic variable, X, is described
by the general n-dimensional Gaussian density function, p_X(x)

p_X(x) = \frac{1}{(2\pi)^{n/2} (\det(V_{XX}))^{1/2}} \exp\left( -\frac{1}{2} (x - \mu_X)^T V_{XX}^{-1} (x - \mu_X) \right)   (A.1)

where V_{XX} is the n × n-dimensional covariance matrix and \mu_X is the n × 1-dimensional
mean value vector. Consider an (n+m) × 1-dimensional Gaussian variable, X, which is
partitioned into the n × 1-dimensional variable X_1 and the m × 1-dimensional variable
X_2, X = [X_1^T \ X_2^T]^T. Correspondingly, the mean value vector and the covariance matrix are
partitioned

\mu_X = \begin{bmatrix} \mu_{X_1} \\ \mu_{X_2} \end{bmatrix}, \qquad V_{XX} = \begin{bmatrix} V_{X_1 X_1} & V_{X_1 X_2} \\ V_{X_2 X_1} & V_{X_2 X_2} \end{bmatrix}   (A.2)

An important property of a multivariate Gaussian distributed stochastic variable is that
the density function of X_1 on condition of X_2 is also Gaussian distributed, Melsa & Sage
[6]. The mean value vector and the covariance matrix of X_1, on condition of X_2, are given
by, Melsa & Sage [6], Soderstrom [7]

E[X_1 | X_2] = \mu_{X_1} + V_{X_1 X_2} V_{X_2 X_2}^{-1} (x_2 - \mu_{X_2})   (A.3)

Cov[X_1 | X_2] = V_{X_1 X_1} - V_{X_1 X_2} V_{X_2 X_2}^{-1} V_{X_2 X_1}   (A.4)

If \mu_X = 0 then eq. (A.3) reduces to

E[X_1 | X_2] = V_{X_1 X_2} V_{X_2 X_2}^{-1} x_2 = R_{X_1 X_2} R_{X_2 X_2}^{-1} x_2   (A.5)

where R denotes a correlation matrix. The zero mean value vector implies that the
covariance matrix and the correlation matrix are identical. Eqs. (A.3) - (A.5) are the basic
equations used in sections A.4 and A.5, where the relationship between the RD functions
and the correlation functions and an approximate expression for the variance of the RD
functions are derived.
A.2 Conditional Densities

A conditional variable is written as X_1 | T. The condition T could e.g. be of different
complexity

T_1 = \{ X_2 = a, \ X_3 = b \}   (A.6)

T_2 = \{ a_1 \leq X_2 < a_2, \ b_1 \leq X_3 < b_2 \}   (A.7)

In the following sections it will be necessary to have a relation between the density function
of a variable on the condition in eq. (A.6) and the density function of a variable on the
condition in eq. (A.7). This relation is derived from general relations between the density
and the distribution functions and the relation between conditional distribution functions
and probabilities

p_{X_1 | T_2}(x_1 | T_2) = \frac{\partial P(X_1 \leq x_1 \,|\, a_1 \leq X_2 < a_2, \ b_1 \leq X_3 < b_2)}{\partial x_1}
   = \frac{1}{k} \int_{a_1}^{a_2} \int_{b_1}^{b_2} p_{X_1 X_2 X_3}(x_1, x_2, x_3) \, dx_2 \, dx_3
   = \frac{1}{k} \int_{a_1}^{a_2} \int_{b_1}^{b_2} p_{X_1 | X_2 X_3}(x_1 | x_2, x_3) \, p_{X_2 X_3}(x_2, x_3) \, dx_2 \, dx_3   (A.8)

where

k = P(a_1 \leq X_2 < a_2, \ b_1 \leq X_3 < b_2) = \int_{a_1}^{a_2} \int_{b_1}^{b_2} p_{X_2 X_3}(x_2, x_3) \, dx_2 \, dx_3   (A.9)

Consider a special case of the condition in eq. (A.7)

T_3 = \{ a_1 \leq X_2 < a_2, \ X_3 = b \}   (A.10)

A relation between the density functions p_{X_1 | T_2}(x_1 | T_2) and p_{X_1 | T_3}(x_1 | T_3) can be established
directly from the last statement in eq. (A.8) by setting b_1 = b, b_2 = b + \Delta b and
then letting \Delta b \rightarrow 0. This procedure is used in sections A.6 - A.9. The above equations follow
Papoulis [8].
A.3 Definition of Random Decrement Functions

Consider two stochastic processes Y(t) and X(t). The index t will be interpreted as time.
The auto RD functions are defined as the mean value of a process on condition of the
process itself

D_{XX}(\tau) = E[X(t+\tau) \,|\, T_{X(t)}]   (A.11)

D_{YY}(\tau) = E[Y(t+\tau) \,|\, T_{Y(t)}]   (A.12)

The conditions T_{Y(t)} and T_{X(t)} are denoted triggering conditions. The cross RD functions
are defined as the mean value of a process on condition of another process

D_{XY}(\tau) = E[X(t+\tau) \,|\, T_{Y(t)}]   (A.13)

D_{YX}(\tau) = E[Y(t+\tau) \,|\, T_{X(t)}]   (A.14)

The first subscript in e.g. D_{XY} refers to the process where the mean value is calculated,
X(t), and the second subscript refers to the process where the condition is applied, Y(t).
The definitions in eqs. (A.11) and (A.12) can be derived from eqs. (A.13) and (A.14) by
replacing X(t) by Y(t) or the opposite, respectively. This property is used throughout the
rest of this appendix, so only results for the cross RD functions are stated.

The estimates of the RD functions are calculated as the empirical mean. This implies that
the processes X(t) and Y(t) are assumed to be ergodic

\hat{D}_{XY}(\tau) = \frac{1}{N} \sum_{i=1}^{N} x(t_i+\tau) \,|\, T_{y(t_i)}   (A.15)

\hat{D}_{YX}(\tau) = \frac{1}{N} \sum_{i=1}^{N} y(t_i+\tau) \,|\, T_{x(t_i)}   (A.16)

where x(t) and y(t) are realizations of the stochastic processes X(t) and Y(t). The important
parameter in these estimates is the number of triggering points, N. The conditions
T_{Y(t)} and T_{X(t)} should always be chosen so that the number of triggering points is high
enough to secure a satisfactory convergence of the estimates.
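The estimates in eqs. (A.15) and (A.16) are simple averages of time segments extracted around the triggering points. As a compact illustration (not the implementation discussed in chapter 6), a MATLAB sketch of such an estimator is given below; the function name and the argument list are chosen for this illustration only.

```matlab
function D = rd_estimate(y, idx, nrd)
% Empirical RD function estimate, cf. eq. (A.16): the mean of segments of the
% realization y taken around the triggering points of the conditioning
% realization, whose indices are supplied in idx. nrd is the number of time
% lags on each side, so D covers the lags -nrd,...,0,...,nrd.
y   = y(:);
idx = idx(idx > nrd & idx <= numel(y) - nrd);   % segments must fit inside the record
D   = zeros(2*nrd + 1, 1);
for i = 1:numel(idx)
    D = D + y(idx(i)-nrd : idx(i)+nrd);
end
D = D / numel(idx);                             % empirical mean over the N triggering points
```

For the positive point triggering condition of section A.8 the call could for instance be D = rd_estimate(y, find(x > a1 & x <= a2), nrd), where x is the realization of the conditioning process.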
It is important that the estimates of the RD functions in eqs. (A.15) and (A.16) are
unbiased, which is shown below

E[\hat{D}_{XY}(\tau)] = \frac{1}{N} \sum_{i=1}^{N} E[x(t_i+\tau) \,|\, T_{y(t_i)}] = D_{XY}(\tau)   (A.17)

Systematic errors on the correlation functions are avoided by applying the RD technique
to Gaussian distributed processes.

The following sections introduce different formulations of the triggering conditions and
link the RD functions to the correlation functions of the processes.
A.4 General Theoretical Triggering Condition

Consider two univariate Gaussian distributed stochastic processes, X(t) and Y(t). It is
assumed that X(t) and Y(t) have zero mean and are stationary processes. The theoretical
general triggering condition is denoted T^{GT}_{X(t)} and is given by the following conditions on
X(t) and its time derivative \dot{X}(t)

T^{GT}_{X(t)} = \{ X(t) = a, \ \dot{X}(t) = b \}   (A.18)

This triggering condition is denoted theoretical, since it is too strict to be used in any
practical application of the RD technique. The probability of the event T^{GT}_{X(t)} is very
small, so the realizations of X(t) and Y(t) would have to be extremely long in order to estimate
the conditional mean value using eqs. (A.15) or (A.16) with a satisfactory number of
triggering points. The reason for introducing this triggering condition is that the results
are used in the next section, where the applied general triggering condition is introduced.
Using T^{GT}_{X(t)} a relationship between the RD functions and the correlation functions of
stationary multivariate Gaussian processes is obtained. The vector processes X_1 and X_2
are formed

X_1 = [X(t+\tau) \ \ Y(t+\tau)]^T, \qquad X_2 = [X(t) \ \ \dot{X}(t)]^T   (A.19)

X(t) is assumed to be stationary, so X(t) and \dot{X}(t) are uncorrelated and thereby independent,
since X(t) is Gaussian. The auto and cross correlation matrices of X_1 and X_2
become

R_{X_1 X_1} = \begin{bmatrix} \sigma_X^2 & R_{XY}(0) \\ R_{YX}(0) & \sigma_Y^2 \end{bmatrix}   (A.20)

R_{X_2 X_2} = \begin{bmatrix} \sigma_X^2 & 0 \\ 0 & \sigma_{\dot{X}}^2 \end{bmatrix}, \qquad R_{X_2 X_2}^{-1} = \begin{bmatrix} \sigma_X^{-2} & 0 \\ 0 & \sigma_{\dot{X}}^{-2} \end{bmatrix}   (A.21)

R_{X_1 X_2}(\tau) = \begin{bmatrix} R_{XX}(\tau) & R_{X\dot{X}}(\tau) \\ R_{YX}(\tau) & R_{Y\dot{X}}(\tau) \end{bmatrix} = \begin{bmatrix} R_{XX}(\tau) & -R'_{XX}(\tau) \\ R_{YX}(\tau) & -R'_{YX}(\tau) \end{bmatrix}   (A.22)

R' is the time derivative of R and \sigma_X^2 is the variance of X(t), equal to R_{XX}(0). The
mean value of X_1 on condition that X_2 = [a \ \ b]^T is calculated from the standard results for
the conditional mean value of multivariate Gaussian variables, see eq. (A.5)

E[X_1 | X_2] = \begin{bmatrix} R_{XX}(\tau) & -R'_{XX}(\tau) \\ R_{YX}(\tau) & -R'_{YX}(\tau) \end{bmatrix} \begin{bmatrix} \sigma_X^{-2} & 0 \\ 0 & \sigma_{\dot{X}}^{-2} \end{bmatrix} \begin{bmatrix} a \\ b \end{bmatrix}   (A.23)

Since the condition X_2 = [a \ \ b]^T is equal to T^{GT}_{X(t)} defined in eq. (A.18), the result of eq.
(A.23) is equal to the RD functions, see eqs. (A.11) and (A.14). The relation between RD
functions defined using T^{GT}_{X(t)} and correlation functions follows

D_{XX}(\tau) = \frac{R_{XX}(\tau)}{\sigma_X^2} \, a - \frac{R'_{XX}(\tau)}{\sigma_{\dot{X}}^2} \, b   (A.24)

D_{YX}(\tau) = \frac{R_{YX}(\tau)}{\sigma_X^2} \, a - \frac{R'_{YX}(\tau)}{\sigma_{\dot{X}}^2} \, b   (A.25)

These fundamental solutions relate the auto and cross RD functions of two Gaussian
distributed stochastic processes to their correlation functions. Corresponding to the conditional
mean value in eq. (A.23), the covariance of X_1 on condition that X_2 = [a \ \ b]^T can
be calculated from the standard results for multivariate Gaussian variables, see eq. (A.4)

Cov[X_1 | X_2] = \begin{bmatrix} \sigma_X^2 & R_{XY}(0) \\ R_{YX}(0) & \sigma_Y^2 \end{bmatrix} - \begin{bmatrix} R_{XX}(\tau) & -R'_{XX}(\tau) \\ R_{YX}(\tau) & -R'_{YX}(\tau) \end{bmatrix} \begin{bmatrix} \sigma_X^{-2} & 0 \\ 0 & \sigma_{\dot{X}}^{-2} \end{bmatrix} \begin{bmatrix} R_{XX}(\tau) & R_{YX}(\tau) \\ -R'_{XX}(\tau) & -R'_{YX}(\tau) \end{bmatrix}   (A.26)

The variances of the conditional processes can be extracted from the diagonal of the final
result of eq. (A.26)

Var[X(t+\tau) \,|\, T^{GT}_{X(t)}] = \sigma_X^2 \left( 1 - \left( \frac{R_{XX}(\tau)}{\sigma_X \sigma_X} \right)^2 - \left( \frac{R'_{XX}(\tau)}{\sigma_X \sigma_{\dot{X}}} \right)^2 \right)   (A.27)

Var[Y(t+\tau) \,|\, T^{GT}_{X(t)}] = \sigma_Y^2 \left( 1 - \left( \frac{R_{YX}(\tau)}{\sigma_Y \sigma_X} \right)^2 - \left( \frac{R'_{YX}(\tau)}{\sigma_Y \sigma_{\dot{X}}} \right)^2 \right)   (A.28)

These general relationships were first established in Brincker et al. [2], [3]. From the
results of this section it is possible to derive the relation between the RD functions and
the correlation functions for some of the triggering conditions of practical interest, see
e.g. section A.6. But in general the process becomes very complex. This is the motivation
for introducing the applied general triggering condition.
A.5 Applied General Triggering Condition

The zero mean Gaussian stationary processes X(t) and Y(t) are considered. In applications
of the RD technique, less strict triggering conditions than the theoretical general
triggering condition are used. The reason is that the theoretical general triggering condition
produces too few triggering points to secure a satisfactory averaging process. This leads
to the formulation of the applied general triggering condition, T^{GA}_{X(t)}. From the results of
this triggering condition the relation between the RD functions of all known triggering
conditions and the correlation functions can be established directly. The applied general
triggering condition, T^{GA}_{X(t)}, is defined by

T^{GA}_{X(t)} = \{ a_1 \leq X(t) < a_2, \ b_1 \leq \dot{X}(t) < b_2 \}   (A.29)

Notice that no restriction is made on the triggering bounds, [a_1 \ a_2], [b_1 \ b_2]. This means
e.g. that the bounds on \dot{X}(t) could be chosen as [b_1 \ b_2] = [-\infty \ \infty], which is a way of
omitting the condition on \dot{X}(t). The RD function is defined as the mean value of Y(t+\tau)
on condition of T^{GA}_{X(t)}. In the following only the cross RD functions are considered

D_{YX}(\tau) = E[Y(t+\tau) \,|\, T^{GA}_{X(t)}]
   = \int_{-\infty}^{\infty} y \, p_{Y|T^{GA}_{X(t)}}(y \,|\, T^{GA}_{X(t)}) \, dy
   = \frac{1}{k_1} \int_{a_1}^{a_2} \int_{b_1}^{b_2} \int_{-\infty}^{\infty} y \, p_{YX\dot{X}}(y, x, \dot{x}) \, dy \, dx \, d\dot{x}
   = \frac{1}{k_1} \int_{a_1}^{a_2} \int_{b_1}^{b_2} E[Y(t+\tau) \,|\, T^{GT}_{X(t)}] \, p_{X\dot{X}}(x, \dot{x}) \, dx \, d\dot{x}   (A.30)

where the results of eq. (A.8) and eq. (A.9) and the following have been used

k_1 = \int_{a_1}^{a_2} \int_{b_1}^{b_2} p_{X\dot{X}}(x, \dot{x}) \, dx \, d\dot{x}   (A.31)

T^{GT}_{X(t)} = \{ X(t) = x, \ \dot{X}(t) = \dot{x} \}   (A.32)

The results from the theoretical general triggering condition, see eq. (A.25), are inserted
in eq. (A.30)

D_{YX}(\tau) = E[Y(t+\tau) \,|\, T^{GA}_{X(t)}]
   = \frac{1}{k_1} \int_{a_1}^{a_2} \int_{b_1}^{b_2} \left( \frac{R_{YX}(\tau)}{\sigma_X^2} x - \frac{R'_{YX}(\tau)}{\sigma_{\dot{X}}^2} \dot{x} \right) p_{X\dot{X}}(x, \dot{x}) \, dx \, d\dot{x}
   = \frac{R_{YX}(\tau)}{\sigma_X^2} \, \tilde{a} - \frac{R'_{YX}(\tau)}{\sigma_{\dot{X}}^2} \, \tilde{b}   (A.33)

where \tilde{a} and \tilde{b} are given by

\tilde{a} = \frac{\int_{a_1}^{a_2} \int_{b_1}^{b_2} x \, p_{X\dot{X}}(x, \dot{x}) \, dx \, d\dot{x}}{\int_{a_1}^{a_2} \int_{b_1}^{b_2} p_{X\dot{X}}(x, \dot{x}) \, dx \, d\dot{x}} = \frac{\int_{a_1}^{a_2} x \, p_X(x) \, dx}{\int_{a_1}^{a_2} p_X(x) \, dx}   (A.34)

\tilde{b} = \frac{\int_{a_1}^{a_2} \int_{b_1}^{b_2} \dot{x} \, p_{X\dot{X}}(x, \dot{x}) \, dx \, d\dot{x}}{\int_{a_1}^{a_2} \int_{b_1}^{b_2} p_{X\dot{X}}(x, \dot{x}) \, dx \, d\dot{x}} = \frac{\int_{b_1}^{b_2} \dot{x} \, p_{\dot{X}}(\dot{x}) \, d\dot{x}}{\int_{b_1}^{b_2} p_{\dot{X}}(\dot{x}) \, d\dot{x}}   (A.35)

In the statement of eqs. (A.33), (A.34) and (A.35) it has been used that X(t) and \dot{X}(t)
are independent, which is true since X(t) is assumed to be Gaussian distributed and
stationary.

Equation (A.33) describes the relationship between the RD functions from the applied
general triggering condition and the correlation functions and the time derivatives of the
correlation functions. From this result the weights of R_{YX}(\tau) and R'_{YX}(\tau) can be extracted
directly by inserting the triggering bounds [a_1 \ a_2], [b_1 \ b_2] in eqs. (A.34) and (A.35).
In principle there is no significant difference between the results of T^{GT}_{X(t)} and T^{GA}_{X(t)}.
The only difference is the scaling or weight of the correlation functions.

The variance of the conditional stochastic process Y(t+\tau) \,|\, T^{GA}_{X(t)} is defined as

Var[Y(t+\tau) \,|\, T^{GA}_{X(t)}] = \int_{-\infty}^{\infty} (y - \mu_{Y|T^{GA}_{X(t)}})^2 \, p_{Y|T^{GA}_{X(t)}}(y \,|\, T^{GA}_{X(t)}) \, dy
   = \int_{-\infty}^{\infty} y^2 \, p_{Y|T^{GA}_{X(t)}}(y \,|\, T^{GA}_{X(t)}) \, dy - \mu^2_{Y|T^{GA}_{X(t)}}   (A.36)

where the conditional mean value is equal to the RD function (the term \mu_{Y|T^{GA}_{X(t)}} is used
for simplicity)

\mu^2_{Y|T^{GA}_{X(t)}} = E[Y(t+\tau) \,|\, T^{GA}_{X(t)}]^2 = D^2_{YX}(\tau)   (A.37)

and can be calculated straightforwardly from eqs. (A.33) - (A.35). The first term in eq.
(A.36) is calculated using eq. (A.8)

\int_{-\infty}^{\infty} y^2 \, p_{Y|T^{GA}_{X(t)}}(y \,|\, T^{GA}_{X(t)}) \, dy = \frac{1}{k_1} \int_{-\infty}^{\infty} \int_{a_1}^{a_2} \int_{b_1}^{b_2} y^2 \, p_{YX\dot{X}}(y, x, \dot{x}) \, dx \, d\dot{x} \, dy
   = \frac{1}{k_1} \int_{a_1}^{a_2} \int_{b_1}^{b_2} \int_{-\infty}^{\infty} y^2 \, p_{Y|T^{GT}_{X(t)}}(y \,|\, T^{GT}_{X(t)}) \, dy \, p_{X\dot{X}}(x, \dot{x}) \, dx \, d\dot{x}
   = \frac{1}{k_1} \int_{a_1}^{a_2} \int_{b_1}^{b_2} Var[Y \,|\, T^{GT}_{X(t)}] \, p_{X\dot{X}}(x, \dot{x}) \, dx \, d\dot{x} + \frac{1}{k_1} \int_{a_1}^{a_2} \int_{b_1}^{b_2} E[Y \,|\, T^{GT}_{X(t)}]^2 \, p_{X\dot{X}}(x, \dot{x}) \, dx \, d\dot{x}   (A.38)

where k_1 is given by eq. (A.31). The conditional variance is calculated using the results
of eqs. (A.37) and (A.38) and the results of eqs. (A.25) and (A.28)

Var[Y(t+\tau) \,|\, T^{GA}_{X(t)}] = Var[Y \,|\, T^{GT}_{X(t)}] + \frac{1}{k_1} \int_{a_1}^{a_2} \int_{b_1}^{b_2} \left( \frac{R_{YX}(\tau)}{\sigma_X^2} x - \frac{R'_{YX}(\tau)}{\sigma_{\dot{X}}^2} \dot{x} \right)^2 p_{X\dot{X}}(x, \dot{x}) \, dx \, d\dot{x} - \left( \frac{1}{k_1} \frac{R_{YX}(\tau)}{\sigma_X^2} k_2 - \frac{1}{k_1} \frac{R'_{YX}(\tau)}{\sigma_{\dot{X}}^2} k_3 \right)^2   (A.39)

where the following abbreviations are used

k_1 = \int_{a_1}^{a_2} \int_{b_1}^{b_2} p_{X\dot{X}}(x, \dot{x}) \, dx \, d\dot{x}   (A.40)

k_2 = \int_{a_1}^{a_2} \int_{b_1}^{b_2} x \, p_{X\dot{X}}(x, \dot{x}) \, dx \, d\dot{x}   (A.41)

k_3 = \int_{a_1}^{a_2} \int_{b_1}^{b_2} \dot{x} \, p_{X\dot{X}}(x, \dot{x}) \, dx \, d\dot{x}   (A.42)

k_4 = \int_{a_1}^{a_2} \int_{b_1}^{b_2} x^2 \, p_{X\dot{X}}(x, \dot{x}) \, dx \, d\dot{x}   (A.43)

k_5 = \int_{a_1}^{a_2} \int_{b_1}^{b_2} \dot{x}^2 \, p_{X\dot{X}}(x, \dot{x}) \, dx \, d\dot{x}   (A.44)

The conditional variance in eq. (A.39) is reduced to

Var[Y(t+\tau) \,|\, T^{GA}_{X(t)}] = \sigma_Y^2 \left( 1 - \left( \frac{R_{YX}(\tau)}{\sigma_Y \sigma_X} \right)^2 - \left( \frac{R'_{YX}(\tau)}{\sigma_Y \sigma_{\dot{X}}} \right)^2 \right) + \left( \frac{R_{YX}(\tau)}{\sigma_X^2} \right)^2 \left( \frac{k_4}{k_1} - \left( \frac{k_2}{k_1} \right)^2 \right) + \left( \frac{R'_{YX}(\tau)}{\sigma_{\dot{X}}^2} \right)^2 \left( \frac{k_5}{k_1} - \left( \frac{k_3}{k_1} \right)^2 \right)   (A.45)

The variance of the conditional stochastic process Y(t+\tau) \,|\, T^{GA}_{X(t)} is basically only a function
of the correlation functions and the time derivatives of the correlation functions.
Equations (A.33) and (A.45) are the mathematical basis for the RD technique applied to
stationary Gaussian stochastic processes. The results have been derived for the case of cross
RD functions. The results for auto RD functions are obtained by substituting Y(t+\tau)
with X(t+\tau) or the opposite. The relation between the RD functions of any triggering
condition and the correlation functions can be derived immediately from the results of eqs. (A.33) and (A.45). The
derivation is shown in sections A.6 - A.9 for commonly used triggering conditions.
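As a small numerical illustration, the MATLAB fragment below evaluates the weights in eqs. (A.34) and (A.35) for one set of bounds by direct integration of the Gaussian densities. The bounds and the unit standard deviations are example values only.

```matlab
% Numerical evaluation of the weights in eqs. (A.34)-(A.35) for the bounds
% [a1 a2] = [1 2]*sigma_X and [b1 b2] = [-inf inf] (no condition on the derivative).
sigX  = 1;  sigXd = 1;                                % standard deviations of X(t) and Xdot(t) (assumed)
phi   = @(u,s) exp(-u.^2./(2*s^2))./(sqrt(2*pi)*s);   % zero mean Gaussian density
a1 = 1*sigX;  a2 = 2*sigX;
atilde = integral(@(u) u.*phi(u,sigX),a1,a2) / integral(@(u) phi(u,sigX),a1,a2);
btilde = integral(@(u) u.*phi(u,sigXd),-inf,inf) / integral(@(u) phi(u,sigXd),-inf,inf);
% atilde is approximately 1.38*sigX and btilde is 0, so for these bounds the RD
% function in eq. (A.33) reduces to a scaled correlation function.
```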
A.6 Level Crossing Triggering Condition

The level crossing triggering condition is the original triggering condition and has been
by far the most popular triggering condition. A triggering point is detected if the process
X(t) is equal to the chosen triggering level a. The superscript L is an abbreviation for the
level crossing triggering condition

T^{L}_{X(t)} = \{ X(t) = a \}   (A.46)

Equation (A.46) is reformulated to be of the same form as the applied general triggering
condition

T^{L}_{X(t)} = \{ a \leq X(t) < a + \Delta a, \ -\infty \leq \dot{X}(t) < \infty \}, \qquad \Delta a \rightarrow 0   (A.47)

From the above reformulation of the level crossing triggering condition the weights \tilde{a} and
\tilde{b} in eqs. (A.34) and (A.35) are calculated by letting \Delta a \rightarrow 0

\tilde{a} = \frac{\int_{a}^{a+\Delta a} x \, p_X(x) \, dx}{\int_{a}^{a+\Delta a} p_X(x) \, dx} = a   (A.48)

\tilde{b} = \frac{\int_{-\infty}^{\infty} \dot{x} \, p_{\dot{X}}(\dot{x}) \, d\dot{x}}{\int_{-\infty}^{\infty} p_{\dot{X}}(\dot{x}) \, d\dot{x}} = 0   (A.49)

The final result for the level crossing triggering condition states that the RD functions are
proportional to the correlation functions

D_{YX}(\tau) = E[Y(t+\tau) \,|\, T^{L}_{X(t)}] = \frac{R_{YX}(\tau)}{\sigma_X^2} \, a   (A.50)

The variance of the conditional process Y(t+\tau) \,|\, T^{L}_{X(t)} is obtained by calculating k_1, k_2, k_3, k_4
and k_5 from eqs. (A.40) - (A.44). The fractions used in eq. (A.45) become

\frac{k_4}{k_1} = a^2, \qquad \left( \frac{k_2}{k_1} \right)^2 = a^2, \qquad \frac{k_5}{k_1} = \sigma_{\dot{X}}^2, \qquad \left( \frac{k_3}{k_1} \right)^2 = 0   (A.51)

The variance follows by inserting the results of eq. (A.51) into eq. (A.45)

Var[Y(t+\tau) \,|\, T^{L}_{X(t)}] = \sigma_Y^2 \left( 1 - \left( \frac{R_{YX}(\tau)}{\sigma_Y \sigma_X} \right)^2 \right)   (A.52)

It is important to note that if an auto conditional process is considered, X(t+\tau) \,|\, T^{L}_{X(t)}, the
variance of this process at time lag zero will always be zero, since R_{XX}(0) = \sigma_X^2 for stationary processes.
Furthermore, the variance is independent of the triggering level, a. The RD functions using
level crossing triggering are estimated as the empirical mean value. The processes are assumed to
be ergodic

\hat{D}_{YX}(\tau) = \frac{1}{N} \sum_{i=1}^{N} y(t_i+\tau) \,|\, x(t_i) = a   (A.53)

where x(t) and y(t) are realizations of the processes X(t) and Y(t). If the different time
segments in the averaging process of eq. (A.53) are assumed to be independent, the variance
of \hat{D}_{YX} can be derived from eq. (A.52)

Var[\hat{D}_{YX}(\tau)] \approx \frac{\sigma_Y^2}{N} \left( 1 - \left( \frac{R_{YX}(\tau)}{\sigma_Y \sigma_X} \right)^2 \right)   (A.54)

Eqs. (A.50), (A.52) and (A.54) were first derived by Vandiver et al. [1] for the auto
RD functions. Their work was based on a more complicated and less general approach,
since they operated directly on the density functions instead of using eqs. (A.3) and (A.4).
Brincker et al. [2], [3] derived eqs. (A.50), (A.52) and (A.54) using the results from section
A.4 and the total representation theorem, see Ditlevsen [9]. Their proof included cross
RD functions.
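A simple simulation check of eq. (A.50) is sketched below in MATLAB. The two processes are generated by example filters driven by the same white noise; the discrete-time detection of crossings only approximates the condition X(t) = a, so the empirical mean of x at the triggering points is used as the effective level when scaling the sample correlation function (cf. eqs. (A.33) and (A.34)).

```matlab
% Simulation check of eq. (A.50): level crossing RD function versus the scaled
% sample cross correlation function. The filters are examples only.
f = randn(200000,1);
x = filter(1,[1 -1.6 0.9],f);                       % conditioning process X
y = filter([1 0.5],[1 -1.6 0.9],f);                 % second process Y
a = sqrt(2)*std(x);                                 % triggering level, cf. section A.6.1
nrd = 100;
icr = find((x(1:end-1)-a).*(x(2:end)-a) < 0) + 1;   % samples just after a crossing of the level a
icr = icr(icr > nrd & icr <= numel(x)-nrd);
D = zeros(2*nrd+1,1);
for i = 1:numel(icr)
    D = D + y(icr(i)-nrd:icr(i)+nrd);
end
D = D/numel(icr);                                   % estimate of D_YX(tau)

R  = zeros(2*nrd+1,1);                              % sample cross correlation R_YX(tau)
t0 = (1+nrd:numel(x)-nrd)';
for k = -nrd:nrd
    R(k+nrd+1) = mean(y(t0+k).*x(t0));
end
plot(-nrd:nrd,D, -nrd:nrd,R*mean(x(icr))/var(x),'--')
legend('D_{YX}(\tau)','scaled R_{YX}(\tau)')
```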
A.6.1 Expected Number of Triggering Points

The expected number of level crossings per unit time of a Gaussian process is given by
Rice's formula, see Rice [10]

E\left[ \frac{dN(a)}{dt} \right] = \frac{1}{\pi} \frac{\sigma_{\dot{X}}}{\sigma_X} \exp\left( -\frac{a^2}{2\sigma_X^2} \right)   (A.55)

The expected number of triggering points of a Gaussian time series is given by

E[N(a)] = \Delta T \, (N_X - N) \, \frac{1}{\pi} \frac{\sigma_{\dot{X}}}{\sigma_X} \exp\left( -\frac{a^2}{2\sigma_X^2} \right)   (A.56)

where \Delta T is the sampling interval, N_X is the number of points in the time series X, and
N is the number of points in the RD functions. The number of level crossings, N(a), or
triggering points is of course dependent on the triggering level, a.
One of the problems with the level crossing triggering condition is how to choose the
triggering level a. It can be shown that the optimal triggering level, in the sense of
minimizing the variance of the RD functions (see eq. (A.52)), is a = \sqrt{2}\,\sigma_X, see Hummelshøj
et al. [11]. This has been supported by a simulation study by Brincker et al. [12], where
the results indicate that a triggering level between 1 and 2 times \sigma_X should be chosen.
These results are based on the assumption of an ergodic Gaussian stochastic process and
on the assumption that the averaged time segments are independent.
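The use of eqs. (A.55) and (A.56) can be illustrated by the following MATLAB fragment, which compares the expected number of level crossings with the number counted in a simulated record. The sampling interval, the filter and the record length are example values, and the standard deviation of the derivative process is estimated by simple differencing.

```matlab
% Expected versus counted number of level crossing triggering points.
dt  = 0.01;                                     % sampling interval [s] (example value)
N   = 100000;
x   = filter(1,[1 -1.99 0.995],randn(N,1));     % lightly damped (narrow-banded) example response
x   = x - mean(x);
a   = sqrt(2)*std(x);                           % triggering level
sigX  = std(x);
sigXd = std(diff(x)/dt);                        % estimate of the standard deviation of Xdot(t)
rate  = (1/pi)*(sigXd/sigX)*exp(-a^2/(2*sigX^2));   % Rice's formula, crossings per second
Nexpected = rate*dt*N;                          % expected number of crossings in the record
Ncounted  = sum((x(1:end-1)-a).*(x(2:end)-a) < 0);  % counted sign changes of x - a
[Nexpected Ncounted]
```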
A.7 Local Extremum Triggering Condition

The local extremum triggering condition, T^{E}_{X(t)}, has not been widely used. Nevertheless
the triggering condition is attractive, since the contribution from the time derivative of the
process is not averaged out but required to be zero. A triggering point is detected if
the time derivative of the process is zero and the process itself is bounded by a_1 and a_2

T^{E}_{X(t)} = \{ a_1 \leq X(t) < a_2, \ \dot{X}(t) = 0 \}, \qquad 0 \leq a_1 < a_2 \leq \infty   (A.57)

In general a_1 and a_2 should have equal sign. Otherwise information is lost, since the
contribution from \{ a_1 \leq X(t) < 0, \ \dot{X}(t) = 0 \} will average out the contribution from
\{ 0 \leq X(t) < |a_1|, \ \dot{X}(t) = 0 \}. The only part left will be the contribution from \{ |a_1| \leq
X(t) < a_2, \ \dot{X}(t) = 0 \}. Equation (A.57) is reformulated to be of the same form as the
applied general triggering condition

T^{E}_{X(t)} = \{ a_1 \leq X(t) < a_2, \ 0 \leq \dot{X}(t) < 0 + \Delta b \}, \qquad \Delta b \rightarrow 0   (A.58)

From the above reformulation of the local extremum triggering condition the weights \tilde{a}
and \tilde{b} in eqs. (A.34) and (A.35) are calculated by letting \Delta b \rightarrow 0

\tilde{a} = \frac{\int_{a_1}^{a_2} x \, p_X(x) \, dx}{\int_{a_1}^{a_2} p_X(x) \, dx}   (A.59)

\tilde{b} = \frac{\int_{0}^{0+\Delta b} \dot{x} \, p_{\dot{X}}(\dot{x}) \, d\dot{x}}{\int_{0}^{0+\Delta b} p_{\dot{X}}(\dot{x}) \, d\dot{x}} = 0   (A.60)

The RD functions for the local extremum triggering condition are proportional to the
correlation functions

D_{YX}(\tau) = E[Y(t+\tau) \,|\, T^{E}_{X(t)}] = \frac{R_{YX}(\tau)}{\sigma_X^2} \, \tilde{a}   (A.61)

If the triggering levels are chosen as [a_1 \ a_2] = [0 \ \infty] (or alternatively [-\infty \ 0]), the maximum
number of triggering points is always obtained. The resulting triggering level in
this case is

\tilde{a} = \frac{\int_{a_1}^{a_2} x \, p_X(x) \, dx}{\int_{a_1}^{a_2} p_X(x) \, dx}, \quad [a_1 \ a_2] = [0 \ \infty] \ \Rightarrow \ \tilde{a} = \sqrt{\frac{2}{\pi}} \, \sigma_X   (A.62)

The variance of Y(t+\tau) \,|\, T^{E}_{X(t)} is obtained by calculating k_1, k_2, k_3, k_4 and k_5 from eqs.
(A.40) - (A.44). The fractions used in eq. (A.45) become

\left( \frac{k_2}{k_1} \right)^2 = \left( \frac{\int_{a_1}^{a_2} x \, p_X(x) \, dx}{\int_{a_1}^{a_2} p_X(x) \, dx} \right)^2, \qquad \frac{k_4}{k_1} = \frac{\int_{a_1}^{a_2} x^2 \, p_X(x) \, dx}{\int_{a_1}^{a_2} p_X(x) \, dx}, \qquad \frac{k_3}{k_1} = \frac{k_5}{k_1} = 0   (A.63)

The variance of the conditional process follows by inserting the results of eq. (A.63) into
eq. (A.45)

Var[Y(t+\tau) \,|\, T^{E}_{X(t)}] = \sigma_Y^2 \left( 1 - \left( \frac{R_{YX}(\tau)}{\sigma_Y \sigma_X} \right)^2 - \left( \frac{R'_{YX}(\tau)}{\sigma_Y \sigma_{\dot{X}}} \right)^2 \right) + k_E \left( \frac{R_{YX}(\tau)}{\sigma_X^2} \right)^2   (A.64)

where k_E is

k_E = \frac{\int_{a_1}^{a_2} x^2 \, p_X(x) \, dx}{\int_{a_1}^{a_2} p_X(x) \, dx} - \left( \frac{\int_{a_1}^{a_2} x \, p_X(x) \, dx}{\int_{a_1}^{a_2} p_X(x) \, dx} \right)^2   (A.65)

If especially the triggering levels are chosen as [a_1 \ a_2] = [0 \ \infty], the variance becomes

Var[Y(t+\tau) \,|\, T^{E}_{X(t)}] = \sigma_Y^2 \left( 1 - \frac{2}{\pi} \left( \frac{R_{YX}(\tau)}{\sigma_X \sigma_Y} \right)^2 - \left( \frac{R'_{YX}(\tau)}{\sigma_{\dot{X}} \sigma_Y} \right)^2 \right)   (A.66)

The variance of the conditional variable becomes dependent on the chosen triggering levels.
Furthermore, the variance of X(t+\tau) \,|\, T^{E}_{X(t)} at time lag zero is

Var[X(t+\tau) \,|\, T^{E}_{X(t)}] = \sigma_X^2 \left( 1 - \frac{2}{\pi} \right)   (A.67)

This result differs from the level crossing triggering condition, since the variance at time
lag zero is not zero. The reason is that the triggering levels, a_1, a_2, define two bounds
instead of only a single value.
The RD functions using local extremum triggering are estimated as the empirical mean.
The processes are assumed to be ergodic

\hat{D}_{YX}(\tau) = \frac{1}{N} \sum_{i=1}^{N} y(t_i+\tau) \,|\, a_1 \leq x(t_i) < a_2, \ \dot{x}(t_i) = 0   (A.68)

where x(t) and y(t) are realizations of X(t) and Y(t). If the different time segments in the
averaging process are assumed to be independent, the variance of the RD functions can be
derived from eq. (A.64)

Var[\hat{D}_{YX}(\tau)] \approx \frac{\sigma_Y^2}{N} \left( 1 - \left( \frac{R_{YX}(\tau)}{\sigma_Y \sigma_X} \right)^2 - \left( \frac{R'_{YX}(\tau)}{\sigma_Y \sigma_{\dot{X}}} \right)^2 \right) + \frac{k_E}{N} \left( \frac{R_{YX}(\tau)}{\sigma_X^2} \right)^2   (A.69)

A.7.1 Expected Number of Triggering Points

In general the expected number of local extremes can only be calculated using tedious
integrations, see e.g. Lin [13]. In the case of a narrow-banded process a simpler approach can
be used. For narrow-banded processes the expected number of local extremes (maxima)
above a certain level is equal to the expected number of up or down crossings of this
level. This implies that the local extremes are all local maxima. The expected number of
triggering points in a narrow-banded Gaussian process can be calculated from the result
of eq. (A.56)

E[N(a_1, a_2)] = \Delta T \, (N_X - N) \, \frac{1}{2\pi} \frac{\sigma_{\dot{X}}}{\sigma_X} \left( \exp\left( -\frac{a_1^2}{2\sigma_X^2} \right) - \exp\left( -\frac{a_2^2}{2\sigma_X^2} \right) \right)   (A.70)
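In discrete time the local extremum triggering condition is typically applied by detecting the local maxima of the sampled record, as in the MATLAB sketch below; the detection by a sign change of the difference sequence only approximates the condition on the derivative, and the system and bounds are example values only.

```matlab
% Sketch of RD estimation with the local extremum triggering condition (A.57).
x   = filter(1,[1 -1.6 0.9],randn(50000,1));  x = x - mean(x);
a1  = 0;  a2 = inf;                           % [a1 a2] = [0 inf] gives the maximum number of points
nrd = 100;
dx   = diff(x);
imax = find(dx(1:end-1) > 0 & dx(2:end) <= 0) + 1;   % local maxima (slope changes from + to -)
idx  = imax(x(imax) >= a1 & x(imax) < a2);
idx  = idx(idx > nrd & idx <= numel(x)-nrd);
D = zeros(2*nrd+1,1);
for i = 1:numel(idx)
    D = D + x(idx(i)-nrd:idx(i)+nrd);
end
D = D/numel(idx);                 % estimate of D_XX(tau); cf. the idealized interpretation in eq. (A.61)
plot(-nrd:nrd,D)
```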
A.8 Positive Point Triggering Condition
The positive point triggering condition can be interpreted as a generalization of the level
crossing triggering condition. It is the most versatile of the di erent triggering conditions
presented. A triggering point is detected if the process is bounded by a1 and a2. The
bounds should have equal sign. Usually positive signs are used.
TXP (t) = fa1 < X (t)  a2 g (A.71)
Equation (A.71) is reformulated to be of the same form as the applied general triggering
condition.
TXP (t) = fa1 < X (t)  a2 ; ,1 < X_ (t) < 1g a2 > a1  0 (A.72)
Notice that if a1 ! a2 the positive point triggering condition is equal to the level triggering
condition. The RD functions calculated using TXP (t) are derived from eq. (A.33). The
triggering levels or weights become, see eqs. (A.34) and (A.35)
R a xp (x)dx
a~ = Raa p X(x)dx
2
1
(A.73)
a X
2
1

R1
xp
_ X_ (x_ )dx_
~b = R,1
1 p (x_ )dx_ = 0 (A.74)
,1 X_
The RD functions for the positive point triggering condition are proportional to the correlation functions

D_{YX}(\tau) = E[Y(t+\tau)\,|\,T_X^P(t)] = \frac{R_{YX}(\tau)}{\sigma_X^2}\, \tilde{a} \qquad (A.75)

If the triggering bounds are chosen as [a_1 a_2] = [0 ∞[ the maximum number of triggering
points is obtained

[a_1 \; a_2] = [0 \; \infty[ \;\Rightarrow\; \tilde{a} = \sqrt{\frac{2}{\pi}}\, \sigma_X \qquad (A.76)

The variance of the conditional process Y(t + τ) | T_X^P(t) is obtained by calculating k_1, k_2, k_3, k_4
and k_5 from eqs. (A.40) - (A.44). The fractions used in eq. (A.45) become

\frac{k_4}{k_1} = \frac{\int_{a_1}^{a_2} x^2\, p_X(x)\, dx}{\int_{a_1}^{a_2} p_X(x)\, dx} \; ; \quad \frac{k_2}{k_1} = \frac{\int_{a_1}^{a_2} x\, p_X(x)\, dx}{\int_{a_1}^{a_2} p_X(x)\, dx} \; ; \quad \frac{k_5}{k_1} = \sigma_{\dot{X}}^2 \; ; \quad \frac{k_3}{k_1} = 0 \qquad (A.77)
The variance of the conditional process Y(t + τ) | T_X^P(t) reduces to

Var[Y(t+\tau)\,|\,T_X^P(t)] = \sigma_Y^2 \left( 1 - \left( \frac{R_{YX}(\tau)}{\sigma_X \sigma_Y} \right)^2 \right) + k_P \left( \frac{R_{YX}(\tau)}{\sigma_X^2} \right)^2 \qquad (A.78)
where k_P is

k_P = \frac{\int_{a_1}^{a_2} x^2\, p_X(x)\, dx}{\int_{a_1}^{a_2} p_X(x)\, dx} - \left( \frac{\int_{a_1}^{a_2} x\, p_X(x)\, dx}{\int_{a_1}^{a_2} p_X(x)\, dx} \right)^2 \qquad (A.79)
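Since ã and k_P only involve one-dimensional integrals of the Gaussian density, they are easy to verify numerically. The sketch below (function name, grid size and truncation are arbitrary choices) reproduces ã = √(2/π) σ_X from eq. (A.76) and k_P = σ_X²(1 − 2/π) for [a_1 a_2] = [0 ∞[, which is the value that reduces eq. (A.78) to eq. (A.80) below.

```python
import numpy as np

def trigger_constants(a1, a2, sigma_x, n=200_000):
    """Numerical evaluation of a~ (eq. (A.73)) and k_P (eq. (A.79)) for a
    zero-mean Gaussian density.  The normalisation of the density and the
    grid spacing cancel in the ratios, so an unnormalised pdf and plain sums suffice."""
    upper = min(a2, 10.0 * sigma_x)                      # truncate an infinite upper bound
    x = np.linspace(a1, upper, n)
    p = np.exp(-x**2 / (2.0 * sigma_x**2))               # unnormalised Gaussian density
    a_tilde = np.sum(x * p) / np.sum(p)
    k_p = np.sum(x**2 * p) / np.sum(p) - a_tilde**2
    return a_tilde, k_p

sigma_x = 1.3
print(trigger_constants(0.0, np.inf, sigma_x))
print(np.sqrt(2.0 / np.pi) * sigma_x, sigma_x**2 * (1.0 - 2.0 / np.pi))   # closed-form values
```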
If in particular [a_1 a_2] = [0 ∞[, then k_P = σ_X²(1 − 2/π) and the variance reduces to

Var[Y(t+\tau)\,|\,T_X^P(t)] = \sigma_Y^2 \left( 1 - \frac{2}{\pi} \left( \frac{R_{YX}(\tau)}{\sigma_X \sigma_Y} \right)^2 \right) \qquad (A.80)
The variance of the conditional auto process at time lag zero is

Var[X(t+\tau)\,|\,T_X^P(t)] = \sigma_X^2 \left( 1 - \frac{2}{\pi} \right) \qquad (A.81)

which is different from the variance of the conditional auto process using level crossing
triggering, since the above is not zero.
The RD functions using positive point triggering are estimated as the empirical mean.
The processes are assumed to be ergodic
\hat{D}_{YX}(\tau) = \frac{1}{N} \sum_{i=1}^{N} y(t_i + \tau) \, \Big| \, a_1 < x(t_i) \leq a_2 \qquad (A.82)
where x(t) and y(t) are realizations of X(t) and Y(t). If the different time segments in the
averaging process are assumed to be independent, the variance of the RD functions can be
derived from eq. (A.78)

Var[\hat{D}_{YX}(\tau)] \approx \frac{\sigma_Y^2}{N} \left( 1 - \left( \frac{R_{YX}(\tau)}{\sigma_X \sigma_Y} \right)^2 \right) + \frac{k_P}{N} \left( \frac{R_{YX}(\tau)}{\sigma_X^2} \right)^2 \qquad (A.83)
The above relation for the variance should be used with care, since it is very unlikely that
the time segments are independent.
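A minimal sketch of the estimator in eq. (A.82), under the same assumptions as the earlier sketch for local extremum triggering (evenly sampled realizations, illustrative names):

```python
import numpy as np

def rd_positive_point(x, y, a1, a2, n_lags):
    """Sketch of eq. (A.82): average segments of y starting at every sample
    where the realization x lies inside the bounds a1 < x <= a2."""
    trig = np.where((x > a1) & (x <= a2))[0]
    trig = trig[trig + n_lags <= len(y)]                 # keep room for a full segment
    if trig.size == 0:
        raise ValueError("no triggering points found")
    segments = np.array([y[i:i + n_lags] for i in trig])
    return segments.mean(axis=0), trig.size
```

With [a_1 a_2] = [0 ∞[ roughly every second sample becomes a triggering point, so consecutive segments overlap almost completely; this is one reason why the independence assumption behind eq. (A.83) should not be taken literally.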
A.8.1 Expected Number of Triggering Points
The expected number of triggering points per unit time, dN(a_1, a_2)/dt, is simply the probability
that a_1 < X(t) ≤ a_2.

E\left[ \frac{dN(a_1, a_2)}{dt} \right] = \int_{a_1}^{a_2} p_X(x)\, dx \qquad (A.84)
The expected number of triggering points in a time series is given by

E[N(a_1, a_2)] = T (N_X - N) \cdot \int_{a_1}^{a_2} p_X(x)\, dx \qquad (A.85)
If for instance the bounds are chosen as [a_1 a_2] = [0 ∞[, the expected number of
triggering points will be

E[N_T] = T (N_X - N) \cdot 0.5 \qquad (A.86)
Half of the points in the time series will be triggering points. This is the major difference
between RD functions estimated using positive point triggering and local extremum triggering. The number of triggering points is much higher for the positive point triggering
condition.
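The difference in the number of triggering points can be illustrated with a simulated narrow-banded record; the single-degree-of-freedom parameters, the integration scheme and the record length below are arbitrary choices made only for this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dt, f0, zeta = 100_000, 0.01, 2.0, 0.02       # illustrative SDOF parameters
w0 = 2.0 * np.pi * f0
x = np.zeros(n)
v = 0.0
for i in range(1, n):                            # semi-implicit Euler, white-noise load
    a = -2.0 * zeta * w0 * v - w0**2 * x[i - 1] + rng.standard_normal()
    v += a * dt
    x[i] = x[i - 1] + v * dt
x -= x.mean()

dx = np.diff(x)
n_extremum = np.sum((dx[:-1] * dx[1:] < 0) & (x[1:-1] >= 0.0))   # local extrema above the mean
n_positive = np.sum(x > 0.0)                                     # positive point condition, [0, inf[
print(n_extremum, n_positive)   # the positive point condition yields many times more points
```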
A.9 Zero Crossing Triggering Condition
The zero crossing with positive slope triggering condition was the second triggering condition introduced in RD estimation. This condition was used, since the resulting RD
functions were interpreted as impulse response functions. A triggering point is detected if
the process crosses the zero line with positive slope

T_X^Z(t) = \{ X(t) = 0 \, , \; \dot{X}(t) \geq 0 \} \qquad (A.87)
This triggering condition could also have been formulated with more general velocity bounds
instead of [b_1 b_2] = [0 ∞[. In practice only a realization of X(t) is known, so if more general
bounds are used a numerical differentiation of X(t) is necessary, which will introduce
uncertainty and thereby false triggering points. The bounds in eq. (A.87) also ensure a
maximum number of triggering points.
Equation (A.87) is reformulated to be of the same form as the applied general triggering
condition
T_X^Z(t) = \{ 0 \leq X(t) < 0 + \Delta a \, , \; 0 \leq \dot{X}(t) < \infty \} \qquad (A.88)
The RD functions calculated using T_X^Z(t) are derived from eq. (A.33). The triggering levels
or weights become, see eqs. (A.34) and (A.35)

\tilde{a} = \frac{\int_{0}^{0+\Delta a} x\, p_X(x)\, dx}{\int_{0}^{0+\Delta a} p_X(x)\, dx} = 0 \qquad (A.89)

\tilde{b} = \frac{\int_{0}^{\infty} \dot{x}\, p_{\dot{X}}(\dot{x})\, d\dot{x}}{\int_{0}^{\infty} p_{\dot{X}}(\dot{x})\, d\dot{x}} = \sqrt{\frac{2}{\pi}}\, \sigma_{\dot{X}} \qquad (A.90)
From the weights it follows that the RD functions are proportional to the time derivative
of the correlation functions
D_{YX}(\tau) = E[Y(t+\tau)\,|\,T_X^Z(t)] = - \frac{R'_{YX}(\tau)}{\sigma_{\dot{X}}^2}\, \sqrt{\frac{2}{\pi}}\, \sigma_{\dot{X}} \qquad (A.91)
The variance of the conditional process Y(t + τ) | T_X^Z(t) is obtained by calculating k_1, k_2, k_3, k_4
and k_5 from eqs. (A.40) - (A.44). The fractions used in eq. (A.45) become

\frac{k_3}{k_1} = \sqrt{\frac{2}{\pi}}\, \sigma_{\dot{X}} \; ; \quad \frac{k_5}{k_1} = \sigma_{\dot{X}}^2 \; ; \quad \frac{k_2}{k_1} = \frac{k_4}{k_1} = 0 \qquad (A.92)
The variance of the conditional process Y(t + τ) | T_X^Z(t) reduces to

Var[Y(t+\tau)\,|\,T_X^Z(t)] = \sigma_Y^2 \left( 1 - \left( \frac{R_{YX}(\tau)}{\sigma_X \sigma_Y} \right)^2 - \frac{2}{\pi} \left( \frac{R'_{YX}(\tau)}{\sigma_{\dot{X}} \sigma_Y} \right)^2 \right) \qquad (A.93)
This relation is analogous to the result for the local extremum triggering condition, see
eq. (A.66). The RD functions using the zero crossing triggering condition are estimated as
the empirical mean. The processes are assumed to be ergodic

\hat{D}_{YX}(\tau) = \frac{1}{N} \sum_{i=1}^{N} y(t_i + \tau) \, \Big| \, x(t_i) = 0 \, , \; \dot{x}(t_i) \geq 0 \qquad (A.94)
If the different time segments in the averaging process are assumed to be independent, the
variance of the RD functions can be derived from eq. (A.93)

Var[\hat{D}_{YX}(\tau)] \approx \frac{\sigma_Y^2}{N} \left( 1 - \left( \frac{R_{YX}(\tau)}{\sigma_X \sigma_Y} \right)^2 - \frac{2}{\pi} \left( \frac{R'_{YX}(\tau)}{\sigma_{\dot{X}} \sigma_Y} \right)^2 \right) \qquad (A.95)
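A sketch of the estimator in eq. (A.94); on sampled data the zero crossing with positive slope is taken here as the first sample at or above zero following a negative sample (the function name and this detection rule are implementation choices, not part of the definition).

```python
import numpy as np

def rd_zero_crossing(x, y, n_lags):
    """Sketch of eq. (A.94): average segments of y starting where the
    realization x crosses zero with positive slope."""
    up = np.where((x[:-1] < 0.0) & (x[1:] >= 0.0))[0] + 1
    up = up[up + n_lags <= len(y)]                       # keep room for a full segment
    if up.size == 0:
        raise ValueError("no up-crossings found")
    segments = np.array([y[i:i + n_lags] for i in up])
    return segments.mean(axis=0), up.size
```

By eq. (A.91) the resulting estimate is proportional to −R'_YX(τ), so for a narrow-banded process the auto RD function obtained this way is roughly a quarter period out of phase with the correlation-proportional estimates of the other triggering conditions.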

A.9.1 Expected Number of Triggering Points


The expected number of triggering points is derived from Rice's formula, see eq. (A.55).
The argument is that half of the zero crossings have positive slope

E\left[ \frac{dN(0)}{dt} \right] = \frac{1}{2\pi} \frac{\sigma_{\dot{X}}}{\sigma_X} \qquad (A.96)
The expected number of triggering points in a time series with N_X points and a sampling
period T is

E[N(0)] = T (N_X - N) \cdot \frac{1}{2\pi} \frac{\sigma_{\dot{X}}}{\sigma_X} \qquad (A.97)
where N is the number of points in the RD functions.
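As a rough numerical check of eqs. (A.96) - (A.97), the sketch below compares the predicted number of zero up-crossings with a direct count for an arbitrarily chosen low-pass Gaussian record, approximating σ_Ẋ by finite differences.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n = 0.01, 100_000
x = np.convolve(rng.standard_normal(n), np.ones(20) / 20.0, mode="same")  # crude low-pass Gaussian record
x -= x.mean()

sigma_x = np.std(x)
sigma_xdot = np.std(np.diff(x) / dt)                     # finite-difference velocity estimate
n_rd = 500                                               # points N reserved for the RD function
predicted = dt * (n - n_rd) * sigma_xdot / (2.0 * np.pi * sigma_x)   # eq. (A.97)
counted = np.sum((x[:-1] < 0.0) & (x[1:] >= 0.0))
print(predicted, counted)        # the two counts should be of comparable size
```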

A.10 Summary
This appendix establishes the mathematical basis for the RD technique applied to stationary zero mean Gaussian distributed processes. A definition of the RD functions is
given in section A.3. Furthermore, the estimation process is discussed and it is shown
that the estimates of the RD functions are unbiased. Section A.4 introduces the general
theoretical triggering condition. The relations between the RD functions of this condition
and the correlation functions are derived. Since this triggering condition only has theoretical interest, a generalization of this condition, the generally applied triggering condition,
is introduced in section A.5. The relation between the RD functions and the correlation
functions is derived. This triggering condition is important, since the relations between
the RD functions of the triggering conditions used in practice and the correlation functions can be derived directly from the results of section A.5. The four different triggering
conditions which are used in practice, level crossing, local extremum, positive point and
zero crossing with positive slope triggering, are described in sections A.6 - A.9. The relations between the RD functions and the correlation functions are derived, an approximate
equation for the variance of the estimated RD functions is given and finally the expected
number of triggering points is given. Sections A.6 - A.9 give the fundamental mathematical description of the RD functions. The results of this appendix are used throughout
the thesis.

Bibliography
[1] Vandiver, J.K., Dunwoody, A.B., Campbell, R.B. & Cook, M.F. A Mathematical
Basis for the Random Decrement Vibration Signature Analysis Technique. Journal of
Mechanical Design, Vol. 104, April 1982, pp. 307-313.
[2] Brincker, R., Krenk, S., Kirkegaard, P.H. & Rytter, A. Identification of Dynamical
Properties from Correlation Function Estimates. Bygningsstatiske Meddelelser, Vol.
63, No. 1, 1992, pp. 1-38.
[3] Brincker, R., Krenk, S. & Jensen, J.L. Estimation of Correlation Functions by the
Random Dec Technique. Proc. Skandinavisk Forum for Stokastisk Mekanik, Lund,
Sweden, Aug. 30-31, 1990.
[4] Bedewi, N.A. & Yang, J.C.S. The Random Decrement Technique: A More Efficient
Estimator of the Correlation Function. Proc. 1990 ASME International Conference
and Exposition, Boston, MA, USA, August 5-9, pp. 195-201.
[5] Yang, J.C.S., Qi, G.Z. & Kan, C.D. Mathematical Base of the Random Decrement
Technique. Proc. 8th International Modal Analysis Conference, Kissimmee, Florida,
USA, 1990, pp. 28-34.
[6] Melsa, J.L. & Sage, A.P. An Introduction to Probability and Stochastic Processes.
Prentice-Hall, Inc., Englewood Cliffs, N.J., 1973. ISBN: 0-13-034850-3.
[7] Söderström, T. & Stoica, P. System Identification. Prentice Hall International (UK)
Ltd, 1989. ISBN: 0-13-881236.
[8] Papoulis, A. Probability, Random Variables and Stochastic Processes. McGraw-Hill,
Inc., 1991. ISBN: 0-07-100870-5.
[9] Ditlevsen, O. Uncertainty Modelling. McGraw-Hill, Inc., 1981. ISBN: 0-07-010746-0.
[10] Rice, S.O. Mathematical Analysis of Random Noise. Bell Syst. Tech. J., Vol. 23, pp.
282-332; Vol. 24, pp. 46-156. Reprinted in N. Wax, Selected Papers on Noise and
Stochastic Processes. Dover Publications, Inc., New York.
[11] Hummelshøj, L.G., Møller, H. & Pedersen, L. Skadesdetektering ved Responsmåling.
M.Sc. Thesis (in Danish), Aalborg University, 1991.
[12] Brincker, R., Jensen, J.L. & Krenk, S. Spectral Estimation by the Random Dec
Technique. Proc. 9th International Conference on Experimental Mechanics, Lyngby,
Copenhagen, August 20-24, 1990.
[13] Lin, Y.K. Probabilistic Theory of Structural Dynamics. 3rd Edition, McGraw-Hill,
Inc., 1986. ISBN: 0-88275-377-0.
