Random Decrement - Modal Analysis
Roman
a, a~ = Triggering level vector.
A = State matrix.
A~ = State matrix.
b, b~ = Triggering level vector.
B = State matrix.
B~ = State matrix.
C = Damping matrix. Symmetric and positive definite.
COV[] = Covariance operator.
D = Observation matrix.
Dxx = Auto Random Decrement function.
Dxy = Cross Random Decrement function.
Dxxi = Random Decrement functions in vector form.
Dxi = Vector Random Decrement function.
Dx = Vector Random Decrement functions in vector form.
E [] = Mean value operator.
E = Error function.
f = Force vector.
F = State force vector.
h = Impulse response matrix/function.
H = Frequency response matrix/function.
i, j = Integers or √−1.
I = Identity matrix.
k = Constant.
K = Stiffness matrix. Symmetric and positive definite.
m, m = Integer. Modal masses, modal mass matrix.
M = Mass matrix. Diagonal and positive denite.
n = Integer. E.g. DOFs or vector/matrix size.
N = Integer, e.g. number of time points, triggering points.
p = Density function (multivariate).
P = Distribution function (multivariate).
P = Probability.
q0 = Modal initial vector condition.
q(t) = Modal response vector.
RXX = Auto correlation matrix/function.
RXY = Cross correlation matrix/function.
R', R'' = One and two time derivatives of R.
t, τ = Time variables.
SXX(ω) = Auto spectral density.
SYX(ω) = Cross spectral density.
TE = Local extremum triggering condition.
TGT = Theoretical general triggering condition.
TGA = Applied general triggering condition.
TL = Level crossing triggering condition.
TP = Positive point triggering condition.
TZ = Zero crossing triggering condition.
TV = Vector triggering condition.
U (t) = Noise process.
u(t) = Realization of noise process.
V = Covariance matrix/function.
x = Displacement response vector.
ẋ = Velocity response vector.
ẍ = Acceleration response vector.
x0 = Initial displacement condition.
ẋ0 = Initial velocity condition.
X, Y = Stochastic vector process.
Ẋ, Ẏ = Time derivative of X, Y.
Ẍ, Ÿ = Double time derivative of X, Y.
x, y = Realizations of stochastic vector process, X, Y.
ẋ, ẏ = Realizations of stochastic vector process, Ẋ, Ẏ.
ẍ, ÿ = Realizations of stochastic vector process, Ẍ, Ÿ.
z = State vector.
ż = Time derivative of state vector.
z0 = Initial state condition.
ZXY = Fourier transform of DXY.
(·)^T = Matrix transposed.
(·)^{-1} = Matrix inverse.
(·)* = Matrix complex conjugate.
(·)ˆ = Estimate of (·).
Greek
ΔT = Sampling interval.
Δt = Time shift.
Δt = Time shifts in vector form.
= Discrete-time eigenvalue.
γ²_xy = Coherence function.
λ = Continuous-time eigenvalue.
μ = Mean value - vector/scalar.
ω = Cyclic eigenfrequency.
ωd = Damped cyclic eigenfrequency.
Φi, Φ = Mode shape vector/matrix.
Ψi, Ψ = Eigenvector/matrix or modal matrix.
σ = Standard deviation.
τ = Time variable.
ζ = Modal damping ratio.
Λ = Diagonal matrix with eigenvalues, λi.
Γ = Diagonal matrix with discrete eigenvalues.
Contents

1 Introduction
1.1 Background and Motivation
1.1.1 Data Analysis
1.1.2 The Random Decrement Technique
1.2 Review of the Random Decrement Technique
1.2.1 Development of the Random Decrement Technique
1.2.2 Theoretical Aspects of the Random Decrement Technique
1.2.3 Application of the Random Decrement Technique
1.3 Scope of Work
1.4 Thesis Outline
1.5 Reader's Guide

2 Theoretical Background for Linear Structures
2.1 Lumped Mass Parameter System
2.1.1 Modal Decomposition of Free Decays
2.1.2 Forced Vibration
2.2 Identification of Modal Parameters From Free Decays
2.2.1 General Equations
2.2.2 Pseudo Measurements
2.2.3 Modelling of Noise
2.2.4 Extraction of Eigenfrequencies and Damping Ratios
2.2.5 Modal Participation Factors
2.2.6 Practical Application
2.2.7 Separation of Noise Modes and Physical Modes
2.3 Structures Loaded by Gaussian White-Noise
2.3.1 Correlation Functions
2.3.2 Load Modelling
2.3.3 Correlation Functions of the Response
2.4 Summary

3 The Random Decrement Technique
3.1 Definition of Random Decrement Functions
3.2 Applied General Triggering Condition
3.2.1 Example 1: Illustration of Triggering Conditions
3.2.2 Example 2: 2DOF System
3.3 Level Crossing Triggering Condition
3.3.1 Example 1: Illustration of Triggering Conditions
3.3.2 Example 2: 2DOF System
3.4 Local Extremum Triggering Condition
3.4.1 Example 1: Illustration of Triggering Conditions
3.4.2 Example 2: 2DOF System
3.5 Positive Point Triggering Condition
3.5.1 Example 1: Illustration of Triggering Conditions
3.5.2 Example 2: 2DOF System
3.6 Zero Crossing Triggering Condition
3.6.1 Example 1: Illustration of Triggering Conditions
3.6.2 Example 2: 2DOF System
3.7 Quality Assessment of RD Functions
3.7.1 Shape Invariance Test
3.7.2 Symmetry Test
3.8 Choice of Triggering Levels
3.9 Comparison of Different Approaches
3.9.1 Traditional Approaches
3.9.2 Triggering Conditions
3.9.3 Results of Simulation Study
3.9.4 Conclusions
3.10 Summary

4 Vector Triggering Random Decrement
4.1 Definition of VRD Functions
4.2 Mathematical Basis of VRD
4.2.1 Choice of Time Shifts
4.3 Variance of VRD Functions
4.4 Quality Assessment
4.5 Examples - 2DOF Systems
4.5.1 Example 1
4.5.2 Example 2
4.6 Example - 4DOF System
4.7 Summary

5 Variance of RD Functions
5.1 Variance of RD Functions
5.2 Example 1: Level Crossing - SDOF
5.3 Example 2: Positive Point - SDOF
5.4 Example 3: Positive Point - 2DOF
5.5 Example 4: Positive Point - 5DOF
5.6 Summary

6 Bias Problems and Implementation
6.1 Bias of RD Functions
6.1.1 Bias due to Discretization
6.1.2 Bias due to Sorting of Triggering Points
6.1.3 Bias due to High Damping
6.2 Implementation of RD Functions
6.2.1 RD and VRD Functions in HIGH-C
6.2.2 MATLAB Utility Functions
6.2.3 Example
6.3 Summary

7 Estimation of FRF by Random Decrement
7.1 Traditional FFT Based Approach
7.2 Random Decrement Based Approach
7.3 Case Studies
7.3.1 Basic Case - SDOF System
7.3.2 Experimental Study - Laboratory Bridge Model
7.4 Summary

8 Ambient Testing of Bridges
8.1 Case Study 1: Queensborough Bridge
8.1.1 Data Analysis Methodology
8.1.2 Results
8.1.3 Conclusion
8.2 Case Study 2: Laboratory Bridge Model
8.2.1 Data Analysis Methodology
8.2.2 Results
8.2.3 Conclusions
8.3 Case Study 3: Vestvej Bridge
8.4 Bridge Description
8.5 Measurement Setup
8.5.1 Data Analysis Methodology
8.5.2 Results
8.5.3 Conclusions
8.6 Summary

9 Conclusions
9.1 Summary
9.1.1 Chapter 1
9.1.2 Chapter 2
9.1.3 Chapter 3
9.1.4 Chapter 4
9.1.5 Chapter 5
9.1.6 Chapter 6
9.1.7 Chapter 7
9.1.8 Chapter 8
9.2 General Conclusions
9.3 Perspectives and Future Work
9.3.1 Non-Gaussian Processes
9.3.2 Non-Linear Structures
9.3.3 Improvement of the Variance Model
9.3.4 Extraction of Modal Parameters
9.3.5 Damage Detection by On-the-Line Continuous Surveillance

10 Summary in Danish

A Random Decrement and Correlation Functions
A.1 Multivariate Gaussian Variables
A.2 Conditional Densities
A.3 Definition of Random Decrement Functions
A.4 General Theoretical Triggering Condition
A.5 Applied General Triggering Condition
A.6 Level Crossing Triggering Condition
A.6.1 Expected Number of Triggering Points
A.7 Local Extremum Triggering Condition
A.7.1 Expected Number of Triggering Points
A.8 Positive Point Triggering Condition
A.8.1 Expected Number of Triggering Points
A.9 Zero Crossing Triggering Condition
A.9.1 Expected Number of Triggering Points
A.10 Summary
Chapter 1
Introduction
The purpose of this chapter is to give an appropriate introduction to the topic of this thesis: modal analysis based on the Random Decrement technique. The intention is that the chapter should give a short background and motivation for the work presented in this thesis. A clear delimitation of this work will also be presented. This chapter should give an overview of the contents of this thesis.

Section 1.1 describes the basic procedures in vibration testing and some of the traditional applications. In this section some main choices, which delimit this work to deal with the Random Decrement technique, are presented. Section 1.2 contains a review of the Random Decrement technique. This is a natural starting point in order to identify the advantages and disadvantages of the technique and to detect any missing knowledge about the technique. Section 1.3 delimits the work presented in this thesis in detail and brings up questions which will be investigated throughout the thesis. Section 1.4 summarizes the contents of each chapter in order to make selective reading possible. Finally, a reader's guide finishes this introduction in section 1.5.
Figure 1.1: Illustration of the processes in vibration testing. The data analysis may result
in a reformulation of the mathematical model or a change in the experimental setup.
This thesis mainly deals with the data analysis procedure in vibration testing. This is the process where the measurements of the vibrations of a structure are used to calibrate or identify the parameters of the mathematical model of the structure. The parameters of the mathematical model describe the dynamic characteristics of the structure.

Collecting the measurements of the vibrating structure is the fundamental basis for a successful test. If the measurements are not collected carefully, it is impossible to obtain satisfactorily accurate dynamic characteristics of the structure, regardless of the mathematical modelling and data analysis. Although this thesis also includes experimental work, the process of measuring the vibrations of a structure is not described or reported in detail. This is beyond the scope of this work.
Structures are continuous or distributed systems. The mass, damping and stiffness properties are distributed throughout the spatial definition of the structure. In this thesis the mathematical model used to describe the vibrations of a structure is a discrete model: the linear lumped mass parameter model. The damping forces, which model all energy dissipation from the structure, are assumed to be proportional to the velocity of the lumped masses, and the stiffness forces are assumed to be proportional to the displacements of the lumped masses. The principle is illustrated in fig. 1.2.
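For reference, the linear lumped mass parameter model leads to the familiar matrix equation of motion, written here with the symbols M, C, K, x and f from the nomenclature; the detailed derivation is the subject of chapter 2, so this is only a reminder of the standard form:

$$\mathbf{M}\ddot{\mathbf{x}}(t) + \mathbf{C}\dot{\mathbf{x}}(t) + \mathbf{K}\mathbf{x}(t) = \mathbf{f}(t)$$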
It should be made clear that despite these problems data analysis based on Fourier transformation is a fast and reliable method. One of the reasons is that persons with experience in modal analysis and vibration testing can immediately extract valuable information from a plot of the data transformed into the frequency domain, such as spectral densities, compared to a time domain plot of the data. An area which illustrates the popularity of data analysis based on Fourier transformations is ambient testing of bridges. Ambient testing means testing of a structure which vibrates due to natural loads such as wind, traffic, waves etc. In a review of ambient testing of bridges, see Farrar et al. [6], a bibliography of about 100 papers is listed. At least 95% of all this work is based on the FFT algorithm.
[Bar plot: number of published works (0-10) per year, 1970-1995.]
Figure 1.3: The literature published each year (conference papers, reports, articles, Ph.D. theses etc.) concerning the RD technique.
The numbers in the figure are based on the bibliography at the end of this chapter. Only a handful of people have published several papers concerning either theoretical work or application work. One of the reasons for the lack of interest in this technique is perhaps that for many years the theoretical background did not include a statistical description in the same sense as the methods based on Fourier transformations. During the late 1980s the mathematical background of the RD technique was extended to include nearly a full statistical description.

This motivates the work reported in the present thesis, where the topic is modal analysis based on the RD technique. The results of applying the RD technique will several times be compared with analysis based on the FFT algorithm. The RD technique could be compared with other time domain algorithms, but the FFT algorithm is chosen, since it is the most well-known and well-documented method. This will give a solid basis for comparison.
To explain the concept of the RD technique and to argue for the validity of the technique, Cole used the following explanation. The random response of a structure at the time t0 + t is composed of three parts: 1) the step response from the initial displacement at the time t0, 2) the impulse response from the initial velocity at the time t0, and 3) a random part which is due to the load applied to the structure in the period t0 to t0 + t. What happens if a time segment is picked out every time the random response, x(t), has an initial displacement, say x(t) = a, and these time segments are averaged?

This question indicates the first concept of the RD technique and was answered by: as the number of averages increases, the random part due to the random load will eventually average out and become negligible. Furthermore, the sign of the initial velocity is expected to vary randomly with time, so the resulting initial velocity will be zero. The only part left is the free decay response from the initial displacement, a. The principle is illustrated in figure 1.4.
[Top panel: random time series x versus time [s] with 12 triggering points marked; sub-panels: Dxx versus τ for N = 1, 2, 3, 7, 11 and 12 averages.]
Figure 1.4: Concept of the RD technique. Random time series, x, (top figure) with triggering points and averaging process (sub-figures) with both the resulting RD function (full line) and the current time segment (dashed line).
In figure 1.4 the initial displacement is chosen to be a = 1.5σ_x. It is commonly accepted to denote the initial displacement, a, as the triggering level and the time points, t0, where x = a, as triggering points. Furthermore, the triggering level a is usually given as a multiple of the standard deviation, σ_x, of the time series. The process of estimating RD functions illustrated in figure 1.4 can be formulated as a sum of time segments picked out from the response on the condition that the time segments have the value a at the start

$$\hat{D}_{XX}(\tau) = \frac{1}{N}\sum_{i=1}^{N} x(t_i + \tau)\ \Big|\ x(t_i) = a \tag{1.5}$$

where $\hat{D}_{XX}$ is the estimated RD function, τ is the time variable in $\hat{D}_{XX}$ as illustrated in fig. 1.4, and N is the total number of triggering points. The simplicity of the estimation process is obvious, since only detection of triggering points and averaging of the corresponding time segments are performed.
In applications of the RD technique the measurements, x_j, are discrete (time series). This means that a problem arises, since the probability of having the value x(t_i) = a in the time series is zero, unless a is chosen carefully. The problem is solved by implementing the triggering condition as a level crossing problem. Therefore the name adopted for the condition illustrated in fig. 1.4 and eq. (1.5) is the level crossing triggering condition.
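As a minimal illustration of eq. (1.5) and the level crossing idea, the sketch below estimates an auto RD function from a sampled record. The function name, the triggering level of 1.5 standard deviations and the segment length are illustrative choices, not prescriptions from the thesis, and the crossing detection is the simplest possible (no interpolation between samples).

```python
import numpy as np

def rd_auto(x, a, n_lags):
    """Estimate an auto RD function D_XX(tau), tau = 0..n_lags-1 samples,
    using the level crossing triggering condition x(t_i) = a.
    A triggering point is taken wherever the discrete record passes the
    level a between two consecutive samples (either direction)."""
    x = np.asarray(x, dtype=float)
    above = x >= a
    crossings = np.where(above[:-1] != above[1:])[0]
    # keep only triggering points with a full time segment available
    crossings = crossings[crossings + n_lags <= len(x)]
    if len(crossings) == 0:
        raise ValueError("no triggering points found for this level")
    segments = np.stack([x[i:i + n_lags] for i in crossings])
    return segments.mean(axis=0), len(crossings)

# usage on a synthetic record (filtered white noise as a crude stand-in for a response)
rng = np.random.default_rng(0)
w = rng.standard_normal(20000)
x = np.convolve(w, np.ones(5) / 5, mode="same")
D_xx, n_trig = rd_auto(x, a=1.5 * x.std(), n_lags=200)
print(n_trig, D_xx[:5])
```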
The introduction of this technique was followed up by a simulation study by Chang [11]. Chang investigated the significance of the length of the RD function and the number of ensemble averages (triggering points). Based on simulations of the response of 1 and 2 DOF systems loaded by white noise, he recommended about 2000 ensemble averages in order to extract accurate damping ratios (and eigenfrequencies) from the RD functions. The length of the RD functions was suggested to be in the range of 50% to 125% of the beat period of the two eigenfrequencies.
Chang investigated the level crossing triggering condition and the zero crossing with a
positive slope triggering condition. This triggering condition was introduced by Cole, see
Cole [10]. A time segment is picked out and used in the averaging process if the time
series crosses zero with a positive slope at the start of the time segment. The resulting
RD function was believed to be equal to the impulse response function of the structure,
since the initial displacement is zero. Houbolt, see Houbolt [12], improved this triggering
condition by also picking out time segments if the time series crosses zero with a negative
slope. The sign of these time segments is changed before they are averaged with time
segments picked out by positive slopes. This approach can be adapted to any triggering condition, see Brincker et al. [57].
1.2.2 Theoretical Aspects of the Random Decrement Technique
Although the RD technique has been applied in connection with a broad range of structures, only a few papers have considered theoretical aspects of the RD technique. This also includes the solution of implementation problems, bias problems etc.
In 1977 Ibrahim introduced the concept of auto and cross RD functions, see Ibrahim [22], [23]. Up to this point the RD technique had only been applied to single channel measurements, which resulted in estimation of eigenfrequencies and damping ratios, but with no possibility to extract mode shape information. The concept of the auto and cross RD functions is based on multi-channel measurements. Consider two measurements x(t) and y(t). The auto, D_XX, and cross, D_YX, RD functions are estimated using the level crossing triggering condition as:
$$\hat{D}_{XX}(\tau) = \frac{1}{N}\sum_{i=1}^{N} x(t_i + \tau)\ \Big|\ x(t_i) = a \tag{1.6}$$

$$\hat{D}_{YX}(\tau) = \frac{1}{N}\sum_{i=1}^{N} y(t_i + \tau)\ \Big|\ x(t_i) = a \tag{1.7}$$
The first subscript refers to the measurement where the time segments are picked out and averaged, whereas the second subscript refers to the measurement where the triggering points are detected. Alternatively, zero upcrossings or downcrossings could be used as the triggering condition. The approach made it possible to estimate mode shapes corresponding to the measurement points by combining the RD functions with a method for determination of modal parameters from free decays or impulse response functions, such as e.g. the Ibrahim Time Domain method, see Ibrahim [22], [23]. This was an important improvement of the RD technique.
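To make the role of the two subscripts concrete, the sketch below extends the earlier rd_auto example (the helper name is again illustrative): the triggering points are detected in the record x, while the averaged time segments are taken from the simultaneously sampled record y.

```python
import numpy as np

def rd_cross(y, x, a, n_lags):
    """Estimate a cross RD function D_YX(tau): triggering points are detected
    in x (level crossing at level a), segments are averaged from y."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    above = x >= a
    crossings = np.where(above[:-1] != above[1:])[0]
    crossings = crossings[crossings + n_lags <= len(y)]
    segments = np.stack([y[i:i + n_lags] for i in crossings])
    return segments.mean(axis=0)
```

Estimating D_YX for every measured channel y with a common triggering channel x is what, after modal parameter extraction, yields the mode shape information mentioned above.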
In Reed [26] it is suggested that the RD estimation process is also applied backwards to the time series. The RD functions should be the same. This corresponds to using both positive and negative time lags, τ, in the time segments. This new approach raised a problem, since it is difficult to describe a negative time lag in terms of free decays. This problem was partly solved by Vandiver et al. [32] in 1982.
Vandiver et al. published a paper where it was proven that an auto RD function, estimated using the level crossing triggering condition, is proportional to the auto correlation function of a stationary process, if it has a zero mean Gaussian distribution. The paper is important, since the proportionality between auto RD functions and auto correlation functions made the RD technique directly comparable with the spectral density functions estimated using FFT. This is due to the Wiener-Khintchine relations, which describe the correlation functions as the inverse Fourier transform of the spectral density functions. A description in terms of correlation functions has the advantage that negative time lags can be interpreted without any problems.
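For orientation, the proportionality proven by Vandiver et al. is often written in the compact form below (stationary, zero mean, Gaussian process, level crossing triggering at level a); this is the standard form found in the literature and is not quoted from the thesis itself:

$$D_{XX}(\tau) = \frac{a}{\sigma_X^{2}}\, R_{XX}(\tau)$$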
Brincker et al. [54] and [60] extended the link between RD functions and correlation
functions. By introducing a theoretical general triggering condition it was proven that
auto and cross RD functions are proportional to the auto and cross correlation functions,
respectively. This interpretation of RD functions will be introduced, discussed and further
generalized in later chapters, so a detailed description of the results is omitted at this stage.
It should be noted that in Bedewi [45] and Bedewi & Yang [52] and [56] there are also some theoretical considerations about the link between RD functions and free decays and/or correlation functions.
The papers reviewed above are the most important papers concerning the theoretical
aspects of the RD technique. A couple of papers concerning the implementation of this
technique have also been published, see e.g. Chang [11], Caldwell [25], Kiraly [33], Nasir
& Sunder [34], Brincker et al. [53], [54], [57], [58]. These problems and their solution will
also be discussed in this thesis.
Finally it is worth mentioning that a recent paper by Desforges et al. [64] concludes that the RD technique is an accurate way of estimating spectral densities and modal parameters. They compared the RD technique with several other methods for estimating correlation functions/spectral density functions.
[Plot: reference RD function versus τ (0 to 2 s) with confidence bounds.]
Figure 1.5: Reference RD function and 95 % confidence intervals.
Next time an inspection is performed by measuring the random response of the structure, a new RD function is estimated. If this RD function at a certain time, e.g. τ = 1.5 seconds, is within the 95 % confidence bounds, no damage is detected. Otherwise the decision is that damage is detected. This approach was also used in Kummer et al. [30] and Yang et al. [36]. One major problem in this technique is how to choose the value of the confidence bound to have a proper balance between detecting a non-existing crack and disregarding an existing crack. Although this approach takes the statistical nature of the problem into consideration, it is not independent of changes of the load on the structure.
The second main application of the RD technique is identification of structures. The technique has mainly been applied to measurements of offshore structures and aeroplanes subjected to ambient loads, see Yang et al. [31] and Ibrahim [27]. But the technique has also been applied to identification of railway vehicle kinematic behaviour, see Siviter & Pollard [41], and soil testing, see Al-Sannad et al. [44]. One of the advantages of identification by RD is the simplicity of the approach. Basically RD functions are estimated as a simple averaging process of time segments. This should make the technique especially favourable in connection with large structures such as bridges, where the experiments include a large number of measurements.
Finally, the RD technique has also been used to identify non-linear structures, see e.g. Ibrahim et al. [49], Caldwell [25] and Haddara [59]. The fundamental idea is to estimate several RD functions at different initial conditions or triggering levels, a. From the RD functions, parameters (e.g. modal parameters) can be extracted as a function of the triggering level. The non-linearity can then be detected from changes in the modal parameters with the triggering level, a.
Bibliography
[1] Ewins, D.J. Modal Testing: Theory and Practice. Research Studies Press, Ltd.,
Staunton, Somerset, England, 1984 (reprinted 1995). ISBN 0 86380 017 3.
[2] Cooley, J.W. & Tukey, J.W. An Algorithm for the Machine Calculation of Complex
Fourier Series. Mathematics of Computation, Vol. 19, pp. 297-301, April 1965.
[3] Bendat, J.S. & Piersol, A.G. Random Data - Analysis and Measurement Procedures.
John Wiley & Sons, 1986. ISBN 0-471-04000-2.
[4] Schmidt, H. Resolution Bias Errors in Spectral Density, Frequency Response and
Coherence Function Measurements. I-V. Journal of Sound and Vibration, 101(3), pp.
347-427, 1985.
[5] Andersen, P. Identification of Civil Engineering Structures using Vector ARMA Models. Ph.D. thesis, Aalborg University, 1997.
[6] Farrar, C.R., Baker, W.E., Bell, T.M., Cone, K.M., Darling, T.W., Duffey, T.A., Eklund, A. & Migliori, A. Dynamic Characterization and Damage Detection in the I-40 Bridge Over the Rio Grande. Los Alamos National Laboratory, LA-12767-MS, UC-906, June 1994.
1968
[7] Cole, H.A. On-The-Line Analysis of Random Vibrations. AIAA Paper No. 68-288.
1968.
1971
[8] Cole, H.A. Method and Apparatus for Measuring the Damping Characteristics of a
Structure. United States Patent No. 3, 620,069, Nov. 16. 1971.
[9] Cole, H.A. Failure Detection of a Space Shuttle Wing by Random Decrement. NASA
TMX-62,041, May 1971.
1973
[10] Cole, H.A. On-Line Failure Detection and Damping Measurements of Aerospace
Structures By Random Decrement Signature. NASA CR-2205, 1973.
1975
[11] Chang, C.S. Study of Dynamic Characteristics of Aeroelastic Systems Utilizing Ran-
domdec Signatures. NASA-CR-132563, Feb. 1975.
[12] Houbolt, J.C. On Identifying Frequencies and Damping in Subcritical Flutter Testing.
Proc. NASA Symposium on Flutter Testing Techniques, Edwards, California, Oct.
9-10, 1975. NASA SP-415, pp. 1-41.
[13] Bennet, R.M. & Desmarais, R.N. Curve Fitting of Aeroelastic Transient Response
Data with Exponential Functions. Proc. NASA Symposium on Flutter Testing Tech-
niques, Edwards, California, Oct. 9-10, 1975. NASA SP-415, pp. 43-57.
[14] Hammond, C.E. & Dogget, R.V. Determination of Subcritical Damping by Moving
Block/Randomdec Applications. Proc. NASA Symposium on Flutter Testing Tech-
niques, Edwards, California, Oct. 9-10, 1975. NASA SP-415, pp. 59-76.
[15] Huttsell, L.J. & Noll, T.E. Wind Tunnel Investigation of Supersonic Wing-Tail Flut-
ter. Proc. NASA Symposium on Flutter Testing Techniques, Edwards, California,
Oct. 9-10, 1975. NASA SP-415, pp. 193-211.
[16] Lenz, R.W. & McKeever, B. Time Series Analysis in Flight Flutter Testing at the
Air Force Flight Test Center: Concepts and Results. Proc. NASA Symposium on
Flutter Testing Techniques, Edwards, California, Oct. 9-10, 1975. NASA SP-415, pp.
287-317.
[17] Perangelo, H.J. & Milordi, F.W. Flight Flutter Testing Technology at Grumman.
Proc. NASA Symposium on Flutter Testing Techniques, Edwards, California, Oct.
9-10, 1975. NASA SP-415, pp. 319-375.
[18] Abla, M.A. The Application of Recent Techniques in Flight Flutter Testing. Proc.
NASA Symposium on Flutter Testing Techniques, Edwards, California, Oct. 9-10,
1975. NASA SP-415, pp. 395-411.
[19] Brignac, W.J., Ness, H.B., Johnson, M.K. & Smith, L.M. YF-16 Flight Flutter Test
Procedures. Proc. NASA Symposium on Flutter Testing Techniques, Edwards, Cali-
fornia, Oct. 9-10, 1975. NASA SP-415, pp. 433-457.
[20] Reed, R.E. & Cole, H.A. Applicability of Randomdec Technique to Flight Simulator
for Advanced Aircraft. NASA CR-137609, 1975.
[21] Yang, J.C.S. & Caldwell, D.W. The Measurement of Damping and the Detection of Damages in Structures by the Random Decrement Technique. 46th Shock and Vibration Symposium and Bulletin, San Diego, California, Nov. 1975.
1977
[22] Ibrahim, S.R. Random Decrement Technique for Modal Identification of Structures. Journal of Spacecraft and Rockets, Vol. 14, No. 11, Nov. 1977, pp. 696-700.
[23] Ibrahim, S.R. The Use of Random Decrement Technique for Identification of Structural Modes of Vibration. AIAA Paper, Vol. 77, 1977, pp. 1-9.
1978
[24] Yang, J.C.S. & Caldwell, D.W. A Method for Detecting Structural Deterioration in
Piping Systems. ASME Probabilistic Analysis and Design of Nuclear Power Plant
Structures Manual PVB-PB-030, 1978, pp. 97-117.
[25] Caldwell, D.W. The Measurement of Damping and the Detection of Damage in Linear
and Nonlinear Systems by the Random Decrement Technique. Ph.D.-Thesis, Univer-
sity of Maryland, 1978.
1979
[26] Reed, R.E. Analytical Aspects of Randomdec Analysis. AIAA/ASME/AHS 20th
Structures, Structural Dynamics and Materials Conf. St. Louis, Mo. April 1979, pp.
404-409.
[27] Ibrahim, S.R. Application of Random Time Domain Analysis to Dynamic Flight Mea-
surements. The Shock and Vibration Bulletin, Bulletin 49, Part 2 of 3, Sept. 1979,
pp. 165-170.
1980
[28] Ibrahim, S.R. Limitations on Random Input Forces in Randomdec Computation for Modal Identification. The Shock and Vibration Bulletin, Bulletin 50 (Part 3 of 4), Dynamic Analysis, Design Techniques, Sept. 1980, pp. 99-112.
[29] Yang, J.C.S., Dagalakis, N. & Hirt, M. Application of the Random Decrement Technique in the Detection of an Induced Crack on an Offshore Platform Model. Computer Methods for Offshore Structures, Winter Annual Meeting of ASME, Nov. 16-21, 1980, pp. 55-67.
1981
[30] Kummer, E., Yang, J.C.S. & Dagalakis, N. Detection of Fatigue Cracks in Struc-
tural Members. Proc. 2nd ASCE/EMD Specialty Conference on Dynamic Response
of Structures. Atlanta, Georgia, Jan. 1981, pp. 445-460.
[31] Yang, J.C.S., Aggour, M.S., Dagalakis, N. & Miller, F. Damping of an Offshore Platform Model by Random Dec Method. Proc. 2nd ASCE/EMD Specialty Conference on Dynamic Response of Structures, Atlanta, Georgia, Jan. 1981, pp. 819-832.
1982
[32] Vandiver, J.K., Dunwoody, A.B., Campbell, R.B. & Cook, M.F. A Mathematical
Basis for the Random Decrement Vibration Signature Analysis Technique. Journal of
Mechanical Design, Vol. 104, April 1982, pp. 307-313.
[33] Kiraly, L.J. A High Speed Implementation of the Random Decrement Algorithm.
NASA Technical Memorandum 82853, NASA-TA-82853. (Prepared for the 1982
Aerospace/Test Measurement Symposium, Las Vegas, Nevada, May 2-6 1982.)
[34] Nasir, J. & Sunder, S.S. An Evaluation of the Random Decrement Technique of Vibration Signature Analysis for Monitoring Offshore Platforms. Massachusetts Institute of Technology, Department of Civil Engineering, Research Report R82-52, Sept. 1982.
1983
[35] Huan,S.-L., McInnis, B.C. & Denman, E.D. Analysis of the Random Decrement
Method. Int. J. Systems Sci., 1983, Vol. 14, No. 4, 417-423.
1984
[36] Yang, J.C.S., Chen, J. & Dagalakis, N.G. Damage Detection in Offshore Structures by the Random Decrement Technique. ASME, Journal of Energy Resources Technology, March 1984, Vol. 106, pp. 38-42.
[37] Ibrahim, S.R. Time-Domain Quasilinear Identification of Nonlinear Dynamic Systems. AIAA Journal, Vol. 6, No. 6, June 1984, pp. 817-823.
1985
[38] Tsai, T. Yang, J.C.S. & Chen, R.Z. Detection of Damages in Structures by the Cross
Random Decrement Technique. Proc. 3rd International Modal Analysis Conference,
Jan. 28-31, Orlando, Florida, 1985, pp. 691-700.
[39] Yang, J.C.S., Tsai, T., Tsai, W.H. & Chen, Z. Detection and Identification of Structural Damage from Dynamic Response Measurements. Proc. 4th International Offshore Mechanics and Arctic Engineering Symposium, Dallas, Texas, 1985, pp. 496-504.
[40] Yang, J.C.S., Tsai, T., Pavlin, V., Chen, J. & Tsai, W.H. Structural Damage Detection by the System Identification Technique. The Shock and Vibration Bulletin, Bulletin 55, Part 3 of 5, June 1985, pp. 57-66.
[41] Siviter, R. & Pollard, M.G. Measurement of Railway Vehicle Kinematic Behaviour
using the Random Decrement Technique. Vehicle Systems Dynamics, Vol. 14. No. 1-3
June 1985, pp. 136-140.
[42] Yang, J.C.S., Marks, C.H., Jiang, J., Chen, D., Elahi, A. & Tsai, W.-H. Determina-
tion of Fluid Damping using Random Excitation. ASME Journal of Energy Resources
Technology, Vol. 107, June 1985, pp. 220-225.
1986
[43] Ibrahim, S.R. Incipient Failure Detection from Random-Decrement Time Functions.
The International Journal of Analytical and Experimental Modal Analysis. Vol. 1,
No. 2, April 1986, pp. 1-9.
[44] Al-Sannad, H.A., Aggour, M.S. & Amer, M.I. Use of Random Loading in Soil Testing.
Indian Geotechnical Journal, Vol. 16, No. 2, April 1986, pp. 126-135
[45] Bedewi, N.E. The Mathematical Foundation of the Auto and Cross Random Decrement Technique and the Development of a System Identification Technique for Detection of Structural Deterioration. Ph.D. dissertation, University of Maryland, 1986.
1987
[46] Bedewi, N.E., Kung, D.-N., Qi, G.-Z. & Yang, J.C.S. Use of the Random Decre-
ment Technique for Detecting Flaws and Monitoring the Initiation and Propagation
of Fatigue Cracks in High-Performance Materials. Proc. Nondestructive Testing of
High-Performance Ceramics, Boston, MA, August 25-27, 1987, pp. 424-441.
[47] Bedewi, N.E. & Yang, J.C.S. A System Identification Technique Based on the Random Decrement Signatures. Part 1: Theory and Simulation. Proc. 58th Shock and Vibration Symposium, Huntsville, Alabama, October 13-15, 1987, Vol. 1, pp. 257-273.
[48] Bedewi, N.E. & Yang, J.C.S. A System Identification Technique Based on the Random Decrement Signatures. Part 2: Experimental Results. Proc. 58th Shock and Vibration Symposium, Huntsville, Alabama, October 13-15, 1987, Vol. 1, pp. 275-287.
[49] Ibrahim, S.R., Wentz, K.R. & Lee, J. Damping Identification from Non-Linear Random Responses using a Multi-Triggering Random Decrement Technique. Mechanical Systems and Signal Processing, 1(4), 1987, pp. 389-397.
1988
[50] Kung, D.-N., Qi, G.-Z., Yang, J.C.S. & Bedewi, N. Fatigue Life Characterization of Composite Structures using the Random Decrement Modal Analysis Technique. Proc. 6th International Modal Analysis Conference, Kissimmee, Florida, Feb. 1-4, 1988, pp. 350-356.
[51] Bernard, P. Identification de Grandes Structures: Une Remarque sur la Méthode du Décrément Aléatoire. Journal of Theoretical and Applied Mechanics, Vol. 7, No. 3, 1988, pp. 269-280. (In French).
1990
[52] Bedewi, N.E. & Yang, J.C.S. The Random Decrement Technique: A More Efficient Estimator of the Correlation Function. Proc. 1990 ASME International Conference and Exposition, Boston, MA, USA, Aug. 5-9, 1990, pp. 195-201.
[53] Brincker, R., Jensen, J.L. & Krenk, S. Spectral Estimation by the Random Dec
Technique. Proc. 9th International Conference on Experimental Mechanics, Lyngby,
Copenhagen, Aug. 20-24, 1990.
[54] Brincker, R., Krenk, S. & Jensen, J.L. Estimation of Correlation Functions by the Random Dec Technique. Proc. Skandinavisk Forum for Stokastisk Mekanik, Lund, Sweden, Aug. 30-31, 1990.
[55] Yang, J.C.S., Qi, G.Z. & Kan, C.D. Mathematical Base of Random Decrement Technique. Proc. 8th International Modal Analysis Conference, Kissimmee, Florida, USA, 1990, pp. 28-34.
[56] Bedewi, N.E. & Yang, J.C.S. The Relationship Between the Random Decrement Sig-
nature and the Free Decay Response of Multidegree-Of-Freedom Systems. Proc. 1990
ASME International Comp. In Engineering Conference and Exposition. Boston, MA,
USA, Aug 5-9 1990 pp. 77-86.
1991
[57] Brincker, R., Kirkegaard, P.H. & Rytter, A. Identification of System Parameters by the Random Decrement Technique. Proc. 16th International Seminar on Modal Analysis, Florence, Italy, Sept. 9-12, 1991.
[58] Brincker, R., Krenk, S. & Jensen, J.L. Estimation of Correlation Functions by the
Random Decrement Technique. Proc. 9th International Modal Analysis Conference
and Exhibit, Firenze, Italy, April 14-18, 1991.
1992
[59] Haddara, M.R. On the Random Decrement for Nonlinear Rolling Motion. 1992
OMAE, Vol. 2, Safety and Reliability, ASME 1992, pp. 321-324.
[60] Brincker, R., Krenk, S., Kirkegaard, P.H. & Rytter, A. Identification of Dynamical Properties from Correlation Function Estimates. Bygningsstatiske Meddelelser, Vol. 63, No. 1, 1992, pp. 1-38.
1993
[61] Bodruzzaman, M., Li, X., Wang, C. & Devgan, S. Identifying Modes of Vibratory System Excited by Narrow Band Random Excitations. Southeastcon '93, 1993.
[62] Tamura, Y., Sasaki, A. & Tsukagoshi, H. Evaluation of Damping Ratios of Randomly
Excited Buildings Using the Random Decrement Technique. Journal of Structural and
Construction Engineering, AIJ, No. 454, Dec. 1993. (In Japanese)
1994
[63] Brincker, R., Demosthenous, M. & Manos, G.C. Estimation of the Coefficient of Restitution of Rocking Systems by the Random Decrement Technique. Proc. 12th International Modal Analysis Conference, Honolulu, Hawaii, Jan. 31 - Feb. 3, 1994.
1995
[64] Desforges, M.J., Cooper, J.E. & Wright, J.R. Spectral and Modal Parameter Esti-
mation From Output-Only Measurements. Journal of Mechanical Systems and Signal
Processing, Vol 9, No. 2, March 1995, pp. 169-186.
1996
[65] Asmussen, J.C. & Brincker, R. Estimation of Frequency Response Functions by Ran-
dom Decrement. Proc. 14th International Modal Analysis Conference, Dearborn,
Michigan, USA, February 1996, Vol I, pp. 246-252.
[66] Ibrahim, S.R., Asmussen, J.C. & Brincker, R. Modal Parameter Identification from Responses of General Unknown Random Inputs. Proc. 14th International Modal Analysis Conference, Dearborn, Michigan, USA, February 1996, Vol. I, pp. 446-452.
[67] Asmussen, J.C., Ibrahim, S.R. & Brincker, R. Random Decrement and Regression Analysis of Traffic Responses of Bridges. Proc. 14th International Modal Analysis Conference, Dearborn, Michigan, USA, February 1996, Vol. I, pp. 453-458.
[68] Asmussen, J.C. & Brincker, R. Estimation of Correlation Functions by Random
Decrement. Proc. ISMA21 - Noise and Vibration Engineering, Leuven, Belgium,
September 18-20 1996, Vol II, pp. 1215-1224.
1997
[69] Chalko, T.J. & Haritos, N. Scaling Eigenvectors obtained from Ambient Excita-
tion Modal Testing. Proc. 15th International Modal Analysis Conference, Orlando,
Florida, USA, February 3-6 1997, Vol. I, pp. 13-19.
[70] Fasana, A., Garibaldi, L., Giorcelli, E., Ruzzene, M. & Sabia, D. Analysis of a Motorway Bridge Under Random Traffic Excitation. Proc. 15th International Modal Analysis Conference, Orlando, Florida, USA, February 3-6, 1997, Vol. I, pp. 293-300.
[71] Brincker, R. & Asmussen, J.C. Random Decrement Based FRF Estimation. Proc.
15th International Modal Analysis Conference, Orlando, Florida, USA, February 3-6
1997, Vol. II, pp. 1571-1576.
[72] Ibrahim, S.R., Asmussen, J.C. & Brincker, R. Theory of Vector Triggering Random
Decrement. Proc. 15th International Modal Analysis Conference, Orlando, Florida,
USA, February 3-6 1997, Vol. I, pp. 502-509.
[73] Asmussen, J.C., Ibrahim, S.R. & Brincker, R. Application of Vector Triggering
Random Decrement. Proc. 15th International Modal Analysis Conference, Orlando,
Florida, USA, February 3-6 1997, Vol. II pp. 1165-1171.
Chapter 2

Theoretical Background for Linear Structures

The purpose of this chapter is to establish the tools which are used to extract the modal parameters from the RD functions. The following is assumed:

- The structures are time-invariant during the measurement period.
- The loads are stationary and Gaussian distributed.
- The vibrations of the structure can be modelled by a linear lumped mass parameter system.
The response of a structure to non-zero initial conditions, denoted free decays, is discussed in detail. These free decays will be derived and the different approaches to extract the modal parameters from free decays are described. Especially the practical application of these methods is considered. The argument for introducing these principles is that under certain assumptions the RD functions are proportional to the correlation functions of the measurements. Under the same assumptions the correlation functions of the response of the lumped mass parameter system to Gaussian white noise loading have exactly the same relations as the response of the model to non-zero initial conditions only. This means that the approaches developed to extract modal parameters from free decays can be used to extract modal parameters from the RD functions. This is an advantage, since methods for extracting modal parameters from free decays are well developed. The procedure does not assume that the loads are measured. The theory presented in this chapter can be found in parts in most books on linear vibration theory and linear stochastic vibration theory, see e.g. references [1] - [7]. Therefore references are as a rule omitted throughout this chapter.
Section 2.1 establishes the mathematical model of the vibrations of a linear lumped mass parameter system. The modal parameters are defined. The equations for free vibrations and forced vibrations of the structure are given. Free vibrations (or decays) correspond to the response of the structure to some non-zero initial conditions only. The forced vibrations are calculated in both time and frequency domain based on knowledge of the loads and the modal parameters.

Section 2.2 concerns identification of the parameters in the linear lumped mass parameter system from measured free decays of a structure. Two different techniques are implemented and used: the Ibrahim Time Domain (ITD) technique and the Polyreference Time Domain (PTD) technique. A detailed mathematical derivation is not given; instead the implementation and the practical applications of the techniques are discussed.
Section 2.3 introduces the concepts of correlation, covariance and spectral densities. These considerations are important, since correlation functions are the link between the mathematical model and the RD functions. It is assumed that the lumped parameter system is loaded by filtered Gaussian white noise. Section 2.3.2 defines this load modelling. In section 2.3.3 it is shown that the characteristics of the filter and the system are preserved in the response of the system and remain uniquely identifiable from the correlation functions. It is shown that the correlation functions have exactly the same relations as free decays.
where q_0 contains the modal initial conditions and Ψ is denoted the modal matrix.

The following orthogonality relations of the state matrices A and B can be shown using the symmetry relations of the state matrices only. These relations are introduced in order to derive the response of the system

$$\Psi^T \mathbf{A} \Psi = \mathbf{m}, \qquad \mathbf{m} = \mathrm{diag}([m_1\ m_2\ \ldots\ m_{2n}]) \tag{2.11}$$

$$\Psi^T \mathbf{B} \Psi = -\mathbf{m}\Lambda, \qquad -\mathbf{m}\Lambda = \mathrm{diag}(-[\lambda_1 m_1\ \lambda_2 m_2\ \ldots\ \lambda_{2n} m_{2n}]) \tag{2.12}$$

$$\Lambda = \mathrm{diag}([\lambda_1\ \lambda_2\ \ldots\ \lambda_{2n}]) \tag{2.13}$$

The constants m_i are denoted modal masses. The modal initial conditions, q_0, can be calculated from eq. (2.9) by combining the initial conditions from eq. (2.2) and the orthogonality condition in eq. (2.11).

$$\mathbf{q}_0 = \mathbf{m}^{-1} \Psi^T \mathbf{A} \mathbf{z}_0 \tag{2.14}$$

The displacement, velocity and acceleration responses of the lumped mass parameter system to initial conditions become

$$\mathbf{x}(t) = \Phi e^{\Lambda t} \mathbf{q}_0 \tag{2.15}$$

$$\dot{\mathbf{x}}(t) = \Phi e^{\Lambda t} \Lambda \mathbf{q}_0 \tag{2.16}$$

$$\ddot{\mathbf{x}}(t) = \Phi e^{\Lambda t} \Lambda^2 \mathbf{q}_0 \tag{2.17}$$
As seen, the difference between the free decay displacement response, velocity response and acceleration response is only a complex scaling factor in the form of the eigenvalue matrix, Λ. This scaling factor changes the amplitude and the phase of the exponentially damped sinusoidal free decay response. The relations are important, since the modal parameters can be extracted from all free decay responses in eqs. (2.15) - (2.17) using the same algorithm, which will be illustrated in section 2.2.
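A minimal numerical sketch of eqs. (2.15) - (2.17) may help fix the notation; the 2DOF modal parameters, mode shapes and modal initial conditions below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

# free decay built from assumed modal parameters of a 2DOF system
f_n   = np.array([1.0, 2.5])                 # eigenfrequencies [Hz], illustrative
zeta  = np.array([0.01, 0.02])               # modal damping ratios, illustrative
omega = 2 * np.pi * f_n
lam   = -zeta * omega + 1j * omega * np.sqrt(1 - zeta**2)
lam   = np.concatenate([lam, lam.conj()])    # eigenvalues occur in complex conjugate pairs
Phi   = np.array([[1.0, 1.0], [0.8, -0.6]])  # assumed real mode shapes (columns)
Phi   = np.hstack([Phi, Phi])                # mode shape matrix paired with the eigenvalues
q0    = np.array([0.5, 0.1, 0.5, 0.1])       # assumed modal initial conditions

t = np.linspace(0.0, 10.0, 1001)
# x(t) = Phi exp(Lambda t) q0; the conjugate pairing makes the result real
x    = np.real(Phi @ (np.exp(np.outer(lam, t)) * q0[:, None]))
xdot = np.real(Phi @ (np.exp(np.outer(lam, t)) * (lam * q0)[:, None]))
```

Note how multiplying the modal coordinates by the eigenvalues (the vector lam) turns the displacement decay into the velocity decay, which is exactly the complex scaling mentioned above.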
2.1.2 Forced Vibration

The forced vibration of a linear lumped parameter system is calculated using a reformulation of the state response by inserting the modal matrix

$$\mathbf{z}(t) = \Psi \mathbf{q}(t) \tag{2.18}$$

where q(t) is defined equivalently to q_0 in eq. (2.10)

$$\mathbf{q}(t) = [q_1(t)\ q_2(t)\ \ldots\ q_{2n}(t)]^T \tag{2.19}$$

Inserting the above relation in eq. (2.2), multiplying on the right hand side by Ψ^T and using the orthogonality relations in eqs. (2.11) - (2.12) yields

$$\dot{\mathbf{q}}(t) - \Lambda \mathbf{q}(t) = \mathbf{m}^{-1} \Psi^T \mathbf{F}(t) \tag{2.20}$$

The solution to these decoupled differential equations is given by the convolution integral and the initial condition

$$\mathbf{q}(t) = \int_{-\infty}^{t} e^{\Lambda(t-\tau)}\, \mathbf{m}^{-1} \Psi^T \mathbf{F}(\tau)\, d\tau + e^{\Lambda t} \mathbf{q}_0 \tag{2.21}$$

Using eq. (2.18) the response of the system becomes

$$\mathbf{z}(t) = \int_{-\infty}^{t} \mathbf{h}(t-\tau)\, \mathbf{F}(\tau)\, d\tau + \Psi e^{\Lambda t} \mathbf{q}_0 \tag{2.22}$$

where the Impulse Response Matrix (IRM) has been defined as

$$\mathbf{h}(t) = \Psi e^{\Lambda t} \mathbf{m}^{-1} \Psi^T,\ \ t \geq 0; \qquad \mathbf{h}(t) = \mathbf{0},\ \ t < 0 \tag{2.23}$$

Equation (2.22) is transformed into the frequency domain using the Fourier transformation. It is assumed that the response due to the non-zero initial conditions can be neglected

$$\mathbf{z}(\omega) = \mathbf{H}(\omega)\, \mathbf{F}(\omega) \tag{2.24}$$

where

$$\mathbf{z}(\omega) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \mathbf{z}(t)\, e^{-i\omega t}\, dt \tag{2.25}$$

$$\mathbf{F}(\omega) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \mathbf{F}(t)\, e^{-i\omega t}\, dt \tag{2.26}$$

$$\mathbf{H}(\omega) = \int_{-\infty}^{\infty} \mathbf{h}(t)\, e^{-i\omega t}\, dt = \int_{0}^{\infty} \mathbf{h}(t)\, e^{-i\omega t}\, dt \tag{2.27}$$
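Carrying out the integral in eq. (2.27) with h(t) from eq. (2.23) gives the closed form below, which mirrors the filter FRM in eq. (2.64) and the first factor of eq. (2.69); it is a standard result, stated here for reference:

$$\mathbf{H}(\omega) = \Psi\, (i\omega \mathbf{I} - \Lambda)^{-1}\, \mathbf{m}^{-1} \Psi^T = \sum_{j=1}^{2n} \frac{\boldsymbol{\psi}_j \boldsymbol{\psi}_j^T}{m_j\,(i\omega - \lambda_j)}$$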
Until now it has been assumed that the response of all masses in the lumped parameter system is measured or observed. Usually the number of modes is higher than the number of known responses. This can be modelled by changing the observation equation. If e.g. a system with 2n degrees of freedom is measured at m locations the observation equation becomes

$$\mathbf{x}(t) = \mathbf{D}\, \mathbf{z}(t), \qquad \mathbf{D} = [\mathbf{I}\ \ \mathbf{0}\ \ \mathbf{0}] \tag{2.31}$$

where the identity matrix has the dimensions m × m, the first zero matrix has the dimensions (n − m) × (n − m) and the second zero matrix has the dimensions n × n. The relation between the modal matrices Φ and Ψ still holds

$$\Phi = \mathbf{D} \Psi \tag{2.32}$$

Equations (2.15) - (2.17), (2.23) and (2.27) all contain information about the modal parameters. This means that if any of these functions/matrices are known the modal parameters can be extracted. In the next section it is described how modal parameters can be extracted from free decays using eqs. (2.15) - (2.17) in practice. The motivation is that these methods can be used to extract modal parameters from the RD functions. This relation will be shown in section 2.3.
Figure 2.2: Diagram for extracting modal parameters from free decay measurements.

The first step is to rearrange the measurements to obtain an overdetermined system for estimation of the Γ-matrix. The eigenvalues of the Γ-matrix are directly related to the eigenvalues of the continuous-time system, see eq. (2.7). The Γ-matrix is defined and the procedure is shown in sections 2.2.1 and 2.2.2. In section 2.2.3 it is discussed how the noise present in the free decay measurements is modelled, and in section 2.2.4 the extraction of eigenfrequencies and damping ratios from the Γ-matrix is shown. Section 2.2.5 describes how the mode shapes are estimated from the measurements by introduction of the Modal Participation Factors (MPF). Sections 2.2.6 and 2.2.7 describe the practical application and the methods used to separate noise modes from physical modes.
2.2.1 General Equations

The difference between the two algorithms is that the ITD technique has its starting point in eqs. (2.15) - (2.17) and the PTD technique has its starting point in eq. (2.23). However, for convenience the following description has its starting point in eqs. (2.15) - (2.17), but this choice has no influence on the principles. Common to both algorithms is that the measured free decays are assumed to be instantaneously sampled at equidistant time points. The interval between the sampling points is denoted the sampling period, ΔT. If the response of a structure is sampled simultaneously at n different channels in N time points, the measured response, x, is assumed to be of the following form

$$\mathbf{x}(k\Delta T) = \Phi e^{\Lambda k\Delta T} \mathbf{q}_0 = \Phi \Gamma^k \mathbf{q}_0, \qquad k = 0, 1, 2, \ldots, N-1 \tag{2.33}$$

$$\dot{\mathbf{x}}(k\Delta T) = \Phi e^{\Lambda k\Delta T} \Lambda \mathbf{q}_0 = \Phi \Gamma^k \Lambda \mathbf{q}_0, \qquad k = 0, 1, 2, \ldots, N-1 \tag{2.34}$$

$$\ddot{\mathbf{x}}(k\Delta T) = \Phi e^{\Lambda k\Delta T} \Lambda^2 \mathbf{q}_0 = \Phi \Gamma^k \Lambda^2 \mathbf{q}_0, \qquad k = 0, 1, 2, \ldots, N-1 \tag{2.35}$$

where the matrix Γ has been introduced

$$\Gamma = e^{\Lambda \Delta T} = \mathrm{diag}([e^{\lambda_1 \Delta T}\ e^{\lambda_2 \Delta T}\ \ldots\ e^{\lambda_{2n} \Delta T}]) \tag{2.36}$$
In order to model a system where the number of modes, 2m, differs from the number of known responses, n, the size of the modal matrix, Φ, is given by the size of the observation matrix D in the observation equation, see eqs. (2.2) and (2.32). However, eqs. (2.33) - (2.35) can be rewritten in order to obtain a relation for the responses at l time points later

$$\mathbf{x}((k+l)\Delta T) = \Phi \Gamma^l \Gamma^k \mathbf{q}_0 \tag{2.37}$$

$$\dot{\mathbf{x}}((k+l)\Delta T) = \Phi \Gamma^l \Gamma^k \Lambda \mathbf{q}_0 \tag{2.38}$$

$$\ddot{\mathbf{x}}((k+l)\Delta T) = \Phi \Gamma^l \Gamma^k \Lambda^2 \mathbf{q}_0 \tag{2.39}$$

In general the above relations form the basis of the ITD technique and the PTD technique. They illustrate that the present free decay response can be expressed as a function of past free decay response using the time difference, lΔT, and the Γ-matrix, which contains information about the frequencies and damping ratios of the modes. This relation is valid no matter whether the displacement, velocity or acceleration responses are measured. The difference in the form of the equations is simply interpreted as another set of initial conditions. This makes the algorithms versatile.
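Equation (2.36) also indicates how eigenfrequencies and damping ratios are recovered once the discrete eigenvalues of the Γ-matrix have been estimated (the subject of section 2.2.4). The conversion below is the usual one for underdamped modes and is stated here for orientation rather than quoted from the thesis:

$$\lambda_j = \frac{\ln(\Gamma_{jj})}{\Delta T}, \qquad \omega_j = |\lambda_j|, \qquad \zeta_j = -\frac{\mathrm{Re}(\lambda_j)}{|\lambda_j|}, \qquad \omega_{d,j} = \mathrm{Im}(\lambda_j)$$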
In the formulation of the ITD and PTD algorithms on the basis of eqs. (2.37) - (2.39), different demands on the ratio between the number of measurement points and the number of modes exist. This is a clear delimitation of these techniques, since the number of modes is dependent on the number of measurement points. In order to lift this restriction the concept of pseudo measurements was introduced, see Ibrahim [9].
The eigenvalue matrix Γ is known, so the mode shape matrix can be calculated using regression as described in section 2.2.5. The modal initial conditions only involve a column-wise scaling of the mode shapes. The MCF of the ith component of the jth mode shape is calculated as

$$MCF_{i,j} = \frac{\hat{\phi}_{i,j}\, \Gamma^{l}_{j,j}}{\hat{\phi}^{\,l}_{i,j}}, \qquad |MCF_{i,j}| > 1 \ \Rightarrow\ MCF_{i,j} = \frac{1}{MCF_{i,j}} \tag{2.53}$$

Theoretically all the MCFs should be unity. In practice the MCFs will be approximately unity for structural modes and lower for non-structural modes. If the magnitude of an MCF is higher than unity, the reciprocal value is used. In general the MCFs are complex numbers, so a phase and a magnitude, which should be zero and unity, respectively, are defined and used.
In a practical situation the following restrictions could be applied in order to characterize a mode as a structural mode:

- The eigenvalue of the mode should have a complex conjugate.
- The damping ratio should be below 10 %.
- The MCF magnitude should be above 90 %.
- The MCF phase should be below 10°.

Notice that the numbers are a result of the experience obtained by analysing the structures described in this thesis and can only be considered as guidelines. Especially the MCFs are capable of separating noise and structural modes. The decision on the criteria applied to the damping ratios should be obtained from experience with the present structure.

To separate the physical modes from the noise modes the approaches described above are applied first. Then a stabilization diagram is used. Such a diagram is a simple plot of the estimated frequencies versus the model number. A structural mode should not depend on the model chosen (the number of modes and the number of points used from the free decays), so a trend is visible at the frequency of a structural mode. Stabilization diagrams are usually a very efficient method in combination with the other methods to extract the structural modes. Stabilization diagrams also give an indication of the optimal choice of model structure.
Modal Assurance Criterion

The last-mentioned approach used for selecting structural modes is the Modal Assurance Criterion (MAC). This is a correlation coefficient between two different mode shapes

$$MAC = \frac{|\boldsymbol{\phi}_j^T \boldsymbol{\phi}_i|^2}{|\boldsymbol{\phi}_j|^2\, |\boldsymbol{\phi}_i|^2} \tag{2.54}$$

The idea is that the mode shape of a structural mode should not change significantly with a small change in the model structure. A noise mode may change with just a slight change in the model structure, so noise modes will have a low correlation. The MAC can also be used to compare two mode shapes estimated from two different approaches, such as the RD technique and a technique based on the FFT algorithm. This option will be used later in this thesis.
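A direct implementation of eq. (2.54) is only a few lines; the sketch below uses the complex conjugate in the magnitudes so that it also behaves sensibly for complex mode shapes, an implementation detail assumed here rather than prescribed by the thesis.

```python
import numpy as np

def mac(phi_j, phi_i):
    """Modal Assurance Criterion between two mode shape vectors, eq. (2.54)."""
    phi_j = np.asarray(phi_j).ravel()
    phi_i = np.asarray(phi_i).ravel()
    num = np.abs(phi_j.T @ phi_i) ** 2
    den = (phi_j @ phi_j.conj()).real * (phi_i @ phi_i.conj()).real
    return num / den

# identical shapes give MAC = 1, orthogonal shapes give MAC = 0
print(mac([1.0, 0.8, -0.6], [1.0, 0.8, -0.6]))   # 1.0
print(mac([1.0, 0.0], [0.0, 1.0]))               # 0.0
```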
$$\mathbf{S}_{XX}(\omega) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \mathbf{R}_{XX}(\tau)\, e^{-i\omega\tau}\, d\tau \tag{2.59}$$

$$\mathbf{R}_{XX}(\tau) = \int_{-\infty}^{\infty} \mathbf{S}_{XX}(\omega)\, e^{i\omega\tau}\, d\omega \tag{2.60}$$

The definition of spectral densities using eq. (2.59) and the definition via finite Fourier transforms presented in chapter 1 are equivalent, see e.g. Bendat & Piersol [3].
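Numerically, eq. (2.59) is typically evaluated with the FFT. A minimal sketch, assuming the auto correlation function has been estimated at equidistant lags with spacing dt (the helper name and the symmetry-based construction of the two-sided correlation are illustrative choices):

```python
import numpy as np

def spectral_density_from_correlation(R, dt):
    """Approximate S_XX(omega) from samples of R_XX(tau), cf. eq. (2.59).
    R holds R_XX at lags 0, dt, 2*dt, ...; the two-sided correlation is
    formed from the symmetry R_XX(-tau) = R_XX(tau) of auto correlations."""
    R_two_sided = np.concatenate([R[:0:-1], R])          # lags -(m-1)dt .. (m-1)dt
    S = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(R_two_sided))) * dt / (2 * np.pi)
    omega = np.fft.fftshift(np.fft.fftfreq(len(R_two_sided), d=dt)) * 2 * np.pi
    return omega, S.real
```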
2.3.2 Load Modelling
The load which excites the lumped mass parameter system is assumed to be a stationary
zero mean Gaussian distributed vector process, see eq. (A.1). In order to generalize the
load process it is assumed that the load process can be described as a white noise vector
process passed through a linear shaping filter. This is an extension of the traditional white
noise assumption. The idea behind this approach and the theory are presented in Ibrahim
et al. [13], where the filter is referred to as a pseudo-force filter. The principle is shown
in figure 2.3.
Figure 2.3: Outline diagram for modelling of loads using a shaping filter. h(t) and H(\omega)
are the IRM and FRM, respectively.
W(t) is a Gaussian white noise process having the following statistical relations

E[\mathbf{W}(t)] = \mathbf{0}   (2.61)

E[\mathbf{W}(t+\tau)\mathbf{W}^T(t)] = \mathbf{R}_{WW}(\tau)   (2.62)

\mathbf{S}_{WW}(\omega) = \frac{1}{2\pi}\mathbf{R}_{WW}   (2.63)
The FRM and IRM of the shaping filter are assumed to be given by eqs. (2.64) - (2.65).
Subscript F indicates that the matrices describe the filter characteristics

\mathbf{H}_F(\omega) = \boldsymbol{\Phi}_F (i\omega\mathbf{I} - \boldsymbol{\Lambda}_F)^{-1}\mathbf{m}_F^{-1}\boldsymbol{\Phi}_F^T, \qquad \mathbf{H}_F(\omega) = \mathbf{H}_F^T(\omega)   (2.64)

\mathbf{h}_F(t) = \boldsymbol{\Phi}_F\, e^{\boldsymbol{\Lambda}_F t}\,\mathbf{m}_F^{-1}\boldsymbol{\Phi}_F^T, \qquad \mathbf{h}_F(t) = \mathbf{h}_F^T(t)   (2.65)

where m_F is an m x m normalization matrix corresponding to the modal masses in eq. (2.11),
\Phi_F is an n x m matrix containing the modal vectors corresponding to eq. (2.8) and \Lambda_F
is an m x m diagonal matrix containing the eigenvalues. The real part of all eigenvalues
is negative. This ensures that the resulting force is stationary. The force exciting the
structure is modelled by

\mathbf{f}(\omega) = \mathbf{H}_F(\omega)\mathbf{W}(\omega)   (2.66)

\mathbf{f}(t) = \int_{-\infty}^{t} \mathbf{h}_F(t-\tau)\mathbf{W}(\tau)\, d\tau   (2.67)

In order to illustrate the significance of a shaping filter, the auto spectral density and
auto correlation function of the white noise process and of the resulting force obtained by
filtering the Gaussian white noise process are given in fig. 2.4.
[Figure 2.4 panels: auto spectral density (N*N/s, versus omega) and auto correlation function (N*N, versus tau) of the white noise and of the filtered excitation.]
Figure 2.4: The effect of modelling the excitation as a filtered white noise process, where
the filter has an SDOF.
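To indicate how a figure like fig. 2.4 can be produced, the sketch below passes discrete Gaussian white noise through an SDOF shaping filter realized by its impulse response. The sampling interval, filter frequency and damping ratio are illustrative assumptions and not the values used in the thesis.

import numpy as np

# Sketch: coloured excitation generated by filtering Gaussian white noise
# through an SDOF shaping filter. All numerical values are assumptions.
rng = np.random.default_rng(0)
dt, n = 0.01, 100_000
w = rng.standard_normal(n)                          # discrete white noise

omega0, zeta = 2 * np.pi * 1.0, 0.05                # assumed filter frequency and damping
omega_d = omega0 * np.sqrt(1.0 - zeta ** 2)
t = np.arange(0.0, 10.0, dt)
h = np.exp(-zeta * omega0 * t) * np.sin(omega_d * t) / omega_d   # impulse response

f = dt * np.convolve(w, h)[:n]                      # filtered (coloured) force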
Since W(t) is Gaussian distributed it also follows that the load applied to the structure,
f(t), is Gaussian. The filter performs linear operations on the white noise process W(t),
so the distribution of the force is still Gaussian. This is important, since it follows from
an equivalent argument that the response will also be Gaussian. The effect of applying
a pseudo-force filter is that the filter characteristics are also identified together with the
characteristics of the structure.
The number of output channels from the filter should be equal to the number of modes of
the structural system, n. In order to make the filter versatile it will be assumed that the
filter can have more modes than output and input channels, m > n.
2.3.3 Correlation Functions of the Response
The response of a lumped parameter system with 2n-DOF measured at n points to the
load described in section 2.3.2 is calculated using eq. (2.24)

\mathbf{X}(\omega) = \mathbf{H}(\omega)\mathbf{f}(\omega) = \mathbf{H}(\omega)\mathbf{H}_F(\omega)\mathbf{W}(\omega) = \mathbf{H}_c(\omega)\mathbf{W}(\omega)   (2.68)

In order to calculate the response the FRM of the combined system, H_c(\omega), consisting of
the structure and the shaping filter is defined as
\mathbf{H}_c(\omega) = \sum_{j=1}^{2n} \frac{\boldsymbol{\phi}_j\boldsymbol{\phi}_j^T}{m_j(i\omega-\lambda_j)}\; \sum_{l=1}^{m} \frac{\boldsymbol{\phi}_l^F\boldsymbol{\phi}_l^{FT}}{m_l^F(i\omega-\lambda_l^F)}
= \sum_{j=1}^{2n} \frac{\mathbf{B}_j}{(i\omega-\lambda_j)}\; \sum_{l=1}^{m} \frac{\mathbf{B}_l^F}{(i\omega-\lambda_l^F)}
= \sum_{j=1}^{2n} \frac{\mathbf{B}_j\,\mathbf{a}_j}{(i\omega-\lambda_j)} + \sum_{l=1}^{m} \frac{\mathbf{b}_l\,\mathbf{B}_l^F}{(i\omega-\lambda_l^F)}   (2.69)

where the following vectors, a_j and b_l, have been introduced

\mathbf{a}_j = \sum_{l=1}^{m} \frac{\mathbf{B}_l^F}{(\lambda_j - \lambda_l^F)}   (2.70)

\mathbf{b}_l = \sum_{j=1}^{2n} \frac{\mathbf{B}_j}{(\lambda_l^F - \lambda_j)}   (2.71)

The last term in eq. (2.69) shows that all eigenvalues of the shaping filter and of the structural
system are uniquely preserved. Furthermore, the mode shapes of the structural system
can be reconstructed from the columns of the FRM, and the mode shapes of the filter can
be reconstructed from the rows of the FRM. The above proof was first given in Ibrahim
et al. [13].
Using matrix notation the FRM and the IRM of the combined system become
Since this is a linear operation on the Gaussian distributed white noise excitation it follows
that the response is Gaussian distributed. The correlation functions of the structural
system loaded by filtered Gaussian white noise can be calculated using the above results
\tilde{\mathbf{c}} = [\tilde{\mathbf{c}}_1\ \tilde{\mathbf{c}}_2\ \ldots\ \tilde{\mathbf{c}}_{2n+m}], \qquad \tilde{\mathbf{c}}_i = \sum_{j=1}^{2n+m} \frac{\tilde{\mathbf{b}}_j\,\tilde{m}_j^{-1}\,\tilde{\mathbf{b}}_i^T\,\mathbf{R}_{WW}(0)}{\tilde{\lambda}_i + \tilde{\lambda}_j}   (2.80)
From the last statement of eq. (2.79) it is seen that any column in the correlation function
matrix can be written as

\mathbf{R}^i_{XX}(\tau) = \tilde{\boldsymbol{\Phi}}\, e^{\tilde{\boldsymbol{\Lambda}}\tau}\, \tilde{\mathbf{m}}^{-1}\, \tilde{\mathbf{c}}_i   (2.81)

where R^i_{XX} is an abbreviation of the ith column in the correlation matrix and the scaling
vector \tilde{c}_i is the ith column of the matrix \tilde{c}. Equation (2.81) is of exactly the same
form as eq. (2.15), which is the standard equation for a free decay due to some initial
conditions. This means that the mode shape matrix \tilde{\Phi} and the matrix containing the
eigenvalues can be extracted from the correlation functions using methods described in
section 2.2. If the velocities or the accelerations of the system are measured instead of the
displacements the correlation functions can be calculated using eq. (2.81) and the results
of eq. (2.55).
\mathbf{R}^i_{\dot{X}\dot{X}}(\tau) = -\tilde{\boldsymbol{\Phi}}\, e^{\tilde{\boldsymbol{\Lambda}}\tau}\, \tilde{\boldsymbol{\Lambda}}^2\, \tilde{\mathbf{c}}_i   (2.82)

\mathbf{R}^i_{\ddot{X}\ddot{X}}(\tau) = \tilde{\boldsymbol{\Phi}}\, e^{\tilde{\boldsymbol{\Lambda}}\tau}\, \tilde{\boldsymbol{\Lambda}}^4\, \tilde{\mathbf{c}}_i   (2.83)
Equations (2.81) - (2.83) correspond exactly to eqs. (2.15) - (2.17), so in the modal
parameter extraction procedure there is no reason to distinguish between the correlation
functions of the displacements, velocities or the accelerations of the structure.
2.4 Summary
The lumped mass parameter model has been introduced and the modal parameters of this
model have been defined in section 2.1. The response of this system to initial conditions
and to arbitrary loads has been derived. Thereby the impulse response matrix and the
frequency response matrix have been defined. In section 2.2 the process of extracting modal
parameters from free decays has been discussed. The basic principles of the two algorithms,
the Ibrahim Time Domain and the Polyreference Time Domain, which have been
implemented, are described. Furthermore, the practical applications of these techniques
have been the main issue. In section 2.3 the response of the lumped mass parameter
system subjected to white noise passed through a shaping filter is discussed. This modelling
assures a versatile description of the loads and that the response is Gaussian distributed.
The correlation functions of the response are defined and it is shown that these correlation
functions can be described equivalently to free decays. This means that modal parameters
can be extracted from the correlation functions of the lumped mass parameter system
loaded by filtered white noise using methods developed in connection with free decays.
Since RD functions are interpreted in terms of correlation functions, this allows extraction
of the modal parameters from the RD functions using methods like the Ibrahim Time
Domain or the Polyreference Time Domain. This approach is used throughout this thesis.
The chosen modelling of the loads also ensures that it is not necessary to measure the
loads of a structure in order to extract modal parameters.
Bibliography
[1] Caughey, T.K. & O'Kelley, M.E.J. Classical Normal Modes in Damped Linear Systems. ASME Journal of Applied Mechanics, Vol. 49, pp. 867-870, 1965.
[2] Nashif, A.D., Jones, D.I.G. & Henderson, J.P. Vibration Damping. 1985 John Wiley & Sons. ISBN 0-471-86772-1.
[3] Bendat, J. & Piersol, A. Random Data - Analysis and Measurement Procedures. John Wiley & Sons, Inc. ISBN 0-471-04000-2.
[4] Pandit, S.M. Modal and Spectrum Analysis: Data Dependent Systems in State Space. 1991 John Wiley & Sons USA, Inc. ISBN 0-471-63705-X.
[5] Ewins, D.J. Modal Testing: Theory and Practice. 1995 Research Studies Press Ltd, England. ISBN 0-86380-017-3.
[6] Inman, D.J. Engineering Vibration. 1996 Prentice-Hall USA, Inc. ISBN 0-13-518531-9.
[7] Wirsching, P.H., Paez, T.L. & Ortiz, K. Random Vibrations. Theory and Practice. 1995 John Wiley & Sons, Inc. ISBN 0-471-58579-3.
[8] Fladung, W.J., Brown, D.L. & Allemang, R.J. Modal Parameter Estimation - A Unified Matrix Polynomial Approach. Proc. 12th International Modal Analysis Conference, Honolulu, Hawaii, USA, 1994.
[9] Ibrahim, S.R. An Upper Hessenberg Sparse Matrix Algorithm for Modal Identification on Minicomputers. Journal of Sound and Vibration (1987) 113(1), pp. 47-57.
[10] Vold, H., Kundrat, J., Rocklin, G.T. & Russel, R. A Multi-Input Modal Estimation Algorithm for Minicomputers. SAE Paper No. 820194, 1982.
[11] Ibrahim, S.R. Modal Confidence Factor in Vibration Testing. Journal of Spacecraft, Sept.-Oct. 1978, Vol. 15, No. 5, pp. 313-316.
[12] Vold, H. & Crowley, J. A Modal Confidence Factor for the Polyreference Time Domain Technique. Proc. 3rd International Modal Analysis Conference, 1985, pp. 305-310.
[13] Ibrahim, S.R., Brincker, R. & Asmussen, J.C. Modal Parameter Identification From Responses of General Unknown Random Inputs. Proc. 14th International Modal Analysis Conference, Dearborn, Michigan, USA, Feb. 12-15, 1996, Vol. I, pp. 446-452.
Chapter 3
The Random Decrement
Technique
The purpose of this chapter is to introduce the RD technique, present the mathematical
background and illustrate the applicability of this technique. It is not the intention that
the mathematical background of the RD technique should be derived in detail. Only the
final results are presented and discussed. The interested reader is referred to appendix
A, where a detailed derivation and resume of the mathematical background of the RD
technique are given.
Section 3.1 defines the RD functions theoretically and illustrates how the RD functions
are estimated. The RD functions are in general defined for stationary processes, but
the estimation of RD functions demands that the processes are assumed to be ergodic.
Thus the RD technique is restricted to deal with ergodic processes. Section 3.2 introduces
the link between the RD functions defined on the applied general triggering condition
and the correlation functions of stationary zero mean Gaussian distributed processes. The
assumptions of stationary zero mean Gaussian distributed processes are sufficient in terms
of describing the RD technique mathematically. It is not necessary to assume anything
about the physical system describing the processes. In spite of this the processes will be
interpreted as the response of a linear lumped mass parameter system loaded by Gaussian
white noise or filtered Gaussian white noise as described in chapter 2.
Sections 3.3 - 3.6 describe the four most well known triggering conditions. The relation
between the RD functions and the correlation functions is given and approximate formulas
for the variance of the RD functions are presented. In order to illustrate the different
triggering conditions two examples are described in each section. First a very simple
example using a very short time series is described. The purpose is to illustrate the
estimation process and the different triggering conditions. Secondly an example based
on the response of a 2DOF system loaded by Gaussian white noise is described in each
section.
Section 3.7 introduces the concept of quality assessment of RD functions. The shape
invariance relation of the RD functions and the symmetry relations for the correlation
functions of stationary processes are used as a basis for quality assessment. Section 3.8
illustrates how the triggering levels should be chosen for the different triggering conditions.
In section 3.9 the RD technique is compared with other approaches for estimation of
correlation functions. The comparison is based on speed and accuracy of the different
approaches. Several different triggering conditions are considered. The examples illustrate
advantages and disadvantages of the RD technique.
\mathbf{D}_{\mathbf{X}}(\tau) =
\begin{bmatrix}
D_{X_1X_1}(\tau) & D_{X_1X_2}(\tau) & D_{X_1X_3}(\tau) \\
D_{X_2X_1}(\tau) & D_{X_2X_2}(\tau) & D_{X_2X_3}(\tau) \\
D_{X_3X_1}(\tau) & D_{X_3X_2}(\tau) & D_{X_3X_3}(\tau)
\end{bmatrix}
=
\begin{bmatrix}
E[X_1(t+\tau)|T_{X_1(t)}] & E[X_1(t+\tau)|T_{X_2(t)}] & E[X_1(t+\tau)|T_{X_3(t)}] \\
E[X_2(t+\tau)|T_{X_1(t)}] & E[X_2(t+\tau)|T_{X_2(t)}] & E[X_2(t+\tau)|T_{X_3(t)}] \\
E[X_3(t+\tau)|T_{X_1(t)}] & E[X_3(t+\tau)|T_{X_2(t)}] & E[X_3(t+\tau)|T_{X_3(t)}]
\end{bmatrix}   (3.5)
A column in eq. (3.5) is denoted an RD setup, whereas the process where the triggering
condition is fulfilled is denoted the reference measurement or triggering measurement.
In practical applications of the RD technique only a single realization of the stochastic
process is available. In other words, usually only a single measurement at each chosen
location of a vibrating structure is collected. In order to estimate the conditional mean
value correctly from a single observation it is necessary to assume that the stochastic
process is not only stationary but also ergodic. In this case the auto RD functions can be
estimated as the empirical conditional mean value from a single realization
\hat{D}_{XX}(\tau) = \frac{1}{N}\sum_{i=1}^{N} x(t_i+\tau)\,\big|\,T_{x(t_i)}   (3.6)

\hat{D}_{YY}(\tau) = \frac{1}{N}\sum_{i=1}^{N} y(t_i+\tau)\,\big|\,T_{y(t_i)}   (3.7)

where N is the number of points in the process which fulfil the triggering condition and
x(t) and y(t) are realizations of X(t) and Y(t). Correspondingly, the cross RD functions
are estimated as

\hat{D}_{XY}(\tau) = \frac{1}{N}\sum_{i=1}^{N} x(t_i+\tau)\,\big|\,T_{y(t_i)}   (3.8)

\hat{D}_{YX}(\tau) = \frac{1}{N}\sum_{i=1}^{N} y(t_i+\tau)\,\big|\,T_{x(t_i)}   (3.9)

Taking the expectation of e.g. eq. (3.8) shows that the estimates are unbiased

E[\hat{D}_{XY}(\tau)] = \frac{1}{N}\sum_{i=1}^{N} E[x(t_i+\tau)\,|\,T_{y(t_i)}] = D_{XY}(\tau)   (3.10)
Until now no restriction or formulation of the triggering condition has been made. Obviously
the formulation of the triggering condition controls the actual number of triggering
points. This means that the convergence of the estimates in eqs. (3.6) - (3.9) is controlled
by the triggering condition and, of course, by the absolute length of the observations of
the processes. In the next sections different formulations of the triggering conditions are
described.
The definitions of the RD functions in eqs. (3.1) - (3.4) assume that the index of the
processes X(t) and Y(t) is continuous time. Measurements of the response of a structure
consist of simultaneously sampled values of the response at equidistant time points with the
sampling interval \Delta T. So the variables t_i and \tau used in the estimation of RD functions are
discrete-time variables and functions of the sampling rate.
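A minimal sketch of the estimators in eqs. (3.6) - (3.9) is given below, assuming that the triggering condition has already been evaluated and returned as an array of sample indices in the reference measurement. The function name and interface are assumptions made here for illustration, not part of the thesis.

import numpy as np

def rd_estimate(y, trigger_indices, n_lags):
    # Empirical conditional mean of eqs. (3.6) - (3.9): segments of the sampled
    # realization y, starting at the triggering points detected in the reference
    # measurement, are averaged. For an auto RD function the triggering points
    # are detected in y itself; for a cross RD function they are detected in
    # another (reference) measurement.
    y = np.asarray(y)
    idx = np.asarray(trigger_indices)
    idx = idx[idx + n_lags < len(y)]          # keep only complete segments
    segments = np.stack([y[i:i + n_lags] for i in idx])
    return segments.mean(axis=0), len(idx)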
3.2 Applied General Triggering Condition
This section introduces the applied general triggering condition. The results are not
derived, only presented. The interested reader is referred to appendix A for a detailed
derivation. In a real estimation situation this triggering condition is not of interest
in itself, but the results obtained using this condition are important. The link between the
RD functions of any triggering condition of practical interest and the correlation functions
of stationary zero mean Gaussian distributed processes can be derived directly from the
results of this triggering condition. The applied general triggering condition, T^{GA}_{X(t)}, of the
stochastic process X(t) is defined as

T^{GA}_{X(t)} = \{a_1 \le X(t) < a_2\,;\; b_1 \le \dot{X}(t) < b_2\}   (3.11)

Superscript GA refers to the General Applied triggering condition. Using this general
triggering condition the RD functions become a weighted sum of the correlation functions
and the time derivative of the correlation functions

D_{XX}(\tau) = \frac{R_{XX}(\tau)}{\sigma_X^2}\,\tilde{a} - \frac{R'_{XX}(\tau)}{\sigma_{\dot{X}}^2}\,\tilde{b}   (3.12)

D_{YX}(\tau) = \frac{R_{YX}(\tau)}{\sigma_X^2}\,\tilde{a} - \frac{R'_{YX}(\tau)}{\sigma_{\dot{X}}^2}\,\tilde{b}   (3.13)

where the triggering levels \tilde{a} and \tilde{b} are functions of the triggering bounds and the density
functions

\tilde{a} = \frac{\int_{a_1}^{a_2} x\, p_X(x)\, dx}{\int_{a_1}^{a_2} p_X(x)\, dx}, \qquad \tilde{b} = \frac{\int_{b_1}^{b_2} \dot{x}\, p_{\dot{X}}(\dot{x})\, d\dot{x}}{\int_{b_1}^{b_2} p_{\dot{X}}(\dot{x})\, d\dot{x}}   (3.14)

Equations (3.12) and (3.13) illustrate how versatile the RD technique is. By adjusting the
triggering bounds a_1, a_2 and/or b_1, b_2 the contribution of the correlation functions and the
time derivative of the correlation functions to the resulting RD functions can be changed.
In the limit, by choosing one of the triggering bound sets, [a_1 a_2] or [b_1 b_2], as [-\infty \infty] or
[0 0], the resulting RD functions become proportional to either the correlation functions
or their time derivatives.
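For a zero mean Gaussian process the weights in eq. (3.14) can also be written in closed form; the sketch below simply evaluates the defining integrals numerically for given bounds and standard deviation, so it applies to any density. The function name and the use of scipy are choices made here for illustration.

import numpy as np
from scipy import integrate, stats

def trigger_weight(lo, hi, sigma):
    # Evaluate a weight of the form in eq. (3.14):
    #   int_lo^hi u p(u) du / int_lo^hi p(u) du
    # here for a zero mean Gaussian density with standard deviation sigma.
    p = stats.norm(scale=sigma).pdf
    num, _ = integrate.quad(lambda u: u * p(u), lo, hi)
    den, _ = integrate.quad(p, lo, hi)
    return num / den

# a_tilde for the displacement bounds [a1, a2]; b_tilde is obtained in the
# same way with the bounds [b1, b2] and the standard deviation of X-dot.
a_tilde = trigger_weight(1.0, 2.0, sigma=1.0)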
Another important aspect of the RD technique is the actual number of triggering points.
The RD functions are estimated as the empirical mean from a single realization of the
stochastic processes. This assumes that the processes are ergodic
\hat{D}_{XX}(\tau) = \frac{1}{N}\sum_{i=1}^{N} x(t_i+\tau)\,\big|\,\{a_1 \le x(t_i) < a_2\,;\; b_1 \le \dot{x}(t_i) < b_2\}   (3.15)

\hat{D}_{YX}(\tau) = \frac{1}{N}\sum_{i=1}^{N} y(t_i+\tau)\,\big|\,\{a_1 \le x(t_i) < a_2\,;\; b_1 \le \dot{x}(t_i) < b_2\}   (3.16)
The number of triggering points, N , can be adjusted by changing the triggering levels
[a1 a2] and [b1 b2]. This means that the convergence of the estimates in eqs. (3.15) and
(3.16) is controlled by the triggering levels.
In applications of the RD technique only special formulations of the applied general triggering
condition are used. The explanation is that usually only the correlation functions
or the time derivative of the correlation functions is needed. Furthermore, since the technique
is applied to discrete measurements, noise will be introduced by calculating the
time derivative of the measurements numerically. It is usually necessary to calculate the
time derivative numerically, since only realizations of the process itself are available. This will lead
to erroneous triggering points corresponding to erroneous values of the time derivative of
the measurement. This leads to the formulation of the four triggering conditions described in
sections 3.3 - 3.6. In order to demonstrate the applicability of the RD technique and the
different triggering conditions two thorough examples are given in each section.
RD functions were first described mathematically in terms of correlation functions in
Vandiver et al. [1]. They proved that for stationary zero mean Gaussian distributed
processes the RD functions obtained using the level crossing triggering condition
are proportional to the auto correlation function. They also derived an approximate formula
for the variance of the RD functions. Brincker et al. [2] extended this concept by
deriving a proportional relationship between the cross RD functions of a theoretical general
triggering condition and the cross correlation functions and the time derivative of
the cross correlation functions. They also derived approximate formulas for the variance
of the cross RD functions in terms of cross correlation functions. The results obtained
using the applied general triggering condition are a generalization of the results obtained
by Vandiver and Brincker. From this triggering condition the result of any particular
triggering condition can be derived directly, see appendix A.
Figure 3.1: Continuous process and the corresponding sampled discrete time series.
The discrete time series will be analysed applying the different triggering conditions in
order to illustrate the basic idea of the RD technique with a simple example.
Figure 3.2: Measurement 1 and measurement 2. The displacement response of mass 1 and
mass 2, respectively.
As described in the introduction the response is simulated using ARMAV-models. The
approach is to simulate the Gaussian white noise loads, which together with the M, C, K
matrices and the sampling rate are the inputs to the algorithm.
D_{XX}(\tau) = \frac{R_{XX}(\tau)}{\sigma_X^2}\, a   (3.20)

D_{YX}(\tau) = \frac{R_{YX}(\tau)}{\sigma_X^2}\, a   (3.21)

The RD functions defined by the level crossing triggering condition are calculated as the empirical
mean. The processes are assumed to be ergodic

\hat{D}_{XX}(\tau) = \frac{1}{N}\sum_{i=1}^{N} x(t_i+\tau)\,\big|\,\{x(t_i) = a\}   (3.22)

\hat{D}_{YX}(\tau) = \frac{1}{N}\sum_{i=1}^{N} y(t_i+\tau)\,\big|\,\{x(t_i) = a\}   (3.23)
where x(t) and y (t) are realizations of X (t) and Y (t), respectively. If the time segments
in the averaging process are assumed to be uncorrelated the variance of the estimated RD
functions can be estimated as, see appendix A.
\mathrm{Var}[\hat{D}_{XX}(\tau)] \approx \frac{\sigma_X^2}{N}\left(1 - \left(\frac{R_{XX}(\tau)}{\sigma_X^2}\right)^2\right)   (3.24)

\mathrm{Var}[\hat{D}_{YX}(\tau)] \approx \frac{\sigma_Y^2}{N}\left(1 - \left(\frac{R_{YX}(\tau)}{\sigma_X \sigma_Y}\right)^2\right)   (3.25)

The results of eqs. (3.20) - (3.25) constitute the mathematical basis of the level crossing
triggering condition. The estimate of the variance of the estimated RD functions should
be used cautiously, since the assumption of uncorrelated time segments can be highly
violated. Notice that the variance is independent of the chosen triggering level. It is
interesting that the variance of the auto RD functions predicted by eq. (3.24)
is zero at time lag zero. The variance of the RD functions predicted by eq.
(3.24) will converge towards \sigma_X^2/N for |\tau| \to \infty, since R_{XX}(\tau) \to 0 for |\tau| \to \infty. The
strength of the relations in eqs. (3.24) - (3.25) is that the variance is only a function of the
number of triggering points and the correlation functions (which are known through a scaling of
the RD functions). The estimate of the variance can be calculated without increasing the
computational time significantly.
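The following sketch evaluates the approximation in eq. (3.24), assuming the RD function has already been rescaled to the correlation function and that the process variance and the number of triggering points are known. Names and interface are illustrative only.

import numpy as np

def rd_variance_level_crossing(R_xx, sigma_x2, n_trig):
    # Approximate variance of an auto RD function for level crossing
    # triggering, eq. (3.24): (sigma_X^2 / N) * (1 - (R_XX(tau)/sigma_X^2)^2).
    rho = np.asarray(R_xx) / sigma_x2
    return sigma_x2 / n_trig * (1.0 - rho ** 2)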
3.3.1 Example 1: Illustration of Triggering Conditions
The time series in fig. 3.3 is considered.
Figure 3.3: Continuous process and the corresponding sampled discrete time series.
The RD function is estimated using level crossing triggering with the triggering level T^L_{X(t)} = \{x(t) = a\}.
It is chosen to use 3 points in each RD function.
Figure 3.4 shows the time segments which have been picked out using the level crossing
triggering condition and the resulting RD function, i.e. the average of the time
segments. The time axis of the time segments corresponds to the time axis in fig. 3.3.
Figure 3.4: The time segments and the resulting RD function estimated using level crossing
triggering with a = 1.
As illustrated, two upcrossings and two downcrossings are detected.
[Panels of fig. 3.5: D_{X_1X_1}(\tau) and D_{X_2X_1}(\tau), in m*m, versus \tau.]
Figure 3.5: Normalized RD functions estimated using level crossing triggering applied to
the response of the first mass, together with the theoretical RD functions.
[Panels of fig. 3.6: D_{X_1X_2}(\tau) and D_{X_2X_2}(\tau), in m*m, versus \tau.]
Figure 3.6: Normalized RD functions estimated using level crossing triggering applied to
the response of the second mass, together with the theoretical RD functions.
The expected numbers of triggering points predicted by eq. (A.56) were 191 and 172, and the
actual numbers of triggering points were 212 and 182. The figures illustrate how the accuracy
of the RD functions decreases with increasing time lag, |\tau|. It is a standard observation
that the estimated RD functions do not decay as fast as the theoretical RD
functions.
3.4 Local Extremum Triggering Condition
The local extremum triggering condition, T^E_{X(t)}, is not a commonly used triggering condition
compared to the level crossing triggering condition. Nevertheless, this triggering
condition is attractive, since it requires that the contribution from the time derivative of
the process is zero, instead of averaging out the contributions as the level crossing
triggering condition does. A triggering point is detected if the time series has a local extremum

T^E_{X(t)} = \{a_1 \le X(t) < a_2\,;\; \dot{X}(t) = 0\}   (3.27)

It is extremely important that the triggering condition states that both local maxima and
local minima are triggering points. If only the local maxima are used as triggering points,
the following equations are not valid. The condition is reformulated to be of the same
form as the applied general triggering condition, see eq. (3.11)

T^E_{X(t)} = \{a_1 \le X(t) < a_2\,;\; 0 \le \dot{X}(t) < 0 + b\}, \quad b \to 0   (3.28)

It follows from eqs. (3.12) - (3.14) that the RD functions are proportional to the correlation
functions

D_{XX}(\tau) = \frac{R_{XX}(\tau)}{\sigma_X^2}\,\tilde{a}   (3.29)

D_{YX}(\tau) = \frac{R_{YX}(\tau)}{\sigma_X^2}\,\tilde{a}   (3.30)

where the triggering level \tilde{a} is given by a_1, a_2 and the density function

\tilde{a} = \frac{\int_{a_1}^{a_2} x\, p_X(x)\, dx}{\int_{a_1}^{a_2} p_X(x)\, dx}   (3.31)

The bounds a_1 and a_2 should be chosen with equal signs in order to extract maximum
information from each time segment. If a_1 < 0 and a_2 > 0 with a_2 > |a_1|, the resulting RD
function corresponds to the RD function estimated using [|a_1| a_2] as triggering levels. The
contribution from [a_1 |a_1|] is zero, so the only difference is the computational time wasted.
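On sampled data the local extremum condition has to be detected numerically. The sketch below is one possible discrete implementation, assuming that a sign change of the finite-difference slope marks a local extremum; it is a sketch, not the detection algorithm used in the thesis. The returned indices can be passed to an averaging routine such as the one sketched in section 3.1.

import numpy as np

def local_extremum_triggers(x, a1, a2):
    # Discrete version of eq. (3.27): a sample is a triggering point if the
    # finite-difference slope changes sign there (local maximum or minimum)
    # and the level lies in [a1, a2). Both maxima and minima are included.
    x = np.asarray(x)
    dx = np.diff(x)
    sign_change = dx[:-1] * dx[1:] < 0            # extremum at x[1:-1]
    in_band = (x[1:-1] >= a1) & (x[1:-1] < a2)
    return np.where(sign_change & in_band)[0] + 1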
The RD functions defined by the local extremum triggering condition are estimated as the
empirical mean. The processes are assumed to be ergodic

\hat{D}_{XX}(\tau) = \frac{1}{N}\sum_{i=1}^{N} x(t_i+\tau)\,\big|\,\{a_1 \le x(t_i) < a_2\,;\; \dot{x}(t_i) = 0\}   (3.32)

\hat{D}_{YX}(\tau) = \frac{1}{N}\sum_{i=1}^{N} y(t_i+\tau)\,\big|\,\{a_1 \le x(t_i) < a_2\,;\; \dot{x}(t_i) = 0\}   (3.33)
where x(t) and y (t) are realizations of X (t) and Y (t). The time segments are assumed to
be uncorrelated. Thereby the variance of the estimated RD functions can be approximated
by, see appendix A
\mathrm{Var}[\hat{D}_{XX}(\tau)] \approx \frac{\sigma_X^2}{N}\left(1 - \left(\frac{R_{XX}(\tau)}{\sigma_X^2}\right)^2 - \left(\frac{R'_{XX}(\tau)}{\sigma_X \sigma_{\dot{X}}}\right)^2\right) + \frac{k^E}{N}\left(\frac{R_{XX}(\tau)}{\sigma_X^2}\right)^2   (3.34)

\mathrm{Var}[\hat{D}_{YX}(\tau)] \approx \frac{\sigma_Y^2}{N}\left(1 - \left(\frac{R_{YX}(\tau)}{\sigma_Y \sigma_X}\right)^2 - \left(\frac{R'_{YX}(\tau)}{\sigma_Y \sigma_{\dot{X}}}\right)^2\right) + \frac{k^E}{N}\left(\frac{R_{YX}(\tau)}{\sigma_Y \sigma_X}\right)^2   (3.35)

k^E = \frac{\int_{a_1}^{a_2} x^2\, p_X(x)\, dx}{\int_{a_1}^{a_2} p_X(x)\, dx} - \tilde{a}^2   (3.36)
The results of eqs. (3.29), (3.30) and eqs. (3.34), (3.35) constitute the mathematical basis
of the local extremum triggering condition. The assumption of uncorrelated time segments
can be violated, so eqs. (3.34) and (3.35) should be used cautiously. In contrast to the
level crossing triggering condition the variance of the auto RD functions predicted by eq.
(3.34) is not zero at time lag zero

\mathrm{Var}[\hat{D}_{XX}(0)] = \frac{k^E}{N} \ne 0   (3.37)

The variance will converge towards \sigma_X^2/N for |\tau| \to \infty, corresponding to the result for the level crossing
triggering condition.
Figure 3.7: Continuous process and the corresponding sampled discrete time series.
The RD function is estimated using local extremum triggering with the triggering levels T^E_{X(t)} =
\{0 \le x(t) < \infty\}. It is chosen to use 3 points in each RD function. Figure 3.8 shows the
time segments which have been picked out using the local extremum triggering condition
and the resulting RD function. The time axis of the time segments corresponds to the
time axis in fig. 3.7.
Figure 3.8: The time segments and the resulting RD function estimated using local extremum
triggering with [a_1 a_2] = [0 \infty].
The estimated and the theoretical RD functions are shown in figs. 3.9 - 3.10.
[Panels of fig. 3.9: D_{X_1X_1}(\tau) and D_{X_2X_1}(\tau), in m*m, versus \tau.]
Figure 3.9: Normalized RD functions estimated using local extremum triggering applied to
the response of the first mass, together with the theoretical RD functions.
[Panels of fig. 3.10: D_{X_1X_2}(\tau) and D_{X_2X_2}(\tau), in m*m, versus \tau.]
Figure 3.10: Normalized RD functions estimated using local extremum triggering applied
to the response of the second mass, together with the theoretical RD functions.
The figures illustrate that the estimation errors of the RD functions increase with increasing
distance from the triggering condition. Again it is seen that the errors result in an
overestimation of the RD functions. The numbers of triggering points were 162 and 152
for the RD functions shown in figures 3.9 and 3.10, respectively. This is an important
difference compared to the results of the level crossing triggering condition. Although
the triggering conditions theoretically estimate the same functions, there is a significant
difference, since the numbers of triggering points differ.
Although the positive point triggering condition is simple, the condition is versatile. If
e.g. the triggering levels are chosen as [0 \infty], half of the points in the time series will be
triggering points. On the other hand, if the triggering levels are chosen as [a a + \Delta a]
with \Delta a \to 0, the level crossing triggering condition is obtained. So the positive point
triggering condition can be interpreted as a generalization of the level crossing triggering
condition. The condition is reformulated to be of the same form as the applied general
triggering condition.
D_{YX}(\tau) = \frac{R_{YX}(\tau)}{\sigma_X^2}\,\tilde{a}   (3.42)

where the triggering level \tilde{a} is given by the triggering levels a_1, a_2 and the density function

\tilde{a} = \frac{\int_{a_1}^{a_2} x\, p_X(x)\, dx}{\int_{a_1}^{a_2} p_X(x)\, dx}   (3.43)

Equations (3.41) - (3.43) are exactly the same as eqs. (3.29) - (3.31) from the local
extremum triggering condition. The difference between the two triggering conditions is
that the local extremum triggering condition demands that the time derivative of the time
series is zero, whereas the positive point triggering condition averages the contribution of
the time derivative towards zero. This also means that the number of triggering points
is significantly different. The RD functions are estimated as the empirical mean. The
processes are assumed to be ergodic.

\hat{D}_{XX}(\tau) = \frac{1}{N}\sum_{i=1}^{N} x(t_i+\tau)\,\big|\,\{a_1 \le x(t_i) < a_2\}   (3.44)

\hat{D}_{YX}(\tau) = \frac{1}{N}\sum_{i=1}^{N} y(t_i+\tau)\,\big|\,\{a_1 \le x(t_i) < a_2\}   (3.45)
where x(t) and y(t) are realizations of X(t) and Y(t). If the time segments are uncorrelated
the variance of the estimated RD functions is given by, see appendix A

\mathrm{Var}[\hat{D}_{XX}(\tau)] \approx \frac{\sigma_X^2}{N}\left(1 - \left(\frac{R_{XX}(\tau)}{\sigma_X^2}\right)^2\right) + \frac{k^P}{N}\left(\frac{R_{XX}(\tau)}{\sigma_X^2}\right)^2   (3.46)

\mathrm{Var}[\hat{D}_{YX}(\tau)] \approx \frac{\sigma_Y^2}{N}\left(1 - \left(\frac{R_{YX}(\tau)}{\sigma_Y \sigma_X}\right)^2\right) + \frac{k^P}{N}\left(\frac{R_{YX}(\tau)}{\sigma_Y \sigma_X}\right)^2   (3.47)

k^P = \frac{\int_{a_1}^{a_2} x^2\, p_X(x)\, dx}{\int_{a_1}^{a_2} p_X(x)\, dx} - \tilde{a}^2   (3.48)

The variances predicted by eqs. (3.46) and (3.47) should be used with utmost care. In
situations where broad triggering levels such as [a_1 a_2] = [0 \infty] have been applied, eqs. (3.46)
and (3.47) are invalid, since they will highly underestimate the variance. Only in situations
where a_1 \approx a_2 will eqs. (3.46) and (3.47) predict reasonable variances. This problem is
one of the topics of chapter 5.
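For completeness, a short sketch of the positive point triggering condition on sampled data is shown below; the interface is illustrative only.

import numpy as np

def positive_point_triggers(x, a1, a2):
    # Every sample whose value lies in [a1, a2) is a triggering point; no
    # condition is put on the time derivative, cf. eqs. (3.44) - (3.45).
    x = np.asarray(x)
    return np.where((x >= a1) & (x < a2))[0]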
3.5.1 Example 1: Illustration of Triggering Conditions
The time series in fig. 3.11 is considered again.
Figure 3.11: Continuous process and the corresponding sampled discrete time series.
The RD function is estimated using positive point triggering with the triggering levels T^P_{X(t)} = \{1 \le
x(t) < 2\}. It is chosen to use 3 points in each RD function. Figure 3.12 shows the time
segments which have been picked out using the positive point triggering condition and the
resulting RD function. The time axis of the time segments corresponds to the time axis
in fig. 3.11.
Figure 3.12: The time segments and the resulting RD function estimated using positive
point triggering with [a_1 a_2] = [1 2].
As illustrated, 4 positive points exist in the chosen interval. This small example indicates
that the positive point triggering condition results in more triggering points than the other
triggering conditions.
[Panels of fig. 3.13: D_{X_1X_1}(\tau) and D_{X_2X_1}(\tau), in m*m, versus \tau.]
Figure 3.13: Normalized RD functions estimated using positive point triggering applied to
the response of the first mass, together with the theoretical RD functions.
[Panels of fig. 3.14: D_{X_1X_2}(\tau) and D_{X_2X_2}(\tau), in m*m, versus \tau.]
Figure 3.14: Normalized RD functions estimated using positive point triggering applied to
the response of the second mass, together with the theoretical RD functions.
The number of triggering points was 1464 in both situations, and the expected number of triggering
points was 1397 in both situations. Compared to the results obtained using level crossing
and local extremum triggering, this condition illustrates the versatility of the RD
technique. The number of triggering points can be adjusted by selecting the triggering
condition and bounds carefully. Figures 3.13 and 3.14 again illustrate that the estimation
errors of the RD functions increase with the distance from time lag zero.
Figure 3.15: Continuous process and the corresponding sampled discrete time series.
The RD function is estimated using zero crossing triggering with positive slope. It is
chosen to use 3 points in each RD function. Figure 3.16 shows the time segments which
have been picked out using the zero crossing triggering condition and the resulting RD
function. The time axis of the time segments corresponds to the time axis in fig. 3.15.
Figure 3.16: The time segments and the resulting RD function estimated using zero crossing
triggering with positive slope.
[Panels of fig. 3.17: D_{X_1X_1}(\tau) and D_{X_2X_1}(\tau), in m*m/s, versus \tau.]
Figure 3.17: Normalized RD functions estimated using zero crossing triggering applied to
the response of the first mass, together with the theoretical RD functions.
[Panels of fig. 3.18: D_{X_1X_2}(\tau) and D_{X_2X_2}(\tau), in m*m/s, versus \tau.]
Figure 3.18: Normalized RD functions estimated using zero crossing triggering applied to
the response of the second mass, together with the theoretical RD functions.
The auto RD functions illustrate why these functions are interpreted as impulse response
functions. The numbers of triggering points were 263 and 236, respectively. The expected
numbers of triggering points predicted by eq. (A.97) were 261 and 233, respectively.
3.7 Quality Assessment of RD Functions
In applications of the RD technique it is an advantage to have some standard methods to
assess the quality of the estimated RD functions. The advantage of standard methods
is that valuable experience with this technique can be transferred between different data
sets obtained from different physical systems. The purpose of quality assessment of RD
functions is to answer the following questions.
How well do the data approximate the assumptions of stationarity and Gaussianity?
Has the empirical mean value converged sufficiently towards the true mean value
(or: is the number of triggering points adequate)?
How many points in the RD functions are estimated with sufficient accuracy?
These questions are correlated in some sense. They cannot be answered individually.
Whether or not the data are realizations of a stationary process can often be judged with
pre-knowledge of the physical system and the loads. It is a common assumption that the
physical system is time-invariant during the period where the measurements are collected.
Then, if the load is stationary, the measurements will also be stationary. How well the data
approximate a Gaussian distribution can be analysed by e.g. a normal probability plot
or various tests. Usually the measurements are only approximately Gaussian distributed,
but the RD technique is applied anyway under critical observation. Two different types
of tests or investigations are suggested as standard methods for quality assessment of RD
functions: the shape invariance test and the symmetry test.
The RD functions are estimated and normalized to be theoretically equal to the correlation
functions. Figure 3.19 shows the normalized RD functions and the theoretical correlation
function.
[Figure 3.19: normalized RD function D_{X_1X_1}(\tau), in m*m, for the three sets of triggering levels, together with the theoretical correlation function.]
Visually it seems as if the RD function estimated using [a_1 a_2] = [0 0.5]\sigma_X is the most
inaccurate compared to the theoretical function. The SIC values are calculated
between all estimates and the theoretical correlation function. The results are seen in table
3.2. The different functions are ordered so that the first three components correspond to
the three RD functions and the fourth component is the theoretical correlation function.

[a_1 a_2]           [0 0.5]sigma_X   [0.5 1.5]sigma_X   [1.5 inf]sigma_X   Theo
[0 0.5]sigma_X      1.00             0.88               0.88               0.89
[0.5 1.5]sigma_X    0.88             1.00               0.97               0.97
[1.5 inf]sigma_X    0.88             0.97               1.00               0.99
Theo                0.89             0.97               0.99               1.00

Table 3.2: Shape invariance criteria between three estimated RD functions and the theoretical
correlation function.
The numbers of triggering points for the different estimates of the RD functions were
N = [1712 2346 675]. The shape invariance test shows that all three RD functions have a
reasonable agreement both visually and in terms of the SIC values. It is seen that the correlation
between the RD functions and the theoretical correlation function is not a simple function
of the number of triggering points. The estimate with the lowest number of triggering
points has the highest correlation with the theoretical function. It is not unusual to
observe that the accuracy of the RD functions increases with increasing triggering level,
although the number of triggering points is decreasing.
Example:
The system described in section 3.2.2 is considered again. The auto RD function of the
response of the first mass is calculated using the level crossing triggering condition. Two
different triggering levels, a = \sqrt{2}\sigma_X and a = -\sqrt{2}\sigma_X, are used. The estimated correlation
functions and the theoretical correlation function are shown in fig. 3.20.
[Figure 3.20: estimated auto RD functions D_{X_1X_1}(\tau), in m*m, for the two triggering levels, together with the theoretical correlation function.]
The above figure illustrates that there is only a small difference between the two RD
functions estimated using the same triggering level but with changed sign. If the average
of these two functions is used, maximum information is extracted from the measurements
using the level crossing triggering condition.
If the estimated RD functions are scaled to be equal to the correlation functions (normalized
with the triggering levels) then an error function can be defined as

E(\tau) = \frac{\hat{R}_{YX}(\tau) - \hat{R}_{XY}(-\tau)}{2}   (3.62)

A final estimate of the correlation functions can be calculated as the average value of the
above RD functions.
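A minimal sketch of this symmetry check is given below. It assumes that the two scaled RD estimates have already been evaluated at the same non-negative time lags, with the second argument holding R_XY(-tau); names and interface are illustrative only.

import numpy as np

def symmetry_check(R_yx, R_xy_neg):
    # Error function of eq. (3.62) and the averaged final estimate:
    # for stationary processes R_YX(tau) = R_XY(-tau), so half the difference
    # measures the estimation error and the average is the final estimate.
    R_yx = np.asarray(R_yx)
    R_xy_neg = np.asarray(R_xy_neg)
    error = 0.5 * (R_yx - R_xy_neg)
    final = 0.5 * (R_yx + R_xy_neg)
    return error, final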
Example:
The system described in section 3.2.2 is considered again. The RD functions of the responses
are calculated using the positive point triggering condition. The triggering levels
are chosen as [0.5\sigma_X \infty]. From the RD functions the error functions and the final RD
functions are calculated. Figures 3.21 and 3.22 show the error functions and the final RD
functions.
[Figures 3.21 and 3.22: error functions and final RD functions D_{X_1X_1}, D_{X_2X_1}, D_{X_1X_2} and D_{X_2X_2}, in m*m, versus \tau.]
k^E = \frac{\int_{a_1}^{a_2} x^2\, p_X(x)\, dx}{\int_{a_1}^{a_2} p_X(x)\, dx} - \tilde{a}^2   (3.67)
In order to calculate the expected number of triggering points an SDOF system is considered.
The system is loaded by white noise. The natural eigenfrequency and the damping
ratio are f = 1 Hz and \zeta = 1 %. The expected number of triggering points can be calculated
using eq. (A.70). The variance for different combinations of the triggering levels [a_1 a_2]
is calculated for each time lag of the correlation function. The triggering levels which
minimize eq. (3.65) are taken as the optimal choice. Figure 3.23 shows the optimal upper
triggering level and the optimal lower triggering level as a function of the time lag.
Figure 3.23: Theoretically predicted optimal lower and upper triggering levels (in \sigma_X) for an
SDOF system using local extremum triggering and the corresponding correlation function.
Figure 3.23 illustrates a new result. The optimal choice of triggering
levels is not always the one which maximizes the number of triggering points. Theoretically it is
predicted that the triggering levels should be chosen as [a_1 a_2] = [\sigma_X \infty]. The above result
can only be a guideline to the user, since the result depends on the correlation function
and thereby the physical system. The conclusion is that the triggering levels for the local
extremum triggering condition should not uncritically be chosen as [a_1 a_2] = [0 \infty], which
maximizes the number of triggering points.
The above theoretical prediction does not take the correlation between the time segments
in the averaging process into account. The prediction assumes uncorrelated time segments.
In order to check the above result a simulation study is performed. 500 responses of the
SDOF system loaded by Gaussian white noise are simulated. Each time series contains
5000 points and is sampled at 15 Hz. Estimates of the correlation function are calculated
for each response using the local extremum triggering condition with different triggering
levels. The triggering levels are chosen as all possible combinations of a_1 = [0, 0.2, ..., 3]\sigma_X
and a_2 = [0, 0.2, ..., 3]\sigma_X under the constraint that a_2 > a_1. The maximum upper
level is chosen as 3\sigma_X, since it is very unlikely to find realizations of the response beyond
3\sigma_X. A higher maximum upper level would demand simulation of extremely long time
series. A similar argument is used to select the resolution of the triggering levels as
0.2\sigma_X. A higher resolution would also demand simulation of extremely long time series. The
optimal triggering levels are chosen as the levels with minimum error, calculated as the sum
of the absolute values of the difference between the simulated and theoretical correlation
functions. The result of the simulation study is shown in figure 3.24.
Figure 3.24: Simulated optimal lower and upper triggering levels (in \sigma_X) for an SDOF
system using local extremum triggering and the corresponding correlation function.
The results in fig. 3.24 show good agreement with the theoretical predictions in fig. 3.23.
It is recommended that the triggering levels for the local extremum triggering condition
should be chosen around [a_1 a_2] = [\sigma_X \infty]. The best way to select the optimal triggering
levels is to perform a sensitivity study. The lower triggering level could be chosen as e.g.
[0 0.5 1 1.5]\sigma_X and the upper triggering level as infinity, and the RD functions with the lowest
errors calculated using the symmetry relations decide which triggering levels are optimal.
A simulation study corresponding to the above is performed using the positive point triggering
condition. The aim is to investigate how the triggering levels should be chosen. A
theoretical prediction can be calculated, but the assumption of uncorrelated time segments
is highly violated, so the prediction is excluded. The results are shown in fig. 3.25.
Figure 3.25: Optimal triggering levels for positive point triggering for an SDOF system
estimated by simulation together with the corresponding correlation function.
The simulation study indicates that the triggering levels for the positive point triggering
condition should be chosen as approximately [a_1 a_2] = [\sigma_X \infty].
In this section new information on how to select the triggering levels for the different
triggering conditions has been obtained. It is shown that it can be appropriate to exclude
triggering points between 0 and \sigma_X. Although this is only a guideline it is important new
information. The reason is that the accuracy of the estimates is increased and at the same
time the estimation time is decreased, since triggering points are excluded.
The accuracy of the estimates of the (double sided) correlation functions is defined by the
error, E

E = \frac{1}{2M+1}\sum_{i=-M}^{M} \left(R(i\Delta T) - \hat{R}(i\Delta T)\right)^2   (3.68)

where R and \hat{R} denote the theoretical correlation function and the estimate of the correlation
function, respectively. The accuracy calculated using eq. (3.68) contains contributions
from both random and bias errors. To reduce simulation errors all results are based on
250 averages (or simulations).
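The accuracy measure of eq. (3.68) amounts to the mean squared deviation over the 2M+1 lags, as in the short sketch below (illustrative interface only).

import numpy as np

def correlation_error(R_true, R_est):
    # Accuracy measure of eq. (3.68): mean squared deviation between the
    # theoretical and the estimated double sided correlation function over
    # the 2M+1 time lags -M, ..., M.
    R_true = np.asarray(R_true)
    R_est = np.asarray(R_est)
    return np.mean((R_true - R_est) ** 2)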
Section 3.9.1 describes the two traditional approaches which are used to estimate the
correlation functions. Section 3.9.2 describes the chosen triggering conditions and levels
for the RD technique. Section 3.9.3 presents the results of the simulation study. The
simulation study is finished with a conclusion in section 3.9.4.
In a finite discrete time interval consisting of N points the cross correlation functions are
estimated by

\hat{R}_{YX}(r\Delta T) =
\begin{cases}
\dfrac{1}{N-r}\displaystyle\sum_{i=0}^{N-r} y((i+r)\Delta T)\, x(i\Delta T), & 0 \le r < N \\[3mm]
\dfrac{1}{N-|r|}\displaystyle\sum_{i=|r|}^{N} y((i+r)\Delta T)\, x(i\Delta T), & -N < r < 0
\end{cases}   (3.70)
The correlation functions estimated by the direct approach are unbiased. The second
approach is based on Fourier and inverse Fourier transformations. The estimation procedure
is unbiased and described in Bendat & Piersol [10] and Brincker et al. [2]. In outline the
computational steps are zero-padding of the time series, Fourier transformation, multiplication
of one transform with the complex conjugate of the other, inverse transformation and
scaling with 1/(N - |r|) to remove the bias.
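A minimal sketch of such an FFT-IFFT based unbiased estimate is given below; it assumes real-valued, zero mean records and is not necessarily identical to the implementation used for the comparison in this section.

import numpy as np

def xcorr_fft(x, y, max_lag):
    # Unbiased cross correlation estimate via FFT: zero-pad to avoid circular
    # wrap-around, multiply one transform with the complex conjugate of the
    # other, transform back and divide by N - |r|.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x)
    nfft = 2 * n
    X = np.fft.rfft(x, nfft)
    Y = np.fft.rfft(y, nfft)
    r_full = np.fft.irfft(Y * np.conj(X), nfft)       # raw correlation sums
    lags = np.arange(-max_lag, max_lag + 1)
    r = np.concatenate((r_full[-max_lag:], r_full[:max_lag + 1]))
    return lags, r / (n - np.abs(lags))               # unbiased R_YX(r * dt)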
The estimation time of both methods is of course dependent on the size of the time series, T
(N), and the maximum time lag, \tau_max. But the estimation time of these methods is
not dependent on the statistical nature of the time series. This is a significant difference
compared to the RD technique.
It is important to note that the estimation time of the RD functions using the level crossing and
local extremum triggering conditions depends on the statistical nature of the time series.
Two processes with different correlation structures would result in different estimation
times. So the results for these two triggering conditions can only be guidelines. But the
estimation time for the positive point triggering condition is independent of the statistical
nature of the measurement. The results for the estimation time of this condition can be
considered to be as general as the results for the direct and the Fourier-inverse Fourier
approach. Since the triggering levels are [0 \infty] the estimation times for the positive point
condition will be the slowest possible for the RD technique. This is the reason for choosing
these levels instead of e.g. [\sigma_X \infty], which might result in more accurate RD functions.
Figure 3.26: Total estimation time of the 4 correlation functions with varying number of
points in the time series and 128 time lags in the correlation functions. Direct: the direct
approach. FFT-IFFT: the approach based on Fourier and inverse Fourier transformations.
RD^P: positive point triggering. RD^E: local extremum triggering. RD^L: level
crossing triggering.
As expected, the direct approach is clearly the slowest approach. The reduced estimation
time obtained by using the FFT-IFFT approach is obvious. The estimation time obtained using the
positive point triggering condition is nearly equal to the estimation time obtained using
the FFT-IFFT approach. Level crossing and local extremum triggering result in shorter
estimation times compared to the FFT-IFFT approach. The estimation times of these
two formulations depend on the chosen triggering levels and the correlation structure of
the processes. Even though the estimation time would differ by using other triggering
levels or a different physical system, the results indicate how fast the RD technique can
be compared to the FFT-IFFT approach if a strict triggering condition is chosen.
Figure 3.27 shows the estimation time of the correlation functions with varying number
of time lags in the correlation functions. The number of points in the time series is 8000.
Points in correlation functions
Figure 3.27: Total estimation time of the 4 correlation functions with varying number of
time lags and 8000 points in the time series. Direct: The direct approach. FFT-IFFT:
The approach based on Fourier and inverse Fourier transformations. RDP : Positive point
triggering. RDE : Local extremum triggering. RDL : Level crossing triggering.
Figure 3.27 shows the same tendency as gure 3.26. If a strict RD formulation, such
as level crossing or local extremum triggering is used, the RD technique becomes faster
than the FFT-IFFT approach. If only a small number of time lags is needed, the RD
technique becomes signicantly faster than the FFT-IFFT approach. The dierence in
the estimation time between the dierent triggering conditions reduces with a decreasing
number of time lags in the correlation functions. This can be explained by the fact that
the averaging process and the detection of triggering points both contribute to the total
estimation time. For estimates with a large number of time lags the averaging process
is dominant. With a decreasing number of time lags in the estimates, the detection of
triggering points becomes correspondingly more dominant. On average the the number
of triggering points for local extremum was 224 and for level crossing 430. For a large
number of time lags local extremum triggering is faster. But for a small number of time
lags level crossing triggering is faster. This is expected, since detecting a local extremum
is more time consuming than detecting a level crossing.
Figure 3.28 shows the accuracy (see eq. (3.68)) of the dierent methods as a function
of the number of points in the time series. Figure 3.29 shows the results of the same
investigations performed on the time series with 20 % noise added. 20 % is the ratio
between the standard deviation of the noise and the standard deviation of the time series.
The direct approach has been omitted due to the high estimation times.
[Figure 3.28 panels: accuracy E of D_{X_1X_1}, D_{X_1X_2}, D_{X_2X_1} and D_{X_2X_2} versus the number of points in the time series.]
Figure 3.28: Accuracy, E, of correlation functions with varying time series length. 128
points in the correlation functions. The curves correspond to the FFT-IFFT approach and
to local extremum, positive point and level crossing triggering.
[Figure 3.29 panels: accuracy E of D_{X_1X_1}, D_{X_1X_2}, D_{X_2X_1} and D_{X_2X_2} versus the number of points in the time series.]
Figure 3.29: Accuracy, E, of correlation functions with varying time series length. 128
points in the correlation functions. 20 % independent Gaussian white noise added. The
curves correspond to the FFT-IFFT approach and to local extremum, positive point and
level crossing triggering.
As expected, the results show that the error of all four approaches converges towards zero
with increasing length of the time series. The effect of adding noise is a slower convergence
rate, but the results still have a high accuracy compared with the noise free analysis. The
positive point triggering condition gives the lowest error, whereas the local extremum
triggering condition has the highest error.
Figures 3.30 and 3.31 show the quality of the 4 approaches as a function of the number
of points in the correlation functions. The results are based on 8000 points in the time
series.
[Figure 3.30 panels: accuracy E of D_{X_1X_1}, D_{X_1X_2}, D_{X_2X_1} and D_{X_2X_2} versus the number of points in the correlation functions.]
Figure 3.30: Accuracy, E, of correlation functions with increasing number of time lags.
8000 points in the time series. The curves correspond to the FFT-IFFT approach and to
local extremum, positive point and level crossing triggering.
[Figure 3.31 panels: accuracy E of D_{X_1X_1}, D_{X_1X_2}, D_{X_2X_1} and D_{X_2X_2} versus the number of points in the correlation functions.]
Figure 3.31: Accuracy, E, of correlation functions with increasing number of time lags.
8000 points in the time series. 20 % independent Gaussian white noise added. The curves
correspond to the FFT-IFFT approach and to local extremum, positive point and level
crossing triggering.
All 4 approaches have increasing accuracy with an increasing number of points in the time
series. In general the RD technique with the local extremum triggering condition gives the
highest error. The general difference between the results from the noise free responses and
the responses with 20 % noise added is only a slight increase in the error. It is interesting
that although the absolute error is different for the 4 approaches, the rate of increase in
the error with increasing number of points in the correlation functions is the same.
3.9.4 Conclusions
Three different unbiased approaches for non-parametric estimation of correlation functions
have been investigated: the direct approach, the FFT-IFFT approach and the RD
technique. The main issue was illustration of advantages and disadvantages of different
formulations of the RD technique. The investigations are based on the simulated response of
a 2DOF system loaded by Gaussian white noise.
The direct approach was clearly the most time-consuming approach. For a small number
of points in the correlation functions the RD technique is the fastest approach no matter
how this approach is formulated. For a large number of points only a strict formulation
of the RD technique is faster than the FFT-IFFT approach.
Triggering using positive points and the FFT-IFFT approach produce the most accurate estimates
of the modal parameters compared to the level crossing and the local extremum triggering
conditions. More information about the influence of applying bounds to the positive point
triggering condition is needed. Applying bounds will decrease the estimation time and
most probably also decrease the uncertainty.
3.10 Summary
The purpose of this chapter is to present and illustrate the RD technique. The main
assumptions behind the results are that the stochastic processes are stationary zero mean
Gaussian distributed processes. In section 3.1 the RD functions are defined and the
estimation procedure is presented. It is shown that the estimates of the RD functions are
unbiased.
Section 3.2 introduces the applied general triggering condition. It is shown that using
this triggering condition the RD functions become a weighted sum of the correlation functions
and the time derivative of the correlation functions. Furthermore, an approximate
equation for calculation of the variance of the RD functions is given. The applied general
triggering condition is introduced, since the results for the four most well known triggering
conditions can be derived directly from the results of the applied general triggering
condition. These four conditions, level crossing, local extremum, positive point and zero
crossing with positive slope, are presented in sections 3.3 - 3.6. The relation between the
RD functions and the correlation functions is given and an approximate relation for the
variance of the RD functions is stated. The triggering conditions are illustrated by a
simulation study.
Section 3.7 introduces the concept of quality assessment of RD functions. It is illustrated
how the shape invariance relation of RD functions and the symmetry relations of correlation
functions of stationary processes can be used as a basis for quality assessment. Section
3.8 indicates how the triggering levels of RD functions should be chosen. Guidelines for
the user are given.
The chapter is concluded with a comparison of the speed and quality of different unbiased
methods for estimation of correlation functions of stationary processes. It is shown that
the RD technique can be superior in speed and/or quality if the triggering conditions and
levels are selected carefully.
Bibliography
[1] Vandiver, J.K., Dunwoody, A.B., Campbell, R.B. & Cook, M.F. A Mathematical Basis for the Random Decrement Vibration Signature Analysis Technique. Journal of Mechanical Design, Vol. 104, April 1982, pp. 307-313.
[2] Brincker, R., Krenk, S., Kirkegaard, P.H. & Rytter, A. Identification of Dynamical Properties from Correlation Function Estimates. Bygningsstatiske Meddelelser, Vol. 63, No. 1, 1992, pp. 1-38.
[3] Brincker, R., Krenk, S. & Jensen, J.L. Estimation of Correlation Functions by the Random Dec Technique. Proc. Skandinavisk Forum for Stokastisk Mekanik, Lund, Sweden, Aug. 30-31, 1990.
[4] Ditlevsen, O. Uncertainty Modelling. McGraw-Hill Inc. 1981. ISBN: 0-07-010746-0.
[5] Melsa, J.L. & Sage, A.P. An Introduction to Probability and Stochastic Processes. Prentice-Hall, Inc., Englewood Cliffs, N.J., 1973. ISBN: 0-13-034850-3.
[6] Söderström, T. & Stoica, P. System Identification. Prentice Hall International (UK) Ltd, 1989. ISBN: 0-13-881236.
[7] Rice, S.O. Mathematical Analysis of Random Noise. Bell Syst. Tech. J., Vol. 23, pp. 282-332; Vol. 24, pp. 46-156. Reprinted in N. Wax, Selected Papers on Noise and Stochastic Processes.
[8] Hummelshøj, L.G., Møller, H. & Pedersen, L. Skadesdetektering ved Responsmåling. M.Sc. Thesis (in Danish), Aalborg University, 1991.
[9] Brincker, R., Jensen, J.L. & Krenk, S. Spectral Estimation by the Random Dec Technique. Presented at the 9th International Conference on Experimental Mechanics, Lyngby, Copenhagen, August 20-24, 1990.
[10] Bendat, J.S. & Piersol, A.G. Random Data - Analysis and Measurement Procedures. 1986 John Wiley & Sons, Inc. ISBN 0-471-0400-2.
[11] Asmussen, J.C. & Brincker, R. Estimation of Correlation Functions by Random Decrement. Proc. ISMA-21, Noise and Vibration Engineering, Leuven, Belgium, Sept. 18-20, 1996, Vol. II, pp. 13-19.
[12] Allemang, R.J. & Brown, D.L. A Correlation Coefficient for Modal Vector Analysis. Proc. 1st International Modal Analysis Conference, Orlando, Florida, USA, 1982.
Chapter 4
Vector Triggering Random
Decrement
This chapter introduces a new concept: Vector triggering Random Decrement (VRD). The argument for developing this technique is that experience has revealed some problems when the RD technique is applied to a large number of measurements (e.g. > 4) collected simultaneously, which is often the situation in ambient testing of bridges.

- If all measurements are used as reference measurement, which results in the maximum number of RD functions (the full correlation matrix is estimated), the estimation time increases proportionally to the number of measurements compared to the situation where only a single measurement is used as reference. Since speed is one of the main advantages of the technique, a decision to estimate all possible RD setups should be reconsidered. On the other hand, if not all possible RD functions are estimated, valuable information can be lost.

- If only some of the RD setups are estimated, how should the actual reference measurements be chosen? By assuming that the signal-to-noise ratio is highest in the measurements with the highest standard deviation, the standard deviation could be used as a criterion. But this is only a guideline, not a full solution.

- Usually the uncertainty of the cross RD functions is higher than the uncertainty of the auto RD functions, so only auto RD functions should be used for modal parameter estimation. But this excludes the possibility of estimating mode shapes.
The above problems are the motivation for developing the VRD technique. The target is to estimate functions with the quality of auto RD functions but which, in contrast to auto RD functions, contain phase information and thus the possibility of estimating mode shapes. The solution is to apply a vector triggering condition instead of the scalar triggering conditions used in chapter 3. The VRD concept was first presented in Ibrahim et al. [1], [2] and Asmussen et al. [3]. The purpose of this chapter is to present and illustrate the VRD technique.

VRD functions are defined in section 4.1, corresponding to the definition of the RD functions in section 3.1. A mathematical basis of the VRD technique is presented in section 4.2. In Ibrahim et al. [1], [2] a mathematical basis is presented where the VRD functions are interpreted as free decays. In section 4.2 the VRD functions are instead interpreted in terms of correlation functions. The link between the correlation functions and the VRD functions is derived for the first time. The results are equivalent to the results of chapter 3. In section 4.3 the variance of the VRD functions is derived. Section 4.4 discusses quality assessment of VRD functions. Sections 4.5 and 4.6 present simulation studies of the VRD technique. The purpose is to investigate the performance of the VRD technique compared to the traditional RD technique. The examples illustrate the advantages of the VRD technique.
For e.g. four simultaneously recorded measurements, with the vector triggering condition applied to all four, the VRD functions are

$$
\begin{bmatrix} D_{X_1}(\tau) \\ D_{X_2}(\tau) \\ D_{X_3}(\tau) \\ D_{X_4}(\tau) \end{bmatrix}
= E\left[\begin{bmatrix} X_1(t+\tau) \\ X_2(t+\tau) \\ X_3(t+\tau) \\ X_4(t+\tau) \end{bmatrix}
\,\Bigg|\; T^{v}_{X_1(t+\Delta t_1),\,X_2(t+\Delta t_2),\,X_3(t+\Delta t_3),\,X_4(t+\Delta t_4)}\right] \tag{4.7}
$$
The VRD functions are estimated as

$$
\begin{bmatrix} \hat D_{X_1}(\tau) \\ \hat D_{X_2}(\tau) \\ \hat D_{X_3}(\tau) \\ \hat D_{X_4}(\tau) \end{bmatrix}
= \frac{1}{N}\sum_{i=1}^{N}
\begin{bmatrix} x_1(t_i+\tau) \\ x_2(t_i+\tau) \\ x_3(t_i+\tau) \\ x_4(t_i+\tau) \end{bmatrix}
\,\Bigg|\; T^{v}_{x_1(t_i+\Delta t_1),\,x_2(t_i+\Delta t_2),\,x_3(t_i+\Delta t_3),\,x_4(t_i+\Delta t_4)} \tag{4.8}
$$
where $x_i(t)$ are realizations of $X_i(t)$. The number of triggering points $N$ will always decrease with an increasing number of measurements, or elements, in the vector triggering condition. With a high number of measurements, e.g. four, it is therefore not obvious that the triggering condition should be applied to all measurements. This could lead to a low number of triggering points, resulting in an estimate of the VRD functions which has not converged sufficiently.
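As a minimal sketch of the estimation in eq. (4.8), assuming a positive point triggering band on each triggering channel, the averaging could be implemented as follows (the function name vrd and all variable names are illustrative, not part of the original implementation):

```matlab
function D = vrd(x, trig, dt, a, b, L)
% Sketch of the VRD estimation in eq. (4.8). Assumptions: x is an
% (n x K) matrix of simultaneously sampled responses, trig lists the
% triggering channels, dt the corresponding time shifts (in samples),
% a and b the triggering bands, and L the segment half-length (samples).
[n, K] = size(x);
D = zeros(n, 2*L + 1);                 % one VRD function per channel
N = 0;                                 % number of triggering points
for k = L + 1 + max(abs(dt)) : K - L - max(abs(dt))
  hit = 1;
  for j = 1:length(trig)               % vector triggering condition
    v = x(trig(j), k + dt(j));
    if v < a(j) | v > b(j)
      hit = 0;
    end
  end
  if hit
    D = D + x(:, k-L:k+L);             % accumulate time segments
    N = N + 1;
  end
end
D = D / N;                             % unbiased average, cf. eq. (4.8)
```

For the triggering condition in eq. (4.13) a call could look like D = vrd(x, [1 2], [0 0], [0 0], [inf inf], 50); the sketch only illustrates the averaging and is not an optimized implementation.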
The solution to this problem is to estimate one or several sets of VRD functions, each
containing a number of VRD functions corresponding to the number of measurements.
This is illustrated in the situation with four measurements by dening two sets of VRD
functions.
$$
\begin{bmatrix} D^{1}_{X_1}(\tau) \\ D^{1}_{X_2}(\tau) \\ D^{1}_{X_3}(\tau) \\ D^{1}_{X_4}(\tau) \end{bmatrix}
= E\left[\begin{bmatrix} X_1(t+\tau) \\ X_2(t+\tau) \\ X_3(t+\tau) \\ X_4(t+\tau) \end{bmatrix}
\,\Bigg|\; T^{v}_{X_1(t+\Delta t_1),\,X_2(t+\Delta t_2)}\right] \tag{4.9}
$$

$$
\begin{bmatrix} D^{2}_{X_1}(\tau) \\ D^{2}_{X_2}(\tau) \\ D^{2}_{X_3}(\tau) \\ D^{2}_{X_4}(\tau) \end{bmatrix}
= E\left[\begin{bmatrix} X_1(t+\tau) \\ X_2(t+\tau) \\ X_3(t+\tau) \\ X_4(t+\tau) \end{bmatrix}
\,\Bigg|\; T^{v}_{X_3(t+\Delta t_3),\,X_4(t+\Delta t_4)}\right] \tag{4.10}
$$
In the general case with $n$ measurements the maximum number of VRD setups which can be estimated is the largest integer not exceeding $n/2$. Furthermore, the size of the vector triggering condition does not have to be equal for every setup. Consider e.g. five simultaneously recorded measurements. Two VRD setups could be calculated with the triggering conditions

$$
T^{v}_{X_1(t+\Delta t_1),\,X_2(t+\Delta t_2)} \;,\qquad
T^{v}_{X_3(t+\Delta t_3),\,X_4(t+\Delta t_4),\,X_5(t+\Delta t_5)} \tag{4.11}
$$

or alternatively

$$
T^{v}_{X_1(t+\Delta t_1),\,X_2(t+\Delta t_2),\,X_3(t+\Delta t_3)} \;,\qquad
T^{v}_{X_4(t+\Delta t_4),\,X_5(t+\Delta t_5)} \tag{4.12}
$$
Example - Illustration of the Algorithm
The purpose of this example is to illustrate the VRD algorithm applied to a simple system. The resulting VRD functions do not have any physical interpretation. Consider two continuous-time processes, $X_1(t)$ and $X_2(t)$. The continuous-time processes are shown in fig. 4.1 together with the two discrete-time processes obtained by sampling $X_1(t)$ and $X_2(t)$ simultaneously at equidistant time points.
Figure 4.1: The continuous-time processes and the corresponding discrete-time process.
The VRD functions of the two processes are calculated using the following vector triggering condition and the following size of the time segments

$$
T^{v}_{\mathbf X(t+\Delta t)} = \{X_1(t) > 0,\; X_2(t) > 0\} \tag{4.13}
$$

$$
\mathbf D_{\mathbf X}(\tau) = E[\mathbf X(t+\tau)\,|\,T^{v}_{\mathbf X(t+\Delta t)}], \qquad -1 \le \tau \le 1 \tag{4.14}
$$

where $\mathbf X(t) = [X_1(t)\;\; X_2(t)]^T$. As seen, the time shift vector is $\Delta t = [0\;\;0]$. Four triggering points are detected in the two processes. Figures 4.2 and 4.3 show the time segments at each triggering point and the resulting VRD functions. The time axis of the time segments corresponds to the time axis of the processes in fig. 4.1.
Figure 4.2: The time segments picked out in the averaging process and the resulting VRD
functions for X1 (t).
Figure 4.3: The time segments picked out in the averaging process and the resulting VRD
functions for X2 (t).
Figures 4.2 and 4.3 show that the VRD functions of the discrete-time processes correspond to the VRD functions of the continuous-time processes at the discrete time points. The vector triggering condition could also have been formulated as

$$
T^{v}_{\mathbf X(t+\Delta t)} = \{X_1(t) > 0,\; X_2(t) < 0\} \tag{4.15}
$$

where the time shift vector is still $\Delta t = [0\;\;0]$. Figures 4.4 and 4.5 show the time segments picked out by the triggering condition and the corresponding resulting VRD functions.
Figure 4.4: The time segments picked out in the averaging process and the resulting VRD
functions for X1 (t).
Figure 4.5: The time segments picked out in the averaging process and the resulting VRD
functions for X2 (t).
Four triggering points are detected using this triggering condition. In order to increase the number of triggering points, the time shift vector can be chosen to be different from $\Delta t = [0\;\;0]$. A new triggering condition is therefore formulated with a time shift introduced at $X_2(t)$, using $\Delta t = [0\;\;3]$. Figures 4.6 and 4.7 show the time segments picked out for the averaging process and the resulting VRD functions.
Figure 4.6: The time segments picked out in the averaging process and the resulting VRD
functions for X1 (t).
Figure 4.7: The time segments picked out in the averaging process and the resulting VRD
functions for X2 (t).
By introducing a proper time shift the number of triggering points has been increased from four to eight. This illustrates how versatile the VRD technique is. The number of triggering points is not only controlled by the triggering bounds but also by the choice of the time shift vector. Notice that in figs. 4.6 and 4.7 the time segments are all picked out at the time $t$, although the triggering condition is fulfilled at the time $t$ for $X_1(t)$ and at the time $t+3$ for $X_2(t)$.
4.2 Mathematical Basis of VRD
This section develops the relation between the VRD functions and the correlation functions of stationary, zero mean, Gaussian distributed processes. Consider an $n \times 1$-dimensional stochastic vector process, $\mathbf X(t)$

$$
\mathbf X(t) = [X_1(t),\, X_2(t),\, \ldots,\, X_n(t)]^T \tag{4.17}
$$
The correlation matrix of $\mathbf X$ at any time difference $\tau$ is given by

$$
\mathbf R_{XX}(\tau) = E[\mathbf X(t+\tau)\mathbf X^T(t)] =
\begin{bmatrix}
R_{X_1X_1}(\tau) & R_{X_1X_2}(\tau) & \cdots & R_{X_1X_n}(\tau) \\
R_{X_2X_1}(\tau) & R_{X_2X_2}(\tau) & \cdots & R_{X_2X_n}(\tau) \\
\vdots & \vdots & & \vdots \\
R_{X_nX_1}(\tau) & R_{X_nX_2}(\tau) & \cdots & R_{X_nX_n}(\tau)
\end{bmatrix} \tag{4.18}
$$
where $X_i$ refers to the elements of the vector process defined in eq. (4.17). The time shifts $\Delta t_i$ can be both positive and negative. The stochastic vector process $\mathbf Y_v(t)$ is of course restricted by $m \le n$. It is important that $\mathbf Y_v$ is contained as the first $m$ elements of $\mathbf X_v$ with $\tau = 0$. This is only a question of rearranging the elements in $\mathbf X_v$ and does not influence the final result or any application.
The auto correlation matrix of $\mathbf Y_v$ is given by

$$
\mathbf R_{Y_vY_v} = E[\mathbf Y_v(t)\mathbf Y_v^T(t)] =
\begin{bmatrix}
R_{X_1X_1}(0) & R_{X_1X_2}(\Delta t_1-\Delta t_2) & \cdots & R_{X_1X_m}(\Delta t_1-\Delta t_m) \\
R_{X_2X_1}(\Delta t_2-\Delta t_1) & R_{X_2X_2}(0) & \cdots & R_{X_2X_m}(\Delta t_2-\Delta t_m) \\
\vdots & \vdots & & \vdots \\
R_{X_mX_1}(\Delta t_m-\Delta t_1) & R_{X_mX_2}(\Delta t_m-\Delta t_2) & \cdots & R_{X_mX_m}(0)
\end{bmatrix} \tag{4.22}
$$
The cross correlation matrix between $\mathbf X_v$ and $\mathbf Y_v$ is given by

$$
\mathbf R_{X_vY_v} = E[\mathbf X_v(t)\mathbf Y_v^T(t)] =
\begin{bmatrix}
R_{X_1X_1}(\tau) & R_{X_1X_2}(\tau+\Delta t_1-\Delta t_2) & \cdots & R_{X_1X_m}(\tau+\Delta t_1-\Delta t_m) \\
R_{X_2X_1}(\tau+\Delta t_2-\Delta t_1) & R_{X_2X_2}(\tau) & \cdots & R_{X_2X_m}(\tau+\Delta t_2-\Delta t_m) \\
\vdots & \vdots & & \vdots \\
R_{X_nX_1}(\tau+\Delta t_n-\Delta t_1) & R_{X_nX_2}(\tau+\Delta t_n-\Delta t_2) & \cdots & R_{X_nX_m}(\tau+\Delta t_n-\Delta t_m)
\end{bmatrix} \tag{4.23}
$$
The definition of the VRD functions is now restricted to the mean value of $\mathbf X_v$ on condition that $\mathbf Y_v = \mathbf y_v$ (see eq. (4.1) for the general definition). This can be interpreted as a vector level crossing triggering condition

$$
T^{v}_{\mathbf Y(t+\Delta t)} = \{X_1(t+\Delta t_1) = y_1,\ \ldots,\ X_m(t+\Delta t_m) = y_m\} \tag{4.24}
$$

Since $\mathbf X(t)$ has been assumed to have a zero mean vector (see eq. (4.17)), the conditional mean value can be calculated from eq. (A.5)

$$
E[\mathbf X_v \mid T^{v}_{\mathbf Y(t+\Delta t)}] = E[\mathbf X_v \mid \mathbf Y_v = \mathbf y_v] = \mathbf R_{X_vY_v}\mathbf R^{-1}_{Y_vY_v}\mathbf y_v \tag{4.25}
$$

A triggering level vector, $\tilde{\mathbf a}$, is defined as

$$
\tilde{\mathbf a} = \mathbf R^{-1}_{Y_vY_v}\mathbf y_v, \qquad \tilde{\mathbf a} = [\tilde a_1\ \tilde a_2\ \ldots\ \tilde a_m]^T \tag{4.26}
$$

The VRD functions can be rewritten using eq. (4.26)

$$
\mathbf D^{v}_{\mathbf X}(\tau+\Delta t) = E[\mathbf X_v \mid T^{v}_{\mathbf Y(t+\Delta t)}] = \mathbf R_{X_vY_v}\,\tilde{\mathbf a} \tag{4.27}
$$
Before any attempt is made to extract the modal parameters, each VRD function is shifted $-\Delta t_i$. This leads to the shifted VRD functions

$$
\mathbf D^{v}_{\mathbf X}(\tau) =
\begin{bmatrix}
R_{X_1X_1}(\tau-\Delta t_1) & R_{X_1X_2}(\tau-\Delta t_2) & \cdots & R_{X_1X_m}(\tau-\Delta t_m) \\
R_{X_2X_1}(\tau-\Delta t_1) & R_{X_2X_2}(\tau-\Delta t_2) & \cdots & R_{X_2X_m}(\tau-\Delta t_m) \\
\vdots & \vdots & & \vdots \\
R_{X_nX_1}(\tau-\Delta t_1) & R_{X_nX_2}(\tau-\Delta t_2) & \cdots & R_{X_nX_m}(\tau-\Delta t_m)
\end{bmatrix}
\begin{bmatrix} \tilde a_1 \\ \tilde a_2 \\ \vdots \\ \tilde a_m \end{bmatrix} \tag{4.28}
$$
The same result would have been obtained if $\mathbf X_v(t)$ in eq. (4.21) had been defined without the $\Delta t$ time shift vector. The computational effort in the estimation procedure is independent of the chosen formulation. Using eq. (4.19), eq. (4.28) can be rewritten as

$$
\mathbf D^{v}_{\mathbf X}(\tau) = \mathbf R_{X_1}(\tau-\Delta t_1)\,\tilde a_1 + \mathbf R_{X_2}(\tau-\Delta t_2)\,\tilde a_2 + \ldots + \mathbf R_{X_m}(\tau-\Delta t_m)\,\tilde a_m \tag{4.29}
$$
Notice that if $m = 1$ and $\Delta t_1 = 0$ the traditional RD formulation for the level crossing triggering condition is obtained

$$
\mathbf D_{X_1}(\tau) = \mathbf R_{X_1}(\tau)\,\tilde a_1 = \mathbf R_{X_1}(\tau)\,\frac{y_1}{\sigma^2_{X_1}} \tag{4.30}
$$
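This special case can be verified in a few lines (a short check, assuming the single triggering level $y_1$ is applied to $X_1$ with $\Delta t_1 = 0$):

$$
\mathbf R_{Y_vY_v} = R_{X_1X_1}(0) = \sigma^2_{X_1}
\;\;\Rightarrow\;\;
\tilde a_1 = \mathbf R^{-1}_{Y_vY_v}\, y_1 = \frac{y_1}{\sigma^2_{X_1}}
\;\;\Rightarrow\;\;
\mathbf D^{v}_{\mathbf X}(\tau) = \mathbf R_{X_1}(\tau)\,\tilde a_1 = \mathbf R_{X_1}(\tau)\,\frac{y_1}{\sigma^2_{X_1}}
$$

which is exactly eq. (4.30).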
In practice the probability of observing $\mathbf Y_v = \mathbf y_v$ exactly is vanishingly small. This means that the expected number of triggering points can easily be too small to achieve a VRD function which has converged acceptably. Instead a vector positive point triggering condition is used

$$
T^{v}_{\mathbf Y(t+\Delta t)} = \{a_1 \le X_1(t+\Delta t_1) < b_1,\ \ldots,\ a_m \le X_m(t+\Delta t_m) < b_m\} \tag{4.31}
$$

The maximum number of triggering points is always obtained by choosing $\mathbf a = \mathbf 0$ and $\mathbf b = \boldsymbol\infty$.
In order to link this triggering condition with the results in eqs. (4.25) and (4.27), eq. (A.8) is used

$$
\mathbf D^{v}_{\mathbf X}(\tau+\Delta t)
= \frac{1}{k}\int_{\mathbf a}^{\mathbf b}\!\!\int_{-\infty}^{\infty} \mathbf x_v\, p_{X_v|Y_v}(\mathbf x_v\,|\,\mathbf y_v)\, p_{Y_v}(\mathbf y_v)\, d\mathbf x_v\, d\mathbf y_v
= \mathbf R_{X_vY_v}\mathbf R^{-1}_{Y_vY_v}\,\frac{1}{k}\int_{\mathbf a}^{\mathbf b} \mathbf y_v\, p_{Y_v}(\mathbf y_v)\, d\mathbf y_v
= \mathbf R_{X_vY_v}\,\tilde{\mathbf a} \tag{4.32}
$$
where the triggering level vector $\tilde{\mathbf a}$ is now defined as

$$
\tilde{\mathbf a} = \frac{\mathbf R^{-1}_{Y_vY_v}}{k}\int_{\mathbf a}^{\mathbf b} \mathbf y_v\, p_{Y_v}(\mathbf y_v)\, d\mathbf y_v,
\qquad k = \int_{\mathbf a}^{\mathbf b} p_{Y_v}(\mathbf y_v)\, d\mathbf y_v \tag{4.33}
$$
So the vector positive point triggering condition gives results equivalent to the vector level crossing triggering condition

$$
\mathbf D^{v}_{\mathbf X}(\tau) = \mathbf R_{X_1}(\tau-\Delta t_1)\,\tilde a_1 + \mathbf R_{X_2}(\tau-\Delta t_2)\,\tilde a_2 + \ldots + \mathbf R_{X_m}(\tau-\Delta t_m)\,\tilde a_m \tag{4.34}
$$
Only the weights, $\tilde a_i$, of the correlation functions have changed. In practice only the vector positive point triggering condition is of interest, since it is the only triggering condition which results in a reasonable number of triggering points.

In conclusion, the VRD functions are a sum of a number of correlation functions corresponding to the size of the vector triggering condition. The result in eq. (4.34) is important, since it means that the modal parameters can be extracted from the VRD functions using the methods described in chapter 2.

A single problem arises when methods developed to extract modal parameters from free decays are used to extract modal parameters from VRD functions. These methods can only deal with the positive time lag part of the correlation functions: the decays should dissipate with increasing time lag, which is not the case for the part of the correlation functions with negative time lags. This also means that the VRD functions cannot be used directly as input to ITD or PTD. First of all, only the part of the VRD functions with $\tau \ge 0$ can be used. Furthermore, a number of points corresponding to $\max(\Delta t_i)$ should be removed from all functions. Otherwise a part of the correlation functions with negative time lags is used in the modal parameter extraction procedure, which can result in highly erroneous damping ratios.
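A minimal sketch of this trimming step could look as follows (D, dt_max and the variable names are illustrative; ITD/PTD themselves are not shown):

```matlab
% Sketch: prepare VRD functions for ITD/PTD, cf. the discussion above.
% D is an (n x (2*L+1)) matrix of VRD functions on tau = -L*dT ... L*dT
% (odd number of columns assumed), dt_max is max(dt_i) in samples.
L     = (size(D,2) - 1) / 2;
Dpos  = D(:, L+1:end);            % keep only the part with tau >= 0
Dfree = Dpos(:, dt_max+1:end);    % remove the first max(dt_i) points
```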
4.2.1 Choice of Time Shifts
Another problem is to choose the time shifts $\Delta t_1, \Delta t_2, \ldots, \Delta t_m$ so that the maximum number of triggering points is obtained. The obvious possibility is to estimate a column in the correlation matrix at several positive as well as negative time lags using the traditional RD technique. To obtain the maximum number of triggering points for the VRD technique, the time shift $\Delta t_i$ can then be chosen from

$$
\max_{\tau}\big(|D_{X_iX_j}(\tau)|\big) \;\Rightarrow\; \Delta t_i = \tau \tag{4.35}
$$

Notice that the time shift corresponding to $i = j$ will always be $\Delta t_i = 0$, since $\tau = 0$ is always the time lag with maximum value for the auto correlation function of a stationary process. If $D_{X_iX_j}(\Delta t_i)$ is negative, the triggering levels $a_i$ and $b_i$ should also be chosen negative.
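A minimal sketch of this selection, assuming Dij holds the estimated cross RD function $D_{X_iX_j}(\tau)$ on the lag vector tau (both names illustrative), could be:

```matlab
% Sketch: choose the time shift dt_i from eq. (4.35) as the lag where
% |D_XiXj| attains its maximum. If the extremum is negative, the
% triggering levels a_i and b_i should be chosen negative as well.
[dmax, k] = max(abs(Dij));     % index of the largest absolute value
dt_i      = tau(k);            % time shift for measurement i
negative  = Dij(k) < 0;        % flag: use negative triggering levels
```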
From chapter 3, the RD functions obtained with the positive point triggering condition are proportional to the correlation functions,

$$
D_{XY}(\tau) = \frac{R_{XY}(\tau)}{\sigma^2_{Y}}\cdot
\frac{\int_{a_1}^{a_2} y\, p_{Y}(y)\, dy}{\int_{a_1}^{a_2} p_{Y}(y)\, dy} \tag{4.40}
$$
The proportionality relation is independent of any choice of the triggering levels $a_1$ and $a_2$. This is denoted shape invariance. By choosing different triggering levels, a number of different RD functions can be calculated. By normalizing these functions properly, a number of theoretically identical RD functions can be obtained. A quality measure based on eq. (4.40) can then be defined from the difference between the estimated RD functions. This approach was developed for the RD technique in section 3.7.

It would be obvious to extend this approach to the VRD technique. The VRD functions corresponding to eq. (4.40) are given by

$$
\mathbf D^{v}_{\mathbf X}(\tau) = \mathbf R_{X_1}(\tau-\Delta t_1)\,\tilde a_1 + \mathbf R_{X_2}(\tau-\Delta t_2)\,\tilde a_2 + \ldots + \mathbf R_{X_m}(\tau-\Delta t_m)\,\tilde a_m \tag{4.41}
$$

$$
\tilde{\mathbf a} = \mathbf R^{-1}_{Y_vY_v}\,
\frac{\int_{\mathbf a}^{\mathbf b}\mathbf y_v\, p_{Y_v}(\mathbf y_v)\, d\mathbf y_v}{\int_{\mathbf a}^{\mathbf b} p_{Y_v}(\mathbf y_v)\, d\mathbf y_v} \tag{4.42}
$$
If the VRD functions were shape invariant, the following relation should be fulfilled

$$
\tilde{\mathbf a}(\mathbf a_1, \mathbf b_1) = k\,\tilde{\mathbf a}(\mathbf a_2, \mathbf b_2) \tag{4.43}
$$

where $k$ is an arbitrary constant. This demand can be reformulated using eq. (4.42)

$$
\int_{\mathbf a_1}^{\mathbf b_1}\mathbf y_v\, p_{Y_v}(\mathbf y_v)\, d\mathbf y_v
= k\int_{\mathbf a_2}^{\mathbf b_2}\mathbf y_v\, p_{Y_v}(\mathbf y_v)\, d\mathbf y_v \tag{4.44}
$$
This relation does not in general lead to a guideline for how the choices of $\mathbf a_i$ and $\mathbf b_i$ can secure shape invariance of the VRD functions.

Another method to assess the quality of RD functions is based on the following relation for stationary processes

$$
R_{XY}(\tau) = R_{YX}(-\tau) \tag{4.45}
$$

which leads to the error function

$$
E = \hat D_{XY}(\tau) - \hat D_{YX}(-\tau) \tag{4.46}
$$

where the error function contains information about the quality of the RD functions. This approach was also developed in section 3.7 for the RD technique. Again, it would be obvious to extend this approach to VRD functions. The VRD functions are a sum of correlation functions, see eq. (4.34)

$$
D^{v}_{X_i}(\tau) = \sum_{j=1}^{m} R_{X_iX_j}(\tau-\Delta t_j)\,\tilde a_j \tag{4.47}
$$
First of all, it is clear that the VRD functions are in general not symmetric, since

$$
\sum_{j=1}^{m} R_{X_iX_j}(\tau-\Delta t_j)\,\tilde a_j - \sum_{j=1}^{m} R_{X_iX_j}(\Delta t_j-\tau)\,\tilde a_j \neq 0 \tag{4.48}
$$

This holds even for $\Delta t_j = 0,\ j = 1, \ldots, m$, since the cross correlation functions in eq. (4.48) are not symmetric. The next attempt would be to combine (add or subtract) the $n$ VRD functions so that the final result is theoretically zero. This can be done if and only if $m = n$, since both $R_{X_iX_j}$ and $R_{X_jX_i}$ should be represented in the VRD functions. If $m < n$ only the first $m$ VRD functions can be used. Next assume that $m = n$. Then the following relation should hold

$$
R_{X_iX_j}(\tau-\Delta t_j)\,\tilde a_j = R_{X_jX_i}(\tau-\Delta t_i)\,\tilde a_i \tag{4.49}
$$

In general this cannot be fulfilled, since $\Delta t_i \neq \Delta t_j$ and $\tilde a_i \neq \tilde a_j$ are both unknown. So in general the relation in eq. (4.45) does not constitute a basis for quality assessment of VRD functions.
It is a disadvantage of the VRD technique that the quality assessment cannot be performed using the methods developed for the RD technique. The only approach to quality assessment of VRD functions is to choose two triggering conditions where the only difference is the sign of the triggering levels

$$
T^{v,1}_{\mathbf Y(t+\Delta t)} = \{a_1 \le Y_1(t+\Delta t_1) < b_1,\ \ldots,\ a_m \le Y_m(t+\Delta t_m) < b_m\} \tag{4.50}
$$

$$
T^{v,2}_{\mathbf Y(t+\Delta t)} = \{-b_1 < Y_1(t+\Delta t_1) \le -a_1,\ \ldots,\ -b_m < Y_m(t+\Delta t_m) \le -a_m\} \tag{4.51}
$$

The corresponding estimated VRD functions have the relation

$$
\hat{\mathbf D}^{v,1}_{\mathbf X} = -\hat{\mathbf D}^{v,2}_{\mathbf X} \tag{4.52}
$$

This can be used to define an error function, which should theoretically be zero and thereby constitutes a basis for quality assessment.
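A minimal sketch of such a check, with D1 and D2 denoting VRD functions estimated with the sign-reversed triggering conditions in eqs. (4.50) and (4.51) (the normalization of the error measure is illustrative):

```matlab
% Sketch: quality assessment based on eq. (4.52). D1 and D2 are
% (n x M) matrices of VRD functions estimated with the triggering
% conditions (4.50) and (4.51). Theoretically D1 = -D2, so the
% normalized residual e should be close to zero for good estimates.
E = D1 + D2;                              % error function
e = sqrt(sum(E(:).^2) / sum(D1(:).^2));   % relative error measure
```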
4.5 Examples - 2DOF Systems
In order to test the performance of the VRD technique, a simulation study of two 2-DOF systems loaded by independent white noise at each mass is performed in sections 4.5.1 and 4.5.2. Since these are the first results obtained using the VRD technique, it is natural to start with a simple system such as a 2-DOF system. The efficiency of the VRD technique is compared with the RD technique by comparing modal parameters estimated from the VRD functions with modal parameters estimated from the RD functions. Modal parameters are extracted from the RD and VRD functions using the ITD technique, see chapter 2.

Three different quality measures are defined in order to compare the accuracy of the different approaches: $E_1$, $E_2$ and $E_3$. Here $\theta$ is the theoretical parameter (frequency, damping ratio or mode shape component) and $\hat\theta$ is the estimated parameter.
$$
E_1 = \sum_{i=1}^{2}\frac{|\theta_i-\hat\theta_i|}{\hat\theta_i} \tag{4.53}
$$

$$
E_2 = \sum_{i=1}^{2}\frac{\hat\sigma_i}{\theta_i} \tag{4.54}
$$

$$
E_3 = \sum_{i=1}^{2}\frac{|\theta_i-\hat\theta_i|}{\theta_i} \tag{4.55}
$$
10 % independent Gaussian white noise is added to each response, i.e. the standard deviation of the added noise is 10 % of the standard deviation of the noise-free response. The purpose is to model a real-life measurement situation.
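A minimal sketch of this noise model (x is a simulated response vector; the names are illustrative) could be:

```matlab
% Sketch: add 10 % independent Gaussian white measurement noise,
% i.e. noise with a standard deviation equal to 10 % of the standard
% deviation of the noise-free response.
xn = x + 0.10 * std(x) * randn(size(x));
```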
4.5.1 Example 1
The modal parameters of the system are shown in table 4.1.

Mode      f [Hz]   ζ [%]   |Φ|₁    |Φ|₂    ∠Φ₁ [°]   ∠Φ₂ [°]
Mode 1    1.29     3.39    1.00    0.68    0.00      1
Mode 2    2.10     1.09    1.00    1.48    0.00      179

Table 4.1: Modal parameters of the 2-DOF system for example 1.
The quality measures in eqs. (4.53) - (4.55) are calculated on the basis of 200 independent simulations of the response of the system, by estimating VRD and RD functions from each simulated response and extracting the modal parameters using ITD. The final quality measures are the mean values of the measures obtained from the individual simulations. 500 points are generated in each response time series and the sampling frequency is 10 Hz. The system is simple, so no more than 500 points are necessary in order to estimate reasonable modal parameters.

Figure 4.8 shows typical RD functions using triggering at a single measurement. Positive point triggering has been used with $a = 0.5\,\sigma_X$ and $b = \infty$. These triggering levels are used for all RD and VRD functions. The (*) and (o) markers on the cross RD functions indicate optimum time delays for vector triggering if the triggering levels are selected to be positive for (*) and negative for (o).
Figure 4.8: RD functions using measurement 1 (left) and measurement 2 (right) for triggering. The (*) and (o) markers on the cross RD functions designate optimum time delays for VRD estimation. The RD functions have been normalized column-wise so that the auto RD functions are correlation coefficient functions.
Figure 4.9 shows typical VRD functions of the simulated response based on $\Delta t_1 = 0$, $\Delta t_2 = (*)$. The VRD functions are not symmetric around $\tau = 0$, which illustrates the difference between auto RD functions and VRD functions.
Figure 4.9: VRD functions estimated using the (*) time delay from fig. 4.8.
The quality measures calculated from the 200 independent simulations are shown in fig. 4.10. Five different bars are plotted in each sub-figure. Bars 1 and 2 correspond to results where the modal parameters have been extracted from RD functions using triggering only at the response of the first and second mass, respectively (i.e. results based on the first column and the second column of fig. 4.8 only). Bar 3 corresponds to results where all four RD functions have been used in the modal parameter extraction procedure. Bars 4 and 5 correspond to results from VRD functions using the time shifts $\Delta t_1 = 0$, $\Delta t_2 = (*)$ and $\Delta t_1 = 0$, $\Delta t_2 = (o)$, see fig. 4.8.
Figure 4.10: Bias, variance and relative error for different RD functions (bars 1-3) and different VRD functions (bars 4-5).
The results indicate the importance of choosing time shifts with high positive correlation combined with positive triggering levels, or with high negative correlation combined with negative triggering levels. Bar 5 shows that a time shift with high negative correlation and positive triggering levels results in poor quality measures. The results for a correct choice of time shifts and triggering levels (bar 4) document the mathematical basis. On average the VRD technique can produce results with the same high accuracy as the RD technique according to the quality measures.
If the VRD technique is to be a genuine alternative to the traditional RD technique, the difficulty of choosing proper time shifts has to be paid off by either a higher accuracy or a lower computational time compared to the RD technique.
4.5.2 Example 2
The purpose of this example is to document that, under certain observable and realistic conditions, the VRD technique can be more accurate than the RD technique. The modal parameters of the system are shown in table 4.2.

Mode      f [Hz]   ζ [%]   |Φ|₁    |Φ|₂    ∠Φ₁ [°]   ∠Φ₂ [°]
Mode 1    1.56     2.19    1.00    0.08    0.00      2
Mode 2    4.22     1.83    1.00    12.0    0.00      174

Table 4.2: Modal parameters of the 2-DOF system for example 2.
The major difference from example 1 is that there is practically no information between the two masses (the cross mode shape component is relatively small). The results are based on 200 independent simulations of the response of the system with the modal parameters in table 4.2, loaded by independent Gaussian white noise. 2000 points are generated in each time series and the sampling frequency is 20 Hz. 10 % independent Gaussian white noise is added (the standard deviation of the noise is 10 % of the standard deviation of the noise-free process). Figure 4.11 shows typical RD functions.
Figure 4.11: RD functions using measurement 1 (left) and measurement 2 (right) for triggering. The RD functions have been normalized column-wise so that the auto RD functions are correlation coefficient functions.
Figure 4.12 shows the VRD functions. The time shifts are chosen as $\Delta t_1 = 0$, $\Delta t_2 = 0.05$ s, which are the time shifts with maximum correlation, see fig. 4.11.
Figure 4.12: VRD functions estimated using the time shift vector $\Delta t = [0\;\;0.05]$ s.
Figure 4.13 shows the quality measures calculated on the basis of the 200 independent simulations. Bars 1 and 2 correspond to RD functions using triggering only at the first and second mass, respectively. Bar 3 corresponds to results from all four RD functions and bar 4 corresponds to results from the VRD functions.
Figure 4.13: Bias, variance and relative error for different RD functions (bars 1-3) and the VRD functions (bar 4).
These results illustrate that the VRD technique is superior to the RD technique with respect to accuracy when only a single setup is used in the modal parameter extraction procedure. The VRD technique should, however, also be compared with the RD technique where all RD setups are used to extract the modal parameters.
Mode      1      2      3      4
f [Hz]    1.62   4.61   6.86   9.00
ζ [%]     3.70   2.07   1.16   1.52

Table 4.3: Modal parameters of the 4-DOF system.
The mode shapes of the system are illustrated by plotting their absolute values in fig. 4.14. The mode shapes are approximately in or out of phase.
Figure 4.14: The mode shapes (absolute values) of the 4-DOF system.
In order to describe the accuracy of the different methods statistically, the simulation and estimation process is repeated 500 times. The sampling frequency is 50 Hz and 8000 points are simulated in each time series. 10 % independent Gaussian white noise is added to each response. The quality measures in eqs. (4.53) - (4.55) are calculated on the basis of the 500 simulations.

Figure 4.15 shows typical RD functions estimated by approach 1. The RD functions are an estimate of a single column of the correlation matrix.
Figure 4.15: RD functions (auto D_{x1x1} and cross D_{x2x1}, D_{x3x1}, D_{x4x1}) estimated by approach 1.
From fig. 4.15 it is seen that the choice of VRD time shifts $\Delta t_i = 0,\ i = 1, \ldots, 4$ gives maximum correlation. Figure 4.16 shows typical VRD functions.
Figure 4.16: VRD functions estimated using the time shifts $\Delta t_i = 0,\ i = 1, \ldots, 4$.
The VRD functions are not symmetric around $\tau = 0$. Figure 4.17 shows the quality measures. The four bars correspond to the four different approaches in the order they were described previously.
Figure 4.17: Bias, variance and relative error for the different RD (bars 1-3) and VRD (bar 4) approaches.
The quality measures show that the VRD technique and the RD technique estimating the full correlation matrix using positive point triggering (approach 3) in general have higher quality than RD approaches 1 and 2. This result is expected, since approaches 1 and 2 can be interpreted as sub-approaches of the more general approach 3. The estimation times and the numbers of triggering points of the four different approaches are shown in table 4.4. Column 5 shows the necessary initial estimation time using level crossing triggering.
Approach    1      2      3      4      5
Time        1.39   1.40   5.56   1.05   0.31
N           3950   470    3950   1590   470

Table 4.4: Estimation time [CPU] and number of triggering points, N.
As seen, the VRD technique has the same estimation time as approaches 1 and 2, and it is about 4-5 times faster than approach 3. The results of this section are summarized in table 4.5 by giving the different approaches grades in the form of a number of + signs.
Approach    1    2    3      4
Quality     +    ++   ++++   +++
CPU Time    ++   ++   +      ++

Table 4.5: Evaluation of the different approaches.
The simulation study indicates that the VRD approach is efficient in the sense of having a low estimation time and a high accuracy. The VRD technique can replace the traditional RD technique. Only in the case of measurements with a high noise content is it recommended to use the traditional RD technique with all possible RD setups.
4.7 Summary
In this chapter a new concept has been introduced: Vector triggering Random Decrement. The VRD technique differs from the RD technique in the formulation of the triggering condition. In the formulation of the traditional RD technique the triggering condition only has to be fulfilled in a single process, whereas in the formulation of the VRD technique the triggering condition has to be fulfilled in several processes simultaneously. This makes it a vector triggering condition.
In section 4.1 the VRD functions are defined and the estimation process, which provides unbiased estimates, is described. The relation between the VRD functions and the correlation functions of a stationary zero mean Gaussian distributed vector process is derived in section 4.2 for the first time. The VRD functions are a weighted sum of the correlation functions of the vector process. The number of correlation functions is equal to the size of the vector triggering condition. In section 4.3 an approximate relation for the variance of the estimate of the VRD functions is derived, corresponding to the relations for the estimate of the RD functions. Section 4.4 discusses quality assessment of VRD functions. It is shown that the VRD functions are not shape invariant and that the symmetry relations for correlation functions of stationary processes can only be used in a very special situation. Instead it is suggested to base the quality assessment on the difference between VRD functions estimated with triggering conditions of opposite sign.
Sections 4.5 and 4.6 describe different simulation studies, which document the applicability of the VRD technique. Section 4.5 illustrates that the VRD technique is superior in accuracy to the RD technique using a single set of RD functions if a physical system with low correlation between the measurements is analysed. In section 4.6 it is illustrated that the accuracy or the speed of the VRD technique can be superior to the RD technique. In conclusion, the VRD technique can be used with a lower estimation time and as high an accuracy as the RD technique, unless measurements with a high noise content are considered. The VRD technique is an attractive alternative to the RD technique.

A final comparison between the RD and the VRD techniques is performed in chapter 7, where a laboratory bridge model loaded by Gaussian white noise through a shaker is analysed.
Bibliography
[1] Ibrahim, S.R., Asmussen, J.C. & Brincker, R. Theory of Vector Triggering Random
Decrement Technique. Proc. 15th International Modal Analysis Conference, Orlando,
Florida, USA, 1997, Vol. I, pp. 502-510.
[2] Ibrahim, S.R., Asmussen, J.C. & Brincker, R. Vector Triggering Random Decrement for High Identification Accuracy. Accepted for publication in Journal of Vibration and Acoustics.
[3] Asmussen, J.C., Ibrahim, S.R. & Brincker, R. Application of the Vector Triggering
Random Decrement Technique. Proc. 15th International Modal Analysis Conference,
Orlando, Florida, USA, 1997, Vol. II, pp. 1165-1171.
Chapter 5
Variance of RD Functions
The purpose of this chapter is to investigate the variance of RD functions. Knowledge of the variance of RD functions could be used in the modal parameter extraction procedure, or to indicate the part of the RD functions which is not too uncertain.

The relations for the variance of the estimate of the RD functions given in chapter 3 are considered. The decisive assumption behind these relations is that the different time segments in the averaging process are uncorrelated. This assumption can be violated to such a degree that the relations in chapter 3 become unusable. Instead, a new approach for obtaining more accurate estimates of the variance of RD functions is suggested. The new approach takes the correlation between the different time segments into account by considering the relative time distribution of the triggering points.
$$
\hat D_{YX}(\tau) = \frac{1}{N}\sum_{i=1}^{N} y(t_i+\tau)\,\Big|\,T^{GA}_{x(t_i)}
= \frac{1}{N}\sum_{i=1}^{N} y(t_i+\tau)\,\Big|\,x(t_i)=x_i,\ \dot x(t_i)=\dot x_i
= \frac{1}{N}\sum_{i=1}^{N} y(t_i+\tau)\,\Big|\,T_{x_i} \tag{5.1}
$$
i=1
where the applied general triggering condition is used to preserve generality, since this
condition contains any particular condition. When the RD functions have been estimated
the applied general triggering condition can be replaced by N alternative triggering condi-
tions, which are formulated from the observable values of x and x_ at the already detected
time points ti . If these triggering conditions are applied to the measurements exactly the
same RD functions would be estimated, since the same triggering points are detected.
Although this is impossible in practice, since the values of x(ti ) and x_ (ti ) and the corre-
sponding time points, ti , are unknown in advance, the conditions will be used as a basis
for the model presented in the following. The idea is that the information of x(ti ), x_ (ti )
at the triggering time points ti can be obtained from the estimation procedure of the RD
functions and used in an estimation method for the variance of the RD functions.
The variance of the estimated RD functions can be calculated as

$$
\mathrm{Var}[\hat D_{YX}(\tau)] = \frac{1}{N^2}\,\mathrm{Var}\Big[\sum_{i=1}^{N} y(t_i+\tau)\,\big|\,T^{GA}_{x(t_i)}\Big]
= \frac{1}{N^2}\sum_{i=1}^{N}\sum_{j=1}^{N}\mathrm{Cov}\Big[y(t_i+\tau)\,\big|\,T^{GA}_{x(t_i)}\,,\; y(t_j+\tau)\,\big|\,T^{GA}_{x(t_j)}\Big] \tag{5.2}
$$
If all cross terms in eq. (5.2) are neglected, the relations for the variance of the estimated RD functions described in chapter 3 are obtained. The variance of the RD functions can equally well be calculated on the basis of the alternative triggering conditions introduced in the second and third parts of eq. (5.1).
$$
\mathrm{Var}[\hat D_{YX}(\tau)] = \frac{1}{N^2}\,\mathrm{Var}\Big[\sum_{i=1}^{N} y(t_i+\tau)\,\big|\,T_{x_i}\Big]
= \frac{1}{N^2}\sum_{i=1}^{N}\sum_{j=1}^{N}\mathrm{Cov}\Big[y(t_i+\tau)\,\big|\,T_{x_i}\,,\; y(t_j+\tau)\,\big|\,T_{x_j}\Big] \tag{5.3}
$$
By keeping track of the actual time points, $t_i,\ i = 1, 2, \ldots, N$, in the estimation of the RD functions, the variance of the RD functions can be rewritten using this information without loss of generality
$$
\mathrm{Var}[\hat D_{YX}(\tau)] = \frac{1}{N^2}\Bigg(
\sum_{i=1}^{N}\mathrm{Cov}\big[y(t_i+\tau)\,|\,T_{x_i}\,,\; y(t_i+\tau)\,|\,T_{x_i}\big]
+ \sum_{j=1}^{m}\sum_{i=1}^{N_j}\mathrm{Cov}\big[y(t_i+\tau)\,|\,T_{x_i}\,,\; y(t_i+j\Delta T+\tau)\,|\,T_{x_{i+j}}\big]
+ \sum_{j=1}^{m}\sum_{i=1}^{N_j}\mathrm{Cov}\big[y(t_i+j\Delta T+\tau)\,|\,T_{x_{i+j}}\,,\; y(t_i+\tau)\,|\,T_{x_i}\big]
\Bigg) \tag{5.4}
$$
where $m$ is the maximum number of time lags between any two triggering points. In eq. (5.4) some of the $N_j$ can be zero. The general requirement for the number of covariance terms at each time lag is

$$
N + 2N_1 + 2N_2 + \ldots + 2N_m = N^2 \tag{5.5}
$$
Since the covariance of the conditional processes is independent of the particular initial conditions, $T_{x_i}$, which will be shown later, eq. (5.4) can be rewritten as

$$
\mathrm{Var}[\hat D_{YX}(\tau)] = \frac{1}{N^2}\Bigg(
\sum_{i=1}^{N}\mathrm{Cov}\big[Y(t+\tau)\,|\,T^{GT}_{X(t)}\,,\; Y(t+\tau)\,|\,T^{GT}_{X(t)}\big]
+ \sum_{j=1}^{m}N_j\,\mathrm{Cov}\big[Y(t+\tau)\,|\,T^{GT}_{X(t)}\,,\; Y(t+j\Delta T+\tau)\,|\,T^{GT}_{X(t+j\Delta T)}\big]
+ \sum_{j=1}^{m}N_j\,\mathrm{Cov}\big[Y(t+j\Delta T+\tau)\,|\,T^{GT}_{X(t+j\Delta T)}\,,\; Y(t+\tau)\,|\,T^{GT}_{X(t)}\big]
\Bigg) \tag{5.6}
$$
where $T^{GT}_{X(t)}$ is of the same form as the theoretical general triggering condition

$$
T^{GT}_{X(t)} = \{X(t) = a,\ \dot X(t) = b\} \tag{5.7}
$$
The major problem is to calculate the general covariance between $Y(t+\tau)\,|\,T^{GT}_{X(t)}$ and $Y(t+j\Delta T+\tau)\,|\,T^{GT}_{X(t+j\Delta T)}$. Consider the following two Gaussian distributed stochastic vectors, where $t_1$ denotes the separation between the two triggering points

$$
\mathbf X_1 = [\,Y(t+\tau)\ \ \ Y(t+t_1+\tau)\ \ \ \dot Y(t+\tau)\ \ \ \dot Y(t+t_1+\tau)\,]^T \tag{5.8}
$$

$$
\mathbf X_2 = [\,X(t)\ \ \ X(t+t_1)\ \ \ \dot X(t)\ \ \ \dot X(t+t_1)\,]^T \tag{5.9}
$$
The covariance of $\mathbf X_1$ on condition of $\mathbf X_2$ is calculated using eq. (A.4)

$$
\mathrm{Cov}[\mathbf X_1\,|\,\mathbf X_2] = \mathbf R_{X_1X_1} - \mathbf R_{X_1X_2}\mathbf R^{-1}_{X_2X_2}\mathbf R^{T}_{X_1X_2} \tag{5.10}
$$
Using the definition of the correlation functions in eq. (2.55), the correlation matrices in eq. (5.10) can be calculated from eqs. (5.11) - (5.13) ($X$ and $Y$ are assumed to have zero mean value).

$$
\mathbf R_{X_1X_1} =
\begin{bmatrix}
R_{YY}(0) & R_{YY}(-t_1) & -R'_{YY}(0) & -R'_{YY}(t_1) \\
R_{YY}(t_1) & R_{YY}(0) & -R'_{YY}(t_1) & -R'_{YY}(0) \\
-R'_{YY}(0) & -R'_{YY}(t_1) & -R''_{YY}(0) & -R''_{YY}(t_1) \\
-R'_{YY}(t_1) & -R'_{YY}(0) & -R''_{YY}(t_1) & -R''_{YY}(0)
\end{bmatrix} \tag{5.11}
$$

$$
\mathbf R_{X_2X_2} =
\begin{bmatrix}
R_{XX}(0) & R_{XX}(-t_1) & -R'_{XX}(0) & -R'_{XX}(t_1) \\
R_{XX}(t_1) & R_{XX}(0) & -R'_{XX}(t_1) & -R'_{XX}(0) \\
-R'_{XX}(0) & -R'_{XX}(t_1) & -R''_{XX}(0) & -R''_{XX}(t_1) \\
-R'_{XX}(t_1) & -R'_{XX}(0) & -R''_{XX}(t_1) & -R''_{XX}(0)
\end{bmatrix} \tag{5.12}
$$

$$
\mathbf R_{X_1X_2} =
\begin{bmatrix}
R_{YX}(\tau) & R_{YX}(\tau-t_1) & -R'_{YX}(\tau) & -R'_{YX}(\tau-t_1) \\
R_{YX}(\tau+t_1) & R_{YX}(\tau) & -R'_{YX}(\tau+t_1) & -R'_{YX}(\tau) \\
R'_{YX}(\tau) & R'_{YX}(\tau-t_1) & -R''_{YX}(\tau) & -R''_{YX}(\tau-t_1) \\
R'_{YX}(\tau+t_1) & R'_{YX}(\tau) & -R''_{YX}(\tau+t_1) & -R''_{YX}(\tau)
\end{bmatrix} \tag{5.13}
$$
The covariance between $Y(t+\tau)\,|\,T^{GT}_{X(t)}$ and $Y(t+j\Delta T+\tau)\,|\,T^{GT}_{X(t+j\Delta T)}$ can now be calculated by inserting the results of eqs. (5.11), (5.12) and (5.13), with $t_1 = j\Delta T$, into eq. (5.10). The covariance is taken as the element [1,2] of the $4 \times 4$ dimensional covariance matrix $\mathrm{Cov}[\mathbf X_1\,|\,\mathbf X_2]$.
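A minimal sketch of this step, assuming the matrices of eqs. (5.11) - (5.13) have already been assembled from the (scaled) RD functions and their numerical derivatives (the names R11, R22, R12 are illustrative):

```matlab
% Sketch: conditional covariance of eq. (5.10) and the covariance
% between the two conditioned responses separated by t1.
% R11 = R_X1X1 (eq. (5.11)), R22 = R_X2X2 (eq. (5.12)),
% R12 = R_X1X2 (eq. (5.13)), all evaluated at the current time lag tau.
C   = R11 - R12 * (R22 \ R12');   % Cov[X1|X2], cf. eq. (5.10)
c12 = C(1,2);                     % Cov[Y(t+tau)|T, Y(t+t1+tau)|T]
```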
It is important to note that the only information which needs to be available is $R_{YX}(\tau)$, $R'_{YX}(\tau)$ and $R''_{YX}(\tau)$. Since the estimated RD functions are proportional to the correlation functions, this information can be obtained by scaling the RD functions and then calculating the first and second time derivatives of the correlation functions using numerical differentiation. This is considered a simple and computationally fast requirement. The disadvantage is that numerical differentiation of the correlation functions demands that the measurements are oversampled. Otherwise the terms $R'_{YX}(\tau)$ and $R''_{YX}(\tau)$ should be obtained by differentiating the measurements and then estimating the corresponding correlation functions using the RD technique. This might be the best solution if the system is not sufficiently oversampled for numerical differentiation of the RD functions.
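A minimal sketch of the numerical differentiation (R is the RD function scaled to the correlation function, sampled at the interval dT; both names illustrative):

```matlab
% Sketch: first and second time derivatives of the (scaled) RD function
% by central differences. Requires the measurements to be oversampled.
Rp  = gradient(R, dT);     % approximation of R'_YX(tau)
Rpp = gradient(Rp, dT);    % approximation of R''_YX(tau)
```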
What now remains is to make the numbers of the different correlation terms, $N_1, N_2, \ldots, N_m$, available. Instead of making theoretical considerations about the distribution of the triggering points, it is decided to use the sample distribution. This means that the weighting numbers $N_1, N_2, \ldots, N_m$ are obtained by recording the time of each triggering point during the estimation of the RD functions. By sorting the time differences between the triggering points, the weighting numbers are obtained.
The estimate of the variance of the RD functions involves the following computational steps:

- Sampling the time of each triggering point in the estimation of the RD functions.
- Sorting the time differences between the triggering points.
- Numerical differentiation (first and second derivatives) of the RD functions (scaled to be the correlation functions).
- Calculating the variance estimate according to eq. (5.6).
None of these computational steps is extremely time consuming. The sampling of the time points is free, since these time points are identified anyway in the estimation process of the RD functions. In the following sections the accuracy of the method for estimating the variance of RD functions is investigated through different simulation studies.
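As a minimal sketch, the weighting numbers $N_j$ and the sum in eq. (5.6) could be assembled as follows. The vector t of triggering times (in samples) and the helper cov_fun(j), which is assumed to return the conditional covariance of eq. (5.10) for a separation of j·ΔT at the current time lag, are illustrative and not part of the original implementation:

```matlab
% Sketch: sample distribution of triggering point separations and the
% variance estimate of eq. (5.6). t holds the N triggering time points
% (in samples), m is the maximum separation considered. Because the
% conditional covariance matrix is symmetric, the two off-diagonal
% sums of eq. (5.6) are collected in a single 2*Nj term.
N  = length(t);
Nj = zeros(1, m);
for i = 1:N
  for k = i+1:N
    j = abs(t(k) - t(i));          % separation in samples
    if j >= 1 & j <= m
      Nj(j) = Nj(j) + 1;           % count pairs separated by j*dT
    end
  end
end
V = N * cov_fun(0);                % diagonal terms of eq. (5.6)
for j = 1:m
  V = V + 2 * Nj(j) * cov_fun(j);  % correlated off-diagonal terms
end
V = V / N^2;                       % Var[D_YX(tau)] at this time lag
```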
Figure 5.1: Average distribution of triggering points using level crossing triggering obtained
from simulations and the theoretical auto correlation function of the system.
The figure illustrates that it is not correct to assume that the time segments in the averaging process are uncorrelated, since many triggering points lie within the correlation length, $\tau_{max}$, of the system. The correlation length is here defined by $|R_{XX}(\tau_{max})| \le \varepsilon\,R_{XX}(0)$, where $\varepsilon$ is a small number, say e.g. 0.1. The true variance of the RD functions is calculated on the basis of the 30000 independently estimated RD functions. Furthermore, the variance of the RD function estimated using the new method is calculated. The true distribution from fig. 5.1 is used together with the theoretical correlation functions. This means that the variance predicted by the method is as accurate as possible, since the true distributions and not the sample distributions are used. This procedure is used in order to check the validity of the method.
Figure 5.2: The simulated and the predicted variance of the RD functions using level crossing triggering, $a = \sqrt{2}\,\sigma_X$, and the auto correlation function of the system. [---------- ]: Theoretical (simulated) variance of the RD function. [  ]: Predicted variance of the RD function using the new method.
As seen, the method predicts the variance of the estimated RD functions extremely well. The next step is to investigate how well the variance of the RD functions is predicted by the method if a sample distribution of the triggering points and the estimated correlation functions from a single realization of the response are used.

Figure 5.3 shows the simulated variance of the RD functions together with the variance predicted by the new method and the variance predicted by the relation in eq. (3.24).
Figure 5.3: Simulated and predicted variance of the RD functions using level crossing triggering, $a = \sqrt{2}\,\sigma_X$. [---------- ]: Theoretical (simulated) variance of the RD function. [  ]: Predicted variance of the RD function using the new method. [- - - -]: Predicted variance using eq. (3.24).
The situation where only a single realization of the response is available corresponds to the real-life situation. It is therefore very important that the method predicts the true variance as well as shown in fig. 5.3 from a single realization. The estimated RD function, the theoretical RD function and the distribution of the triggering points for the single realization are shown in fig. 5.4.
Figure 5.4: The theoretical and the estimated RD function using level crossing triggering, $a = \sqrt{2}\,\sigma_X$, and the sample distribution of the triggering points. [---------- ]: Theoretical RD function. [  ]: Estimated RD function.
Even though the accuracy of the estimated RD function is not excellent, and the distribution of the triggering points differs significantly from the true distribution shown in fig. 5.1, the new method shows a promising result.
5.3 Example 2: Positive Point - SDOF
In order to further investigate the accuracy of the method for predicting the variance of the estimated RD functions, and thereby further document its validity, the positive point triggering condition is considered. The system is the same as in section 5.2. The distribution of the triggering points is estimated on the basis of 30000 simulations of the response, followed by identification and sorting of the triggering points for each response. Correspondingly, the variance of the RD function is obtained on the basis of the simulations. Two different sets of triggering levels are investigated. First $[a_1\ a_2] = [0\ \ \infty]$ is used, since this maximizes the number of triggering points and gives a high correlation between the triggering points.
Figure 5.5 shows the distribution of the triggering points and the theoretical auto corre-
lation function.
Figure 5.5: Average distribution of triggering points using positive point triggering, $[a_1\ a_2] = [0\ \ \infty]$, obtained by simulations, and the auto correlation function of the system.
The difference in the distribution of the triggering points compared with the result in fig. 5.1 is obvious. Figure 5.6 shows the simulated and the predicted variance obtained using the theoretical values.
Figure 5.6: The simulated and the predicted variance of the RD functions using positive point triggering, $[a_1\ a_2] = [0\ \ \infty]$, and the auto correlation function of the system. [---------- ]: Theoretical (simulated) variance of the RD function. [  ]: Predicted variance of the RD function using the new method.
The variance predicted using eq. (3.46) and the variance predicted by the new method are shown in fig. 5.7, where a single estimate of the RD function and a single-realization distribution of the triggering points have been used.
Figure 5.7: The simulated and the predicted variance of the RD functions using positive point triggering, $[a_1\ a_2] = [0\ \ \infty]$. [---------- ]: Theoretical (simulated) variance of the RD function. [  ]: Predicted variance of the RD function using the new method. [- - -]: Predicted variance using eq. (3.46).
The estimated RD function, the theoretical RD function and the distribution of the triggering points for the single realization are shown in fig. 5.8.
Figure 5.8: The theoretical and the estimated RD function using positive point triggering, $[a_1\ a_2] = [0\ \ \infty]$, and the sample distribution of the triggering points. [---------- ]: Theoretical RD function. [  ]: Estimated RD function.
The results of the simulations show that the method can also be used for the positive point triggering condition. Usually the triggering levels for this condition are not chosen as $[a_1\ a_2] = [0\ \ \infty]$. As shown in chapter 3, $[a_1\ a_2] = [\sigma_X\ \ \infty]$ can increase the accuracy of the RD functions and decrease the computational time. In the following, the system described above is investigated again using the positive point triggering condition with $[a_1\ a_2] = [\sigma_X\ \ \infty]$. Figure 5.9 shows the distribution of the triggering points obtained by simulations together with the auto correlation function.
Figure 5.9: Average distribution of triggering points obtained by simulation using positive point triggering, $[a_1\ a_2] = [\sigma_X\ \ \infty]$, and the auto correlation function of the system.
Figure 5.10 shows the simulated and predicted variance obtained using the theoretical
values.
Figure 5.10: The simulated and the predicted variance of the RD functions using positive point triggering, $[a_1\ a_2] = [\sigma_X\ \ \infty]$, and the auto correlation function of the system. [---------- ]: Theoretical (simulated) variance of the RD function. [  ]: Predicted variance of the RD function using the new method.
The variance predicted using eq. (3.46) and the variance predicted by the new method are shown in fig. 5.11, where the RD function and the distribution of triggering points from a single realization have been used.
Figure 5.11: The simulated and the predicted variance of the RD functions using positive point triggering, $[a_1\ a_2] = [\sigma_X\ \ \infty]$. [---------- ]: Theoretical (simulated) variance of the RD function. [  ]: Predicted variance of the RD function using the new method. [- - - -]: Predicted variance using the relation from chapter 3.
The estimated RD function, the theoretical RD function and the distribution of the triggering points for the single realization are shown in fig. 5.12.
Figure 5.12: The theoretical and the estimated RD function using positive point triggering, $[a_1\ a_2] = [\sigma_X\ \ \infty]$, and the sample distribution of the triggering points. [---------- ]: Theoretical RD function. [  ]: Estimated RD function.
5.4 Example 3: Positive Point - 2DOF
This section investigates how the model works for a 2-DOF system loaded by uncorrelated white noise at each mass. It is a natural continuation of the work performed with the SDOF system. The modal parameters are given in table 5.1.
Mode      f [Hz]   ζ [%]   |Φ|₁     |Φ|₂     ∠Φ₁ [°]   ∠Φ₂ [°]
Mode 1    3.74     4.10    1.000    1.005    0.00      177.7
Mode 2    6.27     4.50    1.000    0.995    0.00      1.3

Table 5.1: Modal parameters of the 2-DOF system.
The theoretical correlation (scaled RD) functions of the 2-DOF system are shown in fig. 5.13.
Figure 5.13: Theoretical correlation (scaled RD) functions of 2DOF system.
The investigations are based on 50000 simulations of the response of the system loaded by white noise, followed by estimation of the RD functions using the positive point triggering condition with the triggering levels $[a_1\ a_2] = [\sigma_X\ \ \infty]$. Figure 5.14 shows the simulated distribution of the triggering points and a single realization of the distribution of the triggering points. This realization is used later to predict the variance of the RD functions from a single set of measurements only.
Figure 5.14: Simulated distribution of triggering points and distribution of triggering points
from a single realization.
Figure 5.14 illustrates that the distribution of the triggering points is well described by a
single realization of the measurements.
The investigation of this 2-DOF system shows that the new method is superior to the predictions by eq. (3.46). Whether or not the new method is accurate enough to pay off the extra computational time is an open question. The accuracy of the method is highest around the zero time lag and especially for higher time lags.
Mode      1      2      3      4      5
f [Hz]    2.18   4.18   5.81   6.69   7.73
ζ [%]     3.46   3.57   3.72   3.80   4.70

Table 5.2: Modal parameters of the 5-DOF system.
Figure 5.17: The auto correlation function of the response of the rst mass.
It is a very time-consuming process to investigate the performance of the method for all 25 RD functions. Instead, only the response of a single mass is considered. Figure 5.18 shows the distribution of triggering points obtained from simulation and from a single realization.
Figure 5.19 shows the variance calculated by simulation, by eq. (3.46), by the new method using the theoretical RD function with the simulated distribution of triggering points, and by the new method using the RD function and distribution of triggering points obtained from a single realization.
Again it is concluded that the method predicts the variance well, especially around the zero time lag and for time lags where the variance has become constant.
5.6 Summary
An approach to estimating the variance of RD functions has been suggested. The method takes the correlation between the time segments into account by using the sampled time points of the triggering points. The method has been tested by simulation of different systems. It seems to predict the variance well at the zero time lag and for time lags where the variance has converged. It is superior to the method for predicting the variance which is based on the assumption of uncorrelated time segments in the averaging process. It is an open question whether this increase in accuracy can pay off the increased computational time. Further investigations in order to understand the approach are recommended.
Chapter 6
Bias Problems and Implementation
The purpose of this chapter is to discuss the practical problems which arise in applications and implementations of the RD technique. Most of these problems are due to the sampling of continuous-time processes into discrete-time processes. The aim is to point out these problems and to describe when and how to be attentive to them.

Section 6.1 describes different bias problems which arise in application of the RD technique. The bias problems are discussed and illustrated. Special attention is given to their importance for both the estimated correlation functions and the modal parameters extracted from these correlation functions. The solutions to the bias problems are explained, and it is described when to be attentive to these problems.

Section 6.2 illustrates the different implementations of the RD technique which have been programmed and used during this work. The RD functions have been programmed in HIGH-C, see the reference manual [1], and linked to MATLAB, see the user guide [2], using MATLAB's external interface facilities, see MATLAB [3].
Figure 6.1: Illustration of the eect of sampling a continuous process at equidistant time
points.
The triggering level $a$ is indicated with the horizontal line. As seen, the continuous-time process fulfils the condition $X(t) = a$ at two time points, but the sampled process never fulfils this condition. In order to detect the triggering points it is therefore necessary to use a crossing condition. An implementation of a level crossing triggering condition in MATLAB could be like the one sketched below.
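A minimal sketch, with the sampled process stored in y and the triggering level in a (the names follow the surrounding text and are illustrative), could be:

```matlab
% Sketch: detect upward and downward level crossings of the sampled
% process y at the level a. k holds the left-hand points y(k) of the
% crossings; the corresponding right-hand points are y(k+1).
k = find( (y(1:end-1) < a & y(2:end) >= a) | ...
          (y(1:end-1) > a & y(2:end) <= a) );
```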
The discrete-time process fulfils this condition twice, corresponding to the continuous-time process. The problem is: which time point should be used as the centre of the time segment picked out and used in the averaging process? Three possibilities exist. The left-hand point ($y(k)$) could be used, the right-hand point ($y(k+1)$) could be used, or both points could be used, corresponding to using the average of the two time segments. In Brincker et al. [1] the latter approach is denoted a symmetric window. The left-hand (l) and right-hand (r) points are shown in fig. 6.1.
In order to illustrate the different possibilities, an SDOF system loaded by Gaussian white noise is considered. The eigenfrequency is $f = 1$ Hz and the damping ratio is $\zeta = 1$ %. The response of this system is simulated with a sampling interval of $\Delta T = \frac{1}{5f}$ at 8000 time points. Figure 6.2 shows the estimated RD functions (level crossing) using the left-hand point, the right-hand point and both points as triggering points.
Figure 6.2: Illustration of bias problems. RD functions of an SDOF system with $\Delta T = \frac{1}{5f}$. [---------- ]: Theoretical RD function. [  ]: Estimated RD function.
The bias introduced by the sampling of the continuous process is illustrated in fig. 6.2. If the left-hand point is used as triggering point, the estimate of the RD function is shifted to the right, and if the right-hand point is used, the estimate of the RD function is shifted to the left. If both the left-hand and the right-hand points are used as triggering points, the estimated RD function is not shifted, only scaled.
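A minimal sketch of the symmetric window (k is the vector of left-hand crossing indices found above, L the segment half-length in samples; both names illustrative, and crossings too close to the ends of the record are assumed to have been excluded):

```matlab
% Sketch: averaging with a symmetric window. For every crossing both
% the segment centred at the left-hand point k(i) and the segment
% centred at the right-hand point k(i)+1 enter the average, which
% removes the time shift of the estimated RD function.
% y is assumed to be a row vector.
D = zeros(1, 2*L+1);
for i = 1:length(k)
  D = D + y(k(i)-L   : k(i)+L) ...
        + y(k(i)+1-L : k(i)+1+L);
end
D = D / (2*length(k));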
If the aim is to estimate modal parameters, the above bias problems can almost always be neglected. It is assumed that methods such as ITD and PTD are used to extract the modal parameters from the RD functions. These methods are only capable of extracting correct modal parameters from either the positive time lags or the negative time lags of the correlation functions. Assume that only the positive time lags of the estimated RD functions in fig. 6.2 are used. The RD functions estimated using the right-hand point, or using both the right-hand and the left-hand points, will result in correct modal parameters. But the RD functions estimated using the left-hand point will result in erroneous modal parameters, unless a number of points corresponding to the time shift is omitted. The reason is that the part of the RD function from zero to the time shift corresponds to the part of the true RD function from minus the time shift to zero. The discontinuity at time lag zero cannot be described by the methods for extracting modal parameters from free decays introduced in chapter 3.
The ratio between the natural eigenfrequency and the sampling frequency influences the bias problem. The system used above is considered again. The response of the system is now sampled with a smaller sampling interval, $\Delta T = \frac{1}{15f}$, again at 8000 time points. Figure 6.3 shows the estimated RD functions using the left-hand point, the right-hand point, and both the left-hand and right-hand points as triggering points. The triggering level is $a = \sqrt{2}\,\sigma_X$.
Figure 6.3: Illustration of bias problems. RD functions of an SDOF system with $\Delta T = \frac{1}{15f}$. [---------- ]: Theoretical RD function. [  ]: Estimated RD function.
As seen, the bias problem cannot be neglected, but it has decreased with the increased sampling frequency. This means that for highly oversampled systems the bias problem can be neglected.

The above bias problems have been discussed in detail by Brincker et al. [5], [6] and [7]. If the level crossing triggering condition or the zero crossing with positive slope triggering condition is used, the above problems should be considered. If the positive point or the local extremum triggering condition is used, the above problems do not exist. Only if the triggering levels $a_1$ and $a_2$ are formulated so that $a_1 \approx a_2$ can the above problems arise in application of these conditions.
Figure 6.4: Illustration of bias due to false triggering point sorting using the level crossing
triggering condition. Every time a triggering point is detected a jump of 100 points is
performed before the search for a new triggering point continues. [---------- ]: Theoretical RD
function. [ ]: Estimated RD function.
As seen, this makes the RD functions highly biased. The bias consists of a time shift of the RD functions, an underestimation of the damping ratio for the positive time lags and an overestimation of the damping ratio for the negative time lags. The explanation is that the probability of an upcrossing after a time jump is much higher than the probability of a downcrossing. So a hidden condition, stating that the velocity of the process is positive, is introduced. If the average of the negative and positive time lags is used, this bias does not influence the modal parameters.
Consider another SDOF system loaded by Gaussian white noise. The natural eigenfre-
quency is f = 1 Hz and the damping ratio is ζ = 10%. The system is sampled with
ΔT = 1/(10f) at 10000 time points. The local extremum triggering condition is used to
estimate the RD function from the response. The damping ratio is chosen high in order
to ensure that the response contains both local minima and local maxima at positive
response levels. The RD functions are estimated using the local maxima as triggering
points only. The result is shown in figure 6.5. The top figure shows the non-normalized
estimated RD functions and the bottom figure shows the normalized RD functions together with
the theoretical RD function.
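The triggering point sorting used in this example can be sketched in a few lines of MATLAB. The variable names x (response vector), a (triggering level) and n (number of points on each side of the triggering point) are assumed to be given; the sketch only shows how the local maxima are selected and averaged.

x = x(:);                                                          % response as a column vector
imax = find(x(2:end-1) > x(1:end-2) & x(2:end-1) > x(3:end)) + 1;  % indices of local maxima
imax = imax(x(imax) > a);                                          % keep maxima above the level a
imax = imax(imax > n & imax <= length(x) - n);                     % room for +/- n time lags
D = zeros(2*n+1, 1);
for k = 1:length(imax)
    D = D + x(imax(k)-n : imax(k)+n);                              % accumulate the time segments
end
D = D / length(imax);                                              % biased estimate (maxima only)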
[Figure 6.5 shows two panels, the non-normalized RD function and the normalized RD function, with the time lag τ from −3 to 3 on the horizontal axis.]
Figure 6.5: Illustration of bias due to false triggering point sorting using the local extremum triggering condition. Only local maxima are used in the averaging process.
[---------- ]: Theoretical RD function. [          ]: Estimated RD function.
As seen, the difference is not only a simple scaling factor; extensive bias is also present. Even the
frequency of the RD function is changed. The explanation is that since only local maxima
are used, a hidden condition is imposed on the acceleration of the process (Ẍ(t) ≤ 0). The
acceleration of the process is not independent of the process itself and thereby extensive
bias is introduced. This bias cannot be removed. The example illustrates that sorting
or selection of triggering points is very difficult. It is recommended to avoid selection of
triggering points and instead choose another triggering condition if the estimation time is
too high.
6.1.3 Bias due to High Damping
The last-mentioned bias problem is due to high damping. High levels of damping seldom
occur in mechanical systems or civil engineering structures, so this phenomenon is only
illustrated and not discussed in detail. In section 6.1.1 it is illustrated how the discretization
of a continuous-time process introduces bias into the RD functions. The solution to
these bias problems is to use a symmetric window, which means that the average of the
left-hand and right-hand triggering point is used. If the system is heavily damped it is
not possible to remove the bias by using the average of the two time segments. The reason is
that a typical upcrossing and a typical downcrossing are not symmetric. A downcrossing
is not a mirror image of an upcrossing. This introduces bias.
Consider an SDOF system loaded by Gaussian white noise. The natural eigenfrequency
is f = 1 Hz and the damping ratio is ζ = 20%. The system is sampled with ΔT = 1/(3f) at
40000 time points. The RD function is calculated from the response using the level crossing
triggering condition. All three different choices of the triggering point described in
section 6.1.1 are used.
[Figure 6.6 shows three panels, Left, Right and Symmetric, with the time lag τ from −2 to 2 on the horizontal axis.]
Figure 6.6: Illustration of bias due to heavy damping using the level crossing triggering
condition. ΔT = 1/(3f). [---------- ]: Theoretical RD function. [          ]: Estimated RD function.
The RD functions become biased. The bias is reduced by choosing a higher sampling
frequency. This is illustrated in fig. 6.7, where the RD function is calculated from the
response sampled with ΔT = 1/(10f) at 40000 time points.
[Figure 6.7 shows three panels, Left, Right and Symmetric, with the time lag τ from −2 to 2 on the horizontal axis.]
Figure 6.7: Illustration of bias due to high damping using the level crossing triggering
condition. ΔT = 1/(10f). [---------- ]: Theoretical RD function. [          ]: Estimated RD function.
The figure illustrates that a high sampling rate removes the bias for highly damped systems.
6.2 Implementation of RD Functions
This section describes the implementation of the RD technique in MATLAB. The functions
described here have been used throughout this thesis for estimation of RD functions.
One of the disadvantages of MATLAB is that for-end loops are performed extremely
slowly compared to most other programming languages. The reason is that MATLAB is
a programming language constructed specifically for matrix operations. In a for-end loop
the purpose is to perform operations at index level. Loops with operations at index level
should not be programmed in MATLAB, since they are extremely slow, see the MATLAB
User's Guide [2]. For this reason all RD functions have been implemented in HIGH-C [1]
and linked to MATLAB using MATLAB's external interface facilities, see MATLAB
[3]. Using this approach the RD functions can be used as if they were programmed in
MATLAB. The HIGH-C language has been chosen since it does not impose any restriction
on the size of vectors and matrices. This is equivalent to MATLAB and is convenient for
time series analysis.
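As an illustration of the averaging process, and not of the HIGH-C implementation used in this thesis, a positive point triggering RD estimator written directly in MATLAB could look as follows. The function name and its interface are chosen here for illustration only; the loop runs over triggering points instead of over individual indices.

function [Ntrig, RD] = rd_matlab(X, n, no, a1, a2)
% X      : measurement matrix, one measurement per column
% n      : number of points on each side of the triggering point
% no     : number of the triggering measurement
% a1, a2 : lower and upper triggering levels
[rows, cols] = size(X);
idx = find(X(:, no) > a1 & X(:, no) < a2);   % candidate triggering points
idx = idx(idx > n & idx <= rows - n);        % keep room for +/- n time lags
Ntrig = length(idx);
RD = zeros(2*n + 1, cols);
for k = 1:Ntrig                              % loop over triggering points only
    RD = RD + X(idx(k)-n : idx(k)+n, :);     % accumulate segments for all measurements
end
RD = RD / Ntrig;                             % here the RD functions are normalized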
The input/output variables, which are common to the RD functions that use two triggering levels, are

Input    X       The measurement matrix.
         n       The number of points in the RD functions.
         no      The number of the triggering measurement.
         a1      The lower triggering level.
         a2      The upper triggering level.
Output   Ntrig   The number of triggering points.
         RD      The RD functions (not normalized with Ntrig).

For the RD functions based on triggering conditions without triggering levels, the input/output variables are

Input    X       The measurement matrix.
         n       The number of points in the RD functions.
         no      The number of the triggering measurement.
Output   Ntrig   The number of triggering points.
         RD      The RD functions (not normalized with Ntrig).
The different triggering conditions have been implemented so that the only difference in
the input to these functions is the triggering levels. The argument for not normalizing
the RD functions is that if the aim is to estimate modal parameters from a single set
of RD functions, it is not necessary to normalize the RD functions. This means less
computational time.
The difference between e.g. RD1apos and RD1ppos is that RD1apos estimates RD func-
tions with both positive and negative time lags, whereas RD1ppos only uses positive time
lags. The difference between RD1apos - RD4apos and RD1ppos - RD4ppos is the speed and
accuracy of the different implementations. The numbers refer to the following restrictions
of the data matrix X:

1  Double precision, up to 4294967295 points.
2  Single precision, up to 4294967295 points.
3  Double precision, up to 65535 points.
4  Single precision, up to 65535 points.
The fastest functions are RD4a??? and RD4p??? and the slowest functions are RD1a???
and RD1p???. For all of the above functions the uncompiled file name has the extension .c
and the compiled version of the functions has the extension .mex (MATLAB EXecutable), since
they can be called directly from the MATLAB environment. Common to all functions is
that online help is available by typing the function name in the MATLAB environment.
The help functions have the extension .m and are simple editable files. The online help utility
is simple to use. If the following command is given in the MATLAB environment the help
file will respond as shown below.
>>RD1apos
This finishes the description of the computational RD algorithms in HIGH-C. For the
VRD technique the computational algorithm has also been programmed in HIGH-C and
linked to MATLAB using MATLAB's external interface facilities. Only the positive
point triggering condition has been implemented, since this is the only triggering condition
which gives sufficient triggering points. Furthermore, only a single type, which corresponds
to type three of the RD functions, has been implemented. The functions are named
vectora and vectorp and are called as
>>[Ntrig VRD]=vectora(X,n,a1,a2,vec1,m,k);
For estimation of full covariance (correlation) matrices a MATLAB utility function,
fullcova.m, has been programmed. The online help for fullcova.m is
***************************************************************************
** FULLCOVA.M : Random Decrement utility functions **
* **
* Purpose Estimation of FULL COVariance matrices using RD **
* functions. Both positive and negative time lags are **
* used. Notice that the RD functions are scaled so **
* that the true covariance matrix is returned. **
* **
* Call : fullcova(X,Type1,Type2,n,a1,a2) **
* X: Data matrix with measurements. **
* Type1: Type of implementation of the RD technique. **
* 1: Double precision, 0-4294967295 points. **
* 2: Single precision, 0-4294967295 points. **
* 3: Double precision, 0-65535 points. **
* 4: Single precision, 0-65535 points. **
* Type2: Type of triggering condition used for esti- **
* mation of covariance functions. **
* 1: Level crossing triggering condition. **
* 2: Local extremum triggering condition. **
* 3: Every positive point triggering condition. **
* 4: Zero crossing with positive slope trig condition. **
* !!This triggering condition estimates the time deriva- **
* tive of the covariance functions. **
* n: Number of points in correlation functions. Must **
* be odd in order to take the zero time lag into **
* account. **
* a1: If Type2=1, a1 is the triggering level. **
* If Type2=2,3 a1 is the lower triggering bound **
* a2: If Type2=1, a2 is not used. **
* If Type2=2,3 a2 is the upper triggering bound. **
* **
* Return : Ntrig: Vector with number of triggering points. **
* : Cfull: Covariance matrix. C11=row1. C21=Row2. **
* Cn1=Rown. C12=rown+1. C22=Rown+2 ... **
* **
* Computed by : JCA 01/09-96 **
* Edited by : JCA 01/09-96 **
***************************************************************************
Although the above functions make it easy to estimate the correlation matrix of the
measurements, the triggering level and triggering condition should be chosen carefully as
described in chapter 3. Furthermore, the positive point triggering condition and the local
extremum triggering condition are implemented so that the triggering levels a1 and a2 should
not be chosen with a1 ≈ a2. Consider the function RD1apos. The computational code in
HIGH-C for the detection of triggering points and the averaging process looks like
for (i = N1; i < Row - N1; i++) {                             /* loop over candidate triggering points */
    if (*(X+(No-1)*Row+i) > *Al && *(X+(No-1)*Row+i) < *Au) { /* positive point condition */
        (*Po_trig)++;                                         /* count the triggering point */
        for (j = 0; j < Col; j++) {                           /* loop over all measurements */
            p = X + j*Row + i - N1;                           /* start of the time segment */
            for (k = 0; k < N; k++) {                         /* accumulate the segment into RD */
                *(RD + j*N + k) += *(p + k);
            }
        }
    }
}
where X is the measurement matrix with Row points and Col measurements, Al and Au are
the lower and upper triggering levels, respectively, and Po_trig is the number of triggering
points. If a1 → a2 a bias problem arises, since the discrete process might be sampled so
that X(k) < a1 and X(k+1) > a2. This will occur for high levels of the time derivative.
In order to take this situation into account, the condition in the above code should be
combined with a crossing condition. So, in conclusion, for the positive point triggering
condition and the local extremum triggering condition the triggering levels should not be
chosen with a1 ≈ a2.
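A sketch of such a combined condition is given below in MATLAB form. The variable names X, no, a1 and a2 are assumed as above; the sketch only shows the detection of triggering points, not the averaging, and it is not the condition implemented in the .mex functions.

x = X(:, no);                                  % triggering measurement
inband  = x(1:end-1) > a1 & x(1:end-1) < a2;   % sample inside the band [a1, a2]
crossup = x(1:end-1) <= a1 & x(2:end) >= a2;   % band jumped over in the upward direction
crossdn = x(1:end-1) >= a2 & x(2:end) <= a1;   % band jumped over in the downward direction
idx = find(inband | crossup | crossdn);        % candidate triggering points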
In order to use information from both positive and negative time lags, and to obtain
a quality measure of the RD functions, a MATLAB function which uses the symmetry
relations for correlation/covariance functions of stationary processes, see chapter 2, has
been programmed. The name of the function is avgcov.m, and it returns the averaged estimated
covariance functions and the error of the averaged covariance functions. avgcov.m can only
be used in combination with the output of fullcova.m. The online help for avgcov.m is
************************************************************************
** AVGCOV.M : Random Decrement utility function. **
* **
* Purpose : From a full covariance matrix with both positive **
* and negative time lags an averaged covariance ma- **
* trix with only positive time lags is returned. Fur- **
* thermore, an error matrix with the difference be- **
* tween positive and negative time lags is returned. **
* The averaging process is based on the validity of **
* the following eqs. for the covariance matrices for **
* stationary processes. **
* Cxy(t)=Cyx(-t) Cyx(t)=Cxy(-t) **
* Call avgcov(Cfull); **
* Cfull: Full covariance matrix with positive and **
* negative time lags (output from fullcova.m). **
* Return CfullP: Averaged covariance matrix. All negative **
* time lags averaged with corresponding positive time **
* lags after Cxy(t)=Cyx(-t) **
* Cerror: Error matrix with difference between posi- **
* tive and negative time lags, theoretically zero **
* matrix. **
* **
* Computed by : JCA 22/3-1996 **
* Edited by : JCA 22/3-1996 **
************************************************************************
The matrix CfullP can be used as input to ITD or PTD and the matrix Cerror can be
used for quality assessment of CfullP.
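The averaging performed by avgcov.m can be sketched for a single pair of RD functions as follows. It is assumed that Dxy and Dyx are column vectors with 2*n+1 points and the zero time lag at index n+1; this is a sketch of the symmetry relation Cxy(t)=Cyx(-t), not the actual avgcov.m code.

Dxy_pos = Dxy(n+1:end);              % Dxy at time lags 0 ... n
Dyx_neg = flipud(Dyx(1:n+1));        % Dyx at time lags 0 ... -n, reversed to positive lags
Davg    = (Dxy_pos + Dyx_neg) / 2;   % averaged estimate, positive time lags only
Derr    = (Dxy_pos - Dyx_neg) / 2;   % error function, theoretically a zero vector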
For the VRD technique a general MATLAB function has been programmed. The function
uses vectora.mex and vectorp.mex. The online help for this function is
****************************************************************************
** VECTRICA.M : Random Decrement utility function. **
* **
* Purpose : To calculate the Random Decrement functions using **
* vector positive point triggering. Assuming more mea- **
* surements than channels. The vector triggering con- **
* dition is assumed to be applied to the rst n mea- **
* surements. **
* **
* Call : VECTRICA(X,n,a1,a2,Type,Vec1,Vec2) **
* X: Data matrix with measurements. **
* n: Number of points in the RD functions. If positive **
* and negative time lags are used (Type=1) n is odd **
* a1: Lower triggering level vector (which is multi- **
* plied by the standard deviation of the corresponding **
* measurement). **
* a2: Upper triggering level vector (which is multi- **
* plied by the standard deviation of the corresponding **
* measurement). **
* Type: If Type=1, both positive and negative time de- **
* lays are used in the RD functions. If Type=2, only **
* positive time delays are used. **
* Vec1: Vector with time delays for the triggering **
* condition. **
* Vec2: Vector with sign of the triggering condition. **
* 1 or -1. **
* **
* Return : Ntrig: Number of triggering points. **
* VRD: Vector Random Decrement functions. **
* **
* Computed by : JCA 01/09-96 **
* Edited : JCA 01/09-96 **
****************************************************************************
The modal parameters of the system can be extracted from the output of either vectrica,
avgcov, fullcova, fullcovp or the basic RD and VRD functions programmed in HIGH-C.
For this purpose the ITD and the PTD algorithm have been programmed in MATLAB.
The online help for the functions are as follows.
*****************************************************************************
** ITD UHSM.M : Ibrahim Time Domain function. **
* **
* Purpose To estimate eigenfrequencies, damping ratios and mode **
* shapes from free decay measurements from an MDOF **
* structural system. **
* **
* Call ITD UHSM(X,Dt,N1,N2,N3,N4,N5,Vecmeas,Avg,Name) **
* X: Data matrix containing the free decay measure- **
* ments or the RD/VRD functions. **
* Dt: The sampling interval. **
* N1: The number of points omitted from the input ma- **
* trix in the estimation procedure. The first points **
* might be biased due to noise. **
* N2: Only every N2'th point is used in the identifica- **
* tion, which increases the sampling interval to Dt*N2. **
* N3: The number of physical modes to be identified. **
* N4: The matrix aspect ratio for linear regression. **
* N5: The number of time delays for calculation of MCFs **
* Vecmeas: Vector with information about the free de- **
* cays or RD/VRD functions. The vector informs about **
* the rows of the reference measurements. The reference **
* measurement should always be the top measurement. **
* Avg: Indicates if the mode shapes from different set- **
* ups should be averaged. Useful for e.g. RD func- **
* tions corresponding to a full covariance matrix. **
* This option can only be used if the setups have the **
* same number of free decays. **
* Avg=1 averages, other numbers discards the averaging. **
* Name: Name of the file to which the results are written. **
* **
* Return RES: Matrix with results. Matrix has 2*N4 rows. **
* Column 1: Estimated eigenfrequencies. **
* Column 2: Estimated damping ratios. **
* Column 3: Averaged MCF magnitude. **
* Column 4: Averaged MCF phase. **
* Column 5: RMS ABS MPF. **
* Column 6:6+N Complex mode shapes. **
* Column 6+N+1:6+2*N: MCF magnitude. **
* Column 6+2*N+1:6+3*N MCF phase. **
* Column 6+3*N+1:6+4*N Absolute Value of MPF **
* **
* Computed by JCA 1/7 1996 **
* Edited by JCA 1/7 1996 **
*****************************************************************************
***************************************************************************
** POLYREF.M : Polyreference Time Domain function. **
* **
* Purpose This is an implementation of the PTD technique. **
* The results are modal parameters and modal confiden- **
* ce factors for magnitude and phase. **
* **
* Call polyref(X,Dt,N1,N2,N3,N4,N5,Vecmeas,Avg,Name); **
* X: Data matrix with free decays or RD/VRD func- **
* tions. **
* Dt: The sampling interval. **
* N1: The number of points omitted from the input ma- **
* trix in the estimation procedure. The first couple **
* of points might be biased due to noise. **
* N2: Only every N2'th point is used in the identifica- **
* tion, increasing the sampling interval to N2*Dt. **
* N3: The number of physical modes to be identified. **
* N4: The number of time delays for MCFs. **
* N5: The number of setups (inputs). **
* N6: The number of measurement locations. **
* Name: Name of the file to which the results are written. **
* **
* Return RES: Matrix with results. Matrix has 2*N4 rows. **
* Column 1: Estimated eigenfrequencies. **
* Column 2: Estimated damping ratios. **
* Column 3: Averaged MCF magnitude. **
* Column 4: Averaged MCF phase. **
* Column 5: RMS ABS MPF. **
* Column 6:6+N Complex mode shapes. **
* Column 6+N+1:6+2*N: MCF magnitude. **
* Column 6+2*N+1:6+3*N MCF phase. **
* Column 6+3*N+1:6+4*N Absolute Value of MPF **
* **
* Computed by JCA 1/9 1996 **
* Edited by JCA 1/9 1996 **
***************************************************************************
6.2.3 Example
Consider the 2DOF system from chapter 3 again. The system has the following modal
parameters
         f [Hz]   ζ [%]   |φ|₁   |φ|₂   ∠φ₁ [deg]   ∠φ₂ [deg]
Mode 1    3.09     1.69    1.00   1.61      0           4.7
Mode 2    4.56     3.56    1.00   0.62      0         173.0
Table 6.1: Modal parameters of the 2DOF system.
The response of this system to Gaussian white noise is simulated and saved in the data
matrix X. 10000 points are simulated at a sampling frequency of 120 Hz. In order to
calculate the correlation matrix the following commands are given
>>Cfull=fullcova(X,3,3,501,1,80);
>>[Cavg Cerr]=avgcov(Cfull);
The number of points in the scaled RD functions in Cfull is 501. The positive point
triggering condition has been used and the triggering levels are chosen as [a1 a2] = [σX ∞]
(the values 1 and 80 in the call). In order to evaluate the estimate of the correlation functions,
the functions in Cavg are plotted together with Cerr.
[Figure 6.8 shows the four averaged correlation functions and the corresponding error functions, with the time lag τ from 0 to 2 s on the horizontal axes.]
Figure 6.8: [---------- ]: Average correlation functions. [ ]: Errors of the average of the
correlation functions.
The figure illustrates that the errors of the averaged estimates of the correlation functions
are small. In order to extract the modal parameters the PTD technique is used. The
following command is given
>>RES=polyref(Cavg,1/120,5,1,2,1,2,2,'result')
The number of modes is 2, five points have been removed from the beginning of the
correlation functions and a single time shift is used for the calculation of the MCFs. The
RESult matrix is saved in the file 'result'. The absolute values of the first 7 columns of the
RES matrix are
3.10 2.02 0.999 0.11 11.66 1.00 1.61
4.52 3.00 0.998 0.23 20.10 1.00 0.63
which can be compared with the theoretical values in table 6.1. The columns contain the
following estimates: 1: Frequencies, 2: Damping ratios, 3: MCF magnitude, 4: MCF
phase, 5: MPFs, 6: Mode shape component 1, 7: Mode shape component 2.
This example shows how simple it is to estimate modal parameters using MATLAB in
combination with the HIGH-C functions.
6.3 Summary
In this chapter considerations concerning the implementation of the RD technique have
been discussed. Section 6.1 illustrates the bias problems which can occur in application
of the RD technique. In general the bias problems can be avoided by proper implemen-
tation. In the situation where the sampling frequency is much higher than the maximum
eigenfrequency of the system, the bias problems vanish.
In section 6.2 a description of the different implementations of the RD functions made during
this work is given, and it is illustrated how to use these functions. The RD functions
are implemented in HIGH-C and linked to MATLAB using MATLAB's external interface
facilities in order to make the technique as fast as possible. Several different implemen-
tations of each triggering condition are made in order to be able to select the fastest
and most accurate function depending on the size and the precision of the available data.
Bibliography
[1] HIGH-C/C++ Tools, Library and Program manuals 1992. MetaWare Inc.
[2] MATLAB User's Guide (August 1992). MathWorks, Inc.
[3] MATLAB External Interface Guide (January 1992). MathWorks, Inc.
[4] Brincker, R., Jensen, J.L. & Krenk, S. Spectral Estimation by the Random Dec
Technique. Proc. 9th International Conference on Experimental Mechanics, Lyngby,
Copenhagen, Aug. 20-24, 1990.
[5] Brincker, R., Krenk, S. & Jensen, J.L. Estimation of Correlation Functions by the
Random Dec Technique. Proc. Skandinavisk Forum for Stokastisk Mekanik, Lund,
Sweden, Aug. 30-31, 1990.
[6] Brincker, R., Kirkegaard, P.H. & Rytter, A. Identification of System Parameters
by the Random Decrement Technique. Proc. 16th International Seminar on Modal
Analysis, Florence, Italy, Sept. 9-12, 1991.
[7] Brincker, R., Krenk, S. & Jensen, J.L. Estimation of Correlation Functions by the
Random Decrement Technique. Proc. 9th International Modal Analysis Conference
and Exhibit, Firenze, Italy, April 14-18, 1991.
[8] MATLAB Reference Guide (October 1992). MathWorks, Inc.
Chapter 7
Estimation of FRF by Random
Decrement
In this chapter a new method for estimating FRFs is tested. The method is based on the
RD functions of the load applied to and the response measured from a linear system. It is
assumed that the input is stochastic and stationary. Traditionally, the measured response
of and load on a linear system have been analysed using the FFT algorithm in order to
obtain the FRFs. As described in the introduction to this thesis, such an approach will
in general always result in biased estimates of the FRM. Using the RD technique as the
basis for the estimation of the FRFs, the bias can under the right circumstances be removed.
This is an important property of the RD technique. This method was first tested in
Brincker et al. [1] and Asmussen et al. [2].
Section 7.1 describes the traditional FFT-based approaches for estimating the FRFs. The
different approaches which are used to minimize bias and random errors are described.
In section 7.2 the theoretical background for estimating FRFs using the RD technique is
given, together with a description of the expected advantages and disadvantages. Section
7.3 presents an illustrative simulation study and an experimental test performed in order
to validate the performance of this method.
where superscript * denotes complex conjugate and

X(\omega,T) = \frac{1}{2\pi}\int_0^T X(t)\,e^{-i\omega t}\,dt , \qquad Y(\omega,T) = \frac{1}{2\pi}\int_0^T Y(t)\,e^{-i\omega t}\,dt     (7.5)
The mean value operation in eqs. (7.1) - (7.4) is over the statistical ensemble of the
different realizations, x_k(t) and y_k(t), of the processes X(t) and Y(t). In practice the limit
T → ∞ can never be obtained. This is modelled by multiplying the realizations of the
processes by a window function, which delimits the time period of the realizations to T.

x(t,T) = W(t,T)\,x(t) , \quad y(t,T) = W(t,T)\,y(t) , \qquad W(t,T) = \begin{cases} w(t) & |t| < T/2 \\ 0 & |t| \geq T/2 \end{cases}     (7.6)

If w(t) = 1 the boxcar or rectangular window is used. The limitation of the time period
of the realizations forces the frequency resolution to be finite, \Delta\omega = 2\pi/T. The spectral
densities in eqs. (7.1) - (7.4) can only be estimated as e.g. eq. (7.2)
\hat{S}_{YX}(\omega) = E\left[\frac{1}{T}\,Y_k(\omega,T)\,X_k^*(\omega,T)\right]     (7.7)
This estimate will always be biased due to the finite record period or window effects.
These bias errors are usually denoted leakage errors, since energy (or signal power) in a
given frequency band is moved to the surrounding frequency bands. Random errors can
also be introduced if only a finite number of realizations of the processes X(t) and Y(t)
are available. The bias errors can be reduced by using more complicated window functions
than the boxcar window, such as the Hanning or the Hamming window, see e.g. Schmidt [4]. In
a real-life situation only a single realization of each process is available (the measurements).
This problem is solved by assuming that the processes are ergodic. Then the realizations
can be divided into a number of segments, which represent the statistical properties of the
ergodic process. The above description follows Bendat & Piersol [3] and Schmidt [4].
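Under the ergodicity assumption the classical procedure can be summarized by the following MATLAB sketch, using non-overlapping segments and a Hanning window. The variable names x, y (single realizations) and nseg (segment length) are assumptions; normalization constants are omitted, since they cancel in the estimators and the coherence function formed from these spectra.

x = x(:);  y = y(:);                            % realizations as column vectors
w = 0.5*(1 - cos(2*pi*(0:nseg-1)'/(nseg-1)));   % Hanning window
nblk = floor(length(x)/nseg);                   % number of non-overlapping segments
Sxx = zeros(nseg,1); Syy = zeros(nseg,1); Sxy = zeros(nseg,1);
for k = 1:nblk
    i0 = (k-1)*nseg;
    Xk = fft(w .* x(i0+1:i0+nseg));             % finite Fourier transform of the load segment
    Yk = fft(w .* y(i0+1:i0+nseg));             % finite Fourier transform of the response segment
    Sxx = Sxx + conj(Xk).*Xk;                   % accumulate auto spectral density of the load
    Syy = Syy + conj(Yk).*Yk;                   % accumulate auto spectral density of the response
    Sxy = Sxy + conj(Xk).*Yk;                   % accumulate cross spectral density
end
Sxx = Sxx/nblk;  Syy = Syy/nblk;  Sxy = Sxy/nblk;   % averaged spectral densities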
The stochastic process X(ω) is interpreted as a load applied to a linear structure at
location i and Y(ω) is the stochastic process describing the corresponding response at
location j of the structure. The FRF which transfers X(ω) into Y(ω), H_ji(ω), will for
simplicity be denoted H(ω) without any subscripts.
Y(\omega) = H(\omega)\,X(\omega)     (7.8)

Using the definition of the spectral densities in eqs. (7.1) - (7.4), the FRF can be expressed
in terms of spectral densities.

S_{XY}(\omega) = H(\omega)\,S_{XX}(\omega)     (7.9)
S_{YY}(\omega) = H(\omega)\,S_{YX}(\omega)     (7.10)

Equations (7.9) and (7.10) constitute the basis of the two basic estimators of the FRF,
H_1(\omega) and H_2(\omega).

\hat{H}(\omega) = \hat{H}_1(\omega) = \frac{\hat{S}_{XY}(\omega)}{\hat{S}_{XX}(\omega)}     (7.11)

\hat{H}(\omega) = \hat{H}_2(\omega) = \frac{\hat{S}_{YY}(\omega)}{\hat{S}_{YX}(\omega)}     (7.12)
Other more complicated estimators of the FRF exist, such as H3 and H4, see e.g. Fabunmi
et al. [5] and Yun et al. [6], but H1 and H2 are the basic estimators.
Assume that the input measurement or load X(t) is measured without the introduction
of any noise. The response or output Y(t) is measured with a noise process added. If
this noise process is uncorrelated with the response, the H1 estimator will result in a true
FRF, in the sense of being independent of the noise added to the response. On the other
hand, assume that the input measurement is collected with a noise process added. The
output measurement is noise free. If the noise process is independent of the input or load,
the H2 estimator will result in a true FRF, in the sense of being independent of the noise
added to the input. For more complicated noise situations other estimators have been
developed, as mentioned previously.
The ordinary coherence function is defined as

\gamma_{xy}^2(\omega) = \frac{|S_{XY}(\omega)|^2}{S_{XX}(\omega)\,S_{YY}(\omega)} , \qquad 0 \leq \gamma_{xy}^2(\omega) \leq 1     (7.13)
The estimate of the coherence function is calculated by using the estimates of the spectral
densities. The coherence function can be used for quality assessment of the estimate of the
FRF. A high coherence means a good estimate of the FRF at the corresponding frequency
and a low coherence means a poor estimate of the FRF. Low coherence can also be
interpreted as an indicator of non-linearities. The coherence should be high (≈ 1)
around the peaks of an FRF.
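With the averaged spectral densities from the sketch above, the two estimators and the ordinary coherence function follow by element-wise division (Sxx, Syy and Sxy are assumed to be vectors over frequency, with the cross spectrum defined as the average of conj(X)·Y):

H1  = Sxy ./ Sxx;                       % H1 estimator, cf. eq. (7.11)
H2  = Syy ./ conj(Sxy);                 % H2 estimator, cf. eq. (7.12), using SYX = conj(SXY)
coh = abs(Sxy).^2 ./ (Sxx .* Syy);      % ordinary coherence function, cf. eq. (7.13)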
The problems with an FFT-based approach to estimating the FRFs can be summarized as
the choice of a proper window function.
the choice of a proper estimator.
In the next section an approach for estimating FRFs based on the RD technique is intro-
duced. The main difference from the traditional methods presented briefly in this section
is that the averaging process, see e.g. eqs. (7.1) - (7.4), is performed in the time do-
main before the Fourier transformation is applied. This makes it possible to obtain unbiased
estimates of the FRF.
There is a main difference between the two coherence functions defined in this chapter.
In eq. (7.13) the coherence function is based on the number of averages in the frequency
domain, whereas eq. (7.25) is only based on two different estimates in the frequency
domain. The averaging process is performed in the time domain. Alternatively, the coherence
function for the estimates based on the RD technique could be based on several RD
functions estimated with different triggering levels.
It will now be assumed that the load is Gaussian white noise. This means that the response
will also be Gaussian distributed. It also means that the RD functions will be proportional
to the correlation functions of the processes. Since R_YX(τ), R_XY(τ), R_YY(τ) and R_XX(τ)
all satisfy R → 0 for |τ| → ∞, it follows that all RD functions in eqs. (7.18) and (7.19)
dissipate towards zero with increasing time distance from zero. The consequence of this property
is that the bounds in the Fourier transformation do not have to be −∞ and ∞, and thereby
no leakage errors will occur. This assumes that R(τ_max) ≈ 0, where τ_max is the maximum
time lag in the correlation function.
Assume that the input to the system is measured with a noise process, U(t), added. The
noise process is assumed to be Gaussian distributed and uncorrelated with the measured
output, which is free of noise. Subscript M denotes the measured realizations of the
different processes

y_M(t) = y(t) , \qquad x_M(t) = x(t) + u(t)     (7.26)

The RD functions are proportional to the correlation functions, since the processes are
Gaussian distributed

R_{Y_M Y_M}(\tau) = E[y_M(t+\tau)\,y_M(t)] = R_{YY}(\tau)     (7.27)
R_{X_M Y_M}(\tau) = E[y_M(t+\tau)\,x_M(t)] = R_{XY}(\tau) + R_{UY}(\tau) = R_{XY}(\tau)     (7.28)

In this situation the estimation of the FRF should be based on eq. (7.22). Correspondingly,
if a Gaussian distributed noise process is added to the output of the system and the
measured input is noise free, the estimation of the FRF should be based on eq. (7.21).
Two main advantages are expected from using the RD-based method for estimation of FRFs
compared to the traditional method based on pure FFT. The computational time is ex-
pected to decrease, since the estimation of RD functions only involves averaging, whereas
the estimation using pure FFT involves multiplications. In general this question cannot
be answered, since the estimation time for the RD technique depends on the statistical
description of the processes. This issue was discussed in chapter 3. RD functions are
estimated without bias and dissipate towards zero for increasing absolute time lags. This is
an advantage since no leakage errors are introduced.
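A minimal MATLAB sketch of the idea is given below. It is assumed that the auto RD function of the load, Dxx, and the cross RD function, Dyx, are column vectors with 2*n+1 points, the zero time lag at index n+1 and the sampling interval dT; the exact definition of the H1RD estimator is the one given in section 7.2, and the plain division below is only meant as an illustration of it.

Zxx  = fft(ifftshift(Dxx));             % Fourier transform with the zero time lag moved to index 1
Zyx  = fft(ifftshift(Dyx));
H1rd = Zyx ./ Zxx;                      % FRF estimate by plain division of the transforms
f    = (0:2*n)' / ((2*n+1)*dT);         % corresponding frequency axis [Hz]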
7.3 Case Studies
This method was first tested in an introductory simulation study of a 3DOF system loaded
by white noise, see Asmussen & Brincker [2]. In Brincker & Asmussen [1] the method was
tested and compared with the FFT-based approach using experimentally obtained data.
This section starts with an illustrative example of an SDOF system. The purpose is to
illustrate the advantages and disadvantages of the method and to describe the different
problems which arise in the application of this technique. The method is further investi-
gated by analysing the vibrations of a laboratory bridge model loaded by Gaussian white
noise through a shaker.
[Figure 7.1 shows the auto RD function, DXX [N], and the cross RD function, DYX [m], for time lags from −80 to 80.]
Figure 7.1: Typical auto (load) and cross (response) RD functions estimated using positive
point triggering.
The RD functions in fig. 7.1 illustrate the idea of using the RD technique for estimation
of FRFs. The auto RD function is seen to be almost identical to the auto correlation
function of the white noise load. An appropriate choice of window function can increase the
accuracy of the RD functions. The window functions can be chosen as e.g. symmetric
exponentials, or a force window for the auto RD function, according to the standard
approach used in impact testing.
Figure 7.2 shows the RD functions from fig. 7.1, where both functions have been multiplied
by a symmetric exponential window.
[Figure 7.2 shows the windowed auto RD function, DXX [N], and cross RD function, DYX [m], for time lags from −80 to 80.]
Figure 7.2: Typical auto, DXX(τ), and cross, DYX(τ), RD functions estimated using
positive point triggering and a symmetric exponential window.
The effect of the exponential window is clear. The RD functions dissipate to zero more
efficiently, which means that leakage errors are controlled by the exponential window.
Figure 7.3 shows the absolute value of a typical FRF calculated theoretically and using
the H1 and the H1RD estimators. The result for the H1RD estimator is based on the RD
functions shown in fig. 7.1. The different estimates of the FRF are very alike, except
for the ragged curve, which is the FRF from the H1RD estimator without applying any
window function, see fig. 7.1 (no windowing corresponds to using the boxcar window).
The result of applying a symmetric exponential window is that the estimates become
smooth at the expense of artificially introduced damping. The estimated modal parameters can
be corrected for this artificial damping using principles developed for impact testing.
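The symmetric exponential window mentioned above can be formed directly from the time lag axis. The sketch below assumes the RD functions Dxx and Dyx with 2*n+1 points, sampling interval dT and a user-chosen decay constant beta; the damping added by the window must afterwards be subtracted from the identified damping ratios, as in impact testing.

tau  = (-n:n)' * dT;                    % time lag axis with the zero lag at the centre
beta = 1;                               % decay constant [1/s], user choice
w    = exp(-beta * abs(tau));           % symmetric exponential window
DxxW = w .* Dxx;                        % windowed auto RD function
DyxW = w .* Dyx;                        % windowed cross RD function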
[Figure 7.3 shows the FRF magnitudes [m/N] on a logarithmic axis from 10^-2 to 10^2 against frequency from 0 to 2 Hz.]
Figure 7.3: Absolute value of FRFs. [-------]: Theoretical. [- - - - ]: FFT Hanning window,
H1 . [ ]: RD-FFT, No windowing, H1RD . [ , , ]: RD-FFT, exponential window,
H1RD .
Figure 7.4 shows a zoom of fig. 7.3 around the resonance frequency.
[Figure 7.4 shows the FRF magnitudes [m/N] on a logarithmic axis around the resonance peak.]
Figure 7.4: Zoom of absolute value of the FRFs. [-------]: Theoretical. [- - - - ]: FFT
Hanning window, H1 . [ ]: RD-FFT, No windowing, H1RD . [ , , ]: RD-FFT,
exponential window, H1RD .
The curve from the H1RD estimator based on the RD function in fig. 7.1 (no windowing) is
almost identical to the theoretical curve. If the symmetric exponential window is used, the
FRF is smooth in the whole frequency band, but in the vicinity of f the artificial damping is
very clear.
In order to investigate the influence of the length of the record period, the number of
measurement points is increased to 40000. Figure 7.5 shows the estimated RD functions
without windowing.
[Figure 7.5 shows the auto RD function, DXX [N], and the cross RD function, DYX [m], estimated from 40000 points without windowing.]
[Figure 7.6 shows the FRF magnitudes [m/N] on a logarithmic axis from 10^-2 to 10^2 against frequency from 0 to 2 Hz.]
Figure 7.6: FRFs estimated from a record length of 40000 points. [-------]: Theoretical.
[- - - - ]: FFT Hanning window, H1. [ ]: RD-FFT, No windowing, H1RD .
Figure 7.6 shows that the H1RD estimate becomes smooth with increasing record length,
even though no window is applied. Figure 7.7 shows a zoom of the FRFs in fig. 7.6 around
the resonance frequency.
[Figure 7.7 shows the FRF magnitudes [m/N] on a logarithmic axis around the resonance peak.]
Figure 7.7: Zoom of absolute value of the FRFs. [-------]: Theoretical. [- - - - ]: FFT
Hanning window, H1. [ ]: RD-FFT, No windowing, H1RD .
The increase of the record length has increased the accuracy of the H1RD estimate, but
the H1 estimate is still biased (an increase in the length of the time segments from 1024
to 2048 points would decrease the bias, but for illustration purposes this is omitted).
The estimation of the FRFs using the 5 different approaches is performed 100 times. The
mean values and the standard deviations are shown in table 7.1.
                          f [Hz]   σf [Hz]      ζ [%]   σζ [%]
FFT - 10000 pts.          1.000    1.14·10^-4   0.68    0.0054
FFT - 40000 pts.          1.000    0.53·10^-4   0.67    0.0026
RD - 10000 pts.           1.000    6.71·10^-4   0.59    0.0100
RD - 10000 pts., window   1.000    4.97·10^-4   0.60    0.0073
RD - 40000 pts.           1.000    2.69·10^-4   0.60    0.0032
Table 7.1: Average values of the modal parameters and the corresponding standard devia-
tions based on 100 simulations. The theoretical values are f = 1 Hz and ζ = 0.6 %.
The results show that all approaches provide unbiased estimates of the eigenfrequencies,
but the FFT approach provides biased estimates of the damping ratio. On the other
hand, the standard deviations of the estimates using the RD technique are higher than
those of the estimates using the FFT approach. If no window function is applied and the system has
low damping, the ragged spectral densities will result in modal parameters affected by high
random errors. The high standard deviations of the modal parameters estimated with
the RD technique are a result of the difficulties which arise in the Fourier transformation
of the RD functions.
7.3.2 Experimental Study - Laboratory Bridge Model
This case study is based on a laboratory bridge model loaded by Gaussian white noise.
The model consists of a simply supported steel plate with 3 spans. The steel plate has
the dimensions 3.0 × 0.35 m. The length of each span is 1 m. A shaker is attached at the
right-hand span. The shaker excites the bridge model with Gaussian white noise in the
frequency span 0-60 Hz. The measurements consist of 32000 points sampled at 150 Hz.
The measurements are analog and digitally filtered to avoid aliasing and to suppress high
frequency noise. Figure 7.8 shows an outline draft of the bridge; the sensor locations
are also indicated.
The measurements are collected in 3 setups consisting of 7, 7 and 6 response records in
each setup and the corresponding record of the load. Two different approaches are used
to estimate the FRFs of the structure. The approach based on the FFT algorithm with
the H1 estimator is used. Each time segment has a length of 1024 points and is multiplied
by the Hanning window to reduce leakage errors. The second approach is based on the
H1RD estimator. The positive point triggering condition is used with triggering levels
chosen as [0.5σX ∞]. Each RD function has a length of 825 points. Common to both
approaches is that the FRFs are transformed into IRFs using the inverse FFT. The modal
parameters are extracted from the IRFs using the PTD algorithm. Structural modes are
separated from noise modes using the MCF, the MPF and a requirement for low damping
ratios, as described in chapter 2. Table 7.2 shows the estimated eigenfrequencies and the
corresponding damping ratios.
The reason may be that the FRFs estimated using the RD technique are more ragged than
the FRFs estimated using the FFT algorithm. It is very difficult to calculate the FFT of
the RD functions with a satisfactory result. The result is very sensitive to the number of
points in the RD functions. Figures 7.9 and 7.10 show two typical FRFs estimated using
the H1RD and the H1 estimators, respectively.
[Figure 7.9 shows a typical FRF estimated using H1RD, plotted on a logarithmic magnitude axis from 10^-4 to 10^0 against frequency from 0 to 60 Hz.]
The figure illustrates that the FRF estimated using H1RD is very ragged. Around the
spikes the shape of the curve is very convincing, since the curve is smooth and it is clear
that this system has low damping. The FRF shown in fig. 7.10 is on the other hand
very smooth. The experience from this experimental study is that the modal parameters
estimated with the RD technique are more uncertain than the modal parameters esti-
mated with the FFT technique. The model order had to be higher and the fluctuation
of the modal parameters is also larger. This difference in the FRFs is believed to be the
explanation for the higher uncertainty (or higher random errors) of the modal parameters
estimated using the RD technique.
[Figure 7.10 shows a typical FRF estimated using H1, plotted on a logarithmic magnitude axis from 10^-4 to 10^0 against frequency from 0 to 60 Hz.]
The higher uncertainty of the modal parameters estimated from H1RD compared to the
results from the H1 estimator is supported by the estimated mode shapes, which are shown
in figs. 7.11 - 7.15. Except for the two closely spaced modes at about 50 Hz, the absolute
values of the mode shapes from the H1 estimator are the most convincing results.
[Figure 7.11 shows mode shapes 1 and 2 as surface plots over the bridge deck, estimated using H1RD and H1.]
Figure 7.11: Mode shapes 1 and 2 estimated using H1RD and H1. MAC=0.98 and
MAC=0.81, respectively.
[Figure 7.12 shows mode 3 (FFT 21.5 Hz) and mode 4 (FFT 45.09 Hz) as surface plots, estimated using H1RD and H1.]
Figure 7.12: Mode shapes 3 and 4 estimated using H1RD and H1. MAC=1.00 and
MAC=1.00, respectively.
[Figure 7.13 shows mode shapes 5 and 6 as surface plots, estimated using H1RD and H1.]
Figure 7.13: Mode shapes 5 and 6 estimated using H1RD and H1. MAC=1.00 and
MAC=0.54, respectively.
[Figure 7.14 shows mode 7 (FFT 50.17 Hz) and mode 8 (FFT 51.74 Hz) as surface plots, estimated using H1RD and H1.]
Figure 7.14: Mode shapes 7 and 8 estimated using H1RD and H1. MAC=0.56 and
MAC=0.99, respectively.
[Figure 7.15 shows mode shapes 9 and 10 as surface plots, estimated using H1RD and H1.]
Figure 7.15: Mode shapes 9 and 10 estimated using H1RD and H1 . MAC=0.96 and
MAC=0.93, respectively.
The MAC values show that in general there is a high correlation between the mode shapes.
There is only major disagreement at the closely spaced modes. It is clear that the FFT-
based estimates are more convincing, since the mode shapes look very smooth.
The estimation time for the two approaches was almost equal. The reason is that the
RD functions contain relatively many points and that a high number of averages is used.
1025 points in an RD function is in general a very high number. For a system with lower
damping ratios fewer points in the RD functions are necessary and the estimation time
thereby decreases.
It is very difficult to use the RD technique for estimation of FRFs for lightly damped
systems. One of the major problems is that although the RD functions are estimated with
high accuracy, it is very difficult to calculate the FRF from the RD functions. If systems
with higher damping are considered, fewer points in the RD functions are needed. This will
make the RD technique much faster than the FFT approach, and the accuracy will also
increase. The RD functions are always more accurate close to the centre, and the functions
will dissipate towards zero more clearly if a system with higher damping is analysed.
7.4 Summary
A new method for estimating FRFs based on the RD technique has been introduced. The
idea is based on the fact that the RD functions of the input and output of a linear system
are related, since the RD function of the response is the convolution integral of the RD
function of the load and the IRF. This relation is always valid no matter what type of
load the structure has been subjected to.
In this chapter it has been assumed that the load is Gaussian white noise. This means
that the RD functions will dissipate to zero with increasing absolute time lags. Intuitively
it is then ideal to Fourier transform the RD functions, since the restriction of finite time
record length is insignificant. The influence of the load can be removed in the frequency
domain using either the H1RD or the H2RD estimator, both of which are based on plain division.
Based on the experience with the simulation study and the analysis of the bridge model
in section 7.3.2 and in Brincker et al. [1], it is recommended to use the H1RD estimator.
Compared with the traditional method based on the FFT algorithm, the RD-FFT approach
is not as stable. The auto and cross RD functions can be calculated with satisfactory
accuracy if the guidelines given in chapter 3 are followed. It is the transformation of the
RD functions into the frequency domain which creates trouble. The experience obtained
in this study is that it is necessary to introduce a window function. Otherwise the FRFs
will be dominated by the noise introduced with the Fourier transformation. Furthermore,
the result depends on a proper choice of the length of the RD functions. If the RD
functions contain too many points the FRFs will be dominated by noise.
The problems are particularly dominant for lightly damped structures. If the damping ratios
increase from 0.1% - 1% to 3% - 5%, the stability of the RD-FFT approach will improve,
since the RD functions will dissipate faster. The result is that shorter and thereby more
accurate RD functions are obtained.
Further work with this approach is recommended to focus on the relations in eqs. (7.18)
and (7.19). The modal parameters can be extracted directly from these relations by
using algorithms which are based on the input-output relation in the time domain. Such an
approach would also not require the input to the structure to be white noise.
Bibliography
[1] Brincker, R. & Asmussen, J.C. Random Decrement Based FRF Estimation. Proc. 15th
International Modal Analysis Conference, Orlando, Florida, USA, Feb. 3-6, 1997, Vol.
II, pp. 1571-1576.
[2] Asmussen, J.C. & Brincker, R. Estimation of Frequency Response Functions by Ran-
dom Decrement. Proc. 14th International Modal Analysis Conference, Dearborn,
Michigan, USA, Feb. 12-15, 1996, Vol. I, pp. 246-252.
[3] Bendat, J. & Piersol, A. Random Data: Analysis and Measurement Procedures. John
Wiley & Sons, Inc. 1986. ISBN 0-471-04000-2.
[4] Schmidt, H. Resolution Bias Errors in Spectral Density, Frequency Response and
Coherence Function Measurement, I: General Theory. Journal of Sound and Vibration
(1986) 101(3) pp. 347-362.
[5] Fabunmi, J.A. & Tasker, F.A. Advanced Techniques for Measuring Structural Mobi-
lities. Journal of Vibration, Acoustics, Stress, and Reliability in Design. July 1988,
Vol. 110, pp. 345-349.
[6] Yun, C.-B. & Hong, K.-S. Improved Frequency Domain Identifications of Structures.
Structural Safety and Reliability, ICOSSAR '93, Vol. 2. 1994, Balkema, Rotterdam,
ISBN 90 5410 357 4, pp. 859-865.
Chapter 8
Ambient Testing of Bridges
The main advantages of the RD technique are the low estimation time and the simple esti-
mation algorithm. The advantage of a low estimation time can be utilized in identification
of large structures, where the response of the structure is collected at many locations. In
general this is the situation in ambient testing of bridges. The purpose of this chapter is to
document the applicability of the RD technique for ambient testing of bridges. Ambient
testing of bridges refers to measurements of the vibrations of bridges due to ambient loads
such as traffic, wind, waves and micro tremors.
In ambient testing of bridges a special terminology is used. Usually the number of mea-
surement locations is higher than the number of measurement channels available in the
measurement system. The number of measurement channels available in a bridge measure-
ment system is limited by the number of cables available, the number of accelerometers
available and the analog/digital conversion of the measurements. This means that the
measurements have to be collected by applying the measurement system several times. A
single set of these measurements is denoted a setup. It is necessary to have one or several
measurement locations represented in each of the setups. Otherwise it is not possible to
link the mode shapes estimated from the different setups together. The measurements
collected at the locations which are represented in all setups are denoted reference mea-
surements. In principle a single reference measurement is sufficient, but it is
common to use two or more reference measurements. This ensures a high probability
that all modes are well represented in at least one of the reference measurements.
Section 8.1 deals with identification of the Queensborough bridge from ambient vibrations.
This work was a pre-investigation of the performance of the RD technique compared to
other well-known techniques, such as FFT and ARMAV based approaches. The results
of this study encouraged a continuation with the RD technique as a tool for analyzing
ambient measurements of bridges.
In section 8.2 a laboratory bridge model is considered. The purpose of this work was to
compare the speed and accuracy of the RD and the VRD techniques. This study concluded
the development and documentation of the VRD technique.
Ambient testing of the Vestvej bridge is reported in section 8.3. These measurements
were collected using a bridge measurement system developed as a part of this Ph.D.
project. The purpose of this work, besides testing the performance of the RD technique,
is to check the bridge measurement system. The analysis is also a pre-investigation for a
demonstration project of vibration based inspection using the RD technique.
[Figure 8.3 shows the four auto RD functions plotted against the time lag τ from 0 to 1.5 s.]
Figure 8.3: 4 different auto RD functions. [ ]: [a1 a2]=[σX 2σX]. [, , ,]:
[a1 a2]=[2σX 3σX]. [, , ,,]: [a1 a2]=[3σX 4σX]. [--------------]: [a1 a2]=[4σX 5σX].
All RD functions are estimated in each setup, which corresponds to estimating the full
correlation matrix. The RD functions are only estimated for positive time lags. The modal
parameters are extracted using ITD.
8.1.2 Results
In order to extract the modal parameters from the RD functions, the ITD technique is
applied to each set of RD functions. The number of sets of RD functions is equal to the
number of measurements collected. Figure 8.4 shows the estimated eigenfrequencies as a
function of the identification number. It was assumed that 64 modes were sufficient to
model both physical and computational modes. Only the modes with ∠MCF < 10 deg,
|MCF| > 90% and damping ratios < 10% are selected.
Figure 8.4: Stabilization diagram with estimated natural frequencies from identification of
different RD setups.
The selected modes are indicated with a vertical line. The estimated eigenfrequencies and
damping ratios, reported in Felber et al. [2] - Brincker et al. [5], are shown in table 8.1.
Table 8.1 illustrates that there is good agreement between the eigenfrequencies and the
damping ratios estimated using the different approaches. For the approaches based on
FFT and ARMAV models the estimated damping ratios have not been reported. It is
interesting to note that only the FFT-based approach identifies a structural mode at
5.33 Hz. The explanation could be that the measurements at opposite positions of the
bridge have been subtracted and added in order to separate translational and rotational
modes in the FFT analysis. This data analysis strategy has not been applied in the
analysis using the other approaches. The argument is that it is not correct to assume all
modes to be either translational or rotational. Usually the modes will contain contributions
from both a translational and a rotational part.
The first rotational mode shape and the second translational mode shape are shown in
figure 8.5. The mode shape components are almost in or out of phase.
Figure 8.5: First rotational mode (2.28 Hz) and second translational mode (1.88 Hz).
The estimated mode shapes indicate that the identification of the modal parameters using
the RD technique in combination with ITD has been successful.
8.1.3 Conclusion
The results of this introductory application of the RD technique for ambient testing of
bridges encouraged continued investigations. It was possible to identify modal parameters
with a satisfactory accuracy compared to the results of the other techniques. One of the
reasons for this conclusion is that the accuracy of the RD technique can be improved. The
following improvements are recommended.
Estimate RD functions with both positive and negative time lags. This opens an
opportunity for averaging and quality assessment of the RD functions.
Shift the sign of the time series to obtain the maximum number of triggering points
for the chosen triggering levels (a minimal sketch of this sign choice is given after the list).
Use broader triggering levels, e.g. [a1 a2] = [σX ∞], to obtain more triggering points.
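The sign choice in the second recommendation can be sketched as follows, assuming the measurement matrix X, the triggering measurement number no and the triggering levels a1 and a2 (names chosen here for illustration only):

x  = X(:, no);                          % triggering measurement
Np = sum( x > a1 &  x < a2);            % number of triggering points with the original sign
Nm = sum(-x > a1 & -x < a2);            % number of triggering points with the sign shifted
if Nm > Np
    X = -X;                             % use the sign giving the larger number of triggering points
end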
[Figure 8.7 shows an acceleration record with amplitudes between approximately −0.6 and 0.8.]
Figure 8.7: Typical acceleration record from the laboratory bridge model.
For the RD technique the positive point triggering condition has been chosen, since this
condition ensures that sufficient triggering points can be obtained. The triggering levels
are chosen as [a1 a2] = [0.5σX ∞]. Any triggering point between 0 and 0.5σX is omitted
to avoid false triggering points, since this level is expected to be dominated by noise. Figure
8.8 shows a typical auto RD function, where the average function and the error function
have been calculated using the symmetry relation as described in chapter 3.
[Figure 8.8 shows the average and error RD functions, with amplitudes between approximately −0.03 and 0.03, against time from 0 to 2 s.]
Figure 8.8: Average and error RD function calculated by the symmetry relations. The
error RD function is the curve fluctuating around zero.
There are 301 points in the average RD and error RD functions in fig. 8.8. From fig.
8.8 it is concluded that the error RD function is so small that all 301 points can be used
in the modal parameter extraction procedure if necessary. The number of triggering points
was approximately 5000.
All RD functions are calculated for each setup, corresponding to estimating the full corre-
lation matrix. The average and the error RD functions are estimated using the symmetry
relations. The question is whether all sets of RD functions (columns in the correlation matrix)
should be used in the modal parameter extraction procedure. In order to investigate this
question the approach suggested in section 3.7 is used. For each RD function an error
measure is calculated as the standard deviation of the error RD function divided by the
standard deviation of the average RD function, see eq. (3.64). The result of this ana-
lysis for the setup containing six measurements is shown in fig. 8.9. The record number
indicates the measurement from which the RD functions are calculated and the
reference number indicates the triggering measurement.
[Figure 8.9 shows the error fractions, between 0 and 0.04, as a function of reference number and record number (1-6).]
Figure 8.9: Fraction between the standard deviations of the error and average RD functions,
calculated by the symmetry relations with 301 points in each function.
All fractions are small and there is no significant difference between them, so all
sets of RD functions can be used in the modal parameter extraction procedure.
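The error measure shown in fig. 8.9 can be formed in a single line, assuming that Davg and Derr are matrices holding the average and error RD functions with one column per record/reference combination (assumed layout):

frac = std(Derr) ./ std(Davg);          % one fraction per RD function, cf. eq. (3.64)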
To illustrate the principle of a modal parameter extraction procedure, the 3 full correlation
matrices estimated using the RD technique as described above are considered. The PTD
technique is used, and for each setup the number of modes is varied from 26 to 28 and the
number of points used from the RD functions is varied from 140 to 180. Figure 8.10 shows
a stabilization diagram of the estimated frequencies without applying any restriction to
the results, except that the eigenvalues should appear in complex conjugate pairs.
[Figure 8.10 shows the stabilization diagram over the frequency range 0 to 70 Hz.]
Figure 8.10: Stabilization diagram.
The following restrictions are then applied to the different modes: ζ < 0.05, |MCF| > 0.9 and
∠MCF < 10 deg. This results in the following stabilization diagram.
[Figure 8.11 shows the restricted stabilization diagram over the frequency range 0 to 70 Hz.]
Figure 8.11: Stabilization diagram.
As seen, it is much simpler to detect structural modes from fig. 8.11 than from the sta-
bilization diagram shown in fig. 8.10. This illustrates the advantage of applying different
mode selection criteria. The final resulting modal parameters are shown in section 8.2.2.
In order to apply the VRD technique, the time shifts between the elements of the vector
triggering condition should be chosen. As an example, the setup with 6 measurements is
considered. Figure 8.12 shows the initial estimate of the RD functions using the level crossing
triggering condition. The triggering level is 1.4σX.
[Figure 8.12 shows the six initial cross RD functions, DX1X1 to DX6X1, against the time lag τ from −0.05 to 0.05 s.]
Figure 8.12: Initial RD functions for selection of vector triggering time delays.
The correlation is maximized by choosing the time shift vector as Δt = [0 0 0.06 0.06 0.0533 0.0533]
seconds or, in terms of the number of time lags, Δt = [0 0 9 9 8 8]. A time shift vector
chosen as Δt = [0 0 9 9 −3 −3] would also have maximized the correlation.
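The time shifts can be read directly off the initial RD functions by locating the time lag where each cross RD function attains its extreme value. The sketch below assumes that Dinit holds one initial RD function per column with 2*n+1 points, the zero time lag at index n+1 and the sampling interval dT:

[~, imax] = max(abs(Dinit));            % index of the extreme value in each column
lags   = imax - (n + 1);                % extreme value positions converted to time lags
tshift = lags * dT;                     % time shift vector in seconds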
The size of the vector triggering condition does not have to be equal to the number of
measurements. Table 8.2 shows the actual number of triggering points as a function of
the size of the vector triggering condition. The elements of the triggering levels are all
chosen as [a1i a2i] = [0.5σXi ∞] in order to omit false triggering points in the range
[0 0.5σXi].
Size  1     2     3     4     5     6
N     9900  8200  4600  4200  2600  2400
Table 8.2: Size of the vector triggering condition and the corresponding number of triggering
points.
The number of triggering points decreases with the size of the vector triggering condition.
About 2000 triggering points are sufficient for a reasonable convergence of the averaging
process. So the vector triggering condition is chosen to have maximum size, which corre-
sponds to the number of measurements in each setup. For each setup a single set of VRD
functions is calculated. In order to calculate an error function for the VRD functions,
the sign of the triggering levels is shifted and the VRD functions are estimated again. A
typical average and a typical error VRD function are shown in fig. 8.13.
[Figure 8.13 shows the average and error VRD functions, with amplitudes between approximately −0.15 and 0.15, against time from 0 to 2 s.]
Figure 8.13: Average and error VRD function calculated by the shifting sign of triggering
levels (301 points).
The error VRD functions are seen to be small compared to the average VRD function. It is also clear, however, that the significance of the error function increases with the number of time lags. It is therefore not recommended to use all 300 points in the modal parameter extraction procedure; the number of points used should not exceed 150.
The modal parameters are extracted from the VRD functions using the same approach as described for the RD technique. The final results are presented in section 8.2.2.
8.2.2 Results
The modal parameters are extracted from the VRD and the RD functions using PTD. A stabilization diagram with restrictions on the damping ratios (ζ < 10%) and the MCF (magnitude > 90%, phase < 10°) is used to separate the structural modes from the computational modes. Table 8.3 shows the estimated modal parameters for the two approaches.
Approach   Parameter   1       2       3       4       5       6       7       8
RDAP       f [Hz]      12.31   16.31   21.72   45.14   48.11   49.91   51.53   61.60
VRD        f [Hz]      12.39   16.54   21.89   45.14   48.11   50.09   51.46   61.60
RDAP       ζ [%]       1.90    3.65    1.51    0.22    0.34    0.67    0.54    0.47
VRD        ζ [%]       1.82    4.80    1.54    0.22    0.36    0.47    0.47    0.24

Table 8.3: Estimated natural eigenfrequencies and damping ratios for the laboratory bridge model.
From table 8.3 it is seen that there is a high correlation between the estimated modal parameters, even for the damping ratios. The mode shapes are shown in figs. 8.14 - 8.17.
Figure 8.14: First (VRD: 12.37 Hz) and second (VRD: 16.29 Hz) mode shape estimated using RD and VRD. MAC = 0.99 and 0.54, respectively.
Figure 8.15: Third and fourth mode shape estimated using RD and VRD. MAC = 0.99 and 0.99, respectively.
Figure 8.16: Fifth (VRD: 48.09 Hz) and sixth (VRD: 50.07 Hz) mode shape estimated using RD and VRD. MAC = 0.99 and 0.10, respectively.
Figure 8.17: Seventh and eighth mode shape estimated using RD and VRD. MAC = 0.99 and 1.00, respectively.
There is a high correlation between the mode shapes, except for the mode at about 50 Hz. The explanation is that there are two closely spaced modes which are only weakly excited. It is not possible to obtain reasonable estimates of these modes with the RD or VRD technique from the available response measurements alone.
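The MAC values quoted in the figure captions can be computed with the standard definition; a small sketch (the mode shape vectors are hypothetical) is:

    import numpy as np

    def mac(phi1, phi2):
        """Modal Assurance Criterion between two (possibly complex) mode shape vectors."""
        phi1, phi2 = np.asarray(phi1), np.asarray(phi2)
        num = np.abs(np.vdot(phi1, phi2)) ** 2
        return num / (np.vdot(phi1, phi1).real * np.vdot(phi2, phi2).real)

    # two nearly parallel shapes give MAC close to 1 (values are illustrative only)
    print(mac([1.0, 0.8, -0.5], [1.02, 0.79, -0.52]))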
8.2.3 Conclusions
The estimation times for the RD technique and the VRD technique, including the estimation time for the initial RD functions, are shown in table 8.4.
Method       RD     VRD    Initial
Time [sec]   565    120    5
N            8700   1700   1700

Table 8.4: Estimation time (CPU time [sec]) and average number of triggering points, N.
The initial estimation of the RD functions, used for selection of the time shift vector, is seen to be extremely fast.
The application of the VRD technique is justified through an analysis of the acceleration response of a laboratory bridge model. The analysis resulted in a high correlation between the modal parameters estimated from RD and VRD functions. An approach to estimating the optimal time shifts for the formulation of the vector triggering condition has been illustrated. The advantage of the VRD technique is demonstrated by a fivefold reduction of the computational time.
Figure 8.20: Outline draft of the Vestvej bridge and the measurement locations.
Since 26 measurement locations are chosen, 5 different setups of data are collected. The number of each accelerometer (internal AU number) and the corresponding channel can be seen in table 8.5. Furthermore, the measurement locations defined in fig. 8.20 are linked to the different setups.
Acc. No.   Channel   Setup 1   Setup 2   Setup 3   Setup 4   Setup 5
                     (LB)      (LB)      (LB)      (LB)      (LB)
7214       1         2         2         2         2         2
7706       2         3         3         3         3         3
9967       3         14        15        16        1         1
9969       4         25        23        21        19        17
10354      5         26        24        22        20        18
10355      6         12        10        8         6         4
14838      7         13        11        9         7         5

Table 8.5: Setup and measurement location overview. LB = Location on Bridge, see fig. 8.20.
The measurements were sampled at 160 Hz for 900 seconds. The data were detrended and decimated by a factor of two (including lowpass digital filtering) before being saved to disk. The resulting sampling frequency is thereby reduced to 80 Hz and the number of points in each measurement is 72000. Detrending removes any linear trend from the data, and decimation reduces the sampling frequency and suppresses the noise in the records.
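A sketch of this pre-processing, assuming one raw record stored as a NumPy array (the record itself is a random placeholder here), could use scipy.signal:

    import numpy as np
    from scipy.signal import detrend, decimate

    fs_raw = 160.0
    raw = np.random.standard_normal(int(900 * fs_raw))   # placeholder for one 900 s record

    x = detrend(raw, type='linear')   # remove any linear trend
    x = decimate(x, 2)                # anti-alias lowpass filter + downsample to 80 Hz
    print(x.size)                     # 72000 points per measurement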
Figures 8.21 and 8.22 show the data collection equipment and data acquisition equipment
on site, respectively.
Figure 8.23 shows a typical acceleration record from the Vestvej bridge collected on 04/06
1997.
Figure 8.23: Typical acceleration record from the Vestvej bridge (accelerations in [g]).
Figure 8.24: The average and error estimate of an auto correlation function using [a1 a2] = [0 1.5σ_X].
Figure 8.25: The average and error estimate of an auto correlation function using [a1 a2] = [1.5σ_X ∞].
The difference is significant, and the error function shows that although there are only 0.1 times as many triggering points using [a1 a2] = [1.5σ_X ∞], the estimate is far more accurate. This is a positive result, since the most accurate approach is thereby also the fastest. The triggering levels are therefore chosen as [a1 a2] = [1.5σ_X ∞].
For each of the five setups the full correlation matrix is estimated using the positive point triggering condition. After the estimation of the RD functions and before the modal parameters are extracted, it is common in ambient testing of bridges to perform some kind of pre-analysis to get an idea of the number of structural modes present in the measurements. Such an analysis has been developed for ambient testing based on FFT-estimated spectral densities. The method is denoted Average Spectral Densities (ASD), and the idea is simply to average the spectral densities of all measurements. The ASD will strongly indicate the number of modes and the corresponding frequencies. Figure 8.26 shows the ASD calculated using the FFT of the measurements.
Figure 8.26: The ASD calculated from the spectral densities of the measurements (logarithmic amplitude scale, frequency axis 0 - 15 Hz).
The idea behind this approach is adapted to the RD technique. Instead of averaging the spectral densities of the measurements, the Fourier transforms of the estimated RD functions are calculated and averaged. The estimate of the RD-based ASD is shown in fig. 8.27.
Figure 8.27: The ASD calculated from the FFT of the RD functions of the measurements (logarithmic amplitude scale, frequency axis 0 - 15 Hz).
From fig. 8.27 the structural modes and the corresponding frequencies are detected; these are listed in table 8.6.
It seems that the ASD based on the RD functions provides a better basis for detecting structural modes. The reason is that the ASD from RD functions is based on averaging in both the time and frequency domains, whereas the ASD based on the spectral densities of the measurements only involves averaging in the frequency domain. This is an important property of the RD technique.
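Both variants of the ASD can be sketched as follows, assuming the measurements and the estimated RD functions are stored row-wise in arrays X and D; the Welch segment length is an arbitrary choice, not taken from the thesis:

    import numpy as np
    from scipy.signal import welch

    def asd_fft(X, fs, nperseg=1024):
        """ASD as the average of the auto spectral densities of all measurements (rows of X)."""
        f, _ = welch(X[0], fs=fs, nperseg=nperseg)
        return f, np.mean([welch(x, fs=fs, nperseg=nperseg)[1] for x in X], axis=0)

    def asd_rd(D, fs):
        """ASD as the average of |FFT| of the estimated RD functions (rows of D)."""
        f = np.fft.rfftfreq(D.shape[1], d=1.0 / fs)
        return f, np.abs(np.fft.rfft(D, axis=1)).mean(axis=0)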
8.5.2 Results
The modal parameters are extracted from the RD functions using the PTD algorithm. The aim is to estimate the mode shapes corresponding to the frequencies in table 8.6, but the noise content in the data is high, so not all mode shapes may be estimated at a high confidence level. The influence of the number of points used from the RD functions and of the model order is investigated by changing the number of modes from 25 to 30 and varying the number of points in the RD functions from 100 to 120. Corresponding to the analysis of the laboratory bridge model, the following restrictions have been applied: ζ < 0.05, |MCF| > 0.9 and ∠MCF < 10°. Table 8.7 shows the estimated eigenfrequencies and damping ratios.
Parameter   Date    1      2      3      4       5
f [Hz]      25/03   5.16   6.64   8.31   14.01   15.38
f [Hz]      04/06   5.02   6.58   8.04   13.23   14.56
ζ [%]       25/03   1.85   4.07   3.04   2.15    2.42
ζ [%]       04/06   1.21   3.52   2.45   1.20    1.17

Table 8.7: Eigenfrequencies and damping ratios of the Vestvej Bridge.
The corresponding mode shapes are shown in figs. 8.28 - 8.32.
Figures 8.28 - 8.32: Estimated mode shapes of the Vestvej bridge, plotted over the bridge width [m] and the bridge length [m].
8.6 Summary
In this chapter the application of the RD technique has been demonstrated using different experimental tests. In general the technique has resulted in accurate modal parameters, but it has also been demonstrated that the technique has its limits when closely spaced modes are present in the data. This corresponds to the experience with the FFT algorithm. The limited accuracy should, however, always be weighed against the speed of the technique.
The analysis of the Queensborough bridge with the RD technique was a pre-investigation of the performance of the RD technique compared to other techniques. For this purpose the data from the Queensborough bridge were an obvious choice, since they have been analysed using different techniques by different authors. The performance of the RD technique motivated the further work with this technique.
The VRD technique was tested against the RD technique using data from a laboratory bridge model. This investigation was a natural continuation of the test of the performance of the VRD technique. The result of the VRD technique was high-quality modal parameters, highly correlated with the results of the RD technique. At the same time the VRD technique was 4-5 times faster than the RD technique. This investigation concludes the introduction and justification of the VRD technique.
The chapter is concluded with an ambient vibration study of the Vestvej bridge. The purpose of this investigation is to obtain information about the modal parameters of the bridge and to investigate whether the RD technique can be used as a basis for on-line surveillance of the bridge. It is concluded that, due to the low-level accelerations of the bridge, surveillance should be based on long-term records of the accelerations.
Chapter 9
Conclusions
This chapter concludes the thesis with the final comments on the investigations of the RD technique. First the contents and the results of each chapter are reviewed in section 9.1. The purpose is to give an overview before the general conclusions are given in section 9.2. The general conclusions contain a step-by-step recipe for the application of the RD technique to identify the dynamic characteristics of structures from ambient data. The chapter is concluded with an outlook in section 9.3, which includes topics and areas on which future work can be based.
9.1 Summary
9.1.1 Chapter 1
This chapter introduces and delimits the work concerning vibrations of civil engineering structures presented in this thesis. The work is delimited to the RD technique for estimation of modal parameters of structures whose vibrations can be modelled by a time-invariant linear lumped-mass parameter system. Furthermore, it is assumed that the loads can be described as filtered stationary Gaussian white noise. The loads can be created artificially or be ambient. A review of the RD technique is performed as a natural starting point. The major result is that it is chosen to interpret RD functions in terms of correlation functions. The scope of the work is formulated as:

The objective of the present Ph.D. thesis is a description, implementation and further development of the theory behind the RD technique, as well as a comparison of the performance of the RD technique with the FFT algorithm.

The introduction is concluded with a description of the contents of each chapter.
9.1.2 Chapter 2
This chapter contains a review of linear and linear stochastic vibration theory. The purpose is to describe how modal parameters can be extracted from correlation functions under the previously mentioned assumptions. It is shown in chapter 3 and appendix A that the RD functions are proportional to the correlation functions of the response. The lumped-mass parameter system is introduced and the modal parameters are defined. The practical application and implementation of two algorithms, the Ibrahim Time Domain (ITD) and the Polyreference Time Domain (PTD), for extraction of modal parameters from the free decays of a structure are described. The load modelling is defined, and it is shown that the correlation functions of the response of the linear lumped-mass parameter system subjected to this load satisfy exactly the same relation as the free decays of the structure. This means that the ITD and PTD algorithms can be used to extract modal parameters from RD functions.
9.1.3 Chapter 3
The RD functions are defined as conditional mean values of a stationary stochastic process, and the unbiased estimation of RD functions is shown. The condition is denoted the triggering condition. The applied general triggering condition is introduced. Using this condition it is shown that the RD functions are a weighted sum of the correlation functions and the time derivatives of the correlation functions, depending on the formulation of the condition. From this condition the link between the correlation functions and four particular triggering conditions can be derived. These triggering conditions are: level crossing, local extremum, positive point and zero crossing. The main difference between these triggering conditions is the resulting number of triggering points in the estimation process of the RD functions. The main assumption behind these results is that the stochastic processes are stationary and Gaussian distributed with zero mean. The results are derived in detail in appendix A. The response of structures subjected to the filtered Gaussian distributed white noise load, described in chapter 2, will fulfil these assumptions.

Quality assessment of an estimated RD function is an important problem in any application. Two different approaches to quality assessment of RD functions are suggested. The first approach is based on the shape invariance relation of the RD functions, and the second approach is based on the symmetry relation for correlation functions of stationary processes. The strength of the suggested methods is that the experience gained can be transferred from one structure to another. Another problem is to choose triggering levels for the different triggering conditions. Some guidelines are given, but it is extremely difficult to obtain general and consistent rules. Chapter 3 is concluded with a comparison of the speed and accuracy of the RD technique for estimation of correlation functions with an FFT-based approach. The result is that the RD technique can be faster and still estimate correlation functions as accurately as the FFT-based approach. The main advantage of the RD technique, its speed, is thereby underlined.
9.1.4 Chapter 4
This chapter introduces a new technique: Vector triggering Random Decrement (VRD). The motivation for developing this technique is that if the RD technique is applied to a large number of measurements, it becomes time consuming to estimate the full correlation matrix. Instead a vector triggering condition is formulated. It is shown that the VRD functions are a sum of correlation functions corresponding to the size of the vector condition. The assumption is the same: the stochastic processes are stationary and Gaussian distributed with zero mean. The advantage of the VRD technique and its application are illustrated by different simulation studies. This includes the solution to the problem of formulating the vector triggering condition. It is concluded that the VRD technique is an attractive alternative to the RD technique in the analysis of data setups with many measurements.
9.1.5 Chapter 5
The chapter contains a proposal for a new method to predict the variance of the RD functions. A simple method already exists for predicting the variance of RD functions, which only uses the RD functions and the number of triggering points, see appendix A. The main assumption of this method is that the time segments used in the averaging process are uncorrelated. The validity of this assumption has never been investigated. The new method takes the correlation between the time segments into account. The method seems to perform well on simple systems, and it illustrates how difficult it is to estimate the variance, since usually only a single realization (measurement) of the processes is available. In the vicinity of zero and far away from zero the method predicts the variance well. It is much more accurate than the existing simple method, but it also uses more computational time. Whether this increase in accuracy outweighs the increase in computational time is an open question.
9.1.6 Chapter 6
This chapter considers the problems which arise in practical applications of the RD technique. Several bias problems are described. It is shown how these bias problems can be avoided and how they influence the estimate of the correlation functions and the estimate of the modal parameters. The different implementations of the RD and VRD techniques performed as part of this Ph.D. project are discussed. It is described how the RD technique and the VRD technique should be implemented in MATLAB. The chapter is concluded with an example of the use of the implemented functions.
9.1.7 Chapter 7
A new method for estimating the frequency response matrix (FRM) of the linear lumped-mass parameter system is investigated. The method is based on the RD technique and assumes that both the response and the loads are measured. If the RD functions of the response and the load are calculated and Fourier transformed, the FRM can be estimated by simple division. The advantage is that the RD functions decay towards zero with increasing and decreasing time lags if the load is white noise. This means that leakage-free estimates of the FRM are obtained, since it is not necessary to apply any window in the time domain other than the exponential window. The influence of this window is well known from investigations in impact testing: it increases the damping ratio corresponding to the exponential decay of the window. The performance of the method is investigated by a simulation study and by analysis of a laboratory bridge model loaded with white noise through a shaker. The method can remove the leakage error, but at the expense of higher random uncertainty. It is very difficult to use the method on lightly damped systems, since the results are very sensitive to the number of points in the RD functions and the choice of window function.
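A minimal sketch of the idea, assuming the RD functions of the response and of the load (D_y and D_f) are already estimated and sampled with spacing dt, and that the exponential window decay rate beta is a user choice (all names are illustrative, not from the thesis implementation):

    import numpy as np

    def frm_from_rd(D_y, D_f, dt, beta=20.0):
        """FRF estimate H = FFT(w*D_y) / FFT(w*D_f) with an exponential window
        w(t) = exp(-beta*t); beta adds a known amount of damping that can be corrected for."""
        n = len(D_y)
        w = np.exp(-beta * dt * np.arange(n))
        H = np.fft.rfft(w * np.asarray(D_y)) / np.fft.rfft(w * np.asarray(D_f))
        f = np.fft.rfftfreq(n, d=dt)
        return f, H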
9.1.8 Chapter 8
This chapter describes the different analyses of structures which have been performed. The ambient measurements of the Queensborough bridge have been analysed using a combination of the RD technique and the ITD algorithm. The results were promising, since the RD technique produced results comparable with those obtained by different authors using several other well-known algorithms such as FFT, ARMAV and time-frequency domain algorithms. These results encouraged the continued work with the RD technique. As a final documentation of the VRD technique, the response of a laboratory bridge model subjected to Gaussian white noise has been analysed using both the VRD and the RD technique. The VRD technique was faster, and the modal parameters were identical to those obtained using the RD technique. This underlines that the VRD technique is an attractive alternative to the RD technique for systems with a high number of measurements.

The work in this thesis is concluded with the ambient vibration study of the Vestvej bridge. The ambient vibrations have been collected using a bridge measurement system developed as part of this Ph.D. project. The study is the initial investigation for a demonstration project on vibration based inspection. The data are analysed using the RD technique. The future perspective of the project is to perform continuous on-line surveillance using the RD technique in order to utilize its advantages.
It is standard procedure to apply the above assumptions if the loads are ambient, such as wind, waves or traffic. If the loads are deterministic, the RD technique should be used with care. An example could be a structure loaded by a few impulses. Such a response should be analysed carefully using the RD technique, and the guidelines given below are then not valid.
2. Choice of triggering condition and levels: The first problem is to choose a triggering condition. It is recommended to use the positive point triggering condition. Only if the records consist of many data points should the level crossing or the local extremum triggering condition be used. It is not recommended to use the zero crossing triggering condition. The reason is that noise has a high influence on the response around zero, so false triggering points will be detected, resulting in RD functions with slow convergence. Throughout this thesis it has been emphasized repeatedly, based on experimental results, that low-level triggering points should not be used. This illustrates the disadvantage of the zero crossing triggering condition.
The second problem is the choice of triggering levels. From all the measurements, pick out a single reference measurement or a measurement with a relatively high standard deviation. Choose different triggering levels and calculate the RD functions and the estimation time. An appropriate division would be [a1 a2] = [0.5 1]σ_X, [a1 a2] = [1 1.5]σ_X and [a1 a2] = [1.5 ∞]σ_X, or [a1 a2] = [1 ∞]σ_X and [a1 a2] = [1.5 ∞]σ_X, etc. The SIC can be used to check the shape invariance of the RD functions. Any levels that do not give SIC ≈ 1 should be omitted. If all RD functions have a high SIC and the computational time is not important, the minimum and maximum levels which have been investigated should be used. If the computational time is important, the levels which give the lowest number of triggering points and a high SIC value should be chosen. The number of points used in the RD functions can also be determined from this initial investigation by taking a number of points which ensures that the RD functions have just decayed sufficiently. Remember to use positive and negative time lags.
3. Validation of the RD functions:
After choosing the triggering condition and determining the triggering levels and the number of points, the RD functions can be calculated. Calculate all RD functions corresponding to estimating the full correlation matrix. In order to extract maximum information from the measurements, the sign of the triggering levels should be shifted and the RD functions calculated again. The averages of the RD functions, normalized to be the correlation functions, are the final estimate of the correlation functions with positive and negative time lags.
The validation of the quality of the RD functions is very important. Use the symmetry relation to calculate an estimate of the correlation functions for positive time lags and the corresponding error function, see section 3.7.2 (a small sketch of this check is given after this list). By plotting the average correlation functions and the error function, the quality can be assessed and the number of points used in the modal parameter extraction procedure can be determined. To check whether all RD functions should be used in the modal parameter extraction procedure, the ratio between the standard deviation of the estimated correlation function and the error function can be calculated using eq. (3.64). This approach makes it possible to omit the RD functions with a high noise content.
The last step in the validation of the RD functions is to calculate the absolute values of the FFT of all RD functions and average them. The resulting spectral density contains information about the number of modes present in the RD functions and the corresponding natural frequencies. This gives an opportunity to select a proper model order in the modal parameter extraction procedure.
4. Extraction of modal parameters:
From the validation process the approximate number of physical modes and the appropriate number of points from the RD functions are known. Using this information, the model order and the number of points should be varied in order to investigate the sensitivity of the modal parameters to these choices. By using stabilization diagrams in combination with the MCF and a restriction to small damping ratios, a proper model order can be selected, and the eigenfrequencies, damping ratios and mode shapes can be extracted. Usually the mode shapes indicate how accurately the modal parameters are estimated.
This recipe can of course only be a guideline for the application of the RD and VRD techniques. Experience with the techniques will make it easier to select proper triggering levels, in order to obtain accurate RD functions and low estimation times.
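The symmetry-based check referred to in step 3 can be sketched as follows (a minimal sketch; the normalization and the exact estimator of section 3.7.2 and eq. (3.64) are not reproduced here). For stationary processes R_XY(τ) = R_YX(−τ), so the estimate of D_XY at positive lags and the estimate of D_YX at negative lags can be averaged, and their half-difference serves as an error function:

    import numpy as np

    def symmetry_check(Dxy, Dyx, n):
        """Quality check based on R_XY(tau) = R_YX(-tau).
        Dxy and Dyx are sampled at lags -n..n (length 2*n + 1, lag 0 at index n).
        Returns the averaged estimate and the error function for lags 0..n."""
        pos = Dxy[n:]            # D_XY at lags 0, 1, ..., n
        mirrored = Dyx[n::-1]    # D_YX at lags 0, -1, ..., -n
        return 0.5 * (pos + mirrored), 0.5 * (pos - mirrored)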
The chapter also describes two different methods proposed for quality assessment of the RD functions. The methods are general, so that experience with them can be carried over from one structure to another. Furthermore, it is indicated how triggering levels can be chosen so that the RD functions are estimated as accurately as possible. The chapter concludes with a comparison of different methods for estimating correlation functions. It is shown, by means of simulation, that the RD technique can be just as accurate as a method based on the FFT algorithm while having a significantly faster computation time.
Sections A.1 and A.2 describe the mathematical tools which are applied in the derivations in later sections. The density function of a conditional multivariate Gaussian distributed stochastic variable is described in section A.1. In section A.2 a relation between the density functions of two stochastic variables under different conditions is derived.
In section A.3 the definition of the RD functions is given. Furthermore, the estimate of the RD functions is introduced. Based on these definitions, the theoretical general triggering condition introduced by Brincker et al. [2], [3] is described in section A.4. In section A.5 the condition is generalized to the applied general triggering condition. This triggering condition is important, since the relation between the RD functions of any particular triggering condition and the correlation functions can be extracted directly from the results of the applied general triggering condition.
Sections A.6 - A.9 describe four commonly used triggering conditions. The relation between the RD functions using these triggering conditions and the correlation functions is derived directly from the results of section A.5. Furthermore, an approximate expression for the variance of the estimated RD functions is derived. The expected number of triggering points obtained by applying the different triggering conditions is also given.
\[
\mathrm{Cov}[X_1 \,|\, X_2] = V_{X_1 X_1} - V_{X_1 X_2} V_{X_2 X_2}^{-1} V_{X_2 X_1} \tag{A.4}
\]
\[
E[X_1 \,|\, X_2] = V_{X_1 X_2} V_{X_2 X_2}^{-1} x_2 = R_{X_1 X_2} R_{X_2 X_2}^{-1} x_2 \tag{A.5}
\]
where R denotes a correlation matrix. The zero mean value vector implies that the covariance matrix and the correlation matrix are identical. Eqs. (A.3) - (A.5) are the basic equations used when the relationship between the RD functions and the correlation functions, and an approximate expression for the variance of the RD functions, are derived in sections A.4 and A.5.
A.2 Conditional Densities
A conditional variable is written as X_1 | T. The condition T can be of different complexity, e.g.
\[
T_1 = \{X_2 = a,\; X_3 = b\} \tag{A.6}
\]
\[
T_2 = \{a_1 \le X_2 < a_2,\; b_1 \le X_3 \le b_2\} \tag{A.7}
\]
In the following sections it will be necessary to have a relation between the density function of a variable under the condition in eq. (A.6) and the density function of a variable under the condition in eq. (A.7). This relation is derived from general relations between the density and distribution functions, and the relation between conditional distribution functions and probabilities.
\[
p_{X_1 | T_2}(x_1 \,|\, T_2) = \frac{1}{k} \int_{a_1}^{a_2}\!\!\int_{b_1}^{b_2} p_{X_1 X_2 X_3}(x_1, x_2, x_3)\, dx_2\, dx_3
= \frac{1}{k} \int_{a_1}^{a_2}\!\!\int_{b_1}^{b_2} p_{X_1 | T_1}(x_1 \,|\, T_1)\, p_{X_2 X_3}(x_2, x_3)\, dx_2\, dx_3 \tag{A.8}
\]
where
\[
k = P(a_1 \le X_2 < a_2,\; b_1 \le X_3 < b_2) = \int_{a_1}^{a_2}\!\!\int_{b_1}^{b_2} p_{X_2 X_3}(x_2, x_3)\, dx_2\, dx_3 \tag{A.9}
\]
and the corresponding interval is then allowed to shrink to a point when a condition of the type X_3 = b is needed. This procedure is used in sections A.6 - A.9. The above equations follow Papoulis [8].
These general relationships were first established in Brincker et al. [2], [3]. From the results of this section it is possible to derive the relation between the RD functions and the correlation functions for some of the triggering conditions of practical interest, see e.g. section A.6. In general, however, the derivation becomes very complex. This is the motivation for introducing the applied general triggering condition.
\[
D_{YX}(\tau) = E[Y(t+\tau) \,|\, T^{GA}_{X(t)}]
= \int_{-\infty}^{\infty} y\, p_{Y|T^{GA}}(y \,|\, T^{GA}_{X(t)})\, dy
= \frac{1}{k_1} \int_{a_1}^{a_2}\!\!\int_{b_1}^{b_2}\!\!\int_{-\infty}^{\infty} y\, p_{Y X \dot{X}}(y, x, \dot{x})\, dy\, dx\, d\dot{x}
= \frac{1}{k_1} \int_{a_1}^{a_2}\!\!\int_{b_1}^{b_2} E[Y(t+\tau) \,|\, T^{GT}_{X(t)}]\, p_{X \dot{X}}(x, \dot{x})\, dx\, d\dot{x} \tag{A.30}
\]
where the results of eqs. (A.8) and (A.9) and the following have been used:
\[
k_1 = \int_{a_1}^{a_2}\!\!\int_{b_1}^{b_2} p_{X \dot{X}}(x, \dot{x})\, dx\, d\dot{x} \tag{A.31}
\]
\[
T^{GT}_{X(t)} = \{X(t) = x,\; \dot{X}(t) = \dot{x}\} \tag{A.32}
\]
The results from the theoretical general triggering condition, see eq. (A.25), are inserted in eq. (A.30):
\[
D_{YX}(\tau) = E[Y(t+\tau) \,|\, T^{GA}_{X(t)}]
= \frac{1}{k_1} \int_{a_1}^{a_2}\!\!\int_{b_1}^{b_2} \left( \frac{R_{YX}(\tau)}{\sigma_X^2}\, x - \frac{R'_{YX}(\tau)}{\sigma_{\dot{X}}^2}\, \dot{x} \right) p_{X \dot{X}}(x, \dot{x})\, dx\, d\dot{x}
= \frac{R_{YX}(\tau)}{\sigma_X^2}\, \tilde{a} - \frac{R'_{YX}(\tau)}{\sigma_{\dot{X}}^2}\, \tilde{b} \tag{A.33}
\]
where \(\tilde{a}\) and \(\tilde{b}\) are given by
\[
\tilde{a} = \frac{\int_{a_1}^{a_2}\int_{b_1}^{b_2} x\, p_{X\dot{X}}(x,\dot{x})\, dx\, d\dot{x}}{\int_{a_1}^{a_2}\int_{b_1}^{b_2} p_{X\dot{X}}(x,\dot{x})\, dx\, d\dot{x}}
= \frac{\int_{a_1}^{a_2} x\, p_{X}(x)\, dx}{\int_{a_1}^{a_2} p_{X}(x)\, dx} \tag{A.34}
\]
\[
\tilde{b} = \frac{\int_{a_1}^{a_2}\int_{b_1}^{b_2} \dot{x}\, p_{X\dot{X}}(x,\dot{x})\, dx\, d\dot{x}}{\int_{a_1}^{a_2}\int_{b_1}^{b_2} p_{X\dot{X}}(x,\dot{x})\, dx\, d\dot{x}}
= \frac{\int_{b_1}^{b_2} \dot{x}\, p_{\dot{X}}(\dot{x})\, d\dot{x}}{\int_{b_1}^{b_2} p_{\dot{X}}(\dot{x})\, d\dot{x}} \tag{A.35}
\]
In the derivation of eqs. (A.33), (A.34) and (A.35) it has been used that X(t) and Ẋ(t) are independent, which is true since X(t) is assumed to be Gaussian distributed and stationary.
Equation (A.33) describes the relationship between the RD functions from the applied general triggering condition and the correlation functions and the time derivatives of the correlation functions. From this result the weights of R_YX(τ) and R'_YX(τ) can be extracted directly by inserting the triggering bounds [a1 a2], [b1 b2] in eqs. (A.34) and (A.35). In principle there is no significant difference between the results of T^GT_{X(t)} and T^GA_{X(t)}; the only difference is the scaling or weighting of the correlation functions.
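Since X(t) is Gaussian with zero mean, the weight ã of eq. (A.34) is simply a truncated-Gaussian mean and can be evaluated numerically. The following sketch (illustrative only, not part of the derivation) computes ã for a band [a1, a2] using the standard normal density and distribution functions, and reproduces ã = σ_X√(2/π) for the bounds [0, ∞); for a velocity band symmetric about zero, b̃ = 0 by eq. (A.35):

    import numpy as np
    from scipy.stats import norm

    def trig_weight(a1, a2, sigma):
        """Conditional mean E[X | a1 <= X < a2] of a zero-mean Gaussian X, cf. eq. (A.34)."""
        z1, z2 = a1 / sigma, a2 / sigma
        return sigma * (norm.pdf(z1) - norm.pdf(z2)) / (norm.cdf(z2) - norm.cdf(z1))

    sigma_x = 1.0
    print(trig_weight(0.0, np.inf, sigma_x))             # sqrt(2/pi) ~ 0.798
    print(trig_weight(1.5 * sigma_x, np.inf, sigma_x))   # weight for levels [1.5*sigma_x, inf)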
The variance of the conditional stochastic process \(Y(t+\tau) \,|\, T^{GA}_{X(t)}\) is defined as
\[
\mathrm{Var}[Y(t+\tau) \,|\, T^{GA}_{X(t)}] = \int_{-\infty}^{\infty} (y - \mu_{Y|T^{GA}})^2\, p_{Y|T^{GA}}(y \,|\, T^{GA}_{X(t)})\, dy
= \int_{-\infty}^{\infty} y^2\, p_{Y|T^{GA}}(y \,|\, T^{GA}_{X(t)})\, dy - \mu_{Y|T^{GA}}^2 \tag{A.36}
\]
where the conditional mean value is equal to the RD function (the short-hand \(\mu_{Y|T^{GA}}\) is used for \(\mu_{Y|T^{GA}_{X(t)}}\))
\[
\mu_{Y|T^{GA}}^2 = E[Y(t+\tau) \,|\, T^{GA}_{X(t)}]^2 = D_{YX}^2(\tau) \tag{A.37}
\]
and can be calculated straightforwardly from eqs. (A.33) - (A.35). The first term in eq. (A.36) is calculated using eq. (A.8):
\[
\int_{-\infty}^{\infty} y^2\, p_{Y|T^{GA}}(y \,|\, T^{GA}_{X(t)})\, dy
= \frac{1}{k_1} \int_{-\infty}^{\infty}\!\!\int_{a_1}^{a_2}\!\!\int_{b_1}^{b_2} y^2\, p_{YX\dot{X}}(y, x, \dot{x})\, dx\, d\dot{x}\, dy
= \frac{1}{k_1} \int_{a_1}^{a_2}\!\!\int_{b_1}^{b_2} \int_{-\infty}^{\infty} y^2\, p_{Y|T^{GT}}(y \,|\, T^{GT}_{X(t)})\, dy\;\, p_{X\dot{X}}(x, \dot{x})\, dx\, d\dot{x}
\]
so that
\[
\mathrm{Var}[Y(t+\tau) \,|\, T^{GA}_{X(t)}] =
\mathrm{Var}[Y \,|\, T^{GT}_{X(t)}]
+ \frac{1}{k_1} \int_{a_1}^{a_2}\!\!\int_{b_1}^{b_2} \left( \frac{R_{YX}(\tau)}{\sigma_X^2}\, x - \frac{R'_{YX}(\tau)}{\sigma_{\dot{X}}^2}\, \dot{x} \right)^2 p_{X\dot{X}}(x, \dot{x})\, dx\, d\dot{x}
- \left( \frac{1}{k_1} \frac{R_{YX}(\tau)}{\sigma_X^2}\, k_2 - \frac{1}{k_1} \frac{R'_{YX}(\tau)}{\sigma_{\dot{X}}^2}\, k_3 \right)^2 \tag{A.39}
\]
where the following abbreviations are used:
\[
k_1 = \int_{a_1}^{a_2}\!\!\int_{b_1}^{b_2} p_{X\dot{X}}(x, \dot{x})\, dx\, d\dot{x} \tag{A.40}
\]
\[
k_2 = \int_{a_1}^{a_2}\!\!\int_{b_1}^{b_2} x\, p_{X\dot{X}}(x, \dot{x})\, dx\, d\dot{x} \tag{A.41}
\]
\[
k_3 = \int_{a_1}^{a_2}\!\!\int_{b_1}^{b_2} \dot{x}\, p_{X\dot{X}}(x, \dot{x})\, dx\, d\dot{x} \tag{A.42}
\]
\[
k_4 = \int_{a_1}^{a_2}\!\!\int_{b_1}^{b_2} x^2\, p_{X\dot{X}}(x, \dot{x})\, dx\, d\dot{x} \tag{A.43}
\]
\[
k_5 = \int_{a_1}^{a_2}\!\!\int_{b_1}^{b_2} \dot{x}^2\, p_{X\dot{X}}(x, \dot{x})\, dx\, d\dot{x} \tag{A.44}
\]
The conditional variance in eq. (A.39) reduces to
\[
\mathrm{Var}[Y(t+\tau) \,|\, T^{GA}_{X(t)}] = \sigma_Y^2 \left( 1 - \left( \frac{R_{YX}(\tau)}{\sigma_Y \sigma_X} \right)^2 - \left( \frac{R'_{YX}(\tau)}{\sigma_Y \sigma_{\dot{X}}} \right)^2 \right)
+ \left( \frac{R_{YX}(\tau)}{\sigma_X^2} \right)^2 \left( \frac{k_4}{k_1} - \left( \frac{k_2}{k_1} \right)^2 \right)
+ \left( \frac{R'_{YX}(\tau)}{\sigma_{\dot{X}}^2} \right)^2 \left( \frac{k_5}{k_1} - \left( \frac{k_3}{k_1} \right)^2 \right) \tag{A.45}
\]
The variance of the conditional stochastic process Y(t+τ) | T^GA_{X(t)} is thus basically only a function of the correlation functions and the time derivatives of the correlation functions.
Equations (A.33) and (A.45) are the mathematical basis for the RD technique applied to stationary Gaussian stochastic processes. The results have been derived for the case of cross RD functions. The results for auto RD functions are obtained by substituting Y(t+τ) with X(t+τ), or vice versa. The relation between the RD functions of any triggering condition and the correlation functions can immediately be derived from the results of eqs. (A.33) and (A.45). The derivation is shown in sections A.6 - A.9 for the commonly used triggering conditions.
Eqs. (A.50), (A.52) and (A.54) were first derived by Vandiver et al. [1] for the auto RD functions. Their work was based on a more complicated and less general approach, since they operated directly on the density functions instead of using eqs. (A.3) and (A.4). Brincker et al. [2], [3] derived eqs. (A.50), (A.52) and (A.54) using the results from section A.4 and the total representation theorem, see Ditlevsen [9]. Their proof included cross RD functions.
If the triggering levels are chosen as [a1 a2] = [0 ∞] (or alternatively [−∞ 0]), the maximum number of triggering points is always obtained. The resulting triggering level in this case is
\[
\tilde{a} = \frac{\int_{a_1}^{a_2} x\, p_X(x)\, dx}{\int_{a_1}^{a_2} p_X(x)\, dx}, \qquad [a_1\ a_2] = [0,\, \infty] \;\Rightarrow\; \tilde{a} = \sqrt{\frac{2}{\pi}}\, \sigma_X \tag{A.62}
\]
The variance of \(Y(t+\tau) \,|\, T^{E}_{X(t)}\) is obtained by calculating \(k_1, k_2, k_3, k_4\) and \(k_5\) from eqs. (A.40) - (A.44). The fractions used in eq. (A.45) become
\[
\left( \frac{k_2}{k_1} \right)^2 = \left( \frac{\int_{a_1}^{a_2} x\, p_X(x)\, dx}{\int_{a_1}^{a_2} p_X(x)\, dx} \right)^2, \qquad
\frac{k_4}{k_1} = \frac{\int_{a_1}^{a_2} x^2\, p_X(x)\, dx}{\int_{a_1}^{a_2} p_X(x)\, dx}, \qquad
\frac{k_3}{k_1} = \frac{k_5}{k_1} = 0 \tag{A.63}
\]
The variance of the conditional process follows by inserting the results of eq. (A.63) into eq. (A.45):
\[
\mathrm{Var}[Y(t+\tau) \,|\, T^{E}_{X(t)}] = \sigma_Y^2 \left( 1 - \left( \frac{R_{YX}(\tau)}{\sigma_Y \sigma_X} \right)^2 - \left( \frac{R'_{YX}(\tau)}{\sigma_Y \sigma_{\dot{X}}} \right)^2 \right) + k^{E} \left( \frac{R_{YX}(\tau)}{\sigma_X^2} \right)^2 \tag{A.64}
\]
where \(k^{E}\) is
\[
k^{E} = \frac{\int_{a_1}^{a_2} x^2\, p_X(x)\, dx}{\int_{a_1}^{a_2} p_X(x)\, dx} - \left( \frac{\int_{a_1}^{a_2} x\, p_X(x)\, dx}{\int_{a_1}^{a_2} p_X(x)\, dx} \right)^2 \tag{A.65}
\]
If the triggering levels are especially chosen as \([a_1\ a_2] = [0,\, \infty]\), the variance becomes
\[
\mathrm{Var}[Y(t+\tau) \,|\, T^{E}_{X(t)}] = \sigma_Y^2 \left( 1 - \frac{2}{\pi} \left( \frac{R_{YX}(\tau)}{\sigma_Y \sigma_X} \right)^2 - \left( \frac{R'_{YX}(\tau)}{\sigma_Y \sigma_{\dot{X}}} \right)^2 \right) \tag{A.66}
\]
The variance of the conditional variable thus depends on the chosen triggering levels. Furthermore, the variance of X(t+τ) | T^E_{X(t)} at time lag zero is
\[
\mathrm{Var}[X(t+\tau) \,|\, T^{E}_{X(t)}]\Big|_{\tau=0} = k^{E} \tag{A.67}
\]
This result differs from the level crossing triggering condition, since the variance at time lag zero is not zero. The reason is that the triggering levels a1, a2 define two bounds instead of only a single value.
The RD functions using local extremum triggering are estimated as the empirical mean, where the processes are assumed to be ergodic:
\[
\hat{D}_{YX}(\tau) = \frac{1}{N} \sum_{i=1}^{N} y(t_i + \tau) \,\Big|\, a_1 \le x(t_i) < a_2,\; \dot{x}(t_i) = 0 \tag{A.68}
\]
where x(t) and y(t) are realizations of X(t) and Y(t). If the different time segments in the averaging process are assumed to be independent, the variance of the RD functions can be derived from eq. (A.66):
\[
\mathrm{Var}[\hat{D}_{YX}(\tau)] \approx \frac{\sigma_Y^2}{N} \left( 1 - \left( \frac{R_{YX}(\tau)}{\sigma_Y \sigma_X} \right)^2 - \left( \frac{R'_{YX}(\tau)}{\sigma_Y \sigma_{\dot{X}}} \right)^2 \right) + \frac{k^{E}}{N} \left( \frac{R_{YX}(\tau)}{\sigma_X^2} \right)^2 \tag{A.69}
\]
\[
\tilde{b} = \frac{\int_{-\infty}^{\infty} \dot{x}\, p_{\dot{X}}(\dot{x})\, d\dot{x}}{\int_{-\infty}^{\infty} p_{\dot{X}}(\dot{x})\, d\dot{x}} = 0 \tag{A.74}
\]
The RD functions for the positive point triggering condition are proportional to the correlation functions:
\[
D_{YX}(\tau) = E[Y(t+\tau) \,|\, T^{P}_{X(t)}] = \frac{R_{YX}(\tau)}{\sigma_X^2}\, \tilde{a} \tag{A.75}
\]
If the triggering bounds are chosen as \([a_1\ a_2] = [0,\, \infty]\), the maximum number of triggering points is obtained:
\[
[a_1\ a_2] = [0,\, \infty] \;\Rightarrow\; \tilde{a} = \sqrt{\frac{2}{\pi}}\, \sigma_X \tag{A.76}
\]
The variance of \(Y(t+\tau) \,|\, T^{P}_{X(t)}\) is obtained by calculating \(k_1, k_2, k_3, k_4\) and \(k_5\) from eqs. (A.40) - (A.44). The fractions used in eq. (A.45) become
\[
\frac{k_4}{k_1} = \frac{\int_{a_1}^{a_2} x^2\, p_X(x)\, dx}{\int_{a_1}^{a_2} p_X(x)\, dx}, \qquad
\frac{k_2}{k_1} = \frac{\int_{a_1}^{a_2} x\, p_X(x)\, dx}{\int_{a_1}^{a_2} p_X(x)\, dx}, \qquad
\frac{k_5}{k_1} = \sigma_{\dot{X}}^2, \qquad \frac{k_3}{k_1} = 0 \tag{A.77}
\]
Inserting these results into eq. (A.45), the variance becomes
\[
\mathrm{Var}[Y(t+\tau) \,|\, T^{P}_{X(t)}] = \sigma_Y^2 \left( 1 - \left( \frac{R_{YX}(\tau)}{\sigma_Y \sigma_X} \right)^2 \right) + k^{P} \left( \frac{R_{YX}(\tau)}{\sigma_X^2} \right)^2 \tag{A.78}
\]
where \(k^{P}\) is
\[
k^{P} = \frac{\int_{a_1}^{a_2} x^2\, p_X(x)\, dx}{\int_{a_1}^{a_2} p_X(x)\, dx} - \left( \frac{\int_{a_1}^{a_2} x\, p_X(x)\, dx}{\int_{a_1}^{a_2} p_X(x)\, dx} \right)^2 \tag{A.79}
\]
If especially \([a_1\ a_2] = [0,\, \infty]\), the variance reduces to
\[
\mathrm{Var}[Y(t+\tau) \,|\, T^{P}_{X(t)}] = \sigma_Y^2 \left( 1 - \frac{2}{\pi} \left( \frac{R_{YX}(\tau)}{\sigma_Y \sigma_X} \right)^2 \right) \tag{A.80}
\]
The variance of the conditional auto process at time lag zero is
\[
\mathrm{Var}[X(t+\tau) \,|\, T^{P}_{X(t)}]\Big|_{\tau=0} = \sigma_X^2 \left( 1 - \frac{2}{\pi} \right) \tag{A.81}
\]
which differs from the variance of the conditional auto process using level crossing triggering, since the above is not zero.
The RD functions using positive point triggering are estimated as the empirical mean, where the processes are assumed to be ergodic:
\[
\hat{D}_{YX}(\tau) = \frac{1}{N} \sum_{i=1}^{N} y(t_i + \tau) \,\Big|\, a_1 < x(t_i) \le a_2 \tag{A.82}
\]
where x(t) and y(t) are realizations of X(t) and Y(t). If the different time segments in the averaging process are assumed to be independent, the variance of the RD functions can be derived from eq. (A.80):
\[
\mathrm{Var}[\hat{D}_{YX}(\tau)] \approx \frac{\sigma_Y^2}{N} \left( 1 - \left( \frac{R_{YX}(\tau)}{\sigma_Y \sigma_X} \right)^2 \right) + \frac{k^{P}}{N} \left( \frac{R_{YX}(\tau)}{\sigma_X^2} \right)^2 \tag{A.83}
\]
The above relation for the variance should be used with care, since it is very unlikely that
the time segments are independent.
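The estimator of eq. (A.82) and the triggering point count of eq. (A.84) can be illustrated by a small simulation (a minimal sketch with a white-noise stand-in for X(t); in practice X(t) is the measured structural response):

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    n, n_lags = 200000, 50
    x = rng.standard_normal(n)      # white-noise stand-in for X(t), sigma_X = 1
    a1, a2 = 1.0, np.inf            # triggering bounds [a1 a2]

    trig = np.where((x[:n - n_lags] > a1) & (x[:n - n_lags] <= a2))[0]
    D = np.mean([x[i:i + n_lags] for i in trig], axis=0)   # eq. (A.82), auto RD function

    # eq. (A.84): expected number of triggering points is N * P(a1 < X <= a2)
    print(trig.size, (n - n_lags) * (norm.cdf(a2) - norm.cdf(a1)))

    # eq. (A.75): at lag zero D_XX(0) = a~, since R_XX(0)/sigma_X^2 = 1
    a_tilde = (norm.pdf(a1) - norm.pdf(a2)) / (norm.cdf(a2) - norm.cdf(a1))
    print(D[0], a_tilde)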
A.8.1 Expected Number of Triggering Points
The expected number of triggering points per unit time, dN(a1, a2)/dt, is simply the probability that a1 ≤ X(t) ≤ a2:
\[
E\!\left[ \frac{dN(a_1, a_2)}{dt} \right] = \int_{a_1}^{a_2} p_X(x)\, dx \tag{A.84}
\]
A.10 Summary
This appendix establishes the mathematical basis for the RD technique applied to stationary zero-mean Gaussian distributed processes. A definition of the RD functions is given in section A.3. Furthermore, the estimation process is discussed and it is shown that the estimates of the RD functions are unbiased. Section A.4 introduces the theoretical general triggering condition. The relations between the RD functions of this condition and the correlation functions are derived. Since this triggering condition only has theoretical interest, a generalization of this condition, the applied general triggering condition, is introduced in section A.5. The relation between the RD functions and the correlation functions is derived. This triggering condition is important, since the relations between the RD functions of the triggering conditions used in practice and the correlation functions can be derived directly from the results of section A.5. The four different triggering conditions which are used in practice, level crossing, local extremum, positive point and zero crossing with positive slope triggering, are described in sections A.6 - A.9. The relations between the RD functions and the correlation functions are derived, an approximate expression for the variance of the estimated RD functions is given and, finally, the expected number of triggering points is given. Sections A.6 - A.9 give the fundamental mathematical description of the RD functions. The results of this appendix are used throughout the thesis.
Bibliography
[1] Vandiver, J.K., Dunwoody, A.B., Campbell, R.B. & Cook, M.F. A Mathematical
Basis for the Random Decrement Vibration Signature Analysis Technique. Journal of
Mechanical Design, Vol. 104, April 1982, pp. 307-313.
[2] Brincker, R., Krenk, S., Kirkegaard, P.H. & A. Rytter. Identication of Dynamical
Properties from Correlation Function Estimates. Bygningsstatiske Meddelelser, Vol.
63, No. 1, 1992, pp. 1-38.
[3] Brincker, R., Krenk, S. & Jensen, J.L. Estimation of Correlation Functions by the Random Dec Technique. Proc. Skandinavisk Forum for Stokastisk Mekanik, Lund, Sweden, Aug. 30-31, 1990.
[4] Bedewi, N.A. & Yang, J.C.S. The Random Decrement Technique: A More Efficient Estimator of the Correlation Function. Proc. 1990 ASME International Conference and Exposition, Boston, MA, USA, August 5-9, pp. 195-201.
[5] Yang, J.C.S., Qi, G.Z. & Kan, C.D. Mathematical Base of the Random Decrement Technique. Proc. 8th International Modal Analysis Conference, Kissimmee, Florida, USA, 1990, pp. 28-34.
[6] Melsa, J.L. & Sage, A.P. An Introduction to Probability and Stochastic Processes. Prentice-Hall, Inc., Englewood Cliffs, N.J., 1973. ISBN: 0-13-034850-3.
[7] Söderström, T. & Stoica, P. System Identification. Prentice Hall International (UK) Ltd, 1989. ISBN: 0-13-881236.
[8] Papoulis, A. Probability, Random Variables and Stochastic Processes. McGraw-Hill,
Inc. 1991. ISBN 0-7-100870-5.
[9] Ditlevsen, O. Uncertainty Modelling. McGraw-Hill Inc. 1981. ISBN: 0-07-010746-0.
[10] Rice, S.O. Mathematical Analysis of Random Noise. Bell Syst. Tech. J., Vol. 23, pp. 282-332; Vol. 24, pp. 46-156. Reprinted in N. Wax, Selected Papers on Noise and Stochastic Processes. Dover Publications, Inc., New York.
[11] Hummelshøj, L.G., Møller, H. & Pedersen, L. Skadesdetektering ved Responsmåling (Damage Detection by Response Measurement). M.Sc. Thesis (in Danish), Aalborg University, 1991.
[12] Brincker, R., Jensen, J.L. & Krenk, S. Spectral Estimation by the Random Dec Technique. Proc. 9th International Conference on Experimental Mechanics, Lyngby, Copenhagen, August 20-24, 1990.
[13] Lin, Y.K. Probabilistic Theory of Structural Dynamics. 3rd Edition, McGraw-Hill,
Inc., 1986. ISBN: 0-88275-377-0.