Simulating Non-Gaussian Processes
Abstract
This paper presents a new numerical scheme for simulating stochastic processes specified by their marginal distribution functions and covariance functions. Stochastic samples are first generated so that they automatically satisfy the target marginal distribution functions. An iterative algorithm is then proposed to match the simulated covariance function of the stochastic samples to the target covariance function, and only a few iterations are needed to converge to a required accuracy. Several explicit representations, based on Karhunen-Loève expansion and Polynomial Chaos expansion, are further developed to represent the obtained stochastic samples in series form. The proposed methods can be applied to non-gaussian and non-stationary stochastic processes, and three examples illustrate their accuracy and efficiency.
Keywords: Stochastic samples, Non-gaussian, Non-stationary, Karhunen-Loève expansion, Polynomial Chaos expansion
∗Corresponding author. Email address: [email protected] (Zhibao Zheng)
in the frequency domain and is initially developed for gaussian stochastic processes [11]. It has been extended to non-gaussian stochastic processes by combining the spectral representation method with non-linear transformations [10], i.e., transforming gaussian stochastic samples generated by the spectral representation method into the non-gaussian stochastic process and matching the target power spectral density function and non-gaussian marginal distribution function. Extensive studies based on this method can be found in [12, 13, 14, 15]. Different from the spectral representation method, Karhunen-Loève (KL) expansion [1, 16] is implemented in the time or space domain and is usually used in the simulation of stationary and non-stationary gaussian processes [17, 18, 19, 20]. Iterative algorithms for updating the non-gaussian expanded random variables are proposed in [21, 22] for the simulation of non-gaussian stochastic processes; the method can be applied to highly skewed non-gaussian marginal distribution functions. Hence, KL expansion provides a unified and powerful framework for the simulation of stochastic processes, which is potentially capable of providing a better fit to non-gaussian and non-translational data [23]. Another important technique, Polynomial Chaos (PC) expansion, has also been developed for the simulation of non-gaussian and non-stationary stochastic processes and fields in [24, 25]. The method represents the target stochastic process and field as multidimensional Hermite polynomial chaos in a set of normalized gaussian random variables. The accuracy and efficiency of this method were further examined in [26, 27].
In this paper, we present numerical schemes for simulating non-gaussian and non-stationary stochastic processes that are specified by their covariance functions and non-gaussian marginal distribution functions. The basic idea is to first generate stochastic samples that satisfy the target marginal distribution functions, and then to match the target covariance functions by means of a dedicated iterative algorithm. In this way, the simulation of both gaussian and non-gaussian stochastic processes can be implemented in a unified framework, since the marginal distribution functions are automatically satisfied by the generated samples and the accuracy and efficiency of the simulation depend only on matching the target covariance functions. Another advantage is that the proposed iterative algorithm applies to stationary and non-stationary stochastic processes without any modification. Thus, the proposed method can be considered as a unified numerical scheme for simulating samples of stochastic processes. Further, it is usually not convenient to apply raw stochastic samples in practical problems. In this paper, we exploit KL expansion to expand the obtained stochastic samples, since KL expansion is optimal among series expansion methods in the global mean square error with respect to the number of random variables in the representation. Thus, the proposed strategy is capable of representing stochastic processes with sufficient accuracy using as few random variables as possible. In order to meet the requirements of different practical problems, we also exploit PC expansion and KL-PC expansion (a combination of KL expansion and PC expansion) to represent the obtained stochastic samples; their methodologies are similar to that of KL expansion but based on different expansions. The accuracy and efficiency are demonstrated by several numerical examples. The proposed methods can be readily generalized to multi-dimensional random fields [25, 28, 29, 30, 31], but this is beyond the scope of this article and will be studied in subsequent papers.
The paper is organized as follows: Section 2 presents a new algorithm for simulating stochastic samples, Section 3 develops several numerical algorithms for representing the obtained stochastic samples, and Section 4 gives three illustrative examples to demonstrate the proposed algorithms.
tively. Expanding Eq.(1) yields
$$
\begin{aligned}
T_{ij} &= \frac{1}{N-1}\sum_{k=1}^{N}\left(\eta_i(\theta_k)\,\eta_j(\theta_k) - \eta_i(\theta_k)\,\bar{\eta}_j - \bar{\eta}_i\,\eta_j(\theta_k) + \bar{\eta}_i\,\bar{\eta}_j\right) \\
&= \frac{1}{N-1}\left[\sum_{k=1}^{N}\eta_i(\theta_k)\,\eta_j(\theta_k) - \frac{1}{N}\left(\sum_{k=1}^{N}\eta_i(\theta_k)\right)\left(\sum_{k=1}^{N}\eta_j(\theta_k)\right)\right. \\
&\qquad\left. - \frac{1}{N}\left(\sum_{k=1}^{N}\eta_i(\theta_k)\right)\left(\sum_{k=1}^{N}\eta_j(\theta_k)\right) + N\left(\frac{1}{N}\sum_{k=1}^{N}\eta_i(\theta_k)\right)\left(\frac{1}{N}\sum_{k=1}^{N}\eta_j(\theta_k)\right)\right] \\
&= \frac{1}{N-1}\left[\sum_{k=1}^{N}\eta_i(\theta_k)\,\eta_j(\theta_k) - \frac{1}{N}\left(\sum_{k=1}^{N}\eta_i(\theta_k)\right)\left(\sum_{k=1}^{N}\eta_j(\theta_k)\right)\right] \\
&= \frac{1}{N-1}\sum_{k=1}^{N}\eta_i(\theta_k)\,\eta_j(\theta_k) - \frac{1}{N(N-1)}\left(\sum_{k=1}^{N}\eta_i(\theta_k)\right)\left(\sum_{k=1}^{N}\eta_j(\theta_k)\right)
\end{aligned}
\tag{2}
$$
$$C = P^{\mathrm{T}} P \tag{5}$$
and
$$T = Q^{\mathrm{T}} Q \tag{6}$$
$$Y' = Y Q^{-1} P \tag{7}$$
$$
\begin{aligned}
T' &= \frac{Y'^{\mathrm{T}} Y'}{N-1} - \frac{Y'^{\mathrm{T}} U U^{\mathrm{T}} Y'}{N(N-1)} \\
&= \frac{P^{\mathrm{T}} Q^{-\mathrm{T}} Y^{\mathrm{T}} Y Q^{-1} P}{N-1} - \frac{P^{\mathrm{T}} Q^{-\mathrm{T}} Y^{\mathrm{T}} U U^{\mathrm{T}} Y Q^{-1} P}{N(N-1)} \\
&= P^{\mathrm{T}} Q^{-\mathrm{T}} \left[\frac{Y^{\mathrm{T}} Y}{N-1} - \frac{Y^{\mathrm{T}} U U^{\mathrm{T}} Y}{N(N-1)}\right] Q^{-1} P \\
&= P^{\mathrm{T}} Q^{-\mathrm{T}}\, T\, Q^{-1} P \\
&= C
\end{aligned}
$$
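As a quick numerical check of this identity (a NumPy sketch, not taken from the paper; the test matrices below are arbitrary), the sample covariance of the transformed samples $Y' = Y Q^{-1} P$ reproduces the target $C$ to machine precision:

```python
import numpy as np

# Numerical check of the identity above: after the linear map
# Y' = Y Q^{-1} P, the sample covariance of Y' equals the target C.
rng = np.random.default_rng(0)
N, n = 2000, 5
Y = rng.standard_normal((N, n))                  # arbitrary sample matrix
A = rng.standard_normal((n, n))
C = A @ A.T + n * np.eye(n)                      # a positive-definite target covariance
T = np.cov(Y, rowvar=False)                      # simulated covariance of Y
P = np.linalg.cholesky(C).T                      # C = P^T P
Q = np.linalg.cholesky(T).T                      # T = Q^T Q
Yp = Y @ np.linalg.solve(Q, P)                   # Y' = Y Q^{-1} P
print(np.allclose(np.cov(Yp, rowvar=False), C))  # True
```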
not change the distributions of the random variables but does change the statistical correlations, i.e., the simulated covariance matrix, is enlightening. Hence, we use the strategy of re-ordering the sample realizations $\{\eta_i(\theta_k)\}_{k=1}^{N}$ in each col-
spatial points. A Cholesky decomposition of the target covariance matrix C is performed in Step 2. Its computational cost can be neglected since the decomposition of C only needs to be computed once. Steps 3 to 7 form an iterative loop that matches the target covariance matrix C, and the computational cost of these steps is low since only Cholesky decompositions and sample re-ordering are involved. The convergence error in Step 7 can be measured in the 2-norm or the infinity-norm (here identical to the 1-norm); the 2-norm is adopted in this paper. Note that Algorithm 1 can be applied to non-gaussian and non-stationary stochastic processes and can be readily generalized to high-dimensional random fields.
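Since the full listing of Algorithm 1 is not reproduced above, the following Python sketch gives only one possible reading of the procedure described in the text; the function name, tolerance, and the exact rank-based re-ordering rule are assumptions rather than the paper's precise steps.

```python
import numpy as np

def match_covariance(Y, C, tol=1e-3, max_iter=20):
    """Iteratively re-order marginal-preserving samples so that their
    sample covariance approaches the target covariance C.

    Y : (N, n) array, each column already follows its target marginal.
    C : (n, n) target covariance matrix.
    """
    Y = np.asarray(Y, dtype=float).copy()
    N, n = Y.shape
    P = np.linalg.cholesky(C).T                 # C = P^T P (Step 2, done once)
    for _ in range(max_iter):
        T = np.cov(Y, rowvar=False)             # simulated covariance, Eq.(2)
        err = np.linalg.norm(T - C, 2)          # 2-norm convergence error (Step 7)
        if err < tol:
            break
        Q = np.linalg.cholesky(T).T             # T = Q^T Q
        Yp = Y @ np.linalg.solve(Q, P)          # Y' = Y Q^{-1} P, covariance C
        # Re-order each column of Y to follow the rank order of Y',
        # which keeps the marginals intact but adjusts the correlations.
        for j in range(n):
            ranks = np.argsort(np.argsort(Yp[:, j]))
            Y[:, j] = np.sort(Y[:, j])[ranks]
    return Y
```

Because each iteration only permutes values within a column, the target marginal distributions are preserved exactly while the simulated covariance is driven towards C.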
Eq.(9) can be determined if any two variables are available. In this context, only a set of random variables $\{\xi_i(\theta)\}_{i=0}^{M}$ or deterministic functions $\{f_i(x)\}_{i=0}^{M}$ are required to be determined since samples of the stochastic
where $\bar{\omega}(x)$ is the mean function of the stochastic process $\omega(x,\theta)$, $M$ is the number of terms of the KL expansion and $\{\xi_i(\theta)\}_{i=1}^{M}$ is a set of uncorrelated random variables given by
$$\xi_i(\theta) = \frac{1}{\sqrt{\lambda_i}} \int_D \left[\omega(x,\theta) - \bar{\omega}(x)\right] f_i(x)\, \mathrm{d}x \tag{12}$$
where $\{\lambda_i\}$ and $\{f_i(x)\}$ are the eigenvalues and eigenfunctions of the covariance function $C(x_1, x_2)$, obtained from solving the following homogeneous Fredholm integral equation of the second kind
$$\int_D C(x_1, x_2)\, f_i(x_1)\, \mathrm{d}x_1 = \lambda_i f_i(x_2) \tag{13}$$
whose eigenfunctions satisfy
$$\int_D f_i(x)\, f_j(x)\, \mathrm{d}x = \delta_{ij} \tag{14}$$
which requires N deterministic integrals and has a very low computational cost. The above method is summarized in Algorithm 2 as follows.
Algorithm 2 Algorithm based on stochastic samples and KL expansion
1: Generate samples of the stochastic process by use of Algorithm 1.
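The remaining steps of Algorithm 2 are not shown in this excerpt. As a hedged sketch, the eigenpairs of Eq.(13) can be approximated by a Nyström-type discretization, and the KL random variables can then be evaluated per sample in the spirit of Eq.(12); the trapezoidal rule, grid, and function names below are assumptions, not the paper's own discretization.

```python
import numpy as np

def kl_eigenpairs(cov, a, b, n, M):
    """Approximate the M dominant eigenpairs of Eq.(13) on D = [a, b]
    via a trapezoidal-rule Nystrom discretization."""
    x = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))      # trapezoidal weights
    w[0] *= 0.5
    w[-1] *= 0.5
    C = cov(x[:, None], x[None, :])        # covariance matrix on the grid
    sw = np.sqrt(w)
    B = sw[:, None] * C * sw[None, :]      # symmetrized weighted kernel
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:M]       # keep the M largest eigenvalues
    lam = vals[idx]
    f = vecs[:, idx] / sw[:, None]         # eigenfunctions, int f_i f_j dx = delta_ij
    return x, w, lam, f

def kl_random_variables(samples, mean, w, lam, f):
    """Sample-based estimate of the KL random variables in the spirit of Eq.(12):
    xi_i(theta_k) = (1/sqrt(lambda_i)) * int_D [omega(x,theta_k) - mean(x)] f_i(x) dx."""
    centred = samples - mean[None, :]      # samples: (N, n) array on the grid
    return (centred * w[None, :]) @ f / np.sqrt(lam)[None, :]
```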
where $\bar{\omega}(x)$ is the mean function of the stochastic process $\omega(x,\theta)$ and $\{\Gamma_i(\theta)\}_{i=1}^{P}$ are Polynomial Chaos basis functions, which can be generated by the Rodrigues formula [32]. The deterministic coefficient functions $\{f_i(x)\}_{i=1}^{M}$ can be obtained by
Algorithm 3 Algorithm based on stochastic samples and PC expansion
1: Generate samples of the stochastic process by use of Algorithm 1.
2: Choose standard random variables $\{\gamma_i(\theta)\}_{i=1}^{M_\gamma}$.
3: Generate the PC basis $\{\Gamma_i(\theta)\}_{i=1}^{P}$ of the random variables $\{\gamma_i(\theta)\}_{i=1}^{M_\gamma}$.
4: Compute $\{f_i(x)\}_{i=1}^{P}$ by Eq.(17).
There are various choices for specifying the random variables $\gamma_i(\theta)$ in Step 2, such as gaussian random variables and uniform random variables, and the corresponding PC basis should be adopted as the Hermite Polynomial Chaos basis [1] and the Generalized Polynomial Chaos basis [1, 32], respectively. Further, Algorithm 3 provides a suitable method to simulate stochastic processes if Eq.(13) is not easy to solve. However, the computational efficiency decreases if the number $M_\gamma$ of random variables $\gamma_i(\theta)$ or the order of the PC basis is too large.
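Eq.(17) itself is not reproduced in this excerpt. Assuming it is the usual Galerkin-type projection $f_i(x) = E\{[\omega(x,\theta)-\bar{\omega}(x)]\,\Gamma_i(\theta)\}/E\{\Gamma_i^2(\theta)\}$ (in analogy with Eq.(20)), a sketch for a single standard gaussian germ with a probabilists' Hermite PC basis might look as follows; the pairing of the germ samples with the process samples is also an assumption.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def hermite_pc_basis(gamma, order):
    """Probabilists' Hermite basis He_1..He_order evaluated at germ samples gamma."""
    return np.column_stack([hermeval(gamma, [0] * i + [1]) for i in range(1, order + 1)])

def pc_coefficient_functions(samples, mean, Gamma):
    """Assumed projection form of Eq.(17):
    f_i(x) = E{[omega(x,theta) - mean(x)] Gamma_i(theta)} / E{Gamma_i^2(theta)}.

    samples : (N, n) process samples on the spatial grid
    mean    : (n,) mean function
    Gamma   : (N, P) PC basis evaluated at germ samples paired with the process samples
    """
    N = samples.shape[0]
    centred = samples - mean[None, :]
    num = Gamma.T @ centred / N            # sample-average numerator: (P, n)
    den = np.mean(Gamma**2, axis=0)        # E{Gamma_i^2(theta)}: (P,)
    return num / den[:, None]              # coefficient functions on the grid: (P, n)
```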
Hence, unknown projection coefficients $c_{ij}$ can be computed by
$$c_{ij} = \frac{1}{\sqrt{\lambda_i}\, E\!\left\{\Gamma_j^2(\theta)\right\}} \int_D E\!\left\{\left[\omega(x,\theta) - \bar{\omega}(x)\right]\Gamma_j(\theta)\right\} f_i(x)\, \mathrm{d}x \tag{20}$$
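A minimal sketch of estimating the coefficients of Eq.(20) from the stochastic samples is given below; the sample-average estimate of the expectation, the quadrature weights, and the pairing of the PC basis samples $\Gamma_j(\theta_k)$ with the process samples are assumptions, since the corresponding steps of Algorithm 4 are not reproduced in this excerpt.

```python
import numpy as np

def klpc_coefficients(samples, mean, Gamma, f, lam, w):
    """Estimate c_ij of Eq.(20) from samples.

    samples : (N, n) process samples on the spatial grid
    mean    : (n,) mean function on the grid
    Gamma   : (N, P) PC basis evaluated at the paired sample points theta_k
    f       : (n, M) KL eigenfunctions on the grid
    lam     : (M,) KL eigenvalues
    w       : (n,) quadrature weights for the integral over D
    """
    N = samples.shape[0]
    centred = samples - mean[None, :]                 # omega(x, theta) - mean(x)
    expc = centred.T @ Gamma / N                      # E{[omega - mean] Gamma_j}: (n, P)
    EG2 = np.mean(Gamma**2, axis=0)                   # E{Gamma_j^2(theta)}: (P,)
    integral = (f * w[:, None]).T @ expc              # quadrature against each f_i: (M, P)
    return integral / (np.sqrt(lam)[:, None] * EG2[None, :])
```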
4. Numerical examples

$$C(x_1, x_2) = e^{-(x_1 - x_2)^2} \tag{22}$$

$$u = \frac{y - y_{\min}}{y_{\max} - y_{\min}}, \qquad p = 4, \quad q = 2 \tag{23}$$

The expectation function and variance function of the beta distribution in Eq.(21) are
$$\mu_F(x) = (y_{\max} - y_{\min})\,\frac{p}{p+q} + y_{\min}, \qquad \sigma_F^2(x) = (y_{\max} - y_{\min})^2\,\frac{pq}{(p+q)^2(p+q+1)} \tag{24}$$
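For the parameter values $p = 4$, $q = 2$ of Eq.(23), Eq.(24) evaluates to
$$\mu_F(x) = y_{\min} + \tfrac{2}{3}\,(y_{\max} - y_{\min}), \qquad \sigma_F^2(x) = \tfrac{2}{63}\,(y_{\max} - y_{\min})^2 \approx 0.0317\,(y_{\max} - y_{\min})^2 .$$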
Figure 1: Iterations of simulated covariance functions $\{T^{(k)}\}_{k=0}^{5}$.
[Figure: convergence errors of each iteration (axes: Errors versus iteration number 1–5).]
[Figure: eigenfunctions $f_n(x)$ over $x \in [0, 1]$ (left) and eigenvalues $\lambda_n$ versus index $n$, $n = 1, \ldots, 4$ (right).]
ables $\{\xi_i(\theta)\}_{i=1}^{4}$ computed by Eq.(15), and Table.1 indicates that they are uncorrelated random variables, which is consistent with the theory of KL expansion.
Figure 4: CDFs of random variables $\{\xi_i(\theta)\}_{i=1}^{4}$.
            ξ1(θ)     ξ2(θ)     ξ3(θ)     ξ4(θ)
ξ1(θ)      1.0005
ξ2(θ)      0.0004    0.9995              sym.
ξ3(θ)      0.0042    0.0016    0.9927
ξ4(θ)      0.0272    0.0087   −0.0082    1.0248
Fig.5 shows comparisons between the target (top left), sample (top middle) and KL-simulated (top right) covariances. The relative errors between target and sample covariance (bottom left), target and KL-simulated covariance (bottom middle) and sample and KL-simulated covariance (bottom right) demonstrate the high accuracy of the proposed Algorithm 2.
Figure 6: Coefficient functions $\{f_i(x)\}_{i=1}^{14}$ of PC expansion.
For Algorithm 4, Table.2 shows the projection coefficients $c_{ij}$ obtained by Eq.(20), and comparisons between target, sample and KL-PC-simulated covariances (the corresponding legends of the relative errors are the same as in Fig.5) are shown in Fig.8, which demonstrates the high accuracy of the proposed Algorithm 4.

[Table 2: projection coefficients $c_{ij}$ for $j = 1, \ldots, 14$.]
Figure 8: Comparison between target, sample and KL-PC-simulated covariance.
Fig.9 shows the iterations of the sample covariance functions $\{T^{(k)}\}_{k=0}^{4}$ and Fig.10 shows the convergence error at each iteration, which once again demonstrate the good convergence and high accuracy of Algorithm 1.
Figure 9: Iterations of sample covariance functions $\{T^{(k)}\}_{k=0}^{4}$.
[Figure 10: convergence errors of each iteration (axes: Errors versus iteration number 1–4).]
[Figure: eigenfunctions $f_n(x)$ over $x \in [0, 1]$ (left) and eigenvalues $\lambda_n$ versus index $n$, $n = 1, \ldots, 6$ (right).]
Fig.12 shows the CDFs of the random variables $\{\xi_i(\theta)\}_{i=1}^{6}$, and they are uncorrelated as shown in Table.3, which is once again consistent with the theory of KL expansion. Fig.13 shows comparisons between target, sample and KL-simulated covariances (the corresponding legends of the relative errors are the same as in Fig.5), which verifies the applicability of the proposed Algorithm 2 to non-gaussian and non-stationary stochastic processes.
Figure 12: CDFs of random variables $\{\xi_i(\theta)\}_{i=1}^{6}$.
            ξ1(θ)     ξ2(θ)     ξ3(θ)     ξ4(θ)     ξ5(θ)     ξ6(θ)
ξ1(θ)      0.9991
ξ2(θ)     −0.0010    1.0006
ξ3(θ)     −0.0025    0.0015    1.0005              sym.
ξ4(θ)     −0.0011    0.0017    0.0004    1.0001
ξ5(θ)     −0.0004    0.0006   −0.0010   −0.0006    1.0000
ξ6(θ)      0.0026   −0.0009   −0.0003    0.0009    0.0005    0.9999
Figure 13: Comparisons between target, sample and KL-simulated covariance.
According to Eq.(29), the variance function is $\sigma_F^2(x) = C(x, x) = e^{-2x}$. Letting $\mu_F(x) = 0$, $\sigma = 1$ and solving Eq.(30) yields $\mu(x) = -x - \ln\sqrt{e(e-1)} \approx -x - 0.7707$ and $\delta(x) = -\dfrac{e^{-x}}{\sqrt{e-1}} \approx -0.7629\, e^{-x}$.
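For reference, the quoted numerical constants follow directly from these expressions:
$$\ln\sqrt{e(e-1)} = \tfrac{1}{2}\left[1 + \ln(e-1)\right] \approx \tfrac{1}{2}\,(1 + 0.5413) \approx 0.7707, \qquad \frac{1}{\sqrt{e-1}} \approx \frac{1}{1.3108} \approx 0.7629 .$$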
Fig.14 shows the iterations of the sample covariance functions $\{T^{(k)}\}_{k=0}^{6}$ and the corresponding convergence error of each iteration is shown in Fig.15. The good convergence of Algorithm 2 for non-gaussian and non-stationary stochastic processes is demonstrated.
Figure 14: Iterations of sample covariance functions $\{T^{(k)}\}_{k=0}^{6}$.
[Figure 15: convergence errors of each iteration (axes: Errors versus iteration number 1–6).]
[Figure: eigenfunctions $f_n(x)$ over $x \in [0, 1]$ (left) and eigenvalues $\lambda_n$ versus index $n$, $n = 1, \ldots, 6$ (right).]
Fig.17 shows the CDFs of the random variables $\{\xi_i(\theta)\}_{i=1}^{6}$ and their uncorrelated properties are shown in Table.4. Fig.18 shows comparisons between target, sample and KL-simulated covariances (the corresponding legends of the relative errors are the same as in Fig.5), which verifies the applicability of the proposed Algorithm 2 to strongly non-gaussian and non-stationary stochastic processes.
Figure 17: CDFs of random variables $\{\xi_i(\theta)\}_{i=1}^{6}$.
            ξ1(θ)     ξ2(θ)     ξ3(θ)     ξ4(θ)     ξ5(θ)     ξ6(θ)
ξ1(θ)      1.0051
ξ2(θ)     −0.0015    0.9979
ξ3(θ)     −0.0148    0.0014    1.0049              sym.
ξ4(θ)      0.0141    0.0004   −0.0028    0.9988
ξ5(θ)     −0.0050    0.0066    0.0077    0.0038    0.9997
ξ6(θ)     −0.0093    0.0069    0.0029   −0.0076    0.0061    1.0042
Figure 18: Comparisons between target, sample and KL-simulated covariance.
5. Conclusion
In this paper, efficient numerical schemes have been presented for simulating non-gaussian and non-stationary stochastic processes specified by covariance functions and marginal distribution functions. In order to simulate samples of the target stochastic process, stochastic samples that automatically match the target marginal distribution function are first generated, and an iterative algorithm is proposed to match the target covariance function by re-ordering the initial stochastic samples. Three numerical examples demonstrate the fast convergence and high accuracy of the proposed algorithm. In order to overcome the difficulty that sample descriptions are not convenient to apply in subsequent stochastic analysis, three numerical algorithms are developed to represent the obtained stochastic samples based on KL expansion and PC expansion. Different algorithms can be used for different problems of practical interest, and the performance of the developed algorithms is indicated by the numerical examples. All proposed algorithms can be readily extended to multi-dimensional random fields, which will be shown in subsequent research.
Acknowledgments
References
[4] G. Stefanou, The stochastic finite element method: past, present and future, Computer Methods in Applied Mechanics and Engineering 198 (2009) 1031–1051.
[6] M. Grigoriu, Evaluation of Karhunen–Loève, spectral, and sampling representations for stochastic processes, Journal of Engineering Mechanics 132 (2006) 179–189.
[15] Z. Liu, W. Liu, Y. Peng, Random function based spectral representation of stationary and non-stationary stochastic processes, Probabilistic Engineering Mechanics 45 (2016) 115–126.
[22] K. Phoon, H. Huang, S. Quek, Simulation of strongly non-Gaussian processes using Karhunen–Loève expansion, Probabilistic Engineering Mechanics 20 (2005) 188–198.
[29] Z. Zheng, H. Dai, Simulation of multi-dimensional random fields by Karhunen–Loève expansion, Computer Methods in Applied Mechanics and Engineering 324 (2017) 221–247.