
CIEM5810

Engineering Risk, Reliability and Decision


Instructor: Prof. Anthony Leung

Topic 1: Random variables and probability models


1.4 Functions of random variables
About this lecture

Suppose y = g(x) denotes a model in civil engineering,


where x are the uncertain input parameters, and y is the system response

An essential task for civil engineers is to predict the distribution of y given the statistics of x

Four general approaches:


1. Analytical solutions (which are however seldom available except in some special/simple cases)

2. Taylor series expansion method

3. Point Estimate Methods

4. Monte Carlo simulation (to be covered in a separate topic)


Analytical solutions
Analytical solutions

Suppose

$$Y = g(X)$$

where the PDF of the random variable X is $p_X(x)$, and g is a monotonic function of x.

Then, the PDF of Y can be expressed as:

$$p_Y(y) = p_X\left(g^{-1}(y)\right)\left|\frac{dg}{dy}\right|^{-1}$$
Analytical solutions

For more complicated PDFs, $p_Y(y)$ cannot be derived straightforwardly. Instead, derive the main descriptors (i.e. the mean and variance of Y).

For a linear function

$$Y = aX + b$$

$$E[Y] = E[aX + b] = \int (ax + b)\, f_X(x)\,dx = aE[X] + b$$

$$Var[Y] = E\left[\left(Y - E[Y]\right)^2\right] = E\left[\left(aX + b - aE[X] - b\right)^2\right] = a^2 \int (x - \mu_X)^2 f_X(x)\,dx = a^2\,Var[X]$$
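As a quick numerical check of these two results (a minimal Python sketch; the distribution of X and the values of a and b are arbitrary assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed example: X ~ N(10, 2), Y = aX + b with a = 3, b = 5
mu_X, sigma_X = 10.0, 2.0
a, b = 3.0, 5.0

# Analytical moments of the linear function
mean_Y = a * mu_X + b              # E[Y] = a E[X] + b
var_Y = a**2 * sigma_X**2          # Var[Y] = a^2 Var[X]

# Monte Carlo check
x = rng.normal(mu_X, sigma_X, size=1_000_000)
y = a * x + b
print(mean_Y, y.mean())            # 35.0 vs simulated mean
print(var_Y, y.var())              # 36.0 vs simulated variance
```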


Analytical solutions

For a more complicated linear function ...

$$Y = a_1 X_1 + a_2 X_2$$

$$E[Y] = a_1 E[X_1] + a_2 E[X_2]$$

$$Var[Y] = a_1^2\,Var[X_1] + a_2^2\,Var[X_2] + 2 a_1 a_2 \rho \sigma_1 \sigma_2$$

By extension,

$$E[Y] = \sum_i a_i E[X_i]$$

$$Var[Y] = Var[a_1 X_1 + \dots + a_n X_n] = \sum_i a_i^2\,Var[X_i] + \sum_i \sum_{j \neq i} a_i a_j \rho_{ij} \sigma_i \sigma_j$$
Applications to common linear functions

Y = aX (a > 0):
  If X = N(μ, σ), then Y = N(aμ, aσ)
  If X = LN(λ, ζ), then Y = LN(ln a + λ, ζ)

Y = aX + b (a > 0):
  If X = N(μ, σ), then Y = N(aμ + b, aσ)

Y = X1 + X2:
  If X1 and X2 are Poisson with mean rates ν1 and ν2, respectively, then Y is Poisson with νY = ν1 + ν2
Applications to common linear functions

Y = a1X1 + a2X2:
  If X1 = N(μ1, σ1) and X2 = N(μ2, σ2), then Y = N(μY, σY), where

$$\mu_Y = a_1\mu_1 + a_2\mu_2$$

$$\sigma_Y = \sqrt{a_1^2\sigma_1^2 + a_2^2\sigma_2^2 + 2a_1a_2\rho\sigma_1\sigma_2}$$

If X1 and X2 are statistically independent, then ρ = 0.
Applications to common linear functions

Generalisation

Y = a1X1 + a2X2 + … + anXn:
  If Xi = N(μi, σi), i = 1 … n, then Y = N(μY, σY), where

$$\mu_Y = a_1\mu_1 + a_2\mu_2 + \dots + a_n\mu_n$$

$$\sigma_Y = \sqrt{\sum_i a_i^2\sigma_i^2 + \text{correlation terms}}$$
Applications to common linear functions

In matrix form:

Let $\boldsymbol{\mu} = [\mu_1, \mu_2, \dots, \mu_n]$, $\mathbf{A} = [a_1, a_2, \dots, a_n]$ and $\mathbf{C_X}$ denote the covariance matrix of $X_1, X_2, \dots, X_n$ (shown here for n = 3):

$$\mathbf{C_X} = \begin{bmatrix} COV_{11} & COV_{12} & COV_{13} \\ COV_{21} & COV_{22} & COV_{23} \\ COV_{31} & COV_{32} & COV_{33} \end{bmatrix}
\qquad \text{where } COV_{ij} = COV(x_i, x_j) = \rho_{ij}\sigma_i\sigma_j \text{ (c.f. slide 35, Topic 1.1)}$$

The mean and standard deviation can be written as follows:

$$\mu_Y = g(\boldsymbol{\mu}_X) \qquad \sigma_Y = \sqrt{\mathbf{A}\,\mathbf{C_X}\,\mathbf{A}^T}$$
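A small numerical sketch of the matrix form (the coefficients, means, standard deviations and correlation matrix below are assumed purely for illustration):

```python
import numpy as np

# Assumed example: Y = 2*X1 + 1*X2 - 3*X3
A = np.array([2.0, 1.0, -3.0])
mu = np.array([5.0, 10.0, 1.0])          # means of X1, X2, X3
sigma = np.array([0.5, 2.0, 0.3])        # standard deviations
rho = np.array([[1.0, 0.3, 0.0],
                [0.3, 1.0, -0.2],
                [0.0, -0.2, 1.0]])       # correlation matrix

# Covariance matrix: COV_ij = rho_ij * sigma_i * sigma_j
C_X = rho * np.outer(sigma, sigma)

mu_Y = A @ mu                            # mean of the linear function
sigma_Y = np.sqrt(A @ C_X @ A)           # std: sqrt(A C_X A^T)
print(mu_Y, sigma_Y)
```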
Applications to common linear functions

Central Limit Theorem: the sum S = X1 + X2 + … + XN will approach a normal distribution regardless of the individual probability distributions of the Xi, provided N is large enough.
Example 1

Suppose a column with an ultimate capacity R is subjected to a total vertical load S, which is composed of three types of loading, namely D, L and W:

$$S = D + L + W$$

The three loadings are:

1. Normally distributed random variables
2. Statistically independent

with μD = 4.2, σD = 0.3; μL = 6.5, σL = 0.8; μW = 3.4, σW = 0.7.

[Figure: column with capacity R carrying the total load S = D + L + W]
Example 1

Question 1(a): What is the probability that the total load exceeds 18?

$$P(S > 18) = 1 - \Phi\left(\frac{18 - \mu_S}{\sigma_S}\right)$$

$$\mu_S = \mu_D + \mu_L + \mu_W = 14.1$$

$$\sigma_S = \sqrt{\sigma_D^2 + \sigma_L^2 + \sigma_W^2} = 1.1$$

$$P(S > 18) = 1 - \Phi(3.55) = 0.000193$$
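The same calculation as a minimal scipy sketch:

```python
from scipy.stats import norm

mu = [4.2, 6.5, 3.4]          # means of D, L, W
sigma = [0.3, 0.8, 0.7]       # standard deviations of D, L, W

mu_S = sum(mu)                                 # 14.1
sigma_S = sum(s**2 for s in sigma) ** 0.5      # ~1.10

p_exceed = 1 - norm.cdf(18, loc=mu_S, scale=sigma_S)
print(p_exceed)               # ~1.9e-4
```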
Example 1

Question 1(b): What is the probability of column failure (i.e., R < S)?

$$P(\text{failure}) = P(R < S) = P(R - S < 0) = P(X < 0)$$

Suppose the design safety factor is 1.5, i.e. μR/μS = 1.5, and R has a COV of 15%.

Hence,

$$\mu_R = 1.5\,\mu_S = 21.15$$

$$\sigma_R = 15\% \times 21.15 = 3.17$$
Example 1

Question 1(b): What is the probability of column failure (i.e., R < S)?

Therefore,

$$\mu_X = \mu_R - \mu_S = 7.05$$

$$\sigma_X = \sqrt{\sigma_R^2 + \sigma_S^2} = 3.36$$

Hence,

$$P(\text{failure}) = P(X < 0) = \Phi\left(\frac{0 - 7.05}{3.36}\right) = 0.018$$
Example 1

Question 1(c): If the target P(failure) = 0.001, what is the corresponding μR?

$$\mu_X = \mu_R - 14.1 \qquad \sigma_X = \sqrt{(0.15\mu_R)^2 + 1.1^2}$$

$$P(\text{failure}) = \Phi\left(\frac{-\mu_R + 14.1}{\sqrt{(0.15\mu_R)^2 + 1.1^2}}\right) = 0.001$$

$$\frac{-\mu_R + 14.1}{\sqrt{(0.15\mu_R)^2 + 1.1^2}} = -3.09$$

Solving the quadratic equation gives μR = 8.81 or 26.9.

Since μR must be greater than 21.15 (a larger capacity than that of part (b) is needed to bring the failure probability below 0.018), μR = 26.9.
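A short numerical sketch that solves part (c) with a root finder instead of the hand-derived quadratic (the bracketing interval is my own choice):

```python
from scipy.optimize import brentq
from scipy.stats import norm

mu_S, sigma_S = 14.1, 1.1

def p_failure(mu_R):
    # X = R - S, with sigma_R = 0.15 * mu_R
    sigma_X = ((0.15 * mu_R)**2 + sigma_S**2) ** 0.5
    return norm.cdf((0 - (mu_R - mu_S)) / sigma_X)

# Find mu_R such that P(failure) = 0.001, searching above the part (b) value
mu_R = brentq(lambda m: p_failure(m) - 0.001, 21.15, 100)
print(mu_R)   # ~27 (the slide's rounded hand calculation gives 26.9)
```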


Example 2

Question 2(a): What is the mean (μS) and COV (δS) of the footing settlement?

From observation,

$$S = \frac{PBI}{M}$$

where P, B, I and M are:

1. Statistically independent
2. Log-normally distributed

Variable    Mean    c.o.v.
P           1.0     0.10
B           6.0     0
I           0.6     0.10
M           32.0    0.15

[Figure: footing on sand under applied load P; S is the footing settlement, M is a sand property, B and I are footing properties]
Example 2

Question 2(a): What is the mean (μS) and COV (δS) of the footing settlement?

$$S = \frac{PBI}{M} \quad\Rightarrow\quad \ln S = \ln P + \ln B + \ln I - \ln M$$

i.e., $Y = X_1 + X_2 + X_3 - X_4$. Therefore:

1. Y and ln S are normal
2. S is log-normal, LN(λS, ζS), with

$$\lambda_S = \lambda_P + \lambda_B + \lambda_I - \lambda_M$$

$$\zeta_S = \sqrt{\zeta_P^2 + \zeta_B^2 + \zeta_I^2 + \zeta_M^2}$$
Example 2

Question 2(a): What is the mean (μS) and COV (δS) of the footing settlement?

$$\lambda_P = \ln(1.0) - \tfrac{1}{2}(0.1)^2 = -0.005$$

$$\lambda_B = 1.792 \qquad \lambda_I = -0.516 \qquad \lambda_M = 3.455$$

Hence,

$$\lambda_S = \lambda_P + \lambda_B + \lambda_I - \lambda_M = -2.184$$

$$\zeta_S = \sqrt{\zeta_P^2 + \zeta_B^2 + \zeta_I^2 + \zeta_M^2} = 0.206$$

$$\mu_S = \exp\left(\lambda_S + \tfrac{1}{2}\zeta_S^2\right) = 0.115 \text{ ft} \qquad \delta_S \cong \zeta_S = 0.206$$

$$\text{Recall: } \lambda = \ln\mu - \tfrac{1}{2}\delta^2$$
Example 2

Question 2(b): If the maximum allowable settlement is 2.5 inches, what is the reliability against excessive settlement?

$$\text{Reliability} = P(S < 2.5 \text{ in}) = \Phi\left(\frac{\ln(2.5/12) - (-2.184)}{0.206}\right) = 0.9986$$

(Note that 2.5 in = 2.5/12 ft, since μS was computed in ft.)
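A compact sketch of the whole lognormal calculation of Q2(a)–(b), using the small-COV approximation ζ ≈ δ adopted in the slides:

```python
import numpy as np
from scipy.stats import norm

# Variable: mean and c.o.v.; all lognormal and statistically independent
mean = {"P": 1.0, "B": 6.0, "I": 0.6, "M": 32.0}
cov  = {"P": 0.10, "B": 0.0, "I": 0.10, "M": 0.15}

# Lognormal parameters: zeta ~ cov, lambda = ln(mu) - zeta^2 / 2
zeta = {k: cov[k] for k in mean}
lam  = {k: np.log(mean[k]) - 0.5 * zeta[k]**2 for k in mean}

# S = P*B*I/M  =>  ln S = ln P + ln B + ln I - ln M
lam_S  = lam["P"] + lam["B"] + lam["I"] - lam["M"]           # ~ -2.184
zeta_S = np.sqrt(sum(z**2 for z in zeta.values()))            # ~ 0.206

mu_S = np.exp(lam_S + 0.5 * zeta_S**2)                        # ~ 0.115 ft
reliability = norm.cdf((np.log(2.5 / 12) - lam_S) / zeta_S)   # allowable 2.5 in = 2.5/12 ft
print(mu_S, reliability)                                      # ~0.115, ~0.9986
```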
Example 2

Question 2(c): Your fellow engineer spent $100 to obtain better information, reducing the COV of M to 5%. If you were him, would you spend this money? Assume the damage cost for exceeding the maximum allowable settlement is $50,000.

Then,

$$\delta_M = 0.05 \qquad \lambda_M = \ln(32) - \tfrac{1}{2}(0.05)^2 = 3.465$$

$$\lambda_S = -2.194 \qquad \zeta_S = 0.15$$

Hence, the reliability becomes

$$\Phi\left(\frac{\ln(2.5/12) - (-2.194)}{0.15}\right) = 0.99998$$
Example 2

Question 2(c): Your fellow engineer spent $100 to obtain better information, reducing the COV of M to 5%. If you were him, would you spend this money? Assume the damage cost for exceeding the maximum allowable settlement is $50,000.

Assume the initial cost of construction is C0. The expected cost of the first design (i.e., without spending the money) is

$$E[C_1] = C_0 + (1 - \text{reliability}) \times (\text{cost of failure}) = C_0 + (1 - 0.9986)(50000) = C_0 + 70$$

For the second design,

$$E[C_2] = C_0 + 100 + (1 - 0.99998)(50000) = C_0 + 100 + 1 = C_0 + 101$$

Since $E[C_1] < E[C_2]$, it is better NOT to spend the money.
Taylor series expansion method

Advantages:

1. Eases the calculations (by hand or in Excel)
2. Allows comparing the relative contributions of the uncertainties – useful for allocating resources
3. Combines the individual contributions of the uncertainties
Nonlinear functions

Suppose Y = g(X), where g is a nonlinear function.

The rigorous approach:

$$E[Y] = \int g(x) f_X(x)\,dx \qquad Var[Y] = \int \left(g(x) - E[g(X)]\right)^2 f_X(x)\,dx$$

However:

1. Computational difficulties
2. We do not always have a closed-form f_X(x) to solve analytically
Nonlinear functions

Approach by Taylor series approximation:

[Figure: g(X) versus X, showing the linear approximation g(μX) + g′(μX)(X − μX) about the mean μX and the higher-order terms it neglects]

$$g(X) = g(\mu_X) + g'(\mu_X)(X - \mu_X) + \frac{g''(\mu_X)}{2}(X - \mu_X)^2 + \cdots$$

Keeping only the first-order terms:

$$g(X) \approx g(\mu_X) + g'(\mu_X)(X - \mu_X)$$

The approximation is more accurate when g is almost linear (i.e., small curvature) and σX is small.
Nonlinear functions

For the first-order approximation

$$Y = g(X) \approx g(\mu_X) + g'(\mu_X)(X - \mu_X)$$

$$E[Y] = E[g(X)] \cong E\left[g(\mu_X) + g'(\mu_X)(X - \mu_X)\right] = g(\mu_X)$$

$$Var[Y] = Var[g(X)] \cong g'(\mu_X)^2\,Var(X - \mu_X) = g'(\mu_X)^2\,Var(X)$$
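A quick check of the first-order approximation against simulation (the function g and the moments of X below are assumptions chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed example: g(X) = X**2 + 3*X with X ~ N(5, 0.5)
g = lambda x: x**2 + 3*x
dg = lambda x: 2*x + 3          # derivative of g
mu_X, sigma_X = 5.0, 0.5

# First-order approximations
mean_fo = g(mu_X)                       # E[Y] ~ g(mu_X) = 40
var_fo = dg(mu_X)**2 * sigma_X**2       # Var[Y] ~ g'(mu_X)^2 Var(X) = 42.25

# Monte Carlo reference
y = g(rng.normal(mu_X, sigma_X, 1_000_000))
print(mean_fo, y.mean())    # 40 vs ~40.25 (the neglected second-order term adds sigma_X^2)
print(var_fo, y.var())      # 42.25 vs ~42.4
```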


Nonlinear functions

Evaluating the probability (Mean First-Order Reliability Method, MFORM)

Assumptions:
1. g(x) is normally distributed;
2. g(x) < 0 denotes failure

$$p_f = P\left(g(\mathbf{x}) < 0\right) = P\left(\frac{g(\mathbf{x}) - E[g(\mathbf{x})]}{\sqrt{Var(g(\mathbf{x}))}} < \frac{0 - E[g(\mathbf{x})]}{\sqrt{Var(g(\mathbf{x}))}}\right) = \Phi\left(\frac{-E[g(\mathbf{x})]}{\sqrt{Var(g(\mathbf{x}))}}\right) = 1 - \Phi\left(\frac{E[g(\mathbf{x})]}{\sqrt{Var(g(\mathbf{x}))}}\right)$$

Define the reliability index β satisfying $p_f = 1 - \Phi(\beta)$; hence:

$$\beta = \frac{E[g(\mathbf{x})]}{\sqrt{Var(g(\mathbf{x}))}}$$
Nonlinear functions

For multiple, correlated random variables:

$$Y = g(X_1, X_2, X_3, \dots, X_n)$$

Following the Taylor series expansion and expressing the result in matrix form:

$$E[Y] = g(\mu_1, \mu_2, \mu_3, \dots, \mu_n) = g(\boldsymbol{\mu}_X)$$

$$Var[Y] = \sum_i \left(\frac{\partial g}{\partial X_i}\bigg|_{\boldsymbol{\mu}}\right)^2 \sigma_{X_i}^2 + 2\sum_i\sum_{j>i} \frac{\partial g}{\partial X_i}\frac{\partial g}{\partial X_j}\bigg|_{\boldsymbol{\mu}}\,\rho_{X_iX_j}\sigma_{X_i}\sigma_{X_j} + \cdots = \mathbf{G}\,\mathbf{C_X}\,\mathbf{G}^T$$

where

$$\mathbf{G} = \left[\frac{\partial g(\mathbf{x})}{\partial x_1}\bigg|_{\mathbf{x}=\boldsymbol{\mu}_x},\; \frac{\partial g(\mathbf{x})}{\partial x_2}\bigg|_{\mathbf{x}=\boldsymbol{\mu}_x},\; \dots,\; \frac{\partial g(\mathbf{x})}{\partial x_n}\bigg|_{\mathbf{x}=\boldsymbol{\mu}_x}\right]
\qquad \text{Hence, } \beta = \frac{g(\boldsymbol{\mu}_X)}{\sqrt{\mathbf{G}\,\mathbf{C_X}\,\mathbf{G}^T}}$$
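A generic sketch of this first-order (MFORM) recipe using numerical partial derivatives; the helper name fosm_beta and the use of central differences for the gradient are my own choices, not from the lecture:

```python
import numpy as np

def fosm_beta(g, mu, sigma, rho, dx=1e-4):
    """First-order estimates of E[g], Var[g] and the reliability index beta."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    n = len(mu)
    # Gradient of g at the mean point by central differences
    G = np.empty(n)
    for i in range(n):
        step = np.zeros(n)
        step[i] = dx
        G[i] = (g(mu + step) - g(mu - step)) / (2 * dx)
    C = np.asarray(rho, float) * np.outer(sigma, sigma)   # covariance matrix
    mean_g = g(mu)
    std_g = np.sqrt(G @ C @ G)
    return mean_g, std_g, mean_g / std_g

# Example use: mean_g, std_g, beta = fosm_beta(g, mu, sigma, rho)
#              pf = 1 - scipy.stats.norm.cdf(beta)
```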
Example 2

Consider the following function

$$y = g(x_1, x_2) = x_1 x_2 + x_1^2$$

where μ1 = 1.0, σ1 = 1.0, μ2 = 2.0, σ2 = 2.0 and ρ12 = −0.5.


Example 2

Question 2(a): Evaluate the statistics of y

$$\mu_Y \approx g(\mu_1, \mu_2) = \mu_1\mu_2 + \mu_1^2 = 1 \times 2 + 1^2 = 3$$

$$\frac{\partial g(\mathbf{x})}{\partial x_1}\bigg|_{\mathbf{x}=\boldsymbol{\mu}_x} = x_2 + 2x_1 = \mu_2 + 2\mu_1 = 4
\qquad
\frac{\partial g(\mathbf{x})}{\partial x_2}\bigg|_{\mathbf{x}=\boldsymbol{\mu}_x} = x_1 = \mu_1 = 1$$

$$\mathbf{G} = \left[\frac{\partial g(\mathbf{x})}{\partial x_1}\bigg|_{\mathbf{x}=\boldsymbol{\mu}_x},\; \frac{\partial g(\mathbf{x})}{\partial x_2}\bigg|_{\mathbf{x}=\boldsymbol{\mu}_x}\right] = [4, 1]
\qquad
\mathbf{C_X} = \begin{bmatrix} 1^2 & -0.5 \times 1 \times 2 \\ -0.5 \times 1 \times 2 & 2^2 \end{bmatrix} = \begin{bmatrix} 1 & -1 \\ -1 & 4 \end{bmatrix}$$

$$\sigma_Y = \sqrt{Var[g(\mathbf{x})]} \approx \sqrt{\mathbf{G}\,\mathbf{C_X}\,\mathbf{G}^T} = \sqrt{[4, 1]\begin{bmatrix} 1 & -1 \\ -1 & 4 \end{bmatrix}\begin{bmatrix} 4 \\ 1 \end{bmatrix}} = \sqrt{12} = 3.46$$

(In Excel: =SQRT(MMULT(MMULT(B11:C11, B13:C14), B16:B17)))
Example 2

Question 2(b): Suppose y < 0 denotes failure. Evaluate $p_f$

$$\mu_Y \approx g(\mu_1, \mu_2) = \mu_1\mu_2 + \mu_1^2 = 1 \times 2 + 1^2 = 3$$

$$\sigma_Y = \sqrt{Var[g(\mathbf{x})]} \approx \sqrt{\mathbf{G}\,\mathbf{C_X}\,\mathbf{G}^T} = \sqrt{12} = 3.46$$

$$\beta = \frac{g(\boldsymbol{\mu}_X)}{\sqrt{\mathbf{G}\,\mathbf{C_X}\,\mathbf{G}^T}} = \frac{3}{3.46} = 0.87$$

Hence

$$p_f = 1 - \Phi(\beta) = 1 - \Phi(0.87) = 0.19$$

(In Excel: =NORMDIST(-B20, 0, 1, 1))
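The same numbers can be reproduced with a few lines of numpy (this simply repeats the matrix arithmetic above):

```python
import numpy as np
from scipy.stats import norm

mu = np.array([1.0, 2.0])
sigma = np.array([1.0, 2.0])
rho12 = -0.5

g = lambda x: x[0] * x[1] + x[0]**2
G = np.array([mu[1] + 2 * mu[0], mu[0]])            # analytical gradient at the mean: [4, 1]
C = np.array([[sigma[0]**2, rho12 * sigma[0] * sigma[1]],
              [rho12 * sigma[0] * sigma[1], sigma[1]**2]])

mu_Y = g(mu)                      # 3.0
sigma_Y = np.sqrt(G @ C @ G)      # sqrt(12) ~ 3.46
beta = mu_Y / sigma_Y             # ~0.87
pf = 1 - norm.cdf(beta)           # ~0.19
print(mu_Y, sigma_Y, beta, pf)
```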
Example 3

Performance of a shallow foundation against bearing failure

Recalling Terzaghi's bearing capacity theory:

$$q_{ult} = 0.5\gamma_s B N_\gamma + c' N_c + \gamma_s D_f N_q$$

$$N_\gamma = 1.8\left(N_q - 1\right)\tan\phi'$$

$$N_q = \tan^2\left(\frac{\pi}{4} + \frac{\phi'}{2}\right) e^{\pi\tan\phi'}$$

$$N_c = \left(N_q - 1\right)\cot\phi'$$

with B = 2 m; Df = 0.5 m; γs = 20 kN/m³; μc′ = 8 kPa, σc′ = 2 kPa; μφ′ = 30°, σφ′ = 3°; ρc′φ′ = −0.4.
Example 3

What is the reliability of the shallow foundation?

The factor of safety (Fs) can be defined as $F_s(c', \phi') = q_{ult}/500$.

Correspondingly, the limit state function is:

$$g(c', \phi') = F_s(c', \phi') - 1$$

Based on the Taylor series expansion method, and approximating the partial derivatives by finite differences:

$$\frac{\partial F_s}{\partial c'}\bigg|_{\mu_{c'},\,\mu_{\phi'}} \approx \frac{F_s(\mu_{c'} + \Delta, \mu_{\phi'}) - F_s(\mu_{c'}, \mu_{\phi'})}{\Delta}
\qquad
\frac{\partial F_s}{\partial \phi'}\bigg|_{\mu_{c'},\,\mu_{\phi'}} \approx \frac{F_s(\mu_{c'}, \mu_{\phi'} + \Delta) - F_s(\mu_{c'}, \mu_{\phi'})}{\Delta}$$

Hence, $\mathbf{G} = [0.060, 0.211]$.
Example 3

What is the reliability of the shallow foundation?

Statistics of Fs:

$$\mu_{F_s} \approx F_s(\mu_{c'}, \mu_{\phi'}) = 1.573 \qquad \sigma_{F_s} \approx \sqrt{\mathbf{G}\,\mathbf{C_X}\,\mathbf{G}^T} = 0.595$$

Since $g(c', \phi') = F_s(c', \phi') - 1$, the statistics of $g(c', \phi')$ are:

$$\mu_g \approx \mu_{F_s} - 1 = 0.573 \qquad \sigma_g \approx \sigma_{F_s} = 0.595$$

Thus,

$$\beta = \frac{\mu_g}{\sigma_g} = 0.964$$

$$p_f = 1 - \Phi(\beta) = 0.167$$
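A sketch reproducing this example end to end; the forward-difference step sizes of 1 kPa for c′ and 1° for φ′ are my assumptions, since the slide does not state the steps used:

```python
import numpy as np
from scipy.stats import norm

B, Df, gamma_s = 2.0, 0.5, 20.0       # m, m, kN/m^3

def Fs(c, phi_deg):
    """Factor of safety from Terzaghi bearing capacity, for an applied pressure of 500 kPa."""
    phi = np.radians(phi_deg)
    Nq = np.tan(np.pi/4 + phi/2)**2 * np.exp(np.pi * np.tan(phi))
    Ngamma = 1.8 * (Nq - 1) * np.tan(phi)
    Nc = (Nq - 1) / np.tan(phi)
    q_ult = 0.5 * gamma_s * B * Ngamma + c * Nc + gamma_s * Df * Nq
    return q_ult / 500.0

mu = np.array([8.0, 30.0])            # means of c' (kPa) and phi' (deg)
sigma = np.array([2.0, 3.0])
rho = -0.4
C = np.array([[sigma[0]**2, rho * sigma[0] * sigma[1]],
              [rho * sigma[0] * sigma[1], sigma[1]**2]])

# Forward-difference gradient of Fs at the mean point
d = np.array([1.0, 1.0])              # assumed steps: 1 kPa, 1 degree
G = np.array([(Fs(mu[0] + d[0], mu[1]) - Fs(*mu)) / d[0],
              (Fs(mu[0], mu[1] + d[1]) - Fs(*mu)) / d[1]])   # ~[0.060, 0.212]

mu_g = Fs(*mu) - 1.0                  # ~0.573
sigma_g = np.sqrt(G @ C @ G)          # ~0.60
beta = mu_g / sigma_g                 # ~0.96
print(beta, 1 - norm.cdf(beta))       # pf ~ 0.17
```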
Point estimate methods (PEM)
By evaluating the performance function, g, at a few concentration points, one
can estimate the system response y.

General procedures:

1. Determine the concentration points


2. Evaluate y = g (x) at these points
3. Infer the moments of y based on responses of g(x) at these points

For small n (number of random variables), we can use Rosenblueth's method


For higher n, we can use Harr’s method (not covered herein)
Rosenblueth’s method

Suppose Y = g(X), where g is a nonlinear function and X is a vector of n symmetric random variables.

To evaluate the moments of Y based on the statistics of X, consider X = (x1, x2), i.e. n = 2.

In Rosenblueth's method, g is first evaluated at the following 2^n (i.e. four) points, where the P are the weighting factors:

$$y_{++} = g(\mu_1 + \sigma_1, \mu_2 + \sigma_2) \qquad P_{++} = (1 + \rho)/4$$
$$y_{+-} = g(\mu_1 + \sigma_1, \mu_2 - \sigma_2) \qquad P_{+-} = (1 - \rho)/4$$
$$y_{-+} = g(\mu_1 - \sigma_1, \mu_2 + \sigma_2) \qquad P_{-+} = (1 - \rho)/4$$
$$y_{--} = g(\mu_1 - \sigma_1, \mu_2 - \sigma_2) \qquad P_{--} = (1 + \rho)/4$$

The mth-order moment of y can be estimated approximately using the following equation:

$$E[y^m] = P_{++}y_{++}^m + P_{+-}y_{+-}^m + P_{-+}y_{-+}^m + P_{--}y_{--}^m = \sum_{i=1}^{4} P_i y_i^m$$
Illustrating example

Using the same shallow foundation problem as an illustrating example:

Based on Rosenblueth's method, the following table can be obtained (with c′ evaluated at 8 ± 2 kPa and φ′ at 30° ± 3°):

c′     φ′     q_ult        Fs          P
10     33     1233.921     2.467842    0.15
10     27      595.181     1.190362    0.35
 6     33     1079.368     2.158736    0.35
 6     27      517.200     1.032336    0.15

The mean of Fs, E[Fs], can be calculated as:

$$E[F_s] = \sum_{i=1}^{4} P_i y_i = 1.697$$

The mean of Fs², E[Fs²], can be calculated as:

$$E[F_s^2] = \sum_{i=1}^{4} P_i y_i^2 = 3.200$$
Illustrating example

Using the same shallow foundation problem as an illustrating example:

The variance of Fs, Var[Fs], can be calculated using the following relationship:

$$Var[F_s] = E[F_s^2] - E[F_s]^2 = 0.320$$

Hence, the standard deviation of Fs: $\sigma_{F_s} = 0.566$.

Thus, the reliability index and the failure probability can be evaluated by:

$$\mu_g \approx \mu_{F_s} - 1 = 0.697 \qquad \sigma_g \approx \sigma_{F_s} = 0.566$$

$$\beta = \frac{\mu_g}{\sigma_g} = 1.232$$

$$p_f = 1 - \Phi(\beta) = 0.109$$

(Note: Rosenblueth's method predicts a lower failure probability than the Taylor series expansion method.)
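A short sketch reproducing the point-estimate numbers above (the compact Fs helper re-implements the Terzaghi factor of safety from Example 3; the variable names are my own):

```python
import numpy as np
from scipy.stats import norm

def Fs(c, phi_deg):
    """Terzaghi factor of safety (B = 2 m, Df = 0.5 m, gamma = 20 kN/m3, applied pressure 500 kPa)."""
    phi = np.radians(phi_deg)
    Nq = np.tan(np.pi/4 + phi/2)**2 * np.exp(np.pi * np.tan(phi))
    q_ult = 0.5*20*2 * 1.8*(Nq - 1)*np.tan(phi) + c*(Nq - 1)/np.tan(phi) + 20*0.5*Nq
    return q_ult / 500.0

mu, sig, rho = (8.0, 30.0), (2.0, 3.0), -0.4

# 2^n = 4 Rosenblueth points (mean +/- one std) and their weights
points = [(+1, +1), (+1, -1), (-1, +1), (-1, -1)]
y = np.array([Fs(mu[0] + s1*sig[0], mu[1] + s2*sig[1]) for s1, s2 in points])
P = np.array([(1 + s1*s2*rho) / 4 for s1, s2 in points])   # 0.15, 0.35, 0.35, 0.15

E1, E2 = P @ y, P @ y**2            # ~1.697 and ~3.200
sigma_Fs = np.sqrt(E2 - E1**2)      # ~0.566
beta = (E1 - 1) / sigma_Fs          # ~1.232
print(beta, 1 - norm.cdf(beta))     # pf ~ 0.109
```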
Rosenblueth’s method

In the case of n = 3, i.e. X = (x1, x2, x3):

In Rosenblueth's method, g is first evaluated at the following 2^n (i.e. eight) points:

$$y_{+++} = g(\mu_1 + \sigma_1, \mu_2 + \sigma_2, \mu_3 + \sigma_3) \qquad P_{+++} = (1 + \rho_{12} + \rho_{13} + \rho_{23})/8$$
$$y_{++-} = g(\mu_1 + \sigma_1, \mu_2 + \sigma_2, \mu_3 - \sigma_3) \qquad P_{++-} = (1 + \rho_{12} - \rho_{13} - \rho_{23})/8$$
$$y_{+-+} = g(\mu_1 + \sigma_1, \mu_2 - \sigma_2, \mu_3 + \sigma_3) \qquad P_{+-+} = (1 - \rho_{12} + \rho_{13} - \rho_{23})/8$$
$$\vdots$$
$$y_{---} = g(\mu_1 - \sigma_1, \mu_2 - \sigma_2, \mu_3 - \sigma_3) \qquad P_{---} = (1 + \rho_{12} + \rho_{13} + \rho_{23})/8$$

The mth-order moment of y can be estimated approximately using the following equation:

$$E[y^m] = \sum_{i=1}^{8} P_i y_i^m$$
Rosenblueth’s method

Generalisation

The generalisation to more than 3 variables follows the same logic.

If there are n variables, then 2^n points are chosen to cover all possible combinations of each variable one standard deviation above or below its mean.

Suppose s_i is positive when the value of the ith variable is one s.d. above the mean, and negative when it is one s.d. below the mean. Then

$$P_{(s_1 s_2 \dots s_n)} = \frac{1}{2^n}\left(1 + \sum_{i=1}^{n-1}\sum_{j=i+1}^{n} s_i s_j \rho_{ij}\right)$$

Again,

$$E[y^m] = \sum_{i=1}^{2^n} P_i y_i^m$$
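A minimal sketch of this general 2^n-point rule for an arbitrary number of symmetric random variables (the function name rosenblueth_moment is my own, not from the lecture):

```python
import itertools
import numpy as np

def rosenblueth_moment(g, mu, sigma, rho, m=1):
    """Approximate E[y^m] for y = g(x) using Rosenblueth's 2^n point-estimate method."""
    mu, sigma, rho = map(np.asarray, (mu, sigma, rho))
    n = len(mu)
    total = 0.0
    for s in itertools.product([+1, -1], repeat=n):       # all 2^n sign combinations
        s = np.array(s)
        # Weight: (1 + sum_{i<j} s_i s_j rho_ij) / 2^n
        w = 1.0
        for i in range(n - 1):
            for j in range(i + 1, n):
                w += s[i] * s[j] * rho[i, j]
        w /= 2**n
        y = g(mu + s * sigma)                              # g at one s.d. above/below each mean
        total += w * y**m
    return total

# Usage sketch (for an assumed g, mu, sigma and correlation matrix rho):
# E1 = rosenblueth_moment(g, mu, sigma, rho, m=1)
# E2 = rosenblueth_moment(g, mu, sigma, rho, m=2)
# var = E2 - E1**2
```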
End of lecture note
