Sensitivity Analysis Lecture Note
Global sensitivity analysis (GSA) aims at quantifying the contribution of individual random
variables 𝑿 to a quantity of interest. Sensitivity analysis can be used to screen out unimportant
variables before the main analysis and to gain engineering insight into the model at hand. Let
us consider a function
us consider a function
𝑌 = 𝑔(𝑿) (6.0.1)
We would like to identify the fraction of the uncertainty (variance) of 𝑌 that can be attributed
to each random variable.∗
For example, consider a multi-story building with random properties subjected to random
ground motion excitation. The structural responses of interest can be the peak displacement, peak
velocity, peak acceleration, etc. Below are some of the engineering questions that may arise.
• To reduce the peak acceleration response, which factor should be changed? In other words,
which factor affects the peak acceleration the most?
• Are all of the variables actually affecting the peak acceleration? Can we set some of the
variables to be deterministic to simplify the analysis? (Model simplification)
• We want to optimize the importance sampling density for reliability analysis but the input
dimension is too high. Can we optimize the sampling density only for selected variables
instead of considering all the variables?
On the other hand, GSA can also be used to assist resource allocation decisions.
• If we have some resources to collect more information, should we plan a field investigation
to identify soil properties, or should we focus more on structural deterioration inspection?
For which variables should we reduce the uncertainty?
These are some of the questions that can be answered by sensitivity analysis† .
∗ Main reference: Saltelli, A., Ratto, M., Andres, T., Campolongo, F., Cariboni, J., Gatelli, D., Saisana, M. and Tarantola, S., 2008. Global sensitivity analysis: the primer. John Wiley & Sons.
† Razavi, S., Jakeman, A., Saltelli, A., Prieur, C., Iooss, B., Borgonovo, E., Plischke, E., Piano, S.L., Iwanaga, T., Becker, W. and Tarantola, S., 2021. The future of sensitivity analysis: An essential discipline for systems modeling and policy support. Environmental Modelling & Software, 137, p.104954.
6.1. Local versus Global
𝑌 = 𝑋1 + 𝑋2 (6.1.2)
where 𝑋1 and 𝑋2 each follow a Gaussian distribution with standard deviations 𝜎1 = 1 and
𝜎2 = 5. Figure 6.1.1 shows scatter plots obtained by Monte Carlo simulation. The plots indicate
that 𝑌 is more sensitive to 𝑋2 than to 𝑋1, because a clearer pattern can be observed in the
right-hand plot. However, if we determine the relative importance based on the gradient measure,
this behavior will not be captured and the sensitivities to the two variables will be deemed equal.
A modified local sensitivity index consistent with this intuition is the sigma-normalized derivative:
$$S_i^{SD} = \frac{\sigma_{X_i}}{\sigma_Y}\,\frac{\partial g(\boldsymbol{X})}{\partial X_i} \qquad (6.1.3)$$
This can be applied only when the input variables are independent of each other.
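As a quick numerical illustration of Eq.(6.1.3) for the linear example above, here is a minimal sketch (the sample size and finite-difference step are arbitrary choices; for this model the gradient is of course constant and known analytically):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = np.array([1.0, 5.0])           # sigma_1 = 1, sigma_2 = 5, as in the example

def g(x):                              # the linear model Y = X1 + X2
    return x[..., 0] + x[..., 1]

# Monte Carlo estimate of sigma_Y (analytically sqrt(1 + 25) = 5.10)
X = rng.normal(0.0, sigma, size=(100_000, 2))
sigma_Y = g(X).std()

# finite-difference gradient of g at the mean point
x0, h = np.zeros(2), 1e-6
grad = np.array([(g(x0 + h * np.eye(2)[i]) - g(x0)) / h for i in range(2)])

S_SD = sigma * grad / sigma_Y          # sigma-normalized derivatives, Eq.(6.1.3)
print(S_SD)                            # approx. [0.196, 0.981]: X2 dominates, matching the scatter plots
```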
An alternative approach to account for the input randomness in local sensitivity analysis is
to transform each variable 𝑥𝑖 into the standard normal space, 𝑧𝑖 = 𝑇(𝑥𝑖), and take the partial
derivatives of the transformed model 𝐺(𝒛). Again, this is appropriate only when the input random
variables are independent of each other. For example, recall the variable transform in Eq.(4.1.1) introduced for
FORM analysis. In fact, FORM approximates the limit state using the gradient (a local
sensitivity measure) at the design point. The vector of the normalized gradient is often denoted as
𝜶:
$$\boldsymbol{\alpha} = -\frac{\nabla G(\boldsymbol{z}^*)}{\lVert\nabla G(\boldsymbol{z}^*)\rVert} \qquad (6.1.4)$$
and 𝜶 is called the importance vector, as it represents the importance of each random variable
around the design point. Furthermore, 𝜶 contains the directional cosines of the design point (see Figure
6.1.2):
$$\boldsymbol{\alpha} = -\frac{\boldsymbol{z}^*}{\beta} \qquad (6.1.5)$$
The (−) sign indicates that 𝜶 is directed towards the failure domain.
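A minimal numerical sketch of Eq.(6.1.4) is given below; the limit-state function and its design point are hypothetical placeholders, not an example from these notes:

```python
import numpy as np

def G(z):
    # hypothetical linear limit state in standard normal space; failure when G(z) <= 0
    return 3.0 - 2.0 * z[0] - 1.0 * z[1]

def numerical_gradient(f, z, h=1e-6):
    # central finite differences, component by component
    return np.array([(f(z + h * e) - f(z - h * e)) / (2 * h) for e in np.eye(len(z))])

# design point of this linear limit state (closest point of {G = 0} to the origin)
a = np.array([2.0, 1.0])
z_star = 3.0 * a / (a @ a)              # z* = b a / ||a||^2 for G(z) = b - a.z
beta = np.linalg.norm(z_star)           # reliability index, beta = ||z*||

grad = numerical_gradient(G, z_star)
alpha = -grad / np.linalg.norm(grad)    # importance vector, Eq.(6.1.4)
print(alpha, z_star / beta)             # here alpha coincides with z*/beta; the sign in Eq.(6.1.5)
                                        # depends on the convention used for G
```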
However, when the input random variables are not independent, 𝛼𝑖 cannot represent the local
sensitivity of the original variable 𝑋𝑖, because the transformation of dependent random variables,
i.e. Eq.(4.1.5), no longer establishes a one-to-one relationship between 𝑋𝑖 and 𝑍𝑖. Therefore, the
effects of more than one original variable can be mixed up in a single transformed variable, i.e.
the transformation into 𝑍𝑖 may involve 𝑋𝑗.
Alternatively, one can perform linear regression to assess sensitivity: when the slope is larger
for a variable, the variable is more influential. However, this does not capture nonlinear
dependencies between 𝑿 and 𝑌.
In contrast, global sensitivity analysis (GSA) considers the sensitivity across the whole range
of the input space and also accounts for nonlinear dependencies. Different global sensitivity
measures are available in the literature, e.g. Pearson correlation, the Morris method, and
cross-entropy-based methods; one of the most widely accepted concepts is the variance-based
sensitivity index, also called the Sobol index.
6.2. Intuition behind Variance-based Sensitivity Analysis
In statistical terms, the 'trend' corresponds to the conditional mean of 𝑌 given different
𝑋𝑖 values, and whether this conditional mean varies with the 𝑋𝑖 value becomes the key question
when evaluating the sensitivity index. The variability is measured by the variance operator. To
summarize, the following two observations support the idea that the 'variance of the conditional
mean', i.e. 𝕍𝑎𝑟𝑋𝑖 [𝔼𝑿𝑖 ̄ [𝑌 |𝑋𝑖 ]], is a good measure of sensitivity.
• In Figure 6.2.1(a)
• In Figure 6.2.1(b)
where 𝑿𝑖 ̄ represents the 𝑑 − 1 dimensional vector containing all the components of 𝑿 except
𝑋𝑖 .
On the other hand, the Law of Total Variance states that the variance of the output can always
be decomposed into two parts.
Law of Total Variance
$$\mathbb{V}ar[Y] = \mathbb{V}ar_{X_i}\big[\mathbb{E}_{\boldsymbol{X}_{\bar i}}[Y|X_i]\big] + \mathbb{E}_{X_i}\big[\mathbb{V}ar_{\boldsymbol{X}_{\bar i}}[Y|X_i]\big] \qquad (6.2.2)$$
In this equation, the total variance is decomposed into an 'explained' and an 'unexplained' part:
the 'explained' part is the portion of the variance that can be explained by the regression model
of 𝑋𝑖 and 𝑌, and the 'unexplained' part is the portion of the variance that cannot be reduced by
adding knowledge of 𝑋𝑖. The proof is as follows.
$$
\begin{aligned}
\mathbb{V}ar[Y] &= \mathbb{E}[Y^2] - \mathbb{E}[Y]^2 \\
&= \mathbb{E}_{X_i}\big[\mathbb{E}_{\boldsymbol{X}_{\bar i}}[Y^2|X_i]\big] - \mathbb{E}_{X_i}\big[\mathbb{E}_{\boldsymbol{X}_{\bar i}}[Y|X_i]\big]^2 && \text{(law of iterated expectation, law of total probability)} \\
&= \mathbb{E}_{X_i}\big[\mathbb{V}ar_{\boldsymbol{X}_{\bar i}}[Y|X_i] + \mathbb{E}_{\boldsymbol{X}_{\bar i}}[Y|X_i]^2\big] - \mathbb{E}_{X_i}\big[\mathbb{E}_{\boldsymbol{X}_{\bar i}}[Y|X_i]\big]^2 && \text{(definition of the variance of } Y|X_i\text{)} \\
&= \mathbb{E}_{X_i}\big[\mathbb{V}ar_{\boldsymbol{X}_{\bar i}}[Y|X_i]\big] + \mathbb{E}_{X_i}\big[\mathbb{E}_{\boldsymbol{X}_{\bar i}}[Y|X_i]^2\big] - \mathbb{E}_{X_i}\big[\mathbb{E}_{\boldsymbol{X}_{\bar i}}[Y|X_i]\big]^2 \\
&= \mathbb{E}_{X_i}\big[\mathbb{V}ar_{\boldsymbol{X}_{\bar i}}[Y|X_i]\big] + \mathbb{V}ar_{X_i}\big[\mathbb{E}_{\boldsymbol{X}_{\bar i}}[Y|X_i]\big] && \text{(definition of the variance of } \mathbb{E}[Y|X_i]\text{)}
\end{aligned}
\qquad (6.2.3)
$$
Note that the first term of Eq.(6.2.2) corresponds to our intuitive definition of the measure of
sensitivity. By dividing both sides of the equation by $\mathbb{V}ar[Y]$, we get
$$1 = \underbrace{\frac{\mathbb{V}ar_{X_i}\big[\mathbb{E}_{\boldsymbol{X}_{\bar i}}[Y|X_i]\big]}{\mathbb{V}ar[Y]}}_{S_i} + \frac{\mathbb{E}_{X_i}\big[\mathbb{V}ar_{\boldsymbol{X}_{\bar i}}[Y|X_i]\big]}{\mathbb{V}ar[Y]} \qquad (6.2.4)$$
Note that because of Eq.(6.2.4), $S_i \in [0, 1]$ always holds. Also, by re-arranging Eq.(6.2.4), we get
an alternative expression of the Sobol index.
Sobol Main Sensitivity Index (2)
$$S_i = 1 - \frac{\mathbb{E}_{X_i}\big[\mathbb{V}ar_{\boldsymbol{X}_{\bar i}}[Y|X_i]\big]}{\mathbb{V}ar[Y]} \qquad (6.2.5)$$
This definition of the Sobol index is referred to as the main-effect index or first-order index.
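As a numerical sanity check of the decomposition and of the main-effect index, here is a minimal sketch; the model Y = X1 + X2² is an assumed toy example whose conditional means are known in closed form:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
X1, X2 = rng.standard_normal(N), rng.standard_normal(N)
Y = X1 + X2**2                        # toy nonlinear model

var_Y = Y.var()                       # total variance, analytically 1 + 2 = 3

# conditional means in closed form: E[Y|X1] = X1 + 1 and E[Y|X2] = X2**2
S1 = np.var(X1 + 1.0) / var_Y         # variance of conditional mean / total variance, approx. 1/3
S2 = np.var(X2**2) / var_Y            # approx. 2/3

# Law of Total Variance check: Var(E[Y|X1]) + E[Var(Y|X1)] = Var(Y),
# where Var(Y|X1) = Var(X2**2) = 2 for every value of X1
print(S1, S2, np.var(X1 + 1.0) + np.var(X2**2), var_Y)
```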
6.3. Interaction Effects
In this case, we are conditioning on two variables, 𝑋𝑖 and 𝑋𝑗. The inner mean operator
must be taken over all variables except 𝑋𝑖 and 𝑋𝑗, while the outer variance operator is taken over
the two conditioned variables. Since we have subtracted the individual contributions from the joint
contribution, the remaining term quantifies the pure interaction effect of the two variables, i.e.
the part of the contribution from 𝑋𝑖 and 𝑋𝑗 that cannot be captured by the simple summation of 𝑆𝑖
and 𝑆𝑗. The interaction effect is present when the model is nonadditive.
For an additive model, the following relation
$$\mathbb{V}ar_{X_1,X_2}\big[\mathbb{E}_{\boldsymbol{X}_{\overline{12}}}[Y|X_1,X_2]\big] = \mathbb{V}ar_{X_1}\big[\mathbb{E}_{\boldsymbol{X}_{\bar 1}}[Y|X_1]\big] + \mathbb{V}ar_{X_2}\big[\mathbb{E}_{\boldsymbol{X}_{\bar 2}}[Y|X_2]\big] \qquad (6.3.4)$$
holds, and therefore, the interaction effect of the two variables is zero, i.e. $S_{12}^{A} = 0$. On the other hand, for
nonadditive models, the presence of the interaction term produces additional variance,
$$\mathbb{V}ar_{X_1,X_2}\big[\mathbb{E}_{\boldsymbol{X}_{\overline{12}}}[Y|X_1,X_2]\big] \ge \mathbb{V}ar_{X_1}\big[\mathbb{E}_{\boldsymbol{X}_{\bar 1}}[Y|X_1]\big] + \mathbb{V}ar_{X_2}\big[\mathbb{E}_{\boldsymbol{X}_{\bar 2}}[Y|X_2]\big] \qquad (6.3.5)$$
resulting in $S_{12}^{B} > 0$. Similarly to the second-order index, the third-order sensitivity index is
defined as
Third-order Sensitivity Index
$$S_{ijk} = \frac{\mathbb{V}ar_{X_i,X_j,X_k}\big[\mathbb{E}\,[Y|X_i,X_j,X_k]\big]}{\mathbb{V}ar[Y]} - S_{ij} - S_{ik} - S_{jk} - S_i - S_j - S_k$$
which will have non-zero values when there exists a nonadditive term involving 𝑋𝑖, 𝑋𝑗 and 𝑋𝑘. However,
it is important to note that in real-world applications the presence of interaction terms is often
unknown in advance; therefore, the sensitivity results can be a useful indication of the presence of
interaction effects.
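As a simple illustration with an assumed toy model (not from the notes): for $Y = X_1 X_2$ with independent, zero-mean, unit-variance inputs,
$$\mathbb{E}[Y|X_1] = X_1\,\mathbb{E}[X_2] = 0, \qquad \mathbb{E}[Y|X_2] = X_2\,\mathbb{E}[X_1] = 0,$$
so $S_1 = S_2 = 0$, while $\mathbb{V}ar\big[\mathbb{E}[Y|X_1, X_2]\big] = \mathbb{V}ar[Y]$ because conditioning on both inputs leaves no remaining randomness; the entire variance is therefore attributed to the pure interaction, $S_{12} = 1$.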
The proof will be provided in the next section. Because of this property, for additive models
the first-order Sobol indices add up to one, i.e. ∑𝑖 𝑆𝑖 = 1. For nonadditive models, the sum of the
first-order Sobol indices is always smaller than one, i.e. ∑𝑖 𝑆𝑖 < 1. This holds only when the
input variables are independent of each other.
For example, when the model has a total of three variables, the total-effect index for $X_1$ is
calculated by
$$S_1^{T} = 1 - S_{23} - S_2 - S_3 \qquad (6.3.11)$$
When the variables are uncorrelated, the following also holds
$$S_1^{T} = S_1 + S_{12} + S_{13} + S_{123} \qquad (6.3.12)$$
from the property in Eq.(6.3.8).
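As a quick numerical check with hypothetical index values: suppose $S_1 = 0.5$, $S_2 = 0.3$, $S_3 = 0.1$ and the only non-zero interaction term is $S_{23} = 0.1$, so that all indices sum to one. Then Eq.(6.3.11) gives $S_1^T = 1 - 0.1 - 0.3 - 0.1 = 0.5$, consistent with Eq.(6.3.12), $S_1^T = S_1 + S_{12} + S_{13} + S_{123} = 0.5 + 0 + 0 + 0 = 0.5$.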
ANOVA Decomposition
Consider a function with uncorrelated input 𝑿 uniformly distributed within a unit hypercube.
𝑌 = 𝑔(𝑿) (6.4.1)
The model can be decomposed into summands of increasing dimension.
𝑌 = ∑ 𝑔𝒖 (𝑿𝒖 ) (6.4.3)
𝒖⊆{1,2,..,𝑑}
The decomposition always exists and is unique when the following holds:
$$\int_0^1 g_{\boldsymbol{u}}(\boldsymbol{X}_{\boldsymbol{u}})\, dX_k = 0 \qquad (6.4.4)$$
where $X_k \in \boldsymbol{X}_{\boldsymbol{u}}$, i.e. the integral of each component function with respect to any of
its "own" variables is zero. The expansion is called the ANOVA representation.
The variance of each of these summands, divided by the total variance, defines a global sensitivity index.
Sobol Indices - ANOVA Definition
Given that $g(\boldsymbol{X})$ is square-integrable, the global importance measure is
$$S_{\boldsymbol{u}} = \frac{\mathbb{V}ar\big[g_{\boldsymbol{u}}(\boldsymbol{X}_{\boldsymbol{u}})\big]}{\mathbb{V}ar[Y]} \qquad (6.4.5)$$
Eq.(6.4.5) represents the contribution of the partial variance associated with each combination of
random variables, that is, the reduction in the total variance of the system induced by freezing
the associated random variables. Because of the independence assumption, it can be shown that
the variances of the component functions sum up to the total variance of 𝑌 (i.e., the variance
sum law).
$$\mathbb{V}ar[Y] = \sum_{i} \mathbb{V}ar\big[g_i(X_i)\big] + \sum_{i<j} \mathbb{V}ar\big[g_{ij}(X_i, X_j)\big] + \dots + \mathbb{V}ar\big[g_{12\dots d}(X_1, X_2, \dots, X_d)\big] \qquad (6.4.6)$$
Therefore,
$$\sum_{i} S_i + \sum_{i<j} S_{ij} + \dots + S_{12\dots d} = 1 \qquad (6.4.7)$$
The equivalence between the previous Sobol index in Eq.(6.2.5) and the ANOVA partial variance
in Eq.(6.4.5) can be drawn as follows. From the property in Eq.(6.4.4), the following can be derived:
𝔼 [𝑌 ] = 𝑔0
𝔼𝑿𝑖 ̄ [𝑌 |𝑋𝑖 ] = 𝑔0 + 𝑔𝑖 (𝑋𝑖 ) (6.4.8)
𝔼𝑿𝑖𝑗̄ [𝑌 |𝑋𝑖 , 𝑋𝑗 ] = 𝑔0 + 𝑔𝑖 (𝑋𝑖 ) + 𝑔𝑗 (𝑋𝑗 ) + 𝑔𝑖𝑗 (𝑋𝑖 , 𝑋𝑗 )
𝑔0 = 𝔼 [𝑌 ]
𝑔𝑖 (𝑋𝑖 ) = 𝔼𝑿𝑖 ̄ [𝑌 |𝑋𝑖 ] − 𝑔0 (6.4.9)
𝑔𝑖𝑗 (𝑋𝑖 , 𝑋𝑗 ) = 𝔼𝑿𝑖𝑗̄ [𝑌 |𝑋𝑖 , 𝑋𝑗 ] − 𝔼𝑿𝑖 ̄ [𝑌 |𝑋𝑖 ] − 𝔼𝑿𝑗 ̄ [𝑌 |𝑋𝑗 ] + 𝑔0
Further, by taking the variance operator on both sides and dividing by $\mathbb{V}ar[Y]$,
$$\frac{\mathbb{V}ar\big[g_i(X_i)\big]}{\mathbb{V}ar[Y]} = \frac{\mathbb{V}ar_{X_i}\big[\mathbb{E}_{\boldsymbol{X}_{\bar i}}[Y|X_i]\big]}{\mathbb{V}ar[Y]} = S_i$$
In this way, starting from the ANOVA definition of the partial variance (left-hand side), we arrive at the
expression of the Sobol index (right-hand side). Note that Eq.(6.4.7) is also equivalent to Eq.(6.2.4) obtained
from the Law of Total Variance. This derivation using the Sobol-Hoeffding (S-H) decomposition clearly shows that
the Sobol index represents the fraction of the total response variance that can be attributed to each
individual variable (or set of variables). Again, it is noted that the property in Eq.(6.4.7) holds
only because the variables are assumed to be independent of each other.
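A minimal numerical sketch of this decomposition, using the assumed toy model $g(\boldsymbol{X}) = X_1 + X_2 + X_1 X_2$ on the unit square (its component functions follow from Eq.(6.4.9) in closed form):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 500_000
X1, X2 = rng.uniform(size=N), rng.uniform(size=N)
Y = X1 + X2 + X1 * X2                  # toy model, uniform inputs on [0, 1]^2

# ANOVA components from Eq.(6.4.9), worked out in closed form for this model
g0 = 1.25                              # E[Y]
g1 = 1.5 * X1 - 0.75                   # E[Y|X1] - g0
g2 = 1.5 * X2 - 0.75                   # E[Y|X2] - g0
g12 = (X1 - 0.5) * (X2 - 0.5)          # interaction component

# variance sum law, Eq.(6.4.6): the partial variances add up to Var[Y]
print(Y.var(), g1.var() + g2.var() + g12.var())

# Sobol indices from Eq.(6.4.5): approx. S1 = S2 = 0.49 and S12 = 0.02, summing to one
print(g1.var() / Y.var(), g2.var() / Y.var(), g12.var() / Y.var())
```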
• Input variable transform: When the components of 𝑿 are independent random variables, given any one-to-
one transformation 𝑍𝑖 = 𝑇𝑖(𝑋𝑖) and the corresponding model form 𝑔𝑍(𝒁) = 𝑔(𝑇⁻¹(𝒁)),
the sensitivity index does not change, i.e.
$$S_i^{g_Z(\boldsymbol{Z})} = S_i^{g(\boldsymbol{X})} \qquad (6.5.1)$$
Therefore,
$$S_i^{G_{\mathrm{FORM}}(\boldsymbol{z})} = \frac{\mathbb{V}ar_{Z_i}\big[\mathbb{E}_{\boldsymbol{Z}_{\bar i}}[Y_{\mathrm{FORM}}|Z_i]\big]}{\mathbb{V}ar[Y_{\mathrm{FORM}}]} = \alpha_i^2 \qquad (6.6.4)$$
from the definition of 𝜶 in Eq.(6.1.4). Because the Sobol index is invariant to one-to-one
transforms of the input variables, provided that the input variables are independent of each other,
Eq.(6.6.4) can be directly used as an approximate importance measure of the non-standardized
variable 𝑋𝑖.
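For instance, continuing the hypothetical linear limit state used in the sketch after Eq.(6.1.5), $\boldsymbol{\alpha} \approx (0.894, 0.447)$, so Eq.(6.6.4) gives approximate main-effect indices $\alpha_1^2 \approx 0.8$ and $\alpha_2^2 \approx 0.2$ for the two standardized variables.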
1. For $n = 1, 2, \dots, N$
(a) Draw one sample of $X_i$, say $X_i^{(n)}$
(b) Draw $N$ samples of $\boldsymbol{X}_{\bar i}$, say $\{\boldsymbol{X}_{\bar i}^{(m)}\}_{m=1,\dots,N}$
(c) Compute the $N$ sample responses with $\{X_i^{(n)}, \boldsymbol{X}_{\bar i}^{(m)}\}$, say $\{Y^{(n,m)}\}_{m=1,\dots,N}$.
(d) Compute the sample mean of these responses, let us call this $E_i^{(n)}$, i.e. $E_i^{(n)} = \frac{1}{N}\sum_{m=1}^{N} Y^{(n,m)}$
2. Compute the sample variance of $\{E_i^{(n)}\}_{n=1,2,\dots,N}$
$$\mathbb{V}ar_{X_i}\big[\mathbb{E}_{\boldsymbol{X}_{\bar i}}[Y|X_i]\big] \simeq \frac{1}{N}\sum_{n=1}^{N}\big(E_i^{(n)} - \bar{E}_i\big)^2 \qquad (6.7.2)$$
where $\bar{E}_i$ is the sample mean of the $E_i^{(n)}$
3. Compute the variance of Y using {𝑌 (𝑛,𝑚) }𝑛,𝑚=1,...,𝑁 and compute the main Sobol index.
This algorithm requires $d \times N^2$ model evaluations to obtain the main Sobol indices of all variables. Total-effect
indices can be calculated similarly (by switching the sampling order of $X_i$ and $\boldsymbol{X}_{\bar i}$ and by subtracting the
final results from 1), and they require the same amount of computation as the main-effect indices.
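A minimal double-loop implementation of this estimator is sketched below; the test model and input distribution are illustrative assumptions (the linear example with $\sigma_1 = 1$, $\sigma_2 = 5$, for which $S_1 = 1/26$ and $S_2 = 25/26$):

```python
import numpy as np

def main_sobol_double_loop(g, sample_X, i, N=500, rng=None):
    """Brute-force double-loop estimate of the main-effect index of variable i."""
    rng = rng or np.random.default_rng()
    E_i = np.empty(N)
    Y_all = np.empty((N, N))
    for n in range(N):                        # outer loop over X_i
        X = sample_X(N, rng)                  # inner N samples of all variables
        X[:, i] = sample_X(1, rng)[0, i]      # freeze X_i at one outer-loop value
        Y_all[n] = g(X)
        E_i[n] = Y_all[n].mean()              # sample estimate of E[Y | X_i]
    return E_i.var() / Y_all.var()            # Eq.(6.7.2) divided by Var[Y]

sigma = np.array([1.0, 5.0])
sample_X = lambda n, rng: rng.normal(0.0, sigma, size=(n, 2))
g = lambda X: X[:, 0] + X[:, 1]
rng = np.random.default_rng(3)
print([main_sobol_double_loop(g, sample_X, i, N=500, rng=rng) for i in range(2)])
```

Note that this crude estimator carries a small bias for a finite inner-loop size; the sketch is only meant to illustrate the $d \times N^2$ cost of the double loop.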
1. Draw two independent $N$-sample sets, say set 𝑨 and set 𝑩 (i.e., a total of $2N$ sample points
are randomly sampled). Let us denote the corresponding sample responses as 𝒀𝐴 and 𝒀𝐵,
respectively.
2. Compute the response mean and variance, 𝑌 ̄ = 𝔼 [𝑌 ] and 𝕍𝑎𝑟 [𝑌 ], using 𝒀𝐴 and 𝒀𝐵
3. For 𝑖 = 1, 2, ..., 𝑑
(a) Define a new sample set by combining sample sets 𝑨 and 𝑩: by taking the
sample values of only the $i$-th variable, $X_i$, from sample set 𝑩 and taking $\boldsymbol{X}_{\bar i}$ from 𝑨,
a new sample set, say $\boldsymbol{A}_B^i$, with $N$ sample points can be defined.
(b) Compute the corresponding sample responses $\boldsymbol{Y}_{A_B^i}$
(c) Main- and total-effect indices can be estimated using
$$\begin{aligned}
\mathbb{V}ar_{X_i}\big[\mathbb{E}_{\boldsymbol{X}_{\bar i}}[Y|X_i]\big] &\simeq \frac{1}{N}\sum_{n=1}^{N} Y_{\boldsymbol{B}}^{(n)}\, Y_{\boldsymbol{A}_B^i}^{(n)} - \bar{Y}^2 \\
\mathbb{V}ar_{\boldsymbol{X}_{\bar i}}\big[\mathbb{E}_{X_i}[Y|\boldsymbol{X}_{\bar i}]\big] &\simeq \frac{1}{N}\sum_{n=1}^{N} Y_{\boldsymbol{A}}^{(n)}\, Y_{\boldsymbol{A}_B^i}^{(n)} - \bar{Y}^2
\end{aligned} \qquad (6.7.3)$$
Note that $Y_{\boldsymbol{B}}$ and $Y_{\boldsymbol{A}_B^i}$ share only the $i$-th input (drawn from 𝑩), while $Y_{\boldsymbol{A}}$ and $Y_{\boldsymbol{A}_B^i}$ share all inputs except the $i$-th, which is why the two products estimate the respective conditional-mean variances.
This algorithm requires a total of $(d + 2) \times N$ model evaluations to obtain both the main- and total-effect
indices of all random variables. The derivation can be found in Saltelli et al. (2010).
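A compact sketch of this pick-freeze scheme is given below; the test model (Y = X1 + X2 + X1·X2 with standard normal inputs, for which the main-effect indices are 1/3 and the total-effect indices are 2/3) and the sample size are illustrative assumptions:

```python
import numpy as np

def sobol_pick_freeze(g, sample_X, d, N=50_000, rng=None):
    """Main- and total-effect indices of all d inputs with (d + 2) * N model evaluations."""
    rng = rng or np.random.default_rng()
    A, B = sample_X(N, rng), sample_X(N, rng)     # two independent sample sets
    YA, YB = g(A), g(B)
    Ybar = np.concatenate([YA, YB]).mean()
    varY = np.concatenate([YA, YB]).var()
    S_main, S_total = np.empty(d), np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                       # X_i taken from B, all other columns from A
        YABi = g(ABi)
        S_main[i] = (np.mean(YB * YABi) - Ybar**2) / varY           # Eq.(6.7.3), first line
        S_total[i] = 1.0 - (np.mean(YA * YABi) - Ybar**2) / varY    # 1 - Var[E[Y|X_~i]] / Var[Y]
    return S_main, S_total

g = lambda X: X[:, 0] + X[:, 1] + X[:, 0] * X[:, 1]
sample_X = lambda n, rng: rng.standard_normal((n, 2))
print(sobol_pick_freeze(g, sample_X, d=2, rng=np.random.default_rng(4)))
```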
2. For 𝑖 = 1, 2, ..., 𝑑
(a) Collect the samples of $X_i$ and $Y$, say $\{X_i^{(n)}, Y^{(n)}\}_{n=1,\dots,N}$
(b) Approximate the joint distribution of $\{X_i, Y\}$ using the joint sample set, i.e. fit a bivariate
Gaussian mixture distribution
$$f(X_i, Y) = \sum_{k=1}^{m} \alpha_k\, f_N\big([X_i, Y];\, \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k\big) \qquad (6.7.4)$$
with $\boldsymbol{\mu}_k = [\mu_{x,k}, \mu_{y,k}]$,
$$\boldsymbol{\Sigma}_k = \begin{pmatrix} \sigma_{x,k}^2 & \sigma_{x,k}\sigma_{y,k}\rho_{xy,k} \\ \sigma_{x,k}\sigma_{y,k}\rho_{xy,k} & \sigma_{y,k}^2 \end{pmatrix} \qquad (6.7.6)$$
and
$$\sum_{k=1}^{m} \alpha_k = 1 \qquad (6.7.7)$$
(c) Compute the conditional mean of $Y$ given each sample of $X_i$ from the fitted mixture, i.e.
$$E_i^{(n)} = \mathbb{E}\big[Y \,\big|\, X_i^{(n)}\big] = \sum_{k=1}^{m} \tilde{\alpha}_k^{(n)} \tilde{\mu}_k^{(n)} \qquad (6.7.8)$$
where
$$\tilde{\alpha}_k^{(n)} = \frac{\alpha_k\, f_N\big(X_i^{(n)};\, \mu_{x,k}, \sigma_{x,k}^2\big)}{\sum_{j=1}^{m} \alpha_j\, f_N\big(X_i^{(n)};\, \mu_{x,j}, \sigma_{x,j}^2\big)}, \qquad \tilde{\mu}_k^{(n)} = \mu_{y,k} + \rho_{xy,k}\,\frac{\sigma_{y,k}}{\sigma_{x,k}}\big(X_i^{(n)} - \mu_{x,k}\big) \qquad (6.7.9)$$
(d) Compute the sample variance of $\{E_i^{(n)}\}_{n=1,2,\dots,N}$
$$\mathbb{V}ar_{X_i}\big[\mathbb{E}_{\boldsymbol{X}_{\bar i}}[Y|X_i]\big] \simeq \frac{1}{N}\sum_{n=1}^{N}\big(E_i^{(n)} - \bar{E}_i\big)^2 \qquad (6.7.10)$$
where $\bar{E}_i$ is the sample mean of the $E_i^{(n)}$
(e) Compute the main Sobol index.
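A minimal sketch of this single-loop estimator using scikit-learn's GaussianMixture (an assumed tool choice; the notes do not prescribe a particular fitting routine), applied to the earlier linear example with $\sigma_1 = 1$, $\sigma_2 = 5$:

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def main_sobol_gmm(x_i, y, n_components=3, seed=0):
    """Single-loop main-effect estimate from existing (X_i, Y) samples via a bivariate GMM."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          random_state=seed).fit(np.column_stack([x_i, y]))
    w, mu, cov = gmm.weights_, gmm.means_, gmm.covariances_
    mx, my = mu[:, 0], mu[:, 1]
    sx2, sxy = cov[:, 0, 0], cov[:, 0, 1]

    # per-sample posterior weights and conditional component means, Eq.(6.7.9)
    fx = w[:, None] * norm.pdf(x_i[None, :], loc=mx[:, None], scale=np.sqrt(sx2)[:, None])
    w_tilde = fx / fx.sum(axis=0)
    mu_tilde = my[:, None] + (sxy / sx2)[:, None] * (x_i[None, :] - mx[:, None])

    E_i = (w_tilde * mu_tilde).sum(axis=0)     # E[Y | X_i^(n)], Eq.(6.7.8)
    return E_i.var() / y.var()                 # main Sobol index estimate

rng = np.random.default_rng(5)
X = rng.normal(0.0, [1.0, 5.0], size=(20_000, 2))
Y = X[:, 0] + X[:, 1]
print(main_sobol_gmm(X[:, 0], Y), main_sobol_gmm(X[:, 1], Y))   # approx. 1/26 and 25/26
```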
The total-effect index can be obtained in a similar manner by replacing $X_i$ with $\boldsymbol{X}_{\bar i}$ and subtracting
the final result from 1. For the total-effect index, the Gaussian mixture fitting is performed
in the higher, $d$-dimensional space and Eq.(6.7.9) becomes
$$\tilde{\alpha}_k^{(n)} = \frac{\alpha_k\, f_N\big(\boldsymbol{X}_{\bar i}^{(n)};\, \boldsymbol{\mu}_{x,k}, \boldsymbol{\Sigma}_{x,k}\big)}{\sum_{j=1}^{m} \alpha_j\, f_N\big(\boldsymbol{X}_{\bar i}^{(n)};\, \boldsymbol{\mu}_{x,j}, \boldsymbol{\Sigma}_{x,j}\big)}, \qquad \tilde{\mu}_k^{(n)} = \mu_{y,k} + \boldsymbol{\Sigma}_{yx,k}\, \boldsymbol{\Sigma}_{xx,k}^{-1}\big(\boldsymbol{X}_{\bar i}^{(n)} - \boldsymbol{\mu}_{x,k}\big) \qquad (6.7.11)$$
6.8. Reliability-oriented GSA
$$q = \mathbb{1}\big(G(\boldsymbol{x}) \le 0\big) \qquad (6.8.1)$$
In this case, the mean and variance of $q$ are expressed in terms of its occurrence probability, i.e.
the failure probability in reliability problems,
$$\mathbb{E}[q] = P_f, \qquad \mathbb{V}ar[q] = P_f(1 - P_f) \qquad (6.8.2)$$
Similarly, the conditional mean and variance of $q$ can be written in terms of the conditional failure
probability, and the main-effect index of $q$ therefore becomes
$$S_i = \frac{\mathbb{V}ar_{X_i}\big[P_{f|X_i}\big]}{P_f(1 - P_f)} = \frac{\mathbb{E}_{X_i}\big[P_{f|X_i}^2\big] - P_f^2}{P_f(1 - P_f)} \qquad (6.8.4)$$
For the FORM approximation (a linear limit state in the standard normal space), the conditional failure probability given $Z_i$ is
$$P_{f|Z_i} = \mathbb{P}\big(\boldsymbol{\alpha}_{\bar i}\, \boldsymbol{Z}_{\bar i} \ge \beta - \alpha_i Z_i\big)$$
Therefore,
$$
\begin{aligned}
\mathbb{E}_{Z_i}\big[P_{f|Z_i}^2\big] &= \mathbb{E}_{Z_i}\!\left[\mathbb{P}\!\left(\tilde z \le \frac{\alpha_i Z_i - \beta}{\lVert\boldsymbol{\alpha}_{\bar i}\rVert}\right)^{\!2}\right] \\
&= \mathbb{E}_{Z_i}\!\left[\mathbb{P}\!\left(\tilde z_1 \le \frac{\alpha_i Z_i - \beta}{\lVert\boldsymbol{\alpha}_{\bar i}\rVert},\; \tilde z_2 \le \frac{\alpha_i Z_i - \beta}{\lVert\boldsymbol{\alpha}_{\bar i}\rVert}\right)\right] && (\tilde z_1 \text{ and } \tilde z_2 \text{ are independent standard normal})\\
&= \mathbb{P}\!\left(\tilde z_1 \le \frac{\alpha_i Z_i - \beta}{\lVert\boldsymbol{\alpha}_{\bar i}\rVert},\; \tilde z_2 \le \frac{\alpha_i Z_i - \beta}{\lVert\boldsymbol{\alpha}_{\bar i}\rVert}\right) && \text{(total probability theorem)} \\
&= \mathbb{P}\big(\tilde y_1 \le -\beta,\; \tilde y_2 \le -\beta\big) && (\text{by letting } \tilde y_k = \tilde z_k \lVert\boldsymbol{\alpha}_{\bar i}\rVert - \alpha_i Z_i \text{ for } k = 1,2)\\
&= \Phi_2\big(-\beta, -\beta, \alpha_i^2\big) && (\tilde y_k \text{ are standard normal with correlation } \alpha_i^2)
\end{aligned}
\qquad (6.8.6)
$$
Meanwhile, the bivariate normal CDF can be expressed in terms of a single-fold integral (Papaioannou
and Straub, 2021):
$$\Phi_2(-\beta, -\beta, \alpha_i^2) = \underbrace{\Phi(-\beta)^2}_{P_f^2} + \int_0^{\alpha_i^2} \varphi_2(-\beta, -\beta, r)\, dr \qquad (6.8.7)$$
By substituting the above equations into Eq.(6.8.4), the formulation for the reliability-oriented sensitivity
index is derived:
First-order Sobol Index for FORM Analysis
Given a linear limit state with design point $\boldsymbol{z}^*$, the main-effect Sobol index for the output $q$
is
$$S_i = \frac{1}{P_f(1 - P_f)} \int_0^{\alpha_i^2} \varphi_2(-\beta, -\beta, r)\, dr \qquad (6.8.8)$$
where $\beta = \lVert\boldsymbol{z}^*\rVert$, $\alpha_i = z_i^*/\beta$, and $P_f = \Phi(-\beta)$.
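A minimal numerical sketch of Eq.(6.8.8) using SciPy; the design point below is the same hypothetical one used earlier ($\alpha^2 \approx [0.8, 0.2]$):

```python
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.integrate import quad

def form_sobol_main(z_star):
    """Main-effect indices of the failure indicator q for a linear limit state, Eq.(6.8.8)."""
    z_star = np.asarray(z_star, dtype=float)
    beta = np.linalg.norm(z_star)             # beta = ||z*||
    alpha = z_star / beta                     # alpha_i = z_i*/beta
    Pf = norm.cdf(-beta)                      # Pf = Phi(-beta)

    def phi2(r):                              # bivariate standard normal pdf at (-beta, -beta), correlation r
        return multivariate_normal.pdf([-beta, -beta], mean=[0.0, 0.0],
                                       cov=[[1.0, r], [r, 1.0]])

    return np.array([quad(phi2, 0.0, a**2)[0] for a in alpha]) / (Pf * (1.0 - Pf))

print(form_sobol_main([1.2, 0.6]))
```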
Therefore, the first-order and total-effect indices can be completely determined given the
design point (or 𝜶 and 𝛽). Figure 6.8.1 shows the sensitivity indices for different $\alpha_i$ and 𝛽
values. One thing to notice is that as the failure probability becomes smaller, the main-effect
index approaches zero, while the total-effect index approaches one. This is because rare events
are often triggered by particular combinations of random variables rather than by an extreme
realization of just a single variable; therefore, the interaction effect dominates the response.
Figure 6.8.1: Example results of the first-order and total-effect indices for different failure prob-
abilities (Papaioannou and Straub, 2021).
A popular choice for the kernel PDF is the standard normal PDF, i.e. $K(\cdot) = \varphi(\cdot)$, and the optimal
bandwidth can be found by solving the following optimization problem:
$$\hat{w}_{opt} = \arg\min_{w} \Big( \mathbb{E}_{X_i}\big[\hat{P}_{f|X_i}(w)\big] - \hat{P}_f \Big)^2 \qquad (6.8.12)$$
By approximating 𝑓𝑋𝑖 |ℱ (𝑋𝑖 ) ≃ 𝑘(𝑋𝑖 ), 𝑃𝑓|𝑋𝑖 can be computed and the sensitivity index is
calculated using Eq.(6.8.4). The mean operation can be replaced by the sample mean of 𝑃𝑓|𝑋𝑖
obtained using different 𝑋𝑖 values.
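A rough sketch of this bandwidth-selection idea is given below. The limit state, the sample sizes, and the use of the Bayes relation $P_{f|X_i}(x) = P_f\, f_{X_i|\mathcal{F}}(x) / f_{X_i}(x)$ are assumptions made for illustration:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

# Monte Carlo run for a hypothetical limit state G(x) = 3 - x1 - x2 with x ~ N(0, I)
rng = np.random.default_rng(6)
X = rng.standard_normal((50_000, 2))
fail = (3.0 - X[:, 0] - X[:, 1]) <= 0
Pf_hat = fail.mean()
i = 0                                          # variable of interest
xi, xi_fail = X[:, i], X[fail, i]

def Pf_given_xi(x, w):
    # f_{Xi|F} approximated by a Gaussian kernel density with bandwidth w (failure samples only),
    # then flipped with Bayes' rule; f_{Xi} is the known standard normal marginal here
    kde = norm.pdf((x[:, None] - xi_fail[None, :]) / w).mean(axis=1) / w
    return Pf_hat * kde / norm.pdf(x)

def loss(w):                                   # bandwidth selection criterion, Eq.(6.8.12)
    return (Pf_given_xi(xi[:2_000], w).mean() - Pf_hat) ** 2

w_opt = minimize_scalar(loss, bounds=(0.05, 2.0), method="bounded").x
P_cond = Pf_given_xi(xi[:10_000], w_opt)
S_i = (np.mean(P_cond**2) - Pf_hat**2) / (Pf_hat * (1 - Pf_hat))   # Eq.(6.8.4)
print(w_opt, S_i)
```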
𝑌 = 𝐺(𝒙) (6.8.13)
The failure event is defined as ℱ = {𝒙 ∶ 𝐺(𝒙) ≤ 0}. Suppose, based on the Monte Carlo simu-
lation samples, the joint PDF can be approximated as the following Gaussian mixture form.
$$f(X_i, Y) = \sum_{k=1}^{m} \alpha_k\, f_N\big([X_i, Y];\, \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k\big) \qquad (6.8.14)$$
∗ Li, L., Papaioannou, I. and Straub, D., 2019. Global reliability sensitivity estimation based on failure samples. Structural Safety, 81, p.101871.
$$P_{f|X_i} = \mathbb{P}(Y \le 0 \,|\, X_i) = \frac{\mathbb{P}(X_i,\, Y \le 0)}{f(X_i)\, dX_i} \qquad (6.8.15)$$
where
$$\begin{aligned}
\mathbb{P}(X_i,\, Y \le 0) &= \int_{-\infty}^{0} f_{X_i,Y}(X_i, Y)\, dY\, dX_i \\
&\simeq \int_{-\infty}^{0} \sum_{k=1}^{m} \alpha_k\, f_N\big([X_i, Y];\, \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k\big)\, dY\, dX_i \\
&= \sum_{k=1}^{m} \alpha_k\, F_N\big([X_i,\, Y = 0];\, \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k\big)\, dX_i
\end{aligned} \qquad (6.8.16)$$
in which $F_N(\cdot)$ is the bivariate normal CDF. Given this $P_{f|X_i}$ formulation, the sensitivity index can
be derived using Eq.(6.8.4). The mean operation can be replaced by the sample mean of $P_{f|X_i}$
obtained using different $X_i$ samples.
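A minimal sketch of this estimator, again using scikit-learn's GaussianMixture as an assumed fitting tool and a hypothetical linear limit state for testing:

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def reliability_sobol_gmm(x_i, g_vals, n_components=4, seed=0):
    """Main-effect index of the failure indicator from (X_i, G(X)) samples via a bivariate GMM."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          random_state=seed).fit(np.column_stack([x_i, g_vals]))
    w, mu, cov = gmm.weights_, gmm.means_, gmm.covariances_
    mx, my = mu[:, 0], mu[:, 1]
    sx2, sxy, sy2 = cov[:, 0, 0], cov[:, 0, 1], cov[:, 1, 1]

    # posterior component weights given X_i = x (same structure as Eq.(6.7.9))
    fx = w[:, None] * norm.pdf(x_i[None, :], loc=mx[:, None], scale=np.sqrt(sx2)[:, None])
    w_tilde = fx / fx.sum(axis=0)
    # per-component conditional mean and standard deviation of G given X_i = x
    mu_cond = my[:, None] + (sxy / sx2)[:, None] * (x_i[None, :] - mx[:, None])
    sd_cond = np.sqrt(sy2 - sxy**2 / sx2)[:, None]

    Pf_given_xi = (w_tilde * norm.cdf((0.0 - mu_cond) / sd_cond)).sum(axis=0)  # P(G <= 0 | X_i)
    Pf = (g_vals <= 0).mean()
    return (np.mean(Pf_given_xi**2) - Pf**2) / (Pf * (1.0 - Pf))               # Eq.(6.8.4)

rng = np.random.default_rng(7)
X = rng.standard_normal((100_000, 2))
G = 3.0 - 2.0 * X[:, 0] - X[:, 1]
print(reliability_sobol_gmm(X[:, 0], G), reliability_sobol_gmm(X[:, 1], G))
```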