
Proceedings of the 43rd Design Automation Conference

IDETC/CIE 2017
August 6-9, 2017, Cleveland, Ohio, USA

DETC2017-67620

MANAGEMENT OF SAMPLING UNCERTAINTY USING CONSERVATIVE ESTIMATE OF PROBABILITY IN BAYESIAN NETWORK

Sangjune Bae
Graduate Student
Department of Mechanical and Aerospace Engineering
University of Florida
Gainesville, Florida, USA

Nam H. Kim
Professor
Department of Mechanical and Aerospace Engineering
University of Florida
Gainesville, Florida, USA

Seung-gyo Jang
Principal Researcher
Agency for Defense Development
Daejeon, Republic of Korea

ABSTRACT
Since the safety of a system is often assessed by the probability of failure, it is crucial to calculate the probability accurately in order to achieve the target safety. Despite this importance, calculating the precise probability is not a trivial task due to the inherent aleatory variability and epistemic uncertainty. Therefore, the safety is assessed by a conservative estimate of the probability rather than by a single value of the probability. In general, there are two ways to achieve the target probability: shifting the probability or reducing the uncertainty. In this paper, among the various sources of epistemic uncertainty, the uncertainty quantification error from sampling is considered in order to calculate a conservative estimate of the system probability of failure. To quantify and shape the epistemic uncertainty, a Bayesian network is utilized to constitute the relationship between the system probability and the component probabilities, while global sensitivity analysis is employed to connect the variance of the system-level probability with that of the component-level probabilities. Based on this, the local sensitivity of the conservative estimate with respect to a design change in a component is derived and approximated for a simple numerical calculation using the Bayesian network and global sensitivity analysis. This shows how a design can meet probabilistic criteria under propagated uncertainty when the design changes.

INTRODUCTION
The probability of failure is usually utilized in a multidisciplinary design process to verify the safety of a system. Notwithstanding the importance of the probability of failure, its accuracy has always been in question. That is to say, there is uncertainty in the probability calculation [1].

There are two categories of uncertainty involved in a system: aleatory uncertainty and epistemic uncertainty [2]. The former is inherent variability or natural randomness that is irreducible. Aleatory uncertainty is usually represented using probability theory [3]. This natural randomness is reflected in the input variables in the form of probability distributions. Epistemic uncertainty, on the other hand, comes from a lack of knowledge; if additional information is provided, the uncertainty can be reduced. However, modeling epistemic uncertainty is not a trivial task. The sources of both uncertainties can be found throughout the entire design process. A material property can be taken as an example of aleatory uncertainty: Young's modulus and Poisson's ratio differ from one sample to another, even if the samples are made of the same material, because there is no absolute uniformity in nature. Likewise, the input design variables can be considered to have variability, which is reflected in reliability-based design optimization [4].

There have been numerous attempts to systematically consider, in the design process, the epistemic uncertainty that can be divided into the following types: surrogate model error, model form error, sampling error, numerical error, and other unknown error [5]. For example, fuzzy set theory, possibility theory, and interval analysis have been used to mathematically express the epistemic uncertainty, and evidence theory has been applied to quantify it [6-8]. Also, the limit of the prediction interval of a surrogate model has been used as a design criterion [9]. Recently, Jiang et al. tried to improve the

1 Copyright © 2017 by ASME


modeling capability in a multidisciplinary system by efficiently allocating samples using the prediction interval [10].

In the field of reliability analysis, epistemic uncertainty has been a major concern as well. For example, Nannapaneni and Mahadevan utilized a FORM-based approach and Monte Carlo simulation [11], and Martinez et al. applied belief function theory to a system under epistemic uncertainty to estimate its reliability [12]. Although these methods tried to precisely estimate the uncertainty in the system, they do not show how component-level uncertainty propagates to the system level.

To analyze uncertainty propagation in system reliability, there have been numerous approaches, such as methods of moments, Monte Carlo simulation, the probability bound method, interval analysis, fuzzy set theory, the discrete probability method, and evidence theory [13].

The above approaches are meant to ensure system safety under all sources of uncertainty. An alternative way to make sure the system is safe is to use a conservative estimate of the probability of failure [14]. The aleatory uncertainty is used to calculate the probability of failure, while the epistemic uncertainty yields its distribution. In this approach, a designated percentile value of the distribution is utilized as the design criterion. Such a criterion is referred to as an A- or B-basis allowable, depending on the desired confidence level [15]. However, determining the conservative estimate also involves uncertainty, due to the error and uncertainty in the probability estimation. This problem can significantly affect the result when the probability is estimated through a sampling method with a limited number of samples.

In reality, however, considering all sources of aleatory and epistemic uncertainty may yield too conservative a design, such that the performance of the system may not be satisfactory. In general, without reducing uncertainty, reliability-based optimization cannot satisfy reliability constraints in complex engineering systems, such as aircraft [2]. During aircraft design, various tests are employed in a building-block process to reduce design and modeling errors.

Unlike previous studies, the objective of this paper is to demonstrate how uncertainty propagates from the component level to the system-level probability of failure in sensitivity analysis, by deriving the sensitivity of the conservative estimate of probability with respect to the input variables, assuming the uncertainty mainly arises from a lack of samples. For this purpose, a hierarchical Bayesian network is utilized to represent the propagation of component-level probability to the system level, while global sensitivity analysis is deployed to analyze the propagation of the uncertainty between the component- and system-level probabilities of failure. In the literature, global sensitivity is normally used to screen important variables [16]. This paper is the first to show how global sensitivity can be explicitly incorporated in the design process.

The paper is composed of six sections. Section II shows two different methods of dealing with the epistemic uncertainty. Section III is a general explanation of the Bayesian network and global sensitivity analysis, which are the main devices in the following sections. Section IV explains how the epistemic uncertainty from sampling is reflected in the sensitivity analysis at the component level and the system level. Section V exhibits the accuracy of the formulation using a simple component-level probability, as well as the contribution of this paper, by comparing the system-level sensitivity analysis with and without the epistemic uncertainty.

DEALING WITH EPISTEMIC UNCERTAINTY
In this section, two approaches for handling uncertainty in reliability-based design optimization are explained.

1. Living with Uncertainty
Multidisciplinary design optimization helps to find a local optimum that minimizes the objective function while satisfying all constraints. The process is widely utilized in industry to design a structure that satisfies constraints which can be functions of external force, vibration, or heat [17]. However, when the optimized structure is actually built, not all instances show the expected performance; that is, some of them do not satisfy the prescribed constraints. This is because of the uncertainty involved in the manufacturing process and the computational errors that the optimization process did not consider.

Fig. 1 Relative Importance of Error in Probability (ratio of the standard error to the probability of failure, plotted over PF from 0 to 0.1)

In order to consider uncertainty, reliability-based design optimization (RBDO) was developed to find an optimum with a constraint on the probability of failure. RBDO takes randomness in the inputs into account and searches for the optimum [18]. The idea has been thoroughly investigated with different approaches. One of them is the sampling method, in which the probability of failure is evaluated through Monte Carlo sampling with a surrogate model [19].

A challenge in sampling-based reliability calculation is that the sampling uncertainty also affects the probability calculation, which in turn results in variability of the probability. For



instance, let us assume that the epistemic uncertainty of the probability calculation comes mainly from sampling. Then the estimated probability with N samples follows a normal distribution with

$$ \mu = p, \qquad \sigma = \sqrt{p(1-p)/N}, $$

where p is the estimated probability with N samples and the standard error, σ, represents the epistemic uncertainty. Following the property of epistemic uncertainty, the standard error converges to zero if an infinite number of samples is provided. This uncertainty plays a key role when the probability becomes small. Figure 1 shows the ratio between σ and p: even though the absolute value of the standard error is small, the ratio is significant when p is small. That is to say, the uncertainty in the probability can be much larger than the target probability.

To compensate for this uncertainty, the target probability of failure can be reduced, which can be achieved by changing the design in a conservative direction. Although the target probability can be reduced to obtain a conservative design, there are two issues: first, by how much must the target probability be reduced to account for the effect of sampling uncertainty? And second, how can that effect be quantified? Therefore, within this framework, the design is assumed to have a probability of failure small enough to satisfy the probabilistic constraints. In this paper, this approach is called 'living with uncertainty'. This is the basic approach in conventional RBDO: designing the system to satisfy reliability with all given uncertainties.

2. Shaping Uncertainty
If the epistemic uncertainty can be quantified, then it can be included in the design process by introducing a conservative estimate. In other words, the design must be made more conservative in order to compensate for such uncertainty. The conservative estimate, in this case, is that of the probability. In detail, the target probability of failure in a probabilistic constraint is achieved with the upper confidence limit of the probability of failure. If the desired confidence level is 1 - α and the probability of failure follows a normal distribution, then the reliability constraint with the conservative estimate can be written as Eq. (1).

$$ P\left( G(\mathbf{d}) \ge 0 \right) + z_{1-\alpha}\,\sigma_{P,G} \le P_{Tar} \quad (1) $$

where G(d) is the limit state with aleatory uncertainty, which is considered to fail when G(d) ≥ 0; P(A) is the probability of event A; σ_{P,G} is the standard error of the limit state probability due to epistemic uncertainty; and P_Tar is the target probability of failure.

To satisfy Eq. (1), there are two options: first, move the design point d to a safer region; second, reduce the epistemic uncertainty, which is represented by σ_{P,G}. Fig. 2 illustrates these options.

Fig. 2 Options to Move the Conservative Estimate (distributions of y and of P at the original design, with less uncertainty, and after a design change; points A, B, and C mark the conservative estimates relative to the threshold)

In the figure, the feasible region is defined as {y | y ≥ y_th}. The distribution of the probability of failure P_F at the original design is given as the solid line. At this design, the conservative estimate of P_F, which is represented by mark C in the figure, does not satisfy the probabilistic constraint. In order to make the conservative estimate satisfy the constraint, either the standard deviation of the distribution must be reduced or the mean must be shifted by changing the design. When the design is changed, the estimate can satisfy the constraint as point C moves to point A in the figure. If the uncertainty is reduced, the estimate can also satisfy the constraint as point C moves to point B, although the design does not change. In other words, not only changing the design variables but also shaping the uncertainty can be used to satisfy a reliability constraint in RBDO.

BAYESIAN NETWORK AND GLOBAL SENSITIVITY ANALYSIS
When a system consists of many components, the system-level probability of failure (SLP), P_F^sys, can be determined as a function of the component-level probabilities of failure (CLP). For given estimates of the CLPs, the SLP can be calculated using the relationships among the components. To quantify such relationships, there have been numerous attempts. A Bayesian network is one of them, which uses the concept of conditional probabilities [20, 21]. A Bayesian network, as the name implies, utilizes Bayes' theorem to calculate the SLP. An example of a simple Bayesian network is given in Fig. 3.

Fig. 3 Example of a Bayesian Network (node B is the parent of node A, and both are parents of node C; the conditional probability tables give P(B=F) = pB, P(A=F|B=S) = pA1, P(A=F|B=F) = pA2, and P(C=F|A,B) = pC1, pC2, pC3, pC4 for the four combinations (S,S), (S,F), (F,S), (F,F) of the states of A and B)
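The marginalization encoded by a network like Fig. 3 can be sketched in a few lines. The following is a minimal Python sketch, not from the paper: the function name and all numeric values of the conditional probabilities are hypothetical, chosen only for illustration.

```python
# Sketch of the system failure probability of the Fig. 3 network:
# P(C=F) is marginalized over all success/failure states of A and B.
# pA1 = P(A=F | B=S), pA2 = P(A=F | B=F), pB = P(B=F),
# pC[(a, b)] = P(C=F | A=a, B=b) with states 'S'/'F'.

def system_failure_probability(pA1, pA2, pB, pC):
    total = 0.0
    for b, p_b in (('S', 1 - pB), ('F', pB)):
        pA = pA1 if b == 'S' else pA2           # P(A=F) conditioned on B
        for a, p_a in (('S', 1 - pA), ('F', pA)):
            total += pC[(a, b)] * p_a * p_b     # P(C=F|A,B) P(A|B) P(B)
    return total

# Hypothetical conditional probability table for node C
pC = {('S', 'S'): 0.001, ('S', 'F'): 0.01,
      ('F', 'S'): 0.02, ('F', 'F'): 0.5}
p_sys = system_failure_probability(pA1=0.01, pA2=0.05, pB=0.02, pC=pC)
```

The four additive contributions accumulated by the loop are exactly the four terms of Eq. (2).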



A Bayesian network consists of a graphical model and the conditional probability tables associated with it. The model is referred to as a directed acyclic graph. The circles in the graph are called 'nodes' and represent the components of the system. The arrows, or causal edges, show the dependence among the components. For example, if an arrow starts from node B and reaches node A, as in the figure, it indicates that the outcome of node A depends on the outcome of node B. In this case, node B is called the 'parent node' and node A the 'child node'.

The table beside a node gives the probability of failure of the node conditioned on the states of its parent nodes. For node C in the figure, there are four possible failure cases: node C fails when nodes A and B both succeed, when A succeeds but B fails, when A fails but B succeeds, or when both A and B fail. The table lists the probability of failure corresponding to each case. In this way, a Bayesian network considers all possible failure scenarios.

Based on this configuration, the SLP can be expressed as a function of the CLPs using Bayes' theorem. Note that the CLPs defined in a Bayesian network are conditional, not marginal, probabilities. The definition of the SLP can differ from one system to another, but let us assume here that the SLP is the probability of failure of node C, the last node in the system. Then the SLP is calculated as Eq. (2).

$$ P_F^{sys} = P(C=F) = p_{C1}(1-p_{A1})(1-p_B) + p_{C2}(1-p_{A2})\,p_B + p_{C3}\,p_{A1}(1-p_B) + p_{C4}\,p_{A2}\,p_B \quad (2) $$

As shown in Eq. (2), there is no need to carry out additional calculations for the SLP. Note that the CLPs are independent of each other in this system. A more general form of the system probability of failure is represented as Eq. (3).

$$ P_F^{sys} = P_F^{sys}\!\left( P_F^1, P_F^2, \ldots, P_F^m \right) \quad (3) $$

In the equation above, P_F^i is the conditional probability of failure of component i. Since the CLPs are estimated using a sampling-based method, they carry sampling uncertainty, and thus so does the SLP. The variance of the SLP can be calculated using the following equation [22]:

$$ V\!\left[ P_F^{sys} \right] = E\!\left[ \left( P_F^{sys} \right)^2 \right] - E\!\left[ P_F^{sys} \right]^2 \quad (4) $$

One of the ultimate purposes of this paper is to reduce the variance of the SLP, which is given by Eq. (4) when the input parameters are the numbers of samples used in estimating the CLPs. Because the input parameters, such as the numbers of samples, are all defined at the component level, the relationship between the variance of the SLP and the component-level input parameters needs to be identified.

To postulate such relationships, global sensitivity analysis (GSA) is utilized in this paper [23, 24]. GSA breaks the system-level variance into the component-level variances and shows which component-level variance contributes most to the system-level variance, without harming the definition of variance. From a Bayesian network, the SLP in Eq. (3) can be decomposed into sub-functions as Eq. (5).

$$ P_F^{sys} = g_0 + \sum_i g_i + \sum_{j>i} g_{ij} + \ldots + g_{ij \ldots m} \quad (5) $$

In Eq. (5), m is the number of CLPs and g is a sub-function. The subscripts of the sub-functions indicate the components involved. The sub-functions are calculated as Eqs. (6) and (7).

$$ g_0 = \int_0^1 P_F^{sys} \prod_{i=1}^m q\!\left( P_F^i \right) dP_F^i \quad (6) $$

$$ g_{ij \ldots r} = \int_0^1 P_F^{sys} \prod_{k \ne i,j,\ldots,r} q\!\left( P_F^k \right) dP_F^k \;-\; \sum_{k \in \{i,\ldots,r\}} g_k \;-\; \sum_{\substack{k,l \in \{i,\ldots,r\} \\ l > k}} g_{kl} \;-\; \ldots \;-\; g_0 \quad (7) $$

The advantage of the decomposition in Eq. (5) is that the variance of the SLP can be calculated as the sum of the variances of the individual sub-functions, as in Eq. (8).

$$ V\!\left[ P_F^{sys} \right] = \sum_i V[g_i] + \sum_{j>i} V[g_{ij}] + \ldots + V[g_{12 \ldots m}] \quad (8) $$

Each term in Eq. (8) can be calculated as

$$ V\!\left[ P_F^{sys} \right] = \int_0^1 \left( P_F^{sys} \right)^2 \prod_k q\!\left( P_F^k \right) dP_F^k - g_0^2 \quad (9) $$

$$ V\!\left[ g_{ij \ldots r} \right] = \int_0^1 g_{ij \ldots r}^2 \prod_{k = i,j,\ldots,r} q\!\left( P_F^k \right) dP_F^k - g_0^2 \quad (10) $$

Then it is possible to define a sensitivity index. A sensitivity index with one subscript is referred to as a main sensitivity index because it shows the effect of the corresponding variable alone, and a sensitivity index with more than one subscript is referred to as an interaction sensitivity index because it shows the interaction effect among the corresponding variables. The sensitivity index is defined as Eq. (11).



$$ S_{ij \ldots r} = \frac{V\!\left[ g_{ij \ldots r} \right]}{V\!\left[ P_F^{sys} \right]} \quad (11) $$

Using the sensitivity indices, the total sensitivity index is defined as the sum of the main sensitivity index and the interaction sensitivity indices. This index shows the total effect of a CLP on the variance of the SLP. S_{i,int} represents any interaction sensitivity index whose subscripts include i.

$$ S_T^i = S_i + \sum S_{i,int} \quad (12) $$

DESIGN SENSITIVITY ANALYSIS UNDER EPISTEMIC UNCERTAINTY
There can be two different design failure criteria: component-level multiple failure criteria and a system-level single failure criterion. The former focuses only on the functionality of each component: every component must satisfy the prescribed target probability of failure, and violation is not allowed. Traditional RBDO uses this concept, and such a criterion is referred to as a reliability constraint. The latter, on the other hand, focuses on the system level instead of the functionality of individual components. In this design, a component is not obliged to satisfy the reliability constraint. If only the SLP of a Bayesian network is of concern, this design concept can be applied.

Let us assume that N samples are available for the probability calculation. Then the probability is calculated as Eq. (13).

$$ P_F(\mathbf{d}) = \frac{1}{N} \sum_{i=1}^N I_F(y_i) \quad (13) $$

where I_F is the indicator function, which becomes 1 if y_i is in the failure region and 0 otherwise.

$$ I_F(y_i) = \begin{cases} 0 & \text{if } y_i \notin \Omega_F \\ 1 & \text{if } y_i \in \Omega_F \end{cases} \quad (14) $$

The sum of the indicator function, N P_F(d), follows a binomial distribution B(N, P_F(d)). When the normality condition is satisfied, that is, N P_F > 10 and N(1 - P_F) > 10 [25], the probability of failure follows a normal distribution, and the standard error of the probability can be estimated.

A conservative estimate of a CLP is provided in Eq. (1). To calculate the sensitivity of Eq. (1) with respect to an input design variable, the CLP is considered first. The design sensitivity can be calculated using Leibniz's rule as below [18].

$$ \frac{\partial P_F(\mathbf{d})}{\partial d_i} = \frac{1}{N} \sum_{j=1}^N I_{G>0}(\mathbf{x}_j)\, s_{d_i}(\mathbf{x}_j; \mathbf{d}) \quad (15) $$

In Eq. (15), s_{d_i}(x; d) is the partial derivative of the log-likelihood function with respect to the design variable, which is called the score function. If the probability is determined from a finite number of samples, sampling uncertainty is induced. Using this result, the design sensitivity of Eq. (1) can be derived using the chain rule of differentiation as Eq. (16).

$$ \frac{\partial \left( P_F(\mathbf{d}) + z_{1-\alpha}\,\sigma(P_F) \right)}{\partial d_i} = \frac{\partial P_F}{\partial d_i} + z_{1-\alpha} \frac{\partial \sigma(P_F)}{\partial P_F} \frac{\partial P_F}{\partial d_i} \approx \left( \frac{1}{N} \sum_{m=1}^N I_{G>0}(\mathbf{x}_m)\, s_{d_i}(\mathbf{x}_m; \mathbf{d}) \right) \left( 1 + z_{1-\alpha} \frac{1 - 2P_F}{2\sqrt{N P_F (1 - P_F)}} \right) \quad (16) $$

As mentioned before, Eq. (16) is the sensitivity of the conservative estimate of a CLP.

To derive the sensitivity of the SLP, the global sensitivity analysis must be carried out first, because the system variance is a function of the component variances. The variance of a sub-function in Eq. (7) is expressed as

$$ V\!\left[ g_{ij \ldots r} \right] = \sum_{m=i}^r a_m V_m + \sum_{m=i}^{r-1} \sum_{n>m}^r b_{mn} V_m V_n + \ldots + V_i V_j \cdots V_r + c \quad (17) $$

Note that V_i is different from V(g_i): V_i is the variance of p_i, while V(g_i) is the variance of the sub-function g_i. In a Bayesian network, g_{ij...r} is a multilinear function of p_i, p_j, ..., p_r only. Therefore the variance of the SLP is the sum of the variances of the sub-functions, as given in Eq. (18).

$$ V_{sys} = \sum_{i=1}^m V(g_i) + \sum_{i=1}^{m-1} \sum_{j=i+1}^m V(g_{ij}) + \ldots + \sum_{i=1}^{m-l+1} \cdots \sum_{l > i,j,\ldots,k}^m V(g_{ij \ldots kl}) + C_m $$
$$ = \sum_{i=1}^m a_i V_i + \sum_{i=1}^{m-1} \sum_{j=i+1}^m \left( \sum_{n=i}^j b_{n,ij} V_n + c_{ij,ij} V_i V_j + d_{ij} \right) + \ldots + \sum_{i=1}^{m-l+1} \cdots \sum_{l > j,\ldots,k}^m \left( \sum_{n=i}^l e_{n,ij \ldots kl} V_n + \ldots + V_i V_j \cdots V_k V_l + g_{ij \ldots kl} \right) + C_m $$
$$ = \sum_{m} \alpha_m V_m + \sum_{n=1}^{m-1} \sum_{p=n+1}^m \beta_{np} V_n V_p + \ldots + \gamma\, V_1 V_2 \cdots V_m + C \quad (18) $$



In the equation above, α_m is the sum of all the coefficients corresponding to V_m; β_np and γ are defined in a similar way. Now, using the same assumptions as in the case of a CLP, the sensitivity of the SLP can be derived using the chain rule of differentiation as Eq. (19).

$$ \frac{\partial \left( P_F^{sys} + z_{1-\alpha}\,\sigma(P_F^{sys}) \right)}{\partial d_{ij}} = \frac{\partial P_F^{sys}}{\partial P_F^i} \frac{\partial P_F^i}{\partial d_{ij}} + z_{1-\alpha} \frac{\partial \sigma(P_F^{sys})}{\partial V(P_F^{sys})} \frac{\partial V(P_F^{sys})}{\partial V(P_F^i)} \frac{\partial V(P_F^i)}{\partial P_F^i} \frac{\partial P_F^i}{\partial d_{ij}} \quad (19) $$

Each term on the right-hand side of Eq. (19) can be calculated as follows. Here d_{ij} represents the jth design variable of the ith component, and P_F^i represents the ith CLP. Note that in this equation a design variable of one component is not related to the other components. Assuming N_i samples are used to evaluate P_F^i, the variance can be differentiated with respect to the probability of failure as Eq. (20).

$$ \frac{\partial V(P_F^i)}{\partial P_F^i} = \frac{1 - 2P_F^i}{N_i} \quad (20) $$

For simplicity, V(P_F^i), σ(P_F^sys), and V(P_F^sys) are hereinafter denoted as V_i, σ_sys, and V_sys, respectively. The derivative of the variance of the SLP with respect to the variance of a CLP is calculated in Eq. (21).

$$ \frac{\partial V(P_F^{sys})}{\partial V(P_F^i)} = \frac{\partial}{\partial V_i} \left( \sum_{m} \alpha_m V_m + \sum_{n=1}^{m-1} \sum_{p=n+1}^m \beta_{np} V_n V_p + \ldots + C \right) = \alpha_i + \sum_{n \ne i} \beta_{in} V_n + \ldots + \gamma \prod_{j \ne i} V_j = \kappa_i \quad (21) $$

Finally, the derivative of σ_sys with respect to V_sys is calculated in Eq. (22).

$$ \frac{\partial \sigma_{sys}}{\partial V_{sys}} = \frac{1}{2 \sigma_{sys}} \quad (22) $$

Using Eqs. (18), (20), (21), and (22), the sensitivity in Eq. (19) can be rewritten as Eq. (23).

$$ \frac{\partial \left( P_F^{sys} + z_{1-\alpha}\,\sigma_{sys} \right)}{\partial d_{ij}} \approx \left( \frac{1}{N_i} \sum_{m=1}^{N_i} I_{G>0}(\mathbf{x}_m)\, s_{d_{ij}}(\mathbf{x}_m; \mathbf{d}) \right) \left( f'(P) + z_{1-\alpha} \frac{\kappa_i \left( 1 - 2P_F^i \right)}{2 N_i\, \sigma_{sys}} \right) \quad (23) $$

Here f'(P) is the derivative of the SLP with respect to the CLP, which is differentiable with respect to P_F^i; the relationship between the CLP and the SLP is provided explicitly by the Bayesian network. Because the coefficient α_i is the sum of all coefficients of V_i, and κ_i is the sum of α_i and the higher-order terms, Eq. (23) is not simple enough to calculate at once. In other words, both the main effect and the interaction effects must be taken into account to calculate the sensitivity.

On the other hand, if the main effects dominate in the global sensitivity analysis, then the total sensitivity index defined in Eq. (12) can be approximated as

$$ S_T^i = \frac{V(g_i) + V(g_{ij}) + \ldots + V(g_{ij \ldots kl})}{V_{sys}} \approx \frac{\alpha_i V_i}{\alpha_i V_i + \varepsilon(g_{\sim i})} \quad (24) $$

where ε(g_{∼i}) collects the contribution of the terms that do not involve V_i. Therefore the derivative of V_sys with respect to V_i can be expressed through the total sensitivity index.

$$ \frac{\partial V_{sys}}{\partial V_i} = \frac{S_T^i\, V_{sys}}{V_i} \quad (25) $$

Note that Eq. (25) does not require any differentiation. Although it still requires the partial variances, these can be calculated numerically through Eq. (10); differentiating them, however, is not a trivial task, as seen in Eq. (21), which Eq. (25) avoids. Thus the sensitivity of the conservative estimate of the SLP can be written as

$$ \frac{\partial \left( P_F^{sys} + z_{1-\alpha}\,\sigma_{sys} \right)}{\partial d_{ij}} \approx \left( \frac{1}{N_i} \sum_{m=1}^{N_i} I_{G>0}(\mathbf{x}_m)\, s_{d_{ij}}(\mathbf{x}_m; \mathbf{d}) \right) \left( f'(P) + z_{1-\alpha} \frac{S_T^i\, \sigma_{sys} \left( 1 - 2P_F^i \right)}{2 N_i V_i} \right) \quad (26) $$

ACCURACY OF SENSITIVITY WITH EPISTEMIC UNCERTAINTY
To demonstrate the accuracy of the sensitivity derived in Eq. (16), a simple linear function with a known input distribution is considered. Let f(x) = x and x ~ N(d, 1),



where d is the current design point. Here the failure region is defined as Ω_F = {y | y < y_th} with y_th = -2.3263. The conservative estimate is chosen with a confidence level of 90%, such that z_90% = 1.28. Also, it is assumed that the current design point is d = 0, so that the true probability of failure is equal to 1%. Using 100,000 samples, the sensitivity of the conservative estimate of P_F is verified. Because the input variable follows a normal distribution, the score function is calculated as below.

$$ \frac{\partial \ln f_x(x; d, \sigma)}{\partial d} = \frac{x - d}{\sigma^2} = x \quad \text{at } d = 0,\ \sigma = 1 \quad (27) $$

Using the direct differentiation method given as Eq. (16), the sensitivity of the conservative estimate is calculated as Eq. (28).

$$ \frac{\partial \left( P_F(d) + z_{1-\alpha}\,\sigma_F(d) \right)}{\partial d} \approx \left( \frac{1}{N} \sum_{m=1}^N I_{x < y_{th}}(x_m)\, x_m \right) \left( 1 + z_{0.9} \frac{1 - 2P_F}{2\sqrt{N P_F (1 - P_F)}} \right) \quad (28) $$

The sensitivity in Eq. (28) is compared with the result of the finite difference method, in which the sensitivity is approximated by perturbing the design as Eq. (29).

$$ \frac{\partial \left( P_F(d) + z_{1-\alpha}\,\sigma_F(d) \right)}{\partial d} \approx \frac{ \left\{ P_F(d + \Delta d) + z_{1-\alpha}\,\sigma_F(d + \Delta d) \right\} - \left\{ P_F(d) + z_{1-\alpha}\,\sigma_F(d) \right\} }{\Delta d} \quad (29) $$

Figure 5 shows the estimated distribution of the probability of failure with 10,000 repetitions. As expected, the probability of failure is normally distributed as P_F ~ N(μ = 0.01, σ² = 9.7194E-08), while the true mean and variance of the probability of failure are μ_t = 0.01 and σ_t² = 9.9000E-08.

Fig. 5 Distribution of Probability of Failure (histogram of the sampled probability over 10,000 repetitions; true PF = 1%)

Figure 4 and Table 1 compare the distributions of the sensitivity obtained by the two methods with 10,000 repetitions. The mean of the sensitivity obtained through the direct differentiation method of Eq. (28) is -0.0272, while the mean obtained by the finite difference method is -0.0269. The exact value is -0.0272, and therefore the direct differentiation method shows higher accuracy. Moreover, the variance of the result obtained from the finite difference method is approximately 38 times larger than that from the direct differentiation method, which may cause additional computational cost in a design process due to the uncertainty.

Fig. 4 Distribution of Design Sensitivity (histograms of the finite difference and direct differentiation estimates over 10,000 repetitions; true PF = 1%)
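The direct-differentiation estimator of Eq. (28) can be sketched as follows. This is a minimal Python sketch assuming NumPy is available; the random seed is an arbitrary choice, while N, y_th, and z follow the example in the text.

```python
import numpy as np

# Sketch of Eq. (28) for the example f(x) = x, x ~ N(d, 1),
# failure region x < y_th, evaluated at the design d = 0.
rng = np.random.default_rng(0)          # arbitrary seed
d, y_th, z = 0.0, -2.3263, 1.28
N = 100_000

x = rng.normal(d, 1.0, N)
fail = x < y_th                          # failure indicator per sample
pf = fail.mean()                         # sampled probability of failure

score = x - d                            # score function of N(d, 1) w.r.t. d
dpf_dd = (fail * score).mean()           # score-function estimate of dPF/dd

# correction from the standard-error term z * sqrt(PF(1 - PF) / N)
corr = 1.0 + z * (1.0 - 2.0 * pf) / (2.0 * np.sqrt(N * pf * (1.0 - pf)))
sens = dpf_dd * corr                     # sensitivity of PF + z * sigma
```

Since the analytic derivative is the negative standard normal density at y_th, roughly -0.0266, the computed sensitivity should land near the -0.0272 reported in Table 1, up to sampling noise.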



Table 1 Mean and Standard Error of Sensitivity

                 Exact      FDM        Sampling-based DDM
Mean            -0.0272    -0.0269    -0.0272
Standard Error   -          0.005214   0.000844

On the other hand, the analytical solution of the sensitivity of the conservative estimate, given in Table 1, can be calculated through the following procedure.

$$ \frac{\partial \left( P_F(d) + z_{1-\alpha}\,\sigma_F(d) \right)}{\partial d} = \frac{\partial P_F(d)}{\partial d} \left( 1 + z_{1-\alpha} \frac{1}{2} \left( \frac{P_F(d)\left( 1 - P_F(d) \right)}{N} \right)^{-0.5} \frac{1 - 2P_F(d)}{N} \right) \quad (30) $$

In Eq. (30), the derivative of P_F(d) can be obtained through numerical integration as Eq. (31).

$$ \frac{\partial P_F(d)}{\partial d} = \int_{-\infty}^{y_{th}} \frac{1}{\sqrt{2\pi}\,\sigma} \frac{(x - d)}{\sigma^2} \exp\!\left[ -\frac{(x - d)^2}{2\sigma^2} \right] dx \quad (31) $$

EFFECT OF EPISTEMIC UNCERTAINTY ON SENSITIVITY ANALYSIS
To demonstrate the effect of sampling uncertainty on sensitivity analysis, and to show in which situations the approximation of the sensitivity given as Eq. (26) can be used, the following 2-node Bayesian network is considered. In the network, node B depends on node A, and the probability tables corresponding to the marginal and conditional probabilities of failure are provided beside the directed acyclic graph. "S" and "F" denote "Success" and "Failure", respectively. This 2-node structure is the simplest form of a Bayesian network.

Fig. 6 2-node Bayesian Network (P(A=F) = p1; P(B=F|A=F) = p2; P(B=F|A=S) = p3)

From the Bayesian network in Fig. 6, the system probability of failure, defined as the probability of failure of component B, can be calculated as Eq. (32).

$$ P_F^{sys} = p_1 p_2 + (1 - p_1)\, p_3 \quad (32) $$

The confidence level is set to 90%, which corresponds to z_90% = 1.28. Using the variance decomposition, the sub-functions and partial variances are calculated. Note that the sub-function g_23, which represents the interaction between p_2 and p_3, as well as g_123, which represents the interaction of all the CLPs, are equal to zero because no term involves p_2 and p_3 together. Using the decomposition result, the coefficients in Eq. (18) are calculated in order to compare the main effect with the interaction effect. If the main effect dominates, then the approximation in Eq. (24) can be applied, which significantly reduces the computational burden. Since the target probability of failure in a design process is usually extremely small, this example also assumes small probabilities of failure. The probabilities of failure are calculated through the sampling method.

Suppose that the outcome of the probability calculation is p1 = 0.04%, p2 = 0.1%, and p3 = 0.03%. If 10,000 samples are applied, then the normality condition is seriously violated. To check whether the approximation Eq. (24) can be used for such small probabilities, the coefficients α_i and κ_i are compared, as summarized in Fig. 7.

Fig. 7 shows the relative magnitudes of α_i and κ_i after normalization. The main effect and the total effect correspond to α_i and κ_i, respectively, and the interaction effect is the sum of all higher-order terms. As seen in the graph, the ratio of κ to α is about 25% higher than 1 for p1 and p2; therefore the approximation Eq. (24) and the subsequent procedure cannot be applied in this case. In fact, too small a number of samples invalidates the variance decomposition.

Fig. 7 Comparison of Main and Interaction Effects (normalized main, interaction, and total effects for p1, p2, and p3)
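For this 2-node system, the κ/α comparison can be reproduced analytically, since the Sobol terms of the multilinear function in Eq. (32) can be written out by hand. The following is a minimal Python sketch under that assumption; the helper name is ours, and the decomposition terms below are our derivation, not quoted from the paper.

```python
# Analytic variance decomposition for P_sys = p1*p2 + (1-p1)*p3 with
# independent sampling variances Vi = pi*(1-pi)/N on each CLP.
# Sobol terms of this multilinear function (derived by hand):
#   V(g1) = (p2-p3)^2 V1,  V(g2) = p1^2 V2,  V(g3) = (1-p1)^2 V3,
#   V(g12) = V1*V2,  V(g13) = V1*V3,  V(g23) = V(g123) = 0.

def kappa_alpha_ratios(p1, p2, p3, N=10_000):
    """Return (k1/a1, k2/a2, k3/a3): total vs. main-effect coefficients of Vi."""
    V1, V2, V3 = (p * (1 - p) / N for p in (p1, p2, p3))
    a1, a2, a3 = (p2 - p3) ** 2, p1 ** 2, (1 - p1) ** 2   # main-effect coeffs
    # kappa_i = dV_sys/dV_i also picks up the interaction terms V1V2, V1V3
    k1, k2, k3 = a1 + V2 + V3, a2 + V1, a3 + V1
    return k1 / a1, k2 / a2, k3 / a3

# First example: tiny probabilities, interactions matter
r_small = kappa_alpha_ratios(0.0004, 0.001, 0.0003)
# Second example: larger probabilities, ratios close to 1
r_large = kappa_alpha_ratios(0.005, 0.008, 0.003)
```

With the first set of probabilities the ratios for p1 and p2 come out roughly 25% above 1, and with the second set roughly 1.04 and 1.02, which is consistent with the discussion of Fig. 7 and the κ/α row of Table 2.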



Another case is when the outcome of the probability is p1 = 0.5%, p2 = 0.8%, and p3 = 0.3%. The same number of samples is applied as in the previous case, which now satisfies the normality condition. The effect of sampling uncertainty on the sensitivity analysis is found in Eq. (23) and Eq. (26). In Table 2, the effect of a design change on the probability calculation, f'(P), is compared with the effect of uncertainty. Because the ratio (k/a) is close to 1, Eq. (26) is utilized to approximate the sensitivity. For simplicity, the effect of the sampling uncertainty, $z_{1-\alpha} S_{Ti}\,\sigma_{sys}(1-2p_i)/(2N_iV_i)$, is denoted as s'(P).

Table 2 Effect of Sampling Uncertainty on Sensitivity Analysis

  Probability    p1         p2         p3
  k/a            1.04       1.02       1
  f'(P)          0.005      0.005      0.995
  s'(P)          3.04E-06   3.00E-06   0.116
  s'(P)/f'(P)    0.06%      0.059%     11.64%

Table 2 clearly displays the consequence of incorporating the epistemic uncertainty into the sensitivity analysis. Although introducing epistemic uncertainty produces no substantial change in the sensitivity calculation for p1 and p2, the sensitivity corresponding to p3 differs by more than 10%. Furthermore, s'(P) is linearly proportional to the confidence level, and therefore the effect of uncertainty becomes considerable as the confidence level becomes higher.

The last case is dedicated to understanding when the approximation can be applied. Consider the following situation: p1 = 0.05%, p2 = 0.07%, and p3 = 0.06%, with 100,000 samples for each. Although the normality condition is well satisfied, the approximation cannot be employed, as seen in Table 3. However, when s'(P) is actually calculated, one can note that its value is negligibly small. In other words, there is no need to include such uncertainty.

Table 3 Decision Criteria of Sensitivity Analysis

  Probability          p1         p2         p3
  k/a                  2.23       1.02       1
  s'(P)/f'(P)          0.0019%    0.0042%    8.26%
  S_Ti V_sys / V_i     2.29E-08   1.64E-08   0.999

In fact, s'(P) is highly dependent on Eq. (25). Since the global sensitivity analysis is performed before discussing the approximation, and Eq. (25) is purely a result of it, the decision whether to include the sampling uncertainty in the design sensitivity analysis can be made before carrying out that analysis. That is, S_Ti V_sys / V_i must be compared beforehand, and the CLPs with low values of this ratio do not need to incorporate the sampling uncertainty in the design sensitivity analysis.

CONCLUSION
This paper presented how to include the sampling uncertainty in the sensitivity analysis using global sensitivity analysis along with a Bayesian network. Through the discussion, the following conclusions can be made:

• At least O(10^(n+1)) samples are required for an O(10^(-n)) level probability of failure to shape the sampling uncertainty.
• If the number of samples is very large compared to the true probability of failure, there is no need to include sampling uncertainty.
• The sampling uncertainty can significantly change the design sensitivity analysis result.
• S_Ti V_sys / V_i must be compared in order to decide when to include the sampling uncertainty in the design sensitivity analysis.

NOMENCLATURE
d = Design point
G(·) = Limit state function
z_{1-α} = 1-α level z-score
σ(·) = Standard error
V(·) = Variance
P_Tar = Target probability of failure
y_th = Threshold value of y
P_F^sys = System probability of failure
P_F^i = Component probability of failure
g = Sub-function
S_i = Main sensitivity index
S_{i,int} = Interaction sensitivity index
S_Ti = Total sensitivity index
f_x(·) = Probability density function
s(·) = Score function
I_F = Indicator function
Ω_F = Failure domain
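The screening just described, comparing S_Ti V_sys / V_i before running the design sensitivity analysis, can be sketched as a small helper. The cutoff value is an illustrative assumption, not a threshold from the paper:

```python
# Hedged sketch of the screening step: compute S_Ti * V_sys / V_i up front
# and skip the sampling-uncertainty correction for CLPs whose ratio is
# negligible. The cutoff is an illustrative assumption, not from the paper.
def needs_sampling_uncertainty(ratio, cutoff=1e-3):
    return ratio > cutoff

# Ratios for p1, p2, p3 taken from Table 3 of the last case:
ratios = [2.29e-08, 1.64e-08, 0.999]
print([needs_sampling_uncertainty(r) for r in ratios])  # [False, False, True]
```

Only p3 would carry the sampling-uncertainty term forward, consistent with the discussion above.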
ACKNOWLEDGMENTS
This research was also supported by the research grant
(PMD) of Agency for Defense Development and Defense
Acquisition Program Administration of the Korean government.
REFERENCES
[1] Cadini, F., and Gioletta, A., 2016, "A Bayesian Monte Carlo-based algorithm for the estimation of small failure probabilities of systems affected by uncertainties," Reliab. Eng. Sys. Saf., 153, pp. 15-27.
[2] Park, C.Y., Kim, N.H., and Haftka, R.T., 2014, "How coupon and element tests reduce conservativeness in element failure prediction," Reliab. Eng. Sys. Saf., 123, pp. 123-136.
[3] Li, Y., Chen, J., and Feng, L., 2013, "Dealing with Uncertainty: A Survey of Theories and Practices," IEEE Trans. Knowl. Data Eng., 25(11), pp. 2463-2482.
[4] Choi, K.K., Youn, B.D., and Du, L., 2005, "Integration of Reliability- and Possibility-Based Design Optimizations Using Performance Measure Approach," SAE World Congress, Detroit, MI, 2005-01-0342.
[5] Liang, B., and Mahadevan, S., 2011, "Error and uncertainty quantification and sensitivity analysis in mechanics computational models," Int. J. Uncertain. Quantif., 1(2), pp. 147-161.
[6] Helton, J.C., and Oberkampf, W.L., 2004, "Alternative representations of epistemic uncertainty," Reliab. Eng. Sys. Saf., 85(1-3), pp. 1-10.
[7] Youn, B.D., Choi, K.K., Du, L., and Gorsich, D., 2006, "Integration of Possibility-Based Optimization and Robust Design for Epistemic Uncertainty," ASME J. Mech. Des., 129(8), pp. 876-882. doi:10.1115/1.2717232.
[8] Agarwal, H., Renaud, J.E., Preston, E.L., and Padmanabhan, D., 2004, "Uncertainty quantification using evidence theory in multidisciplinary design optimization," Reliab. Eng. Sys. Saf., 85(1-3), pp. 281-294.
[9] Zhuang, X., and Pan, R., 2012, "Epistemic uncertainty in reliability-based design optimization," Proc. Annual Reliability and Maintainability Symposium, Reno, NV, pp. 1-6. doi:10.1109/RAMS.2012.6175496.
[10] Jiang, Z., Chen, S., Apley, D.W., and Chen, W., 2016, "Reduction of Epistemic Model Uncertainty in Simulation-Based Multidisciplinary Design," ASME J. Mech. Des., 138(8), pp. 081403-1-081403-13. doi:10.1115/1.4033918.
[11] Nannapaneni, S., and Mahadevan, S., 2016, "Reliability analysis under epistemic uncertainty," Reliab. Eng. Sys. Saf., 155, pp. 9-20.
[12] Martinez, F.A., Sallak, M., and Schon, W., 2015, "An Efficient Method for Reliability Analysis of Systems Under Epistemic Uncertainty Using Belief Function Theory," IEEE Trans. Reliab., 64(3), pp. 893-909.
[13] Durgarao, K., Kushwaha, H.S., Verma, A.K., and Srividya, A., 2007, "Epistemic Uncertainty Propagation in Reliability Assessment of Complex Systems," Int. J. Perform. Eng., 3(4), pp. 71-84.
[14] Picheny, V., Kim, N.H., and Haftka, R.T., 2006, "Conservative estimation of probability of failure," Proc. 11th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, 2006-7038.
[15] Wu, H.F., and Wu, L.L., 1997, "MIL-HDBK-5 design allowables for fibre/metal laminates: ARALL2 and ARALL3," J. Mater. Sci. Lett., 13, pp. 582-585.
[16] Jin, R., Chen, W., and Sudjianto, A., 2004, "Analytical Metamodel-Based Global Sensitivity Analysis and Uncertainty Propagation for Robust Design," SAE Technical Paper 2004-01-0429. doi:10.4271/2004-01-0429.
[17] Guerra, A., and Kiousis, P.D., 2006, "Design optimization of reinforced concrete structures," Comput. Concrete, 3(5), pp. 313-334.
[18] Tu, J., and Choi, K.K., 1999, "A New Study on Reliability Based Design Optimization," ASME J. Mech. Des., 121(4), pp. 557-564.
[19] Lee, I., Choi, K.K., and Zhao, L., 2011, "Sampling-based RBDO using the stochastic sensitivity analysis and dynamic kriging method," Struct. Multidisc. Optim., 44(3), pp. 299-317.
[20] Jensen, F.V., and Nielsen, T.D., 2007, Bayesian Networks and Decision Graphs, Springer, New York, pp. 32-42.
[21] Mahadevan, S., Zhang, R., and Smith, N., 2001, "Bayesian networks for system reliability reassessment," Struct. Saf., 23(3), pp. 231-251.
[22] Bae, S., Kim, N.H., and Park, C., 2017, "Confidence Interval of Bayesian Network and Global Sensitivity Analysis," Proc. 19th AIAA Non-Deterministic Approaches Conference, Paper 0595.
[23] Saltelli, A., 2004, "Global sensitivity analysis: an introduction," Proc. 4th International Conference on Sensitivity Analysis of Model Output (SAMO'04), pp. 27-43.
[24] Jiang, Z., Chen, W., and German, B.J., 2016, "Multidisciplinary Statistical Sensitivity Analysis Considering Both Aleatory and Epistemic Uncertainties," AIAA J., 54(4), pp. 1326-1338.
[25] Fraser, D.A.S., 1958, Statistics: An Introduction, John Wiley & Sons, Hoboken, NJ, Chap. 2.