

Reliability Engineering and System Safety 95 (2010) 777–785


A Bayesian approach for quantification of model uncertainty


Inseok Park*, Hemanth K. Amarchinta, Ramana V. Grandhi
Department of Mechanical and Materials Engineering, Wright State University, 3640 Colonel Glenn Hwy, Dayton, OH 45435, USA
* Corresponding author. Tel.: +1 937 877 8354; fax: +1 937 775 5147. E-mail address: [email protected] (I. Park).

Article history: Received 15 September 2009; received in revised form 22 February 2010; accepted 23 February 2010; available online 7 March 2010.

Keywords: Model uncertainty; Model probability; Bayes' theorem; Adjustment factor approach

Abstract

In most engineering problems, more than one model can be created to represent an engineering system's behavior. Uncertainty is inevitably involved in selecting the best model from among the models that are possible. Uncertainty in model selection cannot be ignored, especially when the differences between the predictions of competing models are significant. In this research, a methodology is proposed to quantify model uncertainty using measured differences between experimental data and model outcomes under a Bayesian statistical framework. The adjustment factor approach is used to propagate model uncertainty into the prediction of a system response. A nonlinear vibration system is used to demonstrate the process of implementing the adjustment factor approach. Finally, the methodology is applied to the simulation of a laser peening process, and a confidence band for residual stresses is established to indicate the reliability of model prediction.

© 2010 Elsevier Ltd. All rights reserved.
doi:10.1016/j.ress.2010.02.015

1. Introduction

As engineering structural systems become more complex and computer technology advances, the dependence of structural analysis on multi-physics simulation increases. Simulation models can be generated in many different ways by altering the representation of the geometry of the model or the interaction of components comprising the model. This implies that two or more different simulation models can be assumed to analyze an engineering system. Model uncertainty—uncertainty involved in selecting the best model from a set of possibilities—is unavoidably accompanied by the generation of different simulation models for an engineering system. The degree of model uncertainty may be considerably large in problems for which the predictions by competing simulation models are significantly different.

Little work has been done in the engineering field to quantify model uncertainty compared with other fields such as statistics, economics, and environmental science. Using Bayesian model averaging (BMA), Alvin et al. [1] quantified model uncertainty in three simulation models that they used to predict the vibration frequencies of a bracket component. However, model probability—assigned to each model to quantify model uncertainty—was assumed to be equal among the models considered. Zio and Apostolakis [2] and Reinert and Apostolakis [3] quantified model uncertainty for nuclear safety problems using the adjustment factor approach. In these papers, model probability was quantified based on expert judgment. Zhang and Mahadevan [4] performed a fatigue reliability analysis on the butt welds of a steel bridge using two competing crack growth models. The failure probabilities were averaged for the two competing models using model probabilities as weights. Model probabilities were evaluated by incorporating the uncertainty of crack size measurement into Bayes' theorem. Zouaoui and Wilson [5] quantified model uncertainty in the prediction of a message delay in a computer communication network using BMA. The models considered in their research were different probability distribution forms representing the randomness in a message length, which acts as an input to computer simulation. Using BMA, McFarland and Bichon [6] performed a reliability analysis of a bistable MEMS device by incorporating probability distribution model form uncertainty into the process of estimating failure probability. They evaluated model probabilities using experimental data. The models considered in their research were three probability distribution forms representing the randomness of edge bias on beam widths. In the two papers mentioned [5,6], model probabilities were quantified using model likelihoods given observed experimental data, which measure how well models are supported by experimental data relative to the other models. However, the models considered were different probability distribution forms representing the randomness in the input parameters.

In the research mentioned above, model probability was not quantified using experimental data, or it was quantified by evaluating model likelihood given the experimental data on an input parameter (not a system response). The evaluated model probabilities could not supply effective measures of model uncertainty in terms of the predictions of system responses. In this research, a methodology to evaluate model likelihood using


experimental and model outcomes under a Bayesian statistical framework is developed so as to make an informed estimation of model probability. This methodology is demonstrated with the engineering problem of a time-dependent high impact process.

The paper is organized as follows: Model uncertainty is defined in Section 2. The methodology to quantify model probability using experimental and model outcomes is discussed in Section 3. The model averaging technique is presented with a brief introduction of the adjustment factor approach in Section 4. The adjustment factor approach is discussed in detail in Section 5. The adjustment factor approach is illustrated with the numerical problem of a spring–mass system in Section 6.1. The presented methodology is demonstrated with the computer simulation of a laser peening process—a time-dependent high impact process—in Section 6.2. Summary remarks are presented in Section 7.

2. Model uncertainty

Models are essential to understanding physical behaviors and predicting the responses of physical systems. In many cases, a model is assumed to mirror a real physical situation. However, "a model is just a reduced and parsimonious representation of a physical, chemical, or biological system in a mathematical, abstract, numerical, or experimental form" [7]. Generating a model is, in fact, the process of idealizing the complicated real world into a relatively simple form through making a set of assumptions. No model can completely represent a real situation or process because of the assumptions made during the modeling process. Also, if other sets of assumptions are introduced, different, incomplete models would be generated that represent the identical physical phenomenon or process in question. Apart from the simplifying assumptions, models may also vary depending on the decisions made during the modeling process with regard to the modeler's preference, the requirements of the model user, or economic matters. For example, the modeler can construct many types of finite element models to analyze an engineering system by varying the element types, geometry, shape functions, mesh sizes, material behavior, expected operating loads, or boundary conditions. In short, different models can be constructed for a certain physical system, but all of them are incomplete in their representation of system behaviors.

Given two or more models describing an engineering system, we may be faced with the problem of choosing the single best approximating model that describes the real system with the highest fidelity among the models considered. Because there is limitation in our understanding of an engineering system, it is usually not possible to identify the best approximating model among a set of models, especially when the higher fidelity models under consideration are based on sound fundamentals. In general, uncertainty is involved in selecting the best approximating model from among a set of models. This uncertainty is called model-selection uncertainty [8], or just model uncertainty. Model uncertainty is categorized as epistemic uncertainty [9] since it derives from our lack of knowledge. Given observed experimental data, it is feasible to select a single approximating model which seems to be best, according to model selection criteria such as the Akaike information criterion (AIC) or the Bayesian information criterion (BIC) [8,10]. After a single best approximating model is selected using a set of observed experimental data, statistical inference is made based only on that model, on the assumption that the selected model is truly the best. However, this assumption may prove to be wrong if another set of experimental data, not yet observed, supports another model better than the best model identified. Therefore, there is still uncertainty in selecting the best model even after observing an experimental data set. Model uncertainty should be incorporated into the prediction of a system response, especially when two or more competing models show fairly large differences in their predictions. Ignoring model uncertainty often results in the underestimation of uncertainty in the prediction of a system response [11,12].

Model uncertainty is quantified by the model probability assigned to each model of a model set. The definition of model probability and the methodology to quantify it are presented in Section 3.

3. Quantification of model probability

3.1. Model probability

Mathematically, model probability is defined as the degree of belief that a model is true, given that the true model is in the set of models considered. It is argued that this definition is the simplest and the only one that is mathematically acceptable [10]. However, from the practical point of view that all models are just approximations, it is more appropriate to interpret model probability as the degree of belief that a model is the best approximating model among a model set.

3.2. Bayes' theorem for model probability quantification

Consider a set of models denoted by M_1, M_2, ..., M_N, and experimental data D. Using Bayes' theorem, the posterior probability of model M_k, which is the model probability evaluated after observing experimental data D, is represented by Eq. (1):

$$P(M_k \mid D) = \frac{P(D \mid M_k)\,P(M_k)}{\sum_{q=1}^{N} P(D \mid M_q)\,P(M_q)} \tag{1}$$

where P(D|M_k) is the likelihood of experimental data D for model M_k, and P(M_k) is the prior probability of M_k, which is the model probability evaluated before observing data D. Prior model probability P(M_k) can be specified depending on the existing prior knowledge about the credibility of model M_k, or it can be given a uniform probability, P(M_k) = 1/N, if not referring to any information. If uniform prior probability is assumed, the difficulty of specifying prior knowledge numerically is avoided. Model likelihood P(D|M_k) may be thought of as the probability of observing experimental data D under model M_k. It supplies a relative measure of how well model M_k is supported by experimental data D. Since the denominator in Eq. (1) is common to all the models, posterior model probability is proportional to prior model probability and model likelihood. The methodology of calculating model likelihood using experimental and model outcomes is discussed in Section 3.3.
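To make Eq. (1) concrete, the short sketch below (Python; an illustration, not part of the original paper) normalizes model likelihoods weighted by prior probabilities into posterior model probabilities. The likelihood values in the usage line are hypothetical placeholders.

```python
import numpy as np

def posterior_model_probabilities(likelihoods, priors=None):
    """Eq. (1): P(Mk|D) = P(D|Mk) P(Mk) / sum_q P(D|Mq) P(Mq)."""
    likelihoods = np.asarray(likelihoods, dtype=float)
    if priors is None:
        # Uniform prior P(Mk) = 1/N when no prior information is available.
        priors = np.full(likelihoods.size, 1.0 / likelihoods.size)
    unnormalized = likelihoods * np.asarray(priors, dtype=float)
    return unnormalized / unnormalized.sum()

# Hypothetical likelihoods for three candidate models, uniform priors:
print(posterior_model_probabilities([0.8, 0.15, 0.05]))
# -> [0.8 0.15 0.05] (unchanged here because the priors are equal
#    and these likelihoods happen to sum to one)
```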
3.3. Evaluation of model likelihood

3.3.1. Probabilistic description of the differences of experimental and model outcomes

Model likelihood is evaluated for each model by measuring the degree of agreement between the experimental data and the predictions of the data by each model. For this purpose, the probabilistic relationship between experimental data and model predictions involving uncertainty should be described. There are various formulations to describe the probabilistic relationship, which have been developed with the goal of validating a simulation model. Usually, a bias function and measurement error are included as parts of the probabilistic relationship to match model predictions with experimental data. The bias function captures systematic discrepancies between the true system responses and the predictions by a model. The measurement error is usually assumed to be an independent normal variable with a mean of zero. Bayarri et al. [13,14] and Kennedy and O'Hagan [15] used Bayesian statistical methodology to quantify the uncertainty in the bias function modeled by a Gaussian process. Xiong et al. [16] treated the bias function as a deterministic regression model. Jiang et al. [17] used a probabilistic principal component analysis method and a Bayesian statistical approach to validate dynamic systems by comparing multiple responses simultaneously. The fundamental concepts and methodologies for validation of large-scale simulation models are being actively developed by professional societies including the American Institute of Aeronautics and Astronautics [18] and the American Society of Mechanical Engineers [19].

For this research, a simple formulation that combines the bias function associated with an approximating model and the measurement error on data [20,21] is utilized to describe the probabilistic relationship between experimental data and model predictions. The formulation is represented by Eq. (2):

$$y_i = f_i + \epsilon_i \tag{2}$$

where ε_i is a random variable that encompasses both the bias associated with model prediction f_i (for this research, deterministic) and the measurement error on experimental data y_i. ε_i is assumed to be an independent and identically distributed (i.i.d.) normal random variable with zero mean.

Use of ε_i with zero mean does not shift model prediction f_i, and it reflects the fact that a model claims its prediction f_i to be the most probable value. The reason that the bias function is not included separately in the probabilistic relationship for this research is that a separate incorporation of the bias function results in shifting the prediction by a model from the initially predicted value (although this can reduce the bias involved in the prediction by the model). Although the models considered in this research are deterministic, their predictions are represented by probability distributions because the random error ε_i is involved in the model predictions. If a more complicated formulation is desired, the covariance matrix of errors may be estimated. However, errors are assumed to be independent in this research due to insufficient experimental data to estimate the correlation among errors.

3.3.2. Evaluation of model likelihood using experimental data and model predictions

The likelihood P(D|M_k) of experimental data D for model M_k (k = 1, ..., N) is evaluated by observing where the experimental data points are located in the predictive distributions of data D estimated by model M_k. The procedures for estimating the predictive distribution P(y|M_k) of response y under model M_k and evaluating the likelihood P(D|M_k) of experimental data D for M_k are as follows.

First, the uncertainty in the errors of predictions made by model M_k is quantified by introducing the assumption that the prediction errors are i.i.d. normal random variables with zero mean, as discussed in Section 3.3.1. The error of a prediction made by model M_k is represented by Eq. (3):

$$\epsilon_{ki} = y_i - f_{ki}, \quad \epsilon_{ki} \sim N(0, \sigma_k^2), \quad i = 1, \ldots, l \tag{3}$$

where y_i is the ith experimental datum, f_ki the prediction of experimental datum y_i made by model M_k, σ_k² the variance of prediction error ε_ki, and l the number of experimental data. Each measured prediction error ε_ki is considered to be a random sample from a normal distribution whose mean is zero and whose variance is σ_k². Using the maximum likelihood estimation (MLE) approach, the variance σ_k² for model M_k—which is common to the error of each prediction by M_k—is estimated as shown in Eq. (4):

$$\sigma_k^2 = \frac{1}{l}\sum_{i=1}^{l} \epsilon_{ki}^2 \tag{4}$$

Second, the predictive distribution P(y|M_k) of response y under model M_k is constructed by incorporating the prediction error estimated in the first step into the prediction of y made by M_k. Eq. (5) shows the predictive distribution of response y under model M_k:

$$P(y \mid M_k) = f_k + \epsilon_k, \quad \epsilon_k \sim N(0, \sigma_k^2) \tag{5}$$

Finally, the likelihood P(D|M_k) of experimental data D for model M_k is evaluated by multiplying the probabilities of observing each experimental datum given the prediction of each datum made by model M_k. The assumption that the prediction errors of model M_k are independent of one another implies that the experimental data are also independent. So, the likelihood of experimental data D = {y_1, ..., y_l} for model M_k, or the joint probability of D given M_k, can be calculated by Eq. (6):

$$P(D \mid M_k) = P(y_1, \ldots, y_l \mid M_k) = \prod_{i=1}^{l} P(y_i \mid f_{ki}, \sigma_k^2) = \left(\frac{1}{2\pi\sigma_k^2}\right)^{l/2} \exp\left(-\frac{l}{2}\right) \tag{6}$$

Model averaging, which propagates the model uncertainty quantified by evaluating model probability into the prediction of a system response, is discussed in Section 4.
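The three steps condense into a few lines of code. The sketch below (Python; illustrative, not the authors' implementation) evaluates the likelihood of one model from paired measurements and deterministic predictions; the array contents are up to the user.

```python
import numpy as np

def model_likelihood(y, f):
    """Likelihood P(D|Mk) of data y under one model's predictions f,
    per Eqs. (3)-(6)."""
    y, f = np.asarray(y, dtype=float), np.asarray(f, dtype=float)
    e = y - f                          # prediction errors, Eq. (3)
    l = e.size                         # number of experimental data
    var = np.sum(e ** 2) / l           # MLE error variance, Eq. (4)
    # Joint normal likelihood, Eq. (6); with the MLE variance plugged in,
    # the exponent collapses to -l/2.
    return (2.0 * np.pi * var) ** (-l / 2.0) * np.exp(-l / 2.0)
```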
4. Model averaging

To quantify model uncertainty in the prediction of a system response, the predictions by all the plausible models should be taken into account. Given the predictions of a system response made by a set of models, the response can be estimated by integrating these model predictions into a representative prediction using model averaging. Model averaging is the technique of averaging weighted model predictions. In general, combining multiple model predictions leads to an improvement in predictive accuracy [22].

Since Barnard [23] made the first mention of model combination, the idea has appeared in the economics, forecasting, and statistics literature. Leamer [24] built the basic paradigm of Bayesian model averaging (BMA) on the idea of model combination. BMA is a methodology to handle both the uncertainty in models and the uncertainty in their parameters by integrating model predictions weighted by model probabilities. The adjustment factor approach [2,25] is another methodology to address model uncertainty based on model averaging. The prediction by the model identified as the best of a set of models is adjusted by an adjustment factor to incorporate model uncertainty. This approach has the advantage of accommodating normal and log-normal distribution forms to describe the model uncertainty in the response predictions made by a deterministic model set. However, it is limited in its application in that the model predictions must be deterministic.

5. Adjustment factor approach

Mosleh and Apostolakis [26] suggested the adjustment factor approach to combine experts' estimates according to Bayes' theorem. The application of this approach was extended to the model uncertainty problem [2]. It has been applied to quantify model uncertainty for the problems of groundwater flow and contaminant transport [2] and nuclear reactor safety [3]. In this approach, model uncertainty is accounted for by an adjustment
factor represented by a probability distribution. An adjustment factor can be evaluated by assuming the differences between the prediction of the best model and those of the alternate models to be normally or log-normally distributed. When quantifying an adjustment factor, model probabilities are assigned as weights to the models considered. The distribution of the system response predicted by a set of models is constructed by introducing the evaluated adjustment factor into the prediction of the best model. Depending on whether the distribution representing model uncertainty in the prediction of a system response is assumed to be normal or log-normal, an additive or a multiplicative adjustment factor is used, respectively. An additive adjustment factor is added to the prediction of the best model to construct a predictive distribution incorporating model uncertainty. Similarly, a multiplicative adjustment factor is multiplied by the prediction of the best model to construct the predictive distribution.

5.1. Additive adjustment factor

When an additive adjustment factor is used, the prediction of a system response is represented by Eq. (7):

$$y = y^* + E_a^* \tag{7}$$

where y* represents the prediction of the response by the best model, i.e., the model with the highest model probability among the model set considered, E_a* represents an additive adjustment factor, and y represents the adjusted prediction. The additive adjustment factor E_a* is assumed to be a normal random variable. Supposing that the predictions and probabilities of a set of models are known, the means and variances of both E_a* and y are computed by Eqs. (8)–(11):

$$E(E_a^*) = \sum_{i=1}^{N} P(M_i)\,(y_i - y^*) \tag{8}$$

$$\mathrm{Var}(E_a^*) = \sum_{i=1}^{N} P(M_i)\,(y_i - E(y))^2 \tag{9}$$

$$E(y) = y^* + E(E_a^*) \tag{10}$$

$$\mathrm{Var}(y) = \mathrm{Var}(E_a^*) \tag{11}$$

where E(·) is the mean of a variable, Var(·) is the variance of a variable, y_i represents the prediction of the response by model M_i, P(M_i) represents the probability of M_i, and N is the number of models considered. P(M_i) can denote either a prior or a posterior model probability. As shown in Eqs. (8) and (9), the mean and variance of E_a* are the weighted mean and variance of the differences between the prediction of the best model and those of the alternate models, using the model probabilities as weights. The adjusted prediction y is also a normal random variable. The mean of y is the sum of the prediction of the best model and the mean of E_a*, as shown in Eq. (10). The variance of y is the same as that of E_a*, as shown in Eq. (11).
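Eqs. (8)–(11) map directly to a few lines of code. The following sketch (Python; an illustration under the assumptions above, not the authors' implementation) computes the mean and variance of the adjusted prediction from model predictions and model probabilities.

```python
import numpy as np

def additive_adjustment(predictions, probabilities, best):
    """Mean and variance of the adjusted prediction y, Eqs. (8)-(11).

    predictions   : response predictions y_i of the N models
    probabilities : model probabilities P(M_i), prior or posterior
    best          : index of the best (highest-probability) model
    """
    y = np.asarray(predictions, dtype=float)
    p = np.asarray(probabilities, dtype=float)
    y_star = y[best]
    mean_Ea = np.sum(p * (y - y_star))        # Eq. (8)
    mean_y = y_star + mean_Ea                 # Eq. (10)
    var_y = np.sum(p * (y - mean_y) ** 2)     # Eqs. (9) and (11)
    return mean_y, var_y
```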
5.2. Multiplicative adjustment factor

When a multiplicative adjustment factor is used, the prediction of a system response is represented by Eq. (12):

$$y = y^* E_m^* \tag{12}$$

where E_m* represents a multiplicative adjustment factor. The multiplicative adjustment factor E_m* is assumed to be a lognormal random variable. The means and variances of the logarithms of both E_m* and the adjusted prediction y are computed by Eqs. (13)–(16):

$$E(\ln E_m^*) = \sum_{i=1}^{N} P(M_i)\,(\ln y_i - \ln y^*) \tag{13}$$

$$\mathrm{Var}(\ln E_m^*) = \sum_{i=1}^{N} P(M_i)\,(\ln y_i - E(\ln y))^2 \tag{14}$$

$$E(\ln y) = \ln y^* + E(\ln E_m^*) \tag{15}$$

$$\mathrm{Var}(\ln y) = \mathrm{Var}(\ln E_m^*) \tag{16}$$

where E(ln(·)) is the mean of the logarithm of a variable, and Var(ln(·)) is the variance of the logarithm of a variable. The adjusted prediction y is also a lognormal random variable. The means and variances of both E_m* and y can be calculated from the means and variances of ln(E_m*) and ln(y) given by Eqs. (13)–(16), according to the properties of a lognormal variable.

The bigger the absolute value of the mean of an adjustment factor, the more the adjusted prediction is shifted from the prediction of the best model. The bigger the variance of an adjustment factor, the larger the degree of model uncertainty. Although both adjustment factors can be used to quantify model uncertainty, the problem may arise of deciding which of the two represents model uncertainty with higher fidelity. There is no quantitative guidance available to address this problem, but it is reasoned that the use of a multiplicative adjustment factor would be more appropriate if the weighted model predictions are significantly asymmetric.
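A matching sketch for the multiplicative factor (again illustrative Python; the conversion from log-space moments uses the standard identities for a lognormal variable, and the predictions must be strictly positive):

```python
import numpy as np

def multiplicative_adjustment(predictions, probabilities, best):
    """Moments of the lognormal adjusted prediction y, Eqs. (13)-(16)."""
    y = np.asarray(predictions, dtype=float)
    p = np.asarray(probabilities, dtype=float)
    ln_y, ln_star = np.log(y), np.log(y[best])
    mean_lnEm = np.sum(p * (ln_y - ln_star))       # Eq. (13)
    mean_lny = ln_star + mean_lnEm                 # Eq. (15)
    var_lny = np.sum(p * (ln_y - mean_lny) ** 2)   # Eqs. (14) and (16)
    # Standard lognormal identities recover the moments of y itself.
    mean_y = np.exp(mean_lny + var_lny / 2.0)
    var_y = (np.exp(var_lny) - 1.0) * np.exp(2.0 * mean_lny + var_lny)
    return mean_y, var_y
```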
6. Demonstration problems

A nonlinear spring–mass system is considered to illustrate the adjustment factor approach. The methodology to quantify the model uncertainty associated with a computer simulation using experimental data is demonstrated with the finite element simulation of a laser peening process, a time-dependent high impact process.

6.1. Nonlinear spring–mass system

The free vibration of a single-degree-of-freedom system into which a spring introduces nonlinearity is described by the governing equation, Eq. (17):

$$m\ddot{u} + f(u) = 0 \tag{17}$$

where m is a mass, and the spring force f(u) is a nonlinear function of displacement u. Depending on the functions introduced to describe the relation between spring force and displacement, different models are generated to represent a nonlinear spring–mass system. Suppose that there are three types of spring force functions suggested for this problem. They are [27]

$$f_1(u) = e\,u^{1/3} \tag{18}$$

$$f_2(u) = a u + b u^3 \tag{19}$$

$$f_3(u) = c u + \frac{d u}{\sqrt{1 + u^2}} \tag{20}$$

The three force–displacement functions are graphically represented in Fig. 1, given the values of the constants (e = 0.65 N/cm^(1/3), a = 1 N/cm, b = 0.35 N/cm³, c = 1 N/cm, and d = −0.5 N). The nonlinear spring described by Eq. (18) (model 1) is the stiffest, and the one described by Eq. (20) (model 3) is the most flexible.

Given the mass and initial conditions (m = 1 kg, u(0) = 1 cm, and du/dt(0) = 0 cm/s), the fundamental natural frequency of a
spring–mass system can be predicted by the three mathematical models. Using simple formulas [27], the natural frequency is predicted as shown in Table 1. Model 1 predicts the largest frequency (0.863 rad/s) among the models considered. Model 3 predicts the smallest frequency (0.808 rad/s).

Fig. 1. Nonlinear spring force in three different models of the spring–mass system.

Table 1
Predictions and probabilities of three models for the spring–mass system.

Model      Natural frequency (rad/s)    Model probability
Model 1    0.863                        0.3
Model 2    0.859                        0.5
Model 3    0.808                        0.2

Table 2
Mean and standard deviation of the adjusted prediction of natural frequency for the two adjustment factor cases.

Case                                     Mean of adjusted prediction (rad/s)    SD of adjusted prediction (rad/s)
Additive adjustment factor case          0.8500                                 0.0208
Multiplicative adjustment factor case    0.8445                                 0.0240

Model probabilities are assumed for this problem because there is no information available to evaluate them (Table 1). Using both the additive and multiplicative adjustment factors, the adjustment factor approach is applied to quantify the model uncertainty.

Model 2 is identified as the best model because it has the highest model probability (0.5) among the models considered. The prediction of the best model, which would be the only one considered if model uncertainty were ignored, is adjusted by the two adjustment factors to incorporate model uncertainty. The mean of the adjusted prediction of natural frequency is shown for each adjustment factor case in Table 2. The prediction of the best model is decreased in both adjustment factor cases because the model making a smaller prediction than the best model (model 3) has more effect on the adjustment of the best model prediction than the model making a larger prediction (model 1). The prediction of the best model (0.859 rad/s) is decreased by 0.0090 rad/s in the additive adjustment factor case and by 0.0145 rad/s in the multiplicative adjustment factor case. The two means of the adjusted predictions show an insignificant difference. The standard deviation of the adjusted prediction for the additive adjustment factor case (0.0208 rad/s) differs little from that for the multiplicative adjustment factor case (0.0240 rad/s), as shown in Table 2. The standard deviation indicates the degree of dispersion in the weighted model predictions. It also reflects the degree of model uncertainty resulting from consideration of the two alternate models that make predictions different from the best model prediction.
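The additive-case entries of Table 2 can be reproduced with the additive_adjustment helper sketched in Section 5.1 (illustrative; tiny discrepancies from the tabulated standard deviation can remain because the predictions in Table 1 are rounded):

```python
import numpy as np

freqs = np.array([0.863, 0.859, 0.808])  # model predictions, Table 1 (rad/s)
probs = np.array([0.3, 0.5, 0.2])        # assumed model probabilities
mean_y, var_y = additive_adjustment(freqs, probs, best=1)  # model 2 is best
print(mean_y, np.sqrt(var_y))            # approx. 0.8500 and 0.021 rad/s
```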
A normal and a lognormal distribution are shown in Fig. 2; they represent the adjusted predictions of natural frequency for the additive and multiplicative adjustment factor cases, respectively. Both distributions of natural frequency are almost identical because the weighted predictions of the three models are almost symmetrical. So, for this problem, which of the two adjustment factors is used to represent model uncertainty in the prediction of natural frequency is of little concern. However, if the results of using the two adjustment factors showed a significant difference, it would be important to decide which of the two represents the reality with higher confidence. It would be reasonable to select one of the two factors after considering several points, such as the number of models considered and the symmetry of the weighted model predictions.

The model probabilities shown in Table 1 are assumed as if they were quantified by a group of experts, to demonstrate the procedure for updating subjectively quantified model probabilities. When presented with the distributions of adjusted predictions shown in Fig. 2, experts might feel the need to modify the model probabilities. Assuming the experts altered the model probabilities to new values (e.g. P(M1) = 0.3, P(M2) = 0.4, and P(M3) = 0.3), a new analysis would be executed to update the predictive distributions of natural frequency using the modified model probabilities. The modification of model probabilities by the experts might be iterated until the experts regard
the predictive distributions reflecting their opinions about the models as full and appropriate characterizations of model uncertainty.

Fig. 2. PDFs of fundamental natural frequency for the additive and multiplicative adjustment factor cases—spring–mass system.

6.2. Finite element simulation for laser peening process

6.2.1. Problem description

Laser peening (LP) is an advanced surface enhancement technique that has been shown to increase the fatigue life of metallic components. LP has also been shown to improve the corrosion and fretting properties of metals. During the LP process, laser energy is converted into shock waves at the surface that induce compressive residual stresses. Fatigue life is improved because the induced compressive residual stresses inhibit the formation of cracks. A detailed description of the LP process can be found in Refs. [28–30].

Fig. 3. Representative axi-symmetric FE mesh for the LP simulation model [20].
In simulating the LP process, accurate description of the material behavior is a challenging task because of the high strain rates experienced by the material. During the LP process, the strain rates experienced by a material can reach as high as 10⁶ s⁻¹. In such high strain-rate processes, different material models are available to describe the elastic–plastic behavior. In a recent paper by Amarchinta et al. [29], three material models were considered to describe the unknown material behavior: the elastic perfectly plastic (EPP) model, the Johnson–Cook (JC) model, and the Zerilli–Armstrong (ZA) model. Later, a fourth material model was also presented: the Khan–Huang–Liang (KHL) model [30].

Each material model results in a different finite element (FE) simulation to predict the residual stress field induced by the LP process. The simulation requires an extensive computational effort due to the modeling of the material behavior under high-pressure shock waves with time-marching numerical procedures. A schematic illustration of the LP FE model is shown in Fig. 3. In this work, FE simulation results of the residual stress field for 6.1 GPa peak pressure are taken from recent works [29,30]. The simulation results of the four FE models based on the four different material theories are shown along with the experimental data in Fig. 4. Basing a prediction of residual stress on a single model can cause unreliable results because it is beyond our capability to know which of the four models makes a prediction closest to the true residual stress not yet observed. The significant differences between the four simulation results imply that the model uncertainty in the prediction of residual stress might be considerable. So, the model uncertainty must be quantified by integrating the simulation outcomes of the four models weighted by model probabilities. The experimental data measured at nine points are used as information to estimate the probabilities of the four models.

6.2.2. Quantification of model uncertainty in the FE simulation for the LP process

For this problem, the prior probabilities of the four FE models are assumed to be uniform, as shown in Table 3, because of the unavailability of information to quantify them. The evaluation of model likelihoods is required to update the prior model probabilities into the posterior model probabilities using Bayes' theorem, as shown in Eq. (1). The observed experimental data and model outcomes shown in Fig. 4 are used to evaluate the likelihoods of the four models considered. The differences between the experimental data and the model outcomes are measured and are assumed to be randomly sampled from
independent and identical normal distributions. The likelihoods of the experimental data for the four models are calculated using Eqs. (3)–(6).

Fig. 4. Residual stress comparison between the predictions of four FE models and experimental data for the axi-symmetric LP component [20,21].

Table 3
Prior and posterior probabilities of four FE models for the axi-symmetric LP component.

Model                  Prior model probability    Posterior model probability
EPP-based FE model     2.50 × 10⁻¹                2.09 × 10⁻⁴
JC-based FE model      2.50 × 10⁻¹                3.86 × 10⁻¹
ZA-based FE model      2.50 × 10⁻¹                4.40 × 10⁻⁴
KHL-based FE model     2.50 × 10⁻¹                6.14 × 10⁻¹

Using an additive adjustment factor, the adjustment factor approach is implemented to quantify the model uncertainty in the prediction of residual stresses. The reason for utilizing only an additive adjustment factor is that a multiplicative adjustment factor cannot deal with the negative values of some of the residual stresses. Using an additive adjustment factor, the mean and variance of the adjusted prediction of a residual stress field can be calculated with Eqs. (8)–(11).
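Putting the pieces together for this problem, the sketch below chains the helpers from Sections 3.2, 3.3.2, and 5.1: likelihoods from the nine measured points, posterior probabilities by Eq. (1), and a depth-wise additive adjustment. The stress arrays are hypothetical placeholders, not the measured data of Fig. 4.

```python
import numpy as np

# Hypothetical residual stresses (MPa) at the nine measurement depths.
y_exp = np.array([-380., -350., -300., -240., -180., -120., -70., -30., -5.])
# Hypothetical predictions of the four FE models at the same depths
# (EPP, JC, ZA, KHL), generated here only to make the sketch runnable.
rng = np.random.default_rng(0)
preds = np.vstack([y_exp + rng.normal(0.0, s, y_exp.size)
                   for s in (80.0, 15.0, 70.0, 10.0)])

likes = np.array([model_likelihood(y_exp, f) for f in preds])
post = posterior_model_probabilities(likes)      # uniform priors, Eq. (1)
best = int(np.argmax(post))                      # highest-probability model

# Additive adjustment applied independently at each depth, Eqs. (8)-(11).
mean_y = preds[best] + post @ (preds - preds[best])   # Eq. (10)
var_y = post @ (preds - mean_y) ** 2                  # Eqs. (9) and (11)
```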
6.2.3. Results of quantifying model uncertainty in the prediction of a residual stress field

The calculated posterior probabilities of the four FE models are shown in Table 3 together with the prior model probabilities. As shown in Eq. (1), equal prior model probabilities cancel out in the calculation of the posterior model probabilities. So, the posterior probability of a model is the ratio of the likelihood for that model to the sum of the likelihoods for all the models. A posterior model probability indicates how well an FE model is supported by the experimental data relative to the other FE models. The posterior probabilities of the two FE models that include the EPP and the ZA material models (2.09 × 10⁻⁴ and 4.40 × 10⁻⁴) are significantly smaller than those of the two FE models that include the JC and the KHL material models (3.86 × 10⁻¹ and 6.14 × 10⁻¹) because the first two FE models are poorly supported by the experimental data compared with the last two. This implies that the first two FE models have a considerably smaller chance of being the best model than the last two FE models. It can be inferred from this fact that the JC and the KHL material models are much more effective in the simulation of the LP component than the EPP and the ZA material models. The FE model that includes the KHL material model (the KHL-based FE model) is identified as the best model because it has the highest probability among the models considered. However, uncertainty exists in the identification of the best model because there is a possibility that the other FE models, especially the FE model that includes the JC material model (the JC-based FE model), might be the best model if additional experimental data are observed.

The mean of the adjusted prediction of a residual stress field is shown in Fig. 5. The mean of the adjusted prediction is the sum of the prediction of the best model (the KHL-based FE model) and the mean of an additive adjustment factor, which accounts for the effects of the alternate FE models. The mean of an adjustment factor indicates the extent to which the weighted predictions of the alternate FE models are asymmetrical around the prediction of the best model. The mean of the adjusted prediction indicates the most likely estimate of residual stress at each depth because the distribution of residual stress at every depth is assumed to be normal. The variance of the adjusted prediction indicates the degree of disagreement about the prediction of a residual stress field among the FE models considered. It also reflects the degree of model uncertainty in the prediction of a residual stress field.

6.2.4. Establishment of a confidence band for a residual stress field

In addition to the most likely estimate of the true residual stress, represented by the mean of the adjusted prediction, an interval estimate of the residual stress must be made at every depth to indicate the reliability of the model prediction. For this problem, a 95% confidence interval for residual stress is established at every depth in the LP component considered. A 95% confidence interval represents the interval that includes the true residual stress with a probability of 0.95. Because each distribution of residual stress is assumed to be normal, the end points of a 95% confidence interval are calculated at each depth using the mean and variance of the adjusted prediction by Eq. (21):

$$\left[\,E(y) - 1.96\sqrt{\mathrm{Var}(y)},\;\; E(y) + 1.96\sqrt{\mathrm{Var}(y)}\,\right] \tag{21}$$
where E(y) is the mean of the adjusted prediction of residual stress y, and Var(y) is its variance. By connecting the upper end points at all the depths, an upper bound curve is drawn. Similarly, a lower bound curve is drawn by connecting the lower end points. The 95% confidence band bounded by the upper and lower curves shown in Fig. 5 represents a collection of 95% confidence intervals. The confidence band is dominated by the differences between the predictions of the KHL-based FE model and those of the JC-based FE model because the probabilities of the other two FE models are much smaller compared with those of the former two. The width of the 95% confidence band is considerable below a depth of around 0.3 mm because the KHL-based and the JC-based FE models show significant differences in their predictions of residual stresses below that depth. This means that the degree of model uncertainty below that depth is significant, since the width of the confidence band reflects the degree of model uncertainty.

Fig. 5. Mean of adjusted prediction and 95% confidence band of the residual stress field.
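Eq. (21) applied depth-wise gives the band directly; a short sketch (Python, illustrative, reusing the mean_y and var_y arrays from the Section 6.2.2 sketch):

```python
import numpy as np

def confidence_band(mean_y, var_y, z=1.96):
    """95% band per Eq. (21): E(y) +/- z * sqrt(Var(y)) at each depth."""
    mean_y = np.asarray(mean_y, dtype=float)
    half = z * np.sqrt(np.asarray(var_y, dtype=float))
    return mean_y - half, mean_y + half   # lower and upper bound curves

lower, upper = confidence_band(mean_y, var_y)
```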
As mentioned above, a 95% confidence band is the band that is estimated to include the true system responses with 95% probability. However, this statement would hold only if the true model existed in the set of models considered. It is not believed that the established confidence band encloses the true residual stresses with 95% probability because there is no true model among the models considered. This fact explains why more than half of the observed experimental data are positioned outside the confidence band. The established confidence band is too narrow because it considers only model uncertainty. To make the confidence band more accurate, uncertainty in the error of model prediction as well as model uncertainty should be incorporated into the confidence band.

7. Summary remarks

In this paper, a methodology to evaluate the model likelihood for each model by probabilistically comparing model predictions with experimental data is developed to quantify the model uncertainty resulting from the creation of different deterministic simulation models. The formulation used to describe the probabilistic relationship between experimental data and model predictions makes the evaluation of model likelihood easy to implement, but it does not accommodate a correlated error structure. The prediction of an unknown system response incorporating model uncertainty is not conditional on a single model in a model set. The variance in the prediction that would be missing if the other models in the model set considered were disregarded is incorporated into the composite model prediction.

The adjustment factor approach is illustrated with the quantification of model uncertainty in the natural frequency prediction for a nonlinear spring–mass system. The proposed methodology is demonstrated with the FE simulation of a laser peening process. Model uncertainty is quantified by evaluating model probability using limited experimental data on residual stresses and the deterministic predictions by the set of models considered. The model uncertainty involved in the FE simulation due to the use of different material models proves to be significant. Given deterministic predictions by a model set and experimental data, the methodology can be applied to any model uncertainty quantification problem regardless of the number of models considered and the amount of observed experimental data. Uncertainty in the prediction of a system response might be underestimated if the errors of the model predictions are considerable. To make a more informed prediction of a system response, an investigation should be made to incorporate uncertainty in the prediction error, as well as model selection uncertainty, into the response prediction.

Acknowledgements

The authors acknowledge the support of this research work through Contract FA8650-04-D-3446, DO #25, sponsored by Wright Patterson Air Force Base, Ohio.

References

[1] Alvin KF, Oberkampf WL, Diegert KV, Rutherford BM. Uncertainty quantification in computational structural dynamics: a new paradigm for model validation. In: Proceedings of the 16th international modal analysis conference, Santa Barbara, CA; 1998. p. 1191–8.
[2] Zio E, Apostolakis G. Two methods for the structured assessment of model uncertainty by experts in performance assessments of radioactive waste repositories. Reliability Engineering and System Safety 1996;54(2–3):225–41.
[3] Reinert JM, Apostolakis GE. Including model uncertainty in risk-informed decision making. Annals of Nuclear Energy 2006;33(4):354–69.
[4] Zhang R, Mahadevan S. Model uncertainty and Bayesian updating in reliability-based inspection. Structural Safety 2000;22(2):145–60.
[5] Zouaoui F, Wilson JR. Accounting for input model and parameter uncertainty in simulation. In: Proceedings of the 2001 winter simulation conference, Arlington, VA; 2001. p. 290–9.
[6] McFarland JM, Bichon BJ. Bayesian model averaging for reliability analysis with probability distribution model form uncertainty. In: Fiftieth AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics and materials conference, Palm Springs, CA; 2009.
[7] Bunge M. Foundations of physics. New York, NY: Springer-Verlag; 1967.
[8] Burnham KP, Anderson DR. Model selection and multi-model inference: a practical information-theoretic approach, 2nd ed. New York, NY: Springer-Verlag; 2002.
[9] Vamos T. Epistemic background problems of uncertainty. In: First international symposium on uncertainty modeling and analysis, College Park, MD, USA; 1990. p. 96–100.
[10] Link WA, Barker RJ. Model weights and the foundations of multimodel inference. Ecology 2006;87(10):2626–35.
[11] Draper D. Assessment and propagation of model uncertainty. Journal of the Royal Statistical Society Series B 1995;57(1):45–97.
[12] Raftery AE. Approximate Bayes factors and accounting for model uncertainty in generalized linear models. Biometrika 1996;83(2):251–66.
[13] Bayarri MJ, Berger JO, Cafeo J, Garcia-Donato G, Liu F, Palomo J, et al. Computer model validation with functional output. Annals of Statistics 2007;35(5):1874–906.
[14] Bayarri MJ, Berger JO, Paulo R, Sacks J, Cafeo JA, Cavendish J, et al. A framework for validation of computer models. Technometrics 2007;49(2):138–54.
[15] Kennedy MC, O'Hagan A. Bayesian calibration of computer models. Journal of the Royal Statistical Society Series B (Statistical Methodology) 2001;63(3):425–64.
[16] Xiong Y, Chen W, Tsui KL, Apley DW. A better understanding of model updating strategies in validating engineering models. Computer Methods in Applied Mechanics and Engineering 2009;198(15–16):1327–37.
[17] Jiang X, Yang RJ, Barbat S, Weerappuli P. Bayesian probabilistic PCA approach for model validation of dynamic systems. In: Proceedings of the SAE world congress and exhibition, Detroit, MI; April 2009.
[18] AIAA. Guide for the verification and validation of computational fluid dynamics simulations. American Institute of Aeronautics and Astronautics; 1998. AIAA-G-077.
[19] ASME. Guide for verification and validation in computational solid mechanics. American Society of Mechanical Engineers; 2006. ASME V&V 10.
[20] McFarland JM, Mahadevan S, Swiler L, Giunta A. Bayesian calibration of the QASPR simulation. In: Forty-eighth AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics and materials conference, Honolulu, HI; April 2007.
[21] McFarland JM. Uncertainty analysis for computer simulations through validation and calibration. PhD thesis, Vanderbilt University, Nashville, TN; 2008.
[22] Clemen RT. Combining forecasts: a review and annotated bibliography. International Journal of Forecasting 1989;5(4):559–83.
[23] Barnard GA. New methods of quality control. Journal of the Royal Statistical Society Series A 1963;126:255–8.
[24] Leamer EE. Specification searches: ad hoc inference with nonexperimental data. New York, NY: John Wiley & Sons; 1978.
[25] Nilsen T, Aven T. Models and model uncertainty in the context of risk analysis. Reliability Engineering and System Safety 2003;79(3):309–17.
[26] Mosleh A, Apostolakis G. The assessment of probability distributions from expert opinions with an application to seismic fragility curves. Risk Analysis 1986;6(4):447–61.
[27] He JH. Variational approach for nonlinear oscillators. Chaos, Solitons and Fractals 2007;34(5):1430–9.
[28] Singh G, Grandhi RV, Stargel DS. Modeling and parameter design of a laser shock peening process. International Journal for Computational Methods in Engineering Science and Mechanics. Philadelphia, PA: Taylor and Francis; in print.
[29] Amarchinta HK, Grandhi RV, Langer K, Stargel DS. Material model validation for laser shock peening process simulation. Modelling and Simulation in Materials Science and Engineering 2009;17(1): paper id 015010.
[30] Amarchinta HK, Grandhi RV, Clauer AH, Langer K, Stargel D. Simulation of residual stress induced by a laser peening process through inverse optimization of material models. Journal of Materials Processing Technology, September 2009; submitted for publication.
