


Modeling and Quantification of Physical Systems
Uncertainties in a Probabilistic Framework
Americo Cunha Jr

To cite this version:


Americo Cunha Jr. Modeling and Quantification of Physical Systems Uncertainties in a Probabilistic Framework. In: Stephen Ekwaro-Osire, Aparecido Carlos Gonçalves, Fisseha M. Alemayehu (Eds.), Probabilistic Prognostics and Health Management of Energy Systems, Springer International Publishing, pp. 127-156, 2017, ISBN 978-3-319-55851-6. <10.1007/978-3-319-55852-3_8>. <hal-01516295v2>

HAL Id: hal-01516295


https://hal.archives-ouvertes.fr/hal-01516295v2
Submitted on 21 Mar 2018

This is a corrected version of:
A. Cunha Jr, Modeling and Quantification of Physical Systems Uncertainties in a Probabilistic Framework, in Probabilistic Prognostics and Health Management of Energy Systems (Editors: S. Ekwaro-Osire, A. C. Gonçalves, and F. M. Alemayehu), Springer International Publishing, New York, 2017. DOI: 10.1007/978-3-319-55852-3_8

Modeling and Quantification of


Physical Systems Uncertainties in a
Probabilistic Framework

Americo Cunha Jr

Abstract Uncertainty quantification (UQ) is a multidisciplinary area that deals with the quantitative characterization and reduction of uncertainties in applications. It is essential to certify the quality of numerical and experimental analyses of physical systems. The present manuscript aims to provide the reader with an introductory view of the modeling and quantification of uncertainties in physical systems. In this sense, the text presents some fundamental concepts in UQ, gives a brief review of basic notions of probability, discusses, through a simplistic example, the fundamental aspects of probabilistic modeling of uncertainties in a physical system, and explains the uncertainty propagation problem.

Key words: Uncertainty quantification, stochastic modeling of uncertain-


ties, probabilistic approach

1 An Introductory Overview on UQ

Typically, highly complex engineering projects use both numerical simulations and experimental tests on prototypes to specify a certain system or component with desired characteristics. These two tools are used in a similar way by scientists to investigate physical phenomena of interest. However, neither approach provides a response that is an exact reproduction of the physical system behaviour, because the computational model and the test rig are both subject to uncertainties, which are intrinsic to the modeling process (lack of knowledge about the physics) and to the model parameters (measurement inaccuracies, manufacturing variabilities, etc.).

NUMERICO - Nucleus of Modeling and Experimentation with Computers


Universidade do Estado do Rio de Janeiro e-mail: [email protected]


In order to improve the reliability of numerical results and experimental data, it is necessary to quantify the underlying uncertainties. Cautious experimentalists have been doing this for many decades, developing a high level of competence in specifying the level of uncertainty in an experiment. It is worth remembering that an experiment that does not specify its level of uncertainty is not well regarded by the technical/scientific community. On the other hand, only recently has the numerical community begun to pay attention to the need of specifying the level of confidence for computer simulations.
Uncertainty quantification (UQ) is a multidisciplinary area that deals with the quantitative characterization and reduction of uncertainties in applications. One reason that UQ has gained such popularity over the last years, in the numerical world, is that several books on the subject have recently emerged [1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [11], [12]. To motivate its study, we present three important scenarios where UQ is an essential tool:

Decision making: Some risk decisions, whose negative outcome can cause catastrophic failure or huge financial costs, need to be well analysed before a final opinion is issued by the responsible party. The possible variabilities that generate uncertain scenarios need to be taken into account in the analysis. The evaluation of these uncertain scenarios has the task of assisting the responsible party in minimizing the chances of a wrong decision. Briefly, and in this context, UQ is essential to provide the necessary certification for a risk decision.

Model validation: Experimental data are widely used to check the accuracy of a computational model which is used to emulate a real system. Although this procedure has been used by scientists and engineers for many decades, there are still no universally accepted criteria to ensure model quality. However, it is known that any robust criterion of model validation must take into account the simulation and experiment uncertainties.

Robust design: An increasingly frequent requirement in several projects is the robust design of a component, which consists of making a specific device weakly sensitive to variations in its properties.
In a very simplistic way, we can summarize the UQ objectives as (i) add error bars to experiments and simulations, and (ii) define a precise notion of validated model.
The first objective is illustrated in Figure 1(a), which shows the comparison between a simulation result and experimental data, and in Figure 1(b), which presents the previous graph with the inclusion of an envelope of reliability around the simulation. Like careful experimentalists, who have used error bars for a long time, UQ mainly focuses on "error bars for simulations".
Fig. 1 (a) Comparison between simulation and experimental data, without an envelope of reliability for the simulation, and (b) the same comparison including this envelope.

Moreover, a possible notion of validated model is illustrated in Figure 2, where experiment and simulation are compared, and the computational model is considered acceptable if the admissible range for the experimental value (defined by the point and its error bar) is contained within the reliability envelope around the simulation.

Fig. 2 Illustration of a possible notion of validated model: an OPTIMAL case, where the experimental error bar lies inside the simulation envelope, and a VIOLATION case, where it does not.

This chapter is organised into six sections. Besides this introduction, there is a presentation of some fundamental concepts of UQ in section 2; a brief review of probability theory basics in section 3; an exposition of the fundamental aspects of probabilistic modeling of uncertainties, through a simplistic example, in section 4; the presentation of the uncertainty propagation problem in section 5; and the final remarks in section 6.
It is noteworthy that many of the ideas presented in this manuscript are strongly influenced by courses taught by the author's doctoral supervisor, Prof. Christian Soize [13], [14] and [15]. Lectures of Prof. Gianluca Iaccarino, Prof. Alireza Doostan, and collaborators were also very inspiring [16], [17] and [18].

2 Some Fundamental Concepts on UQ

This section introduces some fundamental notions in the context of UQ.

2.1 Errors and Uncertainties

Unfortunately, to the present date, there is still no consensus in the UQ literature about the notions of errors and uncertainties. This manuscript presents the definitions we think make more sense, introduced in [19].
Let us start with three conceptual ideas that will be relevant to the stochastic modeling of physical systems: the designed system, the real system and the computational model. A schematic illustration of these concepts is shown in Figure 3.

[Diagram: designed system → manufacturing process (variabilities) → real system (uncertain system); designed system → mathematical modeling (model uncertainty) → computational model, fed with model input and model parameters (data uncertainty).]

Fig. 3 Schematic representation of the relationship between the designed system, the real
system and the computational model [19].

Designed system: The designed system consists of an idealized project for a


physical system. It is defined by the shape and geometric dimensions, material
properties, connection types between components (boundary conditions), and
many other parameters. This ideal system can be as simple as a beam or as
complex as an aircraft [19].

Real system: The real system is constructed through a manufacturing process, taking the designed system as reference. In contrast to the designed system, the real system is never known exactly, as the manufacturing process introduces some variabilities in the system's geometric dimensions, in its material properties, etc. No matter how controlled the construction process is, these deviations from the conceptual project are impossible to eliminate, since any manufacturing process is subject to finite accuracy. Thus, the real system is uncertain with respect to the designed system [19].

Computational model: In order to analyze the real system behaviour, a computational model is used as a predictive tool. The construction of this computational model begins with a physical analysis of the designed system, which identifies the associated physical phenomena and makes hypotheses and simplifications about its behaviour. The identified physical phenomena are then translated into equations in a mathematical formulation stage. Using appropriate numerical methods, the model equations are discretized and the resulting discrete system of equations is solved, providing an approximation to the computational model response. This approximate response is then used to predict the real system behaviour [19].

Numerical errors: The response obtained with the computational model is, in fact, an approximation to the true solution of the model equations. Inaccuracies, intrinsic to the discretization process, are introduced in this step, giving rise to numerical errors [19]. Other sources of error are: (i) the finite precision arithmetic used to perform the calculations, and (ii) possible bugs in the computer code implementation of the computational model.

Uncertainties on the data: The computational model is supplied with model input and parameters, which are inexact emulations of the real system input and parameters, respectively. Thus, the computational model is uncertain with respect to the real system. The discrepancy between the information supplied to the computational model and that of the real system is called data uncertainty [19], [4].

Uncertainties on the model: In the conception of the computational model, the assumptions made may or may not be in agreement with reality, which may introduce additional inaccuracies known as model uncertainties. This source of uncertainty is essentially due to lack of knowledge about the phenomenon of interest and, usually, is the largest source of inaccuracy in the computational model response [19], [4].

Naturally, uncertainties affect the response of a computational model, but they should not be considered errors, because they are physical in nature. Errors are purely mathematical in nature and can be controlled and reduced to a negligible level if the numerical methods and algorithms used are well known by the analyst [19], [4]. This differentiation is summarized in Figure 4.


Fig. 4 The difference between errors and uncertainties.

2.2 Verification and Validation

Today verification and validation, also called V&V, are two concepts of fundamental importance for any carefully done work in UQ. Early works advocating these ideas, and showing their importance, date back to the late 1990s and early 2000s [20], [21], [22], [23]. The impact on the numerical simulation community was not immediate, but has been continuously growing over the years, conquering a prominent space in the last ten years, especially after the publication of Oberkampf and Roy's book [24].

These notions are well characterized in terms of two questions:

Verification:
Are we solving the equation right?

Validation:
Are we solving the right equation?

Although extremely simplistic, the above "definitions" communicate, directly and objectively, the key ideas behind the two concepts. Verification is a task whose goal is to make sure that the solution of the model equations is being calculated correctly. In other words, it checks whether the computational implementation has no critical bug and whether the numerical method works well. It is an exercise in mathematics. Meanwhile, validation is a task which aims to check whether the model equations provide an adequate representation of the physical phenomenon/system of interest. The proper way to do this "validation check-up" is through a direct comparison of the model responses with experimental data carefully obtained from the real system. It is an exercise in physics. In Figure 5 the reader can see a schematic representation of the difference between the two notions.


Fig. 5 The difference between verification and validation.

An example in V&V: A skydiver jumps vertically in free fall, from a helicopter hovering at a height of y0 = 2000 m, with velocity v0 = 0 m/s. This situation is illustrated in Figure 6. Imagine we want to know the skydiver's height at every moment of the fall. To do this we develop a (toy) model where the falling man is idealized as a point mass m = 70 kg, under the action of gravity g = 9.81 m/s². The height at time t is denoted by y(t).

Fig. 6 V&V example: a skydiver in free fall from an initial height y0 .

The skydiver's height at time t can be determined through the following initial value problem (IVP)

m ÿ(t) + m g = 0,    (1)
ẏ(0) = v0 ,
y(0) = y0 ,

where the upper dot is an abbreviation for the time derivative, i.e., ˙( ) := d( )/dt.
This toy model is obtained from Newton’s 2nd law of motion and considers
the weight as the only force acting on the skydiver body.
Imagine that we have developed a computer code to integrate this IVP using a standard fourth-order Runge-Kutta method [25]. The model response obtained with this computer code is shown in Figure 7.


Fig. 7 Response obtained with the toy model.

To check the accuracy of the numerical method and its implementation, we have at our disposal the analytical (reference) solution of the IVP, given by

y(t) = −(1/2) g t² + v0 t + y0 .    (2)
In Figure 8(a) we can see the comparison between the toy model response (solid blue curve) and the reference solution (dashed red curve). We note that both curves are in excellent agreement, and if we look at Figure 8(b), which shows the difference between the numerical and analytical solutions, the effectiveness of the numerical method and the robustness of its implementation become evident.
Here the verification was made taking as reference the exact solution of the model equation. In the more frequent case, the solution of the model equations is not known. In such a situation, the verification task can be performed, for instance, using the method of manufactured solutions [26], [27], [24], [28].
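This verification exercise can be sketched in a few lines of Python (an illustrative sketch, not the author's original code; the step size and final time are assumptions), integrating the IVP (1) with a classic fourth-order Runge-Kutta scheme and tracking the deviation from the analytical solution (2):

```python
import numpy as np

def rk4_step(f, t, u, h):
    """One step of the classic fourth-order Runge-Kutta method."""
    k1 = f(t, u)
    k2 = f(t + 0.5*h, u + 0.5*h*k1)
    k3 = f(t + 0.5*h, u + 0.5*h*k2)
    k4 = f(t + h, u + h*k3)
    return u + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)

g, y0, v0 = 9.81, 2000.0, 0.0            # gravity, initial height, initial velocity
rhs = lambda t, u: np.array([u[1], -g])  # state u = [y, v]; from m y'' + m g = 0

h, t, u = 0.01, 0.0, np.array([y0, v0])
max_err = 0.0
while t < 20.0:                          # observe the fall over 20 s
    u = rk4_step(rhs, t, u, h)
    t += h
    y_exact = -0.5*g*t**2 + v0*t + y0    # analytical solution, Eq. (2)
    max_err = max(max_err, abs(u[0] - y_exact))

print(max_err)  # for this IVP the solution is quadratic in t, so RK4 is exact up to roundoff
```

The tiny error is expected: the local truncation error of RK4 involves the fifth derivative of the solution, which vanishes for a quadratic trajectory, so only roundoff accumulates.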

Fig. 8 (a) Solution verification: comparison between toy model response and reference
solution; (b) absolute error of Runge-Kutta method approximation.

Now let us turn our attention to model validation, and compare simulation results with experimental data, as shown in Figure 9(a). We note that the simulation is completely in disagreement with the experimental observations. In other words, the model does not provide an adequate representation of the real system behaviour.
The toy model above takes into account the gravitational force which attracts the skydiver toward the ground, but neglects air resistance effects. This model deficiency (model uncertainty) is the major reason for the observed discrepancy. If the air drag force effects are included, the improved model below is obtained

Fig. 9 (a) Model validation: comparison between experimental data and the toy model,
(b) comparison between experimental data, the toy model, and the improved model.

m ÿ(t) + m g − (1/2) ρ A CD ẏ(t)² = 0,    (3)
ẏ(0) = v0 ,
y(0) = y0 ,

where ρ is the air mass density, A is the cross-sectional area of the falling body, and CD is the (dimensionless) drag coefficient.
With this new model, a better agreement between simulation and experi-
ment is expected. In Figure 9(b) the reader can see the comparison between
experimental data and the responses of both models, where we note that the
improved model provides more plausible results.
An important message, implicit in this example, is that epistemic uncer-
tainties can be reduced by increasing the actual knowledge about the phe-
nomenon/system of interest [22], [24].
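To see why the improved model (3) behaves so differently from the toy model, one can integrate it numerically and observe that the falling speed approaches the terminal value v_t = √(2 m g /(ρ A CD)), at which drag balances weight. The sketch below uses illustrative parameter values (ρ, A and CD are assumptions, not data from the chapter):

```python
import math

# assumed illustrative parameters (rho, A, CD are not from the chapter)
m, g = 70.0, 9.81            # mass (kg), gravity (m/s^2)
rho, A, CD = 1.2, 0.5, 1.0   # air density, cross-sectional area, drag coefficient

def acc(v):
    """Acceleration from Eq. (3): m y'' + m g - 0.5 rho A CD (y')^2 = 0."""
    return -g + (0.5*rho*A*CD/m)*v**2

# explicit Euler with a small step is enough for this illustration
h, t, y, v = 1e-3, 0.0, 2000.0, 0.0
while t < 15.0:
    y += h*v
    v += h*acc(v)
    t += h

v_terminal = math.sqrt(2.0*m*g/(rho*A*CD))  # speed at which drag balances weight
print(abs(v), v_terminal)  # after ~15 s the falling speed is close to v_terminal
```

This saturation of the velocity is precisely what makes the improved model's trajectory bend away from the parabola of the toy model, in better agreement with the experimental data of Figure 9(b).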

2.3 Two Approaches to Model Uncertainties

Since uncertainties in physical systems are the focus of stochastic modeling, two approaches are found in the scientific literature to deal with them: probabilistic, and non-probabilistic.

Probabilistic approach: This approach uses probability theory to model the physical system uncertainties as random mathematical objects. It is well-developed and very consistent from the point of view of mathematical foundations. For this reason, there is a consensus among experts that it is preferable to use it whenever possible [4].

Non-probabilistic approach: This approach uses techniques such as interval analysis, fuzzy sets, imprecise probabilities, evidence theory, probability bounds analysis, fuzzy probabilities, etc. In general, these techniques are less suitable for problems of high stochastic dimension. Usually they are applied only when the probabilistic approach cannot be used [4].

Because of their aleatory nature, data uncertainties are, quite naturally, well represented in a probabilistic environment. Thus, the parametric probabilistic approach is an appropriate method to describe this class of uncertainties. This procedure consists in describing the computational model random parameters as random objects (random variables, random vectors, random processes and/or random fields) and then consistently constructing their joint probability distribution. Consequently, the model response becomes random, and is modeled by another random object, whose nature depends on the model equations. The model response is calculated using a stochastic solver. For further details, we recommend [29], [30], [19], [4] and [31].
When model uncertainties are the focus of the analysis, the non-probabilistic techniques receive more attention. Since the origin of this type of uncertainty is epistemic (lack of knowledge), it is not naturally described in a probabilistic setting. More details on non-probabilistic techniques can be seen in [32], [33], [34]. However, the use of probability theory for model uncertainties is still possible through a methodology called the nonparametric probabilistic approach. This method, which also takes into account the data uncertainty, was proposed in [35], and describes the mathematical operators of the computational model (not the parameters) as random objects. The probability distributions of these objects are then constructed in a consistent way, using the Principle of Maximum Entropy. The methodology lumps the model level of uncertainty into a single parameter, which can be identified by solving a parameter identification problem when (enough) experimental data is available. An overview of this technique can be seen in [19] and [31].
A generalized probabilistic approach describing model and data uncertain-
ties on different probability spaces, with some advantages, is presented in [36]
and [37].

3 A Brief on Probability Theory

This section presents a brief review of basic concepts of probability. The exposition is elementary, and insufficient for a solid understanding of the theory. Our objective is only to equip the reader with the basic probabilistic vocabulary necessary to understand the UQ scientific literature. For deeper studies on probability theory, we recommend the references [38], [39], [40] and [41].

3.1 Probability Space

The mathematical framework in which a random experiment is described


consists of a triplet (Ω, Σ, P), where Ω is called sample space, Σ is a σ-algebra
over Ω, and P is a probability measure. The trio (Ω, Σ, P) is called probability
space.

Sample space: The set which contains all possible outcomes (events) for a
certain random experiment is called sample space, being represented by Ω. An
elementary event in Ω is denoted by ω. Sample spaces may contain a number
of events that is finite, denumerable (countable infinite) or non-denumerable
(non-countable infinity). The following three examples, respectively, illustrate
the three situations:

Example 1 (finite sample space). Rolling a given cube-shaped fair die, whose faces are numbered from 1 through 6, we have Ω = {1, 2, 3, 4, 5, 6}.

Example 2 (denumerable sample space). Choosing randomly an integer even


number, we have Ω = {· · · , −8, −6, −4, −2, 0, 2, 4, 6, 8, · · · }.

Example 3 (non-denumerable sample space). Measuring the temperature (in


Kelvin) at Rio de Janeiro city during the summer, we have Ω = [a, b] ⊂
[0, +∞).

σ-algebra: In general, not all of the outcomes in Ω are of interest so that, in


a probabilistic context, we need to pay attention only to the relevant events.
Intuitively, the σ-algebra Σ is the set of relevant outcomes for a random
experiment. Formally, Σ is σ-algebra if:
• φ ∈ Σ (contains the empty set);
• for any A ∈ Σ we also have Ac ∈ Σ (closed under complementation);

• for any countable collection of Ai ∈ Σ, it is true that ∪_{i=1}^{∞} Ai ∈ Σ (closed under denumerable unions).
Modeling and Quantification of Physical Systems ... 13

Example 4. Consider the experiment of rolling a die with sample space Ω = {1, 2, 3, 4, 5, 6}, where we are interested in knowing if the result is odd or even. In this case, a suitable σ-algebra is Σ = {Ω, {1, 3, 5}, {2, 4, 6}, φ}. On the other hand, if we are interested in knowing the upper face value after rolling, an adequate σ-algebra is Σ = 2^Ω (the set of all subsets of Ω). Different σ-algebras generate distinct probability spaces.

Probability measure: The probability measure is a function P : Σ → [0, 1] ⊂ R


which indicates the level of expectation that a certain event in Σ occurs. In
technical language, P has the following properties:
• P {A} ≥ 0 for any A ∈ Σ (probability is nonnegative);
• P {Ω} = 1 (entire space has probability one);
• for any denumerable collection of mutually disjoint events Ai , it is true that P{ ∪_{i=1}^{∞} Ai } = Σ_{i=1}^{∞} P{Ai } .

Note that P {φ} = 0 (empty set has probability zero).
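These axioms are easy to check on a finite example. The sketch below (an illustration, not code from the chapter) equips the fair-die sample space with the uniform probability measure and verifies nonnegativity, normalization, and additivity over the disjoint events of Example 4:

```python
from fractions import Fraction

omega = {1, 2, 3, 4, 5, 6}

def P(event):
    """Uniform probability measure on the fair-die sample space."""
    return Fraction(len(event), len(omega))

odd, even = {1, 3, 5}, {2, 4, 6}

# nonnegativity and normalization
assert P(set()) == 0 and P(omega) == 1

# additivity for the disjoint events 'odd' and 'even'
assert odd & even == set()
assert P(odd | even) == P(odd) + P(even)

print(P(odd), P(even))  # 1/2 1/2
```

Exact rational arithmetic (fractions.Fraction) keeps the check free of floating-point noise.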

3.2 Random Variables

A mapping X : Ω → R is called a random variable if the preimage of every


real number under X is a relevant event, i.e.,

X⁻¹(x) = {ω ∈ Ω : X(ω) ≤ x} ∈ Σ,  for every x ∈ R.    (4)
We denote a realization of X by X(ω).
Random variables provide numerical characteristics of interesting events,
in such a way that we can forget the sample space. In practice, when working
with a probabilistic model, we are concerned only with the possible values of
X.
Example 5. The random experiment is now to toss two fair dice, so that Ω = {(d1 , d2 ) : 1 ≤ d1 ≤ 6 and 1 ≤ d2 ≤ 6}. Define the random variables X1 and X2 in such a way that X1 (ω) = d1 + d2 and X2 (ω) = d1 d2 . The former is a numerical indicator of the sum of the dice upper face values, while the latter characterizes the product of these numbers.
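Since the sample space of Example 5 is finite, these random variables can be enumerated exhaustively. A minimal illustrative sketch (not code from the chapter):

```python
from fractions import Fraction
from collections import Counter

# sample space of Example 5: ordered pairs of two fair dice
omega = [(d1, d2) for d1 in range(1, 7) for d2 in range(1, 7)]

X1 = Counter(d1 + d2 for d1, d2 in omega)   # sum of upper faces
X2 = Counter(d1 * d2 for d1, d2 in omega)   # product of upper faces

# probability of each value of X1 under the uniform measure on omega
pX1 = {x: Fraction(n, len(omega)) for x, n in X1.items()}

print(pX1[7])   # 1/6: the sum 7 is the most likely value of X1
print(max(X2))  # 36: the largest possible value of X2
```

Note how the sample space disappears from view once the random variables are tabulated: all further questions can be answered from the values of X1 and X2 and their probabilities.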

3.3 Probability Distribution

The probability distribution of X, denoted by PX , is defined as the probability


of the elementary event {X ≤ x}, i.e.,

PX (x) = P {X ≤ x} . (5)
PX has the following properties:
• 0 ≤ PX (x) ≤ 1 (it is a probability);
• PX is non-decreasing, and right-continuous;
• lim PX (x) = 0, and lim PX (x) = 1;
x→−∞ x→+∞

so that

PX (x) = ∫_{ξ=−∞}^{x} dPX (ξ),    (6)

and

∫_R dPX (x) = 1.    (7)

PX is also known as cumulative distribution function (CDF).

3.4 Probability Density Function

If the function PX is differentiable, then we call its derivative the probability


density function (PDF) of X, using the notation pX .
Given that pX = dPX /dx, we have dPX (x) = pX (x) dx, and then

PX (x) = ∫_{ξ=−∞}^{x} pX (ξ) dξ.    (8)

The PDF is a function pX : R → [0, +∞) such that

∫_R pX (x) dx = 1.    (9)

3.5 Mathematical Expectation Operator

Given a function g : R → R, the composition of g with the random variable


X is also a random variable g(X).
The mathematical expectation of g(X) is defined by

E{g(X)} = ∫_R g(x) pX (x) dx.    (10)
With the aid of this operator, we define

mX = E{X} = ∫_R x pX (x) dx,    (11)

σX² = E{(X − mX )²} = ∫_R (x − mX )² pX (x) dx,    (12)

and

σX = √(σX²),    (13)

which are the mean value, variance, and standard deviation of X, respectively. Note further that

σX² = E{X²} − mX² .    (14)

The ratio between standard deviation and mean value is called the coefficient of variation of X,

δX = σX / mX ,  mX ≠ 0.    (15)
These scalar values are indicators of the random variable behaviour. Specifically, the mean value mX is a central tendency indicator, while the variance σX² and the standard deviation σX are measures of dispersion around the mean. The difference between these dispersion measures is that σX has the same unit as mX, while σX² is measured in the unit of mX squared. Since it is dimensionless, the coefficient of variation is a standardized measure of dispersion.
For our purposes, it is also convenient to define the entropy of pX ,

S(pX ) = −E{ln pX (X)},    (16)

which (see Eq.(10)) is equivalent to

S(pX ) = − ∫_R pX (x) ln pX (x) dx.    (17)

Entropy provides a measure for the level of uncertainty of pX [42].
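For a uniform random variable on [a, b], all of these descriptors have simple closed forms (mX = (a+b)/2, σX² = (b−a)²/12, S = ln(b−a)), which makes it a convenient sanity check. A sketch, assuming X ~ Uniform(1, 3) purely for illustration:

```python
import math

a, b = 1.0, 3.0                # assumed support of an illustrative uniform X
n = 100000
h = (b - a)/n

def p(x):
    """Uniform PDF on [a, b]."""
    return 1.0/(b - a)

# midpoint-rule approximations of Eqs. (11), (12), (15) and (17)
xs = [a + (i + 0.5)*h for i in range(n)]
mean = sum(x*p(x)*h for x in xs)
var = sum((x - mean)**2 * p(x)*h for x in xs)
std = math.sqrt(var)
cv = std/mean                                        # coefficient of variation
entropy = -sum(p(x)*math.log(p(x))*h for x in xs)    # Eq. (17)

print(mean, var, cv, entropy)  # ~2.0, ~1/3, ~0.2887, ~ln(2)
```

The computed entropy ln(2) ≈ 0.693 grows with the width of the support: a wider uniform density is "more uncertain", in agreement with the interpretation of S as a measure of the level of uncertainty of pX.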

3.6 Second-order Random Variables

The mapping X is a second-order random variable if the expectation of its


square (second-order moment) is finite, i.e.,
16 Americo Cunha Jr
E{X²} < +∞.    (18)

The inequality expressed in (18) implies that E{X} < +∞ (mX is also finite). Consequently, with the aid of Eq.(14), we see that a second-order random variable X has finite variance, i.e., σX² < +∞.
This class of random variables is very relevant for stochastic modeling, since, for physical considerations, typical random parameters in physical systems have finite variance.

3.7 Joint Probability Distribution

Given the random variables X and Y, the joint probability distribution of X and Y, denoted by PX Y , is defined as

PX Y (x, y) = P{ {X ≤ x} ∩ {Y ≤ y} } .    (19)
The function PX Y has the following properties:
• 0 ≤ PX Y (x, y) ≤ 1 (it is a probability);
• PX (x) = lim_{y→+∞} PX Y (x, y), and PY (y) = lim_{x→+∞} PX Y (x, y) (marginal distributions are limits);
such that

PX Y (x, y) = ∫_{ξ=−∞}^{x} ∫_{η=−∞}^{y} dPX Y (ξ, η),    (20)

and

∫∫_{R²} dPX Y (x, y) = 1.    (21)
PX Y is also known as joint cumulative distribution function.

3.8 Joint Probability Density Function

If the partial derivative ∂ 2 PX Y /∂x ∂y exists, for any x and y, then it is called
joint probability density function of X and Y, being denoted by

pX Y (x, y) = ∂²PX Y /∂x ∂y (x, y).    (22)

Hence, we can write dPX Y (x, y) = pX Y (x, y) dy dx, so that

PX Y (x, y) = ∫_{ξ=−∞}^{x} ∫_{η=−∞}^{y} pX Y (ξ, η) dη dξ.    (23)

The joint PDF is a function pX Y : R² → [0, +∞) which satisfies

∫∫_{R²} pX Y (x, y) dy dx = 1.    (24)

3.9 Conditional Probability

Consider the pair of random events {X ≤ x} and {Y ≤ y}, where the probability of occurrence of the second one is non-zero, i.e., P{Y ≤ y} > 0. The conditional probability of the event {X ≤ x}, given the occurrence of the event {Y ≤ y}, is defined as

P{ {X ≤ x} | {Y ≤ y} } = P{ {X ≤ x} ∩ {Y ≤ y} } / P{Y ≤ y} .    (25)

3.10 Independence of Random Variables

The event {X ≤ x} is said to be independent of the event {Y ≤ y} if the occurrence of the former does not affect the occurrence of the latter, i.e.,

P{ {X ≤ x} | {Y ≤ y} } = P{X ≤ x} .    (26)

Consequently, if the random variables X and Y are independent, from Eq.(25) we see that

P{ {X ≤ x} ∩ {Y ≤ y} } = P{X ≤ x} P{Y ≤ y} .    (27)

This also implies that

PX Y (x, y) = PX (x) PY (y),    (28)

and

pX Y (x, y) = pX (x) pY (y).    (29)
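The factorizations (26)-(27) can be verified exhaustively for two fair dice, whose upper faces d1 and d2 are independent. A brief illustrative sketch (not code from the chapter):

```python
from fractions import Fraction

omega = [(d1, d2) for d1 in range(1, 7) for d2 in range(1, 7)]

def P(event):
    """Uniform probability measure on the two-dice sample space."""
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

# the joint CDF factorizes: P{d1 <= x and d2 <= y} = P{d1 <= x} * P{d2 <= y}
for x in range(1, 7):
    for y in range(1, 7):
        joint = P(lambda w: w[0] <= x and w[1] <= y)
        prod = P(lambda w: w[0] <= x) * P(lambda w: w[1] <= y)
        assert joint == prod

# and the conditional probability (25) reduces to the marginal, as in Eq. (26)
cond = P(lambda w: w[0] <= 3 and w[1] <= 2) / P(lambda w: w[1] <= 2)
print(cond)  # 1/2, equal to P{d1 <= 3}
```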

3.11 Random Process

A random process U, indexed by t ∈ T , is a mapping

U : (t, ω) ∈ T × Ω → U(t, ω) ∈ R,    (30)

such that, for fixed t, the output U(t, ·) is a random variable, while for fixed ω, U(·, ω) is a function of t. In other words, it is a collection of random variables indexed by a parameter. Roughly speaking, a random process, also called a stochastic process, can be thought of as a time-dependent random variable.

4 Parametric Probabilistic Modeling of Uncertainties

This section discusses the use of the parametric probabilistic approach to describe uncertainties in physical systems. Our goal is to provide the reader with some key ideas behind this approach and to call attention to the fundamental issues that must be taken into account. The exposition is based on [13] and [15] and uses a simplistic example to discuss the theory.

4.1 A Simplistic Stochastic Mechanical System

Consider the mechanical system which consists of a spring fixed to a wall on the left side and pulled by a constant force on the right side (Figure 10). The spring stiffness is k, the force is represented by f, and the spring displacement is denoted by u. A mechanical-mathematical model to describe this system's behaviour is given by

k u = f, (31)

from which we get the system response

u = k^{-1} f. (32)


Fig. 10 Mechanical system composed of a fixed spring pulled by a constant force.



4.2 Stochastic Model for Uncertainties Description

We are interested in studying the case where the above mechanical system is subject to uncertainties in the stiffness parameter k. To describe the random behaviour of the mechanical system, we employ the parametric probabilistic approach.
Let us use the probability space (Ω, Σ, P), where the stiffness k is modeled as the random variable K : Ω → R. Therefore, as a result of the relationship imposed by Eq. (32), the displacement u is also uncertain, being modeled as a random variable U : Ω → R, which respects the equilibrium condition given by the following stochastic equation

K U = f. (33)

It is reasonable to assume that the deterministic model is minimally representative, and corresponds to the mean of K, i.e., m_K = k. Additionally, for physical reasons, K must have a finite variance. Thus, K is assumed to be a second-order random variable, i.e., E{K²} < +∞.

4.3 The Importance of Knowing the PDF

Now that we have the random parameter described in a probabilistic context, and a stochastic model for the system, we can ask ourselves some questions about the system response. For instance, to characterize the central tendency of the system response, it is of interest to know the mean of U, denoted by m_U.
Since m_K is a known piece of information about K (but p_K is unknown), we can ask ourselves: Is it possible to compute m_U with this information only? The answer to this question is negative. The reason is that U = K^{-1} f, so that

m_U = E{K^{-1} f} = ∫_R k^{-1} f p_K(k) dk,

and the last integral can only be calculated if p_K is known. Since the map g(k) = k^{-1} f is nonlinear, E{g(K)} ≠ g(E{K}).

Conclusion: In order to obtain any statistical information about the model response, it is absolutely necessary to know the probability distribution of the model parameters.
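The effect of the nonlinearity is easy to observe numerically. The sketch below assumes, purely for illustration, that K is gamma-distributed with hypothetical values m_K = 2 and δ_K = 0.3; the sample mean of f/K then differs visibly from f divided by the sample mean of K:

```python
import random
import statistics

# Numerical illustration that E{g(K)} != g(E{K}) for g(k) = f / k.
# K is taken gamma-distributed (an assumption made only for this sketch),
# with mean m_K and dispersion (coefficient of variation) delta.
random.seed(42)
f, m_K, delta = 1.0, 2.0, 0.3
shape = 1.0 / delta**2          # gamma shape parameter
scale = m_K * delta**2          # gamma scale parameter

ks = [random.gammavariate(shape, scale) for _ in range(100_000)]
m_U_hat = statistics.fmean(f / k for k in ks)   # approximates E{f/K}
naive = f / statistics.fmean(ks)                # g(E{K}), the wrong value

print(m_U_hat, naive)  # m_U_hat is strictly larger (Jensen's inequality)
```

Since g is convex on (0, +∞), Jensen's inequality guarantees E{f/K} > f/E{K}, which the sample estimates reproduce.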

4.4 Why Can’t We Arbitrate Distributions?

As knowledge of the probability distribution of K is necessary, let us assume that it is Gaussian distributed. In this way,

p_K(k) = (1/√(2π σ_K²)) exp{ −(k − m_K)²/(2 σ_K²) }, (34)

whose support is the entire real line, i.e., Supp p_K = (−∞, +∞).
The attentive reader may object, at this point, that from the physical point of view it makes no sense to use a Gaussian distribution to model a stiffness parameter, since K is always positive. This is true, and it makes the arbitrary choice of a Gaussian distribution inappropriate. However, this is not the only reason against this choice.
For physical considerations, it is necessary that the model response U be a second-order (finite variance) random variable, i.e., E{U²} < +∞. Is this possible when we arbitrate the probability distribution as Gaussian? No way! Just do a simple calculation:

E{U²} = E{K^{-2} f²}
      = ∫_R k^{-2} f² p_K(k) dk
      = ∫_{k=−∞}^{+∞} k^{-2} f² (1/√(2π σ_K²)) exp{ −(k − m_K)²/(2 σ_K²) } dk    (35)
      = +∞.

In fact, we also have E{U} = m_U = +∞.


The Gaussian distribution is a bad choice since K must be a positive-valued random variable (almost surely). Thus, we know the following information about K:

• Supp p_K ⊆ (0, +∞) ⇐⇒ K > 0 a.s.
• m_K = k > 0 is known
• E{K²} < +∞

All these requirements are verified by the exponential distribution, whose PDF is given by the function

p_K(k) = 1_{(0,+∞)}(k) (1/m_K) exp(−k/m_K), (36)

where 1_{(0,+∞)} denotes the indicator function of the interval (0, +∞).
However, we still have

E{U²} = E{K^{-2} f²}
      = ∫_R k^{-2} f² p_K(k) dk
      = ∫_{k=0}^{+∞} k^{-2} f² (1/m_K) exp(−k/m_K) dk    (37)
      = +∞,

since the function k ↦ k^{-2} diverges at k = 0. Thus, in order to have E{U²} < +∞, we must have E{K^{-2}} < +∞.

Conclusion: Arbitrating probability distributions for parameters can generate a stochastic model that is inconsistent from the physical/mathematical point of view.
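The divergence of E{K^{-2}} under the exponential model of Eq. (36) can also be seen numerically. The sketch below evaluates the truncated integral ∫ from ε to +∞ of k^{-2} p_K(k) dk by a midpoint quadrature rule (illustrative mean m = 1, tail beyond k = 50 neglected) and shows that it blows up as ε → 0:

```python
import math

# Numerical illustration that E{K^-2} diverges for an exponential K:
# the truncated integral I(eps) = ∫_eps^∞ k^-2 (1/m) e^(-k/m) dk grows
# without bound as eps -> 0 (it grows roughly like 1/eps).
m = 1.0

def integrand(k):
    return k**-2 * math.exp(-k / m) / m

def truncated_moment(eps, upper=50.0, n=200_000):
    # composite midpoint rule on [eps, upper]; the tail beyond is negligible
    h = (upper - eps) / n
    return sum(integrand(eps + (i + 0.5) * h) for i in range(n)) * h

vals = [truncated_moment(2.0**-j) for j in range(2, 7)]
print(vals)  # strictly increasing, roughly doubling as eps halves
```

Each halving of ε roughly doubles the truncated moment, mirroring the 1/ε divergence of the exact integral.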

4.5 An Acceptable Distribution

In short, an adequate distribution must satisfy the conditions below:

• Supp p_K ⊆ (0, +∞) =⇒ K > 0 a.s.
• m_K = k > 0 is known
• E{K²} < +∞
• E{K^{-2}} < +∞

The gamma distribution satisfies all the conditions above, so that it is an acceptable choice. Its PDF is written as

p_K(k) = 1_{(0,+∞)}(k) (1/m_K) ( (δ_K^{-2})^{δ_K^{-2}} / Γ(δ_K^{-2}) ) (k/m_K)^{δ_K^{-2} − 1} exp( −(k/m_K)/δ_K² ), (38)

where 0 ≤ δ_K = σ_K/m_K < 1/√2 is a dispersion parameter, and Γ denotes the gamma function

Γ(α) = ∫_{t=0}^{+∞} t^{α−1} e^{−t} dt. (39)

Conclusion: Probability distributions for model parameters must be objectively constructed (never arbitrated), taking into account all available information about the parameters.
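For readers who want to experiment, the gamma model of Eq. (38) maps directly onto the (shape, scale) parametrization used by Python's `random.gammavariate`: shape = δ_K^{-2} and scale = m_K δ_K². The sketch below (with illustrative values for m_K and δ_K) draws samples and checks their mean and dispersion:

```python
import math
import random
import statistics

# Sampling the gamma model of Eq. (38) with the Python standard library.
# random.gammavariate(shape, scale) has mean shape*scale and coefficient
# of variation 1/sqrt(shape), hence the mapping below.
random.seed(7)
m_K, delta_K = 3.0, 0.25          # mean and dispersion (illustrative values)
shape, scale = 1.0 / delta_K**2, m_K * delta_K**2

ks = [random.gammavariate(shape, scale) for _ in range(200_000)]
mean_hat = statistics.fmean(ks)
var_hat = statistics.fmean((k - mean_hat) ** 2 for k in ks)
cv_hat = math.sqrt(var_hat) / mean_hat     # estimated dispersion delta_K

print(mean_hat, cv_hat)  # close to the prescribed (m_K, delta_K)
```

The sample statistics recover the prescribed mean and dispersion up to Monte Carlo error.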

4.6 How to Safely Specify a Distribution?

In the previous example, we chose a suitable probability distribution by verifying whether the candidate distributions satisfy the constraints imposed by the physical and mathematical properties of the model parameter/response. However, this procedure is not practical and does not provide a unique distribution as a possible choice. For instance, in the spring example, uniform, lognormal, and an infinitude of other distributions are also acceptable (compatible with the restrictions).
Thus, it is natural to ask ourselves if it is possible to construct a consistent stochastic model in a systematic way. The answer to this question is affirmative, and the objective procedure to be used depends on the scenario.
Scenario 1: a large amount of experimental data is available
The usual procedure in this case employs nonparametric statistical estimation to construct the random parameter distribution from the available data [13], [15], [43].
Suppose we want to estimate the probability distribution of a random variable X, and for that we have N independent samples of X, respectively denoted by X_1, X_2, …, X_N.
Assuming, without loss of generality, that X_1 < X_2 < … < X_N, we consider an estimator for P_X(x) given by

P̂_N(x) = (1/N) Σ_{n=1}^{N} H(x − X_n), (40)

where H is defined as

H(x − X_n) = 1 if x ≥ X_n, and 0 if x < X_n. (41)

This estimator, which is unbiased,

E{P̂_N(x)} = P_X(x), (42)

and mean-square consistent,

lim_{N→+∞} E{ (P̂_N(x) − P_X(x))² } = 0, (43)

is known as the empirical distribution function, or the empirical CDF [13], [15], [43], [44].
If the random variable admits a PDF, it is more common to estimate its probability distribution using a histogram, which is an estimator for p_X(x). To construct such a histogram, the first step is to divide the random variable support into a denumerable number of bins B_m, where

B_m = ((m − 1) h, m h], m ∈ Z, (44)

with h the bin width. Then we count the number of samples in each of the bins B_m, denoting this number by ν_m. After that, we normalize the counter (dividing by N h) to obtain the normalized relative frequency ν_m/(N h). Finally, for each bin B_m, we plot a vertical bar with height ν_m/(N h) [44], [43].
In analytical terms (see [44] and [43]), we can write this PDF estimator as

p̂_N(x) = (1/(N h)) Σ_{m=−∞}^{+∞} ν_m 1_{B_m}(x), (45)

where 1_{B_m}(x) is the indicator function of B_m, defined as

1_{B_m}(x) = 1 if x ∈ B_m, and 0 if x ∉ B_m. (46)
Both estimators above are easily constructed, but they require a large number of samples in order to obtain a reasonable approximation [44], [43]. In practice, these estimators are used when we do not know the random variable distribution. However, to illustrate the use of these tools, let us consider a dataset with N = 100 samples obtained from the (standard) Gaussian random variable X, with zero mean and unit standard deviation. Such samples are illustrated in Figure 11. Considering these samples, we can construct the two estimators shown in Figure 12, with the empirical CDF on the left and a histogram on the right.
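The two estimators can be coded in a few lines. The sketch below implements Eqs. (40) and (45) directly for 100 standard Gaussian samples, with the bin convention B_m = ((m − 1)h, m h] and an arbitrarily chosen bin width h = 0.5:

```python
import math
import random

# Minimal implementations of the estimators of Eqs. (40) and (45):
# the empirical CDF and a normalized histogram, for N = 100 standard
# Gaussian samples, mimicking the setting of Figures 11 and 12.
random.seed(2018)
samples = sorted(random.gauss(0.0, 1.0) for _ in range(100))
N = len(samples)

def empirical_cdf(x):
    # Eq. (40): fraction of samples X_n with X_n <= x
    return sum(1 for s in samples if s <= x) / N

def histogram(h=0.5):
    # Eq. (45): bins B_m = ((m - 1) h, m h], bar heights nu_m / (N h)
    counts = {}
    for s in samples:
        m = math.ceil(s / h)             # index of the bin containing s
        counts[m] = counts.get(m, 0) + 1
    return {m: nu / (N * h) for m, nu in counts.items()}

print(empirical_cdf(0.0))                # roughly 0.5 for a centered sample
print(sum(histogram().values()) * 0.5)   # bar areas sum to 1
```

By construction the bar heights times the bin width sum to one, so the histogram is a proper density estimate.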


Fig. 11 These samples are realizations of a standard Gaussian random variable.

Scenario 2: little or even no experimental data is available



Fig. 12 Estimators for the probability distribution of X: (a) the empirical CDF, and (b) a histogram.

When very little or no experimental data is available, to the best of the author's knowledge, the most conservative approach uses the Maximum Entropy Principle (MEP) [15], [45], [46], [48], with parametric statistical estimation, to construct the random parameter distribution. If no experimental data is available, this approach takes into account only theoretical information, which can be inferred from the model physics and its mathematical structure, to specify the desired distribution.
The MEP can be stated as follows: Among all the (infinitely many) probability distributions consistent with the known information about a random parameter, the most unbiased is the one corresponding to the PDF of maximum entropy.
Using it to specify the distribution of a random variable X presupposes finding the unique PDF which maximizes the entropy (objective function)

S(p_X) = −∫_R p_X(x) ln(p_X(x)) dx, (47)

respecting N + 1 constraints (known information) given by

∫_R g_k(x) p_X(x) dx = µ_k, k = 0, …, N, (48)

where the g_k are known real functions, with g_0(x) = 1, and the µ_k are known real values, with µ_0 = 1. The restriction associated with k = 0 corresponds to the normalization condition of p_X, while the other constraints typically, but not exclusively, represent statistical moments of X.
To solve this problem, the method of Lagrange multipliers is employed, which introduces (N + 1) additional unknown real parameters λ_k (the Lagrange multipliers). We can show that if this optimization problem has a solution, it actually corresponds to a maximum and is unique, being written as

p_X(x) = 1_K(x) exp(−λ_0) exp( −Σ_{k=1}^{N} λ_k g_k(x) ), (49)

where K = Supp p_X here denotes the support of p_X, and 1_K(x) is the indicator function of K.
The Lagrange multipliers, which depend on the µ_k and on K, are identified with the aid of the restrictions defined in Eq. (48), using techniques of parametric statistics.

4.7 Using the Maximum Entropy Principle

In this section we exemplify the use of the MEP to consistently specify the probability distribution of a random variable X.
Suppose that Supp p_X = [a, b] is the only information we know about X. In this case, a consistent (unbiased) probability distribution for X is obtained by solving the following optimization problem:

Maximize

S(p_X) = −∫_R p_X(x) ln(p_X(x)) dx = −∫_{x=a}^{b} p_X(x) ln(p_X(x)) dx,

subject to the constraint

1 = ∫_R p_X(x) dx = ∫_{x=a}^{b} p_X(x) dx.
To solve this optimization problem, first we define the Lagrangian

L(p_X, λ_0) = −∫_{x=a}^{b} p_X(x) ln(p_X(x)) dx − (λ_0 − 1) ( ∫_{x=a}^{b} p_X(x) dx − 1 ), (50)

where λ_0 − 1 is the associated Lagrange multiplier. It is worth mentioning that λ_0 depends on the known information about X, i.e., λ_0 = λ_0(a, b).
Then we impose the necessary conditions for an extremum,

∂L/∂p_X (p_X, λ_0) = 0, and ∂L/∂λ_0 (p_X, λ_0) = 0, (51)

whence we conclude that

p_X(x) = 1_{[a,b]}(x) e^{−λ_0}, and ∫_R p_X(x) dx = 1. (52)

The first equation in Eq. (52) provides the PDF of X in terms of the Lagrange multiplier λ_0, while the second equation corresponds to the known information about this random variable (the normalization condition).
In order to express p_X in terms of the known information (a and b), we need to find the dependence of λ_0 on these parameters. To this end, let us substitute the expression of p_X into the second equation of Eq. (52), so that

∫_R 1_{[a,b]}(x) e^{−λ_0} dx = 1 =⇒ e^{−λ_0} (b − a) = 1 =⇒ e^{−λ_0} = 1/(b − a), (53)

from which we get

p_X(x) = 1_{[a,b]}(x) (1/(b − a)), (54)

which corresponds to the PDF of a uniformly distributed random variable over the interval [a, b].
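This result can be checked numerically. The sketch below compares the entropy of Eq. (47) for the uniform density against an arbitrary competitor with the same support, here a triangular ("tent") density on [0, 1] chosen purely for illustration, confirming that the uniform density has the larger entropy:

```python
import math

# Numerical check that, on [a, b] with only the normalization constraint,
# the uniform density maximizes the entropy S(p) = -∫ p(x) ln p(x) dx.
a, b = 0.0, 1.0
n = 100_000
h = (b - a) / n
nodes = [a + (i + 0.5) * h for i in range(n)]   # midpoint quadrature nodes

def entropy(p):
    return -sum(p(x) * math.log(p(x)) for x in nodes) * h

def uniform_pdf(x):
    return 1.0 / (b - a)

def tent_pdf(x):
    return 2.0 * (1.0 - abs(2.0 * x - 1.0))     # peak 2 at x = 1/2, area 1

print(entropy(uniform_pdf), entropy(tent_pdf))  # uniform entropy is larger
```

On [0, 1] the uniform entropy is exactly 0, while the triangular density gives 1/2 − ln 2 ≈ −0.193, consistent with the MEP.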
Other cases of interest, where the solution of the optimization problem is a known distribution, are shown in Table 1. In the sixth row of this table the maximum entropy PDF corresponds to a gamma distribution. Since any gamma random variable has finite variance, and satisfies E{ln(X)} = q with |q| < +∞, which implies E{K^{-2}} < +∞, the known information in this case is equivalent to the requirements listed in Section 4.5, which must be satisfied by the distribution of K. For this reason, we presented the gamma distribution as the acceptable choice in Section 4.5. It corresponds to the most unbiased choice for that set of information.
For other possible applications of the maximum entropy principle and to
go deeper into the underlying mathematics, we recommend the reader to see
the references [47], [48], [49], [50], [51], [52], [53], [54], and [15].

5 Calculation of Uncertainty Propagation

Once one or more of the model parameters are described as random objects, the system response itself becomes random. Understanding how the variabilities are transformed by the model, and how they influence the response distribution, is a key issue in UQ, known as the uncertainty propagation problem. This problem can only be attacked after the construction of a consistent stochastic model.
Very succinctly, we understand the uncertainty propagation problem as that of determining the probability distribution of the model response once we know the distribution of the model input/parameters. A schematic representation of this problem can be seen in Figure 13.


Fig. 13 Schematic representation of uncertainty propagation problem.

The methods for calculating the propagation of uncertainties are classified into two types: non-intrusive and intrusive.

Non-intrusive methods: These methods of stochastic calculation obtain the random problem response by running an associated deterministic problem multiple times (they are also known as sampling methods). In order to use a non-intrusive method, it is not necessary to implement the stochastic model in a new computer code. If a deterministic code to simulate the deterministic model is available, the stochastic simulation can then be performed by running the deterministic program several times, changing only the parameters that are randomly generated [55].

Table 1 Maximum entropy distributions for given known information.

Support [a, b]; known information: none (normalization only):
p_X(x) = 1_{[a,b]}(x) 1/(b − a)
(uniform on [a, b])

Support [a, b]; known information: E{X} = m_X ∈ [a, b]:
p_X(x) = 1_{[a,b]}(x) exp(−λ_0 − x λ_1),
λ_0 = λ_0(a, b, m_X), λ_1 = λ_1(a, b, m_X)

Support [a, b]; known information: E{X} = m_X ∈ [a, b], E{X²} = m_X² + σ_X²:
p_X(x) = 1_{[a,b]}(x) exp(−λ_0 − x λ_1 − x² λ_2),
λ_i = λ_i(a, b, m_X, σ_X), i = 0, 1, 2

Support [0, 1]; known information: E{ln(X)} = p, |p| < +∞, and E{ln(1 − X)} = q, |q| < +∞:
p_X(x) = 1_{[0,1]}(x) ( Γ(a + b)/(Γ(a) Γ(b)) ) x^{a−1} (1 − x)^{b−1},
a = (m_X/δ_X²)(1/m_X − δ_X² − 1), b = a (1/m_X − 1)
(beta with shape parameters a and b)

Support (0, +∞); known information: E{X} = m_X > 0:
p_X(x) = 1_{(0,+∞)}(x) (1/m_X) exp(−x/m_X)
(exponential with mean m_X)

Support (0, +∞); known information: E{X} = m_X > 0, and E{ln(X)} = q, |q| < +∞:
p_X(x) = 1_{(0,+∞)}(x) (1/m_X) ( (δ_X^{-2})^{δ_X^{-2}} / Γ(δ_X^{-2}) ) (x/m_X)^{δ_X^{-2} − 1} exp( −(x/m_X)/δ_X² )
(gamma with mean m_X and coefficient of variation δ_X)

Support (0, +∞); known information: E{ln(X)} = µ ∈ R, and E{(ln(X) − µ)²} = σ², σ > 0:
p_X(x) = 1_{(0,+∞)}(x) (1/(x σ √(2π))) exp( −(ln(x) − µ)²/(2 σ²) ),
µ = ln( m_X/√(1 + δ_X²) ), σ = √( ln(1 + δ_X²) )
(lognormal with location µ and scale σ)

Support (−∞, +∞); known information: E{X} = m_X ∈ R, E{X²} = m_X² + σ_X²:
p_X(x) = (1/√(2π σ_X²)) exp( −(x − m_X)²/(2 σ_X²) )
(normal with mean m_X and variance σ_X²)

Intrusive methods: In this class of stochastic solvers, the random problem


response is obtained by running a customized computer code only once. This
code is not based on the associated deterministic model, but on a stochastic
version of the computational model [2].

5.1 Monte Carlo Method: A Non-Intrusive Approach

The most frequently used technique to compute the propagation of uncertainties of random parameters through a model is the Monte Carlo (MC) method, originally proposed by [56], or one of its variants [57].
An overview of the MC algorithm can be seen in Figure 14. First, the MC method generates N realizations (samples) of the random parameters according to their joint distribution (stochastic model). Each of these realizations defines a deterministic problem, which is then solved (processing) using a deterministic technique, generating a certain amount of data. Then, these data are combined through statistics to assess the response of the random system [58], [55]. By the nature of the algorithm, we note that MC is a non-intrusive method.


Fig. 14 An overview of Monte Carlo algorithm.

It can be shown that if N is large enough, the MC method describes very well the statistical behaviour of the random system. However, the rate of convergence of this non-intrusive method is very slow: proportional to the inverse of the square root of the number of samples, i.e., ~ 1/√N. Therefore, if the processing time of a single sample is very large, this slow rate of convergence makes MC a very time-consuming method, unfeasible for the simulation of complex models. Meanwhile, the MC algorithm can easily be parallelized, since each realization can be processed separately and the results then aggregated to compute the statistics [55].
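For the spring example, the MC method takes only a few lines of code. The sketch below (with illustrative values for f, m_K and δ_K, and a gamma-distributed stiffness as in Section 4.5) estimates the mean and standard deviation of U from Eq. (33) for increasing N; the estimates stabilize at the ~1/√N rate discussed above:

```python
import math
import random
import statistics

# Monte Carlo propagation for the spring problem of Eq. (33): sample the
# gamma-distributed stiffness K, solve the deterministic problem u = f/k
# for each sample, and estimate statistics of the random response U.
# The values of f, m_K and delta_K are illustrative assumptions.
random.seed(30)
f, m_K, delta_K = 10.0, 2.0, 0.2
shape, scale = 1.0 / delta_K**2, m_K * delta_K**2

def mc_estimate(N):
    us = [f / random.gammavariate(shape, scale) for _ in range(N)]
    mean = statistics.fmean(us)
    std = math.sqrt(statistics.fmean((u - mean) ** 2 for u in us))
    return mean, std

for N in (10**2, 10**3, 10**4, 10**5):
    m_U, s_U = mc_estimate(N)
    print(N, m_U, s_U)   # estimates stabilize at the ~1/sqrt(N) rate
```

Because each sample is processed independently, the loop body of `mc_estimate` is exactly the part that can be distributed across processors in a parallel MC run.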

Because of its simplicity and accuracy, MC is the best method to compute the propagation of uncertainties whenever its use is feasible. Thus, it is recommended that anyone interested in UQ master this technique. Many good references about the MC method are available in the literature. For further details, we recommend [58], [59], [60], [61], [62], [63] and [64].

5.2 Stochastic Galerkin Method: An Intrusive Approach

When the use of the MC method is unfeasible, the state-of-the-art strategy is based on the so-called stochastic Galerkin method. This spectral approach was originally proposed by [65] and [66], and became very popular in the last 15 years, especially after the work of [67]. It uses a Polynomial Chaos Expansion (PCE) to represent the stochastic model response, combined with a Galerkin projection to transform the original stochastic equations into a system of deterministic equations. The resulting unknowns are the coefficients of the linear combination underlying the PCE.
Since PCE theory is quite rich and extensive, we do not have space in this manuscript to cover it in enough detail, but the reader interested in digging deeper into this subject is encouraged to see the references [2], [3], [68], [69], [70] and [8].
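To give a flavor of the intrusive approach, the sketch below applies a stochastic Galerkin projection to the spring problem K U = f of Eq. (33), under the hypothetical stiffness model K(ξ) = m_K(1 + a ξ), with ξ uniform on [−1, 1] and |a| < 1 so that K stays positive. This simple model is not from the original text; it is an assumption chosen so that Legendre polynomials are the natural PCE basis. The Galerkin projection yields a single deterministic linear system for the expansion coefficients u_j:

```python
import math

# Intrusive stochastic Galerkin sketch for K U = f with the hypothetical
# model K(xi) = m_K (1 + a xi), xi uniform on [-1, 1]. The response is
# expanded as U(xi) = sum_j u_j P_j(xi) in Legendre polynomials, and the
# projection <K U P_i> = <f P_i> gives one deterministic linear system.
f, m_K, a, P = 1.0, 2.0, 0.5, 12

# Under the uniform measure on [-1, 1]: <P_i P_j> = delta_ij / (2i + 1),
# and xi P_j = ((j+1) P_{j+1} + j P_{j-1}) / (2j + 1) (Legendre recurrence),
# so the Galerkin matrix A_ij = m_K <(1 + a xi) P_i P_j> is tridiagonal.
A = [[0.0] * (P + 1) for _ in range(P + 1)]
for i in range(P + 1):
    A[i][i] = m_K / (2 * i + 1)
    if i + 1 <= P:
        A[i][i + 1] = m_K * a * (i + 1) / ((2 * i + 3) * (2 * i + 1))
    if i - 1 >= 0:
        A[i][i - 1] = m_K * a * i / ((2 * i - 1) * (2 * i + 1))
b = [f] + [0.0] * P          # <f P_i> = f for i = 0, zero otherwise

# Solve A u = b by Gaussian elimination with partial pivoting.
for col in range(P + 1):
    piv = max(range(col, P + 1), key=lambda r: abs(A[r][col]))
    A[col], A[piv] = A[piv], A[col]
    b[col], b[piv] = b[piv], b[col]
    for r in range(col + 1, P + 1):
        fac = A[r][col] / A[col][col]
        for c in range(col, P + 1):
            A[r][c] -= fac * A[col][c]
        b[r] -= fac * b[col]
u = [0.0] * (P + 1)
for r in range(P, -1, -1):
    u[r] = (b[r] - sum(A[r][c] * u[c] for c in range(r + 1, P + 1))) / A[r][r]

mean_U = u[0]                # E{U} = u_0, since E{P_j} = 0 for j >= 1
exact = f * math.log((1 + a) / (1 - a)) / (2 * a * m_K)
print(mean_U, exact)         # agree closely for moderate order P
```

Note that, in contrast with MC, the stochastic problem is solved only once: the whole randomness is carried by the coefficients u_j, and E{U} = u_0 can be compared against the closed-form mean f ln((1 + a)/(1 − a))/(2 a m_K) obtained by direct integration.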

6 Concluding Remarks

In this manuscript, we have argued for the importance of modeling and quantification of uncertainties in engineering projects, advocating in favor of the probabilistic approach as a tool to take the uncertainties into account. It is our view that specifying an envelope of reliability for curves obtained from numerical simulations is an irreversible tendency. We also introduced the basic probabilistic vocabulary to prepare the reader for deeper literature on this subject, and discussed the key points of the stochastic modeling of physical systems, using a simplistic mechanical system as a more in-depth example.

Acknowledgements

The author's research is supported by the Brazilian agencies CNPq (National Council for Scientific and Technological Development), CAPES (Coordination for the Improvement of Higher Education Personnel) and FAPERJ (Research Support Foundation of the State of Rio de Janeiro).

References

1. L. Biegler, G. Biros, O. Ghattas, M. Heinkenschloss, D. Keyes, B. Mallick, Y. Marzouk, L. Tenorio, B. van Bloemen Waanders, and K. Willcox, Large-Scale Inverse Problems and Quantification of Uncertainty. Wiley, 2010.
2. O. P. Le Maître and O. M. Knio, Spectral Methods for Uncertainty Quantification: With Applications to Computational Fluid Dynamics. Springer, 2010.
3. D. Xiu, Numerical Methods for Stochastic Computations: A Spectral Method Ap-
proach. Princeton University Press, 2010.
4. C. Soize, Stochastic Models of Uncertainties in Computational Mechanics. American
Society of Civil Engineers, 2012.
5. M. Grigoriu, Stochastic Systems: Uncertainty Quantification and Propagation.
Springer, 2012.
6. R. C. Smith, Uncertainty Quantification: Theory, Implementation, and Applications.
SIAM, 2013.
7. H. Bijl, D. Lucor, S. Mishra, and C. Schwab, Uncertainty Quantification in Compu-
tational Fluid Dynamics. Springer, 2013.
8. M. P. Pettersson, G. Iaccarino, and J. Nordström, Polynomial Chaos Methods for
Hyperbolic Partial Differential Equations: Numerical Techniques for Fluid Dynamics
Problems in the Presence of Uncertainties. Springer, 2015.
9. R. Ohayon and C. Soize, Advanced Computational Vibroacoustics: Reduced-Order
Models and Uncertainty Quantification. Cambridge University Press, 2015.
10. T. J. Sullivan, Introduction to Uncertainty Quantification. Springer, 2015.
11. S. Sarkar and J. A. S. Witteveen, Uncertainty Quantification in Computational Sci-
ence. World Scientific Publishing Company, 2016.
12. R. Ghanem, D. Higdon, and H. Owhadi, Handbook of Uncertainty Quantification.
Springer, 2017.
13. C. Soize, Uncertainties and Stochastic Modeling. Short Course at PUC-Rio, August
2008.
14. C. Soize, Stochastic Models in Computational Mechanics. Short Course at PUC-Rio,
August 2010.
15. C. Soize, Probabilité et Modélisation des Incertitudes: Eléments de base et concepts
fondamentaux. Course Notes, Université Paris-Est Marne-la-Vallée, Paris, September
2013.
16. G. Iaccarino, A. Doostan, M. S. Eldred, and O. Ghattas, Introduction to Uncertainty
Quantification Techniques. Minitutorial at SIAM CSE Conference, 2009.
17. G. Iaccarino, Introduction to Uncertainty Quantification. Lecture at KAUST, 2012.
18. A. Doostan and P. Constantine, Numerical Methods for Uncertainty Propagation.
Short Course at USNCCM13, 2015.
19. C. Soize, A comprehensive overview of a non-parametric probabilistic approach of
model uncertainties for predictive models in structural dynamics. Journal of Sound
and Vibration, 288 (2005), 623–652.
20. Guide for the verification and validation of computational fluid dynamics simulations.
Technical Report AIAA G-077-1998, American Institute of Aeronautics and Astronau-
tics, Reston, 1998.
21. W. L. Oberkampf and T. G. Trucano, Verification and Validation in Computational
Fluid Dynamics. Technical Report SAND 2002-0529, Sandia National Laboratories,
Livermore, 2002.

22. W. Oberkampf, T. Trucano, and C. Hirsch, Verification, validation, and predictive


capability in computational engineering and physics. Applied Mechanics Reviews, 57
(2004), 345–384.
23. ASME Guide for Verification and Validation in Computational Solid Mechanics. Tech-
nical Report ASME Standard V&V 10-2006, American Society of Mechanical Engi-
neers, New York, 2006.
24. W. L. Oberkampf and C. J. Roy, Verification and Validation in Scientific Computing.
Cambridge University Press, 2010.
25. U. M. Ascher and C. Greif, A First Course in Numerical Methods. SIAM, 2011.
26. P. J. Roache, Code verification by the method of manufactured solutions. Journal of
Fluids Engineering, 124 (2001), 4–10.
27. C. J. Roy, Review of code and solution verification procedures for computational
simulation. Journal of Computational Physics, 205 (2005), 131–156.
28. L. A. Petri, P. Sartori, J. K. Rogenski, and L. F. de Souza, Verification and validation
of a direct numerical simulation code. Computer Methods in Applied Mechanics and
Engineering, 291 (2015), 266–279.
29. G. I. Schuëller, A state-of-the-art report on computational stochastic mechanics. Prob-
abilistic Engineering Mechanics, 12 (1997), 197–321.
30. G. I. Schuëller, Computational stochastic mechanics recent advances. Computers &
Structures, 79 (2001), 2225–2234.
31. C. Soize, Stochastic modeling of uncertainties in computational structural dynamics -
Recent theoretical advances. Journal of Sound and Vibration, 332 (2013), 2379-2395.
32. D. Moens and D. Vandepitte, A survey of non-probabilistic uncertainty treatment in
finite element analysis. Computer Methods in Applied Mechanics and Engineering,
194 (2005) 1527–1555.
33. D. Moens and M. Hanss, Non-probabilistic finite element analysis for parametric
uncertainty treatment in applied mechanics: Recent advances. Finite Elements in
Analysis and Design, 47 (2011), 4–16.
34. M. Beer, S. Ferson, and V. Kreinovich, Imprecise probabilities in engineering analyses.
Mechanical Systems and Signal Processing, 37 (2013), 4–29.
35. C. Soize, A nonparametric model of random uncertainties for reduced matrix models
in structural dynamics. Probabilistic Engineering Mechanics, 15 (2000), 277–294.
36. C. Soize, Generalized probabilistic approach of uncertainties in computational dy-
namics using random matrices and polynomial chaos decompositions. International
Journal for Numerical Methods in Engineering, 81 (2010) 939–970.
37. A. Batou, C. Soize, and M. Corus, Experimental identification of an uncertain compu-
tational dynamical model representing a family of structures. Computers & Structures,
89 (2011), 1440–1448.
38. G. Grimmett and D. Welsh, Probability: An Introduction. Oxford University Press,
2nd edition, 2014.
39. J. Jacod and P. Protter, Probability Essentials. Springer, 2nd edition, 2004.
40. A. Klenke, Probability Theory: A Comprehensive Course. Springer, 2nd edition, 2014.
41. A. Papoulis and S. U. Pillai, Probability, Random Variables and Stochastic Processes.
McGraw-Hill, 4th edition, 2002.
42. C.E. Shannon, A mathematical theory of communication. Bell System Technical
Journal, 27 (1948), 379–423.
43. L. Wasserman, All of Nonparametric Statistics. Springer, 2007.
44. L. Wasserman, All of Statistics: A Concise Course in Statistical Inference. Springer,
2004.
45. E. T. Jaynes, Information theory and statistical mechanics. Physical Review Series
II, 106 (1957), 620–630.
46. E. T. Jaynes, Information theory and statistical mechanics II. Physical Review Series
II, 108 (1957), 171–190.
47. J. N. Kapur and H. K. Kesavan, Entropy Optimization Principles with Applications.
Academic Press, 1992.

48. J. N. Kapur, Maximum Entropy Models in Science and Engineering. New Age, 2009.
49. F.E. Udwadia, Response of uncertain dynamic systems. I. Applied Mathematics and
Computation, 22 (1987), 115–150.
50. F.E. Udwadia, Response of uncertain dynamic systems. II. Applied Mathematics and
Computation, 22 (1987), 151–187.
51. F.E. Udwadia, Some results on maximum entropy distributions for parameters known
to lie in finite intervals. SIAM Review, 31 (1989), 103–109.
52. K. Sobczyk and J. Trębicki, Maximum entropy principle in stochastic dynamics. Probabilistic Engineering Mechanics, 5 (1990), 102–110.
53. K. Sobczyk and J. Trębicki, Maximum entropy principle and nonlinear stochastic oscillators. Physica A: Statistical Mechanics and its Applications, 193 (1993), 448–468.
54. J. Trębicki and K. Sobczyk, Maximum entropy principle and non-stationary distributions of stochastic systems. Probabilistic Engineering Mechanics, 11 (1996), 169–178.
55. A. Cunha Jr, R. Nasser, R. Sampaio, H. Lopes, and K. Breitman, Uncertainty quantifi-
cation through Monte Carlo method in a cloud computing setting. Computer Physics
Communications, 185 (2014), 1355-1363.
56. N. Metropolis and S. Ulam, The Monte Carlo Method. Journal of the American
Statistical Association, 44 (1949), 335–341.
57. C. Lemieux, Monte Carlo and Quasi-Monte Carlo Sampling. Springer, 2009.
58. D. P. Kroese, T. Taimre, and Z. I. Botev, Handbook of Monte Carlo Methods. Wiley,
2011.
59. J. S. Liu, Monte Carlo Strategies in Scientific Computing. Springer, 2001.
60. G. Fishman, Monte Carlo: Concepts, Algorithms, and Applications. Springer, cor-
rected edition, 2003.
61. R. Y. Rubinstein and D. P. Kroese, Simulation and the Monte Carlo Method. Wiley,
2nd edition, 2007.
62. S. Asmussen and P. W. Glynn, Stochastic Simulation: Algorithms and Analysis.
Springer, 2007.
63. R. W. Shonkwiler and F. Mendivil, Explorations in Monte Carlo Methods. Springer,
2009.
64. C. P. Robert and G. Casella, Monte Carlo Statistical Methods. Springer, 2010.
65. R. Ghanem and P. D. Spanos, Polynomial chaos in stochastic finite elements. Journal
of Applied Mechanics, 57 (1990) 197–202.
66. R. Ghanem and P. D. Spanos, Stochastic Finite Elements: A Spectral Approach.
Dover Publications, 2nd edition, 2003.
67. D. Xiu and G. E. Karniadakis, The Wiener-Askey Polynomial Chaos for stochastic
differential equations. SIAM Journal on Scientific Computing, 24 (2002) 619–644.
68. P. Vos, Time-dependent polynomial chaos. Master Thesis, Delft University of Tech-
nology, Delft, 2006.
69. P. Constantine, A Primer on Stochastic Galerkin Methods. Lecture Notes, 2007.
70. A. O’Hagan, Polynomial Chaos: A tutorial and critique from a statistician’s perspec-
tive. (submitted to publication), 2013.
