Methods and Settings For Sensitivity Analysis
    Σi Si = 1,    (1.47)

1.2.16 Further Methods

directly from those curves, so that an efficient way to estimate Si is to use state-space regression on the scatterplots and then take the variances of these.
In general, for a model of unknown linearity,
monotonicity
and additivity, variance-based measures constitute a good
means of tackling settings such as factor fixing and factor
prioritization. We shall discuss one further setting before the
end of this chapter, but let us first consider whether there are
alternatives to the use of variance-based methods for the
settings so far described.
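By way of illustration, the scatterplot route to Si sketched above can be coded in a few lines: smooth the (Xi, Y) scatterplot with bin-wise conditional means (a crude regression) and take the variance of the smoothed curve. The test model Y = X1 + 0.5 X2 with uniform inputs is our own choice, picked because the analytic values S1 = 0.8 and S2 = 0.2 are known.

```python
import numpy as np

rng = np.random.default_rng(42)

def first_order_index(x, y, n_bins=20):
    """Estimate S_i = V(E(Y|X_i)) / V(Y) by smoothing the (x, y)
    scatterplot with bin-wise means and taking the variance of
    the smoothed curve."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
    # mean of Y within each bin approximates E(Y | X_i)
    cond_means = np.array([y[idx == b].mean() for b in range(n_bins)])
    counts = np.array([(idx == b).sum() for b in range(n_bins)])
    # variance of the conditional means, weighted by bin occupancy
    v_cond = np.average((cond_means - y.mean()) ** 2, weights=counts)
    return v_cond / y.var()

# illustrative additive test model: Y = X1 + 0.5*X2, Xi ~ U(0, 1)
n = 100_000
x1, x2 = rng.uniform(size=n), rng.uniform(size=n)
y = x1 + 0.5 * x2

s1 = first_order_index(x1, y)   # analytic value: (1/12)/(1.25/12) = 0.8
s2 = first_order_index(x2, y)   # analytic value: 0.2
print(s1, s2)
```

With quantile-based bins each bin is equally populated, so the smoothed curve is a stable estimate of E(Y | Xi) even for skewed inputs.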
1.2.17 The Elementary Effects Test

    μi* = (1/r) Σ_{j=1}^{r} |EEi^(j)|    (1.49)

2. It is numerically efficient;
3. It is very good for factor fixing: it is indeed a good proxy for STi.
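Equation (1.49) can be turned into code directly. The sketch below uses a simplified one-factor-at-a-time design (r independent base points, one step per factor) rather than the more economical trajectory designs presented in later chapters; the linear test model is our own choice, for which the elementary effects are known exactly.

```python
import numpy as np

rng = np.random.default_rng(7)

def mu_star(model, k, r=50, delta=0.1):
    """Estimate mu*_i = (1/r) * sum_j |EE_i^(j)|  (Eq. 1.49)
    with a simple one-at-a-time design: for each of r base points,
    perturb each factor in turn by delta and record the effect."""
    mu = np.zeros(k)
    for _ in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=k)   # base point in [0, 1]^k
        y0 = model(x)
        for i in range(k):
            x_step = x.copy()
            x_step[i] += delta
            ee = (model(x_step) - y0) / delta        # elementary effect of X_i
            mu[i] += abs(ee)
    return mu / r

# illustrative model: factor 1 strong, factor 2 weak, factor 3 inactive
model = lambda x: 4.0 * x[0] + 1.0 * x[1] + 0.0 * x[2]
mu = mu_star(model, k=3)
print(mu)   # roughly [4, 1, 0] for this linear model
```

Because the test model is linear, each elementary effect equals the corresponding coefficient, so μ* recovers (4, 1, 0) exactly up to rounding.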
1.2.18 Monte Carlo Filtering

elements denoted (Xi | B) and (Xi | B̄), of n and N − n elements respectively, out of the total of N simulations (Figure 1.11). A statistical test can be performed for each factor independently, analysing the maximum distance between the cumulative distributions of the (Xi | B) and (Xi | B̄) sets (Figure 1.12).
[Figure 1.12: Smirnov test for factor Xi. Empirical cumulative distributions Fn(Xi | B) and Fm(Xi | B̄) are plotted together with the prior F(Xi); the statistic is the maximum vertical distance d between the two curves.]
18 The Smirnov two-sample test (two-sided version) is used in Figure 1.12 (see Saltelli et al., 2004, pp. 38-39).
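A minimal sketch of this filtering-plus-Smirnov procedure, using SciPy's two-sample test; the model and the behavioural criterion (Y < 0.8) are invented here for illustration.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# hypothetical model: Y depends strongly on X1, weakly on X2
N = 10_000
X = rng.uniform(size=(N, 2))
Y = X[:, 0] + 0.1 * X[:, 1]
behavioural = Y < 0.8            # defines the set B; its complement is B-bar

stats = {}
for i in range(2):
    xb = X[behavioural, i]       # (X_i | B)
    xnb = X[~behavioural, i]     # (X_i | B-bar)
    d, p = ks_2samp(xb, xnb)     # Smirnov statistic d and its p-value
    stats[i] = (d, p)
    print(f"X{i + 1}: d = {d:.3f}, p = {p:.2g}")
```

The influential factor X1 yields a large distance d and a vanishing p-value, while the nearly inert X2 shows similar distributions in the two sets.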
INTRODUCTION TO SENSITIVITY ANALYSIS

1.4 POSSIBLE PITFALLS FOR A SENSITIVITY ANALYSIS
ranking to believe or privilege. Another potential danger is to present sensitivity measures for too many output variables Y. Although exploring the sensitivity of several model outputs is sound practice for testing the quality of the model, it is better, when presenting the results of the sensitivity analysis, to focus on the key inference suggested by the model, rather than to confuse the reader with arrays of sensitivity indices relating
to intermediate output variables. Piecewise sensitivity analysis,
such as when investigating one model compartment at a time,
can lead to type II errors if interactions among factors of
different compartments are neglected. It is also worth noting
that, once a model-based analysis has been produced, most
modellers will not willingly submit it to a revision via sensitivity
analysis by a third party.
This anticipation of criticism by sensitivity analysis is also one
of the 10 commandments of applied econometrics according to
Peter Kennedy:
Thou shall confess in the presence of sensitivity. Corollary: Thou shall anticipate criticism [...] When reporting a sensitivity analysis, researchers should explain fully their specification search so that the readers can judge for themselves how the results may have been affected. This is basically an 'honesty is the best policy' approach, advocated by Leamer (1978, p. vi). (Kennedy, 2007)
1.5 CONCLUDING REMARKS
The reader will find in this and the following chapters didactic
examples for the purpose of familiarization with sensitivity
measures. Most of the exercises will be based on models whose
output (and possibly the associated sensitivity measures) can be
computed analytically. In most practical instances the model under
analysis or development will be a computational one, without a
closed analytic formula.
Typically, models will involve differential equations or optimization algorithms involving numerical solutions. For this reason the best available practices for numerical computations will be presented in the following chapters. For the Elementary Effects Test, we shall offer numerical procedures developed by Campolongo et al. (1999b, 2000, 2007). For the variance-based measures we shall present the Monte Carlo based design developed by Saltelli (2002) as well as the Random Balance Designs based on the Fourier Amplitude Sensitivity Test (FAST-RBD, Tarantola et al., 2006, see Chapter 4). All these methods are based on true points in the space of the input factors, i.e. on actual computations of the model at these points. An important and powerful class of methods will be presented in Chapter 5; such techniques are based on metamodelling, e.g. on estimates of the model at untried points. Metamodelling allows for a great reduction
in the cost of the analysis and becomes in fact the only option
when the model is expensive to run, e.g. when a single simulation
of the model takes tens of minutes or hours or more. The
drawback is that metamodelling tools such as those developed
by Ratto et al. (2007) are less straightforward to encode than
plain Monte Carlo. Where possible, pointers will be given to
available software.
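As a preview of the Monte Carlo designs just mentioned, here is a sketch in the spirit of the Saltelli (2002) approach: two independent sample matrices A and B plus 'hybrid' matrices ABi that take column i from B. The particular estimators below (a standard Sobol'-type estimator for Si, Jansen's estimator for STi) and the test model are our own choices for illustration, not necessarily the exact formulas of later chapters.

```python
import numpy as np

rng = np.random.default_rng(3)

def sobol_indices(model, k, N=100_000):
    """Monte Carlo estimates of first-order (S_i) and total (S_Ti)
    indices from an A/B/AB_i sampling design."""
    A = rng.uniform(size=(N, k))
    B = rng.uniform(size=(N, k))
    yA, yB = model(A), model(B)
    V = np.var(np.concatenate([yA, yB]))         # total output variance
    S, ST = np.zeros(k), np.zeros(k)
    for i in range(k):
        AB = A.copy()
        AB[:, i] = B[:, i]                       # column i taken from B
        yAB = model(AB)
        S[i] = np.mean(yB * (yAB - yA)) / V      # Sobol'-type estimator of V_i / V
        ST[i] = 0.5 * np.mean((yA - yAB) ** 2) / V   # Jansen estimator of S_Ti
    return S, ST

# illustrative additive model: Y = X1 + 0.5*X2 -> S = ST = (0.8, 0.2)
model = lambda X: X[:, 0] + 0.5 * X[:, 1]
S, ST = sobol_indices(model, k=2)
print(S.round(2), ST.round(2))
```

The design costs N(k + 2) model runs for both sets of indices; for an additive model Si and STi coincide, which makes the toy case a convenient self-check.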
1.6 EXERCISES
1. Prove that V(Y) = E(Y²) − E²(Y).

3. Prove that, for an additive model of two variables X1 and X2, fixing one variable can only decrease the variance of the model.

4. Why in μ* are absolute differences used rather than simple differences?

If the variance of Y, as it results from an uncertainty analysis, is too large, and the objective is to reduce it, sensitivity analysis can be used to suggest how many and which factors should be better determined. Is this a new setting? Would you be inclined to fix factors with a larger first-order term or rather those with a larger total effect term?
1.7 ANSWERS
1. Given a function Y = f(X), where X = (X1, X2, ..., Xk) and X ~ p(X), where p(X) is the joint distribution of X with ∫ p(X) dX = 1, the function mean can be defined as

    E(Y) = ∫ f(X) p(X) dX,

and its variance as

    V(Y) = ∫ (f(X) − E(Y))² p(X) dX
         = ∫ f²(X) p(X) dX + E²(Y) − 2 E(Y) ∫ f(X) p(X) dX
         = E(Y²) + E²(Y) − 2 E²(Y)
         = E(Y²) − E²(Y).
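The identity just derived can also be checked numerically; the particular distribution sampled below is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(2.0, 3.0, size=1_000_000)   # arbitrary test distribution

lhs = np.mean((y - y.mean()) ** 2)         # V(Y) from the definition
rhs = np.mean(y ** 2) - y.mean() ** 2      # E(Y^2) - E^2(Y)
print(lhs, rhs)                            # agree up to floating-point error
```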
    E(X1) = E(X2) = ∫_{x=0}^{1} p(x) x dx = 1/2.

Further:

    Var(X1) = Var(X2) = ∫_{x=0}^{1} p(x) (x − 1/2)² dx
            = ∫_{x=0}^{1} (x² − x + 1/4) dx
            = [x³/3 − x²/2 + x/4] from 0 to 1
            = 1/3 − 1/2 + 1/4 = 1/12.

    E(X1 + X2) = E(X1) + E(X2) = 1,

as the variables are separable in the integral. Given that X1 + X2 is an additive model (see Exercise 1) it is also true that

    Var(X1 + X2) = Var(X1) + Var(X2) = 1/6.

The same result is obtained integrating explicitly

    Var(X1 + X2) = ∫_{x1=0}^{1} ∫_{x2=0}^{1} p(x) (x1 + x2 − 1)² dx1 dx2.
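A quick numerical check of these moments, with sampling in place of the integrals:

```python
import numpy as np

rng = np.random.default_rng(5)
x1 = rng.uniform(size=1_000_000)
x2 = rng.uniform(size=1_000_000)

m = (x1 + x2).mean()     # should approach E(X1 + X2) = 1
v1 = x1.var()            # should approach Var(X1) = 1/12
v12 = (x1 + x2).var()    # should approach Var(X1 + X2) = 1/6
print(m, v1, v12)
```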
6. Note that the model (1.3, 1.4) is linear and additive. Further, Z1 ~ N(0, σZ1), or equivalently

    p(Z1) = (1 / (σZ1 √(2π))) exp(−Z1² / (2 σZ1²)).

We write

    E(Y) = Ω1 ∫_{−∞}^{+∞} Z1 p(Z1) dZ1 + Ω2 ∫_{−∞}^{+∞} Z2 p(Z2) dZ2 = 0,

since x exp(−x²/2) has primitive −exp(−x²/2), which vanishes at both integration limits, and

    E(Zi²) = (1 / (σZi √(2π))) ∫_{−∞}^{+∞} zi² exp(−zi² / (2 σZi²)) dzi.

Using

    ∫_{0}^{+∞} t² e^{−at²} dt = (1/4) √(π/a³),

an easy transformation gives

    E(Zi²) = σZi².

Because the model is additive,

    E(Y | Zi = zi) = ∫_{−∞}^{+∞} p(zj) (Ωi zi + Ωj zj) dzj = Ωi zi,

so that

    VZi = V(E(Y | Zi)) = Ωi² σZi²

and

    V(Y) = Ω1² σZ1² + Ω2² σZ2²,

and

    SZi = Ωi² σZi² / V(Y).
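The closed-form indices SZi = Ωi² σZi² / V(Y) can be checked by sampling; the weights and standard deviations below are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(11)

# Y = Omega1*Z1 + Omega2*Z2, Zi ~ N(0, sigma_i); illustrative values
om = np.array([2.0, 1.0])
sig = np.array([1.0, 3.0])

z = rng.normal(0.0, sig, size=(1_000_000, 2))   # scale broadcast per column
y = z @ om

# analytic first-order indices: Omega_i^2 sigma_i^2 / V(Y) = 4/13, 9/13
S_analytic = (om * sig) ** 2 / np.sum((om * sig) ** 2)
# additive model, so V_i = Var(Omega_i * Z_i)
S_numeric = np.array([np.var(om[i] * z[:, i]) for i in range(2)]) / y.var()
print(S_analytic, S_numeric)
```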
X1, X2 ~ N(0, σ). Based on the previous exercise it is easy to see that

    E(Y) = E(X1 X2) = E(X1) E(X2) = 0,

so that

    V(Y) = E(X1² X2²) = E(X1²) E(X2²) = σ⁴.

Given that

    E(Y | X2 = x2*) = E(X1 x2*) = x2* E(X1) = 0,

it must be that V(E(Y | X2)) = 0, and the same holds for X1; i.e. the first-order sensitivity index is null for both X1 and X2. One gets instead

    V(Y | X2 = x2*) = V(X1 x2*) = (x2*)² V(X1) = (x2*)² σ²,

so that

    E(V(Y | X2)) = σ² E(X2²) = σ⁴ = V(Y).
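The null first-order indices can be seen numerically with the same bin-smoothing idea used earlier; σ = 1 here is an arbitrary choice, so that V(Y) = σ⁴ = 1.

```python
import numpy as np

rng = np.random.default_rng(2)
x1 = rng.normal(size=1_000_000)
x2 = rng.normal(size=1_000_000)
y = x1 * x2                       # product model with sigma = 1

# first-order index of X1 via binned conditional means: V(E(Y|X1)) / V(Y)
n_bins = 20
edges = np.quantile(x1, np.linspace(0.0, 1.0, n_bins + 1))
idx = np.clip(np.searchsorted(edges, x1, side="right") - 1, 0, n_bins - 1)
cond_means = np.array([y[idx == b].mean() for b in range(n_bins)])
S1 = cond_means.var() / y.var()

# E(Y | X1) = X1 * E(X2) = 0 everywhere, so S1 is (numerically) zero,
# while V(Y) itself is close to sigma^4 = 1
print(S1, y.var())
```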
These results are illustrated in the two figures which follow. Figure 1.13 shows a plot of VX1(Y | X2 = x2*) = (x2*)² σ², which exceeds V(Y) for |x2*| > σ. Figure 1.14 shows a scatterplot of Y versus X1 (the same shape would appear for X2). It is clear from the plot that, whatever the value of