Basics of Probability and Statistics
Logical arguments derive from the process of evaluating whether or not a claim is
believed (or known) to be true. These arguments can be characterized as either deductive
or inductive. Deductive arguments arrive at conclusions that are guaranteed to be true
if the premises are true. Inductive arguments support conclusions that are likely or
probable based on the supporting evidence. In practice, actual truth is a challenging
matter to assess. How do you know for certain that other countries in the world exist?
Have you visited them? Is the map correct? Is the teacher correct?
Deductive arguments are valid when the conclusions necessarily follow if the premises
are assumed to be true. A valid deductive argument does not require the premises to
actually be true. This is a potential source of disagreement among rational people
because actual truth is independent of validity and difficult to prove. Given a premise
that all dams reduce flood risk and that John Rapids is a dam, it necessarily follows that
John Rapids Dam must reduce flood risk. This is a valid deductive argument even
though we know the premise that all dams reduce flood risk is not true. Navigation dams
or hydropower dams may not be designed to reduce flood risk. A deductive argument is
sound if and only if it is both valid and all of its premises are actually true. In the
previous example, the argument that John Rapids Dam reduces flood risk is not sound
because we know the premise is false. It is important to understand that both invalid and
valid but unsound arguments can still have true conclusions. John Rapids Dam might
reduce flood risk even though the argument is unsound. The conclusion of a deductive
argument should not be automatically rejected based on flaws in its validity or soundness.
Because induction is not an exact science and evidence is often limited, errors in
reasoning can occur and it is possible to reach wrong conclusions. Recognizing and
dealing with issues that can lead to such errors (e.g. overestimating the strength of
evidence, overconfidence in expert judgment, groupthink, misplacing the burden of proof,
and many others) can strengthen inductive arguments. The strength of inductive
conclusions must be evaluated against alternative conclusions.
Decision makers often use objective standards to compare the alternatives and
assess whether a particular conclusion is preferred over another.
In practice, both deductive and inductive arguments are necessary for a credible
systematic approach to risk analysis. Deduction can provide absolute proof for a
conclusion, but the premises can rarely be tested and verified to be actually true.
Induction is driven by the available evidence, but proof of a theory cannot be obtained.
Risk analysis requires a careful synthesis of these two logical approaches.
Set Theory
Many of the characteristics of a risk analysis problem can be defined and modeled using
sets. Set theory is a branch of mathematics that deals with the properties and
relationships of collections of elements or events. Risk analysis relies on set theory to
provide a logical framework for the analysis of events and the relationships between
events.
A set is a well-defined collection of unique elements or events. The sample space
includes all possible outcomes of a random trial or experiment. For example, the
sample space for a single coin toss might be represented by a set containing two possible
events {head, tail}. The complement of an event A, expressed as Ā, includes all of the
events that are not A. For the coin toss, the complement of {head} would include all
other outcomes that are not {head} which in this case would simply be {tail}. Events are
mutually exclusive when they cannot both occur during the same random trial or
experiment. The events {head} and {tail} are mutually exclusive because a single coin
toss cannot result in both a head and a tail. Events are collectively exhaustive when at
least one of the events must occur during a random trial or experiment. Assuming the
coin does not land on its side, the collectively exhaustive events for a coin toss would be
{head, tail}. The union of two (or more) events, denoted by A ∪ B, is the set that
includes all outcomes that are either A or B or both. For the coin toss example, the union
of all possible outcomes would again be the set {head, tail}. The intersection of two (or
more) events, denoted by A ∩ B or simply AB is the set of all outcomes that include both
A and B. For the coin toss example, the intersection of the possible outcomes would be
the null set ∅, since the outcomes are mutually exclusive.
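As a minimal sketch, these operations map directly onto Python's built-in set type (the coin-toss labels follow the example above):

```python
# Sketch of the coin-toss sample space using Python's built-in set type.
sample_space = {"head", "tail"}
A = {"head"}

complement_A = sample_space - A   # Ā = {"tail"}
union = A | complement_A          # A ∪ Ā = {"head", "tail"}
intersection = A & complement_A   # A ∩ Ā = ∅ (mutually exclusive events)

print(complement_A, union, intersection)  # {'tail'} {'head', 'tail'} set()
```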
Consider a system with two potential failure modes, PFM1 and PFM2. Where both potential failure
modes are plausible, the normal state of the system could be described by the intersection
event $\overline{PFM_1} \cap \overline{PFM_2}$ (neither failure mode occurs). Each of the intersection events
$PFM_1 \cap \overline{PFM_2}$, $\overline{PFM_1} \cap PFM_2$, and $PFM_1 \cap PFM_2$ describes a potential failure state
for the system. The occurrence of both failure modes (event $PFM_1 \cap PFM_2$) is
unlikely in most cases.
The basic concepts of set theory can be illustrated using Venn diagrams. A sample space
is represented by a rectangle. Events and their relationships are normally depicted on the
Venn diagram by overlapping circles or other closed shapes within the sample space.
The Venn diagrams in Figure I-1-1 summarize some of the basic set theory concepts and
operations. Venn diagrams can be developed for risk analysis to obtain a better depiction
and understanding of the relationship between events to support constructing event trees,
estimating probabilities, or combining and portraying risks. For example, the
relationships between multiple potential failure modes can be illustrated using Venn
diagrams and evaluated using set theory.
Figure I-1-1. Venn Diagrams of Basic Set Theory Concepts (sample space S, event A, complement Ā, union A ∪ B, and intersection A ∩ B)
Combinatorics
Combinatorics is a branch of mathematics that includes the study of the enumeration,
combination, and permutation of set elements. Risk analysis can utilize combinatorics to
identify relevant outcomes from a set of possible events.
Permutation with Repetition
If each event outcome can be realized more than once and the order of the events matters,
then the number of permutations is nᵏ where n is the number of event outcomes available
to choose from and k is the number of events that occur. For example, there are 8
permutations for k=3 potential failure modes (perhaps internal erosion, overtopping, and
structural collapse of a spillway gate) and n=2 possible outcomes for each potential
failure mode (breach or no breach). The eight permutations for this example are
summarized in Table I-1-1. A key decision for the risk analyst at this stage would be to
decide whether or not scenarios with multiple breach mechanisms should be considered
in the risk analysis based on the plausibility of these scenarios and their potential effect
on the estimated risk. Additional information on the treatment of multiple breach
mechanism scenarios is covered in Chapter I-5 – Event Trees.
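As a rough sketch, this enumeration can be reproduced with the Python standard library (the failure-mode and outcome labels below are illustrative):

```python
from itertools import product

# n = 2 outcomes per potential failure mode, k = 3 potential failure modes,
# so n**k = 8 permutations with repetition.
outcomes = ["breach", "no breach"]
failure_modes = ["internal erosion", "overtopping", "gate collapse"]

permutations = list(product(outcomes, repeat=len(failure_modes)))
assert len(permutations) == len(outcomes) ** len(failure_modes)  # 2**3 = 8

for perm in permutations:
    # Pair each potential failure mode with its outcome in this scenario.
    print(dict(zip(failure_modes, perm)))
```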
Permutation without Repetition
If each event outcome can be realized only once and the order of the events matters, then
the number of permutations is n!/(n−k)! where n is the number of event outcomes
available to choose from and k is the number of events that occur. For example, there are
6 permutations for k=3 potential failure modes chosen from n=3 possible outcomes when
multiple breach mechanisms are plausible and the order in which the potential failure
modes initiate is important. The permutations for this example are summarized in Table
I-1-2. A particular potential failure mode occurring first (say internal erosion or
overtopping) could preclude other potential failure modes from developing (say structural
collapse of a spillway gate) due to rapid evacuation of the pool through the initial breach.
On the other hand, structural collapse of a spillway gate may not necessarily preclude
internal erosion from also developing if the increased discharge through the failed gate is
not sufficient to evacuate the pool. As a result, the order in which potential failure modes
initiate might be important to consider in the risk analysis.
Table I-1-2. Permutation without Repetition

Permutation   First Outcome   Second Outcome   Third Outcome
5-1           FM1             FM2              FM3
5-2           FM1             FM3              FM2
5-3           FM2             FM1              FM3
5-4           FM2             FM3              FM1
5-5           FM3             FM1              FM2
5-6           FM3             FM2              FM1
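A minimal sketch of this enumeration, again with the standard library (the FM labels follow Table I-1-2):

```python
from itertools import permutations
from math import factorial

# k = 3 ordered failure modes chosen from n = 3: n!/(n-k)! = 6 permutations.
modes = ["FM1", "FM2", "FM3"]
orderings = list(permutations(modes, 3))
assert len(orderings) == factorial(3) // factorial(3 - 3)  # 6

for order in orderings:
    print(" -> ".join(order))
```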
Axioms
Probability theory is founded on three axioms that have been attributed to Kolmogorov.
The first axiom states that the probability of an event (A) is a non-negative real number.
The second axiom states that the probability of the certain event is equal to one. The
third axiom (sometimes referred to as the addition rule) states that the probability of the
union of two or more mutually exclusive events is equal to the sum of the probabilities
for each event. These axioms are summarized by the three equations below.

First Axiom: P(A) ≥ 0

Second Axiom: P(S) = 1

Third Axiom: P(A ∪ B) = P(A) + P(B), for mutually exclusive events A and B
The third axiom can be expressed as a multiplication rule instead of the addition rule. In
practice, it makes no difference because the remaining probability formulas can be
derived from either set of axioms. The multiplication rule can be expressed using the
equation below.

P(A ∩ B) = P(B)P(A|B)
In the above equation, P(A|B) is the conditional probability of event A given that event B
has occurred. In a risk analysis, this might correspond to the probability that a levee will
breach given that a 50-year flood occurs, expressed as P(Breach|Flood). Note that the
Annualized Failure Probability (AFP) is calculated (using the multiplication rule) as the
intersection probability of the events that comprise the Potential Failure Mode.
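As an illustrative sketch, the multiplication rule maps directly to code; the probability values below are assumptions chosen only for demonstration:

```python
# Multiplication rule: P(A ∩ B) = P(B) * P(A|B).
# Illustrative values (assumed): the annual probability of the flood load and
# the conditional probability of breach given that load.
p_flood = 0.02               # P(Flood): annual chance the flood load occurs
p_breach_given_flood = 0.10  # P(Breach | Flood)

# Annualized failure probability for this potential failure mode:
afp = p_flood * p_breach_given_flood
print(f"AFP = {afp:.4f}")  # 0.0020
```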
Expressing Probabilities
Common ways to express probabilities include as a percent (10% chance), as a fraction
(1/10 chance), as a decimal (0.1 probability), or as odds (1:9). Each expression in this
particular example has the same value and the same meaning. Probabilities that occur on
a time scale are often expressed as an annual chance exceedance (ACE) or an annual
exceedance probability (AEP). Probabilities can also vary with time to represent
temporal processes such as climate change or deterioration due to corrosion.
Random Variables
A random (or stochastic) variable is used to represent an uncertain quantity whose value
can take on a number of possible values. The uncertainty associated with the random
variable could be the result of natural variability or a lack of knowledge. Despite the
name, random variables do not necessarily have to be associated with a random process.
For example, the magnitude of a spring flood might be modeled as a random process that
varies from year to year, whereas a fault in the dam foundation might be modeled as a
state of nature, either it exists or it does not. Both of these scenarios can be described
using random variables. The flood might be described by a range of peak discharge
values and the presence of the fault might be described by two values, either ‘yes, it
exists’ or ‘no, it does not exist’.
The event described by a particular value (or range of values) of a random variable has
an associated probability. Probability distributions describe the
probabilities associated with all possible values of a random variable. For example, we
can estimate a probability distribution for a random variable describing the annual peak
ground acceleration at a dam site. This probability distribution can then be used to
estimate the probability that the peak ground acceleration next year will be large enough
to cause liquefaction. Virtually all parameters considered and applied in a risk analysis
have some degree of uncertainty and are therefore treated as random variables.
Discrete random variables can take on only a finite (or countably infinite) number of
possible values. For example, consider the sum of two die rolls. With n=6 possible
outcomes for each die and k=2 dice, there are 6² or 36 possible permutations. A sum of 4
can be obtained from 3 of these permutations [{3,1}, {1,3}, {2,2}]. The probability for a
sum of 4 is therefore 3/36 or about 0.08. Probabilities for the other sums can be estimated
in a similar manner. The probability mass function for this example is presented in
Figure I-1-2.
Figure I-1-2. Probability Mass Function for the Sum of Two Die Rolls
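The probability mass values in Figure I-1-2 can be reproduced by brute-force enumeration, a minimal sketch:

```python
from itertools import product
from collections import Counter

# Enumerate the 6**2 = 36 equally likely permutations of two die rolls
# and tally the probability mass for each possible sum.
rolls = list(product(range(1, 7), repeat=2))
counts = Counter(a + b for a, b in rolls)

pmf = {total: count / len(rolls) for total, count in sorted(counts.items())}
print(pmf[4])  # 3/36 ≈ 0.083, from {3,1}, {1,3}, and {2,2}
```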
Continuous random variables can take on an infinite number of possible values. For
example, the peak ground acceleration measured this year at the dam site might be any
value greater than or equal to 0g and less than about 2g. The probability for any specific
value (say a ground acceleration of 0.3g) is zero for continuous random variables. This is
why continuous random variables, such as those typically used to characterize flood or
seismic loading, must be evaluated in the risk analysis using partitions (also commonly
referred to as ranges or intervals).
Figure I-1-3. Probability Density Function for Peak Ground Acceleration
A Cumulative Distribution Function (CDF) can be developed for both discrete and
continuous random variables. A CDF describes the probability that a random variable
will have a value less than or equal to a particular value.¹ For discrete random variables,
the cumulative distribution can be obtained by summing the probability mass values
associated with each value of the random variable that is less than or equal to the target
value. For the previous example of rolling two dice, the CDF is presented in Figure I-1-
4. The probability that the sum is less than or equal to 4 can be estimated directly from
Figure I-1-4 as 0.17 or by summing the appropriate probability mass values for sums of 2, 3, and 4
from Figure I-1-2 as 0.03 + 0.06 + 0.08.
Figure I-1-4. Cumulative Distribution Function for the Sum of Two Die Rolls
¹ The 'or equal' convention applies when developing loading functions such as the annual chance exceedance function for reservoir water surface elevation or peak ground acceleration. A 'less than' convention applies when portraying risks on an F,N chart.
For continuous random variables, the cumulative distribution function is obtained by
integrating the probability density function as illustrated by the following equation.

$$F(b) = P(X \leq b) = \int_{-\infty}^{b} f(x)\,dx$$
Figure I-1-5. Cumulative Distribution Function for Peak Ground Acceleration
A Complementary Cumulative Distribution Function (CCDF) describes the probability
that a random variable will have a value greater than a particular value; it is equal to one
minus the CDF. The CCDF for the peak ground acceleration example is provided in Figure I-1-6. From
this figure, the probability that the peak ground acceleration this year will be greater than
0.5g can be estimated as 0.35. Note that the probability of an acceleration greater than
0.5g plus the probability of an acceleration less than or equal to 0.5g is equal to 1 which
satisfies the axioms of probability given the fact that the two events are complementary.
Figure I-1-6. Complementary Cumulative Distribution Function for Peak Ground Acceleration
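A sketch of this complementary relationship using SciPy; the lognormal form and its parameters are assumptions for illustration, not values from the example:

```python
from scipy import stats

# Illustrative only: assume annual peak ground acceleration (as a ratio of
# gravity) follows a lognormal distribution with assumed parameters.
pga = stats.lognorm(s=0.8, scale=0.4)

ccdf_at_half_g = pga.sf(0.5)   # survival function = 1 - CDF
cdf_at_half_g = pga.cdf(0.5)
print(ccdf_at_half_g + cdf_at_half_g)  # ≈ 1.0: the events are complementary
```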
Correlation
Correlation is the degree to which two or more random variables are linearly related to
each other. For example, standard penetration test blow counts might be correlated with
the shear strength of soils. Higher blow counts might be an indicator of higher shear
strengths. This can facilitate the indirect estimation of random variable parameters
without having to measure the parameter directly. The concept can also be used to
provide internal consistency between parameters within a risk analysis. For example, the
number of people estimated to be sleeping when the flood warning is issued might be
correlated with the time of day. Note that correlation does not automatically imply
causation. A causal connection may exist when there is a plausible cause and effect
explanation.
Events
Two (or more) events are mutually exclusive when they cannot occur at the same
time. Floods and earthquakes are often modeled as mutually exclusive events in a risk
analysis (even though they are not) so that their risks can be estimated separately and
then summed to obtain the total risk. This is usually a reasonable simplifying assumption
because the joint probability of a flood and an earthquake is usually very remote. This
assumption, however, may not be reasonable in every situation. For mutually exclusive
events,
P(A ∪ B) = P(A) + P(B)
Two (or more) events are statistically independent if the occurrence of one event does not
affect the probability for occurrence of the other event(s). For example, floods and
earthquakes may be statistically independent events when the occurrence of an
earthquake does not change the probability that a flood will occur. Potential failure
modes are often developed in a way that assumes each individual potential failure mode
is statistically independent. Statistically independent events satisfy the following
equation.
P(A ∩ B) = P(A)P(B)
Two (or more) events are statistically dependent if the occurrence of one event affects the
occurrence probability of the other event(s). Potential failure modes can sometimes be
statistically dependent events. During a flood event the occurrence of a breach by
overtopping (and subsequent draining of the reservoir) might decrease the probability of
initiation of internal erosion. Statistically dependent events satisfy the following
equation.
P(A ∩ B) = P(A)P(B|A)
The probability of the union of two events can be estimated using the following equation.

P(A ∪ B) = P(A) + P(B) − P(A ∩ B)

The concept can be expanded when more than two events are involved, although the
equation can quickly become long and cumbersome as the number of events increases.
DeMorgan’s Rule
For two events A and B, DeMorgan's rule states that the complement of the union of the
two events is equal to the intersection of their complements ($\overline{A \cup B} = \bar{A} \cap \bar{B}$). The Venn
diagrams in Figure I-1-7 illustrate this relationship.

Figure I-1-7. Venn Diagrams of DeMorgan's Rule
DeMorgan’s rule can simplify the calculation for the probability of the union of two or
more events. In practice, it can be used to estimate the total probability of system failure
given ‘n’ potential failure modes.
$$P(E_1 \cup E_2 \cup \cdots \cup E_n) = 1 - \prod_{i=1}^{n}\left[1 - P(E_i)\right] \quad \text{(for statistically independent events)}$$
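A minimal sketch of this system-failure calculation; the probabilities are illustrative and the failure modes are assumed statistically independent:

```python
# Total probability of system failure for n potential failure modes,
# via DeMorgan's rule: P(E1 ∪ ... ∪ En) = 1 - Π (1 - P(Ei)).
pfm_probabilities = [0.01, 0.002, 0.0005]  # illustrative values

p_no_failure = 1.0
for p in pfm_probabilities:
    p_no_failure *= (1.0 - p)  # intersection of complements: no mode occurs

p_system_failure = 1.0 - p_no_failure
print(f"{p_system_failure:.6f}")  # slightly less than the simple sum 0.0125
```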
Uni-Modal Bounds
The uni-modal bounds theorem (Ang and Tang, 1984) states that for ‘n’ positively
correlated events (E1, E2, E3, …, En) with corresponding probabilities [P(E1), P(E2), P(E3),
…, P(En)], the total probability for the union of the events [P(E) = P(E1 ∪ E2 ∪ E3 …∪
En)] lies between the upper and lower bounds given by the following equation.
$$\max_{i} P(E_i) \;\leq\; P(E) \;\leq\; 1 - \prod_{i=1}^{n}\left[1 - P(E_i)\right]$$
Calculation of the upper bound is based on application of DeMorgan's rule, whereas the
lower bound is equal to the probability of the most likely individual event. Events that
are highly correlated will yield a total probability closer to the lower bound. Events that
are largely uncorrelated will yield a total probability that is closer to the upper bound. In
practice, the degree of positive correlation is difficult to estimate and risk analysts often
assume the upper bound value unless there is specific information to indicate correlation
is important.
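A sketch of the bounds, reusing illustrative probabilities:

```python
# Uni-modal bounds for the union of n positively correlated events:
# max P(Ei)  <=  P(E1 ∪ ... ∪ En)  <=  1 - Π (1 - P(Ei))
event_probabilities = [0.01, 0.002, 0.0005]  # illustrative values

lower_bound = max(event_probabilities)  # most likely individual event

upper_bound = 1.0
for p in event_probabilities:
    upper_bound *= (1.0 - p)
upper_bound = 1.0 - upper_bound         # DeMorgan's rule (independence)

print(f"{lower_bound:.6f} <= P(union) <= {upper_bound:.6f}")
```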
Point Estimators
The mean is the expected value for a random variable. The mean is located at the
centroid of the probability distribution. It can be estimated as the sum (or integral) over
every possible value weighted by the probability for that value. The median is the 50th
percentile which means that there is equal probability for values greater than and less
than the median value. The median divides the probability distribution into equal areas.
The mode is the most probable value of a random variable. The mode has the largest
probability (discrete random variable) or the largest probability density (continuous
random variable). A graphical depiction of these point estimators is presented in Figure
I-1-8 for a continuous random variable.
Figure I-1-8. Mean, Median, and Mode of a Continuous Random Variable
The mean, variance, and other distribution parameters can be estimated from sample data or from
expert opinion. The mathematical form for these parameters is provided in Table I-1-4.
Other common parameters can be derived from these basic parameters. The standard
deviation is simply the square root of the variance. The coefficient of variation is simply
the standard deviation divided by the mean.
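A short sketch of these sample statistics with NumPy; the sample values are assumed for illustration:

```python
import numpy as np

# Illustrative sample (assumed data, e.g. measured friction angles in degrees).
sample = np.array([30.5, 31.2, 32.0, 32.4, 31.8, 33.1, 32.7, 31.5])

mean = sample.mean()
std = sample.std(ddof=1)      # standard deviation = sqrt(sample variance)
cov = std / mean              # coefficient of variation
median = np.median(sample)    # 50th percentile

print(f"mean={mean:.2f}, median={median:.2f}, std={std:.2f}, CV={cov:.3f}")
```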
Distributions
A multitude of probability distributions are available to describe random variables. Some
of the more common types used in risk analysis include uniform, triangular, normal, log
normal, and Weibull. Other distributions are available and may be more appropriate for a
particular situation. A uniform distribution can be used to describe a random variable
whose possible values are all equally likely to occur. The uniform distribution is defined
by a lower and upper limit. Triangular distributions are defined by a lower limit, upper
limit, and mode. Normal distributions are well known for their bell curve shape. The
normal distribution can be defined by a mean and variance (or standard deviation). A
more generalized form of the normal distribution may also include a skew parameter.
The log normal distribution is used when the logarithm of a random variable is normally
distributed as is typically the case for unregulated annual peak discharges or volumes. A
useful characteristic of the log normal distribution is that all values of the random
variable must be positive. The Weibull distribution is a generalization of the
exponential distribution that is described by a shape parameter and a scale parameter.
The Weibull distribution can be used to develop relationships that describe the rate of
failure over time. These relationships are commonly referred to as bathtub curves. They
are characterized by a wear-in or early failure period (region A), a random failure period
(region B), and a wear-out period (region C). Each type of distribution has advantages
and disadvantages. The facilitator and risk analysts should be familiar with various
distribution types and their relative strengths and weaknesses. An illustration of the
probability density function for the common distributions is provided in Figure I-1-9.
Figure I-1-9. Probability Density Functions for Common Distributions (Uniform, Triangular, Normal, Log-Normal, and Weibull with regions A, B, and C)
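For illustration, the distribution types named above are all available in SciPy; the parameters below are assumptions, not values from the text:

```python
from scipy import stats

# Sketch: the five common distribution types, with illustrative (assumed)
# parameters. Each frozen object exposes pdf, cdf, rvs, mean, median, etc.
distributions = {
    "uniform":    stats.uniform(loc=0.0, scale=2.0),        # limits a=0, b=2
    "triangular": stats.triang(c=0.5, loc=0.0, scale=2.0),  # mode at midpoint
    "normal":     stats.norm(loc=32.0, scale=1.0),          # mean, std dev
    "log normal": stats.lognorm(s=0.5, scale=1.0),          # positive support
    "weibull":    stats.weibull_min(c=1.5, scale=1.0),      # shape, scale
}

for name, dist in distributions.items():
    print(f"{name}: mean={dist.mean():.3f}, median={dist.median():.3f}")
```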
According to the Central Limit Theorem (see e.g. Ang and Tang 1984), the probability
distribution of a random variable that is obtained by summing random variables will trend
toward a normal distribution regardless of how the summed random variables are
distributed. Similarly, the probability distribution of a random variable that is obtained as
a product of random variables will trend toward a log normal distribution regardless of
how the random variables being multiplied are distributed.
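A quick numerical sketch of the theorem using NumPy (the uniform summands and sample sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)

# Sum 50 uniform random variables many times; by the Central Limit Theorem
# the distribution of the sums trends toward normal even though each
# summand is uniform.
sums = rng.uniform(0.0, 1.0, size=(100_000, 50)).sum(axis=1)

# For Uniform(0,1): mean 0.5 and variance 1/12, so the sum of 50 has
# mean 25 and standard deviation sqrt(50/12) ≈ 2.04.
print(sums.mean(), sums.std())
```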
In practice, selecting an appropriate distribution can be challenging because risk analysis
often requires extrapolation beyond the range of observed data or experience. Also, the
quantity of information is often limited. In these situations, distributions should be
selected after careful consideration, and sometimes only after a sensitivity analysis has
been performed to understand the effect the selected distribution may have on the
estimated risk.
Confidence Intervals
A confidence interval is used to describe the amount of uncertainty associated with an
estimated or sampled value for a random variable. The confidence interval [a,b] for a
specified degree of confidence (C%) can be estimated using the following relationship.
$$C\% = \int_{a}^{b} f(x)\,dx$$
As an example, assume the friction angle for the soils at a particular site is believed to
follow a normal distribution with an estimated mean of 32° and an estimated standard
deviation of 1°. The probability (or confidence) that the phi angle is between 30° and 33°
can be estimated as about 82%. This value is represented by the shaded area under the
probability distribution in Figure I-1-10.
Figure I-1-10. Confidence Interval
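The 82% value can be checked with SciPy's normal CDF, a minimal sketch:

```python
from scipy import stats

# Probability (confidence) that the friction angle lies between 30° and 33°
# for a normal distribution with mean 32° and standard deviation 1°:
# C% = F(33) - F(30), the integral of f(x) from a = 30 to b = 33.
phi = stats.norm(loc=32.0, scale=1.0)
confidence = phi.cdf(33.0) - phi.cdf(30.0)
print(f"{confidence:.3f}")  # ≈ 0.819, i.e. about 82%
```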
Table I-1-5. Confidence Intervals for Normal Distribution

Confidence   Interval
68.3%        mean ± 1.000 standard deviations
90%          mean ± 1.645 standard deviations
95%          mean ± 1.960 standard deviations
99%          mean ± 2.576 standard deviations
99.7%        mean ± 3.000 standard deviations
Uncertainty
Two general types of uncertainty can be described as aleatory (natural variability) and
epistemic (knowledge uncertainty). Aleatory uncertainty characterizes processes that are
assumed to be random in time and space. The occurrence of floods might be assumed to
be random in time and the geologic properties of a foundation might be assumed to be
random in space. In practice, aleatory uncertainty is treated as irreducible. In other
words, there is no practical way to reduce the uncertainty through the acquisition of more
knowledge. Epistemic uncertainty characterizes our lack of knowledge regarding the
state of nature. The foundation flaw either exists or it does not exist, but we don’t have
sufficient knowledge to determine for certain whether or not the flaw exists. Epistemic
uncertainty considers the uncertainty in both models and model parameters. Uncertainty
in modeling includes our ability to identify a proper model, the ability of the model to
represent reality, and our understanding of how the model may be changing over time.
Uncertainty in model parameters includes our ability to identify the appropriate
representative parameters and consistently estimate values for the parameters through
observation or measurement. In practice, epistemic uncertainty is treated as reducible. In
other words, more knowledge can be obtained to reduce the magnitude of the uncertainty.
These uncertainty concepts are applicable not only to risk analysis models and risk
estimates but also to decision making processes.
Bayes Theorem
Bayes theorem expresses the way in which a degree of belief probability should
rationally change to account for new evidence. According to Ang and Tang (1975), the
Bayesian method provides a useful approach when dealing with limited available
information and when reliance on subjective judgments is necessary. It can be used to
inform subjective judgments so that the available evidence is not given too much weight
or too little weight when estimating probabilities.
The method begins with an estimate of the prior probability of an event based on
available information. The significance of new information or evidence can then be
considered by using Bayes theorem to obtain an updated or posterior estimate of the
event probability (Hartford and Baecher 2004). For example, given some background
rate of levee breach due to embankment stability, and given that levees with longitudinal
cracking are more likely to experience an embankment stability breach, an observation of
longitudinal cracking on a particular levee should logically increase the estimated
probability that the levee will breach under flood loading.
To illustrate the concept, it is convenient to start with the general form of Bayes theorem
using the equation below, where P(x|O) is the posterior probability of an event x given an
observation O, P(x) is the prior probability of the event x absent information, P(O|x) is
the conditional probability of the observation O given the event x, and P(O) is the
probability of the observation.
$$P(x \mid O) = \frac{P(x)\,P(O \mid x)}{P(O)}$$
A simple example is presented to illustrate the application of Bayes Theorem for the case
of a single observation. The Venn diagram shown in Figure I-1-11 provides the prior
probability of breach, P(x), in the absence of any specific information. It represents an
average or expected breach rate given a flood loading. For this example, it is assumed
that the average or base rate of breach given a flood loading is 0.2.
Figure I-1-11. Prior Probability of Breach (Breach 20%, No Breach 80%)
The Venn diagram shown in Figure I-1-12 provides additional information about
performance when seepage is observed. These values can be obtained from a
combination of observations, analytical models, and expert opinions.
Figure I-1-12. Probability of Breach and Observed Seepage (breach and seepage: 0.15; breach and no seepage: 0.05; no breach and seepage: 0.25; no breach and no seepage: 0.55)
Given the new information (seepage is observed), Bayes theorem can be applied using
the equation below to improve the estimate of the probability of breach. As one might
expect, the observation of seepage increases the estimated probability of breach.
$$P(\text{breach} \mid \text{seepage}) = \frac{P(\text{breach})\,P(\text{seepage} \mid \text{breach})}{P(\text{seepage})} = \frac{0.2 \times \dfrac{0.15}{0.15 + 0.05}}{0.15 + 0.25} = \frac{0.15}{0.40} = 0.375$$
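The same calculation as a short sketch, using the joint probabilities read from Figure I-1-12:

```python
# Joint probabilities from the example (Figure I-1-12):
p_breach_and_seepage = 0.15
p_breach_and_no_seepage = 0.05
p_no_breach_and_seepage = 0.25

p_seepage = p_breach_and_seepage + p_no_breach_and_seepage   # P(O) = 0.40
p_breach = p_breach_and_seepage + p_breach_and_no_seepage    # P(x) = 0.20
p_seepage_given_breach = p_breach_and_seepage / p_breach     # P(O|x) = 0.75

# Bayes theorem: P(x|O) = P(x) P(O|x) / P(O)
posterior = p_breach * p_seepage_given_breach / p_seepage
print(posterior)  # 0.375
```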
When dealing with multiple observations, it is more convenient to express Bayes theorem
in terms of odds and likelihood ratios using the equation below. In this equation, x is
taken to be the outcome of breach and x’ is taken to be the outcome of no breach. The
expression P(x|O)/P(x’|O) represents the posterior odds of breach given the observation
and P(x)/P(x’) represents the prior odds of breach absent the information. The term
P(O|x)/P(O|x’) is commonly referred to as the likelihood ratio for the observation, which
represents the value gained by having the additional information. When the likelihood
ratio is equal to 1.0, the observation provides no additional information on the expected
performance. A likelihood ratio greater than 1.0 will increase the estimated probability
of breach. Similarly, a likelihood ratio less than 1.0 will decrease the estimated
probability of breach based on the observation.
$$\frac{P(x \mid O)}{P(x' \mid O)} = \frac{P(x)}{P(x')} \cdot \frac{P(O \mid x)}{P(O \mid x')}$$
For the example presented in Figure I-1-12, the likelihood ratio for observed seepage can
be computed using the equation below.
$$\frac{P(O \mid x)}{P(O \mid x')} = \frac{\dfrac{0.15}{0.15 + 0.05}}{\dfrac{0.25}{0.25 + 0.55}} = \frac{0.75}{0.3125} = 2.4$$
In the case of multiple observations, the likelihood ratio can be computed as the joint
likelihood of the observations using the equation below.
$$\frac{P(O_1, \ldots, O_n \mid x)}{P(O_1, \ldots, O_n \mid x')} = \frac{P(O_1 \mid x)}{P(O_1 \mid x')} \cdot \frac{P(O_2 \mid O_1, x)}{P(O_2 \mid O_1, x')} \cdots \frac{P(O_n \mid O_{n-1}, \ldots, O_1, x)}{P(O_n \mid O_{n-1}, \ldots, O_1, x')}$$
If the observations are assumed to be mutually statistically independent, then the joint
likelihood ratio calculation reduces to the simple product in the equation below.
$$\frac{P(x \mid O_1, O_2, \ldots, O_n)}{P(x' \mid O_1, O_2, \ldots, O_n)} = \frac{P(x)}{P(x')} \prod_{i=1}^{n} \frac{P(O_i \mid x)}{P(O_i \mid x')}$$
Given the posterior odds of breach and knowing that the posterior probability of breach is
the complement of the posterior probability of no breach, [P(x’) = 1 – P(x)], the posterior
probability of breach can be computed using the equation below. This form of the
equation provides a method for systematically weighing the evidence obtained from
observations to update the probability of breach estimate.
$$P(x \mid O_1, O_2, \ldots, O_n) = \frac{\dfrac{P(x)}{P(x')} \prod_{i=1}^{n} \dfrac{P(O_i \mid x)}{P(O_i \mid x')}}{1 + \dfrac{P(x)}{P(x')} \prod_{i=1}^{n} \dfrac{P(O_i \mid x)}{P(O_i \mid x')}}$$
For the previous example, the probability of breach given observed seepage can
alternatively be computed as follows.
$$P(\text{breach} \mid \text{seepage}) = \frac{\dfrac{0.2}{0.8} \times 2.4}{1 + \dfrac{0.2}{0.8} \times 2.4} = \frac{0.6}{1.6} = 0.375$$
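A sketch of the odds form for the single-observation example:

```python
# Odds-and-likelihood-ratio form of Bayes theorem for the same example.
prior_odds = 0.2 / 0.8                                   # P(x) / P(x')

# Likelihood ratio for observed seepage:
# P(O|x) / P(O|x') = (0.15/0.20) / (0.25/0.80) = 0.75 / 0.3125 = 2.4
likelihood_ratio = (0.15 / 0.20) / (0.25 / 0.80)

posterior_odds = prior_odds * likelihood_ratio           # 0.25 * 2.4 = 0.6
posterior = posterior_odds / (1.0 + posterior_odds)      # odds -> probability
print(posterior)  # 0.375
```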
References
Scriven, M. and Paul, R.W. (1987). Critical Thinking as Defined by the National Council
for Excellence in Critical Thinking.

Hartford, D.N.D. and Baecher, G.B. (2004). Risk and Uncertainty in Dam Safety.

Benjamin, J.R. and Cornell, C.A. (1970). Probability, Statistics, and Decision for
Civil Engineers.

Ang, A.H-S. and Tang, W.H. (1975). Probability Concepts in Engineering
Planning and Design, Volume 1: Basic Principles.

Ang, A.H-S. and Tang, W.H. (1984). Probability Concepts in Engineering
Planning and Design, Volume 2: Decision, Risk, and Reliability.