Lecture 2: Introduction to Probability Theory

Lecture 2 introduces probability theory with a focus on uncertainty in design, types of uncertainty, and applications to system reliability analysis. It covers concepts such as probability space, basic axioms of probability, conditional probability, and independent events, along with examples relevant to reliability in engineering design. The lecture also discusses handling uncertainty in design processes and the differences between reliability-based and robust design.

Lecture 2: Introduction to Probability Theory

Lecture #2
Part 1 – Outline

• Example on Time-Independent Reliability (Lecture #1)

• Uncertainty in Design
• Types of Uncertainty
• Handling Uncertainty in Design Process

• Introduction to Probability Theory


• Probability Space
• Basic Axioms and Additional Rules

• Application to System Reliability Analysis

Example of Time-Independent Reliability

Definition of Reliability
 A failure will occur when the load L is greater than the strength S,
or G (= L – S) > 0. Hence G > 0 indicates a failure.
 The probability of failure can therefore be defined by
Pf = Pr ( G > 0 )  →  R = 1 − Pf = Pr ( G ≤ 0 )
Since S and L follow normal distributions, G (= L – S) is normal.

[Figure: probability density of G = L − S, centered at μG; the safe region (G ≤ 0) lies to the left of 0, the failure region (G > 0) to the right, and Pf = Pr ( G > 0 ) is the tail area beyond 0.]
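Since G is normal, Pf can be computed in closed form from the normal CDF. A minimal Python sketch, with assumed (illustrative) load and strength statistics that are not from the lecture:

```python
from math import erf, sqrt

def normal_cdf(x, mu=0.0, sigma=1.0):
    """Normal CDF via the error function."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

# Assumed example values (illustrative only): load and strength statistics.
mu_L, sigma_L = 100.0, 10.0   # load mean and std
mu_S, sigma_S = 150.0, 15.0   # strength mean and std

# G = L - S is normal with:
mu_G = mu_L - mu_S                        # -50.0
sigma_G = sqrt(sigma_L**2 + sigma_S**2)   # sqrt(325) ~ 18.03

# Failure probability Pf = Pr(G > 0) and reliability R = Pr(G <= 0)
Pf = 1.0 - normal_cdf(0.0, mu_G, sigma_G)
R = 1.0 - Pf
print(Pf, R)
```

Improving the mean margin (more negative μG) or reducing σG both shrink the tail area beyond 0, which is exactly the trade-off explored in the group discussion that follows.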
Example of Time-Independent Reliability

Group Discussion (5 Min): Which Design is the Most Reliable?

[Figure: probability densities of G = L − S for three designs. Design 1 is the baseline; Design 2 has an improved mean performance μG; Design 3 has reduced uncertainty (smaller σG). The safe region is G ≤ 0 and the failure region is G > 0.]
Types of Uncertainty

Aleatory uncertainty (objective uncertainty)


 Describes inherent variation associated with a system or its
operating environment and arises from variations in material
properties, dimensional tolerances, loading conditions, etc.
 Arises from natural variability and is thus irreducible.
 Often modeled as random variables whose values are in an
established range but vary from unit to unit or from time to time.

Epistemic uncertainty (subjective uncertainty)


 Arises primarily from the lack of information or knowledge about
some characteristics of the system or environment.
 The degree of uncertainty can be reduced if more data are
collected or more knowledge is acquired.

Types of Uncertainty

Aleatory uncertainty (objective uncertainty)

>> thickness_a = normrnd(1,0.05,100000,5);


>> mean(thickness_a)

ans =

1.0001 1.0002 0.9999 1.0002 1.0000

>> std(thickness_a)

ans =

0.0499 0.0500 0.0500 0.0501 0.0500

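For readers without MATLAB, the same experiment can be sketched with the Python standard library (`random.gauss` playing the role of `normrnd`):

```python
import random

random.seed(0)

# Python analogue of the MATLAB call normrnd(1, 0.05, 100000, 5):
# five columns of 100,000 draws from Normal(mean 1, std 0.05).
cols = [[random.gauss(1.0, 0.05) for _ in range(100_000)] for _ in range(5)]

means = [sum(c) / len(c) for c in cols]
stds = [(sum((x - m) ** 2 for x in c) / (len(c) - 1)) ** 0.5
        for c, m in zip(cols, means)]

# With this many samples every column's estimates sit on top of the true
# values: the remaining unit-to-unit variation is inherent (aleatory),
# and no amount of extra data removes it.
print(means)
print(stds)
```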
Types of Uncertainty

Epistemic uncertainty (subjective uncertainty)

>> thickness_a = normrnd(1,0.05,10,5);


>> mean(thickness_a)

ans =

0.9982 1.0274 1.0352 1.0130 0.9862

>> std(thickness_a)

ans =

0.0410 0.0310 0.0539 0.0370 0.0602

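A Python sketch of the contrast (an assumed translation of the MATLAB calls): with only 10 samples per column the estimated statistics scatter widely, while with many samples they settle. The uncertainty in the *estimates* is epistemic and shrinks as more data are collected.

```python
import random

random.seed(1)

def column_stats(n):
    """Sample mean and std of n draws from Normal(1, 0.05)."""
    c = [random.gauss(1.0, 0.05) for _ in range(n)]
    m = sum(c) / n
    s = (sum((x - m) ** 2 for x in c) / (n - 1)) ** 0.5
    return m, s

# cf. normrnd(1, 0.05, 10, 5) vs. normrnd(1, 0.05, 100000, 5)
small = [column_stats(10) for _ in range(5)]
large = [column_stats(100_000) for _ in range(5)]

# The five 10-sample means scatter far more than the five large-sample means.
spread_small = max(m for m, _ in small) - min(m for m, _ in small)
spread_large = max(m for m, _ in large) - min(m for m, _ in large)
print(spread_small, spread_large)
```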
Handling Uncertainty at Three Steps in Design

[Figure: the design process moves from data, through a fitted model (PDF) of the design variable, to the failure probability Pf = Pr ( G > 0 ) used to compare designs (Design 1 vs. Design 2).]

 Step 1 (Uncertainty modeling): probability theory, statistics, other theories
 Step 2 (Uncertainty analysis): reliability analysis, robustness analysis, sensitivity analysis
 Step 3 (Design under uncertainty): reliability-based design, robust design

PDF: Probability density function


Introduction to Probability Theory

Probability Space
 It represents our uncertainty regarding an experiment.
 It has three parts
 The sample space Ω, which is the set of all possible outcomes
 The event field A, which is the set of all possible events (an event E is a
measurable subset of the sample space Ω, i.e., E ⊆ Ω)
 The probability measure P, which is a real-valued function of the events,
P: A → [0, 1]


[Figure: the probability measure P maps an event E1 ⊆ Ω to a number P(E1) in the interval [0, 1].]
Introduction to Probability Theory

Probability Space
 Example experiment: Rolling a six-sided die
 The sample space Ω = {1, 2, 3, 4, 5, 6}
 Three example events
Rolling an even number: E1 = {2, 4, 6}
Rolling a 1 or a 3: E2 = {1, 3}
Rolling a 1 and a 3: E3 = { } or ∅ (Only one number can be rolled, so this
outcome is impossible. The event has no outcomes in it.)
 The probabilities of events E1 and E2
P(E1) = (number of outcomes in E1) / (number of equally likely outcomes) = 3/6 = 0.5
P(E2) = (number of outcomes in E2) / (number of equally likely outcomes) = 2/6 ≈ 0.333
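The die example can be checked directly by counting outcomes; a short Python sketch using exact fractions:

```python
from fractions import Fraction

# Sample space for rolling a fair six-sided die.
omega = {1, 2, 3, 4, 5, 6}

def prob(event):
    """Classical probability: outcomes in the event / equally likely outcomes."""
    return Fraction(len(event & omega), len(omega))

E1 = {2, 4, 6}    # rolling an even number
E2 = {1, 3}       # rolling a 1 or a 3
E3 = {1} & {3}    # rolling a 1 AND a 3: {1} ∩ {3} is empty, an impossible event

print(prob(E1), prob(E2), prob(E3))  # 1/2 1/3 0
```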
Introduction to Probability Theory

Three Basic Axioms of Probability Theory


 Axiom 1: P(E) ≥ 0 for any E ⊆ Ω.
 Axiom 2: P(Ω) = 1.
 Axiom 3: If E1 and E2 are mutually exclusive, it follows that
P ( E1 ∪ E2 ) = P ( E1 ) + P ( E2 )


[Figure: mutually exclusive events E1 and E2 in Ω; P maps their union to P(E1) + P(E2) = P(E1 ∪ E2) on [0, 1].]
Introduction to Probability Theory

Other Useful Rules (Follow from Axioms 1-3)


 Rule 1: P(∅) = 0.

 Rule 2: P(E1) ≤ P(E2) if E1 ⊆ E2.
 Rule 3: P(Ē1) = P(Ω \ E1) = 1 − P(E1).
 Rule 4: P(E1 ∪ E2) = P(E1) + P(E2) − P(E1 ∩ E2).

[Figure: two overlapping events E1 and E2; the intersection of the two events is E1 ∩ E2.]

Conditional Probability
 Allows reasoning with partial information
 Conditional probability of E1 given E2
P(E1|E2) = P(E1 ∩ E2) / P(E2)

[Figure: P maps the ratio P(E1 ∩ E2) / P(E2) to P(E1|E2) on [0, 1].]
Introduction to Probability Theory

Applications of Probability Theory in System Reliability


 Group discussion (5 min): How to estimate the reliability of a
series system (two components are serially connected)?
 The failure of either component can result in the system failure.

Component A Component B

 Suppose the following information is obtained from field data.


 Event E1: failure of component A, and P(E1) = 0.01
 Event E2: failure of component B, and P(E2) = 0.02
 Events E1 and E2 are independent: the state of one component does not
affect that of the other.

Lecture #2
Part 2 – Outline

• Conditional Probability (Part 1) and Independent Events


• Definition of Independent Events
• Two Examples on Conditional Probability

• Random Variables and Probability Distributions


• Discrete Random Variables
• Continuous Random Variables and Normal Distribution

• Application to System Reliability Analysis


• An Example with Series System
• An Example with Parallel System

Introduction to Probability Theory

Conditional Probability
 Allows reasoning with partial information
 Conditional probability of E1 given E2
P(E1|E2) = P(E1 ∩ E2) / P(E2)

[Figure: events E1 and E2 in Ω; P maps the ratio P(E1 ∩ E2) / P(E2) to P(E1|E2) on [0, 1].]

Independent Events
 If the occurrence of E2 does not affect that of E1, then E1 and E2
are said to be independent.
P(E1|E2) = P(E1)  →  P(E1 ∩ E2) / P(E2) = P(E1)
→  P(E1 ∩ E2) = P(E1) ∙ P(E2)   (Multiplication Theorem)
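A sketch verifying the multiplication theorem on the fair die, with illustrative events chosen here (not from the lecture):

```python
from fractions import Fraction

omega = {1, 2, 3, 4, 5, 6}          # fair die

def prob(event):
    return Fraction(len(event), len(omega))

# Illustrative events (assumed for this example):
E1 = {2, 4, 6}       # even number,  P(E1) = 1/2
E2 = {1, 2, 3, 4}    # at most four, P(E2) = 2/3

# Conditional probability P(E1|E2) = P(E1 ∩ E2) / P(E2)
p_cond = prob(E1 & E2) / prob(E2)   # (2/6) / (4/6) = 1/2

# Since P(E1|E2) equals P(E1), the events are independent, and the
# multiplication theorem holds: P(E1 ∩ E2) = P(E1) * P(E2).
print(p_cond, prob(E1 & E2), prob(E1) * prob(E2))  # 1/2 1/3 1/3
```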
Introduction to Probability Theory

Group discussion (5 min): Example on Conditional Probability


 In a battery factory, 40% and 60% of the battery cells in a batch
are manufactured by product lines 1 and 2, respectively. The
probabilities of having defective cells from lines 1 and 2 are 0.01
and 0.02, respectively.
 If a quality engineer randomly samples a cell from the batch,
what is the probability of getting a defective cell from each line?
 Let the events be
 L1 = Cell produced from line 1
 L2 = Cell produced from line 2
 D = Defective cell
 The probability of sampling a defective cell from each line
 P(D ∩ L1) = P(D|L1) P(L1) = 0.01 × 0.4 = 0.004
 P(D ∩ L2) = P(D|L2) P(L2) = 0.02 × 0.6 = 0.012
 What is the probability of sampling a defective cell?
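The follow-up question on the slide is answered by the law of total probability, P(D) = P(D|L1) P(L1) + P(D|L2) P(L2); a short sketch with the values from the slide:

```python
# Values from the slide.
p_L1, p_L2 = 0.4, 0.6        # share of cells from each line
p_D_given_L1 = 0.01          # defect rate, line 1
p_D_given_L2 = 0.02          # defect rate, line 2

p_D_and_L1 = p_D_given_L1 * p_L1   # 0.004
p_D_and_L2 = p_D_given_L2 * p_L2   # 0.012

# Law of total probability: D occurs with L1 or with L2.
p_D = p_D_and_L1 + p_D_and_L2      # 0.016

# Bonus (Bayes' rule): given a defective cell, the chance it came from line 1.
p_L1_given_D = p_D_and_L1 / p_D    # 0.25
print(p_D, p_L1_given_D)
```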
Introduction to Probability Theory

Another Example on Conditional Probability


 Suppose Frank tossed a coin five times, and the coin landed on
heads for all these trials.
 What is the probability of Frank getting a tail for the next toss?

[Figure: a timeline of five tosses all landing heads, followed by the next toss with unknown outcome, P(tails) = ?]

 Would getting a tail be MORE likely because Frank has not seen it
in so long?
 Two events are independent if the outcome of one event has no
effect on the outcome of the other.

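A Monte Carlo sketch of the answer: because tosses are independent, the conditional frequency of tails after five heads stays near 0.5, contrary to the gambler's-fallacy intuition that a tail is "due":

```python
import random

random.seed(42)

# Among simulated 6-toss runs that start with five heads, count how often
# the sixth toss is tails.
N = 200_000
tails_after_streak = 0
streaks = 0
for _ in range(N):
    tosses = [random.random() < 0.5 for _ in range(6)]  # True = heads
    if all(tosses[:5]):            # first five tosses all heads
        streaks += 1
        if not tosses[5]:          # sixth toss is tails
            tails_after_streak += 1

p_tail = tails_after_streak / streaks
print(streaks, p_tail)   # roughly N/32 streaks, p_tail near 0.5
```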
Handling Uncertainty at Three Steps in Design

[Figure (repeated from Part 1): the design process moves from data, through a fitted model (PDF) of the design variable, to the failure probability Pf = Pr ( G > 0 ) used to compare designs (Design 1 vs. Design 2).]

 Step 1 (Uncertainty modeling): probability theory, statistics, other theories
 Step 2 (Uncertainty analysis): reliability analysis, robustness analysis, sensitivity analysis
 Step 3 (Design under uncertainty): reliability-based design, robust design

PDF: Probability density function


Reliability-based Design vs. Robust Design

Reliability-based design: control the probability of failure.

[Figure: probability densities of G for Design 1 and Design 2; the failure probability Pf = Pr ( G > 0 ) is the area beyond 0, and the optimal solution minimizes it.]

Robust design: control the variability of the design.

[Figure: performance G(X) versus design variable X; the robust solution sits where G(X) is flat, so that variation in X causes little variation in G, even away from the optimal solution.]
Introduction to Probability Theory

[Figure: measured data on component thickness X with a fitted model (PDF).]

Random Variable
 Numerical outcome of a random experiment.
 The probability distribution of a random variable is the collection
of possible outcomes along with their probabilities:
 Discrete case (probability mass function): P ( X = xk ) = pX ( xk ), with Σ_{k=1}^{M} pX ( xk ) = 1
 Continuous case (PDF): P ( x1 < X ≤ x2 ) = ∫_{x1}^{x2} fX ( x ) dx

PDF: Probability density function
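A discrete example sketched in Python (the sum of two fair dice is chosen here for illustration): the PMF values are nonnegative and sum to one, and event probabilities are sums of PMF values.

```python
from fractions import Fraction
from collections import Counter

# PMF of a discrete random variable: X = the sum of two fair dice.
outcomes = [(i, j) for i in range(1, 7) for j in range(1, 7)]
counts = Counter(i + j for i, j in outcomes)
pmf = {x: Fraction(c, 36) for x, c in counts.items()}

# The PMF sums to 1 over all possible values of X...
total = sum(pmf.values())

# ...and event probabilities are sums of PMF values, e.g. P(X = 7).
p7 = pmf[7]
print(total, p7)   # 1 1/6
```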


Introduction to Probability Theory

Normal distribution

[Figure: the data and fitted normal PDF of component thickness X; the shaded area between x = 6 and x = 7 gives P ( 6 < X ≤ 7 ) = ∫_6^7 fX ( x ) dx.]

Random Variable
 Numerical outcome of a random experiment.
 The probability distribution of a random variable is the collection
of possible outcomes along with their probabilities:
 Discrete case (probability mass function): P ( X = xk ) = pX ( xk ), with Σ_{k=1}^{M} pX ( xk ) = 1
 Continuous case (PDF): P ( x1 < X ≤ x2 ) = ∫_{x1}^{x2} fX ( x ) dx

PDF: Probability density function
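A sketch of the continuous case in Python, with an assumed distribution X ~ Normal(6.5, 0.5) (the lecture does not give the parameters): P(6 < X ≤ 7) can be computed from the CDF and cross-checked by numerically integrating the PDF.

```python
from math import erf, exp, pi, sqrt

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def normal_pdf(x, mu, sigma):
    return exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

# Assumed distribution for illustration: thickness X ~ Normal(6.5, 0.5).
mu, sigma = 6.5, 0.5

# P(6 < X <= 7) = integral of fX over (6, 7] = F(7) - F(6)
p = normal_cdf(7.0, mu, sigma) - normal_cdf(6.0, mu, sigma)

# Cross-check the integral with a midpoint Riemann sum of the PDF.
n = 10_000
h = 1.0 / n
riemann = sum(normal_pdf(6.0 + (k + 0.5) * h, mu, sigma) for k in range(n)) * h
print(p, riemann)  # both near 0.6827, the one-sigma probability
```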


Introduction to Probability Theory

Applications of Probability Theory in System Reliability


 Group discussion (5 min): How to estimate the reliability of a
series system?
 The failure of either component can result in the system failure.

Component A Component B

 Suppose the following information is obtained from field data.


 Event E1: failure of component A, and P(E1) = 0.01
 Event E2: failure of component B, and P(E2) = 0.02
 Events E1 and E2 are independent: the state of one component does not
affect that of the other.

Introduction to Probability Theory

Applications of Probability Theory in System Reliability


 Approach 1: The system fails when either component fails.
 System failure event Esf is the union of component failure events E1 and E2
Esf = E1 ∪ E2
 The probability of system failure is
P(Esf) = P(E1 ∪ E2) = P(E1) + P(E2) – P(E1 ∩ E2)
= P(E1) + P(E2) – P(E1) ∙ P(E2)
= 0.01 + 0.02 – 0.01 × 0.02 = 0.0298
 System success event Ess is the complement of Esf and therefore the system
reliability R is given by
R = P(Ess) = P(Ēsf) = 1 – P(Esf) = 1 – 0.0298 = 0.9702

Introduction to Probability Theory

Applications of Probability Theory in System Reliability


 Approach 2: The system succeeds when both components
succeed.
 The system success event Ess is the intersection of the component success events
Ē1 and Ē2
Ess = Ē1 ∩ Ē2
 The system reliability R is therefore
R = P(Ess) = P(Ē1 ∩ Ē2) = P(Ē1) ∙ P(Ē2) = 0.99 × 0.98 = 0.9702
 The above equation can also be written as
R = R1 ∙ R2
 This generalizes to a series system of m independent components
R = ∏_{i=1}^{m} Ri

Component 1 → Component 2 → … → Component m
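Both approaches can be checked in a few lines of Python; they agree, and the product form extends to m components:

```python
from functools import reduce

# Independent component failure probabilities P(E1), P(E2) from the slides.
p_fail = [0.01, 0.02]

# Approach 1: P(Esf) = P(E1) + P(E2) - P(E1) P(E2), then R = 1 - P(Esf).
p1, p2 = p_fail
R_union = 1.0 - (p1 + p2 - p1 * p2)

# Approach 2 (generalizes to m components): R is the product of the
# component reliabilities (1 - pi).
R_product = reduce(lambda acc, p: acc * (1.0 - p), p_fail, 1.0)

print(R_union, R_product)  # both 0.9702
```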
Introduction to Probability Theory

Applications of Probability Theory in System Reliability


 How to estimate the reliability of a parallel system?
 The success of either component can result in the system
success.
Component A

Component B

 Suppose the same information is obtained from field data.


 Event E1: failure of component A, and P(E1) = 0.01
 Event E2: failure of component B, and P(E2) = 0.02
 Events E1 and E2 are independent: the state of one component does not
affect that of the other.

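The slide leaves this as a discussion question; a sketch of the standard answer: with independent components, a parallel system fails only when both components fail, so Esf = E1 ∩ E2.

```python
# P(E1), P(E2) from the slide, independent events.
p1, p2 = 0.01, 0.02

# Parallel system: failure requires BOTH components to fail.
p_system_fail = p1 * p2              # P(E1 ∩ E2) = 0.0002
R_parallel = 1.0 - p_system_fail

# Compare with the series system from the previous slides.
R_series = (1.0 - p1) * (1.0 - p2)   # 0.9702
print(R_parallel, R_series)          # 0.9998 vs 0.9702
```

Redundancy pays: the same two components arranged in parallel are far more reliable than in series.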
Lecture #2
Part 3 – Outline

• Continuous Random Vectors and Joint Distributions


• Joint Probability Distributions
• Marginal Probability Distributions

Introduction to Probability Theory

Continuous Random Vectors and Joint Distributions


 Joint CDF of two continuous random variables X and Y
FXY ( x, y ) = P ( X ≤ x, Y ≤ y ) = ∫_{−∞}^{y} ∫_{−∞}^{x} fXY ( τ, υ ) dτ dυ

 Joint PDF
fXY ( x, y ) = ∂²FXY ( x, y ) / ∂x∂y

[Figure: an example joint PDF surface; each slice shares the same shape as a conditional PDF, fY|X ( y | x ) = fXY ( x, y ) / fX ( x ).]
Introduction to Probability Theory

Continuous Random Vectors and Joint Distributions


 Knowing the joint PDF, one can obtain the PDF of one random
variable, called the marginal PDF, by integrating out the other.
fY ( y ) = ∫_{−∞}^{+∞} fXY ( τ, y ) dτ
fX ( x ) = ∫_{−∞}^{+∞} fXY ( x, υ ) dυ

 The marginal PDFs should satisfy the following properties:
(1) fX ( x ) ≥ 0, fY ( y ) ≥ 0
(2) ∫_{−∞}^{+∞} fX ( x ) dx = 1, ∫_{−∞}^{+∞} fY ( y ) dy = 1
Introduction to Probability Theory

An Example on Joint Distribution


 Given the joint density function of two random variables X and Y
2
 ( x + 2 y ), if 0 < x, y < 1
f XY ( x, y ) =  3
0, otherwise

 Find the marginal distributions of X and Y.


 Solution
 The marginal distribution (PDF) of X can be obtained by integrating out Y
+∞ 2 1 2
fX ( x) = 
3 0
f XY ( x,υ ) dυ = ( x + 2υ ) dυ = ( x + 1) , for 0 < x < 1
−∞ 3

 Therefore, the marginal distribution of X takes the below form


2
 ( x + 1) , if 0 < x < 1
fX ( x) =  3
 0, otherwise

30
Introduction to Probability Theory

An Example on Joint Distribution


 Given the joint density function of two random variables X and Y
2
 ( x + 2 y ), if 0 < x, y < 1
f XY ( x, y ) =  3
0, otherwise

 Find the marginal distributions of X and Y.


 Solution
 The marginal distribution (PDF) of Y can be obtained by integrating out X
+∞ 2 1 1
fY ( y ) = 
3 0
f XY (τ , y ) dτ = (τ + 2 y ) d τ = ( 4 y + 1) , for 0 < y < 1
−∞ 3

 Therefore, the marginal distribution of Y takes the below form


1
 ( 4 y + 1) , if 0 < y < 1
fY ( y ) =  3
 0, otherwise

31
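The closed-form marginals can be cross-checked numerically; a Python sketch using midpoint Riemann sums over the unit square:

```python
# Worked example from the slides: fXY(x, y) = (2/3)(x + 2y) on (0, 1)^2.
n = 500
h = 1.0 / n
grid = [(k + 0.5) * h for k in range(n)]   # midpoints of n subintervals

def f_xy(x, y):
    return (2.0 / 3.0) * (x + 2.0 * y)

# Total probability integrates to 1.
total = sum(f_xy(x, y) for x in grid for y in grid) * h * h

# Marginal of X at a sample point, vs. the closed form (2/3)(x + 1).
x0 = 0.25
fx_numeric = sum(f_xy(x0, y) for y in grid) * h
fx_exact = (2.0 / 3.0) * (x0 + 1.0)

# Marginal of Y at a sample point, vs. the closed form (1/3)(4y + 1).
y0 = 0.75
fy_numeric = sum(f_xy(x, y0) for x in grid) * h
fy_exact = (1.0 / 3.0) * (4.0 * y0 + 1.0)

print(total, fx_numeric, fx_exact, fy_numeric, fy_exact)
```

Because the integrand is linear in each variable, the midpoint rule is exact here up to floating-point rounding.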
Introduction to Probability Theory

Covariance and Correlation Coefficient


 The correlation coefficient ρXY between two random variables X
and Y with means μX and μY and standard deviations σX and σY is defined as

ρXY = Cov ( X, Y ) / ( σX σY ) = E[ ( X − μX )( Y − μY ) ] / ( σX σY )

where the covariance is Cov ( X, Y ) = E[ ( X − μX )( Y − μY ) ] = E ( XY ) − E ( X ) E ( Y ).

 |ρXY| ≤ 1, and ρXY measures the linear dependence between X and Y.
 Case 1: |ρXY| = 1 → a perfect linear relationship between X and Y
 Case 2: ρXY = 0 → X and Y are uncorrelated (no linear dependence) and
E[XY] = E[X]E[Y]
Introduction to Probability Theory

Covariance and Correlation Coefficient


 Correlation and dependence
 If two variables are independent, they are uncorrelated and ρXY = 0.
 If two variables are uncorrelated, they can be (nonlinearly) dependent.

[Figure: (x, y) point sets with various values of ρXY. Picture courtesy of Wikipedia.]
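A Python sketch of both cases with simulated data (distributions assumed here for illustration): a perfect linear relationship gives ρ ≈ 1, while Y = X² with symmetric X is uncorrelated yet fully dependent.

```python
import random

random.seed(7)

def corr(xs, ys):
    """Sample correlation coefficient: Cov(X, Y) / (sx * sy)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)

# Assumed input: X uniform on (-1, 1), symmetric about zero.
xs = [random.uniform(-1.0, 1.0) for _ in range(100_000)]

# Perfect linear relationship: |rho| = 1.
ys_linear = [2.0 * x + 3.0 for x in xs]

# Uncorrelated but strongly (nonlinearly) dependent: Y = X^2.
ys_square = [x * x for x in xs]

print(corr(xs, ys_linear))   # near 1.0
print(corr(xs, ys_square))   # near 0.0
```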


Introduction to Probability Theory

Covariance and Correlation Coefficient


 How does correlation between input variables affect reliability?

[Figure: scatter plots of correlated load and strength samples relative to the limit state G = 0.]

 Which case has the most reliable design?

 It may not be realistic to have correlated strength and load. Can you
think of any other combination of variables that has a realistic correlation?
