Lecture 2: Introduction to Probability Theory
Lecture #2
Part 1 – Outline
• Uncertainty in Design
• Types of Uncertainty
• Handling Uncertainty in Design Process
Example of Time-Independent Reliability
Definition of Reliability
A failure will occur when the load L is greater than the strength S,
or G (= L – S) > 0. Hence G > 0 indicates a failure.
The probability of failure can therefore be defined by
Pf = Pr(G > 0)
R = 1 − Pf = Pr(G ≤ 0)
Since S and L follow normal distributions, G (= L – S) is normal.
[Figure: PDF of G = L − S centered at μG; the area to the right of 0 equals Pf = Pr(G > 0).]
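Because L and S are normal, G = L − S is also normal with mean μG = μL − μS and variance σG² = σL² + σS², so Pf follows directly from the standard normal CDF. A minimal MATLAB sketch, assuming independent L and S and using made-up numerical values (normcdf from the Statistics Toolbox could replace the erfc-based helper):

% Illustrative values only, not from the lecture
muL = 100;  sigL = 10;            % load L ~ N(muL, sigL^2)
muS = 150;  sigS = 15;            % strength S ~ N(muS, sigS^2)
muG  = muL - muS;                 % mean of G = L - S
sigG = sqrt(sigL^2 + sigS^2);     % standard deviation of G (L, S independent)
Phi = @(z) 0.5*erfc(-z/sqrt(2));  % standard normal CDF (base MATLAB)
Pf  = 1 - Phi((0 - muG)/sigG)     % Pr(G > 0)
R   = 1 - Pf                      % reliability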
Example of Time-Independent Reliability
[Figure: PDFs of G = L − S for three designs, with the safe region G ≤ 0 and the failure region G > 0. Relative to Design 1, Design 2 improves the mean performance (μG moves further into the safe region) and Design 3 reduces the uncertainty (smaller σG).]
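The same closed form shows how both improvement strategies in the figure reduce Pf. A short sketch with made-up means and standard deviations, not the lecture's actual designs:

Phi = @(z) 0.5*erfc(-z/sqrt(2));          % standard normal CDF
Pf  = @(muG, sigG) 1 - Phi(-muG/sigG);    % Pr(G > 0) for G ~ N(muG, sigG^2)
Pf(-2, 1)      % Design 1: baseline
Pf(-3, 1)      % Design 2: improved mean performance (muG moved further below 0)
Pf(-2, 0.5)    % Design 3: reduced uncertainty (smaller sigma_G)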
Types of Uncertainty
[Figure: MATLAB command window showing std(thickness_a), the sample standard deviation of the measured thickness data.]
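A minimal sketch of the slide's MATLAB computation; thickness_a below is a made-up sample of thickness measurements, since the lecture's data are not reproduced here.

thickness_a = [2.48 2.51 2.49 2.53 2.47 2.50 2.52 2.46 2.51 2.50];  % mm (illustrative)
mean(thickness_a)   % sample mean
std(thickness_a)    % sample standard deviation (n-1 normalization), as on the slide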
Handling Uncertainty at Three Steps in Design
[Figure: three probability-density plots along the design process: measured data characterized as a distribution of the design variable, the design process propagating that uncertainty, and the resulting distributions of G (relative to the threshold 0) for Design 1 and Design 2.]
Probability Space
It represents our uncertainty regarding an experiment.
It has three parts:
• The sample space Ω, the set of all possible outcomes
• The event field A, the set of all possible events (an event E is a measurable subset of the sample space Ω, i.e., E ⊆ Ω)
• The probability measure P, a real-valued function of the events, P: A → [0, 1]
[Figure: the probability measure P maps an event E1 ⊆ Ω to a number P(E1) in [0, 1].]
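A minimal sketch of these three parts in MATLAB, using one roll of a six-sided die as the experiment (the example on the next slide); the event field is left implicit and P is the equally-likely counting measure.

Omega = 1:6;                           % sample space: outcomes of one die roll
E1    = [2 4 6];                       % an event, a subset of Omega
P     = @(E) numel(E) / numel(Omega);  % probability measure for equally likely outcomes
P(E1)                                  % returns 0.5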
Introduction to Probability Theory
Probability Space
Example experiment: Rolling a six-sided die
The sample space Ω = {1, 2, 3, 4, 5, 6}
Three example events
Rolling an even number: E1 = {2, 4, 6}
Rolling a 1 or a 3: E2 = {1, 3}
Rolling a 1 and a 3: E3 = { } or ∅ (Only one number can be rolled, so this
outcome is impossible. The event has no outcomes in it.)
The probabilities of events E1 and E2
Number of outcomes in E1 3
P ( E1 ) = = = 0.5
Number of equally likely outcomes 6
Number of outcomes in E2 2
P ( E2 ) = = ≈ 0.333
Number of equally likely outcomes 6
11
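The counting results can also be checked by simulation; a short MATLAB sketch using randi:

N = 1e6;
rolls = randi(6, N, 1);                   % N rolls of a fair six-sided die
P_E1 = mean(mod(rolls, 2) == 0)           % E1 = {2, 4, 6}: expect about 0.5
P_E2 = mean(rolls == 1 | rolls == 3)      % E2 = {1, 3}: expect about 0.333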
Introduction to Probability Theory
[Figure: the disjoint events E1 and E2 in Ω mapped by P into [0, 1]; for mutually exclusive events, P(E1) + P(E2) = P(E1 ∪ E2).]
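Since E1 = {2, 4, 6} and E2 = {1, 3} share no outcomes, the probability of their union is the sum of their probabilities; a quick numerical check under that assumption:

rolls = randi(6, 1e6, 1);
P_union = mean(mod(rolls, 2) == 0 | rolls == 1 | rolls == 3)        % about 5/6
P_sum   = mean(mod(rolls, 2) == 0) + mean(rolls == 1 | rolls == 3)  % about 0.5 + 0.333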
Introduction to Probability Theory
Conditional Probability
Allows reasoning with partial information
Conditional probability of E1 given E2
P(E1|E2) = P(E1 ∩ E2) / P(E2)
[Figure: overlapping events E1 and E2 in Ω with intersection E1 ∩ E2; the ratio P(E1 ∩ E2) / P(E2) maps to P(E1 | E2) in [0, 1].]
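A sketch of the definition on the die example, with hypothetical events E1 = "even" and E2 = "roll greater than 3"; by counting, P(E1 | E2) = (2/6)/(3/6) = 2/3.

rolls = randi(6, 1e6, 1);
P_E1_and_E2   = mean(mod(rolls, 2) == 0 & rolls > 3);  % P(E1 n E2), about 2/6
P_E2          = mean(rolls > 3);                       % P(E2), about 3/6
P_E1_given_E2 = P_E1_and_E2 / P_E2                     % expect about 0.667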
Introduction to Probability Theory
[Figure: Component A and Component B.]
Lecture #2
Part 2 – Outline
Introduction to Probability Theory
Conditional Probability
Allows reasoning with partial information
Conditional probability of E1 given E2
P(E1|E2) = P(E1 ∩ E2) / P(E2)
[Figure: overlapping events E1 and E2 in Ω; the ratio P(E1 ∩ E2) / P(E2) maps to P(E1 | E2) in [0, 1].]
Independent Events
If the occurrence of E2 does not affect that of E1, then E1 and E2
are said to be independent.
P(E1|E2) = P(E1), i.e., P(E1 ∩ E2)/P(E2) = P(E1)
P(E1 ∩ E2) = P(E1) ∙ P(E2)   (Multiplication Theorem)
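A sketch checking the Multiplication Theorem on the die with the hypothetical events E1 = "even" and E2 = "roll ≤ 4", which happen to be independent: P(E1 ∩ E2) = P({2, 4}) = 1/3 and P(E1) ∙ P(E2) = (1/2)(2/3) = 1/3.

rolls = randi(6, 1e6, 1);
lhs = mean(mod(rolls, 2) == 0 & rolls <= 4)          % P(E1 n E2), about 1/3
rhs = mean(mod(rolls, 2) == 0) * mean(rolls <= 4)    % P(E1)*P(E2), about 1/3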
Introduction to Probability Theory
[Figure: Frank's sequence of coin flips over time, with no tail appearing so far; P(tail on the next flip)?]
Would getting a tail be MORE likely because Frank has not seen it
in so long?
Two events are independent if the outcome of one event has no
effect on the outcome of the other.
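A simulation sketch of the question: among simulated fair-coin flips, look only at the flips that immediately follow five heads in a row and estimate the chance of a tail there. Independence says it should still be about 0.5.

N = 1e7;
heads  = rand(N, 1) < 0.5;                          % 1 = head, 0 = tail
runLen = conv(double(heads), ones(5, 1), 'valid');  % number of heads in each window of 5 flips
idx    = find(runLen(1:end-1) == 5) + 5;            % flips right after 5 straight heads
P_tail_after_5_heads = mean(heads(idx) == 0)        % expect about 0.5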
Handling Uncertainty at Three Steps in Design
[Figure: the design-process plots repeated from above (data, design variable, G) together with Pf = Pr(G > 0); a lower panel plots the performance function G(X) against the design variable X and contrasts the optimal solution and the robust solution (Design 1 and Design 2).]
Introduction to Probability Theory
[Figure: histogram of component-thickness data X with a fitted PDF model.]
Random Variable
Numerical outcome of a random experiment.
The probability distribution of a random variable is the collection
of possible outcomes along with their probabilities:
Continuous case (PDF): P(x1 < X ≤ x2) = ∫_{x1}^{x2} fX(x) dx
Normal distribution
P(6 < X ≤ 7) = ∫_{6}^{7} fX(x) dx
[Figure: histogram of component-thickness data X with a fitted normal PDF; the area under fX between x = 6 and x = 7 gives P(6 < X ≤ 7).]
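A sketch of this calculation for an assumed normal thickness model; the mean and standard deviation below are made-up values, not fitted to the lecture's data.

mu = 6.5;  sigma = 0.4;                                          % assumed X ~ N(mu, sigma^2)
fX = @(x) exp(-(x - mu).^2 / (2*sigma^2)) / (sigma*sqrt(2*pi));  % normal PDF
P  = integral(fX, 6, 7)                                          % P(6 < X <= 7)
% with the Statistics Toolbox: normcdf(7, mu, sigma) - normcdf(6, mu, sigma)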
[Figure: Component A and Component B.]
Lecture #2
Part 3 – Outline
Introduction to Probability Theory
(1) fX(x) ≥ 0, fY(y) ≥ 0
(2) ∫_{−∞}^{+∞} fX(x) dx = 1, ∫_{−∞}^{+∞} fY(y) dy = 1
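A quick numerical check of property (2) for a normal PDF (the unit mean and standard deviation are chosen arbitrarily):

mu = 0;  sigma = 1;
fX = @(x) exp(-(x - mu).^2 / (2*sigma^2)) / (sigma*sqrt(2*pi));
integral(fX, -Inf, Inf)    % property (2): integrates to 1 (up to numerical error)
% property (1) holds because fX(x) >= 0 for every x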