3. Raghunath Chatterjee Probability Lecture

The document discusses the importance of learning probability, explaining its role in quantifying uncertainty and bridging descriptive and inferential statistics. It covers basic concepts such as experiments, events, and mutual exclusivity, as well as methods for calculating probabilities, including the additive and multiplicative rules. Additionally, it introduces conditional probabilities, Bayes' Rule, and the concepts of permutations and combinations for analyzing patterns.

Probability

Why Learn Probability?


• Nothing in life is certain. In everything we do, we gauge
the chances of successful outcomes, from business to
medicine to the weather
• A probability provides a quantitative description of the
chances or likelihoods associated with various outcomes
• It provides a bridge between descriptive and inferential
statistics

Probability

[Diagram: probability reasons from the population to a sample; statistics reasons from an observed sample back to the population.]
What is Probability?
• We used graphs and numerical measures to
describe data sets which were usually
samples.
• We measured “how often” using
Relative frequency = f/n

• As n gets larger,
  Sample → Population
  and "how often" = relative frequency → probability
Basic Concepts

• An experiment is the process by which an observation (or measurement) is obtained.
• An event is an outcome of an experiment,
usually denoted by a capital letter.
– The basic element to which probability is
applied
– When an experiment is performed, a
particular event either happens, or it
doesn’t!
Basic Concepts

• Two events are mutually exclusive if, when one event occurs, the other cannot, and vice versa.
• Experiment: Toss a die
  – A: observe an odd number
  – B: observe a number greater than 2
  – C: observe a 6
  – D: observe a 3
  B and C? Not mutually exclusive (a 6 satisfies both).
  B and D? Not mutually exclusive (a 3 satisfies both).
  C and D? Mutually exclusive (the die cannot show both a 3 and a 6).
Basic Concepts
• An event is a collection of possible outcomes.
• A simple event is the event of a SINGLE
outcome.
• A complementary event is just the opposite of
an event.
• Each simple event will be assigned a probability,
measuring “how often” it occurs.
• The set of all simple events of an experiment is
called the sample space, S.
Example
• The die toss — simple events:
  E1: observe a 1, E2: observe a 2, E3: observe a 3,
  E4: observe a 4, E5: observe a 5, E6: observe a 6
• Sample space: S = {E1, E2, E3, E4, E5, E6}
The Probability of an Event

• The probability of an event A measures "how often" A will occur. We write P(A).
• Suppose that an experiment is performed n times. The relative frequency for an event A is

  Relative frequency = (number of times A occurs)/n = f/n

• If we let n get infinitely large,

  P(A) = lim_{n→∞} f/n
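The limiting behaviour of f/n can be illustrated with a quick simulation — a minimal sketch; the function name and the fixed seed are ours, not from the lecture:

```python
import random

def relative_frequency(trials, seed=0):
    """Estimate P(Head) for a fair coin as f/n over `trials` tosses."""
    rng = random.Random(seed)  # fixed seed, so the run is reproducible
    heads = sum(rng.random() < 0.5 for _ in range(trials))
    return heads / trials

# As n grows, the relative frequency f/n settles near the true P(Head) = 0.5.
for n in (100, 10_000, 1_000_000):
    print(n, relative_frequency(n))
```

Running this shows the estimate wandering for small n and stabilising near 0.5 for large n, which is exactly the limit statement above.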
The Probability of an Event
• P(A) must be between 0 and 1.
– If event A can never occur, P(A) = 0. If event A
always occurs when the experiment is performed,
P(A) =1.
• The sum of the probabilities for all simple
events in S equals 1.

• The probability of an event A is found


by adding the probabilities of all the
simple events contained in A.
Finding Probabilities
• Probabilities can be found using
– Estimates from empirical studies
– Common sense estimates based on equally likely
events.

• Examples:
–Toss a fair coin. P(Head) = 1/2
– Suppose that 10% of the students in your class are from a chemistry background. Then for a person selected at random, P(chemistry) = .10
Using Simple Events
• The probability of an event A is equal to the
sum of the probabilities of the simple events
contained in A
• If the simple events in an experiment are
equally likely, you can calculate

  P(A) = n_A / N = (number of simple events in A) / (total number of simple events)
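This counting rule is easy to sketch in Python — the `die` list and `prob` helper are our own illustration, not from the lecture:

```python
from fractions import Fraction

# Equally likely simple events for one die toss.
die = [1, 2, 3, 4, 5, 6]

def prob(event):
    """P(A) = n_A / N: count the simple events in A, divide by the total."""
    n_a = sum(1 for x in die if event(x))
    return Fraction(n_a, len(die))

p_odd = prob(lambda x: x % 2 == 1)  # A: odd number -> 3/6 = 1/2
p_gt2 = prob(lambda x: x > 2)       # B: greater than 2 -> 4/6 = 2/3
```

Using `Fraction` keeps the answers exact, matching the 3/6-style arithmetic in the slides.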
Example 1

Toss a fair coin twice. What is the probability of


observing at least one head?
1st Coin 2nd Coin Ei P(Ei)

H HH 1/4 P(at least 1 head)


H
T HT = P(E1) + P(E2) + P(E3)
1/4
= 1/4 + 1/4 + 1/4 = 3/4
H TH 1/4
T
T TT 1/4
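The same answer can be checked by enumerating the sample space — a small sketch, with variable names of our choosing:

```python
from itertools import product

# Sample space for two tosses: four equally likely simple events.
sample_space = [''.join(e) for e in product("HT", repeat=2)]  # HH, HT, TH, TT
favorable = [e for e in sample_space if "H" in e]             # HH, HT, TH
p_at_least_one_head = len(favorable) / len(sample_space)      # 3/4
```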
Example 3
The sample space of throwing a pair of dice consists of 36 equally likely (red, green) outcomes, each with probability 1/36.

  Event               Simple events   Probability
  Dice add to 3       (1,2), (2,1)    2/36
  Dice add to 6       ?               ?
  Red die shows 1     ?               6/36
  Green die shows 1   ?               ?
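Table entries like these can be verified by enumerating all 36 outcomes — a sketch of ours, not part of the lecture:

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely (red, green) outcomes.
outcomes = list(product(range(1, 7), repeat=2))

def prob(event):
    """Count favorable outcomes out of the 36 equally likely ones."""
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

p_sum3 = prob(lambda o: o[0] + o[1] == 3)  # 2/36, matching the table
p_sum6 = prob(lambda o: o[0] + o[1] == 6)  # fills in the "dice add to 6" row
p_red1 = prob(lambda o: o[0] == 1)         # 6/36, matching the table
```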


Event Relations
The beauty of using events, rather than simple events, is
that we can combine events to make other events using
logical operations: and, or and not.
The union of two events, A and B, is the event that either A or B or both occur when the experiment is performed. We write A ∪ B.
Event Relations
The intersection of two events, A and B, is
the event that both A and B occur when the
experiment is performed. We write A ∩ B.

• If two events A and B are mutually


exclusive, then P(A ∩ B) = 0.
Event Relations
The complement of an event A consists of all outcomes of the experiment that do not result in event A. We write A^C.
Example
Select a student from the classroom and record his/her hair color and gender.
  – A: student has brown hair
  – B: student is female
  – C: student is male

What is the relationship between events B and C?
  • B and C are mutually exclusive and complementary: B = C^C.
  • A^C: student does not have brown hair
  • B∩C: student is both male and female = ∅
  • B∪C: student is either male or female = all students = S
Calculating Probabilities for
Unions and Complements
• There are special rules that will allow you to
calculate probabilities for composite events.
• The Additive Rule for Unions:
• For any two events, A and B, the probability of
their union, P(A ∪ B), is

P ( A ∪ B ) = P ( A) + P ( B ) − P ( A ∩ B)
Example: Additive Rule
Example: Suppose that there were 120 students in the classroom, and that they could be classified as follows:

            Brown  Not Brown
  Male      20     40
  Female    30     30

  A: brown hair   P(A) = 50/120
  B: female       P(B) = 60/120

  P(A∪B) = P(A) + P(B) – P(A∩B)
         = 50/120 + 60/120 – 30/120
         = 80/120 = 2/3
  Check: P(A∪B) = (20 + 30 + 30)/120 = 80/120
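The additive rule can be checked directly against the table — a minimal sketch; the dictionary layout is our own:

```python
# Counts from the 120-student table (gender x hair color).
table = {("male", "brown"): 20, ("male", "not brown"): 40,
         ("female", "brown"): 30, ("female", "not brown"): 30}
n = sum(table.values())  # 120

p_brown = sum(v for (g, h), v in table.items() if h == "brown") / n    # 50/120
p_female = sum(v for (g, h), v in table.items() if g == "female") / n  # 60/120
p_both = table[("female", "brown")] / n                                # 30/120

# Additive rule: P(A u B) = P(A) + P(B) - P(A n B)
p_union = p_brown + p_female - p_both                                  # 80/120 = 2/3
```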
A Special Case
When two events A and B are mutually exclusive, P(A∩B) = 0 and P(A∪B) = P(A) + P(B).

            Brown  Not Brown
  Male      20     40
  Female    30     30

  A: male with brown hair     P(A) = 20/120
  B: female with brown hair   P(B) = 30/120

  A and B are mutually exclusive, so that
  P(A∪B) = P(A) + P(B) = 20/120 + 30/120 = 50/120
Calculating Probabilities for Complements

• We know that for any event A: P(A ∩ A^C) = 0
• Since either A or A^C must occur, P(A ∪ A^C) = 1
• so that P(A) + P(A^C) = 1, giving

  P(A^C) = 1 – P(A)
Example

Select a student at random


from the classroom. Define:
A: male Brown Not Brown
P(A) = 60/120 Male 20 40
B: female Female 30 30
P(B) = ?

A and B are P(B) = 1- P(A)


complementary, so = 1- 60/120 = 60/120
that
Calculating Probabilities for
Intersections
In the previous example, we found P(A ∩ B)
directly from the table. Sometimes this is
impractical or impossible. The rule for calculating
P(A ∩ B) depends on the idea of independent
and dependent events.
Two events, A and B, are said to be
independent if the occurrence or
nonoccurrence of one of the events does
not change the probability of the
occurrence of the other event.
Conditional Probabilities
The probability that A occurs, given that event B has occurred, is called the conditional probability of A given B (the "|" reads "given") and is defined as

  P(A | B) = P(A ∩ B) / P(B),   if P(B) ≠ 0
Example: Two Dice
Toss a pair of fair dice. Define
  – A: red die shows 1
  – B: green die shows 1

  P(A|B) = P(A and B)/P(B) = (1/36)/(1/6) = 1/6 = P(A)

P(A) does not change whether B happens or not: A and B are independent!
Example: Two Dice
Toss a pair of fair dice. Define
  – A: dice add to 3
  – B: dice add to 6

  P(A|B) = P(A and B)/P(B) = (0/36)/(5/36) = 0 ≠ P(A)

P(A) does change when B happens: A and B are dependent! In fact, when B happens, A can't.
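Both dice examples can be reproduced by enumeration — a sketch with helper names of our own; `cond` implements the conditional-probability definition from the slides:

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))  # 36 (red, green) pairs

def prob(event):
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

def cond(a, b):
    """P(A|B) = P(A and B) / P(B)."""
    return prob(lambda o: a(o) and b(o)) / prob(b)

red1, green1 = (lambda o: o[0] == 1), (lambda o: o[1] == 1)
add3, add6 = (lambda o: sum(o) == 3), (lambda o: sum(o) == 6)

independent = cond(red1, green1) == prob(red1)  # True: 1/6 == 1/6
dependent = cond(add3, add6) != prob(add3)      # True: 0 != 2/36
```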
Defining Independence
• We can redefine independence in terms of
conditional probabilities:
Two events A and B are independent if and
only if
P(A|B) = P(A) or P(B|A) = P(B)
Otherwise, they are dependent.
• Once you’ve decided whether or not two
events are independent, you can use the
following rule to calculate their
intersection.
The Multiplicative Rule for
Intersections
• For any two events, A and B, the probability that
both A and B occur is
P(A ∩ B) = P(A) P(B given that A occurred)
= P(A)P(B|A)

• If the events A and B are independent,


then the probability that both A and B
occur is
P(A ∩ B) = P(A) P(B)
Example
In a certain population, 10% of the people can be
classified as being high risk for a heart attack. Three
people are randomly selected from this population.
What is the probability that exactly one of the three
are high risk?
Define H: high risk N: not high risk
P(exactly one high risk) = P(HNN) + P(NHN) + P(NNH)
= P(H)P(N)P(N) + P(N)P(H)P(N) + P(N)P(N)P(H)
= (.1)(.9)(.9) + (.9)(.1)(.9) + (.9)(.9)(.1) = 3(.1)(.9)² = .243
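The three-term sum above can be reproduced by enumerating all HNN-style sequences — a sketch of ours, assuming the three selections are independent as in the example:

```python
from itertools import product

p = {"H": 0.1, "N": 0.9}  # high risk / not high risk

# Sum P(sequence) over the three-person sequences with exactly one H.
p_exactly_one = sum(p[a] * p[b] * p[c]
                    for a, b, c in product("HN", repeat=3)
                    if (a, b, c).count("H") == 1)  # 3 * (.1)(.9)^2 = .243
```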
Example
Suppose we have additional information in the
previous example. We know that only 49% of the
population are female. Also, of the female patients, 8%
are high risk. A single person is selected at random. What
is the probability that it is a high risk female?
Define H: high risk F: female
From the example, P(F) = .49 and P(H|F) = .08.
Use the Multiplicative Rule:
P(high risk female) = P(H∩F)
= P(F)P(H|F) =.49(.08) = .0392
Bayes Rule
•Bayes Rule is a very powerful probability law.
An example from medicine:
Let D be a disease and S a symptom.
A doctor may be interested in P(D|S).
This is a hard probability to assign directly.
A probability that is much easier to calculate
is P(S|D), i.e. one estimated from patient records.

•The power of Bayes Rule is its ability to take


P(S|D) and calculate P(D|S).
We have already seen a version of Bayes' Rule before:

  P(B | A) = P(A ∩ B) / P(A)

Using the Multiplication Law we can re-write this as

  P(B | A) = P(A | B) P(B) / P(A)    (Bayes' Rule)
Example
A gene has two possible alleles A1 and A2. 75% of
the population have A1. B is a disease that has 3
forms: B1 (mild), B2 (severe) and B3 (lethal). A1 is a
protective gene, with probabilities of having the
three forms of the disease given A1 as 0.9, 0.1 and
0 respectively. People with A2 are unprotected
and have the three forms with probabilities 0, 0.5
and 0.5 respectively.

What is the probability that a person has gene A1


given they have the severe disease?
First decode the information:

  P(A1) = 0.75,  P(A2) = 0.25
  P(B1 | A1) = 0.9,  P(B2 | A1) = 0.1,  P(B3 | A1) = 0
  P(B1 | A2) = 0,    P(B2 | A2) = 0.5,  P(B3 | A2) = 0.5

We have to calculate P(A1 | B2). From Bayes' Rule we know that

  P(A1 | B2) = P(B2 | A1) P(A1) / P(B2)

We know P(B2 | A1) and P(A1), but what is P(B2)?

  P(B2) = P(B2 ∩ A1) + P(B2 ∩ A2)
        = P(B2 | A1) P(A1) + P(B2 | A2) P(A2)
        = 0.1 × 0.75 + 0.5 × 0.25
        = 0.2

So P(A1 | B2) = (0.1 × 0.75)/0.2 = 0.375.
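The whole calculation fits in a few lines — a minimal sketch; the dictionary names are ours:

```python
# Priors and likelihoods from the gene example.
p_gene = {"A1": 0.75, "A2": 0.25}
p_b2_given = {"A1": 0.1, "A2": 0.5}  # P(B2 | gene)

# Law of total probability: P(B2) = sum_g P(B2|g) P(g)
p_b2 = sum(p_b2_given[g] * p_gene[g] for g in p_gene)    # 0.2

# Bayes' Rule: P(A1|B2) = P(B2|A1) P(A1) / P(B2)
p_a1_given_b2 = p_b2_given["A1"] * p_gene["A1"] / p_b2   # 0.375
```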
Permutation and combination
Sometimes, we observe a specific pattern from a large number of possible patterns.

To calculate the probability of a pattern, we need to count the number of ways our pattern could have arisen.

This is why we need to learn about permutations and combinations.
Permutation
Combination
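A permutation is an ordered arrangement of r items from n, P(n, r) = n!/(n−r)!; a combination is an unordered selection, C(n, r) = n!/(r!(n−r)!). Python's standard library computes both — the example numbers are our own illustration:

```python
from math import comb, perm

# Permutation: ordered arrangement of r items from n, P(n, r) = n!/(n-r)!
n_orderings = perm(5, 2)   # 20 ways to arrange 2 of 5 distinct items in order

# Combination: unordered selection, C(n, r) = n!/(r!(n-r)!)
n_selections = comb(5, 2)  # 10 ways to choose 2 of 5 items, order ignored

# Probability of one specific unordered pair when all selections are equally likely:
p_specific_pair = 1 / comb(5, 2)  # 1/10
```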
