Summary of Lectures - Chapter 1. Probability

Chapter 1 introduces fundamental concepts in probability, including experiments, sample spaces, events, and their relations. It covers probability rules such as the complement rule, addition rule, conditional probability, and Bayes' rule, along with methods for counting outcomes like permutations and combinations. The chapter concludes with an overview of Bernoulli trials and their probability calculations.


Summary of chapter 1.

1.1 Basic Notions


1.1.1 Experiments
- An experiment is the process by which an observation (or measurement) is obtained.
1.1.2 Sample space
- An outcome of an experiment is any possible observation of that experiment.
- The sample space of an experiment is the set of all possible outcomes for an experiment, denoted
by S.
1.1.3 Events
- An event is a set of outcomes of an experiment (or a subset of a sample space).
- A simple event is an event that consists of exactly one outcome.
1.1.4 Event Relations
- The union of events A and B, denoted by A∪B (or A+B) is the event that either A or B or both
occur.

- The intersection of events A and B, denoted by A ∩ B (or AB), is the event that both A and B
occur.

- Two events A and B are mutually exclusive (disjoint) if, when one event occurs, the other
cannot, and vice versa. That is, A ∩ B = AB = ∅.

- A collection of events A1, A2, …, An is mutually exclusive if and only if Ai∩Aj = ∅, i ≠ j.


- A collection of events A1, A2, …, An is collectively exhaustive if and only if A1 ∪A2 ∪…∪An = S
- The complement of event A is the set of all outcomes in the sample space that are not included in
event A. Denoted by A^c or A′.
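
- Since events are just sets of outcomes, these relations map directly onto set operations. A minimal Python sketch (with an arbitrary small sample space chosen only for illustration):

  S = {1, 2, 3, 4, 5, 6}            # sample space: one roll of a die
  A = {2, 4, 6}                     # event: even number
  B = {4, 5, 6}                     # event: number greater than 3

  print(A | B)                      # union A ∪ B -> {2, 4, 5, 6}
  print(A & B)                      # intersection A ∩ B -> {4, 6}
  print(S - A)                      # complement of A -> {1, 3, 5}
  print(A.isdisjoint({1, 3, 5}))    # mutually exclusive with the odd outcomes -> True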

1.1.5 Event space


- An event space is a collectively exhaustive, mutually exclusive set of events.
1.1.6 Counting sample points
- Multiplication Rule: If an operation can be performed in n1 ways, and if for each of these
a second operation can be performed in n2 ways, and for each of the first two a third
operation can be performed in n3 ways, and so forth, then the sequence of k operations
can be performed in n1n2 ...nk ways.
- Permutation: the number of permutations of n distinct objects taken k at a time is

  A_n^k = n(n − 1)...(n − k + 1) = n!/(n − k)!

!!! Permutation of n objects taken k at a time: ORDER, NO REPEAT.

- The number of n-permutations of n distinguishable objects is

  P_n = A_n^n = n(n − 1)...2·1 = n!
- The number of distinct combinations of n distinct objects that can be formed, taking them
k at a time, is

  C_n^k = n!/(k!(n − k)!)

!!! Combination of n objects taken k at a time: NO REPEAT, NO ORDER.
- Sampling with Replacement: Given n distinguishable objects, there are Ā_n^k = n^k ways to
choose, with replacement, an ordered sample of k objects.
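
- As a quick check of these counting formulas, here is a minimal Python sketch (the values n = 5, k = 3 are arbitrary, chosen only for illustration):

  import math

  n, k = 5, 3                   # illustrative values

  # Permutations: order matters, no repetition -> n!/(n - k)!
  print(math.perm(n, k))        # 60

  # Combinations: no order, no repetition -> n!/(k!(n - k)!)
  print(math.comb(n, k))        # 10

  # Ordered sampling with replacement -> n**k
  print(n ** k)                 # 125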
1.2 Probability of an event
- The probability P[A] of an event A is a measure of our belief that the event A will occur.
- Theoretical probability (Classical approach): If an experiment has n possible equally likely
outcomes, this method would assign a probability of 1/n to each outcome. Then if an event A
contains exactly m outcomes, the probability of event A is
  P(A) = m/n
- In general, the probability P(A) of event A is the sum of the probabilities assigned to the
outcomes (simple events) contained in A:


  P(A) = ∑_{Oi ∈ A} P(Oi)

- Empirical Probability (Relative frequency): assigning probabilities based on experimentation or
historical data. If an experiment is performed n times, then the relative frequency of a particular
occurrence, say A, is

  relative frequency = frequency / n

where the frequency is the number of times the event A occurred. Then the limiting relative frequency
of the event A is defined as the probability of event A; that is

  P(A) = lim_{n→+∞} (frequency / n)
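
- To illustrate both approaches, a minimal Python sketch that estimates the probability of rolling an even number with a fair die and compares it with the classical value m/n = 3/6 (the number of trials is an arbitrary choice):

  import random

  trials = 100_000                              # arbitrary; larger n gets closer to the limit
  freq = sum(1 for _ in range(trials) if random.randint(1, 6) % 2 == 0)

  print("relative frequency:", freq / trials)   # close to 0.5
  print("classical m/n     :", 3 / 6)           # 3 favorable outcomes out of 6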
1.3 Probability rules

1.3.1 Complement rule: P(Ā) = 1 − P(A)

1.3.2 Addition rule:

• P(A ∪ B) = P(A) + P(B) − P(A ∩ B)

• If A and B are mutually exclusive (i.e., A ∩ B = ∅), then P(A ∪ B) = P(A) + P(B)

• P(A ∪ B ∪ C ) = P(A) + P(B) + P(C ) − P(A ∩ B) − P(B ∩ C ) − P(C ∩ A) + P(A ∩ B ∩ C )


In general,

  P(A1 ∪ A2 ∪ … ∪ An) = ∑_{i=1}^{n} P(Ai) − ∑_{i<j} P(Ai Aj) + ∑_{i<j<k} P(Ai Aj Ak) − … + (−1)^{n+1} P(A1 A2…An)
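
A quick way to see the addition rule in action is to check it by direct enumeration on a finite, equally likely sample space. The sketch below (with arbitrarily chosen events: multiples of 2, 3, and 5 among 1..30) verifies the three-event formula:

  from fractions import Fraction

  S = range(1, 31)                          # 30 equally likely outcomes
  A = {x for x in S if x % 2 == 0}          # multiples of 2
  B = {x for x in S if x % 3 == 0}          # multiples of 3
  C = {x for x in S if x % 5 == 0}          # multiples of 5

  def P(E):
      return Fraction(len(E), 30)           # classical probability m/n

  lhs = P(A | B | C)
  rhs = P(A) + P(B) + P(C) - P(A & B) - P(B & C) - P(C & A) + P(A & B & C)
  print(lhs, rhs, lhs == rhs)               # both sides agree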

1.3.3 Conditional probability rule: Conditional probability of event A given event B, denoted by
P(A | B), is the probability of event A given that the event B has occurred. The conditional
probability formula is:
  P(A | B) = P(AB) / P(B)
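
As an illustration (the events here are arbitrary), the sketch below computes P(A | B) = P(AB)/P(B) by enumerating the 36 equally likely outcomes of rolling two fair dice, with A = "the sum is 8" and B = "the first die shows 3":

  from fractions import Fraction
  from itertools import product

  S = list(product(range(1, 7), repeat=2))       # 36 equally likely outcomes
  A = {(i, j) for i, j in S if i + j == 8}       # sum is 8
  B = {(i, j) for i, j in S if i == 3}           # first die shows 3

  def P(E):
      return Fraction(len(E), len(S))

  print(P(A & B) / P(B))                         # P(A | B) = (1/36)/(6/36) = 1/6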
1.3.4 The multiplication rule:

• P(AB) = P(A)P(B | A) = P(B)P(A | B)

• P(ABC) = P(A)P(B | A)P(C | AB)

• In general, P(A1 A2…An) = P(A1)P(A2 | A1)P(A3 | A1 A2)…P(An | A1…An−1)

• A and B are independent if P(A | B) = P(A) or P(B | A) = P(B) or P(AB) = P(A)P(B)

• In general, A1, A2, …, An are (mutually) independent if P(Ai1 Ai2…Aik) = P(Ai1)P(Ai2)…P(Aik) for
every choice of indices i1 < i2 < … < ik; the weaker condition P(Ai Aj) = P(Ai)P(Aj), ∀i ≠ j, gives
only pairwise independence.
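
A minimal check of the product-rule criterion for independence on the two-dice sample space (the events are arbitrary illustrations):

  from fractions import Fraction
  from itertools import product

  S = list(product(range(1, 7), repeat=2))       # 36 equally likely outcomes
  A = {(i, j) for i, j in S if i % 2 == 0}       # first die even
  B = {(i, j) for i, j in S if j % 2 == 0}       # second die even
  C = {(i, j) for i, j in S if i + j == 8}       # sum is 8

  def P(E):
      return Fraction(len(E), len(S))

  print(P(A & B) == P(A) * P(B))                 # True: A and B are independent
  print(P(A & C) == P(A) * P(C))                 # False: A and C are dependent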

1.4 Bayes’ rule

1.4.1 The total probability

• For 2 events A and B, the probability of the event B can be expressed as

P(B) = P(AB) + P(ĀB) = P(A)P(B | A) + P(Ā)P(B | Ā)

• For an event space {A1, A2, …, An} with P(Ai) > 0 for all i and an event A, the probability of the
event A can be expressed as

  P(A) = P(A A1) + P(A A2) + … + P(A An)

       = P(A1)P(A | A1) + P(A2)P(A | A2) + … + P(An)P(A | An)
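
A small worked example (the numbers are hypothetical, chosen only for illustration): a factory has two machines, A1 producing 60% of items with a 2% defect rate and A2 producing 40% with a 5% defect rate; the total probability of a defect is computed below.

  # Hypothetical numbers, for illustration only.
  priors = {"A1": 0.60, "A2": 0.40}              # P(Ai): the event space (disjoint, exhaustive)
  defect_given = {"A1": 0.02, "A2": 0.05}        # P(defect | Ai)

  # Total probability: P(defect) = sum over i of P(Ai) * P(defect | Ai)
  p_defect = sum(priors[m] * defect_given[m] for m in priors)
  print(p_defect)                                # 0.6*0.02 + 0.4*0.05 = 0.032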

1.4.2 Bayes’ rule

• For 2 events A and B, the posterior probability of the event A given B can be expressed as
  P(A | B) = P(A)P(B | A) / P(B) = P(A)P(B | A) / (P(A)P(B | A) + P(Ā)P(B | Ā))

• For an event space {A1, A2, …, An} with P(Ai) > 0 for all i and an event A, the posterior probability
of the event Ai given A can be expressed as

  P(Ai | A) = P(Ai)P(A | Ai) / P(A)

            = P(Ai)P(A | Ai) / (P(A1)P(A | A1) + P(A2)P(A | A2) + … + P(An)P(A | An))
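
Continuing the hypothetical factory example from 1.4.1, Bayes' rule gives the posterior probability that a defective item came from each machine:

  priors = {"A1": 0.60, "A2": 0.40}              # P(Ai), same illustrative numbers as above
  defect_given = {"A1": 0.02, "A2": 0.05}        # P(defect | Ai)

  p_defect = sum(priors[m] * defect_given[m] for m in priors)   # total probability = 0.032

  # Bayes' rule: P(Ai | defect) = P(Ai) * P(defect | Ai) / P(defect)
  posterior = {m: priors[m] * defect_given[m] / p_defect for m in priors}
  print(posterior)                               # {'A1': 0.375, 'A2': 0.625}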

1.5 Bernoulli Trials


- A Bernoulli trial (or binomial trial) is a random experiment with exactly two possible outcomes,
"success" and "failure", in which the probability of success is the same (equals p) every time the
experiment is conducted.

- The probability of k successes and n − k failures in n Bernoulli trials is


  P_n(k) = C_n^k p^k (1 − p)^(n−k)
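
As a quick numerical check (the values n = 10, k = 3, p = 0.25 are arbitrary), the Bernoulli-trial probability can be evaluated directly:

  import math

  n, k, p = 10, 3, 0.25                          # illustrative values
  prob = math.comb(n, k) * p**k * (1 - p)**(n - k)
  print(prob)                                    # ≈ 0.2503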
