Statistics - Notes
Deterministic Experiments
• Experiments in which the initial conditions remain the same in every repetition and the outcome of each repetition is always the same are known as deterministic experiments.
Example: If a piece of zinc is put into dilute sulphuric acid at room temperature, the outcome will always be the formation of hydrogen.
Random Experiments
Characteristic Features of Random Experiment
• It may be repeated any number of times under more or less similar conditions.
• The outcomes vary irregularly from repetition to repetition even when the initial
conditions are the same.
Example: Checking the blood group of a selected person; the outcome will be any one among A, B, AB and O.
Basic Terminologies
• Sample Space: The set of all possible outcomes of a random experiment is known as the sample space. A sample space is a set, and so it may be represented by the methods usually used to specify sets: the roster (listing) method and the rule (set-builder) method.
Example: Tossing a coin and observing the side that turns up. Sample space, S = {H, T}.
• Event: An event is a subset of the sample space.
Mutually Exclusive Events: Events are said to be mutually exclusive if the happening of any one of them precludes the happening of all the others.
Example: If a coin is tossed, either head will turn up or tail will turn up. Both head and tail cannot turn up simultaneously. So we say that the events “head turning up” and “tail turning up” are mutually exclusive events.
Exhaustive Events: Events are said to be exhaustive if their union is the whole sample space. For example, in a toss of a die with events 𝐴1 , 𝐴2 , 𝐴3 that together cover all six faces,
𝐴1 ∪ 𝐴2 ∪ 𝐴3 = {1,2,3,4,5,6} = S
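The union above can be checked mechanically. A minimal Python sketch, assuming one particular choice of events 𝐴1 = {1, 2}, 𝐴2 = {3, 4}, 𝐴3 = {5, 6} (the notes do not specify the sets):

```python
# Illustrative events on a die toss (the particular sets A1, A2, A3 are
# assumed here for demonstration; only their union is given in the notes).
S = {1, 2, 3, 4, 5, 6}
A1, A2, A3 = {1, 2}, {3, 4}, {5, 6}

# Mutually exclusive: no two of the events share an outcome.
assert A1 & A2 == set() and A2 & A3 == set() and A1 & A3 == set()
# Exhaustive: together they cover the whole sample space.
assert A1 | A2 | A3 == S
print("A1, A2, A3 are mutually exclusive and exhaustive")
```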
Equally Likely Events: Outcomes of a random experiment are said to be equally likely if, taking into consideration all the relevant evidence, there is no reason to expect one in preference to another.
Example: In a random toss of an unbiased coin, head and tail are equally likely events. In throwing an unbiased die, all 6 faces are equally likely to come up.
Classical Definition of Probability
If a random experiment results in “n” exhaustive, mutually exclusive and equally likely outcomes, out of which “m” are favorable to the occurrence of an event E, then the probability “P” of occurrence of E, usually denoted by “P(E)”, is given by
𝑃(𝐸) = 𝑚/𝑛
Properties of Probability
1. P(E) ≥ 0
2. 0 ≤ 𝑃(𝐸) ≤ 1
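The classical formula 𝑃(𝐸) = 𝑚/𝑛 is just a ratio of counts, so it can be sketched directly. The sample space and event below are illustrative:

```python
from fractions import Fraction

# Sample space for a single toss of a fair die: six equally likely outcomes.
sample_space = {1, 2, 3, 4, 5, 6}

def classical_probability(event, space):
    """P(E) = m / n: favorable outcomes over exhaustive, equally likely outcomes."""
    favorable = event & space            # m: outcomes favorable to E
    return Fraction(len(favorable), len(space))

even = {2, 4, 6}                         # event E: "an even number turns up"
p = classical_probability(even, sample_space)
print(p)                                 # 1/2
# Both listed properties hold: P(E) >= 0 and 0 <= P(E) <= 1.
assert 0 <= p <= 1
```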
• To determine the probability of an event we require only the number of points in the sample space and the number of points in the event.
• This is the only definition which enables us to determine the probability of an event exactly.
• If the various outcomes of the random experiment are not equally likely, this definition cannot be applied.
Ex: The probability that a candidate will pass in a certain test is not 50%, since “pass” and “fail” need not be equally likely.
Statistical (Frequency) Definition of Probability
If trials are repeated a great number of times under essentially the same conditions, the limit of the ratio of the number of times an event happens to the total number of trials, as the number of trials increases indefinitely, is called the probability of the happening of that event.
𝑃(𝐸) = lim𝑛→∞ 𝑚/𝑛
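The limiting ratio can be illustrated by simulation. A small sketch, assuming a fair coin simulated with Python's random module (a fixed seed makes the run reproducible):

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

def relative_frequency(trials):
    """m/n: proportion of heads in `trials` simulated fair-coin tosses."""
    heads = sum(random.random() < 0.5 for _ in range(trials))
    return heads / trials

# As n grows, m/n settles near the true probability 0.5.
for n in (100, 10_000, 1_000_000):
    print(n, relative_frequency(n))
```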
• The probability of an event can be determined without making assumptions such as equally likely outcomes.
Limitations
• It is difficult to repeat experiments a large number of times under the same conditions in
many practical situations.
Axiomatic Definition of Probability
The function P(A) will be called a probability measure of an event A if it satisfies the following three axioms:
(1) P(A) is a real number such that P(A) ≥ 0 for every event A.
(2) P(S) = 1, where S is the sample space.
(3) If 𝐴1 , 𝐴2 , …, are events such that 𝐴𝑖 ∩ 𝐴𝑗 = {}, when 𝑖 ≠ 𝑗, for all 𝑖 and 𝑗, then 𝑃(𝐴1 ∪ 𝐴2 ∪ … ) = 𝑃(𝐴1 ) + 𝑃(𝐴2 ) + ⋯
Theorem – 3: 𝑃(𝜑) = 0
Conditional Probability
The probability of an event A given that an event B of the same sample space has happened is
called the conditional probability of A given B and is denoted by P(A|B) and is defined as
𝑃(𝐴|𝐵) = 𝑃(𝐴 ∩ 𝐵)/𝑃(𝐵), provided 𝑃(𝐵) ≠ 0.
Similarly,
𝑃(𝐵|𝐴) = 𝑃(𝐴 ∩ 𝐵)/𝑃(𝐴), provided 𝑃(𝐴) ≠ 0.
Consider the random experiment of tossing a die. Let A be the event that “the outcome of the toss
is 2” and B be the event that “the outcome is an even number”. Then the conditional probability
P(A|B) is the probability of the event “the outcome of the toss is 2” given that “the outcome is an
even number”.
𝑃(𝐴|𝐵) = 𝑃(𝐴 ∩ 𝐵)/𝑃(𝐵) = (1/6)/(3/6) = 1/3
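The die example can be reproduced by direct counting. A minimal sketch using exact fractions:

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}                  # sample space of a die toss
A = {2}                                 # "the outcome of the toss is 2"
B = {2, 4, 6}                           # "the outcome is an even number"

def prob(event):
    """Classical probability: favorable outcomes over all outcomes."""
    return Fraction(len(event), len(S))

# P(A|B) = P(A ∩ B) / P(B), defined only when P(B) != 0
p_a_given_b = prob(A & B) / prob(B)
print(p_a_given_b)                      # 1/3
```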
Multiplication Theorem
The conditional probability of an event A given another event B is defined, when 𝑃(𝐵) ≠ 0, as 𝑃(𝐴|𝐵) = 𝑃(𝐴 ∩ 𝐵)/𝑃(𝐵). So 𝑃(𝐴 ∩ 𝐵) = 𝑃(𝐵)𝑃(𝐴|𝐵). This gives the probability of the compound event 𝐴 ∩ 𝐵 as the product of the probability of one of the events and the conditional probability of the other given that the first event has happened. This result is sometimes referred to as the multiplication theorem for two events. Similarly,
𝑃(𝐴 ∩ 𝐵) = 𝑃(𝐴)𝑃(𝐵|𝐴), provided 𝑃(𝐴) ≠ 0.
For three events,
𝑃(𝐴 ∩ 𝐵 ∩ 𝐶) = 𝑃(𝐴)𝑃(𝐵|𝐴)𝑃(𝐶|𝐴 ∩ 𝐵)
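As a numerical sketch of the theorem (a standard two-card example, assumed here for illustration, not taken from the notes): the probability that two cards drawn without replacement from a 52-card pack are both aces is 𝑃(𝐴)𝑃(𝐵|𝐴).

```python
from fractions import Fraction

# P(both cards are aces) = P(first is an ace) * P(second is an ace | first was)
p_first  = Fraction(4, 52)   # P(A): 4 aces among 52 cards
p_second = Fraction(3, 51)   # P(B|A): 3 aces left among 51 cards
p_both = p_first * p_second
print(p_both)                # 1/221
```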
Statistical Independence
If 𝑃(𝐴|𝐵) = 𝑃(𝐴) or 𝑃(𝐵|𝐴) = 𝑃(𝐵), we say that the events A and B are statistically independent.
It can easily be shown that the two conditions given above are not different.
If 𝑃(𝐴|𝐵) = 𝑃(𝐴), then
𝑃(𝐴 ∩ 𝐵) = 𝑃(𝐵)𝑃(𝐴|𝐵) = 𝑃(𝐵)𝑃(𝐴)
But 𝑃(𝐴 ∩ 𝐵) = 𝑃(𝐴)𝑃(𝐵|𝐴), so
𝑃(𝐵|𝐴) = 𝑃(𝐵)
So the first condition implies the second, and similarly it can be shown that the second condition implies the first. So the two conditions are the same. Also, both conditions reduce to the same condition
𝑃(𝐴 ∩ 𝐵) = 𝑃(𝐴)𝑃(𝐵)
This is referred to as the multiplication theorem for independent events. It is also used to define independence: A and B are independent if 𝑃(𝐴 ∩ 𝐵) = 𝑃(𝐴)𝑃(𝐵).
Also it should be noted that mutually exclusive events with nonzero probabilities cannot be independent, since then 𝑃(𝐴 ∩ 𝐵) = 0 while 𝑃(𝐴)𝑃(𝐵) ≠ 0.
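The product condition can be verified by counting on a concrete sample space. A sketch, assuming the experiment of tossing two dice (chosen here for illustration):

```python
from fractions import Fraction
from itertools import product

S = set(product(range(1, 7), repeat=2))       # all 36 outcomes of two die tosses
A = {s for s in S if s[0] % 2 == 0}           # "first toss is even"
B = {s for s in S if s[1] % 2 == 0}           # "second toss is even"

def prob(event):
    """Classical probability on the two-dice sample space."""
    return Fraction(len(event), len(S))

# Independence: P(A ∩ B) == P(A) * P(B)
print(prob(A & B) == prob(A) * prob(B))       # True
```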
Bayes’ Theorem
If an event A can happen if and only if one or other of a set of mutually exclusive events
𝐵1 , 𝐵2 , 𝐵3 , … , 𝐵𝑘
𝑃(𝐵𝑖 )𝑃(𝐴|𝐵𝑖 )
𝑃(𝐵𝑖 |𝐴) = 𝑘
∑𝑖=1 𝑃(𝐵𝑖 )𝑃(𝐴|𝐵𝑖 )
In Bayes’ theorem we are assuming that the experiment was performed and the outcome is the
event A. We know that A can occur if and only if one or other of a set of mutually exclusive events
𝐵1 , 𝐵2 , 𝐵3 , … , 𝐵𝑘 happens. We do not know which of these k events has happened. Our attempt
is to determine the probability of the occurrence of 𝐵𝑖 using the information that the experiment
has resulted in the event A. This probability is denoted by the symbol 𝑃(𝐵𝑖 |𝐴), 𝑖 = 1,2,3, … , 𝑘.
These probabilities are determined after observing the outcome of the experiment. So they are called ‘probabilities determined after the experiment’ or ‘a posteriori probabilities’. So Bayes’ theorem gives the a posteriori probability of the event 𝐵𝑖 .
Now consider the right side of Bayes’ formula. We have two types of probabilities there.
𝑃(𝐵1 ), 𝑃(𝐵2 ), … are of one type and 𝑃(𝐴|𝐵𝑖 ) are of another type.
The information that the experiment results in the event A has nothing to do with the probabilities 𝑃(𝐵1 ), 𝑃(𝐵2 ), …. They are evaluated from other considerations and may be determined before the performance of the experiment. These probabilities are called the ‘a priori’ probabilities of 𝐵1 , 𝐵2 , 𝐵3 , …
Now consider the probability 𝑃(𝐴|𝐵𝑖 ). We know that the event A has happened. It might have
happened together with 𝐵1 , 𝐵2 , 𝐵3 , … , 𝐵𝑘 . 𝑃(𝐴|𝐵𝑖 ) gives the probability of occurrence of A on the
hypothesis that 𝐵𝑖 has happened or it gives the ‘likelihood’ of A if 𝐵𝑖 has happened. So such
probabilities are called ‘likelihoods’. In other words 𝑃(𝐴|𝐵𝑖 ) is called a likelihood if A is an event
that is known to have happened and 𝐵𝑖 an event which may lead to the occurrence of A.
So Bayes’ theorem gives the ‘a posteriori’ probability of 𝐵𝑖 given that A has happened in terms of the a priori probabilities of the 𝐵𝑖 and the likelihoods of A under the various hypotheses regarding the occurrence of the mutually exclusive events 𝐵1 , 𝐵2 , 𝐵3 , … , 𝐵𝑘 .
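The pieces of the theorem (priors, likelihoods, posterior) can be put together in a short sketch. The numbers below are illustrative, assuming three machines 𝐵1, 𝐵2, 𝐵3 producing items, with A the event "the item is defective":

```python
from fractions import Fraction

# A priori probabilities P(B_i): which machine produced the item (assumed numbers)
prior = {"B1": Fraction(1, 2), "B2": Fraction(3, 10), "B3": Fraction(1, 5)}
# Likelihoods P(A|B_i): probability the item is defective, per machine (assumed)
likelihood = {"B1": Fraction(1, 100), "B2": Fraction(2, 100), "B3": Fraction(3, 100)}

def posterior(i):
    """P(B_i|A) = P(B_i)P(A|B_i) / sum_j P(B_j)P(A|B_j)."""
    total = sum(prior[j] * likelihood[j] for j in prior)
    return prior[i] * likelihood[i] / total

# A posteriori probability that a defective item came from machine B1
print(posterior("B1"))    # 5/17
```

Note that the posteriors over all the 𝐵𝑖 sum to 1, since the 𝐵𝑖 are mutually exclusive and exhaustive.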