Statistics - Notes

The document discusses two types of experiments: deterministic, where outcomes are always the same under identical conditions, and random, where outcomes can vary despite similar conditions. It introduces key concepts in probability, including sample space, events, and definitions of probability, such as classical, frequency, and axiomatic definitions, along with their advantages and limitations. Additionally, it covers important theorems related to probability, conditional probability, independence, and Bayes' theorem.

Experiments

If an experiment is repeated under essentially homogeneous and similar conditions, we generally come across two types of situations:

1. The result or outcome is unique or certain.

2. The result is not unique but may be one of several possible outcomes.

Deterministic Experiments

• Experiments in which the initial conditions remain the same in every repetition and the outcome of each repetition is always the same are known as deterministic experiments.

Example: If a piece of zinc is put into dilute sulphuric acid at room temperature, the outcome will always be the formation of hydrogen.

Random Experiments
Characteristic Features of Random Experiment

• It may be repeated any number of times under more or less similar conditions.

• It has more than one possible outcome.

• The outcomes vary irregularly from repetition to repetition even when the initial
conditions are the same.

Basic Terminologies

• Outcome: The result of a random experiment is called an outcome.

Example: Checking the blood group of a selected person; the outcome will be any one of A, B, AB and O.

• Trial: Any particular performance of a random experiment is called a trial.

• Sample Space: The set of all possible outcomes of a random experiment is known as the sample space. A sample space is a set, so it may be represented by the methods usually used to specify sets; the two commonly used methods are the roster (listing) method and the set-builder (rule) method.

• Event: An outcome or combination of outcomes of a random experiment is known as an event. Equivalently, "any subset of a sample space is called an event".

Example: Tossing a coin and observing the side that turns up. The sample space is S = {H, T}, and the set of all events (all subsets of S) is given by

B = { {H}, {T}, {H, T}, {} }

• Mutually Exclusive Events: Events are said to be mutually exclusive if the happening of any one of them precludes the happening of all the others.

Example: If a coin is tossed, either a head will turn up or a tail will turn up; both cannot turn up simultaneously. So we say that the events "head turning up" and "tail turning up" are mutually exclusive events.

• Exhaustive Events: A set of events A1, A2, A3, …, Ak is said to be exhaustive if their union is the whole sample space.

Example: S = {1, 2, 3, 4, 5, 6}

A1 = {1, 2, 3}, A2 = {2, 3, 4} and A3 = {4, 5, 6}

A1 ∪ A2 ∪ A3 = {1, 2, 3, 4, 5, 6} = S

So A1, A2, A3 are exhaustive events.

• Equally Likely Events: Outcomes of a random experiment are said to be equally likely if, taking into consideration all the relevant evidence, there is no reason to expect one in preference to another.

Example: In a random toss of an unbiased coin, head and tail are equally likely events. In throwing an unbiased die, all 6 faces are equally likely to come up.

Mathematical or Classical or 'a priori' Definition of Probability

If a random experiment results in "n" exhaustive, mutually exclusive and equally likely outcomes, of which "m" are favorable to the occurrence of an event E, then the probability "P" of the occurrence of E, usually denoted by "P(E)", is given by

P = P(E) = (no. of favorable cases)/(total no. of cases) = m/n
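As a quick illustration (not part of the original notes), the classical formula can be evaluated by direct enumeration. The sketch below, in Python, assumes the example event "an even number turns up" on a fair die:

```python
from fractions import Fraction

# Sample space of a fair die: 6 exhaustive, mutually exclusive,
# equally likely outcomes.
sample_space = {1, 2, 3, 4, 5, 6}

# Event E: an even number turns up (assumed example).
event = {x for x in sample_space if x % 2 == 0}

# Classical probability: no. of favorable cases / total no. of cases.
print(Fraction(len(event), len(sample_space)))  # 1/2
```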

Properties of Probability

1. P(E) ≥ 0

2. 0 ≤ 𝑃(𝐸) ≤ 1

3. The probability of the complement of the event E, denoted by Eᶜ, is given by P(Eᶜ) = 1 − P(E)

Advantages of the Classical Definition

• It is a simple and easily understandable definition of probability.

• To determine the probability of an event we require only the number of points in the sample space and the number of points in the event.

• This is the only definition which enables us to determine the probability of an event exactly.

Limitations of Classical Definition

The classical definition of probability breaks down in the following cases:

• If the various outcomes of the random experiment are not equally likely or equally probable.

Example: The outcomes "pass" and "fail" of a candidate in a certain test are not equally likely, so the probability of passing cannot be taken to be 50%.

• If the total no. of outcomes of the random experiment is infinite or unknown.

Von Mises’s Statistical or Empirical or Frequency Definition of Probability

If trials are repeated a great number of times under essentially the same conditions, the limit of the ratio of the number of times that an event happens to the total number of trials, as the number of trials increases indefinitely, is called the probability of the happening of that event.

P(E) = lim m/n as n → ∞
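A minimal simulation sketch of this definition, assuming an unbiased coin modelled with Python's random module; the relative frequency m/n of "head turns up" settles near 1/2 as the number of trials n grows:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

for n in (100, 10_000, 1_000_000):
    m = sum(random.random() < 0.5 for _ in range(n))  # no. of heads in n trials
    print(n, m / n)  # relative frequency m/n approaches P(E) = 1/2
```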

Advantages of Frequency Definition

• It provides a practical method for estimating the probability of an event.

• The probability of an event can be determined without making assumptions such as the outcomes being equally likely.

Limitations

• To get an estimate of the probability of an event, the random experiment has to be repeated a very large number of times.

• It is difficult to repeat experiments a large number of times under the same conditions in
many practical situations.
Axiomatic Definition of Probability

The function P(A) will be called a probability measure of an event A if it satisfies the following three axioms:

(1) P(A) is a real number such that P(A) ≥ 0 for every event A.

(2) P(S)=1, where S is the sample space.

(3) If A1, A2, … are events such that Ai ∩ Aj = {} when i ≠ j, for all i and j, then

P(A1 ∪ A2 ∪ …) = P(A1) + P(A2) + ⋯

Some Theorems Based on the Axiomatic Definition of Probability

Theorem – 1: If A is an event and A’ its complement, then P(A) + P(A’) = 1

Theorem – 2: For any event A, 0 ≤ 𝑃(𝐴) ≤ 1.

Theorem – 3: 𝑃(𝜑) = 0

Addition Theorem for two events:

If A and B are two events, then P(A ∪ B) = P(A) + P(B) − P(A ∩ B)

Addition Theorem for three events:

If A, B and C are three events, then

P(A ∪ B ∪ C) = P(A) + P(B) + P(C) − P(A ∩ B) − P(A ∩ C) − P(B ∩ C) + P(A ∩ B ∩ C)
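Both addition theorems can be verified by enumeration on a finite sample space. A sketch under assumed example events on a fair die (A = "even outcome", B = "outcome greater than 3"):

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}  # even outcome (assumed example)
B = {4, 5, 6}  # outcome greater than 3 (assumed example)

def P(E):
    # classical probability on the equally likely sample space S
    return Fraction(len(E), len(S))

# Both sides of P(A ∪ B) = P(A) + P(B) - P(A ∩ B) agree.
print(P(A | B))                # 2/3
print(P(A) + P(B) - P(A & B))  # 2/3
```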

Conditional Probability

The probability of an event A, given that an event B of the same sample space has happened, is called the conditional probability of A given B; it is denoted by P(A|B) and is defined as

P(A|B) = P(A ∩ B)/P(B), provided P(B) ≠ 0.

In a similar way we can define P(B|A) as

P(B|A) = P(A ∩ B)/P(A), provided P(A) ≠ 0.
Consider the random experiment of tossing a die. Let A be the event that “the outcome of the toss
is 2” and B be the event that “the outcome is an even number”. Then the conditional probability
P(A|B) is the probability of the event “the outcome of the toss is 2” given that “the outcome is an
even number”.

P(A|B) = P(A ∩ B)/P(B) = (1/6)/(1/2) = 1/3
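The same computation can be checked step by step by enumeration; this sketch reuses the die example above (A = "the outcome is 2", B = "the outcome is even"):

```python
from fractions import Fraction

S = {1, 2, 3, 4, 5, 6}
A = {2}        # "the outcome of the toss is 2"
B = {2, 4, 6}  # "the outcome is an even number"

def P(E):
    return Fraction(len(E), len(S))

# P(A|B) = P(A ∩ B) / P(B) = (1/6) / (1/2) = 1/3
print(P(A & B) / P(B))  # 1/3
```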

Multiplication Theorem

The conditional probability of an event A given another event B is defined, when P(B) ≠ 0, by P(A|B) = P(A ∩ B)/P(B). So P(A ∩ B) = P(B)P(A|B). This gives the probability of the compound event A ∩ B as the product of the probability of one of the events and the conditional probability of the other, given that the first event has happened. This result is sometimes referred to as the multiplication theorem for two events.

Multiplication theorem for two events may be stated as follows:

For two events A and B,

P(A ∩ B) = P(B)P(A|B), P(B) ≠ 0

= P(A)P(B|A), P(A) ≠ 0

For three events A, B and C,

𝑃(𝐴 ∩ 𝐵 ∩ 𝐶) = 𝑃(𝐴)𝑃(𝐵|𝐴)𝑃(𝐶|𝐴 ∩ 𝐵)
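As an illustration of the three-event form (an assumed example, not from the notes): the probability of drawing three aces in succession, without replacement, from a standard 52-card pack multiplies the successive conditional probabilities:

```python
from fractions import Fraction

# P(A ∩ B ∩ C) = P(A) P(B|A) P(C|A ∩ B) for three successive draws
p_first  = Fraction(4, 52)  # P(A): 4 aces among 52 cards
p_second = Fraction(3, 51)  # P(B|A): 3 aces left among 51 cards
p_third  = Fraction(2, 50)  # P(C|A ∩ B): 2 aces left among 50 cards

print(p_first * p_second * p_third)  # 1/5525
```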

Statistical Independence

If P(A|B) = P(A) or P(B|A) = P(B), we say that the events A and B are statistically independent.

It can easily be shown that the two conditions given above are equivalent.

If P(A|B) = P(A), then

P(A ∩ B) = P(B)P(A|B) = P(B)P(A)

But P(A ∩ B) = P(A)P(B|A)

So P(A)P(B|A) = P(A)P(B), and hence

P(B|A) = P(B)

So the first condition implies the second, and similarly it can be shown that the second condition implies the first; the two conditions are the same. Moreover, both reduce to the same condition:

𝑃(𝐴 ∩ 𝐵) = 𝑃(𝐴)𝑃(𝐵)

This is referred to as the multiplication theorem for independent events. This is also used to define
independence as follows:

Events A and B are said to be independent if 𝑃(𝐴 ∩ 𝐵) = 𝑃(𝐴)𝑃(𝐵).

It should also be noted that mutually exclusive events with nonzero probabilities cannot be independent: if A and B are mutually exclusive, then P(A ∩ B) = 0, whereas P(A)P(B) > 0.
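A small sketch contrasting the two notions on an assumed example, two tosses of an unbiased coin: "head on the first toss" and "head on the second toss" are independent but not mutually exclusive:

```python
from fractions import Fraction
from itertools import product

# Sample space of two tosses of an unbiased coin.
S = set(product("HT", repeat=2))

A = {s for s in S if s[0] == "H"}  # head on the first toss
B = {s for s in S if s[1] == "H"}  # head on the second toss

def P(E):
    return Fraction(len(E), len(S))

print(P(A & B) == P(A) * P(B))  # True: A and B are independent
print((A & B) == set())         # False: they are not mutually exclusive
```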

Bayes’ Theorem

If an event A can happen if and only if one or other of a set of mutually exclusive events B1, B2, B3, …, Bk happens, and if P(Bi) ≠ 0 for i = 1, 2, 3, …, k, then

P(Bi|A) = P(Bi)P(A|Bi) / Σⱼ P(Bj)P(A|Bj), where the sum runs over j = 1, 2, …, k.

In Bayes' theorem we assume that the experiment was performed and that its outcome is the event A. We know that A can occur if and only if one or other of the set of mutually exclusive events B1, B2, B3, …, Bk happens, but we do not know which of these k events has happened. Our aim is to determine the probability of the occurrence of Bi using the information that the experiment has resulted in the event A. This probability is denoted by P(Bi|A), i = 1, 2, 3, …, k. These probabilities are determined after observing the outcome of the experiment, so they are called 'probabilities determined after the experiment' or 'a posteriori probabilities'. Thus Bayes' theorem gives the a posteriori probability of the event Bi.

Now consider the right side of Bayes' formula. We have two types of probabilities there: P(B1), P(B2), … are of one type and the P(A|Bi) are of another type.

The information that the experiment results in the event A has nothing to do with the probabilities P(B1), P(B2), …. They are evaluated from other considerations and may be determined before the experiment is performed. These probabilities are called the 'a priori' probabilities of B1, B2, B3, …

Now consider the probability P(A|Bi). We know that the event A has happened; it might have happened together with any one of B1, B2, B3, …, Bk. P(A|Bi) gives the probability of the occurrence of A on the hypothesis that Bi has happened, i.e., the 'likelihood' of A if Bi has happened. Such probabilities are therefore called 'likelihoods'. In other words, P(A|Bi) is called a likelihood when A is an event that is known to have happened and Bi is an event which may lead to the occurrence of A.

So Bayes' theorem gives the 'a posteriori' probability of Bi, given that A has happened, in terms of the a priori probabilities of the Bi and the likelihoods of A under the various hypotheses regarding the occurrence of the mutually exclusive events B1, B2, B3, …, Bk.
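A worked numerical sketch of the theorem; the setting (three machines B1, B2, B3 producing 50%, 30% and 20% of the output with defect rates 1%, 2% and 3%, and A = "a randomly chosen item is defective") is an assumed textbook-style example, not taken from the notes:

```python
from fractions import Fraction

# A priori probabilities P(Bi): share of output from each machine.
prior = {"B1": Fraction(50, 100), "B2": Fraction(30, 100), "B3": Fraction(20, 100)}

# Likelihoods P(A|Bi): probability an item from machine Bi is defective.
likelihood = {"B1": Fraction(1, 100), "B2": Fraction(2, 100), "B3": Fraction(3, 100)}

# Denominator: total probability of A over the exhaustive set B1, B2, B3.
p_A = sum(prior[b] * likelihood[b] for b in prior)

# A posteriori probabilities P(Bi|A) from Bayes' formula.
for b in prior:
    print(b, prior[b] * likelihood[b] / p_A)  # 5/17, 6/17, 6/17
```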
