Rishita Rajan (Probability)
PROBABILITY
INTRODUCTION
Probability is a measure of how likely an event is to occur. Many events cannot be predicted with total certainty; using probability, we can predict only the chance of an event occurring, i.e., how likely it is to happen. Probability ranges from 0 to 1, where 0 means the event is impossible and 1 indicates a certain event. The probabilities of all the events in a sample space add up to 1.
For example, when we toss a coin, we get either a head or a tail; only two outcomes are possible (H, T). But when two coins are tossed, there are four possible outcomes: {(H, H), (H, T), (T, H), (T, T)}.
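As a quick illustration (a minimal Python sketch added here, not part of the original text), these sample spaces can be enumerated directly:

    from itertools import product

    # All outcomes for a single coin toss
    one_coin = ['H', 'T']
    print(one_coin)                # ['H', 'T']

    # All outcomes for two coin tosses: the Cartesian product {H, T} x {H, T}
    two_coins = list(product('HT', repeat=2))
    print(two_coins)               # [('H','H'), ('H','T'), ('T','H'), ('T','T')]

    # With equally likely outcomes, each outcome has probability 1/4
    print(1 / len(two_coins))      # 0.25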
History:
Probable and probability and their cognates in other modern languages derive from medieval learned Latin probabilis, deriving from Cicero and generally applied to an opinion to mean plausible or generally approved. The form probability is from Old French probabilite (14th c.) and directly from Latin probabilitatem (nominative probabilitas), "credibility, probability," from probabilis (see probable). The mathematical sense of the term is from 1718. In the 18th century, the term chance was also used in the mathematical sense of "probability" (and probability theory was called the Doctrine of Chances). This word is ultimately from Latin cadentia, i.e. "a fall, case". The English adjective likely is of Germanic origin, most likely from Old Norse likligr (Old English had geliclic with the same sense), originally meaning "having the appearance of being strong or able" or "having a similar appearance or qualities", with a meaning of "probably" recorded from the mid-15th century. The derived noun likelihood had a meaning of "similarity, resemblance" but took on a meaning of "probability" from the mid-15th century. The meaning "something likely to be true" is from the 1570s.
Origins:
Ancient and medieval law of evidence developed a grading of degrees of proof,
credibility, presumptions and half-proof to deal with the uncertainties of
evidence in court. In Renaissance times, betting was discussed in terms of odds such as "ten to one", and maritime insurance premiums were estimated based on intuitive risks, but there was no theory on how to calculate such odds or premiums.
The mathematical methods of probability arose first in the investigations of Gerolamo Cardano in the 1560s (not published until 100 years later), and then in the correspondence of Pierre de Fermat and Blaise Pascal (1654) on such questions as the fair division of the stake in an interrupted game of chance. Christiaan Huygens (1657) gave a comprehensive treatment of the subject.
From Games, Gods and Gambling (ISBN 978-0-85264-171-2) by F. N. David:
In ancient times there were games played using astragali, or talus bones. The pottery of ancient Greece provides evidence that a circle was drawn on the floor and the astragali were tossed into this circle, much like playing marbles. In Egypt, excavators of tombs found a game they called "Hounds and Jackals", which closely resembles the modern game "Snakes and Ladders". It seems that these were the early stages of the creation of dice.
The first dice game mentioned in literature of the Christian era was called hazard. It was played with two or three dice and is thought to have been brought to Europe by knights returning from the Crusades.
Cardano also thought about the sum of three dice. At face value, there are the same number of combinations that sum to 9 as sum to 10. For 9: (621) (531) (522) (441) (432) (333); for 10: (631) (622) (541) (532) (442) (433). However, there are more ways of obtaining some of these combinations than others. For example, if we consider the order of results, there are six ways to obtain (621): (1,2,6), (1,6,2), (2,1,6), (2,6,1), (6,1,2), (6,2,1), but there is only one way to obtain (333), where the first, second and third dice all roll 3. There are a total of 27 permutations that sum to 10 but only 25 that sum to 9. From
this, Cardano found that the probability of throwing a 9 is less than that of
throwing a 10. He also demonstrated the efficacy of defining odds as the ratio of
favourable to unfavourable outcomes.
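A short brute-force check in Python (an illustrative sketch, not part of the original text) confirms Cardano's count:

    from itertools import product

    # Enumerate all 6^3 = 216 ordered rolls of three dice
    rolls = list(product(range(1, 7), repeat=3))

    nine = sum(1 for r in rolls if sum(r) == 9)
    ten = sum(1 for r in rolls if sum(r) == 10)
    print(nine, ten)                 # 25 27
    print(nine / 216, ten / 216)     # P(sum = 9) < P(sum = 10)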
Eighteenth century
Jacob Bernoulli's Ars Conjectandi (posthumous, 1713) and Abraham De
Moivre's The Doctrine of Chances (1718) put probability on a sound
mathematical footing, showing how to calculate a wide range of complex
probabilities. Bernoulli proved a version of the fundamental law of large
numbers, which states that in a large number of trials the average of the outcomes is likely to be very close to the expected value; for example, in 1000 throws of a fair coin it is likely that there are close to 500 heads (and the larger the number of throws, the closer to half-and-half the proportion is likely to be).
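This stabilising of proportions is easy to see numerically. The following Python simulation (a sketch added for illustration; the seed is arbitrary) throws a fair coin in increasingly long series:

    import random

    random.seed(1)  # arbitrary seed, for a reproducible illustration

    for n in (10, 100, 1000, 100000):
        heads = sum(random.random() < 0.5 for _ in range(n))
        # The proportion of heads tends toward 0.5 as n grows
        print(n, heads / n)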
Nineteenth century
The power of probabilistic methods in dealing with uncertainty was shown by
Gauss's determination of the orbit of Ceres from a few observations. The theory
of errors used the method of least squares to correct error-prone observations,
especially in astronomy, based on the assumption of a normal distribution of
errors to determine the most likely true value. In 1812, Laplace issued his
Théorie analytique des probabilités in which he consolidated and laid down
many fundamental results in probability and statistics such as the moment-
generating function, method of least squares, inductive probability, and
hypothesis testing.
Towards the end of the nineteenth century, a major success of explanation in terms of probabilities was the statistical mechanics of Ludwig Boltzmann and J. Willard Gibbs, which explained properties of gases, such as temperature, in terms of the random motions of large numbers of particles.
The field of the history of probability itself was established by Isaac Todhunter's
monumental A History of the Mathematical Theory of Probability from the
Time of Pascal to that of Laplace (1865).
Twentieth century
Probability and statistics became closely connected through the work on
hypothesis testing of R. A. Fisher and Jerzy Neyman, which is now widely
applied in biological and psychological experiments and in clinical trials of
drugs, as well as in economics and elsewhere. A hypothesis, for example that a drug is usually effective, gives rise to a probability distribution of what would be observed if the hypothesis is true. If observations approximately agree with the hypothesis, it is confirmed; if not, it is rejected.
The twentieth century also saw long-running disputes on the interpretations of
probability. In the mid-century frequentism was dominant, holding that
probability means long-run relative frequency in a large number of trials. At the
end of the century there was some revival of the Bayesian view, according to
which the fundamental notion of probability is how well a proposition is
supported by the evidence for it.
The mathematical treatment of probabilities, especially when there are infinitely
many possible outcomes, was facilitated by Kolmogorov's axioms (1933).
Probability Tree
The tree diagram helps to organize and visualize the different possible outcomes. Branches and ends are the two main positions of the tree. The probability of each branch is written on the branch, whereas the ends contain the final outcomes. Tree diagrams are used to figure out when to multiply and when to add. (The original document shows a tree diagram for a coin toss here: two branches, H and T, each with probability 1/2.)
Probability Rules
1. Addition Rule
P(A∪B) = P(A) + P(B) − P(A∩B)
2. Complementary Rule
P(A)+P(A′)=1
3. Conditional Rule
P(A∣B) = P(A∩B) / P(B), where P(B) ≠ 0
4. Multiplication Rule
Whenever an event is the intersection of two other events, that is, when events A and B need to occur simultaneously, then
P(A∩B) = P(A)⋅P(B∣A)
(which reduces to P(A)⋅P(B) when A and B are independent).
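These rules can be checked by direct enumeration. A minimal Python sketch (added for illustration; the die events A and B below are made up):

    from fractions import Fraction

    # Sample space: one roll of a fair die, all outcomes equally likely
    S = {1, 2, 3, 4, 5, 6}
    A = {2, 4, 6}   # event: an even number
    B = {4, 5, 6}   # event: a number greater than 3

    def P(E):
        # Probability of an event E under the uniform measure on S
        return Fraction(len(E), len(S))

    # Addition rule: P(A u B) = P(A) + P(B) - P(A n B)
    assert P(A | B) == P(A) + P(B) - P(A & B)

    # Complementary rule: P(A) + P(A') = 1
    assert P(A) + P(S - A) == 1

    # Conditional rule, then the multiplication rule
    P_A_given_B = P(A & B) / P(B)
    assert P(A & B) == P(B) * P_A_given_B
    print(P(A | B), P_A_given_B)   # 2/3 2/3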
The multiplication (counting) rule also states that when there are n ways to do one thing and m ways to do another, the number of ways to do both can be obtained by taking their product. This is expressed as n × m.
Example:
An ice cream seller sells 3 flavors of ice cream: vanilla, chocolate and strawberry, and gives his customers 6 different choices of cones. How many choices of ice cream does Wendy have if she goes to this ice cream seller?
Solution:
Wendy has 3 choices for the ice cream flavor and 6 choices for the cone. By the counting principle, she has 3 × 6 = 18 different choices in total.
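A tiny Python check of this count (an illustrative sketch; the cone labels are hypothetical):

    from itertools import product

    flavors = ['vanilla', 'chocolate', 'strawberry']
    cones = ['cone' + str(i) for i in range(1, 7)]   # 6 hypothetical cone types

    # Every (flavor, cone) pair is one distinct choice
    choices = list(product(flavors, cones))
    print(len(choices))   # 18 = 3 * 6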
Types of Probability
There are three major types of probabilities:
*Theoretical Probability
*Experimental Probability
*Axiomatic Probability
Theoretical Probability
Theoretical probability, as the name suggests, is the theory behind probability. It gives the outcome of the occurrence of an event based on mathematics and reasoning. It tells us what should happen in an ideal situation, without conducting any experiments.
Theoretical probability is extremely useful in situations, such as the launch of a satellite, where it is not feasible to conduct an actual experiment to arrive at a sound conclusion.
Theoretical probability can be defined as the number of favorable outcomes
divided by the total number of possible outcomes. To determine the theoretical
probability there is no need to conduct an experiment. However, knowledge of
the situation is required to find the probability of occurrence of that event.
Theoretical probability predicts the probability of occurrence of an event by assuming that all outcomes are equally likely to occur.
Example
Suppose there are a total of 5 cards, 2 of which are favorable, and the probability of drawing one of those 2 cards needs to be determined. Then, using the concept of theoretical probability, the number of favorable outcomes (2) is divided by the total possible outcomes (5) to get a probability of 0.4.
Theoretical probability can be calculated either by using logical reasoning or by using a simple formula. The result of such a probability is based on the number of possible outcomes. The theoretical probability formula is the ratio of the number of favorable outcomes to the total number of possible outcomes. This formula is expressed as follows:
Theoretical Probability = Number of favorable outcomes / Number of possible outcomes
Experimental Probability
The chance of occurrence of a particular event is termed its probability. The value of a probability lies between 0 and 1: if it is an impossible event, the probability is 0, and if it is a certain event, the probability is 1. The probability that is determined on the basis of the results of an experiment is known as experimental probability. This is also known as empirical probability.
A random experiment is conducted and repeated many times to determine the likelihood of an event, and each repetition is known as a trial. The experiment is conducted to find the chance of an event occurring or not occurring. It can be tossing a coin, rolling a die, or rotating a spinner. In mathematical terms, the probability of an event is equal to the number of times the event occurred ÷ the total number of trials. For instance, suppose you flip a coin 30 times and record whether you get a head or a tail. The experimental probability of obtaining a head is the fraction of recorded heads over the total number of tosses:
P(head) = Number of heads recorded ÷ 30 tosses
For example, suppose a spinner is spun 50 times and the color it lands on is recorded each time:
Color Occurrences
Pink 11
Blue 10
Green 13
Yellow 16
The experimental probability of spinning the color blue = 10/50 = 1/5 = 0.2 = 20%.
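The same computation in Python (a small sketch using only the table above):

    counts = {'Pink': 11, 'Blue': 10, 'Green': 13, 'Yellow': 16}
    trials = sum(counts.values())      # 50 spins in total

    # Experimental probability = occurrences / number of trials
    for color, n in counts.items():
        print(color, n / trials)       # Blue -> 0.2, etc.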
Axiomatic Probability
In the normal approach to probability, we consider random experiments, sample
space and other events that are associated with the different experiments. In our
day to day life, we are more familiar with the word ‘chance’ as compared to the
word ‘probability’. Since Mathematics is all about quantifying things, the theory
of probability basically quantifies these chances of occurrence or non-
occurrence of the events. There are different types of events in probability.
Here, we will have a look at the definition and the conditions of the axiomatic
probability in detail.
Axiomatic Probability Definition
Axiomatic probability is based on a set of axioms (Kolmogorov's axioms) that apply to all types of events. For a sample space S:
1. For any event E, 0 ≤ P(E) ≤ 1.
2. P(S) = 1.
3. If 'E' and 'F' are mutually exclusive events, then P(E ∪ F) = P(E) + P(F).
From the third axiom, we can deduce that P(∅) = 0. If we consider 'F' to be the empty set, it is clear that 'E' and the empty set are disjoint events. So, from the third axiom, P(E ∪ ∅) = P(E) + P(∅), i.e. P(E) = P(E) + P(∅). This implies that P(∅) = 0.
If the sample space 'S' contains outcomes δ_1, δ_2, …, δ_n, then according to the axiomatic definition of probability, we can deduce:
0 ≤ P(δ_i) ≤ 1 for each δ_i ∈ S
P(δ_1) + P(δ_2)+ … + P(δ_n) = 1
For any event 'Q', P(Q) = ∑P(δ_i), where δ_i ∈ Q.
Please note that the singleton {δ_i} is known as an elementary event and for
simplicity, we write P(δ_i) for P({δ_i}).
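As an illustration (a minimal Python sketch; the six-outcome space is an arbitrary example), these conditions can be checked for a finite probability assignment:

    from fractions import Fraction

    # Elementary events of a fair die with their probabilities P(delta_i)
    S = {face: Fraction(1, 6) for face in range(1, 7)}

    # Each P(delta_i) lies in [0, 1], and together they total 1
    assert all(0 <= p <= 1 for p in S.values())
    assert sum(S.values()) == 1

    # For any event Q (a set of outcomes), P(Q) = sum of P(delta_i) over Q
    def P(Q):
        return sum(S[outcome] for outcome in Q)

    print(P({2, 4, 6}))   # 1/2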
Probability Distribution
In statistics, the probability distribution gives the possibility of each outcome of a random experiment or event. It provides the probabilities of the different possible occurrences.
To recall, probability is a measure of the uncertainty of various phenomena. For example, if you throw a die, the possible outcomes and their likelihoods are described by a probability distribution. A distribution can be defined for any random experiment whose outcome cannot be predicted with certainty. Let us now discuss its definition, function, formula and types, along with how to create a table of probabilities based on random variables.
Two random variables with the same probability distribution can still differ in their relationships with other random variables, or in whether they are independent of them. The realizations of a random variable, that is, the outcomes of randomly choosing values according to the variable's probability distribution function, are called random variates.
For example, the normal distribution has the probability density function
f(x) = (1 / (σ√(2π))) e^(−(x − μ)² / (2σ²))
where
• μ = mean value
• σ = standard deviation
• x = normal random variable
If the mean μ = 0 and the standard deviation σ = 1, this distribution is known as the standard normal distribution.
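A minimal Python sketch of this density (added for illustration; the printed values are standard reference points):

    import math

    def normal_pdf(x, mu=0.0, sigma=1.0):
        # Density of the normal distribution with mean mu and std dev sigma
        coeff = 1.0 / (sigma * math.sqrt(2 * math.pi))
        return coeff * math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

    # Standard normal density at 0 is 1/sqrt(2*pi), about 0.3989
    print(normal_pdf(0.0))
    print(normal_pdf(1.0))   # about 0.2420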
Normal Distribution Examples
Since the normal distribution describes many natural phenomena so well, it has become a standard of reference for many probability problems. Some examples are the heights of people in a population, measurement errors, and blood pressure readings.
Binomial Distribution
In contrast, if a die is rolled, all the possible outcomes are discrete and together give a mass of outcomes; the function assigning their probabilities is known as the probability mass function. The outcomes of a binomial distribution consist of n repeated trials, in each of which the outcome of interest may or may not occur. The formula for the binomial distribution is:
P(x) = C(n, x) · p^x · (1 − p)^(n−x)
where
• n = the total number of trials
• x = the number of successes
• p = the probability of success in a single trial
• C(n, x) = the number of combinations of n things taken x at a time
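A small Python version of this formula (an illustrative sketch using only the standard library):

    from math import comb

    def binomial_pmf(x, n, p):
        # P(X = x) for a binomial variable: n trials, success probability p
        return comb(n, x) * p**x * (1 - p)**(n - x)

    # Probability of exactly 5 heads in 10 fair coin tosses
    print(binomial_pmf(5, 10, 0.5))   # about 0.2461

    # The pmf sums to 1 over x = 0..n
    print(sum(binomial_pmf(x, 10, 0.5) for x in range(11)))   # 1.0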
Binomial Distribution Examples
As we already know, the binomial distribution gives the probability of a different set of outcomes. In real life, the concept is used, for example, for:
• The number of apples sold by a shopkeeper in the time period of 12 pm to 4 pm daily.
Cumulative Probability Distribution
The cumulative probability distribution (or cumulative distribution function) of a random variable X is defined as
F_X(x) = P(X ≤ x)
where P denotes the probability that the random variable X takes a value less than or equal to x. For a closed interval [a, b], the cumulative probability function gives
P(a < X ≤ b) = F_X(b) − F_X(a)
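For instance, the cumulative distribution of a fair die can be computed directly (a minimal Python sketch):

    from fractions import Fraction

    # CDF of a fair six-sided die: F(x) = P(X <= x)
    def F(x):
        favorable = sum(1 for face in range(1, 7) if face <= x)
        return Fraction(favorable, 6)

    print(F(3))          # 1/2
    # P(2 < X <= 5) = F(5) - F(2)
    print(F(5) - F(2))   # 1/2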
In the case of the binomial distribution, as we know, a discrete random variable takes each of its values with some definite probability. Such a distribution is also called a probability mass distribution, and the function associated with it is called a probability mass function. If X is a discrete random variable on a sample space S taking values in a set A,
X : S → A
then the probability mass function f_X : A → [0, 1] for X can be defined as
f_X(x) = P(X = x)
Such a distribution can be presented as a table:
X      x_1   x_2   x_3   …   x_n
P(X)   p_1   p_2   p_3   …   p_n
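In code, such a table is naturally a mapping from values to probabilities (an illustrative sketch with made-up numbers):

    # A discrete probability distribution given as a table
    X = [1, 2, 3, 4]
    P = [0.1, 0.2, 0.3, 0.4]

    pmf = dict(zip(X, P))
    assert all(0 <= p <= 1 for p in pmf.values())
    assert abs(sum(pmf.values()) - 1) < 1e-12   # probabilities total 1
    print(pmf[3])   # 0.3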
What is the Prior Probability?
In Bayesian statistical inference, a prior probability distribution, also known as the prior, of an uncertain quantity is the probability distribution expressing one's beliefs about this quantity before any evidence is taken into account. For instance, the prior could be the probability distribution representing the relative proportions of voters who will vote for some politician in a forthcoming election. The unknown quantity may be a parameter of the model or a latent variable rather than an observable variable.
Example 1: Two coins are tossed, and X is the random variable counting the number of heads obtained. Construct the probability distribution of X.
Solution:
First write the values of X: 0, 1 and 2, since the possibilities are that no head, one head, or two heads come up.
Now the probability distribution can be written as:
P(X = 0) = P(Tail and Tail) = ½ × ½ = ¼
P(X = 1) = P(Head and Tail) + P(Tail and Head) = ¼ + ¼ = ½
P(X = 2) = P(Head and Head) = ½ × ½ = ¼
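This distribution can be verified by enumerating the four equally likely outcomes (a minimal Python sketch):

    from fractions import Fraction
    from itertools import product

    # The four equally likely outcomes of tossing two fair coins
    outcomes = list(product('HT', repeat=2))

    def P_heads(k):
        # P(X = k), where X counts the heads among the two tosses
        favorable = sum(1 for o in outcomes if o.count('H') == k)
        return Fraction(favorable, len(outcomes))

    print([P_heads(k) for k in (0, 1, 2)])   # [1/4, 1/2, 1/4]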
Example 2: The weights of 100 containers were measured and the results grouped as follows. Construct the probability distribution of the weight W.
Weight W Number of Containers
0.900−0.925 1
0.925−0.950 7
0.950−0.975 25
0.975−1.000 32
1.000−1.025 30
1.025−1.050 5
Total 100
Solution:
We first divide the number of containers in each weight category by 100 to give the probabilities.
Weight W Number of Containers Probability
0.900−0.925 1 0.01
0.925−0.950 7 0.07
0.950−0.975 25 0.25
0.975−1.000 32 0.32
1.000−1.025 30 0.30
1.025−1.050 5 0.05
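The same division in Python (a short sketch built from the table above):

    counts = {
        '0.900-0.925': 1, '0.925-0.950': 7, '0.950-0.975': 25,
        '0.975-1.000': 32, '1.000-1.025': 30, '1.025-1.050': 5,
    }
    total = sum(counts.values())   # 100 containers

    # Divide each count by the total to obtain the probabilities
    for interval, n in counts.items():
        print(interval, n / total)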
Random Variable
In probability, a real-valued function, defined over the sample space of a
random experiment, is called a random variable. That is, the values of the
random variable correspond to the outcomes of the random experiment.
Random variables can be either discrete or continuous. Let us discuss the different types of random variables.
A random variable's possible values may represent the outcomes of an experiment that is about to be performed, or the outcomes of a past experiment whose value is not yet known. They may also conceptually describe either the results of an "objectively" random process (like rolling a die) or the "subjective" randomness that arises from incomplete knowledge of a quantity.
A random variable is a rule that assigns a numerical value to each outcome in
a sample space. Random variables may be either discrete or continuous. A
random variable is said to be discrete if it assumes only specified values in an
interval. Otherwise, it is continuous. We generally denote random variables with capital letters such as X and Y. When X takes values 1, 2, 3, …, it is said to be a discrete random variable.
Variate
A variate can be defined as a generalization of the random variable. It has the same properties as a random variable, without being tied to any particular type of probabilistic experiment. It always obeys a particular probabilistic law.
A discrete random variable can take only a countable number of distinct values such as 0, 1, 2, 3, 4, and so on. The probability distribution of a discrete random variable is a list of probabilities associated with each of its possible values, known as the probability mass function.
Mean of random variable: If X is the random variable and P is the respective
probabilities, the mean of a random variable is defined by:
Mean (μ) = ∑ XP
Variance of a random variable: The variance tells how much the random variable X is spread around the mean value. The formula for the variance of a random variable is given by:
Var(X) = σ² = ∑X²P − μ²
For a function Y = g(X) of a random variable X, the cumulative distribution function of Y is
F_Y(y) = P(g(X) ≤ y)
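Both formulas are easy to compute for a tabulated distribution (a minimal Python sketch; the numbers reuse the two-coin example above):

    # Discrete random variable X with its probabilities P
    X = [0, 1, 2]
    P = [0.25, 0.5, 0.25]

    mean = sum(x * p for x, p in zip(X, P))                    # mu = sum X*P
    variance = sum(x**2 * p for x, p in zip(X, P)) - mean**2   # sum X^2*P - mu^2

    print(mean)       # 1.0
    print(variance)   # 0.5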
A probability distribution always satisfies two conditions:
• f(x)≥0
• ∑f(x)=1
The important probability distributions are:
• Binomial distribution
• Poisson distribution
• Bernoulli’s distribution
• Exponential distribution
• Normal distribution
Events In Probability
In probability, an event is a set of outcomes of a random experiment, i.e., a subset of the sample space. For example, the sample space for the tossing of three coins simultaneously is given by:
S = {(T, T, T), (T, T, H), (T, H, T), (T, H, H), (H, T, T), (H, T, H), (H, H, T), (H, H, H)}
Suppose we want to find only the outcomes which have at least two heads; then the set of all such possibilities can be given as:
E = {(H, H, T), (H, T, H), (T, H, H), (H, H, H)}
There can be many events associated with a given sample space. For any event to occur, the outcome of the experiment must be an element of the set of event E.
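Such events are easy to enumerate by filtering the sample space (an illustrative Python sketch):

    from itertools import product

    # Sample space for tossing three coins
    S = list(product('HT', repeat=3))

    # Event E: outcomes with at least two heads
    E = [o for o in S if o.count('H') >= 2]
    print(E)                 # the four outcomes listed above
    print(len(E) / len(S))   # P(E) = 4/8 = 0.5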
Some important types of events in probability are:
• Impossible and Sure Events
• Simple Events
• Compound Events
• Independent and Dependent Events
• Mutually Exclusive Events
• Exhaustive Events
• Complementary Events
• Events Associated with “OR”
• Events Associated with “AND”
• Event E1 but not E2
Simple Events
Any event consisting of a single point of the sample space is known as a simple event in probability. For example, if S = {56, 78, 96, 54, 89} and E = {78}, then E is a simple event.
Compound Events
Contrary to the simple event, if any event consists of more than one single point
of the sample space then such an event is called a compound event.
Considering the same example again, if S = {56, 78, 96, 54, 89}, E1 = {56, 54}, and E2 = {78, 56, 89}, then E1 and E2 represent two compound events.
Mutually Exclusive Events
If the occurrence of one event excludes the occurrence of another event, such events are mutually exclusive events, i.e., the two events do not have any common point. For example, if S = {1, 2, 3, 4, 5, 6} and E1, E2 are two events such that E1 consists of numbers less than 3 and E2 consists of numbers greater than 4, then E1 = {1, 2} and E2 = {5, 6} are mutually exclusive.
Exhaustive Events
A set of events is called exhaustive if all the events together cover the entire sample space.
Complementary Events
For any event E1 there exists another event E1′, its complement, which consists of the remaining elements of the sample space S:
E1′ = S − E1
For exhaustive events, the union of all the events is the sample space:
E1 ∪ E2 ∪ E3 ∪ … ∪ En = S
The event "E1 but not E2" represents the outcomes that are in E1 but not in E2:
E1 − E2 = E1 ∩ E2′
Example: In the game of snakes and ladders, a fair die is thrown. Event E1 represents all the events of getting a natural number less than 4, event E2 consists of all the events of getting an even number, and E3 denotes all the events of getting an odd number. List the sets representing the following:
i) E1 or E2 or E3
iii) E1 but not E3
Solution:
E1 = {1, 2, 3}
E2 = {2, 4, 6}
E3 = {1, 3, 5}
i) E1 ∪ E2 ∪ E3 = {1, 2, 3, 4, 5, 6}
iii) E1 − E3 = {2}
Multiplication Rule of Probability
If A and B are dependent events, then the probability of both events occurring simultaneously is given by:
P(A∩B) = P(A) × P(B|A)
If A and B are two independent events in an experiment, then the probability of both events occurring simultaneously is given by:
P(A∩B) = P(A) × P(B)
Proof
We know that the conditional probability of event A given that B has occurred is denoted by P(A|B) and is given by:
P(A|B) = P(A∩B) / P(B), where P(B) ≠ 0. Therefore,
P(A∩B) = P(B) × P(A|B) … (1)
Similarly, the conditional probability of B given A is
P(B|A) = P(B∩A) / P(A), where P(A) ≠ 0. Therefore,
P(B∩A) = P(A) × P(B|A) … (2)
Since P(A∩B) = P(B∩A), it follows that
P(A∩B) = P(A) × P(B|A) = P(B) × P(A|B), with P(A) ≠ 0 and P(B) ≠ 0.
For independent events A and B, P(B|A) = P(B), so equation (2) can be modified into
P(A∩B) = P(A) × P(B)
Let us state this multiplication theorem for independent events A and B explicitly. If A and B are two independent events for a random experiment, then the probability of the simultaneous occurrence of the two events is equal to the product of their probabilities. Hence,
P(A∩B) = P(A).P(B)
Indeed, by the multiplication rule P(A∩B) = P(A) × P(B|A), and by independence P(B|A) = P(B), so
P(A∩B) = P(A).P(B)
Hence, proved.
Example: A bag contains 20 red and 10 blue balls. Two balls are drawn one after the other, without replacement. What is the probability that both balls drawn are red?
Solution: Let A and B denote the events that the first and the second balls drawn are red balls. We have to find P(A∩B) or P(AB). The probability of drawing a red ball on the first draw is
P(A) = 20/30 = 2/3
Now, only 19 red balls and 10 blue balls are left in the bag. The probability of drawing a red ball in the second draw is an example of conditional probability, where the drawing of the second ball depends on the drawing of the first:
P(B|A) = 19/29
By the multiplication rule of probability,
P(A∩B) = P(A) × P(B|A) = (2/3) × (19/29) = 38/87 ≈ 0.44
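The same computation with exact fractions in Python (an illustrative sketch):

    from fractions import Fraction

    # 20 red and 10 blue balls; two balls drawn without replacement
    P_A = Fraction(20, 30)            # first ball red
    P_B_given_A = Fraction(19, 29)    # second ball red, given the first was red

    P_both_red = P_A * P_B_given_A    # multiplication rule
    print(P_both_red)                 # 38/87
    print(float(P_both_red))          # about 0.437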
Sample Space
A sample space is a collection or a set of possible outcomes of a random
experiment. The sample space is represented using the symbol "S". A subset of possible outcomes of an experiment is called an event. A sample space may contain a number of outcomes, depending on the experiment. If it contains a finite number of outcomes, it is known as a discrete or finite sample space. The sample space for a random experiment is written within curly braces "{ }". There is a difference between the sample space and the events. For rolling a
die, we will get the sample space, S as {1, 2, 3, 4, 5, 6 } whereas the event can
be written as {1, 3, 5 } which represents the set of odd numbers and { 2, 4, 6 }
which represents the set of even numbers. The outcomes of an experiment are
random and the sample space becomes the universal set for some particular
experiments. Some of the examples are as follows:
Tossing a Coin
When flipping a single coin, two outcomes are possible: head (H) and tail (T). Therefore the sample space for this experiment is given as
S = {H, T}
When two coins are flipped, the sample space is
S = {(H1, H2), (H1, T2), (T1, H2), (T1, T2)}
In general, if you have n coins, then the possible number of outcomes will be 2ⁿ.
Example: If you toss 3 coins, “n” is taken as 3.
Sample space S = { HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}
A Die is Thrown
When a single die is thrown, it has 6 outcomes since it has 6 faces. Therefore, the sample space is given as
S = {1, 2, 3, 4, 5, 6}
When two dice are thrown together, there are 6² = 36 possible outcomes:
S = {(1,1), (1,2), (1,3), (1,4), (1,5), (1,6),
(2,1), (2,2), (2,3), (2,4), (2,5), (2,6),
(3,1), (3,2), (3,3), (3,4), (3,5), (3,6),
(4,1), (4,2), (4,3), (4,4), (4,5), (4,6),
(5,1), (5,2), (5,3), (5,4), (5,5), (5,6),
(6,1), (6,2), (6,3), (6,4), (6,5), (6,6)}
If three dice are thrown, there are 6³ = 216 possible outcomes, since n in the experiment is taken as 3.
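These sample spaces can also be generated programmatically (a minimal Python sketch):

    from itertools import product

    # Sample spaces for throwing 1, 2 and 3 dice
    for n in (1, 2, 3):
        S = list(product(range(1, 7), repeat=n))
        print(n, len(S))   # 6, 36, 216, i.e. 6**n outcomes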
Sample Problem
Question:
Write the sample space for the given interval [3,9].
Solution:
Given interval: [3, 9]
As the interval [3, 9] is closed, the sample space contains the integers from 3 to 9 inclusive.
Therefore, the sample space for the given interval is:
Sample space = {3, 4, 5, 6, 7, 8, 9}
Fundamental Ideas of Probability
The development of mathematical concepts structures thinking about random phenomena.
Weighting possibilities by combinatorial multiplicity is an old idea going
back to Cardano (1560) and earlier. It culminated in the definition of probability
as ‘favourable divided by possible cases’ by Laplace (1812). This definition
requires the equiprobability of all possible cases which is ‘guaranteed’
in cases of application by some sort of symmetry argument. This notion of
equiprobability is still the foundation of the idea of simple random sampling.
The frequentist interpretation goes back to Bernoulli’s theorema aureum
(1713), the law of large numbers which relates individual probability to the
probabilistic ‘convergence’ of relative frequencies. The stabilising of
frequencies is a very intuitive means of transferring abstract probability onto the
(ideal) frequency in large series. Yet this law causes many a misconception,
with the further transfer of frequencies in large (or small) series onto a single,
one-off decision. The frequentist idea has been the basis for the axiomatization
of von Mises (1919), and it requires for its sensible application an irregularity
property of the occurrence or non-occurrence of the event in question.
The idea of weighting the evidence is based on a personal judgement using
whatever quantitative or qualitative information is available. It is the most
general idea of the three which are described here, as it can include the other
two.
To make such subjective probability evaluations, a person must be well informed or expert. Objectivity is achieved by the application of some rationality axioms on the preference system of that person (see, e.g., de Finetti 1937).
Regardless of which variant of the theory is considered, intuitive thought about probabilistic phenomena will be needed. Learning such a theory should enable one to structure one's formerly vague thinking about random phenomena. For example, the additivity law of probability provides a check for intuitive weights of evidence, as the weights of an event and its complement must total unity. A deeper understanding of the independence assumption should
enable one to regard prior frequencies (information about past series) as being
of no help in order to predict the single next outcome. Yet these frequencies
form the basis of an estimate of the probability, which in turn may be
informative for the prediction of frequencies in a new series of events. It takes
more effort to present theoretical ideas in order to use past frequencies as a
weight of evidence for a single, one-off case.
A very important function in stabilising intuitive thought can be seen in the simulation method, which gives a material form to probability, allowing one to study its empirical consequences. The so-called Poisson process (see, e.g., Meyer 1970) of generating events in time is labelled as purely random. Its mathematical characterisation may not be within the reach of learners at secondary level, yet its simulation would give impressive insights.
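In that spirit, a Poisson process is simple to simulate: the gaps between successive events are independent exponential random variables. A minimal Python sketch (the rate and time horizon below are arbitrary choices for illustration):

    import random

    random.seed(42)   # arbitrary seed, for a reproducible run
    rate = 2.0        # average number of events per unit of time

    # Sum exponential inter-arrival gaps to get the event times
    t, events = 0.0, []
    while True:
        t += random.expovariate(rate)
        if t >= 5.0:              # observe the process for 5 time units
            break
        events.append(round(t, 3))

    print(len(events))   # close to rate * 5 = 10 on average
    print(events)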
Applications Of Probability In Real Life
1. Weather forecasting.
If you see a 60 percent chance of rain, don't take that to mean that it's definitely going to rain. The 60 percent implies that on days with similar weather conditions, it ended up raining 60 out of 100 times. This is where the 60 percent comes from.
2. Card games.
Probability and statistics are a major part of card games, and this is why poker is so difficult. Sometimes you get a bad hand, and there's nothing you can do about it, unless you're gutsy and can bluff your way out of a dire situation.
4. Insurance.
If you were absolutely certain that you'd never get into a car accident, then you would never need to spend money on car insurance, right? But the moment you own a car and drive around, the chance of you getting into an accident becomes non-zero, or more than 0 percent.
The higher the likelihood of you getting into an accident, the higher the premium you have to pay. Teenage boys end up paying a whole lot more for car insurance than other people. This is one way insurance companies do business: by breaking down complex real-life situations into numbers so they can help the greatest number of people and penalize people who are at high risk.
5. Traffic signals.
What’s the average amount of time you’ll spend waiting in traffic? Did
you know traffic signals work on probability as well? Roads with high
traffic have higher waiting times because of traffic signals.
It’s programmed into the signals because the people that create and set
up these signals understand the average number of people that need to
cross the roads and understand the average number of vehicles in an
area.
You can understand the flow of traffic in a city and even estimate the
number of green lights you’ll end up with if you take pen and paper in
hand and write down all the possibilities.
This is one of the probability examples in real life that can help you waste less time on things you don't like and spend more on what you actually want to do.
6. Medical diagnosis.
This is one of the most noble applications of probability in real life. How
does your doctor know that your cough is just because of an infection
and not because of something more serious? Doctors widely go by the proverb coined in the 1940s by Dr. Theodore Woodward, professor at the University of Maryland School of Medicine, which states, "When you hear hooves, think horses, not zebras."
The chance of you having an exotic, serious disease when you have a cough is very low compared to the chance that you are coughing because of simple throat irritation, a simple infection, or something else really mundane.
Doctors have to understand false positives and false negatives extensively if they want to diagnose patients; they deal with dozens, if not hundreds, of patients a day.
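To make false positives concrete, here is a small Bayes' rule calculation in Python (an illustrative sketch with made-up numbers, not real clinical data):

    # Hypothetical numbers: a rare disease and an imperfect test
    p_disease = 0.001        # 0.1% of patients have the disease
    sensitivity = 0.99       # P(positive | disease)
    false_positive = 0.05    # P(positive | no disease)

    # Bayes' rule: P(disease | positive test)
    p_positive = sensitivity * p_disease + false_positive * (1 - p_disease)
    print(sensitivity * p_disease / p_positive)   # about 0.019, under 2%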
7. Election results.
Political pundits are everywhere, and once election results draw near,
you can be sure every news channel in your country will be filled with
buzz about the winner. Pollsters and analysts use historical data about how a region voted previously to estimate who it will vote for this time. They combine this with current trends and current polls, and do a lot of math to arrive at a conclusion about who is going to win.
8. Lottery probability.
There’s one way to make sure you 100 percent win the lottery. And
that’s to buy all of the tickets. But lottery organizations have laws and
safeguards in place to prevent people from doing just that. So how can
you increase your chances while following the rules and law as much as
possible?
The rules of probability dictate that the only way to have any chance of winning the lottery is to take part in it. Playing more often increases your overall chances of ever winning, but each time you play the lottery there's an independent probability, like with a coin flip, of winning or losing.
While buying more than one ticket can indeed increase your chances of
winning, this doesn’t give you a significant advantage in beating the
odds at all. At least, not in any amount that justifies the additional cost
of tickets.
9. Shopping recommendations.
Ever wonder why Amazon recommends certain products to buy after you finish purchasing something else? It's because businesses understand consumer behavior.
They understand you so well, in fact, that they can guess your next
purchases based on what you’ve previously bought. If you’re shopping
for pregnancy clothes, for example, there’s a fairly obvious chance that
you’ll be buying baby slippers and diapers about 9 months later.
This isn't rocket science, but probability can help you understand the shopping habits of people in the present and predict their shopping habits in the future as a result.
10. Stock market predictions.
Besides sophisticated mathematical models, there are other, safer and simpler ways to predict a stock's performance. If the CEO of a company says stupid things or starts dancing in their underpants on live television, you can bet that the public's trust in the company will deteriorate, and the stock will fall as a result.
While life is chaotic, you can use math to break down a lot of things in real life and predict what's going to happen in the future. Seeing how probability works out in real life can be fascinating to anyone who's mathematically inclined.
Conclusion
In conclusion, probability theory serves as a fundamental tool for understanding
uncertainty and randomness in various fields such as mathematics, statistics,
science, finance, and more. Through the application of probability, we can
quantify the likelihood of events occurring, make informed decisions in
uncertain situations, and assess risks effectively. Probability theory provides a
framework for reasoning about uncertainty, enabling us to model complex
systems, make predictions, and draw meaningful conclusions from data. As we
continue to advance in our understanding of probability and its applications, we
unlock new insights into the behavior of random phenomena, leading to
advancements in technology, science, and decision-making processes.