Rishita Rajan (Probability)

The document provides an overview of probability including its history, origins, key concepts and rules. It discusses theoretical, experimental and axiomatic probability. Key concepts covered include probability tree diagrams, the addition rule, complementary rule, conditional rule and multiplication rule. The fundamental counting principle is also explained.


INDEX

Sl.No Content


1. Introduction
2. Probability rules
3. Types of probability
4. Probability distribution
5. Prior and posterior probability
6. Random variable
7. Events and types of events
8. Sample Space
9. Application
10. Conclusion

PROBABILITY
INTRODUCTION
Probability is a measure of how likely an event is to occur. Many events
cannot be predicted with total certainty; at best we can say how likely they
are to happen. Probability ranges from 0 to 1, where 0 means the event is
impossible and 1 means it is certain. The probabilities of all the events in a
sample space add up to 1.
For example, when we toss a single coin there are only two possible
outcomes, Head or Tail (H, T). But when two coins are tossed there are four
possible outcomes, i.e. {(H, H), (H, T), (T, H), (T, T)}.

History:
Probable and probability and their cognates in other modern languages derive
from medieval learned Latin probabilis, deriving from Cicero and generally
applied to an opinion to mean plausible or generally approved. The form
probability is from Old French probabilite (14th c.) and directly from Latin
probabilitatem (nominative probabilitas), "credibility, probability," from
probabilis (see probable). The mathematical sense of the term is from 1718. In
the 18th century, the term chance was also used in the mathematical sense of
"probability" (and probability theory was called the Doctrine of Chances). This
word is ultimately from Latin cadentia, i.e. "a fall, case". The English adjective
likely is of Germanic origin, most likely from Old Norse likligr (Old English
had geliclic with the same sense), originally meaning "having the appearance of
being strong or able" or "having a similar appearance or qualities", with the
meaning "probably" recorded from the mid-15th century. The derived noun
likelihood originally meant "similarity, resemblance" but took on the meaning
"probability" from the mid-15th century. The meaning "something likely to be
true" is from the 1570s.

Origins:
Ancient and medieval law of evidence developed a grading of degrees of proof,
credibility, presumptions and half-proof to deal with the uncertainties of
evidence in court. In Renaissance times, betting was discussed in terms of odds
such as "ten to one", and maritime insurance premiums were estimated based on
intuitive risks, but there was no theory on how to calculate such odds or
premiums.
The mathematical methods of probability arose first in the investigations of
Gerolamo Cardano in the 1560s (not published until 100 years later), and then
in the correspondence between Pierre de Fermat and Blaise Pascal (1654) on such
questions as the fair division of the stake in an interrupted game of chance.
Christiaan Huygens (1657) gave a comprehensive treatment of the subject.
From Games, Gods and Gambling (ISBN 978-0-85264-171-2) by F. N. David:
In ancient times there were games played using astragali, or talus bones.
Pottery from ancient Greece shows that a circle was drawn on the floor and the
astragali were tossed into it, much like playing marbles. In Egypt, excavators
of tombs found a game they called "Hounds and Jackals", which closely
resembles the modern game "Snakes and Ladders". These appear to be the
early stages of the creation of dice.
The first dice game mentioned in literature of the Christian era was called
Hazard. It was played with 2 or 3 dice and is thought to have been brought to
Europe by knights returning from the Crusades.
Cardano also thought about the sum of three dice. At face value there are the
same number of combinations that sum to 9 as those that sum to 10: for a 9,
(621) (531) (522) (441) (432) (333), and for a 10, (631) (622) (541) (532) (442)
(433). However, there are more ways of obtaining some of these combinations
than others. For example, if we consider the order of results there are six ways
to obtain (621): (1,2,6), (1,6,2), (2,1,6), (2,6,1), (6,1,2), (6,2,1), but there is only
one way to obtain (333), where the first, second and third dice all roll 3. There
are a total of 27 permutations that sum to 10 but only 25 that sum to 9. From
this, Cardano found that the probability of throwing a 9 is less than that of
throwing a 10. He also demonstrated the efficacy of defining odds as the ratio of
favourable to unfavourable outcomes.
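Cardano's counting argument can be checked directly by enumeration; a minimal sketch in Python:

```python
from itertools import product

# Enumerate all 6^3 = 216 ordered outcomes of three dice and count
# how many sum to 9 versus how many sum to 10.
rolls = list(product(range(1, 7), repeat=3))
ways_9 = sum(1 for r in rolls if sum(r) == 9)
ways_10 = sum(1 for r in rolls if sum(r) == 10)

print(ways_9, ways_10)  # 25 permutations sum to 9, 27 sum to 10
print(ways_9 / 216, ways_10 / 216)  # so P(sum = 9) < P(sum = 10)
```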
Eighteenth century
Jacob Bernoulli's Ars Conjectandi (posthumous, 1713) and Abraham De
Moivre's The Doctrine of Chances (1718) put probability on a sound
mathematical footing, showing how to calculate a wide range of complex
probabilities. Bernoulli proved a version of the fundamental law of large
numbers, which states that in a large number of trials, the average of the
outcomes is likely to be very close to the expected value - for example, in 1000
throws of a fair coin, it is likely that there are close to 500 heads (and the larger
the number of throws, the closer to half-and-half the proportion is likely to be).
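Bernoulli's claim is easy to illustrate with a simulation; a small sketch (the seed is arbitrary, chosen only for reproducibility):

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# Law of large numbers: the proportion of heads in n throws of a
# fair coin tends toward 1/2 as n grows.
proportions = {}
for n in (100, 1000, 100000):
    heads = sum(random.randint(0, 1) for _ in range(n))
    proportions[n] = heads / n

print(proportions)  # the proportion gets closer to 0.5 as n increases
```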

3
Nineteenth century
The power of probabilistic methods in dealing with uncertainty was shown by
Gauss's determination of the orbit of Ceres from a few observations. The theory
of errors used the method of least squares to correct error-prone observations,
especially in astronomy, based on the assumption of a normal distribution of
errors to determine the most likely true value. In 1812, Laplace issued his
Théorie analytique des probabilités in which he consolidated and laid down
many fundamental results in probability and statistics such as the moment-
generating function, method of least squares, inductive probability, and
hypothesis testing.
Towards the end of the nineteenth century, a major success of explanation in
terms of probabilities was the Statistical mechanics of Ludwig Boltzmann and J.
Willard Gibbs which explained properties of gases such as temperature in terms
of the random motions of large numbers of particles.
The field of the history of probability itself was established by Isaac Todhunter's
monumental A History of the Mathematical Theory of Probability from the
Time of Pascal to that of Laplace (1865).
Twentieth century
Probability and statistics became closely connected through the work on
hypothesis testing of R. A. Fisher and Jerzy Neyman, which is now widely
applied in biological and psychological experiments and in clinical trials of
drugs, as well as in economics and elsewhere. A hypothesis, for example that a
drug is usually effective, gives rise to a probability distribution that would be
observed if the hypothesis is true. If observations approximately agree with the
hypothesis, it is confirmed, if not, the hypothesis is rejected.
The twentieth century also saw long-running disputes on the interpretations of
probability. In the mid-century frequentism was dominant, holding that
probability means long-run relative frequency in a large number of trials. At the
end of the century there was some revival of the Bayesian view, according to
which the fundamental notion of probability is how well a proposition is
supported by the evidence for it.
The mathematical treatment of probabilities, especially when there are infinitely
many possible outcomes, was facilitated by Kolmogorov's axioms (1933).

Probability Tree
A tree diagram helps to organize and visualize the different possible
outcomes. A tree has two main parts: branches and ends. The probability of
each branch is written on the branch, while the ends contain the final
outcomes. Tree diagrams are used to figure out when to multiply and when to
add probabilities. For a single coin toss, for example, the tree has two
branches, one for Head and one for Tail, each with probability 1/2.

What Are the Rules of Probability in Math?


1. Addition Rule

Whenever an event is the union of two other events, say A and B, then

P(A or B) = P(A∪B) = P(A) + P(B) − P(A∩B)

2. Complementary Rule

Whenever an event is the complement of another event: specifically, if A is an
event, then

P(A′) = 1 − P(A), or equivalently P(A) + P(A′) = 1

3. Conditional Rule

When event A is already known to have occurred and the probability of event B
is desired, then

P(B∣A) = P(A∩B) / P(A)

and vice versa for event B: P(A∣B) = P(A∩B) / P(B).

4. Multiplication Rule

Whenever an event is the intersection of two other events, that is, events A and
B need to occur simultaneously, then

P(A∩B) = P(A) ⋅ P(B∣A)

which reduces to P(A∩B) = P(A) ⋅ P(B) when A and B are independent.
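The four rules can be checked numerically on a single fair die roll; the events A ("even") and B ("greater than 3") below are our own illustrative choices:

```python
from fractions import Fraction

# Sample space of one fair die roll; each outcome has probability 1/6.
S = {1, 2, 3, 4, 5, 6}

def P(event):
    """Probability of an event (a subset of S) under equal likelihood."""
    return Fraction(len(event & S), len(S))

A = {2, 4, 6}   # "the roll is even"
B = {4, 5, 6}   # "the roll is greater than 3"

# Addition rule: P(A ∪ B) = P(A) + P(B) − P(A ∩ B)
assert P(A | B) == P(A) + P(B) - P(A & B)

# Complementary rule: P(A') = 1 − P(A)
assert P(S - A) == 1 - P(A)

# Conditional rule: P(B | A) = P(A ∩ B) / P(A)
P_B_given_A = P(A & B) / P(A)

# Multiplication rule: P(A ∩ B) = P(A) · P(B | A)
assert P(A & B) == P(A) * P_B_given_A

print(P(A | B), P_B_given_A)  # both equal 2/3 for these events
```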

What Is the Fundamental Counting Principle?


The fundamental counting principle is a rule which counts all the possible ways
for an event to happen or the total number of possible outcomes in a situation.

It states that when there are n ways to do one thing, and m ways to do another
thing, then the number of ways to do both the things can be obtained by taking
their product. This is expressed as n×m.

Example:

An ice cream seller sells 3 flavors of ice cream: vanilla, chocolate and
strawberry, and gives his customers 6 different choices of cones.

How many choices of ice creams does Wendy have if she goes to this ice cream
seller?

Solution

Wendy has 3 choices for the ice cream flavors and 6 choices for ice cream
cones.

Hence, by the fundamental counting principle, the number of choices that


Wendy has can be represented as 3×6=18
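The counting in this example can be reproduced by listing every flavor-cone pair; the cone labels below are placeholders, since the example doesn't name them:

```python
from itertools import product

flavors = ["vanilla", "chocolate", "strawberry"]
cones = [f"cone {i}" for i in range(1, 7)]  # 6 hypothetical cone choices

# Fundamental counting principle: n ways x m ways = n*m combined choices.
choices = list(product(flavors, cones))
print(len(choices))  # 18
```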

Types of Probability
There are three major types of probabilities:
*Theoretical Probability
*Experimental Probability
*Axiomatic Probability

Theoretical Probability
Theoretical probability as the name suggests is the theory behind probability.
Theoretical probability gives the outcome of the occurrence of an event based
on mathematics and reasoning. It tells us about what should happen in an ideal
situation without conducting any experiments.
Theoretical probability is extremely useful in situations, such as the
launching of a satellite, where it is not feasible to conduct an actual experiment
to arrive at a sound conclusion. Below, we look at the meaning of theoretical
probability, how it differs from the other types of probability, and some
associated examples.
Theoretical probability can be defined as the number of favorable outcomes
divided by the total number of possible outcomes. To determine the theoretical
probability there is no need to conduct an experiment. However, knowledge of
the situation is required to find the probability of occurrence of that event.
Theoretical probability predicts the probability of occurrence of an event by
assuming that all events are equally likely to occur.

Example
Suppose there are a total of 5 cards, 2 of which are favourable, and the
probability of drawing a favourable card needs to be determined. Then, by the
concept of theoretical probability, the number of favorable outcomes (2) is
divided by the total number of possible outcomes (5) to get the probability 0.4.

Theoretical Probability Formula

Theoretical probability can be calculated either by using logical reasoning or by
using a simple formula. The result depends only on counting outcomes. The
theoretical probability is the ratio of the number of favorable outcomes to the
total number of possible outcomes. This formula is expressed as follows:
Theoretical Probability = Number of favorable outcomes / Number of possible
outcomes.

How to Find Theoretical Probability?


Theoretical probability is used to express the likelihood of occurrence of an
event without conducting any experiments. Suppose a person has 30 raffle
tickets and, in total, 500 tickets were sold. The steps to calculate the theoretical
probability of the person winning a prize are as follows:
Step 1: Identify the number of favorable outcomes. The person holds 30 raffle
tickets, so there are 30 favorable outcomes.
Step 2: Determine the total number of possible outcomes. Since 500 tickets were
sold in total, there are 500 possible outcomes.
Step 3: Divide the value from step 1 by the value from step 2: 30 / 500 = 0.06.
This shows that the probability of the person winning a raffle prize is 0.06.
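The three steps condense into a couple of lines:

```python
# Theoretical probability = favorable outcomes / possible outcomes.
favorable = 30   # raffle tickets held (step 1)
possible = 500   # raffle tickets sold in total (step 2)
p_win = favorable / possible  # step 3
print(p_win)  # 0.06
```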

Experimental Probability
The chance or occurrence of a particular event is termed its probability. The
value of a probability lies between 0 and 1 which means if it is an impossible
event, the probability is 0 and if it is a certain event, the probability is 1. The
probability that is determined on the basis of the results of an experiment is
known as experimental probability. This is also known as empirical probability.
A random experiment is conducted and repeated many times to determine the
likelihood of an event, and each repetition is known as a trial. The experiment
is conducted to find the chance of an event occurring or not occurring. It can be
tossing a coin, rolling a die, or spinning a spinner. In mathematical terms, the
probability of an event is equal to the number of times the event occurred ÷ the
total number of trials. For instance, suppose you flip a coin 30 times and record
whether you get a head or a tail. The experimental probability of obtaining a
head is the number of recorded heads divided by the total number of tosses:
P(head) = Number of heads recorded ÷ 30 tosses.

Experimental Probability Formula


The experimental probability of an event is based on the number of times the
event has occurred during the experiment and the total number of times the
experiment was conducted. Each possible outcome is uncertain and the set of all
the possible outcomes is called the sample space. The formula to calculate the
experimental probability is:
P(E) = Number of times an event occurs/Total number of times the
experiment is conducted.

Consider an experiment of rotating a spinner 50 times. The table given below


shows the results of the experiment conducted. Let us find the experimental
probability of spinning the color - blue.

Color Occurrences

Pink 11

Blue 10

Green 13

Yellow 16
The experimental probability of spinning the color blue = 10/50 = 1/5 = 0.2 =
20%
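The spinner table translates directly into experimental probabilities; a short sketch:

```python
# Observed spinner counts out of 50 trials (from the table above).
occurrences = {"Pink": 11, "Blue": 10, "Green": 13, "Yellow": 16}
trials = sum(occurrences.values())

# Experimental probability = times the event occurred / total trials.
p_blue = occurrences["Blue"] / trials
print(trials, p_blue)  # 50 trials, P(blue) = 0.2
```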

Axiomatic Probability
In the normal approach to probability, we consider random experiments, sample
space and other events that are associated with the different experiments. In our
day to day life, we are more familiar with the word ‘chance’ as compared to the
word ‘probability’. Since Mathematics is all about quantifying things, the theory
of probability basically quantifies these chances of occurrence or non-
occurrence of the events. There are different types of events in probability.
Here, we will have a look at the definition and the conditions of the axiomatic
probability in detail.
Axiomatic Probability Definition

One important thing about probability is that it can only be applied to
experiments where we know the total number of possible outcomes; until we
know this, the concept of probability cannot be applied.
Thus, in order to apply probability in day to day situations, we should know the
total number of possible outcomes of the experiment. Axiomatic probability is
just another way of describing the probability of an event. As the word itself
says, in this approach some axioms are predefined before assigning
probabilities. This is done to quantify the event and hence to ease the
calculation of the occurrence or non-occurrence of the event.

Conditions of Axiomatic Probability


Suppose 'S' is the sample space of a random experiment and 'P' represents the
probability of occurrence of any event. 'P' should be a real-valued function
whose domain is the power set of 'S' and whose range lies in the interval [0, 1].
The probability 'P' must satisfy the following axioms:

For any event 'E', P(E) ≥ 0


P(S) = 1

If 'E' and 'F' are mutually exclusive events, then P(E ∪ F) = P(E) + P(F)
From the third point, we can deduce that P(empty set) = 0.
If we consider 'F' as an empty set, it's clear that 'E' and an empty set are disjoint
events. So, from the third point, we can conclude that P(E ∪ empty set) = P(E) +
P(empty set) or P(E) = P(E) + P(empty set). This implies that P(empty set) = 0.
If the sample space 'S' contains outcomes δ_1, δ_2, δ_3 …… δ_n, then
according to the axiomatic definition of probability, we can deduce:
0 ≤ P(δ_i) ≤ 1 for each δ_i ∈ S
P(δ_1) + P(δ_2)+ … + P(δ_n) = 1
For any event 'Q', P(Q) = ∑P(δ_i), where δ_i ∈ Q.
Please note that the singleton {δ_i} is known as an elementary event and for
simplicity, we write P(δ_i) for P({δ_i}).

Example of Axiomatic Probability


To better understand the axiomatic approach to probability, let's consider a
simple example.
Suppose we toss a coin. We say that the probability of getting a head or a tail is
1/2 each. Here, we are assigning a probability value of 1/2 for each event's
occurrence.
This scenario satisfies both conditions of the axiomatic approach, i.e.
Each value is neither less than zero nor greater than 1, and
The sum of the probabilities of getting a head and a tail is 1
Therefore, for this case, we can say that the probabilities of getting a head and a
tail are 1/2 each.
Now, suppose P(H) = 5/8 and P(T) = 3/8. Does this probability assignment also
satisfy the conditions of the axiomatic approach?
Let's recheck the basic conditions of the axiomatic approach.
Each value is neither less than zero nor greater than 1, and
The sum of the probabilities of getting a head and a tail is 1
This assignment of probability values also satisfies the axiomatic approach.
Hence, we can conclude that there are infinitely many ways to assign
probabilities to the outcomes of an experiment.
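The two checks used in the example above can be written as a small sketch; the helper name `satisfies_axioms` is ours, not standard:

```python
# Check the axioms for a finite probability assignment: every value must
# lie in [0, 1] and the values must sum to 1. (Additivity over disjoint
# events then follows when an event's probability is defined as the sum
# of its elementary probabilities.)
def satisfies_axioms(p):
    nonneg = all(0 <= v <= 1 for v in p.values())
    sums_to_one = abs(sum(p.values()) - 1.0) < 1e-12
    return nonneg and sums_to_one

fair = {"H": 1/2, "T": 1/2}      # the first assignment in the example
biased = {"H": 5/8, "T": 3/8}    # the second assignment
invalid = {"H": 0.7, "T": 0.7}   # sums to 1.4, so it violates P(S) = 1

print(satisfies_axioms(fair), satisfies_axioms(biased), satisfies_axioms(invalid))
```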

Probability Distribution
In statistics, a probability distribution gives the possibility of each outcome of
a random experiment or event; that is, it provides the probabilities of the
different possible occurrences.
To recall, probability is a measure of the uncertainty of various phenomena. If
you throw a die, for example, the probability distribution describes its possible
outcomes. A distribution can be defined for any random experiment whose
outcome is not certain or cannot be predicted. Let us now discuss its definition,
function, formula and types, along with how to create a table of probabilities
based on random variables.

What is Probability Distribution?


Probability distribution yields the possible outcomes for any random event. It is
also defined based on the underlying sample space as a set of possible outcomes
of any random experiment. These settings could be a set of real numbers or a set
of vectors or a set of any entities. It is a part of probability and statistics.

A random experiment is an experiment whose outcome cannot be predicted.
For example, if we toss a coin, we cannot predict whether it will come up Head
or Tail. A possible result of a random experiment is called an outcome, and the
set of all outcomes is called the sample space. With the help of such
experiments or events, we can always create a probability table in terms of
variables and probabilities.

Probability Distribution of Random Variables


A random variable has a probability distribution, which defines the probability
of its unknown values. A random variable can be discrete or continuous, or a
combination of both. A discrete random variable takes any of a designated
finite or countable list of values, and its probability distribution is given by a
probability mass function. A continuous random variable can take any
numerical value in an interval or set of intervals, and its probability distribution
is described by a probability density function.

Two random variables with the same probability distribution can still differ in
their relationships with other random variables, or in whether they are
independent of them. The realizations of a random variable, that is, the
outcomes of randomly choosing values according to the variable's probability
distribution function, are called random variates.

Types of Probability Distribution


There are two types of probability distribution which are used for different
purposes and various types of the data generation process.

1. Normal or Cumulative Probability Distribution


2. Binomial or Discrete Probability Distribution

Cumulative Probability Distribution


The cumulative probability distribution is also known as a continuous
probability distribution. In this distribution, the set of possible outcomes can
take on values in a continuous range.

For example, a variable that can take any value in a continuous range of real
numbers follows a continuous distribution. In real-life scenarios, the
temperature of the day is an example of a continuous random variable. Based on
these outcomes we can create a distribution table. A probability density
function describes it. The formula for the normal distribution is:

f(x) = (1 / (σ√(2π))) e^(−(x − μ)² / (2σ²))

Where,

• μ = Mean value
• σ = Standard deviation
• If mean (μ) = 0 and standard deviation (σ) = 1, the distribution
is known as the standard normal distribution
• x = Normal random variable
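The density formula can be sketched directly from its definition:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Density of the normal distribution with mean mu and s.d. sigma."""
    coeff = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return coeff * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

# At the mean of a standard normal the density is 1/sqrt(2*pi) ~ 0.3989,
# and the curve is symmetric about the mean.
print(normal_pdf(0.0))
print(normal_pdf(1.0) == normal_pdf(-1.0))
```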

Normal Distribution Examples
Since the normal distribution approximates many natural phenomena so well,
it has become a standard reference for many probability problems. Some
examples are:

• Heights of people across the population of the world
• The average of a large number of dice rolls or coin tosses
• IQ scores of children
• Income distribution in a country's economy
• The sizes of women's shoes
• Birth weights of newborn babies
• Average marks of students based on their performance

Discrete Probability Distribution


A distribution is called a discrete probability distribution when the set of
possible outcomes is discrete in nature.

For example, if a die is rolled, all the possible outcomes are discrete, and the
function assigning a probability to each outcome is known as a probability
mass function.

A binomial experiment consists of n repeated trials, in each of which the
outcome of interest may or may not occur. The formula for the binomial
distribution is:

P(X = r) = nCr · p^r · (1 − p)^(n − r)

Where,

• n = Total number of trials
• r = Number of successful trials
• p = Probability of success on a single trial
• 1 − p = Probability of failure
• nCr = n! / (r!(n − r)!)
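The probability mass function above can be sketched in a few lines:

```python
from math import comb

def binomial_pmf(r, n, p):
    """P(exactly r successes in n independent trials, success prob p)."""
    return comb(n, r) * p**r * (1 - p)**(n - r)

# Example: probability of exactly 2 heads in 4 fair-coin tosses.
print(binomial_pmf(2, 4, 0.5))  # 6/16 = 0.375
```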

Binomial Distribution Examples
As we already know, the binomial distribution gives the probabilities of
different sets of outcomes. In real life, the concept is used:

• To find the number of used and unused materials while
manufacturing a product.
• To take a survey of positive and negative feedback from people
about something.
• To check how many viewers watch a particular channel, by
conducting a YES/NO survey.
• To count the number of men and women working in a company.
• To count the votes for a candidate in an election, and many more.

What is Negative Binomial Distribution?


In probability theory and statistics, if a discrete probability distribution gives
the number of successes in a series of independent and identically distributed
Bernoulli trials before a specified number of failures occurs, it is termed the
negative binomial distribution. Here the number of failures is denoted by 'r'.
For instance, suppose we throw a die and define the occurrence of 1 as a failure
and all non-1's as successes. If we throw the die repeatedly until 1 appears the
third time (r = three failures), then the probability distribution of the number of
non-1s that appeared would be the negative binomial distribution.
What is Poisson Probability Distribution?
The Poisson probability distribution is a discrete probability distribution that
expresses the probability of a given number of events happening in a fixed
interval of time or space, if these events occur with a known constant rate and
independently of the time since the last event. It was named after the French
mathematician Siméon Denis Poisson. The Poisson distribution can also be
used for the number of events occurring in other specified intervals such as
distance, area or volume. Some real-life examples are:

• The number of patients arriving at a clinic between 10 and 11 AM.
• The number of emails received by a manager during office hours.
• The number of apples sold by a shopkeeper between 12 PM and
4 PM daily.
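The Poisson mass function, P(k events) = λ^k e^(−λ) / k!, can be sketched as follows; the arrival rate used is a hypothetical illustration:

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """P(exactly k events when events occur at a mean rate of lam)."""
    return (lam ** k) * exp(-lam) / factorial(k)

# Assumed example: patients arrive at a clinic at an average rate of
# 3 per hour; probability that exactly 2 arrive between 10 and 11 AM.
print(poisson_pmf(2, 3.0))
```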

Probability Distribution Function


A function which is used to define the distribution of a probability is called a
Probability distribution function. Depending upon the types, we can define these
functions. Also, these functions are used in terms of probability density
functions for any given random variable.

In the case of Normal distribution, the function of a real-valued random


variable X is the function given by;

FX(x) = P(X ≤ x)

Where P shows the probability that the random variable X occurs on less than or
equal to the value of x.

For an interval (a, b], the cumulative probability function gives

P(a < X ≤ b) = FX(b) − FX(a)

If we express the cumulative probability function as the integral of its
probability density function fX, then

FX(x) = ∫ from −∞ to x of fX(t) dt

In the case of a random variable taking a single value X = b, the probability can
be written as

P(X = b) = FX(b) − lim (x→b⁻) FX(x)

In the case of the binomial distribution, the function gives the probability
that a discrete random variable is exactly equal to some value. Such a
distribution is also called a probability mass distribution, and the function
associated with it is called a probability mass function.

A probability mass function is defined for scalar or multivariate random
variables whose domain is discrete. Let us discuss its formula:

Suppose a random variable X and sample space S is defined as;

X:S→A

where A ⊆ R and X is a discrete random variable.

Then the probability mass function fX : A → [0,1] for X can be defined as;

fX(x) = Pr (X=x) = P ({s ∈ S : X(s) = x})

Probability Distribution Table


The table could be created based on the random variable and possible outcomes.
Say, a random variable X is a real-valued function whose domain is the sample
space of a random experiment. The probability distribution P(X) of a random
variable X is the system of numbers.

X X1 X2 X3 ………….. Xn

P(X) P1 P2 P3 …………… Pn

where Pi > 0 for i = 1 to n, and P1 + P2 + P3 + … + Pn = 1

What is the Prior Probability?
In Bayesian statistical inference, a prior probability distribution, also known as
the prior, of an uncertain quantity is the probability distribution expressing
one's beliefs about this quantity before any evidence is taken into account. For
instance, a prior probability distribution might represent the relative proportions
of voters who will vote for some politician in a forthcoming election. The
unknown quantity may be a parameter of the model or a latent variable rather
than an observable variable.

What is Posterior Probability?


The posterior probability is the probability an event will occur after all data or
background information has been taken into account. It is closely related to the
prior probability, which is the probability an event will occur before you take
any new data or evidence into consideration. The posterior is an updating of the
prior probability by Bayes' rule:

Posterior Probability = (Likelihood × Prior Probability) / Evidence

It is commonly used in Bayesian hypothesis testing. For instance, old data
suggest that around 60% of students who begin college will graduate within 4
years; this is the prior probability. If we suspect the figure is much lower, we
start collecting new data. If the data collected imply that the true figure is
closer to 50%, then 50% is the posterior probability.
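Bayes' rule for a single hypothesis H and evidence E can be sketched as below; the numbers are hypothetical, chosen only to show a prior of 0.6 being revised downward by data that are more likely when H is false:

```python
# Posterior via Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E), where
# P(E) = P(E|H) * P(H) + P(E|not H) * P(not H).
def posterior(prior, p_e_given_h, p_e_given_not_h):
    evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / evidence

# Hypothetical illustration: prior P(H) = 0.6 that a student graduates
# within 4 years; the observed data are assumed three times as likely
# if H is false, so the posterior drops well below the prior.
print(posterior(0.6, 0.1, 0.3))  # 1/3
```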
Example 1:

A coin is tossed twice. X is the random variable of the number of heads


obtained. What is the probability distribution of x?

Solution:

First write the values of X: 0, 1 and 2, since the possibilities are that

no head comes,

one head and one tail come, or

heads come on both the coins.

Now the probability distribution could be written as;

P(X=0) = P(Tail+Tail) = ½ * ½ = ¼

P(X=1) = P(Head+Tail) or P(Tail+Head) = ½ * ½ + ½ *½ = ½

P(X=2) = P(Head+Head) = ½ * ½ = ¼

We can put these values in tabular form;


X 0 1 2

P(X) 1/4 1/2 1/4
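The table above can be built by enumerating all four equally likely outcomes:

```python
from itertools import product
from fractions import Fraction

# Distribution of X = number of heads in two tosses of a fair coin.
outcomes = list(product("HT", repeat=2))  # HH, HT, TH, TT
dist = {}
for o in outcomes:
    x = o.count("H")
    dist[x] = dist.get(x, Fraction(0)) + Fraction(1, len(outcomes))

print(dist[0], dist[1], dist[2])  # 1/4 1/2 1/4
```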

Example 2:

The weight of a container of water chosen at random is a continuous random
variable. The following table gives the weight in kg of 100 containers recently
filled by the water purifier. It records the observed values of the continuous
random variable and their corresponding frequencies. Find the probability for
each weight category.
Weight W Number of Containers

0.900−0.925 1

0.925−0.950 7

0.950−0.975 25

0.975−1.000 32

1.000−1.025 30

1.025−1.050 5

Total 100

Solution:

We first divide the number of containers in each weight category by 100 to give
the probabilities.

Weight W Number of Containers Probability

0.900−0.925 1 0.01

0.925−0.950 7 0.07

0.950−0.975 25 0.25

0.975−1.000 32 0.32

1.000−1.025 30 0.30

1.025−1.050 5 0.05

Total 100 1.00
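The division in this solution can be sketched as:

```python
# Observed counts of 100 containers per weight class (from the table).
counts = {
    "0.900-0.925": 1, "0.925-0.950": 7, "0.950-0.975": 25,
    "0.975-1.000": 32, "1.000-1.025": 30, "1.025-1.050": 5,
}
total = sum(counts.values())

# Probability of each class = its count / total containers.
probs = {w: n / total for w, n in counts.items()}
print(total, probs["0.975-1.000"])  # 100 containers, P = 0.32
```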

Random Variable
In probability, a real-valued function, defined over the sample space of a
random experiment, is called a random variable. That is, the values of the
random variable correspond to the outcomes of the random experiment.
Random variables could be either discrete or continuous. In this article, let’s
discuss the different types of random variables.
A random variable’s possible values may represent the outcomes of an
experiment yet to be performed, or the outcomes of a past experiment whose
value is still unknown. They may also conceptually describe either the results
of an “objectively” random process (like rolling a die) or the “subjective”
randomness that arises from incomplete knowledge of a quantity.

The domain of a random variable is a sample space, which is represented as the


collection of possible outcomes of a random event. For instance, when a coin is
tossed, there are only two possible outcomes: heads or tails.

Random Variable Definition

A random variable is a rule that assigns a numerical value to each outcome in
a sample space. Random variables may be either discrete or continuous. A
random variable is said to be discrete if it assumes only specified values in an
interval. Otherwise, it is continuous. We generally denote the random variables
with capital letters such as X and Y. When X takes values 1, 2, 3, …, it is said
to have a discrete random variable.

As a function, a random variable is required to be measurable, which allows


probabilities to be assigned to sets of its potential values. It is obvious that the
results depend on some physical variables which are not predictable. Say, when
we toss a fair coin, the final result of heads or tails will depend on the possible
physical conditions. We cannot predict which outcome will be noted. Though
there are other possibilities, like the coin breaking or being lost, such
considerations are avoided.

Variate
A variate can be defined as a generalization of the random variable. It has the
same properties as a random variable without being tied to any particular type
of probabilistic experiment. It always obeys a particular probabilistic law.

• A variate is called discrete variate when that variate is not capable of


assuming all the values in the provided range.
• If the variate is able to assume all the numerical values provided in the
whole range, then it is called continuous variate.

Types of Random Variable


As discussed in the introduction, there are two random variables, such as:

• Discrete Random Variable


• Continuous Random Variable
Let’s understand these types of variables in detail along with suitable examples
below.

Discrete Random Variable

A discrete random variable can take only a countable number of distinct values
such as 0, 1, 2, 3, 4, … and so on. The probability distribution of a discrete
random variable is a list of probabilities associated with each of its possible
values, known as the probability mass function.

In an analysis, let a person be chosen at random, and let the person’s height be


given by a random variable. Logically the random variable is described
as a function which relates the person to the person’s height. Now, associated
with the random variable is a probability distribution that enables the
calculation of the probability that the height is in any subset of likely values,
such as the likelihood that the height is between 175 and 185 cm, or the
possibility that the height is either less than 145 or more than 180 cm. Another
random variable could be the person’s age, which could be between 45 and 50
years, or less than 40, or more than 50.

Continuous Random Variable


A numerically valued variable is said to be continuous if, in any unit of
measurement, whenever it can take on the values a and b, it can also take on all
values between a and b. If the random variable X can assume an infinite and
uncountable set of values, it is said to be a continuous random variable. When
X takes any value in a given interval (a, b), it is said to be a continuous random
variable in that interval.

Formally, a continuous random variable is one whose cumulative distribution


function is continuous everywhere. There are no “jumps”, which would
correspond to numbers that have a finite probability of occurring. Alternatively,
such a variable almost never takes an exactly prescribed value c, but there is a
positive probability that its value will lie in particular intervals, which can be
arbitrarily small.

Random Variable Formula


For a given set of data, the mean and the variance of a random variable are
calculated using formulas. So, here we will define two major formulas:

• Mean of random variable


• Variance of random variable

Mean of random variable: If X is the random variable and P is the respective
probabilities, the mean of a random variable is defined by:

Mean (μ) = ∑ XP

where variable X consists of all possible values and P consist of respective


probabilities.

Variance of Random Variable: The variance tells how much the random
variable X is spread around the mean value. The formula for the variance of a
random variable is given by:

Var(X) = σ² = E(X²) − [E(X)]²

where E(X²) = ∑ X²P and E(X) = ∑ XP
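The two formulas can be sketched for a concrete case, say a fair six-sided die (an illustrative choice, not a value from the text):

```python
# Mean and variance of a discrete random variable:
#   Mean(mu) = sum(x * p),  Var(X) = E(X^2) - [E(X)]^2
# Example: X = outcome of a fair six-sided die, each value with probability 1/6.
values = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6

mean = sum(x * p for x, p in zip(values, probs))     # E(X)
e_x2 = sum(x**2 * p for x, p in zip(values, probs))  # E(X^2)
variance = e_x2 - mean**2                            # Var(X)

print(mean)      # 3.5
print(variance)  # 35/12, about 2.9167
```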

Functions of Random Variables


Let the random variable X assume the values x1, x2, … with corresponding
probabilities P(x1), P(x2), …; then the expected value of the random variable is
given by:

Expectation of X, E(X) = ∑ x P(x).

A new random variable Y can be defined by applying a real Borel measurable


function g: R → R to the outcomes of a real-valued random variable X. That is,
Y = g(X). The cumulative distribution function of Y is then given by:

FY(y) = P(g(X) ≤ y)

Random Variable and Probability Distribution


The probability distribution of a random variable can be

• Theoretical listing of outcomes and probabilities of the outcomes.


• An experimental listing of outcomes associated with their observed
relative frequencies.
• A subjective listing of outcomes associated with their subjective
probabilities.
The probability that a random variable X takes the value x is given by the
probability function of X, denoted by f(x) = P(X = x)

A probability distribution always satisfies two conditions:

• f(x)≥0
• ∑f(x)=1
The important probability distributions are:

• Binomial distribution
• Poisson distribution
• Bernoulli’s distribution
• Exponential distribution
• Normal distribution

Transformation of Random Variables


The transformation of a random variable means reassigning its value to another
variable. The transformation remaps the number line from x to y; the
transformation function is y = g(x).

Transformation of X or Expected Value of X for a Continuous Variable
Let the continuous random variable X have probability density function f(x);
then the expected value of the random variable is given by

Expectation of X, E(X) = ∫ x f(x) dx

Events and Types of Events in Probability

What are Events in Probability?


A probability event can be defined as a set of outcomes of an experiment. In
other words, an event in probability is the subset of the respective sample space.
So, what is sample space?

The entire possible set of outcomes of a random experiment is the sample


space or the individual space of that experiment. The likelihood of occurrence
of an event is known as probability. The probability of occurrence of any event
lies between 0 and 1.

Events In Probability
The sample space for the tossing of three coins simultaneously is given by:

S = {(T , T , T) , (T , T , H) , (T , H , T) , (T , H , H ) , (H , T , T ) , (H , T , H) , (H , H, T) ,(H
, H , H)}

Suppose, if we want to find only the outcomes which have at least two heads;
then the set of all such possibilities can be given as:

E = { (H , T , H) , (H , H ,T) , (H , H ,H) , (T , H , H)}

Thus, an event is a subset of the sample space, i.e., E is a subset of S.

There could be a lot of events associated with a given sample space. For any
event to occur, the outcome of the experiment must be an element of the set of
event E.

What is the Probability of Occurrence of an Event?


The ratio of the number of favourable outcomes to the total number of
outcomes is defined as the probability of occurrence of an event. So, the
probability that an event will occur is given as:

P(E) = Number of Favourable Outcomes/ Total Number of Outcomes
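This formula can be checked against the three-coin example above, where the event E was "at least two heads":

```python
from itertools import product

# P(E) = number of favourable outcomes / total number of outcomes.
sample_space = list(product("HT", repeat=3))            # all 8 outcomes
event = [o for o in sample_space if o.count("H") >= 2]  # the 4 listed outcomes

p_event = len(event) / len(sample_space)
print(p_event)  # 0.5
```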

Types of Events in Probability:


Some of the important probability events are:

• Impossible and Sure Events
• Simple Events
• Compound Events
• Independent and Dependent Events
• Mutually Exclusive Events
• Exhaustive Events
• Complementary Events
• Events Associated with “OR”
• Events Associated with “AND”
• Event E1 but not E2

Impossible and Sure Events


If the probability of occurrence of an event is 0, such an event is called
an impossible event and if the probability of occurrence of an event is 1, it is
called a sure event. In other words, the empty set ϕ is an impossible event and
the sample space S is a sure event.

Simple Events
Any event consisting of a single point of the sample space is known as a simple
event in probability. For example, if S = {56 , 78 , 96 , 54 , 89} and E = {78}
then E is a simple event.

Compound Events
Contrary to the simple event, if any event consists of more than one single point
of the sample space then such an event is called a compound event.
Considering the same example again, if S = {56 ,78 ,96 ,54 ,89}, E1 = {56 ,54 },
E2 = {78 ,56 ,89 } then, E1 and E2 represent two compound events.

Independent Events and Dependent Events


If the occurrence of any event is completely unaffected by the occurrence of any
other event, such events are known as an independent event in probability and
the events which are affected by other events are known as dependent events.

Mutually Exclusive Events
If the occurrence of one event excludes the occurrence of another event, such
events are mutually exclusive events i.e. two events don’t have any common
point. For example, if S = {1 , 2 , 3 , 4 , 5 , 6} and E1, E2 are two events such
that E1 consists of numbers less than 3 and E2 consists of numbers greater than
4.

So, E1 = {1,2} and E2 = {5,6} .

Then, E1 and E2 are mutually exclusive.

Exhaustive Events
A set of events is called exhaustive if all the events together cover the entire
sample space, i.e., their union is the sample space S.

Complementary Events
For any event E1 there exists another event E1‘, called the complement of E1,
which consists of the remaining elements of the sample space S:

E1‘ = S − E1

If a dice is rolled then the sample space S is given as S = {1 , 2 , 3 , 4 , 5 , 6 }. If


event E1 represents all the outcomes which is greater than 4, then E1 = {5, 6}
and E1‘ = {1, 2, 3, 4}.

Thus E1‘ is the complement of the event E1.

Similarly, the complement of E1, E2, E3……….En will be represented as E1‘,


E2‘, E3‘……….En‘

Events Associated with “OR”


If two events E1 and E2 are associated with OR then it means that either E1 or
E2 or both. The union symbol (∪) is used to represent OR in probability.

Thus, the event E1U E2 denotes E1 OR E2.

If we have exhaustive events E1, E2, E3 ………En associated with the


sample space S, then

E1 ∪ E2 ∪ E3 ∪ ……… ∪ En = S

Events Associated with “AND”


If two events E1 and E2 are associated with AND then it means the intersection
of elements which is common to both the events. The intersection symbol (∩) is
used to represent AND in probability.

Thus, the event E1 ∩ E2 denotes E1 and E2.


Event E1 but not E2


It represents the difference between the two events. Event E1 but not
E2 represents all the outcomes which are present in E1 but not in E2. Thus, the
event E1 but not E2 is represented as

E1 but not E2 = E1 − E2 = E1 ∩ E2‘

In the game of snakes and ladders, a fair die is thrown. If event E1 represents all
the events of getting a natural number less than 4, event E2 consists of all the
events of getting an even number and E3 denotes all the events of getting an odd
number. List the sets representing the following:

i)E1 or E2 or E3

ii)E1 and E2 and E3

iii)E1 but not E3

Solution:

The sample space is given as S = {1 , 2 , 3 , 4 , 5 , 6}

E1 = {1,2,3}

E2 = {2,4,6}

E3 = {1,3,5}

i) E1 or E2 or E3 = E1 ∪ E2 ∪ E3 = {1, 2, 3, 4, 5, 6}

ii) E1 and E2 and E3 = E1 ∩ E2 ∩ E3 = ∅

iii) E1 but not E3 = E1 − E3 = {2}
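The three answers can be verified with Python's built-in set operations:

```python
# The events from the snakes-and-ladders example above.
E1 = {1, 2, 3}  # natural numbers less than 4
E2 = {2, 4, 6}  # even numbers
E3 = {1, 3, 5}  # odd numbers

print(E1 | E2 | E3)  # union: {1, 2, 3, 4, 5, 6}
print(E1 & E2 & E3)  # intersection: set() (the empty set)
print(E1 - E3)       # E1 but not E3: {2}
```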

Multiplication Rule of Probability


The multiplication rule of probability explains the condition between two
events. For two events A and B associated with a sample
space S set A∩B denotes the events in which both events A and event B have
occurred. Hence, (A∩B) denotes the simultaneous occurrence of
events A and B. Event A∩B can be written as AB. The probability of
event AB is obtained by using the properties of conditional probability.

What is the Multiplication Rule of Probability?


According to the multiplication rule of probability, the probability of occurrence
of both the events A and B is equal to the product of the probability that B
occurs and the conditional probability that event A occurs given that event B
has occurred.

If A and B are dependent events, then the probability of both events occurring
simultaneously is given by:

P(A ∩ B) = P(B) . P(A|B)

If A and B are two independent events in an experiment, then the probability of
both events occurring simultaneously is given by:

P(A ∩ B) = P(A) . P(B)

Proof
We know that the conditional probability of event A given that B has occurred
is denoted by P(A|B) and is given by:

P(A|B) = P(A∩B)/P(B), where P(B) ≠ 0.

Therefore, P(A∩B) = P(B)×P(A|B) ……………………………………..(1)

Similarly, the conditional probability of event B given that A has occurred is:

P(B|A) = P(B∩A)/P(A), where P(A) ≠ 0.

Therefore, P(B∩A) = P(A)×P(B|A)

Since P(A∩B) = P(B∩A),

P(A∩B) = P(A)×P(B|A) ………………………………………(2)

From (1) and (2), we get:

P(A∩B) = P(B)×P(A|B) = P(A)×P(B|A), where

P(A) ≠ 0, P(B) ≠ 0.

The above result is known as the multiplication rule of probability.

For independent events A and B, P(B|A) = P(B). The equation (2) can be
modified into,

P(A∩B) = P(B) × P(A)

Multiplication Theorem of Probability


We have already learned the multiplication rules we follow in probability, such
as;

P(A∩B) = P(A)×P(B|A) ; if P(A) ≠ 0

P(A∩B) = P(B)×P(A|B) ; if P(B) ≠ 0

Let us learn here the multiplication theorems for independent events A and B.

If A and B are two independent events for a random experiment, then the
probability of simultaneous occurrence of two independent events will be equal
to the product of their probabilities. Hence,

P(A∩B) = P(A).P(B)

Now, from multiplication rule we know;

P(A∩B) = P(A)×P(B|A)

Since A and B are independent, therefore;

P(B|A) = P(B)

Therefore, again we get;

P(A∩B) = P(A).P(B)

Hence, proved.

Solved Example of Multiplication Rule of Probability


Illustration 1: An urn contains 20 red and 10 blue balls. Two balls are drawn
from the urn one after the other without replacement. What is the probability
that both the balls drawn are red?

Solution: Let A and B denote the events that the first and the second balls
drawn are red. We have to find P(A∩B) or P(AB).

P(A) = P(red ball in the first draw) = 20/30

Now, only 19 red balls and 10 blue balls are left in the urn. The probability of
drawing a red ball in the second draw is an example of conditional probability,
where the drawing of the second ball depends on the drawing of the first ball.

Hence Conditional probability of B on A will be,

P(B|A) = 19/29

By the multiplication rule of probability,

P(A∩B) = P(A) × P(B|A) = (20/30) × (19/29) = 38/87 ≈ 0.437
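The exact value can also be checked with a quick simulation of the draw (a sketch; the "R"/"B" labels, seed, and trial count are our own choices):

```python
import random

# Exact answer from the multiplication rule:
# P(both red) = P(A) * P(B|A) = (20/30) * (19/29) = 38/87
exact = (20 / 30) * (19 / 29)

# Simulate drawing two balls without replacement from 20 red + 10 blue.
random.seed(0)
urn = ["R"] * 20 + ["B"] * 10
trials = 100_000
both_red = sum(
    1 for _ in range(trials)
    if random.sample(urn, 2) == ["R", "R"]
)

print(exact)              # about 0.4368
print(both_red / trials)  # close to the exact value
```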

Sample Space
A sample space is a collection or a set of possible outcomes of a random
experiment. The sample space is represented using the symbol “S”. A subset
of possible outcomes of an experiment is called an event. A sample space may
contain a number of outcomes that depends on the experiment. If it contains a
finite number of outcomes, then it is known as a discrete or finite sample space.

The sample space for a random experiment is written within curly braces “ { }
“. There is a difference between the sample space and the events. For rolling a
die, the sample space S is {1, 2, 3, 4, 5, 6 }, whereas an event can be written as
{1, 3, 5 }, which represents the set of odd numbers, or { 2, 4, 6 }, which
represents the set of even numbers. The outcomes of an experiment are
random, and the sample space becomes the universal set for some particular
experiments. Some examples are as follows:

Tossing a Coin
When flipping a coin, two outcomes are possible, such as head and tail.
Therefore the sample space for this experiment is given as

Sample Space, S = { H, T } = { Head, Tail }

Tossing Two Coins


When flipping two coins, the number of possible outcomes are four. Let, H1 and
T1 be the head and tail of the first coin and H2 and T2 be the head and tail of the
second coin respectively and the sample space can be written as

Sample Space, S = { (H1, H2), (H1, T2), (T1, H2), (T1, T2) }

In general, if you have “n” coins, then the possible number of outcomes will be
2ⁿ.

Example: If you toss 3 coins, “n” is taken as 3.

Therefore, the possible number of outcomes will be 2³ = 8 outcomes

Sample space for tossing three coins is written as

Sample space S = { HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}
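The 2ⁿ rule can be checked by generating the coin-toss sample spaces directly (a sketch using Python's itertools):

```python
from itertools import product

# Sample space for tossing n coins: every string of H's and T's of length n.
for n in (1, 2, 3):
    space = ["".join(o) for o in product("HT", repeat=n)]
    print(n, len(space), space)  # len(space) equals 2**n

# For n = 3 this reproduces the eight outcomes listed above.
```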

A Die is Thrown
When a single die is thrown, there are 6 outcomes since it has 6 faces.
Therefore, the sample space is given as

S = { 1, 2, 3, 4, 5, 6}

Two Dice are Thrown


When two dice are thrown together, we will get 36 pairs of possible outcomes.
Each face of the first die can fall with all the six faces of the second die. As
there are 6 x 6 possible pairs, it becomes 36 outcomes. The 36 outcome pairs
are written as:

{(1,1) (1,2) (1,3) (1,4) (1,5) (1,6)
 (2,1) (2,2) (2,3) (2,4) (2,5) (2,6)
 (3,1) (3,2) (3,3) (3,4) (3,5) (3,6)
 (4,1) (4,2) (4,3) (4,4) (4,5) (4,6)
 (5,1) (5,2) (5,3) (5,4) (5,5) (5,6)
 (6,1) (6,2) (6,3) (6,4) (6,5) (6,6)}
If three dice are thrown, there are 216 possible outcomes, since n in the
experiment is taken as 3 and 6³ = 216.
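The same enumeration works for two dice; for example, the probability that the sum is 7 (an illustrative event, not one from the text) comes out as 6/36:

```python
from itertools import product

# All 36 equally likely outcomes when two dice are thrown.
outcomes = list(product(range(1, 7), repeat=2))
sum_is_7 = [o for o in outcomes if sum(o) == 7]

print(len(outcomes))                  # 36
print(len(sum_is_7) / len(outcomes))  # 6/36, about 0.1667
```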

Sample Problem

Question:
Write the sample space for the given interval [3,9].

Solution:
Given interval: [3, 9]

As the integers given are in the closed interval, we can take the value from 3 to
9.

Therefore, the sample space for the given interval is:

Sample space = { 3, 4, 5, 6, 7, 8, 9 }

Probability as a Means for Modelling Thinking and Applications
Probabilistic thinking, or stochastic thinking as it is called in continental
Europe, is a slogan, a vision for the teaching of probability at all levels. On the
one hand, it is argued that probability can be used as a means to describe a
specific type of thinking, an approach towards viewing reality in terms of
uncertainty. On the other hand, if one has developed a special type of thinking,
then this should enable one to structure reality in a very specific way, as
one might see everything red using spectacles with red glass. Some
deliberations on the intuitive roots of probability and its refinement by
experience or mathematics follow.

Probability – a concept to structure thinking


There are two main motives for legitimating probability for curricula at any
level. One is the formation of a specific type of thinking, a probabilistic
thinking. This should be comparable to other types of mathematical reasoning
such as geometric thinking and can be improved by mathematical treatment.
There should emerge a component extending one’s senses, some sort of
intuitive insight, which is more than a superficial knowledge of mathematical
terms and procedures. Second is the need for applications, which is discussed
in the subsection below.
Different ideas embraced by the concept of probability include classical
probability, the frequentist interpretation, the weighting of evidence,
randomness with its irregular patterns at a micro-level and regular consequences
at a macro-level. All these ideas are at the basis of intuitive thought and should
or could be sharpened by mathematical penetration. Insofar as it takes place, the

development of mathematical concepts will structure thinking about random
phenomena.
Weighting possibilities by combinatorial multiplicity is an old idea going
back to Cardano (1560) and earlier. It culminated in the definition of probability
as ‘favourable divided by possible cases’ by Laplace (1812). This definition
requires the equiprobability of all possible cases which is ‘guaranteed’
in cases of application by some sort of symmetry argument. This notion of
equiprobability is still the foundation of the idea of simple random sampling.
The frequentist interpretation goes back to Bernoulli’s theorema aureum
(1713), the law of large numbers which relates individual probability to the
probabilistic ‘convergence’ of relative frequencies. The stabilising of
frequencies is a very intuitive means of transferring abstract probability onto the
(ideal) frequency in large series. Yet this law causes many a misconception,
with the further transfer of frequencies in large (or small) series onto a single,
one-off decision. The frequentist idea has been the basis for the axiomatization
of von Mises (1919), and it requires for its sensible application an irregularity
property of the occurrence or non-occurrence of the event in question.
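The stabilising of relative frequencies can be illustrated with a short simulation (a sketch assuming a fair coin; the seed and sample sizes are arbitrary choices):

```python
import random

# Toss a fair coin many times; the relative frequency of heads in the
# first n tosses settles near the probability 0.5 as n grows.
random.seed(42)
tosses = [random.random() < 0.5 for _ in range(100_000)]

for n in (10, 100, 1_000, 10_000, 100_000):
    freq = sum(tosses[:n]) / n
    print(n, freq)
```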
The idea of weighting the evidence is based on a personal judgement using
whatever quantitative or qualitative information is available. It is the most
general idea of the three which are described here, as it can include the other
two.
To make such subjective probability evaluations, a person must be well
informed or expert. Objectivity is achieved by the application of some
rationality
axioms on the preference system of that person (see, e.g., de Finetti 1937).
Regardless of which variant of the theory is considered, intuitive thought
about probabilistic phenomena will be needed. Learning such a theory should
enable one to structure one’s formerly vague thinking about random
phenomena. For example, the additivity law of probability provides a check for
intuitive weights of evidence, as the weights of an event and its complement
must total unity. A deeper understanding of the independence assumption should
enable one to regard prior frequencies (information about past series) as being
of no help in order to predict the single next outcome. Yet these frequencies
form the basis of an estimate of the probability, which in turn may be
informative for the prediction of frequencies in a new series of events. It takes
more effort to present theoretical ideas in order to use past frequencies as a
weight of evidence for a single, one-off case.
A very important function to stabilise intuitive thought can be seen in the
simulation method, which gives a material form to probability allowing one
to study its empirical consequences. The so-called Poisson process (see, e.g.,
Meyer 1970) of generating events in time is labelled as purely random. Its
mathematical characterisation may not be within the reach of learners at
secondary level, yet its simulation would give impressive insights.
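Such a simulation can be sketched in a few lines, assuming events arrive with exponentially distributed waiting times (the rate and time horizon below are arbitrary illustrative choices):

```python
import random

# Simulate a Poisson process: event times are generated by adding
# exponentially distributed gaps with a chosen rate (events per unit time).
random.seed(1)
rate = 2.0        # assumed rate: 2 events per unit time
horizon = 1000.0  # simulate over this many time units

t, events = 0.0, 0
while True:
    t += random.expovariate(rate)  # waiting time to the next event
    if t > horizon:
        break
    events += 1

print(events / horizon)  # the observed rate, close to 2.0
```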

Applications Of Probability In Real Life

1. Forecasting the weather.


Here’s a simple use of probability in real life that you likely already do.
We always check the weather forecast before we plan a big outing.
Sometimes the forecaster declares that there’s a 60 percent chance of
rain.
We might decide to delay our outing because we trust this forecast. But
where did the “60 percent” come from? Meteorologists use expensive
equipment and algorithms to understand the likelihood of weather
happenings. They look at the historical data, combine it with current
trends, and look at the chances of rain occurring on a certain day.

If you see a 60 percent chance of rain, don’t take that to mean that it’s
definitely going to rain. The 60 percent implies that on days with similar
weather conditions, 60 out of 100 times, it ended up raining. This is
where the 60 percent comes from.

The same applies to temperature guesstimates, along with chances of


snow, hail, or thunderstorms. This is just one of the probability examples
in real life that can help you in your day-to-day life.
2. Sports outcomes.
Coaches use probability to decide the best possible strategy to pursue in
a game. When a particular batter goes up to bat in a baseball game, the
players and coach can look up the player’s specific batting average to
deduce how that player will perform. The coach can then plan their
approach accordingly.

3. Card games and other games of chance.


The card game Rummy uses probability, as well as permutations and
combinations to guesstimate the kind of cards that will end up on the
table. Poker odds are another great application of probability in real life.
Players use probability to estimate their chances of getting a good hand,
a bad hand, and whether they should bet more or simply fold their
hands.

Probability and statistics is a major part of card games, and this is why
poker is so difficult. Sometimes, you get a bad hand, and there’s nothing
you can do about it. Unless you’re gutsy and can bluff your way out of a
dire situation.

4. Insurance.
If you were absolutely certain that you’d never get into a car accident,
then you never need to spend money on car insurance, right? But the
moment you own a car and drive around, the chances of you getting into
an accident becomes non-zero. Or more than 0 percent.

The higher the likelihood of you running into an accident, the higher the
premium you have to pay. Teenage boys end up paying a whole lot more
on car insurance than other people. This is one way insurance companies
do business—by breaking down complex real-life situations into
numbers so they can help the most number of people and penalize people
that are at high risk.

Insurance companies make use of probability in the real world to make


money.

5. Traffic signals.
What’s the average amount of time you’ll spend waiting in traffic? Did
you know traffic signals work on probability as well? Roads with high
traffic have higher waiting times because of traffic signals.

It’s programmed into the signals because the people that create and set
up these signals understand the average number of people that need to
cross the roads and understand the average number of vehicles in an
area.

You can understand the flow of traffic in a city and even estimate the
number of green lights you’ll end up with if you take pen and paper in
hand and write down all the possibilities.

This is one of the probability examples in real life that can help you
waste less time on things you don’t like and more on what you want to
actually do.
6. Medical diagnosis.
This is one of the most noble applications of probability in real life. How
does your doctor know that your cough is just because of an infection
and not because of something more serious? Doctors widely go by the
proverb coined in the 1940s by Dr. Theodore Woodward, professor at the
University of Maryland School of Medicine., which states, “When you
hear hooves, think horses, not zebras.”

The chances of you having an exotic, serious disease if you have a cough
is very low when compared to you coughing because you have simple
throat irritation, a simple infection, or something else really mundane.
Doctors have to understand false positives and false negatives
extensively if they want to diagnose patients. They deal with dozens, if
not hundreds of patients a day.

Doctors use all kinds of mathematical techniques in their daily practice


so they can treat people efficiently. This is one of the more useful
probability examples in real life since it can save people’s lives when
done correctly.

7. Election results.
Political pundits are everywhere, and once election results draw near,
you can be sure every news channel in your country will be filled with
buzz about the winner. Election officials use historical data to
understand how a region voted previously to understand who they will
vote for this time.

They combine this with current trends, current polls, and do a lot of math
to arrive at a conclusion on who is going to win.

8. Lottery probability.
There’s one way to make sure you 100 percent win the lottery. And
that’s to buy all of the tickets. But lottery organizations have laws and
safeguards in place to prevent people from doing just that. So how can
you increase your chances while following the rules and law as much as
possible?

The rules of probability dictate that the only way to win the lottery is to
be part of it. Then you can further increase your odds by playing
frequently. Each play of the lottery is an independent event, like a coin
flip, where you can win or lose.

While buying more than one ticket can indeed increase your chances of
winning, this doesn’t give you a significant advantage in beating the
odds at all. At least, not in any amount that justifies the additional cost
of tickets.
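A rough sketch of why extra plays barely move the needle: if a single ticket wins with probability p, then n independent plays win with probability 1 − (1 − p)ⁿ. The odds below (1 in 14 million, roughly a 6/49-style draw) are an illustrative assumption, not figures from any particular lottery.

```python
# Assumed single-ticket win probability (illustrative, not a real lottery's odds).
p = 1 / 14_000_000

for n in (1, 10, 100):
    p_win = 1 - (1 - p) ** n  # probability of at least one win in n plays
    print(n, p_win)

# Even 100 plays leave the chance of winning below 0.001 percent.
```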

9. Shopping recommendations.
Ever wonder why you have Amazon recommend certain products to buy
after you finish purchasing something else? It’s because businesses
understand consumer behavior.

They understand you so well, in fact, that they can guess your next
purchases based on what you’ve previously bought. If you’re shopping
for pregnancy clothes, for example, there’s a fairly obvious chance that
you’ll be buying baby slippers and diapers about 9 months later.

This isn’t rocket science, but probability can help you understand the
shopping habits of people in the present and predict their shopping habits
in the future as a result.

10. Stock market predictions.


How do you know if the stock you’ve bought today will rise in price next
month or fall instead? There are millions of people trying to find the
answer to this question. They use tremendous amounts of historical data,
algorithms, and predictive analytics that all make use of math to
understand the market. This is one of the most formal and studied
probability examples in real life.
Some people even think that the collective mood of Twitter users can be
used to predict the rise and fall of the stock market. The stock market is
widely known to run off people’s emotions and not necessarily things
based on sound logic.

But there are other safer and simpler ways to predict a stock’s
performance. If the CEO of a company says stupid things or starts
dancing in their underpants on live television, you can bet that the
public’s trust in the company will deteriorate, and the stock will fall as a
result.

Probability is a strange and fascinating field that can be incredibly fun


and interesting to study.

While life is chaotic, you can use math to break down a lot of things in
real life to predict what’s going to happen in the future. Seeing how
probability in real life works out can be fascinating to anyone that’s
mathematically inclined.

Conclusion
In conclusion, probability theory serves as a fundamental tool for understanding
uncertainty and randomness in various fields such as mathematics, statistics,
science, finance, and more. Through the application of probability, we can
quantify the likelihood of events occurring, make informed decisions in
uncertain situations, and assess risks effectively. Probability theory provides a
framework for reasoning about uncertainty, enabling us to model complex
systems, make predictions, and draw meaningful conclusions from data. As we
continue to advance in our understanding of probability and its applications, we
unlock new insights into the behavior of random phenomena, leading to
advancements in technology, science, and decision-making processes.

