Probability Distribution Basics
A probability distribution describes how the possible outcomes of a random experiment or process are spread out in terms of their likelihood of occurrence. It provides a way to quantify uncertainty and make predictions about the outcomes of random events.
Discrete Random Variable: A random variable that can only take on distinct, separate values
with specific probabilities. Examples include the number of heads obtained when flipping a coin
or the outcome of rolling a fair six-sided die.
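As a minimal sketch of the die example (plain Python, no external libraries), a fair die's distribution can be written down directly as a mapping from outcomes to probabilities:

```python
from fractions import Fraction

# PMF of a fair six-sided die: each face has probability 1/6.
die_pmf = {face: Fraction(1, 6) for face in range(1, 7)}

print(die_pmf[3])              # Fraction(1, 6): probability of rolling a 3
print(sum(die_pmf.values()))   # Fraction(1, 1): all probabilities sum to 1
```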
Continuous Random Variable: A random variable that can take on any value within a range.
These values are often associated with measurements, and there are uncountably many
possible outcomes. Examples include the height of individuals in a population or the time it takes
for a computer to process a task.
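For instance, heights are often modeled with a normal distribution. The sketch below draws a few samples to show that any real value in the range can occur; the mean of 170 cm and standard deviation of 10 cm are hypothetical parameters chosen for illustration, not values from this text:

```python
import numpy as np

# Hypothetical model (parameters are illustrative, not from data):
# heights ~ Normal(mean=170 cm, standard deviation=10 cm)
rng = np.random.default_rng(seed=0)
heights = rng.normal(loc=170.0, scale=10.0, size=5)
print(heights)  # five real-valued heights; any value in the range can occur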
For discrete random variables, the probability distribution is described by a Probability Mass Function (PMF), which assigns a probability to each possible outcome. For continuous random variables, it is described by a Probability Density Function (PDF), which assigns a density to each value in the range.
In either case the distribution must be normalized: the probabilities assigned to all possible outcomes must sum to 1 for a PMF, and the density must integrate to 1 over the range for a PDF.
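As a quick numerical check of both normalization conditions, here is a minimal sketch assuming SciPy is installed; the binomial parameters n=10 and p=0.3 are arbitrary demo values:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

# Discrete case: a Binomial(n=10, p=0.3) PMF must sum to 1.
# (n and p are arbitrary demo values.)
n, p = 10, 0.3
k = np.arange(0, n + 1)            # every possible outcome 0..n
pmf = stats.binom.pmf(k, n, p)
print("PMF sum:", pmf.sum())       # 1.0 up to floating-point error

# Continuous case: the standard normal PDF must integrate to 1.
area, _ = quad(stats.norm.pdf, -np.inf, np.inf)
print("PDF integral:", area)       # 1.0 up to numerical error
```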
The set of all possible values a random variable can take is called its support (sometimes loosely referred to as its domain).
The mean (expected value) and variance of a probability distribution provide measures of central
tendency and variability, respectively, for the random variable.
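For reference, the standard definitions for the discrete case are below; the continuous case replaces each sum with an integral of the corresponding expression against the PDF:

```latex
\mu = \mathbb{E}[X] = \sum_{x} x\, p(x), \qquad
\sigma^2 = \operatorname{Var}(X) = \mathbb{E}\!\left[(X - \mu)^2\right] = \sum_{x} (x - \mu)^2\, p(x)
```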
Some well-known discrete probability distributions include the Bernoulli distribution, Binomial
distribution, Poisson distribution, and Geometric distribution.
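The sketch below evaluates one illustrative probability from each of these families, assuming SciPy is installed; every parameter value is an arbitrary demo choice:

```python
from scipy import stats

# One illustrative probability from each named family;
# every parameter value below is an arbitrary demo choice.
print(stats.bernoulli.pmf(1, 0.5))   # P(X=1) for Bernoulli(p=0.5)
print(stats.binom.pmf(3, 10, 0.5))   # P(3 successes in 10 trials), p=0.5
print(stats.poisson.pmf(2, 4.0))     # P(2 events) when the mean rate is 4
print(stats.geom.pmf(5, 0.2))        # P(first success on trial 5), p=0.2
```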
Probability distributions are essential in fields such as statistics, economics, engineering, and the sciences because they help model and understand uncertainty. Knowing the probability distribution of a random variable makes it possible to make informed decisions, perform statistical inference, and analyze data to draw meaningful conclusions about real-world phenomena.