Probability density function
In probability theory, a probability density function (PDF), or density of a continuous random variable, is a function whose value at any given sample in the sample space can be interpreted as providing a relative likelihood that the value of the random variable would be close to that sample. In a more precise sense, the PDF is used to specify the probability of the random variable falling within a particular range of values, as opposed to taking on any one value. This probability is given by the integral of this variable's PDF over that range; that is, it is given by the area under the density function but above the horizontal axis and between the lowest and greatest values of the range. The probability density function is nonnegative everywhere, and its integral over the entire space is equal to 1.
Example
Suppose bacteria of a certain species typically live 4 to 6 hours. The probability
that a bacterium lives exactly 5 hours is equal to zero. A lot of bacteria live for
approximately 5 hours, but there is no chance that any given bacterium dies at
exactly 5.00... hours. However, the probability that the bacterium dies between 5
hours and 5.01 hours is quantifiable. Suppose the answer is 0.02 (i.e., 2%). Then,
the probability that the bacterium dies between 5 hours and 5.001 hours should be
about 0.002, since this time interval is one-tenth as long as the previous. The
probability that the bacterium dies between 5 hours and 5.0001 hours should be
about 0.0002, and so on.
There is a probability density function f with f(5 hours) = 2 hour⁻¹. The integral
of f over any window of time (not only infinitesimal windows but also large
windows) is the probability that the bacterium dies in that window.
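As an illustration of how such window probabilities follow from a density, the sketch below numerically integrates a hypothetical lifetime density over shrinking windows around 5 hours. The triangular density used here (with f(5) = 1 hour⁻¹ rather than the 2 hour⁻¹ quoted above) is an assumption chosen only to keep the example self-contained.

# Hypothetical triangular lifetime density on [4, 6] hours, peaking at t = 5.
# This specific density is an illustrative assumption; any nonnegative function
# that integrates to 1 over its support behaves the same way.
def f(t):
    return max(0.0, 1.0 - abs(t - 5.0))

def prob(a, b, steps=100_000):
    """Approximate P(a <= T <= b) by a midpoint Riemann sum of the density."""
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

print(prob(4.0, 6.0))     # ~1.0: total probability over the whole lifespan
print(prob(5.0, 5.01))    # ~0.01, roughly f(5) * 0.01
print(prob(5.0, 5.001))   # ~0.001, one tenth of the previous window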
Formal definition
A continuous random variable X has probability density function f_X if the probability of X falling in an interval [a, b] is

\Pr[a \le X \le b] = \int_a^b f_X(x)\,dx.

(This definition may be extended to any probability distribution using the measure-theoretic definition of probability.) In that general setting, a random variable X with values in a measurable space has density f with respect to a reference measure μ if

\Pr[X \in A] = \int_A f \, d\mu

for every measurable set A; that is, f is the Radon–Nikodym derivative of the distribution of X with respect to μ.
Discussion
In the continuous univariate case above, the reference measure is the Lebesgue
measure. The probability mass function of a discrete random variable is the density
with respect to the counting measure over the sample space (usually the set of
integers, or some subset thereof).
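For a concrete instance, the probability mass function of a fair six-sided die, p(k) = 1/6, is such a density with respect to the counting measure μ on {1, …, 6}:

\Pr[X \in A] = \int_A p \, d\mu = \sum_{k \in A} \frac{1}{6}.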
Further details
Unlike a probability, a probability density function can take on values greater
than one; for example, the uniform distribution on the interval [0, 1/2] has
probability density f(x) = 2 for 0 ≤ x ≤ 1/2 and f(x) = 0 elsewhere.
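The total probability nonetheless remains 1, since

\int_0^{1/2} 2\,dx = 2 \cdot \tfrac{1}{2} = 1;

a density may exceed 1 on a sufficiently short interval without violating the normalization requirement.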
Families of densities
It is common for probability density functions (and probability mass functions) to
be parametrized—that is, to be characterized by unspecified parameters. For
example, the normal distribution is parametrized in terms of the mean and the variance, denoted by μ and σ² respectively, giving the family of densities

f(x; \mu, \sigma^2) = \frac{1}{\sigma \sqrt{2\pi}} \, e^{-\frac{(x-\mu)^2}{2\sigma^2}}.
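A brief numerical sketch of this parametrized family follows; the particular parameter values are arbitrary and serve only to show that each choice of (μ, σ²) yields a legitimate density integrating to 1.

import math

def normal_pdf(x, mu, sigma2):
    """Density of the normal family member with mean mu and variance sigma2."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)

# Arbitrary illustrative parameter choices; each pair selects one family member.
for mu, sigma2 in [(0.0, 1.0), (2.0, 0.25)]:
    h, total, x = 0.001, 0.0, mu - 10.0
    while x < mu + 10.0:          # crude numerical integration over a wide range
        total += normal_pdf(x, mu, sigma2) * h
        x += h
    print(mu, sigma2, round(total, 4))   # ~1.0 for every member of the family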
Function of random variables and change of variables in the probability density function
It is tempting to think that in order to find the expected value E(g(X)), one must first find the probability density f_{g(X)} of the new random variable Y = g(X). However, rather than computing

E(g(X)) = \int_{-\infty}^{\infty} y \, f_{g(X)}(y)\,dy,

one may find instead

E(g(X)) = \int_{-\infty}^{\infty} g(x) \, f_X(x)\,dx.

The values of the two integrals are the same in all cases in which both X and g(X) actually have probability density functions.
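A quick numerical check of this identity, using an assumed example in which X is uniform on [0, 1] and g(x) = x², so that Y = g(X) has density 1/(2√y) on (0, 1]:

import math

def f_X(x):          # density of X ~ Uniform(0, 1)
    return 1.0 if 0.0 <= x <= 1.0 else 0.0

def f_Y(y):          # density of Y = X**2, from the change-of-variables formula
    return 1.0 / (2.0 * math.sqrt(y)) if 0.0 < y <= 1.0 else 0.0

def integrate(fn, a, b, steps=200_000):
    """Midpoint-rule approximation of the integral of fn over [a, b]."""
    h = (b - a) / steps
    return sum(fn(a + (i + 0.5) * h) for i in range(steps)) * h

# E[g(X)] computed both ways; each value should be close to 1/3.
print(integrate(lambda x: x * x * f_X(x), 0.0, 1.0))  # integral of g(x) f_X(x) dx
print(integrate(lambda y: y * f_Y(y), 0.0, 1.0))      # integral of y f_{g(X)}(y) dy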
Scalar to scalar
Let g: ℝ → ℝ be a monotonic (and differentiable) function, and let Y = g(X). Then the resulting density function is

f_Y(y) = f_X\bigl(g^{-1}(y)\bigr) \left| \frac{d}{dy} \bigl(g^{-1}(y)\bigr) \right|,

where g^{-1} denotes the inverse function.
This follows from the fact that the probability contained in a differential area
must be invariant under change of variables. That is,
\left| f_Y(y)\,dy \right| = \left| f_X(x)\,dx \right|,

or

f_Y(y) = \left| \frac{dx}{dy} \right| f_X(x) = f_X\bigl(g^{-1}(y)\bigr) \left| \frac{d}{dy} \bigl(g^{-1}(y)\bigr) \right|.
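The formula can be checked numerically. In the sketch below, the transform, the input distribution, and the sample size are illustrative assumptions: X is exponential with rate 1 and g(x) = ln(1 + x), so the formula predicts f_Y(y) = e^{y} e^{-(e^{y} - 1)} for y ≥ 0.

import math, random

random.seed(0)

def g(x):                  # assumed monotonic transform: y = ln(1 + x)
    return math.log1p(x)

def f_Y(y):
    """Density predicted by the change-of-variables formula for X ~ Exp(1)."""
    x = math.expm1(y)      # g^{-1}(y) = e^y - 1, with derivative e^y
    return math.exp(-x) * math.exp(y)

samples = [g(random.expovariate(1.0)) for _ in range(200_000)]

# Empirical probability of a small window versus the predicted density * width.
y0, width = 0.7, 0.05
empirical = sum(y0 <= y < y0 + width for y in samples) / len(samples)
predicted = f_Y(y0 + width / 2) * width
print(round(empirical, 4), round(predicted, 4))   # the two should roughly agree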
Vector to vector
Suppose x is an n-dimensional random variable with joint density f. If y = H(x), where H is a bijective, differentiable function, then y has density g:

g(\mathbf{y}) = f\bigl(H^{-1}(\mathbf{y})\bigr) \left| \det \left[ \left. \frac{dH^{-1}(\mathbf{z})}{d\mathbf{z}} \right|_{\mathbf{z}=\mathbf{y}} \right] \right|,

with the differential regarded as the Jacobian of the inverse of H, evaluated at y.

For example, in the 2-dimensional case x = (x1, x2), suppose the transform H is given as y1 = H1(x1, x2), y2 = H2(x1, x2) with inverses x1 = H1⁻¹(y1, y2), x2 = H2⁻¹(y1, y2). The joint distribution for y = (y1, y2) has density[7]

f_{Y_1,Y_2}(y_1,y_2) = f_{X_1,X_2}\bigl(H_1^{-1}(y_1,y_2),\, H_2^{-1}(y_1,y_2)\bigr) \left| \frac{\partial H_1^{-1}}{\partial y_1} \frac{\partial H_2^{-1}}{\partial y_2} - \frac{\partial H_1^{-1}}{\partial y_2} \frac{\partial H_2^{-1}}{\partial y_1} \right|.
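The 2-dimensional formula can be verified numerically with an assumed linear example: let X1 and X2 be independent standard normals and H(x1, x2) = (x1 + x2, x1 − x2). The inverse is x1 = (y1 + y2)/2, x2 = (y1 − y2)/2, whose Jacobian determinant has absolute value 1/2.

import math, random

random.seed(1)

def f_X(x1, x2):   # joint density of two independent standard normals
    return math.exp(-(x1 * x1 + x2 * x2) / 2) / (2 * math.pi)

def f_Y(y1, y2):   # density of (Y1, Y2) = (X1 + X2, X1 - X2) via the formula above
    x1, x2 = (y1 + y2) / 2, (y1 - y2) / 2
    return f_X(x1, x2) * 0.5      # 0.5 = |Jacobian determinant of the inverse map|

samples = [(x1 + x2, x1 - x2)
           for x1, x2 in ((random.gauss(0, 1), random.gauss(0, 1))
                          for _ in range(400_000))]

# Empirical probability of a small square around (0.5, -0.3) versus the formula.
a, b, w = 0.5, -0.3, 0.2
empirical = sum(a <= y1 < a + w and b <= y2 < b + w for y1, y2 in samples) / len(samples)
predicted = f_Y(a + w / 2, b + w / 2) * w * w
print(round(empirical, 4), round(predicted, 4))   # the two should roughly agree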
Vector to scalar
The same technique can be used to obtain the density of a scalar-valued function Y = V(X) of a random vector X: introduce an auxiliary scalar variable Z, independent of X, and consider the mapping

H(Z, X) = \begin{bmatrix} Z + V(X) \\ X \end{bmatrix} = \begin{bmatrix} Y \\ \tilde{X} \end{bmatrix}.

It is clear that H is a bijective mapping, and the Jacobian of H^{-1} is given by

\frac{dH^{-1}(y, \tilde{\mathbf{x}})}{dy\, d\tilde{\mathbf{x}}} = \begin{bmatrix} 1 & -\dfrac{dV(\tilde{\mathbf{x}})}{d\tilde{\mathbf{x}}} \\ \mathbf{0}_{n \times 1} & \mathbf{I}_{n \times n} \end{bmatrix},

which is an upper triangular matrix with ones on the main diagonal, so its determinant is 1.
Products and quotients of independent random variables
Given two independent random variables U and V, each of which has a probability density function, the density of the product Y = UV and of the quotient Y = U/V can be computed by a change of variables.

Example: Quotient distribution
To compute the quotient Y = U/V of two independent random variables U and V, define the following transformation:

Y = U/V
Z = V

Then, the joint density p(y, z) can be computed by a change of variables from U, V to Y, Z, and Y can be derived by marginalizing out Z from the joint density.
The inverse transformation is

U = YZ
V = Z

The Jacobian matrix J(U, V ∣ Y, Z) of this transformation is

J(U, V \mid Y, Z) = \begin{bmatrix} \dfrac{\partial u}{\partial y} & \dfrac{\partial u}{\partial z} \\ \dfrac{\partial v}{\partial y} & \dfrac{\partial v}{\partial z} \end{bmatrix} = \begin{bmatrix} z & y \\ 0 & 1 \end{bmatrix},

with absolute determinant |z|. Thus

p_{Y,Z}(y, z) = p_U(yz)\, p_V(z)\, |z|,

and the density of Y is obtained by marginalizing out Z:

p_Y(y) = \int_{-\infty}^{\infty} p_U(yz)\, p_V(z)\, |z|\, dz.
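This marginalization formula can be checked numerically. The densities below (both U and V exponential with rate 1) are assumptions chosen only for the illustration; for them the integral evaluates to p_Y(y) = 1/(1 + y)² for y > 0.

import math, random

random.seed(2)

def p_U(u):   # assumed density: U ~ Exponential(1)
    return math.exp(-u) if u >= 0 else 0.0

def p_V(v):   # assumed density: V ~ Exponential(1)
    return math.exp(-v) if v >= 0 else 0.0

def p_Y(y, z_max=60.0, steps=60_000):
    """Density of Y = U/V via the integral of p_U(yz) p_V(z) |z| over z."""
    h = z_max / steps
    return sum(p_U(y * (i + 0.5) * h) * p_V((i + 0.5) * h) * (i + 0.5) * h
               for i in range(steps)) * h

# Monte Carlo comparison: empirical P(1 <= Y < 1.1) versus the integral formula.
n = 300_000
samples = [random.expovariate(1.0) / random.expovariate(1.0) for _ in range(n)]
empirical = sum(1.0 <= y < 1.1 for y in samples) / n
predicted = p_Y(1.05) * 0.1
print(round(empirical, 4), round(predicted, 4))   # the two should roughly agree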
Exactly the same method can be used to compute the distribution of other functions
of multiple independent random variables.
Example: Quotient of two standard normals
The above method can be used to find the distribution of the quotient of two independent standard normal variables U and V. With the same transformation

Y = U/V
Z = V

this leads to:

p_Y(y) = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}} e^{-\frac{(yz)^2}{2}} \, \frac{1}{\sqrt{2\pi}} e^{-\frac{z^2}{2}} \, |z| \, dz = \frac{1}{\pi (1 + y^2)},

which is the density of a standard Cauchy distribution.
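A Monte Carlo sanity check of this result: simulated ratios of independent standard normal draws should follow the standard Cauchy distribution, whose cumulative distribution function is F(y) = 1/2 + arctan(y)/π.

import math, random

random.seed(3)

# Empirical CDF of U/V for independent standard normals, compared with the
# standard Cauchy CDF at a few points.
n = 500_000
ratios = [random.gauss(0, 1) / random.gauss(0, 1) for _ in range(n)]

for y in (-2.0, 0.0, 1.0, 3.0):
    empirical = sum(r <= y for r in ratios) / n
    cauchy_cdf = 0.5 + math.atan(y) / math.pi
    print(y, round(empirical, 4), round(cauchy_cdf, 4))   # values should be close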
See also
Density estimation
Kernel density estimation
Likelihood function
List of probability distributions
Probability mass function
Secondary measure
Uses as position probability density:
Atomic orbital
Home range
References
"AP Statistics Review - Density Curves and the Normal Distributions". Archived
from the original on 2 April 2015. Retrieved 16 March 2015.
Grinstead, Charles M.; Snell, J. Laurie (2009). "Conditional Probability -
Discrete Conditional" (PDF). Grinstead & Snell's Introduction to Probability.
Orange Grove Texts. ISBN 161610046X. Retrieved 2019-07-25.
"Probability distribution function". PlanetMath. Archived from the original on 2011-08-07.
"Probability Function". MathWorld.
Ord, J.K. (1972) Families of Frequency Distributions, Griffin. ISBN 0-85264-137-0
(for example, Table 5.1 and Example 5.4)
Devore, Jay L.; Berk, Kenneth N. (2007). Modern Mathematical Statistics with
Applications. Cengage. p. 263. ISBN 0-534-40473-1.
Stirzaker, David (2007). Elementary Probability. Cambridge University Press. ISBN 0521534283. OCLC 851313783.
Further reading
Billingsley, Patrick (1979). Probability and Measure. New York, Toronto, London:
John Wiley and Sons. ISBN 0-471-00710-2.
Casella, George; Berger, Roger L. (2002). Statistical Inference (Second ed.).
Thomson Learning. pp. 34–37. ISBN 0-534-24312-6.
Stirzaker, David (2003). Elementary Probability. ISBN 0-521-42028-8. Chapters 7 to
9 are about continuous variables.
External links
Ushakov, N.G. (2001) [1994], "Density of a probability distribution", Encyclopedia
of Mathematics, EMS Press
Weisstein, Eric W. "Probability density function". MathWorld.