
Ignacio Cascos Fernández

Department of Statistics
Universidad Carlos III de Madrid

Distribution models

Statistics — 2011–2012

1 Discrete distributions
1.1 Binomial distribution, B(n, p)
A Bernoulli trial is a random experiment with two possible outcomes, success
or failure. The probability of success is commonly written as p (and the
probability of failure is 1 − p).
A random variable X follows a binomial distribution with parameters
n ∈ N and p ∈ (0, 1), denoted by X ∼ B(n, p), if it equals the number of
trials that result in a success, out of n independent ones. It can assume any
value in {0, 1, . . . , n}.
If k ∈ {0, 1, . . . , n}, then
 
P(X = k) = \binom{n}{k} p^k (1 - p)^{n-k} ;

E[X] = np ; var[X] = np(1 − p).

Property. Given two independent random variables X ∼ B(n1, p) and Y ∼ B(n2, p), then X + Y ∼ B(n1 + n2, p).
It is now straightforward that a binomial random variable X ∼ B(n, p) can be decomposed as the sum of n independent random variables with distribution B(1, p), each of them corresponding to one trial.
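These formulas can be checked numerically; a short sketch with scipy.stats, using arbitrary illustrative values n = 10 and p = 0.3:

```python
from scipy.stats import binom

n, p = 10, 0.3      # illustrative parameter values
k = 4

# P(X = 4) for X ~ B(10, 0.3), from the pmf above
print(binom.pmf(k, n, p))                # ≈ 0.2001

# mean and variance agree with np and np(1 - p)
print(binom.mean(n, p), n * p)           # 3.0 3.0
print(binom.var(n, p), n * p * (1 - p))  # 2.1 2.1
```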

1.2 Poisson distribution, P(λ).
A random variable X follows a Poisson distribution with parameter λ > 0,
denoted by X ∼ P(λ), if it represents the number of events occurring in a
fixed interval, when events occur at a known average rate λ and independently
of one another. It can assume any value in the set {0, 1, 2, . . .}.
If k ∈ {0, 1, 2, . . .}, it holds

P(X = k) = \frac{\lambda^k}{k!}\, e^{-\lambda} ;
E[X] = λ ; var[X] = λ.

Property. Given two independent random variables X ∼ P(λ1) and Y ∼ P(λ2), then X + Y ∼ P(λ1 + λ2).
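Analogously, a quick check of the Poisson formulas with scipy.stats, for an arbitrary rate λ = 2.5:

```python
from scipy.stats import poisson

lam = 2.5           # illustrative rate
k = 3

# P(X = 3) for X ~ P(2.5), from the pmf above
print(poisson.pmf(k, lam))                  # ≈ 0.2138

# mean and variance both equal λ
print(poisson.mean(lam), poisson.var(lam))  # 2.5 2.5
```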

2 Continuous distributions
2.1 Uniform distribution, U(a, b).
A random variable X follows a uniform distribution with parameters a < b,
denoted by X ∼ U(a, b), if it represents a number chosen at random between
a and b. The selection is made in such a way that the probability that the
random variable lies in any interval contained in (a, b) depends only on the
length of that interval.
The probability density function and the cumulative distribution function of a
random variable uniformly distributed on (a, b) are given by the following
expressions:

f_X(x) = \begin{cases} \frac{1}{b-a} & \text{if } x \in (a, b) \\ 0 & \text{if } x \notin (a, b) \end{cases} ;
\qquad
F_X(x) = \begin{cases} 0 & \text{if } x < a \\ \frac{x-a}{b-a} & \text{if } a \le x < b \\ 1 & \text{if } x \ge b \end{cases} ;

E[X] = \frac{a+b}{2} ; \qquad var[X] = \frac{(b-a)^2}{12}.
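For instance, with arbitrary endpoints a = 2 and b = 5, scipy.stats.uniform reproduces these expressions (scipy parametrizes the distribution by loc = a and scale = b − a):

```python
from scipy.stats import uniform

a, b = 2.0, 5.0                     # illustrative endpoints
U = uniform(loc=a, scale=b - a)     # X ~ U(2, 5)

print(U.pdf(3.0))                   # 1/(b-a) ≈ 0.3333
print(U.cdf(3.0))                   # (x-a)/(b-a) ≈ 0.3333
print(U.mean(), (a + b) / 2)        # 3.5 3.5
print(U.var(), (b - a) ** 2 / 12)   # 0.75 0.75
```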

2.2 Exponential distribution, Exp(λ).
A random variable X follows an exponential distribution with parameter
λ > 0, denoted by X ∼ Exp(λ), if it equals the distance between successive
events in a Poisson process with rate λ (that is, with an average of λ events
per unit length).
The probability density function and the cumulative distribution function of
an exponentially distributed random variable with parameter λ are given by the
following expressions:
f_X(x) = \begin{cases} \lambda e^{-\lambda x} & \text{if } x > 0 \\ 0 & \text{if } x \le 0 \end{cases} ;
\qquad
F_X(x) = \begin{cases} 0 & \text{if } x < 0 \\ 1 - e^{-\lambda x} & \text{if } x \ge 0 \end{cases} ;

E[X] = \frac{1}{\lambda} ; \qquad var[X] = \frac{1}{\lambda^2}.

Lack of memory property. Given X ∼ Exp(λ) and t_1, t_2 > 0, it holds that

P(X > t_1 + t_2 \,|\, X > t_1) = P(X > t_2).
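A numerical check of the moments and of the lack of memory property, with an arbitrary λ = 0.5 (scipy's expon takes scale = 1/λ):

```python
from scipy.stats import expon

lam = 0.5                    # illustrative rate
X = expon(scale=1 / lam)     # X ~ Exp(0.5)

print(X.mean(), 1 / lam)     # 2.0 2.0
print(X.var(), 1 / lam**2)   # 4.0 4.0

# lack of memory: P(X > t1 + t2 | X > t1) = P(X > t2)
t1, t2 = 1.0, 3.0
print(X.sf(t1 + t2) / X.sf(t1))  # ≈ 0.2231
print(X.sf(t2))                  # ≈ 0.2231
```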

2.3 Normal distribution, N(µ, σ)


A random variable X follows a normal distribution with mean µ and standard
deviation σ, denoted by X ∼ N(µ, σ), if it can assume any real value in
accordance with the density below,
f_X(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}} .
There is no explicit analytic expression for an antiderivative of f_X; therefore the cdf of a normal random variable can only be written as F_X(x) = \int_{-\infty}^{x} f_X(t)\, dt.

E[X] = µ ; var[X] = σ².

A standard normal distribution is a normal distribution with mean 0 and standard deviation 1, N(0, 1).

Property. Given a, b ∈ R and a random variable X ∼ N(µ, σ), the random
variable aX + b is normally distributed, specifically

aX + b ∼ N(aµ + b, |a|σ).

We can standardize any normal random variable by subtracting its mean and
dividing by its standard deviation. If X ∼ N(µ, σ), then

\frac{X - \mu}{\sigma} \sim N(0, 1) .

Property. If X ∼ N(0, 1) and F_X is its cdf, by symmetry it holds that for any x ∈ R, F_X(−x) = 1 − F_X(x).
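For example, with arbitrary values µ = 10 and σ = 2, probabilities for a normal variable can be computed by standardizing and evaluating the standard normal cdf, or directly with scipy.stats.norm:

```python
from scipy.stats import norm

mu, sigma = 10.0, 2.0       # illustrative parameters
x = 13.0

# P(X <= 13) via standardization and via the cdf with loc/scale
print(norm.cdf((x - mu) / sigma))         # ≈ 0.9332
print(norm.cdf(x, loc=mu, scale=sigma))   # same value

# symmetry of the standard normal cdf: F(-x) = 1 - F(x)
z = 1.5
print(norm.cdf(-z), 1 - norm.cdf(z))      # both ≈ 0.0668
```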

Property. The sum of two independent normal random variables is also normally distributed. Thus, if X ∼ N(µ1, σ1) and Y ∼ N(µ2, σ2) are independent, then

X + Y \sim N\!\left(\mu_1 + \mu_2,\; \sqrt{\sigma_1^2 + \sigma_2^2}\right).

Central Limit Theorem. Given n independent random variables X1, X2, . . . , Xn with finite means and variances E[Xi] = µi and var[Xi] = σi², the limiting distribution of their sum as n tends to infinity is normal,

X_1 + X_2 + \cdots + X_n \approx N\!\left(\sum_{i=1}^{n} \mu_i,\; \sqrt{\sum_{i=1}^{n} \sigma_i^2}\right).

The approximation by a normal distribution is usually good for n ≥ 30.
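A small simulation illustrating the theorem for sums of n = 30 independent U(0, 1) variables (each with mean 1/2 and variance 1/12); the values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 30, 100_000

# 100000 replicates of the sum of 30 independent U(0,1) variables
sums = rng.uniform(0.0, 1.0, size=(reps, n)).sum(axis=1)

# empirical mean and standard deviation vs the CLT parameters
print(sums.mean(), n * 0.5)         # ≈ 15.0
print(sums.std(), np.sqrt(n / 12))  # ≈ 1.58
```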

Normal approximation to the binomial distribution. A binomial distribution B(n, p) with n ≥ 30 and np(1 − p) > 5 is approximately N(np, \sqrt{np(1-p)}).

Normal approximation to the Poisson distribution. A Poisson distribution P(λ) with λ > 5 is approximately N(λ, \sqrt{\lambda}).
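As an illustration of the binomial case, with arbitrary values n = 50 and p = 0.4 (so that np(1 − p) = 12 > 5), the exact binomial cdf and its normal approximation are reasonably close:

```python
import numpy as np
from scipy.stats import binom, norm

n, p = 50, 0.4                    # illustrative values, np(1 - p) = 12
mu, sigma = n * p, np.sqrt(n * p * (1 - p))

# P(X <= 22): exact binomial vs normal approximation
print(binom.cdf(22, n, p))                # exact value
print(norm.cdf(22, loc=mu, scale=sigma))  # approximate value
```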

2.4 Multivariate normal distribution
The random vector X = (X1, X2)^t follows a bivariate normal distribution with mean vector µ = (µ1, µ2)^t and covariance matrix

\Sigma = \begin{pmatrix} \sigma_1^2 & \rho\sigma_1\sigma_2 \\ \rho\sigma_1\sigma_2 & \sigma_2^2 \end{pmatrix}

if its density function is given by the expression below,

f(x_1, x_2) = \frac{1}{2\pi|\Sigma|^{1/2}} \exp\!\left(-\frac{1}{2}\,(x_1 - \mu_1,\, x_2 - \mu_2)\,\Sigma^{-1}\begin{pmatrix} x_1 - \mu_1 \\ x_2 - \mu_2 \end{pmatrix}\right), \qquad x_1, x_2 \in \mathbb{R}.

Properties.

• If ρ = 0, then X1 and X2 are independent.

• Given a1, a2 ∈ R, the random variable a1 X1 + a2 X2 is normally distributed,

a_1 X_1 + a_2 X_2 \sim N\!\left(a_1\mu_1 + a_2\mu_2,\; \sqrt{a_1^2\sigma_1^2 + a_2^2\sigma_2^2 + 2 a_1 a_2 \rho \sigma_1 \sigma_2}\right).

In particular, X1 and X2 are two normal random variables.

• The random variables X1 | X2 = x2 and X2 | X1 = x1 are normally distributed.
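These properties can be checked numerically with scipy.stats.multivariate_normal, using arbitrary values µ1 = 0, µ2 = 1, σ1 = 1, σ2 = 2 and ρ = 0.5:

```python
import numpy as np
from scipy.stats import multivariate_normal

mu1, mu2 = 0.0, 1.0              # illustrative means
s1, s2, rho = 1.0, 2.0, 0.5      # illustrative std devs and correlation

cov = np.array([[s1**2,          rho * s1 * s2],
                [rho * s1 * s2,  s2**2]])
X = multivariate_normal(mean=[mu1, mu2], cov=cov)

# joint density at a point, from the formula above
print(X.pdf([0.5, 1.5]))

# a1*X1 + a2*X2 is normal with the stated mean and standard deviation
a1, a2 = 2.0, -1.0
rng = np.random.default_rng(0)
samples = rng.multivariate_normal([mu1, mu2], cov, size=200_000)
combo = a1 * samples[:, 0] + a2 * samples[:, 1]
print(combo.mean(), a1 * mu1 + a2 * mu2)   # ≈ -1.0
print(combo.std(),
      np.sqrt(a1**2 * s1**2 + a2**2 * s2**2
              + 2 * a1 * a2 * rho * s1 * s2))   # ≈ 2.0
```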
