
STAT/MATH 395 A - PROBABILITY II – UW

Winter Quarter 2017 Néhémy Lim

HW3 : Moment functions – Solutions

Problem 1. Let X be a real-valued random variable on a probability space
(Ω, A, P) with moment generating function M_X.

(a) Show that for any constants a, b ∈ R,

    M_{aX+b}(t) = e^{bt} M_X(at)

Answer. For a given t in the domain of M_X, we have:

    M_{aX+b}(t) = E[e^{t(aX+b)}]
                = e^{tb} E[e^{taX}]
                = e^{tb} M_X(ta)
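Remark. The identity in (a) is easy to sanity-check numerically. The sketch below (the three-point distribution and the values of a, b, t are arbitrary illustrative choices) compares both sides of the identity computed directly from the definition of the expectation.

```python
import math

# Spot-check of M_{aX+b}(t) = e^{bt} M_X(at) for X uniform on {0, 1, 2}
# (an arbitrary illustrative distribution).
outcomes = [0.0, 1.0, 2.0]
probs = [1/3, 1/3, 1/3]

def mgf(t, scale=1.0, shift=0.0):
    """E[e^{t(scale*X + shift)}] computed directly from the distribution."""
    return sum(p * math.exp(t * (scale * x + shift))
               for p, x in zip(probs, outcomes))

a, b, t = 2.0, 3.0, 0.5
lhs = mgf(t, scale=a, shift=b)        # M_{aX+b}(t)
rhs = math.exp(b * t) * mgf(a * t)    # e^{bt} M_X(at)
print(abs(lhs - rhs))                 # agrees up to floating-point error
```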

(b) Application. Let X follow an exponential distribution with parameter λ > 0.


Compute the moment generating function of Y = aX (a > 0), and conclude
that Y follows an exponential distribution. Specify the new parameter.
Answer. For a given t, we have

    M_Y(t) = M_{aX}(t)
           = e^{t·0} M_X(ta)          (part (a) with b = 0)
           = λ/(λ − at)
           = (λ/a)/((λ/a) − t)

We recognize the moment generating function of an exponential distribution
with parameter λ/a. By virtue of the uniqueness property of moment generating
functions, this proves that Y = aX follows an exponential distribution with
parameter λ/a.
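Remark. The closed form can be checked against a direct numerical evaluation of E[e^{tY}]; the sketch below uses a midpoint-rule integral (the values of λ, a, t and the discretization parameters are arbitrary choices).

```python
import math

# Numerical check that Y = aX with X ~ Exp(lam) has MGF (lam/a)/((lam/a) - t).
lam, a, t = 3.0, 2.0, 0.5            # requires t < lam/a = 1.5

# Midpoint-rule approximation of E[e^{tY}] = ∫_0^∞ e^{t·a·x} · lam·e^{-lam·x} dx;
# the integrand decays like e^{-2x}, so truncating at 40 is harmless.
dx, upper = 1e-3, 40.0
mgf_Y = sum(math.exp((t * a - lam) * ((k + 0.5) * dx)) * lam * dx
            for k in range(int(upper / dx)))
closed_form = (lam / a) / ((lam / a) - t)   # = 1.5
print(mgf_Y, closed_form)
```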

Problem 2. Let X be a continuous random variable with probability density
function:

    f_X(x) = x e^{-x} 1_{[0,∞)}(x)

(a) Show that M_X(t), the moment generating function of X, is given by:

    M_X(t) = 1/(1 − t)^2

Answer.

    M_X(t) = E[e^{tX}] = ∫_{-∞}^{∞} e^{tx} f_X(x) dx
           = ∫_0^{∞} e^{tx} x e^{-x} dx
           = ∫_0^{∞} x e^{-(1-t)x} dx

The change of variable u = (1 − t)x, du = (1 − t) dx gives, for t < 1,

    M_X(t) = ∫_0^{∞} (u/(1 − t)) e^{-u} du/(1 − t)
           = 1/(1 − t)^2 · ∫_0^{∞} u e^{-u} du
           = 1/(1 − t)^2

where ∫_0^{∞} u e^{-u} du = 1 since f_X is a pdf.
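Remark. The result of (a) can be spot-checked by evaluating the defining integral numerically; the sketch below uses a midpoint rule at one arbitrary point t = 0.5 (the discretization parameters are likewise arbitrary).

```python
import math

# Numerical check that ∫_0^∞ e^{tx} · x e^{-x} dx = 1/(1 - t)^2 for t < 1.
t = 0.5
dx, upper = 1e-3, 80.0               # integrand x·e^{-0.5x} is negligible past 80
mgf_val = sum(((k + 0.5) * dx) * math.exp((t - 1.0) * ((k + 0.5) * dx)) * dx
              for k in range(int(upper / dx)))
closed_form = 1.0 / (1.0 - t) ** 2   # = 4.0
print(mgf_val, closed_form)
```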

(b) Compute E[X] using M_X(t).

Answer. Let us differentiate M_X(t): for t < 1,

    M_X'(t) = 2/(1 − t)^3

Thus,

    E[X] = M_X'(0) = 2/(1 − 0)^3 = 2
(c) Compute Var(X) using M_X(t).

Answer. The second derivative M_X''(t) is given by: for t < 1,

    M_X''(t) = 6/(1 − t)^4

Therefore, the second moment of X is:

    E[X^2] = M_X''(0) = 6/(1 − 0)^4 = 6

Finally,

    Var(X) = M_X''(0) − M_X'(0)^2 = 6 − 2^2 = 2.
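Remark. The derivatives computed by hand in (b) and (c) can be verified with central finite differences applied to M_X(t) = 1/(1 − t)^2 (the step size h is an arbitrary choice).

```python
# Finite-difference check of M'(0) = 2, M''(0) = 6, hence Var(X) = 6 - 2^2 = 2.
def M(t):
    return 1.0 / (1.0 - t) ** 2

h = 1e-5
first = (M(h) - M(-h)) / (2 * h)                # central difference ~ M'(0) = E[X]
second = (M(h) - 2 * M(0.0) + M(-h)) / h**2     # ~ M''(0) = E[X^2]
variance = second - first ** 2
print(first, second, variance)
```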

Problem 3. Let X be a continuous random variable with probability density
function f_X and moment generating function M_X defined on a neighborhood
(−h, h) of zero, for some h > 0. Show that¹

    P(X ≥ a) ≤ e^{-at} M_X(t)   for 0 < t < h

Hint. Start from the right-hand side of the inequality and split the integral
defining M_X(t) into the intervals (−∞, a) and (a, ∞).
Answer.

    M_X(t) = E[e^{tX}] = ∫_{-∞}^{∞} e^{tx} f_X(x) dx
           = ∫_{-∞}^{a} e^{tx} f_X(x) dx + ∫_a^{∞} e^{tx} f_X(x) dx

Noting that ∫_{-∞}^{a} e^{tx} f_X(x) dx ≥ 0 since e^{tx} ≥ 0 and f_X(x) ≥ 0,
we have

    M_X(t) ≥ ∫_a^{∞} e^{tx} f_X(x) dx
           ≥ ∫_a^{∞} e^{at} f_X(x) dx

since e^{tx} ≥ e^{at} for all x ≥ a (the exponential is increasing because
t > 0). Therefore,

    M_X(t) ≥ e^{at} ∫_a^{∞} f_X(x) dx = e^{at} P(X ≥ a),

which completes the proof.
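Remark. This Chernoff-type bound is easy to illustrate numerically. The sketch below takes X ~ Exp(1), for which both sides are available in closed form: M_X(t) = 1/(1 − t) on (0, 1) and P(X ≥ a) = e^{-a}; the chosen values of a and t are arbitrary.

```python
import math

# For X ~ Exp(1), the bound e^{-at} / (1 - t) should dominate the exact
# tail probability e^{-a} for every t in (0, 1).
a = 3.0
tail = math.exp(-a)                                   # exact P(X >= a)
bounds = [math.exp(-a * t) / (1.0 - t) for t in (0.1, 0.5, 0.9)]
print(tail, min(bounds))
```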

Problem 4. Let X be a real-valued random variable on a probability space
(Ω, A, P) with moment generating function M_X. The cumulant generating
function of X, denoted K_X, is defined by

    K_X(t) = ln M_X(t)

(a) Show that

    K_X'(0) = E[X]

Answer.

    K_X'(t) = M_X'(t) / M_X(t)

Thus,

    K_X'(0) = M_X'(0) / M_X(0) = E[X]

since M_X'(0) = E[X] and M_X(0) = 1.

¹ Actually, this property also holds for discrete random variables.

(b) and that

    K_X''(0) = Var(X)

Answer.

    K_X''(t) = (M_X''(t) M_X(t) − M_X'(t)^2) / M_X(t)^2

Hence,

    K_X''(0) = (M_X''(0) M_X(0) − M_X'(0)^2) / M_X(0)^2
             = (E[X^2] · 1 − (E[X])^2) / 1^2
             = Var(X)
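Remark. Parts (a) and (b) can be checked by differentiating a concrete cumulant generating function numerically; the sketch below uses X ~ Exp(λ), whose K_X(t) = ln(λ/(λ − t)) follows from Problem 1(b) with a = 1, and whose mean and variance are 1/λ and 1/λ² (the values of λ and the step h are arbitrary).

```python
import math

# Finite-difference check of K'(0) = E[X] and K''(0) = Var(X) for X ~ Exp(lam).
lam = 2.0

def K(t):
    return math.log(lam / (lam - t))   # cumulant generating function of Exp(lam)

h = 1e-4
k1 = (K(h) - K(-h)) / (2 * h)             # ~ K'(0), should be 1/lam  = 0.5
k2 = (K(h) - 2 * K(0.0) + K(-h)) / h**2   # ~ K''(0), should be 1/lam^2 = 0.25
print(k1, k2)
```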

(c) Application. We say that a random variable X follows a geometric
distribution G(p), with parameter p ∈ (0, 1), if its probability mass function
is given by:

    p_X(x) = P(X = x) = (1 − p)^{x−1} p,   for x ∈ N*

Compute M_X(t), the moment generating function of X.
Answer.

    M_X(t) = Σ_{x=1}^{∞} e^{tx} p_X(x)
           = Σ_{x=1}^{∞} e^{tx} (1 − p)^{x−1} p
           = p e^t Σ_{x=1}^{∞} [(1 − p) e^t]^{x−1}

We recognize a geometric series with first term [(1 − p)e^t]^{1−1} = 1 and
common ratio (1 − p)e^t. This series converges if and only if
−1 < (1 − p)e^t < 1, that is, if and only if t < −ln(1 − p). In this case,
the moment generating function is defined and is given by:

    M_X(t) = p e^t / (1 − (1 − p) e^t)
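Remark. The closed form can be compared against a partial sum of the defining series; the sketch below uses arbitrary values p = 0.3 and t = 0.1 (well inside the convergence region, since −ln(0.7) ≈ 0.357), and truncates the sum where the tail is negligible.

```python
import math

# Partial-sum check that Σ_{x≥1} e^{tx} (1-p)^{x-1} p = p·e^t / (1 - (1-p)·e^t).
p, t = 0.3, 0.1                       # requires t < -ln(1 - p) ≈ 0.357
partial = sum(math.exp(t * x) * (1 - p) ** (x - 1) * p for x in range(1, 2001))
closed_form = p * math.exp(t) / (1 - (1 - p) * math.exp(t))
print(partial, closed_form)
```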

(d) Compute K_X(t), the cumulant generating function of X.

Answer.

    K_X(t) = ln( p e^t / (1 − (1 − p) e^t) )
           = ln p + t − ln(1 − (1 − p) e^t)

(e) Compute E[X] using K_X(t).

Answer. The first derivative K_X'(t) is given by:

    K_X'(t) = 1 + (1 − p) e^t / (1 − (1 − p) e^t)

Therefore,

    E[X] = K_X'(0)
         = 1 + (1 − p)/(1 − (1 − p))
         = 1 + (1 − p)/p
         = 1/p

(f) Compute Var(X) using K_X(t).

Answer. The second derivative K_X''(t) is given by:

    K_X''(t) = [(1 − p)e^t (1 − (1 − p)e^t) − (1 − p)e^t (−(1 − p)e^t)] / (1 − (1 − p)e^t)^2
             = (1 − p)e^t / (1 − (1 − p)e^t)^2

Therefore,

    Var(X) = K_X''(0)
           = (1 − p)/(1 − (1 − p))^2
           = (1 − p)/p^2
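Remark. The cumulant-based results of (e) and (f) can be cross-checked directly from the pmf; the sketch below computes the first two moments of G(p) by partial summation (p = 0.3 is an arbitrary choice, and the truncation point leaves only a negligible tail).

```python
# Check from the pmf that a geometric G(p) variable has mean 1/p and
# variance (1-p)/p^2, matching K'(0) and K''(0).
p = 0.3
xs = range(1, 5001)
pmf = [(1 - p) ** (x - 1) * p for x in xs]
mean = sum(x * q for x, q in zip(xs, pmf))       # should be 1/p
second = sum(x * x * q for x, q in zip(xs, pmf))
variance = second - mean ** 2                    # should be (1-p)/p^2
print(mean, variance)
```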
