
Problem Set 2

1. Let $X_1,\dots,X_n$ be an i.i.d. sample from the $N(\mu, \sigma^2)$ distribution. Find the Fisher information matrix, $I_n(\theta)$, for the parameter $\theta = (\mu, \sigma^2)$.
[Note: The Fisher information matrix for a vector-valued parameter $\theta$ is defined as
$$I_n(\theta) = E\left[\left(\frac{\partial}{\partial\theta}\log f_X(X;\theta)\right)\left(\frac{\partial}{\partial\theta}\log f_X(X;\theta)\right)^{\top}\right] = -E\left[\frac{\partial^2}{\partial\theta\,\partial\theta^{\top}}\log f_X(X;\theta)\right].]$$
Further, from the above definition of $I_n(\theta)$ show that, when $X_1,\dots,X_n$ are i.i.d., then $I_n(\theta) = n I_1(\theta)$, where $I_1(\theta)$ is the Fisher information matrix for one sample.
Solution:
The PDF of a normal distribution $N(\mu, \sigma^2)$ is given by:
$$f(x;\mu,\sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left\{-\frac{(x-\mu)^2}{2\sigma^2}\right\}$$
The log-likelihood function for a single observation is:
$$\log f(x;\mu,\sigma^2) = -\frac{1}{2}\log(2\pi) - \frac{1}{2}\log(\sigma^2) - \frac{(x-\mu)^2}{2\sigma^2}$$
The log-likelihood function for $n$ i.i.d. observations is:
$$\log L(\mu,\sigma^2) = \sum_{i=1}^n \log f(x_i;\mu,\sigma^2) = -\frac{n}{2}\log(2\pi) - \frac{n}{2}\log(\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^n (x_i-\mu)^2$$
Now, we compute the first partial derivatives with respect to $\mu$ and $\sigma^2$:
$$\frac{\partial \log L}{\partial \mu} = \frac{1}{\sigma^2}\sum_{i=1}^n (x_i-\mu)$$
$$\frac{\partial \log L}{\partial \sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2(\sigma^2)^2}\sum_{i=1}^n (x_i-\mu)^2$$
Next, we compute the second partial derivatives:
$$\frac{\partial^2 \log L}{\partial \mu^2} = -\frac{n}{\sigma^2}$$
$$\frac{\partial^2 \log L}{\partial (\sigma^2)^2} = \frac{n}{2(\sigma^2)^2} - \frac{1}{(\sigma^2)^3}\sum_{i=1}^n (x_i-\mu)^2$$
$$\frac{\partial^2 \log L}{\partial \mu\,\partial \sigma^2} = -\frac{1}{(\sigma^2)^2}\sum_{i=1}^n (x_i-\mu)$$
Now, we take the expected values of the negative second partial derivatives:
$$E\left[-\frac{\partial^2 \log L}{\partial \mu^2}\right] = \frac{n}{\sigma^2}$$
$$E\left[-\frac{\partial^2 \log L}{\partial (\sigma^2)^2}\right] = -\frac{n}{2(\sigma^2)^2} + \frac{1}{(\sigma^2)^3}E\left[\sum_{i=1}^n (X_i-\mu)^2\right]$$
Since $E\left[\sum_{i=1}^n (X_i-\mu)^2\right] = n\sigma^2$,
$$E\left[-\frac{\partial^2 \log L}{\partial (\sigma^2)^2}\right] = -\frac{n}{2(\sigma^2)^2} + \frac{n\sigma^2}{(\sigma^2)^3} = \frac{n}{2(\sigma^2)^2}$$
$$E\left[-\frac{\partial^2 \log L}{\partial \mu\,\partial \sigma^2}\right] = \frac{1}{(\sigma^2)^2}E\left[\sum_{i=1}^n (X_i-\mu)\right] = 0$$
Thus, the Fisher information matrix is:
$$I_n(\theta) = \begin{pmatrix} \dfrac{n}{\sigma^2} & 0 \\ 0 & \dfrac{n}{2(\sigma^2)^2} \end{pmatrix}$$
For one sample ($n = 1$), the Fisher information matrix is:
$$I_1(\theta) = \begin{pmatrix} \dfrac{1}{\sigma^2} & 0 \\ 0 & \dfrac{1}{2(\sigma^2)^2} \end{pmatrix}$$
From the above, it is clear that:
$$I_n(\theta) = n I_1(\theta)$$
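As a quick numerical sanity check (not part of the original solution), the sketch below simulates repeated samples, evaluates the score vector from the first-derivative formulas above, and compares its Monte Carlo covariance with $nI_1(\theta)$; the parameter values and sample size are arbitrary, and numpy is assumed available.

# Sanity check: covariance of the score should be close to diag(n/sigma^2, n/(2*sigma^4)).
import numpy as np

rng = np.random.default_rng(0)
mu, sigma2, n, reps = 1.0, 2.0, 50, 20000

scores = np.empty((reps, 2))
for r in range(reps):
    x = rng.normal(mu, np.sqrt(sigma2), size=n)
    d_mu = np.sum(x - mu) / sigma2                                        # d logL / d mu
    d_s2 = -n / (2 * sigma2) + np.sum((x - mu) ** 2) / (2 * sigma2 ** 2)  # d logL / d sigma^2
    scores[r] = (d_mu, d_s2)

print(np.cov(scores.T))                              # Monte Carlo estimate of I_n(theta)
print(np.diag([n / sigma2, n / (2 * sigma2 ** 2)]))  # theoretical n * I_1(theta)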
2. Find the method of moments estimators (MOME) for the following distributions.

(a) Gamma($\alpha, \beta$):
The first two moments of a Gamma distribution are:
$$E[X] = \alpha\beta, \qquad E[X^2] = \alpha\beta^2 + (\alpha\beta)^2$$
The sample moments are:
$$m_1 = \bar{X} = \frac{1}{n}\sum_{i=1}^n X_i, \qquad m_2 = \frac{1}{n}\sum_{i=1}^n X_i^2$$
Equating the population moments to the sample moments:
$$\alpha\beta = \bar{X}$$
$$\alpha\beta^2 + (\alpha\beta)^2 = \frac{1}{n}\sum_{i=1}^n X_i^2$$
Substituting the first equation into the second:
$$\alpha\beta^2 = \frac{1}{n}\sum_{i=1}^n X_i^2 - \bar{X}^2 = S^2$$
where $S^2$ is the sample variance (with divisor $n$).
Dividing $\alpha\beta^2$ by $\alpha\beta$:
$$\frac{\alpha\beta^2}{\alpha\beta} = \beta = \frac{S^2}{\bar{X}}$$
Substituting $\hat{\beta}$ into the first equation:
$$\hat{\alpha} = \frac{\bar{X}}{\hat{\beta}} = \frac{\bar{X}^2}{S^2}$$
Therefore, the method of moments estimators for $\alpha$ and $\beta$ are:
$$\hat{\alpha} = \frac{\bar{X}^2}{S^2}, \qquad \hat{\beta} = \frac{S^2}{\bar{X}}$$
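An illustrative sketch (not from the original text): the MOM formulas above applied to a simulated Gamma sample, with $\beta$ treated as a scale parameter; the true parameter values are arbitrary and numpy is assumed available.

import numpy as np

rng = np.random.default_rng(1)
alpha_true, beta_true = 3.0, 2.0
x = rng.gamma(shape=alpha_true, scale=beta_true, size=10_000)

xbar = x.mean()
s2 = x.var()                    # sample variance with divisor n, matching the derivation
alpha_hat = xbar ** 2 / s2
beta_hat = s2 / xbar
print(alpha_hat, beta_hat)      # should be close to (3.0, 2.0)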

(b) Beta($\alpha, \beta$):
The first two moments of a Beta distribution are:
$$E[X] = \frac{\alpha}{\alpha+\beta}, \qquad E[X^2] = \frac{\alpha(\alpha+1)}{(\alpha+\beta)(\alpha+\beta+1)}$$
Equating the population moments to the sample moments:
$$\frac{\alpha}{\alpha+\beta} = \bar{X}$$
$$\frac{\alpha(\alpha+1)}{(\alpha+\beta)(\alpha+\beta+1)} = \frac{1}{n}\sum_{i=1}^n X_i^2$$
From the first equation:
$$\alpha = \bar{X}(\alpha+\beta) \;\Rightarrow\; \alpha(1-\bar{X}) = \bar{X}\beta \;\Rightarrow\; \beta = \frac{\alpha(1-\bar{X})}{\bar{X}}$$
Now, let $m_2 = \frac{1}{n}\sum_{i=1}^n X_i^2$. The sample variance is $S^2 = m_2 - \bar{X}^2$. We also know that
$$\frac{\alpha(\alpha+1)}{(\alpha+\beta)(\alpha+\beta+1)} = E[X^2] = S^2 + \bar{X}^2 = m_2$$
Substituting $\beta = \alpha(1-\bar{X})/\bar{X}$, so that $\alpha+\beta = \alpha/\bar{X}$:
$$\frac{\alpha(\alpha+1)}{\frac{\alpha}{\bar{X}}\left(\frac{\alpha}{\bar{X}}+1\right)} = \frac{(\alpha+1)\bar{X}^2}{\alpha+\bar{X}} = S^2 + \bar{X}^2$$
The solution is:
$$\hat{\alpha} = \bar{X}\left[\frac{\bar{X}(1-\bar{X})}{S^2} - 1\right], \qquad \hat{\beta} = (1-\bar{X})\left[\frac{\bar{X}(1-\bar{X})}{S^2} - 1\right]$$
Then, the MOME for $g(\theta) = \alpha\beta$ is $\hat{\alpha}\hat{\beta}$.
(c) Poisson($\lambda$):
The first moment of a Poisson distribution is:
$$E[X] = \lambda$$
Equating the population moment to the sample moment:
$$\lambda = \bar{X}$$
Therefore, the method of moments estimator for $\lambda$ is:
$$\hat{\lambda} = \bar{X}$$
The MOME for $g(\theta) = \exp\{-\lambda\}$ is $\exp\{-\bar{X}\}$.
(d) Location-scale Exponential($\mu, \sigma$):
The first two moments of the given Exponential distribution are:
$$E[X] = \mu + \sigma, \qquad E[X^2] = \mu^2 + 2\mu\sigma + 2\sigma^2$$
Equating the population moments to the sample moments:
$$\mu + \sigma = \bar{X}$$
$$\mu^2 + 2\mu\sigma + 2\sigma^2 = \frac{1}{n}\sum_{i=1}^n X_i^2$$
From the first equation:
$$\mu = \bar{X} - \sigma$$
Now, let $m_2 = \frac{1}{n}\sum_{i=1}^n X_i^2$. We also know that
$$\sigma^2 = E[X^2] - E[X]^2 = m_2 - \bar{X}^2$$
Substituting, we obtain
$$\hat{\sigma} = \sqrt{m_2 - \bar{X}^2} = S$$
and substituting $\hat{\sigma}$ back into the first equation,
$$\hat{\mu} = \bar{X} - S$$
The MOME for $g(\theta) = (\mu, \sigma)$ is:
$$(\hat{\mu}, \hat{\sigma}) = \left(\bar{X} - S,\; S\right)$$
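An illustrative sketch under an assumed setup ($X = \mu + \sigma E$ with $E \sim$ Exponential(1), so $E[X] = \mu+\sigma$ and $\mathrm{Var}(X) = \sigma^2$, consistent with the moments above); parameter values are arbitrary and numpy is assumed available.

import numpy as np

rng = np.random.default_rng(2)
mu_true, sigma_true = 1.5, 0.5
x = mu_true + sigma_true * rng.exponential(1.0, size=10_000)

xbar = x.mean()
m2 = np.mean(x ** 2)
sigma_hat = np.sqrt(m2 - xbar ** 2)   # sigma_hat = S
mu_hat = xbar - sigma_hat             # mu_hat = Xbar - S
print(mu_hat, sigma_hat)              # should be close to (1.5, 0.5)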


3. Find the maximum likelihood estimators (MLE) in the following settings.

(a) Binomial($m, \theta$):
The likelihood function is
$$L(\theta) = \prod_{i=1}^n \binom{m}{x_i}\theta^{x_i}(1-\theta)^{m-x_i} = \left[\prod_{i=1}^n \binom{m}{x_i}\right]\theta^{\sum_{i=1}^n x_i}(1-\theta)^{\sum_{i=1}^n (m-x_i)}$$
The log-likelihood function is:
$$\log L(\theta) = \sum_{i=1}^n \log\binom{m}{x_i} + \left(\sum_{i=1}^n x_i\right)\log\theta + \left(\sum_{i=1}^n (m-x_i)\right)\log(1-\theta)$$
Taking the derivative with respect to $\theta$:
$$\frac{d\log L}{d\theta} = \frac{\sum_{i=1}^n x_i}{\theta} - \frac{\sum_{i=1}^n (m-x_i)}{1-\theta}$$
Setting the derivative to zero:
$$\frac{\sum_{i=1}^n x_i}{\theta} = \frac{\sum_{i=1}^n (m-x_i)}{1-\theta}$$
$$\sum_{i=1}^n x_i - \theta\sum_{i=1}^n x_i = \theta\sum_{i=1}^n m - \theta\sum_{i=1}^n x_i$$
$$\sum_{i=1}^n x_i = \theta\, nm$$
Therefore, the MLE for $\theta$ is:
$$\hat{\theta} = \frac{\sum_{i=1}^n x_i}{nm} = \frac{\bar{X}}{m}$$
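A minimal sketch (not in the original solution) of the estimator $\hat{\theta} = \bar{X}/m$ applied to simulated Binomial($m, \theta$) data; $m$ and $\theta$ below are arbitrary choices and numpy is assumed available.

import numpy as np

rng = np.random.default_rng(3)
m, theta_true = 10, 0.3
x = rng.binomial(m, theta_true, size=5_000)

theta_hat = x.mean() / m
print(theta_hat)                # should be close to 0.3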
(b) Binomial($\theta, p$):
In this case, we have only one observation $X$, and $p$ is known. The likelihood function is:
$$L(\theta) = \binom{\theta}{x}p^x(1-p)^{\theta-x}$$
Since $\theta$ must be an integer, the likelihood has to be maximized by discrete optimization (for example, by examining the ratio $L(\theta)/L(\theta-1)$), and $\theta$ must be greater than or equal to the observed value $x$.
(c) Binomial($m, \theta$):
$$g(\theta) = P(X_1 + X_2 = 0) = P(X_1 = 0, X_2 = 0) = P(X_1 = 0)P(X_2 = 0) = (1-\theta)^m(1-\theta)^m = (1-\theta)^{2m}$$
Since the MLE of $\theta$ is $\bar{X}/m$, by the invariance property the MLE of $(1-\theta)^{2m}$ is $(1 - \bar{X}/m)^{2m}$.
(d) Hypergeometric($m, n, \theta$):
The likelihood function is:
$$L(\theta) = \frac{\binom{m}{x}\binom{\theta-m}{n-x}}{\binom{\theta}{n}}, \qquad g(\theta) = \theta$$
Since there is a single observation and $\theta$ is an integer, the MLE is the value of $\theta$ that maximizes $L(\theta)$, found by discrete optimization. Examining the ratio $L(\theta)/L(\theta-1) = \frac{(\theta-m)(\theta-n)}{\theta(\theta-m-n+x)}$ shows that it is at least 1 exactly when $\theta \le mn/x$, so the MLE is $\hat{\theta} = \lfloor mn/x\rfloor$.
(e) Double Exponential:
The likelihood function is:
$$L(\theta) = \prod_{i=1}^n \frac{1}{2}\exp\{-|x_i - \theta|\} = \frac{1}{2^n}\exp\left\{-\sum_{i=1}^n |x_i - \theta|\right\}$$
To maximize $L(\theta)$, we need to minimize $\sum_{i=1}^n |x_i - \theta|$. This is minimized when $\theta$ is the median of the $x_i$, so $\hat{\theta} = \mathrm{median}(X_1,\dots,X_n)$.
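A quick numerical check (not in the original solution) that the sample median minimizes $\sum_i |x_i - \theta|$, by scanning a grid of candidate values; the simulated data and grid resolution are arbitrary and numpy is assumed available.

import numpy as np

rng = np.random.default_rng(4)
x = rng.laplace(loc=2.0, scale=1.0, size=501)

grid = np.linspace(x.min(), x.max(), 10_001)
loss = np.abs(x[:, None] - grid[None, :]).sum(axis=0)   # sum |x_i - theta| for each grid point
print(grid[np.argmin(loss)], np.median(x))               # the two values should agree closely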
(f) Uniform($\alpha, \beta$):
$$f(x;\alpha,\beta) = \frac{1}{\beta-\alpha}, \qquad \alpha \le x \le \beta$$
$$L(\alpha,\beta) = \prod_{i=1}^n \frac{1}{\beta-\alpha} = \frac{1}{(\beta-\alpha)^n}, \qquad \alpha \le \min_i x_i,\; \beta \ge \max_i x_i$$
To maximize $L(\alpha,\beta)$, we must minimize $\beta - \alpha$, so
$$\hat{\alpha} = \min_i(X_i), \qquad \hat{\beta} = \max_i(X_i)$$
For $g(\theta) = \alpha + \beta$, the MLE is $\min_i(X_i) + \max_i(X_i)$.
(g) Normal($\theta, \theta^2$):
The likelihood function is:
$$L(\theta) = \prod_{i=1}^n \frac{1}{\sqrt{2\pi\theta^2}}\exp\left\{-\frac{(x_i-\theta)^2}{2\theta^2}\right\} = (2\pi\theta^2)^{-n/2}\exp\left\{-\sum_{i=1}^n \frac{(x_i-\theta)^2}{2\theta^2}\right\}$$
The log-likelihood function is:
$$\log L(\theta) = -\frac{n}{2}\log(2\pi) - \frac{n}{2}\log(\theta^2) - \sum_{i=1}^n \frac{(x_i-\theta)^2}{2\theta^2}$$
Taking the derivative with respect to $\theta$:
$$\frac{d\log L}{d\theta} = -\frac{n}{\theta} + \frac{1}{\theta^2}\sum_{i=1}^n (x_i-\theta) + \frac{1}{\theta^3}\sum_{i=1}^n (x_i-\theta)^2$$
Setting the derivative to zero and multiplying through by $\theta^3$ gives, after simplification,
$$n\theta^2 + \theta\sum_{i=1}^n x_i - \sum_{i=1}^n x_i^2 = 0$$
whose positive root (taking $\theta > 0$) is the MLE:
$$\hat{\theta} = \frac{-\sum_{i=1}^n x_i + \sqrt{\left(\sum_{i=1}^n x_i\right)^2 + 4n\sum_{i=1}^n x_i^2}}{2n}$$
Problem 3(h): MLE for Inverse Gaussian
Given $X_1,\dots,X_n \sim$ Inverse Gaussian($\theta_1, \theta_2$) and $g(\theta) = (\theta_1, \theta_2)$.
The likelihood function is:
$$L(\theta_1,\theta_2) = \prod_{j=1}^n \left(\frac{\theta_2}{2\pi x_j^3}\right)^{1/2}\exp\left\{-\frac{\theta_2(x_j-\theta_1)^2}{2\theta_1^2 x_j}\right\}$$
Taking the logarithm:
$$\log L = \sum_{j=1}^n \left[\frac{1}{2}\log\theta_2 - \frac{1}{2}\log(2\pi x_j^3) - \frac{\theta_2(x_j-\theta_1)^2}{2\theta_1^2 x_j}\right]$$
Maximizing with respect to $\theta_1$ and $\theta_2$ leads to:
$$\hat{\theta}_1 = \bar{X}$$
Solving for $\hat{\theta}_2$:
$$\hat{\theta}_2 = \frac{n}{\sum_{i=1}^n\left(\dfrac{1}{X_i} - \dfrac{1}{\bar{X}}\right)}$$
Therefore, the MLE for $g(\theta)$ is:
$$\left(\bar{X},\; \frac{n}{\sum_{i=1}^n\left(\dfrac{1}{X_i} - \dfrac{1}{\bar{X}}\right)}\right)$$
Problem 4(a): Sufficient Statistic for Linear Model
Given $Y_i = \beta x_i + \epsilon_i$, with $\epsilon_i \sim N(0, \sigma^2)$.
The likelihood function is:
$$L(\beta,\sigma^2) \propto \prod_{i=1}^n \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left\{-\frac{(Y_i-\beta x_i)^2}{2\sigma^2}\right\}$$
$$f(y_1,\dots,y_n\mid\beta,\sigma^2) = (2\pi\sigma^2)^{-n/2}\exp\left\{-\frac{1}{2\sigma^2}\sum_{i=1}^n (y_i - \beta x_i)^2\right\}$$
Expanding the square:
$$\sum_{i=1}^n (y_i - \beta x_i)^2 = \sum_{i=1}^n \left(y_i^2 - 2\beta x_i y_i + \beta^2 x_i^2\right)$$
By the factorization theorem, a two-dimensional sufficient statistic for $(\beta,\sigma^2)$ is:
$$\left(\sum_{i=1}^n x_i Y_i,\; \sum_{i=1}^n Y_i^2\right)$$
Problem 4(b): MLE of Beta
The log-likelihood is:
$$\ell(\beta,\sigma^2) = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^n (Y_i - \beta x_i)^2$$
Taking the derivative with respect to $\beta$:
$$\frac{\partial\ell}{\partial\beta} = \frac{1}{\sigma^2}\sum_{i=1}^n (Y_i - \beta x_i)x_i = 0$$
Solving for $\beta$, we obtain the MLE:
$$\hat{\beta} = \frac{\sum_{i=1}^n x_i Y_i}{\sum_{i=1}^n x_i^2}$$
To show unbiasedness:
$$E[\hat{\beta}] = E\left[\frac{\sum_{i=1}^n x_i Y_i}{\sum_{i=1}^n x_i^2}\right] = \frac{\sum_{i=1}^n x_i E[Y_i]}{\sum_{i=1}^n x_i^2} = \frac{\sum_{i=1}^n x_i(\beta x_i)}{\sum_{i=1}^n x_i^2} = \beta\,\frac{\sum_{i=1}^n x_i^2}{\sum_{i=1}^n x_i^2} = \beta$$
Thus, the MLE $\hat{\beta}$ is an unbiased estimator of $\beta$.
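An illustrative sketch (not part of the original solution): computing $\hat{\beta} = \sum x_i Y_i / \sum x_i^2$ over many simulated data sets and averaging, to see the unbiasedness numerically; the design points, $\beta$, and $\sigma$ below are arbitrary and numpy is assumed available.

import numpy as np

rng = np.random.default_rng(5)
beta_true, sigma = 2.0, 1.0
x = rng.uniform(0.5, 3.0, size=20)            # fixed design points

est = []
for _ in range(10_000):
    y = beta_true * x + rng.normal(0.0, sigma, size=x.size)
    est.append(np.sum(x * y) / np.sum(x ** 2))
print(np.mean(est))                           # should be close to beta_true = 2.0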
Problem 4(c): Distribution of MLE of Beta
Since $Y_i = \beta x_i + \epsilon_i$ and $\epsilon_i \sim N(0, \sigma^2)$, it follows that $Y_i \sim N(\beta x_i, \sigma^2)$.
The MLE is a linear combination of normal random variables:
$$\hat{\beta} = \frac{\sum_{i=1}^n x_i Y_i}{\sum_{i=1}^n x_i^2}$$
Therefore, $\hat{\beta}$ is also normally distributed.
The mean of $\hat{\beta}$ is $E[\hat{\beta}] = \beta$.
The variance of $\hat{\beta}$ is:
$$\mathrm{Var}(\hat{\beta}) = \mathrm{Var}\left(\frac{\sum_{i=1}^n x_i Y_i}{\sum_{i=1}^n x_i^2}\right) = \frac{\sum_{i=1}^n x_i^2\,\mathrm{Var}(Y_i)}{\left(\sum_{i=1}^n x_i^2\right)^2} = \frac{\sigma^2\sum_{i=1}^n x_i^2}{\left(\sum_{i=1}^n x_i^2\right)^2} = \frac{\sigma^2}{\sum_{i=1}^n x_i^2}$$
Thus, the distribution of the MLE of $\beta$ is:
$$\hat{\beta} \sim N\!\left(\beta,\; \frac{\sigma^2}{\sum_{i=1}^n x_i^2}\right)$$
Problem 4(d): Unbiased Estimator of Beta
Let $\tilde{\beta} = \dfrac{\sum_{i=1}^n Y_i}{\sum_{i=1}^n x_i}$.
$$E[\tilde{\beta}] = \frac{\sum_{i=1}^n E[Y_i]}{\sum_{i=1}^n x_i} = \frac{\sum_{i=1}^n \beta x_i}{\sum_{i=1}^n x_i} = \beta$$
Therefore, $\tilde{\beta} = \dfrac{\sum_{i=1}^n Y_i}{\sum_{i=1}^n x_i}$ is an unbiased estimator of $\beta$.
Problem 4(e): Variance Comparison
The variance of $\tilde{\beta} = \dfrac{\sum_{i=1}^n Y_i}{\sum_{i=1}^n x_i}$ is:
$$\mathrm{Var}(\tilde{\beta}) = \mathrm{Var}\left(\frac{\sum_{i=1}^n Y_i}{\sum_{i=1}^n x_i}\right) = \frac{\sum_{i=1}^n \mathrm{Var}(Y_i)}{\left(\sum_{i=1}^n x_i\right)^2} = \frac{n\sigma^2}{\left(\sum_{i=1}^n x_i\right)^2}$$
The variance of the MLE is:
$$\mathrm{Var}(\hat{\beta}) = \frac{\sigma^2}{\sum_{i=1}^n x_i^2}$$
The ratio is:
$$\frac{\mathrm{Var}(\tilde{\beta})}{\mathrm{Var}(\hat{\beta})} = \frac{n\sigma^2}{\left(\sum_{i=1}^n x_i\right)^2}\cdot\frac{\sum_{i=1}^n x_i^2}{\sigma^2} = \frac{n\sum_{i=1}^n x_i^2}{\left(\sum_{i=1}^n x_i\right)^2}$$
By the Cauchy-Schwarz inequality:
$$\left(\sum_{i=1}^n x_i\right)^2 \le n\sum_{i=1}^n x_i^2 \quad\Rightarrow\quad \frac{n\sum_{i=1}^n x_i^2}{\left(\sum_{i=1}^n x_i\right)^2} \ge 1$$
Thus, $\mathrm{Var}(\tilde{\beta}) \ge \mathrm{Var}(\hat{\beta})$. The MLE is more efficient.
Problem 4(f): Another Unbiased Estimator
Consider the estimator:
$$\beta^* = \frac{1}{n}\sum_{i=1}^n \frac{Y_i}{x_i}$$
To show unbiasedness:
$$E[\beta^*] = \frac{1}{n}\sum_{i=1}^n \frac{E[Y_i]}{x_i} = \frac{1}{n}\sum_{i=1}^n \frac{\beta x_i}{x_i} = \frac{1}{n}\cdot n\beta = \beta$$
Thus, $\beta^* = \frac{1}{n}\sum_{i=1}^n \frac{Y_i}{x_i}$ is an unbiased estimator of $\beta$.
Problem 4(g): Variance of the Third Estimator
The variance of $\beta^* = \frac{1}{n}\sum_{i=1}^n \frac{Y_i}{x_i}$ is:
$$\mathrm{Var}(\beta^*) = \mathrm{Var}\left(\frac{1}{n}\sum_{i=1}^n \frac{Y_i}{x_i}\right) = \frac{1}{n^2}\sum_{i=1}^n \frac{\mathrm{Var}(Y_i)}{x_i^2} = \frac{\sigma^2}{n^2}\sum_{i=1}^n \frac{1}{x_i^2}$$
Comparing with $\mathrm{Var}(\hat{\beta}) = \sigma^2/\sum_{i=1}^n x_i^2$: by the Cauchy-Schwarz inequality,
$$n^2 = \left(\sum_{i=1}^n 1\right)^2 \le \left(\sum_{i=1}^n x_i^2\right)\left(\sum_{i=1}^n \frac{1}{x_i^2}\right) \quad\Rightarrow\quad \frac{\sigma^2}{\sum_{i=1}^n x_i^2} \le \frac{\sigma^2}{n^2}\sum_{i=1}^n \frac{1}{x_i^2}$$
The MLE, $\hat{\beta}$, is the best linear unbiased estimator (BLUE): it has the smallest variance among the three estimators.
Problem 5: Best Unbiased Estimator
Let $W_1,\dots,W_k$ be unbiased estimators of $\theta$ with known variances $\mathrm{Var}(W_i) = \sigma_i^2$.
Consider an estimator of the form:
$$\hat{\theta} = \sum_{i=1}^k a_i W_i$$
For $\hat{\theta}$ to be unbiased, we require:
$$E[\hat{\theta}] = \sum_{i=1}^k a_i E[W_i] = \theta\sum_{i=1}^k a_i = \theta$$
This means:
$$\sum_{i=1}^k a_i = 1$$
We want to minimize the variance of $\hat{\theta}$ subject to this constraint:
$$\mathrm{Var}(\hat{\theta}) = \mathrm{Var}\left(\sum_{i=1}^k a_i W_i\right) = \sum_{i=1}^k a_i^2\,\mathrm{Var}(W_i) = \sum_{i=1}^k a_i^2\sigma_i^2$$
Using Lagrange multipliers, we get:
$$a_i = \frac{1/\sigma_i^2}{\sum_{j=1}^k 1/\sigma_j^2}$$
Thus, the best unbiased estimator is:
$$\hat{\theta} = \frac{\sum_{i=1}^k W_i/\sigma_i^2}{\sum_{j=1}^k 1/\sigma_j^2}$$
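A minimal sketch of the inverse-variance weights derived above, applied to a few hypothetical estimates (the numbers below are made up for illustration); numpy is assumed available.

import numpy as np

sigma2 = np.array([1.0, 4.0, 0.25])           # assumed known variances sigma_i^2
w = np.array([10.2, 9.5, 10.05])              # hypothetical unbiased estimates W_i

a = (1.0 / sigma2) / np.sum(1.0 / sigma2)     # optimal weights, summing to 1
theta_hat = np.sum(a * w)
print(a, theta_hat)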
Problem 6: Unbiased Estimator of Area
Let $R_i$ be the measured radius with $R_i = r + \epsilon_i$, where $\epsilon_i \sim N(0, \sigma^2)$ and $r$ is the true radius.
The area of the circle is:
$$A = \pi r^2$$
An obvious estimator is $\hat{A} = \pi\bar{R}^2$, where $\bar{R} = \frac{1}{n}\sum_{i=1}^n R_i$. However,
$$E[\pi\bar{R}^2] = \pi E[\bar{R}^2] = \pi\left(r^2 + \frac{\sigma^2}{n}\right)$$
So $\pi\bar{R}^2$ is biased.
To find an unbiased estimator:
$$E[R_i^2] = \mathrm{Var}(R_i) + E[R_i]^2 = \sigma^2 + r^2$$
Thus, $r^2 = E[R_i^2] - \sigma^2$.
We estimate $\sigma^2$ with:
$$S^2 = \frac{\sum_{i=1}^n (R_i - \bar{R})^2}{n-1}$$
So, the unbiased estimator is:
$$\hat{A}_{\text{unbiased}} = \pi\left(\bar{R}^2 - \frac{S^2}{n}\right)$$
Since $(\bar{R}, S^2)$ is a complete sufficient statistic for the normal family, this estimator is the UMVUE.
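A Monte Carlo sketch (not in the original text): $\pi\bar{R}^2$ overshoots $\pi r^2$ by roughly $\pi\sigma^2/n$, while $\pi(\bar{R}^2 - S^2/n)$ is on target; the values of $r$, $\sigma$, and $n$ are arbitrary and numpy is assumed available.

import numpy as np

rng = np.random.default_rng(6)
r, sigma, n = 3.0, 0.5, 10

naive, corrected = [], []
for _ in range(100_000):
    R = r + rng.normal(0.0, sigma, size=n)
    Rbar, S2 = R.mean(), R.var(ddof=1)
    naive.append(np.pi * Rbar ** 2)
    corrected.append(np.pi * (Rbar ** 2 - S2 / n))
print(np.mean(naive), np.mean(corrected), np.pi * r ** 2)   # biased vs unbiased vs truth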
Problem Set 3
1. (Additive Properties)
(a) Binomial:
Let $X_i \stackrel{\text{ind}}{\sim}$ Binomial($n_i, p$) for $i = 1,\dots,k$. The moment generating function (MGF) of $X_i$ is:
$$M_{X_i}(t) = (1 - p + pe^t)^{n_i}$$
Since the $X_i$'s are independent, the MGF of $T = \sum_{i=1}^k X_i$ is:
$$M_T(t) = \prod_{i=1}^k M_{X_i}(t) = \prod_{i=1}^k (1 - p + pe^t)^{n_i} = (1 - p + pe^t)^{\sum_{i=1}^k n_i}$$
This is the MGF of a Binomial distribution with parameters $\sum_{i=1}^k n_i$ and $p$. Therefore:
$$T \sim \text{Binomial}\left(\sum_{i=1}^k n_i,\; p\right)$$
(b) Poisson:
Let $X_i \stackrel{\text{ind}}{\sim}$ Poisson($\lambda_i$) for $i = 1,\dots,n$. The MGF of $X_i$ is:
$$M_{X_i}(t) = e^{\lambda_i(e^t - 1)}$$
Since the $X_i$'s are independent, the MGF of $T = \sum_{i=1}^n X_i$ is:
$$M_T(t) = \prod_{i=1}^n M_{X_i}(t) = \prod_{i=1}^n e^{\lambda_i(e^t-1)} = e^{\left(\sum_{i=1}^n \lambda_i\right)(e^t-1)}$$
This is the MGF of a Poisson distribution with parameter $\sum_{i=1}^n \lambda_i$. Therefore:
$$T \sim \text{Poisson}\left(\sum_{i=1}^n \lambda_i\right)$$
(c) Normal:
Let $X_i \stackrel{\text{ind}}{\sim}$ Normal($\mu_i, \sigma_i^2$) for $i = 1,\dots,n$. The MGF of $X_i$ is:
$$M_{X_i}(t) = e^{\mu_i t + \frac{1}{2}\sigma_i^2 t^2}$$
Since the $X_i$'s are independent, the MGF of $T = \sum_{i=1}^n X_i$ is:
$$M_T(t) = \prod_{i=1}^n M_{X_i}(t) = \prod_{i=1}^n e^{\mu_i t + \frac{1}{2}\sigma_i^2 t^2} = e^{\left(\sum_{i=1}^n \mu_i\right)t + \frac{1}{2}\left(\sum_{i=1}^n \sigma_i^2\right)t^2}$$
This is the MGF of a Normal distribution with mean $\sum_{i=1}^n \mu_i$ and variance $\sum_{i=1}^n \sigma_i^2$. Therefore:
$$T \sim \text{Normal}\left(\sum_{i=1}^n \mu_i,\; \sum_{i=1}^n \sigma_i^2\right)$$
(d) Gamma:
Let $X_i \stackrel{\text{ind}}{\sim}$ Gamma($\alpha_i, \beta$) for $i = 1,\dots,n$. The MGF of $X_i$ is:
$$M_{X_i}(t) = (1 - \beta t)^{-\alpha_i}, \qquad t < \frac{1}{\beta}$$
Since the $X_i$'s are independent, the MGF of $T = \sum_{i=1}^n X_i$ is:
$$M_T(t) = \prod_{i=1}^n M_{X_i}(t) = \prod_{i=1}^n (1-\beta t)^{-\alpha_i} = (1-\beta t)^{-\sum_{i=1}^n \alpha_i}$$
This is the MGF of a Gamma distribution with parameters $\sum_{i=1}^n \alpha_i$ and $\beta$. Therefore:
$$T \sim \text{Gamma}\left(\sum_{i=1}^n \alpha_i,\; \beta\right)$$
(e) Chi-squared:
Let $X_i \stackrel{\text{ind}}{\sim} \chi^2_{n_i}$ for $i = 1,\dots,k$. Recall that a Chi-squared distribution with $n_i$ degrees of freedom is a special case of the Gamma distribution with $\alpha_i = n_i/2$ and $\beta = 2$. Therefore, $X_i \sim \text{Gamma}(n_i/2, 2)$. Using the result from part (d):
$$T = \sum_{i=1}^k X_i \sim \text{Gamma}\left(\sum_{i=1}^k \frac{n_i}{2},\; 2\right)$$
which is a Chi-squared distribution with $\sum_{i=1}^k n_i$ degrees of freedom. Therefore:
$$T \sim \chi^2_N, \qquad \text{where } N = \sum_{i=1}^k n_i$$
2. Let $X \sim$ Normal($\mu, \sigma^2$) and $T = aX + b$. The MGF of $X$ is:
$$M_X(t) = e^{\mu t + \frac{1}{2}\sigma^2 t^2}$$
The MGF of $T$ is:
$$M_T(t) = E[e^{tT}] = E[e^{t(aX+b)}] = e^{bt}E[e^{(at)X}] = e^{bt}M_X(at)$$
$$M_T(t) = e^{bt}\,e^{\mu(at) + \frac{1}{2}\sigma^2(at)^2} = e^{(a\mu + b)t + \frac{1}{2}(a^2\sigma^2)t^2}$$
This is the MGF of a Normal distribution with mean $a\mu + b$ and variance $a^2\sigma^2$. Therefore:
$$T \sim \text{Normal}(a\mu + b,\; a^2\sigma^2)$$
3. Let $X \sim$ Gamma($\alpha, \beta$) and $T = aX$. The MGF of $X$ is:
$$M_X(t) = (1 - \beta t)^{-\alpha}, \qquad t < \frac{1}{\beta}$$
The MGF of $T$ is:
$$M_T(t) = E[e^{tT}] = E[e^{(at)X}] = M_X(at) = (1 - \beta(at))^{-\alpha} = (1 - (a\beta)t)^{-\alpha}$$
This is the MGF of a Gamma distribution with parameters $\alpha$ and $a\beta$. Therefore:
$$T \sim \text{Gamma}(\alpha,\; a\beta)$$
4. Let $X \sim$ Beta($n/2, m/2$) and $T = \dfrac{mX}{n(1-X)}$. To find the distribution of $T$, we use the transformation method.
First, find the CDF of $T$:
$$F_T(t) = P\!\left(\frac{mX}{n(1-X)} \le t\right) = P\!\left(X \le \frac{nt}{m + nt}\right)$$
Now, differentiate $F_T(t)$ to find the PDF $f_T(t)$. The PDF of $X$ is:
$$f_X(x) = \frac{x^{\frac{n}{2}-1}(1-x)^{\frac{m}{2}-1}}{B\!\left(\frac{n}{2}, \frac{m}{2}\right)}$$
where $B(\cdot,\cdot)$ is the beta function. Using the transformation formula, we find that:
$$T \sim F_{n,m}$$
5. Let $X \sim$ Uniform(0, 1) and $a > 0$. Then $T = X^{1/a}$. The CDF of $X$ is $F_X(x) = x$ for $0 \le x \le 1$. The CDF of $T$ is:
$$F_T(t) = P(T \le t) = P(X^{1/a} \le t) = P(X \le t^a) = F_X(t^a) = t^a$$
for $0 < t < 1$. Differentiating $F_T(t)$ with respect to $t$, we get the PDF of $T$:
$$f_T(t) = a t^{a-1}$$
for $0 < t < 1$. This is the PDF of a Beta distribution with parameters $a$ and 1. Therefore:
$$T \sim \text{Beta}(a, 1)$$
6. Let $X \sim$ Cauchy(0, 1) and $T = \dfrac{1}{1+X^2}$. The PDF of $X$ is:
$$f_X(x) = \frac{1}{\pi(1+x^2)}, \qquad -\infty < x < \infty$$
The CDF of $T$ is:
$$F_T(t) = P(T \le t) = P\!\left(\frac{1}{1+X^2} \le t\right) = P\!\left(X^2 \ge \frac{1}{t} - 1\right)$$
$$F_T(t) = P\!\left(|X| \ge \sqrt{\frac{1}{t}-1}\right) = 2\left[1 - F_X\!\left(\sqrt{\frac{1}{t}-1}\right)\right], \qquad 0 < t < 1$$
Differentiating $F_T(t)$ to find the PDF $f_T(t)$ and simplifying, we find that:
$$T \sim \text{Beta}(0.5,\; 0.5)$$
7. Let $X \sim$ Uniform(0, 1) and $T = -2\log X$. The CDF of $X$ is $F_X(x) = x$ for $0 \le x \le 1$. The CDF of $T$ is:
$$F_T(t) = P(T \le t) = P(-2\log X \le t) = P\!\left(\log X \ge -\frac{t}{2}\right) = P\!\left(X \ge e^{-t/2}\right)$$
$$F_T(t) = 1 - e^{-t/2}, \qquad t > 0$$
Differentiating $F_T(t)$ with respect to $t$, we get the PDF of $T$:
$$f_T(t) = \frac{1}{2}e^{-t/2}, \qquad t > 0$$
This is the PDF of a Chi-squared distribution with 2 degrees of freedom. Therefore:
$$T \sim \chi^2_2$$
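A simulation sketch (not in the original solution): $T = -2\log U$ with $U \sim$ Uniform(0,1) should match $\chi^2_2$; scipy is assumed available for the comparison test.

import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
t = -2.0 * np.log(rng.uniform(size=100_000))
print(stats.kstest(t, stats.chi2(df=2).cdf))   # large p-value expected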
8. Let $X$ be distributed as some absolutely continuous distribution with CDF $G_X$. Then $T = G_X(X)$. The CDF of $T$ is:
$$F_T(t) = P(T \le t) = P(G_X(X) \le t)$$
Since $G_X$ is a continuous, non-decreasing CDF, we can write:
$$F_T(t) = P\!\left(X \le G_X^{-1}(t)\right) = G_X\!\left(G_X^{-1}(t)\right) = t, \qquad 0 < t < 1$$
This is the CDF of a Uniform distribution on (0, 1). Therefore:
$$T \sim \text{Uniform}(0, 1)$$
9. Let the random variable $X$ have PDF $f(x) = \sqrt{\frac{2}{\pi}}\exp\{-x^2/2\}$, $x > 0$.
(a) Find $E(X)$ and $\mathrm{Var}(X)$.
The given PDF is that of a half-normal distribution.
The expected value of $X$ is:
$$E(X) = \int_0^\infty x\sqrt{\frac{2}{\pi}}\exp\{-x^2/2\}\,dx = \sqrt{\frac{2}{\pi}}\int_0^\infty x\exp\{-x^2/2\}\,dx$$
Let $u = x^2/2$, so $du = x\,dx$. Then:
$$E(X) = \sqrt{\frac{2}{\pi}}\int_0^\infty e^{-u}\,du = \sqrt{\frac{2}{\pi}}$$
To find the variance, we first need $E(X^2)$:
$$E(X^2) = \int_0^\infty x^2\sqrt{\frac{2}{\pi}}\exp\{-x^2/2\}\,dx$$
Using integration by parts, or recognizing that this is the second moment of a half-normal distribution, we have $E(X^2) = 1$.
Therefore:
$$\mathrm{Var}(X) = E(X^2) - [E(X)]^2 = 1 - \left(\sqrt{\frac{2}{\pi}}\right)^2 = 1 - \frac{2}{\pi}$$
(b) Find an appropriate transformation $Y = g(X)$ and $\alpha, \beta$, so that $Y \sim$ Gamma($\alpha, \beta$).
Consider $Y = X^2/2$. Then $X = \sqrt{2Y}$. The PDF of $X$ is:
$$f_X(x) = \sqrt{\frac{2}{\pi}}\exp\{-x^2/2\}$$
The PDF of $Y$ can be found using the transformation formula:
$$f_Y(y) = f_X\!\left(\sqrt{2y}\right)\left|\frac{dx}{dy}\right| = \sqrt{\frac{2}{\pi}}\,e^{-y}\cdot\frac{1}{\sqrt{2y}} = \frac{1}{\sqrt{\pi}}\,y^{-1/2}e^{-y}, \qquad y > 0$$
This is the PDF of a Gamma distribution with parameters $\alpha = 1/2$ and $\beta = 1$ (recall $\Gamma(1/2) = \sqrt{\pi}$). Therefore:
$$Y \sim \text{Gamma}\!\left(\frac{1}{2},\; 1\right)$$
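A simulation sketch (not part of the original solution): taking $X = |Z|$ with $Z \sim N(0,1)$ as the half-normal variable above, $Y = X^2/2$ should follow Gamma(1/2, 1); scipy is assumed available for the comparison test.

import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
x = np.abs(rng.normal(size=100_000))
y = x ** 2 / 2.0
print(stats.kstest(y, stats.gamma(a=0.5, scale=1.0).cdf))   # large p-value expected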
10. Let $X$ be distributed as Gamma($\alpha, \beta$). Show that $E(X^r) = \beta^r\,\dfrac{\Gamma(r+\alpha)}{\Gamma(\alpha)}$, $r > -\alpha$.
The expected value of $X^r$ is given by:
$$E(X^r) = \int_0^\infty x^r\,\frac{x^{\alpha-1}e^{-x/\beta}}{\beta^\alpha\Gamma(\alpha)}\,dx = \frac{1}{\beta^\alpha\Gamma(\alpha)}\int_0^\infty x^{r+\alpha-1}e^{-x/\beta}\,dx$$
Let $y = x/\beta$, so $x = \beta y$ and $dx = \beta\,dy$. Then:
$$E(X^r) = \frac{1}{\beta^\alpha\Gamma(\alpha)}\int_0^\infty (\beta y)^{r+\alpha-1}e^{-y}\beta\,dy = \frac{\beta^{r+\alpha}}{\beta^\alpha\Gamma(\alpha)}\int_0^\infty y^{r+\alpha-1}e^{-y}\,dy$$
$$E(X^r) = \frac{\beta^r}{\Gamma(\alpha)}\,\Gamma(r+\alpha) = \beta^r\,\frac{\Gamma(r+\alpha)}{\Gamma(\alpha)}$$
11. Let the bivariate random variable $(X, Y)$ have a joint PDF $f_{X,Y}(x,y) = C(x + 2y)$ if $0 < y < 1$, $0 < x < 2$, and 0 otherwise.
(a) Find the marginal distribution of $Y$.
First, we need to find the value of $C$. Since the joint PDF must integrate to 1, we have:
$$\int_0^1\!\!\int_0^2 C(x+2y)\,dx\,dy = 1$$
Evaluating the integral:
$$C\int_0^1\left[\frac{x^2}{2} + 2xy\right]_0^2 dy = C\int_0^1 (2 + 4y)\,dy = C\left[2y + 2y^2\right]_0^1 = C(2+2) = 4C = 1$$
So, $C = 1/4$.
Now, to find the marginal distribution of $Y$, we integrate the joint PDF over $x$:
$$f_Y(y) = \int_0^2 \frac{1}{4}(x+2y)\,dx = \frac{1}{4}\left[\frac{x^2}{2} + 2xy\right]_0^2 = \frac{1}{4}(2+4y) = \frac{1}{2} + y$$
for $0 < y < 1$.
(b) Find the conditional distribution of $Y$ given $X = 1$.
First, find the marginal distribution of $X$:
$$f_X(x) = \int_0^1 \frac{1}{4}(x+2y)\,dy = \frac{1}{4}\left[xy + y^2\right]_0^1 = \frac{1}{4}(x+1)$$
for $0 < x < 2$.
Then, the conditional distribution of $Y$ given $X = 1$ is:
$$f_{Y|X}(y\mid x=1) = \frac{f_{X,Y}(1, y)}{f_X(1)} = \frac{\frac{1}{4}(1+2y)}{\frac{1}{4}(1+1)} = \frac{1+2y}{2}$$
for $0 < y < 1$.
(c) Compare the expectations of the above two distributions of $Y$.
The expectation of the marginal distribution of $Y$ is:
$$E[Y] = \int_0^1 y\left(\frac{1}{2} + y\right)dy = \left[\frac{y^2}{4} + \frac{y^3}{3}\right]_0^1 = \frac{1}{4} + \frac{1}{3} = \frac{7}{12}$$
The expectation of the conditional distribution of $Y$ given $X = 1$ is:
$$E[Y\mid X=1] = \int_0^1 y\left(\frac{1+2y}{2}\right)dy = \int_0^1\left(\frac{y}{2} + y^2\right)dy = \frac{1}{4} + \frac{1}{3} = \frac{7}{12}$$
The two expectations are equal.
(d) Find the covariance between $X$ and $Y$.
We need:
$$E[XY] = \int_0^1\!\!\int_0^2 xy\,\frac{1}{4}(x+2y)\,dx\,dy = \frac{1}{4}\int_0^1\!\!\int_0^2 (x^2y + 2xy^2)\,dx\,dy$$
$$E[XY] = \frac{1}{4}\int_0^1\left[\frac{x^3}{3}y + x^2y^2\right]_0^2 dy = \frac{1}{4}\int_0^1\left(\frac{8}{3}y + 4y^2\right)dy = \frac{1}{4}\left[\frac{4}{3}y^2 + \frac{4}{3}y^3\right]_0^1 = \frac{1}{4}\cdot\frac{8}{3} = \frac{2}{3}$$
Now, we need:
$$E[X] = \int_0^2 x f_X(x)\,dx = \int_0^2 x\,\frac{1}{4}(x+1)\,dx = \frac{1}{4}\left[\frac{x^3}{3} + \frac{x^2}{2}\right]_0^2 = \frac{1}{4}\left(\frac{8}{3} + 2\right) = \frac{7}{6}$$
Then:
$$\mathrm{Cov}(X, Y) = E[XY] - E[X]E[Y] = \frac{2}{3} - \left(\frac{7}{6}\right)\left(\frac{7}{12}\right) = \frac{48 - 49}{72} = -\frac{1}{72}$$
(e) Find the distribution of $Z = 9/(2Y+1)^2$.
Since $0 < Y < 1$, we have $1 < Z < 9$. For $1 \le z \le 9$:
$$P(Z \le z) = P\!\left(\frac{9}{(2Y+1)^2} \le z\right) = P\!\left((2Y+1)^2 \ge \frac{9}{z}\right) = P\!\left(2Y+1 \ge \frac{3}{\sqrt{z}}\right)$$
$$P(Z \le z) = P\!\left(Y \ge \frac{1}{2}\left(\frac{3}{\sqrt{z}} - 1\right)\right) = \int_{\frac{1}{2}\left(\frac{3}{\sqrt{z}}-1\right)}^{1}\left(\frac{1}{2} + y\right)dy = \frac{9}{8}\left(1 - \frac{1}{z}\right)$$
(f) What is $P(X > Y)$?
$$P(X > Y) = \iint_{x>y} f_{X,Y}(x,y)\,dx\,dy = \int_0^1\!\!\int_y^2 \frac{1}{4}(x+2y)\,dx\,dy$$
$$P(X > Y) = \frac{1}{4}\int_0^1\left[\frac{x^2}{2} + 2xy\right]_y^2 dy = \frac{1}{4}\int_0^1\left(2 + 4y - \frac{y^2}{2} - 2y^2\right)dy = \frac{1}{4}\int_0^1\left(2 + 4y - \frac{5}{2}y^2\right)dy$$
$$P(X > Y) = \frac{1}{4}\left(2 + 2 - \frac{5}{6}\right) = \frac{1}{4}\cdot\frac{19}{6} = \frac{19}{24}$$
12. Let $X \sim$ Normal(0, 1). Define $Y = -X\,\mathbb{1}(|X| \le 1) + X\,\mathbb{1}(|X| > 1)$. Find the distribution of $Y$. (Hint: Apply the CDF approach.)
$$Y = \begin{cases} X, & X < -1 \\ -X, & -1 \le X \le 1 \\ X, & X > 1 \end{cases}$$
Let $\Phi$ denote the $N(0,1)$ CDF and find the CDF of $Y$, $F_Y(y) = P(Y \le y)$.
If $y < -1$: only the branch $X < -1$ can give $Y \le y$, so
$$F_Y(y) = P(X \le y) = \Phi(y)$$
If $-1 \le y < 1$: $\{Y \le y\} = \{X < -1\} \cup \{-y \le X \le 1\}$, so
$$F_Y(y) = \Phi(-1) + \Phi(1) - \Phi(-y) = 1 - \Phi(-y) = \Phi(y)$$
If $y \ge 1$: $\{Y \le y\} = \{X \le y\}$, so
$$F_Y(y) = \Phi(y)$$
In every case $F_Y(y) = \Phi(y)$; hence $Y \sim$ Normal(0, 1).
13. Let $X \sim$ Normal(0, 1). Define $Y = \mathrm{sign}(X)$ and $Z = |X|$. Here $\mathrm{sign}(\cdot)$ is an $\mathbb{R} \to \{-1, 1\}$ function such that $\mathrm{sign}(a) = 1$ if $a > 0$, and $\mathrm{sign}(a) = -1$ otherwise.
(a) Find the marginal distributions of $Y$ and $Z$.
$Y$ is a discrete random variable taking values $-1$ and $1$:
$$P(Y = 1) = P(X > 0) = 0.5, \qquad P(Y = -1) = P(X \le 0) = 0.5$$
$Z = |X|$ has a half-normal distribution:
$$f_Z(z) = 2f_X(z) = 2\,\frac{1}{\sqrt{2\pi}}\,e^{-z^2/2}, \qquad z > 0$$
(b) Find the joint CDF of $(Y, Z)$. Hence or otherwise prove that $Y$ and $Z$ are independently distributed.
For $z \ge 0$, by the symmetry of the standard normal distribution,
$$P(Y = 1, Z \le z) = P(0 < X \le z) = \Phi(z) - \frac{1}{2} = \frac{1}{2}P(Z \le z)$$
$$P(Y = -1, Z \le z) = P(-z \le X \le 0) = \Phi(z) - \frac{1}{2} = \frac{1}{2}P(Z \le z)$$
In both cases $P(Y = y, Z \le z) = P(Y = y)\,P(Z \le z)$, so $Y$ and $Z$ are independent.
14. Suppose $X_1,\dots,X_n \stackrel{\text{i.i.d.}}{\sim}$ Normal($\mu_x, \sigma^2$), $Y_1,\dots,Y_m \stackrel{\text{i.i.d.}}{\sim}$ Normal($\mu_y, \sigma^2$), and all the random variables $\{X_1,\dots,X_n, Y_1,\dots,Y_m\}$ are mutually independent. Find the distribution of $T := S_X^2/S_Y^2$, where $S_X^2$ and $S_Y^2$ are the unbiased sample variances of $X$ and $Y$, respectively.
$$S_X^2 = \frac{1}{n-1}\sum_{i=1}^n (X_i - \bar{X})^2, \qquad S_Y^2 = \frac{1}{m-1}\sum_{j=1}^m (Y_j - \bar{Y})^2$$
We know that
$$\frac{(n-1)S_X^2}{\sigma^2} \sim \chi^2_{n-1} \qquad\text{and}\qquad \frac{(m-1)S_Y^2}{\sigma^2} \sim \chi^2_{m-1},$$
and the two are independent. Thus
$$T = \frac{S_X^2}{S_Y^2} = \frac{\sigma^2\,\chi^2_{n-1}/(n-1)}{\sigma^2\,\chi^2_{m-1}/(m-1)} = \frac{\chi^2_{n-1}/(n-1)}{\chi^2_{m-1}/(m-1)}$$
Since the ratio of two independent chi-squared random variables, each divided by its degrees of freedom, has an $F$ distribution, we have
$$\frac{S_X^2}{S_Y^2} \sim F_{n-1,\,m-1}$$
Problem 15(a): Expectation of $Y_1$
Given $X_1,\dots,X_n$ i.i.d. with CDF $F_X$ and $E(X_1) = \mu$. Define $Y_i = 1$ if $X_i > \mu$, and $Y_i = 0$ otherwise.
We want to find $E(Y_1)$. By definition:
$$E(Y_1) = 1\cdot P(X_1 > \mu) + 0\cdot P(X_1 \le \mu) = P(X_1 > \mu)$$
Since $P(X_1 > \mu) = 1 - P(X_1 \le \mu) = 1 - F_X(\mu)$, we have:
$$E(Y_1) = 1 - F_X(\mu)$$
If the distribution is continuous, then $P(X_1 = \mu) = 0$, so $P(X_1 \le \mu) = F_X(\mu)$. If $\mu$ is also the median, then $F_X(\mu) = 0.5$, and $E(Y_1) = 0.5$.
Problem 15(b): Distribution of the Sum of the $Y_i$
Let $S = \sum_{i=1}^n Y_i$. Since the $Y_i$ are i.i.d. Bernoulli random variables with success probability $p = E(Y_1) = 1 - F_X(\mu)$, and $S$ is the sum of $n$ independent Bernoulli trials, $S$ follows a binomial distribution:
$$S \sim \text{Binomial}(n, p)$$
Therefore, the probability mass function of $S$ is:
$$P(S = k) = \binom{n}{k}p^k(1-p)^{n-k}, \qquad k = 0, 1,\dots, n, \quad p = 1 - F_X(\mu)$$
Thus, the distribution of $\sum_{i=1}^n Y_i$ is Binomial($n,\, 1 - F_X(\mu)$).
Problem 16: Function of the Sample Variance
Given $X_1,\dots,X_n \stackrel{\text{i.i.d.}}{\sim} N(\mu, \sigma^2)$ and $S_n^2$ the unbiased sample variance, we want to find $g(S_n^2)$ such that $E[g(S_n^2)] = \sigma$.
We know that $\dfrac{(n-1)S_n^2}{\sigma^2} \sim \chi^2_{n-1}$.
Therefore, $E\!\left[\dfrac{(n-1)S_n^2}{\sigma^2}\right] = n-1$, which implies $E[S_n^2] = \sigma^2$, but $E[S_n] \ne \sigma$ in general.
Let $g(S_n^2) = c\,S_n$. Then we need to find $c$ such that $E[cS_n] = \sigma$.
Since $\dfrac{(n-1)S_n^2}{\sigma^2} \sim \chi^2_{n-1}$, we can write $S_n = \dfrac{\sigma}{\sqrt{n-1}}\sqrt{\chi^2_{n-1}}$. Then:
$$E[S_n] = \frac{\sigma}{\sqrt{n-1}}\,E\!\left[\sqrt{\chi^2_{n-1}}\right]$$
Let $m = n-1$ and $A = E\!\left[\sqrt{\chi^2_m}\right] = \sqrt{2}\,\dfrac{\Gamma\!\left(\frac{m+1}{2}\right)}{\Gamma\!\left(\frac{m}{2}\right)}$. Then $c = \dfrac{\sqrt{m}}{A}$, and
$$g(S_n^2) = \frac{\sqrt{n-1}}{A}\,S_n = \frac{\sqrt{n-1}\,\Gamma\!\left(\frac{n-1}{2}\right)}{\sqrt{2}\,\Gamma\!\left(\frac{n}{2}\right)}\,S_n, \qquad\text{so that } E[g(S_n^2)] = \sigma.$$
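A sketch of the constant $c = \sqrt{m}/A$ derived above, checked by Monte Carlo (the values of $n$ and $\sigma$ are arbitrary, and scipy is assumed available for the gamma function).

import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(9)
n, sigma = 8, 2.0
m = n - 1
A = np.sqrt(2.0) * np.exp(gammaln((m + 1) / 2) - gammaln(m / 2))  # E[sqrt(chi2_m)]
c = np.sqrt(m) / A

s = np.array([rng.normal(0.0, sigma, size=n).std(ddof=1) for _ in range(200_000)])
print(np.mean(c * s), sigma)    # E[c * S_n] should be close to sigma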
Problem 17(a): CDF of the r-th Order Statistic
Let $X_1,\dots,X_n$ be i.i.d. with pdf $f_X$ and CDF $F_X$. Let $X_{(r)}$ be the $r$-th order statistic. The CDF of $X_{(r)}$ is given by:
$$F_{X_{(r)}}(x) = P(X_{(r)} \le x) = P(\text{at least } r \text{ of } X_1,\dots,X_n \text{ are } \le x)$$
$$F_{X_{(r)}}(x) = \sum_{j=r}^{n}\binom{n}{j}\,[F_X(x)]^{j}\,[1 - F_X(x)]^{n-j}$$
Problem 17(b): PDF of the r-th Order Statistic
Differentiating the CDF of $X_{(r)}$ with respect to $x$ to get the pdf:
$$f_{X_{(r)}}(x) = \frac{d}{dx}F_{X_{(r)}}(x) = \frac{n!}{(r-1)!\,(n-r)!}\,[F_X(x)]^{r-1}\,[1 - F_X(x)]^{n-r}\,f_X(x)$$
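A simulation sketch (not in the original solution): for $X_i$ i.i.d. Uniform(0,1), the derived CDF at a point equals $P(\text{Binomial}(n, F_X(x)) \ge r)$, which is compared with the empirical frequency below; $n$, $r$, and the evaluation point are arbitrary and scipy is assumed available.

import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
n, r, x0 = 10, 3, 0.4
order_r = np.sort(rng.uniform(size=(100_000, n)), axis=1)[:, r - 1]   # r-th order statistic

empirical = np.mean(order_r <= x0)
formula = stats.binom.sf(r - 1, n, x0)   # sum_{j=r}^{n} C(n,j) x0^j (1-x0)^(n-j)
print(empirical, formula)                # should agree closely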
Problem 18(a): CDF of Cauchy(0,1)
Let $Y$ have a Cauchy(0,1) distribution. The pdf is:
$$f_Y(y) = \frac{1}{\pi(1+y^2)}, \qquad -\infty < y < \infty$$
The CDF of $Y$ is:
$$F_Y(y) = \int_{-\infty}^{y}\frac{1}{\pi(1+t^2)}\,dt = \frac{1}{\pi}\Big[\arctan(t)\Big]_{-\infty}^{y}$$
$$F_Y(y) = \frac{1}{\pi}\arctan(y) - \frac{1}{\pi}\left(-\frac{\pi}{2}\right) = \frac{1}{\pi}\arctan(y) + \frac{1}{2}$$
Problem 18(b): Simulating Cauchy Random Samples
Let $U \sim$ Uniform(0, 1). We want to find a transformation $Y = g(U)$ such that $Y \sim$ Cauchy(0, 1). Using the inverse transform method, set $F_Y(y) = u$:
$$\frac{1}{\pi}\arctan(y) + \frac{1}{2} = u$$
$$\arctan(y) = \pi\left(u - \frac{1}{2}\right)$$
$$y = \tan\!\left(\pi\left(u - \frac{1}{2}\right)\right)$$
Therefore, to simulate a Cauchy(0,1) random variable, generate $U \sim$ Uniform(0, 1) and set $Y = \tan\!\left(\pi\left(U - \frac{1}{2}\right)\right)$.
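A minimal sketch of the inverse-transform recipe just derived; scipy is assumed available for the comparison test.

import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
u = rng.uniform(size=100_000)
y = np.tan(np.pi * (u - 0.5))                # Y = tan(pi * (U - 1/2))
print(stats.kstest(y, stats.cauchy().cdf))   # large p-value expected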
