
EE613: Estimation Theory

Problem Set 1

1. Let x = θ + w, where w is a random variable with PDF p_w(w). If θ is a deterministic
parameter, find the PDF of x in terms of p_w and denote it by p(x; θ). Next, assume that
θ is a random variable independent of w and find the conditional PDF p(x|θ). Finally, do
not assume that θ and w are independent and determine p(x|θ). What can you say about
p(x; θ) versus p(x|θ)?

2. The data {x[0], x[1], …, x[N − 1]} are observed, where the x[n]'s are independent and
identically distributed (IID) as N(0, σ²). We wish to estimate the variance σ² as

      σ̂² = (1/N) ∑_{n=0}^{N−1} x²[n].

Is this an unbiased estimator? Find the variance of σ̂² and examine what happens as
N → ∞.
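A quick way to check one's answer to this problem is by simulation. The sketch below (not part of the original problem set; the values σ² = 2 and N = 50 are illustrative assumptions) estimates the mean and variance of σ̂² over many trials, which can be compared against the analytical results E[σ̂²] = σ² and var(σ̂²) = 2σ⁴/N.

```python
import numpy as np

# Monte Carlo check of the estimator sigma_hat^2 = (1/N) * sum(x[n]^2).
# Assumed illustrative values: true variance sigma2 = 2.0, N = 50 samples.
rng = np.random.default_rng(0)
sigma2, N, trials = 2.0, 50, 200_000

# Each row is one realization of the data {x[0], ..., x[N-1]}.
x = rng.normal(0.0, np.sqrt(sigma2), size=(trials, N))

# sigma_hat^2 for every trial.
est = (x ** 2).mean(axis=1)

emp_mean = est.mean()  # compare with sigma2 (unbiasedness)
emp_var = est.var()    # compare with 2 * sigma2**2 / N
print(emp_mean, emp_var)
```

As N grows, the variance 2σ⁴/N shrinks to zero, so the estimate concentrates around the true σ².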

3. This problem illustrates what happens to an unbiased estimator when it undergoes a
nonlinear transformation. Consider the example discussed in class: x[n] = A + w[n],
where w[n] is zero-mean white Gaussian noise. If we choose to estimate the unknown
parameter θ = A² by

      θ̂ = ( (1/N) ∑_{n=0}^{N−1} x[n] )²,

can we say that the estimator is unbiased? What happens as N → ∞?
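The bias introduced by squaring can be made visible numerically. This sketch (the values A = 1, unit noise variance, and the two sample sizes are assumptions for illustration) estimates E[θ̂] for a small and a large N; it can be compared against the analytical value E[θ̂] = A² + σ²/N.

```python
import numpy as np

# Monte Carlo check of theta_hat = (sample mean)^2 for x[n] = A + w[n].
# Assumed illustrative values: A = 1, noise std sigma = 1.
rng = np.random.default_rng(1)
A, sigma, trials = 1.0, 1.0, 200_000

means = {}
for N in (10, 1000):
    x = A + rng.normal(0.0, sigma, size=(trials, N))
    theta_hat = x.mean(axis=1) ** 2
    # E[theta_hat] = A**2 + sigma**2 / N, so the bias is sigma**2 / N.
    means[N] = theta_hat.mean()
print(means)
```

The bias σ²/N is visible at N = 10 but essentially gone at N = 1000: the estimator is biased for any finite N, yet asymptotically unbiased as N → ∞.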

4. Given a single observation x[0] from the distribution U[0, 1/θ], it is desired to estimate θ.
It is assumed that θ > 0. Show that for an estimator θ̂ = g(x[0]) to be unbiased we must
have

      ∫_0^{1/θ} g(u) du = 1.

Next, prove that a function g cannot be found to satisfy this condition for all θ > 0.
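The failure of the condition can be illustrated numerically for any particular candidate g. The sketch below (not part of the original problem; g(u) = 2u is a hypothetical candidate chosen only for illustration) estimates E[g(x[0])] under U[0, 1/θ] for several θ; for this g the expectation works out to 1/θ, which matches θ only at θ = 1, so the candidate is biased elsewhere.

```python
import numpy as np

# Monte Carlo illustration: a single candidate g cannot be unbiased for all theta.
rng = np.random.default_rng(2)

def g(u):
    # Hypothetical candidate estimator theta_hat = g(x[0]) = 2 * x[0].
    return 2.0 * u

avg = {}
for theta in (0.5, 1.0, 2.0):
    # x[0] ~ U[0, 1/theta]
    x0 = rng.uniform(0.0, 1.0 / theta, size=500_000)
    # E[g(x[0])] = theta * integral_0^{1/theta} g(u) du = 1/theta for this g.
    avg[theta] = g(x0).mean()
print(avg)
```

E[g(x[0])] equals θ only at θ = 1, in line with the result the problem asks you to prove: no single g can satisfy the unbiasedness condition for every θ > 0.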
