The example of the preceding paragraph is a bit farfetched, but it suggests that we pay more attention to the probabilities with which estimators will take on values that are close to the parameters that they are supposed to estimate. Basing our argument on Chebyshev's theorem, when n→∞ the probability approaches 1 that the sample proportion X/n will take on a value that differs from the binomial parameter θ by less than any arbitrary constant c > 0. Also using Chebyshev's theorem, we see that when n→∞ the probability approaches 1 that the sample mean X̄ will take on a value that differs from the mean of the population sampled by less than any arbitrary constant c > 0.
In both of these examples we are practically assured that, for large n, the
estimators will take on values that are very close to the respective parameters.
Formally, this concept of “closeness” is expressed by means of the following
definition of consistency.
The statistic Θ̂ is a consistent estimator of the parameter θ if and only if for each c > 0,

$$\lim_{n\to\infty} P(|\hat{\Theta} - \theta| < c) = 1$$
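To make the definition concrete, here is a minimal simulation sketch (not part of the original text; θ = 0.3, c = 0.05, and the sample sizes are illustrative choices) that estimates P(|X/n − θ| < c) for the sample proportion and shows it approaching 1 as n grows:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, c, reps = 0.3, 0.05, 10_000      # illustrative values, not from the text

for n in [10, 100, 1_000, 10_000]:
    # X is binomial(n, theta); X/n is the sample proportion
    props = rng.binomial(n, theta, size=reps) / n
    coverage = np.mean(np.abs(props - theta) < c)   # estimates P(|X/n - theta| < c)
    print(f"n = {n:6d}:   P(|X/n - theta| < {c}) ≈ {coverage:.3f}")
```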
THEOREM 3. If Θ̂ is an unbiased estimator of the parameter θ and var(Θ̂)→0 as n→∞, then Θ̂ is a consistent estimator of θ.
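Theorem 3 is, in essence, Chebyshev's theorem applied to the estimator: for an unbiased Θ̂, P(|Θ̂ − θ| ≥ c) ≤ var(Θ̂)/c², so the probability of missing θ by more than c is forced to 0 when the variance vanishes. A brief sketch (assuming, purely for illustration, a normal population with μ = 5 and σ = 2, and c = 0.1) comparing the simulated tail probability for the sample mean with the Chebyshev bound:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, c, reps = 5.0, 2.0, 0.1, 2_000     # assumed illustrative values

for n in [50, 500, 5_000]:
    means = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)   # sample mean for each replicate
    tail = np.mean(np.abs(means - mu) >= c)                      # estimated P(|X-bar - mu| >= c)
    bound = (sigma**2 / n) / c**2                                # Chebyshev bound: var(X-bar) / c^2
    print(f"n = {n:5d}:   tail ≈ {tail:.4f}   Chebyshev bound = {bound:.4f}")
```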
EXAMPLE 8
Show that for a random sample from a normal population, the sample variance S² is a consistent estimator of σ².
Solution
Since S² is an unbiased estimator of σ², in accordance with Theorem 3 it remains to be shown that var(S²)→0 as n→∞. Referring to the theorem "the random variable (n−1)S²/σ² has a chi-square distribution with n − 1 degrees of freedom", we find that for a random sample from a normal population

$$\mathrm{var}(S^2) = \frac{2\sigma^4}{n-1}$$
It follows that var(S²)→0 as n→∞, and we have thus shown that S² is a consistent estimator of the variance of a normal population.
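As an illustrative numerical check (not from the text; σ = 3, c = 0.5, and the sample sizes are assumed values), one can simulate sample variances from a normal population and watch both var(S²) ≈ 2σ⁴/(n−1) and the probability that S² falls within c of σ² behave as the argument predicts:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, c, reps = 3.0, 0.5, 5_000              # assumed illustrative values

for n in [10, 100, 1_000]:
    samples = rng.normal(0.0, sigma, size=(reps, n))
    s2 = samples.var(axis=1, ddof=1)          # unbiased sample variance S^2 for each replicate
    var_s2 = s2.var()                         # simulated var(S^2)
    theory = 2 * sigma**4 / (n - 1)           # 2*sigma^4 / (n - 1) from the text
    close = np.mean(np.abs(s2 - sigma**2) < c)   # estimated P(|S^2 - sigma^2| < c)
    print(f"n = {n:5d}:   var(S^2) ≈ {var_s2:8.3f}   theory = {theory:8.3f}"
          f"   P(|S^2 - sigma^2| < {c}) ≈ {close:.3f}")
```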
EXAMPLE 9
With reference to Example 3, show that the smallest sample value (that is, the first
order statistic Y1 ) is a consistent estimator of the parameter δ.
Solution
Substituting into the formula for g1(y1), we find that the sampling distribution of Y1 is given by

$$g_1(y_1) = n \cdot e^{-(y_1-\delta)} \cdot \left[\int_{y_1}^{\infty} e^{-(x-\delta)}\, dx\right]^{n-1} = n \cdot e^{-n(y_1-\delta)}$$
for y1 > δ and g1(y1) = 0 elsewhere. Based on this result, it can easily be shown that E(Y1) = δ + 1/n and hence that Y1 is an asymptotically unbiased estimator of δ.
Furthermore,

$$P(|Y_1 - \delta| < c) = \int_{\delta}^{\delta + c} n \cdot e^{-n(y_1 - \delta)}\, dy_1 = 1 - e^{-nc}$$

and since this probability approaches 1 as n→∞ for any c > 0, it follows that Y1 is a consistent estimator of δ.
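For a quick numerical sanity check (again illustrative only; δ = 2 and c = 0.1 are assumed values), one can draw samples from the density f(x) = e^(−(x−δ)) for x > δ, take the smallest value, and compare the observed frequency of |Y1 − δ| < c with 1 − e^(−nc):

```python
import numpy as np

rng = np.random.default_rng(3)
delta, c, reps = 2.0, 0.1, 10_000             # assumed illustrative values

for n in [5, 20, 100]:
    # f(x) = e^{-(x - delta)} for x > delta is a standard exponential shifted by delta
    samples = delta + rng.exponential(scale=1.0, size=(reps, n))
    y1 = samples.min(axis=1)                      # first order statistic Y_1
    empirical = np.mean(np.abs(y1 - delta) < c)   # estimated P(|Y_1 - delta| < c)
    exact = 1 - np.exp(-n * c)                    # the 1 - e^{-nc} found above
    print(f"n = {n:4d}:   empirical ≈ {empirical:.3f}   1 - e^(-nc) = {exact:.3f}")
```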
$$f(x_1, x_2, \ldots, x_n \mid \hat{\theta}) = \frac{f(x_1, x_2, \ldots, x_n, \hat{\theta})}{g(\hat{\theta})} = \frac{f(x_1, x_2, \ldots, x_n)}{g(\hat{\theta})}$$