
Point Estimation

The example of the preceding paragraph is a bit farfetched, but it suggests that we pay more attention to the probabilities with which estimators will take on values that are close to the parameters that they are supposed to estimate. Basing our argument on Chebyshev's theorem, when n → ∞ the probability approaches 1 that the sample proportion X/n will take on a value that differs from the binomial parameter θ by less than any arbitrary constant c > 0. Also using Chebyshev's theorem, we see that when n → ∞ the probability approaches 1 that X̄ will take on a value that differs from the mean of the population sampled by less than any arbitrary constant c > 0.
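To make the appeal to Chebyshev's theorem explicit for the sample proportion (a worked step spelled out here, not displayed in the original passage): since E(X/n) = θ and var(X/n) = θ(1 − θ)/n, Chebyshev's inequality gives

$$P\!\left(\left|\frac{X}{n} - \theta\right| < c\right) \;\geq\; 1 - \frac{\theta(1-\theta)}{nc^2}$$

and the right-hand side tends to 1 as n → ∞ for any fixed c > 0.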
In both of these examples we are practically assured that, for large n, the
estimators will take on values that are very close to the respective parameters.
Formally, this concept of “closeness” is expressed by means of the following
definition of consistency.

DEFINITION 5. CONSISTENT ESTIMATOR. The statistic Θ̂ is a consistent estimator of the parameter θ of a given distribution if and only if for each c > 0

$$\lim_{n \to \infty} P(|\hat{\Theta} - \theta| < c) = 1$$

Note that consistency is an asymptotic property, that is, a limiting property of an estimator. Informally, Definition 5 says that when n is sufficiently large, we can be practically certain that the error made with a consistent estimator will be less than any small preassigned positive constant. The kind of convergence expressed by the limit in Definition 5 is generally called convergence in probability.
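As an illustration only (not part of the original text), a minimal simulation sketch of convergence in probability: for growing n, the fraction of repeated samples whose mean X̄ falls within c of the population mean μ approaches 1. The population choice (exponential with mean 5) and the numbers below are arbitrary assumptions.

# Minimal sketch (assumed setup, not from the text): estimate P(|X-bar - mu| < c)
# by repeated sampling and watch it approach 1 as n grows.
import numpy as np

rng = np.random.default_rng(0)
mu, c, reps = 5.0, 0.1, 2000            # population mean, tolerance c > 0, replications
for n in (10, 100, 1000, 10000):
    samples = rng.exponential(scale=mu, size=(reps, n))  # population with mean mu
    means = samples.mean(axis=1)                         # X-bar for each replication
    prob = np.mean(np.abs(means - mu) < c)               # empirical P(|X-bar - mu| < c)
    print(f"n = {n:5d}   estimated probability = {prob:.3f}")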
Based on Chebyshev's theorem, X/n is a consistent estimator of the binomial parameter θ and X̄ is a consistent estimator of the mean of a population with a finite variance. In practice, we can often judge whether an estimator is consistent by using the following sufficient condition, which, in fact, is an immediate consequence of Chebyshev's theorem.

THEOREM 3. If Θ̂ is an unbiased estimator of the parameter θ and var(Θ̂) → 0 as n → ∞, then Θ̂ is a consistent estimator of θ.
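For reference, the one-line argument behind Theorem 3, written out here (the text cites Chebyshev's theorem without displaying the inequality): when Θ̂ is unbiased, Chebyshev's inequality gives

$$P(|\hat{\Theta} - \theta| < c) \;\geq\; 1 - \frac{\mathrm{var}(\hat{\Theta})}{c^2}$$

so var(Θ̂) → 0 as n → ∞ forces this probability to 1, which is exactly the condition of Definition 5.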

EXAMPLE 8
Show that for a random sample from a normal population, the sample variance S² is a consistent estimator of σ².

Solution
Since S² is an unbiased estimator of σ², it remains to be shown, in accordance with Theorem 3, that var(S²) → 0 as n → ∞. Referring to the theorem "the random variable (n − 1)S²/σ² has a chi-square distribution with n − 1 degrees of freedom", we find that for a random sample from a normal population

$$\mathrm{var}(S^2) = \frac{2\sigma^4}{n-1}$$
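The intermediate step, written out here (it is not shown in the text): a chi-square random variable with n − 1 degrees of freedom has variance 2(n − 1), so

$$\mathrm{var}(S^2) = \left(\frac{\sigma^2}{n-1}\right)^{2} \mathrm{var}\!\left(\frac{(n-1)S^2}{\sigma^2}\right) = \left(\frac{\sigma^2}{n-1}\right)^{2} \cdot 2(n-1) = \frac{2\sigma^4}{n-1}$$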


It follows that var(S²) → 0 as n → ∞, and we have thus shown that S² is a consistent estimator of the variance of a normal population.
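Purely as an illustrative check (not part of the original example), a simulation sketch for normal samples: the simulated variance of S² tracks the closed form 2σ⁴/(n − 1), and both shrink toward 0 as n grows. The value σ = 3 and the sample sizes are arbitrary assumptions.

# Sketch (assumed setup): compare the simulated variance of S^2 with 2*sigma^4/(n - 1).
import numpy as np

rng = np.random.default_rng(2)
sigma, reps = 3.0, 20000
for n in (5, 20, 100, 500):
    x = rng.normal(loc=0.0, scale=sigma, size=(reps, n))
    s2 = x.var(axis=1, ddof=1)                   # unbiased sample variance S^2 for each replication
    print(f"n = {n:3d}   simulated var(S^2) = {s2.var():8.4f}"
          f"   2*sigma^4/(n-1) = {2 * sigma**4 / (n - 1):8.4f}")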

It is of interest to note that Theorem 3 also holds if we substitute "asymptotically unbiased" for "unbiased." This is illustrated by the following example.

EXAMPLE 9
With reference to Example 3, show that the smallest sample value (that is, the first order statistic Y₁) is a consistent estimator of the parameter δ.

Solution
Substituting into the formula for g₁(y₁), we find that the sampling distribution of Y₁ is given by

$$g_1(y_1) = n \cdot e^{-(y_1-\delta)} \cdot \left[\int_{y_1}^{\infty} e^{-(x-\delta)}\, dx\right]^{n-1} = n \cdot e^{-n(y_1-\delta)}$$

for y₁ > δ and g₁(y₁) = 0 elsewhere. Based on this result, it can easily be shown that E(Y₁) = δ + 1/n and hence that Y₁ is an asymptotically unbiased estimator of δ.
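The expectation asserted here can be checked directly from g₁(y₁) (a short verification added for clarity; substitute u = y₁ − δ and use the mean of an exponential density with parameter n):

$$E(Y_1) = \int_{\delta}^{\infty} y_1\, n e^{-n(y_1-\delta)}\, dy_1 = \delta + \int_{0}^{\infty} u\, n e^{-nu}\, du = \delta + \frac{1}{n}$$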
Furthermore,

$$P(|Y_1 - \delta| < c) = P(\delta < Y_1 < \delta + c) = \int_{\delta}^{\delta+c} n \cdot e^{-n(y_1-\delta)}\, dy_1 = 1 - e^{-nc}$$

Since lim_{n→∞} (1 − e^{−nc}) = 1, it follows from Definition 5 that Y₁ is a consistent estimator of δ.
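Purely as an illustrative check (not part of the original example), a simulation sketch comparing the empirical value of P(|Y₁ − δ| < c) with the closed form 1 − e^{−nc}; the shift δ = 2 and tolerance c = 0.05 are arbitrary assumptions.

# Sketch (assumed parameters): Y1 is the minimum of n draws from f(x) = e^{-(x - delta)}, x > delta.
import numpy as np

rng = np.random.default_rng(1)
delta, c, reps = 2.0, 0.05, 20000
for n in (10, 50, 200):
    x = delta + rng.exponential(scale=1.0, size=(reps, n))  # shifted exponential sample
    y1 = x.min(axis=1)                                       # first order statistic Y1
    empirical = np.mean(np.abs(y1 - delta) < c)
    exact = 1 - np.exp(-n * c)
    print(f"n = {n:4d}   empirical = {empirical:.3f}   1 - e^(-nc) = {exact:.3f}")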

Theorem 3 provides a sufficient condition for the consistency of an estimator. It is not a necessary condition because consistent estimators need not be unbiased, or even asymptotically unbiased. This is illustrated by Exercise 41.

5 Sufficiency

An estimator Θ̂ is said to be sufficient if it utilizes all the information in a sample relevant to the estimation of θ, that is, if all the knowledge about θ that can be gained from the individual sample values and their order can just as well be gained from the value of Θ̂ alone.
Formally, we can describe this property of an estimator by referring to the conditional probability distribution or density of the sample values given Θ̂ = θ̂, which is given by

$$f(x_1, x_2, \ldots, x_n \mid \hat{\theta}) = \frac{f(x_1, x_2, \ldots, x_n, \hat{\theta})}{g(\hat{\theta})} = \frac{f(x_1, x_2, \ldots, x_n)}{g(\hat{\theta})}$$
