
Statistical Inference

A few more names

- Point estimation: the process of making an educated guess about the specific value of an unknown parameter or parametric function.
- Estimator: the statistic used in the estimation.
- Estimate: the realization (observed value) of the estimator from a specific sample.
- Estimand: the quantity (parameter or parametric function) to be estimated.
Let us now construct the statistical framework for the estimation of the probability of free pizza for the marketing manager. Notice that all she has at hand is a random sample of size n from the Bernoulli(p) distribution, where p denotes the probability of free pizza. Let the random sample be, generically, denoted by
{X1, X2, ..., Xn}
and we need to find a point estimate of p.
A statistic to start with is the sample mean. We have seen that the sample mean is unbiased for p. Let us now look at other possibilities:
1. T1 = X1, a single observation
2. T2 = (X1 + X2)/2, the average of the first two observations
3.
4. T4 = the average of the first half of the sample
5. T5 = the sample average
One way to judge the efficiency of these estimators is to consider their variances.
To do so we first note the probability distribution of the average.
Result 1: Let X1 and X2 be two independent RVs following Bin(n1, p) and Bin(n2, p) distributions respectively. Then T = X1 + X2 follows the Bin(n1 + n2, p) distribution.
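As a quick sanity check on this additivity result (not part of the original notes, and in Python rather than the R the notes suggest, so the sketch is self-contained), a simulation can compare the empirical mean and variance of X1 + X2 with the Bin(n1 + n2, p) values. The choices n1 = 10, n2 = 15, p = 0.3 are purely illustrative.

```python
import random

random.seed(42)

def rbinom(n, p):
    """Draw one Binomial(n, p) variate as a sum of n Bernoulli(p) trials."""
    return sum(1 for _ in range(n) if random.random() < p)

n1, n2, p = 10, 15, 0.3   # illustrative values
reps = 20000
t = [rbinom(n1, p) + rbinom(n2, p) for _ in range(reps)]

# If T = X1 + X2 ~ Bin(n1 + n2, p), then
# E[T] = (n1 + n2) p = 7.5 and V(T) = (n1 + n2) p (1 - p) = 5.25.
mean_t = sum(t) / reps
var_t = sum((x - mean_t) ** 2 for x in t) / reps
print(round(mean_t, 2), round(var_t, 2))
```

The empirical mean and variance should land close to the Bin(25, 0.3) values of 7.5 and 5.25.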
Result 2: Let X1, X2, ..., Xk be independent RVs with Xi following the Bin(ni, p) distribution, i = 1, 2, ..., k. Then the sum X1 + X2 + ... + Xk follows the Bin(n1 + n2 + ... + nk, p) distribution.
Proof: HW. (Hint: use Result 1 repeatedly.)
Corollary: Let X1, X2, ..., Xn be a random sample of size n from the Bernoulli(p) distribution. Then the sum X1 + X2 + ... + Xn follows the Bin(n, p) distribution.
Proof: HW
Thus, in the current problem, all the estimators in (1-5) are unbiased (proof is HW). To compare them, one may use their variances.
Sampling Variance: The variance of an estimator over repeated samples is called its sampling variance.
Standard Error: The standard deviation of an estimator is called its standard error.
In such a comparison, one estimator is said to be more efficient than another if its variance is smaller than the other's. Relative efficiency is therefore measured by the ratio of the variances of the two competing estimators.
Here, V(T1) = p(1-p) and V(T2) = p(1-p)/2.
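The comparison can be illustrated by simulation. A Python sketch (the notes suggest R; Python is used here so the example is self-contained), assuming T1 is a single observation and T2 is the average of the first two observations, consistent with the variances stated above; p = 0.3 and n = 20 are illustrative choices.

```python
import random

random.seed(1)

p, n, reps = 0.3, 20, 20000

def variance(xs):
    """Plain (divide-by-N) variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

t1, t2, t5 = [], [], []
for _ in range(reps):
    x = [1 if random.random() < p else 0 for _ in range(n)]
    t1.append(x[0])                 # T1: a single observation
    t2.append((x[0] + x[1]) / 2)    # T2: average of the first two
    t5.append(sum(x) / n)           # T5: the sample mean

# Theory: V(T1) = p(1-p) = 0.21, V(T2) = p(1-p)/2 = 0.105,
#         V(T5) = p(1-p)/n = 0.0105 -- the sample mean is most efficient.
print(variance(t1), variance(t2), variance(t5))
```

The simulated variances should reproduce the ordering V(T5) < V(T2) < V(T1), previewing why the sample mean wins the comparison.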
a. Variances of estimators 3-5: HW.
b. Find the most efficient one from your results.
Thus, among the given alternatives, the best estimator the manager may use is the sample mean. An unbiased estimator which has the minimum variance among all unbiased estimators is called the minimum variance unbiased estimator (MVUE).
Large sample distribution of the sample proportion:
- The sample mean (here, the sample proportion) is approximately Normally distributed for large samples, by the CLT. (Simulate large sample sizes and plot the histograms using R.)
ASW pp. 294: 49, 51, 53
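A minimal simulation sketch of this CLT statement (in Python rather than R; p = 0.3 and n = 400 are illustrative): if the sample proportion is approximately N(p, p(1-p)/n), then about 95% of simulated proportions should fall within 1.96 standard errors of p.

```python
import math
import random

random.seed(7)

p, n, reps = 0.3, 400, 10000
se = math.sqrt(p * (1 - p) / n)   # theoretical SE of the sample proportion

# Each replicate: draw n Bernoulli(p) trials and record the sample proportion.
props = [sum(1 for _ in range(n) if random.random() < p) / n
         for _ in range(reps)]

# Under the Normal approximation, roughly 95% of sample proportions
# should lie within 1.96 standard errors of p.
within = sum(1 for ph in props if abs(ph - p) < 1.96 * se) / reps
print(round(within, 3))
```

A histogram of `props` (as the notes suggest plotting in R) would show the familiar bell shape centred at p.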
Let us now consider the original data that she had, i.e. the revenue amounts of 250 bills. So the sample is
R1, R2, ..., R250
which are iid copies of a random variable R, so that E[R] = μ and V(R) = σ².
Result: The sample mean is unbiased for μ, no matter what the distribution of R is.
Result: The sample variance is not unbiased for σ², no matter what the distribution of R is.
In this case an alternative criterion of estimation is consistency.
Consistency: An estimator T is said to be consistent for the parameter θ if, for every ε > 0, P[|T − θ| < ε] → 1 as the sample size tends to ∞. In effect, E[T] → θ and V(T) → 0.
One way to obtain a consistent estimator is to use the maximum likelihood estimator (MLE).
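The condition V(T) → 0 can be seen numerically. A Python sketch (illustrative p = 0.3): the Monte-Carlo variance of the sample mean shrinks roughly like p(1-p)/n as n grows, so the estimator concentrates around p, which is exactly the consistency condition above.

```python
import random

random.seed(3)

p, reps = 0.3, 5000

def var_of_mean(n):
    """Monte-Carlo variance of the sample mean of n Bernoulli(p) draws."""
    means = [sum(1 for _ in range(n) if random.random() < p) / n
             for _ in range(reps)]
    m = sum(means) / reps
    return sum((x - m) ** 2 for x in means) / reps

# Theory: V(sample mean) = p(1-p)/n, i.e. 0.021, 0.0021, 0.00021
# for n = 10, 100, 1000 -- a tenfold drop at each step.
vars_by_n = [var_of_mean(n) for n in (10, 100, 1000)]
print(vars_by_n)
```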
Problem: Assume the bill amount R ~ N(μ, 1). Write down the likelihood and find the MLE of μ.
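A sketch of the standard derivation for this problem (assuming the iid N(μ, 1) model stated above):

```latex
\begin{align*}
L(\mu) &= \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi}}\, e^{-(R_i-\mu)^2/2}, \\
\log L(\mu) &= -\frac{n}{2}\log(2\pi) - \frac{1}{2}\sum_{i=1}^{n} (R_i-\mu)^2, \\
\frac{d}{d\mu}\log L(\mu) &= \sum_{i=1}^{n} (R_i-\mu) = 0
\;\Longrightarrow\; \hat{\mu}_{\mathrm{MLE}} = \frac{1}{n}\sum_{i=1}^{n} R_i = \bar{R}.
\end{align*}
```

The second derivative of the log-likelihood is −n < 0, confirming a maximum; and since V(R̄) = 1/n → 0, the MLE here is also consistent.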
Assume the BJ sales data in R to be the data for the marketing manager. It is known that the values at the following positions were reverted:
3, 7, 10, 13, 13, 14, 17
- Find the MLE of the mean revenue.
- Find the standard error.
- State the (approximate) distribution of the MLE.
- Generate random samples of increasing sizes and find the pattern in the standard error.
- Interpret the pattern.
- What is the revenue that could be generated with 95%, 80%, 50% and 30% confidence?
- What is your recommendation for the offer? Justify with statistical reasoning.