5 Tests of Significance
Seema Jaggi
I.A.S.R.I., Library Avenue, New Delhi – 110012
[email protected]
In applied investigations, one is often interested in comparing some characteristic (such as the
mean, the variance or a measure of association between two characters) of a group with a
specified value, or in comparing two or more groups with regard to the characteristic. For
instance, one may wish to compare two varieties of wheat with regard to the mean yield per
hectare or to know if the genetic fraction of the total variation in a strain is more than a given
value or to compare different lines of a crop in respect of variation between plants within
lines. In making such comparisons one cannot rely on the mere numerical magnitudes of the
index of comparison such as the mean, variance or measure of association. This is because
each group is represented only by a sample of observations and if another sample were drawn
the numerical value would change. This variation between samples from the same population
can at best be reduced in a well-designed controlled experiment but can never be eliminated.
One is forced to draw inference in the presence of the sampling fluctuations which affect the
observed differences between groups, clouding the real differences. Statistical science
provides an objective procedure for distinguishing whether the observed difference connotes
any real difference among groups. Such a procedure is called a test of significance. The test
of significance is a method of making due allowance for the sampling fluctuation affecting the
results of experiments or observations. The fact that the results of biological experiments are
affected by a considerable amount of uncontrolled variation makes such tests necessary.
These tests enable us to decide, on the basis of the sample results, whether
i) the deviation between the observed sample statistic and the hypothetical parameter value,
or
ii) the deviation between two sample statistics,
is significant or might be attributed to chance or the fluctuations of sampling.
For applying the tests of significance, we first set up a hypothesis, i.e., a definite statement about
the population parameters. In all such situations we set up an exact hypothesis such as: the
treatments or variates in question do not differ in respect of the mean value, or the variability,
or the association between the specified characters, as the case may be, and follow an
objective procedure of analysis of the data which leads to a conclusion of one of two kinds:
i) reject the hypothesis, or
ii) do not reject the hypothesis.
Thus, if the discrepancy between the observed value t of a statistic and its expected
(hypothetical) value E(t) is greater than Zα times the standard error (S.E.) of t, the hypothesis
is rejected at the α level of significance. Similarly, if
| t − E(t) | ≤ Zα × S.E.(t),
the deviation is not regarded as significant at the α level of significance. In other words, the
deviation t − E(t) could have arisen due to fluctuations of sampling, and the data do not
provide any evidence against the null hypothesis, which may therefore be accepted at the α
level of significance.
For a two-tailed test at the 5% level, Zα = 1.96; thus if |Z| ≤ 1.96, the hypothesis H0 is
accepted at the 5% level of significance. The steps to be used in the normal test are as follows:
i) Compute the test statistic Z under H0.
ii) If |Z| > 3, H0 is always rejected.
iii) If |Z| ≤ 3, test its significance at the chosen level by comparing |Z| with the critical value Zα.
If xi (i = 1,…,n) is a random sample of size n from a normal population with mean μ and
variance σ², then the sample mean x̄ is distributed normally with mean μ and variance σ²/n,
i.e., x̄ ~ N(μ, σ²/n). The test statistic for H0: μ = μ0 is therefore
Z = (x̄ − μ0) / (σ/√n) ~ N(0, 1).
Example 1.1.1: A sample of 900 members has a mean of 3.4 cm and standard deviation
(s.d.) of 2.61 cm. Can the sample be regarded as drawn from a large population with mean 3.25 cm?
Solution:
H0 : The sample has been drawn from the population with mean μ = 3.25 cm
H1: μ ≠ 3.25 (two tailed test)
Here x̄ = 3.4 cm, n = 900, μ = 3.25 cm, σ = 2.61 cm.
Under H0, Z = (3.40 − 3.25) / (2.61/√900) = 1.73.
Since |Z| = 1.73 < 1.96, we conclude that the data do not provide any evidence against the null
hypothesis H0, which may therefore be accepted at the 5% level of significance.
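As a cross-check, the calculation in Example 1.1.1 can be reproduced with a few lines of Python. This is only an illustrative sketch; the helper name one_sample_z_test and the use of scipy.stats.norm for the critical value and p-value are my own choices, not part of these notes.

from math import sqrt
from scipy.stats import norm

def one_sample_z_test(xbar, mu0, sigma, n, alpha=0.05):
    # Two-tailed Z test for a single mean, with sigma treated as known
    z = (xbar - mu0) / (sigma / sqrt(n))
    z_crit = norm.ppf(1 - alpha / 2)          # 1.96 for alpha = 0.05
    p_value = 2 * (1 - norm.cdf(abs(z)))
    return z, z_crit, p_value

# Example 1.1.1: xbar = 3.4, mu0 = 3.25, s.d. = 2.61, n = 900
z, z_crit, p = one_sample_z_test(3.4, 3.25, 2.61, 900)
print(z, z_crit, p)                           # |Z| = 1.73 < 1.96, so H0 is not rejected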
If p is the observed proportion of successes in a random sample of size n from a population in
which the true proportion of successes is P, then E(p) = P and V(p) = PQ/n, where Q = 1 − P.
The normal test for the proportion of successes becomes
Z = (p − E(p)) / S.E.(p) = (p − P) / √(PQ/n) ~ N(0, 1).
Example 1.3.1: In a sample of 1000 people, 540 are rice eaters and the rest are wheat eaters.
Can we assume that both rice and wheat are equally popular, at the 1% level of significance?
Solution: H0: P = 0.5 (rice and wheat are equally popular), H1: P ≠ 0.5. Here p = 540/1000 = 0.54 and
Z = (0.54 − 0.50) / √(0.5 × 0.5 / 1000) = 2.53.
Since the computed |Z| = 2.53 is less than 2.58, the critical value at the 1% level of significance,
H0 is not rejected and we conclude that rice and wheat are equally popular.
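A minimal Python sketch of the same single-proportion test (the helper name proportion_z_test is hypothetical and not from these notes):

from math import sqrt
from scipy.stats import norm

def proportion_z_test(successes, n, p0, alpha=0.01):
    # Two-tailed Z test for a single proportion
    p_hat = successes / n
    se = sqrt(p0 * (1 - p0) / n)              # S.E. of p under H0
    z = (p_hat - p0) / se
    z_crit = norm.ppf(1 - alpha / 2)          # 2.58 for alpha = 0.01
    return z, z_crit

z, z_crit = proportion_z_test(540, 1000, 0.5)
print(z, z_crit)                              # |Z| = 2.53 < 2.58, so H0 is not rejected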
If p1 and p2 are the observed proportions of successes in two independent samples of sizes n1
and n2 drawn from populations with true proportions P1 and P2, then
V(p1) = P1Q1/n1 and V(p2) = P2Q2/n2.
Test Statistic:
Z = [(p1 − p2) − (P1 − P2)] / √(P1Q1/n1 + P2Q2/n2) ~ N(0, 1).
Under H0: P1 = P2 = P, i.e., there is no significant difference between the population
proportions, the statistic reduces to
Z = (p1 − p2) / √(PQ (1/n1 + 1/n2)) ~ N(0, 1),
where, in practice, P is estimated by the pooled proportion p̂ = (n1p1 + n2p2)/(n1 + n2) and Q = 1 − P.
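For illustration, a Python sketch of this pooled two-proportion test (the function name two_proportion_z_test and the example counts are purely hypothetical):

from math import sqrt
from scipy.stats import norm

def two_proportion_z_test(x1, n1, x2, n2, alpha=0.05):
    # Two-tailed Z test for the difference of two proportions, pooled under H0
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)            # pooled estimate of P
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    z_crit = norm.ppf(1 - alpha / 2)
    return z, z_crit

# Hypothetical counts: 320 successes out of 500 versus 280 out of 500
print(two_proportion_z_test(320, 500, 280, 500))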
The tables giving the values of t required for significance at various levels of probability and
for different degrees of freedom are called the t-tables; they are given in the Statistical Tables of
Fisher and Yates. The computed value is compared with the tabulated value at the 5 or 1 per cent
level of significance and (n − 1) degrees of freedom, and the null hypothesis is accepted or
rejected accordingly.
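Instead of printed tables, the same critical values can be obtained with scipy; a small sketch (the choice of α = 0.05 and 10 degrees of freedom is only illustrative):

from scipy.stats import t

alpha, df = 0.05, 10
t_crit_two_sided = t.ppf(1 - alpha / 2, df)   # two-tailed critical value (about 2.23)
t_crit_one_sided = t.ppf(1 - alpha, df)       # one-tailed critical value (about 1.81)
print(t_crit_two_sided, t_crit_one_sided)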
t = (x̄1 − x̄2 − δ0) / (s √(1/n1 + 1/n2)) ~ t_{n1+n2−2},
where s is the pooled estimate of the common standard deviation,
s² = [(n1 − 1)s1² + (n2 − 1)s2²] / (n1 + n2 − 2).
If δ0 = 0, this test reduces to a test of the equality of the two population means.
Example 2.1.2.1: A group of 5 plots treated with nitrogen at 20 kg/ha yielded 42, 39, 48, 60
and 41 kg, whereas a second group of 7 plots treated with nitrogen at 40 kg/ha yielded 38, 42,
56, 64, 68, 69 and 62 kg. Can it be concluded that nitrogen at 40 kg/ha increases the
yield significantly?
Solution: H0: μ1 = μ2, H1: μ1 < μ2
x̄1 = 46, x̄2 = 57, s² = 121.6
t = (46 − 57) / √(121.6 (1/5 + 1/7)) = −1.70 ~ t10
Since |t| < 1.81 (the value of t at the 5% level for 10 d.f.), the yields from the two doses of
nitrogen do not differ significantly.
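The same test can be run in Python with scipy (assuming scipy 1.6 or later, which added the alternative argument to ttest_ind); a sketch reproducing Example 2.1.2.1:

from scipy import stats

group1 = [42, 39, 48, 60, 41]            # 20 kg/ha nitrogen
group2 = [38, 42, 56, 64, 68, 69, 62]    # 40 kg/ha nitrogen

# Pooled two-sample t test with one-sided alternative H1: mu1 < mu2
t_stat, p_value = stats.ttest_ind(group1, group2, alternative='less')
print(t_stat, p_value)                   # t is about -1.70; p > 0.05, so H0 is not rejected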
2.1.3 Paired t-test for Difference of Means
When n1 = n2 = n and the two samples are not independent but the sample observations are
paired together, then this test is applied. Let (xi, yi), i = 1,..,n be a random sample from a
bivariate normal population with parameters (μ1, μ2, σ 12 , σ 22 , ρ) . Let di = xi - yi
H0 : μ1 - μ2 = μ0
Test statistic:
t = (d̄ − μ0) / (s/√n) ~ t_{n−1},
where d̄ = (1/n) Σ di and s² = [1/(n − 1)] Σ (di − d̄)², the sums running over i = 1,…,n.
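A short Python sketch of the paired t test; the before/after observations below are invented purely for illustration:

from scipy import stats

before = [48, 52, 55, 60, 49, 51]        # hypothetical paired observations
after = [50, 55, 54, 63, 52, 54]

# Paired t test of H0: mu1 - mu2 = 0 (i.e. mu0 = 0)
t_stat, p_value = stats.ttest_rel(before, after)
print(t_stat, p_value)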
For testing the significance of the correlation coefficient, H0: ρ = 0, the
Test Statistic is: t = r√(n − 2) / √(1 − r²) ~ t_{n−2},
where r is the sample correlation coefficient. H0 is rejected at level α if |t| > t_{n−2}(α/2). This
test can also be used for testing the significance of the rank correlation coefficient.
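A small Python illustration of this test (the sample correlation r = 0.62 and sample size n = 20 are hypothetical values):

from math import sqrt
from scipy import stats

r, n = 0.62, 20                          # hypothetical sample correlation and size
t_stat = r * sqrt(n - 2) / sqrt(1 - r ** 2)
t_crit = stats.t.ppf(0.975, n - 2)       # two-tailed critical value at alpha = 0.05
print(t_stat, t_crit)                    # reject H0: rho = 0 if |t| > t_crit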
2.2 Test of Significance Based on Chi-Square Distribution
2.2.1 Test for the Variance of a Normal Population
Let x1, x2,…,xn (n ≥ 2) be a random sample from N(μ, σ²). H0: σ² = σ0².
Test statistic: χ² = Σ [(xi − μ)/σ0]² ~ χ²_n (sum over i = 1,…,n), when μ is known
χ² = Σ [(xi − x̄)/σ0]² ~ χ²_{n−1}, when μ is not known and is estimated by x̄
Tables are available for χ2 at different levels of significance and with different degrees of
freedom.
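A Python sketch of the variance test for the case where μ is unknown (the data and the hypothesised variance σ0² = 0.02 are hypothetical):

from scipy import stats

data = [4.1, 3.8, 4.4, 4.0, 3.9, 4.3, 4.2, 3.7]       # hypothetical sample
sigma0_sq = 0.02                                      # hypothesised variance
n = len(data)
xbar = sum(data) / n
chi_sq = sum((x - xbar) ** 2 for x in data) / sigma0_sq
chi_crit = stats.chi2.ppf(0.95, n - 1)                # upper 5% point with n-1 d.f.
print(chi_sq, chi_crit)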
Test statistic: If Oi and Ei, i = 1,…,n, are respectively the observed and expected frequencies of
the ith class, then the statistic
χ² = Σ (Oi − Ei)²/Ei ~ χ²_{n−r−1} (sum over the n classes),
where r is the number of parameters estimated from the sample and n is the number of classes
after pooling. H0 is rejected at level α if the calculated χ² exceeds the tabulated χ²_{n−r−1}(α).
Example 2.2.1: In an F2 population of chillies, 831 plants with purple and 269 with non-
purple chillies were observed. Is this ratio consistent with a single factor ratio of 3:1?
Solution: On the hypothesis of a ratio of 3:1, the frequencies expected in the purple and non-
purple classes are 825 and 275 respectively.
Class          Observed frequency (Oi)   Expected frequency (Ei)   Oi − Ei
Purple                  831                       825                  6
Non-purple              269                       275                 −6
χ² = Σ (Oi − Ei)²/Ei = 6²/825 + 6²/275 = 0.17
Here χ² is based on one degree of freedom. It is seen from the table that the value of 0.17 for
χ² with 1 d.f. corresponds to a probability level lying between 0.5 and 0.7. It is concluded that
the result is non-significant, i.e., the observed frequencies are consistent with a 3:1 ratio.
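The same goodness-of-fit calculation can be done with scipy; a brief sketch (the p-value printed should fall in the 0.5 to 0.7 range quoted above):

from scipy import stats

observed = [831, 269]
expected = [825, 275]                    # 3:1 ratio applied to 1100 plants
chi_sq, p_value = stats.chisquare(observed, f_exp=expected)
print(chi_sq, p_value)                   # chi-square about 0.17 with 1 d.f.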
Class A1 A2 A3
B1 n11 n21 n31
B2 n12 n22 n32
B3 n13 n23 n33
Such a table, giving the simultaneous classification of a body of data in two different ways, is
called a contingency table. If there are r rows and c columns, the table is said to be an r × c table.
H0: the attributes are independent
H1: the attributes are not independent
Test statistic:
χ² = Σ Σ (Oij − Eij)²/Eij ~ χ²_{(r−1)(c−1)} (summation over all r × c cells),
where Oij and Eij are the observed and expected frequencies in the (i, j)th cell, Eij being
(row total × column total)/grand total. H0 is rejected at level α if χ² > χ²_{(r−1)(c−1)}(α).
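In Python, scipy.stats.chi2_contingency carries out this test directly; a sketch with a hypothetical 2 × 3 table of observed counts:

from scipy import stats

table = [[30, 45, 25],                   # hypothetical observed frequencies
         [20, 35, 45]]
chi_sq, p_value, dof, expected = stats.chi2_contingency(table)
print(chi_sq, p_value, dof)              # dof = (r-1)(c-1) = 2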
Under H0 (that the ratio of the two population variances, σ1²/σ2², equals σ0²):
F = s1² / (σ0² s2²) ~ F_{n1−1, n2−1}.
With σ0² = 1, this is the usual test of the equality of two population variances, F = s1²/s2².
Tables are available giving the values of F required for significance at different levels of
probability and for different degrees of freedom. The computed value of F is compared with
the tabulated value and the inference is drawn accordingly.
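A Python sketch of the equal-variance case (σ0² = 1) with two hypothetical samples; the critical value comes from scipy.stats.f:

from statistics import variance
from scipy.stats import f

sample1 = [23.1, 24.5, 22.8, 25.0, 23.9, 24.2]        # hypothetical data
sample2 = [22.0, 26.3, 21.5, 27.1, 23.4, 25.8, 24.9]
F = variance(sample1) / variance(sample2)             # unbiased sample variances
F_crit = f.ppf(0.95, len(sample1) - 1, len(sample2) - 1)
print(F, F_crit)                                      # compare F with the tabulated value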