R Tutorials - Independent Samples T Test
The values of the response variable indicate the number of items completed successfully on the digit span task. An examination of the distributions would be in order at this point, as the t-test assumes sampling from normal parent populations...
> plot(density(nonsmokers))   # output not shown
> plot(density(smokers))      # output not shown
The smoothed distributions appear reasonably mound-shaped, although with samples this small, it's hard to say for sure what the parent distribution looks like. Another way to examine the data graphically is with side-by-side boxplots...
> boxplot(nonsmokers,smokers,ylab="Scores on Digit Span Task",
+         names=c("nonsmokers","smokers"),
+         main="Digit Span Performance by\n Smoking Status")
Note: The "\n" inside the main title causes it to be printed on two lines. I'm not entirely sure this will work in Windows, but it should. Boxplots are becoming the standard graphical method of displaying this sort of data, although I have an issue with the fact that boxplots are median-oriented graphics, while the t-test compares means. We could also do a bar graph with error bars (which requires some finagling), and I will discuss this in a future tutorial, but the bar graph would display less information about the data. Meanwhile, the boxplots show the nonsmokers to have done better on the digit span task than the smokers. They also show an absence of outliers, which is good news.
ww2.coastal.edu//independent-t.html
6/9/2012
If you want standard errors, there is no built-in function (we will write one later), but they are easily enough calculated...
> sqrt(var(nonsmokers)/length(nonsmokers))
[1] 0.674125
> sqrt(var(smokers)/length(smokers))
[1] 0.9339284
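The tutorial promises to write such a function later; as a preview, a minimal version might look like this (the name "sem" is my own choice, not the tutorial's, and the data vectors are repeated so the snippet stands alone):

```r
# Minimal standard-error-of-the-mean function (a sketch; the name
# "sem" is my own choice, not from the tutorial).
sem <- function(x) sqrt(var(x) / length(x))

nonsmokers = c(18,22,21,17,20,17,23,20,22,21)
smokers = c(16,20,14,21,20,18,13,15,17,21)
sem(nonsmokers)   # 0.674125
sem(smokers)      # 0.9339284
```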
There is little left to do but the statistical test. The t-test can be done either by entering the names of the two groups (or two numerical vectors) into the t.test( ) function, or by using a formula interface if the data are in a data frame (below)...
> t.test(nonsmokers,smokers)

        Welch Two Sample t-test

data:  nonsmokers and smokers
t = 2.2573, df = 16.376, p-value = 0.03798
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 0.1628205 5.0371795
sample estimates:
mean of x mean of y
     20.1      17.5
By default the two-tailed test is done, and the Welch correction for nonhomogeneity of variance is applied. Both of these options can easily enough be changed...
> t.test(nonsmokers,smokers,alternative="greater",var.equal=T)

        Two Sample t-test

data:  nonsmokers and smokers
t = 2.2573, df = 18, p-value = 0.01833
alternative hypothesis: true difference in means is greater than 0
95 percent confidence interval:
 0.6026879       Inf
sample estimates:
mean of x mean of y
     20.1      17.5
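As a sanity check (not shown in the tutorial), the one-tailed p-value is exactly half the two-tailed pooled-variance p-value, since the t distribution is symmetric:

```r
# Data vectors repeated so the snippet stands alone.
nonsmokers = c(18,22,21,17,20,17,23,20,22,21)
smokers = c(16,20,14,21,20,18,13,15,17,21)
p.two = t.test(nonsmokers, smokers, var.equal=TRUE)$p.value
p.one = t.test(nonsmokers, smokers, alternative="greater",
               var.equal=TRUE)$p.value
p.two / p.one   # = 2
```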
The output also includes the sample means and a 95% confidence interval for the difference between the means (change the level with "conf.level="). Note that the t.test( ) function always computes first group minus second group. Since the "nonsmokers" were entered into the function first, and our hypothesis was that they would score higher on the digit span task, the alternative hypothesis becomes "difference between means is greater than zero." The null hypothesized difference can also be changed by setting the "mu=" option.

The Formula Interface

I hope you haven't erased those vectors, because now we are going to make a data frame from them...
> scores = c(nonsmokers,smokers)
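The commands that built the grouping variable and the data frame appear to have been lost from the page here. Something like the following (my reconstruction, with the vectors repeated so the snippet stands alone and names matching the printout below) would produce them:

```r
# Reconstruction of the apparently lost lines: build the grouping
# variable and the data frame (names match the printout that follows).
nonsmokers = c(18,22,21,17,20,17,23,20,22,21)   # values from the printout
smokers = c(16,20,14,21,20,18,13,15,17,21)
scores = c(nonsmokers,smokers)
groups = rep(c("nonsmokers","smokers"), each=10)
mj.data = data.frame(groups, scores)
```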
> mj.data
       groups scores
1  nonsmokers     18
2  nonsmokers     22
3  nonsmokers     21
4  nonsmokers     17
5  nonsmokers     20
6  nonsmokers     17
7  nonsmokers     23
8  nonsmokers     20
9  nonsmokers     22
10 nonsmokers     21
11    smokers     16
12    smokers     20
13    smokers     14
14    smokers     21
15    smokers     20
16    smokers     18
17    smokers     13
18    smokers     15
19    smokers     17
20    smokers     21
> rm(scores,groups)
> attach(mj.data)
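The boxplot command the next sentence refers to seems to have dropped out of the page; it would presumably have been something like this (my reconstruction, with the data frame rebuilt so the snippet stands alone):

```r
# Rebuild the data frame so this snippet runs on its own.
nonsmokers = c(18,22,21,17,20,17,23,20,22,21)
smokers = c(16,20,14,21,20,18,13,15,17,21)
mj.data = data.frame(groups=rep(c("nonsmokers","smokers"), each=10),
                     scores=c(nonsmokers,smokers))
# Presumably the lost command: side-by-side boxplots via the
# formula interface, read as "scores by groups".
boxplot(scores ~ groups, data=mj.data, ylab="Scores on Digit Span Task")
```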
Read the formula as "scores by groups". Notice you get group labels on the x-axis this way, too. Finally, the t-test...
> t.test(scores ~ groups)

        Welch Two Sample t-test

data:  scores by groups
t = 2.2573, df = 16.376, p-value = 0.03798
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 0.1628205 5.0371795
sample estimates:
mean in group nonsmokers    mean in group smokers
                    20.1                     17.5
If you don't care for attaching data frames (and I usually try to avoid it), the t.test( ) function also has a "data=" option...
> detach(mj.data)
> t.test(scores ~ groups, data=mj.data)   ### output not shown
Note: I am told there is an easier way to create a data frame from individual vectors representing group-by-group data, but I haven't yet been told what it is. If I find out, I'll pass it on!

Textbook Problems
Textbook problems often supply only summary statistics rather than raw data. Suppose we were given just the group means and standard deviations (these match the smoking data above):

              mean    sd
  nonsmokers  20.1    2.13
  smokers     17.5    2.95
> mean.diff = 17.5 - 20.1
> df = 10 + 10 - 2
> pooled.var = (2.95^2 * 9 + 2.13^2 * 9) / df
> se.diff = sqrt(pooled.var/10 + pooled.var/10)
> t.obt = mean.diff / se.diff
> t.obt
[1] -2.259640
> p.value = 2*pt(t.obt,df=df)   # two-tailed
> p.value
[1] 0.03648139
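If these calculations came up often, one way to package them might be the following (a sketch; the function name "t.test.summary" and its interface are my own invention, not from the tutorial):

```r
# Sketch of a pooled-variance t-test from summary statistics.
# The name "t.test.summary" and its arguments are my invention.
t.test.summary <- function(m1, s1, n1, m2, s2, n2) {
  df <- n1 + n2 - 2
  pooled.var <- ((n1 - 1)*s1^2 + (n2 - 1)*s2^2) / df
  se.diff <- sqrt(pooled.var/n1 + pooled.var/n2)
  t.obt <- (m1 - m2) / se.diff
  p.value <- 2 * pt(-abs(t.obt), df=df)   # two-tailed; works for either sign of t
  c(t=t.obt, df=df, p.value=p.value)
}
t.test.summary(17.5, 2.95, 10, 20.1, 2.13, 10)
```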
A custom function could be written if these calculations had to be done repeatedly, but there is a certain educational value to be had from doing them by hand!

Power

The power.t.test( ) function has the following syntax (from the help page for this function)...
power.t.test(n = NULL, delta = NULL, sd = 1, sig.level = 0.05, power = NULL, type = c("two.sample", "one.sample", "paired"), alternative = c("two.sided", "one.sided"), strict = FALSE)
Exactly one of the options n, delta, sd, sig.level, and power should be passed as NULL, and R will then calculate it from the remaining values. Suppose we wanted a power of 85% for the Scott Keats experiment, and were anticipating a mean difference of 2.5 and a pooled standard deviation of 2.6. How many subjects should be used per group if a two-tailed test is planned?
> power.t.test(delta=2.5, sd=2.6, sig.level=.05, power=.85,
+              type="two.sample", alternative="two.sided")

     Two-sample t test power calculation

              n = 20.43001
          delta = 2.5
             sd = 2.6
      sig.level = 0.05
          power = 0.85
    alternative = two.sided
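Since fractional subjects aren't possible, the usual practice (not stated in the tutorial) is to round n up:

```r
# Round the calculated n up to the next whole subject per group.
ceiling(20.43001)   # 21 per group
```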
So there you go!

Homogeneity of Variance

The standard, textbook, pooled-variance t-test assumes homogeneity of variance. The easiest way to deal with nonhomogeneity of variance is to allow R to do what it does by default anyway--run the t-test using the Welch correction for nonhomogeneity. Further discussion of nonhomogeneity of variance will occur in the ANOVA tutorial.

Alternatives to the t Test
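The tutorial doesn't show a formal test of the homogeneity assumption, but base R's var.test( ) function performs an F test of equal variances on two samples (data vectors repeated here so the snippet stands alone):

```r
# F test of equal variances; not shown in the tutorial, but a
# standard base-R way to check the homogeneity assumption.
nonsmokers = c(18,22,21,17,20,17,23,20,22,21)
smokers = c(16,20,14,21,20,18,13,15,17,21)
var.test(nonsmokers, smokers)   # tests the ratio of the two variances
```

A nonsignificant result is consistent with pooling, though such preliminary tests are themselves somewhat controversial.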
The nonparametric alternative to the independent samples t-test is what we old timers were taught to call the Mann-Whitney U test. For some reason, this name is now out of fashion, and the test goes by the name Wilcoxon-Mann-Whitney test, or Wilcoxon rank sum test, or some variant thereof. The syntax is very similar, and it will work with or without the formula interface...
> wilcox.test(nonsmokers,smokers)

        Wilcoxon rank sum test with continuity correction

data:  nonsmokers and smokers
W = 76.5, p-value = 0.04715
alternative hypothesis: true location shift is not equal to 0

Warning message:
In wilcox.test.default(nonsmokers, smokers) :
  cannot compute exact p-value with ties

> ### and now the formula interface...
> wilcox.test(scores ~ groups, data=mj.data)

        Wilcoxon rank sum test with continuity correction

data:  scores by groups
W = 76.5, p-value = 0.04715
alternative hypothesis: true location shift is not equal to 0

Warning message:
In wilcox.test.default(x = c(18, 22, 21, 17, 20, 17, 23, 20, 22,  :
  cannot compute exact p-value with ties
The exact test assumes continuous variables without ties, and neither assumption is met here, hence the warnings. And no, I don't know why the warning message about that takes two different forms.