Correlation & Regression

Product Moment Correlation
From a sample of n observations on X and Y, the product moment correlation, r, can be calculated as:

$$r = \frac{\sum_{i=1}^{n}(X_i - \bar{X})(Y_i - \bar{Y})}{\sqrt{\sum_{i=1}^{n}(X_i - \bar{X})^2 \sum_{i=1}^{n}(Y_i - \bar{Y})^2}}$$
Division of the numerator and denominator by (n - 1) gives:

$$r = \frac{\sum_{i=1}^{n}(X_i - \bar{X})(Y_i - \bar{Y})/(n-1)}{\sqrt{\frac{\sum_{i=1}^{n}(X_i - \bar{X})^2}{n-1}}\sqrt{\frac{\sum_{i=1}^{n}(Y_i - \bar{Y})^2}{n-1}}} = \frac{COV_{xy}}{S_x S_y}$$
Explaining Attitude Toward the City of Residence (Table 17.1)

Respondent No.   Attitude Toward   Duration of   Importance Attached
                 the City          Residence     to Weather
      1                 6              10                3
      2                 9              12               11
      3                 8              12                4
      4                 3               4                1
      5                10              12               11
      6                 4               6                1
      7                 5               8                7
      8                 2               2                4
      9                11              18                8
     10                 9               9               10
     11                10              17                8
     12                 2               2                5
Product Moment Correlation

The correlation coefficient may be calculated as follows:

$$\bar{X} = (10 + 12 + 12 + 4 + 12 + 6 + 8 + 2 + 18 + 9 + 17 + 2)/12 = 9.333$$

$$\bar{Y} = (6 + 9 + 8 + 3 + 10 + 4 + 5 + 2 + 11 + 9 + 10 + 2)/12 = 6.583$$
$$\sum_{i=1}^{n}(X_i - \bar{X})(Y_i - \bar{Y}) = (10-9.33)(6-6.58) + (12-9.33)(9-6.58) + (12-9.33)(8-6.58) + (4-9.33)(3-6.58) + (12-9.33)(10-6.58) + (6-9.33)(4-6.58) + (8-9.33)(5-6.58) + (2-9.33)(2-6.58) + (18-9.33)(11-6.58) + (9-9.33)(9-6.58) + (17-9.33)(10-6.58) + (2-9.33)(2-6.58)$$

= -0.3886 + 6.4614 + 3.7914 + 19.0814 + 9.1314 + 8.5914 + 2.1014 + 33.5714 + 38.3214 - 0.7986 + 26.2314 + 33.5714

= 179.6668
$$\sum_{i=1}^{n}(X_i - \bar{X})^2 = (10-9.33)^2 + (12-9.33)^2 + (12-9.33)^2 + (4-9.33)^2 + (12-9.33)^2 + (6-9.33)^2 + (8-9.33)^2 + (2-9.33)^2 + (18-9.33)^2 + (9-9.33)^2 + (17-9.33)^2 + (2-9.33)^2$$

= 0.4489 + 7.1289 + 7.1289 + 28.4089 + 7.1289 + 11.0889 + 1.7689 + 53.7289 + 75.1689 + 0.1089 + 58.8289 + 53.7289

= 304.6668
$$\sum_{i=1}^{n}(Y_i - \bar{Y})^2 = (6-6.58)^2 + (9-6.58)^2 + (8-6.58)^2 + (3-6.58)^2 + (10-6.58)^2 + (4-6.58)^2 + (5-6.58)^2 + (2-6.58)^2 + (11-6.58)^2 + (9-6.58)^2 + (10-6.58)^2 + (2-6.58)^2$$

= 0.3364 + 5.8564 + 2.0164 + 12.8164 + 11.6964 + 6.6564 + 2.4964 + 20.9764 + 19.5364 + 5.8564 + 11.6964 + 20.9764

= 120.9168

Thus,

$$r = \frac{179.6668}{\sqrt{(304.6668)(120.9168)}} = 0.9361$$
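The hand calculation above can be reproduced with a few lines of Python. This is a minimal sketch assuming NumPy is available; the variable names are illustrative rather than taken from the deck:

```python
import numpy as np

# Table 17.1 data
attitude = np.array([6, 9, 8, 3, 10, 4, 5, 2, 11, 9, 10, 2])     # Y: attitude toward the city
duration = np.array([10, 12, 12, 4, 12, 6, 8, 2, 18, 9, 17, 2])  # X: duration of residence

x_dev = duration - duration.mean()   # (X_i - X-bar)
y_dev = attitude - attitude.mean()   # (Y_i - Y-bar)

r = (x_dev * y_dev).sum() / np.sqrt((x_dev ** 2).sum() * (y_dev ** 2).sum())
print(round(r, 4))   # about 0.9361, matching the hand calculation
```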
Decomposition of the Total Variation

$$r^2 = \frac{\text{Explained variation}}{\text{Total variation}} = \frac{SS_x}{SS_y} = \frac{\text{Total variation} - \text{Error variation}}{\text{Total variation}} = \frac{SS_y - SS_{error}}{SS_y}$$
• When it is computed for a population rather than a sample, the product moment correlation is denoted by ρ, the Greek letter rho. The coefficient r is an estimator of ρ.
Decomposition of the Total Variation

The test statistic is:

$$t = r\left[\frac{n-2}{1-r^2}\right]^{1/2}$$

which has a t distribution with n - 2 degrees of freedom. For the correlation coefficient calculated based on the data given in Table 17.1,

$$t = 0.9361\left[\frac{12-2}{1-(0.9361)^2}\right]^{1/2} = 8.414$$

and the degrees of freedom = 12 - 2 = 10. From the t distribution table (Table 4 in the Statistical Appendix), the critical value of t for a two-tailed test and α = 0.05 is 2.228. Hence, the null hypothesis of no relationship between X and Y is rejected.
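The same test is easy to check in Python; the sketch below (illustrative, not part of the original deck) reproduces the t statistic and the two-tailed critical value with SciPy:

```python
import numpy as np
from scipy import stats

r, n = 0.9361, 12
t_stat = r * np.sqrt((n - 2) / (1 - r ** 2))   # t = r[(n-2)/(1-r^2)]^(1/2)
t_crit = stats.t.ppf(1 - 0.05 / 2, df=n - 2)   # two-tailed critical value at alpha = 0.05

print(round(t_stat, 3), round(t_crit, 3))      # about 8.42 (8.414 with unrounded r) and 2.228
# |t| exceeds the critical value, so H0 (no relationship) is rejected.
```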
Partial Correlation
A partial correlation coefficient measures the
association between two variables after controlling for,
or adjusting for, the effects of one or more additional
variables.
$$r_{xy.z} = \frac{r_{xy} - (r_{xz})(r_{yz})}{\sqrt{1 - r_{xz}^2}\sqrt{1 - r_{yz}^2}}$$
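Applied to the Table 17.1 data, with the importance attached to weather as the control variable, the formula can be evaluated directly. A minimal Python sketch (function and variable names are illustrative):

```python
import numpy as np

def pearson_r(a, b):
    """Plain product moment correlation between two arrays."""
    a_dev, b_dev = a - a.mean(), b - b.mean()
    return (a_dev * b_dev).sum() / np.sqrt((a_dev ** 2).sum() * (b_dev ** 2).sum())

# Table 17.1: x = duration of residence, y = attitude, z = importance attached to weather
x = np.array([10, 12, 12, 4, 12, 6, 8, 2, 18, 9, 17, 2])
y = np.array([6, 9, 8, 3, 10, 4, 5, 2, 11, 9, 10, 2])
z = np.array([3, 11, 4, 1, 11, 1, 7, 4, 8, 10, 8, 5])

r_xy, r_xz, r_yz = pearson_r(x, y), pearson_r(x, z), pearson_r(y, z)
r_xy_z = (r_xy - r_xz * r_yz) / (np.sqrt(1 - r_xz ** 2) * np.sqrt(1 - r_yz ** 2))
print(round(r_xy_z, 4))   # roughly 0.94: controlling for weather barely changes the association
```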
Part Correlation Coefficient
The part correlation coefficient represents the
correlation between Y and X when the linear effects of
the other independent variables have been removed
from X but not from Y. The part correlation coefficient, r_y(x.z), is calculated as follows:

$$r_{y(x.z)} = \frac{r_{xy} - r_{yz} r_{xz}}{\sqrt{1 - r_{xz}^2}}$$

The partial correlation coefficient is generally viewed as more important than the part correlation coefficient.
Nonmetric Correlation

• If the nonmetric variables are ordinal and numeric, Spearman's rho, ρ_s, and Kendall's tau, τ, are two measures of nonmetric correlation that can be used to examine the correlation between them (see the sketch below).
• Both these measures use rankings rather than the absolute values of the variables, and the basic concepts underlying them are quite similar. Both vary from -1.0 to +1.0.
• In the absence of ties, Spearman's ρ_s yields a closer approximation to the Pearson product moment correlation coefficient, ρ, than Kendall's τ. In these cases, the absolute magnitude of τ tends to be smaller than Pearson's ρ.
• On the other hand, when the data contain a large number of tied ranks, Kendall's τ seems more appropriate.
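Both coefficients are available in SciPy. A minimal sketch, using made-up ordinal rankings purely for illustration:

```python
import numpy as np
from scipy import stats

# Illustrative ordinal data (not from the deck): two judges ranking the same 8 objects
judge_1 = np.array([1, 2, 3, 4, 5, 6, 7, 8])
judge_2 = np.array([2, 1, 4, 3, 6, 5, 8, 7])

rho_s, p_rho = stats.spearmanr(judge_1, judge_2)   # Spearman's rho and its p-value
tau, p_tau = stats.kendalltau(judge_1, judge_2)    # Kendall's tau and its p-value

print(round(rho_s, 3), round(tau, 3))   # here about 0.905 and 0.714; |tau| < |rho_s| with no ties
```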
Regression Analysis
Regression analysis examines associative relationships
between a metric dependent variable and one or more
independent variables in the following ways:
• Determine whether the independent variables explain a significant
variation in the dependent variable: whether a relationship exists.
• Determine how much of the variation in the dependent variable can
be explained by the independent variables: strength of the
relationship.
• Determine the structure or form of the relationship: the
mathematical equation relating the independent and dependent
variables.
• Predict the values of the dependent variable.
• Control for other independent variables when evaluating the
contributions of a specific variable or set of variables.
• Regression analysis is concerned with the nature and degree of
association between variables and does not imply or assume any
causality.
Statistics Associated with Bivariate Regression Analysis

• Bivariate regression model. The basic regression equation is Y_i = β_0 + β_1 X_i + e_i, where Y = dependent or criterion variable, X = independent or predictor variable, β_0 = intercept of the line, β_1 = slope of the line, and e_i is the error term associated with the i-th observation.
Statistics Associated with Bivariate
Regression Analysis
• Standardized regression coefficient. Also termed the
beta coefficient or beta weight, this is the slope
obtained by the regression of Y on X when the data are
standardized.
Conducting Bivariate Regression Analysis (Fig. 17.2)

Plot the Scatter Diagram

The bivariate regression model takes the form

$$Y_i = \beta_0 + \beta_1 X_i + e_i$$

where
Y = dependent or criterion variable
X = independent or predictor variable
β_0 = intercept of the line
β_1 = slope of the line

[Scatter diagram: Attitude Toward the City plotted against Duration of Residence.]
Which Straight Line Is Best?

[Figure: several candidate lines (Line 1 through Line 4) drawn through the scatter of points. The bivariate regression procedure chooses the line Ŷ = β_0 + β_1X that minimizes the squared vertical errors e_j between the observed values Y_j and the values predicted by the line, evaluated at X_1, X_2, ..., X_5.]
Conducting Bivariate Regression Analysis

Estimate the Parameters

In most cases, β_0 and β_1 are unknown and are estimated from the sample observations using the equation

$$\hat{Y}_i = a + b x_i$$

where Ŷ_i is the estimated or predicted value of Y_i, and a and b are the estimators of β_0 and β_1, respectively.

$$b = \frac{COV_{xy}}{S_x^2} = \frac{\sum_{i=1}^{n}(X_i - \bar{X})(Y_i - \bar{Y})}{\sum_{i=1}^{n}(X_i - \bar{X})^2} = \frac{\sum_{i=1}^{n}X_i Y_i - n\bar{X}\bar{Y}}{\sum_{i=1}^{n}X_i^2 - n\bar{X}^2}$$
The intercept, a, may then be calculated using:

$$a = \bar{Y} - b\bar{X}$$

For the data in Table 17.1, the estimation of parameters may be illustrated as follows:

$$\sum_{i=1}^{12}X_i Y_i = (10)(6) + (12)(9) + (12)(8) + (4)(3) + (12)(10) + (6)(4) + (8)(5) + (2)(2) + (18)(11) + (9)(9) + (17)(10) + (2)(2) = 917$$

$$\sum_{i=1}^{12}X_i^2 = 10^2 + 12^2 + 12^2 + 4^2 + 12^2 + 6^2 + 8^2 + 2^2 + 18^2 + 9^2 + 17^2 + 2^2 = 1350$$
It may be recalled from the earlier calculations of the simple correlation that X̄ = 9.333 and Ȳ = 6.583. Hence,

$$b = \frac{917 - (12)(9.333)(6.583)}{1350 - (12)(9.333)^2} = \frac{179.6668}{304.6668} = 0.5897$$

$$a = \bar{Y} - b\bar{X} = 6.583 - (0.5897)(9.333) = 1.0793$$
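These least squares estimates can be verified with a short Python sketch (illustrative variable names, not from the deck):

```python
import numpy as np

x = np.array([10, 12, 12, 4, 12, 6, 8, 2, 18, 9, 17, 2])   # duration of residence
y = np.array([6, 9, 8, 3, 10, 4, 5, 2, 11, 9, 10, 2])      # attitude toward the city

b = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean()) ** 2).sum()   # slope = COV_xy / S_x^2
a = y.mean() - b * x.mean()                                                 # intercept = Y-bar - b*X-bar

print(round(b, 4), round(a, 4))   # about 0.5897 and 1.0793
```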
Conducting Bivariate Regression Analysis

Estimate the Standardized Regression Coefficient

When the data are standardized to a mean of 0 and a variance of 1, the intercept assumes a value of 0 and the standardized slope (the beta coefficient) is related to the non-standardized slope by B_yx = b_yx (S_x / S_y). The strength of association is then assessed by decomposing the total variation in Y, where

$$SS_y = \sum_{i=1}^{n}(Y_i - \bar{Y})^2 \quad \text{(total variation)}$$

$$SS_{reg} = \sum_{i=1}^{n}(\hat{Y}_i - \bar{Y})^2 \quad \text{(explained variation)}$$

$$SS_{res} = \sum_{i=1}^{n}(Y_i - \hat{Y}_i)^2 \quad \text{(residual variation)}$$
Decomposition of the Total Variation in Bivariate Regression (Fig. 17.6)

[Figure: the total variation SS_y in Y is decomposed into the explained variation SS_reg, captured by the fitted regression line, and the residual variation SS_res, plotted against X_1, X_2, ..., X_5.]
Conducting Bivariate Regression Analysis

Determine the Strength and Significance of Association

$$r^2 = \frac{SS_y - SS_{res}}{SS_y}$$

To illustrate the calculation of r², let us consider again the effect of duration of residence on attitude toward the city. It may be recalled from the earlier calculation of the simple correlation coefficient that:

$$SS_y = \sum_{i=1}^{n}(Y_i - \bar{Y})^2 = 120.9168$$
The predicted values can be calculated from the estimated regression equation, Ŷ_i = 1.0793 + 0.5897 X_i. For the first observation, X = 10 and Ŷ = 1.0793 + 0.5897(10) = 6.9763; for the second, X = 12 and Ŷ = 1.0793 + 0.5897(12) = 8.1557, and so on.
Therefore,

$$SS_{reg} = \sum_{i=1}^{n}(\hat{Y}_i - \bar{Y})^2$$

= (6.9763-6.5833)² + (8.1557-6.5833)² + (8.1557-6.5833)² + (3.4381-6.5833)² + (8.1557-6.5833)² + (4.6175-6.5833)² + (5.7969-6.5833)² + (2.2587-6.5833)² + (11.6939-6.5833)² + (6.3866-6.5833)² + (11.1042-6.5833)² + (2.2587-6.5833)²

= 0.1544 + 2.4724 + 2.4724 + 9.8922 + 2.4724 + 3.8643 + 0.6184 + 18.7021 + 26.1182 + 0.0387 + 20.4385 + 18.7021

= 105.9524
$$SS_{res} = \sum_{i=1}^{n}(Y_i - \hat{Y}_i)^2 = (6-6.9763)^2 + (9-8.1557)^2 + (8-8.1557)^2 + \dots = 14.9644$$

$$r^2 = \frac{SS_{reg}}{SS_y} = \frac{105.9524}{120.9168} = 0.8762$$
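These sums of squares are easy to reproduce programmatically. A minimal sketch, continuing with the Table 17.1 data (names illustrative):

```python
import numpy as np

x = np.array([10, 12, 12, 4, 12, 6, 8, 2, 18, 9, 17, 2])   # duration of residence
y = np.array([6, 9, 8, 3, 10, 4, 5, 2, 11, 9, 10, 2])      # attitude toward the city

a, b = 1.0793, 0.5897          # intercept and slope estimated earlier
y_hat = a + b * x              # predicted values

ss_y = ((y - y.mean()) ** 2).sum()        # total variation
ss_reg = ((y_hat - y.mean()) ** 2).sum()  # explained variation
ss_res = ((y - y_hat) ** 2).sum()         # residual variation

print(round(ss_y, 4), round(ss_reg, 4), round(ss_res, 4))  # about 120.92, 105.95, 14.96
print(round(ss_reg / ss_y, 4))                             # r^2, about 0.876
```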
Conducting Bivariate Regression Analysis

Determine the Strength and Significance of Association

The significance of the linear relationship between X and Y may be tested by examining the hypotheses

H_0: β_1 = 0  versus  H_1: β_1 ≠ 0

or, equivalently,

H_0: ρ = 0  versus  H_1: ρ ≠ 0

which amounts to testing H_0: R²_pop = 0. The appropriate test statistic is the F statistic:

$$F = \frac{SS_{reg}}{SS_{res}/(n-2)}$$

which has an F distribution with 1 and (n - 2) degrees of freedom.
From Table 17.2, it can be seen that:

$$r^2 = \frac{105.9522}{105.9522 + 14.9644} = 0.8762$$

which is the same as the value calculated earlier. The value of the F statistic is:

$$F = \frac{105.9522}{14.9644/10} = 70.8027$$

with 1 and 10 degrees of freedom. The corresponding output (Table 17.2) includes:

Multiple R        0.93608
R²                0.87624
Adjusted R²       0.86387
Standard Error    1.22329

ANALYSIS OF VARIANCE
             df    Sum of Squares    Mean Square
Regression    1        105.9522        105.9522
Residual     10         14.9644          1.4964
F = 70.8027

The standard error of estimate, SEE, is

$$SEE = \sqrt{\frac{\sum_{i=1}^{n}(Y_i - \hat{Y}_i)^2}{n-2}} = \sqrt{\frac{SS_{res}}{n-2}}$$

or, more generally, with k independent variables,

$$SEE = \sqrt{\frac{SS_{res}}{n-k-1}}$$

For the data given in Table 17.2, the SEE is estimated as follows:

$$SEE = \sqrt{\frac{14.9644}{12-2}} = 1.22329$$
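The F test and the SEE can be checked the same way; this sketch (names illustrative) uses SciPy only for the critical value:

```python
import numpy as np
from scipy import stats

ss_reg, ss_res, n = 105.9522, 14.9644, 12

f_stat = ss_reg / (ss_res / (n - 2))           # F = SS_reg / (SS_res / (n - 2))
f_crit = stats.f.ppf(0.95, dfn=1, dfd=n - 2)   # critical F at alpha = 0.05 with 1 and 10 df
see = np.sqrt(ss_res / (n - 2))                # standard error of estimate

print(round(f_stat, 2), round(f_crit, 2), round(see, 5))   # about 70.80, 4.96, 1.22329
```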
Assumptions

• The error term is normally distributed. For each fixed value of X, the distribution of Y is normal.
• The mean of the error term is 0, and its variance is constant; it does not depend on the values assumed by X.
• The error terms are uncorrelated; in other words, the observations have been drawn independently.
• The relationship between X and the mean of Y is linear.
Multiple Regression

The general form of the multiple regression model is as follows:

$$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \beta_3 X_3 + \dots + \beta_k X_k + e$$

which is estimated by the following equation:

$$\hat{Y} = a + b_1 X_1 + b_2 X_2 + b_3 X_3 + \dots + b_k X_k$$
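A multiple regression can be fitted with ordinary least squares directly. A minimal sketch for the Table 17.1 data, with attitude explained by duration of residence and importance attached to weather (NumPy only; names illustrative):

```python
import numpy as np

duration = np.array([10, 12, 12, 4, 12, 6, 8, 2, 18, 9, 17, 2])
weather = np.array([3, 11, 4, 1, 11, 1, 7, 4, 8, 10, 8, 5])
attitude = np.array([6, 9, 8, 3, 10, 4, 5, 2, 11, 9, 10, 2])

X = np.column_stack([np.ones_like(duration), duration, weather])   # design matrix with intercept column
coef, *_ = np.linalg.lstsq(X, attitude, rcond=None)                # least squares estimates a, b1, b2

y_hat = X @ coef
r2 = 1 - ((attitude - y_hat) ** 2).sum() / ((attitude - attitude.mean()) ** 2).sum()
print(np.round(coef, 4), round(r2, 4))   # R^2 comes out near the 0.945 reported in the output below
```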
Statistics Associated with Multiple Regression

• Adjusted R². R², the coefficient of multiple determination, is adjusted for the number of independent variables and the sample size to account for diminishing returns. After the first few variables, additional independent variables do not contribute much. It may be computed as adjusted R² = R² - k(1 - R²)/(n - k - 1).
• Coefficient of multiple determination. The strength of association in multiple regression is measured by the square of the multiple correlation coefficient, R², which is also called the coefficient of multiple determination.
• F test. The F test is used to test the null hypothesis that the coefficient of multiple determination in the population, R²_pop, is zero. This is equivalent to testing the null hypothesis H_0: β_1 = β_2 = ... = β_k = 0. The test statistic has an F distribution with k and (n - k - 1) degrees of freedom.
Statistics Associated with Multiple Regression

• Partial F test. The significance of a partial regression coefficient, β_i, of X_i may be tested using an incremental F statistic. The incremental F statistic is based on the increment in the explained sum of squares resulting from the addition of the independent variable X_i to the regression equation after all the other independent variables have been included.
Conducting Multiple Regression Analysis

Partial Regression Coefficients

To understand the meaning of a partial regression coefficient, let us consider a case in which there are two independent variables, so that:

$$\hat{Y} = a + b_1 X_1 + b_2 X_2$$
• Suppose one were to remove the effect of X_2 from X_1. This could be done by running a regression of X_1 on X_2. In other words, one would estimate the equation X̂_1 = a + b X_2 and calculate the residual X_r = (X_1 - X̂_1). The partial regression coefficient, b_1, is then equal to the bivariate regression coefficient, b_r, obtained from the equation Ŷ = a + b_r X_r.
• Extension to the case of k variables is straightforward. The partial regression
coefficient, b1, represents the expected change in Y when X1 is changed by one
unit and X2 through Xk are held constant. It can also be interpreted as the
bivariate regression coefficient, b, for the regression of Y on the residuals of X1,
when the effect of X2 through Xk has been removed from X1.
• The relationship of the standardized to the non-standardized coefficients remains the same as before:

B_1 = b_1 (S_x1 / S_y), ..., B_k = b_k (S_xk / S_y)
Conducting Multiple Regression Analysis

Strength of Association

$$R^2 = \frac{SS_{reg}}{SS_y}$$

where

$$SS_y = \sum_{i=1}^{n}(Y_i - \bar{Y})^2, \qquad SS_{reg} = \sum_{i=1}^{n}(\hat{Y}_i - \bar{Y})^2, \qquad SS_{res} = \sum_{i=1}^{n}(Y_i - \hat{Y}_i)^2$$

For the Table 17.1 data, with attitude toward the city explained by duration of residence and importance attached to weather, the multiple regression output includes:

Multiple R        0.97210
R²                0.94498
Adjusted R²       0.93276
Standard Error    0.85974
Conducting Multiple Regression Analysis

Significance Testing

The overall significance of the regression equation is tested with

H_0: R²_pop = 0, which is equivalent to H_0: β_1 = β_2 = β_3 = ... = β_k = 0,

using the F statistic:

$$F = \frac{SS_{reg}/k}{SS_{res}/(n-k-1)} = \frac{R^2/k}{(1-R^2)/(n-k-1)}$$

which has an F distribution with k and (n - k - 1) degrees of freedom. The significance of an individual partial regression coefficient may be tested with the t statistic

$$t = \frac{b}{SE_b}$$

which has a t distribution with (n - k - 1) degrees of freedom.
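Using the R² reported above for the two-predictor model, the overall F test can be verified in a few lines (an illustrative sketch):

```python
from scipy import stats

r2, n, k = 0.94498, 12, 2                      # values from the multiple regression output above

f_stat = (r2 / k) / ((1 - r2) / (n - k - 1))   # overall F test of H0: R2_pop = 0
f_crit = stats.f.ppf(0.95, dfn=k, dfd=n - k - 1)

print(round(f_stat, 2), round(f_crit, 2))      # about 77.3 versus a critical value near 4.26
```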
Conducting Multiple Regression Analysis

Examination of Residuals

• A residual is the difference between the observed value of Y_i and the value predicted by the regression equation, Ŷ_i.
• Scattergrams of the residuals, in which the residuals are plotted against the predicted values Ŷ_i, time, or predictor variables, provide useful insights in examining the appropriateness of the underlying assumptions and regression model fit.
• The assumption of a normally distributed error term can be examined by constructing a histogram of the residuals.
• The assumption of constant variance of the error term can be examined by plotting the residuals against the predicted values of the dependent variable, Ŷ_i.
Conducting Multiple Regression Analysis
Examination of Residuals
• A plot of residuals against time, or the sequence of
observations, will throw some light on the assumption that the
error terms are uncorrelated.
• Plotting the residuals against the independent variables
provides evidence of the appropriateness or
inappropriateness of using a linear model. Again, the plot
should result in a random pattern.
• To examine whether any additional variables should be
included in the regression equation, one could run a
regression of the residuals on the proposed variables.
• If an examination of the residuals indicates that the
assumptions underlying linear regression are not met, the
researcher can transform the variables in an attempt to satisfy
the assumptions. (A residual-plotting sketch follows below.)
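A minimal Matplotlib sketch of two of the residual plots described above, using the bivariate fit to the Table 17.1 data from the earlier sketches (names illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.array([10, 12, 12, 4, 12, 6, 8, 2, 18, 9, 17, 2])   # duration of residence
y = np.array([6, 9, 8, 3, 10, 4, 5, 2, 11, 9, 10, 2])      # attitude toward the city
y_hat = 1.0793 + 0.5897 * x                                 # fitted values from the bivariate model
residuals = y - y_hat

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))

ax1.scatter(y_hat, residuals)          # residuals vs predicted values: checks the constant-variance assumption
ax1.axhline(0, linestyle="--")
ax1.set(xlabel="Predicted Y", ylabel="Residual")

ax2.hist(residuals, bins=6)            # histogram of residuals: checks the normality assumption
ax2.set(xlabel="Residual", ylabel="Frequency")

plt.tight_layout()
plt.show()
```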
Residual Plot Indicating that Variance Is Not Constant
[Figure: residuals plotted against predicted Y values, with the spread of the residuals widening as the predicted values increase.]

Residual Plot Indicating a Linear Relationship Between Residuals and Time
[Figure: residuals plotted against time, showing a systematic linear trend rather than a random pattern.]

Plot of Residuals Indicating that a Fitted Model Is Appropriate
[Figure: residuals plotted against predicted Y values, scattered in a random band around zero.]
Stepwise Regression
The purpose of stepwise regression is to select, from a large number of
predictor variables, a small subset of variables that account for most of
the variation in the dependent or criterion variable. In this procedure,
the predictor variables enter or are removed from the regression
equation one at a time. There are several approaches to stepwise
regression.
• Forward inclusion. Initially, there are no predictor variables in the regression equation. Predictor variables are entered one at a time, only if they meet certain criteria specified in terms of the F ratio. The order in which the variables are included is based on their contribution to the explained variance. (A sketch of this procedure follows the list.)
• Backward elimination. Initially, all the predictor variables are included
in the regression equation. Predictors are then removed one at a time
based on the F ratio for removal.
• Stepwise solution. Forward inclusion is combined with the removal of
predictors that no longer meet the specified criterion at each step.
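Stepwise procedures are normally run with statistical software, but the forward-inclusion logic is easy to sketch directly. The following is a minimal illustration applied to the Table 17.1 data; the F-to-enter threshold and the function names are assumptions for the example, not values from the deck:

```python
import numpy as np

def ss_res(cols, y):
    """Residual sum of squares for an OLS fit of y on the given columns plus an intercept."""
    X = np.column_stack([np.ones(len(y))] + cols)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return ((y - X @ coef) ** 2).sum()

def forward_inclusion(candidates, y, f_to_enter=4.0):
    """Greedy forward inclusion: add the predictor with the largest incremental F at each step."""
    selected, remaining = [], dict(candidates)
    current_ss = ((y - y.mean()) ** 2).sum()           # SS_res of the intercept-only model
    while remaining:
        trials = {name: ss_res([candidates[s] for s in selected] + [col], y)
                  for name, col in remaining.items()}
        best = min(trials, key=trials.get)
        df_res = len(y) - (len(selected) + 1) - 1      # n - k - 1 for the trial model
        f_inc = (current_ss - trials[best]) / (trials[best] / df_res)
        if f_inc < f_to_enter:                         # stop when no candidate passes the F-to-enter criterion
            break
        selected.append(best)
        current_ss = trials[best]
        del remaining[best]
    return selected

y = np.array([6, 9, 8, 3, 10, 4, 5, 2, 11, 9, 10, 2], dtype=float)   # attitude toward the city
candidates = {
    "duration": np.array([10, 12, 12, 4, 12, 6, 8, 2, 18, 9, 17, 2], dtype=float),
    "weather": np.array([3, 11, 4, 1, 11, 1, 7, 4, 8, 10, 8, 5], dtype=float),
}
print(forward_inclusion(candidates, y))   # duration enters first, then weather
```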
Multicollinearity
• Multicollinearity arises when intercorrelations among the
predictors are very high.
• Multicollinearity can result in several problems, including:
– The partial regression coefficients may not be estimated precisely; their standard errors are likely to be high.
Multicollinearity
• A simple procedure for adjusting for multicollinearity
consists of using only one of the variables in a highly
correlated set of variables.
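One common diagnostic, added here as an illustration (it is not described in the deck), is the variance inflation factor, computed from the R² of regressing each predictor on the remaining predictors:

```python
import numpy as np

def vif(X):
    """Variance inflation factors for the columns of the predictor matrix X (one column per predictor)."""
    n, k = X.shape
    factors = []
    for j in range(k):
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        resid = X[:, j] - others @ coef
        r2_j = 1 - (resid ** 2).sum() / ((X[:, j] - X[:, j].mean()) ** 2).sum()
        factors.append(1 / (1 - r2_j))   # VIF_j = 1 / (1 - R_j^2); large values flag multicollinearity
    return factors

duration = np.array([10, 12, 12, 4, 12, 6, 8, 2, 18, 9, 17, 2], dtype=float)
weather = np.array([3, 11, 4, 1, 11, 1, 7, 4, 8, 10, 8, 5], dtype=float)
print(vif(np.column_stack([duration, weather])))   # both near 1.4, so little multicollinearity here
```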
Relative Importance of Predictors
Unfortunately, because the predictors are correlated,
there is no unambiguous measure of relative
importance of the predictors in regression analysis.
However, several approaches are commonly used to
assess the relative importance of predictor variables.
Relative Importance of Predictors

• Square of the partial correlation coefficient. This measure, R²_yxi.xj…xk, is the coefficient of determination between the dependent variable and the independent variable, controlling for the effects of the other independent variables.
Regression with Dummy Variables

Product Usage      Original Variable      Dummy Variable Code
Category           Code                   D1    D2    D3
Nonusers                1                  1     0     0
Light users             2                  0     1     0
Medium users            3                  0     0     1
Heavy users             4                  0     0     0

$$\hat{Y}_i = a + b_1 D_1 + b_2 D_2 + b_3 D_3$$
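A hedged sketch of how such a dummy coding could be built and fitted in Python; the category assignments and the outcome values are illustrative only, not data from the deck:

```python
import numpy as np

# Hypothetical usage categories (1 = nonuser, ..., 4 = heavy user) and an outcome measure
category = np.array([1, 2, 3, 4, 2, 3, 4, 1, 4, 3])
y = np.array([3.0, 4.5, 6.0, 8.0, 5.0, 6.5, 7.5, 2.5, 8.5, 6.0])

# Heavy users (category 4) are the omitted reference level, as in the coding table above
D = np.column_stack([(category == level).astype(float) for level in (1, 2, 3)])
X = np.column_stack([np.ones(len(y)), D])

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
a, b1, b2, b3 = coef
print(round(a, 3), round(b1, 3), round(b2, 3), round(b3, 3))
# a estimates the mean of the reference (heavy-user) group; each b_j is that group's mean minus a
```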
Analysis of Variance and Covariance with Regression

Given this equivalence, it is easy to see further relationships between dummy variable regression and one-way ANOVA:

R² = η² (eta squared)
Overall F test = F test
SPSS Windows
The CORRELATE program computes Pearson product moment
correlations and partial correlations with significance levels.
Univariate statistics, covariance,
and cross-product deviations may also be requested.
Significance levels are included in the output. To select these
procedures using SPSS for Windows click:
Analyze>Correlate>Bivariate …
Analyze>Correlate>Partial …
Scatterplots can be obtained by clicking:
Graphs>Scatter …>Simple>Define
REGRESSION calculates bivariate and multiple regression equations, associated statistics, and plots. It allows for an easy examination of residuals. This procedure can be run by clicking:
Analyze>Regression>Linear …
SPSS Windows: Correlations
1. Select ANALYZE from the SPSS menu bar.
7. Click OK.
SPSS Windows: Bivariate Regression
8. Click CONTINUE.
9. Click OK.