
Robust Regression

Appendix to An R and S-PLUS Companion to Applied Regression

John Fox
January 2002

1 M-Estimation

Linear least-squares estimates can behave badly when the error distribution is not normal, particularly when the errors are heavy-tailed. One remedy is to remove influential observations from the least-squares fit (see Chapter 6, Section 6.1, in the text). Another approach, termed robust regression, is to employ a fitting criterion that is not as vulnerable as least squares to unusual data.
The most common general method of robust regression is M-estimation, introduced by Huber (1964).[1] Consider the linear model

  y_i = \alpha + \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_k x_{ik} + \varepsilon_i
      = \mathbf{x}_i' \boldsymbol\beta + \varepsilon_i

for the ith of n observations. The fitted model is

  y_i = a + b_1 x_{i1} + b_2 x_{i2} + \cdots + b_k x_{ik} + e_i
      = \mathbf{x}_i' \mathbf{b} + e_i
The general M-estimator minimizes the objective function

  \sum_{i=1}^{n} \rho(e_i) = \sum_{i=1}^{n} \rho(y_i - \mathbf{x}_i' \mathbf{b})
where the function \rho gives the contribution of each residual to the objective function. A reasonable \rho should have the following properties:

  \rho(e) \ge 0
  \rho(0) = 0
  \rho(e) = \rho(-e)
  \rho(e_i) \ge \rho(e_{i'})  for |e_i| > |e_{i'}|

For example, for least-squares estimation, \rho(e_i) = e_i^2.
Let \psi = \rho' be the derivative of \rho. Differentiating the objective function with respect to the coefficients, \mathbf{b}, and setting the partial derivatives to 0, produces a system of k + 1 estimating equations for the coefficients:

  \sum_{i=1}^{n} \psi(y_i - \mathbf{x}_i' \mathbf{b}) \, \mathbf{x}_i' = 0
[1] This class of estimators can be regarded as a generalization of maximum-likelihood estimation, hence the term "M-estimation." Huber's 1964 paper introduced M-estimation in the context of estimating the location (center) of a distribution; the method was later generalized to regression.

Define the weight function w(e) = \psi(e)/e, and let w_i = w(e_i). Then the estimating equations may be written as

  \sum_{i=1}^{n} w_i (y_i - \mathbf{x}_i' \mathbf{b}) \, \mathbf{x}_i' = 0

Solving the estimating equations is a weighted least-squares problem, minimizing \sum w_i e_i^2. The weights, however, depend upon the residuals, the residuals depend upon the estimated coefficients, and the estimated coefficients depend upon the weights. An iterative solution (called iteratively reweighted least-squares, IRLS) is therefore required:

1. Select initial estimates \mathbf{b}^{(0)}, such as the least-squares estimates.

2. At each iteration t, calculate residuals e_i^{(t-1)} and associated weights w_i^{(t-1)} = w[e_i^{(t-1)}] from the previous iteration.

3. Solve for new weighted-least-squares estimates

  \mathbf{b}^{(t)} = [\mathbf{X}' \mathbf{W}^{(t-1)} \mathbf{X}]^{-1} \mathbf{X}' \mathbf{W}^{(t-1)} \mathbf{y}

where \mathbf{X} is the model matrix, with \mathbf{x}_i' as its ith row, and \mathbf{W}^{(t-1)} = \mathrm{diag}\{w_i^{(t-1)}\} is the current weight matrix.

Steps 2 and 3 are repeated until the estimated coefficients converge.
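As a concrete illustration of the algorithm, here is a minimal IRLS sketch in R with Huber weights. It is illustrative only (rlm in the MASS library, used below, is the careful implementation); the names irls and huber.wt are ad hoc, and the scale estimate and tuning constant anticipate the discussion below:

> huber.wt <- function(e, k) ifelse(abs(e) <= k, 1, k/abs(e))
> irls <- function(X, y, tol=1e-8, max.iter=50) {
+     X <- cbind(1, X)                    # model matrix, with a constant column
+     b <- solve(t(X) %*% X, t(X) %*% y)  # step 1: least-squares start values
+     for (t in 1:max.iter) {
+         e <- as.vector(y - X %*% b)     # step 2: residuals ...
+         s <- median(abs(e))/0.6745      # robust scale estimate (see below)
+         w <- huber.wt(e, k=1.345*s)     # ... and associated weights
+         b.new <- solve(t(X) %*% (w*X), t(X) %*% (w*y))  # step 3: WLS estimates
+         if (max(abs(b.new - b)) < tol) break            # converged?
+         b <- b.new
+     }
+     list(coefficients=as.vector(b.new), residuals=e, weights=w, scale=s)
+ }

Applied to the Duncan regression used later in this appendix, a sketch like this should track the coefficients reported by rlm closely, though not exactly, since rlm handles the scale estimate and convergence tests more carefully.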
The asymptotic covariance matrix of \mathbf{b} is

  \mathcal{V}(\mathbf{b}) = \frac{E(\psi^2)}{[E(\psi')]^2} (\mathbf{X}'\mathbf{X})^{-1}

Using \sum [\psi(e_i)]^2 / n to estimate E(\psi^2), and [\sum \psi'(e_i)/n]^2 to estimate [E(\psi')]^2, produces the estimated asymptotic covariance matrix, \widehat{\mathcal{V}}(\mathbf{b}) (which is not reliable in small samples).
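Continuing the hypothetical sketch above, the estimated covariance matrix can be computed from the converged fit; huber.psi and huber.dpsi below are ad hoc transcriptions of the Huber \psi function and its derivative, not MASS functions:

> huber.psi  <- function(e, k) ifelse(abs(e) <= k, e, k*sign(e))
> huber.dpsi <- function(e, k) as.numeric(abs(e) <= k)  # psi'(e)
> irls.vcov <- function(fit, X, k=1.345*fit$scale) {
+     X <- cbind(1, X)
+     e <- fit$residuals
+     n <- length(e)
+     num <- sum(huber.psi(e, k)^2)/n       # estimates E(psi^2)
+     den <- (sum(huber.dpsi(e, k))/n)^2    # estimates [E(psi')]^2
+     (num/den) * solve(t(X) %*% X)
+ }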

1.1 Objective Functions

Figure 1 compares the objective functions, and the corresponding \psi and weight functions, for three M-estimators: the familiar least-squares estimator, the Huber estimator, and the Tukey bisquare (or biweight) estimator. The objective and weight functions for the three estimators are also given in Table 1.
Both the least-squares and Huber objective functions increase without bound as the residual e departs from 0, but the least-squares objective function increases more rapidly. In contrast, the bisquare objective function eventually levels off (for |e| > k). Least squares assigns equal weight to each observation; the weights for the Huber estimator decline when |e| > k; and the weights for the bisquare decline as soon as e departs from 0, and are 0 for |e| > k.

The value k for the Huber and bisquare estimators is called a tuning constant; smaller values of k produce more resistance to outliers, but at the expense of lower efficiency when the errors are normally distributed. The tuning constant is generally picked to give reasonably high efficiency in the normal case; in particular, k = 1.345\sigma for the Huber and k = 4.685\sigma for the bisquare (where \sigma is the standard deviation of the errors) produce 95-percent efficiency when the errors are normal, and still offer protection against outliers.

In an application, we need an estimate of the standard deviation of the errors to use these results. Usually a robust measure of spread is employed in preference to the standard deviation of the residuals. For example, a common approach is to take \hat\sigma = \mathrm{MAR}/0.6745, where MAR is the median absolute residual.
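In R this is a one-liner; mod here stands for any hypothetical fitted regression model:

> sigma.hat <- median(abs(residuals(mod)))/0.6745  # MAR-based scale estimate
> k <- 1.345*sigma.hat                             # Huber tuning constant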

[Figure 1 is not reproduced here: a 3 x 3 array of panels showing, for each estimator (least squares, Huber, bisquare), the objective function \rho(e), the \psi function, and the weight function w(e) plotted against e.]

Figure 1: Objective, \psi, and weight functions for the least-squares (top), Huber (middle), and bisquare (bottom) estimators. The tuning constants for these graphs are k = 1.345 for the Huber estimator and k = 4.685 for the bisquare. (One way to think about this scaling is that the standard deviation of the errors, \sigma, is taken as 1.)

Method          Objective Function                                        Weight Function

Least-Squares   \rho_{LS}(e) = e^2                                        w_{LS}(e) = 1

Huber           \rho_H(e) = e^2/2              for |e| <= k               w_H(e) = 1       for |e| <= k
                \rho_H(e) = k|e| - k^2/2       for |e| > k                w_H(e) = k/|e|   for |e| > k

Bisquare        \rho_B(e) = (k^2/6){1 - [1 - (e/k)^2]^3}  for |e| <= k    w_B(e) = [1 - (e/k)^2]^2  for |e| <= k
                \rho_B(e) = k^2/6                         for |e| > k     w_B(e) = 0                for |e| > k

Table 1: Objective functions and weight functions for the least-squares, Huber, and bisquare estimators.
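A direct transcription of Table 1 into R might look as follows; these are illustrative definitions with ad hoc names, not the functions rlm uses internally. The final line plots the bisquare weight function, reproducing the general shape of the bottom-right panel of Figure 1:

> rho.huber <- function(e, k) ifelse(abs(e) <= k, e^2/2, k*abs(e) - k^2/2)
> w.huber   <- function(e, k) ifelse(abs(e) <= k, 1, k/abs(e))
> rho.bisq  <- function(e, k)
+     ifelse(abs(e) <= k, (k^2/6)*(1 - (1 - (e/k)^2)^3), k^2/6)
> w.bisq    <- function(e, k) ifelse(abs(e) <= k, (1 - (e/k)^2)^2, 0)
> curve(w.bisq(x, k=4.685), from=-6, to=6, xlab="e", ylab="w(e)")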

2 Bounded-Influence Regression

Under certain circumstances, M-estimators can be vulnerable to high-leverage observations. A key concept in assessing influence is the breakdown point of an estimator: the breakdown point is the fraction of "bad" data that the estimator can tolerate without being affected to an arbitrarily large extent. For example, in the context of estimating the center of a distribution, the mean has a breakdown point of 0, because even one bad observation can change the mean by an arbitrary amount; in contrast, the median has a breakdown point of 50 percent.

There are also regression estimators that have breakdown points of nearly 50 percent. One such bounded-influence estimator is least-trimmed squares (LTS) regression.
The residuals from the fitted regression model are

  e_i = y_i - (a + b_1 x_{i1} + b_2 x_{i2} + \cdots + b_k x_{ik})
      = y_i - \mathbf{x}_i' \mathbf{b}

Let us order the squared residuals from smallest to largest:

  (e^2)_{(1)}, (e^2)_{(2)}, \ldots, (e^2)_{(n)}

The LTS estimator chooses the regression coefficients \mathbf{b} to minimize the sum of the smallest m of the squared residuals,

  \mathrm{LTS}(\mathbf{b}) = \sum_{i=1}^{m} (e^2)_{(i)}

where, typically, m = \lfloor n/2 \rfloor + \lfloor (k+2)/2 \rfloor (i.e., a little more than half of the observations), and the floor brackets, \lfloor \rfloor, denote rounding down to the next smallest integer.
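Although the search over candidate coefficients is the hard part, the LTS criterion itself is simple to evaluate for a fixed b. Here is a sketch (lts.objective is an ad hoc name, and X is assumed to be the predictor matrix without a constant column, so that ncol(X) = k):

> lts.objective <- function(b, X, y) {
+     e2 <- sort(as.vector(y - cbind(1, X) %*% b)^2)   # ordered squared residuals
+     m  <- floor(nrow(X)/2) + floor((ncol(X) + 2)/2)  # a little more than n/2
+     sum(e2[1:m])                                     # sum of the m smallest
+ }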
While the LTS criterion is easily described, the mechanics of fitting the LTS estimator are complicated (see, for example, Rousseeuw and Leroy, 1987). Moreover, bounded-influence estimators can produce unreasonable results in certain circumstances (Stefanski, 1991), and there is no simple formula for coefficient standard errors.[2]

[2] Statistical inference for the LTS estimator can easily be performed by bootstrapping, however. See the Appendix on bootstrapping for an example.

3 An Illustration: Duncan's Occupational-Prestige Regression

Duncan's occupational-prestige regression was introduced in Chapter 1 and described further in Chapter 6 on regression diagnostics. The least-squares regression of prestige on income and education produces the following results:
> library(car)  # mostly for the Duncan data set
> data(Duncan)
> mod.ls <- lm(prestige ~ income + education, data=Duncan)
> summary(mod.ls)

Call:
lm(formula = prestige ~ income + education, data = Duncan)

Residuals:
    Min      1Q  Median      3Q     Max
-29.538  -6.417   0.655   6.605  34.641

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  -6.0647     4.2719   -1.42     0.16
income        0.5987     0.1197    5.00  1.1e-05
education     0.5458     0.0983    5.56  1.7e-06

Residual standard error: 13.4 on 42 degrees of freedom
Multiple R-Squared: 0.828,     Adjusted R-squared: 0.82
F-statistic: 101 on 2 and 42 DF,  p-value: 1.11e-016

Recall from the previous discussion of Duncan's data that two observations, ministers and railroad conductors, serve to decrease the income coefficient substantially and to increase the education coefficient, as we may verify by omitting these two observations from the regression:
> mod.ls.2 <- update(mod.ls, subset=-c(6,16))
> summary(mod.ls.2)
Call:
lm(formula = prestige ~ income + education, data = Duncan,
    subset = -c(6, 16))

Residuals:
   Min     1Q Median     3Q    Max
-28.61  -5.90   1.94   5.62  21.55

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  -6.4090     3.6526   -1.75   0.0870
income        0.8674     0.1220    7.11  1.3e-08
education     0.3322     0.0987    3.36   0.0017

Residual standard error: 11.4 on 40 degrees of freedom
Multiple R-Squared: 0.876,     Adjusted R-squared: 0.87
F-statistic: 141 on 2 and 40 DF,  p-value: 0
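As a quick check that rows 6 and 16 are indeed minister and conductor, we can look at the row names of the data frame (the labels shown are what the Duncan data set should produce):

> rownames(Duncan)[c(6, 16)]
[1] "minister"  "conductor"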
Alternatively, let us compute the Huber M-estimator for Duncan's regression model, employing the rlm (robust linear model) function in the MASS library:
> library(MASS)
> mod.huber <- rlm(prestige ~ income + education, data=Duncan)
> summary(mod.huber)
Call: rlm.formula(formula = prestige ~ income + education, data = Duncan)

Residuals:
   Min     1Q Median     3Q    Max
-30.12  -6.89   1.29   4.59  38.60

Coefficients:
            Value  Std. Error t value
(Intercept) -7.111  3.881     -1.832
income       0.701  0.109      6.452
education    0.485  0.089      5.438

Residual standard error: 9.89 on 42 degrees of freedom

Correlation of Coefficients:
          (Intercept) income
income    -0.297
education -0.359      -0.725
The summary method for rlm objects prints the correlations among the coefficients; to suppress this output, specify correlation=FALSE. The Huber regression coefficients are between those produced by the least-squares fit to the full data set and by the least-squares fit eliminating the occupations minister and conductor.
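A convenient way to see this is to collect the three sets of estimates side by side; coef works on rlm objects because they inherit from linear models:

> cbind(LS=coef(mod.ls), "LS omit 6,16"=coef(mod.ls.2), Huber=coef(mod.huber))

The Huber income coefficient of 0.70 lies between the full-sample least-squares value of 0.60 and the value of 0.87 obtained after removing minister and conductor; likewise the education coefficient of 0.49 lies between 0.55 and 0.33.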

[Figure 2 is not reproduced here: an index plot of the Huber weights (vertical axis, roughly 0.4 to 1.0) against observation index, with the downweighted occupations labeled; minister receives the smallest weight.]

Figure 2: Weights from the robust Huber estimator for the regression of prestige on income and education. Observations with weights less than 1 were identified interactively with the mouse.
It is instructive to extract and plot (in Figure 2) the final weights employed in the robust fit, identifying observations with weights less than 1 using the mouse:
> plot(mod.huber$w, ylab="Huber Weight")
> identify(1:45, mod.huber$w, rownames(Duncan))
[1] 6 9 16 17 18 22 23 24 25 28 32 33
Ministers and conductors are among the observations that receive the smallest weight.
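In a non-interactive session, the same observations can be picked out directly; for example, to list the occupations with the five smallest Huber weights:

> rownames(Duncan)[order(mod.huber$w)[1:5]]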
Next, I employ rlm to compute the bisquare estimator for Duncan's regression. Start values for the IRLS procedure are potentially more critical for the bisquare estimator; specifying the argument method="MM" to rlm requests bisquare estimates with start values determined by a preliminary bounded-influence regression. To use this option, it is necessary first to attach the lqs library, which contains functions for bounded-influence regression:
> library(lqs)
> mod.bisq <- rlm(prestige ~ income + education, data=Duncan, method="MM")
> summary(mod.bisq, cor=F)
Call: rlm.formula(formula = prestige ~ income + education, data = Duncan,
    method = "MM")

Residuals:
   Min     1Q Median     3Q    Max
-29.87  -6.63   1.44   4.47  42.40

Coefficients:
            Value  Std. Error t value
(Intercept) -7.389  3.908     -1.891
income       0.783  0.109      7.149
education    0.423  0.090      4.710

Residual standard error: 9.79 on 42 degrees of freedom

[Figure 3 is not reproduced here: an index plot of the bisquare weights (vertical axis, roughly 0.0 to 1.0) against observation index, with the downweighted occupations labeled; minister receives a weight near 0.]

Figure 3: Weights from the robust bisquare estimator for the regression of prestige on income and education. Observations accorded relatively small weight were identified interactively with the mouse.
Compared to the Huber estimates, the bisquare estimate of the income coefficient is larger, and the estimate of the education coefficient is smaller. Figure 3 shows a graph of the weights from the bisquare fit, interactively identifying the observations with the smallest weights:
> plot(mod.bisq$w, ylab="Bisquare Weight")
> identify(1:45, mod.bisq$w, rownames(Duncan))
[1] 6 9 16 17 23 28
Finally, I use the ltsreg function in the lqs library to fit Duncan's model by LTS regression:[3]
> mod.lts <- ltsreg(prestige ~ income + education, data=Duncan)
> mod.lts
Call:
lqs.formula(formula = prestige ~ income + education, data = Duncan,
    method = "lts")

Coefficients:
(Intercept)       income    education
     -7.015        0.804        0.432

Scale estimates 7.77 7.56


In this case, the results are similar to those produced by the M-estimators. Note that the print method for bounded-influence regression gives the regression coefficients and two estimates of the variation (scale) of the errors. There is no summary method for this class of models.

[3] LTS regression is also the default method for the lqs function, which additionally can fit other bounded-influence estimators.

References

Huber, P. J. 1964. "Robust Estimation of a Location Parameter." Annals of Mathematical Statistics 35:73-101.

Rousseeuw, P. J. & A. M. Leroy. 1987. Robust Regression and Outlier Detection. New York: Wiley.

Stefanski, L. A. 1991. "A Note on High-Breakdown Estimators." Statistics and Probability Letters 11:353-358.
