Latent Variable Models in Econometrics
Latent Variable Models in Econometrics
CHENG HSIAO
University of Toronto
ARIE KAPTEYN
Tilburg University
TOM WANSBEEK*
Nerherlands Central Bureau of Sratisrics
Contents
Introduction
1.1. Background
1.2. Our single-equation heritage
1.3. Multiple equations
1.4. Simultaneous equations
1.5. The power of a dynamic specification
1.6. Prologue
Contrasts and similarities between structural and functional models
2.1. ML estimation in structural and functional models
2.2. Identification
2.3. Efficiency
2.4. The ultrastructural relations
Single-equation models
3.1. Non-normality and identification: An example
3.2. Estimation in non-normal structural models
*The authors would like to express their thanks to Zvi Griliches, Hans Schneeweiss, Edward
Learner, Peter Bentler, Jerry Hausman, Jim Heckman, Wouter Keller, Franz Palm, and Wynand van
de Ven for helpful comments on an early draft of this chapter and to Denzil Fiebig for considerable
editorial assistance in its preparation. Sharon Koga has our special thanks for typing the manuscript.
C. Hsiao also wishes to thank the Social Sciences and Humanities Research Council of Canada and
the National Science Foundation, and Tom Wansbeek the Netherlands Organization for the Advance-
ment of Pure Research (Z.W.O.) for research support.
1. Introduction
1.1. Background
Yet even a cursory reading of recent econometrics texts will show that the
historical emphasis in our discipline is placed on models without measurement
error in the variables and instead with stochastic "shocks" in the equations. TO
the extent that the topic is treated, one normally will find a sentence alluding to
the result that for a classical single-equation regression model, measurement error
in the dependent variable, y, causes no particular problem because it can be
subsumed within the equation's disturbance term.' And, when it comes to the
matter of measurement errors in independent variables, the reader will usually be
convinced of the futility of consistent parameter estimation in such instances
unless repeated observations on y are available at each data point or strong a
priori information can be employed. And the presentation usually ends just about
there. We are left with the impression. that the errors-in-variables "problem" is
bad enough in the classical regression model; surely it must be worse in more
complicated models.
But in fact this is not the case. For example, in a simultaneous equations setting
one may employ overidentifying restrictions that appear in the system in order to
identify observation error variances and hence to obtain consistent parameter
estimates. (Not always, to be sure, but at least sometimes.) This was recognized as
long ngo as 1947 in an unpublished paper by Anderson and Hunvicz, referenced
(with an example) by Chernoff and Rubin (1953) in one of the early Cowles
Commission volumes. Moreover, dynamics in an equation can also be helpful in
parameter identification, ceteris paribus. Finally, restrictions on a model's covari-
ance structure, which are commonplace in sociometric and psychometric model-
ling, may also serve to aid identification. [See, for example, Bentler and Weeks
(1980).] These are the three main themes of research with which we will be
concerned throughout this essay. After brief expositions in this Introduction, each
topic is treated in depth in a subsequent section.
There is no reason to spend time and space at this point recreating the discussion
of econometrics texts on the subject of errors of measurement in the independent
variables of an otherwise conventional single-equation regression model. But the
setting does provide a useful jumping-off-place for much of what follows.
Let each observation (y,, x i ) in a random sample be generated by the stochastic
relationships:
'That is to say, the presence of measurement error in y does not alter the properties of least squares
estimates of regression coefficients. But the variance of the measurement error remains hopelessly
entangled with that of the disturbance term.
Ch. 23: Latent Variable Models in Econometrics 1325
Equation (1.3) is the heart of the model, and we shall assume E(qi(E,)= a + Pt,,
so that R E , )= 0 and E([,e,) = 0. Also, we denote R E ? )= a,,. Equations (1.1) and
(1.2) involve the measurement errors, and their properties are taken to be
E(u,) = Qu,) = 0, E(u?)= a,,, E(u?)= a,, and E(u,v,)= 0. Furthermore, we will
assume that the measurement errors are each uncorrelated with E, and with the
latent variables q, and 5,. Inserting the expressions ti= xi - u, and q, = y, - u,
into (1.3), we get:
y, = a + / ? x , + w,, (1.4)
where w, = E, + U, - Po,. NOW since E(u,lx,) # 0, we readily conclude that least
squares methods will yield biased estimates of a and P.
By assuming all random variables are normally distributed we eliminate any
concern over estimation of the 5,'s as "nuisance" parameters. This is the so-called
structural latent variables model, as contrasted to the functional model, wherein
the 5,'s are assumed to be fixed variates (Section 2). Even so, under the normality
assumption no consistent estimators of the primary.parameters of interest exist.
This can easily be seen by writing out the so-called "covariance" equations that
relate consistently estimable variances and covariances of the observables ( y, and
x,) to the underlying parameters of the model. Under the assumption of joint
normality, these equations exhaust the available information and so provide
necessary and sufficient conditions for identification. They are obtained by
"covarying" (1.4) with y, and x,, respectively. Doing so, we obtain:
Obviously, there are but three equations (involving three consistently estimable
quantities, a,, a,, and a,,) and five parameters to be estimated. Even if we agree
to give up any hope of disentangling the influences of E, and u, (by defining, say,
a 2 = a,, + a,,) and recognize that the equation a,, = o + a,,, will always be used
to identify act alone, we are still left with two equationsE Ein three unknowns (P, a 2 ,
and a,,).
The initial theme in the literature develops from this point. One suggestion to
achieve identification in (1.5) is to assume we know something about a,,, relative
to a 2 or a,, relative to a,,. Suppose this a priori information is in the form
h = a,, /a '. Then we have a,, = Aa and
,,,, =Pa,,, + a 2 ,
(J. .
Clearly this is but one of several possible forms that the prior information may
take. In Section 3.2 we discuss various alternatives. A Bayesian treatment suggests
itself as well (Section 3.11).
In the absence of such information, a very practical question arises. It is
whether, in the context of a classical regression model where one of the indepen-
dent variables is measured with error, that variable should be discarded or not, a
case of choosing between two second-best states of the world, where inconsistent
parameter estimates are forthcoming either from the errors-in-variables problem
or through specification bias. As is well known, in the absence of an errors-of-
observation problem in any of the independent variables, discarding one or more
of them from the model may, in the face of severe multicollinearity, be an
appropriate strategy under a mean-square-error (MSE) criterion. False restric-
tions imposed cause bias but reduce the variances on estimated coefficients
(Section 3.6).
-
uzx - YO;:.
It is apparent that the parameters of (1.8) are identified through the last two of
these equations. If, as before, we treat a,, + a,,, as a single parameter, u 2 , then
(1.5) and the first equation of (1.9) will suffice to identify P, u2, q,,, and at[.
This simple example serves to illustrate how additional equations containing
the same latent variable may serve to achieve identification. This "multiple
Ch. 23: h t e n t Variable Models in Econometrics 1327
From our consideration of (1.4) and (1.8) together, we saw how the existence of
an instrumental variable (equation) for an independent variable subject to mea-
surement error could resolve the identification problem posed. This is equivalent
to suggesting that an overidentifying restriction exists somewhere in the system of
equations from which (1.4) is extracted that can be utilized to provide an
instrument for a variable like xi. But it is not the case that overidentifying
restrictions can be traded-off against measurement error variances without qualifi-
cation. Indeed, the locations of exogenous variables measured with error and
overidentifying restrictions appearing elsewhere in the equation system are cru-
cial. To elaborate, consider the following equation system, whlch is dealt with in
detail in Section 5.2:
where 5, ( j= 1,2,3) denote the latent exogenous variables in the system. Were the
latent exogenous variables regarded as obseruable, the first equation is-condi-
tioned on this supposition-overidentified (one overidentifying restriction) while
the second equation is conditionally just-identified. Therefore, at most one
measurement error variance can be identified.
Consider first the specifications x, = 5, + u,, x 2= 5,, x, = 5,, and let all denote
the variance of u,. The corresponding system of covariance equations turns out to
be:
2 4 2 ~ 1)
( 5 ,+~8 1 ~ (
+ 8 1 2 ' ~ ~ ~ ~ ')Y ~ x +
~ 8 1 2 ~ ~ ~ ~ ~
unknowns, PI,, &, yll, y22,Y23,and a,,. It is clear that equations @ and @ in
(1.11) can be used to solve for P12 and y,,, leaving @ to solve for a,,. The
remaining three equations can be solved for /I2,, y,,, y2,, so in this case all
parameters are identified. Were the observation error instead to have been
associated with t,, we would find a different conclusion. Under that specification,
p12and yll are overdetermined, whereas there are only three covariance equations
available to solve for P,,, y2,, y2,, and o,,. Hence, these latter four parameters [all
of them associated with the second equation in (1.10)] are not identified.
Up to this point in our introduction we have said nothing about the existence of
dynamics in any of the equations or equation systems of interest. Indeed, the
results presented and discussed so far apply only to models depicting contempora-
neous behavior.
When dynamics are introduced into either the dependent or the independent
variables in a linear model with measurement error, the results are usually
beneficial. To illustrate, we will once again revert to a single-equation setting, one
that parallels the development of (1.4). In particular, suppose that the sample at
hand is a set of time-series observations and that (1.4) is instead:
with all the appropriate previous assumptions imposed, except that now we will
also use IPI < 1 , E(u,)= E(u,_,) = 0, E(u:)= ~ ( u f - , =
) a,,, and E ( U , U , _=
~ )0.
Then, analogous to (1.5) we have:
where a,,,-, is our notation for the covariance between y, and y,_, and we have
equated the variances of y, and y,-, by assumption. It is apparent that t h s
variance identity has eliminated one parameter from consideration (a,,,,),and we
are now faced with a system of two equations in only three unknowns. Unfor-
tunately, we are not helped further by an agreement to let the effects of the
equation disturbance term ( E , ) and the measurement error in the dependent
variable (u,) remain joined.
Fortunately, however, there is some additional information that can be utilized
to resolve things: it lies in the covariances between current y, and lags beyond
one period ( y,-, for s 2 2). These covariances are of the form:
Ch. -73: Latent Variable Models in Econometrics 1329
so that any one of them taken in conjunction with (1.13) will suffice to solve for /I,
uEe,and
1.6. Prologue
Our orientation in this Chapter is primarily theoretical, and while that will be
satisfactory for many readers, it may detract others from the realization that
structural modelling with latent variables is not only appropriate from a concep-
tual viewpoint in many applications, it also provides a means to enhance marginal
model specifications by taking advantage of information that otherwise might be
misused or totally ignored.
Due to space restrictions, we have not attempted to discuss even the most
notable applications of latent variable modelling in econometrics. And indeed
there have been several quite interesting empirical studies since the early 1970's.
In chronological order of appearance, some of these are: Griliches and Mason
(1972), h g n e r (1974a), Chamberlain and Griliches (1975, 19771, Griliches (1974,
1977), Chamberlain (1977a, 1977b, 1978), Attfield (19771, Kadane et al. (1977),
Robinson and Ferrara (19771, Avery (1979), and Singleton (1980). Numerous
others in psychology and sociology are not referenced here.
In the following discussion we have attempted to highlight interesting areas for
further research as well as to pay homage to the historical origins of the important
lines of thought that have gotten us this far. Unfortunately, at several points in
the development we have had to cut short the discussion because of space
constraints. In these instances the reader is given direction and references in order
to facilitate his/her own completion of the topic at hand. In particular we
abbreviate our discussions of parameter identification in deference to Hsiao's
chapter on that subject in Volume I of this Handbook.
In this section we analyze the relation between functional and structural models
and compare the identification and estimation properties of them. For expository
reasons we do not aim at the greatest generality possible. The comparison takes
place within the context of the multiple linear regression model. Generalizations
are considered in later sections.
co he existence of a set of solvable covariance equations should not be surprising. For, combining
(1.12) to get the reduced form expression, y, = a + Py-l+(el + U ~ ) - ~ U , whlch
~ , is in the form of
an autoregressive/moving-average (ARMA) model.
1330 D. J . Aigner et al.
Consider the following multiple linear regression model with errors in variables:
where t,,x,, v,, and /3 are k-vectors, and y, and E, are scalars. The t , ' s are
unobservable variables; instead x , is observed. v, is unobservable and we assume
-
u, N(0, Q) for all i. E, is assumed to follow a N(0, a 2 ) distribution. v, and E , are
mutually independent and independent of 6,.
In the functional model the above statements represent all the assumptions one
has to make, except for the possible specification of prior knowledge with respect
to the parameters 8, a 2 and Q. The elements of I , are considered to be unknown
constants. For expository simplicity we assume that Q is non-singular. The
likelihood of the observable random variables y, and x , is then:
where X and Z are n X k-matrices with ith rows x,' and t,' respectively, and
y = ( y,, y,, . .., y,)'. The unknown parameters in (2.3) are 8 , Q, a and the
elements of Z. Since the order of Z is n x k, the number of unknown parameters
increases with the number of observations. The parameters /3, a 2 , and Q are
usually referred to as structural parameters, whereas the elements of Z are called
incidental parameters [Neyman and Scott (1948)l. The occurrence of incidental
parameters poses some nasty problems, as we shall soon see.
In the structural model one has to make an explicit assumption about the
distribution of the vector of latent variables, t,.A common assumption is that 5,
- -
is normally distributed: t, N(0, K ) , say. Consequently x, N(0, A), where
A = K + Q. We assume K, hence A , to be positive definite. Under these assump-
tions we can write down the simultaneous likelihood of the random variables in
y,, 6, and x,. This appears as:
In order to show the relationship between the functional and the structural
models it is instructive to elaborate upon (2.4). It can be verified by direct
multiplication that:
various normality assumptions made, since parameters (in t h s case the incidental
parameters) can always be interpreted as random variables on whlch the model in
which they appear has been conditioned. These conclusions remain essentially the
same if we allow for the possibility that some variables are measured without
error. If there are no measurement errors, the distinction between the functional
and structural interpretations boils down to the familiar distinction between fixed
regressors ("conditional upon X") and stochastic regressors [cf. Sampson (1974)l.
To compare the functional and structural models a bit further it is of interest to
look at the properties of ML estimators for both models, but for reasons of space
we will not do that here. Suffice it to say that the structural model is underiden-
tified. A formal analysis follows in Sections 2.2 and 2.3. As for the functional
model, Solari (1969) was the first author to point out that the complete log-likeli-
hood has no proper maxim~rn.~ She also showed that the stationary point
obtained from the first order conditions corresponds to a saddle point of the
likelihood surface. Consequently, the conditions of Wald's (1949) consistency
proof are not fulfilled. The solution to the first order conditions is known to
produce inconsistent estimators and the fact that the ML method breaks down in
t h s case has been ascribed to the presence of the incidental parameters [e.g.
Malinvaud (1970, p. 387), Neyman and Scott (1948)l. In a sense that explanation
is correct. For example, Cramer's proof of the consistency of ML [Cramer (1946,
pp. 500 ff.)] does not explicitly use the fact that the first order conditions actually
generate a maximum of the likelihood function. He does assume, however, that
the number of unknown parameters remains fixed as the number of observations
increases.
Maximization of the likelihood in the presence of incidental parameters is not
always impossible. If certain identifying restrictions are available, ML estimators
can be obtained, but the resulting estimators still need not be consistent, as will
be discussed further in Section 3.4. ML is not the only estimation method that
breaks down in the functional model. In the next subsection we shall see that
without additional identifying restrictions there does not exist a consistent
estimator of the parameters in the functional model.
2.2. Identification
'see also Sprent (1970) for some further comments on Solari. A result similar to Solari's had been
obtained 13 years before by Anderson and Rubin (1956), who showed that the likelihood function of a
factor analysis model with fixed factors does not have a maximum.
Ch. 23: Latent Vanable Models in Econometrics 1333
n n
log L , = - - logo 2 - -1og1S21- f t r ( ~ I)52-'(X-
- I)'
2 2
- f ,.- 2( - ZB)'( - I @ ) - ink log27r, (2.12)
In general the matrix 9 is positive definite and hence both the structural and the
incidental parameters are identified. But this result does not help us obtain
reasonable estimates of the parameters since no consistent estimators exist.
To see why this is true we use a result obtained by Wald (1948). In terms of the
functional model his result is that the likelihood (2.3) admits a consistent estimate
Ch. 23: Lotenr Variable Models in Econometrics 1335
6 ~ h result
e quoted here is stated briefly in Wolfowitz (1954), but no conditions or proof are given.
We are not aware of a subsequent publication containing a full proof.
Ch. 23: Latent Variable Models in Econometrics 1337
equations for this model as well as the information matrix. Since the case r = 1
yields a model which is closely related to the functional model, the analysis in the
previous section would suggest that in thls case the inverse of the information
matrix does not yield a consistent estimate of the asymptotic variance-covariance
matrix, even if sufficient identifying assumptions are made. This is also pointed
out by Patefield (1978).
3. Single-equation models
For thls section the basic model is given by (2.1) and (2.2), although the basic
assumptions will vary over the course of the discussion. We first discuss the
structural model with non-normally distributed latent variables when no extra-
neous information is available. Next we consider an example of a non-normal
model with extraneous information. Since normal structural models and func-
tional models have the same identification properties they are treated in one
section, assuming that sufficient identifying restrictions are available. A variety of
other topics comprise the remaining sub-sections, including non-linear models,
prediction and aggregation, repeated observations, and Bayesian methods.
where y,, (,, E,, xi, and u, are scalar random variables with zero means; also, u,, E ~ ,
and [, are mutually independent. Denote moments by subscripts, e.g. axxxx=
E(xP). Assuming that (,is not normally distributed, not all information about its
distribution is contained in its second moment. Thus, we can employ higher order
moments, if such moments exist. Suppose [, is symmetrically distributed around
zero and that its second and fourth moments exist. Instead of three moment
equations in four unknowns, we now have eight equations in five unknowns (i.e.
four plus the kurtosis of [,). Ignoring the overidentification, one possible solution
for p can easily be shown to be:
1338 D. J. Aigner el a/.
One observes that the closer the distribution of 6, comes to a normal distribution,
the closer oxxxx- 302~(the kurtosis of the distribution of x,) is to zero. In that
case the variance of the estimator defined by (3.3) may become so large as to
make it useless.
As an illustration of the results obtained in Section 2.2, the example shows how
identification is achieved by non-normality. Two comments can be made. First, as
already observed in Section 2.2, underidentification comes from the fact that both
5, and u, are normally distributed. The denominator in (3.3) does not vanish if 5, is
normally distributed but u, is not. Secondly, let us extend the example by adding a
latent variable {, so that (3.1) becomes:
Y, = P t , + rSi + E,. (3-4)
The measured value of {, is z , , generated by z, = {,+ w,, where w, is normally
distributed and independent of u,, E,, t i , {,; {, is assumed to be normally distrib-
uted, with mean zero, independent of t , , ui, t i . Applying the proposition of
Kapteyn and Wansbeek (1983) (cf. Section 2.2) we realize that there is a linear
combination of ti and I, , {, itself, which is normally distributed. Thus,
namely
overidentification due to the non-normal distribution of t, does not help in
identifying y , as one can easily check by writing down the moment equations.
' ~ c o t t(1950) gives a consistent estimator of in (3.1) by using the third central moment of the
distribution of 6,. Rather than seeking a minimum variance combination. Pal (1980) considers various
moment-estimators and compares their asymptotic variances.
Ch. 23: Latent Variable Models in Econome~rics 1339
One sees that as an additional condition, plim(x, - F,) should be non-zero for p
to exist asymptotically. If this condition and the condition for the allocation rule
is satisfied, B is a consistent estimator of P. Wald also gives confidence intervals.
The restrictive aspect of the grouping method is the required independence of the
allocation rule from the errors e, and u , . ~
If no such rule can be devised, grouping
has no advantages over OLS. Pakes (1982) shows that under normality of the 5,
and a grouping rule based on the observed values of the x,, the grouping
estimator has the same asymptotic bias as the OLS estimator. Indeed, as he points
out, this should be expected since the asymptotic biases of the two estimators
depend on unknown parameters. If the biases were different, this could be used to
identify the unknown parameters.
If the conditions for the use of the grouping estimator are satisfied, several
variations are possible, like groups of unequal size and more than two groups.
[See, for example, Bartlett (1949), Dorff and Gurland (1961a), Ware (1972) and
Kendall and Stuart (1979, p. 424 ff.). Small sample properties are investigated by
Dorff and Gurland (1961b).]
8 ~ h e s eare sufficient conditions for consistency; Neyman and Scott (1951) give slightly weaker
conditions that are necessary and sufficient.
1340 D. J. Aigner el al.
The three estimators discussed so far can also be used in the functional model
under a somewhat different interpretation. The assumptions on cumulants or
moments are now not considered as pertaining to the distribution of 5, but as
assumptions on the behavior of sequences of the fixed variables. An example of
the application of the method of moments to a functional model can be found in
Drion (1951). Richardson and Wu (1970) give the exact distribution of grouping
estimators for the case that the groups contain an equal number of observations.
In conclusion, we mention that Kiefer and Wolfowitz (1956) have suggested a
maximum likelihood estimator for the non-normal structural model with one
regressor. A somewhat related approach for the same model appears in Wolfowitz
(1952). Until recently, it was not clear how these estimators could be computed,
so they have not been used in p r a ~ t i c e .Neyman
~ (1951) provides a consistent
estimator for the non-normal structural model with one regressor for whlch
explicit formulas are given, but these are complicated and lack an obvious
interpretation.
It appears that there exist quite a few consistent estimation methods for
non-normal structural models, that is, structural models satisfying the proposition
of Section 2.2. Unfortunately, most of these methods lack practical value, whereas
a practical method like the method of product cumulants turns out to have a very
large estimator variance in cases where it has been applied [Madansky (1959)l.
These observations suggest that non-normality is not such a blessing as it appears
at first sight. To make progress in practical problems, the use of additional
identifying information seems almost indispensable.
'For a recent operationalization, see, for example, Heckman and Singer (1982).
Ch. -73: Lorent Variable Models in Econometrics 1341
Since the moments of E, are all a function of ,a: one can easily generate equations
like (3.7) to identify the unknown parameters p, Po,P,, .a: The model is thus
identified even without using the observed variable x,! The extraneous informa-
tion used here is that we know the distribution function from which the latent
variable has been drawn, although we do not known its unknown parameter p.
The identification result remains true if we extend model (3.6) by adding
observable exogenous variables to the right-hand side. Such a relation may for
[,
example occur in practice if y represents an individual's wage income, indicates
whether or not he has a disease, which is not always correctly diagnosed, and the
other explanatory variables are years of schooling, age, work experience, etc. In
such an application we may even have more information available, like the share
of the population suffering from the disease, which gives us the parameter p. This
situation has been considered by Aigner (1973), who uses this knowledge to
establish the size of the inconsistency of the OLS estimator (with x, instead of the
unobservable 5,) and then to correct for the inconsistency to arrive at a consistent
estimator of the parameters in the model.
Mouchart (1977) has provided a Bayesian analysis for Aigner's model. A fairly
extensive discussion of errors of misclassiiication outside regression contexts has
been given by Cochran (1968).
e = (el.. .&,)I and V is the (n x k)-matrix with v; as its ith row. In this section we
assume the rows of Z either to be fixed or normally distributed. To remedy the
resulting underidentification, m 2 k 2 identifying restrictions are supposed to be
available:
where F has been defined in (3.10). Using the formula for the Cramer-Rao lower
bound for a constrained estimator [Rothenberg (1973b, p. 21)] we obtain as an
asymptotic lower bound for the variance of any estimator of 6 :
where t i , x,, u,, and 8 are k-vectors, y, and are scalars; u, - N(0, Q), with 52
non-singular. There is statistical independence across observations. The function
f is assumed to be twice continuously differentiable. Furthermore, Eu,E,= 0.
Let us consider the functional model.'' The likelihood of the observable
random variables y, and x, is given by:
The n-vector I;(:, 8 ) has f (ti, 8 )as its ith element. As in Section 2.2 identifiabil-
ity of the functional model can be checked by writing down the information
matrix corresponding to this likelihood. Again, identifiability does not guarantee
the existence of consistent estimators of 8 , 52, and a 2 . No investigations have
been carried out regarding conditions under which such consistent estimators
exist. Dolby (1972) maximizes L, with respect to 3 and 8 , assuming a 2 and 52 to
" w e are unaware of any studies that deal with a non-linear structural model.
Ch. 23: Lotent Variable Models in Econometrics 1345
both estimation methods are, of course, unbiased. Thus one should always include
a proxy, however poor it may be.
No such clear-cut conclusion can be obtained if also one or more elements of
are measured with error [Barnow (1976) and Garber and Klepper (1980)], or if
the measurement error in E,, is allowed to correlate with (,, [Frost (1979)l.
Aigner (1974b) considers mean square error rather than asymptotic bias as a
criterion to compare estimators in McCallum's and Wickens' model. He gives
conditions under which the mean square error of OLS with omission is smaller
than OLS with the proxy included. Giles (1980) turns the analyses of McCallum,
Wickens and Aigner upside down by considering the question whether it is
advisable to omit correctly measured variables if our interest is in the coefficient
of the mismeasured variable.
McCallum's and Wickens' result holds true for both the functional and
structural model. Aigner's conditions refer only to the structural model with
normally distributed latent variables. It would be of interest to see how his
conditions modify for a functional model.
It is a rather remarkable fact that in the structural model the inconsistent OLS
estimator can be used to construct consistent predictors, as shown by Johnston
(1972, pp. 290, 291). The easiest way to show this is by considering (2.10): y, and
x, are simultaneously normally distributed with variance-covariance matrix 2 as
defined in (2.5). Using a well-known property of the normal distribution we
obtain for the conditional distribution of y, given x , :
with y and a defined with respect to (2.9). Therefore, E( y l X ) = Xa. This implies
that a, the OLS estimator of a is unbiased given X, and E( X&IX) = Xa = E( y 1 X).
We can predict y unbiasedly (and consistently) by the usual OLS predictor,
ignoring the measurement errors. As with the preceding omitted variable problem,
we should realize that the conclusion only pertains to prediction bias, not to
precision.
The conclusion of unbiased prediction by OLS does not carry over to the
functional model. There we have:
Ch. 23: Lurenr Variable Models in Econometrics 1347
so that E( yl X, 2)= ZB, which involves both the incidental parameters and the
unidentified parameter vector 8. OLS predictions are biased in this case, cf.
Hodges and Moore (1972).
A somewhat different approach to prediction (and estimation) was taken by
Aigner and Goldfeld (1974). They consider the case where exogenous variables in
micro equations are measured with error but not so the corresponding aggregated
quantities in macro equations. That situation may occur if the aggregated
quantities have to satisfy certain exact accounting relationships which do not have
to hold on the micro level. The authors find that under certain conditions the
aggregate equations may yield consistent predictions whereas the micro equations
do not. Similar results are obtained with respect to the estimation of parameters.
In a sense this result can be said to be due to the identifying restrictions that
are available on the macro level. The usual situation is rather the reverse, i.e. a
model which is underidentified at the aggregate level may be overidentified if
disaggregated data are available. An example is given by Hester (1976).
Finally, an empirical case study of the effects of measurement error in the data
on the quality of forecasts is given by Denton and Kuiper (1965).
Without further restrictions we cannot say very much about the parameters of
main interest, 8. An easy-to-accept restriction would be that the estimates of u2
and the diagonal elements of K and D should be non-negative. If in addition we
assume that D is diagonal we obtain the following results.
Denote by w the k-vector of the'diagonal elements of D and by k the k-vector
of diagonal elements of K; B is the k x k diagonal matrix with the elements of 8
on its main diagonal. From (3.20)-(3.22) we derive as estimators for u2, w and k
(given B):
1
62=-(y>-B'~>),
n
a
where diag is the k-vector of diagonal elements of a.
1348 D.J . Aigner er al.
So 2 l ( ~ ' X ) - ' X ' y l= I&I and B must have the same sign as &. Inequality
(3.27) implies for this case I [( y ' y ) - ' ~ ' y ] - ' .Thus, a consistent estimator for
p must have the same sign as the OLS estimator and its absolute value has to be
between the OLS estimator and the reciprocal or" the OLS regression coefficient of
the regression of X on y.
For k > 1, such simple characterizations are no longer possible, since they
depend in particular on the structure of X'X and the signs of the elements of 8.
The only result that seems to be known is that if one computes the k + l
regressions of each of the variables y,, xi,,..., xi, on the other k variables and all
these regressions are in the same orthant, then j? has to lie in the convex hull of
these regressions. [Frisch (1934), Koopmans (1937), Klepper and Leamer (1984);
see Patefield (1981) for an elegant proof using the Frobenius theorem]. Klepper
and Leamer (1984) show that if the k + 1 regressions are not all in the same
orthant, if X is a k-vector not equal to ( l / n ) X ' y or the zero vector, and if
( X ' X ) - ' has no zero elements, then the set { U l b satisfying (3.26) and (3.27)) is
the set of real numbers. Obviously, if one is willing to specify further prior
knowledge, bounds can also be derived for k > 1. For example, Levi (1973,1977)
considers the case where only one of the exogenous variables is measured with
error and obtains bounds for the coefficient of the mismeasured variable. Differ-
ent prior knowledge is considered by Klepper and Leamer (1984).
A related problem is whether the conventional t-statistics are biased towards
zero. Cooper and Newhouse (1972) find that for k = 1 the t-statistic of the OLS
regression coefficient is asymptotically biased toward zero. For k > 1 no direction
of bias can be determined.
Although inequalities (3.26) and (3.27) were derived from the maximum
likelihood equations of the structural model, the same inequalities are derived in
the functional model, because & is simply the OLS estimator and j. the residual
variance estimator resulting from OLS. In fact, Levi only considers the OLS
estimator & and derives bounds for a consistent estimator by considering the
inconsistency of the OLS estimator.
Ch. 23: Lotent Var~ableModels in Econometrics 1349
Notice that the bounds obtained are not confidence intervals but merely
bounds on the numerical values of estimates. These bounds can be transformed
into confidence intervals by taking into account the (asymptotic) distribution of
the OLS estimator [cf. Rothenberg (1973a), Davies and Hutton (1975), Kapteyn
and Wansbeek (1983)l. One can also use the asymptotic distribution of the OLS
estimator and a prior guess of the order of magnitude of measurement error to
derive the approximate bias of the OLS estimator and to judge whether it is
sizable relative to its standard error. Thls gives an idea of the possible seriousness
of the errors-in-variables bias. This procedure has been suggested by Blomqvist
(1972) and Davies and Hutton (1975).
where
Q*
b
-- ( b - &,)'[(w'x)~
(x'x)-~x>,
w'w(x'w)-~- (x'x)-'1 ( b- ) (3.30)
(3.31)
bIV= ( w f X ) - l W > , (3.32)
-
Q=Qf-Q*, (3.33)
and
Q'= ( y - Xb)'( - Xb). (3.34)
Hitherto we have only discussed models with single indexed variables. As soon as
one has more than one observation for each value of the latent variable the
identification situation improves substantially. We shall illustrate this fact by a
few examples. We do not pay attention to matters of efficiency of estimation,
because estimation of these models is discussed extensively in the variance
components literature. [See for example, Amerniya (1971).] Consider the following
model:
The variables z,, and tiare for simplicity taken to be scalars; z,, is observable, ti
is not. A model like (3.35) may occur in panel studies, where n is the number of
individuals in the panel and m is the number of periods in which observations on
the individuals are obtained. Alternatively, the model may describe a controlled
experiment in which the index i denotes a particular treatment with m observa-
tions per treatment.
Ch. 23: Latent Variable Models in Econometrics 1351
where the {a,)are binary indicators. The resulting estimate of P is unbiased and
consistent. Although it is not possible to estimate A, the estimates of a, are
unbiased estimates of [,A so that the treatment effects are identified. A classical
example of this situation is the correction for management bias [Mundlak (1961))
if (3.36) represents a production function and 5, is the unobservable quality of
management in the ith firm, omission of 5 , would bias /3, whereas formulation
(3.36) remedies the bias.
A second situation which may occur is that for each latent variable there is one
fallible measurement: x , = 5,+ u,, i = 1 , ..., n. One measurement per 5, allows for
identification of all unknown parameters but does not affect the estimator of P, as
can be seen readily by writing out the required covariance equations.
The thtrd situation we want to consider is where there are m measurements of
5,:
Now there is overidentification, and allowing for correlation between u,, and u,,,
I # j, does not alter that conclusion. Under the structural interpretation, ML is
the obvious estimation method for this overidentified case. In fact, (3.35) and
(3.37) provide an example of the multiple equation model discussed in the next
section, where ML estimation will also be considered.
ML estimation for the functional model with replicated observations has been
considered by Villegas (1961), Barnett (1970), Dolby and Freeman (1975), and
Cox (1976). Barnett restricts hls attention to the case with only one independent
variable. Cox analyzes the same model, but takes explicitly into account the
required non-negativity of estimates of variances. Villegas finds that apart from a
scalar factor the variance-covariance matrix of the errors is obtained as the usual
analysis-of-variance estimator applied to the multivariate counterpart of (3.37).
The structural parameters are next obtained from the usual functional ML
equations with known error matrix. Healy (1980) considers ML estimation in a
multivariate extension of Villegas' model (actually a more general model of whlch
the multivariate linear functional relationship is a special case). Dolby and
Freeman (1975) generalize Villegas' analysis by allowing the errors to be corre-
lated across different values of i. They show that, given the appropriate estimator
1352 - D.J. Aigner et al.
of the incidental parameters. The parameters of the second stage distributions are
sometimes called hyperparameters.
The Bayesian analysis of latent variables models has mainly been restricted to
the simple linear regression model with errors-in-variables [i.e. (2.1) is simplified
to y, = p, + &ti + q ,with &, &, 5,scalars], although Florens et al. (1974) make
some remarks on possible generalizations of their analysis to the multiple regres-
sion model with errors-in-variables.
The extent to whch Bayesian analysis remedies identification problems de-
pends on the strength of the prior beliefs expressed in the prior densities. T h s is
illustrated by Lindley and El-Sayyad's analysis. In the simple linear regression
model with errors in the variables they specify a normal prior distribution for the
latent variables, i.e. the 6, are i.i.d. normal with mean zero and variance r , and
next a general prior for the hyperparameter r and the structural parameters.
Upon deriving the posterior distribution they find that some parts of it depend on
the sample size n, whereas other parts do not. Specifically, the marginal posterior
distribution of the structural parameters and the hyperparameter does not depend
on n. Consequently, this distribution does not become more concentrated when n
goes to infinity.
This result is a direct consequence of the underidentification of the model.
When repeating the analysis conditional on a given value of the ratio of the error
variances with a diffuse prior for the variance of the measurement error, the
posterior distribution of the structural parameters does depend on n and becomes
more and more concentrated if n increases. The marginal posterior distribution of
p, concentrates around the functional ML value. This is obviously due to the
identification achieved by fixing the ratio of the error variances at a given value.
The analyses by Zellner (1971, ch. V) and Florens et al. (1974) provide
numerous variations and extensions of the results sketched above: if one imposes
exact identifying restrictions on the parameters, the posterior densities become
more and more concentrated around the true values of the parameters when the
number of observations increases. If prior distributions are specified for an
otherwise unidentified model, the posterior distributions will not degenerate for
increasing n and the prior distributions exert a non-vanishing influence on the
posterior distributions for any number of observations.
4. Multiple equations
4. I. Instrumental variables
Due to the assumption of joint normality for E, u and I , all sample information
relating to the parameters in the model (4.1), (4.2) and (4.3) is contained in the six
Ch. 23: Lutent Vuriuhle Models in Econometrics
This system of six equations in six unknowns can easily be solved to yield
consistent estimators of UEE,P, y, a 2 , a,,,, and a,,. So, the introduction of the
indicator variable (or instrumental variable) z renders the model identified.
Since the number of equations in (4.5) is equal to the number of parameters,
the moment estimators are in principle also the ML estimators. This statement is
subject to a minor qualification when ML is applied and the restriction of
non-negativity of the error variances is explicitly imposed. Leamer (1978a) has
shown that the ML estimator of P is the median of SyZ/Sx,,S,,,/S,, and S,.,./S,.,y
where S indicates the sample counterpart of a, if these threequantities have the
same sign.
In the multivariate errors-in-variables [cf. (3.8), (3.9)] model we need at least
12 k indicator variables (or instrumental variables) in order to identify the
parameter vector 8. The following relation is then assumed to hold:
read:
2, = KP,
Z, = TKP,
Zxx= K + 9 ,
2,, = TK,
Z z z = r K r f+ 0.
The identification of this system can be assessed somewhat heuristically as
follows. Equations (4.7), (4.10) and (4.12) serve to identify the error variances a 2 ,
52 and 0 for given T , K and P. Substitution of (4.11) into (4.9) yields:
Sargan (1958) has shown that the weighting matrix G is optimal in the sense that
it has minimal asymptotic variance in the class of all linear combinations of
estimators whlch can be derived from (4.13). [See also Malinvaud (1970, section
20.5).] The asymptotic variance-covariance matrix of B is, both for 1 = k and
l>k:
When the researcher is in the happy situation that he has more instruments than
error-ridden variables (i.e. 1 > k), he may also consider applying ML to the full
model after imposing a sufficient number of identifying restrictions on (at least) r
and K. The LISREL program (see Section 5.3) is well-suited for this purpose.
The major problem involved with IV in the non-dynamic single equation
context, however, is to find instrumental variables. Columns of X without
measurement errors can be used as instruments, but it is often difficult to find
variables that are correlated with a variable in X and are not already explanatory
variables in the model under consideration. The method of grouping, discussed in
Section 3.2, can be considered as a special case of IV, where the instrument
consists of a vector of + 1's and - l's, allocating observations to the two groups.
The instrument should be uncorrelated with the measurement error in order to
have a consistent estimator of the slope parameters. This is the case, for instance,
when the size of the measurement error is bounded from above and the popula-
tion consists of two subsets separated by an interval at least as great as twice t h s
maximum. This situation is unlikely to occur in practice.
Factor analysis (FA), a method for dealing with latent variables with a venerable
history in psychometrics, is closely related to instrumental variables. In thls
section we will discuss some aspects of FA as far as it is relevant in the present
context without the pretension of coming anywhere near a complete survey. For a
more comprehensive coverage see, for example, Gorsuch (1974), Lawley and
Maxwell (1971), Harman (1967) or Bentler and Weeks (1980); econometricians
will find the book by Mulaik (1972) highly readable because of its notation.
The connection between the FA and IV models is as follows. Let, in (3.19), the
measurement error between the columns of Z be uncorrelated, i.e. the matrix fi
of measurement error variances and covariances is diagonal, and let the coefficient
matrix of Z, so far implicitly taken to be the unit matrix, be arbitrary. T h s means
that (i) the correlation between different columns of X is attributable to Z only,
and not to the measurement error, and (ii) X is, no longer considered to be a
1358 D.J. Aigner et 01.
and the estimation problem is to derive estimators for K, r and O from the
observed covariance matrix S,,. Without further information, the model is
clearly underidentified since postmultiplication of r by any non-singular ( k X k ) -
matrix T and replacing K by T-'K(T')-' leads to the same value of 2. There are
several ways to cope with this indeterminacy, each of which identifies a main
branch of factor analysis distinguished in the literature. [See, for example, Elffers
et al. (1978).]
An extreme case arises if k is taken equal to I. Then I' and K are of the same
order as ZZZ.This obviates the error term A, so O is put equal to 0. Next, the
indeterminacy may be solved by taking r to be the matrix of eigenvectors of
2,,, and K is the diagonal matrix containing the k eigenvalues of Zzz on its
main diagonal. The matrix Z T is called the matrix of principal components of Z.
[See, for example, Anderson (1958, ch. 12) and Kendall and Stuart (1979, ch. 43).]
This relation between principal components and FA is a matter of mathematics
only; conceptually, there is the essential difference that principal components is
not based on a statistical model; it is a data reduction technique.12
" ~ r i n c i ~ acomponents
l is sometimes used in econometrics when the number of observations is
deficient and one wants to reduce the number of regressors. Kloek and Mennes (1960) and Amemiya
(1966) explore this idea for simultaneous equations and propose using principal components of
predetermined variables.
Ch. 23: Latent Variable Models in Econometrics 1359
Apart from the principal components case, the number k of underlying factors
is usually set at a (much) lower value than I. There are two different approaches to
the normalization problem. In conJirmatory factor analysis, the researcher has a
number of a priori restrictions on r, K or O at his disposal that derive from say,
the interpretation of the factors [like the implicit unit coefficient restriction in
equation (3.9), where the factors correspond to phenomena that are in principle
observable] or an extension of the model whereby the latent variables are, in turn,
regressed on other, observable variables (an example of which is to be discussed
below). These restrictions may serve to remove all indeterminacy in the parame-
ters. In exploratory factor analysis, however, the researcher is unsure about the
meaning of the factors and would like to treat them in a symmetric way. The
usual approach then is to choose T such that T-'K(T')-' is the unit matrix, i.e.
the factors are uncorrelated. For f = TT:
There is still some indeterminacy left, since the columns of may be reweighted
with any orthonormal matrix without affecting 2,,. This freedom may be used to
make T'WIT a diagonal matrix, which is convenient in the course of ML
estimation of the parameters [Joreskog (1967)], or can be used at will to obtain
some desired pattern in T. Such a reweighting is called a rotation by factor
analysts, and a huge literature has evolved around the pros and cons of all
possible types of rotations. Shapiro (1982) has investigated the identification of
the exploratory FA model. He shows that it is identified (apart from the
+
indeterminacies in r ) if and only if (1 - k ) * 2 I k .
Again, it should be stressed that the above treatment of FA is meant only to
show its relation to the measurement error problem and to show that "factor
analysis is just a generalization of the classical errors-in-the-variables model"
[Goldberger (1972a, p. 992)].
with
1360 D.J. Aigner et a/.
The model has two kinds of restrictions on its parameters. First, the coefficient
matrix has rank unity, and the disturbances have a variance-covariance matrix:
for any scalar @. This indeterminacy may be solved by fixing a,, at some
non-negative value, e.g. a,, = 0. This means that, in the case of O unrestricted, the
model is operationally equivalent to a model without an error in the cause
equation.
The MIMIC model relates a single latent variable to a number of indicators
and a number of causes. The extension to a more general multiple equations
model is obvious. A very general formulation is the following one, proposed by
Robinson (1974):
exogenous variables, some of whch (W,) may also occur in the indicator
equation. Note that there is no simultaneity in the model: the causal chain is in
one direction, the W 's determining Z directly and, after a detour, via Z. For this
model, Robinson (1974) discusses identification and presents a (limited informa-
tion) estimation method. The problems involved are apparent from the reduced
form of (4.27) and (4.28):
+
where each row has variance-covariance matrix O TJ/Tf.The model has, just
like the MIMIC model, patterned coefficient matrices and a patterned
variance-covariance matrix. Some of the coefficients are clearly underidentified.
After imposing appropriate restrictions, overidentification may result. Instead of
Robinson's method, one might estimate the (appropriately restricted) model by
FIML, using (for instance) the LISREL computer program (see Section 5.3).
What should be clear from the development in this section (especially this
subsection) is that an important convergence in methodology between psychomet-
rics, sociometrics and econometrics has taken place over the last decade. The
input into econometrics from the other two social sciences induced a breakthrough
in the measurement error problem; in return, econometrics can contribute rigor in
the fields of identification, estimation and hypothesis testing, areas where psycho-
logical and sociological researchers tend to be somewhat more casual than
econometricians.
5. Simultaneous equations
Stripped to its bare essentials, the linear simultaneous equations model with latent
variables is the following. Let Z be an (n X L)-matrix of observations on an
( L X 1)-vector with n data points. Let Z be generated by an unobservable, "true"
part z of order (n X L ) and an (n X L)-matrix U of measurement errors, each
row of which is independently N(0,D) distributed, with D an ( L X L)-matrix:
this section we will discuss some equivalent ways of assessing the identifiability of
a simultaneous equations model containing latent variables. Complications arise
when there are latent variables which enter into more than one equation or when
the measurement error of latent variables in different equations is correlated.
Then, identification cannot be settled on an equation-by-equation basis anymore
and the structure of the total model has to be taken into consideration.
When an exogenous variable is measured with error, its observed value is no
longer independent of the equation's disturbance and may be considered as an
additional endogenous variable. Accordingly, we may expand the model by an
additional relation. This approach is due to Chernoff and Rubin (1953) and is
also used by Hausman (1977). As an example, consider the two-equation model of
Section 1.4 (in vector notation):
"Not only is it possible to transform a model with errors in variables into one without rnismeasured
variables, one can also reformulate standard simultaneous equation models as functional models. For
reasons of space we do not give the relationship between both models, but refer to Anderson (1976)
instead. Among the results of exploring the link between functional and simultaneous models are
asymptotic approximations to the distributions of various estimators. See Anderson (1976, 1980) and
Patefield (1976) for details.
Ch. 13: Larent Var~ableModels in Econometrics 1365
This reformulation of the system may be used to assess the state of identification.
Still, this is no standard problem, since the variance-covariance matrix of the
disturbances ( 3 , say) of the extended structural model (5.6) is restricted:
So, two elements of 3 are restricted to be zero. Identification for this type of
restricted model was studied by Wegge (1965) and Hausman and Taylor (1983),
who present rank and order conditions for identification. Below, we will discuss
identification of the simultaneous model with latent variables using a somewhat
different approach.
Two features of this extension of the model should be noted. First, in order for
(5.5) to make sense, the unobservable should be correlated with at least one other
exogenous variable, i.e. a, or a, should be non-zero. Second, (5.5) fits in the
Zellner-Goldberger approach of relating an unobservable to other, observable
"causes". In the simultaneous equations context, such an additional relation
comes off quite naturally from the model.
A direct approach to the assessment of the identification of the simultaneous
equations model with latent variables is the establishment of a rank condition
that generalizes the rank condition for the usual model without latent variables.
Let the model be:
When B, r and J2 are known, (5.10) and (5.12) serve to identify 2 and 2,,; so
a priori information from (5.13) and identification of the full model is equivalent
to the identification of B, r and S2 (e.g. normalizations, exclusions, and symmetry
restrictions on a).
A necessary and sufficient rank condition for identification can now be devel-
oped as follows. Define a, 5 vec(B, r)', o = vec S2, and let a = (ah, of)' be the
vector of all parameters. Then the a priori information can be written as:
+
has rank G~ GK + K 2 , i.e. J has full column rank [and if a is locally
isolated-see, for example, Fisher (1966)l. It remains to evaluate J. Using
standard matrix derivation methods, one readily obtains:
As an example, consider the simple model (5.3). The a priori restrictions are
Pll = /322=1, y12= YI3 = y21= 0, and, when uncorrelated measurement error is
assumed, 52 = 0 apart from Q,,. So, there are G(G + K ) + K 2 = 19 parameters on
+ + +
the one hand and GK + m = 6 2 3 8 = 19 restrictions on them. Denoting
Ch. 23: Lotent Variable Models in Econometrics
non-zero elements by a "+ " for the sake of transparency, then J is:
+ O O O O 0 0 0 0 0 0 0 0 0 0 0 0 0 0
o o o + o 0 0 0 0 0 0 0 0 0 0 0 0 0 0
o o o o + 0 0 0 0 0 0 0 0 0 0 0 0 0 0
o o o o o o + o o o o o o o o o o o o (5.18)
0 0 0 0 0 o o + o o 0 0 0 0 0 0 0 0 0
The rank of this matrix is easily assessed, as follows. The last 13 rows correspond
to normalizations and exclusions (i.e. it shows the incidence of zero and non-zero
elements in R); the columns in which non-zero elements occur are clearly linearly
independent. So, the rank of .f equals 13 plus the rank of the matrix that remains
after deleting the rows and columns in which these non-zero elements occur:
This matrix generally has rank 6, so the rank of J equals 19. The model is hence
identified.
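The generic-rank argument used here is easy to mechanize. The sketch below (ours, not part of the original treatment; the 6 × 6 incidence pattern is a hypothetical stand-in for (5.19)) fills the "+" positions of an incidence matrix with random numbers and computes the numerical rank, which with probability one equals the rank determined by the zero pattern:

import numpy as np

def generic_rank(pattern, trials=20, seed=0):
    """Generic rank of an incidence matrix: put random values in the '+'
    (non-zero) positions and take the largest numerical rank over trials."""
    rng = np.random.default_rng(seed)
    pattern = np.asarray(pattern, dtype=float)
    return max(np.linalg.matrix_rank(rng.standard_normal(pattern.shape) * pattern)
               for _ in range(trials))

# Hypothetical 6 x 6 pattern standing in for the reduced matrix (5.19);
# 1 marks a '+' entry, 0 a structural zero.
pattern = [[1, 1, 0, 0, 0, 0],
           [0, 1, 1, 0, 0, 0],
           [0, 0, 1, 1, 0, 0],
           [0, 0, 0, 1, 1, 0],
           [0, 0, 0, 0, 1, 1],
           [1, 0, 0, 0, 0, 1]]
print(generic_rank(pattern))        # 6: this pattern has full generic rank

Replacing the pattern by the one implied by a particular specification reproduces the kind of count made in the text (rank 13 + 6 = 19).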
Now suppose that ξ₂, instead of ξ₁, is unobservable. In terms of the scheme, this
means that, in the last column of (5.19), the "+" moves from the first to the fifth
position, introducing a linear dependence between the last four columns. Under
this new specification, the model is underidentified.
This example serves to illustrate a few points. First, the identifiability of the
model depends not only on the number of unobservable variables, but also on
their location. A measurement error in the first equation does not impair
identifiability, since this equation is overidentified when all exogenous variables
are measured accurately. This overidentification allows for identification of the
measurement error variance of ξ₁. The second equation is just-identified and
hence becomes underidentified when one of its exogenous variables cannot be
observed.
Second, each exogenous variable occurs in exactly one equation. This means
that the last column in the reduced "incidence" matrix in (5.19) contains just a
single non-zero element. In such a situation, identification can still be assessed
equation by equation. The situation becomes more complicated when a particular
unobservable occurs in more than one equation. Then the identifiability of the
equations sharing that unobservable becomes intertwined.
Third, the identifiability of the model depends basically on the pattern of zero
and non-zero elements in J only; further information as to their exact values is
not needed. (It is assumed that the covariance matrices of the latent exogenous
variables and of the disturbances have full rank and that the a priori
information is in the form of exclusions and normalizations.) Note that the
pattern of correlations between the ξ's does matter; if, say, ξ₁ is uncorrelated with
ξ₂ and ξ₃, (5.19) becomes a matrix in which the second and sixth columns are
proportional. So, the rank of J is reduced by one. This problem was already noted
when discussing (5.5).
On the basis of the Jacobian, rank and order conditions for identification, both
necessary and sufficient, can be derived, and a number of these results have been
reported in the literature. They pertain to identification of the complete system as
well as to identification of a single equation. Contrary to the situation with
simultaneous equations without measurement error, this distinction is not trivial:
a certain latent variable may enter into more than one equation, thereby tying
together the identification of these equations. This problem occurs even when
each latent variable enters into a single equation only, as soon as the measure-
ment errors have a non-zero correlation.
In the first published paper on the problem, Hsiao (1976) presents a number of
sufficient conditions for identification of a single equation of the model when the
measurement errors are uncorrelated. For the correlated case, he derives, on the
basis of the Jacobian, a necessary and sufficient rank condition for a single
equation, plus a derived necessary order condition. Geraci (1976) uses the
Jacobian to derive, for the uncorrelated measurement error case, an "assignment
condition" for identification of the complete model. This is a necessary condition,
which can be verified solely on the basis of knowledge about the location of the
latent variables and the number of overidentifying restrictions on each equation
in the case of no measurement error. These "conditional" overidentifying restric-
tions can be used to identify variances of measurement error of exogenous
variables in the equations where the restrictions apply. If it is possible to assign
each error variance to a particular equation, the assignment condition is verified.
In a recent paper, Geraci (1983) presents rank conditions for individual structural
relations, both for a general model, where U and V may be correlated and Ω is
non-diagonal, and for the restricted model with Ω diagonal.
Estimation of the simultaneous equations model with latent variables can be
done by means of a program for the analysis of covariance structures, like
LISREL (see Section 5.3). Under normality, LISREL delivers FIML estimates of
the model parameters. (The newer versions of LISREL also have a least-squares
option available.)
With the development of LISREL, the scope for alternative estimation methods
seems to be limited. There are a few papers that propose other estimators. Geraci
(1977) proposes three estimators that are all asymptotically equivalent to FIML
but are likely to be simpler to compute. These estimators are based on the GLS
approach due to Browne (1974), which leads to a simpler optimization criterion.¹⁴
Hsiao (1976) presents, for the case of uncorrelated measurement error, a FIML
estimator based on a transformation of the model, and a single-equation estima-
tor.
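For concreteness, here is a minimal sketch (ours) of the kind of quadratic criterion involved, assuming Browne's weight matrix is taken equal to the sample covariance matrix S (one standard choice):

import numpy as np

def gls_discrepancy(S, Sigma):
    """Browne-type GLS discrepancy 0.5 * tr{[(S - Sigma) S^{-1}]^2}, where S is the
    sample covariance matrix and Sigma the model-implied covariance matrix."""
    R = (S - Sigma) @ np.linalg.inv(S)
    return 0.5 * np.trace(R @ R)

Because the criterion is quadratic in the residual covariances and involves no log-determinant, its minimization is computationally lighter than FIML, which is the sense in which such estimators are "simpler to compute".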
The breakthrough of latent variable modelling which has taken place in econo-
metrics over the last decade has been accompanied by the availability of succes-
sive versions of the computer program LISREL. LISREL is particularly well-suited
to deal with systems of linear structural multiple and simultaneous equations
("structural" in the sense of modelling the causal process, not as the opposite of
functional!). This section describes the model handled by LISREL and discusses
its importance for latent variable modelling in econometrics. For a full account,
see Jöreskog and Sörbom (1977, 1981). LISREL (Linear Structural Relations, a
registered trademark, but we will use the name to denote both the program and
the model) is not the only program available, nor is it the most general linear
¹⁴See Jöreskog and Goldberger (1972) for a clear exposition of GLS vis-à-vis ML in the context of
factor analysis.
model; yet its general availability and user-friendliness have made it perhaps the
most important tool for handling latent variables at present.
The idea behind LISREL and similar programs is to compare a sample
covariance matrix with the parametric structure imposed on it by the hypothe-
sized model. Therefore, this type of analysis is frequently called the 'analysis of
covariance structures' [e.g. Joreskog (1970); see Bentler (1983) for an excellent
overview].
The general format of the model to be analyzed by LISREL is as follows, using
the notation of the LISREL manual. Let η and ξ be (m × 1) and (n × 1) vectors
of latent dependent and independent variables, respectively, satisfying a system of
linear structural relations:

Bη = Γξ + ζ,

with B and Γ (m × m) and (m × n) coefficient matrices, B being non-singular,
and ζ an (m × 1)-vector of disturbances. It is assumed that η, ξ and ζ have zero
expectations, and that ξ and ζ are uncorrelated. Instead of η and ξ, (p × 1) and
(q × 1)-vectors y and x are observed such that:

y = Λ_y η + ε,

and

x = Λ_x ξ + δ,

with ε and δ vectors of measurement errors, uncorrelated with η and ξ. The
parameters of the model are the elements of B, Γ, Λ_y and Λ_x and of the
covariance matrices of ξ, ζ, ε and δ; each of these elements may be fixed at a
known value, left free, or set equal to other parameters. Given these restrictions
and the structure that (5.24) imposes on the data, LISREL computes estimates of
the parameters. These estimates are the FIML estimates when (y′, x′) is normally
distributed, i.e. when the criterion

F = log|Σ| + tr(SΣ⁻¹) − log|S| − (p + q)

is minimized, where S denotes the sample covariance matrix of (y′, x′)′ and Σ its
model-implied counterpart from (5.24).
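As an illustration of how such a criterion can be minimized in practice, here is a small Python sketch (ours, not the LISREL program; the one-factor measurement structure, sample size and parameter names are hypothetical) that builds a model-implied covariance matrix and minimizes the ML discrepancy numerically:

import numpy as np
from scipy.optimize import minimize

def implied_cov(theta, p):
    # one-factor structure (hypothetical example): Sigma = lam lam' + diag(psi)
    lam = theta[:p]
    psi = np.exp(theta[p:])           # variances kept positive via exp
    return np.outer(lam, lam) + np.diag(psi)

def f_ml(theta, S):
    p = S.shape[0]
    Sigma = implied_cov(theta, p)
    sign, logdet = np.linalg.slogdet(Sigma)
    if sign <= 0:
        return 1e10                   # outside the admissible region
    return logdet + np.trace(S @ np.linalg.inv(Sigma)) - np.linalg.slogdet(S)[1] - p

rng = np.random.default_rng(0)
lam_true, psi_true = np.array([1.0, 0.8, 0.6, 0.9]), np.array([0.3, 0.4, 0.5, 0.2])
xi = rng.standard_normal(500)
Y = np.outer(xi, lam_true) + rng.standard_normal((500, 4)) * np.sqrt(psi_true)
S = np.cov(Y, rowvar=False)

theta0 = np.concatenate([np.ones(4), np.zeros(4)])
res = minimize(f_ml, theta0, args=(S,), method="BFGS")
print(res.x[:4])                      # estimated loadings (up to sign)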
6. Dynamic models
¹⁵For a possible generalization of the results in this section to non-stationary cases, see, for example,
Hannan (1971) and Maravall (1979).
¹⁶As discussed before, when variables are normally distributed all the information is contained in
the first and second moments. When variables are not normally distributed, additional information
may be contained in higher moments, which can be used to identify and estimate unknown
parameters.
are greater than one in absolute value and the two sets have no roots in
common.¹⁷ The endogenous and exogenous variables, η_t and ξ_t, are assumed to be
measured with error, according to (2.2) and

where u_t is white noise with mean zero and constant variance σ_uu.
For simplicity, we shall for the moment assume that ε_t and ξ_t are white noise.
Then:
where

σ_ww(s) = cov(w_t, w_{t−s}) = 0,   for |s| > max(p, q) ≡ τ.
As this is the covariance function of a τth order moving average process, all
information about the unknown parameters is contained in the variance and the
first few auto- and cross-covariances of the observed series.
¹⁷For the generalization of results contained in this section to the non-stationary case, see Maravall
(1979).
More precisely, the unknown parameters are determined by the 0, 1, ..., τ + p
autocovariances of y and the 0, 1, ..., q cross-covariances between y and x.
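A sketch (ours) of these sample quantities, i.e. the autocovariances of y and the cross-covariances between y and x at a given lag:

import numpy as np

def autocov(y, lag):
    """Sample autocovariance of y at the given lag (divisor n)."""
    y = np.asarray(y, dtype=float) - np.mean(y)
    n = len(y)
    return y[lag:] @ y[:n - lag] / n

def crosscov(y, x, lag):
    """Sample covariance between y_t and x_{t-lag} (divisor n)."""
    y = np.asarray(y, dtype=float) - np.mean(y)
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(y)
    return y[lag:] @ x[:n - lag] / n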
From (6.1) we know that these cross- and autocovariances satisfy:
and
where J₁, J₂, and J₃ are the partial derivatives of (6.6), (6.7), and (6.8) with respect
to the β's, the γ's and the three variances σ_εε, σ_uu, σ_vv; J₁ and J₂ have p + q + 4
columns while J₃ has just p.
Defining the (p × (p + q + 4)) matrix (J′, Z′)′, it is easy to see that rank(J) equals
the rank of this matrix. It follows that the autoregressive parameters β₁, β₂, ..., β_p
are identified.
(For details of the identification conditions relying on the rank of a Jacobian
matrix, see Chapter 4 in this Handbook by Hsiao.)
The Jacobian J₁ is of the form:

For the general model with K exogenous variables, each measured with error and
entering with its own lag distribution,

y_i = β₁y_{i−1} + ... + β_p y_{i−p} + Σ_{k=1}^{K} (γ_{k0}ξ_{ki} + γ_{k1}ξ_{k,i−1} + ... + γ_{kq_k}ξ_{k,i−q_k}) + ε_i,     (6.12)

where x_{ki} = ξ_{ki} + u_{ki}, and ξ_{ki}, u_{ki} are mutually independent white noises,
Maravall (1979) and Maravall and Aigner (1977) obtain the following result:
If the (K + 1) integers p, q₁, ..., q_K are arranged in increasing order (ties are
immaterial), and q*_j denotes the one occupying the jth place in this new sequence,
(6.12) is locally identified if and only if q*_j ≥ j, for j = 1, 2, ..., K + 1.
When the shocks ε_i are serially correlated, the above results on identification
will have to be modified. We first consider the case where ε_i is an sth order moving
average process, ε_i = a_i + θ₁a_{i−1} + ... + θ_s a_{i−s}, where θ_s ≠ 0 and a_i is white noise
with mean zero and variance σ_aa.
The j-lag autocovariances of ε_i will be equal to zero for j > s. In other words,
the s + 1 unknown parameters θ₁, ..., θ_s and σ_aa only appear in the variance and
first s-lag autocovariances of y. If the other parameters of the model are identified,
the autocovariance functions of y can be rewritten in terms of the θ's and σ_aa as in
the case of a standard sth order moving average process. Thus a unique solution for
them exists [for details, see Maravall (1979)]. However, if the variance and first s
autocovariance functions of y are used to identify this set of parameters, it means
that we have (s + 1) fewer equations to identify the other parameters.¹⁸ Assuming
ε_i to be an sth order moving average process, (6.12) is locally identified if and only if

q*_j ≥ j + s,   j = 1, 2, ..., K + 1.
Alternatively, suppose we assume that the shocks, ε_i, follow a stationary rth
order autoregressive process, ε_i = ρ₁ε_{i−1} + ... + ρ_r ε_{i−r} + a_i. As we can see from
(6.1), under this assumption the autocovariance functions of y alone can no longer
be used to identify β. However, β can still be identified by the cross-covariance
functions:

for j > max(p, q) [or see (6.11)]. We also note that for j > max(p, q) + r, the
autocovariance function of y is:

where

Once the β's are identified by (6.13), the autocovariance function appearing in
(6.14) is identified as well. Therefore, the ρ's are identifiable by (6.15); hence so is
σ_aa. Thus, contrary to the case of white noise shocks, where σ_uu appears only in
the 0-lag autocovariance equation, σ_uu can now be identified through the j-lag
autocovariance equations of y when j > max(p, q). In a way, the autoregressive
shocks help to identify a model by reducing the number of unknowns by one.
Assuming ε_i to be a stationary rth
" ~ o t enow that this parameter set no longer needs to include a,, which can be identified from 9 ' s
and a,.
order autoregressive process, model (6.12) is locally identified if and only if

q*_j ≥ j − 1,   j = 1, ..., K + 1.
Combining these two results we have the general result with regard to autocor-
related shocks. If the ε_i follow a stationary autoregressive moving average process
of order (r, s), ε_i = ρ₁ε_{i−1} + ... + ρ_r ε_{i−r} + a_i + θ₁a_{i−1} + ... + θ_s a_{i−s}, we have
that model (6.12) is locally identified if and only if (a) when r > s, q*_j ≥ j − 1; (b)
when r ≤ s, q*_j ≥ j + s − r, for j = 1, ..., K + 1.
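The combined condition is simple to check mechanically. A minimal sketch (ours): given the autoregressive order p of the dependent variable, the lag orders q_1, ..., q_K of the error-ridden exogenous variables, and ARMA(r, s) shocks, it verifies the local identification condition just stated.

def locally_identified(p, q_orders, r=0, s=0):
    """Check the local identification condition for model (6.12):
    sort the K+1 integers p, q_1, ..., q_K; with q*_j the j-th smallest,
    require q*_j >= j - 1 if r > s, and q*_j >= j + s - r otherwise
    (white noise shocks correspond to r = s = 0, giving q*_j >= j)."""
    q_star = sorted([p] + list(q_orders))
    bound = (lambda j: j - 1) if r > s else (lambda j: j + s - r)
    return all(qj >= bound(j) for j, qj in enumerate(q_star, start=1))

# e.g. white noise shocks, p = 2, two exogenous variables with lag orders 1 and 3:
# locally_identified(2, [1, 3])        -> True
# first-order moving average shocks (s = 1) with the same orders:
# locally_identified(2, [1, 3], s=1)   -> False  (q*_1 = 1 < 1 + 1)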
These results are based on the assumption that the exogenous variables are
serially and mutually uncorrelated. If they are correlated, additional information
will be available in the cross- and autocovariance functions of the y's and x's,
and hence conditions for identification may be relaxed.
The main reason that a dynamic structure helps in identifying a model is
our strong assumption that measurement errors are uncorrelated. This
assumption means that cross- and autocovariances of the observed variables equal
the corresponding ones of the unobserved variables. When measurement errors
are autocorrelated, the problem becomes very complicated. For some examples,
see Maravall (1979) and Nowak (1977).
We have seen how a dynamic structure may affect the identification of a single
equation model. The basic idea carries through to the dynamic simultaneous
equation model. However, the problem is complicated by the interrelationships
among variables, which means in general that stronger conditions are required
than in the single equation model to ensure the proper rank of the Jacobian. We
illustrate the problem by considering the following simple model:
where η and ξ are (G × 1) and (K × 1) vectors of jointly dependent variables and
exogenous variables, respectively; ε is a (G × 1) vector of disturbance terms with
covariance matrix Σ. We assume that B₀ is non-singular and that the roots of
|B₀ + B₁L| = 0 lie outside the unit circle. We again assume that the exogenous
variables ξ are stationary and that the disturbance ε is white noise. The η and ξ are
unobservable. They are related to the observable y and x by:
and (2.2).
Since the measurement errors are assumed to be serially uncorrelated, we know
that C_yy(τ) and C_xy(τ) satisfy:
and
We stack (B₀, B₁, Γ) into a (1 × (2G² + GK))-vector λ′ and assume that it
satisfies R linear restrictions:

where Φ is an (R × (2G² + GK))-matrix with known elements. Let χ′ and ω′
denote the (1 × n) and (1 × l) vectors consisting of the unknown elements of Σ and
Ω. Letting α′ = (λ′, χ′, ω′), then α′ has to satisfy (6.19)-(6.23). Now we know that
the 1 × (2G² + GK + n + l) parameter vector α′ is locally identified if and only if
the Jacobian
has rank (2G² + GK + n + l) around its true value, where H is a (G² × n) matrix
whose elements are either zero or elements of B₀, and U is a (GK × l) matrix
whose elements are either zero or elements of Γ.
Unfortunately, this condition is usually difficult to check in practice. If we
know that the matrix
[ C_yy(1)    C_xy(2) ]
[ C_yy(2)    C_xy(3) ]
[   ...        ...   ]                  (6.25)
[ C_yx(0)′   C_xx(1) ]
[ C_yx(1)′   C_xx(2) ]
[   ...        ...   ]
has rank (G + K), the G independent columns of (B₀, B₁, Γ)′ will form a basis of
the column kernel of the transposes of (6.21) and (6.22). Then by an argument
similar to Fisher's (1966), we can show that the usual order and rank conditions
are necessary and sufficient to identify α′. However, because of the interrelation
among the G different variables, we need a stronger condition than in the
univariate case (Section 6.1) to ensure the rank of (6.25). Using the results of
Hannan (1975, 1976) we know that one such condition is¹⁹ to assume that B₁ is
non-singular and that C_xx(1) is non-singular, with C_xx(q) = 0 for some q ≥ 2.
Under these assumptions, the matrix
Again, the rank of (6.27) is not easy to check. However, under certain
conditions [Hsiao (1979)] the matrix
has rank (G + K), and hence the usual order and rank condition is necessary and
sufficient to identify (6.16).
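An empirical check of the kind of rank condition just described can be sketched as follows (ours; the exact block layout of (6.25) is only schematic here, since the display itself is not reproduced above):

import numpy as np

def cov_block(A, B, lag):
    # sample covariance block E[a_t b_{t-lag}'] for (T x dim) arrays, divisor T
    A = A - A.mean(axis=0)
    B = B - B.mean(axis=0)
    T = A.shape[0]
    return A[lag:].T @ B[:T - lag] / T

def stacked_matrix(Y, X, max_lag):
    # schematic analogue of (6.25): stack block rows of auto- and cross-covariances;
    # under the conditions in the text its rank should be G + K
    rows = []
    for j in range(1, max_lag + 1):
        rows.append(np.hstack([cov_block(Y, Y, j), cov_block(Y, X, j + 1)]))
    for j in range(0, max_lag):
        rows.append(np.hstack([cov_block(X, Y, j), cov_block(X, X, j + 1)]))
    return np.vstack(rows)

# rank check, given (T x G) data Y and (T x K) data X:
# M = stacked_matrix(Y, X, max_lag=4); print(np.linalg.matrix_rank(M))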
y_t = η_t + u_t,
and
x_t = ξ_t.
Assuming ε_t to be white noise, the composite disturbance term w_t has variance and
autocovariances:
Clearly, this has the property of a first order MA process. Establishing the
equivalences:
where ρ = σ_εε/σ_uu. We choose the root which is greater than unity as the solution.
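To see where such a root condition comes from, consider an illustrative composite disturbance of the form w_t = ε_t + u_t − βu_{t−1} (an assumption on our part, since (6.29) itself is not reproduced above). Its second moments are

\gamma_0 = \operatorname{var}(w_t) = \sigma_{\varepsilon\varepsilon} + (1+\beta^2)\,\sigma_{uu},
\qquad
\gamma_1 = \operatorname{cov}(w_t, w_{t-1}) = -\beta\,\sigma_{uu},
\qquad
\gamma_s = 0 \quad (|s| \ge 2).

Writing w_t = a_t + \theta a_{t-1} for the equivalent MA(1) process gives \gamma_1/\gamma_0 = \theta/(1+\theta^2), i.e.

\theta^2 - \frac{\gamma_0}{\gamma_1}\,\theta + 1 = 0,

whose two roots are reciprocals of each other; fixing which root is used (in the text, the root exceeding unity) pins θ down, and the resulting link between θ, β, σ_εε and σ_uu is exactly the kind of highly non-linear, polynomial-root restriction discussed next.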
It is clear from this example that the restrictions are highly non-linear, and
arise as the solution of the roots of a polynomial. It is not an easy matter to
impose the requisite restrictions. Generally, it is impossible to derive an analytical
solution for models with composite disturbance terms. Pagan (1973) has, there-
fore, resorted to numerical alternatives in order to obtain efficient estimates.²⁰
Let α denote the (m × 1) vector of unknown parameters. To obtain an estimate of
α, Pagan (1973) adopts the Phillips/Box-Jenkins methodology by minimizing the
sum of squared disturbances Σ_t ε_t² with respect to α with the aid of the
Gauss-Newton algorithm, leading to the following iterative formula:
²⁰Bar-Shalom (1972) has suggested a computationally simpler iterative scheme which involves
solving the likelihood function as a system of non-linear equations in the parameters and the
unobservables η. The system of non-linear equations is then separated into two interconnected linear
problems, one for the η, the other for the parameters. Besides the problem of the non-existence of the
MLE in this approach, it is dubious that his method will have good convergence properties, although
he did report good convergence in his numerical examples.
where ε denotes the disturbance vector (ε₁, ..., ε_n). Thus, the problem is shifted
to one of computing derivatives.
Of course, to complete the algorithm we need to specify the process for
determining ε̂ given α. One possibility would be to solve for the roots of the
covariance generating function. However, Pagan (1973) reports that this approach
revealed computational difficulties if the order of the moving average process was
high. Hence, Wilson's (1969) method for factoring a covariance function into its
moving average form was adopted.
The global minimum solution of Pagan's (1973) method is asymptotically
equivalent to that of the maximum likelihood method, and hence is consistent and
asymptotically normally distributed. However, there is no guarantee that the
convergent solution is a global minimum. Therefore it is advisable to start the
iteration from a consistent estimate and perform a number of experiments with
other starting values.
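A minimal sketch (ours, not Pagan's program) of the numerical strategy for a hypothetical specification y_t = βy_{t−1} + γx_t + w_t, with the composite disturbance treated as an MA(1) process a_t + θa_{t−1}: recover the innovations by filtering and minimize their sum of squares with a Gauss-Newton type routine (scipy.optimize.least_squares). For brevity the cross-equation restriction tying θ to β and the variance ratio, which the text emphasizes is the hard part, is not imposed here.

import numpy as np
from scipy.optimize import least_squares

def innovations(params, y, x):
    # residuals for y_t = b*y_{t-1} + g*x_t + a_t + th*a_{t-1} (hypothetical form)
    b, g, th = params
    a = np.zeros_like(y)
    for t in range(1, len(y)):
        w = y[t] - b * y[t - 1] - g * x[t]      # composite disturbance
        a[t] = w - th * a[t - 1]                # invert the MA(1) part
    return a[1:]

rng = np.random.default_rng(1)
n, b0, g0, th0 = 400, 0.6, 1.0, -0.4
x = rng.standard_normal(n)
a = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = b0 * y[t - 1] + g0 * x[t] + a[t] + th0 * a[t - 1]

fit = least_squares(innovations, x0=[0.1, 0.5, 0.0], args=(y, x))
print(fit.x)     # estimates of (b, g, th); consistent starting values are advisable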
When the exogenous variables are also measured with error (i.e. x_t = ξ_t + v_t
with v_t not identically zero), Pagan's (1973) method cannot be applied and neither can the iterative
schemes suggested by Aoki and Yue (1970), Cox (1964), Levin (1964), Ljung
(1977), etc. The main problem appears to be the correlation between the mea-
sured exogenous variables and the composite disturbance terms. If there is prior
knowledge that measurement errors appear only at some frequencies [e.g. higher
frequencies, Engle and Foley (1975)], or in other words that only a portion of the
spectrum satisfies the model, Engle (1974, 1980) and Hannan (1963) have
suggested a band spectrum approach. We illustrate their approach by considering
model (6.29).
The spectrum approach to estimating α involves first transforming the model
by the (n × n) unitary matrix A with (j, l)th element equal to:

A_jl = (1/√(2πn)) exp(iλ_j l),   λ_j = 2πj/n,   j, l = 1, ..., n.

Ignoring the end-effects, which are asymptotically negligible, we can write
the log-likelihood function of (6.29) as:
and † denotes the complex conjugate of the transpose. Maximizing (6.38) with
respect to unknowns we obtain the (full) spectrum estimates.
If only a subset of the full spectrum, say S, is assumed to satisfy the model, we
can maximize (6.38) with respect to this set of frequencies, which leads to the
estimator:
where f_yx(t_j) denotes the cross-spectral density between y and x. Under the
assumption of smoothness of the spectral density, it can be shown that the band
spectrum estimate (6.40) is consistent if a consistent estimate of f_w is available.
One way to obtain a consistent estimate of f_w is to substitute a consistent
estimate of β into the model-implied expression for f_w.
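A sketch (ours) of band spectrum regression in its simplest, static form [cf. Engle (1974)]: Fourier-transform y and the regressors, keep only the ordinates in a chosen band S, and run least squares on what is retained. The particular band below is an arbitrary illustration.

import numpy as np

def band_spectrum_ols(y, X, keep):
    # OLS on the Fourier ordinates selected by the boolean mask `keep`;
    # y is (n,), X is (n, k), and complex conjugate transposes are used
    n = len(y)
    Fy = np.fft.fft(y) / np.sqrt(n)
    FX = np.fft.fft(X, axis=0) / np.sqrt(n)
    Fy, FX = Fy[keep], FX[keep]
    return np.real(np.linalg.solve(FX.conj().T @ FX, FX.conj().T @ Fy))

# keep only the lower half of the frequencies, where (by assumption) the
# measurement error is unimportant:
# freqs = np.fft.fftfreq(len(y)); keep = np.abs(freqs) < 0.25
# beta_band = band_spectrum_ols(y, X, keep)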
The band spectrum approach has the advantages that no explicit assumptions
about the autocovariance structure of the measurement error are needed, and that
it is somewhat easier computationally. However, the portion of the frequency
band with a small signal-to-noise ratio may be rather large, and so if all these
frequencies are omitted the resulting estimate may have a rather large variance. In
particular, we have been assuming that the measurement error has a uniform
spectrum (white noise) which may imply that there is no frequency for which the
signal-to-noise ratio is really large. Also, there may be a problem in knowing S. A
full spectrum method thus may be more desirable. Hannan (1963) has suggested
such an approach for the case where no measurement error appears in y (i.e.
y_t = η_t and u_t = 0). His basic idea is to first estimate the measurement error
variance by substituting consistent estimates of β and γ into the spectra and
cross-spectrum f_y, f_x and f_yx, so as to obtain an estimated spectrum of ξ, and
then to use an optimal weighting method to estimate β and γ. A generalization of
Hannan's (1963) method to the case when
both dependent and exogenous variables are observed with error in a single
equation model seems highly desirable.
On the other hand, a full spectrum method can be applied to a simultaneous
equation model without much problem. If a simultaneous equation model is
where
where the coefficients of the gth equation are normalized to be unity. The
transformed model (6.44) possesses (asymptotically) the classical property of
orthogonality between the "exogenous variables" x(t_j) and the "residual" w̃(t_j).
We now stack (6.44) as:
where
The matrices L₁, L₂, and L₃ and the vectors β, γ, and ω are obtained as follows.
Suppose there are G₁ zero constraints on B = [B₀ − I_G, B₁]. Then the uncon-
strained parameters may be written as β = L₁ vec(B), where L₁ is obtained
from I_{2G²} by eliminating the rows corresponding to zero elements. Likewise, if
there are G₂ zero constraints on Γ we write the unconstrained parameters as
γ = L₂ vec(Γ), where L₂ is obtained from I_{GK} by eliminating the rows corre-
sponding to zero elements. Also, we write ω = L₃ vec(Ω), where L₃ is the
((K − F) × K²)-matrix obtained from I_{K²} by eliminating rows corresponding to
the off-diagonal elements and the F (0 ≤ F ≤ K) a priori zero diagonal elements
of Ω.
An instrumental variable method for (6.45) will be possible after we find an
appropriate instrument for y(t_j), and a consistent estimate of f_w̃(t_j) =
lim_{n→∞} E w̃(t_j)w̃†(t_j). A possible instrument for y(t_j) would be Â(t_j)x(t_j),
where Â(t_j) = f̂_yx(t_j)f̂_x(t_j)⁻¹. A consistent estimate of f_w̃(t_j) may be obtained from:
where

D = (1/T) Σ_j w̃†(t_j) z(t_j).
If the spectrum is smooth, we can prove that (6.47) is consistent and asymptoti-
cally normally distributed. To obtain an efficient estimate it may be desirable to
iterate (6.47). If ε is stationary then (6.47) is efficient in the sense that the limiting
covariance matrix is the same as that of maximum likelihood estimates based on
Gaussian w̃(t_j) [Hsiao (1979)], and iteration produces no improvement in
efficiency. If ε is a finite-order autoregressive moving average process, (6.47) is still
consistent but will not be fully efficient [e.g. see Espasa (1979) and Hannan and
Nicholls (1972)], and then iteration is probably desirable.
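All of the frequency-domain estimators above need consistent spectral and cross-spectral density estimates as ingredients. A sketch (ours) using smoothed periodogram estimates from scipy.signal:

import numpy as np
from scipy.signal import csd, welch

def spectral_blocks(y, x, nperseg=64):
    # smoothed estimates of f_y, f_x and the cross-spectrum f_yx on a common grid;
    # these are the building blocks of quantities such as A-hat(t_j) above
    freqs, f_y = welch(y, nperseg=nperseg)
    _, f_x = welch(x, nperseg=nperseg)
    _, f_yx = csd(y, x, nperseg=nperseg)
    return freqs, f_y, f_x, f_yx

# e.g. a frequency-by-frequency projection coefficient:
# freqs, f_y, f_x, f_yx = spectral_blocks(y, x)
# A_hat = f_yx / f_x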
As one can see from the above description, the computation of the estimates for
the dynamic error-shock model seems a formidable task, particularly if there is
iteration. Yet on many occasions we would like to estimate behavioural relation-
ships that are dynamic in character. It does seem desirable to devise some simple,
yet reasonably efficient computational algorithms.
References
Aigner, D. J. (1974a) "An Appropriate Econometric Framework for Estimating a Labor-Supply Function from the SEO File", International Economic Review, 15, 59-68. Reprinted as Chapter 8 in D. J. Aigner and A. S. Goldberger, eds., Latent Variables in Socio-Economic Models. Amsterdam: North-Holland Publishing Company.
Aigner, D. J. (1974b) "MSE Dominance of Least Squares with Errors of Observation", Journal of Econometrics, 2, 365-72. Reprinted as Chapter 3 in D. J. Aigner and A. S. Goldberger, eds., Latent Variables in Socio-Economic Models. Amsterdam: North-Holland Publishing Company.
Aigner, D. J. (1973) "Regression with a Binary Independent Variable Subject to Errors of Observation", Journal of Econometrics, 1, 49-59.
Aigner, D. J. and S. M. Goldfeld (January 1974) "Estimation and Prediction from Aggregate Data when Aggregates are Measured More Accurately than Their Components", Econometrica, 42, 113-34.
Amemiya, T. (1971) "The Estimation of the Variances in a Variance-Components Model", International Economic Review, 12, 1-13.
Amemiya, T. (1966) "On the Use of Principal Components of Independent Variables in Two-Stage Least-Squares Estimation", International Economic Review, 7, 282-303.
Anderson, T. W. (1976) "Estimation of Linear Functional Relationships: Approximate Distributions and Connections with Simultaneous Equations in Econometrics", Journal of the Royal Statistical Society, Series B, 38, 1-20.
Anderson, T. W. (1980) "Recent Results in the Estimation of a Linear Functional Relationship", in P. R. Krishnaiah, ed., Multivariate Statistics, V. Amsterdam: North-Holland Publishing Company.
Anderson, T. W. and H. Rubin (1956) "Statistical Inference in Factor Analysis", in J. Neyman, ed., Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, 5. Berkeley: University of California Press.
Anderson, T. W. (1958) Multivariate Statistical Analysis. New York: Wiley.
Aoki, M. and P. C. Yue (1970) "On Certain Convergence Questions in System Identification", SIAM Journal of Control, 8, 239-256.
Attfield, C. L. F. (July 1977) "Estimation of a Model Containing Unobservable Variables Using Grouped Observations: An Application to the Permanent Income Hypothesis", Journal of Econometrics, 6, 51-63.
Aufm Kampe, H. (1979) Identifizierbarkeit in Multivariaten Fehler-in-den-Variablen-Modellen (Identification in Multivariate Errors-in-Variables Models). Unpublished Masters thesis, University of Bonn.
Avery, Robert B. (1979) "Modelling Monetary Policy as an Unobserved Variable", Journal of Econometrics, 10, 291-311.
Barnett, V. D. (1967) "A Note on Linear Structural Relationships When Both Residual Variances are Known", Biometrika, 54, 670-672.
Barnett, V. D. (1970) "Fitting Straight Lines. The Linear Functional Relationship with Replicated Observations", Applied Statistics, 19, 135-144.
Barnow, Burt S. (August 1976) "The Use of Proxy Variables When One or Two Independent Variables are Measured with Error", The American Statistician, 30, 119-121.
Bar-Shalom, Y. (1972) "Optimal Simultaneous State Estimation and Parameter Identification in Linear Discrete-Time Systems", IEEE Transactions on Automatic Control, AC-17, 308-319.
Bartlett, M. S. (1949) "Fitting a Straight Line When Both Variables Are Subject to Error", Biometrics, 5, 207-212.
Basu, A. P. (1969) "On Some Tests for Several Linear Relations", Journal of the Royal Statistical Society, Series B, 31, 65-71.
Bentler, P. M. (1982) "Linear Systems with Multiple Levels and Types of Latent Variables", Chapter 5 in K. G. Jöreskog and H. Wold, eds., Systems Under Indirect Observation: Causality, Structure, Prediction, Part I. Amsterdam: North-Holland Publishing Company, 101-130.
Bentler, P. M. (1983) "Simultaneous Equation Systems as Moment Structure Models, With an Introduction to Latent Variable Models", Journal of Econometrics, 22, 13-42.
Bentler, P. M. and D. G. Weeks (September 1980) "Linear Structural Equations with Latent Variables", Psychometrika, 45, 289-308.
Birch, M. W. (1964) "A Note on the Maximum Likelihood Estimation of a Linear Structural Relationship", Journal of the American Statistical Association, 59, 1175-1178.
Blomqvist, A. G. (1972) "Approximating the Least-Squares Bias in Multiple Regression with Errors in Variables", The Review of Economics and Statistics, 54, 202-204.
Bowden, R. (1973) "The Theory of Parametric Identification", Econometrica, 41, 1069-1074.
Box, G. E. P. and G. M. Jenkins (1970) Time Series Analysis: Forecasting and Control. San Francisco: Holden-Day.
Brown, R. L. (1957) "Bivariate Structural Relation", Biometrika, 44, 84-96.
Browne, M. W. (1974) "Generalized Least-Squares Estimators in the Analysis of Covariance Structures", South African Statistical Journal, 8, 1-24. Reprinted as Chapter 13 in D. J. Aigner and A. S. Goldberger, eds., Latent Variables in Socio-Economic Models. Amsterdam: North-Holland Publishing Company, 143-161.
Casson, M. C. (1974) "Generalized Errors in Variables Regression", Review of Economic Studies, 41, 347-352.
Chamberlain, Gary (1978) "Omitted Variables Bias in Panel Data: Estimating the Returns to Schooling", Annales de l'INSEE, 30-31, 49-82.
Chamberlain, Gary (1977a) "Education, Income and Ability Revisited", Chapter 10 in D. J. Aigner and A. S. Goldberger, eds., Latent Variables in Socio-Economic Models. Amsterdam: North-Holland Publishing Company, 143-161.
Chamberlain, Gary (1977b) "An Instrumental Variable Interpretation of Identification in Variance-Components and MIMIC Models", Chapter 7 in P. Taubman, ed., Kinometrics: The Determinants of Socioeconomic Success Within and Between Families. Amsterdam: North-Holland Publishing Company.
Chamberlain, Gary and Zvi Griliches (1975) "Unobservables with a Variance-Components Structure: Ability, Schooling and the Economic Success of Brothers", International Economic Review, 16, 422-449. Reprinted as Chapter 15 in D. J. Aigner and A. S. Goldberger, eds., Latent Variables in Socio-Economic Models. Amsterdam: North-Holland Publishing Company.
Chamberlain, Gary and Zvi Griliches (1977) "More on Brothers", Chapter 4 in P. Taubman, ed., Kinometrics: The Determinants of Socioeconomic Success Within and Between Families. Amsterdam: North-Holland Publishing Company.
Chen, C.-F. (September 1981) "The EM Approach to the Multiple Indicators and Multiple Causes Model Via the Estimation of the Latent Variable", Journal of the American Statistical Association, 76, 704-708.
Chernoff, Herman and Herman Rubin (1953) "Asymptotic Properties of Limited-Information Estimates under Generalized Conditions", Chapter VII in W. C. Hood and T. C. Koopmans, eds., Studies in Econometric Method. New York: John Wiley and Sons.
Cochran, W. G. (1968) "Errors of Measurement in Statistics", Technometrics, 10, 637-666.
Cooper, R. V. and J. P. Newhouse (1972) "Further Results on the Errors in the Variables Problem", mimeo, The Rand Corporation, Santa Monica, Ca.
Copas, J. B. (1972) "The Likelihood Surface in the Linear Functional Relationship Problem", Journal of the Royal Statistical Society, Series B, 34, 274-278.
Cox, H. (1964) "On the Estimation of State Variables and Parameters for Noisy Dynamic Systems", IEEE Transactions on Automatic Control, AC-10, 5-12.
Cox, N. R. (1976) "The Linear Structural Relation for Several Groups of Data", Biometrika, 63, 231-237.
Cramér, H. (1946) Mathematical Methods of Statistics. Princeton: Princeton University Press.
Creasy, M. (1956) "Confidence Limits for the Gradient in the Linear Functional Relationship", Journal of the Royal Statistical Society, Series B, 18, 65-69.
Davies, R. B. and B. Hutton (1975) "The Effect of Errors in the Independent Variables in Linear Regression", Biometrika, 62, 383-391.
DeGracie, J. S. and W. A. Fuller (1972) "Estimation of the Slope and Analysis of Covariance When the Concomitant Variable is Measured with Error", Journal of the American Statistical Association, 67, 930-937.
Deistler, M. and H.-G. Seifert (1978) "Identifiability and Consistent Estimability in Econometric Models", Econometrica, 46, 969-980.
Denton, F. T. and J. Kuiper (1965) "The Effect of Measurement Errors on Parameter Estimates and Forecasts: A Case Study Based on the Canadian Preliminary National Accounts", The Review of Economics and Statistics, 47, 198-206.
Dolby, G. R. (1976a) "A Note on the Linear Structural Relation When Both Residual Variances are Known", Journal of the American Statistical Association, 71, 352-353.
Dolby, G. R. (1976b) "The Ultrastructural Relation: A Synthesis of the Functional and Structural Relations", Biometrika, 63, 39-50.
Dolby, G. R. (1972) "Generalized Least Squares and Maximum Likelihood Estimation of Nonlinear Functional Relationships", Journal of the Royal Statistical Society, Series B, 34, 393-400.
Dolby, G. R. and T. G. Freeman (1975) "Functional Relationships Having Many Independent Variables and Errors with Multivariate Normal Distribution", Journal of Multivariate Analysis, 5, 466-479.
Dolby, G. R. and S. Lipton (1972) "Maximum Likelihood Estimation of the General Nonlinear Relationship with Replicated Observations and Correlated Errors", Biometrika, 59, 121-129.
Dorff, M. and J. Gurland (1961a) "Estimation of the Parameters of a Linear Functional Relation", Journal of the Royal Statistical Society, Series B, 23, 160-170.
Dorff, M. and J. Gurland (1961b) "Small Sample Behavior of Slope Estimators in a Linear Functional Relation", Biometrics, 17, 283-298.
Drion, E. F. (1951) "Estimation of the Parameters of a Straight Line and of the Variances of the Variables, if They Are Both Subject to Error", Indagationes Mathematicae, 13, 256-260.
Egerton, M. F. and P. J. Laycock (1979) "Maximum Likelihood Estimation of Multivariate Non-Linear Functional Relationships", Mathematische Operationsforschung und Statistik, 10, 273-780.
Engle, R. F. (1974) "Band Spectrum Regression", International Economic Review, 15, 1-11.
Engle, R. F. (1980) "Exact Maximum Likelihood Methods for Dynamic Regressions and Band Spectrum Regressions", International Economic Review, 21, 391-408.
Engle, R. F. and D. K. Foley (1975) "An Asset Price Model of Aggregate Investment", International Economic Review, 16, 625-47.
Elffers, H., J. G. Bethlehem and R. Gill (1978) "Indeterminacy Problems and the Interpretation of Factor Analysis Results", Statistica Neerlandica, 32, 181-199.
Espasa, A. (1979) The Spectral Maximum Likelihood Estimation of Econometric Models with Stationary Errors. Göttingen: Vandenhoeck und Ruprecht.
Fisher, F. M. (1966) The Identification Problem in Econometrics. New York: McGraw-Hill.
Florens, J.-P., M. Mouchart and J.-F. Richard (1974) "Bayesian Inference in Error-in-Variables Models", Journal of Multivariate Analysis, 4, 419-52.
Fornell, C. (1983) "Issues in the Application of Covariance Structure Analysis: A Comment", Journal of Consumer Research, 9, 43-48.
Frisch, R. (1934) Statistical Confluence Analysis by Means of Complete Regression Systems. Oslo: University Institute of Economics.
Frost, P. A. (1979) "Proxy Variables and Specification Bias", The Review of Economics and Statistics, 61, 323-325.
Fuller, W. A. (1980) "Properties of Some Estimators for the Errors-in-Variables Model", The Annals of Statistics, 8, 407-422.
Fuller, W. A. and M. A. Hidiroglou (1978) "Regression Estimation After Correcting for Attenuation", Journal of the American Statistical Association, 73, 99-104.
Garber, S. and S. Klepper (Sept. 1980) "Extending the Classical Normal Errors-in-Variables Model", Econometrica, 48, 1541-1546.
Geary, R. C. (1943) "Relations Between Statistics: The General and the Sampling Problem When the Samples are Large", Proceedings of the Royal Irish Academy, A, 49, 177-196.
Geary, R. C. (1942) "Inherent Relations Between Random Variables", Proceedings of the Royal Irish Academy, A, 47, 63-67.
Geraci, Vincent J. (1976) "Identification of Simultaneous Equation Models with Measurement Error", Journal of Econometrics, 4, 263-283. Reprinted as Chapter 11 in D. J. Aigner and A. S. Goldberger, eds., Latent Variables in Socio-Economic Models. Amsterdam: North-Holland Publishing Company.
Geraci, Vincent J. (1977) "Estimation of Simultaneous Equation Models with Measurement Error", Econometrica, 45, 1243-1255.
Geraci, Vincent J. (1983) "Errors in Variables and the Individual Structural Equation", International Economic Review, 24, 217-236.
Giles, R. L. (1980) "Error of Omission and Measurement: Estimating the Parameter of the Variable Subject to Error", Polytechnic of the South Bank, London.
Goldberger, A. S. (1974) "Unobservable Variables in Econometrics", Chapter 7 in P. Zarembka, ed., Frontiers in Econometrics. New York: Academic Press.
Goldberger, A. S. (November 1972a) "Structural Equation Methods in the Social Sciences", Econometrica, 40, 979-1001.
Goldberger, A. S. (1972b) "Maximum-Likelihood Estimation of Regressions Containing Unobservable Independent Variables", International Economic Review, 13, 1-15. Reprinted as Chapter 6 in D. J. Aigner and A. S. Goldberger, eds., Latent Variables in Socio-Economic Models. Amsterdam: North-Holland Publishing Company.
Goldberger, A. S. (June 1971) "Econometrics and Psychometrics: A Survey of Communalities", Psychometrika, 36, 83-107.
Goldberger, A. S. and O. D. Duncan, eds. (1973) Structural Equation Models in the Social Sciences. New York: Seminar Press.
Gorsuch, S. A. (1974) Factor Analysis. Philadelphia: W. B. Saunders Company.
Griliches, Z. (January 1977) "Estimating the Returns to Schooling: Some Econometric Problems", Econometrica, 45, 1-22.
Griliches, Z. (1974) "Errors in Variables and Other Unobservables", Econometrica, 42, 971-998. Reprinted as Chapter 1 in D. J. Aigner and A. S. Goldberger, eds., Latent Variables in Socio-Economic Models. Amsterdam: North-Holland Publishing Company.
Griliches, Z. and Vidar Ringstad (March 1970) "Error-in-the-Variables Bias in Nonlinear Contexts", Econometrica, 38, 368-370.
Griliches, Z. and W. M. Mason (May 1972) "Education, Income and Ability", Journal of Political Economy, 80, 74-103.
Hall, Bronwyn (1979) User's Guide to MOMENTS. 204 Junipero Serra Blvd., Stanford, CA 94305.
Hannan, E. J. (1963) "Regression for Time Series with Errors of Measurement", Biometrika, 50, 293-302.
Hannan, E. J. (1971) "The Identification Problem for Multiple Equation Systems with Moving Average Errors", Econometrica, 39, 751-765.
Hannan, E. J. (1975) "The Estimation of ARMA Models", The Annals of Statistics, 3, 975-981.
Hannan, E. J. (1976) "The Identification and Parameterization of ARMAX and State Space Forms", Econometrica, 44, 713-723.
Hannan, E. J. and D. F. Nicholls (1972) "The Estimation of Mixed Regression, Autoregression, Moving Average, and Distributed Lag Models", Econometrica, 40, 529-547.
Harman, H. H. (1967) Modern Factor Analysis. Chicago: The University of Chicago Press.
Hausman, J. A. (1978) "Specification Tests in Econometrics", Econometrica, 46, 1251-1272.
Hausman, J. A. (May 1977) "Errors in Variables in Simultaneous Equation Models", Journal of Econometrics, 5, 389-401.
Hausman, J. A. and W. E. Taylor (1983) "Identification in Linear Simultaneous Equations Models with Covariance Restrictions: An Instrumental Variables Interpretation", Econometrica, 51, 1527-1549.
Healy, J. D. (1980) "Maximum Likelihood Estimation of a Multivariate Linear Functional Relationship", Journal of Multivariate Analysis, 10, 243-251.
Heckman, J. J. and B. Singer (1982) "The Identification Problem in Econometric Models for Duration Data", Chapter 2 in W. Hildenbrand, ed., Advances in Econometrics, Part II. Cambridge: Cambridge University Press.
Heise, D. R. (1975) Causal Analysis. New York: Wiley.
Hester, Donald D. (July 1976) "A Note on Identification and Information Loss Through Aggregation", Econometrica, 44, 815-818.
Hodges, S. D. and P. G. Moore (1972) "Data Uncertainties and Least Squares Regression", Applied Statistics, 21, 185-195.
Höschel, H.-P. (1978) "Generalized Least Squares Estimators of Linear Functional Relations with Known Error-Covariance", Mathematische Operationsforschung und Statistik, 9, 9-26.
Hsiao, C. (March 1979) "Measurement Error in a Dynamic Simultaneous Equation Model with Stationary Disturbances", Econometrica, 47, 475-494.
Hsiao, C. (February 1977) "Identification for a Linear Dynamic Simultaneous Error-Shock Model", International Economic Review, 18, 181-194.
Hsiao, C. (June 1976) "Identification and Estimation of Simultaneous Equation Models with Measurement Error", International Economic Review, 17, 319-339.
Hsiao, C. and P. M. Robinson (June 1978) "Efficient Estimation of a Dynamic Error-Shock Model", International Economic Review, 19, 467-480.
Johnston, J. (1972) Econometric Methods. New York: McGraw-Hill.
Jöreskog, K. G. (1978) "An Econometric Model for Multivariate Panel Data", Annales de l'INSEE, 30-31, 355-366.
Jöreskog, K. G. and D. Sörbom (1981) LISREL V User's Guide. Chicago: National Educational Resources.
Jöreskog, K. G. (1970) "A General Method for Analysis of Covariance Structures", Biometrika, 57, 239-251. Reprinted as Chapter 12 in D. J. Aigner and A. S. Goldberger, eds., Latent Variables in Socio-Economic Models. Amsterdam: North-Holland Publishing Company.
Jöreskog, K. G. and A. S. Goldberger (1975) "Estimation of a Model with Multiple Indicators and Multiple Causes of a Single Latent Variable", Journal of the American Statistical Association, 70, 631-639.
Jöreskog, K. G. and Dag Sörbom (1977) "Statistical Models and Methods for Analysis of Longitudinal Data", Chapter 16 in D. J. Aigner and A. S. Goldberger, eds., Latent Variables in Socio-Economic Models. Amsterdam: North-Holland Publishing Company, 285-325.
Jöreskog, K. G. (1967) "Some Contributions to Maximum Likelihood Factor Analysis", Psychometrika, 32, 443-482.
Jöreskog, K. G. and A. S. Goldberger (1972) "Factor Analysis by Generalized Least Squares", Psychometrika, 37, 243-260.
Kadane, Joseph B., T. W. McGuire, P. R. Sanday and R. Staelin (1977) "Estimation of Environmental Effects on the Pattern of IQ Scores Over Time", Chapter 17 in D. J. Aigner and A. S. Goldberger, eds., Latent Variables in Socio-Economic Models. Amsterdam: North-Holland Publishing Company, 327-348.
Kapteyn, A. and T. J. Wansbeek (1983) "Identification in the Linear Errors in Variables Model", Econometrica, 51, 1847-1849.
Kapteyn, A. and T. J. Wansbeek (1983) "Errors in Variables: Consistent Adjusted Least Squares (CALS) Estimation", Netherlands Central Bureau of Statistics.
Kapteyn, A. and T. J. Wansbeek (1981) "Structural Methods in Functional Models", Modelling Research Group, University of Southern California.
Keller, W. J. (1975) "A New Class of Limited-Information Estimators for Simultaneous Equation Systems", Journal of Econometrics, 3, 71-92.
Kendall, M. G. and A. Stuart (1979) The Advanced Theory of Statistics, Fourth Edition. New York: Macmillan.
Kiefer, J. and J. Wolfowitz (1956) "Consistency of the Maximum Likelihood Estimator in the Presence of Infinitely Many Incidental Parameters", Annals of Mathematical Statistics, 27, 887-906.
Klepper, S. and E. E. Leamer (1984) "Consistent Sets of Estimates for Regressions with Errors in All Variables", Econometrica, 52, 163-183.
Kloek, T. and L. B. M. Mennes (1960) "Simultaneous Equations Estimation Based On Principal Components of Predetermined Variables", Econometrica, 28, 45-61.
Konijn, H. S. (1962) "Identification and Estimation in a Simultaneous Equations Model with Errors in the Variables", Econometrica, 30, 79-87.
Koopmans, T. C. (1937) Linear Regression Analysis of Economic Time Series. Haarlem: Netherlands Economic Institute, De Erven F. Bohn N.V.
Lawley, D. N. and A. E. Maxwell (1971) Factor Analysis as a Statistical Method. London: Butterworths.
Leamer, E. E. (1978a) "Least-Squares Versus Instrumental Variables Estimation in a Simple Errors in Variables Model", Econometrica, 46, 961-968.
Leamer, E. E. (1978b) Specification Searches: Ad Hoc Inference with Nonexperimental Data. New York: Wiley.
Lee, S.-Y. (September 1980) "Estimation of Covariance Structure Models with Parameters Subject to Functional Constraints", Psychometrika, 45, 309-324.
Levi, M. D. (1977) "Measurement Errors and Bounded OLS Estimates", Journal of Econometrics, 6, 165-171.
Levi, M. D. (1973) "Errors in the Variables Bias in the Presence of Correctly Measured Variables", Econometrica, 41, 985-986.
Levin, M. J. (1964) "Estimation of a System Pulse Transfer Function in the Presence of Noise", IEEE Transactions on Automatic Control, AC-9, 229-235.
Lindley, D. V. and G. M. El-Sayyad (1968) "The Bayesian Estimation of a Linear Functional Relationship", Journal of the Royal Statistical Society, Series B, 30, 198-202.
Liviatan, N. (1963) "Tests of the Permanent-Income Hypothesis Based on a Reinterview Savings Survey", in: C. F. Christ, ed., Measurement in Economics. Stanford: Stanford University Press.
Liviatan, N. (July 1961) "Errors in Variables and Engel Curve Analysis", Econometrica, 29, 336-362.
Ljung, L. (1977) "Positive Real Transfer Functions and the Convergence of Some Recursive Schemes", IEEE Transactions on Automatic Control, AC-22, 539-551.
Madansky, A. (1976) Foundations of Econometrics. Amsterdam: North-Holland Publishing Company.
Madansky, A. (1959) "The Fitting of Straight Lines When Both Variables are Subject to Error", Journal of the American Statistical Association, 54, 173-205.
Malinvaud, E. (1970) Statistical Methods of Econometrics, Second Revised Edition. Amsterdam: North-Holland Publishing Company.
Maravall, A. (1979) Identification in Dynamic Shock-Error Models. Berlin: Springer-Verlag.
Maravall, A. and D. J. Aigner (1977) "Identification of the Dynamic Shock-Error Model: The Case of Dynamic Regression", Chapter 18 in D. J. Aigner and A. S. Goldberger, eds., Latent Variables in Socio-Economic Models. Amsterdam: North-Holland Publishing Company, 349-363.
McCallum, B. T. (1977) "Relative Asymptotic Bias from Errors of Omission and Measurement", Econometrica, 40, 757-758. Reprinted as Chapter 2 in D. J. Aigner and A. S. Goldberger, eds., Latent Variables in Socio-Economic Models. Amsterdam: North-Holland Publishing Company.
Moberg, L. and R. Sundberg (1978) "Maximum Likelihood Estimation of a Linear Functional Relationship When One of the Departure Variances is Known", Scandinavian Journal of Statistics, 5, 61-64.
Mouchart, M. (1977) "A Regression Model with an Explanatory Variable Which Is Both Binary and Subject to Errors", Chapter 4 in D. J. Aigner and A. S. Goldberger, eds., Latent Variables in Socio-Economic Models. Amsterdam: North-Holland Publishing Company, 48-66.
Mulaik, S. D. (1972) The Foundations of Factor Analysis. New York: McGraw-Hill.
Mundlak, Y. (1961) "Empirical Production Function Free of Management Bias", Journal of Farm Economics, 43, 44-56.
Muthén, B. (December 1979) "A Structural Probit Model With Latent Variables", Journal of the American Statistical Association, 74, 807-811.
Neyman, J. (1951) "Existence of Consistent Estimates of the Directional Parameter in a Linear Structural Relation Between Two Variables", Annals of Mathematical Statistics, 22, 496-512.
Neyman, J. and Elizabeth L. Scott (1951) "On Certain Methods of Estimating the Linear Structural Relation", Annals of Mathematical Statistics, 22, 352-361.
Neyman, J. and Elizabeth L. Scott (1948) "Consistent Estimates Based on Partially Consistent Observations", Econometrica, 16, 1-32.
Nicholls, D. F., A. R. Pagan and R. D. Terrell (1975) "The Estimation and Use of Models with Moving Average Disturbance Terms: A Survey", International Economic Review, 16, 113-134.
Nowak, E. (1977) "An Identification Method for Stochastic Models of Time Series Analysis with Errors in the Variables", paper presented to the European Meeting of the Econometric Society, Vienna.
Nussbaum, M. (1977) "Asymptotic Optimality of Estimators of a Linear Functional Relation if the
1392 D. J. Aigner el al.
Singleton, K. J. (October 1980) "A Latent Time Series Model of the Cyclical Behavior of Interest Rates", International Economic Review, 21, 559-576.
Solari, M. E. (1969) "The 'Maximum Likelihood Solution' of the Problem of Estimating a Linear Functional Relationship", Journal of the Royal Statistical Society, Series B, 31, 372-375.
Sprent, P. (1966) "A Generalized Least-Squares Approach to Linear Functional Relationships", Journal of the Royal Statistical Society, Series B, 28, 278-297.
Sprent, P. (1970) "The Saddlepoint of the Likelihood Surface for a Linear Functional Relationship", Journal of the Royal Statistical Society, Series B, 32, 432-434.
Theil, H. (1971) Principles of Econometrics. New York: Wiley.
Tintner, G. (1945) "A Note on Rank, Multicollinearity and Multiple Regression", Annals of Mathematical Statistics, 16, 304-308.
Van Uven, M. J. (1930) "Adjustment of N Points (in n-Dimensional Space) to the Best Linear (n - 1)-Dimensional Space", Koninklijke Akademie van Wetenschappen te Amsterdam, Proceedings of the Section of Sciences, 33, 143-157, 307-326.
Villegas, C. (1961) "Maximum Likelihood Estimation of a Linear Functional Relationship", Annals of Mathematical Statistics, 32, 1040-1062.
Villegas, C. (1964) "Confidence Region for a Linear Relation", Annals of Mathematical Statistics, 35, 780-788.
Wald, A. (1949) "Note on the Consistency of the Maximum Likelihood Estimate", Annals of Mathematical Statistics, 20, 595-601.
Wald, A. (1948) "Estimation of a Parameter When the Number of Unknown Parameters Increases Indefinitely with Number of Observations", Annals of Mathematical Statistics, 19, 220-227.
Wald, A. (1940) "The Fitting of Straight Lines if Both Variables Are Subject to Error", Annals of Mathematical Statistics, 11, 284-300.
Ware, J. H. (1972) "The Fitting of Straight Lines When Both Variables are Subject to Error and the Ranks of the Means Are Known", Journal of the American Statistical Association, 67, 891-897.
Wegge, L. L. (1965) "Identifiability Criteria for a System of Equations as a Whole", The Australian Journal of Statistics, 7, 67-77.
Wickens, M. R. (1972) "A Note on the Use of Proxy Variables", Econometrica, 40, 759-761.
Wilson, G. J. (1969) "Factorization of the Generating Function of a Pure Moving Average Process", SIAM Journal of Numerical Analysis, 6, 1-7.
Willassen, Y. (1979) "Extension of Some Results by Reiersøl to Multivariate Models", Scandinavian Journal of Statistics, 6, 89-91.
Wolfowitz, J. (1954) "Estimation of Structural Parameters When The Number of Incidental Parameters is Unbounded" (abstract), Annals of Mathematical Statistics, 25, 811.
Wolfowitz, J. (1952) "Consistent Estimators of the Parameters of a Linear Structural Relation", Skandinavisk Aktuarietidskrift, 35, 132-151.
Wu, D.-M. (1973) "Alternative Tests of Independence Between Stochastic Regressors and Disturbances", Econometrica, 41, 733-750.
Zellner, Arnold (1970) "Estimation of Regression Relationships Containing Unobservable Independent Variables", International Economic Review, 11, 441-454. Reprinted as Chapter 5 in D. J. Aigner and A. S. Goldberger, eds., Latent Variables in Socio-Economic Models. Amsterdam: North-Holland Publishing Company.
Zellner, Arnold (1971) An Introduction to Bayesian Inference in Econometrics. New York: Wiley.