PRINCIPLES OF LEAST SQUARES Lecture Notes
Generalized Least Squares (GLS) is a statistical method used to estimate the unknown parameters in
a linear regression model. It’s particularly useful when there is a correlation between the residuals
(errors) in the regression model, which violates the assumptions of Ordinary Least Squares (OLS)
regression.
1. Purpose: GLS is designed to handle situations where the residuals are heteroskedastic
(having different variances) or autocorrelated (correlated with each other). This makes GLS
more efficient than OLS in such cases.
2. Model: The linear regression model relates the variables:
o Response variable: ( y )
o Predictor variables: ( X )
y = X\beta + \epsilon
3. Assumptions: GLS assumes that the error terms have a known covariance matrix ( \Sigma ).
When ( \Sigma ) is unknown, it can be estimated from the data, which leads to Feasible
Generalized Least Squares (FGLS) [2].
4. Estimation: The GLS estimator minimizes the weighted sum of squared residuals, taking into
account the covariance structure of the errors. The estimator is given by:
\hat{\beta} = (X^T \Sigma^{-1} X)^{-1} X^T \Sigma^{-1} y
5. Advantages: GLS provides unbiased, consistent, and efficient estimates of the regression
coefficients, making it a powerful tool when dealing with non-constant error variances or
correlated errors [2].
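The estimator in item 4 can be computed directly with NumPy. The sketch below uses hypothetical simulated data with heteroskedastic errors (the variable names and variance model are illustrative assumptions, not from the notes) and compares the GLS estimate with plain OLS:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: intercept plus one predictor, n observations.
n = 200
X = np.column_stack([np.ones(n), rng.uniform(0.0, 10.0, n)])
beta_true = np.array([2.0, 0.5])

# Heteroskedastic errors: the variance grows with the predictor,
# violating the constant-variance assumption of OLS.
sigma2 = 0.1 + 0.05 * X[:, 1]            # known (assumed) error variances
y = X @ beta_true + rng.normal(0.0, np.sqrt(sigma2))

# GLS estimator: beta = (X' Sigma^-1 X)^-1 X' Sigma^-1 y.
# Sigma is diagonal here, so its inverse is just 1/sigma2 on the diagonal.
Sigma_inv = np.diag(1.0 / sigma2)
beta_gls = np.linalg.solve(X.T @ Sigma_inv @ X, X.T @ Sigma_inv @ y)

# OLS estimate for comparison (ignores the covariance structure).
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

print("GLS:", beta_gls)
print("OLS:", beta_ols)
```

By construction, the GLS solution attains the smallest Sigma-weighted sum of squared residuals; when ( \Sigma ) is a multiple of the identity, the two estimators coincide.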
Solution of Normal Equations: Treatment of Large Geodetic Networks
The solution of normal equations in the treatment of large geodetic networks involves sophisticated
techniques to handle the vast amount of data and the complexity of the calculations. Here are some
key points:
Key Concepts:
1. Normal Equations: In the context of least squares adjustment, normal equations are derived
from the condition that the sum of the squared residuals (differences between observed and
computed values) is minimized. For a system of linear equations ( Ax = b ), the normal
equations are:
A^T A x = A^T b
2. Large Geodetic Networks: These involve a large number of observations and unknowns,
making the system of equations very large and sparse. Efficient algorithms are needed to
exploit this sparsity and keep the computation tractable.
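For a small, dense system, the normal equations above can be formed and solved directly. A minimal NumPy sketch (the observation values are hypothetical):

```python
import numpy as np

# Hypothetical observations: fit a line through 4 points, giving an
# overdetermined system A x = b with 4 equations and 2 unknowns.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0]])
b = np.array([2.1, 3.9, 6.2, 7.8])

# Form and solve the normal equations  A^T A x = A^T b.
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# Cross-check with NumPy's least-squares solver, which uses a more
# numerically stable SVD-based method.
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
```

Note that explicitly forming ( A^T A ) squares the condition number of the problem, which is one reason large or ill-conditioned networks call for the specialized techniques below.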
Solution Techniques:
1. Block-Orthogonal Decomposition: This method involves breaking down the large system into
smaller, more manageable blocks. It helps in efficient data management and numerical
stability [1].
2. Nested Dissection: This is a technique used to reorder the equations to minimize fill-in (non-
zero entries created during factorization), reducing storage and computational
complexity [1].
3. Helmert Blocking: This method partitions the network into smaller blocks that can be solved
independently and then combined. It is particularly useful for very large-scale problems [2].
4. Parallel Processing: Modern approaches often use parallel processing to handle the large
datasets efficiently. This involves distributing the computations across multiple processors [3].
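The core algebraic step behind Helmert blocking is block elimination: partition the unknowns into block-interior parameters and junction parameters shared between blocks, eliminate the interiors (a Schur complement, which each block can compute independently, possibly in parallel), and solve the reduced junction system. A toy sketch with a synthetic normal-equation system (all sizes and names are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical normal-equation system N x = t, partitioned into interior
# unknowns x_i (local to one block) and junction unknowns x_j (shared):
#   [ N_ii  N_ij ] [x_i]   [t_i]
#   [ N_ji  N_jj ] [x_j] = [t_j]
n_i, n_j = 5, 3
M = rng.normal(size=(n_i + n_j + 4, n_i + n_j))
N = M.T @ M                       # symmetric positive definite by construction
t = rng.normal(size=n_i + n_j)

N_ii, N_ij = N[:n_i, :n_i], N[:n_i, n_i:]
N_ji, N_jj = N[n_i:, :n_i], N[n_i:, n_i:]
t_i, t_j = t[:n_i], t[n_i:]

# Eliminate the interior unknowns (Schur complement). In Helmert blocking
# each block performs this reduction on its own, and only the small
# reduced contributions are combined into the junction system.
S = N_jj - N_ji @ np.linalg.solve(N_ii, N_ij)
r = t_j - N_ji @ np.linalg.solve(N_ii, t_i)

x_j = np.linalg.solve(S, r)                       # reduced junction system
x_i = np.linalg.solve(N_ii, t_i - N_ij @ x_j)     # back-substitute interiors
```

The combined solution matches a direct solve of the full system, but the expensive interior factorizations never need to see more than one block at a time.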
Practical Application:
Example: The readjustment of the North American Datum in 1983 (NAD 83) involved solving an
extremely large, sparse system of normal equations with hundreds of thousands of unknowns,
made tractable by partitioning the network with Helmert blocking.
References:
[1] Stanford University
[2] Tandfonline
[3] MyGeodesy