Training materials – Network Adjustments
Advanced Adjustment Concepts
Version 1.0
English
Introduction
The purpose of this document is to describe the advanced network adjustment concepts in Infinity,
with more focus on the parameters that affect the network’s reliability.
The first chapter discusses the measurements that are used as observations in network adjustment.
The second chapter explains the adjustment parameters.
In the third chapter, a detailed description of the network adjustment report is presented along with
an in-depth explanation of the statistical tests.
It is assumed that the reader is already familiar with the Infinity network adjustments module.
If not, then the “How to…” tutorials related to network adjustments are recommended.
Table of Contents
1. Before the network adjustment
2. Explaining the adjustment parameters
3. Adjustment types
4. Explaining the network adjustment report
5. Proposed workflow for running network adjustments
1. Before the network adjustment
In case GNSS raw data have been collected, a set of suitable baselines needs to be processed first, as these are the observations that will be included in the network (namely the Δx, Δy and Δz baseline components). The aim is to organize the field measurements in such a way that a network of independent baselines can be formed.
In case TPS raw data have been measured, the observations can either be the raw data themselves or the result of
an application. For example, if data have been collected through Measured Foresights or Sets of Angles
applications, then the user can choose whether to use the result of those applications (that is, the reduced
observations) or the raw measurements themselves. This option can be found in the Adjustments ribbon tab,
under Advanced Terrestrial.
In case Level data have been measured, then the level lines are the observations that will be used in the network
adjustment. If, however, the level data have not been stored in the instrument, Infinity supports the option of typing
in the height differences directly. This option can be found in the Processing ribbon tab, under Height
Observation.
2. Explaining the adjustment parameters
General:
This option is unchecked by default, as the normal case is that there will be many adjustment runs before reaching
the desired result.
The “Controls” option determines whether the control points will be treated as absolutely “Constrained” or
“Weighted”.
If “Constrained” is selected, then the control points will be kept fixed without receiving any corrections during the adjustment, whereas if “Weighted” is selected, the control points can receive corrections according to the standard deviations assigned to their coordinates (a sketch illustrating the difference follows the notes below).
Notes:
1. Fixing the control point coordinates often imposes some distortion on the network.
2. At least three control points need to be available in the project, otherwise treating the control points
as “Weighted” will have the same impact on the result as if they were set as “Constrained”.
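To picture the difference, the following sketch (plain least squares with invented heights and standard deviations, not Infinity’s internal implementation) treats a “Weighted” control point as a pseudo-observation with finite weight, whereas a “Constrained” point is simply held fixed:

    import numpy as np

    # 1D illustration: two control benchmarks C1, C2 and one new point P (all values invented).
    H_C1, H_C2 = 100.000, 98.500           # known control heights (m)
    dh = np.array([1.250, 2.756])          # observed height differences C1->P and C2->P (m)
    sigma_dh, sigma_ctrl = 0.002, 0.001    # observation std dev and assumed control-point std dev

    # "Weighted": the control heights are unknowns too, tied down by pseudo-observations.
    # Unknowns x = [H_C1, H_C2, H_P].
    A = np.array([[-1.0,  0.0, 1.0],       # H_P - H_C1 = dh1
                  [ 0.0, -1.0, 1.0],       # H_P - H_C2 = dh2
                  [ 1.0,  0.0, 0.0],       # H_C1 = 100.000 (pseudo-observation)
                  [ 0.0,  1.0, 0.0]])      # H_C2 =  98.500 (pseudo-observation)
    l = np.array([dh[0], dh[1], H_C1, H_C2])
    W = np.diag([1/sigma_dh**2, 1/sigma_dh**2, 1/sigma_ctrl**2, 1/sigma_ctrl**2])
    x_weighted = np.linalg.solve(A.T @ W @ A, A.T @ W @ l)   # control heights may get small corrections

    # "Constrained": C1 and C2 are held fixed; only H_P is estimated, the misclosure goes into residuals.
    A_c = np.array([[1.0], [1.0]])
    l_c = np.array([H_C1 + dh[0], H_C2 + dh[1]])
    W_c = np.eye(2) / sigma_dh**2
    H_P_constrained = np.linalg.solve(A_c.T @ W_c @ A_c, A_c.T @ W_c @ l_c)
    print(x_weighted, H_P_constrained)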
In the Iteration settings, the user can set the maximum number of iterations as well as the correction threshold below which the iterations stop. For the standard deviations of the observations, the users can also define whether individual or default values for all observations will be used.
In Centring/Height Errors, the centering and height errors for setup and target points can be set. Again, the users
can also define whether individual or default values will be used.
Notes:
1. Setting the proper standard deviations as well as realistic centering and height errors is of very high
importance, as these values have an immediate impact on the weight matrix of the observations.
2. The more we increase the standard deviations and the centering and height errors, the less the weight of the corresponding observations becomes.
GNSS:
What is worth mentioning is that, if the “Source for Standard Deviations” is set to “Individual”, each baseline is weighted according to the variance-covariance matrix that was calculated along with the processed baseline. If “Use Defaults” is selected, then all the baselines will be weighted relative to their length, using the default values defined in the settings.
In many cases, the variance-covariance matrices of a processed baseline are too small. The results for the processed baselines can therefore be considered as too optimistic. Before running an adjustment, it is recommended that the baselines acquire a reasonable weight. The “Sigma a priori (GNSS)” serves this purpose: it re-scales the variance-covariance matrices of the baselines before they enter the adjustment.
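As a rough numerical illustration of how these settings influence the weighting, the sketch below rescales an invented baseline variance-covariance matrix with a sigma a priori and contrasts it with a simple length-dependent default model. The covariance values, the sigma a priori and the constant-plus-ppm default model are all assumptions for illustration, not Infinity’s actual defaults:

    import numpy as np

    # Invented 3x3 variance-covariance matrix of one processed baseline (dX, dY, dZ), in m^2.
    C_baseline = np.array([[ 4.0e-6,  1.0e-6, -0.5e-6],
                           [ 1.0e-6,  5.0e-6,  0.8e-6],
                           [-0.5e-6,  0.8e-6,  9.0e-6]])

    # "Individual" weighting: use the baseline's own matrix, rescaled by a sigma a priori
    # to make an over-optimistic covariance more realistic.
    sigma_apriori = 10.0                               # assumed value, chosen by the user
    C_scaled = sigma_apriori**2 * C_baseline
    W_individual = np.linalg.inv(C_scaled)             # weight matrix of this baseline

    # "Use Defaults": weight relative to the baseline length, here with an assumed
    # constant-plus-ppm model (for illustration only).
    length_m = 12_000.0
    sigma_default = 0.005 + 1.0e-6 * length_m          # 5 mm + 1 ppm per component (assumed)
    W_default = np.eye(3) / sigma_default**2

    print(np.diag(W_individual), np.diag(W_default))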
Test Criteria:
Infinity uses the B-Method of testing for evaluating the reliability of the network adjustment results. The Alpha level of significance is the probability of committing a Type-I error, whereas 1-Beta is the power of the test, Beta being the probability of committing a Type-II error.
The sigma a posteriori can be handled in one of three ways (a sketch of the first option follows the list):
1. It can be used to rescale the final variance-covariance matrices. This can lead to unexpected results if its value is significantly greater or smaller than 1.0.
2. It can be used only if the F-Test fails. This is up to the user, because if the F-Test fails there is always a reason behind it that should be investigated.
3. It can be ignored. This will keep the originally calculated variance-covariance matrices unscaled, and it is a good choice for the initial inner-constrained adjustment runs.
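A minimal sketch of the first option, rescaling a final variance-covariance matrix with the sigma a posteriori (all values invented):

    import numpy as np

    sigma0_post = 1.8                                  # sigma a posteriori from the adjustment (invented)
    Cxx = np.array([[2.5e-6, 0.4e-6],                  # invented covariance of two adjusted coordinates
                    [0.4e-6, 3.1e-6]])

    # Option 1: rescale the final variance-covariance matrix.
    Cxx_rescaled = sigma0_post**2 * Cxx

    # A value far from 1.0 inflates (or shrinks) all standard deviations accordingly.
    print(np.sqrt(np.diag(Cxx)), np.sqrt(np.diag(Cxx_rescaled)))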
Advanced Terrestrial:
Here the user can choose whether the raw measurements or the reduced observations resulting from applications will be used, as described in the first chapter. The standard deviations of the reduced observations will then be used to calculate the weight matrix.
In some cases, it is worth extending the mathematical model of the adjustment, as this can make it more suitable to
fit the observations. Two parameters are commonly used: the “Vertical Refraction Coefficient” and the “Scale Factor
Correction”.
The “Vertical Refraction Coefficient” can be useful if small TPS 3D networks are measured. This parameter has an impact on the measured zenith angles and, consequently, on the trigonometric height determination.
The “Scale Factor Correction” is useful when GNSS and TPS observations are used in a combined adjustment. More specifically, it can absorb a possible scale difference between the two observation groups.
What is worth mentioning is that each of the aforementioned parameters can be ignored, computed or used with a user-entered value. This provides a flexibility that is important in many cases.
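To make the role of these two parameters concrete, the sketch below applies a standard curvature-and-refraction correction to a trigonometric height difference and a ppm scale correction to a distance. The formulas are the common textbook expressions and all numbers are invented; they are not necessarily the exact formulations used inside Infinity:

    R = 6_378_000.0            # approximate Earth radius (m)
    k = 0.13                   # vertical refraction coefficient (typical assumed value)
    s = 1_500.0                # horizontal distance to the target (m), invented

    # Combined earth-curvature and refraction correction to a trigonometric height difference.
    height_correction = (1.0 - k) * s**2 / (2.0 * R)       # about 0.15 m here

    # Scale factor correction applied to a measured distance, expressed in ppm.
    ppm = 15.0                 # invented scale difference between the TPS and GNSS observations
    d_measured = 1_500.123
    d_corrected = d_measured * (1.0 + ppm * 1.0e-6)

    print(round(height_correction, 3), round(d_corrected, 4))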
Coordinate System:
Here the user defines the coordinate system to which the observations will be referred. This is something different from the coordinate system that is used in the project.
The latter is used to project the point coordinates, whereas the former is used to transform the observations from different sources so that they can be properly combined. As an example, consider the case where
GNSS and TPS observations need to be used in a combined adjustment. The GNSS observations are referred to a
geocentric coordinate system. On the other hand, the TPS observations are referred to a topographic plane. In
order to use the two observation groups in a combined adjustment, it is necessary to transform both groups to a
common reference. Since the power of GNSS observations is that they can be treated like true 3D vectors, we
choose to transform the TPS observations from the topographic plane to the same geocentric system the GNSS
observations are referred to. This geocentric system is the “WGS84” option in the “Coordinate System” dialog box.
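As an illustration of this transformation, the following sketch rotates a local east/north/up vector at a station into geocentric ΔX, ΔY, ΔZ components. It uses the standard topocentric-to-geocentric rotation; it is not Infinity’s internal code, and the coordinates are invented:

    import numpy as np

    def enu_to_ecef_vector(d_enu, lat_deg, lon_deg):
        """Rotate a local east/north/up vector into geocentric (ECEF) dX, dY, dZ."""
        lat, lon = np.radians(lat_deg), np.radians(lon_deg)
        # Columns are the east, north and up unit vectors expressed in ECEF.
        R = np.array([[-np.sin(lon), -np.sin(lat) * np.cos(lon), np.cos(lat) * np.cos(lon)],
                      [ np.cos(lon), -np.sin(lat) * np.sin(lon), np.cos(lat) * np.sin(lon)],
                      [ 0.0,          np.cos(lat),               np.sin(lat)]])
        return R @ np.asarray(d_enu)

    # Invented example: a 100 m east / 50 m north / 2 m up TPS vector at lat 47.0, lon 9.6 degrees.
    print(enu_to_ecef_vector([100.0, 50.0, 2.0], 47.0, 9.6))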
The “Local Grid” option is a good choice when only TPS data are adjusted or when TPS and Level data are
combined. In this case, the GNSS data will not be used and a warning message will appear.
The “Local Geodetic” is a good choice when GNSS, TPS and even Level data need to be used in a combined
adjustment, provided that there are at least 3 control points whose coordinates are known in this “Local Geodetic”
system. This option offers great flexibility as it extends the mathematical model of the adjustment so that it includes
additional rotation and scale parameters to transform the observations to this “Local Geodetic” system.
Notes:
1. A local coordinate system needs to be set as Master in the project in this case.
2. The transformation used in this local coordinate system has to be either a Classical 3D or None for
this option to become active.
The additional rotation and scale parameters can be either computed or used with a user-entered value.
What is also interesting is that either Ellipsoidal or Orthometric heights can be used. This is very important when
GNSS and Level data are combined. The “WGS84” or the “Local Geodetic” (in case enough control points and a
suitable coordinate system are available) option could be used as the coordinate system for the adjustment.
In this case, the orthometric height differences will be transformed via the geoid to ellipsoidal height differences;
they will be adjusted in WGS84 (or the Local Geodetic) and then transformed back to adjusted orthometric height
differences to calculate correct orthometric heights for the points.
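The conversion relies on the standard relation between ellipsoidal height h, orthometric height H and geoid undulation N, namely h ≈ H + N. A minimal sketch with invented values:

    # Invented geoid undulations (m) at points A and B, e.g. interpolated from a geoid model.
    N_A, N_B = 46.812, 46.794

    # Levelled (orthometric) height difference between A and B.
    dH_AB = 2.345

    # Converted to an ellipsoidal height difference for the adjustment (h = H + N),
    # and back again afterwards.
    dh_AB = dH_AB + (N_B - N_A)
    dH_back = dh_AB - (N_B - N_A)
    print(dh_AB, dH_back)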
3. Adjustment types
Depending on the dimension of the data, Infinity supports the following types of adjustment:
1. 3D (Full) adjustment: this type can be used to adjust pure GNSS data, a small TPS 3D network, or a combination of GNSS and TPS observations.
2. 2D adjustment: this type is more suitable for adjusting TPS data, ignoring the point heights.
3. 1D adjustment: this type can be used to adjust pure level data or combined level and trigonometric height differences.
Infinity can also run adjustments in two steps. Two additional adjustment types are therefore supported: 2D+1D and 1D+2D. To put it simply, when TPS and level data are combined, the point positions can be determined from the TPS observations, whereas the point heights can be determined from the level data. Thus, the trigonometric height differences do not influence the determined heights.
Depending on the constraints imposed on the data, Infinity supports three types of adjustment: “Inner
Constrained”, “Minimally Constrained” and “Constrained”.
“Inner Constrained” is the case that a special type of minimum constraints, called inner constraints, is used in the adjustment. In this case, the corrections the point coordinates receive are the smallest possible. Also, all control points receive corrections.
“Minimally Constrained” is the case that only the minimum number of constraints needed to define the datum is imposed on the network. In this case, the corrections the point coordinates receive are smaller than the ones calculated in a constrained adjustment.
An example of minimally constraining a network is to fix one control point in height and adjust level data in 1D, or
to fix one control point in position and height and adjust GNSS data.
“Constrained” is the case that all or more than the minimum required control points are kept fixed in the
adjustment.
As already mentioned, some distortions are expected in the network when more than the necessary constraints are imposed on the data. This causes some bias in the coordinate estimates, and it is for this reason that this type of adjustment should be avoided when checking for possible outliers in the observations.
4. Explaining the network adjustment report
The network adjustment report consists of the following sections:
Project Details
Adjustment Settings
Adjustment Summary
Input Data
Adjustment Results
Project Details:
All the project details can be found in this section: General, Customer details, Master coordinate system, etc.
Adjustment Settings:
This section includes all the necessary information on the settings used in the adjustment. In summary, information on the control point treatment, the adjustment dimension and type, the coordinate system the observations are referred to, the height mode, as well as the standard deviations and the testing criteria is displayed. There is also information on the additional parameters used in the adjustment.
Adjustment Summary:
The numbers of observations and of constraints, as well as the degrees of freedom, the optimization criterion and the sigma a posteriori, can
be found here. What is also important is that the F-Test and Chi Square Test actual and critical values are included
along with the critical values for W- and T-Tests.
Notes:
1. If the calculated value of the F-Test (or the Chi Square Test) exceeds its critical value (upper or lower boundary, respectively), then it will be marked in bold red.
2. The sigma a posteriori is the square root of the result of dividing the optimization criterion by the
degrees of freedom.
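With invented numbers, note 2 and a generic overall model check can be reproduced as follows (the exact boundaries used by Infinity’s F-/Chi Square Test may differ in detail):

    import numpy as np
    from scipy.stats import chi2

    vTPv = 36.5                     # optimization criterion (weighted sum of squared residuals), invented
    dof  = 25                       # degrees of freedom, invented
    sigma0_prior = 1.0              # sigma a priori assumed to be 1 here

    sigma0_post = np.sqrt(vTPv / dof)                    # note 2: sqrt(criterion / degrees of freedom)

    # A generic two-sided Chi-Square check of the overall model.
    alpha = 0.05
    stat = vTPv / sigma0_prior**2
    lower, upper = chi2.ppf(alpha / 2, dof), chi2.ppf(1 - alpha / 2, dof)
    print(round(sigma0_post, 3), lower <= stat <= upper)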
Input Data:
This section includes the approximate values for the point coordinates as well as the initial observations with the
standard deviations. As already mentioned, the standard deviations are used to form the weight matrix.
Adjustment Results:
This section includes the adjusted point coordinates with their standard deviations, the absolute error ellipses, the
external reliability values and the adjusted observations. If GNSS baselines have been included in the adjustment,
the baseline vector residuals can also be found in this section. Next to each observation, the W-Test values (and, in the case of GNSS baselines, the T-Test values) are displayed.
Note:
By default, the error ellipses are calculated at the 39.4% confidence level, whereas the 1D-coordinate standard deviations correspond to the 68.3% (1 sigma) confidence level.
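These defaults correspond to the 1-sigma case: a 1-sigma error ellipse covers about 39.4% in 2D and a 1-sigma interval about 68.3% in 1D. If other confidence levels are needed, the 1-sigma values can be rescaled with generic Chi-Square and normal quantiles, for example:

    from scipy.stats import chi2, norm

    # Scale factor from 1-sigma ellipse axes to a chosen 2D confidence level.
    k_2d_95 = chi2.ppf(0.95, df=2) ** 0.5        # about 2.45 (note: chi2.cdf(1, 2) is about 0.394)
    # Scale factor for a 1D standard deviation.
    k_1d_95 = norm.ppf(0.5 + 0.95 / 2.0)         # about 1.96

    semi_major_1sigma = 0.004                    # invented 1-sigma semi-major axis (m)
    print(k_2d_95 * semi_major_1sigma, k_1d_95)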
The reliability of the network is described by the following quantities; a numerical sketch follows their description.
Internal reliability, which is expressed by the Minimal Detectable Bias (MDB). The MDB represents the size of the smallest possible observation error that is still detectable by the statistical test (data snooping) with a probability equal to the power 1-Beta of the test. A large MDB indicates a weakly checked observation or coordinate. Thus, the larger the MDB, the poorer the reliability. If an observation is not checked at all, no MDB can be computed and the observation is marked as a 'free observation'.
External reliability, which is expressed by the Bias to Noise Ratio (BNR). This is a measure to determine the influence of a possible (undetected) error in the observations on the adjusted
coordinates. The BNR of an observation reflects this influence, whereby the size of the observation error is
defined equal to the MDB of that particular observation. The BNR is a dimensionless parameter combining
the influence of a single observation on all coordinates. The BNR can be interpreted as the ratio between
reliability and precision.
An important property of both the MDB and BNR is that they are independent of the choice of the control points.
The Red expresses the redundancy (%) of each observation. The higher the redundancy the more controllable this
observation is. Theoretically, the redundancy of an observation between two control points that are kept fixed
should be 100. On the other hand, the redundancy of a free observation should be 0.
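These quantities are connected by well-known formulas from reliability theory. The sketch below computes redundancy numbers and MDBs for a toy levelling network using the textbook expressions (for uncorrelated observations); the network and standard deviations are invented, so the numbers only illustrate the trend that a small redundancy number leads to a large MDB:

    import numpy as np
    from scipy.stats import norm

    # Toy 1D network: two unknown heights, three observed height differences.
    A = np.array([[1.0, 0.0], [-1.0, 1.0], [0.0, 1.0]])
    sigma = np.array([0.002, 0.002, 0.003])
    P = np.diag(1.0 / sigma**2)

    # Redundancy numbers: diagonal of I - A (A'PA)^-1 A'P; they sum to the degrees of freedom (1 here).
    Qxx = np.linalg.inv(A.T @ P @ A)
    r = np.diag(np.eye(3) - A @ Qxx @ A.T @ P)

    # Non-centrality parameter for Alpha0 = 0.1% and power 1-Beta = 80% (typical B-method values).
    alpha0, power = 0.001, 0.80
    lam0 = (norm.ppf(1 - alpha0 / 2) + norm.ppf(power)) ** 2   # about 17.07

    # Minimal Detectable Bias per observation: large when the redundancy number is small.
    mdb = sigma * np.sqrt(lam0 / r)
    print(r, mdb)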
The W-Test is a one-dimensional statistical test that is suitable for detecting possible outliers. Its critical value depends on the significance level Alpha0.
Essential for the B-method of testing is that an outlier is detected with the same probability by both the F-Test and
the W-Test. For this purpose the power 1-Beta of both tests is fixed on a level of usually 0.80. The level of
significance Alpha0 of the W-test is also fixed, which leaves the level of significance Alpha of the F-Test to be
determined. Having Alpha0 and 1-Beta fixed, Alpha depends strongly on the redundancy in the network. For large
scale networks with many observations and a considerable amount of redundancy, it is difficult for the F-Test to react to a single outlier. The F-Test, being an overall model test, is not sensitive enough for this task. As a
consequence of the link between the F-Test and the W-Test by which the power is forced at 0.80, the level of
significance Alpha of the F-test will increase. Considering the above, it is common practice to always carry out the
data snooping, no matter the outcome of the F-test.
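The link can be made explicit numerically. With Alpha0 and the power fixed, the common non-centrality parameter follows from the W-Test, and the level of significance of the overall test then follows from the degrees of freedom. The sketch below uses the Chi-Square form of the overall test and generic scipy routines; it illustrates the principle rather than Infinity’s implementation:

    from scipy.stats import norm, ncx2, chi2
    from scipy.optimize import brentq

    alpha0, power = 0.001, 0.80                      # fixed W-Test significance and common power

    # Non-centrality parameter shared by the W-Test and the overall test (B-method).
    lam0 = (norm.ppf(1 - alpha0 / 2) + norm.ppf(power)) ** 2

    def alpha_of_overall_test(dof):
        """Significance level of the overall Chi-Square test that keeps the power at 0.80."""
        # Critical value k such that a non-central Chi-Square(dof, lam0) exceeds k with probability 0.80.
        k = brentq(lambda c: ncx2.sf(c, dof, lam0) - power, 1e-6, 1e4)
        return chi2.sf(k, dof)                       # central tail probability above k

    # Alpha grows with the redundancy, as described above.
    for dof in (5, 50, 500):
        print(dof, round(alpha_of_overall_test(dof), 3))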
The W-Test works well for single observations, e.g. directions, distances, zenith angles, azimuths and height
differences. However, for some observations such as GNSS baselines, it is not enough to test the DX-, DY-, DZ-
elements of the vector separately. It is imperative to test the baseline as a whole as well. For this purpose the T-
Test is introduced. Depending on the dimension of the quantity to be tested, the T-Test is a 3- or 2-dimensional
test. As with the W-Test, the T-Test is also linked to the F-Test by the B-method of testing. The T-Test has the same
power as both F- and W-Tests, but it has its own level of significance and its own critical value.
The Estimated Error is the size of the error responsible for the rejection of an observation or known coordinate. It
is a useful tool, yet it should be handled with care:
As far as the W-Test is concerned, the Estimated Error is based on the conventional alternative hypothesis that just one observation or known coordinate contains an error. Consequently, if more errors are present in the network, the result of the estimation could be meaningless, unless the errors have been made (geographically) far apart.
As far as the T-Test is concerned, the Estimated Error is based on the alternative hypothesis that just one baseline or known station contains an error. Again, if more errors are present in the network, the result of the estimation could be meaningless, unless the errors have been made (geographically) far apart.
The test results and Estimated Errors are only meaningful when observational errors have been filtered out in the foregoing free adjustment and testing phase. This is why you always need to run an inner- or a minimally constrained adjustment before adding the control points.
Notes:
2. The MDB and the BNR values are affected by the selection of the weight matrix.
3. Always strive to retain sensibly small MDB values, or else the network might become very insensitive to outliers.
5. There might be cases where an observation is marked as a possible outlier, but the estimated error is small, even smaller than the MDB value for this observation. In such cases, it is up to the user to decide whether the observation should be kept or removed.
5. Proposed workflow for running network adjustments
The proposed workflow for running network adjustments is shown in the following chart:
[Workflow chart. Decision boxes: “Blunders are detected”, “Estimated Errors are big”, “F-Test fails”. Action boxes: “Remove 1 observation with max absolute T- or W-test value”, “Remove Control Point with max absolute T- or W-test value and with big Estimated Error”, “Store Result”.]
Leica Geosystems AG
Heinrich-Wild-Strasse
CH-9435 Heerbrugg
Switzerland
Phone +41 71 727 31 31
www.leica-geosystems.com