
Article

Effective Strategies for Mitigating the “Bowl” Effect and Optimising Accuracy: A Case Study of UAV Photogrammetry in Corridor Projects

Sara Ait-Lamallam 1,*, Rim Lamrani 1, Wijdane Mastari 1 and Mehdi Kechna 2

1 Department of Cartography and Photogrammetry, College of Geomatics and Surveying Engineering, Agronomic and Veterinary Institute Hassan II, Rabat 10101, Morocco
2 MILLENIUM TOPO CHARIF MED, Surveying Company, Rabat 12200, Morocco
* Correspondence: [email protected]

Abstract: UAV-enabled corridor photogrammetry is applied to survey the sites of linear transport infrastructure projects. Corridor flight missions cause a misalignment of the point cloud called the “bowl” effect. The purpose of this study is to offer a methodology based on statistical compensation methods to mitigate this effect and to improve the accuracy and density of the generated point cloud. The post-processing of the aerial images was carried out by varying the aerotriangulation methods. Subsequently, the accuracy was improved by integrating the coordinates of ground control points (GCPs) through different spatial distributions. Finally, Mean and RANSAC compensations were proposed to address the errors induced by the “bowl” effect on the coordinates of the images’ perspective centres (PCs). The findings indicate that optimised aerotriangulation using Post-Processed Kinematic (PPK) data contributes significantly to reducing the “bowl” effect. Moreover, the pyramidal spatial distribution of GCPs improves accuracy to the centimetre level. The Mean compensation method yields the best accuracy and helps to optimise on-site survey time and computing resources. RANSAC compensation also improves accuracy and allows the retrieval of a point cloud that is five times denser. Furthermore, the results give better accuracy than some current approaches.

Keywords: UAV photogrammetry; “bowl” effect; corridor flight; point cloud

Academic Editor: Giordano Teza
Received: 8 April 2025
Revised: 16 May 2025
Accepted: 19 May 2025
Published: 22 May 2025

Citation: Ait-Lamallam, S.; Lamrani, R.; Mastari, W.; Kechna, M. Effective Strategies for Mitigating the “Bowl” Effect and Optimising Accuracy: A Case Study of UAV Photogrammetry in Corridor Projects. Drones 2025, 9, 387. https://doi.org/10.3390/drones9060387

Copyright: © 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction

Geospatial data acquisition is facing increasing demands in terms of accuracy, speed and efficiency. Given the scope of these growing needs, aerial photogrammetry is considered an efficient solution to collect data over large areas in a limited timeframe while ensuring optimal accuracy [1].

The use of UAVs in photogrammetry is becoming an effective and practical method for generating cartographic deliverables, such as point clouds and Digital Terrain Models (DTMs), especially in linear projects. UAV use is increasing due to its ability to generate survey outputs quickly and at low cost, thus providing an advantageous alternative to traditional methods of mapping and surveying [2]. Thanks to its low flight heights, this technology offers high spatial resolution and good radiometric homogeneity compared to images taken by photogrammetric aeroplanes or satellites [3].

In photogrammetry, a “corridor project” or “linear project” refers to a long and narrow geographic area where detailed mapping and analyses occur. Such a project is characterised by a width that ranges from hundreds to thousands of metres and a length of several
kilometres. The corridor project area is usually linked to infrastructure such as roads, railways,
pipelines or utility lines [4–8]. The linearity and continuity of these elements require special
methodologies for mapping and analysis. The spatial distribution and number of ground
control points (GCPs) are essential components of these methodologies that should be
considered to achieve high-accuracy measurements [2].
To execute an infrastructure project in a corridor, the process unfolds in several distinct
stages. The routing and preliminary conceptual design stages are the second prime phase
of these projects and precede detailed design [9,10]. These stages’ objective is to provide a
basic spatial layout of the primary alignments and assess whether the sites’ topography
and characteristics respect the engineering standards [11]. When the accuracy of the
topographical survey of these projects’ sites is high, the stakeholders can produce more
accurate technical calculations, such as excavation and embankment. For the stage of
routing and preliminary conceptual design, the surveyed 3D points’ accuracy must be
better than 50 cm [12]. However, the corridor projects, when surveyed by UAVs, face many
challenges caused by their linearity. These challenges generally lead to non-compliance
with the accuracy requirements for the routing and conceptual design purposes.
UAVs substantially reduce the required time to obtain accurate and detailed data,
even for difficult-to-access areas. This is crucial to cover linear projects’ spanning and
inaccessible areas [13]. According to a study conducted by [14], UAVs minimise human
intervention while maximising effective data collection for corridor projects. UAVs also
offer beneficial technological synergy for modelling both urban and natural corridors. They
offer valuable perspectives for the design and maintenance of infrastructures as well as for
the management and protection of natural resources [15]. Indeed, by integrating LiDAR
technology with UAV photogrammetry, it is possible to produce DTMs and digital surface
models (DSMs) with high accuracy. Numerous studies have illustrated the application of
UAVs in linear projects. Ruzgienė et al. [16] use UAV photogrammetry for road surface
modelling purposes to demonstrate the significant advantages of this technology in the case
of linear infrastructure. The accuracy of the measured 3D points on the road was 2.94 cm.
Nahon et al. [15] map the coastal dunes in France by deploying UAVs equipped with both
photogrammetry and LiDAR technology at Cap Ferret. The objective of the study is to
monitor environmental impacts in this touristic and ecologically sensitive area. The study
allows the measurement of erosion with a 15 cm value for vertical accuracy. In a study
conducted by Abd Mukti et al. [17] in urban environments, UAVs proved their effectiveness
by mapping a wide road reserve. The accuracy of measurement reached 2.19 cm for points’
altitude. The UAV also allows great manoeuvrability in congested areas. Singh et al. [18]
use a set of semi-autonomous UAVs to generate precise geospatial results in a linear project.
The accuracy of their 3D results reached 2.3 cm, demonstrating significant effectiveness
compared to traditional methods.
Corridor UAV surveys are generally considered problematic due to many challenges
caused by their linearity. First, the continuity of linear structures does not allow homoge-
neous coverage to be obtained along the project. This increases the risk of having gaps in
the collected data and compromises their reliability [19]. Second, linear and parallel flight
configurations can restrict camera angles and flight orientations, reducing image redun-
dancy, which is essential for having cartography that reflects real-world conditions [20].
Then, having a long and thin geometry of the corridor is not optimal for matching aerial
images because they can only be taken in a linear sequence [21]. In several cases, this
leads to recommending oblique images for the survey to avoid any decline in accuracy
of photogrammetric surveys [20]. Indeed, the integration of this type of image improves
the quality of generated 3D models [22]. Also, the linear aspect of corridor projects can
complicate georeferencing due to the difficulty in obtaining more stable and homoge-
neously distributed GCPs along wide structures [23]. Additionally, the limitations of IMUs,
based on micro-electromechanical systems (MEMSs) technology, degrade the quality of
height data, notably in relation to yaw angle in the case of corridors flights [24]. Finally,
an accumulation of lengthwise deformations is observed in corridor projects surveyed
using UAVs. This accumulation of errors, which mainly concerns the longest dimension,
is caused by camera calibration inconsistencies. This may induce salient deformations,
particularly because of erroneous and variable focal lengths, the effect of the rolling shutter,
and the influence of the rotational movement [25].
Among the length distortions that are observed in UAV shots in corridor projects,
the “bowl” effect emerges. It refers to a distortion that appears on the surface of the
produced data, such as point clouds or DTMs, characterised by a bowl-shaped deformation
in the output point cloud [26]. Several products derived from vertical UAV images taken
in corridors show large-scale systematic deformations, shown in the form of a central
bowl [27,28]. The flatter the area, the greater the accumulation of errors, introducing strong
correlations between parameters, which consequently decreases the accuracy of results in
corridor configurations compared to traditional configurations [29].
The “bowl” effect is a hindrance in the photogrammetric workflow. It is caused by
the accumulation of camera calibration errors or a combination of near-parallel imaging
directions and inaccurate correction of the lens radial distortion [30]. These systematic
errors can be caused by an unstable rolling shutter affecting georeferencing accuracy and
inaccurate self-calibration of the software based on Structure from Motion (SFM) without
the use of GCPs [31]. Wackrow and Chandler [32] demonstrate how such deformation is
associated with the processing of image sets with predominantly parallel viewing direc-
tions. They also demonstrate that this deformation relates to inaccuracies in the modelling
of the camera lens’s radial distortion. Their study analyses this correlation and shows
how variations in distortion parameters can introduce errors. As an example, Wackrow
and Chandler [33] examine how a biased lens model or wrong georeferencing can cause
systematic errors in DTMs. Errors are caused by the inability of the adjustment process to
correctly model the real shape of the field. The study of Wackrow and Chandler [33] shows
that the radial distortion parameter k1 can be adjusted to simulate the effects of a distorted
model using a virtual simulator.
The “bowl” deformation appears because the variation in points’ altitudes is symmet-
rical and co-centred. It results in an underestimation of altitudes in the centre of the studied
area and an overestimation on its edges, or vice versa. Barazzetti et al. [34] estimate the
impact of the “bowl” effect on a DTM by adding random offsets to the altitudes calculated
for each 3D point in order to represent the measurement noise. The introduced offsets
present the “bowl” effect. James and Robson [30] implemented simulations without any
control measurements using an internal constraint method so that the “bowl” effect and
other systematic errors can directly affect the accuracy of the coordinate’s measurements.
The variation in points’ altitudes due to the “bowl” effect is proportional to the cube
of the radial distance. It affects the adjusted altitude of the points, as explained by Equation
(1) [35]. In this equation, h0 is the initial altitude at point (x, y), k1 the radial distortion
parameter, and r the radial distance from this point to the centre of the image.

h(x, y) = h0 + k1·r³ (1)

Wackrow and Chandler [33] studied how a biased lens model or wrong georeferencing
workflow can cause the “bowl” effect at the DTM level. Similarly, they showed that the
radial distortion parameter k1 can be adjusted to simulate the effects of a distorted model
using a virtual simulator. Thus, radial distortion incorporated in the “bowl” effect can be
described by Equation (2) [36]. In this equation, r is the radial distance to the image centre
and k1 is the radial distortion parameter.

∆r = k1·r³ (2)
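
For illustration, a minimal numerical sketch of Equations (1) and (2) is given below; the k1 value and the radial distances used are arbitrary assumptions for the example, not values reported in this study.

# Minimal sketch of Equations (1) and (2): the "bowl" deformation grows with the
# cube of the radial distance r. The k1 value and radii below are illustrative
# assumptions, not values from this study.
def bowl_altitude(h0: float, k1: float, r: float) -> float:
    """Adjusted altitude h(x, y) = h0 + k1 * r**3 (Equation (1))."""
    return h0 + k1 * r**3


def radial_distortion(k1: float, r: float) -> float:
    """Radial displacement delta_r = k1 * r**3 (Equation (2))."""
    return k1 * r**3


if __name__ == "__main__":
    h0, k1 = 380.0, 1e-8            # hypothetical initial altitude (m) and k1
    for r in (0.0, 500.0, 1000.0):  # hypothetical radial distances
        print(r, bowl_altitude(h0, k1, r), radial_distortion(k1, r))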

Several solutions exist to resolve the “bowl” effect. These solutions provide approaches
that can be categorised as follows: 1/ procedural approaches based on the optimisation of
flight parameters; 2/ geometric approaches based on camera calibration methods and the
improvement of camera distortions; or 3/ mathematical approaches based on the error’s
compensation at the level of image coordinates. Table 1 summarises examples from the
literature review carried out as part of this study with regard to these different solutions.

Table 1. Recap of solution options to resolve the “bowl” effect.

Study Reference | Approach Type | Description
Abd Mukti and Tahar (2021) [17] | Procedural | Urban road mapping. Optimisation of flight parameters and camera settings. Spatial distribution of GCPs.
Elsheshtawy and Gavrilova [24] | Mathematical | Coordinates of images’ PC obtained by GNSS. Compensation of images’ PC coordinates by averaging. Use of checkpoints for error evaluation.
Huang et al. [26] | Geometric | Analysis of distortion patterns. Adjustment using GNSS observations.
Ferrer-González et al. [2] | Procedural | Evaluating the impact of GCP number and distribution on the accuracy.
Jaud et al. [19] | Procedural and Geometric | Survey of linear coastal reliefs. Variation in flight plans. Use of different distortion models. Comparison between two software programs.
Molina et al. [37] | Procedural | Improved measurement accuracy with optimised GCPs and air traffic control parameters.
James and Robson [30] | Procedural | Simulation of multi-image networks. Use of the k1 distortion parameter to reduce systematic errors. Added oblique images.

The studies based on procedural and/or geometric approaches show that it is neces-
sary to determine the optimal GCP number and distribution. This leads to the optimisation
of accuracy as well as the field work timeframe. However, the deployment of some GCP
spatial distributions is strongly contingent on the available resources and on environmen-
tal constraints. At the same time, the self-calibration method with a single GCP, using
high-accuracy GNSS observations, helps to lessen field effort but can occasionally decrease
accuracy due to the low number of GCPs. In addition, flight plans optimised with oblique
images greatly reduce systematic errors. This solution is quite practical for improving the
accuracy but requires additional captures, which increases the time of field missions as well
as the processing time. Thereby, existing procedural and/or geometric approaches in the
literature vary in terms of application and technical requirements. On the one hand, some
require fewer GCPs but additional technologies. On the other hand, some are based on the
optimisation of camera distortion models. Alternatively, others are working on optimising flight parameters and varying camera angles.
Studies that use a mathematical approach to resolve the “bowl” effect are infrequent.
Further, even fewer studies that combine all three approaches to resolve this problem
exist. Accordingly, this study seeks to propose a practical methodology based on statistical
compensation methods to mitigate the “bowl” effect and improve the accuracy and density
of photogrammetric results for UAV flights on corridor infrastructure projects. The method-
ology is based on a combination of procedural, geometric and mathematical approaches.
In the following sections of this paper, the methodology of this study is presented as
well as the results obtained. Then, the results are discussed to establish effective practices for
aerial photogrammetry by UAVs in corridor projects with the aim of mitigating the “bowl”
effect on photogrammetric deliverables. The methodology proposed aims to respond
to accuracy requirements, focusing solely on routing and preliminary conceptual design
stages for corridor projects. The main contributions of this paper are as follows: 1/ to
propose RANSAC compensation as a mathematically grounded yet underutilised approach
to mitigate the “bowl” effect; 2/ to test the combination of procedural, geometric and
mathematical approaches to reach better accuracies for similar projects in comparison with
current approaches; and 3/ to suggest practical methods to improve fieldwork efficiency
while mitigating the “bowl” effect for corridor projects using UAVs.

2. Materials and Methods


2.1. Methodology
To mitigate the “bowl” effect in UAV photogrammetry for corridor projects, a four-
phase methodological framework is proposed. Its main steps are presented in Figure 1.
The first phase is an aerial mission in both Real-Time Kinematic (RTK) and Post-Processed
Kinematic (PPK) modes. In this phase, the aerial images are captured, and the coordinates
of their perspective centres (PCs) are observed using the two modes RTK then PPK. A
GNSS station on the ground is used for surveys.

Figure 1. Overall methodology.

The second phase pertains to surveying the GCPs on the ground. Four GCP spatial
distributions are carried out to meet the comparison objectives of this research. Zigzag,
pyramid and pair distributions are conducted to identify the best spatial distribution for
corridor projects. These spatial distributions demanded significant time spent on field
surveying. For this reason, a fourth spatial distribution is established. It is a 5-GCP
distribution combined with a coordinate compensation method. It aims to study the
effect of reducing the number of GCPs, when paired with a compensation method, on the
accuracy for this kind of project. Opting for 5 GCPs is based on two criteria: 1/ 5 is the
minimum number of GCPs (when combined with accurate calibration of the camera) used
in the reviewed literature to obtain accurate results [38,39]; 2/ to allow for comparison with
some existing works that also use a 5-GCP configuration in their experiments. These works
were conducted by Amr et al. [23] and Elsheshtawy and Gavrilova [24].
The third phase involves the photogrammetric calculations. Aerial images are pro-
cessed by aerotriangulation. This workflow is based on the mathematical principle of
Bundle Adjustment (BA) [40]. Then, the tie points are extracted using the Scale-Invariant
Feature Transform (SIFT) algorithm [41]. Finally, the matching of images is carried out by
the Structure from Motion (SFM) algorithm [42]. This step aims to obtain a block of georefer-
enced and scaled images. During this phase, a comparative approach is adopted, involving
the simultaneous implementation of two sub-steps. First, the standard aerotriangulation
consists of matching aerial images using the RTK coordinates of the PC. Then, optimised
aerotriangulation involves both using the PPK coordinates of the PC and re-calculating the
camera’s auto-calibration parameters repetitively using the initial parameters as pseudo-
observations. The optimised aerotriangulation applied to the 5-GCP project is the reference
calculation that is used to compare results.
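
For illustration, the tie-point extraction and matching step (performed internally by Pix4D in this study) can be sketched with OpenCV’s SIFT implementation and Lowe’s ratio test; the image file names and the ratio threshold below are assumptions for the example and do not reflect the software’s internal settings.

import cv2  # OpenCV's SIFT implementation (cv2.SIFT_create) and brute-force matcher


def match_tie_points(path_a: str, path_b: str, ratio: float = 0.75):
    """Extract SIFT keypoints in two overlapping images and keep the matches
    that pass Lowe's ratio test. This sketches the tie-point step only; in this
    study, Pix4D performs tie-point extraction and bundle adjustment internally."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des_a, des_b, k=2)
            if m.distance < ratio * n.distance]
    return kp_a, kp_b, good


# Example call with hypothetical image names:
# kp_a, kp_b, matches = match_tie_points("img_0001.jpg", "img_0002.jpg")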
The fourth phase consists of compensation for the PC coordinates. Figure 2 gives an
overview of the specific steps of this phase. This phase is conducted using a mathematical
approach where two statistical methods are used. The first method is the Mean compen-
sation presented in the literature by Elsheshtawy and Gavrilova [24], while the second
method is that of RANdom SAmple Consensus (RANSAC). This method is specifically
used to overcome common problems in image matching algorithms in photogrammetry
and computer vision [43]. However, its use in the literature is limited for PC coordinate
compensation for errors. This finding prompted further investigation into the potential
of this statistical method to resolve or mitigate the “bowl” effect. After the compensa-
tion step, the results are compared to the reference calculation achieved using the 5-GCP
configuration project.

Figure 2. Overall overview of PC coordinate compensation.


The detailed steps of the Mean compensation are illustrated in Figure 3. Its algorithm was developed within this study using Python 3.12.0 and is summarised in Algorithm 1. The calculation is carried out considering the means (Dφ_av, Dλ_av and DH_av) of the differences DX_(i,j) obtained between the coordinates XR_i of the PCs calculated through the standard aerotriangulation and the coordinates XG_(i,j) obtained through the optimised aerotriangulation. The results of the Mean compensation are the compensated coordinates (φN_i, λN_i, HN_i) of the images’ PCs.

Algorithm 1. Algorithm table for the Mean compensation used

Require: images’ PC coordinates calculated after the standard aerotriangulation, XR_i = (φ_i, λ_i, H_i), i = 1, …, n;
         images’ PC coordinates calculated after the optimised aerotriangulation (over several iterations j), XG_(i,j) = (φ′_i, λ′_i, H′_i)_j, j = 1, …, p;
         n: number of images; p: number of iterations of the optimised aerotriangulation.
Output:  deviations DX_(i,j); mean deviations (Dφ_i,av, Dλ_i,av, DH_i,av); compensated coordinates (φN_i, λN_i, HN_i).

1: for i = 1 to n and j = 1 to p do            // compute offsets
       DX_(i,j) = (φ′_i − φ_i, λ′_i − λ_i, H′_i − H_i)_j
2: for i = 1 to n do                           // compute average offsets over the p iterations
       Dφ_i,av = (1/p) Σ_{j=1..p} (φ′_i − φ_i)_j
       Dλ_i,av = (1/p) Σ_{j=1..p} (λ′_i − λ_i)_j
       DH_i,av = (1/p) Σ_{j=1..p} (H′_i − H_i)_j
3: for i = 1 to n do                           // apply compensation
       φN_i = φ_i + Dφ_i,av
       λN_i = λ_i + Dλ_i,av
       HN_i = H_i + DH_i,av
4: end
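
For illustration, a minimal runnable sketch of this Mean compensation is given below, assuming the PC coordinates are stored in NumPy arrays; the array names, shapes and example values are assumptions and do not reproduce the study’s own Python 3.12 implementation.

import numpy as np


def mean_compensation(xr: np.ndarray, xg: np.ndarray) -> np.ndarray:
    """Mean compensation of the images' PC coordinates (sketch).

    xr: (n, 3) array of (lat, lon, H) from the standard aerotriangulation.
    xg: (p, n, 3) array of the same coordinates from p iterations of the
        optimised aerotriangulation.
    Returns the (n, 3) compensated coordinates.
    """
    offsets = xg - xr[None, :, :]        # DX_(i,j): per-image, per-iteration deviations
    mean_offsets = offsets.mean(axis=0)  # average offsets over the p iterations
    return xr + mean_offsets             # compensated PC coordinates


# Tiny usage example with made-up values (2 images, 2 iterations):
xr = np.array([[34.000, -6.500, 380.0],
               [34.001, -6.501, 381.0]])
xg = np.stack([xr + 0.001, xr + 0.002])
print(mean_compensation(xr, xg))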

RANSAC compensation is a predictive statistical method that provides an estimate of the probability of obtaining reliable predictions [44]. The RANSAC model used here is a regression fit, and the algorithm applied is the one proposed by [45]. The steps of the algorithm are detailed in Figure 4, and Algorithm 2 displays an extract of the algorithm table. This algorithm focuses on gradually discarding mismatched points. RANSAC first selects an initial set of random samples (points) and solves for the model parameters. The inlier threshold t is fixed to the median absolute deviation (MAD) of the target values φ′_i. Then, the algorithm checks the number of inliers and compares the number of iterations k to the maximum allowed M. If the inlier ratio is greater than that of the previous iteration, the number of inliers e is updated. This procedure continues until the number of iterations reaches M.

Figure 3. Detailed steps of Mean compensation approach.

Algorithm 2. Extract from the RANSAC compensation algorithm table used to compensate images’ PC latitudes

Require: images’ PC coordinates calculated after the standard aerotriangulation, XR_i = (φ_i, λ_i, H_i), i = 1, …, n;
         images’ PC coordinates calculated after the optimised aerotriangulation, XG_(i,j) = (φ′_i, λ′_i, H′_i)_j, j = 1, …, p;
         n: number of images; p: number of iterations of the optimised aerotriangulation;
         t(φ): distance threshold to the model; M: maximum number of iterations; e: number of inliers.
Output:  mean deviations on the images’ PC latitudes Dφ_i,av; compensated latitudes φN_i.

1: initialise k = 0, e = 0
2: while k <= M do
       for i = 1 to n and j = 1 to p do
           read XR_i and XG_(i,j)                                   // define feature set and targets
           fit RANSAC regressor (t(φ), e, M) to the XG_(i,j) points // train the RANSAC model for latitude
           MAD(φ′_i) = median(|φ′_i − median(φ′_i)|)
           if MAD(φ′_i) <= t(φ) then                                // define inliers and outliers
               inlier_φi ← φ′_i; k = k + 1; e = e + 1
           else
               outlier ← φ′_i; k = k + 1
3: Dφ_i,av = (1/e) Σ_inliers (inlier_φi − φ_i)                      // compute average offsets on the inliers
4: for i = 1 to n do
       φN_i = φ_i + Dφ_i,av                                         // apply compensation
5: end
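
For illustration, a minimal sketch of the latitude compensation using scikit-learn’s RANSACRegressor (the implementation that reference [45] points to) is given below. The choice of the initial latitude as the regression feature, the array names and the example values are assumptions; by default, RANSACRegressor sets the inlier residual threshold to the MAD of the target values, which matches the threshold choice described above.

import numpy as np
from sklearn.linear_model import RANSACRegressor


def ransac_latitude_compensation(phi: np.ndarray, phi_prime: np.ndarray,
                                 max_trials: int = 100) -> np.ndarray:
    """Sketch of a RANSAC-based compensation of the images' PC latitudes.

    phi:       (n,) latitudes from the standard aerotriangulation.
    phi_prime: (n,) latitudes from the optimised aerotriangulation.
    """
    features = phi.reshape(-1, 1)                 # assumed feature: initial latitude
    model = RANSACRegressor(max_trials=max_trials)
    model.fit(features, phi_prime)                # regress optimised on initial latitudes
    inliers = model.inlier_mask_                  # boolean mask of inlier images
    d_phi_av = np.mean(phi_prime[inliers] - phi[inliers])  # average offset on the inliers
    return phi + d_phi_av                         # compensated latitudes


# Usage with made-up values (one simulated grossly mis-estimated image):
rng = np.random.default_rng(0)
phi = np.linspace(34.00, 34.01, 20)
phi_prime = phi + 1e-5 + rng.normal(0.0, 1e-7, phi.size)
phi_prime[3] += 1e-2
print(ransac_latitude_compensation(phi, phi_prime)[:3])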

Figure 4. Detailed steps of RANSAC compensation approach.

To validate the research results, a visual identification of the “bowl” effect correction is performed first. Then, an analysis of the Root Mean Square Error (RMSE) at the checkpoints is carried out. Afterwards, the results of the two compensation methods are compared. Table 2 synthesises the three tests conducted to obtain and validate
the results. Test 1 refers to the integration of GCPs into the standard aerotriangulation
calculations. It aims to visually assess the impact of GCPs only on the “bowl” effect result.
Test 2 refers to applying optimised aerotriangulation with the use of PPK coordinates of
the images’ PC. It allows the exploration of the impact of changing the aerotriangulation
approach on the “bowl” effect results. Test 3 is built upon a mathematical model, which, in
addition to the two previous procedural and geometric tests, aims to increase the accuracy
of the results obtained. Thereby, error compensation on the PPK coordinates of the images’
PC is carried out by two statistical methods: Mean and RANSAC compensations. This last
test also aims to explore whether it is possible to reduce the GCP number by keeping an
accuracy threshold that meets the accuracy requirements of the routing and preliminary
conceptual design stages for corridor projects.
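
For reference, a generic sketch of how per-axis checkpoint RMSE values such as those reported below can be computed is given here; the function name and the example residuals are illustrative assumptions, not values or tooling from this study.

import numpy as np


def rmse_per_axis(measured: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Checkpoint RMSE per axis (x, y, z), in the units of the inputs.

    measured, reference: (n_checkpoints, 3) arrays of coordinates.
    """
    diff = measured - reference
    return np.sqrt(np.mean(diff ** 2, axis=0))


# Example with made-up checkpoint residuals (metres):
measured = np.array([[0.03, 0.02, 0.10],
                     [0.05, 0.01, 0.08],
                     [0.02, 0.04, 0.12]])
reference = np.zeros_like(measured)
print(rmse_per_axis(measured, reference))  # per-axis RMSE in metres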

Table 2. Tests realised to mitigate the “bowl” effect.

Test | Test Type | Test Description
Test 1 | Procedural | Standard aerotriangulation only. GCP integration: 3 spatial distributions. Images’ PC coordinates: RTK mode.
Test 2 | Procedural and Geometric | Optimised aerotriangulation. GCP integration: 3 spatial distributions. Images’ PC coordinates: PPK mode.
Test 3 | Mathematical | Mean compensation and RANSAC compensation for images’ PC. Optimised aerotriangulation with 5 GCPs as reference calculation. GCP integration: 5 GCPs only, after Mean and RANSAC compensation.

2.2. Study Site


The suggested methodology is applied to the High-Speed Railway project in Morocco
at the section linking the two cities of Kenitra and Rabat. The project is in its routing and
preliminary conceptual design stages. The project site is characterised by a curved shape
and a mildly uneven terrain in some zones. The site’s altitudes vary between 360 m and
395 m. Two UAV corridor missions were executed. The missions’ technical characteristics
are described in Table 3. Figure 5 illustrates the geographic location of the corridor, the
areas covered by the two flight missions and the positions of all GCPs used in the project.

Table 3. Technical characteristics of the missions.

Mission | Description | Number of Images | Flight Height/Ground Sample Distance (GSD) | Overlaps
Mission 1 | Length: 10 km; Width: 294 m | 653 | 233 m/3 cm | Forward: 60%; Side: 65%
Mission 2 | Length: 10 km; Width: 300 m | 668 | 233 m/3 cm | Forward: 60%; Side: 65%

Figure 5. Geographical location of the study site.



2.3. Resources
For the aerial missions, the UAV used was the Trinity F90+ [46]. It is a fixed-wing UAV
with Vertical Take-off and Landing (VTOL). It is equipped with an integrated Sony RX1RII
Camera. Table 4 describes the technical specifications of this camera.

Table 4. Technical characteristics of the camera.

Camera Characteristic | Description
Model | Sony RX1R II (DSC-RX1RM2)
Sensor | CMOS Exmor R BSI
Bands | R, G, B
Lens | Type: fixed; focal length: 35 mm; maximum aperture: f/2.0; FOV: 63°
ISO range | 100–25,600
Shutter speed | 1/4000 s to 30 s, mechanical
Frame rate | Up to 1/32,000 s, electronic

The adopted GCP spatial distributions in this study are illustrated in Figure 6. Each
spatial distribution allows a different accuracy of the results. To obtain the PPK coordinates
of the images’ PC, QBase software (version 2.35.40) [47] is used.

Figure 6. Spatial distribution of GCP and checkpoints: 18 GCPs for pyramid distribution (a), 17 GCPs
for zigzag distribution (b), 14 GCPs for pair distribution (c) and 5 GCPs for the reference calcula-
tion (d).

For photogrammetric calculations, Pix4D software (version 1.75.0) [48] is deployed. It helps to carry out the two sub-steps of aerotriangulation and the point cloud extraction from the images. Finally, for the last phase of the methodology, the Google Colab platform [49] is used. It allows applying the two statistical algorithms, RANSAC and Mean, for the compensation of the PC coordinates.

3. Results
The calculation of aerotriangulation using the standard method shows the “bowl”
effect. This effect is observed only in mission 2. Figure 7 illustrates these results for the
two UAV missions. This is attributable to multiple reasons. First, and in addition to the
restricted viewing angles required by corridor shooting, the field covered by mission 2 is
characterised by low contrast in several images, as described in Figure 8. Then, the specific
environmental features of the site used in mission 2 can explain this discrepancy. Even
with the same flight parameters and lighting conditions, the richness of textures at this site
produces repeated patterns, which makes it difficult to detect homologous points while
searching for tie points. Additionally, and since the vegetation is dense and in similar hues,
the matching process from successive images remains difficult to precisely accomplish in
mission 2. Finally, pixels that describe the ground are scarce in mission 2 images. This
causes errors in the estimation of flight heights while calibrating the camera parameters
in the aerotriangulation process. All these conditions lead to errors at several levels in the
case of mission 2: 1/ in the calculation of the camera’s auto-calibration parameters; 2/ in
the extraction of tie points; and 3/ in the detection of the tie points’ homologous points.

Figure 7. Standard aerotriangulation result for mission 1 (a) and mission 2 with “bowl” effect (b).

Figure 8. Extracts of images from the field covered by mission 1 (a) and by mission 2 with “bowl”
effect (b).

The first test to mitigate the “bowl” effect is to integrate the different GCP spatial
distributions into the standard aerotriangulation. The results of Test 1 are illustrated in
Figure 9. The visual inspection of these results shows only a slight improvement in the
standard aerotriangulation with the zigzag distribution. However, the “bowl” effect still
exists after this test.
The second test to overcome the “bowl” effect is to change the aerotriangulation
method by using an iterative approach and integrating the PPK coordinates of the images’
PC. During Test 1, it is the RTK coordinates of the images’ PC that are used. In Test 2, an
optimised aerotriangulation is adopted by integrating the PPK coordinates of the PC and
iteratively recalculating the auto-calibration parameters of the camera. In this iterative
calculation, the initial auto-calibration parameters are used as pseudo-observations. The
visual result of Test 2 is illustrated in Figure 10. Visually, the “bowl” effect is mitigated. The
results of the planimetric and altimetric RMSE are given comparatively in Table 5.

Figure 9. Profile view of the standard aerotriangulation results with GCP spatial distribution integra-
tion: zigzag distribution (a), pair distribution (b) and pyramid distribution (c).

Figure 10. Visual result of correcting the “bowl” effect by optimised aerotriangulation.

Table 5. Comparative RMSE results after GCP spatial distribution integration with optimised aerotriangulation.

GCP Spatial Distribution | GCP Number | Checkpoint Number | RMSE of GCPs (cm) on x, y and z | RMSE of Checkpoints (cm) on x, y and z
Pairs | 14 | 6 | 8.0 / 3.1 / 16.0 | 17.4 / 15.1 / 34.5
Zigzag | 17 | 6 | 3.1 / 3.2 / 6.1 | 7.1 / 6.1 / 23.4
Pyramid | 18 | 5 | 3.3 / 3.0 / 1.0 | 5.7 / 5.5 / 9.3

All three spatial distributions offer accurate outcomes that respond to the routing and
preliminary conceptual design stage requirements. From a practical point of view, and to
optimise the duration and costs of field missions for the GCP survey phase, the distribution
of GCPs in pairs remains the most optimal. However, the results of Test 2 indicate that it is
the latter distribution that provides the lowest level of accuracy compared to the others.
The pyramid distribution offers the best accuracy level in our case. Furthermore, the use of
UAVs for corridor projects aims to optimise the entire process. To elaborate on the results,
a third test is carried out in this research.
In Test 3, the number of GCPs is reduced to a minimum of five GCPs for the entire
corridor. The spatial distribution of these GCPs is illustrated above by Figure 6d. Then, an
optimised aerotriangulation with the five GCPs is calculated. The result of this calculation
is taken as a reference. Afterwards, by retaining the same distribution of the five GCPs,
two optimised aerotriangulation calculations are made: The first is completed by applying
Mean compensation to the PPK coordinates of the images’ PC. And the second is performed
by applying the RANSAC compensation method to the same coordinates.
After compensation, the analysis of images’ matching shows an increase in the num-
ber of tie points. Table 6 represents the average number of tie points, highlighting the
compensations’ role in a remarkable increase in the image’s matching points, particularly
for the RANSAC compensation.

Table 6. Image matching results following compensation of PC coordinates by the Mean and RANSAC methods.

Test | Average Match Number per Image | Point Cloud Density (points/m²)
Test 2: Optimised aerotriangulation with GCP pyramid distribution | 1535.23 | 0.34
Test 3: Mean compensation with 5 GCPs | 7184.66 | 1.58
Test 3: RANSAC compensation with 5 GCPs | 7202.01 | 1.59

Figure 11 gives a comparative overview between the point cloud produced following
RANSAC compensation and the one from the reference calculation. These results show
that the RANSAC cloud is significantly denser than the reference one. This is justified by
the partial correction of image positions through this compensation, which allows better
identification of tie points existing on a larger number of images.

Figure 11. Cloud generated through the reference calculation (a) and RANSAC compensation (b).

The results of calculating the RMSE of these different aerotriangulations are reported
in Table 7. The reference calculation gives a vertical RMSE exceeding 50 cm (the accu-
racy threshold for routing and preliminary conceptual design for corridor projects). The
RANSAC compensation method offers significant improvement for the three dimensions,
particularly for the vertical RMSE, which was improved from 54 cm to 32 cm. Furthermore,
the Mean compensation method shows an even clearer improvement, reducing the vertical RMSE by more than 50% (to 23.4 cm). For this reason, this last method represents a good compromise between acceptable accuracy and the minimum number of GCPs to survey in the field.

Table 7. RMSE results after error compensation for images’ PC coordinates.

Calculation | GCP Number | Checkpoint Number | RMSE of GCPs (cm) on x, y and z | RMSE of Checkpoints (cm) on x, y and z
Reference calculation with 5 GCPs | 5 | 6 | 2.3 / 6.7 / 19.2 | 20.1 / 23.3 / 54.6
Mean compensation with 5 GCPs | 5 | 6 | 4.8 / 5.6 / 14.8 | 9.9 / 15.4 / 23.4
RANSAC compensation with 5 GCPs | 5 | 6 | 5.1 / 7.4 / 14.8 | 12.1 / 17.2 / 32.5

For a better understanding of the outcomes of the two compensation methods, we car-
ried out a statistical analysis of the distributions of the deviations between the coordinates
of the estimated and initial PC for each of the two missions.
For mission 1, regarding the longitudes and latitudes of the images’ PC, the visualisation of the deviations’ frequencies showed a symmetric and homogeneous distribution, with deviations on the order of 10⁻⁶, as shown in Figure 12. Altitude deviations are slightly asymmetric but remain homogeneous, without noise or outliers, and do not exceed 1 m.
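
As a brief, generic sketch, deviation distributions of this kind can be inspected with NumPy and Matplotlib as follows; the array names and example values are assumptions, and the figures in this paper were not necessarily produced this way.

import numpy as np
import matplotlib.pyplot as plt


def plot_deviation_histograms(initial: np.ndarray, estimated: np.ndarray) -> None:
    """Histogram the per-image deviations between initial and estimated
    PC coordinates, one panel per component (longitude, latitude, altitude)."""
    labels = ("longitude", "latitude", "altitude")
    deviations = estimated - initial               # shape (n_images, 3)
    fig, axes = plt.subplots(1, 3, figsize=(12, 3))
    for k, ax in enumerate(axes):
        ax.hist(deviations[:, k], bins=30)
        ax.set_title(f"Deviation in {labels[k]}")
    plt.tight_layout()
    plt.show()


# Usage with made-up coordinates for 100 images:
rng = np.random.default_rng(0)
initial = np.column_stack([np.full(100, -6.5), np.full(100, 34.0), np.full(100, 380.0)])
estimated = initial + rng.normal(0.0, [1e-6, 1e-6, 0.3], size=(100, 3))
plot_deviation_histograms(initial, estimated)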


Figure 12. Deviation statistical distribution of mission 1 PC for longitude (a), latitude (b) and
altitude (c).

The 3D visualisation presented in Figure 13 for each of the coordinates of the initial and
estimated PC validates the results of Test 3. We clearly notice the homogeneous distribution
of deviations throughout the cloud without any noise or outlier.

Figure 13. Statistical representation of initial and estimated PC coordinates for mission 1.

For mission 2, where the “bowl” effect is noticed, the distribution of the deviations
in latitudes, longitudes and altitudes is relatively normal and symmetrical, as shown in
Figure 14. The differences are more visible for the altitudes; however, the altitude deviations do not present outlier values. The coordinate plot in a 3D reference frame, presented in Figure 15 for mission 2, shows a significant deviation in altitude; this deviation is, however, uniformly distributed over all points of the cloud. The altitude deviations for mission 2 are due to errors in calculating flight heights while calibrating the camera parameters for aerotriangulation. The conditions that lead to this issue are explained at the beginning of the Results section.


Figure 14. Deviation statistical distribution of mission 2 PC for longitude (a), latitude (b) and
altitude (c).

Figure 15. Statistical representation of initial and estimated PC coordinates for mission 2.

4. Discussion
The results of this study show two levels of mitigating the “bowl” effect and increasing
the 3D accuracy of results in the case of UAV photogrammetry in corridors. In the first level,
procedural and geometric approaches are conducted. The aerotriangulation procedure
and the GCP spatial distribution are varied. The “bowl” effect is mitigated and the 3D
accuracies are acceptable in relation to the requirement value. In the second level, the goal
is to explore the reduction in field work on surveyed GCPs. The number of the latter is
reduced and two mathematical compensations are applied to give an optimal approach to
conduct UAV flights in corridors.
Aerotriangulation methods play a major role in correcting the “bowl” effect for cor-
ridor flights. Their ability to reduce errors is apparent through the calculation of internal
and external camera parameters. These parameters are used for image geolocation and
distortion correction. In this paper, the variation in the standard aerotriangulation method
towards an optimised approach is carried out.
The improvement of the altimetric and planimetric accuracies of the point clouds
is achieved by the integration of several GCP spatial distributions while calculating the
optimised aerotriangulation. The tests carried out show a pronounced improvement in the
RMSE, particularly at the altitude level. Its value decreased from 34.5 cm for the GCP pair
distribution to 11.8 cm for the GCP zigzag distribution and finally to 9.3 cm for the GCP
pyramid distribution. The improvement in the horizontal RMSE is also notable, ranging
from 23 cm for the GCP pair distribution to 7.9 cm for the GCP pyramid distribution. The
latter offers the best results’ accuracy among all the tests carried out in our research. The
3D accuracy reached by varying the GCP spatial distributions is sufficient to respond to the
accuracy requirements.
However, the pyramid distribution required 18 GCPs distributed over the entire
corridor. This demanded an increased investment of time and effort on the surveyed site. In
order to propose an optimal process for aerial photogrammetry by UAVs in corridor cases,
this study proposes the integration of a mathematical approach. The latter draws upon
the compensation of errors at the level of the images’ PC coordinates. The two statistical
methods proposed are Mean and RANSAC compensations.
Concerning image matching, the implementation of the proposed compensations results in a roughly fivefold increase in image matches compared with the reference calculation. The more accurate images’ PC coordinates resulting from the compensation also allow a much denser point cloud to be obtained. This induces an increase in the overlap area of the images and therefore a more productive search for homologous points between sequential images.
RANSAC compensation allows a decrease in vertical RMSE of 40% and a refinement
of horizontal RMSE of 32%. The vertical RMSE obtained is 32 cm. Mean compensation
gives better results. The RMSE is reduced by 57% in altitude and by 43% in planimetry. The
RMSE obtained with Mean compensation is 23 cm. The efficiency of Mean compensation,
in our case, is explained by the statistical distribution of the deviations between the surveyed PC coordinates and the calculated ones. This distribution was homogeneous and quasi-normal, which makes the RANSAC method less advantageous here. In fact, the RANSAC method proves most effective when the number of outliers is large [43], which was not the case in our study.
Compared to existing approaches, our proposed methodological framework gives
better results. Amr et al. [23] and Elsheshtawy and Gavrilova [24] used a Linear Relation
Model to correct images’ PC coordinates. This model is based on Mean compensation to
enhance images’ PC coordinates. When using the same number of GCPs and checkpoints
used in our study, Amr et al. [23] obtained an RMSE of 74 cm and Elsheshtawy and
Gavrilova [24] obtained an RMSE value of 66 cm. These values can be sufficient for
some photogrammetric applications in corridor projects, but they do not respond to the
accuracy requirements of the routing and preliminary conceptual design stage of linear
infrastructure projects.
Overall, it is undeniable that the GCPs’ pyramid distribution integrated with an opti-
mised aerotriangulation offers the best accuracy compared to other tests of the proposed
methodology. This GCPs’ spatial distribution allows a centimetre-level accuracy. However,
this approach consumes more time in the field and therefore induces more costs in the
overall process. The mathematical approach through compensation for the images’ PC
coordinates is an alternative that has proven its worth in our research. The two proposed
compensation methods remain an interesting compromise between a remarkable improve-
ment in accuracy and an optimisation of the GCPs’ number to be surveyed on site. Also,
applying compensation for the images’ PC coordinates allows very dense point clouds to
be obtained in the case of corridor flights. This is a major pre-requisite for DTM calculations
on this type of project.
Finally, the accuracy thresholds achieved through these two compensations, with an
optimal number of GCPs to be surveyed on the field, allow UAV flights in corridors to
respond to the accuracy requirements of the routing and preliminary conceptual design
stage of linear infrastructure projects.

5. Conclusions
This study highlights several practices that can correct the “bowl” effect and optimise
the accuracy of corridor photogrammetric surveys by UAVs for linear infrastructure projects.
It also proposes RANSAC compensation to correct images’ PC coordinates, which is not
frequently used in the literature to solve the “bowl” effect issue. The results give better
accuracy compared to some existing approaches. This study shows two levels of mitigating
the “bowl” effect and increasing the 3D accuracy of UAV photogrammetry in corridors for
the routing and preliminary conceptual design stage. This study is based on the following
practices: 1/ the integration of ground data while optimising the effort provided; 2/ the
optimisation of photogrammetric calculations; and 3/ the correction of the coordinates of
images’ PC using statistical compensation.
The “bowl” effect is mitigated by an iterative aerotriangulation approach. This ap-
proach integrates the initial parameters of camera auto-calibration as pseudo-observation
and the PPK coordinates of the PC. The absolute accuracy of the point cloud produced
is improved either by an appropriate spatial distribution of the GCPs or by using one of
the proposed statistical compensation methods. For RMSE analysis, the GCPs’ pyramid
distribution gives the best results. This is due to the fact that this distribution offers ho-
mogeneous coverage of the site and avoids any extrapolation. To improve accuracy while
optimising effort, time and costs, a statistical compensation of images’ PC coordinates is
proposed. The effectiveness of the compensation method depends on the magnitude of
discrepancies between surveyed coordinates and calculated ones after aerotriangulation.
This requires careful inspection of the data before deciding on the compensation method
to use. In projects similar to this study, where the discrepancies are uniformly distributed,
Mean compensation is recommended to optimise accuracy and RANSAC compensation is
recommended to optimise accuracy and to have denser point clouds.
To implement the results of this study, the authors recommend first defining the
accuracy threshold required from the UAV photogrammetry corridor project. In the case
of medium accuracies, the optimisation of GCP number is preferred when combined
with a compensation method. RANSAC compensation in the latter case would yield
satisfactory outcomes if the terrain is uneven. In case of high accuracies, the pyramid
spatial distribution of GCPs is recommended. This distribution can be combined with a compensation method for images’ PC for more accurate results in the case of uneven terrain.
Lastly, the site characteristics should be analysed before flight. It is recommended to orient
the flight lines in a way that allows for rich and detailed variation on all captured images.
This helps prevent problems in calculating camera calibration parameters or in detecting homologous points, which lead to the “bowl” effect.
Finally, the results obtained in this paper should also be interpreted in the light of the limitations encountered. First, access to site points was limited due to dense vegetation in some areas. Then, the length of the site required careful consideration of take-off and landing zones. In addition, the iterative calculation requires high processing performance and time. It is advisable for future studies to 1/ explore a larger number of more complex compensation methods for better geolocation of images and 2/ study the potential of artificial intelligence in predicting corrected coordinates for images’ PC and optimising calculation performance.

Author Contributions: S.A.-L. writing—original draft, writing—review and editing, methodology, conceptualisation, formal analysis, validation, supervision and project administration. R.L. and W.M.:
methodology, software, visualisation, formal analysis, data curation and conceptualisation. M.K.:
conceptualisation, validation, supervision, project administration and data curation. All authors have
read and agreed to the published version of the manuscript.

Funding: This research received no external funding.

Data Availability Statement: The original contributions presented in this study are included in the
article. Further inquiries can be directed to the corresponding author.

Acknowledgments: The authors thank the Moroccan company MILLENIUM TOPO CHARIF MED
for their warm welcome, technical support, and supervision.

Conflicts of Interest: Author Mehdi Kechna was employed by the company MILLENIUM TOPO
CHARIF MED. The remaining authors declare that the research was conducted in the absence of any
commercial or financial relationships that could be construed as a potential conflict of interest.

References
1. Devoto, S.; Macovaz, V.; Mantovani, M.; Soldati, M.; Furlani, S. Advantages of Using UAV Digital Photogrammetry in the Study
of Slow-Moving Coastal Landslides. Remote Sens. 2020, 12, 3566. [CrossRef]
2. Ferrer-González, E.; Agüera-Vega, F.; Carvajal-Ramírez, F.; Martínez-Carricondo, P. UAV Photogrammetry Accuracy Assessment
for Corridor Mapping Based on the Number and Distribution of Ground Control Points. Remote Sens. 2020, 12, 2447. [CrossRef]
3. Komarek, J.; Kumhalova, J.; Kroulik, M. Surface modelling based on unmanned aerial vehicle photogrammetry and its accuracy
assessment. In Proceedings of the 24th International Scientific Conference, Engineering for Rural Development, Jelgava, Latvia,
21–23 May 2025.
4. Lenda, G.; Borowiec, N.; Marmol, U. Study of the Precise Determination of Pipeline Geometries Using UAV Scanning Compared
to Terrestrial Scanning, Aerial Scanning and UAV Photogrammetry. Sensors 2023, 23, 8257. [CrossRef] [PubMed]
5. Prince, G.B. Investigation of the Possible Applications of Drone-Based Data Acquisition for the Development of Road Information
Systems. Master’s Thesis, Westsächsische Hochschule Zwickau, Zwickau, Germany, 2023.
6. Maghazei, O.; Steinmann, M. Drones in Railways: Exploring Current Applications and Future Scenarios Based on Action Research.
Eur. J. Transp. Infrastruct. Res. 2020, 20, 87–102. [CrossRef]
7. Aela, P.; Chi, H.-L.; Fares, A.; Zayed, T.; Kim, M. UAV-Based Studies in Railway Infrastructure Monitoring. Autom. Constr. 2024,
167, 105714. [CrossRef]
8. Halvorsen, A.M. A Comparison Between Unmanned Aerial Vehicle (UAV) Corridors & Placements to Understand Their Effects
on Traffic Safety & Efficiency. Master’s Thesis, University of South-Eastern Norway, Notodden, Norway, 2022.
9. Sheriffdeen, K. Applications of UAV-Derived Digital Elevation Models in Terrain Analysis and Civil Engineering; Easychair: Manchester,
UK, 2024.
10. Pyrgidis, C.N. Railway Transportation Systems: Design, Construction and Operation 2021; CRC Press: Boca Raton, FL, USA, 2021.
Drones 2025, 9, 387 20 of 21

11. Severino, A.; Martseniuk, L.; Curto, S.; Neduzha, L. Routes planning models for railway transport systems in relation to
passengers’ demand. Sustainability 2021, 13, 8686. [CrossRef]
12. Sliuzas, R.; Brussel, M. Usability of Large-Scale Topographic Data for Urban Planning and Engineering Applications. Int. Arch.
Photogramm. Remote Sens. 2000, 33, 1003–1010.
13. Gómez-López, J.M.; Pérez-García, J.L.; Mozas-Calvache, A.T.; Delgado-García, J. Mission Flight Planning of RPAS for Photogram-
metric Studies in Complex Scenes. ISPRS Int. J. Geo Inf. 2020, 9, 392. [CrossRef]
14. McNeil, B.; Snow, C. The truth about drones in mapping and surveying. Skylogic Res. 2016, 200, 1–6.
15. Nahon, A.; Molina, P.; Blázquez, M.; Simeon, J.; Capo, S.; Ferrero, C. Corridor Mapping of Sandy Coastal Foredunes with UAS
Photogrammetry and Mobile Laser Scanning. Remote Sens. 2019, 11, 1352. [CrossRef]
16. Ruzgienė, B.; Aksamitauskas, Č.; Daugėla, I.; Prokopimas, Š.; Puodžiukas, V.; Rekus, D. UAV Photogrammetry for Road Surface
Modelling. Balt. J. Road Bridge Eng. 2015, 10, 151–158. [CrossRef]
17. Abd Mukti, S.N.; Tahar, K.N. Low Altitude Photogrammetry for Urban Road Mapping/Shahrul Nizan Abd Mukti and Khairul
Nizam Tahar. Built Environ. J. 2021, 18, 31–38. [CrossRef]
18. Singh, C.; Mishra, V.; Harshit, H.; Jain, K.; Mokros, M. Application of UAV swarm semi-autonomous system for the linear
photogrammetric survey. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2022, XLIII-B1-2022, 407–413. [CrossRef]
19. Jaud, M.; Passot, S.; Allemand, P.; Le Dantec, N.; Grandjean, P.; Delacourt, C. Suggestions to Limit Geometric Distortions in the
Reconstruction of Linear Coastal Landforms by SfM Photogrammetry with PhotoScan® and MicMac® for UAV Surveys with
Restricted GCPs Pattern. Drones 2019, 3, 2. [CrossRef]
20. Ahmed, S.; El-Shazly, A.; Abed, F.; Ahmed, W. The Influence of Flight Direction and Camera Orientation on the Quality Products
of UAV-Based SfM-Photogrammetry. Appl. Sci. 2022, 12, 10492. [CrossRef]
21. James, M.R.; Robson, S. Straightforward Reconstruction of 3D Surfaces and Topography with a Camera: Accuracy and Geoscience
Application. J. Geophys. Res. Earth Surf. 2012, 117, F03017. [CrossRef]
22. Liang, L.; Yang, Z.; Wang, Y.; Jia, G.; Sun, B. Accuracy Analysis of Oblique Photogrammetry Measurement in 3D Modeling of
Power Line Selection Design. J. Phys. Conf. Ser. 2022, 2400, 012013. [CrossRef]
23. Amr, E.M.; Larisa, G.A.; Mohamed, E.A. Low-Cost Technique of Enhancement Georeferencing for UAV Linear Projects. In
Proceedings of the 2020 3rd International Conference on Geoinformatics and Data Analysis, Marseille, France, 15–17 April 2020;
Association for Computing Machinery: New York, NY, USA, 2020; pp. 76–80.
24. Elsheshtawy, A.M.; Gavrilova, L.A. Improving Linear Projects Georeferencing to Create Digital Models Using UAV Imagery. E3S
Web Conf. 2021, 310, 04001. [CrossRef]
25. Zhou, Y.; Daakir, M.; Rupnik, E.; Pierrot-Deseilligny, M. A Two-Step Approach for the Correction of Rolling Shutter Distortion in
UAV Photogrammetry. ISPRS J. Photogramm. Remote Sens. 2020, 160, 51–66. [CrossRef]
26. Huang, W.; Jiang, S.; Jiang, W. Camera Self-Calibration with GNSS Constrained Bundle Adjustment for Weakly Structured Long
Corridor UAV Images. Remote Sens. 2021, 13, 4222. [CrossRef]
27. Rosnell, T.; Honkavaara, E. Point Cloud Generation from Aerial Image Data Acquired by a Quadrocopter Type Micro Unmanned
Aerial Vehicle and a Digital Still Camera. Sensors 2012, 12, 453–480. [CrossRef] [PubMed]
28. Javernick, L.; Brasington, J.; Caruso, B. Modeling the Topography of Shallow Braided Rivers Using Structure-from-Motion
Photogrammetry. Geomorphology 2014, 213, 166–182. [CrossRef]
29. Zhou, Y.; Rupnik, E.; Faure, P.-H.; Pierrot-Deseilligny, M. GNSS-Assisted Integrated Sensor Orientation with Sensor Pre-
Calibration for Accurate Corridor Mapping. Sensors 2018, 18, 2783. [CrossRef] [PubMed]
30. James, M.R.; Robson, S. Mitigating Systematic Error in Topographic Models Derived from UAV and Ground-Based Image
Networks. Earth Surf. Process. Landf. 2014, 39, 1413–1420. [CrossRef]
31. Zhou, Y.; Rupnik, E.; Meynard, C.; Thom, C.; Pierrot-Deseilligny, M. Simulation and Analysis of Photogrammetric UAV Image
Blocks—Influence of Camera Calibration Error. Remote Sens. 2020, 12, 22. [CrossRef]
32. Wackrow, R.; Chandler, J.H. A Convergent Image Configuration for DEM Extraction That Minimises the Systematic Effects
Caused by an Inaccurate Lens Model. Photogramm. Rec. 2008, 23, 6–18. [CrossRef]
33. Wackrow, R.; Chandler, J.H. Minimising Systematic Error Surfaces in Digital Elevation Models Using Oblique Convergent
Imagery. Photogramm. Rec. 2011, 26, 16–31. [CrossRef]
34. Barazzetti, L.; Remondino, F.; Scaioni, M. Automation in 3D reconstruction: Results on different kinds of close-range blocks. Int.
Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2010, XXXVIII, 55–61.
35. Fryer, J.G.; Mitchell, H.L. Radial distortion and close range stereophotogrammetry. Aust. J. Geod. Photogramm. Surv. 1987, 46,
123–138.
36. Brown, D. Decentering distortion of lenses. Photogramm. Eng. 1966, 32, 444–462.
37. Molina, P.; Blázquez, M.; Sastre, J.; Colomina, I. Precision analysis of point-and-scale photogrammetric measurements for corridor
mapping: Preliminary results. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, XL-3-W4, 85–90. [CrossRef]
Drones 2025, 9, 387 21 of 21

38. James, M.R.; Robson, S.; d’Oleire-Oltmanns, S.; Niethammer, U. Optimising UAV topographic surveys processed with structure-
from-motion: Ground control quality, quantity and bundle adjustment. Geomorphology 2017, 280, 51–66. [CrossRef]
39. Mirko, S.; Eufemia, T.; Alessandro, R.; Giuseppe, F.; Umberto, F. Assessing the Impact of the Number of GCPS on the Accuracy of
Photogrammetric Mapping from UAV Imagery. Balt. Surv. 2019, 10, 43–51.
40. Boccardo, P.; Chiabrando, F.; Dutto, F.; Tonolo, F.G.; Lingua, A. UAV Deployment Exercise for Mapping Purposes: Evaluation of
Emergency Response Applications. Sensors 2015, 15, 15717–15737. [CrossRef]
41. Tang, L.; Ma, S.; Ma, X.; You, H. Research on Image Matching of Improved SIFT Algorithm Based on Stability Factor and Feature
Descriptor Simplification. Appl. Sci. 2022, 12, 8448. [CrossRef]
42. Jiang, S.; Jiang, C.; Jiang, W. Efficient Structure from Motion for Large-Scale UAV Images: A Review and a Comparison of SfM
Tools. ISPRS J. Photogramm. Remote Sens. 2020, 167, 230–251. [CrossRef]
43. Salehi, B.; Jarahizadeh, S.; Sarafraz, A. An Improved RANSAC Outlier Rejection Method for UAV-Derived Point Cloud. Remote
Sens. 2022, 14, 4917. [CrossRef]
44. Kaspi, O.; Yosipof, A.; Senderowitz, H. RANdom SAmple Consensus (RANSAC) Algorithm for Material-Informatics: Application
to Photovoltaic Solar Cells. J. Cheminform. 2017, 9, 34. [CrossRef]
45. Scikitlearn. Available online: https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.RANSACRegressor.html
(accessed on 20 November 2024).
46. Trinity Pro. Quantum Systems, Gilching, Germany. Available online: https://quantum-systems.com/trinity-pro/ (accessed on
20 November 2024).
47. QBase 3D. Available online: https://quantum-systems.com/qbase-3d/ (accessed on 20 November 2024).
48. PIX4Dmapper: Professional Photogrammetry Software for Drone Mapping|Pix4D. Available online: https://www.pix4d.com/
product/pix4dmapper-photogrammetry-software/ (accessed on 20 November 2024).
49. Google Colab. Available online: https://colab.research.google.com/ (accessed on 20 November 2024).

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.
