


UAV PHOTOGRAMMETRY: BLOCK TRIANGULATION COMPARISONS

Rossana Gini (a), Diana Pagliari (b), Daniele Passoni (b), Livio Pinto (b), Giovanna Sona (a)*, Paolo Dosso (c)

(a) Politecnico di Milano, DICA, Geomatics Laboratory at Como Campus – Via Valleggio, 11 – 22100 Como (IT), [email protected], [email protected]
(b) Politecnico di Milano, DICA – Piazza Leonardo da Vinci 32 – 20133 Milano (IT), (diana.pagliari, daniele.passoni, livio.pinto)@polimi.it
(c) Studio di Ingegneria Terradat, via A. Costa 17 – 20037 Paderno Dugnano (IT), [email protected]

* Corresponding author.

KEY WORDS: Aerial Triangulation, Software Comparison, UAV, DEM, Point cloud

ABSTRACT:

UAV systems represent a flexible technology able to collect a large amount of high resolution information, both for metric and interpretation uses. In the frame of experimental tests carried out at the Dept. ICA of Politecnico di Milano to validate vector-sensor systems and to assess the metric accuracy of UAV images, a block of photos taken by a fixed wing system was triangulated with several software packages. The test field is a rural area included in an Italian park (Parco Adda Nord), useful for studying flight and imagery performance on buildings, roads, and cultivated and uncultivated vegetation.
The UAV SenseFly, equipped with a Canon Ixus 220HS camera, flew autonomously over the area at a height of 130 m, yielding a block of 49 images divided into 5 strips. Sixteen pre-signalized Ground Control Points, surveyed in the area through GPS (NRTK survey), allow the referencing of the block and the accuracy analyses. Approximate values for the exterior orientation parameters (positions and attitudes) were recorded by the flight control system.
The block was processed with several software packages: Erdas LPS, EyeDEA (University of Parma), Agisoft Photoscan and Pix4UAV, in an assisted or automatic way. Comparisons of the results are given in terms of differences among digital surface models, differences in orientation parameters and accuracies, when available. Moreover, the image and ground point coordinates obtained by the different software packages are independently used as initial values in a comparative adjustment made by scientific in-house software, which can apply different constraints, in order to evaluate the effectiveness of the different point extraction methods and the accuracies on ground check points.

1. INTRODUCTION

Aerial surveys carried out by unmanned aerial vehicles (UAVs) are nowadays expanding quickly, also thanks to the development of new vehicles and sensors (more effective and safer) and to the improvement of data acquisition devices, as well as of automatic systems for planning and controlling the flights. The increased ease of use, as a consequence, widens the employment of UAVs for proximal sensing for both metric and interpretation purposes, and the capabilities of these systems are widely explored and studied according to different requirements.
As regards image sensors, the limited payload implies the use of compact digital cameras, which are able to acquire a large amount of images at very high resolution but are often affected by larger deformations compared with those coming from calibrated photogrammetric cameras.
Digital images from UAVs can be processed by using the traditional photogrammetric method or software coming from the Computer Vision (CV) field: in the first case, high accuracy in the determination of point coordinates and in 3D modelling is the main pursued requirement, whilst the latter works mainly to achieve quick processing and an effective final product.
In the photogrammetric approach, Exterior Orientation (EO) parameters and ground point coordinates are estimated together with the related accuracies: however, some difficulties often arise during the image georeferencing and block formation phase (Aerial Triangulation), especially when image positions and attitudes are far from those commonly realized in a photogrammetric survey, whether aerial, terrestrial or close range.
By using software coming from computer vision, on the other hand, the processing of a large amount of images is usually faster and easier, and the digital model of the object, orthoimages and photorealistic 3D representations are produced with less control over some processing steps (such as georeferencing and block formation) and over the accuracies of the computed geometric parameters.
Therefore, it is still necessary to test and compare the capabilities of the different systems, in order to carefully assess the accuracies of the final products and to make an informed choice of the system suitable for collecting and processing images according to the survey purpose.
At the Dept. ICA of Politecnico di Milano, some tests are under development to validate vector-sensor systems and to optimize UAV surveys for 3D modelling. First experiments started in 2010 within the FoGLIE project (Fruition of Goods Landscape in Interactive Environment) (Gini et al., 2012), which made use of aerial imagery acquired by UAVs to enhance the natural, artistic and cultural heritage.
In this frame, images taken by compact cameras mounted on drones are processed by "traditional" photogrammetric software (PhotoModeler, Erdas LPS), by home-made software like EyeDEA (realized by the University of Parma) (Roncella et al., 2011) and Calge (realized by the Dept. ICA of Politecnico di Milano) (Forlani, 1986), or by software specifically built for managing UAV images, such as Pix4UAV and AgiSoft Photoscan. This paper describes a test performed with the fixed wing system SenseFly SwingletCAM in a rural area of northern Italy and discusses the obtained results.

2. TEST AREA AND DATA CAPTURE

The test flight was performed on a small test area located near Cisano Bergamasco (BG, Italy), belonging to the protected park "Parco Adda Nord" in Lombardy and already studied in the frame of the FoGLIE project. It comprises some buildings, secondary roads, cultivated fields and natural vegetation. The flight covered an area of roughly 0.3 km², which is depicted in Figure 1.

Figure 1 – Overview of the flown area.
The employed drone is a lightweight fixed-wing SwingletCAM system produced by the Swiss company SenseFly (now part of the Parrot group), owned and operated by "Studio di Ingegneria Terradat". Because of its very limited weight (< 500 g) and size, its autopilot smartness and its ease of use, it is a suitable option for performing photogrammetric flights over limited areas at very high resolutions (3-7 cm GSD). The SwingletCAM is able to perform pre-planned flights in a fully automated mode, though the operator can always recover full control of the system itself. Moreover, the SenseFly autopilot continuously analyzes data from the onboard GPS/IMU and takes care of all aspects of the flight mission: the SwingletCAM takes off, flies and lands fully autonomously. The system incorporates a compact camera Canon Ixus 220HS (12 Mp and fixed focal length of 4.0 mm), capable of acquiring images at 3-7 cm GSD depending on flight height.
To reach the target resolution of 4.5 cm GSD, the mean flight altitude was set at 132 m AGL; furthermore, in order to gain maximum stereoscopy and avoid holes, the flight was planned with longitudinal and lateral overlaps equal to 80%. Following this approach, seven strips were necessary to cover the area of interest; however, due to strong winds and turbulence in the area during the flight, the mission was aborted several times, and a subsequent rearrangement of the flight plan limited the final acquisition to 5 strips and 49 images in total. Despite this, the image overlap was always good, as shown in Figure 2.

Figure 2 – Camera locations and image overlaps.
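The flight-planning numbers above follow from the usual pinhole relation between flying height, focal length and pixel pitch. The short sketch below checks them in Python; the 4000 × 3000 px frame size and the along-track orientation of the short image side are assumptions for the 12 Mp Canon Ixus 220HS, while the focal length, the 1.54 µm pixel pitch and the 132 m flying height are taken from the text, so the result (about 5 cm) is only indicative of the 3-7 cm range quoted for the platform.

```python
# Quick check of the ground sampling distance (GSD) and footprint implied by the
# reported flight parameters. Frame size and along-track side are assumptions;
# focal length, pixel pitch and flying height are taken from the paper.

focal_length_m = 0.004            # 4.0 mm fixed focal length
pixel_pitch_m = 1.54e-6           # 1.54 micrometre pixel size (see Table 1 discussion)
flying_height_m = 132.0           # mean flight altitude above ground level
img_width_px, img_height_px = 4000, 3000   # assumed frame size for a 12 Mp sensor

# GSD = flying height * pixel pitch / focal length
gsd_m = flying_height_m * pixel_pitch_m / focal_length_m

# Ground footprint of a single frame and the air base for 80% longitudinal overlap,
# assuming the short image side points along track
footprint_long_m = gsd_m * img_width_px
footprint_track_m = gsd_m * img_height_px
air_base_m = footprint_track_m * (1.0 - 0.80)

print(f"GSD             : {gsd_m * 100:.1f} cm/px")    # ~5 cm, inside the 3-7 cm range
print(f"Frame footprint : {footprint_long_m:.0f} m x {footprint_track_m:.0f} m")
print(f"Air base (80%)  : {air_base_m:.0f} m")
```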
Big differences are present among the image attitudes and positions, resulting in a high variability of the local image scale and in a high multiplicity of some homologous points.
During the flight, at each shot position, the SenseFly flight control system recorded the position and attitude of the vehicle, thus yielding approximate values for all the Exterior Orientation parameters.
For the block georeferencing and the subsequent accuracy analysis, sixteen pre-signalized Ground Control Points (GCPs) were distributed along the edges and in the middle of the area, and their centre coordinates were measured by GPS (Trimble 5700) in an NRTK survey; a subset of these was then used as Check Points (CPs).

3. TIE POINTS EXTRACTION

As mentioned in the introduction, the images acquired with the SenseFly vehicle were processed using different software packages. The results were analysed both in terms of EO and in terms of the obtained products (DSM and orthophoto).
The employed software can be divided into two main categories: 'traditional' photogrammetric software and computer vision based software.
In the first group, software that follows a traditional workflow can be found: first of all, it is necessary to perform the camera calibration; then the GCP identification and the Tie Point (TP) search (automatic or manual, depending on the specific tested program) are accomplished. After that, the images are oriented (with or without self-calibration refinement), and the subsequent DSM production and the image projection for the orthophoto generation are carried out. In this context, the Erdas Leica Photogrammetric Suite (LPS) and the scientific software EyeDEA were analysed.
In the second group, 3D modelling software packages can be collocated: they carry out the image relative orientation together with the self-calibration, in an arbitrary reference system which is often obtained using a minimum constraint coming from the approximate orientation provided by the telemetry. The TP extraction, their measurement and the error rejection are completely automated steps; the subsequent use of GCPs allows translating and rotating the photogrammetric block into a specific reference system. Pix4UAV Desktop (from now on P4) by Pix4D, 2013 and Agisoft Photoscan (from now on AP) by AgiSoft LLC, 2010 were taken under analysis.
A specific procedure was realized for each software package, according to its characteristics, as briefly presented below.
LPS is a photogrammetric system available in a user-friendly environment. LPS provides tools for manual and automated precision measurement and for delivering complete analytical triangulation, digital surface model generation, orthophoto production, mosaicking, and 3D feature extraction. With its tight integration with the ERDAS IMAGINE software, LPS is a photogrammetric package for projects involving various types of data and further processing and analysis of airborne imagery. For this work the tie points used to orient the images were manually selected and measured, for a total of 295 points with an average multiplicity of 5.
On the other hand, EyeDEA is a scientific software package developed by the University of Parma, which implements the SURF operator and the SURF feature descriptor (Bay et al., 2008). Like any other interest operator, SURF identifies a large number of matches, including erroneous correspondences within each set. For this reason EyeDEA also implements some robust error rejection methods.
First of all, the fundamental matrix F is used to define the constraint between the two sets of coordinates: since the epipolar constraint is not sufficient to discriminate wrong matches between two points located on the same epipolar line, the trifocal tensor has also been implemented. The RANSAC paradigm (Fischler and Bolles, 1981) is run after each geometric control to guarantee a higher percentage of inliers.
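The extraction-plus-rejection scheme described above (interest operator, descriptor matching, robust epipolar filtering) can be illustrated with a short OpenCV sketch. This is not the EyeDEA implementation: it assumes an opencv-contrib build that ships the non-free xfeatures2d module for SURF (cv2.SIFT_create() is a drop-in replacement otherwise), and it stops at the pairwise fundamental-matrix test, leaving out the trifocal-tensor check over image triplets described in the text.

```python
import cv2
import numpy as np

def filter_tie_points(img1_path, img2_path, ratio=0.8):
    """SURF matching on one image pair, with ratio test and RANSAC-F filtering."""
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)

    # SURF keypoints and descriptors (requires the non-free xfeatures2d module)
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp1, des1 = surf.detectAndCompute(img1, None)
    kp2, des2 = surf.detectAndCompute(img2, None)

    # Lowe's ratio test drops ambiguous descriptor matches
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    candidates = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in candidates if m.distance < ratio * n.distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # Robust epipolar filtering: RANSAC estimate of the fundamental matrix F,
    # keeping only the matches consistent with it
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
    inliers = mask.ravel() == 1
    return pts1[inliers], pts2[inliers], F
```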
As input EyeDEA requires undistorted images, whose
deformations were removed according to the parameters
estimated with the camera calibration procedure implemented in
the commercial software PhotoModeler V.6.33 (from now on
PM). As the number of tie points extracted with EyeDEA was
too large (21224) we decided to reduce them to better manage
the photogrammetric block during the orientation phase. The
points reduction was performed with an appropriate Matlab
function, on the basis of the criteria of homogeneous
distribution throughout the block and higher point multiplicity.
In this way the final accuracy is not affected although the time
required to compute the solution is significantly decreased; thus,
the number of tie points was reduced to 2924 image points.
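The thinning criterion — keep the highest-multiplicity points while preserving an even coverage of the block — can be sketched as follows; the data layout (one record per image observation) and the grid-based selection are hypothetical stand-ins for the Matlab routine, which is not described in detail in the paper.

```python
from collections import defaultdict

def thin_tie_points(observations, cell_size=25.0):
    """Hypothetical sketch of the tie-point reduction step.

    observations: iterable of (point_id, image_id, east, north) records,
    one per image observation, with approximate ground-plan coordinates.
    """
    by_point = defaultdict(list)
    for pid, img, east, north in observations:
        by_point[pid].append((img, east, north))

    # Highest multiplicity first, so the strongest points claim their cell
    ranked = sorted(by_point.items(), key=lambda kv: len(kv[1]), reverse=True)

    taken_cells, kept = set(), []
    for pid, obs in ranked:
        east = sum(o[1] for o in obs) / len(obs)
        north = sum(o[2] for o in obs) / len(obs)
        cell = (int(east // cell_size), int(north // cell_size))
        # Keep at most one point per ground cell for a homogeneous distribution
        if cell not in taken_cells:
            taken_cells.add(cell)
            kept.append(pid)
    return kept
```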
EyeDEA proceeds by successive image triplets, so the homologous points are seen, on average, on only three frames. Since the longitudinal overlap through the block was not always adequate to guarantee the automatic extraction of points on all the subsequent triplets, and in order to strengthen the block itself, we repeated the tie point extraction along the transverse direction as well. Despite that, the software was not able to extract any points on two images, and for six other images it was necessary to manually measure some homologous points, because their arrangement was not good enough to ensure a bundle block adjustment solution. These operations were carried out with the commercial software PM, in order to obtain the terrain coordinates necessary for the subsequent analysis and to manually measure the GCPs.
As concerns the CV-based software, the package specifically designed for UAV applications, Pix4UAV, was tested. It computes the block orientation in a fully automatic way, requiring as input only the camera calibration parameters and an image geo-location; moreover, the GCPs were manually measured with the aim of comparing the results with those computed by the other programs. The coordinates of all the points used to perform the bundle block adjustment were exported and converted into the PM input format in order to generate the corresponding coordinates in the terrain reference system: these coordinates were then used as approximate values for the subsequent computations (see Section 4). The software also creates, in an automatic way, the point cloud, the DSM and the orthophoto (with a resolution of 20 cm).
Eventually, AP was employed to automatically compute both the image orientation and the tie point cloud. In this case, the software extracted a large amount of points, so it was decided to decimate them, considering only the points characterized by a multiplicity equal to or greater than three and an RMSE lower than 0.40 m. The layouts of the TPs used in the four different software packages are represented in Figure 3.

Figure 3 – Tie points distribution on the images for the different software used: from top to bottom LPS, EyeDEA, P4, AP.
It is evident how the points extracted by AP outnumber those of the other considered cases, even if they are almost all characterized by a multiplicity equal to three. The result of EyeDEA is similar in terms of multiplicity, but the selected set is smaller because the TPs were decimated before the bundle-block adjustment phase. P4 identified fewer points than AP, but it was able to detect points visible on more images. The case of LPS is different because all the measurements were performed manually, leading to an average multiplicity of five. A common point of all the tested software packages is that they extracted few TPs in the central zone of the block, which is characterized by the presence of forest trees.

4. BUNDLE-BLOCK ADJUSTMENT

Considering the different nature of the software, it was decided to uniform the E.O. analysis by defining a standard procedure: for this purpose the scientific software Calge was used. Calge is a home-made computer program designed to perform the bundle block compensation of a general topographic network or of a photogrammetric block.
A first comparison between the different software packages involved a bundle block adjustment using the TPs measured either manually (LPS, some points for EyeDEA and all the GCPs) or automatically (most of the TPs extracted with EyeDEA and all the points identified by P4 and AP). In all cases, the calibration parameters were refined thanks to the self-calibration executed during the bundle block adjustment itself: in particular, the variations of the 10 parameters of the Fraser model (Fraser, 1997) were estimated.
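For reference, one common way of writing the 10-parameter physical ("Fraser") model mentioned above is reported below, with the image coordinates reduced to the principal point; sign conventions and the placement of the affinity terms vary between implementations, so this should be read as an illustration rather than the exact Calge or PM formulation.

\bar{x} = x - x_p ,\qquad \bar{y} = y - y_p ,\qquad r^2 = \bar{x}^2 + \bar{y}^2

\Delta x = \Delta x_p - \frac{\bar{x}}{c}\,\Delta c + \bar{x}\left(K_1 r^2 + K_2 r^4 + K_3 r^6\right) + P_1\left(r^2 + 2\bar{x}^2\right) + 2 P_2\,\bar{x}\bar{y} + b_1\,\bar{x} + b_2\,\bar{y}

\Delta y = \Delta y_p - \frac{\bar{y}}{c}\,\Delta c + \bar{y}\left(K_1 r^2 + K_2 r^4 + K_3 r^6\right) + 2 P_1\,\bar{x}\bar{y} + P_2\left(r^2 + 2\bar{y}^2\right)

where the ten estimated parameters are \Delta x_p, \Delta y_p, \Delta c, K_1, K_2, K_3, P_1, P_2, b_1 and b_2.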

For each software package, two different kinds of bundle block adjustment were realized: i) constraining all the measured GCPs; ii) constraining only 5 GCPs, 4 of which were selected along the block edges and one near the centre (see Figure 4). Both the GCP and CP measurements on the images were done manually by different non-expert operators.

Figure 4 – The 10 CPs employed in the analysis with 5 GCPs.

In Table 1 the obtained results are listed.

                          LPS*        EyeDEA**     P4           AP
# TPs                     285         1052         1317         6098
# image points            1492        3395         6146         19097
# GCPs                    15    5     15    5      15    5      15    5
σ0 [µm]                   2.6   2.6   1.2   1.2    1.0   1.0    0.3   0.3
c [mm]                    4.437       4.149        4.235        4.361
Theoretical accuracy – RMS (st.dev.) of TPs [mm]
  E                       109   119   48    57     25    30     8     9
  N                       89    101   42    52     23    28     7     8
  h                       215   259   118   151    61    76     20    23
Empirical accuracy – RMSE of CPs [mm]
  E                       -     50    -     50     -     39     -     50
  N                       -     50    -     38     -     54     -     19
  h                       -     130   -     274    -     113    -     55

* manual measurements
** some manual measurements

Table 1 – Bundle block adjustment results (15 and 5 GCPs configurations).

The first rows show the number of TPs and the observation sample sizes: the ratio between these two quantities is equal to the average TPs multiplicity.
In the subsequent row, the bundle-block σ0 is reported: it ranges from 0.3 µm (AP) to 2.6 µm (LPS), respectively 0.2 and 1.6 times the pixel size, which is equal to 1.54 µm.
The estimated focal length varies between 4.149 mm (EyeDEA) and 4.437 mm (LPS): the effects are absorbed by a shift of the projection centre heights.
In the following rows, the RMS of the standard deviations of the TPs (theoretical accuracy) and the RMSE of the CPs (empirical accuracy) are shown for each coordinate.
As regards the LPS results, the high RMS values are surely due to the small number of TPs, manually selected with high multiplicity but not by an expert photogrammetric technician. The other software packages are almost fully automatic, so the number of extracted TPs is higher and, consequently, the RMS of the standard deviation values is smaller (also because of the lower value of σ0). As regards the CPs RMSE, the results are more homogeneous, especially in the East and North coordinates, which are around the GSD; in altitude the differences are more pronounced.
A further analysis was carried out with the aim of evaluating the quality of the EO parameters for each software package. Since EyeDEA performed only the TP extraction, its EO parameters were calculated by PM. The analyses were realized in a consistent way because it was decided to constrain the EO parameters obtained using only 5 GCPs. At the same time a self-calibration was also performed, in order to evaluate the best set of calibration parameters. The RMSE of the CP residuals are summarized in Table 2.

              LPS    EyeDEA/PM    P4     AP
East   [mm]   48     16           81     74
North  [mm]   47     12           46     61
height [mm]   90     36           214    83

Table 2 – RMSE of the CPs residuals.
The RMSE values are low with respect to the image scale of 1:31,000 and the GSD of 4.5 cm. Considering the horizontal coordinates, the minimum value (0.33·GSD) was achieved with the combination of the software PM and EyeDEA, followed by LPS (1 GSD). Worse results were obtained by P4 and AP.
Considering the height coordinate, the RMSE values are higher than the horizontal ones, even if they are smaller than 100 mm (with the exception of the value obtained by processing the block with P4, which is equal to 214 mm).
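The empirical accuracies reported in Tables 1 and 2 are plain RMSE values of the residuals at the check points; a minimal helper, with hypothetical input arrays, is shown below.

```python
import numpy as np

def checkpoint_rmse(estimated: np.ndarray, surveyed: np.ndarray) -> np.ndarray:
    """RMSE of check-point residuals per coordinate (E, N, h).

    `estimated` and `surveyed` are hypothetical (n_points, 3) arrays holding the
    adjusted and the GPS-NRTK coordinates of the same check points, in metres.
    """
    residuals = estimated - surveyed
    return np.sqrt(np.mean(residuals ** 2, axis=0))   # one RMSE per coordinate

# Example usage: rmse_e, rmse_n, rmse_h = checkpoint_rmse(est_xyz, cp_xyz) * 1000  # mm
```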

5. DSM COMPARISONS

A second kind of comparison among the software results is done by analyzing the different DSMs they produce. A mesh of 0.20 m was chosen to compute the surface models with the different software packages; automatic procedures are used in LPS, P4 and AP, whilst another home-made software package called Dense Matcher (DM – Re et al., 2012) was used to process the data coming from the EyeDEA/PM workflow. After a manual editing, the point cloud created by DM was interpolated on the same grid mesh through ArcGIS 10.0. A first visual analysis shows a different behaviour where sharp height variations occur, for instance around buildings. P4, DM and LPS indeed compute interpolated values there, as in all the other parts, while AP, run in 'sharp' mode, seems to recognize edges and produces a sharper DSM (see Figure 5).

Figure 5 – DSM from AP, LPS, P4 and DM.

This is clearly visible in the layouts (see Figure 6), where the differences between AP and the other software are presented. The statistics of the differences yield average values of a few centimetres and standard deviations of 84, 89 and 103 cm for the P4-AP, DM-AP and LPS-AP differences, respectively. The maximum absolute values are about 20 m, near building edges and in the central area with a large number of very tall trees.
Figure 6 – Differences of DSMs.

In most areas (about 90%) the differences are in the range [-0.3 m, 0.4 m]. The averages of the differences, close to zero, show the absence of vertical and horizontal biases for all DSMs.
A detailed analysis made on a regular and flat area (green line in P4-AP) confirmed the difference in smoothing effect between the two surface generation approaches (see Figure 7). In this case the maximum variations are about 50 cm, with an average of 4.2 cm.

Figure 7 – Differences of DSMs: detail in a flat area.
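The difference statistics quoted in this section (mean offset, standard deviation, share of cells within a tolerance band) are straightforward to reproduce once two DSMs are co-registered on the same 0.20 m mesh; the sketch below assumes both grids are NumPy arrays with NaN marking no-data cells, which is an assumption about the data layout rather than the tools actually used in the study.

```python
import numpy as np

def dsm_difference_stats(dsm_a: np.ndarray, dsm_b: np.ndarray,
                         band=(-0.3, 0.4)) -> dict:
    """Basic statistics of the cell-by-cell difference between two DSM grids."""
    diff = dsm_a - dsm_b
    valid = diff[~np.isnan(diff)]                      # ignore no-data cells
    within = (valid >= band[0]) & (valid <= band[1])
    return {
        "mean_m": float(valid.mean()),                 # near zero -> no vertical bias
        "std_m": float(valid.std()),
        "share_in_band": float(within.mean()),         # fraction of cells in [-0.3, 0.4] m
        "max_abs_m": float(np.abs(valid).max()),
    }
```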

In the P4-AP comparison, the anomalous behaviour visible in the red circle is due to the presence of tree shadows (see Figure 8): the almost flat ground is modelled in one case with false height variations of the order of 1 m. This is probably due to homologous points chosen at shadow edges, which move slightly during the survey, thus causing mismatches and false intersections. This effect is also visible in the LPS and DM DSMs. We found here again the different behaviour of the software: P4 and the other software produced higher and sharper undulations, while AP gave a smoother surface.

Figure 8 – Differences of DSMs (P4-AP): detail in a shadow area produced by trees.

Finally, Figure 9 shows the two orthophotos derived from the DSMs generated by P4 (top) and AP (bottom). The different behaviour near the roof edges is clear: AP has defined the edges better than P4.

Figure 9 – Differences of orthophotos (AP top, P4 bottom): detail of the edges of some buildings.

6. CONCLUSIONS

The images acquired by UAVs, in particular by fixed-wing ones, are suitable to be processed by different software packages: in particular, both computer vision-based and photogrammetric software (even home-made, like EyeDEA and DM) were analysed in this paper. The whole set was able to provide the images' exterior orientation and products such as the DSM, although the computer programs of the first type can work almost entirely in an automatic way and can quickly create a high quality final product; moreover, both P4 and AP can automatically generate very dense point clouds with high multiplicity. The photogrammetric software requires an operator's intervention in some phases, such as the exterior orientation validation, the estimation of the self-calibration parameters or the manual selection of points in critical areas of the images. Its computational time is often very high in comparison with the other software: for instance, the DSM generation in DM required many hours of processing. On the other hand, the photogrammetric software's results are better (see Table 2) in terms of the CPs RMSE obtained by constraining the E.O. parameters. Thanks to the DSM analysis, it can be said that the strategy implemented in AP seems to be the one able to achieve the most reliable results: this is highlighted by a detailed comparison rather than by a global analysis (indeed, none of the products showed appreciable systematic errors); moreover, AP provided the best product, especially in flat areas and in the presence of shadows. Eventually, the strategy that AP employs for the buildings' outlines allows the creation of orthophotos with a high level of quality (see Figure 9).

7. REFERENCES

References from Journals:

Bay, H., Ess, A., Tuytelaars, T., Van Gool, L., 2008. Speeded-Up Robust Features (SURF). Computer Vision and Image Understanding, 110(3), pp. 346-359.

Fischler, M., Bolles, R., 1981. Random sample consensus: a paradigm for model fitting with application to image analysis and automated cartography. Commun. Assoc. Comp. Mach., Vol. 24, pp. 381-395.

Re, C., Roncella, R., Forlani, G., Cremonese, G., Naletto, G., 2012. Evaluation of area-based image matching applied to DTM generation with HiRISE images. ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., I-4, pp. 209-214.

Forlani, G., 1986. Sperimentazione del nuovo programma CALGE dell'ITM. Bollettino SIFET, No. 2, pp. 63-72.

Fraser, C.S., 1997. Digital camera self-calibration. ISPRS Journal of Photogrammetry and Remote Sensing, Vol. 52, pp. 149-159.

References from Other Literature:

Roncella, R., Re, C., Forlani, G., 2011. Performance evaluation of a structure and motion strategy in architecture and cultural heritage. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., Vol. XXXVIII-5/W16, pp. 285-292.

Gini, R., Passoni, D., Pinto, L., Sona, G., 2012. Aerial images from an UAV system: 3D modeling and tree species classification in a park area. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., Vol. XXXIX-B1, pp. 361-366.

References from websites:

AgiSoft LLC, 2010. AgiSoft PhotoScan. http://www.agisoft.ru/products/photoscan/

Pix4UAV Desktop by Pix4D, 2013. http://pix4d.com/pix4uav.html

Acknowledgements

The authors thank Riccardo Roncella for allowing them the use of the software EyeDEA and for his help, and Pix4D, platinum sponsor of UAV-g 2013.

