Overview Obstacle Maps for Obstacle-Aware Navigation of Autonomous Drones
Journal of Field Robotics, 2019. DOI: 10.1002/rob.21863

FIELD REPORT

1 Institute for Computer Graphics and Vision (ICG), Graz University of Technology (TU Graz), Graz, Austria
2 BIOENERGY2020+ GmbH, Graz, Austria

Correspondence
Jesús Pestana, Institute for Computer Graphics and Vision (ICG), Graz University of Technology (TU Graz), Inffeldgasse 16/II, 8010 Graz, Austria.
Email: [email protected]

Funding information
Österreichische Forschungsförderungsgesellschaft, Grant/Award Numbers: Bridge1 project 843450, FreeLine (2014-2016); Austrian Science Fund, Grant/Award Number: I-1537 (V-MAV: Cooperative micro aerial vehicles); OMICRON electronics GmbH

Abstract
Achieving the autonomous deployment of aerial robots in unknown outdoor environments using only onboard computation is a challenging task. In this study, we have developed a solution to demonstrate the feasibility of autonomously deploying drones in unknown outdoor environments, with the main capability of providing an obstacle map of the area of interest in a short period of time. We focus on use cases where no obstacle maps are available beforehand, for instance, in search and rescue scenarios, and on increasing the autonomy of drones in such situations. Our vision-based mapping approach consists of two separate steps. First, the drone performs an overview flight at a safe altitude acquiring overlapping nadir images, while creating a high-quality sparse map of the environment by using a state-of-the-art photogrammetry method. Second, this map is georeferenced, densified by fitting a mesh model, and converted into an Octomap obstacle map, which can be continuously updated while performing a task of interest near the ground or in the vicinity of objects. The generation of the overview obstacle map is performed in almost real time on the onboard computer of the drone; a map of size 100 m × 75 m is created in ≈2.75 min, leaving enough time for the drone to execute other tasks inside the area of interest during the same flight. We evaluate quantitatively the accuracy of the acquired map and the characteristics of the planned trajectories. We further demonstrate experimentally the safe navigation of the drone in an area mapped with our proposed approach.

KEYWORDS
aerial robotics, computer vision, mapping, planning
1 | INTRODUCTION

The utilization of drone technology in civilian applications is being limited by the requirement for drone operations to have a human pilot to ensure collision avoidance at all times. From a technical standpoint, first, most drones are not equipped with obstacle-sensing technologies. And second, drone-automated flight tends to make strong assumptions about the absence of obstacles along the flight route, for instance, during the takeoff and landing operations and more generally when flying close to the ground, buildings, and trees; hence, the requirement in practice for a pilot to ensure obstacle avoidance during flight. These are currently limiting factors for the automated operation of drones in promising high-value operations, such as infrastructure inspection and package delivery. In addition, due to this current lack of automatic obstacle avoidance
capabilities, setting up a fully automated flight in environments from which the operator has limited information, for instance, for search and rescue and disaster relief operations, is not feasible.

Early works on this topic, motivated by the International Aerial Robotics Competition (IARC) Missions 3 and 4 (AUVSI Association, 2018), held, respectively, in the years 1998-2000 and 2001-2008, showed promising results using unmanned helicopters and computer vision. In Mission 3, the aerial robot had to detect and avoid obstacles, identify survivors, and recognize drum containers. The winning team from the Technical University of Berlin (Kondak & Remuß, 2007; Musial, Brandenburg, & Hommel, 2000) was able to perform the target identification and localization tasks; however, their helicopter did not fly near the debris, but rather flew high over the area (Greer, McKerrow, & Abrantes, 2002). In Mission 4, the aerial robot had to identify a particular building and deploy a rover to accomplish a task inside it. A team of the Georgia Institute of Technology won this challenge (Johnson, Mooney, & Christophersen, 2013; Rooz et al., 2009) by completing the entire mission. Working in topics that relate to our presented work, the same team also developed a helicopter system able to fly over an area and acquire an accurate three-dimensional (3D) reconstruction using a pan-tilt-mounted laser range finder (LADAR or LIDAR) and explored the 3D obstacle avoidance problem in simulation (Geyer & Johnson, 2006). A comparison of the 3D reconstructions obtained by performing an overview flight and acquiring and processing data from either a LIDAR or a camera is discussed in the work by Leberl et al. (2010). The IARC Mission 5 (2009) proposed the challenge of autonomously exploring an indoor area with tight spaces while searching for a target object on a wall. Mission 5 was fully accomplished using a quadrotor drone and a low-weight LIDAR and a stereo camera system as main sensors by a team from the Massachusetts Institute of Technology (MIT; Bachrach, Prentice, He, & Roy, 2011). The same authors demonstrated similar capabilities using only LIDAR and a smaller drone platform in their work (Bachrach, He, & Roy, 2009).

As these early works show, the fast deployment of autonomous drones in unknown outdoor environments has been an ongoing research problem for several years. In this study, some of the main challenges related to this topic are tackled, namely the acquisition of a good-quality obstacle map and the calculation of trajectories that allow fast navigation in the area of interest. Our presented approach uses only onboard computation power, and as a result, the drone does not need to transfer data to a ground-station via a wireless communication link.

Our vision-based mapping approach consists of two separate steps. First, the drone performs an overview flight at a safe altitude acquiring overlapping downward-looking images, while creating a high-quality map of the environment by using a state-of-the-art photogrammetry method, the online Structure from Motion (SfM) pipeline (Hoppe et al., 2012; Rumpler et al., 2016). Second, this map is georeferenced and converted into an Octomap, see Figure 1, which is used as an initial overview obstacle map that can be updated during the rest of the flight while performing a task of interest near the ground. The generation of the overview obstacle map is performed in a few minutes on the drone onboard computer, and thus, with enough time remaining for the drone (Figure 2) to execute other tasks inside the area of interest during the same flight.

Our trajectory planning approach is designed to provide smooth trajectories away from obstacles. We have tested our navigation and trajectory planning algorithms experimentally, utilizing an obstacle map obtained using our mapping method. The aim of the experiment is to demonstrate the feasibility of autonomously deploying drones in unknown outdoor environments.

FIGURE 1 Obstacle map of an outdoor environment of size 105 m × 75 m generated from 56 images, with 12 m high buildings and up to 14 m high trees, generated using our method explained in Section 3.1. The obstacle map is displayed color-coded according to the height and has a minimum voxel resolution of 1 m. On the left, "Google Earth ©2015," an overview image of the area is shown, where the target 50 m × 50 m region of interest is located around the parking lot and a red contour denotes the effectively mapped area. The onboard processing time for the creation of this obstacle map was ≈2.75 min. Man-made obstacles, such as buildings and cars, and big trees are quite well reconstructed and included in the obstacle map. However, small trees are often not correctly mapped and need to be sensed later on during lower altitude flight. For this purpose, our drone, see Figure 2, is equipped with several stereo-heads that can acquire point-clouds of trees along with unmapped and dynamic obstacles during flight, which can be used to update the map [Color figure can be viewed at wileyonlinelibrary.com]
FIGURE 2 Hardware setup: DJI M100 drone, autopilot, GPS, gimbal camera, DJI Guidance system with five stereo camera heads, and an Nvidia Jetson TK1 onboard computer. GPS: global positioning system; GPU: graphics processing unit [Color figure can be viewed at wileyonlinelibrary.com]

The concept of overview obstacle maps and the presented solution for drone deployment were inspired by the objectives of the 2016 DJI Developer Challenge.1 This competition consisted of a search and rescue mission, in which the drone needed to explore a designated area searching for survivors. Our team, the Graz Griffins, took part in this challenge and was among the few participants that qualified to participate in the finals, where we demonstrated our solution at work.

1 https://developer.dji.com/challenge2016/

The outline of this study is the following. The related work and our contributions are discussed in Section 2. The algorithms utilized for the realization of this study are described in Sections 3 and 4: the mapping approach in Sections 3.1 and 3.2, the navigation control in Section 4.1, and our trajectory planning solution in Section 4.2. The experiments are described and discussed in Section 5 with: the evaluation of the trajectory planner in Section 5.2, a qualitative and quantitative evaluation of the capabilities of our overview obstacle maps in Section 5.3, and an experimental flight showcasing the performance of our system in a map acquired using our proposed approach in Section 5.4. Sections 6 and 7 contain the conclusions and a discussion about possibilities for future work.

2 | STATE OF THE ART

Generating maps of areas of medium size in real time onboard a drone, or leveraging offboard computing resources, is a challenging task, and much research has been dedicated to it with varying degrees of success. The swarm of micro flying robots (SFLY) project (http://www.sfly.org/) (Scaramuzza et al., 2014) developed several novel algorithms for drones. A real-time loosely coupled visual-inertial odometry (VIO) framework, by Weiss, Achtelik, Lynen, Chli, and Siegwart (2012), was developed based on a modified version of parallel tracking and mapping (PTAM) (Klein & Murray, 2009) improved for onboard execution, and on a computationally fast estimation algorithm used as a fall-back method fusing the inertial measurement unit (IMU) readings with optical flow, thus only requiring a minimal amount of feature correspondences in consecutive frames. Using this efficient version of PTAM, Weiss, Achtelik, Kneip, Scaramuzza, and Siegwart (2011) showed an effective terrain exploration technique for micro-aerial vehicles (MAVs) that generates, in real time in a ground-station, a textured 3D mesh by means of a Delaunay triangulation (Labatut, Pons, & Keriven, 2007), which supports the drone operator in understanding the MAV's environment.

Several research works have focused on the creation of maps that can be later reused by the drone to localize in real time during an autonomous flight. Surber, Teixeira, and Chli (2017) use the VIO algorithm open keyframe-based visual-inertial SLAM (OKVIS) (Leutenegger et al., 2013, 2015) to acquire a map of an area during a manual flight, and later reuse this map to reduce the UAV's dependency on the global positioning system (GPS); they evaluated their system against ground-truth position data acquired with a Leica Total Station. Recently, researchers from ETH Zürich have released a visual-inertial mapping framework to process and produce multisession maps (T. Schneider et al., 2018), which uses robust visual inertial odometry (ROVIO) (Bloesch, Omari, Hutter, & Siegwart, 2015) as the VIO front-end, and has been used to achieve autonomous drone flight (Burri, Oleynikova, Achtelik, & Siegwart, 2015). In Burri et al. (2015), the full bundle adjustment (BA) result and the obstacle map are generated after a manual flight and are later used in autonomous flights achieving precise indoor localization, navigation, and obstacle avoidance. The known state-of-the-art visual SLAM frameworks ORB-simultaneous localization and mapping (SLAM) (Mur-Artal, Montiel, & Tardós, 2015) and ORB-SLAM2 (Mur-Artal & Tardós, 2017) also provide the capability of reusing a map acquired during a previous session or experiment. In Qiu, Liu, and Shen (2017), the authors propose the usage of mesh models obtained using well-accepted off-line SfM algorithms (Triggs, McLauchlan, Hartley, & Fitzgibbon, 1999) to substitute the usage of GPS. To achieve the vision-based localization against the model, the authors propose an edge alignment scheme for the current image against a virtual image extracted from the model that is used as a reference or keyframe. A visual odometry framework (Schenk & Fraundorfer, 2017) using a similar algorithm for RGB-D sensors provides a better evaluation of the approach and produces estimates with a drift accumulation on par with state-of-the-art visual odometry (VO) methods.

Achieving dense mapping onboard a drone is very challenging due to the limited computational capabilities of their onboard computers. Several methods have been proposed that are too computation intensive and require powerful graphics processing units (GPUs), but the achieved levels of detail, and thus, the high quality of their dense maps would be extremely desirable for navigating drones. Examples of such dense-reconstruction algorithms are the following: KinectFusion (Newcombe, Izadi, et al., 2011), ElasticFusion (Whelan, Leutenegger, Salas-Moreno, Glocker, & Davison, 2015, 2016), MonoFusion (Pradeep et al., 2013), and dense tracking and mapping (DTAM; Newcombe, Lovegrove, & Davison, 2011), or the similar approach by Stühmer, Gumhold, and Cremers (2010). There is, thus, interest in the community in developing algorithms with a better computational efficiency and accuracy trade-off that could be executed onboard drones. Heng, Lee, Fraundorfer, and Pollefeys (2011) propose a
method to generate a real-time dense reconstruction offboard while guiding the drone by means of onboard VO. A framework to assist a surveyor while acquiring an SfM data set was proposed by Hoppe et al. (2012), where an offboard calculated color-coded mesh model is displayed in real time for the purpose of providing the surveyor with feedback about the local quality of the reconstruction. Wendel, Maurer, Graber, Pock, and Bischof (2012) utilize the drone onboard PTAM-based calculated poses in an offboard server to produce a live dense 3D reconstruction that is displayed in real time on a tablet. The dense monocular 3D reconstruction algorithm regularized monocular depth estimation (REMODE) (Pizzoli, Forster, & Scaramuzza, 2014) measures depth against a reference view and performs uncertainty-dependent point-cloud smoothing, achieving real-time execution using CUDA (https://en.wikipedia.org/wiki/CUDA) by combining an algorithm to generate dense point-clouds using patch-level or per-pixel Bayesian depth estimation with a parametric model (Vogiatzis & Hernández, 2011) and the fast state-of-the-art visual odometry method semidirect visual odometry (SVO; Forster, Pizzoli, & Scaramuzza, 2014). REMODE has since been utilized on data acquired with drones to generate dense depth maps in real time for various research projects: creating medium-sized maps in an offboard ground-station by streaming the acquired data (Faessler et al., 2016), the creation of dense maps onboard (Forster, Faessler, Fontana, Werlberger, & Scaramuzza, 2015) by restricting their size to a relatively small 2.5D fixed-size grid-map around the robot (Fankhauser, Bloesch, Gehring, Hutter, & Siegwart, 2014), and the feasibility of sharing the 2.5D map acquired by the drone to guide a ground robot. Regarding the latter and still using REMODE, a mobile robot plans and executes trajectories in rough terrain in a small area mapped by a drone (Delmerico, Mueggler, Nitsch, & Scaramuzza, 2017), by training a terrain classifier on-the-fly (Delmerico, Giusti, Mueggler, Gambardella, & Scaramuzza, 2016), and a legged robot and the drone achieve localization on the same global coordinate frame in Käslin et al. (2016). In Lynen et al. (2015), an efficient indexed nearest-neighbor search to achieve image-based relocalization on a prebuilt map is proposed, where the map is obtained using standard SfM techniques with its scale recovered using IMU data and the recursive nonlinear filtering approach OKVIS (Agarwal et al., 2012; Leutenegger et al., 2013, 2015), and a VIO method for local pose tracking inspired by the multi-state constraint Kalman filter (MSCKF) (Mourikis et al., 2009) is used. Using OKVIS (Leutenegger et al., 2013, 2015) as VIO front-end, the research work (Oleynikova, Burri, Lynen, & Siegwart, 2015) also proposes a method to localize a drone and a ground robot on the same map by means of a previously acquired reference map. Recently, a drone dense-reconstruction data set (Karrer, Kamel, Siegwart, & Chli, 2016) has been released, which focuses on small working areas and producing precise 3D dense models for the purpose of object manipulation, in which ground-truth position data acquired with a Leica Total Station are available.

The system developed by J. Schneider et al. (2016) creates a relatively dense georeferenced point-cloud of very high accuracy while localizing the drone in real time at 100 Hz using only onboard computation on a 3.6 GHz Intel CPU (Santa Clara, CA) with 4 cores.

Another possibility to create dense reconstructions is using semidense VO methods, which extract the depth of high-gradient regions of the scene, such as large-scale direct LSD-SLAM (Engel, Schöps, & Cremers, 2014), Direct Sparse Odometry (DSO; Engel, Koltun, & Cremers, 2018), or semidense mapping (SDM; Mur-Artal & Tardós, 2015). However, these methods are not well suited for this purpose because their depth estimates are not filtered or optimized for dense mapping. The direct tracking and mapping method dense piecewise-planar tracking and mapping (DPPTAM; Concha & Civera, 2015) achieves good results by including piecewise-planar surfaces in the model, but the computation requirement is too high for direct onboard execution. The method by Teixeira and Chli (2016) is extremely fast but the produced mesh results include strong interpolations causing errors in sharp edges, such as corners. A later method from the same authors (Teixeira & Chli, 2017) uses large-scale direct monocular SLAM (LSD-SLAM), super-pixels, and filtering that eliminates most depth outlier estimates, and it achieves very competitive runtimes on an Intel-i7 4700MQ/Intel-i7 5557U/Intel NUC processor (Intel Corporation (Intel), Santa Clara, CA) that can be mounted onboard drones.

The trajectory planner presented in this study was designed using methods from the state of the art to deliver long and smooth trajectories on our overview obstacle maps, and it is presented as a component of the developed system. The reader is here directed to work in the field of fast trajectory replanning that would allow the drone to explore unknown cluttered environments while flying near the obstacles. These types of planners are able to regenerate an obstacle-free smooth trajectory at a high rate; for instance, the following recent works make computation efficiency improvements by using operations in an OcTree data structure (Chen, Liu, & Shen, 2016), a local multiresolution discretization (Nieuwenhuisen & Behnke, 2016), and local replanners (Oleynikova et al., 2016; Usenko, vonStumberg, Pangercic, & Cremers, 2017).

In this study, we propose a method to create an overview obstacle map of a desired outdoor area onboard the drone. The success of our approach is a direct consequence of utilizing a survey flight trajectory that provides an image data set of a large area with high and approximately constant image overlap, resulting in a very well constrained BA problem. This choice reduces the size of the optimization problem, for which the associated Hessian matrix has a known structure (Triggs et al., 1999), and keeps it at an onboard computationally manageable size. The resulting sparse 3D model is meshed to generate an obstacle map by using a Delaunay triangulation (Labatut et al., 2007). Although our approach comes at the cost of only mapping the obstacles that are well represented by the sparse SfM model, consisting of 3D points and lines, our densification operation is computationally very efficient. When compared with related work, our approach presents several novelties.
In comparison to previous work that re-utilizes a map acquired on an earlier session for navigation (e.g., Burri et al., 2015; Qiu et al., 2017; T. Schneider et al., 2018), our solution allows the acquisition of a moderately sized obstacle map onboard the drone, which we demonstrate for maps of size 100 m × 75 m that are created in ≈2.75 min, allowing the drone to perform a near-ground navigation task on the same flight. Because the creation of our sparse 3D model is incremental, the mapping operation can be stopped at any time, resulting in a smaller mapped area and a shorter map creation time. Similarly, in comparison to the discussed onboard dense-reconstruction methods, our maps cover much bigger areas than the onboard solutions from the related work (e.g., Forster et al., 2015; Teixeira & Chli, 2017). An accuracy evaluation of our obstacle map is performed to provide a basis for the comparison of our obstacle map to that of other methods. Here, it is noted that in some works (e.g., J. Schneider et al., 2016), the accuracy is evaluated based on the distance of the mapped points to the ground-truth point-cloud rather than the other way around, which is not as informative for the purpose of using the 3D reconstruction as an obstacle map.

Similarly to Weiss et al. (2011), we densify the sparse model representation into an obstacle map by fitting a mesh model by using a Delaunay triangulation (Labatut et al., 2007), which is computationally very efficient. In comparison to the work by Weiss et al. (2011), we use (a) a conventional photogrammetry pipeline that reconstructs points and lines rather than an efficient version of PTAM and (b) a survey flight trajectory for the image acquisition, which together should result in a more accurate mesh model. In addition, our experiments provide new insights on the usage of the Delaunay triangulation from a sparse SfM model to create an obstacle map by (a) providing an accuracy evaluation of the generated mesh models using as ground-truth dense point-clouds obtained using a photogrammetry method and (b) demonstrating the usage of the calculated overview obstacle map for autonomous navigation.

In a nutshell, the main contributions of this study are as follows:

• We propose the concept of overview obstacle map generation for the fast deployment of drones in unknown outdoor environments. A short survey flight provides the data for the vision-based incremental generation of the obstacle map onboard and leaves enough time to directly exploit the created map for near-ground navigation.
• An accuracy evaluation of the generated obstacle map is presented in Section 5.3.
• We demonstrate the potential of our solution by performing autonomous obstacle-free navigation on a map acquired using our proposed method.

3 | OVERVIEW OBSTACLE MAP GENERATION

The calculation of the overview map is performed using an SfM algorithm. This choice allows us to use a minimal set of sensors commonly available on drones: a GPS sensor, used for scaling and georeferenciation, and a standard camera. In comparison to visual-inertial approaches, we forgo the intersensor calibration of the camera with respect to the IMU sensor and the time synchronization of the data from both sensors, which could be achieved, for instance, using the Kalibr open-source library from Furgale, Rehder, and Siegwart (2013). Additionally, SfM algorithms do not require a high frame rate and work well with still images, so that they can be used without a navigation computer vision camera. These characteristics allow the use of our solution in a broad range of commercially available drones.

3.1 | Online SfM

The multirotor autonomously performs an overview flight at a safe high altitude over the region of interest. This region is defined by its GPS corner points. The drone takes off and ascends to a safe height, approaches the region of interest, and plans and executes a regular survey flight trajectory according to a desired image overlap setting. The set of acquired overlapping nadir images is used to generate a sparse 3D map enhanced with line features on-the-fly. Simultaneously, a 3D mesh representing the surface model is fitted to the sparse model and, at the end of the survey flight, it is rasterized to obtain the overview obstacle map.

The utilized online SfM pipeline, developed at our institute,2 was first proposed by Hoppe et al. (2012), and it is very runtime efficient. The task of the online SfM is to reconstruct the scene 3D points and simultaneously calculate the camera poses against the calculated sparse point-cloud. The pipeline is based on a precalibrated camera model that we obtained with the method of Daftry, Maurer, Wendel, and Bischof (2013) and utilizes scale invariant feature transform (SIFT) features (Lowe, 2004; Wu, 2007) to be able to handle imagery with large baselines. The sparse model is initialized from the first two images for which a valid relative pose estimate can be computed by using the robust version of the five-point pose estimation algorithm (Nistér, 2004). Afterward, the absolute pose estimation method by Kneip, Scaramuzza, and Siegwart (2011) is used to align the incoming images to the current sparse 3D reconstruction in real time. Meanwhile, iterative bundle adjustment (Triggs et al., 1999) is performed in a parallel thread to prevent the scene drift likely to be caused by the incremental map building procedure. Using an incremental and real-time version of Line3D++ (Hofer, Maurer, & Bischof, 2017), a set of 3D lines is calculated from the aligned images.

2 http://icg.tugraz.at/—Institute of Computer Graphics and Vision, Graz University of Technology.

The sparse 3D model, thus, consists of a point-cloud, a set of line segments, and the camera poses. The calculated set of 3D lines is sampled so as to add points and information for the calculation of the surface model. In addition, the lines enhance the interpretability of the visualization of the sparse 3D map, especially for man-made structures and line-rich regions.

Following the method of Rumpler et al. (2014, 2016), the GPS measurements and the camera poses of the sparse model are aligned by means of a robust random sample consensus (RANSAC)-based least-squares minimization of the distance between both sets of locations. This effectively provides the correct scale, rotation, and relative translation for the sparse 3D map. A better georeferenciation could be obtained by placing georeferenced fiducial markers on site; however, in this study, we forgo the use of markers so as to maintain our approach suitable for search and rescue scenarios.
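The RANSAC-based least-squares alignment of the camera poses to the GPS measurements can be illustrated with a short sketch. The snippet below is not the authors' implementation; it is a minimal example, assuming camera centers and GPS positions expressed in a local metric frame (e.g., ENU), that wraps a RANSAC loop around Umeyama's closed-form similarity estimate. The inlier threshold and iteration count are illustrative values.

```python
import numpy as np

def similarity_from_correspondences(src, dst):
    """Least-squares similarity (scale s, rotation R, translation t) mapping src -> dst."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                      # keep a proper rotation
    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t

def ransac_georeference(cam_pos, gps_pos, iters=200, inlier_thresh=2.0, seed=0):
    """Robustly align SfM camera centers (arbitrary scale/orientation) to GPS
    positions in a local metric frame; returns the best (s, R, t)."""
    rng = np.random.default_rng(seed)
    best, best_count = None, 0
    n = len(cam_pos)
    for _ in range(iters):
        idx = rng.choice(n, size=3, replace=False)          # minimal sample
        s, R, t = similarity_from_correspondences(cam_pos[idx], gps_pos[idx])
        pred = (s * (R @ cam_pos.T)).T + t
        inliers = np.linalg.norm(pred - gps_pos, axis=1) < inlier_thresh
        if inliers.sum() > best_count:
            best_count = inliers.sum()
            # refit on all inliers for the final least-squares estimate
            best = similarity_from_correspondences(cam_pos[inliers], gps_pos[inliers])
    return best
```

Applying the estimated transform to every point and line of the sparse model yields the georeferenced map used in the remainder of the pipeline.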
FIGURE 3 Successive steps undertaken to generate an obstacle map of size 105 m × 75 m, see explanation in Section 3.1: autonomous overview flight at a safe altitude over the 50 m × 50 m area of interest, generation of the sparse model using our incremental SfM pipeline, generation of a dense surface model from the sparse one, conversion to Octomap by direct sampling of the surface model, and successive updates during flight at low altitude by processing the stereo-head data streams. The Octomaps are displayed color-coded according to the height and have a minimum voxel resolution of 1 m [Color figure can be viewed at wileyonlinelibrary.com]

In addition to the sparse 3D map, the pipeline also creates a surface model on-the-fly (Hoppe, Klopschitz, Donoser, & Bischof, 2013). The current implementation uses the approach by Labatut et al. (2007), rather than the approach outlined in Hoppe et al.'s (2013) work. This model is directly converted into an Octomap (Hornung, Wurm, Bennewitz, Stachniss, & Burgard, 2013) obstacle map representation by direct point-sampling of the triangle mesh. The obtained Octomap can be updated in real time and is utilized by our trajectory planner during flight.

Because all these map representations of the environment are georeferenced, the drone can be localized in the map, with up to GPS precision, by simply using the internal GPS + IMU fusion provided by the autopilot board. Further, georeferenciation enables us to, first, add GPS-defined no-fly zones and, second, to show the geolocation of objects of interest detected during flight.

The experimental setup for the onboard real-time execution of our mapping software is the following. The image stream from the Zenmuse X3 Gimbal (SZ DJI Technology Co., Ltd. (DJI), Shenzhen, Guangdong, China) (resolution of 1,280 × 720 pixels) is fully processed onboard our drone. This is made possible by leveraging the GPU of the onboard computer, an Nvidia Jetson TK1 development board, to extract image features using SiftGPU (Wu, 2007). The online SfM pipeline is able to process one image every 3.0 s during flight experiments, or one image every 1.5 s when only processing a data set. A region of interest of size 50 m × 50 m with a final map size of up to around 105 m × 75 m (at ground level) is mapped onboard in ≈2.75 min (Figure 1). This processing time includes: the overview flight, the acquisition of images, the generation of the sparse 3D model, the 3D mesh generation, the georeferenciation, and the conversion to the obstacle map representation (Figure 3).

3.2 | Obstacle map representation

We use the Octomap (Hornung et al., 2013) obstacle map representation for this purpose, which is an OcTree-based volumetric map representation (Figure 1, right). Its implementation is open-source3 and it is integrated to be easily used with the Robot Operating System (ROS).4 We selected it for various reasons: it is efficient enough to be generated and updated onboard and it can represent general-shaped obstacles. In our approach, the obstacle map is obtained from a single mesh model, for which Octomap is a good fit. In contrast, methods based on creating the surface model using Truncated Signed Distance Fields (TSDFs), such as Voxblox (Oleynikova, Taylor, Fehr, Siegwart, & Nieto, 2017), are better suited to be used with depth sensors, such as RGB-D and stereo cameras.

3 https://github.com/OctoMap/octomap
4 http://wiki.ros.org/octomap/_mapping

To accelerate the calculation of obstacle-free trajectories, we use a precalculated distance map that provides the clearance of any point in free space to its closest obstacle. An efficient implementation of such an algorithm for Octomap was developed by Lau, Sprunk, and Burgard (2013) and released open-source as the library "DynamicEDT3D: A library for incrementally updatable Euclidean distance transforms in 3D." Its main advantages are featuring constant access time, because its internal data structure storing the distance map is an array, and being capable of time-efficient incremental updates.

For the specific case of our drone, see Figure 2, the point-clouds provided by our stereo-heads are of low resolution (320 × 240 pixels). For this reason, during flight, we are able to apply fast incremental updates to both the obstacle map and the distance map by using their native Application Programming Interfaces (APIs). We have tested the runtime of this operation on data sets, and for an Octomap with a minimum voxel resolution of 1 m, the updates can be applied onboard in real time, at a frequency higher than 1 Hz.

4 | NAVIGATION USING THE OVERVIEW OBSTACLE MAP

The navigation controller design and tuning is explained in Section 4.1. To achieve safe near-ground navigation in cluttered environments, we implemented a trajectory planner (see Section 4.2) that generates trajectories at a configurable clearance distance from obstacles.
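To make the clearance queries of Section 3.2 concrete, the following sketch (not the authors' code, and using a dense array in place of an OcTree) rasterizes occupied voxels into a 3D occupancy grid and computes a Euclidean distance map with SciPy; the map extents, resolution, obstacle, and query point are made-up values.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Illustrative map extents: 105 m x 75 m x 20 m at 1 m voxel resolution.
resolution = 1.0
occupied = np.zeros((105, 75, 20), dtype=bool)

# Occupied voxels would normally come from sampling the mesh / Octomap;
# here a toy 12 m high building is inserted by hand.
occupied[40:52, 30:38, 0:12] = True

# Distance (in meters) from every free voxel to the closest occupied voxel.
# distance_transform_edt measures distances of nonzero cells to the nearest
# zero cell, so the free-space mask is passed as input.
clearance = distance_transform_edt(~occupied, sampling=resolution)

def clearance_at(p_xyz):
    """Constant-time lookup of the clearance at a metric position (meters)."""
    i, j, k = (np.asarray(p_xyz) / resolution).astype(int)
    return clearance[i, j, k]

print(clearance_at((60.0, 34.0, 5.0)))   # prints 9.0 for this toy map
```

Unlike DynamicEDT3D, this sketch recomputes the whole transform instead of applying incremental updates, but the lookup pattern used by the planner is the same.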
4.1 | Navigation control

4.1.1 | System identification

The flight behavior of our drone was characterized by performing speed command step response identification tests. Based on our experimental data and understanding of the system, the dynamical behavior from velocity command to velocity output is assumed to be described by a transfer function

P(s) := v(s)/v_c(s) = (V/(T s + 1)) e^(−s T_d),    (1)

where s is the Laplace variable and P(s) maps the Laplace transform v_c(s) of v_c(t), the speed command, to the Laplace transform v(s) of v(t), the actual speed of the drone. The parameters of our model are: V, the static gain; T_d, a pure delay; and T, the time constant. A rough controller parameter tuning was calculated based on the resulting model and later experimentally improved.

4.1.2 | Feedforward control

The mathematical model for the dynamics from velocity command to real velocity can be used to determine a feedforward control action. This action takes knowledge about the future development of the desired velocity into account and would thus, in the absence of errors, lead to the drone following the desired trajectory exactly. Because all axes are considered separately, the following section restricts itself to the x-axis; all other axes can be handled in the same way.

Assume that the desired trajectory x_d(t) is given by a smooth mathematical function. Then, the value of the desired position x_d and all of its derivatives ẋ_d = v_{x,d}, ẍ_d = v̇_{x,d}, and so forth are known at each time instant t. For our dynamics model of the drone, the transfer function (1), the feedforward control command is calculated as

v_{c,ff}(t) = (T/V) v̇_d(t + T_d) + (1/V) v_d(t + T_d),    (2)

that is, we need to look "into the future" by T_d seconds and have knowledge about the desired acceleration and velocity. In the absence of modeling errors and flight disturbances, this command would lead to the drone following the desired trajectory exactly, that is, v(t) ≡ v_d(t).

4.1.3 | Feedback control

The feedforward control law alone does not guarantee that the drone will actually follow the trajectory in a real setting, even if the initial position matches the beginning of the trajectory exactly. We utilize a feedback loop controller, similar to the PID controller architecture, for the three linear coordinates and the yaw heading, utilizing both position and speed measurement feedback and references. The utilized measurement feedbacks are the position, x(t), and velocity, ẋ(t), from the internal GPS + IMU fusion provided by the autopilot board. An example speed command, v_c, for the autopilot over one of the coordinate axes is

v_{c,fb}(t) = K_p [x_d(t) − x(t)] + K_d [ẋ_d(t) − ẋ(t)],    (3)

where K_p and K_d are the controller tuning parameters and {x_d(t), ẋ_d(t), ẍ_d(t) = v̇_d(t)} is the reference trajectory.

4.1.4 | Overall control

Sections 4.1.2 and 4.1.3 are combined in a single control law, resulting in the following equation for each of our coordinate axes:

v_c(t) = v_{c,ff}(t) + v_{c,fb}(t) = (T/V) ẍ_d(t + T_d) + (1/V) ẋ_d(t + T_d) + K_d [ẋ_d(t) − ẋ(t)] + K_p [x_d(t) − x(t)].    (4)

In our experiments, the measurement feedback utilized by the controller are the position, x(t), and velocity, ẋ(t), from the GPS + IMU fusion provided by the autopilot board.

The reference smooth trajectory is calculated in two steps. First, a trajectory specified through waypoints, with accompanying speed and acceleration plans, is obtained using the speed planner explained in Section 4.2. And, second, a third-order spline is fitted to the set of waypoints, times of passage, speeds, and accelerations. The resulting spline is the reference smooth trajectory for the controller, specified in Equation (4) as {x_d(t), ẋ_d(t), ẍ_d(t)}.

4.2 | Trajectory planning

The purpose of our trajectory planner is to allow fast and safe navigation along long trajectories in cluttered environments. The calculated path should, therefore, be smooth and keep clear of obstacles. Whenever a new goal position is received, a new path is delivered to the controller. Similar to the approach proposed by Richter, Bry, and Roy (2013), we use the differential flatness of the quadrotor dynamics (Mellinger & Kumar, 2011) to plan a smooth trajectory in 3D position coordinates without directly considering the system dynamics, and perform the following calculations separately: obstacle-free path planning and subsequent generation of a smooth, continuous, and differentiable trajectory path.

Our method proceeds as follows (Figure 4):

1. Calculate a path using a state-of-the-art trajectory planner that minimizes a cost function, which penalizes proximity to obstacles, unnecessary changes in altitude, and length.
2. Limiting the increase on the path cost, the raw output path from the planning algorithm is consecutively shortened into a smooth trajectory.
3. Taking into account the path curvature and parameters that fix the maximum values for the velocity and acceleration, feasible speed, acceleration, and time-of-passage plans are calculated (the speed planner is described below).
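Looking back at the navigation controller of Section 4.1, the combined feedforward plus feedback command of Equation (4) is compact enough to sketch in a few lines. The snippet below is an illustration, not the authors' implementation; the identified model parameters and feedback gains are placeholder values, and the reference trajectory is assumed to provide position, velocity, and acceleration lookups for one axis.

```python
# Placeholder identified model parameters and feedback gains (per axis).
V, T, Td = 1.0, 0.35, 0.25        # static gain, time constant, pure delay [s]
Kp, Kd = 1.2, 0.6                 # feedback gains

def speed_command(t, x, x_dot, ref):
    """Equation (4): feedforward on the Td-shifted reference plus PD feedback.
    `ref(t)` must return (x_d, x_d_dot, x_d_ddot) for one coordinate axis."""
    xd_now, vd_now, _ = ref(t)
    _, vd_fut, ad_fut = ref(t + Td)          # look Td seconds into the future
    v_ff = (T / V) * ad_fut + (1.0 / V) * vd_fut
    v_fb = Kp * (xd_now - x) + Kd * (vd_now - x_dot)
    return v_ff + v_fb

# Toy reference: a constant-acceleration segment along one axis.
ref = lambda t: (0.5 * 0.8 * t**2, 0.8 * t, 0.8)
print(speed_command(t=2.0, x=1.5, x_dot=1.4, ref=ref))
```

The same function is applied independently to each linear coordinate, consistent with the per-axis treatment of Section 4.1.2.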
FIGURE 4 (caption fragment) … safe distance to nearby obstacles [Color figure can be viewed at wileyonlinelibrary.com]

To avoid this, some increase in the path cost is allowed, but it is constrained to a fraction of the raw path cost. In this manner, the above-mentioned sought qualities of the raw path are kept in the smoothed path. The shortening and the smoothing are the result of applying subsequent suboperations iteratively, for which a cost increase can be calculated individually. Therefore, the performed path simplifications are all cost-aware, that is, a shortening or smoothing suboperation is only accepted when its corresponding cost increase, measured by means of Equations (5)-(10), is below a threshold.

The performed suboperations are the following (Figure 4):

1. Reduce the number of vertices that are present in the current path: interim waypoints are removed if the trajectory is still collision-free and until the total cost does not increase more than 10%.
2. Collapse waypoints that are too near each other, with an overall allowed cost increase of 10%.
3. Shortcut the path, with an allowed cost increase of 10%: not only waypoints are considered for the path length reduction, but also inner points of the path segments.
4. Smooth the path using the B-spline algorithm, with an allowed cost increase of 15%: new waypoints are sampled, making the path rounder around sharp corners.
5. New waypoints are sampled in the current trajectory segments, and others are removed, to achieve segment lengths inside an acceptable predefined range, with an allowed cost increase of 10%.

The resulting smooth path sets the final position coordinates for all the waypoints. In the next step, only the dynamic information of the trajectory, for example, speed and times of passage, is calculated.

The speed planner is configured with the maximum upward, downward, and horizontal velocities and accelerations, {v_max,h, v_max,z,up, v_max,z,down, a_max,h, a_max,z,up, a_max,z,down}. These parameters define the velocity saturation constraints as the ellipsoids ((v_x² + v_y²)/v²_max,h) + (v_z²/v²_max,z,up) = 1 and ((v_x² + v_y²)/v²_max,h) + (v_z²/v²_max,z,down) = 1. The acceleration constraints are similarly defined. In Algorithms 1 and 2, the maximum values for the speed and the acceleration provided by the four ellipsoids are retrieved by the functions Velocity_Max_Direction(d, config) and Acceleration_Max_Direction(d, config).

The desired speed for the trajectory is set at every point to the minimum of the following two values: (a) the maximum configured velocity, Velocity_Max_Direction(…), and (b) the maximum attainable velocity as limited by the radius of curvature and the maximum acceleration, v_max,i = √(a_max,n · r_i), where a_max,n = Acceleration_Max_Direction(n*_i, config), with n*_i being the estimated normal vector to the trajectory at waypoint[i]. Then, the desired initial and final velocities are set, being usually both set to zero.

The generation of the speed plan entails the following steps:

1. The tangent, normal, and binormal vectors are estimated at each waypoint. Afterward, with this information, the position of the center and the radius of curvature are calculated by estimating the circumference that approximates each waypoint[i] and its neighbors. To perform this calculation, each point[i] and a sampling of its immediate neighbors along the path are projected into the plane formed by the current estimates of the normal and tangent vectors, and a system of equations is solved to determine the parameters of the said circumference. The radius of curvature at each waypoint and new estimates for the tangent and normal vectors are retrieved based on the solution to the system of equations.
2. A first speed plan is calculated that complies with the acceleration constraints. For this purpose, the algorithm Velocity_Plan_Sweep_Double_Pass(…) that was originally proposed by Hoffmann et al. (2008) is used with only minor modifications, see Algorithms 1 and 2. In this algorithm, the equations relating, for every section of the trajectory, the path segment lengths, velocities, and accelerations correspond with those of linear uniformly accelerated motion. The modifications to this algorithm are: (a) see line 3 of Algorithm 2, saturating for both normal and tangential accelerations, and (b) allowing the saturation of velocities and accelerations differently depending on the direction. A comparison between the smoothed and nonsmoothed speed plans is done in Section 5.2.1.

Algorithm 1 Velocity_Plan_Sweep_Double_Pass(…)
…  /* … elements of each vector, so that they are timewise reversed */
5: s = (−1)·s
6: {v_plan, a_at,plan, Δt_plan} = Velocity_Plan_Sweep(v_end, s, v_plan, r, {t*_i}_i, config)   /* see Algorithm 2 */
7: flip{s, r, {t*_i}_i, v_desired, v_plan}
8: s = (−1)·s
9: {v_plan, a_at,plan, Δt_plan} = Velocity_Plan_Sweep(v_init, s, v_plan, r, {t*_i}_i, config)
10: return {v_plan, a_at,plan, Δt_plan}   /* note that the cross-track acceleration is a_ct,plan = [v²_plan,i / r_i] */
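The double-pass sweep of Algorithms 1 and 2 can be illustrated with a short, self-contained sketch. It is not the authors' exact algorithm: it uses a single scalar speed and acceleration limit instead of the direction-dependent ellipsoid saturation, and it derives the per-segment accelerations and times at the end rather than inside the loop. It only shows the core idea of a backward pass followed by a forward pass under uniformly accelerated motion between waypoints; all numbers are made up.

```python
import numpy as np

def curvature_limited_speeds(v_max, a_max_normal, radii):
    """Desired speed per waypoint: configured maximum capped by sqrt(a_n * r)."""
    return np.minimum(v_max, np.sqrt(a_max_normal * np.asarray(radii)))

def velocity_sweep_double_pass(seg_len, v_desired, a_max, v_init=0.0, v_end=0.0):
    """Limit waypoint speeds so that v_{i+1}^2 <= v_i^2 + 2*a_max*seg_len[i]
    holds in both directions (backward pass from the goal, forward pass from
    the start), the uniformly-accelerated-motion constraint of Algorithm 2."""
    v = np.asarray(v_desired, dtype=float).copy()
    v[0] = min(v[0], v_init)
    v[-1] = min(v[-1], v_end)
    # Backward pass: make sure the drone can still brake to each later speed.
    for i in range(len(v) - 2, -1, -1):
        v[i] = min(v[i], np.sqrt(v[i + 1] ** 2 + 2.0 * a_max * seg_len[i]))
    # Forward pass: make sure each speed is reachable from the previous one.
    for i in range(len(v) - 1):
        v[i + 1] = min(v[i + 1], np.sqrt(v[i] ** 2 + 2.0 * a_max * seg_len[i]))
    # Per-segment along-track acceleration and traversal time then follow from
    # uniformly accelerated motion.
    a_at = (v[1:] ** 2 - v[:-1] ** 2) / (2.0 * seg_len)
    dt = 2.0 * seg_len / np.maximum(v[1:] + v[:-1], 1e-9)
    return v, a_at, dt

radii = [np.inf, 20.0, 6.0, 3.0, 15.0, np.inf]
v_des = curvature_limited_speeds(v_max=8.0, a_max_normal=2.0, radii=radii)
seg_len = np.array([5.0, 5.0, 4.0, 4.0, 6.0])
v, a_at, dt = velocity_sweep_double_pass(seg_len, v_des, a_max=2.0)
print(np.round(v, 2), np.round(dt, 2))
```

As in the paper's Algorithm 1, the output of this stage is typically bang-bang in the along-track acceleration, which motivates the smoothing step described next.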
Algorithm 2 Velocity_Plan_Sweep(…)
1: v_1 ← v_init
2: for i = 1 to N do
3:   a_max = Acceleration_Max_Direction(t*_i, config) − v_i²/r_i
4:   a_at,i ← min(a_max, (v_i² − v̄²_{i+1}) / (2(s_i − s_{i+1})))
5:   if a_at,i > 0 then
6:     v_{i+1} ← √(v_i² − 2 a_at,i (s_i − s_{i+1}))
7:     Δt_i ← (−v_i + √(v_i² − 2 a_at,i (s_i − s_{i+1}))) / a_at,i
8:   else
9:     if v_i > v̄_{i+1} then
10:      v_{i+1} ← v̄_{i+1}
11:      a_at,i ← (v_i² − v²_{i+1}) / (2(s_i − s_{i+1}))
12:      Δt_i ← (−v_i + √(v_i² − 2 a_at,i (s_i − s_{i+1}))) / a_at,i
…
15:      a_at,i ← 0
16:      Δt_i ← (s_{i+1} − s_i) / v_i
17:    end if
18:  end if
19: end for
20: return {v_plan = v, a_at,plan, Δt_plan}   // note that the cross-track acceleration is a_ct,plan = [v²_plan,i / r_i]

3. The output speed plan, v_plan, of the Velocity_Plan_Sweep_Double_Pass(…) function, Algorithm 1, complies with the configured velocity and acceleration constraints, considering both normal and along-track accelerations. However, it often provides a bang-bang solution that proposes maximum acceleration values with opposite signs at certain consecutive waypoints of the path. For this reason, in the next step and in contrast to Hoffmann et al. (2008), we propose an iterative speed plan smoothing algorithm that results in a continuous acceleration plan with a feasible derivative. The velocity smoothing operation is summarized in Algorithm 3 and it consists of the following steps:

(a) Using the current velocity plan as a data term, a new velocity plan is calculated by applying a smoothing spline-type optimization, see a related chapter of the book (James, Witten, Hastie, & Tibshirani, 2013). This corresponds to minimizing the following cost function, see Equations (11) and (12), by using the Gauss-Newton least-squares optimization (Triggs et al., 1999); note that 1 and N are the indexes of the initial and end velocities:

f(v, v_old, Δt = Δt_plan) = λ1 [ Σ_{i=2}^{N−1} (v_i − v_{i,old})² ] + λ2 [ ∫_{t1}^{tN} (d²v/dt²)² dt ] + λ3 [ ∫_{t1}^{tN} (d³v/dt³)² dt ]    (11)

= λ1 [ Σ_{i=2}^{N−1} (v_i − v_{i,old})² ] + λ2 [ Σ_{i=1}^{N−2} ((a_at,i+1 − a_at,i) / Δt_Tat,i)² Δt_Tat,i ] + λ3 [ Σ_{i=1}^{N−3} ((T_at,i+1 − T_at,i) / Δt_ΔT,i)² Δt_ΔT,i ],
where Δt_Tat,i = (Δt_{i+1} + Δt_i)/2 and Δt_ΔT,i = (Δt_{i+2} + 2Δt_{i+1} + Δt_i)/4.    (12)

The intermediary variables are:

i. a_at,i, the along-track acceleration at waypoint i,

a_at,i = (v_{i+1} − v_i) / Δt_i,    (13)

ii. T_at,i, the derivative of the along-track acceleration at waypoint i,

T_at,i = (a_at,i+1 − a_at,i) / Δt_Tat,i = ((v_{i+2} − v_{i+1})/Δt_{i+1} − (v_{i+1} − v_i)/Δt_i) / Δt_Tat,i.    (14)

The similarity of the smoothing cost function to the one minimized by a smoothing spline is shown in Equation (11), see the related chapter of the book (James et al., 2013). The expression is first developed using the intermediary variables a_at,i and T_at,i, resulting in Equation (12), and for the optimization, it is further developed to depend only on the speed plan using Equations (13) and (14). The data term, the old velocity plan v_old, and the time intervals between waypoints, Δt = Δt_plan, are held constant during each smoothing spline optimization iteration. The optimization is performed using Gauss-Newton least-squares, where only the current velocity plan v is considered as an optimization variable. The initial, v_1, and end, v_N, velocities are kept constant and, therefore, not optimized. The parameters {λ1, λ2, λ3} are weights adjusting the strength of each type of optimization residual: λ1 regulates the strength of the data term, λ2 regulates the strength of the acceleration smoothing terms, and λ3 regulates the strength of the acceleration derivative smoothing terms. A set of parameter weights that has provided good results, and that was used to obtain the values shown in the simulation and experimental tests, is the following: λ1 = 300, λ2 = 1.12, and λ3 = 0.08. Considering the notation from the SfM review paper (Triggs et al., 1999), the previously defined cost function, Equation (12), can be
rewritten using residuals as follows, see Equations (15)-(20). In these equations, W is the weight matrix and Δz the residual vector. The residual vector is subdivided into the three types of cost, Δz_v0, Δz_a, and Δz_T, from the smoothing cost function, see Equations (11) and (12). Δz_v0,i is defined for i = 2, …, N − 1; Δz_a,i is defined for i = 1, …, N − 2; and Δz_T,i is defined for i = 1, …, N − 3:

f(v, v_old, Δt = Δt_plan) = (1/2) Δzᵀ W Δz,    (15)

W = diag(λ1 I_{N−2}, λ2 I_{N−2}, λ3 I_{N−3}),    (17)

Δz_T,i = ((T_at,i+1 − T_at,i) / Δt_ΔT,i) √(Δt_ΔT,i).    (20)

Using this notation, the increment to the speed plan, Δv_c, at each iteration is calculated from the following set of equations: (Jᵀ W J) Δv_c = −Jᵀ W Δz, where J is the Jacobian matrix of Δz.

(b) After every iteration of the smoothing spline optimization, the resulting speed plan is not self-consistent. This means that the values {v_plan, a_at,plan, Δt_plan} do not verify the equations of a uniformly accelerated motion for all segments of the trajectory. In fact, the smoothing spline optimization does not enforce these constraints. For this reason, after every iteration of the smoothing spline optimization, a new pass of Velocity_Plan_Sweep_Double_Pass(…) is performed, so that a new set of self-consistent values for {v_plan, a_at,plan, Δt_plan} is obtained. This amounts to the following function call: [v_plan, a_at,plan, Δt_plan] = Velocity_Plan_Sweep_Double_Pass(s, r, {t*_i}_i, v_new max, config), where v_new max is defined for every component as the minimum of v_plan + Δv_c and v_desired, that is, v_new max = {v_new max,i = min(v_plan,i + Δv_c,i, v_desired,i)} (see Algorithm 3). The iterative Gauss-Newton least-squares optimization that encodes the velocity smoothing algorithm, Algorithm 3 lines 4-11, is stopped when: (a) the norm of the Jacobian of the cost function, Equations (11) and (12), increased in the last iteration or (b) a preset number of iterations is reached.

(c) Rerunning the smoothing spline optimization exchanging the data term by the last smoothed velocity plan, Algorithm 3 lines 1-12, allows the algorithm to gradually forget the strong initial bang-bang velocity plan provided by Algorithm 1. For a given set of the smoothing strength parameters {λ1, λ2, λ3}, increasing the number of reruns of the smoothing spline optimization results in smoother speed plans, with lower maximum values for the acceleration derivative. This effect is demonstrated in Section 5. This fact allows the calculation of a speed plan with an acceleration derivative bounded by a specific value. In practice, to obtain efficient computation times, the smoothing spline optimization is repeated a fixed number of times.

Algorithm 3 Velocity_Smoothing(s, r, {t*_i}_i, v_plan, Δt_plan, r)
1: for i = 1 to num_passes do
2:   v_old = v_plan
3:   for j = 1 to max_iterations_per_pass do
…
5:     if ((j == 1) or (|df/dv|_j < |df/dv|_{j−1}))
…
8:     else
9:       break
10:    end if
11:  end for   /* j */
12: end for   /* i */
13: return {v_plan, a_at,plan, Δt_plan}   /* note that the cross-track acceleration is a_ct,plan = [v²_plan,i / r_i] */

4. To calculate the trajectory reference for the navigation controller, a third-order spline is fitted to the set of waypoints, times of passage, speeds, and accelerations, see Section 4.1. The spline representation allows the navigation controller to calculate the trajectory references at any instant in time.

FIGURE 5 Synthetic obstacle map of size 50 × 50 × 48 m used to benchmark the capabilities of our trajectory planner. The map features a fair amount of clutter and small passages with a width of 5.25 m [Color figure can be viewed at wileyonlinelibrary.com]
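Step 4 above (fitting the third-order reference spline) can be realized, for example, with a cubic Hermite spline through the waypoint positions and the planned velocities at the planned times of passage. This is an illustrative sketch, not the authors' implementation; the waypoints, velocities, and times below are made up.

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

# Made-up output of the speed planner: times of passage, 3D waypoints and
# the planned velocity vectors at those waypoints.
t_pass = np.array([0.0, 2.0, 4.5, 7.0])
waypoints = np.array([[0.0, 0.0, 2.0],
                      [6.0, 1.0, 2.0],
                      [12.0, 5.0, 3.0],
                      [15.0, 10.0, 3.0]])
velocities = np.array([[0.0, 0.0, 0.0],
                       [3.0, 1.0, 0.0],
                       [2.0, 2.0, 0.5],
                       [0.0, 0.0, 0.0]])

# Piecewise third-order polynomial matching positions and velocities.
ref_pos = CubicHermiteSpline(t_pass, waypoints, velocities, axis=0)
ref_vel = ref_pos.derivative(1)
ref_acc = ref_pos.derivative(2)

def reference(t):
    """Trajectory reference {x_d, x_d_dot, x_d_ddot} for the controller of
    Section 4.1, evaluable at any time instant t."""
    return ref_pos(t), ref_vel(t), ref_acc(t)

x_d, v_d, a_d = reference(3.2)
print(np.round(x_d, 2), np.round(v_d, 2), np.round(a_d, 2))
```

The resulting callable reference is exactly what Equation (4) consumes when computing the speed commands during flight.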
FIGURE 6 Example trajectory 1—Comparison of speed and acceleration plans with and without velocity smoothing. (top) Visualization of the planned trajectory. Regarding the rest of the plots: (top-left) trajectory path in 3D, (top-right) path color-coded with the radius of curvature, (middle, bottom-left) smoothed and (middle, bottom-right) nonsmoothed speed and acceleration plans. (a) With smoothing and (b) no smoothing [Color figure can be viewed at wileyonlinelibrary.com]
FIGURE 7 Example trajectory 2—Comparison of speed and acceleration plans with and without velocity smoothing. (top) Visualization of the planned trajectory. Regarding the rest of the plots: (top-left) trajectory path in 3D, (top-right) path color-coded with the radius of curvature, (middle, bottom-left) smoothed and (middle, bottom-right) nonsmoothed speed and acceleration plans. (a) With smoothing and (b) no smoothing [Color figure can be viewed at wileyonlinelibrary.com]
FIGURE 8 Example trajectory 2, see Figure 7—Comparison of speed and acceleration plans with a varying number of velocity smoothing passes. Each pair of plots shows the corresponding plans for a different number of passes, resulting in the following [number of passes; maximum acceleration derivative]: (upper left) [1; 19 m/s³], (upper right) [5; 15 m/s³], (lower left) [10; 11 m/s³], and (lower right) [20; 9 m/s³]. The result of not applying any smoothing passes is shown in Figure 7 (middle, bottom-right). (a) 1 smoothing pass, (b) 5 smoothing passes, (c) 10 smoothing passes, and (d) 20 smoothing passes [Color figure can be viewed at wileyonlinelibrary.com]
4.3 | Geometric speed planner

In Section 4.2, we have described a planning approach which divides the trajectory generation into three separate subproblems: obstacle-free path planning; trajectory shortening and smoothing; and speed, acceleration, and time-of-passage planning. The presented speed planning approach can also be used when the drone flies in obstacle-free areas over paths specified by
• DJI M100 drone (SZ DJI Technology Co., Ltd. (DJI), Shenzhen,
Guangdong, China), which, with a TB47D battery and equipped as
described, features a takeoff weight of 3.5 kg and achieves a flight
time of approximately 12 min.
F I G U R E 9 Example trajectories from the evaluation of the planner • Nvidia Jetson TK1 (DJI Manifold, Nvidia Corporation (Nvidia), Santa
described in Section 5.2, color coding of the trajectories as explained in
Clara, CA) onboard computer that features a quad‐core processor,
Figure 4 [Color figure can be viewed at wileyonlinelibrary.com]
2 GB of RAM and a CUDA‐enabled Tegra chip.
• Zenmuse X3 Gimbal camera: 1,280 × 720 pixels at 30 Hz.
manually defined waypoints. For this purpose, the “Geometric
• DJI Guidance visual‐sensing system with five stereo‐heads
Speed Planner” module was developed, which implements only
simultaneously provides images from all five directions and
the following two subproblems: (a) trajectory smoothing that
point‐clouds from two directions at 320 × 240 pixels at 10 Hz.
produces a path following the specified sequence of waypoints
but that generates curves at the intermediary waypoints; and (b)
the calculation of the speed, acceleration, and time‐of‐passage 5.2 | Evaluation of the trajectory planning
plans. approach
The evaluation of the trajectory planner is performed in a synthetic
5 | EXPERIMENTA L RESU LTS obstacle map (Figure 5) that represents an industrial environment of
size 50 × 50 × 48 m, which is distributed as part of the GitHub
We have performed three types of experiments for the evalua- repository of the RotorS simulation framework (Burri, 2015; Furrer,
tion of our solution, the combination of the mapping procedure, Burri, Achtelik, & Siegwart, 2016). Using this environment, we
see Sections 3.1 and 3.2, and our navigation approach, see showcase the capabilities of our trajectory planner to generate
Sections 4.1 and 4.2. In Section 5.2, the runtime and the smooth trajectories around and away from obstacles.
characteristics of the trajectories generated by our planning The configuration of the trajectory planner in this section is as
approach have been benchmarked in a synthetic industrial follows. The PRM* and the RRT* planners are configured with the
environment. In Section 5.3, we showcase the quality of the default parameters from the OMPL library, except for the maximum
generated overview maps in three different scenes and analyze distance limit of a new vertex to the current tree in the RRT* algorithm,
quantitatively the accuracy of the onboard generated overview which is deactivated. The obstacle‐free trajectory calculation was
obstacle maps against an offboard state‐of‐the‐art dense‐recon- configured with a maximum distance dmax of 6 m for the distance map
struction photogrammetry method. And in Section 5.4, the usage and a maximum planning time of 1 s. This particular choice for the
of overview obstacle maps for navigation is demonstrated on an maximum planning time is based on the fact that the overview obstacle
autonomous flight experiment. map contains abundant free space above ground. In most cases, the
T A B L E 1 Trajectory and speed planning performance benchmark using the RRT* for the obstacle‐free path calculation

| #Traj. | Direct path (m) | Length (m), mean ± 3σ | Clearance (m), mean of min. | Clearance (m), min. of min. | Clearance (m), mean of mean ± 3σ | Velocity (m/s), mean of mean ± 3σ | Velocity (m/s), mean of max. | Velocity (m/s), max. of max. | Planned traversal time (s), ours mean | Planned traversal time incr. (%), mean | da/dt (m/s³), mean of max. | da/dt (m/s³), max. of max. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 30.25 | 40.84 ± 3.51 | 3.00 | 2.24 | 4.70 ± 0.45 | 4.83 ± 0.29 | 9.05 | 9.68 | 8.47 | 4.36 | 10.98 | 19.66 |
| 2 | 21.63 | 24.89 ± 1.64 | 2.99 | 2.24 | 4.07 ± 0.31 | 4.36 ± 0.34 | 6.47 | 7.69 | 5.72 | 3.92 | 8.78 | 12.17 |
| 3 | 18.71 | 24.62 ± 2.24 | 3.04 | 2.97 | 4.23 ± 0.33 | 3.86 ± 0.40 | 5.76 | 6.38 | 6.38 | 5.62 | 5.71 | 9.05 |
| 4 | 25.24 | 37.85 ± 1.61 | 3.09 | 2.85 | 5.10 ± 0.22 | 4.50 ± 0.32 | 8.43 | 8.97 | 8.42 | 5.56 | 10.52 | 17.28 |
| 5 | 28.41 | 59.48 ± 3.05 | 3.43 | 2.69 | 4.91 ± 0.27 | 5.04 ± 0.38 | 9.01 | 10.00 | 11.81 | 4.29 | 12.04 | 21.07 |
| 6 | 16.14 | 27.78 ± 48.36 | 1.68 | 1.35 | 2.88 ± 1.50 | 3.98 ± 1.78 | 6.15 | 9.76 | 6.65 | 3.84 | 7.10 | 17.25 |
| 7 | 14.00 | 58.74 ± 2.40 | 2.23 | 2.00 | 3.70 ± 0.31 | 5.48 ± 0.39 | 9.37 | 9.85 | 10.71 | 2.81 | 12.29 | 23.57 |
| 8 | 31.55 | 52.28 ± 19.49 | 2.25 | 2.25 | 4.08 ± 0.65 | 5.23 ± 0.86 | 8.22 | 9.80 | 9.98 | 3.57 | 10.99 | 23.55 |
| 9 | 18.17 | 27.26 ± 6.29 | 2.50 | 2.50 | 4.07 ± 0.42 | 3.95 ± 0.36 | 5.96 | 6.52 | 6.90 | 4.09 | 6.47 | 9.86 |

Note. RRT: Rapidly exploring Random Tree; da/dt: derivative of the acceleration.
T A B L E 2 Trajectory and speed planning performance benchmark using the PRM* for the obstacle‐free path calculation

| #Traj. | Direct path (m) | Length (m), mean ± 3σ | Clearance (m), mean of min. | Clearance (m), min. of min. | Clearance (m), mean of mean ± 3σ | Velocity (m/s), mean of mean ± 3σ | Velocity (m/s), mean of max. | Velocity (m/s), max. of max. | Planned traversal time (s), ours mean | Planned traversal time incr. (%), mean | da/dt (m/s³), mean of max. | da/dt (m/s³), max. of max. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 30.25 | 40.53 ± 3.29 | 3.02 | 2.55 | 4.66 ± 0.47 | 4.81 ± 0.34 | 9.11 | 9.65 | 8.43 | 4.43 | 11.56 | 24.79 |
| 2 | 21.63 | 24.76 ± 0.90 | 2.93 | 2.30 | 4.02 ± 0.41 | 4.31 ± 0.32 | 6.44 | 6.89 | 5.74 | 4.11 | 8.82 | 11.41 |
| 3 | 18.71 | 24.68 ± 2.68 | 3.04 | 2.89 | 4.26 ± 0.33 | 3.84 ± 0.46 | 5.73 | 6.35 | 6.43 | 5.77 | 5.91 | 9.82 |
| 4 | 25.24 | 38.04 ± 1.68 | 3.09 | 2.85 | 5.11 ± 0.21 | 4.47 ± 0.33 | 8.53 | 9.13 | 8.51 | 5.57 | 10.52 | 16.09 |
| 5 | 28.41 | 59.33 ± 2.87 | 3.51 | 2.95 | 4.89 ± 0.21 | 5.03 ± 0.41 | 8.99 | 9.74 | 11.80 | 4.37 | 11.62 | 27.19 |
| 6 | 16.14 | 19.36 ± 17.88 | 1.52 | 1.35 | 2.65 ± 0.63 | 3.73 ± 0.70 | 5.41 | 8.67 | 5.17 | 3.29 | 5.49 | 16.37 |
| 7 | 14.00 | 58.08 ± 3.73 | 2.21 | 1.75 | 3.57 ± 0.42 | 5.46 ± 0.37 | 9.33 | 10.11 | 10.65 | 2.95 | 12.08 | 22.19 |
| 8 | 31.55 | 49.70 ± 12.61 | 2.25 | 2.00 | 4.03 ± 0.50 | 5.15 ± 0.94 | 7.86 | 9.35 | 9.64 | 3.64 | 10.59 | 21.62 |
| 9 | 18.17 | 27.07 ± 5.78 | 2.50 | 2.50 | 4.08 ± 0.34 | 3.91 ± 0.44 | 6.00 | 6.55 | 6.92 | 4.33 | 6.28 | 9.55 |

Note. PRM: Probabilistic RoadMap; da/dt: derivative of the acceleration.
T A B L E 3 Execution time (in seconds) of the different steps of our trajectory generation approach, using the RRT* obstacle‐free path planner

| #Traj. | Avg. N | Planning obs.‐free | Path smoothing | Speed planning, ours | Speed planning, Alg. (Hoffmann et al., 2008) | Overall ± 3σ |
|---|---|---|---|---|---|---|
| 1 | 57 | 1.0086 | 0.0988 | 0.1663 | 0.0006 | 1.3132 ± 0.1414 |
| 2 | 35 | 1.0108 | 0.0447 | 0.0369 | 0.0003 | 1.1305 ± 0.0693 |
| 3 | 35 | 1.0103 | 0.0598 | 0.0352 | 0.0003 | 1.1429 ± 0.0624 |
| 4 | 53 | 1.0103 | 0.1019 | 0.1283 | 0.0004 | 1.2790 ± 0.1119 |
| 5 | 85 | 1.0110 | 0.1846 | 0.5512 | 0.0007 | 1.7862 ± 0.4205 |
| 6 | 39 | 1.0405 | 0.0718 | 0.1207 | 0.0003 | 1.2711 ± 0.9011 |
| 7 | 83 | 1.0105 | 0.1761 | 0.5110 | 0.0007 | 1.7369 ± 0.3989 |
| 8 | 74 | 1.0106 | 0.1503 | 0.3836 | 0.0006 | 1.5836 ± 0.6583 |
| 9 | 38 | 1.0116 | 0.0620 | 0.0487 | 0.0003 | 1.1605 ± 0.1085 |

Note. RRT: Rapidly exploring Random Tree. Avg. N is the resulting average number of waypoints for each query.
T A B L E 4 Execution time (in seconds) of the different steps of our trajectory generation approach, using the PRM* obstacle‐free path planner

| #Traj. | Avg. N | Planning obs.‐free | Path smoothing | Speed planning, ours | Speed planning, Alg. (Hoffmann et al., 2008) | Overall ± 3σ |
|---|---|---|---|---|---|---|
| 1 | 57 | 1.0558 | 0.0928 | 0.1527 | 0.0005 | 1.3371 ± 0.1536 |
| 2 | 35 | 1.0477 | 0.0432 | 0.0366 | 0.0003 | 1.1646 ± 0.0854 |
| 3 | 35 | 1.1189 | 0.0564 | 0.0336 | 0.0003 | 1.2461 ± 0.1146 |
| 4 | 53 | 1.0298 | 0.0997 | 0.1220 | 0.0004 | 1.2869 ± 0.0977 |
| 5 | 83 | 1.0624 | 0.1809 | 0.4936 | 0.0006 | 1.7765 ± 0.3199 |
| 6 | 27 | 1.2131 | 0.0419 | 0.0264 | 0.0002 | 1.3179 ± 0.3177 |
| 7 | 82 | 1.0433 | 0.1515 | 0.4654 | 0.0006 | 1.6982 ± 0.3267 |
| 8 | 71 | 1.1049 | 0.1391 | 0.3063 | 0.0006 | 1.5887 ± 0.3288 |
| 9 | 38 | 1.2492 | 0.0591 | 0.0466 | 0.0003 | 1.3903 ± 0.2201 |

Note. PRM: Probabilistic RoadMap. Avg. N is the resulting average number of waypoints for each query.
F I G U R E 1 0 Scene1—Obstacle map of size 105 m × 75 m generated onboard in ≈2.75 min from 56 images, from left to right: “Google Earth
©2015” image with red contour around the effectively mapped area, the SfM sparse model, the surface model, and the Octomap obstacle map
with 1 m resolution. SfM: Structure from Motion [Color figure can be viewed at wileyonlinelibrary.com]
F I G U R E 1 1 Scene2—Obstacle map of size 103 m × 75 m generated onboard in ≈2.75 min from 35 images, from left to right: “Google Earth
©2015” image with red contour around the effectively mapped area, the SfM sparse model, the surface model, and the Octomap obstacle map
with 1 m resolution. SfM: Structure from Motion [Color figure can be viewed at wileyonlinelibrary.com]
In most cases, the target point is reachable and the trajectory planning algorithm is able to find at least one valid solution in the allotted time. Therefore, the resulting trajectory is the best solution that was found in 1 s. The utilized Octomap resolution is 0.25 m. The trajectory planner was set up with a maximum speed of 20.0 m/s, and maximum horizontal, upward, and downward accelerations of 0.5g = 4.91 m/s², 0.45g = 4.41 m/s², and 0.4g = 3.92 m/s², respectively.

5.2.1 | Comparison of speed and acceleration plans with and without velocity smoothing

In this section, we compare our speed planning method (see Section 4.2.3) to the one proposed by Hoffmann et al. (2008), from which our method was inspired. The result of running both methods on two trajectories is shown in Figures 6 and 7.

The following information is shown in these figures: (top) visualization of the planned trajectory with the Octomap displayed color‐coded for altitude from low (pink) to high (blue, green, and red). The (red) raw path planned by the state‐of‐the‐art trajectory planner is shown along with the consecutive path shortening and smoothing steps resulting in the (white) smoothed path. The semitransparent lines show the velocity and acceleration plans. The next two plots, inspecting the figure from top to bottom, show the trajectory path in 3D: at the (left), the waypoints are shown in green, along with the speed and the acceleration plans in blue and red, respectively; at the (right), the trajectory path is displayed color‐coded for the radius of curvature, where black denotes parts of the trajectory estimated to be straight. The remaining four plots are grouped in vertical pairs, where the (top) plot shows the final along‐track speed plan in red, with the maximum desired velocity in green; and the (bottom) plot shows the along‐track, cross‐track, and total accelerations in red, blue, and magenta, respectively, along with the derivative of the acceleration in green. The maximum velocity constraint is a combination of the maximum velocity and the maximum acceleration constraints through the calculated radius of curvature and the required centripetal acceleration.
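To make the curvature-based velocity constraint above concrete, the following sketch computes a per-waypoint speed bound from a discrete path: the radius of curvature is estimated from three consecutive waypoints, and the speed is capped so that the required centripetal acceleration stays within the lateral acceleration limit. The three-point radius estimate and the parameter names are illustrative assumptions; they are not taken from the planner's actual implementation.

```python
import numpy as np

def radius_of_curvature(p_prev, p, p_next):
    """Radius of the circle through three consecutive waypoints of an (N, 3) path
    (np.inf when the points are nearly collinear, i.e., a straight segment)."""
    a = np.linalg.norm(p - p_prev)
    b = np.linalg.norm(p_next - p)
    c = np.linalg.norm(p_next - p_prev)
    twice_area = np.linalg.norm(np.cross(p - p_prev, p_next - p_prev))
    if twice_area < 1e-9:
        return np.inf
    return (a * b * c) / (2.0 * twice_area)

def curvature_limited_speed(path, v_max=20.0, a_lat_max=4.91):
    """Per-waypoint speed bound v <= sqrt(a_lat_max * r), capped by the global
    maximum speed (values taken from the planner configuration above)."""
    limits = np.full(len(path), float(v_max))
    for i in range(1, len(path) - 1):
        r = radius_of_curvature(path[i - 1], path[i], path[i + 1])
        limits[i] = min(v_max, float(np.sqrt(a_lat_max * r)))
    return limits
```

For a straight segment the bound reduces to the configured maximum speed, while tight turns are slowed down according to v = sqrt(a_lat · r).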
F I G U R E 1 2 Scene3—Obstacle map of size 106 m × 75 m generated onboard in ≈ 2.75 min from 52 images, from left to right: image with red
contour around the effectively mapped area, the SfM sparse model, the surface model, and the Octomap obstacle map with 1 m resolution. SfM:
Structure from Motion [Color figure can be viewed at wileyonlinelibrary.com]
F I G U R E 1 3 Detail of smaller elements that are well reconstructed and included into the georeferenced overview obstacle map: four cars
from scene1; a pylon with solar panels and a structure from scene2; and the hut, debris, and a car from scene3 [Color figure can be viewed at
wileyonlinelibrary.com]
In the context of the speed and acceleration plans, on the (left), the result of applying the velocity smoothing optimization, and on the (right), the result of applying only the velocity_plan_sweep_double_pass, our implementation of the algorithm by Hoffmann et al. (2008), are presented. When compared with the method proposed by Hoffmann et al. (2008), our velocity smoothing approach results in a slightly higher trajectory traversal time with similar values for the maximum velocity, but with much more feasible velocity and acceleration plans and with bounded and more continuous values for the acceleration derivative.

In Figure 8, a comparison of the resulting speed and acceleration plans when applying a varying number of velocity smoothing passes is shown. The plots have the same meaning as each pair of vertical plots in the lower part of Figures 6 and 7. As shown, for a given set of the smoothing strength parameters {λ1, λ2, λ3}, increasing the number of reruns of the smoothing spline optimization results in smoother speed plans, with lower maximum values for the acceleration derivative.

The discussed results (see Figures 6–8) showcase the importance of utilizing our velocity smoothing approach and show that, by applying an increasing number of smoothing passes, the derivative of the acceleration becomes more continuous and achieves consecutively lower absolute values. Based on these results, in the rest of the evaluation and experiments, we use 10 passes for the speed plan smoothing.
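The two-stage idea discussed above can be sketched as follows: a forward/backward sweep in the spirit of Hoffmann et al. (2008) first makes the speed plan feasible with respect to the along-track acceleration limit, and a number of subsequent smoothing passes then reduce the derivative of the acceleration without ever exceeding the swept plan. This is only an illustration of the mechanism; it does not reproduce the paper's smoothing-spline formulation or its {λ1, λ2, λ3} weights.

```python
import numpy as np

def sweep_double_pass(v_limit, ds, a_max):
    """Forward/backward sweep over a curvature-limited speed profile (len N),
    with ds the N-1 segment lengths: cap the speed so that the along-track
    acceleration between consecutive samples obeys v_next^2 <= v^2 + 2*a_max*ds."""
    v = v_limit.astype(float)
    for i in range(1, len(v)):                  # forward pass: acceleration limit
        v[i] = min(v[i], np.sqrt(v[i - 1] ** 2 + 2.0 * a_max * ds[i - 1]))
    for i in range(len(v) - 2, -1, -1):         # backward pass: deceleration limit
        v[i] = min(v[i], np.sqrt(v[i + 1] ** 2 + 2.0 * a_max * ds[i]))
    return v

def smooth_speed_plan(v_feasible, n_passes=10, weight=0.25):
    """Repeatedly relax each interior sample toward its neighbours while never
    exceeding the acceleration-feasible plan; more passes give a smoother plan
    (lower acceleration derivative) at a small cost in traversal time."""
    v = v_feasible.astype(float)
    for _ in range(n_passes):
        relaxed = v[1:-1] + weight * (v[:-2] - 2.0 * v[1:-1] + v[2:])
        v[1:-1] = np.minimum(relaxed, v_feasible[1:-1])
    return v
```

With n_passes = 10, as used in the rest of the evaluation, the interior of the plan becomes noticeably smoother while the start and end speeds are left untouched.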
F I G U R E 1 4 Quantitative accuracy evaluation of our mapping method against dense point‐clouds obtained using Pix4D. The top image corresponds to scene1, the middle to scene2, and the bottom to scene3. The reconstruction error is calculated as the distance between each dense point and the onboard calculated mesh model. The dense point‐clouds are color‐coded according to the reconstruction error, where the color‐to‐distance correspondence is shown in the histograms. The reconstruction error distribution is shown graphically by the histograms in the figure and numerically in Table 5 [Color figure can be viewed at wileyonlinelibrary.com]
F I G U R E 1 5 Overview obstacle map environment used by the trajectory planner in our autonomous navigation experimental flight. The map of size 55 m × 170 m × 40 m was generated from 195 images with a resolution of 4,912 × 3,264 pixels (higher than the resolution used onboard), using our method (see Section 3.1) with the same parameter configuration used for onboard real‐time execution. Displayed voxel‐grid resolution: 0.50 m [Color figure can be viewed at wileyonlinelibrary.com]
5.2.2 | Performance benchmarking of the trajectory planner

The capabilities of our trajectory planning approach are evaluated by testing it on nine different queries repeatedly on the synthetic obstacle map, as shown in Figure 9. Each query corresponds to a given initial and target point. This evaluation was executed directly on the drone's onboard computer, an Nvidia Jetson TK1 development board. In order for the repeated evaluation not to be dependent on the internal state of the planner, which is relevant when using the PRM* algorithm, the planner is reinitialized after each round of querying it for the nine trajectories. This process was repeated 100 times using the PRM* and the RRT* algorithms, resulting in the performance statistics shown in Tables 1–4.

The calculated performance parameters for the trajectories and speed plans are shown in Tables 1 and 2, and for the execution times in Tables 3 and 4. Each row of the tables shows the following performance statistics for the corresponding trajectory query: regarding the path length, the direct distance from initial to target point and the average trajectory length with its corresponding 3σ uncertainty; regarding the path clearance, the mean of the minimum clearance, the minimum overall clearance for this query, and the resulting mean path clearance with its corresponding 3σ uncertainty; regarding the planned velocity, the mean traversal velocity with its corresponding 3σ uncertainty, the mean maximum velocity, and the maximum overall velocity for this query; regarding the traversal time, its mean for each query and the relative increase of the traversal time required by the speed plan smoothing, which effectively compares our speed plan with that resulting from the approach by Hoffmann et al. (2008); and regarding the acceleration derivative, the mean of its maximum value and its overall maximum value for each query. The execution times for each main subpart of our trajectory planning approach are shown in Tables 3 and 4, where the speed plan calculation time for the algorithm by Hoffmann et al. (2008) is also shown, along with the resulting average number of waypoints for each query.

The results (see Tables 1 and 2) show that, for queries outside buildings with enough free space for the path, both obstacle‐free planning algorithms, PRM* and RRT*, provide similar performance and also result in comparable metrics for the speed plan. The biggest differences are to be expected in trajectory queries that require navigation through narrow spaces, such as 6, 7, and 8 (see Figure 9), and they effectively occur in Queries 6 and 8. In these two queries, the PRM* algorithm provides on average shorter trajectories with a smaller standard deviation in length, therefore also resulting in faster traversal times. In these cases, the clearance of the trajectory is also better with the PRM*, as shown by the lower values of its standard deviation. The overall mean velocity of the traversal of the planned trajectories ranges between 4 and 5.5 m/s, with maximum speed values of up to 10 m/s. In all the trajectory queries, we only incur an increased path traversal time of 3–6% when comparing our speed plan smoothing result with the corresponding result from the Hoffmann et al. (2008) algorithm. The only trajectory queries that show low values for clearance are 6, 7, and 8, which require the drone to fly inside an area with a width of 5.25 m, with a maximum achievable clearance in these regions of 2.62 m. Regarding the clearance of Query 9, the initial point of this trajectory has a clearance of 2.5 m.
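As an illustration of how the per-query statistics reported in Tables 1 and 2 can be aggregated from the 100 repeated planning runs, the sketch below collects one record per run and reduces it to the tabulated quantities (mean ± 3σ, mean and minimum of the minimum clearance, and so on). The record field names are illustrative and do not correspond to identifiers in the actual evaluation code.

```python
import numpy as np

def aggregate_query_stats(runs):
    """Reduce repeated planning runs of one query (e.g., 100 repetitions) to the
    statistics reported per table row; each run is a dict such as
    {"length": ..., "min_clearance": ..., "max_velocity": ..., "traversal_time": ...}."""
    length = np.array([r["length"] for r in runs])
    min_clear = np.array([r["min_clearance"] for r in runs])
    max_vel = np.array([r["max_velocity"] for r in runs])
    time = np.array([r["traversal_time"] for r in runs])
    return {
        "length_mean": length.mean(), "length_3sigma": 3.0 * length.std(),
        "clearance_mean_of_min": min_clear.mean(),
        "clearance_min_of_min": min_clear.min(),
        "velocity_mean_of_max": max_vel.mean(),
        "velocity_max_of_max": max_vel.max(),
        "traversal_time_mean": time.mean(),
    }
```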
F I G U R E 1 7 Experimental flight trajectory 1—(top) Visualization of the planned trajectory, (middle‐left) trajectory path in 3D, (bottom‐left)
path color‐coded with the radius of curvature, (middle‐right) speed, and (bottom‐right) acceleration plans [Color figure can be viewed at
wileyonlinelibrary.com]
F I G U R E 1 8 Experimental flight trajectory 2—(top) Visualization of the planned trajectory, (middle‐left) trajectory path in 3D, (bottom‐left)
path color‐coded with the radius of curvature, (middle‐right) speed, and (bottom‐right) acceleration plans [Color figure can be viewed at
wileyonlinelibrary.com]
25–50 ms for the computation of the speed plan. Although the algorithm by Hoffmann et al. (2008) is significantly faster, it results, as discussed in Section 5.2.1, in speed plans that are unfeasible for high velocities. We favor, therefore, the use of our approach because, overall, these execution times are good for our target application in this research study.

5.3 | Evaluation of the overview obstacle map

In this section, we quantitatively evaluate the quality of the overview obstacle maps obtained using our method, described in Section 3.1, when executed during flight onboard the drone. To acquire these maps, the drone autonomously flew a regular survey flight trajectory. The drone navigates through the survey trajectory by using the algorithms described in Sections 4.1 and 4.2, and images are processed on‐the‐fly to generate the overview obstacle map using the algorithm explained in Section 3.

The regular survey flight trajectory is generated based on two configuration parameters, the desired map generation time and the GPS corners of the area of interest. The number of images that can be acquired is calculated from the approximate processing time per image required by the onboard computer. The image acquisition positions for the creation of the overview map are distributed along a horizontal grid at a constant height and spaced to result in an equal image overlap in both directions of the grid. In this situation, for a given camera (with calibrated focal length and intrinsic parameters), the resulting image overlap depends on the height and the number of images. Therefore, the height of the grid is calculated from the desired image overlap (see the sketch below). Additionally, a minimum height of 34 m and a maximum height of 100 m for the survey flight are enforced as a safety measure.

F I G U R E 1 9 Experimental flight trajectory 3—(top) Visualization of the planned trajectory, (middle‐left) trajectory path in 3D, (bottom‐left) path color‐coded with the radius of curvature, (middle‐right) speed, and (bottom‐right) acceleration plans [Color figure can be viewed at wileyonlinelibrary.com]

The evaluation is performed on three maps generated in different areas: scene1 (Figure 10), scene2 (Figure 11), and scene3 (Figure 12). Each of these figures shows an overview image of the area with a red contour around the effectively mapped area, and the corresponding SfM sparse model, the surface model, and the resulting Octomap obstacle map at 1 m resolution. The sparse model consists of the camera positions, shown with camera frustums, and the triangulated points and lines.

The mapping operations took around 2.75 min overall, onboard and during flight. In all, 35–56 grayscale images with a resolution of 1,280 × 720 pixels and 85% vertical and 72% horizontal overlaps were acquired and processed to generate the georeferenced map models, resulting in maps with an approximate size of 105 m × 75 m.

In the context of the qualities of the three reconstructions, the following elements are usually well represented in the map:

• Some example elements are shown zoomed in Figure 13.
• Scene1—Figure 10: big buildings, cars, man‐made objects with visually distinguishable lines, road edges, and, to a certain extent, grass and part of the trees.
• Scene2—Figure 11: a building, man‐made structures with visually distinguishable lines, road edges, and grass.
• Scene3—Figure 12: woodland area, human‐made debris, cars, and, to a certain extent, grass and part of the trees.
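The survey-grid geometry described above can be sketched with standard nadir-survey relations under a pinhole-camera assumption: the image budget follows from the desired map generation time and the per-image processing time, and the flight height is then chosen, within the 34–100 m safety limits, so that the requested overlaps are met over the area of interest. Function and parameter names are illustrative; the onboard implementation is not reproduced here.

```python
import math

def plan_survey_height(area_w, area_h, map_time_budget_s, time_per_image_s,
                       fov_x_rad, fov_y_rad, overlap_x=0.72, overlap_y=0.85,
                       h_min=34.0, h_max=100.0):
    """Pick the lowest flight height whose nadir image grid fits the image budget."""
    n_images_budget = int(map_time_budget_s / time_per_image_s)

    def images_needed(h):
        # Ground footprint of one image at height h, and grid spacing for the
        # requested overlaps in both grid directions.
        dx = 2.0 * h * math.tan(fov_x_rad / 2.0) * (1.0 - overlap_x)
        dy = 2.0 * h * math.tan(fov_y_rad / 2.0) * (1.0 - overlap_y)
        return math.ceil(area_w / dx) * math.ceil(area_h / dy)

    h = h_min
    while h < h_max and images_needed(h) > n_images_budget:
        h += 1.0  # climb (coarse 1 m steps) until the grid fits the budget
    return min(h, h_max), n_images_budget
```

Lower heights give a finer map but require more images; the loop therefore trades ground resolution against staying within the desired map generation time.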
F I G U R E 2 0 Experimental flight trajectory 4—(top) Visualization of the planned trajectory, (middle‐left) trajectory path in 3D, (bottom‐left)
path color‐coded with the radius of curvature, (middle‐right) speed, and (bottom‐right) acceleration plans [Color figure can be viewed at
wileyonlinelibrary.com]
We assess the accuracy of our onboard mesh models against a dense point‐cloud reconstruction that was obtained using the photogrammetry software Pix4D. To perform the accuracy assessment calculation, the point‐cloud was first registered to the mesh model using the Iterative Closest Point (ICP) algorithm; second, cropped along the borders, because the area of interest is in the middle of the model; and third, the distance of each dense point to the mesh was calculated, which is used as an estimate of the reconstruction error. In this calculation, the error was saturated to 2 m so that its distribution, shown in the histograms, can be better appreciated. The registration of the dense point‐cloud to the mesh model is performed to extract the slight differences in the georeferenciation results, which would otherwise affect our accuracy assessment. The reconstruction error between these high‐quality point‐clouds and their corresponding onboard calculated mesh models is shown graphically in Figure 14 and numerically in Table 5. The fraction of dense points that, for each model, have an error of 0.5 m or less is: 85.2% for scene1, 93.2% for scene2, and 88.7% for scene3.

From the accuracy evaluation, we can assess that there are some areas that tend not to be well mapped by our onboard overview mapping solution, due to the fact that our mesh is derived from a sparse 3D model. Some examples of challenging regions of the scene are:
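The per-point reconstruction error described above can be sketched with an off-the-shelf mesh library; here trimesh is used as an assumed tool (the paper does not state which implementation was used), and the ICP registration and border cropping are assumed to have been applied to the dense point-cloud beforehand.

```python
import numpy as np
import trimesh

def reconstruction_error(mesh_path, dense_points, saturate_at=2.0):
    """Distance from each dense (Pix4D) point to the onboard mesh model,
    saturated at 2 m as in the histograms of Figure 14; dense_points is (M, 3)."""
    mesh = trimesh.load(mesh_path)
    # Closest point on the mesh surface for every dense point.
    _, distances, _ = trimesh.proximity.closest_point(mesh, dense_points)
    distances = np.minimum(distances, saturate_at)
    fraction_within_half_metre = float(np.mean(distances <= 0.5))
    return distances, fraction_within_half_metre
```

Applied to the three scenes, the fraction within 0.5 m corresponds to the 85.2%, 93.2%, and 88.7% values reported above.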
interface that runs on a laptop and allows a user to teleoperate the drone by using what we term a point‐and‐click interface. By clicking on a point of the obstacle map, the drone is commanded to navigate to a waypoint 1.5 m over the clicked point. Therefore, the user is able to teleoperate the drone from a laptop, connected through WiFi, and issue the following commands: takeoff, point‐and‐click navigation, stop, and land. The onboard computer is configured appropriately, so that all the onboard ROS modules are

T A B L E 6 Trajectory planning and control performance in our autonomous navigation experiment

| #Traj. | Path length (m) | Execution time (s), UI + plan | Execution time (s), navigation | Speed (m/s), avg. | Speed (m/s), max. |
|---|---|---|---|---|---|
| 1 | 77.24 | 2.52 | 26.6/25.5 | 2.90 | 4.00 |
| 2 | 43.53 | 1.66 | 15.7/14.9 | 2.77 | 4.00 |
| 3 | 44.96 | 2.09 | 15.6/14.9 | 2.88 | 4.00 |
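A minimal sketch of the point-and-click command path described above: a click on the obstacle map becomes a navigation waypoint 1.5 m above the clicked point, and the ground-station UI forwards one of the four supported commands to the onboard planner. The Command names and the send_command callback are illustrative placeholders, not the actual ROS interfaces.

```python
from enum import Enum

class Command(Enum):
    TAKEOFF = "takeoff"
    POINT_AND_CLICK_NAVIGATION = "point_and_click_navigation"
    STOP = "stop"
    LAND = "land"

def waypoint_from_click(clicked_point, height_above=1.5):
    """Waypoint placed 1.5 m above the clicked obstacle-map point (x, y, z in metres)."""
    x, y, z = clicked_point
    return (x, y, z + height_above)

def handle_click(clicked_point, send_command):
    """Forward a navigation command over the ground-station link (e.g., WiFi)."""
    send_command(Command.POINT_AND_CLICK_NAVIGATION, waypoint_from_click(clicked_point))
```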
constrained onboard computational power of our drone, a map of size 105 m × 75 m is acquired in ≈2.75 min. In the experiments, we quantitatively evaluated the accuracy of the acquired maps. We demonstrated the usability of the generated obstacle map in an autonomous flight experiment. The georeferenciation of the map is performed using only GPS measurements, which offers a general solution for search and rescue scenarios. The performed georeferenciation provides an absolute accuracy up to the positioning precision of the onboard GPS sensor.

Our experiments demonstrate the capabilities for an autonomous drone to acquire the obstacle map of a moderately sized area using a vision‐based method and, by using the acquired map, to immediately perform near‐ground obstacle‐free navigation in that area. The overview obstacle map is up‐to‐date, and through it we can generate paths away from obstacles. In contrast to pure reactive obstacle avoidance or to trajectory planning with an outdated, potentially invalid map, the up‐to‐date overview map should enable an overall increased mission execution efficiency. The reason for this is that the availability of the overview map during an autonomous mission allows the robot to focus on direct task objectives rather than on exploration. Although onboard mapping approaches will continue to be improved, our proposed vision‐based approach represents a step forward in the fast deployment of autonomous drones in unknown outdoor environments.

7 | FUTURE WORK

There are still regions of the scene which are challenging to reconstruct using point‐based SfM. A possible approach to deal with them is to densify the point‐cloud, for example, by using patch‐based multi‐view stereo (PMVS; Furukawa & Ponce, 2010), SURE (Rothermel, Wenzel, Fritsch, & Haala, 2012), PlaneSweepLib (PSL; Häne, Heng, Lee, Sizov, & Pollefeys, 2014), or the work by Shekhovtsov, Reinbacher, Graber, and Pock (2016). Oftentimes, the usage of these algorithms requires more computation power than is available on current onboard computers of drones, which motivates the development of faster algorithms for the map densification.

Another approach is the utilization of view‐planning methods (Roberts et al., 2017; Schmid et al., 2012); for instance, the approach of Mostegel, Rumpler, Fraundorfer, and Bischof (2016) results in a set of images adapted to the challenging elements present in the scene, for example, by providing short‐baseline images for vegetation.

In our approach, the drone is localized in the overview obstacle map by using GPS + IMU fusion. This poses a problem for collision avoidance because a typical GPS + IMU state estimate can drift a few meters, particularly for longer flight times and during navigation close to the ground and among buildings. The localization precision against the overview map can be further improved by additionally fusing vision‐based localization methods. However, feature‐based matching between images acquired at a large and at a close distance is challenging and thus provides options for future research. An enhanced relocalization capability would also provide improvements for collaborative mapping, map reuse, and map sharing between different robots and devices.

Next to our method, maps are also generated using SLAM approaches that combine a VO or VIO front‐end with a bundle adjustment back‐end (e.g., Forster, Lynen, Kneip, & Scaramuzza, 2013, 2017; Schmuck & Chli, 2017; T. Schneider et al., 2018). In the context of overview obstacle maps, a detailed benchmark comparison of the mapping accuracy and processing time requirements of our SfM‐based method against those algorithms would be of interest.

The experimental focus of this study was chosen to support the main contribution, the mapping task. The further benchmarking of the trajectory planner in experiments and simulation, for instance, based on the benchmarks of Mettler, Kong, Goerzen, and Whalley (2010) and Nous, Meertens, Wagter, and de Croon (2016), is left as future work.

ACKNOWLEDGMENTS

This study has been supported by the Austrian Science Fund (FWF) project V‐MAV (I‐1537), the Austrian Research Promotion Agency (FFG) project FreeLine (Bridge1/843450), and OMICRON electronics GmbH. The authors would like to thank the hardware donation from DJI, granted as a part of the Graz Griffins team participation in the 2016 DJI Developer Challenge.

ORCID

REFERENCES

Agarwal, S., Mierle, K., & others (2012). Ceres solver. Retrieved from http://ceres‐solver.org
AUVSI Association (2018). International Aerial Robotics Competition (IARC)—Past missions—Website. Retrieved from http://www.aerialroboticscompetition.org/pastmissions.php Last checked: 29.11.2018; original: 2018.
Bachrach, A., He, R., & Roy, N. (2009). Autonomous flight in unstructured and unknown indoor environments. 2009 European Micro Air Vehicle Conference and Flight Competition, Delft, The Netherlands, pp. 21–28.
Bachrach, A., Prentice, S., He, R., & Roy, N. (2011). RANGE‐Robust autonomous navigation in GPS‐denied environments. Journal of Field Robotics (JFR), 28(5), 644–666.
Bloesch, M., Omari, S., Hutter, M., & Siegwart, R. (2015). Robust visual inertial odometry using a direct EKF‐based approach. 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 298–304.
Burri, M. (2015). RotorS' synthetic industrial environment Octomap. Retrieved from https://github.com/ethz‐asl/rotors_simulator/blob/master/rotors_gazebo/resource/power_plant.bt Last checked: 25.09.2017; original: 27.03.2015.
Burri, M., Oleynikova, H., Achtelik, M. W., & Siegwart, R. (2015). Real‐time visual‐inertial mapping, re‐localization and planning onboard MAVs in unknown environments. 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1872–1878.
Chen, J., Liu, T., & Shen, S. (2016). Online generation of collision‐free trajectories for quadrotor flight in unknown cluttered environments. 2016 IEEE International Conference on Robotics and Automation (ICRA), pp. 1476–1483.
Concha, A., & Civera, J. (2015). DPPTAM: Dense piecewise planar tracking and mapping from a monocular sequence. 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5686–5693.
Daftry, S., Maurer, M., Wendel, A., & Bischof, H. (2013). Flexible and user‐centric camera calibration using planar fiducial markers. 2013 British Machine Vision Conference (BMVC).
Delmerico, J., Giusti, A., Mueggler, E., Gambardella, L. M., & Scaramuzza, D. (2016). "On‐the‐Spot Training" for terrain classification in autonomous air‐ground collaborative teams. 2016 International Symposium on Experimental Robotics (ISER), pp. 574–585. Piscataway, NJ: IEEE. https://doi.org/10.1109/LRA.2017.2651163
Delmerico, J., Mueggler, E., Nitsch, J., & Scaramuzza, D. (2017). Active autonomous aerial exploration for ground robot path planning. IEEE Robotics and Automation Letters (RA‐L), 2(2), 664–671.
Engel, J., Koltun, V., & Cremers, D. (2018). Direct sparse odometry. IEEE Transactions on Pattern Analysis and Machine Intelligence (T‐PAMI), 40(3), 611–625.
Engel, J., Schöps, T., & Cremers, D. (2014). LSD‐SLAM: Large‐scale direct monocular SLAM. 2014 European Conference on Computer Vision (ECCV), pp. 834–849.
Faessler, M., Fontana, F., Forster, C., Mueggler, E., Pizzoli, M., & Scaramuzza, D. (2016). Autonomous, vision‐based flight and live dense 3D mapping with a quadrotor micro aerial vehicle. Journal of Field Robotics (JFR), 33(4), 431–450.
Fankhauser, P., Bloesch, M., Gehring, C., Hutter, M., & Siegwart, R. (2014). Robot‐centric elevation mapping with uncertainty estimates. 2014 International Conference on Climbing and Walking Robots (CLAWAR), pp. 433–440.
Forster, C., Faessler, M., Fontana, F., Werlberger, M., & Scaramuzza, D. (2015). Continuous on‐board monocular‐vision‐based elevation mapping applied to autonomous landing of micro aerial vehicles. 2015 IEEE International Conference on Robotics and Automation (ICRA), pp. 111–118.
Forster, C., Lynen, S., Kneip, L., & Scaramuzza, D. (2013). Collaborative monocular SLAM with multiple micro aerial vehicles. 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3962–3970.
Forster, C., Pizzoli, M., & Scaramuzza, D. (2014). SVO: Fast semi‐direct monocular visual odometry. 2014 IEEE International Conference on Robotics and Automation (ICRA).
Forster, C., Zhang, Z., Gassner, M., Werlberger, M., & Scaramuzza, D. (2017). SVO: Semidirect visual odometry for monocular and multicamera systems. IEEE Transactions on Robotics (T‐RO), 33(2), 249–265.
Furgale, P., Rehder, J., & Siegwart, R. (2013). Unified temporal and spatial calibration for multi‐sensor systems. 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
Furrer, F., Burri, M., Achtelik, M., & Siegwart, R. (2016). RotorS—A modular Gazebo MAV simulator framework. In Koubaa, A. (Ed.), Robot Operating System (ROS): The Complete Reference (Vol. 1, pp. 595–625). Cham, Switzerland: Springer International Publishing.
Furukawa, Y., & Ponce, J. (2010). Accurate, dense, and robust multiview stereopsis. IEEE Transactions on Pattern Analysis and Machine Intelligence (T‐PAMI), 32(8), 1362–1376.
Geyer, M. S., & Johnson, E. N. (2006). 3D obstacle avoidance in adversarial environments for unmanned aerial vehicles. 2006 AIAA Guidance, Navigation, and Control Conference and Exhibit (AIAA GN&C).
Greer, D., McKerrow, P., & Abrantes, J. (2002). Robots in urban search and rescue operations. 2002 Australasian Conference on Robotics and Automation (ACRA), pp. 27–29.
Häne, C., Heng, L., Lee, G. H., Sizov, A., & Pollefeys, M. (2014). Real‐time direct dense matching on fisheye images using plane‐sweeping stereo. 2014 International Conference on 3D Vision (3DV), Vol. 1, pp. 57–64.
Heng, L., Lee, G. H., Fraundorfer, F., & Pollefeys, M. (2011). Real‐time photo‐realistic 3D mapping for micro aerial vehicles. 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4012–4019.
Hofer, M., Maurer, M., & Bischof, H. (2017). Efficient 3D scene abstraction using line segments. Computer Vision and Image Understanding (CVIU), 157, 167–178.
Hoffmann, G. M., Waslander, S. L., & Tomlin, C. J. (2008). Quadrotor helicopter trajectory tracking control. 2008 AIAA Guidance, Navigation and Control Conference and Exhibit (AIAA GN&C).
Hoppe, C., Klopschitz, M., Donoser, M., & Bischof, H. (2013). Incremental surface extraction from sparse structure‐from‐motion point clouds. 2013 British Machine Vision Conference (BMVC), Vol. 94‐1.
Hoppe, C., Klopschitz, M., Rumpler, M., Wendel, A., Kluckner, S., Bischof, H., & Reitmayr, G. (2012). Online feedback for structure‐from‐motion image acquisition. 2012 British Machine Vision Conference (BMVC), 2, 6.
Hornung, A., Wurm, K. M., Bennewitz, M., Stachniss, C., & Burgard, W. (2013). Octomap: An efficient probabilistic 3D mapping framework based on octrees. Autonomous Robots, 34(3), 189–206.
James, G., Witten, D., Hastie, T., & Tibshirani, R. (2013). An introduction to statistical learning: With applications in R. London: Springer Publishing Company Incorporated.
Johnson, E., Mooney, J., & Christophersen, H. (2013). Fourteen years of autonomous rotorcraft research at the Georgia Institute of Technology. 2013 2nd Asian/Australian Rotorcraft Forum and the 4th International Basic Research Conference on Rotorcraft Technology, pp. 229–238.
Karaman, S., & Frazzoli, E. (2011). Sampling‐based algorithms for optimal motion planning. International Journal of Robotics Research (IJRR), 30(7), 846–894.
Karrer, M., Kamel, M., Siegwart, R., & Chli, M. (2016). Real‐time dense surface reconstruction for aerial manipulation. 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1601–1608.
Käslin, R., Fankhauser, P., Stumm, E., Taylor, Z., Mueggler, E., Delmerico, J., … Hutter, M. (2016). Collaborative localization of aerial and ground robots through elevation maps. 2016 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), pp. 284–290.
Kavraki, L. E., Svestka, P., Latombe, J.‐C., & Overmars, M. H. (1996). Probabilistic roadmaps for path planning in high‐dimensional configuration spaces. IEEE Transactions on Robotics and Automation (T‐RA), 12(4), 566–580.
Klein, G., & Murray, D. (2009). Parallel tracking and mapping on a camera phone. 2009 8th IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 83–86.
Kneip, L., Scaramuzza, D., & Siegwart, R. (2011). A novel parametrization of the perspective‐three‐point problem for a direct computation of absolute camera position and orientation. 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
Kondak, K., & Remuß, V. (2007). Multi‐purpose aerial robot vehicles with intelligent navigation (MARVIN)—TU Berlin—Website. Retrieved from http://pdv.cs.tu‐berlin.de/MARVIN/index.html Last checked: 29.11.2018; original: 23.10.2007.
Labatut, P., Pons, J.‐P., & Keriven, R. (2007). Efficient multi‐view reconstruction of large‐scale scenes using interest points, delaunay triangulation and graph cuts. 2007 IEEE 11th International Conference on Computer Vision (ICCV), pp. 1–8.
Lau, B., Sprunk, C., & Burgard, W. (2013). Efficient grid‐based spatial representations for robot navigation in dynamic environments. Robotics and Autonomous Systems (RAS), 61(10), 1116–1130.
Lavalle, S. M. (1998, October). Rapidly‐exploring random trees: A new tool for path planning (Technical Report TR 98‐11). Computer Science Department, Iowa State University.
Leberl, F., Irschara, A., Pock, T., Meixner, P., Gruber, M., Scholz, S., & Scholz, S. (2010). Point clouds: LIDAR versus 3D vision. Photogrammetric Engineering & Remote Sensing, 76(10), 1123–1134.
Leutenegger, S., Furgale, P. T., Rabaud, V., Chli, M., Konolige, K., & Siegwart, R. (2013). Keyframe‐based visual‐inertial SLAM using nonlinear optimization. 2013 Robotics: Science and Systems (RSS).
Leutenegger, S., Lynen, S., Bosse, M., Siegwart, R., & Furgale, P. (2015). Keyframe‐based visual‐inertial odometry using nonlinear optimization. International Journal of Robotics Research (IJRR), 34(3), 314–334.
Lowe, D. G. (2004). Distinctive image features from scale‐invariant keypoints. International Journal of Computer Vision, 60(2), 91–110.
Lynen, S., Sattler, T., Bosse, M., Hesch, J. A., Pollefeys, M., & Siegwart, R. (2015). Get out of my lab: Large‐scale, real‐time visual‐inertial localization. 2015 Robotics: Science and Systems (RSS).
Mellinger, D., & Kumar, V. (2011). Minimum snap trajectory generation and control for quadrotors. 2011 IEEE International Conference on Robotics and Automation (ICRA), pp. 2520–2525.
Mettler, B., Kong, Z., Goerzen, C., & Whalley, M. (2010). Benchmarking of obstacle field navigation algorithms for autonomous helicopters. 2010 66th Forum of the American Helicopter Society (AHS): "Rising to New Heights in Vertical Lift Technology."
Mostegel, C., Rumpler, M., Fraundorfer, F., & Bischof, H. (2016). UAV‐based autonomous image acquisition with multi‐view stereo quality assurance by confidence prediction. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pp. 1–10.
Mourikis, A. I., Trawny, N., Roumeliotis, S. I., Johnson, A. E., Ansar, A., & Matthies, L. (2009). Vision‐aided inertial navigation for spacecraft entry, descent, and landing. IEEE Transactions on Robotics (T‐RO), 25(2), 264–280.
Mur‐Artal, R., Montiel, J. M. M., & Tardós, J. D. (2015). ORB‐SLAM: A versatile and accurate monocular SLAM system. IEEE Transactions on Robotics (T‐RO), 31(5), 1147–1163.
Mur‐Artal, R., & Tardós, J. D. (2015). Probabilistic semi‐dense mapping from highly accurate feature‐based monocular SLAM. 2015 Robotics: Science and Systems (RSS).
Mur‐Artal, R., & Tardós, J. D. (2017). ORB‐SLAM2: An open‐source SLAM system for monocular, stereo, and RGB‐D cameras. IEEE Transactions on Robotics (T‐RO), 33(5), 1255–1262.
Musial, M., Brandenburg, U. W., & Hommel, G. (2000). Cooperative autonomous mission planning and execution for the flying robot MARVIN. 2000 Sixth International Conference on Intelligent Autonomous Systems (IAS‐6), pp. 636–643.
Newcombe, R. A., Izadi, S., Hilliges, O., Molyneaux, D., Kim, D., Davison, A. J., … Fitzgibbon, A. (2011). KinectFusion: Real‐time dense surface mapping and tracking. 2011 10th IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 127–136.
Newcombe, R. A., Lovegrove, S. J., & Davison, A. J. (2011). DTAM: Dense tracking and mapping in real‐time. 2011 IEEE International Conference on Computer Vision (ICCV), pp. 2320–2327.
Nieuwenhuisen, M., & Behnke, S. (2016). Local multiresolution trajectory optimization for micro aerial vehicles employing continuous curvature transitions. 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3219–3224.
Nistér, D. (2004). An efficient solution to the five‐point relative pose problem. IEEE Transactions on Pattern Analysis and Machine Intelligence (T‐PAMI), 26(6), 756–770.
Nous, C., Meertens, R., Wagter, C. D., & de Croon, G. (2016). Performance evaluation in obstacle avoidance. 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3614–3619.
Oleynikova, H., Burri, M., Lynen, S., & Siegwart, R. (2015). Real‐time visual‐inertial localization for aerial and ground robots. 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3079–3085.
Oleynikova, H., Burri, M., Taylor, Z., Nieto, J., Siegwart, R., & Galceran, E. (2016). Continuous‐time trajectory optimization for online UAV replanning. 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 5332–5339.
Oleynikova, H., Taylor, Z., Fehr, M., Siegwart, R., & Nieto, J. (2017). Voxblox: Incremental 3D Euclidean signed distance fields for on‐board MAV planning. 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1366–1373.
Pizzoli, M., Forster, C., & Scaramuzza, D. (2014). REMODE: Probabilistic, monocular dense reconstruction in real time. 2014 IEEE International Conference on Robotics and Automation (ICRA), pp. 2609–2616.
Pradeep, V., Rhemann, C., Izadi, S., Zach, C., Bleyer, M., & Bathiche, S. (2013). Monofusion: Real‐time 3D reconstruction of small scenes with a single web camera. 2013 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 83–88.
Qiu, K., Liu, T., & Shen, S. (2017). Model‐based global localization for aerial robots using edge alignment. IEEE Robotics and Automation Letters (RA‐L), 2(3), 1256–1263.
Richter, C., Bry, A., & Roy, N. (2013). Polynomial trajectory planning for quadrotor flight. 2013 IEEE International Conference on Robotics and Automation (ICRA).
Roberts, M., Dey, D., Truong, A., Sinha, S., Shah, S., Kapoor, A., … Joshi, N. (2017). Submodular trajectory optimization for aerial 3D scanning. 2017 International Conference on Computer Vision (ICCV).
Rooz, N., Johnson, E. N., Wu, A., Christmann, C., Ha, J.‐C., Chowdhary, G., … Proctor, A. (2009). Experience with highly automated unmanned aircraft performing complex missions. 2009 AIAA Guidance, Navigation, and Control Conference (AIAA GN&C).
Rothermel, M., Wenzel, K., Fritsch, D., & Haala, N. (2012). SURE: Photogrammetric surface reconstruction from imagery. 2012 Proceedings LC3D Workshop.
Rumpler, M., Daftry, S., Tscharf, A., Prettenthaler, R., Hoppe, C., Mayer, G., & Bischof, H. (2014). Automated end‐to‐end workflow for precise and geo‐accurate reconstructions using fiducial markers. Photogrammetric Computer Vision at 2014 European Conference on Computer Vision (ECCV). ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences.
Rumpler, M., Tscharf, A., Mostegel, C., Daftry, S., Hoppe, C., Prettenthaler, R., & Bischof, H. (2016). Evaluations on multi‐scale camera networks for precise and geo‐accurate reconstructions from aerial and terrestrial images with user guidance. Computer Vision and Image Understanding (CVIU), 157, 255–273.
Scaramuzza, D., Achtelik, M. C., Doitsidis, L., Fraundorfer, F., Kosmatopoulos, E., Martinelli, A., & Meier, L. (2014). Vision‐controlled micro flying robots. IEEE Robotics & Automation Magazine (RAM), 21(3), 26–40. https://doi.org/10.1109/MRA.2014.2322295
Schenk, F., & Fraundorfer, F. (2017). Combining edge images and depth maps for robust visual odometry. 2017 British Machine Vision Conference (BMVC).
Schmid, K., Hirschmüller, H., Dömel, A., Grixa, I., Suppa, M., & Hirzinger, G. (2012). View planning for multi‐view stereo 3D reconstruction using an autonomous multicopter. Journal of Intelligent & Robotic Systems, 65(1), 309–323.
Schmuck, P., & Chli, M. (2017). Multi‐UAV collaborative monocular SLAM. 2017 IEEE International Conference on Robotics and Automation (ICRA), pp. 3863–3870.
Schneider, J., Eling, C., Klingbeil, L., Kuhlmann, H., Förstner, W., & Stachniss, C. (2016). Fast and effective online pose estimation and mapping for UAVs. 2016 IEEE International Conference on Robotics and Automation (ICRA), pp. 4784–4791.
Schneider, T., Dymczyk, M., Fehr, M., Egger, K., Lynen, S., Gilitschenski, I., & Siegwart, R. (2018). maplab: An open framework for research in visual‐inertial mapping and localization. IEEE Robotics and Automation Letters (RA‐L), 3(3), 1418–1425.
Shekhovtsov, A., Reinbacher, C., Graber, G., & Pock, T. (2016). Solving dense image matching in real‐time using discrete‐continuous optimization. 2016 21st Computer Vision Winter Workshop (CVWW).
Stühmer, J., Gumhold, S., & Cremers, D. (2010). Real‐time dense geometry from a handheld camera. In Goesele, M., Roth, S., Kuijper, A., Schiele, B., & Schindler, K. (Eds.), 2010 Joint Pattern Recognition Symposium (DAGM) (pp. 11–20). Berlin, Heidelberg: Springer.
Şucan, I. A., Moll, M., & Kavraki, L. E. (2012). The open motion planning library. IEEE Robotics & Automation Magazine (RAM), 19(4), 72–82.
Surber, J., Teixeira, L., & Chli, M. (2017). Robust visual‐inertial localization with weak GPS priors for repetitive UAV flights. 2017 IEEE International Conference on Robotics and Automation (ICRA), pp. 6300–6306.
Teixeira, L., & Chli, M. (2016). Real‐time mesh‐based scene estimation for aerial inspection. 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4863–4869.
Teixeira, L., & Chli, M. (2017). Real‐time local 3D reconstruction for aerial inspection using superpixel expansion. 2017 IEEE International Conference on Robotics and Automation (ICRA).
Triggs, B., McLauchlan, P. F., Hartley, R. I., & Fitzgibbon, A. W. (1999). Bundle adjustment—A modern synthesis. 1999 International Workshop on Vision Algorithms (pp. 298–372). Berlin, Heidelberg: Springer.
Usenko, V., von Stumberg, L., Pangercic, A., & Cremers, D. (2017). Real‐time trajectory replanning for MAVs using uniform b‐splines and a 3D circular buffer. 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 215–222.
Vogiatzis, G., & Hernández, C. (2011). Video‐based, real‐time multi view stereo. Image and Vision Computing (IVC), 29(7), 434–441.
Weiss, S., Achtelik, M., Kneip, L., Scaramuzza, D., & Siegwart, R. (2011). Intuitive 3D maps for MAV terrain exploration and obstacle avoidance. Journal of Intelligent & Robotic Systems, 61(1), 473–493.
Weiss, S., Achtelik, M. W., Lynen, S., Chli, M., & Siegwart, R. (2012). Real‐time onboard visual‐inertial state estimation and self‐calibration of MAVs in unknown environments. 2012 IEEE International Conference on Robotics and Automation (ICRA), pp. 957–964.
Wendel, A., Maurer, M., Graber, G., Pock, T., & Bischof, H. (2012). Dense reconstruction on‐the‐fly. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1450–1457.
Whelan, T., Leutenegger, S., Salas‐Moreno, R., Glocker, B., & Davison, A. (2015). ElasticFusion: Dense SLAM without a pose graph. 2015 Robotics: Science and Systems (RSS).
Whelan, T., Salas‐Moreno, R. F., Glocker, B., Davison, A. J., & Leutenegger, S. (2016). ElasticFusion: Real‐time dense SLAM and light source estimation. International Journal of Robotics Research (IJRR), 35(14), 1697–1716. Retrieved from https://github.com/mp3guy/ElasticFusion Last checked: 04.11.2017; original: 30.10.2015.
Wu, C. (2007). SiftGPU: A GPU implementation of scale invariant feature transform (SIFT). http://www.cs.unc.edu/~ccwu/siftgpu

How to cite this article: Pestana J, Maurer M, Muschick D, Hofer M, Fraundorfer F. Overview obstacle maps for obstacle‐aware navigation of autonomous drones. J Field Robotics. 2019;36:734–762. https://doi.org/10.1002/rob.21863