0% found this document useful (0 votes)
69 views14 pages

A Real-Time 3D Perception and Reconstruction Syste

Uploaded by

Mayur Rabadiya
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
69 views14 pages

A Real-Time 3D Perception and Reconstruction Syste

Uploaded by

Mayur Rabadiya
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 14

Hindawi

Journal of Sensors
Volume 2018, Article ID 2937694, 14 pages
https://doi.org/10.1155/2018/2937694

Research Article
A Real-Time 3D Perception and Reconstruction System Based on a
2D Laser Scanner

Zheng Fang ,1 Shibo Zhao,1 Shiguang Wen,1 and Yu Zhang2


1
Faculty of Robot Science and Engineering, Northeastern University, Shenyang, China
2
State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University,
Hangzhou 310058, China

Correspondence should be addressed to Zheng Fang; [email protected]

Received 5 August 2017; Revised 7 February 2018; Accepted 28 February 2018; Published 16 May 2018

Academic Editor: Paolo Bruschi

Copyright © 2018 Zheng Fang et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper presents a real-time and low-cost 3D perception and reconstruction system which is suitable for autonomous navigation
and large-scale environment reconstruction. The 3D mapping system is based on a rotating 2D planar laser scanner driven by a step
motor, which is suitable for continuous mapping. However, for such a continuous mapping system, the challenge is that the range
measurements are received at different times when the 3D LiDAR is moving, which will result in big distortion of the local 3D point
cloud. As a result, the errors in motion estimation can cause misregistration of the resulting point cloud. In order to continuously
estimate the trajectory of the sensor, we first extract feature points from the local point cloud and then estimate the transformation
between current frame to local map to get the LiDAR odometry. After that, we use the estimated motion to remove the distortion of
the local point cloud and then register the undistorted local point cloud to the global point cloud to get accurate global map. Finally,
we propose a coarse-to-fine graph optimization method to minimize the global drift. The proposed 3D sensor system is
advantageous due to its mechanical simplicity, mobility, low weight, low cost, and real-time estimation. To validate the
performance of the proposed system, we carried out several experiments to verify its accuracy, robustness, and efficiency. The
experimental results show that our system can accurately estimate the trajectory of the sensor and build a quality 3D point
cloud map simultaneously.

1. Introduction LiDAR from Faro (http://www.faro.com/) and Leica


(https://lasers.leica-geosystems.com/) company. However,
Three-dimensional perception is very important to many those sensors are usually very expensive, and the sizes of
mobile robotic systems, since the robot is increasingly those sensors are usually very big. Besides, most of them
required to operate in dynamic, unstructured environments are only suitable for “stop-and-scan” applications, which
and interact safely and effectively with humans or other means you must put the sensor statically to acquire data
robots [1–5]. Beside robotics domain, three-dimensional per- and process the data offline to get 3D model of the environ-
ception and reconstruction are also a key technology for ments. Therefore, those sensors usually are not suitable for
many nonrobotic applications such as surveying, object scan- robotics application. In robotics area, the most widely used
ning, 3D manufacturing, and entertainments [6–10]. commercial 3D range finders are provided by Velodyne
Nowadays, there are many sensors that can be used to (http://velodynelidar.com/) with multibeam models HDL-
acquire 3D data, for example, scanning LiDAR, stereo vision, 64E, HDL-32E, and VLP-16. These models differ mainly in
and structure light sensors. Among those sensors, scanning the number of laser/detectors located in the rotating head.
LiDAR is mostly used for acquiring 3D data due to its accu- In recent years, there are also some new 3D range finders
racy and long measurement range [11]. In geographic areas, based on solid-state LiDAR technology, for example, S3
there are several commercial products already available in LiDAR sensor from Quanergy (http://quanergy.com/). The
the market for 3D laser mapping, for example, scanning main advantage of these sensors is the high acquisition rate
2 Journal of Sensors

that makes them suitable for vehicles moving at high speeds. trajectory of the sensor and build the 3D point cloud of the
However, they have a limited vertical resolution and are more environment simultaneously. The proposed 3D sensor sys-
expensive than 2D laser scanners. There are also several tem is advantageous due to its mechanical simplicity, mobil-
hand-held 3D scanners that are commercially available ity, low weight, low cost, and real-time estimation. Therefore,
(e.g., Leica’s T-scan). Unfortunately, those sensors are pri- our sensor can not only be used for autonomous navigation
marily intended for object scanning applications. Therefore, but also be suitable for surveying and reconstruction of areas.
they often require modification to the environment and have To validate the performance of the proposed system, we
limited working volume that is not appropriate for large- carried out several experiments to verify its accuracy, robust-
scale mobile applications. In recent years, RGB-D cameras ness, and efficiency. The experimental results show that our
[9], which are based on structured lighting or TOF (time of system can accurately estimate the trajectory of the sensor
flight) technology, become very popular due to its low price and build a quality 3D point cloud map simultaneously.
and depth measurement ability. However, those sensors are The rest of the paper is organized as follows. In Section 2,
limited in their sensing range, precision, and the lighting a brief discussion of related work is presented. Section 3
conditions in which they can operate. For example, the Asus describes the hardware and software system of our sensor.
Xtion has an effective range from 0.5 to 4 meters with preci- Then, our real-time pose estimation, 3D mapping, and global
sion of around 12 cm at 4 m range and does not work in map optimization algorithms are detailed in Section 4. In
bright sunlight. Stereo cameras are also widely studied for Section 5, we validate our system in different environments.
real-time 3D reconstruction [12]. However, stereo cameras Section 6 concludes the paper.
have similar precision characteristics, and their performance
is dependent on lighting conditions and textural appearance 2. Related Work
of the environments.
Besides those commercial 3D scanning sensors, In the recent year, several different kinds of customized 3D
researchers have also developed many customized 3D sen- laser scanner system built from a 2D laser scanner have been
sors for their specific applications. Up to now, several proposed in the robotics community. Those systems mainly
customized 3D laser scanners have been developed for differ in the driving mechanism [13, 16, 17] and data pro-
autonomous robots by means of a rotating laser scanner cessing methods [18–20]. In the following, we will have a
with a 360° horizontal FoV, allowing for detection of obsta- brief review of the existing systems and the reconstruction
cles in all directions. For example, Morales et al. [13] methods they used.
designed a low-cost 3D laser rangefinder based on pitching
a commercial 2D rangefinder around its optical center. They 2.1. Driving Mechanism. Currently, there are several ways
also developed a 3D laser scanner by continuously rotating a to make 3D scanners from a 2D laser scanner. According
2D laser scanner [14]. However, in their work, they only to its driving mechanism, they can be divided into two
describe the design and development of the hardware of groups, namely, passive driving mechanism and active driv-
the sensor and do not propose efficient algorithms for ing mechanism.
processing the received 3D point cloud. Therefore, those The first type is to use passive mechanisms. For example,
sensors are generally suitable for “stop-and-scan” applica- Bosse et al. [16] proposed a sensor called Zebedee, which is
tions. Zhang and Singh [15] proposed a continuously scan- constructed from a 2D range scanner coupled with a passive
ning system LOAM which could be a back-and-forth or linkage mechanism, such as a spring. By mounting the other
continuous rotation configuration. The mapping results of end of the passive linkage mechanism to a moving body, dis-
their system are very impressive. However, they do not pro- turbances resulting from accelerations and vibrations of the
vide detailed description of their hardware system. Besides, body propel the 2D scanner in an irregular fashion, thereby
the author only proposed LiDAR odometry and mapping extending the device’s field of view outside of its standard
algorithms. Though the local drift is very low, it is not suit- scanning plane. By combining the information from the 2D
able for large-scale mapping since accumulative error is laser scanner and a rigid-mounted industrial-grade IMU, this
inevitable. Bosse et al. [16] also proposed a sensor called system can accurately recover the trajectory and create the
Zebedee, which is constructed from a 2D range scanner 3D map. However, this sensor is not very good for mobile
coupled with a passive linkage mechanism, such as a navigation since if the robot runs on a flat floor, there would
spring. By shaking the sensor, the device’s field of view be no enough vibrations to actuate the sensor. Kaul et al. [21]
could be extended outside of its standard scanning plane. also proposed a passively actuated rotating laser scanner for
However, this system needs to collect the data first and aerial 3D mapping. A key feature of the system is a novel pas-
then process the data offline. Therefore, it is not suitable sively driven mechanism to rotate a lightweight 2D laser
for real-time applications, such as autonomous navigation scanner using the rotor downdraft from a quadcopter. The
of mobile robots. data generated from the spinning laser is input into a
In this paper, we present a low-cost customized 3D laser continuous-time simultaneous localization and mapping
scanner based on a rotating 2D laser scanner which is suit- solution to produce an accurate 6DoF trajectory estimate
able for both autonomous navigation and large-scale envi- and a 3D point cloud map.
ronment reconstruction. We not only describe the details of The second type is to use active mechanisms. For this
the hardware design but also present a real-time motion type, usually there is a motor actively driving the sensor
estimation algorithm that can continuously estimate the around one axis. When rotating a 2D unit, two basic 3D
Journal of Sensors 3

scanning configurations are possible: pitching (x-axis rota- sensors to provide velocity measurements to remove the
tion) and rolling (y-axis rotation) scans. For example, distortion. For example, the LiDAR cloud can be registered
Morales et al. [13] designed a low-cost 3D laser rangefinder by state estimation from visual odometry integrated with an
based on pitching a commercial 2D rangefinder around its IMU [5, 27].
optical center. The pitching scan is preferable for mobile
robotics and area monitoring because it obtains information 3. System Overview
of the region in front of the 3D rangefinder faster than the
rolling scan. This is because the rolling scan always requires In this section, we will give a detailed description of the
a complete 180° rotation to avoid dead zones. However, com- hardware and software system of our sensor. We first
pared to pitching scanner, the advantage of a rolling scanner describe the hardware system, then present the architecture
is that its field of view can be widened to 360°. Therefore, they of the software system.
also developed a 3D laser scanner by continuously rotating a
3.1. Hardware System. The 3D laser scanner is based on
2D laser scanner [14].
spinning a 2D laser scanner. The mechanical system of
the 3D reconstruction system is composed of two parts,
2.2. Motion Estimation and Registration Methods. For the
namely, the rotating head and driving body as shown in
customized 3D LiDAR sensor which is based on a 2D
Figure 1. The head is mainly composed of a continuously
laser scanner, there are different ways to estimate the
rotating 2D laser scanner. To make the 3D point cloud
motion of sensor and reconstruct the environment using
registration easy, we align the rotating axis y along with
the perceived 3D point cloud. The existing method could
the scanning plane x1 y1 . Axis y1 of the LiDAR sensor is
be divided into two types: stop-and-scan methods and
motored by a step motor with a coupling. Since the scanner
continuous methods.
will rotate continuously, we connect its signal and power
Previously, most existing 3D mapping solutions either
lines to a slip ring to solve revolving problem. The step motor
eliminate sensor motion by taking a stop-and-scan approach
is equipped with an encoder to record the rotating angle of
or attempt to correct the motion using odometric sensors,
the 2D scanner.
such as wheel or visual odometry. Many researchers take
The whole 3D sensor system consists of a 2D laser scan-
the approach of frequently stopping the platform in order
ner, a step motor driving system, and a PC for real-time reg-
to take a stationary scan [19]. However, for mobile robotics
istration as shown in Figure 2. The PC is a Surface Pro3 tablet
system, this type of behavior is undesirable as it limits the
from Microsoft, which has an i7 processor, 256 Gb solid
efficiency of the task. Since there is no movement when tak-
hardware and 12-inch touch screen for displaying. The PC
ing a scan, the advantage of this kind of scanning is that
is responsible for receiving scans from the 2D laser scanner
there is no distortion in the perceived 3d point cloud.
and the encoder data from the step motor, then converting
Therefore, it is easier to register the point cloud. As to the
the 2D scans into 3D point cloud for LiDAR odometry esti-
registration methods, several approaches have been pro-
mation and mapping, and finally displaying the recon-
posed to estimate the motion by means of 3D scan registra-
structed 3D point cloud map in real time. The 2D laser
tion [19, 22, 23]. Most of these approaches are derived from
scanner used in our system is a Hokuyo UTM-30LX, which
the iterative closest Point (ICP) algorithm [18, 24]. For
is a 2D time-of-flight laser with a 270 deg field of view,
example, generalized ICP (GICP) [22] unifies the ICP for-
30 m maximum measurement range, and 40 Hz scanning
mulation for various error metrics such as point-to-point,
rate. The dimension of the UTM-30LX is 60 × 60 mm, and
point-to-plane, and plane-to-plane. Beside ICP methods,
its mass is 210 g, which makes it ideal for low-weight require-
the 3DNDT [20] method discretizes point clouds in 3D
ments. The motion controller is responsible for controlling
grids and aligns Gaussian statistics within grid cells to per-
the rotating speed of the 2D laser scanner, which is composed
form scan registration.
of an ARM STM32 board, a step motor, an encoder, and a
Nowadays, people prefer to develop continuous estima-
communication board. The whole system is powered by a
tion methods for a rotating 2D laser scanner. However, col-
12 V lithium battery.
lecting consistent 3D laser data when the sensor is moving
is often difficult. The trajectory of the sensor during the scan 3.2. Software Systems. The software of the whole 3D recon-
must be considered when constructing point clouds, other- struction system is composed of two parts, namely, low-
wise the structure of the cloud will be severely distorted and level motion control and high-level reconstruction. The
potentially unrecognizable. To solve this problem, several low-level motion control module is responsible for control-
methods have been proposed. For example, a two-step ling the 2D laser scanner rotating at a constant speed and
method is proposed to remove the distortion [25]: an ICP- reading and sending the encoder data to the PC via USB-
based velocity estimation step is followed by a distortion RS232 data line. The high-level reconstruction module is
compensation step, using the computed velocity. A similar responsible for receiving the scanning data and encoder data
technique is also used to compensate for the distortion and then estimating the motion of the sensor and registering
introduced by a single-axis 3D LiDAR [26]. However, if the received point cloud into a 3D map.
the scanning motion is relatively slow, motion distortion
can be severe. This is especially the case when a 2-axis 3.2.1. Low-Level Motion Control. The 2D scanner is
LiDAR is used since one axis is typically much slower than controlled to rotate at a speed of 180°/s by the STM32
the other. Therefore, many researchers try to use other embedded controller. The overall precision of the LiDAR
4 Journal of Sensors

Motor driver Step motor STM32 board


2D laser
Slip ring scanner
z1
z
Rotation

o
y1
y o1

60.5 mm

Encoder RS232 board Coupling

Body Head

Figure 1: Mechanical structure of the 3D reconstruction system.

2D scan
USB Laser commands
PC
STM32 board Motion Time &
commands angle

2D laser scanner
Step motor Sync signal Motion
controller
Motion controller
3D LiDAR Slip ring
Motor driver
Power Encoder
data

Slipping ring Encoder

USB-RS232 board
Battery 2D sensor Rotational mechanism Step motor
(a) (b)

Figure 2: Hardware and functional diagram of the 3D reconstruction system: on the (a) is the hardware system and on the (b) is the
functional diagram.

sensor strongly depends on the measurement precision of a scan is constant, then we can calculate the exact angle
the instantaneous angle of axis y (as shown in Figure 1), when each point is measured.
henceforth denoted as β. To determine that angle, a
high-precision absolute rotary encoder is used. The 3.2.2. High-Level Reconstruction Software. After receiving the
encoder’s resolution is 10 bits. In general, each scan point 2D scan and angle data from the 2D laser scanner and the
must be related to the very value of β (continuously pro- motion controller, the high-level reconstruction algorithm
vided by the encoder) at the very time when this point is needs to estimate the motion of sensor based on the received
measured. However, the UTM-30LX does not output point data and construct the 3D map in real time. The whole archi-
after point in real time. Fortunately, the UTM-30LX trig- tecture of the motion estimation and reconstruction software
gers a synchronization impulse after it finishes a complete is shown in Figure 3. We first need to convert the 2D scans
scanning. The embedded controller reads the encoder into 3D point cloud by computing the Cartesian coordinates
angle data at the exact time it receives a trigger signal. of each point. Then, we need to calibrate the sensor since
And finally, the embedded controller sends the raw angle small errors in the attachment of the 2D device to the rotat-
and scans to Surface Pro3 via USB to serial converter. ing mechanism provoke a big distortion in the point cloud.
We assume the rotation speed of 2D laser scanner during Finally, we need to estimate the motion of the sensor and
Journal of Sensors 5

3D LiDAR reconstruction algorithm

LiDAR precalibration LiDAR odometry LiDAR mapping Graph optimization

Global
Feature point transform
extration Undistorted
algorithm sweep
3D
Laser ranges point
Mapping
UTM_30LX cloud Loop edge ELCH
algorithm
Local
transform
Key frame
Finding feature
point
correspondence

Encoder
Angle Transform
integration
g2o
Motion estimation
algorithm
Loop
closure

Figure 3: Software architecture of the 3D reconstruction system.

register the increasingly perceived 3D point cloud into a 3D z


world map.
o z1
y
(1) Generate 3D Point Cloud. The Microsoft Surface PC
x
receives scan data from the 2D laser scanner which is a vector A′
composed of angle θ and range measurement ρ. At the same 𝜑
𝜃 o1
time, the computer receives the rotating angle φ. The whole 𝜌 A
A
coordinate systems are shown in Figure 4. A point of the
2D device is given by its polar coordinates: angle θ and range x1
ρ. If we assume the reference frame oxyz of the system is
y1
defined as coincident with that of the 2D device, then, the
Cartesian coordinates of the point cloud can be computed Figure 4: (a) Point A in 2D scan plane; (b) after rotating around
from following equation: y-axis to get A′.

x 0 cos ϕ
ρ sin θ
y = 1 0 1 the point cloud computed with 1. To remove the distortion,
ρ cos θ we need to calibrate the sensor. For our customized 3D laser
z 0 sin ϕ scanner which is a 3D scanner with a 2D laser rangefinder
rotating on its optical center, the calibration method pro-
(2) Geometric Calibration. Practically, due to the installation posed in [17] can be used to obtain mechanical misalign-
error, small errors in the attachment of the 2D device to the ments. After calibration, 2 can be employed to obtain 3D
rotating mechanism results in y1 not perfectly aligned with y Cartesian coordinates of a point in the 3D frame instead
as shown in Figure 5. This error will result in a distortion in of 1.

x C α0 C β 0 −C α0 S β0
ρC θ
y = C Θ S β0 + C β 0 S α 0 S Θ C Θ C β 0 − S α 0 S Θ S β0 , 2
ρS θ
z S Θ S β0 − C Θ C β 0 S α 0 C β 0 S Θ + C Θ S α 0 S β0
6 Journal of Sensors

4.1. Odometry Estimation Algorithm. First, we assume that


𝜃0 the movement of the sensor is continuous and smooth
over time, without abrupt changes. As a convention, we
use right uppercase superscription to indicate the coordi-
z1 nate systems. We define a sweep as the sensor complete
z
one-scan coverage. We use right subscription k to indicate
𝜑
the sweeps and Pk to indicate the point cloud perceived
o o1
𝛽0
during sweep k. The function of odometry estimation is
y y1
𝜌 to estimate the relative motion of the sensor during a
𝛼0 x x1 𝜃 sweeping time and use the estimated motion to remove
the distortion of sweep Pk to get undistorted point cloud.
We use similar idea as described in [15] to estimate the
odometry. The difference is that they use scan-to-sweep
Figure 5: Misalignments between the 2D sensor frame and the 3D strategy for relative pose estimation. But in our work, in
sensor frame. order to improve the robustness, we use scan-to-local
map strategy instead of scan-to-sweep strategy. The detail
of the odometry estimation is as follows.
where Θ = θ0 + θ and θ0 represent the zero angle of the
rotation mechanism. θ0 is an extrinsic parameter that 4.1.1. Feature Extraction. First, we extract feature points from
depends on the installation of the sensor, which will not the LiDAR cloud Pk . For each scan, it returns range values on
provoke distortion. a scan plane with 0.25° resolution. However, as the laser scan-
ner rotates at an angular velocity of 180°/s and generate scans
(3) Motion Estimation and Reconstruction. After calibration, at 40 Hz, the resolution in the perpendicular direction to the
if we place the sensor at a place statically, we can get a locally scan planes is 180° /40 = 4 5° . We select two kinds of features
undistorted point cloud. However, in our system, we want to which are edge points and planar points. Let X Lk,i be a point
carry the sensor and move around. The movement of the sen- in Pk (the superscript L means in laser coordinate system),
sor will cause the distortion of point cloud; therefore, we need and let S be the set of consecutive points of i returned by
to continuously estimate the trajectory (laser odometry) of the laser scanner in the same scan. We use the following
the sensor and remove the distortion of the point cloud. After equation to determine whether a point belonged to edge or
that, we can register the undistorted point cloud into a global planar points:
map (mapping). To reduce the local drift of laser odometry
estimation, we build a local map and calculate the laser odo- 1
metry based on local map instead of last sweep. To reduce the c= 〠 X Lk,i − X Lk, j 3
global drift for large-scale reconstruction, we first use ELCH S ⋅ X Lk,i j∈S, j≠i

[28] algorithm to coarsely distribute the loop-closure error.


According to the c value, the points in a scan can be clas-
Then, we use g2o [29] framework to finely optimize the pose
sified into edge points and planar points. The edge points
graph to get globally consistent map (pose graph optimiza-
usually have big c values while the planar points have small
tion). The details of motion estimation and graph optimiza-
c values. We select a feature as planar feature if its c value is
tion algorithms will be described in the following section.
smaller than a threshold and a feature as edge feature if its c
value is bigger than a threshold. In our implementation, the
4. Motion Estimation and threshold is determined by experiments. In order to evenly
Reconstruction Algorithm distribute the feature points, we also divided a scan into four
identical subregions. In each subregion, there are 2 edge
Large-scale environment reconstruction by using 3D laser
points and 4 planar points in maximum.
scanners has been widely used [26, 30]. The challenge for
our system is that our 3D reconstruction system is based on 4.1.2. Feature Matching
a rotating 2D laser scanner; therefore, there are big distor-
tions as our sensor moves, which pose big challenges for (1) Feature Projecting. Since we assume that the angular and
correct and accurate registration. Currently, most 3D recon- linear velocities of the sensor during a sweep are constant,
struction systems estimate the motion and reconstruct the therefore this allows us to linearly interpolate the pose
environment through offline processing. They usually collect transform within a sweep for the points that are received
the data using the sensor and then process those data on a at different times. Given a point i, i ∈ PK+1 , and its corre-
powerful computer to get accurate model of the environ- sponding timestamp t i and let T Lk+1,i be the pose transform
ment. In contrast to them, our algorithm runs in real time. between t k+1 , t i , then T Lk+1,i can be computed by linear
We continuously estimate the motion of the sensor, remove
interpolation of T Lk+1 :
the distortion of the point cloud, and register the local point
cloud to get the whole map. In the following, we will detail
the process of motion estimation, mapping, and global opti- t i − t k+1 L
T Lk+1,i = T 4
mization algorithms. t − t k+1 k+1
Journal of Sensors 7

l m i
i

j j l

LiDAR LiDAR

(a) (b)

Figure 6: (a) Finding corresponding edge point and edge line; (b) finding corresponding planar point and planar patch.

Assuming that we have point cloud Pk+1 , then we can (3) Error Metric Computation. After finding the correspon-
extract edge feature points εk+1 and planar feature points dences, we can calculate the distance from a feature point
μk+1 using 3. We transform features εk+1 and μk+1 to the to its correspondence. We will recover the motion by mini-
starting time of sweep k + 1 and denote them as εk+1 and mizing the overall distances of the features.
μk+1 , then we can get the following equation according L
For an edge point X k+1,i = x0 , y0 , z 0 , i ∈ εk+1 , if j, l is
to 4: L L
its corresponding edge line X k, j = x1 , y1 , z 1 , X k ,l = x2 ,
L
y2 , z 2 , j, l ∈ Mk , then the point to line distance can be com-
L
X Lk+1,i X k+1,i puted as
= T Lk+1,i , 5
1 1 L L L L
X k+1,i −X k, j × X k+1,i −X k ,l
dε = 6
L L
L X −X
where is a point i in εk+1 or μk+1 and
X Lk+1,i X k+1,i is the cor- k, j k ,l

responding point in εk+1,i or μk+1 . L


For a planar point X k+1,i = x0 , y0 , z 0 , i ∈ μk+1 , if j, l, m
(2) Feature to Local Map Matching. For relative motion esti- is the corresponding planar patch (X Lk, j = x1 , y1 , z 1 ,
mation, a common strategy is to compute the transform X Lk,l = x2 , y2 , z 2 , and X Lk,m = x3 , y3 , z 3 , j, l, m ∈ M Lk ),
between the current sweep Pk+1 to the last sweep Pk . This is then the point to plane distance is
a kind of strategy similar to frame-to-frame matching strat-
egy which is widely used in visual odometry. However, this X
L
k+1,i −X
L
k, j X
L
k, j −X
L
k,l × X
L
k, j −X
L
k,m
strategy is not very robust. This is because the time duration dμ =
L L L L
7
X −X × X −X
of one sweep is very short; if there is a sharp turn around a k, j k,l k, j k,m

corner, the scanning scene will change dramatically. If we just


(4) Motion Estimation. Recall that 6 and 7 compute the dis-
register current scan to the last sweep, the registration may
tances between feature points and their correspondences.
fail due to insufficient overlap. Here, we use a local map strat-
By combining 4, 5, 6, and 7, we can get the geometric
egy. Instead of matching to the last sweep, we create a local
relationship between the feature point and their
map by accumulating the last 3 sweeps and then match cur-
correspondences:
rent scan to the created local map, which improves the
robustness of the whole algorithm.
At the beginning of running our system, we assume f ε X Lk+1,i , T Lk+1 = d ε ,  i ∈ εk+1 , 8
the sensor is static; therefore, we concatenate the first 3
full sweeps as a local map M Lk , k ≥ 3 . We transform the f μ X Lk+1,i , T Lk+1 = d μ ,  i ∈ μk+1 9
L
local map M Lk to time stamp t k+1 and denote it as M k
and store it in a 3D kd tree. For εk+1 and μk+1 , we need For each feature point in εk+1 and μk+1 , we can get one of
to find the corresponding edges and planes in M k . The
L the above equation. Stack all the equations and we can get
feature matching process is described in Figure 6. Let i
be a point in εk+1 , i ∈ εk+1 . The edge line is represented f T Lk+1 = d, 10
by two points. Let j be the closest neighbor of i in M k ,
and let l be the closest neighbor of i in the two consecu- where each row corresponds to one feature point and d is its
tive scans to the scan of j as shown in Figure 6(a). There- corresponding distance. Our object is to minimize d towards
fore, j, l forms the correspondence of i. To verify if both zero through nonlinear iterative optimization to estimate the
j and l are edge points, we check the smoothness of the transform T Lk+1 . Taking the first-order Taylor expansion at
local surface based on 3. Figure 6(b) shows the procedure T Lk+1 , we get
of finding a planar patch as the correspondence of a pla-
nar point, where j, l, m forms the correspondence of f ≈ f T Lk+1 + A T Lk+1 T L − T Lk+1 , 11
point i, i ∈ μk+1 .
8 Journal of Sensors

Qk+1
L
Gk+1

Qk GkW

Figure 7: Mapping process.


Figure 8: g2o pose optimization: each triangle represents laser pose
and the blue edge represents the constraint relationship between the
where A T Lk+1 is the Jacobian matrix of f at T Lk+1 . There- pose obtained by the front-end algorithm. By using these
constraints, each pose is optimized so as to satisfy the constraints.
fore, we convert the nonlinear optimization problem into a
linear least square problem:
and pose information. Then, the Euclidean distance between
2 the current frame and all the historical keyframes is calcu-
min   A T Lk+1 T L
− T Lk+1 +f T Lk+1 12 lated. When the distance is less than the threshold of
1.5 m, it is considered that there potentially exists a closed
Finally, we can get loop. Since the previous frames adjacent to the current
frame are also easily able to meet the above requirement,
T Lk+1 ← T Lk+1 + ΔT, 13 therefore these data should be excluded. Here, we use
ELCH [28], a heuristic algorithm, to do coarse graph opti-
T −1 T
where ΔT = − A T Lk+1 A T Lk+1 A T Lk+1 f T Lk+1 . mization. The approach’s key idea is that it will spread the
final accumulated error to every frame according to the
4.2. Mapping Algorithm. At the end of sweep k + 1, the value of uncertainty.
odometry estimates an undistorted point cloud Pk+1 and
L When loop closure is detected, we use ICP algorithm to
L calculate the accumulated error transformation matrix ΔT
simultaneously a pose transform T k+1 which is the sensor
between the start keyframe and end keyframe in a closed
motion during the sweep between time t k+1 , t k+2 . The
loop. This ΔT is then assigned to each node in the SLAM
mapping algorithm matches and registers the undistorted
L graph. Here, the SLAM graph is denoted as G = V, E , where
point cloud Pk+1 to the global map in the world coordi- the vertex V represents the pose of the scanned keyframes
nate, as shown in Figure 7. In the figure, the blue curve and E is the uncertainty between the vertices. Suppose that
denotes the global pose GW k of the LiDAR sensor at the an undirected graph G = V, E has been constructed, which
end time of sweep k, and the orange curve denotes the contains two special nodes v f and vl , denoting the starting
pose estimation GLk+1 for sweep k + 1 from the LiDAR odo- and ending nodes of the closed loop in the graph, with corre-
metry estimation algorithm. Combining GW L
k and Gk+1 , we sponding weights of 0 and 1, respectively.
W
can get an initial guess of transform Gk+1 . Then, we can Then, according to the Dijkstra algorithm, we obtain
L the uncertainty value equivalent to weight wi . The Dijkstra
transform the local undistorted point cloud Pk+1 to the global
world coordinate, denoted as Qk+1 and shown as green cloud algorithm is used to calculate the shortest path from each
in Figure 7. Next, the mapping algorithm matches Qk+1 to Qk node to other nodes in the set S. Since the initial set con-
to get a refined pose estimation GW tains only closed-loop nodes v f and v f , the first iteration
k+1 and register Qk+1 onto
Qk to get a new map Qk+1 . only calculates the shortest path from the beginning to
the end, and the cost of the path is d vl , v f :
4.3. Pose Graph Optimization. In the previous section, we
obtain the refined pose transformation by the motion d vl , v f ≔ 〠 ci j 14
estimation in the mapping algorithm. However, since the edge<i, j>∈path
accumulative error is inevitable in the calculation process,
we cannot construct globally consistent map for large-scale ci, j in 14 represents the edge of the connected nodes v f
environments. Here, we first utilize ELCH [28] as the coarse and v f , indicating the uncertainty of the edge, so the cost
graph optimization method to solve the global map consis- of a path represents the sum of the uncertainties of all the
tency problem to a certain extent by finding the minimum edges in the path. Based on this path cost value, the
accumulated error in the map and trajectory estimation. weight of each node in the path is updated according to
Then, we use g2o [29] for fine graph optimization which con- following equation:
siders the relationship among global constraints.
d v s , vi
4.4. Coarse Graph Optimization. The coarse graph optimiza- wi = w s + we − w s 15
d vs , ve
tion method is mainly based on geometric relationship of
point cloud. First, we save the point cloud data every 3 s as we and ws in 15 denote the last and first nodes of the cur-
a keyframe, which is stored in a vector containing keyframes rent closed loop.
Journal of Sensors 9

(a) (b)

Figure 9: g2o result: (a) the hypothetical pose map optimization; (b) the optimized pose diagram.

In our 3D laser SLAM system, we send the edges after the


coarse graph optimization to g2o. At the same time, we con-
tinue to create the local constraints by connecting each key
frame to the first three keyframes through ICP algorithm to
establish a constraint relationship. If the matching in the
ICP is successful, the matching constraint is taken as a new
edge in g2o. We set the number of optimization iterations
to 100. We use Levenberg-Marquardt algorithm as the non-
linear optimization algorithm and Huber kernel as the edge
kernel function. An illustrative example is shown in Figure 9:

5. Experiments and Analyses


Figure 10: Reconstructed map: the green trajectory is the ground
truth from OptiTrack system, and the red trajectory is the The customized 3D reconstruction sensor has been tested in
estimated trajectory of our SLAM system. The left figure is top variant practical indoor environments. Since the used
view and the right figure is side view. Hokuyo UTM-30LX laser scanner is an indoor laser scanner,
we did not carry out experiments in outdoor environments.
We carried out 3 different indoor experiments to test the
While updating the weight of each node in the path, it is performance of our system. We evaluate the quality of our
also necessary to detect the connectivity between the nodes. system by considering both the map and trajectory accura-
For example, when the degree of a node exceeds 2, the node cies. In the first experiment, we quantitatively analyze the
is added to the set S. When a closed-loop path is computed, performance of our system using a motion capture system.
all edges in the path are removed from the graph G, and then We compared the estimated trajectory of the sensor to the
the degree of the nodes vs and ve is determined. If the node ground-truth data provided by the motion capture system
degree is zero, the node is also removed from the graph. After to show the estimation accuracy of our system. In the second
that, the coarse graph optimization algorithm continues iter- experiment, we put our mapping sensor on a ground mobile
atively to calculate the remaining points of the set. The pro- robot. On the ground mobile robot, we also have a 2D laser
cessing flow is the same as the above process until all the scanner, which performs a 2D SLAM algorithm by using
closed loops are processed; thus, the weight value of each wheel odometry and laser data. We compare the trajectory
node in the graph is obtained. between 2D SLAM and 3D mapping to show the accuracy
of the estimation in large indoor environment. We also com-
pared the 2D map projected from 3D global map to the 2D
4.5. Fine Graph Optimization. In the previous section, we map generated by the 2D SLAM to show the accuracy of
introduce the coarse graph optimization. However, it only the mapping. In the third experiment, we test our sensor in
considers the local constraints in the optimization process indoor long corridors by manually carrying the sensor. In
and does not consider the relationship among the global con- this experiment, we show the importance of loop-closure
straints. In this paper, a general framework for optimization detection and global pose optimization for reducing global
(g2o) [29] is also used for fine graph optimization. g2o is an drift in challenging environments.
extensible template library written in C++ that provides an During experiments, the system processing the LiDAR
easy-to-use framework for handling SLAM-related issues. data using a Microsoft Surface Pro 3 tablet with Intel i7
Here, for our laser three-dimensional reconstruction system, 4650U CPU and 8 Gb memory running Ubuntu 14.04 and
we only use the pose optimization, as shown in Figure 8. ROS Indigo. We developed our algorithms using C++, PCL
10 Journal of Sensors

Figure 11: Absolute trajectory error of our system. Poses of the trajectory are projected on the XY-plane.

3
1.7, Eigen. The experiment video can be found at https://
youtu.be/qXESSWWK1rQ or in the Supplementary Material 2
(available here).
1
5.1. Motion Capture Experiments. Quantitative analysis can
be performed on the trajectory provided a ground-truth tra- 0

y (m)
jectory estimate is available. We therefore evaluate the accu-
racy of the trajectory in a motion capture environment in −1
which we can track the position of our sensor with high
precision. An OptiTrack motion capture system consisting −2
of 8 Prime 13W cameras is capable of tracking the positions
of tags containing multiple reflective targets to millimeter −3
precision at more than 200 Hz, which is good enough as
ground truth. −4
−3 −2 −1 0 1 2 3
In this experiment, the sensor body follows an irregular
x (m)
path with the sensor held at varying heights. The sensor
moves at a speed about 0.5 m/s. The environment containing Ground truth
the OptiTrack system is a 7 × 6 m room as shown in Estimated
Figure 10, with few computer workstations and furnitures. Difference
We mounted several reflective tags on top of our LiDAR sen-
Figure 12: Motion capture room.
sor to provide 6DoF tracking as it is moved around in the
room. The estimated trajectory and ground-truth trajectory
from the OptiTrack system are plotted in Figure 10.
We quantify the accuracy of the estimated trajectories
using the measure of the absolute trajectory error (ATE) pro-
posed by [31]. It is based on determining relative and abso-
lute differences between estimated and ground-truth poses.
Global consistency is measured by first aligning and then
directly comparing absolute pose estimates (and trajectories):

1/2
1 m
ATE F i n ≔ 〠 trans F i Δ 2
16
m i+1
Figure 13: 3D laser scanner on a turtlebot.

with F i Δ ≔ Q−i 1 SPi , where S is the rigid body transforma- further improve the robustness and accuracy of the system
tion mapping the estimated trajectory Pi n to the ground- in geometry feature-less environments.
truth trajectory Qi n . In Figure 11, we show the trajectory esti-
mates obtained from our system as well as the deviation from
the ground-truth trajectory. The mean translational and 5.2. Mobile Mapping Experiments. Though in motion capture
rotational errors are 0.049 m and 0.536°, respectively. From room we can get accurate ground truth, the environment is
Figures 10 and 12, you can see that the experimental environ- simple and small. In this experiment, we put our sensor on
ment is very clear. Most times, there are only smooth walls a ground mobile robot as shown in Figure 13. The robot is
and glass windows. These are very challenging for laser- driven through a big office environment with tables, chairs,
based reconstruction system, since there are only few geom- and computers. This environment was chosen to test the
etry features and laser sensor does not work well with glass. performance of our system in big and complex indoor envi-
In the future, we will incorporate visual information to ronments. Since it is difficult to get ground truth in big
Journal of Sensors 11

Figure 14: A comparison between the trajectory estimated by the 3D reconstruction system and the trajectory estimate from the 2D laser
SLAM system as ground truth in office environment.

indoor environment, we compare the estimated 6DoF trajec- a closed loop. This environment is very challenging for our
tory with the ground-truth trajectory computed from the system since the long corridor contains little geometry fea-
robot’s 2D laser SLAM system to show the performance of tures that our algorithms can use. We started our sensor from
our system. one place, then walked around, and finally went back to the
The top figures of Figure 14 depict the 2D view of the esti- same place. The motion estimates generate a gap between
mated trajectory of our sensor and the trajectory computed the starting and finishing positions, which indicates the
from the 2D laser SLAM. The total length is about 60 m. As amount of drift. In practice, there is an accumulative error.
you can see, our estimated trajectory can accurately align The experimental results show that the growth of transla-
with the 2D SLAM trajectory in most times. The difficult tional errors is about to be less than 2% of distance traveled
motion for accurate recovering is fast spot turns, when por- and rotational errors less than 0.3° per meter. If we do not
tions of the sweeps might not be sufficiently constrained. have loop-closure detection and global optimization, the
For example, when the sensor is turning around a corner map will become inconsistent after a big loop traveling. How-
where there is only one smooth wall, it is difficult to recover ever, since our system has loop-closure detection and coarse-
the translation since the problem becomes a degeneration to-fine graph optimization, the error can be minimized to
problem. The bottom figure of Figure 14 shows an oblique make the map consistent as shown in Figure 15. The left figure
view of the 3D point cloud generated from an experiment of Figure 15 shows the map before optimization and the right
and overlaid with the points from the horizontal laser. In figure shows the map after optimization. From the recon-
the zoomed-in region, we can see the undistorted shapes of structed map in the red box, it can be seen that the map shows
the environments. Besides, from top left figure of Figure 14, obvious inconsistency before using the closed-loop correction
we can see that our 3D point cloud model aligned with the algorithm. After optimization, the inconsistency is mini-
2D occupancy grid map very accurately; therefore, our sys- mized. The green line is the precorrection trajectory and the
tem works very well in complex indoor environments. The red line is the closed-loop corrected trajectory. This experi-
mean translational and rotational errors in these experiments ment shows the importance of loop-closure detection and
are 0.023 m and 0.373°, respectively. global optimization for long-distance movement.

5.3. Hand-Held Experiments. We also tested our sensor and 5.4. Odometry Estimation Comparison. In this part, we did an
algorithms by carrying our system and walking around in experiment to show the improvement of our scan-to-local
indoor environments. We conducted tests to measure accu- map method compared to original scan-to-sweep method.
mulated drift of the motion estimate. Since it is difficult to We carried out the experiment in a long-corridor environ-
get 6DoF ground-truth data in large indoor environments, ment of one building in our university. The long corridor is
we choose a 30 m×20 m rectangular hallway which contains around 200 meters and it has two closed loops. There
12 Journal of Sensors

Figure 15: Indoor loop-closure test: left is before optimization and right is after optimization.

Figure 16: Comparison of odometry estimation by using scan-to-sweep method with scan-to-local map method: left figures are the results of
scan-to-sweep method, which have bigger accumulative drift. Right figures are the results of our method, which have much smaller drift.

are some geometry features in the corridor. LOAM uses center which is suitable for 3D perception in robotics and
scan-to-sweep method to match the feature points which reconstruction in other applications. We also described a
however leads to a big accumulative error and makes the parallel motion estimation, mapping, and global pose optimi-
map distorted. We use scan-to-local map matching zation algorithm that enable our system to output the 6DoF
method, which in a way can reduce the local accumulative sensor pose and reconstruct consistent 3D map in real time.
error and distortion by using more information in match- In the future, we will continue to improve the performance
ing corresponding points. of our system from hardware and software aspects. For the
The experiment results are shown in Figure 16. The left hardware, we will make the sensor more compact and inte-
column figures are the results of LOAM method, while the grate with other sensors like IMU or camera. For the soft-
right figures are ours. The first row shows the map in top ware, we will consider to use sensor fusion methods to
view, while the second row shows the map in side view. For further reduce the drift of the estimation algorithm.
fairness, we disabled the loop-closure function in our system
and run both algorithms using the same dataset. From the Conflicts of Interest
results, you can see that there is a big drift of LOAM algo-
rithm in the map as denoted by the red circle. For the same The authors declare that there is no conflict of interest
dataset, however, our method has smaller drift. We can also regarding the publication of this paper.
find that our method has much smaller accumulative error
in the z-axis direction from the bottom figures in Figure 16.
Acknowledgments
The experimental results demonstrate that our scan-to-local
map method has a lower drift. This work was supported by the National Natural Science
Foundation of China (no. 61573091 and no. 61673341),
6. Conclusions Open Research Project of the State Key Laboratory of
Industrial Control Technology, Zhejiang University, China
This paper has described a 3D laser scanner with 360° field of (no.ICT170302), Fundamental Research Funds for the
view. Our low-cost 3D laser rangefinder consisted of a 2D Central Universities (N172608005) and Doctoral Scientific
LiDAR scanner continuously rotating around its optical Research Foundation of Liaoning Province (no.20170520244).
Journal of Sensors 13

Supplementary Materials [14] J. L. Martínez, J. Morales, A. J. Reina, A. Mandow, A. Pequeño-


Boter, and A. García-Cerezo, “Construction and calibration of
The supplementary material is a video clip of all the experi- a low-cost 3D laser scanner with 360° field of view for mobile
ments. The experiment video can also be found at https:// robots,” in 2015 IEEE International Conference on Industrial
youtu.be/qXESSWWK1rQ. (Supplementary Materials) Technology (ICIT), pp. 149–154, Seville, Spain, March 2015.
[15] J. Zhang and S. Singh, “Low-drift and real-time lidar odometry
References and mapping,” Autonomous Robots, vol. 41, no. 2, pp. 401–
416, 2017.
[1] A. Hornung, M. Phillips, E. Gil Jones, M. Bennewitz, [16] M. Bosse, R. Zlot, and P. Flick, “Zebedee: design of a spring-
M. Likhachev, and S. Chitta, “Navigation in three-dimensional mounted 3-D range sensor with application to mobile map-
cluttered environments for mobile manipulation,” in 2012 ping,” IEEE Transactions on Robotics, vol. 28, no. 5,
IEEE International Conference on Robotics and Automation, pp. 1104–1119, 2012.
pp. 423–429, Saint Paul, MN, USA, May 2012. [17] J. Morales, J. Martínez, A. Mandow, A. Reina, A. Pequeño-
[2] M. Nieuwenhuisen and S. Behnke, “Hierarchical planning Boter, and A. García-Cerezo, “Boresight calibration of con-
with 3D local multiresolution obstacle avoidance for micro struction misalignments for 3D scanners built with a 2D laser
aerial vehicles,” in ISR/Robotik 2014; 41st International Sym- rangefinder rotating on its optical center,” Sensors, vol. 14,
posium on Robotics, pp. 1–7, Munich, Germany, June 2014. no. 11, pp. 20025–20040, 2014.
[3] M. Schadler, J. Stückler, and S. Behnke, “Rough terrain 3D [18] P. J. Besl and N. D. McKay, “A method for registration of 3-D
mapping and navigation using a continuously rotating 2D shapes,” IEEE Transactions on Pattern Analysis and Machine
laser scanner,” KI - Künstliche Intelligenz, vol. 28, no. 2, Intelligence, vol. 14, no. 2, pp. 239–256, 1992.
pp. 93–99, 2014. [19] A. Nuchter, K. Lingemann, J. Hertzberg, and H. Surmann, “6D
[4] D. Maier, A. Hornung, and M. Bennewitz, “Real-time naviga- SLAM with approximate data association,” in ICAR '05. Pro-
tion in 3D environments based on depth camera data,” in 2012 ceedings., 12th International Conference on Advanced Robotics,
12th IEEE-RAS International Conference on Humanoid Robots 2005, pp. 242–249, Seattle, WA, USA, July 2005.
(Humanoids 2012), pp. 692–697, November-December 2012. [20] T. Stoyanov, M. Magnusson, H. Andreasson, and A. J. Lilienthal,
[5] D. Droeschel, Holz, and S. Behnke, “Omnidirectional per- “Fast and accurate scan registration through minimization of
ception for lightweight MAVs using a continuously rotating the distance between compact 3D NDT representations,” The
3D laser<BR>Omnidirektionale wahrnehmung für leichte International Journal of Robotics Research, vol. 31, no. 12,
MAVs mittels eines kontinuierlich rotierenden 3D-laserscan- pp. 1377–1393, 2012.
ners,” Photogrammetrie - Fernerkundung - Geoinformation, [21] L. Kaul, R. Zlot, and M. Bosse, “Continuous-time three-
vol. 2014, no. 5, pp. 451–464, 2014. dimensional mapping for micro aerial vehicles with a passively
[6] R. F. Salas-Moreno, R. A. Newcombe, H. Strasdat, P. H. J. actuated rotating laser scanner,” Journal of Field Robotics,
Kelly, and A. J. Davison, “SLAM++: simultaneous localisation vol. 33, no. 1, pp. 103–132, 2016.
and mapping at the level of objects,” in 2013 IEEE Conference [22] A. V. Segal, D. Haehnel, and S. Thrun, “Generalized-ICP,” in
on Computer Vision and Pattern Recognition, pp. 1352–1359, Robotics: Science and Systems V, Seattle, WA, USA, June 2009.
Portland, OR, USA, June 2013. [23] M. Magnusson, A. Lilienthal, and T. Duckett, “Scan registra-
[7] R. Zlot and M. Bosse, “Efficient large-scale three-dimensional tion for autonomous mining vehicles using 3D-NDT,” Journal
mobile mapping for underground mines,” Journal of Field of Field Robotics, vol. 24, no. 10, pp. 803–827, 2007.
Robotics, vol. 31, no. 5, pp. 758–779, 2014. [24] F. Pomerleau, F. Colas, R. Siegwart, and S. Magnenat, “Com-
[8] S. Vidas, P. Moghadam, and M. Bosse, “3D thermal mapping paring ICP variants on real-world data sets: open-source
of building interiors using an RGB-D and thermal camera,” library and experimental protocol,” Autonomous Robots,
in 2013 IEEE International Conference on Robotics and Auto- vol. 34, no. 3, pp. 133–148, 2013.
mation, pp. 2311–2318, Karlsruhe, Germany, May 2013. [25] S. Hong, H. Ko, and J. Kim, “VICP: velocity updating iterative
[9] P. Henry, M. Krainin, E. Herbst, X. Ren, and D. Fox, “RGB-D closest point algorithm,” in 2010 IEEE International Confer-
mapping: using Kinect-style depth cameras for dense 3D ence on Robotics and Automation, pp. 1893–1898, Anchorage,
modeling of indoor environments,” The International Journal AK, USA, May 2010.
of Robotics Research, vol. 31, no. 5, pp. 647–663, 2012. [26] F. Moosmann and C. Stiller, “Velodyne SLAM,” in 2011 IEEE
[10] A. Nüchter, K. Lingemann, J. Hertzberg, and H. Surmann, “6D Intelligent Vehicles Symposium (IV), pp. 393–398, Baden-
SLAM—3D mapping outdoor environments,” Journal of Field Baden, Germany, June 2011.
Robotics, vol. 24, no. 8-9, pp. 699–722, 2007. [27] S. Scherer, J. Rehder, S. Achar et al., “River mapping from a fly-
[11] F. Pomerleau, F. Colas, and R. Siegwart, “A review of point ing robot: state estimation, river detection, and obstacle map-
cloud registration algorithms for mobile robotics,” Founda- ping,” Autonomous Robots, vol. 33, no. 1-2, pp. 189–214, 2012.
tions and Trends in Robotics, vol. 4, no. 1, pp. 1–104, 2015. [28] J. Sprickerhof, A. Nüchter, K. Lingemann, and J. Hertzberg,
[12] A. Geiger, J. Ziegler, and C. Stiller, “StereoScan: dense 3d recon- “An explicit loop closing technique for 6D SLAM,” in Pro-
struction in real-time,” in 2011 IEEE Intelligent Vehicles Sympo- ceedings of the 4th European Conference on Mobile Robots,
sium (IV), pp. 963–968, Baden-Baden, Germany, June 2011. ECMR 09, I. Petrovic and A. J. Lilien-thal, Eds., pp. 229–234,
[13] J. Morales, J. L. Martínez, A. Mandow, A. Pequeño-Boter, and KoREMA, Mlini/Dubrovnik, Croatia, 2009.
A. García-Cerezo, “Design and development of a fast and pre- [29] R. Kummerle, G. Grisetti, H. Strasdat, K. Konolige, and
cise low-cost 3D laser rangefinder,” in 2011 IEEE International W. Burgard, “G2o: a general framework for graph optimiza-
Conference on Mechatronics, pp. 621–626, Istanbul, Turkey, tion,” in 2011 IEEE International Conference on Robotics and
April 2011. Automation, pp. 3607–3613, Shanghai, China, May 2011.
14 Journal of Sensors

[30] P. Kr, F. Colas, P. Furgale, and R. Siegwart, “Long-term 3D


map maintenance in dynamic environments,” in 2014 IEEE
International Conference on Robotics and Automation (ICRA),
pp. 3712–3719, Hong Kong, China, June 2014.
[31] J. Sturm, W. Burgard, and D. Cremers, “Evaluating egomotion
and structure-from-motion approaches using the TUM
RGB-D benchmark,” in IEEE/RJS International Conference
on Intelligent Robot, Vilamoura, Algarve, Portugal, 2012.

You might also like