Journal of Sensors
Volume 2018, Article ID 2937694, 14 pages
https://doi.org/10.1155/2018/2937694
Research Article
A Real-Time 3D Perception and Reconstruction System Based on a
2D Laser Scanner
Received 5 August 2017; Revised 7 February 2018; Accepted 28 February 2018; Published 16 May 2018
Copyright © 2018 Zheng Fang et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
This paper presents a real-time, low-cost 3D perception and reconstruction system which is suitable for autonomous navigation and large-scale environment reconstruction. The 3D mapping system is based on a rotating 2D planar laser scanner driven by a step motor, which is suitable for continuous mapping. However, for such a continuous mapping system, the challenge is that the range measurements are received at different times while the 3D LiDAR is moving, which results in large distortion of the local 3D point cloud. As a result, errors in motion estimation can cause misregistration of the resulting point cloud. In order to continuously estimate the trajectory of the sensor, we first extract feature points from the local point cloud and then estimate the transformation from the current frame to the local map to obtain the LiDAR odometry. After that, we use the estimated motion to remove the distortion of the local point cloud and then register the undistorted local point cloud to the global point cloud to get an accurate global map. Finally, we propose a coarse-to-fine graph optimization method to minimize the global drift. The proposed 3D sensor system is advantageous due to its mechanical simplicity, mobility, low weight, low cost, and real-time estimation. To validate the performance of the proposed system, we carried out several experiments to verify its accuracy, robustness, and efficiency. The experimental results show that our system can accurately estimate the trajectory of the sensor and build a quality 3D point cloud map simultaneously.
that makes them suitable for vehicles moving at high speeds. However, they have a limited vertical resolution and are more expensive than 2D laser scanners. There are also several hand-held 3D scanners that are commercially available (e.g., Leica’s T-scan). Unfortunately, those sensors are primarily intended for object scanning applications. Therefore, they often require modification to the environment and have a limited working volume that is not appropriate for large-scale mobile applications. In recent years, RGB-D cameras [9], which are based on structured lighting or TOF (time-of-flight) technology, have become very popular due to their low price and depth measurement ability. However, those sensors are limited in their sensing range, precision, and the lighting conditions in which they can operate. For example, the Asus Xtion has an effective range from 0.5 to 4 meters with a precision of around 12 cm at 4 m range and does not work in bright sunlight. Stereo cameras are also widely studied for real-time 3D reconstruction [12]. However, stereo cameras have similar precision characteristics, and their performance depends on the lighting conditions and textural appearance of the environment.

Besides those commercial 3D scanning sensors, researchers have also developed many customized 3D sensors for their specific applications. Up to now, several customized 3D laser scanners have been developed for autonomous robots by means of a rotating laser scanner with a 360° horizontal FoV, allowing for detection of obstacles in all directions. For example, Morales et al. [13] designed a low-cost 3D laser rangefinder based on pitching a commercial 2D rangefinder around its optical center. They also developed a 3D laser scanner by continuously rotating a 2D laser scanner [14]. However, in their work, they only describe the design and development of the sensor hardware and do not propose efficient algorithms for processing the received 3D point cloud. Therefore, those sensors are generally suitable for “stop-and-scan” applications. Zhang and Singh [15] proposed a continuously scanning system, LOAM, which can run in a back-and-forth or continuous rotation configuration. The mapping results of their system are very impressive. However, they do not provide a detailed description of their hardware system. Besides, the authors only proposed LiDAR odometry and mapping algorithms. Though the local drift is very low, it is not suitable for large-scale mapping since accumulative error is inevitable. Bosse et al. [16] also proposed a sensor called Zebedee, which is constructed from a 2D range scanner coupled with a passive linkage mechanism, such as a spring. By shaking the sensor, the device’s field of view can be extended outside of its standard scanning plane. However, this system needs to collect the data first and then process the data offline. Therefore, it is not suitable for real-time applications, such as autonomous navigation of mobile robots.

In this paper, we present a low-cost customized 3D laser scanner based on a rotating 2D laser scanner which is suitable for both autonomous navigation and large-scale environment reconstruction. We not only describe the details of the hardware design but also present a real-time motion estimation algorithm that can continuously estimate the trajectory of the sensor and build the 3D point cloud of the environment simultaneously. The proposed 3D sensor system is advantageous due to its mechanical simplicity, mobility, low weight, low cost, and real-time estimation. Therefore, our sensor can not only be used for autonomous navigation but is also suitable for surveying and reconstruction of areas. To validate the performance of the proposed system, we carried out several experiments to verify its accuracy, robustness, and efficiency. The experimental results show that our system can accurately estimate the trajectory of the sensor and build a quality 3D point cloud map simultaneously.

The rest of the paper is organized as follows. In Section 2, a brief discussion of related work is presented. Section 3 describes the hardware and software system of our sensor. Then, our real-time pose estimation, 3D mapping, and global map optimization algorithms are detailed in Section 4. In Section 5, we validate our system in different environments. Section 6 concludes the paper.

2. Related Work

In recent years, several different kinds of customized 3D laser scanner systems built from a 2D laser scanner have been proposed in the robotics community. Those systems mainly differ in the driving mechanism [13, 16, 17] and the data processing methods [18–20]. In the following, we give a brief review of the existing systems and the reconstruction methods they use.

2.1. Driving Mechanism. Currently, there are several ways to make a 3D scanner from a 2D laser scanner. According to the driving mechanism, they can be divided into two groups, namely, passive driving mechanisms and active driving mechanisms.

The first type uses passive mechanisms. For example, Bosse et al. [16] proposed a sensor called Zebedee, which is constructed from a 2D range scanner coupled with a passive linkage mechanism, such as a spring. By mounting the other end of the passive linkage mechanism to a moving body, disturbances resulting from accelerations and vibrations of the body propel the 2D scanner in an irregular fashion, thereby extending the device’s field of view outside of its standard scanning plane. By combining the information from the 2D laser scanner and a rigidly mounted industrial-grade IMU, this system can accurately recover the trajectory and create the 3D map. However, this sensor is not well suited for mobile navigation since, if the robot runs on a flat floor, there are not enough vibrations to actuate the sensor. Kaul et al. [21] also proposed a passively actuated rotating laser scanner for aerial 3D mapping. A key feature of the system is a novel passively driven mechanism to rotate a lightweight 2D laser scanner using the rotor downdraft from a quadcopter. The data generated from the spinning laser is input into a continuous-time simultaneous localization and mapping solution to produce an accurate 6DoF trajectory estimate and a 3D point cloud map.

The second type uses active mechanisms. For this type, usually there is a motor actively driving the sensor around one axis. When rotating a 2D unit, two basic 3D
scanning configurations are possible: pitching (x-axis rotation) and rolling (y-axis rotation) scans. For example, Morales et al. [13] designed a low-cost 3D laser rangefinder based on pitching a commercial 2D rangefinder around its optical center. The pitching scan is preferable for mobile robotics and area monitoring because it obtains information about the region in front of the 3D rangefinder faster than the rolling scan. This is because the rolling scan always requires a complete 180° rotation to avoid dead zones. However, compared to a pitching scanner, the advantage of a rolling scanner is that its field of view can be widened to 360°. Therefore, they also developed a 3D laser scanner by continuously rotating a 2D laser scanner [14].

2.2. Motion Estimation and Registration Methods. For a customized 3D LiDAR sensor based on a 2D laser scanner, there are different ways to estimate the motion of the sensor and reconstruct the environment using the perceived 3D point cloud. The existing methods can be divided into two types: stop-and-scan methods and continuous methods.

Previously, most existing 3D mapping solutions either eliminated sensor motion by taking a stop-and-scan approach or attempted to correct the motion using odometric sensors, such as wheel or visual odometry. Many researchers take the approach of frequently stopping the platform in order to take a stationary scan [19]. However, for a mobile robotic system, this type of behavior is undesirable as it limits the efficiency of the task. Since there is no movement when taking a scan, the advantage of this kind of scanning is that there is no distortion in the perceived 3D point cloud. Therefore, it is easier to register the point cloud. As to the registration methods, several approaches have been proposed to estimate the motion by means of 3D scan registration [19, 22, 23]. Most of these approaches are derived from the iterative closest point (ICP) algorithm [18, 24]. For example, generalized ICP (GICP) [22] unifies the ICP formulation for various error metrics such as point-to-point, point-to-plane, and plane-to-plane. Besides ICP methods, the 3D-NDT [20] method discretizes point clouds into 3D grids and aligns Gaussian statistics within grid cells to perform scan registration.

Nowadays, people prefer to develop continuous estimation methods for a rotating 2D laser scanner. However, collecting consistent 3D laser data when the sensor is moving is often difficult. The trajectory of the sensor during the scan must be considered when constructing point clouds; otherwise the structure of the cloud will be severely distorted and potentially unrecognizable. To solve this problem, several methods have been proposed. For example, a two-step method is proposed to remove the distortion [25]: an ICP-based velocity estimation step is followed by a distortion compensation step, using the computed velocity. A similar technique is also used to compensate for the distortion introduced by a single-axis 3D LiDAR [26]. However, if the scanning motion is relatively slow, motion distortion can be severe. This is especially the case when a 2-axis LiDAR is used, since one axis is typically much slower than the other. Therefore, many researchers try to use other sensors to provide velocity measurements to remove the distortion. For example, the LiDAR cloud can be registered by state estimation from visual odometry integrated with an IMU [5, 27].

3. System Overview

In this section, we give a detailed description of the hardware and software system of our sensor. We first describe the hardware system and then present the architecture of the software system.

3.1. Hardware System. The 3D laser scanner is based on spinning a 2D laser scanner. The mechanical system of the 3D reconstruction system is composed of two parts, namely, the rotating head and the driving body, as shown in Figure 1. The head is mainly composed of a continuously rotating 2D laser scanner. To make the 3D point cloud registration easy, we align the rotating axis y with the scanning plane x1y1. Axis y1 of the LiDAR sensor is driven by a step motor through a coupling. Since the scanner rotates continuously, we connect its signal and power lines through a slip ring to solve the revolving problem. The step motor is equipped with an encoder to record the rotating angle of the 2D scanner.

The whole 3D sensor system consists of a 2D laser scanner, a step motor driving system, and a PC for real-time registration, as shown in Figure 2. The PC is a Surface Pro 3 tablet from Microsoft, which has an i7 processor, a 256 GB solid-state drive, and a 12-inch touch screen for displaying. The PC is responsible for receiving scans from the 2D laser scanner and the encoder data from the step motor, then converting the 2D scans into a 3D point cloud for LiDAR odometry estimation and mapping, and finally displaying the reconstructed 3D point cloud map in real time. The 2D laser scanner used in our system is a Hokuyo UTM-30LX, which is a 2D time-of-flight laser with a 270° field of view, 30 m maximum measurement range, and 40 Hz scanning rate. The dimensions of the UTM-30LX are 60 × 60 mm, and its mass is 210 g, which makes it ideal for low-weight requirements. The motion controller is responsible for controlling the rotating speed of the 2D laser scanner; it is composed of an ARM STM32 board, a step motor, an encoder, and a communication board. The whole system is powered by a 12 V lithium battery.

3.2. Software System. The software of the whole 3D reconstruction system is composed of two parts, namely, low-level motion control and high-level reconstruction. The low-level motion control module is responsible for controlling the 2D laser scanner to rotate at a constant speed and for reading and sending the encoder data to the PC via a USB-RS232 data line. The high-level reconstruction module is responsible for receiving the scanning data and encoder data, estimating the motion of the sensor, and registering the received point cloud into a 3D map.

3.2.1. Low-Level Motion Control. The 2D scanner is controlled to rotate at a speed of 180°/s by the STM32 embedded controller.
[Figure 1: mechanical design of the 3D sensor, showing the rotating head and the driving body, the axes y and y1 with origins o and o1, and the 60.5 mm offset between them.]
Figure 2: Hardware and functional diagram of the 3D reconstruction system: (a) the hardware system; (b) the functional diagram.
The overall precision of the LiDAR sensor strongly depends on the measurement precision of the instantaneous angle of axis y (as shown in Figure 1), henceforth denoted as β. To determine that angle, a high-precision absolute rotary encoder is used. The encoder’s resolution is 10 bits. In general, each scan point must be related to the exact value of β (continuously provided by the encoder) at the exact time when this point is measured. However, the UTM-30LX does not output point after point in real time. Fortunately, the UTM-30LX triggers a synchronization impulse after it finishes a complete scan. The embedded controller reads the encoder angle data at the exact time it receives a trigger signal. Finally, the embedded controller sends the raw angle and scans to the Surface Pro 3 via a USB-to-serial converter. We assume the rotation speed of the 2D laser scanner during a scan is constant; then we can calculate the exact angle at which each point is measured.

3.2.2. High-Level Reconstruction Software. After receiving the 2D scan and angle data from the 2D laser scanner and the motion controller, the high-level reconstruction algorithm needs to estimate the motion of the sensor based on the received data and construct the 3D map in real time. The whole architecture of the motion estimation and reconstruction software is shown in Figure 3. We first need to convert the 2D scans into a 3D point cloud by computing the Cartesian coordinates of each point. Then, we need to calibrate the sensor, since small errors in the attachment of the 2D device to the rotating mechanism provoke a big distortion in the point cloud. Finally, we need to estimate the motion of the sensor and register the received point cloud into a consistent 3D map.
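As a concrete illustration of the per-point angle computation in Section 3.2.1 (a minimal sketch under the stated constant-speed assumption; the function name and the negative-offset timing model are ours, not the authors' code), β can be interpolated backwards from the encoder value latched at the synchronization pulse. With the 10-bit encoder, β itself is quantized to 360°/1024 ≈ 0.35° steps.

```python
import numpy as np

def per_point_angles(beta_sync, omega, n_points, scan_period):
    """Interpolate the rotation angle beta (deg) for each range return in
    one 2D scan, assuming the head rotates at constant speed omega (deg/s)
    and beta_sync (deg) was latched at the sync pulse ending the scan."""
    # Points are measured before the pulse, so their time offsets are negative.
    t = np.linspace(-scan_period, 0.0, n_points)
    return beta_sync + omega * t

# Example: UTM-30LX scanning at 40 Hz (25 ms per scan), head at 180 deg/s,
# so the head sweeps 4.5 deg during a single 2D scan.
angles = per_point_angles(beta_sync=90.0, omega=180.0,
                          n_points=1081, scan_period=0.025)
```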
[Figure 3: architecture of the high-level reconstruction software. Laser ranges from the UTM-30LX and encoder angles are assembled into a 3D point cloud; the motion estimation algorithm (feature point extraction, feature correspondence finding, and transform integration) outputs the local transform and an undistorted sweep; the mapping algorithm outputs the global transform and key frames; detected loop closures add loop edges that are optimized by ELCH and g2o.]
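To make the data flow in Figure 3 concrete, here is a structural sketch in Python (class and method names are our own illustrative assumptions, not the paper's implementation):

```python
class ReconstructionPipeline:
    """Sketch of the Figure 3 data flow. All names are illustrative
    assumptions for this paper's architecture, not its actual API."""

    def __init__(self, odometry, mapper, pose_graph):
        self.odometry = odometry      # feature extraction + motion estimation
        self.mapper = mapper          # registers undistorted sweeps into the map
        self.pose_graph = pose_graph  # key frames, loop edges, ELCH + g2o

    def process_sweep(self, cloud):
        """cloud: one full sweep already converted to 3D Cartesian points."""
        local_T = self.odometry.estimate(cloud)          # LiDAR odometry
        sweep = self.odometry.undistort(cloud, local_T)  # remove motion skew
        global_T = self.mapper.register(sweep, local_T)  # mapping algorithm
        if self.pose_graph.add_keyframe(sweep, global_T):  # loop detected
            self.pose_graph.optimize()                   # coarse (ELCH), then g2o
        return global_T
```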
Equation (1) converts a range measurement into 3D Cartesian coordinates from the scan angle θ, the head rotation angle ϕ, and the measured range ρ:

$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 & \cos\phi \\ 1 & 0 \\ 0 & \sin\phi \end{bmatrix} \begin{bmatrix} \rho\sin\theta \\ \rho\cos\theta \end{bmatrix}. \tag{1}$$

(2) Geometric Calibration. Practically, due to installation error, small errors in the attachment of the 2D device to the rotating mechanism result in y1 not being perfectly aligned with y, as shown in Figure 5. This error results in a distortion in the point cloud computed with (1). To remove the distortion, we need to calibrate the sensor. For our customized 3D laser scanner, which is a 3D scanner with a 2D laser rangefinder rotating on its optical center, the calibration method proposed in [17] can be used to obtain the mechanical misalignments. After calibration, (2) can be employed instead of (1) to obtain the 3D Cartesian coordinates of a point in the 3D frame (writing $C$ and $S$ for cosine and sine, with misalignment angles $\alpha_0$ and $\beta_0$):

$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} C_{\alpha_0} C_{\beta_0} & -C_{\alpha_0} S_{\beta_0} \\ C_{\Theta} S_{\beta_0} + C_{\beta_0} S_{\alpha_0} S_{\Theta} & C_{\Theta} C_{\beta_0} - S_{\alpha_0} S_{\Theta} S_{\beta_0} \\ S_{\Theta} S_{\beta_0} - C_{\Theta} C_{\beta_0} S_{\alpha_0} & C_{\beta_0} S_{\Theta} + C_{\Theta} S_{\alpha_0} S_{\beta_0} \end{bmatrix} \begin{bmatrix} \rho\, C_{\theta} \\ \rho\, S_{\theta} \end{bmatrix}. \tag{2}$$
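Both projections can be written compactly as follows (a sketch for illustration; `alpha0` and `beta0` stand for the misalignment angles that would come out of the calibration method of [17], and all names are ours):

```python
import numpy as np

def project_ideal(rho, theta, phi):
    """Equation (1): map a range rho (m) at scan angle theta with head
    rotation angle phi (both in radians) to 3D Cartesian coordinates."""
    return np.array([rho * np.cos(theta) * np.cos(phi),
                     rho * np.sin(theta),
                     rho * np.cos(theta) * np.sin(phi)])

def project_calibrated(rho, theta, Theta, alpha0, beta0):
    """Equation (2): projection including the mechanical misalignment
    angles alpha0, beta0 (placeholders for calibrated values [17])."""
    Ca, Sa = np.cos(alpha0), np.sin(alpha0)
    Cb, Sb = np.cos(beta0), np.sin(beta0)
    CT, ST = np.cos(Theta), np.sin(Theta)
    M = np.array([[Ca * Cb,                -Ca * Sb],
                  [CT * Sb + Cb * Sa * ST,  CT * Cb - Sa * ST * Sb],
                  [ST * Sb - CT * Cb * Sa,  Cb * ST + CT * Sa * Sb]])
    return M @ np.array([rho * np.cos(theta), rho * np.sin(theta)])

# Example: a 5 m return at theta = 0.3 rad with the head at Theta = 1.0 rad.
p = project_calibrated(5.0, 0.3, 1.0, alpha0=0.02, beta0=-0.01)
```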
Figure 6: (a) Finding corresponding edge point and edge line; (b) finding corresponding planar point and planar patch.
Assuming that we have point cloud $P_{k+1}$, we can extract edge feature points $\varepsilon_{k+1}$ and planar feature points $\mu_{k+1}$ using (3). We transform the features $\varepsilon_{k+1}$ and $\mu_{k+1}$ to the starting time of sweep $k+1$ and denote them as $\tilde{\varepsilon}_{k+1}$ and $\tilde{\mu}_{k+1}$; then we get the following equation according to (4):

$$\begin{bmatrix} \tilde{X}^{L}_{k+1,i} \\ 1 \end{bmatrix} = T^{L}_{k+1,i} \begin{bmatrix} X^{L}_{k+1,i} \\ 1 \end{bmatrix}, \tag{5}$$

where $X^{L}_{k+1,i}$ is a point $i$ in $\varepsilon_{k+1}$ or $\mu_{k+1}$ and $\tilde{X}^{L}_{k+1,i}$ is the corresponding point in $\tilde{\varepsilon}_{k+1}$ or $\tilde{\mu}_{k+1}$.

(3) Error Metric Computation. After finding the correspondences, we can calculate the distance from a feature point to its correspondence. We recover the motion by minimizing the overall distances of the features.

For an edge point $\tilde{X}^{L}_{k+1,i} = (x_0, y_0, z_0)$, $i \in \tilde{\varepsilon}_{k+1}$, if $(j, l)$ is its corresponding edge line with endpoints $X^{L}_{k,j} = (x_1, y_1, z_1)$ and $X^{L}_{k,l} = (x_2, y_2, z_2)$, $j, l \in M_k$, then the point-to-line distance can be computed as

$$d_{\varepsilon} = \frac{\left|\left(\tilde{X}^{L}_{k+1,i} - X^{L}_{k,j}\right) \times \left(\tilde{X}^{L}_{k+1,i} - X^{L}_{k,l}\right)\right|}{\left|X^{L}_{k,j} - X^{L}_{k,l}\right|}. \tag{6}$$
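The edge residual of (6) is a standard point-to-line distance and can be sketched as follows (illustrative helper; variable names are ours):

```python
import numpy as np

def edge_point_to_line_distance(x_tilde, x_j, x_l):
    """Point-to-line distance d_eps of equation (6): distance from the
    reprojected edge point x_tilde to the line through the two edge
    points x_j, x_l selected from the local map M_k."""
    cross = np.cross(x_tilde - x_j, x_tilde - x_l)
    return np.linalg.norm(cross) / np.linalg.norm(x_j - x_l)

# Example: a point 1 m off a line along the x-axis.
d = edge_point_to_line_distance(np.array([0.5, 1.0, 0.0]),
                                np.array([0.0, 0.0, 0.0]),
                                np.array([2.0, 0.0, 0.0]))
# d == 1.0
```

The planar residual is computed analogously as a point-to-plane distance, and the motion transform is recovered by jointly minimizing all such residuals.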
Figure 9: g2o result: (a) the hypothetical pose graph before optimization; (b) the optimized pose graph.
Figure 11: Absolute trajectory error of our system. Poses of the trajectory are projected on the XY-plane.
1.7, Eigen. The experiment video can be found at https://youtu.be/qXESSWWK1rQ or in the Supplementary Material (available here).

5.1. Motion Capture Experiments. Quantitative analysis can be performed on the trajectory provided that a ground-truth trajectory estimate is available. We therefore evaluate the accuracy of the trajectory in a motion capture environment in which we can track the position of our sensor with high precision. An OptiTrack motion capture system consisting of 8 Prime 13W cameras is capable of tracking the positions of tags containing multiple reflective targets to millimeter precision at more than 200 Hz, which is good enough as ground truth.

In this experiment, the sensor body follows an irregular path with the sensor held at varying heights. The sensor moves at a speed of about 0.5 m/s. The environment containing the OptiTrack system is a 7 × 6 m room, as shown in Figure 10, with a few computer workstations and some furniture. We mounted several reflective tags on top of our LiDAR sensor to provide 6DoF tracking as it is moved around in the room. The estimated trajectory and the ground-truth trajectory from the OptiTrack system are plotted in Figure 10.

Figure 12: Motion capture room.
We quantify the accuracy of the estimated trajectories using the measure of the absolute trajectory error (ATE) proposed by [31]. It is based on determining relative and absolute differences between estimated and ground-truth poses. Global consistency is measured by first aligning and then directly comparing absolute pose estimates (and trajectories):

$$\mathrm{ATE}(F_{1:n}) := \left( \frac{1}{m} \sum_{i=1}^{m} \left\lVert \operatorname{trans}\left(F_i^{\Delta}\right) \right\rVert^2 \right)^{1/2}, \tag{16}$$

with $F_i^{\Delta} := Q_i^{-1} S P_i$, where $S$ is the rigid-body transformation mapping the estimated trajectory $P_{1:n}$ onto the ground-truth trajectory $Q_{1:n}$.
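Equation (16) is straightforward to evaluate once the alignment $S$ has been computed (typically by a least-squares rigid alignment such as Horn's method). A minimal sketch, assuming poses are given as 4×4 homogeneous matrices:

```python
import numpy as np

def ate_rmse(Q, P, S):
    """Absolute trajectory error of equation (16): RMSE of the
    translational parts of F_i = Q_i^{-1} @ S @ P_i, where Q are
    ground-truth poses, P are estimated poses (4x4 homogeneous
    matrices), and S is the rigid alignment mapping P onto Q."""
    errs = []
    for Qi, Pi in zip(Q, P):
        Fi = np.linalg.inv(Qi) @ S @ Pi
        errs.append(np.dot(Fi[:3, 3], Fi[:3, 3]))  # squared translation
    return np.sqrt(np.mean(errs))
```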
In Figure 11, we show the trajectory estimates obtained from our system as well as the deviation from the ground-truth trajectory. The mean translational and rotational errors are 0.049 m and 0.536°, respectively. From Figures 10 and 12, one can see that the experimental environment is very clean: most of the time, there are only smooth walls and glass windows. These are very challenging for a laser-based reconstruction system, since there are only a few geometric features and laser sensors do not work well with glass. In the future, we will incorporate visual information to further improve the robustness and accuracy of the system in geometrically feature-less environments.

Figure 13: 3D laser scanner on a TurtleBot.

Figure 14: A comparison between the trajectory estimated by the 3D reconstruction system and the trajectory estimated by the 2D laser SLAM system, used as ground truth, in an office environment.

5.2. Mobile Mapping Experiments. Though in the motion capture room we can get accurate ground truth, the environment is simple and small. In this experiment, we put our sensor on a ground mobile robot, as shown in Figure 13. The robot is driven through a big office environment with tables, chairs, and computers. This environment was chosen to test the performance of our system in big and complex indoor environments. Since it is difficult to get ground truth in big
indoor environments, we compare the estimated 6DoF trajectory with the ground-truth trajectory computed from the robot’s 2D laser SLAM system to show the performance of our system.

The top figures of Figure 14 depict the 2D view of the estimated trajectory of our sensor and the trajectory computed from the 2D laser SLAM. The total length is about 60 m. As can be seen, our estimated trajectory aligns accurately with the 2D SLAM trajectory most of the time. The difficult motion for accurate recovery is fast spot turns, when portions of the sweeps might not be sufficiently constrained. For example, when the sensor is turning around a corner where there is only one smooth wall, it is difficult to recover the translation since the problem becomes a degenerate one. The bottom figure of Figure 14 shows an oblique view of the 3D point cloud generated from an experiment, overlaid with the points from the horizontal laser. In the zoomed-in region, we can see the undistorted shapes of the environments. Besides, from the top left figure of Figure 14, we can see that our 3D point cloud model aligns with the 2D occupancy grid map very accurately; therefore, our system works very well in complex indoor environments. The mean translational and rotational errors in these experiments are 0.023 m and 0.373°, respectively.
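One common way to detect such degenerate cases (our illustration, not a method described in this paper) is to inspect the eigenvalues of the Gauss-Newton normal-equation matrix JᵀJ built from the feature residuals; a near-zero eigenvalue marks a direction in which the pose is unobservable:

```python
import numpy as np

def degenerate_directions(J, threshold=1e-3):
    """Return the eigenvectors of J^T J whose eigenvalues fall below
    threshold; J stacks the Jacobians of all feature residuals with
    respect to the 6DoF pose increment. Along a single smooth wall,
    the translation parallel to the wall shows up here."""
    H = J.T @ J
    w, V = np.linalg.eigh(H)          # eigenvalues ascending
    return V[:, w < threshold * w.max()]
```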
5.3. Hand-Held Experiments. We also tested our sensor and algorithms by carrying our system and walking around in indoor environments. We conducted tests to measure the accumulated drift of the motion estimate. Since it is difficult to get 6DoF ground-truth data in large indoor environments, we chose a 30 m × 20 m rectangular hallway which contains a closed loop. This environment is very challenging for our system since the long corridor contains few geometric features that our algorithms can use. We started our sensor from one place, then walked around, and finally went back to the same place. The motion estimates generate a gap between the starting and finishing positions, which indicates the amount of drift. In practice, there is an accumulative error. The experimental results show that the growth of translational errors is less than 2% of the distance traveled and that rotational errors are less than 0.3° per meter. Without loop-closure detection and global optimization, the map would become inconsistent after traveling a big loop. However, since our system has loop-closure detection and coarse-to-fine graph optimization, the error can be minimized to make the map consistent, as shown in Figure 15. The left figure of Figure 15 shows the map before optimization and the right figure shows the map after optimization. From the reconstructed map in the red box, it can be seen that the map shows obvious inconsistency before using the closed-loop correction algorithm. After optimization, the inconsistency is minimized. The green line is the precorrection trajectory and the red line is the closed-loop-corrected trajectory. This experiment shows the importance of loop-closure detection and global optimization for long-distance movement.
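The coarse step of such a correction can be illustrated in the spirit of ELCH (a simplified sketch of the general idea, not the authors' implementation): the accumulated loop error is distributed along the trajectory with weights growing from the first to the last pose, before the full g2o optimization refines the result.

```python
import numpy as np

def distribute_loop_error(positions, loop_error):
    """Coarsely close a loop by spreading the end-pose error over the
    whole trajectory (translation only, for illustration). positions is
    an (n, 3) array; loop_error is the gap between the final estimated
    pose and the revisited starting place."""
    n = len(positions)
    weights = np.linspace(0.0, 1.0, n)[:, None]  # 0 at start, 1 at end
    return positions - weights * loop_error[None, :]

# Example: a drifted square loop ending 0.4 m away from its start.
traj = np.array([[0, 0, 0], [10, 0, 0], [10, 10, 0.2], [0, 10, 0.3],
                 [0.4, 0.1, 0.3]], dtype=float)
closed = distribute_loop_error(traj, traj[-1] - traj[0])
# closed[-1] now coincides with the starting position.
```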
Figure 15: Indoor loop-closure test: left, before optimization; right, after optimization.

Figure 16: Comparison of odometry estimation using the scan-to-sweep method and the scan-to-local-map method: the left figures are the results of the scan-to-sweep method, which has a bigger accumulative drift; the right figures are the results of our method, which has a much smaller drift.

5.4. Odometry Estimation Comparison. In this part, we did an experiment to show the improvement of our scan-to-local-map method compared to the original scan-to-sweep method. We carried out the experiment in a long-corridor environment of one building in our university. The long corridor is around 200 meters and has two closed loops. There
are some geometric features in the corridor. LOAM uses a scan-to-sweep method to match the feature points, which, however, leads to a big accumulative error and makes the map distorted. We use a scan-to-local-map matching method, which can reduce the local accumulative error and distortion by using more information when matching corresponding points, as sketched below.
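A minimal sketch of the difference (our own illustration; `LocalMap` and the k-d tree usage are assumptions, not the paper's code): instead of matching each new sweep only against the previous sweep, the features of the last several key frames are kept in a k-d tree and the new sweep is matched against that local map.

```python
import numpy as np
from scipy.spatial import cKDTree

class LocalMap:
    """Keeps feature points of the last max_frames sweeps; matching
    against this map constrains the pose better than matching against
    only the single previous sweep (scan-to-sweep)."""

    def __init__(self, max_frames=10):
        self.frames = []           # list of (n_i, 3) feature arrays
        self.max_frames = max_frames
        self.tree = None

    def add_sweep(self, features):
        self.frames.append(features)
        if len(self.frames) > self.max_frames:
            self.frames.pop(0)     # drop the oldest sweep
        self.tree = cKDTree(np.vstack(self.frames))

    def correspondences(self, query_points, max_dist=0.5):
        """Nearest local-map point for each query feature point."""
        d, idx = self.tree.query(query_points, distance_upper_bound=max_dist)
        valid = np.isfinite(d)     # misses return d = inf
        return idx[valid], valid
```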
The experimental results are shown in Figure 16. The left-column figures are the results of the LOAM method, while the right figures are ours. The first row shows the map in top view, while the second row shows the map in side view. For fairness, we disabled the loop-closure function in our system and ran both algorithms on the same dataset. From the results, one can see that there is a big drift of the LOAM algorithm in the map, as denoted by the red circle. For the same dataset, however, our method has a smaller drift. We can also see from the bottom figures in Figure 16 that our method has a much smaller accumulative error in the z-axis direction. The experimental results demonstrate that our scan-to-local-map method has a lower drift.

6. Conclusions

This paper has described a 3D laser scanner with a 360° field of view. Our low-cost 3D laser rangefinder consists of a 2D LiDAR scanner continuously rotating around its optical center, which is suitable for 3D perception in robotics and reconstruction in other applications. We also described parallel motion estimation, mapping, and global pose optimization algorithms that enable our system to output the 6DoF sensor pose and reconstruct a consistent 3D map in real time. In the future, we will continue to improve the performance of our system in both hardware and software. For the hardware, we will make the sensor more compact and integrate it with other sensors such as an IMU or a camera. For the software, we will consider sensor fusion methods to further reduce the drift of the estimation algorithm.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (no. 61573091 and no. 61673341), the Open Research Project of the State Key Laboratory of Industrial Control Technology, Zhejiang University, China (no. ICT170302), the Fundamental Research Funds for the Central Universities (N172608005), and the Doctoral Scientific Research Foundation of Liaoning Province (no. 20170520244).