Speed and Accuracy Tradeoff for LiDAR Data Based Road Boundary Detection

Guojun Wang, Jian Wu, Rui He, and Bin Tian
Abstract—Road boundary detection is essential for autonomous vehicle localization and decision-making, especially under GPS signal loss and lane discontinuities. For road boundary detection in structured environments, obstacle occlusions and large road curvature are two significant challenges. However, an effective and fast solution to these problems has remained elusive. To solve these problems, a speed and accuracy tradeoff method for LiDAR-based road boundary detection in structured environments is proposed. The proposed method consists of three main stages: 1) a multi-feature based method is applied to extract feature points; 2) a road-segmentation-line-based method is proposed for classifying left and right feature points; 3) an iterative Gaussian Process Regression (GPR) is employed for filtering out false points and extracting boundary points. To demonstrate the effectiveness of the proposed method, the KITTI dataset is used for comprehensive experiments, and the performance of our approach is tested under different road conditions. Comprehensive experiments show that the road-segmentation-line-based method can classify left and right feature points on structured curved roads, and that the proposed iterative Gaussian Process Regression can extract road boundary points on varied road shapes and traffic conditions. Meanwhile, the proposed road boundary detection method achieves real-time performance with an average of 70.5 ms per frame.

Index Terms—3D-LiDAR, autonomous vehicle, object detection, point cloud, road boundary.

Manuscript received December 27, 2019; revised March 21, 2020, June 2, 2020; accepted June 25, 2020. This work was supported by the Research on Construction and Simulation Technology of Hardware in Loop Testing Scenario for Self-Driving Electric Vehicle in China (2018YFB0105103J). Recommended by Associate Editor Lei Shu. (Corresponding author: Jian Wu.)
Citation: G. J. Wang, J. Wu, R. He, and B. Tian, "Speed and accuracy tradeoff for LiDAR data based road boundary detection," IEEE/CAA J. Autom. Sinica, vol. 8, no. 6, pp. 1210–1220, Jun. 2021.
G. J. Wang, J. Wu, and R. He are with the State Key Laboratory of Automotive Simulation and Control, Jilin University, Changchun 130022, China (e-mail: [email protected]; [email protected]; [email protected]).
B. Tian is with the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, and also with the Qingdao Academy of Intelligent Industries, Shandong, China (e-mail: [email protected]).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/JAS.2020.1003414

I. Introduction

In the autonomous vehicle (AV) field, environmental perception is a prerequisite for other functional modules, and road boundary detection is an important part of it. Road boundaries can be used to distinguish the on-road area from the non-road area, which provides an important basis for AVs to understand scenes and make decisions. In addition, when GPS data is inaccurate, or perhaps has been lost in an urban environment, the road boundary can be used as an effective feature to locate the AV. Therefore, accurate and reliable road boundary detection is essential, and research communities have invested much time into this area. According to the sensors used, current methods can be divided into vision-based and LiDAR-based methods.

With the development of computer vision technology, vision-based methods are widely used in road boundary detection [1]–[3]. Effective detection results can be achieved using vision-based methods, but performance is greatly affected by light and weather conditions. In addition, it is difficult to obtain accurate depth information, which makes it difficult to meet the needs of autonomous driving applications.

Compared with vision sensors, LiDAR can provide accurate depth information and is immune to illumination and shadow. Thus, LiDAR has become an indispensable sensor for AVs at this stage. In [4], a method based on the 2D-LiDAR interactive multi-model was used to detect road boundaries. In [5], road boundary points were extracted as line segments in polar coordinates based on 2D-LiDAR. However, due to the sparsity of 2D-LiDAR data, it is difficult to meet the needs of environmental perception. Compared with 2D-LiDAR, 3D-LiDAR has the advantage of a 360-degree scanning range, which provides a large quantity of data, known as a point cloud, and thus plays an increasingly important role in AVs. In previous Defense Advanced Research Projects Agency (DARPA) challenges, many teams used 3D-LiDARs for environmental perception [6], [7]. In recent years, Waymo, Baidu, and other companies have chosen 3D-LiDAR as the main sensor for obtaining environmental information [8]–[10].

Although 3D-LiDAR has many advantages, there are still two great challenges for road boundary detection: the classification of road boundary points on curved roads and obstacle occlusions. On curved roads, it is difficult to distinguish left and right boundary points even if all boundary points are extracted, which also has an important effect on autonomous driving safety. Besides, when obstacle occlusions exist, false points caused by obstacles will also affect road boundary detection.

In order to solve these problems, we propose a speed and accuracy tradeoff method to extract road boundaries from the point cloud. The proposed method can filter out false points caused by obstacles under non-congested traffic conditions and can correctly classify left and right boundary points on curved roads. The main contributions of this paper are as follows:
1) This paper presents a road-segmentation-line-based classification method for classifying feature points. The feature points are rough road boundary candidate points extracted by spatial and geometrical features. The road segmentation line is determined by a beam band model and an improved peak-finding algorithm. The method can classify left and right feature points accurately on curved roads up to 70 m away.

2) This paper proposes a distance filter and a random sample consensus (RANSAC) filter for candidate point extraction and seed point extraction. The candidate points and seed points are used for the subsequent feature point filtering based on an iterative Gaussian Process.

3) This paper proposes an iterative Gaussian Process Regression (GPR) based feature point filtering method. The GPR algorithm can be applied to various road shapes without assuming that road boundaries follow parametric models. At the same time, the algorithm can effectively remove false points caused by obstacle occlusions. Because GPR is a nonparametric model, it significantly enhances the adaptability to various road shapes and the robustness to obstacle occlusions.

The road-segmentation-line-based classification method and the iterative GPR effectively address the problems of curved roads and obstacle occlusions in structured environments. The proposed road boundary detection method achieves excellent accuracy while ensuring real-time performance.

The remainder of this paper is organized as follows. Section II reviews related research. Section III introduces the methodologies for road boundary detection. Section IV presents comprehensive experiments and evaluations. Section V summarizes the major contributions of this research and presents future work.

II. Related Work

A. Related Work on Feature Points Classification

For the classification of left and right road boundary points, most of the existing literature focuses on the problem of straight road structures, in which left and right boundary points are classified just according to lateral coordinates, such as in [11]–[15]. Xu et al. [16] extracted feature points based on energy functions and classified left and right boundary points using the least-cost path model. This method can work well on curved roads, but it requires manual addition of source points, making the algorithm unusable in practice. In [17], supervoxels were used to obtain boundaries, and then trajectory data was used to classify left and right boundary points, which can only work offline and is not suitable for on-line applications. In [18] and [19], in order to solve the problem of boundary detection on curved roads, clustering methods were proposed to divide left and right boundary points. However, these methods need to iterate over each possible segmentation angle to maximize the classification score, which requires a lot of computation. In [20], road boundary points are searched and separated by the predicted trajectory of autonomous vehicles. In this method, the cumulative error of the predicted trajectory would increase, which would affect the accuracy of road boundary point detection. In [21], the parametric active contour and snake model were used to extract the left and right road boundaries, and navigation information was necessary to initialize the snake model.

B. Related Work on Feature Points Filtering

For the problem of obstacle occlusions, various filtering methods have been proposed. Sun et al. [11] and Yang et al. [14] first classified feature points into segments with the k-nearest neighbor (KNN) method; segments of fewer than three points were then considered false points and eliminated, which may filter out true isolated boundary points. In [12], a RANSAC line fitting algorithm was used to filter out false points, which assumed that roads were straight, making it unsuitable for curved roads. In [15], a regression filter was introduced to make the detection robust to occlusions. In [19], a RANSAC quadratic polynomial fitting algorithm was used to remove false points. References [12], [15], and [19] modeled road boundaries as linear or quadratic polynomial parameterized models and were not suitable for various road shapes. In [20], road boundary points are extracted based only on spatial features, which is not enough to remove false points caused by obstacles. In [22], feature points were extracted based on local normal saliency, and the distance to the trajectory was used to filter out false points, which cannot remove false points inside the road. In [23], region-growing-based filtering was used to extract true road boundary points based on the similarity of height and intensity. However, in a structured environment, the materials used for road boundaries and road surfaces are usually the same, and the intensity is not effective. In [24], multiple parameterized RANSAC models were used to extract road boundary points.

III. Methodology

As shown in Fig. 1, the proposed method consists of four main steps. It takes a frame of raw point cloud as input and outputs road boundary points.

Fig. 1. The pipeline of the road boundary detection method: ground points segmentation, feature points extraction, feature points classification, and feature points filtering.

A. Ground Points Segmentation

The proposed method is used to process point clouds from a Velodyne HDL-64E LiDAR [25]. The LiDAR coordinate system is defined as shown in Fig. 2.
The directions are given from the driver's view: X forward, Y left, Z up, and the coordinate system is right-handed. In this paper, we consider the range of [−3, 1] × [−40, 40] × [−70, 70] meters along the Z-axis, Y-axis, and X-axis, respectively, as the detection region for road boundaries.

Fig. 2. The Cartesian coordinate system of the LiDAR.

Since road boundaries are a part of the ground, off-ground and on-ground points can be separated by ground segmentation. In this paper, a piecewise plane-based fitting method is used. For convenience, we use P to represent a frame of raw point cloud. First, point cloud P is divided into several segments along the x-axis. Then, a RANSAC plane fitting method is applied in each segment to extract on-ground points [26]. Because the point cloud is divided into segments, we assume that the slope of the ground in each segment does not fluctuate greatly and can be approximated by a plane. Besides, we focus on road boundary detection, so the ground point segmentation does not require high precision; the distance threshold is set to 28 cm, which is determined through a trial-and-error method. The distance threshold should ensure that the ground points contain all road boundary points, and it generally lies in the range of 15 cm to 30 cm.

Fig. 3 shows the result of a ground point segmentation, where on-ground points are used to extract road boundary points, and off-ground points are used to compute road segmentation lines for classifying road boundary feature points. For convenience, let P_on denote on-ground points and P_off off-ground points. All point cloud illustrations in this paper are shown from the top view.

Fig. 3. An example of ground point segmentation. On-ground points are in blue, off-ground points are in black.
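To make the piecewise fitting concrete, the following is a minimal NumPy sketch of the idea rather than the authors' implementation: the cloud is sliced along x and a plane is fit in each slice with a basic RANSAC loop. The 10 m slice length and the iteration count are illustrative assumptions; only the 28 cm inlier threshold comes from the text above.

```python
import numpy as np

def fit_plane_ransac(pts, dist_thresh, rng, iters=100):
    """Fit one plane to an (N, 3) array with a basic RANSAC loop and
    return the boolean inlier mask (points within dist_thresh of the plane)."""
    best = np.zeros(len(pts), dtype=bool)
    for _ in range(iters):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-6:          # degenerate (nearly collinear) sample
            continue
        n = n / np.linalg.norm(n)
        dist = np.abs((pts - p0) @ n)         # point-to-plane distance
        inliers = dist < dist_thresh
        if inliers.sum() > best.sum():
            best = inliers
    return best

def segment_ground(cloud, seg_len=10.0, dist_thresh=0.28, seed=0):
    """Slice the cloud along x and RANSAC-fit a ground plane per slice.
    Returns (P_on, P_off): on-ground and off-ground points."""
    rng = np.random.default_rng(seed)
    on_ground = np.zeros(len(cloud), dtype=bool)
    x = cloud[:, 0]
    for x0 in np.arange(x.min(), x.max(), seg_len):
        idx = np.where((x >= x0) & (x < x0 + seg_len))[0]
        if len(idx) < 10:                     # too few points to fit a plane
            continue
        on_ground[idx[fit_plane_ransac(cloud[idx], dist_thresh, rng)]] = True
    return cloud[on_ground], cloud[~on_ground]
```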
B. Feature Points Extraction

The algorithm of feature point extraction is mainly inspired by [27] and [28]. In order to extract as many feature points as possible, three spatial features with loose thresholds are used. The on-ground points P_on are divided into 64 scan layers, and each layer contains all points from the corresponding laser. For convenience, let P_li = [x_li, y_li, z_li] denote each point, where l is the corresponding layer ID. The spatial features applied in this paper are described in the following.

1) Height Difference

Let Z_max and Z_min denote the maximum and minimum z values of the neighbors of p_li, respectively. Thus, the height difference feature is defined as

T_{height1} \le Z_{max} - Z_{min} \le T_{height2}

\sqrt{\frac{\sum (z_{li} - \mu)^2}{n_{height}}} \ge T_{height3}

where µ = (Σ z_li)/n_height, n_height is the number of neighbors, z_li is the z value of each neighbor with layer ID l, and T_height1, T_height2, and T_height3 are thresholds.

2) Smoothness

The feature proposed by [28] is used to describe the smoothness of the area around a point. For any point p_li, let S represent its neighbors. Thus, the smoothness feature is defined as

s = \frac{1}{|S| \cdot \lVert p_{li} \rVert} \left\lVert \sum_{p_{lj} \in S,\, j \ne i} (p_{li} - p_{lj}) \right\rVert, \quad s \ge T_{smoothness}

where s is the smoothness value of p_li, |S| is the cardinality of S, and T_smoothness is the threshold.

3) Horizontal Distance

The horizontal distance feature is proposed by [27] to represent the horizontal distance between two adjacent points in the same layer. It sets a horizontal distance threshold δ_xy,l to determine whether point p_li is a feature point. It is defined by

\delta_{xy,l} = H_s \cdot \cot\theta_l \cdot \frac{\pi \theta_a}{180}

where H_s is the absolute value of the height of point p_li, θ_l is the vertical azimuth of scanning layer l, and θ_a is the horizontal angular resolution of the LiDAR. If p_li is selected as a feature point, the horizontal distance between p_li and its adjacent points should be smaller than δ_xy,l.

All the features are tested in experiments, and loose thresholds are determined by a trial-and-error method. The height thresholds T_height1, T_height2, and T_height3 are determined based on the height jump of road boundaries. T_height1 and T_height3 are generally drawn from [0.01, 0.05], and T_height2 is generally drawn from [0.15, 0.35]. The smoothness threshold is determined based on the smoothness of road boundary areas, so T_smoothness is generally drawn from [0.001, 0.005].
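The three loose-threshold tests can be sketched for a single scan layer as follows. This is an illustrative reading of the features above, not the authors' code: the neighborhood size k, the default horizontal resolution theta_a, and the concrete threshold values are assumptions chosen from the ranges quoted in this section.

```python
import numpy as np

def extract_features_layer(layer_pts, theta_l, theta_a=0.17, k=5,
                           t_h1=0.03, t_h2=0.25, t_h3=0.03, t_smooth=0.003):
    """layer_pts: (N, 3) on-ground points of one laser layer, ordered by azimuth.
    theta_l: vertical azimuth of the layer (deg); theta_a: horizontal resolution (deg).
    Returns a boolean mask marking feature-point candidates."""
    n = len(layer_pts)
    keep = np.zeros(n, dtype=bool)
    # horizontal distance threshold delta_xy,l of Section III-B-3 (per point)
    h_s = np.abs(layer_pts[:, 2])
    delta_xy = h_s * np.abs(1.0 / np.tan(np.radians(theta_l))) * np.radians(theta_a)
    for i in range(k, n - k):
        nb = layer_pts[i - k:i + k + 1]       # 2k neighbours plus the point itself
        zn = nb[:, 2]
        # 1) height difference and height spread inside the neighbourhood
        dz = zn.max() - zn.min()
        sigma = np.sqrt(np.mean((zn - zn.mean()) ** 2))
        height_ok = (t_h1 <= dz <= t_h2) and (sigma >= t_h3)
        # 2) LOAM-style smoothness: s = ||sum_j (p_i - p_j)|| / (|S| * ||p_i||)
        diff = 2 * k * layer_pts[i] - (nb.sum(axis=0) - layer_pts[i])
        s = np.linalg.norm(diff) / (2 * k * np.linalg.norm(layer_pts[i]) + 1e-9)
        smooth_ok = s >= t_smooth
        # 3) horizontal distance to the adjacent points in the same layer
        d_prev = np.linalg.norm(layer_pts[i, :2] - layer_pts[i - 1, :2])
        d_next = np.linalg.norm(layer_pts[i, :2] - layer_pts[i + 1, :2])
        dist_ok = max(d_prev, d_next) < delta_xy[i]
        keep[i] = height_ok and smooth_ok and dist_ok
    return keep
```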
C. Feature Points Classification

In order to solve the problem of classification of road boundary points, especially on curved roads, this paper proposes a road-segmentation-line-based method to classify feature points, which is mainly inspired by [27]. The principle of the method is that a beam model [29] is established based on the off-ground points P_off, from which the length of each beam is calculated. The longest beams in the front and rear regions of the AV are regarded as road segmentation lines, which indicate the direction of the road. Compared with [27], an improved peak-finding algorithm is proposed to filter out outliers and determine the two truly dominant extremes in the distance function δ(k). Thus, the proposed improved peak-finding algorithm is more robust to the sparsity of the point cloud and losses of laser returns. Then, feature points P_feature are divided into left and right feature points according to the road segmentation lines. Compared with the clustering methods used in our previous work [19], the proposed method can achieve real-time performance, and it can classify feature points accurately under diverse road conditions.

1) Beam Band Model

The beam model was proposed by Thrun [29] in 2005 and has been widely used in the field of robotics. In order to determine an accurate road segmentation angle, the angular resolution θ_model of the beam model is set to 1 degree. In classical beam models, the beam length is the distance from the launching point to the nearest point. In this paper, we expand the beam into a beam band Z_k:

Z_k = \left\{ (x, y) \,\middle|\, (k-1) < \arctan\frac{y - y_b}{x - x_b} \cdot \frac{180}{\pi} \le k \right\}    (1)

where (x_b, y_b) is the launching point, which is the LiDAR origin in this paper. The LiDAR scans clockwise, where zero degrees is on the x-axis, (x, y) denotes off-ground points, and k ∈ {1, 2, ..., 360}. The beam length of each band is defined as the shortest distance from the points in this band to the launching point, that is

d(k) = \min_i \sqrt{(x_i - x_b)^2 + (y_i - y_b)^2}    (2)

where (x_i, y_i) ∈ (Z_k ∩ P_off). The beam length of each band d(k) is normalized to get the distance function δ(k):

\delta(k) = \frac{d(k)}{\max_{(x, y) \in Z_k} \sqrt{(x - x_b)^2 + (y - y_b)^2}}    (3)

where, if there is no point in the k-th band, δ(k) = 1. In order to determine road segmentation lines, especially on roads with large curvatures, the longest beam is used as the road segmentation line in the front region ({0 ≤ k ≤ 90} ∪ {270 ≤ k < 360}) and the rear region ({90 ≤ k < 270}) of the AV. After that, an improved peak-finding method is used to determine the road segmentation line.
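A compact sketch of Eqs. (1)–(3) is given below, assuming the LiDAR origin as the launching point. Using arctan2 to index bands counter-clockwise over the full circle is an implementation choice of this sketch, not something fixed by the text.

```python
import numpy as np

def beam_distance_function(p_off_xy, origin=(0.0, 0.0)):
    """Normalized beam-band distance function delta(k), k = 0..359 (Eqs. (1)-(3)).
    Bands that contain no off-ground point keep delta = 1."""
    xb, yb = origin
    dx = p_off_xy[:, 0] - xb
    dy = p_off_xy[:, 1] - yb
    ang = np.degrees(np.arctan2(dy, dx)) % 360.0       # 0 deg along the x-axis
    band = np.minimum(np.floor(ang).astype(int), 359)  # 1-degree beam bands
    r = np.hypot(dx, dy)
    delta = np.ones(360)
    for k in range(360):
        rk = r[band == k]
        if rk.size:
            delta[k] = rk.min() / rk.max()             # d(k) normalized as in Eq. (3)
    return delta
```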
2) Improved Peak-Finding Algorithm

The peak-finding algorithm was first proposed in [30] to find the dominant extremes that represent road directions in aerial images. For example, Figs. 4 and 5 show a beam model based on off-ground points P_off and the corresponding distance function δ(k). It is clear that, because point cloud data is sparse, the distance function has many false dominant extremes (δ(k) = 1). However, we need only two true dominant extremes to determine the road segmentation line. To solve this problem, we present an improved peak-finding algorithm. The algorithm consists of two steps: first, a median filter is applied to the distance function; then, each dominant extreme is evaluated based on its extreme width.

Fig. 4. Beam model based on off-ground points. The blue lines are the middle lines of each beam band.

Fig. 5. An example of the distance function for the beam band model. (a) Before median filtering. (b) After median filtering.

a) Median filtering

Median filtering is a nonlinear digital filtering technique which is often used to remove noise from an image or signal [31]. With the beam model established above, a distance function δ(k) is obtained, and each δ(k) is replaced by the median of its corresponding neighbors. Figs. 5(a) and 5(b) show the distance function before and after filtering, respectively.

b) Evaluating dominant extremes based on extreme width

For a dominant extreme δ(k), the extreme width w is defined as follows:

w = w_r + w_l    (4)

w_l = |k_l - k|, \quad w_r = |k_r - k|    (5)

where k_l is the nearest beam band whose distance δ(k_l) is less than δ(k) on the left of k, and k_r is the nearest band whose distance δ(k_r) is less than δ(k) on the right of k. The evaluation of dominant extremes is as follows:

i) Remove the dominant extremes whose w is less than a pre-defined threshold T_w.
ii) Merge the dominant extremes that are close. For two extremes δ(k_i) and δ(k_j), if k_i − k_j < T_distance or k_i − k_j + 360 < T_distance, the extreme whose width w is larger is chosen. If the extreme widths are the same, the dominant extreme whose value of |w_l − w_r| is smaller is chosen.
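The two-step evaluation can be sketched as follows; the circular median window, the local-maximum peak detection rule, and the greedy merge below are assumptions of this sketch rather than the authors' exact procedure.

```python
import numpy as np

def find_dominant_extremes(delta, t_w=5, t_distance=10, med_k=5):
    """Median-filter delta(k), then keep dominant extremes ranked by extreme width.
    Returns the surviving band indices, widest first."""
    n = len(delta)
    # a) circular median filtering of the distance function
    pad = med_k // 2
    padded = np.concatenate([delta[-pad:], delta, delta[:pad]])
    filt = np.array([np.median(padded[i:i + med_k]) for i in range(n)])
    # candidate extremes: circular local maxima of the filtered function
    peaks = [k for k in range(n)
             if filt[k] >= filt[(k - 1) % n] and filt[k] >= filt[(k + 1) % n]]
    # b) extreme width w = wl + wr (Eqs. (4)-(5))
    def widths(k):
        wl = next((i for i in range(1, n) if filt[(k - i) % n] < filt[k]), n)
        wr = next((i for i in range(1, n) if filt[(k + i) % n] < filt[k]), n)
        return wl, wr
    scored = []
    for k in peaks:
        wl, wr = widths(k)
        if wl + wr >= t_w:                       # i) drop extremes narrower than T_w
            scored.append((k, wl + wr, abs(wl - wr)))
    # ii) merge extremes closer than T_distance, preferring the wider, more symmetric one
    scored.sort(key=lambda t: (-t[1], t[2]))
    kept = []
    for k, w, asym in scored:
        if all(min(abs(k - k2), n - abs(k - k2)) >= t_distance for k2 in kept):
            kept.append(k)
    return kept
```

The front road segmentation line is then taken from the surviving extreme inside {0–90} ∪ {270–359}, and the rear one from the surviving extreme inside {90–270}.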
According to the above method, the respective dominant extremes are found in the two intervals {0 ≤ k ≤ 90} ∪ {270 ≤ k < 360} and {90 ≤ k < 270}. The middle lines of the bands that correspond to the two extremes are used as road segmentation lines. Based on the road segmentation line, the feature points located on the left side of the road segmentation line are classified as one class, and the feature points located on the right side of the road segmentation line are classified as another class.
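A small sketch of this side test is shown below, assuming the segmentation line is represented by the middle-line angles of the front and rear dominant bands; the sign convention is an assumption of the sketch.

```python
import numpy as np

def classify_left_right(feature_pts, k_front, k_rear):
    """Split feature points (N, 3) into left/right of the segmentation polyline
    built from the middle lines of the front and rear dominant bands (degrees)."""
    xy = feature_pts[:, :2]
    ang = np.degrees(np.arctan2(xy[:, 1], xy[:, 0])) % 360.0
    in_front = (ang <= 90.0) | (ang >= 270.0)            # front region of the AV
    beam = np.where(in_front, k_front, k_rear) - 0.5     # middle line of the band
    rad = np.radians(beam)
    beam_dir = np.stack([np.cos(rad), np.sin(rad)], axis=1)
    # 2D cross product: positive = point lies to the left of the beam direction
    side = beam_dir[:, 0] * xy[:, 1] - beam_dir[:, 1] * xy[:, 0]
    side = np.where(in_front, side, -side)   # keep "left of the road" consistent behind the AV
    return feature_pts[side > 0], feature_pts[side <= 0]
```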
In order to demonstrate the effectiveness of the proposed method, experiments are carried out under three different road curvature conditions, as shown in Fig. 6, where the green line represents the road segmentation line, the left feature points P_feature,l are blue, and the right feature points P_feature,r are red. We can see that the proposed improved peak-finding method can determine road segmentation lines accurately on differently curved roads.

Fig. 6. Results of feature points classification on three different road curvatures. The green line represents the road segmentation line, the left feature points are in blue, and the right feature points are in red. (a) Small. (b) Moderate. (c) Large.

D. Feature Points Filtering

After feature points are extracted and classified, there are still many false points, caused by vehicles, pedestrians, railway tracks, adjacent roads, and so on. Besides, even in structured road environments, the shape of road boundaries is irregular and cannot be accurately modeled with parametric models. The GPR model is a nonparametric model, which has both powerful approximation and outlier rejection abilities. Based on this, an iterative GPR algorithm is proposed to model the road boundary and filter out false points. Before the iterative GPR, a filtering-based method is proposed in this paper to extract candidate points and seed points.

1) Candidate Points and Seed Points Extraction

The filtering-based method includes a distance filter and a RANSAC filter. Feature points P_feature,l and P_feature,r are filtered by the distance filter to obtain candidate points P_candidate,l and P_candidate,r, and seed points P_seed,l and P_seed,r are then obtained by the RANSAC filter.

a) Distance filter: The distance filter is used to remove false points caused by obstacles outside the road. In general, we assume that road boundaries are the nearest obstacles to vehicles. So, the method is to divide feature points into segments based on x coordinates, and to find the nearest point to the x-axis of the LiDAR as a candidate point in each segment:

S_i = \left\{ (x, y) \,\middle|\, (i-1) \le \frac{x}{w_s} < i \right\}, \quad i \in \mathbb{Z}    (6)

f_{dist}(S_i) = \arg\min_{S_i} y_{i,j}    (7)

where S_i is segment i, y_{i,j} is the y coordinate of the j-th point of segment i, w_s is the width of the segments, which needs to be given, and f_dist(S_i) is the index of the nearest point in segment i. For each segment, only one nearest point is preserved. The results of the distance filter for P_feature,l and P_feature,r are P_candidate,l and P_candidate,r, as shown in Fig. 7.
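A minimal sketch of the distance filter is given below; the segment width w_s = 2 m is an illustrative value, and |y| is used as the lateral distance to the LiDAR x-axis when the filter is run separately on the left and right feature sets.

```python
import numpy as np

def distance_filter(feature_pts, w_s=2.0):
    """Keep, in every w_s-metre slice along x, only the point nearest to the
    LiDAR x-axis (smallest |y|), following Eqs. (6)-(7)."""
    seg = np.floor(feature_pts[:, 0] / w_s).astype(int)
    keep = []
    for s in np.unique(seg):
        idx = np.where(seg == s)[0]
        keep.append(idx[np.argmin(np.abs(feature_pts[idx, 1]))])
    return feature_pts[np.array(keep)]
```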
b) RANSAC filter: In order to extract seed points, the false points caused by obstacles inside the road, such as pedestrians and vehicles, must be filtered out. The filter models road boundaries as a quadratic polynomial, and a RANSAC algorithm is applied to fit the model [26]. Then, all the points whose distances from the fitted model are less than the threshold are used as seed points. The results of the RANSAC filter for P_candidate,l and P_candidate,r are P_seed,l and P_seed,r, as shown in Fig. 8. It is clear that the RANSAC filter also removes some true boundary points because of the assumption that road boundaries fit a quadratic polynomial model. In order to improve the robustness of road boundary detection, iterative GPR is further applied to extract road boundary points.

Fig. 8. Seed points (left is in red, right is in blue).
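The quadratic RANSAC seed filter can be sketched as follows; the 0.3 m inlier threshold and the iteration count are illustrative assumptions, not values fixed by the text.

```python
import numpy as np

def ransac_quadratic_filter(candidates, dist_thresh=0.3, iters=200, seed=0):
    """Fit y = a*x^2 + b*x + c to candidate boundary points with RANSAC and keep
    points within dist_thresh of the fitted curve as seed points."""
    if len(candidates) < 3:
        return candidates
    rng = np.random.default_rng(seed)
    x, y = candidates[:, 0], candidates[:, 1]
    best = np.zeros(len(x), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(x), 3, replace=False)
        coef = np.polyfit(x[idx], y[idx], 2)          # exact quadratic through 3 samples
        inliers = np.abs(np.polyval(coef, x) - y) < dist_thresh
        if inliers.sum() > best.sum():
            best = inliers
    return candidates[best]
```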
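The iterative GPR filtering itself can be sketched as follows, assuming an RBF-plus-noise kernel, a sigma-based acceptance band, and a fixed number of refits; these choices and the use of scikit-learn are assumptions of this sketch, which only reflects the overall idea stated above of regressing the boundary y over x from seed points and rejecting outlying candidates.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def iterative_gpr_filter(seeds, candidates, n_iter=3, k_sigma=2.0):
    """Model one boundary as y = f(x) with a GP trained on the seed points, then
    repeatedly re-fit and accept candidate points inside the k_sigma band."""
    boundary = seeds.copy()
    kernel = 1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=0.05)
    for _ in range(n_iter):
        gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gpr.fit(boundary[:, :1], boundary[:, 1])
        mean, std = gpr.predict(candidates[:, :1], return_std=True)
        accepted = np.abs(candidates[:, 1] - mean) < k_sigma * std
        boundary = np.vstack([seeds, candidates[accepted]])
    return boundary
```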
Fig. 12. The results of three methods on curved roads (left is in red, right is in blue). In the red box are false points, and in the blue box are the points that have been misclassified. (a) Zhang. (b) Sun. (c) Proposed.

Fig. 13. The results of three methods on roads with obstacles (left is in red, right is in blue). In the red box are false points. (a) Zhang. (b) Sun. (c) Proposed.

Fig. 14. The results of three methods on roads with varying widths (left is in red, right is in blue). In the red box are false points. (a) Zhang. (b) Sun. (c) Proposed.

Each cell is labeled as the left boundary, right boundary, or none; thus the ground truth of each cell is obtained. With the same method, the detection results of each cell can be obtained. When one cell is detected as the left boundary and is also labeled as the left boundary, we consider that this cell is detected correctly.

To quantitatively evaluate our algorithm, three metrics from [37] and [38] are introduced for a comprehensive evaluation.

Precision denotes the proportion of cells detected correctly among all detected cells:

Precision = \frac{TP}{TP + FP}

where TP is the number of cells detected correctly, and FP is the number of cells detected incorrectly.

Recall denotes the proportion of cells detected correctly among the labeled cells:

Recall = \frac{TP}{TP + FN}

where FN is the number of cells that are missed detections.

F1 denotes the harmonic average of Precision and Recall:

F1 = 2 \cdot \frac{Precision \cdot Recall}{Precision + Recall}
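The three metrics over labeled grid cells can be computed as in the following sketch; representing the cells as dictionaries keyed by grid index is an implementation assumption.

```python
def cell_metrics(detected, labeled):
    """detected / labeled: dicts mapping grid cells (row, col) to 'left' or 'right';
    cells absent from a dict count as 'none'."""
    tp = sum(1 for cell, lab in detected.items() if labeled.get(cell) == lab)
    fp = len(detected) - tp
    fn = sum(1 for cell, lab in labeled.items() if detected.get(cell) != lab)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```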
The quantitative evaluation and comparison are given in Table III. We can see that the Recall of our method is just slightly lower than Zhang's and Sun's because of the distance filter used in our method: only one nearest point is preserved per segment. However, because iterative GPR is used to model road boundaries and remove outliers, the Precision and F1 of our method are the highest. This shows that the iterative GPR algorithm can effectively filter out false points while keeping most boundary points. In addition, for the four different scenarios, the Precision, Recall, and F1 of the proposed method do not vary much, reflecting the robustness of the proposed algorithm.

In Zhang's method [27], a bottom-layer beam model and a top-layer beam model were created based on the features of the LiDAR sensor. Then, a rule-based peak-finding method was used to determine the road segmentation line. This method can achieve high Precision and Recall on straight roads. However, it cannot handle outliers in the distance function δ(k), which may cause misclassification of boundary points. On the contrary, the improved peak-finding algorithm in this paper can easily deal with this problem and select the two most suitable dominant extremes. Besides, in [27] boundary points are directly extracted by three spatial features, and obstacle occlusions are not considered. So, the Precision and F1 of Zhang's method are lower than those of the proposed method, especially on curved roads and roads with obstacles.

In Sun's method [20], ground points are first extracted by a polar-grid method, and then road boundary points are directly detected based on a defined feature by searching along the predicted trajectory of the vehicle. Because the trajectory is used to search boundary points, this method can correctly classify boundary points on various curved roads. However, the cumulative error of the predicted trajectory becomes larger as the scope of the prediction extends further, which affects the accuracy of the road boundaries. Since obstacle occlusions are likewise not considered, Sun's method has higher Recall but significantly lower Precision. On the contrary, because iterative GPR has both powerful approximation and outlier rejection abilities, our method achieves significantly higher Precision and F1 with a small Recall loss.

In conclusion, our proposed method can handle road boundary detection over various road scenes and achieves better performance. The other two methods achieve relatively poorer performance because of obstacle occlusions and classification errors.

2) Runtime

We also evaluate the real-time performance of our method based on 1170 frames chosen from the KITTI dataset. Our experiments are performed on a 3.70 GHz Intel Core i7-8700K processor with 16 GB of RAM. The runtime of our method is shown in Fig. 15. We can see that the running time of our method meets the real-time requirement, with an average of 70.5 ms per frame.
TABLE III
Quantitative Evaluation Results (Mean Precision) on the KITTI Dataset

Method        Straight road   Curved road   Road with obstacles   Road with varying width
Proposed      0.9214          0.9003        0.8973                0.8938
Zhang [27]    0.8874          0.8594        0.8093                0.8572
Sun [20]      0.8858          0.8864        0.8294                0.8732
Fig. 15. Runtime comparison of Sun's, Zhang's, and the proposed methods over the test frames (vertical axis: Time (ms); horizontal axis: Frame number).
C. Failure Case

The method proposed in this paper is mainly aimed at road boundary detection in a structured, non-congested traffic environment. The method achieves excellent performance under occluded, curved, and variable-width road conditions, as shown in Fig. 16. However, the proposed method achieves relatively poorer performance in heavy traffic environments and at complex intersections, as shown in Fig. 17. In Fig. 17(a), due to severe occlusion, our method incorrectly identified the side of the vehicles as the right road boundary.

Fig. 16. Successful cases of the proposed road boundary method. (a) Roads with obstacles. (b) Roads with varying widths. (c) Roads with large curvatures.

Fig. 17. Failure cases of the proposed road boundary method. (a) Roads with traffic congestion. (b) Roads with intersections. (c) Y-shaped roads.

V. Conclusions

This paper presents an effective road boundary detection method with a speed and accuracy tradeoff in structured environments. A multi-feature, loose-threshold method is applied for feature point extraction. In order to classify feature points, road segmentation lines are determined based on the beam band model and the improved peak-finding algorithm. The road segmentation line can be determined exactly on curved roads up to 70 m away. Then, the distance filter and RANSAC filter are applied to extract candidate points and seed points. Finally, an iterative GPR is proposed to filter out false points and extract boundary points. Based on seed points, the iterative GPR can remove false points caused by road users. Because iterative GPR is a nonparametric model, it can handle various road shapes well. Comprehensive experimental evaluations clearly demonstrate that our proposed method can obtain real-time performance and achieve better accuracy, especially on occluded and curved roads.
The method proposed in this paper is mainly aimed at road boundary detection in a structured, non-congested traffic environment. In future work, we will focus on solving road boundary detection under traffic jams and at complex intersections. In addition, we will simultaneously address the problem of long-distance road boundary detection, which is necessary for high-speed driving.

References

[1] F. Oniga, S. Nedevschi, and M. M. Meinecke, "Curb detection based on a multi-frame persistence map for urban driving scenarios," in Proc. IEEE Conf. Intelligent Transportation Systems, 2008, pp. 67–72.
[2] J. Siegemund, D. Pfeiffer, U. Franke, and W. Forstner, "Curb reconstruction using conditional random fields," in Proc. IEEE Intelligent Vehicles Symposium, 2010, pp. 203–210.
[3] L. Wang, T. Wu, Z. Xiao, L. Xiao, D. Zhao, and J. Han, "Multi-cue road boundary detection using stereo vision," in Proc. IEEE Conf. Vehicular Electronics and Safety, 2016, pp. 48–53.
[4] Y. Kang, C. Roh, S. Suh, and B. Song, "A LiDAR-based decision-making method for road boundary detection using multiple Kalman filters," IEEE Trans. Industrial Electronics, vol. 59, no. 11, pp. 4360–4368, Nov. 2012.
[5] J. Han, D. Kim, M. Lee, and M. Sunwoo, "Road boundary detection and tracking for structured and unstructured roads using a 2D LiDAR sensor," Int. Journal of Automotive Technology, vol. 15, no. 4, pp. 611–623, 2014.
[6] M. Buehler, K. Iagnemma, and S. Singh, "Junior: The Stanford entry in the Urban Challenge," in The DARPA Urban Challenge: Autonomous Vehicles in City Traffic, vol. 56, Berlin, Heidelberg, Germany: Springer, 2009, pp. 91–123.
[7] G. Seetharaman, A. Lakhotia, and E. Blasch, "Unmanned vehicles come of age: The DARPA grand challenge," IEEE Computer, vol. 39, no. 12, pp. 26–29, 2006.
[8] S. Verghese, "Self-driving cars and LiDAR," in Proc. Conf. on Lasers and Electro-Optics, California, USA, 2017, paper AM3A.1.
[9] J. Fang, F. Yan, T. Zhao, F. Zhang, D. Zhou, R. Yang, Y. Ma, and L. Wang, "Simulating LiDAR point cloud for autonomous driving using real-world scenes and traffic flows," arXiv:1811.07112, 2018.
[10] K. Bimbraw, "Autonomous cars: Past, present and future: A review of the developments in the last century, the present scenario and the expected future of autonomous vehicle technology," in Proc. Int. Conf. on Informatics in Control, Automation and Robotics, 2015, pp. 191–198.
[11] P. Sun, X. Zhao, Z. Xu, and H. Min, "Urban curb robust detection algorithm based on 3D-LiDAR," Journal of Zhejiang University, vol. 52, no. 3, pp. 504–514, Mar. 2018.
[12] K. Hu, T. Wang, Z. Li, D. Chen, and X. Li, "Real-time extraction method of road boundary based on three-dimensional LiDAR," Journal of Physics: Conf. Series, vol. 1074, no. 1, pp. 1–8, 2018.
[13] W. Yao, Z. Deng, and L. Zhou, "Road curb detection using 3D LiDAR and integral laser points for intelligent vehicles," in Proc. Joint Int. Conf. on Soft Computing and Intelligent Systems (SCIS) and 13th Int. Symp. on Advanced Intelligent Systems (ISIS), 2012, pp. 100–105.
[14] B. Yang, L. Fang, and J. Li, "Semi-automated extraction and delineation of 3D roads of street scene from mobile laser scanning point clouds," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 79, pp. 80–93, 2013.
[15] A. Y. Hata and D. F. Wolf, "Feature detection for vehicle localization in urban environments using a multilayer LiDAR," IEEE Trans. Intelligent Transportation Systems, vol. 17, no. 2, pp. 420–429, 2016.
[16] S. Xu, R. Wang, and H. Zheng, "Road curb extraction from mobile LiDAR point clouds," IEEE Trans. Geoscience and Remote Sensing, vol. 55, no. 2, pp. 996–1009, 2017.
[17] D. Zai, J. Li, Y. Guo, M. Cheng, and Y. Lin, "3D road boundary extraction from mobile laser scanning data via supervoxels and graph cuts," IEEE Trans. Intelligent Transportation Systems, vol. 19, no. 3, pp. 802–813, 2018.
[18] M. Wu, Z. Liu, and Z. Ren, "Algorithm of real-time road boundary detection based on 3D LiDAR," Journal of Huazhong University of Science and Technology (Natural Science Edition), vol. 39, no. 2, pp. 351–354, 2011.
[19] G. Wang, J. Wu, R. He, and S. Yang, "A point cloud-based robust road curb detection and tracking method," IEEE Access, vol. 7, pp. 24611–24625, 2019.
[20] P. Sun, X. Zhao, Z. Xu, R. Wang, and H. Min, "A 3D LiDAR data-based dedicated road boundary detection algorithm for autonomous vehicles," IEEE Access, vol. 7, pp. 29623–29638, 2019.
[21] P. Kumar, C. P. McElhinney, P. Lewis, and T. McCarthy, "An automated algorithm for extracting road edges from terrestrial mobile LiDAR data," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 85, pp. 44–55, 2013.
[22] H. Wang, H. Luo, C. Wen, J. Cheng, and P. Li, "Road boundaries detection based on local normal saliency from mobile laser scanning data," IEEE Geoscience and Remote Sensing Letters, vol. 12, no. 10, pp. 2085–2089, 2015.
[23] M. Yadav, A. K. Singh, and B. Lohani, "Extraction of road surface from mobile LiDAR data of complex road environment," Int. Journal of Remote Sensing, vol. 38, no. 16, pp. 4655–4682, 2017.
[24] Z. Liu, J. Wang, and D. Liu, "A new curb detection method for unmanned ground vehicles using 2D sequential laser data," Sensors, vol. 13, no. 1, pp. 1102–1120, 2013.
[25] Velodyne, "HDL-64E," 2019. [Online]. Available: https://velodynelidar.com/hdl-64e.html, Accessed on: Apr. 16, 2020.
[26] M. A. Fischler and R. C. Bolles, "Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography," Communications of the ACM, vol. 24, no. 6, pp. 381–395, 1981.
[27] Y. Zhang, J. Wang, X. Wang, and J. Dolan, "Road-segmentation-based curb detection method for self-driving via a 3D-LiDAR sensor," IEEE Trans. Intelligent Transportation Systems, vol. 19, no. 12, pp. 3981–3991, Dec. 2018.
[28] J. Zhang and S. Singh, "LOAM: Lidar odometry and mapping in real-time," in Proc. Robotics: Science and Systems, 2014.
[29] S. Thrun, W. Burgard, and D. Fox, "Robot perception," in Probabilistic Robotics, London, England: MIT Press, 2005, pp. 149–187. [Online]. Available: https://mitpress.mit.edu/books/probabilistic-robotics.
[30] J. Hu, A. Razdan, J. C. Femiani, M. Cui, and P. Wonka, "Road network extraction and intersection detection from aerial images by tracking road footprints," IEEE Trans. Geoscience and Remote Sensing, vol. 45, no. 12, pp. 4144–4157, 2007.
[31] Wikipedia, "Median filter," The Free Encyclopedia. [Online]. Available: https://en.wikipedia.org/wiki/Median_filter, Accessed on: Nov. 5, 2019.
[32] T. Chen, B. Dai, D. Liu, J. Song, and Z. Liu, "Velodyne-based curb detection up to 50 meters away," in Proc. IEEE Intelligent Vehicles Symp., 2015, pp. 241–248.
[33] C. K. Williams and C. E. Rasmussen, "Regression," in Gaussian Processes for Machine Learning, London, UK: MIT Press, 2006.
[34] B. Douillard, J. Underwood, N. Kuntz, V. Vlaskine, A. Quadros, P. Morton, and A. Frenkel, "On the segmentation of 3D LiDAR point clouds," in Proc. IEEE Int. Conf. Robotics and Automation, 2011, pp. 2798–2805.
[35] A. Geiger, P. Lenz, and R. Urtasun, "Are we ready for autonomous driving? The KITTI vision benchmark suite," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2012, pp. 3354–3361.
[36] J. Behley, M. Garbade, A. Milioto, J. Quenzel, S. Behnke, C. Stachniss, and J. Gall, "SemanticKITTI: A dataset for semantic scene understanding of LiDAR sequences," in Proc. IEEE Int. Conf. Computer Vision, 2019, pp. 9297–9307.
[37] D. M. W. Powers, "Evaluation: From precision, recall and F-measure to ROC, informedness, markedness and correlation," Journal of Machine Learning Technologies, vol. 2, no. 1, pp. 37–63, Feb. 2011.
[38] Y. Sasaki, "The truth of the F-measure," Teach Tutor Mater, vol. 1, no. 5, pp. 1–5, 2007.
Guojun Wang received the B.S. degree in vehicle engineering from Yanshan University, Qinhuangdao, China in 2014. He is currently a Ph.D. candidate in vehicle engineering at Jilin University, Changchun, China. His current research interests include computer vision, automated driving, and deep learning.

Jian Wu received the B.S. and Ph.D. degrees from Jilin University, China in 1999 and 2010, respectively. He is a Professor of the College of Automotive Engineering at Jilin University, China. His research interests are mainly in vehicle control systems, electric vehicles, and intelligent vehicles. He is the author of over 40 peer-reviewed papers in international journals and conferences and has been in charge of numerous projects on vehicles funded by national government and institutional organizations.

Rui He received the B.S. and Ph.D. degrees from Jilin University, Changchun, China in 2007 and 2012, respectively. He is currently an Associate Professor with the State Key Laboratory of Automotive Simulation and Control, Jilin University, China. His research interests are mainly in vehicle control systems, electric vehicles, and intelligent vehicles.

Bin Tian received the B.S. degree from Shandong University, Jinan, China in 2009 and the Ph.D. degree from the Institute of Automation, Chinese Academy of Sciences, Beijing, China in 2014. He is currently an Associate Professor with the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, China. His current research interests include computer vision, machine learning, and automated driving.