
Proceedings of 9th IEEE International Conference on CYBER Technology in Automation, Control, and Intelligent Systems
July 29 - August 2, 2019, Suzhou, China

A robust lane detection method for low memory and computation resource platform

Yijie Yu 1, Hai Wang 1,3, Yingfeng Cai 2,*, Long Chen 2, Jun Chen 4
1 School of Automotive and Traffic Engineering, Jiangsu University, Zhenjiang, 212013, China
2 Automotive Engineering Research Institute, Jiangsu University, Zhenjiang, 212013, China
3 Robotics and Automation Lab, The University of Hong Kong, Hong Kong, China
4 Yangzhou Ruikong Automotive Electronics Co., Ltd., Yangzhou, 225100, China

Abstract—Traditional monocular vision based methods for lane detection are mostly implemented on high performance PCs or embedded systems, so they can use complicated algorithms and large storage space. However, automotive companies are very sensitive to hardware cost, so they prefer lane detection methods that run on low cost, and thus low resource and low computation, platforms. In order to apply lane detection on such a low resource platform, a lane detection method based on an improved Hough transform is proposed. Feature points are obtained by template matching, distance is used to cluster the feature points, and a coarse to fine Hough transform is used to generate the lane equation. Experiments demonstrate that this method achieves a very high lane detection rate on a low resource platform and has significant application value.

Keywords—Automotive active safety, Monocular vision, Lane detection, Resource limited platform.

I. INTRODUCTION

Among the different modules of ADAS technology, lane detection using monocular cameras contributes to its efficiency in multiple ways. Firstly, lane detection and tracking aid in localizing the ego-vehicle motion, which is one of the very first and primary steps in most ADAS functions such as lane departure warning (LDW) and lane change assistance [1-5]. Furthermore, lane detection is also able to aid other ADAS modules such as vehicle detection and driver intention perception [6-10].

Current lane detection methods usually contain two main steps [11-16]: (1) lane feature extraction and (2) lane modeling. Lane feature extraction techniques are usually based on properties of lanes such as directionality, intensity gradients, texture and color. Techniques like steerable filters [11][16][17], adaptive thresholding [12][15] and Gabor filters [18-20] are used to extract lane features; machine learning based approaches are also employed in [14]. Lane modeling then involves fitting the detected lane features to a known road/lane model, thereby eliminating non-lane features. This step includes algorithms such as RANSAC [12][16] and the Hough transform [11][13][18].

Generally, many of the methods listed above achieve very good lane detection performance. What should be mentioned is that those methods are all implemented on high performance PCs or embedded systems, so they can use complicated algorithms and large storage space. However, automotive companies are very sensitive to hardware cost, so they prefer lane detection methods that run on low cost, resource limited platforms. Is it possible to build a simple algorithm that is as effective as those used on high performance platforms? To meet this challenge, a simple but robust lane detection method is proposed in this paper for an extremely resource limited embedded platform.

The rest of this paper is organized as follows. The lane detection method is detailed in Section II. Section III gives the experimental results and analysis. Conclusions and further discussion are in Section IV.

* Corresponding author. E-mail: [email protected].

978-1-7281-0770-7/19/$31.00 © 2019 IEEE 854

Authorized licensed use limited to: Carleton University. Downloaded on July 26,2020 at 16:07:11 UTC from IEEE Xplore. Restrictions apply.
II. LANE DETECTION METHOD

The proposed lane detection method contains three main steps: lane feature extraction, feature clustering and lane modeling, as shown in Fig.1. The lane feature extraction step mainly uses a lane edge feature point searching method based on prior knowledge such as lane width. The lane modeling step uses a coarse to fine Hough transform with a lower storage space requirement to model the lane function.

Figure 1. Overall flowchart of the proposed lane detection method (image ROI setting, lane feature point extraction, lane modeling).

A. Lane Feature Points Extraction Method

In the lane feature extraction step, we mainly use prior knowledge, namely lane width. For our application, the processing area of the image is the part below the vanishing line, called the region of interest (ROI); the left (right) part of the ROI is processed separately for left and right lane detection, as shown in Fig.2.

In the ROI, three consecutive lines of image pixels are loaded and a simple edge extraction is first applied using 45° and 135° Sobel operators for the left and right parts, respectively. Then, every point in line L whose response is higher than a threshold is marked as 255, meaning a potential edge point. The threshold W is simply set as W = 1.3 * ave(L_edge), tuned manually through many experiments; here ave(L_edge) is the average value of all pixels in line L after edge reinforcement.

After that, a lane edge feature point searching method is applied to find the points that belong to lane edges. Due to hardware limitations, this search is applied line by line. Its main idea is that if the distance between any two edge points (or two groups of edge points) falls in a specific range, the two points can be considered edge points belonging to a road lane. Here, t1 is the lower bound of the lane width and t2 is the upper bound; typically, t1 is set to 15 and t2 to 30 for the images captured by our camera.

With this lane feature extraction step, many pixels that might belong to lane edges are selected. The processing results for one image are shown in Fig.3. It should be mentioned that, since the BF592 does not have an image output channel, the result images are obtained on a PC platform running the same method.

Figure 3. One example of lane feature point extraction.

The biggest advantage of the proposed lane feature extraction method is its short calculation time and low storage space requirement. For a 720*480 image, the average processing time of this step is 5.9 ms and the storage space needed is around 7.5K bytes.
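The per-line search above can be sketched as follows. This is a minimal illustrative Python version, not the paper's BF592 implementation (which would be fixed-point C); a plain horizontal gradient stands in for the exact 45°/135° Sobel kernels, and the function name and structure are our own assumptions.

```python
import numpy as np

def extract_lane_feature_points(roi, t1=15, t2=30):
    """Sketch of the paper's line-by-line lane feature point search.

    roi: 2-D uint8 grayscale array (one half of the region below the
    vanishing line). t1, t2: lower/upper bounds of the lane-marking
    width in pixels (15 and 30 in the paper).
    Returns a list of (row, col) candidate lane edge points.
    """
    points = []
    # The paper loads three consecutive image lines at a time, so the
    # first and last rows are skipped here.
    for r in range(1, roi.shape[0] - 1):
        line = roi[r].astype(np.int16)
        # Hypothetical stand-in for the 45/135-degree Sobel response:
        # a simple absolute horizontal gradient.
        edge = np.abs(np.diff(line))
        # Threshold W = 1.3 * ave(L_edge) over the reinforced line.
        w = 1.3 * edge.mean()
        cols = np.flatnonzero(edge > w)   # potential edge points (marked 255)
        # Keep only pairs of edge points whose distance lies in [t1, t2],
        # i.e. consistent with the expected lane-marking width.
        for i in range(len(cols) - 1):
            d = cols[i + 1] - cols[i]
            if t1 <= d <= t2:
                points.append((r, int(cols[i])))
                points.append((r, int(cols[i + 1])))
    return points
```

On a synthetic frame with a bright stripe of lane-like width, the function keeps exactly the two gradient columns bounding the stripe.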

Figure 2. Region of interest (ROI) of the image, split below the vanishing line into left and right parts for left and right lane detection.

B. Clustering Algorithm

Model matching is used in the feature point generation process. But at night, in rain and under other bad illumination conditions, the characteristics of lane lines are not as clear as in daytime, so the quality of the feature points drops sharply and results in false detections. For this reason, a clustering algorithm is proposed to combine similar feature points, remove isolated feature points, make the detection target clearer, narrow the range of line detection, and reduce the interference of wrong feature points. The specific steps are as follows:

Step 1: Take a feature point P from the list and put it into the set S, deleting P from the list;
Step 2: Search the 5 × 5 field around point P; if there is a feature point P1 in this field, place P1 into the set S and delete it from the list;
Step 3: If there is no P1 in this field, take a new feature point P from the list and put it into a new set S1;
Step 4: If the number of feature points in a set is less than 15, it is considered an invalid set and cast out;
Step 5: Repeat steps 1-3 until the list is empty;
Step 6: Rank the obtained sets {S, S1, S2, ..., Sn} by their number of elements, from most to least.

Fig.4 is a flow chart of the pixel clustering algorithm. All connected pixels in the image are aggregated together by the above process, and isolated feature points are removed at the same time. The advantage of the method is that the pixels are grouped by distance, narrowing the range of line detection and making the detection target more explicit, thereby avoiding false or missed detections while reducing the amount of computation and the calculation time. Fig.5 shows the clustering result for Fig.3: the feature points are divided into two clusters and some feature points from interference lines are removed.

Figure 4. Flow chart of the pixel clustering algorithm.
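Steps 1-6 above amount to connected-component grouping with a 5 × 5 neighbourhood. A minimal Python sketch follows; the paper gives only the steps, not code, so the function and parameter names here are our own.

```python
def cluster_feature_points(points, min_size=15):
    """Sketch of the paper's distance-based clustering (Steps 1-6).

    points: iterable of (row, col) feature points.
    Groups points whose 5x5 neighbourhoods touch; sets with fewer than
    `min_size` members are cast out as invalid (Step 4), and survivors
    are returned ranked from largest to smallest (Step 6).
    """
    remaining = set(points)
    clusters = []
    while remaining:                       # Step 5: repeat until the list is empty
        seed = remaining.pop()             # Steps 1/3: seed a new set
        cluster = [seed]
        frontier = [seed]
        while frontier:
            r, c = frontier.pop()
            # Step 2: search the 5x5 field around the point and absorb
            # any feature point found there into the current set.
            for dr in range(-2, 3):
                for dc in range(-2, 3):
                    q = (r + dr, c + dc)
                    if q in remaining:
                        remaining.remove(q)
                        cluster.append(q)
                        frontier.append(q)
        if len(cluster) >= min_size:       # Step 4: discard invalid sets
            clusters.append(cluster)
    clusters.sort(key=len, reverse=True)   # Step 6: rank by element count
    return clusters
```

A dense run of collinear points survives as one cluster, while an isolated point is removed, matching the behaviour described above.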

Figure 5. The result of feature point clustering.

C. Lane Model Fitting with Coarse to Fine Hough Transform

As mentioned before, after the lane edge feature points are verified, a model of the lane edge should be obtained, which provides valuable information for further tasks such as lane departure warning or lane keeping. In this subsection, a fast Hough transform technique is introduced in detail for this purpose.

The Hough transform is a boundary description method. It transforms a curve or a straight line in image space into a point in a polar coordinate space, so that the curve or straight line detection problem becomes a peak finding problem in the polar coordinate space.
Figure 6. Hough transform: points (such as A, B) in image space map to curves in the (ρ, θ) polar space, and their intersection gives the line parameters (ρ1, θ1).

In our road lane detection, which is mainly for highway applications, a straight line model is accurate enough. For straight line modeling with the Hough transform, a 'voting mechanism' is commonly used. As seen in Fig.6, each point (such as A or B) in image space is transformed into a curve in the polar coordinate space. Under the voting mechanism, the straight line model (ρ1, θ1) with the most votes is considered the right function.

The Hough transform has been proved very effective by many works; however, it needs a large storage space and a long processing time. The reason is that, to get the line model accurately, the points in image space are densely mapped into the polar coordinate space. Take the right lane for example: each point is mapped from 15° to 75° with a 1° gap, which means the mapping process is repeated 60 times per point. Generally, there are often more than one hundred edge points, so the overall storage space and processing time are unacceptable for our extremely resource limited hardware platform.

To solve this problem, a coarse to fine Hough transform method is proposed. Its main concept is to divide θ coarsely and find the maximum first, then search finely for the maximum within that small range. To introduce this method clearly, we again take one lane as an example.

Figure 7. Original Hough transform and proposed coarse to fine Hough transform. (a) The original accumulator uses 1° bins from 15° to 75° (60 columns over ρ rows 1-300). (b) The coarse accumulator uses 5° bins (15°-20°, ..., 70°-75°; 12 columns), after which a 1° accumulator covers only the winning 5° range (e.g. 30°-35°).

Fig.7(a) shows the calculation process of the original Hough transform. The required storage space is 300 × 60 = 18K bytes, and the image space to polar coordinate mapping takes N_edge × 60 operations, where N_edge is the number of edge points. Fig.7(b) shows the proposed coarse to fine Hough transform. Obviously, this method dramatically reduces the storage space and the number of mappings, to 300 × 12 = 3.6K bytes (an 80% reduction) and N_edge × 17 operations (a 71.7% reduction) respectively. Meanwhile, the model accuracy of the coarse to fine Hough transform is the same as that of the original Hough transform. One resulting straight line model is shown in Fig.8.
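The two-pass accumulator described above can be sketched as follows. This is an illustrative Python version under assumed names, using the normal form ρ = x cos θ + y sin θ; it is not the paper's embedded implementation, and a dictionary stands in for the fixed 300-row accumulator arrays.

```python
import math

def coarse_to_fine_hough(points, rho_max=300, theta_lo=15, theta_hi=75,
                         coarse_step=5):
    """Sketch of the coarse to fine Hough transform for one lane.

    First votes with 5-degree theta steps over [15, 75) (12 mappings per
    point instead of 60), then re-votes with 1-degree steps only inside
    the winning coarse bin (5 more mappings, 17 in total).
    Returns the best (rho, theta_in_degrees) straight-line model.
    """
    def vote(thetas):
        acc = {}
        for theta in thetas:
            t = math.radians(theta)
            for x, y in points:
                # Normal form of a line: rho = x*cos(theta) + y*sin(theta)
                rho = int(round(x * math.cos(t) + y * math.sin(t)))
                if 0 <= rho < rho_max:
                    acc[(rho, theta)] = acc.get((rho, theta), 0) + 1
        return max(acc, key=acc.get)       # peak cell in the accumulator

    # Coarse pass: one angle per 5-degree bin.
    _, coarse_theta = vote(range(theta_lo, theta_hi, coarse_step))
    # Fine pass: 1-degree steps inside the winning bin only.
    return vote(range(coarse_theta, coarse_theta + coarse_step))
```

For points generated exactly on a line with θ = 32° and ρ = 100, the coarse pass locks onto the 30°-35° bin and the fine pass recovers (100, 32).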

Figure 8. One example of the lane modeling step.

III. EXPERIMENTS AND ANALYSIS

To fully evaluate the proposed lane detection method, it is tested on highways and in many weather conditions, including sunny and rainy days. More than 10000 road images are tested, in which data sets 1 to 4 are captured in good weather conditions and data sets 5 to 8 in bad weather conditions. The statistical results of the two groups of experiments are listed in Table I and Table II. It can be seen that this method reaches a detection rate of 99.527% in good weather and 99.098% in bad weather.

TABLE I. LANE DETECTION RESULTS IN GOOD WEATHER

Image Data Set | Image Number | False (Miss) Detection Number | Detection Rate
Data Set 1    | 796   | 5  | 99.372%
Data Set 2    | 1370  | 9  | 99.343%
Data Set 3    | 1555  | 4  | 99.743%
Data Set 4    | 296   | 1  | 99.662%
Total         | 4017  | 19 | 99.527%

TABLE II. LANE DETECTION RESULTS IN BAD WEATHER

Image Data Set | Image Number | False (Miss) Detection Number | Detection Rate
Data Set 5    | 1400  | 13 | 99.071%
Data Set 6    | 1872  | 16 | 99.145%
Data Set 7    | 1624  | 21 | 98.707%
Data Set 8    | 1868  | 11 | 99.411%
Total         | 6764  | 61 | 99.098%

The test images are divided into normal conditions and complex conditions; the detection results are shown in Fig.9. Under normal conditions (rows 1-4), the feature points are dense and the false detection rate is low, so the fit to the real lane line is good. The complex environments are rainy day (rows 5-6), night (rows 7-8) and dusk (row 9). It can be seen that in heavy rain the visibility of the left lane line is low, the contrast is not obvious enough, and there are two obvious water stains where the front vehicle has passed. Besides, the camera's view is blocked seriously right after the wipers sweep (row 6). In this situation, the algorithm is still able to find enough feature points, but the lane line cannot be fitted as well as in the normal situation. At dusk, the illumination is insufficient and the contrast is reduced, but thanks to the local threshold method the algorithm remains valid, and the test results are better than on rainy days. The difficulty of the night environment lies in the complex lighting: reflectors and high beams can interfere with and defeat the algorithm. Reflections from reflectors increase the number of false feature points, but since these points are irregular in shape and do not appear continuously in large numbers, they can be eliminated by the clustering algorithm. When two cars pass each other, the other car's high beam causes over-exposure for dozens of consecutive frames; the threshold then becomes much higher than the lane line gray value, which causes missed detection.
Figure 9. Some typical experiment results. The first column is the original image collected by the camera, the second column is the result of edge detection, the third column shows the clustered feature points in a binary picture, and the fourth column shows the lane line detection results.

IV. CONCLUSION

This paper proposed a simple but robust lane detection algorithm for a low resource platform. By applying a simple lane feature extraction method and a coarse to fine Hough transform algorithm for lane modeling, the proposed method needs only a very small storage space and has very low computational complexity. Experiments demonstrate that the method achieves a detection rate of more than 99% on highways, so it can be concluded that it is very valuable for engineering applications.

ACKNOWLEDGEMENT

Fund support: National Key Research and Development Program of China (2017YFB0102603), National Natural Science Foundation of China (U1564201, U1664258, U1764257, U1762264, 61601203, 61773184), Key Research and Development Program of Jiangsu Province (BE2016149), Key Project for the Development of Strategic Emerging Industries of Jiangsu Province (2016-1094, 2015-1084), and Key Research and Development Program of Yangzhou City (YZ2017032).

REFERENCES
[1] Cui G, Wang J, Li J. Robust multilane detection and tracking in urban scenarios based on LIDAR and mono-vision[J]. IET Image Processing, 2014, 8(5): 269-279.
[2] Li Q, Chen L, Li M, et al. A sensor-fusion drivable-region and lane detection system for autonomous vehicle navigation in challenging road scenarios[J]. IEEE Transactions on Vehicular Technology, 2014, 63(2): 540-555.
[3] Lei J, Wu H, Yang J, et al. Sliding mode lane keeping control based on separation of translation and rotation movement[J]. Optik - International Journal for Light and Electron Optics, 2016, 127(10): 4369-4374.
[4] Hillel A B, Lerner R, Levi D, et al. Recent progress in road and lane detection: a survey[J]. Machine Vision and Applications, 2014, 25(3): 727-745.
[5] Son J, Yoo H, Kim S, et al. Real-time illumination invariant lane detection for lane departure warning system[J]. Expert Systems with Applications, 2015, 42(4): 1816-1824.
[6] Yao F, Yanli C. Study of a new vehicle detection algorithm based on linear CCD images[J]. Optik - International Journal for Light and Electron Optics, 2015, 126(24): 5932-5935.
[7] Wang H, Cai Y, Chen L, et al. Vehicle detection algorithm based on Haar-NMF features and improved SOMPNN[J]. Journal of Southeast University (Natural Science Edition), 2016, 46(3): 499-504.
[8] Li J, Liang X, Shen S M, et al. Scale-aware Fast R-CNN for pedestrian detection[J]. IEEE Transactions on Multimedia, 2018, 20(4): 985-996.
[9] Shen J, Zuo X, Li J, et al. A novel pixel neighborhood differential statistic feature for pedestrian and face detection[J]. Pattern Recognition, 2017, 63: 127-138.
[10] Tang Y, Zhang C, Gu R, et al. Vehicle detection and recognition for intelligent traffic surveillance system[J]. Multimedia Tools and Applications, 2017, 76(4): 5817-5832.
[11] McCall J C, Trivedi M M. Video-based lane estimation and tracking for driver assistance: survey, system, and evaluation[J]. IEEE Transactions on Intelligent Transportation Systems, 2006, 7(1): 20-37.
[12] Hillel A B, Lerner R, Levi D, et al. Recent progress in road and lane detection: a survey[J]. Machine Vision and Applications, 2014, 25(3): 727-745.
[13] Wang Y, Dahnoun N, Achim A. A novel system for robust lane detection and tracking[J]. Signal Processing, 2012, 92(2): 319-334.
[14] Borkar A, Hayes M, Smith M T. Polar randomized Hough transform for lane detection using loose constraints of parallel lines[C]// IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2011: 1037-1040.
[15] Aharon B, Ronen L, Dan L, et al. Recent progress in road and lane detection: a survey[J]. Machine Vision and Applications, 2012.
[16] Sayanan S, Mohan M. Integrated lane and vehicle detection, localization, and tracking: a synergistic approach[J]. IEEE Transactions on Intelligent Transportation Systems, 2013: 1-12.
[17] Shinko Y, Mohan M. Lane tracking with omnidirectional cameras: algorithms and evaluation[J]. EURASIP Journal on Embedded Systems, 2007: 1-8.
[18] Satzoda R K, Sathyanarayana S, Srikanthan T. Hierarchical additive Hough transform for lane detection[J]. IEEE Embedded Systems Letters, 2010, 2(2): 23-26.
[19] Veit T, Tarel J P, Nicolle P, et al. Evaluation of road marking feature extraction[C]// International IEEE Conference on Intelligent Transportation Systems. IEEE, 2008: 174-181.
[20] Pollard E, Gruyer D, Tarel J P, et al. Lane marking extraction with combination strategy and comparative evaluation on synthetic and camera images[C]// International IEEE Conference on Intelligent Transportation Systems. IEEE, 2011: 1741-1746.

