A Robust Lane Detection Method For Low Memory and Computation Resource Platform
* Corresponding author. E-mail: [email protected].
II. LANE DETECTION METHOD

The proposed lane detection method contains three main steps: lane feature extraction, feature clustering, and lane modeling, as shown in Fig.1. The lane feature extraction step mainly uses a lane edge feature point searching method based on prior knowledge such as the lane width. The lane modeling step uses a coarse-to-fine Hough transform with a lower storage space requirement to model the lane function.

Figure 1. Flow of the proposed method: image ROI setting, lane feature point extraction, and lane modeling.

The lane edge feature point searching method is applied line by line. The main concept of this method is that if the distance between any two edge points (or two groups of edge points) falls within a specific range, the two edge points can be considered edge points belonging to a road lane. Here, t1 is set as the lower bound of the lane width and t2 as the upper bound. Typically, t1 is set to 15 and t2 to 30 for the images captured by our camera.

With the mentioned lane feature extraction step, many pixels that might belong to a lane edge are selected. The processing results of an image are shown in Fig.3. It should be mentioned that, since the BF592 does not have an image output channel, the results were obtained on a PC platform running the same method.
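As a concrete illustration, the following C sketch applies this pairing rule row by row. Only the bounds t1 = 15 and t2 = 30 come from the text above; the image size, the array names, and the assumption of a precomputed binary edge map are ours.

```c
/* Row-wise lane feature point search: within each image row, any pair of
 * edge pixels whose horizontal distance lies in [T1, T2] (the lane-width
 * bounds, 15 and 30 pixels as stated above) is kept as a pair of lane
 * feature points. Image size and the edge map are illustrative. */
#include <stdio.h>
#include <string.h>

#define W  320
#define H  240
#define T1 15          /* lower bound of lane width in pixels */
#define T2 30          /* upper bound of lane width in pixels */

static unsigned char edge[H][W];   /* input: 1 = edge pixel           */
static unsigned char feat[H][W];   /* output: 1 = lane feature point  */

static void search_feature_points(void)
{
    memset(feat, 0, sizeof(feat));
    for (int y = 0; y < H; y++) {
        for (int x1 = 0; x1 < W; x1++) {
            if (!edge[y][x1])
                continue;
            /* only pixels in the admissible distance range can match */
            int xmax = (x1 + T2 < W) ? x1 + T2 : W - 1;
            for (int x2 = x1 + T1; x2 <= xmax; x2++) {
                if (edge[y][x2]) {
                    feat[y][x1] = 1;
                    feat[y][x2] = 1;
                }
            }
        }
    }
}

int main(void)
{
    edge[100][50] = 1;            /* toy example: two edges 20 px apart */
    edge[100][70] = 1;
    search_feature_points();
    printf("%d %d\n", feat[100][50], feat[100][70]);   /* prints: 1 1 */
    return 0;
}
```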
Step 3: if there is no such point P1 in this field, remove a new feature point P from the list and put it into a new set S1;
Step 4: if the number of feature points in the set S1 is less than 15, it is considered an invalid set and is cast out;
Step 5: repeat the above steps 1-3 until the list is empty;
Step 6: rank the obtained sets {S, S1, S2, ..., Sn} according to their number of elements, from most to fewest.

Fig.4 is a flow chart of the pixel clustering algorithm. All the connected pixels in the image are aggregated together by the above process, and at the same time the isolated feature points in the image are removed. The advantage of the method is that the pixels are sorted by distance, which narrows the range of the line detection, makes the detection target more explicit, and avoids false or missed detections while reducing the amount of computation and the calculation time. Fig.5 shows the clustering performance on Fig.3. It can be seen that the feature points are divided into two clusters, and some feature points from interference lines are removed.
Figure 4. Flow chart of the pixel clustering algorithm.
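Steps 1 and 2 of the procedure precede this excerpt, so the C sketch below assumes they seed a set with a point taken from the list and grow it by a pixel-distance rule; the grouping threshold DIST_TH and the point container are assumptions, while the minimum set size of 15 (step 4) and looping until the list is empty (step 5) follow the text above.

```c
/* Greedy distance-based clustering of lane feature points. A set is
 * seeded with an unassigned point, grown by pulling in every remaining
 * point within DIST_TH of a member (assumed grouping rule), and cast
 * out if it ends up with fewer than MIN_SIZE points (step 4); the
 * process repeats until the list is empty (step 5). */

#define MIN_SIZE 15    /* sets smaller than this are invalid (step 4) */
#define DIST_TH  5     /* assumed neighbour distance in pixels        */

typedef struct { int x, y; } Point;

static int close_enough(Point a, Point b)
{
    int dx = a.x - b.x, dy = a.y - b.y;
    return dx * dx + dy * dy <= DIST_TH * DIST_TH;
}

/* label[i] becomes the cluster id of pts[i], or -1 for removed points.
 * Returns the number of valid clusters. Quadratic growth loop kept
 * simple for clarity. */
int cluster(const Point *pts, int n, int *label)
{
    int nclusters = 0;
    for (int i = 0; i < n; i++)
        label[i] = -2;                      /* -2 = still on the list */

    for (int i = 0; i < n; i++) {
        if (label[i] != -2)
            continue;
        label[i] = nclusters;               /* seed a new set */
        int size = 1, grown = 1;
        while (grown) {                     /* grow until stable */
            grown = 0;
            for (int j = 0; j < n; j++) {
                if (label[j] != -2)
                    continue;
                for (int k = 0; k < n; k++) {
                    if (label[k] == nclusters &&
                        close_enough(pts[j], pts[k])) {
                        label[j] = nclusters;
                        size++;
                        grown = 1;
                        break;
                    }
                }
            }
        }
        if (size < MIN_SIZE) {              /* step 4: cast the set out */
            for (int j = 0; j < n; j++)
                if (label[j] == nclusters)
                    label[j] = -1;
        } else {
            nclusters++;
        }
    }
    return nclusters;
}
```

Ranking the surviving sets by element count (step 6) then only requires counting the labels.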
Figure 6. Hough transform. (The (ρ, θ) line parameterization of edge points A and B in the x-y image plane, and the accumulator over 1° angle bins from 15°-16° to 74°-75° with ρ rows up to 300; the maximum cell gives the line parameters.)
The coarse-to-fine Hough transform dramatically reduces the storage space and the number of mapping operations, to 300 × 12 = 3.6K bytes (an 80% reduction) and N_edge × 17 mappings (a 71.7% reduction). Meanwhile, the model accuracy of the coarse-to-fine Hough transform is the same as that of the original Hough transform. A straight line model is shown in Fig.8.

(Table, continued: frames, detection failures, detection rate)
Data Set 5    1400    13    99.071%
Data Set 6    1872    16    99.145%
Data Set 7    1624    21    98.707%
Data Set 8    1868    11    99.411%
Total         6764    61    99.098%
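The quoted figures are consistent with a two-pass scheme over θ ∈ [15°, 75°): a coarse pass with twelve 5° bins needs a 300 × 12 accumulator (3.6K bytes at one byte per cell, against 300 × 60 for a full 1° transform, an 80% saving), and a fine pass that re-votes only the five 1° bins inside the winning coarse bin brings the per-point mapping count to 12 + 5 = 17 instead of 60 (71.7% fewer). The C sketch below follows that reading; the exact bin layout, data types, and function names are our assumptions rather than the authors' implementation.

```c
/* Coarse-to-fine Hough transform for a single dominant line, assuming
 * theta in [15, 75) degrees and rho in [0, 300) as suggested by Fig.6.
 * Pass 1 votes into 12 coarse 5-degree bins; pass 2 re-votes only the
 * 5 one-degree bins inside the winning coarse bin, reusing the same
 * accumulator, so each edge point is mapped 12 + 5 = 17 times. */
#include <math.h>
#include <string.h>

#define PI          3.14159265358979323846
#define RHO_MAX     300
#define THETA_MIN   15
#define THETA_SPAN  60
#define COARSE_STEP 5
#define N_COARSE    (THETA_SPAN / COARSE_STEP)   /* 12 bins */

typedef struct { int x, y; } Point;

/* 300 x 12 accumulator shared by both passes; the 3.6K-byte figure in
 * the text corresponds to one byte per cell. */
static unsigned short acc[RHO_MAX][N_COARSE];

/* Vote all points into `bins` theta values starting at theta0 degrees,
 * spaced `step` degrees apart; return the winning theta and its rho. */
static int vote(const Point *pts, int n, int theta0, int step, int bins,
                int *best_rho)
{
    memset(acc, 0, sizeof(acc));
    for (int i = 0; i < n; i++) {
        for (int b = 0; b < bins; b++) {
            double t = (theta0 + b * step) * PI / 180.0;
            int rho = (int)(pts[i].x * cos(t) + pts[i].y * sin(t));
            if (rho >= 0 && rho < RHO_MAX)
                acc[rho][b]++;
        }
    }
    int best_r = 0, best_b = 0;
    for (int r = 0; r < RHO_MAX; r++)
        for (int b = 0; b < bins; b++)
            if (acc[r][b] > acc[best_r][best_b]) {
                best_r = r;
                best_b = b;
            }
    *best_rho = best_r;
    return theta0 + best_b * step;
}

/* Returns the line parameters of rho = x cos(theta) + y sin(theta). */
void coarse_to_fine_hough(const Point *pts, int n, int *rho, int *theta)
{
    int r;
    int tc = vote(pts, n, THETA_MIN, COARSE_STEP, N_COARSE, &r);
    *theta = vote(pts, n, tc, 1, COARSE_STEP, rho);
}
```

Because the fine pass reuses the coarse accumulator, peak storage never exceeds the 300 × 12 table, which is where the 80% saving over a full 300 × 60 accumulator comes from.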
Figure 9. Some typical experiment results. The first column is the original image collected by the camera, the second column is the result of the edge detection, the third column shows the clustered feature points in a binary picture, and the fourth column shows the lane line detection results.

IV. CONCLUSION

This paper proposed a simple but robust lane detection algorithm for a low-resource platform. By applying a simple lane feature extraction method and a coarse-to-fine Hough transform for lane modeling, the proposed method needs only a very small amount of storage and has very low computational complexity. Experiments demonstrate that the method achieves a detection rate of more than 99% on highways, so it can be concluded that the method is very valuable for engineering applications.

ACKNOWLEDGEMENT

Fund support: National Key Research and Development Program of China (2017YFB0102603), National Natural Science Foundation of China (U1564201, U1664258, U1764257, U1762264, 61601203, 61773184), Key Research and Development Program of Jiangsu Province (BE2016149), Key Project for the Development of Strategic Emerging Industries of Jiangsu Province (2016-1094, 2015-1084), and Key Research and Development Program of Yangzhou City (YZ2017032).