ABSTRACT - A driver assistance system is a technology used to make motor vehicle travel safer by automating, improving or adapting some or all of the tasks involved in operating a vehicle. Driver assistance serves to make travel comfortable and easier, while also increasing car and road safety. While some systems help with the task of driving, others alert the driver to errors or hazards, such as lane departure detection and drowsiness detection. Aside from vehicle control, driver assistance can also refer to secondary driving tasks such as location finding, route planning and obstacle detection. Driver assistance is a developing field. This paper aims at detecting lanes using Python and OpenCV. Vehicular movement is captured in real time using a camera, and the captured frames are processed to achieve this goal. The Hough Transform is used to detect lanes in an image or video.
Keywords: Lane detection, Hough Transform, Canny edge detection, Python OpenCV
In a lane departure warning system, lane detection is the important initial step to be taken. There are two types of methodologies used in lane identification: the elements-based methodology and the model-based methodology. The elements-based methodology detects the lane from images of roads by detecting low-level elements such as lane edges. This methodology requires clearly painted lanes or solid lane edges, otherwise it cannot detect them, and it may suffer from occlusion. [6] The model-based methodology uses geometric parameters, assuming that the shape of the lane can be represented by a straight line or a curve. [7]

Abhay Tewari et al. noted in their paper that lane detection techniques play a significant role in intelligent transport systems. Abhaya et al. [2] proposed a smart driver assistance system that helps drivers avoid accidents during lane departures by providing prompt and quick marking of lanes. The proposed system also provides automatic detection and recognition of traffic signs. Detection gives good results under different lighting conditions. Recognition is based on a cascade pattern of CNNs trained using Histogram of Oriented Gradients (HOG) features. The regions where traffic signs are located are identified as candidate regions computed through Maximally Stable Extremal Regions (MSERs). A synthetic dataset was generated to increase the number of images in the dataset, improving accuracy and training the model better.

III. ALGORITHM

The block diagram of the proposed lane detection system on Raspberry Pi is shown in the figure below. The main stages of the system include the following:

3. ROI Selection: The selection of a region of interest (ROI) plays an important role in detecting the lane. Here only the selected areas are taken for the next level of processing. These selected ROI images are then used for lane detection using the proposed algorithm. The selection of the ROI reduces the processing time of the frames.

4. Hough Transform: The Hough Transform is applied to the images after Canny edge detection has taken place, so as to obtain only the desired image pixels. In this system the Hough Transform is used to detect the lane markings from the image data.

5. Lane Detection: Here, the lane is marked with a separate color.

Two important algorithms, Canny Edge Detection and the Hough Transform, are used to implement the lane detection system; they are explained below.

Canny edge detection:

Canny edge detection is a technique to extract useful structural information from different vision objects and dramatically reduce the amount of data to be processed. It has been widely applied in various computer vision systems. Canny found that the requirements for the application of edge detection on diverse vision systems are relatively similar; thus, an edge detection solution that addresses these requirements can be implemented in a wide range of situations. The general criteria for edge detection include: [1]

1. Detection of edges with a low error rate, which means that the detection should accurately catch as many edges shown in the image as possible.

2. The edge point detected by the operator should accurately localize on the center of the edge.

3. A given edge in the image should only be marked once, and, where possible, image noise should not create false edges.
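As a concrete illustration, a minimal Canny edge detection step in Python with OpenCV might look like the sketch below; the input file name, blur kernel size and hysteresis thresholds are assumptions chosen for illustration, not values taken from the proposed system.

import cv2

# Read a road frame and convert it to grayscale, since Canny operates on
# single-channel intensity images.
frame = cv2.imread("road.jpg")                      # assumed input image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Smooth the image first so that sensor noise does not create false edges
# (criterion 3 above).
blurred = cv2.GaussianBlur(gray, (5, 5), 0)

# Apply the Canny detector; the two thresholds control hysteresis:
# gradients above 150 are strong edges, those below 50 are discarded, and
# in-between values are kept only if connected to a strong edge.
edges = cv2.Canny(blurred, 50, 150)

cv2.imwrite("edges.jpg", edges)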
Hough Transform:

In the Hough Transform, a straight line y = ax + b in image space corresponds to a single point (a, b) in parameter space, and conversely a single image point (x, y) corresponds to the line b = -ax + y in parameter space. For example, the point (x, y) = (1, 5) maps to:

b = -a(1) + 5
b = -a + 5

so that b = 0 when a = 5. Every edge pixel votes for such a line in parameter space, and the parameter cells that accumulate the most votes correspond to the lane lines in the image.
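Continuing the sketch, the ROI masking and probabilistic Hough Transform stages could be implemented in Python with OpenCV roughly as follows; the triangular ROI polygon, the Hough parameters and the function name detect_lanes are illustrative assumptions rather than the exact configuration of the proposed system.

import cv2
import numpy as np

def detect_lanes(frame, edges):
    h, w = edges.shape

    # Keep only a triangular region of interest covering the road ahead;
    # this reduces the pixels the Hough Transform has to process
    # (assumed polygon, to be tuned for the camera mounting).
    mask = np.zeros_like(edges)
    polygon = np.array([[(0, h), (w, h), (w // 2, h // 2)]], dtype=np.int32)
    cv2.fillPoly(mask, polygon, 255)
    masked = cv2.bitwise_and(edges, mask)

    # Probabilistic Hough Transform: every edge pixel votes for the line
    # parameters passing through it; parameter cells with enough votes
    # (threshold) are returned as line segments.
    lines = cv2.HoughLinesP(masked, 1, np.pi / 180, 50,
                            minLineLength=40, maxLineGap=100)

    # Mark the detected lane segments with a separate color (green).
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 5)
    return frame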
IV. RESULTS
The system is implemented using Python and OpenCV on a Raspberry Pi. The results of the system on an image as well as on video are shown in the figures below:
RESULTS ON IMAGE
V. REFERENCES
[1] Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing (Third Edition), Pearson Education.
[2] Prof A. B. Deshmukh, Pravin T. Mandlik, “Raspberry-Pi
Based Real Time Lane Departure Warning System using
Image Processing”, International Journal of Engineering
Research and Technology (IJERT), Volume 5, Issue 06,
June 2016.
[3] Prof A. B. Deshmukh, Pravin T. Mandlik, “Image
Processing based Lane Departure Warning System
Using Hough Transform and Euclidean Distance”,
International Journal of Research and Scientific
Innovation (IJRSI), Volume 3, Issue 10, October 2016,
ISSN 2321–2705.
[4] B. Yu, A. Jain, "Lane boundary detection using a
multiresolution Hough transform", Proceedings of
International Conference on Image Processing, pp. 748-
751.
[5] Winserng Chee, Phooi Yee Lau and Sungkwon Park,
“Real-time Lane Keeping Assistant System on
Raspberry Pi”, IEIE Transactions on Smart Processing
and Computing, vol. 6, no. 6, December 2017.
[6] Juan Pablo Gonzalez, Umit Ozguner, "Lane Detection
Using Histogram-Based Segmentation and Decision
Trees", IEEE Intelligent Transportation Systems
Conference Proceedings, pp. 346-351.
[7] S. P. Narote, P. N. Bhujbal, A. S. Narote, D. M. Dhane,
"A review of recent advances in lane detection and
departure warning system", Pattern Recognition, vol. 73,
pp. 216-234, 2018.
[8] J. Canny, "A Computational Approach to Edge
Detection", IEEE Trans. Pattern Analysis and Machine
Intelligence, vol. 8, no. 6, pp. 679-686, 1986.
[9] C. Bila, F. Sivrikaya, M. A. Khan, S. Albayrak,
"Vehicles of the Future: A Survey of Research on Safety
Issues", IEEE Transactios on Intelligent Transportation
Systems, vol. 18, no. 5, pp. 1046-1065, 2017.