Comprehensive and Practical Vision System for Self-Driving Vehicle Lane-Level Localization
A PROJECT REPORT
Submitted by
BACHELOR OF TECHNOLOGY
in
BONAFIDE CERTIFICATE
We hereby declare that the Major Project entitled "Comprehensive and Practical
Vision System for Self-Driving Vehicle Lane-Level Localization", to be submitted for the
Degree of Bachelor of Technology, is our original work as a team and that the dissertation has
not formed the basis for the award of any degree, diploma, associateship, fellowship
or other similar title. It has not been submitted to any other University or Institution
for the award of any degree or diploma.
Place :
Date :
G. SUNNY ARJUN
P. VARAPRASAD
ACKNOWLEDGEMENTS
We are greatly indebted to our Head of Department, Dr. T. Rama Rao, for his motivation
and guidance throughout the course of the project work.
We express our gratitude to the faculty and lab programmers of the School of Electronics and
Communication for their timely and continuous technical support.
G. SUNNY ARJUN
P. VARAPRASAD
TABLE OF CONTENTS
DECLARATION iii
ACKNOWLEDGEMENTS iv
ABSTRACT viii
LIST OF TABLES ix
LIST OF FIGURES x
ABBREVIATIONS xi
1 INTRODUCTION 1
1.1 Existing System 1
1.2 Proposed System 2
2 LITERATURE SURVEY 3
2.1 "A Sensor-Fusion Drivable-Region and Lane-Detection System for Autonomous Vehicle Navigation in Challenging Road Scenarios", by Qingquan Li, Long Chen, Ming Li, Shih-Lung Shaw, and Andreas Nuchter 3
2.2 "A Multi-Model Lane Detector that Handles Road Singularities", by Raphael Labayrade, Jerome Douret and Didier Aubert 3
2.3 "The Heterogeneous Systems Integration Design and Implementation for Lane Keeping on a Vehicle", by Shinq-Jen Wu, Hsin-Han Chiang, Jau-Woei Perng, Chao-Jung Chen, Bing-Fei Wu and Tsu-Tian Lee 4
2.4 "Lane Marking Based Vehicle Localization Using Particle Filter and Multi-Kernel Estimation", by Wenjie Lu, Emmanuel Seignez, Sergio A. Rodriguez F. and Roger Reynaud 5
2.5 "Integrated Lane and Vehicle Detection, Localization, and Tracking: A Synergistic Approach", by Sayanan Sivaraman and Mohan Manubhai Trivedi 5
3 RESEARCH METHODOLOGY 7
3.1 Statement of the problem 7
3.2 Objectives of the study 7
3.3 Scope of the study 7
3.4 Type of research 8
3.5 Realistic Constraints 8
3.6 Engineering Standards 8
5 RESULTS 20
5.1 LCD 20
5.2 Snaps of the moving vehicle 20
5.3 Advantages 22
5.4 Applications 22
6 CONCLUSION 23
6.1 Future enhancement 23
A CODING 25
A.1 Pin Assignment 25
A.2 Motor Description 25
A.3 LCD Description 26
A.4 Robot control section using ZigBee 27
ABSTRACT
Lane-level localization requires positioning accuracy that ordinary GPS receivers cannot provide, while high-precision alternatives make the process costly and are thus not a sensible solution in many situations. This report presents a new lane line detection algorithm which, combined with alternative lane-level localization techniques based on stereo vision and a particle filter, estimates the position of the vehicle within its lane. With its high accuracy and consistency, the proposed system can be implemented in autonomous driving vehicles for lane-level localization.
LIST OF TABLES
LIST OF FIGURES
ABBREVIATIONS
CHAPTER 1
INTRODUCTION
The future will see the deployment of self-driving vehicles in the areas of indoor automation, transportation and unknown-environment exploration. Implementing self-driving vehicle systems for such tasks is a widely appreciated approach because they handle these tasks more efficiently and reliably. Currently, a growing community of researchers is focusing on the scientific and engineering challenges of these kinds of self-driving vehicle systems. This project tries to address the main challenges in the field of self-driving vehicle autonomous navigation. There are several techniques for effective autonomous navigation, among which vision-based navigation is the most significant and popular technique and the one experiencing the most rapid development.
Other techniques include navigation using ultrasound sensors, LiDAR (Light Detection and Ranging) systems, preloaded maps, landmarks, etc. Navigation using ultrasound sensors does not properly detect narrow obstacles such as the legs of tables and chairs, and hence leads to collisions. LiDAR systems are excellent tools for indoor navigation because of their accuracy and speed, but they are less attractive for large-scale deployment due to their high cost. Navigation based on landmarks and preloaded maps becomes a valid option only when there is prior information about the environment, and thus it does not give a generic solution to the problem of autonomous navigation.
In this scenario, building a cost-effective stereo vision system from regular webcams that can match the performance of commercially available alternatives is highly desirable, and this is the theme of this project.
1.1 Existing System
1. The system is less immune to drastic light changes, occlusions, and object deformation.
2. The iconographic information, which expresses information about exits, city directions, kilometres, etc., has not been analyzed.
3. Analysis of an image sequence, instead of one image at a time, has not been done.
1.2 Proposed System
1. Even under the impact of poor lighting conditions and wet road surfaces, the proposed algorithm is still able to function normally.
CHAPTER 2
LITERATURE SURVEY
Road detection through the use of an onboard camera is an essential task required for the development of Advanced Driving Assistance Systems. Although many solutions have been proposed in the literature, most existing algorithms are not designed to handle road singularities such as freeway exits, double painted lines, zebra road markings, etc. Such conditions can be confusing for vision-based algorithms, although they are often met in real driving situations. In order to tackle this issue, the authors propose the use of various instances of a road model at the same time. The authors first describe the generic road model used and then explain how the different instances of this model are updated. The extracted models can be used either to detect the correct vehicle lane or to obtain a more complete description of the road configuration. Experiments are presented to demonstrate the validity of the approach. An application to multi-lane detection is also presented.
2.4 "Lane Marking Based Vehicle Localization Using Particle Filter and Multi-Kernel Estimation", by Wenjie Lu, Emmanuel Seignez, Sergio A. Rodriguez F. and Roger Reynaud
Vehicle localization is the primary information needed for advanced tasks like navi-
gation. This information is usually provided by the use of Global Positioning System
(GPS) receivers. However, the low accuracy of GPS in urban environments makes it
unreliable for further processing. The combination of GPS data and additional sensors
can improve the localization precision. In this article, a marking-feature-based vehicle
localization method is proposed, able to enhance the localization performance. To this
end, markings are detected using a multi-kernel estimation method from an on-vehicle
camera. A particle filter is implemented to estimate the vehicle position with respect
to the detected markings. Then, map-based markings are constructed according to an
open source map database. Finally, vision-based markings and map-based markings
are fused to obtain the improved vehicle fix. The results on road traffic scenarios using
a public database show that our method leads to a clear improvement in localization
accuracy.
2.5 "Integrated Lane and Vehicle Detection, Localization, and Tracking: A Synergistic Approach", by Sayanan Sivaraman and Mohan Manubhai Trivedi
In this paper, the authors introduce a synergistic approach to integrated lane and vehicle tracking for driver assistance. The approach presented in the paper results in a final system that improves on the performance of both the lane tracking and vehicle tracking modules. Further, the presented approach introduces a novel way of localizing and tracking other vehicles on the road with respect to lane position, which provides information of higher contextual relevance that neither the lane tracker nor the vehicle tracker can provide by itself. Improvements in lane tracking and vehicle tracking have been extensively quantified. Integrated system performance has been validated on real-world highway data. Without specific hardware and software optimizations, the fully implemented system runs at near-real-time speeds of 11 frames per second.
CHAPTER 3
RESEARCH METHODOLOGY
3.1 Statement of the problem
Driving has long been a purely manual activity, and it is becoming difficult for a person to drive on roads with heavy traffic. To reduce this burden on the driver, this model is implemented so that it captures the traffic signals and moves accordingly without human involvement.
3.2 Objectives of the study
This project aims to design and provide a vehicle, useful for exploring an indoor environment, that uses a stereo vision system which is cheap yet able to achieve high accuracy and consistency; to build a vehicle which moves without any human involvement; to monitor the vehicle with a stereo-vision-based technique; and to build a self-driving vehicle system in a cost-effective range.
Reversible image data hiding (RIDH) is a special category of data hiding technique,
which ensures perfect reconstruction of the cover image upon the extraction of the em-
bedded message. The reversibility makes such image data hiding approach particularly
attractive in the critical scenarios, e.g., military and remote sensing, medical images
sharing, law forensics and copyright authentication, where high fidelity of the recon-
structed cover image is required.
3.4 Type of research
RS232
ZigBee
Hence, Zigbee is a low-power, low-data-rate, close-proximity (i.e., personal area) wireless ad hoc network.
Its low power consumption limits transmission distances to 10–100 meters line-of-sight, depending on power output and environmental characteristics. Zigbee devices can transmit data over long distances by passing data through a mesh network of intermediate devices to reach more distant ones. Zigbee is typically used in low-data-rate applications that require long battery life and secure networking (Zigbee networks are secured by 128-bit symmetric encryption keys). Zigbee has a defined rate of 250 kbit/s, best suited for intermittent data transmissions from a sensor or input device.
CHAPTER 4
The main concept of the method is realized in two stages. The first one is image acquisition of the traffic signal. Image acquisition is the capturing of an image through the camera fixed to the laptop. The captured image is then resized by an image scaling process. The traffic signals are already stored in our database, and the resized image is correlated with the images in the database. The prototype moves accordingly on recognition of an image similar to one in the database.
COMPARISON OF
The image captured is converted into a resized image in the image acquisition stage. The resized image is then processed using the correlation coefficient algorithm. The purpose of the algorithm is to check the correlation match and similarity between the captured image and the images stored in the database. Once the captured image correlates with an image in the database, the vehicle moves according to the recognized image.
Figure 4.1: Block diagram
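As an illustration of this matching step, the C++ function below computes the normalized correlation coefficient between a captured grayscale image and a database template of the same size. The report performs this matching in MATLAB, so this function is only an assumed, simplified equivalent of that correlation step, not the project's actual code.

#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

// Pearson correlation coefficient between two equally sized grayscale images.
// Returns a value in [-1, 1]; values close to 1 indicate a strong match.
double correlationCoefficient(const std::vector<std::uint8_t>& img,
                              const std::vector<std::uint8_t>& tpl) {
    if (img.size() != tpl.size() || img.empty()) return 0.0;
    const double n = static_cast<double>(img.size());
    double meanA = 0.0, meanB = 0.0;
    for (std::size_t i = 0; i < img.size(); ++i) { meanA += img[i]; meanB += tpl[i]; }
    meanA /= n; meanB /= n;
    double num = 0.0, denA = 0.0, denB = 0.0;
    for (std::size_t i = 0; i < img.size(); ++i) {
        const double a = img[i] - meanA;
        const double b = tpl[i] - meanB;
        num += a * b; denA += a * a; denB += b * b;
    }
    if (denA == 0.0 || denB == 0.0) return 0.0;  // flat image, correlation undefined
    return num / std::sqrt(denA * denB);
}

In practice the captured image is compared against every template in the database and the best-scoring template above a decision threshold is taken as the recognized sign; the threshold value itself is an implementation choice and is not specified in this report.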
Image Scaling
In computer graphics and digital imaging, image scaling refers to the resizing of a
digital image. In video technology, the magnification of digital material is known as
upscaling or resolution enhancement.
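For illustration, the sketch below shows the simplest form of image scaling, nearest-neighbour resizing of a grayscale image. It is only an assumed example of the resize step; the project itself relies on the built-in resizing available in MATLAB.

#include <cstddef>
#include <cstdint>
#include <vector>

// Nearest-neighbour resize of a grayscale image stored row-major in a vector.
std::vector<std::uint8_t> resizeNearest(const std::vector<std::uint8_t>& src,
                                        int srcW, int srcH, int dstW, int dstH) {
    std::vector<std::uint8_t> dst(static_cast<std::size_t>(dstW) * dstH);
    for (int y = 0; y < dstH; ++y) {
        const int sy = y * srcH / dstH;          // map destination row to source row
        for (int x = 0; x < dstW; ++x) {
            const int sx = x * srcW / dstW;      // map destination column to source column
            dst[static_cast<std::size_t>(y) * dstW + x] =
                src[static_cast<std::size_t>(sy) * srcW + sx];
        }
    }
    return dst;
}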
In our approach, the exposure time is calculated based on the road-surface (ROI) brightness. The road surface is approximately derived from the previous lane line detection. This is helpful especially when the vehicle moves from a shaded area to an unshaded one, or the other way round.
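A minimal sketch of this idea is given below: the mean brightness of the road-surface ROI is computed and the exposure time is scaled towards a target brightness. The target grey level and the clamping limits are assumptions chosen for illustration only; the report does not specify these values.

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Mean brightness of a rectangular ROI inside a row-major grayscale frame.
double roiMeanBrightness(const std::vector<std::uint8_t>& frame, int frameW,
                         int x0, int y0, int roiW, int roiH) {
    double sum = 0.0;
    for (int y = y0; y < y0 + roiH; ++y)
        for (int x = x0; x < x0 + roiW; ++x)
            sum += frame[static_cast<std::size_t>(y) * frameW + x];
    return sum / (static_cast<double>(roiW) * roiH);
}

// Scale the current exposure time so the ROI brightness approaches a target level.
double adjustExposure(double currentExposureMs, double roiBrightness) {
    const double target = 110.0;                       // assumed target grey level
    double next = currentExposureMs * target / std::max(roiBrightness, 1.0);
    return std::min(std::max(next, 1.0), 100.0);       // assumed 1-100 ms clamp
}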
The correlation coefficient algorithm is used to match the template stored in the database against the captured image.
1. The circuit is built around a recording and playback chip that supports voice record-
ing for 16 to 30 seconds and reproduces it clearly.
2. It allows recording from external microphones, circuit of voice recorder and play-
back system. The circuit is built around voice recording and playback IC APR9301-V2
(IC1), voltage regulator 7806 (IC2), npn transistor BC547 (T1), 8-ohm, 0.5W speaker
(LS1), microphone (MIC1) and a few other components.
3. The length of the recorded message depends on the value of the delay given in the MATLAB software.
4. It can be used in different types of applications such as door bells, railway announce-
ment systems and automatic telephone answering devices.
4.5.2 Arduino
Figure 4.4: ATmega328P Pin Diagram
4.5.3 LCD
A 16x2 LCD can display 16 characters per line on 2 such lines. In this LCD each character is displayed in a 5x7 pixel matrix. The LCD has two registers, namely Command and Data.
Figure 4.6: PIN diagram of LCD
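The following Arduino-style sketch shows how such a 16x2 display is typically driven with the LiquidCrystal library. The pin assignment matches the appendix code, while the displayed messages are only assumed examples.

#include <LiquidCrystal.h>

// RS, EN, D4, D5, D6, D7 (same pin assignment as in Appendix A)
LiquidCrystal lcd(A8, A9, A10, A11, A12, A13);

void setup() {
  lcd.begin(16, 2);            // 16 columns, 2 rows
  lcd.print("Self Driving");   // first line
  lcd.setCursor(0, 1);         // move to the second line
  lcd.print("Vehicle Ready");  // assumed example message
}

void loop() {
  // Display content is static in this example.
}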
Initialise LCD
The initialization of the LCD follows the instruction sequence for an 8-bit interface module. After power on, the function calls the delay function for a 30 ms wait. The first instruction after the power-on delay is the Function Set command, issued by calling the Instr LCD function.
Instr LCD
This function sends an instruction to the LCD. It starts by calling the BF LCD function, which checks through the busy flag that the LCD is ready to receive an instruction. The register select line selects the instruction register and the Read/Write line selects the write operation. The enable signal then allows sending out the command. The function finishes by disabling the enable signal to show that the external write cycle is complete.
BF LCD
This function checks the LCD busy flag before executing the next instruction. It configures the LCD data bus for input. The register select line is set to instruction mode and the read/write line to read operation. The function polls the busy flag: it raises the enable signal to read the data from the display data RAM and checks whether the flag is busy (= 1). The function ends when the busy flag is clear. The LCD data bus is then set back to output.
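A minimal Arduino-style sketch of this busy-flag check is shown below. The pin constants are assumptions introduced only for illustration; in the actual project the LCD is driven through the LiquidCrystal library, which handles the busy timing internally.

// Assumed pin assignment for a directly wired LCD (illustrative only).
const int LCD_RS = 22;   // register select
const int LCD_RW = 23;   // read/write
const int LCD_EN = 24;   // enable
const int LCD_D7 = 31;   // data line carrying the busy flag

// Poll the busy flag (data line D7) until the LCD controller is ready.
void waitWhileBusy() {
  pinMode(LCD_D7, INPUT);          // data bus line set for input
  digitalWrite(LCD_RS, LOW);       // instruction register
  digitalWrite(LCD_RW, HIGH);      // read operation
  bool busy = true;
  while (busy) {
    digitalWrite(LCD_EN, HIGH);    // enable pulse latches the status read
    delayMicroseconds(1);
    busy = (digitalRead(LCD_D7) == HIGH);  // busy flag = 1 means still busy
    digitalWrite(LCD_EN, LOW);
  }
  pinMode(LCD_D7, OUTPUT);         // restore the data line to output
}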
4.6 Speaker
A device that converts analog audio signals into the equivalent air vibrations in order
to make audible sound. When CRT monitors were the norm, speakers designed for
computers were shielded to avoid magnetic interference with the CRT’s magnetic coil.
Starting in the 1990s, vendors began to offer higher-quality computer speakers. Similar to home theater and stereo systems, these systems include a pair of small speakers for the midrange and high (treble) frequencies and a large subwoofer for the low end (bass). The small speakers are placed in a left/right stereo orientation, while the subwoofer can be located anywhere on the floor because bass signals are omnidirectional.
A motor driver is an integrated circuit chip which is usually used to control motors in autonomous robots. The motor driver acts as an interface between the Arduino and the motors. The most commonly used motor driver ICs are from the L293 series, such as L293D, L293NE, etc. These ICs are designed to control 2 DC motors simultaneously. The L293D consists of two H-bridges; the H-bridge is the simplest circuit for controlling a low-current-rated motor.
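The Arduino-style sketch below illustrates how two DC motors behind an L293D are typically driven through four direction pins. The pin numbers 10-13 match those initialised in the appendix code, while the helper function names and the two-second timing are assumptions for illustration.

// Direction inputs of the L293D (two H-bridges, one per motor),
// wired to the same pins that the appendix code initialises.
const int LEFT_IN1 = 10, LEFT_IN2 = 11, RIGHT_IN1 = 12, RIGHT_IN2 = 13;

void setup() {
  pinMode(LEFT_IN1, OUTPUT);  pinMode(LEFT_IN2, OUTPUT);
  pinMode(RIGHT_IN1, OUTPUT); pinMode(RIGHT_IN2, OUTPUT);
}

// Both motors forward.
void moveForward() {
  digitalWrite(LEFT_IN1, HIGH);  digitalWrite(LEFT_IN2, LOW);
  digitalWrite(RIGHT_IN1, HIGH); digitalWrite(RIGHT_IN2, LOW);
}

// Both H-bridge inputs low: motors stop.
void stopMotors() {
  digitalWrite(LEFT_IN1, LOW);  digitalWrite(LEFT_IN2, LOW);
  digitalWrite(RIGHT_IN1, LOW); digitalWrite(RIGHT_IN2, LOW);
}

void loop() {
  moveForward();
  delay(2000);     // run forward for two seconds (illustrative timing)
  stopMotors();
  delay(2000);
}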
4.8 ZigBee
Zigbee builds on the physical layer and media access control defined in IEEE standard 802.15.4 for low-rate wireless personal area networks (WPANs). The specification includes four additional key components: the network layer, the application layer, Zigbee Device Objects (ZDOs) and manufacturer-defined application objects. ZDOs are responsible for several tasks, including keeping track of device roles, managing requests to join a network, as well as device discovery and security.
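In this project the ZigBee module behaves as a transparent serial link between the laptop, where MATLAB sends the single characters '1' to '5' (as in the appendix code), and the Arduino. The sketch below is a minimal assumed receiver for that link; the mapping of characters to motions and the use of Serial1 for the ZigBee module are assumptions, since the report does not spell them out.

// ZigBee module on Serial1 of an Arduino Mega acting as a transparent
// serial link; MATLAB sends single command characters '1'..'5'.
const int L1 = 10, L2 = 11, R1 = 12, R2 = 13;   // L293D direction pins

// Set the four H-bridge inputs in one call.
void drive(bool l1, bool l2, bool r1, bool r2) {
  digitalWrite(L1, l1); digitalWrite(L2, l2);
  digitalWrite(R1, r1); digitalWrite(R2, r2);
}

void setup() {
  for (int pin = 10; pin <= 13; ++pin) pinMode(pin, OUTPUT);
  Serial1.begin(9600);           // ZigBee link (assumed wired to Serial1)
}

void loop() {
  if (Serial1.available() > 0) {
    char cmd = Serial1.read();   // command character received from MATLAB
    switch (cmd) {               // assumed command-to-motion mapping
      case '1': drive(HIGH, LOW, HIGH, LOW); break;   // forward
      case '2': drive(LOW, HIGH, LOW, HIGH); break;   // reverse
      case '3': drive(LOW, LOW, HIGH, LOW);  break;   // turn left
      case '4': drive(HIGH, LOW, LOW, LOW);  break;   // turn right
      case '5': drive(LOW, LOW, LOW, LOW);   break;   // stop
    }
  }
}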
4.9 MATLAB
MATLAB allows matrix manipulations, plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other languages, including C, C++, C#, Java, Fortran and Python.
As of 2017, MATLAB has roughly 1 million users across industry and academia. MATLAB users come from various backgrounds of engineering, science, and economics.
Our self-driving vehicle, which moves based on ZigBee, has multidisciplinary aspects spanning electronics, software or computer science, wireless communication, etc. The sensors, the modules that are interfaced and the microcontroller together come under the electronics area. The code for programming the sensors to make them work in coordination, and the software used to burn the code into the controller, come under the software or computer science part. The system uses wireless communication elements like ZigBee for transmission and reception, which come under wireless communication.
CHAPTER 5
RESULTS
5.1 LCD
The LCD is used for displaying the outputs that are received over the ZigBee link; hence ZigBee is used for transmitting the outputs shown on the LCD.
The rover captures a picture using the webcam, processes it with the help of the correlation coefficient algorithm, and moves when the picture matches a picture in the database. Stereo-vision-based architecture is one of the least pondered but most rapidly developing research areas; it has been dealt with in this project, and we have successfully implemented a cost-effective prototype of the stereo camera and self-driving vehicle. This performance is adequate for safe indoor navigation of slowly moving robots. Such systems are also being implemented in commercial vehicles, an effort already started by Mercedes-Benz.
5.3 Advantages
1. Even under the impact of poor lighting conditions and wet road surfaces, the proposed algorithm is still able to function normally.
2. High-precision lane-level vehicle localization.
3. High sensitivity across all kinds of environments.
5.4 Applications
1. Its low cost and promising performance make it suitable for semi-autonomous driving applications.
2. Automatic driver alert systems.
3. The technique is effectively applied in driver assistance systems and road inventory systems.
4. Lane sign detection is widely used to increase robustness and to obtain faster systems for real-time applications.
5. Large progress has also been made recently in lane sign detection and recognition, which has resulted in the first successful, large-scale commercial applications.
6. Lane sign detection can also assist blind people.
CHAPTER 6
CONCLUSION
In this report, a lane-level vehicle localization system is proposed. The system works based on stereo vision and the particle filter. Through extensive on-field tests, it has been proven that the proposed system is able to estimate the vehicle localization information accurately and robustly. More importantly, the system works in real time and has the fewest limitations in practice; it even works on dark nights. Its high level of accuracy and its consistently good performance under different conditions enable its implementation for the navigation of autonomous driving vehicles on structured roads. This report also outlines the implementation of a cost-effective stereo vision system for a slowly moving self-driving vehicle in an indoor environment. Detailed descriptions of the algorithms used for stereo vision, navigation and three-dimensional map reconstruction are included. The self-driving vehicle described here is able to navigate through a completely unknown environment without any manual control. Stereo vision fails when it is subjected to surfaces with few textures and features. The illumination level of the environment is another factor which considerably affects the performance of stereo vision.
6.1 Future enhancement
We can improve the level of accuracy and the performance under outdoor conditions; this will enable implementation for the navigation of autonomous driving vehicles on roads in different conditions.
REFERENCES
[1] A. B. Hillel, R. Lerner, D. Levi, and G. Reddy, "Recent progress in road and lane detection: A survey," Mach. Vis. Appl., vol. 25, no. 3, pp. 727-745, 2014.
[2] Q. Li, L. Chen, M. Li, S.-L. Shaw, and A. Nuchter, "A sensor-fusion drivable-region and lane-detection system for autonomous vehicle navigation in challenging road scenarios," IEEE Trans. Veh. Technol., vol. 63, no. 2, pp. 540-555, Feb. 2014.
[3] M. Montemerlo et al., "The Stanford entry in the urban challenge," J. Field Robot., vol. 25, no. 9, pp. 569-597, 2008.
[4] C. Rose, J. Britt, J. Allen, and D. Bevly, "An integrated vehicle navigation system utilizing lane-detection and lateral position estimation systems in difficult environments for GPS," IEEE Trans. Intell. Transp. Syst., vol. 15, no. 6, pp. 2615-2629, Dec. 2014.
[5] Y. Jiang, F. Gao, and G. Xu, "Computer vision-based multiple-lane detection on straight road and in a curve," in Proc. Int. Conf. Image Anal. Signal Process. (IASP), Apr. 2010, pp. 114-117.
[6] I. Miller, M. Campbell, and D. Huttenlocher, "Map-aided localization in sparse global positioning system environments using vision and particle filtering," J. Field Robot., vol. 28, no. 5, pp. 619-643, 2011.
[7] A. Lopez, J. Serrat, C. Canero, F. Lumbreras, and T. Graf, "Robust lane markings detection and road geometry computation," Int. J. Automotive Technol., vol. 11, no. 3, pp. 395-407, 2010.
APPENDIX A
CODING
#include <LiquidCrystal.h>

LiquidCrystal lcd(A8, A9, A10, A11, A12, A13); // REGISTER SELECT PIN, ENABLE PIN, D4 PIN, D5 PIN, D6 PIN, D7 PIN
int state = 0;
char str[70];
String gpsString = "";
const char *test = "$GPGGA";
String latitude = "No Range ";
String longitude = "No Range ";
int i;
boolean gps_status = 0;

// get_gps() and motionmessage() are defined in parts of the program that are
// not reproduced in this extract; minimal placeholder stubs are given here so
// that the listing compiles on its own.
void get_gps() { /* reads and parses the $GPGGA sentence from the GPS module */ }
void motionmessage() { /* sends a motion alert over the wireless link */ }

void setup() {
  //.....begin.....//
  lcd.begin(16, 2);
  Serial.begin(9600);
  Serial1.begin(9600);
  Serial2.begin(9600);
  Serial3.begin(9600);
  pinMode(3, INPUT);          // motion sensor input
  pinMode(10, OUTPUT);
  pinMode(11, OUTPUT);
  pinMode(12, OUTPUT);
  pinMode(13, OUTPUT);
  //........motor off.....//
  digitalWrite(10, LOW);
  digitalWrite(11, LOW);
  digitalWrite(12, LOW);
  digitalWrite(13, LOW);
}

//...........LCD.....................//
//..........Sensor Reading.............//
void loop() {
  if (digitalRead(3) > 0)
  {
    int motion = digitalRead(3);
    if (motion == 1)
    {
      Serial.println("Motion detected");
      Serial2.println("Motion detected");
      get_gps();
      motionmessage();
      lcd.setCursor(0, 1);
      lcd.print("M:");
      lcd.setCursor(3, 1);
      lcd.print("Yes");
      delay(1000);
    }
    if (motion == 0)
    {
      lcd.setCursor(0, 1);
      lcd.print("M:");
      lcd.setCursor(3, 1);
      lcd.print(" No");
      delay(1000);
    }
  }
}
fprintf(aaa,’1’);
elseif ccf ==2
fprintf(aaa,’2’);
elseif ccf == 3
27
fprintf(aaa,’3’);
elseif ccf == 4
fprintf(aaa,’4’);
elseif ccf == 5
fprintf(aaa,’5’);
end
fclose(aaa);
close all;