A.D.A.S. Project Report
CHAPTER 1
INTRODUCTION
1.1 Introduction
Multi-sensor data fusion is a multidisciplinary technology spanning several application
domains, such as robotics, driver assistance systems, military applications, biomedicine,
wireless sensor networks (WSNs), and video and image processing. It deals with combining
information from multiple sources in order to provide a robust and accurate representation
of the environment or process of interest. Many of the information-fusion techniques now
used in the automotive industry were derived from defence and surveillance applications.
The challenge is to make environment perception more robust by increasing confidence and
reliability, so as to meet the unpredictable demands of the constantly changing surroundings
of an autonomous vehicle (AV). The challenges to be met also impose additional constraints
depending on the cost, architectures and types of sensors used.
The remarkable progress in safety-critical applications and the extensive use of remote
sensing devices in the automotive industry have led to the adoption of tracking and
information-fusion methods that are already widely used in defence and surveillance. The
data fusion community has accordingly been developing applications that combine the
information derived from individual sensors into a more robust description of the
environment, enabling better decisions. The basic problem in multi-sensor systems is to
integrate a sequence of observations from a number of different sensors into a single best
estimate of the state of the environment.
With the rise in accidents caused by driver inattention, the need for Advanced Driver
Assistance Systems (ADAS) has increased. ADAS warn drivers of impending traffic accidents,
or control internal vehicle systems to avoid accidents or reduce collision impact. An
increasing number of modern vehicles have advanced driver-assistance systems such as
electronic stability control, anti-lock brakes, lane departure warning, adaptive cruise control
and traction control. These systems can be affected by mechanical alignment adjustments,
which has led many manufacturers to require electronic resets for them after a mechanical
alignment is performed; a wheel aligner should therefore allow these safety requirements
to be met.
1.2 Motivation
For autonomous vehicles to handle preventive safety applications and accurately judge
driving scenarios, the relevant information is not only the state of the ego vehicle but also
the state of the traffic situation and the road condition. Many complex signal-processing
strategies are used to process the information perceived by the sensors. Although specific
algorithms for this task have been investigated in the literature for more than a decade, the
automotive environment has aspects that need particular attention. For example, while
generic tracking is well studied, deriving specific features of vehicles and pedestrians
requires more specialised algorithms, depending on the types of sensors and the fusion
methodology used. The important requirements here are accurate detection of obstacles and
tracking them in subsequent frames; it should be noted that this is not a trivial task. Many
different algorithms have been proposed in the literature, so different system designs with
corresponding advantages and disadvantages are possible. The same algorithms may also
fail to perform, and their complexity varies, depending on factors such as the number of
sensors used and changing weather conditions.
One of the most studied, and in fact most controversial, aspects of data fusion systems in
the literature is the fusion paradigm. The ego vehicle's environment can be perceived by
remote sensing devices, and abstract raw sensor data can be transformed into a high-level
description with the help of signal-processing algorithms. The steps involved in combining
information from several sensors into one joint description of the environment vary between
the two paradigms, low-level and high-level fusion. Each has its advantages and
disadvantages in terms of overall system performance, and it remains an open question
which architecture is superior.
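The contrast between the two paradigms can be illustrated with a toy sketch for two range sensors observing the same obstacle. The inverse-variance weighting rule, the variances and the function names below are illustrative assumptions, not the implementation used in this project; the point is only where the estimator sits in the pipeline.

```cpp
#include <cassert>
#include <cmath>

// Low-level fusion: raw measurements are combined first and a single
// estimator runs afterwards. Here the combination rule is simple
// inverse-variance weighting (an assumption for illustration).
double fuseLowLevel(double z1, double var1, double z2, double var2) {
    double w1 = 1.0 / var1, w2 = 1.0 / var2;
    return (w1 * z1 + w2 * z2) / (w1 + w2);
}

// High-level fusion: each sensor runs its own estimator (trivially the
// measurement itself in this sketch) and the per-sensor estimates are
// combined afterwards. With trivial estimators the arithmetic coincides;
// in a real system the difference lies in what each stage has access to.
double fuseHighLevel(double est1, double var1, double est2, double var2) {
    return fuseLowLevel(est1, var1, est2, var2);
}
```

With equal variances the fused value is simply the midpoint of the two readings; a more trusted (lower-variance) sensor pulls the result toward its own reading.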
1.3 Objective of this Project
• Build a blind spot detection information system that gives a real-time picture of objects
around the prototype while it is moving, assisting lane changes and feeding the adaptive
cruise control system.
• Develop adaptive cruise control that maintains a constant speed while moving straight
and changing lanes, and maintains a constant distance from the vehicle in front when
overtaking or lane changing is not a safe option.
• Build a lane departure warning system to prevent accidents when the prototype changes
lane without a proper indication or without properly inspecting the adjacent lanes;
wheel-alignment malfunctions are a major cause of such problems.
• Build a park assistance system that detects the free space the prototype requires to park
itself in a parking space.
This prototype with multi-sensor data fusion can be used in ADAS to make a car fully
autonomous, driving from one point on a map to another. It can also serve as a robot for
automated warehouses and automated grocery-store systems, increasing efficiency and
reducing the cost of carrying out operations at a faster pace; another major use is in robotic
delivery systems to reduce transportation cost.
1.4 Project Schedule
• January 2018
1. Selection of sensors
2. Designing of model prototype
3. Literature survey.
• February 2018
1. Assembling and positioning of sensors on prototype.
2. Study of Multi-sensor data fusion techniques with different sensors.
3. Study of different ADAS features involving ultrasonic, camera, IR sensor.
4. Implementing and testing of different ADAS features using Arduino.
• March 2018
1. Programming and image data logging in MATLAB for image processing of video.
2. Extraction of required data from image processing for sensor data fusion.
3. Mounting of DC motor and calibration for synchronous rotation of both motors.
• April 2018
1. Further programming for park assist system and testing for desired detection.
2. Testing of prototype with all systems working simultaneously.
• May 2018
1. Documentation
2. Final presentation of project
CHAPTER 2
BACKGROUND THEORY
2.1 Introduction
In this chapter we discuss the need for the project and the technical advancements that have
taken place in this field, along with a few techniques for resolving present-day problems in
the following sections. The prototype is intended only for simulation and therefore remains
at rest during testing; its own speed is neglected, so only the obstacles around it move.
order to keep the driver active. In order to provide safety for drivers at the intersections and
avoid collisions while turning.
Inspired by the above projects, another integrated project, SAFESPOT (2010) [5], was
started. This project focused on cooperative systems for road safety, with smart vehicles on
smart roads. Its main objective is the advance detection of potentially dangerous situations,
extending the driver's awareness of the surroundings in space and time. One interesting
sub-project is INFRASENS [6], which focuses on acquiring data from the roadside and
combining them with the on-board sensors located on the vehicles. A cooperative fusion
was proposed in order to combine data from laser scanners, digital maps and V2V/V2I
technologies [7].
Another integrated project that gained a lot of attention is HAVEit (2011) [8], which aims
at the long-term realisation of highly automated driving for intelligent transport. Highly
automated driving brings the next generation of advanced driver assistance systems for
increased road safety by letting vehicles drive themselves while keeping the driver in
control of the vehicle whenever necessary. The paper [9] examines in detail the problem
of multi-sensor data fusion for target tracking and road-environment perception in
automated vehicles. Fusion was done at both the central level and the sensor level, and the
important claim supported by their experimental outcomes is that central-level tracking
yields better results than sensor-level tracking.
The project interactIVe (2010-2013) [10], again inspired by PReVENT, aimed at
developing intelligent, integrated, high-performance ADAS applications to enhance driver
safety. More specifically, it aimed to design, develop and implement three groups of
functions, continuous driver support, collision avoidance and collision mitigation, with
demonstrator vehicles consisting of six passenger cars of different classes and one truck.
The major claim relevant to this project is that the high-level fusion paradigm is better
suited than low-level fusion when designing safety- and time-critical applications. More on
these techniques can be read in [11], [12], [13].
CHAPTER 3
SELECTION OF SENSOR AND DESIGNING OF PROTOTYPE
3.1 Introduction
This chapter gives an overview of some of the sensors commonly used for environment
perception in multi-sensor data fusion for ADAS. Due to the nature of electronic devices,
a certain degree of error with respect to the physical phenomenon measured by an
individual sensor is normal. In safety-critical applications, however, the reliance on these
sensors is high, and they must be accurate enough to support decisions that humans fail at,
such as blind spot detection.
[Project workflow: Research → Design → Hardware development → Software development
→ Testing, looping back to development when a test fails, and ending once testing passes.]
3.2 TYPES OF SENSORS
Most sensors used in automotive applications can be classified into two types, active and
passive, according to whether they measure a physical phenomenon by actively probing the
environment or by passively perceiving it. Active sensors emit radiation in order to detect
surrounding objects, and suppress noise by comparing the emitted and received signals,
whereas passive sensors perceive information from the existing illumination of the
environment. Because of the way they are generally built, passive sensors are less expensive
than active ones. Commonly used active sensors are laser-based, radar, ultrasonic and
infrared sensors, while passive sensors can be vision-based, such as cameras. Some
important properties of these sensors, with their advantages and disadvantages, are
reviewed below. The sensors used for the prototype are ultrasonic sensors, infrared sensors
and a camera.
Fig. 3.3 Working of an ultrasonic sensor
The infrared (IR) sensor is a device that emits infrared light and detects its reflection, and
can therefore differentiate between black and white, or dark and light. There are many types
of infrared sensor, such as those found in TV remote controls, in object detection, or even
for measuring a person's heart rate. The infrared sensor used here is digital and returns 1
whenever white is detected and 0 when black is detected. It consists of two main parts, an
IR LED and a photodiode. The IR LED (infrared transmitter) looks like any other LED, but
it emits light in the infrared range, roughly 700 nanometres (nm) to 1 mm. This light is
invisible to the naked eye but can be seen by a camera, so a camera can be used when
checking for a defective device. The photodiode (IR receiver) detects the amount of
reflected light, which depends on the colour of the surface from which it is reflected: black
is close to a perfect absorber and white close to a perfect reflector, and the reflection differs
for intermediate colours, making the module a colour detector. The sensitivity of the colour
detection can be adjusted by changing the trim resistance on the IR sensor module. The IR
sensor is a good choice for this purpose because it is unaffected by poor visibility and tends
to be low in cost. However, infrared systems tend to detect actual lane departures; rather
than being predictive, they are reactive.
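The digital and analog outputs described above can be interpreted as follows. This is a minimal sketch; the enum names and the analog polarity (more reflection giving a higher reading, with the threshold set in hardware by the trim resistor) are our assumptions, not taken from the module's datasheet.

```cpp
#include <cassert>

// Surface classes as seen by the report's IR module:
// digital output 1 = white (reflective), 0 = black (absorbing).
enum Surface { BLACK = 0, WHITE = 1 };

// Interpret the module's digital pin.
Surface classifySurface(int digitalOut) {
    return digitalOut ? WHITE : BLACK;
}

// The module also exposes an analog output; here we mimic the hardware
// threshold in software, assuming more reflection raises the ADC reading
// (polarity depends on wiring, so this is illustrative only).
Surface classifyAnalog(int adcValue, int threshold) {
    return adcValue > threshold ? WHITE : BLACK;
}
```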
[Module pins: Vcc, Ground, Digital output, Analog output]
Fig 3.4 Infrared sensor pin diagram
The other components used in the prototype design are the microcontroller, DC motors and
the DC motor driver circuit. The microcontroller used for this project is an Arduino UNO,
based on the ATmega328 with a clock speed of 16 MHz, programmed through the Arduino
IDE software for the different information systems. All five ultrasonic sensors and both
infrared sensors transmit their data to this microcontroller for sensor data fusion. Another
component used in this design is the L298 DC motor driver module, a dual H-bridge motor
controller connected to the Arduino to obtain the signals for operating the two DC motors
on the rear wheels. This motor driver circuit can drive both motors individually, in either
direction and at different speeds, which is used for the adaptive cruise control action. The
list of components used for the prototype is given in Table 3.1.
Components                        Quantity
DC motor (12 V)                   2
DC motor mounting frames          2
DC motor driver circuit (L298)    1
Wheels (70 mm diameter)           4
Ultrasonic sensor (HC-SR04)       5
Infrared sensor (330E LM358)      2
Camera sensor (Samsung S8)        1
Arduino UNO microcontroller       1
Breadboard                        1
Male-to-female jumper wires       36
Red LEDs                          4
470 ohm resistors                 3
Cardboard sheets                  3
Metal joints for structure        7
Power adapter (AC/DC)             1
Table 3.1 List of components used for the model prototype
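The dual H-bridge interface described above, two direction inputs (IN1/IN2) and a PWM-driven enable pin per motor, can be sketched as follows. The encoding into a single integer and the helper names are our own illustrative choices, not the project's actual code.

```cpp
#include <cassert>

// Pack an L298 command (IN1, IN2, 0-255 PWM duty on EN) into one int:
// bit 9 = IN1, bit 8 = IN2, low byte = PWM value.
int encodeCommand(bool in1, bool in2, int pwm) {
    return (in1 ? 0x200 : 0) | (in2 ? 0x100 : 0) | (pwm & 0xFF);
}

// Standard H-bridge states: IN1=1,IN2=0 -> forward; IN1=0,IN2=1 -> reverse;
// both low -> motor stopped.
int driveForward(int pwm) { return encodeCommand(true,  false, pwm); }
int driveReverse(int pwm) { return encodeCommand(false, true,  pwm); }
int stopMotor()           { return encodeCommand(false, false, 0);   }
```

Because the two channels of the L298 are independent, the two rear motors can be given different commands, which is what allows the different speeds used for the adaptive cruise control action.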
Fig.3.6 Motor driver circuit and microcontroller
[Top view of the prototype layout: ultrasonic sensors at the front, IR sensors between the
front tyres, breadboard and Arduino in the centre, and the DC motor driver with the two
DC motors at the rear tyres.]
CHAPTER 4
METHODOLOGY
4.1 Introduction
In the first part of the work in this chapter, a technique to detect road obstacles around the
prototype using ultrasonic sensors and a camera is designed. The second part focuses on
data association and state estimation, often termed multi-target tracking (MTT), with robust
track management. To estimate the performance of the proposed MTT under different
fusion paradigms with varying sensor combinations, a discrete-event simulation analysis
model is also developed. The software used for associating the sensor data from the
multiple ultrasonic and infrared sensors is the Arduino IDE, in which a blackboard
architecture technique is implemented for sensor data fusion.
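The blackboard idea can be sketched minimally: every sensor posts its latest reading onto a shared board under a named slot, and the fusion logic reads whichever slots it needs. The class, the slot names and the demo helper are our own illustrative assumptions, not the project's actual implementation.

```cpp
#include <cassert>
#include <map>
#include <string>

// Shared blackboard: sensors write, fusion modules read.
class Blackboard {
    std::map<std::string, double> slots;
public:
    void post(const std::string& name, double value) { slots[name] = value; }
    double read(const std::string& name) const {
        auto it = slots.find(name);
        return it == slots.end() ? -1.0 : it->second;  // -1 = no reading yet
    }
};

// Example fusion module: nearest obstacle over the front-facing slots.
double nearestFront(const Blackboard& bb) {
    double d = 1e9;
    for (const char* s : {"front_centre", "front_left", "front_right"}) {
        double v = bb.read(s);
        if (v >= 0 && v < d) d = v;
    }
    return d;
}

// Demo: post three front readings (cm) and fuse them.
double demoNearestFront(double c, double l, double r) {
    Blackboard bb;
    bb.post("front_centre", c);
    bb.post("front_left", l);
    bb.post("front_right", r);
    return nearestFront(bb);
}
```

The appeal of the blackboard style here is that adding a sensor only means posting to a new slot; the fusion modules decide independently which slots matter to them.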
[Fusion architecture diagram: the ultrasonic sensors (front centre, front right, front left,
rear right, rear left) and IR sensors (left, right) feed an association stage, followed by
feature-level data fusion and identity declaration, producing a joint identity declaration.]
4.2 Blind spot detection information system
This system is developed by sensor data fusion of the five ultrasonic sensors mounted on
the front and rear of the prototype model, using the blackboard architecture for multi-sensor
data fusion. The blind spot region is an area to the side and slightly behind the driver's field
of vision that is not covered by the vehicle's mirrors and requires the driver to turn their
head to monitor it before any action such as changing lane. A problem occurs when a
vehicle approaches another vehicle's blind spot and the driver, unable to see the vehicle,
decides to change lane. For example, in Figure 4.1 below, the locations of cars on the road
and the driver's view in the side and rear mirrors are shown. On the right side, the blue car
is in the green car's blind spot: the driver can see only a small part of the blue car and
assumes it is far behind. If the green car then decides to change lane, an accident may
happen. In addition, many road accidents occur in the blind spot region, especially on
highways, due to overtaking, being overtaken, or changing lane. Sometimes drivers also
focus too much on monitoring their blind spot region and lose focus on the road in front of
them.
The prototype designed for this project helps to solve this problem by fusing all the sensor
data to assist the driver, or an autonomously driven vehicle, in detecting multiple objects
in the blind spots. In this system, the ultrasonic sensor at the front centre detects objects in
the centre lane; its safe distance is set at 35 cm, below which the lane assist system is
activated. The safe distance for the side sensors (front right, front left, rear right and rear
left) is set at 49 cm. The front side sensors are mounted at 45 degrees so as to track objects
in the left and right lanes before changing lane once an object is detected in the centre lane.
Objects are continuously detected on all five sides of the prototype, with the sensor data
sent continuously to the microcontroller, where the time of flight is converted to distance
as discussed in topic 3.1.1.
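The time-of-flight conversion and the two safe-distance checks above can be sketched as plain functions. The conversion constant (sound travelling roughly 0.034 cm per microsecond, with the echo time halved for the round trip) matches the calculation in the annexure; the function names are ours.

```cpp
#include <cassert>
#include <cmath>

// HC-SR04 echo pulse width (microseconds) to distance (centimetres).
double echoToCm(double durationUs) {
    return durationUs * 0.034 / 2.0;  // speed of sound, round trip halved
}

// Report's thresholds: 35 cm for the front-centre sensor,
// 49 cm for the four angled side sensors.
bool centreTooClose(double durationUs) { return echoToCm(durationUs) < 35.0; }
bool sideTooClose(double durationUs)   { return echoToCm(durationUs) < 49.0; }
```

For example, an echo of 2000 us corresponds to 34 cm, which trips the centre-lane threshold and activates the lane assist system.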
Fig. 4.3 Arrangement of ultrasonic sensors for blind spot information system.
Adaptive cruise control (ACC) is an intelligent form of cruise control that slows down and
speeds up automatically to keep pace with the object in front of the prototype. Here the
sensor data from the blind spot information system is used for object detection. A maximum
speed is set in the algorithm, just as with ordinary cruise control; an ultrasonic sensor then
watches for objects ahead and locks on to the car in the lane at the programmed maximum
speed, which should be cross-checked against the legal speed limit for the particular type
of road being driven. ACC also changes lane at nearly the same cruise speed whenever it
detects an object in its lane. In this situation the prototype first tries to detect objects in the
left lane, on both the front and rear sides; the rear side is also checked so as to avoid a
collision with an object already moving in the left lane at a higher speed than the prototype.
If no object is detected in the left lane, the left indicator is turned ON automatically and
turned OFF as soon as the lane change completes. If an object is detected only on the rear
side of the left lane and the right lane is completely open for a safe lane change, the right
indicator is turned ON and OFF in the same way. If the left lane is busy and objects are
also moving in the right lane, the prototype stays in its lane, maintaining a constant distance
from the object moving in front of it. In production vehicles, ACC is now almost always
paired with a pre-crash system that alerts the driver and often begins braking. For the
simulation in this project, relative velocity is ignored, so the prototype remains at rest while
its wheels run as in normal conditions during testing of all the scenarios discussed above.
In ACC, the speed of the DC motors mounted on the prototype is varied by the L298 dual
H-bridge DC motor driver circuit by means of pulse-width modulation (PWM). By
changing the PWM value sent from the microcontroller to the driver circuit, the DC motor
runs at a particular speed; the PWM value can be varied from 0 to 255. The change of linear
speed (cm/s) and RPM (revolutions per minute) with the PWM value can be observed in
the table below, and the trend can be seen in the graph.
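The lane-change priority described above (left lane first, then right lane, otherwise follow at constant distance) can be sketched as a single decision function. The function name and the string labels are our own shorthand for the report's behaviour, not code from the prototype.

```cpp
#include <cassert>
#include <string>

// Decide the manoeuvre from the blind-spot sensor results:
// prefer the left lane when both its front and rear zones are clear,
// else the right lane, else keep lane and follow via ACC.
std::string chooseManoeuvre(bool leftFrontClear, bool leftRearClear,
                            bool rightFrontClear, bool rightRearClear) {
    if (leftFrontClear && leftRearClear)   return "change_left";
    if (rightFrontClear && rightRearClear) return "change_right";
    return "follow";  // maintain constant distance behind the lead object
}
```

The indicator handling follows directly from the result: the matching indicator is switched ON before the manoeuvre and OFF once the lane change completes.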
[Graph: linear speed (cm/s) and RPM versus PWM value (0-255).]
Table 4.1 PWM vs velocity
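Table 4.1's measured values are not reproduced here; as a rough stand-in, a sketch of the PWM-to-speed mapping can assume speed grows approximately linearly with PWM above a dead band below which the motor stalls. Both constants below are illustrative assumptions, not measurements from the prototype.

```cpp
#include <cassert>
#include <cmath>

const int    kDeadBand = 50;     // assumed PWM below which the motor does not turn
const double kMaxRpm   = 120.0;  // assumed RPM at full duty (PWM = 255)

// Linear interpolation from PWM duty to motor RPM above the dead band.
double pwmToRpm(int pwm) {
    if (pwm <= kDeadBand) return 0.0;
    return kMaxRpm * (pwm - kDeadBand) / (255.0 - kDeadBand);
}
```

On the real prototype the shape of this curve is exactly what Table 4.1 and the accompanying graph capture, so the constants would be fitted from those measurements.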
adaptive cruise control system back to keep moving forward. It is assumed that lane
markings are present throughout the entire road. In this system the IR sensors act as a
backup safety measure in case the camera is unable to track the lane markings on the road
through image processing.
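The backup arrangement described above, camera first and IR sensors as fallback, can be sketched as follows. The return codes and the assumption that a digital 1 means a white marking is under that side's sensor are ours; the report does not specify this interface.

```cpp
#include <cassert>

// Lane departure decision: 0 = keep lane, -1 = drifting left, 1 = drifting right.
// Use the camera's drift estimate when image processing yields a valid track;
// otherwise fall back to the digital IR sensors (1 = white marking detected).
int laneDeparture(bool cameraValid, int cameraDrift, int irLeft, int irRight) {
    if (cameraValid) return cameraDrift;          // trust image processing first
    if (irLeft == 1 && irRight == 0) return -1;   // left sensor over the marking
    if (irRight == 1 && irLeft == 0) return 1;    // right sensor over the marking
    return 0;                                     // centred or no marking info
}
```

When both IR sensors see a marking at once (e.g. crossing a stop line) the sketch deliberately reports no departure, since that pattern is ambiguous.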
Fig. 4.7 Sensor arrangement for park assist system.
CHAPTER 5
RESULT ANALYSIS
5.1 Introduction
This chapter provides details of the evaluation of the proposed architecture using real-time
and simulated sensor data. All experiments were conducted on an Arduino UNO, whose
ATmega328 is a CMOS 8-bit microcontroller based on the AVR enhanced RISC
architecture. Image processing was done in MATLAB on a Windows 10 machine with a
2.4 GHz processor, which also served as the simulation interface. The experiments were
designed around four research problems: blind spot detection, lane departure, adaptive
cruise control and park assist. The first part focuses on detecting potential road obstacles,
while the second deals with predicting the straight-line movement of the prototype. The
third part concerns the movement of the prototype using information from the first two
parts. The fourth problem is solved by changing the sensor angle for timed detection, which
helps the prototype detect free space and avoid collisions.
5.2.1 Blind spot detection information system
Turning ON the left indicator when an object is detected in the centre lane.
Turning ON the left indicator when objects are detected in the centre and left lanes.
Actuating the brakes and adapting speed according to the distance from the object in the centre lane.
Fig. 5.1 Object detection simulation for blind spots
5.2.2 Lane departure warning system
5.2.3 Park assist system
The prototype continues to move when the free space is not safe for parallel parking.
The prototype activates the brakes and gives an indication to notify the other car.
Fig 5.4 Park assist system simulation
5.2.4 Object detection and ground-truth labelling using a vision-based sensor
Fig. 5.5 Image processing for ground truth labelling in MATLAB.
CHAPTER 6
CONCLUSION AND FUTURE SCOPE OF WORK
6.1 CONCLUSION
In this project we focused on how multi-sensor data fusion supports different advanced
driver assistance systems, in which the vehicle can be controlled automatically with the
help of sensors mounted on it. We concentrated on a few functions: the blind spot
information system, lane departure warning system, adaptive cruise control and park assist
system. To realise these functions we designed a model prototype with different sensors,
including ultrasonic sensors that detect objects in the surroundings and inform the driver
about the blind spots; these ultrasonic sensors also notify the driver about surrounding
objects while cruise control is active. For adaptive cruise control, information from the
on-board sensors is collected and the vehicle adjusts its speed according to the objects
detected in the surroundings. The data collected from the ultrasonic, infrared and camera
sensors helps the prototype change lanes by detecting the objects on its front and rear sides.
These on-board sensors also help the driver park the vehicle wherever sufficient space is
available. The architecture used for the multi-sensor fusion of the different sensors is the
blackboard architecture.
improve the shapes perceived. The model has also been kept flexible to allow further
integration of knowledge about the targets by combining additional sensor values in the
high-level architecture. In the second part, the robustness of the multi-target tracker could
be increased by considering data association techniques such as JPDA and filtering
techniques such as particle filters.
REFERENCES
Journal / Conference Papers
1. Martin Buechel, "An automated electric vehicle prototype showing new trends in
automotive architectures", IEEE 18th International Conference, 2015.
2. George Thomaidis, Christina Kotsiourou, Grant Grubb, "Multi-sensor tracking and lane
estimation in highly automated vehicles", IET Intelligent Transport Systems, August 2012.
3. D. Bajpayee and J. Mathur, "A comparative study about autonomous vehicle", 2015
International Conference on Innovations in Information, Embedded and Communication
Systems (ICIIECS), Coimbatore, 2015.
4. Malavika Panicker, Tanzeela Mitha, "Multisensor data fusion for an autonomous ground
vehicle", 2016 Conference on Advances in Signal Processing.
5. Elena Cardarelli, Lorenzo Sabattini, "Multi-sensor data fusion for obstacle detection in
automated factory logistics".
6. Bahador Khaleghi, Saiedeh N. Razavi, "Multisensor data fusion: antecedents and
directions", 2009 International Conference on Signals, Circuits and Systems.
Reference / Hand Books
[1] Y. Bar-Shalom, Multitarget-multisensor tracking: applications and advances, vol. III,
Artech House, 2000.
[2] D. Hall and J. Llinas, "An introduction to multisensor data fusion," Proceedings of the
IEEE, 1997.
[3] A. Gelb, Applied Optimal Estimation, MIT Press, 1974.
[4] M. Schulze, T. Mäkinen, I. Joachim, F. Maxime and K. Tanja, "Preventive and active
safety applications," Irion Management Consulting GmbH, 2008.
[5] SAFESPOT, "Cooperative vehicles and road infrastructure for road safety." Available:
http://www.safespot-eu.org/.
[6] K. Matti, L. Jukka, L. Tamas and B. Arpad, "Specifications for infrastructure based
sensing, part A - sensing systems and data fusion," SAFESPOT, 2007.
[7] G. Torsten, S. Roland and L. Andreas, "Socio-economic assessment of the SAFESPOT
cooperative systems - methodology, final assessment results and deployment conclusions," in
IEEE Intelligent Vehicles Symposium, 2011.
[8] HAVEit, "Highly automated vehicles for intelligent transport." Available:
http://www.haveit-eu.org/. [Accessed November 2015].
[9] T. George, K. Christina, G. Grant, L. Panagiotis, K. Giannis and A. Angelos, "Multi-
sensor tracking and lane estimation in highly automated vehicles," in IET Intelligent Transport
Systems, 2012.
[10] A. Giancarlo, A. Angelos, M. Sarah, J. Emma and F. Felix, "Accident avoidance by
active intervention for intelligent vehicles," InteractIVe, 2014.
[11] F. Nikos, T. Manolis, L. Panagiotis and A. Angelos, "Object perception algorithms for
multiple homogeneous sensors with all-around vehicle coverage," in 20th ITS World Congress
proceedings., 2013.
[12] T. Manolis, F. Nikos, L. Panagiotis and A. Angelos, "Improved road geometry estimation
by fusing multiple sources of information: the interactIVe approach," in 20th ITS World
Congress proceedings., 2013.
[13] C.-G. R. Omar, V. Trung-Dung, A. Olivier and T. Fabio, "Fusion framework for moving-
object classification," in FUSION 2013 Conference proceedings, 2013.
ANNEXURES
Ultrasonic distance measurement (HC-SR04) on the Arduino; pin numbers are illustrative:

const int trigPin = 9;             // trigger pin of the HC-SR04
const int echoPin = 10;            // echo pin of the HC-SR04
long duration;                     // echo pulse width in microseconds
float distance;                    // computed distance in cm

void setup() {
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
}

void loop() {
  digitalWrite(trigPin, LOW);      // settle, then send a 10 us trigger pulse
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);
  duration = pulseIn(echoPin, HIGH);   // time of flight of the echo
  distance = duration * 0.034 / 2;     // speed of sound 0.034 cm/us, round trip halved
}
PROJECT DETAILS
Student Details
Student Name JESAL NAR
Register Number 140921340 Section / Roll No A/25
Email Address [email protected] Phone No (M) 9901758329
Student Name NISTALA SURYA ARCHAN
Register Number 140921166 Section / Roll No A/
Email Address [email protected] Phone No (M) 9742369666
Project Details
Project Title Sensor data fusion for ADAS (Advanced Driving Assistance
System)
Project Duration 4 months Date of reporting 4 Jan, 2018
Organization Details
Organization Name Manipal Institute of Technology
Full postal address Academic Block 5
with pin code Manipal-576104, Karnataka
Website address https://manipal.edu/mit.html
Internal Guide Details
Faculty Name Ms. Sravani Vemulapalli
Full contact address Department of Instrumentation and Control Engg, Manipal Institute of
with pin code Technology, Manipal – 576 104 (Karnataka State), INDIA
Email address [email protected]