
Developing intelligent blind spot detection system for Heavy Goods Vehicles

Pasi Pyykönen, Ari Virtanen, Arto Kyytinen

Abstract— Collisions between Heavy Goods Vehicles and Vulnerable Road Users such as cyclists or pedestrians often result in severe injuries to the weaker road users. Blind Spot Mirrors and advanced Blind Spot Detection systems assist in avoiding such collisions; Blind Spot Mirrors, however, are only useful if drivers are trained to use them. This paper describes the development of a monitoring solution to assist in truck driver training. The system can also be used as a blind spot detection system to warn truck drivers. The work is performed within the DESERVE project, which aims at designing and developing a Tool Platform for embedded Advanced Driver Assistance Systems (ADAS) to exploit the benefits of cross-domain software reuse, standardised interfaces, and easy and safety-compliant integration of heterogeneous modules, in order to cope with the expected increase in function complexity and the pressing need for cost reduction.

Keywords—driver monitoring, blind spot detection, heavy goods vehicle, ADAS

I. INTRODUCTION

The DESERVE (DEvelopment platform for Safe and Efficient dRiVe) project, funded by the European Commission under the ECSEL Joint Undertaking programme, aims at designing and developing a Tool Platform for embedded Advanced Driver Assistance Systems (ADAS) to exploit the benefits of cross-domain software reuse, standardised interfaces, and easy and safety-compliant integration of heterogeneous automotive modules.

The DESERVE project has selected 22 different modules [1] for implementing 11 driver support applications chosen according to a user needs analysis. The developed applications are tested in different demonstrations to show that the platform is not limited to a single vehicle type. In this paper we discuss the training truck application, built in co-operation between VTT and TTS.

Drivers of heavy goods vehicles have similar inattention problems as passenger car drivers, but the consequences of traffic accidents between heavy goods vehicles and other road users are more severe than those involving passenger cars [2]. Scenarios in which a vehicle turns into the path of a cyclist, or pulls out into the path of an oncoming cyclist, are the most frequently occurring accident scenarios between vehicles and cyclists at intersections [3].

There have been improvements in safety through legislation; for example, side guards and extra mirrors for minimising blind spot areas are nowadays required. Nevertheless, these extra mirrors increase safety only if the driver actually looks at them. Therefore, improving the driver's professional skills is essential for improving overall safety. One of the targets of the project was to introduce driver monitoring in the training vehicles of professional truck drivers. For the training supervisor, it is challenging to recognise all situations in which the candidate is not paying sufficient attention to traffic during the driving session. Thus, the driver monitoring system helps to gather such data during the education period.

More advanced ITS solutions are Blind Spot Detection systems, which detect objects in blind spots and warn drivers. Blind Spot Detection systems have the capacity to save up to 66 lives and prevent around 10 000 injuries in Europe yearly by 2030 at full system penetration [4].

II. DESERVE PLATFORM

The automotive industry has set out a roadmap from driver assistance to the automation of driver tasks. Automation requires a number of sensors, actuators and algorithms in the in-vehicle platform. The system's perception ability should be very close to human perception capabilities in order to cope with the large variety of traffic conditions and to adapt assistance functions, or even to conduct manoeuvres in traffic autonomously.

The DESERVE project has taken up this challenge by [5]:

• drafting a new software architecture (to master complexity, enable seamless integration of new advanced driver assistance system (ADAS) functions and reduce the overall cost of components);

• designing and validating a novel design and a more efficient development process for new ADAS functions, enabled by a platform (to reduce the development time of ADAS);

• designing and validating a novel platform that enables this development process (to reduce the development time of ADAS).

The idea of DESERVE is that the architecture, design and development process are interlinked. New ADAS functions are designed within the existing software architecture and, after having been developed, are added to the software architecture. The platform enables this interlinked process, as seen in Fig. 1 [5].
978-1-4673-8200-7/15/$31.00 ©2015 IEEE 293

Fig. 1. Modularisation of ADAS functions from a streaming-data perspective [5]. Sensor data flow from the physical level through low-, mid- and high-level processing (perception, application, and advice/warning/information provisioning with activation) back down to the actuators.

Fitted to the in-vehicle approach, this gives a modular software architecture for the in-vehicle platform, as depicted in Fig. 2.

Fig. 3. Example view of the RTMaps development environment [6].
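The layered modularisation of Fig. 1 can be illustrated with a toy streaming pipeline. This is our own sketch for illustration only; the component names and the decision logic are assumptions, not part of the DESERVE platform.

```python
# Sketch of the Fig. 1 modularisation: streaming sensor data flows through
# perception -> application -> information provisioning. The stage names and
# the toy thresholds below are illustrative assumptions.

def perception(frame):
    """Low/mid level: turn raw sensor readings into nearby detected objects."""
    return [obj for obj in frame["objects"] if obj["range_m"] < 5.0]

def application(obstacles, gaze_area):
    """Mid/high level: keep only obstacles the driver is not looking towards."""
    return [o for o in obstacles if o["side"] != gaze_area]

def information_provisioning(warnings):
    """High level: map decisions to warning activations for the HMI/actuators."""
    return [f"warn_{w['side']}" for w in warnings]

def pipeline(frame, gaze_area):
    return information_provisioning(application(perception(frame), gaze_area))

# Example frame: one close obstacle on the right, one distant on the left.
frame = {"objects": [{"side": "right", "range_m": 2.0},
                     {"side": "left", "range_m": 12.0}]}
```

The point of the layering is that each stage is a separate, reusable module behind a standardised interface, so an application-level function can be swapped without touching perception or actuation.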

Fig. 2. DESERVE's in-vehicle platform software architecture [5].

III. IMPLEMENTATION

The selected ADAS functions for the training vehicle are:

• Blind Spot Detection
• Collision warning
• Safe start functionality
• Lane change assistant

Because the training vehicle is intended to be used in daily traffic, its street legality cannot be compromised. This limits control of the vehicle actuators to situations in which the vehicle is not moving; in other cases only warnings are allowed.

The design of the system follows the principles developed in the DESERVE project. The development platform was RTMaps from Intempora S.A. [6], which is one of the development platforms adopted in the DESERVE platform. RTMaps provides reusable components for the typical sensors and processing algorithms. Component data flows are connected via an easy-to-use graphical interface, and a prototype system can be built without any programming.

A. Driver monitoring

The driver monitoring functionality is implemented using three RGB HD cameras installed in the cabin of a training vehicle. The aim of the driver monitoring is to ensure that critical safety information has been properly registered by the driver. The locations of the cameras are optimised in such a way that the driver's face is in the field of view of a camera even if the driver is not turning the head to see out of the side windows (see Fig. 5). However, monitoring is more challenging because the driver needs to turn the head to see out of the large cabin of a heavy goods vehicle [7]. Therefore, this implementation requires a new way of adapting the driver monitoring system to detect the driver's gaze direction and to calculate an activity index of driver awareness.

The main focus in driver monitoring is to classify the driver's gaze direction to match the view areas inside the cabin. For this purpose the driver's view area (outside the cabin) is divided into 3 main and 7 smaller, more detailed classification areas (Fig. 4). These areas are selected based on user experience and contain the areas that need more attention when the driver is trying to see outside the vehicle or is performing driving tasks; they include the blind spot mirrors, the dashboard and the side mirrors.

After the cameras have detected and calculated the driver's gaze direction, the gaze direction vectors are classified based on these areas. The first phase of the algorithm classifies which of the 3 main areas (Fig. 4) the gaze direction vector is pointing to. With this information, one of the cameras is selected for more detailed classification. Once the camera is selected, the gaze direction is classified to connect it to one of the classification areas shown in Fig. 4.
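The two-phase classification described above can be sketched as follows. The yaw ranges and the detailed-area identifiers are illustrative assumptions on our part; the paper specifies only that main areas exist and that detailed areas are numbered between 11 and 32 (Fig. 4), not their exact boundaries.

```python
# Two-phase gaze classification sketch. All angle limits and area ids are
# assumed for illustration; the real boundaries come from the cabin geometry.

MAIN_AREAS = {              # main view area -> assumed yaw range (degrees)
    "left":  (-90.0, -25.0),
    "front": (-25.0,  25.0),
    "right": ( 25.0,  90.0),
}

DETAILED_AREAS = {          # (main area, assumed yaw range) -> detailed area id
    ("left",  (-90.0, -60.0)): 11,   # left blind spot mirror (assumed)
    ("left",  (-60.0, -25.0)): 12,   # left side mirror (assumed)
    ("front", (-25.0,  -5.0)): 21,   # dashboard (assumed)
    ("front", ( -5.0,  25.0)): 22,   # windscreen (assumed)
    ("right", ( 25.0,  60.0)): 31,   # right side mirror (assumed)
    ("right", ( 60.0,  90.0)): 32,   # right blind spot mirror (assumed)
}

def classify_main(yaw_deg):
    """Phase 1: pick the main view area; this also selects which camera
    performs the more detailed classification."""
    for area, (lo, hi) in MAIN_AREAS.items():
        if lo <= yaw_deg < hi:
            return area
    return None  # gaze outside all monitored areas

def classify_detailed(main, yaw_deg):
    """Phase 2: refine the gaze within the selected main area."""
    for (area, (lo, hi)), code in DETAILED_AREAS.items():
        if area == main and lo <= yaw_deg < hi:
            return code
    return None

def classify_gaze(yaw_deg):
    main = classify_main(yaw_deg)
    if main is None:
        return None, None
    return main, classify_detailed(main, yaw_deg)
```

For example, a gaze vector pointing 70° to the left would first select the left main area (and the corresponding camera), and then refine to the left blind spot mirror area.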

294

Authorized licensed use limited to: VTU Consortium. Downloaded on October 15,2024 at 12:02:22 UTC from IEEE Xplore. Restrictions apply.
B. Obstacle detection

The obstacle detection functionality is focused on the blind spot areas of the training truck, which are shown in Fig. 6. The frontal area has a blind spot very close to the vehicle nose: from the driver's seat it is not possible to see an obstacle whose height is less than 1.5 metres. The other blind spots are on the left and right sides of the cabin. A limitation on sensor installations is that protrusions of more than 50 mm from the vehicle are not allowed by regulations.

Fig. 6. Environment sensor setup in the Iveco driver training vehicle. Stereo camera installation locations are marked with red circles. An approximation of the field of view is marked on the ground.

Fig. 4. The driver's view inside the cabin is divided into 3 main and 7 more detailed classification areas. The small areas, numbered between 11 and 32, point to areas of interest, i.e. the blind spot mirrors, dashboard, etc.

The classified gaze direction is used as input for fusing the results of the obstacle detection and driver monitoring systems. If the obstacle detection functionality detects an obstacle outside the vehicle, the system uses the gaze direction information to decide whether the driver is paying sufficient awareness and attention to the obstacle. In most cases this means that the gaze direction of the driver is such that the driver should be able to detect the object through the blind spot mirrors or windows.
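This fusion of gaze classification with obstacle detection can be sketched as a simple rule, assuming a short history of recently visited gaze areas is kept. The area identifiers and the notion of "recent" are our assumptions, not the system's actual parameters.

```python
# Sketch of the gaze/obstacle fusion decision. The detailed view areas
# (Fig. 4) through which an obstacle on a given side could plausibly be
# seen by the driver; area ids are illustrative assumptions.

AREAS_COVERING_SIDE = {
    "left":  {11, 12},   # left blind spot mirror, left side mirror (assumed)
    "front": {21, 22},   # dashboard, windscreen (assumed)
    "right": {31, 32},   # right side mirror, right blind spot mirror (assumed)
}

def driver_aware(obstacle_side, recent_gaze_areas):
    """True if the driver's recent gaze visited a view area covering the
    side on which the obstacle was detected."""
    covering = AREAS_COVERING_SIDE.get(obstacle_side, set())
    return any(area in covering for area in recent_gaze_areas)

def fuse(obstacle_side, recent_gaze_areas):
    """Warn only when an obstacle is present and the driver has not
    looked towards it recently."""
    if obstacle_side is None:
        return "no_warning"
    if driver_aware(obstacle_side, recent_gaze_areas):
        return "no_warning"
    return "warn"
```

The side on which a warning is raised could then also be used to select the direction of the audible warning described in the HMI section.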

Fig. 5. Development view of the gaze detection system. Red bounding boxes show the image areas in which faces are searched.

The test truck also collects data concerning the behaviour of the driver. The behaviour is analysed in different situations (braking, acceleration, curve driving, turning of the steering wheel, etc.) using the driver monitoring system installed in the cabin dashboard. The monitoring system detects the driver's gaze and head activity indexes. Rules have been created for the gaze orientation variation in various driving situations, based on careful, economical and safe driving and on perception of the driving environment. For example, when starting to reverse, the driver should pay attention to the side mirrors; when changing lanes, the driver needs to check the free space in the mirror first.

In order to find the best combination of overall performance, reliability and cost, different sensor setups will be used for the obstacle detection functionality. The aim is to compare the performance of three different sensor setups. The following sensors are used:

• Vislab 3DV-E stereo camera system with a 639×476 image array and a 57° field of view [8]
• Continental SRR 20X 24 GHz Short Range Radar sensor; range 50 m, lateral angle 75°, vertical angle 12° [9]
• Maxbotix MaxSonar MB7047 I2CXL ultrasonic range finders; range up to 6.25 m [10]

The sensor set 1 consists of three Vislab 3DV-E cameras: one in the front and one on each side of the vehicle (see Fig. 6). All cameras are installed near the top of the vehicle with the field of view pointing downwards. Thus any object is elevated from ground level and obstacle detection is rather straightforward. Moreover, the setup can detect small objects, for example a small child lying on the ground. Fig. 7 shows an example of how a human is seen in the depth image and in the original grey-scale image taken from the front camera.

The sensor set 2 has a single Vislab 3DV-E stereo camera in the front. Additionally, three Continental SRR 20X radars

will be installed under the cargo bed: one on the left side, one on the right side and one at the rear end of the vehicle.

The sensor set 3 has one Vislab 3DV-E camera in the front and several ultrasonic range finders installed under the cargo bed.

Fig. 7. Disparity image from the frontal stereo camera.

C. Vehicle interface

Our test vehicle is an Iveco Stralis 560. Iveco supports the project by providing the full documentation of the electrical system. Therefore, vehicle-internal signals on the CAN bus, such as lane detection, axle loads, fuel consumption, speed, braking, engine torque, steering angle, selected gear and other valuable information provided by the vehicle, can be used for system development. There are certain modules, such as the brakes, where alteration is not allowed for safety reasons, and one has to keep in mind that the vehicle has to stay street legal after the implementation of the selected ADAS functions. On the other hand, there is an analogue interface to, for example, the switches of the parking brake. According to the implemented ADAS functions, the system triggers different warning devices (sound, voice signal, light, vibration, etc.) to alert the driver if necessary. There are built-in warnings for collision avoidance and lane keeping, using radars and a camera, as standard features of the vehicle.

D. Human-machine interface

The images from the cameras for obstacle and gaze detection are not shown to the driver. The human-machine interface (HMI) is based on audible and visual warnings and a bird's-eye view of the vehicle. The latter is based on the ASL360 surround view system from Continental [11], as seen in Fig. 8. The ASL360 can combine up to six fish-eye cameras into a single image. In our case, only four cameras (front, left, right and rear) were installed around the vehicle. After installation, a calibration process is required to produce an undistorted single image. By adding two cameras to the current camera setup, a better result could be obtained for the 10-metre-long vehicle. The ASL360 contains trigger inputs and the possibility to program different views on the display.

Audible warnings can be produced through the vehicle's audio system according to the direction of the detected safety hazard. In addition to the audible warnings, a set of lights is installed under the cargo bed and controlled via the computer system. These lights have two purposes. The first is to provide additional illumination to improve the driver's ability to see what is happening in the critical vehicle section on that side if there is poor visibility outside. The second is to warn the Vulnerable Road User about the dangerous situation.

If the obstacle detection system detects an obstacle, for example on the right side of the vehicle, an audible warning is given from that side and the right side of the vehicle is emphasised on the display image.

Fig. 8. HMI display showing a 360° view from the ASL360 camera system.

IV. RESULTS

We have performed laboratory and outdoor tests with different sensor systems for driver monitoring and object detection. The first tests for driver monitoring were performed in a simulator environment (Fig. 9). The main focus was to specify the camera constellation and to assess whether the camera technology is reliable enough for multipoint driver monitoring.
Fig. 9. VTT's simulator environment for testing vehicle sensors. The major part of the algorithm development for driver monitoring was performed in the simulator environment.

After the first tests in the laboratory, the sensor systems were installed in the Iveco Stralis 560 test vehicle for outdoor tests. The tests were performed during the winter of 2014-2015 on the road network of southern Finland. The main effort of the tests was concentrated on finding the best sensor constellation for driver and environment monitoring. At the moment, the sensor systems are tested separately; after this testing, the main focus will be on fusing the driver monitoring system output with the obstacle detection.

A. Obstacle detection

Initial tests of the obstacle detection system show that the stereo imaging system is capable of detecting very low obstacles. Fig. 10 presents a human lying on the ground in front of the vehicle; the obstacle can be detected from the disparity image (red rectangle). The second example, in Fig. 11, presents the detection of a motorcycle at the side of the vehicle. The motorcycle is hidden in the shadow, but is clearly detectable from the disparity image.

Fig. 10. Detection of a low obstacle in front of the vehicle. The obstacle is seen in the disparity image on the right as a green colour inside the red rectangle.

Fig. 11. Motorcycle driving in the blind spot. The obstacle is hidden in the vehicle's shadow, but it can still be detected from the disparity image on the right (red rectangle).

B. Driver monitoring

The driver monitoring system was tested with different drivers in both a stationary and a moving vehicle. Based on the first results from the tests, gaze detection in large cabins is a challenging task, especially over long distances. At the moment, the tests are concentrating on finding the best camera and illumination combination to find drivers' eyes even in challenging environmental conditions and with varying drivers.

Fig. 12 illustrates three different scenarios which decrease the performance of driver eye tracking in the driver monitoring system. The first row shows a typical situation when driving in bright lighting (against the sun): the driver needs to almost close the eyes, making it difficult for the algorithm to detect the pupils.

The second row shows a typical situation in which the driver is reversing the vehicle and needs to turn the head to see clearly outside the cabin. In this case, eye tracking is challenging because the side window does not provide any installation points for the camera with a direct view of the driver's eyes.

The third tested situation concerned drivers wearing eyeglasses. In this case the camera algorithm can easily detect and find the driver's face, but detecting the pupils is challenging: the lenses can easily distort the eye area and therefore make the pupils difficult to detect.

Using these results, it was possible to find the best possible camera constellation inside the cabin. Cameras installed near the driver (left mirror and dashboard) gave good results in all conditions, but detecting the gaze direction towards the right mirror or the right blind spot mirror was challenging, especially in changing illumination conditions. Under good illumination conditions (daylight or cabin interior lights) the gaze detection gave good and reliable results. At night, with low illumination, the camera's dynamic range is not sufficient to provide a usable image for detection. This can be improved, e.g., by using IR illuminators to boost the interior lighting.

V. FUTURE WORK

Future plans are to develop the system further and to bring the concept into use in professional truck driver training. The concept shall bring together different means of training: e-learning solutions, serious games, practical exercises with real vehicles, and simulator-based training (both high-end and low-end). The individual development of learned skills is followed during the training period, which enables personalised learning methods and durations. An important part of the concept is to pre-test the drivers and to plan the training accordingly.

The measurement units are installed in the cockpit of the truck, and data gathering can be executed during normal working hours, which is attractive for cargo companies. This provides the opportunity to measure real learning instead of counting a number of separate training days. The driver receives feedback from real driving instead of from an artificial simulator environment, in which it is possible to feign driving habits in order to pass the exam. The solution reported in this paper collects data from long-term driving, making artificial improvement of the results almost impossible.

Fig. 12. Three examples of failed eye tracking with the driver monitoring system. On the first row, the driver is closing the eyes because of intense illumination. On the second row, the driver is turning the head to see outside the vehicle. On the third row, the driver wears eyeglasses that block the direct view of the pupils.

The second challenge for driver monitoring is adapting to different drivers. Drivers are of different sizes (height, width), use eyeglasses (Fig. 12), wear caps, or are simply sitting at an ergonomically challenging angle. In these cases the driver's face and eyes can be fully or partially outside the camera field of view and are therefore not detected.

Preliminary tests with the driver monitoring system show that gaze detection inside the large cabin of a heavy vehicle is a more challenging task than in a small vehicle. Based on these tests, we have managed to determine a functional and reliable installation constellation for the driver monitoring cameras, which can detect the driver's face and eyes reliably enough to support obstacle detection.

At the moment of writing, the obstacle detection system sensors have been selected and installed in the vehicle. The next development step is to determine the specifications for the sensor system functionality in different environmental conditions and to gather test data with both the obstacle detection and the driver monitoring system. The collected data are used to create a multilayer warning system to raise the driver's awareness of possible incidents. A special warning protocol will be applied if the driver is not reacting in a safe way. For example, when the driver's intention is to change lanes: Does the driver look at the side mirror first? Does the driver react to the lane change warning? Does the driver recognise the blind spot area? How does the driver react when an obstacle is noticed in the other lane? Does the driver slow down or continue the manoeuvre?

ACKNOWLEDGMENTS

This study has been conducted in the Development platform for Safe and Efficient dRiVE (DESERVE) project under the ECSEL Joint Undertaking programme. We would like to express our gratitude to the whole consortium for giving scope to our work and for fruitful discussions. We would also like to thank IVECO Finland Oy for valuable support.

REFERENCES

[1] M. Kutila, P. Pyykönen, A. Lybeck, P. Niemi and E. Nordin, "Towards Autonomous Vehicles with Advanced Sensor Solutions," IEEE 11th International Conference on Intelligent Computer Communication and Processing (2015 IEEE ICCP), Cluj-Napoca, Romania, 3-5 Sep. 2015 (to be submitted).
[2] European Cyclists' Federation, "Lorry/cyclist Blind Spot Accidents," fact sheet. http://www.ecf.com/wp-content/uploads/ECF_FACTSHEET5_V3_cterreeBlindSpots.pdf (referenced 2.4.2015).
[3] J. Scholliers, D. Bell, A. Morris and A. B. Garcia, "Improving safety and mobility of Vulnerable Road Users through ITS applications," Transport Research Arena 2014, 14-17 April 2014, Paris.
[4] A. Silla, P. Rämä, L. Leden, M. van Noort, A. Morris and D. Bell, "Are intelligent transport systems effective in improving the safety of vulnerable road users?" accepted for presentation at the 22nd ITS World Congress, 5-9 October 2015, Bordeaux, France.
[5] P. van Koningsbruggen, "DESERVE white paper," The DESERVE ECSEL Undertaking (295364) (to be published).
[6] Intempora, "RTMaps." http://www.intempora.com/products/rtmaps-software/overview.html (referenced 2.4.2015).
[7] M. Kutila, M. Jokela, G. Markkula and M. R. Rué, "Driver Distraction Detection with a Camera Vision System," in Proceedings of the IEEE International Conference on Image Processing (ICIP 2007), 16-19 Sep. 2007, San Antonio, Texas, USA, vol. VI, pp. 201-204.
[8] Vislab, "VisLab 3DV-E." http://vislab.it/products/3dv-e-system/ (referenced 2.4.2015).
[9] Continental, "SRR 20X datasheet." https://www.conti-online.com (referenced 2.4.2015).
[10] MaxBotix, "I2CXL-MaxSonar-WR datasheet." http://www.maxbotix.com (referenced 2.4.2015).
[11] Continental, "ProViu® ASL360 product information." http://www.conti-online.com (referenced 2.4.2015).
