Framework of Advanced Driver Assistance System (ADAS) Research
Advanced Driver Assistance Systems are intelligent systems that reside inside the vehicle and
assist the driver in a variety of ways. These systems can provide vital information about traffic,
road closures and blockages ahead, congestion levels, suggested routes to avoid congestion, and
so on. The role of ADAS is to prevent deaths and injuries by reducing the number of car
accidents and mitigating the impact of those that cannot be avoided. ADAS uses automated
technology, such as sensors and cameras, to detect nearby obstacles or driver errors and respond
accordingly. As a lab that conducts research in image processing, especially fisheye imaging, we
can contribute to ADAS research. Below is the framework I propose for the ADAS research to be
carried out by graduate students.
1. Camera calibration
Camera calibration is the process of estimating a camera's intrinsic and/or extrinsic parameters.
Intrinsic parameters describe the camera's internal characteristics, such as its focal length, skew,
distortion, and image center. Extrinsic parameters describe its position and orientation in the
world. Knowing the intrinsic parameters is an essential first step for 3D computer vision, as it
allows you to estimate the scene's structure in Euclidean space and to remove the lens distortion
that degrades accuracy [1].
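As a minimal sketch of intrinsic calibration with OpenCV (the chessboard pattern size, square
size, and image folder below are assumptions; cv2.fisheye.calibrate offers an analogous API for
fisheye lenses):

    import glob
    import numpy as np
    import cv2

    PATTERN = (9, 6)     # inner chessboard corners (assumed pattern)
    SQUARE_SIZE = 0.025  # square edge length in meters (assumed)

    # 3D corner coordinates in the chessboard's own frame.
    objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)
    objp *= SQUARE_SIZE

    obj_points, img_points = [], []
    for path in glob.glob("calib/*.jpg"):  # hypothetical image folder
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, PATTERN)
        if found:
            corners = cv2.cornerSubPix(
                gray, corners, (11, 11), (-1, -1),
                (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
            obj_points.append(objp)
            img_points.append(corners)

    # K is the 3x3 intrinsic matrix, dist the distortion coefficients.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)
    print("RMS reprojection error:", rms)

The RMS reprojection error is a quick sanity check: values well below one pixel usually indicate
a usable calibration.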
2. Visual Odometry
Paper reference: [1], [2]
Cameras are effective sensors for capturing accurate data at high resolution. Applications
include industrial machines, autonomous driving, real-time clinical analysis (surgical robots),
and autonomously driven vehicles. Visual Odometry (VO) is a method of determining the
orientation and position of a camera or robot by analyzing images from a moving camera. The
concept is inspired by the human ability to perceive motion from moving visual data. VO focuses
on estimating the camera pose sequentially in real time, and on estimating the camera motion
robustly in the presence of outliers.
In our proposed approach, we extract multiple undistorted views from fisheye images and
process all of the views simultaneously for odometry. We have integrated this solution into the
Moildev-SDK.
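A minimal sketch of the underlying two-frame step for a single undistorted view follows (this is
the generic monocular VO pipeline, not the Moildev-SDK integration itself; the intrinsic matrix
K comes from the calibration step above):

    import numpy as np
    import cv2

    def relative_pose(img1, img2, K):
        """Estimate rotation R and unit-scale translation t between two frames."""
        orb = cv2.ORB_create(2000)
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)

        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

        # RANSAC rejects outlier matches (the outlier handling noted above).
        E, mask = cv2.findEssentialMat(pts1, pts2, K,
                                       method=cv2.RANSAC, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        return R, t

Chaining these relative poses over consecutive frames yields the camera trajectory; note that the
monocular translation is recovered only up to scale.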
3. 3D Reconstruction
Paper reference: [1], [2], [3]
In computer vision and computer graphics, 3D reconstruction is the process of capturing the
shape and appearance of real objects. It can be accomplished by either active or passive
methods. If the model is allowed to change its shape over time, this is referred to as non-rigid or
spatio-temporal reconstruction. In ADAS, 3D reconstruction is used to estimate a 3D
representation of the environment from sensor data.
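As a small illustration of the passive (image-based) approach, the sketch below triangulates
matched image points from two calibrated views; K, R, t, and the point correspondences are
assumed to come from the calibration and VO steps above:

    import numpy as np
    import cv2

    def triangulate(K, R, t, pts1, pts2):
        """pts1, pts2: Nx2 float32 pixel correspondences; returns Nx3 points."""
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # first camera at origin
        P2 = K @ np.hstack([R, t.reshape(3, 1)])           # second camera pose
        pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
        return (pts4d[:3] / pts4d[3]).T  # de-homogenize to Nx3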
4. Object detection
ADAS gathers information from the surrounding environment to support driving, and object
detection plays an extremely important role in it. Object detectors in ADAS currently face many
challenges due to the demands on inference speed and accuracy. While inference speed depends
mainly on hardware resources and the complexity of the network, accuracy depends primarily on
the detection algorithm adopted for the system. With hardware technology developing rapidly,
future detection algorithms are expected to run at ever higher speeds. Currently, the best choice
for the object detector in a self-driving car is a network that efficiently balances inference speed
and detection accuracy. The core of an object detection module is the detection network itself.
Generally, there are two genres of object detectors: one-stage detectors, which offer high
inference speed and considerable accuracy, and two-stage detectors, which yield higher
recognition and localization accuracy but at greater computational cost and reduced speed.
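As a sketch of how the inference-speed side of this trade-off could be measured, the following
times a pretrained one-stage detector through OpenCV's DNN module; the model and image file
names are assumptions (any Darknet-format model would do):

    import time
    import cv2

    # Hypothetical local copies of a pretrained Darknet-format model.
    net = cv2.dnn.readNetFromDarknet("yolov4-tiny.cfg", "yolov4-tiny.weights")
    out_names = net.getUnconnectedOutLayersNames()

    img = cv2.imread("road.jpg")  # hypothetical test frame
    blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True)
    net.setInput(blob)

    start = time.perf_counter()
    outputs = net.forward(out_names)  # raw detections from each output scale
    latency_ms = (time.perf_counter() - start) * 1000
    print(f"inference: {latency_ms:.1f} ms over {len(outputs)} output scales")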
Application plan:
1. Using TurtleBot
TurtleBot is a low-cost personal robot kit with open-source software. It was created at Willow
Garage by Melonee Wise and Tully Foote in November 2010. With TurtleBot, you can build a
robot that drives around your house, sees in 3D, and has enough horsepower to create exciting
applications. TurtleBot's core technologies are SLAM, navigation, and manipulation: it can run
SLAM (simultaneous localization and mapping) algorithms to build a map, and it can be
controlled remotely from a laptop, joypad, or Android-based smartphone. We can adopt the
features available on the TurtleBot and use a fisheye camera as its visual sensor, processing the
images as needed; a sketch of such a node follows.
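A minimal sketch with ROS 2's Python client library, assuming the camera driver publishes on
/image_raw (the topic name and the processing step are placeholders):

    import rclpy
    from rclpy.node import Node
    from sensor_msgs.msg import Image
    from cv_bridge import CvBridge

    class FisheyeViewer(Node):
        def __init__(self):
            super().__init__('fisheye_viewer')
            self.bridge = CvBridge()
            # '/image_raw' is an assumed topic; adjust to the camera driver used.
            self.create_subscription(Image, '/image_raw', self.on_image, 10)

        def on_image(self, msg):
            frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
            # ... undistort / extract views from the fisheye frame here ...
            self.get_logger().info(f'frame {frame.shape[1]}x{frame.shape[0]}')

    def main():
        rclpy.init()
        rclpy.spin(FisheyeViewer())
        rclpy.shutdown()

    if __name__ == '__main__':
        main()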
2. Real car
If we feel the results are good enough, we can also implement the system in a real car.
Project timeline
[Gantt chart: project tasks scheduled at monthly milestones from Aug 7 through Apr 4]
a. Camera calibration
We have done camera calibration experiments using OpenCV and Matlab; our test results can be
accessed at the following links:
https://github.com/MoilOrg/Progress-Aji/tree/master/Camera-Calibration-Python
https://github.com/MoilOrg/Progress-Heru/tree/master/Matlab
b. Intel RealSense
This camera is used to create a visual odometry dataset; the documentation of this experiment
can be accessed at the following links:
https://github.com/MoilOrg/Progress-Heru/tree/master/Camera-Intel-T265
https://github.com/MoilOrg/Progress-Aji/tree/master/make-dataset-from-T256
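For reference, pose samples for such a dataset can be read from the T265 with the pyrealsense2
bindings roughly as follows (a sketch based on the standard librealsense pose-stream usage):

    import pyrealsense2 as rs

    pipe = rs.pipeline()
    cfg = rs.config()
    cfg.enable_stream(rs.stream.pose)  # the T265 exposes a 6-DoF pose stream
    pipe.start(cfg)
    try:
        for _ in range(200):  # record a short burst of pose samples
            frames = pipe.wait_for_frames()
            pose = frames.get_pose_frame()
            if pose:
                d = pose.get_pose_data()
                print(d.translation.x, d.translation.y, d.translation.z)
    finally:
        pipe.stop()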
c. Feature detection
Feature detection includes methods for computing abstractions of image information and
making a local decision at every image point about whether an image feature of a given type is
present there. The resulting features are subsets of the image domain, often in the form of
isolated points, continuous curves, or connected regions. Here is a link to the students'
feature detection experiment:
https://github.com/MoilOrg/Progress-Heru/tree/master/Features-Detection
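A minimal sketch of this kind of local decision, using Shi-Tomasi corner detection as one
representative method (the file names are placeholders):

    import cv2

    img = cv2.imread("frame.png")  # hypothetical input frame
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Shi-Tomasi corners: a local decision at every point, keeping the best 500.
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=500,
                                      qualityLevel=0.01, minDistance=7)
    for x, y in corners.reshape(-1, 2):
        cv2.circle(img, (int(x), int(y)), 3, (0, 255, 0), -1)
    cv2.imwrite("features.png", img)
    print(len(corners), "corners detected")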
Below are some documentation links for the students' visual odometry work using the KITTI
dataset:
https://github.com/MoilOrg/Progress-Heru/tree/master/Visual-Odometry
https://github.com/MoilOrg/Progress-Heru/tree/master/Pipeline-VisualOdometry
https://mcut-my.sharepoint.com/:w:/g/personal/m07158031_o365_mcut_edu_tw/EfyeC0oLryZCj5jpeOg7aM4BN4ohWSSf1X-wiC-aVh6cTw?e=7OA3bL
d. ROS 2 + TurtleBot
Previously, we also spent time running Robot Operating System version 2 (ROS 2) software
integrated with the TurtleBot. Some of the documents that were produced can be accessed at the
link below:
e. UI for MoilApp
The user interface the students developed for visual odometry, using multiple views produced
with the Moildev SDK, can be accessed at the links below:
https://github.com/MoilOrg/Progress-Aji/releases
https://github.com/MoilOrg/Progress-Heru/tree/master/Plugin-UserInterface
https://mcut-my.sharepoint.com/:f:/g/personal/m07158031_o365_mcut_edu_tw/EuBGx64FHZpOqNSD_Xhv5CYBtEMMkiPHIcjTUj4OAF4z1Q?e=ktlytx