
Framework of Advanced Driver Assistance System (ADAS) Research

Proposed by: Haryanto

Advanced Driver Assistance Systems (ADAS) are intelligent systems that reside inside the vehicle and
assist the driver in a variety of ways. These systems may provide vital information about traffic,
road closures and blockages ahead, congestion levels, and suggested routes to avoid congestion.
The role of ADAS is to prevent deaths and injuries by reducing the number of car accidents and
the serious impact of those that cannot be avoided. ADAS use automated technology, such as
sensors and cameras, to detect nearby obstacles or driver errors, and respond accordingly. As a
lab that conducts research in image processing, especially fisheye imaging, we can contribute to
ADAS research. Below is the framework I propose for ADAS research to be carried out by
graduate students.

1. Camera calibration
Camera calibration is the process of estimating intrinsic and/or extrinsic parameters. Intrinsic
parameters deal with the camera's internal characteristics, such as its focal length, skew,
distortion, and image center. Extrinsic parameters describe the camera's position and orientation
in the world. Knowing the intrinsic parameters is an essential first step for 3D computer vision, as
it allows you to estimate the scene's structure in Euclidean space and to remove lens distortion,
which degrades accuracy [1].

a. Intrinsic camera calibration


i. OpenCV for camera calibration
OpenCV handles different types of calibration objects, such as the black-and-white
checkerboard, circle pattern, and asymmetric circle pattern. It provides a calibration tool
for intrinsic parameters that takes several images of the calibration pattern in different
orientations to calibrate one camera; either the pattern is moved in front of the camera or,
as in our case, the camera is moved around the pattern. We also need to estimate the
extrinsic parameters of the camera, that is, its position in terms of a translation vector t
and a rotation matrix R. The purpose of estimating the extrinsic parameters is to establish
the relation between the camera and the world coordinate system.
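
To make the workflow concrete, below is a minimal Python sketch of this checkerboard procedure; the image folder and the 9x6 board size are assumptions for illustration, not taken from the project.

```python
# Minimal OpenCV checkerboard calibration sketch (paths and board size assumed).
import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner corners per checkerboard row and column (assumption)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.png"):  # placeholder folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# K is the intrinsic matrix; rvecs/tvecs are the per-view extrinsics
# (rotation and translation of the board relative to the camera).
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("intrinsic matrix:\n", K)
undistorted = cv2.undistort(gray, K, dist)  # lens distortion removed
```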
ii. MATLAB Toolbox for Camera Calibration
The MATLAB Camera Calibrator tool is employed to estimate the intrinsic and extrinsic
parameters of the camera along with lens distortion. It can be used in various computer
vision applications for removing lens distortion, reconstructing the 3D scene, and
measuring objects from multiple cameras.

iii. Moil camera calibration

Moil camera calibration is a fisheye image calibration method developed by the Ming Chi
Omnidirectional Surveillance and Imaging Laboratory, based on the work of Prof. Chuang-Jan
Chang. The method can generate intrinsic parameters for each camera with high accuracy.

b. Extrinsic camera calibration

We use CamOdoCal (Camera Odometry Calibration) to calculate the extrinsic parameters.
CamOdoCal performs the following steps when calibrating the extrinsic parameters [ref]:
1. Monocular visual odometry
2. Triangulate 3D points with feature correspondences from monocular visual odometry and
run bundle adjustment
3. Run robust pose-graph simultaneous localization and mapping (SLAM) and find inlier
2D-3D correspondences from loop closures
4. Find local inter-camera 3D-3D correspondences
5. Run bundle adjustment

2. Visual Odometry
Paper reference: [1], [2]

Cameras are well suited to capturing accurate data at high resolution. Their applications
include industrial machines, autonomous vehicles, and real-time clinical analytics
(surgical robots). Visual odometry (VO) is a method of determining the orientation and
position of a camera or a robot by analyzing images from a moving camera, a concept inspired
by the human ability to perceive motion from moving visual data. VO focuses on estimating
the camera pose sequentially in real time, and it estimates the camera motion in the
presence of outliers.

Visual Odometry Application

In our proposed approach, we extract multiple undistorted views from fisheye images and
run visual odometry on all the views simultaneously. We have integrated our solution
into the Moildev SDK.
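
For reference, here is a minimal sketch of one monocular VO step with OpenCV; the frame file names and the intrinsic matrix K are placeholders (in practice K would come from the calibration step above).

```python
# One monocular visual odometry step: relative motion between two frames.
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0,   0.0,   1.0]])  # assumed intrinsics, for illustration only

img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)  # placeholder files
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Detect and match ORB features between the two frames.
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# The essential matrix with RANSAC rejects outlier matches; recoverPose
# decomposes it into rotation R and a unit-scale translation t.
E, mask = cv2.findEssentialMat(pts1, pts2, K, cv2.RANSAC, 0.999, 1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print("R =", R, "\nt =", t)
```

Accumulating R and t over successive frame pairs yields the camera trajectory, up to an unknown global scale, which is the usual limitation of monocular VO.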

3. Simultaneous localization and mapping (SLAM)

Paper reference: [1], [2]

Simultaneous localization and mapping (SLAM) is the computational problem of constructing or
updating a map of an unknown environment while simultaneously keeping track of an agent's
location within it. While this initially appears to be a chicken-and-egg problem, there are several
algorithms known for solving it, at least approximately, in tractable time for certain
environments. Popular approximate solution methods include the particle filter, extended
Kalman filter, covariance intersection, and GraphSLAM. SLAM algorithms are based on
concepts in computational geometry and computer vision, and are used in robot navigation,
robotic mapping, and odometry for virtual or augmented reality. SLAM is largely used to
describe the mapping procedure used when navigating an unknown environment. This is done
online so that the most recent state estimates are available to the navigator.
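
To illustrate the graph-based flavor of SLAM at toy scale, the sketch below solves a tiny 1D pose graph by least squares; all numbers are invented for illustration and are not from the project.

```python
# A toy GraphSLAM-style problem in 1D, using only NumPy.
import numpy as np

# Four robot poses x0..x3 along a line. Odometry says each step moved
# +1.0 m, but a loop closure says the robot ended up near its start.
# Each constraint is one row of A x = b, solved in the least-squares sense.
A = np.array([
    [1.0,  0.0,  0.0,  0.0],   # prior: anchor x0 at 0 (fixes gauge freedom)
    [-1.0, 1.0,  0.0,  0.0],   # odometry: x1 - x0 = 1.0
    [0.0, -1.0,  1.0,  0.0],   # odometry: x2 - x1 = 1.0
    [0.0,  0.0, -1.0,  1.0],   # odometry: x3 - x2 = 1.0
    [-1.0, 0.0,  0.0,  1.0],   # loop closure: x3 - x0 = 0.3 (contradicts odometry)
])
b = np.array([0.0, 1.0, 1.0, 1.0, 0.3])

# The solution spreads the loop-closure error over all poses, which is
# exactly what graph-based SLAM back-ends do at full scale.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)  # corrected pose estimates
```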

4. 3D reconstruction
Paper reference: [1], [2], [3]
In computer vision and computer graphics, 3D reconstruction is the process of capturing the
shape and appearance of real objects. This process can be accomplished by either active or
passive methods. If the model is allowed to change its shape in time, this is referred to as
non-rigid or spatio-temporal reconstruction. 3D reconstruction is used to estimate a 3D
representation of the environment based on sensor data.
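
As a small passive-method illustration, the sketch below triangulates 3D points from two views, reusing the relative pose (R, t) and the matched points from the VO sketch above; it is illustrative, not the project's implementation.

```python
# Two-view triangulation with OpenCV.
import cv2
import numpy as np

def triangulate(K, R, t, pts1, pts2):
    # Projection matrices: first camera at the origin, second at (R, t).
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t.reshape(3, 1)])
    # OpenCV expects 2xN point arrays and returns homogeneous 4xN points.
    X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (X_h[:3] / X_h[3]).T  # Nx3 Euclidean points, up to scale
```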

5. Object detection

Object detection: [1]

ADAS gather information from the surrounding environment to support driving, and object
detection plays an extremely crucial role in ADAS. Object detectors in ADAS currently face
many challenges due to the demands on inference speed and accuracy. While the inference
speed depends mainly on hardware resources and the complexity of the network, the accuracy
depends entirely on the detection algorithm adopted for the system. With hardware technology
developing rapidly, future detection algorithms are expected to run at higher speeds. Currently,
the best choice for the object detector in a self-driving car is a network that efficiently balances
inference speed and detection accuracy. The core of an object detection module is the object
detection network. Generally, there are two genres of object detectors: one-stage detectors,
which offer high inference speed and considerable accuracy, and two-stage detectors, which
yield higher recognition and localization accuracy but at greater computational cost and
lower speed.
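
As an illustration of a two-stage detector, here is a short inference sketch using torchvision's Faster R-CNN; torchvision and the image file name are our assumptions, since the document does not name a specific network.

```python
# Two-stage detector inference sketch (requires torchvision >= 0.13).
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

img = to_tensor(Image.open("road_scene.jpg").convert("RGB"))  # placeholder image
with torch.no_grad():
    pred = model([img])[0]  # dict with boxes, COCO class labels, and scores

# Keep only confident detections, e.g. likely obstacles ahead.
keep = pred["scores"] > 0.7
print(pred["boxes"][keep], pred["labels"][keep])
```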
Application plan:

1. Using TurtleBot
TurtleBot is a low-cost personal robot kit with open-source software. TurtleBot was created at
Willow Garage by Melonee Wise and Tully Foote in November 2010. With TurtleBot, you can
build a robot that drives around your house, sees in 3D, and has enough horsepower to create
exciting applications. TurtleBot's core technologies are SLAM, navigation, and manipulation:
it can run SLAM (simultaneous localization and mapping) algorithms to build a map, and it can
be controlled remotely from a laptop, joypad, or Android-based smartphone. We can adapt the
features available on the TurtleBot to apply fisheye technology as a visual sensor, processing
the images as needed.

2. Using the Intel T265 camera

With its small form factor and low power consumption, the Intel RealSense Tracking Camera
T265 is designed to deliver solid tracking performance straight off the shelf: cross-platform,
developer-friendly simultaneous localization and mapping for robotics, drone, and augmented
reality prototyping. With the calibration technology that we developed, we can implement our
proposed image processing on this camera.
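
As a starting point, here is a minimal sketch of streaming 6-DoF pose from the T265 with the official pyrealsense2 wrapper, modeled on Intel's published example.

```python
# Stream pose data from an attached T265 tracking camera.
import pyrealsense2 as rs

pipe = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.pose)  # the T265 reports pose directly
pipe.start(cfg)
try:
    for _ in range(100):
        frames = pipe.wait_for_frames()
        pose = frames.get_pose_frame()
        if pose:
            data = pose.get_pose_data()
            print("xyz:", data.translation, "quat:", data.rotation)
finally:
    pipe.stop()
```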

3. Real car
If the results are good, we can also implement the system in a real car.
Project timeline
Milestones are planned monthly from early August through early April; the full timeline is available online.


Current progress of graduate students on the ADAS project:

a. Camera calibration
We have completed camera calibration experiments using OpenCV and MATLAB; our test
results are available at the following links:
 https://github.com/MoilOrg/Progress-Aji/tree/master/Camera-Calibration-Python
 https://github.com/MoilOrg/Progress-Heru/tree/master/Matlab

b. Intel RealSense
This camera is used to create a visual odometry dataset; the documentation of this experiment
is available at the following links:

 https://github.com/MoilOrg/Progress-Heru/tree/master/Camera-Intel-T265
 https://github.com/MoilOrg/Progress-Aji/tree/master/make-dataset-from-T256

c. Feature detection
Feature detection includes methods for computing abstractions of image information and
making local decisions at every image point as to whether an image feature of a given type is
present at that point. The resulting features are subsets of the image domain, often in the form
of isolated points, continuous curves, or connected regions. Below is a link to the students'
feature detection experiment, followed by a small illustrative sketch:

 https://github.com/MoilOrg/Progress-Heru/tree/master/Features-Detection
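
For context, here is a small ORB-based sketch of this idea with OpenCV; the input file name is a placeholder.

```python
# Detect and visualize ORB keypoints in an image.
import cv2

img = cv2.imread("sample.png", cv2.IMREAD_GRAYSCALE)  # placeholder file
orb = cv2.ORB_create(500)            # detect up to 500 keypoints
keypoints = orb.detect(img, None)    # local decisions at image points
vis = cv2.drawKeypoints(img, keypoints, None, color=(0, 255, 0))
cv2.imwrite("sample_features.png", vis)
```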

d. Visual odometry using the KITTI dataset

Below are documentation links for the visual odometry experiments they performed using the KITTI dataset:

 https://github.com/MoilOrg/Progress-Heru/tree/master/Visual-Odometry
 https://github.com/MoilOrg/Progress-Heru/tree/master/Pipeline-VisualOdometry
 https://mcut-my.sharepoint.com/:w:/g/personal/m07158031_o365_mcut_edu_tw/EfyeC0oLryZCj5jpeOg7aM4BN4ohWSSf1X-wiC-aVh6cTw?e=7OA3bL
e. ROS 2 + TurtleBot
We have also spent time running the Robot Operating System version 2 (ROS 2) software
integrated with TurtleBot. The resulting documents can be accessed at the links below:

 Streaming camera from TurtleBot


 https://mcut-my.sharepoint.com/:f:/g/personal/m07158031_o365_mcut_edu_tw/ErFNpZ6APk1MoQ8_HS9fDMQBk1T8vHEOIZJCAeA5RdayTQ?e=YCdRLn
 https://github.com/MoilOrg/Progress-Aji/tree/master/Ros%202

f. UI for MoilApp
The user interface they developed for visual odometry using multiple views, implemented with
the Moildev SDK, can be accessed at the links below:

 https://github.com/MoilOrg/Progress-Aji/releases
 https://github.com/MoilOrg/Progress-Heru/tree/master/Plugin-UserInterface
 https://mcut-my.sharepoint.com/:f:/g/personal/m07158031_o365_mcut_edu_tw/EuBGx64FHZpOqNSD_Xhv5CYBtEMMkiPHIcjTUj4OAF4z1Q?e=ktlytx
