It is necessary to detect the presence of other vehicles, pedestrians, and other relevant entities. Safety concerns and the need for accurate estimations have led to the introduction of lidar systems to complement camera- or radar-based perception systems [5]. Regarding object distance estimation, several approaches have been proposed, depending on the modality of the sensors used, such as radar, LiDAR, or camera. Each sensor modality perceives the environment from a specific perspective and is limited to detecting certain attributes of objects. More specifically, vision-based approaches are more robust and accurate in object detection but fail to estimate the distance of objects accurately [6].
Deep learning algorithms have been utilized in different aspects of AV systems, such as perception, mapping, and decision making. These algorithms have proven able to overcome many of these difficulties, including the computational load faced by traditional algorithms, while maintaining decent accuracy and fast processing speed. Current high-performance vision systems are usually based on deep learning techniques, and deep neural networks (DNNs) have proven to be an extremely powerful tool for many vision tasks. Hence, in this paper, classification of objects using CNN-based vision and LIDAR fusion in an autonomous vehicle environment is presented.
II. Literature Survey
G. Ajay Kumar, Jin Hee Lee, Jongrak Hwang, Jaehyeong Park, Sung Hoon Youn and Soon Kwon [7] present “LiDAR and Camera Fusion Approach for Object Distance Estimation in Self-Driving Vehicles”. The paper presents a method to estimate the distance (depth) between a self-driving car and other vehicles, objects, and signboards on its path using an accurate fusion approach. Based on geometrical transformation and projection, low-level sensor fusion is performed between a camera and LiDAR using a 3D marker.
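The geometrical projection step at the heart of such low-level camera-LiDAR fusion can be sketched briefly. The following is a minimal illustration, not the calibration procedure of [7]: the intrinsic matrix K and the LiDAR-to-camera rotation R and translation t are placeholder values that a real system would obtain from calibration.

```python
import numpy as np

# Placeholder calibration values (a real system measures these).
K = np.array([[700.0,   0.0, 640.0],   # camera intrinsics (fx, fy, cx, cy)
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # LiDAR-to-camera rotation (assumed identity)
t = np.array([0.0, -0.1, 0.0])         # LiDAR-to-camera translation in meters

def project_lidar_to_image(points_lidar):
    """Project Nx3 LiDAR points into pixel coordinates; returns pixels and depths."""
    points_cam = points_lidar @ R.T + t        # transform into the camera frame
    points_cam = points_cam[points_cam[:, 2] > 0]  # keep points in front of the camera
    proj = points_cam @ K.T                    # apply intrinsics
    pixels = proj[:, :2] / proj[:, 2:3]        # perspective divide -> (u, v)
    return pixels, points_cam[:, 2]            # pixel coordinates and per-point depth

points = np.array([[5.0, 0.5, 1.2], [10.0, -1.0, 1.5]])  # toy LiDAR points
pixels, depths = project_lidar_to_image(points)
print(pixels, depths)
```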
Jian Nie, Jun Yan, Huilin Yin, Lei Ren and Qian Meng [8] present “A Multimodality Fusion Deep Neural Network and Safety Test Strategy for Intelligent Vehicles”. They first propose a multimodality fusion framework called the Integrated Multimodality Fusion Deep Neural Network (IMF-DNN), which can flexibly accomplish both object detection and an end-to-end driving policy that predicts steering angle and speed.
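The idea of one fused representation serving both tasks can be sketched as a pair of output heads trained with a joint loss. This is an illustrative sketch, not the IMF-DNN architecture itself: the feature dimension, head sizes, and simple summed loss are assumptions.

```python
import torch
import torch.nn as nn

class DualTaskHeads(nn.Module):
    """Illustrative dual-task output: one shared fused feature vector feeds both
    an object-classification head and a driving-policy head (steering, speed)."""
    def __init__(self, feature_dim=256, num_classes=10):
        super().__init__()
        self.detect_head = nn.Linear(feature_dim, num_classes)  # class scores
        self.policy_head = nn.Linear(feature_dim, 2)            # steering, speed

    def forward(self, fused_features):
        return self.detect_head(fused_features), self.policy_head(fused_features)

heads = DualTaskHeads()
fused = torch.randn(8, 256)                 # stand-in for fused sensor features
scores, policy = heads(fused)
labels = torch.randint(0, 10, (8,))         # ground-truth object classes
target = torch.randn(8, 2)                  # ground-truth steering and speed
# Joint objective: classification loss plus driving-policy regression loss.
loss = nn.CrossEntropyLoss()(scores, labels) + nn.MSELoss()(policy, target)
```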
Yulong Cao, Chaowei Xiao, Benjamin Cyr, Yimeng Zhou, Won Park, Sara Rampazzi, Qi Alfred Chen, Z. Morley Mao and Kevin Fu [9] present “Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving”. They perform the first security study of LiDAR-based perception in AV settings, which is highly important but previously unexplored. They consider LiDAR spoofing attacks as the threat model and set the attack goal as spoofing obstacles close to the front of a victim AV.
Mhafuzul Islam, Mashrur Chowdhury, Hongda Li and Hongxin Hu [10] present “Vision-Based Navigation of Autonomous Vehicles in Roadway Environments with Unexpected Hazards”. They develop a DNN-based autonomous vehicle driving system that uses object detection and semantic segmentation to mitigate the adverse effects of such hazards. They find that this system, including its hazardous object detection and semantic segmentation components, helps the autonomous vehicle navigate safely around such hazards.

Babak Shahian Jahromi, Theja Tulabandhula and Sabri Cetin [11] present a “Real-Time Hybrid Multi-Sensor Fusion Framework for Perception in Autonomous Vehicles”. They propose a new hybrid multi-sensor fusion pipeline configuration that performs environment perception tasks for autonomous vehicles, such as road segmentation, obstacle detection, and tracking. Tested on over 3K road scenes, their fusion algorithm shows better performance in various environment scenarios compared to baseline benchmark networks.

Bike Chen, Chen Gong and Jian Yang [12] discuss “Importance-Aware Semantic Segmentation for Autonomous Vehicles”. Their importance-aware losses (IAL) operate under a hierarchical structure: classes of different importance are located in different levels and are assigned distinct weights. They derive the forward and backward propagation rules for IAL and apply them to four typical deep neural networks to realize semantic segmentation in an intelligent driving system.
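The core of an importance-aware loss can be illustrated with a per-class weighted cross-entropy. The sketch below is a simplification of the IAL idea, not the authors' formulation: the class grouping and weight values are assumptions, chosen so that safety-critical classes (e.g., pedestrians) contribute more to the loss than background classes.

```python
import torch
import torch.nn as nn

# Hypothetical importance levels for illustration: background classes get low
# weight, infrastructure medium, and vulnerable road users the highest.
class_weights = torch.tensor([0.5, 0.5,   # sky, vegetation (low importance)
                              1.0, 1.0,   # road, building (medium importance)
                              2.0, 3.0])  # vehicle, pedestrian (high importance)

# PyTorch's cross-entropy accepts per-class weights directly, which is the
# simplest way to realize a (flat) importance-aware weighting per pixel.
criterion = nn.CrossEntropyLoss(weight=class_weights)

logits = torch.randn(4, 6, 32, 32)          # (batch, classes, H, W) network output
labels = torch.randint(0, 6, (4, 32, 32))   # ground-truth class per pixel
loss = criterion(logits, labels)            # misclassified pedestrians cost the most
```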
Jin Fang, Feilong Yan, Tongtong Zhao, Feihu Zhang, Dingfu Zhou, Ruigang Yang, Yu Ma and Liang Wang [13] present “Simulating LIDAR Point Cloud for Autonomous Driving using Real-world Scenes and Traffic Flows”. They present a LIDAR simulation framework that can automatically generate 3D point clouds based on LIDAR type and placement.

Xinxin Du, Marcelo H. Ang Jr. and Daniela Rus [14] present “Car Detection for Autonomous Vehicle: LIDAR and Vision Fusion Approach Through Deep Learning Framework”. They propose a LIDAR and vision fusion system for car detection through a deep learning framework; with further optimization of the framework structure, it has great potential to be implemented on autonomous vehicles.

Andreas Eitel, Jost Tobias Springenberg, Luciano Spinello, Martin Riedmiller and Wolfram Burgard [15] present “Multimodal Deep Learning for Robust RGB-D Object Recognition”. Their architecture is composed of two separate CNN processing streams, one for each modality, which are subsequently combined with a late-fusion network. They focus on learning with imperfect sensor data, a typical problem in real-world robotics tasks.
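The late-fusion pattern described in [15] can be sketched as two independent convolutional streams whose feature vectors are combined only at the end. The code below illustrates the pattern, not the authors' network: the layer sizes are placeholders.

```python
import torch
import torch.nn as nn

def make_stream(in_channels):
    """One per-modality CNN stream; each modality is processed independently."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten())   # -> 64-dim feature per image

class LateFusionNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.rgb_stream = make_stream(3)         # color stream
        self.depth_stream = make_stream(1)       # depth stream
        # Fusion happens only here, after both streams have finished ("late").
        self.fusion = nn.Sequential(nn.Linear(128, 64), nn.ReLU(),
                                    nn.Linear(64, num_classes))

    def forward(self, rgb, depth):
        features = torch.cat([self.rgb_stream(rgb), self.depth_stream(depth)], dim=1)
        return self.fusion(features)

net = LateFusionNet()
out = net(torch.randn(2, 3, 64, 64), torch.randn(2, 1, 64, 64))  # class logits
```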
III. Classification of Objects using CNN

In this work, classification of objects using CNN-based vision and LIDAR fusion in an autonomous vehicle environment is presented. The framework of the presented model is shown in Fig. 1.

Fig. 5: CLASSIFIED OBJECT IMAGE

Now click on the ‘LIDAR Accuracy & Loss Graph’ button to get the graph below.

Fig. 6: FINAL OUTPUT GRAPH
V. CONCLUSION

In this work, classification of objects using CNN-based vision and LIDAR fusion in an autonomous vehicle environment is presented. We propose a deep-learning-based approach that fuses vision and LIDAR data for object detection and classification in the autonomous vehicle environment. On the one hand, we upsample the LIDAR point clouds and convert the upsampled point cloud data into a pixel-level depth feature map. On the other hand, we combine the RGB image with the depth feature map and feed the data into a CNN. On the basis of the integrated RGB and depth data, we utilize a DCNN to perform feature learning from the raw input and obtain an informative feature representation to classify objects in the autonomous vehicle environment. The proposed approach, in which visual data are fused with LIDAR data, exhibits superior performance.
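The input-preparation step described above (an early fusion of modalities, in contrast to the late fusion of [15]) can be sketched as follows. This is a minimal illustration under assumed values, not the paper's implementation: it rasterizes projected LiDAR depths into a pixel-level depth map and stacks it with the RGB image as a 4-channel CNN input, with a simple splatting neighborhood standing in for the paper's point-cloud upsampling.

```python
import numpy as np

H, W = 360, 640                              # image size (placeholder)

def depth_map_from_lidar(pixels, depths, dilate=2):
    """Rasterize projected LiDAR points (u, v, depth) into a pixel-level depth
    map; the small splatting neighborhood stands in for point-cloud upsampling."""
    depth = np.zeros((H, W), dtype=np.float32)
    for (u, v), d in zip(pixels.astype(int), depths):
        if 0 <= v < H and 0 <= u < W:
            # Splat each point over a small neighborhood to densify the map.
            v0, v1 = max(v - dilate, 0), min(v + dilate + 1, H)
            u0, u1 = max(u - dilate, 0), min(u + dilate + 1, W)
            patch = depth[v0:v1, u0:u1]
            patch[patch == 0] = d            # keep the first-written depth
    return depth

rgb = np.random.rand(H, W, 3).astype(np.float32)       # stand-in camera image
pixels = np.array([[320.0, 180.0], [100.0, 200.0]])    # projected LiDAR pixels
depths = np.array([12.5, 7.3], dtype=np.float32)       # corresponding depths (m)

depth = depth_map_from_lidar(pixels, depths)
rgbd = np.dstack([rgb, depth / depth.max()])           # 4-channel RGB-D input
print(rgbd.shape)                                      # (360, 640, 4) for the CNN
```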
References

[1] Hrag-Harout Jebamikyous and Rasha Kashef, “Autonomous Vehicles Perception (AVP) Using Deep Learning: Modeling, Assessment, and Challenges”, IEEE Access, Vol. 10, 2022, doi: 10.1109/ACCESS.2022.3144407
[2] Javier Mendez, Miguel Molina, Noel Rodriguez, Manuel P. Cuellar and Diego P. Morales, “Camera-LiDAR Multi-Level Sensor Fusion for Target Detection at the Network Edge”, Sensors 2021, 21, 3992, doi: 10.3390/s21123992
[3] Mingyu Park, Hyeonseok Kim and Seongkeun Park, “A Convolutional Neural Network-Based End-to-End Self-Driving Using LiDAR and Camera Fusion: Analysis Perspectives in a Real-World Environment”, Electronics 2021, 10, 2608, doi: 10.3390/electronics10212608
[4] Yulong Cao, Ningfei Wang, Chaowei Xiao, Dawei Yang, Jin Fang, Ruigang Yang, Qi Alfred Chen, Mingyan Liu and Bo Li, “Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks”, arXiv:2106.09249v1 [cs.CR], 17 Jun 2021
[5] G. Ajay Kumar, Jin Hee Lee, Jongrak Hwang, Jaehyeong Park, Sung Hoon Youn and Soon Kwon, “LiDAR and Camera Fusion Approach for Object Distance Estimation in Self-Driving Vehicles”, Symmetry 2020, 12, 324, doi: 10.3390/sym12020324
[6] You Li and Javier Ibanez-Guzman, “Lidar for Autonomous Driving”, IEEE Signal Processing Magazine, 2020, doi: 10.1109/MSP.2020.2973615
[7] G. Ajay Kumar, Jin Hee Lee, Jongrak Hwang, Jaehyeong Park, Sung Hoon Youn and Soon Kwon, “LiDAR and Camera Fusion Approach for Object Distance Estimation in Self-Driving Vehicles”, Symmetry 2020, 12, 324, doi: 10.3390/sym12020324
[8] Jian Nie, Jun Yan, Huilin Yin, Lei Ren and Qian Meng, “A Multimodality Fusion Deep Neural Network and Safety Test Strategy for Intelligent Vehicles”, IEEE Transactions on Intelligent Vehicles, 2020
[9] Yulong Cao, Chaowei Xiao, Benjamin Cyr, Yimeng Zhou, Won Park, Sara Rampazzi, Qi Alfred Chen, Z. Morley Mao and Kevin Fu, “Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving”
[10] Mhafuzul Islam, Mashrur Chowdhury, Hongda Li and Hongxin Hu, “Vision-Based Navigation of Autonomous Vehicles in Roadway Environments with Unexpected Hazards”
[11] Babak Shahian Jahromi, Theja Tulabandhula and Sabri Cetin, “Real-Time Hybrid Multi-Sensor Fusion Framework for Perception in Autonomous Vehicles”
[12] Bike Chen, Chen Gong and Jian Yang, “Importance-Aware Semantic Segmentation for Autonomous Vehicles”
[13] Jin Fang, Feilong Yan, Tongtong Zhao, Feihu Zhang, Dingfu Zhou, Ruigang Yang, Yu Ma and Liang Wang, “Simulating LIDAR Point Cloud for Autonomous Driving using Real-world Scenes and Traffic Flows”
[14] Xinxin Du, Marcelo H. Ang Jr. and Daniela Rus, “Car Detection for Autonomous Vehicle: LIDAR and Vision Fusion Approach Through Deep Learning Framework”
[15] Andreas Eitel, Jost Tobias Springenberg, Luciano Spinello, Martin Riedmiller and Wolfram Burgard, “Multimodal Deep Learning for Robust RGB-D Object Recognition”
G. Komali
M.Tech Scholar,
CSE Department,
R.V.R & J.C College of Engineering,
Guntur, Andhra Pradesh, India.
[email protected]

Dr. A. Sri Nagesh
Professor,
CSE Department,
R.V.R & J.C College of Engineering,
Guntur, Andhra Pradesh, India.
[email protected]