ACM1
AUTONOMOUS VEHICLE

Ricky C. Layderos
[email protected]
Camarines Sur Polytechnic Colleges
San Miguel, Bato, Camarines Sur

Bryan N. Lomerio
[email protected]
Camarines Sur Polytechnic Colleges
Santa Cruz Sur, Iriga City, Camarines Sur

Aaron Francis L. Pacardo
[email protected]
Camarines Sur Polytechnic Colleges
San Vicente, Bato, Camarines Sur
ABSTRACT

This study explored the use of pretrained YOLOP for lane detection in autonomous vehicles, emphasizing the crucial role of accurate lane identification for safe and efficient self-driving. Through exploratory research, the pretrained YOLOP model underwent a comprehensive evaluation in 10 scenarios, successfully detecting lanes and objects in a simulated environment. Results showed superior performance in multiclass detection, lane line segmentation, and drivable area segmentation, surpassing custom weights. The researchers developed a YOLOP-based prototype for autonomous driving, showcasing capabilities in multi-class object detection and collision anticipation. Real-world testing confirmed the model's adaptability and efficiency, highlighting its potential for diverse scenarios. In conclusion, this study demonstrates the practical application of pretrained YOLOP for enhancing autonomous vehicle safety and efficiency, marking a significant advancement in lane detection technology.

CCS CONCEPTS

• Computing methodologies → Vision for robotics.

KEYWORDS

yolop, artificial intelligence, computer vision, robotics, lane segmentation, object detection

1 INTRODUCTION

An autonomous vehicle, also referred to as a self-driving car or driverless car, employs a range of sensors, cameras, radar, and artificial intelligence (AI) to navigate between destinations without the need for a human driver. For a vehicle to be considered fully autonomous, it must be capable of traveling to a predetermined destination without human intervention, even on roads that have not been specifically adapted for its use. AI technologies play a crucial role in powering self-driving car systems. The developers of self-driving cars rely on vast amounts of data from image recognition systems, in conjunction with deep learning, to construct systems that can navigate autonomously. Neural networks are utilized to identify patterns in the data, which are subsequently processed by machine learning algorithms. These datasets encompass images from cameras installed on self-driving cars, allowing the neural network to learn how to detect various components of a given driving environment, such as traffic lights, trees, curbs, pedestrians, street signs, and other relevant objects [6].

This study delved into the exploration of pretrained YOLOP for lane detection in autonomous vehicles. YOLOP is a powerful network when incorporated into autonomous vehicles for lane segmentation, drivable area segmentation, and object detection. This study sought to answer the following questions: how YOLOP can be incorporated on lower-specification computers, what impact training a lane detection model with a custom dataset has on its performance, and how effective the lane keeping algorithm is for the autonomous vehicle.

This study's general objective was to explore the use of YOLOP in autonomous vehicles where object detection and segmentation are present, and to leverage the capabilities of this object detection model to enhance the vehicle's perception and decision-making processes. The study focuses on exploring and evaluating YOLOP for multi-class detection, lane line segmentation, and drivable area segmentation. Additionally, it aims to develop and evaluate the performance of a YOLOP-based prototype for lane detection in autonomous vehicles.

Soni, R. [2022] focuses on lane detection for autonomous vehicles, examining traditional and deep learning-based methods. The Robot Operating System (ROS) is employed to streamline subsystem communication, utilizing a pipeline with a publisher, a subscriber, and a core. The program creates an Image Converter class object, spins the ROS node, and shuts down the OpenCV windows on exception. The Image Converter class uses a callback function to convert images, employing the Curved Lane Detection class to find street lanes. Processed images are published to the topic unless a CvBridgeError occurs [5].

The study of Paden B. et al., entitled "A Survey of Self-Driving Car Technology," explored the technology used in self-driving cars. It covers how these cars "see" the world. This paper is a great resource for getting a clear picture of what goes into making self-driving cars work [4].

Qin Zou et al. [2022] propose a hybrid deep neural network for challenging lane detection. Combining a deep convolutional neural network (DCNN) and a deep recurrent neural network (DRNN), the model utilizes the DCNN as a feature extractor to reduce dimensionality and preserve details. The DRNN processes the extracted features recursively, establishing fully connected layers for effective lane prediction in time-series signals [3].

The study of Dr. A.K. Madan and Divyansha Jharwal [2022] identified the increasing rate of accidents due to drivers' lack of attention as a major issue in road safety. They propose using lane detection systems that alert drivers when they cross into an incorrect lane. The system utilizes a camera positioned on the front of the car that captures the view of the road and detects lane boundaries. The authors used OpenCV, an open-source computer vision library, to implement their algorithm. They divided the video image into a series of sub-images and generated image features for each of them, which are then used to recognize the lanes on the road. The proposed system's functionality ranges from displaying road line positions to advanced applications such as recognizing lane switching [1].

Wangfeng Cheng et al. [2023] present an instance segmentation-based lane line detection algorithm for complex traffic scenes. Using RepVgg-A0 for encoding and a multi-size asymmetric shuffling convolution model for feature extraction, the algorithm employs an adaptive upsampling decoder. Successfully deployed on a Jetson Nano, it achieves 96.7 percent accuracy and a real-time speed of 77.5 fps. The study reviews traditional and deep learning-based methods, highlighting the proposed algorithm's effectiveness in enhancing autonomous driving safety [2].

Figure 1: YOLOP Architecture
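The publisher/subscriber flow in Soni's ROS pipeline — a callback converts each incoming frame, runs lane detection, and republishes the result unless a CvBridgeError occurs — can be sketched as follows. This is a minimal plain-Python stand-in, not Soni's actual code: the class and helper names are hypothetical, and a real node would use rospy and cv_bridge instead of these substitutes.

```python
# Sketch of the publisher/subscriber image pipeline described above.
# Plain-Python stand-ins replace rospy/cv_bridge so the control flow is
# visible without a ROS installation; all names are illustrative.

class CvBridgeError(Exception):
    """Raised when an incoming message cannot be converted to an image."""

class ImageConverter:
    def __init__(self, publish):
        self.publish = publish  # callable standing in for a ROS publisher

    def detect_lanes(self, image):
        # Stand-in for the curved-lane-detection step: tag the frame.
        return {"frame": image, "lanes": "detected"}

    def callback(self, msg):
        # Convert the incoming message, run lane detection, publish the
        # result -- unless the conversion fails.
        try:
            if not isinstance(msg, bytes):
                raise CvBridgeError("expected raw image bytes")
            image = msg.decode("ascii")  # trivial "conversion"
            self.publish(self.detect_lanes(image))
        except CvBridgeError:
            pass  # drop the frame, as the pipeline above describes

published = []
node = ImageConverter(published.append)
node.callback(b"frame-001")   # valid frame: converted and published
node.callback(12345)          # invalid frame: silently dropped
print(published)              # one published result for the valid frame
```

The try/except mirrors the behavior described above: a frame that fails conversion is skipped rather than crashing the node.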
2 METHODOLOGY

2.1 Research Design

The researchers used exploratory research for this study. It was used to determine the suitability of YOLOP for lane detection, to explore the possible optimization of drivable area segmentation by using a custom dataset for training, and, finally, to test an autonomous vehicle prototype using a Jetson Nano device. Exploratory research is often conducted when there is little existing knowledge on a topic or when a new angle or perspective is being considered. In this case, the researchers were trying to gain an understanding of how well YOLOP performed in this specific context and whether it could be used effectively on the target device, the Jetson Nano. By conducting exploratory research, they aimed to gather preliminary data and insights that could help guide future research and decision-making.

area, and object detection. The utilization of this dataset ensures significant time and resource savings while maintaining data quality and diversity, crucial for prototyping in object detection, lane, and drivable area segmentation development.
is added to Lllseg as it is efficient for predicting sparse categories.

Model Training. The training involves training a model for object detection, lane-line segmentation, and drivable area segmentation. It can take several hours depending on the dataset size and computing power.

Evaluation and Testing. In the context of evaluation and testing, precision and recall were employed for object detection. For drivable area accuracy assessment, the researchers utilized IoU and mIoU. Similarly, for lane line segmentation, the metrics of interest include accuracy, IoU, and mIoU.

Table 1: Total no. of images in the dataset used in the model

Dataset                                 Total Number
Total Number of Datasets (Training)           70,000
Total Number of Datasets (Testing)            20,000
Total Number of Datasets (Validation)         10,000
Total Number of Images                       100,000
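The evaluation metrics named above reduce to simple counting over predicted and ground-truth pixels. A minimal sketch, assuming flat binary mask lists as a stand-in for segmentation outputs (the helper names are ours, not the paper's):

```python
# Minimal versions of the metrics used above, computed on flat binary
# masks (1 = positive pixel/class, 0 = background).

def precision_recall(pred, truth):
    tp = sum(p and t for p, t in zip(pred, truth))       # true positives
    fp = sum(p and not t for p, t in zip(pred, truth))   # false positives
    fn = sum(t and not p for p, t in zip(pred, truth))   # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def iou(pred, truth):
    # Intersection over union of the two positive-pixel sets.
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    return inter / union if union else 0.0

def miou(preds_by_class, truths_by_class):
    # Mean IoU: average the per-class IoU over all classes.
    scores = [iou(p, t) for p, t in zip(preds_by_class, truths_by_class)]
    return sum(scores) / len(scores)

pred  = [1, 1, 0, 0, 1]
truth = [1, 0, 0, 1, 1]
print(precision_recall(pred, truth))  # (0.666..., 0.666...)
print(iou(pred, truth))               # 2 intersecting / 4 in union = 0.5
```

In practice these counts would be accumulated over whole mask arrays per class; the arithmetic is identical.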
Lane Keeping Implementation. After model training, the lane keeping implementation is vital in prototyping, as it helps the driving system keep the vehicle in its lane using computer vision techniques based on the output of the YOLOP model.

The dataset was divided into three subsets: training, testing, and validation. First, 70,000 images were used for training, which is responsible for object detection, wherein the goal is to identify and locate objects. A portion of the 70,000 images was annotated for objects such as cars and pedestrians, and another portion was also used for semantic segmentation tasks. Second, 20,000 images were used for testing. This subset indicates the object or category within the simulated environment; for example, the road, cars, pedestrians, and other objects in the scene would be labeled with different classes. Some of these 20,000 test images were used for model evaluation, performance metrics, generalization, benchmarking, fine-tuning, and debugging. These test images were applied to evaluate the accuracy and performance of the model and helped assess whether the prototype had learned meaningful patterns across different scenarios. Lastly, 10,000 images were used for validation, which is responsible for monitoring the model's performance throughout the training process. This helps ensure that the model is learning meaningful patterns from the training data and making progress in terms of accuracy and other relevant metrics. Overall, 100,000 images were used for the model.

Figure 6: Prototype Model

Table 2: Summary of Test Results

Test Case             Test 1    Test 2    Test 3
Broken Line           Failed    Failed    Passed
Left Curve            Passed    Passed    Failed
Right Curve           Failed    Failed    Failed
Single White Lane     Passed    Passed    Passed
Collision Avoidance   Passed    Failed    Passed

3.3 Model Training
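The Lane Keeping Implementation step described in the methodology — steering so the vehicle stays in its lane based on the YOLOP output — can be illustrated with a toy proportional controller. Everything here (mask width, gain value, function names) is a simplified assumption for illustration, not the prototype's actual code:

```python
# Toy lane-keeping sketch: take one row of a binary lane-line mask (as
# YOLOP's lane branch would produce near the bottom of the image), find
# the lane center, and steer proportionally toward it. All names and the
# gain value are illustrative only.

def lane_center(mask_row):
    """Mean column index of lane-line pixels in one image row."""
    cols = [i for i, v in enumerate(mask_row) if v]
    return sum(cols) / len(cols) if cols else None

def steering_command(mask_row, gain=0.02):
    """Positive -> steer right, negative -> steer left, 0.0 -> hold."""
    center = lane_center(mask_row)
    if center is None:
        return 0.0                       # no lane visible: hold course
    image_center = (len(mask_row) - 1) / 2
    return gain * (center - image_center)

# An 11-pixel-wide mask row with lane pixels left of the image center:
row = [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
print(steering_command(row))   # negative: steer left toward the lane
```

A real implementation would average several rows and smooth the command over time, but the offset-from-center idea is the same.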