
EXPLORING PRETRAINED YOLOP FOR LANE DETECTION IN AUTONOMOUS VEHICLE

Ricky C. Layderos ([email protected])
Camarines Sur Polytechnic Colleges, San Miguel, Bato, Camarines Sur

Bryan N. Lomerio ([email protected])
Camarines Sur Polytechnic Colleges, Santa Cruz Sur, Iriga City, Camarines Sur

Aaron Francis L. Pacardo ([email protected])
Camarines Sur Polytechnic Colleges, San Vicente, Bato, Camarines Sur
ABSTRACT
This study explored the use of pretrained YOLOP for lane detection in autonomous vehicles, emphasizing the crucial role of accurate lane identification for safe and efficient self-driving. Through exploratory research, the pretrained YOLOP model underwent a comprehensive evaluation in 10 scenarios, successfully detecting lanes and objects in a simulated environment. Results showed superior performance in multiclass detection, lane line segmentation, and drivable area segmentation, surpassing custom weights. The researchers developed a YOLOP-based prototype for autonomous driving, showcasing capabilities in multi-class object detection and collision anticipation. Real-world testing confirmed the model's adaptability and efficiency, highlighting its potential for diverse scenarios. In conclusion, this study demonstrates the practical application of pretrained YOLOP for enhancing autonomous vehicle safety and efficiency, marking a significant advancement in lane detection technology.

CCS CONCEPTS
• Computing methodologies → Vision for robotics.

KEYWORDS
yolop, artificial intelligence, computer vision, robotics, lane segmentation, object detection

1 INTRODUCTION
An autonomous vehicle, also referred to as a self-driving car or driverless car, employs a range of sensors, cameras, radar, and artificial intelligence (AI) to navigate between destinations without the need for a human driver. For a vehicle to be considered fully autonomous, it must have the capability to travel to a predetermined destination without human intervention, even on roads that have not been specifically adapted for its use. AI technologies play a crucial role in powering self-driving car systems. The developers of self-driving cars rely on vast amounts of data from image recognition systems, in conjunction with deep learning, to construct systems that can navigate autonomously. Neural networks are utilized to identify patterns in the data, which are subsequently processed by the machine learning algorithms. These datasets encompass images from cameras installed on self-driving cars, allowing the neural network to learn how to detect various components of a given driving environment, such as traffic lights, trees, curbs, pedestrians, street signs, and other relevant objects [6].

This study delved into the exploration of pretrained YOLOP for lane detection in autonomous vehicles. YOLOP is a powerful network when incorporated into autonomous vehicles for lane segmentation, drivable area segmentation, and object detection. This study sought to provide answers to the following questions: how YOLOP can be incorporated on lower-specification computers, the potential impact of training a lane detection model with a custom dataset on its performance, and the effectiveness of the lane keeping algorithm for the autonomous vehicle.

This study's general objective was to explore the use of YOLOP in autonomous vehicles where object detection and segmentation are present, and to leverage the capabilities of this object detection model to enhance the vehicle's perception and decision-making processes. The study focuses on exploring and evaluating YOLOP for multiclass detection, lane line segmentation, and drivable area segmentation. Additionally, it aims to develop and evaluate the performance of a YOLOP-based prototype for lane detection in autonomous vehicles.

Soni, R. [2022] focuses on lane detection for autonomous vehicles, examining traditional and deep learning-based methods. The Robot Operating System (ROS) is employed to streamline subsystem communication, utilizing a pipeline with a publisher, subscriber, and core. The program creates an Image Converter class object, spins the ROS node, and shuts down OpenCV windows on exception. The Image Converter class uses a callback function to convert images, employing the Curved Lane Detection class to find street lanes. Processed images are published to the topic unless a CvBridgeError occurs [5].

The study of Paden, B., et al., entitled "A Survey of Self-Driving Car Technology", explored the technology used in self-driving cars. It covers how these cars "see" the world. This paper is a great resource for getting a clear picture of what goes into making self-driving cars work [4].

Qin Zou et al. [2022] propose a hybrid deep neural network for challenging lane detection. Combining a deep convolutional neural network (DCNN) and a deep recurrent neural network (DRNN), the model utilizes the DCNN as a feature extractor to reduce dimensionality and preserve details. The DRNN processes the extracted features recursively, establishing fully connected layers for effective lane prediction in time-series signals [3].

The study of Dr. A.K. Madan and Divyansha Jharwal [2022] identified the increasing rate of accidents due to drivers' lack of attention as a major issue in road safety. They propose using lane detection systems that alert drivers when they switch to an incorrect lane marking. The system utilizes a camera positioned on the front of the car that captures the view of the road and detects lane boundaries. Madan and Jharwal used OpenCV, an open-source computer vision library, to implement their algorithm. They divided the video image into a series of sub-images and generated image features for each of them, which are then used to recognize the lanes on the road. The proposed system's functionality ranges from displaying road line positions to advanced applications such as recognizing lane switching [1].
Wangfeng Cheng et al. [2023] present an instance segmentation-based lane line detection algorithm for complex traffic scenes. Using RepVgg-A0 for encoding and a multi-size asymmetric shuffling convolution model for feature extraction, the algorithm employs an adaptive upsampling decoder. Successfully deployed on a Jetson Nano, it achieves 96.7 percent accuracy and a real-time speed of 77.5 fps. The study reviews traditional and deep learning-based methods, highlighting the proposed algorithm's effectiveness in enhancing autonomous driving safety [2].

Figure 1: YOLOP Architecture

2 METHODOLOGY
2.1 Research Design
The researchers used exploratory research for this study. It was used to determine the suitability of YOLOP for lane detection and the possible optimization of drivable area segmentation by using a custom dataset for training, and finally to test an autonomous vehicle prototype using a Jetson Nano device. Exploratory research is often conducted when there is little existing knowledge on a topic or when a new angle or perspective is being considered. In this case, the researchers were trying to gain an understanding of how well YOLOP performed in this specific context and whether it could be used effectively on the target device, the Jetson Nano. By conducting exploratory research, they aimed to gather preliminary data and insights that could help guide future research and decision-making.

2.2 Theorems, Algorithm, and Mathematical Framework
This part presents the theorems, algorithms, and mathematical framework needed in this study. The following were utilized in this research, entitled "Exploring Pretrained YOLOP for Lane Detection in Autonomous Vehicle."

YOLOP Architecture. YOLOP is a real-time multi-task detection architecture with a unified encoder (backbone and neck networks), as illustrated in Figure 1. It utilizes YOLOv4's CSPDarknet for the backbone, addressing gradient duplication issues. The neck network integrates features through spatial pyramid pooling (SPP) and feature pyramid network (FPN) modules, ensuring information from various scales and semantic levels. YOLOP employs concatenation to merge features effectively. The architecture includes three decoders for object detection, drivable area segmentation, and lane detection, aiming to enhance real-time multi-task detection performance. A minimal sketch of this layout is shown below.
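To make the three-decoder layout concrete, the following is a minimal PyTorch sketch of a shared encoder feeding three task heads. It is an illustration only: the blocks are simplified stand-ins with assumed channel sizes and strides, not the actual CSPDarknet, SPP, or FPN modules of YOLOP.

```python
import torch
import torch.nn as nn

class YOLOPSketch(nn.Module):
    """Toy skeleton of YOLOP's layout: one shared encoder (backbone + neck)
    feeding three task decoders. Every block is a simplified stand-in."""
    def __init__(self, num_det_outputs=18):
        super().__init__()
        # Stand-in for the CSPDarknet backbone (downsamples 8x here).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Stand-in for the SPP + FPN neck that fuses multi-scale features.
        self.neck = nn.Sequential(nn.Conv2d(128, 128, 3, padding=1), nn.ReLU())
        # Decoder 1: object detection head (grid of box/class predictions).
        self.det_head = nn.Conv2d(128, num_det_outputs, 1)
        # Decoders 2 and 3: per-pixel heads upsampled back to input resolution.
        def seg_head():
            return nn.Sequential(
                nn.Conv2d(128, 2, 1),
                nn.Upsample(scale_factor=8, mode="bilinear", align_corners=False),
            )
        self.da_seg_head = seg_head()   # drivable area segmentation
        self.ll_seg_head = seg_head()   # lane line segmentation

    def forward(self, x):
        feats = self.neck(self.backbone(x))
        return self.det_head(feats), self.da_seg_head(feats), self.ll_seg_head(feats)

det, da, ll = YOLOPSketch()(torch.randn(1, 3, 384, 640))
print(det.shape, da.shape, ll.shape)
```

The point of the shared encoder is that all three tasks reuse one feature computation, which is what makes the multi-task design attractive for real-time use on small devices such as the Jetson Nano.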
Dataset. The research utilized the BDD100K dataset for autonomous vehicle applications, comprising 100,000 driving images from over 50,000 rides. Each image has a resolution of 1280x640 and includes diverse scenes such as city streets, residential areas, and highways, along with varying weather conditions. The dataset incorporates annotations for lane detection, object detection, semantic segmentation, instance segmentation, panoptic segmentation, multi-object tracking, and segmentation tracking. It consists of 70,000 training images, 20,000 testing images, and 10,000 validation images, providing comprehensive data for scene tagging, lane marking, drivable area, and object detection. The utilization of this dataset ensures significant time and resource savings while maintaining data quality and diversity, crucial for prototyping in object detection, lane, and drivable area segmentation development.

Figure 2: Dataset used for Training

Test Cases. The diverse set of test cases encompasses various challenging road driving scenarios, ensuring a comprehensive evaluation of the prototype's performance. The absence of a midline lane, the presence of a single white lane, and double solid yellow lanes simulate real-world situations, testing the system's ability to navigate different road markings accurately. Additionally, scenarios such as curved left and right lanes, blurry lane lines, intersections, and pedestrian lanes further challenge the prototype's adaptability and responsiveness, providing a robust assessment of its capabilities in handling complex driving environments. Sample test cases are shown in Figure 3. Together, these instruments provided the necessary tools for developing an autonomous vehicle prototype capable of real-time lane keeping and for testing a new deep learning algorithm.
Figure 3: Sample Test Cases

Figure 3 illustrates sample test cases for lane detection, featuring a single white lane and double solid yellow lanes. The tests are conducted in both day and night conditions, assessing the algorithm's ability to accurately identify and track lanes under varying lighting and road marking scenarios.

2.3 Procedure/Process
The YOLOP model was adjusted with new data from a custom dataset to enhance its real-time performance in identifying and segmenting drivable areas. The adjusted YOLOP model was run on the Jetson Nano device for optimal performance. Additionally, a lane keeping algorithm was created and integrated into the autonomous vehicle prototype to improve its ability to stay within its lane while driving. In order for the prototype to see and detect lanes, a camera was incorporated into the prototype to provide visual data for the YOLOP model and lane-keeping algorithm and to detect obstacles and segment lines. The autonomous vehicle prototype then underwent testing on a closed course to assess its real-time lane keeping and drivable area segmentation capabilities. Lastly, the results of these tests were examined to determine the effectiveness of the adjusted YOLOP model and lane-keeping algorithm in enhancing the prototype's performance. This process outlines the steps involved in conducting the research and developing an autonomous vehicle prototype capable of real-time image processing, lane keeping, and testing.

2.4 Evaluation Methods
Mean intersection over union (mIoU). mIoU is a widely used metric for evaluating segmentation models. It is calculated by averaging the intersection over union (IoU) between the model's predictions and the ground truth. IoU is a measure of how much overlap there is between two sets of pixels; it is calculated by dividing the intersection of the two sets by the union of the two sets. Mathematically, the mIoU can be expressed as:

mIoU = (1/N) * Σ IoU_i, where N is the number of classes.

Intersection over Union (IoU). IoU stands for Intersection over Union. It is a metric commonly used to evaluate the performance of object detection algorithms, particularly in tasks like object localization and segmentation. It is calculated as the ratio of the area of intersection between two bounding boxes to the area of their union, and it is used to determine how well a predicted bounding box (usually generated by an object detection algorithm) aligns with a ground truth bounding box, which is a manually annotated or known true bounding box for the same object. The formula for calculating IoU is:

IoU = (Area of Intersection) / (Area of Union)

This metric is vital in object detection, in tracking and fusion, and for evaluation and benchmarking.

Accuracy. The fraction of pixels that are correctly classified; a higher accuracy indicates a better segmentation. Accuracy will be measured by choosing an appropriate custom dataset, splitting it into training and testing sets, training the algorithm, evaluating its performance, analyzing the results, and iterating to improve the model. The formula for accuracy is:

Accuracy = (Number of Correct Predictions) / (Total Number of Predictions)

Precision. The fraction of pixels that are classified as drivable area and are actually drivable area. This will be calculated to evaluate the ability of the YOLOP model in detecting objects and lanes:

Precision = TP / (TP + FP)

where TP represents True Positives and FP False Positives.

Recall. The fraction of drivable area pixels that are correctly classified:

Recall = TP / (TP + FN)

where TP are True Positives and FN are False Negatives. A minimal code sketch of these metrics follows.
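For reference, the following is a minimal NumPy sketch of the pixel-level metrics defined above, assuming binary masks for accuracy, precision, recall, and IoU, and integer label maps for mIoU. It illustrates the formulas; it is not the evaluation code used in the study.

```python
import numpy as np

def confusion_counts(pred, truth):
    """TP/FP/FN/TN for binary masks (e.g., drivable-area pixels)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    tn = np.sum(~pred & ~truth)
    return tp, fp, fn, tn

def accuracy(pred, truth):
    tp, fp, fn, tn = confusion_counts(pred, truth)
    return (tp + tn) / (tp + fp + fn + tn)        # correct / total predictions

def precision(pred, truth):
    tp, fp, _, _ = confusion_counts(pred, truth)
    return tp / (tp + fp) if tp + fp else 0.0     # TP / (TP + FP)

def recall(pred, truth):
    tp, _, fn, _ = confusion_counts(pred, truth)
    return tp / (tp + fn) if tp + fn else 0.0     # TP / (TP + FN)

def iou(pred, truth):
    tp, fp, fn, _ = confusion_counts(pred, truth)
    return tp / (tp + fp + fn) if tp + fp + fn else 0.0  # intersection / union

def miou(pred_labels, truth_labels, num_classes):
    """mIoU = (1/N) * sum of per-class IoU over N classes."""
    return np.mean([iou(pred_labels == c, truth_labels == c)
                    for c in range(num_classes)])
```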
Frames per second (FPS). A measure of how fast a machine learning model can process images. It is used to evaluate the real-time performance of machine learning models and is especially important for applications such as autonomous driving and robotics.

Loss Function. The multi-task loss is composed of three parts: detection loss L_det, drivable area segmentation loss L_da-seg, and lane line segmentation loss L_ll-seg [1]. The detection loss L_det is a weighted sum of classification loss L_class, object loss L_obj, and bounding box loss L_box, as shown in equation (1) [1]:

L_det = α1 * L_class + α2 * L_obj + α3 * L_box    (1)

where L_class and L_obj are focal losses, used to reduce the loss of well-classified examples and focus on the hard ones [2]. L_class penalizes classification and L_obj penalizes the confidence of one prediction. L_box is L_CIoU, which takes into account distance, overlap rate, scale similarity, and aspect ratio between the predicted box and the ground truth [1]. Both the drivable area segmentation loss L_da-seg and the lane line segmentation loss L_ll-seg contain a cross-entropy loss with logits, L_ce, which minimizes classification errors between pixels of network outputs and targets [1]. An intersection over union (IoU) loss:

L_IoU = 1 - TP / (TP + FP + FN)

is added to L_ll-seg, as it is efficient for predicting the sparse categories of lane lines. L_da-seg and L_ll-seg are defined as shown in equations (2) and (3):

L_da-seg = L_ce    (2)
L_ll-seg = L_ce + L_IoU    (3)

The final loss is a weighted sum of all three parts, as shown in equation (4):

L_all = γ1 * L_det + γ2 * L_da-seg + γ3 * L_ll-seg    (4)

where α1, α2, α3, γ1, γ2, and γ3 can be adjusted to achieve a balance among all components of the overall loss. A hedged code sketch of this combined loss follows.
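The PyTorch sketch below mirrors equations (1) through (4). The focal-loss form, the soft IoU relaxation, and the default weight values are illustrative assumptions, and the CIoU box term is passed in precomputed rather than implemented.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    # Focal loss (used for L_class and L_obj): down-weights easy examples.
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)                      # probability of the true label
    return ((1.0 - p_t) ** gamma * bce).mean()

def soft_iou_loss(logits, targets, eps=1e-6):
    # L_IoU = 1 - TP / (TP + FP + FN), computed on soft (sigmoid) predictions.
    p = torch.sigmoid(logits)
    tp = (p * targets).sum()
    fp = (p * (1 - targets)).sum()
    fn = ((1 - p) * targets).sum()
    return 1.0 - tp / (tp + fp + fn + eps)

def multitask_loss(cls_pair, obj_pair, box_loss, da_pair, ll_pair,
                   alphas=(1.0, 1.0, 1.0), gammas=(1.0, 1.0, 1.0)):
    # Each *_pair is (logits, targets); box_loss stands in for the CIoU term.
    a1, a2, a3 = alphas
    g1, g2, g3 = gammas
    l_det = a1 * focal_loss(*cls_pair) + a2 * focal_loss(*obj_pair) + a3 * box_loss  # (1)
    l_da_seg = F.binary_cross_entropy_with_logits(*da_pair)                           # (2)
    l_ll_seg = F.binary_cross_entropy_with_logits(*ll_pair) + soft_iou_loss(*ll_pair) # (3)
    return g1 * l_det + g2 * l_da_seg + g3 * l_ll_seg                                 # (4)
```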
Likert Scale. The Likert scale is an invaluable tool that the researchers employed for the purpose of evaluating and grading the test cases associated with the pretrained YOLOP and custom YOLOP weights, both for daytime and nighttime results. This scale offers a structured and standardized approach to assessing the quality and effectiveness of these test cases. Furthermore, the application of the Likert scale enhances the reliability and comparability of the evaluation process, enabling the researchers to draw robust conclusions about the efficacy of the pretrained YOLOP and custom YOLOP weights in both daytime and nighttime scenarios.

Arithmetic Mean. The average Likert scale outcomes of the test cases were determined using the Arithmetic Mean (AM), a statistical measure that is computed by adding up all of the individual Likert scale values and dividing the total by the total number of values. This procedure helps condense the collective assessment into a single numerical representation, enabling a greater grasp of the whole evaluation. Mathematically, the mean is defined as:

Mean = (Sum of Given Values) / (Total Number of Values)
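As a minimal illustration of this computation (the ratings below are hypothetical, not study data):

```python
# Hypothetical Likert ratings (1 = poor ... 5 = excellent) for one test case.
ratings = [4, 5, 3, 4, 5]
mean = sum(ratings) / len(ratings)  # Sum of Given Values / Total Number of Values
print(round(mean, 2))               # 4.2
```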
2.5 Conceptual Framework
The framework of this study is the following:

Figure 4: Conceptual Framework

Custom Data Labeling. The first step was to customize the labels used from the datasets incorporated in the training. Previously, only one label was utilized from the datasets used in the previous study. BDD100K datasets were used for the training, adding 12 categories for a total of 13 categories.

Model Training. The training involves training a model for object detection, lane-line segmentation, and drivable area segmentation. It can take several hours depending on the dataset size and computing power.

Evaluation and Testing. In the context of evaluation and testing, precision and recall were employed for object detection. For drivable area accuracy assessment, the researchers utilized IoU and mIoU. Similarly, in the case of lane line segmentation, the metrics of interest include accuracy, IoU, and mIoU.

Lane Keeping Implementation. After model training, the lane keeping implementation is vital in prototyping, as it helps the driving system keep the vehicle in its lane using computer vision techniques based on the output of the YOLOP model (a minimal sketch of this step appears after these framework stages).

Autonomous Vehicle Prototype. Once data collection and preparation are successful, test cases are presented, and the training is successful, a prototype of the autonomous vehicle will be made and deployed.

Prototype Simulation and Testing. The performance of the autonomous vehicle prototype is tested using simulated road scenarios. Five test cases, each comprising three test runs, will be conducted. These tests include lane keeping and object collision avoidance.
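The lane keeping algorithm itself is not reproduced in this paper, so the following is a hedged sketch of one plausible lane keeping step: it assumes the YOLOP lane-line output is a binary H x W mask, and the look-ahead band and steering gain are made-up values.

```python
import numpy as np

def steering_from_lane_mask(lane_mask, gain=0.005):
    """Toy lane-keeping step: steer toward the lane center estimated from the
    bottom rows of a binary lane-line mask (H x W). Gain is an assumption."""
    h, w = lane_mask.shape
    band = lane_mask[int(0.8 * h):, :]           # look only near the vehicle
    cols = np.where(band.any(axis=0))[0]         # columns containing lane pixels
    if cols.size == 0:
        return 0.0                               # no lanes seen: hold straight
    lane_center = (cols.min() + cols.max()) / 2
    offset = lane_center - w / 2                 # pixels left(-)/right(+) of center
    return float(np.clip(gain * offset, -1.0, 1.0))  # normalized steering command
```

A proportional correction like this is the simplest choice; a real implementation would typically smooth the estimate over frames and tune the gain on the closed course.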
3 RESULTS AND DISCUSSION
3.1 Explore and Evaluate YOLOP
This evaluation explores the YOLOP-based autonomous vehicle prototype, encompassing inter-device communication, motor control, and comprehensive testing scenarios. The pseudocode outlines key functions for effective communication and motor control, while the testing results provide insights into system performance and areas for refinement. This iterative process highlights the prototype's strengths and limitations, emphasizing the ongoing need for fine-tuning to enhance its reliability in diverse real-world driving scenarios.
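The pseudocode for inter-device communication and motor control is not reproduced in this excerpt. As a hedged illustration, a Jetson-side link to the Arduino could look like the sketch below (requires the pyserial package); the port name, baud rate, and the "S... T..." line protocol are assumptions that a matching Arduino sketch would have to parse.

```python
import serial  # pyserial; the Jetson-to-Arduino link details are assumptions

class MotorLink:
    """Minimal command channel: send steering/throttle as a text line that an
    Arduino sketch could parse to drive the servo and L298N motor driver."""
    def __init__(self, port="/dev/ttyACM0", baud=115200):
        self.conn = serial.Serial(port, baud, timeout=0.1)

    def drive(self, steering, throttle):
        # steering/throttle in [-1, 1]; the line format here is hypothetical.
        self.conn.write(f"S{steering:+.2f} T{throttle:+.2f}\n".encode())

    def stop(self):
        self.drive(0.0, 0.0)
```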
3.2 Data Preparation
After building the respective algorithms for each model architecture, multiple training sessions were conducted before arriving at the final results.

Table 1: Total Number of Images in the Dataset Used in the Model

Dataset                                Total Number
Total Number of Datasets (Training)    70,000
Total Number of Datasets (Testing)     20,000
Total Number of Datasets (Validation)  10,000
Total Number of Images                 100,000

The dataset was subdivided into 3 categories: training, testing, and validation. First, the total number of images utilized for training is 70,000, which are responsible for object detection, wherein the goal is to identify and locate objects. A portion of the 70,000 images was annotated for objects like cars, pedestrians, and more. Another portion of these images was also used for semantic segmentation tasks. In the second instance, the total number of images used for testing is 20,000. This subsection is responsible for indicating the object or category within the simulated environment; for example, the road, cars, pedestrians, and other objects in the scene would be labeled with different classes. Some of the 20,000 testing images were used for model evaluation, performance metrics, generalization, benchmarking, fine-tuning, and debugging. These test images were applied to evaluate the accuracy and performance of the model. This also helped in assessing whether the prototype had learned meaningful patterns and different scenarios. Lastly, the 10,000 images used for validation are responsible for monitoring the model's performance throughout the training process. This helps ensure that the model is learning meaningful patterns from the training data and making progress in terms of accuracy and other relevant metrics. Overall, the total number of images used to train the model is 100,000.

3.3 Model Training

Figure 5: Custom Trained Weights Model Training and Testing

3.4 Prototype
The prototype features a Jetson Nano as the core AI computer, coupled with a Raspberry Pi Camera V2 for real-time processing. Connectivity is enabled by a wireless adapter, and an Arduino Uno controls a servo and an L298N motor driver. Two power sources, batteries, and step-down regulators ensure efficient operation. Mobility is achieved through a modified toy car axle, driven by a servo motor for precise control. The entire setup is housed in a thin acrylic car chassis, completing the prototype.

Figure 6: Prototype Model

Table 2: Summary of Test Results

Test Case            Test 1   Test 2   Test 3
Broken Line          Failed   Failed   Passed
Left Curve           Passed   Passed   Failed
Right Curve          Failed   Failed   Failed
Single White Lane    Passed   Passed   Passed
Collision Avoidance  Passed   Failed   Passed

In a series of tests, the autonomous vehicle prototype showed varying performance. Broken lines caused failures in lane line segmentation, especially at high speeds. Left curves had mixed results, with road line quality affecting outcomes. Right curves consistently failed due to poor segmentation and camera width issues. The model excelled in the single white lane test, performing best with straight and connected lines. Collision avoidance success depended on car speed and accurate object detection. Insights revealed limitations in camera capture width, sensitivity to lane line quality, and challenges at higher speeds. Blind spots under the camera pose a limitation, indicating a need for further refinement in the prototype's camera system.

4 SUMMARY OF FINDINGS, CONCLUSION, AND RECOMMENDATIONS
4.1 Summary
Autonomous vehicles, often referred to as self-driving cars or AVs, represent a transformative technological innovation in the field of transportation. These vehicles are engineered to function autonomously, operating without direct human involvement. They depend on an amalgamation of sophisticated sensors, machine learning algorithms, and real-time data processing to navigate, make decisions, and interact with their surroundings. The integration of lane and object detection technology in autonomous vehicles signifies a significant breakthrough in the field of self-driving cars. These vehicles are designed to operate without human intervention, leveraging advanced sensors and artificial intelligence algorithms to detect and segment lanes, as well as to identify and respond to objects and obstacles in their path. For researchers, exploring and understanding the systems of autonomous vehicles can be a big breakthrough in the car industry today. In this study, entitled "Exploring Pretrained YOLOP for Lane Detection in Autonomous Vehicle", the results showed that the system was able to detect objects and segment drivable areas and lines. The primary objective of this research was to assess and investigate YOLOP in the context of multiclass object detection. The YOLOP network and BDD100K datasets were employed for the development, training, and testing of the model. The researchers utilized Visual Studio Code for both training and testing the dataset, incorporating the architecture, algorithm, and evaluation metrics. To obtain the results, the researchers employed metrics such as mIoU, IoU, accuracy, precision, recall, and frames per second (fps) for presentation.
4.2 Findings
The researchers explored the use of a pretrained YOLOP lane detection approach in a simulated environment. Based on the results, the prototype can successfully segment and detect lines and objects in comparison to the pretrained weights. Derived from the examination of the collected data, the subsequent conclusions were formulated:

1. The researchers explored the pretrained YOLOP for lane detection in autonomous vehicles. The pretrained YOLOP weight and the custom YOLOP weight were evaluated in terms of: (a) Multiclass detection. The pretrained YOLOP weight garnered a 90.8 percent precision rate, 61.5 percent recall, and 60.4 percent mAP. The custom weight was able to detect multiple classes such as pedestrian, car, bus, traffic light, and more, with 72 percent precision, 12.2 percent recall, and 11.0 percent mAP. (b) Lane line segmentation. The pretrained YOLOP weight obtained 70.3 percent accuracy, 4.1 percent better than the custom weight; the pretrained weight has a higher accuracy rate compared to the custom weight. Still, the custom weight had a reasonable accuracy rate and can segment and identify lines in a lane. The pretrained YOLOP weight has IoU and mIoU ratings of 26.1 percent and 62.2 percent, while the custom YOLOP weight has 30.5 percent and 64.3 percent. (c) Drivable area segmentation. The pretrained YOLOP weight garnered 97.4 percent accuracy, 85.9 percent IoU, and 91.4 percent mIoU. The custom YOLOP weight obtained 96.6 percent accuracy, 82.6 percent IoU, and 89.3 percent mIoU. The pretrained YOLOP weight has 0.8 percent better accuracy than the custom YOLOP weight, and both were excellent in drivable area segmentation.

2. The prototype was developed by utilizing the YOLOP architecture for the autonomous vehicle. The researchers' prototype was able to drive on its own and, with the help of the lane keeping algorithm, it can stay in one lane. The prototype can also detect multiclass objects, segment the different kinds of lines, and segment drivable areas simultaneously.

3. The proposed prototype was evaluated for its performance. The researchers trained and tested the needed tools, datasets, and algorithms. The prototype was able to drive autonomously, detect objects, and detect collisions ahead of time.

4.3 Conclusion
The researchers drew the following conclusions based on their findings:

1. The researchers, through their study on the application of pretrained YOLOP for lane detection in autonomous vehicles, found that both the pretrained YOLOP weight and the custom weight exhibited effective performance in detecting lane lines, as well as in lane line segmentation and drivable area segmentation. Notably, the pretrained YOLOP outperformed the custom weight in lane line segmentation, but the custom weight still demonstrated commendable performance. The prototype successfully detected 13 multiclasses in object detection, and overall, both pretrained and custom weights proved effective in achieving the study's objectives, showcasing their potential in enhancing lane detection for autonomous vehicles.

2. The developed prototype, incorporating YOLOP for lane detection in autonomous vehicles, has been successfully realized, demonstrating a robust and efficient lane detection system. By integrating YOLOP, the prototype offers a promising solution for enhancing the autonomy and safety of self-driving vehicles. This conclusion highlights the practical implementation and success of the prototype in real-world scenarios, emphasizing its potential impact on advancing the capabilities of autonomous vehicles.

3. The culmination of the thesis represents a significant achievement in evaluating the performance of the developed prototype. The researchers noted that through rigorous training and meticulous development, the prototype showcased commendable capabilities, particularly in its successful navigation within a simulated environment. The obtained results from the evaluation underscore the prototype's effectiveness in autonomously carrying out driving tasks, suggesting its potential for practical applications and contributing to the broader field of autonomous vehicle development.

4.4 Recommendations
Based on the above-mentioned conclusions, the following recommendations were made:

1. Future researchers may conduct relevant studies to further improve the prototype model by improving its recall and precision. Lane line segmentation can be improved for a robust performance of the prototype.

2. Subsequent researchers have the opportunity to undertake parallel investigations and apply the datasets and YOLOP architecture employed in this study to enhance the presented results' accuracy, IoU, and mIoU. An alternative strategy could involve employing a different, more powerful microprocessor to integrate YOLOP, thereby achieving more precise performance for the prototype.

3. Subsequent researchers have the option to employ the suggested approach to improve the accuracy and resilience of lane detection and segmentation in intricate road scenarios. Utilizing a dual set of cameras enables the system to capture a broader field of view, mitigating distortion resulting from perspective projection. Consequently, the system becomes more adept at segmenting multiple lane lines and effectively managing scenarios like lane changes, merges, splits, and curves.

4. Future researchers who are interested in developing autonomous vehicles may use the drivable area segmentation to implement motion planning algorithms for lane changing scenarios.

REFERENCES
[1] Boris Bučko et al. "Computer Vision Based Pothole Detection under Challenging Conditions". In: Sensors 22.22 (2022), p. 8878. doi: 10.3390/s22228878.
[2] Wangfeng Cheng, Xuanyao Wang, and Bangguo Mao. "Research on Lane Line Detection Algorithm Based on Instance Segmentation". In: Sensors 23.2 (Jan. 2023), p. 789. doi: 10.3390/s23020789.
[3] A.K. Madan and Divyansha Jharwal. "Road Lane Line Detection Using OpenCV". Ph.D. thesis. Mechanical Engineering Department, Delhi Technological University, New Delhi, India, 2022.
[4] Brian Paden et al. "A Survey of Motion Planning and Control Techniques for Self-driving Urban Vehicles". In: (2020). arXiv: 1701.00133. Retrieved from https://arxiv.org/abs/1701.00133.
[5] Radhika Soni. "Lane Detection Using Computer Vision and Machine Learning for Self-Driving Cars". Thesis. United States: Texas A&M University, 2022.
[6] Marc Weber. "Where to? A History of Autonomous Vehicles". In: (2019). url: https://www.example.com/weber2019 (visited on 05/03/2023).
