DA1 Report
Over the years, numerous approaches have been developed to address the
challenges associated with lane detection. These approaches can be broadly
categorized into traditional methods and modern techniques. Traditional
methods often rely on image processing techniques such as edge detection,
Hough transforms, and color thresholding to identify lane markings. While
these methods have shown effectiveness in controlled environments, they
often struggle in real-world scenarios where lane markings may be faded,
occluded, or obscured by adverse weather conditions.
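The classical pipeline described above can be sketched in a few lines: edge pixels vote for line parameters in a Hough accumulator, and the strongest cell identifies the dominant lane marking. The toy edge map and angle grid below are illustrative choices only; a real pipeline would first run an edge detector such as Canny on a camera frame.

```python
import math

def hough_strongest_line(edge_pixels, thetas):
    """Accumulate Hough votes and return the (rho, theta) of the
    strongest line through the given edge pixels."""
    votes = {}
    for (x, y) in edge_pixels:
        for theta in thetas:
            # Normal-form line equation: x*cos(theta) + y*sin(theta) = rho
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            votes[(rho, theta)] = votes.get((rho, theta), 0) + 1
    return max(votes, key=votes.get)

# Synthetic "edge map": 20 collinear pixels on the diagonal y = x.
edges = [(i, i) for i in range(20)]
thetas = [math.radians(d) for d in range(0, 180, 15)]
rho, theta = hough_strongest_line(edges, thetas)
# The diagonal y = x corresponds to rho = 0 at theta = 135 degrees.
```

This brute-force voting is exactly why such methods work well on clean, straight markings but degrade when edges are faded or occluded: missing edge pixels directly weaken the winning accumulator cell.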
Modern deep-learning-based methods, in contrast, have demonstrated strong
performance in lane detection tasks by learning complex features directly
from raw sensor data. These
models can adapt to varying road conditions and can be trained to recognize
different types of lane markings and road textures.
Moreover, FPGAs can be reprogrammed to accommodate new
algorithms and updates, providing a versatile solution for evolving lane
detection requirements. Building CNNs on FPGAs can thus achieve a balance
between performance and efficiency, making them suitable for real-time
applications in autonomous driving and other embedded systems where
power and speed are critical.
Based on the initial orientation, the outline of the Project 1 report is as follows:
1. Provide a general overview of the objectives of lane detection, then list and
analyze existing groups of solutions, including the research within these
groups, and discuss their strengths and weaknesses. Then, we choose the
approach that best satisfies all of the proposed criteria.
2. Report the current issues in each section, discuss all possible solutions
carefully, and then illustrate the final result.
3. Present the final results of the lane detection application on both hardware
and software, together with a report on hardware resource usage and the
performance of our work.
Chapter 1: Overall introduction
In recent years, the field of lane detection has increasingly favored deep
learning methods over traditional approaches due to their superior
performance and robustness in addressing complex real-world driving
environments. Unlike traditional methods that rely on handcrafted features,
deep learning algorithms automatically learn hierarchical features from raw
data, enabling them to recognize lane markings under challenging conditions
such as varying lighting, shadows, occlusions, and poor weather. Additionally,
deep learning models trained on diverse and extensive datasets can generalize
well to various driving scenarios, unlike traditional algorithms that require
extensive parameter tuning and may struggle to adapt to different road types
and conditions. One approach for detecting lanes is the segmentation map.
Below is an example of a segmentation map that we created.
A segmentation map highlights the exact location and shape of lane lines within
an image, enabling precise delineation of lanes. The use of segmentation maps
in lane detection offers several significant advantages, making it a highly
effective approach for advanced driver-assistance systems (ADAS) and
autonomous vehicles. Firstly, segmentation maps provide pixel-level accuracy,
enabling precise localization and delineation of lane markings. This high-
resolution output ensures that even subtle lane features are detected,
enhancing the reliability of the lane detection system. Notable research on this
approach includes U-Net [4], ENet [5], and LaneNet [6]. However,
deep learning models used for generating segmentation maps are typically
large and computationally intensive, requiring significant memory and
processing resources. The model in [6] produces output at 50 FPS, which,
while impressive, still poses challenges for real-time deployment,
especially on resource-constrained platforms.
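To make the representation concrete, here is a toy sketch of what a segmentation map is; the tiny hand-made mask below stands in for what a model such as U-Net [4] or LaneNet [6] would predict from a camera frame, and is not actual model output.

```python
import numpy as np

# A segmentation map is a per-pixel mask the same size as the input image,
# where 1 marks "lane" pixels. This 5x8 mask is hand-made for illustration.
seg_map = np.array([
    [0, 1, 0, 0, 0, 0, 1, 0],
    [0, 1, 0, 0, 0, 0, 1, 0],
    [1, 0, 0, 0, 0, 0, 0, 1],
    [1, 0, 0, 0, 0, 0, 0, 1],
    [1, 0, 0, 0, 0, 0, 0, 1],
], dtype=np.uint8)

# Pixel-level localization: the exact (row, col) coordinates of lane markings.
rows, cols = np.nonzero(seg_map)
lane_pixels = list(zip(rows.tolist(), cols.tolist()))
```

The pixel-level accuracy comes directly from this dense format, but so does the cost: at full camera resolution the network must classify every pixel, which is what makes these models memory- and compute-intensive.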
To overcome these issues, recent studies have proposed compact lane
representation methods that incorporate more geometric description
information. For instance, Line-CNN [7] employs Line Proposal Units (LPUs)
to predict lanes using predefined straight lines, and PolyLaneNet [8]
transforms lane detection into a polynomial regression problem.
Despite achieving exceptionally high accuracy and highly optimized outputs,
these methods typically run on GPUs with high energy consumption levels,
making them less practical for onboard mobile systems in vehicles.
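The compact-representation idea behind [8] can be illustrated with a toy regression; the sample points and coefficients below are made-up numbers for illustration, not PolyLaneNet's actual architecture or output.

```python
import numpy as np

# Instead of a dense per-pixel map, each lane is summarized by a few
# polynomial coefficients: x = a*y^2 + b*y + c, where y is the image row.
# Here we recover known coefficients from synthetic lane points.
ys = np.array([10, 20, 30, 40, 50], dtype=float)  # image rows
xs = 0.01 * ys**2 + 0.5 * ys + 3.0                # lane x-positions
coeffs = np.polyfit(ys, xs, deg=2)                # [a, b, c]
```

Three coefficients replace thousands of labelled pixels, which is why such representations are attractive for bandwidth- and memory-constrained platforms, even though the cited methods still rely on power-hungry GPUs.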
In summary, deep learning has transformed lane detection by automatically
learning hierarchical features from raw data. Techniques such as
segmentation maps exemplify this
advancement, enabling pixel-level precision in lane marking detection crucial
for advanced driver-assistance systems (ADAS) and autonomous vehicles.
However, the computational demands of deep learning models, such as those
generating segmentation maps, remain a challenge for real-time deployment
on resource-constrained platforms. Recent efforts towards compact lane
representation methods aim to reconcile these issues by enhancing geometric
detail without compromising efficiency, though practical integration into
mobile vehicle systems remains an ongoing pursuit.
In recent years, there has been a growing emphasis among researchers on the
development of specialized hardware systems tailored to specific applications,
driven primarily by the imperative to enhance system performance. Unlike
general-purpose systems reliant on CPUs for managing diverse tasks such as
lane detection, these purpose-built hardware systems can be meticulously
optimized to efficiently execute specific functions, thus markedly improving
performance metrics and overall system efficacy.
ChipNet [9] is a CNN module designed for lane detection using LiDAR
sensors, showcasing a departure from traditional camera-based methods. This
approach offers distinct advantages: LiDAR exhibits robustness in varying
lighting conditions, performing effectively under both bright sunlight and
complete darkness, unlike cameras susceptible to glare and shadow issues.
Additionally, LiDAR excels in adverse weather scenarios such as rain, fog, and
snow, where cameras often falter. However, LiDAR entails drawbacks such as
higher costs and greater power consumption compared to cameras, alongside
potentially lower resolution that may omit critical details. Nevertheless, owing
to its high accuracy, reliability, and detailed environmental mapping
capabilities, LiDAR remains a preferred choice for real-time autonomous
driving applications. The implementation of this system on FPGA Xilinx
UltraScale XCKU115 achieves a notable frame rate of 79.4 FPS.
RoadNet-RT [10], another CNN offering similar outputs to ChipNet, opts for
cameras over LiDAR to balance performance and power consumption
considerations. To accelerate computations at the convolutional layer,
researchers employ data quantization and enhance multiplication efficiency by
executing two parallel 8-bit multiplications on a single DSP48E2 unit, thereby
achieving an impressive output frame rate of 327.9 FPS. However, in both
RoadNet-RT and ChipNet, the final outputs are presented as segmentation
maps, which impose significant computational demands due to their high
resolution, impacting real-time performance critical for autonomous driving
applications. Furthermore, these outputs are typically data-intensive, posing
challenges in resource-constrained environments for further processing.
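The two-multiplications-per-DSP technique mentioned above can be sketched behaviorally as follows. This is an illustration with unsigned operands only; the actual DSP48E2 implementation handles signed, quantized values and requires an additional sign-correction step, and this sketch is not the authors' code.

```python
def dual_mult_u8(a: int, b: int, c: int):
    """Compute a*c and b*c with ONE wide multiplication by packing a and b
    into a single word, mimicking how two 8-bit products can share a
    DSP48E2's 27x18-bit multiplier. Operands are unsigned 8-bit values."""
    assert 0 <= a < 256 and 0 <= b < 256 and 0 <= c < 256
    # Pack a into an upper field 18 bits above b, so the two partial
    # products (each at most 16 bits) cannot overlap.
    packed = (a << 18) | b
    product = packed * c          # the single "hardware" multiply
    bc = product & 0x3FFFF        # low 18 bits  = b*c
    ac = product >> 18            # upper bits   = a*c
    return ac, bc

ac, bc = dual_mult_u8(37, 201, 113)
```

Because each 8x8 product needs at most 16 bits, the 18-bit guard field guarantees no carry from the low product corrupts the high one, effectively doubling multiplier throughput per DSP slice.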
A further challenge arises when interfacing the FPGA with other
hardware components to enhance
system adaptability and integration.
Below is a graph summarizing the main ideas of the three systems.
In this thesis, we aim to make lane detection systems on FPGA more adaptable
and integrated, ensuring they perform well in real-time applications. To
achieve this, we build on the QuantLaneNet framework. Instead of using PCIe
to transfer data from the PC to the FPGA, we switch to Ethernet. This change
improves how the system handles data, making it easier to scale up and
compatible with modern communication setups. It also speeds up data transfer
and reduces delays, making the lane detection system more effective in
practical situations. In this thesis, there are four main issues we need to
solve:
1. Choose suitable protocols for sending packets via Ethernet. For this issue,
we aim to identify common protocols that offer reliable, low-risk transmission.
2. Extract data from the Ethernet module and forward it to a customized IP core.
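A minimal sketch of one candidate for issue 1 is shown below: streaming a frame to the FPGA as fixed-size UDP packets. The IP address, port, and packet size are hypothetical placeholders, not values from our design; a real deployment would add sequence numbers, or use TCP where packet loss is unacceptable.

```python
import socket

FPGA_ADDR = ("192.168.1.10", 5005)  # assumed FPGA IP address and port
CHUNK = 1024                        # assumed payload bytes per UDP packet

def send_frame(frame_bytes: bytes) -> int:
    """Split one camera frame into CHUNK-sized UDP packets and send them
    to the FPGA. Returns the total number of payload bytes sent."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sent = 0
    for offset in range(0, len(frame_bytes), CHUNK):
        sent += sock.sendto(frame_bytes[offset:offset + CHUNK], FPGA_ADDR)
    sock.close()
    return sent
```

UDP keeps per-packet latency low and maps simply onto a hardware receive path, but offers no delivery guarantee, which is exactly the trade-off the protocol selection in issue 1 must weigh.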
[2] Y. He, H. Wang and B. Zhang, "Color-based road detection in urban traffic
scenes," IEEE Transactions on Intelligent Transportation Systems, vol. 5, no. 4, pp.
309-318, 2004.
[4] O. Ronneberger, P. Fischer and T. Brox, "U-Net: Convolutional Networks for
Biomedical Image Segmentation," in Proc. Medical Image Computing and
Computer-Assisted Intervention (MICCAI), pp. 234-241, 2015.
[5] A. Paszke, A. Chaurasia, S. Kim and E. Culurciello, "ENet: A Deep Neural
Network Architecture for Real-Time Semantic Segmentation," arXiv:1606.02147,
2016.
[6] D. Neven, B. De Brabandere, S. Georgoulis, M. Proesmans and L. Van Gool,
"Towards End-to-End Lane Detection: an Instance Segmentation Approach," in
Proc. IEEE Intelligent Vehicles Symposium (IV), 2018.