
ROS2 (Robot Operating System 2) and
Deep Learning for Autonomous Driving

Submitted by

Sumanth Reddy (210929032)
Keshav Varma (210929054)
Sruthika K (210929236)

Under the guidance of

Dr. ASHA C S
Associate Professor
Department of Mechatronics
MIT Manipal

in partial fulfillment of the requirements for the award of the degree of

BACHELOR OF TECHNOLOGY
IN
MECHATRONICS

DEPARTMENT OF MECHATRONICS
MANIPAL INSTITUTE OF
TECHNOLOGY
(A Constituent of Manipal Academy of Higher Education)
MANIPAL - 576104, KARNATAKA, INDIA
NOVEMBER 2023
ABSTRACT
This report explores the utilization of Robot Operating System 2 (ROS2) as a robust
platform for simulating a Formula SAE (FSAE) driverless car. The objective is to create a
realistic and dynamic environment that mimics real-world scenarios, enabling the
development and testing of autonomous driving algorithms. The simulation environment
incorporates various sensors commonly found in autonomous vehicles, such as lidar,
camera, and IMU, to provide comprehensive data for algorithm development.

ROS2, with its enhanced features over its predecessor, ROS, offers a flexible and modular
framework for designing complex robotic systems. The report delves into the integration
of ROS2 components to model the FSAE driverless car's perception, control, and planning
systems. This integration facilitates the seamless interaction between simulated sensors,
actuators, and the autonomous control algorithms.
TABLE OF CONTENTS

Abstract 1

Chapter 1 Introduction 3

1.1 Brief introduction to the project 3

Chapter 2 Literature Review 4


2.1 Analysis of existing projects implementing the same idea 4

Chapter 3 Problem Definition and Objectives 5


3.1 Problem Statement 5
3.2 Objective 5

Chapter 4 Methodology 6
4.1 Hardware Setup 6
4.2 Software Configuration 6
4.3 Algorithm Implementation 6
4.4 Testing and Validation 6

Chapter 5 Contribution of Each Student 7

Chapter 6 Results and Discussion 8


6.1 Fault Analysis 8
6.2 Experiment Results 8

Chapter 7 Conclusion and Future Scope 9

7.1 Conclusion 9

Chapter 8 References 10
CHAPTER 1

INTRODUCTION

1.1 BRIEF INTRODUCTION TO THE PROJECT
Autonomous driving systems rely on a comprehensive understanding of the vehicle's
surroundings, efficient decision-making algorithms, and precise control mechanisms.
ROS2, as a flexible and modular robotic middleware, provides a suitable framework for
developing complex autonomous systems. Deep learning, with its ability to extract
intricate patterns from large datasets, enhances perception and decision-making
capabilities crucial for autonomous vehicles.
Deep learning techniques, such as Convolutional Neural Networks (CNNs) and LiDAR-
based point cloud processing, play a pivotal role in perception tasks. ROS2 facilitates
the seamless integration of these deep learning models to process sensor data,
enabling the vehicle to recognize and interpret objects, pedestrians, and other dynamic
elements in its environment.
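As an illustration of how such a perception pipeline can be wired together, the following minimal Python (rclpy) sketch subscribes to a camera image topic and hands each frame to a CNN-based cone detector. The topic name /zed/image_raw and the detect_cones() stub are assumptions for illustration only; in practice they would be replaced by the actual camera driver topic and the trained model.

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge


def detect_cones(frame):
    # Placeholder for CNN inference; a trained detector would return a list
    # of cone positions or bounding boxes for this frame.
    return []


class ConePerceptionNode(Node):
    def __init__(self):
        super().__init__('cone_perception')
        self.bridge = CvBridge()
        # Subscribe to the image stream published by the camera driver
        # (topic name assumed for this sketch).
        self.subscription = self.create_subscription(
            Image, '/zed/image_raw', self.on_image, 10)

    def on_image(self, msg):
        # Convert the ROS image message to an OpenCV array for the network.
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        detections = detect_cones(frame)
        self.get_logger().info(f'Detected {len(detections)} cones')


def main():
    rclpy.init()
    node = ConePerceptionNode()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()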
ROS2 supports the implementation of decision-making algorithms, which can be
enriched by deep learning models.
Deep learning models are employed in control systems to enhance the vehicle's ability
to adapt to diverse driving conditions. ROS2's middleware architecture facilitates the
integration of these models into the control loop, enabling real-time adjustments of
steering, acceleration, and braking commands.
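A minimal sketch of this control-side integration is given below. It assumes the learned controller is wrapped in a predict_controls() stub and that drive commands are published as ackermann_msgs/AckermannDriveStamped messages on a /cmd topic; both the topic name and the stub are illustrative assumptions, not the project's actual interfaces.

import rclpy
from rclpy.node import Node
from ackermann_msgs.msg import AckermannDriveStamped


def predict_controls():
    # Placeholder for the learned controller; returns (steering_angle, speed).
    return 0.0, 2.0


class DriveCommandNode(Node):
    def __init__(self):
        super().__init__('drive_command')
        self.publisher = self.create_publisher(AckermannDriveStamped, '/cmd', 10)
        # Publish at 20 Hz so the actuators receive a steady command stream.
        self.timer = self.create_timer(0.05, self.on_timer)

    def on_timer(self):
        steering, speed = predict_controls()
        msg = AckermannDriveStamped()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.drive.steering_angle = steering
        msg.drive.speed = speed
        self.publisher.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(DriveCommandNode())
    rclpy.shutdown()


if __name__ == '__main__':
    main()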

1.2 PRESENT DAY SCENARIO


Integrating ROS2 and deep learning for autonomous driving is a rapidly evolving field with numerous ongoing research and development efforts.
Deep learning models are applied to object detection, segmentation, and tracking, allowing vehicles to perceive and understand their environment with improved accuracy.
ROS2 provides a flexible platform for incorporating deep learning models into the control systems of autonomous vehicles.
Many companies and research institutions actively collaborate on projects that integrate ROS2 and deep learning for autonomous driving. These collaborations aim to address challenges related to real-time processing, system robustness, and safety, with the goal of advancing state-of-the-art autonomous vehicle technology.
Challenges in this field include further optimization of deep learning models to meet real-time constraints, ensuring the safety and reliability of autonomous systems, and addressing ethical considerations in decision-making algorithms.
Opportunities for innovation and improvement are abundant, and the field is characterized by a dynamic and collaborative research ecosystem.

1.3 MOTIVATION TO DO THE PROJECT


The motivation to undertake a project that integrates ROS2 and deep learning for autonomous driving stems from a combination of our passion for technology, a desire to contribute to societal advancement, and the excitement of working in a cutting-edge
and impactful field.
Engaging in projects related to autonomous driving contributes to the advancement of
automotive technology. It allows us to be at the forefront of innovations that have the
potential to revolutionize the transportation industry. This project provides an
opportunity for interdisciplinary learning, involving robotics, computer vision, artificial
intelligence, and control systems. It allows us to acquire a diverse skill set that is highly
relevant in today's technology landscape.
Involvement in a project of this nature enhances our skills and knowledge in areas that
are in high demand. It opens career opportunities in industries at the forefront of
autonomous technology, including automotive companies, tech startups, and research
institutions.
The integration of ROS2 and deep learning encourages innovative thinking and creative problem-solving. It allows us to explore novel solutions to complex challenges, pushing the boundaries of what is currently possible.
Undertaking a project of this magnitude provides opportunities for personal and professional growth. It challenges us to learn, adapt, and develop leadership skills as we navigate the complexities of designing and implementing autonomous systems.
CHAPTER 3
PROBLEM DEFINITION AND OBJECTIVES
3.1 Problem Statement
ROS2 (Robot Operating System 2) and deep learning for autonomous driving.

3.2 Objective
Goal:
• Create an environment in ROS2 that uses camera-based obstacle detection to plan the fastest path around a circuit.
• On detecting an obstacle, the vehicle should take an obstacle-free path that is also the fastest, giving the lowest lap times.
Key Components:
• Mini-Zed camera for obstacle detection
• Brake-by-wire and steer-by-wire motors
• On-board NVIDIA Jetson for computing
• Deep learning algorithm for path planning

Outcomes:
• The ability to traverse a given track in the fastest time, fully automated
• Racing automation
• This type of technology can be used to traverse difficult terrain quickly in emergency operations

Project Impact:
• To push automation to its limits by constantly updating data and improving speed
• To potentially use this technology for search and rescue operations

CHAPTER 4
METHODOLOGY

4.1 Hardware Setup:


An FSAE electric car with an on-board NVIDIA Jetson for computing, a Mini-Zed camera for obstacle
detection, and brake-by-wire (BBW) and steer-by-wire (SBW) motors to actuate the brakes and steering.

4.2 Software Configuration:


ROS2 package installation and setup for communication with the Mini-Zed camera sensor.
Use of EUFS (the Edinburgh University Formula Student simulator).
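A hedged sketch of a ROS2 Python launch file that could bring up the EUFS simulator alongside a perception node is shown below. The package name eufs_launcher, its launch file name, and the fsae_perception package and executable are assumptions used purely for illustration.

import os

from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource
from launch_ros.actions import Node


def generate_launch_description():
    # Include the simulator's own launch file (package/file names assumed).
    eufs_launch = IncludeLaunchDescription(
        PythonLaunchDescriptionSource(
            os.path.join(get_package_share_directory('eufs_launcher'),
                         'launch', 'eufs_launcher.launch.py')))

    # Start the (hypothetical) cone perception node alongside the simulator.
    perception_node = Node(
        package='fsae_perception',
        executable='cone_perception',
        output='screen')

    return LaunchDescription([eufs_launch, perception_node])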

4.3 Algorithm Implementation:


Use of deep learning algorithms to detect the positions of cones on a circuit and then
compute the fastest path iteratively by feeding in data from each new lap of the circuit.
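The lap-by-lap refinement idea can be illustrated with the simplified sketch below. It assumes cone detections arrive as (x, y) positions labelled as left or right of the track; the nearest-neighbour pairing and the blending of successive laps are simplifications for illustration, not the project's exact algorithm.

import numpy as np


def centreline_from_cones(left_cones, right_cones):
    # Pair each left cone with its nearest right cone and take the midpoint.
    left = np.asarray(left_cones, dtype=float)
    right = np.asarray(right_cones, dtype=float)
    path = []
    for cone in left:
        nearest = right[np.argmin(np.linalg.norm(right - cone, axis=1))]
        path.append((cone + nearest) / 2.0)
    return np.array(path)


def refine_path(previous_path, new_path, weight=0.3):
    # Blend the latest lap's estimate into the running path estimate.
    if previous_path is None or len(previous_path) != len(new_path):
        return new_path
    return (1.0 - weight) * previous_path + weight * new_path


# Example: two laps of noisy cone observations refine the path estimate.
lap1 = centreline_from_cones([(0, 1.0), (2, 1.2)], [(0, -1.0), (2, -0.8)])
lap2 = centreline_from_cones([(0, 1.1), (2, 1.0)], [(0, -0.9), (2, -1.0)])
path = refine_path(None, lap1)
path = refine_path(path, lap2)
print(path)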

4.4 Testing and Validation:


Daily tests in smaller areas to observe how the car reacts to small, simple circuits, then
gradually making the circuits more complex as the car learns how to plan the most efficient
and least time-consuming path.

CHAPTER 5
CONTRIBUTION OF EACH STUDENT
Keshav Varma – 210929054
Worked on the ideation phase of developing the model with the Mini-Zed camera.

Shruthika K – 210929236

Documented the progress of the project, enabling better evaluation of the results achieved, and was
involved in preparing the report with the necessary contents and references.

Sumanth Reddy - 210929017

Configured the parameters of the selected algorithm to obtain appropriate results.

CHAPTER 6
RESULTS AND DISCUSSION
6.1 Fault Analysis
Throughout the project, numerous obstacles arose at each level, and new approaches to problem
solving were required. Below we briefly address the challenges faced:

6.1.1 Concept Revision


Issues with the implementation of the brake-by-wire motors were the most significant. The motors must
be calibrated very precisely to deliver a specific torque output, or the car could crash.
These motors must be checked, and a better method of actuation should be put in place.

6.2 Experiment Results


The car exhibits improved navigation after the obstacle detection and avoidance system with the
Mini-Zed camera was put into practice. It successfully detects obstacles in its path and
autonomously maneuvers around them, ensuring a fast trajectory.

CHAPTER 7
CONCLUSION AND FUTURE SCOPE
7.1 Conclusion
In conclusion, the car's navigational capabilities have been enhanced by the effective implementation
of obstacle detection and avoidance using the Mini-Zed camera. The method enables the car to recognize
and avoid race cones on its own, resulting in effective movement toward a designated objective.
Future prospects include optimizing the obstacle avoidance algorithm, investigating multi-sensor
integration to improve perception, and applying machine learning methods to enable adaptive
navigation in varied situations.

CHAPTER 8
REFERENCES
 https://www.mdpi.com/2032-6653/14/7/163
 https://gitlab.com/eufs/eufs_sim
