ROS2 Project
Submitted by
Dr. ASHA C S
Associate Professor
Department of Mechatronics
MIT Manipal
BACHELOR OF TECHNOLOGY
IN
MECHATRONICS
DEPARTMENT OF MECHATRONICS
MANIPAL INSTITUTE OF TECHNOLOGY
(A Constituent of Manipal Academy of Higher Education)
MANIPAL - 576104, KARNATAKA, INDIA
NOVEMBER 2023
ABSTRACT
This report explores the utilization of Robot Operating System 2 (ROS2) as a robust
platform for simulating a Formula SAE (FSAE) driverless car. The objective is to create a
realistic and dynamic environment that mimics real-world scenarios, enabling the
development and testing of autonomous driving algorithms. The simulation environment
incorporates various sensors commonly found in autonomous vehicles, such as lidar,
camera, and IMU, to provide comprehensive data for algorithm development.
ROS2, with its enhanced features over its predecessor, ROS, offers a flexible and modular
framework for designing complex robotic systems. The report delves into the integration
of ROS2 components to model the FSAE driverless car's perception, control, and planning
systems. This integration facilitates the seamless interaction between simulated sensors,
actuators, and the autonomous control algorithms.
TABLE OF CONTENTS
Abstract
Chapter 1 Introduction
1.1 Objective
Chapter 2 Methodology
2.1 Hardware Setup
2.2 Software Configuration
2.3 Algorithm Implementation
2.4 Testing and Validation
Chapter 3 Contribution of Each Student
Chapter 4 Results and Discussion
4.1 Fault Analysis
Chapter 5 Conclusion and Future Scope
5.1 Conclusion
References
CHAPTER 1
INTRODUCTION
Autonomous driving systems rely on a comprehensive understanding of the vehicle's
surroundings, efficient decision-making algorithms, and precise control mechanisms.
ROS2, as a flexible and modular robotic middleware, provides a suitable framework for
developing complex autonomous systems. Deep learning, with its ability to extract
intricate patterns from large datasets, enhances perception and decision-making
capabilities crucial for autonomous vehicles.
Deep learning techniques, such as Convolutional Neural Networks (CNNs) and LiDAR-based point cloud processing, play a pivotal role in perception tasks. ROS2 facilitates the seamless integration of these deep learning models to process sensor data, enabling the vehicle to recognize and interpret objects, pedestrians, and other dynamic elements in its environment.
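The LiDAR perception step described above can be illustrated with a simple Euclidean clustering pass over 2D scan points, of the kind that might run inside a ROS2 subscriber callback on a LaserScan or PointCloud2 message. This is a minimal sketch, not the project's actual pipeline; the function names and the 0.3 m gap threshold are illustrative assumptions.

```python
import math

def cluster_scan_points(points, max_gap=0.3):
    """Group consecutive 2D points (x, y) in metres into clusters
    whenever neighbouring points are closer than max_gap.
    A crude Euclidean clustering, as might run inside a ROS2
    sensor callback to find cone candidates."""
    clusters = []
    current = []
    for p in points:
        # start a new cluster when the gap to the previous point is large
        if current and math.dist(current[-1], p) > max_gap:
            clusters.append(current)
            current = []
        current.append(p)
    if current:
        clusters.append(current)
    return clusters

def cluster_centroids(clusters):
    """Return the centroid of each cluster as a candidate cone position."""
    return [
        (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
        for c in clusters
    ]
```

For example, five scan points forming two tight groups would yield two centroids, which a downstream planner could treat as two cones.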
ROS2 also supports the implementation of decision-making algorithms, such as behaviour planning and trajectory selection, which can be enriched by deep learning models.
Deep learning models are employed in control systems to enhance the vehicle's ability
to adapt to diverse driving conditions. ROS2's middleware architecture facilitates the
integration of these models into the control loop, enabling real-time adjustments of
steering, acceleration, and braking commands.
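As a toy illustration of closing the control loop described above, the sketch below maps detected obstacle positions (in the vehicle frame, x forward in metres, y positive to the left) to a steering correction. In the real system this role is played by a learned policy publishing steer-by-wire commands over ROS2; the function name, parameters, and simple proportional rule here are assumptions for illustration only.

```python
def steering_command(obstacles, max_steer=0.4, clear_dist=3.0):
    """Compute a steering angle in radians (positive = left) that
    turns away from the nearest obstacle ahead of the vehicle.
    A hand-written placeholder for a learned control policy."""
    # only obstacles in front of the vehicle matter
    ahead = [(x, y) for x, y in obstacles if x > 0.0]
    if not ahead:
        return 0.0
    # nearest obstacle by straight-line distance
    nx, ny = min(ahead, key=lambda p: p[0] ** 2 + p[1] ** 2)
    dist = (nx ** 2 + ny ** 2) ** 0.5
    if dist >= clear_dist:
        return 0.0  # far enough away: hold course
    # steer opposite the obstacle's lateral offset, harder when closer
    direction = -1.0 if ny > 0 else 1.0
    return direction * max_steer * (1.0 - dist / clear_dist)
```

In a ROS2 node this value would be published each control cycle, e.g. as the steering field of a drive command message, alongside acceleration and braking outputs.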
1.1 Objective
Goal:
• Create an environment in ROS, using camera-based obstacle detection, to plan the fastest path around a circuit
• On detecting an obstacle, the robot should switch to an obstacle-free path that is also the fastest, yielding the lowest lap times
Key Components:
• Mini-Zed camera for obstacle detection
• Brake-by-wire and steer-by-wire motors
• On-board Nvidia Jetson for computing
• Deep learning algorithm for path planning
Outcomes:
• Fully automated traversal of a given track in the fastest time
• Racing automation
• This kind of technology can be used to traverse difficult terrain quickly in emergency operations
Project Impact:
• To push automation to its limits by constantly updating data and improving speed
• To potentially use this technology for search and rescue operations
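The "fastest obstacle-free path" goal above can be illustrated, under the simplifying assumption of a uniform-cost occupancy grid, with a breadth-first search planner: on such a grid, BFS returns a shortest obstacle-free route. This is a stand-in sketch, not the report's deep-learning planner; the grid representation and function name are assumptions.

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search over an occupancy grid (0 = free cell,
    1 = obstacle). Returns the list of (row, col) cells on a shortest
    obstacle-free path from start to goal, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # also serves as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # walk predecessor links back to start, then reverse
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None
```

When detection reports a new cone, marking its cell as occupied and replanning gives the obstacle-free route; a learned planner would additionally weight cells by achievable speed rather than treating all moves as equal cost.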
CHAPTER 2
METHODOLOGY
CHAPTER 3
CONTRIBUTION OF EACH STUDENT
Keshav Varma – 2109292054
Worked on the ideation phase of developing the model with the Mini-Zed camera.
Shruthika K – 210929236
Documented the project's progress, enabling better evaluation of the results achieved, and helped prepare the report with the necessary contents and references.
CHAPTER 4
RESULTS AND DISCUSSION
4.1 Fault Analysis
Throughout the project, numerous obstacles arose at every stage, each requiring a new approach to problem solving.
CHAPTER 5
CONCLUSION AND FUTURE SCOPE
5.1 Conclusion
In conclusion, the car's navigational capabilities have been enhanced by the effective integration of obstacle detection using the Mini-Zed camera and an accompanying avoidance system. The method enables the car to recognize and avoid race cones on its own, resulting in effective movement toward a designated objective. Future prospects include optimizing the obstacle avoidance algorithm, investigating multi-sensor integration to improve perception, and applying machine learning methods to enable adaptive navigation in varied situations.
REFERENCES
https://www.mdpi.com/2032-6653/14/7/163
https://gitlab.com/eufs/eufs_sim