
Capstone Project:

Simultaneous Localization and Mapping (SLAM)

SAIF ALSAQQA

Contents
Table of Figures
Abstract
1. Introduction
2. Literature Review
3. Theoretical Background
4. Methodology
   Section 1: SLAM using MATLAB
   Section 2: Active SLAM Using TurtleBot3 (Burger)
   Section 3: SLAM Using ROSMASTER X1 Robot
5. Virtual Machine
6. Used Robot
7. Simulation & Results
   Section 1: SLAM using MATLAB environment
   Section 2: Active SLAM using ROS Environment (TurtleBot)
   Section 3: SLAM using ROS Environment (ROSMASTER X1 Robot)
8. Project Timeline
9. Conclusion and Future Work
10. References

Table of Figures
Figure 1. Communication with MATLAB
Figure 2. Communication with TurtleBot3
Figure 3. Communication with ROSMASTER X1
Figure 4. VMware Interface
Figure 5. RVIZ Interface
Figure 6. GAZEBO Interface
Figure 7. ROSMASTER X1 Robot
Figure 8. Monte Carlo Localization Results Using MATLAB
Figure 9. SLAM with Loop Closure Results Using MATLAB
Figure 10. GitHub.com Interface
Figure 11. Package/Tool Installation
Figure 12. Check Package Installation
Figure 13. Open GAZEBO Interface Using Linux Commands
Figure 14. Open RVIZ to Visualize the Map
Figure 15. Building the Map Using Keyboard
Figure 16. RQT Graph of Map Built Using Keyboard
Figure 17. Building the Map Using Navigation Tool
Figure 18. RQT Graph of Map Built Using Navigation Tool
Figure 19. Building the Map Using Active SLAM Algorithm
Figure 20. Map Using Active SLAM
Figure 21. RQT Graph of Map Built Using Active SLAM Algorithm
Figure 22. ROSMASTER X1 LiDAR Data Visualization
Figure 23. LiDAR Data on the Console
Figure 24. RQT Graph of Map Construction of ROSMASTER X1
Figure 25. Gmapping Structure
Figure 26. ROSMASTER X1 Map Using Gmapping Algorithm (High Speed)
Figure 27. ROSMASTER X1 Map Using Gmapping Algorithm (Low Speed)
Figure 28. RQT Graph for Gmapping Algorithm
Figure 29. ROSMASTER X1 Map Using Hector Algorithm (HTU Building)
Figure 30. The Environment
Figure 31. ROSMASTER X1 Map Using Hector Algorithm (Home)
Figure 32. RQT Graph for Hector Algorithm
Figure 33. Project Timeline

Abstract
Simultaneous Localization and Mapping (SLAM) is crucial for enabling
autonomous robots to navigate and understand unknown environments. This
study explores the application of SLAM through a comprehensive three-stage
simulation and evaluation process, addressing the challenges and potential of
SLAM in real-world applications. The first stage utilizes MATLAB's SLAM-Toolbox to
establish a foundational understanding and simulate a mobile robot in a Gazebo
environment via the Robot Operating System (ROS). The second stage advances
the simulations to a Linux virtual machine using the Gazebo environment and
mobile robot, focusing on evaluating different SLAM packages. The final stage
involves real-world testing with the ROSMASTER X1 robot. Results demonstrate the
effectiveness of the ROSMASTER X1 in replicating conditions necessary for
autonomous navigation. However, the study also highlights the need for more
powerful computational resources to scale SLAM applications to full-scale
autonomous vehicles. Future work should prioritize algorithm optimization and
hardware advancements to enhance scalability and performance, ensuring
SLAM's broader applicability in autonomous systems.

1. Introduction
Over the past few years, unprecedented progress in robotics has made it possible
to create intelligent, self-governing systems that can interact and navigate their
environment. Simultaneous Localization and Mapping (SLAM), a basic technique
that enables robots to autonomously develop maps of their environments while
simultaneously determining their positions within these maps, is a crucial element
in realizing such capabilities. This technology has enormous potential for a wide
range of uses, including autonomous cars and robotic exploration.

This investigation outlines a thorough plan to study and apply SLAM techniques
on a small-scale robotic platform in preparation for their eventual use on larger
vehicles, with particular attention to how these techniques could eventually be
integrated with automotive systems. In this work, detailed simulations of SLAM
algorithms are conducted under different scenarios.

The concept of SLAM emerged as a pivotal solution to the challenge of enabling
robots to autonomously navigate and map unknown environments. SLAM
encompasses the simultaneous processes of mapping the environment and
determining the robot's position within that map in real-time, without reliance on
external references. This capability is indispensable for a diverse range of
applications, including search and rescue operations, surveillance, and,
prominently, autonomous transportation. The urgent need for increased
autonomy motivates the integration of SLAM technologies into robotic
systems, especially in situations where a priori knowledge is limited or
absent. The capacity to create precise, real-time maps while accurately localizing
the vehicle has significant implications for overall effectiveness, safety, and
efficiency in the context of autonomous vehicles. This study attempts to
investigate SLAM approaches methodically to meet these requirements.

This research project aims to accomplish two main goals: first, to implement and
assess SLAM techniques on a small-scale robotic platform, creating a solid basis
for further applications; and second, to lay the framework for applying SLAM
technologies to larger vehicles, with a focus on potential integration with
automotive systems.

This project is divided into three basic stages. In the first stage, the simulation is
conducted in MATLAB using the SLAM Toolbox to understand the main concept
of SLAM; in this stage the TurtleBot 3 robot is simulated in a Gazebo environment
through the Robot Operating System (ROS). In the second stage, the simulation is
conducted on a virtual machine running Linux and a Gazebo environment with
the TurtleBot 3 Burger model, using only the virtual machine, Gazebo, and several
SLAM packages. In the last stage, the simulation is conducted on the ROSMASTER
X1, which is a physical robot, so the experiments take place in the real world.

2. Literature Review

Shahrzal Saat et al. [1] investigated the application of a LiDAR sensor to create 2D
maps of unknown environments while determining the robot's location based on
detected landmarks. The project was tested and verified in a multi-parameter
curtain chamber using the Robot Operating System (ROS), with SLAM implemented
to provide localization estimates. Its main outcome is an experimental evaluation
of laser (LiDAR) based Simultaneous Localization and Mapping (SLAM) in terms of
both its mapping and its localization capability.

Bashar Alsadik et al. [2] provided a clear and simple explanation of SLAM
technology for the scientific community as well as non-specialist readers from a
geomatics point of view, avoiding the complex algorithmic details behind the
presented techniques. Their overview of SLAM shows the relationship between its
components and different phases, such as the front-end and back-end and their
relationship to the SLAM model. The paper also explains the main mathematical
techniques for filtering and pose graph optimization using either visual or LiDAR
SLAM and summarizes the contribution of deep learning to the SLAM problem. In
addition, terms such as visual odometry, loop closure, 2D SLAM, and histogram
optimization are briefly explained with illustrations.

Cyrill Stachniss et al. [3] presented a comprehensive introduction to the
simultaneous localization and mapping (SLAM) problem. The core perception
problem arises when a robot navigates an unknown environment: while moving,
it seeks a map of its surroundings and, at the same time, wishes to localize itself
within that map, so it must localize while concurrently building the map. SLAM
serves both purposes. The paper reviews the three main models from which most
published SLAM approaches are derived: (1) the extended Kalman filter (EKF),
(2) particle filtering, and (3) graph optimization.

Zhen An et al. [4] applied an adaptive Monte Carlo localization algorithm for
mobile robot pose estimation, with a Bayesian algorithm used to build a grid map.
The ROS platform was used to implement SLAM on the mobile robot, and the
experimental results demonstrated its advantages over conventional platforms.
The robot's path was computed by a path planner; the robot then followed that
path while localizing itself, and the map was built from laser scans of the actual
environment on the fifth floor of Northeastern University's new mechanical building.

Arbnor Pajaziti et al. [5] described a ROS-based control system for the TurtleBot
robot for mapping and navigation in indoor environments, demonstrating
TurtleBot's navigation in a self-created environment. The mapping process is done
using the GMapping algorithm and the localization process using the AMCL
package, while the built-in ROS functions are used to perform navigation. The
result they obtained is the realization of mapping, localization, and navigation for
TurtleBot in new and unknown environments: by simulating TurtleBot in Gazebo
with a simulated laser, a map of an environment was built with the GMapping
algorithm.

In contrast, this investigation focuses on applying the Simultaneous Localization
and Mapping (SLAM) algorithm in three different stages. The first stage is
conducted using the MATLAB toolboxes, the ROS environment, and the Gazebo
simulator to visualize the robot's movement; commands are sent from the virtual
machine to MATLAB through ROS. In the second stage, the simulation is
conducted using only the ROS environment, the Gazebo simulator, and SLAM
packages. In the third stage, the simulation is conducted in the real world using
the ROSMASTER X1 robot in the ROS environment via a virtual machine. The
ROSMASTER X1 robot can create a map of its environment and identify its
orientation and pose. In addition, navigation can be implemented by drawing a
path for the robot to move from one position to another while simultaneously
building a map of the unknown indoor environment.

3. Theoretical Background

SLAM, or Simultaneous Localization and Mapping, is a crucial concept in robotics,
enabling robots to build a map of their environment while simultaneously keeping
track of their location within that map.

Imagine a robot exploring an unknown environment. It needs to answer two
fundamental questions: Where am I in the environment? And what does the
environment look like? In other words, the robot is asking both about its location
in an unknown environment and about the map of that environment. SLAM
tackles both questions concurrently: it uses sensor data from cameras, LiDARs, or
other sensors to build a map and estimate the robot's pose (position and
orientation) within that map.

Localization:

Localization is the process of determining the robot's position and orientation
within a known environment. It involves the use of sensors to perceive the
environment and algorithms to estimate the robot's pose (position and
orientation). Robot localization is critical in SLAM because of its impact on
observation accuracy and control actions. Pose estimation is probabilistic and
prone to significant variations without sensor fusion and learning techniques.
Combining odometry from sensors like IMUs and encoders with external
measurements from LiDAR or cameras yields a probabilistic pose estimate. Visual
odometry, enhanced by deep learning techniques, also plays a role in this
estimation process.

Localization challenges arise from communication and data processing delays,
making sensor data quickly obsolete. Synchronization techniques and frame
transformations are employed to maintain data accuracy. Absolute pose
estimation defines the robot's position in a global frame during map building, while
relative pose estimation, which is less prone to cumulative errors, determines the
pose relative to previous landmarks or poses. This approach mitigates the impact
of significant map changes due to loop closure.

Moreover, there are some localization methods such as:

- Odometry: Uses data from motion sensors such as wheel encoders or IMUs
to estimate the robot's change in position over time.
- Dead Reckoning: A method like odometry but integrates sensor data over
time, which can accumulate errors.
- Beacon-based Localization: Utilizes known positions of beacons (e.g., GPS,
RFID) to triangulate the robot's position.
- Vision-based Localization (Visual localization): Employs cameras and
image processing techniques to identify landmarks and features within the
environment.
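To make the odometry method above concrete, the following is a short MATLAB
sketch of a differential-drive dead-reckoning update from wheel encoder
increments. The wheel parameters and encoder values are illustrative assumptions,
not the parameters of the robots used in this project.

% odometry_sketch.m - dead-reckoning pose from wheel encoder increments.
% Illustrative parameters only (not the TurtleBot3 / ROSMASTER X1 values).
ticks_per_rev = 4096;              % encoder ticks per wheel revolution
wheel_radius  = 0.033;             % wheel radius [m]
wheel_base    = 0.160;             % distance between the two wheels [m]

% Example encoder increments for a few time steps: [left right]
dticks = [120 120; 120 140; 100 160];

pose = [0; 0; 0];                  % [x; y; theta] in the odometry frame
for k = 1:size(dticks, 1)
    dl = 2*pi*wheel_radius * dticks(k,1) / ticks_per_rev;   % left wheel travel [m]
    dr = 2*pi*wheel_radius * dticks(k,2) / ticks_per_rev;   % right wheel travel [m]
    ds     = (dr + dl) / 2;                 % travel of the robot centre [m]
    dtheta = (dr - dl) / wheel_base;        % change in heading [rad]
    % Integrate assuming a small arc during the step
    pose(1) = pose(1) + ds * cos(pose(3) + dtheta/2);
    pose(2) = pose(2) + ds * sin(pose(3) + dtheta/2);
    pose(3) = pose(3) + dtheta;
end
disp(pose')                        % accumulated (drifting) pose estimate

Because each step only adds increments, any error in the wheel model or slip
accumulates over time, which is exactly the drift problem described above.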

Mapping:

Mapping is the process of creating a representation (map) of the environment
from sensor data. This map can be a metric map, representing exact distances
and layouts, or a topological map, representing the connectivity of different
places. There are two primary categories of maps: metric and topological.

1- Metric maps: provide a geometric representation of an environment,
detailing the boundaries, edges, areas, and configuration spaces,
analogous to political maps. The most common metric maps in SLAM are
occupancy grids (OGs), which divide the environment into grids, each
assigned a probability of being occupied by an obstacle or free for
navigation. These grids can be 2D or 3D depending on the robot's
navigation requirements. Global planners for robot motion over long
distances typically use OGs represented as graphs, employing search
algorithms like A* and Dijkstra.
2- Topological maps: in contrast, emphasize distinct landmarks and physical
features, akin to physical maps with natural elements. Feature maps, a type
of topological map, provide detailed information about navigable areas,
assisting robots in collision-free navigation. These maps are created and
updated using depth cameras and laser scanners as the robot explores,
continually appending new elements and correcting previous errors
through loop closure.

Simultaneous localization and mapping (SLAM):

Localization and mapping can be combined into simultaneous localization and
mapping (SLAM), which requires the robot to keep track of its position while
simultaneously updating its map of the environment. The main challenge of SLAM
is this dependency loop: accurate localization requires a good map, while a
good map requires accurate localization.

State Estimation and Uncertainty

SLAM can be framed as a state estimation problem. The state vector typically
encompasses the robot's pose and the map features (landmarks). Sensor
measurements establish a relationship between the robot's state and the
observations. The core challenge lies in estimating both the robot's pose and the
map features given these inherently noisy sensor measurements. Moreover, SLAM
draws upon various mathematical tools to achieve this estimation:

- Probability Theory: Sensor measurement and robot motion uncertainties are
modeled probabilistically using probability distributions.
- Bayesian Estimation: Techniques like Kalman Filters (EKF, UKF) or particle
filters are employed to update the robot's pose and map features based
on the acquired sensor measurements.

- Optimization Techniques: For large-scale problems, graph-based
optimization approaches are employed to refine the robot's trajectory and
ensure map consistency.
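As a minimal illustration of the recursive Bayesian estimation idea, the following
MATLAB sketch fuses a one-dimensional odometry prediction with noisy range
measurements to a wall at a known position using a Kalman filter. All noise values
and measurements below are assumed purely for illustration.

% kalman_1d_sketch.m - recursive Bayesian estimation in one dimension.
% The robot's position x is predicted from odometry and corrected by a
% range measurement to a wall at a known location. Values are assumed.
x_wall = 5.0;                 % known landmark (wall) position [m]
x = 0.0;   P = 0.5^2;         % initial belief: mean and variance
Q = 0.05^2;                   % odometry (process) noise variance per step
R = 0.10^2;                   % range sensor noise variance

u = 0.2;                      % commanded forward motion per step [m]
z = [4.82 4.61 4.38 4.19];    % measured ranges to the wall (example data)

for k = 1:numel(z)
    % Prediction step: propagate the belief through the motion model
    x = x + u;
    P = P + Q;
    % Correction step: measurement model z = x_wall - x, so H = -1
    H = -1;
    z_hat = x_wall - x;
    K = P * H / (H * P * H + R);      % Kalman gain
    x = x + K * (z(k) - z_hat);
    P = (1 - K * H) * P;
end
fprintf('estimated position %.2f m, std %.3f m\n', x, sqrt(P));

The same predict-correct structure generalizes to the multidimensional EKF and
UKF mentioned above, with the motion and measurement models linearized
around the current estimate.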

SLAM Algorithms:

SLAM encompasses a variety of algorithms that process data sequentially in a
modular framework, allowing for flexibility and iterative improvements. Given the
stochastic nature of SLAM algorithms and their inconsistent performance across
different environments, they are often unsuitable for deployment in uncontrolled
critical settings. The complexity of diverse operational environments means that
SLAM systems effective in one context, such as a warehouse, may not perform
equally well in another, like a shopping mall.

Robot Localization Algorithms

Robot localization involves determining the robot's pose, which is a vector defined
in various reference frames. Measurements from sensors like cameras and
rangefinders are converted from their local frames to the robot's frame for
accurate localization. The absolute pose of the robot and landmarks is
determined relative to a global reference frame. Multiple sensors, data
association, and correction techniques are employed to achieve precise
localization. Key methods include:

• Odometry Estimation: Utilizes motor encoders and wheel dimensions to
compute the robot's pose through forward kinematics, though it is prone to
inaccuracies from wheel slips and hardware errors. IMU readings,
integrated at each timestep, are combined using Kalman filters to reduce
drift.
• Visual Odometry: Employs onboard cameras to map changes in feature
pixels in 3D images to corresponding frame transformations for the robot's
pose, involving feature detection, data association, deep learning, and
optical flow.

Localization is a recursive process, updating poses based on evolving map
accuracy. Common probabilistic techniques include Markov Localization,
Kalman Localization, and Monte Carlo Localization.

• Markov Localization: Uses a probability distribution for the robot's pose,
updated with sensor and odometry information, but is memory- and time-
intensive.
• Kalman Localization: A more efficient recursive Bayes estimation that
models pose probability as a Gaussian distribution, updating mean and
variance iteratively.
• Monte Carlo Localization: Employs a particle filter approach for recursive
pose determination, enhanced by importance sampling and particle filter
improvements.
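The Monte Carlo (particle filter) idea can be sketched in the same one-dimensional
setting used for the Kalman example above. The following MATLAB snippet is
illustrative only, with assumed noise values; it is not the implementation used later
in this project.

% mcl_sketch.m - particle-filter localization in one dimension (illustrative).
% N particles represent the belief over the robot position x; each motion
% step spreads them and each range measurement to a wall re-weights them.
rng(1);
x_wall = 5.0;                        % known wall position [m]
N = 500;
particles = 10 * rand(N, 1) - 2.5;   % initial belief: uniform over [-2.5, 7.5]
sigma_u = 0.05;  sigma_z = 0.10;     % assumed motion and sensor noise [m]

u = 0.2;  z = [4.82 4.61 4.38 4.19]; % same example data as above

for k = 1:numel(z)
    % Motion update: propagate every particle with noisy odometry
    particles = particles + u + sigma_u * randn(N, 1);
    % Measurement update: weight each particle by the likelihood of the range
    z_hat = x_wall - particles;
    w = exp(-0.5 * ((z(k) - z_hat) / sigma_z).^2) + 1e-12;  % avoid all-zero weights
    w = w / sum(w);
    % Resample: draw N particles with probability proportional to their weights
    c = cumsum(w);
    [~, idx] = max(rand(N, 1) <= c', [], 2);  % first bin covering each random draw
    particles = particles(idx);
end
fprintf('MCL estimate: %.2f m (std %.3f m)\n', mean(particles), std(particles));

After a few updates the particle cloud collapses around the true position, which is
the behaviour exploited by the MATLAB AMCL experiment reported in Section 7.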

Robot Mapping Algorithms

Creating and updating environment maps is computationally demanding,
requiring efficient data structures. Common techniques include:

• Inverse Sensor Model: Calculates the probability of grid cell occupancy
using LiDAR data, associating each cell with a ray, and determining if it
blocks the ray.
• Ray Tracing: Updates occupancy grid cells along each rangefinder ray,
saving computational resources.
• Maximum-Likelihood Incremental Technique: Updates maps using previous
maps, current poses, and sensing data, though it is computationally
intensive.
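A minimal MATLAB sketch of the inverse sensor model and ray tracing steps listed
above is given below. The grid resolution, log-odds increments, and the single
measurement are assumed values used only to show the update mechanics.

% occupancy_update_sketch.m - log-odds update of a grid along one LiDAR ray.
% Cells the ray passes through are marked more likely to be free; the cell
% where the ray ends is marked more likely to be occupied. Values assumed.
res = 0.05;                            % grid resolution [m/cell]
grid_logodds = zeros(200, 200);        % 10 m x 10 m map, prior p = 0.5
l_free = log(0.3/0.7);                 % update for a traversed (free) cell
l_occ  = log(0.7/0.3);                 % update for the hit (occupied) cell

robot_xy = [5.0 5.0];                  % robot position in the map frame [m]
bearing  = deg2rad(30);                % beam direction [rad]
range    = 2.4;                        % measured range [m]

% Sample points along the beam at sub-cell spacing (a simple ray trace)
s   = 0:res/2:range;
pts = robot_xy + s' * [cos(bearing) sin(bearing)];
ij  = floor(pts / res) + 1;            % world coordinates -> grid indices
ind = unique(sub2ind(size(grid_logodds), ij(:,2), ij(:,1)), 'stable');

grid_logodds(ind(1:end-1)) = grid_logodds(ind(1:end-1)) + l_free;  % free cells
grid_logodds(ind(end))     = grid_logodds(ind(end))     + l_occ;   % hit cell

p_occ = 1 - 1 ./ (1 + exp(grid_logodds));   % back to probabilities if needed

Working in log-odds keeps the per-cell update a simple addition, which is why the
ray tracing approach above is cheap enough to run for every beam of every scan.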

Miscellaneous Algorithms

• Kalman and Extended Kalman Filters: Used for sensor fusion in linear and
non-linear systems.
• Particle Filter: A non-parametric state estimation technique using multiple
particle hypotheses.
• Feature Extraction Algorithms: SIFT, SURF, ORB, and BRIEF for visual SLAM.
• Deep Learning Algorithms: For object detection, tracking, and recognition
to identify obstacles and predict collisions.
• Collision Detection: Preemptive motion analysis to avoid dynamic
obstacles.
• Motion Prediction: Uses object classification to anticipate and prevent
collisions.

Active SLAM

Active SLAM extends traditional SLAM by incorporating the decision-making
process for the robot's movements to improve the quality of the map and
localization. The robot actively explores the environment, choosing actions that
are expected to reduce uncertainty in its map and localization estimates.

Techniques in Active SLAM

1. Frontier-based Exploration: Identifies the boundary between known and
unknown regions (frontiers) and directs the robot towards them to expand
the map.
2. Information-Theoretic Approaches: Uses measures like entropy to quantify
uncertainty and select actions that maximize information gain.
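The following MATLAB sketch illustrates frontier detection on a toy occupancy
grid. The map contents, the cell coding (0 free, 1 occupied, 0.5 unknown), and the
nearest-frontier goal selection are assumptions used only to show the idea, not the
exploration strategy of the package used later in this project.

% frontier_sketch.m - find frontier cells in an occupancy grid (illustrative).
% A frontier cell is a free cell with at least one unknown neighbour; the
% robot is then sent towards the nearest one.
map = 0.5 * ones(60, 60);              % everything unknown to begin with
map(20:40, 20:40) = 0;                 % a block of explored free space
map(20:40, 20)    = 1;                 % one explored wall on its left edge

free    = (map == 0);
unknown = (map == 0.5);
% A free cell is a frontier if any 4-connected neighbour is unknown
shift = @(A, dr, dc) circshift(A, [dr dc]);
unknown_nb = shift(unknown,1,0) | shift(unknown,-1,0) | ...
             shift(unknown,0,1) | shift(unknown,0,-1);
frontier = free & unknown_nb;

% Pick the frontier cell closest to the robot as the next exploration goal
[fr, fc] = find(frontier);
robot_rc = [30 30];
[~, i] = min(hypot(fr - robot_rc(1), fc - robot_rc(2)));
goal_rc = [fr(i) fc(i)];
fprintf('next exploration goal (row, col): (%d, %d)\n', goal_rc);

An information-theoretic variant would instead score candidate goals by the
expected reduction in map entropy rather than by distance alone.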

SLAM is a collection of algorithms used in robotics for navigation and mapping.
SLAM works on data from sensor fusion, which may combine the following
sensors:

- Light Detection and Ranging (LiDAR): uses laser light to measure
distances and create a 3D representation of the environment.
- Camera: gathers visual data so that the robot can identify features and
landmarks in images.
- Inertial Measurement Unit (IMU): gives details about the angular velocity,
acceleration, and orientation of the robot.
- Radio Detection and Ranging (RADAR): measures an object's distance
using radio waves.

4. Methodology

As mentioned in the introduction, this work is divided into three main stages or
sections to implement the SLAM concept:

Section 1: SLAM using MATLAB


In the first semester report, we started to understand the main concept of SLAM
and how to implement it using the ROS environment. Moreover, a detailed
methodology has been developed by understanding the main concept of the
robot operating system (ROS) which provides a scalable and adaptable
framework for creating robotic applications, marking a model change in robotic
software design and implementation. The ROS structure has been explained in
detail in this report such as ROS master, nodes, topics, subscriber, publisher, as well
as messages and services. Moreover, that report aims to explain a comprehensive
exploration of Simultaneous Localization and Mapping (SLAM) through simulation
using MATLAB, the Gazebo Simulator, and the Robot Operating System (ROS).
Leveraging the interconnected capabilities of these platforms, a series of
simulations were conducted on the TurtleBot robot platform, encompassing
obstacle avoidance, localization, and SLAM functionalities within a simulated
environment.

Additionally, we studied the whole structure of the MATLAB simulation: how the
communication between the two operating systems works, how they are linked,
and how data is transferred between them, as shown in the schematic below. For
the implementation of SLAM, MATLAB receives the odometry and LiDAR data,
stores them as matrices, and plots the robot's surroundings. Linux is used as the
command/user interface for controlling the robot and sending commands
through ROS, which creates the nodes, topics, subscribers, publishers, and
messages. Gazebo is used for simulating the robot's operation in the environment
and for collecting the data from sensors such as the encoder and LiDAR, which
are sent back to MATLAB to perform SLAM over the IP connection established
between MATLAB and the Linux machine.

Figure 1. Communication with MATLAB



Section 2: Active SLAM Using TurtleBot3 (Burger)

This stage aims to understand the working principle of applying active SLAM using
virtual machines before applying it in the real world. The simulation is conducted
using the TurtleBot3 robot, which is simulated in the Gazebo environment and
visualized using RViz. The schematic below shows the whole structure of
implementing active SLAM in simulation using ROS, Gazebo, and RViz without
MATLAB. It shows the procedure for implementing active SLAM and how the
communication between the TurtleBot3 (Burger) and the user interface works:
only one operating system is required as a user interface, which logs into the
robot through an IP connection obtained from the TurtleBot. The TurtleBot is
therefore considered a hotspot in the Gazebo environment, and the robot can
basically be accessed through an IP address. ROS is used for creating the nodes,
topics, subscribers, publishers, and messages that carry the data coming from
sensors such as the IMU and LiDAR. The visualization is done in RViz, which plots
the 2D environment of the robot's surroundings based on the data that comes
from ROS, combining the messages from the LiDAR and IMU.

Figure 2. Communication with TurtleBot3



Section 3: SLAM Using ROSMASTER X1 Robot

This simulation will be conducted using a ROSMASTER X1 robot, visualized using
RViz; this time the experiments are carried out in the real world and all the data is
collected from real sensors. The schematic below shows the whole structure of
implementing SLAM in the real world using the ROSMASTER X1 repository. It shows
the procedure for implementing SLAM on the real robot and how the
communication between the robot and Linux works: only one operating system
is required as a user interface, which logs into the robot through an IP connection
obtained from the robot. The robot is therefore considered a hotspot and can
basically be accessed through an IP address. The software inside the Raspberry Pi
runs ROS, which creates the nodes, topics, subscribers, publishers, and messages
that carry the commands from the controller and the data from the sensors. RViz
is used to plot the real-world 2D environment of the robot's surroundings based on
the data that comes from ROS, combining the messages from the LiDAR, encoder,
and camera.

Figure 3. Communication with ROSMASTER X1



5. Virtual Machine

In this report, the focus was to implement the simulation using a virtual machine,
with Linux installed as the main operating system on the virtual machine. A virtual
machine allows you to create a separate and isolated environment on your
physical computer, and software like VirtualBox or VMware can be used to set
one up. The first step of the simulation was therefore to install VMware together
with a virtual machine image containing the ROS Noetic version. The use of virtual
machines has many advantages, such as isolation, which ensures your main
operating system is not affected by any changes or installations made for ROS,
and snapshots, which can be taken and reverted to if something goes wrong.

Here is the user interface of the VMware Workstation, which comes already
packaged with ROS Noetic, Humble, and Gazebo 11.

Figure 4. VMware Interface

After installing VMware, we started to understand the main ROS tools as ROS's core
functionality is augmented by a variety of tools that allow developers to visualize
and record real-time data. These tools are provided in packages like any other
algorithm.

Some examples of the tools used in this work include:

- Robot visualization tool (RViz) is a three-dimensional visualizer used to
visualize robots, the environments they work in, and sensor data. It is a highly
configurable tool, with many different types of visualizations and plugins.
The interface is mainly divided into a display setting area on the left, a large
display area in the middle, and a viewing angle setting area on the right.
At the top are several tools related to navigation, and at the bottom some
data related to the ROS status is displayed. In other words, ROS provides
visualization tools for sensor data (such as radar point clouds, camera
images, lidar, etc.) and status information (commonly, the status of robot
navigation, etc.). RViz can display multiple data types and visualize them by
loading the corresponding Display type.
Here is the interface of the robot visualization tool: it has a toolbar that
shows the nodes and topics, and sensor data such as position and
orientation.

Figure 5. RVIZ Interface

- Gazebo Simulator: Gazebo is free robot simulation software that provides
high-fidelity physics simulation, a complete set of sensor models, and a
very user- and program-friendly interaction method. It can accurately and
efficiently simulate robots working in complex indoor and outdoor
environments. By loading models, an operating environment similar to the
actual one is constructed, the robot is loaded into it, and the program is
run to carry out the simulation.

This is the interface of Gazebo:

Figure 6. GAZEBO Interface



6. Used robot

Figure 7. ROSMASTER X1 Robot

ROSMASTER X1 is a 4WD mobile robot with a pendulous suspension chassis based
on the Robot Operating System (ROS). It supports Raspberry Pi 4 development
boards and is equipped with high-performance hardware such as a lidar and a
depth camera, which enable robot motion control, remote-control
communication, map building and navigation, following and obstacle
avoidance, automatic driving, human body feature recognition, and other
functions. In addition, the ROSMASTER X1 also supports multiple remote-control
methods such as an app, a handle (gamepad), and a computer keyboard.

The single-line LiDAR used in the ROSMASTER X1 ROS robot is a high-speed, high-
resolution ranging technology primarily utilized in robotics for precise distance and
obstacle measurement. It operates by emitting a laser beam that, upon hitting an
obstacle, reflects back to the receiver, with the time or angle of return used to
calculate the distance. There are two main ranging principles: triangulation, which
includes the direct and oblique methods, and time-of-flight (TOF). The ROSMASTER
X1 uses Silan Technology's RPLIDAR series, which features components such as a
pulsed laser, a receiver, a signal processing unit, and a rotating mechanism to
achieve 360-degree scanning. This enables accurate, real-time environmental
mapping, critical for applications in autonomous navigation.
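As a simple numerical illustration of the time-of-flight principle (the round-trip time
below is an assumed example value, not a measurement from the RPLIDAR):

% Time-of-flight ranging: distance = c * t / 2 (out-and-back travel of the pulse).
c = 299792458;                 % speed of light [m/s]
t = 16.7e-9;                   % measured round-trip time [s] (assumed value)
d = c * t / 2;                 % ~2.5 m to the obstacle
fprintf('measured distance: %.2f m\n', d);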

The ROSMASTER X1 ROS robot uses the Astra Pro depth camera to enhance its
environmental perception and navigation capabilities. The Astra Pro utilizes
structured light technology, where an infrared (IR) laser projector emits a known
pattern onto the environment. The IR camera then captures the deformation of
this pattern caused by the surfaces and objects it hits. By analyzing these
deformations, the camera calculates the depth information of the scene,
creating a 3D map. The Astra Pro is equipped with a high-resolution RGB camera
and an IR depth sensor, allowing it to capture both color and depth data
simultaneously. This depth of information is crucial for tasks such as obstacle
detection, object recognition, and 3D mapping, enabling the robot to navigate
and interact with its environment more effectively and autonomously.

7. Simulation & Results

The simulation has been divided into three main sections, the first one was using
TurtleBot, MATLAB, Virtual Machine, ROS, and Gazebo, the second one was using
TurtleBot, RVIZ, Virtual Machine, ROS, and Gazebo, and the third one was using
ROSMASTER X1 Robot, RVIZ, Virtual Machine, ROS, and Gazebo.

Section 1: SLAM using MATLAB environment:


This simulation has been divided into two parts: in the first, the robot is provided
with the map and then localizes itself within that map, while in the second, the
robot simultaneously builds the map and localizes its position within it.

1- Localization when the map is given:

In the first simulation, the TurtleBot robot was used together with Monte Carlo
Localization (MCL), a particle filter-based algorithm that estimates a robot's pose
within a known map, leveraging motion and sensor data.
It begins with an initial belief about the robot's pose, represented by particles
distributed according to this belief. As the robot moves, particles propagate
according to its motion model. When new sensor readings are received, each
particle evaluates its accuracy by calculating the likelihood of such readings at its
current pose. The algorithm resamples particles to bias them towards higher
accuracy. This process iterates, with particles converging to the robot's true pose.
Adaptive Monte Carlo Localization (AMCL), a variant of MCL, dynamically adjusts
the number of particles based on KL distance to ensure a close resemblance to
the true robot state distribution. In practical implementation, the algorithm
updates with laser scan and odometry data, computes the robot's pose, updates
the estimated pose and covariance, and drives the robot to the next pose. Initially,
particles are uniformly distributed, but after several updates (1, 8, and 60), they
converge to areas with higher likelihood, ultimately aligning closely with the robot's
true pose and the map outlines.
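As a condensed illustration of this procedure, the sketch below follows the
interface of MATLAB's monteCarloLocalization object from the Navigation
Toolbox. The map, the noise settings left at their defaults, and the getScanAndOdom
helper that supplies the laser scan and odometry pose are placeholders, not the
exact configuration used in this simulation.

% amcl_sketch.m - condensed use of MATLAB's monteCarloLocalization object.
% 'map' is assumed to be a known occupancyMap of the environment, and
% getScanAndOdom() is a placeholder for the ROS/Gazebo data pipeline.
sensorModel     = likelihoodFieldSensorModel;
sensorModel.Map = map;                       % the known occupancy map

amcl = monteCarloLocalization;
amcl.UseLidarScan       = true;              % pass lidarScan objects directly
amcl.MotionModel        = odometryMotionModel;
amcl.SensorModel        = sensorModel;
amcl.ParticleLimits     = [500 5000];        % adaptive (KLD-based) particle count
amcl.GlobalLocalization = true;              % start from a uniform particle spread

numUpdates = 60;
for k = 1:numUpdates
    [scan, odomPose] = getScanAndOdom();     % placeholder data source
    [isUpdated, estPose, estCov] = amcl(odomPose, scan);
    if isUpdated
        fprintf('update %d: pose = [%.2f %.2f %.2f]\n', k, estPose);
    end
end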

Figure 8. Monte Carlo Localization Results Using MATLAB



2- SLAM and Loop Closure:

In the second simulation, we utilized the lidarSLAM (Lidar Simultaneous
Localization and Mapping) class to process the Lidar data. This SLAM algorithm
takes lidar scans and attaches them to nodes in an underlying pose graph. It then
correlates these scans using scan matching techniques. Additionally, the
algorithm searches for loop closures (instances where scans overlap previously
mapped regions) and optimizes the node poses in the pose graph to enhance
the map's accuracy.

In our implementation, we used the addScan method to sequentially add each
scan to the lidarSLAM object within a loop. We set the loop closure threshold to
205 and examined how varying the loop closure search radius affected the
algorithm's performance. By adjusting these parameters, we could study their
impact on the detection of loop closures and the optimization of the pose graph,
which are crucial for accurate mapping and localization.
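A condensed MATLAB sketch of this loop is shown below. The loop closure
threshold of 205 matches the value used in this work, while the map resolution,
maximum lidar range, the search radius shown, and the scans cell array (a set of
lidarScan objects) are placeholder assumptions.

% lidar_slam_sketch.m - pose-graph SLAM with MATLAB's lidarSLAM class.
% 'scans' is assumed to be a cell array of lidarScan objects from the robot.
maxLidarRange = 8;                     % [m] (assumed)
mapResolution = 20;                    % cells per meter (assumed)

slamAlg = lidarSLAM(mapResolution, maxLidarRange);
slamAlg.LoopClosureThreshold    = 205; % value used in this work
slamAlg.LoopClosureSearchRadius = 8;   % one of the tested radii (assumed)

for i = 1:numel(scans)
    addScan(slamAlg, scans{i});        % scan matching + loop closure search
end

[scansOut, optimizedPoses] = scansAndPoses(slamAlg);
map = buildMap(scansOut, optimizedPoses, mapResolution, maxLidarRange);
show(slamAlg);                         % trajectory, scans, and loop closures

Increasing the search radius lets the algorithm consider more candidate loop
closures at the cost of extra scan matching, which is the trade-off studied here.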

Figure 9. SLAM with Loop Closure Results Using MATLAB



Section 2: Active SLAM using ROS Environment (TurtleBot):


❖ Methodology of this simulation: Active SLAM using TurtleBot3 robot

In this simulation, we implemented an active SLAM algorithm: a mapping
method that allows a robot to build a map and localize itself on that map at the
same time, while also incorporating the decision-making process for the robot's
movements to improve the map quality and localization. The robot actively
explores the environment, choosing actions that are expected to reduce
uncertainty in its map and localization estimates.

This simulation has been conducted using a ROS environment along with the
Gazebo simulator. It is an implementation of an active SLAM algorithm that
autonomously navigates to new goal locations and explores the area to form a
map of the environment.

To start the simulation, some prerequisite tools and packages need to be installed
from the ROS Wiki website. These prerequisites are turtlebot3_simulations,
turtlebot3_msgs, turtlebot3, gazebo_ros, and slam_gmapping. Please refer to the
ROS Wiki for further installation instructions on setting up ROS to use the Gazebo
TurtleBot3 simulator for navigation.

After installing the prerequisite tools and packages, the following steps complete
the simulation setup. Note: the commands shown can be entered as written in the
terminal.

- First, you need to open the virtual machine and then open the terminal.
- Second, export the IP address and the robot type using the following two
commands. Note: you need to enter the IP address that is displayed on
your virtual machine.
export ROS_IP=192.168.140.128
export TURTLEBOT3_MODEL=burger
- Third, check whether the IP address and the robot type have been exported
successfully using the following commands:
echo $ROS_IP
echo $TURTLEBOT3_MODEL

- Fourth, set up a workspace by creating a folder named catkin_ws
using the command mkdir catkin_ws, then enter this folder using
cd catkin_ws, then create the folder "src" inside the catkin workspace using
mkdir src.
- Fifth, initialize the workspace using the command catkin_init_workspace
from the command line in the folder ~/catkin_ws.
- Sixth, you need to enter the src folder and download all the prerequisite
tools and packages in addition to the active SLAM algorithm. You should get
the link from github.com and use the following command to install the
packages and tools: git clone https://github.com/github/training-kit.git

For example, here is how to install the active SLAM package:

Go to github.com and copy the repository link:

Figure 10. GitHub.com Interface

Then go to your terminal, open the src folder, and install the package using the
above-mentioned command.

The package can be installed as follows:

Figure 11. Package/Tool Installation

Check whether the package is installed by opening the src folder and listing its contents:

Figure 12. Check Package Installation

- The seventh step is to run the command catkin_make from the
~/catkin_ws location. It is a convenient tool for building code in a catkin
workspace. Note: you will notice two new folders in the root of your
catkin workspace: the build and devel folders.
• If you run the catkin_make command and it does not work, delete the
CMakeLists.txt file and run it again.

- Eighth, source the workspace by running the command source devel/setup.bash
from the ~/catkin_ws location.

- The final step is to open three terminals, enter the catkin_ws folder in each of
them, then run the following three commands in order:
1- roslaunch turtlebot3_gazebo turtlebot3_world.launch
2- roslaunch active_slam active_slam.launch
3- python ./src/active_slam/scripts/global_planning.py

❖ Simulation and results:

After following the steps that have been discussed earlier, the simulation was
conducted to build the map and localize the robot’s position within the map.
Three different simulations will be conducted, and all of them share the same first
two steps: running the command that opens the Gazebo environment and then
the command that launches the active SLAM package.

When running the first command:

roslaunch turtlebot3_gazebo turtlebot3_world.launch

By running this command in the terminal, the Gazebo environment will be
opened, and the required nodes will be called.

Figure 13. Open GAZEBO Interface Using Linux Commands

When running the second command:

roslaunch active_slam active_slam.launch

This command will use the following four nodes:



move_base, robot_state_publisher, rviz, and turtlebot3_slam_gmapping

After running this command RVIZ will open and will present the map and sensor
data on the left-hand tool bar.

Figure 14. Open RVIZ to Visualize the Map

This simulation will be divided into three sections:


First Simulation: Mapping and localization using keyboard:

In this simulation, the TurtleBot3 robot will simultaneously build the map and localize
its position within that map. The robot is moved from one point to another using
the keyboard; note that this is not active SLAM, because in active SLAM the robot
must autonomously navigate to new goal locations and explore the area to form
a map of the environment.

After running the first and second commands, the third command will be run:
rosrun teleop_twist_keyboard teleop_twist_keyboard.py

This command allows the user to control the robot using the keyboard.

As shown in the graph below, the robot has successfully built its map by using the
keyboard as a movement controller.

Figure 15. Building the Map Using Keyboard

The following picture shows the rqt_graph for all active nodes and topics when
using the keyboard as the controller.

Note: circles are nodes while rectangles are topics.

Figure 16. RQT Graph of Map Built Using Keyboard



Second simulation: Building the map using the navigation tool

In this simulation, the TurtleBot3 robot will simultaneously build the map and localize
its position within that map. The robot will move from one point to another using
the 2D Nav Goal tool that is available in RVIZ. Moreover, this is also not active
SLAM.

Figure 17. Building the Map Using Navigation Tool

The following picture shows the rqt_graph for all active nodes and topics when
using only the 2D Nav Goal tool, without the keyboard.

Note: circles are nodes while rectangles are topics.

Figure 18. RQT Graph of Map Built Using Navigation Tool



Third Simulation: Mapping and localization using the active SLAM algorithm

In this simulation, the TurtleBot3 robot will simultaneously build the map and localize
its position within that map. The robot will autonomously move from one point to
another by incorporating the decision-making process to improve the map quality
and localization. The robot actively explores the environment, choosing actions
that are expected to reduce uncertainty in its map and localization estimates.
The robot selects a goal location randomly and then tries to reach it; when it
reaches that destination, it repeats these steps periodically until it has explored
all unknown space within the environment, as illustrated by the sketch below.
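The goal-sending behaviour itself is implemented by the project's
global_planning.py script. Purely as an illustration of the idea, the following
MATLAB ROS Toolbox sketch publishes a random goal in the map frame to
move_base and waits before repeating; the master IP, workspace bounds, and
fixed waiting time are assumptions, not the script's actual logic.

% random_goal_sketch.m - publish a random exploration goal to move_base.
% Illustrative only; assumes a running ROS master and move_base node.
rosinit('192.168.140.128');                        % ROS master IP (example)
goalPub = rospublisher('/move_base_simple/goal', 'geometry_msgs/PoseStamped');

for k = 1:10
    goal = rosmessage(goalPub);
    goal.Header.FrameId  = 'map';
    goal.Pose.Position.X = -2 + 4*rand();          % random point in a 4 m x 4 m area
    goal.Pose.Position.Y = -2 + 4*rand();
    goal.Pose.Orientation.W = 1;                   % neutral heading
    send(goalPub, goal);                           % move_base plans a path to it
    pause(30);                                     % crude wait; a real planner checks arrival
end
rosshutdown;

A practical implementation would also check that the sampled goal lies in free,
reachable space and would monitor move_base feedback instead of a fixed pause.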

Figure 19. Building the Map Using Active SLAM Algorithm



Figure 20. Map Using Active SLAM

The following picture shows the rqt_graph for all active nodes and topics when
using the active SLAM algorithm.

Note: circles are nodes while rectangles are topics.

Figure 21. RQT Graph of Map Built Using Active SLAM Algorithm

Section 3: SLAM using ROS Environment (ROSMASTER X1 Robot):


In this section, the SLAM simulation has been conducted in the real world using
the ROSMASTER X1 robot.

❖ Methodology of this simulation: SLAM using ROSMASTER X1 Robot

After the robot has been assembled, the first step is to connect the robot to the
virtual machine by connecting your Wi-Fi to the robot's hotspot. The robot hotspot
has the following name and password:

Hotspot Name: ROSMASTER

Password: yahboom

Then open your virtual machine and log into the robot's tools and packages using
the following command:

ssh -X pi@192.168.1.11

You will then be asked to enter the robot password mentioned earlier.

Once you are inside the robot's (Raspberry Pi) tools and packages, you must set
the robot type, camera type, and LiDAR type because the system cannot
automatically identify the product version, as there are many versions of the
ROSMASTER robots.

Case 1: if you want to use the application control, you can directly select the
corresponding product model in the APP to operate and control the car.

Case 2: if you want to control the robot or edit the code, you must open the
bashrc file and then edit the camera, LiDAR, and robot type.
You can change them using the following steps:

1- Open the terminal.
2- Enter the following command to open the bashrc file:
sudo vim ~/.bashrc
3- To start editing the file, you need to enter insert mode. Press i on your
keyboard. You can now make changes to the file.
4- Change the camera, LiDAR, and robot type to astrapro, A1, and X1
respectively.
5- To save and exit the ~/.bashrc file in vim, use the following:
Esc: return to command mode
:wq : save changes and quit vim
:q! : quit without saving changes
6- Then you must refresh/update the environment variables using the
following command:
source ~/.bashrc
7- In case you are not able to modify the bashrc file, you can manually set the
camera, LiDAR, and robot type using the following commands:
export RPLIDAR_TYPE=a1
export CAMERA_TYPE=astrapro
export ROBOT_TYPE=X1
8- Then, update/refresh the environment:
source ~/.bashrc

Section one: Testing LiDAR:

The ROSMASTER X1 robot has an A1-type LiDAR and uses a package called
rplidar_ros. Make sure to use this package, not yd_lidar; otherwise no results will be
presented, because you would be addressing a different LiDAR type.

The first step is to remap the USB serial port, which can be done using one of two
methods:

Method 1: Adding permission directly. This method only works for the current
session:

- Check the permission of the rp_lidar serial port using the following
command:
ls -l /dev | grep ttyUSB

- Then, add write permission (for example, for /dev/ttyUSB0):

sudo chmod 777 /dev/ttyUSB0

Method 2: This method works long term.

- Install the USB port remapping rule from the rplidar_ros package path:
./scripts/create_udev_rules.sh
- Re-plug the LiDAR USB interface and use the following command to
check the remapping:
ls -l /dev | grep ttyUSB

In the second step, the LiDAR code is tested:

- Run the rplidar node and view it in RVIZ using the following command; this
allows you to visualize the LiDAR readings in RVIZ.
roslaunch rplidar_ros view_rplidar.launch

Figure 22. ROSMASTER X1 LiDAR Data Visualization

- Run the rplidar node and inspect it with the test application using the following
command:
roslaunch rplidar_ros rplidar.launch
- To see the LiDAR results/data on the console, you can run the following
command in the terminal:
rosrun rplidar_ros rplidarNodeClient

Figure 23. LiDAR Data on the Console

- To observe the map construction, you can use the following command;
however, it only shows the first reading, which means that if you move the
robot or change its position, the map will not change; it only shows the
initial construction.

roslaunch rplidar_ros test_gmapping.launch

- If you want to see a graph that displays all active nodes and topics, you
can use the following command:
rosrun rqt_graph rqt_graph
or
rqt_graph

Figure 24. RQT Graph of Map Construction of ROSMASTER X1

Section 2: SLAM using different algorithms

1) Gmapping mapping algorithm:

Gmapping is a widely used open-source SLAM algorithm based on the
filtering SLAM framework. It uses a Rao-Blackwellized particle filter (RBPF),
which separates the real-time positioning and mapping processes: it first
performs positioning and then mapping. Gmapping makes significant
improvements over the basic RBPF algorithm through an improved proposal
distribution and selective resampling. It can construct indoor maps in real
time with high accuracy and low computational requirements for small
scenes. However, it has limitations when the number of two-dimensional
laser points in a single frame exceeds 1440, leading to errors such as the
"[mapping -4] process has died" issue. Additionally, as the scene size grows,
the memory and computation requirements increase because each
particle carries its own map, which makes it less suitable for large scene
maps. Moreover, Gmapping lacks loop detection, which can cause map
misalignment during loop closures, although increasing the number of
particles can mitigate this at the cost of higher computational and memory
demands.
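The selective resampling mentioned above resamples the particle set only when
the weights have degenerated, a condition typically judged by the effective
sample size. A minimal MATLAB sketch of this criterion, with assumed weights, is
given below; it illustrates the general technique rather than Gmapping's internal
code.

% selective_resampling_sketch.m - resample only when weights degenerate.
% Criterion: effective sample size Neff = 1 / sum(w.^2); resample when Neff
% drops below half the particle count. Weights here are random examples.
N = 30;
w = rand(N, 1);  w = w / sum(w);       % normalized particle weights (example)

Neff = 1 / sum(w.^2);                  % effective number of particles
if Neff < N / 2
    % Systematic (low-variance) resampling
    edges = [0; cumsum(w)];  edges(end) = 1;
    u = ((0:N-1)' + rand) / N;
    idx = discretize(u, edges);        % indices of the surviving particles
    % (in a full filter, particle states would now be re-indexed with idx)
    w = ones(N, 1) / N;                % weights reset after resampling
    fprintf('resampled: Neff = %.1f\n', Neff);
else
    fprintf('kept particle set: Neff = %.1f\n', Neff);
end

Skipping unnecessary resampling steps reduces particle depletion, which is one
reason Gmapping stays accurate with relatively few particles in small scenes.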

Structure:

Figure 25. Gmapping Structure



The following commands are used to apply the algorithm:

1- The first command is to get the laser data/LiDAR data:
roslaunch yahboomcar_nav laser_bringup.launch
2- The second command is the mapping command:
roslaunch yahboomcar_nav yahboomcar_map.launch use_rviz:=false map_type:=gmapping
3- The third command is to visualize the map on RVIZ:
roslaunch yahboomcar_nav view_map.launch
4- To control robot movement using the keyboard, you can use the
following command:
rosrun teleop_twist_keyboard teleop_twist_keyboard.py
5- There are two ways to save the map:
▪ rosrun map_server map_saver -f ~/yahboomcar_ws/src/yahboomcar_nav/maps/my_map
▪ bash ~/yahboomcar_ws/src/yahboomcar_nav/maps/map.sh
6- To see the nodes and topics:
rqt_graph
7- To see the transformation tree (TF tree), use the following command:
rosrun rqt_tf_tree rqt_tf_tree

Figure 26. ROSMASTER X1 Map Using Gmapping Algorithm (High Speed)

When we lowered the robot speed from 0.5 m/s down to 0.14 m/s, we obtained a
better result with Gmapping:

Figure 27. ROSMASTER X1 Map Using Gmapping Algorithm (Low Speed)

Figure 28. RQT Graph for Gmapping Algorithm



2) Hector mapping algorithm

The main feature of Hector SLAM is that it does not need to subscribe to
odometry (/odom) messages. It uses the Gauss-Newton method and estimates
odometry information directly from the lidar. However, when the robot moves
fast, slipping occurs, causing deviations in the mapping result and placing high
demands on the sensors. When building a map, set the car's rotation speed as
low as possible.

The following commands are used to apply the algorithm:

1- The first command is to get the laser data/LiDAR data:
roslaunch yahboomcar_nav laser_bringup.launch
2- The second command is the mapping command:
roslaunch yahboomcar_nav yahboomcar_map.launch use_rviz:=false map_type:=hector
3- The third command is to visualize the map on RVIZ:
roslaunch yahboomcar_nav view_map.launch
4- To control the robot movement, save the map, and see the rqt_graph and
the TF tree, see the previous simulation.

The following map is for the industrial robotics lab in the HTU building.

Figure 29. ROSMASTER X1 Map Using Hector Algorithm (HTU Building)



The robot will build a map for the following environment:

Figure 30. The Environment

The following map is for Mohamad’s home.

Figure 31. ROSMASTER X1 Map Using Hector Algorithm (Home)



Figure 32. RQT Graph for Hector Algorithm



8. Project Timeline

The project timeline for our whole journey in the Capstone project during the two
semesters is shown below. It lists the various tasks that we carried out according to
the weekly meetings with the supervisor, Dr. Tarek Tutunji. The periods show the
duration of our work on each task, in addition to when we started and when we
finished.

SLAM Project Start: 27/11/2023        Project End: 13/6/2024
(all start values and durations are given in weeks; week 1 begins 27/11/2023)

Activity                                            Plan Start  Plan Duration  Actual Start  Actual Duration  % Complete
Identification of the topic                              1            1             1              0             100
Secondary research (SLAM)                                2            1             2              0             100
Understanding SLAM principles                            3            1             3              0             100
Literature review                                        2            2             2              1             100
Thinking about hardware and software                     3            2             3              1             100
Initial simulation using MATLAB with ROS                 3            4             3              3             100
Results                                                  5            2             5              1             100
Documentation and preparation of the proposal            4            6             4              5             100
First presentation                                       9            2             9              1             100
Ordering the robot                                      11            2            11              1             100
Simulation using ROS in Linux                           13           12            13             11             100
Assembly of the robot                                   25            1            25              0             100
Testing the robot                                       25            7            25              6             100
Analysis of the results                                 28            3            28              2             100
Documentation and preparation of the final report      18           14            18             13             100
Final presentation                                      31            2            31              1             100

Figure 33. Project Timeline



9. Conclusion and Future Work

This research evaluated the application of Simultaneous Localization and
Mapping (SLAM) through a three-stage simulation process, utilizing TurtleBot 3 and
ROSMASTER X1 for both virtual and real-world environments. The study yielded
several critical findings and highlighted specific challenges, providing a
foundation for future advancements in SLAM technology.

Findings: The Hector SLAM algorithm demonstrated superior performance for
indoor SLAM tasks, offering the best balance of speed and accuracy compared
to other tested algorithms. The study also found that while the Raspberry Pi can
manage small-scale SLAM applications, it lacks the computational power
necessary for real-time and full-scale autonomous vehicle operations. Moreover,
the deployment of ROSMASTER X1 revealed substantial issues with package
compatibility, necessitating extensive customization and adaptation of SLAM
packages.

Challenges: Significant challenges were encountered, particularly with the
ROSMASTER X1 platform, which included delays in obtaining the robot due to
customs clearance in Jordan and issues with SLAM package functionality. These
obstacles were mitigated by intensifying simulations and gaining deeper insights
into the Robot Operating System (ROS), which contributed to advancing the study
despite equipment unavailability.

Implications for Future Work: Future research should focus on optimizing SLAM
algorithms to enhance processing speed and reduce computational complexity.
The development of advanced sensor fusion techniques, particularly integrating
LiDAR with computer vision, is essential for improving SLAM accuracy and
robustness. Furthermore, efficient handling of data from 3D LiDARs demands the
creation of faster SLAM algorithms capable of managing increased data volumes
and complexity.

Real-World Applications: The implications of this research extend to various real-
world applications beyond autonomous vehicles. SLAM technology can
significantly impact indoor navigation systems for robots used in catering,
cleaning, and healthcare settings. Additionally, it holds promise for search-and-
rescue operations by enabling autonomous robots to navigate hazardous
environments. Other potential applications include warehouse automation for
precise inventory management and agricultural robotics for autonomous
navigation of farm machinery.

In summary, this study underscores the need for continuous advancements in
SLAM algorithms and hardware integration to fully harness the technology's
potential in diverse applications. Addressing the current limitations in
computational capacity and enhancing algorithmic efficiency will be crucial in
developing robust and scalable SLAM solutions for practical deployment.

10. References
[1] "HectorSLAM 2D Mapping for Simultaneous Localization and Mapping (SLAM),"
doi: 10.1088/1742-6596/1529/4/042032.

[2] B. Alsadik and S. Karam, "The Simultaneous Localization and Mapping (SLAM):
An Overview," Surveying and Geospatial Engineering Journal, vol. 2, no. 1,
pp. 1-12, 2021, doi: 10.38094/sgej1027.

[3] C. Stachniss, J. J. Leonard, and S. Thrun, "Simultaneous Localization and
Mapping," in Springer Handbook of Robotics, ch. 46.

[4] Z. An, L. Hao, Y. Liu, and L. Dai, "Development of Mobile Robot SLAM Based
on ROS," International Journal of Mechanical Engineering and Robotics Research,
vol. 5, no. 1, 2016, doi: 10.18178/ijmerr.5.1.47-51.

[5] A. Pajaziti and P. Avdullahu, "SLAM - Map Building and Navigation via ROS,"
International Journal of Intelligent Systems and Applications in Engineering,
vol. 2, no. 4, 2013.
