
IAES International Journal of Robotics and Automation (IJRA)
Vol. 13, No. 1, March 2024, pp. 41-49
ISSN: 2722-2586, DOI: 10.11591/ijra.v13i1.pp41-49

Robot indoor navigation: comparative analysis of LiDAR 2D and visual SLAM

Hind Messbah, Mohamed Emharraf, Mohammed Saber
National School of Applied Sciences, SmartICT Lab, Mohammed First University, Oujda, Morocco

Article Info

Article history:
Received Sep 22, 2023
Revised Nov 27, 2023
Accepted Dec 18, 2023

Keywords: Indoor; LiDAR 2D; Localization; Mapping; Navigation; Visual sensor

ABSTRACT

Robot indoor navigation has become a significant area of research and development for applications such as autonomous robots, smart homes, and industrial automation. This article presents an in-depth comparative analysis of LiDAR 2D and visual sensor simultaneous localization and mapping (SLAM) approaches for robot indoor navigation. The increasing demand for autonomous robots in indoor environments has led to the development of various SLAM techniques for mapping and localization. LiDAR 2D and visual sensor-based SLAM methods are widely used due to their low cost and ease of implementation. The article provides an overview of LiDAR 2D and visual sensor-based SLAM techniques, including their working principles, advantages, and limitations. A comprehensive comparative analysis is conducted, assessing their capabilities in terms of robustness, accuracy, and computational requirements. The article also discusses the impact of environmental factors, such as lighting conditions and obstacles, on the performance of both approaches. The analysis’s findings highlight each approach’s strengths and weaknesses, providing valuable insights for researchers and practitioners in selecting the appropriate SLAM method for robot indoor navigation based on specific requirements and constraints.

This is an open access article under the CC BY-SA license.

Corresponding Author:
Hind Messbah
National School of Applied Sciences, SmartICT Lab, Mohammed First University
BP 669 Bd Mohammed VI, Oujda 60000, Morocco
Email: [email protected]

1. INTRODUCTION
Robot indoor navigation involves using robots to move through an indoor space and perform various
tasks such as cleaning, delivery, or inspection. Typically used as assistants in interior environments, service
robots include the "Roomba" vacuum cleaner, the personal robot PR2 service robot, the Husky, unmanned
ground vehicle (UGV) robot, and many others [1]. There are several ways in which robots can navigate indoors. Sensor-based navigation uses sensors such as cameras, laser range finders, or sonar to detect obstacles and create a map of the environment [2]; the robot can then use this map to plan its path and avoid obstacles. Map-based navigation uses a pre-built map of the environment, which the robot can use to plan its path [3]; this requires the robot to have access to the map and the ability to locate itself within it. Hybrid navigation combines sensor-based and map-based navigation to provide more accurate and efficient navigation: the robot uses sensors to detect obstacles and updates its map in real time, allowing it to adjust its path as needed.
Overall, robot indoor navigation is an important technology for various industries such as
manufacturing, healthcare, and hospitality. It allows for the automation of tasks that were previously
performed by humans, leading to increased efficiency and cost savings. When the environment is complex
and continually changing, navigation becomes more challenging. The simultaneous localization and mapping


(SLAM) problem arises when a robot has no prior knowledge of a map or of its own position. Using sensors like light detection and ranging (LiDAR), RGB-D cameras, IMUs, and others, the SLAM method enables mapping out the immediate area while also guiding the robot to its destination. Accurate instruments and exact real-time algorithms are needed to solve the SLAM problem. Real-time SLAM algorithms come in many different flavors. Those that use monocular cameras include MonoSLAM, parallel tracking and mapping (PTAM), and ORB-SLAM, which builds on oriented FAST and rotated BRIEF (ORB) features [4]. SLAM algorithms that use RGB cameras are referred to as stereo SLAM or, more broadly, visual SLAM (VSLAM) algorithms.
Most SLAM-based solutions commonly employ RGB cameras with depth-sensing capabilities,
exemplified by devices like the Microsoft Kinect, which facilitate the mapping of the three-dimensional
world. Although effective, LiDARs are often associated with higher costs, but they excel in rapidly
generating accurate environment maps with minimal processing requirements [5]. Various instruments,
including LiDAR, RADAR, and monocular/stereo cameras, can address the location challenges mobile
robots encounter. Nonetheless, the most prevalent instruments for this purpose are webcams and LiDAR. It is worth noting that, despite their essential obstacle recognition and tracking capabilities, LiDAR
SLAM solutions have received comparatively less attention than passive visual sensor SLAM solutions.
Nevertheless, LiDAR sensors are progressively supplanting passive visual sensors as the primary
measurement and positioning sensors in robotic research [6].
The remainder of the paper is organized as follows. Section 2 reviews related work on LiDAR 2D and visual sensor SLAM-based approaches for indoor navigation. Section 3 presents the method of our comparative study. Section 4 discusses the strengths and weaknesses of each approach, including accuracy, complexity, computational requirements, and robustness to different environmental conditions. Finally, Section 5 concludes with a summary of the findings and insights into the future directions of indoor navigation for robots. Overall, this article aims to
provide a comprehensive analysis of LiDAR 2D and visual sensor SLAM-based approaches for indoor
navigation, shedding light on their advantages, limitations, and prospects, and serving as a valuable resource
for researchers, engineers, and practitioners working in the field of robotics and autonomous navigation.

2. RELATED WORK
Indoor mapping and localization have garnered a lot of attention in recent years. A variety of algorithms have been developed for this goal, operating on data from various sensors; LiDAR 2D and visual sensor SLAM-based navigation are two common methods of robot indoor navigation. Many inventions are indeed the results of
our need to tackle a problem. Many people aspired to fly once Orville and Wilbur Wright proved that it was
feasible. You could fly farther, higher, and more quickly as flying technology advanced. Everything was OK,
but there was an issue almost immediately: how to land. You need to know how far you are from the earth to
land (safely). Knowing your relative distance to the ground is essential throughout the landing approach.
How then can you determine that distance in circumstances where vision is impaired, particularly in poor
lighting or during bad weather like snow?
LiDAR 2D SLAM is a technique that leverages LiDAR sensors to scan the environment and
construct a two-dimensional map. These LiDAR sensors emit laser beams that interact with objects in the
environment, returning signals to the sensor. Using this data, the sensor generates a detailed map of the
surroundings and precisely determines its own position within that map. As the robot moves within the
environment, it continuously scans and updates the map in real time. This capability enables the robot to
navigate its surroundings and avoid obstacles effectively.
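To make the map-building step concrete, the following minimal sketch shows how a single 2D scan can be folded into an occupancy grid. This is an illustrative assumption about one common map representation, not the authors' implementation; the grid size, resolution, and pose handling are placeholder choices.

import numpy as np

GRID_SIZE = 200          # 200 x 200 cells
RESOLUTION = 0.05        # 5 cm per cell, so the map covers 10 m x 10 m
grid = np.zeros((GRID_SIZE, GRID_SIZE), dtype=np.int8)   # 0 = free/unknown, 1 = occupied

def integrate_scan(grid, robot_xy, robot_theta, ranges, angles):
    """Mark the cells hit by each laser return as occupied."""
    # Beam endpoints in the world frame
    xs = robot_xy[0] + ranges * np.cos(robot_theta + angles)
    ys = robot_xy[1] + ranges * np.sin(robot_theta + angles)
    # World coordinates -> grid indices, with the map origin at the grid centre
    ix = (xs / RESOLUTION + GRID_SIZE / 2).astype(int)
    iy = (ys / RESOLUTION + GRID_SIZE / 2).astype(int)
    valid = (ix >= 0) & (ix < GRID_SIZE) & (iy >= 0) & (iy < GRID_SIZE)
    grid[iy[valid], ix[valid]] = 1
    return grid

# Example: one synthetic 360-degree scan taken from the map origin
angles = np.linspace(-np.pi, np.pi, 360)
ranges = np.full(360, 3.0)               # a circular wall 3 m away
grid = integrate_scan(grid, (0.0, 0.0), 0.0, ranges, angles)

Repeating this update as the robot moves, with each scan transformed by the current pose estimate, yields the continuously refreshed map described above.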
LiDAR sensors function by emitting a laser beam to measure the distance to an object. The emitted
light beam strikes the object and subsequently returns to the sensor [7]. A microcontroller within the LiDAR
sensor measures the time it takes for the light to return to the sensor. Since the speed of light is a constant
value, the sensor can calculate and provide the precise distance to the object. LiDAR sensors perform these measurements and computations at a remarkable rate, ranging from 10 to 1,000 times per second.
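As a worked example of this time-of-flight computation: the beam travels to the object and back, so the one-way distance is the speed of light multiplied by the measured round-trip time, divided by two. A minimal sketch:

C = 299_792_458.0                      # speed of light in m/s

def tof_distance(round_trip_time_s):
    # The light covers the distance twice (out and back), hence the division by 2
    return C * round_trip_time_s / 2.0

print(tof_distance(33.4e-9))           # a ~33.4 ns return corresponds to ~5 m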
The evolution of LiDAR technology has granted us new perspectives on objects and environments
that were previously unseen. LiDAR finds applications in a wide array of fields, including hazard assessment
(such as monitoring lava flows, landslides, tsunamis, and floods), conducting surveys of watersheds and
rivers, simplifying urban planning, enabling people and vehicle counting, enhancing climate monitoring, and
revolutionizing meteorology and mining practices, all while promoting ecological sustainability [8].
Conversely, Visual Sensor SLAM-based navigation entails the use of cameras to capture images of the
environment and construct a map. The robot employs image analysis techniques to identify distinctive
features such as corners and edges within these images, subsequently using these features to determine its
precise location within the environment. As the robot progresses through its surroundings, it continually
captures images and updates its map, thereby enabling effective navigation and obstacle avoidance [9].

IAES Int J Rob & Autom, Vol. 13, No. 1, March 2024: 41-49
IAES Int J Rob & Autom ISSN: 2722-2586  43

The application of estimation-theoretic methodologies to address robot localization and mapping can be traced back to 1986, as chronicled by Durrant-Whyte and Bailey [10], coinciding with the IEEE Robotics and
Automation Conference hosted in San Francisco. However, it was not until the 1995 International
Symposium on Robotics Research that the fundamental structuring of the SLAM problem, the formulation of
its convergence criteria, and the introduction of its abbreviation took place [9]. In its initial developmental
stages, SLAM research, as exemplified by the pioneering contributions of Leonard et al. [11] (utilizing
ultrasonic sensing) and Gutmann et al. [6] (employing laser technology), predominantly relied upon data
acquisition from odometry and laser/ultrasonic sensors to perceive and navigate within the environment.
Throughout the 1990s, significant contributions from researchers such as Thrun et al. [12] laid the
foundational principles and prototypical constructs underpinning the conventional Bayesian filtering-based
SLAM framework. A comprehensive exploration of this body of work, encompassing its core principles and
specialized subtopics, is presented in [13].
One of the most active fields of robotics study is VSLAM. Due to their low cost, high data
collection capacity, and broad measurement range, visual sensors have been the primary study focus for
SLAM systems. The purpose is to infer the camera motions based on the recorded pixel movements in the
image series. There are numerous methods to accomplish this. The first method is to identify and follow a
few key features in the picture; this method is known as feature-based VSLAM. The use of the complete
picture without feature extraction is another option; this method is known as direct SLAM.
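As an illustration of the feature-based variant, the sketch below uses OpenCV's ORB detector to extract and match keypoints between two frames; the frame file names are hypothetical, and the snippet covers only the front-end matching step, not a complete VSLAM system.

import cv2

img1 = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)   # hypothetical consecutive frames
img2 = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance suits ORB's binary descriptors; cross-checking filters outliers
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# The pixel motions of these matched keypoints are the measurements from which
# a feature-based back end infers the camera motion between the two frames.
print(f"{len(matches)} tentative correspondences")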
There are, of course, other SLAM approaches that involve various types of cameras, notably RGB-D cameras, which generate not only an image but also a depth measurement of the environment, as well as event cameras, which detect only changes in the image. Both LiDAR 2D and visual sensor SLAM-based
navigation have their own strengths and weaknesses. LiDAR 2D SLAM is good at creating accurate maps of
the environment and detecting obstacles, but it may struggle in low-light conditions or when there are
reflective surfaces. Visual sensor SLAM-based navigation can work in a variety of lighting conditions and
can provide more detailed maps but may struggle in environments with poor visibility or limited features. In
essence, the choice of a navigation method hinges on the precise requirements of the application and the
characteristics of the robot's operational environment. Certain applications may necessitate the synergistic
utilization of both LiDAR and visual sensors, harmonizing their capabilities to attain the intended levels of
accuracy and operational efficiency.

3. METHOD
The objective of this study is to furnish an all-encompassing overview of SLAM techniques for
indoor robot navigation, with a specific emphasis on LiDAR and VSLAM solutions. We begin by offering a
concise review of the foundational theory underpinning LiDAR and VSLAM methodologies. This is intended
to facilitate the comprehension of these concepts, particularly for novice researchers in the field of indoor
robot navigation. A fundamental grasp of each technique is considered pivotal from our standpoint.
To offer a holistic perspective of the advancements made and to assist in the selection of suitable
sensors and algorithms for diverse scenarios, we delve into the contemporary landscape of camera-based and
LiDAR-based SLAM solutions. Our investigation encompasses multiple experimental settings, all situated
indoors. These include a compact indoor structure comprising two open rooms and a corridor, an agricultural
greenhouse, indoor livestock, and farming greenhouses, as well as a hospital environment.
In a specific case, Afrisal et al. [14] introduced a mobile robotic arm prototype designed for achieving a high degree of autonomous navigation. This system employs a 2D LiDAR sensor and incorporates guided remote monitoring and object selection capabilities for the loading and unloading processes, employing the Hector SLAM method for navigation. Their experimental findings illustrate the prototype's capability to navigate the test environment safely and efficiently, executing object transfers between rooms without encountering collisions. Furthermore, the mobile robotic arm's mapping technique demonstrates the generation of relatively precise maps, with noise reduction achieved through the application of the Hector SLAM method. It is worth noting that the mapping process should be conducted at a controlled speed, with the robot maintaining a maximum velocity of 10 m/s [14].
Subsequently, our focus shifts to an exploration of various VSLAM techniques, encompassing the
use of contemporary RGB-D and event cameras, as well as monocular and stereo cameras. In [13], a novel SLAM technology employing RGB and depth images is introduced to enhance hospital operational
efficiency, minimize the risk of cross-infections between doctors and patients, and mitigate the spread of
diseases such as COVID-19. This technology was evaluated in real hospital scenarios, with 15 image
sequences captured by Kinect cameras, encompassing ward settings, corridors, nurse stations, and more.
These sequences include both dynamic scenes with human activity and static scenes devoid of human
presence.


This innovative solution extends the utilization of ORB feature points and introduces a semantic
descriptor for map construction. The semantic descriptor facilitates the establishment of high-level semantic
relationships and integrates them into a knowledge graph. Through this approach, dynamic objects within the
environment are identified and filtered out to enhance the accuracy of pose estimation.
Krul et al. [15] investigated the feasibility of robot positioning within indoor livestock and farming
greenhouses by conducting a comparative analysis of VSLAM techniques using monocular camera input in
indoor environments. Noteworthy disparities emerge when applying these VSLAM methods to diverse
scenarios, and even seemingly minor performance variations can hold significant implications. For instance,
within greenhouse settings where tomato cultivars are grown, row distances can fluctuate between 0.23 and
0.60 meters, contingent on the stage of crop growth. In the context of the bell pepper greenhouse from which
their data was derived, the inter-row spacing measured 0.5 meters. Expressing the observed error difference of
0.16 meters in absolute terms between the VSLAM algorithms, it becomes apparent that, depending on the
crop's growth stage, this error corresponds to a range of 27% to 70% of the inter-row width. In the realm of
livestock applications, even centimeter-level discrepancies in localization can hold considerable significance,
as they directly impact tasks related to dairy monitoring and serve as essential parameters for disturbance
minimization.
The complexity of implementing SLAM varies due to the diversity of sensors and installation
methodologies. SLAM implementations are predominantly categorized into LiDAR SLAM and VSLAM.
LiDAR SLAM, which has enjoyed an earlier inception, has reached a relatively mature state in terms of
theory, technology, and product development [16], [17]. Presently, it remains the most stable and mainstream
method for positioning and navigation. However, the landscape is undergoing a transformation with the swift
progress in computer vision, resulting in heightened attention towards VSLAM due to its wealth of
information and expansive application scope. Over time, various industries have held differing perspectives
on the supremacy of LiDAR SLAM versus VSLAM and their prospective dominance. The ensuing
discussion provides a concise comparison between LiDAR SLAM and VSLAM across several dimensions.
− Prime cost
LiDAR prices range from 10K to 100K; the cost is relatively high, although low-cost LiDAR solutions (such as RPLiDAR) are available. VSLAM, by contrast, primarily uses cameras to gather data, which are much cheaper than LiDAR. For localization and guidance, LiDAR is advantageous because it can more precisely determine the angle and distance of obstacle points.
− Application scene
The application scene of VSLAM is much richer than that of LiDAR SLAM. VSLAM operates effectively in both indoor and outdoor contexts; however, its functionality relies significantly on sufficient lighting conditions, rendering it ineffective in low-light or poorly textured environments [18]. At present, LiDAR is mainly used indoors, for map construction and navigation.
− Accuracy of map
When laser SLAM constructs a map, the precision is high and can reach about 2 cm; VSLAM, which commonly relies on depth cameras, achieves a map construction accuracy of about 3 cm. LiDAR SLAM maps are typically more accurate than VSLAM maps and can be used right away for localization and guidance. The ability to generate
highly accurate maps is crucial for tasks such as obstacle avoidance and path planning, ensuring that robots
can navigate with precision and efficiency in complex indoor environments.
− Ease of use
LiDAR SLAM and VSLAM with depth cameras directly obtain environmental data as point clouds. From the point cloud, robots can predict and measure where obstacles are and how far away they lie. However, VSLAM schemes based on monocular or binocular cameras cannot directly obtain point clouds; they produce grayscale or color images instead. The distance to obstacles must then be computed by continually moving the camera, extracting and matching feature points, and applying the triangulation ranging method to work out the position, as the sketch below illustrates.
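A minimal sketch of that triangulation step follows; the camera intrinsics, the assumed 0.2 m baseline between the two poses, and the pixel coordinates are illustrative values, not measurements from any of the cited systems.

import numpy as np
import cv2

K = np.array([[525.0,   0.0, 320.0],
              [  0.0, 525.0, 240.0],
              [  0.0,   0.0,   1.0]])            # assumed pinhole intrinsics

# Camera 1 at the origin; camera 2 translated 0.2 m to the right
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])

# Pixel coordinates of one matched feature point in each view (2 x N arrays)
pts1 = np.array([[340.0], [250.0]])
pts2 = np.array([[310.0], [250.0]])

point_h = cv2.triangulatePoints(P1, P2, pts1, pts2)  # homogeneous 4 x 1 result
point_3d = (point_h[:3] / point_h[3]).ravel()
print("Estimated obstacle position (m):", point_3d)  # depth of about 3.5 m here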
− Others
In addition to the above, LiDAR SLAM and VSLAM also differ in detection range, computational load, real-time data generation, and cumulative map error. As a case in point, in the same scene, VSLAM shows a larger deviation in the second half of the trajectory than LiDAR SLAM, caused by cumulative error, so VSLAM must carry out loop closure detection.
An overview of the advantages/disadvantages is shown in Tables 1 and 2, respectively.
Nevertheless, VSLAM exhibits both strengths and weaknesses. While it excels in providing precise
mapping and localization capabilities, its utilization demands significant computational resources.
Furthermore, VSLAM systems can be susceptible to environmental factors such as variations in lighting

IAES Int J Rob & Autom, Vol. 13, No. 1, March 2024: 41-49
IAES Int J Rob & Autom ISSN: 2722-2586  45

conditions and the availability of distinct textures, as outlined in Table 2. Despite these challenges, ongoing
research endeavors aim to address these limitations, seeking to optimize the computational efficiency of
VSLAM algorithms and enhance their robustness in diverse environmental settings.

Table 1. Advantages and disadvantages of LiDAR SLAM

Advantages:
− High reliability and mature technology.
− Map construction is intuitive, precision is high, and there is no cumulative error.
− Maps can be used directly for path planning.

Disadvantages:
− Limited by the LiDAR detection range (~6 meters).
− Installation has structural requirements.
− The map lacks semantic information.

Table 2. Advantages and disadvantages of VSLAM

Advantages:
− The structure is simple, and the installation mode is diversified.
− No limitation on the sensor’s distance detection range; low cost.
− Semantic information can be extracted.

Disadvantages:
− Performance is largely affected by ambient light; does not work in dark or non-textured environments.
− Heavy computational load; the constructed map itself is difficult to use directly for path planning and navigation.
− The dynamic performance of the sensor needs to be improved, and there are cumulative errors in map construction.

4. RESULTS AND DISCUSSION


According to this review of the literature, a fully integrated visual-LiDAR strategy that fully exploits the benefits of both sensing modalities is still lacking. We contend that the SLAM community would profit from tightly hybridizing LiDAR features with visual features, which would make it possible to solve a multi-modal, mixed, multi-constraint maximum a posteriori (MAP) problem. A system like this would increase SLAM's resistance to external factors like temperature and light. It is well known that LiDAR SLAM can operate in low-light or texture-less settings while VSLAM cannot. In contrast, camera-based SLAM performs better than LiDAR SLAM in areas that are textured but geometrically uninformative (an open field, a very long corridor), where scan matching is prone to false matches. When choosing a navigation system for a specific robotics application, it is paramount to consider the
inherent challenges that robots encounter. Robots are tasked with traversing a diverse range of surfaces and
pathways. For example, a robotic vacuum cleaner must effectively find the optimal route between rooms,
navigating various floor types such as hardwood, tile, or carpet. This necessitates a dual requirement: an
understanding of broad environmental constraints and access to location-specific data. The robot must be
equipped with information that includes recognizing potential obstacles like stairs or determining the precise
distance from a coffee table to a doorway.
VSLAM and light detection and ranging (LiDAR) technologies offer solutions to address these challenges. LiDAR, known for its speed and precision, excels at providing accurate data but comes at a higher cost. Conversely, VSLAM offers a more cost-effective alternative, utilizing less expensive hardware, specifically cameras instead of lasers. It can also generate a 3D map, albeit with some trade-offs in terms of precision and speed in contrast to LiDAR [19]. Notably, VSLAM enjoys an advantage over LiDAR in its ability to capture a more comprehensive view of the environment, thanks to its sensor's capacity to capture multiple dimensions. However, LiDAR-based SLAM also offers excellent options. LiDAR techniques provide extremely precise 3D inputs about the environment, but they are frequently time-consuming and depend on scan-matching techniques of limited resilience. Very few works currently exist that analyze the 3D data by extracting 3D features. None of the 3D LiDAR SLAM methods handle locations in the same manner as vision-based frameworks do, because extracting and analyzing LiDAR landmarks is time-consuming at the processing stage. The only geometric features used in current LiDAR SLAM methods are planes; however, natural outdoor settings, being by their very nature unstructured, offer few planes to exploit.
LiDAR-based SLAM primarily uses scan-matching techniques like iterative closest point (ICP), algorithms that have changed little since their creation some thirty years ago. One of the major drawbacks of 2D LiDAR (often used in robotics applications) is that if one object is occluded by another at the height of the LiDAR scanning plane, or if an object has an inconsistent shape with different widths across its body, this information is lost. Reprojection error, the disparity between a feature's predicted position in the image and its actually observed location, is a possible flaw in VSLAM. To eliminate geometric distortions (including reprojection error), which might lower the accuracy of the inputs to the SLAM algorithm, camera optical calibration is crucial.
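The sketch below shows one such calibration step using OpenCV's standard chessboard routine; the board geometry and image paths are assumptions, and the reported RMS value is precisely the mean reprojection error discussed above.

import glob
import numpy as np
import cv2

PATTERN = (9, 6)                                  # inner corners of the chessboard
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib/*.png"):             # hypothetical capture set
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# rms is the mean reprojection error in pixels; K and dist model the lens distortions
rms, K, dist, _, _ = cv2.calibrateCamera(obj_points, img_points, image_size, None, None)
print(f"RMS reprojection error: {rms:.3f} px")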
There have been some experiments to combine LiDAR and visual instruments, but they are all still
only very loosely fused. LiDAR and visual detection cannot assist one another because the fusion primarily


uses the results of both odometry stages, and the choice is made very late in the process when merging the
relative displacement estimation. Other methods initialize the visual features immediately using only the
LiDAR depth data. Once more, the potential of LiDAR is not fully utilized.
A lot of studies are being done in the field of VSLAM, and we have only reviewed the major
methods. Even though VSLAM produces excellent results, all these VSLAM methods are susceptible to
mistakes due to their sensitivity to changes in lighting or an untextured world. Additionally, because RGB-D
methods are dependent on IR light, they are highly susceptible to daylight. They consequently only function
effectively in interior settings. In comparison to other visual methods, they perform badly in untextured or poorly lit conditions where no exact estimation of pixel displacement can be made.
Strengths:
− Accuracy: VSLAM has the potential to achieve high levels of accuracy in indoor navigation because it
relies on precise visual data. This is particularly useful in indoor environments where GPS signals are
often weak or unavailable.
− Robustness: VSLAM algorithms are robust to changes in lighting, shadows, and occlusions, making them
suitable for indoor environments with varying lighting conditions.
− Flexibility: VSLAM algorithms are adaptable to a wide range of indoor environments and can work with
different types of cameras and sensors [20].
Weaknesses:
− Computational complexity: VSLAM algorithms can be computationally intensive, requiring powerful
hardware and processing resources, which can limit their real-time performance.
− Limited field of view: VSLAM relies on the camera's field of view, which may be limited, especially in
small or cluttered indoor environments [16]. This can affect the accuracy of the map and the robot's
localization.
− Vulnerability to feature-poor environments: VSLAM algorithms rely on identifying unique features in the
environment to build a map and track the robot's position. The algorithm's accuracy may be affected if the
environment lacks such features.
There are also several algorithms that can be used for VSLAM-based indoor navigation; a minimal sketch of the pose-recovery step that the feature-based pipelines share is given after the list.
− ORB-SLAM: a real-time SLAM algorithm that uses a feature-based approach to build a map and track the robot's position [21]. It relies on the oriented FAST and rotated BRIEF (ORB) feature descriptor to extract and match features in the environment [22].
− LSD-SLAM: a direct monocular SLAM algorithm that operates on image intensities rather than extracted features to build a semi-dense map of the environment. It uses an optimization approach to estimate the robot's pose and the map's geometry [18].
− DSO: a VSLAM algorithm that employs direct image alignment to deduce the robot's pose and map structure. It uses a sparse map representation to alleviate computational complexity and minimize memory consumption [23].
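The following sketch (an illustration only, not the internals of any of the systems above) shows the relative pose recovery step that feature-based monocular pipelines such as ORB-SLAM build on: estimating the essential matrix from matched pixels and decomposing it into a rotation and a translation. The intrinsics matrix K is an assumed placeholder.

import numpy as np
import cv2

K = np.array([[525.0,   0.0, 320.0],
              [  0.0, 525.0, 240.0],
              [  0.0,   0.0,   1.0]])             # assumed camera intrinsics

def relative_pose(pts1, pts2):
    """pts1, pts2: Nx2 arrays of matched pixel coordinates from two frames."""
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    # R and t give the second camera's pose relative to the first; t is recovered
    # only up to scale, the well-known scale ambiguity of monocular SLAM.
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t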
These disadvantages encourage researchers to develop powerful, optimized algorithms that can deal
with data errors and accelerate processing. Due to all these factors, other instruments have also been
investigated for the SLAM procedure. The first driverless vehicle prototypes currently use RADAR or
LiDAR as their primary sensors. Even though LiDAR has a wide range of applications, it is noteworthy that the method for registering LiDAR scans has changed little in almost ten years. The
scan-matching method, followed by graph optimization, is the primary remedy for LiDAR-based guidance.
LiDAR 2D indoor navigation has several strengths and weaknesses. Here are some of them:
Strengths:
− Accurate mapping: LiDAR 2D can create highly accurate maps of indoor environments with a resolution
of a few centimeters, making it ideal for navigation in areas where precision is crucial.
− Robustness: LiDAR 2D can function in a wide range of environments, including those with low light,
shadows, or reflections, making it a reliable navigation solution.
− Time: LiDAR 2D can process data at high speeds and can create maps in real-time, making it an efficient
option for real-time applications.
Weaknesses:
− Limited mapping range: LiDAR 2D is limited by its range and may not be able to detect objects or create
maps beyond a certain distance.
− Complexity: LiDAR 2D requires a lot of computational power and specialized hardware to create maps
and navigate.
− Cost: LiDAR 2D sensors can be expensive, making them a costly option for some applications.
There are also several algorithms that can be used for LiDAR 2D indoor navigation, including the following (a minimal ICP sketch follows the list):
− Iterative Closest Point (ICP): This algorithm matches a set of points in the current scan to points in the
previous scan and then aligns them to create a map [24].

IAES Int J Rob & Autom, Vol. 13, No. 1, March 2024: 41-49
IAES Int J Rob & Autom ISSN: 2722-2586  47

− Scan Matching algorithm: This algorithm uses a technique called point-to-point matching to align two
consecutive scans and create a map [25].
− Grid-based algorithm: This algorithm divides the map into a grid and then assigns each cell a value based
on the LiDAR data. The robot can then use this grid to navigate through the environment.
− Particle Filter algorithm: This algorithm uses a statistical approach to track the position of the robot and
create a map [26].
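As noted above the list, here is a minimal point-to-point ICP sketch: a toy illustration that assumes well-overlapping scans, not a production scan matcher. It alternates nearest-neighbour association with a closed-form SVD pose update.

import numpy as np
from scipy.spatial import cKDTree

def icp_2d(src, dst, iterations=30):
    """Align Nx2 source points to Mx2 destination points; returns R (2x2), t (2,)."""
    R, t = np.eye(2), np.zeros(2)
    tree = cKDTree(dst)
    for _ in range(iterations):
        moved = src @ R.T + t
        _, idx = tree.query(moved)                # nearest-neighbour association
        matched = dst[idx]
        # Closed-form rigid alignment of the matched pairs (Kabsch/SVD)
        mu_s, mu_d = moved.mean(axis=0), matched.mean(axis=0)
        H = (moved - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:             # guard against reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_d - R_step @ mu_s
        R, t = R_step @ R, R_step @ t + t_step
    return R, t

# Example: recover a known 5-degree rotation between two synthetic scans
a = np.random.rand(300, 2) * 4.0
theta = np.deg2rad(5.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
b = a @ R_true.T
R_est, t_est = icp_2d(a, b)
print(np.rad2deg(np.arctan2(R_est[1, 0], R_est[0, 0])))   # close to 5.0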
Overall, LiDAR 2D is a powerful and reliable option for indoor navigation, but it may not be
suitable for all applications due to its limitations and cost. The choice of algorithm will depend on the
application's specific needs and the environment in which the robot will be operating. A general overview is
represented in Table 3.

Table 3. Advantages and disadvantages of robot indoor navigation using LiDAR/visual-based SLAM

LiDAR-based SLAM
Advantages:
− Provides precise distance measurements, which can be used to generate accurate 2D maps of the environment.
− Works well in low-light conditions, which makes it suitable for indoor environments with varying lighting conditions.
− Can cover a larger field of view than a camera-based system.
Drawbacks:
− Generally more expensive than camera-based systems.
− The generated map is not as visually detailed as a map generated by a camera-based system.
− Can be affected by reflective surfaces, which can lead to inaccurate distance measurements.

Visual-based SLAM
Advantages:
− Can be implemented using standard cameras, which are generally more affordable than LiDAR systems.
− Provides high-resolution images that can be used for other applications, such as object detection or recognition.
− Works well in well-lit environments with clear and distinct visual features.
Drawbacks:
− Can be sensitive to changes in lighting conditions or visual obstructions, such as occlusions or reflections.
− Performance can be affected by a lack of distinct visual features in the environment.
− Requires high computational resources, which can impact real-time performance.

5. CONCLUSION
In conclusion, the comparison between LiDAR 2D and visual sensor SLAM-based approaches for
robot indoor navigation reveals that both technologies have their strengths and limitations. LiDAR 2D-based
approaches offer accurate and reliable distance measurements, making them ideal for obstacle detection and
mapping in indoor environments. They provide high precision and robustness, allowing robots to navigate in
complex environments with obstacles and varying lighting conditions. However, LiDAR sensors can be
expensive and may have limitations in capturing detailed color or texture information.
On the other hand, visual sensor SLAM-based approaches, which rely on cameras for navigation,
are cost-effective and flexible, as cameras are typically less expensive than LiDAR sensors. They can capture
rich color and texture information, enabling better object recognition and scene understanding. VSLAM-
based approaches also have the potential to leverage existing cameras in smartphones or other devices,
making them more accessible for certain applications. However, they may struggle with accuracy and
robustness in challenging lighting conditions and may require more computational resources for processing
visual data.
The choice between LiDAR 2D and visual sensor SLAM-based approaches for robot indoor
navigation depends on the specific requirements of the application, such as the level of accuracy needed, the
complexity of the environment, and the available resources. LiDAR 2D-based approaches may be more
suitable for applications that require high precision in obstacle detection and mapping, while visual sensor
SLAM-based approaches may be more cost-effective and flexible for applications that prioritize visual
perception and scene understanding. In summary, both LiDAR 2D and visual sensor SLAM-based
approaches have their advantages and limitations, and the choice between them should be guided by the
precise requirements of the robot's indoor navigation application. Considering the findings from the
comparison made in this article, additional assessments will be undertaken, and future research may include
evaluating the performance of a physical robot in different real-world scenarios.

REFERENCES
[1] M. N. Favorskaya, S. Mekhilef, R. K. Pandey, and N. Singh, Eds., Innovations in electrical and electronic engineering, vol. 661.
Singapore: Springer Singapore, 2021. doi: 10.1007/978-981-15-4692-1.
[2] F. Guth, L. Silveira, S. Botelho, P. Drews, and P. Ballester, “Underwater SLAM: Challenges, state of the art, algorithms and a
new biologically-inspired approach,” in Proceedings of the IEEE RAS and EMBS International Conference on Biomedical
Robotics and Biomechatronics, Aug. 2014, pp. 981–986. doi: 10.1109/biorob.2014.6913908.
[3] S. N. Ferdaus, “A topological approach to online autonomous map building for mobile robot navigation,” 2008, Accessed: Jan.
24, 2024. [Online]. Available: https://api.semanticscholar.org/CorpusID:126578060


[4] J. J. Leonard and H. F. Durrant-Whyte, “Simultaneous map building and localization for an autonomous mobile robot,” in
Proceedings IROS ’91:IEEE/RSJ International Workshop on Intelligent Robots and Systems ’91, 2002, pp. 1442–1447. doi:
10.1109/iros.1991.174711.
[5] P. Henry, M. Krainin, E. Herbst, X. Ren, and D. Fox, “RGB-D mapping: Using Kinect-style depth cameras for dense 3D
modeling of indoor environments,” International Journal of Robotics Research, vol. 31, no. 5, pp. 647–663, Feb. 2012, doi:
10.1177/0278364911434148.
[6] B. Suleymanoglu, M. Soycan, and C. Toth, “Indoor Mapping: Experiences with LiDAR SLAM,” International Archives of the
Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives, vol. 43, no. B1-2022, pp. 279–285, May
2022, doi: 10.5194/isprs-archives-XLIII-B1-2022-279-2022.
J. Song, “Motion estimation techniques from outdoor to indoor on a multi-rotor robot,” The Hong Kong University of Science
and Technology Library, 2014. doi: 10.14711/thesis-b1333733.
[8] Acroname, “LIDAR: The history of LIDAR,” Acroname, 2020. https://acroname.com/blog/history-lidar
[9] Z. Fan, “Brief review on visual SLAM : A historical perspective,” Brief review on visual slam: A historical perspective, pp. 1–15,
2016, [Online]. Available: https://fzheng.me/2016/05/30/slam-review/#ptam-new-standard-for-local-tracking-and-mapping
[10] H. Durrant-Whyte and T. Bailey, “Simultaneous localization and mapping: Part I,” IEEE Robotics and Automation Magazine, vol.
13, no. 2, pp. 99–108, Jun. 2006, doi: 10.1109/MRA.2006.1638022.
[11] J. S. Gutmann and K. Konolige, “Incremental mapping of large cyclic environments,” in Proceedings - 1999 IEEE International
Symposium on Computational Intelligence in Robotics and Automation, CIRA 1999, 1999, pp. 318–325. doi:
10.1109/cira.1999.810068.
[12] S. Thrun, W. Burgard, and D. Fox, “A probabilistic approach to concurrent mapping and localization for mobile robots,”
Autonomous Robots, vol. 5, no. 3–4, pp. 253–271, 1998, doi: 10.1023/a:1008806205438.
[13] B. Fang, G. Mei, X. Yuan, L. Wang, Z. Wang, and J. Wang, “Visual SLAM for robot navigation in healthcare facility,” Pattern
Recognition, vol. 113, p. 107822, May 2021, doi: 10.1016/j.patcog.2021.107822.
[14] H. Afrisal et al., “Mobile robotic-ARM development for a small-scale inter-room logistic delivery using 2D LIDAR-guided
navigation,” Teknik, vol. 43, no. 2, pp. 158–167, Aug. 2022, doi: 10.14710/teknik.v43i2.45642.
[15] S. Krul, C. Pantos, M. Frangulea, and J. Valente, “Visual slam for indoor livestock and farming using a small drone with a
monocular camera: A feasibility study,” Drones, vol. 5, no. 2, p. 41, May 2021, doi: 10.3390/drones5020041.
[16] S. Rongchuan, M. Shugen, L. Bin, and W. Yuechao, “Improving consistency of EKF-based SLAM algorithms by using accurate
linear approximation,” in IEEE/ASME International Conference on Advanced Intelligent Mechatronics, AIM, Jul. 2008, pp. 619–
624. doi: 10.1109/AIM.2008.4601731.
[17] J. Dai, X. Li, K. Wang, and Y. Liang, “A novel STSOSLAM algorithm based on strong tracking second order central difference
Kalman filter,” Robotics and Autonomous Systems, vol. 116, pp. 114–125, Jun. 2019, doi: 10.1016/j.robot.2019.03.006.
[18] J. Engel, T. Schöps, and D. Cremers, “LSD-SLAM: Large-scale direct monocular SLAM,” in Lecture Notes in Computer Science
(including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 8690 LNCS, no. PART 2,
Springer International Publishing, 2014, pp. 834–849. doi: 10.1007/978-3-319-10605-2_54.
[19] H. He, Z. Chen, Z. Li, X. Liu, and H. Liu, “Scale-aware tracking method with appearance feature filtering and inter-frame
continuity,” Sensors, vol. 23, no. 17, p. 7516, Aug. 2023, doi: 10.3390/s23177516.
[20] C. Y. Chiu, “SLAM backends with objects in motion: A unifying framework and tutorial,” in Proceedings of the American
Control Conference, May 2023, vol. 2023-May, pp. 1635–1642. doi: 10.23919/ACC55779.2023.10155957.
[21] L. Bauersfeld and G. Ducard, “RTOB SLAM: Real-time onboard laser-based localization and mapping,” Vehicles, vol. 3, no. 4,
pp. 778–789, Nov. 2021, doi: 10.3390/vehicles3040046.
[22] S. Chen, Y. Li, and H. Chen, “A monocular vision localization algorithm based on maximum likelihood estimation,” in 2017
IEEE International Conference on Real-Time Computing and Robotics, RCAR 2017, Jul. 2017, vol. 2017-July, pp. 561–566. doi:
10.1109/RCAR.2017.8311922.
[23] A. Basiri, V. Mariani, and L. Glielmo, “Improving visual SLAM by combining SVO and ORB-SLAM2 with a complementary
filter to enhance indoor mini-drone localization under varying conditions,” Drones, vol. 7, no. 6, p. 404, Jun. 2023, doi:
10.3390/drones7060404.
[24] J. Wen, X. Zhang, H. Gao, J. Yuan, and Y. Fang, “A novel 2d laser scan matching algorithm for mobile robots based on hybrid
features,” in 2018 IEEE International Conference on Real-Time Computing and Robotics, RCAR 2018, Aug. 2018, pp. 366–371.
doi: 10.1109/RCAR.2018.8621744.
[25] B. Lisien, D. Morales, D. Silver, G. Kantor, I. Rekleitis, and H. Choset, “The hierarchical atlas,” IEEE Transactions on Robotics,
vol. 21, no. 3, pp. 473–481, Jun. 2005, doi: 10.1109/TRO.2004.837237.
[26] G. Jang, J. S. Kim, S. Kim, and I. Kweon, “PR-SLAM in particle filter framework,” in Proceedings of IEEE International
Symposium on Computational Intelligence in Robotics and Automation, CIRA, 2005, pp. 327–333. doi:
10.1109/cira.2005.1554298.

BIOGRAPHIES OF AUTHORS

Hind Messbah received her master’s degree in big data and her bachelor’s degree in computer engineering from the International University of Rabat, Morocco, in 2018. She currently holds the position of technical leader in the consulting industry. With a career spanning over five years, she has garnered extensive expertise as a data engineer, with a track record of successful project implementations across diverse sectors, encompassing telecommunications, insurance, retail, and banking. Her research interests include big data, artificial intelligence, robotics, and the internet of things (IoT). She can be contacted at [email protected].

IAES Int J Rob & Autom, Vol. 13, No. 1, March 2024: 41-49
IAES Int J Rob & Autom ISSN: 2722-2586  49

Mohamed Emharraf is a professor of robotics at the National School of Applied Sciences, Mohammed First University, Oujda, Morocco. He received his Ph.D. in 2017 from CEDOC-EMPO. His research interests include indoor robot control, smart agriculture, computer engineering, human-computer interaction, and artificial intelligence. He has published 29 papers in peer-reviewed journals and conference proceedings, and he has also served as a reviewer for several scientific journals and as a program committee member. He can be contacted at [email protected].

Mohammed Saber has been an associate professor in the Department of Electronics, Computer Science and Telecommunications at the National School of Applied Sciences, Mohammed First University, Oujda, Morocco, since 2013. He received a Ph.D. in computer science from the Faculty of Sciences, Oujda, Morocco, in July 2012, an engineering degree in networks and telecommunications from the National School of Applied Sciences in July 2004, and a license degree in electronics from the Faculty of Sciences in July 2002, all from Mohammed First University, Oujda. He is currently director of the Smart Information, Communication and Technologies Laboratory (SmartICT Lab). His interests include network security (intrusion detection systems, evaluation of security components, IoT security), AI, robotics, and embedded systems. He can be contacted at [email protected].
