

A Review on Vision Simultaneous Localization and Mapping (VSLAM)
Jabulani K. Makhubela, Tranos Zuva and Olusanya Yinka Agunbiade
Faculty of Applied and Computer Sciences
Dept of Information and Communication, Vaal University of Technology
Vanderbijlpark, South Africa
[email protected]

Abstract—Simultaneous Localization and Mapping (SLAM) has seen tremendous interest in the research community in recent years due to its ability to make a robot truly independent in navigation. The capability of an autonomous robot to locate itself within the environment and construct a map of it at the same time is known as Simultaneous Localization and Mapping (SLAM). Various sensors are employed in SLAM, characterized as laser, sonar or vision sensors. Visual Simultaneous Localization and Mapping (VSLAM) is when an autonomous robot is embedded with a vision sensor, such as a monocular, stereo vision, omnidirectional or Red Green Blue Depth (RGBD) camera, to localize itself and map the environment. Numerous researchers have embarked on the study of VSLAM with incredible results; however, many challenges still exist. The purpose of this paper is to review the work done by some of these researchers. We conducted a literature survey on several studies and outlined the frameworks, challenges and limitations of these studies. Open issues, challenges and future research in VSLAM are also discussed.

Keywords—Navigation; Sensors; Vision; Illumination variance; Simultaneous Localization and Mapping (SLAM)

I. INTRODUCTION

Simultaneous Localization and Mapping (SLAM) has captured a great deal of attention within the research community in recent years because of its potential to make a robot truly autonomous [1]. Visual Simultaneous Localization and Mapping (VSLAM) is when a robot independently estimates its position within an environment and draws a map of that same environment by utilizing a vision sensor such as a camera or Red Green Blue Depth (RGBD) sensor [2]. Choosing a sensor for an autonomous robot, such as Laser Range Finders (LRFs), sonar, acoustic sensors, cameras (monocular, stereo vision or omnidirectional), or an RGBD sensor such as the Microsoft Kinect or PrimeSense, has become a critical part of the SLAM technique [3].

According to [4], vision sensors are utilized in various robotic tasks such as object recognition, obstacle avoidance and topological global localization. This is because vision sensors, compared with other sensors, are portable, compact, precise, low-priced, non-invasive and pervasive [5]. Vision sensors can extract more viable information about a location, both in color and per pixel, than any other sensor [6]. Vision sensors are also favored because people and animals seem to navigate effectively in complicated locations using vision as their prime sensor [5].

Various researchers have embarked on VSLAM with exceptional results; however, many challenges still exist. This paper reviews the methods, achievements and limitations of VSLAM studies by some of these researchers. The remainder of this paper is organized as follows: Section II presents a review of studies done by some of these researchers; Section III discusses open issues and challenges which still exist in the studies reviewed; and finally, a conclusion is drawn in Section IV.

II. RECENT RESEARCH ON VSLAM

Visual Simultaneous Localization and Mapping (VSLAM) is when an autonomous robot uses a camera as an exteroceptive sensor to navigate, map the location and localize itself [2]. This section focuses on some of the research done on VSLAM, the achievements of its authors, and the limitations they faced when implementing their methods.

A method was proposed by [7] on Stereo Vision Simultaneous Localization and Mapping for autonomous mobile robot navigation in an indoor location. The objective was to design a system in which an autonomous robot would exclusively utilize a vision sensor for acquiring data and navigating the environment. Their navigation system comprised navigation and self-localization. The overall navigation hierarchy consisted of localization, Region of Interest (ROI) selection, ROI sub-screening, grid mapping, optimal path search and path planning, as illustrated in Figure 1. The routine activities of their VSLAM navigation system were 3D depth calculation of the location, scene analysis, optimal path search, real-time path planning and motor speed control.
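As a rough illustration of the grid-mapping and optimal-path-search stages in such a hierarchy, the sketch below builds a toy occupancy grid and runs a breadth-first shortest-path search over its free cells. All function names and data structures here are hypothetical stand-ins for exposition and are not drawn from the implementation in [7].

```python
from collections import deque

def build_grid(obstacles, width, height):
    """Grid mapping stage: mark obstacle cells in a toy occupancy grid."""
    return {(x, y): (x, y) in obstacles
            for x in range(width) for y in range(height)}

def optimal_path_search(grid, start, goal):
    """Optimal path search stage: BFS shortest path over free cells.

    Returns the list of cells from start to goal, or None if blocked.
    """
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        cell = path[-1]
        if cell == goal:
            return path
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in grid and not grid[nxt] and nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

# A 3x3 grid with an obstacle column at x=1 blocking the direct route.
grid = build_grid(obstacles={(1, 0), (1, 1)}, width=3, height=3)
path = optimal_path_search(grid, start=(0, 0), goal=(2, 0))
print(path)  # a shortest route detouring around the obstacle column
```

In a real system the obstacle set would come from the 3D depth calculation and scene-analysis stages rather than being given by hand, and the search would typically run over a much finer grid with a cost-aware planner such as A*.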

978-1-5386-6477-3/18/$31.00 ©2018 IEEE


