Simultaneous Localization and Mapping (SLAM)
Contents
Mathematical description of the problem
Algorithms
Mapping
Sensing
Kinematics modeling
Acoustic SLAM
Audio-Visual SLAM
Collaborative SLAM
Moving objects
Loop closure
Exploration
Biological inspiration
Implementation methods
EKF SLAM
GraphSLAM
History
See also
References
External links
Mathematical description of the problem

[Figure: A map generated by a SLAM robot.]

Given a series of controls $u_t$ and sensor observations $o_t$ over discrete time steps $t$, the SLAM problem is to compute an estimate of the agent's state $x_t$ and a map of the environment $m_t$. All quantities are usually probabilistic, so the objective is to compute

$$P(m_{t+1}, x_{t+1} \mid o_{1:t+1}, u_{1:t})$$

Applying Bayes' rule gives a framework for sequentially updating the location posteriors, given a map and a transition function $P(x_t \mid x_{t-1})$:

$$P(x_t \mid o_{1:t}, u_{1:t}, m_t) = \sum_{m_{t-1}} P(o_t \mid x_t, m_t, u_{1:t}) \sum_{x_{t-1}} P(x_t \mid x_{t-1}) \, P(x_{t-1} \mid m_t, o_{1:t-1}, u_{1:t}) / Z$$

Like many inference problems, the solutions to inferring the two variables together can be found, to a local
optimum solution, by alternating updates of the two beliefs in a form of the expectation–maximization (EM) algorithm.
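To make the alternation concrete, the following is a minimal, self-contained sketch (not any standard library's API) of this EM-style scheme on a 1-D grid world: the map belief is held fixed while the pose belief is updated, then the pose belief is held fixed while the map belief is updated. Every name, noise value, and sensor model here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 20                                           # number of grid cells
true_map = (rng.random(N) < 0.3).astype(float)   # ground-truth occupancy

pose_belief = np.zeros(N); pose_belief[0] = 1.0  # robot starts in cell 0
map_belief = np.full(N, 0.5)                     # fully unknown map
motion_kernel = np.array([0.1, 0.8, 0.1])        # noisy "move forward"

def sense(cell, hit_prob=0.9):
    """Noisy binary occupancy reading of the robot's current cell."""
    reading = true_map[cell]
    return reading if rng.random() < hit_prob else 1.0 - reading

true_pose = 0
for _ in range(N - 1):
    true_pose = min(true_pose + 1, N - 1)
    z = sense(true_pose)

    # Localization update (map held fixed): predict with the motion
    # model, then weight each cell by how well the map explains z.
    pose_belief = np.convolve(pose_belief, motion_kernel, mode="same")
    likelihood = map_belief * z + (1.0 - map_belief) * (1.0 - z)
    pose_belief *= likelihood
    pose_belief /= pose_belief.sum() + 1e-12

    # Mapping update (pose belief held fixed): blend the reading into
    # the occupancy estimate wherever the robot probably is.
    map_belief = map_belief * (1.0 - pose_belief) + z * pose_belief

print("estimated map:", np.round(map_belief, 2))
print("true map:     ", true_map)
```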
Algorithms
Statistical techniques used to approximate the above equations include Kalman filters and particle filters (a.k.a.
Monte Carlo methods). They provide an estimate of the posterior probability distribution for the pose of the
robot and for the parameters of the map. Methods which conservatively approximate the above model using
covariance intersection are able to avoid reliance on the statistical independence assumptions that are otherwise
made to reduce algorithmic complexity for large-scale applications.[1] Other approximation methods achieve improved
computational efficiency by using simple bounded-region representations of uncertainty.[2]
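As a concrete illustration of the particle-filter family, the following is a minimal sketch of the localization half of the problem, assuming a known map of point landmarks and range-only measurements; the landmark layout, motion model, and noise levels are illustrative assumptions rather than a definitive implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
landmarks = np.array([[2.0, 3.0], [8.0, 1.0], [5.0, 9.0]])

n = 500
particles = rng.uniform(0, 10, size=(n, 2))   # (x, y) pose hypotheses
weights = np.full(n, 1.0 / n)

def move(p, u, noise=0.1):
    """Apply control u = (dx, dy) with additive Gaussian motion noise."""
    return p + u + rng.normal(0, noise, size=p.shape)

def range_likelihood(p, z, sigma=0.3):
    """Likelihood of the measured ranges z given candidate poses p."""
    d = np.linalg.norm(p[:, None, :] - landmarks[None, :, :], axis=2)
    return np.exp(-0.5 * (((d - z) / sigma) ** 2).sum(axis=1))

true_pose = np.array([1.0, 1.0])
for _ in range(8):
    u = np.array([1.0, 0.5])
    true_pose = true_pose + u
    z = np.linalg.norm(landmarks - true_pose, axis=1) + rng.normal(0, 0.3, 3)

    particles = move(particles, u)
    weights *= range_likelihood(particles, z)
    weights /= weights.sum() + 1e-300

    # Resample when the effective sample size collapses.
    if 1.0 / (weights ** 2).sum() < n / 2:
        idx = rng.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)

print("estimate:", (particles * weights[:, None]).sum(axis=0))
print("truth:   ", true_pose)
```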
Set-membership techniques are mainly based on interval constraint propagation.[3][4] They provide a set which
encloses the pose of the robot and a set approximation of the map. Bundle adjustment, and more generally
maximum a posteriori (MAP) estimation, is another popular technique for SLAM using image data; it
jointly estimates poses and landmark positions, increasing map fidelity, and is used in commercialized SLAM
systems such as Google's ARCore (https://developers.google.com/ar/), which replaced Google's earlier
augmented reality project Tango. MAP estimators compute the most likely explanation of the robot poses and
the map given the sensor data, rather than trying to estimate the entire posterior probability.
New SLAM algorithms remain an active research area,[5] and are often driven by differing requirements and
assumptions about the types of maps, sensors and models as detailed below. Many SLAM systems can be
viewed as combinations of choices from each of these aspects.
Mapping
Topological maps are a method of environment representation which capture the connectivity (i.e., topology)
of the environment rather than creating a geometrically accurate map. Topological SLAM approaches have
been used to enforce global consistency in metric SLAM algorithms.[6]
In contrast, grid maps use arrays (typically square or hexagonal) of discretized cells to represent the
world, and make inferences about which cells are occupied. Typically the cells are assumed to be statistically
independent in order to simplify computation. Under such an assumption, $P(m_t \mid x_t, m_{t-1}, o_t)$ is set to 1 if the
new map's cells are consistent with the observation at location $x_t$, and 0 if inconsistent.
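To make the cell-update idea concrete, here is a minimal occupancy-grid sketch in which the hard 0/1 consistency rule is softened into the per-cell log-odds update commonly used in practice; the grid size, log-odds increments, and example beam are illustrative assumptions.

```python
import numpy as np

GRID = (50, 50)
log_odds = np.zeros(GRID)            # log-odds 0 == occupancy prob 0.5

L_OCC, L_FREE = 0.85, -0.4           # assumed inverse-sensor-model values

def update_cell(grid, cell, hit):
    """Fold one observation of `cell` into the map belief."""
    grid[cell] += L_OCC if hit else L_FREE

def occupancy_prob(grid):
    """Convert log-odds back to occupancy probabilities."""
    return 1.0 - 1.0 / (1.0 + np.exp(grid))

# Example: a sensor at a known pose reports cell (10, 12) occupied and
# the cells along the beam before it free.
for free_cell in [(10, 9), (10, 10), (10, 11)]:
    update_cell(log_odds, free_cell, hit=False)
update_cell(log_odds, (10, 12), hit=True)

print(occupancy_prob(log_odds)[10, 9:13])
```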
Modern self-driving cars mostly simplify the mapping problem to almost nothing, by making extensive use of
highly detailed map data collected in advance. This can include map annotations down to the level of marking
the locations of individual white-line segments and curbs on the road. Location-tagged visual data such as
Google's StreetView may also be used as part of the map. Essentially such systems reduce SLAM to a simpler
localization-only task, perhaps with moving objects such as cars and people being the only elements
updated in the map at runtime.
Sensing
[Figure: Accumulated registered point cloud from lidar SLAM.]

SLAM will always use several different types of sensors, and the powers and limits of various sensor types
have been a major driver of new algorithms.[7] Statistical independence is the mandatory requirement to cope
with metric bias and noise in measurements. Different types of sensors give rise to different SLAM algorithms
whose assumptions are most appropriate to the sensors. At one extreme, laser scans or visual features provide
details of many points within an area, sometimes rendering SLAM inference unnecessary because shapes in
these point clouds can be easily and unambiguously aligned at each step via image registration. At the opposite
extreme, tactile sensors are extremely sparse as they contain only information about points very close to the
agent, so they require strong prior models to compensate in purely tactile SLAM. Most practical SLAM tasks
fall somewhere between these visual and tactile extremes.
Sensor models divide broadly into landmark-based and raw-data approaches. Landmarks are uniquely
identifiable objects in the world whose location can be estimated by a sensor, such as wifi access points or
radio beacons. Raw-data approaches make no assumption that landmarks can be identified, and instead model
$P(o_t \mid x_t)$ directly as a function of the location.
Optical sensors may be one-dimensional (single-beam) or 2D (sweeping) laser rangefinders, 3D high-definition
lidar, 3D flash lidar, 2D or 3D sonar sensors, and one or more 2D cameras.[7] Since 2005,
there has been intense research into VSLAM (visual SLAM) using primarily visual (camera) sensors, because
of the increasing ubiquity of cameras such as those in mobile devices.[8] Visual and lidar sensors are
informative enough to allow for landmark extraction in many cases. Other recent forms of SLAM include
tactile SLAM[9] (sensing by local touch only), radar SLAM,[10] acoustic SLAM,[11] and wifi-SLAM (sensing
by strengths of nearby wifi access points).[12] Recent approaches apply quasi-optical wireless ranging for
multi-lateration (RTLS) or multi-angulation in conjunction with SLAM to compensate for erratic wireless
measurements. A kind of SLAM for human pedestrians uses a shoe-mounted inertial measurement unit as the main
sensor and relies on the fact that pedestrians are able to avoid walls; this allows an indoor positioning system
to automatically build floor plans of buildings.[13]
For some outdoor applications, the need for SLAM has been almost entirely removed by high-precision
differential GPS sensors. From a SLAM perspective, these may be viewed as location sensors whose
likelihoods are so sharp that they completely dominate the inference. However, GPS accuracy may occasionally
decline or the signal may go down entirely, e.g. during times of military conflict, scenarios of particular
interest to some robotics applications.
Kinematics modeling
The term $P(x_t \mid x_{t-1})$ represents the kinematics of the model, which usually includes information about action
commands given to a robot. As part of the model, the kinematics of the robot are included, to improve
estimates of sensing under conditions of inherent and ambient noise. The dynamic model balances the
contributions from the various sensors and the various partial error models, and finally comprises a sharp virtual
depiction as a map, with the location and heading of the robot as some cloud of probability. Mapping is the
final depiction of such a model; the map is either this depiction or the abstract term for the model.
For 2D robots, the kinematics are usually given by a mixture of rotation and "move forward" commands,
which are implemented with additional motor noise. Unfortunately the distribution formed by independent
noise in angular and linear directions is non-Gaussian, but is often approximated by a Gaussian. An alternative
approach is to ignore the kinematic term and read odometry data from robot wheels after each command—
such data may then be treated as one of the sensors rather than as kinematics.
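A minimal sketch of this rotate-then-move-forward model, with independent Gaussian noise injected into each command (all noise magnitudes here are assumptions), is given below; sampling many rollouts of the same command sequence exhibits the non-Gaussian, banana-shaped pose distribution mentioned above.

```python
import numpy as np

rng = np.random.default_rng(1)

def motion_step(pose, d_theta, d_forward,
                sigma_theta=0.02, sigma_forward=0.05):
    """Propagate pose = (x, y, heading) through one noisy command."""
    x, y, theta = pose
    theta = theta + d_theta + rng.normal(0, sigma_theta)
    dist = d_forward + rng.normal(0, sigma_forward)
    return np.array([x + dist * np.cos(theta),
                     y + dist * np.sin(theta),
                     theta])

# Many rollouts of the same two commands: the spread of end poses is
# the (approximately banana-shaped) distribution the text describes.
samples = np.array([
    motion_step(motion_step([0.0, 0.0, 0.0], 0.5, 1.0), 0.5, 1.0)
    for _ in range(1000)
])
print("mean pose:", samples.mean(axis=0))
print("pose std: ", samples.std(axis=0))
```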
Acoustic SLAM
An extension of the common SLAM problem has been applied to the acoustic domain, where environments
are represented by the three-dimensional (3D) positions of sound sources, termed aSLAM (acoustic
simultaneous localization and mapping).[14] Early implementations of this technique have utilized
direction-of-arrival (DoA) estimates of the sound source location, and rely on principal techniques of sound
localization to determine source locations. An observer, or robot, must be equipped with a microphone array
to enable the use of acoustic SLAM, so that DoA features are properly estimated. Acoustic SLAM has paved
foundations for further studies in acoustic scene mapping, and can play an important role in human-robot
interaction through speech. In order to map multiple, and occasionally intermittent, sound sources, an
acoustic SLAM system uses foundations in random finite set theory to handle the varying presence of
acoustic landmarks.[15] However, the nature of acoustically derived features leaves acoustic SLAM
susceptible to problems of reverberation, inactivity, and noise within an environment.
Audio-Visual SLAM
Originally designed for Human–robot interaction, Audio-Visual SLAM is a framework that provides the
fusion of landmark features obtained from both the acoustic and visual modalities within an environment.[16]
Human interaction is characterized by features perceived in not only the visual modality, but the acoustic
modality as well; as such, SLAM algorithms for human-centered robots and machines must account for both
sets of features. An Audio-Visual framework estimates and maps positions of human landmarks through use of
visual features like human pose, and audio features like human speech, and fuses the beliefs for a more robust
map of the environment. For applications in mobile robotics (e.g., drones, service robots), it is valuable to use
low-power, lightweight equipment such as monocular cameras or microelectronic microphone arrays. Audio-
visual SLAM can also allow for complementary function of such sensors, by compensating for the narrow field-
of-view, feature occlusions, and optical degradations common to lightweight visual sensors with the full field-
of-view and unobstructed feature representations inherent to audio sensors. The susceptibility of audio sensors
to reverberation, sound source inactivity, and noise can also be compensated accordingly through fusion of
landmark beliefs from the visual modality. Complementary function between the audio and visual modalities in
an environment can prove valuable for the creation of robotics and machines that fully interact with human
speech and human movement.
Collaborative SLAM
Collaborative SLAM combines images from multiple robots or users to generate 3D maps.[17]
Moving objects
Non-static environments, such as those containing other vehicles or pedestrians, continue to present research
challenges.[18][19] SLAM with DATMO is a model which tracks moving objects in a similar way to the agent
itself.[20]
Loop closure
Loop closure is the problem of recognizing a previously-visited location and updating beliefs accordingly. This
can be a problem because model or algorithm errors can assign low priors to the location. Typical loop closure
methods apply a second algorithm to compute some type of sensor-measure similarity, and reset the location
priors when a match is detected. For example, this can be done by storing and comparing bag-of-words vectors
of SIFT features from each previously visited location.
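A minimal sketch of this bag-of-words matching follows; the vocabulary size and similarity threshold are illustrative assumptions (real systems quantize SIFT or similar descriptors against a trained vocabulary and use more robust scoring).

```python
import numpy as np

VOCAB_SIZE = 256       # assumed visual-vocabulary size
SIM_THRESHOLD = 0.7    # assumed loop-closure acceptance threshold

def bow_vector(word_ids):
    """L2-normalized histogram of quantized feature ids."""
    v = np.bincount(word_ids, minlength=VOCAB_SIZE).astype(float)
    return v / (np.linalg.norm(v) + 1e-12)

def detect_loop(database, query):
    """Return (index, similarity) of the best match, or (None, 0.0)."""
    sims = [float(np.dot(v, query)) for v in database]
    best = int(np.argmax(sims))
    return (best, sims[best]) if sims[best] > SIM_THRESHOLD else (None, 0.0)

rng = np.random.default_rng(2)
old_words = rng.integers(0, VOCAB_SIZE, 200)      # features seen at a place
database = [bow_vector(rng.integers(0, VOCAB_SIZE, 200)) for _ in range(4)]
database.append(bow_vector(old_words))            # stored as place index 4

# Revisit place 4: most words repeat, some are new noise.
query = bow_vector(np.concatenate([old_words[:160],
                                   rng.integers(0, VOCAB_SIZE, 40)]))
print(detect_loop(database, query))               # expect a match at index 4
```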
Exploration
"Active SLAM" studies the combined problem of SLAM with deciding where to move next in order to build
the map as efficiently as possible. The need for active exploration is especially pronounced in sparse sensing
regimes such as tactile SLAM. Active SLAM is generally performed by approximating the entropy of the map
under hypothetical actions. "Multi agent SLAM" extends this problem to the case of multiple robots
coordinating themselves to explore optimally.
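As a sketch of the entropy objective, the following scores each hypothetical action by the predicted entropy of the resulting occupancy-grid belief and picks the lowest; the candidate actions and the forward model predicting their effect are illustrative stand-ins, not a real planner.

```python
import numpy as np

def map_entropy(p):
    """Total Shannon entropy (bits) of an occupancy-probability grid."""
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return float(-(p * np.log2(p) + (1 - p) * np.log2(1 - p)).sum())

current = np.full((20, 20), 0.5)          # fully unknown map

def predict_map(grid, action):
    """Assumed forward model: each action resolves a different strip."""
    out = grid.copy()
    out[action * 5:(action + 1) * 5, :] = 0.05   # cells become near-free
    return out

scores = {a: map_entropy(predict_map(current, a)) for a in range(4)}
best = min(scores, key=scores.get)
print("predicted entropy per action:", scores, "-> choose", best)
```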
Biological inspiration

SLAM has also been studied as a model of animal navigation: models of the mammalian hippocampus have
been related to SLAM-style inference,[21][22] and biologically inspired systems such as RatSLAM perform
localization and mapping using a hippocampal model.[23]
Implementation methods
Various SLAM algorithms are implemented in the open-source Robot Operating System (ROS) libraries, often
used together with the Point Cloud Library for 3D maps or visual features from OpenCV.
EKF SLAM
In robotics, EKF SLAM is a class of algorithms which utilizes the extended Kalman filter (EKF) for SLAM.
Typically, EKF SLAM algorithms are feature based, and use the maximum likelihood algorithm for data
association. In the 1990s and 2000s, EKF SLAM was the de facto method for SLAM, until the
introduction of FastSLAM.[24]
Associated with the EKF is the Gaussian noise assumption, which significantly impairs EKF SLAM's ability to
deal with uncertainty. As the amount of uncertainty in the posterior grows, the linearization in the EKF fails.[25]
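As a toy illustration of the EKF machinery, the sketch below runs EKF SLAM for a robot moving along a line with a single landmark, so the state is just [robot position, landmark position]; real systems use 2-D or 3-D poses, many landmarks, and genuinely nonlinear models, and every noise value here is an assumption.

```python
import numpy as np

s = np.array([0.0, 0.0])                 # state estimate [robot, landmark]
P = np.diag([0.01, 100.0])               # landmark initially unknown
Q, R = 0.05, 0.1                         # motion / measurement noise var

def predict(s, P, u):
    """Motion update: robot moves by u, landmark stays put."""
    F = np.eye(2)                        # motion-model Jacobian
    s = s + np.array([u, 0.0])
    P = F @ P @ F.T + np.diag([Q, 0.0])
    return s, P

def update(s, P, z):
    """Measurement update with a relative range z = landmark - robot."""
    H = np.array([[-1.0, 1.0]])          # Jacobian of h(s) = s[1] - s[0]
    y = z - (s[1] - s[0])                # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    s = s + (K * y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return s, P

rng = np.random.default_rng(3)
true_robot, true_landmark = 0.0, 5.0
for _ in range(10):
    true_robot += 1.0
    s, P = predict(s, P, 1.0)
    z = (true_landmark - true_robot) + rng.normal(0, np.sqrt(R))
    s, P = update(s, P, z)

print("estimate [robot, landmark]:", np.round(s, 2))
```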
GraphSLAM
In robotics, GraphSLAM is a SLAM algorithm which uses sparse information matrices produced by
generating a factor graph of observation interdependencies (two observations are related if they contain data
about the same landmark).[25]
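The following sketch shows the GraphSLAM idea in its simplest form: 1-D poses linked by odometry edges plus one loop-closure edge, assembled into an information matrix (dense here, sparse in practice) and solved in a single least-squares step; all measurements and weights are illustrative.

```python
import numpy as np

n = 5                                    # number of 1-D poses
# Constraints: (i, j, measured offset x_j - x_i, information weight)
edges = [(0, 1, 1.1, 1.0), (1, 2, 0.9, 1.0), (2, 3, 1.2, 1.0),
         (3, 4, 1.0, 1.0),
         (0, 4, 4.0, 10.0)]              # loop closure: strong constraint

H = np.zeros((n, n))                     # information matrix
b = np.zeros(n)                          # information vector
H[0, 0] += 1e6                           # anchor pose 0 at the origin

for i, j, z, w in edges:
    # Each edge contributes w * (x_j - x_i - z)^2 to the objective.
    H[i, i] += w; H[j, j] += w
    H[i, j] -= w; H[j, i] -= w
    b[i] -= w * z
    b[j] += w * z

x = np.linalg.solve(H, b)                # MAP estimate of all poses
print("optimized poses:", np.round(x, 3))
```

The loop-closure edge's larger weight pulls the drift accumulated along the odometry chain back toward global consistency, which is the role loop closures play in full-scale pose graphs.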
History
A seminal work in SLAM is the research of R.C. Smith and P. Cheeseman on the representation and
estimation of spatial uncertainty in 1986.[26][27] Other pioneering work in this field was conducted by the
research group of Hugh F. Durrant-Whyte in the early 1990s,[28] which showed that solutions to SLAM exist
in the infinite data limit. This finding motivates the search for algorithms which are computationally tractable
and approximate the solution.
The self-driving cars STANLEY and JUNIOR, led by Sebastian Thrun, won the DARPA Grand Challenge
and came second in the DARPA Urban Challenge in the 2000s; both included SLAM systems, bringing
SLAM to worldwide attention. Mass-market SLAM implementations can now be found in consumer robot
vacuum cleaners.[29]
See also
Computational photography
Visual odometry
Kalman filter
Inverse depth parametrization
The Mobile Robot Programming Toolkit (MRPT) project: A set of open-source, cross-platform
libraries covering SLAM through particle filtering and Kalman Filtering.
Monte Carlo localization
Multi Autonomous Ground-robotic International Challenge: A $1.6 million international
challenge requiring multiple vehicles to collaboratively map a large area
Neato Robotics
Particle filter
Project Tango
Robotic mapping
Stanley, a DARPA Grand Challenge vehicle winner using SLAM techniques
Stereophotogrammetry
Structure from motion
References
1. Julier, S.; Uhlmann, J. (2001). Building a Million-Beacon Map. Proceedings of ISAM
Conference on Intelligent Systems for Manufacturing. doi:10.1117/12.444158 (https://doi.org/10.
1117%2F12.444158).
2. Csorba, M.; Uhlmann, J. (1997). A Suboptimal Algorithm for Automatic Map Building.
Proceedings of the 1997 American Control Conference. doi:10.1109/ACC.1997.611857 (https://
doi.org/10.1109%2FACC.1997.611857).
3. Jaulin, L. (2009). "A nonlinear set-membership approach for the localization and map building
of an underwater robot using interval constraint propagation" (http://www.ensta-bretagne.fr/jauli
n/paper_reder_ieee_tro.pdf) (PDF). IEEE Transactions on Robotics. 25: 88–98.
doi:10.1109/TRO.2008.2010358 (https://doi.org/10.1109%2FTRO.2008.2010358).
S2CID 15474613 (https://api.semanticscholar.org/CorpusID:15474613).
4. Jaulin, L. (2011). "Range-only SLAM with occupancy maps; A set-membership approach" (htt
p://www.ensta-bretagne.fr/jaulin/paper_dig_slam.pdf) (PDF). IEEE Transactions on Robotics.
27 (5): 1004–1010. doi:10.1109/TRO.2011.2147110 (https://doi.org/10.1109%2FTRO.2011.214
7110). S2CID 52801599 (https://api.semanticscholar.org/CorpusID:52801599).
5. Cadena, Cesar; Carlone, Luca; Carrillo, Henry; Latif, Yasir; Scaramuzza, Davide; Neira, Jose;
Reid, Ian; Leonard, John J. (2016). "Past, Present, and Future of Simultaneous Localization
and Mapping: Toward the Robust-Perception Age". IEEE Transactions on Robotics. 32 (6):
1309–1332. arXiv:1606.05830 (https://arxiv.org/abs/1606.05830).
Bibcode:2016arXiv160605830C (https://ui.adsabs.harvard.edu/abs/2016arXiv160605830C).
doi:10.1109/tro.2016.2624754 (https://doi.org/10.1109%2Ftro.2016.2624754). hdl:2440/107554
(https://hdl.handle.net/2440%2F107554). ISSN 1552-3098 (https://www.worldcat.org/issn/1552-
3098). S2CID 2596787 (https://api.semanticscholar.org/CorpusID:2596787).
6. Cummins, Mark; Newman, Paul (June 2008). "FAB-MAP: Probabilistic localization and
mapping in the space of appearance" (http://www.robots.ox.ac.uk/~mjc/Papers/IJRR_2008_Fab
Map.pdf) (PDF). The International Journal of Robotics Research. 27 (6): 647–665.
doi:10.1177/0278364908090961 (https://doi.org/10.1177%2F0278364908090961).
S2CID 17969052 (https://api.semanticscholar.org/CorpusID:17969052). Retrieved 23 July
2014.
7. Magnabosco, M.; Breckon, T.P. (February 2013). "Cross-Spectral Visual Simultaneous
Localization And Mapping (SLAM) with Sensor Handover" (http://www.durham.ac.uk/toby.breck
on/publications/papers/magnabosco13slam.pdf) (PDF). Robotics and Autonomous Systems.
63 (2): 195–208. doi:10.1016/j.robot.2012.09.023 (https://doi.org/10.1016%2Fj.robot.2012.09.02
3). Retrieved 5 November 2013.
8. Karlsson, N.; et al. (Di Bernardo, E.; Ostrowski, J; Goncalves, L.; Pirjanian, P.; Munich, M.)
(2005). The vSLAM Algorithm for Robust Localization and Mapping. Int. Conf. on Robotics and
Automation (ICRA). doi:10.1109/ROBOT.2005.1570091 (https://doi.org/10.1109%2FROBOT.20
05.1570091).
9. Fox, C.; Evans, M.; Pearson, M.; Prescott, T. (2012). Tactile SLAM with a biomimetic whiskered
robot (http://eprints.uwe.ac.uk/18384/1/fox_icra12_submitted.pdf) (PDF). Proc. IEEE Int. Conf.
on Robotics and Automation (ICRA).
10. Marck, J.W.; Mohamoud, A.; v.d. Houwen, E.; van Heijster, R. (2013). Indoor radar SLAM A
radar application for vision and GPS denied environments (http://publications.tno.nl/publication/
34607287/4nJ48k/marck-2013-indoor.pdf) (PDF). Radar Conference (EuRAD), 2013
European.
11. Evers, Christine, Alastair H. Moore, and Patrick A. Naylor. "Acoustic simultaneous localization
and mapping (a-SLAM) of a moving microphone array and its surrounding speakers (https://spir
al.imperial.ac.uk/bitstream/10044/1/38877/2/2016012291332_994036_4133_Final.pdf)." 2016
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE,
2016.
12. Ferris, Brian, Dieter Fox, and Neil D. Lawrence. "Wifi-slam using gaussian process latent
variable models (https://www.aaai.org/Papers/IJCAI/2007/IJCAI07-399.pdf)." IJCAI. Vol. 7. No.
1. 2007.
13. Robertson, P.; Angermann, M.; Krach, B. (2009). Simultaneous Localization and Mapping for
Pedestrians using only Foot-Mounted Inertial Sensors (https://web.archive.org/web/201008160
40331/http://www.kn-s.dlr.de/indoornav/ubicomp2009_final_my_pub.pdf) (PDF). Ubicomp
2009. Orlando, Florida, USA: ACM. doi:10.1145/1620545.1620560 (https://doi.org/10.1145%2F
1620545.1620560). Archived from the original (http://www.kn-s.dlr.de/indoornav/ubicomp2009_f
inal_my_pub.pdf) (PDF) on 2010-08-16.
14. Evers, Christine; Naylor, Patrick A. (September 2018). "Acoustic SLAM" (https://eprints.soton.a
c.uk/437941/1/08340823.pdf) (PDF). IEEE/ACM Transactions on Audio, Speech, and
Language Processing. 26 (9): 1484–1498. doi:10.1109/TASLP.2018.2828321 (https://doi.org/1
0.1109%2FTASLP.2018.2828321). ISSN 2329-9290 (https://www.worldcat.org/issn/2329-
9290).
15. Mahler, R.P.S. (October 2003). "Multitarget bayes filtering via first-order multitarget moments".
IEEE Transactions on Aerospace and Electronic Systems. 39 (4): 1152–1178.
Bibcode:2003ITAES..39.1152M (https://ui.adsabs.harvard.edu/abs/2003ITAES..39.1152M).
doi:10.1109/TAES.2003.1261119 (https://doi.org/10.1109%2FTAES.2003.1261119).
ISSN 0018-9251 (https://www.worldcat.org/issn/0018-9251).
16. Chau, Aaron; Sekiguchi, Kouhei; Nugraha, Aditya Arie; Yoshii, Kazuyoshi; Funakoshi, Kotaro
(October 2019). "Audio-Visual SLAM towards Human Tracking and Human-Robot Interaction in
Indoor Environments". 2019 28th IEEE International Conference on Robot and Human
Interactive Communication (RO-MAN). New Delhi, India: IEEE: 1–8. doi:10.1109/RO-
MAN46459.2019.8956321 (https://doi.org/10.1109%2FRO-MAN46459.2019.8956321).
ISBN 978-1-7281-2622-7. S2CID 210697281 (https://api.semanticscholar.org/CorpusID:21069
7281).
17. Zou, Danping, and Ping Tan. "Coslam: Collaborative visual slam in dynamic environments (htt
p://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.463.8135&rep=rep1&type=pdf)." IEEE
transactions on pattern analysis and machine intelligence 35.2 (2012): 354-366.
18. Perera, Samunda; Pasqual, Ajith (2011). Bebis, George; Boyle, Richard; Parvin, Bahram;
Koracin, Darko; Wang, Song; Kyungnam, Kim; Benes, Bedrich; Moreland, Kenneth; Borst,
Christoph (eds.). "Towards Realtime Handheld MonoSLAM in Dynamic Environments".
Advances in Visual Computing. Lecture Notes in Computer Science. Springer Berlin
Heidelberg. 6938: 313–324. doi:10.1007/978-3-642-24028-7_29 (https://doi.org/10.1007%2F97
8-3-642-24028-7_29). ISBN 9783642240287.
19. Perera, Samunda; Barnes, Dr.Nick; Zelinsky, Dr.Alexander (2014), Ikeuchi, Katsushi (ed.),
"Exploration: Simultaneous Localization and Mapping (SLAM)", Computer Vision: A Reference
Guide, Springer US, pp. 268–275, doi:10.1007/978-0-387-31439-6_280 (https://doi.org/10.100
7%2F978-0-387-31439-6_280), ISBN 9780387314396
20. Wang, Chieh-Chih; Thorpe, Charles; Thrun, Sebastian; Hebert, Martial; Durrant-Whyte, Hugh
(2007). "Simultaneous Localization, Mapping and Moving Object Tracking" (https://www.ri.cmu.
edu/pub_files/pub4/wang_chieh_chih_2007_1/wang_chieh_chih_2007_1.pdf) (PDF). Int. J.
Robot. Res. 26 (9): 889–916. doi:10.1177/0278364907081229 (https://doi.org/10.1177%2F027
8364907081229). S2CID 14526806 (https://api.semanticscholar.org/CorpusID:14526806).
21. Howard, MW; Fotedar, MS; Datey, AV; Hasselmo, ME (2005). "The temporal context model in
spatial navigation and relational learning: toward a common explanation of medial temporal
lobe function across domains" (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1421376).
Psychological Review. 112 (1): 75–116. doi:10.1037/0033-295X.112.1.75 (https://doi.org/10.10
37%2F0033-295X.112.1.75). PMC 1421376 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC14
21376). PMID 15631589 (https://pubmed.ncbi.nlm.nih.gov/15631589).
22. Fox, C; Prescott, T (2010). "Hippocampus as unitary coherent particle filter". The 2010
International Joint Conference on Neural Networks (IJCNN) (http://eprints.whiterose.ac.uk/1086
22/1/Fox2010_HippocampusUnitaryCoherentParticleFilter.pdf) (PDF). pp. 1–8.
doi:10.1109/IJCNN.2010.5596681 (https://doi.org/10.1109%2FIJCNN.2010.5596681).
ISBN 978-1-4244-6916-1. S2CID 10838879 (https://api.semanticscholar.org/CorpusID:108388
79).
23. Milford, MJ; Wyeth, GF; Prasser, D (2004). "RatSLAM: A hippocampal model for simultaneous
localization and mapping". IEEE International Conference on Robotics and Automation, 2004.
Proceedings. ICRA '04. 2004 (https://eprints.qut.edu.au/37593/1/c37593.pdf) (PDF). pp. 403-
408 Vol.1. doi:10.1109/ROBOT.2004.1307183 (https://doi.org/10.1109%2FROBOT.2004.13071
83). ISBN 0-7803-8232-3. S2CID 7139556 (https://api.semanticscholar.org/CorpusID:7139556).
24. Montemerlo, M.; Thrun, S.; Koller, D.; Wegbreit, B. (2002). "FastSLAM: A factored solution to the
simultaneous localization and mapping problem" (https://www.cs.cmu.edu/~mmde/mmdeaaai2
002.pdf) (PDF). Proceedings of the AAAI National Conference on Artificial Intelligence.
pp. 593–598.
25. Thrun, S.; Burgard, W.; Fox, D. (2005). Probabilistic Robotics. Cambridge: The MIT Press.
ISBN 0-262-20162-3.
26. Smith, R.C.; Cheeseman, P. (1986). "On the Representation and Estimation of Spatial
Uncertainty" (http://www.frc.ri.cmu.edu/~hpm/project.archive/reference.file/Smith&Cheeseman.p
df) (PDF). The International Journal of Robotics Research. 5 (4): 56–68.
doi:10.1177/027836498600500404 (https://doi.org/10.1177%2F027836498600500404).
S2CID 60110448 (https://api.semanticscholar.org/CorpusID:60110448). Retrieved 2008-04-08.
27. Smith, R.C.; Self, M.; Cheeseman, P. (1986). "Estimating Uncertain Spatial Relationships in
Robotics" (https://web.archive.org/web/20100702155505/http://www-robotics.usc.edu/~maja/te
aching/cs584/papers/smith90stochastic.pdf) (PDF). Proceedings of the Second Annual
Conference on Uncertainty in Artificial Intelligence. UAI '86. University of Pennsylvania,
Philadelphia, PA, USA: Elsevier. pp. 435–461. Archived from the original (http://www-robotics.u
sc.edu/~maja/teaching/cs584/papers/smith90stochastic.pdf) (PDF) on 2010-07-02.
28. Leonard, J.J.; Durrant-whyte, H.F. (1991). "Simultaneous map building and localization for an
autonomous mobile robot". Intelligent Robots and Systems' 91.'Intelligence for Mechanical
Systems, Proceedings IROS'91. IEEE/RSJ International Workshop on: 1442–1447.
doi:10.1109/IROS.1991.174711 (https://doi.org/10.1109%2FIROS.1991.174711). ISBN 978-0-
7803-0067-5. S2CID 206935019 (https://api.semanticscholar.org/CorpusID:206935019).
29. Knight, Will (September 16, 2015). "With a Roomba Capable of Navigation, iRobot Eyes
Advanced Home Robots" (https://www.technologyreview.com/s/541326/the-roomba-now-sees-
and-maps-a-home/). MIT Technology Review. Retrieved 2018-04-25.
External links
Probabilistic Robotics (http://www.probabilistic-robotics.org/) by Sebastian Thrun, Wolfram
Burgard and Dieter Fox with a clear overview of SLAM.
SLAM For Dummies (A Tutorial Approach to Simultaneous Localization and Mapping) (https://d
space.mit.edu/bitstream/handle/1721.1/36832/16-412JSpring2004/NR/rdonlyres/Aeronautics-a
nd-Astronautics/16-412JSpring2004/A3C5517F-C092-4554-AA43-232DC74609B3/0/1Aslam_
blas_report.pdf).
Andrew Davison (http://www.doc.ic.ac.uk/%7Eajd/index.html) research page at the Department
of Computing, Imperial College London about SLAM using vision.
openslam.org (https://openslam-org.github.io/) A good collection of open source code and
explanations of SLAM.
Matlab Toolbox of Kalman Filtering applied to Simultaneous Localization and Mapping (http://ei
a.udg.es/~qsalvi/Slam.zip) Vehicle moving in 1D, 2D and 3D.
FootSLAM research page (https://web.archive.org/web/20120313064730/http://www.kn-s.dlr.de/
indoornav/footslam_video.html) at DLR including the related Wifi SLAM and PlaceSLAM
approaches.
SLAM lecture (https://www.youtube.com/watch?v=B2qzYCeT9oQ&list=PLpUPoM7Rgzi_7YWn
14Va2FODh7LzADBSm) Online SLAM lecture based on Python.