Topological Simultaneous Localization and Mapping: A Survey
SUMMARY
One of the main challenges in robotics is navigating autonomously through large, unknown, and unstructured environments. Simultaneous localization and mapping (SLAM) is currently regarded as a viable solution to this problem. As the traditional metric approach to SLAM runs into computational difficulties when exploring large areas, increasing attention is being paid to topological SLAM, which is expected to provide sufficiently accurate location estimates while being significantly less computationally demanding. This paper provides an introductory overview of the most prominent techniques that have been applied to topological SLAM in terms of feature detection, map matching, and map fusion.

KEYWORDS: Mobile robots; SLAM; Topological modeling of robots; Feature detection; Robot localization
1. Introduction
Mobile robotics' ultimate aim is to develop fully autonomous entities capable of performing complex tasks, without the need for human intervention, over extended periods of time. Over the past three decades, this objective has repeatedly run into harsh difficulties that have hindered progress. The most recurrent issues in the literature, which are yet to be completely resolved, are stated below.
A mobile robot must be able to navigate through the environment in order to achieve its goals. According to Leonard and Durrant-Whyte,63 this general problem can be summarized in three questions: “Where am I?,” “Where am I going?,” and “How should I get there?” The first question addresses the localization problem, which aims to estimate the robot’s pose (i.e., location and orientation) using data gathered by distinct sensors and knowledge of previous locations. However, the presence of noisy sensor measurements makes this problem harder than it may seem at first sight. The precision with which it is solved decisively affects the answer to the other two questions, as a robot must localize itself in the environment to interact with it safely, decide what the next step should be, and determine how to accomplish it.
During the localization process, a robot must resort to some kind of reference system; in other words, it requires a map. The extensive research survey carried out by Thrun110 collects the main open issues concerning robotic mapping, which are succinctly presented below. Currently, there are robust methods for mapping structured, static, and bounded environments, whereas mapping unstructured, dynamic, or large-scale unknown environments remains largely an unsolved problem. According to Thrun,110 the robotic mapping problem is “that of acquiring a spatial model of a robot’s environment.” To this end, robots must be equipped with sensors that enable them to perceive the outside world. Once again, sensor errors and range limitations pose a great difficulty.
The first challenge in robotic mapping stems from measurement noise. Usually, this issue can be overcome if the noise is statistically independent, as it can be canceled out by performing enough measurements. Unfortunately, this does not always hold in robotic mapping because, whenever incremental sensors (e.g., encoders) are used, errors in navigation control accumulate progressively and condition the way in which subsequent measurements are interpreted. As a result, if a robot does not rely on the layout of the environment, whatever it infers about its surroundings is plagued by systematic, correlated errors. Leonard and Durrant-Whyte64 state the correlation problem as follows:

If a mobile robot uses an observation of an imprecisely known target to update its position, the resulting vehicle position estimate becomes correlated with the feature location estimate. Likewise, correlations are introduced if an observation taken from an imprecisely known position is used to update the location estimate of a feature in the map.
The second difficulty of the robot mapping problem derives from the amount and complexity of the features required to describe the objects being mapped, as the computational burden grows exponentially as the map becomes more detailed. Restricting the description to corridors, intersections, and doors is obviously very different from building a 3D visual map.
A third, and perhaps the hardest, issue is the correspondence problem, which attempts to determine whether sensor measurements taken at different times correspond to the same physical entity. A specific instance of this problem occurs when returning to an already visited area, because the robot has to realize that it has arrived at a previously mapped location. This is known as the loop-closing problem. Another particular case is the so-called first location problem or kidnapped robot problem,53 which occurs when a robot is placed in an unknown position of an environment for which it has a map.
Fourth, the vast majority of environments are dynamic. Doh et al.28 further classify dynamic environments into temporary dynamics, which are instantaneous changes that can be discarded by consecutive sensor measurements (e.g., moving objects like walking people), and semi-permanent dynamics or scene variability,58 which are changes that persist for a prolonged period of time. This second type of dynamics makes the correspondence problem even more difficult to solve, as it provides another manner in which apparently inconsistent sensor measurements can be interpreted. Suppose a robot perceives a closed door that was previously modeled as open. This observation may be explained by two equally plausible hypotheses: either the door position has changed, or the robot is wrong about its current location. At present, there are almost no mapping algorithms capable of coping with this difficulty. On the contrary, most approaches assume a static world and, as a consequence of this simplification, anything that moves apart from the robot is regarded as noise. In fact, the majority of the experimental tests in the literature are carried out in rather controlled environments and never mention how to deal with these troublesome dynamics. Doh et al.28 are an exception to this trend, as they take door position changes into consideration.
Finally, robots must navigate through the environment while mapping, on account of sensor range limitations. The operation of generating navigation commands with the aim of building a map is known as robotic exploration. Although the commands issued during the exploration of the environment provide relevant information about the locations at which different sensor measurements were obtained, motion is also subject to errors (e.g., wheel slippage). Therefore, these controls alone are insufficient to determine a robot’s pose.
The majority of the problems that researchers currently face are of a computational nature.8 In order to overcome the correspondence problem, each location in the environment must be unequivocally distinguishable from all the rest. This implies gathering, in every place analyzed, either plenty of simple features or a more restricted number of features carrying richer information. In either case, the computational burden rapidly grows to intractable levels in large environments. Therefore, most approaches trade off computation time against precision or global distinctiveness; that is, they either limit the number of locations considered or reduce the number of features analyzed in each place.
the geometry of the environment from sensor measurements and then inferred the topology from it (see Chatila and Laumond, 198517 for instance), they proposed constructing a topological description based on simple control strategies in the first place, and incorporating local metric information in each of the identified nodes afterwards.
Nevertheless, considering that metric maps are more accurate and that a hybrid approach helps to overcome storage problems, why should purely topological maps be used? To begin with, topological navigation is a behavior employed by a variety of animal species, including human beings. We do not need to answer the question “Where am I?” in millimeters and degrees in order to move safely through the environment.15 Rather than navigating using coordinates, we have an abstract notion of distance but are still able to recognize where we are in space.89 Moreover, Brooks14 supports the belief that topological maps are a means of coping with uncertainty in mobile robot navigation. The absence of metric and geometric information, which is replaced by notions of proximity and order, eliminates dead-reckoning errors, which no longer accumulate.

In conclusion, a topological representation resembles the intuitive human navigation system, which has been proven to deal efficiently with uncertainty, and results in a straightforward map from which path planning follows naturally.
Fig. 2. (Colour online) Topological SLAM overview. From left to right: the system acquires sensory information from one or several sources; selected features are extracted and encoded; the current location is compared with a database of previously visited nodes, resulting in a belief state (i.e., the robot could be in several locations with different probabilities); finally, once the uncertainty has been resolved, either a new node is added to the database or the information of an existing one is updated.
[Table fragment: approaches grouped under “Cameras,” including Hafner, 2000;46 Ulrich & Nourbakhsh, 2000;113 Choset & Nagatani, 2001;18 and Tomatis et al., 2002;111 followed by per-column totals (7, 12, 10, 3, 7, 14, 3).]
to what have been called geometric features in Table II (i.e., distances to different obstacles that make it possible to identify simple topological landmarks such as corners or dead ends) and gateways, which are an extension of the former used to detect openings.
Table II. Landmark extraction techniques for topological navigation, grouped according to sensor technologies (range sensors, i.e., sonar and laser; cameras; and both) and the type of features obtained: geometric features, gateways, Douglas-Peucker, EM, RANSAC, Hough transform, histograms, Haar wavelets, Sobel operator, invariant columns, SIFT, SURF, KLT, Harris-affine, MSER, and Bayesian surprise.

With the rise of laser range scanners, these approaches became more precise owing to the acquisition of dense point clouds and, more recently, with the introduction of computer vision techniques, simple methods like color histograms were applied. For instance, Ulrich and Nourbakhsh113 extract histograms in the RGB and HSL color spaces from omnidirectional images. However, it was soon widely accepted that the information obtained from histograms was not sufficiently distinctive and reliable to serve as a sole characteristic detector: histograms can be identical for two images with entirely different content, and they are very sensitive to illumination changes. Thus, this approach has now become a part of, or a complement for, other more consistent and informative methods. In addition, other procedures such as line extractors, Haar wavelets, edge, keypoint, and affine covariant region detectors, and Bayesian surprise saw the light of day.
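To make the histogram signature concrete, the following minimal sketch (in Python with OpenCV and NumPy; the bin count and intersection measure are illustrative choices, not taken from the cited systems) builds a normalized per-channel color histogram and compares two of them by histogram intersection:

import cv2
import numpy as np

def hist_signature(img_bgr, bins=16):
    """Normalized per-channel color histogram used as a cheap place signature."""
    chans = [cv2.calcHist([img_bgr], [c], None, [bins], [0, 256]) for c in range(3)]
    h = np.concatenate(chans).ravel()
    return h / h.sum()

def hist_similarity(h1, h2):
    """Histogram intersection: 1.0 means identical color distributions."""
    return float(np.minimum(h1, h2).sum())

As the text notes, two very different scenes can nevertheless produce near-identical signatures, which is why such histograms are now used only as a complement to richer detectors.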
The rest of this section concentrates on the detection methods found in the literature. For techniques that are common knowledge in the field, only references to surveys or seminal papers are given. Emphasis is placed on the more recent and original techniques.
5.2.1. Line extractors. Human-made environments are full of vertical and horizontal lines, which therefore constitute an invaluable source of topological information. Line extraction techniques are usually employed in conjunction with laser range scanners. There exist many approaches to line extraction, some of which are compared by Nguyen et al.82 As far as topological feature detection is concerned, the Douglas-Peucker algorithm108 (also known as split-and-merge), EM (Expectation-Maximization) applied to line fitting,87 the Hough transform,38 and RANSAC (RANdom SAmple Consensus)37 have been employed. Finally, it is worth mentioning that the latter is a general algorithm for model fitting in the presence of many data outliers and has further applications; for instance, Nüchter and Hertzberg84 adopt this technique for plane extraction.
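As an illustration of the latter, a minimal RANSAC line fit for a 2D laser scan (Python/NumPy; the iteration count and tolerances are invented for illustration) repeatedly samples two points, hypothesizes the line through them, and keeps the hypothesis with the most inliers:

import numpy as np

def ransac_line(points, n_iters=200, inlier_tol=0.05, min_inliers=20, rng=None):
    """Fit one 2D line to noisy scan points with RANSAC.

    points: (N, 2) array of Cartesian scan points.
    Returns (point_on_line, unit_direction, inlier_mask) or None.
    """
    rng = rng or np.random.default_rng()
    best, best_count = None, 0
    n = len(points)
    for _ in range(n_iters):
        i, j = rng.choice(n, size=2, replace=False)
        p0, p1 = points[i], points[j]
        d = p1 - p0
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d = d / norm
        # Perpendicular distance of every point to the candidate line.
        normal = np.array([-d[1], d[0]])
        dist = np.abs((points - p0) @ normal)
        inliers = dist < inlier_tol
        count = int(inliers.sum())
        if count > best_count and count >= min_inliers:
            best, best_count = (p0, d, inliers), count
    return best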
5.2.2. Haar wavelets. Yet another approach to topological feature extraction is that of Lui and Jarvis,72 who use a feature extraction method for unwarped stereo panoramic images based on the standard 2D Haar wavelet decomposition proposed by Jacobs et al.49 and adapted for mobile robotics by Ho and Jarvis.47 Similarly to the Fourier transform, which decomposes complex signals into a series of sine waves, Haar wavelets express an image as a summation of simpler images, from which a signature can be extracted that is discriminative and robust to occlusions and lighting changes, although rotation variant.
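A minimal sketch of one level of the standard 2D Haar decomposition that such signatures build on (Python/NumPy; a real signature would recurse on the low-pass quadrant and keep only the largest-magnitude coefficients, as in Jacobs et al.49):

import numpy as np

def haar2d_level(img):
    """One level of the standard 2D Haar transform (image sides must be even)."""
    img = img.astype(float)
    # Rows: average and difference of adjacent pixel pairs.
    lo = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2)
    hi = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2)
    rows = np.hstack([lo, hi])
    # Columns: the same transform applied vertically.
    lo = (rows[0::2, :] + rows[1::2, :]) / np.sqrt(2)
    hi = (rows[0::2, :] - rows[1::2, :]) / np.sqrt(2)
    return np.vstack([lo, hi])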
5.2.3. Edge-based detectors. These are used to obtain outlines in the context of computer vision. In particular, Tapus108 utilizes the Sobel operator as an intermediate step to obtain segments of vertical edges, whereas Goedemé et al.42 employ this operator to apply the so-called invariant column segments method, which is not strictly speaking an edge detector but a specialization of the affine invariant regions commented on below. For further reference, a comparison of several edge detectors can be found in Maini and Aggarwal.76
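For reference, extracting vertical-edge candidates with the Sobel operator takes only a few lines in OpenCV; a sketch (the file name and threshold are illustrative):

import cv2
import numpy as np

img = cv2.imread("corridor.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)  # horizontal intensity gradient
# Vertical edges respond strongly in gx; threshold its magnitude.
vertical_edges = np.abs(gx) > 0.5 * np.abs(gx).max()
# Column-wise edge counts highlight candidate vertical segments.
column_profile = vertical_edges.sum(axis=0)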
5.2.4. Keypoint detectors. In the context of feature detection using computer vision, blobs are points in the image that are either significantly brighter or darker than their neighbors. An initial comment is required before proceeding with the most remarkable algorithms. Although the title alludes to detectors, most of the methods cited below also include a descriptor to encode the distinguishing data that can be extracted from the features localized by the detector. For the sake of simplicity, they will be treated as a whole because they are usually presented together. Nevertheless, it is important to bear in mind that detectors and descriptors are interchangeable.

The most pre-eminent blob detector algorithm is the Scale Invariant Feature Transform (SIFT),69,70,102 which is the current standard for vision-based SLAM. Later on, Bay et al.9 developed Speeded-Up Robust Features (SURF) with the aim of reducing the computational burden of SIFT, which makes it a better candidate for real-time applications. Last but not least, it is worth mentioning another promising feature detector named Center Surround Extremas (CenSurE),1 which is much faster than the previous two methods at the expense of a slight increase in rotation sensitivity. More recently, Ebrahimi and Mayol-Cuevas34 presented SUSurE, an interest point detector and descriptor based on CenSurE, which executes two to three times faster with only a slight loss in repeatability.

However, there exist other types of interest points apart from blobs. An example is Kanade-Lucas-Tomasi (KLT), included within the OpenCV library,13 a corner detector that has also been applied in topological SLAM systems to perform visual odometry.72
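A minimal detection-and-matching sketch with SIFT, including Lowe's ratio test70 (this assumes a recent OpenCV build where cv2.SIFT_create is available in the main package; the image names are placeholders):

import cv2

img1 = cv2.imread("place_a.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("place_b.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test: keep matches whose best distance is clearly
# smaller than the second best, rejecting ambiguous correspondences.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]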
5.2.5. Affine covariant region detectors. Affine covariant region detectors emerged with the idea of extracting features from images that are robust to perspective transformations. It is unclear which is the best among them, as they are often complementary and well suited to extracting regions with different properties. Mikolajczyk et al.77 carried out a survey comparing the most common detectors, among which Harris-affine and Maximally Stable Extremal Regions (MSER) can be found.

It is also interesting to point out that Romero and Cazorla97 run the JSEG segmentation algorithm27 prior to applying MSER described with SIFT, with the aim of grouping features according to the image region to which they belong and producing a graph with them.
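Detecting MSERs is a one-liner in OpenCV; a sketch assuming a grayscale input and default detector parameters:

import cv2

img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(img)  # pixel lists and bounding boxes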
5.2.6. Bayesian surprise. Mainly based on the concept of saliency, this method states that relevant stimuli represent statistical outliers or, in other words, sudden or unexpected changes in the environment.48,93 Thus, at least one of their attributes needs to be unique or rare over the entire scene (e.g., a red coat is perceptually salient among black suits but not among many other red coats). This method, which claims to fire at almost all locations that a human being would regard as landmarks, as well as at some that would not, can be implemented for different sensor technologies, predominantly laser and cameras, and applied to several elementary features such as color, intensity, orientation, or motion. For example, Ranganathan and Dellaert93 illustrate this technique using laser range scanners and, in the context of computer vision, by simultaneously applying this method to SIFT descriptors computed over Harris-affine and MSER features.
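The essence of Bayesian surprise is the distance between the belief before and after an observation, typically measured as the Kullback-Leibler divergence KL(posterior || prior). A one-dimensional Gaussian sketch (Python/NumPy; this is a didactic reduction, not the cited authors' implementation):

import numpy as np

def surprise_gaussian(mu0, var0, x, obs_var):
    """Bayesian surprise of observation x under a Gaussian belief N(mu0, var0).

    The belief is updated with a measurement of variance obs_var; surprise is
    KL(posterior || prior) in nats, large when x contradicts the prior.
    """
    var1 = 1.0 / (1.0 / var0 + 1.0 / obs_var)   # posterior variance
    mu1 = var1 * (mu0 / var0 + x / obs_var)     # posterior mean
    return 0.5 * (var1 / var0 + (mu1 - mu0) ** 2 / var0 - 1.0 + np.log(var0 / var1))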
5.2.7. A hybrid approach: fingerprint of places. Having set forth the most common feature extraction methods, it is clear that they all have advantages and disadvantages that make them suitable for specific applications. Thus, in the pursuit of a more generally applicable method, some authors have tried to combine several of the aforementioned techniques.

An interesting approach has its origin in the paper by Lamon et al.,62 where the term fingerprint of places was coined to refer to a circular list of complementary simple features (color patches and vertical edges), obtained from omnidirectional images, whose order matches their relative position around the robot. This idea led to a series of pieces of work that further developed the concept of fingerprint. Of special relevance is that of Tapus and Siegwart,109 where, thanks to the information provided by two 180° laser range scanners, corners and empty areas (i.e., when there are more than 20° of free space between two features) are additionally detected.

More recently, Liu et al.67 proposed a much simpler fingerprint procedure, exclusively based on panoramic images, which extracts vertical edges under the belief that the prevailing lines naturally segment a structured environment into meaningful areas, and uses the distances among those lines and the mean U-V chrominance of the defined regions as a lightweight descriptor called FACT, which was later given a statistical interpretation and renamed DP-FACT.68
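Because a fingerprint is a circular sequence, comparing two of them amounts to a circular string match. The toy sketch below scores all rotations with a simple mismatch count (the published systems use global alignment algorithms instead, and the feature symbols here are invented):

def fingerprint_distance(fp_a, fp_b):
    """Smallest mismatch count between fp_a and any rotation of fp_b."""
    if len(fp_a) != len(fp_b):
        return max(len(fp_a), len(fp_b))  # crude penalty for length mismatch
    n = len(fp_b)
    best = n
    for shift in range(n):
        rotated = fp_b[shift:] + fp_b[:shift]
        best = min(best, sum(a != b for a, b in zip(fp_a, rotated)))
    return best

# 'v' = vertical edge, 'c' = corner, 'e' = empty area, capitals = color patches
print(fingerprint_distance("vAcvBe", "cvBevA"))  # 0: same place, rotated view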
well-known Joint Compatibility Branch and Bound (JCBB) test,81 it is faster and more accurate, and performs better on non-linear problems.
Finally, because map matching becomes more demanding as the mapped area grows, some authors, like Goedemé et al.42 or Romero and Cazorla,97 propose applying clustering techniques like kd-trees to reduce the dimensionality of the features in order to optimize the search and comparison processes. Cummins and Newman22 employ a Chow-Liu tree. Notice, nevertheless, that both Goedemé et al.42 and Cummins and Newman22 build their trees offline due to time constraints.
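A sketch of the kd-tree idea using SciPy (random arrays stand in for real descriptors): the tree is built once over the database, possibly offline, and each query descriptor then retrieves its nearest neighbor in roughly logarithmic time instead of by linear scan.

import numpy as np
from scipy.spatial import cKDTree

db = np.random.rand(10000, 128)      # database of 128-D descriptors (toy data)
tree = cKDTree(db)                   # built once over the whole map

query = np.random.rand(5, 128)       # descriptors from the current image
dist, idx = tree.query(query, k=1)   # nearest database descriptor per query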
5.4.3. Markov decision processes. Markov decision processes, namely Hidden Markov Models (HMMs)44,103 and their extension, Partially Observable Markov Decision Processes (POMDPs),16,51,54 have also been employed to determine the navigation policy that the robot should follow in order to reduce uncertainty. Subsequently, Tomatis et al.111 and Tapus and Siegwart109 extended POMDPs to perform multi-hypothesis tracking and determine a pose distribution. However, as computing an optimal policy is intractable in large environments, Tomatis et al.111 suggested using the most likely state (MLS) criterion to choose the next action, whereas Tapus and Siegwart109 opted for another heuristic, the entropy of the current location probability distribution, to decide the control commands. In the latter case, whenever the entropy falls below an experimentally determined threshold, the robot's location is assumed certain and the map is updated accordingly, either by adding a new node or by merging the latest fingerprint information with the node representative.
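The entropy heuristic is straightforward to state in code; a sketch (the belief values and threshold below are invented for illustration):

import numpy as np

def location_entropy(belief):
    """Shannon entropy (nats) of the belief over topological nodes."""
    p = np.asarray(belief, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

belief = [0.90, 0.05, 0.03, 0.02]    # hypothetical multi-hypothesis belief
if location_entropy(belief) < 0.5:   # threshold tuned experimentally
    pass  # location deemed certain: add a node or merge fingerprints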
Loop closures are also identified by means of the POMDP. Whenever the robot returns to a previously visited location, the probability distribution should split in two: one hypothesis corresponds to a new location and the other to a node already present in the map. If both divergent peaks evolve similarly over time, a loop closure is assumed.108
5.4.4. Probabilistic topological maps. A Bayesian inference framework has also been explored for topological mapping. Ranganathan and Dellaert coined the term Probabilistic Topological Map (PTM), a sample-based representation that estimates the posterior distribution over all the possible topologies that can be built given a set of sensor measurements.90,91 Because this is a problem of a combinatorial nature, they proposed approximating the solution by drawing samples from the distribution using Markov-Chain Monte Carlo (MCMC) sampling.91,95 In principle, this technique is applicable to any landmark detection scheme, as long as the landmark detection algorithm does not produce false negatives (i.e., the robot's sensors do not fail to recognize landmarks).

Afterwards, they presented Rao-Blackwellized Particle Filters (RBPFs)29,79 as an alternative to MCMC sampling for PTMs.90,92,94 Particle filtering is yet another Monte Carlo technique used to probabilistically estimate the state of a system under noisy measurement conditions. They claim that this technique permits incremental inference in the space of topologies (conversely to MCMC, which is a batch algorithm) and can therefore run in real time. In order to overcome the sample degeneracy problem over time,30 which can lead to convergence issues, they suggest integrating odometric data so as to draw more likely particles with higher probability. However, the selection of the appropriate number of particles remains an open issue, as particle filtering inherently carries the risk of discarding the correct map. Koenig et al.52 also employ an RBPF. Each particle incrementally constructs its own graph of the environment using color histograms and odometry information. Local graphs are compared with the global graph to determine the best matches and, simultaneously, the resampling weights for each particle.
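At the core of any such particle filter is the resampling step driven by those weights. A generic systematic-resampling sketch (Python/NumPy; in a PTM each particle would carry a topology hypothesis rather than a pose):

import numpy as np

def resample(particles, weights, rng=np.random.default_rng()):
    """Systematic resampling: draw a new particle set proportional to weight."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    positions = (rng.random() + np.arange(len(w))) / len(w)
    idx = np.searchsorted(np.cumsum(w), positions)
    return [particles[i] for i in idx], np.full(len(w), 1.0 / len(w))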
The main advantage of PTMs is that all decisions are reversible, so the algorithm is capable of recovering from incorrect loop closures. In the end, only a small set of similar topologies have non-negligible probabilities. The experiments conducted suggest that, if the environment is unambiguous, the ground-truth topology is assigned a much higher posterior probability mass than the other alternatives.
5.4.5. Voronoi graphs and neighboring information. Choset and Nagatani18 represent the environment by means of a generalized Voronoi graph (GVG). A GVG is a one-dimensional set of points equidistant to n obstacles in n dimensions. When used in the plane, it reduces to the set of points equidistant to two (or more) obstacles, and defines a roadmap of the robot's free space. Voronoi nodes, which are locations equidistant to n + 1 obstacles, are used as natural landmarks because they provide topologically meaningful information that can be extracted online (e.g., junctions, dead ends, etc.). The main problem with Voronoi nodes is that they are very sensitive to changes in the configuration of the environment: if non-structural obstacles are moved, Voronoi vertices may appear or vanish.
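For intuition, the planar construction can be reproduced with SciPy on point obstacles (a simplification: real GVGs are built online from range data and extended obstacles, not from a handful of points):

import numpy as np
from scipy.spatial import Voronoi

obstacles = np.array([[0, 0], [4, 0], [4, 4], [0, 4], [2, 1]])  # toy obstacles
vor = Voronoi(obstacles)
# vor.vertices: points equidistant to 3+ obstacles (the Voronoi nodes)
# vor.ridge_vertices: roadmap edges connecting those nodes
print(vor.vertices)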
In order to achieve SLAM, the robot follows simple control commands, looking for these nodes in the environment. Loop closing is carried out by comparing the subgraph built from the latest observed nodes to the already encoded map. Ambiguity is resolved by following a candidate path and ruling out inconsistent matches based on the newly visited places. As is, this method assumes that the robot is equipped with infinite-range sonar sensors, and it is only suitable for static and planar environments with plenty of obstacles. Based on this idea, Beeson et al.10 introduced extended Voronoi graphs (EVGs) to address the problems of GVGs derived from limited sensory horizons by means of local perceptual maps (LPMs).
The research path initiated by Werner et al.115,118 is also remarkable. They apply Bayesian inference to obtain a topological map in ambiguous environments that explains the set of observations without the need for motion knowledge. The method is based on guaranteeing consistency between the local neighboring information extracted from the latest n images and the constructed map, while keeping the number of topological vertices as low as possible, following Occam's razor. Topological places, where captures are acquired, are identified by means of a GVG using sonar readings. The algorithm assumes that there exists some prior information about the connectivity, but not about the number of distinct locations in the environment.
Initially, a sequential Monte Carlo technique was employed to maintain a series of candidate maps,116 which was later replaced by a particle filter.117 In order to be able to recover from incorrect loop closures, Tully et al.112 introduced a multi-hypothesis approach based on a tree expansion algorithm specifically conceived for edge-ordered graphs,32 as well as a series of pruning rules to keep the number of hypotheses under control. Recently, Tao et al.107 discussed the benefits of saturated generalized Voronoi graphs (S-GVGs), which employ a wall-following behavior to navigate within sensor range limits, and performed SLAM using a similar hypothesis tree. Finally, Werner et al.119 suggested applying stochastic local search (SLS) to produce the topological map.
Before concluding this section, it is worth mentioning the work by Doh et al.,28 who deal with semi-permanent dynamics induced by door opening and closing. They classify GVG nodes into invariant (i.e., junctions, corners, and ends of corridors) and variant (i.e., doors) nodes. Nodes are told apart using the areas between two local minima of a sensor scan (which identify doors), and by looking for a vanishing point in a range scan or an image (for invariant nodes).
5.4.6. Appearance-based topological SLAM. Most early approaches to inferring topological maps based only on visual information (they discard odometric data because it is prone to cumulative errors, especially on slippery surfaces) rely on SIFT keypoints extracted from omnidirectional images. Some examples include the work by Zivkovic et al.,121 who solve the map building process using graph cuts, and Goedemé et al.,41 who resort to the Dempster-Shafer theory of evidence26 for loop closing. Unfortunately, these solutions require offline computation.
Later on, Fraundorfer et al.39 presented a real-time framework based on the bag-of-words paradigm,19 where images are quantized in terms of unordered elementary features taken from an offline-built dictionary. Loop closing is identified by visual word comparison following a voting scheme. Romero and Cazorla97,98 take a similar approach but without the need for a dictionary. They build graphs from homogeneous regions using MSER features described with SIFT and use the GTM algorithm for matching. They then compare the graphs from newly acquired images with the representative of the latest visited topological node. If the matching score is below a threshold, the graph is then compared, using another threshold, with the rest of the encoded vertices in order to identify loop closures. If no match is found, a new node is added to the map. The main drawback of this algorithm is that it is extremely sensitive to the two thresholds, whose values have a decisive impact on the final topology obtained.
Angeli et al.4–6 proposed a method that builds the vocabulary online, following the procedure developed by Filliat.35 The problem of loop closing is addressed in a Bayesian manner. The probability of transition between locations is modeled using a sum of Gaussians to assign higher probability to adjacent states, whereas the correspondence likelihood is computed by means of voting using the tf-idf coefficient.104
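A compact sketch of tf-idf weighting and cosine-similarity voting over visual-word histograms (Python/NumPy; toy matrices rather than the cited implementation):

import numpy as np

def tf_idf(word_counts):
    """tf-idf weights for images described as visual-word count histograms.

    word_counts: (n_images, n_words) count matrix.
    """
    counts = np.asarray(word_counts, dtype=float)
    tf = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
    df = (counts > 0).sum(axis=0)                 # images containing each word
    idf = np.log(len(counts) / np.maximum(df, 1))
    return tf * idf

def best_match(query_vec, db_vecs):
    """Index and score of the map image most similar to the query."""
    sims = db_vecs @ query_vec / (
        np.linalg.norm(db_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-12)
    return int(np.argmax(sims)), float(np.max(sims))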
Furthermore, Fast Appearance-Based Mapping (FAB-MAP), a Bayesian framework for navigation and mapping exclusively based on appearance information, developed by Cummins and Newman as a solution to loop closure detection,20–22,24 has attracted a great deal of attention. It relies on a vocabulary model built offline by clustering SURF features extracted from a large collection of independent images. The words obtained are then organized using a Chow-Liu tree to capture the dependencies among them (i.e., car wheels and car doors are likely to appear together). This vocabulary model is used to approximate the partition function in the Bayesian formulation, which provides a natural probabilistic measure of when an observation should be labeled as a new location.
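The Chow-Liu tree is the spanning tree over the words that maximizes the sum of pairwise mutual information. A naive sketch (Python/NumPy, with a simple Prim's loop; FAB-MAP computes this offline over thousands of words, so this is purely illustrative):

import numpy as np

def chow_liu_edges(occurrences):
    """Maximum-mutual-information spanning tree over binary visual words.

    occurrences: (n_images, n_words) 0/1 word-presence matrix.
    Returns the tree edges as (parent, child) index pairs.
    """
    X = np.asarray(occurrences, dtype=float)
    d = X.shape[1]
    mi = np.zeros((d, d))
    for i in range(d):
        for j in range(i + 1, d):
            for a in (0, 1):
                for b in (0, 1):
                    pab = np.mean((X[:, i] == a) & (X[:, j] == b))
                    pa = np.mean(X[:, i] == a)
                    pb = np.mean(X[:, j] == b)
                    if pab > 0:
                        mi[i, j] += pab * np.log(pab / (pa * pb))
    mi += mi.T                       # symmetric pairwise weights
    # Prim's algorithm for the maximum-weight spanning tree.
    in_tree, edges = {0}, []
    while len(in_tree) < d:
        w, e = max((mi[i, j], (i, j))
                   for i in in_tree for j in range(d) if j not in in_tree)
        edges.append(e)
        in_tree.add(e[1])
    return edges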
The experiments conducted outdoors suggest that it performs well in repetitive environments and is fast enough for online loop closing. The fact that it requires offline training is a drawback, although tests carried out indoors with the bag-of-words model built for outdoor environments produce surprisingly good results, according to the authors.
Some improvements have been introduced since the algorithm was first presented. First, speed was increased by more than 25 times, with only a slight degradation in accuracy, thanks to the use of concentration inequalities to reduce the number of hypotheses considered.23 The formulation of the algorithm was also modified to operate on very large environments (over trajectories of around 1000 km).25 Finally, Paul and Newman86 incorporated the spatial arrangement of visual words to improve distinctiveness.
5.4.7. Continuous appearance-based trajectory SLAM. Continuous Appearance-based Trajectory SLAM (CAT-SLAM)73 adds odometry, following the approach of FastSLAM,79 to appearance-based SLAM with FAB-MAP. The current location is modeled as a probability distribution over a trajectory, and appearance is treated as a continuous variable. The distribution is evaluated using an RBPF. Compared to FAB-MAP, it identifies three times as many loop closures at 100% precision (i.e., with no false positives). By contrast, FAB-MAP is capable of recognizing places when they are approached from a different direction, whereas CAT-SLAM cannot, because it relies on odometric information. Enhancements to the computational and memory storage requirements, like pruning trajectory nodes that are locally uninformative once a preset maximum number of nodes is reached, were subsequently introduced to allow continuous operation on much larger environments.74,75
5.4.8. Closing the loop with visual odometry. Lui and Jarvis72 have implemented a different correction algorithm for loop closure detection, which relies on visual odometry. They employ the Kanade-Lucas-Tomasi (KLT) features present in the OpenCV computer vision library13 to estimate the distance traveled, and column image comparison using the Sum of Absolute Differences (SAD) over the front 180° field of view (FOV) of the robot to estimate the bearing. These estimates are then used to narrow down the matches retrieved from the database, using a Haar wavelet-based signature, by means of the relaxation algorithm proposed by Duckett et al.31 The current location is then told apart by means of SURF.9 This system has proven effective in indoor and semi-outdoor environments. However, its main drawback lies in the complexity of the robot infrastructure, which includes an omnidirectional stereovision system and a web camera to perform visual odometry, as well as a stereo camera for obstacle avoidance.
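A sketch of the bearing estimate via column-wise SAD (Python/NumPy; the profiles would come from the front FOV of the panoramic images, and the search window is an assumption):

import numpy as np

def sad_bearing(cols_now, cols_ref):
    """Bearing change as the column shift minimizing the Sum of Absolute
    Differences between two column-intensity profiles (NumPy arrays)."""
    n = len(cols_ref)
    shifts = list(range(-n // 4, n // 4 + 1))  # limited search window
    sads = [np.abs(np.roll(cols_now, s) - cols_ref).sum() for s in shifts]
    return shifts[int(np.argmin(sads))]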
5.4.9. The final stage: updating the map. Finally, once the uncertainty has been resolved, the newly gathered information should be incorporated into the map for future reference. On the one hand, some authors suggest removing any unobserved nodes, features, and relations or, better, implementing a gradual "forgetting" process that takes into account changes in the environment (e.g., an open door appears closed when revisiting a place).114 On the other hand, Kuipers and Beeson58 and Tapus and Siegwart109 propose applying clustering techniques to create a mean node representative with a view to reducing the impact of scene variability.
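One way to realize such a gradual forgetting process is exponential decay of feature weights when merging a new observation into a node representative; a sketch (the dictionary representation, decay factor, and pruning threshold are all invented for illustration):

def update_node(representative, new_fingerprint, alpha=0.9):
    """Merge an observation into a node, letting unseen features decay.

    representative / new_fingerprint: feature-to-weight dictionaries.
    Features absent from the new observation lose weight and are dropped
    once they fall below a threshold, forgetting stale scene content.
    """
    merged = {}
    for f in set(representative) | set(new_fingerprint):
        merged[f] = alpha * representative.get(f, 0.0) \
                    + (1 - alpha) * new_fingerprint.get(f, 0.0)
    return {f: w for f, w in merged.items() if w > 0.05}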
6. Conclusion
There is still a long way to go as far as topological SLAM is concerned. Although there exist plenty of partial implementations and ideas for some of the phases, a robust and globally applicable method is yet to be developed.

In detection, it seems that the best results have been achieved with a wisely chosen collection of features. Nevertheless, the selection of these features is crucially affected by the sensory technology used. Computer vision is emerging as a promising alternative because, even though processing the data gathered can be more difficult and computationally expensive, it provides richer information than other sensors, like laser range scanners, and can be easily installed on any mobile entity.

In map matching and updating, the probabilistic approach seems to be the most consolidated research line. However, in spite of the topological representation being less computationally demanding, there are still open issues in unbounded, dynamic, and ambiguous unknown environments. Constantly solving a loop-closing problem can be cumbersome in large maps, as the robot can simultaneously believe itself to be in several locations, which results in having to deal with a huge pose distribution that multiplies the calculations required. Because of this, new tracks are being explored in the pursuit of scalability in metric SLAM. The work by Blanco et al.,12 which demonstrates that an appropriate formulation of the SLAM problem can place an upper limit on loop closure complexity in unbounded environments, is a remarkable example.
References
1. M. Agrawal, K. Konolige and M. R. Blas, “CenSurE: Center Surround Extremas for Realtime Feature Detection and Matching,” LNCS 5305, 102–115 (2008).
2. W. Aguilar, Y. Frauel, F. Escolano, M. E. Martínez-Pérez, A. Espinosa-Romero and M. Á. Lozano, “A robust graph transformation matching for non-rigid registration,” Image Vis. Comput. 27, 897–910 (2009).
3. H. Andreasson, A. Treptow and T. Duckett, “Localization for Mobile Robots Using Panoramic Vision, Local Features and Particle Filter,” Proceedings of the IEEE International Conference on Robotics and Automation, Barcelona, Spain (2005) pp. 3348–3353.
4. A. Angeli, S. Doncieux, J.-A. Meyer and D. Filliat, “Incremental Vision-Based Topological SLAM,” Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, France (2008a) pp. 1031–1036.
5. A. Angeli, D. Filliat, S. Doncieux and J.-A. Meyer, “A fast and incremental method for loop-closure detection using bags of visual words,” IEEE Trans. Robot. 24(5), 1027–1037 (2008b).
6. A. Angeli, D. Filliat, S. Doncieux and J.-A. Meyer, “Real-Time Visual Loop-Closure Detection,” Proceedings of the IEEE International Conference on Robotics and Automation, Pasadena, CA (2008c) pp. 1842–1847.
7. D. Anguelov, D. Koller, E. Parker and S. Thrun, “Detecting and modeling doors with mobile robots,” Proc. IEEE Int. Conf. Robot. Autom. 4, 3777–3784 (2004).
8. T. Bailey and H. F. Durrant-Whyte, “Simultaneous localization and mapping (SLAM): Part II,” IEEE Robot. Autom. Mag. 13(3), 108–117 (2006).
9. H. Bay, A. Ess, T. Tuytelaars and L. van Gool, “SURF: Speeded up robust features,” Comput. Vis. Image Underst. 110(3), 346–359 (2008).
10. P. Beeson, N. K. Jong and B. Kuipers, “Towards Autonomous Topological Place Detection Using the Extended Voronoi Graph,” Proceedings of the IEEE International Conference on Robotics and Automation, Barcelona, Spain (2005) pp. 4373–4379.
11. J. L. Blanco, J. A. Fernández-Madrigal and J. González, “Toward a unified Bayesian approach to hybrid metric-topological SLAM,” IEEE Trans. Robot. 24(2), 259–270 (2008).
12. J. L. Blanco, J. González-Jiménez and J. A. Fernández-Madrigal, “Sparser Relative Bundle Adjustment (SRBA): Constant-Time Maintenance and Local Optimization of Arbitrarily Large Maps,” Proceedings of the IEEE International Conference on Robotics and Automation, Karlsruhe, Germany (2013) pp. 70–77.
13. G. Bradski, “The OpenCV library” (2000). http://opencv.willowgarage.com/.
14. R. A. Brooks, “Visual map making for a mobile robot,” Proc. IEEE Int. Conf. Robot. Autom. 2, 824–829 (1985).
15. R. A. Brooks, “Elephants don’t play chess,” Robot. Auton. Syst. 6, 3–15 (1990).
16. A. R. Cassandra, L. P. Kaelbling and J. A. Kurien, “Acting Under Uncertainty: Discrete Bayesian Models for Mobile-Robot Navigation,” Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Vol. 2, Osaka, Japan (1996) pp. 963–972.
17. R. Chatila and J.-P. Laumond, “Position referencing and consistent world modeling for mobile robots,” Proc. IEEE Int. Conf. Robot. Autom. 2, 138–145 (1985).
18. H. Choset and K. Nagatani, “Topological simultaneous localization and mapping (SLAM): Toward exact localization without explicit localization,” IEEE Trans. Robot. Autom. 17(2), 125–137 (2001).
19. G. Csurka, C. Dance, L. Fan, J. Williamowski and C. Bray, “Visual Categorization with Bags of Keypoints,” ECCV International Workshop on Statistical Learning in Computer Vision, Prague, Czech Republic (2004) pp. 59–74.
20. M. Cummins, Probabilistic Localization and Mapping in Appearance Space, Ph.D. Thesis (University of Oxford, 2009).
21. M. Cummins and P. Newman, “Probabilistic Appearance Based Navigation and Loop Closing,” Proceedings of the IEEE International Conference on Robotics and Automation, Rome, Italy (2007) pp. 2042–2048.
22. M. Cummins and P. Newman, “FAB-MAP: Probabilistic localization and mapping in the space of appearance,” Int. J. Robot. Res. 27(6), 647–665 (2008).
23. M. Cummins and P. Newman, “Accelerating FAB-MAP with concentration inequalities,” IEEE Trans. Robot. 26(6), 1042–1050 (2010a).
24. M. Cummins and P. Newman, “FAB-MAP: Appearance-Based Place Recognition and Mapping Using a Learned Visual Vocabulary Model,” Proceedings of the International Conference on Machine Learning, Haifa, Israel (2010b) pp. 3–10.
25. M. Cummins and P. Newman, “Appearance-only SLAM at large scale with FAB-MAP 2.0,” Int. J. Robot. Res. 30(9), 1100–1123 (2011).
26. A. P. Dempster, “Upper and lower probabilities induced by a multivalued mapping,” Ann. Math. Stat. 38(2), 325–339 (1967).
27. Y. Deng and B. S. Manjunath, “Unsupervised segmentation of color-texture regions in images and video,” IEEE Trans. Pattern Anal. Mach. Intell. 23(8), 800–810 (2001).
28. N. L. Doh, K. Lee, W. K. Chung and H. Cho, “Simultaneous localisation and mapping algorithm for topological maps with dynamics,” IET Control Theor. Appl. 3(9), 1249–1260 (2009).
29. A. Doucet, N. de Freitas, K. Murphy and S. Russell, “Rao-Blackwellised Particle Filtering for Dynamic Bayesian Networks,” Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence, Stanford, CA (2000a) pp. 176–183.
30. A. Doucet, S. Godsill and C. Andrieu, “On sequential Monte Carlo sampling methods for Bayesian filtering,” Stat. Comput. 10, 197–208 (2000b).
31. T. Duckett, S. Marsland and J. Shapiro, “Learning Globally Consistent Maps by Relaxation,” Proceedings of the IEEE International Conference on Robotics and Automation, San Francisco, CA, USA (2000) pp. 3841–3846.
32. G. Dudek, P. Freedman and S. Hadjres, “Using Local Information in a Non-Local Way for Mapping Graph-Like Worlds,” International Joint Conference on Artificial Intelligence, Chambery, France (1993) pp. 1639–1647.
33. H. F. Durrant-Whyte and T. Bailey, “Simultaneous localization and mapping: Part I,” IEEE Robot. Autom. Mag. 13(2), 99–110 (2006).
34. M. Ebrahimi and W. W. Mayol-Cuevas, “SUSurE: Speeded up Surround Extrema Feature Detector and Descriptor for Realtime Applications,” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Miami, FL (2009) pp. 9–14.
35. D. Filliat, “A Visual Bag of Words Method for Interactive Qualitative Localization and Mapping,” Proceedings of the IEEE International Conference on Robotics and Automation, Rome, Italy (2007) pp. 3921–3926.
36. D. Filliat and J.-A. Meyer, “Map-based navigation in mobile robots: I. A review of localization strategies,” Cogn. Syst. Res. 4(4), 243–282 (2003).
37. M. A. Fischler and R. C. Bolles, “Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography,” Commun. ACM 24(6), 381–395 (1981).
38. D. A. Forsyth and J. Ponce, Computer Vision: A Modern Approach (Prentice Hall, Upper Saddle River, NJ, 2003).
39. F. Fraundorfer, C. Engels and D. Nistér, “Topological Mapping, Localization and Navigation Using Image Collections,” Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA (2007) pp. 3872–3877.
40. T. Goedemé and L. van Gool, “Robust Vision-Only Mobile Robot Navigation with Topological Maps,” In: Mobile Robots Motion Planning, New Challenges (X.-J. Jing, ed.) (InTech, Austria, 2008) chap. 4, pp. 63–88.
41. T. Goedemé, M. Nuttin, T. Tuytelaars and L. van Gool, “Omnidirectional vision based topological navigation,” Int. J. Comput. Vis. 74(3), 219–236 (2007).
42. T. Goedemé, T. Tuytelaars and L. van Gool, “Fast wide baseline matching for visual navigation,” Proc. IEEE Comp. Soc. Conf. Comp. Vision Pattern Recogn. 1, 24–29 (2004).
43. T. Goedemé, T. Tuytelaars, L. van Gool, G. Vanacker and M. Nuttin, “Feature Based Omnidirectional Sparse Visual Path Following,” Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, Canada (2005) pp. 1806–1811.
44. R. Gutiérrez-Osuna and R. C. Luo, “LOLA: Probabilistic navigation for topological maps,” AI Mag. 17(1), 55–62 (1996).
45. J.-S. Gutmann and K. Konolige, “Incremental Mapping of Large Cyclic Environments,” Proceedings of the International Symposium on Computational Intelligence in Robotics and Automation (1999) pp. 318–325.
46. H. H. Hafner, “Learning Places in Newly Explored Environments,” In: Proceedings of the International Conference on Simulation of Adaptive Behavior (Meyer, Berthoz, Floreano, Roitblat and Wilson, eds.) (International Society for Adaptive Behavior, Honolulu, HI, USA, 2000) pp. 111–120.
47. N. Ho and R. Jarvis, “Vision Based Global Localisation Using a 3D Environmental Model Created by a Laser Range Scanner,” Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, France (2008) pp. 2964–2969.
48. L. Itti and P. Baldi, “A principled approach to detecting surprising events in video,” Proc. IEEE Comp. Soc. Conf. Comp. Vision Pattern Recogn. 1, 631–637 (2005).
49. C. E. Jacobs, A. Finkelstein and D. H. Salesin, “Fast Multiresolution Image Querying,” Proceedings of the Annual Conference on Computer Graphics and Interactive Techniques, Los Angeles, CA (1995) pp. 277–286.
50. C. Johnson and B. Kuipers, “Efficient Search for Correct and Useful Topological Maps,” Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura, Algarve, Portugal (2012) pp. 5277–5282.
51. L. P. Kaelbling, M. L. Littman and A. R. Cassandra, “Planning and acting in partially observable stochastic domains,” Artif. Intell. 101(1–2), 99–134 (1998).
52. A. Koenig, J. Kessler and H.-M. Gross, “A Graph Matching Technique for an Appearance-Based, Visual SLAM-Approach Using Rao-Blackwellized Particle Filters,” Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, France (2008) pp. 1576–1581.
53. S. Koenig, A. Mudgal and C. Tovey, “A Near-Tight Approximation Lower Bound and Algorithm for the Kidnapped Robot Problem,” Proceedings of the Symposium on Discrete Algorithms, Miami, FL (2006) pp. 133–142.
54. S. Koenig and R. G. Simmons, “Unsupervised Learning of Probabilistic Models for Robot Navigation,” Proceedings of the IEEE International Conference on Robotics and Automation, Minneapolis, MN, USA (1996) pp. 2301–2308.
55. K. Konolige, “Large-Scale Map-Making,” Proceedings of the National Conference on Artificial Intelligence, San Jose, CA (2004) pp. 457–463.
56. D. Kortenkamp and T. Weymouth, “Topological Mapping for Mobile Robots Using a Combination of Sonar and Vision Sensing,” Proceedings of the American Association for Artificial Intelligence (AAAI) Conference, Seattle, WA, USA (1994).
57. B. Kuipers, “The spatial semantic hierarchy,” Artif. Intell. 119, 191–233 (2000).
58. B. Kuipers and P. Beeson, “Bootstrap Learning for Place Recognition,” Proceedings of the 18th National Conference on Artificial Intelligence, Edmonton, Alberta, Canada (2002) pp. 174–180.
59. B. Kuipers and Y.-T. Byun, “A robot exploration and mapping strategy based on a semantic hierarchy of spatial representations,” Robot. Auton. Syst. 8(1), 47–63 (1991).
60. B. Kuipers and T. Levitt, “Navigation and mapping in large-scale space,” AI Mag. 9(2), 25–43 (1988).
61. B. Kuipers, J. Modayil, P. Beeson, M. MacMahon and F. Savelli, “Local metrical and global topological maps in the hybrid spatial semantic hierarchy,” Proc. IEEE Int. Conf. Robot. Autom. 5, 4845–4851 (2004).
62. P. Lamon, I. Nourbakhsh, B. Jensen and R. Siegwart, “Deriving and Matching Image Fingerprint Sequences for Mobile Robot Localization,” Proceedings of the IEEE International Conference on Robotics and Automation, Seoul, Korea (2001).
63. J. J. Leonard and H. F. Durrant-Whyte, “Mobile robot localization by tracking geometric beacons,” IEEE Trans. Robot. Autom. 7(3), 376–382 (1991a).
64. J. J. Leonard and H. F. Durrant-Whyte, “Simultaneous Map Building and Localization for an Autonomous Mobile Robot,” Proceedings of the IEEE/RSJ International Workshop on Intelligent Robots and Systems, Osaka, Japan (1991b) pp. 1442–1447.
65. Y. Li and E. B. Olson, “IPJC: The Incremental Posterior Joint Compatibility Test for Fast Feature Cloud Matching,” Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura, Algarve, Portugal (2012) pp. 3467–3474.
66. B. Lisien, D. Morales, D. Silver, G. Kantor, I. Rekleitis and H. Choset, “Hierarchical Simultaneous Localization and Mapping,” Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Las Vegas, NV (2003) pp. 448–453.
67. M. Liu, D. Scaramuzza, C. Pradalier, R. Siegwart and Q. Chen, “Scene Recognition with Omnidirectional Vision for Topological Map Using Lightweight Adaptive Descriptors,” Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA (2009) pp. 116–121.
68. M. Liu and R. Siegwart, “DP-FACT: Towards Topological Mapping and Scene Recognition with Color for Omnidirectional Camera,” IEEE International Conference on Robotics and Automation, Saint Paul, MN, USA (2012).
69. D. G. Lowe, “Object recognition from local scale-invariant features,” Proc. IEEE Int. Conf. Computer Vision 2, 1150–1157 (1999).
70. D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” Int. J. Comput. Vis. 60(2), 91–110 (2004).
71. F. Lu and E. Milios, “Globally consistent range scan alignment for environment mapping,” Auton. Robots 4, 333–349 (1997).
72. W. L. D. Lui and R. Jarvis, “A Pure Vision-Based Approach to Topological SLAM,” Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan (2010).
73. W. Maddern, M. Milford and G. Wyeth, “Continuous Appearance-Based Trajectory SLAM,” Proceedings of the IEEE International Conference on Robotics and Automation, Shanghai, China (2011) pp. 3595–3600.
74. W. Maddern, M. Milford and G. Wyeth, “Capping Computation Time and Storage Requirements for Appearance-Based Localization with CAT-SLAM,” Proceedings of the IEEE International Conference on Robotics and Automation, Saint Paul, MN, USA (2012a) pp. 822–827.
75. W. Maddern, M. Milford and G. Wyeth, “Towards Persistent Indoor Appearance-Based Localization, Mapping and Navigation Using CAT-SLAM,” Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura, Algarve, Portugal (2012b) pp. 4224–4230.
76. R. Maini and H. Aggarwal, “Study and comparison of various image edge detection techniques,” Int. J. Image Process. 3(1), 1–11 (2009).
77. K. Mikolajczyk, T. Tuytelaars, C. Schmid, A. Zisserman, J. Matas, F. Schaffalitzky, T. Kadir and L. van Gool, “A comparison of affine region detectors,” Int. J. Comput. Vis. 65(1), 43–72 (2006).
78. J. Modayil, P. Beeson and B. Kuipers, “Using the Topological Skeleton for Scalable Global Metrical Map-Building,” Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Sendai, Japan (2004) pp. 1530–1536.
79. M. Montemerlo, S. Thrun, D. Koller and B. Wegbreit, “FastSLAM: A Factored Solution to the Simultaneous Localization and Mapping Problem,” Proceedings of the AAAI National Conference on Artificial Intelligence, Edmonton, Canada (2002).
80. S. B. Needleman and C. D. Wunsch, “A general method applicable to the search for similarities in the amino acid sequence of two proteins,” J. Mol. Biol. 48, 443–453 (1970).
81. J. Neira and J. D. Tardós, “Data association in stochastic mapping using the joint compatibility test,” IEEE Trans. Robot. Autom. 17(6), 890–897 (2001).
82. V. Nguyen, A. Martinelli, N. Tomatis and R. Siegwart, “A Comparison of Line Extraction Algorithms Using 2D Laser Rangefinder for Indoor Mobile Robotics,” Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, Canada (2005) pp. 1929–1934.
83. J. I. Nieto, J. E. Guivant and E. M. Nebot, “The Hybrid Metric Maps (HYMMs): A novel map representation for DenseSLAM,” Proc. IEEE Int. Conf. Robot. Autom. 1, 391–396 (2004).
84. A. Nüchter and J. Hertzberg, “Towards semantic maps for mobile robots,” Robot. Auton. Syst. 56, 915–926 (2008).
85. C. Owen and U. Nehmzow, “Landmark-Based Navigation for a Mobile Robot,” In: Proceedings of the Simulation of Adaptive Behaviour (MIT Press, 1998) pp. 240–245.
86. R. Paul and P. Newman, “FAB-MAP 3D: Topological Mapping with Spatial and Visual Appearance,” Proceedings of the IEEE International Conference on Robotics and Automation, Anchorage, AK, USA (2010) pp. 2649–2656.
87. S. T. Pfister, S. I. Roumeliotis and J. W. Burdick, “Weighted line fitting algorithms for mobile robot map building and efficient data representation,” Proc. IEEE Int. Conf. Robot. Autom. 1, 1304–1311 (2003).
88. P. Pirjanian, N. Karlsson, L. Goncalves and E. di Bernardo, “Low-cost visual localization and mapping for consumer robotics,” Ind. Robot Int. J. 30(2), 139–144 (2003).
89. F. T. Ramos, B. Upcroft, S. Kumar and H. F. Durrant-Whyte, “A Bayesian Approach for Place Recognition,” Proceedings of the IJCAI Workshop on Reasoning with Uncertainty in Robotics, Edinburgh, Scotland (2005).
90. A. Ranganathan, Probabilistic Topological Maps, Ph.D. Thesis (Georgia Institute of Technology, 2008).
91. A. Ranganathan and F. Dellaert, “Inference in the Space of Topological Maps: An MCMC-Based Approach,” Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Sendai, Japan (2004) pp. 1518–1523.
92. A. Ranganathan and F. Dellaert, “A Rao-Blackwellized Particle Filter for Topological Mapping,” Proceedings of the IEEE International Conference on Robotics and Automation, Orlando, FL, USA (2006) pp. 810–817.
93. A. Ranganathan and F. Dellaert, Automatic Landmark Detection for Topological Mapping Using Bayesian Surprise, Technical Report (Georgia Institute of Technology, 2008).
94. A. Ranganathan and F. Dellaert, “Online probabilistic topological mapping,” Int. J. Robot. Res. 30(6), 755–771 (2011).
95. A. Ranganathan, E. Menegatti and F. Dellaert, “Bayesian inference in the space of topological maps,” IEEE Trans. Robot. 22(1), 92–107 (2006).
96. E. Remolina and B. Kuipers, “Towards a general theory of topological maps,” Artif. Intell. 152, 47–104 (2004).
97. A. Romero and M. Cazorla, “Topological SLAM Using Omnidirectional Images: Merging Feature Detectors and Graph-Matching,” Proceedings of the Advanced Concepts for Intelligent Vision Systems, Sydney, Australia (2010) pp. 464–475.
98. A. Romero and M. Cazorla, “Topological visual mapping in robotics,” Cogn. Process. 13(1 Supplement), 305–308 (2012).
99. D. Sabatta, D. Scaramuzza and R. Siegwart, “Improved Appearance-Based Matching in Similar and Dynamic Environments Using a Vocabulary Tree,” Proceedings of the IEEE International Conference on Robotics and Automation, Anchorage, AK (2010).
100. D. G. Sabatta, “Vision-Based Topological Map Building and Localisation Using Persistent Features,” Robotics and Mechatronics Symposium, Bloemfontein, South Africa (2008) pp. 1–6.
101. F. Savelli and B. Kuipers, “Loop-Closing and Planarity in Topological Map-Building,” Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Sendai, Japan (2004) pp. 1511–1517.
102. S. Se, D. G. Lowe and J. J. Little, “Vision-based global localization and mapping for mobile robots,” IEEE Trans. Robot. 21(3), 364–375 (2005).
103. H. Shatkay and L. P. Kaelbling, “Learning Topological Maps with Weak Local Odometric Information,” Proceedings of the International Conference on Artificial Intelligence, Nagoya, Japan (1997) pp. 920–929.
104. J. Sivic and A. Zisserman, “Video Google: A text retrieval approach to object matching in videos,” Proc. IEEE Int. Conf. Comput. Vis. 2, 1470–1477 (2003).
105. C. Stachniss, O. Martínez-Mozos, A. Rottmann and W. Burgard, “Semantic Labeling of Places,” Proceedings of the International Symposium of Robotics Research, San Francisco, CA, USA (2005).
106. B. J. Stankiewicz and A. A. Kalia, “Acquisition of structural versus object landmark knowledge,” J. Exp. Psychol. Hum. Perception Perform. 33(2), 378–390 (2007).
107. T. Tao, S. Tully, G. Kantor and H. Choset, “Incremental Construction of the Saturated-GVG for Multi-Hypothesis Topological SLAM,” Proceedings of the IEEE International Conference on Robotics and Automation, Shanghai, China (2011) pp. 3072–3077.
108. A. Tapus, Topological SLAM – Simultaneous Localization and Mapping with Fingerprints of Places, Ph.D. Thesis (École Polytechnique Fédérale de Lausanne, Switzerland, 2005).
109. A. Tapus and R. Siegwart, “Incremental Robot Mapping with Fingerprints of Places,” Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, Canada (2005) pp. 2429–2434.
110. S. Thrun, “Robotic Mapping: A Survey,” In: Exploring Artificial Intelligence in the New Millennium (G. Lakemeyer and B. Nebel, eds.) (Morgan Kaufmann, San Francisco, CA, 2002) pp. 1–35.
111. N. Tomatis, I. Nourbakhsh and R. Siegwart, “Hybrid Simultaneous Localization and Map Building: Closing the Loop with Multi-Hypotheses Tracking,” Proceedings of the IEEE International Conference on Robotics and Automation, Washington, DC (2002) pp. 2749–2754.
112. S. Tully, G. Kantor, H. Choset and F. Werner, “A Multi-Hypothesis Topological SLAM Approach for Loop Closing on Edge-Ordered Graphs,” Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA (2009) pp. 4943–4948.
113. I. Ulrich and I. Nourbakhsh, “Appearance-Based Place Recognition for Topological Localization,” Proceedings of the IEEE International Conference on Robotics and Automation, San Francisco, CA, USA (2000) pp. 1023–1029.
114. S. Vasudevan, S. Gáchter, V. Nguyen and R. Siegwart, “Cognitive maps for mobile robots: An object based approach,” Robot. Auton. Syst. 55, 359–371 (2007).
115. F. Werner, C. Gretton, F. Maire and J. Sitte, “Induction of Topological Environment Maps from Sequences of Visited Places,” Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, France (2008a) pp. 2890–2895.
116. F. Werner, F. Maire and J. Sitte, “Topological SLAM Using Fast Vision Techniques,” Proceedings of the Advances in Robotics: FIRA RoboWorld Congress, Incheon, Korea (2009a) pp. 187–198.
117. F. Werner, F. Maire, J. Sitte, H. Choset, S. Tully and G. Kantor, “Topological SLAM Using Neighbourhood Information of Places,” Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA (2009b).
118. F. Werner, J. Sitte and F. Maire, “Visual Topological Mapping and Localisation Using Colour Histograms,” Proceedings of the International Conference on Control, Automation, Robotics and Vision, Hanoi, Vietnam (2008b) pp. 341–346.
119. F. Werner, J. Sitte and F. Maire, “Topological map induction using neighbourhood information of places,” Auton. Robots 32(4), 405–418 (2012).
120. B. Yamauchi, A. Schultz and W. Adams, “Mobile Robot Exploration and Map-Building with Continuous Localization,” Proceedings of the IEEE International Conference on Robotics and Automation, Leuven, Belgium (1998).
121. Z. Zivkovic, B. Bakker and B. Kröse, “Hierarchical Map Building Using Visual Landmarks and Geometric Constraints,” Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, Canada (2005) pp. 2480–2485.