Mobile Robot Navigation System.
Publication date:
2009
Citation (APA):
Özkil, A. G. (2009). Technical Report on Autonomous Mobile Robot navigation.
Technical Report on
Autonomous Mobile Robot Navigation
March 2009
Table of Contents
1 Objective
2 Introduction
3 Robotic Paradigms
   3.1 Hierarchical paradigm
   3.2 Reactive paradigm
   3.3 Hybrid Paradigm
4 Autonomous Navigation
   4.1 Perception
      4.1.1 Environmental Representation
      4.1.2 Maps Used in Mobile Robot Navigation
   4.2 Mapping and Localization
      4.2.1 Mapping
      4.2.2 Localization
   4.3 Cognition and Path planning
   4.4 Motion Control
5 Bibliography
1 Objective
This report is the outcome of the project ‘Nestor’, which, in general terms, aims to
deploy an autonomous mobile robot navigation system in real-world settings. The
report has three aims:
3. To give the state of the art for dealing with these concepts.
2 Introduction
Before getting into the details of navigation, it is important to characterize the
general robot control problem. The following section describes the basics of this issue.
3 Robotic Paradigms
A paradigm is a philosophy or set of assumptions or techniques which characterize an
approach to a class of problems [1]. In this sense, the aim of a robotic paradigm is to
organize the ‘intelligence’ of the system and control its actions.
A robotic system has three primitive functions: SENSE, PLAN and ACT. SENSE functions
gather information from the robot’s sensors and produce output useful to the other
functions. PLAN functions take these outputs, or use the robot’s own knowledge, to
produce a set of tasks for the robot to perform. ACT functions produce actuator
commands that physically interact with the environment.
There are currently three paradigms in robot control, which are described by the
relationship between these three primitive functionalities (Figure 1).
Figure 1, three robotic paradigms; (a) hierarchical, (b) reactive, (c) hybrid
3.1 Hierarchical paradigm
Under this paradigm, the robot senses the world, plans its action, and then acts;
at each step it explicitly plans the next move. This model typically builds a
database holding a global world model from the sensor data flow, so that the
planner can use this single representation to map tasks to actions.
3.2 Reactive paradigm
In this approach, sensing is directly coupled to actuation, and no planning takes
place. There are multiple instances of SENSE-ACT couplings, also called behaviors.
The resulting action of the robot is the combination of its behaviors.
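As an illustrative sketch (not from the report), a purely reactive controller can be written as a set of prioritized SENSE-ACT couplings; all behavior names, thresholds and commands below are assumed for illustration:

```python
# Illustrative reactive controller: several SENSE-ACT couplings (behaviors)
# arbitrated by fixed priority, loosely in the spirit of Brooks' subsumption
# architecture. Behavior names, thresholds and commands are assumed values.

def avoid_obstacle(sensors):
    """High-priority behavior: stop and turn when an obstacle is close."""
    if sensors["front_range"] < 0.3:       # metres, assumed threshold
        return {"v": 0.0, "w": 1.0}        # rotate in place
    return None                            # behavior not triggered

def wander(sensors):
    """Low-priority default behavior: drive forward."""
    return {"v": 0.2, "w": 0.0}

BEHAVIORS = [avoid_obstacle, wander]       # ordered by priority

def control_step(sensors):
    """One SENSE-ACT cycle: the first triggered behavior wins."""
    for behavior in BEHAVIORS:
        command = behavior(sensors)
        if command is not None:
            return command

print(control_step({"front_range": 1.5}))  # free space: wander acts
print(control_step({"front_range": 0.1}))  # obstacle: avoidance acts
```

The combination rule here is simple priority arbitration; other reactive systems blend behavior outputs instead of picking one.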
Figure 2, hierarchical paradigm in detail
Brooks, in his seminal paper [4], described the main difference between these two
approaches as the way they decompose the tasks. According to him, reactive systems
decompose tasks in layers. They start with generating basic survival behaviors and
then evolve new ones that either use the existing ones or create parallel tracks of
more advanced ones. If anything happens to the advanced ones, the lower behaviors
will still operate, ensuring the survival of the system. This is similar to the
functions of the human brain stem, such as breathing, which continue independently
of the high-level cognitive functions of the brain (e.g., talking), or even during
cognitive hibernation (i.e., sleeping).
Purely reactive systems showed the potential of the approach, but they proved
unsuitable for general-purpose applications that require planning.
3.3 Hybrid Paradigm
The hybrid paradigm combines planning with sense-act couplings in such a way that
tasks are decomposed into subtasks and behaviors are generated accordingly. Sensory
information is routed to the requesting behaviors, but it is also available to the
planner for building a task-oriented world model. Sensing is therefore organized as
a mixture of the hierarchical and reactive styles: planning is done in one step,
while sensing and acting are done together.
4 Autonomous Navigation
Autonomous mobile robot navigation can be characterized by three questions [11]:
• Where am I?
• Where am I going?
• How do I get there?
Therefore, the robot has to have a model of the environment, be able to perceive it,
estimate its relative state, and finally plan and execute its movements.
Figure 4, autonomous navigation problem
This chapter aims to summarize these elements and give an overview of relevant
problems to be addressed.
4.1 Perception
The first action in the control loop is perception of the self and the environment,
which is done through sensors. Proprioceptive sensors capture information about the
internal state of the robot, whereas exteroceptive sensors capture information about
the environment. The sensors used on mobile robots vary widely [12,13]. The most
relevant ones can be briefly listed as: encoders, gyroscopes, accelerometers, sonars,
laser range finders, beacon-based sensors and vision sensors.
It is also possible to navigate using only exteroceptive sensors. One realization of
this approach is the Global Positioning System (GPS), which is successfully used in
vehicle navigation systems. The problem with GPS and its upcoming European
counterpart Galileo [14] is that these systems require a direct line of sight to the
satellites in Earth orbit. These systems are therefore inapplicable to indoor
applications.
This makes them inflexible and costly to install and maintain. For such reasons,
many researchers have focused on solving the robot navigation problem in unmodified
environments.
Many state-of-the-art techniques for navigation in unmodified environments use
combinations of proprioceptive and exteroceptive sensors and fuse their data using
probabilistic techniques. Sonars, laser range finders and several kinds of vision
sensors are used to capture information in such methods.
Range sensors have been the dominating choice for environmental sensing on robots.
Early works extensively used sonar arrays for distance sensing, but the limited
range and resolution of sonars severely affected mapping and localization.
Time-of-flight laser scanners later became widely applicable to mobile robotics, but
their scanning field is restricted to a horizontal plane, which yields a poor world
representation [22]. This limitation was tackled by oscillating the laser scanners
[23][24][25] or by using multiple lasers with complementary placements [26] to
achieve higher dimensionality in range sensing. Yet, these systems are rather
expensive and complex to utilize in a real-world robotic application. Finally,
different vision-based approaches have emerged in the last decade to extract metric
information from the environment using imaging sensors. Stereo systems have long
been investigated for 3D range sensing, whereas much recent work is based on
monocular systems that extract metric information from the optical flow detected by
the camera.
4.1.2 Maps Used in Mobile Robot Navigation
Metric Maps: Maps that carry distance information corresponding to actual
distances in the environment. Such a map can give the length of a path or the
size of an object.
Sensor Level Maps: Maps that are derived directly from the interpretation of
the sensor inputs from the current position (e.g. [28]).
Semantic Maps: Maps oriented towards high-level decision making, which contain
information about objects and their relationships with the environment
(e.g. [30-32]).
Hybrid Maps: A combination of different types of maps. Hybrid maps also need
to glue together elements that represent the same part of the environment in
the combined maps.
The following sections elaborate on metric, topological and hybrid maps, which are
the most commonly used types in mobile robotics.
Depending on the environmental representation chosen, a metric map can be either
feature based or grid based [33].
A metric feature based map is built upon features that can be reliably observed in
the environment. In [34-36], typical features of indoor environments such as walls,
edges or corners are used for mapping.
In a grid based metric map, the environment is divided into a matrix of cells, where
each cell represents a portion of the environment. A cell is considered to be
occupied if an object exists in the corresponding area of the environment. A common
way of representing occupancy is with probabilities, developed by Moravec and Elfes
[37]. Saffiotti used fuzzy sets, where occupancy and emptiness values are held
separately [38]. In [39], the grid is represented using histograms, where each cell
holds a value of how often a sensor has detected it. Stachniss and Burgard developed
a coverage map, where each cell holds a value representing how much of it is covered
by an obstacle [40].
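The probabilistic occupancy representation of Moravec and Elfes [37] can be sketched with a log-odds update of a single cell; the inverse sensor model values (0.7 and 0.3) are assumptions for illustration:

```python
import math

# Log-odds sketch of the probabilistic occupancy representation of Moravec
# and Elfes [37]. The inverse sensor model values (0.7 / 0.3) are assumed.

def logit(p):
    return math.log(p / (1.0 - p))

def prob(l):
    return 1.0 - 1.0 / (1.0 + math.exp(l))

L_OCC = logit(0.7)      # evidence when a reading hits the cell
L_FREE = logit(0.3)     # evidence when the beam passes through the cell

def update_cell(l, hit):
    """Bayesian log-odds update of one cell for one measurement."""
    return l + (L_OCC if hit else L_FREE)

l = 0.0                           # prior P(occupied) = 0.5
for _ in range(3):                # three consecutive 'hit' readings
    l = update_cell(l, True)
print(round(prob(l), 3))          # → 0.927
```

Working in log-odds makes the update a simple addition, which is why it is the standard form for grid mapping.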
Apart from robots, humans might also need to interact with topological maps for
robot navigation. Researchers have used different characteristics of the
environment, such as rooms or corridors as nodes and doors or passageways as edges.
Thrun [41] preferred to use places with ‘significant features’ as nodes. Fabrizi
[42] defined a node as a ‘large open space’. Duckett [43] proposed a system where a
new node is placed after the robot has travelled far enough from the previous one.
It is easy to see that what is an advantage for one approach is a disadvantage for
the other, which has motivated the development of hybrid maps. The idea appeared as
early as 1978 [44], but only in the last decade have such maps emerged in increasing
numbers.
Two types of hybrid maps are parallel maps and patchwork maps. A parallel map
consists of at least two different maps that represent the same area of an
environment. Most parallel maps are constructed automatically by extracting a
topological map from a metric one. Thrun utilized Voronoi diagrams in the empty
parts of a grid map in [41]. A similar approach was carried out in [42] using image
processing. The opposite approach, extracting metric maps from topological nodes,
has also been presented [45]. An interesting multi-layered hierarchical parallel map
representation was developed in [46,47], where the main focus was efficient
localization. The map is called an ‘Annotated Hierarchical Graph’, and it consists
of hierarchically ordered topological maps, supported by local metric patches in the
lowest layer. Nieto also developed a novel kind of parallel map, consisting of an
augmentation of a feature based metric map and a grid based metric map [48]. While
the former was used for localization, the latter was used for optimal route planning.
Several patchwork maps simply connect small metric maps based on topology [49];
their nodes do not correspond to any particular environmental structure. More
elaborate patchworks used openings between, e.g., rooms and corridors as the node
features [50]. In [51,52], a similar approach is used for node selection, and the
rest of the topology is completed using a ‘Reduced Generalized Voronoi Graph’.
Aguirre developed a complex patchwork map in [53], where two kinds of metric maps
were used in each room, and the rooms acted as nodes in the topology.
Figure 7, a hybrid map: Global metric map is extracted from the ‘signatures’ of topological
nodes [45]
4.2 Mapping and Localization
4.2.1 Mapping
A mobile robot requires a representation of the environment, in the form of a map,
for autonomous navigation. Depending on the characteristics of the environment and
the type of the map, robot maps can sometimes be built from existing maps obtained
by other means. In most cases, however, the robot needs to build a map of the
environment in a dedicated training phase.
Metric grid maps are the most commonly used type of map in mobile robot navigation.
Building a metric map basically requires estimating the position of the robot and
updating the cells of the map as new sensory information is acquired. The most
trivial approach is to use odometry for position estimation. As explained
previously, the estimation error of odometric systems accumulates over time. The
apparent idea [54] to address this problem is to use the map that is being built for
correcting the estimation; this is now known as Simultaneous Localization and
Mapping (SLAM).
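The accumulation of odometric error that motivates SLAM can be illustrated with a dead-reckoning sketch for an assumed differential-drive robot; the wheel base, step sizes and the 1% systematic error are illustrative values, not from the report:

```python
import math

# Dead-reckoning sketch for an assumed differential-drive robot, showing how
# a small systematic wheel error makes the odometric pose estimate drift.
# Wheel base, step sizes and the 1% error are illustrative values.

def integrate(pose, d_left, d_right, wheel_base=0.5):
    """Update (x, y, theta) from incremental wheel displacements (metres)."""
    x, y, theta = pose
    d = (d_left + d_right) / 2.0                  # distance travelled
    d_theta = (d_right - d_left) / wheel_base     # heading change
    x += d * math.cos(theta + d_theta / 2.0)
    y += d * math.sin(theta + d_theta / 2.0)
    return (x, y, theta + d_theta)

ideal = noisy = (0.0, 0.0, 0.0)
for _ in range(1000):                        # a 10 m straight-line run
    ideal = integrate(ideal, 0.01, 0.01)     # perfect odometry
    noisy = integrate(noisy, 0.01, 0.0101)   # 1% error on the right wheel
drift = math.hypot(ideal[0] - noisy[0], ideal[1] - noisy[1])
print(round(drift, 2))                       # roughly a metre of drift
```

Even this tiny systematic error produces a drift of about a metre over a 10 m run, and the error is unbounded without external correction.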
A particularly difficult part of the SLAM problem with grid maps is that the cell
positions in a grid map are static. If a robot recognizes a place it has already
visited during mapping (loop closure), it may find that its position estimate is off
and needs to be corrected. To correct the grid map accordingly, the entire map must
be traced back and recalculated based on the new information.
An evident method is to build the map sequentially by first localizing and then
rebuilding the map based on the adjusted positions [55]. Genetic algorithms are also
used for mapping: Duckett developed a method [56] where several maps are generated
with slightly altered paths, and a genetic algorithm is then used to select the best
maps and combine new paths to test. In [57], a new grid map representation is
introduced where the cells can hold multiple hypotheses about the map; the least
probable hypotheses are later removed in a map update stage. Rao-Blackwellized
particle filters, introduced in [58], became a popular choice for building grid
maps. In this approach, a number of maps, one per particle, are carried and updated
simultaneously. Recent improvements on this method have reduced the number of
particles needed while still producing good results [59-61].
Feature maps differ from grid maps in that sensor data is used to extract features
before the mapping stage. These features are then compared to the ones in the map,
so that either the new feature is added to the map, the existing features are
updated accordingly, or the match is used for correcting the position estimate. Many
solutions are based on the approach presented in [62]. The most significant
developments around this method concern how the Kalman filter is utilized for the
position update. An information filter was introduced to ease the computational
burden in [63,64], and an unscented filter was used in [65,66] to cope with
nonlinearities.
Building topological maps can be done in two different ways: by using sensor data or
by using another type of map. Choset used a generalized Voronoi graph as a map in
[51]. The map is constructed by moving the robot in the environment to construct the
nodes of the map, and visited nodes are detected by matching their “signatures” to
the previously acquired ones. Thrun et al. used the latter technique in [52], where
they preprocessed grid maps by thresholding the occupancy probabilities and then
generated Voronoi diagrams on the empty areas of the map. Local minima found in the
Voronoi graph are used to partition the grid map into nodes.
4.2.2 Localization
Localization is the task of finding the position of a mobile robot in an
environment, based on a representation of that environment.
Localization is tightly coupled to how belief is represented and estimated. Most of the
robotic systems use planar maps. The main reason for that is to decrease the
complexity of the problem by reducing dimensionality of the robot pose vector from 6-
D (x, y, z, pitch, roll, yaw) to 3-D (x, y, yaw).
One approach to the problem is position tracking, where the belief of the robot is
reduced to a single pose. The position is estimated as a single hypothesis, and
whenever an action or observation occurs, the hypothesis is updated. Therefore, the
initial position of the robot must be known. The Kalman filter [67] and its variants
are widely used in position tracking. In [68], sonar range finders are used for line
extraction and a Kalman filter is used for matching. In [69], an extended Kalman
filter is used to match raw sensor data with a feature based metric map. Fuzzy logic
has also been used for representing uncertainty in position tracking [70].
The position tracking problem deals with a single pose; the representation is
therefore simple and update calculations are computationally cheap. But the
technique requires that the initial position is known and, if the measurements
become ambiguous, the position can be lost.
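The single-hypothesis update cycle can be sketched with a one-dimensional Kalman filter; all noise values below are assumed for illustration:

```python
# One-dimensional Kalman filter sketch of single-hypothesis position
# tracking: the belief is one Gaussian (mean, variance), grown by the motion
# update and shrunk by the measurement update. Noise values are assumed.

def predict(mean, var, motion, motion_var):
    """Motion update: the robot moves and uncertainty grows."""
    return mean + motion, var + motion_var

def correct(mean, var, z, z_var):
    """Measurement update: fuse an observation and uncertainty shrinks."""
    k = var / (var + z_var)                  # Kalman gain
    return mean + k * (z - mean), (1.0 - k) * var

mean, var = 0.0, 1.0                         # known initial position
mean, var = predict(mean, var, motion=1.0, motion_var=0.5)
mean, var = correct(mean, var, z=1.2, z_var=0.5)
print(mean, var)                             # mean ≈ 1.15, var = 0.375
```

The gain weighs the prediction against the measurement by their variances; this is the scalar case of the matrix filter used in the cited systems.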
In [71], several position candidates are tracked using Kalman filters, and the
number of hypotheses adapts to the uncertainty of the localization. Saffiotti et al.
used fuzzy sets to represent uncertainties and carry out multi-hypothesis tracking
in [72]. In [73], Markov localization is introduced, where each cell of a grid map
holds a belief of how likely it is that the actual position of the robot is in that
cell. In this approach, the localization grid map represents a probability density
function (pdf) of the localization belief. As the robot moves or observes, the cells
are updated using Bayesian updating. In [74], the method is further modified to
overcome its heavy computational cost. An alternative is proposed in [75], where the
updating is based on fuzzy logic instead of Bayesian inference. In [76], the pdf of
the localization belief is represented as a set of samples. This approach reduces
the number of calculations compared to Markov localization, while global
localization is still possible. This approach is called Monte Carlo localization,
and it was further improved in [77-79] to decrease the number of samples needed.
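The sample-based belief of Monte Carlo localization [76] can be sketched with a minimal one-dimensional particle filter; the world model, noise values and particle count are illustrative assumptions, not from the cited work:

```python
import math
import random

# Sketch of Monte Carlo localization: the belief is a set of samples
# (particles) that are moved, weighted by measurement likelihood, and
# resampled. The 1-D world, noise values and particle count are assumed.

random.seed(0)
TRUE_POS = 5.0                                  # unknown to the filter

def likelihood(particle, z, sigma=0.5):
    """Assumed Gaussian measurement model."""
    return math.exp(-((particle - z) ** 2) / (2 * sigma ** 2))

particles = [random.uniform(0.0, 10.0) for _ in range(500)]  # global belief

for _ in range(10):
    # motion update: the robot is stationary here, but particles still diffuse
    particles = [p + random.gauss(0.0, 0.1) for p in particles]
    # measurement update: weight each particle by the reading's likelihood
    z = TRUE_POS + random.gauss(0.0, 0.2)
    weights = [likelihood(p, z) for p in particles]
    # resampling: draw a new particle set proportional to the weights
    particles = random.choices(particles, weights=weights, k=len(particles))

estimate = sum(particles) / len(particles)
print(round(estimate, 2))                       # clusters near TRUE_POS
```

Starting from a uniform particle set, this performs global localization, which a single-hypothesis tracker cannot do.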
4.3 Cognition and Path planning
Path planning can be defined as searching for a suitable path in a map from one
place to another. Depending on the map type, different strategies for planning paths
can be followed.
Metric maps are useful for planning precise paths. Due to the associated metric
information, it is possible to find nearly-optimal paths using metric maps [27].
There exist several different methods for path planning, but they are based on a few
general approaches. Latombe classifies these approaches in [84] as follows:
Road map: a road map is a collision-free set of paths between a starting position
and an ending position; it describes the connectivity of the robot’s free space
on the map. One method to construct a road map is based on visibility graphs
[85], in which the path is extended from one point to other points that are
visible from it. Another method is to construct a Voronoi graph, which tries to
maximize the clearance between the robot and the obstacles [86].
Cell decomposition: the free space in the map is divided into non-overlapping
cells, and a connectivity graph describes how the cells are connected. The
result is a chain of cells, which also describes the path; the formation of the
cells therefore plays an important role in planning. In [84], trapezoidal
decomposition is used, where a polygonal map is divided into trapezoidal cells
by generating vertical line segments at each corner of every polygon. In [87],
qualitative spatial reasoning is used for path planning, inspired by the way
humans find their paths with imprecise knowledge. Cell decomposition is also a
suitable method for area coverage, where the planner breaks the target area
down into cells that must all be traversed; applications include lawn mowing,
snow removal and floor cleaning [88].
Potential field: a potential field function is defined and applied over the free
space of the map, where the goal acts as an attractive potential (sink) and the
obstacles act as repulsive potentials (sources). The path is then derived from
the gradient of the potential field, following the steepest descent direction.
This approach was first developed for online collision avoidance in [89], and it
was combined with a graph search technique in [84] for path planning.
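The potential field approach can be sketched as gradient descent over an attractive goal potential plus repulsive obstacle potentials; the gains, influence distance and the toy map below are assumed values:

```python
import math

# Gradient-descent sketch of the potential field approach: an attractive
# quadratic potential at the goal, a repulsive potential around each obstacle,
# and a path that follows the steepest descent. All gains are assumed values.

GOAL = (9.0, 9.0)
OBSTACLES = [(5.0, 6.5)]

def gradient(x, y, k_att=1.0, k_rep=2.0, influence=2.0):
    """Gradient of the combined potential at (x, y)."""
    gx, gy = k_att * (x - GOAL[0]), k_att * (y - GOAL[1])   # attraction
    for ox, oy in OBSTACLES:
        d = math.hypot(x - ox, y - oy)
        if 1e-6 < d < influence:                            # repulsion zone
            f = k_rep * (1.0 / d - 1.0 / influence) / d ** 3
            gx -= f * (x - ox)
            gy -= f * (y - oy)
    return gx, gy

x, y = 0.0, 0.0
for _ in range(2000):                 # follow the steepest descent direction
    gx, gy = gradient(x, y)
    x, y = x - 0.01 * gx, y - 0.01 * gy
print(round(x, 1), round(y, 1))       # settles at the goal
```

Note that the method is prone to local minima, which is one of the inherent limitations discussed in [6].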
Topological maps are well suited for planning paths. Graph search algorithms such as
A* [90] or D* [91] can be used to plan the shortest path on a topological map. In
most cases the number of edges and nodes is moderate, so path planning can be
performed very quickly. Path finding time is shortened even further in [92] by
preprocessing all paths and storing them in a lookup table. In [82], a wave-front
algorithm is used for both path planning and collision avoidance. In [93,94],
planning on very large maps is described in the context of hierarchical topological
maps.
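Shortest-path planning on a topological map can be sketched with a small A* search [90]; the graph, edge costs and heuristic values below are illustrative assumptions:

```python
import heapq

# A* search [90] over a toy topological map: nodes are places, edges carry
# travel costs, and the heuristic is an assumed admissible estimate of the
# remaining cost. The graph and all numbers are illustrative.

GRAPH = {
    "room_a":   [("corridor", 2.0)],
    "corridor": [("room_a", 2.0), ("room_b", 3.0), ("lobby", 4.5)],
    "room_b":   [("corridor", 3.0), ("lobby", 1.0)],
    "lobby":    [("corridor", 4.5), ("room_b", 1.0)],
}
HEURISTIC = {"room_a": 5.0, "corridor": 3.0, "room_b": 1.0, "lobby": 0.0}

def a_star(start, goal):
    """Return (node sequence, cost) of the cheapest path."""
    frontier = [(HEURISTIC[start], 0.0, start, [start])]
    visited = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, cost
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge in GRAPH[node]:
            heapq.heappush(frontier, (cost + edge + HEURISTIC[neighbor],
                                      cost + edge, neighbor, path + [neighbor]))

print(a_star("room_a", "lobby"))   # detour via room_b beats the direct edge
```

With an admissible heuristic, A* expands far fewer nodes than uninformed search while still returning the optimal path.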
4.4 Motion Control
Borenstein introduced the vector field histogram (VFH) algorithm for obstacle
avoidance in [39], based on local potential fields. In this approach, the range data
is continuously sampled and a two-dimensional local grid is generated to represent
the environment. In the next stage, a one-dimensional polar histogram is extracted
from the local grid in terms of angular sectors of a particular width. Finally, this
one-dimensional histogram is thresholded and the direction of travel is selected
among the sectors with low obstacle density. The speed of the robot is also adjusted
according to the distance to the obstacle. In [95], the algorithm is improved by
incorporating the kinematics of the robot, as the original algorithm assumes that
the robot can change its direction instantaneously (VFH+). The algorithm is further
improved and coined VFH* in [96]. In contrast to VFH and VFH+, which are purely
local algorithms based on current sensor readings, VFH* incorporates an A* graph
search to consider more than the immediate surroundings of the robot.
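The polar-histogram idea behind VFH [39] can be sketched as follows; the sector width, threshold, density model and readings are all assumed values, and the real algorithm builds its histogram from a local occupancy grid rather than raw readings:

```python
# Sketch of the polar-histogram idea behind VFH [39]: range readings fill a
# one-dimensional histogram of obstacle density per angular sector; sectors
# below a threshold are candidate directions, and the candidate closest to
# the target bearing is chosen. All parameter values are assumed, and the
# real algorithm builds the histogram from a local occupancy grid.

SECTOR_WIDTH = 30            # degrees per sector -> 12 sectors
THRESHOLD = 1.0              # density below this marks a free sector

def build_histogram(readings):
    """readings: (bearing_deg, distance_m) pairs; closer -> higher density."""
    hist = [0.0] * (360 // SECTOR_WIDTH)
    for bearing, distance in readings:
        sector = int(bearing % 360) // SECTOR_WIDTH
        hist[sector] += max(0.0, 2.0 - distance)   # assumed density model
    return hist

def select_direction(hist, target_deg):
    """Pick the free sector whose center is closest to the target bearing."""
    centers = [i * SECTOR_WIDTH + SECTOR_WIDTH / 2
               for i, density in enumerate(hist) if density < THRESHOLD]
    return min(centers, key=lambda c: abs(c - target_deg))

readings = [(10, 0.5), (15, 0.4), (40, 1.8)]   # obstacle cluster ahead
hist = build_histogram(readings)
print(select_direction(hist, target_deg=20))   # steers into a free sector
```

The obstacle cluster near 10-15° raises that sector's density above the threshold, so the controller deviates to the adjacent free sector instead of driving straight at the target.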
Finally, the nearness diagram is introduced in [99], based on heuristic rules
inferred from the possible high- and low-safety situations the robot can end up in.
Based on five rules (two low- and three high-safety situations), five behaviors are
defined; the robot compares its current situation to these predefined ones and
executes the appropriate behavior. It is shown in [100] that this reactive approach
can perform well in cluttered environments with narrow passages, compared to
previous approaches.
5 Bibliography
[1] “paradigm - Definition from the Merriam-Webster Online Dictionary.”
[4] R. Brooks, “A robust layered control system for a mobile robot,” Robotics and
Automation, IEEE Journal of, vol. 2, 1986, pp. 14-23.
[5] R. Arkin and T. Balch, “AuRA: principles and practice in review,” Apr. 1997.
[6] Y. Koren and J. Borenstein, “Potential field methods and their inherent limitations
for mobile robot navigation,” Robotics and Automation, 1991. Proceedings.,
1991 IEEE International Conference on, 1991, pp. 1398-1404.
[7] J.K. Rosenblatt, “DAMN: a distributed architecture for mobile navigation,” Journal
of Experimental & Theoretical Artificial Intelligence, vol. 9, 1997, pp. 339-360.
[8] K. Konolige and K. Myers, “The saphira architecture for autonomous mobile
robots,” 1998.
[9] R.R. Murphy and R.C. Arkin, “SFX: An Architecture For Action-oriented Sensor
Fusion,” Intelligent Robots and Systems, 1992., Proceedings of the 1992 IEEE/RSJ
International Conference on, 1992.
[11] J. Borenstein, H.R. Everett, and L. Feng, Navigating Mobile Robots: Systems and
Techniques, AK Peters, Ltd. Natick, MA, USA, 1996.
[12] R. Siegwart and I.R. Nourbakhsh, Introduction to Autonomous Mobile Robots, MIT
Press, 2004.
[13] H.R. Everett, Sensors for mobile robots: theory and application, AK Peters, Ltd.
Natick, MA, USA, 1995.
[15] J.C. Harris, H.T. Inc, and V.A. Clifton, “An infogeometric approach to telerobotics,”
Telesystems Conference, 1994. Conference Proceedings., 1994 IEEE National,
1994, pp. 153-156.
[16] A.M. Ladd, K.E. Bekris, A. Rudys, L.E. Kavraki, and D.S. Wallach, “Robotics-Based
Location Sensing Using Wireless Ethernet,” Wireless Networks, vol. 11, 2005,
pp. 189-204.
[17] K. Briechle and U.D. Hanebeck, “Localization of a mobile robot using relative
bearing measurements,” Robotics and Automation, IEEE Transactions on, vol.
20, 2004, pp. 36-44.
[21] A. Howard, M.J. Mataric, and G.S. Sukhatme, “Localization for mobile robot
teams: A maximum likelihood approach,” USA: University of Southern
California, 2001.
[24] H. Surmann, A. Nüchter, and J. Hertzberg, “An autonomous mobile robot with a
3D laser range finder for 3D exploration and digitalization of indoor
environments,” Robotics and Autonomous Systems, vol. 45, 2003, pp. 181-198.
[25] C. Brenneke, O. Wulf, and B. Wagner, “Using 3D laser range data for SLAM in
outdoor environments,” Intelligent Robots and Systems, 2003.(IROS 2003).
Proceedings. 2003 IEEE/RSJ International Conference on, 2003.
[26] S. Thrun, W. Burgard, and D. Fox, “A real-time algorithm for mobile robot
mapping with applications to multi-robot and 3D mapping,” Robotics and
Automation, 2000. Proceedings. ICRA'00. IEEE International Conference on,
2000.
[27] P. Buschka, An Investigation of Hybrid Maps for Mobile Robots, Örebro: Örebro
universitetsbibliotek, 2005.
[29] B.J.A. Kröse, N. Vlassis, R. Bunschoten, and Y. Motomura, “A probabilistic model
for appearance-based robot localization,” Image and Vision Computing, vol. 19,
2001, pp. 381-391.
[30] R. Chatila and J. Laumond, “Position referencing and consistent world modeling
for mobile robots,” Robotics and Automation. Proceedings. 1985 IEEE
International Conference on, 1985.
[34] J. Gasós and A. Martín, “A fuzzy approach to build sonar maps for mobile robots,”
Computers in Industry, vol. 32, 1996, pp. 151-167.
[35] J.J. Leonard and H.F. Durrant-Whyte, “Simultaneous map building and localization
for an autonomous mobile robot,” Intelligent Robots and Systems
'91. 'Intelligence for Mechanical Systems', Proceedings IROS '91. IEEE/RSJ
International Workshop on, 1991, pp. 1442-1447.
[37] H. Moravec and A. Elfes, “High resolution maps from wide angle sonar,” Robotics
and Automation. Proceedings. 1985 IEEE International Conference on, 1985.
[38] A. Saffiotti, “The uses of fuzzy logic in autonomous robot navigation: a catalogue
raisonne,” Soft Computing, vol. 1, 1997, pp. 180-197.
[39] J. Borenstein and Y. Koren, “The vector field histogram - fast obstacle avoidance
for mobile robots,” Robotics and Automation, IEEE Transactions on, vol. 7, 1991,
pp. 278-288.
[41] S. Thrun, “Learning metric-topological maps for indoor mobile robot navigation,”
Artificial Intelligence, vol. 99, 1998, pp. 21-71.
[44] B. Kuipers, “Modeling spatial knowledge,” Cognitive Science, vol. 2, 1978, pp.
129-153.
[48] J.I. Nieto, J.E. Guivant, and E.M. Nebot, “The HYbrid metric maps (HYMMs): a
novel map representation for DenseSLAM,” Robotics and Automation, 2004.
Proceedings. ICRA'04. 2004 IEEE International Conference on.
[50] H. Shatkay and L.P. Kaelbling, “Learning topological maps with weak local
odometric information,” In Proceedings of IJCAI-97. IJCAI, Inc, 1997.
[52] S. Thrun and A. Bücken, “Integrating grid-based and topological maps for mobile
robot navigation,” PROCEEDINGS OF THE NATIONAL CONFERENCE ON
ARTIFICIAL INTELLIGENCE, 1996, pp. 944-951.
[53] E. Aguirre and A. Gonzalez, “Integrating fuzzy topological maps and fuzzy
geometric maps for behavior-based robots,” International Journal of Intelligent
Systems, vol. 17, 2002, pp. 333-368.
[54] R. Chatila and J. Laumond, “Position referencing and consistent world modeling
for mobile robots,” Robotics and Automation. Proceedings. 1985 IEEE
International Conference on, 1985.
[57] A. Eliazar and R. Parr, “DP-SLAM: Fast, Robust Simultaneous Localization and
Mapping Without Predetermined Landmarks,” INTERNATIONAL JOINT
CONFERENCE ON ARTIFICIAL INTELLIGENCE, LAWRENCE ERLBAUM ASSOCIATES
LTD, 2003, pp. 1135-1142.
[60] G. Grisetti, C. Stachniss, and W. Burgard, “Improved Techniques for Grid Mapping
With Rao-Blackwellized Particle Filters,” IEEE TRANSACTIONS ON ROBOTICS, vol.
23, 2007, p. 34.
[61] G. Grisetti, G.D. Tipaldi, C. Stachniss, W. Burgard, and D. Nardi, “Fast and accurate
SLAM with Rao–Blackwellized particle filters,” Robotics and Autonomous
Systems, vol. 55, 2007, pp. 30-38.
[65] K.S. Chong and L. Kleeman, “Mobile robot map building from an advanced sonar
array and accurate odometry,” International Journal of Robotics Research, vol.
18, 1999, pp. 20-36.
[67] R.E. Kalman, “A new approach to linear filtering and prediction problems,”
Journal of basic Engineering, vol. 82, 1960, pp. 35–45.
[68] J.L. Crowley, “World modeling and position estimation for a mobile robot
using ultrasonic ranging,” 1989 IEEE International Conference on Robotics and
Automation, 1989. Proceedings., 1989, pp. 674-680.
[69] J.J. Leonard and H.F. Durrant-Whyte, “Mobile robot localization by tracking
geometric beacons,” IEEE Transactions on Robotics and Automation, vol. 7,
1991, pp. 376-382.
[70] J. Gasós and A. Saffiotti, “Using fuzzy sets to represent uncertain spatial
knowledge in autonomous robots,” Spatial Cognition and Computation, vol. 1,
1999, pp. 205-226.
[73] D. Fox, W. Burgard, and S. Thrun, “Active markov localization for mobile robots,”
Robotics and Autonomous Systems, vol. 25, 1998, pp. 195-208.
[74] W. Burgard, A. Derr, D. Fox, and A.B. Cremers, “Integrating global position
estimation and position tracking for mobile robots: the dynamic Markov
localization approach,” 1998 IEEE/RSJ International Conference on Intelligent
Robots and Systems, 1998. Proceedings., 1998.
[76] F. Dellaert, D. Fox, W. Burgard, and S. Thrun, “Monte carlo localization for mobile
robots,” 1999 IEEE International Conference on Robotics and Automation, 1999.
Proceedings, 1999.
[77] D. Fox, “KLD-sampling: Adaptive particle filters and mobile robot localization,”
Advances in Neural Information Processing Systems (NIPS), 2001.
[78] S. Thrun, D. Fox, W. Burgard, and F. Dellaert, “Robust Monte Carlo localization for
mobile robots,” Artificial Intelligence, vol. 128, 2001, pp. 99-141.
[81] U. Nehmzow and C. Owen, “Robot navigation in the real world: Experiments with
Manchester’s FortyTwo in unmodified, large environments,” Robotics and
Autonomous systems, vol. 33, 2000, pp. 223-242.
[82] C.H. Choi, J.B. Song, W. Chung, and M. Kim, “Topological map building based on
thinning and its application to localization,” IEEE/RSJ International Conference
on Intelligent Robots and System, 2002, 2002.
[84] J.C. Latombe, Robot motion planning, Kluwer academic publishers, 1991.
[87] C. Freksa, R. Moratz, and T. Barkowsky, “Schematic maps for robot navigation,”
Lecture notes in computer science, 2000, pp. 100-114.
[89] O. Khatib, “Real-time obstacle avoidance for manipulators and mobile robots,”
The International Journal of Robotics Research, vol. 5, 1986, p. 90.
[90] P.E. Hart, N.J. Nilsson, and B. Raphael, “A formal basis for the heuristic
determination of minimum cost paths,” IEEE transactions on Systems Science
and Cybernetics, vol. 4, 1968, pp. 100-107.
[92] S. Thrun, J. Gutmann, D. Fox, W. Burgard, and B. Kuipers, “Integrating Topological
and Metric Maps for Mobile Robot Navigation: A Statistical Approach,”
AAAI/IAAI, 1998, pp. 989-995.
[95] I. Ulrich and J. Borenstein, “VFH+: reliable obstacle avoidance for fast mobile
robots,” Robotics and Automation, 1998. Proceedings. 1998 IEEE International
Conference on, 1998.
[96] I. Ulrich and J. Borenstein, “VFH*: local obstacle avoidance with look-ahead
verification,” Robotics and Automation, 2000. Proceedings. ICRA'00. IEEE
International Conference on, 2000.
[97] D. Fox, W. Burgard, and S. Thrun, “The dynamic window approach to collision
avoidance,” Robotics & Automation Magazine, IEEE, vol. 4, 1997, pp. 23-33.
[98] O. Brock and O. Khatib, “High-speed navigation using the global dynamic window
approach,” Robotics and Automation, 1999. Proceedings. 1999 IEEE
International Conference on, 1999, pp. 341-346 vol.1.
[99] J. Minguez and L. Montano, “Nearness Diagram Navigation (ND): A New Real
Time Collision Avoidance Approach,” Proc. of the IEEE/RSJ International
Conference on Intelligent Robots and Systems (IROS'00), 2000, pp. 2094-2100.