Using the CONDENSATION Algorithm for Robust, Vision-Based Mobile Robot Localization

Figure 1: Example of a large-scale ceiling mosaic. The area shown is 40 m deep by 60 m wide.

images, and estimating the texture of the ceiling from the aligned images. The alignment was done using a variant of an algorithm by Lu and Milios [25] for map-building from laser range finder scans. The details are beyond the scope of the current paper, and are described elsewhere [8].

At run time, we use a simple scalar measurement to localize the robot. Rather than working with entire images, the idea is to use a very simple sensor that can be integrated at high frequencies while requiring very little computation and memory. This is an important property in enabling small, low-cost robots. In particular, the brightness of a small ceiling patch directly above the robot is measured. In our testbed application, the measurement is done using a camera pointed at the ceiling, by extracting a small window of 25 by 25 pixels from the center of the image. A Gaussian filter with a standard deviation of 6 pixels is then applied to create a single scalar measurement. The blurring is essential to filter out frequency components that cannot be represented within the mosaic, which is of finite resolution.

The likelihood densities p(z|x) associated with this simple sensor model are in general complex and multi-modal, which explains the need for more powerful probabilistic reasoning. To illustrate this, three different likelihood densities are shown in Fig. 2, respectively when the robot is under a light, at the border of a light, and not under a light. It is clear that these densities do not resemble anything remotely like a Gaussian density, and that more powerful density representations are required than provided by the Kalman filter.

4 Existing Approaches: A Tale of Density Representations

The solution to the robot localization problem is obtained by recursively solving the two equations (1) and (2). Depending on how one chooses to represent the density p(x_k | Z^k), one obtains algorithms with vastly different properties:

The Kalman filter: If both the motion and the measurement model can be described using a Gaussian density, and the initial state is also specified as a Gaussian, then the density p(x_k | Z^k) will remain Gaussian at all times. In this case, (1) and (2) can be evaluated in closed form, yielding the classical Kalman filter [27]. Kalman-filter based techniques [24, 31, 14] have proven to be robust and accurate for keeping track of the robot's position. Because of its concise representation (the mean and covariance matrix suffice to describe the entire density), it is also a particularly efficient algorithm. However, it is clear that the basic assumption of Gaussian densities is violated in our application, where the likelihood densities are typically complex and multi-modal, as discussed above.

Topological Markov Localization: To overcome these disadvantages, different approaches have used increasingly richer schemes to represent uncertainty, moving away from the restricted Gaussian density assumption inherent in the Kalman filter. These different methods can be roughly distinguished by the type of discretization used for the representation of the state space. In [28, 32, 19, 34], Markov localization is used for landmark-based corridor navigation, and the state space is organized according to the topological structure of the environment. However, in many applications one is interested in a more fine-grained position estimate, e.g., in environments with a simple topology but large open spaces, where accurate placement of the robot is needed.

Grid-based Markov Localization: To deal with multi-

In analogy with the formal filtering problem outlined in Section 2, the algorithm proceeds in two phases:

Prediction Phase: In the first phase we start from the set of particles S_{k-1} computed in the previous iteration, and apply the motion model to each particle s^i_{k-1} by sampling from the density p(x_k | s^i_{k-1}, u_{k-1}):

    (i) for each particle s^i_{k-1}:
            draw one sample s'^i_k from p(x_k | s^i_{k-1}, u_{k-1})
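The prediction step described above can be sketched in a few lines of code. This is a minimal sketch, not the paper's implementation: it assumes a planar pose (x, y, theta) and a simple odometry motion model with additive Gaussian noise, whose parameters (and the 40 m by 60 m prior area) are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(particles, u, trans_sigma=0.05, rot_sigma=0.02):
    # Prediction phase: for each particle s_{k-1}, draw one sample
    # s'_k from p(x_k | s_{k-1}, u_{k-1}).  The motion model is a
    # hypothetical odometry model: perturb the commanded rotation and
    # translation with Gaussian noise, then move along the new heading.
    n = len(particles)
    d_trans, d_rot = u
    noisy_rot = d_rot + rng.normal(0.0, rot_sigma, size=n)
    noisy_trans = d_trans + rng.normal(0.0, trans_sigma, size=n)
    theta = particles[:, 2] + noisy_rot
    x = particles[:, 0] + noisy_trans * np.cos(theta)
    y = particles[:, 1] + noisy_trans * np.sin(theta)
    return np.column_stack([x, y, theta])

# Hypothetical prior: positions spread over a 40 m x 60 m area,
# heading unknown (as in the "robot moved one meter" example).
prior = np.column_stack([
    rng.uniform(0.0, 40.0, 10_000),
    rng.uniform(0.0, 60.0, 10_000),
    rng.uniform(-np.pi, np.pi, 10_000),
])
particles = predict(prior, u=(1.0, 0.0))
```

Note that each particle receives its own noise draw, so a well-localized position with unknown heading spreads onto the one-meter circle described above, rather than collapsing to a single predicted pose.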
cloud of particles S_{k-1} representing our uncertainty about the robot position. In the example, the robot is fairly localized, but its orientation is unknown. Panel B shows what happens to our belief state when we are told the robot has moved exactly one meter since the last time-step: we now know the robot to be somewhere on a circle of 1 meter radius around the previous location. Panel C shows what happens when we observe a landmark, half a meter away, somewhere in the top-right corner: the top panel shows the likelihood p(z_k | x_k), and the bottom panel illustrates how each sample s'^i_k is weighted according to this likelihood. Finally, panel D shows the effect of resampling from this weighted set, and this forms the starting point for the next iteration.

Good explanations of the mechanism underlying the CONDENSATION algorithm are given in [6, 29]. The entire procedure of sampling, reweighting and subsequently

6 Experimental Results

As a test-bed for our approach we used the robotic tour-guide Minerva, a prototype RWI B18 robot shown in Fig. 4. In the summer of 1998, Minerva functioned for two weeks as an interactive robotic tour-guide in the Smithsonian's National Museum of American History. During this time, it interacted with thousands of people, traversing more than 44 km. The datasets we work with consist of logs of odometry and vision-based measurements collected while Minerva operated within the MAH environment. The upward pointing camera, used both for building the ceiling mosaic and for performing the brightness measurement, can be seen on the right-hand side of Fig. 4. Using the time stamps in the logs, all tests have been conducted in real time.

Figure 4: Our testbed, the robotic museum tour-guide Minerva. The ceiling camera can be seen on the right.

6.1 Global Localization

One of the key advantages of using a sampling-based representation over Kalman-filter based approaches is its ability to represent multi-modal probability distributions. This ability is a precondition for localizing a mobile robot from scratch, i.e., without knowledge of its starting location. This global localization capability is illustrated in Fig. 5 through Fig. 7. In the first iteration, the algorithm is initialized by drawing 40,000 samples from a uniform probability density, save where there are known to be (static) obstacles. After 15 iterations of the algorithm, the samples are still scattered over the area, but tend to be concentrated in areas with high brightness. This is shown in Fig. 5. After 38 iterations, all but a few possibilities have been eliminated, as shown in Fig. 6. This is possible because all samples that disagree with the actual temporal sequence of brightness measurements become very unlikely. It should be noted that in this early stage of localization, the ability to represent ambiguous probability distributions is vital for successful position estimation. Finally, after 126 iterations only one localized peak is left, as shown in Fig. 7, which indicates that the robot has been able to uniquely determine its position. Note that the samples are shown after the resampling step, where they are unweighted, and that most of the off-peak samples have in fact low probability.

6.2 Position Tracking

Using the same approach, we are also able to track the position of a robot over time, even when the robot is moving at high speeds. In this experiment, we used recorded data from Minerva, as it was moving with speeds up to 1.6 m/sec through the museum. At the time of this run there were no visitors in the museum, and the robot was remotely controlled by users connected through the world wide web. To illustrate this, Fig. 8 shows an occupancy grid map of the museum along with the estimated trajectory using odometry alone. In the tracking experiment, the robot position is initialized with the true position at the top of the map. Due to the accumulated odometry error, the robot is hopelessly lost after traversing 200 meters. In contrast, the path estimated using the vision sensor keeps track of the global position at all times. This can be appreciated from Fig. 9, where the vision-corrected path is shown. This path was generated by running the CONDENSATION algorithm with 1000 samples, and plotting the mean of the samples over time. The resulting estimated path is noisier than the one obtained by odometry, but it ends up within 10 cm of the correct end-position, indicated with a black dot at the right of the figure.

In general, far fewer samples are needed for position tracking than for global localization, and an issue for future research is to adapt the number of samples appropriately. Also, the noisy nature of the estimated position often occurs when two or more probability peaks arise. In this case, rather than plotting one mean position, it might be more appropriate to display several alternative paths, corresponding to the modes of the posterior probability density.

Fig. 8: Estimated path using odometry only. Fig. 9: Corrected path using vision.

7 Conclusion and Future Work

In this paper, we have shown that CONDENSATION-type algorithms can be used in a novel way to perform global and local localization for mobile robots. The ability of these Monte Carlo methods to deal with arbitrary likelihood densities is a crucial property in order to deal with highly ambiguous sensors, such as the vision-based sensor used here. In addition, the ability to represent multi-modal posterior densities allows them to globally localize the robot, condensing an initially uniform prior distribution into one globally localized peak over time.

Based on these properties, the resulting Monte Carlo Localization approach has been shown to address one of the fundamental and open problems in the mobile robot community. In order to support this claim, we have shown results within the context of a challenging robot application. Our method was able to both globally localize and locally track the robot within a highly dynamic and unmodified environment, and this with the robot traveling at high speeds.

The results in this paper were obtained using a single brightness measurement of the ceiling directly above the robot's position. In part, this is done to investigate how little sensing one can get away with. In future work, this can be further developed in two opposite directions: one can
envision building robots with even less sensory input, e.g., by omitting wheel encoders that provide odometry measurements. This shifts the burden towards the development of even more powerful paradigms for probabilistic reasoning. In the other direction, it is clear that integrating more complex measurements will provide more information at each iteration of the algorithm, resulting in faster global localization and more accurate tracking than shown here. In future work, we hope to investigate both these possibilities in more depth.

References

[1] J. Borenstein. The nursing robot system. PhD thesis, Technion, Haifa, Israel, 1987.
[2] J. Borenstein, B. Everett, and L. Feng. Navigating Mobile Robots: Systems and Techniques. A. K. Peters, Ltd., Wellesley, MA, 1996.
[3] R.S. Bucy. Bayes theorem and digital realisation for nonlinear filters. Journal of Astronautical Science, 17:80–94, 1969.
[4] W. Burgard, A. Derr, D. Fox, and A.B. Cremers. Integrating global position estimation and position tracking for mobile robots: the Dynamic Markov Localization approach. In IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 1998.
[5] W. Burgard, D. Fox, D. Hennig, and T. Schmidt. Estimating the absolute position of a mobile robot using position probability grids. In Proc. of the Fourteenth National Conference on Artificial Intelligence (AAAI-96), pages 896–901, 1996.
[6] J. Carpenter, P. Clifford, and P. Fernhead. An improved particle filter for non-linear problems. Technical report, Department of Statistics, University of Oxford, 1997.
[7] I.J. Cox. Blanche—an experiment in guidance and navigation of an autonomous robot vehicle. IEEE Transactions on Robotics and Automation, 7(2):193–204, 1991.
[8] F. Dellaert, S. Thrun, and C. Thorpe. Mosaicing a large number of widely dispersed, noisy, and distorted images: A Bayesian approach. Technical report, Computer Science Department, Carnegie Mellon University, 1999.
[9] A. Doucet. On sequential simulation-based methods for Bayesian filtering. Technical Report CUED/F-INFENG/TR.310, Department of Engineering, University of Cambridge, 1998.
[10] H. Everett, D. Gage, G. Gilbreth, R. Laird, and R. Smurlo. Real-world issues in warehouse navigation. In Proceedings of the SPIE Conference on Mobile Robots IX, volume 2352, 1994.
[11] D. Fox, W. Burgard, and S. Thrun. Active Markov localization for mobile robots. Robotics and Autonomous Systems, 25(3-4):195–207, 1998.
[12] D. Fox, W. Burgard, S. Thrun, and A.B. Cremers. Position estimation for mobile robots in dynamic environments. In Proceedings of the Fifteenth National Conference on Artificial Intelligence (AAAI), 1998.
[13] N.J. Gordon, D.J. Salmond, and A.F.M. Smith. Novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proceedings F, 140(2):107–113, 1993.
[14] J.-S. Gutmann and C. Schlegel. AMOS: Comparison of scan matching approaches for self-localization in indoor environments. In Proceedings of the 1st Euromicro Workshop on Advanced Mobile Robots. IEEE Computer Society Press, 1996.
[15] G. Hager, D. Kriegman, E. Yeh, and C. Rasmussen. Image-based prediction of landmark features for mobile robot navigation. In IEEE Conf. on Robotics and Automation (ICRA), pages 1040–1046, 1997.
[16] J.E. Handschin. Monte Carlo techniques for prediction and filtering of non-linear stochastic processes. Automatica, 6:555–563, 1970.
[17] M. Isard and A. Blake. Contour tracking by stochastic propagation of conditional density. In Eur. Conf. on Computer Vision (ECCV), pages 343–356, 1996.
[18] M. Isard and A. Blake. Condensation – conditional density propagation for visual tracking. International Journal of Computer Vision, 29(1):5–28, 1998.
[19] L.P. Kaelbling, A.R. Cassandra, and J.A. Kurien. Acting under uncertainty: Discrete Bayesian models for mobile-robot navigation. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 1996.
[20] S. King and C. Weiman. Helpmate autonomous mobile robot navigation system. In Proceedings of the SPIE Conference on Mobile Robots, volume 2352, 1990.
[21] G. Kitagawa. Monte Carlo filter and smoother for non-Gaussian nonlinear state space models. Journal of Computational and Graphical Statistics, 5(1):1–25, 1996.
[22] D. Kortenkamp, R.P. Bonasso, and R. Murphy, editors. AI-based Mobile Robots: Case Studies of Successful Robot Systems. MIT Press, Cambridge, MA, 1998.
[23] D. Kortenkamp and T. Weymouth. Topological mapping for mobile robots using a combination of sonar and vision sensing. In Proceedings of the Twelfth National Conference on Artificial Intelligence, pages 979–984, 1994.
[24] J.J. Leonard and H.F. Durrant-Whyte. Directed Sonar Sensing for Mobile Robot Navigation. Kluwer Academic, Boston, 1992.
[25] F. Lu and E. Milios. Globally consistent range scan alignment for environment mapping. Technical report, Department of Computer Science, York University, April 1997.
[26] M.J. Matarić. A distributed model for mobile robot environment-learning and navigation. Master's thesis, MIT, Artificial Intelligence Laboratory, Cambridge, 1990.
[27] P. Maybeck. Stochastic Models, Estimation and Control, volume 1. Academic Press, New York, 1979.
[28] I. Nourbakhsh, R. Powers, and S. Birchfield. DERVISH: An office-navigating robot. AI Magazine, 16(2):53–60, Summer 1995.
[29] M.K. Pitt and N. Shephard. Filtering via simulation: auxiliary particle filters. Technical report, Department of Mathematics, Imperial College, London, October 1997.
[30] D.B. Rubin. Using the SIR algorithm to simulate posterior distributions. In Bayesian Statistics, volume 3, pages 395–402. Oxford University Press, 1988.
[31] B. Schiele and J.L. Crowley. A comparison of position estimation techniques using occupancy grids. In IEEE Conf. on Robotics and Automation (ICRA), volume 2, pages 1628–1634, 1994.
[32] R. Simmons and S. Koenig. Probabilistic robot navigation in partially observable environments. In Proc. International Joint Conference on Artificial Intelligence, 1995.
[33] A.F.M. Smith and A.E. Gelfand. Bayesian statistics without tears: A sampling-resampling perspective. American Statistician, 46(2):84–88, 1992.
[34] S. Thrun. Bayesian landmark learning for mobile robot localization. Machine Learning, 33(1), 1998.