Adaptive Mobile Robot Navigation and Mapping

Hans Jacob S. Feder^1, John J. Leonard^1, and Christopher M. Smith^2

^1 Massachusetts Institute of Technology, 77 Mass Ave., Cambridge, MA 02139
^2 Charles Stark Draper Laboratory, 555 Technology Square, Cambridge, MA 02139
feder@deslab.mit.edu, jleonard@mit.edu, csmith@draper.com

To appear in the International Journal of Robotics Research, Special Issue on Field and Service Robotics, July 1999.
Abstract

The task of building a map of an unknown environment and concurrently using that map to navigate is a central problem in mobile robotics research. This paper addresses the problem of how to perform concurrent mapping and localization (CML) adaptively using sonar. Stochastic mapping is a feature-based approach to CML that generalizes the extended Kalman filter to incorporate vehicle localization and environmental mapping. We describe an implementation of stochastic mapping that uses a delayed nearest neighbor data association strategy to initialize new features into the map, match measurements to map features, and delete out-of-date features. We introduce a metric for adaptive sensing which is defined in terms of Fisher information and represents the sum of the areas of the error ellipses of the vehicle and feature estimates in the map. Predicted sensor readings and expected dead-reckoning errors are used to estimate the metric for each potential action of the robot, and the action which yields the lowest cost (i.e., the maximum information) is selected. This technique is demonstrated via simulations, in-air sonar experiments, and underwater sonar experiments. Results are shown for 1) adaptive control of motion and 2) adaptive control of motion and scanning. The vehicle tends to selectively explore different objects in the environment. The performance of this adaptive algorithm is shown to be superior to straight-line motion and random motion.
1 Introduction
The goal of concurrent mapping and localization (CML) is to build a map of the environment while simultaneously using that map to enhance navigation performance. CML is critical for many field and service robotics applications. The long-term goal of our research is to develop new methods for CML, enabling autonomous underwater vehicles (AUVs) to navigate in unstructured environments without relying on a priori maps or acoustic beacons. This paper presents a technique for adaptive concurrent mapping and localization and demonstrates that technique using real sonar data.
Navigation is essential for successful operation of underwater vehicles in a range of scientific, commercial, and military applications (Leonard, Bennett, Smith, and Feder 1998). Accurate positioning information is vital for the safety of the vehicle and for the utility of the data it collects. The error growth rates of inertial and dead-reckoning systems available for AUVs are usually unacceptable. Underwater vehicles cannot use the Global Positioning System (GPS) to reset dead-reckoning error growth unless they surface, which is usually undesirable. Positioning systems that use acoustic transponders (Milne 1983) are often expensive and impractical to deploy and calibrate. Navigation algorithms based on existing maps have been proposed (Tuohy, Leonard, Bellingham, Patrikalakis, and Chryssostomidis 1996; Lucido, Popescu, Opderbecke, and Rigaud 1996), but sufficiently accurate a priori maps are often unobtainable.
Adaptive sensing strategies have the potential to save time and maximize the efficiency of the concurrent mapping and localization process for an AUV. Energy efficiency is one of the most challenging issues in the design of underwater vehicle systems (Bellingham and Willcox 1996). Techniques for building a map of sufficient resolution as quickly as possible would be highly beneficial. Survey class AUVs must maintain forward motion for controllability (Bellingham, Goudey, Consi, Bales, Atwood, Leonard, and Chryssostomidis 1994), hence the ability to adaptively choose a sensing and motion strategy which obtains the most information about the environment is especially important.
Sonar is the principal sensor for AUV navigation. Possible sonar systems include mechanically and electronically scanned sonars, side-scan sonar, and multi-beam bathymetric mapping sonars (Urick 1983). The rate of information obtained from a mechanically scanned sonar is low, making adaptive strategies especially beneficial. Electronically scanned sonars can provide information at very high data rates, but enormous processing loads make real-time implementations difficult. Adaptive techniques can be used to limit sensing to selected regions of interest, dramatically reducing computational requirements.
In this paper, adaptive sensing is formulated as the evaluation of different actions that the robot can take and the selection of the action that maximizes the amount of information acquired. This general problem has been considered in a variety of contexts (Manyika and Durrant-Whyte 1994; Russell and Norvig 1995) but not specifically for concurrent mapping and localization. CML provides an interesting context within which to address adaptive sensing because of the trade-off between dead-reckoning error and sensor data acquisition. The information gained by observing an environmental feature from multiple vantage points must counteract the rise in uncertainty that results from the motion of the vehicle.
Our experiments use two different robot systems. One is an inexpensive wheeled land robot equipped with a single rotating sonar. Observations are made of several cylindrical targets whose locations are initially unknown to the robot. Although this is a simplified scenario, these experiments provide a useful illustration of the adaptive CML process and confirm behavior seen in simulation. The second system is a planar robotic positioning system which moves a sonar within a 9 meter by 3 meter by 1 meter testing tank. The sonar is a mechanically scanned 675 kHz Imagenex model 881 sonar with a 2 degree beam. These characteristics are similar to alternative models used in the marine industry. Testing tank experimentation provides a bridge between simulation and field AUV systems. Repeatable experiments can be performed under identical conditions; ground truth can be determined to high accuracy.
Section 2 reviews previous research in concurrent mapping and localization and adaptive sensing. Section 3 develops the theory of adaptive stochastic mapping. Sections 4, 5, and 6 describe testing of the method using simulations, air sonar experiments, and underwater sonar experiments. Finally, Section 7 provides concluding remarks and suggestions for future research.
2 Background
2.1 Stochastic mapping
Stochastic mapping is a technique for CML that was introduced by Smith, Self, and Cheeseman (1990). In stochastic mapping, a single state vector represents estimates of the locations of the robot and the features in the environment. An associated error covariance matrix represents the uncertainty in these estimates, including the correlations between the vehicle and feature state estimates. As the vehicle moves through its environment taking measurements, the system state vector and covariance matrix are updated using an extended Kalman filter (EKF). As new features are observed, new states are added to the system state vector; the size of the system covariance matrix grows quadratically. Implementing stochastic mapping with real data requires methods for data association and track initiation. Smith, Self, and Cheeseman describe processes for adding new features to the map and enforcing known geometric constraints between different map features, but do not perform simulations or experiments using these techniques.
Moutarlier and Chatila (1989) provided the first implementation of this type of algorithm with real data. Their implementation uses data from a scanning laser range finder mounted on a wheeled mobile robot operating indoors. They implement a modified updating technique, called relocation-fusion, that reduces the effect of vehicle dynamic model uncertainty. More recently, Chatila and colleagues developed improved implementations for outdoor vehicles in natural environments and investigated new topics such as the decoupling of local and global maps (Betgé-Brezetz, Hébert, Chatila, and Devy 1996; Hébert, Betgé-Brezetz, and Chatila 1996). The number of other researchers who have implemented stochastic mapping with real data has been limited. Using sonar on land robots operating indoors, Rencken (1993) and Chong and Kleeman (1997) implemented variations of stochastic mapping. Rencken utilized the standard Polaroid ranging system, while Chong and Kleeman used a precision custom sonar array which provided accurate classification of geometric primitives. Current implementations of stochastic mapping are limited to ranges of tens of meters and/or durations of a few hours.
Stochastic mapping assumes a metrical, feature-based environmental representation, in which objects can be effectively represented as points in an appropriate parameter space. Other types of representations are possible and have been employed with success. For example, Thrun et al. (1998) have demonstrated highly successful navigation of indoor mobile robots using a combination of grid-based (Elfes 1987) and topological (Kuipers and Byun 1991) modeling. For marine robotic systems, the complexity of representing 3D natural environments with geometric models is formidable. Our hypothesis, however, is that salient features can be found and reliably extracted (Medeiros and Carpenter 1996), enabling CML. Exhaustively detailed environmental modeling should not be necessary for autonomous navigation.
2.2 Adaptive sonar sensing

Adaptive sensing has been a popular research topic in many different areas of robotics. Synonymous terms that have been used for these investigations include active perception (Bajcsy 1988), active vision (Blake and Yuille 1992), directed sensing (Leonard and Durrant-Whyte 1992), active information gathering (Hager 1990), adaptive sampling (Bellingham and Willcox 1996), sensor management (Manyika and Durrant-Whyte 1994), and limited rationality (Russell and Wefald 1995). A common theme that emerges is that adaptive sensing should be formulated as the process of evaluating different sensing actions that the robot can take and choosing the action that maximizes the information acquired by the robot. The challenge in implementing this concept in practice is to develop a methodology for quantifying the expected information for different sensing actions and evaluating them in a computationally feasible manner given limited a priori information.
Our approach is closest to Manyika (1994), who formulated a normative approach to multi-sensor data fusion management based on Bayesian decision theory (Berger 1985). A utility structure for different sensing actions was defined using entropy (Shannon information) as a metric for maximizing the information in decentralized multi-sensor data fusion. The method was implemented for model-based localization of a mobile robot operating indoors using multiple scanning sonars and an a priori map. Feature location uncertainty and the loss of information due to vehicle motion error, which are encountered in CML, were not explicitly addressed.
Examples of the application of adaptive sensing in marine robotics include Singh et al. (1995)
and Bellingham and Willcox (1996). Singh formulated an entropic measure of adaptive sensing
and implemented it on the Autonomous Benthic Explorer. The implementation was performed
using stochastic back-projection, a grid-based modeling technique developed by Stewart for marine sensor data fusion (Stewart 1988). Bellingham and Willcox have investigated optimal survey
design for AUVs making observations of dynamic oceanographic phenomena (Willcox, Zhang,
Bellingham, and Marshall 1996). Decreased vehicle speeds save power due to the quadratic
dependence of drag on velocity, but susceptibility to space-time aliasing is increased.
An interesting motivation for adaptive sonar sensing is provided by the behavior of bats and dolphins performing echo-location. For example, dolphins are observed to move their heads from side to side as they discriminate objects (Au 1993). Our hypothesis for this behavior is that sonar is more like touch than vision. A useful analogy may be the manner in which a person navigates through an unknown room in the dark. By reaching out for and establishing contact with walls, tables, and chairs, transitions from one object to another can be managed as one moves across the room. Whereas man-made sonars tend to use narrow-band waveforms and narrow beam patterns, bat and dolphin sonar systems use broad-band waveforms and relatively wide beam patterns. A broad beam pattern can be beneficial because it provides a greater range of angles over which a sonar can establish and maintain contact by receiving echoes from an environmental feature. The task for concurrent mapping and localization is to integrate the information obtained from sonar returns from different features as the sensor moves through the environment to estimate both the geometry of the scene and the trajectory of the sensor.
3 Adaptive Delayed Nearest Neighbor Stochastic Mapping

This section reviews the theory of stochastic mapping, derives a metric for performing stochastic mapping adaptively, and describes the delayed nearest neighbor method for track initiation and track deletion.
3.1 Stochastic mapping

Stochastic mapping is simply a special way of organizing the states in an extended Kalman filter (EKF) for the purpose of feature-relative navigation. An EKF is a computationally efficient estimator for the states of a given nonlinear dynamic system, under the assumptions that the noise processes are well modeled as Gaussian and that the errors due to linearization of the nonlinear system are small. That is, the EKF provides an estimate of both the state of the system, say \hat{x}, and the covariance of the state, say P. The covariance of the state quantifies the confidence in the estimate \hat{x}. A dynamic system is described by a dynamic model, F, which defines the evolution of the system (robot) through time, and an observation model, H, which relates observations (measurements) to the state of the system (robot).
Our implementation of stochastic mapping is based on Smith, Self, and Cheeseman (1990) and is described in more detail in Appendix A. We use x_k = \hat{x}_{k|k} + \tilde{x}_{k|k} to represent the system state vector x = [x_r^T x_1^T \cdots x_N^T]^T, where x_r and x_1, \ldots, x_N are the robot and feature states, respectively, \hat{x} is the estimated state vector, and \tilde{x} is the estimate error. Measurements are taken every t = kT seconds, where T is a constant period and k is a discrete time index. The measurements are related to the state by a state-to-observation transform z_k = H(x_k, d_z), where d_z is the measurement noise process. The a posteriori PDF of x_k, given a set of measurements Z^k \equiv \{z_k, \ldots, z_1\}, can be found from Bayes rule as

    p(x_k \mid Z^k) = \frac{p(z_k \mid x_k)\, p(x_k \mid Z^{k-1})}{p(z_k \mid Z^{k-1})}.    (1)
The distribution p(z_k \mid x_k) is defined as the likelihood function using the Likelihood Principle (Berger 1985). By knowing p(x_k \mid Z^k) we can form an estimate \hat{x}_k of the state.
In order to perform CML, the state transition (dynamic model) x_{k+1} = F(x_k, u_k, d_x), in addition to the observation transformation H, must be known. Here u_k is the control input (action) at time k and d_x is the process noise. If the stochastic processes d_z and d_x are assumed to be independent, white, and Gaussian, and the state transition F and observation model H are both linear, the Bayes least squares (BLS) estimator, \hat{x}_{k+1|k} = F(E\{x_k \mid Z^k\}, u), will be an efficient estimator of x. However, in the general problem of CML, neither the dynamic model nor the observation model will be linear. Thus, an efficient estimator cannot be obtained. Further, propagating the system's covariance through nonlinear equations does not guarantee that the statistics will be conserved. Thus, in order to circumvent the problem of transformation of nonlinearities, the nonlinear models F and H are approximated through a Taylor series expansion, keeping only the first two terms. That is,

    F(x_k, u_k, d_x) \approx F(\hat{x}_{k|k}, u_k, d_x) + F_x \tilde{x}_{k|k}
    \Rightarrow \hat{x}_{k+1|k} \approx E\{F(x_k, u_k, d_x) \mid Z^k\},    (2)

where F_x = dF(x, u_k)/dx evaluated at x = \hat{x}_{k|k} is the Jacobian of the dynamic model F with respect to x. Similarly, the observation model is approximated by

    H(x_k, d_z) \approx H(\hat{x}_{k|k-1}, d_z) + H_x \tilde{x}_{k|k-1},    (3)

where H_x = dH(x)/dx evaluated at x = \hat{x}_{k|k-1} is the Jacobian of the observation model H with respect to x. These approximations are equivalent to the assumption that the estimated mean at the previous time step, \hat{x}_{k|k}, is approximately equal to the system state x_k at the previous time step. Once these linearizations have been performed, and assuming that the approximation error is small, we can find the BLS estimator for this linearized system. The result is equivalent to the extended Kalman filter (Gelb 1973; Bar-Shalom and Li 1993; Uhlmann, Julier, and Csorba 1997).
The estimate error covariance, P_{k|k} = E\{\tilde{x}_k \tilde{x}_k^T\}, of the system takes the form

    P_{k|k} = \begin{bmatrix} P_{rr} & P_{r1} & \cdots & P_{rN} \\
                              P_{1r} & P_{11} & \cdots & P_{1N} \\
                              \vdots & \vdots & \ddots & \vdots \\
                              P_{Nr} & P_{N1} & \cdots & P_{NN} \end{bmatrix}_{k|k}.    (4)
The sub-matrices P_{rr}, P_{ri}, and P_{ii} are the vehicle-vehicle, vehicle-feature, and feature-feature covariances, respectively. This form is significant because it allows us to separate the uncertainty associated with the robot, P_{rr}, from that of the individual features, P_{ii}; this separation will be used in obtaining a measure of the information in our system.

Thus, the robot and the map are represented by a single state vector, x, with an associated estimate error covariance P at each time step. As new features are added, x and P increase in size.
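To make this bookkeeping concrete, the following sketch (ours, not taken from the authors' implementation; Python with NumPy, assuming 2D point features and a three-state vehicle pose) shows one way to organize the joint state vector and the block covariance of Equation (4).

```python
import numpy as np

class StochasticMap:
    """Joint vehicle-and-feature state for stochastic mapping (illustrative sketch).

    State layout: x = [x_r, y_r, theta_r, x_1, y_1, ..., x_N, y_N]^T, with the
    block covariance P of Equation (4) stored as a single dense matrix.
    """
    VEHICLE_DIM = 3   # (x, y, heading)
    FEATURE_DIM = 2   # point features (x, y)

    def __init__(self, x_r0, P_rr0):
        self.x = np.asarray(x_r0, dtype=float)      # starts with vehicle states only
        self.P = np.asarray(P_rr0, dtype=float)

    @property
    def num_features(self):
        return (self.x.size - self.VEHICLE_DIM) // self.FEATURE_DIM

    def feature_slice(self, i):
        start = self.VEHICLE_DIM + i * self.FEATURE_DIM
        return slice(start, start + self.FEATURE_DIM)

    def P_rr(self):
        return self.P[:self.VEHICLE_DIM, :self.VEHICLE_DIM]

    def P_ii(self, i):
        s = self.feature_slice(i)
        return self.P[s, s]

    def add_feature(self, x_i, P_ii, P_ri):
        """Augment x and P with a new feature estimate (x and P grow in size).
        Cross-covariances between the new feature and previously mapped features
        are left zero here for brevity; a full implementation would set them too."""
        n_old = self.x.size
        self.x = np.concatenate([self.x, np.asarray(x_i, dtype=float)])
        P_new = np.zeros((n_old + self.FEATURE_DIM, n_old + self.FEATURE_DIM))
        P_new[:n_old, :n_old] = self.P
        P_new[n_old:, n_old:] = P_ii
        P_new[:self.VEHICLE_DIM, n_old:] = P_ri
        P_new[n_old:, :self.VEHICLE_DIM] = P_ri.T
        self.P = P_new
```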
In our implementation, we denote the vehicle's state by x_r = [x_r y_r \theta]^T and the control input to the vehicle by u = [u_x u_y u_\theta]^T. For the dynamic model, F, we use

    x_{r,k+1} = f(x_k, u_k) + G(u_k) d_x
              = x_{r,k} + T_k u_k + G(u_k) d_x,    (5)

where G(u_k) scales the noise process d_x as a function of the distance traveled, that is,

    G(u_k) = \begin{bmatrix} \sigma_x \sqrt{u_{x_k}^2 + u_{y_k}^2} & 0 & 0 \\
                             0 & \sigma_y \sqrt{u_{x_k}^2 + u_{y_k}^2} & 0 \\
                             0 & 0 & \sigma_\theta \end{bmatrix}    (6)

and

    T_k = \begin{bmatrix} \cos\theta_k & -\sin\theta_k & 0 \\
                          \sin\theta_k & \cos\theta_k & 0 \\
                          0 & 0 & 1 \end{bmatrix}.    (7)
The \theta_k in this expression comes from the robot state x_{r,k}. The covariance of d_x is for convenience set equal to the identity matrix, as any scaling is placed in G. The matrix G is diagonal by assumption. If the correlations between the noise processes in the x, y, and \theta directions were known, they could be included. However, this is a minor point, and this form has been shown to work well for our systems.

Equation (5) does not take into account any of the vehicle's real dynamics. Thus the model is very general. However, if the vehicle's dynamics were known, they could be used to ensure that the vehicle movements remain realistic. In our experiments and simulations, we constrain the robot to move only a certain distance at each time step, thus making u_x and u_y dependent.
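As a concrete illustration, the sketch below implements the dynamic model of Equations (5)-(7) in NumPy. The function names and the noise parameters (sigma_x, sigma_y, sigma_theta) are our own illustrative choices, not identifiers from the authors' code.

```python
import numpy as np

def rotation_T(theta_k):
    """T_k of Equation (7): rotates the body-frame control into the world frame."""
    c, s = np.cos(theta_k), np.sin(theta_k)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def noise_gain_G(u_k, sigma_x, sigma_y, sigma_theta):
    """G(u_k) of Equation (6): scales unit-variance process noise with distance traveled."""
    dist = np.hypot(u_k[0], u_k[1])
    return np.diag([sigma_x * dist, sigma_y * dist, sigma_theta])

def predict_vehicle(x_r, u_k, sigma_x, sigma_y, sigma_theta, rng=None):
    """One step of Equation (5). With rng=None this returns the noise-free prediction
    f(x, u) = x_r + T_k u_k, which is what the EKF uses for the predicted state."""
    x_pred = x_r + rotation_T(x_r[2]) @ u_k
    if rng is not None:                      # optionally simulate the process noise
        G = noise_gain_G(u_k, sigma_x, sigma_y, sigma_theta)
        x_pred = x_pred + G @ rng.standard_normal(3)
    return x_pred
```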
For the observation model we use

    z_k = H(x_k, d_z) = h(x_k) + d_z,    (8)

where z_k is the observation vector of range and bearing measurements. The observation model, h, defines the nonlinear coordinate transformation from state to observation coordinates. The stochastic process d_z is assumed to be white, Gaussian, and independent of x_0 and d_x, and has covariance R.
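For a point feature and a planar vehicle pose, a range-bearing observation model consistent with Equation (8), together with its Jacobian H_x (used in Equation (3) and later in Equations (11) and (14)), can be sketched as follows. This is our own minimal version, assuming the state ordering used in the StochasticMap sketch above.

```python
import numpy as np

def h_range_bearing(x_r, x_i):
    """Predicted range and bearing from vehicle pose x_r = [x, y, theta] to point feature x_i."""
    dx, dy = x_i[0] - x_r[0], x_i[1] - x_r[1]
    rng = np.hypot(dx, dy)
    bearing = np.arctan2(dy, dx) - x_r[2]
    return np.array([rng, np.arctan2(np.sin(bearing), np.cos(bearing))])  # wrap to [-pi, pi]

def H_jacobian(x_r, x_i):
    """Jacobian of h with respect to [x_r, x_i] (a 2 x 5 matrix), evaluated at the estimate."""
    dx, dy = x_i[0] - x_r[0], x_i[1] - x_r[1]
    q = dx * dx + dy * dy
    r = np.sqrt(q)
    # columns: d/dx_r, d/dy_r, d/dtheta_r, d/dx_i, d/dy_i
    return np.array([[-dx / r, -dy / r,  0.0,  dx / r,  dy / r],
                     [ dy / q, -dx / q, -1.0, -dy / q,  dx / q]])
```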
In our experiments, a sector including an object is scanned over multiple adjacent angles to yield multiple sonar returns that originate from the object. That is, isolated features and smooth surfaces appear as circular arcs, also known as regions of constant depth (Leonard and Durrant-Whyte 1992), in sonar scans. For specular returns, an improved estimate of the range and bearing to the object is obtained by grouping sets of adjacent returns with nearly the same range, taking the mode of this set of angles as the bearing measurement, and taking the range associated with this bearing as the range measurement to the object (Moran, Leonard, and Chryssostomidis 1997). Rough surfaces (Bozma and Kuc 1991) yield additional returns at high angles of incidence. These occur frequently in the data from our underwater sonar, but are not processed in the experiments reported in this paper.

In our work, all features are modeled as point features. More complex objects can be described by a finite set of parameters and estimated by using the associated observation model for this parameterization in stochastic mapping. For instance, the modeling of planes, corners, cylinders, and edges is relatively straightforward (Leonard and Durrant-Whyte 1992).
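The grouping step described above can be sketched as follows. This is our own simplified paraphrase of the region-of-constant-depth idea, not the authors' code: the range tolerance is an assumed value, and the median range and central group angle are used as stand-ins for the mode-based selection described in the text.

```python
import numpy as np

def extract_rcd(angles, ranges, range_tol=0.02, min_returns=5):
    """Group adjacent sonar returns with nearly the same range into circular-arc
    (region of constant depth) segments, returning one (range, bearing) pair per group.

    angles, ranges: arrays of per-return bearing (rad) and range (m), in scan order.
    range_tol: maximum range difference (m) between neighboring returns in a group.
    """
    features, start = [], 0
    for k in range(1, len(ranges) + 1):
        end_of_group = (k == len(ranges)) or abs(ranges[k] - ranges[k - 1]) > range_tol
        if end_of_group:
            if k - start >= min_returns:
                group_r = np.asarray(ranges[start:k])
                group_a = np.asarray(angles[start:k])
                r_meas = float(np.median(group_r))        # robust stand-in for the modal range
                b_meas = float(group_a[len(group_a) // 2])  # central angle of the arc
                features.append((r_meas, b_meas))
            start = k
    return features
```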
3.2 Adaptation step

The goal of adaptive CML is to determine the optimal action given the current knowledge of the environment, sensors, and robot dynamics in a CML framework. To provide an intuitive understanding of this goal, imagine an underwater vehicle with no navigational uncertainty estimating the position of a feature, as depicted in Figure 1. As can be seen from this simple example, it is clearly advantageous for the vehicle to take the next measurement from a new direction. By doing so, more information about the feature is extracted, and thus a better estimate can be obtained.

The essence of our approach is to determine the action that maximizes the total knowledge (that is, the information) about the system in the presence of measurement and navigational uncertainty. By adaptively choosing actions, we mean that the next action of the robot is chosen so as to maximize the robot's information about its location and all the features' locations (the map).
Figure 1: An autonomous underwater vehicle with no navigational uncertainty estimating the position of an environmental feature. The ellipses denote the certainty (error ellipse) to which the feature position is known. a) The initial estimate of the feature. b) Given two possible new locations, ① and ②, from which to make the next observation, position ① allows for a more accurate estimate of the feature's position, because the measurement of the feature is taken from a new angle, resulting in the tilted error ellipse associated with this measurement. Combining this tilted error ellipse with the error ellipse from a) gives the small dotted circular error ellipse. Taking the next observation from location ② yields only a slightly smaller error ellipse than that of the previous time step in a), shown by the dotted ellipse.
The amount of information contained in Equation (1) can be quantified in various ways. Fisher information is of particular interest and is related to the estimate of the state x given the observations. The Fisher information for a random parameter is defined (Bar-Shalom and Fortmann 1988) as the covariance of the gradient of the total log-probability, that is,

    I_{k|k} \equiv E\{(\nabla_x \ln p(x, Z^k))(\nabla_x \ln p(x, Z^k))^T\}
            = -E\{\nabla_x \nabla_x^T \ln p(x_k, Z^k)\},    (9)

where \nabla_x = [d/dx_1 \cdots d/dx_N]^T is the gradient operator with respect to x = [x_1 \cdots x_N]; thus \nabla_x \nabla_x^T is the Hessian operator.
Strictly speaking, the Fisher information is only defined in the non-Bayesian view of estimation (Bar-Shalom and Fortmann 1988), that is, in the estimation of a non-random parameter. In the Bayesian approach to estimation, the parameter is random with a (prior) probability density function. However, as in the non-Bayesian definition of the Fisher information, the inverse of the Fisher information for random parameter estimation is the Cramér-Rao lower bound for the mean square error.
Applying Bayes rule to Equation (9), that is, p(x_k, Z^k) = p(x_k, z_k, Z^{k-1}) = p(x_k, Z^{k-1}) p(z_k \mid x_k), and noting the linearity of the gradient and expectation operators, we obtain a Fisher information update:

    E\{\nabla_x \nabla_x^T \ln p(x_k, Z^k)\} = E\{\nabla_x \nabla_x^T \ln p(x_k, Z^{k-1})\} + E\{\nabla_x \nabla_x^T \ln p(z_k \mid x_k)\}
    \Leftrightarrow I_{k|k} = I_{k|k-1} - E\{\nabla_x \nabla_x^T \ln p(z_k \mid x_k)\}.    (10)
The first term on the right represents the Fisher information, I_{k|k-1}, before the last measurement, while the second term corresponds to the additional information gained by measurement z_k.

If we obtain an efficient estimator for x, the Fisher information is simply given by the inverse of the error covariance of the state. Thus, under the assumption that Equations (2) and (3) hold, the inverse of the error covariance P is an estimate of the Fisher information of the system, that is,

    I \approx P^{-1}.

Under this assumption, and using Equation (5), the transformation M relating I_{k|k} to I_{k+1|k} can be found. By combining this with Equation (10) we have a recursive Fisher information update, which depends on the actions u (inputs). M will generally depend on the state x_{k+1} as well, which is not available. By invoking the assumption of Equation (2), x_{k+1} is replaced by the estimate \hat{x}_{k+1|k}. Thus, M can be used to give us the optimal action u_k to take, given our model and assumptions.
By optimal, we mean that, under the assumption that the EKF is the best estimator for our state, we can determine the action u that maximizes the knowledge (information) of the system given its current knowledge. This is not necessarily the optimal action for the actual system.
At each time step the algorithm seeks to determine the transformation M and, from this, infer the optimal action u_k. Combining the vehicle prediction and EKF update steps of stochastic mapping, M is given by

    I_{k+1|k+1} = (F_x I_{k|k}^{-1} F_x^T + G(u_k) G(u_k)^T)^{-1} + H_x^T R^{-1} H_x,    (11)

where F_x and H_x are the Jacobians of f and h with respect to x, evaluated at \hat{x}_{k|k} and \hat{x}_{k+1|k}, respectively. The first term on the right of Equation (11) represents the previous information of the system, as well as the loss of information that occurs due to the action u_k. The second term represents the additional information gained by the system due to observations after the action u_k. As this quantity is a function of x_{k+1}, which is unknown, we approximate x_{k+1} by \hat{x}_{k+1|k} under the assumption of Equation (2). The action that maximizes the information can be expressed as

    u_k = \arg\max_u I_{k+1|k+1} = \arg\min_u P_{k+1|k+1}.    (12)
The information is a matrix and we require a metric to quantify the information. Further, it is
desired that this metric have a simple physical interpretation.
For concurrent mapping and localization, it is desirable to use a metric that makes explicit the trade-off between uncertainty in feature locations and uncertainty in the vehicle position estimate. To accomplish this, we define the metric by a cost function C(P), which gives the total area of all the error ellipses (i.e., the highest probability density regions) and is thus a measure of our confidence in our map and robot position. That is,

    C(P) = \pi \prod_j \sqrt{\lambda_j(P_{rr})} + \pi \sum_{i=1}^{N} \prod_j \sqrt{\lambda_j(P_{ii})}
         = \pi \sqrt{\det(P_{rr})} + \pi \sum_{i=1}^{N} \sqrt{\det(P_{ii})},    (13)

where \lambda_j(\cdot) is the j-th eigenvalue of its argument and \det is the determinant of its argument. Turning back to Figure 1b, Equation (13) minimizes the error ellipses, thus yielding a lower cost for position ① than for position ②, which makes intuitive sense.
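The cost metric of Equation (13) translates directly into code. The following sketch (ours) computes C(P) from the block covariance using the helpers of the earlier StochasticMap sketch; using only the (x, y) block of the vehicle covariance for the vehicle ellipse is an assumption on our part about how the heading state is treated.

```python
import numpy as np

def ellipse_area(P_2x2):
    """Area of the error ellipse of a 2x2 position covariance: pi * sqrt(det P)."""
    return np.pi * np.sqrt(max(np.linalg.det(P_2x2), 0.0))

def cost_C(smap):
    """Equation (13): sum of the error-ellipse areas of the vehicle and all features.

    `smap` is assumed to expose P_rr() and P_ii(i) as in the StochasticMap sketch above.
    Only the (x, y) block of the vehicle covariance is used, since an area is defined
    only for a 2-D position uncertainty; this is our reading of the metric.
    """
    total = ellipse_area(smap.P_rr()[:2, :2])
    for i in range(smap.num_features):
        total += ellipse_area(smap.P_ii(i))
    return total
```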
The action to take is obtained by evaluating Equation (12) over the action space of the robot using the metric in Equation (13). This yields an adaptive stochastic mapping algorithm. This procedure optimizes the information locally at each time step; thus, the adaptation step performs a local optimization. Notice that the action space of the robot is not limited to motion control inputs. Other actions and constraints can readily be included in the control input u, such as which measurements should be taken by the sonar. For this, we control the set of angles U which are scanned by the sonar.

In principle, Equation (12) can be solved in symbolic form using Lagrange multipliers. However, the symbolic matrix inversion required in Equation (11) is very tedious and results in a very large number of terms. Further, as more features are added, the number of calculations required for the symbolic inversion grows exponentially. Numerical methods, however, can be used to evaluate Equation (12) quite efficiently. The experiments below do not use a numerical technique such as the simplex method to perform this optimization; however, this could readily be incorporated in the future.
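In practice, the experiments below evaluate the metric at a fixed number of candidate actions and pick the best one. A brute-force version of that search is sketched here; `predicted_cost` is a hypothetical helper that would propagate the current map through Equations (5)-(7) and (11) for a candidate action and return the resulting C(P).

```python
import numpy as np

def select_action(smap, candidate_actions, predicted_cost):
    """Greedy, one-step-ahead action selection (Equation (12)) by exhaustive evaluation.

    candidate_actions : iterable of control inputs u (and, optionally, scan sectors)
    predicted_cost    : callable (smap, u) -> predicted C(P) after executing u and
                        incorporating the expected measurements (hypothetical helper)
    """
    best_u, best_cost = None, np.inf
    for u in candidate_actions:
        cost = predicted_cost(smap, u)   # lower cost = more information retained/gained
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u, best_cost
```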
3.3 Data association, track initiation and track deletion

Section 3.1 outlined the stochastic mapping approach to feature-based navigation. In order to employ stochastic mapping in a real-world scenario, we have to be able to extract features from the environment. Sonars are notorious for exhibiting drop-outs, false returns, no-returns, and noise. Thus, addressing the problem of data association is critical for the validity of the observation model (Equation (8)) and for employing a stochastic mapping based approach to CML. For this purpose, data association, the initiation of new feature estimates (i.e., track initiation), and the removal of out-dated feature estimates (track deletion) for the mobile robot are described here.

The problem of data association is that of determining the source of a measurement. Obtaining correct data association can pose a significant challenge when making observations of environmental features (Smith 1998). In our implementation, data association is performed using a gated nearest neighbor technique (Bar-Shalom and Fortmann 1988). The initiation of new feature tracks is performed using a delayed nearest neighbor (DNN) initiator. The DNN initiator is similar in spirit to the logic-based multiple target track initiator described by Bar-Shalom and Fortmann (1988). One important difference in our method is that the vehicle's position is uncertain, and this uncertainty has to be included when performing gating and finding the nearest neighbor. It is assumed that a sonar return originates from at most one feature.
In order to include the vehicle's uncertainty when performing data association using a nearest neighbor gating technique, we need to transform the vehicle's uncertainty into measurement space and add this uncertainty to the measurement uncertainty. We assume that the true measurement of feature i at time k, conditioned upon Z^{k-1}, is normally distributed in measurement space. Further, it is assumed that the transformation from vehicle space to measurement space retains the Gaussianity of the estimated state. Under these assumptions, one may define the innovation matrix S_i for feature i as

    S_i = H_{x_i} \begin{bmatrix} P_{rr} & P_{ri} \\ P_{ir} & P_{ii} \end{bmatrix}_{k|k} H_{x_i}^T + R,
    \quad \text{with}\quad H_{x_i} = \left. \frac{dh_i([x_r\ x_i])}{d[x_r\ x_i]} \right|_{[x_r\ x_i] = [\hat{x}_{r,k+1|k}\ \hat{x}_{i,k+1|k}]},    (14)

where h_i is the observation model for feature i, and H_{x_i} is the linearized transformation from vehicle space to measurement space. The nearest neighbor gating is performed in innovation space. That is, defining the innovation \nu = z - \hat{z}_i, the validation region, or gate, is given by

    \nu^T S_i^{-1} \nu \le \gamma.    (15)
The value of the parameter \gamma is obtained from the \chi^2 distribution. For a system with 2 degrees of freedom, a value of \gamma = 9.0 yields the region of minimum volume that contains the measurement with a probability of 98.9% (Bar-Shalom and Fortmann 1988). This validation procedure defines where a measurement is expected to be found. If a measurement falls outside this region, it is considered too unlikely to have arisen from feature i. If several measurements gate with the same feature i, the closest (i.e., most probable) one is chosen.
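A minimal version of the gate test of Equations (14)-(15), with the vehicle-feature cross-covariance folded into the innovation matrix, is sketched below. The chi-square threshold of 9.0 matches the value quoted above; everything else (function names, inputs) is our own framing.

```python
import numpy as np

CHI2_GATE_2DOF = 9.0   # gamma for 2 degrees of freedom (~98.9% probability mass)

def innovation_matrix(H_xi, P_joint, R):
    """S_i of Equation (14): H_xi is the 2x5 Jacobian w.r.t. [x_r, x_i],
    P_joint is the corresponding 5x5 block of P (vehicle + feature i), R is 2x2."""
    return H_xi @ P_joint @ H_xi.T + R

def gates(z, z_pred, S_i, gamma=CHI2_GATE_2DOF):
    """Equation (15): return (accepted, squared Mahalanobis distance) for measurement z."""
    nu = np.asarray(z, dtype=float) - np.asarray(z_pred, dtype=float)
    nu[1] = np.arctan2(np.sin(nu[1]), np.cos(nu[1]))   # wrap the bearing innovation
    d2 = float(nu @ np.linalg.solve(S_i, nu))
    return d2 <= gamma, d2
```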
In performing track initiation, all measurements that have not been matched with any feature over the last N time steps are stored. That is, any measurement that was not matched to a known feature is a potential new feature. At each time step, a search for clusters of more than M ≤ N measurements over this set of unmatched measurements is performed. For each of these clusters, a new feature track is initiated. A cluster is defined as at most one measurement at each time step that gates according to Equation (15) with all other measurements in the cluster. For our systems, where the probability of false returns is relatively low and the probability of detection is relatively high, values of M = 2 or 3 and N = M + 1 are sufficient.
A track deletion capability is also incorporated to provide a limited ability to operate in dynamic environments. When a map feature is predicted to be visible but is not observed for several time steps in a row, it is removed from the map (Leonard, Cox, and Durrant-Whyte 1992). This is motivated by assuming a probability of detection P_D ≤ 1. Thus, if the feature has not been observed over the last r time steps during which an observation of the feature was expected, the probability of the feature being at the expected location is (1 - P_D)^r, assuming that the observations are independent. Thus, setting a threshold on r is equivalent to setting a threshold on the probability that the feature still exists at the predicted location.
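For instance, with an assumed detection probability picked purely for illustration, the deletion rule amounts to a simple threshold:

```python
P_D = 0.8               # assumed (illustrative) probability of detecting a visible feature
r = 3                   # consecutive expected-but-missed observations before deletion
print((1 - P_D) ** r)   # ~0.008: the feature is deleted once this falls below about 1%
```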
With the incorporation of these data association methods, the complete adaptive delayed nearest neighbor stochastic mapping algorithm is summarized as follows (a sketch of one cycle is given after the list):

1. State projection: the system state (vehicle and features) is projected to the next time step using the state transition model F, along with the control input u_k.
2. Gating: the closest map feature to each new measurement is determined and gated with that feature; non-matching measurements are stored.
3. State update: re-observed features update the vehicle and feature tracks using the EKF.
4. New feature generation: new features are initialized using the delayed nearest neighbor data association strategy.
5. Old feature removal: out-of-date features are deleted.
6. Adaptation step: the next action to take, u_k, is determined by optimizing Equation (13).

An outline of the algorithm for performing adaptive augmented stochastic mapping is shown in Figure 2.

Figure 2: Structure of the adaptive delayed nearest neighbor stochastic mapping algorithm, relating the sonar input and control input to the data association, track initiation, feature integration, track deletion, state prediction, SM update, and adaptation steps.
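The cycle enumerated above maps naturally onto a loop. The following sketch is ours; helper names such as ekf_predict, gate_measurements, and select_action are hypothetical stand-ins for the pieces sketched earlier, and their signatures are schematic.

```python
def adaptive_dnn_sm_step(smap, u_k, measurements, helpers):
    """One cycle of adaptive delayed nearest neighbor stochastic mapping (schematic).

    `helpers` bundles hypothetical implementations of the individual steps:
    ekf_predict, gate_measurements, ekf_update, initiate_tracks, delete_stale,
    and select_action (the adaptation step of Section 3.2).
    """
    # 1. state projection: propagate the vehicle state and covariance with F and u_k
    helpers.ekf_predict(smap, u_k)

    # 2. gating: match each measurement to its nearest gated feature; store the rest
    matches, unmatched = helpers.gate_measurements(smap, measurements)

    # 3. state update: EKF update with all re-observed features
    helpers.ekf_update(smap, matches)

    # 4. new feature generation: delayed nearest neighbor initiation from the unmatched buffer
    helpers.initiate_tracks(smap, unmatched)

    # 5. old feature removal: drop features missed on r consecutive expected observations
    helpers.delete_stale(smap)

    # 6. adaptation step: choose the next action by minimizing the cost C(P) of Equation (13)
    u_next, _ = helpers.select_action(smap)
    return u_next
```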
4 Simulation Results
The algorithm described above has been extensively tested in simulation. In these simulations, as well as in the experiments to follow, a numerical approximation was performed by evaluating Equation (13) at a fixed number of points in the action space. The robot was constrained to move a distance of 0, 10, or 20 cm at each time step, and could only turn in increments of 22.5°. Further, the vehicle was constrained not to approach closer than 40 cm to the features (PVC tubes), as the sonar signal in this range becomes unreliable. In all these simulations, the range error was assumed to have a standard deviation of 2 cm, while the standard deviation of the bearing was 10°. Further, the sonar could move in increments of 0.9° between each measurement; thus, a complete scan of 360° consisted of 400 sonar returns. The standard deviation of the vehicle odometry was set to 5% of the distance traveled, and the heading uncertainty was set to 1°. These parameters were chosen because they resemble the conditions of the air sonar experiments in Section 5. Two different types of simulation are described; the discretized action set used in these simulations is sketched below.
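Under the constraints just described, the discrete action set searched by the adaptation step is small enough to enumerate directly. One possible construction (ours, using the step lengths and heading increments quoted above, and interpreting the 22.5° constraint as a set of allowed headings) is:

```python
import numpy as np

def simulation_action_set():
    """Candidate actions for the simulations: step lengths of 0, 10, or 20 cm
    combined with headings in 22.5-degree increments (the zero-length step is
    kept once, since its heading is irrelevant)."""
    actions = [(0.0, 0.0)]                                   # (step length [m], heading [rad])
    for step in (0.10, 0.20):
        for heading in np.arange(0.0, 2 * np.pi, np.deg2rad(22.5)):
            actions.append((step, heading))
    return actions   # 1 + 2 * 16 = 33 candidate motions per time step
```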
4.1 Adaptive vehicle motion
In these simulations, it was assumed that the robot stopped and took a complete 360° scan of the environment before continuing. The algorithm chose where to move adaptively. The algorithm took advantage of the fact that the measurements have different certainty in range and bearing, thus forming an error ellipse. Notice, however, that if the observations of a target have nearly equal standard deviation in all directions, the robot will not move, as the loss of information from odometric error is larger than the information gained by taking a measurement from a different location.
Figure 3 shows three typical paths of the robot in the presence of two features (8.4 cm radius PVC tubes) as a result of adaptation. The robot started at (0,0) and the paths (solid, dashed, and dashed-dotted lines) occurred with approximately equal frequency. The dotted lines around the PVC tubes denote the constraint placed on the robot for how close it could come to the PVC tubes while still obtaining valid sonar returns. (This is a limitation of the standard Polaroid sonar driver circuit.) The resulting 95% confidence ellipses are drawn for the middle path, for the estimated positions of the centers of the features and for the robot's position. The robot's true position is indicated by a `+', while the estimated position is shown by a second marker. The robot stopped moving after about 15 time steps, as more information would be lost than gained by moving. Also notice that the robot moved larger distances between scans at the beginning of the run than at the end; more information was gained by moving and taking a measurement from a different location at the beginning than towards the end of a run.
Figure 4 shows the average, over 2000 runs, of the total cost (as defined by Equation (13)) of the system under adaptation as a percentage of the total cost of the system when moving randomly (solid line), or moving along the negative x-axis (dashed line), without adaptation. Moving along the negative x-axis is a worst-case scenario; moving in a straight line from the initial position in a direction between the two features would be practically identical to the adaptive run, and thus presents an upper bound for straight-line motion. In all the runs without adaptation, the robot moved a distance of 10 cm at each time step. As can be seen, the adaptation procedure obtains a cost of about 60% of the non-adaptive strategies after about 8 time steps. The lower cost signifies that the adaptive strategy has obtained more information about the environment and has thus produced a more accurate estimate of the robot's position as well as the features' positions. The random motion slowly started catching up after the 8th time step, and by the 50th time step it had achieved a cost about 15% higher than that of the adaptive strategy. When moving along the negative x-axis, the cost actually starts increasing after about the 15th time step, as the robot was so far away from the features that it lost more information by moving than it gained by sensing at each time step. This is due to the poor angular resolution of the sonar.
Figure 5 shows the sensor trajectory for a simulation involving eight objects. Figure 6 shows
the east and north vehicle location errors and associated error bounds as a function of time
during the simulation. After an initial loop around four of the objects is executed, the error
bounds converge and the sensor wanders back-and-forth over a small area.
Figure 3: Three typical paths taken by the robot in simulations of adaptive CML in the presence of two PVC tubes (filled circles) of radius 8.4 cm. Out of 1000 simulated runs, the robot chose to go to the left, to the right, and through the middle with about equal frequency. The 95% confidence level of the map for the middle path is shown by the ellipses. The dotted circles define how close the robot is allowed to come to the PVC tubes. (See text for symbols.)
Figure 4: Simulation result showing the cost function C(P) when performing adaptation divided by the cost function when not performing adaptation, plotted against time step. Two cases in which no adaptation was performed are shown: the solid line denotes the case when the robot moved randomly, while the dashed line denotes the case when the robot moved along the negative x-axis.
Figure 5: Path of the vehicle performing adaptive motion among multiple objects (east and north positions in meters).
Figure 6: North and east position estimate errors and 3σ bounds versus time for adaptive motion among multiple features. As can be seen, a steady state is reached after about 500 seconds.
4.2 Adaptive scanning and motion

In using sonar for mapping an environment, one is limited by the relatively slow speed with which measurements can be acquired. Thus, we imposed the additional constraint that the sonar could only scan an angle of 15° (that is, 50% more than the measurement bearing standard deviation) at each time step. The algorithm was required to decide where to direct the attention of the robot. The algorithm thereby adaptively decided where to look as well as where to move. This was
implemented in the framework outlined above by adding an additional action u to be controlled
and solving Equation (13) given the constraints of the scan angle.
To compare the simulation results to the experiments and the previous simulations, the simulations were, as before, conducted over 50 time steps. However, under the adaptive strategy, only a 15° scanning angle was chosen at each time step. This is equivalent to obtaining 17 sonar returns. Without adaptation, a complete scan was taken at every time step, generating 400 sonar returns. Figure 7 shows the relative cost of adaptive sensing and motion with and without adaptation. The solid line denotes the case when the robot moved randomly without adaptation, while the dashed line denotes the case when the robot moved along the negative x-axis without adaptation, as a function of sonar returns. The dotted vertical line indicates the point where the adaptive case was terminated as 50 time steps were reached. As can be seen from Figure 7, the adaptive method obtained a map with high confidence after relatively few sonar returns. After obtaining a total of 20000 returns, moving randomly and moving along the negative x-axis only achieved 93% and 33% of the confidence level that the adaptive method obtained with 850 sonar returns. Thus, the adaptive strategy required fewer measurements, and achieved a higher confidence level than any of the strategies without adaptation. The actual vehicle motion was very similar to that shown in Figure 3.
Table 1 compares all the strategies on the basis of how many sonar returns and how many time steps were required before the map reached a specified confidence level. As expected, the adaptive sensing and motion strategy required the fewest sonar returns to reach a given confidence level. However, it used more time steps than the adaptive motion strategy alone, as under the adaptive motion and sensing strategy only one feature was measured at each time step, while under the adaptive motion strategy, both features were measured at each time step.

                                      C = C_e            C = C_r
Strategy                           Returns  Steps     Returns  Steps
Adaptive sensing and motion:           100      6         300     18
Adaptive motion:                      1600      4        5200     13
No adaptation, random motion:         3200      8       20000     50
No adaptation, line motion:           3600      9           ∞      ∞

Table 1: Resources needed to achieve a given cost C in simulation. C_e = 0.038 m^2 is the minimum cost achieved when moving in one direction during the experiment. C_r = 0.0019 m^2 is the minimum cost achieved in the simulations when moving randomly.

Figure 7: Advantage of adaptive sensing and motion control. The cost function when performing adaptation divided by the cost function when not performing adaptation is plotted against the number of sonar returns. Two cases in which no adaptation was performed are shown: the solid line denotes the case when the robot moved randomly, while the dashed line denotes the case when the robot moved along the negative x-axis. The vertical dotted line indicates the point at which the adaptive method had completed its 50 time steps and terminated.
5 Air Sonar Experimental Results
The algorithm was implemented on a Nomad Scout robot (Nomadic Technologies 1998) equipped with a 50 kHz Polaroid 6500 series ultrasonic sensor mounted on a stepping motor that rotated the sensor in 0.9 degree increments, as pictured in Figure 8. The error in the sensor and the vehicle odometry was assumed to be the same as that used in the simulations, and the same constraints were employed as well. The sonar returns were assumed to be mainly specular; therefore, regions of constant depth were extracted from the scans (Leonard and Durrant-Whyte 1992). In these experiments, tracks were initiated from the first scan only, rather than using the DNN track initiator. Figure 9 shows a rough model of the room and the configuration of the robot and features (PVC tubes of known radius) in the experimental setup. The dots indicate individual sonar returns of the Polaroid sensor from a complete scan of the environment. Each complete scan of the environment consisted of 400 sonar returns. In these experiments, 15 motion steps were performed in each run.

Figure 8: The Nomad Scout robot with the Polaroid ultrasonic sensor mounted on top.

Figure 9: The returns from the Polaroid sensor (dots) with a rough room model superimposed. The robot is drawn as a triangle; the PVC tubes are indicated by small filled circles.
Figure 10 shows the advantage of adaptation for a representative Nomad run, similar to the simulation of Figure 4. As can be seen from this figure, the advantage of performing adaptation is clear. Further, the experimental result was well within the one standard deviation bound of the predicted result from simulation over 2000 runs.
Figure 10: Comparison of the cost function C(P) with and without adaptation for a representative Nomad experiment, plotted against time step. The solid line is the ratio of the cost when performing adaptation divided by the cost of moving in a straight line along the negative x-axis. The dashed line is the average simulation result over 2000 runs, with the dotted lines denoting the one standard deviation bounds.
Strategy (C = C_e)                 Returns  Steps
Adaptive sensing and motion:           100      6
Adaptive motion:                      2000      5
No adaptation, line motion:           6000     15

Table 2: Number of sonar returns and time steps required to achieve a cost of C_e = 0.038 m^2 or less for a representative experiment. The cost C_e was chosen to be the minimum cost achieved under the no-adaptation strategy.
Figure 11 shows the advantage of performing adaptive motion and sensing over moving in a straight line along the negative x-axis for the Nomad Scout robot (solid line). The simulated result is shown by a dashed line along with the one standard deviation bounds for 2000 simulated runs. As can be seen, the experimental cost ratio was within one standard deviation of the simulated value. The adaptive strategy produced a high confidence map after relatively few measurements, consistent with the simulations in Figure 7. The non-adaptive method only reached a confidence level of 50% of the adaptive level, even after more than 20 times as many measurements were taken.
Table 2 shows the number of time steps and the number of sonar returns required under each strategy to reach a map with some maximum specified cost C. As expected, the adaptive sensing and motion strategy required orders of magnitude fewer sonar returns than any of the other strategies. However, the adaptive motion strategy also used fewer time steps to reach a specified confidence level. Comparing this table of experimental results to the simulation results of Table 1, we see that they are consistent.

Figure 11: Advantage of adaptive sensing and motion in a representative Nomad experiment. The solid line is the cost of adaptive sensing and motion divided by the cost of moving in a straight line along the negative x-axis, plotted against the number of sonar returns. The dashed line is the average of 2000 simulated runs and the dotted lines show the one standard deviation bounds. The dotted vertical line indicates the point at which the adaptive case terminated, as 15 time steps had been completed.
6 Underwater Sonar Experimental Results
The second type of experiment conducted to test the adaptive stochastic mapping algorithm used a narrow-beam 675 kHz sector scan sonar mounted on a planar robotic positioning system, as shown in Figure 12. The positioning system was controlled by a Compumotor AT6450 controller card. The system was mounted on a 3.0 by 9.0 by 1.0 meter testing tank. The controller software and the CML code were integrated on a PC to obtain a closed-loop system for performing CML. At each time step, Equation (13) was minimized over the action space of the robot to choose the motion and scanning angles of the sensor.

Figure 12: The planar robotic positioning system and sector-scan sonar used in the underwater sonar experiments. The water in the tank is approximately 1 meter deep. The transducer was translated and rotated in a horizontal plane midway through the water column.
The experiments were designed to simulate an underwater vehicle equipped with a sonar that can scan in any direction relative to the vehicle at each time step. Conducting complete 360° scans of the environment at every time step is slow with a mechanically scanned sonar and computationally expensive with an electronically scanned sonar. For these experiments, we envisioned a vehicle mounted with two sonars: one forward-looking sonar for obstacle avoidance, and one that can scan at any angle for localization purposes. The forward-looking sonar was assumed to scan an angle of 30°. The scanning sonar was limited to scan over U = [-15°, 15°] at each time step. The scanning was performed in intervals of 0.15°. The sonars were modeled as having a standard deviation of 10.0° in bearing and 2.0 centimeters in range. Between each scan by the sonar, the vehicle could move between 15 cm and 30 cm. The lower limit signifies a minimum speed that the vehicle must maintain to retain controllability, while the upper limit signifies the maximum speed of the vehicle. The vehicle was constrained to turn only in increments of 22.5°. Further, we assumed that the vehicle was equipped with a dead-reckoning system with an accuracy of 10% of distance traveled and an accuracy of 1.0° in heading.
Figure 13 shows a typical scan taken by the sonar from the origin. The crosses show individual sonar returns. The circles show the features (PVC tubes). The dotted circles around the features signify the minimum allowable distance between the vehicle and the features. The triangle shows the position of the sensor. Circular arc features were extracted from the sonar scans using the technique described by Leonard and Durrant-Whyte (1992).

Figure 13: The returns from the underwater sonar for a 360° scan of the tank from the origin. The crosses show individual returns. The small circles identify the positions of the features (PVC tubes), with a dotted circle drawn 5 cm outside each of them to signify the minimum allowable distance between the sonar and the features. The sonar was mounted on the carriage of the positioning system, which served as a simulated AUV. The location of the sonar is shown by a triangle. The outline of the tank is shown in gray.
Figure 14 shows the sensor trajectory for a representative underwater sonar experiment with
the complete algorithm. The sensor started at the origin and moved around the tank using the
adaptive stochastic mapping algorithm to decide where to move and where to scan. Based on
minimization of the cost function in Equation (13), the vehicle selected one target to scan to
provide localization information. In addition, at each time step, the sonar was also scanned in
front of the vehicle for obstacle avoidance. Figure 15 shows the x and y errors for the experiment
and associated error bounds. No sonar measurements were obtained from approximately time
step 75 to time step 85 due to a communication error between the sonar head and the host PC.
Figure 16 shows the cost as a function of time. Solid vertical lines in Figures 15 and 16 indicate the
time steps when features of the environment were removed. Figure 17 plots the vehicle position
error versus time for the stochastic mapping algorithm in comparison to dead-reckoning.
Figures 18 through 20 show the results of a non-adaptive experiment in which the vehicle moved in a straight line in the negative x direction with the two objects present throughout the experiment. Without adaptive motion, the observability of the features was degraded and the y estimate seemed to diverge. Comparing Figure 19 with Figure 15, we observe that the non-adaptive strategy had a maximum 3σ confidence level of about 0.8 m and 0.2 m in the x and y positions respectively, while the maximum 3σ confidence level under adaptation was about 0.17 m and 0.15 m in the y and x positions respectively. A further indication of the advantage of the adaptive approach can be seen by comparing Figure 20 with Figure 17.

An interesting behavior that was observed is that, over a range of different operating conditions, the system selectively explores different objects in the environment. For example, Figure 21 shows the sensor paths for two different underwater sonar experiments under similar conditions in which an exploratory behavior can be observed.
7 Conclusion
This paper has considered the problem of adaptive sensing for feature-based concurrent mapping
and localization by autonomous underwater vehicles. An adaptive sensing metric has been incor-
27
−6 −4 −2 0 2
−2
−1
0
1
x position (m)
y
position
(m)
Time Step 2
−6 −4 −2 0 2
−2
−1
0
1
x position (m)
y
position
(m)
Time Step 42
−6 −4 −2 0 2
−2
−1
0
1
x position (m)
y
position
(m)
Time Step 91
−6 −4 −2 0 2
−2
−1
0
1
x position (m)
y
position
(m)
Time Step 175
Figure 14: Time evolution of the sensor trajectory for an adaptive stochastic mapping experiment
with the underwater sonar. The feature on the right was added at the 40th time step and the
two left features were removed towards the end of the experiment. The vehicle started at the
origin and moved adaptively through the environment to investigate di erent features in turn,
maximizing Equation (13) at each time step. The lled circles designate the feature locations,
and are surrounded by dotted circles which designate the stand o  distance used by an obstacle
avoidance routine. Similarly, the dashed-dotted rectangle designate the stand o  distance to
the tank walls. The dashed-dotted line represents the scanning region selected by the vehicle
at the last time step. The large triangle designates the vehicle's position. The vehicle was
constrained from moving outside the dashed-dotted lines to avoid collisions with the walls of the
tank. Sonar returns originating from outside the dashed-dotted lines were rejected.
28
0 50 100 150
−0.2
−0.15
−0.1
−0.05
0
0.05
0.1
0.15
0.2
Time Step
y
position
(m)
0 50 100 150
−0.15
−0.1
−0.05
0
0.05
0.1
0.15
Time Step
x
position
(m)
Figure 15: Position errors in the x and y directions and 3- con dence bounds for adaptive underwater
experiment.
0 50 100 150
0
0.02
0.04
0.06
0.08
0.1
0.12
Time Step
C(
P)
(m
2
)
Figure 16: Cost as a function of time for adaptive underwater experiment. The cost increased at
approximately the 40th time step when the third object was inserted into the tank and was observed
for the rst time. During the time interval between the two dashed vertical lines, no sonar data was
obtained due to a serial communications problem between the PC and the sonar head. The two solid
vertical lines designate the time steps during which features were removed from the tank, to simulate a
dynamic environment.
29
0 50 100 150 200
0
0.01
0.02
0.03
0.04
0.05
Time Step
3−σ
ellipse
of
vehicle
(m
2
)
Figure 17: Vehicle position error versus time for dead-reckoning (dashed line) and the adaptive DNNSM
algorithm (solid line). During the time interval between the two dashed vertical lines, no sonar data
was obtained due to a serial communications problem between the PC and the sonar head. The two
solid vertical lines designate the time steps when features were removed from the tank, to simulate a
dynamic environment.
−6 −4 −2 0 2
−2
−1
0
1
x position (m)
y
position
(m)
Figure 18: Sensor trajectory for a non-adaptive experiment in which the vehicle moved in a straight
line. While accurate location information was obtained in the x direction, the CML process diverged
in the y direction. The dashed-dotted rectangle indicates the stand o  distance from the tank walls
used by an obstacle avoidance routine.
30
Figure 19 (two panels; axes: time step vs. x and y position error (m)): Position errors in the x and y directions and 3-σ confidence bounds for the non-adaptive underwater experiment.
Figure 20 (axes: time step vs. vehicle's 3-σ error ellipse (m^2)): Vehicle error as a function of time for the non-adaptive underwater experiment.
Figure 21 (two panels; axes: x position (m) vs. y position (m)): Two representative stochastic mapping experiments that exhibited adaptive behavior. In each figure, the solid line shows the estimated path of the sensor and the dashed line shows the actual path. The triangle indicates the final position of the sensor. The filled disks indicate the locations of the features (PVC tubes). The ellipses around the features and the sensor are the 3-σ contours, that is, the 99% highest confidence regions. The sonar view is indicated by the dashed-dotted line. In the left figure, the sensor had a scanning angle of [−30°, 30°]. The sensor started at (0,0) and then adaptively determined the path to take as well as the direction to scan, resulting in exploratory behavior. The sensor first moves over to one of the objects, turns, and moves around the second. The experiment illustrated in the right figure was similar, with the exception that the sonar was only able to scan an area of [−7.5°, 7.5°] each time step. Again, we can see that exploratory behavior emerged as the sensor attempted to maximize the information it obtained about the environment. In these experiments, tracks were initiated from the first scan only, rather than using the DNN track initiator. In addition, the robot was constrained to turn a maximum of 30° at each time step in 15° increments.
An adaptive sensing metric has been incorporated within a stochastic mapping algorithm and tested via air and underwater experiments.
This is the first time, to our knowledge, that a feature-based concurrent mapping and localization
algorithm has been implemented with underwater sonar data.
We have introduced a method for performing adaptive concurrent mapping and localization
(CML) in a priori unknown environments for any number of features. The adaptive method was
based on choosing actions that, given the current knowledge, would maximize the information
gained in the next measurement. This approach can easily be implemented as an extra step in a
stochastic mapping algorithm for CML. The validity and usefulness of the approach were verified
both in simulation and in experiments with air and underwater sonar data.
Based on the air sonar experiments, we feel confident in the accuracy of the simulation in
predicting experimental outcomes. For example, the three typical paths for the robot shown
in Figure 3 are representative of both adaptive simulations and real data experiments, and the
plots comparing the performance with and without adaptation are similar. The advantage of
performing adaptive CML is notable when only adapting the motion of the vehicle (Figures 4
and 10). However, more substantial gains are obtained when performing adaptive motion and
sensing. This is apparent from Figures 7 and 11, where the number of sonar returns required
to obtain a given confidence level is an order of magnitude fewer than when not performing
adaptation.
The adaptive sensing technique employed here is a local method. At each cycle, only the next
action of the robot is considered. By predicting over an expanded time horizon, one can formulate
global adaptive mapping and navigation. For example, one can consider how to determine the
best path between the robot's current position and a desired goal position. In this case, the space
of possible actions grows tremendously, and a computationally efficient method for searching the
action space will be essential.
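To make the local strategy concrete, here is a minimal Python sketch of the one-step (greedy) action selection. It assumes the state is ordered as [vehicle x, y, heading, feature-1 x, y, ...] and relies on a hypothetical helper, predict_covariance(x_hat, P, u), that would run the stochastic mapping prediction and a simulated-measurement update for a candidate action u; neither the helper nor the variable names come from the paper.

    import numpy as np

    def map_cost(P, n_features):
        # Cost of Equation (13): sqrt(det(P_rr)) plus the sum of sqrt(det(P_ii))
        # over all features, i.e. the total "area" of the error ellipses.
        cost = np.sqrt(np.linalg.det(P[0:3, 0:3]))            # vehicle block P_rr (x, y, heading)
        for i in range(n_features):
            s = 3 + 2 * i                                     # each point feature has two states
            cost += np.sqrt(np.linalg.det(P[s:s + 2, s:s + 2]))
        return cost

    def choose_action(x_hat, P, candidate_actions, predict_covariance, n_features):
        # Greedy adaptation step: evaluate each candidate action u and return the
        # one whose predicted posterior covariance minimizes the map cost.
        best_u, best_cost = None, np.inf
        for u in candidate_actions:
            P_pred = predict_covariance(x_hat, P, u)          # hypothetical one-step lookahead
            c = map_cost(P_pred, n_features)
            if c < best_cost:
                best_u, best_cost = u, c
        return best_u, best_cost

Extending the horizon beyond one step amounts to searching over sequences of actions, which is where the combinatorial growth in the action space noted above arises.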
In future research, we will integrate adaptive sensing within a hybrid estimation framework
for CML in development in our laboratory (Smith and Leonard 1997; Smith 1998). In addition,
we will perform experiments with more complex objects and develop methods that incorporate
additional criteria for adaptation to address effects such as occlusion, rough surface scattering,
and multiple reflections.
Concurrent mapping and localization is a compelling topic for investigation because there are
so many open issues for future research. Key questions include:
• How can representation of complex, natural environments be incorporated within a metrically accurate, feature-based mapping approach?
• How can we reliably extract features from sensor data?
• What are the trade-offs between global versus local, absolute versus relative, and metrical versus topological mapping?
• How can we select which types of environmental features are most useful as navigation landmarks?
• How can data association ambiguity be effectively addressed?
• How can computational complexity be overcome in extending CML to long duration missions over large areas?
• In what situations can long-term bounded position errors be achieved?
• How can CML be extended to dynamic environments, in which previously mapped environmental objects can move or disappear, and new features can appear in previously mapped areas?
• How can CML be integrated with path planning and obstacle avoidance?
While this paper has focused on an adaptive sensing metric for improved state estimation performance
in CML, we believe that other types of sensing strategies can help answer many of
these questions. We have shown that adaptive sensing can save time and energy, and reduce the
amount of data that needs to be acquired. Moreover, we believe that adaptive strategies have the
potential to improve state estimation robustness, ease data association ambiguity, prevent divergence,
and facilitate recovery from errors. Investigation of adaptive strategies for these purposes
will be addressed in future research.
Acknowledgments
This research has been funded in part by the Naval Undersea Warfare Center, Newport, RI,
U.S.A. H.J.S.F. acknowledges the support of NFR (Norwegian Research Council) through grant
109338/410. J.J.L. acknowledges the support of the Henry L. and Grace Doherty Assistant
Professorship in Ocean Utilization and NSF CAREER Award BES-9733040. Further, the authors
would like to thank Seth Lloyd for fruitful discussions and comments. The authors would also
like to thank Jan Meyer, Jong H. Lim, Marlene Cohen, and Tom Consi for their help with the
Nomad hardware integration, Prof. J. Milgram for the use of the testing tank, Paul Dambra for
the installation of the positioning system, and Imagenex Technology Corp. and Paul Newman
for their help with the sonar integration.
A Stochastic Mapping
This appendix provides more detail on the stochastic mapping method, based on the work of
Smith et al. (1990). Stochastic mapping is simply a special way of organizing the states in an
extended Kalman filter, providing a consistent way of adding new states to the system
as more features are observed and estimated. In a Kalman filter, the combination
of a dynamic model and an observation model produces an estimated state \hat{x} and an associated
estimated covariance P. In this paper, the dynamic model for the robot is given by

    x_{r,k+1} = f(x_{r,k}, u_k) + G(u_k) d_x
              = x_{r,k} + T_k u_k + G(u_k) d_x,

where, as in the main text, k denotes the time index. Since all features are assumed to be static,
the features have no dynamics; thus

    x_{i,k+1} = x_{i,k}.

The noise scaling matrix G is defined in Equation (6) and the transformation matrix T is defined
in Equation (7). As in the main text, the state vector x = [x_r^T x_1^T ... x_N^T]^T consists of the robot's
state x_r as well as the N feature states denoted by x_1, ..., x_N. The control input is given by
u and the process noise by d_x.

As the robot moves around in its environment, the features are observed through the robot's
sensor. The relation between the current state and the observations is given by the observation
model

    z_k = h(x_k) + d_z,                                  (16)

where z contains the range and bearing measurements to the features in the environment and h is the
nonlinear transformation between the Cartesian coordinates of x and the polar coordinates of z.
As the robot moves around in its environment, the system evolves through 1) robot displace-
ment, 2) new feature integrations, and 3) re-observation of features. Each of these steps of
stochastic mapping is presented below.
Robot displacement
When the robot moves a distance given by u_k, the robot state x_r estimated at time step k+1
is obtained by taking the expectation of the dynamic model given above:

    \hat{x}_{r,k+1|k} = \hat{x}_{r,k|k} + \hat{T}_{k|k} u_k    and    \hat{x}_{i,k+1|k} = \hat{x}_{i,k|k},

where \hat{T}_{k|k} denotes the transformation T of Equation (7) evaluated at the current heading estimate.
Under the assumption of Equation (2), that the linearization error is small, the Fisher
information is updated through the linearized system by

    I_{k+1|k} = ( F_x I_{k|k}^{-1} F_x^T + G(u_k) G(u_k)^T )^{-1}.

Here F_x is the Jacobian of f with respect to x evaluated at \hat{x}_{k|k}.
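As a rough illustration (not the authors' code), the prediction step above can be sketched in Python as follows, assuming the state ordering [x_r, y_r, phi, x_1, y_1, ...] and working directly with the covariance P = I^{-1}; the noise scale factors sigma_x, sigma_y, sigma_phi stand in for the entries of G in Equation (6).

    import numpy as np

    def predict(x_hat, P, u, sigma_x, sigma_y, sigma_phi):
        # Stochastic-mapping prediction: displace the vehicle by u = [ux, uy, uphi]
        # expressed in the vehicle frame; feature states are static.
        n = x_hat.size
        phi = x_hat[2]
        c, s = np.cos(phi), np.sin(phi)
        T = np.array([[c, -s, 0.0],
                      [s,  c, 0.0],
                      [0.0, 0.0, 1.0]])

        x_pred = x_hat.copy()
        x_pred[0:3] = x_hat[0:3] + T @ u                   # vehicle moves, features unchanged

        # Jacobian F_x of the dynamic model (identity except for the heading coupling)
        Fx = np.eye(n)
        Fx[0, 2] = -s * u[0] - c * u[1]
        Fx[1, 2] =  c * u[0] - s * u[1]

        # Process noise scaled by the distance travelled, cf. Equation (6)
        d = np.hypot(u[0], u[1])
        G = np.zeros((n, 3))
        G[0:3, 0:3] = np.diag([sigma_x * d, sigma_y * d, sigma_phi])

        P_pred = Fx @ P @ Fx.T + G @ G.T                   # covariance form of I_{k+1|k}
        return x_pred, P_pred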
New feature integration
If the robot observes a new feature z_{new} = [r \theta]^T, expressed in the robot's reference frame, a
new feature state \hat{x}_{N+1} is estimated and incorporated by

    \hat{x}_{N+1} = l(\hat{x}_{k|k}, z_{new})
                  = [ \hat{x}_r + r \cos(\theta + \hat{\phi}) ;  \hat{y}_r + r \sin(\theta + \hat{\phi}) ],

where \hat{\phi} is the estimated vehicle heading. The new feature is integrated into the map by
adding this new state to x and I. That is,

    \hat{x}_{k+1|k} = [ \hat{x}_{k|k} ; \hat{x}_{N+1} ],
    P_{N+1,N+1} = L_{x_r} P_{rr} L_{x_r}^T + L_z R L_z^T,
    P_{N+1,i} = P_{i,N+1}^T = L_{x_r} P_{ri},

where L_{x_r} and L_z are the Jacobians of l with respect to the robot state x_r and to z_{new},
both evaluated at (\hat{x}_{r,k|k}, z_{new}). The covariance R of the new measurement is given a priori.
The structure of the system covariance P is given in Equation (4), and its inverse is a
good estimate of the Fisher information of the system, that is, P = I^{-1}.
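A sketch of this state augmentation, under the same assumed state ordering and with phi denoting the vehicle heading; the Jacobians below are worked out for the point-feature initialization l(.) and are not taken verbatim from the paper.

    import numpy as np

    def add_feature(x_hat, P, z_new, R):
        # Augment the state and covariance with a point feature initialized from a
        # range/bearing measurement z_new = [r, theta] taken in the vehicle frame.
        xr, yr, phi = x_hat[0], x_hat[1], x_hat[2]
        r, theta = z_new
        a = phi + theta

        x_new = np.array([xr + r * np.cos(a), yr + r * np.sin(a)])

        # Jacobians of l with respect to the vehicle state (L_xr) and the measurement (L_z)
        Lxr = np.array([[1.0, 0.0, -r * np.sin(a)],
                        [0.0, 1.0,  r * np.cos(a)]])
        Lz = np.array([[np.cos(a), -r * np.sin(a)],
                       [np.sin(a),  r * np.cos(a)]])

        Pno = Lxr @ P[0:3, :]                               # cross covariance with the old state
        Pnn = Lxr @ P[0:3, 0:3] @ Lxr.T + Lz @ R @ Lz.T     # new feature covariance

        x_aug = np.concatenate([x_hat, x_new])
        P_aug = np.block([[P, Pno.T],
                          [Pno, Pnn]])
        return x_aug, P_aug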
Re-observation of features
When a feature i is re-observed, we use the update step of the EKF to refine the vehicle's
state and the map. Introducing the definitions

    \tilde{x}_i = x_{i,k+1|k} - x_{r,k+1|k},
    \tilde{y}_i = y_{i,k+1|k} - y_{r,k+1|k},

the observation model h (Equation (16)) for feature i takes the form

    z_{i,k} = [ r_{i,k} ; \theta_{i,k} ]
            = [ \sqrt{\tilde{x}_i^2 + \tilde{y}_i^2} ;  \tan^{-1}(\tilde{y}_i / \tilde{x}_i) - \phi_k ] + d_{z_i}
            = h_i(x_k) + d_{z_i}.
The noise process d_{z_i} is assumed to be white and Gaussian with covariance R_i. If the n
features i_1, ..., i_n are re-observed, the observation model becomes

    z_k = [ z_{i_1,k} ; ... ; z_{i_n,k} ],    h = [ h_{i_1} ; ... ; h_{i_n} ],    R = diag( R_{i_1}, ..., R_{i_n} ),
with the Jacobian of h given by H_x = H_x(\hat{x}_{k+1|k}) = dh(x)/dx evaluated at x = \hat{x}_{k+1|k}.
These matrices are used in the update step of the extended Kalman filter as follows:

    \hat{x}_{k+1|k+1} = \hat{x}_{k+1|k} + K_{k+1} ( z_{k+1} - h(\hat{x}_{k+1|k}) ),
    I_{k+1|k+1} = I_{k+1|k} + H_x^T R^{-1} H_x,

where K_{k+1} is the extended Kalman filter gain given by

    K_{k+1} = I_{k+1|k}^{-1} H_x^T ( H_x I_{k+1|k}^{-1} H_x^T + R )^{-1}.
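The update for a single re-observed point feature can be sketched as follows, again under the assumed state ordering and using the plain covariance form of the update rather than the information form written above.

    import numpy as np

    def update_feature(x_hat, P, i, z, Ri):
        # EKF update for a re-observation z = [r, theta] of feature i (zero-based index).
        n = x_hat.size
        s = 3 + 2 * i                                       # start index of feature i in the state
        dx = x_hat[s]     - x_hat[0]                        # x_tilde
        dy = x_hat[s + 1] - x_hat[1]                        # y_tilde
        q = dx * dx + dy * dy
        r_pred = np.sqrt(q)
        z_pred = np.array([r_pred, np.arctan2(dy, dx) - x_hat[2]])

        # Jacobian of h_i with respect to the full state (nonzero only in the
        # vehicle columns and the columns of feature i)
        Hx = np.zeros((2, n))
        Hx[0, 0], Hx[0, 1] = -dx / r_pred, -dy / r_pred
        Hx[0, s], Hx[0, s + 1] = dx / r_pred, dy / r_pred
        Hx[1, 0], Hx[1, 1], Hx[1, 2] = dy / q, -dx / q, -1.0
        Hx[1, s], Hx[1, s + 1] = -dy / q, dx / q

        innov = z - z_pred
        innov[1] = (innov[1] + np.pi) % (2 * np.pi) - np.pi   # wrap the bearing innovation

        S = Hx @ P @ Hx.T + Ri
        K = P @ Hx.T @ np.linalg.inv(S)
        x_new = x_hat + K @ innov
        P_new = (np.eye(n) - K @ Hx) @ P      # simple form; see the Joseph form sketch below
        return x_new, P_new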
Computational Issues
To prevent numerical instability in the implementation of the extended Kalman filter, it is advisable
to employ the square-root filter or the Joseph form covariance update (Bar-Shalom and Li 1993). In
our implementation the Joseph form was used, which ensures that the covariance matrix of the EKF
remains positive definite.
To prevent the EKF from becoming overconfident, extra process noise was added to the
feature positions. That is, at each time step each feature was assumed to have process noise
of the same order of magnitude as d_x; this was incorporated through the G matrix.
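For completeness, a small sketch of the Joseph-form covariance update referred to above; it is algebraically equivalent to the simple (I − KH)P form but preserves symmetry and positive (semi-)definiteness in the presence of rounding errors.

    import numpy as np

    def joseph_update(P, K, Hx, R):
        # Joseph-form covariance update: P+ = (I - K H) P (I - K H)^T + K R K^T
        n = P.shape[0]
        A = np.eye(n) - K @ Hx
        return A @ P @ A.T + K @ R @ K.T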
List of symbols
F            Dynamic model.
H            Observation model.
M            Transformation relating the Fisher information between time steps recursively.
F_x          Jacobian of the dynamic model F.
G            Process noise scaling matrix; dependent on u.
H_x          Jacobian of the state-to-observation transformation h.
I            Fisher information matrix.
K            Kalman filter gain matrix.
L_{x_r}      Jacobian of l with respect to x_r, evaluated at \hat{x}_{r,k|k}.
N            Number of features in the map (and hence the number of feature states in the state vector).
P            System covariance matrix.
R            Observation covariance matrix.
T            Transformation used in the compounding operation.
d_x          Process noise, assumed to be white and Gaussian.
d_z          Observation noise process, assumed to be white and Gaussian.
h            State-to-observation transformation.
i            Subscript index identifying a feature.
k            Time index, shown as a subscript.
l            Transformation used for entering a new element into the state vector.
p()          A probability density function.
u            Actions, or system control input.
U            Set of sonar scanning angles.
x            Actual system state vector.
\hat{x}      Estimated system state vector.
\hat{x}_{k|k}  Estimated state at time step k given all the information up to time step k.
x_r          Vehicle state: location (x_r, y_r) and heading \phi; that is, x_r = [x_r y_r \phi]^T.
z            Measurement vector (range and bearing measurements).
^            Signifies an estimated variable.
References
Au, W. (1993). The Sonar of Dolphins. New York: Springer-Verlag.
Bajcsy, R. (1988, August). Active perception. Proceedings of the IEEE 76(8), 996–1005.
Bar-Shalom, Y. and T. E. Fortmann (1988). Tracking and Data Association. Academic Press.
Bar-Shalom, Y. and X.-R. Li (1993). Estimation and Tracking: Principles, Techniques, and Software. Artech House.
Bellingham, J. G., C. A. Goudey, T. R. Consi, J. W. Bales, D. K. Atwood, J. J. Leonard, and C. Chryssostomidis (1994). A second generation survey AUV. In IEEE Conference on Autonomous Underwater Vehicles, Cambridge, MA.
Bellingham, J. G. and J. S. Willcox (1996). Optimizing AUV oceanographic surveys. In IEEE Conference on Autonomous Underwater Vehicles, Monterey, CA.
Berger, J. O. (1985). Statistical Decision Theory and Bayesian Analysis (Second ed.). Springer-Verlag.
Betgé-Brezetz, S., P. Hébert, R. Chatila, and M. Devy (1996, April). Uncertain map making in natural environments. In Proc. IEEE Int. Conf. Robotics and Automation, pp. 1048–1053.
Blake, A. and A. Yuille (Eds.) (1992). Active Vision. MIT Press.
Bozma, O. and R. Kuc (1991, June). Characterizing pulses reflected from rough surfaces using ultrasound. J. Acoustical Society of America 89(6), 2519–2531.
Chong, K. S. and L. Kleeman (1997). Sonar feature map building for a mobile robot. In Proc. IEEE Int. Conf. Robotics and Automation.
Elfes, A. (1987, June). Sonar-based real-world mapping and navigation. IEEE Journal of Robotics and Automation RA-3(3), 249–265.
Gelb, A. C. (1973). Applied Optimal Estimation. The MIT Press.
Hager, G. (1990). Task-directed Sensor Fusion and Planning: A Computational Approach. Boston: Kluwer Academic Publishers.
Hébert, P., S. Betgé-Brezetz, and R. Chatila (1996, April). Decoupling odometry and exteroceptive perception in building a global world map of a mobile robot: The use of local maps. In Proc. IEEE Int. Conf. Robotics and Automation, pp. 757–764.
Kuipers, B. J. and Y. Byun (1991). A robot exploration and mapping strategy based on a semantic hierarchy of spatial representations. Robotics and Autonomous Systems, 47–63.
Leonard, J. J., A. A. Bennett, C. M. Smith, and H. J. S. Feder (1998, May). Autonomous underwater vehicle navigation. In IEEE ICRA Workshop on Navigation of Outdoor Autonomous Vehicles, Leuven, Belgium.
Leonard, J. J., I. J. Cox, and H. F. Durrant-Whyte (1992, August). Dynamic map building for an autonomous mobile robot. Int. J. Robotics Research 11(4), 286–298.
Leonard, J. J. and H. F. Durrant-Whyte (1992). Directed Sonar Sensing for Mobile Robot Navigation. Boston: Kluwer Academic Publishers.
Lucido, L., B. Popescu, J. Opderbecke, and V. Rigaud (1996, May). Segmentation of bathymetric profiles and terrain matching for underwater vehicle navigation. In Proceedings of the Second Annual World Automation Conference, Montpellier, France.
Manyika, J. S. and H. F. Durrant-Whyte (1994). Data Fusion and Sensor Management: A Decentralized Information-Theoretic Approach. New York: Ellis Horwood.
Medeiros, M. and R. Carpenter (1996). High resolution array signal processing for AUVs. In AUV 96, pp. 10–15.
Milne, P. H. (1983). Underwater Acoustic Positioning Systems. London: E. F. N. Spon.
Moran, B. A., J. J. Leonard, and C. Chryssostomidis (1997). Curved shape reconstruction using multiple hypothesis tracking. IEEE J. Ocean Engineering 22(4), 625–638.
Moutarlier, P. and R. Chatila (1989). Stochastic multisensory data fusion for mobile robot location and environment modeling. In 5th Int. Symposium on Robotics Research, Tokyo.
Rencken, W. D. (1993). Concurrent localisation and map building for mobile robots using ultrasonic sensors. In Proc. IEEE Int. Workshop on Intelligent Robots and Systems, Yokohama, Japan, pp. 2192–2197.
Russell, S. and P. Norvig (1995). Artificial Intelligence: A Modern Approach. Prentice Hall.
Russell, S. and E. Wefald (1995). Do the Right Thing: Studies in Limited Rationality. MIT Press.
Singh, H. (1995). An Entropic Framework for AUV Sensor Modelling. Ph.D. thesis, Massachusetts Institute of Technology.
Smith, C. M. (1998). Integrating Mapping and Navigation. Ph.D. thesis, Massachusetts Institute of Technology.
Smith, C. M. and J. J. Leonard (1997). A multiple hypothesis approach to concurrent mapping and localization for autonomous underwater vehicles. In International Conference on Field and Service Robotics, Sydney, Australia.
Smith, R., M. Self, and P. Cheeseman (1990). Estimating uncertain spatial relationships in robotics. In I. Cox and G. Wilfong (Eds.), Autonomous Robot Vehicles. Springer-Verlag.
Stewart, W. K. (1988). Multisensor Modeling Underwater with Uncertain Information. Ph.D. thesis, Massachusetts Institute of Technology.
Technologies, N. (1998). http://www.robots.com.
Thrun, S., D. Fox, and W. Burgard (1998). A probabilistic approach to concurrent mapping and localization for mobile robots. Machine Learning 31, 29–53.
Tuohy, S. T., J. J. Leonard, J. G. Bellingham, N. M. Patrikalakis, and C. Chryssostomidis (1996, March). Map based navigation for autonomous underwater vehicles. International Journal of Offshore and Polar Engineering 6(1), 9–18.
Uhlmann, J. K., S. J. Julier, and M. Csorba (1997). Nondivergent simultaneous map building and localisation using covariance intersection. In Navigation and Control Technologies for Unmanned Systems II.
Urick, R. (1983). Principles of Underwater Sound. New York: McGraw-Hill.
Willcox, J. S., Y. Zhang, J. G. Bellingham, and J. Marshall (1996). AUV survey design applied to oceanic deep convection. In IEEE Oceans, pp. 949–954.
40
Ad

Recommended

A Real-Time Algorithm for Mobile Robot Mapping With Applications to.pdf
A Real-Time Algorithm for Mobile Robot Mapping With Applications to.pdf
ssusera00b371
 
Robotic Navigation and Mapping with Radar 1st Edition Martin Adams
Robotic Navigation and Mapping with Radar 1st Edition Martin Adams
ablinhurshye
 
Paper
Paper
Adrià Serra Moral
 
Zupt, LLC's SLAM and Optimal Sensor fusion
Zupt, LLC's SLAM and Optimal Sensor fusion
Robert Flaming, PCM®
 
Robot Localization And Map Building Hanafiah Yussof Ed
Robot Localization And Map Building Hanafiah Yussof Ed
gaoesmimis
 
Visual analytics of 3D LiDAR point clouds in robotics operating systems
Visual analytics of 3D LiDAR point clouds in robotics operating systems
journalBEEI
 
Design of Mobile Robot Navigation system using SLAM and Adaptive Tracking Con...
Design of Mobile Robot Navigation system using SLAM and Adaptive Tracking Con...
iosrjce
 
K017655963
K017655963
IOSR Journals
 
Robotics map based navigation in urban
Robotics map based navigation in urban
Sagheer Abbas
 
Survey 1 (project overview)
Survey 1 (project overview)
Ahmed Abd El-Fattah
 
International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)
IJERD Editor
 
Self-Flying Drones: On a Mission to Navigate Dark, Dangerous and Unknown Worlds
Self-Flying Drones: On a Mission to Navigate Dark, Dangerous and Unknown Worlds
Tahoe Silicon Mountain
 
IRJET Autonomous Simultaneous Localization and Mapping
IRJET Autonomous Simultaneous Localization and Mapping
IRJET Journal
 
Under water object classification using sonar signal
Under water object classification using sonar signal
IRJET Journal
 
EFFECTIVE REDIRECTING OF THE MOBILE ROBOT IN A MESSED ENVIRONMENT BASED ON TH...
EFFECTIVE REDIRECTING OF THE MOBILE ROBOT IN A MESSED ENVIRONMENT BASED ON TH...
ijfls
 
EFFECTIVE REDIRECTING OF THE MOBILE ROBOT IN A MESSED ENVIRONMENT BASED ON TH...
EFFECTIVE REDIRECTING OF THE MOBILE ROBOT IN A MESSED ENVIRONMENT BASED ON TH...
Wireilla
 
Monocular simultaneous localization and generalized object mapping with undel...
Monocular simultaneous localization and generalized object mapping with undel...
Chen-Han Hsiao
 
SLAM for dummies
SLAM for dummies
mustafa sarac
 
Monocular simultaneous localization and generalized object mapping with undel...
Monocular simultaneous localization and generalized object mapping with undel...
Chen-Han Hsiao
 
FUZZY LOGIC IN ROBOT NAVIGATION
FUZZY LOGIC IN ROBOT NAVIGATION
Ashish Kholia
 
report
report
Tomas Van Oyen
 
IRJET- Simultaneous Localization and Mapping for Automatic Chair Re-Arran...
IRJET- Simultaneous Localization and Mapping for Automatic Chair Re-Arran...
IRJET Journal
 
Project presentation
Project presentation
KayDrive
 
Project presentation
Project presentation
KayDrive
 
Robot navigation in unknown environment with obstacle recognition using laser...
Robot navigation in unknown environment with obstacle recognition using laser...
IJECEIAES
 
Active sensors for local planning in mobile robotics 1st Edition Penelope Pro...
Active sensors for local planning in mobile robotics 1st Edition Penelope Pro...
fleitybuzder
 
Multisensor data fusion based autonomous mobile
Multisensor data fusion based autonomous mobile
eSAT Publishing House
 
SLAM Technology SLAM For Robot patents.pptx
SLAM Technology SLAM For Robot patents.pptx
gunnelsbrittny
 
AdaptiveImage-BasedLeader-FollowerFormationControlofMobileRobotsWithVisibilit...
AdaptiveImage-BasedLeader-FollowerFormationControlofMobileRobotsWithVisibilit...
ssusera00b371
 
Implementation_of_Hough_Transform_for_image_processing_applications.pdf
Implementation_of_Hough_Transform_for_image_processing_applications.pdf
ssusera00b371
 

More Related Content

Similar to Adaptive Mobile Robot Navigation and Mapping.pdf (20)

Robotics map based navigation in urban
Robotics map based navigation in urban
Sagheer Abbas
 
Survey 1 (project overview)
Survey 1 (project overview)
Ahmed Abd El-Fattah
 
International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)
IJERD Editor
 
Self-Flying Drones: On a Mission to Navigate Dark, Dangerous and Unknown Worlds
Self-Flying Drones: On a Mission to Navigate Dark, Dangerous and Unknown Worlds
Tahoe Silicon Mountain
 
IRJET Autonomous Simultaneous Localization and Mapping
IRJET Autonomous Simultaneous Localization and Mapping
IRJET Journal
 
Under water object classification using sonar signal
Under water object classification using sonar signal
IRJET Journal
 
EFFECTIVE REDIRECTING OF THE MOBILE ROBOT IN A MESSED ENVIRONMENT BASED ON TH...
EFFECTIVE REDIRECTING OF THE MOBILE ROBOT IN A MESSED ENVIRONMENT BASED ON TH...
ijfls
 
EFFECTIVE REDIRECTING OF THE MOBILE ROBOT IN A MESSED ENVIRONMENT BASED ON TH...
EFFECTIVE REDIRECTING OF THE MOBILE ROBOT IN A MESSED ENVIRONMENT BASED ON TH...
Wireilla
 
Monocular simultaneous localization and generalized object mapping with undel...
Monocular simultaneous localization and generalized object mapping with undel...
Chen-Han Hsiao
 
SLAM for dummies
SLAM for dummies
mustafa sarac
 
Monocular simultaneous localization and generalized object mapping with undel...
Monocular simultaneous localization and generalized object mapping with undel...
Chen-Han Hsiao
 
FUZZY LOGIC IN ROBOT NAVIGATION
FUZZY LOGIC IN ROBOT NAVIGATION
Ashish Kholia
 
report
report
Tomas Van Oyen
 
IRJET- Simultaneous Localization and Mapping for Automatic Chair Re-Arran...
IRJET- Simultaneous Localization and Mapping for Automatic Chair Re-Arran...
IRJET Journal
 
Project presentation
Project presentation
KayDrive
 
Project presentation
Project presentation
KayDrive
 
Robot navigation in unknown environment with obstacle recognition using laser...
Robot navigation in unknown environment with obstacle recognition using laser...
IJECEIAES
 
Active sensors for local planning in mobile robotics 1st Edition Penelope Pro...
Active sensors for local planning in mobile robotics 1st Edition Penelope Pro...
fleitybuzder
 
Multisensor data fusion based autonomous mobile
Multisensor data fusion based autonomous mobile
eSAT Publishing House
 
SLAM Technology SLAM For Robot patents.pptx
SLAM Technology SLAM For Robot patents.pptx
gunnelsbrittny
 
Robotics map based navigation in urban
Robotics map based navigation in urban
Sagheer Abbas
 
International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)
IJERD Editor
 
Self-Flying Drones: On a Mission to Navigate Dark, Dangerous and Unknown Worlds
Self-Flying Drones: On a Mission to Navigate Dark, Dangerous and Unknown Worlds
Tahoe Silicon Mountain
 
IRJET Autonomous Simultaneous Localization and Mapping
IRJET Autonomous Simultaneous Localization and Mapping
IRJET Journal
 
Under water object classification using sonar signal
Under water object classification using sonar signal
IRJET Journal
 
EFFECTIVE REDIRECTING OF THE MOBILE ROBOT IN A MESSED ENVIRONMENT BASED ON TH...
EFFECTIVE REDIRECTING OF THE MOBILE ROBOT IN A MESSED ENVIRONMENT BASED ON TH...
ijfls
 
EFFECTIVE REDIRECTING OF THE MOBILE ROBOT IN A MESSED ENVIRONMENT BASED ON TH...
EFFECTIVE REDIRECTING OF THE MOBILE ROBOT IN A MESSED ENVIRONMENT BASED ON TH...
Wireilla
 
Monocular simultaneous localization and generalized object mapping with undel...
Monocular simultaneous localization and generalized object mapping with undel...
Chen-Han Hsiao
 
Monocular simultaneous localization and generalized object mapping with undel...
Monocular simultaneous localization and generalized object mapping with undel...
Chen-Han Hsiao
 
FUZZY LOGIC IN ROBOT NAVIGATION
FUZZY LOGIC IN ROBOT NAVIGATION
Ashish Kholia
 
IRJET- Simultaneous Localization and Mapping for Automatic Chair Re-Arran...
IRJET- Simultaneous Localization and Mapping for Automatic Chair Re-Arran...
IRJET Journal
 
Project presentation
Project presentation
KayDrive
 
Project presentation
Project presentation
KayDrive
 
Robot navigation in unknown environment with obstacle recognition using laser...
Robot navigation in unknown environment with obstacle recognition using laser...
IJECEIAES
 
Active sensors for local planning in mobile robotics 1st Edition Penelope Pro...
Active sensors for local planning in mobile robotics 1st Edition Penelope Pro...
fleitybuzder
 
Multisensor data fusion based autonomous mobile
Multisensor data fusion based autonomous mobile
eSAT Publishing House
 
SLAM Technology SLAM For Robot patents.pptx
SLAM Technology SLAM For Robot patents.pptx
gunnelsbrittny
 

More from ssusera00b371 (9)

AdaptiveImage-BasedLeader-FollowerFormationControlofMobileRobotsWithVisibilit...
AdaptiveImage-BasedLeader-FollowerFormationControlofMobileRobotsWithVisibilit...
ssusera00b371
 
Implementation_of_Hough_Transform_for_image_processing_applications.pdf
Implementation_of_Hough_Transform_for_image_processing_applications.pdf
ssusera00b371
 
Research on automated guided vehicle (AGV) path tracking control based on la...
Research on automated guided vehicle (AGV) path tracking control based on la...
ssusera00b371
 
Computer_Vision_Control_Based_CNN_PID_fo.pdf
Computer_Vision_Control_Based_CNN_PID_fo.pdf
ssusera00b371
 
OpenCV-based PID control line following vehicle with object recognition and ...
OpenCV-based PID control line following vehicle with object recognition and ...
ssusera00b371
 
An_Ultrasonic_Line_Follower_Robot_to_Det.pdf
An_Ultrasonic_Line_Follower_Robot_to_Det.pdf
ssusera00b371
 
A_Review_of_Control_Algorithm_for_Autono.pdf
A_Review_of_Control_Algorithm_for_Autono.pdf
ssusera00b371
 
Design_and_implementation_of_line_follow.pdf
Design_and_implementation_of_line_follow.pdf
ssusera00b371
 
industrial automation circuits 2020-2021.ppt
industrial automation circuits 2020-2021.ppt
ssusera00b371
 
AdaptiveImage-BasedLeader-FollowerFormationControlofMobileRobotsWithVisibilit...
AdaptiveImage-BasedLeader-FollowerFormationControlofMobileRobotsWithVisibilit...
ssusera00b371
 
Implementation_of_Hough_Transform_for_image_processing_applications.pdf
Implementation_of_Hough_Transform_for_image_processing_applications.pdf
ssusera00b371
 
Research on automated guided vehicle (AGV) path tracking control based on la...
Research on automated guided vehicle (AGV) path tracking control based on la...
ssusera00b371
 
Computer_Vision_Control_Based_CNN_PID_fo.pdf
Computer_Vision_Control_Based_CNN_PID_fo.pdf
ssusera00b371
 
OpenCV-based PID control line following vehicle with object recognition and ...
OpenCV-based PID control line following vehicle with object recognition and ...
ssusera00b371
 
An_Ultrasonic_Line_Follower_Robot_to_Det.pdf
An_Ultrasonic_Line_Follower_Robot_to_Det.pdf
ssusera00b371
 
A_Review_of_Control_Algorithm_for_Autono.pdf
A_Review_of_Control_Algorithm_for_Autono.pdf
ssusera00b371
 
Design_and_implementation_of_line_follow.pdf
Design_and_implementation_of_line_follow.pdf
ssusera00b371
 
industrial automation circuits 2020-2021.ppt
industrial automation circuits 2020-2021.ppt
ssusera00b371
 
Ad

Recently uploaded (20)

Solar thermal – Flat plate and concentrating collectors .pptx
Solar thermal – Flat plate and concentrating collectors .pptx
jdaniabraham1
 
Microwatt: Open Tiny Core, Big Possibilities
Microwatt: Open Tiny Core, Big Possibilities
IBM
 
AI_Presentation (1). Artificial intelligence
AI_Presentation (1). Artificial intelligence
RoselynKaur8thD34
 
David Boutry - Mentors Junior Developers
David Boutry - Mentors Junior Developers
David Boutry
 
Modern multi-proposer consensus implementations
Modern multi-proposer consensus implementations
François Garillot
 
Decoding Kotlin - Your Guide to Solving the Mysterious in Kotlin - Devoxx PL ...
Decoding Kotlin - Your Guide to Solving the Mysterious in Kotlin - Devoxx PL ...
João Esperancinha
 
20CE404-Soil Mechanics - Slide Share PPT
20CE404-Soil Mechanics - Slide Share PPT
saravananr808639
 
60 Years and Beyond eBook 1234567891.pdf
60 Years and Beyond eBook 1234567891.pdf
waseemalazzeh
 
362 Alec Data Center Solutions-Slysium Data Center-AUH-Adaptaflex.pdf
362 Alec Data Center Solutions-Slysium Data Center-AUH-Adaptaflex.pdf
djiceramil
 
VARICELLA VACCINATION: A POTENTIAL STRATEGY FOR PREVENTING MULTIPLE SCLEROSIS
VARICELLA VACCINATION: A POTENTIAL STRATEGY FOR PREVENTING MULTIPLE SCLEROSIS
ijab2
 
Introduction to sensing and Week-1.pptx
Introduction to sensing and Week-1.pptx
KNaveenKumarECE
 
nnnnnnnnnnnn7777777777777777777777777777777.pptx
nnnnnnnnnnnn7777777777777777777777777777777.pptx
gayathri venkataramani
 
System design handwritten notes guidance
System design handwritten notes guidance
Shabista Imam
 
Industry 4.o the fourth revolutionWeek-2.pptx
Industry 4.o the fourth revolutionWeek-2.pptx
KNaveenKumarECE
 
DESIGN OF REINFORCED CONCRETE ELEMENTS S
DESIGN OF REINFORCED CONCRETE ELEMENTS S
prabhusp8
 
FUNDAMENTALS OF COMPUTER ORGANIZATION AND ARCHITECTURE
FUNDAMENTALS OF COMPUTER ORGANIZATION AND ARCHITECTURE
Shabista Imam
 
Structured Programming with C++ :: Kjell Backman
Structured Programming with C++ :: Kjell Backman
Shabista Imam
 
最新版美国圣莫尼卡学院毕业证(SMC毕业证书)原版定制
最新版美国圣莫尼卡学院毕业证(SMC毕业证书)原版定制
Taqyea
 
A Cluster-Based Trusted Secure Multipath Routing Protocol for Mobile Ad Hoc N...
A Cluster-Based Trusted Secure Multipath Routing Protocol for Mobile Ad Hoc N...
IJCNCJournal
 
Structural Wonderers_new and ancient.pptx
Structural Wonderers_new and ancient.pptx
nikopapa113
 
Solar thermal – Flat plate and concentrating collectors .pptx
Solar thermal – Flat plate and concentrating collectors .pptx
jdaniabraham1
 
Microwatt: Open Tiny Core, Big Possibilities
Microwatt: Open Tiny Core, Big Possibilities
IBM
 
AI_Presentation (1). Artificial intelligence
AI_Presentation (1). Artificial intelligence
RoselynKaur8thD34
 
David Boutry - Mentors Junior Developers
David Boutry - Mentors Junior Developers
David Boutry
 
Modern multi-proposer consensus implementations
Modern multi-proposer consensus implementations
François Garillot
 
Decoding Kotlin - Your Guide to Solving the Mysterious in Kotlin - Devoxx PL ...
Decoding Kotlin - Your Guide to Solving the Mysterious in Kotlin - Devoxx PL ...
João Esperancinha
 
20CE404-Soil Mechanics - Slide Share PPT
20CE404-Soil Mechanics - Slide Share PPT
saravananr808639
 
60 Years and Beyond eBook 1234567891.pdf
60 Years and Beyond eBook 1234567891.pdf
waseemalazzeh
 
362 Alec Data Center Solutions-Slysium Data Center-AUH-Adaptaflex.pdf
362 Alec Data Center Solutions-Slysium Data Center-AUH-Adaptaflex.pdf
djiceramil
 
VARICELLA VACCINATION: A POTENTIAL STRATEGY FOR PREVENTING MULTIPLE SCLEROSIS
VARICELLA VACCINATION: A POTENTIAL STRATEGY FOR PREVENTING MULTIPLE SCLEROSIS
ijab2
 
Introduction to sensing and Week-1.pptx
Introduction to sensing and Week-1.pptx
KNaveenKumarECE
 
nnnnnnnnnnnn7777777777777777777777777777777.pptx
nnnnnnnnnnnn7777777777777777777777777777777.pptx
gayathri venkataramani
 
System design handwritten notes guidance
System design handwritten notes guidance
Shabista Imam
 
Industry 4.o the fourth revolutionWeek-2.pptx
Industry 4.o the fourth revolutionWeek-2.pptx
KNaveenKumarECE
 
DESIGN OF REINFORCED CONCRETE ELEMENTS S
DESIGN OF REINFORCED CONCRETE ELEMENTS S
prabhusp8
 
FUNDAMENTALS OF COMPUTER ORGANIZATION AND ARCHITECTURE
FUNDAMENTALS OF COMPUTER ORGANIZATION AND ARCHITECTURE
Shabista Imam
 
Structured Programming with C++ :: Kjell Backman
Structured Programming with C++ :: Kjell Backman
Shabista Imam
 
最新版美国圣莫尼卡学院毕业证(SMC毕业证书)原版定制
最新版美国圣莫尼卡学院毕业证(SMC毕业证书)原版定制
Taqyea
 
A Cluster-Based Trusted Secure Multipath Routing Protocol for Mobile Ad Hoc N...
A Cluster-Based Trusted Secure Multipath Routing Protocol for Mobile Ad Hoc N...
IJCNCJournal
 
Structural Wonderers_new and ancient.pptx
Structural Wonderers_new and ancient.pptx
nikopapa113
 
Ad

Adaptive Mobile Robot Navigation and Mapping.pdf

  • 1. Adaptive Mobile Robot Navigation and Mapping Hans Jacob S. Feder1 , John J. Leonard1 , and Christopher M. Smith2 1 Massachusetts Institute of Technology, 77 Mass Ave., Cambridge, MA 02139 2 Charles Stark Draper Laboratory, 555 Technology Square, Cambridge, MA 02139 [email protected],[email protected],[email protected] Abstract The task of building a map of an unknown environment and concurrently using that map to navigate is a central problem in mobile robotics research. This paper addresses the problem of how to perform concurrent mapping and localization (CML) adaptively using sonar. Stochastic mapping is a feature-based approach to CML that generalizes the extended Kalman lter to incorporate vehicle localization and environmental mapping. We describe an implementation of stochastic mapping that uses a delayed nearest neighbor data association strategy to initialize new features into the map, match measurements to map features, and delete out-of-date features. We introduce a metric for adaptive sensing which is de ned in terms of Fisher information and represents the sum of the areas of the error ellipses of the vehicle and feature estimates in the map. Predicted sensor readings and expected dead-reckoning errors are used to estimate the metric for each potential action of the robot, and the action which yields the lowest cost (i.e., the maximum information) is selected. This technique is demonstrated via simulations, in-air sonar experiments, and underwater sonar experiments. Results are shown for 1) adaptive control of motion and 2) adaptive control of motion and scanning. The vehicle tends to explore selectively di erent objects in the environment. The performance of this adaptive algorithm is shown to be superior to straight-line motion and random motion. 1 Introduction The goal of concurrent mapping and localization (CML) is to build a map of the environment while simultaneously using that map to enhance navigation performance. CML is critical for many eld and service robotics applications. The long-term goal of our research is to develop new To appear in the International Journal of Robotics Research, Special Issue on Field and Service Robotics, July, 1999.
  • 2. methods for CML, enabling autonomous underwater vehicles (AUVs) to navigate in unstructured environments without relying on a priori maps or acoustic beacons. This paper presents a technique for adaptive concurrent mapping and localization and demonstrates that technique using real sonar data. Navigation is essential for successful operation of underwater vehicles in a range of scienti c, commercial, and military applications (Leonard, Bennett, Smith, and Feder 1998). Accurate positioning information is vital for the safety of the vehicle and for the utility of the data it collects. The error growth rates of inertial and dead-reckoning systems available for AUVs are usually unacceptable. Underwater vehicles cannot use the Global Positioning System (GPS) to reset dead-reckoning error growth unless they surface, which is usually undesirable. Positioning systems that use acoustic transponders (Milne 1983) are often expensive and impractical to deploy and calibrate. Navigation algorithms based on existing maps have been proposed (Tuohy, Leonard, Bellingham, Patrikalakis, and Chryssostomidis 1996; Lucido, Popescu, Opderbecke, and Rigaud 1996), but suciently accurate a priori maps are often unobtainable. Adaptive sensing strategies have the potential to save time and maximize the eciency of the concurrent mapping and localization process for an AUV. Energy eciency is one of the most challenging issues in the design of underwater vehicle systems (Bellingham and Willcox 1996). Techniques for building a map of sucient resolution as quickly as possible would be highly bene cial. Survey class AUVs must maintain forward motion for controllability (Belling- ham, Goudey, Consi, Bales, Atwood, Leonard, and Chryssostomidis 1994), hence the ability to adaptively choose a sensing and motion strategy which obtains the most information about the environment is especially important. Sonar is the principle sensor for AUV navigation. Possible sonar systems include mechani- cally and electronically scanned sonars, side-scan sonar and multi-beam bathymetric mapping sonars (Urick 1983). The rate of information obtained from a mechanically scanned sonar is low, making adaptive strategies especially bene cial. Electronically scanned sonars can provide information at very high data rates, but enormous processing loads make real-time implementa- tions dicult. Adaptive techniques can be used to limit sensing to selected regions of interest, dramatically reducing computational requirements. In this paper, adaptive sensing is formulated as the evaluation of di erent actions that the robot can take and the selection of the action that maximizes the amount of information acquired. This general problem has been considered in a variety of contexts (Manyika and Durrant-Whyte 2
  • 3. 1994; Russell and Norvig 1995) but not speci cally for concurrent mapping and localization. CML provides an interesting context within which to address adaptive sensing because of the trade-o between dead-reckoning error and sensor data acquisition. The information gained by observing an environmental feature from multiple vantage points must counteract the rise in uncertainty that results from the motion of the vehicle. Our experiments use two di erent robot systems. One is an inexpensive wheeled land robot equipped with a single rotating sonar. Observations are made of several cylindrical targets whose location is initially unknown to the robot. Although this is a simpli ed scenario, these experiments provide a useful illustration of the adaptive CML process and con rm behavior seen in simulation. The second system is a planar robotic positioning system which moves a sonar within a 9 meter by 3 meter by 1 meter testing tank. The sonar is a mechanically scanned 675 kHz Imagenex model 881 sonar with a 2 degree beam. These characteristics are similar to alternative models used in the marine industry. Testing tank experimentation provides a bridge between simulation and eld AUV systems. Repeatable experiments can be performed under identical conditions; ground truth can be determined to high accuracy. Section 2 reviews previous research in concurrent mapping and localization and adaptive sens- ing. Section 3 develops the theory of adaptive stochastic mapping. Sections 4, 5, and 6 describe testing of the method using simulations, air sonar experiments, and underwater sonar experi- ments. Finally, Section 7 provides concluding remarks and suggestions for future research. 2 Background 2.1 Stochastic mapping Stochastic mapping is a technique for CML that was introduced by Smith, Self, and Cheese- man (1990). In stochastic mapping, a single state vector represents estimates of the locations of the robot and the features in the environment. An associated error covariance matrix represents the uncertainty in these estimates, including the correlations between the vehicle and feature state estimates. As the vehicle moves through its environment taking measurements, the system state vector and covariance matrix are updated using an extended Kalman lter (EKF). As new features are observed, new states are added to the system state vector; the size of the system covariance matrix grows quadratically. Implementing stochastic mapping with real data requires methods for data association and track initiation. Smith, Self, and Cheeseman describe processes 3
  • 4. for adding new features to the map and enforcing known geometric constraints between di erent map features, but do not perform simulations or experiments using these techniques. Moutarlier and Chatila (1989) provided the rst implementation of this type of algorithm with real data. Their implementation uses data from a scanning laser range nder mounted on a wheeled mobile robot operating indoors. They implement a modi ed updating technique, called relocation-fusion, that reduces the e ect of vehicle dynamic model uncertainty. More recently, Chatila and colleagues developed improved implementations for outdoor vehicles in natural en- vironments and investigated new topics such as the decoupling of local and global maps (Betg e- Brezetz, H ebert, Chatila, and Devy 1996; H ebert, Betg e-Brezetz, and Chatila 1996). The number of other researchers who have implemented stochastic mapping with real data has been limited. Using sonar on land robots operating indoors, Rencken (1993) and Chong and Kleeman (1997) implemented variations of stochastic mapping using sonar sensing. Rencken utilized the standard Polaroid ranging system, while Chong and Kleeman used a precision custom sonar array which provided accurate classi cation of geometric primitives. Current implementations of stochastic mapping are limited to ranges of tens of meters and/or durations of a few hours. Stochastic mapping assumes a metrical, feature-based environmental representation, in which objects can be e ectively represented as points in an appropriate parameter space. Other types of representations are possible and have been employed with success. For example, Thrun et al. (1998) has demonstrated highly successful navigation of indoor mobile robots using a com- bination of grid-based (Elfes 1987) and topological (Kuipers and Byun 1991) modeling. For marine robotic systems, the complexity of representing 3D natural environments with geometric models is formidable. Our hypothesis, however, is that salient features can be found and reliably extracted (Medeiros and Carpenter 1996), enabling CML. Exhaustively detailed environmental modeling should not be necessary for autonomous navigation. 2.2 Adaptive sonar sensing Adaptive sensing has been a popular research topic in many di erent areas of robotics. Synony- mous terms that have been used for these investigations include active perception (Bajcsy 1988), active vision (Blake and Yuille 1992), directed sensing (Leonard and Durrant-Whyte 1992), active information gathering (Hager 1990), adaptive sampling (Bellingham and Willcox 1996), sensor management (Manyika and Durrant-Whyte 1994) and limited rationality (Russell and Wefald 4
  • 5. 1995). A common theme that emerges is that adaptive sensing should be formulated as the pro- cess of evaluating di erent sensing actions that the robot can take and choosing the action that maximizes the information acquired by the robot. The challenge in implementing this concept in practice is to develop a methodology for quantifying the expected information for di erent sensing actions and evaluating them in a computationally feasible manner given limited a priori information. Our approach is closest to Manyika (1994), who formulated a normative approach to multi- sensor data fusion management based on Bayesian decision theory (Berger 1985). A utility structure for di erent sensing actions was de ned using entropy (Shannon information) as a metric for maximizing the information in decentralized multi-sensor data fusion. The method was implemented for model-based localization of a mobile robot operating indoors using multiple scanning sonars and an a priori map. Feature location uncertainty and the loss of information due to vehicle motion error, which are encountered in CML, were not explicitly addressed. Examples of the application of adaptive sensing in marine robotics include Singh et al. (1995) and Bellingham and Willcox (1996). Singh formulated an entropic measure of adaptive sensing and implemented it on the Autonomous Benthic Explorer. The implementation was performed using stochastic back-projection, a grid-based modeling technique developed by Stewart for ma- rine sensor data fusion (Stewart 1988). Bellingham and Willcox have investigated optimal survey design for AUVs making observations of dynamic oceanographic phenomena (Willcox, Zhang, Bellingham, and Marshall 1996). Decreased vehicle speeds save power due to the quadratic dependence of drag on velocity, but susceptibility to space-time aliasing is increased. An interesting motivation for adaptive sonar sensing is provided by the behavior of bats and dolphins performing echo-location. For example, dolphins are observed to move their heads from side to side as they discriminate objects (Au 1993). Our hypothesis for this behavior is that sonar is more like touch than vision. A useful analogy may be the manner in which a person navigates through an unknown room in the dark. By reaching out for and establishing contact with walls, tables, and chairs, transitions from one object to the another can be managed as one moves across the room. Whereas man-made sonars tend to use narrow-band waveforms and narrow beam patterns, bat and dolphin sonar systems use broad-band waveforms and relatively wide beam patterns. A broad beam pattern can be bene cial because it provides a greater range of angles over which a sonar can establish and maintain contact by receiving echoes from an environmental feature. The task for concurrent mapping and localization is to integrate the 5
  • 6. information obtained from sonar returns obtained from di erent features as the sensor moves through the environment to estimate both the geometry of the scene and the trajectory of the sensor. 3 Adaptive Delayed Nearest Neighbor Stochastic Mapping This section reviews the theory of stochastic mapping, derives a metric for performing stochastic mapping adaptively, and describes the delayed nearest neighbor method for track initiation and track deletion. 3.1 Stochastic mapping Stochastic mapping is simply a special way of organizing the states in an extended Kalman lter (EKF) for the purpose of feature relative navigation. An EKF is a computational ecient esti- mator for the states of a given nonlinear dynamic system and assuming that the noise processes are well modeled by Gaussian noise and the errors due to linearization of the nonlinear system are small. That is, the EKF for a system provides an estimate of both the state of the system, say ^ x, as well as an estimate of the covariance of the state, say P. The covariance of the state provides an estimate of the con dence in the estimate ^ x. A dynamic system is described by a dynamic model, F which de nes the evolution of the system (robot) through time, and an observation model, H, which relates observations (measurements) to the state of the system (robot). Our implementation of stochastic mapping is based on Smith, Self, and Cheeseman (1990) and is described in more detail in Appendix A. We use xk = ^ xkjk + ^ xk to represent the system state vector x = [xT r xT 1 : : : xT N ]T , where xr and x1 : : : xN are the robot and feature states, respectively, ^ x is the estimated state vector, and ~ x is the error estimate. Measurements are taken every t = kT seconds, where T is a constant period and k is a discrete time index. The measurements are related to the state by a state-to-observation transform zk = H(xk; dz), where dz is the measurement noise process. The a posteriori PDF of xk, given a set of measurements Zk fzk; : : : ; z1g, can be found from Bayes rule as p(xkjZk) = p(zkjxk)p(xkjZk?1) p(zkjZk?1) : (1) 6
  • 7. The distribution p(zkjxk) is de ned as the likelihood function using the Likelihood Princi- ple (Berger 1985). By knowing p(xkjZk) we can form an estimate ^ xk of the state. In order to perform CML, the state transition (dynamic model) xk+1 = F(xk; uk; dx), in addition to the observation transformation H, must be known. Here uk is the control input (action) at time k and dx is the process noise. If the stochastic processes dz and dx are assumed to be independent, white and Gaussian, and the state transition F and observation model H are both linear, the Bayes Least Square (BLS) estimator, ^ xk+1jk = F(EfxkjZkg; u), will be an ecient estimator of x. However, in the general problem of CML, neither the dynamic model nor the observation model will be linear. Thus, an ecient estimator cannot be obtained. Further, propagating the system's covariance through nonlinear equations does not guarantee that the statistics will be conserved. Thus, in order to circumvent the problem of transformation of nonlinearities, the nonlinear models of F and H are approximated through a Taylor series expansion, keeping only the rst two terms. That is, F(xk; uk; dx) F(^ xkjk; uk; dx) + Fx ~ xkjk ) ^ xk+1jk EfF(xk; uk; dx)jZkg; (2) where Fx = dF(x; uk)=dxjx=^ xkjk is the Jacobian of dynamic model F with respect to x, evaluated at ^ xkjk. Similarly, the observation model is approximated by H(xk; dz) H(^ xkjk?1; dz) + Hx~ xkjk?1; (3) where Hx = dH(x)=dxjx=^ xkjk?1 is the Jacobian of the observation model H with respect to x evaluated at ^ xkjk?1. These approximations are equivalent to the assumption that the estimated mean at the previous time step, ^ xkjk, is approximately equal to the system state, xk at the previous time step. Once these linearization have been performed and assuming that the ap- proximation error is small, we can now nd the BLS for this linearized system. The result is equivalent to that of the extended Kalman lter (Gelb 1973; Bar-Shalom and Li 1993; Uhlmann, S. J. Julier and M. Csorba 1997). The estimate error covariance, Pkjk = Ef~ xk ~ xT k g, of the system takes the form Pkjk = 2 6 6 6 4 Prr Pr1 PrN P1r P11 P1N ... ... ... ... PNr PN1 PNN 3 7 7 7 5 kjk : (4) 7
  • 8. The sub-matrices, Prr, Pri and Pii are the vehicle-vehicle, vehicle-feature, and feature-feature covariances respectively. This form is signi cant as it allows us to separate the uncertainty associated with the robot, Prr, as well as the individual features, Pii, and this separation will be used in obtaining a measure of the information in our system. Thus, the robot and the map are represented by a single state vector, x, with an associated estimate error covariance P at each time step. As new features are added, x and P increase in size. In our implementation, we denote the vehicle's state by xr = [xr yr ]T and the control input to the vehicle is given by u = [ux uy u]. For the dynamic model, F, we use xrk+1 =f(xk; uk) + G(uk)dx =xrk + Tkuk + G(uk)dx; (5) where G(uk) scales the noise process dx as a function of the distance traveled, that is G(uk) = 2 4 x p u2 xk + u2 yk 0 0 0 y p u2 xk + u2 yk 0 0 0 3 5 (6) and Tk = 2 4 cos(k) ?sin(k) 0 sin(k) cos(k) 0 0 0 1 3 5: (7) The in this expression comes from the robot position xrk. The covariance of dx is for convenience set equal to the identity matrix, as any scaling is placed in G. The matrix G is diagonal by assumption. If the correlations between the noise processes in the x, y, and direction were known, it should be included. However, this is a minor point and this form has has shown to work well for our systems. Equation (5) does not take into account any of the vehicle's real dynamics. Thus the model is very general. However, if the vehicle's dynamics were known, they could be used to ensure that the vehicle movements remain realistic. In our experiments and simulation, we will constrain the robot to move only a certain distance each time step, thus making ux and uy dependent. For the observation model we use zk = H(xk; dz) = h(xk) + dz; (8) 8
  • 9. where zk is the observation vector of range and bearing measurements. The observation model, h, de nes the nonlinear coordinate transformation from state to observation coordinates. The stochastic process dz, is assumed to be white, Gaussian, and independent of x0 and dx, and has covariance R. In our experiments, a sector including an object is scanned over multiple adjacent angles to yield multiple sonar returns that originate from the object. That is, isolated features and smooth surfaces appear as circular arcs, also known as regions of constant depth (Leonard and Durrant-Whyte 1992), in sonar scans. For the specular returns, an improved estimate of the range and bearing to the object is obtaining by grouping sets of adjacent returns with nearly the same range and taking the mode of this set of angles as the bearing measurement, and the range associated with this bearing as the range measurement to the object (Moran, Leonard, and Chryssostomidis 1997). Rough surfaces (Bozma and Kuc 1991) yield additional returns at high angles of incidence. These occur frequently in the data from our underwater sonar, but are not processed in the experiments reported in this paper. In our work, all features are modeled as point features. More complex object can be described by a nite set of parameters and estimated by using the associated observation model for this parameterization in stochastic mapping. For instance, the modeling of planes, corners, cylinders and edges are relatively straight-forward (Leonard and Durrant-Whyte 1992). 3.2 Adaptation step The goal of adaptive CML is to determine the optimal action given the current knowledge of the environment, sensors, and robot dynamics in a CML framework. To provide an intuitive understanding of this goal, imagine an underwater vehicles with no navigational uncertainty estimating the position of a feature as depicted in gure 1. As can be seen from this simple example of gure, it is clearly advantageous for the vehicle to take the next measurement from a new direction. By doing so, more information about the feature is extracted, and thus a better estimate can be obtained. The essence of our model is to determine the action that maximizes the total knowledge (that is, the information) about the system in the presence of measurement and navigational uncertainty. By adaptively choosing actions, we mean that the next action of the robot is chosen so as to maximize the robot's information about its location and all the features' locations (the 9
  • 10. a) Actual feature position. Estimated feature position b) 1 2 Figure 1: An autonomous underwater vehicle with no navigational uncertainty estimating the position of an environmental feature. The ellipses denotes the certainty (error ellipse) to which the feature position is known. a) The initial estimate of the feature. b) Given two possible new locations, ① and ②, to make the next observation from, position ① allows for a more accurate estimate of the feature's position, as the measurement of the feature is taken from a new angle, resulting in the tilted error ellipse associated with this measurement. Combining this (tilted error ellipse) with the error ellipse from a), the small dotted circular error ellipse is achieved. Taking the next observation from location ② only yields a slightly smaller error ellipse than that of the previous time step in a), shown by the dotted ellipse. 10
  • 11. map). The amount of information contained in Equation (1) can be quanti ed in various ways. Fisher information is of particular interest and is related to the estimate of the state x given the observations. The Fisher information for a random parameter is de ned (Bar-Shalom and Fortmann 1988) as the covariance of the gradient of the total log-probability, that is, Ikjk Ef(rx lnp(x; Zk))(rx lnp(x; Zk))T g = ?EfrxrT x lnp(xk; Zk)g: (9) Where rx = [ d dx1 d dxN ]T is the gradient operator with respect to x = [x1 xN ], thus rxrT x is the Hessian matrix. Strictly speaking, the Fisher information is only de ned in the non-Bayesian view of estimation (Bar-Shalom and Fortmann 1988), that is, in the estimation of a non-random parameter. In the Bayesian approach to estimation, the parameter is random with a (prior) probability density function. However, as in the non-Bayesian de nition of the Fisher information, the inverse of the the Fisher information for random parameter estimation is the Cram er-Rao lower bound for the mean square error. Applying Bayes rule to Equation (9), that is, p(xk; Zk) = p(xk; z; Zk?1) = p(xk; Zk?1)p(zjxk), and noting the linearity of the gradient and expectation operators we obtain a Fisher information update: EfrxrT x lnp(xk; Zk)g =EfrxrT x lnp(xk; Zk?1)g+ E rxrT x lnp(zkjxk) , Ikjk =Ikjk?1 ?E rxrT x lnp(zkjxk) : (10) The rst term on the right represents the Fisher information, Ikjk?1, before the last measurement, while the second term corresponds to the additional information gained by measurement zk. If we obtain an ecient estimator for x the Fisher information will simply be given by the inverse of the error covariance of the state. Thus, under the assumption that Equation (2) and Equation (3) holds, the inverse of error covariance P will be an estimate for the Fisher information of the system, that is I P?1 : Under this assumption, and using Equation (5) the transformation M relating Ikjk to Ik+1jk can be found. By combining this with Equation (10) we have a recursive Fisher information update, 11
which depends on the actions u (inputs). M will generally depend on the state x_{k+1} as well, which is not available. By invoking the assumption of Equation (2), x_{k+1} is replaced by the estimate \hat{x}_{k+1|k}. Thus, M can be used to give us the optimal action u_k to take given our model and assumptions. By optimal, we here mean that, under the assumption that the EKF is the best estimator for our state, the action u that maximizes the knowledge (information) of the system given the current knowledge of the system can be determined. This is not necessarily the optimal action for the actual system. At each time step the algorithm seeks to determine the transformation M and, from this, infer the optimal action u_k. Combining the vehicle prediction and EKF update steps of stochastic mapping, M is given by

I_{k+1|k+1} = (F_x I_{k|k}^{-1} F_x^T + G(u_k) G(u_k)^T)^{-1} + H_x^T R^{-1} H_x,   (11)

where F_x and H_x are the Jacobians of f and h with respect to x, evaluated at \hat{x}_{k|k} and \hat{x}_{k+1|k} respectively. The first term on the right of Equation (11) represents the previous information of the system, as well as the loss of information that occurs due to the action u_k. The second term represents the additional information gained by the system due to observations after the action u_k. As this quantity is a function of x_{k+1}, which is unknown, we approximate x_{k+1} by \hat{x}_{k+1|k} under the assumption of Equation (2). The action that maximizes the information can be expressed as

u_k = \arg\max_u I_{k+1|k+1} = \arg\min_u P_{k+1|k+1}.   (12)

The information is a matrix, and we require a metric to quantify the information. Further, it is desired that this metric have a simple physical interpretation. For concurrent mapping and localization, it is desirable to use a metric that makes explicit the tradeoff between uncertainty in feature locations and uncertainty in the vehicle position estimate. To accomplish this, we define the metric by a cost function C(P), which gives the total area of all the error ellipses (i.e., highest probability density regions) and is thus a measure of our confidence in our map and robot position. That is,

C(P) = \prod_j \sqrt{\lambda_j(P_{rr})} + \sum_{i=1}^{N} \prod_j \sqrt{\lambda_j(P_{ii})} = \sqrt{\det(P_{rr})} + \sum_{i=1}^{N} \sqrt{\det(P_{ii})},   (13)

where \lambda_j(\cdot) is the j-th eigenvalue of its argument and \det is the determinant of its argument.
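To make Equation (13) concrete, the cost can be computed directly from the block structure of P. The following sketch (our own illustrative Python/NumPy code, not from the paper) assumes the state is ordered as a three-element vehicle pose followed by two-element point features:

```python
import numpy as np

def cost_of(P, n_features):
    """Sketch of the cost metric C(P) of Equation (13).

    Assumes the state ordering [x_r, y_r, phi, x_1, y_1, ..., x_N, y_N],
    so P[0:3, 0:3] is the vehicle block P_rr and each P_ii is a 2x2 block.
    """
    cost = np.sqrt(np.linalg.det(P[0:3, 0:3]))               # vehicle uncertainty term
    for i in range(n_features):
        j = 3 + 2 * i                                        # start index of feature i
        cost += np.sqrt(np.linalg.det(P[j:j + 2, j:j + 2]))  # feature error-ellipse area
    return cost
```

The square root of each block determinant equals the product of the square roots of its eigenvalues, which is why the two forms in Equation (13) agree.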
Turning back to figure 1b, Equation (13) minimizes the error ellipses, thus yielding a lower cost for position ① than for position ②, which agrees with intuition. The action to take is obtained by evaluating Equation (12) over the action space of the robot using the metric in Equation (13). This yields an adaptive stochastic mapping algorithm. This procedure optimizes the information locally at each time step; thus the adaptation step performs a local optimization. Notice that the action space of the robot is not limited to motion control inputs. Other actions and constraints can readily be included in the control input u, such as which measurements should be taken by the sonar. For this, we control the set of angles U which are scanned by the sonar. In principle, Equation (12) can be solved in symbolic form using Lagrange multipliers. However, the symbolic matrix inversion required in Equation (11) is very tedious and results in a very large number of terms. Further, as more features are added, the number of calculations required for the inversion scales exponentially. Numerical methods, however, can be used to evaluate Equation (12) quite efficiently. The experiments below do not use a numerical optimization technique such as the simplex method to perform this optimization; however, this could readily be incorporated in the future.

3.3 Data association, track initiation and track deletion

Section 3.1 outlined the stochastic mapping approach to feature-based navigation. In order to employ stochastic mapping in a real-world scenario, we have to be able to extract features from the environment. Sonars are notorious for exhibiting drop-outs, false returns, no-returns, and noise. Thus, addressing the problem of data association is critical for the validity of the observation model (Equation (8)) and for employing a stochastic mapping based approach to CML. For this purpose, data association, initiation of new feature estimates (i.e., track initiation), and the removal of out-dated feature estimates (track deletion) for the mobile robot are described here. The problem of data association is that of determining the source of a measurement. Obtaining correct data association can pose a significant challenge when making observations of environmental features (Smith 1998). In our implementation, data association is performed using a gated nearest neighbor technique (Bar-Shalom and Fortmann 1988). The initiation of new feature tracks is performed using a delayed nearest neighbor (DNN) initiator. The DNN initiator is similar in spirit to the logic-based multiple target track initiator described by Bar-Shalom and Fortmann (1988).
One important difference in our method is that the vehicle's position is uncertain, and this uncertainty has to be included when performing gating and finding the nearest neighbor. It is assumed that a sonar return originates from not more than one feature. In order to include the vehicle's uncertainty when performing data association with a nearest neighbor gating technique, we need to transform the vehicle's uncertainty into measurement space and add it to the measurement uncertainty. We assume that the true measurement of feature i at time k, conditioned upon Z^{k-1}, is normally distributed in measurement space. Further, it is assumed that the transformation from vehicle space to measurement space retains the Gaussianity of the estimated state. Under these assumptions, one may define the innovation covariance matrix S_i for feature i as

S_i = H_{x_i} \begin{bmatrix} P_{rr} & P_{ri} \\ P_{ir} & P_{ii} \end{bmatrix}_{k|k} H_{x_i}^T + R, \quad \text{with} \quad H_{x_i} = \left. \frac{d h_i([x_r \; x_i])}{d [x_r \; x_i]} \right|_{[x_r \; x_i] = [\hat{x}_{r,k+1|k} \; \hat{x}_{i,k+1|k}]},   (14)

where h_i is the observation model for feature i. H_{x_i} is the linearized transformation from vehicle space to measurement space. The nearest neighbor gating is performed in innovation space. That is, defining the innovation \nu = z - \hat{z}_i, the validation region, or gate, is given by

\nu^T S_i^{-1} \nu \le \gamma.   (15)

The value of the parameter \gamma is obtained from the \chi^2 distribution. For a system with 2 degrees of freedom, a value of \gamma = 9.0 yields the region of minimum volume that contains the measurement with a probability of 98.9% (Bar-Shalom and Fortmann 1988). This validation procedure defines where a measurement is expected to be found. If a measurement falls outside this region, it is considered too unlikely to have arisen from feature i. If several measurements gate with the same feature i, the closest (i.e., most probable) one is chosen. In performing track initiation, all measurements that have not been matched with any feature over the last N time steps are stored. That is, any measurement that was not matched to a known feature is a potential new feature. At each time step, a search for clusters of more than M \le N measurements over this set of unmatched measurements is performed. For each of these clusters, a new feature track is initiated. A cluster is defined as at most one measurement at each time step that gates, according to Equation (15), with all other measurements in the cluster. For our systems, where the probability of false returns is relatively low and the probability of detection is relatively high, values of M = 2 or 3 and N = M + 1 are sufficient.
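As an illustration of the gating test of Equations (14) and (15), the following sketch (our own code; the names are hypothetical and not from the paper) checks each predicted feature's validation gate and returns the nearest gated match, leaving unmatched measurements as candidates for the DNN initiator:

```python
import numpy as np

GATE = 9.0  # gamma for 2 degrees of freedom (~98.9% validation region)

def mahalanobis_sq(z, z_pred, S):
    """Squared normalized innovation nu^T S^{-1} nu of Equation (15)."""
    nu = z - z_pred
    nu[1] = (nu[1] + np.pi) % (2.0 * np.pi) - np.pi   # wrap the bearing residual
    return float(nu @ np.linalg.solve(S, nu))

def nearest_gated_feature(z, predictions):
    """predictions: list of (feature_id, z_pred, S), with S from Equation (14).

    Returns the id of the closest feature whose gate contains z, or None,
    in which case z is stored as a candidate for track initiation."""
    best_id, best_d2 = None, GATE
    for fid, z_pred, S in predictions:
        d2 = mahalanobis_sq(z, z_pred, S)
        if d2 <= best_d2:
            best_id, best_d2 = fid, d2
    return best_id
```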
A track deletion capability is also incorporated to provide a limited capability to operate in dynamic environments. When a map feature is predicted to be visible but is not observed for several time steps in a row, it is removed from the map (Leonard, Cox, and Durrant-Whyte 1992). This is motivated by assuming a probability of detection P_D < 1. Thus, if the feature has not been observed over the last r time steps during which an observation of the feature was expected, the probability of the feature being at the expected location is (1 - P_D)^r, assuming that the observations are independent. Thus, setting a threshold on r is equivalent to setting a threshold on the probability that the feature still exists at the predicted location. With the incorporation of these data association methods, the complete adaptive delayed nearest neighbor stochastic mapping algorithm is summarized as follows:

1. state projection: the system state (vehicle and features) is projected to the next time step using the state transition model f, along with the control input u_k;
2. gating: the closest feature to each new measurement is determined and gated with that feature; non-matching measurements are stored;
3. state update: re-observed features update the vehicle and feature tracks using the EKF;
4. new feature generation: new features are initialized using the delayed nearest neighbor data association strategy;
5. old feature removal: out-of-date features are deleted; and
6. adaptation step: determine the next action to take, u_k, by optimizing Equation (13).

An outline of the algorithm for performing adaptive augmented stochastic mapping is shown in figure 2.

4 Simulation Results

The algorithm described above has been extensively tested in simulation. In these simulations, as well as in the experiments to follow, a numerical approximation was performed by evaluating Equation (13) at a fixed number of points in the action space. The robot was constrained to move a distance of 0, 10, or 20 cm at each time step, and could only turn in increments of 22.5°. Further, the vehicle was constrained not to get closer than 40 cm to the features (PVC tubes), as the sonar signal in this range becomes unreliable.
Figure 2: Structure of the adaptive delayed nearest neighbor stochastic mapping algorithm.

In all these simulations, the range error was assumed to have a standard deviation of 2 cm, while the standard deviation for the bearing was 10°. Further, the sonar could move in increments of 0.9° between each measurement. Thus, a complete scan of 360° consisted of 400 sonar returns. The standard deviation for the vehicle odometry was set to 5% of the distance traveled. The angle uncertainty was set to 1°. These parameters were chosen because they resemble the situation for the air sonar experiments in Section 5. Two different types of simulation are described.

4.1 Adaptive vehicle motion

In these simulations, it was assumed that the robot stopped and took a complete 360° scan of the environment before continuing. The algorithm chose where to move adaptively. The algorithm took advantage of the fact that the measurements have different certainty in range and bearing, thus forming an error ellipse. Notice, however, that if the observations of a target have nearly equal standard deviation in all directions, the robot will not move, as the loss of information from odometric error is larger than the information gained by taking a measurement from a different location. Figure 3 shows three typical paths of the robot in the presence of two features (8.4 cm radius PVC tubes) as a result of adaptation. The robot started at (0,0), and the paths (solid, dashed, and dashed-dotted lines) occurred with approximately equal frequency. The dotted lines around the PVC tubes denote the constraints placed on the robot for how close it could come
to the PVC tubes while still obtaining valid sonar returns. (This is a limitation of the standard Polaroid sonar driver circuit.) The resulting 95% confidence ellipses are drawn for the middle path, for the estimated position of the center of the features and for the robot's position. The robot's true position is indicated by a `+', while the estimated position is shown by a separate marker. The robot stopped moving after about 15 time steps, as more information would be lost than gained by moving. Also notice that the robot moved larger distances between scans in the beginning of the run than at the end; more information was gained by moving and taking a measurement from a different location in the beginning than towards the end of a run. Figure 4 shows the average, over 2000 runs, of the total cost (as defined by Equation (13)) of the system under adaptation as a percentage of the total cost of the system when moving randomly (solid line) or moving along the negative x-axis (dashed line) without adaptation. Moving along the negative x-axis is a worst-case scenario; moving in a straight line from the initial position in a direction between the two features would be practically identical to the adaptive run, and thus represents an upper bound for straight-line motion. In all the runs without adaptation, the robot moved a distance of 10 cm at each time step. As can be seen, the adaptation procedure obtains a cost of about 60% of the non-adaptive strategies after about 8 time steps. The lower cost signifies that the adaptive strategy has obtained more information about the environment and has thus produced a more accurate estimate of the robot's position as well as the features' positions. The random motion slowly started catching up after the 8th time step, and by the 50th time step it had achieved a cost about 15% higher than that of the adaptive strategy. When moving along the negative x-axis, the cost actually starts increasing after about the 15th time step, as the robot was so far away from the features that it lost more information by moving than it gained by sensing at each time step. This is due to the poor angular resolution of the sonar. Figure 5 shows the sensor trajectory for a simulation involving eight objects. Figure 6 shows the east and north vehicle location errors and associated error bounds as a function of time during the simulation. After an initial loop around four of the objects is executed, the error bounds converge and the sensor wanders back and forth over a small area.
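The local optimization used in these simulations amounts to enumerating the discretized action set (move 0, 10, or 20 cm; turn in 22.5° increments), predicting the covariance each action would produce, and keeping the action with the lowest cost. A rough sketch under our own naming assumptions — `predict_covariance` is a hypothetical stand-in for the prediction and update of Equations (5) and (11), and `cost_of` for Equation (13):

```python
import itertools
import numpy as np

MOVE_DISTANCES = [0.0, 0.10, 0.20]                     # metres, as in the simulations
TURN_ANGLES = np.deg2rad(np.arange(0.0, 360.0, 22.5))  # 22.5 degree increments

def choose_action(P, x_hat, predict_covariance, cost_of):
    """Greedy adaptation step: minimize the predicted cost over the action set.

    predict_covariance(P, x_hat, action) is assumed to return the covariance
    P_{k+1|k+1} expected after executing `action` and re-observing the
    currently visible features."""
    best_action, best_cost = None, np.inf
    for dist, turn in itertools.product(MOVE_DISTANCES, TURN_ANGLES):
        action = (dist, turn)
        P_next = predict_covariance(P, x_hat, action)
        c = cost_of(P_next)                 # e.g. a wrapper around the C(P) sketch above
        if c < best_cost:
            best_action, best_cost = action, c
    return best_action
```

Constraints such as the 40 cm standoff distance would simply exclude infeasible actions from the enumeration before the cost is evaluated.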
Figure 3: Three typical paths taken by the robot in simulations of adaptive CML in the presence of two PVC tubes (filled circles) of radius 8.4 cm. Out of 1000 simulated runs, the robot chose to go to the left, to the right, and through the middle with about equal frequency. The 95% confidence level of the map for the middle path is shown by the ellipses. The dotted circles define how close the robot is allowed to be to the PVC tubes. (See text for symbols.)

Figure 4: Simulation result showing the cost function C(P) when performing adaptation divided by the cost function when not performing adaptation. Two cases in which no adaptation was performed are shown: the solid line denotes the case when the robot moved randomly, while the dashed line denotes the case when the robot moved along the negative x-axis.
Figure 5: Path of the vehicle performing adaptive motion among multiple objects (east vs. north position, in meters).

Figure 6: Position estimate and 3σ bounds for adaptive motion among multiple features. As can be seen, a steady state is reached after about 500 seconds.

4.2 Adaptive scanning and motion

In using sonar for mapping an environment, one is limited by the relatively slow speed with which measurements can be acquired. Thus, we imposed the additional constraint that the sonar could only scan an angle of 15° (that is, 50% more than the measurement bearing standard deviation) at each time step. The algorithm was required to decide where to direct the attention of the robot; it thereby adaptively decided where to look as well as where to move. This was implemented in the framework outlined above by adding an additional action u to be controlled and solving Equation (13) given the constraints of the scan angle.
                                     C = Ce             C = Cr
Strategy                         Returns   Steps    Returns   Steps
Adaptive sensing and motion:         100       6        300      18
Adaptive motion:                    1600       4       5200      13
No adaptation, random motion:       3200       8      20000      50
No adaptation, line motion:         3600       9          ∞       ∞

Table 1: Resources needed to achieve a given cost C in simulation. Ce = 0.038 m² is the minimum cost achieved when moving in one direction during the experiment. Cr = 0.0019 m² is the minimum cost achieved in the simulations when moving randomly.

To compare the simulation results to the experiments and the previous simulations, the simulations were, as before, conducted over 50 time steps. However, under the adaptive strategy, only a 15° scanning angle was chosen at each time step. This is equivalent to obtaining 17 sonar returns. Without adaptation, a complete scan was taken at every time step, generating 400 sonar returns. Figure 7 shows the relative cost of adaptive sensing and motion with and without adaptation. The solid line denotes the case when the robot moved randomly without adaptation, while the dashed line denotes the case when the robot moved along the negative x-axis without adaptation, as a function of the number of sonar returns. The dotted vertical line indicates the point where the adaptive case was terminated, as 50 time steps had been reached. As can be seen from Figure 7, the adaptive method obtained a map with high confidence after relatively few sonar returns. After obtaining a total of 20000 returns, moving randomly and moving along the negative x-axis only achieved 93% and 33%, respectively, of the confidence level that the adaptive method obtained with 850 sonar returns. Thus, the adaptive strategy required fewer measurements and achieved a higher confidence level than any of the strategies without adaptation. The actual vehicle motion was very similar to that shown in Figure 3. Table 1 compares all the strategies on the basis of how many sonar returns and how many time steps are required before the map reaches a specified confidence level. As expected, the adaptive sensing and motion strategy required the fewest sonar returns to reach a given confidence level. However, it used more time steps than the adaptive motion strategy alone, as under the adaptive motion and sensing strategy only one feature was measured at each time step, while under the adaptive motion strategy, both features were measured at each time step.
Figure 7: Advantage of adaptive sensing and motion control. The cost function when performing adaptation divided by the cost function when not performing adaptation is plotted against the number of sonar returns. Two cases in which no adaptation was performed are shown: the solid line denotes the case when the robot moved randomly, while the dashed line denotes the case when the robot moved along the negative x-axis. The vertical dotted line indicates the point at which the adaptive method had completed its 50 time steps and terminated.

5 Air Sonar Experimental Results

The algorithm was implemented on a Nomad Scout robot (Technologies 1998) equipped with a 50 kHz Polaroid 6500 series ultrasonic sensor mounted on a stepping motor that rotated the sensor in 0.9° increments, as pictured in Figure 8. The error in the sensor and the vehicle odometry was assumed to be the same as that used in the simulations. The same constraints were employed as well. The sonar returns were assumed to be mainly specular; therefore, regions of constant depth were extracted from the scans (Leonard and Durrant-Whyte 1992). In these experiments, tracks were initiated from the first scan only, rather than using the DNN track initiator. Figure 9 shows a rough model of the room and the configuration of the robot and features (PVC tubes of known radius) in the experimental setup. The dots indicate individual sonar returns of the Polaroid sensor from a complete scan of the environment. Each complete scan of the environment consisted of 400 sonar returns.
Figure 8: The Nomad Scout robot with the Polaroid ultrasonic sensor mounted on top.

Figure 9: The returns from the Polaroid sensor (dots) with a rough room model superimposed. The robot is drawn as a triangle; the PVC tubes are indicated by small filled circles.

In these experiments, 15 motion steps were performed in each run. Figure 10 shows the advantage of adaptation for a representative Nomad run, similar to the simulation of Figure 4. As can be seen from this figure, the advantage of performing adaptation is clear. Further, the experimental result was well within the one-standard-deviation bound of the simulation prediction averaged over 2000 runs.
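The region-of-constant-depth processing used in these experiments (Section 3.1) groups adjacent returns of nearly constant range and takes the mode of their bearings. A rough sketch, with thresholds and data layout that are our own illustrative assumptions rather than the values used in the paper:

```python
import numpy as np

def extract_rcd_features(ranges, bearings, range_tol=0.02, min_returns=3):
    """Group adjacent returns with nearly constant range into circular-arc
    (region of constant depth) features; return one (range, bearing) per arc.

    ranges and bearings are arrays ordered by scan angle; range_tol is in metres."""
    ranges = np.asarray(ranges)
    bearings = np.asarray(bearings)
    features, start = [], 0
    for k in range(1, len(ranges) + 1):
        group_ends = (k == len(ranges)) or abs(ranges[k] - ranges[k - 1]) > range_tol
        if group_ends:
            if k - start >= min_returns:
                grp_r, grp_b = ranges[start:k], bearings[start:k]
                # with finely quantized scan angles, the median bearing is a
                # reasonable stand-in for the mode used in the paper
                b = np.median(grp_b)
                r = grp_r[np.argmin(np.abs(grp_b - b))]   # range at that bearing
                features.append((float(r), float(b)))
            start = k
    return features
```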
Figure 10: Comparison of the cost function C(P) with and without adaptation for a representative Nomad experiment. The solid line is the ratio of the cost when performing adaptation divided by the cost of moving in a straight line along the negative x-axis. The dashed line is the average simulation result over 2000 runs, with the dotted lines denoting the one-standard-deviation bounds.

Strategy, C = Ce                   Returns   Steps
Adaptive sensing and motion:           100       6
Adaptive motion:                      2000       5
No adaptation, line motion:           6000      15

Table 2: Number of sonar returns and time steps required to achieve a cost of Ce = 0.038 m² or less for a representative experiment. The cost Ce was chosen to be the minimum cost achieved during the no-adaptation strategy.

Figure 11 shows the advantage of performing adaptive motion and sensing over moving in a straight line along the negative x-axis for the Nomad Scout robot (solid line). The simulated result is shown by a dashed line, along with the one-standard-deviation bounds for 2000 simulated runs. As can be seen, the experimental cost ratio was within one standard deviation of the simulated value. The adaptive strategy produced a high-confidence map after relatively few measurements, consistent with the simulations in Figure 7. The non-adaptive method only reached a confidence level of 50% of the adaptive level, even after more than 20 times as many measurements were taken. Table 2 shows the number of time steps and number of sonar returns required under each strategy to reach a map with some maximum specified cost C.
Figure 11: Advantage of adaptive sensing and motion in a representative Nomad experiment. The solid line is the cost of adaptive sensing and motion divided by the cost of moving in a straight line along the negative x-axis. The dashed line is the average of 2000 simulated runs, and the dotted lines show the one-standard-deviation bounds. The dotted vertical line indicates the time at which the adaptive case terminated, as 15 time steps had been completed.

As expected, the adaptive sensing and motion strategy required orders of magnitude fewer sonar returns than any of the other strategies. However, the adaptive motion strategy used fewer time steps to reach a specified confidence level. Comparing this table for the experimental results to the simulation results of Table 1, we see that they are consistent.

6 Underwater Sonar Experimental Results

The second type of experiment conducted to test the adaptive stochastic mapping algorithm used a narrow-beam 675 kHz sector-scan sonar mounted on a planar robotic positioning system, as shown in Figure 12. The positioning system was controlled by a Compumotor AT6450 controller card. The system was mounted on a 3.0 by 9.0 by 1.0 meter testing tank. The controller software and the CML code were integrated on a PC to obtain a closed-loop system for performing CML. At each time step, Equation (13) was minimized over the action space of the robot to choose the motion and scanning angles of the sensor.
Figure 12: The planar robotic positioning system and sector-scan sonar used in the underwater sonar experiments. The water in the tank is approximately 1 meter deep. The transducer was translated and rotated in a horizontal plane midway through the water column.

The experiments were designed to simulate an underwater vehicle equipped with a sonar that can scan in any direction relative to the vehicle at each time step. Conducting complete 360° scans of the environment at every time step is slow with a mechanically scanned sonar and computationally expensive with an electronically scanned sonar. For these experiments, we envisioned a vehicle equipped with two sonars: a forward-looking sonar for obstacle avoidance, and a sonar that can scan at any angle for localization purposes. The forward-looking sonar was assumed to scan an angle of 30°. The scanning sonar was limited to scan over U = [-15°, 15°] at each time step. The scanning was performed in intervals of 0.15°. The sonars were modeled to have a standard deviation of 10.0° in bearing and 2.0 cm in range. Between each scan by the sonar, the vehicle could move between 15 cm and 30 cm. The lower limit signifies the minimum speed at which the vehicle could move before losing controllability, while the upper limit signifies the maximum speed of the vehicle. The vehicle was constrained to turn only in increments of 22.5°. Further, we assumed that the vehicle was equipped with a dead-reckoning system with an accuracy of 10% of distance traveled and an accuracy of 1.0° in heading. Figure 13 shows a typical scan taken by the sonar from the origin. The crosses show individual sonar returns.
Figure 13: The returns from the underwater sonar for a 360° scan of the tank from the origin. The crosses show individual returns. The small circles identify the positions of the features (PVC tubes), with a dotted circle drawn 5 cm outside each one to signify the minimum allowable distance between the sonar and the features. The sonar was mounted on the carriage of the positioning system, which served as a simulated AUV. The location of the sonar is shown by a triangle. The outline of the tank is shown in gray.
The circles show the features (PVC tubes). The dotted circles around the features signify the minimum allowable distance between the vehicle and the features. The triangle shows the position of the sensor. Circular arc features were extracted from the sonar scans using the technique described by Leonard and Durrant-Whyte (1992). Figure 14 shows the sensor trajectory for a representative underwater sonar experiment with the complete algorithm. The sensor started at the origin and moved around the tank using the adaptive stochastic mapping algorithm to decide where to move and where to scan. Based on minimization of the cost function in Equation (13), the vehicle selected one target to scan to provide localization information. In addition, at each time step, the sonar was also scanned in front of the vehicle for obstacle avoidance. Figure 15 shows the x and y errors for the experiment and the associated error bounds. No sonar measurements were obtained from approximately time step 75 to time step 85 due to a communication error between the sonar head and the host PC. Figure 16 shows the cost as a function of time. Solid vertical lines in Figures 15 and 16 indicate the time steps when features of the environment were removed. Figure 17 plots the vehicle position error versus time for the stochastic mapping algorithm in comparison to dead-reckoning. Figures 18 through 20 show the results of a non-adaptive experiment in which the vehicle moved in a straight line in the negative x direction, with the two objects present throughout the experiment. Without adaptive motion, the observability of the features was degraded and the y estimate appears to diverge. Comparing figure 19 with figure 15, we observe that the non-adaptive strategy had a maximum 3σ confidence level of about 0.8 m and 0.2 m in the x and y positions respectively, while the maximum 3σ confidence level under adaptation was about 0.17 m and 0.15 m in the y and x positions respectively. A further indication of the advantage of the adaptive approach can be seen by comparing figure 20 with figure 17. An interesting behavior that was observed is that, over a range of different operating conditions, the system selectively explores different objects in the environment. For example, Figure 21 shows the sensor path for two different underwater sonar experiments under similar conditions in which an exploratory behavior can be observed.

7 Conclusion

This paper has considered the problem of adaptive sensing for feature-based concurrent mapping and localization by autonomous underwater vehicles. An adaptive sensing metric has been incorporated within a stochastic mapping algorithm and tested via air and underwater experiments.
Figure 14: Time evolution of the sensor trajectory for an adaptive stochastic mapping experiment with the underwater sonar; the four panels show the map at time steps 2, 42, 91, and 175. The feature on the right was added at the 40th time step, and the two left features were removed towards the end of the experiment. The vehicle started at the origin and moved adaptively through the environment to investigate different features in turn, optimizing Equation (13) at each time step. The filled circles designate the feature locations and are surrounded by dotted circles which designate the standoff distance used by an obstacle avoidance routine. Similarly, the dashed-dotted rectangle designates the standoff distance to the tank walls. The dashed-dotted line represents the scanning region selected by the vehicle at the last time step. The large triangle designates the vehicle's position. The vehicle was constrained from moving outside the dashed-dotted lines to avoid collisions with the walls of the tank. Sonar returns originating from outside the dashed-dotted lines were rejected.
Figure 15: Position errors in the x and y directions and 3σ confidence bounds for the adaptive underwater experiment.

Figure 16: Cost as a function of time for the adaptive underwater experiment. The cost increased at approximately the 40th time step, when the third object was inserted into the tank and was observed for the first time. During the time interval between the two dashed vertical lines, no sonar data was obtained due to a serial communications problem between the PC and the sonar head. The two solid vertical lines designate the time steps during which features were removed from the tank, to simulate a dynamic environment.
Figure 17: Vehicle position error versus time for dead-reckoning (dashed line) and the adaptive DNNSM algorithm (solid line). During the time interval between the two dashed vertical lines, no sonar data was obtained due to a serial communications problem between the PC and the sonar head. The two solid vertical lines designate the time steps when features were removed from the tank, to simulate a dynamic environment.

Figure 18: Sensor trajectory for a non-adaptive experiment in which the vehicle moved in a straight line. While accurate location information was obtained in the x direction, the CML process diverged in the y direction. The dashed-dotted rectangle indicates the standoff distance from the tank walls used by an obstacle avoidance routine.
Figure 19: Position errors in the x and y directions and 3σ confidence bounds for the non-adaptive underwater experiment.

Figure 20: Vehicle error (3σ error ellipse area, in m²) as a function of time for the non-adaptive underwater experiment.
Figure 21: Two representative stochastic mapping experiments that exhibited adaptive behavior. In each figure, the solid line shows the estimated path of the sensor and the dashed line shows the actual path. The triangle indicates the final position of the sensor. The filled disks indicate the locations of the features (PVC tubes). The ellipses around the features and the sensor are the 3σ contours, that is, the 99% highest confidence regions. The sonar view is indicated by the dashed-dotted line. In the left figure, the sensor had a scanning angle of [-30°, 30°]. The sensor started at (0,0) and then adaptively determined the path to take as well as the direction to scan, resulting in exploratory behavior. The sensor first moves over to one of the objects, turns, and moves around the second. The experiment illustrated in the right figure was similar, with the exception that the sonar was only able to scan an area of [-7.5°, 7.5°] at each time step. Again, we can see that exploratory behavior emerged as the sensor attempted to maximize the information it obtained about the environment. In these experiments, tracks were initiated from the first scan only, rather than using the DNN track initiator. In addition, the robot was constrained to turn a maximum of 30° at each time step, in 15° increments.
This is the first time, to our knowledge, that a feature-based concurrent mapping and localization algorithm has been implemented with underwater sonar data. We have introduced a method for performing adaptive concurrent mapping and localization (CML) in a priori unknown environments for any number of features. The adaptive method was based on choosing actions that, given the current knowledge, would maximize the information gained in the next measurement. This approach can easily be implemented as an extra step in a stochastic mapping algorithm for CML. The validity and usefulness of the approach were verified both in simulation and in experiments with air and underwater sonar data. Based on the air sonar experiments, we feel confident in the accuracy of the simulation in predicting experimental outcomes. For example, the three typical paths for the robot shown in Figure 3 are representative of both adaptive simulations and real data experiments, and the plots comparing the performance with and without adaptation are similar. The advantage of performing adaptive CML is notable when only adapting the motion of the vehicle (Figures 4 and 10). However, more substantial gains are obtained when performing adaptive motion and sensing. This is apparent from Figures 7 and 11, where the number of sonar returns required to obtain a given confidence level is an order of magnitude fewer than when not performing adaptation. The adaptive sensing technique employed here is a local method: at each cycle, only the next action of the robot is considered. By predicting over an expanded time horizon, one can formulate global adaptive mapping and navigation. For example, one can consider how to determine the best path between the robot's current position and a desired goal position. In this case, the space of possible actions grows tremendously, and a computationally efficient method for searching the action space will be essential. In future research, we will integrate adaptive sensing within a hybrid estimation framework for CML in development in our laboratory (Smith and Leonard 1997; Smith 1998). In addition, we will perform experiments with more complex objects and develop methods that incorporate additional criteria for adaptation to address effects such as occlusion, rough surface scattering, and multiple reflections. Concurrent mapping and localization is a compelling topic for investigation because there are so many open issues for future research. Key questions include:
- How can representation of complex, natural environments be incorporated within a metrically accurate, feature-based mapping approach?
- How can we reliably extract features from sensor data?
- What are the trade-offs between global versus local, absolute versus relative, and metrical versus topological mapping?
- How can we select which types of environmental features are most useful as navigation landmarks?
- How can data association ambiguity be effectively addressed?
- How can computational complexity be overcome in extending CML to long duration missions over large areas?
- In what situations can long-term bounded position errors be achieved?
- How can CML be extended to dynamic environments, in which previously mapped environmental objects can move or disappear, and new features can appear in previously mapped areas?
- How can CML be integrated with path planning and obstacle avoidance?

While this paper has focused on an adaptive sensing metric for improved state estimation performance in CML, we believe that other types of sensing strategies can help answer many of these questions. We have shown that adaptive sensing can save time and energy, and reduce the amount of data that needs to be acquired. However, we believe that adaptive strategies also have the potential to improve state estimation robustness, ease data association ambiguity, prevent divergence, and facilitate recovery from errors. Investigation of adaptive strategies for these purposes will be addressed in future research.

Acknowledgments

This research has been funded in part by the Naval Undersea Warfare Center, Newport, RI, U.S.A. H.J.S.F. acknowledges the support of NFR (Norwegian Research Council) through grant 109338/410. J.J.L. acknowledges the support of the Henry L. and Grace Doherty Assistant
Professorship in Ocean Utilization and NSF CAREER Award BES-9733040. Further, the authors would like to thank Seth Lloyd for fruitful discussions and comments. The authors would also like to thank Jan Meyer, Jong H. Lim, Marlene Cohen, and Tom Consi for their help with the Nomad hardware integration, Prof. J. Milgram for the use of the testing tank, Paul Dambra for the installation of the positioning system, and Imagenex Technology Corp. and Paul Newman for their help with the sonar integration.

A Stochastic Mapping

This appendix provides more detail on the stochastic mapping method, based on the work of Smith et al. (1990). Stochastic mapping is simply a special way of organizing the states in an extended Kalman filter, and of providing a consistent way of adding new states to the system as more features are observed and estimated. In a Kalman filter, through the combination of a dynamic model and an observation model, an estimated state \hat{x} and associated estimated covariance P are produced. In this paper, the dynamic model for the robot was given by

x_{r,k+1|k} = f(x_{r,k}, u_k) + G(u_k) d_x = x_{r,k|k} + T_k u_k + G(u_k) d_x,

where, as in the main text, k denotes the time index. Since all features are assumed to be static, there are no feature dynamics; thus

x_{i,k+1} = x_{i,k}.

The noise scaling matrix G is defined in Equation (6) and the transformation matrix T is defined in Equation (7). As in the main text, the state vector x = [x_r \; x_1 \; \ldots \; x_N] consists of the robot's state, x_r, as well as the N feature states denoted by x_1, \ldots, x_N. The control input is given by u and the process noise by d_x. As the robot moves around in its environment, the features are observed through the robot's sensor. The relation between the current state and the observations is given by the observation model

z_k = h(x_k) + d_z,   (16)

where z is the vector of range and bearing measurements to the features in the environment and h is the nonlinear transformation from the Cartesian coordinates of x to the polar coordinates of z.
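For a point feature, the observation model h of Equation (16) is just the Cartesian-to-polar transformation relative to the vehicle pose. A minimal sketch (our own code, assuming the vehicle state [x_r, y_r, phi]):

```python
import numpy as np

def h_point_feature(x_r, y_r, phi, x_i, y_i):
    """Predicted range and bearing of point feature i as seen from the vehicle."""
    dx, dy = x_i - x_r, y_i - y_r
    r = np.hypot(dx, dy)                             # range
    theta = np.arctan2(dy, dx) - phi                 # bearing relative to heading
    theta = (theta + np.pi) % (2.0 * np.pi) - np.pi  # wrap to (-pi, pi]
    return np.array([r, theta])
```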
As the robot moves around in its environment, the system evolves through 1) robot displacement, 2) new feature integration, and 3) re-observation of features. Each of these steps of stochastic mapping is presented below.

Robot displacement

When the robot moves a distance given by u_k, the robot state x_r estimated at time step k+1 is given by taking the expectation of the dynamic model given above:

\hat{x}_{r,k+1|k} = \hat{x}_{r,k|k} + \hat{T}_{k|k} u_k \quad \text{and} \quad \hat{x}_{i,k+1|k} = \hat{x}_{i,k|k}.

Under the assumption of Equation (2), that this approximation error is small, the Fisher information is updated through the linearized system by

I_{k+1|k} = (F_x I_{k|k}^{-1} F_x^T + G(u_k) G(u_k)^T)^{-1}.

Here F_x is the Jacobian of f with respect to x, evaluated at \hat{x}_{k|k}.

New feature integration

If the robot observes a new feature z_{new} = [r \; \theta] with respect to the robot's reference frame, a new feature state \hat{x}_{N+1} is estimated and incorporated by

\hat{x}_{N+1} = l(\hat{x}_{k|k}, z_{new}) = \begin{bmatrix} \hat{x}_r + r \cos(\theta + \hat{\phi}) \\ \hat{y}_r + r \sin(\theta + \hat{\phi}) \end{bmatrix}.

The new feature is integrated into the map by adding this new state to x and I. That is,

\hat{x}_{k+1|k} = \begin{bmatrix} \hat{x}_{k|k} \\ \hat{x}_{N+1} \end{bmatrix}, \quad P_{N+1,N+1} = L_{x_r} P_{rr} L_{x_r}^T + L_z R L_z^T, \quad P_{N+1,i} = P_{i,N+1}^T = L_{x_r} P_{ri},

where L_{x_r} and L_z are the Jacobians of l with respect to the robot state x_r and to z_{new}, evaluated at \hat{x}_{r,k|k} and z_{new} respectively. The covariance of the state of the new feature is given a priori by R. The structure of the covariance for the system, P, is given in Equation (4), and its inverse is a good estimate for the Fisher information of the system, that is, P = I^{-1}.
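The new-feature step above can be sketched as follows (our own illustrative code; the state ordering and 2×2 feature blocks are assumptions consistent with the earlier sketches):

```python
import numpy as np

def add_feature(x_hat, P, z_new, R):
    """Augment the state with a feature initialized from z_new = [r, theta]."""
    xr, yr, phi = x_hat[0], x_hat[1], x_hat[2]
    r, theta = z_new
    a = phi + theta
    x_new = np.array([xr + r * np.cos(a), yr + r * np.sin(a)])

    # Jacobians of l with respect to the vehicle state and the measurement
    Lxr = np.array([[1.0, 0.0, -r * np.sin(a)],
                    [0.0, 1.0,  r * np.cos(a)]])
    Lz = np.array([[np.cos(a), -r * np.sin(a)],
                   [np.sin(a),  r * np.cos(a)]])

    n = P.shape[0]
    P_aug = np.zeros((n + 2, n + 2))
    P_aug[:n, :n] = P
    P_aug[n:, n:] = Lxr @ P[:3, :3] @ Lxr.T + Lz @ R @ Lz.T  # P_{N+1,N+1}
    P_aug[n:, :n] = Lxr @ P[:3, :n]                          # cross-covariances
    P_aug[:n, n:] = P_aug[n:, :n].T
    return np.concatenate([x_hat, x_new]), P_aug
```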
Re-observation of features

When a feature i is re-observed, we use the EKF to update the vehicle's state and the map. Introducing the definitions

\tilde{x}_i = x_{i,k+1|k} - x_{r,k+1|k}, \qquad \tilde{y}_i = y_{i,k+1|k} - y_{r,k+1|k},

the observation model h (Equation (16)) for feature i takes the form

z_{i,k} = \begin{bmatrix} r_{i,k} \\ \theta_{i,k} \end{bmatrix} = \begin{bmatrix} \sqrt{\tilde{x}_i^2 + \tilde{y}_i^2} \\ \tan^{-1}(\tilde{y}_i / \tilde{x}_i) - \phi_k \end{bmatrix} + d_{z_i} = h_i(x_k) + d_{z_i}.

The noise process d_{z_i} is assumed to be white and Gaussian with covariance R_i. If the n features i_1, \ldots, i_n are re-observed, the observation model becomes

z_k = \begin{bmatrix} z_{i_1,k} \\ \vdots \\ z_{i_n,k} \end{bmatrix}, \quad h = \begin{bmatrix} h_{i_1} \\ \vdots \\ h_{i_n} \end{bmatrix}, \quad R = \begin{bmatrix} R_{i_1} & & 0 \\ & \ddots & \\ 0 & & R_{i_n} \end{bmatrix},

with the Jacobian of h given by H_x = H_x(\hat{x}_{k+1|k}) = dh(x)/dx |_{x = \hat{x}_{k+1|k}}. These matrices are used in the update step of the extended Kalman filter as follows:

\hat{x}_{k+1|k+1} = \hat{x}_{k+1|k} + K_{k+1}(z_{k+1} - h(\hat{x}_{k+1|k})),
I_{k+1|k+1} = I_{k+1|k} + H_x^T R^{-1} H_x,

where K_{k+1} is the extended Kalman filter gain, given by

K_{k+1} = I_{k+1|k}^{-1} H_x^T (H_x I_{k+1|k}^{-1} H_x^T + R)^{-1}.

Computational Issues

To prevent numerical instability in the implementation of the extended Kalman filter, it is advisable to employ the square-root filter or the Joseph form covariance update (Bar-Shalom and Li 1993). In these implementations the Joseph form was used, which ensures the positive definiteness of the covariance matrix in the EKF. In order to prevent the EKF from becoming overconfident, extra process noise was added to the feature positions. That is, at each time step, each feature was assumed to have process noise of the same order of magnitude as d_x. This was placed in the G matrix.
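A minimal sketch of the measurement update with the Joseph-form covariance update mentioned above (our own illustrative code; H is the stacked Jacobian H_x and R the stacked measurement covariance):

```python
import numpy as np

def ekf_update_joseph(x_hat, P, z, z_pred, H, R):
    """EKF update with the Joseph-form covariance update, which preserves
    the symmetry and positive definiteness of P."""
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x_new = x_hat + K @ (z - z_pred)           # state correction
    I = np.eye(P.shape[0])
    P_new = (I - K @ H) @ P @ (I - K @ H).T + K @ R @ K.T   # Joseph form
    return x_new, P_new
```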
List of symbols

F    Dynamic model.
H    Observation model.
M    Transformation relating the Fisher information between time steps recursively.
F_x  Jacobian of the dynamic model f.
G    Process noise scaling matrix; dependent on u.
H_x  Jacobian of the state-to-observation transformation h.
I    Fisher information matrix.
K    Kalman filter gain matrix.
L    Jacobian of l with respect to x_r, evaluated at x_{r,k|k}.
N    Number of elements in the state vector. Number of features in the map.
P    System covariance matrix.
R    Observation covariance matrix.
T    Transformation used in the compounding operation.
d_x  Process noise; assumed to be white and Gaussian.
d_z  Observation noise process; assumed to be white and Gaussian.
h    State-to-observation transformation.
i    Subscript index identifying a feature number.
k    Time index (shown in subscript).
l    Transformation used for entering a new element into the state vector.
p()  A probability density function.
u    Actions, or system control input.
U    Set of sonar scanning angles.
x    Actual system state vector.
x̂    Estimated system state vector.
x̂_{k|k}  Estimated state at time step k given all the information up to time step k.
x_r  Vehicle state: location (x_r, y_r) and heading φ; that is, x_r = [x_r y_r φ].
z    Measurement vector (range and bearing measurements).
ˆ    Signifies an estimated variable.

References

Au, W. (1993). The Sonar of Dolphins. New York: Springer-Verlag.

Bajcsy, R. (1988, August). Active perception. Proceedings of the IEEE 76(8), 996–1005.

Bar-Shalom, Y. and T. E. Fortmann (1988). Tracking and Data Association. Academic Press.

Bar-Shalom, Y. and X.-R. Li (1993). Estimation and Tracking: Principles, Techniques, and Software. Artech House.

Bellingham, J. G., C. A. Goudey, T. R. Consi, J. W. Bales, D. K. Atwood, J. J. Leonard, and C. Chryssostomidis (1994). A second generation survey AUV. In IEEE Conference on Autonomous Underwater Vehicles, Cambridge, MA.

Bellingham, J. G. and J. S. Willcox (1996). Optimizing AUV oceanographic surveys. In IEEE Conference on Autonomous Underwater Vehicles, Monterey, CA.
Berger, J. O. (1985). Statistical Decision Theory and Bayesian Analysis (Second ed.). Springer-Verlag.

Betgé-Brezetz, S., P. Hébert, R. Chatila, and M. Devy (1996, April). Uncertain map making in natural environments. In Proc. IEEE Int. Conf. Robotics and Automation, pp. 1048–1053.

Blake, A. and A. Yuille (Eds.) (1992). Active Vision. MIT Press.

Bozma, O. and R. Kuc (1991, June). Characterizing pulses reflected from rough surfaces using ultrasound. J. Acoustical Society of America 89(6), 2519–2531.

Chong, K. S. and L. Kleeman (1997). Sonar feature map building for a mobile robot. In Proc. IEEE Int. Conf. Robotics and Automation.

Elfes, A. (1987, June). Sonar-based real-world mapping and navigation. IEEE Journal of Robotics and Automation RA-3(3), 249–265.

Gelb, A. C. (1973). Applied Optimal Estimation. The MIT Press.

Hager, G. (1990). Task-directed Sensor Fusion and Planning: A Computational Approach. Boston: Kluwer Academic Publishers.

Hébert, P., S. Betgé-Brezetz, and R. Chatila (1996, April). Decoupling odometry and exteroceptive perception in building a global world map of a mobile robot: The use of local maps. In Proc. IEEE Int. Conf. Robotics and Automation, pp. 757–764.

Kuipers, B. J. and Y. Byun (1991). A robot exploration and mapping strategy based on a semantic hierarchy of spatial representations. Robotics and Autonomous Systems, 47–63.

Leonard, J. J., A. A. Bennett, C. M. Smith, and H. J. S. Feder (1998, May). Autonomous underwater vehicle navigation. In IEEE ICRA Workshop on Navigation of Outdoor Autonomous Vehicles, Leuven, Belgium.

Leonard, J. J., I. J. Cox, and H. F. Durrant-Whyte (1992, August). Dynamic map building for an autonomous mobile robot. Int. J. Robotics Research 11(4), 286–298.

Leonard, J. J. and H. F. Durrant-Whyte (1992). Directed Sonar Sensing for Mobile Robot Navigation. Boston: Kluwer Academic Publishers.

Lucido, L., B. Popescu, J. Opderbecke, and V. Rigaud (1996, May). Segmentation of bathymetric profiles and terrain matching for underwater vehicle navigation. In Proceedings of the Second Annual World Automation Conference, Montpellier, France.

Manyika, J. S. and H. F. Durrant-Whyte (1994). Data Fusion and Sensor Management: A Decentralized Information-Theoretic Approach. New York: Ellis Horwood.

Medeiros, M. and R. Carpenter (1996). High resolution array signal processing for AUVs. In AUV 96, pp. 10–15.

Milne, P. H. (1983). Underwater Acoustic Positioning Systems. London: E. F. N. Spon.

Moran, B. A., J. J. Leonard, and C. Chryssostomidis (1997). Curved shape reconstruction using multiple hypothesis tracking. IEEE J. Ocean Engineering 22(4), 625–638.

Moutarlier, P. and R. Chatila (1989). Stochastic multisensory data fusion for mobile robot location and environment modeling. In 5th Int. Symposium on Robotics Research, Tokyo.

Rencken, W. D. (1993). Concurrent localisation and map building for mobile robots using ultrasonic sensors. In Proc. IEEE Int. Workshop on Intelligent Robots and Systems, Yokohama, Japan, pp. 2192–2197.
Russell, S. and P. Norvig (1995). Artificial Intelligence: A Modern Approach. Prentice Hall.

Russell, S. and E. Wefald (1995). Do the Right Thing: Studies in Limited Rationality. MIT Press.

Singh, H. (1995). An entropic framework for AUV sensor modelling. Ph.D. thesis, Massachusetts Institute of Technology.

Smith, C. M. (1998). Integrating Mapping and Navigation. Ph.D. thesis, Massachusetts Institute of Technology.

Smith, C. M. and J. J. Leonard (1997). A multiple hypothesis approach to concurrent mapping and localization for autonomous underwater vehicles. In International Conference on Field and Service Robotics, Sydney, Australia.

Smith, R., M. Self, and P. Cheeseman (1990). Estimating uncertain spatial relationships in robotics. In I. Cox and G. Wilfong (Eds.), Autonomous Robot Vehicles. Springer-Verlag.

Stewart, W. K. (1988). Multisensor Modeling Underwater with Uncertain Information. Ph.D. thesis, Massachusetts Institute of Technology.

Technologies, N. (1998). http://www.robots.com.

Thrun, S., D. Fox, and W. Burgard (1998). A probabilistic approach to concurrent mapping and localization for mobile robots. Machine Learning 31, 29–53.

Tuohy, S. T., J. J. Leonard, J. G. Bellingham, N. M. Patrikalakis, and C. Chryssostomidis (1996, March). Map based navigation for autonomous underwater vehicles. International Journal of Offshore and Polar Engineering 6(1), 9–18.

Uhlmann, J. K., S. J. Julier, and M. Csorba (1997). Nondivergent simultaneous map building and localisation using covariance intersection. In Navigation and Control Technologies for Unmanned Systems II.

Urick, R. (1983). Principles of Underwater Sound. New York: McGraw-Hill.

Willcox, J. S., Y. Zhang, J. G. Bellingham, and J. Marshall (1996). AUV survey design applied to oceanic deep convection. In IEEE Oceans, pp. 949–954.