Abstract – A key problem of a mobile robot system is how to localize itself and detect objects in the workspace. In this paper, a multiple-sensor-based robot localization and object pose estimation method is presented. First, optical encoders and an odometry model are utilized to determine the pose of the mobile robot in the workspace with respect to the global coordinate system. Next, a CCD camera is used as a passive sensor to find an object (a box) in the environment, including the specific vertical surfaces of the box. By identifying and tracking color blobs which are attached to the center of each vertical surface of the box, the robot rotates and adjusts its base pose to move the color blob into the center of the camera view, in order to make sure that the box is in the range of the laser scanner. Finally, a laser range finder, which is mounted on top of the mobile robot, is activated to detect and compute the distances and angles between the laser source and the laser contact surface on the box. Based on the information acquired in this manner, the global pose of the robot and the box can be represented using homogeneous transformation matrices. This approach is validated using the Microsoft Robotics Studio simulation environment.

Index Terms - Mobile robot, laser, color blob tracking, localization.

I. INTRODUCTION

Mobile robot localization and object pose (position and orientation) estimation in a working environment have been central research activities in mobile robotics. This problem has received significant research attention in the past decade because of the success of mobile robotic implementations such as vacuum cleaning robots, delivery robots, and elder care robots, which rely heavily on accurate localization and object detection capabilities. Solving the mobile robot localization problem requires addressing two main issues [1]: the robot must have a representation of the environment, and the robot must have a representation of its belief regarding its pose in this environment. Sensors are the basis for addressing both issues. Many off-the-shelf sensors (e.g., GPS, compasses, gyroscopes, and ultrasonic sensors) available to mobile robots are introduced in [1], together with their basic principles and performance limitations. Based on this introduction, ultrasonic sensors [2], goniometers [3], laser range finders [4], and CCD cameras [5] are the sensors most commonly applied in mobile robot localization projects to gather high-precision information on the robot's pose. Sonar is fast and inexpensive but usually rather crude, whereas laser scanning is active, accurate, and widely applied in mobile robotics. Vision systems are passive and of high resolution, and are the most promising sensors for future generations of mobile robots. In [6], a 3-dimensional (3-D) laser scanner is applied to perceive the traversability affordance and used to wander in a room filled with different types of objects (spheres, cylinders, and boxes). The results obtained through training show that a mobile robot can wander around avoiding collision with non-traversable objects, while moving over traversable objects by rolling them out of its way. A vision-based mobile robot localization and mapping algorithm which uses SIFT (scale-invariant image features) is applied for mobile robot localization and map building in [7]. The experiments there show that visual landmarks are robustly matched and the pose of the robot is estimated in a 3-D map. Previous work on robot localization thus indicates that laser scanners and CCD cameras are two promising sensors for this purpose.

The work presented in this paper is part of a mobile robot box pushing project, in which a group of intelligent mobile robots detect boxes, find suitable paths for transportation, clear obstacles along the paths, and move the boxes to goal locations. This paper addresses the robot localization and box pose estimation problems in a computer simulation environment, and will contribute to a box pushing project in a real working environment. Figure 1 presents the general scheme of mobile robot localization and object pose estimation in this project. Successful localization and object detection involve three steps: perception, in which the robot must interpret its sensor data to extract meaningful information; prediction, in which the robot determines its global pose based on this information; and matching and pose update, in which the robot detects objects, computes their poses with respect to a local coordinate system, and updates their global poses.

In this paper, we propose a mobile robot localization and object pose estimation method for the mobile robot box pushing project using multiple sensors. The organization of this paper is as follows:
• Global pose estimation
• Object tracking
• Box pose estimation
• Experimental results

First, optical encoders and an odometry model are utilized to determine the pose of a mobile robot with respect to a global coordinate system. Next, a CCD camera, which is a passive sensor, is used to find boxes in the environment as well as the vertical surfaces of the boxes. By identifying and tracking the color blobs which are attached to the center of each surface of the boxes, the robot rotates and adjusts its base pose to move the detected color blob into the center of the camera view. Finally, a laser range finder, which is mounted on top of the mobile robot, is activated to measure the distance and the angle between the laser source and the laser contact surface on the box. Based on the information acquired in this manner, a homogeneous transformation matrix is applied to represent the global pose of the robot and the box. The approach is validated using the Microsoft Robotics Studio simulation environment.

[Fig. 1 General scheme of mobile robot localization and object detection: encoder measurements drive the prediction step (e.g., odometry); camera and laser observations (perception) provide raw data or features; these are matched against a database/knowledgebase; and the position is then updated.]

II. GLOBAL POSE ESTIMATION

The first step of this project is to determine the pose of the mobile robot with respect to the workspace as it moves in the environment. The estimation of the mobile robot's pose is a fundamental problem, which can be roughly divided into two classes [7]: methods for keeping track of the robot's pose and methods for global pose estimation. Much of the research carried out to date has concentrated on the first class, which assumes that the initial pose of the robot is known. A commonly used method for global pose estimation in this field is the odometry model, which determines the pose of a robot relative to a starting point during wheeled vehicle navigation.

Figure 2 shows a movement of a differential-drive robot, as used in this project. Suppose that the initial pose of the robot is completely known. Then, real-time pose information of the mobile robot can be calculated using rotation measurements of the two wheels. The overall estimation procedure is outlined below.

The location and orientation of the mobile robot in the global coordinate system are represented by the vector

p = [x  y  θ]^T                                                      (1)

[Fig. 2 Motion of a differential-drive robot]

For a differential-drive robot, the pose can be estimated from the starting pose by integrating the travel distances over each fixed sampling interval Δt of the sensors, using

Δx = Δs cos(θ + Δθ/2)                                                (2)
Δy = Δs sin(θ + Δθ/2)                                                (3)
Δθ = (Δsr − Δsl)/d                                                   (4)
Δs = √(Δx² + Δy²) = (Δsr + Δsl)/2                                    (5)

Here, Δx and Δy are the distances traveled in the last sampling interval along the x and y directions, respectively; Δθ is the travel angle within the last sampling interval; Δsl and Δsr are the travel distances of the left and the right wheels, respectively; and d is the distance between the two wheels, which is a constant for a given robot. Therefore, the updated pose p′ for each interval can be found using

p′ = [x′  y′  θ′]^T = [x  y  θ]^T + [Δs cos(θ + Δθ/2)   Δs sin(θ + Δθ/2)   Δθ]^T      (6)

By substituting equations (4) and (5) into (6), p′ can also be computed directly from the wheel travel distances:

p′ = [x  y  θ]^T + [ ((Δsl + Δsr)/2) cos(θ + (Δsr − Δsl)/(2d))   ((Δsl + Δsr)/2) sin(θ + (Δsr − Δsl)/(2d))   (Δsr − Δsl)/d ]^T      (7)

In this manner, the global pose of the mobile robot can be found by evaluating equation (7) during each interval of sensor measurement.
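As a concrete illustration of equations (2)–(7), the following C# sketch updates the robot pose from the wheel travel distances measured by the optical encoders during one sampling interval. It is a minimal sketch, not the implementation used in this project; the type and member names (Pose, Odometry.Update) are illustrative, and the wheel travel distances are assumed to have already been converted from encoder counts to millimetres.

using System;

// Minimal odometry sketch for a differential-drive robot, following equations (2)-(7).
public struct Pose
{
    public double X;      // mm, global frame
    public double Y;      // mm, global frame
    public double Theta;  // rad, heading in the global frame
}

public static class Odometry
{
    // dsLeft, dsRight: left/right wheel travel distances (mm) in the last interval.
    // d: distance between the two wheels (mm), constant for a given robot.
    public static Pose Update(Pose p, double dsLeft, double dsRight, double d)
    {
        double ds = (dsLeft + dsRight) / 2.0;        // equation (5)
        double dTheta = (dsRight - dsLeft) / d;      // equation (4)
        double midHeading = p.Theta + dTheta / 2.0;  // heading at the interval midpoint

        return new Pose
        {
            X = p.X + ds * Math.Cos(midHeading),     // equation (2)
            Y = p.Y + ds * Math.Sin(midHeading),     // equation (3)
            Theta = p.Theta + dTheta                 // third row of equation (7)
        };
    }
}

Calling Update once per sampling interval Δt accumulates the pose exactly as in equation (7); in practice the heading would also be wrapped to (−π, π].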
III. COLOR BLOB TRACKING

The second step of this project concerns accurately tracking or detecting the objects (boxes) by a color blob vision tracking system. Vision-based automated object detection has played a significant role in industrial and service applications. Studies [8, 9] have focused on detecting objects efficiently by using color, shape, size, and texture features.
However, there are a number of problems that arise when using these methods to process real-world images, because detection methods based on low-level pixel features cannot meet the requirements under different conditions and environments. Although it is possible to improve the detection accuracy by combining these approaches with concepts from human vision, it is difficult to automate the process, and the result is therefore not very useful. Although various methods have been considered for processing real-world images, their performance is not yet adequate for common practical applications.

A fast color-blob tracking algorithm [10] has been developed for this project, which can effectively detect and track the different color blobs attached to each vertical surface of a box. The detailed process of color blob tracking in this project is shown in Figure 3 (a). The original image acquired by the CCD camera is first cleaned of disturbances by transferring it from the RGB (red, green, and blue) color space to the HSI (hue, saturation, and intensity) color space and discarding its saturation and intensity components.

[Fig. 3 (a) Color blob tracking procedure; (b) Camera view, with the center line of the frame marked]

In the next step, a type of statistical feature filtering is employed to remove the colors that are not related to the sample color blobs (5×5 pixel templates). For a 2-D image with i × j pixels, the average hue value h̄ of the corresponding color blob and its standard deviation σ can be calculated by the following equations:

h̄ = Σ_{i=1..n} Σ_{j=1..n} h(i, j) / n²                              (8)

σ = √( Σ_{i=1..n} Σ_{j=1..n} (h(i, j) − h̄)² / (n² − 1) )            (9)

Here, h(i, j) represents the hue value of the original pixel (i, j). By executing the "If…else" logic shown below, the statistical feature filtering is completed:

For (each pixel in the original image)
    If (its hue value h is within the set
        { h | h1 − 1.2σ1 ≤ h ≤ h1 + 1.2σ1 or
              h2 − 1.2σ2 ≤ h ≤ h2 + 1.2σ2 or …
              hi − 1.2σi ≤ h ≤ hi + 1.2σi or …
              hk − 1.2σk ≤ h ≤ hk + 1.2σk } )
    Then (it will not be changed)
    Else (its hue value will be replaced with 0)
Loop
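In an implementation, this filtering can be expressed directly; the C# sketch below assumes the hue channel is stored as a 2-D double array and that the mean hue and standard deviation of each reference color blob have been obtained with equations (8) and (9). It is a sketch under those assumptions, not the project's own code.

using System;

// Statistical feature filtering: keep a pixel's hue only if it lies within
// 1.2 standard deviations of one of the k reference blob hues; otherwise
// replace it with 0, as in the "If...else" logic above.
public static class HueFilter
{
    // Equations (8) and (9) for an n-by-n template of hue values.
    public static (double Mean, double Sigma) BlobStatistics(double[,] template)
    {
        int n = template.GetLength(0);
        double sum = 0.0;
        foreach (double h in template) sum += h;
        double mean = sum / (n * n);

        double ss = 0.0;
        foreach (double h in template) ss += (h - mean) * (h - mean);
        return (mean, Math.Sqrt(ss / (n * n - 1)));
    }

    // hue: 2-D hue image; blobMeans[i], blobSigmas[i] describe the i-th reference blob.
    public static void Apply(double[,] hue, double[] blobMeans, double[] blobSigmas)
    {
        for (int r = 0; r < hue.GetLength(0); r++)
        {
            for (int c = 0; c < hue.GetLength(1); c++)
            {
                bool keep = false;
                for (int i = 0; i < blobMeans.Length && !keep; i++)
                {
                    keep = hue[r, c] >= blobMeans[i] - 1.2 * blobSigmas[i] &&
                           hue[r, c] <= blobMeans[i] + 1.2 * blobSigmas[i];
                }
                if (!keep)
                {
                    hue[r, c] = 0.0;   // "Else (its hue value will be replaced with 0)"
                }
            }
        }
    }
}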
intensity components. Finally, the robot rotates its base to adjust the view of the
image in order to make sure that the detected color blob is
approximately located on the center line of the camera view.
By doing so it guarantees that the robot is heading towards
the color label and the box in the range of the laser scanner.
In this project, 4 different color circle labels (red, green,
blue, yellow) are attached on the four vertical surfaces of the
box. By using different color labels on different surfaces, the
specific surface toward which the robot is heading can be
Center Line
determined as well at the same time.
IV. BOX POSE ESTIMATION
(a) (b)
Fig. 3 (a) Color blob tracking procedure; (b) Camera view A. Relative Pose Estimation
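This search can be transcribed almost literally; the following C# sketch assumes the filtered hue image H and a 5×5 template T are plain 2-D arrays, and returns the window position with the smallest mean squared hue difference. Names are illustrative rather than taken from the project code.

// Slide an n-by-n color-blob template T over the filtered hue image H and
// return the position minimizing the mean squared difference, as in the
// pseudocode above.
public static class BlobSearch
{
    public static (int Row, int Col, double Distance) FindBlob(double[,] H, double[,] T)
    {
        int n = T.GetLength(0);                    // template size (5 in this project)
        double minDistance = double.MaxValue;      // the pseudocode starts from 100000
        int bestRow = 0, bestCol = 0;

        for (int i = 0; i <= H.GetLength(0) - n; i++)
        {
            for (int j = 0; j <= H.GetLength(1) - n; j++)
            {
                double distance = 0.0;
                for (int k = 0; k < n; k++)
                    for (int l = 0; l < n; l++)
                    {
                        double diff = H[i + k, j + l] - T[k, l];
                        distance += diff * diff;
                    }
                distance /= n * n;                 // mean squared hue difference

                if (distance < minDistance)
                {
                    minDistance = distance;
                    bestRow = i;
                    bestCol = j;
                }
            }
        }
        return (bestRow, bestCol, minDistance);
    }
}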
Finally, the robot rotates its base to adjust the view so that the detected color blob is approximately located on the center line of the camera view. Doing so guarantees that the robot is heading towards the color label and that the box is in the range of the laser scanner.

In this project, four circular labels of different colors (red, green, blue, and yellow) are attached to the four vertical surfaces of the box. By using a different color label on each surface, the specific surface toward which the robot is heading can be determined at the same time.

IV. BOX POSE ESTIMATION

A. Relative Pose Estimation

The final step of this project involves locating the box in the workspace with respect to the global coordinate system. In order to achieve this objective, we first need to know the pose of the box with respect to the coordinate system attached to the mobile robot (the local coordinate system). The laser range finder is used in this project for relative pose estimation. It is based on a SICK LMS 200 2-D scanner, which has a horizontal range of 180° with a maximum angular resolution of 0.5°. This device produces a range estimate based on the time needed for the light to reach the target and return [1]. Figure 4 (a) indicates how it is used to measure range. The sensor transmits light at a known frequency and measures the phase shift between the transmitted and reflected signals. Therefore, the distance between the emission source and the target surface can be determined using D = λθ/(4π), where θ is the electronically measured phase difference between the transmitted and reflected light beams and λ is the wavelength of the modulated signal.
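As a worked example of this relation (the modulation wavelength below is an assumed illustrative value, not a SICK LMS 200 specification): with λ = 20 m and a measured phase difference of θ = π/2 rad, the range is D = λθ/(4π) = (20 m × π/2)/(4π) = 2.5 m.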
[Fig. 4 (a) Schematic drawing of the laser range sensor; (b) A 180-degree laser range sensor]

Figure 5 shows the results of the laser range finder. There are four objects within the range of the laser scanner according to the results given in Figure 5. Because the box with a color blob label should be in the center of the laser range scan after applying the color blob tracking of Section III, it is easy to establish that the box must lie in the range between 79.5° and 99°.

[Fig. 5 Laser range finder results. A companion figure shows the geometry used for pose estimation: the detected end points A and B of the box face, the box center o′ with its attached axes X′ and Y′, and the box orientation θ′ in the robot's X–Y frame.]

Table 1 Data for pose calculation

Box Length | Box Width |   α   |  β  | OA (d1) | OB (d2)
 1000 mm   |  500 mm   | 80.5° | 99° | 3079 mm | 3226 mm

Here, d1 and d2 are the measured ranges to the two end points A and B of the box face, observed at the scan angles α and β, respectively. Their Cartesian coordinates in the robot frame, and the location of the box center o′, satisfy

x_A = d1 cos α
y_A = d1 sin α
x_B = d2 cos β                                                       (10)
y_B = d2 sin β
(x_o′ − x_A)² + (y_o′ − y_A)² = 312500
(x_o′ − x_B)² + (y_o′ − y_B)² = 312500

where 312500 mm² = (500 mm)² + (250 mm)² is the squared distance from the box center to a corner of the 1000 mm × 500 mm box. The orientation of the box face is

θ′ = atan((y_B − y_A)/(x_B − x_A))                                   (11)

By solving equations (10) and (11), the pose of the box center (o′) and the orientation of the coordinate frame which is attached to the box, with respect to the robot coordinate system, can be found and represented by the vector

o′ = [o_x  o_y  θ′]^T                                                (12)

It describes the pose of the box with respect to the mobile robot, and is called the relative pose.
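The following C# sketch solves equations (10) and (11) for the relative pose, given the two scan angles and ranges to the end points A and B of the detected box face. Two details are assumptions on top of the paper: the end points are taken as already extracted from the scan, and the ambiguity between the two intersection points of the circles in (10) is resolved by keeping the candidate farther from the laser source (the box center must lie behind the visible face). It is a sketch, not the project implementation.

using System;

// Relative box pose (o_x, o_y, theta') in the robot frame from two laser-detected
// end points of a box face, following equations (10)-(12).
public static class RelativeBoxPose
{
    // alpha, beta: scan angles to the end points A and B (rad)
    // d1, d2: measured ranges OA and OB (mm)
    // boxLength, boxWidth: box dimensions (mm), e.g. 1000 and 500
    public static (double Ox, double Oy, double ThetaPrime) Estimate(
        double alpha, double beta, double d1, double d2,
        double boxLength, double boxWidth)
    {
        // Equation (10): Cartesian coordinates of the end points in the robot frame.
        double xA = d1 * Math.Cos(alpha), yA = d1 * Math.Sin(alpha);
        double xB = d2 * Math.Cos(beta),  yB = d2 * Math.Sin(beta);

        // Equation (11): orientation of the box face.
        double thetaPrime = Math.Atan((yB - yA) / (xB - xA));

        // Squared corner-to-center distance: the constant 312500 mm^2 for a 1000 x 500 box.
        double r2 = Math.Pow(boxLength / 2.0, 2) + Math.Pow(boxWidth / 2.0, 2);

        // Intersect the two circles of radius sqrt(r2) centred at A and B.
        double dx = xB - xA, dy = yB - yA;
        double dAB = Math.Sqrt(dx * dx + dy * dy);
        double mx = (xA + xB) / 2.0, my = (yA + yB) / 2.0;     // midpoint of AB
        double h = Math.Sqrt(Math.Max(0.0, r2 - dAB * dAB / 4.0));
        double nx = -dy / dAB, ny = dx / dAB;                  // unit normal to AB

        // Two candidate centers; keep the one farther from the laser source (origin).
        double c1x = mx + h * nx, c1y = my + h * ny;
        double c2x = mx - h * nx, c2y = my - h * ny;
        bool firstIsFarther = c1x * c1x + c1y * c1y > c2x * c2x + c2y * c2y;

        return (firstIsFarther ? c1x : c2x,
                firstIsFarther ? c1y : c2y,
                thetaPrime);                                   // relative pose o' of (12)
    }
}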
With the relative pose (12) available, the pose of the box in the global coordinate system is obtained through planar homogeneous transformations. The homogeneous transformation matrix between the robot coordinate frame and the global coordinate frame can be found as

T′′ = [ cos θ   −sin θ   x ]
      [ sin θ    cos θ   y ]                                         (14)
      [  0        0      1 ]

where (x, y, θ) is the robot pose of equation (1); the corresponding matrix T′ between the box coordinate frame and the robot coordinate frame is built in the same way from the relative pose (o_x, o_y, θ′).
The homogeneous transformation matrix between the box coordinate frame and the global coordinate frame is then

T = T′′ T′ = [ cos θ   −sin θ   x ] [ cos θ′   −sin θ′   o_x ]
             [ sin θ    cos θ   y ] [ sin θ′    cos θ′   o_y ]
             [  0        0      1 ] [  0         0        1  ]

           = [ cos(θ + θ′)   −sin(θ + θ′)   o_x cos θ − o_y sin θ + x ]
             [ sin(θ + θ′)    cos(θ + θ′)   o_x sin θ + o_y cos θ + y ]      (15)
             [  0              0            1                         ]

In matrix (15), the location of the box origin and the orientation of the box with respect to the global coordinate system can be determined by
o′′ = [x  y  θ]^T = [ o_x cos θ − o_y sin θ + x,   o_x sin θ + o_y cos θ + y,   atan(sin(θ + θ′) / cos(θ + θ′)) ]^T      (16)
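Equations (14)–(16) amount to composing two planar homogeneous transforms and reading off the resulting translation and (wrapped) rotation. A minimal C# sketch, with illustrative names and angles in radians:

using System;

// Planar homogeneous transform [ cos t  -sin t  x ; sin t  cos t  y ; 0  0  1 ],
// as used in equations (14)-(16).
public struct PlanarTransform
{
    public double X, Y, Theta;   // translation (mm) and rotation (rad)

    public PlanarTransform(double x, double y, double theta)
    {
        X = x; Y = y; Theta = theta;
    }

    // Matrix product T = this * other: apply 'other' first, then 'this' (equation (15)).
    public PlanarTransform Compose(PlanarTransform other)
    {
        double c = Math.Cos(Theta), s = Math.Sin(Theta);
        return new PlanarTransform(
            c * other.X - s * other.Y + X,          // o_x cos(theta) - o_y sin(theta) + x
            s * other.X + c * other.Y + Y,          // o_x sin(theta) + o_y cos(theta) + y
            Math.Atan2(Math.Sin(Theta + other.Theta),
                       Math.Cos(Theta + other.Theta)));   // wrapped sum, as in equation (16)
    }
}

For instance, composing the robot-to-global transform built from a robot pose of (1000 mm, 1500 mm, −18°) with the box-to-robot transform built from a relative pose of (2901 mm, 69 mm, −31.9°) reproduces the translation column of matrix (19) in Section V, approximately (3780 mm, 669 mm).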
V. EXPERIMENTAL RESULTS

A. Simulation Environment

In this project, the Microsoft Robotics Studio simulation environment is used to validate the developed method. The software and the relevant algorithms are developed using Microsoft C#.

[Fig. 7 Simulation environment GUI]

Figure 7 shows the simulation environment of Microsoft Robotics Studio. Microsoft Robotics Studio is a Windows-based environment for robot control and simulation. Its features include: a visual programming tool, the Microsoft Visual Programming Language, for creating and debugging robot applications; web-based and Windows-based interfaces; 3-D simulation (including hardware acceleration); a lightweight service-oriented runtime; easy access to a robot's sensors and actuators via a .NET-based concurrent library implementation; and support for a number of languages including C#, Visual Basic .NET, JScript, and IronPython.

B. Simulation Result

The experimental setup in the simulator, shown in Figure 8, consists of a mobile robot equipped with a laser range finder and a CCD camera, and a gray box (500 mm in width and 1000 mm in length) which is labeled using color blobs on each vertical surface. The objective of this experiment is to determine the box pose in the global coordinate system.

[Fig. 8 Experimental setup in the simulator]

First, the mobile robot rotates its base clockwise to search for the color blobs (i.e., the box) in the environment. Once the robot finds the color blobs, the color-blob tracking algorithm is applied to identify candidates. Then, it keeps adjusting its base and moves the color blob into the middle of the vision frame in order to make sure that the box is within the range of the laser scanner. Meanwhile, the pose of the robot is recorded according to the odometry algorithm discussed in Section II. Figure 9 shows the global camera view and the on-robot local camera view. The current robot pose in the global coordinate system in this experiment is p′ = [1000 mm  1500 mm  −18°]^T.

[Fig. 9 (a) Global camera view; (b) Robot camera view]

While the robot sits still observing the box, the laser range finder is activated. Figure 10 shows the results acquired from the range scanner. The data for the pose calculation are given in Table 2.

[Fig. 10 Laser range finder results (range in mm versus scan angle, 0°–180°)]
Table 2 Data for pose calculation

Box Length | Box Width |  α  |   β   | OA (d1) | OB (d2)
 1000 mm   |  500 mm   | 82° | 99.5° | 2435 mm | 2968 mm

By using the group of equations (10) and equation (11), the pose of the coordinate frame attached at the center of the box, with respect to the robot coordinate system, can be determined as o′ = [2901 mm  69 mm  −31.9°]^T. The homogeneous transformation matrix between the box coordinate frame and the robot coordinate frame can be written as

T′ = [  0.8490   0.5284   2901 ]
     [ −0.5289   0.8490     69 ]                                     (17)
     [  0        0            1 ]

The homogeneous transformation matrix between the robot coordinate frame and the global coordinate frame can also be generated as

T′′ = [  0.9511   0.3090   1000 ]
      [ −0.3090   0.9511   1500 ]                                    (18)
      [  0        0           1 ]

Therefore, the homogeneous transformation matrix between the box coordinate frame and the global coordinate frame can be determined by using (17) and (18) as

T = T′′ T′ = [  0.6441   0.7649   3780 ]
             [ −0.7653   0.6442    669 ]                             (19)
             [  0        0           1 ]

Finally, the location of the box origin and the orientation of the box in the global coordinate system can be determined according to (16) as

o′′ = [x  y  θ]^T = [3780 mm  669 mm  10.04°]^T

Figure 11 shows the visualized experimental results represented in the global coordinate system.

[Fig. 11 Experimental results in the global coordinate system, showing the robot heading (−18°), the box orientation relative to the robot (−31.9°), and the box orientation in the global frame (10.04°).]
VI. CONCLUSIONS

In this paper, a mobile robot localization and object pose detection method was developed and applied in a mobile robot box pushing project. Optical encoders, a CCD camera and a laser range finder were the sensors utilized in this project. First, an odometry model was utilized to determine the pose of the mobile robot in terms of the global coordinates. Next, a CCD camera, which is a passive sensor, was used to find the box in the environment as well as a specific surface of the box. By identifying and tracking the color blobs which were attached at the center of each surface of the box, the robot adjusted its base pose to move the color blob into the center of the camera view. Finally, a laser range finder, which was mounted on top of the mobile robot, was activated to detect the distances and angles between the laser source and the laser contact surface on the box. Based on the information acquired in this manner, the global pose of the robot and the box was determined. This approach was validated using the Microsoft Robotics Studio simulation environment.

Our future work will focus on robot localization and object detection in a real-world environment, where sensor noise, sensor aliasing, and effects from the environment (illumination, friction, etc.) will be taken into account. A robust and effective mobile robot localization and object detection algorithm will be developed and applied to the physical mobile robot box pushing project.

ACKNOWLEDGMENT

This work has been supported by research grants from the Natural Sciences and Engineering Research Council (NSERC) of Canada and the Canada Foundation for Innovation (CFI).

REFERENCES

[1] R. Siegwart and I. Nourbakhsh, Introduction to Autonomous Mobile Robots, The MIT Press, 2004.
[2] J. Leonard and H. Durrant-Whyte, "Mobile robot localization by tracking geometric beacons," IEEE Transactions on Robotics and Automation, Vol. 7, pp. 89-97, 1991.
[3] P. Bonnifait and G. Garcia, "Design and experimental validation of an odometric and goniometric localization system for outdoor robot vehicles," IEEE Transactions on Robotics and Automation, Vol. 14, No. 4, pp. 541-548, 1998.
[4] A. Arsenio and M. I. Ribeiro, "Active range sensing for mobile robot localization," Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '98), Canada, 1998.
[5] Y. Yamamoto, P. Pirjanian, M. Munich, E. DiBernardo, L. Goncalves, J. Ostrowski, and N. Karlsson, "Optical sensing for robot perception and localization," IEEE Workshop on Advanced Robotics and its Social Impacts, Nagoya, Japan, pp. 14-17, 2005.
[6] E. Uğur, M. R. Doğar, M. Çakmak, and E. Şahin, "The learning and use of traversability affordance using range images on a mobile robot," International Conference on Robotics and Automation, Roma, Italy, pp. 1721-1726, 2007.
[7] J. Borenstein, B. Everett, and L. Feng, Navigating Mobile Robots: Systems and Techniques, A. K. Peters, Ltd., Wellesley, MA, 1996.
[8] Z. Sun, G. Bebis, and R. Miller, "On-road vehicle detection: a review," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 28, No. 5, pp. 585-590, 2006.
[9] B. S. Park, W. J. Lee, and K. S. Kim, "Content-based image classification using a neural network," Pattern Recognition Letters, Vol. 25, No. 3, pp. 287-300, 2004.
[10] Y. Wang and C. W. de Silva, "A fast and robust algorithm for color-blob tracking in multi-robot coordinated tasks," International Journal of Information Acquisition, Vol. 3, No. 3, pp. 191-200, 2006.
[11] M. W. Spong, S. Hutchinson, and M. Vidyasagar, Robot Modeling and Control, John Wiley & Sons, Inc., 2006.