Visual Tracking System for Mobile Robots

Ricardo Figueiredo Machado and Paulo Fernando Ferreira Rosa


Military Institute of Engineering - IME
Praça Gen. Tibúrcio, 80, Urca, Rio de Janeiro - RJ,
CEP 22290-270. Telephone: 55 21 2546-7080
Email: [email protected], [email protected]
Abstract: This paper presents a system for visual tracking of targets by wheeled mobile robots in semi-structured environments (the SAVAR system), capable of controlling a robot with two degrees of freedom and an onboard network camera. Two controllers are used, one for the robotic platform and another for the camera, to allow independent movement of the camera and the robot while performing the tracking task with obstacle avoidance. The closed-loop configuration of the system operates over a WLAN (Wireless Local Area Network) using the 802.11b/g standard. In this approach, the computationally expensive processing is carried out by a host computer, while low computational cost algorithms are embedded in the hardware of the robot.
I. INTRODUCTION
For many living species, visual perception plays a key role in behaviour. Hand-eye coordination gives a flexibility, dexterity and strength of movement that a machine cannot yet match. When a human being tracks a moving target, much of the attention is focused on the object of interest, but a relative perception of the environment is maintained in order to detect moving and static objects along the way toward the target, applying different methods to track the object, navigate and avoid collisions.
This paper proposes a Target Tracking System for Mobile Robots (SAVAR) for semi-structured environments, capable of controlling a robot with two degrees of freedom and an onboard network camera. Since the system is composed of two subsystems with different purposes (figure 1), two controllers are used: one for the robotic platform and one for the camera. The vision the robot needs to perform its tasks extends into the field of computer vision, as part of a more complex system. The visual information provides the reference signals of the control loops for the camera and for the position of the robot (visual servo control) [1].
Among recent works that use the image plane for visual tracking, [2] applied robust control techniques and a new concept, called dual Jacobian by the authors, to couple the control models of the robot and of the target in the image plane. The use of WLAN technology (IEEE 802.11 b/g) and TCP/IP opens promising prospects [3] and brings considerable benefits to the system, such as the possibility of teleoperation and of reusing the system with little (or no) change in the robot hardware and embedded software.
II. THE PROBLEM
In a semi-structured scenario, the problem is to perform visual tracking with a mobile robot equipped with a single onboard camera as its only sensory system. Most industrial robots have exact knowledge of their own position and of the positions of objects and obstacles, and thus operate in completely structured environments, like the industrial robots used in assembly lines. A structured environment is one in which certain pre-conditions can be established and guaranteed, such as a floor type suitable for navigation and appropriate to the anatomy of the robot, and the existence of reliable landmarks for navigation, among others.

In environments where these initial conditions cannot be fully guaranteed (semi-structured environments), a certain degree of autonomy must be given to the robot control system so that the task can be performed safely. In these environments it is also essential that the response of the robotic system be suitable for the non-linear dynamics of the environment and its interactions in real time. This makes it necessary to increase the number of sensors interacting with the environment, which leads to higher costs and greater complexity of the robotic system.
Fig. 1. SAVAR System - block diagram
III. SAVAR SYSTEM
As discussed in Section II, most industrial robots have exact knowledge of their own position and of the coordinates of objects and obstacles, and thus operate in structured environments. When the pre-conditions of a structured environment cannot be fully guaranteed (a semi-structured environment), a certain degree of autonomy must be given to the robot control system so that the task can be performed safely, with a response suitable for the non-linear dynamics of the environment and its interactions in real time. The SAVAR system is designed under these conditions.
A. Camera control
The goal of the controller is to align the optical axis in 3D space with the center of the object, $(x, y)$, where the movement of the object is seen as a reference with unknown dynamics. The strategy of joining two control paradigms, visual servo control and the appearance-based method, improves the tracking process (figure 2).
Fig. 2. Camera control - Control law I
By linking the inputs and outputs of the system directly to visual information, the mechanical and kinematic aspects of the system are relegated to a secondary role. For these reasons, this work uses the regulation paradigm to model the SAVAR visual tracking system, where the state vector of the kinematic model of the camera is $s(t) = [\mathrm{PAN}(t)\;\mathrm{TILT}(t)\;\mathrm{ZOOM}(t)]^T$, whose discrete evolution (for a sufficiently small $\Delta$) is given by:

$$s(t_0 + \Delta) \approx s(t_0) + \Delta s(t_0). \qquad (1)$$

For the controller, we require the PAN and TILT errors to go to zero, so we take $u(t) = K_P\,\mathrm{err}(t)$, where $K_P$ is the controller gain and $\mathrm{err}(t)$ the error state.

Omitting an integral control term can make the system oscillate, while in practice its inclusion makes the camera plant slow to respond to a continuously moving target. Thus, negative feedback and a truncation (via a threshold $\epsilon$) are introduced (figure 3), avoiding oscillation and obtaining a quick response [4]:

$$\sigma(\mathrm{err}(t), \epsilon) = \begin{cases} \mathrm{err}(t), & \text{if } \mathrm{err}(t) \ge \epsilon \\ 0, & \text{if } \mathrm{err}(t) < \epsilon \end{cases}, \qquad \epsilon > 0.$$
For a target of apparent size $S$, with the optical axis at coordinates $C_x$ and $C_y$, we have:

$$\Delta s(t) = \begin{bmatrix} \sigma\!\left(K_1\,\dfrac{\xi_x - C_x}{f},\ \epsilon_1\right)\\[2mm] \sigma\!\left(K_2\,\dfrac{\xi_y - C_y}{f},\ \epsilon_2\right)\\[2mm] \sigma\!\left(K_3\,\dfrac{S - S_0}{S_0},\ \epsilon_3\right) \end{bmatrix} \qquad (2)$$

where $S_0$ is the desired apparent size of the target.
Sending continuous sequences of commands to the network camera increases the communication rate and causes commands to be lost from the queue. The solution adopted for this problem was to establish a time-sharing strategy that groups commands, e.g. POST/(PAN TILT SPEED) and GET/(PAN TILT SPEED).
Fig. 3. Feedback
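As a concrete illustration, the Python sketch below implements the proportional law with threshold truncation of equation (2) together with the command-grouping idea. The gains, thresholds and the grouped command format are illustrative assumptions, not the actual interface of the camera used.

```python
from dataclasses import dataclass

@dataclass
class CamParams:
    f: float            # focal length, in pixels
    cx: float           # optical-axis x coordinate C_x
    cy: float           # optical-axis y coordinate C_y
    s0: float           # desired apparent target size S_0
    k1: float = 0.5     # PAN gain K_1 (illustrative)
    k2: float = 0.5     # TILT gain K_2 (illustrative)
    k3: float = 0.2     # ZOOM gain K_3 (illustrative)
    eps1: float = 0.01  # thresholds epsilon_1..3 (illustrative)
    eps2: float = 0.01
    eps3: float = 0.05

def deadband(err, eps):
    """Truncation sigma(err, eps) of figure 3: pass the correction only
    when it exceeds the threshold, avoiding oscillation around zero."""
    return err if abs(err) >= eps else 0.0

def camera_command(xi_x, xi_y, size, p: CamParams):
    """One step of control law (2), returning a single grouped
    PAN/TILT/ZOOM command string so the camera receives one request
    per cycle instead of three separate ones."""
    d_pan  = deadband(p.k1 * (xi_x - p.cx) / p.f, p.eps1)
    d_tilt = deadband(p.k2 * (xi_y - p.cy) / p.f, p.eps2)
    d_zoom = deadband(p.k3 * (size - p.s0) / p.s0, p.eps3)
    # Grouped command in the spirit of POST/(PAN TILT SPEED);
    # the textual format is hypothetical.
    return f"PTZ rpan={d_pan:.4f} rtilt={d_tilt:.4f} rzoom={d_zoom:.4f}"
```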
B. Embedded electronics
The hardware design (figure 4) uses industry-standard Wi-Fi IEEE 802.11b/g modules to implement wireless connectivity to Ethernet/Internet. These modules, MatchPort b/g (Lantronix, Inc.), are serial-to-Ethernet servers with two standard RS232-C ports operating at up to 921 kbps.
The main control module of the robot is based on the LPC2148 processor, with a 60 MHz ARM7 core, which controls the motors and the communication. This information is used by the path estimator, together with wheel odometry, to estimate safe paths.
All communication is done via I2C/RS-232; the network interface board and the main control module of the robot are custom boards developed with CadSoft Eagle V5.9 software.
Fig. 4. Network interface and control module
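Because the MatchPort modules simply tunnel the RS232-C link over TCP, the host can drive the robot through an ordinary socket. A minimal sketch, assuming the Lantronix factory-default tunnel port (10001), a made-up robot IP address, and a hypothetical wheel-speed command format:

```python
import socket

# Assumed address of the robot's MatchPort module; 10001 is the usual
# Lantronix default for the raw serial tunnel.
ROBOT_ADDR = ("192.168.0.42", 10001)

def send_wheel_speeds(v_right, v_left):
    """Send a wheel-speed command through the WiFi-to-RS232 tunnel
    and return the robot's one-line reply. The 'V vr vl' command
    format is hypothetical, for illustration only."""
    with socket.create_connection(ROBOT_ADDR, timeout=1.0) as sock:
        sock.sendall(f"V {v_right:.3f} {v_left:.3f}\r\n".encode("ascii"))
        return sock.recv(64).decode("ascii", errors="replace").strip()
```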
C. Visual Tracker
Visual tracking consists of establishing a method of visual servo control of the camera on a target. In the tracking tests, the coordinates of the target centroid are obtained by tracking its color (RGB filters) and by its Hough transform; however, other algorithms may be used. This information is also used by the robot controller. The formulation of this strategy is partly based on [5].
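A minimal OpenCV sketch of such a detector, assuming a predominantly red, circular target; the color bounds and Hough parameters are illustrative values, not those used in the original tests:

```python
import cv2

def find_target(frame_bgr):
    """Locate the centroid and apparent size of a red circular target
    by color filtering followed by a circular Hough transform.
    Returns (x, y, radius) in pixels, or None if nothing is found."""
    # Color filter: keep only strongly red pixels (illustrative bounds,
    # in OpenCV's BGR channel order).
    mask = cv2.inRange(frame_bgr, (0, 0, 120), (80, 80, 255))
    smoothed = cv2.GaussianBlur(mask, (9, 9), 2)
    # Circular Hough transform on the filtered mask.
    circles = cv2.HoughCircles(smoothed, cv2.HOUGH_GRADIENT, dp=1.5,
                               minDist=50, param1=100, param2=20,
                               minRadius=5, maxRadius=200)
    if circles is None:
        return None
    x, y, r = circles[0, 0]  # strongest candidate
    return float(x), float(y), float(r)
```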
Let $\xi = [\xi_x\;\xi_y]^T$ be the vector of coordinates of the centroid of the target in the image plane and $f$ the focal length. By a simple analysis of the geometric model, with $\theta$ the PAN angle and $\gamma$ the bearing of the target (see figure 5), the target's position in the image plane is obtained from these variables.
Fig. 5. Dependence of $\xi_x$ on the PAN angle
The following equations show this relationship:

$$\xi_x = f \tan(\gamma - \theta) \qquad (3)$$

$$\dot{\xi}_x = \frac{f\,(\dot{\gamma} - \dot{\theta})}{\cos^2(\gamma - \theta)}. \qquad (4)$$

Thus, a proper control of $\theta$,

$$\dot{\theta} = \dot{\gamma} + K_\theta\,\xi_x, \qquad K_\theta > 0, \qquad (5)$$

guarantees the exponential convergence of $\xi_x$ to zero.

The exogenous (feed-forward) term $\dot{\gamma}$ that appears in this relationship depends on the instantaneous relative motion of the target and the robot. This term compensates for the relative camera-robot motion, establishes a degree of decoupling, and its dynamics can be numerically computed from:

$$\gamma = \theta + \arctan\!\left(\frac{\xi_x}{f}\right) \qquad (6)$$
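In code, the pan loop of equations (3)-(6) reduces to a few lines. The sketch below assumes the bearing $\gamma$ is recovered from the measured centroid as in (6), as reconstructed above, and differentiated numerically to form the feed-forward term; the gain value is a placeholder:

```python
import math

def pan_rate(xi_x, theta, gamma_prev, dt, f, k_theta=2.0):
    """Control law (5): theta_dot = gamma_dot + K_theta * xi_x.
    The bearing gamma is reconstructed from the image as in (6) and
    its rate approximated by a backward difference (the feed-forward
    term). Returns (theta_dot, gamma) so gamma can be fed back in."""
    gamma = theta + math.atan(xi_x / f)     # eq. (6), as reconstructed
    gamma_dot = (gamma - gamma_prev) / dt   # numerical feed-forward term
    theta_dot = gamma_dot + k_theta * xi_x  # eq. (5), with K_theta > 0
    return theta_dot, gamma
```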
From the quantities $\psi$, $b$ and $h$ shown in figure 6(a), the exponential convergence of $\xi_y$ to zero can be obtained analogously to that of $\xi_x$. The relative distance of the target is estimated by:

$$b = \frac{h}{\tan(\psi)} \qquad (7)$$

From figure 6(b), we have the pointing error of SAVAR ($\phi_{error} = \phi_{SAVAR} + \phi$), where:

$$\lambda^2 = b^2 + d^2 - 2\,b\,d\cos(\alpha), \qquad (8)$$

and where, by Carnot's theorem, we have:

$$\phi = \arcsin\frac{b\sin(\alpha)}{\lambda}. \qquad (9)$$
Fig. 6. Relative position of the target - (a) side view and (b) top view
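The range and pointing-error geometry of equations (7)-(9) can be computed directly. The sketch follows the symbols reconstructed above (tilt $\psi$, camera height $h$, offset $d$ and the angle $\alpha$ between $b$ and $d$), which are assumptions about the garbled source notation:

```python
import math

def target_geometry(psi, h, d, alpha):
    """Equations (7)-(9): estimate the ground distance b to the target
    from the TILT angle psi and camera height h, then the distance
    lambda and pointing error phi via the laws of cosines and sines."""
    b = h / math.tan(psi)                                       # eq. (7)
    lam = math.sqrt(b**2 + d**2 - 2 * b * d * math.cos(alpha))  # eq. (8)
    phi = math.asin(b * math.sin(alpha) / lam)                  # eq. (9)
    return b, lam, phi
```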
Considering the displacement of the target ($\overrightarrow{P_1P_2}$) and of the robot ($\overrightarrow{S_1S_2}$) at each image frame (figure 7), with a backward approximation the speed of the target at $t = t_2$ is estimated as $v_{target} \approx \lVert \overrightarrow{P_1P_2} \rVert / T$, and the displacement of SAVAR by a second-order Runge-Kutta step,

$$\Delta x_S \approx v_1\,T\cos\!\left(\theta_1 + \tfrac{\omega_1 T}{2}\right), \qquad \Delta y_S \approx v_1\,T\sin\!\left(\theta_1 + \tfrac{\omega_1 T}{2}\right), \qquad T \triangleq \Delta t,$$

where $v$ and $\omega$ are, respectively, the linear and angular speeds.
Fig. 7. Obtaining the estimated speed of the target
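The two speed estimates can be written as follows; the midpoint (second-order Runge-Kutta) form of the robot displacement is the standard unicycle discretization, assumed here to be what the source formula intended:

```python
import math

def target_speed(p1, p2, T):
    """Backward estimate of the target speed at t2 from two consecutive
    target positions P1 and P2 and the sample period T."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1]) / T

def robot_displacement(v1, omega1, theta1, T):
    """Second-order Runge-Kutta (midpoint) displacement of SAVAR over
    one period T, given its linear speed v1, angular speed omega1 and
    heading theta1 at t1."""
    mid_heading = theta1 + 0.5 * omega1 * T
    return v1 * T * math.cos(mid_heading), v1 * T * math.sin(mid_heading)
```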
D. Modelling of the Target Motion
Given the start position of the robot, after the first capture of the target image and the calculation of $\xi$, a state observer predicts the absolute position of the target after an instant $\Delta$. The sampling period is defined as $\Delta$, the instant $k\Delta$ being represented simply by $k$, and must be adjusted for real-time applications. The synthesis of this observer-predictor assumes that the target has constant acceleration in the image plane. Thus, the state transition from $x_k$ is given by:

$$\xi(k+1) = \xi(k) + \dot{\xi}(k)\,\Delta + \tfrac{1}{2}\,\ddot{\xi}(k)\,\Delta^2 \qquad (10)$$

$$\dot{\xi}(k+1) = \dot{\xi}(k) + \ddot{\xi}(k)\,\Delta \qquad (11)$$

$$\ddot{\xi}(k+1) = \ddot{\xi}(k), \qquad (12)$$

which can be summarized in matrix form as:

$$\begin{bmatrix} \xi_x(k+1)\\ \xi_y(k+1)\\ \dot{\xi}_x(k+1)\\ \dot{\xi}_y(k+1)\\ \ddot{\xi}_x(k+1)\\ \ddot{\xi}_y(k+1) \end{bmatrix} = \begin{bmatrix} 1 & 0 & \Delta & 0 & 0.5\,\Delta^2 & 0\\ 0 & 1 & 0 & \Delta & 0 & 0.5\,\Delta^2\\ 0 & 0 & 1 & 0 & \Delta & 0\\ 0 & 0 & 0 & 1 & 0 & \Delta\\ 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \xi_x(k)\\ \xi_y(k)\\ \dot{\xi}_x(k)\\ \dot{\xi}_y(k)\\ \ddot{\xi}_x(k)\\ \ddot{\xi}_y(k) \end{bmatrix} \qquad (13)$$
Note that the state vector consists of two-dimensional position, velocity and acceleration. Synthetically, we have $x(k+1) = \Phi\,x(k)$, where $\Phi$ is the state transition matrix, which determines the relationship between the current and previous states. The measurement equation is defined as $z(k) = H\,x(k) + v(k)$, where the noise $v(k)$ is Gaussian with zero mean. The matrix $H$ describes the relationship between the vector of measurements $z(k)$ and the state vector $x(k)$, as follows:

$$H = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0 \end{bmatrix} \qquad (14)$$
Given the uncertainties in the input data (the image information) and in the state transition of a moving object, the strategy adopted here is to use a Kalman filter as the state observer, estimating on the image plane (figure 8). Figure 9 shows the real position of the target (in red) and the position estimated by the Kalman filter (in blue) at frame F, for $\Delta = t_k$.
Fig. 8. Tracking flowchart - summary
Fig. 9. Tracking of the target with the Kalman filter - result
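A compact NumPy sketch of the constant-acceleration Kalman observer built from equations (10)-(14); the process and measurement noise levels q and r are illustrative placeholders that would be tuned experimentally:

```python
import numpy as np

def make_model(dt):
    """Transition matrix Phi of eq. (13) for the state
    [x, y, vx, vy, ax, ay], and measurement matrix H of eq. (14)."""
    phi = np.eye(6)
    phi[0, 2] = phi[1, 3] = dt            # position <- velocity
    phi[2, 4] = phi[3, 5] = dt            # velocity <- acceleration
    phi[0, 4] = phi[1, 5] = 0.5 * dt**2   # position <- acceleration
    H = np.zeros((2, 6))
    H[0, 0] = H[1, 1] = 1.0               # only the centroid is measured
    return phi, H

def kalman_step(x, P, z, phi, H, q=1e-3, r=1e-1):
    """One predict/update cycle on the measured centroid z = (x, y)."""
    # Predict with the constant-acceleration model.
    x = phi @ x
    P = phi @ P @ phi.T + q * np.eye(6)
    # Update with the measurement z(k) = H x(k) + v(k).
    S = H @ P @ H.T + r * np.eye(2)
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.asarray(z) - H @ x)
    P = (np.eye(6) - K @ H) @ P
    return x, P
```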
E. Summary of the Robot Controller
For the design of the controller that enables SAVAR to intercept the target, the following premises were considered: (a) the maximum velocities and accelerations (linear and angular) of the robot are larger than those of the pursued object, and (b) the minimum time interval T between two commands is sufficiently small. Thus, the control of SAVAR is performed based on the position and velocity of the target relative to the robot [6]. We used the principles of [5] with a simplified model.
As the motors that move the robot admit speeds or accelerations as input, we opted for a kinematic model defined by the right and left wheel speeds, $v_R(t)$ and $v_L(t)$, given by:

$$\Delta_{x,y}(t) = \frac{v_R(t) + v_L(t)}{2} \qquad (15)$$

$$\Delta_\theta(t) = \frac{v_R(t) - v_L(t)}{L} \qquad (16)$$

where $\Delta_{x,y}(t)$ and $\Delta_\theta(t)$ are the scalar values of the linear and angular displacements of SAVAR and $L$ is the distance between the wheels. Thus, for the state vector $x(t) = [\Delta_{x,y}(t)\;\;\Delta_\theta(t)]^T$ and the input signal defined by the speeds, $u(t) = [v_R(t)\;\;v_L(t)]^T$, we have the state-space equation $\dot{x}(t) = B\,u(t)$, where:

$$B = \begin{bmatrix} 0.5 & 0.5\\ 1/L & -1/L \end{bmatrix},$$

with $B$ invertible. The goal is to make SAVAR travel the distance defined by $\lambda$ (figure 6(b)) and simultaneously turn toward the target according to the angle $\phi$. This allows setting $\lambda$ as the reference straight line to be covered and $\phi$ as the angle that SAVAR should turn to intercept the target, so that the absolute value of the linear displacement just takes the robot to equilibrium when the distance $\lambda = 0$.
From an initial displacement $\Delta_{x,y}(t_0)$, we seek $\Delta_{x,y}(t_f) = \Delta_{x,y}(t_0) + \lambda$ for some $t_f > t_0$. By analogy, we conclude that $\Delta_\theta(t_f) = \Delta_\theta(t_0) + \phi$. Thus, we define the reference vector imposed on the closed-loop system as:

$$r(t) = [\,\lambda(t) + x_1(t) + e(t)\;\;\;\phi(t) + x_2(t)\,]^T,$$

where $e(t)$ is a distance added to $\lambda(t)$ because the displacement of SAVAR occurs while it turns: the actual path is not straight but slightly curved, implying a distance to be travelled greater than $\lambda$.
As the states are measured, the control law is defined as a (negative) feedback gain, which allows writing the following closed-loop equation:

$$\dot{x}(t) = B\left([\,\Delta_{x,y}(t) + \lambda + e(t)\;\;\;\Delta_\theta(t) + \phi(t)\,]^T - K\,[\,\Delta_{x,y}(t)\;\;\Delta_\theta(t)\,]^T\right)$$

$$\dot{x}(t) = B(I - K)\,x(t) + B\,[\,\lambda + e(t)\;\;\;\phi(t)\,]^T \qquad (17)$$

allocating the poles for the required closed-loop performance.
Under the premises adopted, it can be concluded that the relation $\lambda(kT) > \lambda(kT + T)$ is assured. Thus, as $K$ grows, $\lambda \to 0$ and $e(t) \to 0$, so both can be disregarded for $k$ large enough.

For smaller values of $k$, the removal of this correction implies that the robot travels a shorter distance than the ideal one. However, by the second premise (b), if the target is not intercepted because of the linear displacement error, a new reference distance $\lambda \neq 0$ is generated from the current displacement $\Delta_{x,y}(t)$ (corrected in the direction of $\phi$). Thus, the variable $e(t)$ has negligible importance for the required interception course. From the above, the model adopted to describe the movement of the robot in closed loop is simple and linear, given by:

$$\dot{x}(t) = B(I - K)\,x(t) + B\,[\,\lambda\;\;\phi(t)\,]^T \qquad (18)$$

For $d$ sufficiently small (figure 6), we can adopt the simplification (figure 7):

$$\dot{x}(t) = B(I - K)\,x(t) + B\,[\,b(t)\;\;\phi(t)\,]^T \qquad (19)$$
IV. EXPERIMENTAL RESULTS
The target tracking simulation was performed in MatLab, and a supervisory application was developed in C#. Preliminary practical experiments with both the robot and the camera validate the modelling. The displacements shown in figure 10 are part of tens of tests observed successfully (figures 11-12). Odometry data, battery voltage and current are obtained in real time.

For the simulation, the target and the robot are positioned randomly, with bounded speeds, the target performing a motion with changing acceleration. The poles of the closed-loop system were placed at [0.8 8]. The results show that the simulation model, the assumptions and the simplifications adopted solve the problem. The Kalman filter was able to correctly process a trace, showing good performance (figure 13). SAVAR was able to track the test target, performing an interception trajectory with satisfactory performance. The WLAN network was adequate for the real-time feedback control loops, considering the requirements of this research (a 15-20 FPS rate and T = 1/25 s).

Fig. 10. Software of SAVAR
V. CONCLUSION
This paper presented the concept of a visual tracking system and a control module for wheeled mobile robots, with feedback from the plant over a WiFi wireless Ethernet network. A control unit for the robot was tested, with test programs written for its basic functions, and is fully functional. It communicates successfully via WiFi to RS232, sending commands and receiving data from the sensors through standard Ethernet/serial modules. An experimental differential mobile robot was developed for use by the system, and a preliminary visual tracking algorithm, based only on color and shape, was successfully tested. Future work includes model validation and the implementation of the obstacle detection algorithm, enabling robot tracking and navigation in semi-structured environments.

Fig. 11. Kinematic simulation of the control
Fig. 12. Simulating the movement of the wheels
Fig. 13. Position of the target with constant acceleration - Kalman filter result
ACKNOWLEDGMENT
The authors would like to thank Mr. Gilberto Figueiredo Machado, of ITEG, for the sharpening, tooling and machining of the mechanical parts of the robot.
REFERENCES
[1] N. Papanikolopoulos, P. Khosla and T. Kanade, "Visual Tracking of a Moving Target by a Camera Mounted on a Robot: A Combination of Control and Vision," IEEE Transactions on Robotics and Automation, pp. 14-35, February 1993.
[2] C.-Y. Tsai, K.-T. Song, X. Dutoit, H. Van Brussel and M. Nuttin, "Robust visual tracking control system of a mobile robot based on a dual-Jacobian visual interaction model," Robotics and Autonomous Systems, 2009.
[3] M. Felser, "Real-Time Ethernet - Industry Prospective," Proceedings of the IEEE, 2005.
[4] T. Dinh, Q. Yu and G. Medioni, "Real time tracking using an active pan-tilt-zoom network camera," in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2009.
[5] L. Freda and G. Oriolo, "Vision-based interception of a moving target with a nonholonomic mobile robot," Robotics and Autonomous Systems, 55(6): 419-432, 2007.
[6] R. Figueiredo Machado, P. F. F. Rosa, A. Carrilho and D. A. Felix, "Um Sistema de Acompanhamento Visual para Robôs Móveis Semi-autônomos em Ambientes Semi-estruturados" (A Visual Tracking System for Semi-autonomous Mobile Robots in Semi-structured Environments), VI Simpósio Brasileiro de Engenharia Inercial, Rio de Janeiro, 2010.
