\[
\cdots = s(t_0) + \dot{s}(t_0). \tag{1}
\]
Considering a proportional controller, we require the PAN and TILT errors to go to $0$, so we take $u(t) = K_P\,\mathrm{err}(t)$, where $K_P$ is the controller gain and $\mathrm{err}(t)$ the error state.
Omitting an integral term can make the system oscillate; in practice, however, including one, given the continuous movement of the target, makes the plant-camera slow to respond. Thus, negative feedback and a truncation (via a threshold $\varepsilon$) are introduced, figure 3, avoiding oscillation and obtaining a quick response [4]:

\[
\sigma(\mathrm{err}(t), \varepsilon) =
\begin{cases}
\mathrm{err}(t), & \text{if } \mathrm{err}(t) \ge \varepsilon\\
0, & \text{if } \mathrm{err}(t) < \varepsilon,
\end{cases}
\qquad \varepsilon > 0.
\]
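A minimal sketch of this truncation in Python (the names `truncate` and `eps`, and the gain value, are ours; the paper's own software was written in C# and MATLAB):

```python
def truncate(err: float, eps: float) -> float:
    """Deadband of the text: pass the error through only when it
    reaches the threshold eps; otherwise command no motion.
    (For signed errors one would typically compare abs(err) to eps.)"""
    assert eps > 0.0
    return err if err >= eps else 0.0

# Proportional command with truncation, as in u(t) = K_P * err(t):
K_P = 0.8                             # illustrative gain value
u = K_P * truncate(0.05, eps=0.02)    # error above threshold -> camera moves
```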
It has been shown that, for a target of apparent size $S$, with the optical axis at coordinates $C_x$ and $C_y$:

\[
s(t) =
\begin{bmatrix}
\sigma\!\left(K_1\,\dfrac{x - C_x}{f},\ \varepsilon_1\right)\\[6pt]
\sigma\!\left(K_2\,\dfrac{y - C_y}{f},\ \varepsilon_2\right)\\[6pt]
\sigma\!\left(K_3\,\dfrac{S - S_0}{S_0},\ \varepsilon_3\right)
\end{bmatrix} \tag{2}
\]
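Under the same reading of (2) (applying the truncation componentwise is our interpretation of the garbled equation; the gains $K_1..K_3$, thresholds $\varepsilon_1..\varepsilon_3$ and reference size $S_0$ are free parameters), the feature vector could be computed as follows, reusing `truncate` from the sketch above:

```python
def feature_vector(x, y, S, Cx, Cy, f, S0,
                   K=(1.0, 1.0, 1.0), eps=(0.01, 0.01, 0.01)):
    """Visual feature vector s(t) of eq. (2): normalized image-plane
    offsets of the centroid plus relative change in apparent size,
    each passed through the truncation sigma."""
    raw = (K[0] * (x - Cx) / f,      # horizontal offset term
           K[1] * (y - Cy) / f,      # vertical offset term
           K[2] * (S - S0) / S0)     # apparent-size (distance) term
    return [truncate(v, e) for v, e in zip(raw, eps)]
```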
Sending continuous sequences of commands to the network camera increases the communication rate and causes commands to be lost from the queue. The solution adopted for this problem was to establish a time-sharing strategy, grouping commands, e.g. POST/(PAN TILT SPEED) and GET/(PAN TILT SPEED).
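A sketch of this time-sharing idea, assuming a hypothetical HTTP interface (the endpoint and parameter names below are illustrative, not the camera's actual API): the latest PAN/TILT/SPEED values are coalesced and flushed as a single grouped request once per period.

```python
import time
import urllib.parse
import urllib.request

class GroupedPtzSender:
    """Coalesce pan/tilt/speed commands and flush them as one grouped
    POST per period, instead of one HTTP request per command."""

    def __init__(self, base_url: str, period_s: float = 0.04):
        self.base_url = base_url   # hypothetical camera endpoint
        self.period_s = period_s   # time-sharing period T
        self.pending = {}          # latest value wins per key
        self.last_flush = 0.0

    def set(self, **values):       # e.g. sender.set(pan=10, tilt=-5, speed=3)
        self.pending.update(values)

    def flush(self):
        now = time.monotonic()
        if not self.pending or now - self.last_flush < self.period_s:
            return                 # nothing to send, or too soon
        data = urllib.parse.urlencode(self.pending).encode()
        urllib.request.urlopen(self.base_url, data=data, timeout=1.0)
        self.pending.clear()
        self.last_flush = now
```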
Fig. 3. Feedback
B. Embedded electronics
The hardware design, figure 4, uses industry-standard IEEE 802.11b/g Wi-Fi modules to implement wireless connectivity to Ethernet/Internet. These modules, MatchPort b/g (Lantronix, Inc.), are Serial-to-Ethernet servers with two standard RS232-C ports of up to 921 kbps.
The main control module of the robot is based on the LPC2148 processor, with a 60 MHz ARM7 core, which controls the motors and the communication; this information is used by the path estimator, in conjunction with wheel odometry, to estimate safe paths.
All communication is done via I2C/RS-232, and the network interface board and the main control module of the robot are custom boards developed with CadSoft Eagle V5.9 software.
Fig. 4. Network interface and control module
C. Visual Tracker
Visual tracking consists of establishing a method of visual servo control of the camera over a target. In the tracking tests, the coordinates of the centroid are obtained by tracking its color (RGB filters) and its Hough transform; however, other algorithms may be used. This information is also used by the robot controller. The formulation of this strategy is partly based on [5].
Considering $\xi = [\,\xi_x\ \ \xi_y\,]^T$ the vector of coordinates of the centroid of the target in the image plane, $f$ the focal length, and the angle $\psi$ between the optical axis and the target direction (see figure 5), a simple analysis of the geometric model shows that the target's position in the image plane is obtained from these variables.
Fig. 5. Dependence of $\psi$ on the PAN angle
The following equations show this relationship:

\[
\xi_x = f \tan(\psi) \tag{3}
\]

\[
\dot{\xi}_x = \frac{f\,\dot{\psi}}{\cos^2(\psi)}. \tag{4}
\]
Thus, a proper control of $\theta$,

\[
\dot{\theta} = \dot{\psi} + K\,\xi_x, \quad \text{for } K > 0, \tag{5}
\]

guarantees the exponential convergence of $\xi_x$ to zero.
The exogenous (feed-forward) term that arises in this relationship depends on the instantaneous relative motion of the target and the robot. This term, in a sense, increases the compensation for the relative camera-robot motion, establishes a degree of separation, and its dynamics can be computed numerically from:

\[
\psi = \arctan\!\left(\frac{\xi_x}{f}\right) \tag{6}
\]
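A discrete-time sketch of the pan law (5), with the feed-forward term $\dot{\psi}$ estimated numerically from (6) by finite differences (the symbol names follow the reconstruction above, and the gain value is illustrative):

```python
import math

def pan_rate(xi_x: float, xi_x_prev: float, f: float, dt: float,
             K: float = 2.0) -> float:
    """Pan-rate command of eq. (5): numerical feed-forward term plus a
    proportional term that drives the centroid offset xi_x to zero."""
    psi      = math.atan(xi_x / f)        # eq. (6), current frame
    psi_prev = math.atan(xi_x_prev / f)   # eq. (6), previous frame
    psi_dot  = (psi - psi_prev) / dt      # exogenous feed-forward estimate
    return psi_dot + K * xi_x             # eq. (5), K > 0
```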
From the quantities $\varphi$, $b$ and $h$, shown in figure 6(a), the exponential convergence of $\xi_y$ to zero can be obtained analogously to that of $\xi_x$. The relative distance of the target is estimated by:

\[
b = \frac{h}{\tan(\varphi)} \tag{7}
\]
From figure 6(b), we have the pointing error of the SAVAR ($\theta_{error} = \theta_{SAVAR} + \alpha$), where, by Carnot's theorem (law of cosines):

\[
\rho^2 = b^2 + d^2 - 2bd\cos(\gamma), \tag{8}
\]

and, from the law of sines:

\[
\alpha = \arcsin\!\left(\frac{b\sin(\gamma)}{\rho}\right) \tag{9}
\]
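A sketch of this geometry (assuming $\gamma$ is the angle opposite the side $\rho$ in the triangle of figure 6(b); the function and variable names are ours, following the reconstruction of (7)-(9)):

```python
import math

def target_geometry(h: float, phi: float, d: float, gamma: float):
    """Relative target geometry: eq. (7) gives the ground distance b
    from the camera height h and tilt angle phi; eq. (8), Carnot's
    theorem, gives rho; eq. (9), law of sines, gives alpha."""
    b = h / math.tan(phi)                                   # eq. (7)
    rho = math.sqrt(b*b + d*d - 2.0*b*d*math.cos(gamma))    # eq. (8)
    alpha = math.asin(b * math.sin(gamma) / rho)            # eq. (9)
    return b, rho, alpha
```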
Fig. 6. Relative position of the target - (a) side view and (b) top view
Considering the displacement of the target ($\overline{P_1 P_2}$) during $T$ and that of the SAVAR, by second-order Runge-Kutta: $\Delta x_S \approx v_1 T \cos(\theta_1 + \omega_1 T/2)$, $\Delta y_S \approx v_1 T \sin(\theta_1 + \omega_1 T/2)$ and $T = \Delta t$, where $v$ and $\omega$ are respectively the linear and angular speeds.
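The estimation of figure 7 might be sketched as follows: the SAVAR displacement over $T$ is integrated with a second-order (midpoint) Runge-Kutta step of the unicycle model, and the target speed follows from a finite difference of its two estimated absolute positions (function and variable names are ours):

```python
import math

def savar_step_rk2(x, y, theta, v, omega, T):
    """One midpoint (RK2) integration step of the unicycle model."""
    theta_mid = theta + 0.5 * omega * T
    return (x + v * T * math.cos(theta_mid),
            y + v * T * math.sin(theta_mid),
            theta + omega * T)

def target_speed(p1, p2, T):
    """Estimated target speed from two absolute positions P1, P2
    observed T seconds apart (simple finite difference)."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1]) / T
```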
Fig. 7. Obtaining the estimated speed of the target
D. Modelling of the Moving Target
Given the start position of the robot, after the first capture of the target image and the calculation of $\rho$, a lightweight observer predicts the absolute position of the target after an interval $\tau$. The value of the sampling period, defined at step $k$ and represented simply by $\tau$, must be adjusted in real-time applications. The synthesis of this observer/predictor assumes that the target, in the image plane, has constant acceleration. Thus, the transition from state $x_k$ is given by:
\[
x(k+1) = x(k) + \dot{x}(k)\,\tau + \tfrac{1}{2}\,\ddot{x}(k)\,\tau^2 \tag{10}
\]

\[
\dot{x}(k+1) = \dot{x}(k) + \ddot{x}(k)\,\tau \tag{11}
\]

\[
\ddot{x}(k+1) = \ddot{x}(k), \tag{12}
\]
which can be summarized in matrix form as:
\[
\begin{bmatrix}
x(k+1)\\ y(k+1)\\ \dot{x}(k+1)\\ \dot{y}(k+1)\\ \ddot{x}(k+1)\\ \ddot{y}(k+1)
\end{bmatrix}
=
\begin{bmatrix}
1 & 0 & \tau & 0 & 0.5\,\tau^2 & 0\\
0 & 1 & 0 & \tau & 0 & 0.5\,\tau^2\\
0 & 0 & 1 & 0 & \tau & 0\\
0 & 0 & 0 & 1 & 0 & \tau\\
0 & 0 & 0 & 0 & 1 & 0\\
0 & 0 & 0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
x(k)\\ y(k)\\ \dot{x}(k)\\ \dot{y}(k)\\ \ddot{x}(k)\\ \ddot{y}(k)
\end{bmatrix} \tag{13}
\]
Note that the state vector consists of two-dimensional position, velocity and acceleration. Synthetically, we have $x(k+1) = \Phi\,x(k)$, where $\Phi$ is the state transition matrix, which determines the relationship between the current and previous states. The measurement equation is defined as $z(k) = H\,x(k) + v(k)$. The noise $v(k)$ is Gaussian with zero mean. The matrix $H$ describes the relationship between the vector of measurements $z(k)$ and the state vector $x(k)$, as follows:
\[
H = \begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0\\
0 & 1 & 0 & 0 & 0 & 0
\end{bmatrix} \tag{14}
\]
Given the uncertainties of the input data (image information) and of the state transition of a moving object, the strategy of using a Kalman filter as the state observer, estimating on the image plane, is applied here, figure 8. Figure 9 shows the real position of the target (in red) and the position estimated by the Kalman filter (in blue) at frame $F$, for $\tau = t_k$.
Fig. 8. Tracking flowchart - summary
Fig. 9. Tracking of the target with the Kalman filter - result
E. Summary of the Robot Controller
For the calculation of the controller that enables the SAVAR to intercept the target, the following premises were considered: (a) the maximum velocities and accelerations (linear and angular) of the robot are larger than those of the pursued object, and (b) the minimum time interval $T$ between two commands is sufficiently small. Thus, the control of the SAVAR is performed based on the position and velocity of the target relative to the robot [6]. We used the principles of [5] with a simplified model.
As the motors that move the robot admit speeds or accelerations as input, we opted for a kinematic model defined by the right and left wheel speeds, $v_R(t)$ and $v_L(t)$, given by:

\[
\dot{x}_{x,y}(t) = \frac{v_R(t) + v_L(t)}{2} \tag{15}
\]

\[
\dot{\theta}(t) = \frac{v_R(t) - v_L(t)}{L} \tag{16}
\]
where $\dot{x}(t) = [\,\dot{x}_{x,y}(t)\ \ \dot{\theta}(t)\,]^T$ and the input signal defined by the speeds, $u(t) = [\,v_R(t)\ \ v_L(t)\,]^T$, gives the state-space equation $\dot{x}(t) = B\,u(t)$, where:

\[
B = \begin{bmatrix} 0.5 & 0.5\\ 1/L & -1/L \end{bmatrix}
\]

and $B^{-1}$ exists. The goal is to make the SAVAR travel the distance defined by $\rho$, figure 6(b), and simultaneously turn toward the target, according to the angle $\alpha$. This allows setting $\rho$ as the straight-line reference to be covered and $\alpha$ as the angle through which the SAVAR should turn to intercept the target, so that the absolute value of the linear displacement brings the robot to equilibrium exactly when the distance $\rho = 0$.
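A sketch of this kinematic map and its inversion (the invertibility of $B$ noted above is what allows computing wheel speeds from desired linear and angular rates; the wheelbase value is illustrative):

```python
import numpy as np

def make_B(L: float) -> np.ndarray:
    """Input matrix of the state-space model x_dot = B u,
    with u = [v_R, v_L]^T, from eqs. (15)-(16)."""
    return np.array([[0.5,     0.5],
                     [1.0 / L, -1.0 / L]])

L = 0.30                        # wheelbase in meters (illustrative)
B = make_B(L)
v_lin, omega = 0.4, 0.2         # desired linear and angular speeds
v_R, v_L = np.linalg.solve(B, [v_lin, omega])   # invert B for wheel speeds
```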
From an initial displacement $x_{x,y}(t_0)$, we seek $x_{x,y}(t_f) = x_{x,y}(t_0) + \rho$ for some $t_f > t_0$. By analogy, we conclude that $\theta(t_f) = \theta(t_0) + \alpha$. Thus, we define the reference vector imposed on the closed-loop system as $r(t) = [\,\rho(t) + x_1(t) + e(t)\ \ \ \alpha(t) + x_2(t)\,]^T$, where $e(t)$ is a distance added to $\rho(t)$ because of the rotation that occurs during the displacement of the SAVAR: the actual path is not straight but slightly curved, implying a distance to be travelled greater than $\rho$.
As the states are measured, the control law is defined as a (negative) feedback gain, which allows writing the following closed-loop equation:

\[
\dot{x}(t) = B\left(\,[\,x_{x,y}(t) + \rho + e(t)\ \ \ \theta(t) + \alpha\,]^T - K\,[\,x_{x,y}(t)\ \ \ \theta(t)\,]^T\right) \tag{17}
\]

with $e(t) \to 0$ when $\dot{x}_{x,y}(t) = 0$ (corrected in the direction of $\alpha$). Thus, the variable $e(t)$ has negligible importance in the required interception course. From the above, the model adopted to describe the movement of the robot in closed loop is simple and linear, given by:
\[
\dot{x}(t) = B(I - K)\,x(t) + B\,[\,\rho(t)\ \ \ \alpha(t)\,]^T \tag{18}
\]

For $d$ sufficiently small, figure 6, we can adopt the simplification (figure 7):

\[
\dot{x}(t) = B(I - K)\,x(t) + B\,[\,b(t)\ \ \ \alpha(t)\,]^T \tag{19}
\]
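A numerical sketch of the simplified closed loop (19). Since $B$ is invertible, a gain $K$ that places the closed-loop poles at the values reported in the experiments ($[-0.8, -8]$) can be obtained as $K = I - B^{-1}\,\mathrm{diag}(-0.8, -8)$; the wheelbase, references and integration step below are illustrative:

```python
import numpy as np

def place_gain(B: np.ndarray, poles=(-0.8, -8.0)) -> np.ndarray:
    """Gain K such that B(I - K) = diag(poles): K = I - B^{-1} diag(poles)."""
    return np.eye(2) - np.linalg.solve(B, np.diag(poles))

def simulate_closed_loop(b0, alpha0, L=0.30, dt=0.01, steps=1000):
    """Euler simulation of eq. (19): x_dot = B(I - K)x + B [b, alpha]^T,
    with state x = [linear displacement, heading]; b and alpha are held
    constant here purely for illustration."""
    B = np.array([[0.5, 0.5], [1.0 / L, -1.0 / L]])
    K = place_gain(B)
    A = B @ (np.eye(2) - K)          # closed-loop dynamics matrix
    x = np.zeros(2)
    r = np.array([b0, alpha0])       # reference [b(t), alpha(t)]
    for _ in range(steps):
        x = x + dt * (A @ x + B @ r)
    return x
```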
IV. EXPERIMENTAL RESULTS
The target-tracking simulation was performed in MATLAB and supporting software was developed in C#. Preliminary practical experiments, with both robot and camera, validate the modelling. The displacements shown in figure 10 are part of tens of successfully observed tests, figures 11-12. Odometry data, battery voltage and current are obtained in real time.
For the simulation, the target and the robot are positioned at random, with limited speeds, the target performing its motion with a change in acceleration. The poles of the closed-loop system were placed at $[-0.8, -8]$.
Fig. 10. Software of SAVAR
The results show that the simulation model, assumptions and simplifications adopted constitute a solution to the problem. The Kalman filter was able to correctly process a track, showing good performance, figure 13. The SAVAR was able to track the test target, performing an interception trajectory with satisfactory performance. The WLAN network was adequate for the real-time feedback control loops, considering the requirements of this research (a 15-20 FPS rate and $T = 1/25$ s).
V. CONCLUSION
This paper presented the concept of a visual monitoring system and a control module for wheeled mobile robots, with feedback from the plant over a WiFi wireless Ethernet network.
Fig. 11. Kinematic simulation of the control
Fig. 12. Simulating the movement of the wheels
Fig. 13. Position of the target with constant acceleration - Kalman filter result
A control unit for the robot was tested, and test programs were written for its basic functions, all fully functional. It can communicate successfully via WiFi to RS232, sending commands to and receiving data from the standard Ethernet/serial sensor modules. A differential-drive experimental mobile robot was developed for use by the system, and a preliminary visual tracking algorithm, based only on color and shape, was successfully tested. Future work includes model validation and the implementation of an obstacle-detection algorithm, enabling robot tracking and navigation in semi-structured environments.
ACKNOWLEDGMENT
The authors would like to thank Mr. Gilberto Figueiredo Machado, of ITEG, for the sharpening, tooling and machining of the mechanical parts for the robot.
REFERENCES
[1] N. Papanikolopoulos, P. Khosla, and T. Kanade, "Visual Tracking of a Moving Target by a Camera Mounted on a Robot: A Combination of Control and Vision," IEEE Transactions on Robotics and Automation, pp. 14-35, February 1993.
[2] C.-Y. Tsai, K.-T. Song, X. Dutoit, H. Van Brussel, and M. Nuttin, "Robust visual tracking control system of a mobile robot based on a dual-Jacobian visual interaction model," Robot. Auton. Syst., 2009.
[3] M. Felser, "Real-Time Ethernet - Industry Prospective," Proceedings of the IEEE, 2005.
[4] T. Dinh, Q. Yu, and G. Medioni, "Real time tracking using an active pan-tilt-zoom network camera," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2009.
[5] L. Freda and G. Oriolo, "Vision-based interception of a moving target with a nonholonomic mobile robot," Robot. Auton. Syst., 55(6): 419-432, 2007.
[6] R. M. Figueiredo, P. F. F. Rosa, A. Carrilho, and D. A. Felix, "Um Sistema de Acompanhamento Visual para Robôs Móveis Semi-autônomos em Ambientes Semi-estruturados" (A Visual Tracking System for Semi-autonomous Mobile Robots in Semi-structured Environments), VI Simpósio Brasileiro de Engenharia Inercial, Rio de Janeiro, 2010.