
2nd IFAC Workshop on

Distributed Estimation and Control in Networked Systems


Annecy, France, September 13-14, 2010

A tracking algorithm for PTZ cameras 


Davide M. Raimondo ∗  S. Gasparella ∗∗  D. Sturzenegger ∗
J. Lygeros ∗  M. Morari ∗

∗ Automatic Control Laboratory, Electrical Engineering,
ETH Zurich, Switzerland
(e-mail: {davide.raimondo,lygeros,morari}@control.ee.ethz.ch,
[email protected])
∗∗ Videotec S.p.A., Via Friuli 6, I-36015 Schio (VI), Italy
(e-mail: [email protected])

Abstract: This paper presents a tracking algorithm for PTZ (pan-tilt-zoom) cameras. The
tracked objects are Mini 1:43 scale RC cars, described by a unicycle model. The algorithm is
based on the combination of an EKF (Extended Kalman Filter) and a PF (Particle Filter). A
scanning procedure is used to explore the environment. Once targets are detected, the EKF is
used to predict their future positions. The PTZ cameras are then moved so as to guarantee a
certain probability of target detection at the next time instant. If a target is lost, a particle
filter is exploited; if the target is found again, the EKF is restored. If this does not happen
within a predefined number of steps, the scanning procedure restarts.

Keywords: Tracking, EKF, PF, Simulated Annealing.

1. INTRODUCTION

Nowadays, security is a major issue in public places and video surveillance systems are
being installed everywhere. Surveillance systems usually employ one or more fixed-view
cameras to analyze indoor and outdoor areas, see for example Black and Ellis (2006). On
the other side, pan-tilt-zoom cameras have the capability of dynamically scanning their
environment, see for example Dinh et al. (2009), where communication between the control
unit and the PTZ camera is via a TCP/IP network. In recent sensor networks, the advantages
of using pan-tilt-zoom cameras have been considered, see for example Soto et al. (2009),
Piciarelli et al. (2009), Bellotto et al. (2009), Avni et al. (2008) and Everts et al. (2007).
A very promising tracking approach consists in merging the 2D observations from each
camera view to obtain a 3D world-coordinate description of the targets. As discussed in
Black and Ellis (2006), tracking in 3D offers benefits in terms of allowing multiple views
to be combined to generate a network field of view (FOV), i.e. the FOV of all the cameras
combined. In order to achieve such a result, one of the main issues regards the necessity of
computing the relative locations and orientations of the various cameras, see e.g. Stein (1999).

The fundamental building block of a tracking system is a filter for recursive target state
estimation. The Kalman Filter and Particle Filters are the most popular approaches used
nowadays in visual tracking, see for example Muñoz-Salinas et al. (2010), Yao and Odobez
(2008), Czyz et al. (2007), Black and Ellis (2006).

The objective of this paper is to present a fully automated surveillance system able to
detect and track targets as they move through the whole monitored site. The approach we
use relies on a combination of the Extended Kalman Filter (see Jazwinski (1970)) and
Particle Filters (see Doucet et al. (2001)). A nonlinear model, i.e. a unicycle model, is used
to describe the target dynamics. The target inputs are unknown and modeled as process
noise. The EKF predicts the objects' future state by linearizing the model, and the PTZ
cameras are then moved in order to guarantee a certain probability of detection at the next
time instant. If a target is lost, the EKF in open-loop prediction could be used for finding
it again but, instead, we use particle filters. The PF can deal with nonlinear models and
non-gaussian noise. Moreover, the fact that targets do not appear in the cameras' FOV can
be used by the PF in order to refine the target prediction. A potential problem of the PF is
the loss of exploration capabilities over time. For this reason, the PF is used only when a
target is lost. If the target is found again, the EKF is restored. If this does not happen in a
predefined number of steps, a scanning procedure is started.

The paper is organized as follows. First, the cameras test-bed is described in Section 2.
Then Section 3 presents the target model. Section 4 presents the tracking algorithm and
Section 5 the results. The paper ends with the conclusions in Section 6.

This research has been partially supported by the European Commission under the project
Feednetback FP7-ICT-223866 (www.feednetback.eu).

2. CAMERAS

In collaboration with Videotec S.p.A., ETH has set up in its laboratory a test bed comprising
pan-tilt-zoom Ulisse compact cameras (see Figure 1). The testbed is used to design an
automated surveillance system able to detect and track targets as they move through the
monitored site.

978-3-902661-82-1/10/$20.00 © 2010 IFAC 61 10.3182/20100913-2-FR-4014.00060



Fig. 1. Testbed: pan-tilt-zoom Ulisse compact camera

2.1 Cameras Model

For a given camera, the roto-translations involved between the optical center reference
system (oc) and the world reference system (w) can be compactly described as follows (see
also Figure 2):

    [xw]   [0]        ( [D]        ( [xoff]   [xoc] ) )
    [yw] = [0] + Rθ · ( [0] + Rψ · ( [yoff] + [yoc] ) )          (1)
    [zw]   [H]        ( [0]        ( [zoff]   [zoc] ) )

where pw = [xw yw zw]ᵀ are the coordinates of a point p in the world reference system,
while poc = [xoc yoc zoc]ᵀ are its coordinates in the optical center reference system.
H, D, xoff, yoff, zoff are parameters (see Figure 2) and

         [cθ −sθ 0]        [ cψ 0 sψ]
    Rθ = [sθ  cθ 0] , Rψ = [  0 1  0]
         [ 0   0 1]        [−sψ 0 cψ]

are the rotation matrices involved (cθ = cos(θ), sθ = sin(θ)).

Fig. 2. Pinhole camera model

Given a point expressed in optical center coordinates, its position in the image view frame
(im) is obtained as follows:

    xim = λ(ζ) yoc/xoc ,   yim = −λ(ζ) zoc/xoc

where λ(ζ) = λ1 ζ is the focal length for a given level of zoom ζ, while λ1 is the value of λ
for ζ = 1.

Cameras dynamics are considered constrained as follows:

    θmin ≤ θ ≤ θmax ,  |θ(k + 1) − θ(k)| ≤ Δθ
    ψmin ≤ ψ ≤ ψmax ,  |ψ(k + 1) − ψ(k)| ≤ Δψ                    (2)
    ζmin ≤ ζ ≤ ζmax ,  |ζ(k + 1) − ζ(k)| ≤ Δζ

for all k ≥ 0.

See Raimondo et al. (2010) for details about the identification of the extrinsic and intrinsic
camera parameters.

3. TARGETS

Fig. 3. Targets: Mini 1:43 scale RC cars Kyosho dNano

In our experiments so far, Mini 1:43 scale RC cars Kyosho dNano have been considered as
targets (see Figure 3). Tracking these race cars can be quite challenging since they can
achieve speeds of up to 5 m/s. Target detection is based on colors (see Raimondo et al.
(2010) for details).

3.1 Targets 3D position and orientation

Each time a target is detected, its position/orientation is available in terms of the image
view reference system. In order to guarantee target hand-off between different cameras and
reliable target tracking, the 3D position/orientation is required. In general, the 3D
position/orientation can be inferred if at least two cameras are looking at the same object.
In our particular case, the environment in which the cars move is a planar ground plane.
Under this assumption, one camera is sufficient (see Raimondo et al. (2010) for details).

3.2 Unicycle Model

Targets dynamics have been modeled as follows (see also Figure 4):

    x(k + 1) = x(k) + cos(φ(k)) v(k) ΔT
    y(k + 1) = y(k) + sin(φ(k)) v(k) ΔT
    φ(k + 1) = φ(k) + ω(k) ΔT                                    (3)
    v(k + 1) = v(k) + a(k) ΔT
    ω(k + 1) = ω(k) + α(k) ΔT

where x and y are the target coordinates in the world reference system, φ the orientation,
v and ω the linear and angular velocities, a and α the linear and angular accelerations, and
ΔT the sampling time. The accelerations, inputs of the model, are unknown but bounded:

    |a| ≤ amax ,  |α| ≤ αmax

Targets velocities are also bounded:

    |v| ≤ vmax ,  |ω| ≤ ωmax

Such bounds are available from the cars' manufacturer. Note that the system measurements
are the position x, y and the orientation φ.

3.3 Stochastic Model

Since the accelerations are unknown, they can be modeled as process noise. Moreover, the
measurements are also corrupted by noise. If we denote with x = [x y φ v ω]ᵀ the state
vector and with h(x) = [x y φ]ᵀ the observation function, we can write the following
stochastic model:

    x(k + 1) = f(x(k)) + w(k)
    z(k) = h(x(k)) + v(k)                                        (4)
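As a concrete illustration of (3)-(4), the discrete-time unicycle dynamics with accelerations entering as process noise can be sketched in a few lines of Python (a minimal simulation for exposition, not the authors' C implementation; the noise levels are placeholders taken from the settings reported later in Section 5, with σ2 converted to rad/s²):

```python
import math
import random

DT = 0.3  # sampling time ΔT [s], as in Section 5

def f(state, a=0.0, alpha=0.0):
    """Deterministic unicycle map (3): state = (x, y, phi, v, omega)."""
    x, y, phi, v, omega = state
    return (x + math.cos(phi) * v * DT,
            y + math.sin(phi) * v * DT,
            phi + omega * DT,
            v + a * DT,
            omega + alpha * DT)

def step(state, rng):
    """One step of the stochastic model (4): the unknown accelerations
    act as process noise on v and omega only."""
    a = rng.gauss(0.0, 1.0)       # sigma_1 = 1 m/s^2
    alpha = rng.gauss(0.0, 0.26)  # sigma_2 = 15 deg/s^2, in rad/s^2
    return f(state, a, alpha)

def h(state):
    """Observation function: cameras measure position and orientation."""
    x, y, phi, _, _ = state
    return (x, y, phi)
```

For instance, from x(0) = (0, 0, 0, 1, 0) the deterministic part advances the x coordinate by vΔT = 0.3 m per step.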

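Looking ahead to the EKF used by the tracking algorithm of Section 4, the covariance prediction step requires the Jacobian of the map f defined by (3). A minimal, dependency-free sketch (this Jacobian is derived here from (3); it is not spelled out in the paper):

```python
import math

DT = 0.3  # sampling time ΔT [s]

def jacobian_f(state):
    """Jacobian of the unicycle map (3) w.r.t. the state (x, y, phi, v, omega)."""
    _, _, phi, v, _ = state
    s, c = math.sin(phi), math.cos(phi)
    return [[1, 0, -s * v * DT, c * DT, 0],
            [0, 1,  c * v * DT, s * DT, 0],
            [0, 0, 1, 0, DT],
            [0, 0, 0, 1, 0],
            [0, 0, 0, 0, 1]]

def ekf_predict_cov(P, F, Q):
    """Covariance prediction P+ = F P F^T + Q, written with plain lists."""
    n = len(P)
    FP = [[sum(F[i][k] * P[k][j] for k in range(n)) for j in range(n)]
          for i in range(n)]
    return [[sum(FP[i][k] * F[j][k] for k in range(n)) + Q[i][j]
             for j in range(n)] for i in range(n)]
```

With P(0) diagonal as in Section 4.1 and Q built from the acceleration variances, iterating `ekf_predict_cov` gives the open-loop growth of the prediction uncertainty.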

The process noise and the observation noise are respectively

           [   0    ]
           [   0    ]          [ vx(k) ]
    w(k) = [   0    ] , v(k) = [ vy(k) ]
           [ a(k)ΔT ]          [ vφ(k) ]
           [ α(k)ΔT ]

while z are the measurements and f is derived from eq. (3). Assume that the initial state is
independent of the process noise and that its distribution is given through a probability
density function (pdf) p(x(0)). Moreover, assume that the noise distributions are known.
This implies that the system can be equivalently represented using two pdfs:

    x(k) ∼ px(·, x(k − 1))
    z(k) ∼ pz(·, x(k))

Here px(·, x(k − 1)) is a conditional pdf that models the stochastic dynamics of the system
state, determined by f and the pdf of w, while pz(·, x(k)) is a conditional pdf that models
the measurement probability distribution, determined by h and the pdf of v.

Fig. 4. Unicycle model

4. TRACKING ALGORITHM

The tracking algorithm we propose combines the Extended Kalman Filter and Particle
Filters. At the beginning, a scanning procedure explores the environment. When a target is
detected, the EKF is used to predict its future state. The camera inputs, i.e. zoom, pan and
tilt angles, are then computed in order to guarantee a certain probability of detection at
the next time instant while at the same time minimizing the variation w.r.t. the previous
inputs and maximizing the target resolution, i.e. the "zoom in" value.

If a target is lost, a Particle Filter is exploited. As already said, the PF can deal with
nonlinear models and non-gaussian noise. Moreover, with particle filters we can refine the
target prediction also by taking into account that, when a target is not detected, it is not
inside the projection of the camera image view on the ground (see Figure 5). A potential
problem of the PF is the loss of exploration capabilities over time. For this reason, the PF
is used only when a target is lost.

Fig. 5. Projection of camera image view on the ground

When the PF is exploited, the constraint on the probability of detection is enforced by
requiring a certain percentage of the particles to be inside the camera view at the next
time instant. If the target is found again, the EKF is restored. If this does not happen
within a predefined number of steps, a scanning procedure is started. We do this since after
some steps the uncertainty about the future target position will be so large that the PF
will not help anymore.

4.1 EKF

The classical derivation of the EKF can be found e.g. in Jazwinski (1970). In order to
implement the EKF, both the process noise and the observation noise are supposed to be
gaussian:

    a ∼ N(0, σ1²) ,  α ∼ N(0, σ2²)
    vx ∼ N(0, σx²) ,  vy ∼ N(0, σy²) ,  vφ ∼ N(0, σφ²)

Note that the observation noises are related to the camera accuracy, i.e. it is reasonable to
assume they are gaussian. Concerning the process noise, we have computed the variances
according to the acceleration bounds. The initial state x(0) is assumed to follow a known
Gaussian distribution x(0) ∼ N(x̂(0), P(0)), where x̂(0) = [x y φ 0 0]ᵀ and

    P(0) = diag(σx², σy², σφ², σv², σω²)

where σv and σω are computed according to the velocity bounds.

Under the assumption that the noises are zero-mean gaussian, x(k + 1) ∼ N(x̂(k + 1),
P(k + 1)), where x̂(k + 1) is the predicted state at time k + 1 and P(k + 1) the associated
covariance matrix.

When a target is lost, the EKF in open-loop prediction could provide poor performance.
For this reason, in that case, the PF is exploited instead.

4.2 Particle filters

Particle filters (or Sequential Monte Carlo methods, see e.g. Doucet et al. (2001)) are fast
estimation techniques that perform a numerical approximation of the pdf of interest using
simulation. Consider a generic time k ≥ 0, and denote by Z(k) = {z(i)}i=0,...,k and
X(k) = {x(i)}i=0,...,k respectively the sequences of measurements and states up to time k.
The main idea of particle filters is to approximate the continuous probability distribution
of interest, i.e. p(X(k)|Z(k)), using a discrete distribution comprising weighted samples
(known as particles). To do this, N independent identically distributed particles
X¹, ..., X^N are extracted from p(X(k)|Z(k)), and an empirical estimate of the distribution
is constructed:

    p̂(X(k)|Z(k)) = (1/N) Σj=1..N δXj(k)(X(k))
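Before continuing the derivation, the particle bookkeeping just introduced can be made concrete with a generic SIR-style step (normalize the weights, resample with replacement, propagate), in the spirit of Doucet et al. (2000). The `propagate` callback and the toy weights below are illustrative placeholders; the FOV-based weighting specific to our setting is described in the remainder of this section.

```python
import random

def normalize(q):
    """Normalize importance weights: q_j -> q_j / sum_l q_l."""
    s = sum(q)
    return [w / s for w in q]

def resample(particles, weights, rng):
    """Draw N particles with replacement according to the normalized weights."""
    return rng.choices(particles, weights=weights, k=len(particles))

def sir_step(particles, weights, propagate, rng):
    """One SIR iteration: normalize, resample, then predict each particle
    through the (stochastic) dynamics."""
    w = normalize(weights)
    resampled = resample(particles, w, rng)
    return [propagate(p, rng) for p in resampled]
```

For example, with weights (0, 1) only the second particle survives resampling, and every output particle is its one-step prediction.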


where δXj(k) denotes the Dirac mass at particle Xj(k). Then the expectation of any
integrable function g can be estimated by

    E[g(X(k), k)] ≈ ∫ g(X(k), k) p̂(X(k)|Z(k)) dX(k) = (1/N) Σj=1..N g(Xj(k), k)

This estimator is unbiased and (under weak assumptions) converges to the true expectation
as the number of particles N tends to infinity, see e.g. Crisan and Doucet (2002). In cases
where it is not possible to sample from p(X(k)|Z(k)) directly, a technique known as
importance sampling can be employed. An algorithm that performs these operations
recursively is Sequential Importance Resampling (SIR), see Doucet et al. (2000).

As previously stated, even when a target is lost we have a useful piece of information: the
target was not inside the projection of the camera image view on the ground. Taking this
fact into account, we can formulate our SIR algorithm.

SIR for PTZ cameras (lost target case)

Require: η, p(x(k − 1)) and imax. η is the probability that a target is detected when inside
a camera FOV (usually close to 1). p(x(k − 1)) is the distribution of the estimated state
vector at time k − 1, i.e. x(k − 1) ∼ N(x̄(k − 1), P̄(k − 1)), where x̄(k − 1) and P̄(k − 1)
are computed via the EKF. imax is the maximum number of PF iterations.

Initialization. Extract N particles {Xj(k − 1)}j=1,...,N = {xj(k − 1)}j=1,...,N from
p(x(k − 1)). For j = 1, ..., N, extract xj(k) ∼ px(·|xj(k − 1)) and set
Xj(k) = (Xj(k − 1), xj(k)).
while θ(k), ψ(k) and ζ(k) are such that the target is not detected do
  if the target has not been detected for imax subsequent steps then
    compute θ(k + 1), ψ(k + 1) and ζ(k + 1) with a predefined scanning procedure.
  else
    set qj(k) = 1 − η if xj(k) is inside the projection of the image view and qj(k) = η
    otherwise.
    Normalization. Normalize the weights qj(k) as follows:
      q̃j(k) = qj(k) / Σl=1..N ql(k)
    Resampling. Extract N particles {X̃j(k)} from the set {Xj(k)}j=1,...,N with
    replacement according to the importance weights q̃j(k).
    Prediction. For j = 1, ..., N, extract xj(k + 1) ∼ px(·|xj(k)) and set
    Xj(k + 1) = (Xj(k), xj(k + 1)).
    Iteration. Compute θ(k + 1), ψ(k + 1) and ζ(k + 1).
  end if
  Increment k.
end while
Restart EKF.

Fig. 6. SIR algorithm: example. Figure (a) shows in red samples of the probability
distribution p(x(k − 1)) and in green the N particles xj(k), j = 1, ..., N. Figure (b) shows
time k + 1. The target was not detected. In green we have again the N particles xj(k),
while in red the resampled N particles x̃j(k). Figure (c) shows in red x̃j(k) and in green
xj(k + 1), j = 1, ..., N.

The PF can deal with non-gaussian noise. We take advantage of that by describing the
accelerations in a more realistic way. For the same reason, the Initialization phase starts
from the distribution of the estimated state vector at time k − 1 instead of the distribution
of the predicted state vector at time k.

The proposed SIR algorithm ends when the target is found again; in that case the EKF is
restored. If this does not happen within a predefined number of steps, a scanning procedure
is started. We do this since after some steps the uncertainty about the future target
position will be so large that the PF will not help anymore. A possible scanning technique
could be the one described in Avni et al. (2008).

A potential problem of the SIR algorithm is the loss of diversity: particles with large
weights are selected more and more often in the Resampling step, eventually eliminating
all other particles. The loss of diversity diminishes the exploration capabilities of the
algorithm. Therefore, we employ particle filters only when a target is lost.

Figure 6 shows an example of our SIR algorithm.

4.3 Optimization Problem

The objective of the tracking algorithm is to compute the camera inputs, i.e. zoom, pan
and tilt angles, in order to guarantee a certain probability of target detection at the next
time instant while at the same time minimizing the variation w.r.t. the previous inputs and
maximizing the target resolution, i.e. the "zoom in" value. This problem can be formulated
as follows:

    min over θi(k+1), ψi(k+1), ζi(k+1):  Σi=1..n [ κ1 Δθi² + κ2 Δψi² + κ3 Δζi² + κ4/ζi(k+1)² ]
    subject to:  cameras dynamics (2)                                        (5)
                 P(targetj ∈ FOV∪(k + 1)) ≥ α ,  ∀j = 1, ..., m


where n and m are respectively the number of cameras and the number of targets,
Δθi = θi(k + 1) − θi(k), Δψi = ψi(k + 1) − ψi(k), Δζi = ζi(k + 1) − ζi(k), and α is the
required probability of detection. κ1, κ2, κ3, κ4 are constants, while FOV∪(k + 1) is the
union of the cameras' fields of view at time k + 1. The relation between the FOVs and the
inputs is described in Section 2.1.

When the EKF is exploited, the probability-of-detection constraint is enforced by using the
method described in Sommariva and Vianello (2007), which numerically approximates the
integral of a bivariate normal distribution over an arbitrary polygon.

When the PF is used instead, the probability of detection is enforced by requiring at least a
percentage α of the particles to be inside the cameras' FOV at the next time instant.

The optimization is performed using Simulated Annealing (SA), see e.g. Kirkpatrick (1984),
with a time limit equal to the sampling time.

Parallel Simulated Annealing  One of the major drawbacks of simulated annealing is its
very slow convergence, especially for problems with a large search space. Under the
assumption that each camera has a dedicated processor on board and that communication
between cameras is available, a parallel version of SA can be implemented. We assume also
that the cameras are calibrated, i.e. their homographies are known to all the cameras.
There have been many attempts to develop parallel versions of simulated annealing, see
e.g. Aarts and Korst (1989). One of the approaches described in Aarts and Korst (1989) is
the clustering algorithm. Such an algorithm takes advantage of the fact that a good initial
solution gives SA faster convergence. Initially, the n cameras of the network run the SA
algorithm using different initial solutions. It is assumed that the processors have different
random seeds. After a fixed number of iterations, they exchange their partial results. The
best partial solution is then used as the new initial solution for all processes and SA is
started again. This process is repeated until the time limit (i.e. the sampling time) is
exceeded.

When the number of targets and cameras involved in the problem is large, the SA search
space is very large. This means that even finding a feasible solution to problem (5) will be
difficult, especially with a time-limit constraint. For this reason, one can think of solving
problems with a smaller number of optimization variables by fixing some camera inputs to
their previous values, i.e. those cameras will not be moved. Given n, the number of cameras,
and a, the number of cameras we want to move, we can solve in parallel n!/(a!(n − a)!)
problems (all the possible combinations without repetitions) on different processors, i.e.
cameras, and then, by communicating, choose the minimum-cost feasible solution. A
possible strategy could be the following:

• start the procedure with a = 1
• solve in parallel all possible combinations of a cameras for a fixed number of iterations
• initialize the SA optimization problems with the minimum-cost feasible solution
  obtained for a − 1
• iterate the procedure till the time limit is exceeded.

Suppose for example n = 3. When a = 1 we solve on the three processors in parallel the
problems with only camera 1, 2 or 3 active. Then, for a = 2, we solve in parallel the
problems with cameras 1−2, 1−3, 2−3 active. For a = 3 we just have to solve the problem
with all the cameras active. In that case the clustering algorithm can be employed for
parallelizing the problem and speeding up the convergence.

Another expedient for reducing the number of optimization variables is to exclude a priori
from the optimization the cameras that cannot really contribute to increasing the target
probability of detection. This evaluation can easily be done by looking at the camera
constraints (2).

The application of parallel simulated annealing to our tracking problem still deserves
further analysis, and its use in our test-bed is currently under investigation. Another
problem that deserves further investigation is distributed Kalman/Particle Filtering
estimation, see e.g. Olfati-Saber (2007), Simonetto and Keviczky (2009).

5. RESULTS

The tracking algorithm presented in Section 4 has been tested in our laboratories. In order
to speed up communication and computation, the algorithm has been implemented in C.
For the moment, a one car - one camera scenario has been considered. We have used the
following settings: ΔT = 0.3 s, σ1 = 1 m/s², σ2 = 15°/s², σx = σy = 0.01 m, σφ = 5°,
σv = 5 m/s, σω = 45°/s. The particle filter generates at each time N = 1000 particles,
η = 0.05 and imax = 4. κ1 = κ2 = 1, κ3 = κ4 = 0.1. Simulated annealing has been used
with initial temperature T0 = 30, final temperature Tend = 0.1 and cooling coefficient
αc = 0.98. The benchmark in Figure 7 has been used for comparing the performances of PF
only and EKF+PF.

Fig. 7. Benchmark. The blue crosses represent the target trajectory.

The results are shown in Video1 (2010) and Video2 (2010) and summarized in Figure 8.
For evaluating the robustness of the algorithms, at frames 22 and 23 we assumed that the
image was blurred and the target not detected, even though it was effectively in the field of
view. Both algorithms found the target again. However, if we look at Video2 (2010), frames
35-41, we see the effect of "loss of diversity" on the PF performance. In fact, the red
crosses, resampled particles close to the measurement, are very few and also far from the
real target position. For this reason the target is lost again at frames 39 and 40, but
fortunately found again.
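Returning to the parallel strategy of Section 4.3, the enumeration of the n!/(a!(n − a)!) camera subsets is easy to sketch; `solve_sa` below is a hypothetical stand-in for one reduced simulated-annealing solve that returns the cost of moving only the chosen cameras:

```python
from itertools import combinations

def active_camera_subsets(n, a):
    """All n!/(a!(n-a)!) ways of choosing which a of the n cameras to move."""
    return list(combinations(range(1, n + 1), a))

def best_over_subsets(n, a, solve_sa):
    """Run one reduced problem per subset (one processor per camera in the
    paper's setting) and keep the cheapest feasible result."""
    results = {s: solve_sa(s) for s in active_camera_subsets(n, a)}
    return min(results.items(), key=lambda kv: kv[1])
```

For n = 3 and a = 2 this enumerates exactly the subsets (1, 2), (1, 3), (2, 3) of the example in the text.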

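As a sanity check on the annealing settings reported in Section 5 (T0 = 30, Tend = 0.1, αc = 0.98), and assuming a plain geometric cooling law T ← αc T, which the paper does not spell out, the length of the cooling schedule can be counted:

```python
def cooling_steps(t0=30.0, t_end=0.1, alpha=0.98):
    """Number of geometric cooling steps T <- alpha * T until T <= t_end."""
    t, n = t0, 0
    while t > t_end:
        t *= alpha
        n += 1
    return n
```

With the values above this gives 283 cooling steps, all of which must fit within the ΔT = 0.3 s sampling period used as the SA time limit.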

These observations, together with the cumulative costs (5) reported in Figure 8, indicate
that PF+EKF is more robust than PF alone.

    Algorithm   Cost      nlost
    PF          392.806   4
    PF+EKF      305.444   2

Fig. 8. PF vs EKF+PF: comparison of performances.

6. CONCLUSIONS

In this paper, a novel tracking algorithm for PTZ (pan-tilt-zoom) cameras has been
presented. A unicycle model has been used to describe the targets' behavior. The proposed
tracking algorithm combines an EKF and Particle Filters: the EKF is used to predict the
future target position, and the PTZ camera inputs are then computed in order to guarantee
a certain probability of target detection at the next time instant. If a target is lost, a
particle filter is exploited; if the target is found again, the EKF is restored. If this does not
happen within a predefined number of steps, the scanning procedure is restarted. The
optimization problem is solved by simulated annealing. The results obtained on our
experimental setup are promising. The possibility of distributing the simulated annealing
problem over several processors, i.e. several cameras, in order to speed up convergence is
under investigation.

ACKNOWLEDGEMENTS

The authors would like to thank I. Lymperopoulos, G. Gennari and P. Donaggio for the
helpful discussions.

REFERENCES

Aarts, E. and Korst, J. (1989). Simulated Annealing and Boltzmann Machines. John Wiley
& Sons, New York.
Avni, O., Borrelli, F., Katzir, G., Rivlin, E., and Rotstein, H. (2008). Scanning and
tracking with independent cameras: a biologically motivated approach based on model
predictive control. Autonomous Robots, 24(3), 285-302.
Bellotto, N., Sommerlade, E., Benfold, B., Bibby, C., Reid, I., Roth, D., Fernández, C.,
Van Gool, L., and Gonzalez, J. (2009). A distributed camera system for multi-resolution
surveillance. In Third ACM/IEEE International Conference on Distributed Smart
Cameras (ICDSC 2009).
Black, J. and Ellis, T. (2006). Multi camera image tracking. Image and Vision Computing,
24(11), 1256-1267.
Crisan, D. and Doucet, A. (2002). A survey of convergence results on particle filtering
methods for practitioners. IEEE Transactions on Signal Processing, 50(3), 736-746.
Czyz, J., Ristic, B., and Macq, B. (2007). A particle filter for joint detection and tracking
of color objects. Image and Vision Computing, 25(8), 1271-1281.
Dinh, T., Yu, Q., and Medioni, G. (2009). Real time tracking using an active pan-tilt-zoom
network camera. In Proceedings of the 2009 IEEE/RSJ International Conference on
Intelligent Robots and Systems, 3786-3793. IEEE Press.
Doucet, A., De Freitas, N., and Gordon, N. (2001). Sequential Monte Carlo Methods in
Practice. Springer Verlag.
Doucet, A., Godsill, S., and Andrieu, C. (2000). On sequential Monte Carlo sampling
methods for Bayesian filtering. Statistics and Computing, 10(3), 197-208.
Everts, I., Sebe, N., and Jones, G. (2007). Cooperative object tracking with multiple PTZ
cameras. In Proceedings of the 14th International Conference on Image Analysis and
Processing.
Jazwinski, A. (1970). Stochastic Processes and Filtering Theory. Academic Press.
Kirkpatrick, S. (1984). Optimization by simulated annealing: Quantitative studies. Journal
of Statistical Physics, 34(5), 975-986.
Muñoz-Salinas, R., Medina-Carnicer, R., Madrid-Cuevas, F., and Carmona-Poyato, A.
(2010). Particle filtering with multiple and heterogeneous cameras. Pattern Recognition.
Olfati-Saber, R. (2007). Distributed Kalman filtering for sensor networks. In Proc. of the
46th IEEE Conference on Decision and Control, San Diego, California.
Piciarelli, C., Micheloni, C., and Foresti, G. (2009). PTZ camera network reconfiguration.
In Third ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC
2009), 1-7.
Raimondo, D., Gasparella, S., Sturzenegger, D., Lygeros, J., and Morari, M. (2010). A
tracking algorithm for PTZ cameras. Technical report, ETH Zürich, Switzerland.
http://control.ee.ethz.ch/index.cgi?page=publications;action=details;id=3609.
Simonetto, A. and Keviczky, T. (2009). Recent developments in distributed particle
filtering: Towards fast and accurate algorithms. In Proc. of the 1st IFAC Workshop on
Estimation and Control of Networked Systems, Venice, Italy.
Sommariva, A. and Vianello, M. (2007). Product Gauss cubature over polygons based on
Green's integration formula. BIT Numerical Mathematics, 47(2), 441-453.
Soto, C., Song, B., and Roy-Chowdhury, A. (2009). Distributed multi-target tracking in a
self-configuring camera network. In CVPR 2009.
Stein, G. (1999). Tracking from multiple view points: Self-calibration of space and time.
In CVPR. IEEE Computer Society.
Video1 (2010). http://control.ee.ethz.ch/~rdavide/EKF.avi.
Video2 (2010). http://control.ee.ethz.ch/~rdavide/PF.avi.
Yao, J. and Odobez, J. (2008). Multi-camera 3D person tracking with particle filter in a
surveillance environment.

