Distributed Monte Carlo Information Fusion and Distributed Particle Filtering
Abstract: We present a Monte Carlo solution to the distributed data fusion problem and
apply it to distributed particle filtering. The consensus-based fusion algorithm is iterative and it
involves the exchange and fusion of empirical posterior densities between neighbouring agents.
As the fusion method is Monte Carlo based it is naturally applicable to distributed particle
filtering. Furthermore, the fusion method is applicable to a large class of networks including
networks with cycles and dynamic topologies. We demonstrate both distributed fusion and
distributed particle filtering by simulating the algorithms on randomly generated graphs.
In Section 3 we apply the Monte Carlo fusion algorithm to the problem of distributed particle filtering. In Section 4 we offer conclusions.

2. DISTRIBUTED MONTE CARLO DATA FUSION

Consider a group of agents indexed in V = {1, . . . , n} and a set of possible time-varying undirected links E(k) ⊂ V × V defining a network graph G(V, E(k)). The neighbour set at agent i is denoted by N_i(k) = {j ∈ V : (i, j) ∈ E(k)} and j ∈ N_i(k) ⇔ i ∈ N_j(k) for undirected topologies. We often drop the network's dependence on k for brevity.

Each agent constructs an initial local posterior of the form

  p(x|y_i) = g(y_i|x) p(x) / ∫ g(y_i|x) p(x) dx ∝ g(y_i|x) p(x)   (1)

from given measurements y_i ∈ R^{m_{y_i}}, where g(y_i|x) is the likelihood function at agent i conditioned on some underlying event x ∈ R^{m_x}. Here, p(x) is the prior information common to all agents. The goal of distributed data fusion in this case is to compute

  p(x|{y_i}_{i∈V}) ∝ p(x) ∏_{i∈V} g(y_i|x)   (2)

locally at each agent i under the constraint that agent i can only share p(x|y_i) with its neighbours in N_i(t). In other words, each agent is constrained to computations involving the local posteriors p(x|y_i) and p(x|y_j) where j ∈ N_i. Obviously, an iterative procedure is required to reach p(x|{y_i}_{i∈V}) at each agent in an incomplete network.

The following theorem is a slight modification of the main result in Olfati-Saber et al. (2006).

Theorem 1. Consider a network G(t) as described above where each agent exchanges π_k^i with π_{k=0}^i = p(x|y_i). Here π_k^i and π_k^j are not conditionally independent. Suppose p(x) > 0 and g(y_i|x) > 0 for all i ∈ V. Then the following statements hold:

i. The agents are capable of asymptotically reaching a consensus on Q ∝ p(x) ∏_{j∈V} g(y_j|x)^{1/n}.

ii. The consensus algorithm for agreement on the value Q takes the form

  π_k^i = (π_{k−1}^i)^{1−|N_i|γ} ∏_{j∈N_i} (π_{k−1}^j)^γ   (3)

where 0 < γ < 1/max({|N_i| : i ∈ V}).²

² The restriction that γ is less than the inverse of the maximum degree in the network is sufficient for each agent to converge to Q; see Olfati-Saber et al. (2006) or Olfati-Saber and Murray (2004).

Proof. It is shown in Olfati-Saber et al. (2006) that π_k^i → ∏_{j∈V} (π_0^j)^{1/n} as k → ∞. In other words, log(π_k^i) → ( ∑_{j∈V} log(π_0^j) ) / n as k → ∞. Note then that in this case π_{k=0}^i = p(x|y_i) ∝ g(y_i|x) p(x) and

  log(π_k^i) → ( ∑_{j∈V} log(π_0^j) ) / n ∝ n log(p(x)) / n + ( ∑_{j∈V} log(g(y_j|x)) ) / n.   (4)

Taking the exponential gives

  Q ∝ p(x) ∏_{j∈V} g(y_j|x)^{1/n}   (5)

which completes the proof. □

Note that despite the fact that each agent is sharing its local posterior, the consensus-based distributed fusion algorithm just depicted converges to p(x) ∏_{j∈V} g(y_j|x)^{1/n} and not to p(x)^n ∏_{j∈V} g(y_j|x)^{1/n} or p(x)^{1/n} ∏_{j∈V} g(y_j|x)^{1/n}. Thus it is not 'double counting' the common prior information p(x), nor is it conservative in this common prior information. The algorithm is actually conservative in ∏_{j∈V} g(y_j|x)^{1/n}, which may be important for consistency as discussed in Bailey et al. (2012). In particular, if y_i and y_j were correlated (i.e. g(y_i|x) and g(y_j|x) were not conditionally independent) and this dependence was not considered, then over-confident results may be obtained if one simply computes ∏_{j∈V} g(y_j|x). Similarly, if p(x) was counted multiple times then over-confident results are clearly obtained. The proposed algorithm converges to the log-linear opinion pool [see Abbas (2009); Bailey et al. (2012)] on the likelihoods multiplied by the common prior, which is guaranteed consistent. It converges to this value in a distributed manner, which generalises Bailey et al. (2012).

Corollary 2. Suppose that g(y_i|x) and g(y_j|x) are conditionally independent for all i, j ∈ V. Define π_{k=0}^i ∝ p(x) g(y_i|x)^n. The proposed consensus-based distributed fusion algorithm converges to Q′ ∝ p(x) ∏_{i∈V} g(y_i|x) as k → ∞. This is exactly the optimal centralised Bayesian result given p(x|y_i) ∝ g(y_i|x) p(x), where g(y_i|x) and g(y_j|x) are conditionally independent and p(x) is the common prior information among all agents.

The algorithm can be rewritten in the form

  π_k^i = π_{k−1}^i ∏_{j∈N_i} ( π_{k−1}^j / π_{k−1}^i )^γ   (6)

which shows that when consensus is reached the iterations reduce to π_k^i = π_{k−1}^i.

This algorithm has many appealing points. Firstly, it is distributed: the agents only share and compute with local knowledge. Secondly, this distributed fusion algorithm can deal with cycles and time-varying topologies (where, for example, the network may be disconnected for many time steps). Thirdly, the algorithm has a guaranteed speed of convergence that is characterised by the network's algebraic connectivity; see Olfati-Saber et al. (2006). Finally, as noted above, the proposed algorithm converges to the log-linear opinion pool multiplied by the common prior information (which is a conservative, guaranteed consistent, version of the optimal Bayesian result regardless of the dependence between g(y_i|x) and g(y_j|x)). Moreover, with a slight modification to the initially shared data we can also achieve distributed convergence to the true optimal Bayesian fusion result as noted in the corollary (and this result is obviously the most desired when it is known that g(y_i|x) and g(y_j|x) are conditionally independent).
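As a concrete illustration of Theorem 1 and update (3), the following minimal sketch runs the consensus iteration on a discretised scalar state. The Gaussian prior and likelihoods, the ring topology, the iteration count and all variable names are illustrative assumptions for this sketch, not anything specified in the paper.

import numpy as np

# Discretised scalar state space (illustrative choice).
x = np.linspace(-10.0, 10.0, 1001)

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

n = 5                                    # number of agents (assumed)
rng = np.random.default_rng(0)
prior = gaussian(x, 0.0, 5.0)            # common prior p(x)
liks = [gaussian(x, rng.normal(1.0, 1.0), 2.0) for _ in range(n)]  # local likelihoods g(y_i|x)

nbrs = [[(i - 1) % n, (i + 1) % n] for i in range(n)]   # ring topology: neighbour sets N_i
gamma = 0.9 / max(len(N) for N in nbrs)                 # 0 < gamma < 1/max|N_i|

# Initial local posteriors pi_0^i = p(x|y_i), normalised on the grid.
pis = [prior * g for g in liks]
pis = [p / p.sum() for p in pis]

# Consensus iterations, update (3) (equivalently (6)), carried out in the log domain.
for _ in range(200):
    logs = [np.log(p) for p in pis]
    pis = []
    for i in range(n):
        li = (1.0 - len(nbrs[i]) * gamma) * logs[i] + gamma * sum(logs[j] for j in nbrs[i])
        p = np.exp(li)
        pis.append(p / p.sum())

# Target of Theorem 1: Q propto p(x) * prod_j g(y_j|x)^(1/n).
Q = prior * np.prod([g ** (1.0 / n) for g in liks], axis=0)
Q = Q / Q.sum()
print(max(np.max(np.abs(p - Q)) for p in pis))   # small after enough iterations

Initialising instead with π_{k=0}^i ∝ p(x) g(y_i|x)^n (replace prior * g by prior * g ** n above) drives the same iteration towards the Corollary 2 target Q′ ∝ p(x) ∏_{i∈V} g(y_i|x).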
2.1 A Monte Carlo Implementation of the Distributed Fusion Algorithm

Given the desirable nature of the proposed data fusion algorithm it remains to establish a Monte Carlo version which allows one to begin with an empirical estimate of π_{k=0}^i ∝ p(x) g(y_i|x)^n or π_{k=0}^i ∝ p(x) g(y_i|x) given by

  π̄_{k=0}^i(x) = (1/N_s) ∑_{ℓ=1}^{N_s} δ(x − x_ℓ)   (7)

where x_ℓ ∈ R^{m_x} are a set of independent and identically distributed samples of p(x) g(y_i|x)^n or p(x) g(y_i|x) etc.

The main contribution of this section is a Monte Carlo version of the algorithm outlined in the previous subsection that allows one to begin with π̄_{k=0}^i. This is desirable when dealing with complex estimation problems and/or when π_{k=0}^i is sourced from alternative Monte Carlo estimations as in Doucet et al. (2001).

So given π̄_{k−1}^j, ∀j ∈ N_i ∪ {i}, we want to compute π̄_k^i at each agent i in the sense that π̄_k^i should be an empirical version of

  π_k^i = π_{k−1}^i ∏_{j∈N_i} ( π_{k−1}^j / π_{k−1}^i )^γ   (8)

where 0 < γ < 1/max({|N_i| : i ∈ V}).

If the supports of π̄_{k−1}^j, ∀j ∈ N_i ∪ {i}, were totally overlapping (i.e. if these estimates were constructed from the same sample points) then one could simply compute π̄_k^i directly via (8). This is essentially the situation considered in Savic et al. (2012) and Lindberg et al. (2013) where the initial posteriors π_{k=0}^i are sampled at the same points for each i ∈ V. However, in the more likely case in which the supports of π̄_{k−1}^j, ∀j ∈ N_i ∪ {i}, are totally disjoint (with probability 1), an alternative method of computing an empirical estimate π̄_k^i of π_k^i is needed.

Note that we obviously cannot sample directly from π_k^i even if we know π̄_{k−1}^j, ∀j ∈ N_i ∪ {i}. We can estimate π_{k−1}^j, ∀j ∈ N_i ∪ {i}, from π̄_{k−1}^j, ∀j ∈ N_i ∪ {i}, using for example Kernel density estimation as in Silverman (1986), but we cannot then use this to compute an estimate of π_k^i as no closed form solution to the update in (8) exists for any reasonable choice of the Kernel density estimate.

Instead we propose the following empirical approximation of π_k^i based on importance sampling; see Doucet et al. (2001). Note

  π_k^i = π_{k−1}^i ∏_{j∈N_i} ( π_{k−1}^j / π_{k−1}^i )^γ
        = q_{k−1}^i ( π_{k−1}^i / q_{k−1}^i ) ∏_{j∈N_i} ( π_{k−1}^j / π_{k−1}^i )^γ   (9)

which leads to the importance-sampled approximation

  π̄_k^i(x) = ∑_{l=1}^{N_s} ( π_{k−1}^i(x_l^i) / q_{k−1}^i(x_l^i) ) ∏_{j∈N_i} ( π_{k−1}^j(x_l^i) / π_{k−1}^i(x_l^i) )^γ δ(x − x_l^i)   (10)

where here x_l^i ∈ R^{m_x} are a set of N_s independent and identically distributed samples of the so-called importance function q_{k−1}^i. Two matters remain. Firstly, one needs an importance function q_{k−1}^i from which one can easily sample. Secondly, we still don't know π_{k−1}^j, ∀j ∈ N_i ∪ {i}, and thus cannot compute π_{k−1}^j(x_l^i), ∀j ∈ N_i ∪ {i}. To resolve the second matter we resort to Kernel density estimation and obtain

  π̄_k^i(x) = ∑_{l=1}^{N_s} ( π̃_{k−1}^i(x_l^i) / q_{k−1}^i(x_l^i) ) ∏_{j∈N_i} ( π̃_{k−1}^j(x_l^i) / π̃_{k−1}^i(x_l^i) )^γ δ(x − x_l^i)   (11)

where again x_l^i ∈ R^{m_x} are samples of q_{k−1}^i and

  π̃_k^i(x) = (1/N_s) ∑_{ℓ=1}^{N_s} (1/h) K( (x − x_ℓ^i) / h )   (12)

where here x_ℓ^i ∈ R^{m_x} are samples of π_k^i, or in other words correspond to the support of π̄_k^i(x), and thus are obviously known (i.e. by assumption at k = 0 we know π̄_k^i(x)). The Kernel K(·) is chosen to be Gaussian in our simulations but other choices are possible; see Silverman (1986).

The importance function q_{k−1}^i(x) must now be chosen and, given the information available at the agents, one obvious choice is

  q_{k−1}^i(x) = π̃_{k−1}^i(x)   (13)

where the support of π̄_{k−1}^i(x) is distributed according to q_{k−1}^i(x) = π̃_{k−1}^i(x) and so sampling from q_{k−1}^i(x) is given. In this case

  π̂_k^i(x) = ∑_{ℓ=1}^{N_s} w_{k,ℓ}^i δ(x − x_ℓ^i)   (14)

where

  w_{k,ℓ}^i = ( π̃_{k−1}^i(x_ℓ^i) / π̃_{k−1}^i(x_ℓ^i) ) ∏_{j∈N_i} ( π̃_{k−1}^j(x_ℓ^i) / π̃_{k−1}^i(x_ℓ^i) )^γ   (15)
            = ∏_{j∈N_i} ( π̃_{k−1}^j(x_ℓ^i) / π̃_{k−1}^i(x_ℓ^i) )^γ   (16)

with w̄_{k,ℓ}^i = w_{k,ℓ}^i / ∑_{ℓ=1}^{N_s} w_{k,ℓ}^i and x_ℓ^i ∈ R^{m_x} sampled from q_{k−1}^i(x) = π̃_{k−1}^i(x), which in practice means the x_ℓ^i are exactly the samples of π̄_{k−1}^i(x) at the previous time step. To combat possible degeneracy we then, given the w_{k,ℓ}^i and x_ℓ^i that define π̂_k^i(x), resample N_s times from a multinomial distribution defined by w̄_{k,ℓ}^i and x_ℓ^i to obtain the final

  π̄_k^i(x) = (1/N_s) ∑_{ℓ=1}^{N_s} δ(x − x_ℓ)   (17)

where now x_ℓ ∈ R^{m_x} corresponds to the set of independent and identically distributed samples of the multinomial distribution just discussed.
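The kernel density estimate (12) is the only density evaluation the method needs. A minimal one-dimensional sketch with a Gaussian kernel follows; the bandwidth rule (Silverman's rule of thumb) and all names are illustrative choices, and in practice a library routine such as scipy.stats.gaussian_kde could be used instead.

import numpy as np

def kde(points, h):
    # Gaussian kernel density estimate as in (12): pi~(x) = (1/Ns) sum_l (1/h) K((x - x_l)/h).
    pts = np.asarray(points, dtype=float)
    def density(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        z = (x[:, None] - pts[None, :]) / h
        return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(pts) * h * np.sqrt(2.0 * np.pi))
    return density

# Estimate a density from its own samples and evaluate it at those samples,
# which is exactly the evaluation required by the weights (15)-(16).
rng = np.random.default_rng(1)
samples = rng.normal(0.0, 1.0, size=500)
h = 1.06 * samples.std() * len(samples) ** (-0.2)   # Silverman's rule of thumb (scalar case)
pi_tilde = kde(samples, h)
print(pi_tilde(samples)[:5])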
2.2 The Distributed Monte Carlo Fusion Algorithm

From the perspective of agent i:

Step 0: Initialisation (k = 0)
Construct the initial empirical density π̄_{k=0}^i(x) in (7) from N_s samples of the local posterior, e.g. π_{k=0}^i ∝ p(x) g(y_i|x)^n or π_{k=0}^i ∝ p(x) g(y_i|x).

Step 1: Exchange
For each j ∈ N_i send {x_{k,ℓ}^i}_{ℓ=1}^{N_s} to agent j. For each j ∈ N_i receive {x_{k,ℓ}^j}_{ℓ=1}^{N_s}.

Step 2: Weighting
For ℓ = 1 . . . N_s compute the particle weight w_{k+1,ℓ}^i via (15). If k = kstop then for ℓ = 1 . . . N_s assign w_{k+1,ℓ}^i = π̃_k^i(x_{k,ℓ}^i).

Step 3: Resample
Resample with replacement N_s points x_{k+1,ℓ}^i from the empirical density π̂_{k+1}^i(x) in (14). This is equivalent to sampling from a multinomial distribution defined by x_{k,ℓ}^i and w_{k+1,ℓ}^i. If k = kstop then the resampled set {x_{kstop+1,ℓ}^i}_{ℓ=1}^{N_s} is an empirical approximation of either Q ∝ p(x) ∏_{i∈V} g(y_i|x)^{1/n} or Q′ ∝ p(x) ∏_{i∈V} g(y_i|x).

The stopping iteration kstop is obviously a design choice, and one could even use some defined threshold on the weights such that the algorithm halts when the weights in (15) approach one. Again, depending on whether the initial likelihoods are conditionally independent or not, one may opt for convergence to Q ∝ p(x) ∏_{i∈V} g(y_i|x)^{1/n} or Q′ ∝ p(x) ∏_{i∈V} g(y_i|x).
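A minimal sketch of one such fusion iteration at a single agent is given below. The function and variable names, the fixed bandwidth and the synthetic neighbour particle sets are assumptions for illustration only, and the special re-weighting applied at k = kstop is omitted.

import numpy as np

def kde_eval(points, x, h):
    # Gaussian KDE (12) built from `points`, evaluated at the locations `x`.
    z = (x[:, None] - points[None, :]) / h
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(points) * h * np.sqrt(2.0 * np.pi))

def fusion_iteration(own, received, gamma, h, rng):
    # One iteration of Steps 1-3 at agent i:
    #  `own`      is agent i's current particle support {x_{k,l}^i},
    #  `received` is the list of particle supports received from the neighbours N_i.
    log_own = np.log(kde_eval(own, own, h))
    log_w = np.zeros_like(own)
    for pts in received:
        # Weights (16): w_l propto prod_j ( pi~^j(x_l) / pi~^i(x_l) )^gamma.
        log_w += gamma * (np.log(kde_eval(pts, own, h)) - log_own)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    # (17): multinomial resampling with replacement gives the new unweighted support.
    return rng.choice(own, size=own.size, replace=True, p=w)

# Hypothetical usage: scalar state, two neighbours, gamma < 1/|N_i|.
rng = np.random.default_rng(2)
own = rng.normal(0.0, 1.0, 400)
received = [rng.normal(0.5, 1.2, 400), rng.normal(-0.3, 0.8, 400)]
own = fusion_iteration(own, received, gamma=0.3, h=0.3, rng=rng)
print(own.mean(), own.std())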
2.3 Illustrative Examples

In this subsection we highlight the performance of the distributed Monte Carlo fusion algorithm initialised simply by π_{k=0}^i ∝ p(x) g(y_i|x)^n and where we seek convergence to Q′ ∝ p(x) ∏_{i∈V} g(y_i|x). The sensor network is shown in Figure 1. The true initial density π_{k=0}^i at each agent was sampled to generate the initial empirical density π̄_{k=0}^i(x), which all the agents would actually have access to in practice. The true initial densities at each agent and the corresponding random samples are shown in Figure 2.

Fig. 1. The sensor network (sensor positions in metres, x−direction versus y−direction).

Fig. 2. The actual initial densities are Gaussian mixtures and the samples are shown above these for clarity.

The ideal centralised optimal Bayesian fusion result, computed on the true underlying densities, is shown in Figure 3; of course we are interested in distributed computation in this work. However, this centralised solution is the ideal solution and represents the outcome we seek through iterative means and via Monte Carlo approximation.

Fig. 3. The ideal centralised optimal Bayesian fusion result computed on the true underlying initial continuous densities. This is not computable in practice as we suppose only the empirical (sampled) density is initially known (and also the underlying true distributions are unlikely to be Gaussian mixtures).

The centralised optimal Bayesian solution was compared to the distributed Monte Carlo algorithm for Bayesian fusion. The Monte Carlo solutions are shown (in colour) against the centralised solution (in black) in Figure 4.

Fig. 4. The outcome from the distributed Monte Carlo fusion algorithm after 50 iterations. The empirical measure sample points are shown along with continuous Kernel density estimates generated from such (for visualisation only). The centralised optimal Bayesian solution shown in Figure 3 is also shown again here, and we can see the Kernel estimates of the empirical measures are close to the centralised solution. They also have converged together and reached consensus.
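The agents' initial densities in these examples are randomly generated Gaussian mixtures (five components sampled at 1000 points in the second example described below). A minimal sketch of such an initialisation follows; the component ranges and all names are illustrative assumptions.

import numpy as np

def random_gaussian_mixture(n_components, rng):
    # Draw random mixture weights, means and standard deviations for one agent.
    w = rng.dirichlet(np.ones(n_components))
    mu = rng.uniform(0.0, 50.0, n_components)      # support roughly matching the example figures
    sigma = rng.uniform(0.5, 3.0, n_components)
    return w, mu, sigma

def sample_mixture(w, mu, sigma, n_samples, rng):
    # Draw i.i.d. samples: these form the agent's initial empirical density (7).
    comp = rng.choice(len(w), size=n_samples, p=w)
    return rng.normal(mu[comp], sigma[comp])

rng = np.random.default_rng(5)
w, mu, sigma = random_gaussian_mixture(5, rng)
x0_samples = sample_mixture(w, mu, sigma, 1000, rng)
print(x0_samples.mean(), x0_samples.std())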
In a second fusion example the true initial density at each agent is a randomly generated Gaussian mixture with 5 components. This initial density was then sampled at 1000 points to generate each agent's initial sampled density π̄_{k=0}^i(x), which all the agents would actually have access to in practice. The true initial densities at each agent and the corresponding random samples are also shown in Figure 5.

Fig. 5. The sensor network and the initial distributions at each agent with their corresponding random samples.

Fig. 6. The ideal centralised optimal Bayesian fusion result computed on the true underlying initial continuous densities. This is not typically computable in practice.

The centralised optimal Bayesian solution was compared to the distributed Monte Carlo algorithm for Bayesian fusion. The Monte Carlo solutions are shown (in colour) against the centralised solution (in black) in Figure 7.

Fig. 7. The final distributions obtained by the distributed Monte Carlo fusion algorithm shown against the centralised solution.

Figure 7 shows that the agents have reached a consensus and that this common value has converged closely to the desired (centralised Bayesian) solution. This multi-modal fusion result shows the accuracy and potential of the distributed Monte Carlo fusion algorithm (as only the initial sample points in Fig. 5 are used in initialising the algorithm and no knowledge about the underlying initial continuous densities is assumed).

3. A NOVEL METHOD FOR DISTRIBUTED PARTICLE FILTERING

In this section we apply the Monte Carlo data fusion algorithm to systems of networked particle filters. Particle filters are Bayesian filters that represent their posterior distributions by sets of unweighted samples.

3.1 The Local Particle Filter

From the perspective of agent i, the standard bootstrap filter proceeds as follows.

Step 1: Importance Sampling
For ℓ = 1 . . . N_s, propagate the samples forward in time x_{t,ℓ}^i = f(x_{t−1,ℓ}^i, u_{t,ℓ}^i), where u_{t,ℓ}^i is a sample from the distribution of process noise.
For ℓ = 1 . . . N_s, evaluate the importance weights w_{t,ℓ}^i ∝ g(y_t^i|x_{t,ℓ}^i) and normalise, ∑_{ℓ=1}^{N_s} w_{t,ℓ}^i = 1.

Step 2: Resample
For ℓ = 1 . . . N_s, sample x_{t,ℓ}^i from the multinomial distribution constructed from the samples x_{t,ℓ}^i and the weights w_{t,ℓ}^i.

After resampling, the samples {x_{t,ℓ}}_{ℓ=1}^{N_s} define the approximation of the posterior density of agent i. That is, we are approximating the true posterior given by

  p(x_t | x_{t−1}, y_t^i) ∝ p(x_t | x_{t−1}) g(y_t^i | x_t)   (18)

via a normalised empirical probability distribution defined by the samples {x_{t,ℓ}}_{ℓ=1}^{N_s}.
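A minimal sketch of one bootstrap time step at a single agent follows; the random-walk dynamics, the Gaussian likelihood and the names used are placeholders for illustration, not the models used in the paper.

import numpy as np

def bootstrap_update(particles, y, f, g, rng):
    # One bootstrap (SIR) step approximating (18):
    # Step 1: propagate through the dynamics (with process noise) and weight by the local likelihood.
    proposed = f(particles, rng)
    w = g(y, proposed)
    w = w / w.sum()
    # Step 2: resample from the multinomial distribution defined by the weighted particles.
    return rng.choice(proposed, size=proposed.size, replace=True, p=w)

# Placeholder scalar model: random-walk dynamics f and Gaussian likelihood g.
rng = np.random.default_rng(3)
f = lambda x, rng: x + rng.normal(0.0, 1.0, size=x.shape)
g = lambda y, x: np.exp(-0.5 * (y - x) ** 2)
particles = rng.normal(0.0, 2.0, 1000)
particles = bootstrap_update(particles, y=1.5, f=f, g=g, rng=rng)
print(particles.mean())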
3.2 The Distributed Particle Filtering Algorithm

Consider a group of agents (implementing individual particle filters) and an information sharing network defined by an undirected graph G as defined in Section 2.

The agents are tasked with estimating a target state. Each agent i ∈ V can make observations y_t^i about the target at each discrete time t. From the perspective of sensor i:

Step 0: Initialisation (t = 0)
Pick γ according to 0 < γ < 1/max({|N_j| : j ∈ V}).
For ℓ = 1 . . . N_s sample x_{t,ℓ}^i ∼ p(x_0^i), where p(x_0^i) is a known initial particle density.

Step 1: Importance Sampling (t > 0)
For ℓ = 1 . . . N_s, propagate the samples forward in time x_{t,ℓ}^i = f(x_{t−1,ℓ}^i, u_{t,ℓ}^i), where u_{t,ℓ}^i is a sample from the distribution of process noise.
For ℓ = 1 . . . N_s, evaluate the importance weights w_{t,ℓ}^i ∝ g(y_t^i|x_{t,ℓ}^i)^{|V|} and normalise.

Step 2: Resample
For ℓ = 1 . . . N_s, sample x_{t,ℓ}^i from the multinomial distribution p̄ constructed from the samples {x_{t,ℓ}^i}_{ℓ=1}^{N_s} and the weights {w_{t,ℓ}^i}_{ℓ=1}^{N_s}.

Step 3: Distributed Fusion
Run the fusion algorithm of Section 2.2 for k = 0 . . . kstop.
For ℓ = 1 . . . N_s assign x_{t,ℓ}^i = x_{kstop,ℓ}^i and w_{t,ℓ}^i = w_{kstop,ℓ}^i. Note that the weights should be equal to 1/N_s as the samples x_{kstop,ℓ}^i should be unweighted.
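The sketch below condenses these steps into a single per-time-step routine at one agent. The scalar placeholder models, the synthetic neighbour exchange and all names are illustrative assumptions, and the re-weighting the paper applies at the final fusion iteration is omitted.

import numpy as np

def kde_eval(points, x, h):
    # Gaussian KDE (12) built from `points`, evaluated at `x`.
    z = (x[:, None] - points[None, :]) / h
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(points) * h * np.sqrt(2.0 * np.pi))

def distributed_pf_step(particles, y, exchange, n_agents, f, g, gamma, kstop, h, rng):
    # One time step of the Section 3.2 algorithm at agent i (sketch).
    # `exchange(k, particles)` stands in for the network: it returns the neighbours'
    # particle supports at fusion iteration k.
    # Step 1: importance sampling with the raised likelihood g(y_t^i | x)^{|V|}.
    particles = f(particles, rng)
    w = g(y, particles) ** n_agents
    w = w / w.sum()
    # Step 2: multinomial resampling.
    particles = rng.choice(particles, size=particles.size, replace=True, p=w)
    # Step 3: distributed fusion for k = 0 .. kstop using the weights (16) and resampling (17).
    for k in range(kstop + 1):
        received = exchange(k, particles)
        log_own = np.log(kde_eval(particles, particles, h))
        log_w = np.zeros_like(particles)
        for pts in received:
            log_w += gamma * (np.log(kde_eval(pts, particles, h)) - log_own)
        w = np.exp(log_w - log_w.max())
        particles = rng.choice(particles, size=particles.size, replace=True, p=w / w.sum())
    return particles

# Hypothetical usage: scalar random-walk model and two synthetic neighbours.
rng = np.random.default_rng(4)
f = lambda x, rng: x + rng.normal(0.0, 0.5, size=x.shape)
g = lambda y, x: np.exp(-0.5 * (y - x) ** 2)
exchange = lambda k, own: [rng.normal(1.0, 1.0, own.size), rng.normal(0.8, 1.2, own.size)]
particles = rng.normal(0.0, 2.0, 500)
particles = distributed_pf_step(particles, y=1.0, exchange=exchange, n_agents=3,
                                f=f, g=g, gamma=0.3, kstop=5, h=0.3, rng=rng)
print(particles.mean())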
3.3 Illustrative Examples

In this subsection we demonstrate the distributed particle filtering algorithm on a nonlinear system. The state equation of this system contains the nonlinear term 25x_{t−1}/(1 + x_{t−1}²) driven by additive process noise u_t, where u_t is Gaussian white noise with variance σ² = 10; and the sensor equation is

  y_t^i = x_t² / 20 + v_t^i,   (21)

where v_t^i is Gaussian white noise with a random variance. We initialised the filters with the state x_0^i = 0.1 for all i ∈ V. This represents a system with a known initial state.

The first simulation was conducted over a network of 10 agents, see Figure 8, with N_s = 1500 samples. The true state of the system is also shown in Figure 8. We use the version of the fusion algorithm that converges to Q′ (see Corollary 2) as the likelihood functions are known to be conditionally independent. Therefore we anticipate that the fusion algorithm will converge to the centralised posterior density.

Fig. 8. The network topology of the first distributed particle filter simulation with n = 10 agents and the true state of the nonlinear dynamical system which the agents are trying to estimate.

Between measurements each agent runs the distributed fusion algorithm with kstop = 15. In Figure 9 we compare the resulting estimates (which we call the fusion estimates) to the desired state and the centralised estimate (computed at a hypothetical agent with access to each agent's likelihood functions). The fusion estimates are shown by the coloured dotted lines while the centralised estimate is shown in solid blue and the true state in solid black. It is clear that after fusion the agents arrive at estimates that are close to the centralised solution. In Figure 10 we show the estimates that would be obtained if each node ran the bootstrap filter algorithm in isolation (i.e. using only their individual local likelihood functions). We refer to these as the isolated estimates as they do not involve any communication (exchange of information). By comparison to the fusion estimates, the isolated estimates are scattered around the centralised solution as expected.

In Figure 11 we show the estimates generated by a local particle filter running at each agent that makes use of just the independent likelihood functions from each agent's neighbours. We refer to these as the local estimates. The local estimate of agent i can be thought of as the centralised estimate in the neighbourhood of agent i. In a complete network the local estimates are equivalent to the centralised estimates.

Fig. 9. The estimates made by the distributed particle filtering algorithm (coloured dotted lines). These are shown with the centralised estimates (solid blue) and the true state (solid black). The figure shows that the agents are approximately reaching a consensus about the fused posterior near the centralised solution as desired.

Fig. 10. The isolated estimates (coloured dotted lines) are shown along with the centralised estimate and the true state. As expected the centralised estimate falls between the isolated estimates which are computed by the standard bootstrap filter algorithm.
In the second example we randomly generate another network of 10 agents and a new underlying system trajectory; see Figure 12. Again the likelihood functions are conditionally independent. The agents run the conservative distributed fusion algorithm with kstop = 20, which converges towards the conservative fusion result Q. The resulting fusion estimates are shown in Figure 13. We can see that the agents are approaching a common value (i.e. they are close to consensus).

In Figures 14 and 15 we show the isolated and local estimates respectively (see the coloured dotted lines). It can be seen that although the local estimates are an improvement on the isolated estimates, the conservative fusion estimates are better again. This simulation shows that it is possible to make useful state estimates even when the dependence between the agents is unknown [Bailey et al. (2012)].

Fig. 11. The local estimates (coloured dotted lines) are shown along with the centralised estimate and the true state. The local estimates are made by exchanging likelihoods between neighbours only. This is important as a benchmark for our proposed algorithm as this is the simplest algorithm that may be implemented in a network of particle filters.

Fig. 12. The network topology of the second distributed particle filter simulation with n = 10 agents and the true state of the nonlinear dynamical system which the agents are trying to estimate.

Fig. 13. The estimates made by the distributed particle filtering algorithm (coloured dotted lines). These are shown with the centralised estimates (solid blue) and the true state (solid black). The figure shows that the agents are approximately reaching a consensus about the fused posterior near the centralised solution as desired.

Fig. 14. The isolated estimates (coloured dotted lines) are shown along with the centralised estimate and the true state. As expected the centralised estimate falls between the isolated estimates which are computed by the standard bootstrap filter algorithm.

Fig. 15. The local estimates (coloured dotted lines) are shown along with the centralised estimate and the true state. The local estimates are made by exchanging likelihoods between neighbours only.
4. CONCLUSION

We presented a practical method for distributed data fusion and particle filtering. The algorithm is consensus-based and involves the exchange of posteriors between neighbours. The proposed solution is robust to time-varying network topologies including those with cycles and is guaranteed consistent.

In the future we would like to experiment with other consensus algorithms such as the dynamic consensus algorithm of Zhu and Martínez (2010) to reduce the communication overhead.

REFERENCES
Abbas, A. (2009). A Kullback-Leibler view of linear and log-linear pools. Decision Analysis, 6(1), 25–37.
Bailey, T., Julier, S., and Agamennoni, G. (2012). On conservative fusion of information with unknown non-gaussian dependence. In 15th International Conference on Information Fusion (FUSION), 1876–1883. Singapore.
Bashi, A.S., Jilkov, V.P., Li, X.R., and Chen, H. (2003). Distributed implementations of particle filters. In Proceedings of the Sixth International Conference of Information Fusion, volume 2, 1164–1171. Cairns, Australia.
Bolić, M., Djurić, P.M., and Hong, S. (2005). Resampling algorithms and architectures for distributed particle filters. IEEE Transactions on Signal Processing, 53(7), 2442–2450.
Doucet, A., de Freitas, N., and Gordon, N. (eds.) (2001). Sequential Monte Carlo Methods in Practice. Springer.
Farahmand, S., Roumeliotis, S.I., and Giannakis, G.B. (2011). Set-membership constrained particle filter: Distributed adaptation for sensor networks. IEEE Transactions on Signal Processing, 59(9), 4122–4138.
Gordon, N., Salmond, D., and Smith, A.F.M. (1993). Novel approach to nonlinear/non-gaussian bayesian state estimation. Radar and Signal Processing, IEE Proceedings F, 140(2), 107–113.
Gu, D. (2007). Distributed particle filter for target tracking. In IEEE International Conference on Robotics and Automation, 3856–3861. Rome.
Gu, D., Sun, J., Hu, Z., and Li, H. (2008). Consensus based distributed particle filter in sensor networks. In International Conference on Information and Automation (ICIA), 302–307. Changsha.
Hlinka, O., Djurić, P.M., and Hlawatsch, F. (2009). Time-space-sequential distributed particle filtering with low-rate communications. In Conference Record of the Forty-Third Asilomar Conference on Signals, Systems and Computers (ASILOMAR), 196–200. Pacific Grove, CA.
Hlinka, O., Hlawatsch, F., and Djurić, P.M. (2013). Distributed particle filtering in agent networks: A survey, classification, and comparison. IEEE Signal Processing Magazine, 30(1), 61–68.
Hlinka, O., Slučiak, O., Hlawatsch, F., Djurić, P.M., and Rupp, M. (2010). Likelihood consensus: Principles and application to distributed particle filtering. In Conference Record of the Forty Fourth Asilomar Conference on Signals, Systems and Computers (ASILOMAR), 349–353. Pacific Grove, CA.
Hlinka, O., Slučiak, O., Hlawatsch, F., Djurić, P.M., and Rupp, M. (2011). Distributed gaussian particle filtering using likelihood consensus. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 3756–3759. Prague.
Hlinka, O., Slučiak, O., Hlawatsch, F., Djurić, P.M., and Rupp, M. (2012). Likelihood consensus and its application to distributed particle filtering. IEEE Transactions on Signal Processing, 60(8), 4334–4349.
Lee, S.H. and West, M. (2009). Markov chain distributed particle filters (mcdpf). In Proceedings of the 48th IEEE Conference on Decision and Control (CDC) and 28th Chinese Control Conference, 5496–5501. Shanghai.
Lee, S.H. and West, M. (2010). Performance comparison of the distributed extended kalman filter and markov chain distributed particle filter (mcdpf). In Proceedings of the 2nd IFAC Workshop on Distributed Estimation and Control in Networked Systems (NecSys'10), 151–156. Annecy, France.
Lee, S.H. and West, M. (2013). Convergence of the markov chain distributed particle filter (mcdpf). IEEE Transactions on Signal Processing, 61(4), 801–812.
Lindberg, C., Muppirisetty, L., Dahlén, K.M., Savic, V., and Wymeersch, H. (2013). Mac delay in belief consensus for distributed tracking. In Proc. of 10th IEEE Workshop on Positioning, Navigation and Communication. Dresden, Germany.
Mohammadi, A. and Asif, A. (2009). Distributed particle filtering for large scale dynamical systems. In IEEE 13th International Multitopic Conference (INMIC), 1–5. Islamabad.
Mohammadi, A. and Asif, A. (2011a). Consensus-based distributed unscented particle filter. In IEEE Statistical Signal Processing Workshop (SSP), 237–240. Nice, France.
Mohammadi, A. and Asif, A. (2011b). A consensus/fusion based distributed implementation of the particle filter. In 4th IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), 285–288. San Juan.
Olfati-Saber, R., Franco, E., Frazzoli, E., and Shamma, J.S. (2006). Belief consensus and distributed hypothesis testing in sensor networks. In Network Embedded Sensing and Control (Proceedings of NESC05 Workshop), volume 331 of Lecture Notes in Control and Information Sciences, 169–182. Springer Verlag.
Olfati-Saber, R. and Murray, R.M. (2004). Consensus problems in networks of agents with switching topology and time-delays. IEEE Transactions on Automatic Control, 49(9), 1520–1533.
Oreshkin, B.N. and Coates, M.J. (2010). Asynchronous distributed particle filter via decentralized evaluation of gaussian products. In 13th Conference on Information Fusion (FUSION), 1–8. Edinburgh.
Savic, V., Wymeersch, H., and Zazo, S. (2012). Distributed target tracking based on belief propagation consensus. In Proceedings of the 20th European Signal Processing Conference (EUSIPCO), 544–548. Bucharest, Romania.
Sheng, X. and Hu, Y.H. (2005). Distributed particle filters for wireless sensor network target tracking. In IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), volume 4, 845–848.
Sheng, X., Hu, Y.H., and Ramanathan, P. (2005). Distributed particle filter with gmm approximation for multiple targets localization and tracking in wireless sensor network. In Fourth International Symposium on Information Processing in Sensor Networks (IPSN), 181–188.
Silverman, B.W. (1986). Density Estimation for Statistics and Data Analysis. Monographs on Statistics and Applied Probability. Chapman and Hall.
Üstebay, D., Coates, M., and Rabbat, M. (2011). Distributed auxiliary particle filters using selective gossip. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 3296–3299. Prague.
Zhu, M. and Martínez, S. (2010). Discrete-time dynamic average consensus. Automatica, 46(2), 322–329.