Hypothesis Pruning in JPDA Algorithm for Multiple Target Tracking
K. M. Alexiev, P. D. Konstantinova
Multiple target tracking in heavy clutter is a challenging task, and many algorithms have been proposed in recent years to solve this problem. One of the most effective and practical of them is the Joint Probabilistic Data Association (JPDA) algorithm. This paper discusses several aspects of this algorithm. Its most time-consuming (combinatorial) part is hypothesis generation and hypothesis score calculation. Most hypotheses are insignificant, with negligible effect on the final result – the choice of the best hypothesis. In this case it is useful to reduce the number of generated hypotheses, and the paper shows how to do this. The obtained results are applicable in all real-time JPDA algorithms and their modifications (IMM JPDA).
∗ The research reported in this paper is partially supported by the Bulgarian Ministry of Education and Science under grants I-1205/2002 and I-1202/2002 and by Center of Excellence BIS21 grant ICA1-2000-70016.
a) no target can create more than one measurement;
b) no measurement can be assigned to more than one target.

The set of all feasible hypotheses includes such hypotheses as the 'null' hypothesis and all its derivatives. The consideration of all possible assignments, including the 'null' assignments, is important for optimal calculation of assignment probabilities [6].
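For illustration only (the paper gives no code), a hypothesis can be validated directly against these constraints. In the sketch below a hypothesis is assumed to be a list assignment, where assignment[i] is the index of the measurement given to target i, or None for the 'null' assignment; this representation is our choice, not the paper's:

```python
# Illustrative sketch (assumed representation): one entry per target, so
# constraint a) - at most one measurement per target - holds by construction.

def is_feasible(assignment):
    """assignment[i]: measurement index for target i, or None ('null')."""
    used = set()
    for meas in assignment:
        if meas is None:
            continue              # 'null' assignment: target not detected
        if meas in used:
            return False          # constraint b): measurement already taken
        used.add(meas)
    return True

print(is_feasible([0, None, 2]))  # True: target 1 carries a 'null' assignment
print(is_feasible([0, 0, 2]))     # False: measurement 0 assigned twice
```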
Hypothesis probability is computed by the expression:

P'(H_l) = \beta^{N_M - (N_T - N_{nD})} (1 - P_D)^{N_{nD}} P_D^{N_T - N_{nD}} \, g_{ij} \cdots g_{mn},   (1)

where β is the probability density of false returns,

g_{ij} = \frac{e^{-d_{ij}^2/2}}{(2\pi)^{M/2} \sqrt{|S|}}

is the probability density that measurement j originates from target i, N_M is the total number of measurements in the cluster, N_T is the total number of targets, d_ij is the statistical distance, N_nD is the number of not detected targets, M is the measurement vector size, and S is the innovation covariance matrix. The step ends with the standard normalization:

P(H_l) = \frac{P'(H_l)}{\sum_{l=1}^{N_H} P'(H_l)},
where N_H is the total number of hypotheses.

To compute, for a fixed i, the association probability p_ij that observation j originates from track i, we have to take a sum over the probabilities of those hypotheses in which this event occurs:

p_{ij} = \sum_{l \in L_j} P(H_l),   j = 1, ..., m_i(k),   i = 1, ..., N_T,

where L_j is the set of indices of all hypotheses which include the event mentioned above, m_i(k) is the number of measurements falling in the gate of target i, and N_T is the total number of targets in the cluster.

For every target the 'merged' combined innovation is computed:

\nu_i(k) = \sum_{j=1}^{m_i(k)} p_{ij} \nu_{ij}(k).
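As an illustration of this step, a minimal sketch under an assumed data layout (the paper gives no code): hypotheses are (probability, assignment) pairs in the representation used earlier, and innovations[(i, j)] is hypothetical notation for ν_ij(k):

```python
import numpy as np

# Sketch with an assumed data layout (not the paper's code): association
# probabilities p_ij and the merged innovation for target i.
# hypotheses: list of (P_l, assignment) pairs with normalized P_l, where
# assignment[i] is the measurement index for target i or None (missed).

def association_probabilities(hypotheses, n_targets, n_meas):
    p = np.zeros((n_targets, n_meas))
    for prob, assignment in hypotheses:
        for i, j in enumerate(assignment):
            if j is not None:
                p[i, j] += prob   # sum of P(H_l) over hypotheses pairing (i, j)
    return p

def merged_innovation(p, innovations, i):
    # nu_i(k) = sum_j p_ij * nu_ij(k); innovations[(i, j)] is assumed to be
    # defined for every gated pair (i, j)
    return sum(p[i, j] * innovations[(i, j)]
               for j in range(p.shape[1]) if p[i, j] > 0.0)
```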
The most time consuming part of the algorithm is hypothesis generation and score computation. The number of all feasible hypotheses increases exponentially with N_M. To avoid these overwhelming computations we take into consideration only a small part of all feasible hypotheses – those with the highest scores. Let us suppose that the first K hypotheses (with highest scores) are under consideration. In order to find the first K best hypotheses we use an algorithm due to Murty [2] and optimized by Miller et al. [3]. This algorithm gives a set of solutions of the assignment problem [4], ranked in increasing order of cost. Every solution of the assignment problem represents a sum of elements of the cost matrix. To define the cost matrix correspondingly, we take the logarithm of both sides of (1). From the left-hand side we obtain the logarithm of the hypothesis probability and, from the right-hand side, a sum of logarithms of the partitioning elements:

\ln P'(H_l) = (N_M - (N_T - N_{nD})) \ln\beta + N_{nD} \ln(1 - P_D) + (N_T - N_{nD}) \ln P_D + \sum \ln g_{ij}.

We construct a cost matrix from the negative logarithms of these elements. In this case the optimal solution (the minimum) of the assignment problem with such a cost matrix will coincide with the hypothesis of highest probability.

In order to use any of the widespread assignment algorithms, as well as the algorithm [1] for finding the K best hypotheses, the cost matrix has to be filled out to a square matrix. The values in the added columns are appropriately chosen so that these columns do not influence the optimal solution.

Let us suppose that the algorithm finds the K best assignments – those with the highest probabilities. The normalization (transformation from likelihood function to probability) can then be done by the equation:

P(H_l) = \frac{P'(H_l)}{\sum_{l=1}^{K} P'(H_l)}.

One important question of practical significance is how to choose the number of generated and calculated hypotheses. The value of K has to be sufficiently small to ensure acceleration of the algorithm and, at the same time, not so small as to distort the computed assignment probabilities. If, for example, the score of every one of these hypotheses differs from any of the others by no more than one order of magnitude, no significant part of the hypotheses can be truncated. If, however, the prevailing share of the total score is concentrated in a small percentage of all hypotheses, then considering only this small percentage becomes very attractive.

The analysis of the hypothesis score distribution shows that the scores of feasible hypotheses decrease very rapidly: some 1-5 per cent of them cover more than 95 per cent of the total score sum. One possible criterion for terminating the hypothesis generation process is given in [1]:

H(n) - H(n+1) < \alpha H(n),

where α << 1 and H(n) denotes the probability density of the n-th hypothesis being true. The implementation of this criterion, however, did not give stable results. The reason is that very often there are subsets of hypotheses with very close scores, even at the beginning of the sorted hypothesis array. Another criterion, providing higher stability, is [1]:

H(n) < \alpha H(1).

In this case the condition is a function of only one hypothesis score – that of the most powerful hypothesis.
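To make the construction concrete, here is a minimal sketch of the cost matrix and of the second stopping rule. The paper does not spell out the matrix layout; the detection and 'null' costs below are our derivation from the log-form of (1), with the constant term N_M ln β dropped, so that the total assignment cost equals −ln P'(H_l) + N_M ln β and the minimum-cost solution is the most probable hypothesis:

```python
import numpy as np

# Sketch (our derivation, not the paper's code). Rows are targets; the
# first n_m columns are measurements, the last n_t columns are per-target
# dummy columns encoding the 'null' (missed-detection) assignment.
# g[i, j] = g_ij for measurements inside target i's gate, 0 otherwise.

BIG = 1e9  # effectively forbids pairings outside the gates

def build_cost_matrix(g, beta, pd):
    n_t, n_m = g.shape
    cost = np.full((n_t, n_m + n_t), BIG)
    for i in range(n_t):
        for j in range(n_m):
            if g[i, j] > 0.0:
                # detection (i, j): -(ln P_D + ln g_ij - ln beta)
                cost[i, j] = -(np.log(pd) + np.log(g[i, j]) - np.log(beta))
        cost[i, n_m + i] = -np.log(1.0 - pd)  # 'null' assignment for target i
    # If the chosen solver requires a square matrix, constant rows can be
    # appended; a constant added to a whole row or column does not change
    # the optimal assignment.
    return cost

def keep_hypothesis(score_n, score_1, alpha=1e-3):
    # Stopping rule H(n) < alpha * H(1): generation stops at the first n
    # for which this returns False. alpha is a tuning choice (alpha << 1).
    return score_n >= alpha * score_1
```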
3. Probabilistic threshold
Let us consider the main equation (1) for the hypothesis probability. Its right-hand side contains several different terms. The first of them, \beta^{N_M - (N_T - N_{nD})}, reflects how many false alarms participate in this hypothesis – that is, how many false alarms fall into the gates of tracks of one and the same cluster. This term is very small: the false alarm probability density β is usually much smaller (by several orders of magnitude) than P_D and g_ij, because the measurements under consideration are gated at the beginning of the algorithm. The second term, 1 − P_D, is usually smaller than P_D. These considerations show that the hypothesis probability reaches its maximal values, for a given number of gated measurements N_M, when N_T is maximal or when N_nD = 0. In this case equation (1) transforms as follows:

P'(H_l) = \beta^{N_M - N_T} P_D^{N_T} \, g_{ij} \cdots g_{mn}.   (2)

Replacing every g_ij in (2) by its minimal in-gate value, attained at the gate boundary where d_ij^2 equals the gating threshold r^2, yields the probabilistic threshold:

P_{thres} = \frac{\beta^{N_M - N_T} P_D^{N_T} \, e^{-N_T r^2/2}}{(2\pi)^{\frac{M N_T}{2}} \prod_{i=1}^{N_T} \sqrt{|S_i|}}.   (4)
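A direct transcription of (4) follows; since (4) is reconstructed here from a partially garbled original, the e^{-N_T r^2/2} factor in particular should be treated as our reading rather than a verified formula. r2 denotes the gating threshold r^2 and S_list collects the innovation covariance matrices S_i:

```python
import numpy as np

# Hedged sketch of the probabilistic threshold (4), as reconstructed above.
def p_thres(beta, pd, n_m, n_t, r2, S_list, m):
    """m is the measurement vector size M; S_list holds S_1 .. S_{N_T}."""
    num = beta ** (n_m - n_t) * pd ** n_t * np.exp(-n_t * r2 / 2.0)
    den = (2.0 * np.pi) ** (m * n_t / 2.0)
    for S in S_list:
        den *= np.sqrt(np.linalg.det(S))
    return num / den
```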
Finally, the main steps of the algorithm for one scan are outlined below (a sketch combining them follows the list):
1. Calculation of d_ij^2;
2. Clusterization (determining N_T and N_M for a cluster);
3. For every cluster:
• calculation of P_thres for the given N_T and N_M;
• calculation of the best hypotheses using the K-best hypotheses algorithm, until P_thres is reached;
• normalization and JPDA estimation.
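Putting the per-cluster steps together, a hedged sketch of the pruned hypothesis generation loop. murty_kbest is an assumed stand-in for the ranked-assignment routine of [2, 3] (not provided here), and log_const restores the constant N_M ln β dropped when the cost matrix above was built:

```python
import math

# Hedged sketch of step 3 for one cluster. murty_kbest(cost) is assumed to
# yield (total_cost, assignment) pairs in increasing order of cost, as the
# ranked-assignment method of [2, 3] does.

def best_hypotheses(cost, p_thres, log_const):
    """log_const = N_M * ln(beta) for the cost matrix sketched earlier,
    so that exp(log_const - total_cost) equals P'(H_l)."""
    hyps = []
    for total_cost, assignment in murty_kbest(cost):
        score = math.exp(log_const - total_cost)
        if score < p_thres:       # scores arrive in decreasing order,
            break                 # so generation stops at the threshold
        hyps.append((score, assignment))
    total = sum(s for s, _ in hyps)
    return [(s / total, a) for s, a in hyps]  # normalize over the K kept
```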
4. Simulation Results
A typical example from numerous simulations is given. The scenario includes 6 targets and 12 measurements (6 of which are false alarms). The target gates share measurements, and all 12 measurements and 6 targets form a cluster (fig. 1).

Figure 1: A scenario of 6 targets and 12 measurements in a cluster.

The total number of hypotheses is 3993. The hypothesis scores, arranged in order of decreasing probability, are depicted in fig. 2; the top-scoring hypotheses form more than 99 per cent of the total score. The last two plots (fig. 3) show several levels of threshold and the corresponding hypothesis score and number of hypotheses.

Figure 3: Simulation results for different thresholds: a) the total score of the hypotheses selected by the corresponding threshold; b) the number of hypotheses selected by the corresponding threshold (the total number of hypotheses is 3993).
5. Conclusions
In this paper a modification of the well-known JPDA algorithm is presented for tracking closely spaced targets in moderate and heavy clutter. Instead of all feasible hypotheses, the presented algorithm generates only part of them: the first K best feasible hypotheses in terms of their probability of being true. This quasi-optimal algorithm significantly reduces the necessary computer resources without the loss of any significant hypotheses and without assignment degradation. An expression for a probabilistic threshold is given to evaluate the number of rejected hypotheses and to estimate the algorithm's processing speedup. The obtained results are applicable in all real-time JPDA algorithms and their modifications (IMM JPDA).

6. References
1. Ljudmil Bojilov, Kiril Alexiev and Pavlina Konstantinova, "An Algorithm Unifying IMM and JPDA Approaches," Comptes Rendus de l'Academie Bulgare des Sciences (to appear).
2. Katta G. Murty, "An Algorithm for Ranking All the Assignments in Order of Increasing Cost," Operations Research 16 (1968): 682-687.
3. Matt L. Miller, Harold S. Stone and Ingemar J. Cox, "Optimizing Murty's Ranked Assignment Method," IEEE Transactions on AES 33, 3 (July 1997): 851-862.
4. Roy Jonker and Ton Volgenant, "A Shortest Augmenting Path Algorithm for Dense and Sparse Assignment Problems," Computing 38 (1987): 325-340.
5. Yaakov Bar-Shalom, ed., Multitarget-Multisensor Tracking: Advanced Applications (Norwood, MA: Artech House, 1990).
6. Samuel S. Blackman, Multiple-Target Tracking with Radar Applications (Norwood, MA: Artech House, 1986).