
Article · January 1997


HYPOTHESIS PRUNING IN JPDA ALGORITHM FOR MULTIPLE
TARGET TRACKING IN CLUTTER*

K. M. Alexiev, P. D. Konstantinova

25A acad. "G.Bonchev" str., Sofia, Bulgaria, [email protected]

Multiple target tracking in heavy clutter is a challenging task, and many algorithms have been proposed in recent years to solve this problem. One of the most effective and practical is the Joint Probability Data Association (JPDA) algorithm. This paper comments on several aspects of this algorithm. Its most time consuming (combinatorial) part is hypothesis generation and hypothesis score calculation, yet most hypotheses are insignificant, with a negligible effect on the final result, the choice of the best hypothesis. In such cases it is useful to reduce the number of generated hypotheses, and the paper shows how to do this. The obtained results are applicable to all real-time JPDA algorithms and their modifications (IMM JPDA).

Keywords: multiple target tracking, JPDA


* The research reported in this paper is partially supported by the Bulgarian Ministry of Education and Science under grants I-1205/2002 and I-1202/2002 and by Center of Excellence BIS21 grant ICA1-2000-70016.

1. Introduction
Multiple target tracking in heavy clutter is a challenging task. It differs from the standard state estimation problem in that the measurement origin is also uncertain. When new measurements are obtained, the association between the measurement list and the track list requires the estimation algorithm to test which measurement-to-track correspondence is correct while simultaneously estimating the target states. Sometimes, when there are closely spaced targets, multiple tracks may share the same measurement(s). Joint events are formed by creating all possible combinations of track-measurement assignments, and the probabilities of these joint events are calculated. The expressions for the joint events incorporate the track existence probabilities of the individual tracks, as well as an efficient weighting approximation for the cluster volume and an a priori probability of the number of clutter measurements in each cluster. From these probabilities the data association and track existence probabilities of the individual tracks are obtained. Several approaches have been proposed to solve the described data association problem [5].

The simplest method is the so-called nearest neighbor (NN) approach, which associates with the track under consideration the gated measurement at minimum statistical distance. The strongest neighbor method can be regarded as a modification of the NN method.

The JPDA algorithm is an extension of the Probabilistic Data Association method that allows for the possibility that a measurement may have originated from one of a number of candidate tracks or from clutter. In each scan JPDA partitions the tracks into clusters, where the tracks in each cluster share common measurements. It generates all possible joint measurement-to-track assignments and calculates the a posteriori probability of each joint event. From these probabilities, the data association coefficients of each track are calculated and then used to update the track estimates.

The multiple hypothesis tracking (MHT) method exhaustively enumerates all possible hypotheses over a number of the most recent frames and chooses the most likely one.

The JPDA algorithm is the most effective of the approaches described above, and it can be applied successfully to multiple closely spaced targets even in the presence of heavy clutter. JPDA is, however, rather complex, because it creates a joint event for each possible combination of measurement origins. The number of joint events can grow very rapidly in a dense clutter situation, and JPDA then requires a fairly large amount of computation to evaluate the probabilities.

To improve this situation, the paper studies the problem of hypothesis generation. An extension of the algorithm from our previous work [1] is proposed. Instead of enumerating all feasible hypotheses, we propose to use a ranked assignment approach to find only the first K best hypotheses. The problem is how many hypotheses K to generate: the value of the threshold K has to be optimal with regard to some criterion. In this paper a probabilistic approximate measure of the necessary number of hypotheses is given.

The paper is organized as follows. The next section briefly describes the common JPDA algorithm. The third section motivates the choice of the probabilistic threshold. The fourth section presents simulation results.

2. JPDA algorithm and K-best hypotheses
When several closely spaced targets form a cluster, the standard JPDA algorithm [5] generates all feasible hypotheses and computes their scores. Every hypothesis meets two important constraints:
a) no target can create more than one measurement;
b) no measurement can be assigned to more than one target.

The set of all feasible hypotheses includes the 'null' hypothesis and all its derivatives. The consideration of all possible assignments, including the 'null' assignments, is important for optimal calculation of the assignment probabilities [6].

The hypothesis probability is computed by the expression:

P'(H_l) = \beta^{N_M - (N_T - N_{nD})} (1 - P_D)^{N_{nD}} P_D^{N_T - N_{nD}} \prod g_{ij},   (1)

where \beta is the probability density of false returns,

g_{ij} = \frac{e^{-d_{ij}^2/2}}{(2\pi)^{M/2} \sqrt{|S|}}

is the probability density that measurement j originates from target i, N_M is the total number of measurements in the cluster, N_T is the total number of targets, d_{ij} is the statistical distance, N_{nD} is the number of non-detected targets, M is the measurement vector size, and S is the innovation covariance matrix. The step ends with the standard normalization:

P(H_l) = \frac{P'(H_l)}{\sum_{l=1}^{N_H} P'(H_l)},

where N_H is the total number of hypotheses.

To compute, for a fixed i, the association probability p_{ij} that observation j originates from track i, we take the sum over the probabilities of those hypotheses in which this event occurs:

p_{ij} = \sum_{l \in L_j} P(H_l), \quad j = 1, \dots, m_i(k), \quad i = 1, \dots, N_T,

where L_j is the set of indices of all hypotheses that include the event mentioned above, m_i(k) is the number of measurements falling in the gate of target i, and N_T is the total number of targets in the cluster.

For every target the 'merged' combined innovation is computed:

\nu_i(k) = \sum_{j=1}^{m_i(k)} p_{ij} \nu_{ij}(k).

The most time consuming part of the algorithm is hypothesis generation and score computation. The number of all feasible hypotheses increases exponentially with N_M. To avoid this overwhelming computation we take into consideration only a small part of all feasible hypotheses, those with the highest scores. Let us suppose that the first K hypotheses (with highest scores) are under consideration. To find the first K best hypotheses we use an algorithm due to Murty [2] and optimized by Miller et al. [3]. This algorithm gives a set of solutions to the assignment problem [4], ranked in increasing order of cost. Every solution of the assignment problem represents a sum of elements of the cost matrix. To define the cost matrix correspondingly, we take the logarithm of both sides of (1). From the left-hand side we obtain the logarithm of the hypothesis probability and, from the right-hand side, a sum of logarithms of the factors:

\ln P'(H_l) = (N_M - (N_T - N_{nD})) \ln \beta + N_{nD} \ln(1 - P_D) + (N_T - N_{nD}) \ln P_D + \sum \ln g_{ij}.

We construct a cost matrix from the negative logarithms of these elements. In this case the optimal (minimum-cost) solution of the assignment problem coincides with the hypothesis of highest probability.

In order to use any of the widespread assignment algorithms, as well as the algorithm [1] for finding the K best hypotheses, the cost matrix has to be padded to a square matrix. The values in the added columns are chosen so that these columns do not influence the optimal solution.

Suppose the algorithm finds the K best assignments, those with the highest probabilities. The normalization (transformation from likelihood to probability) can be done by:

P(H_l) = \frac{P'(H_l)}{\sum_{l=1}^{K} P'(H_l)}.

One important question of practical significance is how to choose the number of generated and scored hypotheses. The value of K has to be sufficiently small to ensure acceleration of the algorithm and, at the same time, not so small as to distort the computed assignment probabilities. If, for example, the score of every hypothesis differs from any other by no more than one order of magnitude, it is not possible to truncate a significant part of the hypotheses. If, however, the prevailing share of the total score is concentrated in a small percentage of all hypotheses, then considering only this small percentage becomes very attractive.

The analysis of the hypothesis score distribution shows that the scores of feasible hypotheses decrease very rapidly: some 1-5 per cent of them cover more than 95 per cent of the total score sum. One possible criterion for terminating the hypothesis generation process is given in [1]:

H(n) - H(n+1) < \alpha \cdot H(n),

where \alpha \ll 1 and H(n) denotes the probability density of the n-th hypothesis being true. The implementation of this criterion, however, did not give stable results, because there are often subsets of hypotheses with very close scores, even at the beginning of the sorted hypothesis array. Another expression, providing higher stability, is [1]:

H(n) < \alpha \cdot H(1).

In this case the condition is a function of only one hypothesis score, that of the most powerful hypothesis.
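The pipeline of this section, enumerate feasible joint events, score them with (1), keep the K best, and renormalize, can be sketched in a few lines. The sketch below is illustrative only: it brute-forces the ranking over all feasible assignments instead of using Murty's ranked-assignment algorithm [2, 3] that the paper actually relies on, so it is practical only for tiny clusters. The function names and all numeric values are hypothetical.

```python
import itertools

def enumerate_hypotheses(g, beta, p_d):
    """Enumerate all feasible joint hypotheses for one cluster and
    score them with equation (1). g[i][j] is the likelihood g_ij of
    measurement j for target i; 0.0 marks a measurement outside the
    target's gate. Each target gets one measurement or None (missed
    detection); leftover measurements count as false alarms.
    Returns (score, assignment) pairs sorted best-first."""
    n_t, n_m = len(g), len(g[0])
    hyps = []
    for choice in itertools.product([None] + list(range(n_m)), repeat=n_t):
        used = [j for j in choice if j is not None]
        if len(used) != len(set(used)):   # constraint b): one target per measurement
            continue
        if any(j is not None and g[i][j] == 0.0 for i, j in enumerate(choice)):
            continue                      # respect the gates
        n_nd = choice.count(None)         # number of non-detected targets
        score = (beta ** (n_m - (n_t - n_nd))   # false-alarm density term
                 * (1.0 - p_d) ** n_nd          # missed-detection term
                 * p_d ** (n_t - n_nd))         # detection term
        for i, j in enumerate(choice):
            if j is not None:
                score *= g[i][j]          # product of g_ij over assigned pairs
        hyps.append((score, choice))
    hyps.sort(key=lambda h: -h[0])        # a ranked-assignment solver yields this order directly
    return hyps

def k_best_normalized(hyps, k):
    """Keep the K best hypotheses and renormalize their scores,
    as in the K-term normalization above."""
    top = hyps[:k]
    total = sum(s for s, _ in top)
    return [(s / total, c) for s, c in top]
```

For two targets gating three measurements, the top-ranked hypothesis assigns each target its strongest gated measurement, and the renormalized scores of the kept hypotheses sum to 1 by construction.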
3. Probabilistic threshold
Let us consider the main equation for the hypothesis probability, (1). Its right-hand side has several different terms. The first of them, \beta^{N_M - (N_T - N_{nD})}, expresses how many false alarms participate in the hypothesis, that is, how many false alarms fall into the gates of tracks of one and the same cluster. This term is very small: the false alarm density \beta is usually several orders of magnitude less than P_D and g_{ij}, because the measurements under consideration are gated at the beginning of the algorithm. The second term, 1 - P_D, is usually less than P_D. These considerations show that, for a given number of gated measurements N_M, the hypothesis probability reaches its maximal values when N_T is maximal or when N_{nD} = 0. In this case equation (1) transforms as follows:

P'(H_l) = \beta^{N_M - N_T} P_D^{N_T} \prod g_{ij}.   (2)

Equation (2) covers all significant hypotheses. We now try to estimate a threshold for detecting these hypotheses. Using the expression for g_{ij}, we transform equation (2):

P'(H_l) = \beta^{N_M - N_T} P_D^{N_T} \frac{e^{-\frac{1}{2} \sum_{i=1}^{N_T} d_{ij}^2}}{(2\pi)^{N_T M/2} \prod_{i=1}^{N_T} \sqrt{|S_i|}}.   (3)

Here \sum d_{ij}^2 is a sum of N_T squared normally distributed stochastic variables, so from a table we can find a suitable value r for this \chi^2 distribution. Finally, from (3) we calculate the threshold value that defines the level of significant hypotheses:

P_{thres} = \beta^{N_M - N_T} P_D^{N_T} \frac{e^{-r/2}}{(2\pi)^{N_T M/2} \prod_{i=1}^{N_T} \sqrt{|S_i|}}.   (4)

Finally, the main steps of the algorithm are outlined below for a scan:
1. Calculation of d_{ij}^2;
2. Clusterization (determining N_T and N_M for a cluster);
3. For every cluster:
• calculation of P_{thres} for the given N_T and N_M;
• calculation of the best hypotheses using the K-best hypotheses algorithm, until P_{thres} is reached;
• normalization and JPDA estimation.

4. Simulation Results
A typical example from numerous simulations is given. The scenario includes 6 targets and 12 measurements (6 of which are false alarms). The target gates share measurements, and all 12 measurements and 6 targets form a single cluster (fig. 1).

Figure 1: A scenario of 6 targets and 12 measurements in a cluster

The total number of hypotheses is 3993. The hypothesis scores, sorted by probability, are depicted in fig. 2.

Figure 2: The first 100 hypotheses, sorted by score

It is obvious that about one per cent of the hypotheses contribute more than 99 per cent of the total score. Figure 3 shows several threshold levels and the corresponding hypothesis score and number of hypotheses.

Figure 3: Simulation results for different thresholds:
a) the total score of the hypotheses selected by the corresponding threshold;
b) the number of hypotheses selected by the corresponding threshold (the total number of hypotheses is 3993).
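The probabilistic threshold of Section 3 is cheap to evaluate once r is taken from a \chi^2 table. The sketch below uses the dimensions of this scenario (6 targets, 12 measurements); all numeric values, including the |S_i| determinants, are made up. As assumptions not stated in the paper, the table lookup is replaced by the Wilson-Hilferty approximation to the \chi^2 quantile, and the degrees of freedom are taken as N_T \cdot M, on the reading that each d_{ij}^2 is \chi^2-distributed with M degrees of freedom.

```python
import math

def chi2_quantile(z, df):
    """Wilson-Hilferty approximation to the chi-square quantile;
    stands in for the paper's table lookup of r. z is the standard
    normal quantile of the desired confidence (e.g. 1.6449 for 95%)."""
    c = 2.0 / (9.0 * df)
    return df * (1.0 - c + z * math.sqrt(c)) ** 3

def p_thres(beta, p_d, n_m, n_t, m, det_s, r):
    """Threshold (4) on the hypothesis score: hypotheses scoring
    below it are treated as insignificant. det_s holds the
    determinants |S_i| of the innovation covariance matrices."""
    denom = (2.0 * math.pi) ** (n_t * m / 2.0)
    for det in det_s:
        denom *= math.sqrt(det)
    return (beta ** (n_m - n_t)) * (p_d ** n_t) * math.exp(-r / 2.0) / denom

# Cluster like the simulation scenario: 6 targets, 12 measurements,
# 2-dimensional measurement vector; beta and |S_i| are hypothetical.
r = chi2_quantile(1.6449, df=6 * 2)   # close to the 95% table value for 12 dof
thr = p_thres(beta=1e-4, p_d=0.9, n_m=12, n_t=6, m=2,
              det_s=[4.0] * 6, r=r)
```

A hypothesis whose score P'(H_l) from (2) falls below thr is pruned; note that lowering the confidence level lowers r, which raises the threshold and prunes more aggressively.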
5. Conclusions
In this paper a modification of the well-known JPDA algorithm is presented for tracking closely spaced targets in moderate and heavy clutter. Instead of all feasible hypotheses, the presented algorithm generates only part of them: the first K best feasible hypotheses in terms of their probability of being true. This quasi-optimal algorithm significantly reduces the necessary computing resources without losing any significant hypotheses and without assignment degradation. An expression for a probabilistic threshold is given to evaluate the number of rejected hypotheses and to estimate the algorithm's processing speedup. The obtained results are applicable to all real-time JPDA algorithms and their modifications (IMM JPDA).

6. References
1. Ljudmil Bojilov, Kiril Alexiev and Pavlina Konstantinova, "An Algorithm Unifying IMM and JPDA Approaches," Comptes Rendus de l'Academie Bulgare des Sciences (to appear).
2. Katta G. Murty, "An Algorithm for Ranking All the Assignments in Order of Increasing Cost," Operations Research 16 (1968): 682-687.
3. Matt L. Miller, Harold S. Stone and Ingemar J. Cox, "Optimizing Murty's Ranked Assignment Method," IEEE Transactions on AES 33, 3 (July 1997): 851-862.
4. Roy Jonker and Ton Volgenant, "A Shortest Augmenting Path Algorithm for Dense and Sparse Assignment Problems," Computing 38 (1987): 325-340.
5. Yaakov Bar-Shalom, ed., Multitarget-Multisensor Tracking: Advanced Applications (Norwood, MA: Artech House, 1990).
6. Samuel S. Blackman, Multiple-Target Tracking with Radar Applications (Norwood, MA: Artech House, 1986).