
C280 Computer Vision
Prof. Trevor Darrell
[email protected]
Lecture 19: Tracking

Tracking scenarios

Follow a point
Follow a template
Follow a changing template
Follow all the elements of a moving person, fit a model to it.

Things to consider in tracking

What are the dynamics of the thing being tracked?
How is it observed?

Three main issues in tracking

Simplifying Assumptions

Only the immediate past matters (first-order Markov): P(x_i | x_1, ..., x_{i-1}) = P(x_i | x_{i-1})
Measurements depend only on the current state: P(y_i | x_1, ..., x_i, y_1, ..., y_{i-1}) = P(y_i | x_i)

Kalman filter graphical model and corresponding factorized joint probability

[Graphical model: hidden states x1 -> x2 -> x3, each emitting a measurement y1, y2, y3]

P(x_1, x_2, x_3, y_1, y_2, y_3) = P(x_1) P(y_1 | x_1) P(x_2 | x_1) P(y_2 | x_2) P(x_3 | x_2) P(y_3 | x_3)

Tracking as induction

Make a measurement starting in the 0th frame.
Then: assume you have an estimate at the ith frame, after the measurement step.
Show that you can do prediction for the (i+1)th frame, and measurement for the (i+1)th frame.

Base case:

P(x_0 | y_0) ∝ P(y_0 | x_0) P(x_0)

Prediction step:

given P(x_{i-1} | y_0, ..., y_{i-1}), predict
P(x_i | y_0, ..., y_{i-1}) = ∫ P(x_i | x_{i-1}) P(x_{i-1} | y_0, ..., y_{i-1}) dx_{i-1}

Update step:

given P(x_i | y_0, ..., y_{i-1}), fold in the new measurement
P(x_i | y_0, ..., y_i) ∝ P(y_i | x_i) P(x_i | y_0, ..., y_{i-1})

The Kalman Filter

Key ideas:
Linear models interact uniquely well with Gaussian noise: make the prior Gaussian, and everything else stays Gaussian, so the calculations are easy.
Gaussians are really easy to represent: once you know the mean and covariance, you're done.

Recall the three main issues in tracking

(Ignore data association for now)

The Kalman Filter

[figure from http://www.cs.unc.edu/~welch/kalman/kalmanIntro.html]

The Kalman Filter in 1D

Dynamic model:
x_i ~ N(d_i x_{i-1}, σ_{d_i}²)
y_i ~ N(m_i x_i, σ_{m_i}²)

Notation:
Predicted mean: X̄_i⁻
Corrected mean: X̄_i⁺

The Kalman Filter


Prediction for 1D Kalman filter

The new state is obtained by
- multiplying the old state by a known constant
- adding zero-mean noise

Therefore, the predicted mean for the new state is the constant times the mean for the old state.

The old state is a normal random variable, so its variance is multiplied by the square of the constant, and the variance of the noise is added:

X̄_i⁻ = d_i X̄_{i-1}⁺
(σ_i⁻)² = σ_{d_i}² + (d_i σ_{i-1}⁺)²


The Kalman Filter


Measurement update for 1D Kalman filter

X̄_i⁺ = (X̄_i⁻ σ_{m_i}² + m_i y_i (σ_i⁻)²) / (σ_{m_i}² + m_i² (σ_i⁻)²)
(σ_i⁺)² = (σ_{m_i}² (σ_i⁻)²) / (σ_{m_i}² + m_i² (σ_i⁻)²)

Notice:
- if the measurement noise is small, we rely mainly on the measurement;
- if it is large, mainly on the prediction;
- (σ_i⁺)² does not depend on y.
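To make the 1D recursion concrete, here is a minimal sketch in Python/NumPy (the function and variable names are mine, not from the lecture); it implements exactly the predict and update formulas above.

```python
import numpy as np

def predict_1d(mean_prev, var_prev, d, var_d):
    """Prediction: x_i = d * x_{i-1} + zero-mean noise with variance var_d."""
    mean_pred = d * mean_prev               # constant times the old mean
    var_pred = var_d + (d ** 2) * var_prev  # noise variance plus squared constant times old variance
    return mean_pred, var_pred

def update_1d(mean_pred, var_pred, y, m, var_m):
    """Correction: fold in the measurement y ~ N(m * x_i, var_m)."""
    denom = m ** 2 * var_pred + var_m
    mean_corr = (mean_pred * var_m + m * y * var_pred) / denom
    var_corr = var_m * var_pred / denom     # note: does not depend on y
    return mean_corr, var_corr
```

With var_m small, update_1d returns roughly y/m (trust the measurement); with var_m large it returns roughly mean_pred (trust the prediction), matching the "Notice" items above.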


Kalman filter for computing an on-line average

What Kalman filter parameters and initial conditions should we pick so that the optimal estimate for x at each iteration is just the average of all the observations seen so far?

Kalman filter model

d_i = 1, m_i = 1, σ_{d_i} = 0, σ_{m_i} = 1

Initial conditions: X̄_0⁻ = 0, σ_0⁻ = ∞ (an uninformative prior)

Iteration | X̄_i⁻          | (σ_i⁻)² | X̄_i⁺                | (σ_i⁺)²
0         | 0              | ∞       | y_0                  | 1
1         | y_0            | 1       | (y_0 + y_1)/2        | 1/2
2         | (y_0 + y_1)/2  | 1/2     | (y_0 + y_1 + y_2)/3  | 1/3
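A quick numerical check of this table (a sketch; the huge initial variance stands in for the uninformative prior, and the observation values are arbitrary):

```python
def run_average_filter(ys, var_d):
    mean, var = 0.0, 1e12        # near-uninformative prior on x_0
    for y in ys:
        var = var + var_d        # predict: d = 1, so only the variance changes
        gain = var / (var + 1.0) # update with m = 1, measurement variance 1
        mean = mean + gain * (y - mean)
        var = var / (var + 1.0)
        print(f"mean = {mean:.4f}, variance = {var:.4f}")

run_average_filter([2.0, 4.0, 9.0], var_d=0.0)
# -> means 2, 3, 5 (the running averages); variances 1, 1/2, 1/3
```

Changing var_d to 1.0 answers the next slide's question directly: the means become (y_0 + 2y_1)/3 and (y_0 + 2y_1 + 5y_2)/8, with variances 2/3 and 5/8.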

What happens if the x dynamics are given a non-zero variance?

Kalman filter model

d_i = 1, m_i = 1, σ_{d_i} = 1, σ_{m_i} = 1

Initial conditions: X̄_0⁻ = 0, σ_0⁻ = ∞

Iteration | X̄_i⁻            | (σ_i⁻)² | X̄_i⁺                    | (σ_i⁺)²
0         | 0                | ∞       | y_0                      | 1
1         | y_0              | 2       | (y_0 + 2y_1)/3           | 2/3
2         | (y_0 + 2y_1)/3   | 5/3     | (y_0 + 2y_1 + 5y_2)/8    | 5/8

Recent observations now receive more weight than older ones.

Linear dynamic models

A linear dynamic model has the form

x_i ~ N(D_{i-1} x_{i-1}; Σ_{d_i})
y_i ~ N(M_i x_i; Σ_{m_i})

This is much, much more general than it looks, and extremely powerful.

Examples of linear state space models

Drifting points

x_i ~ N(D_{i-1} x_{i-1}; Σ_{d_i})
y_i ~ N(M_i x_i; Σ_{m_i})

Assume that the new position of the point is the old one, plus noise:
D = Identity

cic.nist.gov/lipman/sciviz/images/random3.gif
http://www.grunch.net/synergetics/images/random3.jpg

Constant velocity

x_i ~ N(D_{i-1} x_{i-1}; Σ_{d_i})
y_i ~ N(M_i x_i; Σ_{m_i})

We have
u_i = u_{i-1} + Δt v_{i-1} + ε_i
v_i = v_{i-1} + ζ_i
(the Greek letters denote noise terms)

Stack (u, v) into a single state vector:

(u)     (1  Δt) (u)
(v)_i = (0   1) (v)_{i-1} + noise

i.e. x_i = D_{i-1} x_{i-1} + noise, which is the form we had above.
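A sketch of the constant-velocity model as simulation code (Δt, the noise scales, and the initial state are arbitrary choices, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 1.0
D = np.array([[1.0, dt],
              [0.0, 1.0]])   # u_i = u_{i-1} + dt * v_{i-1};  v_i = v_{i-1}
M = np.array([[1.0, 0.0]])   # only position is measured

x = np.array([0.0, 1.0])     # initial position 0, velocity 1
for _ in range(5):
    x = D @ x + rng.normal(scale=[0.05, 0.02])  # process noise on (u, v)
    y = M @ x + rng.normal(scale=0.1)           # noisy position measurement
    print(x, y)
```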

[Figures: Constant Velocity Model — velocity vs. time and position vs. time, with position measurements]

Constant acceleration

x_i ~ N(D_{i-1} x_{i-1}; Σ_{d_i})
y_i ~ N(M_i x_i; Σ_{m_i})

We have
u_i = u_{i-1} + Δt v_{i-1} + ε_i
v_i = v_{i-1} + Δt a_{i-1} + ζ_i
a_i = a_{i-1} + ξ_i
(the Greek letters denote noise terms)

Stack (u, v, a) into a single state vector:

(u)     (1  Δt   0) (u)
(v)   = (0   1  Δt) (v)     + noise
(a)_i   (0   0   1) (a)_{i-1}

which is the form we had above.
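The same pattern in code; a sketch of the constant-acceleration transition and measurement matrices (Δt arbitrary):

```python
import numpy as np

dt = 1.0
D = np.array([[1.0, dt,  0.0],
              [0.0, 1.0, dt ],
              [0.0, 0.0, 1.0]])   # u += dt*v, v += dt*a, a unchanged (up to noise)
M = np.array([[1.0, 0.0, 0.0]])   # again, only position is observed
```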

[Figures: Constant Acceleration Model — velocity vs. time and position vs. time]

Periodic motion

x_i ~ N(D_{i-1} x_{i-1}; Σ_{d_i})
y_i ~ N(M_i x_i; Σ_{m_i})

Assume we have a point, moving on a line with a periodic movement defined with a differential equation:

dp/dt = v
dv/dt = -p

This can be written as du/dt = S u, with the state defined as stacked position and velocity, u = (p, v).

Periodic motion

x_i ~ N(D_{i-1} x_{i-1}; Σ_{d_i})
y_i ~ N(M_i x_i; Σ_{m_i})

Take a discrete approximation (e.g., forward Euler integration with Δt stepsize):

x_i = D_{i-1} x_{i-1} + noise, with D_{i-1} = I + Δt S = (1  Δt; -Δt  1)
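A sketch of the discretization in code (unit angular frequency assumed, as in the differential equation above):

```python
import numpy as np

dt = 0.1
S = np.array([[0.0, 1.0],
              [-1.0, 0.0]])   # du/dt = S u encodes p' = v, v' = -p
D = np.eye(2) + dt * S        # forward Euler step: u_i = (I + dt*S) u_{i-1}

u = np.array([1.0, 0.0])      # start at p = 1, v = 0
for _ in range(100):
    u = D @ u                 # noise-free propagation traces out the oscillation
print(u)
```

Forward Euler slightly inflates the amplitude at every step, so in a tracking model the process noise term also absorbs this discretization error.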

n-D

Generalization to n-D is straightforward but more complex.

n-D Prediction

Prediction:
Multiply the estimate at the prior time with the forward model:
X̄_i⁻ = D_i X̄_{i-1}⁺

Propagate the covariance through the model and add new noise:
Σ_i⁻ = D_i Σ_{i-1}⁺ D_iᵀ + Σ_{d_i}

n-D Correction

Correction:
Update the a priori estimate with the measurement to form the a posteriori estimate:
X̄_i⁺ = X̄_i⁻ + K_i (y_i − M_i X̄_i⁻)

n-D correction

Find the linear filter on the innovations,
X̄_i⁺ = X̄_i⁻ + K_i (y_i − M_i X̄_i⁻),
which minimizes the a posteriori error covariance:
E[(x − x̄)ᵀ(x − x̄)]

K is the Kalman Gain matrix. A solution is
K_i = Σ_i⁻ M_iᵀ (M_i Σ_i⁻ M_iᵀ + Σ_{m_i})⁻¹
Σ_i⁺ = (I − K_i M_i) Σ_i⁻
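Collecting the n-D prediction and correction equations into a NumPy sketch (variable names are mine; Q and R stand for the process and measurement noise covariances Σ_d and Σ_m):

```python
import numpy as np

def kf_predict(x, P, D, Q):
    """Propagate the corrected estimate (x, P) through the forward model D."""
    x_pred = D @ x
    P_pred = D @ P @ D.T + Q
    return x_pred, P_pred

def kf_correct(x_pred, P_pred, y, M, R):
    """Fold the measurement y into the a priori estimate (x_pred, P_pred)."""
    S = M @ P_pred @ M.T + R               # innovation covariance
    K = P_pred @ M.T @ np.linalg.inv(S)    # Kalman gain
    x = x_pred + K @ (y - M @ x_pred)      # correct with the innovation (residual)
    P = (np.eye(len(x_pred)) - K @ M) @ P_pred
    return x, P
```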

Kalman Gain Matrix

As the measurement becomes more reliable, K weights the residual more heavily:
lim_{Σ_m → 0} K_i = M_i⁻¹

As the prior covariance approaches 0, measurements are ignored:
lim_{Σ_i⁻ → 0} K_i = 0
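Both limits are easy to check in 1D, where the gain is the scalar K_i = m_i (σ_i⁻)² / (m_i² (σ_i⁻)² + σ_{m_i}²): as σ_{m_i} → 0 this tends to 1/m_i, and as (σ_i⁻)² → 0 it tends to 0.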

[Figures: Constant Velocity Model — position and velocity estimates vs. time]

This is figure 17.3 of Forsyth and Ponce. The notation is a bit involved, but is logical. We plot the true state as open circles, measurements as x's, predicted means as *'s with three standard deviation bars, and corrected means as +'s with three standard deviation bars.

[Figure: position vs. time. The o's give state, x's measurement.]

Smoothing

Idea:
We don't have the best estimate of state - what about the future?
Run two filters, one moving forward, the other backward in time.
Now combine the state estimates.
The crucial point here is that we can obtain a smoothed estimate by viewing the backward filter's prediction as yet another measurement for the forward filter.
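In 1D, the combination step is just a product of Gaussians: given a forward estimate (X̄_f, σ_f²) and a backward estimate (X̄_b, σ_b²), the smoothed estimate is X̄_s = (X̄_f/σ_f² + X̄_b/σ_b²) / (1/σ_f² + 1/σ_b²), with 1/σ_s² = 1/σ_f² + 1/σ_b², so the smoothed variance is never larger than either filter's alone.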

Forward estimates.

[Figure: position vs. time. The o's give state, x's measurement.]

Backward estimates.

[Figure: position vs. time. The o's give state, x's measurement.]

Combined forward-backward estimates.

[Figure: position vs. time. The o's give state, x's measurement.]

2-D constant velocity example from Kevin Murphy's Matlab toolbox

[figure from http://www.ai.mit.edu/~murphyk/Software/Kalman/kalman.html]

MSE of the filtered estimate is 4.9; of the smoothed estimate, 3.2.
Not only is the smoothed estimate better, but we know that it is better, as illustrated by the smaller uncertainty ellipses.
Note how the smoothed ellipses are larger at the ends, because these points have seen less data.
Also, note how rapidly the filtered ellipses reach their steady-state (Riccati) values.

Resources

Kalman filter homepage:
http://www.cs.unc.edu/~welch/kalman/
(Kalman filter demo applet)

Kevin Murphy's Matlab toolbox:
http://www.ai.mit.edu/~murphyk/Software/Kalman/kalman.html

Embellishments for tracking

Richer models of P(x_n | x_{n-1})
Richer models of P(y_n | x_n)
Richer models of probability distributions

Abrupt changes

What if the environment is sometimes unpredictable?
Do people move with constant velocity?
Test several models of assumed dynamics, and use the best.

Multiple model filters

Test several models of assumed dynamics

[figure from Welch and Bishop 2001]

MM estimate

Two models: Position (P), Position+Velocity (PV)

[figure from Welch and Bishop 2001]

P likelihood

[figure from Welch and Bishop 2001]

No lag

[figure from Welch and Bishop 2001]

Smooth when still

[figure from Welch and Bishop 2001]

Embellishments for tracking

Richer models of P(y_n | x_n)
Richer models of probability distributions

Jepson, Fleet, and El-Maraghi tracker


Wandering, Stable, and Lost appearance model

Introduce 3 competing models to explain the appearance of the tracked region:
- A stable model: Gaussian with some mean and covariance.
- A 2-frame motion tracker appearance model, to rebuild the stable model when it gets lost.
- An outlier model: uniform probability over all appearances.

Use an on-line EM algorithm to fit the (changing) model parameters to the recent appearance data.
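A heavily simplified sketch of one on-line EM step for a single scalar appearance observation (all names, the stand-in densities, and the forgetting factor alpha are illustrative assumptions, not the paper's actual update rules; the real tracker operates on steerable-filter phase observations):

```python
import numpy as np

def wsl_em_step(d, state, alpha=0.05):
    """One on-line EM step for observation d; state holds the mixing
    probabilities and the Stable component's mean/variance."""
    mix, mu_s, var_s = state["mix"], state["mu_s"], state["var_s"]
    # E-step: evaluate each process's likelihood for d (toy stand-in densities)
    p_w = np.exp(-0.5 * (d - state["d_prev"]) ** 2)                # Wandering: near last frame
    p_s = np.exp(-0.5 * (d - mu_s) ** 2 / var_s) / np.sqrt(var_s)  # Stable: learned Gaussian
    p_l = 0.1                                                      # Lost: uniform outlier density
    own = mix * np.array([p_w, p_s, p_l])
    own /= own.sum()                        # ownership probabilities of the observation
    # M-step with exponential forgetting, so recent appearance data dominates
    state["mix"] = (1 - alpha) * mix + alpha * own
    state["mu_s"] += alpha * own[1] * (d - mu_s)
    state["var_s"] += alpha * own[1] * ((d - mu_s) ** 2 - var_s)
    state["d_prev"] = d
    return state

state = {"mix": np.array([1/3, 1/3, 1/3]), "mu_s": 0.0, "var_s": 1.0, "d_prev": 0.0}
```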

Jepson, Fleet, and El-Maraghi tracker for a toy 1-pixel image model

Red line: observations
Blue line: true appearance state
Black line: mean of the stable process

Mixing probabilities for the Stable (black), Wandering (red) and Lost (green) processes

Non-toy image representation

Phase of a steerable quadrature pair (G2, H2), steered to 4 different orientations, at 2 scales.

The motion tracker

The motion prior, P(x_n | x_{n-1}), prefers slow velocities and small accelerations.
The WSL appearance model gives a likelihood for each possible new position, orientation, and scale of the tracked region.
They combine that with the motion prior to find the most probable position, orientation, and scale of the tracked region in the next frame.
Gives state-of-the-art tracking results.

Jepson, Fleet, and El-Maraghi tracker


Jepson, Fleet, and El-Maraghi tracker

Far right column: the stable component's mixing probability.
Note its behavior at occlusions.

Embellishments for tracking

Richer models of P(y_n | x_n)
Richer models of probability distributions
Particle filter models, applied to tracking humans

(KF) Distribution propagation

Prediction from the previous time frame.
Noise is added to that prediction.
Make a new measurement at the next time frame.

[Isard 1998]

Distribution propagation

[Isard 1998]

Representing non-linear Distributions

Unimodal parametric models fail to capture real-world densities.

Discretize by evenly sampling over the entire state space

Tractable for 1-d problems like stereo, but not for high-dimensional problems.

Representing Distributions using Weighted Samples

Rather than a parametric form, use a set of samples to represent a density:
- Sample positions
- Probability mass at each sample

This gives us two knobs to adjust when representing a probability density by samples: the locations of the samples, and the probability weight on each sample.

Representing distributions using weighted samples, another picture

[Isard 1998]

Sampled representation of a probability distribution

You can also think of this as a sum of Dirac delta functions, each of weight w:

p_f(x) = Σ_i w_i δ(x − u_i)
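One payoff of the delta-function view: expectations under p_f reduce to weighted sums over the samples. A minimal sketch (the values are arbitrary):

```python
import numpy as np

u = np.array([0.5, 1.0, 2.0])   # sample positions u_i
w = np.array([0.2, 0.5, 0.3])   # probability mass w_i at each sample (sums to 1)

# E[f(x)] under p_f(x) = sum_i w_i * delta(x - u_i) is sum_i w_i * f(u_i)
mean = np.sum(w * u)
var = np.sum(w * (u - mean) ** 2)
print(mean, var)
```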

Tracking, in particle filter representation

[Graphical model: hidden states x1 -> x2 -> x3, each emitting a measurement y1, y2, y3]

p_f(x) = Σ_i w_i δ(x − u_i)

P(x_n | y_1, ..., y_n) = k P(y_n | x_n) ∫ dx_{n-1} P(x_n | x_{n-1}) P(x_{n-1} | y_1, ..., y_{n-1})

The integral is the prediction step; multiplying by the likelihood P(y_n | x_n) is the update step.

Particle filter

Let's apply this sampled probability density machinery to generalize the Kalman filtering framework.
More general probability density representation than a unimodal Gaussian.
Allows for general state dynamics, f(x) + noise.

Sampled Prediction

Run each sample through the dynamics model (and add noise) to get samples of the joint over the old and new state; drop the old-state elements to marginalize, giving an (approximate) sampled representation of the predicted density.

Sampled Correction (Bayes rule)

Prior → posterior:
Reweight every sample with the likelihood of the observations, given that sample:

w_i' = w_i P(y_n | x_n = u_i)

yielding a set of samples describing the probability distribution after the correction (update) step.

Naive PF Tracking

Start with samples from something simple (a Gaussian).
Repeat:
- Predict: run every particle through the dynamics function and add noise.
- Correct: take each particle from the prediction step and modify the old weight by multiplying by the new likelihood.

But this doesn't work that well, because of sample impoverishment.

Sample impoverishment

10 of the 100 particles, along with the true Kalman filter track, with variance:

[Figure: particle trajectories and Kalman filter track vs. time]

Resample the prior

In a sampled density representation, the frequency of samples can be traded off against weight: draw a new set of equally weighted samples such that they represent the same density.
I.e., make N draws with replacement from the original set of samples, using the weights as the probability of drawing a sample.

Resampling concentrates samples


A practical particle filter with resampling

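A minimal sketch of that loop for a 1D state (the dynamics, likelihood, noise levels, and the toy observation sequence are all placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100
particles = rng.normal(0.0, 1.0, size=N)   # start with samples from something simple
observations = [0.2, 0.4, 0.5, 0.7]        # toy measurement sequence

def likelihood(y, x, var_m=0.25):          # placeholder Gaussian measurement model
    return np.exp(-0.5 * (y - x) ** 2 / var_m)

for y in observations:
    # predict: run every particle through the dynamics and add noise (identity dynamics here)
    particles = particles + rng.normal(0.0, 0.1, size=N)
    # correct: reweight each sample by the likelihood of the observation given that sample
    weights = likelihood(y, particles)
    weights /= weights.sum()
    print(f"estimate: {np.sum(weights * particles):.3f}")
    # resample: N draws with replacement, using the weights as drawing probabilities
    particles = particles[rng.choice(N, size=N, p=weights)]
```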

Pictorial view

[Isard 1998]