Lecture 3: Face Detection

Topics:
• Face Detection
• Face Localization
• Segmentation
• Face Tracking
• Facial features localization
• Facial features tracking
• Morphing

Reading: Eigenfaces – online paper; FP pgs. 505-512
[Figure: Coincidental appearance of faces in rock? (Mount Rushmore) / Coincidental appearance of a face profile in rock? (the Old Man of the Mountain)]
http://bensguide.gpo.gov/3-5/symbols/print/mountrushmore.html
http://www.cs.dartmouth.edu/whites/old_man.html
Face Detection

• Euclidean distance:

  $d = \left( \sum_{c=1}^{N} (y_{1c} - y_{2c})^2 \right)^{1/2} = \| \mathbf{y}_1 - \mathbf{y}_2 \|_2$

• Given an input image y (also called a probe), the NN classifier will assign to y the label associated with the closest image in the training set. So if it happens to be closest to another face, it will be assigned L = 1 (face); otherwise it will be assigned L = 0 (nonface).
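
A minimal NumPy sketch of this nearest-neighbor rule (the function and variable names are ours, not from the lecture):

```python
import numpy as np

def nn_face_label(probe, train_images, train_labels):
    """Assign the probe the label of its closest training image.

    probe        : flattened query image, shape (kl,)
    train_images : one flattened training image per row, shape (N, kl)
    train_labels : 1 = face, 0 = nonface, shape (N,)
    """
    # Euclidean distance d = ||y1 - y2||_2 to every training image
    dists = np.linalg.norm(train_images - probe, axis=1)
    return train_labels[np.argmin(dists)]
```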
Linear Models

Image Representation

• An image $I \in \Re^{k \times l}$, with pixels $i_1, i_2, \ldots, i_l$ in the first row, $i_{l+1}, \ldots$ in the second, down to $i_{l(k-1)+1}, \ldots, i_{kl}$ in the last, is stacked into a column vector $\mathbf{i} \in \Re^{kl \times 1}$.
• In the standard pixel basis,

  $\mathbf{i} = i_1 \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix} + i_2 \begin{bmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{bmatrix} + \cdots + i_{kl} \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix}$

• An image is a point in $\Re^{kl}$-dimensional space: each axis represents one pixel, and the coordinate along it is that pixel's value (0-255).
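
A small sketch of this flattening in NumPy (the array values are arbitrary examples):

```python
import numpy as np

# A k x l image becomes a point in R^(kl); here k = l = 2 for illustration.
I = np.array([[10., 200.],
              [10.,  55.]])        # image as a k x l array
i = I.reshape(-1)                  # row-stacked vector in R^(kl)

# The standard-basis expansion i = sum_c i_c * e_c reproduces the vector.
kl = i.size
E = np.eye(kl)                     # columns are the basis vectors e_1..e_kl
assert np.allclose(sum(i[c] * E[:, c] for c in range(kl)), i)
```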
Representation

• Find a new basis matrix that results in a compact representation:

  $\mathbf{i} = B \mathbf{c}$,  where $B$ is the basis matrix and $\mathbf{c}$ is the vector of coefficients.

• For a 3-pixel image, the standard basis gives

  $\mathbf{i}_n = i_{1n} \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} + i_{2n} \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} + i_{3n} \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} i_{1n} \\ i_{2n} \\ i_{3n} \end{bmatrix}$

  i.e. $B$ is the identity and the coefficients are the pixel values themselves; a numeric sketch follows the toy example below.
Toy Example - Representation and Recognition

• Consider a set of images of N people under the same viewpoint and lighting.
• Each image is made up of 3 pixels, and pixel 1 has the same value as pixel 3 for all images:

  $\mathbf{i}_n = \begin{bmatrix} i_{1n} \\ i_{2n} \\ i_{3n} \end{bmatrix}$  s.t. $i_{1n} = i_{3n}$ and $1 \le n \le N$

• Heuristic: because pixel 3 duplicates pixel 1, two coefficients per image suffice. Solve for and store the coefficient matrix C:

  $\underbrace{\begin{bmatrix} \mathbf{i}_1 & \mathbf{i}_2 & \cdots & \mathbf{i}_N \end{bmatrix}}_{D,\ \text{data matrix}} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 0 \end{bmatrix} \underbrace{\begin{bmatrix} \mathbf{c}_1 & \mathbf{c}_2 & \cdots & \mathbf{c}_N \end{bmatrix}}_{C,\ \text{coefficient matrix}}$

  $C = B^{-1} D$  (with $B^{-1}$ read as the pseudo-inverse, since $B$ is $3 \times 2$)

• Given a new image $\mathbf{i}_{new}$: compute its coefficients and compare them against the stored $\mathbf{c}_n$.
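
A runnable sketch of the toy example (the data values, and the use of `np.linalg.pinv` for $B^{-1}$, are our assumptions):

```python
import numpy as np

# Toy data matrix D: 3-pixel images (columns) with pixel 1 == pixel 3.
D = np.array([[10., 40., 90.],   # pixel 1 of images 1..N
              [20., 50., 60.],   # pixel 2
              [10., 40., 90.]])  # pixel 3 (duplicates pixel 1)

# Heuristic basis: two coefficients are enough for this redundancy.
B = np.array([[1., 0.],
              [0., 1.],
              [1., 0.]])

C = np.linalg.pinv(B) @ D        # C = B^-1 D (pseudo-inverse, B is 3x2)
assert np.allclose(B @ C, D)     # two numbers per image reproduce D exactly

# Recognition: code a new image and find the nearest stored coefficients.
i_new = np.array([41., 49., 41.])
c_new = np.linalg.pinv(B) @ i_new
nearest = np.argmin(np.linalg.norm(C - c_new[:, None], axis=0))
```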
PCA: Theory

• Consider a set of images, where each image is made up of 3 pixels and pixel 1 has the same value as pixel 3 for all images:

  $\mathbf{i}_n = [i_{1n}\ i_{2n}\ i_{3n}]^T$  s.t. $i_{1n} = i_{3n}$ and $1 \le n \le N$

• PCA chooses axes in the directions of highest variability of the data (maximum scatter).
• Define a new origin as the mean of the data set.
• Find the direction of maximum variance in the samples ($e_1$) and align it with the first axis ($y_1$).
• Continue this process with orthogonal directions of decreasing variance, aligning each with the next axis.
• Thus, we have a rotation which minimizes the covariance between the new axes.

PCA - Dimensionality Reduction

• Factor the data matrix:

  $D = [\mathbf{i}_1\ \mathbf{i}_2\ \cdots\ \mathbf{i}_N] = B\,[\mathbf{c}_1\ \mathbf{c}_2\ \cdots\ \mathbf{c}_N]$;  compute $D = USV^T$ (SVD of $D$) and set $B = U$.

• Each image $\mathbf{i}_n$ is now represented by a vector of coefficients $\mathbf{c}_n$ in a reduced-dimensionality space: $[\mathbf{c}_1 \cdots \mathbf{c}_N] = B^T [\mathbf{i}_1 \cdots \mathbf{i}_N]$.
• B maximizes the scatter of the projected data, defined through

  $S_T = (D - M)(D - M)^T$,  $M = [\mu\ \cdots\ \mu]$

  where each element $S_{T,ij}$ is the covariance between the two directions i and j and represents their level of correlation (i.e. a value of zero indicates that the two dimensions are uncorrelated). Equivalently,

  $S_T = \begin{bmatrix} \mathbf{i}_1 - \mu & \cdots & \mathbf{i}_N - \mu \end{bmatrix} \begin{bmatrix} (\mathbf{i}_1 - \mu)^T \\ (\mathbf{i}_2 - \mu)^T \\ \vdots \\ (\mathbf{i}_N - \mu)^T \end{bmatrix}$

• From $(D - M)(D - M)^T = U \Sigma^2 U^T$ (SVD of $S_T$), set $B = U$; the projected scatter is then

  $CC^T = B^T (D - M)(D - M)^T B = B^T S_T B$
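
A compact sketch of PCA on the toy data via the SVD (synthetic values; variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

# N synthetic 3-pixel images (one per column) with pixel 1 == pixel 3.
N = 100
p1 = rng.uniform(0, 255, N)
p2 = rng.uniform(0, 255, N)
D = np.vstack([p1, p2, p1])              # data matrix D

# New origin at the sample mean, then D - M = U S V^T; set B = U.
mu = D.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(D - mu, full_matrices=False)

B = U                                     # axes of decreasing variance
C = B.T @ (D - mu)                        # coefficients c_1..c_N

print(np.round(s, 6))                     # third singular value is ~0:
                                          # two coefficients per image suffice
```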
Selecting the Optimal B

• Keep only the first d basis vectors: $B_{opt} = [\mathbf{b}_1 | \cdots | \mathbf{b}_d]$.
• $\mathbf{c}_{new} = B^T \mathbf{i}_{new}$, since $B^{-1} = B^T$ (B is orthonormal).
• Compare the representation of $\mathbf{i}_{new}$ against all coefficient vectors $\mathbf{c}_n$, $1 \le n \le N$.
• One possible classifier: the nearest-neighbor classifier.

Data Reduction: Theory

• Each image below is a column vector in the basis matrix B.
[Figure: the leading basis images (eigenfaces)]
• Reconstruction from the coefficients: $\mathbf{d}_i = U \mathbf{c}_i$.
[Figure: running sums of the reconstruction after 1, 3, 9, and 28 terms]
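
A self-contained sketch of the running-sum reconstruction; here `U` is a stand-in orthonormal basis rather than a learned eigenface basis:

```python
import numpy as np

rng = np.random.default_rng(1)
kl = 64                               # toy "image" length
U, _ = np.linalg.qr(rng.normal(size=(kl, kl)))   # stand-in orthonormal basis
mu = rng.uniform(0, 255, kl)          # stand-in mean image
i = rng.uniform(0, 255, kl)           # image to reconstruct

def reconstruct(U, mu, i, d):
    """Running sum of the first d terms: mu + sum_k c_k b_k."""
    B_opt = U[:, :d]                  # B_opt = [b_1 | ... | b_d]
    c = B_opt.T @ (i - mu)            # c_new = B_opt^T (i_new - mu)
    return mu + B_opt @ c

for d in (1, 3, 9, 28):               # error shrinks as terms are added
    print(d, round(np.linalg.norm(i - reconstruct(U, mu, i, d)), 2))
```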
The Covariance Matrix

• Define the covariance (scatter) matrix of the input samples:

  $S_T = \sum_{n=1}^{N} (\mathbf{i}_n - \mu)(\mathbf{i}_n - \mu)^T$  (where $\mu$ is the sample mean)
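
A quick numeric check that this sum of outer products equals the matrix form $(D-M)(D-M)^T$ used earlier (toy data assumed):

```python
import numpy as np

D = np.array([[10., 40., 90.],
              [20., 50., 60.],
              [10., 40., 90.]])          # images as columns
mu = D.mean(axis=1, keepdims=True)

# Sum-of-outer-products form of the scatter matrix S_T
S_sum = sum(np.outer(D[:, n] - mu[:, 0], D[:, n] - mu[:, 0])
            for n in range(D.shape[1]))

# Matrix form (D - M)(D - M)^T with M = [mu ... mu]
S_mat = (D - mu) @ (D - mu).T
assert np.allclose(S_sum, S_mat)
```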
PIE Database (Weizmann)

[Figure: example images from the database]

• Distance from face space:

  $d_f(\mathbf{y}) = \| \mathbf{y} - U_f U_f^T \mathbf{y} \|^2$
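
A minimal sketch of this distance-from-face-space test (assuming `U_f` holds the leading, orthonormal basis vectors and `y` is already mean-subtracted):

```python
import numpy as np

def dist_from_face_space(y, U_f):
    """d_f(y) = ||y - U_f U_f^T y||^2 (squared reconstruction error)."""
    residual = y - U_f @ (U_f.T @ y)   # part of y outside the face space
    return float(residual @ residual)

# A small d_f(y) means y is well explained by the face basis,
# which is the usual eigenface criterion for face vs. nonface.
```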