The document summarizes Navin Goel's thesis on using random projection for face recognition. It provides an overview of principal component analysis and the eigenfaces method for face recognition. It then introduces random projection as an alternative dimensionality reduction technique with lower computational complexity than PCA. The document outlines Goel's experimental procedure using various face datasets and recognition approaches. It presents results showing random projection achieved recognition rates equivalent to or better than PCA. The conclusion discusses improvements in performance from multiple ensembles and scoring techniques.

FACE RECOGNITION, EXPERIMENTS WITH RANDOM PROJECTION

Navin Goel
Graduate Student

Advisor: Dr. George Bebis
Associate Professor
Department of Computer Science and Engineering
University of Nevada, Reno
Overview

• Introduction and Thesis Scope
• Principal Component Analysis
• Method of Eigenfaces
• Random Projection
• Properties of Random Projection
• Random Projection for Face Recognition
• Experimental Procedure and Data Sets
• Recognition Approaches and Results
• Conclusion and Future Work
Introduction

Problem Statement: Identify a person's face image from a face database.

Applications: Human-computer interfaces, static matching of photographs, video surveillance, biometric security, image and film processing.
Challenges

Variations in pose: head position (frontal view, profile view, head tilt) and facial expressions.

Illumination changes: light direction and intensity changes, cluttered backgrounds, low-quality images.

Camera parameters: resolution, color balance, etc.

Occlusion: glasses, facial hair, and makeup.


Thesis Scope

Investigate the application of Random Projection (RP) to face recognition.

Evaluate the performance of RP for face recognition under various conditions and assumptions.

Propose an algorithm that replaces the learning step of PCA with a cheaper, more efficient step.
Principal Component Analysis (PCA)

For a set of M N-dimensional vectors {x_1, x_2, ..., x_M} (each image stored as a 1-D vector), PCA finds the eigenvalues and eigenvectors of the covariance matrix of the vectors:

C = \frac{1}{M} \sum_{i=1}^{M} (x_i - \mu)(x_i - \mu)^T

where \mu is the average of the image vectors. The eigenvectors u_k and eigenvalues \lambda_k satisfy

C u_k = \lambda_k u_k

Keep only the k eigenvectors corresponding to the k largest eigenvalues.
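The eigen-decomposition above can be written compactly with NumPy. The following is a minimal sketch rather than the thesis code; the function name, array shapes, and the use of numpy.linalg.eigh are assumptions.

import numpy as np

def pca_basis(X, k):
    """X: N x M array whose M columns are flattened face images.
    Returns the top-k eigenvector basis U (N x k) and the mean image mu (N x 1)."""
    mu = X.mean(axis=1, keepdims=True)        # average of the image vectors
    A = X - mu                                # centered data
    C = (A @ A.T) / X.shape[1]                # N x N covariance matrix
    eigvals, eigvecs = np.linalg.eigh(C)      # eigh: C is symmetric
    order = np.argsort(eigvals)[::-1][:k]     # indices of the k largest eigenvalues
    return eigvecs[:, order], mu              # N x k basis and mean image

For full-size face images N is large, so eigenfaces implementations commonly compute the eigenvectors of the smaller M x M matrix A^T A instead; the sketch above omits that optimization for brevity.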


Method of Eigenfaces

• Apply PCA to the training dataset.
• Project the gallery set images onto the reduced-dimensional eigenspace.
• For each test set image (see the sketch after this list):
  • Project the image onto the reduced-dimensional eigenspace.
  • Measure similarity by computing the distance between the projection coefficients of the test image and those of each gallery image.
  • The face is correctly recognized if the closest gallery image belongs to the same person as the test image.
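A minimal sketch of the closest-match step listed above, assuming pca_basis from the previous sketch and Euclidean distance as the similarity measure (the thesis may use a different metric).

import numpy as np

def closest_match(U, mu, gallery, gallery_ids, test_img):
    """U, mu: output of pca_basis; gallery: N x G array of gallery images;
    test_img: length-N vector. Returns the identity of the closest gallery image."""
    G = U.T @ (gallery - mu)                   # k x G gallery coefficients
    t = U.T @ (test_img.reshape(-1, 1) - mu)   # k x 1 test coefficients
    d = np.linalg.norm(G - t, axis=0)          # distance to every gallery image
    return gallery_ids[int(np.argmin(d))]      # label of the nearest neighbor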
Random Projection (RP)

The original N-dimensional data is projected onto a d-dimensional subspace (d << N) using

X_{d \times M} = R_{d \times N} \, x_{N \times M}

where x_{N \times M} is the original data and R_{d \times N} is a random matrix.

The random matrix is constructed in two steps:

• Each entry of the matrix is drawn from N(0, 1), the Gaussian distribution

N(\mu, \sigma^2) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)

with mean \mu and variance \sigma^2.

• The d rows of the matrix are orthogonalized using the Gram-Schmidt algorithm and then normalized to unit length, as sketched below.
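A minimal sketch of this construction; numpy.linalg.qr is used here in place of an explicit Gram-Schmidt loop (it produces an equivalent orthonormal basis), and the function name and shapes are assumptions.

import numpy as np

def random_projection_matrix(d, N, seed=None):
    """Return a d x N matrix with N(0,1) entries whose rows have been
    orthogonalized and normalized to unit length."""
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((d, N))   # each entry ~ N(0, 1)
    Q, _ = np.linalg.qr(R.T)          # orthonormalize (Gram-Schmidt equivalent)
    return Q.T                        # d x N matrix with orthonormal rows

# Projection of the data: X_d = R x, where the columns of x are N-dimensional images.
# R = random_projection_matrix(d=30, N=112 * 92)
# X_d = R @ x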
Random Projection – Data Independence
S. Dasgupta. Experiments with Random Projection. Uncertainty in Artificial Intelligence, 2000.

Random projection does not depend on the data itself, which makes it suitable for domains such as digital images, document databases, and signal processing.

Figure (Dasgupta, 2000): two 1-separated spherical Gaussians projected onto a random subspace of dimension 20; error bars show 1 standard deviation over 40 trials per dimension.
Random Projection – Eccentricity
S. Dasgupta. Experiments with Random Projection. Uncertainty in Artificial Intelligence, 2000.

RP makes highly eccentric Gaussian clusters more spherical; it is conceptually easier to design algorithms for spherical clusters than for ellipsoidal ones.

Figure (Dasgupta, 2000): a Gaussian in a 50-dimensional subspace with eccentricity 1,000 is projected onto lower dimensions.
Random Projection – Complexity
E. Bingham and H. Mannila. Random projection in dimensionality reduction: applications to image
and text data. Proceedings of the 7th ACM SIGKDD International Conference on Knowledge
Discovery and Data Mining, pp. 245-250, August 26-29, 2001.

The complexity of RP is of the order n^2 (quadratic), in contrast to PCA, which is of the order n^3 (cubic).

Figure (Bingham and Mannila, 2001): number of floating-point operations needed when reducing the dimensionality of image data using RP, SRP, PCA, and DCT, plotted on a logarithmic scale.
Random Projection – Lower Bound
S. Dasgupta. Experiments with Random Projection. Uncertainty in Artificial Intelligence, 2000.

What value of d (the dimension of the lower-dimensional space) must be chosen?

Figure (Dasgupta, 2000): 1-separated mixtures of k Gaussians of dimension 100 were projected onto d = ln k dimensions. PCA, by contrast, cannot be expected to reduce the dimensionality of k Gaussians below Ω(k).
Random Projection for Face Recognition

• Generate a lower-dimensional random subspace.
• Project the gallery set images onto the reduced-dimensional random space.
• For each test set image (see the sketch after this list):
  • Project the image onto the reduced-dimensional random space.
  • Measure similarity by computing the distance between the projection coefficients of the test image and those of each gallery image.
  • The face is correctly recognized if the closest gallery image belongs to the same person as the test image.
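A minimal end-to-end sketch of the RP closest-match procedure listed above; the dataset shapes, the Euclidean distance, and the function name are assumptions, and the orthonormalization again uses QR in place of explicit Gram-Schmidt.

import numpy as np

def rp_recognition_rate(gallery, gallery_ids, tests, test_ids, d, seed=0):
    """gallery: N x G, tests: N x T; columns are flattened face images.
    Returns the fraction of test images whose closest gallery image has the same identity."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((gallery.shape[0], d)))
    R = Q.T                              # d x N random projection matrix with orthonormal rows
    G = R @ gallery                      # gallery coefficients in the random subspace
    T = R @ tests                        # test coefficients in the random subspace
    correct = 0
    for j in range(T.shape[1]):
        dists = np.linalg.norm(G - T[:, [j]], axis=0)
        correct += int(gallery_ids[int(np.argmin(dists))] == test_ids[j])
    return correct / T.shape[1]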
Experimental Procedure

Flowchart showing the main steps of the approach.
Data Sets

Face images from the ORL data set for a particular subject.

Face images from the CVL data set for a particular subject.

Face images from the AR data set for a particular subject.
Closest Match Approach

Results are averaged over 5 experiments.

Flowchart for calculating the recognition rate using the closest match approach.
Closest Match Approach + Majority Voting

Flowchart for calculating the recognition rate using the closest match approach combined with the majority-voting technique.
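A hedged sketch of the majority-voting idea: run the closest-match step with several independent random matrices (different seeds) and let the ensembles vote on each test image. The ensemble size, function name, and distance measure below are assumptions.

import numpy as np
from collections import Counter

def majority_vote_id(gallery, gallery_ids, test_img, d, n_ensembles=5):
    """Return the identity chosen by the most random ensembles."""
    votes = []
    for seed in range(n_ensembles):
        rng = np.random.default_rng(seed)
        Q, _ = np.linalg.qr(rng.standard_normal((gallery.shape[0], d)))
        G = Q.T @ gallery                    # gallery coefficients for this ensemble
        t = Q.T @ test_img.reshape(-1, 1)    # test coefficients for this ensemble
        votes.append(gallery_ids[int(np.argmin(np.linalg.norm(G - t, axis=0)))])
    return Counter(votes).most_common(1)[0][0]   # identity with the most votes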
Closest Match Approach + Scoring

Flowchart for calculating the recognition rate using the closest match approach combined with the scoring technique.
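One plausible reading of the scoring technique, sketched below as an assumption (the exact scheme is defined in the thesis): each random ensemble awards points to its top-n closest gallery matches rather than casting a single vote, and the identity with the highest total score is returned.

import numpy as np
from collections import defaultdict

def scored_id(gallery, gallery_ids, test_img, d, n_ensembles=5, top_n=3):
    """Return the identity with the highest accumulated score across ensembles."""
    scores = defaultdict(float)
    for seed in range(n_ensembles):
        rng = np.random.default_rng(seed)
        Q, _ = np.linalg.qr(rng.standard_normal((gallery.shape[0], d)))
        G = Q.T @ gallery
        t = Q.T @ test_img.reshape(-1, 1)
        ranked = np.argsort(np.linalg.norm(G - t, axis=0))[:top_n]
        for rank, idx in enumerate(ranked):
            scores[gallery_ids[idx]] += top_n - rank   # rank 0 gets top_n points, etc.
    return max(scores, key=scores.get)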
Results for the ORL database

Experiment on the ORL database using the closest match approach + majority-voting technique, where the training set consists of the same subjects as the gallery and test sets.

Experiment on the ORL database using the closest match approach + majority-voting technique, where the training set consists of different subjects from the gallery and test sets.
Results for the CVL database

Experiment on the CVL database using the closest match approach + majority-voting technique, where the training set consists of the same subjects as the gallery and test sets.

Experiment on the CVL database using the closest match approach + majority voting, where the training set consists of different subjects from the gallery and test sets.
Results for the AR database

Experiment on the AR database using the closest match approach + majority voting, where the training set consists of random subjects and the gallery and test sets contain different combinations.
ORL database for Multiple Ensembles

Plot of RCA with the majority-voting technique for 5 and 30 different random seeds, where the training set consists of different subjects from the gallery and test sets.
Results for the ORL database with Scoring Technique

Experiment on the ORL database using the closest match approach + scoring, where the training set consists of the same subjects as the gallery and test sets.

Experiment on the ORL database using the closest match approach + scoring, where the training set consists of different subjects from the gallery and test sets.
Results for the CVL database with Scoring Technique

Experiment on the CVL database using the closest match approach + scoring, where the training set consists of different subjects from the gallery and test sets.
Results for the AR database with Scoring Technique

Experiment on the AR database using the closest match approach + scoring, where the training set consists of random subjects and the gallery and test sets contain different combinations.
Conclusion

• We obtained recognition rates equivalent to PCA and, in most cases, better.
• The RP matrix is independent of the training data.
• The main advantage of RP is its computational complexity: quadratic for RP versus cubic for PCA.
• RP works better when the gallery-to-test-set ratio is higher.
• RP works better than PCA when the training set images differ from the gallery and test sets.
• RP shows irregular results for single runs, but improves with multiple ensembles.
• Majority voting over closest matches further improves the performance of RP.
• For the scoring technique, the greater the number of top hits per image, the better the performance.
Future Work

• Combine different random ensembles to improve efficiency and accuracy.
