

International Journal of Computer Theory and Engineering, Vol. 2, No. 4, August, 2010
1793-8201

Face Recognition Using Eigen Faces and Artificial Neural Network

Mayank Agarwal, Nikunj Jain, Mr. Manish Kumar and Himanshu Agrawal

Mayank Agarwal, Student Member IEEE, Jaypee Institute of Information Technology University, Noida, India (email: [email protected]).
Nikunj Jain, Student, Jaypee Institute of Information Technology University, Noida, India (email: [email protected]).
Mr. Manish Kumar, Sr. Lecturer (ECE), Jaypee Institute of Information Technology University, Noida, India (email: [email protected]).
Himanshu Agrawal, Student Member IEEE, Jaypee Institute of Information Technology University, Noida, India (email: [email protected]).

Abstract—Face is a complex multidimensional visual model and developing a computational model for face recognition is difficult. The paper presents a methodology for face recognition based on an information theory approach of coding and decoding the face image. The proposed methodology is a combination of two stages – feature extraction using principal component analysis and recognition using a feed-forward back-propagation neural network. The algorithm has been tested on 400 images (40 classes). A recognition score for the test lot is calculated by considering almost all the variants of feature extraction. The proposed methods were tested on the Olivetti and Oracle Research Laboratory (ORL) face database. Test results gave a recognition rate of 97.018%.

Index Terms—Face recognition, Principal component analysis (PCA), Artificial Neural Network (ANN), Eigenvector, Eigenface.

I. INTRODUCTION

The face is the primary focus of attention in society, playing a major role in conveying identity and emotion. Although the ability to infer intelligence or character from facial appearance is suspect, the human ability to recognize faces is remarkable. A human can recognize thousands of faces learned throughout a lifetime and identify familiar faces at a glance even after years of separation. This skill is quite robust, despite large changes in the visual stimulus due to viewing conditions, expression, aging, and distractions such as glasses, beards or changes in hair style. Face recognition has become an important issue in many applications such as security systems, credit card verification and criminal identification. Even the ability to merely detect faces, as opposed to recognizing them, can be important. Although it is clear that people are good at face recognition, it is not at all obvious how faces are encoded or decoded by the human brain. Human face recognition has been studied for more than twenty years. Developing a computational model of face recognition is quite difficult, because faces are complex, multi-dimensional visual stimuli. Therefore, face recognition is a very high level computer vision task, in which many early vision techniques can be involved. For face identification, the starting step involves extraction of the relevant features from facial images. A big challenge is how to quantize facial features so that a computer is able to recognize a face, given a set of features. Investigations by numerous researchers over the past several years indicate that certain facial characteristics are used by human beings to identify faces.

II. RELATED WORK

There are two basic methods for face recognition. The first method is based on extracting feature vectors from the basic parts of a face such as the eyes, nose, mouth, and chin, with the help of deformable templates and extensive mathematics. Key information from the basic parts of the face is then gathered and converted into a feature vector. Yuille and Cohen [1] used deformable templates in contour extraction of face images.

Another method is based on information theory concepts, viz. the principal component analysis method. In this method, the information that best describes a face is derived from the entire face image. Based on the Karhunen-Loeve expansion in pattern recognition, Kirby and Sirovich [5], [6] have shown that any particular face can be represented in terms of a best coordinate system termed "eigenfaces". These are the eigenfunctions of the average covariance of the ensemble of faces. Later, Turk and Pentland [7] proposed a face recognition method based on the eigenfaces approach.

An unsupervised pattern recognition scheme is proposed in this paper which is independent of excessive geometry and computation. The recognition system is implemented based on eigenfaces, PCA and ANN. Principal component analysis for face recognition is based on the information theory approach, in which the relevant information in a face image is extracted as efficiently as possible. An Artificial Neural Network is then used for classification. The Neural Network concept is used because of its ability to learn from observed data.

III. PROPOSED TECHNIQUE

The proposed technique is coding and decoding of face images, emphasizing the significant local and global features. In the language of information theory, the relevant information in a face image is extracted, encoded and then compared with a database of models. The proposed method is independent of any judgment of features (open/closed eyes, different facial expressions, with and without glasses). The face recognition system is as follows:


Fig. 1 – Face library formation and getting the face descriptor

A. Preprocessing and Face Library Formation

Image size normalization, histogram equalization and conversion into gray scale are used for preprocessing of the image. This module automatically reduces every face image to X*Y pixels (based on user request) and can redistribute the intensity of the face images (histogram equalization) in order to improve face recognition performance. Face images are stored in a face library in the system. Every action, such as training set or eigenface formation, is performed on this face library. The face library is further divided into two sets – a training dataset (60% of each individual's images) and a testing dataset (the remaining 40%). The process is described in Fig. 1.
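As an illustration of this preprocessing stage, the following NumPy sketch applies the three steps named above (grayscale conversion, resizing, histogram equalization) to an image supplied as an array. The 56 x 46 target size is taken from the experiment in Section IV; the function name and the nearest-neighbour resizing strategy are our own assumptions, not part of the original system.

```python
import numpy as np

def preprocess(face, out_h=56, out_w=46):
    """Rough sketch: grayscale conversion, nearest-neighbour resize, histogram equalization."""
    img = np.asarray(face, dtype=np.float64)
    if img.ndim == 3:                                  # colour image -> grayscale by channel mean
        img = img.mean(axis=2)
    rows = np.arange(out_h) * img.shape[0] // out_h    # nearest-neighbour row/column indices
    cols = np.arange(out_w) * img.shape[1] // out_w
    img = img[rows][:, cols]
    img = np.clip(img, 0, 255).astype(np.uint8)        # 256 grey levels, as in the ORL images
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min() + 1e-12) * 255.0
    return cdf[img]                                    # equalized image, float values in [0, 255]

dummy = np.random.randint(0, 256, size=(112, 92))      # stand-in for one 112 x 92 ORL image
print(preprocess(dummy).shape)                         # (56, 46)
```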

B. Calculating Eigenfaces

The face library entries are normalized. Eigenfaces are calculated from the training set and stored. An individual face can be represented exactly in terms of a linear combination of eigenfaces. The face can also be approximated using only the best M eigenfaces, which have the largest eigenvalues and account for the most variance within the set of face images. The best M eigenfaces span an M-dimensional subspace, called the "face space", of all possible images. For calculating the eigenfaces, the PCA algorithm [5], [8] was used.
Let a face image I(x, y) be a two-dimensional N x N array. An image may also be considered as a vector of dimension N^2, so that a typical image of size 92 x 112 becomes a vector of dimension 10,304, or equivalently a point in 10,304-dimensional space. An ensemble of images then maps to a collection of points in this huge space.
Images of faces, being similar in overall configuration, will not be randomly distributed in this huge image space and thus can be described by a relatively low dimensional subspace. The main idea of principal component analysis (or the Karhunen-Loeve expansion) is to find the vectors that best account for the distribution of face images within the entire image space. These vectors define the subspace of face images, which we call "face space". Each vector is of length N^2, describes an N x N image, and is a linear combination of the original face images. Because these vectors are the eigenvectors of the covariance matrix corresponding to the original face images, and because they are face-like in appearance, we refer to them as "eigenfaces". Some examples of eigenfaces are shown in Figure 3.
Let the training set of face images be Γ1, Γ2, Γ3, ..., ΓM; then the average of the set is defined by
Ψ = (1/M) Σ_{n=1}^{M} Γn   (1)
Each face differs from the average by the vector
Φi = Γi − Ψ   (2)
An example training set is shown in Figure 2, with the average face Ψ.

Fig. 2 – Eigenfaces and their mean image

This set of very large vectors is then subject to principal component analysis, which seeks a set of M orthonormal vectors un that best describes the distribution of the data. The kth vector, uk, is chosen such that
λk = (1/M) Σ_{n=1}^{M} (uk^T Φn)^2   (3)
is a maximum, subject to the orthonormality constraint
ul^T uk = δ_lk   (δ_lk = 1 if l = k, and 0 otherwise).
The vectors uk and scalars λk are the eigenvectors and eigenvalues, respectively, of the covariance matrix
C = (1/M) Σ_{n=1}^{M} Φn Φn^T = A A^T   (4)

where the matrix A = [Φ1, Φ2, ..., ΦM]. The covariance matrix C, however, is an N^2 x N^2 real symmetric matrix, and determining its N^2 eigenvectors and eigenvalues is an intractable task for typical image sizes. We need a computationally feasible method to find these eigenvectors.
If the number of data points in the image space is less than the dimension of the space (M < N^2), there will be only M − 1 meaningful eigenvectors, rather than N^2; the remaining eigenvectors will have associated eigenvalues of zero. We can solve for the N^2-dimensional eigenvectors in this case by first solving for the eigenvectors of an M x M matrix (for example, a 16 x 16 matrix rather than a 10,304 x 10,304 matrix) and then taking appropriate linear combinations of the face images Φi.
Consider the eigenvectors vi of A^T A such that
A^T A vi = μi vi   (5)
Premultiplying both sides by A, we have
A A^T (A vi) = μi (A vi)   (6)
from which we see that the A vi are the eigenvectors of C = A A^T.
Following this analysis, we construct the M x M matrix L = A^T A, where Lnm = Φm^T Φn, and find the M eigenvectors vi of L. These vectors determine linear combinations of the M training set face images to form the eigenfaces ui:
ui = Σ_{k=1}^{M} vik Φk,   i = 1, 2, ..., M   (7)
With this analysis, the calculations are greatly reduced, from the order of the number of pixels in the images (N^2) to the order of the number of images in the training set (M). In practice, the training set of face images will be relatively small (M << N^2), and the calculations become quite manageable. The associated eigenvalues allow us to rank the eigenvectors according to their usefulness in characterizing the variation among the images.
The success of this algorithm is based on the evaluation of the eigenvalues and eigenvectors of the real symmetric matrix L that is composed from the training set of images.

Fig. 3 – Eigenfaces
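A minimal NumPy sketch of this reduced computation (the mean face of Eq. (1), the difference vectors of Eq. (2), the small M x M matrix L, and the eigenfaces of Eq. (7)) is given below. The function name and the eigenvalue routine (numpy.linalg.eigh) are our choices and are not prescribed by the paper.

```python
import numpy as np

def compute_eigenfaces(faces, m_prime):
    """faces: (M, N*N) array, one flattened training image per row.
    Returns the mean face Psi and the m_prime leading eigenfaces as rows."""
    faces = np.asarray(faces, dtype=np.float64)
    psi = faces.mean(axis=0)                    # Eq. (1): average face
    Phi = faces - psi                           # Eq. (2): difference vectors (rows of A^T)
    L = Phi @ Phi.T                             # small M x M matrix, L[n, m] = Phi_m^T Phi_n
    eigvals, V = np.linalg.eigh(L)              # eigenvectors v_i of L (columns of V)
    order = np.argsort(eigvals)[::-1]           # rank by decreasing eigenvalue
    V = V[:, order]
    U = V.T @ Phi                               # Eq. (7): u_i = sum_k v_ik * Phi_k
    U /= np.linalg.norm(U, axis=1, keepdims=True) + 1e-12
    return psi, U[:m_prime]

faces = np.random.rand(6, 56 * 46)              # toy stand-in for 6 preprocessed images
psi, U = compute_eigenfaces(faces, m_prime=5)
print(U.shape)                                  # (5, 2576): five eigenfaces of length N*N
```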
C. Using Eigenfaces to Classify the Face Image and Get the Face Descriptor

The eigenface images calculated from the eigenvectors of L span a basis set with which to describe face images. Sirovich and Kirby evaluated a limited version of this framework on an ensemble of M = 115 images of Caucasian males digitized in a controlled manner, and found that 40 eigenfaces were sufficient for a very good description of face images. With M' = 40 eigenfaces, RMS pixel-by-pixel errors in representing cropped versions of face images were about 2%.
In practice, a smaller M' can be sufficient for identification, since accurate reconstruction of the image is not a requirement. It was observed that, for a training set of fourteen face images, seven eigenfaces were enough for a sufficient description of the training set members. For maximum accuracy, however, the number of eigenfaces should be equal to the number of images in the training set.
In this framework, identification becomes a pattern recognition task. The eigenfaces span an M'-dimensional subspace of the original N^2 image space, and the M' significant eigenvectors of the L matrix are chosen as those with the largest associated eigenvalues.
A new face image Γ is transformed into its eigenface components (projected onto "face space") by the simple operation
wk = uk^T (Γ − Ψ),   for k = 1, 2, ..., M'   (8)
The weights wk form a feature vector, or face descriptor,
Ω^T = [w1, w2, ..., wM']   (9)
Ω^T describes the contribution of each eigenface in representing the input face image, treating the eigenfaces as a basis set for face images. The feature vector/face descriptor is then used in a standard pattern recognition algorithm.
In the end, one can get a decent reconstruction of the image using only a few eigenfaces (M').
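The projection of Eqs. (8) and (9) is a single matrix-vector product. A small sketch is shown below; it reuses the psi and U produced by the eigenface sketch above, with random stand-in values here so the snippet runs on its own.

```python
import numpy as np

def face_descriptor(face_vec, psi, U):
    """Eq. (8): w_k = u_k^T (Gamma - Psi), stacked into the descriptor Omega of Eq. (9)."""
    return U @ (np.asarray(face_vec, dtype=np.float64) - psi)

psi = np.random.rand(2576)                      # stand-in mean face
U = np.random.rand(5, 2576)                     # stand-in set of M' = 5 eigenfaces
omega = face_descriptor(np.random.rand(2576), psi, U)
print(omega.shape)                              # (5,): one weight per eigenface
```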
D. Training of Neural Networks

Neural networks have been trained to perform complex functions in various fields of application, including pattern recognition, identification, classification, speech, vision and control systems.


Fig. 5 – Training of the Neural Network

One ANN is used for each person in the database, and the face descriptors are used as inputs to train the networks [3]. During training of the ANNs, the face descriptors that belong to the same person are used as positive examples for that person's network (so that the network gives 1 as output) and as negative examples for the other networks (so that those networks give 0 as output). Fig. 5 shows a schematic diagram of the network training.
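A compact sketch of such a per-person network is shown below. It mirrors the architecture listed in Section IV (inputs equal to the number of eigenfaces, 10 tanh hidden neurons, one output) but, to keep the example short and dependency-free, it is trained with plain gradient descent on the MSE rather than the Levenberg-Marquardt (trainlm) routine used in the paper; the class name, learning rate and initialization are our own assumptions.

```python
import numpy as np

class PersonNet:
    """Tiny feed-forward net (tanh hidden layer, sigmoid output), one per subject."""
    def __init__(self, n_inputs, n_hidden=10, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_hidden, n_inputs))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (1, n_hidden))
        self.b2 = np.zeros(1)

    def forward(self, X):
        self.H = np.tanh(X @ self.W1.T + self.b1)              # hidden activations
        self.Y = 1.0 / (1.0 + np.exp(-(self.H @ self.W2.T + self.b2)))
        return self.Y                                          # shape (n_samples, 1)

    def train(self, X, targets, epochs=100, lr=0.5):
        t = np.asarray(targets, dtype=np.float64).reshape(-1, 1)
        for _ in range(epochs):                                # gradient descent on the MSE
            y = self.forward(X)
            d2 = (y - t) * y * (1.0 - y)                       # error signal at the output
            d1 = (d2 @ self.W2) * (1.0 - self.H ** 2)          # backpropagated through tanh
            self.W2 -= lr * d2.T @ self.H / len(X)
            self.b2 -= lr * d2.mean(axis=0)
            self.W1 -= lr * d1.T @ X / len(X)
            self.b1 -= lr * d1.mean(axis=0)

# toy usage: descriptors of "person 0" as positive examples, all others as negative
X = np.random.rand(40, 5)                                      # 40 descriptors with M' = 5
targets = np.zeros(40); targets[:6] = 1.0                      # this person's six faces -> 1
net = PersonNet(n_inputs=5)
net.train(X, targets)
print(net.forward(X[:1])[0, 0])                                # network's score for one descriptor
```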
E. Simulation of ANN for Recognition

A new test image is taken for recognition (from the test dataset) and its face descriptor is calculated from the M' eigenfaces found before. These new descriptors are given as input to every network, and the networks are simulated. The simulated results are compared, and if the maximum output exceeds a predefined threshold level, it is confirmed that this new face belongs to the recognized person with the maximum output (Fig. 6).

Fig. 6 – Testing of the Neural Network
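This decision rule can be sketched as follows, assuming a list of trained per-person networks such as the PersonNet objects from the previous sketch; the threshold value is an arbitrary placeholder rather than a figure taken from the paper.

```python
import numpy as np

def recognize(descriptor, nets, threshold=0.5):
    """Simulate every per-person network on one face descriptor and pick the
    largest output; accept it only if it exceeds the predefined threshold."""
    x = np.asarray(descriptor, dtype=np.float64).reshape(1, -1)
    outputs = np.array([net.forward(x)[0, 0] for net in nets])
    best = int(np.argmax(outputs))
    if outputs[best] > threshold:
        return best, outputs[best]       # index of the recognized person and its score
    return None, outputs[best]           # face not recognized as anyone in the database
```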
F. Reconstruction of the Face Image Using the Extracted Face Descriptor

A face image can be approximately reconstructed (rebuilt) by using its feature vector and the eigenfaces as
Γ'f = Φf + Ψ   (10)
where
Φf = Σ_{i=1}^{M'} wi ui   (11)
is the projected image. Eq. (10) says that the face image under consideration is rebuilt just by adding each eigenface, with its contribution wi from Eq. (11), to the average of the training set images. The degree of fit, or the "rebuild error ratio", can be expressed by means of the Euclidean distance between the original and the reconstructed face image, as given in Eq. (12):
ε = || Γi − Γ'i ||   (12)
It has been observed that the rebuild error ratio increases as the training set members differ heavily from each other. This is due to the addition of the average face image: when the members differ from each other (especially in image background), the average face image becomes more messy, and this increases the rebuild error ratio.
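Under the same stand-in variables as the earlier sketches (psi for the mean face, U for the M' eigenfaces), Eqs. (10)-(12) can be sketched as:

```python
import numpy as np

def reconstruct(omega, psi, U):
    """Eq. (11): projected image Phi_f = sum_i w_i u_i; Eq. (10): rebuilt face = Phi_f + Psi."""
    return psi + omega @ U

def rebuild_error(face_vec, psi, U):
    """Eq. (12): Euclidean distance between the original and the rebuilt face."""
    omega = U @ (face_vec - psi)
    return np.linalg.norm(face_vec - reconstruct(omega, psi, U))

psi, U = np.random.rand(2576), np.random.rand(5, 2576)   # stand-ins for mean face and eigenfaces
print(rebuild_error(np.random.rand(2576), psi, U))       # rebuild error for a random "face"
```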


IV. EXPERIMENT

The proposed method is tested on the ORL face database. The database has more than one image of each individual's face, taken under different conditions (expression, illumination, etc.). There are ten different images of each of 40 distinct subjects. Each image has a size of 112 x 92 pixels with 256 grey levels. For some subjects, the images were taken at different times, varying the lighting, facial expressions (open/closed eyes, smiling/not smiling) and facial details (glasses/no glasses). All the images were taken against a dark homogeneous background with the subjects in an upright, frontal position (with tolerance for some side movement). A preview image of the Database of Faces is available (Fig. 4). The original pictures of 112 x 92 pixels have been resized to 56 x 46, so that the input space has dimension 2576.
Eigenfaces are calculated using the PCA algorithm, and the experiment is performed by varying the number of eigenfaces used in the face space to calculate the face descriptors of the images.
The number of networks used is equal to the number of subjects in the database. The initial parameters of the Neural Network used in the experiment are given below:
• Type: feed-forward back-propagation network
• Number of layers: 3 (input, one hidden, and output layer)
  - Number of neurons in the input layer: the number of eigenfaces used to describe the faces
  - Number of neurons in the hidden layer: 10
  - Number of neurons in the output layer: 1
• Transfer function of the ith layer: tansig
• Training function: trainlm
• Number of epochs used in training: 100
• Back-propagation weight/bias learning function: learngdm
• Performance function: mse
Since the number of networks is equal to the number of people in the database, forty networks, one for each person, were created. Among the ten images of each subject, the first 6 are used for training the neural networks; these networks are then tested and their properties are updated. The trained networks are then used for recognition.
For testing the whole database, the faces used in training, testing and recognition are changed, and the recognition performance is given for the whole database. The complete face recognition process is shown in Fig. 4.

Fig. 4 – A complete process of the PCA, eigenface and ANN based face recognition system
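Putting the pieces together, a hypothetical end-to-end version of this experiment, reusing the compute_eigenfaces, face_descriptor, PersonNet and recognize sketches from the earlier sections (and a random array standing in for the real, preprocessed ORL data), might look like this:

```python
import numpy as np

orl = np.random.rand(40, 10, 56 * 46)            # stand-in: 40 subjects x 10 flattened 56x46 images
train = orl[:, :6].reshape(-1, 56 * 46)          # first 6 images per subject -> 240 training faces

psi, U = compute_eigenfaces(train, m_prime=50)   # 50 eigenfaces gave the best rate in Table I
train_desc = np.array([face_descriptor(f, psi, U) for f in train])

nets = []
for person in range(40):                         # one network per subject
    targets = np.zeros(len(train_desc))
    targets[person * 6:(person + 1) * 6] = 1.0   # own faces -> 1, every other face -> 0
    net = PersonNet(n_inputs=50)
    net.train(train_desc, targets)
    nets.append(net)

correct = 0
for person in range(40):
    for img in orl[person, 6:]:                  # the four unseen test images of this subject
        guess, _ = recognize(face_descriptor(img, psi, U), nets)
        correct += int(guess == person)
print("recognition rate:", correct / (40 * 4))
```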

V. ANALYSIS
The proposed technique is analyzed by varying the number
of eigenfaces used for feature extraction. The recognition
performance is shown in Table I.
The results derived from the proposed method are compared with two other techniques: (1) K-means [2] and (2) Fuzzy Ant with fuzzy C-means [2]. The comparison of the results is tabulated in Table II.
TABLE I: RECOGNITION SCORE OF FACE RECOGNITION USING PCA AND ANN (RECOGNITION RATE, %)

No. of Eigenfaces | Result 1 | Result 2 | Result 3 | Average of Results 1-3
 20               | 98.037   | 96.425   | 96.487   | 96.983
 30               | 96.037   | 96.581   | 96.581   | 96.399
 40               | 96.506   | 96.45    | 97.012   | 96.656
 50               | 96.525   | 97.231   | 97.3     | 97.018
 60               | 94.006   | 94.987   | 95.587   | 94.860
 70               | 94.643   | 96.031   | 95.556   | 95.410
 80               | 94.950   | 94.837   | 95.212   | 95
 90               | 93.356   | 94.431   | 93.439   | 93.742
100               | 95.250   | 93.993   | 93.893   | 94.379

TABLE II: COMPARISON OF THE RESULTS

Method                       | Recognition Rate (%)
K-means                      | 86.75
Fuzzy Ant with fuzzy C-means | 94.82
Proposed                     | 97.018

VI. CONCLUSION

The paper presents a face recognition approach using PCA and Neural Network techniques. The results are compared with K-means and Fuzzy Ant with fuzzy C-means, and the proposed technique gives a better recognition rate than the other two.
In Table I one can see the recognition rate obtained by varying the number of eigenfaces; the maximum recognition rate obtained for the whole dataset is 97.018%. Eigenfaces of highest eigenvalue are actually needed to produce a complete basis for the face space and, as shown in Table I, the maximum recognition rate is obtained for M = 50.
In Table II one can see the advantage of using the proposed face recognition method over the K-means method and the Fuzzy Ant with fuzzy C-means based algorithm.
The eigenface method is very sensitive to head orientation, and most of the mismatches occur for images with large head orientations.

By choosing PCA as a feature selection technique (for the set of images from the ORL Database of Faces), one can reduce the space dimension from 2576 to 50 (equal to the number of selected eigenfaces of highest eigenvalue).

REFERENCES
[1] Yuille, A. L., Cohen, D. S., and Hallinan, P. W., "Feature extraction from faces using deformable templates", Proc. of CVPR, 1989.
[2] S. Makdee, C. Kimpan, and S. Pansang, "Invariant range image multi-pose face recognition using fuzzy ant algorithm and membership matching score", Proceedings of the 2007 IEEE International Symposium on Signal Processing and Information Technology, 2007, pp. 252-256.
[3] Victor-Emil and Luliana-Florentina, "Face recognition using a fuzzy-Gaussian ANN", IEEE 2002 Proceedings, Aug. 2002, pp. 361-368.
[4] Howard Demuth, Mark Beale, and Martin Hagan, "Neural Network Toolbox".
[5] Kirby, M., and Sirovich, L., "Application of the Karhunen-Loeve procedure for the characterization of human faces", IEEE PAMI, Vol. 12, pp. 103-108, 1990.
[6] Sirovich, L., and Kirby, M., "Low-dimensional procedure for the characterization of human faces", J. Opt. Soc. Am. A, Vol. 4, No. 3, pp. 519-524, 1987.
[7] Turk, M., and Pentland, A., "Eigenfaces for recognition", Journal of Cognitive Neuroscience, Vol. 3, pp. 71-86, 1991.
[8] S. Gong, S. J. McKenna, and A. Psarrou, Dynamic Vision, Imperial College Press, London, 2000.
[9] Manjunath, B. S., Chellappa, R., and Malsburg, C., "A feature based approach to face recognition", Trans. of IEEE, pp. 373-378, 1992.
[10] http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html, for downloading the ORL database.
