
Shape Invariant Recognition of Segmented Human Faces using Eigenfaces

Zahid Riaz, Michael Beetz, Bernd Radig
Department of Informatics
Technical University of Munich, Germany
{riaz,beetz,radig}@in.tum.de

Abstract: This paper describes an efficient approach for face recognition as a two-step process: 1) segmenting the face region from an image using an appearance-based model, 2) applying eigenfaces to the segmented face region for person identification. The efficiency lies not only in the generation of the appearance model, which uses an explicit approach for shape and texture, but also in the combined use of these techniques. The result is an algorithm that is robust against variations in facial expressions. Moreover, it reduces the texture used for classification to about 12% of the full face image. Experiments were performed on the Cohn-Kanade facial database using ten subjects for training and seven for testing, achieving a successful face recognition rate of up to 92.85% with and without facial expressions. Face recognition using Principal Component Analysis (PCA) is fast and efficient, while the extracted appearance model can further be used for face recognition and tracking under lighting and pose variations. This combination is simple to model and to apply in real time.

Keywords: active shape models, active appearance models, principal components analysis, eigenfaces, face recognition

I. INTRODUCTION

Over the last three decades of face recognition technology, many commercial systems have become available to identify human faces. Face recognition nevertheless remains an outstanding challenge in the presence of variations such as facial expressions, pose, non-uniform illumination and occlusion. Meanwhile the technology has extended its role to Human-Computer Interaction (HCI) and Human-Robot Interaction (HRI). Person identification is one of the key tasks while interacting with robots, supporting unattended system security and authentication of the human interacting with the system. This problem has been addressed in various scenarios by researchers, resulting in commercially available face recognition systems [1,2]. However, other higher-level applications such as facial expression recognition and face tracking still remain outstanding along with person identification. This gives rise to the idea of a framework suitable for solving these issues together.

As cameras are widely used, mounted on computer screens, embedded into mobile phones and installed in everyday living and working environments, they become valuable tools for human-system interaction. A particularly important aspect of this interaction is the detection and recognition of faces and the interpretation of facial expressions. These capabilities are deeply rooted in the human visual system and are a crucial building block for social interaction. Consequently, they are an important step towards the acceptance of many technical systems. Although faces are the most important and natural means of human-human interaction, outstanding challenges such as uniqueness, performance and circumvention kept the market value of face recognition somewhat below that of other biometrics in 2003. In 2006, however, face recognition technology rose again to 19% of the biometric market, as shown in Figure 1.

Figure 1 Face recognition technology compared to other biometrics for the years 2003 and 2006

This publication focuses on one aspect of natural human-computer interfaces. Our goal is to build a real-time face recognition system that runs robustly in real-world environments. We develop it using model-based image interpretation techniques, which have proven their great potential to fulfil current and future demands on real-world image understanding. Our approach comprises methods that robustly localize facial features, seamlessly track them through image sequences, and finally infer the face recognition.

The remainder of this paper is divided into five sections. Section II deals with work related to the proposed scheme. Section III describes the proposed scheme. Section IV describes face image segmentation using a model-based approach. Section V explains the feature extraction technique using the eigenface approach. Finally, experimental results are shown in Section VI.
II. RELATED WORK

The problem of person identification lies in the area of pattern recognition, and various techniques have been applied to it over the last few decades. A formal method of classifying faces was first proposed by Francis Galton [2] in 1888. Galton proposed collecting facial profiles as curves, finding their norm, and then classifying other profiles by their deviations from the norm. The classification resulted in a vector of independent measures that could be compared with other vectors in a database. Traditional recognition systems identify humans using techniques such as feature-based recognition, face-geometry-based recognition, classifier design and model-based methods. A term defined in [3] is Distinguishing Component Analysis (DCA), which is compared to Principal Components Analysis (PCA) in an attempt to capture what distinguishes one subject from the others. In [4] Luca and Roli used a similar kind of technique, a fusion of Linear Discriminant Analysis (LDA) and PCA: PCA is used to reduce the data size, while LDA is used to classify the data. A similar approach is taken in [5], which applies PCA first and LDA in a second step. The idea of combining PCA and LDA is to improve the generalization capability when only a few samples per class are given; PCA and LDA are two commonly used techniques for data classification and dimensionality reduction, belonging to the same class of methods [6]. In [7] the authors review Hidden Markov Models (HMM), Independent Components Analysis (ICA), Neural Networks and PCA.

This paper focuses on modelling the human face using a two-dimensional approach based on shape and texture information and then utilising this model for recognition purposes. In the literature, such models are called Active Appearance Models (AAMs), introduced by Edwards et al. [8]. These models utilize the shape information based on a point distribution of various landmark points marked on the face image. A parameterized recognition of faces was performed by Edwards et al. [9], who used the model parameters for recognition with LDA. PCA, however, was first used by Sirovich and Kirby [10] and later adopted by Turk and Pentland, who introduced the famous idea of eigenfaces [11,12], taken up by many researchers afterwards [13].

III. OUR APPROACH

The goal of this paper is to develop a system for face recognition that can recognize persons in real time for HCI applications, in the presence of the facial expressions of daily-life environments. A model-based technique followed by eigenfaces is utilized for this purpose. Active appearance models (AAMs) are strongly applicable at this stage; however, we try to achieve the accuracy using the textural approach only.

All the subjects in the database are labelled for identification. An active shape model (ASM) is fitted to all the face images in our database; the shape model can be fitted with any fitting algorithm [14,15]. The shape model fitted to the training face images is then used to define the reference shape, which is simply the mean shape of all the shapes in our database. Given a reference shape S_ref and an example image I fitted with shape S, the texture can be mapped from the example shape to the reference shape using an affine transformation. Prior to the mapping, however, we apply a planar subdivision, i.e. Delaunay triangulation, to the reference and example shapes. The texture vector for the corresponding image is stored as T. Instead of parametrizing our model, we use this representative texture for the eigenface classifier. The use of the eigenspace at this stage is suitable in two major respects: 1) it reduces the amount of texture used in classification, segmenting about 12% of the gray-value information of the original image, compared to the conventional eigenface approach, which uses the full image texture; 2) the texture mapping onto the reference shape provides shape normalization for all the available images, which moulds the facial expressions to the reference shape, so the facial expressions can be controlled to a small extent. Further, lighting variations are normalized in the extracted texture to improve the segmented face for classification. The approach is shown in Figure 2.

Figure 2 Our Approach

This established framework can be used either to parameterize the appearance model or to use the texture information for the eigenface classifier. Given a texture vector T, it can be projected onto the eigenspace using a linear combination:

T_im = Σ_{i=1}^{n} w_i φ_i

where the w_i are the weights for the example image and the φ_i are the corresponding eigenvectors over the whole space of n dimensions.
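As a concrete illustration of this projection, the following is a minimal sketch, not the authors' implementation; all function and variable names are our own, and the eigenvectors are obtained here via an SVD of the centered texture matrix:

```python
import numpy as np

def build_eigenspace(textures, n_components):
    """Compute the mean texture and the top eigenvectors (eigenfaces)
    from a stack of texture vectors, one row per training image."""
    mean = textures.mean(axis=0)
    centered = textures - mean
    # Eigenvectors of the covariance matrix, obtained as the rows of vt.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def project(texture, mean, eigenvectors):
    """Weights w_i of an example texture in the eigenspace, so that
    T_im ≈ mean + sum_i w_i * phi_i."""
    return eigenvectors @ (texture - mean)

# Toy usage: 10 training textures of 50 gray values each.
rng = np.random.default_rng(0)
T = rng.normal(size=(10, 50))
mean, E = build_eigenspace(T, n_components=5)
w = project(T[0], mean, E)   # weight vector of the first image
recon = mean + E.T @ w       # reconstruction T_im from the weights
```

With real training textures, the weight vector w is the compact representation that the eigenface classifier compares.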
IV. APPEARANCE MODEL FOR FACES

A. Active Shape Models (ASM)

Different kinds of shape models have been utilized by researchers, depending on the application. Some are landmark-based models, defining fixed points annotated on the images and then defining the boundaries around the objects; others rely on a contour-based approach, in which different contours define the shape of the object, outlining it and covering the features inside the object [16]. Landmark-based models [17,18,19], however, provide the exact location of the features inside the object. We utilize a 2D face model as shown in Figure 3. This point distribution model (PDM) consists of 134 landmarks which prominently describe the locations of the individual facial features. The model covers the full face area except some parts of the human head such as the hair, forehead and ears. From the analysis of human faces it is observed that most of the expressional changes lie in this segmented region [20]. The shape model is projected onto the image plane and fitted to the face; we fit the shape by training an objective function, a technique devised by Wimmer et al. [14].

Figure 3 2D shape model

An ASM is parameterized using PCA to form the shape feature vector:

x = x_m + P_s b_s

where the shape x is parameterized using the mean shape x_m and the matrix of eigenvectors P_s to obtain the parameter vector b_s [21]. In our case, however, the face parameters are only required for model fitting.

B. Active Appearance Models (AAM)

For the various instances of the same person, different types of variation need to be modelled: for example, shape deformations, covering both expression changes and pose variations, along with texture variations caused by illumination. Once we have the model information for an example image, we can extract the texture information easily.

Figure 4 (Top Left) Face fitted with mesh, (Top Right) Mesh of the input image, (Bottom Left) Average shape onto which the texture is warped, (Bottom Right) Texture warping results on the average face

At first, the shape variation needs to be controlled in order to record the texture. This is achieved by defining a reference shape (the mean shape in our case) for the whole database. Figure 4 (bottom left) shows the average shape (mean shape) of the subject in consideration. Delaunay triangulation is used to divide the shape into a set of facets. The Delaunay triangulation is the triangulation that is equivalent to the nerve of the cells in a Voronoi diagram, i.e. the triangulation of the convex hull of the points in the diagram in which every circumcircle of a triangle is an empty circle [22].

Given the set of shape points x of the input example image and x_avg of the average image, we find the texture vector g_im as follows:
a) Compute the pixel positions in the average shape.
b) Find the corresponding positions in the example image using the affine transformation.
c) Sample the texture values at the points inside the convex hull of the average image, forming the texture vector g_im.

The texture vector is normalized to remove global lighting effects. This is performed by applying the linear transformation [21]

g = (g_im − β·1) / α

where

β = (g_im · 1) / n
α = |g_im|² / n

and g is the texture vector obtained after the lighting adjustment. Varying the textural parameters causes changes in the image similar to eigenfaces [26]. Face images segmented with the appearance model are shown in Figure 5.

Figure 5 Segmenting face patches using the AAM approach
V. EIGENFACES FOR FACE RECOGNITION

Eigenfaces are one of the initial algorithms used for person identification. The underlying approach is taken from information theory and consists of calculating Principal Components Analysis (PCA). PCA can predict, remove redundancy, extract features and compress data. Using PCA for face recognition means expressing the large 1-D vector of pixels constructed from a 2-D facial image in the eigenspace projection. In [23] the author notes some limitations of the algorithm: (i) the face images should be normalized and frontal-view; (ii) the system is an auto-associative memory and is prone to overfitting; and (iii) it is hard to decide on suitable thresholds.

The idea of eigenfaces was introduced by Turk and Pentland in the early 1990s. The approach transforms face images into a small set of characteristic feature images, called eigenfaces, which are the principal components of the initial set of face images. Recognition is performed by projecting a new image into the subspace spanned by the eigenfaces (the face space) and then classifying the face by comparing its position in face space with the positions of known individuals. This is an information-theoretic approach to coding and decoding face images that may give insight into the information content of face images, emphasizing the significant local and global features. Such features may or may not be directly related to the intuitive notion of face features such as the eyes, nose, lips and hair [11,12].

In mathematical terms, we wish to find the principal components of the distribution of faces, i.e. the eigenvectors of the covariance matrix of the set of face images. These eigenvectors can be thought of as a set of features which together characterize the variation between the face images. Each image location contributes more or less to each eigenvector, so that these vectors can be displayed as a sort of ghostly face, called an eigenface. Figure 7 shows the eigenfaces in our scenario, whereas Figure 6 shows an example standard eigenface. The amount of texture is reduced to about 12% of the original image, along with the normalization in terms of shape and illumination.

Figure 6 A conventional eigenface for the Cohn-Kanade database

Figure 7 Eigenfaces for segmented face images

We use a nearest neighbour classifier. For a given example image I, we determine the k nearest neighbours and the class c with the largest number of representatives n. If n > l, then I belongs to class c, where l is a minimum threshold and the distance used is the Euclidean distance [24]:

d(x_i, x_j) = √( Σ_{k=1}^{N} (x_{i,k} − x_{j,k})² )

VI. EXPERIMENTS

Keeping in view the facial expression changes of daily-life human-human interaction, we try to recognize a person under varying facial expressions. For this purpose we chose the Cohn-Kanade Facial Expression database (CKFE-DB) [25]. The CKFE-DB contains 488 short image sequences of 97 different persons performing the six universal facial expressions. Each sequence shows a neutral face at the beginning and then develops into the peak expression. Furthermore, a set of action units (AUs) has been manually specified by licensed Facial Action Coding System (FACS) experts for each sequence. The subjects in this database were instructed to produce exaggerated expressions, so the expressions are not natural.

Experiments were conducted on ten subjects of the database with different facial expression variations. The training set consists of a mixture of expression images along with neutral images; some of the examples used in our experiments are shown in Figure 8. Eigenfaces are extracted for the face image patches present in the database after the AAM step. Each new face is projected onto this eigenspace and its weight in face space is calculated.

Face images of ten subjects were selected for training the dataset for calculating the eigenspace. Seven subjects were randomly selected from the dataset for testing. A successful recognition rate of 92.86% was obtained on the testing images.

Figure 8 Images in the database used for the experiments
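The nearest-neighbour rule applied in these experiments can be sketched as follows (an illustrative implementation of the n > l acceptance rule from Section V; function names and the toy data are our own, not the authors' code):

```python
import numpy as np
from collections import Counter

def knn_classify(weights, train_weights, train_labels, k=3, l=1):
    """k-nearest-neighbour rule in eigenface weight space: find the k
    training vectors closest (Euclidean distance) to the query, take
    the class c with the largest count n, and accept it only if n > l."""
    dists = np.linalg.norm(train_weights - weights, axis=1)
    nearest = np.argsort(dists)[:k]
    counts = Counter(train_labels[i] for i in nearest)
    label, n = counts.most_common(1)[0]
    return label if n > l else None  # None: below the minimum threshold l

train = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
labels = ["A", "A", "B"]
pred = knn_classify(np.array([0.05, 0.0]), train, labels, k=3, l=1)
# pred == "A": two of the three nearest neighbours belong to class A
```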
VII. CONCLUSIONS

This paper discusses a technique for segmenting face images and then using eigenfaces for recognition. In the image segmentation phase, each image is mapped to a pre-defined shape, which normalizes the shape information. Since faces contain facial expressions, the textural mapping moulds the expressions towards the expression of the target shape. This helps to transform all the images towards neutral expressions and to train the classifier for optimal results. The eigenface approach is well known and is computationally efficient enough to run in real time. We used a benchmark database to show our results; however, the system can be applied in real time in daily-life applications with slight modifications. The intention behind developing this system is not limited to human-robot joint activities; it is equally useful in traditional biometric applications.

In the future, this system could be coupled with other sources of variation such as lighting, poses and occlusions.

ACKNOWLEDGEMENT

This work is mainly supported by the cluster of excellence Cognition for Technical Systems (CoTeSys) at the Technical University of Munich, Germany, for human-robot joint activities. Further, we are thankful to the Higher Education Commission (HEC) of Pakistan and the German Academic Exchange Service (DAAD).

REFERENCES
[1] W. Zhao, R. Chellappa, A. Rosenfeld and P.J. Phillips, Face Recognition: A Literature Survey, UMD CFAR Technical Report CAR-TR-948, 2000.
[2] William A. Barrett, A Survey of Face Recognition Algorithms and Testing Results, Proceedings of IEEE, 1998.
[3] Tsuhan Chen, Yufeng Jessie Hsu, Xiaoming Liu and Wende Zhang, Principal Components Analysis and its Variants for Biometrics, International Conference on Image Processing, 2002.
[4] Gian Luca Marcialis and Fabio Roli, Fusion of LDA and PCA for Face Recognition, Department of Electrical and Electronic Engineering, University of Cagliari, Piazza d'Armi, Italy.
[5] Wenyi Zhao, Arvindh Krishnaswamy, Rama Chellappa, Daniel L. Swets and John Weng, Discriminant Analysis of Principal Components for Face Recognition, Proceedings of International Conference on Automatic Face and Gesture Recognition, 1998, pp. 336-341.
[6] S. Balakrishnama and A. Ganapathiraju, Linear Discriminant Analysis: A Brief Tutorial, Institute for Signal and Information Processing, Mississippi State University, USA.
[7] Z. Riaz, A. Gilgiti and S.M. Mirza, Face Recognition: A Review and Comparison of HMM, PCA, ICA and NN, International Multitopic Conference, Proceedings of IEEE, July 2004, ISBN 0-7803-8655-8, pp. 41-46.
[8] G.J. Edwards, C.J. Taylor and T.F. Cootes, Interpreting Face Images using Active Appearance Models, Proceedings of International Conference on Automatic Face and Gesture Recognition, 1998, pp. 300-305.
[9] G.J. Edwards, T.F. Cootes and C.J. Taylor, Face Recognition using Active Appearance Models, in Proc. European Conference on Computer Vision 1998, vol. 2, pp. 581-595, Springer, 1998.
[10] L. Sirovich and M. Kirby, Low-dimensional procedure for the characterization of human faces, J. Opt. Soc. Am. A, Vol. 4, No. 3, March 1987, pp. 519-524.
[11] M.A. Turk and A.P. Pentland, Face Recognition using Eigenfaces, IEEE Conference on Computer Vision and Pattern Recognition, 1991, pp. 586-591.
[12] M.A. Turk and A.P. Pentland, Eigenfaces for Recognition, Journal of Cognitive Neuroscience 3(1), 1991, pp. 71-86.
[13] Z. Riaz, S.M. Mirza and A. Gilgiti, Face Recognition using Principal Components and Independent Components, International Multitopic Conference, Proceedings of IEEE, Dec. 2004, ISBN 0-7803-8680-9, pp. 14-19.
[14] M. Wimmer, F. Stulp, S. Tschechne and B. Radig, Learning Robust Objective Functions for Model Fitting in Image Understanding Applications, in Proceedings of the 17th British Machine Vision Conference, pp. 1159-1168, BMVA, Edinburgh, UK, 2006.
[15] V. Blanz and T. Vetter, Face Recognition Based on Fitting a 3D Morphable Model, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 9, pp. 1063-1074, 2003.
[16] H. Gupta, A.K. Roy-Chowdhury and R. Chellappa, Contour-based 3D Face Modeling from a Monocular Video, British Machine Vision Conference, 2004.
[17] T.F. Cootes, G.J. Edwards and C.J. Taylor, Active Appearance Models, Proc. European Conference on Computer Vision 1998, H. Burkhardt and B. Neumann (Eds.), Vol. 2, pp. 484-498, Springer, 1998.
[18] M.B. Stegmann, Active Appearance Models: Theory, Extensions and Cases, Master Thesis, Technical University of Denmark, 2000.
[19] J. Ahlberg, An Experiment on 3D Face Model Adaptation using the Active Appearance Algorithm, Image Coding Group, Dept. of Electrical Engineering, Linköping University.
[20] Paul Ekman and Wallace Friesen, The Facial Action Coding System: A Technique for the Measurement of Facial Movement, Consulting Psychologists Press, San Francisco, 1978.
[21] Stan Z. Li and Anil K. Jain, Handbook of Face Recognition, Springer, 2005.
[22] A. Okabe, B. Boots and K. Sugihara, Spatial Tessellations: Concepts and Applications of Voronoi Diagrams, New York: Wiley, 1992.
[23] Kyungnam Kim, Face Recognition using Principal Component Analysis, Department of Computer Science, University of Maryland, College Park, MD 20742, USA.
[24] D.A. Forsyth and J. Ponce, Computer Vision: A Modern Approach, Pearson Education Inc., 2003.
[25] T. Kanade, J.F. Cohn and Y. Tian, Comprehensive database for facial expression analysis, Proceedings of Fourth IEEE International Conference on Automatic Face and Gesture Recognition (FG'00), Grenoble, France, 2000, pp. 46-53.
[26] J. Tucek and C. Constock, Active Appearance Modelling, Final Project, Washington University in St. Louis, 2004.