Shape Invariant Recognition of Segmented Human Faces Using Eigenfaces
{riaz,beetz,radig}@in.tum.de
I. INTRODUCTION
Over the last three decades of face recognition technology, many commercially available systems for identifying human faces have emerged; however, face recognition is still an outstanding challenge in the presence of different kinds of variations such as facial expressions, poses, non-uniform illumination and occlusion. Meanwhile, this technology has extended its role to Human-Computer Interaction (HCI) and Human-Robot Interaction (HRI). Person identity is one of the key tasks while interacting with robots, enabling unobtrusive system security and authentication of the human interacting with the system. This problem has been addressed in various scenarios by researchers, resulting in commercially available face recognition systems [1,2]. However, other higher-level applications like facial expression recognition and face tracking still remain outstanding along with person identity. This gives rise to the idea of a framework suitable for solving these issues together.
As cameras are widely used, mounted on computer screens, embedded into mobile devices and installed in everyday living and working environments, they become valuable tools for human-system interaction. A particularly important aspect of this interaction is the detection and recognition of faces and the interpretation of facial expressions. These capabilities are deeply rooted in the human visual system and are a crucial building block for social interaction. Consequently, they are an important step towards the acceptance of many technical systems. Although faces are the most important and natural means of human-human interaction, outstanding challenges such as uniqueness, performance and circumvention kept the market value of face recognition somewhat below that of other biometrics in 2003. By 2006, however, face recognition technology had risen again to 19% of the biometric market, as shown in Figure 1.

Figure 1. Face recognition technology compared to other biometrics for the years 2003 and 2006.
This publication focuses on one aspect of natural human-computer interfaces. Our goal is to build a real-time system for facial recognition that can run robustly in real-world environments. We develop it using model-based image interpretation techniques, which have proven their great potential to fulfil current and future demands on real-world image understanding. Our approach comprises methods that robustly localize facial features, seamlessly track them through image sequences, and finally infer the facial recognition.

The remainder of this paper is divided into five sections. Section II deals with the work related to the proposed scheme. In Section III the proposed scheme is described. Section IV describes face image segmentation using a model-based approach. Section V explains the feature extraction technique using the eigenface approach. Finally, experimentation results are shown in Section VI.
II. RELATED WORK
The problem of person identity lies in the area of pattern recognition, and various techniques have been applied to it over the last few decades. A formal method of classifying faces was first proposed by Francis Galton [2] in 1888. Galton proposed collecting facial profiles as curves, finding their norm, and then classifying other profiles by their deviations from the norm. The classification resulted in a vector of independent measures that could be compared with other vectors in the database. Traditional recognition systems are able to recognise humans using various techniques such as feature-based recognition, face-geometry-based recognition, classifier design and model-based methods. A term defined in [3] is Distinguishing Component Analysis (DCA), which is compared to Principal Component Analysis (PCA) and attempts to capture what distinguishes one subject from the others. In [4] Luca and Roli used a similar kind of technique, a fusion of Linear Discriminant Analysis (LDA) and PCA: PCA is used to reduce the data size while LDA is used to classify the data. A similar approach is taken in [5], which applies PCA first and LDA in a second step. The idea behind combining PCA and LDA is to improve the generalization capability when only few samples per class are given; PCA and LDA are two commonly used techniques for data classification and dimensionality reduction, belonging to the same class of methods [6]. In [7] the authors present a review of Hidden Markov Models (HMM), Independent Component Analysis (ICA), neural networks and PCA.
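To make the PCA-followed-by-LDA combination of [4,5] concrete, the sketch below chains the two steps with scikit-learn. It is a generic illustration rather than the exact method of the cited works; the Olivetti faces dataset, the component count and the train/test split are placeholder assumptions.

```python
# Generic PCA -> LDA face identification sketch (illustrative settings only).
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

faces = fetch_olivetti_faces()                       # 400 images of 40 subjects
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.25, stratify=faces.target, random_state=0)

# PCA reduces the raw pixel dimensionality; LDA then separates the identities.
model = make_pipeline(PCA(n_components=60, whiten=True),
                      LinearDiscriminantAnalysis())
model.fit(X_train, y_train)
print("recognition accuracy:", model.score(X_test, y_test))
```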
This paper focuses on modelling the human face using a two-dimensional approach of shape and texture information and then utilising this model for recognition purposes. Such models are known in the literature as Active Appearance Models (AAMs), introduced by Edwards et al. [8]. These models utilize shape information based on a point distribution of various landmark points marked on the face image. A parameterized recognition of faces is performed by Edwards et al. [9], who used the model parameters for recognition with LDA. However, PCA was first used by Sirovich and Kirby [10] and later adopted by M. Turk and A. Pentland, who introduced the famous idea of eigenfaces [11,12], subsequently adopted by many researchers [13].
III. OUR APPROACH
The goal of this paper is to develop a system for face recognition that is able to recognize persons in real time for HCI applications, in particular the recognition of persons in the presence of facial expressions in daily-life environments. A model-based technique followed by eigenfaces is utilized in this regard. Active Appearance Models (AAMs) are strongly applicable at this stage; however, we tried to achieve the accuracy using the textural approach only.

Figure 2. Our approach.
All the subjects in the database are labelled for identification. An active shape model (ASM) is fitted to all the face images in our database; the shape model can be fitted using any fitting algorithm [14,15]. The shape models fitted to the training face images are then used to define the reference shape, which is the mean shape of all the shapes in our database. Given a reference shape S_ref and an example image I fitted with shape S, texture mapping from the example shape to the reference shape is applied using affine transformations. Prior to the mapping we use a planar subdivision, i.e. Delaunay triangulation, of the reference and example shapes. The texture vector for the corresponding image is stored as T. Instead of parametrizing our model, we use this representative texture for the eigenface classifier. The use of the eigenspace at this stage is suitable in two major respects: 1) it reduces the amount of texture used for classification, segmenting only about 12% of the grey-value information of the original image, compared to the conventional eigenface approach where the full image texture is used; 2) texture mapping onto the reference shape provides shape normalization for all the available images, moulding facial expressions to the reference shape, so that facial expressions can be controlled to a small extent. Furthermore, light variations are normalized in the extracted texture to improve the segmented face for classification. The approach is shown in Figure 2.
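A minimal sketch of this shape-normalisation step is given below using NumPy and scikit-image, whose PiecewiseAffineTransform triangulates the landmark points (Delaunay) and estimates one affine mapping per triangle, as described above. The landmark format, the output resolution and the closing zero-mean/unit-variance light normalisation are assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def reference_shape(shapes):
    """Mean (reference) shape; `shapes` is a list of (n_landmarks, 2) arrays in (x, y)."""
    return np.mean(np.stack(shapes), axis=0)

def shape_normalised_texture(image, shape, ref_shape, out_size=(128, 128)):
    """Warp the face texture of `image`, fitted with landmarks `shape`, onto `ref_shape`.

    The transform triangulates the points and applies one affine mapping per
    triangle. `warp` uses the transform as an inverse map, so the source points
    are the reference landmarks and the destination points are the landmarks
    found in the example image. `ref_shape` is assumed to be scaled to fit
    inside `out_size`.
    """
    tform = PiecewiseAffineTransform()
    tform.estimate(ref_shape, shape)
    normalised = warp(image, tform, output_shape=out_size)

    texture = normalised.ravel()                     # texture vector T
    # Simple light normalisation of the extracted texture (assumed form).
    return (texture - texture.mean()) / (texture.std() + 1e-8)
```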
This established framework can be used either to parameterize the appearance model or to use the texture information for the eigenface classifier. Given a texture vector T, it can be projected into the eigenspace E as a linear combination:

    T_{im} = \sum_{i=1}^{n} w_i \phi_i

where w_i are the weights for the example image and \phi_i are the corresponding eigenvectors over the whole n-dimensional space.
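A minimal sketch of this projection, assuming the shape-normalised texture vectors are stacked row-wise in a matrix: the eigenvectors phi_i are obtained from an SVD of the centred data, and a nearest-neighbour match in the weight space serves as one common form of eigenface classifier. The component count and the matching rule are illustrative assumptions.

```python
import numpy as np

def train_eigenfaces(textures, n_components=50):
    """`textures`: (n_images, n_pixels) matrix of shape-normalised texture vectors."""
    mean_t = textures.mean(axis=0)
    _, _, vt = np.linalg.svd(textures - mean_t, full_matrices=False)
    return mean_t, vt[:n_components]                 # rows of vt are the eigenvectors phi_i

def project(texture, mean_t, eigenfaces):
    """Weights w_i of a texture vector in the eigenspace (T_im = sum_i w_i phi_i)."""
    return eigenfaces @ (texture - mean_t)

def classify(texture, mean_t, eigenfaces, train_weights, train_labels):
    """Nearest-neighbour identity match in the eigenspace."""
    w = project(texture, mean_t, eigenfaces)
    d = np.linalg.norm(train_weights - w, axis=1)
    return train_labels[int(np.argmin(d))]
```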
IV. APPEARANCE MODEL FOR FACES

B. Active Appearance Models (AAM)
For the various instances of the same person, different types of variations need to be modelled, for example shape deformations covering both expression changes and pose variations, along with texture variations caused by illumination. Once we have the model information for an example image, we can extract the texture information easily.
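For background, the sketch below shows the standard way such combined shape and texture variations are modelled in the AAM literature: separate PCA models for shape and texture whose parameters are concatenated, with a relative weighting, and reduced once more. This is a generic construction under assumed data layouts, not the classifier used in this paper, which feeds the shape-normalised texture directly to the eigenface stage.

```python
import numpy as np

def pca(data, k):
    """Mean and first k principal directions of row-wise samples."""
    mean = data.mean(axis=0)
    _, _, vt = np.linalg.svd(data - mean, full_matrices=False)
    return mean, vt[:k]

def build_appearance_model(shapes, textures, n_shape=10, n_tex=30):
    """`shapes`: (N, 2*n_landmarks) flattened landmarks; `textures`: (N, n_pixels)."""
    s_mean, s_basis = pca(shapes, n_shape)
    t_mean, t_basis = pca(textures, n_tex)

    b_s = (shapes - s_mean) @ s_basis.T              # shape parameters
    b_t = (textures - t_mean) @ t_basis.T            # texture parameters

    # Weight the shape block so both parameter sets have comparable variance,
    # then concatenate and apply a final PCA for the combined appearance model.
    w = np.sqrt(b_t.var() / max(b_s.var(), 1e-12))
    c_mean, c_basis = pca(np.hstack([w * b_s, b_t]), n_shape + n_tex)
    return c_mean, c_basis, (s_mean, s_basis), (t_mean, t_basis), w
```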