
An Improved Algorithm for Face Recognition using Wavelet and Facial Parameters

Kanchan Singh
Department of Information & Technology,
ABES Engineering College,
19th Km. Stone, NH-24, Ghaziabad
919717459911
[email protected]

Ashok K Sinha
Department of Information & Technology,
ABES Engineering College,
19th Km. Stone, NH-24, Ghaziabad
919811529147
[email protected]

ABSTRACT
In this paper, the problem of face recognition in still color images is addressed. An improved algorithm for face recognition is proposed here. The algorithm comprises designing a feature vector which has discrete wavelet coefficients of the face and a coefficient representing parameters of the face. Global features of the face are captured by the wavelet coefficients and the local feature of the face is captured by the facial parameter. The coefficients of the feature vector are used as inputs to a back-propagation neural network. The network is trained for different images in the database. The proposed algorithm has been tested on various real images and its performance is found to be quite satisfactory when compared with the performance of conventional methods of face recognition such as the Eigen-face method.

Categories and Subject Descriptors
I.5.4 [Pattern Recognition]: Applications – Computer Vision.

General Terms
Algorithms, Design, Experimentation.

Keywords
Principal Component Analysis (PCA), Eigen-Face Method, Haar Wavelet, Discrete Cosine Transform (DCT), Discrete Wavelet Transform (DWT), Wavelet, Facial Parameter, Artificial Neural Network (ANN).

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
IITM'10, December 28–30, 2010, Allahabad, Uttar Pradesh, India.
Copyright 2010 ACM 978-1-4503-0408-5/10/12…$10.00.

1. INTRODUCTION
Face recognition is emerging as an active research area with numerous commercial and law enforcement applications being developed. This technique is an effective means of authenticating a person. Various approaches and methods for face recognition have been proposed by researchers to date. One of the most popular methods that yield promising results on frontal face recognition is principal component analysis (PCA), a statistical approach in which face images are expressed as a subset of the eigenvectors, hence called eigenfaces. Matthew Turk and Alex Pentland [19] used this PCA method for face recognition. Their work projected face images onto a feature space that spans the significant variations among known face images. The significant features were known as "eigenfaces" because they were the eigenvectors (principal components) of the set of faces. But the results show that the false reject rate is quite high if the face image is even a little different from the one in the database: the Eigen-face method is not able to differentiate between an unknown face image and an image of the same person as in the database with a little variation in gesture. An improved face recognition technique was given by Rajkiran Gottumukkal and Vijayan K. Asari [23]. They proposed a modular PCA method, an extension of the PCA method for face recognition, and claimed that the modular PCA method performed better than the PCA method. But again, the deficiency of PCA for frontal face recognition cannot be overcome using this modular PCA approach. Guan-Chun Luh and Ching-Chou Hsieh [11] proposed a method using PCA and an immune network for machine learning. Within the last few decades, numerous novel face recognition algorithms have been proposed [29]. A central issue in these approaches is feature extraction.

The most well-known technique for linear feature extraction is linear discriminant analysis (LDA). Its basic idea is to seek an optimal set of discriminant vectors by maximizing the Fisher criterion. Jin et al. [13] proposed the uncorrelated LDA technique (ULDA), which tries to find the optimal discriminant vectors by maximizing the Fisher criterion under conjugate orthogonality constraints. Zhongkai Han et al. [30] proposed a method which used a discriminated correlation classifier and addressed mainly the classification problem in face recognition. However, the ULDA and discrimination techniques suffer from the so-called small sample size problem (SSSP), which is often encountered in face recognition. The face detection technique proposed by Lamiaa Mostafa and Sherif Abdelazeem [15] was based on skin color. Their method used the normalized RGB color space. To build the model, samples of human skin were collected from different races such as Africans, Americans and Arabs. They proposed that the first step in skin detection is pixel-based skin detection, where the skin detector tested every pixel of the input image and computed its normalized red value r and normalized green value g. If the r and g values of a pixel satisfied


some predefined inequalities, the pixel was considered skin. The drawback of this approach is that it produces good results for people of different races, but for the same race it does not produce satisfactory results. Various other methods based on finding skin and skin color [6, 7, 14] were also proposed for face recognition and face detection. Further, Sanjay Kr. Singh et al. [25] proposed a robust skin-color-based face detection algorithm in which they used the three popular color models RGB, YCbCr and HSI to detect skin color features. But again this method suffers from a lack of robustness, as no parameter other than skin color was taken into account for the faces of persons belonging to the same country.

Further, Yuehui Chen and Yaou Zhao [28] proposed a face recognition method which used the Discrete Cosine Transform (DCT) and a Hierarchical Radial Basis Function (HRBF) model. The DCT was employed to extract the input features to build a face recognition system, and the HRBF was used to identify the faces. Fabrizia M. de S. Matos et al. [10] proposed a method which used three coefficient selection criteria. The drawback of this approach is that the DCT is not a very efficient transform for extracting features of an image. The wavelet transform is considered better than the DCT, as wavelets can be used for Multi-Resolution Analysis (MRA) while the DCT cannot; further, there is no way to use the DCT for lossless compression, since the outputs of the transformation are not integers, while the wavelet transform has efficient implementations in both the lossy and lossless case, as stated by Øyvind Ryan [20]. Further, Cunjian Chen and Jiashu Zhang [8] proposed a new feature extractor for face recognition which used wavelet energy entropy. They stated that wavelet energy entropy can be used as a new facial feature to recognize faces.

Mrinal Kanti Bhowmik et al. [17] performed a comparative study on the fusion of visual and thermal images using different wavelet transformations such as Haar and Daubechies (db2). They found the wavelet a useful transformation for face recognition. Boqing Gong et al. [5] proposed a method based on extracting facial features for 3-D faces. They emphasized two components, one corresponding to the basic face structure and another corresponding to expression change in the face, based on the change in shape of the 3-D face. Further, Arindam Biswas et al. [3] proposed a novel algorithm for extracting the regions of interest of a face, such as the eye pair, nostrils and mouth area. Bart Kroon et al. [4] addressed the influence of eye localization accuracy on face matching performance in the case of low-resolution image and video content. It can be inferred that the regions of interest in a face can be located correctly even if the face image is of low quality.

In an attempt to further improve the performance of face recognition methods, we have presented an improved algorithm by designing a feature vector comprising the wavelet horizontal, vertical and diagonal coefficients, determined from the wavelet energy function and capturing global features of the face, and the facial parameter, capturing local features of the face. The result of this algorithm, when compared with that of the PCA method, is found more satisfactory.

2. METHODOLOGY OF WORK
The block diagram of the proposed work is given in Figure 1, which depicts all the processing steps used in this work.

Figure 1. Block Diagram of the Methodology of the Proposed Work


2.1 Database of Images
The database will consist of a number of images, depending on the number of persons and the number of images captured per person with different gestures. For each image, calculate the wavelet's horizontal, vertical and diagonal coefficients and the facial parameter as explained in subsections 2.2 and 2.3.

2.2 Estimating Wavelet's Coefficients of the Face
(i) For each image, crop the face.
(ii) Decompose the image using the 'Haar' wavelet.
(iii) Extract the detail horizontal, vertical and diagonal coefficients, named H, V and D.
(iv) H, V and D are two-dimensional matrices. In order to generate single-valued horizontal, vertical and diagonal coefficients, use the wavelet energy function for each of the horizontal, vertical and diagonal detail coefficients, named WHE, WVE and WDE as in equations (1), (2) and (3) (see the sketch following this list). In these equations, m is the number of rows and n is the number of columns of the two-dimensional matrices H, V and D.
(v) Take the square root of equations (1), (2) and (3), which finally gives the horizontal, vertical and diagonal coefficients named h, v and d respectively:

h = √WHE   (4)
v = √WVE   (5)
d = √WDE   (6)
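The energy expressions referenced as equations (1)-(3) presumably take the standard sum-of-squares form over the m×n detail matrices, consistent with h = √WHE above (a reconstruction; the original expressions may include a normalisation factor):

\[ \mathrm{WHE} = \sum_{i=1}^{m}\sum_{j=1}^{n} H(i,j)^{2}, \qquad \mathrm{WVE} = \sum_{i=1}^{m}\sum_{j=1}^{n} V(i,j)^{2}, \qquad \mathrm{WDE} = \sum_{i=1}^{m}\sum_{j=1}^{n} D(i,j)^{2} \]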
2.3 Estimating Facial Parameters of the Face
(i) Calculate the Euclidean distance d1 between the centers of both pupils (the center of the iris of each eye) of the face.
(ii) Calculate the length of the nose, d2. Together, d1 and d2 form a 'T' shape.
(iii) Calculate the d2/d1 ratio, named fp:

fp = d2/d1   (7)

2.4 Forming a Composite Feature Vector
From equations (4), (5) and (6), the wavelet's horizontal, vertical and diagonal coefficients named h, v and d are used as three coefficients of the feature vector. The fourth coefficient of the feature vector is estimated from equation (7), which is a ratio corresponding to the facial parameters. Thus the composite feature vector comprises these four coefficients.

2.5 Machine Learning
This consists of training the neural network with the coefficients of the feature vector and then validating it with the test cases. The three test cases for the proposed work have been designed as follows:

Case A: Testing with the same image as stored in the database.
Case B: Testing with a different image of the same person (whose images are stored in the database), with variation in gesture.
Case C: Testing with an unknown image (an image of a person whose image is not present in the database at all).

3. INTRODUCTION TO WAVELET
Wavelet transformations are a method of representing signals across space and frequency [11]. The signal is divided across several layers of division in space and frequency and then analyzed. The goal is to determine which space/frequency bands contain the most information about an image's unique features, both the parts that define an image as a particular type (fingerprint, face, etc.) and those parts which aid in classification between different images of the same type. One type of discrete wavelet transform (DWT) is the orthogonal DWT. The orthogonal DWT projects an image onto a set of orthogonal column vectors to break the image down into coarse and fine features. In MATLAB, wavedec2 is a two-dimensional wavelet analysis function:

[C, S] = wavedec2(X, N, 'wname')
[C, S] = wavedec2(X, N, Lo_D, Hi_D)

This returns the wavelet decomposition of the matrix X (an image) at level N, using the wavelet named in the string 'wname'. The outputs are the decomposition vector C and the corresponding bookkeeping matrix S. N must be a strictly positive integer. Instead of giving the wavelet name, we can give the filters: Lo_D is the decomposition low-pass filter and Hi_D is the decomposition high-pass filter. In MATLAB, detcoef2 is also a two-dimensional wavelet analysis function [27, 9, 18]:

D = detcoef2(O, C, S, N)

This function extracts from the wavelet decomposition structure [C, S] (wavedec2 as described above) the horizontal, vertical or diagonal detail coefficients for O = 'h' (or 'v' or 'd', respectively) at level N. [H, V, D] = detcoef2('all', C, S, N) returns the horizontal H, vertical V and diagonal D detail coefficients at level N. For level 2, the two-dimensional wavelet tree has the form shown in Figure 2. In Figure 3 we see the order in which filters are applied to achieve a simple one-level wavelet decomposition. The filter Lo_D is a low-pass filter and the filter Hi_D is a high-pass filter. cAj and cDj are the approximation decomposition vector and the detail decomposition vector at level j, respectively.
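As an illustration of how these functions combine with the energy step of subsection 2.2, a minimal MATLAB sketch is given below. The file name is hypothetical, the grey-level conversion is an assumption, the sum-of-squares energy is the assumed form of equations (1)-(3), and the levels (3 for wavedec2, 2 for detcoef2) follow the settings reported in Section 5.

X = double(rgb2gray(imread('face_crop.jpg')));  % hypothetical cropped face image
[C, S] = wavedec2(X, 3, 'haar');                % 3-level Haar decomposition
[H, V, D] = detcoef2('all', C, S, 2);           % detail coefficients at level 2
WHE = sum(H(:).^2);                             % wavelet energies (assumed form of eqs (1)-(3))
WVE = sum(V(:).^2);
WDE = sum(D(:).^2);
h = sqrt(WHE);  v = sqrt(WVE);  d = sqrt(WDE);  % equations (4)-(6)

Together with the facial parameter fp of subsection 2.3, these three values form the composite feature vector [h v d fp].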


Figure 2. Two Dimensional Wavelet Tree

The more the decomposition scheme is repeated, the more the approximation of the image concentrates in the low-frequency energy. During the decomposition process the rows and columns of the image are down-sampled: first the rows are down-sampled (keeping one out of two) and then the columns (keeping one out of two). The Haar wavelet is recognized as the first known wavelet and is the same as the Daubechies wavelet (db1). It was proposed by Alfred Haar [1] and is a certain sequence of functions. The Haar mother wavelet function can be described as in equation (9), and its scaling function Φ(t) can be described as in equation (10).
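Equations (9) and (10) presumably state the standard Haar definitions:

\[ \psi(t) = \begin{cases} 1, & 0 \le t < \tfrac{1}{2} \\ -1, & \tfrac{1}{2} \le t < 1 \\ 0, & \text{otherwise} \end{cases} \qquad (9) \]

\[ \Phi(t) = \begin{cases} 1, & 0 \le t < 1 \\ 0, & \text{otherwise} \end{cases} \qquad (10) \]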

It is an orthogonal and bi-orthogonal wavelet. The Haar transform is the simplest of the wavelet transforms. This transform cross-multiplies a function against the Haar wavelet with various shifts and stretches. Figure 4 shows an input image for the 2-D wavelet transform and its detail horizontal, vertical and diagonal versions after decomposing it using the Haar wavelet at level 3.

Figure 3. Two Dimensional Discrete Wavelet Transform

Figure 4. Wavelet Coefficients

4. FACIAL PARAMETERS
Detection of feature points in a still image is very important in facial expression analysis, because it indicates which expression the current image shows and which facial muscle actions are involved. Facial parameters are independent of the illumination problems caused by various lighting conditions. Keeping these facts in view, we have taken the ratio of the length of the nose to the distance between the centers of the pupils of both eyes. This ratio remains unchanged whether or not a person has a beard or a mustache. Further, this ratio is unique for every person and remains almost the same for a person's face image with a little variation in gesture. Figure 5 depicts the extraction of the facial parameters of a frontal face. First, the center of the pupil is located for both eyes. When the located points are joined, a horizontal line is formed, as shown in Figure 5. Then the tip of the nose is located and the length of the nose is represented by a vertical line. This forms a 'T'-shaped structure; we calculate the Euclidean distances d1 and d2 of this 'T' shape and then calculate the ratio d2/d1.

Even with a different orientation of the face, we can still locate the coordinates of the centers of the pupils of both eyes and the coordinate of the tip of the nose, as shown in Figure 6. This forms a rotated 'T' structure. Once we know the coordinates of the located points, we can calculate the distances as per equations (11) and (12), as shown in Figure 6.
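A minimal MATLAB sketch of the facial-parameter computation is given below. The coordinates are hypothetical placeholders, pupil and nose-tip localisation itself is not shown, and the nose length is taken from the midpoint of the inter-pupil line to the nose tip, which is one plausible reading of the 'T' construction; equations (11) and (12) are assumed to be the usual Euclidean distances between the located points.

% Hypothetical located points (x, y); in practice these come from the
% pupil and nose-tip localisation step.
pupilL  = [42 55];
pupilR  = [78 55];
noseTip = [61 89];

d1 = norm(pupilR - pupilL);        % distance between the pupil centres
mid = (pupilL + pupilR) / 2;       % midpoint of the inter-pupil line
d2 = norm(noseTip - mid);          % nose length (assumed endpoints)
fp = d2 / d1;                      % facial parameter, equation (7)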


Figure 5. Extraction of Facial Parameters

Figure 6. Extraction of Facial Parameters of Oriented Face

5. RESULTS AND DISCUSSIONS
First, the Eigen-face method is implemented in MATLAB 7.0.1 during this work. Then the proposed work is implemented in MATLAB 7.0.1 with a total of 400 images representing 80 different persons. Five images of each person are captured, with variation in gesture and variation in orientation of the face. A sample from the database, showing five images for each of 3 different persons, is given in Figure 7. For each image the face is cropped and resampled to a size of 100×120 pixels. In order to calculate the wavelet's coefficients, the 'wavedec2' and 'detcoef2' functions [12] of the MATLAB Wavelet Toolbox are used. In this work, for decomposing the image, the "Haar" wavelet is used at level 3 in the 'wavedec2' function, and the detail horizontal, vertical and diagonal coefficients are extracted at level 2 with the 'detcoef2' function. After applying these functions the output is the detail horizontal, vertical and diagonal coefficients. In order to have single-valued coefficients, the wavelet energy functions described in equations (1), (2) and (3) are applied. Finally, after taking the square root as per equations (4), (5) and (6), the wavelet's horizontal, vertical and diagonal coefficients, named h, v and d, are estimated.

For estimating the facial parameter, we first locate the center of the pupil of both eyes of a face and calculate the distance d1 between the two. After this, the tip of the nose is located in the face image and the length of the nose d2 is computed, as shown in Figure 5 and Figure 6. Then the d2/d1 ratio is calculated.

Figure 7. Sample Images of the Database with Size 100×120
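For completeness, a short MATLAB sketch of the cropping and resampling step described above is given here; the file name and crop box are assumed, and 100×120 is interpreted as width × height.

I = imread('person01_img1.jpg');     % hypothetical database image
faceBox = [30 20 160 200];           % [xmin ymin width height], assumed crop region
face = imcrop(I, faceBox);           % crop the face
face = imresize(face, [120 100]);    % resample (rows x cols), i.e. 100 wide by 120 high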

5.1 Architecture of the Neural Network
There are three layers in the neural network used in this work: one input layer, one hidden layer and one output layer.

Number of nodes in the input layer = 4 (corresponding to the four coefficients of the feature vector)
Number of nodes in the hidden layer = 15
Number of nodes in the output layer = 5


The algorithm starts by taking an appropriate initial guess for the weight matrix V (with dimensions 4×15) from the input layer to the hidden layer and the weight matrix W (with dimensions 15×5) from the hidden layer to the output layer. The back-propagation algorithm is implemented for training the neural network in MATLAB [16, 24]. The parameters used for the back-propagation neural network are:

Learning Parameter = 0.09
Momentum Coefficient = 0.5

The output Y is a vector with five elements, Y = [y1 y2 y3 y4 y5]^T. The activation function at the output node is a sigmoid function with an appropriate threshold. The architecture of the ANN considered for the implementation of the proposed work is shown in Figure 8.

Figure 8. The Architecture of the Neural Network in the Proposed Work
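A minimal MATLAB sketch of the training loop is given below. It assumes logistic (sigmoid) activations in both layers and per-pattern updates with momentum, which is one plausible reading of the description above; the authors' exact update scheme, target coding and data are not spelled out, so P and T are hypothetical placeholders.

% P: 4 x K matrix of feature vectors, T: 5 x K matrix of target outputs
% (hypothetical data; K = 5 images of one person in the paper's setup).
P = rand(4, 5);  T = repmat([1 0 0 0 0]', 1, 5);   % placeholder values
V = rand(4, 15) - 0.5;                 % input-to-hidden weights (4 x 15)
W = rand(15, 5) - 0.5;                 % hidden-to-output weights (15 x 5)
lr = 0.09;  mc = 0.5;                  % learning parameter and momentum coefficient
dV = zeros(size(V));  dW = zeros(size(W));
sig = @(z) 1 ./ (1 + exp(-z));         % sigmoid activation
for epoch = 1:200
    for k = 1:size(P, 2)
        x = P(:, k);  t = T(:, k);
        h = sig(V' * x);                           % hidden layer output (15 x 1)
        y = sig(W' * h);                           % network output (5 x 1)
        deltaO = (t - y) .* y .* (1 - y);          % output layer delta
        deltaH = (W * deltaO) .* h .* (1 - h);     % hidden layer delta
        dW = lr * (h * deltaO') + mc * dW;         % weight updates with momentum
        dV = lr * (x * deltaH') + mc * dV;
        W = W + dW;  V = V + dV;
    end
end
% The converged V and W can then be written to a per-person file,
% e.g. dlmwrite('person01_V.dat', V); dlmwrite('person01_W.dat', W);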

5.2 Training
Training is performed with all the images in the database. The neural network is trained with each person's five images at a time, i.e. the four coefficients corresponding to each image are presented as the four inputs to the neural network. So, at a time, the network is trained on five observations corresponding to the five images of the same person, each observation consisting of four coefficients. The converged weights after training are stored in a '.dat' file, i.e. for each person a '.dat' file is created consisting of the converged weights obtained after training. The network weights converge within 200 epochs.

5.3 Testing
After training the network, testing of the faces is performed for the three test cases discussed earlier: case A, case B and case C. For a given test image, all four coefficients of the feature vector are calculated and presented against all the '.dat' files, which hold the converged weights obtained from training on each person's images.

Figures 9 to 11 are plots of the test cases versus the error index. The error index is defined as the square root of the sum of squares of the deviations of the elements of the output vector.
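A sketch of the testing step for one stored person might look as follows; the file names, the feature values and the target vector are assumptions, and the error index is computed here as the deviation of the output from that target:

x = [0.91; 1.24; 0.73; 0.55];        % feature vector (h, v, d, fp) of the test image, placeholder values
V = dlmread('person01_V.dat');       % stored converged weights (4 x 15)
W = dlmread('person01_W.dat');       % stored converged weights (15 x 5)
sig = @(z) 1 ./ (1 + exp(-z));       % sigmoid activation
y = sig(W' * sig(V' * x));           % network output for the test image
t = [1; 0; 0; 0; 0];                 % assumed target vector for this person
errorIndex = sqrt(sum((y - t).^2));  % square root of the sum of squared deviations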
Figure 9 and Figure 10 are a comparison between the Eigen-method and the proposed method in terms of the error index for all three test cases. As can be observed from both figures, for case A the error index is zero for both methods, but for case B the error index is not zero in both; rather, it is high for the Eigen-method. In the proposed method the error index is zero for case B, which is evidence that the proposed method is better than the Eigen-method. Further, from Figure 9 it can be concluded that the Eigen-method treats case B and case C at almost the same level, while it is very clear from Figure 10 that with the proposed method there is a slope from case B to case C, i.e. it is able to differentiate between test cases B and C, while the Eigen-method is not able to differentiate between them.

Figure 9. Error Index for All the Three Test Cases Using Eigen Method

Figure 10. Error Index for All the Three Test Cases Using Proposed Method

Figure 11 is a comparison between the Eigen-method and the proposed method. As we can infer from the figure, for case B the error index is zero (shown by the red cylinder) using the proposed method, while using the Eigen-method the error index is quite high (shown by the blue cylinder). Thus all these figures are evidence that the proposed method performs better than the Eigen-method.


Figure 11. Comparison Between Eigen and the Proposed Method for All the Three Test Cases

6. CONCLUSION
During this work we first implemented the well-known Eigen-face method, and it was found that its performance is not satisfactory for a different image, varying in gesture, of a person whose image is already stored in the database. Further, in the proposed work a wavelet analysis is used to decompose the original image. We propose a feature extraction approach, called the "wavelet energy function", to extract the wavelet horizontal, vertical and diagonal coefficients of the face image. The derived coefficients are then fused with the facial parameter to form a new feature vector, which is used as the input to the back-propagation neural network. The results of the experiments on the face database, as shown, are evidence that the new algorithm is able to perform much better than traditional methods like the Eigen-face method and its variants.

7. REFERENCES
[1] A. Haar. Zur Theorie der orthogonalen Funktionensysteme (German). Mathematische Annalen 69 (1910), no. 3, 331–371.
[2] Amara Graps. An Introduction to Wavelets. IEEE Computational Science and Engineering, Summer 1995, vol. 2, num. 2, IEEE Computer Society, 10662 Los Vaqueros Circle, Los Alamitos, CA 90720, USA.
[3] Arindam Biswas, Suman Khara and Partha Bhowmick. Extraction of Regions of Interest from Face Images Using Cellular Analysis. In Proceedings of the 1st Bangalore Annual Compute Conference (Bangalore, Karnataka, India, Jan 18-20), Compute 2008. DOI=http://doi.acm.org/10.1145/1341771.1341787
[4] Bart Kroon, Alan Hanjalic and Sander M. P. Maas, Eye Localization for Face Matching: Is It Always Useful and Under What Conditions?, in Proceedings of the 2008 International Conference on Content-Based Image and Video Retrieval, CIVR'08, July 7–9, 2008, Niagara Falls, Ontario, Canada. DOI=http://doi.acm.org/10.1145/1386352.1386401
[5] Boqing Gong, Yueming Wang, Jianzhuang Liu and Xiaoou Tang, Automatic Facial Expression Recognition on a Single 3D Face by Exploring Shape Deformation, in Proceedings of the Seventeenth ACM International Conference on Multimedia, MM'09, October 19–24, 2009, Beijing, China. DOI=http://doi.acm.org/10.1145/1631272.1631358
[6] Chai, D. and Ngan, K. N., "Face Segmentation Using Skin-Color Map in Videophone Applications," IEEE Transactions on Circuits and Systems for Video Technology, Vol. 9, pp. 551-564 (1999).
[7] Crowley, J. L. and Coutaz, J., "Vision for Man Machine Interaction," Robotics and Autonomous Systems, Vol. 19, pp. 347-358 (1997).
[8] Cunjian Chen and Jiashu Zhang, "Wavelet Energy Entropy as a New Feature Extractor for Face Recognition," in Fourth International Conference on Image and Graphics, 2007, IEEE. DOI 10.1109/ICIG.2007.60
[9] Daubechies, I. (1992), CBMS-NSF Conference Series: Ten Lectures on Wavelets, Applied Mathematics, SIAM Ed.
[10] Fabrizia M. de S. Matos, Leonardo V. Batista and JanKees v. d. Poel, Face Recognition Using DCT Coefficient Selection, in Proceedings of the 2008 ACM Symposium on Applied Computing, SAC'08, March 16-20, 2008, Fortaleza, Ceará, Brazil. DOI=http://doi.acm.org/10.1145/1363686.1364104
[11] Guan-Chun Luh and Ching-Chou Hsieh, Face Recognition Using Immune Network Based on Principal Component Analysis, in Proceedings of the First ACM/SIGEVO Summit on Genetic and Evolutionary Computation, GEC'09, June 12-14, 2009, Shanghai, China. DOI=http://doi.acm.org/10.1145/1543834.1543888
[12] http://www.mathworks.in/help/toolbox/wavelet/ref/detcoef2.html
[13] Jin, Z., Yang, J. Y., Hu, Z. S., Lou, Z., "Face Recognition Based on the Uncorrelated Discriminant Transformation," Pattern Recognition 34 (2001) 1405–1416.
[14] Kjeldsen, R. and Kender, J., "Finding Skin in Color Images," in Proceedings of the Second International Conference on Automatic Face and Gesture Recognition, pp. 312-317 (1996).
[15] Lamiaa Mostafa and Sherif Abdelazeem, "Face Detection Based on Skin Color Using Neural Networks," in GVIP 05 Conference, 19-21 December 2005, CICC, Cairo, Egypt.
[16] Laurene Fausett, Fundamentals of Neural Networks, Pearson Education, pp. 289-320.
[17] M. K. Bhowmik, D. Bhattacharjee, M. Nasipuri, D. K. Basu and M. Kundu, "Fusion of Wavelet Coefficients from Visual and Thermal Face Images for Human Face Recognition – A Comparative Study," International Journal of Image Processing (IJIP), Volume (4), Issue (1), pp. 12-23.


[18] Mallat, S. (1989), "A Theory for Multiresolution Signal Decomposition: The Wavelet Representation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, no. 7, pp. 674-693.
[19] Matthew Turk and Alex Pentland, "Eigenfaces for Recognition," Journal of Cognitive Neuroscience, vol. 3, number 1 (1991), pp. 71–86.
[20] Øyvind Ryan, "Applications of the Wavelet Transform in Image Processing," sponsored by the Norwegian Research Council, project nr. 160130/V30.
[21] Rafael C. Gonzalez and Richard E. Woods, Digital Image Processing, Pearson Education.
[22] Rafael C. Gonzalez, Richard E. Woods and Steven L. Eddins, Digital Image Processing Using MATLAB, Pearson Education.
[23] Rajkiran Gottumukkal and Vijayan K. Asari, "An Improved Face Recognition Technique Based on Modular PCA Approach," Pattern Recognition Letters 25 (2004) 429–436.
[24] S. Rajasekaran and G. A. Vijayalakshmi Pai, Neural Networks, Fuzzy Logic and Genetic Algorithms, PHI Publication.
[25] Sanjay Kr. Singh, D. S. Chauhan et al., "A Robust Skin Color Based Face Detection Algorithm," Tamkang Journal of Science and Engineering, Vol. 6, No. 4, pp. 227-234 (2003).
[26] Stan Z. Li and Anil K. Jain, Handbook of Face Recognition, Springer.
[27] Y. Meyer, "Wavelets: Algorithms and Applications," Society for Industrial and Applied Mathematics, Philadelphia, 1993, pp. 13-31, 101-105.
[28] Yuehui Chen and Yaou Zhao, "Face Recognition Using DCT and Hierarchical RBF Model," IDEAL 2006, LNCS 4224, pp. 355–362, Springer-Verlag Berlin Heidelberg, 2006.
[29] Zhao, W., Chellappa, R., Rosenfeld, A., Phillips, P. J., "Face Recognition: A Literature Survey," ACM Computing Surveys 35 (2003) 399–458.
[30] Zhongkai Han, Chi Fang and Xiaoqing Ding, A Discriminated Correlation Classifier for Face Recognition, in Proceedings of the 2010 ACM Symposium on Applied Computing, SAC'10, March 22-26, 2010, Sierre, Switzerland. DOI=http://doi.acm.org/10.1145/1774088.1774406
