
Article in International Journal of Computer Theory and Engineering, October 2012. DOI: 10.7763/IJCTE.2012.V4.574


International Journal of Computer Theory and Engineering, Vol. 4, No. 5, October 2012

Face Recognition Using Gabor Filter Bank, Kernel Principal Component Analysis and Support Vector Machine

Saeed Meshgini, Ali Aghagolzadeh, and Hadi Seyedarabi

Abstract—This paper presents a novel face recognition method based on the Gabor filter bank, Kernel Principal Component Analysis (KPCA) and Support Vector Machine (SVM). At first, a Gabor filter bank with 5 frequencies and 8 orientations is applied to each face image to extract features that are robust against local distortions caused by variations in illumination, facial expression and pose. Then, the feature reduction technique of KPCA is performed on the outputs of the filter bank to form new low-dimensional feature vectors. Finally, SVM is used for classification of the extracted features. The proposed method is tested on the ORL face database. The experimental results reveal that the proposed method has a maximum recognition rate of 98.5%, which is higher than the other related algorithms applied to the ORL database.

Index Terms—Face recognition, Gabor filter bank, kernel principal component analysis, support vector machine.

I. INTRODUCTION

The technology of automatic face recognition involves computer recognition of personal identity based on the geometric or statistical features derived from face images [1]. This technology can be used in a wide range of applications such as identity authentication, access control, and surveillance. Interest and research activity in face recognition have increased significantly over the past few years. A face recognition system should be able to deal with the various changes in face images. However, the variations between images of the same face due to illumination and viewing pose are nearly always larger than the image variations due to a change in face identity [2]. This problem poses a great challenge in face recognition. Two issues are important when a face recognition system is designed. The first is what features should be extracted from face images. The selected features have to represent the discriminating properties between different faces in an effective manner. They should also be robust against the different variations within the images of one face, such as changes in viewpoint, illumination and expression. The second issue is how to classify a new face image using the selected features [3].

In recent years, extracting effective features according to the principles of the human visual sense has been a hot research topic. Recently, researchers have shown that Gabor wavelets, whose kernels are similar to the response of the two-dimensional receptive field profiles of the mammalian simple cortical cell, exhibit the desirable characteristics of capturing salient visual properties such as spatial localization, orientation selectivity, and spatial frequency [4]. Now, more and more research is being conducted on face recognition approaches which use Gabor wavelets to extract human face features. But the dimension of the Gabor face features computed at each pixel is very large, so the complexity is very high. To reduce the dimensions, many researchers utilized a simple down-sampling technique to select feature points [5]. However, these methods which used a down-sampling strategy still produce a high-dimensional feature matrix and, in addition, can lead to partial loss of discriminative feature information. Therefore, it causes accuracy reduction in the classification stage. So, researchers used PCA or other feature extraction approaches to reduce the dimensionality [6].

After the stage of feature extraction, the extracted features have to be passed to a powerful classifier. Support Vector Machines (SVMs) were proposed by Vapnik [7] as a very effective method for general purpose pattern recognition. Intuitively, given a set of points belonging to two classes, an SVM finds the hyperplane that separates the largest possible fraction of points of the same class on the same side, while maximizing the distance from either class to the hyperplane. According to Vapnik [7], this hyperplane is called the Optimal Separating Hyperplane (OSH), and it minimizes the risk of misclassification not only for the examples in the training set, but also for the unseen examples of the test set.

According to the above discussion, this paper presents a novel face recognition system by integrating the Gabor wavelet representation of face images, the kernel PCA method and the SVM classifier. At first, a Gabor filter bank with 5 scales and 8 orientations is used for feature extraction from raw face images, because the Gabor filters remove most of the variability in images caused by changes in lighting conditions and contrast and create features that are robust against illumination and pose variations. Then, the KPCA technique is applied to reduce the dimensionality of the filtered images. Finally, a multi-class support vector machine (multi-SVM) is used for classification. We examine the performance of our proposed method over the ORL face database and compare the obtained results with those in the latest published papers. The experimental results show that our proposed method has a maximum recognition rate of 98.5%, which is higher than the other similar algorithms tested on the ORL database.

Manuscript received June 20, 2012; revised August 6, 2012.
Saeed Meshgini and Hadi Seyedarabi are with the Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz, Iran (e-mail: [email protected], [email protected]).
Ali Aghagolzadeh was with the Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz, Iran. He is now with the Faculty of Electrical and Computer Engineering, Babol Nooshirvani University of Technology, Babol, Iran (e-mail: [email protected]).

The rest of this paper is organized as follows. Section 2 gives the background information about the Gabor filter bank, KPCA, and SVM. Section 3 presents the proposed algorithm. Experimental results are given in Section 4, and Section 5 concludes the paper.


II. BACKGROUND INFORMATION

A. Gabor Filter Bank

As mentioned before, the characteristics of the Gabor wavelets (filters), especially their frequency and orientation representations, are similar to those of the human visual system, and they have been found to be particularly appropriate for texture representation and discrimination. Gabor filter-based features, directly extracted from gray-level images, have been widely and successfully applied to different pattern recognition problems [8]. In the spatial domain, a 2-D Gabor filter is a Gaussian kernel function modulated by a sinusoidal plane wave. It can be represented by

Ψ_ω,θ(x, y) = (1 ∕ (2πσ²)) exp(−(x′² + y′²) ∕ (2σ²)) exp(jωx′)
x′ = x cos θ + y sin θ,  y′ = −x sin θ + y cos θ    (1)

where (x, y) is the pixel position in the spatial domain, ω is the central angular frequency of the sinusoidal plane wave, θ is the anti-clockwise rotation of the Gaussian function (the orientation of the Gabor filter), and σ represents the sharpness of the Gaussian function along both the x and y directions. We set σ ≈ π ∕ ω to define the relationship between σ and ω in our experiments [9].

Gabor filters with different frequencies and orientations, which together form the Gabor filter bank, have been used to extract features of face images. In most cases, a Gabor filter bank with 5 frequencies and 8 orientations is used [10]. Fig. 1 shows the Gabor filter bank with 5 different scales and 8 different orientations. The following equations give the 5 frequencies (m = 1, 2, …, 5) and 8 orientations (n = 1, 2, …, 8) of the Gabor filter bank:

ω_m = (π ∕ 2) ∕ (√2)^(m−1),  θ_n = (π ∕ 8)(n − 1)    (2)

Fig. 1. Real parts of the Gabor filters at 5 scales and 8 orientations.

The input image I(x, y) is convolved with the Gabor filter Ψ_ω,θ(x, y) to obtain the Gabor feature representation as

G_m,n(x, y) = I(x, y) ∗ Ψ_ωm,θn(x, y)    (3)

The phase of G_m,n(x, y) changes linearly with small displacement in the direction of the sinusoid, but its magnitude changes slowly with the displacement [11]. Hence, we use the magnitude of the convolution outputs.
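As an illustration of the filter bank defined by (1)–(3), a minimal NumPy sketch follows. This is not the authors' implementation: the 31×31 kernel size and the FFT-based (circular) convolution are our own assumptions made to keep the sketch short.

```python
import numpy as np

def gabor_kernel(omega, theta, size=31):
    # 2-D Gabor filter of (1): a Gaussian envelope modulated by a complex
    # sinusoid, with sigma tied to omega via sigma ~ pi / omega as in [9].
    sigma = np.pi / omega
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    return envelope * np.exp(1j * omega * xr) / (2 * np.pi * sigma ** 2)

def gabor_bank():
    # 40 filters: the 5 frequencies and 8 orientations given by (2).
    omegas = [(np.pi / 2) / np.sqrt(2) ** (m - 1) for m in range(1, 6)]
    thetas = [(np.pi / 8) * (n - 1) for n in range(1, 9)]
    return [gabor_kernel(w, t) for w in omegas for t in thetas]

def gabor_magnitudes(image, bank):
    # Magnitudes of the convolution outputs G_{m,n} of (3), computed in the
    # frequency domain (circular convolution) for speed.
    H, W = image.shape
    F = np.fft.fft2(image)
    return np.stack([np.abs(np.fft.ifft2(F * np.fft.fft2(k, s=(H, W))))
                     for k in bank])
```

Applied to one 80×70 face image, `gabor_magnitudes` returns a 40×80×70 stack of non-negative magnitude maps, matching the 40 (5×8) filter outputs described in Section III.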

B. Kernel PCA

Principal Component Analysis (PCA), a powerful technique for reducing a large set of correlated variables to a smaller set of uncorrelated components, has been applied extensively to both face representation and face recognition. Kirby and Sirovich [12] showed that any particular face can be effectively represented along the eigenpictures coordinate space. Therefore, any face can be approximately reconstructed by using just a small collection of eigenpictures and the corresponding projections. Applying the PCA technique to face recognition, Turk and Pentland [13] developed the well-known eigenfaces method, where the eigenfaces correspond to the eigenvectors associated with the largest eigenvalues of the face covariance matrix. The eigenfaces thus define a feature space, or "face space", which drastically reduces the dimensionality of the original space, and face detection and recognition are then carried out in the reduced space.

The PCA algorithm, however, considers only the second-order statistics (variances and covariances) of the input data. Since these second-order statistics provide only partial information on the statistics of face images, it might become necessary to incorporate higher-order statistics as well. Toward that goal, PCA is extended to a nonlinear form by nonlinearly mapping the input space to a feature space, where PCA is then applied [14]. In other words, a pattern in the original input space is mapped into a potentially much higher dimensional feature vector in the feature space. An initial motivation of KPCA is to perform PCA in the feature space. However, it is difficult to do so directly because it is computationally very expensive to compute the dot products in a high dimensional feature space. Fortunately, kernel techniques can be used to avoid this difficulty: the algorithm can actually be implemented in the input space by virtue of the kernel trick, and the explicit mapping process is not required at all [15]. Now, let us describe the KPCA formalization as follows.

Given a set of M samples x₁, x₂, …, x_M ∈ ℝᴺ in the input space, suppose Φ is a nonlinear mapping between the input space and the feature space ℱ, i.e. Φ: ℝᴺ → ℱ. Note that, for KPCA, the nonlinear mapping Φ usually defines a kernel function. Assume D represents the data matrix in the feature space: D = [Φ(x₁) Φ(x₂) … Φ(x_M)]. Let K ∈ ℝᴹˣᴹ define a kernel matrix by means of dot products in the feature space, as in

K_ij = Φ(x_i) · Φ(x_j)    (4)

Scholkopf and Smola [16] showed that the eigenvalues λ₁, λ₂, …, λ_M and the eigenvectors V₁, V₂, …, V_M of KPCA can be derived by solving the following eigenvalue equation

KA = MAΛ  with  A = [α₁ α₂ … α_M],  Λ = diag(λ₁, λ₂, …, λ_M)    (5)

where A ∈ ℝᴹˣᴹ is an orthogonal eigenvector matrix, Λ ∈ ℝᴹˣᴹ is a diagonal eigenvalue matrix with diagonal elements in decreasing order (λ₁ ≥ λ₂ ≥ … ≥ λ_M), and M is a constant (the number of training samples). In order to derive the eigenvector matrix V = [V₁ V₂ … V_M] of KPCA, A should first be normalized such that λ_i‖α_i‖² = 1, i = 1, 2, …, M. The eigenvector matrix V is then derived as

V = DA    (6)

Let x be a test sample whose image in the feature space is Φ(x). The KPCA features of x are derived as

F = Vᵗ Φ(x) = Aᵗ B    (7)

where B = [Φ(x₁)·Φ(x)  Φ(x₂)·Φ(x)  …  Φ(x_M)·Φ(x)]ᵗ.

C. SVM

As mentioned before, SVM is a pattern classifier which aims to find the Optimal Separating Hyperplane (OSH) between the two classes of a classification problem. The OSH is the hyperplane that separates the positive and negative training samples with the maximum margin. Also, SVM tries to minimize the number of misclassified training samples. Both of these goals are achieved by solving an optimization problem. Thus, we can obtain a classifier that has the best generalization ability.

Suppose the training samples are given as a set of pairs {(x₁, y₁), (x₂, y₂), …, (x_M, y_M)}, where x_i ∈ ℝᴺ indicates a feature vector and y_i ∈ {−1, 1} shows its label. In other words, if y_i = 1, then x_i belongs to the positive class; otherwise it belongs to the negative one. The kernel function K(x_i, x_j) is used to map the training data from the input space to the feature space. SVM aims to obtain a decision function f such that it can automatically recognize the label of a test feature vector x. To do this, SVM has to solve the following optimization problem:

Maximize  W(α) = Σᵢ₌₁ᴹ αᵢ − (1∕2) Σᵢ₌₁ᴹ Σⱼ₌₁ᴹ αᵢ yᵢ αⱼ yⱼ K(xᵢ, xⱼ)
subject to  Σᵢ₌₁ᴹ αᵢ yᵢ = 0,  αᵢ ∈ [0, C],  i = 1, 2, …, M    (8)

where α₁, α₂, …, α_M are Lagrange coefficients and C is a constant. The Lagrange multipliers are obtained by solving the above problem. Support vectors are the training samples whose corresponding Lagrange coefficients are not zero. After determining the support vectors and their corresponding Lagrange coefficients, the decision function f is given by the following equation

f(x) = sgn( Σ_{xᵢ∈S} αᵢ yᵢ K(xᵢ, x) + b )    (9)

where S is the set of support vectors and b is the bias term.

SVM was originally designed for binary classification. Some methods have been proposed which construct a multi-class classifier by combining several binary classifiers, such as the One-Against-All (OAA) and One-Against-One (OAO) methods [17]. OAA constructs k SVM models, where k is the number of classes; the ith SVM is trained with all of the examples in the ith class given positive labels, and all other examples given negative labels. In contrast, OAO constructs k(k−1) ∕ 2 classifiers, where each one is trained on data from two classes. Another method to generalize binary SVM classification to the multi-class setting is the all-together method, which solves the multi-class SVM in one step [17].

III. THE PROPOSED ALGORITHM

A block diagram of the proposed algorithm is shown in Fig. 2. The input face images from the ORL database are given to a Gabor filter bank at 5 scales and 8 orientations for feature extraction. Each face image of the ORL database has been resized to 70×80 pixels. After applying the Gabor filter bank, 40 (5×8) outputs (each one of size 70×80 pixels) are obtained for every input image. Then, each of these 40 outputs is down-sampled by a factor of 20 (4×5) and normalized to zero mean and unit variance (whitening transform). At the next stage, they are concatenated to each other, resulting in a feature vector of size 11200 (40×(70×80) ∕ 20) for every input image. Before applying the SVM classifier to the feature vectors, the KPCA algorithm is used for further reduction of the feature data dimensionality. There are several different kernel functions which can be used in the KPCA method. This paper uses the radial basis function (RBF):

K(xᵢ, xⱼ) = exp(−‖xᵢ − xⱼ‖² ∕ (2σ²))    (10)

In this research, three kinds of multi-class SVMs are used, comprising the OAO, OAA, and all-together methods. Each of them uses different kernel functions, including the RBF kernel, the polynomial kernel, and the linear kernel, which are described by (10), (11), and (12), respectively.

K(xᵢ, xⱼ) = (xᵢ · xⱼ + 1)ⁿ    (11)

K(xᵢ, xⱼ) = xᵢ · xⱼ    (12)

The next section presents the results of implementing the above proposed techniques.

Fig. 2. Block diagram of face recognition procedure by the proposed algorithm.
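The KPCA steps (4)–(7) with the RBF kernel (10) can be sketched in NumPy as follows. This is a minimal sketch under our own choices (for instance, no centering of the kernel matrix, which the paper does not discuss), not the authors' code; since the eigenvalues of K are M times those of (5), the code divides by M before the normalization λᵢ‖αᵢ‖² = 1.

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    # K(x_i, x_j) = exp(-||x_i - x_j||^2 / (2 sigma^2)), the RBF kernel (10).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def kpca_fit(X, n_components, sigma=1.0):
    # Kernel matrix (4), then the eigenproblem (5): K A = M A Lambda.
    M = len(X)
    K = rbf_kernel(X, X, sigma)
    mu, U = np.linalg.eigh(K)               # eigenvalues of K, ascending
    mu, U = mu[::-1], U[:, ::-1]            # decreasing: lambda_1 >= lambda_2 >= ...
    lam = mu[:n_components] / M             # eigenvalues of (5)
    A = U[:, :n_components] / np.sqrt(lam)  # normalize: lam_i ||alpha_i||^2 = 1
    return A, X, sigma

def kpca_features(model, x):
    # Projection (7): F = A^t B, with B_i = Phi(x_i) . Phi(x) = K(x_i, x).
    A, Xtrain, sigma = model
    B = rbf_kernel(Xtrain, x[None, :], sigma)
    return (A.T @ B).ravel()
```

In the proposed system, `X` would hold the 11200-dimensional Gabor feature vectors of the training images and `n_components` would be set to 180 as in Section IV.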


IV. EXPERIMENTAL RESULTS

Face images from the Cambridge Olivetti Research Lab (ORL) database are used to train and test the proposed face recognition system. The ORL database contains face images from 40 distinct persons. Each person has 10 different images, taken at different times. We show four individuals (in four rows) from the ORL face images in Fig. 3. There are variations in facial expression, such as open/closed eyes and smiling/not smiling, and in facial details, such as glasses/no glasses. All the images were taken against a dark homogeneous background with the subjects in an upright, frontal position, with tolerance for some side movement. There are also some variations in scale [3].

Fig. 3. Four individuals (each in one row) in the ORL face database.

In our experiment, all 400 face images of the ORL database from the 40 persons are used for training and testing. We randomly divide them into 2 sets with an equal number of images for each class, i.e. each set consists of 200 images (5 images for every person). One set is used for training and the other one is used for testing. This is performed 3 times, and each time new training and testing sets are chosen. The final result is obtained by averaging the results of these 3 experiments. This repetition strategy makes our results more reliable.

For each kernel function of the SVM classifier, we estimate the kernel parameter (σ for the RBF kernel and n for the polynomial kernel, as described by (10) and (11), respectively) and the cost parameter (C in (8)) with the 5-fold cross-validation method explained in [17]. Also, the number of eigenvectors used in the KPCA algorithm affects the recognition rate. Experimental results showed that the recognition rate is maximized when the number of selected eigenvectors is set to 180. So, we chose 180 eigenvectors for the KPCA algorithm in all experiments. TABLE I contains the average recognition rates of our proposed method. This table shows that the highest average recognition rate of the proposed algorithm is 98.5%, which is acceptable.

TABLE I: AVERAGE RECOGNITION RATES FOR THE GABOR+KPCA+SVM ALGORITHM (PROPOSED METHOD) OVER THE ORL FACE DATABASE (IN %)

Method        RBF     Polynomial   Linear
OAO           94.83   94.50        94.50
OAA           98.00   98.17        98.50
All-together  98.50   98.33        98.33

For the next experiment, PCA is used as the feature reduction algorithm instead of KPCA in order to compare the results with the proposed method. The experiment is based on the same 5-fold cross-validation technique as used with KPCA. For this experiment, the maximum recognition rates are achieved when the number of eigenvectors is set to 152. TABLE II shows the results for PCA. This table shows that the highest average recognition rate for PCA is 96.83%. Therefore, the proposed algorithm achieves a better recognition rate than the PCA algorithm.

TABLE II: AVERAGE RECOGNITION RATES FOR THE GABOR+PCA+SVM ALGORITHM OVER THE ORL FACE DATABASE (IN %)

Method        RBF     Polynomial   Linear
OAO           92.83   94.67        94.50
OAA           95.83   96.33        96.33
All-together  95.67   96.83        96.83

In order to compare the performance of our proposed scheme with the other related methods, we also present the results of some recent papers below.

In [18], a face recognition algorithm based on non-negative matrix factorization (NMF) and SVM was presented. The first row of TABLE III shows the performance of their proposed algorithm on the ORL database. As we can see, the highest recognition rate of our proposed method, which is 98.5%, exceeds the highest recognition rate reported by [18], which is 97.3%.

TABLE III: RECOGNITION RATES OVER THE ORL DATABASE (IN %) OBTAINED BY [18]

Method       3 training   4 training   5 training
NMF-SVM      89.2         96.5         97.3
Fisherfaces  85.4         91.6         93.8
Eigenfaces   78.3         83.7         87.5

Wavelet transform, PCA and SVM are combined in [19] to form a face recognition system. Their experiments over the ORL face database indicated a maximum accuracy of 92%, while the maximum accuracy achieved by our proposed algorithm is 98.5%.

In [20], a feature extraction method (HMAX) was introduced, and SVM was then used for classification. TABLE IV shows the results of their suggested algorithm over the ORL database. As we can see from this table, the maximum recognition rate of their method is 96%, which is lower than the results obtained by our proposed method.

TABLE IV: RECOGNITION RATES OVER THE ORL DATABASE (IN %) OBTAINED BY [20]

Class Number   SVM Kernel = 1   SVM Kernel = 2   SVM Kernel = 3   kNN
10             94               94               96               95.2
20             93               93               94               90
30             91.6             92               92.1             88
40             87               87               88.3             81.5

Finally, a face recognition algorithm was presented in [21] using the 2DLDA method for feature extraction and SVM for classification. Their experiments over the ORL database result in a maximum recognition rate of 98.33%, which is less accurate when compared with the maximum recognition rate of our proposed algorithm (98.5%).

In summary, our proposed algorithm can be an effective and reliable method for the technology of automatic face recognition.
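The split-and-average protocol described above (three random per-class 5/5 splits, accuracies averaged) can be sketched as follows. To keep the sketch self-contained, a 1-nearest-neighbour classifier stands in for the trained SVM, and the synthetic data in the usage note is our own assumption; only the splitting and averaging logic mirrors the paper.

```python
import numpy as np

def nn_classify(train_X, train_y, test_X):
    # 1-nearest-neighbour stand-in for the SVM classifier of Section II-C.
    d2 = ((test_X[:, None, :] - train_X[None, :, :]) ** 2).sum(-1)
    return train_y[d2.argmin(axis=1)]

def average_recognition_rate(X, y, n_repeats=3, seed=0):
    # Per class, split the images 5/5 into train/test at random,
    # repeat n_repeats times, and average the accuracies.
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(n_repeats):
        train_idx, test_idx = [], []
        for c in np.unique(y):
            idx = rng.permutation(np.where(y == c)[0])
            half = len(idx) // 2
            train_idx.extend(idx[:half])
            test_idx.extend(idx[half:])
        pred = nn_classify(X[train_idx], y[train_idx], X[test_idx])
        accs.append((pred == y[test_idx]).mean())
    return float(np.mean(accs))
```

For the ORL setting, `X` would hold the 180-dimensional KPCA feature vectors of all 400 images and `y` the 40 identity labels, with 10 images per class.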


V. CONCLUSION

In this paper, we have proposed an efficient face recognition algorithm based on the Gabor filter bank for feature extraction, the KPCA algorithm for feature reduction, and SVM for classification. Applying the Gabor filter bank removes most of the image variations due to changes in lighting conditions and contrast, and the extracted features are robust against small shifts and deformations. Comparisons between our proposed algorithm and the other related face recognition methods are conducted over the ORL database. The experimental results clearly demonstrate the effectiveness of our proposed algorithm.

REFERENCES

[1] J. Daugman, "Face and gesture recognition: overview," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 675-676, 1997.
[2] Y. Adini, Y. Moses, and S. Ullman, "Face recognition: the problem of compensating for changes in illumination direction," in Proc. European Conf. Computer Vision, pp. 286-296, 1994.
[3] G. Guo, S. Z. Li, and K. Chan, "Face recognition by support vector machines," in Proc. 4th IEEE International Conf. Automatic Face and Gesture Recognition, pp. 196-201, 2000.
[4] L. Shen and L. Bai, "A review on Gabor wavelets for face recognition," Pattern Analysis and Applications, vol. 9, no. 2-3, pp. 273-292, 2006.
[5] L. Nanni and D. Maio, "Weighted sub-Gabor for face recognition," Pattern Recognition Letters, vol. 28, no. 4, pp. 487-492, 2007.
[6] X. Xie and K. M. Lam, "Gabor-based kernel PCA with doubly nonlinear mapping for face recognition with a single face image," IEEE Trans. Image Processing, vol. 15, no. 9, pp. 2481-2492, 2006.
[7] V. N. Vapnik, Statistical Learning Theory, John Wiley and Sons, New York, 1998.
[8] L. Shen, L. Bai, and M. Fairhurst, "Gabor wavelets and general discriminant analysis for face identification and verification," Image and Vision Computing, vol. 25, no. 5, pp. 553-563, 2007.
[9] H. B. Deng, L. W. Jin, L. X. Zhen, and J. C. Huang, "A new facial expression recognition method based on local Gabor filter bank and PCA plus LDA," International Journal of Information Technology, vol. 11, no. 11, pp. 86-96, 2005.
[10] C. Liu and H. Wechsler, "Independent component analysis of Gabor features for face recognition," IEEE Trans. Neural Networks, vol. 14, no. 4, pp. 919-928, 2003.
[11] S. Fazli, R. Afrouzian, and H. Seyedarabi, "High-performance facial expression recognition using Gabor filter and probabilistic neural network," in Proc. IEEE International Conf. Intelligent Computing and Intelligent Systems, vol. 4, pp. 93-96, 2009.
[12] M. Kirby and L. Sirovich, "Application of the Karhunen-Loeve procedure for the characterization of human faces," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 12, no. 1, pp. 103-108, 1990.
[13] M. Turk and A. Pentland, "Eigenfaces for recognition," Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, 1991.
[14] C. Liu, "Gabor-based kernel PCA with fractional power polynomial models for face recognition," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 26, no. 5, pp. 572-581, 2004.
[15] J. Yang, A. F. Frangi, J. Y. Yang, D. Zhang, and Z. Jin, "KPCA plus LDA: a complete kernel Fisher discriminant framework for feature extraction and recognition," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 27, no. 2, pp. 230-244, 2005.
[16] B. Scholkopf, A. Smola, and K. Muller, "Nonlinear component analysis as a kernel eigenvalue problem," Neural Computation, vol. 10, no. 5, pp. 1299-1319, 1998.
[17] C. W. Hsu and C. J. Lin, "A comparison of methods for multiclass support vector machines," IEEE Trans. Neural Networks, vol. 13, no. 2, pp. 415-425, 2002.
[18] X. Sun, Q. Zhang, and Z. Wang, "Face recognition based on NMF and SVM," in Proc. 2nd International Symp. Electronic Commerce and Security, vol. 1, pp. 616-619, 2009.
[19] L. X. Wei, Y. Sheng, W. Qi, and L. Ming, "Face recognition based on wavelet transform and PCA," in Proc. KESE'09 Pacific-Asia Conf. Knowledge Engineering and Software Engineering, pp. 136-138, 2009.
[20] Z. Yaghoubi, K. Faez, M. Eliasi, and S. Motamed, "Face recognition using HMAX method for feature extraction and support vector machine classifier," in Proc. 24th International Conf. Image and Vision Computing, pp. 421-424, 2009.
[21] J. Y. Gan and S. B. He, "Face recognition based on 2DLDA and support vector machine," in Proc. International Conf. Wavelet Analysis and Pattern Recognition, pp. 211-214, 2009.

Saeed Meshgini received the B.S. degree in Electronic Engineering in 2005 and the M.Sc. degree in Communication Systems in 2007, both from the University of Tabriz, Tabriz, Iran. He is currently pursuing the Ph.D. program at the University of Tabriz. His current research interests are image processing, face recognition, pattern recognition and computer vision.

Ali Aghagolzadeh received the B.S. degree from the University of Tabriz, Tabriz, Iran, in 1985, the M.Sc. degree from the Illinois Institute of Technology, Chicago, IL, USA, in 1988, and the Ph.D. degree from Purdue University, West Lafayette, IN, USA, in 1991, all in Electrical Engineering. He is currently a professor in the Faculty of Electrical and Computer Engineering at Babol Nooshirvani University of Technology, Babol, Iran. His research interests are digital signal processing, digital image processing, computer vision and human-computer interaction. He has published more than 100 journal and conference papers on pattern recognition, facial expression recognition, image fusion, video compression and image and video coding.

Hadi Seyedarabi received the B.S. degree from the University of Tabriz, Iran, in 1993, the M.Sc. degree in Telecommunication Engineering from K.N.T. University of Technology, Tehran, Iran, in 1996, and the Ph.D. degree from the University of Tabriz, Iran, in 2006, all in Electrical Engineering. He is currently an assistant professor in the Faculty of Electrical and Computer Engineering at the University of Tabriz, Tabriz, Iran. His research interests are image processing, computer vision, human-computer interaction, facial expression recognition and facial animation.
