A robust method for face recognition and face emotion detection system using
support vector machines
Abstract—This research presents a framework for a real-time face recognition and face emotion detection system based on facial features and their actions. The key elements of the face are considered for predicting face emotions and identifying the user. The variations in each facial feature are used to determine the different emotions of the face. Machine learning algorithms are used for the recognition and classification of the different classes of face emotions by training on different sets of images. In this context, implementing these algorithms would contribute to several areas such as identification, psychological research and many real-world problems. The proposed algorithm is implemented using the Open Source Computer Vision library (OpenCV) and machine learning with Python.

Keywords—Face recognition; Fisherface; Support vector machine; dlib.
I. INTRODUCTION

The study of the face and its features has been an active research area for the past few decades. Pose variation, illumination conditions, bad lighting, etc. are still challenging factors for all algorithms. Face recognition and emotion detection systems are major applications of recognition systems, and many algorithms have tried to solve these problems. Face recognition is a basic part of modern authentication/identification applications, so the accuracy of such a system should be high for better results. The Fisherface [1] algorithm provides a highly accurate approach to face recognition; it performs two kinds of analysis to achieve recognition, namely principal component analysis (PCA) and linear discriminant analysis (LDA).

When dealing with machine learning problems, dimensionality is the biggest issue. Therefore PCA is used to reduce the dimensionality of the images/frames; it converts the high dimensional space into a low dimensional space. By reducing the dimensions, the number of features per image is also reduced. LDA is a discriminant method used in many recognition problems; it computes the group of characteristic features that best separates the different classes of image data for classification. Fisherface performs best among these algorithms, with an accuracy of around 96%. Detection of the face in an image or video is the fundamental step in any recognition system. It is difficult for a computer to find the face because the number of features in an image is extremely high. Viola and Jones [14] introduced a method that groups large numbers of simple features into cascaded classifiers. Using these classifiers, the different facial features are detected and can be used for further processing. However, this method is limited to the frontal view of the face. Therefore, in order to detect the facial features of a person's face under different poses, facial landmark annotation [3] is required. For the annotation of facial landmarks, the popular active shape model (ASM) method or the open source library dlib can be used. Emotion detection is based on the different expressions of the face, and these expressions are generated by variations in the facial features.

Face emotion recognition [2] uses a support vector machine [5] to find the different emotions of the face and to classify them. PCA is used to extract the facial features and to reduce the image dimensions. Since a face is a two dimensional image, it is preferable to use a two dimensional vector space for face analysis; therefore 2DPCA [4] is also well suited for dimensionality reduction of faces under different poses. 2DPCA is used to remove the unnecessary parts of the image. Multi-objective optimization algorithms and classifiers are used. SVMs are used to classify the image data under consideration: an SVM finds the separating hyperplane with the largest possible margin between two or more classes of data, and in general the greater the margin, the smaller the generalization error. SVMs are memory efficient and effective in high dimensional spaces.
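As an illustration of the margin idea (not code from the paper), the following sketch fits a linear SVM on toy two-dimensional data and computes the width of the margin, 2/||w||; the data points are purely illustrative.

```python
# Sketch: a linear SVM separating two toy 2-D classes; the margin width
# 2/||w|| is what the classifier maximizes, and a larger margin generally
# means a lower generalization error.
import numpy as np
from sklearn.svm import SVC

X = np.array([[1.0, 1.0], [1.5, 2.0], [2.0, 1.5],    # class 0
              [5.0, 5.0], [5.5, 6.0], [6.0, 5.5]])   # class 1
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel="linear", C=1.0)
clf.fit(X, y)

w = clf.coef_[0]                      # normal vector of the separating hyperplane
margin = 2.0 / np.linalg.norm(w)      # distance between the two margin boundaries
print("margin width:", margin)
```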
The rest of the paper is organized as follows. Section II describes face recognition and its algorithm. Section III describes the emotion detection system. Experimental results are shown in Section IV. Finally, Section V concludes our work.

II. FACE RECOGNITION

Face recognition is the process of identifying a person's face in an image or video, and it involves several steps. Figure 1 shows the block diagram of the face recognition system, which comprises face detection, face extraction and face matching.
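The paper implements this pipeline with OpenCV. As a rough sketch (not the authors' code), the snippet below detects faces with a Haar cascade, i.e. the Viola-Jones detector [14], and recognizes them with OpenCV's Fisherface recognizer; it assumes opencv-contrib-python is installed, and the crop size and the train_faces/train_labels variables are illustrative.

```python
# Sketch: Haar-cascade (Viola-Jones) face detection followed by Fisherface
# recognition. Assumes opencv-contrib-python; training data is only hinted at.
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(frame):
    """Return fixed-size grayscale face crops found by the Haar cascade."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    boxes = detector.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    return [cv2.resize(gray[y:y + h, x:x + w], (120, 120))
            for (x, y, w, h) in boxes]

# Fisherface internally combines PCA and LDA, as outlined in the introduction.
recognizer = cv2.face.FisherFaceRecognizer_create()
# train_faces: list of 120x120 grayscale crops, train_labels: integer person IDs
# recognizer.train(train_faces, np.array(train_labels))
# for face in detect_faces(frame):
#     person_id, distance = recognizer.predict(face)  # lower distance = closer match
```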
III. EMOTION DETECTION SYSTEM

Facial features such as the eyes, nose, lips and face contour are considered the action units of the face and are responsible for the creation of expressions; they are extracted using the open source library dlib. The SVM classifier compares the features of the training data and the testing data to predict the emotion of the face. Here the facial features are taken as the key points used for training and testing. The support vector machine is a supervised machine learning method, and machine learning algorithms are advantageous over other approaches because of their lower error rate and faster results. LinearSVC, also called MultiSVM [11], is used for classification; it uses a "one-vs-all" strategy for training n-class models, as sketched below.

Figure 4: Block diagram of the emotion detection system.
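As a minimal sketch of this landmark-plus-classifier approach (assuming the dlib 68-point predictor file has been downloaded separately and that train_images/train_labels exist), the snippet below turns dlib's facial landmarks into a feature vector and trains scikit-learn's LinearSVC, whose default multi-class strategy is one-vs-rest ("one-vs-all").

```python
# Sketch: dlib facial landmarks as features for a one-vs-all LinearSVC
# emotion classifier. The predictor model file is an external download and
# the training variables are placeholders, not artifacts from the paper.
import dlib
import numpy as np
from sklearn.svm import LinearSVC

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def landmark_features(gray):
    """Extract 68 (x, y) landmark points and flatten them into one feature row."""
    rects = detector(gray, 1)           # gray: uint8 grayscale image
    if not rects:
        return None
    shape = predictor(gray, rects[0])
    points = np.array([(shape.part(i).x, shape.part(i).y) for i in range(68)],
                      dtype=np.float32)
    points -= points.mean(axis=0)       # make features position independent
    return points.flatten()

# LinearSVC trains one binary SVM per emotion class (one-vs-rest by default).
clf = LinearSVC()
# X = np.vstack([landmark_features(img) for img in train_images])
# clf.fit(X, train_labels)   # labels: Happy, Sad, Angry, Fear, Disgust, Surprise
# emotion = clf.predict(landmark_features(test_image).reshape(1, -1))[0]
```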
PCA is applied to the training images to reduce their dimensionality: the training set is larger than the testing set, and if the dimensionality is high the processing time grows accordingly. Support vector machine classification is then performed to classify the different emotions, namely Happy, Sad, Angry, Fear, Disgust and Surprise. The detailed flow diagram of the emotion detection system is shown in Figure 5.
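The paper does not show how this PCA step is implemented; one possible sketch, using scikit-learn (listed later among the dependencies), is given below, with the component count and the placeholder arrays purely illustrative.

```python
# Sketch: PCA dimensionality reduction of flattened face images before SVM
# classification. The data below is random placeholder data, not a dataset
# from the paper.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

X_train = np.random.rand(60, 120 * 120)        # 60 flattened grayscale faces
y_train = np.random.randint(0, 6, size=60)     # six emotion classes

# PCA keeps a small number of components (eigenfaces); the SVM then
# classifies samples in the reduced space.
model = make_pipeline(PCA(n_components=30), LinearSVC())
model.fit(X_train, y_train)
# predictions = model.predict(X_test)  # X_test: same flattened image layout
```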
IV. EXPERIMENTAL RESULTS

The proposed system is tested on datasets consisting of a wide range of face images with different expressions, poses, illumination conditions and genders. We used the CK and CK+ databases [10] for training the emotion detection system. The algorithm is tested on the IMM database [7] and also on our own test images. Both databases are open source, and our algorithm performed well on both.

A. Experimental results of face recognition
Different expressions are used for training. For testing of the algorithm, we used images captured with a webcam; 50 to 60 images of different persons were used for testing. The MultiSVM classifier is used for the classification of the different emotions, and a one-vs-all SVM classifier is used for training the different classes of expressions. Dlib is used to extract the facial features, and the experimental results show the detection of the different emotions. PCA is used to reduce the dimensionality; it finds the small number of eigenfaces that span the space required to represent a face. Figures 7.1 and 7.2 show the detected results for the different emotions.

Figure 7.1: Detection of the 'Happy' face.

Figure 7.2: Detection of the Disgust, Fear, Anger and Surprise faces.

The emotions can be classified as positive and negative, and these can be used to understand the mental condition of the person. The implementation is done using OpenCV and Python along with additional dependencies such as dlib, scikit-learn and scikit-image. Table I shows the time estimation of the different detection steps; the time taken for each process is obtained using Python's time function.

TABLE I. TIME ESTIMATION

    Type                         Time taken (sec)
    Face detection               0.0844
    Facial feature extraction    0.9216
    Classification using SVM     0.1956
    Emotion detection            0.1994
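A minimal sketch of how such per-stage timings can be collected with Python's time module is shown below; the stage functions are placeholders standing in for the steps sketched earlier.

```python
# Sketch: timing each processing stage; detect_faces, landmark_features and
# clf are placeholders for the stages outlined in the previous sections.
import time

def timed(label, func, *args):
    """Run func(*args), print the elapsed wall-clock time and return the result."""
    start = time.time()
    result = func(*args)
    print(f"{label}: {time.time() - start:.4f} s")
    return result

# faces = timed("Face detection", detect_faces, frame)
# features = timed("Facial feature extraction", landmark_features, faces[0])
# emotion = timed("Classification using SVM", clf.predict, features.reshape(1, -1))
```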
V. CONCLUSION AND FUTURE WORK

Face recognition, implemented here in real time, helps to recognize human faces and can be used for person identification and authentication purposes. Face emotion detection is implemented using support vector machine classifiers, which are capable of classifying the different classes of emotions accurately. The accuracy of both face recognition and emotion detection can be increased by increasing the number of training images. The detection time is small, and hence the system achieves a short run time along with high accuracy. Future work includes implementing the system on Android to make it available to more users.
REFERENCES
[1] Hyung-Ji Lee, Wan-Su Lee, Jae-Ho Chung, "Face recognition using Fisherface algorithm and elastic graph matching", IEEE International Conference on Image Processing, Vol. 1, pp. 998-1001, October 2001.
[2] Adrian Dinculescu, Cristian Vizitiu, Alexandru Nistorescu, Mihaela Marin, Andreea Vizitiu, "Novel Approach to Face Expression Analysis in Determining Emotional Valence and Intensity with Benefit for Human Space Flight Studies", 5th IEEE International Conference on E-Health and Bioengineering (EHB), pp. 1-4, November 2015.
[3] Rajesh K M, Naveenkumar M, "An Adaptive-Profile Modified Active Shape Model for Automatic Landmark Annotation Using Open CV", International Journal of Engineering Research in Electronic and Communication Engineering (IJERECE), Vol. 3, Issue 5, pp. 18-21, May 2016.
[4] Samiksha Agrawal, Pallavi Khatri, "Facial Expression Detection Techniques: Based on Viola and Jones algorithm and Principal Component Analysis", Fifth International Conference on Advanced Computing Communication Technologies, pp. 108-112, February 2015.
[5] Ibrahim A. Adeyanju, Elijah O. Omidiora, Omobolaji F. Oyedokun, "Performance Evaluation of Different Support Vector Machine Kernels for Face Emotion Recognition", SAI Intelligent Systems Conference, pp. 804-806, November 2015.
[6] Ambika Ramchandra, Ravindra Kumar, "Overview Of Face Recognition System Challenges", International Journal of Scientific Technology Research, Vol. 2, pp. 234-236, August 2013.
[7] M. M. Nordstrom, M. Larsen, J. Sierakowski, M. B. Stegmann, "The IMM face database - an annotated dataset of 240 face images", Elsevier, DTU Informatics, Tech. Rep., 2004.
[8] V. Kazemi, J. Sullivan, "One Millisecond Face Alignment with an Ensemble of Regression Trees", IEEE Conference on Computer Vision and Pattern Recognition, pp. 1867-1874, 2014.
[9] Abu Sayeed Md. Sohail, Prabir Bhattacharya, "Classifying Facial Expressions Using Point-Based Analytic Face Model and Support Vector Machines", IEEE International Conference on Systems, Man and Cybernetics, pp. 1008-1013, October 2007.
[10] Patrick Lucey, Jeffrey F. Cohn, Takeo Kanade, "The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression", IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 94-101, June 2010.
[11] Jan Erik Solem, "Programming Computer Vision with Python", First Edition, ISBN 13: 978-93-5023-766-3, July 2012.
[12] Ketki R. Kulkarni, Sahebrao B. Bagal, "Facial Expression Recognition", 2015 Annual IEEE India Conference (INDICON), pp. 1-5, 2015.
[13] Ajit P. Gosavi, S. R. Khot, "Emotion Recognition Using Principal Component Analysis with Singular Value Decomposition", International Conference on Electronics and Communication Systems (ICECS 2014), pp. 1-5, February 2014.
[14] P. Viola, M. Jones, "Rapid object detection using a boosted cascade of simple features", Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 1, pp. 511-518, 2001.
[15] K. W. Wan, K. M. Lam, K. C. Ng, "An accurate active shape model for facial feature extraction", Pattern Recognition Letters, Vol. 26, pp. 2409-2423, 2005.