
A robust method for face recognition and face emotion detection system using support vector machines

Conference Paper · December 2016
DOI: 10.1109/ICEECCOT.2016.7955175


2016 International Conference on Electrical, Electronics, Communication, Computer and Optimization Techniques (ICEECCOT)

A Robust Method for Face Recognition and Face Emotion Detection System using Support Vector Machines

Rajesh K M
Dept. of Telecommunication
Siddaganga Institute of Technology (SIT), Tumkur, India
[email protected]

Naveenkumar M
Dept. of Telecommunication
Siddaganga Institute of Technology (SIT), Tumkur, India
[email protected]

Abstract—This research presents a framework for a real-time face recognition and face emotion detection system based on facial features and their actions. The key elements of the face are considered for prediction of face emotions and identification of the user. The variations in each facial feature are used to determine the different emotions of the face. Machine learning algorithms are used for recognition and classification of different classes of face emotions by training on different sets of images. In this context, the implemented algorithms can contribute to several areas such as identification, psychological research and many real-world problems. The proposed algorithm is implemented using open source computer vision (OpenCV) and machine learning with Python.

Keywords—Face recognition; Fisherface; Support vector machine; dlib.

I. INTRODUCTION

The study of the face and its features has been an active research area for the past few decades. Pose variation, illumination conditions, bad lighting, etc. are still challenging factors faced by all algorithms. Face recognition and emotion detection systems are the major applications of recognition systems, and many algorithms have tried to solve these problems. Face recognition is the basic part of modern authentication/identification applications, so the accuracy of such a system should be high for better results. The Fisherface [1] algorithm presents a highly accurate approach for face recognition; it performs two classes of analysis to achieve recognition, i.e. principal component analysis (PCA) and linear discriminant analysis (LDA) respectively.

While dealing with machine learning problems, dimensionality is the biggest issue. Therefore PCA is used to reduce the dimensionality of the images/frames: it converts the high-dimensional space into a low-dimensional space, and by reducing the dimensions, the number of features per image is also reduced. LDA is a discriminant method used in many recognition problems; it computes the group of characteristic features that separates the different classes of image data for classification. Fisherface is the best algorithm among these, with an accuracy of around 96%. Detection of the face in an image or video is the fundamental step in any recognition system. It is difficult for a computer to find the face because the number of features in an image is extremely high. P. Viola and M. Jones [14] introduced a cascade of classifiers that groups the large number of features into successive classifier stages. Using these classifiers, the different facial features are detected and can be used for further processing. However, this method is limited to the frontal view of the face. Therefore, in order to detect the facial features of a person's face under different poses, facial landmark annotation [3] is required. For annotation of facial landmarks, the popular active shape model (ASM) or the open source library dlib can be used. Emotion detection is based on the different expressions of the face, and these expressions are generated by variations in the facial features.

Face emotion recognition [2] uses a support vector machine [5] for finding the different emotions of the face and also for classifying them. PCA is used to extract the facial features and to reduce the image dimensions. Since a face is a two-dimensional image, it is preferable to use a two-dimensional vector space for face analysis; hence, for dimensionality reduction, 2DPCA [4] is best suited for faces under different poses. 2DPCA is used to remove the unnecessary parts of the image. Multiobjective-algorithm-based optimization and classifiers are used. SVMs are used to classify the image data under consideration. An SVM finds the optimal separation between two or more classes of data and creates a hyper-plane with the maximum margin; in general, the greater the margin, the lower the generalization error. SVMs are memory efficient and effective in high-dimensional spaces.

The rest of the paper is organized as follows. Section II describes face recognition and its algorithm. Section III describes the emotion detection system. Experimental results are shown in Section IV. Finally, in Section V the conclusion of our work is discussed.

II. FACE RECOGNITION

Face recognition is the process of identifying a person's face in an image or video, which involves several processing steps. Figure 1 shows the block diagram of the face recognition system, which includes face detection, face extraction and face matching.

978-1-5090-4697-3/16/$31.00 ©2016 IEEE 1
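The PCA step described in the introduction, projecting high-dimensional face images onto a small set of eigenfaces, can be sketched in a few lines of NumPy. This is an illustrative sketch under assumed array shapes and toy data, not the paper's actual implementation:

```python
import numpy as np

def pca_reduce(images, k):
    """Project flattened face images onto the top-k principal
    components (eigenfaces), reducing the dimensionality."""
    X = np.asarray(images, dtype=float)   # shape: (n_samples, n_pixels)
    mean = X.mean(axis=0)
    Xc = X - mean                         # center the data
    # SVD of the centered data gives the eigenvectors of the covariance
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                   # top-k eigenfaces, one per row
    return (Xc @ components.T), components, mean

# Toy example: 6 "images" of 16 pixels each, reduced to 3 features
rng = np.random.default_rng(0)
faces = rng.random((6, 16))
reduced, eigenfaces, mean = pca_reduce(faces, k=3)
print(reduced.shape)  # (6, 3)
```

The same projection matrix is then reused at recognition time, so the live frame is mapped into the same low-dimensional space as the stored training images.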


Figure 1: Block diagram of Face recognition system.

Recognition is based on the stored image data of different groups of persons. Input images of any type can be used for recognition:

1. Still images.
2. Video frames or video stills.
3. Video.

The input image is subjected to face detection to detect the face. The detected face is then extracted from the image, and these images are saved as a database. The saved images are compared with the input image, and this matching is performed to establish the user's identity. The recognition result gives the identification of the person (particularly his/her name). Figure 2 depicts the step-by-step architecture developed for face recognition.

Figure 2: Real time Face recognition system.

The algorithm is designed in such a way that, if a person is being recognized for the first time, the system considers him a new user and performs every step of the operation. If the person's data is already stored, he is considered a "Registered user" and only the matching operation is performed to establish his identity. OpenCV contains cascade classifiers in which the Viola-Jones face detection algorithm is implemented. Using these classifiers, the face region is detected in the image: images containing a face region are classified as positive and images without a face as negative, and the negative images are ignored for further processing. The stored images consist of face images of dimension 273x273; the more images that are stored, the higher the recognition rate. If the stored set contains false images or wrong file extensions, recognition is not possible, so care should be taken while capturing the input images. The Fisherface algorithm is applied for classification of the different users: it generates the fisherfaces of each image, which are used for recognition, and performs "leave-one-out" cross-validation to validate the user identification.

III. EMOTION DETECTION

Face emotion detection is used to predict the emotional state of a person based on his or her facial expressions. The overview of the emotion detection system is shown in figure 3.

Figure 3: Block diagram of Emotion detection system.

Here the input images are classified into two types:

o Training images.
o Testing images.

Training images are used for training the classifier, while testing images are used to verify the algorithm by predicting the different emotions of the face. Expression analysis is the major part of emotion detection; the schematic of expression analysis for classifying different emotions is shown in figure 4.
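The "leave-one-out" cross-validation used to validate user identification in the Fisherface stage trains on all stored images except one and tests on the held-out image, repeating for every sample. A minimal sketch, with a nearest-class-mean classifier standing in for the full Fisherface model (an assumption for illustration only):

```python
import numpy as np

def leave_one_out_accuracy(X, y, fit, predict):
    """Leave-one-out cross-validation: train on every sample but one,
    test on the held-out sample, and average over all splits."""
    X, y = np.asarray(X, float), np.asarray(y)
    correct = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i      # hold out sample i
        model = fit(X[mask], y[mask])
        correct += predict(model, X[i]) == y[i]
    return correct / len(X)

# Stand-in classifier: nearest class mean (not the real Fisherface model)
def fit_means(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_mean(model, x):
    return min(model, key=lambda c: np.linalg.norm(x - model[c]))

X = [[0, 0], [0, 1], [5, 5], [5, 6]]               # toy feature vectors
y = np.array(["userA", "userA", "userB", "userB"])  # user labels
print(leave_one_out_accuracy(X, y, fit_means, predict_mean))  # 1.0
```

Because every image serves as a test case exactly once, this gives an unbiased estimate of recognition accuracy even on small stored databases.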

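The expression-analysis stage treats facial landmarks (as produced by dlib's shape predictor) as key points for training and testing. One common way to turn raw (x, y) landmark coordinates into features that tolerate face position and size, sketched here as an assumption rather than the authors' exact encoding, is to normalize each point's distance from the landmark centroid:

```python
import numpy as np

def landmarks_to_features(points):
    """Turn (x, y) landmark key points into a feature vector:
    the distance of every point from the landmark centroid, scaled
    so the result is invariant to face position and overall size."""
    pts = np.asarray(points, dtype=float)   # shape: (n_points, 2)
    centroid = pts.mean(axis=0)
    dists = np.linalg.norm(pts - centroid, axis=1)
    scale = dists.max() if dists.max() > 0 else 1.0
    return dists / scale                    # values in [0, 1]

# Toy "landmarks": four corner points of a square mouth region
square = [[0, 0], [2, 0], [2, 2], [0, 2]]
features = landmarks_to_features(square)
print(features)  # all corners equidistant from the centroid -> [1. 1. 1. 1.]
```

Shifting or uniformly scaling the landmarks leaves the feature vector unchanged, which is the property that makes such encodings usable across faces captured at different distances from the camera.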
Figure 4: Block diagram of Emotion detection system.

PCA is applied to the training images to reduce their dimensionality, because the training images outnumber the testing images and, if the dimensionality is high, processing takes more time. Support vector machine classification is then performed to classify the different emotions, namely Happy, Sad, Angry, Fear, Disgust and Surprise. The detailed flow diagram of the emotion detection system is shown in figure 5.

Figure 5: Workflow of Emotion detection system.

Facial features such as the eyes, nose, lips and face contour are considered the action units of the face and are responsible for the creation of expressions; they are extracted using the open source library dlib. The SVM classifier compares the features of the training data and the testing data to predict the emotion of the face. Here the facial features are taken as the key points used for training and testing. The support vector machine is a supervised machine learning method; machine learning algorithms are advantageous over other approaches because of their lower error rate and faster results. LinearSVC, also called MultiSVM [11], is used for classification; it uses the "one-vs-all" strategy for training n-class models.

IV. EXPERIMENTAL RESULTS

The proposed system is tested on datasets consisting of a wide range of face images with different expressions, poses, illumination conditions and genders. We used the CK and CK+ databases [10] for training the emotion detection system. The algorithm is tested on the IMM database [7] and also on our own test images. Both databases are open source, and our algorithm performed well on both.

A. Experimental results of face recognition.

Figure 6: Real-Time Face recognition results.

For face recognition, we used a webcam for capturing faces. The implemented algorithm is capable of recognizing different persons in a single window. If the recognition environment has proper lighting and little background noise, the recognition rate will be high.

B. Experimental results of Emotion detection.

Images of dimension 640x480 are used for testing the emotion detection system. For training, the images from the CK and CK+ databases are used. 320 images of

different expressions are used for training. For testing of the algorithm, we used images captured with the webcam; 50 to 60 different sets of images of different persons were used in testing. The MultiSVM classifier is used for classification of the different emotions, and the one-vs-all SVM classifier is used for training the different classes of expressions. Dlib is used to extract the facial features, and the experimental results show the detection of the different emotions. PCA is used to reduce the dimensionality; it finds the small number of eigenfaces that span the space required to represent a face. Figures 7.1 and 7.2 show the detected results of the different emotions.

TABLE I. TIME ESTIMATION

Type                      | Time taken (sec)
Face detection            | 0.0844
Facial feature extraction | 0.9216
Classification using SVM  | 0.1956
Emotion detection         | 0.1994
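The one-vs-all training of n-class emotion models described above can be illustrated with a minimal linear SVM trained by sub-gradient descent on the hinge loss. The actual system uses scikit-learn's LinearSVC; the training loop, toy feature vectors and class names below are assumptions for the sketch:

```python
import numpy as np

def train_binary_svm(X, y, epochs=200, lr=0.1, lam=0.01):
    """Linear SVM via sub-gradient descent on the regularized hinge
    loss. Labels y must be in {-1, +1}; returns (weights, bias)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:      # inside margin: hinge is active
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                          # correct side: only shrink w
                w -= lr * lam * w
    return w, b

def train_one_vs_all(X, labels):
    """One-vs-all strategy: one binary SVM per emotion class."""
    return {c: train_binary_svm(X, np.where(labels == c, 1, -1))
            for c in np.unique(labels)}

def predict(models, x):
    # Pick the class whose hyper-plane gives the largest decision value
    return max(models, key=lambda c: x @ models[c][0] + models[c][1])

# Toy 2-D "feature vectors" for three emotion classes (illustrative only)
X = np.array([[0, 0], [1, 0], [10, 10], [11, 10], [0, 10], [0, 11]], float)
labels = np.array(["sad", "sad", "happy", "happy", "angry", "angry"])
models = train_one_vs_all(X, labels)
print(predict(models, np.array([10.0, 10.5])))
```

Each binary model answers "this emotion versus all others", and the final label is the class with the largest decision value, which is the same one-vs-all scheme LinearSVC applies internally.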
Figure 7.1: Detection of 'Happy' face.

V. CONCLUSION AND FUTURE WORK

Face recognition implemented in real time helps to recognize human faces and can be used for person identification and authentication purposes. Face emotion detection is implemented using support vector machine classifiers, which are capable of classifying different classes of emotions accurately. The accuracy of both face recognition and emotion detection can be increased by increasing the number of images used during training. The detection time is significantly short, and hence the system achieves a low run time along with high accuracy. Future work includes implementing the system on Android, which would make it available to more users.
Figure 7.2: Detection of Disgust, Fear, Anger and Surprise faces.

The emotions can be classified as positive and negative; these can be used to understand the mental condition of the person. The implementation is done using OpenCV and Python along with additional dependencies such as dlib, scikit-learn and scikit-image. Table 1 shows the time estimation of the different detections performed. The time taken for each process is obtained using Python's time function.

REFERENCES

[1] Hyung-Ji Lee, Wan-Su Lee, Jae-Ho Chung, "Face recognition using Fisherface algorithm and elastic graph matching", IEEE International Conference on Image Processing, Vol. 1, pp. 998-1001, October 2001.
[2] Adrian Dinculescu, Cristian Vizitiu, Alexandru Nistorescu, Mihaela Marin, Andreea Vizitiu, "Novel Approach to Face Expression Analysis in Determining Emotional Valence and Intensity with Benefit for Human Space Flight Studies", 5th IEEE International Conference on E-Health and Bioengineering (EHB), pp. 1-4, November 2015.
[3] Rajesh K M, Naveenkumar M, "An Adaptive-Profile Modified Active Shape Model for Automatic Landmark Annotation Using OpenCV", International Journal of Engineering Research in Electronic and Communication Engineering (IJERECE), Vol. 3, Issue 5, pp. 18-21, May 2016.
[4] Samiksha Agrawal, Pallavi Khatri, "Facial Expression Detection Techniques: Based on Viola and Jones algorithm and Principal Component Analysis", Fifth International Conference on Advanced Computing Communication Technologies, pp. 108-112, February 2015.
[5] Ibrahim A. Adeyanju, Elijah O. Omidiora, Omobolaji F. Oyedokun, "Performance Evaluation of Different Support Vector Machine Kernels for Face Emotion Recognition", SAI Intelligent Systems Conference, pp. 804-806, November 2015.
[6] Ambika Ramchandra, Ravindra Kumar, "Overview Of Face Recognition System Challenges", International Journal of Scientific & Technology Research, Vol. 2, pp. 234-236, August 2013.
[7] M. M. Nordstrom, M. Larsen, J. Sierakowski, M. B. Stegmann, "The IMM face database - an annotated dataset of 240 face images", DTU Informatics, Tech. Rep., 2004.
[8] V. Kazemi, J. Sullivan, "One Millisecond Face Alignment with an Ensemble of Regression Trees", IEEE Conference on Computer Vision and Pattern Recognition, pp. 1867-1874, 2014.
[9] Abu Sayeed Md. Sohail, Prabir Bhattacharya, "Classifying Facial Expressions Using Point-Based Analytic Face Model and Support Vector Machines", IEEE International Conference on Systems, Man and Cybernetics, pp. 1008-1013, October 2007.
[10] Patrick Lucey, Jeffrey F. Cohn, Takeo Kanade, "The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression", IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, pp. 94-101, June 2010.
[11] Jan Erik Solem, "Programming Computer Vision with Python", First Edition, ISBN 13: 978-93-5023-766-3, July 2012.
[12] Ketki R. Kulkarni, Sahebrao B. Bagal, "Facial Expression Recognition", 2015 Annual IEEE India Conference (INDICON), pp. 1-5, 2015.
[13] Ajit P. Gosavi, S. R. Khot, "Emotion Recognition Using Principal Component Analysis with Singular Value Decomposition", International Conference on Electronics and Communication Systems (ICECS 2014), pp. 1-5, February 2014.
[14] P. Viola, M. Jones, "Rapid object detection using a boosted cascade of simple features", Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 1, pp. 511-518, 2001.
[15] K. W. Wan, K. M. Lam, K. C. Ng, "An accurate active shape model for facial feature extraction", Pattern Recognition Letters, Vol. 26, pp. 2409-2423, 2005.
