Volume 4, Issue 4, April 2014                                            ISSN: 2277 128X
International Journal of Advanced Research in Computer Science and Software Engineering
Research Paper
Available online at: www.ijarcsse.com

Emotion Detection Using Facial Expressions - A Review

Jyoti Rani, M.Tech Student, Department of Computer Science and Applications,
Kurukshetra University, Kurukshetra, Haryana (India)

Kanwal Garg, Assistant Professor, Department of Computer Science and Applications,
Kurukshetra University, Kurukshetra, Haryana (India)

Abstract— Facial expressions give important information about a person's emotions. Understanding facial
expressions accurately is one of the challenging tasks in interpersonal relationships. Automatic emotion detection
through facial expression recognition is now a major area of interest in fields such as computer science, medicine
and psychology. The HCI research community also uses automated facial expression recognition systems to obtain
better results. Various feature extraction techniques have been developed for recognizing expressions from static
images as well as from real-time video. This paper reviews the research work carried out and published in the
field of facial expression recognition and the various techniques used for it.

Keywords— automated facial expression recognition system, face detection, emotion detection, human-computer
interaction.

I. INTRODUCTION
Recognition of facial expressions makes it possible to identify the basic human emotions: anger, fear, disgust, sadness,
happiness and surprise. These expressions can vary from one individual to another. Mehrabian [1] indicated that only 7%
of a message is conveyed by the spoken words and 38% by voice intonation, while 55% is conveyed by facial expressions.
Facial expressions are produced by the movement of facial features.
A facial expression recognition system consists of four steps. The first is the face detection phase, which detects the face
in a still image or video frame. The second is the normalization phase, which removes noise and normalizes the face for
brightness and pixel position. In the third phase features are extracted and irrelevant features are eliminated. In the final
step the expression is classified into one of the six basic emotions: anger, fear, disgust, sadness, happiness or surprise.

Fig 1. Architecture of facial expression recognition system
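
As a rough illustration of these four steps, the Python sketch below (a hypothetical outline, not the implementation of any system surveyed here) uses OpenCV's Haar cascade detector for face detection, resizing and histogram equalization for normalization, the normalized pixel vector as a placeholder feature set, and an assumed pre-trained classifier emotion_clf for the final classification step.

# Hypothetical four-step pipeline sketch; assumes OpenCV, NumPy and a
# scikit-learn style classifier `emotion_clf` trained elsewhere.
import cv2
import numpy as np

EMOTIONS = ["anger", "fear", "disgust", "sadness", "happiness", "surprise"]

def recognize_expression(image_bgr, emotion_clf):
    # Step 1: face detection with a Haar cascade.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]

    # Step 2: normalization - crop, resize to a fixed grid, equalize brightness.
    face = cv2.equalizeHist(cv2.resize(gray[y:y + h, x:x + w], (64, 64)))

    # Step 3: feature extraction - here simply the normalized pixel vector;
    # a real system would use PCA, Gabor filters, geometric distances, etc.
    features = face.astype(np.float32).flatten() / 255.0

    # Step 4: classification into one of the six basic emotions.
    label = emotion_clf.predict(features.reshape(1, -1))[0]
    return EMOTIONS[int(label)]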

Facial expressions reveal the intention, affective state, cognitive activity, psychopathology and personality of a person [2].
In face-to-face interaction facial expressions convey many important communication cues, which help the listener to
understand the intended meaning of the spoken words. Facial expression recognition also helps in human-computer
interaction (HCI) systems [3]. In some robotic applications facial expressions are used to detect human emotions [4], and
automatic facial expression analysis also has applications in behavioral science and medicine [2][5], as well as in social
interaction and social intelligence. For an automatic facial expression recognition system, the representation and
categorization of facial feature deformations is a problem area; detailed information about this problem space for feature
extraction is given in [6]. This paper presents an overview of emotion detection using facial expression recognition and
the various emotions that can be detected automatically. Thereafter, various recognition techniques are reviewed and
some research challenges are pointed out.
II. FACIAL EXPRESSIONS AND EMOTIONS RECOGNITION
In 1884, William James put forward an important physiological theory of emotion: a person's emotions are rooted in
bodily experience. First we perceive the object, then the bodily response occurs, and only then does the emotion appear.
For example, when we see a lion or another danger, we begin to run and then we feel fear. Each emotion has its own
characteristic appearance. Six basic emotions, namely fear, surprise, sadness, happiness, anger and disgust, are
universally accepted, and these basic emotions can be divided into negative and positive ones.

Happiness is a positive emotion and everyone wants to experience it. It is an emotion or mood associated with attaining a
goal and is generally used as a synonym for pleasure and excitement. Fear, anger, disgust and sadness are negative
emotions, and most people do not enjoy them. Sadness can be described simply as the emotion of losing a goal or social
role [7]; it appears as feeling distraught, disappointed, dejected, blue, depressed, despairing, grieved, helpless, miserable
or sorrowful. Fear is the negative emotion of foreseen danger or of psychological or physical harm [7][8][9].
Anger is the most dangerous emotion, because angry people may hurt others purposefully. Although anger is commonly
described as a negative emotion, some people report feeling good about their anger; nevertheless it can have harmful
social or physiological consequences, especially when it is not managed [11]. Surprise is neither positive nor negative [9];
it is the briefest emotion, triggered by unexpected events when there is no time to think about the event [10].
Disgust is a feeling of dislike and the emotion of avoiding anything that makes one sick [9]; it usually involves
getting-rid-of and getting-away-from responses. Recently, a real-time emotion recognition system deployed on a
Microsoft Windows desktop was proposed that works on still images of the face as well as in a real-time environment for
feature extraction and emotion recognition [12]. For an accurate and high-speed emotion detection system, the edges of
the image are detected and the edge distance between various features is calculated using the Euclidean distance formula.
This edge distance differs for every image, and emotions are classified on the basis of these distances [13].
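
A minimal sketch of this distance-based idea is given below, with details assumed where [13] does not specify them: edges are found with a Canny detector, and the feature vector is the set of pairwise Euclidean distances between a few facial feature points (for example eye corners and mouth corners).

# Sketch of distance-based features in the spirit of [13]; the choice of
# Canny edges and of feature points is an assumption, not the published method.
import itertools
import cv2
import numpy as np

def edge_map(gray_face):
    # Edge detection on the normalized face image.
    return cv2.Canny(gray_face, 100, 200)

def distance_features(points):
    # points: (x, y) locations of selected features, e.g. eye and mouth corners.
    # Returns all pairwise Euclidean distances as the feature vector.
    pts = np.asarray(points, dtype=np.float64)
    return np.array([np.linalg.norm(p - q)
                     for p, q in itertools.combinations(pts, 2)])

# Example: the eye-to-mouth-corner distances shrink and the mouth widens when
# a neutral face changes to a smile, which is what a classifier can exploit.
neutral = distance_features([(30, 40), (70, 40), (38, 80), (62, 80)])
smiling = distance_features([(30, 40), (70, 40), (32, 78), (68, 78)])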

III. TECHNIQUES USED FOR FACIAL EXPRESSION RECOGNITION


This section provides an overview and comparison of various techniques that can be used for facial expression
recognition. Principal Component Analysis (PCA) is a technique that reduces the dimensionality of the image and
provides effective face indexing and retrieval; it is also known as the eigenface approach [14]. PCA uses a linear
projection that maximizes the scatter of the projected samples [15], and imaging conditions such as lighting and
viewpoint should not vary much if good performance is to be obtained. Fisher's Linear Discriminant (FLD) is another
approach; it reduces the within-class scatter of the projected samples and performs better than PCA [15]. Independent
Component Analysis (ICA) produces statistically independent basis vectors, while both PCA and LDA produce spatially
global feature vectors [16]. ICA gives better performance than PCA but is computationally more expensive.
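
As a quick illustration (not tied to any particular paper above), the eigenface computation can be sketched in a few lines of Python with NumPy: the training faces are flattened and mean-centred, and the top principal axes obtained from an SVD give the low-dimensional representation used for indexing and matching.

# Minimal eigenface (PCA) sketch; X is (n_images, n_pixels), k is the
# number of principal components kept.
import numpy as np

def eigenfaces(X, k):
    mean_face = X.mean(axis=0)
    centered = X - mean_face
    # Right singular vectors of the centered data are the principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:k]                  # (k, n_pixels) "eigenfaces"
    weights = centered @ components.T    # k-dimensional code per training face
    return mean_face, components, weights

def project(face, mean_face, components):
    # A new face is indexed/matched by its k PCA coefficients.
    return (face - mean_face) @ components.T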
All of the above methods are 1-dimensional in nature, so 2-dimensional Principal Component Analysis (2DPCA) was
introduced [17]. In 2DPCA a 2-D image matrix is used rather than a 1-D vector. It needs more coefficients for image
representation, so the storage requirement of 2DPCA is much greater than that of PCA. Since all of the above techniques
work only on grayscale images, approaches that can handle color images are also required.
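
A sketch of the 2DPCA computation described in [17] is shown below; it is a simplified reading of the method with illustrative variable names. The image scatter matrix is built directly from the 2-D image matrices, and each face is then represented by a height-by-d feature matrix, which is why more coefficients are stored than in classical PCA.

# Simplified 2DPCA sketch; images has shape (n_images, height, width),
# d is the number of projection vectors kept.
import numpy as np

def two_d_pca(images, d):
    mean_img = images.mean(axis=0)
    centered = images - mean_img
    # Image scatter matrix G (width x width), averaged over the training set.
    G = np.einsum('nij,nik->jk', centered, centered) / len(images)
    eigvals, eigvecs = np.linalg.eigh(G)
    X = eigvecs[:, np.argsort(eigvals)[::-1][:d]]    # top-d eigenvectors
    # Each image becomes a (height x d) feature matrix, i.e. more stored
    # coefficients than the k numbers per image kept by classical PCA.
    features = centered @ X
    return mean_img, X, features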
The Global Eigen Approach [18] and Sub-pattern Extended 2-Dimensional Principal Component Analysis (SpE2DPCA) [19]
were therefore introduced for color spaces. The Global Eigen Approach uses the color information present in the images
rather than only the luminance information used by PCA and LDA; the YUV color space gives a higher recognition rate
than the RGB color space. SpE2DPCA also works with color images, and its recognition rate is higher than that of PCA,
2DPCA and E2DPCA. Multilinear Image Analysis (MIA) uses the tensor concept and multilinear algebra, and was
introduced to cope with different lighting conditions and other distractions [21]. The recognition rate of MIA is greater
than that of PCA, but color information is not included in it. Color Subspace Linear Discriminant Analysis also uses the
tensor concept but can work with color spaces: a 3-D color tensor is used to produce a color LDA subspace, which
improves recognition efficiency [20]. The Gabor filter bank is another technique that gives better performance, in terms
of recognition rate, than the other methods [22], but it has a major limitation: the maximum bandwidth of the filters is limited.
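
A Gabor filter bank of the kind used in [22] can be sketched as follows; the kernel size, scale and orientation settings here are illustrative choices, not the parameters of the cited work.

# Illustrative Gabor filter bank; parameters are assumptions, not those of [22].
import cv2
import numpy as np

def gabor_bank(ksize=31, sigma=4.0, lambd=10.0, gamma=0.5, n_orientations=8):
    kernels = []
    for o in range(n_orientations):
        theta = o * np.pi / n_orientations
        kernels.append(cv2.getGaborKernel((ksize, ksize), sigma, theta,
                                          lambd, gamma, psi=0))
    return kernels

def gabor_features(gray_face, kernels):
    # Filter the face with every kernel and keep simple response statistics.
    feats = []
    for k in kernels:
        resp = cv2.filter2D(gray_face.astype(np.float32), cv2.CV_32F, k)
        feats.extend([resp.mean(), resp.std()])
    return np.array(feats)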

IV. PROBLEMS
We can recognize human emotions from facial expressions without any effort or delay, but reliable facial expression
recognition by a computer interface is still a challenge. An ideal emotion detection system should recognize expressions
regardless of gender, age and ethnicity. Such a system should also be invariant to distractions like glasses, different hair
styles, moustaches, facial hair and different lighting conditions. It should be able to reconstruct the whole face when parts
of it are hidden by these distractions, and it should perform good facial expression analysis despite large changes in
viewing conditions and rigid head movement [23].
Achieving optimal feature extraction and classification is a key challenge in this field because of the huge variability in
the input data [24]. To obtain better recognition rates, most current facial expression recognition methods require some
control over imaging conditions, such as the position and orientation of the face with respect to the camera, since these
factors can produce a wide variability of image views. More research is needed on transformation-invariant expression
recognition.

V. CONCLUSION
In this paper automatic facial expression recognition systems and the associated research challenges have been reviewed.
These systems basically involve face detection, feature extraction and categorization, and various techniques can be used
to improve the recognition rate; techniques with a higher recognition rate give greater performance. These approaches
provide a practical solution to the problem of facial expression recognition and work well in constrained environments.
Emotion detection using facial expressions is a universal issue and causes difficulties because of the uncertain physical
and psychological characteristics of emotions, which are linked to the traits of each individual. Research in this field will
therefore remain under continuous study for many years to come, since many problems have to be solved in order to
create an ideal user interface, and improved recognition of complex emotional states is required.

REFERENCES
[1] A. Mehrabian, “Communication without words”, Psychology Today, vol. 2, no. 4, pp. 53-56, 1968.
[2] G. Donato, M.S. Bartlett, J.C. Hager, P. Ekman, T.J. Sejnowski, “Classifying Facial Actions”, IEEE Trans.
Pattern Analysis and Machine Intelligence, Vol. 21, No. 10, pp. 974-989, 1999
[3] A. van Dam, “Beyond WIMP”, IEEE Computer Graphics and Applications, Vol. 20, No. 1, pp. 50-51, 2000
[4] V. Bruce, “What the Human Face Tells the Human Mind: Some Challenges for the Robot-Human Interface”, Proc.
IEEE Int. Workshop Robot and Human Communication, pp. 44-51, 1992
[5] I.A. Essa, A.P. Pentland, “Coding, Analysis, Interpretation, and Recognition of Facial Expressions”, IEEE Trans.
Pattern Analysis and Machine Intelligence, Vol. 19, No. 7, pp. 757-763, 1997
[6] T. Kanade, J.F. Cohn, Y. Tian, “Comprehensive Database for Facial Expression Analysis”, Proc. 4th IEEE
Int.Conf. on Automatic Face and Gesture Recognition , pp. 46–53, 2000
[7] K. Oatley and J.M. Jenkins, “Understanding Emotions”, Cambridge, Massachusetts: Blackwell Publishers Inc.,
pp. 259-260, 1996.
[8] P. Ekman, “An argument for basic emotions”, UK: Lawrence Erlbaum Associates Ltd., pp. 169-200, 1992.
[9] P. Ekman, “Emotions Revealed: Recognizing Faces and Feelings to Improve Communication and Emotional Life”,
New York: Times Books, pp. 171-173, 2003.
[10] K.H. Teigen and G. Keren, “When are successes more surprising than failures?”, Cognition and Emotion, vol. 16,
pp. 245-268, 2002.
[11] R.S. Lazarus, “Emotion and Adaptation”, USA: Oxford University Press, Inc., p. 5, 1991.
[12] P. M. Chavan, M. C. Jadhav, J. B. Mashruwala, A. K. Nehete, Pooja A. Panjari, “Real Time Emotion Recognition
through Facial Expressions for Desktop Devices”, International Journal of Emerging Science and Engineering,
Vol. 1, No. 7, May 2013.
[13] Neha Gupta, Prof. Navneet Kaur, “Design and Implementation of Emotion Recognition System by Using Matlab”,
International Journal of Engineering Research and Applications, Vol. 3, Issue 4, pp. 2002-2006, Jul-Aug 2013.
[14] L. Sirovich and M. Kirby, “Low Dimensional Procedure for Characterization of Human Faces,” J. Optical Soc.
Am., vol. 4, pp. 519-524, 1987.
[15] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, “Eigenfaces vs. Fisherfaces: recognition using class specific
linear projection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711–720,
Jul. 1997.
[16] B.A. Draper, K. Baek, M.S. Bartlett, J.R. Beveridge, “Recognizing Faces with PCA and ICA,” Computer Vision
and Image Understanding: special issue on face recognition, in press.
[17] J. Yang, D. Zhang, A. F. Frangi, and J. Y. Yang, “Two-dimensional PCA: A new approach to appearance-based
face representation and recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26,
no. 1, pp. 131–137, 2004.
[18] L.Torres, J. Reutter, and L. Lorente, “The importance of the color information in face recognition,” in Proceedings
IEEE International Conference on Image Processing, vol. 3, pp. 627–631, 1999.
[19] S. Chen, Y. Sun and B. Yin, “A Novel Hybrid Approach Based on Sub-pattern Technique and Extended 2DPCA
for Color Face Recognition”, 11th IEEE International Symposium on Multimedia, pp. 630-634, 2009.
[20] M. Thomas, C. Kambhamettu and S. Kumar, “Face recognition using a color subspace LDA approach”,
Proceedings International Conference on Tools with Artificial Intelligence, 2008.
[21] M. A. O. Vasilescu and D. Terzopoulos, “Multilinear image analysis for facial recognition,” in Proc. Int Conf.
Pattern Recognit., Quebec City, QC, Canada, pp. 511–514, Aug. 2002.
[22] T. Barbu, “Gabor filter-based face recognition technique”, Proceedings of the Romanian Academy, vol. 11, no. 3,
pp. 277–283, 2010.
[23] D. Arumugam, S. Purushothaman, “Emotion Classification Using Facial Expression”, International Journal of
advanced Computer Science and Applications, Vol. 2, No. 7, pp. 92-98, 2011.
[24] A.K. Jain, R.P.W. Duin, J Mao, "Statistical Pattern Recognition: A Review", IEEE Trans. Pattern Analysis and
Machine Intelligence, Vol. 22, No. 1, pp. 4-37, 2000.
