Recognition of Sentiment Using Deep Neural Network

Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-7 | Issue-1, February 2023. PDF: https://www.ijtsrd.com/papers/ijtsrd52797.pdf. Paper page: https://www.ijtsrd.com/computer-science/artificial-intelligence/52797/recognition-of-sentiment-using-deep-neural-network/amit-yadav

International Journal of Trend in Scientific Research and Development (IJTSRD)

Volume 7 Issue 1, January-February 2023 Available Online: www.ijtsrd.com e-ISSN: 2456 – 6470

Recognition of Sentiment using Deep Neural Network


Amit Yadav1, Anand Gupta2, Ms. Aarushi Thusu3
1,2Student, Department of AI, 3Assistant Professor, Department of AIML,
1,2,3Noida Institute of Engineering and Technology, Greater Noida, Uttar Pradesh, India

ABSTRACT
Emotion is one of the most essential details in predicting human nature and understanding human behaviour. Recognizing emotion is an easy task for a human being, but not for a computer, and so research is being conducted to predict behaviour correctly with higher precision and accuracy. This paper demonstrates real-time facial emotion recognition into one of seven categories of emotion: angry, disgust, fear, happy, neutral, sad and surprise. We use a simple 4-layer Convolution Neural Network (CNN). We have also implemented various filters and pre-processing steps to remove noise, and have taken care of over-fitting. We have tried to improve the accuracy of the model by applying various filters and optimizing the data for feature extraction to obtain accurate predictions. The dataset used for testing and training is FER2013, and the proposed trained model gives an accuracy of about 73%.

KEYWORDS: Emotion Recognition, Convolution Neural Network (CNN), pre-processing, over-fitting, optimization, feature extraction

How to cite this paper: Amit Yadav | Anand Gupta | Ms. Aarushi Thusu "Recognition of Sentiment using Deep Neural Network" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-7 | Issue-1, February 2023, pp.896-903, URL: www.ijtsrd.com/papers/ijtsrd52797.pdf (IJTSRD52797)

Copyright © 2023 by author(s) and International Journal of Trend in Scientific Research and Development Journal. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0) (http://creativecommons.org/licenses/by/4.0)

I. INTRODUCTION
Emotion constitutes an important part of human behaviour. Understanding human behaviour and predicting it can revolutionize the business models of our society, and the ability to understand emotion plays a role in understanding non-verbal communication. The uses of emotion recognition are limitless: think of a case where a seller can easily know whether or not a consumer liked a product, and by how much. This has massive market potential waiting to be discovered, along with huge potential in security, robotics, surveillance, marketing, industry and more.

It is very easy for a human to recognize another person's emotion by looking at their face; the mind does the work automatically. The same is not true for a machine, which needs to perform numerous calculations, run numerous algorithms and optimize numerous data sets to train a model.

In recent years scientists have developed various algorithms such as K-nearest neighbour (KNN), Decision Tree (DT), Probabilistic Neural Network (PNN), Random Forest, Support Vector Machine (SVM) and Convolution Neural Network (CNN).

In this paper we use four convolution layers with the Rectified Linear Unit (ReLU) as the activation function. The proposed model undergoes a series of pre-processing, feature detection and feature extraction steps using techniques such as Haar Cascade, and we have handled over-fitting of the model by applying dropout after each layer.

@ IJTSRD | Unique Paper ID – IJTSRD52797 | Volume – 7 | Issue – 1 | January-February 2023 Page 896
This paper proposes a CNN architecture because it has shown better results than other algorithms in the area of emotion recognition, with greater accuracy and precision. The proposed model is trained on the FER2013 data set and reports, for each face, the best-scoring of the seven emotion expressions as its detected result.

II. LITERATURE SURVEY
Various deep learning and machine learning algorithms are being applied, and numerous groups are working on different approaches.[36][37][38][39]

| Member Name | Date & Year | Problem Description | Possible Solution | Reference |
|---|---|---|---|---|
| Jiaxing Li, Dexiang Zhang, Jingjing Zhang, Jun Zhang, Teng Li, Yi Xia, Qing Yan, Lina Xun | 2017 | Facial expression recognition | Faster R-CNN (Faster Regions with Convolution Neural Networks Features) | School of Electrical Engineering and Automation, Anhui University, Hefei 230601, China |
| Xuan Liu, Junbao Li, Cong Hu, Jeng-Shyang Pan | 2017 | Age and gender classification with facial image | D-CNN (Deep Convolutional Neural Networks) | Harbin Institute of Technology, Harbin 150080, China; Fujian University of Technology, Fuzhou 350108, China |
| Raghav Puri, Mohit Tiwari, Archit Gupta, Nitish Pathak, Manas Sikri, Shivendra Goel | 14th-16th March, 2018 | Emotion detection using image processing | Python (version 2.7) with the Open Source Computer Vision Library (OpenCV) and numpy | Electronics & Communication Engineering, Bharati Vidyapeeth's College of Engineering, New Delhi, India |
| Liu Hui, Song Yu-jie | 2018 | Research on face recognition algorithms | F-CNN (Fisher Convolutional Neural Networks), P-SVM (Profile Support Vector Machine) | College of Information Science and Engineering, Wuhan University of Science and Technology, Wuhan, China |
| Christopher Pramerdorfer, Martin Kampel | 2013 | Facial expression recognition | Convolutional Neural Networks | Computer Vision Lab, TU Wien, Vienna, Austria |
| Huibai Wang, Siyang Hou | 2020 | Facial expression recognition | Fusion of CNN and SIFT features | College of Information Science and Technology, North China University of Technology, Beijing, China |
| Chen Jia, Chu Li Li, Zhou Ying | 2020 | Facial expression recognition | Ensemble learning of CNNs | School of Electronics and Information Engineering, Liaoning University of Technology, JinZhou, China |
III. DATA SET
The data set used for training is FER2013, an open-source dataset containing 35,887 48x48-pixel grayscale images of different emotions, divided into seven categories: angry, disgust, fear, joy (happy), neutral, sad and surprise. The CSV file contains two columns: the first holds the emotion label from 0-6, and the second holds a string surrounded by quotes containing the pixel values of the image.
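The CSV layout described above can be decoded with a few lines of NumPy. This is a minimal sketch, not the paper's own code: `parse_fer_row` is a hypothetical helper name, and the tiny 2x2 example row is made up for illustration (real rows carry 48*48 = 2304 pixel values).

```python
import numpy as np

def parse_fer_row(emotion_field, pixel_field, size=48):
    """Decode one FER2013 CSV row: an integer emotion label (0-6) and a
    quoted string of space-separated grayscale pixel values."""
    label = int(emotion_field)
    pixels = np.array(pixel_field.split(), dtype=np.uint8)
    return label, pixels.reshape(size, size)

# Illustrative 2x2 "image" standing in for a real 48x48 row.
label, img = parse_fer_row("3", "0 255 128 64", size=2)
```

The reshape to `(size, size)` recovers the square image that the flat pixel string encodes row by row.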


Fig. 1. Sample taken from the FER2013 data set

 The FER2013 dataset is divided into two directories, i.e., 1. train, 2. test
 Each of them consists of seven sub-directories, one per emotion category
 Each sub-directory contains images of a specific expression taken from various sources
IV. METHODOLOGY
We now discuss the various methods we have used for predicting emotion. We went through several steps to extract data and find faces, and then ran them through the trained model, which is based on the CNN architecture.[12]
A. Face and Feature Detection
This is one of the early stages of image processing: we break the video into frames and then process them image by image, trying to detect a face; if multiple faces are found, it works on those as well. Before face and feature extraction we resize each frame, convert the image to 48x48 pixels and convert it to grayscale. We then use OpenCV for face detection.[9][10][11][13]
| Member Name | Date & Year | Problem Description | Possible Solution | Reference |
|---|---|---|---|---|
| Lutfiah Zahara, Purnawarman Musa, Eri Prasetyo Wibowo, Irwan Karim, Saiful Bahri Musa | 30th May, 2021 | Facial Emotion Recognition (FER-2013) dataset for a prediction system of micro-expressions of the face | Convolutional Neural Network (CNN) algorithm on a Raspberry Pi | Department of Computer Science, Gunadarma University, Depok, Indonesia |
| Xuefeng Liu, Qiaoqiao Sun, Yue Meng, Congcong Wang, Min Fu | 25th-27th May, 2018 | Feature extraction and classification of hyperspectral images | 3D-CNN (3D-Convolution Neural Network) | College of Automation & Electronic Engineering, Qingdao University of Science and Technology, Qingdao, China |
| Yi Dian, Shi Xiaohong, Xu Hao | 2018 | Dropout method of face recognition | Deep Convolution Neural Network | Shanghai Maritime University, Shanghai, China, [email protected] |
| Gu Shengtao, Xu Chao, Feng Bo | 2019 | Facial expression recognition | Global and local feature fusion with CNNs | School of Electronics and Information Engineering, AnHui University |
| Kewen Yan, Shaohui Huang, Yaoxian Song, Wei Liu, Neng Fan | 26th-28th July, 2017 | Face recognition | Convolution Neural Network | School of Automation, Hangzhou Dianzi University, HangZhou 310018 |
| Ahmed Ali Mohammed Al-Saffar, Hai Tao, Mohammed Ahmed Talab | 2017 | Image classification | Deep Convolution Neural Network | Faculty of Computer Systems and Software Engineering, University Malaysia Pahang, Pahang, Malaysia |
| George-Cosmin Porușniuc, Florin Leon, Radu Timofte, Casian Miron | 21st-23rd November, 2019 | Architectures for facial expression recognition | CNNs (Convolutional Neural Networks) | "Gheorghe Asachi" Technical University, Iași, Romania; University of Eastern Finland, Joensuu, Finland; ETH Zurich, Zurich, Switzerland |
For this, we use the Haar Cascade classifier from OpenCV, proposed by Paul Viola and Michael Jones in 2001. The classifier is quite effective and works flawlessly.[5][6][7][8]
B. Feature Extraction
In this step, up to eight critical components of the face are extracted: the eyebrows, eyes, nose, chin, mouth and jaw are cropped out, then used and optimized for greater precision. The extracted data is saved in numpy format. For example, in Fig. 2 the green part of the face is extracted, cropped and stored in numpy format, and later passed to the ANN layers for extraction and processing.[20][24][26][27][29]
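Storing an extracted region in NumPy's `.npy` format, as described, might look like the following. A sketch under stated assumptions: `save_region`, the bounding box and the file path are illustrative, not the paper's actual code.

```python
import os
import tempfile
import numpy as np

def save_region(gray_img, box, path):
    """Crop a rectangular facial region (x, y, w, h) out of a grayscale
    image and persist it in NumPy's binary .npy format."""
    x, y, w, h = box
    region = gray_img[y:y + h, x:x + w]
    np.save(path, region)
    return region

# Illustrative 10x10 "image"; the box stands in for, say, an eye region.
img = np.arange(100, dtype=np.uint8).reshape(10, 10)
path = os.path.join(tempfile.mkdtemp(), "region.npy")
region = save_region(img, (2, 3, 4, 5), path)
```

Saving each component as a separate array lets the later network stages load exactly the regions they need with `np.load`.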
C. CNN architecture
The next step is to build the model, and for that we use a CNN. In deep learning, a convolutional neural network (CNN, or ConvNet) is a class of artificial neural network most commonly applied to analysing visual imagery. A convolutional neural network consists of multiple building blocks, including convolution layers, pooling layers and fully connected layers, and is designed to learn spatial hierarchies of features automatically and adaptively via a backpropagation algorithm. It was first proposed by the scientist Yann LeCun, who was inspired by the way humans perceive their surroundings [31][32][35] and understand them. CNNs have proved to have great success in the research area of Facial Emotion Recognition (FER) because they can perform feature extraction and image classification simultaneously with high precision, making them an ideal methodology for image classification.[14][15][16][17][18][21]
V. IMPLEMENTATION
We have trained the model on the train split of FER2013, 28,709 images in number, and for testing purposes we have reserved the 7,178 pictures in FER2013's test sub-folder. All images are 48x48-pixel grayscale PNG files.[19][22][23][25][27]
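The train/test directory layout described above can be indexed with the standard library alone. A minimal sketch assuming the seven sub-folder names listed earlier; `index_split` is a hypothetical helper, not part of the paper's code.

```python
from pathlib import Path

# Alphabetical FER2013 emotion folders; the index doubles as the label.
EMOTIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

def index_split(root):
    """Return (path, label) pairs for every PNG under root/<emotion>/,
    mirroring the FER2013 train/test folder layout."""
    samples = []
    for label, emotion in enumerate(EMOTIONS):
        folder = Path(root, emotion)
        if folder.is_dir():
            for png in sorted(folder.glob("*.png")):
                samples.append((str(png), label))
    return samples
```

The resulting list can be fed to any loader; shuffling and batching would happen downstream.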

There are two steps in this proposed model. The first part involves processing the image and extracting the faces using the Haar cascade, as shown in Fig. 3; the image is scaled down to 48x48 pixels and converted to grayscale. It is then passed to the CNN architecture, which is our second module.
Our CNN architecture consists of five convolution layers and uses ReLU as the activation function. The layers use 1, 32, 64, 128 and 128 filters respectively, each with a 3x3 kernel matrix. Each convolution computes its dot products and hands the result to max-pooling, which downsamples the feature map; then, to manage over-fitting of the model, a dropout of 0.25 is applied, followed by another max-pooling stage. This process is repeated, and finally all layers are flattened and a hidden dense layer of 1024 nodes is created. A dropout of 0.50 is applied, and an output layer with softmax as the activation function classifies the photo into the seven categories. This network of convolution layers and many neurons produces an accuracy of 63% on this data set. The above implementation is illustrated in Fig. 4, which shows the input and output of the various layers along with the batch size, i.e., how many images are processed at a given time, as well as the output layer. The model was tested at various epochs, and we found that accuracy stops increasing after about 20 epochs, as shown in the graphs.[40]
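The layer sizes in the description above can be checked with the standard convolution output-size formula, floor((n + 2p - k)/s) + 1. The trace below is a sketch under stated assumptions ('valid' padding and a 2x2 max-pool after each of four conv blocks), since the text does not pin down the exact arrangement; the resulting flatten size is therefore illustrative, not a figure from the paper.

```python
def conv2d_out(size, kernel=3, stride=1, pad=0):
    """Spatial output size of a convolution: floor((n + 2p - k) / s) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

def maxpool_out(size, pool=2):
    """Output size after non-overlapping pool x pool max pooling."""
    return size // pool

# Trace a 48x48 grayscale input through four conv(3x3) + maxpool(2x2)
# blocks with the filter counts mentioned in the text.
size, filters = 48, [32, 64, 128, 128]
for f in filters:
    size = maxpool_out(conv2d_out(size))
flat = size * size * filters[-1]  # units feeding the 1024-node dense layer
```

Under these assumptions the spatial size shrinks 48 -> 23 -> 10 -> 4 -> 1, so the flattened vector entering the dense layer has 1 * 1 * 128 = 128 units.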


Fig. 5. Epoch vs accuracy graph for 10 epochs

Fig. 6. Epoch vs accuracy graph for 100 epochs


VI. CONCLUSION
We have implemented the CNN and various pre-processing algorithms and reached an efficiency and accuracy of more than 63 percent on the FER2013 dataset, which is in itself difficult, and we can still try to improve and adjust the pipeline to achieve better precision. For testing purposes, we took 100 images at random from each expression's test sub-folder and passed each image through the predicting model, incrementing an accuracy counter whenever the model predicted correctly. After running the experiment on 700 images taken randomly and evenly from the different categories, we correctly predicted 443 out of 700 images, which gives an accuracy of 63.2%.

VII. REFERENCES
[1] S. Jacobs and C. P. Bean, "Fine particles, thin films and exchange anisotropy," in Magnetism, vol. III, G. T. Rado and H. Suhl, Eds. New York: Academic, 1963, pp. 271–350.
[2] R. Nicole, "Title of paper with only first word capitalized," J. Name Stand. Abbrev., in press.
[3] Y. Yorozu, M. Hirano, K. Oka, and Y. Tagawa, "Electron spectroscopy studies on magneto-optical media and plastic substrate interface," IEEE Transl. J. Magn. Japan, vol. 2, pp. 740–741, August 1987 [Digests 9th Annual Conf. Magnetics Japan, p. 301, 1982].
[4] M. Young, The Technical Writer's Handbook. Mill Valley, CA: University Science, 1989.
[5] Syaffeza A. R., Khalil-Hani M., Liew S. S., et al., "Convolutional neural network for face recognition with pose and illumination variation," International Journal of Engineering & Technology, 2014, 6(1): 44-57.
[6] Toshev A., Szegedy C., "DeepPose: Human pose estimation via deep neural networks," 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Los Alamitos: IEEE, 2014: 1653-1660.

[7] Lawrence S., Giles C. L., Tsoi A. C., et al., "Face recognition: a convolutional neural-network approach," IEEE Transactions on Neural Networks, 1997, 8(1): 98.
[8] Răzvan-Daniel Albu, "Human Face Recognition Using Convolutional Neural Networks," Journal of Electrical & Electronics Engineering, 2009, 2(2): 110.
[9] Chen L., Guo X., Geng C., "Human face recognition based on adaptive deep Convolution Neural Network," Chinese Control Conference, 2016: 6967-6970.
[10] Moon H. M., Chang H. S., Pan S. B., "A face recognition system based on convolution neural network using multiple distance face," Soft Computing, 2016: 1-8.
[11] Krizhevsky A., Sutskever I., Hinton G. E., "ImageNet classification with deep convolutional neural networks," Advances in Neural Information Processing Systems, 2012: 1097-1105.
[12] Goodfellow, I. J., Erhan, D., Carrier, P. L., Courville, A., Mirza, M., Hamner, B., Zhou, Y.: Challenges in representation learning: a report on three machine learning contests. In: Lee, M., Hirose, A., Hou, Z. G., Kil, R. M. (eds.) Neural Information Processing, ICONIP 2013. Lecture Notes in Computer Science, vol. 8228, Springer, Berlin, Heidelberg (2013).
[13] Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556 (2014).
[14] Wan, W., Yang, C., Li, Y.: Facial Expression Recognition Using Convolutional Neural Network. A Case Study of the Relationship Between Dataset Characteristics and Network Performance. Stanford University Reports, Stanford (2016).
[15] Liu, K., Zhang, M., Pan, Z.: Facial expression recognition with CNN ensemble. In: International Conference on Cyberworlds, IEEE, pp. 163–166 (2016).
[16] Shin, M., Kim, M., Kwon, D.-S.: Baseline CNN structure analysis for facial expression recognition. In: 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). IEEE (2016).
[17] Li, S., Deng, W.: Deep Facial Expression Recognition: A Survey. arXiv:1804.08348 (2018).
[18] Ruder, S.: An Overview of Gradient Descent Optimization Algorithms. arXiv:1609.04747 (2016).
[19] Sang, D. V., Dat, N. V., Thuan, D. P.: Facial expression recognition using deep convolutional neural networks. In: 9th International Conference on Knowledge and Systems Engineering (KSE) (2017).
[20] Liu, C., Wechsler, H.: Gabor Feature Based Classification Using the Enhanced Fisher Linear Discriminant Model for Face Recognition. IEEE Trans. Image Process. 11(4), 467–476 (2002).
[21] Girshick, Ross, "Fast R-CNN," Computer Science (2015).
[22] A. Agrawal, Y. N. Singh, "An efficient approach for face recognition in uncontrolled environment," Multimedia Tools and Applications 76(8): 1-10, 2017.
[23] P. J. Phillips, J. R. Beveridge, B. A. Draper, G. Givens, A. J. O'Toole, D. S. Bolme, J. Dunlop, Y. M. Lui, H. Sahibzada, and S. Weimer, "An introduction to the good, the bad, & the ugly face recognition challenge problem," in 2011 IEEE International Conference on Automatic Face & Gesture Recognition and Workshops (FG). IEEE, pp. 346–353, 2011.
[24] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf, "DeepFace: Closing the gap to human-level performance in face verification," IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, pp. 1701–1708, 2014.
[25] K. O'Shea and R. Nash, "An Introduction to Convolutional Neural Networks," arXiv:1511.08458v2, 2015.
[26] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proc. NIPS, 2012.
[27] A. G. Howard, "Some Improvements on Deep Convolutional Neural Network Based Image Classification," arXiv:1312.5402, 2013.
[28] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," in ICLR, 2015.
[29] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going deeper with convolutions," CVPR, 2015.
[30] K. He, X. Zhang, S. Ren, and J. Sun, "Deep Residual Learning for Image Recognition," in CVPR, 2016.
[31] G. B. Huang, H. Lee, and E. Learned-Miller, "Learning hierarchical representations for face verification with convolutional deep belief networks," in Proc. of Computer Vision and Pattern Recognition (CVPR), 2012.
[32] S. Yi, W. Xiaogang, T. Xiaoou, "Hybrid Deep Learning for Face Verification," ICCV 2013.
[33] Y. Sun, X. Wang, and X. Tang, "Deep learning face representation from predicting 10,000 classes," in Proc. CVPR, 2014.
[34] M. D. Zeiler and R. Fergus, "Visualizing and understanding convolutional neural networks," in ECCV, 2014.
[35] Y. Sun, X. Wang, and X. Tang, "Deep learning face representation by joint identification-verification," Technical report, arXiv:1406.4773, 2014.
[36] Y. Sun, D. Liang, X. Wang, and X. Tang, "DeepID3: Face Recognition with very Deep Neural Networks," arXiv:1502.00873v1, 2015.
[37] The Database of face94, face95 and face96, D. L. Spacek, "Face recognition data," University of Essex, UK. Computer Vision Science Research Projects, 2012.
[38] The Database of Grimace, D. L. Spacek, "Face recognition data," University of Essex, UK. Computer Vision Science Research Projects, 2007.
[39] K. Zhang, M. Sun, Tony X. Han, X. Yuan, L. Guo, and T. Liu, "Residual Networks of Residual Networks: Multilevel Residual Networks," IEEE 2016.
[40] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going Deeper with Convolutions," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.

VIII. BIBLIOGRAPHY
[1] https://docs.python.org/2/library/glob.html
[2] https://opencv.org/
[3] http://docs.python.org/3.4/library/random.html
[4] https://www.tutorialspoint.com/dip/
[5] https://pshychmnemonics.wordpress.com/2015/07/03/primary-emtions
[6] https://docs.scipy.org/doc/numpy-dev/user/quickstart.html
[7] https://github.com/warriorwizard/Facial_expression_recognitioin_facial_expression/blob/main/20211103-015517_train.png
[8] https://www.engineersgarage.com/articles/image-processing-tutorialapplications
[9] https://github.com/ayushrag1/opencv
