Recognition of Sentiment Using Deep Neural Network
Volume 7 Issue 1, January-February 2023 Available Online: www.ijtsrd.com e-ISSN: 2456 – 6470
I. INTRODUCTION
Emotion constitutes an important part of human behaviour. Understanding human behaviour and predicting it can revolutionize the business models of our society, and the ability to understand emotion plays a role in understanding non-verbal communication. The uses of emotion recognition are limitless: consider a case where a seller can easily know whether or not a consumer liked a product, and by how much. This is a massive market potential waiting to be discovered. The technology also has huge potential in security, robotics, surveillance, marketing, industry and more.

It is very easy for a human to recognize another person's emotion by looking at their face; the brain does the work automatically. This is not the case for a machine, which needs to perform numerous calculations, run several algorithms and optimize over large data sets to train a model. In recent years, scientists have developed various algorithms for this task, such as K-Nearest Neighbours (KNN), Decision Tree (DT), Probabilistic Neural Network (PNN), Random Forest, Support Vector Machine (SVM) and Convolutional Neural Network (CNN).
@ IJTSRD | Unique Paper ID – IJTSRD52797 | Volume – 7 | Issue – 1 | January-February 2023 Page 896
International Journal of Trend in Scientific Research and Development @ www.ijtsrd.com eISSN: 2456-6470
This paper proposes a CNN architecture, because CNNs have shown better results, with higher accuracy and precision, than other algorithms in the area of emotion recognition. We use four convolution layers with the Rectified Linear Unit (ReLU) as the activation function. The proposed model applies a series of pre-processing, feature detection and feature extraction steps using techniques such as Haar Cascade, and we guard against over-fitting by applying dropout after every layer. The model is trained on the FER2013 data set and classifies faces into one of seven emotion expressions, as shown in the results.
Year | Authors | Topic | Method | Affiliation
2018 | Yi Dian, Shi Xiaohong, Xu Hao | Dropout Method of Face Recognition | Using a Deep Convolution Neural Network | Shanghai Maritime University, China ([email protected])
2019 | Gu Shengtao, Xu Chao, Feng Bo | Facial expression recognition | Global and Local feature fusion with CNNs | School of Electronics and Information Engineering, AnHui University
26th-28th July, 2017 | Kewen Yan, Shaohui Huang, Yaoxian Song, Wei Liu, Neng Fan | Face Recognition | Using Convolution Neural Network | School of Automation, Hangzhou Dianzi University, HangZhou 310018
2017 | Ahmed Ali Mohammed Al-Saffar, Hai Tao, Mohammed Ahmed Talab | Image Classification | Deep Convolution Neural Network | Faculty of Computer Systems and Software Engineering, University Malaysia Pahang, Pahang, Malaysia
21st-23rd November, 2019 | George-Cosmin Porușniuc, Florin Leon, Radu Timofte, Casian Miron | Architectures for Facial Expression Recognition | CNNs (Convolutional Neural Networks) | "Gheorghe Asachi" Technical University, Iași, Romania; University of Eastern Finland, Joensuu, Finland; ETH Zurich, Zurich, Switzerland
For this, we use the Haar Cascade classifier from OpenCV, which was proposed by Paul Viola and Michael Jones in 2001. The classifier is quite effective and works flawlessly. [5][6][7][8]
B. Feature Extraction
In this step, the eight most critical components of the face are extracted: the eyebrows, eyes, nose, chin, mouth and jaw are cropped out, then used and optimized for greater precision. The extracted data is saved in NumPy format. For example, in Fig. 2 the green part of the face is cropped and stored in NumPy format, and later passed to the ANN layers for extraction and processing. [20][24][26][27][29]
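The crop-and-save step can be sketched in plain NumPy. The bounding-box values and the `crop_region` helper below are illustrative, not taken from the paper.

```python
# Minimal sketch of the feature-extraction step: crop a facial region
# from the image array and save it in NumPy (.npy) format.
import numpy as np

def crop_region(image, x, y, w, h):
    """Crop a (h, w) region whose top-left corner is at (x, y)."""
    return image[y:y + h, x:x + w]

face = np.arange(48 * 48, dtype=np.uint8).reshape(48, 48)  # stand-in face image
mouth = crop_region(face, x=12, y=30, w=24, h=10)          # hypothetical mouth box
np.save("mouth.npy", mouth)                                # stored for the ANN layers
assert np.load("mouth.npy").shape == (10, 24)
```

Saving each component as a `.npy` array keeps the cropped regions in a format the later layers can load directly.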
C. CNN architecture
The next step is to build the model, and for that we use a CNN. In deep learning, a convolutional neural network (CNN, or ConvNet) is a class of artificial neural network most commonly applied to analysing visual imagery. A convolutional neural network consists of multiple building blocks, including convolution layers, pooling layers and fully connected layers, and is designed to learn spatial hierarchies of features automatically and adaptively via the backpropagation algorithm. It was first proposed by Yann LeCun, who was inspired by the way humans perceive their surroundings [31][32][35] and understand them. CNNs have proved highly successful in the research area of Facial Emotion Recognition (FER) because they can perform feature extraction and image classification simultaneously with high precision, making them an ideal methodology for image classification. [14][15][16][17][18][21]
V. IMPLEMENTATION
We trained the model on the training set available in FER2013, which contains 28,709 images; for testing purposes we reserved the 7,178 images in the FER2013 Test sub-folder. All images are 48x48-pixel grayscale PNGs. [19][22][23][25][27]
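Before training, 48x48 grayscale images of this kind are typically scaled to [0, 1] and given a trailing channel axis, the usual input layout for a 2-D convolution layer. A hedged sketch (the `prepare_batch` name and the exact normalisation are ours, not specified by the paper):

```python
# Prepare FER2013-style grayscale images for a convolutional network.
import numpy as np

def prepare_batch(images_uint8):
    """(N, 48, 48) uint8 -> (N, 48, 48, 1) float32 scaled to [0, 1]."""
    batch = images_uint8.astype(np.float32) / 255.0
    return batch[..., np.newaxis]  # add the single grayscale channel axis

train = np.random.randint(0, 256, size=(4, 48, 48), dtype=np.uint8)
x = prepare_batch(train)
assert x.shape == (4, 48, 48, 1) and x.max() <= 1.0
```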
There are two steps in this proposed model. The first involves processing the image and extracting the faces using the Haar Cascade, as shown in Fig. 3; the image is scaled down to 48x48 pixels and converted to grayscale. It is then passed to the CNN architecture, which is our second module.
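The grayscale conversion and downscaling of this first module can be sketched in plain NumPy. We use the ITU-R BT.601 luminosity weights and nearest-neighbour sampling here for illustration; the paper's pipeline (built on OpenCV) differs only in implementation detail, and both function names are ours.

```python
# Sketch of the first module's image normalisation: RGB -> grayscale -> 48x48.
import numpy as np

def to_grayscale(rgb):
    """(H, W, 3) uint8 -> (H, W) float using the ITU-R BT.601 weights."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def resize_nearest(img, out_h=48, out_w=48):
    """Nearest-neighbour resize of a 2-D array to (out_h, out_w)."""
    h, w = img.shape
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source column for each output column
    return img[rows][:, cols]

frame = np.random.randint(0, 256, size=(96, 128, 3), dtype=np.uint8)
small = resize_nearest(to_grayscale(frame))
assert small.shape == (48, 48)
```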
Our CNN architecture consists of five convolution layers and uses ReLU as the activation function. The layers use 1, 32, 64, 128 and 128 filters respectively, each with a 3x3 kernel matrix. Each convolution layer applies its 3x3 kernels as a dot product over the input; the result is handed to max-pooling, which downsamples the feature map, and then, to manage over-fitting of the model, a dropout of 0.25 is applied before the next max-pooling stage. This process is repeated; finally, all layers are flattened and a hidden dense layer of 1024 nodes is created. A dropout of 0.50 is applied, and the output layer classifies the photo into the seven categories. A final dense layer with softmax as the activation function and seven outputs is created on top of the convolution layers; this network produces an accuracy of 63% on this data set. The implementation is illustrated in Fig. 4, which shows the input and output of each layer along with the batch size, i.e., how many images are processed at a given time, and the output layer. The model was later tested at various epochs, and we found that the accuracy stops increasing after about 20 epochs, as shown in the graph. [40]
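The two per-block operations described above, ReLU activation and max-pooling, can be illustrated in pure NumPy. This sketches only the layer arithmetic (a 2x2 pool that halves each spatial dimension, e.g. 48x48 to 24x24); the full model would of course be built and trained with a deep-learning framework.

```python
# Pure-NumPy illustration of one ReLU + 2x2 max-pooling step.
import numpy as np

def relu(x):
    """Zero out negative activations."""
    return np.maximum(x, 0.0)

def max_pool_2x2(x):
    """(H, W) -> (H//2, W//2), keeping the maximum of each 2x2 window."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fmap = np.random.randn(48, 48)      # a stand-in 48x48 feature map
out = max_pool_2x2(relu(fmap))
assert out.shape == (24, 24)        # spatial dimensions halved
assert out.min() >= 0.0             # ReLU removed all negative values
```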
[7] Lawrence S, Giles C L, Tsoi A C, et al. Face recognition: a convolutional neural-network approach [J]. IEEE Transactions on Neural Networks, 1997, 8(1):98.
[8] Răzvan-Daniel Albu. Human Face Recognition Using Convolutional Neural Networks [J]. Journal of Electrical & Electronics Engineering, 2009, 2(2):110.
[9] Chen L, Guo X, Geng C. Human face recognition based on adaptive deep Convolution Neural Network [C]. Chinese Control Conference, 2016:6967-6970.
[10] Moon H M, Chang H S, Pan S B. A face recognition system based on convolution neural network using multiple distance face [J]. Soft Computing, 2016:1-8.
[11] Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks [C]. Advances in Neural Information Processing Systems, 2012:1097-1105.
[12] Goodfellow, I. J., Erhan, D., Carrier, P. L., Courville, A., Mirza, M., Hamner, B., Zhou, Y.: Challenges in representation learning: a report on three machine learning contests. In: Lee, M., Hirose, A., Hou, Z. G., Kil, R. M. (eds.) Neural Information Processing, ICONIP 2013. Lecture Notes in Computer Science, vol. 8228. Springer, Berlin, Heidelberg (2013)
[13] Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556 (2014)
[14] Wan, W., Yang, C., Li, Y.: Facial Expression Recognition Using Convolutional Neural Network. A Case Study of the Relationship Between Dataset Characteristics and Network Performance. Stanford University Reports, Stanford (2016)
[15] Liu, K., Zhang, M., Pan, Z.: Facial expression recognition with CNN ensemble. In: International Conference on Cyberworlds. IEEE, pp. 163–166 (2016)
[16] Shin, M., Kim, M., Kwon, D.-S.: Baseline CNN structure analysis for facial expression recognition. In: 2016 25th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). IEEE (2016)
[17] Li, S., Deng, W.: Deep Facial Expression Recognition: A Survey. arXiv:1804.08348 (2018)
[18] Ruder, S.: An Overview of Gradient Descent Optimization Algorithms. arXiv:1609.04747 (2016)
[19] Sang, D. V., Dat, N. V., Thuan, D. P.: Facial expression recognition using deep convolutional neural networks. In: 9th International Conference on Knowledge and Systems Engineering (KSE) (2017)
[20] Liu, C., Wechsler, H.: Gabor Feature Based Classification Using the Enhanced Fisher Linear Discriminant Model for Face Recognition. IEEE Trans. Image Process. 11, 4, 467–476 (2002)
[21] Girshick, Ross. "Fast R-CNN." Computer Science (2015)
[22] A. Agrawal, Y. N. Singh, "An efficient approach for face recognition in uncontrolled environment" [J]. Multimedia Tools and Applications 76(8):1-10, 2017.
[23] P. J. Phillips, J. R. Beveridge, B. A. Draper, G. Givens, A. J. O'Toole, D. S. Bolme, J. Dunlop, Y. M. Lui, H. Sahibzada, and S. Weimer, "An introduction to the good, the bad, & the ugly face recognition challenge problem," in 2011 IEEE International Conference on Automatic Face & Gesture Recognition and Workshops (FG). IEEE, pp. 346–353, 2011.
[24] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf. "DeepFace: Closing the gap to human-level performance in face verification", IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, pp. 1701–1708, 2014.
[25] K. O'Shea and R. Nash. "An Introduction to Convolutional Neural Networks", arXiv:1511.08458v2, 2015.
[26] A. Krizhevsky, I. Sutskever, and G. E. Hinton. "ImageNet classification with deep convolutional neural networks". In Proc. NIPS, 2012.
[27] A. G. Howard. "Some Improvements on Deep Convolutional Neural Network Based Image Classification", https://arxiv.org/abs/1312.5402, 2013.
[28] K. Simonyan and A. Zisserman. "Very deep convolutional networks for large-scale image recognition". In ICLR, 2015.
[29] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. "Going deeper with convolutions". CVPR, 2015.
[30] K. He, X. Zhang, S. Ren, and J. Sun. "Deep Residual Learning for Image Recognition". In CVPR, 2016.
[31] G. B. Huang, H. Lee, and E. Learned-Miller, "Learning hierarchical representations for face verification with convolutional deep belief networks". In Proc. of Computer Vision and Pattern Recognition (CVPR), 2012.
[32] S. Yi, W. Xiaogang, T. Xiaoou, "Hybrid Deep Learning for Face Verification", ICCV, 2013.
[33] Y. Sun, X. Wang, and X. Tang. "Deep learning face representation from predicting 10,000 classes". In Proc. CVPR, 2014.
[34] M. D. Zeiler and R. Fergus. "Visualizing and understanding convolutional neural networks". In ECCV, 2014.
[35] Y. Sun, X. Wang, and X. Tang. "Deep learning face representation by joint identification-verification". Technical report, arXiv:1406.4773, 2014.
[36] Y. Sun, D. Liang, X. Wang and X. Tang. "DeepID3: Face Recognition with very Deep Neural Networks", arXiv:1502.00873v1, 2015.
[37] The Database of face94, face95 and face96, D. L. Spacek, "Face recognition data," University of Essex, UK. Computer Vision Science Research Projects, 2012.
[38] The Database of Grimace, D. L. Spacek, "Face recognition data," University of Essex, UK. Computer Vision Science Research Projects, 2007.
[39] K. Zhang, M. Sun, Tony X. Han, X. Yuan, L. Guo, and T. Liu, "Residual Networks of Residual Networks: Multilevel Residual Networks", IEEE, 2016.
[40] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, "Going Deeper with Convolutions," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.

VIII. BIBLIOGRAPHY
[1] https://docs.python.org/2/library/glob.html
[2] https://opencv.org/
[3] http://docs.python.org/3.4/library/random.html
[4] https://www.tutorialspoint.com/dip/
[5] https://pshychmnemonics.wordpress.com/2015/07/03/primary-emtions
[6] https://docs.scipy.org/doc/numpy-dev/user/quickstart.html
[7] https://github.com/warriorwizard/Facial_expression_recognitioin_facial_expression/blob/main/20211103-015517_train.png
[8] https://www.engineersgarage.com/articles/image-processing-tutorialapplications
[9] https://github.com/ayushrag1/opencv