Emotion Detection

This document describes a project to classify human facial expressions from images and map them to corresponding emojis or avatars using deep learning. The project uses a CNN model trained on the FER2013 dataset of facial images labeled with seven emotions. The CNN architecture is built and trained to classify emotions, then OpenCV is used to detect faces in webcam footage and feed them to the trained model. The classified emotions are mapped to matching emojis or avatars for a customized emoji generation system based on detected real-time facial expressions.

Uploaded by naina nautiyal

Emotion Detection

LIVE IMAGE TO EMOJI USING DEEP LEARNING


Content:

• Abstract
• Introduction
• Workflow of the project
• CNN Structure
• Feasibility
• Existing System Analysis
• Proposed System
• Code Snippets
• Output Screenshots
• Future Scope
• References
Abstract:

Emojis are small images commonly included in social media text messages. The combination of visual and textual content in the same message forms a modern way of communicating. Emojis and avatars are ways to convey nonverbal cues, and these cues have become essential to online chatting, product reviews, brand sentiment, and much more. This has also led to growing data science research dedicated to emoji-driven storytelling. With advances in computer vision and deep learning, it is now possible to detect human emotions from images. In this deep learning project, we classify human facial expressions and map them to corresponding emojis or avatars.
Introduction:

Nowadays emojis have become a new language that can express an idea or emotion more effectively. This visual language is now a standard for online communication, available not only on Twitter but also on other large platforms such as Facebook, Instagram, WhatsApp, Telegram, and Hike. Because people today commonly communicate with emoticons, we decided to build our own customized emojis. Emotion Detection is software that deals with the creation of emojis or avatars. Neural networks have become an emerging tool in numerous and diverse areas as an example of end-to-end learning. This project implements a convolutional neural network (CNN) and the FER2013 dataset to detect emotions from facial expressions and convert them to personalized emojis: we build a CNN to recognize facial emotions, train it on FER2013, and then map the recognized emotions to corresponding emojis or avatars.
Continued…..

FER2013 (Facial Expression Recognition 2013) contains approximately 35,000 grayscale facial images of different expressions, each of size 48×48, labeled with one of 7 emotions:
0 = Angry
1 = Disgust
2 = Fear
3 = Happy
4 = Sad
5 = Surprise
6 = Neutral
The training set consists of 28,709 examples and the public test set consists of 3,589 examples.
Dataset link: https://www.kaggle.com/datasets/msambare/fer2013
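The seven label indices above can be captured in a small Python mapping that the rest of the pipeline can reuse; a minimal sketch (the dictionary name is our own):

```python
# Mapping from FER2013 label indices to emotion names,
# following the 0-6 ordering listed above.
EMOTION_LABELS = {
    0: "Angry",
    1: "Disgust",
    2: "Fear",
    3: "Happy",
    4: "Sad",
    5: "Surprise",
    6: "Neutral",
}

# Example: translate a model's argmax output into a readable label.
predicted_index = 3
print(EMOTION_LABELS[predicted_index])  # prints "Happy"
```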
Workflow of the project:
CNN Structure:
Feasibility:

• This project creates an emoji or avatar of the user using facial expression recognition.
• We use a CNN, one of the state-of-the-art models for recognizing faces and expressions.
• The project detects salient facial features such as the left eye, right eye, left eyebrow, right eyebrow, nose, and mouth from a face image using the Viola-Jones object detection framework.
• Using OpenCV's Haar cascade XML, we obtain the bounding boxes of the faces in the webcam feed, then feed these boxes to the trained model for classification.
Existing System Analysis:

• The existing system provides only a static way of generating emojis: emojis can be created manually, without reference to the user's actual expression.
• Our proposed system detects the user's emotions and feelings, and the detected emotion is mapped to a suitable avatar or emoji, so this limitation is covered.
• We build a deep learning model to classify facial expressions from images, then map each classified emotion to an emoji or an avatar.
Proposed System:

• In this proposed system, we build a convolutional neural network architecture and train the model on the FER2013 dataset for emotion recognition from images.
• Using OpenCV's Haar cascade XML, we obtain the bounding boxes of the faces in the webcam feed, then feed these boxes to the trained model for classification.
• The labeled raw data given to the model can be stored long-term, which can improve results over time.
• Our proposed system generates emojis: deep learning processes the input images, and the matching avatar or emoji is generated.
Code Snippets:

• 1. Build a CNN architecture.

Here we import all the required libraries needed for our model and then initialize the training and validation generators: we first rescale all the images needed to train our model and then convert them to grayscale.
Continued…..

• Import
Continued………

• Initializing the training and validation generators
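The generator setup described here might look like the following Keras sketch; the directory paths are placeholders for wherever the extracted FER2013 train/test folders live:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Rescale pixel values from [0, 255] to [0, 1] for both splits.
train_datagen = ImageDataGenerator(rescale=1.0 / 255)
val_datagen = ImageDataGenerator(rescale=1.0 / 255)

# Placeholder paths -- point these at the extracted FER2013 folders.
# train_generator = train_datagen.flow_from_directory(
#     "data/train",
#     target_size=(48, 48),
#     color_mode="grayscale",  # convert images to single-channel
#     batch_size=64,
#     class_mode="categorical",
# )
```

The `flow_from_directory` call is left commented because it requires the dataset on disk; `color_mode="grayscale"` performs the grayscale conversion the slide mentions.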


Continued………

• Build the CNN architecture
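A representative CNN for 48×48 grayscale input with 7 output classes is sketched below; the layer sizes are our assumption, since the original slide's exact configuration is not reproduced here:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (
    Input, Conv2D, MaxPooling2D, Dropout, Flatten, Dense,
)

model = Sequential([
    # Convolutional blocks over the 48x48x1 grayscale input.
    Input(shape=(48, 48, 1)),
    Conv2D(32, (3, 3), activation="relu"),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D(pool_size=(2, 2)),
    Dropout(0.25),

    Conv2D(128, (3, 3), activation="relu"),
    MaxPooling2D(pool_size=(2, 2)),
    Dropout(0.25),

    # Classifier head: one dense layer, then a 7-way softmax.
    Flatten(),
    Dense(1024, activation="relu"),
    Dropout(0.5),
    Dense(7, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```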


Continued………

• 2. Train the model on the FER2013 dataset

• Here we train our network on all the images in the FER2013 dataset, then save the weights in the model for future predictions. We then use the OpenCV Haar cascade XML to detect the bounding boxes of faces in the webcam feed and predict the emotions.
1. Training the model
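The fit-and-save pattern can be sketched as below, demonstrated on random stand-in data so it runs without the dataset on disk; with the real generators you would pass them to `model.fit` instead:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Flatten, Dense

# Tiny stand-in model and random data, just to show the fit/save pattern.
model = Sequential([
    Input(shape=(48, 48, 1)),
    Flatten(),
    Dense(7, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

x = np.random.rand(32, 48, 48, 1).astype("float32")
y = np.eye(7)[np.random.randint(0, 7, size=32)]  # one-hot labels

model.fit(x, y, epochs=1, batch_size=16, verbose=0)

# Save the learned weights for later prediction, as described above.
model.save_weights("model.weights.h5")
```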
Continued………

• 2. Predicting emotions
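The prediction step can be sketched as follows; the trained model is stood in by an untrained one and the face crop by random pixels, since only the shape handling is being illustrated:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Flatten, Dense

# Stand-in for the trained model (in practice, call model.load_weights
# on the saved weights file instead of using a fresh model).
model = Sequential([
    Input(shape=(48, 48, 1)),
    Flatten(),
    Dense(7, activation="softmax"),
])

# Stand-in for a cropped, resized, rescaled grayscale face from the webcam.
face = np.random.rand(48, 48).astype("float32")

# Add batch and channel dimensions: (48, 48) -> (1, 48, 48, 1).
batch = face.reshape(1, 48, 48, 1)

probabilities = model.predict(batch, verbose=0)
emotion_index = int(np.argmax(probabilities))  # 0-6, per the FER2013 labels
print(emotion_index)
```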
Continued………..

• Code for GUI and mapping with emojis
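The emoji mapping behind the GUI can be sketched as a dictionary from predicted label index to an emoji image path; the file names here are hypothetical placeholders for the project's own assets:

```python
# Hypothetical emoji image paths -- substitute your own asset files.
EMOJI_PATHS = {
    0: "emojis/angry.png",
    1: "emojis/disgust.png",
    2: "emojis/fear.png",
    3: "emojis/happy.png",
    4: "emojis/sad.png",
    5: "emojis/surprise.png",
    6: "emojis/neutral.png",
}

def emoji_for(emotion_index):
    """Return the emoji image path for a predicted label, Neutral as fallback."""
    return EMOJI_PATHS.get(emotion_index, EMOJI_PATHS[6])

print(emoji_for(3))  # prints "emojis/happy.png"
```

In the actual GUI, the returned path would be loaded into a Tkinter label or similar widget and shown alongside the live webcam feed.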


OUTPUT SCREENSHOT:
Future Scope:

• At this stage we have included only 7 emojis to match our facial expressions, but future work would include far more emojis so that expressions can be mapped more accurately.
• Another possible development is the introduction of emojis with multiple skin tones: the model would identify the user's skin tone in real time and select the matching emojis accordingly.
• A third extension is the inclusion of these emojis in a chatbot, so that users can send their own personalized emoji in a fun way during a chat to show their expressions.
References:

• P. Ekman, "Emotion in the Human Face", Cambridge University Press, 1982.
• I. Cohen, N. Sebe, A. Garg, L. S. Chen, T. S. Huang, "Facial expression recognition from video sequences: temporal and static modeling", Computer Vision and Image Understanding, Special Issue on Face Recognition, vol. 91, issues 1-2, pp. 160-187, July-August 2003.
• https://www.kaggle.com/datasets/msambare/fer2013
• L. Ma, K. Khorasani, "Facial expression recognition using constructive feedforward neural networks", IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, no. 3, pp. 1588-1595, June 2004.
Thank You!!!
