
19CS71C- Research Paper and Patent Review

Weekly Report - 12

Title: Emotion Detection Using Deep Learning


Reg No: 2112033
Name: G. Abishek
Domain: Deep Learning

Problem Statement:
The challenge of accurately recognizing and interpreting human facial emotions from
various angles and perspectives poses significant difficulties in the fields of medical research
and computer science. Traditional methods in facial emotion recognition (FER) rely on
feature extraction and feature selection using Convolutional Neural Networks (CNNs).
However, these methods often struggle with images that are not captured from a frontal view,
limiting their effectiveness in real-world applications. This limitation is particularly critical
for enhancing patient care and supporting children with speech impairments, where accurate
emotion detection can provide essential psychological insights. The proposed research aims to
develop an advanced facial emotion recognition model utilizing a Bi-Directional Long Short-
Term Memory (Bi-LSTM) network. This model seeks to overcome the limitations of
conventional CNN-based approaches by accurately recognizing facial emotions from images
captured at various angles. The primary objective is to create a robust system that can identify
and interpret a wide range of emotional states, enhancing its application in medical and social
contexts.
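
To make the direction concrete, the following is a minimal sketch, not the proposed implementation, of how a Bi-LSTM could be applied to facial emotion recognition: each row of a 48x48 grayscale face crop is fed to the network as one time step, and the pooled bidirectional outputs are used for classification. The class name BiLSTMEmotionNet, the layer sizes, and the assumption of seven emotion classes are illustrative choices, not the report's final design. Feeding raw image rows is only one option; per-frame CNN features are another common input to a Bi-LSTM.

    # Minimal sketch (illustrative, not the exact proposed architecture): a Bi-LSTM
    # reads a 48x48 grayscale face image row by row and predicts one of seven emotions.
    import torch
    import torch.nn as nn

    class BiLSTMEmotionNet(nn.Module):
        def __init__(self, img_size=48, hidden_size=128, num_classes=7):
            super().__init__()
            # Each image row (img_size pixels) is treated as one time step.
            self.bilstm = nn.LSTM(
                input_size=img_size,
                hidden_size=hidden_size,
                num_layers=2,
                batch_first=True,
                bidirectional=True,
            )
            self.classifier = nn.Linear(2 * hidden_size, num_classes)

        def forward(self, x):
            # x: (batch, 1, 48, 48) grayscale faces -> (batch, 48, 48) row sequence
            seq = x.squeeze(1)
            out, _ = self.bilstm(seq)      # (batch, 48, 2 * hidden_size)
            pooled = out.mean(dim=1)       # pool bidirectional outputs over all rows
            return self.classifier(pooled) # (batch, num_classes) emotion logits

    if __name__ == "__main__":
        model = BiLSTMEmotionNet()
        dummy = torch.randn(4, 1, 48, 48)  # a fake batch of four face crops
        print(model(dummy).shape)          # torch.Size([4, 7])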

No. of Papers Reviewed: 2

Paper 1:
Title: Facial Micro-Expression Detection Using Deep Learning Architecture
Authors: Srushti S. Yadahalli, Shambhavi Rege, Dr. Sukanya Kulkarni
Publisher: IEEE
Date of Publication: 2020
Proposed Work: This paper proposes a deep learning model using Convolutional Neural
Networks (CNNs) to detect six emotions (happy, sad, fear, angry, surprise, neutral) from the
FER-2013 dataset. The model includes key techniques like dropout to avoid overfitting, and
pooling to reduce the training time. The authors achieved a test accuracy of 61.13% on single
and group images (Yadahalli et al., 2020).
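
As a rough illustration of the kind of architecture described, the sketch below assumes a small CNN over 48x48 grayscale FER-2013 images, with convolution and max-pooling for feature extraction, dropout against overfitting, and a softmax over six emotions; the filter counts and layer sizes are assumptions, not the authors' exact configuration.

    # Minimal sketch of the kind of CNN Paper 1 describes for FER-2013:
    # convolution + max-pooling for features, dropout against overfitting,
    # and a softmax over six emotions. Layer sizes are illustrative assumptions.
    import torch
    import torch.nn as nn

    class FerCnn(nn.Module):
        def __init__(self, num_classes=6):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),              # 48x48 -> 24x24
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),              # 24x24 -> 12x12
                nn.Dropout(0.25),
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(64 * 12 * 12, 128), nn.ReLU(),
                nn.Dropout(0.5),
                nn.Linear(128, num_classes),  # logits; softmax applied at inference
            )

        def forward(self, x):
            return self.classifier(self.features(x))

    model = FerCnn()
    logits = model(torch.randn(8, 1, 48, 48))  # batch of 48x48 grayscale faces
    probs = torch.softmax(logits, dim=1)       # per-class emotion probabilities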
Paper 2:
Title: Micro-Expression Classification Using Facial Color and Deep Learning Methods
Authors: Hadas Shahar, Hagit Hel-Or
Publisher: IEEE
Date of Publication: 2019
Proposed Work: This paper focuses on micro-expression detection by analyzing facial color
changes due to blood flow. Instead of relying on motion-based methods, the authors
developed a system that classifies emotions by tracking changes in skin color in the cheek
area. The system uses Long Short-Term Memory (LSTM) neural networks for classification
and achieves improved accuracy across multiple datasets, including CASME, CASME II, and
SMIC (Shahar and Hel-Or, 2019).
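
A minimal sketch of this idea, under the assumption that the cheek region can be summarised as a per-frame mean colour vector, is shown below; the cheek-box coordinates, hidden size, and five output classes are illustrative choices, not values taken from the paper.

    # Minimal sketch of the idea in Paper 2: summarise the cheek region of each
    # video frame as a mean colour vector, then classify the resulting time series
    # with an LSTM. Cheek-box coordinates and layer sizes are assumptions.
    import torch
    import torch.nn as nn

    def cheek_color_series(frames, box=(slice(60, 90), slice(20, 50))):
        """frames: (T, H, W, 3) uint8 video -> (T, 3) mean RGB of an assumed cheek box."""
        region = frames[:, box[0], box[1], :].float()
        return region.mean(dim=(1, 2))        # one colour vector per frame

    class ColorLSTM(nn.Module):
        def __init__(self, hidden_size=64, num_classes=5):
            super().__init__()
            self.lstm = nn.LSTM(input_size=3, hidden_size=hidden_size, batch_first=True)
            self.head = nn.Linear(hidden_size, num_classes)

        def forward(self, color_seq):         # (batch, T, 3) colour time series
            out, _ = self.lstm(color_seq)
            return self.head(out[:, -1, :])   # classify from the last time step

    video = torch.randint(0, 256, (30, 120, 120, 3), dtype=torch.uint8)  # fake 30-frame clip
    series = cheek_color_series(video).unsqueeze(0)                      # (1, 30, 3)
    print(ColorLSTM()(series).shape)                                     # torch.Size([1, 5])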
Comparative Analysis:
Objective
  Paper 1: Detect six primary emotions from facial expressions using CNNs with the FER-2013 dataset.
  Paper 2: Classify micro-expressions by analyzing facial color changes using LSTM networks across several micro-expression datasets.

Dataset Used
  Paper 1: FER-2013 dataset with 35,685 grayscale images (48x48 pixels).
  Paper 2: CASME, CASME II, and SMIC datasets (RGB video sequences of micro-expressions).

Technique Used
  Paper 1: CNN with dropout and max-pooling layers for feature extraction and overfitting reduction.
  Paper 2: Facial color detection with LSTM networks for time-series emotion classification, focusing on skin color changes.

Modules Description
  Paper 1: 1. Preprocessing (grayscale conversion, resizing); 2. Feature extraction using convolution layers; 3. Max-pooling for dimensionality reduction; 4. Dropout to prevent overfitting; 5. Softmax for final classification. (A minimal preprocessing sketch follows this comparison.)
  Paper 2: 1. Detection of facial landmarks (cheek regions); 2. Color space conversion; 3. Feature extraction based on changes in RGB values; 4. LSTM for classification; 5. Use of apex frames for training.

Performance Measure
  Paper 1: Achieved 61.13% accuracy on the test set of individual and group images.
  Paper 2: Achieved up to 91.89% accuracy on the CAS(ME)2 dataset for classifying micro-expressions.

Future Work
  Paper 1: Improve model accuracy by using more advanced CNN architectures and larger datasets.
  Paper 2: Explore additional datasets, apply the technique to real-time video, and enhance LSTM-based classification for subtle emotions.
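
The modules listed for Paper 1 begin with grayscale conversion and resizing; a minimal preprocessing sketch using torchvision transforms is given below, where everything beyond the grayscale and 48x48 resize (for example, ToTensor) is an assumption about a typical PyTorch pipeline rather than the paper's exact steps.

    # Minimal sketch of the preprocessing step listed for Paper 1: grayscale
    # conversion and resizing to 48x48 before the CNN. The dummy input and the
    # ToTensor step are illustrative assumptions.
    from torchvision import transforms
    from PIL import Image

    fer_preprocess = transforms.Compose([
        transforms.Grayscale(num_output_channels=1),  # FER-2013 images are grayscale
        transforms.Resize((48, 48)),                  # match the 48x48 input size
        transforms.ToTensor(),                        # HWC uint8 -> CHW float in [0, 1]
    ])

    dummy_face = Image.new("RGB", (120, 120))         # stand-in for a detected face crop
    tensor = fer_preprocess(dummy_face)               # shape: (1, 48, 48)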
Conclusion:
The two papers approach emotion detection with deep learning from complementary angles. Paper 1 performs discrete classification of six basic emotions on the FER-2013 dataset using a CNN with dropout and max-pooling, reaching 61.13% test accuracy on individual and group images. Paper 2 targets micro-expressions, classifying them from cheek-region colour changes with an LSTM and reporting up to 91.89% accuracy on the CAS(ME)2 dataset. Together they show that both spatial appearance features and temporal colour cues carry emotional information. The proposed Bi-LSTM research can build on these findings by combining frame-level appearance features with sequence modelling to recognize emotions from faces captured at varied angles, supporting the intended medical and social applications.

Project Guide Course Instructor
