Visual Entertainment Recommendation System Using Emotion Detection Software

EDAI 2 Project, Group No. F06
Supriya Telsang, Aditya Phadke, Arnav Phadke, Tushar Phadke, Manasi Phand, Vedant Phutane, Prathavita Pichare
Department of Engineering Science and Humanities (DESH)
Vishwakarma Institute of Technology, Pune – 411037
Students’ Conference on Engineering Design and Innovation, June 2024
ABSTRACT

• This project develops a movie recommendation system using emotion detection via real-time facial recognition.
• By accessing the device’s camera, it captures the user’s face and identifies emotions such as anger, sadness, surprise, disgust, fear, happiness, and a neutral baseline.
• Based on the detected emotion, the system suggests suitable movies.

RESULTS AND DISCUSSIONS

The results of the project are as follows:
• Through our project we are able to detect emotions through the webcam.
• The emotions recognized by our model include Anger, Disgust, Fear, Happy, Neutral, Sad, and Surprise.
• Initially, a face detection step is performed on the input image.
• Afterwards, an image-processing-based feature point extraction method is used to extract feature points.
• Finally, a set of values obtained from processing the extracted feature points is given as input to a neural network to recognize the emotion.
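The recommendation step that follows emotion detection can be sketched as a simple lookup from the detected label to a set of movie genres. This is a minimal illustration: the seven labels are the ones the model recognizes, but the genre mapping and function name are our own assumptions, not the project's actual rule set.

```python
# Minimal sketch: map a detected emotion label to suggested movie genres.
# The genre mapping below is a hypothetical example for illustration only.
GENRES_BY_EMOTION = {
    "Anger":    ["comedy", "feel-good"],
    "Disgust":  ["documentary", "nature"],
    "Fear":     ["family", "animation"],
    "Happy":    ["adventure", "musical"],
    "Neutral":  ["drama", "mystery"],
    "Sad":      ["comedy", "uplifting drama"],
    "Surprise": ["thriller", "sci-fi"],
}

def recommend(emotion: str) -> list[str]:
    """Return suggested genres for a detected emotion label."""
    if emotion not in GENRES_BY_EMOTION:
        raise ValueError(f"unknown emotion: {emotion!r}")
    return GENRES_BY_EMOTION[emotion]
```

A real system would replace the static table with a query against a movie database, keyed by the classifier's output.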

INTRODUCTION

• Facial Expression Recognition (FER) has witnessed significant advancements, leveraging deep learning and machine learning techniques.
• This project focuses on preprocessing methods, model architectures, and emotion recognition accuracy.
• By exploring diverse approaches and datasets, it aims to provide a comprehensive overview of current methodologies and their effectiveness in improving FER systems.

IMAGES OF PROJECT

[Image 1]

METHODOLOGY

BLOCK DIAGRAM

Input Image → Face Detection → Facial Feature Extraction → Analysis of Expression → Emotion Detection → Output: Recommendation of Movies/Series

[Image 2]

FLOW CHART
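The block diagram above can be expressed as a chain of stage functions threaded left to right. Each stage here is a placeholder standing in for the real component (the function names and trivial rules are our assumptions; an actual system would plug in a face detector, a landmark extractor, and a trained neural network):

```python
# Sketch of the block-diagram pipeline as composed stages.
# Every stage below is a stub used only to show the data flow.
from typing import Callable

def detect_face(image):
    # Placeholder: a real detector would return the cropped face region.
    return image

def extract_features(face):
    # Placeholder: a real extractor would return facial feature points.
    return [sum(row) for row in face]

def classify_emotion(features):
    # Placeholder: a real classifier would be a trained neural network.
    return "Happy" if sum(features) > 0 else "Neutral"

def run_pipeline(image, stages: list[Callable]):
    """Thread the input through each stage in order."""
    result = image
    for stage in stages:
        result = stage(result)
    return result

pipeline = [detect_face, extract_features, classify_emotion]
```

Structuring the system as a list of stages keeps each block of the diagram independently replaceable and testable.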

NOVELTY FEATURES / FINDINGS

• Innovative Model Architectures and Techniques:
  - Use of advanced CNN architectures with inception layers, residual blocks, CNN-LSTM, 3D-CNN, CNN-10, and ViT models for enhanced feature extraction, sequential data processing, and scalability.
• Enhanced Preprocessing and Analysis:
  - Techniques like image resizing, normalization, and edge detection to capture detailed texture and structural information, improving recognition accuracy.
• Multimodal Approaches and Comprehensive Reviews:
  - Combining facial expressions, speech, behavior, and physiological signals for holistic emotion recognition, and consolidating literature and datasets for comprehensive benchmarking.
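As an illustration of the preprocessing techniques listed above, the sketch below normalizes a grayscale image to [0, 1] and applies a horizontal Sobel edge filter with NumPy. The function names and the manual convolution are our assumptions; a production pipeline would typically use OpenCV's built-in routines instead.

```python
import numpy as np

def normalize(image: np.ndarray) -> np.ndarray:
    """Scale pixel values from [0, 255] down to [0.0, 1.0]."""
    return image.astype(np.float32) / 255.0

def sobel_edges(image: np.ndarray) -> np.ndarray:
    """Horizontal Sobel edge response via a manual 2D convolution (valid mode)."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=np.float32)
    h, w = image.shape
    out = np.zeros((h - 2, w - 2), dtype=np.float32)
    for i in range(h - 2):
        for j in range(w - 2):
            # Sum of the elementwise product of the kernel and the 3x3 patch.
            out[i, j] = np.sum(image[i:i + 3, j:j + 3] * kx)
    return out
```

Because the Sobel kernel's entries sum to zero, a uniform region produces zero response, so the filter highlights only intensity transitions such as facial contours.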

CONCLUSIONS

• The system successfully recognizes user emotions and provides media suggestions accordingly.
• Future improvements:
  - Enhance accuracy by training the model on a larger dataset.
  - Make the system suitable for healthcare and defense applications.
  - Incorporate a more diverse range of emotions in the training dataset to broaden the model’s applicability.

TESTING / IMPLEMENTATION

• Data Collection: Gather a dataset with annotated facial landmarks.
• Preprocessing: Normalize images and landmarks, augment data, split into training and testing sets.
• Model Selection: Choose a model (e.g., CNN, MTCNN, SSD).
• Training: Train the model on the training dataset.
• Evaluation: Test the model on the testing dataset, refine as needed.
• Implementation: Integrate the trained model for real-time facial detection.
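The split in the preprocessing step above can be sketched as a shuffled train–test split in pure Python. The 80/20 ratio, fixed seed, and function name are our assumptions for illustration; in practice a library routine such as scikit-learn's splitter would be used.

```python
import random

def train_test_split(samples, labels, test_fraction=0.2, seed=42):
    """Shuffle paired samples/labels, then split into train and test sets."""
    pairs = list(zip(samples, labels))
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    rng.shuffle(pairs)
    n_test = int(len(pairs) * test_fraction)
    test, train = pairs[:n_test], pairs[n_test:]
    x_train = [s for s, _ in train]
    y_train = [l for _, l in train]
    x_test = [s for s, _ in test]
    y_test = [l for _, l in test]
    return x_train, y_train, x_test, y_test
```

Shuffling before splitting matters here: annotated facial-landmark datasets are often ordered by subject, and an unshuffled split would leak whole subjects into only one side of the split.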
