F6 EDAI2 Project Poster
PROJECT: Using Emotion Detection Software (GROUP NO. F06)
Supriya Telsang, Aditya Phadke, Arnav Phadke, Tushar Phadke,
Manasi Phand, Vedant Phutane, Prathavita Pichare
Department of Engineering Science and Humanities (DESH)
Vishwakarma Institute of Technology, Pune – 411037
Students’ Conference on Engineering Design and Innovation June 2024
ABSTRACT
This project develops a movie recommendation system using emotion detection via real-time facial recognition. By accessing the device's camera, it captures the user's face and identifies emotions such as anger, sadness, surprise, disgust, fear, happiness, and a neutral baseline. Based on the detected emotion, the system suggests suitable movies.

RESULTS AND DISCUSSIONS
Results of the project are as follows:
Through our project we are able to detect emotions through the webcam. These emotions include Anger, Disgust, Fear, Happy, Neutral, Sad, and Surprise, which are the emotion classes recognized by our model. Initially, a face detection step is performed on the input image. Afterwards, an image processing-based feature point extraction method is used to extract feature points. Finally, a set of values obtained from processing the extracted feature points is given as input to a neural network to recognize the emotion contained in the image.
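The poster does not specify how a detected emotion is mapped to a movie suggestion; as an illustration, the seven emotion labels above could drive a simple lookup. The genre pairings below are hypothetical, not taken from the project:

```python
# Hypothetical emotion-to-genre lookup: the seven emotion labels come
# from the poster, but the genre pairings are illustrative assumptions.
EMOTION_TO_GENRE = {
    "Anger": "comedy",
    "Disgust": "documentary",
    "Fear": "family",
    "Happy": "adventure",
    "Neutral": "drama",
    "Sad": "feel-good",
    "Surprise": "mystery",
}

def suggest_genre(emotion: str) -> str:
    """Return a movie genre for a detected emotion label."""
    try:
        return EMOTION_TO_GENRE[emotion]
    except KeyError:
        # Fall back to the neutral suggestion for unrecognized labels.
        return EMOTION_TO_GENRE["Neutral"]
```

In the full system, `emotion` would be the class label output by the neural network for the current webcam frame.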
INTRODUCTION
Facial Expression Recognition (FER) has witnessed significant advancements, leveraging deep learning and machine learning techniques. This project focuses on preprocessing methods, model architectures, and emotion recognition accuracy. By exploring diverse approaches and datasets, it aims to provide a comprehensive overview of current methodologies and their effectiveness in improving FER systems.

IMAGE 1
METHODOLOGY
BLOCK DIAGRAM
FLOW CHART
TESTING / IMPLEMENTATION
Data Collection: Gather a dataset with annotated facial landmarks.
Preprocessing: Normalize images and landmarks, augment data, split into training and testing sets.
Model Selection: Choose a model (e.g., CNN, MTCNN, SSD).
Training: Train the model on the training dataset.
Evaluation: Test the model on the testing dataset, refine as needed.
Implementation: Integrate the trained model for real-time facial detection.

CONCLUSIONS
The system successfully recognizes user emotions and provides media suggestions accordingly.
Future improvements:
Enhance accuracy by training the model on a larger dataset.
Make the system suitable for healthcare and defense applications.
Incorporate a more diverse range of emotions in the training dataset to broaden the model's applicability.
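The preprocessing step above (normalize the images, then split into training and testing sets) can be sketched in plain Python. The per-pixel scaling to [0, 1] and the 80/20 split ratio are assumed conventions, not values stated on the poster:

```python
import random

def normalize(image):
    """Scale 8-bit pixel values into [0, 1] (assumed convention)."""
    return [px / 255.0 for px in image]

def train_test_split(samples, test_fraction=0.2, seed=42):
    """Shuffle and split samples; the 80/20 ratio is an assumption."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]

# Tiny example: fake 2-pixel "images" paired with emotion labels.
dataset = [([0, 255], "Happy"), ([128, 64], "Sad"),
           ([255, 255], "Neutral"), ([32, 16], "Anger"),
           ([200, 10], "Fear")]
dataset = [(normalize(img), label) for img, label in dataset]
train_set, test_set = train_test_split(dataset)
```

In the real pipeline the samples would be face crops produced by the face detection step, and the training set would feed the chosen CNN.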