ON
“SENTIMENT SOUNDS”
(2024-2025)
BY
Arnav Jadhav Roll No.2617
Aadya Parasnis Roll No.2328
Shardul Vaidya Roll No.2632
Mr. B. S. Patil
VISION AND MISSION OF THE INSTITUTE
❖ VISION:
❖ MISSION:
M1: Empower the students by inculcating various technical and soft skills.
M2: Continuously upgrade the teaching-learning process and industry-institute interaction.
VISION AND MISSION OF THE DEPARTMENT
❖ VISION:
❖ MISSION:
M1: To fulfill industrial requirements in the area of artificial intelligence and machine
learning.
M2: To motivate students for continuous learning with entrepreneurial skills.
PROGRAM OUTCOMES (POs)
PO1 Basic and Discipline specific knowledge: Apply knowledge of basic
mathematics, science and engineering fundamentals, and an engineering specialization
to solve engineering problems.
PO2 Problem analysis: Identify and analyze well-defined engineering problems using
codified standard methods.
PO7 Life-long learning: Ability to analyze individual needs and engage in updating
oneself in the context of technological changes.
PROGRAM SPECIFIC OUTCOMES (PSOs)
PSO 1: Use advanced technologies for application of computer software and hardware.
PSO 2: Maintain the AI & ML based system.
CERTIFICATE
This is to certify that Mr. Arnav Jadhav from All India Shri Shivaji Memorial
Society’s Polytechnic College having enrolment 2201410210 has completed Report on
Problem Definition/Semester V Project Report/Final Project Report having title
“Sentiment Sounds” in a group consisting of 3 persons under the guidance of the
faculty guide.
Guide Name
Mr. B. S. Patil
CERTIFICATE
This is to certify that Mr. Aadya Parasnis from All India Shri Shivaji Memorial
Society’s Polytechnic College having enrolment 2201410223 has completed Report on
Problem Definition/Semester V Project Report/Final Project Report having title
“Sentiment Sounds” in a group consisting of 3 persons under the guidance of the
faculty guide.
Guide Name
Mr. B. S. Patil
CERTIFICATE
This is to certify that Mr. Shardul Vaidya from All India Shri Shivaji Memorial
Society’s Polytechnic College having enrolment 2201410227 has completed Report on
Problem Definition/Semester V Project Report/Final Project Report having title
“Sentiment Sounds” in a group consisting of 3 persons under the guidance of the
faculty guide.
Guide Name
Mr. B. S. Patil
ACKNOWLEDGEMENT
With immense pleasure and satisfaction, I am presenting this project report as part of the
curriculum of the Diploma in Artificial Intelligence & Machine Learning. I wish to express my
sincere gratitude towards all those who have extended their support right from the stage this idea
was conceived.
I am profoundly grateful to Mr. B. S. Patil, HOD, Department of Artificial Intelligence &
Machine Learning, Project Guide and Project Coordinator, for his expert guidance and
continuous encouragement in seeing the project work through from its commencement to its
completion.
Finally, I am also grateful to Honorable Mr. S. K. Giram, Principal, AISSMS POLYTECHNIC,
Pune, for his support and guidance that have helped me to expand my horizons of thought and
expression.
We would also like to thank all staff members of the Artificial Intelligence & Machine Learning
Department for showing us the way to achieve our project target. It would not have been
possible to complete the project without the support and motivation of our family members
and friends.
Arnav Jadhav
Roll No. 2617
Aadya Parasnis
Roll No. 2328
Shardul Vaidya
Roll No. 2632
ABSTRACT
The Sentiment Sounds project aims to transform music personalization by
leveraging artificial intelligence (AI) to analyze user emotions in real time
through facial recognition. As music plays a significant role in enhancing daily
experiences and expressing emotions, aligning music choices with users' real-
time moods can deepen emotional engagement. This project uses a
comprehensive dataset of facial expressions across seven emotional categories
(e.g., happy, sad, angry), training an AI model to detect these emotions
accurately. The system then dynamically curates music playlists that resonate
with the detected mood. This approach combines data analysis, machine
learning, and music personalization to provide a unique, adaptive user
experience. Key challenges, including privacy concerns and potential
inaccuracies due to factors like lighting, are acknowledged, with solutions
focusing on user consent and technical adjustments. The project envisions a
future where AI-driven emotional analysis creates more immersive musical
experiences, establishing a new frontier for personalized media.
LIST OF FIGURES
Fig. 1: Process from capturing user emotions to generating a personalized music playlist
Fig. 2: Layout of the user interface, emotion detection, recommendation engine, privacy layer, database, and API integration
Fig. 3: Cyclical process of user feedback collection, feedback analysis, model improvement, system maintenance checks, and deployment of updates
CONTENTS
CERTIFICATE I
ACKNOWLEDGEMENT II
ABSTRACT III
LIST OF FIGURES IV
CHAPTER TITLE PAGE NO
1. INTRODUCTION 13
2. PROBLEM STATEMENT 15
3. OBJECTIVES 17
4. LITERATURE SURVEY 19
5. METHODOLOGY 21
6. CONCLUSION 27
7. REFERENCE LIST 31
CHAPTER 1
INTRODUCTION
INTRODUCTION
Rationale:
The rationale for the Sentiment Sounds project lies in the profound impact that music can
have on human emotions and mental states. The integration of AI into music personalization
represents an innovative step in bridging technology with emotional experience. Music has
been shown to affect dopamine levels, help manage emotions, and even boost productivity.
However, existing music recommendation systems primarily consider listening habits or
preferences without accounting for the user's current emotional state.
2. Facial Recognition
Definition: Facial Recognition is a technology that identifies or verifies individuals
by analyzing their facial features in images or video frames. It often involves the use
of algorithms that map facial landmarks and compare them against a database.
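To make the landmark-comparison idea concrete, the sketch below computes the mean Euclidean distance between two sets of facial landmarks and treats small distances as a match. The coordinates, threshold value, and function names are illustrative assumptions for this report, not part of the actual Sentiment Sounds implementation.

```python
import math

def landmark_distance(a, b):
    """Mean Euclidean distance between two equal-length sets of (x, y) landmarks."""
    assert len(a) == len(b)
    total = sum(math.dist(p, q) for p, q in zip(a, b))
    return total / len(a)

def same_person(a, b, threshold=0.1):
    """Treat two landmark sets as the same face if their mean distance is small."""
    return landmark_distance(a, b) < threshold

# Two nearly identical sets of normalized landmarks (eyes and nose tip)
face1 = [(0.30, 0.40), (0.70, 0.40), (0.50, 0.65)]
face2 = [(0.31, 0.40), (0.69, 0.41), (0.50, 0.66)]
print(same_person(face1, face2))  # True
```

A real system would extract the landmarks from images with a detection library and compare against a stored database, but the distance-and-threshold decision is the core of the comparison step.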
3. Emotion Detection
Definition: Emotion Detection is the process of identifying human emotions using
data from various sources, such as facial expressions, voice tones, or physiological
signals. In the context of facial analysis, it typically involves classifying expressions
into categories like happiness, sadness, anger, or surprise.
5. Neural Network
Definition: A Neural Network is a type of machine learning model inspired by the
structure of the human brain. It consists of layers of interconnected nodes (neurons)
that process data, recognize patterns, and make decisions.
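A minimal sketch of these two ideas together, assuming random toy weights rather than the project's trained CNN: a single fully connected layer of neurons followed by a softmax turns a feature vector into probabilities over the seven emotion categories mentioned in the abstract.

```python
import math
import random

EMOTIONS = ["happy", "sad", "angry", "surprise", "fear", "disgust", "neutral"]

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def dense_layer(inputs, weights, biases):
    """One fully connected layer: weighted sum of inputs plus a bias per neuron."""
    return [sum(w * x for w, x in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

random.seed(0)
features = [0.2, 0.8, 0.5]  # stand-in for features extracted from a face image
weights = [[random.uniform(-1, 1) for _ in features] for _ in EMOTIONS]
biases = [0.0] * len(EMOTIONS)

probs = softmax(dense_layer(features, weights, biases))
predicted = EMOTIONS[probs.index(max(probs))]
print(predicted, round(max(probs), 3))
```

A trained network would stack many such layers (convolutional ones for images) with learned weights, but each layer performs this same weighted-sum-and-activation computation.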
CHAPTER 2
PROBLEM STATEMENT
Problem Statement
The Sentiment Sounds project aims to address the challenge of providing a more immersive
and responsive music experience by analyzing facial expressions to gauge user emotions
and curate music playlists accordingly. Traditional music streaming services lack real-time
adaptation based on user mood, which this project intends to solve through AI.
The research attempts to solve several key problems:
1. Lack of Real-Time Emotional Adaptability: Current recommendation systems do
not adapt to changing emotional states, which limits their effectiveness in meeting
users' immediate emotional needs.
2. Challenge of Accurate Emotion Detection: Real-time emotion recognition through
facial analysis has technical challenges, such as variability in lighting and facial
expressions. This study will address these technical limitations to enhance the
system’s reliability.
3. Privacy and Ethical Concerns: Facial recognition technology poses ethical and
privacy concerns. This research will explore measures to protect user data and ensure
that the system respects user privacy and consent.
CHAPTER 3
OBJECTIVES
Objectives
CHAPTER 4
LITERATURE SURVEY
Literature Survey:
3. Zhang, Z., Lee, K., & Chung, Y. (2018). Facial expression recognition in challenging
scenarios: A comprehensive review. Neurocomputing, 309, pp. 1-10.
Zhang et al. provide insights into the challenges of facial expression recognition under
varying conditions. Their work underscores the importance of robust AI models in handling
real-world variations, informing Sentiment Sounds of the need to address lighting and angle
changes to maintain accuracy.
CHAPTER 5
METHODOLOGY
Methodology
Phase 3: Development
Objective: Implement system features using suitable technologies for a robust, responsive
application.
Approach:
o Developed a user-friendly interface to display emotions and corresponding playlists.
o Integrated a Convolutional Neural Network (CNN) for emotion detection using the
TensorFlow and Keras libraries.
o Established an API connection to music streaming services for dynamic playlist updates
based on detected emotions.
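The playlist-update step described above can be sketched as a simple dispatcher from detected emotion labels to candidate playlists. This is a hypothetical fixed mapping for illustration only; the actual system queries a streaming-service API, and all playlist names here are invented.

```python
# Hypothetical emotion-to-playlist mapping. A production system would pass the
# detected label to a streaming-service API rather than return fixed lists.
EMOTION_PLAYLISTS = {
    "happy": ["Upbeat Pop", "Feel-Good Hits"],
    "sad": ["Mellow Acoustic", "Rainy Day"],
    "angry": ["Hard Rock Release", "Workout Energy"],
    "neutral": ["Daily Mix"],
}

def playlists_for(emotion):
    """Return candidate playlists for a detected emotion, falling back to neutral."""
    return EMOTION_PLAYLISTS.get(emotion, EMOTION_PLAYLISTS["neutral"])

print(playlists_for("sad"))   # ['Mellow Acoustic', 'Rainy Day']
print(playlists_for("fear"))  # falls back to ['Daily Mix']
```

The fallback to a neutral playlist keeps the system responsive even when the detector reports an emotion with no curated mapping.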
Phase 4: Testing
Objective: Validate the accuracy of emotion detection, music matching, and user experience.
Approach:
o Conducted user testing with a sample group to gather feedback on system
responsiveness, playlist relevance, and overall satisfaction.
o Tested the emotion detection model under different lighting conditions and facial
angles to improve robustness.
o Collected performance metrics such as latency, emotion detection accuracy, and
feedback on music suitability.
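The performance metrics named above (latency and emotion detection accuracy) could be aggregated from per-trial test records as in this sketch. The trial data and resulting numbers are hypothetical, not measured results from the project.

```python
# Hypothetical per-trial records from user testing
trials = [
    {"latency_ms": 120, "predicted": "happy", "actual": "happy"},
    {"latency_ms": 150, "predicted": "sad",   "actual": "sad"},
    {"latency_ms": 180, "predicted": "angry", "actual": "neutral"},
    {"latency_ms": 130, "predicted": "happy", "actual": "happy"},
]

def mean_latency(trials):
    """Average end-to-end latency across trials, in milliseconds."""
    return sum(t["latency_ms"] for t in trials) / len(trials)

def detection_accuracy(trials):
    """Fraction of trials where the predicted emotion matched the actual one."""
    correct = sum(t["predicted"] == t["actual"] for t in trials)
    return correct / len(trials)

print(f"mean latency: {mean_latency(trials):.1f} ms")  # 145.0 ms
print(f"accuracy: {detection_accuracy(trials):.2f}")   # 0.75
```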
Phase 5: Deployment
Objective: Make the application available for users while ensuring a smooth launch process.
Approach:
o Deployed the application on a secure cloud server to handle real-time processing and
user traffic.
o Integrated a feedback mechanism allowing users to report issues and suggest
improvements.
Fig. 1: Process from capturing user emotions to generating a personalized music playlist
Fig. 2: Layout of the user interface, emotion detection, recommendation engine, privacy
layer, database, and API integration
Fig. 3: Cyclical process of user feedback collection, feedback analysis, model
improvement, system maintenance checks, and deployment of updates
FUNCTIONAL DESCRIPTION:
Project Plan
Sr. No. | Activity | Semester | Duration | Planned Date | Execution Date
21 | Project Expert Review (100%) | | 2 days | 25/2/25 |
CHAPTER 6
CONCLUSION
CONCLUSION
The Sentiment Sounds project demonstrates the potential of AI-driven emotion detection to
transform music personalization, providing a deeply engaging and adaptive user experience.
The research successfully addressed its objectives, supporting each finding through
analysis and experimentation. The conclusions drawn from this study highlight the project’s
significance, practical implications, and areas for further research.
FUTURE SCOPE
Mobile Application:
A dedicated mobile app for iOS and Android would enable users to access the Sentiment Sounds
platform on the go. Push notifications could alert users to new playlists and emotion-based updates.
Multi-Sensor Emotion Detection:
Expanding the system to integrate additional sensors, such as voice tone analysis or heart rate
monitors, could improve emotion detection accuracy. Combining facial recognition with these
additional signals would give a fuller picture of the user's emotional state.
AI-Driven Personalization:
Advanced machine learning could analyze user preferences over time, allowing Sentiment Sounds to
make more refined recommendations based on both real-time emotions and historical listening
patterns.
Streaming Service Integration:
Future updates could include deeper integration with streaming services like Spotify or Apple
Music, allowing users to seamlessly import playlists and receive dynamic recommendations.
Privacy and Ethics:
As privacy concerns evolve, implementing features like federated learning and on-device processing
would help protect user data. Regular updates on privacy practices and ethical AI use would help
maintain user trust.
CHAPTER 7
REFERENCE LIST
REFERENCES
[1] Ekman, P. (1999). Basic emotions. In Handbook of Cognition and Emotion. John
Wiley & Sons, pp. 45-78.
[2] Pachet, F. (2003). The future of content-based music analysis and retrieval. New York:
Springer, pp. 120-135.
[3] Hazrati, F., Smith, R., & Kumar, A. (2020). Emotion detection from speech and face
based on deep learning. Retrieved from
https://www.academia.edu/emotion_detection_study; accessed September 15, 2023.
Teacher Evaluation Sheet (ESE) for Capstone 1: Project Planning