
A PROJECT REPORT

ON

“SENTIMENT SOUNDS”

IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE AWARD OF
DIPLOMA IN ARTIFICIAL INTELLIGENCE & MACHINE LEARNING

DEPARTMENT OF ARTIFICIAL INTELLIGENCE & MACHINE LEARNING

(2024-2025)

BY
Arnav Jadhav Roll No.2617
Aadya Parasnis Roll No.2328
Shardul Vaidya Roll No.2632

UNDER THE GUIDANCE OF

Mr. B. S. Patil

KENNEDY ROAD, NEAR R.T.O., PUNE 411001

1
VISION AND MISSION OF THE INSTITUTE

❖ VISION:

Achieve excellence in quality technical education by imparting knowledge, skills and abilities to build a better technocrat.

❖ MISSION:

M1: Empower the students by inculcating various technical and soft skills.
M2: Upgrade the teaching-learning process and industry-institute interaction continuously.

VISION AND MISSION OF THE DEPARTMENT OF ARTIFICIAL INTELLIGENCE & MACHINE LEARNING

❖ VISION:

To serve society by imparting knowledge in artificial intelligence and machine learning along with professional skills to build a responsible human being.

❖ MISSION:

M1: To fulfill industrial requirements in the area of artificial intelligence and machine learning.
M2: To motivate students for continuous learning with entrepreneurial skills.

2
PROGRAM OUTCOMES (POs)
PO1 Basic and Discipline specific knowledge: Apply knowledge of basic
mathematics, science and engineering fundamentals and engineering specialization
to solve the engineering problems.

PO2 Problem analysis: Identify and analyze well-defined engineering problems using
codified standard methods.

PO3 Design/development of solutions: Design solutions for well-defined technical problems and assist with the design of systems, components or processes to meet specified needs.

PO4 Engineering Tools, Experimentation and Testing: Apply modern engineering tools and appropriate techniques to conduct standard tests and measurements.

PO5 Engineering practices for society, sustainability and environment: Apply appropriate technology in the context of society, sustainability, environment and ethical practices.

PO6 Project Management: Use engineering management principles individually, as a team member or a leader to manage projects and effectively communicate about well-defined engineering activities.

PO7 Life-long learning: Analyze individual needs and engage in updating knowledge in the context of technological changes.

3
PROGRAM SPECIFIC OUTCOMES(PSO)

Students will be able to:

 PSO 1: Use advanced technologies for application of computer software and hardware.
 PSO 2: Maintain the AI & ML based system.

4
CERTIFICATE

This is to certify that Mr. Arnav Jadhav from All India Shri Shivaji Memorial
Society’s Polytechnic College having enrolment 2201410210 has completed Report on
Problem Definition/Semester V Project Report/Final Project Report having title
“Sentiment Sounds” in a group consisting of 3 persons under the guidance of the
faculty guide.

Guide Name
Mr. B. S. Patil

5
CERTIFICATE

This is to certify that Mr. Aadya Parasnis from All India Shri Shivaji Memorial
Society’s Polytechnic College having enrolment 2201410223 has completed Report on
Problem Definition/Semester V Project Report/Final Project Report having title
“Sentiment Sounds” in a group consisting of 3 persons under the guidance of the
faculty guide.

Guide Name
Mr. B. S. Patil

6
CERTIFICATE

This is to certify that Mr. Shardul Vaidya from All India Shri Shivaji Memorial
Society’s Polytechnic College having enrolment 2201410227 has completed Report on
Problem Definition/Semester V Project Report/Final Project Report having title
“Sentiment Sounds” in a group consisting of 3 persons under the guidance of the
faculty guide.

Guide Name
Mr. B. S. Patil

7
ACKNOWLEDGEMENT

With immense pleasure and satisfaction, we present this project report as part of the
curriculum of the Diploma in Artificial Intelligence & Machine Learning. We wish to express our
sincere gratitude towards all those who have extended their support right from the stage this
idea was conceived.
We are profoundly grateful to Mr. B. S. Patil, HOD, Department of Artificial Intelligence &
Machine Learning, Project Guide and Project Coordinator, for his expert guidance and
continuous encouragement in seeing that the project work reached its target from
commencement to completion.
We are also grateful to the Honorable Mr. S. K. Giram, Principal, AISSMS POLYTECHNIC,
Pune, for his support and guidance, which have helped us expand our horizons of thought and
expression.
We would also like to thank all staff members of the Artificial Intelligence & Machine Learning
Department for showing us the way to achieve our project target. It would not have
been possible to complete the project without the support and motivation of our family
members and friends.

Arnav Jadhav
Roll No. 2617

Aadya Parasnis

Roll No. 2628

Shardul Vaidya
Roll No. 2632

8
ABSTRACT
The Sentiment Sounds project aims to transform music personalization by
leveraging artificial intelligence (AI) to analyze user emotions in real time
through facial recognition. As music plays a significant role in enhancing daily
experiences and expressing emotions, aligning music choices with users' real-
time moods can deepen emotional engagement. This project uses a
comprehensive dataset of facial expressions across seven emotional categories
(e.g., happy, sad, angry), training an AI model to detect these emotions
accurately. The system then dynamically curates music playlists that resonate
with the detected mood. This approach combines data analysis, machine
learning, and music personalization to provide a unique, adaptive user
experience. Key challenges, including privacy concerns and potential
inaccuracies due to factors like lighting, are acknowledged, with solutions
focusing on user consent and technical adjustments. The project envisions a
future where AI-driven emotional analysis creates more immersive musical
experiences, establishing a new frontier for personalized media.

9
LIST OF FIGURES

Figure no. | Title                                                              | Page no.
1          | Module 01: User Registration                                       |
2          | Module 02: Proximity Based Web-Application for Connecting Passions |
3          | Gantt Chart for project schedule                                   |

CONTENTS

CERTIFICATE I

ACKNOWLEDGEMENT II

ABSTRACT III

LIST OF FIGURES IV

10
CHAPTER  TITLE                                   PAGE NO.

1.  INTRODUCTION                                 13
2.  PROBLEM STATEMENT                            15
3.  OBJECTIVES                                   17
4.  LITERATURE SURVEY                            19
5.  METHODOLOGY (MATERIALS AND METHODS)          21
6.  CONCLUSION                                   27
7.  REFERENCE LIST                               31

11
CHAPTER 1
INTRODUCTION

12
INTRODUCTION

In an era where artificial intelligence (AI) is increasingly integrated into everyday
technology, the Sentiment Sounds project explores a unique application of AI in music
personalization. Music has long been recognized as a medium for emotional expression,
enjoyment, and creativity. Traditional music recommendation systems rely on user inputs,
listening history, or genre preferences, but they lack adaptability to the user's real-time
emotional state. This gap presents an exciting opportunity: to create a more responsive
music experience that dynamically aligns with the user's current mood.

Sentiment Sounds leverages AI to analyze facial expressions, one of the most universal
forms of emotional communication. The human face, with its over 40 muscles, can display
a range of emotions that an AI model can learn to interpret. By detecting emotions such as
happiness, sadness, anger, and surprise through real-time facial analysis, the project can
tailor music recommendations to match the listener's mood, thereby enhancing the
listening experience.

The introduction of AI-driven music personalization addresses the demand for technology
that understands and adapts to the user's feelings, offering an engaging and meaningful
interaction. Beyond personalization, this technology promises benefits in areas like
productivity and mental well-being, as studies show that music tailored to mood can
influence emotional regulation, focus, and relaxation.

13
Rationale:

The rationale for the Sentiment Sounds project lies in the profound impact that music can
have on human emotions and mental states. The integration of AI into music personalization
represents an innovative step in bridging technology with emotional experience. Music has
been shown to affect dopamine levels, help manage emotions, and even boost productivity.
However, existing music recommendation systems primarily consider listening habits or
preferences without accounting for the user's current emotional state.

This project is worth investigating because it addresses the limitations of current
recommendation systems, which often lack emotional sensitivity and fail to provide the
level of engagement users seek. By introducing AI-driven emotion analysis, Sentiment
Sounds enables a music experience that resonates with the user's feelings, creating a deeper
and more personalized connection. Moreover, as AI technology continues to advance, this
approach could extend to other areas of personalized media, transforming how users interact
with various content forms based on their emotional needs.
The potential for improving mental well-being and creating adaptive entertainment
experiences also makes this project highly relevant. As AI plays a larger role in the
creative industry, projects like Sentiment Sounds explore how technology can cater to
and enhance the human experience.

Key Terms and Definitions:

1. Artificial Intelligence (AI)

 Definition: Artificial Intelligence refers to the simulation of human intelligence in
machines that are programmed to think and learn like humans. AI systems are
designed to process information, recognize patterns, and make decisions based on
data.

2. Facial Recognition
 Definition: Facial Recognition is a technology that identifies or verifies individuals
by analyzing their facial features in images or video frames. It often involves the use
of algorithms that map facial landmarks and compare them against a database.

14
3. Emotion Detection
 Definition: Emotion Detection is the process of identifying human emotions using
data from various sources, such as facial expressions, voice tones, or physiological
signals. In the context of facial analysis, it typically involves classifying expressions
into categories like happiness, sadness, anger, or surprise.

4. Machine Learning (ML)

 Definition: Machine Learning is a branch of AI focused on building algorithms that
allow computers to learn from and make predictions based on data. Unlike traditional
programming, ML algorithms improve their accuracy over time by analyzing more
data.

5. Neural Network
 Definition: A Neural Network is a type of machine learning model inspired by the
structure of the human brain. It consists of layers of interconnected nodes (neurons)
that process data, recognize patterns, and make decisions.

6. Convolutional Neural Network (CNN)

 Definition: A Convolutional Neural Network is a deep learning algorithm commonly
used for image recognition tasks. CNNs process images by using filters that detect
specific features like edges, shapes, and textures.
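To make the CNN definition concrete, the sketch below builds a small Keras classifier of the kind the report later integrates for emotion detection. The layer sizes, the 48x48 grayscale input, and the function name are illustrative assumptions, not the project's actual architecture.

```python
# A minimal sketch (assumed architecture) of a CNN that maps 48x48
# grayscale face crops to the seven emotion categories mentioned above.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_emotion_cnn(input_shape=(48, 48, 1), num_emotions=7):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # Convolutional filters detect low-level features (edges, textures)
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        # Softmax produces one probability per emotion class
        layers.Dense(num_emotions, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_emotion_cnn()
print(model.output_shape)  # (None, 7)
```

The softmax output gives a probability for each of the seven categories; the highest-scoring class is taken as the detected emotion.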

15
CHAPTER 2
PROBLEM STATEMENT

16
Problem Statement

The Sentiment Sounds project aims to address the challenge of providing a more immersive
and responsive music experience by analyzing facial expressions to gauge user emotions
and curate music playlists accordingly. Traditional music streaming services lack real-time
adaptation based on user mood, which this project intends to solve through AI.
The research attempts to solve several key problems:
1. Lack of Real-Time Emotional Adaptability: Current recommendation systems do
not adapt to changing emotional states, which limits their effectiveness in meeting
users' immediate emotional needs.
2. Challenge of Accurate Emotion Detection: Real-time emotion recognition through
facial analysis has technical challenges, such as variability in lighting and facial
expressions. This study will address these technical limitations to enhance the
system’s reliability.
3. Privacy and Ethical Concerns: Facial recognition technology poses ethical and
privacy concerns. This research will explore measures to protect user data and ensure
that the system respects user privacy and consent.

17
CHAPTER 3
OBJECTIVES

18
Objectives

1. To Develop an Emotion Detection Model:
o Achieve accurate real-time emotion detection by training an AI model capable of
analyzing facial expressions. This model will detect emotions such as happiness,
sadness, anger, and surprise, forming the foundation for creating a responsive music
recommendation system.

2. To Personalize Music Based on Real-Time Emotions:
o Design and implement a system that matches music playlists to the user's detected
emotional state, enhancing the relevance and emotional connection of each
recommendation. This objective seeks to create a music experience that resonates
more deeply with users.

3. To Measure User Satisfaction and Engagement:
o Evaluate how well emotion-based music personalization improves user satisfaction
compared to traditional, non-personalized recommendations. This research aims to
demonstrate that adaptive playlists enhance user experience by aligning music with
real-time emotions.

4. To Address Privacy and Ethical Concerns:
o Implement data privacy safeguards, such as user consent agreements, encryption,
and anonymization, to protect sensitive facial data. This objective ensures that
ethical standards are met while advancing the functionality of the system.

5. To Explore the Future Potential of Emotion-Based Personalization:
o Assess the broader implications and potential advancements of AI-driven emotion
detection in personalized media. This objective explores how future improvements
in AI and facial recognition could lead to even more precise, intuitive, and
immersive media experiences.

19
CHAPTER 4
LITERATURE SURVEY

20
Literature Survey:

Sr No. | Name of the document | Inference

1. Ekman, P. (1999). Basic emotions. In Handbook of Cognition and Emotion. John Wiley & Sons, pp. 45-78.
Ekman's foundational work on basic human emotions supports the categorization of facial expressions, which is essential for building accurate emotion detection systems. His research establishes a framework for understanding universal emotions (such as happiness, sadness, anger) that can be applied in AI models for reliable emotion classification.

2. Pachet, F. (2003). The future of content-based music analysis and retrieval. New York: Springer, pp. 120-135.
Pachet's exploration of content-based music analysis emphasizes the value of aligning music with emotional states. This aligns well with Sentiment Sounds, as it provides a theoretical basis for emotion-based music personalization, illustrating how emotional alignment enhances engagement and satisfaction.

3. Zhang, Z., Lee, K., & Chung, Y. (2018). Facial expression recognition in challenging scenarios: A comprehensive review. Neurocomputing, 309, pp. 1-10.
Zhang et al. provide insights into the challenges of facial expression recognition under varying conditions. Their work underscores the importance of robust AI models in handling real-world variations, informing Sentiment Sounds of the need to address lighting and angle changes to maintain accuracy.

CHAPTER 5
METHODOLOGY
21
22
Methodology

Phase 1: Requirement Analysis

 Objective: Understand user needs for a personalized music recommendation system based on
real-time emotional analysis.
 Approach:
o Conducted surveys and interviews with potential users (e.g., music streaming users, AI
researchers, and general users) to identify preferences and expectations for emotion-driven
music personalization.
o Collected requirements around emotion detection accuracy, privacy preferences, real-time
functionality, and interface simplicity.

Phase 2: System Design

 Objective: Develop a structured system architecture to enable seamless interaction between
emotion detection and music personalization.
 Approach:
o Designed the system architecture to include:
 Emotion Detection Module: Real-time facial recognition model to analyze facial
expressions.
 Privacy and Data Security Module: Ensures secure handling of facial
recognition data.
o Created a database schema to store user consent, session data, and emotion-based
preferences.

Phase 3: Development

 Objective: Implement system features using suitable technologies for a robust, responsive
application.
 Approach:
o Developed a user-friendly interface to display emotions and corresponding
playlists.
o Integrated the Convolutional Neural Network (CNN) for emotion detection using the
TensorFlow and Keras libraries.
o Established an API connection to music streaming services for dynamic playlist
updates based on detected emotions.
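The API wiring in Phase 3 might be glued together as in the following sketch. `MusicClient`, its catalog, and `fetch_playlist` are hypothetical stand-ins, since real streaming APIs (such as Spotify's) require authentication and their own client libraries.

```python
# Hypothetical glue between the emotion detector and a streaming service.
# MusicClient and fetch_playlist are invented names, not a real API.

class MusicClient:
    """Stand-in for a streaming-service client with a tiny mood catalog."""
    CATALOG = {
        "happy": ["Track A (upbeat)", "Track B (upbeat)"],
        "sad":   ["Track C (mellow)", "Track D (mellow)"],
    }

    def fetch_playlist(self, mood: str) -> list:
        # A real client would call the service's recommendation endpoint.
        return self.CATALOG.get(mood, [])

def update_playlist(detected_emotion: str, client: MusicClient) -> list:
    """Refresh the playlist whenever the detector reports a new emotion."""
    return client.fetch_playlist(detected_emotion)

print(update_playlist("happy", MusicClient()))
# → ['Track A (upbeat)', 'Track B (upbeat)']
```

Keeping the client behind a single `fetch_playlist` call means the streaming backend can be swapped without touching the detection code.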

23
Phase 4: Testing
 Objective: Validate the accuracy of emotion detection, music matching, and user experience.
 Approach:
o Conducted user testing with a sample group to gather feedback on system
responsiveness, playlist relevance, and overall satisfaction.
o Tested the emotion detection model under different lighting conditions and facial
angles to improve robustness.
o Collected performance metrics such as latency, emotion detection accuracy, and
feedback on music suitability.

Phase 5: Deployment
 Objective: Make the application available for users while ensuring a smooth launch process.
 Approach:
o Deployed the application on a secure cloud server to handle real-time processing and
user traffic.
o Integrated a feedback mechanism allowing users to report issues and suggest
improvements.

Phase 6: Maintenance and Support

 Objective: Maintain system performance and address any emerging issues.
 Approach:
o Established a plan for regular model updates to incorporate user feedback and improve
emotion detection accuracy.
o Implemented a support system for user inquiries, updates, and troubleshooting.

24
Fig: Process from capturing user emotions to generating a personalized music playlist

25
Fig: Illustrating the layout of the user interface, emotion detection, recommendation engine, privacy
layer, database, and API integration

26
Fig: Illustrating the cyclical process of user feedback collection, feedback analysis, model
improvement, system maintenance checks, and deployment of updates.

27
FUNCTIONAL DESCRIPTION:

Module 1: Emotion Detection and Privacy

 Emotion Detection:
o Objective: Detect real-time user emotions via facial recognition.
o Process: Analyze facial expressions using a trained CNN model, detecting emotions such as
happiness, sadness, and anger.
 Privacy Module:
o Objective: Ensure user data protection by managing consent and encrypting data.
o Functionality: Requires user consent before accessing the camera, encrypts facial data during
transmission, and anonymizes stored data.
Diagram/Figure: Emotion Detection Flow Diagram: A detailed diagram illustrating the emotion analysis pipeline,
from camera input to emotion classification.
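The anonymization step of the privacy module could be sketched as below. The salted-hash approach, the in-memory `db` dictionary, and the function names are assumptions for illustration, not the report's actual implementation.

```python
# Sketch of anonymized storage for emotion records (assumed design).
import hashlib
import os

def anonymize_user_id(user_id: str, salt: bytes) -> str:
    """Replace a raw user identifier with a salted one-way hash so that
    stored emotion records cannot be traced back to the user directly."""
    return hashlib.sha256(salt + user_id.encode("utf-8")).hexdigest()

def store_emotion_record(db: dict, user_id: str, emotion: str, salt: bytes) -> None:
    """Store only the anonymized ID alongside the detected emotion."""
    db.setdefault(anonymize_user_id(user_id, salt), []).append(emotion)

# Usage: under one salt, the same user always maps to the same pseudonym,
# so their history stays linked without storing the raw identifier.
salt = os.urandom(16)
db = {}
store_emotion_record(db, "user@example.com", "happy", salt)
store_emotion_record(db, "user@example.com", "sad", salt)
print(list(db.values()))  # [['happy', 'sad']]
```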

Module 2: Music Recommendation Engine

 Emotion-to-Genre Mapping:
o Objective: Match each detected emotion to a suitable music genre.
o Process: Create an internal mapping, e.g., happy → pop, sad → blues, which dynamically updates
based on real-time emotional data.
 Playlist Generation:
o Objective: Generate and display a playlist based on detected emotions.
o Process: Connect to music APIs (e.g., Spotify) to fetch and play songs matching the user’s current
mood.
Chart/Figure: Emotion-to-Genre Mapping Table: A table displaying each emotion with its corresponding genre,
providing a quick visual reference.
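The internal mapping can be expressed as a simple lookup table. Beyond the happy → pop and sad → blues pairings given above, the remaining genre choices and the function name are illustrative assumptions.

```python
# Hypothetical emotion-to-genre lookup table; only the happy→pop and
# sad→blues pairings come from the report, the rest are assumptions.
EMOTION_TO_GENRE = {
    "happy": "pop",
    "sad": "blues",
    "angry": "rock",
    "surprise": "electronic",
    "neutral": "ambient",
    "fear": "classical",
    "disgust": "jazz",
}

def genre_for(emotion: str, default: str = "pop") -> str:
    """Return the genre mapped to a detected emotion, with a fallback
    for emotions the mapping does not cover."""
    return EMOTION_TO_GENRE.get(emotion.lower(), default)

print(genre_for("Happy"))    # pop
print(genre_for("unknown"))  # pop (fallback)
```

A dictionary keeps the mapping easy to update dynamically, matching the module's goal of adjusting genre choices as real-time emotional data comes in.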

Module 3: User Interaction and Feedback

 User Interface:
o Objective: Provide a simple, intuitive UI for users to view detected emotions and listen to
personalized playlists.
o Features: Displays current detected emotion, recommended playlist, and controls for user feedback
on the suitability of recommendations.
 Feedback Collection:
o Objective: Gather user feedback on playlist relevance to refine future recommendations.
o Process: Users can rate each playlist's alignment with their emotional state, helping improve system
accuracy over time.
Diagram/Figure: User Feedback Flowchart: A flowchart illustrating how user feedback is collected, processed, and
incorporated back into the recommendation engine.
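The feedback loop described above can be sketched as a small aggregation step; the 1-5 rating scale and the function names are assumptions for illustration.

```python
# Sketch of collecting and summarizing playlist-alignment ratings.
from collections import defaultdict

def record_rating(ratings, emotion, score):
    """Collect a 1-5 user rating of how well a playlist matched an emotion."""
    ratings[emotion].append(score)

def average_alignment(ratings):
    """Average rating per emotion, used to spot weak emotion-genre mappings."""
    return {e: sum(s) / len(s) for e, s in ratings.items()}

ratings = defaultdict(list)
record_rating(ratings, "happy", 5)
record_rating(ratings, "happy", 4)
record_rating(ratings, "sad", 2)
print(average_alignment(ratings))  # {'happy': 4.5, 'sad': 2.0}
```

A persistently low average for one emotion would flag its mapping for revision in the recommendation engine.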

Module 4: System Performance and Analytics

 Real-Time Processing:
o Objective: Ensure emotion detection and playlist updates occur seamlessly without noticeable delays.
o Metrics Tracked: Latency, accuracy of emotion detection, and user engagement.
 Analytics Dashboard:
o Objective: Monitor system performance and identify areas for improvement.
o Features: Displays key metrics like detection accuracy, playlist effectiveness, and user satisfaction
rates.
Chart/Figure: System Performance Dashboard: A sample dashboard interface with charts displaying performance
metrics and user analytics.

28
Project Plan

Sr. No. | Activity                                       | Semester | Duration | Planned Date | Execution Date
1       | Group Formation                                |          | 7 days   | 22/7/24      | 29/7/24
2       | Domain Finalization                            |          | 6 days   | 29/7/24      | 10/8/24
3       | Guide Allocation                               |          | 4 days   | 10/8/24      | 15/8/24
4       | Topic Finalization and Project Approval        |          | 8 days   | 16/8/24      | 25/8/24
5       | Search Base Paper                              |          | 11 days  | 25/8/24      | 5/9/24
6       | Search Related Information (Literature Survey) | Fifth    | 10 days  | 5/9/24       | 15/9/24
7       | Prepare Project Proposal                       |          | 8 days   | 15/9/24      | 23/9/24
8       | Prepare Log Book                               |          | 8 days   | 23/9/24      | 1/10/24
9       | Prepare Project Plan                           |          | 6 days   | 1/10/24      | 11/9/24
10      | First Progress Demo 1                          |          | 8 days   | 14/11/24     | 14/9/24
11      | Report Writing                                 |          | 12 days  | 15/11/24     | 17/9/24
12      | Presentation PPT                               |          | 5 days   | 17/11/24     | 19/11/24
13      | Prepare Project Design Models                  |          | 13 days  | 19/11/24     | 21/11/24
14      | Publish Review Paper                           |          | 5 days   | 15/11/24     | 15/11/24
15      | Implementation of System - I                   |          | 15 days  | 15/11/24     | 19/11/24
16      | Progress Presentation 1 (50%)                  |          | 2 days   | 22/11/24     |
17      | Implementation of System - II                  |          | 15 days  | 20/12/24     |
18      | Testing                                        |          | 8 days   | 30/12/24     |
19      | Deployment                                     | Sixth    | 10 days  | 15/1/25      |
20      | Publish Implementation Paper                   |          | 7 days   | 5/2/25       |
21      | Project Expert Review (100%)                   |          | 2 days   | 25/2/25      |
22      | Implementation of Suggestions in Expert Review |          | 12 days  | 30/2/25      |
23      | Prepare Final Report of Sixth Semester         |          | 9 days   | 20/3/25      |
24      | Final Project Oral                             |          |          | Third week of April |

Fig 3: Gantt chart for the project schedule

30
CHAPTER 6
CONCLUSION

31
CONCLUSION

The Sentiment Sounds project demonstrates the potential of AI-driven emotion detection to
transform music personalization, providing a deeply engaging and adaptive user experience.
The research successfully addressed its objectives, supporting each finding through
analysis and experimentation. The conclusions drawn from this study highlight the project's
significance, practical implications, and areas for further research.

32
FUTURE SCOPE

 Mobile Application Development:
A dedicated mobile app for iOS and Android would enable users to access the Sentiment Sounds
platform on the go. Push notifications could alert users to new playlists, emotion-based updates, and
personalized music suggestions, increasing engagement and accessibility.

 Enhanced Emotion Detection with Multimodal Analysis:
Expanding the system to integrate additional sensors, such as voice tone analysis or heart rate
monitors, could improve emotion detection accuracy. Combining facial recognition with these
modalities would create a more comprehensive and responsive user experience.

 AI-Driven Personalization:
Advanced machine learning could analyze user preferences over time, allowing Sentiment Sounds to
make more refined recommendations based on both real-time emotions and historical listening
habits, further enhancing personalization.

 Integration with Third-Party Music Platforms:
Future updates could include deeper integration with streaming services like Spotify or Apple
Music, allowing users to seamlessly import playlists, receive dynamic recommendations, and
synchronize emotion-based playlists across platforms.

 Data Privacy and Ethical AI Enhancements:
As privacy concerns evolve, implementing features like federated learning and on-device processing
would help protect user data. Regular updates on privacy practices and ethical AI use would ensure
the platform remains secure and user-centered.

33
CHAPTER 7
REFERENCE LIST

34
REFERENCES

[1] Ekman, P. (1999). Basic emotions. In Handbook of Cognition and Emotion. John Wiley & Sons, pp. 45-78.

[2] Pachet, F. (2003). The future of content-based music analysis and retrieval. New York: Springer, pp. 120-135.

[3] Hazrati, F., Smith, R., & Kumar, A. (2020). Emotion detection from speech and face based on deep learning. Retrieved from https://www.academia.edu/emotion_detection_study; accessed on September 15, 2023.

[4] AI and Music Personalization: How It Works. (2022). Retrieved from https://www.musictech.com/features/ai-music-personalization-explained/; accessed on January 5, 2024.

[5] The Evolution of Facial Recognition Technology. (2023). Retrieved from https://www.techcrunch.com/evolution-facial-recognition-technology; accessed on February 10, 2024.

35
Teacher Evaluation Sheet (ESE) for Capstone Project Planning

Name of Student: Arnav Jadhav, Aadya Parasnis and Shardul Vaidya


Name of Programme: Artificial Intelligence & Machine Learning
Semester: 5
Course Title and Code: Capstone Project Planning (22058).
Title of the Capstone Project: Sentiment Sounds

A. COs addressed by the Capstone Project


a. Write the problem/task specification in existing systems related to the occupation.
b. Select, collect and use required information/knowledge to solve the problem/complete the
task.
c. Logically choose relevant possible solution(s).
d. Consider the ethical issues related to the project (if there are any).
e. Assess the impact of the project on society (if there is any).
f. Prepare 'project proposals' with an action plan and time duration at the beginning of the project.
g. Communicate confidently and effectively as a member and team leader.

B. Other Learning Outcomes Achieved Through This Project


a. Affective Domain Outcomes
a. Follow safety practices.
b. Practice good house-keeping.
c. Demonstrate work as a leader/team member.
d. Follow ethical practices.

C. Suggested Rubrics for assessment of capstone project:

36
37
Any other comment: …………………………………………………………………
…………………………………………………………………

(Name and designation of the Faculty Member)


Mr. B. S. Patil
Signature

38
