Mental disturbances affect how a person thinks, feels, and behaves. They alter mood and make it difficult to function at home, work, school, or in the community, causing people to lose concentration in day-to-day life. It is important to note that a period of poor mental state does not always mean a behavioral health disorder. Almost 15% of the world's population suffers from mental health issues, which has become one of the major concerns worldwide. Conventional approaches such as Ayurvedic medicine and philosophical or spiritual practices are used to address these problems, but they have disadvantages such as a lack of evidence, potentially harmful practices, and delayed treatment. Millions of people will not visit a psychiatrist even though doing so would benefit them greatly. When people feel low, they should share their feelings with another person, but for many this is difficult: introverts rarely open up to others, and most fear being judged. AI-driven chatbots have become a major trend in the last few years, and the proposed system addresses the above issues by introducing an AI-powered assistant chatbot; millions of people worldwide already use such chatbots. The system detects mental disturbances at an early stage and intervenes promptly, providing accessible, personalized, and effective support. The history of AI-driven assistant chatbots for mental health is marked by rapid progress, increasing recognition, and growing potential to transform mental health support. The objective of this chatbot is to provide an efficient, cost-effective solution for identifying and addressing mental health problems. Using Natural Language Processing (NLP) and Machine Learning (ML) algorithms, these chatbots offer a confidential and accessible platform for users to share their emotions, concerns, and experiences. Integrating traditional wisdom with modern technology and AI-driven chatbots holds promise for enhanced mental health support and wellness.
Keywords: Mental health, AI-driven chatbots, Integrating traditional wisdom, Mental well-being, Natural Language Processing.
CHAPTER 1
INTRODUCTION
1.1 Overview
A chatbot is a computer program that simulates a human conversation with an end user. Not
all chatbots are equipped with Artificial Intelligence (AI), but modern chatbots increasingly
use AI techniques such as NLP to understand user questions and automate responses to them
[1].
This generation of chatbots with generative AI capabilities will provide even more enhanced
functionality, as they will understand common language and complex queries, adapt to a
user's style of conversation, and use empathy when answering users' questions [1].
Users of mental health chatbots report high satisfaction with chatbot interactions, hold positive perceptions of chatbots, prefer chatbots to information-only control conditions, and indicate interest in using chatbots in the future [2]. In particular, people are more satisfied when they perceive conversations as private, when they report learning something new during the interaction, when the chatbot's content resembles what their therapist previously recommended and is perceived to be of high quality, and when high-quality technological elements are used appropriately [3].
Compared to human beings, chatbots are perceived as less judgmental, which facilitates self-
disclosure among users, and allows for more conversational flexibility [6]. In fact, some
people prefer to interact with chatbots over mental health professionals, which may
encourage people who do not normally seek therapy to receive care [8].
1.4 Objective
Chatbots are emerging as viable complementary services that provide assistance and, often, a degree of companionship; such systems are also known as "virtual therapists". A user feeling depressed at 2 a.m. may not be able to talk to their therapist, but a chatbot is available 24/7 and is eager to talk whenever and wherever a friendly ear is needed [1].
The field of mental health assistants and chatbots for psychological and psychotherapeutic applications is growing rapidly. The application of cutting-edge natural language technologies in combination with psychotherapy will lead to tools that can, to a great extent, fill the gaps in the delivery of mental health care [2].
The objective of this AI-driven chatbot is to provide immediate and accessible mental health support, given that traditional therapy methods have limitations concerning accessibility and stigma. Other key features include personalized interaction powered by advanced NLP, which allows the chatbot to tailor responses to individual user inputs and adapt over time. Coping strategies and self-help techniques are suggested according to each user's specific issues. The system also provides mood tracking and feedback mechanisms, helping users monitor their mental state [6]. The chatbot's anonymity reduces the stigma around seeking help and allows users to do so at any time and place. It also provides access to human therapists when appropriate, making it a hybrid model that transitions smoothly between AI support and professional care. Emergency protocols ensure user safety in cases of self-harm or suicidal thoughts by connecting users with crisis services as needed. Continuous learning and improvement are part of the system, as user feedback guides enhancements based on the latest research in mental health and AI technology. Ultimately, this AI-driven chatbot aims to empower people to take charge of their mental health and improve overall well-being by providing accessible, personalized support [5].
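As a concrete illustration of the mood-tracking feature described above, the following is a minimal sketch in Python. The class name, the 1-5 scoring scale, and the trend thresholds are hypothetical choices for this sketch, not the system's actual implementation: a user logs a daily self-reported mood score, and the tracker reports a recent average and a coarse trend.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class MoodEntry:
    day: date
    score: int  # self-reported: 1 (very low) .. 5 (very good)

@dataclass
class MoodTracker:
    entries: list = field(default_factory=list)

    def log(self, day: date, score: int) -> None:
        # Validate and record one self-reported mood score.
        if not 1 <= score <= 5:
            raise ValueError("score must be between 1 and 5")
        self.entries.append(MoodEntry(day, score))

    def average(self, last_n: int = 7) -> float:
        # Mean score over the most recent entries.
        recent = self.entries[-last_n:]
        return sum(e.score for e in recent) / len(recent)

    def trend(self, last_n: int = 7) -> str:
        # Compare the older half of recent scores with the newer half.
        recent = [e.score for e in self.entries[-last_n:]]
        half = len(recent) // 2
        if half == 0:
            return "insufficient data"
        diff = sum(recent[-half:]) / half - sum(recent[:half]) / half
        if diff > 0.5:
            return "improving"
        if diff < -0.5:
            return "declining"
        return "stable"
```

A real deployment would persist entries and rely on validated instruments rather than a single 1-5 score, but the sketch shows how the feedback mechanism can surface a mood trend to the user.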
1.5 Advantages
Accurate [4].
User-friendly [4].
Time-Saving [4].
Availability around the clock [4].
Quick response to common queries [4].
Reduced Waiting Times [4].
1.6 Applications
Quick response to common queries
Reduced waiting times
Appointment scheduling
User-friendly interaction
CHAPTER 2
LITERATURE SURVEY
2.1 Introduction
AI-driven assistant chatbots are becoming an important tool in mental health care, providing easy access and personal support to those facing mental health challenges through advanced technology. This survey explores how people use AI chatbots and demonstrates the effectiveness and benefits of their use. The literature includes studies that evaluate the performance and real-world application of AI chatbots in mental health; these studies highlight how attention mechanisms enhance chatbot responses by focusing on crucial parts of conversations.
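To make the attention mechanisms mentioned above concrete, the following is a minimal, dependency-free sketch of scaled dot-product attention for a single query vector. It is an illustrative toy only; production chatbot models compute attention over batched tensors with optimized libraries.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention(query, keys, values):
    """Scaled dot-product attention for one query.

    Scores each key against the query, turns the scores into weights
    with softmax, and returns the weighted sum of the value vectors.
    """
    d = len(query)
    scores = [dot(query, k) / math.sqrt(d) for k in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]
```

Because the weights sum to 1, the output is a convex combination of the value vectors, with more weight on the values whose keys align with the query; this is how a model "focuses" on the crucial parts of a conversation.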
P. Dinesh, et al. [1] proposed a system that uses a Naive Bayes classifier as an integral part of a mental health support chatbot, applied to text classification, sentiment analysis, detection of mental health issues, and continuous improvement, with an accuracy range of 85%-95%. The existing system, based on conventional methods such as Cognitive Behavioral Therapy (CBT) and NLP algorithms for conversation, mood tracking, and personalized exercises, had a lower accuracy of 65%-80%. The result of the proposed system is personalized interaction and early recognition of mental disorders, whereas the existing system offers only limited recognition of mental illnesses. Research gaps include limited context understanding, crisis intervention capabilities, and personalization. Limitations involve challenges in handling severe mental health crises, biases in training data, and uncertain long-term effectiveness.
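The Naive Bayes text classification used in [1] can be sketched as a multinomial model with Laplace smoothing. The sketch below is a toy illustration with made-up training phrases and class labels, not the cited system's implementation:

```python
import math
from collections import Counter

class NaiveBayesTextClassifier:
    """Multinomial Naive Bayes over word counts, with Laplace smoothing."""

    def fit(self, texts, labels):
        self.classes = set(labels)
        self.word_counts = {c: Counter() for c in self.classes}
        self.class_counts = Counter(labels)
        vocab = set()
        for text, label in zip(texts, labels):
            words = text.lower().split()
            self.word_counts[label].update(words)
            vocab.update(words)
        self.vocab_size = len(vocab)
        self.total = len(labels)

    def predict(self, text):
        words = text.lower().split()
        best, best_score = None, -math.inf
        for c in self.classes:
            # log prior + sum of smoothed log likelihoods
            score = math.log(self.class_counts[c] / self.total)
            denom = sum(self.word_counts[c].values()) + self.vocab_size
            for w in words:
                score += math.log((self.word_counts[c][w] + 1) / denom)
            if score > best_score:
                best, best_score = c, score
        return best
```

Trained on a handful of labeled phrases, the classifier assigns a new message to the class whose word distribution best explains it; the add-one smoothing keeps unseen words from zeroing out a class.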
Green A., et al. [2] proposed a system that uses NLP to understand and process user input, emotion detection algorithms to identify signs of distress, and CBT integration for basic therapeutic support, with accuracy in the range of 85%-98%. The existing system uses machine learning models such as Support Vector Machines (SVM) for classifying distress levels and Random Forests for multi-class classification, which typically show moderate accuracy (60%-80%) on multi-class tasks. The proposed system achieved a 40%-60% improvement in symptoms such as anxiety and depression, and provides immediate emotional support, interventions, and referrals. The existing system offers personalized conversations but lacks the same therapeutic impact in clinical settings; its limitations include struggling to understand highly complex mental health conditions. Research gaps include the need for better real-time intervention in severe cases, deeper understanding of complex emotional states, and more sophisticated long-term therapeutic relationship capabilities.
Moh. Heri Kurniawan, et al. [3] proposed a system that uses Machine Learning (ML) techniques such as Random Forests and Support Vector Machines (SVM), achieving an overall accuracy of around 70%-85%. The existing system includes Logistic Regression, decision trees, clustering, and dimensionality reduction, with accuracy ranging from 60% to 70%. The results of the proposed system are improved patient activation, better health outcomes, and reduced health costs; the existing system is applied to diagnosing chronic illnesses, patient satisfaction, symptom reduction, and hospital readmissions.
Yu-Hao Li, et al. [4] proposed an AI system that utilizes advanced deep learning techniques (e.g., Convolutional Neural Networks and Recurrent Neural Networks), achieving an accuracy of 92%. The existing system typically relies on traditional machine learning algorithms (e.g., Logistic Regression, Support Vector Machines, Decision Trees) or rule-based expert systems, with a lower accuracy of around 85%. The proposed system performs better in personalized diagnosis, faster risk prediction, and tailored treatment; the existing system is good for broad, generalized applications but less accurate in individualized care.
Sally Moy, et al. [5] proposed a system using supervised ML algorithms (e.g., Support Vector Machines (SVM), Random Forests, Logistic Regression), NLP models (BERT, GPT), and deep learning (Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs)), with an accuracy rate of 80%-95%. The existing system uses rule-based methods, with an accuracy of 80%-90%. The proposed system focuses on trust, transparency, and the human touch, which may affect its acceptance and effectiveness in healthcare settings. The existing system shows improved efficiency and decision-making, but challenges remain, particularly in terms of patient trust and the humanization of AI interactions.
Jinming Du A, et al. [6] proposed a system that uses Transformer models (GPT, BERT), deep learning (DNN, RNN), Connectionist Temporal Classification (CTC), Automatic Speech Recognition (ASR), and Text-to-Speech (TTS), achieving an accuracy range of 85%-90%. The existing system comprises rule-based models, HMMs, FSMs, and shallow neural networks, with an accuracy range of 65%-75%. The research gap is the need for deeper conversational AI and improved context awareness. The proposed system is more effective at recognizing speech and providing precise pronunciation feedback, while the existing system shows limited adaptability.
Syed Mahmudul Hug, et al. [7] proposed a system that uses Machine Learning (ML) techniques such as Random Forests, Support Vector Machines (SVM), and K-Nearest Neighbors (KNN), achieving overall accuracy ranging from 75% to 90%. The existing system consists of expert systems, decision trees, flowcharts, Logistic Regression, Random Forests, and SVM, with accuracy ranging from 60% to 80%. The results of the proposed system are better understanding of cognitive difficulties and higher user satisfaction; the results of the existing system are improved cognitive function and reduced caregiver burden.
Olivia Brown, et al. [8] proposed a system that uses NLP with LSTM, Transformers (e.g., GPT, BERT), SVM, Naive Bayes, and deep learning; its overall accuracy ranges between 80% and 90%. The existing system uses basic NLP models (e.g., bag-of-words, simple RNNs), basic sentiment-analysis classifiers, and rule-based systems for CBT, with an overall accuracy of 60%-75%. The proposed system shows promising results in symptom reduction and engagement; the existing system excels at symptom tracking but lacks personalized interventions. The research gaps are emotion detection and user satisfaction.
Daniel Lee, et al. [9] proposed a system that uses NLP Transformers (e.g., BERT, GPT-3), LSTM, attention mechanisms, and sentiment analysis (deep learning, SVM), achieving an overall accuracy rate of 80%-90%. The existing system uses basic NLP (e.g., bag-of-words, basic RNNs), rule-based CBT, and basic SVMs or linear classifiers, with an overall accuracy range of 60%-75%. The results of the proposed system are user engagement, symptom reduction, and personalization; the existing system is limited to tracking symptoms and offering basic CBT exercises. The research gaps are deeper emotional understanding and better integration with mental health professionals. Limitations are the lack of full human empathy, data privacy concerns, and the risk of over-reliance on technology.
Mia Chen, et al. [10] proposed a system that uses NLP Transformer models (GPT, BERT), LSTM, and sentiment analysis via deep learning (CNN, LSTM) or SVM, with an accuracy rate of 80%-90%. The existing system uses basic NLP models (bag-of-words, simple RNNs), rule-based systems for CBT, and machine learning models such as Naive Bayes and Decision Trees, with an overall accuracy of 60%-75%. The proposed system achieves high engagement and satisfaction due to dynamic personalization and adaptive learning, while the existing system struggles with user retention and repeated interactions. The research gaps are user engagement, long-term interaction, and personalized CBT interventions; a limitation is the lack of empathy.
Angela Lee, et al. [11] proposed a system using NLP Transformer models (GPT-3, BERT), LSTM, attention mechanisms, Reinforcement Learning (RL), and personalized CBT, with an overall accuracy rate of 80%-90%. The existing system uses simple NLP (simple RNNs, bag-of-words, Naive Bayes), rule-based systems, and basic sentiment analysis, with an accuracy rate of 60%-75%. The research gaps are long-term efficacy, bias in personalization, scalability, and cross-cultural adaptation. The proposed system achieves better engagement and higher user satisfaction; the existing system lacks the depth and flexibility needed for deeper, more sustained therapeutic engagement. Limitations are bias in personalization and the fact that AI chatbots cannot fully replicate human empathy.
Brian McArthur, et al. [12] proposed a system that uses Transformer-based NLP models (GPT-3, BERT), sentiment analysis (deep learning, CNN, LSTM), rule-based techniques, and Reinforcement Learning (RL), achieving an overall accuracy rate of 80%-90%. The existing system uses simple NLP (RNNs, bag-of-words), rule-based systems, and basic sentiment analysis, with an overall accuracy rate of 60%-75%. The proposed system adjusts based on real-time feedback, providing a more dynamic, responsive, and empathetic experience for users, while the existing system struggles with trust-building because its interactions are often predictable and not contextually aware. The research gaps are the long-term impact on trust and engagement, addressing bias in AI systems, and exploring AI's role in complementing human therapists. Limitations are the lack of full empathy compared to human therapists and the risk of over-reliance on AI.
David Johnson, et al. [13] proposed a system that uses NLP models (GPT-3, BERT, LSTM), emotion recognition (FER, SER), and Reinforcement Learning (RL), achieving a higher overall accuracy rate of 80%-90%. The existing system uses basic NLP models (rule-based systems or simple RNNs) and basic sentiment analysis, with an overall accuracy rate of 60%-75%. The proposed system's ability to provide personalized, real-time support has resulted in reduced PTSD symptoms and increased user retention, whereas the existing system shows limited personalization and engagement, often leading to short-term relief rather than sustained improvement. The research gaps are long-term efficacy, bias in emotion recognition, and integration with human therapists. Limitations are AI empathy, over-reliance, and severe PTSD cases.
Catherine Hartman, et al. [14] proposed a system using Natural Language Processing (NLP), sentiment analysis (e.g., BERT, GPT), Cognitive Behavioral Therapy (CBT), and reinforcement learning for continuous learning, with an accuracy of 85%-90% when using advanced models such as BERT with reinforcement learning. The existing system uses pre-trained models (e.g., GPT, BERT), sentiment analysis, and CBT integration, with an accuracy range of 65%-75%. The proposed system incorporates ethical principles such as bias prevention, regular audits, and clear communication regarding data usage, ensuring better alignment with ethical guidelines. Existing systems, while effective, often rely on predefined models and are less adaptable to real-time user feedback. Limitations include the potential for over-reliance on AI, misinterpretation of complex emotional states, and ethical challenges such as balancing autonomy with intervention.
Anshika Jain, et al. [15] proposed a system using techniques such as Natural Language Processing (NLP), Machine Learning (ML), and Deep Learning (DL) models like LSTM or BERT, achieving an accuracy range of 80%-90%. The existing system utilizes Cognitive Behavioral Therapy (CBT), with an accuracy of 75%-85%. The result of the proposed system is chatbot-driven personalized care and continuous improvement; the existing system provides scalable, effective solutions for mental health support.
Samir Dey, et al. [16] proposed a system that uses BERT and GPT for NLP, Random Forests, SVM, KNN, Logistic Regression, deep learning (CNNs, RNNs), and reinforcement learning (Q-Learning, DQN), with accuracy ranging from 70% to 95%. The existing system consists of expert systems, decision trees, and flowcharts, with accuracy typically ranging from 60% to 85%. The proposed system provides real-time adaptation, personalized feedback, high user engagement, and emotional support; the existing system is simple to deploy, less resource-intensive, and easier to use for basic tasks and applications.
Prakash Nathaniel Kumar Sarella, et al. [17] proposed a system built on BERT, GPT, Transformer models, VADER, TextBlob, RoBERTa, MedBERT, and BioBERT, typically achieving a high accuracy range of 80%-95%. The existing system uses traditional machine learning models (e.g., SVM, Naive Bayes), with an accuracy range of 70%-85%. The proposed system's real-time adaptation leads to continuous improvement and personalized responses, with high scalability to handle large volumes of user interactions. The existing system offers lower computational cost and simpler implementation, and is easier to deploy for simple tasks such as patient scheduling or basic FAQ handling.
Mohammad Amin Kuhaila, et al. [18] proposed a system using NLP models such as BERT and GPT, along with Recurrent Neural Network (RNN) or Long Short-Term Memory (LSTM) models, with an accuracy of 70%-85%. The existing system is based on therapeutic approaches such as Cognitive Behavioral Therapy (CBT) and person-centred therapy, with an accuracy of 60%-80%. The results of the proposed system are symptom reduction, user satisfaction, and engagement; the existing system provides personalized care, emotional support, and empathy, leading to high user satisfaction.
Lalitha S, et al. [19] proposed a system that uses NLP and ML algorithms (such as BERT, GPT, and BioBERT), with accuracy generally in the range of 85%-95%. The existing system is a clinical decision support system, with an accuracy of 70%-85%. The proposed system handles initial assessments, routine consultations, and health advice; the existing system still relies on human medical professionals, especially for complex conditions.
Dennis Redeemer Korda, et al. [20] proposed a system that uses NLP (Word2Vec, BERT, GPT), supervised learning (SVM, Random Forests, Naive Bayes), and neural networks (CNN, RNN, LSTM), achieving an accuracy of 75%-90%. The existing system is rule-based, with an accuracy of 60%-75%. The proposed system provides more personalized, dynamic, and contextualized responses, which are key for modern educational applications, while the existing system lacks the adaptability and contextual understanding of the newer approaches.
Sri Banerjee, et al. [21] proposed a system that uses GPT-3/4, BERT, T5, reinforcement learning, and sentiment analysis, with an accuracy of 85%-95%. The existing systems are rule-based systems, SVM, Decision Trees, and Naive Bayes, which perform at a lower accuracy of 70%-90%. The proposed system improves response generation, contextual understanding, emotion detection, and personalized therapeutic interventions; the existing system struggles with ambiguous or unstructured conversations, dynamic user inputs, and changing conditions.
Georgios Goumas, et al. [22] proposed a system of NLP (BERT, GPT-3), deep learning (RNN, GPT), and Decision Trees, with an accuracy of 85%-95%. The existing system uses rule-based methods, SVM, Naive Bayes, sentiment analysis, and regular expressions, with an accuracy of 60%-85%. The proposed system delivers personalized responses, multi-turn conversations, and contextual understanding; the existing system shows limited contextual understanding and an inability to adapt to complex, dynamic user inputs.
Mohammad Shafiquzzaman Bhuiyan, et al. [23] proposed a system of NLP (BERT, GPT-3) and deep learning (CNN, RNN), with an accuracy of 80%-95%. The existing system is based on rule-based models and decision trees, with an accuracy of 60%-80%. The proposed system can adapt to changing customer behavior and provide dynamic, context-aware recommendations; the existing system recommends products based on past data and fails to provide the adaptive, context-aware experiences that modern AI systems offer.
Arfan Ahmed, et al. [24] proposed AI-powered chatbots for mental health management using models such as Naive Bayes, Support Vector Machines (SVM), or Long Short-Term Memory (LSTM) networks. The accuracy of the proposed systems is in the 60%-80% range. The existing system is CBT, with accuracy rates ranging from 60% to 75% for most treatments. The proposed system has shown moderate effectiveness for mild to moderate anxiety and depression, offering anonymity, accessibility, and self-guided support; in the existing system, traditional therapy and medications remain the gold standard for treating moderate to severe anxiety and depression.
Jay H Shore, et al. [25] proposed systems that include Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) networks, with an accuracy of 60%-80% for mild to moderate anxiety and depression. The existing method is traditional Cognitive Behavioral Therapy (CBT), with a proven success rate of around 60%-75%. The proposed system achieves high user satisfaction due to anonymity, 24/7 support, and cost-effectiveness; the existing system offers proven long-term effectiveness and personalized care with evidence-based treatments.
Theodore Vial, et al. [26] proposed a system in which RNNs and LSTM networks process sequential data, with classification typically performed using algorithms such as Naive Bayes and Support Vector Machines (SVM), typically leading to a 60%-75% reduction in symptoms. The existing system is Cognitive Behavioral Therapy (CBT), with an accuracy of 50%-65%. The proposed system improves engagement and anonymity, with children and adolescents appreciating the 24/7 support and the personalized nature of interventions; the existing system shows high success rates for moderate to severe cases.
Abdulqadir J Nashwan, et al. [27] proposed a system in which Support Vector Machines (SVM) and Random Forests are used for classification tasks, while Decision Trees and K-Nearest Neighbors (KNN) support diagnostic decisions by analyzing symptom patterns and patient characteristics, typically reaching an accuracy of 85%-95%. The existing system is traditional, clinician-driven psychiatric care that relies on structured interviews, symptom checklists, manual observation, and periodic assessments; its accuracy varies widely but typically falls between 60% and 90%. The proposed system yields accurate diagnoses, personalized treatment, and early intervention; the existing system is judged on accuracy, efficiency, and patient outcomes.
Gordon, K, et al. [28] proposed a system of NLP, deep learning models such as LSTM, BERT, and GPT, and machine learning algorithms such as SVM and Random Forest; the system's accuracy typically ranges from 80% to 95%. The existing system uses structured or semi-structured interviews conducted by clinicians, with an accuracy of 70%-90%. The proposed system offers higher accuracy, real-time interaction, personalization, scalability, cost-effectiveness, and instant access; the existing system of clinical interviews, psychological testing, and telemedicine offers personalized and highly empathetic care.
Patterson S, et al. [29] proposed a system utilizing Cognitive Behavioral Therapy (CBT) with algorithms such as BERT, LSTM, and reinforcement learning, achieving an accuracy of 70%-85%. The existing AI chatbots for Cognitive Behavioral Therapy (CBT) rely on a variety of NLP techniques, machine learning models, and pre-programmed therapeutic frameworks, with accuracy in the 25%-60% range. The proposed system shows the highest range of improvement, while the existing system shows positive outcomes in symptom reduction, user engagement, and satisfaction.
Emma L van der Schyff, et al. [30] proposed Leora, an AI-powered mental health chatbot that uses Natural Language Processing (NLP), sentiment analysis (BERT, LSTM), machine learning (deep learning, reinforcement learning), and Cognitive Behavioral Therapy (CBT) techniques, with an accuracy in the range of 85%-90%. The existing system uses pre-trained models (e.g., BERT) and rule-based systems, with an accuracy of 70%-75%. The results of the proposed system are high user satisfaction with personalized feedback, improved coping strategy adherence, and positive feedback on CBT-based interactions; the existing system produces less personalized, often template-based responses. Limitations of the proposed system include a heavy reliance on sentiment analysis, which could lead to inaccuracies in detecting subtle emotions.
Abhishek Aggarwal, et al. [31] proposed a system that uses Machine Learning (ML) techniques such as Random Forests, Support Vector Machines (SVM), and K-Nearest Neighbors (KNN), achieving overall accuracy ranging from 75% to 90%. The existing system consists of expert systems, decision trees, flowcharts, Logistic Regression, Random Forests, and SVM, with accuracy ranging from 50% to 90%. The proposed system provides strong personalization and adaptation to user needs; the existing system offers basic emotion recognition and behavioral predictions.
Leona Cilar Budler, et al. [32] proposed a system of NLP (e.g., BERT, GPT, BioBERT), medical knowledge bases (e.g., UMLS, Medline), machine learning techniques (e.g., supervised learning, reinforcement learning), and domain-specific models, achieving a moderate-to-high accuracy range of 70%-93%. The existing system uses NLP, deep learning, and machine learning, with an accuracy of 70%-85%. The proposed system offers broader accessibility and cost-effective solutions for routine healthcare inquiries; the existing system is more established and accurate for specific, complex healthcare tasks.
Akbobek Abilkaiyrkyzy, et al. [33] proposed a system that incorporates a digital twin and uses NLP and machine learning, with an accuracy of 75%-90%. The existing system uses NLP, CBT-based therapy, and machine learning, achieving an accuracy of 70%-80%. The proposed system offers an innovative integration of digital twin technology and real-time dialogue-based assessments; the existing system improves symptoms of anxiety, depression, and emotional stress.
Gerard Anmella, et al. [34] proposed a system that uses Machine Learning (ML) techniques such as K-Nearest Neighbors (KNN), decision trees, and Recurrent Neural Networks (RNN), achieving overall accuracy ranging from 90% to 98%. The existing system consists of flowcharts, logistic regression, random forests, and SVM, with accuracy ranging from 70% to 90%. The proposed system addresses work-related burnout and reduces anxiety and depression; the existing system improves users' mental health and well-being.
Julian De Freitas, et al. [35] proposed a system that uses Machine Learning (ML) techniques such as Random Forests and Support Vector Machines (SVM), achieving overall accuracy ranging from 75% to 90%. The existing system includes dimensionality reduction, flowcharts, Logistic Regression, Random Forests, and SVM, with accuracy ranging from 50% to 80%. The proposed system provides strong personalization and adaptation to user needs; the existing system offers basic emotion recognition and behavioral predictions.
Jana, et al. [36] proposed a system that uses Machine Learning (ML) techniques such as Logistic Regression, clustering, Q-learning, and decision trees, achieving overall accuracy ranging from 80% to 90%. The existing system uses Random Forests, expert systems, and SVM, with accuracy ranging from 60% to 70%. The proposed system promotes better coping skills and conversation management; the existing system effectively identifies mental health symptoms and improves outcomes, enhancing cognitive functioning and problem-solving skills.
Emre Sezgin, et al. [37] proposed a system that uses HMMs, DNNs, CNNs, RNNs, intent recognition algorithms, NER, BERT, GPT-2, WaveNet, and Tacotron, with an accuracy of 75%-85%. The existing system uses rule-based systems, finite state machines, and simple database querying methods, with an accuracy of approximately 72.5%. The proposed system offers real-time decision support and is highly scalable, handling a large number of interactions simultaneously; existing systems provide essential functionality but are less accurate and flexible due to their reliance on predefined rules, manual data entry, and limited language understanding.
Prabodh Rathnayaka, et al. [38] proposed a system that uses techniques such as cognitive behavioral techniques, deep Q-learning, and dimensionality reduction, achieving overall accuracy ranging from 80% to 95%. The existing system includes decision trees, Logistic Regression, and LSTM networks, with accuracy ranging from 70% to 85%. The proposed system increased awareness and strengthened relationships; the existing system provided personalized support and resources, increased motivation and attachment in therapy, and improved overall quality of life.
Ghazala Bilquise et al. [39] proposed a system that uses Machine Learning (ML) techniques
such as Random Forests and Support Vector Machines (SVM), achieving overall accuracy
ranging from 80% to 95%. The existing system includes dimensionality reduction,
flowcharts, Logistic Regression, Random Forests, and SVM, with accuracy ranging from
75% to 85%. The proposed system improves context understanding, mental health outcomes,
and coping skills, while the existing system recognizes emotions, supports user experience
and satisfaction, and accurately identifies users.
Sharma, et al. [40] proposed RL models (e.g., Q-learning, Deep Q-Network, Actor-Critic)
that adapt over time and, with proper training, typically reach 80-90% in user satisfaction and
engagement. The existing system uses rule-based or supervised learning approaches (e.g.,
machine learning classification), whose accuracy is high for specific tasks but may drop in
dynamic or open-ended interactions (60-85%). The existing system is simpler to implement,
but it lacks the adaptability, personalization, and long-term user engagement that RL-based
systems offer; it is suitable for simpler, more controlled scenarios but may fall short in
complex or emotional interactions. Research gaps remain in developing more efficient RL
algorithms, reducing the reliance on extensive data, and ensuring stability and fairness in
learning.
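At the core of the Q-learning approach these RL-based systems cite is a single update rule, Q(s,a) ← Q(s,a) + α(r + γ·max Q(s′,a′) − Q(s,a)). A minimal pure-Python sketch of that update follows; the dialogue states, actions, and reward are hypothetical placeholders, not taken from any of the surveyed papers.

```python
# Minimal tabular Q-learning update for a dialogue setting.
# States ("distressed", "calm") and actions are illustrative placeholders.
from collections import defaultdict

def q_update(Q, state, action, reward, next_state, actions, alpha=0.1, gamma=0.9):
    """One Q-learning step; Q maps (state, action) -> value."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    return Q

Q = defaultdict(float)
actions = ["empathize", "suggest_exercise", "refer_professional"]
# A user in a "distressed" state responds well (+1) to an empathetic reply:
q_update(Q, "distressed", "empathize", 1.0, "calm", actions)
print(Q[("distressed", "empathize")])  # 0.1 after the first update
```

Repeated over many simulated conversations, these incremental updates are what let an RL-based chatbot gradually prefer response strategies that earned positive user feedback.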
Park, H, et al. [41] proposed a self-improving chatbot for adolescent mental health that uses
reinforcement learning and deep learning models such as GPT-3/4 to provide personalized,
adaptive support tailored to adolescents' emotional and developmental needs. By integrating
evidence-based therapeutic approaches such as CBT and mindfulness, it tracks users'
emotional and behavioral changes over time, continuously improving its responses based on
past interactions and achieving high accuracy (80%-90%). However, it has several
limitations, including computational demands, privacy concerns, and the potential for
over-reliance on AI in managing mental health. In contrast, existing systems are often
rule-based, offering generic emotional support without the ability to adapt in real time or
track long-term progress, resulting in moderate accuracy (60%-75%) and limited
personalization. Research gaps include improving reinforcement learning for better
personalization, enhancing cultural sensitivity, and developing robust methods for long-term
emotional tracking.
Taylor B, et al. [42] proposed a system aimed at improving access to mental health screening
and support in underserved rural communities. Compared to traditional, in-person mental
health screening clinics, this chatbot received positive feedback from 90% of rural users,
demonstrating increased accessibility. However, the chatbot’s efficacy has not been
thoroughly tested across diverse rural settings worldwide, and its reliance on internet access
limits its usability in remote areas with limited connectivity.
Basit Ali, et al. [43] proposed a system that integrates multiple techniques to enhance the
chatbot's performance and adapt it to various domains. One layer uses a machine learning
classifier (such as a Support Vector Machine, or SVM) to classify user inputs into specific
intent categories; another uses a Sequence-to-Sequence deep learning model to generate more
complex responses, with an average accuracy of 91.5%. Existing systems rely on predefined
rules and keywords to generate responses: rule-based chatbots, often used in limited or
specific use cases, match user inputs to predefined responses using techniques like cosine
similarity or TF-IDF, and achieved an accuracy of 77.5%. The proposed system performs
well in intent classification.
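The TF-IDF-plus-cosine-similarity matching that such rule-based baselines rely on can be sketched in pure Python; the three question-answer pairs below are hypothetical placeholders, not content from the cited system.

```python
# Rule-based response matching: score the user's message against stored
# questions with TF-IDF weights and cosine similarity, return the best answer.
# The tiny FAQ below is a hypothetical stand-in for a real response database.
import math
from collections import Counter

docs = {
    "I feel anxious and worried": "Try a slow breathing exercise.",
    "I cannot sleep at night": "Keep a regular sleep schedule.",
    "I feel sad and alone": "Talking to someone you trust can help.",
}

# Inverse document frequency over the stored questions.
n = len(docs)
df = Counter(w for q in docs for w in set(q.lower().split()))
idf = {w: math.log(n / df[w]) + 1.0 for w in df}

def tfidf(text):
    tf = Counter(text.lower().split())
    return {w: tf[w] * idf.get(w, 0.0) for w in tf}

def cosine(a, b):
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def reply(user_input):
    vec = tfidf(user_input)
    best = max(docs, key=lambda q: cosine(vec, tfidf(q)))
    return docs[best]

print(reply("lately I feel very anxious"))  # matches the anxiety entry
```

Because such a system can only pick from predefined responses, its coverage is bounded by the stored question set, which is exactly the limitation the Seq2Seq layer in the proposed system addresses.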
Eliane M. Boucher, et al. [44] proposed a system for AI-powered chatbots in digital mental
health interventions that uses advanced AI models like GPT-3/4 and BERT, integrating
evidence-based therapeutic approaches such as CBT and DBT to provide personalized,
adaptive mental health support. It tracks users' emotional states in real time, offering
dynamic interventions based on individual needs and long-term mental health progress. The
system demonstrates high accuracy (80%-90%) in emotion detection and therapeutic
effectiveness, making it suitable for managing conditions like anxiety, depression, and stress,
though it has limitations related to computational demands and privacy concerns. Despite its
effectiveness, there is a significant research gap in enhancing cultural adaptability,
incorporating multi-modal emotion detection, and measuring long-term outcomes. In
contrast, existing systems rely on simpler rule-based approaches with limited adaptability,
personalization, and therapeutic depth, leading to moderate accuracy (60%-75%) and lower
engagement, highlighting the need for further advances in AI-driven mental health
interventions.
Khamis, et al. [45] proposed a system that uses computer vision (CNNs, YOLO, Faster
R-CNN), NLP (BERT, GPT-3), reinforcement learning (Q-Learning, DQN), and AI for
disease prediction (LSTM, ARIMA, Prophet), with accuracy of 80% to 95%. The existing
system includes manual contact tracing, basic threshold-based temperature screening,
inventory management algorithms (EOQ, reorder points), and basic telemedicine platforms,
with accuracy of 50%-80%. The proposed system achieves high accuracy in tasks such as
resource allocation, temperature screening, and disease prediction, while the existing system,
though useful in specific contexts, faces significant limitations in scalability, real-time
adaptability, and accuracy, particularly in handling large-scale pandemic scenarios.
Pat Pataranutaporn, et al. [46] proposed a system that uses NLP (RNNs, LSTMs), RL
(Q-Learning and Deep Q-Networks (DQN)), and Convolutional Neural Networks (CNNs),
with accuracy of 80% to 90%. The existing systems are rule-based chatbots (decision trees,
if-then rules), achieving accuracies of 60% to 85%. The proposed system shows positive
results in engagement and well-being impacts, whereas existing systems tend to rely on rigid
rules and predefined avatars, resulting in lower interaction quality and limited
personalization.
Zhou, et al. [47] proposed a system that leverages natural language processing and sentiment
analysis to monitor and support users with anxiety and depression in real time. This AI-driven
chatbot was compared to traditional therapy approaches and static self-help apps, aiming to
provide a more responsive and personalized experience. The system demonstrated an 85%
accuracy rate in detecting depressive symptoms and a 78% accuracy rate for anxiety, making
it a promising tool in mental health monitoring. However, the study was limited in its focus,
addressing primarily single conditions rather than co-morbid disorders, and lacked
multilingual support. Further, the results were derived from a relatively small and less diverse
sample group, which may limit the generalizability of findings.
Denecke, et al. [48] proposed a system that uses Machine Learning (ML) techniques such as
clustering, Q-learning, and decision trees, achieving overall accuracy ranging from 80% to
98%. The existing system uses Random Forests and SVM, with accuracy ranging from 70%
to 85%. The proposed system results in increased accuracy in emotion recognition,
personalized support, continuous learning, stress management, and user motivation, while
the existing system achieved successful identification of risk cases, personal coping
strategies, enhanced therapist-client cooperation, and personalized improvement in user
self-awareness.
Wang L, et al. [49] proposed a system for real-time sentiment detection in mental health
chatbots that utilizes advanced deep learning models like GPT-3/4 and BERT to analyze
emotional shifts, providing highly personalized and adaptive feedback with an accuracy of
80%-90%. The system tracks users' emotional states and detects subtle nuances in sentiment,
offering targeted interventions for conditions like anxiety, depression, and stress. However, it
faces challenges related to computational demands, privacy concerns, and the ethical risks of
emotional manipulation. In contrast, existing systems rely on basic rule-based sentiment
analysis, resulting in moderate accuracy (60%-75%) and less personalized user experiences,
although they are easier to scale and maintain.
Kumar N, et al. [50] proposed a system using deep learning models (e.g., transformers like
GPT-3/4) with fine-tuned CBT modules, utilizing NLP and emotion detection with high
accuracy (80%-90%) in recognizing cognitive distortions and providing evidence-based
therapeutic interventions. The existing system uses rule-based or template-based NLP
models, sometimes incorporating simple CBT principles without deep learning, with
moderate accuracy (60%-75%) in detecting emotions or thoughts and often without
sophisticated mechanisms for providing therapeutic responses. The proposed system shows
strong performance in delivering effective, personalized mental health support, tracking
progress, and applying CBT methods to real-time situations, whereas the existing system
shows basic performance with limited capacity for therapeutic intervention or long-term
progress tracking. The limitations of the proposed system are the risk of misapplying CBT
techniques without human supervision, ethical concerns around data privacy, and emotional
manipulation.
Lee H, et al. [51] proposed a system that uses Machine Learning (ML) techniques such as
Cognitive Behavioral Techniques (CBT), achieving overall accuracy ranging from 75% to
95%. The existing system includes flowcharts, decision trees, Logistic Regression, and
LSTM networks, with accuracy ranging from 60% to 90%. The proposed system results in
improved therapist-guided support, increased communication, and better health outcomes,
while the existing system provided effective symptom identification, personalized support
and resources, and increased motivation.
Chang L, et al. [52] proposed a system that uses Machine Learning (ML) techniques such as
Logistic Regression, clustering, Q-learning, and decision trees, achieving overall accuracy
ranging from 80% to 95%. The existing system uses Random Forests, Expert Systems, and
SVM, with accuracy ranging from 60% to 85%. The proposed system achieves accurate
mood tracking, better coping skills, and early detection of mood changes and mental health
issues, while the existing system effectively identifies mental health symptoms, provides
basic mental health support, and improves outcomes.
Andersson, et al. [53] proposed a system that uses Machine Learning (ML) techniques such
as Random Forests, Support Vector Machines (SVM), and K-nearest neighbors (KNN),
achieving overall accuracy ranging from 75% to 90%. The existing system consists of expert
systems, decision trees, flowcharts, Logistic Regression, Random Forests, and SVM, with
accuracy ranging from 50% to 80%. The proposed system delivers personalized insights
adapted to user needs, while the existing system provides basic recognition of mental health
issues and behavioral changes.
Martínez A, et al. [54] proposed a system that uses Machine Learning (ML) techniques such
as Random Forests and Support Vector Machines (SVM), achieving overall accuracy ranging
from 55% to 96%. The existing system includes decision trees, flowcharts, Logistic
Regression, Random Forests, and SVM, with accuracy ranging from 60% to 80%. The
proposed system improves satisfaction and user engagement, while the existing system
improves mood classification and recognition.
Walker J, et al. [55] proposed a system that uses Machine Learning (ML) techniques such as
the Cognitive Behavioural Technique (CBT), achieving overall accuracy ranging from 80%
to 90%. The existing system includes dimensionality reduction, flowcharts, Logistic
Regression, Random Forests, and SVM, with accuracy ranging from 75% to 85%. The
proposed system results in reduced hospital readmissions and greater user trust and
confidence in conversational support, while the existing system provides empathy
recognition and accurate identification of users.
Singh A, et al. [56] proposed a system that uses Machine Learning (ML) techniques such as
Naive Bayes, reinforcement learning, and text classification, achieving overall accuracy
ranging from 70% to 85%. The existing system includes Long Short-Term Memory (LSTM),
Random Forests, and SVM, with accuracy ranging from 50% to 86%. The proposed system
results in improved problem detection and a better understanding of the concept, while the
existing system provides sentiment analysis, increased support, and improved mental health
management.
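A Naive Bayes text classifier of the kind several of these systems employ can be sketched in a few lines of pure Python; the toy training messages and labels below are illustrative assumptions, not data from the surveyed work.

```python
# Multinomial Naive Bayes with add-one (Laplace) smoothing for two-class
# text classification. Training messages and labels are toy examples.
import math
from collections import Counter, defaultdict

train = [
    ("i feel hopeless and tired", "distress"),
    ("everything is too much for me", "distress"),
    ("i had a great day today", "neutral"),
    ("work went fine and i feel good", "neutral"),
]

word_counts = defaultdict(Counter)
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for c in word_counts.values() for w in c}

def predict(text):
    scores = {}
    for label in class_counts:
        total = sum(word_counts[label].values())
        # log prior + sum of smoothed log likelihoods
        score = math.log(class_counts[label] / sum(class_counts.values()))
        for w in text.split():
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(predict("i feel so tired and hopeless"))  # distress
```

In practice such classifiers are trained on thousands of labeled messages rather than four, but the scoring logic is the same.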
Lewis D, et al. [57] proposed a system that uses Machine Learning (ML) techniques such as
Logistic Regression, clustering, Q-learning, and decision trees, achieving overall accuracy
ranging from 75% to 96%. The existing system uses Random Forests, Expert Systems, and
SVM, with accuracy ranging from 50% to 85%. The proposed system provides mood
tracking and the detection of mood changes, while the existing system effectively identifies
mental health symptoms, supports and improves outcomes, identifies basic mental health
issues, and directs users toward support services.
Perez M, et al. [58] proposed a system that uses Machine Learning (ML) techniques such as
Naive Bayes, Random Forests, and Support Vector Machines (SVM), achieving overall
accuracy ranging from 60% to 92%. The existing system includes Long Short-Term Memory
(LSTM), Random Forests, and SVM, with accuracy ranging from 56% to 85%. The proposed
system provides stress management, emotion recognition, and reduced anxiety and
depression, while the existing system improves satisfaction and therapy support and
evaluates user feedback.
Oliveira P, et al. [59] proposed a system that uses Machine Learning (ML) techniques such as
Random Forests, Support Vector Machines (SVM), and K-nearest neighbors (KNN),
achieving overall accuracy ranging from 75% to 92%. The existing system includes expert
systems, decision trees, flowcharts, Logistic Regression, Random Forests, and SVM, with
accuracy ranging from 60% to 82%. The proposed system delivers enhanced user trust and
confidence in conversational support and improved coordination with healthcare services and
resources, while the existing system provides regular monitoring and evaluation of user
feedback, precise sentiment analysis, empathy recognition, and emotional intelligence.
Thompson J, et al. [60] proposed a system that uses Machine Learning (ML) techniques such
as Naive Bayes, reinforcement learning, and text classification, achieving overall accuracy
ranging from 85% to 92%. The existing system includes Long Short-Term Memory (LSTM),
Random Forests, and SVM, with accuracy ranging from 60% to 80%. The proposed system
provides live support to users, improved detection of mental health problems, and a better
understanding of the concept, while the existing system provides accurate sentiment analysis,
increased therapy support, and better mental health management.
Kim E, et al. [61] proposed a system that uses Deep Learning (DL) techniques such as Long
Short-Term Memory (LSTM), Recurrent Neural Networks (RNNs), and Deep Q-Networks
(DQNs), achieving an overall accuracy of around 70% to 85%. The existing system includes
Support Vector Machines (SVM), Random Forests, and decision trees, with accuracy ranging
from 60% to 70%. The proposed system provides emotional support and improved accuracy
in identifying emotional conditions, while the existing system offers sentiment classification
and recognition, basic emotional support and resources, and support for mental health
management.
Martin L, et al. [62] proposed a system that uses Machine Learning (ML) techniques such as
Random Forests and Support Vector Machines (SVM), achieving an overall accuracy of
around 80% to 92%. The existing system includes decision trees, clustering, and
dimensionality reduction, with accuracy ranging from 65% to 85%. The proposed system
delivers enhanced personalized support and therapy recommendations, along with improved
social support and community connectivity, while the existing system offers basic
personalized support and therapy recommendations, effective mood classification, and
limited social support.
Kim H, et al. [63] proposed a system that uses Machine Learning (ML) techniques such as
Long Short-Term Memory (LSTM), Random Forests, and Support Vector Machines (SVM),
achieving an overall accuracy of around 80% to 95%. The existing system includes Logistic
Regression, decision trees, and SVM, with accuracy ranging from 70% to 85%. The proposed
system improves user engagement and therapeutic outcomes, while the existing system offers
basic personalized support and responses with limited emotional coping skills.
Thomas M, et al. [64] proposed a system that uses Machine Learning (ML) techniques such
as Logistic Regression, clustering, Q-learning, and decision trees, achieving overall accuracy
ranging from 80% to 90%. The existing system uses Random Forests, Expert Systems, and
SVM, with accuracy ranging from 60% to 70%. The proposed system results in increased
awareness and stronger relationships, while the existing system effectively identifies mental
health symptoms and improves outcomes, enhancing cognitive functioning and
problem-solving skills.
Silva J, et al. [65] proposed a system that uses Machine Learning (ML) techniques such as
Random Forests and Support Vector Machines (SVM), achieving overall accuracy ranging
from 80% to 95%. The existing system includes flowcharts, Logistic Regression, Random
Forests, and SVM, with accuracy ranging from 75% to 85%. The proposed system improves
context understanding and mental health outcomes, while the existing system recognizes
emotions and supports user satisfaction.
Rivera C, et al. [66] proposed a system that uses Machine Learning (ML) techniques such as
Logistic Regression, clustering, Q-learning, and decision trees, achieving overall accuracy
ranging from 75% to 96%. The existing system uses Random Forests, Expert Systems, and
SVM, with accuracy ranging from 50% to 85%. The proposed system provides mood
tracking and the detection of mood changes, while the existing system effectively identifies
mental health symptoms, supports and improves outcomes, and identifies mental health
issues.
Brown R, et al. [67] proposed a system that uses Machine Learning (ML) techniques such as
Random Forests and Support Vector Machines (SVM), achieving an overall accuracy of
around 70% to 85%. The existing system includes Logistic Regression, decision trees,
clustering, and dimensionality reduction, with accuracy ranging from 60% to 70%. The
proposed system delivers improved patient activation, better health outcomes, and reduced
health costs, while the existing system tracks hospital readmissions and patient satisfaction
rates and reduces symptoms.
Chen L, et al. [68] proposed a system that uses Machine Learning (ML) techniques such as
Support Vector Machines (SVM) and K-nearest neighbors (KNN), achieving overall
accuracy ranging from 75% to 92%. The existing system uses flowcharts, Logistic
Regression, Random Forests, and SVM, with accuracy ranging from 60% to 82%. The
proposed system delivers enhanced user trust and confidence in conversational support and
improved coordination with healthcare services and resources, while the existing system
provides regular evaluation of user feedback, precise sentiment analysis, and emotional
intelligence.
Santos R, et al. [69] proposed a system that uses Machine Learning (ML) techniques such as
Naive Bayes, Random Forests, and Support Vector Machines (SVM), achieving overall
accuracy ranging from 80% to 96%. The existing system includes Long Short-Term Memory
(LSTM), Random Forests, and SVM, with accuracy ranging from 75% to 85%. The proposed
system provides stress management and emotion recognition, while the existing system
improves satisfaction, user experience, and therapy support through a user-friendly interface.
Mori K, et al. [70] proposed a system that uses Machine Learning (ML) techniques such as
Logistic Regression, clustering, Q-learning, and decision trees, achieving overall accuracy
ranging from 80% to 95%. The existing system uses Random Forests, Expert Systems, and
SVM, with accuracy ranging from 60% to 85%. The proposed system results in increased
awareness, stronger relationships, and increased motivation, while the existing system
provides reduced caregiver burden, improved user well-being, improved outcomes, and
emotion recognition.
Hassan R, et al. [71] proposed a system that likely employs state-of-the-art deep learning
models (such as transformers or LSTM networks) with an average accuracy of 80%-90%.
Existing chatbots generally use simpler sentiment analysis tools that only detect basic
emotions (positive or negative), with an accuracy of 70%. The proposed system is designed
to dynamically adapt its responses based on the emotional state of the user; existing systems
may provide some degree of personalized response, but they typically rely on predefined
templates or rule-based replies. Despite its advanced techniques, the proposed system cannot
achieve perfect emotion detection, especially with complex emotions like anxiety.
Yamamoto A, et al. [72] proposed a system that uses BERT-based sentiment analysis
(Bidirectional Encoder Representations from Transformers), LSTM-based emotion detection
(Long Short-Term Memory networks), and transformers (e.g., GPT, RoBERTa) for sentiment
classification and emotion understanding, with an accuracy of 85%-95%. Existing chatbots
like Woebot, Wysa, and Replika offer basic CBT techniques but may not be fully customized
for GAD, often providing generalized interventions with 70% accuracy. The proposed system
achieves high engagement and improved anxiety reduction over time due to personalized and
dynamic CBT interventions. The existing systems have demonstrated effectiveness in some
trials (e.g., Woebot showed significant improvement in anxiety and depression in clinical
studies), but effectiveness varies depending on user engagement and adherence. The main
limitation is that developing a highly adaptive, emotion-aware system requires sophisticated
NLP models and data.
Chetan Bulla, et al. [73] proposed a system in which ML techniques such as decision trees,
Random Forests, and Support Vector Machines (SVM) are used to classify symptoms and
conditions; accuracy rates in ideal conditions can be quite high (70-95%). The existing
systems are Babylon Health and Ada Health, with an accuracy of 70%. The proposed
AI-based medical assistant chatbot performs similarly to or better than existing systems in
terms of accuracy, engagement, and user satisfaction.
Sarah Carr [74] proposed a system that uses Machine Learning (ML) techniques such as
Naive Bayes, Random Forests, and Support Vector Machines (SVM), achieving overall
accuracy ranging from 80% to 96%. The existing system includes Long Short-Term Memory
(LSTM), Random Forests, and SVM, with accuracy ranging from 75% to 85%. The proposed
system provides better coping skills, stress management, emotion recognition, and reduced
anxiety and depression, while the existing system improves satisfaction, user experience, and
therapy support through a user-friendly interface and increased user trust.
Alaa Ali, et al. [75] proposed a system that uses Machine Learning (ML) techniques such as
K-nearest neighbors (KNN), decision trees, and Recurrent Neural Networks (RNN),
achieving overall accuracy ranging from 80% to 95%. The existing system uses Long
Short-Term Memory (LSTM), Random Forests, and SVM, with accuracy ranging from 70%
to 85%. The proposed system achieves increased accuracy in recognizing the emotions and
attachments of users and in reducing symptoms, while the existing system addresses user
experience and satisfaction with chatbots, provision of emotional support, and identification
of mental health symptoms.
Nguyen T, et al. [76] proposed a system that uses NLP and sentiment analysis models (e.g.,
BERT, GPT, or RoBERTa) to identify the emotional tone in user messages (e.g., stress,
sadness, frustration) with high accuracy (F1 score: 0.75-0.85 for text), yielding a system
capable of detecting complex emotions. The existing system relies primarily on text-based
sentiment analysis (e.g., VADER, TextBlob) with simple rule-based or template responses
and moderate accuracy (F1 score: 0.70-0.80 for basic emotions), and it struggles with
nuanced emotions. One major research gap is emotion detection for mixed or complex
emotions.
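Lexicon-driven tools in the spirit of VADER or TextBlob, cited above as the existing baseline, essentially sum per-word valence scores with simple negation handling. A minimal sketch follows; the tiny lexicon is a hypothetical stand-in for a full resource like VADER's.

```python
# Lexicon-based sentiment scoring: sum word valences, flip the sign of the
# word that follows a negator. The miniature lexicon is a hypothetical
# stand-in for a full sentiment resource.
LEXICON = {"happy": 2.0, "calm": 1.0, "sad": -2.0, "stressed": -1.5, "hopeless": -3.0}
NEGATORS = {"not", "never", "no"}

def sentiment(text):
    score, negate = 0.0, False
    for word in text.lower().split():
        if word in NEGATORS:
            negate = True
            continue
        if word in LEXICON:
            score += -LEXICON[word] if negate else LEXICON[word]
        negate = False
    return score

print(sentiment("i am not happy and very stressed"))  # -3.5
```

The limitation the survey points out falls directly out of this design: a fixed lexicon with word-level negation cannot represent mixed feelings ("relieved but still anxious") the way a contextual model can.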
Kwon Y, et al. [77] proposed an emotion-driven AI chatbot using deep learning and NLP for
emotional support, built on transformer-based models (e.g., BERT, GPT-3/4, or specialized
fine-tuned models) with an accuracy of 90%. The existing system uses rule-based or
pre-trained NLP models (e.g., RNN, LSTM, or sentiment analysis tools) with an accuracy of
75%. The proposed system is highly responsive and adaptive, providing better emotional
support and personalized care, whereas the existing system handles basic queries but may
miss nuanced emotional cues or fail to engage deeply. Research gaps include real-time
learning on diverse datasets and addressing cultural and linguistic differences in emotion
recognition. Limitations include the need for large, diverse training datasets and dependence
on an internet connection for real-time learning and feedback.
Zhang Q, et al. [78] proposed a sentiment-driven AI chatbot for real-time mental health
tracking using advanced NLP and ML techniques, built on transformer-based models (e.g.,
BERT or GPT) with fine-tuned emotion/sentiment recognition that achieves high accuracy
(typically 80%-90%) and more nuanced detection of mixed emotions and subtle sentiment
shifts. The existing systems are rule-based chatbots or traditional sentiment analysis tools for
mental health tracking, with an accuracy of 60%-75%. The proposed system provides a more
holistic view of mental health by tracking sentiment trends over time and offering
personalized emotional insights, whereas the existing system offers simple sentiment
tracking over time but lacks depth and can misinterpret complex emotional states. Research
gaps include continuous improvement based on user-specific data and real-time sentiment
detection across different languages and cultures.
Roberts T, et al. [79] proposed a system that uses transformer-based models (e.g., GPT-3/4,
BERT) with fine-tuned recovery-focused NLP and sentiment analysis, achieving high
accuracy (80%-90%), especially when trained on addiction-specific data (e.g., user
sentiment, behavioral cues, progress tracking). The existing rule-based or traditional chatbots
for general mental health support or addiction recovery achieve moderate accuracy
(60%-75%), offer only general sentiment or emotion recognition, and struggle with nuanced
addiction-related behaviors. The proposed system is highly responsive to the user's recovery
journey, providing motivational support, tracking progress, and predicting relapse risks,
whereas the existing systems provide basic emotional support but lack sophistication in
addressing specific recovery needs and triggers. Research gaps include enhancing models to
recognize more subtle addiction-related emotional cues and triggers, since current tools are
limited to generic mental health support. One major drawback is dependence on large,
high-quality training data specific to substance abuse recovery.
Lu F, et al. [80] proposed a system of deep learning models (e.g., transformers like GPT-4
and multimodal neural networks) that integrates text, voice tone, facial expressions, and
physiological data, achieving high accuracy (80%-90%) by combining multimodal cues for
detecting distress. The existing system uses rule-based or traditional NLP models for text and
basic sentiment analysis for physiological data, with moderate accuracy (60%-75%); it is
limited by a single input type (text or voice) and struggles with complex distress signals. The
proposed system achieves high performance in real-time distress detection and adaptive
responses with personalized interventions, while the existing system shows limited
performance due to single-mode input and less sensitivity to nuanced distress signals. The
proposed system's limitations include the need for high-quality data from diverse sources to
train an accurate model, occasional failure to catch subtle signs of distress, and difficulty
adapting to user-specific patterns.
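One simple way to combine per-modality distress estimates, as the multimodal design described above requires, is late fusion: score each modality separately, then take a weighted average. The weights and alert threshold below are illustrative assumptions, not values from the cited system.

```python
# Late fusion of per-modality distress scores (text, voice, physiology) into
# one estimate. Weights and threshold are illustrative, not tuned values.
WEIGHTS = {"text": 0.5, "voice": 0.3, "physiology": 0.2}

def fused_distress(scores, threshold=0.6):
    """scores: per-modality distress in [0, 1]; missing modalities are
    skipped and the remaining weights are renormalized."""
    present = {m: w for m, w in WEIGHTS.items() if m in scores}
    total = sum(present.values())
    fused = sum(scores[m] * w for m, w in present.items()) / total
    return fused, fused >= threshold

score, alert = fused_distress({"text": 0.8, "voice": 0.7})  # no physiology signal
print(score, alert)
```

Renormalizing over the available modalities lets the system degrade gracefully when a sensor (e.g., a wearable) is absent, which is one reason multimodal systems can stay usable outside laboratory conditions.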
2.4 Summary
AI chatbots hold great potential for supporting mental health: they make help for conditions
like depression, anxiety, and stress accessible, scalable, and cost-effective. Their benefits
include immediately available support, coping strategies, and education, and some can learn
from users to improve over time. Still, challenges abound: low user engagement calls for
more empathetic, personalized responses, and many issues remain around clinical
integration. Further research is needed on how chatbots can be used alongside human care,
advancing their capacity to understand emotions, demonstrating long-term effectiveness, and
working through the complexities of mental illness. Standardized metrics will be needed to
establish success and ensure responsible application in the healthcare domain.
CHAPTER 3
EXISTING SYSTEM
1. Ayurveda
Ayurveda is an ancient Indian medicine system that's over 3,000 years old. It focuses on
achieving balance in the body and mind using natural methods. Key treatments for mental
disturbances include herbs like Brahmi and Ashwagandha, detoxification processes like
Panchakarma, and lifestyle changes tailored to an individual's body type, known as dosha.
These approaches aim to harmonize the body, mind, and spirit, promoting overall health and
well-being.
2. Yoga and Meditation
Yoga and meditation are ancient practices that help your body and mind stay healthy. Yoga uses
physical postures (asanas) and breathing exercises (pranayama) to improve flexibility,
strength, and focus. Meditation helps calm the mind and brings mental peace. Together, they
reduce stress, improve concentration, and make your mind clearer, helping manage mental
health.
3. Psychiatrists
Psychiatrists are medical doctors who specialize in diagnosing and treating mental health
conditions. They assess, diagnose, and treat mental, emotional, and behavioral disorders
using a combination of medical tests, conversations about symptoms, and medical history.
Psychiatrists can prescribe medication, provide psychotherapy, and develop treatment plans
tailored to individual needs. They often work with other mental health professionals to
provide comprehensive care for conditions such as depression, anxiety, schizophrenia, and
bipolar disorder [4].
4. Spiritual and Religious Practices
Spiritual and religious practices play a significant role in mental health for many individuals,
especially in India. These practices include prayer, worship, fasting, and participating in
religious rituals. People often seek spiritual guidance and strength from visiting places of
worship and engaging in religious activities. These practices provide a sense of community,
purpose, and inner peace, helping individuals cope with mental disturbances and find
emotional support.
5. Folk Healing Practices
Folk healing practices are traditional methods used by communities, especially in rural areas,
to address mental health issues. These often include rituals, prayers, and the use of local herbs
and plants. Local healers, like shamans, perform these rituals to ward off negative energies
and spirits believed to cause mental disturbances. These practices also depend on strong
community support systems to provide both emotional and practical assistance.
6. Community-Based Support Systems
In many regions, community-based support systems are crucial for addressing mental health
issues. Local healers offer personalized care using traditional knowledge and practices.
Regular community gatherings and events strengthen social bonds, providing emotional and
practical support to those in need. These shared resources ensure that individuals do not face
their problems alone, fostering a sense of belonging and mutual aid within the community
[4].
While these traditional methods offer some relief, it's important to remember that they should
not be considered substitutes for modern, evidence-based treatments for serious mental health
conditions. Consulting a qualified mental health professional is crucial for proper diagnosis
and treatment. When integrated with modern medical approaches, some of these traditional
methods can provide a comprehensive framework for addressing the complex interplay
between mind and body.
3.2 Limitations
High Cost
Traditional therapy methods like talk therapy and CBT involve sessions with licensed mental
health professionals, which can be costly. These therapies require significant time, expertise,
and personalized attention from therapists. The costs cover the therapist's education, ongoing
training, professional fees, and often the expenses of maintaining a private practice or clinic.
Additionally, the need for multiple sessions to achieve meaningful results can further increase
the overall cost for individuals seeking help [3].
Long Waiting Times
Long waiting times for mental health services occur due to high demand, a shortage of mental
health professionals, administrative delays, and limited accessibility, particularly in rural or
underserved areas. These factors result in delays for appointments, worsening individuals'
mental health conditions as they wait for treatment. The increasing awareness of mental
health issues has led to more people seeking help, but the existing infrastructure and
resources are often insufficient to meet the growing need, leading to significant delays in
providing timely and effective care [2].
Inaccessibility
Therapy isn't always accessible, especially for people in rural or remote areas, due to factors
like the shortage of mental health professionals, long travel distances to reach clinics, and
limited availability of specialized services. These challenges make it difficult for individuals
to receive timely and effective mental health care. Additionally, rural areas often lack the
resources and infrastructure needed to support comprehensive mental health services, further
exacerbating the issue of accessibility for those in need [2].
Social Stigma
There is a social stigma attached to seeking mental health treatment, which discourages many
people from getting the help they need. This stigma arises from misconceptions and negative
attitudes towards mental health issues, leading to feelings of shame and embarrassment for
those seeking treatment. As a result, individuals may avoid therapy or mental health services
due to fear of judgment, discrimination, or being labelled as "weak" or "crazy." This stigma
can prevent people from accessing timely and effective mental health care, exacerbating their
conditions and creating additional barriers to recovery [2].
Lack of Community Support
In traditional societies, strong community bonds are essential for providing support during
times of mental distress. These bonds offer emotional and practical assistance through close-
knit relationships and shared resources, fostering a sense of belonging and mutual aid.
However, not everyone has access to such supportive communities. Factors like urbanization,
migration, and changes in family structures can lead to social isolation. Without these
community connections, individuals may struggle to find the support they need, making it
harder to address mental health issues effectively [2].
Inaccuracy
In traditional mental health treatment, the accuracy of diagnosis and treatment can sometimes
be challenging for several reasons. Traditional methods often rely on local healers'
knowledge and practices, which may lack scientific validation. These methods might not
fully understand the complexities of mental health disorders, leading to misdiagnosis or
inappropriate treatments. Additionally, traditional approaches often depend on subjective
assessments and observations, which can vary widely between practitioners. Cultural beliefs
and practices can also influence the perception and treatment of mental health issues,
sometimes resulting in methods that may not align with modern medical standards. These
factors can lead to treatments that are not precisely tailored to the individual's specific mental
health needs, potentially affecting their effectiveness [3].
CHAPTER 4
PROPOSED SYSTEM
4.1 Overview
Machine learning is a way for a computer to learn from large datasets presented to it, without
explicit instructions. It requires structured data: unlike scientific research, which begins with
a hypothesis, ML begins by looking at the data and forming its own hypotheses based on the
patterns it detects [8]. It then creates algorithms that predict new information based on the
patterns generated from the original dataset [8]. This model of AI is data-driven, requiring a
huge amount of structured data, which is a challenge in psychiatry, where many patient
encounters are based on interviews and storytelling on the part of the patient [8].
4.1.2 Classifications
1. Supervised Learning: The model is trained on labeled data, where input-output pairs are
provided. The chatbot learns to make predictions or decisions based on this training data.
Example: Classifying user queries into categories like "billing" or "technical support."
2. Unsupervised Learning: The model is trained on unlabeled data and tries to find
patterns or groupings within the data. It's useful for discovering hidden structures in data.
Example: Clustering similar customer queries together.
3. Semi-Supervised Learning: This approach combines both labeled and unlabeled data for
training. It leverages a small amount of labeled data and a large amount of unlabeled data
to improve learning accuracy. Example: Improving chatbot responses by using a mix of
pre-labeled and new user queries.
4. Reinforcement Learning: The model learns by interacting with an environment and
receiving feedback in the form of rewards or penalties. It's used to develop chatbots that
can improve their performance over time through trial and error. Example: A chatbot that
learns to provide better customer service by receiving positive feedback when it
successfully resolves a query.
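As a minimal sketch of the supervised setting described above (the sample queries, tags, and the use of a bag-of-words featurizer with scikit-learn are illustrative assumptions, not the project's full pipeline):

```python
# Minimal supervised intent-classification sketch (hypothetical sample data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier

# Labeled input-output pairs: each query is tagged with an intent.
queries = ["hello there", "hi, good morning", "i feel very sad today",
           "nothing makes me happy", "bye for now", "see you later"]
intents = ["greeting", "greeting", "sad", "sad", "farewell", "farewell"]

# Convert text into numerical features (bag of words).
vec = CountVectorizer()
X = vec.fit_transform(queries)

# Train the classifier on the labeled input-output pairs.
clf = RandomForestClassifier(random_state=42)
clf.fit(X, intents)

# Predict the intent of a new, unseen query.
print(clf.predict(vec.transform(["good morning to you"])))
```

The same pattern scales to any intent scheme: only the labeled examples change, not the training code.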
4.1.3 Model
The aim of this project is to develop an AI-driven assistant chatbot that provides support and
counseling for individuals experiencing mental health disturbances. The chatbot leverages
machine learning techniques and NLP to deliver personalized, accessible, and effective
mental health care.
The name of our proposed AI-driven chatbot for mental health support is “Pandora” and it is
designed to provide accessible, text-based assistance for individuals experiencing various
mental disturbances. By engaging users in personalized conversations, the chatbot allows
them to express their feelings and concerns openly, fostering a sense of connection and
understanding. This interaction is vital for those who feel isolated or unsure about discussing
their mental health with others.
The chatbot can also identify distress and crises. If the user shows hopelessness, extreme
anxiety, or suicidal thoughts, the chatbot knows how to respond: it immediately provides
access to resources and can even make a referral for professional assistance. This ensures
that users have resources available at the moment they need them most.
It is a mental-health-aware resource that provides basic support to people trying to better
their mental health, offering a safe, non-judgmental environment in which users feel at ease
and can examine their thoughts and feelings as they choose.
This personalized assistant chatbot also provides light-hearted content such as jokes and
humorous messages. Round-the-clock support gives users the feeling that it is working only
for them. The system helps detect health concerns at an early stage and provides preventive
measures for avoiding mental disturbances.
Figure 4.1: Block Diagram
JSON Dataset
The chatbot is trained using a dataset containing examples of user inputs (patterns) and
corresponding responses.
Text Preprocessing
The input text (user's message) is cleaned, tokenized (split into words), and converted into a
format that the machine learning model can process.
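A minimal sketch of this preprocessing step, assuming TensorFlow/Keras is available; the sample patterns here are hypothetical:

```python
# Tokenization and padding sketch: turning raw text into model-ready sequences.
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

patterns = ["I feel anxious", "I am very stressed", "hello"]

tokenizer = Tokenizer(lower=True)   # builds a word -> integer vocabulary
tokenizer.fit_on_texts(patterns)
seqs = tokenizer.texts_to_sequences(patterns)  # each sentence becomes a list of ids
X = pad_sequences(seqs, padding='post')        # pad shorter sequences with trailing zeros
print(X.shape)   # every row now has the length of the longest pattern
```

The zero-padding ensures all inputs share one fixed shape, which both the Random Forest and the LSTM model require.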
A Random Forest Classifier is used for intent classification. Random Forest is a meta-
estimator that fits a number of decision trees on various sub-samples of the training data and
averages their predictions to improve accuracy and control over-fitting. It is an ensemble
learning method that constructs multiple decision trees during training and outputs the class
that is the mode of the classes predicted by the individual trees.
Preprocessing: Clean and preprocess the data to make it suitable for training the model.
Feature Extraction: Extract relevant features from the text data, such as keywords, phrases, or
embeddings.
Training the Model: Use the Random Forest Classifier to train the chatbot on the dataset. The
classifier will learn to predict the appropriate response based on the input query.
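The three steps above can be sketched end to end as follows; the feature matrix and labels here are randomly generated stand-in data, not the project's dataset:

```python
# End-to-end Random Forest training sketch on stand-in data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Stand-in data: 100 "padded sequences" of length 10, 5 intent classes.
rng = np.random.default_rng(0)
X = rng.integers(0, 50, size=(100, 10))
y = rng.integers(0, 5, size=100)

# Hold out 20% of the data for evaluation.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

rfc = RandomForestClassifier(random_state=42)  # ensemble of decision trees
rfc.fit(X_train, y_train)                      # learn tag from sequence features
y_pred = rfc.predict(X_test)
acc = accuracy_score(y_test, y_pred)
print("accuracy:", acc)
```

On random data the accuracy is near chance; on the real intents dataset the same pipeline produces the baseline figure reported in the results chapter.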
Drawbacks
The Random Forest baseline treats each padded sequence as a flat feature vector and cannot
model word order, which limits its accuracy on conversational text. LSTM networks, an
advanced type of Recurrent Neural Network (RNN), address this and are fundamental in
modern chatbot development due to their unique capability to handle long-term dependencies
and sequential data. Traditional RNNs often struggle with remembering long sequences,
leading to the vanishing gradient problem, where the influence of older inputs diminishes
over time. LSTMs mitigate this issue by incorporating memory cells and gates that control
the flow of information, allowing them to maintain context over extended dialogues.
In chatbot applications, this ability to retain and utilize context is crucial. Conversations are
inherently sequential and context-dependent, requiring the bot to remember past interactions
to generate relevant and coherent responses. LSTMs excel in such environments by
preserving information over long sequences, enabling the chatbot to understand references to
earlier parts of the conversation and providing contextually appropriate responses.
Training an LSTM-based chatbot involves feeding the model large datasets of conversations,
helping it learn language patterns and the nuances of human dialogue. This process includes
converting text data into numerical formats using embeddings, which the LSTM then
processes to predict and generate natural responses. The result is a chatbot capable of
human-like conversation.
Advantages
Model Prediction
When the user types a message, the model predicts the intent (meaning) of the message based
on what it has learned during training.
Response Generation
The chatbot selects a response from a pre-defined set of responses that match the predicted
intent.
Chatbot Interaction
The chatbot interacts with the user, displaying appropriate responses based on the user's
message.
CHAPTER 7
SYSTEM REQUIREMENTS
SOFTWARE REQUIREMENTS
The chatbot project is designed to recognize user intents and provide relevant responses using
machine learning models like Random Forest and LSTM. It requires a development
environment with Python 3.7.6 and runs on Windows, Linux, or macOS. Key libraries
include TensorFlow, Keras, NumPy, Pandas, and Scikit-learn for model training, data
processing, and machine learning tasks. The system should support Jupyter Notebook for
interactive development, with an optional GPU for faster model training. The design should
prioritize scalability, modularity, and efficient performance, while the documentation should
provide clear setup instructions for developers and usage guidelines for end-users.
HARDWARE REQUIREMENTS
Minimum hardware requirements depend heavily on the particular software being developed
in a given Jupyter notebook. Applications that must store large arrays or objects in memory
require more RAM, whereas applications that must perform numerous calculations or tasks
quickly require a faster processor.
CHAPTER 8
FUNCTIONAL REQUIREMENTS
Intent Recognition: The system must be capable of identifying user intents based on input
patterns using machine learning models like Random Forest and LSTM. Each user input is
mapped to a corresponding intent (tag) which defines the appropriate response.
Response Generation: The chatbot should provide relevant, pre-defined responses based on
the detected intent from the training data. The responses must be randomly selected from the
matching intent’s set of responses.
Data Processing: The system must preprocess user inputs through tokenization and padding
to convert text into sequences that can be input into the machine learning model. It should use
a tokenizer to build a vocabulary of words and convert input text into numerical sequences.
Model Training: The system must be able to train a machine learning model (Random Forest
or LSTM) on user input patterns and corresponding tags (intents). The training should
involve splitting the data into training and testing sets and evaluating model performance.
Model Testing: The chatbot must be able to predict the intent of a new user input during
interactions and select an appropriate response. The model should be able to generalize well
to unseen input patterns.
User Interaction: The system should provide an interactive chat interface where users can
input text and receive responses from the chatbot. It should also include a mechanism for
terminating the conversation, such as the user typing "quit" or "exit".
Model Accuracy: The system should achieve high accuracy in intent classification, ideally
greater than 90%, based on model evaluation metrics such as accuracy score from the
classification results on test data.
Error Handling: The system should handle any input errors gracefully, ensuring that invalid
or malformed user inputs are processed without causing crashes or system failures.
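One way such graceful handling might look; `predict_intent` is a hypothetical stub standing in for the real model call, used only to illustrate the try/except pattern:

```python
# Graceful input-error handling sketch; the prediction step is a stub.
def predict_intent(text):
    if not text or not text.strip():
        raise ValueError("empty input")
    return "greeting"  # stub prediction standing in for the trained model

def safe_reply(user_input):
    try:
        intent = predict_intent(user_input)
        return f"(intent: {intent})"
    except ValueError:
        # Invalid or malformed input must not crash the chatbot.
        return "Sorry, I didn't catch that. Could you rephrase?"

print(safe_reply("hello"))
print(safe_reply("   "))
```

Wrapping the prediction step this way keeps the chat loop alive no matter what the user types.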
CHAPTER 9
SOURCE CODE
import numpy as np
import pandas as pd
import warnings
warnings.filterwarnings('ignore')
# DATA READING
import json
with open('intents.json') as f:
    data = json.load(f)
df = pd.DataFrame(data['intents'])
df

# Flatten the intents: one row per (tag, pattern) pair.
dic = {'tag': [], 'patterns': [], 'responses': []}
for i in range(len(df)):
    ptrns = df.loc[i, 'patterns']
    rspns = df.loc[i, 'responses']
    tag = df.loc[i, 'tag']
    for j in range(len(ptrns)):
        dic['tag'].append(tag)
        dic['patterns'].append(ptrns[j])
        dic['responses'].append(rspns)
df = pd.DataFrame.from_dict(dic)
df

df['tag'].unique()
# DATA PREPROCESSING
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

tokenizer = Tokenizer(lower=True, split=' ')
tokenizer.fit_on_texts(df['patterns'])
tokenizer.get_config()
vocab_size = len(tokenizer.word_index)
ptrn2seq = tokenizer.texts_to_sequences(df['patterns'])
X = pad_sequences(ptrn2seq, padding='post')

lbl_enc = LabelEncoder()
y = lbl_enc.fit_transform(df['tag'])

# RANDOM FOREST BASELINE
X_train, X_test, Y_train, Y_test = train_test_split(X, y, test_size=0.2, random_state=42)
rfc = RandomForestClassifier()
rfc.fit(X_train, Y_train)
Y_pred = rfc.predict(X_test)
# LSTM MODEL
import tensorflow
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Input, Embedding, LSTM, LayerNormalization,
                                     Dense, Dropout)
from tensorflow.keras.utils import plot_model

model = Sequential()
model.add(Input(shape=(X.shape[1],)))
# Embedding maps token ids to dense vectors before the LSTM stack.
model.add(Embedding(input_dim=vocab_size + 1, output_dim=100))
model.add(LSTM(32, return_sequences=True))
model.add(LayerNormalization())
model.add(LSTM(32, return_sequences=True))
model.add(LayerNormalization())
model.add(LSTM(32))
model.add(LayerNormalization())
model.add(Dense(128, activation="relu"))
model.add(LayerNormalization())
model.add(Dropout(0.2))
model.add(Dense(128, activation="relu"))
model.add(LayerNormalization())
model.add(Dropout(0.2))
model.add(Dense(len(np.unique(y)), activation="softmax"))
model.compile(optimizer='adam', loss="sparse_categorical_crossentropy",
              metrics=['accuracy'])
model.summary()
plot_model(model, show_shapes=True)

# MODEL TRAINING: 50 epochs with early stopping on accuracy.
model.fit(X, y, epochs=50,
          callbacks=[tensorflow.keras.callbacks.EarlyStopping(
              monitor='accuracy', patience=3)])
# MODEL TESTING
import re
import random

def generate_answer(pattern):
    text = []
    txt = re.sub(r"[^a-zA-Z']", ' ', pattern)  # keep letters only
    txt = txt.lower()
    txt = txt.split()
    txt = " ".join(txt)
    text.append(txt)
    x_test = tokenizer.texts_to_sequences(text)
    x_test = np.array(x_test).squeeze()
    x_test = pad_sequences([x_test], padding='post', maxlen=X.shape[1])
    y_pred = model.predict(x_test)
    y_pred = y_pred.argmax()
    tag = lbl_enc.inverse_transform([y_pred])[0]
    responses = df[df['tag'] == tag]['responses'].values[0]
    print("you: {}".format(pattern))
    print("model: {}".format(random.choice(responses)))

generate_answer('help me:')
generate_answer(':')
def chatbot():
    print("Chatbot: Hi! I'm your friendly chatbot. How can I assist you today?")
    while True:
        user_input = input("You: ")
        if user_input.lower() in ("quit", "exit"):
            print("Chatbot: Goodbye!")
            break
        generate_answer(user_input)

if __name__ == "__main__":
    chatbot()
CHAPTER 10
RESULTS AND DISCUSSION
This project implements a chatbot system using deep learning techniques to understand and
generate responses to user input. In summary, this loads a dataset of intents, trains a chatbot
model using a sequential neural network with LSTM layers, and allows users to interact with
the chatbot in a simple text-based conversation. The chatbot responds to user input based on
patterns learned during training. Let's break down the implementation step by step:
The code starts by importing necessary libraries, including numpy, pandas, warnings, and
json. It also sets up a filter to suppress warnings. The json library is used to read data from a
file named "intents.json".
It opens and reads the intents.json file, which contains a dataset for training the chatbot.
The code extracts the 'intents' data from the JSON file and converts it into a pandas
DataFrame named ‘df’. It creates a dictionary ‘dic’ to store ‘tag’, ‘patterns’, and ‘responses’
data separately.
The code loops through the DataFrame ‘df’ and extracts ‘patterns’, ‘responses’, and ‘tag’
for each intent. It then appends this data to the ‘dic’ dictionary.
A new DataFrame ‘df’ is created from the ‘dic’ dictionary, combining ‘tag’, ‘patterns’, and
‘responses’.
STEP 6: Tokenization
The code imports the Tokenizer class from the Keras library and tokenizes the ‘patterns’ data
in ‘df’. It computes the vocabulary size and prints it.
The labels (‘tag’) are encoded using sklearn’s LabelEncoder, and the encoded values are
stored in ‘y’.
The model first splits the data into training and testing sets using train_test_split.
A Random Forest Classifier is initialized and trained on the training data.
After training, the model makes predictions based on the test data, and the accuracy is
evaluated using the accuracy score. The output from the Random Forest model (around
26% accuracy) isn't very high because Random Forests are not the best-suited method for
text classification in this case.
A sequential Keras model is defined. The model includes layers for embedding, LSTM
(Long Short-Term Memory) networks, layer normalization, dense layers, and dropout layers.
It ends with a softmax activation layer whose number of units equals the number of unique
tags in the dataset. The model is compiled with the Adam optimizer and sparse categorical
cross-entropy loss.
The model summary is printed, showing the architecture and the number of parameters.
The model is trained on the tokenized and padded input data X and the encoded labels.
The training includes early stopping based on accuracy and runs for 50 epochs.
Tag: The "tag" column represents a categorical label or tag associated with a specific
intent or category of user input. In the context of a chatbot or NLP model, these tags
typically correspond to different topics, commands, or purposes that the chatbot is
designed to recognize and respond to. For example, tags include "greeting”, "farewell",
"information request", "emotions", etc.
Patterns: The "patterns" column contains textual patterns or user input examples that are
associated with each tag. These patterns serve as training data for the chatbot or NLP
model to learn how to recognize the user’s intent or request. Patterns can be in the form of
sentences, phrases, or keywords, each pattern is used to teach the model what kind of user
input corresponds to a particular tag.
Responses: The "responses” column includes predefined responses or messages that the
chatbot should provide when it recognizes a specific tag or user intent. These responses
are the chatbot's way of interacting with the user and providing relevant information or
assistance based on the detected intent. Responses can vary depending on the tag and may
include greetings, answers to questions, instructions, or any other appropriate text.
Figure 10.1 displays a portion of the original dataset used for training the chatbot. It shows
examples of intents or patterns along with their corresponding tags and responses. Figure
10.2 shows how the original dataset has been structured after converting it into a pandas
DataFrame. It focuses on the "patterns" and "tags" columns. The "patterns" column contains
the input text or user queries. The "tags" column contains labels or categories associated with
the input patterns.
Figure 10.1: Original Dataset.
Figure 10.3 summarizes the unique values found in the “tag” column of the DataFrame. It
shows the different categories or tags that the chatbot has been trained to recognize. It
provides an overview of the classes or intents that the chatbot can identify.
Figure 10.3: Unique Values of Column Tag.
Figure 10.4 shows the accuracy given by the existing methodology (RFC), i.e., 26%.
Figure 10.5 represents the architecture of the LSTM (Long Short-Term Memory) model used
in the chatbot. It shows a detailed summary of the model's layers, including input dimensions,
layer types (e.g., embedding, LSTM), the number of units or neurons in each layer, activation
functions, and the total number of trainable parameters.
Figure 10.5: Model Summary of LSTM.
Figure 10.6 displays the training performance metrics of the LSTM model over a series of
epochs, this shows how the accuracy of the model changes with each training epoch. The
proposed model obtained an accuracy of 99.14%. It also shows the training loss, which
measures how well the model's predictions match the actual target values during training.
Lower values indicate better performance. The proposed model achieves 0.0328 of loss.
Figure 10.6: Training Performance of the LSTM Model with Accuracy and Loss for 50
Epochs.
Figure 10.7 illustrates a sample conversation between a patient or user and the chatbot model
designed to monitor emotional health. It showcases how the chatbot responds to user input,
providing an example of a simulated interaction. The conversation includes user queries and
the chatbot's generated responses, demonstrating the chatbot's functionality.
Figure 10.7: Sample Conversation Between Patient and Proposed Model for Monitoring
Mental Health.
10.4 Comparison Table
Fig 10.8: Comparison of Accuracy and Loss between Existing and Proposed
Methodology.
Accuracy:
This measures the overall performance of the model by calculating the percentage of correct
predictions out of the total number of predictions.
Loss:
Loss refers to the measure used to evaluate how well the model is performing during training
and validation. It quantifies the difference between the predicted outputs and the actual target
values.
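Both metrics can be computed directly from model outputs. A small sketch with made-up predictions, using NumPy and the same sparse categorical cross-entropy loss the model is compiled with:

```python
import numpy as np

# Made-up model outputs: probabilities over 3 classes for 4 samples.
probs = np.array([[0.9, 0.05, 0.05],
                  [0.1, 0.8, 0.1],
                  [0.2, 0.2, 0.6],
                  [0.3, 0.4, 0.3]])
y_true = np.array([0, 1, 2, 2])  # actual class labels

# Accuracy: fraction of predictions whose argmax matches the label.
y_hat = probs.argmax(axis=1)
accuracy = (y_hat == y_true).mean()

# Sparse categorical cross-entropy: mean of -log(probability of true class).
loss = -np.log(probs[np.arange(len(y_true)), y_true]).mean()

print(f"accuracy = {accuracy:.2f}, loss = {loss:.4f}")
# accuracy = 0.75, loss = 0.5108
```

Lower loss and higher accuracy together indicate that the model's predicted distributions are both correct and confident.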
CHAPTER 11
CONCLUSION AND FUTURE SCOPE
11.1 Conclusion
The accessibility offered by chatbots is a key advantage: users can comfortably express their
emotions and seek help through this platform, which can alleviate some of the barriers and
stigma associated with discussing mental health concerns. This accessibility is especially
valuable in reaching individuals who may be hesitant to seek help through traditional
channels. Moreover, the chatbot system has the potential to amass a substantial dataset of
user interactions. Analyzing this data can yield valuable insights into user behavior, common
mental health issues, and the effectiveness of various interventions. Such data-driven insights
can inform the ongoing development and refinement of mental health support systems.
11.2 Future Scope
The future scope of AI-driven assistant chatbots for mental disturbance models is vast, with
numerous opportunities for further research, development, and application. As technology
and machine learning methods continue to advance, several key areas of potential growth and
enhancement can be identified.
Virtual Therapy
AI chatbots can offer virtual therapy sessions, providing support and counselling to
individuals in need. They can assist in diagnosing mental health conditions, facilitating
consultations, and delivering personalized treatment options.
Preventive Care
Chatbots can be used for early detection and prevention of mental health issues by monitoring
users' mental well-being and providing timely interventions.
Health Promotion
AI chatbots can promote mental health awareness and educate users on coping strategies,
stress management, and healthy lifestyle habits.
Behavioral Change
Chatbots can help users adopt healthier behaviors by providing motivation, reminders, and
tracking progress.
Accessibility
AI chatbots can make mental health support more accessible and affordable, especially in
regions with a shortage of mental health professionals.
Personalization
AI-driven chatbots can offer personalized and adaptive responses based on individual needs,
preferences, and interaction history.
REFERENCES
[2] Green, A., & Zhang, H. "AI Chatbots and Crisis Management in Mental Health". Journal
of Crisis Intervention Technology, 12(1), 34-47, (2024).
[3] Moh. Heri Kurniawan, Hanny, & Tuti Nuraini, “A Systematic Review of Artificial
Intelligence (AI-powered) chatbot Intervention for Managing Chronic Illness”. Journal of
Annals of Medicine, Vol. 56, no. 1, (2024).
[4] Yu-Hao Li, Yu-Lin Li, Mu-Yang Wei, and Guang-Yu Li, “Innovation and challenges of
artificial intelligence technology in personalized healthcare”, Scientific Reports, 14(23),
18994, (2024).
[5] Sally Moy, Mona Irannejad, Stephanie Jeanneret Manning, Mehrdad Farahani, Yomna
Ahmed, Ellis Gao, Radhika Prabhune, Suzan Lorenz, Raza Mirza, Christopher Klinger,
“Patient Perspectives on the Use of Artificial Intelligence in Health Care: A Scoping
Review”, Journal of Patient-Centered Research and Reviews, 11(1): 51–62, (2024).
[6] Jinming Du, and Ben Kei Daniel, "A systematic review of AI-powered chatbots for
English as a foreign language speaking practice”, Computers and Education: Artificial
Intelligence, Vol. 33, no. 2, Pp. 147-164, (2024).
[7] Syed Mahmudul Huq, Rytis, & Robertas, "Dialogue Agents for Artificial Intelligence-
Based Conversational System for Cognitively Disabled: A Systematic Review". Journal of
Disability and Rehabilitation: Assistive Technology, Vol. 19, no. 3, (2024).
[8] Olivia Brown, Asmaa Hassan, and Mowafa Househ, “AI Chatbots for Depression
Management: An Evaluation of Chatbot Interventions and User Satisfaction”, International
Journal of Digital Psychiatry, 15(2), (2024).
[9] Daniel Lee, and Mahmood Alzubaidi, "Mental Health Support in the Digital Age: AI-
Driven Chatbots for Cognitive Behavioral Therapy”, Journal of Artificial Intelligence in
Health, 9(4), (2024).
[10] Mia Chen, Ghazaleh Azar, Umberto Maniscalco, and Massimo Esposito. “Challenges in
Implementing AI Chatbots for Mental Health: A Review of Systematic Barriers”, Journal of
AI and Healthcare, 7(3), (2024).
[11] Angela Lee, and Jaesu Han, “Personalization Techniques for AI-Based Mental Health
Chatbots: Current Trends and Future Directions”, Journal of Personalized Health Technology,
10(1), (2024).
[12] Brian McArthur, Dawn Bounds, Jessica Borelli, and Amir Rahmani. “Building Trust
with AI: How Chatbots Can Improve User Engagement in Mental Health”, Journal of Digital
Health Innovations, 8(3), (2024).
[13] David Johnson, Hela Desai, Breeana Wuckovich, and Randee Schmitt. “Assessing the
Impact of AI Chatbots on Reducing Symptoms of PTSD in Veterans”, Journal of Military
Psychiatry and AI, 3(1), (2024).
[15] Anshika Jain, Garima Srivastava, Shikha Singh, and Vandana Dubey, “Application of
Artificial Intelligence (AI) Technologies in Employing Chatbots to Access Mental Health”,
Computer Vision and AI-Integrated IoT Technologies in the Medical Ecosystem, 23(1), 1-23,
(2024).
[16] Samir Dey, Tanisha Mitra, and Titli Nath, “Artificial Intelligence and its Application in
Mental Health Care”, Cirs Publication, Volume 13, (2024).
[17] Prakash Nathaniel Kumar Sarella, and Vinny Therissa Mangam, “AI-Driven Natural
Language Processing in Healthcare: Transforming Patient-Provider Communication”, Indian
Journal of Pharmacy Practice, 17(1):21-26, (2024).
[18] Mohammad Amin Kuhaila, Nazik Alturkib, Justin Thomas, Amal K. Alkhalifa, and
Amal Alshardan, “Human-Human vs Human-AI Therapy: An Empirical Study”, International
Journal of Human-Computer Interaction, Volume 23, 1-12, (2024).
[21] Sri Banerjee, Pat Dunn, Scott Conard, and Asif Ali, “Mental Health Applications of
Generative AI and Large Language Modeling in the United States”, International Journal of
Environmental Research and Public Health, 21(7), 910, (2024).
[24] Arfan Ahmed, Asmaa Hassan, Sarah Aziz, Alaa A Abd-Alrazaq, Nashva Ali, Mahmood
Alzubaidi, Dena Al-Thani, Bushra Elhusein, Mohamed Ali Siddig, Maram Ahmed, Mowafa
Househ, “Chatbot features for anxiety and depression: A scoping review”, Health Informatics
Journal, 29(1), (2023).
[25] Jay H. Shore, Darlene R King, Guransh Nanda, Joel Stoddard, Allison Dempsey, Sarah
Hergert, John Torous, “An Introduction to Generative Artificial Intelligence in Mental Health
Care: Considerations and Guidance”, Current Psychiatry Reports, volume 2, 839–846,
(2023).
[26] Theodore Vial, and Alires Almon, “Artificial Intelligence in Mental Health Therapy for
Children and Adolescents", JAMA Pediatrics, 177(12):1251-1252, (2023).
[27] Abdulqadir J Nashwan, Suzan Gharib, Majdi Alhadidi, A. El-Ashry, Asma Alamgir,
Mohammed Adnan Al-Hassan, Mahmoud Abdelwahab Khedr, Shaimaa Samir Dawood,
Bassema Abufarsak, “Harnessing Artificial Intelligence: Strategies for Mental Health Nurses
in Optimizing Psychiatric Patient Care", Issues in Mental Health Nursing, 44(10):1020-1034,
(2023).
[28] Gordon, K, Feng Liu, Qianqian Ju, Qijian Zheng, Yujia Peng, “AI-Driven Chatbots for
Mental Health: Innovations and Challenges”, Journal of Medical Internet Research, 25(3),
e29573, (2023).
[29] Patterson, S, Yenushka Goonesekera, Liesje Donkin, “AI Chatbots in Mental Health
Care: The Future of Cognitive Behavioral Therapy”, Journal of Technology in Behavioral
Science, 8(1), 54-63, (2023).
[30] Emma L van der Schyff, Brad Ridout, Krestina L Amon, Rowena Forsyth, and Andrew J
Campbell, “Providing Self-Led Mental Health Support Through an Artificial Intelligence-
Powered Chat Bot (Leora) to Meet the Demand of Mental Health Care”, J Med Internet Res,
PMC10337342, (2023).
[31] Abhishek Aggarwal, Cheuk Chi Tam, Dezhi Wu, Xiaoming Li, and Shan Qiao,
“Artificial Intelligence–Based Chatbots for Promoting Health Behavioral Changes:
Systematic Review”, Journal of Medical Internet Research, Volume 25, (2023).
[32] Leona Cilar Budler, Lucija Gosak, and Gregor Stiglic, “Review of artificial intelligence-
based question-answering systems in healthcare”, WIREs Data Mining and Knowledge
Discovery, 13(2), e1487, (2023).
[34] Gerard Anmella, I. Morilla, I. Grande, “Vicky bot, a chatbot for Anxiety-Depressive
symptoms and work-related burnout in primary care and health care professionals:
Development, Feasibility, and Potential Effectiveness Studies”, Journal of Medical Internet
Research, vol. 25, (2023).
[35] Julian De Freitas, Ahmet Kaan, and Zeliha, “Chatbots and Mental Health: Insights into
the Safety of Generative AI”, Journal of Consumer Psychology, vol. 34, no. 3, pp. 481–491,
(2023).
[37] Emre Sezgin, and Shona D'Arcy, “Voice Technology and Conversational Agents in
Health Care Delivery”, Digital Public Health, vol. 10, (2022).
[38] Prabodh Rathnayaka, Nishan Mills, and Donna Burnett, “A Comparative Study of a
Mental Health Chatbot with Cognitive Skills for Personalized Behavioral and Remote Health
Monitoring”, Journal of Sensors, vol. 2, no. 33, (2022).
[39] Ghazala Bilquise, Samar Ibrahim, and Khaled Shaalan, “Emotionally Intelligent
Chatbots: A Systematic Literature Review”, Human Behavior and Emerging Technologies,
vol. 22, no. 1, (2022).
[40] S. Sharma, Abdulqahar Mukhtar Abubakar, Deepa Gupta, Shantipriya Parida, and T.
Mehta, "Mental Health Chatbot Using Reinforcement Learning", Journal of AI in Behavioral
Health, vol. 22, no. 2, pp. 98-110, (2022).
[41] H. Park, Wenjun Zhong, Jianghua Luo, Hong Zhan, and S. Lee, "Self-Improving Chatbot
for Adolescent Mental Health", Journal of AI in Youth Mental Health, vol. 22, no. 4, pp. 112-
125, (2022).
[42] B. Taylor, Griffith, Sophia Louise, and S. Morgan, "AI Chatbot for Mental Health
Screening in Rural Areas", Journal of Rural Mental Health Technology, vol. 19, no. 2, pp. 87-
98, (2022).
[43] Basit Ali, Vadlamani Ravi, Chandra Bushan, M. G. Santosh, and O. Shiva Shankar,
“Chatbot via Machine Learning and Deep Learning Hybrid”, Studies in Computational
Intelligence (SCI), vol. 956, pp. 255–256, (2021).
[44] Eliane M. Boucher, Nicole R. Harake, Haley E. Ward, Sarah Elizabeth Stoeckl, Junielly
Vargas, Jared Minkel, Acacia C. Parks, and Ran Zilca, “Artificially intelligent chatbots in
digital mental health interventions: a review”, Expert Review of Medical Devices, vol. 18,
sup. 1, pp. 37–49, (2021).
[45] Alaa Khamis, Jun Meng, Jin Wang, Ahmad Taher Azar, Edson Prestes, Árpád Takács,
Imre J. Rudas, and Tamas Haidegger, “Robotics and Intelligent Systems Against a
Pandemic”, Acta Polytechnica Hungarica, 18(5), pp. 13–35, ISSN 1785-8860 (print),
2064-2687 (online), (2021).
[46] Pat Pataranutaporn, Valdemar Danry, Joanne Leong, Parinya Punpongsanon, Dan Novy,
Pattie Maes, and Misha Sra, “AI-generated characters for supporting personalized learning and
well-being”, Nature Machine Intelligence, vol. 3, pp. 1013–1022, (2021).
[47] S. Zhou, J. Li, Zhandos Zhumanov, and Sergazi Narynov, "An Intelligent Chatbot for
Managing Anxiety and Depression", Journal of Mental Health Technology, vol. 10, no. 3, pp.
123-135, (2021).
[48] Kerstin Denecke, Alaa Abd-Alrazaq, and Mowafa Househ, “Artificial Intelligence for
Chatbots in Mental Health: Opportunities and Challenges”, Multiple Perspectives on
Artificial Intelligence in Healthcare, (2021).
[49] L. Wang, Nicole R Harake, Haley E Ward, and X. Liu, "Real-Time Sentiment Detection
in Mental Health Chatbots", Journal of Natural Language Processing, vol. 18, no. 2, pp. 130-
145, (2021).
[50] N. Kumar, Leora Trub, Todd Essig, Laura Eltahawy, and S. Das, "AI Chatbot Using CBT
for Mental Health Support", Journal of Cognitive Therapy and AI, vol. 14, no. 3, pp. 50-65,
(2021).
[51] H. Lee, Mohit Verma, and T. Kim, "Conversational AI for Mental Health Therapy," AI
in Mental Health Therapy Journal, vol. 14, no. 2, pp. 110-123, (2021).
[52] L. Chang, Prabod Rathnayaka, Nishan Mills, Donna Burnett, Daswin De Silva,
Damminda Alahakoon, Richard Gray, and S. Tanaka, "AI Chatbot for Monitoring Mood and
Behavioral Patterns", Journal of AI and Behavioral Science, vol. 16, no. 1, pp. 98-112,
(2021).
[53] M. Andersson, Haggstrom Fordell, Vidar, and K. Eriksson, "Interactive Chatbot for
Mindfulness and Stress Relief", Journal of Digital Therapeutics, vol. 9, no. 2, pp. 45-59,
(2021).
[55] J. Walker, Lee Chun-Hung, Liaw Guan-Hsiung, Yang Wu-Chuan, Liu Yu-Hsin, and K.
Wilson, "Chatbot-Assisted Cognitive Behavioral Therapy for Mental Health", Journal of
Cognitive Behavioral AI, vol. 20, no. 3, pp. 120-132, (2021).
[56] A. Singh, Simon D'Alfonso, Olga Santesteban-Echarri, Simon Rice, Greg Wadley,
Reeva Lederman, Christopher, John Gleeson, Mario Alvarez-Jimenez, and B. Patel, "AI
Chatbot for Youth Mental Health Engagement", Journal of Youth Mental Health Technology,
vol. 18, no. 1, pp. 45-60, (2021).
[57] D. Lewis, Wenjun Zhong, Jianghua Luo, Hong Zhang, and P. Green, "AI Chatbot for
Managing Social Anxiety", Journal of Social Anxiety and AI Applications, vol. 16, no. 3, pp.
67-79, (2021).
[59] P. Oliveira, Anna Xygkou, Panote Siriaraya, Alexandra Covaci, Holly Gwen Prigerson,
and M. Costa, "AI Chatbot for Grief Counseling", Journal of AI and Bereavement Support,
vol. 16, no. 3, pp. 58-70, (2021).
[60] J. Thompson, Juan Dempere, and L. Green, "Crisis Intervention AI Chatbot for Mental
Health Emergencies", Journal of Crisis Support AI, vol. 14, no. 2, pp. 90-104, (2021).
[61] E. Kim, Shuya Lin, Lingfeng Lin, Cuiqin Hou, Baijun Chen, Jianfeng Li, Shiguang Ni,
and J. Choi, "Empathy-Based AI Chatbot for Mental Health," Journal of AI and Emotional
Support, vol. 12, no. 1, pp. 45-57, (2021).
[62] L. Martin, Ashley C Griffin, Zhaopeng Xing, Saif Khairat, Yue Wang, Stacy Bailey,
Jaime Arguello, Arlene E Chung, and Y. Chen, "Conversational AI for Chronic Mental Health
Conditions," Journal of AI in Chronic Mental Health Support, vol. 10, no. 2, pp. 102-115,
(2021).
[63] H. Kim, "Conversational Agents for Mental Health Support," International Journal of AI
in Healthcare, vol. 8, no. 2, pp. 78-85, (2020).
[64] M. Thomas, Sarah Elizabeth Stoeckl, Ran Zilca, and P. James, "An AI-Powered Chatbot
for Depression Screening," Journal of AI in Healthcare, vol. 7, no. 3, pp. 100-110, (2020).
[65] J. Silva, Arfan Ahmed, and E. Santos, "AI Chatbot for Crisis Intervention in Mental
Health," Journal of Crisis Intervention Technology, vol. 11, no. 1, pp. 90-104, (2020).
[66] C. Rivera, Siobhan O'Neill, Martin Malcolm, Maurice Mulvenna, Andrea Bickerdike,
and P. Gomez, "Bilingual AI Chatbot for Mental Health," Journal of Multilingual Mental
Health, vol. 8, no. 3, pp. 45-58, (2020).
[67] R. Brown, Jillian Shah, Bianca DePietro, Laura D'Adamo, Marie-Laure Firebaugh,
Olivia Laing, Lauren A. Fowler, Lauren Smolar, Shiri Sadeh-Sharvit, C. Barr Taylor, Denise
E. Wilfley, Ellen E. Fitzsimmons-Craft and Y. Zhao, "AI Chatbot for Mood Disorders
Screening," Journal of AI Diagnostics in Mental Health, vol. 13, no. 3, pp. 53-65, (2020).
[68] L. Chen, Payam Kaywan, Khandakar Ahmed, Ayman Ibaida, Yuan Miao, Bruce Gu and
M. Xu, "AI Chatbot for Early Detection of Suicidal Ideation," Journal of Suicide Prevention
and Digital Health, vol. 7, no. 3, pp. 120-132, (2020).
[69] R. Santos, Donghoon Shin, Subeen Park, Esther Hehsun Kim, Soomin Kim, Jinwook
Seo, Hwajung Hong and J. Torres, "AI Chatbot for Mental Health in Multicultural Contexts,"
International Journal of Multicultural Mental Health, vol. 11, no. 2, pp. 102-115, (2020).
[70] K. Mori, Pradeep Nazareth, G. B. Nikhil, G. Chirag, N. R. Prathik, and Y. Tanaka, "AI-
Powered Chatbot for Real-Time Mental Health Assessment," Journal of Mental Health
Monitoring, vol. 7, no. 3, pp. 55-68, (2020).
[71] R. Hassan, Philip Kossack, Herwig Unger, and M. Khan, "Emotion-Aware AI Chatbot
for Anxiety Support", Journal of AI for Mental Health, vol. 11, no. 2, pp. 120-133, (2020).
[73] Chetan Bulla, Chinmay Parushetti, Akshata Teli, Samiksha Aski, and Sachin Koppad, “A
Review of AI-Based Medical Assistant Chatbot”, Research and Applications of Web
Development and Design, 8(2), 1-14, (2020).
[74] Sarah Carr, “Artificial Intelligence Gone Mental: Engagement and Ethics in Data-Driven
Technology for Mental Health”, Journal of Mental Health, vol. 29, no. 2, (2020).
[75] Alaa Ali Abd-Alrazaq, Asma Rababeh, Mohannad Alajlani, “Effectiveness and Safety of
Using Chatbots to Improve Mental Health: Systematic Review and Meta-Analysis”, Journal
of Medical Internet Research, vol. 22, no. 7, (2020).
[76] T. Nguyen, David C. Mohr, Robert E. Kraut, and Yi-Chieh Lee, "An Emotion-Sensitive
AI Chatbot for Mental Health", Journal of Affective Computing, vol. 5, no. 4, pp. 215-230,
(2019).
[77] Y. Kwon, JungKyoon Yoon, Chajoong Kim, and J. Kim, "Emotion-Driven AI Chatbot
for Supporting Mental Wellbeing", Journal of Emotion and AI in Mental Health, vol. 9, no. 2,
pp. 75-88, (2019).
[78] Q. Zhang, Akhilesh Kali, and R. Liu, "Sentiment-Based Chatbot for Real-Time Mental
Health Tracking", Journal of Real-Time Emotional Support, vol. 8, no. 4, pp. 120-134,
(2019).
[79] T. Roberts, Emi Moriuchi, and D. Evans, "AI-Powered Mental Health Bot for Substance
Abuse Recovery", Journal of Substance Abuse and AI Technology, vol. 18, no. 1, pp. 65-78,
(2019).
[80] F. Lu, Yuqi Chu, Lizi Liao, Zhiyuan Zhou, Chong-Wah Ngo, Richang Hong, and X.
Zhang, "Multimodal AI Chatbot for Detecting Mental Distress", Journal of Multimodal AI
and Mental Health, vol. 15, no. 4, pp. 123-135, (2019).