Mahesh Mini Doc

The document discusses the development and potential of AI-driven chatbots for addressing mental health disturbances, emphasizing their ability to provide accessible, cost-effective support through natural language processing and machine learning. It highlights the limitations of traditional mental health care, such as high costs and long wait times, and proposes a chatbot system that integrates modern technology with traditional healing practices. The research identifies gaps in personalization, long-term efficacy, and ethical concerns, advocating for further studies to enhance the effectiveness of these AI tools in mental health care.

Uploaded by

rahulrd252002
Copyright
© All Rights Reserved

AI-DRIVEN ASSISTANT CHATBOT FOR MENTAL

DISTURBANCE

ABSTRACT:

Mental disturbances cause people to lose concentration in day-to-day life, at home, at work, and elsewhere. The history of AI-driven assistant chatbots for mental disturbance is marked by rapid progress, increasing recognition, and growing potential to transform mental health support. The objective of this chatbot is to provide an efficient, cost-effective means of identifying and addressing mental health problems. Using natural language processing (NLP) and machine learning (ML) algorithms, these chatbots offer a confidential and accessible platform for users to share their emotions, concerns, and experiences. Traditional approaches to this problem are expensive and time-consuming. They include ancient and cultural practices such as Ayurvedic medicine (India, 5000 BCE), which emphasized balance, mindfulness, and natural remedies, and traditional Chinese medicine (China, 2000 BCE), which incorporated acupuncture, herbalism, and meditation; philosophical and spiritual approaches such as Sufism (Middle East, 800 CE), focused on spiritual growth, love, and self-awareness; and community-based support such as community mental health centers (1960s). Integrating this traditional wisdom with modern technology and AI-driven chatbots holds promise for enhanced mental health support and wellness. Conventional care suffers from limited availability, long wait times, geographical constraints, high costs, inadequate insurance coverage, lack of personalized support, inadequate resource allocation, and fear of judgment. AI-driven chatbots can reach underserved populations, rural areas, and people with mobility issues; offer personalized support, empathy, and understanding; advance mental health research through data-driven insights and AI-driven discoveries; address mental health disparities worldwide; reduce healthcare costs and minimize emergency interventions; provide immediate assistance during crises; and integrate with wearable devices for seamless data exchange and personalized insights. By addressing these motivations, AI-driven assistant chatbots can revolutionize mental health support, providing accessible, effective, and compassionate care. The proposed system includes user profiling (registration, demographics, and mental health history) and intent detection (identifying user needs, concerns, and goals). It leverages AI advances in NLP, machine learning, and sentiment analysis; integrates wearable devices for seamless data exchange and personalized insights; develops intelligent systems for adaptive, predictive, and proactive support; and enhances data analytics for insight into mental health trends and patterns.
CHAPTER 1

INTRODUCTION

1.1 HISTORY
 A chatbot is a computer program that simulates human conversation with an end user. Not
all chatbots are equipped with artificial intelligence (AI), but modern chatbots increasingly
use AI techniques such as natural language processing (NLP) to understand user questions
and automate responses to them.
 The next generation of chatbots with generative AI capabilities will offer even more enhanced
functionality with their understanding of common language and complex queries, their ability
to adapt to a user’s style of conversation, and the use of empathy when answering users’
questions.
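The difference between scripted chatbots and AI-equipped ones can be illustrated with a minimal rule-based sketch in Python. The patterns and responses below are hypothetical examples, not taken from any deployed system; a fixed set of keyword rules maps messages to canned replies, which is exactly the limitation that NLP-based chatbots aim to overcome.

```python
import re

# Hypothetical scripted rules: (pattern, canned response), checked in order.
RULES = [
    (re.compile(r"\b(stress|stressed|anxious|anxiety)\b", re.I),
     "It sounds like you are under stress. Would you like a breathing exercise?"),
    (re.compile(r"\b(sad|down|depressed)\b", re.I),
     "I'm sorry you are feeling low. Do you want to talk about what happened?"),
    (re.compile(r"\b(hello|hi|hey)\b", re.I),
     "Hello! How are you feeling today?"),
]
DEFAULT = "Could you tell me a bit more about how you feel?"

def respond(message: str) -> str:
    """Return the first matching scripted response, else a generic fallback."""
    for pattern, reply in RULES:
        if pattern.search(message):
            return reply
    return DEFAULT

print(respond("hi there"))            # greeting rule fires
print(respond("I feel so anxious"))   # stress rule fires
print(respond("the weather is odd"))  # no rule matches: fallback
```

Any message outside the scripted patterns falls through to the fallback, so the system cannot adapt to a user's style of conversation the way NLP-driven chatbots can.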
1.2 TRADITIONAL METHODS
 The literature pertaining to traditional healing practices in mental health is largely descriptive
and therefore is limited in its rigor by today’s scientific standards. Indeed, there is an active
debate as to whether current Western scientific methods are appropriate for examining the
nature, processes, and outcomes of traditional healing. Part of this debate revolves around a
historical monopoly of the study of traditional healing by Western, nonindigenous researchers
whose views typically lie outside the cultural perspectives that inform a particular healing
tradition.
1.3 PROBLEM STATEMENT
 Despite the growing need for mental health support, many individuals face significant
barriers to accessing timely, personalized, and effective care: limited availability, long wait
times, geographical constraints, high costs, inadequate insurance coverage, social stigma,
fear of judgment, lack of personalized support, inadequate resource allocation, and treatment
that is unavailable when it is needed.
1.4 RESEARCH STATEMENT
 Depression impacts the lives of a large number of university students. Mobile-based therapy
chatbots are increasingly being used to help young adults who suffer from depression.
However, previous trials have had short follow-up periods, and evidence of effectiveness
under pragmatic conditions is still lacking. This study aimed to compare chatbot therapy to
bibliotherapy, a widely accepted and proven self-help psychological intervention.
1.5 RESEARCH MOTIVATION
 Clinical applications of Artificial Intelligence (AI) for mental health care have experienced a
meteoric rise in the past few years. AI-enabled chatbot software and applications have been
administering significant medical treatments that were previously only available from
experienced and competent healthcare professionals. Such initiatives, which range from
“virtual psychiatrists” to “social robots” in mental health, strive to improve nursing
performance and cost management, as well as to meet the mental health needs of vulnerable
and underserved populations.
1.6 PROPOSED SYSTEM
ANN:
Artificial Neural Networks contain artificial neurons, called units, arranged in a series of
layers that together constitute the whole network. A layer may have a dozen units or millions
of them, depending on how complex the network must be to learn the hidden patterns in the
dataset. Commonly, an Artificial Neural Network has an input layer, an output layer, and one
or more hidden layers. The input layer receives the data from the outside world that the
network needs to learn about. This data then passes through one or more hidden layers that
transform the input into a representation valuable to the output layer. Finally, the output
layer produces the network's response to the input data.
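The flow of data through the input, hidden, and output layers described above can be sketched as a plain-Python forward pass. The weights and biases below are illustrative placeholders, not trained values, and the 3-2-1 layer sizes are arbitrary choices for the sketch.

```python
import math

def sigmoid(x: float) -> float:
    """Squash a weighted sum into the (0, 1) range."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer: each unit computes a weighted sum plus bias."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Hypothetical untrained parameters: 3 inputs -> 2 hidden units -> 1 output unit.
hidden_w = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]]
hidden_b = [0.0, 0.1]
out_w = [[1.2, -0.7]]
out_b = [0.05]

x = [0.9, 0.1, 0.4]               # input layer: data from the outside world
h = layer(x, hidden_w, hidden_b)  # hidden layer transforms the input
y = layer(h, out_w, out_b)        # output layer produces the response
print(y)
```

Training would adjust `hidden_w`, `hidden_b`, `out_w`, and `out_b` (e.g., via backpropagation) so that the output layer's response matches the desired labels; the forward pass itself stays exactly as shown.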
1.7 ADVANTAGES
1. Anxiety and Depression Support.
2. Stress Management.
3. Eating Disorder Support.
4. Relationship Therapy.
5. Self-Care and Wellness.
6. Workplace Wellness Initiatives.
7. Disaster Response and Recovery.
8. Mental Health Awareness Campaigns.
CHAPTER 2

LITERATURE SURVEY

2.1 INTRODUCTION
AI-powered chatbots have evolved from simple rule-based systems to advanced models
using natural language processing (NLP). They show great potential in medical contexts,
offering personalized, on-demand health promotion interventions. These chatbots mimic
human interaction through written, oral, and visual communication, providing accessible
health information and services. Over the past decade, research has assessed their feasibility
and efficacy, particularly in improving mental health outcomes. Systematic reviews have
evaluated their effectiveness, feasibility in healthcare settings, and technical architectures in
chronic conditions. Recent studies focus on using AI chatbots for health behavior changes
such as physical activity, diet, and weight management. Integrated into devices like robots,
smartphones, and computers, they support behavioral outcomes such as smoking cessation
and treatment adherence. Additionally, AI chatbots aid in patient communication, diagnosis
support, and other medical tasks, with studies discussing their benefits, limitations, and future
directions. Their potential uses include mental health self-care and health literacy education.

2.2 RELATED WORK

Depression and anxiety have been associated with economic inactivity, loneliness, poor
physical health, and mortality. Prevalence estimates for depression and anxiety vary
considerably between studies depending on how mental health problems are defined and
the methodologies researchers have adopted. The World Mental Health Survey (WMHS), the
most comprehensive longitudinal study of mental health globally, estimated that in high-
income countries the lifetime prevalence of depression is around 15%. Half of participants
in the WMHS reported at least some symptoms of depression. The World Health
Organisation ranks depression as the second leading cause of disability globally. Therefore,
it is crucial to explore innovative therapies and interventions to support affected individuals.
Practice guidelines generally advocate a stepped approach to depression treatment. These
guidelines recommend self-help interventions, such as psycho-education and mindfulness, for
individuals experiencing mild symptoms. If symptoms are more severe, then psychological
treatments such as CBT are recommended. CBT is based on the theory that how we think
affects how we behave and feel. During CBT sessions, the therapist will use a range of
strategies and techniques to help challenge an individual’s thinking. There is extensive
evidence from systematic reviews and meta-analyses that CBT is a safe and effective
treatment for depression. However, CBT is a complex treatment that requires extensive
training to deliver competently with a high degree of treatment fidelity. Consequently,
there is a considerable and persistent unmet need for effective psychological depression
treatments because of a chronic shortage of trained therapists. To address the unmet needs
for treatment, there has been considerable interest in “third-wave psychological
interventions”, such as Behavioural Activation (BA). A systematic review and meta-analysis
of five randomised controlled trials involving 601 participants has reported equal efficacy
between BA and CBT.

2.3 RESEARCH GAPS

There are several research gaps in the field of AI-enabled assistant chatbots for mental health:

1. Personalization: Many chatbots lack the ability to tailor responses to individual users'
unique needs and contexts. Research is needed to develop more adaptive and personalized
interventions.

2. Long-term Efficacy: Most studies focus on short-term outcomes. Long-term studies are
needed to assess the sustained impact of chatbots on mental health.

3. Integration with Healthcare Systems: Effective integration of chatbots into existing
healthcare frameworks is still a challenge. Research should explore how to seamlessly
incorporate these tools into clinical practice.

4. Ethical and Privacy Concerns: Ensuring data privacy and addressing ethical issues, such
as informed consent and transparency, is crucial. More research is needed to develop robust
ethical guidelines.

5. Cultural Sensitivity: Many chatbots are not designed with diverse cultural contexts in
mind. Research should focus on creating culturally aware and inclusive chatbots.

6. User Engagement: Maintaining user engagement over time is a significant challenge.
Studies should investigate methods to keep users consistently engaged with chatbot
interventions.

7. Regulatory Frameworks: The regulatory landscape for AI chatbots in mental health is still
evolving. Research should address the development of appropriate regulatory frameworks to
ensure safety and efficacy.

These gaps highlight the need for continued research and development to maximize the
potential of AI chatbots in supporting mental health.

2.4 SUMMARY

As conversational agents are becoming a readily available platform for many service
providers, the benefits in the healthcare domain are emerging. The design and development of
our BA-based AI chatbot, followed by its participatory evaluation, confirmed its effectiveness
in providing support for individuals with mental health issues. With an ever-increasing
demand for healthcare service providers and mental health issues being at the forefront of
healthcare challenges, our “Bunji” has the potential to be scaled up and rolled out in the
healthcare domain to support front-line workers and the community as a whole. Prior to such
a deployment, we need to conduct and report on long-term evaluation of functionality and the
impact of regular usage of the chatbot across diverse cohorts representative of the socio-
demographic interest groups. This is a primary limitation of this study that we intend to
address as part of our future work. We also plan the following innovations as future work:
creation of a community of Bunji users with like-minded interests who can engage in group
activities, Bunji being able to find “Chat Pals” in this community, which is ideal for people
who are more introverted and live by themselves, and extending the gamification of Bunji for
visually setting and tracking goals against a community benchmark and one’s own track
record. We are also working on expanding the activity banks to include activities,
inspirations, quotations, and workshops, as well as enhancements to the language model for
more fun/inspirational human-like conversations with a range of responses that suit diverse
demographics (e.g., emojis and memes). We will also consolidate the code of ethics with opt-
in options for users to provide more information, which will improve the personalised
conversations. In conclusion, mental healthcare and treatment are major challenges inhibiting
the health and well-being of contemporary human society, and this study contributes the
technological innovation of an intelligent chatbot with cognitive skills for personalised
behavioural activation and remote health monitoring.

 Nicole Ruggiano explored chatbots to support people with dementia and their caregivers:
systematic review of functions and quality. This study aims to identify the types of currently
commercially available chatbots designed for use by people with dementia and their
caregivers and to assess their quality in terms of features and content. Chatbots were
identified through a systematic search of the Google Play Store, Apple App Store, Alexa
Skills, and the internet. An evidence-based assessment tool was used to evaluate the features
and content of the identified apps. The assessment was conducted through interrater
agreement among four separate reviewers.
 Zhijun Guo studied large language models for mental health applications: systematic review.
This systematic review aims to critically assess the use of LLMs in mental health, specifically
focusing on their applicability and efficacy in early screening, digital interventions, and
clinical settings. By systematically collating and assessing the evidence from current studies,
our work analyses models, methodologies, data sources, and outcomes, thereby highlighting
the potential of LLMs in mental health, the challenges they present, and the prospects for
their clinical use.
 Megha Gupta explored Delivery of a Mental Health Intervention for Chronic Pain Through
an Artificial Intelligence–Enabled App (Wysa): Protocol for a Prospective Pilot Study. This
prospective study aims to examine the efficacy and use of an AI-CBT intervention for
chronic pain (Wysa for Chronic Pain app, Wysa Inc) using a conversational agent (with no
human intervention). To the best of our knowledge, this is the first such study for chronic
pain using a fully-automated, free-text–based conversational agent.
 Amy J. C. Trappey explained Development of an Empathy-Centric Counselling Chatbot
System Capable of Sentimental Dialogue Analysis. College students encounter various types
of stress in school due to schoolwork, personal relationships, health issues, and future
career concerns. Some students are susceptible to the sting of failure and are inexperienced
with or fearful of dealing with setbacks. When these negative emotions gradually accumulate
without resolution, they can cause long-term negative effects on students’ physical and
mental health. Some potential health problems include depression, anxiety, and disorders
such as eating disorders. Universities commonly offer counselling services; however,
demand often exceeds counselling capacity due to the limited number of counsellors and
psychologists. Thus, students may not receive immediate counselling or treatment. If
students are not treated, some repercussions may lead to severe abnormal behaviour and even
suicide. In this study, combining immersive virtual reality (VR) technique with psychological
knowledge base, we developed a VR empathy-centric counselling chatbot (VRECC) that can
complementarily support troubled students when counsellors cannot provide immediate
support.
 Aratrika Chaudhuri presented The Psychosocial Reasons for the Surge of Chatbot Usage
Among Young Adults: A Review. With the advent of Artificial Intelligence (AI), machine learning
and advancements in computer technology, there is a growing popularity of conversational
agents or chatbots among young adults for seeking different forms of therapeutic and social
support. Young adults are also considered to be more vulnerable to various mental health
disorders and psychological disturbances. Moreover, the rapid integration of chatbot
technology into modern lifestyle has sparked an increasing interest in understanding the
underlying reasons behind its widespread adoption, particularly among young adults. The
current research aimed to provide a comprehensive overview of the psychosocial factors
driving such chatbot usage among young adults worldwide by reviewing previous relevant
literature from multiple reputable academic sources such as Google Scholar.
 Alex Vakaloudis, Lauri Kuosmanen, Martin Malcolm, Thomas Broderick, Andrea
Bickerdike, Con Burns, Edward Coughlan, and Brian Cahill created Chatbots to Support Mental
Health & Wellbeing: Early Findings from Chat Pal Use During COVID-19 Lockdown. A
conversational user interface, or chatbot, is “a computer program designed to simulate
conversation with human users, especially over the internet”. The Chat Pal consortium is
developing a chatbot called Chat Pal to support mental health and wellbeing of people in
rural areas of Northern Europe. In a recent survey undertaken by the team, 65% of the mental
healthcare professionals surveyed agreed that there were benefits associated with mental
healthcare chatbots, yet the perceived adoption among clients at 24 is quite low. The survey
also found that as people’s experience grows, so too does their belief that the use of chatbots
can improve the quality of care, client self-management, access to care and can assist mental
healthcare workers in their roles. Even though the level of personal experience with chatbots
among professionals in mental health has been quite low, this survey shows that, where they
have been used, the experience has been mostly satisfactory. As a consequence of the positive
findings from the survey and in order to help those isolated, the Chat Pal chatbot was
implemented ahead of schedule in order to offer support to people in English-speaking
Europe while the continent was under COVID-19 lockdown.
 Liuping Wang explained CASS: Towards Building a Social-Support Chatbot for Online
Health Community. Chatbot systems, despite their popularity in today's HCI and CSCW
research, fall short for one of the two reasons: 1) many of the systems use a rule-based dialog
flow, thus they can only respond to a limited number of pre-defined inputs with pre-scripted
responses; or 2) they are designed with a focus on single-user scenarios, thus it is unclear
how these systems may affect other users or the community. In this paper, we develop a
generalizable chatbot architecture (CASS) to provide social support for community members
in an online health community. The CASS architecture is based on advanced neural network
algorithms; thus it can handle new inputs from users and generate a variety of responses to
them. CASS is also generalizable as it can be easily migrated to other online communities.
With a follow-up field experiment, CASS is proven useful in supporting individual members
who seek emotional support. Our work also contributes to fill the research gap on how a
chatbot may influence the whole community's engagement.
 Dr. Kapila Mahindra explained the impact of artificial intelligence on the mental and
psychological health of working women. Artificial Intelligence (AI) is the imitation of human
intelligence in computers that are able to perform actions generally requiring human intellect.
AI is the development of computer machines capable of learning, reasoning, problem
solving, interpreting natural language, and adapting to new contexts. According to Russell
and Norvig (2016), these systems can evaluate vast volumes of data, detect patterns, and
make judgments without the need for explicit human programming. Computer vision,
machine learning, expert systems, robotics, and natural language processing are some of the
arenas of AI. It has uses in multiple fields, including banking, healthcare, education,
transportation, and others.
AI knowledge is always growing, with continual research and development important to
advancements in AI competencies and its inclusion into a wide range of daily activities. “A
working woman refers to a female individual who is engaged in paid employment or self
employment and actively participates in the labour force”. This statement, according to the
United States Bureau of Labor Statistics (2021), refers to women who work outside the house
and contribute to the formal economy.
 Eleni Mitsea explained neurotechnologies in Breathing Interventions for Mental and
Emotional Health: A Systematic Review. Humans can survive weeks without consuming food,
days without drinking water, and just a few minutes without breathing oxygen. Although it is
an automatic action, breathing is a pivotal component of the total human being and one of the
most vital physiological processes. For a long time, the importance of functional breathing
was underestimated, because there was limited knowledge and relevant research about the
crucial role of breathing in human health. The physiological and psychological benefits
derived from breathing practices indicate that such interventions may hold the key to
improving various aspects of human health. Thus, a global discussion has begun regarding
the impact of breathing on various aspects of human health, highlighting the importance of
making breathwork an integral part of our daily lives.
 Yining Hua explained the integration of large language models (LLMs) in mental health care
is an emerging field. There is a need to systematically review the application outcomes and
delineate the advantages and limitations in clinical settings. This review aims to provide a
comprehensive overview of the use of LLMs in mental health care, assessing their efficacy,
challenges, and potential for future applications. A systematic search was conducted across
multiple databases including PubMed, Web of Science, Google Scholar, arXiv, medRxiv, and
PsyArXiv in November 2023. All forms of original research, peer-reviewed or not, published
or disseminated between October 1, 2019, and December 2, 2023, are included without
language restrictions if they used LLMs developed after T5 and directly addressed research
questions in mental health care settings. From an initial pool of 313 articles, 34 met the
inclusion criteria based on their relevance to LLM application in mental health care and the
robustness of reported outcomes. Diverse applications of LLMs in mental health care are
identified, including diagnosis, therapy, patient engagement enhancement, etc. Key
challenges include data availability and reliability, nuanced handling of mental states, and
effective evaluation methods. Despite successes in accuracy and accessibility improvement,
gaps in clinical applicability and ethical considerations were evident, pointing to the need for
robust data, standardized evaluations, and interdisciplinary collaboration. LLMs hold
substantial promise for enhancing mental health care. For their full potential to be realized,
emphasis must be placed on developing robust datasets, development and evaluation
frameworks, ethical guidelines, and interdisciplinary collaborations to address current
limitations.
 Hassan presented Recognizing Suicidal Intent in Depressed Population using NLP: A Pilot
Study. Depression is a prevalent form of mental disorder that can affect productivity in daily
activities and might lead to suicidal thoughts or attempts. Conventional diagnostic techniques
performed by mental health professionals can help identify the level of depression present in
a person. To facilitate such a diagnostic approach, in this paper, we present an automated
conversational platform that was used as a preliminary method of identifying depression
associated risks. The platform was developed to understand conversations using Natural
Language Processing (NLP) via machine learning technique. In the proposed two-phased
platform, the initial intent recognition phase would analyse conversation and identify
associated sentiments into four categories of `happy', `neutral', `depressive' and `suicidal'
states. In the final emotion nurturing phase, the platform continued with supportive
conversations for the first three states while triggering a local call to a suicide prevention
helpline for the `suicidal' state as a preventive measure. This multi-layer platform integrated
Google Home Mini, the Google Dialogflow Machine Learning (ML) algorithm, and the Twilio API.
Dialogflow ML obtained a classification accuracy of 76% in recognizing the user's mental state
via NLP and was found efficient over the classic SVM classifier. As a pilot study, current
focus of this paper was solely based on the usage of words and intent of the user and was
found effective.
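The four-state intent-recognition phase described above can be approximated with a simple bag-of-words classifier. The sketch below uses a from-scratch naive Bayes model on a tiny hypothetical corpus; it is not the study's actual Dialogflow or SVM pipeline, and the example sentences are invented, not clinical data. It shows how a classified `suicidal' state could trigger an escalation path while other states continue the supportive conversation.

```python
import math
from collections import Counter

STATES = ["happy", "neutral", "depressive", "suicidal"]

# Tiny illustrative corpus (hypothetical examples only).
TRAIN = [
    ("i feel great today", "happy"),
    ("this is wonderful news", "happy"),
    ("i am going to the shop", "neutral"),
    ("the meeting is at noon", "neutral"),
    ("i feel hopeless and empty", "depressive"),
    ("nothing makes me happy anymore", "depressive"),
    ("i want to end my life", "suicidal"),
    ("i cannot go on living", "suicidal"),
]

def train(samples):
    """Count words per state and build the vocabulary."""
    word_counts = {s: Counter() for s in STATES}
    class_counts = Counter()
    vocab = set()
    for text, state in samples:
        words = text.split()
        word_counts[state].update(words)
        class_counts[state] += 1
        vocab.update(words)
    return word_counts, class_counts, vocab

def classify(text, word_counts, class_counts, vocab):
    """Pick the state with the highest naive Bayes log-probability."""
    total = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for state in STATES:
        lp = math.log(class_counts[state] / total)
        denom = sum(word_counts[state].values()) + len(vocab)
        for w in text.split():
            lp += math.log((word_counts[state][w] + 1) / denom)  # Laplace smoothing
        if lp > best_lp:
            best, best_lp = state, lp
    return best

model = train(TRAIN)
state = classify("i feel hopeless", *model)
if state == "suicidal":
    print("Trigger helpline escalation")   # preventive measure, per the pilot design
else:
    print(f"Continue supportive conversation ({state})")
```

In production the classifier would be a trained NLP model rather than this toy corpus, but the two-phase control flow is the same: classify first, then either nurture the conversation or escalate.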
 Ana Carolina published Technologies for Hedonic Aspects Evaluation in Text-based
Chatbots: A Systematic Mapping Study. Many studies present and evaluate daily-use
technologies ranging from information to conversational systems. One of the technologies
that have attracted the attention of researchers is the chatbot, operating through text or voice
messages. In particular, user experience (UX) has been pointed out as one of the leading
aspects of chatbot evaluation. UX evaluation involves pragmatic and hedonic aspects. The first deals with the
usability and efficiency of the system, while the second considers aspects related to the
originality, innovation, beauty of the system, and the user’s psychological well-being.
Although there are previous studies on usability evaluation and human-computer interaction
in conversational systems, the absence of specific works that consider the hedonic aspects of
UX in chatbots is evident. Therefore, this paper presents a Systematic Mapping Study
investigating UX evaluation technologies (questionnaires, methods, techniques, models,
among others) from the hedonic aspect of chatbots. We focused our investigation on studies
with chatbots that are activated through text, although they may be able to display click
interactions, videos, and images in addition to written text. We discovered 29 different
technologies used to evaluate hedonic aspects of UX in chatbots, and the most frequent
aspect found is trust. Our study provides relevant data on the researched topic, addressing the
specific characteristics of human-chatbot interaction, such as identity and social interaction.
Moreover, we highlight gaps in the hedonic aspect evaluation in chatbots, such as a few
works investigating the assessment of user emotional state.
 Surya Roca proposed Microservice Chatbot Architecture for Chronic Patient Support.
Chatbots are able to provide support to patients suffering from very different conditions.
Patients with chronic diseases or comorbidities could benefit the most from chatbots, which
can keep track of their condition, provide specific information, encourage adherence to
medication, etc. To perform these functions, chatbots need a suitable underlying software
architecture. In this paper, we introduce a chatbot architecture for chronic patient support
grounded on three pillars: scalability by means of microservices, standard data sharing
models through HL7 FHIR and standard conversation modelling using AIML. We also
propose an innovative automation mechanism to convert FHIR resources into AIML files,
thus facilitating the interaction and data gathering of medical and personal information that
ends up in patient health records. To align the way people interact with each other using
messaging platforms with the chatbot architecture, we propose these very same channels for
the chatbot-patient interaction, paying special attention to security and privacy issues. Finally,
we present a monitored-data study performed in different chronic diseases, and we present a
prototype implementation tailored for one specific chronic disease, psoriasis, showing how
this new architecture allows the change, the addition or the improvement of different parts of
the chatbot in a dynamic and flexible way, providing a substantial improvement in the
development of chatbots used as virtual assistants for chronic patients.
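The FHIR-to-AIML automation described above can be illustrated with a minimal sketch: a simplified FHIR-like resource (hypothetical fields, not a complete HL7 FHIR MedicationRequest) is converted into an AIML category, so the chatbot can answer a patient question from record data. The pattern text and field names are assumptions for the example.

```python
import xml.etree.ElementTree as ET

# Simplified FHIR-like resource (hypothetical subset of a MedicationRequest).
fhir_resource = {
    "resourceType": "MedicationRequest",
    "medication": "Methotrexate 10mg",
    "dosageInstruction": "once weekly",
}

def fhir_to_aiml(resource: dict) -> str:
    """Build an AIML <category> whose template answers from the resource's fields."""
    aiml = ET.Element("aiml", version="2.0")
    category = ET.SubElement(aiml, "category")
    ET.SubElement(category, "pattern").text = "WHAT IS MY MEDICATION"
    ET.SubElement(category, "template").text = (
        f"You are prescribed {resource['medication']}, "
        f"to be taken {resource['dosageInstruction']}."
    )
    return ET.tostring(aiml, encoding="unicode")

print(fhir_to_aiml(fhir_resource))
```

Generating one such category per resource keeps the conversation model (AIML) in sync with the record store (FHIR), which is the point of the automation mechanism: the chatbot's answers are derived from, and updated with, the patient's health record.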
 Brenna N. Renn presented Artificial Intelligence: An Interprofessional Perspective on
Implications for Geriatric Mental Health Research and Care. Artificial intelligence (AI) in healthcare aims to
learn patterns in large multimodal datasets within and across individuals. These patterns may
either improve understanding of current clinical status or predict a future outcome. AI holds
the potential to revolutionize geriatric mental health care and research by supporting
diagnosis, treatment, and clinical decision-making. However, much of this momentum is
driven by data and computer scientists and engineers and runs the risk of being disconnected
from pragmatic issues in clinical practice. This interprofessional perspective bridges the
experiences of clinical scientists and data science. We provide a brief overview of AI with the
main focus on possible applications and challenges of using AI-based approaches for research
and clinical care in geriatric mental health. We suggest future AI applications in geriatric
mental health consider pragmatic considerations of clinical practice, methodological
differences between data and clinical science, and address issues of ethics, privacy, and trust.
 Shih-Wen Su proposed Development of an AI-based System to Enhance School Counselling
Models for Asian Elementary Students with Emotional Disorders. In Asia, the availability of
school counsellors is significantly lower than global standards recommend, particularly in
elementary education settings. This shortage is exacerbated by rising mental health concerns
among young students, particularly those with emotional disorders. Considering the critical
gap in the provision of mental health services in Asia, this paper studies a digital intervention
approach with an AI-driven supportive system developed by adopting OpenAI to enhance the
effectiveness of counselling in elementary education. Twenty-two students with ADHD,
autism spectrum disorder, and emotional disorders undergoing counselling at a primary
school in Taiwan were recruited as participants for a three-month experiment, with the five
Social-Emotional competencies as dependent variables. The treatment group utilized the
proposed system with a digital journaling platform to help students reflect on their emotions,
thoughts, and actions after counselling sessions, fostering an ongoing dialogue with their
counsellors through the system. Conversely, the control group received standard counselling
without integrating the use of the proposed platform. The results of a two-factor mixed design
ANOVA revealed that students who did not use the supportive system showed significant
improvement in self-awareness. In contrast, students who went through the new model
demonstrated significant changes in all competencies. These findings highlight the value of
the proposed intervention approach for students with emotional disorders and suggest broader
applications for AI technologies in school counselling, offering valuable insights for
educators and policymakers.
 Yicho Cui explained Exploring Effects of Chatbot's Interpretation and Self-disclosure on
Mental Illness Stigma. Chatbots are increasingly being used in mental healthcare - e.g., for
assessing mental-health conditions and providing digital counselling - and have been found to
have considerable potential for facilitating people's behavioural changes. Nevertheless, little
research has examined how specific chatbot designs may help reduce public stigmatization of
mental illness. To help fill that gap, this study explores how stigmatizing attitudes toward
mental illness may be affected by conversations with chatbots that have 1) varying ways of
expressing their interpretations of participants' statements and 2) different styles of self-
disclosure. More specifically, we implemented and tested four chatbot designs that varied in
terms of whether they interpreted participants' comments as stigmatizing or non-stigmatizing,
and whether they provided stigmatizing, non-stigmatizing, or no self-disclosure of the chatbot's
own views. Over the two-week period of the experiment, all four chatbots' conversations with
our participants centred on seven mental-illness vignettes, all featuring the same character.
We found that the chatbot featuring non-stigmatizing interpretations and non-stigmatizing
self-disclosure performed best at reducing the participants' stigmatizing attitudes, while the
one that provided stigmatizing interpretations and stigmatizing self-disclosures had the least
beneficial effect. We also discovered side effects of the chatbot's self-disclosure: notably, that
chatbots were perceived to have inflexible and strong opinions, which undermined their
credibility. As such, this paper contributes to knowledge about how chatbot designs shape
users' perceptions of the chatbots themselves, and how chatbots' interpretation and self-
disclosure may be leveraged to help reduce mental-illness stigma.
 Anna Xygkou explained “Can I be More Social with a Chatbot?”: Social Connectedness
Through Interactions of Autistic Adults with a Conversational Virtual Human. The
development of AI to function as communicators (i.e., conversational agents), has opened the
opportunity to rethink AI’s place within people’s social worlds, and the process of sense-
making between humans and machines, especially for people with autism who may stand to
benefit from such interactions. The current study aims to explore the interactions of six
autistic and six non-autistic adults with a conversational virtual human (CVH/conversational
agent/chatbot) over 1–4 weeks. Using semi-structured interviews, conversational chatlogs and
post-study online questionnaires, we present findings related to human-chatbot interaction,
chatbot humanization/dehumanization and chatbot’s autistic/non-autistic traits through
thematic analysis. Findings suggest that although autistic users are willing to converse with
the chatbot, there are no indications of relationship development with the chatbot. Our
analysis also highlighted autistic users’ expectations of empathy from the chatbot. In the case
of the non-autistic users, they tried to stretch the conversational agent’s abilities by
continuously testing the AI conversational/cognitive skills. Moreover, non-autistic users were
content with Kuki’s basic conversational skills, while on the contrary, autistic participants
expected more in-depth conversations, as they trusted Kuki more. The findings offer insights
to a new human-chatbot interaction model specifically for users with autism with a view to
supporting them via companionship and social connectedness.
 Xin-Qiao Liu explained Risk factors and digital interventions for anxiety disorders in college
students: Stakeholder perspectives. The worldwide prevalence of anxiety disorders among
college students is high, which negatively affects countries, schools, families, and individual
students to varying degrees. This paper reviews the relevant literature regarding risk factors
and digital interventions for anxiety disorders among college students from the perspectives
of different stakeholders. Risk factors at the national and societal levels include class
differences and the coronavirus disease 2019 pandemic. College-level risk factors include the
indoor environment design of the college environment, peer relationships, student satisfaction
with college culture, and school functional levels. Family-level risk factors include parenting
style, family relationship, and parental level of education. Individual-level risk factors include
biological factors, lifestyle, and personality. Among the intervention options for college
students' anxiety disorders, in addition to traditional cognitive behavioural therapy,
mindfulness-based interventions, psychological counselling, and group counselling, digital
mental health interventions are increasingly popular due to their low cost, positive effect, and
convenient diagnostics and treatment. To better apply digital intervention to the prevention
and treatment of college students' anxiety, this paper suggests that the different stakeholders
form a synergy among themselves. The nation and society should provide necessary policy
guarantees, financial support, and moral and ethical supervision for the prevention and
treatment of college students' anxiety disorders. Colleges should actively participate in the
screening and intervention of college students' anxiety disorders. Families should increase
their awareness of college students' anxiety disorders and take the initiative to study and
understand various digital intervention methods. College students with anxiety disorders
should actively seek psychological assistance and actively accept and participate in digital
intervention projects and services. We believe that in the future, the application of methods
such as big data and artificial intelligence to improve digital interventions and provide
individualized treatment plans will become the primary means of preventing and treating
anxiety disorders among college students.
 Surjodeep Sarkar explained Towards Explainable and Safe Conversational Agents for Mental
Health: A Survey. Virtual Mental Health Assistants (VMHAs) are seeing continual advancements to
support the overburdened global healthcare system that gets 60 million primary care visits, and 6
million Emergency Room (ER) visits annually. These systems are built by clinical psychologists,
psychiatrists, and Artificial Intelligence (AI) researchers for Cognitive Behavioural Therapy (CBT).
At present, the role of VMHAs is to provide emotional support through information, focusing less on
developing a reflective conversation with the patient. A more comprehensive, safe and explainable
approach is required to build responsible VMHAs to ask follow-up questions or provide a well-
informed response. This survey offers a systematic critical review of the existing conversational
agents in mental health, followed by new insights into the improvements of VMHAs with contextual
knowledge, datasets, and their emerging role in clinical decision support. We also provide new
directions toward enriching the user experience of VMHAs with explainability, safety, and
wholesome trustworthiness.
CHAPTER-3
EXISTING SYSTEM
Traditional therapy methods, such as talk therapy and cognitive-behavioural therapy, have
been the primary form of mental health treatment for decades. While these methods have
proven effective for many, they have limitations. For example, traditional therapy can be
expensive, time-consuming, and not always accessible, especially for those living in rural or
remote areas. Additionally, there is a stigma attached to seeking therapy that prevents many
people from seeking help.

AI-driven therapy, on the other hand, is a relatively new approach to mental health treatment
that is gaining popularity. AI-driven therapy utilizes natural language processing (NLP) and
machine learning algorithms to simulate a conversation with a human therapist. These
chatbots use various techniques such as cognitive-behavioral therapy (CBT), dialectical
behavior therapy (DBT), and mindfulness to provide support and guidance to users.

One of the benefits of AI-driven therapy is its accessibility. Users can access therapy from
anywhere, at any time, and at a lower cost than traditional therapy. Additionally, AI-driven
therapy can provide an opportunity for those who may not feel comfortable speaking to a
human therapist to seek help.

There are several popular mental health apps that utilize AI to provide therapy and support to
users. For example, Woebot is an AI-powered chatbot that utilizes CBT techniques to provide
therapy to users. The app is designed to help users understand and manage their emotions,
providing personalized support and guidance.

Another popular mental health app is Talkspace, which provides users with access to a
licensed therapist via text message. While Talkspace therapists are human, the app utilizes AI
to match users with the right therapist and provide personalized treatment options.
The benefits of these apps are clear – they provide accessible, affordable, and personalized
mental health support. However, there are limitations. Users may not receive the same level
of care or attention as they would from a human therapist, and some users may find it
challenging to build a rapport with an AI-powered chatbot.

AI is also advancing mental health research and diagnostics. Machine learning algorithms can
analyze large datasets and identify patterns that human researchers may not be able to detect.
This technology has the potential to revolutionize mental health research and provide
breakthroughs in treatment options.

While the benefits of AI-driven therapy and support are clear, there are also ethical
considerations and privacy concerns that need to be addressed. One ethical concern is the
potential for AI to perpetuate biases and stereotypes. For example, if AI is trained on biased
data, it may inadvertently perpetuate those biases in its recommendations and treatment
options. Additionally, there is a concern about the potential for AI to replace human
therapists entirely, leading to job loss and a decrease in the quality of mental health care.

Privacy is also a significant concern when it comes to AI-driven therapy. Users may be
sharing sensitive information with an AI-powered chatbot, and there is a risk that this
information could be hacked or leaked. It's crucial that mental health apps and chatbots using
AI prioritize user privacy and take steps to protect user data.

To address these concerns, it's important to establish clear ethical guidelines for the use of AI
in mental health treatment. Mental health providers and app developers should prioritize user
privacy and take steps to ensure that AI is being used ethically and responsibly.

AI chatbots offer promise as complementary tools rather than a replacement for human
mental health professionals. A 2021 review of digital mental health interventions (DMHIs)
found AI chatbots to specifically help mental health professionals meet overwhelming service
demand. A 2023 systematic review and meta-analysis of randomized controlled trials (RCTs)
found AI chatbots to be acceptable for a wide range of mental health problems. For example,
an RCT found a fully automated conversational agent, Woebot, to be a feasible, engaging,
and effective way to deliver cognitive behavioral therapy (CBT) for anxiety and depression in
young adults. Both Woebot and Wysa show promise in establishing a therapeutic bond with
users.
Although AI chatbots are feasible as an engaging and acceptable way to deliver therapy, more
studies are required on what may facilitate a digital therapeutic alliance and reduce
misunderstandings. Mental health chatbot attrition rates are lower in comparison to other
digital interventions. However, dropout rates require attention, as does clarity around which
disorders they are useful for. Some reviews found high potential for AI chatbots in
identifying patients at risk of suicide, and in triage and treatment development through NLP
integrated with social media in real time.
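As a toy illustration of the keyword-level screening that such NLP-based triage builds on, consider the sketch below. This is deliberately simplified and is not a clinical tool: real systems use trained models, conversational context, and human review, and the keyword list and function name here are our own illustrative assumptions.

```python
# Toy keyword screen illustrating the idea behind NLP-based risk triage.
# NOT a clinical tool: the keyword list below is purely illustrative.
RISK_KEYWORDS = {"hopeless", "worthless", "self-harm", "suicide"}

def flag_for_review(post: str) -> bool:
    """Return True if a post contains any illustrative risk keyword."""
    words = post.lower().replace(",", " ").replace(".", " ").split()
    return any(w in RISK_KEYWORDS for w in words)

print(flag_for_review("I feel hopeless lately"))  # -> True
print(flag_for_review("Great day at the park"))   # -> False
```

A production system would replace this lookup with a trained classifier and route any flagged content to a human clinician rather than acting on it automatically.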

CHAPTER-4
PROPOSED SYSTEM
 We are developing a personal assistant chatbot that uses AI and ML algorithms. This
AI-driven assistant chatbot can detect mental health issues at early stages and support proper
diagnosis of mental illnesses such as anxiety and depression. The system has a user-friendly
interface and can use humorous words, jokes, and GIFs for personalized interaction with
users. The goal is a personalized mental health care assistant that is available 24/7 while
ensuring privacy and data security.
 Chatbots can be integrated into different platforms, including mobile applications, websites,
SMS texting, smart technologies, and virtual reality. Chatbots also vary in their complexity of
interaction; they can rely on systems ranging from straightforward rule-based models, like
ELIZA, to more advanced AI models using natural language processing (NLP) and machine
learning. Typically, after analyzing user dialogue content, chatbots respond through text-
based or voice-enabled conversations. User input is primarily written, via open text or
multiple-choice options, while output generated by the chatbot can be written, spoken, or
visual.
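To make the rule-based end of this spectrum concrete, a minimal exchange in the spirit of ELIZA can be sketched as follows. This is an illustrative toy, not any production system; the rule patterns and function name are our own, and NLP-based chatbots replace such hand-written rules with learned models.

```python
import re

# Minimal ELIZA-style rules: (regex pattern, response template).
# Purely illustrative; the catch-all ".*" rule matches anything else.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r".*", "Please tell me more."),
]

def rule_based_reply(message: str) -> str:
    """Return the first matching template, filled with captured text."""
    text = message.lower().strip().rstrip(".!?")
    for pattern, template in RULES:
        m = re.fullmatch(pattern, text)
        if m:
            return template.format(*m.groups())
    return "Please tell me more."

print(rule_based_reply("I feel anxious"))  # -> Why do you feel anxious?
print(rule_based_reply("Hello there"))     # -> Please tell me more.
```

The contrast with the NLP approach is that here every response is fixed in advance, whereas a learned model generalizes to inputs the designer never anticipated.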
 Chatbots have been incorporated into DMHIs to perform various functions, including
assistance, screening, psychoeducation, therapeutic intervention, monitoring behavior
changes, and relapse prevention. We briefly review some of the most common functions:
diagnosis, content delivery, and symptom management.
 A number of consumer behavior tendencies have identified the urgent need to further our
comprehension of artificial intelligence (AI) enabled chatbots in mental health treatment.
Furthermore, text-based chatbots such as Woebot have gained popularity by offering private
chats, as have AI-enabled virtual assistants on mobile phones and gadgets. Statistical
surveys point to an increasing willingness among consumers to receive treatment from
conversational agents or chatbots. The number of users who have downloaded mental health
chatbots demonstrates the growing popularity of these self-service technologies.
 Presently, the worldwide mental health care system is going through challenging times.
According to the World Health Organization, one in four people is affected by mental illness
at some point in their lives. Mental disorder is still the leading cause of health-related
economic hardship around the world. In particular, depression and anxiety are the most
frequent causes, affecting an estimated 322 million (depression) and 264 million (anxiety)
individuals globally. In spite of this growing burden, there is an acute shortage of mental
health professionals worldwide (9 per 100,000 people), particularly in Southeast Asia (2.5
per 100,000 people). Despite the fact that there are efficient and well-known therapies for
numerous mental and neurological disorders, only half of the people afflicted by mental
disorders receive them.
SOURCE CODE
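The script below reads an `intents.json` file from the working directory; the actual data file is not included in this document, but a minimal example of the structure the code assumes (a list of intents, each with a tag, a list of patterns, and a list of responses) might look like the following. The tags, patterns, and responses shown are illustrative placeholders, not the project's data.

```python
import json

# Minimal illustrative intents file matching the structure the script reads.
sample = {
    "intents": [
        {"tag": "greeting",
         "patterns": ["Hi", "Hello", "Hey there"],
         "responses": ["Hello! How are you feeling today?"]},
        {"tag": "sad",
         "patterns": ["I feel sad", "I am unhappy"],
         "responses": ["I'm sorry to hear that. Want to talk about it?"]},
    ]
}

with open('intents.json', 'w') as f:
    json.dump(sample, f, indent=2)

# Round-trip check: load it back the same way the training script does.
with open('intents.json') as f:
    data = json.load(f)
print(len(data['intents']))  # -> 2
```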
import numpy as np
import pandas as pd
import warnings
warnings.filterwarnings('ignore')
import json

# Load the intents file (tag / patterns / responses)
with open('intents.json', 'r') as f:
    data = json.load(f)
df = pd.DataFrame(data['intents'])
df

# Flatten the data: one row per (tag, pattern) pair
dic = {"tag": [], "patterns": [], "responses": []}
for i in range(len(df)):
    ptrns = df[df.index == i]['patterns'].values[0]
    rspns = df[df.index == i]['responses'].values[0]
    tag = df[df.index == i]['tag'].values[0]
    for j in range(len(ptrns)):
        dic['tag'].append(tag)
        dic['patterns'].append(ptrns[j])
        dic['responses'].append(rspns)
df = pd.DataFrame.from_dict(dic)
df
df['tag'].unique()

# Tokenize the patterns into integer sequences
from tensorflow.keras.preprocessing.text import Tokenizer
tokenizer = Tokenizer(lower=True, split=' ')
tokenizer.fit_on_texts(df['patterns'])
tokenizer.get_config()
vacab_size = len(tokenizer.word_index)
print('number of unique words = ', vacab_size)

from tensorflow.keras.preprocessing.sequence import pad_sequences
from sklearn.preprocessing import LabelEncoder

ptrn2seq = tokenizer.texts_to_sequences(df['patterns'])
X = pad_sequences(ptrn2seq, padding='post')
print('X shape = ', X.shape)

# Encode the intent tags as integer class labels
lbl_enc = LabelEncoder()
y = lbl_enc.fit_transform(df['tag'])
print('y shape = ', y.shape)
print('num of classes = ', len(np.unique(y)))
X
y
import tensorflow
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Embedding, LSTM, LayerNormalization, Dense, Dropout
from tensorflow.keras.utils import plot_model

# Stacked-LSTM intent classifier
model = Sequential()
model.add(Input(shape=(X.shape[1],)))
model.add(Embedding(input_dim=vacab_size + 1, output_dim=100, mask_zero=True))
model.add(LSTM(32, return_sequences=True))
model.add(LayerNormalization())
model.add(LSTM(32, return_sequences=True))
model.add(LayerNormalization())
model.add(LSTM(32))
model.add(LayerNormalization())
model.add(Dense(128, activation="relu"))
model.add(LayerNormalization())
model.add(Dropout(0.2))
model.add(Dense(128, activation="relu"))
model.add(LayerNormalization())
model.add(Dropout(0.2))
model.add(Dense(len(np.unique(y)), activation="softmax"))

model.compile(optimizer='adam',
              loss="sparse_categorical_crossentropy",
              metrics=['accuracy'])
model.summary()
plot_model(model, show_shapes=True)

model_history = model.fit(
    x=X, y=y,
    batch_size=10,
    callbacks=[tensorflow.keras.callbacks.EarlyStopping(monitor='accuracy', patience=3)],
    epochs=50)
import re
import random

def generate_answer(pattern):
    # Clean the input the same way the training patterns were cleaned
    text = []
    txt = re.sub('[^a-zA-Z\']', ' ', pattern)
    txt = txt.lower()
    txt = txt.split()
    txt = " ".join(txt)
    text.append(txt)

    # Predict the intent tag and pick a random matching response
    x_test = tokenizer.texts_to_sequences(text)
    x_test = np.array(x_test).squeeze()
    x_test = pad_sequences([x_test], padding='post', maxlen=X.shape[1])
    y_pred = model.predict(x_test)
    y_pred = y_pred.argmax()
    tag = lbl_enc.inverse_transform([y_pred])[0]
    responses = df[df['tag'] == tag]['responses'].values[0]
    print("you: {}".format(pattern))
    print("model: {}".format(random.choice(responses)))

generate_answer('Hi! How are you?')
generate_answer('Maybe I just didn\'t want to be born :)')
generate_answer('help me:')
generate_answer(':')
def chatbot():
    print("Chatbot: Hi! I'm your friendly chatbot. How can I assist you today?")
    while True:
        user_input = input("You: ")
        if user_input.lower() in ['quit', 'exit', 'q', 'bye']:
            print("Chatbot: Goodbye!")
            break
        generate_answer(user_input)

if __name__ == "__main__":
    chatbot()