RRL - English
I. INTRODUCTION
STEM medicine students, as they ensure that learning remains fair, meaningful, and
aligned with the core values of education. In medical and STEM fields, where
precision, critical thinking, and integrity are essential, AI-assisted learning presents
both opportunities and risks. Ethical concerns such as academic honesty, fairness in
assessments, the development of critical thinking skills, and data privacy must be
shallow learning and a lack of essential problem-solving abilities, which are critical
for future medical and scientific professionals. The purpose of this study is to explore
assessment integrity, and student development. By analyzing both the benefits and
challenges of AI-assisted education, this research aims to provide insights into how
The study uses a qualitative research method, including literature reviews, case
studies, and expert opinions, to examine how AI is currently being used in STEM
and medical education. Additionally, surveys and interviews with students and
educators will be conducted to gather perspectives on AI’s role in academic
performance. This research aims to discover what influences the students to use
learning easier, faster, and more efficient. AI tools help with research, writing, and
problem-solving, saving students time and effort. Many students use AI-powered
tutoring apps, grammar checkers, and language translators to improve their skills. AI
manage their workload. Curiosity about technology, peer influence, and the growing
are being developed, particularly those that pertain to education. Ethical concerns
surrounding AI, such as privacy issues, data security, algorithmic bias, and the
educators, and policymakers about the need for clear, high ethical standards. It
helps identify what has already been explored regarding AI’s role in academic
policies are being developed in education that require high ethical standards. Also,
conducting a literature review can assess benefits and risks; it helps balance AI’s
potential benefits with ethical challenges. Furthermore, some scholars may argue AI
levels the playing field, while others warn about ethical pitfalls. The review can
fairness, academic integrity, and institutional policies on AI use are key ethical
issues. Sources must be credible and reliable, with a focus on filtering for the topics that
are most relevant to the study to ensure the selection of relevant literature. By
gathered from various sources, including case studies from academic institutions.
healthcare education, IEEE Xplore for AI ethics and technology-related studies, and
of the selected literature reveals distinct patterns in the ethical concerns surrounding
AI-assisted academic performance in STEM-Medicine education. The studies highlight
both the benefits and risks of AI integration. On the positive side, AI improves
leading to unfair academic evaluations, while others emphasize the risk of over-
reliance on AI, which may reduce critical thinking and problem-solving skills among
ARTIFICIAL INTELLIGENCE
integration and unclear teaching approaches highlight the need for further study. As
stated by Triplett (2023), the application of Artificial Intelligence (AI) has become
associated with its implementation in this domain. In line with Triplett (2023), while
justifications have been made for emerging technologies’ transformative potential in
evaluate the efficiency of AI-driven teaching methods, and assess both the short-
term and long-term effects of AI in STEM fields. The study's findings are expected to
evidence and overlook the challenges of AI integration, which this research intends
to address.
B. METHODS USED
This study will involve teachers and students from various public and private
diversity based on factors like gender, grade level, and experience with AI. A mixed-
using validated instruments like the AI in STEM Education Survey. Interviews will
provide deeper insights into participants' experiences, with data analyzed through
thematic analysis to identify recurring themes. The data collection will occur in two
phases: first, online questionnaires via platforms like Google Forms, followed by
analyzed using descriptive and inferential statistics, while qualitative data will
undergo thematic analysis. Ethical standards will be upheld, with informed consent obtained from all participants.
ability to tailor learning experiences helps address challenges like varying student
learning paces and limited hands-on opportunities. Unlike previous research, this
study focuses specifically on AI's impact in STEM education, offering deeper insights
The findings emphasize the need for educators and policymakers to develop
strategies for effective AI integration in STEM education. While the study provides
long-term impacts. Future research should explore AI's influence on student career
paths, ethical considerations, and its application in specific STEM fields. Overall, this
education. It revolves around AI's impacts when used in education. Additionally, the
improve learning outcomes, although some may argue that there are
disadvantages to AI use due to its widespread use and the lack of its proper
The statistical analysis in this study concluded that future research on AI
integration in STEM education should focus on its impact on specific subjects like
in different disciplines. Long-term studies are also necessary to assess AI's influence
Additionally, exploring AI's role in promoting inclusivity and reducing bias in STEM
education can help create more equitable learning environments. Research should
challenges. Building on these findings will help advance AI integration and improve
educators utilize AI to generate ideas or simply find ways to create a more engaging
well as Roll and Wylie (2016), the use of AI has extended to education and teaching
activities in schools, and as AI continues to develop, more and more people understand its
along with Yang and Bai (2020), argued that AI has been integrated into educational
Intelligence can endlessly rearrange and perfect the concept of learning, which
(2018) and Wang (2020) also stated that AI technologies have the potential to
well as their effects on teaching methods and student learning outcomes while the
Through this study, the researchers aspire to provide extensive and valuable
insights, offering references and suggestions for future AI-driven educational
developments.
B. METHODS USED
tutoring systems, and analyze their influence on teaching and learning. In this
education that can serve as a basis for further investigation and changes in the
educational system.
Based on the results of the study, teaching efficiency and student learning
education. Still, issues such as data safety, teacher training, and increasing
educational inequality have been identified. As the authors state, even though AI could
the integration of AI into education, including its perks and downsides, which enables
for a greater understanding of AI's application within the context of education. This is
summarize, the study "A Review on Artificial Intelligence in Education" offers a broad
Education.
nature of AI within the educational environment. However, the study has some flaws.
irrelevant so quickly that constant revisions must be made to them to ensure
that they are accurate. To summarize, the study gives useful information on the role
PRIVACY CONCERNS
AI is designed to act like human thinking and learning, helping with problem-
solving, decision-making, and adapting over time. It has changed the way
et al., 2017; Cheng et al., 2023; Coss & Dhillon, 2019; Hille et al., 2015; Tiwari et al.,
2023; Wiegard & Breitner, 2019; Wieringa et al., 2021; Wu et al., 2024), the
economic and corporate landscape is crucial, considering the rapid increase in data
gathering and use for personalized advertising and customer profiling. Duan et al. (2019)
and Feng et al. (2021) mentioned that these tasks involve a diverse array of
making. Duan et al. (2010) and Song & Ma (2010) stated that a systematic search
strategy was devised utilizing five strategically chosen keywords: ‘data security’,
‘privacy concern’, ‘marketing’, and the synonymous terms ‘Artificial intelligence’ and
‘AI’. This search string encompassed key concepts directly relevant to the research
objectives. The search was conducted within the Scopus database to ensure the
This study evaluates the existing gaps in understanding the specific security
and privacy implications of AI-driven marketing within economic and corporate contexts. The
main objectives of this review are to address the following research questions: What
are the primary challenges and implications for businesses utilizing AI marketing
strategies, and how do these concerns impact consumer trust and regulatory
compliance? What strategies and frameworks can effectively address data security
and privacy concerns in AI-driven marketing practices, and how do these solutions
B. METHODS USED
thoroughly investigate existing research on the selected issue. This strategy entails
valuable insights that direct the research process and improve understanding of the
issue. Researchers followed the PRISMA (Preferred Reporting Items for Systematic
Reviews and Meta-Analyses) guidelines. A systematic search was conducted in the Scopus database using five key terms
related to AI, marketing, and data security. To keep the research relevant, only peer-
reviewed papers published in English between January 2014 and February 2024
were included. The focus was on studies within business and economics to explore
AI’s role in marketing and its impact on data privacy. These criteria ensured the
relevance and quality of the selected studies.
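For illustration only, the Boolean search string and the stated inclusion criteria could be sketched in code roughly as follows; the record fields, field values, and the sample record are hypothetical placeholders rather than the reviewed study's actual export format.

# Minimal sketch (assumed, not the study's code): building the search string
# and screening records against the stated inclusion criteria.
from datetime import date

SEARCH_STRING = (
    '("artificial intelligence" OR "AI") AND "marketing" '
    'AND ("data security" OR "privacy concern")'
)

def meets_inclusion_criteria(record: dict) -> bool:
    # Peer-reviewed, English, Jan 2014 - Feb 2024, business/economics scope
    return (
        record.get("peer_reviewed", False)
        and record.get("language") == "English"
        and date(2014, 1, 1) <= record["published"] <= date(2024, 2, 29)
        and record.get("subject_area") in {"Business", "Economics"}
    )

# Example with a single hypothetical record
sample = {
    "peer_reviewed": True,
    "language": "English",
    "published": date(2021, 6, 15),
    "subject_area": "Business",
}
print(meets_inclusion_criteria(sample))  # True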
The study effectively addressed data security and privacy concerns in AI-
enhance data security, build consumer confidence, and gain a competitive edge by
Advanced technologies like Radio Frequency Identification (RFID) and the Health
illustrate effective security practices for safeguarding data. Securing support from
senior management and tailoring global security standards to a company’s size are
crucial for mitigating security risks. Regulatory policies also play a vital role in
these strategies, organizations can improve security, drive innovation, and contribute
to sustainable economic growth in the digital age. While this study provides
about data security and privacy concerns, which also apply to AI-assisted academic
consent, and responsible AI use. Additionally, issues like bias, decision-making, and
ethical challenges introduced by AI are relevant in both contexts. Our study can use
The systematic literature review underscores the need for further research to
delve deeper into several key areas. Future investigations could focus on the
DISHONESTY
several formats that include text, images, code, and audio are called generative AI.
Hutson (2024) mentioned that these tools threaten the academic and intellectual
integrity of academic work while blurring the line between AI-generated content and
students' original work. As indicated by Nabee, Mageto, and Pisa (2020), the
emergence of and overreliance on AI tools underline the pressing demand for clear,
consistent, and credible guidelines to ensure that students will understand and
follow the ethical use of AI in education. According to Abbas, Jam, and Khan (2024),
the academic community is worried that students could misuse these technologies,
compromising academic integrity. Nabee, Mageto, and Pisa (2020) state that there
is a pressing requirement for clear, consistent, and research-based guidelines that
guarantee that students will be able to comprehend and adhere to ethical AI
declaration in academic work; to be precise, the study aims to answer why some
students at King's Business School are reluctant to reveal their use of AI tools.
The study seeks to broaden the understanding surrounding academic integrity and
also offers insights by using the Theory of Planned Behavior (TPB) as a
to academic integrity and robust assessment practices, and the research objective is
to develop strategies that enhance transparency and the ethical usage of AI through
clearer policies and institutional support. This study is guided by the following
research questions: (1) What are the reasons behind student non-compliance with AI
use declarations at King’s Business School? (2) How does non-compliance impact
within academic settings? With these questions, the study is able to provide
valuable insights for policymakers and educators in fostering academic integrity
B. METHODS USED
they also utilized a sequential mixed-methods design, starting with interviews with
instructors and peers, and experiences with AI non-compliance. The
research data were analyzed using thematic analysis as outlined by Braun and Clarke
(2006, 2023), which involved both theory-driven and emergent
coding in relation to the Theory of Planned Behavior (TPB). With these methods, the
study provides insights which may help in developing policy toward promoting
academic work. However, only 65% of these students consistently completed the AI
declaration on their coursework, and the main reason for the students' non-
compliance was the fear of getting caught. This study used thematic analysis to further reveal the study's five
consequences formed a significant barrier since many students were concerned that
disclosing AI usage would affect their grades or reputation; because of this, the
students called for clearer guidelines, consistent enforcement, and support in the
rather than solely compliance and the avoidance of penalties. These results showed
that there is a need for clear, research-based policies that foster trust and
This research study is closely related to our study since both of them discuss
the implications that artificial intelligence tools have on academic integrity and also
explore the ethical issues involved in the use of artificial intelligence. These
behavior (TPB) framework, which helps provide a solid theoretical backbone to the
research study and allows for a systematic investigation of compliance, and lastly the
education as AI tools are becoming more common than ever; hence, the findings
could be useful to policymakers and education administrators. Along with these
strengths, the study also has limitations. One of its main limitations is its
reliance on a single case study from King's Business School, which limits the
generalization of the findings. Despite the limitations, the study provides valuable
Perception of AI-giarism.
use it as a reference to get ideas, and some students resort to using AI tools to
answer their homework and do their research, which poses a threat to academic
integrity. According to Chan & Lee (2023) and Hosseini, Rasmussen, & Resnik (2023),
and also sustaining academic standards. Chan & Tsi (2023) and Cotton, Cotton, & Shipway
(2023) stated that the fundamental principles of academic integrity in writing
interest, maintaining data integrity and accuracy, preventing plagiarism and research
misconduct, respecting intellectual property rights, and demonstrating a willingness
principles form the foundation for maintaining the quality of academic integrity in
both scientific research and students' academic work. One of the most rampant and
students' work is authentic has become a challenge for educators, threatening the
integrity of academic work. Park (2003) mentioned that the act of using someone else's
work without the owner's permission or proper credit and presenting it as your
own work is called plagiarism. Wagner (2014) discusses the issues related to
plagiarism in academic work and how students' definitions and handling of
plagiarism have changed. Wagner (2014) also mentioned that plagiarism can range from a few
phrases to full papers or chapters, with the latter violating copyright laws. The actual
position of the plagiarized content inside the work, whether it is correctly referenced,
the author's intent to plagiarize, and the author's age and native language are all
using AI-generated content in higher education. In this study, the researchers delve
into students' perceptions of adopting AI tools in their work and examine their
plagiarism and AI-giarism offers new insights into the rapid advancement of AI tools.
This aligns with the study's goal of developing clear guidelines and policies
regarding AI-generated content to ensure fair recognition and credit for students' and
researchers' work.
B. METHODS USED
data collection was conducted to gather participants' information, along with survey
questions about traditional plagiarism and AI-giarism. Since there were no pre-
existing, credible questions related to AI-giarism given that this concept has
literature review and university discussions, and participants were selected through a
convenience sampling technique, and the data were analyzed using descriptive
giarism.
The results were presented in a table with two parts. The first part presents
students' conceptualization of plagiarism, and the results show that most participants
highlights the need for more discussions related to plagiarism. The second part of
use AI tools for brainstorming concepts or improving their original ideas. Scenario F8
conclusion, the second part indicates that participants have a limited understanding of
the ethical considerations of using AI in academic work. This underscores the need
academics.
misconduct, offering new knowledge about the use of AI tools that is often
participants, and the research study did not fully address the perspective of
educators, limiting its participants to students only; the educators' perspective is
crucial for a deeper and more comprehensive understanding of AI's impact on
academic integrity.
OVERRELIANCE
and helping teachers and students with their work. It can answer questions,
assignments, and even tutor students. However, relying too much on AI can be a
problem. Students might stop thinking for themselves, teachers might lose their role
in guiding students, and there could be issues with fairness and privacy. While AI is
useful, it should not replace real learning. Gao et al. (2022) identified a
which cause deviations from rational thinking, and heuristics, or mental shortcuts,
Xie et al. (2021) found that relying too much on AI without checking its accuracy can lead to
content, which is risky and may result in research misconduct, such as copying
dialogue systems. Relying too much on AI dialogue systems has raised serious
ethical issues, such as inaccurate information, bias, plagiarism, privacy risks, and
lack of transparency, which have not been properly addressed. To achieve this, the
study adopts the comprehensive five-step methodology for conducting systematic
literature reviews proposed by Macdonald et al. (2023). To address this issue, the
educational subjects and levels, what are the primary ethical concerns causing over-
challenges of overreliance on AI. Through this structured approach, the study aims
skills and to develop a framework for navigating the ethical pitfalls associated with
their use.
B. METHODS USED
research and education. Studies were screened based on inclusion and exclusion
process. The data was analyzed using thematic analysis to identify key factors such
This may lead users to place undue trust in these technologies, escalating the
might deter students from engaging in thorough research and forming their insights,
challenging the integration of these tools in ways that enhance rather than diminish
and critically evaluating information sources. It might also ultimately weaken the
and creativity, which are essential academic skills. Furthermore, accessibility and
equity become ethical concerns, as not all students have equal access to AI-
critical media literacy into educational programs to empower students with the ability
world. Future research should evaluate the cognitive effects of utilizing AI dialogue
worldwide. Specifically, it has been used in teaching and learning. While this
may be beneficial as it assists teaching and learning, arguments have been
ignited over the need for policies and practices governing AI use to maintain
integrity in the education field. According to Funa & Gabay (2024), Artificial
Intelligence (AI) has been a topic of interest since its inception in the 1950s, when
(as cited in Guo, 2015). Funa & Gabay (2024) noted that AI's capabilities have since
evolved significantly, especially with the advent of machine learning and big data in
the 21st century. These advancements have sparked renewed interest in the
application of AI across various fields, including education. Despite its long history,
engagement, and optimize educational outcomes (as cited in Zheng et al., 2021).
[...] Additionally, Funa & Gabay (2024) highlighted that despite these promising
practices.
the widespread use of AI tools, many policies intended to guide their ethical and
K-12 education system. To address this issue, the study explores several key
questions, such as what existing policies regulate AI use in education, how these
policies impact AI adoption, and what benefits they offer to both teachers and
effectiveness, this research aims to contribute valuable insights that could guide
B. METHODS USED
learning. Relevant literature was sourced from databases such as Google Scholar,
PubMed, ERIC, and Scopus, using specific keywords related to AI, policy, and
ensuring that only current and relevant research was analyzed. Additionally, the
study applied thematic analysis, following Braun and Clarke’s (2006) framework, to
identify key themes related to AI policies. These themes were categorized based on
education and science experts helped refine the themes, ensuring consistency and
accuracy. The study’s findings provide insights into existing policy gaps and
student interests, while AI Literacy highlights the need to integrate AI education into
and Equity stress the importance of designing AI systems that accommodate diverse
resources.
orientation is crucial, while AI-powered tools and data insights can improve teaching
The findings mainly showed policies and guidelines and how these can
maintain learning and teaching integrity. The study also delves into implementation
strategies from which educators and students can benefit. Because of this, the findings
can be helpful, as one of the sub-themes of our study is policies and
Additionally, this study recommends policies and guidelines that can potentially add
to the existing rules and regulations establishing limits to maintain integrity in the
educational field.
E. STRENGTHS AND WEAKNESSES
The systematic review, the integration of findings from various literature, and
the thematic analysis were applied appropriately in this study since they categorize the key
continuous training for educators and addressing infrastructure gaps may present
ongoing challenges.
Learning
main concerns. As stated by Chan (2023), a recent survey found that about one in
three college students in the U.S. have used AI to help with their assignments. More
than half of those students use it for most of their work. The use of text-generative
artificial intelligence (AI), such as ChatGPT, Bing, and the latest, Copilot, integrated
within the Microsoft Office suite, has been a growing concern in
academic settings in recent months. With almost half of the students saying their
professors or institutions have banned the tool for homework, the study noted that
some professors are considering whether to include ChatGPT in their lessons or join
calls to ban it. This has led to calls for stricter regulations and penalties for academic
misconduct involving AI. Chan (2023) added that another concern is that the use of AI
may lead to a decline in students' writing and critical thinking skills as
they become increasingly dependent on automation in their work. This could have a
negative impact on the quality of education and ultimately harm the students’
learning outcomes, some academics argue (as cited in Chan & Lee, 2023; Korn &
learning based on the findings. This framework is organized into three dimensions:
training. By ensuring that stakeholders are aware of their responsibilities and can take
appropriate actions accordingly, the framework fosters a nuanced understanding of
B. METHODS USED
teachers, and staff in Hong Kong to develop an AI education policy framework for
AI's impact on teaching and learning, the topics covered in the survey were major
To ensure that the results reflect the needs and values of all participants, the
data were collected via an online survey from a diverse group of stakeholders in the
the study, a convenience sampling method was employed for selecting the
respondents. Provided with an informed consent form prior to completing the survey,
from the open-ended questions in the survey, descriptive analysis was used to
analyze the survey data. The survey was completed by 457 undergraduate and
postgraduate students, as well as 180 teachers and staff members across various
university teaching and learning. The survey was conducted among 457 students
and 180 teachers and staff from different disciplines in Hong Kong universities.
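As an illustrative aside, the descriptive analysis mentioned above could look roughly like the following sketch, assuming hypothetical Likert-scale items and respondent roles; the column names and values are invented for demonstration and are not the study's actual survey data.

# Minimal sketch (assumed data): descriptive statistics for Likert-scale
# survey items, grouped by respondent role (student, teacher, staff).
import pandas as pd

responses = pd.DataFrame({
    "role": ["student", "student", "teacher", "staff", "student", "teacher"],
    "ai_improves_learning": [4, 5, 3, 4, 2, 4],  # 1 = strongly disagree ... 5 = strongly agree
    "ai_risk_of_misuse": [5, 4, 5, 4, 5, 3],
})

# Mean and standard deviation per item, broken down by role
summary = responses.groupby("role").agg(["mean", "std"])
print(summary.round(2))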
need for comprehensive AI policy in higher education that addresses the potential
risks.
and learning, addressing these issues through informed policy and institutional
For AI integration in education, the qualitative findings support the key areas
found in the quantitative data. In doing homework and assignments, the quantitative
data reveal that both students and teachers share concerns about the potential misuse
among students and teachers on the necessity for higher education institutions to
implement a plan for managing the potential risks associated with using generative
AI technologies.
The concerns of students taking advantage of using generative AI in their
technologies for all students, yet they also believe that AI technologies can provide
Based on its findings, the study underscores AI's role in
STEM education. The study centers on the behavior of students when it comes
to taking advantage of AI when doing their homework. Moreover, the findings aim to
guidelines and strategies for detecting and preventing the misuse of generative AI.
Schools and universities must take responsibility for decisions made regarding the
about data collection and usage, and being receptive to feedback and criticism. By
universities can foster trust and confidence among students and staff in AI
technology usage.
investigate how AI affects critical thinking, skill development, and ethical decision-
The literature reviewed suggests that while artificial intelligence (AI) is useful
teaching. However, its potential is not yet fully realized because of limited empirical
"the use of Artificial Intelligence (AI) has become more important in many domains,
including health care and education." While AI promises better benefits, its complete
Chassignol, Khoroshavin, Klimova, and Bilyatdinova (2018), and Roll and Wylie
Aldabbagh (2016), and Yang and Bai (2020) argue that AI's integration in learning
It has been discovered through research that there is diverse concern in the
customer insight, trend prediction, and personalization. Breward et al. (2017), Cheng
et al. (2023), Coss & Dhillon (2019), Hille et al. (2015), Tiwari et al. (2023), Wiegard
& Breitner (2019), Wieringa et al. (2021), and Wu et al. (2024) highlight the
gathering. Duan et al. (2019) and Feng et al. (2021) highlight the use of AI in
Duan et al. (2010) and Song & Ma (2010) built a systematic search strategy based
on keywords like "data security," "privacy concern," and "marketing" to identify high-
quality studies from the Scopus database. In the same way, the development of AI
tools for use in scholarly writing has raised integrity and ethical use concerns.
prevalent (Moorhouse, Yeo, & Wan, 2024). Its misuse compromises scholarly
work by students (Hutson, 2024). Abbas, Jam, and Khan (2024) highlight the
undermining real learning. In the same way, Nabee, Mageto, and Pisa (2020)
transparency, and integrity (Chan & Lee, 2023; Hosseini, Rasmussen, & Resnik, 2023).
Chan & Tsi (2023) and Cotton, Cotton, & Shipway (2023) highlight that scholarly integrity is
plagiarism avoidance.
not (Park, 2003). Wagner (2014) describes the evolving dynamics of plagiarism,
copyright. Intent, accuracy of citation, and the circumstances of the student decide
the degree of plagiarism. With increased use of AI, academic integrity requires more
personalized learning and helping students and teachers. AI can offer answers, help
with homework, and even tutor. But excessive dependence on AI is risky. Gao et al.
(2022) pointed out that users are likely to accept AI-provided answers, including
heuristics. Similarly, Xie et al. (2021) believed that unregulated reliance on AI can
in the 1950s, highlighting its remarkable advancements through the use of machine
learning and big data. They note how AI integration in education has revolutionized
learning outcomes (as cited in Guo, 2015; Zheng et al., 2021). Nevertheless, despite
application of AI-equipped tools such as ChatGPT and Bing for homework, the
nearly a third of U.S. college students have been known to use AI for assignments,
with more than half of them using it for the majority of their work. Some of the
and problem-solving has provoked concerns over decreased critical thinking abilities
among students, which could compromise the quality of education (as cited in Chan
& Lee, 2023; Korn & Kelly, 2023; Oliver, 2023; Zhai, 2022). These issues have
prompted demands for increased measures of control and punishment to curb AI-
personalized learning and enhanced engagement, but its use is a source of concern
Studies quantify the risks of AI-aided academic work, including plagiarism, diluted
critical thinking, and the blurring of distinctions between original work and AI-
generated work. Due to such concerns, researchers call for clearly defined policies
and guidelines for the regulation of AI usage in education so that it can enhance