RRL - English

The document discusses the ethical implications of AI-assisted academic performance in STEM medicine education, emphasizing the importance of ethical guidelines to ensure fairness and integrity in learning. It explores the benefits and challenges of AI in education, including its potential to enhance personalized learning while addressing concerns like academic honesty and data privacy. The study employs qualitative research methods, including literature reviews and surveys, to gather insights on AI's role in shaping educational outcomes and to inform best practices for responsible AI integration.

CHAPTER II

REVIEW OF RELATED LITERATURE

I. INTRODUCTION

Ethical considerations play a crucial role in the academic performance of

STEM medicine students, as they ensure that learning remains fair, meaningful, and

aligned with the core values of education. In medical and STEM fields, where

precision, critical thinking, and integrity are essential, AI-assisted learning presents

both opportunities and risks. Ethical concerns such as academic honesty, fairness in

assessments, the development of critical thinking skills, and data privacy must be

addressed to maintain the credibility and effectiveness of education. Without clear

ethical guidelines, students may become overly dependent on AI tools, leading to

shallow learning and a lack of essential problem-solving abilities, which are critical

for future medical and scientific professionals. The purpose of this study is to explore

the ethical implications of AI-assisted academic performance in STEM medicine

education, particularly focusing on how AI influences learning outcomes,

assessment integrity, and student development. By analyzing both the benefits and

challenges of AI-assisted education, this research aims to provide insights into how

institutions can integrate AI responsibly without compromising educational quality.

The study uses a qualitative research method, including literature reviews, case

studies, and expert opinions, to examine how AI is currently being used in STEM

and medical education. Additionally, surveys and interviews with students and
educators will be conducted to gather perspectives on AI’s role in academic

performance. This research also aims to discover what influences students to use artificial intelligence. Students often use artificial intelligence (AI) because it makes

learning easier, faster, and more efficient. AI tools help with research, writing, and

problem-solving, saving students time and effort. Many students use AI-powered

tutoring apps, grammar checkers, and language translators to improve their skills. AI

also provides personalized learning experiences, adapting to individual needs and

preferences. Additionally, AI reduces stress by helping students stay organized and

manage their workload. Curiosity about technology, peer influence, and the growing

importance of AI in future careers also encourage students to explore AI tools.

II. IMPORTANCE OF CONDUCTING A LITERATURE REVIEW

Conducting a literature review is essential in understanding how AI policies

are being developed, particularly those that pertain to education. Ethical concerns

surrounding AI, such as privacy issues, data security, algorithmic bias, and the

equitable treatment of students, have prompted discussions among scholars,

educators, and policymakers about the need for clear, high ethical standards. It

helps identify what has already been explored regarding AI’s role in academic

settings, particularly in STEM-Med education. In addition, it helps determine how AI

policies are being developed in education that requires high ethical standards. Also,

universities and policymakers need ethical guidelines for AI use in education. A

literature review can inform recommendations on best practices. Moreover, conducting a literature review makes it possible to assess benefits and risks, helping to balance AI's potential benefits against its ethical challenges. Furthermore, some scholars may argue that AI
levels the playing field, while others warn about ethical pitfalls. The review can

identify where AI policies are lacking or insufficient, particularly in educational

institutions that require specialized guidelines for STEM-Med programs.

III. METHODS: HOW DID WE CONDUCT OUR LITERATURE REVIEW?

Conducting a strong literature review on the ethical aspects of AI in academic performance within STEM-Medicine education requires careful selection of relevant and credible sources. A qualitative case study approach is used to explore the benefits, challenges, and ethical concerns surrounding AI in education. Key ethical issues include responsibility, bias, fairness, academic integrity, and institutional policies on AI use. Sources must be credible and reliable, and only the topics most relevant to the study are retained to ensure the selection of appropriate literature. Data is gathered from various sources, including case studies from academic institutions, scholarly research on AI ethics, regulatory and policy documents related to AI in education, and examinations of AI's role in STEM-Medicine education.

Research includes data from dependable online databases to achieve

comprehensive coverage such as Google Scholar for peer-reviewed articles and

academic papers, PubMed for research on AI applications in medicine and

healthcare education, IEEE Xplore for AI ethics and technology-related studies, and

platforms like SpringerLink and ScienceDirect for interdisciplinary research on AI in

STEM education. Thematic analysis is applied to identify common themes and

ethical concerns for analysis, providing a structured understanding of AI’s impact on

academic performance. Through descriptive analysis, a detailed examination of the selected literature reveals distinct patterns in the ethical concerns surrounding AI-assisted academic performance in STEM-Medicine education. The studies highlight both the benefits and the risks of AI integration. On the positive side, AI improves student engagement through adaptive learning systems, enhances personalized learning, and automates assessments. However, AI algorithms can perpetuate biases, leading to unfair academic evaluations, and several studies emphasize the risk of over-reliance on AI, which may weaken students' critical thinking and problem-solving skills.
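The thematic analysis described above can be illustrated with a minimal sketch: once excerpts from the literature have been assigned theme codes, tallying code frequencies gives a structured overview of recurring ethical concerns. The excerpts, theme labels, and counts below are hypothetical examples, not data from the reviewed studies.

```python
from collections import Counter

# Hypothetical coded excerpts: each tuple pairs a source excerpt with the
# ethical theme assigned to it during coding. All entries are illustrative.
coded_excerpts = [
    ("Adaptive systems raised engagement scores", "benefits"),
    ("Grading algorithm favored certain cohorts", "bias"),
    ("Students relied on AI for every assignment", "over-reliance"),
    ("Chat logs were stored without consent", "privacy"),
    ("Automated feedback shortened revision cycles", "benefits"),
    ("AI answers submitted as original work", "academic integrity"),
]

# Tally how often each theme appears across the coded excerpts.
theme_counts = Counter(theme for _, theme in coded_excerpts)

# Report themes from most to least frequent.
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```

A real analysis would of course involve iterative coding and interpretation; the tally is only the final descriptive step.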

IV. RELATED LITERATURES

ARTIFICIAL INTELLIGENCE

A. RESEARCH TITLE AND QUESTION

Artificial Intelligence in STEM Education

AI enhances STEM education through personalized learning and automation, but research on its true impact remains limited. Challenges in integration and unclear teaching approaches highlight the need for further study. As

stated by Triplett (2023), the application of Artificial Intelligence (AI) has become

increasingly significant in various sectors, such as healthcare and education. Within

STEM (Science, Technology, Engineering, and Mathematics) education, AI has

played a crucial role in facilitating personalized learning, advanced analytics, and

instructional automation. Despite the potential advantages that AI offers to STEM

education, there is a lack of comprehensive and empirical studies that thoroughly

examine the real impacts, integration challenges, and pedagogical approaches

associated with its implementation in this domain. In line with Triplett (2023), while
justifications have been made for emerging technologies’ transformative potential in

STEM education, the roadmap for their eventual implementation in schools is

underexplored (as cited in Chng, Tan, & Tan, 2023).

This research primarily aims to investigate how effective AI is in enhancing

STEM education. It seeks to identify the practical challenges of integrating AI,

evaluate the efficiency of AI-driven teaching methods, and assess both the short-

term and long-term effects of AI in STEM fields. The study's findings are expected to

offer valuable insights for educators, curriculum developers, policymakers, and

researchers. Furthermore, the results could support the creation of innovative

strategies to successfully implement AI in STEM education, potentially improving

students' learning outcomes. Additionally, many existing studies lack empirical

evidence and overlook the challenges of AI integration, which this research intends

to address.

B. METHODS USED

This study will involve teachers and students from various public and private

STEM education institutions, selected through purposive sampling to ensure

diversity based on factors like gender, grade level, and experience with AI. A mixed-

methods approach will be used, combining quantitative data from self-administered

questionnaires and qualitative data from semi-structured interviews. The

questionnaires will assess perceptions of AI's effectiveness, challenges, and impact,

using validated instruments like the AI in STEM Education Survey. Interviews will

provide deeper insights into participants' experiences, with data analyzed through

thematic analysis to identify recurring themes. The data collection will occur in two
phases: first, online questionnaires via platforms like Google Forms, followed by

interviews conducted in person or via video conferencing. Quantitative data will be

analyzed using descriptive and inferential statistics, while qualitative data will

undergo thematic analysis. Ethical standards will be upheld, with informed consent

obtained, participant anonymity maintained, and data securely stored.
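The descriptive side of the quantitative analysis outlined above can be sketched as follows. The survey items and 5-point Likert scores are invented for illustration; the study's actual instrument (the AI in STEM Education Survey) may differ.

```python
import statistics

# Hypothetical 5-point Likert responses (1 = Strongly Disagree,
# 5 = Strongly Agree) for two illustrative survey items; these values
# are invented for the sketch, not data from the study.
responses = {
    "AI improves my learning outcomes": [4, 5, 3, 4, 5, 4, 2, 5],
    "AI integration poses challenges": [3, 4, 4, 2, 5, 3, 4, 3],
}

# Descriptive statistics per item: mean and standard deviation summarize
# the central tendency and spread of agreement.
for item, scores in responses.items():
    mean = statistics.mean(scores)
    sd = statistics.stdev(scores)
    print(f"{item}: mean={mean:.2f}, sd={sd:.2f}")
```

Inferential statistics (e.g., comparing teacher and student means) would build on the same tabulated responses.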

C. RESULTS AND CONCLUSION

The study highlights AI's significant role in enhancing STEM education

through personalized learning, advanced analytics, and instructional automation. AI's

ability to tailor learning experiences helps address challenges like varying student

learning paces and limited hands-on opportunities. Unlike previous research, this

study focuses specifically on AI's impact in STEM education, offering deeper insights

into its potential to improve learning outcomes. However, challenges such as

inconsistent implementation and rapidly evolving AI technologies pose limitations to

its widespread use.

The findings emphasize the need for educators and policymakers to develop

strategies for effective AI integration in STEM education. While the study provides

valuable insights, it acknowledges the difficulty of generalizing results and predicting

long-term impacts. Future research should explore AI's influence on student career

paths, ethical considerations, and its application in specific STEM fields. Overall, this

study contributes to existing literature and offers practical recommendations for

maximizing AI's potential in enhancing STEM education.

D. HOW IT RELATES TO OUR STUDY


The findings of this study offer valuable insights into AI's role in STEM education, centering on AI's impacts when used in education. Additionally, the findings build on previous studies by showing that AI can potentially improve learning outcomes, even though some argue that widespread use, improper implementation, and other limitations create disadvantages. Because the study is centered on STEM education, parallel to our own, its findings add a specific perspective to our research. Accordingly, it deepens our understanding of AI use in education, which helps us fully acknowledge and recognize the ethical considerations of AI-assisted academic performance that our study examines.

E. STRENGTHS AND WEAKNESSES

Based on its statistical approach, the study concluded that future research on AI integration in STEM education should focus on its impact on specific subjects like mathematics, physics, and computer science to better understand its effectiveness

in different disciplines. Long-term studies are also necessary to assess AI's influence

on student learning outcomes, career paths, and critical thinking development.

Additionally, exploring AI's role in promoting inclusivity and reducing bias in STEM

education can help create more equitable learning environments. Research should

also investigate teachers' perspectives, professional development needs, and

readiness to adopt AI technologies. Furthermore, it is crucial to examine the ethical

implications of AI in education and develop guidelines to address potential

challenges. Building on these findings will help advance AI integration and improve

educational outcomes in STEM fields.


A. RESEARCH TITLE AND QUESTION

A Review on Artificial Intelligence in Education

In the evolving digital age, Artificial Intelligence (AI) is revolutionizing

education. Students have adopted AI tools in their academic endeavors, while

educators utilize AI to generate ideas or simply find ways to create a more engaging

class. According to Chassignol, Khoroshavin, Klimova, and Bilyatdinova (2018), as well as Roll and Wylie (2016), the use of AI has extended to education and teaching activities in schools, and as AI continues to advance, more and more people understand its significance in education. Colchester, Hagras, Alghazzawi, and Aldabbagh (2016), along with Yang and Bai (2020), argued that AI has been integrated into educational systems with significant impacts on teaching, and that artificial intelligence can endlessly rearrange and perfect the concept of learning, stimulating students' motivation, self-organizing abilities, and innovative skills. Tuomi (2018) and Wang (2020) also stated that AI technologies have the potential to make classroom management more effective, organized, and systematic.

The researchers aim to explore how AI technologies are used in various

areas, such as adaptive learning, teaching evaluations, and virtual classrooms, as

well as their effects on teaching methods and student learning outcomes, while the

research questions examine the advantages and challenges of AI implementation in

education, particularly in improving teaching quality and learning experiences.

Through this study, the researchers aspire to provide extensive and valuable
insights, offering references and suggestions for future AI-driven educational

developments.

B. METHODS USED

The study employs a comprehensive literature review, examining numerous sources to study the use of AI in education. The authors' objective is to delve into the use of AI in educational environments, especially adaptive learning and intelligent tutoring systems, and to analyze their influence on teaching and learning. In this manner, the study provides a thorough assessment of the positive and negative aspects of AI in education that can serve as a basis for further investigation and changes in the educational system.

C. RESULTS AND CONCLUSION

Based on the results of the study, teaching efficiency and student learning outcomes can improve when artificial intelligence's involvement in education personalizes the learning process and provides instant feedback. Still, issues such as data safety, teacher training, and increased educational inequality were identified. As the authors state, even though AI could be revolutionary for education, it is necessary to think through how to tackle these challenges to ensure real benefits.

D. HOW IT RELATES TO OUR STUDY


The study relates to our research study because the study context focuses on

the integration of AI into education, including its perks and downsides, which enables

for a greater understanding of AI's application within the context of education. This is

useful in regard to the focus on AI-facilitated performance in STEM medicine. To

summarize, the study "A Review on Artificial Intelligence in Education" offers a broad

perspective of AI technologies in educational settings, whereas our study elaborates

this by addressing ethical issues related to AI applications in STEM and Medical

Education.

E. STRENGTHS AND WEAKNESSES

The study clearly demonstrates AI's ability to improve learning experiences.

The study features the use of AI in intelligent tutoring systems as an example of

personalized learning that AI provides because it enhances student motivation and

learning results. Moreover, the integration of different AI technologies in different

educational situations gives the audience a rich comprehension of the multifaceted

nature of AI within educational environments. However, the study has some flaws. Because technologies advance so quickly, some of its content may become outdated so promptly that constant revisions are needed to keep it accurate. To summarize, the study gives useful information on the role of AI in education but offers insufficient coverage.


PRIMARY CONCERNS OF AI USE

PRIVACY CONCERNS

A. RESEARCH TITLE AND QUESTION

Data Security and Privacy Concerns of AI-driven Marketing in the Context of the Economics and Business Field: An Exploration into Possible Solutions

AI is designed to act like human thinking and learning, helping with problem-

solving, decision-making, and adapting over time. It has changed the way

businesses approach marketing by allowing them to understand customers better,

predict trends, and create more personalized experiences. Several studies (Breward et al., 2017; Cheng et al., 2023; Coss & Dhillon, 2019; Hille et al., 2015; Tiwari et al., 2023; Wiegard & Breitner, 2019; Wieringa et al., 2021; Wu et al., 2024) stress that dealing with privacy issues in AI-powered marketing within the economic and corporate landscape is crucial, considering the rapid increase in data gathering and use for personalized advertising and customer profiling. These tasks involve a diverse array of activities, such as problem-solving, speech recognition, learning, and decision-making (Duan et al., 2019; Feng et al., 2021; Duan et al., 2010; Song & Ma, 2010). A systematic search strategy was devised utilizing five strategically chosen keywords: ‘data security’, ‘privacy concern’, ‘marketing’, and the synonymous terms ‘artificial intelligence’ and ‘AI’. This search string encompassed key concepts directly relevant to the research objectives, and the search was conducted within the Scopus database to ensure the inclusion of high-quality peer-reviewed research.

This study evaluates the existing gaps in understanding the specific security and privacy implications of AI-driven marketing within the economic and business context. The main objectives of this review are to address the following research questions: What are the primary challenges and implications for businesses utilizing AI marketing strategies, and how do these concerns impact consumer trust and regulatory compliance? What strategies and frameworks can effectively address data security and privacy concerns in AI-driven marketing practices, and how do these solutions impact business operations, consumer trust, and regulatory compliance?

B. METHODS USED

This study employed a systematic literature review (SLR) strategy to

thoroughly investigate existing research on the selected issue. This strategy entails

systematically collecting, merging, and evaluating relevant material to provide

valuable insights that direct the research process and improve understanding of the

issue. Researchers followed the PRISMA (Preferred Reporting Items for Systematic

Reviews and Meta-Analyses) approach as described by Moher et al. (2009). A

systematic search was conducted in the Scopus database using five key terms

related to AI, marketing, and data security. To keep the research relevant, only peer-

reviewed papers published in English between January 2014 and February 2024
were included. The focus was on studies within business and economics to explore

AI’s role in marketing and its impact on data privacy. These criteria ensured the

selection of high-quality and relevant literature.
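The screening step of a systematic review like this can be sketched as a simple filter over search records. The records, field names, and screening function below are hypothetical illustrations of the stated inclusion criteria (peer-reviewed, English, 2014-2024, business/economics scope), not part of the PRISMA guideline or an actual Scopus export.

```python
# Hypothetical search records; fields mirror the inclusion criteria the
# review describes. All entries are invented for illustration.
records = [
    {"title": "AI personalization and consumer trust", "year": 2021,
     "language": "English", "peer_reviewed": True, "field": "business"},
    {"title": "Early data-mining ethics", "year": 2012,
     "language": "English", "peer_reviewed": True, "field": "business"},
    {"title": "KI-Marketing und Datenschutz", "year": 2020,
     "language": "German", "peer_reviewed": True, "field": "economics"},
    {"title": "AI privacy blog roundup", "year": 2023,
     "language": "English", "peer_reviewed": False, "field": "business"},
]

def meets_criteria(rec):
    """Apply the review's screening rules to one record."""
    return (rec["peer_reviewed"]
            and rec["language"] == "English"
            and 2014 <= rec["year"] <= 2024
            and rec["field"] in {"business", "economics"})

included = [r for r in records if meets_criteria(r)]
excluded = len(records) - len(included)
print(f"included={len(included)}, excluded={excluded}")
```

In PRISMA reporting, the counts of included and excluded records at each stage feed the flow diagram.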

C. RESULTS AND CONCLUSION

The study found that effectively addressing data security and privacy concerns in AI-driven marketing requires a holistic approach that integrates technological innovations, organizational strategies, and regulatory frameworks. Companies can

enhance data security, build consumer confidence, and gain a competitive edge by

implementing strong security measures and investing in customer data insurance.

Advanced technologies like Radio Frequency Identification (RFID) and the Health

Information Technology Acceptance Model (HITAM) in health tracking devices

illustrate effective security practices for safeguarding data. Securing support from

senior management and tailoring global security standards to a company’s size are

crucial for mitigating security risks. Regulatory policies also play a vital role in

reducing perceived risks and strengthening data protection through ethical

frameworks and privacy-focused service models across industries. By adopting

these strategies, organizations can improve security, drive innovation, and contribute

to sustainable economic growth in the digital age. While this study provides

important insights, further research is necessary to develop and validate integrated

frameworks that incorporate technological, organizational, and regulatory

approaches to enhance data security and consumer trust.


D. HOW IT RELATES TO OUR STUDY

The article on AI-driven marketing discusses concerns about data security and privacy that also apply to AI-assisted academic performance in STEM medicine education. Both areas involve handling sensitive information, such as student data in education, raising ethical questions about privacy, consent, and responsible AI use. Additionally, issues like bias, decision-making, and ethical challenges introduced by AI are relevant in both contexts. Our study can use these insights to emphasize the importance of ethical guidelines in AI-assisted learning, ensuring transparency, fairness, and accountability in medical education.

E. STRENGTHS AND WEAKNESSES

The systematic literature review underscores the need for further research to

delve deeper into several key areas. Future investigations could focus on the

development and validation of comprehensive frameworks integrating technological

advancements, organizational tactics, and regulatory guidelines to bolster data

security measures and enhance consumer trust.

DISHONESTY

A. RESEARCH TITLE AND QUESTION

Addressing Students Non-compliance in AI use Declaration: Implications for

Academic Integrity and Assessment in Higher Education

Dishonesty in academic writing is no longer a new issue, and the usage of AI tools to create academic work has become a frequent scenario nowadays. Moorhouse, Yeo, and Wan (2024) stated that technologies able to produce content in several formats, including text, images, code, and audio, are called generative AI. Hutson (2024) mentioned that these tools threaten the academic and intellectual integrity of academic work while blurring the line between AI-generated content and students' original work. As indicated by Nabee, Mageto, and Pisa (2020), the emergence of and over-reliance on AI tools underlines the pressing demand for clear, consistent, and credible guidelines to ensure that students understand and follow the ethical use of AI in education. According to Abbas, Jam, and Khan (2024), the academic community is worried that students could misuse these technologies to complete assessments, evading genuine learning experiences and compromising academic integrity. Nabee, Mageto, and Pisa (2020) note that there is a pressing requirement for clear, consistent, and research-based guidelines that guarantee that students will be able to comprehend and adhere to ethical AI usage in educational contexts.

The study examines the factors behind students' non-compliance with AI use declarations in academic work; to be precise, it aims to answer why some students at King's Business School are reluctant to reveal their use of AI tools. The study seeks to broaden the understanding surrounding academic integrity and offers insights by using the Theory of Planned Behavior (TPB) as a framework for understanding students' AI disclosure decisions, examining attitudes, subjective norms, and perceived behavioral control as influencing factors. Its objectives involve identifying barriers to compliance, such as fear of academic punishment, lack of clarity about expected behaviors, inconsistent application of rules, and peer dynamics; exploring the extent to which risks to compliance relate to academic integrity and robust assessment practices; and developing strategies that enhance transparency and the ethical usage of AI through clearer policies and institutional support. This study is guided by the following research questions: (1) What are the reasons behind student non-compliance with AI use declarations at King's Business School? (2) How does non-compliance impact perceptions of academic integrity and assessment practices? (3) What strategies can be implemented to enhance compliance and ensure transparency in AI use within academic settings? With these questions, the study is able to provide valuable insights for policymakers and educators in fostering academic integrity in the usage of AI tools.

B. METHODS USED

The researchers used a single case study to investigate students' non-compliance with the use of AI in academic assessments at King's Business School. They also utilized a sequential mixed-methods design, starting with interviews on themes related to students' perceptions of AI usage, possible risks, the role of instructors and peers, and experiences with AI non-compliance. The research data were analyzed using thematic analysis as outlined by Braun and Clarke (2006, 2023), which followed a process of both theory-driven and emergent coding in relation to the Theory of Planned Behavior (TPB). With these methods, the study provides insights which may help in developing policy toward promoting academic integrity within higher education.

C. RESULTS AND CONCLUSION


The results showed that 79% of the students use AI tools in their academic work. However, only 65% of these students consistently completed the AI declaration on their coursework, and the main reasons for students' non-compliance include fear of academic repercussions, such as accusations or fear of getting caught. The study's thematic analysis further revealed five prominent themes associated with non-compliance. First, fear of academic consequences formed a significant barrier, since many students were concerned that disclosing AI usage would affect their grades or reputation; because of this, the students called for clearer guidelines, consistent enforcement, and support in the form of workshops on ethical AI use to boost compliance. A change in the framing of AI policies was also suggested to encourage academic innovation and transparency rather than solely compliance and the avoidance of penalties. These results showed the need for clear, research-based policies that foster trust and encourage honest disclosures about artificial intelligence use in academic contexts.
D. HOW IT RELATES TO OUR STUDY

This research study is closely related to our study since both of them discuss

the implications that artificial intelligence tools have on academic integrity and also

explore the ethical issues involved in the use of artificial intelligence. These

researchers' findings affirm our study because it highlights significant issues,

including students' fears about academic consequences, confusion over institutional

guidelines, and inconsistency in enforcing AI policies, which happen to be one of

several issues in STEM medicine education.

E. STRENGTHS AND WEAKNESSES


The study has many strengths. First, its use of the Theory of Planned Behavior (TPB) framework provides a good theoretical backbone to the research and allows for a systematic investigation of compliance. The study is also contextually relevant because it deals with an emerging challenge in higher education as AI tools become more common than ever; hence, the findings could be useful to policymakers and education administrators. Along with these strengths, the study also has limitations, the main one being its reliance on a single case study from King's Business School, which limits the generalization of the findings. Despite these limitations, the study provides valuable insight into the complexities of AI use in academic integrity.

A. RESEARCH TITLE AND QUESTION

Is AI Changing the Rules of Academic Misconduct? An In-depth Look at Students'

Perception of Ai-giarism.

AI tools undoubtedly make a drastic change in students' academics. Many use them as a reference to get ideas, and some students resort to using AI tools to answer their homework and do their research, which poses a threat to academic integrity. According to Chan and Lee (2023) and Hosseini, Rasmussen, and Resnik (2023), adherence to ethical standards in academic writing is crucial for guaranteeing fairness, integrity, and authenticity, protecting research rights, maintaining transparency and truth, and sustaining academic standards. Chan and Tsi (2023) and Cotton, Cotton, and Shipway (2023) stated that the fundamental principles of academic integrity in writing and research should include authorship criteria, transparency regarding conflicts of interest, maintaining data integrity and accuracy, preventing plagiarism and research misconduct, respecting intellectual property rights, and demonstrating a willingness to retract or amend published work in cases of errors or misconduct. These

principles form the foundation for maintaining the quality of academic integrity in

both scientific research and students' academic work. One of the biggest and most rampant concerns related to AI tools is plagiarism. Nowadays, assessing whether students' work is authentic has become a challenge for educators, threatening the integrity of academic work. Park (2003) mentioned that the act of using someone else's work without the owner's permission or proper credit and presenting it as one's own is called plagiarism. Wagner (2014) discusses the issues related to plagiarism in academic work and how students redefine and handle plagiarism. Wagner (2014) also mentioned that plagiarism can range from a few phrases to full papers or chapters, with the latter violating copyright laws. The actual position of the plagiarized content within the work, whether it is correctly referenced, the author's purpose in plagiarizing, and the author's age and native language are all aspects that can influence the extent of the plagiarism.

The research study sought to determine what is considered a violation when using AI-generated content in higher education. In this study, the researchers delve into students' perceptions of adopting AI tools in their work and examine their understanding of plagiarism and AI-giarism. By exploring these aspects, the researchers aim to identify gaps in knowledge, understand key concepts integral to evolving academic integrity, and investigate the relationship between traditional plagiarism and the emerging concept of AI-giarism. Examining traditional plagiarism alongside AI-giarism offers new insights into the rapid advancement of AI tools. This aligns with the study's goal of developing clear guidelines and policies regarding AI-generated content to ensure fair recognition and credit for students' and researchers' work.

B. METHODS USED

The researchers utilized an online questionnaire consisting of three parts to explore students' experiences, attitudes, and understanding. The questionnaire included Likert-scale questions (ranging from Strongly Disagree to Strongly Agree) to assess students' understanding of the use and misuse of AI tools. Demographic data were collected to gather participants' information, along with survey questions about traditional plagiarism and AI-giarism. Since there were no pre-existing, credible questions related to AI-giarism, the concept having emerged only recently, the researchers opted to develop new questions based on a literature review and university discussions. Participants were selected through a convenience sampling technique, and the data were analyzed using descriptive analysis to understand the participants' understanding of plagiarism and AI-giarism.
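The descriptive analysis of Likert-scale responses described above can be sketched computationally as follows. This is only a minimal illustration of how mean scores and percentage agreement might be tabulated; the item wording and responses used here are hypothetical, not the study's actual data.

```python
# Minimal sketch of descriptive analysis for Likert-scale survey data.
# The survey item and responses below are hypothetical illustrations.

LIKERT = {"Strongly Disagree": 1, "Disagree": 2, "Neutral": 3,
          "Agree": 4, "Strongly Agree": 5}

def describe(item, responses):
    """Return the mean score and percentage agreement for one survey item."""
    scores = [LIKERT[r] for r in responses]
    mean = sum(scores) / len(scores)
    # "Agreement" counts Agree (4) and Strongly Agree (5) responses.
    agree = sum(1 for s in scores if s >= 4) / len(scores) * 100
    return {"item": item, "mean": round(mean, 2), "percent_agree": round(agree, 1)}

responses = ["Agree", "Strongly Agree", "Neutral", "Agree", "Disagree"]
print(describe("Copying AI-generated text without citation is plagiarism",
               responses))  # mean 3.6, 60.0% agreement
```

Summaries of this kind are what a results table of students' conceptualizations of plagiarism and AI-giarism would typically report for each item.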

C. RESULTS AND CONCLUSION

The results were presented in a two-part table. The first part presents students' conceptualization of plagiarism, and the results show that most participants have a general understanding of traditional plagiarism in an academic context, which highlights the need for more discussions related to plagiarism. The second part of the table focuses on students' perceptions of AI-giarism and includes three groups of scenarios. Scenarios F1 to F2 involve the most direct use of AI tools to generate content. Scenarios F3 to F7 explore how students use AI tools for brainstorming concepts or improving their original ideas. Scenarios F8 to F11 involve grammar correction and finding information. In conclusion, the second part indicates that participants have a limited understanding of the ethical considerations of using AI in academic work, underscoring the need for discussions and educational initiatives regarding the ethical implications of AI in academics.

D. HOW IT RELATES TO OUR STUDY

This study explores students' perceptions of AI-generated content and the ethical implications surrounding academic work, specifically plagiarism and academic integrity. Similarly, our research examines the ethical boundaries of AI assistance in STEM medicine education, where academic integrity is crucial due to the significance of the medical knowledge and practices involved.

E. STRENGTHS AND WEAKNESSES

The study provides valuable insight into AI-related academic misconduct, offering new knowledge about a use of AI tools that is often overlooked in discussions. However, its weaknesses are the limited number of participants and the exclusion of educators' perspectives; the study included only students, even though educators' views are crucial for a deeper and more comprehensive understanding of AI's impact on academic integrity.
OVERRELIANCE

A. RESEARCH TITLE AND QUESTION

The Effects of Over-Reliance on AI Dialogue Systems on Students’

Cognitive Abilities: A Systematic Review

AI is changing the way students learn by making lessons more personalized and helping teachers and students with their work. It can answer questions, assist with assignments, and even tutor students. However, relying too much on AI can be a problem: students might stop thinking for themselves, teachers might lose their role in guiding students, and there could be issues with fairness and privacy. While AI is useful, it should not replace real learning. Gao et al. (2022) identified a troubling pattern in which users tend to rely excessively on AI dialogue systems, frequently accepting their responses, including AI hallucinations, without verification. This excessive dependence is further intensified by cognitive biases, which cause deviations from rational thinking, and heuristics, or mental shortcuts, which lead to the uncritical acceptance of AI-generated content. Additionally, Xie et al. (2021) found that relying too much on AI without checking its accuracy can lead to mistakes in classification and understanding. AI can create incorrect or misleading content, which is risky and may result in research misconduct, such as copying others' work, making up data, or changing results dishonestly.

This study aims to investigate over-reliance on AI dialogue systems in educational and research contexts, with a particular focus on its impact on decision-making, critical thinking, and analytical thinking. Relying too much on AI dialogue systems has raised serious ethical issues, such as inaccurate information, bias, plagiarism, privacy risks, and lack of transparency, which have not been properly addressed. To achieve this, the study adopts the comprehensive five-step methodology for conducting systematic literature reviews proposed by Macdonald et al. (2023). To address the issue, the study explores several key questions, such as how over-reliance on AI dialogue systems affects critical and analytical thinking abilities across different educational subjects and levels, and what the primary ethical concerns driving over-reliance on AI dialogue systems in research and education are. It also examines the challenges posed by over-reliance on AI. Through this structured approach, the study aims to provide a nuanced understanding of how these systems influence users' cognitive skills and to develop a framework for navigating the ethical pitfalls associated with their use.

B. METHODS USED

The researchers conducted a systematic literature review (SLR) to investigate the contributing factors and effects of over-reliance on AI dialogue systems in research and education. Studies were screened based on inclusion and exclusion criteria, following the PRISMA framework to ensure transparency in the selection process. The data were analyzed using thematic analysis to identify key factors such as AI hallucination, algorithmic bias, plagiarism, privacy concerns, and transparency issues, along with their effects on cognitive abilities, including decision-making, critical thinking, and analytical thinking. Collaboration with education and science experts helped refine the themes, ensuring consistency and accuracy.

C. RESULTS AND CONCLUSION

The review found that users may place undue trust in these technologies, escalating the risk of dependency. Such over-reliance could impede the cultivation of essential skills in medical students, including critical thinking, problem-solving, and effective communication. The convenience offered by AI tools in providing quick answers might deter students from engaging in thorough research and forming their own insights, challenging the integration of these tools in ways that enhance rather than diminish critical faculties and problem-solving abilities. In conclusion, over-reliance could lead to reduced effort in crafting well-structured sentences, adhering to proper grammar and spelling, and critically evaluating information sources, and it might ultimately weaken students' ability to perform independent analysis and interpretation.

D. HOW IT RELATES TO OUR STUDY

AI can be misused for plagiarism, cheating, or automated content generation, raising questions about originality and fair assessment. Moreover, while AI provides convenience, excessive dependence may hinder critical thinking, problem-solving, and creativity, which are essential academic skills. Furthermore, accessibility and equity become ethical concerns, as not all students have equal access to AI-powered tools, potentially widening the education gap.

E. STRENGTHS AND WEAKNESSES


The systematic review recommends that educators and policymakers incorporate critical media literacy into educational programs to empower students to assess AI-generated material critically. This involves fostering an understanding of the fundamental mechanisms behind AI technologies, their potential biases, and their ethical implications. Institutions should introduce AI literacy initiatives that stress the responsible use of AI technologies, underscoring the necessity of preserving cognitive skills such as critical thinking and analytical reasoning in an automated world. Future research should evaluate the cognitive effects of utilizing AI dialogue systems in academic environments; such investigations could yield more definitive evidence to inform the establishment of best practices for AI incorporation.

POLICIES AND GUIDELINES OF AI-USE

A. RESEARCH TITLE AND QUESTION

Policies and Guidelines and Recommendations on AI Use in Teaching and

Learning: A Meta-Synthesis Study

Over the years, artificial intelligence (AI) has developed immensely worldwide and, in particular, has come to be used in teaching and learning. While this may be beneficial, as it assists teaching and learning, arguments have arisen because policies and practices governing AI use are needed to maintain integrity in the education field. According to Funa and Gabay (2024), artificial intelligence has been a topic of interest since its inception in the 1950s, when early AI research sought to create systems capable of simulating human intelligence (as cited in Guo, 2015). Funa and Gabay (2024) noted that AI's capabilities have since evolved significantly, especially with the advent of machine learning and big data in the 21st century. These advancements have sparked renewed interest in the application of AI across various fields, including education. Despite its long history, the recent integration of AI into educational settings marks a transformative shift, offering unprecedented opportunities to personalize learning, enhance student engagement, and optimize educational outcomes (as cited in Zheng et al., 2021). [...] Additionally, Funa and Gabay (2024) highlighted that despite these promising advancements, several challenges complicate the successful integration of AI in educational contexts, necessitating a thorough examination of existing policies and practices.

This study evaluates current policies governing AI integration in teaching and

learning. As AI technologies rapidly advance, there remains limited understanding of

how these developments are regulated, particularly in educational settings. Despite

the widespread use of AI tools, many policies intended to guide their ethical and

practical application are often outdated, inconsistent, or insufficient, especially in the

K-12 education system. To address this issue, the study explores several key

questions, such as what existing policies regulate AI use in education, how these

policies impact AI adoption, and what benefits they offer to both teachers and

students. It also examines the challenges faced by educators and learners in

implementing these policies. By identifying policy gaps and assessing their

effectiveness, this research aims to contribute valuable insights that could guide

policymakers and educational institutions in establishing stronger, more inclusive,


and practical AI governance, ensuring improved educational outcomes while

minimizing potential risks.

B. METHODS USED

The researchers conducted a systematic literature review (SLR) using meta-synthesis to analyze AI integration policies in education from 2020 to 2024. This

approach allowed them to combine findings from multiple qualitative studies,

providing a comprehensive understanding of how AI policies impact teaching and

learning. Relevant literature was sourced from databases such as Google Scholar,

PubMed, ERIC, and Scopus, using specific keywords related to AI, policy, and

education. Studies were screened based on inclusion and exclusion criteria,

ensuring that only current and relevant research was analyzed. Additionally, the

study applied thematic analysis, following Braun and Clarke’s (2006) framework, to

identify key themes related to AI policies. These themes were categorized based on

their relevance to learners, teachers, or administrators, highlighting issues such as

ethical AI use, AI literacy, and policy implementation challenges. Collaborating with

education and science experts helped refine the themes, ensuring consistency and

accuracy. The study’s findings provide insights into existing policy gaps and

challenges in AI integration, offering a foundation for future policy development to

support responsible AI use in education.

C. RESULTS AND CONCLUSIONS

This paper identifies three major themes: policies and guidelines,

implementation strategies, and practical constraints. Under policies and guidelines,


key subthemes include Ethical AI Use, AI Literacy, and Inclusivity and Equity. Ethical

AI Use emphasizes the importance of responsible and fair AI practices to protect

student interests, while AI Literacy highlights the need to integrate AI education into

curricula to enhance understanding and application of AI technologies. Inclusivity

and Equity stress the importance of designing AI systems that accommodate diverse

student needs and minimize educational disparities, ensuring equal access to AI

resources.

The study also outlines implementation strategies such as Student

Orientation and Professional Development, Enhanced Teaching Tools, Data-Driven

Insights, Improved Learning Outcomes, and Streamlined Administrative Processes.

Preparing educators and students to use AI effectively through training and

orientation is crucial, while AI-powered tools and data insights can improve teaching

methods and student engagement.

D. HOW IT RELATES TO OUR STUDY

The findings mainly presented policies and guidelines and how these can maintain integrity in learning and teaching. The study also delves into implementation strategies from which educators and students can benefit. These findings are helpful because one of the sub-themes of our study is policies and guidelines in AI-assisted academic performance. The study can be useful in determining the boundaries within which AI use can be considered ethical in academics. Additionally, this study recommends policies and guidelines that could add to the existing rules and regulations, establishing limits to maintain integrity in the educational field.
E. STRENGTHS AND WEAKNESSES

The study's systematic review and integration of findings from various literatures, together with its thematic analysis, were applied effectively, categorizing its key strengths: an emphasis on inclusivity, ethical AI use, and improving educational outcomes. It also highlights the importance of collaboration and resource allocation for effective AI integration. However, a notable weakness is the lack of concrete strategies to overcome technical and financial barriers, as well as limited guidance on measuring AI's long-term effectiveness in education. Additionally, ensuring continuous training for educators and addressing infrastructure gaps may present ongoing challenges.

A. RESEARCH TITLE AND QUESTION

A Comprehensive AI Policy Education Framework for University Teaching and

Learning

Students use AI tools, such as essay-generating software, to finish their homework or coursework. Because of this, such use has become one of the main concerns of schools and universities. As stated by Chan (2023), a recent survey found that about one in three college students in the U.S. have used AI to help with their assignments, and more than half of those students use it for most of their work. The use of text-generative artificial intelligence (AI), such as ChatGPT, Bing, and most recently Copilot, integrated within the Microsoft Office suite, has been a growing concern in academic settings in recent months. With almost half of the students saying their professors or institutions have banned the tool for homework, the study noted that some professors are considering whether to include ChatGPT in their lessons or to join calls to ban it. This has led to calls for stricter regulations and penalties for academic misconduct involving AI. Chan (2023) added that another concern is that the use of AI may lead to a decline in students' writing and critical thinking skills as they become increasingly dependent on automation in their work. This could have a negative impact on the quality of education and ultimately harm students' learning outcomes, some academics argue (as cited in Chan & Lee, 2023; Korn & Kelly, 2023; Oliver, 2023; Zhai, 2022).

The objective of this study is to develop an AI education policy for higher education by examining the perceptions and implications of text-generative AI technologies. Based on its findings, the study proposes an AI Ecological Education Policy Framework to address the multifaceted implications of AI integration in university teaching and learning. This framework is organized into three dimensions: Pedagogical, Governance, and Operational. The Pedagogical dimension concentrates on using AI to improve teaching and learning outcomes, while the Governance dimension tackles issues related to privacy, security, and accountability. The Operational dimension addresses matters concerning infrastructure and training. By ensuring that stakeholders are aware of their responsibilities and can act accordingly, the framework fosters a nuanced understanding of the implications of AI integration in academic settings.

B. METHODS USED

This study conducted a survey designed to gather data from students, teachers, and staff in Hong Kong to develop an AI education policy framework for university teaching and learning. The survey, administered as an online questionnaire, featured a mix of closed-ended and open-ended questions. Its topics covered the major issues concerning the use of AI in higher education, including the use of generative AI technologies like ChatGPT, the integration of AI technologies in higher education, the potential risks associated with AI technologies, and AI's impact on teaching and learning.

The data were collected via an online survey from a diverse group of stakeholders in the educational community, ensuring that the results reflect the needs and values of all participants. A convenience sampling method was employed, selecting respondents based on their availability and willingness to participate in the study. Participants were recruited through an online platform and provided with an informed consent form prior to completing the survey.

Descriptive analysis was used to analyze the survey data, while a thematic analysis approach was applied to examine the responses to the open-ended questions. The survey was completed by 457 undergraduate and postgraduate students, as well as 180 teachers and staff members across various disciplines in Hong Kong.


C. RESULTS AND CONCLUSION

The purpose of the survey was to explore the kinds of requirements, guidelines, and strategies necessary for developing AI policies geared towards university teaching and learning. The survey was conducted among 457 students and 180 teachers and staff from different disciplines in Hong Kong universities. The findings highlight both the opportunities associated with generative AI technology and the need for a comprehensive AI policy in higher education that addresses the potential risks.

Overall, the survey results indicate an openness in higher education to adopting generative AI technologies, together with a recognition of their potential advantages and challenges. Addressing these issues through informed policy and institutional support will be crucial for maximizing the benefits of AI technologies in university teaching and learning.

The qualitative findings support the key areas for AI integration in education found in the quantitative data. The data reveal that both students and teachers share concerns about the potential misuse of AI technologies in doing homework and assignments, emphasizing the need for guidelines and strategies to prevent academic misconduct. Moreover, there is significant agreement among students and teachers on the necessity for higher education institutions to implement a plan for managing the potential risks associated with using generative AI technologies, highlighting the importance of addressing data privacy, transparency, accountability, and security.

Concerns about students taking advantage of generative AI in their homework and assignments highlight the importance of ensuring equal access to AI technologies for all students; at the same time, respondents believe that AI technologies can provide unique insights, perspectives, and personalized feedback.

D. HOW IT RELATES TO OUR STUDY

The findings of the study underscore AI's role in STEM education. The study centers on students' behavior when it comes to taking advantage of AI in doing their homework. Moreover, to address academic misconduct, the findings suggest that schools and universities must develop clear guidelines and strategies for detecting and preventing the misuse of generative AI. Schools and universities must take responsibility for decisions made regarding the use of generative AI in teaching and learning, which includes being transparent about data collection and usage and being receptive to feedback and criticism. By disclosing information about the implementation of generative AI, including the algorithms employed, their functions, and any potential biases or limitations, universities can foster trust and confidence among students and staff in AI technology usage.

E. STRENGTHS AND WEAKNESSES

The study offers a well-structured AI Ecological Education Policy Framework, providing clear guidelines on integrating AI into academic settings while addressing governance, ethical considerations, and operational challenges. Additionally, it defines AI usage boundaries and refines assessment strategies, highlighting academic integrity and offering actionable recommendations. For future research, the study recommends exploring long-term AI impacts, refining discipline-specific policies, and developing alternative assessment methods that minimize AI misuse while ensuring academic integrity. Further studies should also investigate how AI affects critical thinking, skill development, and ethical decision-making in various educational contexts.

V. SYNTHESIS OF THE LITERATURES REVIEWED

The literature reviewed suggests that while artificial intelligence (AI) is useful in scholarly work, its consequences must be kept in mind and ethical standards for AI usage must be maintained. AI is transforming STEM education through individualized learning, advanced analytics, and automated teaching; however, its potential is not yet fully realized because of limited empirical research and issues of integration and pedagogy. As Triplett (2023) acknowledges, "the use of Artificial Intelligence (AI) has become more important in many domains, including health care and education." While AI promises substantial benefits, the full realization of its implementation in STEM education remains uncertain (as cited in Chng, Tan, & Tan, 2023). In education as a whole, AI is utilized intensively by both pupils and instructors, enabling learning experiences and class participation. Chassignol, Khoroshavin, Klimova, and Bilyatdinova (2018) and Roll and Wylie (2016) acknowledge AI's growing role in teaching procedures, a testament to its growing significance in education. Similarly, Colchester, Hagras, Alghazzawi, and Aldabbagh (2016) and Yang and Bai (2020) argue that AI's integration in learning systems fosters student motivation, self-organization, and innovation. More importantly, Tuomi (2018) and Wang (2020) acknowledge AI's potential to improve classroom management, making it more organized and efficient.

Research has revealed diverse concerns about the use of AI. AI imitates human thought and learning, helping in decision-making, problem-solving, and learning, and it has transformed marketing through better customer insight, trend prediction, and personalization. Breward et al. (2017), Cheng et al. (2023), Coss and Dhillon (2019), Hille et al. (2015), Tiwari et al. (2023), Wiegard and Breitner (2019), Wieringa et al. (2021), and Wu et al. (2024) highlight the imperative to address privacy concerns in AI marketing due to increased data gathering. Duan et al. (2019) and Feng et al. (2021) highlight the use of AI in activities like speech recognition, learning, and decision-making. In addition, Duan et al. (2010) and Song and Ma (2010) built a systematic search strategy based on keywords like "data security," "privacy concern," and "marketing" to identify high-quality studies from Scopus databases. In the same way, the development of AI tools for use in scholarly writing has raised concerns about integrity and ethical use. Generative AI producing content in various formats is becoming increasingly prevalent (Moorhouse, Yeo, & Wan, 2024). Its misuse compromises scholarly integrity, as it is challenging to differentiate between AI-generated work and students' original work (Hutson, 2024). Abbas, Jam, and Khan (2024) highlight scholars' concern that students can employ AI to complete assignments, undermining real learning. Likewise, Nabee, Mageto, and Pisa (2020) highlight the importance of transparent, research-based guidelines to ensure students understand and follow ethical AI use in learning. AI has become significant in students' academic work, both as an idea-generating device and as a quick solution for assignments. Ethical compliance is necessary in scholarly writing to ensure fairness, transparency, and integrity (Chan & Lee, 2023; Hosseini, Rasmussen, & Resnik, 2023). Chan and Tsi (2023) and Cotton, Cotton, and Shipway (2023) highlight that scholarly integrity is based on authorship standards, conflict-of-interest disclosure, data truthfulness, and plagiarism avoidance.

Plagiarism is a major concern in AI-enabled academic writing. It is becoming increasingly difficult for instructors to decide whether a student's work is original (Park, 2003). Wagner (2014) describes the evolving dynamics of plagiarism, from minimal copying of texts to copying whole papers, sometimes violating copyright; intent, accuracy of citation, and the circumstances of the student determine the degree of plagiarism. With the increased use of AI, academic integrity requires more specific guidelines and stronger ethical controls. AI is transforming education through personalized learning and by helping students and teachers: it can offer answers, help with homework, and even tutor. But excessive dependence on AI is risky. Gao et al. (2022) pointed out that users are likely to accept AI-provided answers, including hallucinations, without verification, a tendency encouraged by cognitive biases and heuristics. Similarly, Xie et al. (2021) found that unregulated reliance on AI can lead to classification errors, misinterpretation, and even research misconduct, like plagiarism or falsification of data. Although AI is useful, it must augment, not replace, critical thinking and real learning.

Owing to these issues, research calls for policies and regulations on AI application in academic environments so that its use can be deemed ethical. Funa and Gabay (2024) trace the history of artificial intelligence (AI) since its development began in the 1950s, highlighting its remarkable advancement through machine learning and big data. They note how AI integration in education has revolutionized learning by customizing experiences, boosting student participation, and enhancing learning outcomes (as cited in Guo, 2015; Zheng et al., 2021). Nevertheless, despite its advantages, issues persist in the effective application of AI in schools, and measures are necessary to maintain academic integrity. A critical issue is the use of AI-equipped tools such as ChatGPT and Bing for homework, notwithstanding the questions this raises for academic integrity. Chan (2023) documents that nearly a third of U.S. college students have used AI for assignments, with more than half of them using it for the majority of their work. Some institutions have responded by prohibiting AI-based homework, while others are considering including it in the curriculum. The increased reliance on AI for writing and problem-solving has provoked concerns over decreased critical thinking abilities among students, which could compromise the quality of education (as cited in Chan & Lee, 2023; Korn & Kelly, 2023; Oliver, 2023; Zhai, 2022). These issues have prompted demands for increased measures of control and punishment to curb AI-based academic dishonesty.

In short, artificial intelligence (AI) is transforming education through personalized learning and enhanced engagement, but its use raises concerns about academic integrity, ethical usage, and excessive dependence on technology. Studies document the risks of AI-aided academic work, including plagiarism, diluted critical thinking, and the blurring of distinctions between original and AI-generated work. Because of such concerns, researchers call for clearly defined policies and guidelines for the regulation of AI usage in education so that it enhances learning rather than substitutes for actual student effort.
