
The Impact of ChatGPT on Higher Education: Exploring the AI Revolution

BY
CAROLINE FELL KURBAN
MEF University, Turkey

AND
MUHAMMED ŞAHIN
MEF University, Turkey

United Kingdom – North America – Japan – India – Malaysia – China


Emerald Publishing Limited
Emerald Publishing, Floor 5, Northspring, 21-23 Wellington Street, Leeds LS1 4DL

First edition 2024

Copyright © 2024 Caroline Fell Kurban and Muhammed Şahin.


Published under exclusive licence by Emerald Publishing Limited.

Reprints and permissions service
Contact: www.copyright.com

No part of this book may be reproduced, stored in a retrieval system, transmitted in any form or
by any means electronic, mechanical, photocopying, recording or otherwise without either the
prior written permission of the publisher or a licence permitting restricted copying issued in the
UK by The Copyright Licensing Agency and in the USA by The Copyright Clearance Center.
Any opinions expressed in the chapters are those of the authors. Whilst Emerald makes every
effort to ensure the quality and accuracy of its content, Emerald makes no representation
implied or otherwise, as to the chapters’ suitability and application and disclaims any warranties,
express or implied, to their use.

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library

ISBN: 978-1-83797-648-5 (Print)
ISBN: 978-1-83797-647-8 (Online)
ISBN: 978-1-83797-649-2 (Epub)
Contents

Dedication
About the Authors
Foreword
Preface
Acknowledgements
Chapter 1 Exploring ChatGPT's Impact on Higher Education – A Case Study
Chapter 2 Navigating the Landscape of AI Chatbots
Chapter 3 Theoretical Framework for Investigating ChatGPT's Role in Higher Education
Chapter 4 Exploring ChatGPT's Role in Higher Education: A Literature Review
Chapter 5 Research Methodology
Chapter 6 Findings and Interpretation
Chapter 7 Ethical Implications
Chapter 8 Product Implications
Chapter 9 Educational Implications
Chapter 10 Contributions to Knowledge and Research
Appendices
References
Dedication

We dedicate this book to the memory of Dr İbrahim Arıkan, the founder of MEF
Schools and MEF University, who dedicated his life to revolutionising education.
Dr Arıkan's ultimate dream was to establish MEF University as a fully flipped
university, but sadly, he passed away before witnessing its realisation. He was a pioneer
across all stages of education, from kindergarten to university, and believed in a
democratic approach to education that prioritised the individuality of each student.
Dr Arıkan implemented full academic independence for teachers at his institutions,
and his commitment to creating a learning environment that nurtures the potential
of every student has left a lasting impact on the field of education. His spirit lives on
in the hearts and minds of every student and teacher who had the privilege to know
him. As we continue to honour his legacy, we are proud to say that MEF University
has become the realisation of his dream, an innovative and fully flipped university
that empowers students to take control of their education and become lifelong
learners.
We believe that Dr Arıkan would have been proud of the innovative direction MEF
University is taking by incorporating cutting-edge technologies like ChatGPT to
further enhance the teaching and learning experience. As a pioneer in education, he
always believed in implementing new and effective teaching methods to provide his
students with the best possible education. His spirit continues to inspire us to strive
for excellence in education, and we dedicate this book to his memory.
About the Authors

Caroline Fell Kurban is an academic, educator and consultant with a diverse
educational background, including a PhD in Applied Linguistics, MA in Technology
and Learning Design, MSc in Teaching English to Speakers of Other Languages
and BSc (Hons) in Geology. Her expertise in flipped learning and
contributions to publications on digital teaching and learning have been instru-
mental in advancing initiatives at MEF University in Istanbul. As the principal
investigator, Caroline’s extensive background and prior studies have influenced
the selection of theoretical frameworks for this investigation of ChatGPT inte-
gration in education. Her expertise in Clayton Christensen’s Theory of Jobs to be
Done, critical examination of power dynamics through theorists like Bourdieu
and Marx and understanding of phenomenology through Heidegger’s philosophy
bring a comprehensive perspective to her research. With her credentials and
passion for enhancing educational practices, she is well-suited to lead this project.
Muhammed Şahin, an esteemed academic leader, holds a geomatics engineering
degree from Istanbul Technical University (ITU) and earned his master’s degree
from University College London in 1991 and a PhD from the University of
Newcastle in 1994. He joined ITU as an Assistant Professor in 1994 and climbed
the ranks to become a tenured Professor in 2002. Şahin’s remarkable career
includes serving as the Rector of ITU from 2008 to 2012 and later as the founding
rector at MEF University, the pioneering institution fully based on the flipped
learning methodology in which this research is located. With esteemed leadership
roles in various organisations, substantial contributions to research and strategic
management and influential work in engineering education, his expertise spans
diverse domains. However, his current passion and dedication revolve around
educational transformation, especially with regard to the impact that technologies
are having on reshaping learning experiences and empowering students for the
future. He strongly believes that the experiences derived from this transformation
should be shared with others, which is what prompted the development of this
book.
Foreword

In the dynamic and ever-evolving landscape of education, one of the most pro-
found shifts is the integration of emerging technologies. As an advocate for access
to high-quality education for all, I find this era of technological advancement an
intriguing period of transformation. This book dives deep into the exploration of
artificial intelligence (AI) in education, specifically focusing on AI chatbots like
ChatGPT, and the implications they bring to our learning environments.
My pleasure in presenting the foreword for this book is twofold. Firstly,
because the authors have undertaken a rigorous exploration of a critical topic.
Secondly, because this subject resonates with my professional journey, spent in
pursuit of improving student outcomes and democratising access to quality
education.
MEF University in Istanbul, the book’s focal research site, stands as a beacon
of innovation for its integration of AI, offering a unique context for this study.
The authors critically examine ChatGPT, discussing its development, the ethical
considerations surrounding its use, and the need for a globally inclusive discourse
on the ethical guidelines for AI technologies.
From my tenure as US Under Secretary of Education to leading the American
Council on Education, I have seen the impact that a conscientious integration of
technology can have on access to high-quality education. In this book, by delving
into the history and ascent of chatbots, formulating a theoretical framework for
evaluating AI’s influence, conducting a contemporary literature review and
embarking on an exploratory case study, the authors shed light on how AI
chatbots have the potential to reshape the very foundations of teaching and
learning.
What the authors present is not just a well-researched treatise on ChatGPT,
but a tool for future exploration. The book’s concluding chapters provide a
blueprint for how to effectively and ethically integrate these AI technologies in
our classrooms and institutions, a guide I wish I had when piloting early edtech
initiatives in my own career.
The insights gleaned from this book go beyond ChatGPT. They will shape how
we, as educators, policymakers, and students, navigate the rapidly changing
technological landscape of education. The authors have not only provided a
comprehensive exploration of AI chatbots in education but also prompted us to
consider how we can harness this technology to create an equitable and inclusive
future for all learners.
In the grand scheme of things, the integration of AI in education is a new
frontier. This book stands as an essential guide for all those venturing into this
new territory. We stand on the precipice of a new era in education – an era where
AI can help us achieve our shared goals of equity, excellence and accessibility in
education.
Let us not just read this book but act on its insights to ensure a future where all
learners have access to quality education.
Ted Mitchell
President of the American Council on Education
Preface

It is my pleasure to introduce our new book, The Impact of ChatGPT on Higher
Education: Exploring the AI Revolution. As the founding rector of MEF University
in Istanbul, Turkey, I am proud to say that our institution has always been
at the forefront of innovative and cutting-edge approaches to education.
Since our establishment in 2014 as the world’s first fully flipped university, we
have been dedicated to providing our students with the skills they need to succeed
in their future careers. However, we also recognise that the landscape of education
is constantly evolving, and we must adapt our methods accordingly. That is why,
in this book, we are excited to share our exploration of how ChatGPT may affect
the roles of students, instructors and institutions of higher education.
Our university has always been a pioneer in the use of technology in education.
We were early adopters of the flipped learning approach, which has now become
widely recognized as an effective pedagogical method. We were also at the
forefront of using digital platforms with adaptive learning capabilities to provide
our students with personalised and individualised learning experiences.
As we embrace new technologies and innovative approaches to education, the
potential of AI in education using ChatGPT is both exciting and promising.
However, it is crucial to thoroughly explore and understand how this technology
will impact students, instructors and universities themselves. Moreover, univer-
sities will have a vital role to play in the global discourse of AI as it rapidly
transforms various aspects of our lives.
This book presents an in-depth analysis of our institution’s exploratory case
study, investigating the potential effects of ChatGPT on various stakeholders.
Through the sharing of experiences, anecdotes and perspectives from various
practitioners’ viewpoints, our goal is to offer a glimpse into the transformations
occurring within our organisation. This endeavour can serve as a useful reference
for other institutions seeking to undertake similar inquiries. We are excited to be
at the forefront of this discourse and to contribute to the progress of knowledge in
this field.
Muhammed Şahin
Rector of MEF University
Acknowledgements

In creating this book, we have been fortunate to receive significant support,
assistance and inspiration. We are profoundly grateful to all who contributed.
Our students, especially Levent Olcay, Utkan Enis Demirelgil, Nida Uygun and
Mehmet Oğuzhan Unlu, brought invaluable enthusiasm and insights to the
project. We would also like to acknowledge the diligent assistance of our student
volunteer, Muhammet Dursun Şahin. We are deeply thankful to the İbrahim
Arıkan Education and Scientific Research Foundation, a guiding light in our
pursuit of educational excellence, and the MEF University faculty, whose creative
ideas and persistent motivation were indispensable. We express our gratitude to
Professor Muhittin Gökmen, Director of the Graduate School of Science and
Engineering and the Chairman of the Department of Computer Engineering at
MEF. His valuable insights concerning AI theorists like Tegmark, Marcus, Davis
and Russell greatly enriched our understanding. Additionally, we extend our
appreciation to Professor Mustafa Özcan, Dean of the Faculty of Education at
MEF, for his continuous feedback and unwavering support throughout the
duration of this project. We owe a debt of gratitude to Paker Doğu Özdemir and
his team at the MEF CELT, along with the MEF Library staff, especially
Ertuğrul Çimen and Ertuğrul Akyol, for their tireless support and valuable
contributions. Our heartfelt thanks also go to our colleagues, including Ted
Mitchell, whose thoughtful foreword frames our work; Leonid Chechurin, for his
astute critique; and Juliet Girdher, whose expertise on Heidegger enriched our
understanding of AI through a Heideggerian lens. We also extend our
appreciation to the members of our AI think tank, Errol St Clair Smith, Thomas
Menella, Dan Jones and Juli Ross-Kleinmann, whose thoughtful discussions
helped shape our ideas. Finally, we express our sincere gratitude to Emerald
Publishing for making this book possible. In essence, this book is a testament to
the strength of collaborative effort and the pursuit of knowledge. Each of you has
enriched our work, leaving an indelible mark that we will forever appreciate.
Thank you.
Chapter 1

Exploring ChatGPT's Impact on Higher Education – A Case Study

The Revolutionary Impact of AI


Throughout the ages, technological advancements have disrupted traditional
practices, necessitating individuals to adjust and weigh the potential advantages
and disadvantages of emerging technologies. From the printing press to the
blackboard, from the computer to the internet, each new innovation has shaped
the way we teach and learn. And artificial intelligence (AI) is set to be the next
catalytic jump forwards. Although AI has been around since the mid-1950s, it is
only in recent times that data mining, advanced algorithms and powerful com-
puters with vast memory have been developed, thus making AI increasingly
relevant. From problem-solving in the 1950s to the simulation of human
reasoning in the 1960s, from early mapping projects in the 1970s to the devel-
opment of intelligent assistants in the 2000s, AI has made impressive strides.
Today, AI manifests in household personal assistants like Siri and Alexa,
self-driving cars and automated legal assistants. It has also spawned AI-assisted
stores, AI-enabled hospitals and the ubiquitous Internet of Things. In the realm of
higher education, the integration of AI technologies holds transformative
potential for traditional teaching and learning practices. However, a new era has
now arrived with the emergence of ChatGPT, the game-changing AI chatbot. So
what is ChatGPT?

The Arrival of Chat Generative Pre-trained Transformer (ChatGPT)
ChatGPT, an influential AI chatbot developed by OpenAI, has emerged as a
game-changer in education, offering students dynamic and human-like conver-
sations through natural language processing (NLP). Since its launch on 30
November 2022, ChatGPT has revolutionised the educational landscape,
providing students with immediate access to information, personalised recom-
mendations and continuous support throughout their academic journey. How-
ever, its implementation has also raised concerns about academic integrity,
leading some institutions to ban its usage or adopt stricter assessment methods to

The Impact of ChatGPT on Higher Education, 1–6


Copyright © 2024 Caroline Fell Kurban and Muhammed Şahin
Published under exclusive licence by Emerald Publishing Limited
doi:10.1108/978-1-83797-647-820241001
2 The Impact of ChatGPT on Higher Education

combat AI-based cheating. This has sparked global discussions among educators,
debating whether ChatGPT represents an opportunity or a threat.
At its core, ChatGPT operates by harnessing the power of NLP to comprehend
and respond to human queries in a conversational manner. Through advanced
algorithms and machine learning techniques, ChatGPT has been trained on vast
datasets to generate human-like responses, making it an indispensable tool for
engaging with students. The interactive and personalised nature of ChatGPT’s
conversations makes it highly valuable in the educational landscape. Students can
instantly access answers to their questions, relevant resources and tailored rec-
ommendations based on their learning needs. Whether seeking clarifications,
additional information or guidance, ChatGPT serves as a reliable and readily
available support system throughout their academic journey. Furthermore,
instructors can leverage ChatGPT to streamline administrative tasks and enhance
the learning experience. By automating routine administrative processes, such as
addressing frequently asked questions and providing course-related information,
instructors have more time to focus on meaningful interactions with students.
Additionally, ChatGPT can offer timely and personalised feedback, providing
students with real-time guidance and support. Integrating ChatGPT into the
educational environment can lead to a more engaging and interactive learning
experience. Students benefit from immediate assistance, personalised guidance
and a supportive learning environment, while instructors can optimise their
teaching practices and facilitate more meaningful interactions.
As we can see, the potential of ChatGPT in higher education is promising.
However, it is essential to recognise the caveats that accompany it. To begin with,
addressing the ethical considerations and limitations surrounding ChatGPT is
crucial. These encompass concerns about its reliance on heuristics, lack of
transparency in internal workings, issues with capability versus alignment, limi-
tations in helpfulness, interpretability challenges, issues of bias and fairness,
factual accuracy and truthfulness, as well as ethical concerns regarding data
privacy and cybersecurity. Moreover, the impact of ChatGPT on industries,
including higher education, necessitates thorough investigation. The integration
of AI technologies like ChatGPT brings transformative effects on job markets,
resulting in the elimination and transformation of positions, requiring a
re-evaluation of traditional work models. Within education, institutions and
companies face disruptive challenges as ChatGPT alters job roles, posing ques-
tions about the value of human expertise and critical thinking skills. Additionally,
financial implications and the costs associated with implementation and ongoing
support require careful consideration. Furthermore, the concentration of AI
power and the potential for corporate dominance are critical factors to explore.
The risk of a few dominant companies controlling and influencing AI raises
concerns about limited diversity, choice and fair competition, emphasising the
need to address data ownership, privacy and the possibility of monopolistic
practices. Establishing comprehensive policies and regulations becomes essential
to ensure ethical use, responsible deployment and accountability in the integration
of ChatGPT and similar technologies. Lastly, the scarcity of research on the
specific impact of ChatGPT in teaching, learning and higher education
institutions underlines the significance of investigation. The limited availability of
case studies, insufficient student perspectives and inadequate understanding of
necessary adaptations in educational objectives and practices create a substantial
knowledge gap. It is therefore crucial that investigations of ChatGPT in higher
education are undertaken, due to its potential as well as its associated caveats.
In the wake of the COVID-19 pandemic, educational approaches underwent a
significant shift. However, compared to the emergence of ChatGPT, the impact of
the pandemic may appear relatively small. While instructors and institutions
had the option to revert to traditional educational methods as the pandemic
receded, the same cannot be said for ChatGPT and AI chatbots. In fact, one
could argue that ChatGPT represents a new kind of ‘pandemic’ in the educational
landscape. So, how should this be addressed?

MEF University’s Response to ChatGPT


MEF University, a pioneering non-profit private institution located in Istanbul,
Turkey, has been at the forefront of embracing innovative educational method-
ologies since its inception. Founded by Dr İbrahim Arıkan, the university envi-
sions revolutionising higher education by equipping students with the skills
necessary for future careers and addressing the dynamic demands of contempo-
rary industries and society. By strategically investing in infrastructure and
cutting-edge technology, MEF has solidified its reputation as a forward-thinking
institution. Since its establishment in 2014, MEF has been a trailblazer, fully
embracing the flipped learning approach across its entire campus. This peda-
gogical model emphasises student-centred learning and the cultivation of critical
thinking skills. Under this framework, students engage with course content
outside of class, while in-class time is dedicated to the practical application of
these principles. Instructors adopt roles as facilitators or coaches, delivering
personalised support and feedback. However, MEF University’s commitment to
enhancing the learning experience and embracing innovation did not stop there.
In 2019, the institution phased out traditional final exams in favour of
project-based and product-focused assessments, fostering active learning and
tangible application of acquired knowledge. Additionally, digital platforms and
adaptive learning technologies were seamlessly integrated into programmes,
providing interactive resources and tailoring the learning journey to each stu-
dent’s unique needs. The integration of Massive Open Online Courses (MOOCs)
further expanded self-directed learning opportunities, culminating in the devel-
opment of the Flipped, Adaptive, Digital and Active Learning (FADAL) model
(Şahin & Fell Kurban, 2019). This model proved its worth when the COVID-19
pandemic struck in 2020. While conventional institutions grappled with the
transition to online learning, MEF University’s FADAL approach facilitated a
seamless shift. The institution’s emphasis on technology, active learning and
personalised education ensured a smooth transition to remote learning. Acco-
lades, including being recognised as Turkey’s top university for effectively navi-
gating the pandemic through national student satisfaction surveys and receiving
the 2020 Blackboard Catalyst Award for Teaching and Learning, underscored
MEF’s successful adaptation to the new educational landscape. Building on this
foundation, the institution introduced an AI minor programme, Data Science and
AI, in 2021. This programme equips students across all departments with
comprehensive skills in data management, analytics, machine learning and deep
learning, preparing them for real-world applications. Through these strategic
initiatives, MEF University’s commitment to disruptive innovation and invest-
ment in new technologies have positioned it as a leader in preparing students to
meet the evolving demands of industries and society.
The public launch of ChatGPT on 30 November 2022 sparked robust dis-
cussions at MEF University about the potential opportunities and challenges it
introduces to higher education. In response, three individuals at the university
volunteered to undertake an initial experiment spanning from December 2022 to
January 2023. This experiment involved integrating ChatGPT into course design,
classroom activities and evaluating its impact on assessments and exams. The
findings from this experiment catalysed a faculty meeting in January 2023. During
this meeting, the origins and potential implications of ChatGPT were presented,
and the volunteers shared concrete examples of its incorporation in various
educational contexts. The diverse array of perspectives expressed during the
meeting underscored the necessity for an in-depth institutional case study to
comprehensively explore ChatGPT’s impact on education within MEF Univer-
sity. Specifically, the university aimed to understand how ChatGPT could
potentially reshape the roles of students, instructors and higher education insti-
tutions. Recognising the gravity of the situation and the imperative for further
exploration, the concept for the research project outlined in this book was
conceived.
The core objectives of our research project encompass a thorough exploration
of ChatGPT’s potential impact on students and instructors within the realm of
higher education. By immersing ourselves in the implementation of this trans-
formative technology, our study aims to unearth potential challenges and barriers
that may emerge. This endeavour offers invaluable insights into the trans-
formative role AI chatbots like ChatGPT can play in reshaping the teaching and
learning landscape. Our overarching mission is to delve into how the integration
of ChatGPT might redefine the roles of students, instructors and higher education
institutions. Through this inquiry, we aspire to gain a profound understanding of
how AI chatbots might reshape dynamics and responsibilities within the educa-
tional sphere. By scrutinising these shifts, we seek insights into the implications
for educators, learners and universities as a whole. Furthermore, our research
aims to contribute to the broader discourse surrounding the integration of AI
technologies in higher education. Guided by three pivotal research questions that
structure our investigation, namely, ‘How may ChatGPT affect the role of the
student?’; ‘How may ChatGPT affect the role of the instructor?’; and ‘How may
ChatGPT affect the role of institutions of higher education?’, our study aims to
offer valuable insights that will inform educational practices, guide policy
formulation and shape the future integration of AI technologies in higher edu-
cation institutions. Ultimately, our research endeavours aim to contribute to a
deeper understanding of the potential benefits and considerations associated with
ChatGPT, ensuring its effective and responsible integration within the realm of
higher education.

Purpose and Scope of the Book


This book aims to provide a comprehensive analysis of MEF University’s
exploratory case study, delving into the potential impacts of ChatGPT on various
stakeholders. Drawing from diverse perspectives, experiences and anecdotes, our
objective is to offer a profound understanding of the transformative shifts
occurring within our institution. By delving into these findings, we intend to
contribute meaningfully to the broader discourse on ChatGPT’s implications in
higher education and offer valuable insights to institutions facing similar
inquiries.
In this opening chapter, we introduced ChatGPT and highlighted the signifi-
cance of investigating its role in higher education. We established our research
context, reasons for conducting this study, research objectives and research
questions. Chapter 2 delves into the emergence of chatbots, shedding light on their
limitations and ethical considerations. Additionally, we explore ChatGPT’s
profound impact on employment and education, as well as scrutinising evolving
educational policies in response to these changes. We conclude this chapter by
discussing the need for robust policies to address potential risks associated with
AI. Chapter 3 constructs a theoretical framework by incorporating critical theory
and phenomenology. This framework enables us to comprehensively examine
ChatGPT’s impact, encompassing power dynamics, social structures, subjective
experiences and consciousness, thereby providing deeper insights into its relevance
and broader implications. In Chapter 4, we present a literature review of
ChatGPT in higher education, identifying valuable insights and specific gaps,
while explaining how our study addresses these gaps and advances understanding.
Chapter 5 introduces the research methodology, employing a qualitative explor-
atory case study approach at MEF. We utilise interviews, observations, research
diaries and surveys for data collection. Thematic analysis aids in interpreting the
data, leading to the identification of themes, including: Input Quality and Output
Effectiveness of ChatGPT; Limitations and Challenges of ChatGPT; Human-like
Interactions with ChatGPT; the Personal Aide/Tutor Role of ChatGPT; Impact of
ChatGPT on User Learning; and Limitations of Generalised Bots for Educational
Context. Chapter 6 offers an interpretation of these themes, linking them to
the research questions, data, literature review and theoretical framework. The
book then transitions to discussing the practical implications derived from the
findings and interpretation. In Chapter 7, we delve into the ethical implications,
including critiquing AI detection tools, scrutinising current AI referencing sys-
tems, the need to rethink plagiarism in the AI age, the need to cultivate profi-
ciency in AI ethics and the importance of enhancing university ethics committees’
roles. Chapter 8 delves into product implications, emphasising the necessity of fair
access to AI bots for all students, the importance of fostering industry
collaboration to understand AI developments, how we should approach
decision-making regarding specialised bots and the importance of integrating
prompt engineering courses into programmes. Chapter 9 explores educational
implications, discussing the impact of AI on foundational learning, how we can
navigate AI challenges through flipped learning, how we can design AI-resilient
assessment and instruction strategies, and the importance of fostering AI literacy
in students and instructors. In Chapter 10, we highlight our study’s contributions
to knowledge and research. Beginning with an overview of our research structure,
the chapter delves into key insights and findings, revisiting essential themes. Our
theoretical framework is discussed for advancing AI discourse by blending phi-
losophy and technology in educational research. We explore practical implica-
tions for higher education institutions. Moreover, we advocate that universities
bear a moral duty to actively engage in the global AI conversation. Addressing
research limitations, we outline how we plan to overcome them in future studies.
Recommendations for additional relevant research areas are also presented to
further explore AI in higher education. The chapter concludes by underscoring
our role as authors of the AI narrative, with the power to shape AI technologies in
alignment with our shared values and aspirations.
In conclusion, this book provides a comprehensive exploration of the impli-
cations of ChatGPT within both our institution and higher education at large.
Our in-depth case study yields profound insights into the transformative power of
AI tools like ChatGPT. By sharing these insights and their broader implications,
our goal is to foster meaningful discussions, critical engagement and purposeful
initiatives in the field. Our endeavour offers valuable guidance to other institu-
tions, allows us to reflect on our experiences and envisions a future where edu-
cation thrives in an AI-enhanced environment. We extend a warm invitation to
educators, university leaders and institutions to join us in responsibly harnessing
AI’s potential, thereby shaping a more promising horizon for education.
Chapter 2

Navigating the Landscape of AI Chatbots

Emergence and Growth of Chatbots


Artificial intelligence (AI) has transformed human existence by processing vast
data and performing tasks resembling human capabilities (Anyoha, 2017). Early
AI faced challenges, but the breakthrough Logic Theorist showcased its potential
(Anyoha, 2017). Thriving in the 1990s and 2000s, AI achieved landmark goals
despite funding hurdles (Anyoha, 2017). The development of conversational AI
systems progressed significantly, with milestones like ELIZA, ALICE and
SmarterChild (Adamopoulou & Moussiades, 2020; Shum et al., 2018). In
November 2022, OpenAI released Chat Generative Pre-trained Transformer (ChatGPT), a powerful natural language processing (NLP) chatbot built on a model with 175 billion parameters, and it rapidly gained one million users. The underlying GPT-3 model, developed in 2020, had marked a significant advancement in language models, capable of learning from any text and performing a wide variety of tasks (Rudolph et al., 2023).
ChatGPT initially ran on GPT-3.5, a refinement of GPT-3, and was followed by GPT-4 in 2023. Notably,
companies like Microsoft seamlessly integrated ChatGPT into their products
(Milmo, 2023a; Waugh, 2023). The rising popularity of ChatGPT has ignited
discussions about the future of search engines, particularly concerning Google
(Paleja, 2023b). In response, Google introduced its own chatbot technologies,
including LaMDA and Apprentice Bard (Milmo, 2023a). Sundar Pichai, CEO of
Alphabet, voiced strong confidence in Google’s AI capabilities (Milmo, 2023a)
and revealed plans to integrate chatbots into its product offerings.
Furthermore, other companies have also entered the AI chatbot arena. In April
2023, Elon Musk, CEO of Twitter (now renamed X), playfully proposed the
idea of ‘TruthGPT’ in response to ChatGPT’s reluctance to address controversial
topics. Musk highlighted the need for an AI system free from such constraints,
leading to the inception of a cryptocurrency-based project to tackle this challenge
(Sabarwal, 2023). Later, in July 2023, Meta introduced its advanced AI system,
‘Llama 2’. Mark Zuckerberg proudly announced its collaboration with Microsoft
and the availability of this AI for research and commercial purposes (Sankaran,
2023). Thus, industry has now taken the lead in machine learning model development, surpassing academia. That is where things stand today, but what lies ahead?

The Impact of ChatGPT on Higher Education, 7–27


Copyright © 2024 Caroline Fell Kurban and Muhammed Şahin
Published under exclusive licence by Emerald Publishing Limited
doi:10.1108/978-1-83797-647-820241002

OpenAI states that its long-term goal is to create ‘artificial general intelligence’
(AGI) (Brockman & Sutskever, 2015). AGI refers to AI systems that possess the
ability to understand, learn and apply knowledge in a way that’s comparable to
human intelligence. AGI would be capable of performing a wide range of tasks
and adapting to new situations without being explicitly programmed for each
specific task, making it a higher level of AI than the specialised, narrow AI sys-
tems currently available. Tech entrepreneur Siqi Chen claims that GPT-5 will
achieve AGI by the end of 2023, generating excitement in the AI community
(Tamim, 2023). Chen’s claim, while not widely held at OpenAI, suggests that
generative AI is making significant strides (Tamim, 2023). Sam Altman, the CEO
of OpenAI, goes one step further, hinting at the potential for AI systems to far
surpass even AGI (Sharma, 2023). He believes that AI’s current trajectory indi-
cates remarkable potential for unprecedented levels of capability and impact in
the near future (Sharma, 2023). In summary, AI’s transformative impact on
human existence, coupled with the rapid advancement of chatbots like ChatGPT,
highlights the potential for significant changes in various industries and the field
of AI as a whole. However, this does come with caveats.

Challenges and Ethical Considerations in AI


As AI chatbots like ChatGPT continue to evolve and become more prevalent in
our daily lives, we are starting to understand more about their limitations. One of
the biggest questions surrounding ChatGPT is how it works, and even the crea-
tors themselves do not fully understand it. They attempted to use AI to explain
the model, but encountered challenges due to the ‘black box’ phenomenon present
in large language models like GPT (Griffin, 2023). This lack of transparency
raises concerns about biases and the dissemination of inaccurate information to
users. Researchers are exploring ‘interpretability research’ to understand the inner
workings of AI models (Griffin, 2023). One approach involves studying individual
‘neurons’ within the system, but the complexity of billions of parameters makes
manual inspection impractical. To address this, OpenAI researchers employed
GPT-4 to automate the examination of system behaviour (Griffin, 2023). Despite
limitations in providing human-like explanations, the researchers remain opti-
mistic about AI technology’s potential for self-explanation with continued
research (Griffin, 2023). However, further work is needed to overcome the chal-
lenges in this field, including describing the system’s functioning using everyday
language and considering the overall impact of individual neuron functionality
(Griffin, 2023).
At the core of ChatGPT lies language processing, encompassing various
aspects like grammar, vocabulary and cultural context. While it can perform
numerous language-related tasks, its understanding is limited to learnt patterns
from training data. Unlike humans, ChatGPT lacks actual consciousness or
self-awareness, relying on heuristics, which are rules of thumb used to make
efficient decisions in complex situations (Kahneman, 2011). In language pro-
cessing, heuristics help parse sentences, recognise patterns and infer meanings
based on context. ChatGPT uses deep learning algorithms trained on extensive
text data to generate relevant and coherent responses (Sánchez-Adame et al.,
2021). However, language’s constant evolution and complexity still present lim-
itations for AI chatbots.
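To make the idea of 'learnt patterns' concrete, consider a radically simplified next-word predictor. The Python sketch below is purely illustrative: real systems like ChatGPT use transformer networks with billions of parameters, not word-pair counts, but the principle of predicting the next token from statistical patterns in training text is the same. The training corpus and function names here are invented for the example.

```python
from collections import Counter, defaultdict

# A tiny next-word predictor: it learns only which word most often
# follows each word in its training text -- patterns, not meaning.
def train_bigrams(text: str) -> dict:
    words = text.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(model: dict, word: str) -> str:
    options = model.get(word.lower())
    return options.most_common(1)[0][0] if options else "?"

corpus = "the cat sat on the mat and the cat slept on the mat"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # a word the model has seen following "the"
print(predict_next(model, "dog"))  # "?" -- outside its training data
```

Such a model, like any language model, can only reproduce regularities present in its training data, which is precisely why gaps and biases in that data surface in the output.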
ChatGPT also has some limitations that lead to gaps in its knowledge base and
issues with generating accurate responses (Johnson, 2022; Rudolph et al., 2023). It
can frequently repeat phrases, reject questions or provide answers to slightly
modified versions of questions (Johnson, 2022). Additionally, some chatbots,
including ChatGPT, have been observed using language that is misogynistic,
racist and spreading false information (Johnson, 2022). These issues stem from the
challenge of aligning the model’s behaviour with human values and expectations
(Ramponi, 2022). Large language models like ChatGPT are trained to optimise
their objective function, which may not always align with human values when
generating text (Ramponi, 2022). This misalignment can hinder the practical
applications of chatbots in systems that require reliability and trust, impacting
human experience (Ramponi, 2022). This is often seen in the following ways
(Ramponi, 2022):

• Lack of Helpfulness
Where a language model fails to accurately understand and execute the specific
instructions provided by the user.
• Hallucinations
When the model generates fictitious or incorrect information.
• Lack of Interpretability
When it is hard for humans to comprehend the process by which the model
arrived at a particular decision or prediction.
• Generating Biased or Toxic Output
When a model generates output that reproduces such biases or toxicity (due to
being trained on biased or toxic data) even if it was not intentionally pro-
grammed to do so.

But why does this happen? Language models like transformers are trained
using next-token-prediction and masked-language-modelling techniques to learn
the statistical structure of language (Ramponi, 2022). However, these techniques
may cause issues as the model cannot differentiate between significant and
insignificant errors, leading to misalignment for more complex tasks (Ramponi,
2022). OpenAI has sought to address these limitations by releasing a limited version of ChatGPT (based on GPT-3.5) and gradually increasing its capabilities
using a combination of supervised learning and reinforcement learning, including
reinforcement learning from human feedback, to fine-tune the model and reduce
harmful outputs (Ramponi, 2022). This involves three steps, although steps two
and three can be iterated continuously.

• Stage One
Fine-tuning a pre-trained language model on labelled data to create a super-
vised policy.
• Stage Two
Creating a comparison dataset by having labellers vote on the policy model’s
outputs and training a new reward model on these data.
• Stage Three
Further fine-tuning and improving the supervised policy using the reward
model through proximal policy optimisation.
(Ramponi, 2022)
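In simplified form, this three-stage pipeline can be sketched as follows. The Python below is a toy illustration only: the policy, reward model and optimisation step are numeric stand-ins for neural networks and proximal policy optimisation, and all names and figures are invented for the example.

```python
# Stage one: start from a supervised policy -- here, a toy model that
# holds an initial preference weight for each response style.
def supervised_policy():
    return {"helpful": 0.1, "verbose": 0.1}

# Stage two: labellers compare pairs of outputs; a reward model is
# trained on these comparisons. Here it simply normalises win counts.
def train_reward_model(comparisons):
    wins = {}
    for winner, loser in comparisons:
        wins[winner] = wins.get(winner, 0) + 1
        wins.setdefault(loser, 0)
    total = sum(wins.values()) or 1
    return {style: count / total for style, count in wins.items()}

# Stage three: fine-tune the policy against the reward model (a crude
# stand-in for proximal policy optimisation).
def fine_tune(policy, reward_model, lr=0.5, steps=10):
    for _ in range(steps):
        for style in policy:
            reward = reward_model.get(style, 0.0)
            policy[style] += lr * (reward - policy[style])
    return policy

policy = supervised_policy()
# Labellers preferred 'helpful' answers over 'verbose' ones 3 times out of 4.
comparisons = [("helpful", "verbose")] * 3 + [("verbose", "helpful")]
reward_model = train_reward_model(comparisons)
policy = fine_tune(policy, reward_model)
print(policy["helpful"] > policy["verbose"])  # the tuned policy now favours helpful answers
```

As the text notes, stages two and three can be iterated: fresh comparisons from labellers retrain the reward model, which in turn further adjusts the policy.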

OpenAI employs a moderation application programming interface (API), an
AI-based system, to detect violations of their content policy and ensure that
harmful language, such as misogyny and false news, is avoided (Johnson, 2022).
However, the system is not perfect and has flaws, as seen when a Twitter user
bypassed it to share inappropriate content (Johnson, 2022). OpenAI acknowl-
edges the challenges and limitations of their language models, including
GPT-4, which may produce harmful or inaccurate content despite its
advanced capabilities (Waugh, 2023). While they are actively working to improve
the system through supervised and reinforcement learning, as well as collabo-
rating with external researchers, challenges related to interpretability and hallu-
cinations remain unresolved (Waugh, 2023).
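The general idea of a moderation layer can be illustrated with a toy filter that checks text before it reaches users. This is not OpenAI's actual API, whose internals are undisclosed and which uses trained classifiers rather than keyword lists; the blocklist and function below are invented for the example.

```python
# Toy moderation filter: flags text containing blocked terms.
# Real moderation systems use machine-learnt classifiers; this
# keyword approach is purely illustrative.
BLOCKED_TERMS = {"slur_example", "fake_cure"}

def moderate(text: str) -> dict:
    words = set(text.lower().split())
    hits = sorted(words & BLOCKED_TERMS)
    return {"flagged": bool(hits), "matched": hits}

print(moderate("this post promotes a fake_cure"))  # flagged
print(moderate("a perfectly harmless question"))   # not flagged
```

A filter like this is trivially bypassed, for instance by misspelling a blocked term, which helps explain why even far more sophisticated classifier-based moderation systems remain imperfect, as the Twitter incident above shows.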
AI ethics is a rapidly evolving field, especially with the rise of generative AI
systems (GAI), making fairness, bias and ethical considerations crucial (Maslej
et al., 2023). The 2023 Artificial Intelligence Index Report highlights the presence
of unfairness and bias in AI systems, leading to potential harms such as allocative
and representation harm (Maslej et al., 2023). Emily Bender, a University of
Washington computational linguist, warns that language models can carry biases
due to their training data, leading to problematic outcomes (Grove, 2023).
Instances of AI-related ethical issues are on the rise, as demonstrated by the
Algorithmic and Automation Incidents and Controversies Repository (Maslej
et al., 2023). For instance, the use of AI in US prisons raised concerns about
discrimination, while the Gang Violence Matrix in London faced criticism for
bias (Maslej et al., 2023). Midjourney’s AI-generated images also raised ethical
concerns (Maslej et al., 2023). Fair algorithms are essential to prevent such issues,
but currently, AI incidents and controversies are on the rise, highlighting the need
for continuous ethical vigilance (Maslej et al., 2023). To address issues of bias in
AI, various applications are being utilised. The Perspective API by Alphabet's
Jigsaw assesses toxicity in language, with its use increasing by 106% due to
growing AI deployment (Maslej et al., 2023). SuperGLUE’s Winogender Task
measures gender bias in AI systems related to occupations, evaluating the use of
stereotypical pronouns (Maslej et al., 2023). Instruction-tuned models, fine-tuned
on instructional datasets, have shown improved performance, but they may rely
on stereotypes (Maslej et al., 2023). The BBQ and HELM benchmarks assess bias
and fairness in question-answering systems, highlighting trade-offs between
accuracy and bias metrics (Maslej et al., 2023). Additionally, machine translation
models struggle with gendered pronouns, leading to mistranslations and potential
dehumanisation (Maslej et al., 2023). Despite these challenges, these applications
are valuable tools to mitigate bias and promote ethical AI practices.
Conversational AI raises ethical concerns as well. Researchers from Luleå
University found that 37% of analysed chatbots were female gendered, and 62.5%
of popular commercial chatbots defaulted to female, potentially reinforcing biases
(Maslej et al., 2023). Moreover, dialogue systems may be overly anthro-
pomorphised, making users uncomfortable, and many examples in dialogue
datasets were rated impossible or uncomfortable for machines to output (Maslej
et al., 2023). Clear policy interventions and awareness of limitations are crucial to
address these issues and foster better communication with users. Text-to-image
models face biases too. Meta researchers found Instagram-trained models to be
less biased than previous ImageNet-trained models, and the SEER (SElf-supERvised) model showed fairer representations of people (Maslej et al., 2023). However, using public data
without user awareness for AI training may be unethical. A study comparing
pre-trained vision-language models revealed gender bias in larger models, with
contrastive language-image pre-training (CLIP) having more bias but higher
relevance (Maslej et al., 2023). Stable Diffusion and DALL-E 2 exhibited biases,
as did Midjourney, generating images reinforcing stereotypes (Maslej et al., 2023).
For example, ‘CEO’ prompted images of men in suits (Maslej et al., 2023).
AI ethics research is experiencing significant growth and attention in confer-
ences and publications. Fairness, Accountability, and Transparency (FAccT), an
interdisciplinary conference, is a prominent platform for research on algorithmic
fairness and transparency, witnessing a tenfold increase in submissions since 2018
(Maslej et al., 2023). The interest in AI ethics is shared by industry and
government-affiliated actors, indicating its relevance for policymakers, practi-
tioners and researchers (Maslej et al., 2023). European contributions to the field
are also on the rise, although the majority of authors are still from North America
and Western countries (Maslej et al., 2023). In recent years, workshops on fairness
and bias in AI have emerged, with NeurIPS hosting its first workshop on fairness,
accountability and transparency in 2014 (Maslej et al., 2023). Certain topics, like
‘AI for Science’ and ‘AI for Climate’, have gained popularity and transitioned
from workshops to the main track, reflecting the surge in AI applications in
healthcare and climate research (Maslej et al., 2023). NeurIPS has seen a rise in
papers focused on interpretability and explainability, particularly in the main
track (Maslej et al., 2023). Additionally, statistical methods such as causal
inference are being used to address fairness and bias concerns, leading to a sig-
nificant increase in papers submitted on causal inference and counterfactual
analysis at NeurIPS (Maslej et al., 2023). Privacy in machine learning has also
become a mainstream concern, with NeurIPS hosting workshops and privacy
discussions moving into the main track (Maslej et al., 2023). The conference now
requires authors to submit broader impact statements addressing ethical and
societal consequences, indicating a growing emphasis on ethical considerations
(Maslej et al., 2023). The surge in the number of papers on fairness and bias, as
well as the increase in workshop acceptances, reflects the growing interest and
importance of this topic for researchers and practitioners (Maslej et al., 2023).
AI algorithms also face challenges in factuality and truthfulness. This has led
to the development of AI for fact-checking and combating misinformation using
fact-checking datasets (Maslej et al., 2023). However, research on natural lan-
guage fact-checking seems to have shifted, with a plateau in citations of
widely-used fact-checking benchmarks like FEVER, LIAR and Truth of Varying
Shades (Maslej et al., 2023). Automated fact-checking systems have limitations,
as they assume availability of contradicting evidence for new false claims and
some datasets lack sufficient evidence or use impractical fact-checking articles as
evidence (Maslej et al., 2023). To address these challenges, the development of the
TruthfulQA benchmark evaluates the truthfulness of language models on ques-
tion answering, testing misconceptions across various categories (Maslej et al.,
2023).
AI interfaces like chatbots have practical benefits but raise privacy concerns
due to their potential for intrusive data collection (O’Flaherty, 2023). Unlike
search engines, chatbots’ conversational nature can catch users off guard, leading
them to reveal more personal information (O’Flaherty, 2023). Chatbots can
collect various data types, including text, voice, device, location and social media
activity, potentially enabling targeted advertising (O’Flaherty, 2023). Microsoft’s
consideration of adding advertisements to Bing Chat and Google’s privacy policy
permitting targeted advertising using user data raise further concerns (O’Flaherty,
2023). However, ChatGPT’s privacy policy is believed to prioritise personal data
protection and prohibit commercial exploitation (Moscona, as cited in O’Flah-
erty, 2023).
In response to data privacy and cybersecurity concerns, some countries and
companies initially imposed bans on the usage of generative AI technology like
ChatGPT. For instance, Italy passed a decree banning the use of such technology
for processing personal data, citing potential threats to data privacy (Paleja,
2023c). However, the ban was later lifted after OpenAI addressed regulatory
requirements (Robertson, 2023). Others issued warnings to staff. Companies like
JP Morgan and Amazon restricted employee use of ChatGPT (O’Flaherty, 2023).
Data consultancy firm Covelent advises caution, recommending adherence to
security policies and refraining from sharing sensitive information (O’Flaherty,
2023). Even chatbot providers such as OpenAI and Microsoft warn against
sharing sensitive data in conversations (O’Flaherty, 2023). These actions highlight
the serious data privacy threats associated with AI chatbots.
There are also concerns that AI interfaces, like ChatGPT, have the potential to
facilitate fraud, spread misinformation and enable cybersecurity attacks, posing
existential risks (O’Flaherty, 2023). Experts warn that advanced phishing emails
could be created using chatbots due to their ability to generate content in multiple
languages with impeccable language skills (O’Flaherty, 2023). Moreover, chat-
bots might spread misinformation, assist in creating realistic deepfake videos and
contribute to the dissemination of harmful propaganda on social media platforms
(Tamim, 2023). These risks are already evident, with chatbots being used for
malicious purposes, such as generating fake news (Moran, 2023). Additionally, AI
presents security risks, aiding cybercriminals in conducting more convincing and
efficient cyberattacks (O’Flaherty, 2023).
There are also ethical concerns that surround the potential exploitation of
African workers in content moderation for AI engines like ChatGPT (Schamus,
2023). Earning less than $2 a day, these workers handle distressing online content
to train AI engines, raising questions about the sustainability and fairness of their
efforts (Schamus, 2023). The utilisation of African labour for data mining and
cleansing by a US organisation underscores the ethical predicament of relying on
underpaid individuals from less economically advantaged regions to benefit those
in more affluent areas. Consequently, addressing these ethical concerns is crucial
for the responsible development of AI tools.
Meredith Whitaker, an AI researcher and ethicist, highlights that generative
AI heavily relies on vast amounts of surveillance data scraped from the web
(Bhuiyan, 2023). However, the specific sources of these data, obtained from
writers, journalists, artists and musicians, remain undisclosed by proprietary
companies like OpenAI (Bhuiyan, 2023). This raises concerns about potential
copyright violations and lack of fair compensation for content creators. When
asked about compensation for creators, OpenAI’s CEO, Sam Altman, mentioned
ongoing discussions but did not provide a definitive answer (Bhuiyan, 2023). The
impact on local news publications, whose content is used for training AI models,
is also a concern, and Altman expressed hope for supporting journalists while
considering possible actions (Bhuiyan, 2023). Nonetheless, the necessity for
external regulation to address these issues is evident (Bhuiyan, 2023).
The environmental impact of AI technology, particularly large language
models, is a growing concern. Data centres, hosting power-hungry servers for AI
models like ChatGPT, significantly contribute to carbon emissions (McLean,
2023). The power source, whether coal or renewable energy, further affects
emission levels (McLean, 2023). Moreover, the water footprint of AI models is
substantial; for example, Microsoft’s data centres used around 700,000 litres of
freshwater during GPT-3’s training, equivalent to the water needed for hundreds
of vehicles (McLean, 2023). It is therefore vital to address these environmental concerns and to find sustainable solutions as these models continue to expand (McLean, 2023).

AI’s Impact on the Job Market


AI is no longer just a futuristic concept that we see in movies; it’s now a reality in
our everyday lives – from individuals to organisations, and from businesses to
governments. Advancements in AI technology have led to its rapid deployment,
but with great power comes great responsibility. Historian Yuval Noah Harari
(2018) suggests that to comprehend the nature of these technological challenges,
we should start by asking questions about the job market: What will the job
market look like in 2050? Will AI impact all industries, and if so, how and when?
Could billions of people become economically redundant within the next decade
or two? Or will automation continue to create new jobs and greater prosperity for
all in the long run? Will this mirror the positives that happened due to the
industrial revolution or is this time different? Given the ongoing debate and lack
of consensus among experts, this section aims to provide some answers to the
aforementioned questions.
We are now starting to hear a lot more mainstream conversation regarding the
social and economic impact that AI and AI chatbots will have on society and
industry. According to the 2023 Artificial Intelligence Index Report, with the exception of agriculture, forestry, fishing and hunting, demand for AI-related skills is rapidly increasing in nearly every sector of the American economy. The report highlights that between 2021 and 2022, the share of job postings that were AI-related increased on average from 1.7% to 1.9% (Maslej et al., 2023).
AI technology like ChatGPT could drastically change jobs in various industries,
such as finance, customer service, media, software engineering, law and teaching,
including potential gains and losses (Mok & Zinkula, 2023). This may happen in
the following ways.
In finance, AI-powered bots are expected soon to handle
complicated financial questions, allowing advisors and CFOs to make real-time
decisions by tapping into AI’s knowledge. They will also be able to perform
information analysis, pattern detection and forecasting. Moreover, ChatGPT will
save time for marketers in finance by analysing data and providing insights into
customer behaviour as well as organising information and generating marketing
materials (How Will ChatGPT & AI Impact The Financial Industry?, 2023). In
addition, ChatGPT has the potential to disrupt jobs across various industries on
Wall Street, including trading and investment banking. This is because ChatGPT
can automate some of the tasks that knowledge workers perform today. One
advantage of this is that it will enable them to concentrate on higher-value tasks.
However, it also means that AI could do certain jobs that recent college graduates
are currently doing at investment banks (Mok & Zinkula, 2023). This may lead to
the elimination of low-level or entry jobs.
When it comes to customer service and engagement, according to Forbes,
conversational AI, such as ChatGPT, has the potential to revolutionise customer
service by providing human-like conversations that address each user’s concerns.
Unlike traditional chatbots, which follow predetermined paths and lack flexi-
bility, conversational AI can automate the upfront work needed for customer
service agents to focus on high-value customers and complex cases requiring
human interaction (Fowler, 2023).
And what about the creative arts? Forbes predicts that ChatGPT is expected to
have a significant impact on jobs in advertising, content creation, copywriting,
copy editing and journalism (Fowler, 2023). Furthermore, due to the AI’s ability
to analyse and understand text, it is likely that ChatGPT will transform jobs
related to media, including enabling tasks such as article writing, editing and
fact-checking, script-writing for content creators and copywriting for social media
posts and advertisements (Fowler, 2023). In fact, we are already seeing chatbots
drafting screenplays (Stern, 2023), writing speeches (Karp, 2023), writing novels
(Bensinger, 2023) and being used by public relations companies to ‘research,
develop, identify customer values or changing trends, and strategize optimal
campaigns for … clients in a matter of seconds’ (Martinez, 2023). There is already
a visible response from affected employees in response to these developments. In
Los Angeles in early May 2023, a strike was initiated involving thousands of film
and television writers, later joined by actors and other members of the film
community, with the aim not only to address financial matters but also to
establish rules preventing studios from using AI to generate scripts, excluding
human involvement in the creative process (Hinsliff, 2023). This shift is also being
seen in Buzzfeed, which is one of many publishers that have started to use
AI-generated content to create articles and social media posts, with the aim of
increasing efficiency and output (Tarantola, 2023). However, the quality of the
content generated by AI is still a concern for many (Tarantola, 2023). Another
area which is being affected is fashion, where AI is being used for everything from
analysing data to create designs for upcoming collections to generating a variety
of styles from sketches and details from creative directors (Harreis, 2023).
When it comes to engineering, while ChatGPT may be able to aid engineers in
their work by generating answers for engineering calculations and providing
information on general engineering knowledge, it will not be able to replace the
knowledge, expertise and innovation that engineers bring to the design and
product development process (Brown-Siebenaler, 2023). However, regarding
software engineering, there may be many changes. Software engineering involves
a lot of manual work and attention to detail. However, ChatGPT can generate
code much faster than humans, which could lead to improved efficiency, bug
identification and increased code generation speed, while also cutting resource
costs (Mok & Zinkula, 2023).
With regard to healthcare, Harari (2018) gives the following example by
comparing what doctors do to what nurses do. Doctors mainly process medical
information, whereas nurses require not only cognitive but also motor and
emotional skills to carry out their duties. Harari believes this makes it more likely
that we will have an AI family doctor on our smartphones before we have a
reliable nurse robot. Therefore, he expects the human care industry to remain a
field dominated by humans for a long time and, due to an ageing population, this
is likely to be a growing industry. And there is now evidence coming out to
support Harari’s claims. In a recent study conducted by the University of California San Diego comparing written responses from doctors and ChatGPT to real-world health queries, a panel of healthcare professionals preferred ChatGPT’s responses 79% of the time (Tilley, 2023). They also found
ChatGPT’s answers to be of higher quality in terms of information provided and
perceived empathy, without knowing which responses came from the AI system
(Tilley, 2023). Furthermore, ChatGPT has even demonstrated the ability to pass
the rigorous medical licensing exam in the United States, scoring between 52.4% and 75% (Tilley, 2023).
According to a recent Goldman Sachs report, generative AI may also have a
profound impact on legal workers, since language-oriented jobs, such as para-
legals and legal assistants, are susceptible to automation. These jobs are
responsible for consuming large amounts of information, synthesising what they learn, and making it digestible through a legal brief or opinion. Once again, these
tend to be low-level or entry jobs. However, AI will not completely automate
these jobs since it requires human judgement to understand what a client or
employer wants (Mok & Zinkula, 2023). We are already starting to see examples
of AI being used in the legal field. DoNotPay, founded in 2015, is a bot that helps individuals fight large organisations over issues such as wrongly applied fees, robocalls and parking tickets (Paleja, 2023a). In February 2023, DoNotPay was
used to help a defendant contest a speeding ticket in a US court, with the pro-
gramme running on a smartphone and providing appropriate responses to the
defendant through an earpiece (Paleja, 2023a). In addition, AI judges are already
being used in Estonia to settle small contract disputes, allowing human judges
more time for complex cases (Hunt, 2022). Furthermore, a joint research project
in Australia is currently examining the benefits and challenges of AI in courts
(Hunt, 2022). Overall, we are seeing AI becoming more popular in courts
worldwide. And this is certainly the case in China. In March 2021, China’s
National People’s Congress approved the 14th Five-Year Plan, which aimed to
continue the country’s judicial reform, including the implementation of ‘smart
courts’ to digitalise the judicial system (Cousineau, 2021). ChatGPT is also
demonstrating its ability to excel in legal exams. The latest iteration of the AI
programme, GPT-4, recently surpassed the threshold set by Arizona on the
uniform bar examination (Cassens Weiss, 2023). With a combined score of 297, it passed by a significant margin, placing its performance close to the 90th percentile of test takers (Cassens Weiss, 2023).
Just like other industries, the emergence of ChatGPT has compelled education
companies to reassess and re-examine their business models. According to Times
Higher Education writers, Tom Williams and Jack Grove, the CEO of education
technology firm Chegg, Dan Rosensweig, attributes a decline in new sign-ups for
their textbook and coursework assistance services to ChatGPT, believing that as
midterms and finals approached, many potential customers opted to seek
AI-based help instead (2023). Williams and Grove believe this shift in consumer
behaviour serves as a ‘harbinger’ of how the rise of generative AI will disrupt
education enterprises and is prompting companies to hastily adapt and
future-proof their offerings (2023). They give the example of Turnitin, which has
expedited the introduction of an AI detector, and Duolingo, which has incor-
porated GPT-4 to assist language learners in evaluating their language skills
(2023). Williams and Grove also note that, simultaneously, a wave of newly
established start-ups has emerged, offering a wide range of services, including
personalised tutoring chatbots and proprietary AI detectors, each with varying
levels of accuracy (2023). They quote Mike Sharples, an emeritus professor at the
Open University’s Institute of Educational Technology, saying that it is the larger
companies that are successfully integrating AI into their existing and
well-established products that are thriving. Conversely, Sharples cautions that
others run the risk of becoming like the ‘Kodak of the late 1990s’, unable to adapt
swiftly or effectively enough to thrive in a competitive market (Williams & Grove,
2023). Sharples goes on to say that he anticipates that numerous companies in the
education field, particularly distance-learning institutions, will face significant
challenges in their survival, as students may perceive AI as capable of performing
their tasks better; however, he cautions that whether or not that is the case
remains to be seen (Williams & Grove, 2023). Williams and Grove also quote
Navigating the Landscape of AI Chatbots 17
Rose Luckin, professor of learner-centred design at University College London
Knowledge Lab. Luckin describes the advantages of platforms like ChatGPT,
such as being able to effortlessly generate textbooks and course materials; how-
ever, she warns that there will be a great need for quality control to address errors
(Williams & Grove, 2023). Even so, she points out that this is significantly more
cost-effective than producing the materials from scratch (Williams & Grove,
2023). Williams and Grove therefore conclude that the publishing and educa-
tional technology sectors are going to undergo significant transformations due to
these developments and stress that companies must recognise these changes and
assess how student demands and industry requirements are evolving. This will
eventually help them in identifying the areas where ChatGPT falls short, after
which they can work to fill those gaps effectively (2023).
As we have seen, AI is leading to major changes in the job market, including
job gains and job losses. However, what is interesting is that ChatGPT may not
only be changing jobs but also creating jobs. In fact, OpenAI cofounder, Greg
Brockman, expressed that concerns about AI tools taking away human jobs were
exaggerated, and that AI would enable people to concentrate on essential work.
Brockman believes that the key to the future will be more elevated skills, such as
discernment and the ability to determine when to delve into details, and that AI
will enhance what humans can accomplish (Waugh, 2023). According to Fiona
Jackson, technology writer, some remote workers are already secretly using AI to
complete multiple jobs at the same time, referring to themselves as ‘over-
employed’ because ChatGPT helps them finish each job’s workload in record time
(2023). She reports that they are using the tool to produce high-quality written
content, which may include anything from writing marketing materials to articles
to blog posts, and are therefore able to work multiple full-time jobs without their
employers knowing (Jackson, 2023). She notes that the inception of such remote
work can be traced back to the advent of the pandemic, which compelled many
employees to assume additional jobs to sustain themselves in the wake of eco-
nomic instability. Building on this, she notes that the emergence of ChatGPT
appears to have provided workers with an even more advanced online tool that is
augmenting their remote work capabilities, helping them to effectively manage
multiple roles at the same time (Jackson, 2023). However, Jackson points out that
ChatGPT-generated text often contains errors, which some workers see as a
positive, as it means their expertise is still required to check the AI’s work
(Jackson, 2023). Jackson further reports that many workers who use ChatGPT to
supplement their income are living in fear of losing their jobs. These professionals
recognise the possibility that the rapid advancements in AI could ultimately make
their positions obsolete (Jackson, 2023). Apparently, one worker even likened the
impact of AI on the workforce to the historical shift from weavers to a single loom
operator in the textile industry (Jackson, 2023). Therefore, it seems that while AI
can be a helpful tool, it also comes with some significant risks for those who rely
on traditional employment. But what about non-traditional employment? What
about the new jobs emerging?
We are even seeing that the advent of ChatGPT has led to the creation of a
new job market where companies are actively seeking prompt engineers to harness
the bot’s potential; a job that involves enhancing the performance of ChatGPT
and educating the company’s staff on how to make the most of this technology
(Tonkin, 2023). Often referred to as ‘AI whisperers’, prompt engineers specialise
in crafting prompts for AI bots such as ChatGPT, and frequently come from
backgrounds in history, philosophy or English language, where a mastery of
language and wordplay is essential (Tonkin, 2023). And we are currently seeing a
strong demand for prompt engineers, with Google-backed start-up Anthropic
advertising a lucrative salary of up to $335,000 for a ‘Prompt Engineer and
Librarian’ position in San Francisco; a role which involves curating a library of
prompts and prompt chains and creating tutorials for customers (Tonkin, 2023).
Additionally, another job posting offers a salary of $230,000 for a machine
learning engineer with experience in prompt engineering to produce optimal AI
output (Tonkin, 2023). Interestingly, the job postings encourage candidates to
apply even if they don’t meet all the qualifications. Sam Altman is currently
emphasising the significance of prompt engineers, stating that ‘writing a really
great prompt for a chatbot persona is an amazingly high-leverage skill’ (Tonkin,
2023). Thus, a new job market has opened. But why is this happening so quickly
and seamlessly? And why are people who did not meet all the qualifications being
asked to apply? It all comes down to unlocking the potential of capability
overhang.
One of the reasons prompt engineers do not have to have a background in
computer science or machine learning is related to the concept of capability
overhang. In his article ‘ChatGPT proves AI is finally mainstream – and things
are only going to get weirder’, James Vincent highlights the concept of ‘capability
overhang’ in AI, which refers to the untapped potential of AI systems, including
latent skills and abilities that researchers have yet to explore (2022). The potential
of AI remains largely untapped due to the complexity of its models, which are
referred to as ‘black boxes’. This complexity makes it challenging to understand
how AI functions and arrives at specific results. However, this lack of under-
standing opens up vast possibilities for future AI advancements (Vincent, 2022).
Vincent quotes Jack Clark, an AI policy expert, who describes the concept of
capability overhang as follows: ‘Today’s models are much more capable than we
think, and the techniques we have to explore them are very immature. What
about all the abilities we are unaware of because we have not yet tested for them?’
(Vincent, 2022). Vincent highlights ChatGPT as a prime example of how a lack of
accessibility had impeded the progress of AI. Although ChatGPT is built on GPT-3.5,
an improved version of GPT-3, it was not until OpenAI made it available on the
web that its potential to reach a wider audience was fully realised. Furthermore,
as it was released free of charge, this further increased its accessibility. Moreover,
despite the extensive research and innovation in exploring the capabilities and
limitations of AI models, the collective scrutiny of the internet's vast user base
remains unparalleled. Now, with AI capabilities suddenly accessible to the general
public, according to Vincent, the potential of the overhang may be within reach (2022).
So, what do the experts have to say about the potential impact of AI on the job
market? Sam Altman holds an optimistic viewpoint, acknowledging that while
technology will undoubtedly influence the job market, he believes there will be
even greater job opportunities emerging as a result. Altman emphasises the
importance of recognising AI tools like GPT as tools, not autonomous entities
(Bhuiyan, 2023). In Altman’s perspective, GPT-4 and similar tools are poised to
excel at specific tasks rather than completely supplanting entire jobs (Bhuiyan,
2023). He envisions GPT-4 automating certain tasks while concurrently giving
rise to novel, improved job roles (Bhuiyan, 2023). However, Altman’s optimism
contrasts with the outlook of Sir Patrick Vallance, the departing scientific adviser
to the UK government (Milmo, 2023c). Vallance adopts a more cautious stance,
predicting that AI will instigate profound societal and economic shifts, with its
impact on employment potentially rivalling that of the Industrial Revolution
(Milmo, 2023c). Moreover, the Organisation for Economic Co-operation and
Development (OECD) contends that major economies stand on the brink of an
AI revolution, which could lead to job losses in skilled professions such as law,
medicine and finance. According to the OECD, approximately 27% of employ-
ment across its 38 member countries, including the United Kingdom, United
States and Canada, comprises highly skilled jobs vulnerable to AI-driven auto-
mation. The OECD specifically highlights roles in sectors like finance, medicine
and legal activities, which require extensive education and accumulated experi-
ence, as suddenly susceptible to AI-driven automation (Milmo, 2023e). In fact,
these predictions are already starting to materialise. In May 2023, IBM’s CEO
announced a temporary halt in hiring for positions that could potentially be
replaced by AI, estimating that around one-third of the company’s non-customer
facing roles, approximately 7,800 positions, could be affected (Milmo, 2023c).
The influence of AI has also reached the stock markets, as evidenced by the
significant decline in the share price of UK education company Pearson following
revised financial projections by US-based provider Chegg, attributing the impact
to ChatGPT and its effect on customer growth (Milmo, 2023c). The negative
influence of AI, then, is not a distant prospect; it is already here and already happening.

AI’s Impact on Education


In 2023, Felten et al. conducted a study to evaluate the degree to which
advancements in AI language modelling capabilities could affect various occu-
pations. Their findings indicate that the education sector is likely to be hit
particularly hard: of the 20 professions they identified as being most at risk,
85% were in the field of education. Starting with those
with the highest risk, these include: psychology teachers; communications
teachers; political scientists; cultural studies teachers; geography teachers; library
science teachers; clinical, counselling and school psychologists; social work
teachers; English language and literature teachers; foreign language and literature
teachers; history teachers; law teachers; philosophy and religion teachers; soci-
ology teachers; political science teachers; criminal justice and law enforcement
teachers; and sociologists (Felten et al., 2023).
We are also starting to see a shift in how graduates are being affected.
According to the 2023 AI Index Report, the percentage of new
computer science PhD graduates from US universities specialising in AI has been
increasing steadily over the years. In 2021, 19.1% of new graduates specialised in
AI, up from 14.9% in 2020 and 10.2% in 2010 (Maslej et al., 2023). The trend is
shifting towards AI PhDs choosing industry over academia. In 2011, a similar
number of AI PhD graduates took jobs in industry (40.9%) and academia
(41.6%). However, since then, a majority of AI PhDs are choosing industry, with
65.4% taking jobs in industry in 2021, which is more than double the 28.2% who
chose academia (Maslej et al., 2023). In addition, the number of new North
American faculty hires in computer science, computer engineering and information
fields has remained relatively stagnant over the past decade (Maslej et al.,
2023). In 2021, a total of 710 new hires were made, which is slightly lower than the
733 hires made in 2012. Furthermore, the number of tenure-track hires also saw a
peak in 2019 with 422 hires, but then dropped to 324 in 2021 (Maslej et al., 2023).
There is also a growing difference in external research funding between private and
public American computer science departments. A decade ago, the median total
expenditure from external sources for computing research was similar for private and
public computer science departments in the United States. However, the gap has
widened over time, with private universities receiving millions more in funding than
public ones (Maslej et al., 2023). As of 2021, private universities had a median
expenditure of $9.7 million, while public universities had a median expenditure of $5.7
million (Maslej et al., 2023). In response to these changes, universities are taking
various actions to adapt. They are focusing on several key areas, including their
infrastructure, programme offerings, faculty recruitment and faculty retention. A May
2023 article by Inside Higher Ed's Susan D'Agostino describes how universities are
responding. Regarding increased investment in AI faculty and infrastructure, she gives
the following examples. The University at Albany, Purdue University and Emory
University are actively hiring substantial numbers of AI faculty members, while the
University of Southern California is investing $1 billion in AI to recruit 90 new
faculty members and establish a dedicated AI school (D'Agostino, 2023). Similarly,
the University of Florida is
creating the Artificial Intelligence Academic Initiative Centre, while Oregon State
University is building an advanced AI research centre with cutting-edge facilities
(D’Agostino, 2023). In support of these efforts, the National Science Foundation is
committing $140 million to establish seven national AI research institutes at US
universities, each with a specific focus area (D'Agostino, 2023). However, D'Agostino
quotes Victor Lee, an associate professor at Stanford's Graduate School of
Education, as emphasising the importance of extending AI initiatives beyond
computer science departments, suggesting the integration of diverse disciplines such
as writing, arts, philosophy and the humanities to foster the range of perspectives
and critical thinking necessary for AI's development and understanding (2023). According to
D’Agostino, colleges are also establishing new academic programmes in AI. For
example, Houston Community College will introduce four-year degree programmes
in applied technology for AI and robotics, as well as applied science in healthcare
management, and Rochester Institute of Technology plans to offer an
interdisciplinary graduate degree in AI (D'Agostino, 2023). Furthermore, the New
Jersey Institute of Technology will launch two AI graduate programmes, and Georgia
Tech will lead a statewide AI initiative with a $65 million investment, transforming a
facility into an AI Manufacturing Pilot Facility (D’Agostino, 2023). In addition,
Palm Beach State College is introducing an AI programme and aims to establish a
graduate school offering AI courses (D’Agostino, 2023).
As can be seen, many universities are actively expanding their infrastructure
and programmes to accommodate the growing interest in AI. However, these
efforts are not without challenges, especially when it comes to recruiting AI
faculty members. And this is further exacerbated by the existing shortage of
computer scientists. In fact, these challenges predate the widespread public
awareness of AI’s potential, as American colleges have long struggled to meet the
high demand for computer science courses while grappling with a shortage of
qualified faculty (D’Agostino, 2023). D’Agostino quotes Kentaro Toyama, a
professor at the University of Michigan, who acknowledges that institutions
planning to hire a significant number of faculty members may struggle to find
individuals with the necessary teaching skills for these specialised classes (2023).
Therefore, it is evident that universities are facing difficulties in recruiting faculty
members at a pace that meets the demand. However, even if they can manage to
hire new faculty, there is the additional challenge of retaining these new members
due to their high demand in the industry. D’Agostino quotes Ishwar K. Puri,
Senior Vice President of Research and Innovation at Southern California, who
says that universities should carefully consider the difficulty of retaining computer
scientists once they are hired (2023). This is because as faculty members gain
expertise and establish themselves, they become highly sought-after by the private
sector, which offers salaries that universities are unable to match. Furthermore,
Puri points out that universities cannot provide the same opportunities for
groundbreaking work in AI that is currently available in the AI corporations,
which may be another reason academics choose to leave universities in favour of
industry (D’Agostino, 2023). In order to address this issue, D’Agostino quotes
Ravi Bellamkonda, provost and executive vice president for academic affairs at
Emory University, who suggests the following: make starting salaries in computer
science departments higher compared to other departments, even though it may
potentially lead to other challenges within the university; and offer unconven-
tional incentives (2023). One example he gives of an unconventional incentive is
allowing faculty members to consult one day a week, thereby blurring the line
between academia and industry (D’Agostino, 2023). As such, Emory University is
now supporting collaborations with companies such as Google or Amazon, and
many faculty members are already engaging in these collaborations or choosing
to spend their sabbatical at a company instead of another academic institution
(D’Agostino, 2023).
Reports are also emerging about concerns among schools in the United Kingdom
over the overwhelming pace of AI development and a lack of trust in technology
companies' ability to protect students and educational institutions (Milmo, 2023d).
To address these concerns, a group of headteachers has
launched a body aimed at advising and safeguarding schools from the risks
associated with AI (Milmo, 2023d). However, their worries go beyond AI-powered
chatbots simply facilitating cheating, extending to the potential impact on children's
well-being as well as on the teaching profession itself (Milmo, 2023d). These
concerns were expressed in a letter to The Times, highlighting the ‘very real and
present hazards and dangers’ posed by AI, particularly in generative AI break-
throughs that can produce realistic text, images and voice impersonations (Milmo,
2023d). Led by Sir Anthony Seldon, the head of Epsom College, a fee-paying school,
the group consists of heads from various private and state schools (Milmo, 2023d).
The primary goal of the group is to provide secure guidance to schools amidst the
rapidly evolving AI landscape, fuelled by scepticism towards large digital companies’
ability to self-regulate in the best interests of students and schools (Milmo, 2023d).
Their letter also criticises the government for insufficiently regulating AI. Therefore,
to tackle these challenges, the group plans to establish a cross-sector body comprising
leading teachers and independent digital and AI experts. This body will offer guid-
ance on AI developments and assist schools in deciding which technologies to
embrace or avoid. The United Nations Educational, Scientific and Cultural
Organization (UNESCO) echoes these concerns, amplifying them through its
Global Education Monitoring Report (2023). The report spotlights inadequate
oversight within the education technology sector, placing children’s well-being at
risk. Although they note that education technology oversight exists in 82% of
countries, challenges arise from private actors. Thus, the report underscores the
necessity to regulate privacy, safety and well-being. Moreover, the report unveils that
out of 163 education technology products recommended during the pandemic, 89%
collect children’s information. However, just 16% of countries ensure data privacy in
education. The report also raises the issue of AI algorithms deepening inequality,
specifically affecting indigenous groups in the United States. Additionally, it high-
lights the surge in cyberattacks targeting education, with incidents doubling in 45 US
districts between 2021 and 2022. Another concern in the report lies in the adverse
effects of excessive screen time on children’s well-being, noting that US children
spend up to nine hours daily on screens. Yet, despite this, the report notes that there
are limited regulations for screen time and most countries do not ban phones in
schools. To address these challenges, UNESCO suggests that countries need to adopt
comprehensive and tailored data protection laws and standards for children and that
policymakers should consider the voices of children to protect their rights during
online activities. They call for sound education technology and data governance to
ensure equitable and high-quality technology benefits while safeguarding children’s
rights to privacy and education. They also call for clear frameworks, effective reg-
ulations and oversight mechanisms to protect the rights of children in a world where
data exchange is widespread (Global Education Monitoring Report 2023: Technology
in Education – A Tool on Whose Terms? 2023).

AI’s Impact on the World


So far in this chapter, we have examined various aspects of AI, including the
emergence and growth of chatbots, challenges and ethical considerations in AI,
AI’s influence on the job market and its effect on education. Yet, a more over-
arching inquiry remains: What will be the impact of AI on the world? To delve
into this question more deeply, we investigate the viewpoints of experts in the
field. We start by considering opinions expressed before ChatGPT was publicly
introduced and then transition to perspectives shared after its unveiling.
In 2017, prior to ChatGPT’s introduction, Max Tegmark, a physicist and
cosmologist renowned for his work in theoretical physics and AI, released ‘Life
3.0: Being Human in the Age of Artificial Intelligence’ (Tegmark, 2017). In this
book, Tegmark navigates the potential influence of AI on human society and
envisions the futures that its advancement could bring about. In particular,
he looks into the realm of Artificial General Intelligence (AGI), analysing the
potential pros and cons, which he believes span from the potential positives – such
as advancements in science, medicine and technology – to the potential negatives,
which encompass ethical dilemmas and existential risks. Tegmark also puts for-
ward an array of hypothetical scenarios that could transpire as AGI evolves,
including the prospects of utopian outcomes, dystopian visions and a middle
ground where human and AI coexistence is harmonious. He further investigates
the societal implications of AGI, including its potential impact on the job market,
economy and governance. Based on this, he stresses the importance of ethical
considerations and conscientious development to ensure that AGI ultimately
serves the collective benefit of all humanity. In 2019, following Tegmark’s book,
Gary Marcus, a Cognitive Scientist and Computer Scientist, and Ernest Davis, a
Professor of Computer Science, published ‘Rebooting AI: Building Artificial
Intelligence We Can Trust’, in which they investigate critical aspects and chal-
lenges within the AI field, specifically focusing on the limitations and deficiencies
prevalent in current AI systems. In doing so, they raise critical questions
about the trajectory of AI advancement. Marcus and Davis contend that, despite
notable advancements in AI technology, fundamental constraints persist, hin-
dering the development of genuinely intelligent and reliable AI systems. They
underscore the lack of common sense reasoning, robustness and a profound
understanding of the world in many contemporary AI systems – qualities inherent
to human cognition. Based on this, they argue that prevailing AI development
approaches, often centred on deep learning and neural networks, fall short in
achieving human-level intelligence and true understanding. Within their work,
Marcus and Davis place transparency, interpretability and accountability as
prominent themes, emphasising the significance of rendering AI systems trans-
parent and interpretable, particularly in domains where their decisions impact
human lives. They assert that these considerations are crucial, especially in fields
such as healthcare, finance and law, where comprehending how AI arrives at its
decisions is vital for ensuring ethical and equitable decision-making (Marcus &
Davis, 2019). Another publication in 2019 was ‘Human Compatible: Artificial
Intelligence and the Problem of Control’ by Stuart Russell, a computer scientist
and professor at the University of California, Berkeley. Russell is widely
acclaimed for his significant contributions to the field of AI, particularly in
machine learning, decision theory and the intricate issue of control within AI
systems. In his book, Russell explores a critical concern in AI advancement: the
imperative to ensure that AI systems act in harmony with human values and
aspirations. The central theme of his work is control, addressing the intricate
challenge of designing AI systems that benefit humanity without entailing risks or
unforeseen outcomes. Russell argues that the prevailing trajectory of AI devel-
opment, which focuses on maximising specific objectives without sufficient regard
for human values, could lead to AI systems that are difficult to manage and
potentially harmful. As a result, he emphasises the paramount importance of
early alignment between AI systems and human values and advocates for
establishing a framework that enables the regulation of AI behaviour (Russell,
2019).
As we can see, even before the public release of ChatGPT in November 2022,
experts were engaged in discussions regarding concerns about the development of
AI. But what is being said now that AI, such as ChatGPT, has been released to
the public? It would appear that feelings are mixed. Some view it as an existential
threat, while others argue that the risk is too distant to warrant concern. Some
hail it as ‘the most important innovation of our time’ (Liberatore & Smith, 2023),
while others caution that it ‘poses a profound risk to society and humanity’
(Smith, 2023). But what is the stance of AI companies themselves? Bill Gates,
Sundar Pichai and Ray Kurzweil champion ChatGPT, highlighting its potential
in addressing climate change, finding cancer cures and enhancing productivity
(Liberatore & Smith, 2023). In contrast, Elon Musk, Steve Wozniak and a group
of 2,500 individuals express reservations about large language models. In March
2023, they issued an open letter urging a pause in their development due to
potential risks and societal implications (Pause Giant AI Experiments: An Open
Letter, 2023). Moreover, in May 2023, Dr Geoffrey Hinton, a prominent figure in
the field of AI, stepped down from his position at Google, citing apprehensions
about misinformation, disruptions in employment and the existential threats
posed by AI (Taylor & Hern, 2023). In particular, he is concerned about the
potential for AI to exceed human intelligence and become susceptible to misuse
(Taylor & Hern, 2023). Although Gates holds favourable opinions about AI, he
supports enhanced regulation of advanced AI, especially due to issues such as
misinformation and deepfakes (Gates, 2023). In a similar vein, Sundar Pichai
stresses the necessity for AI regulation and is against the advancement of
autonomous weapons (Milmo, 2023b). Additionally, technology experts,
including the CEOs of DeepMind, OpenAI and Anthropic, are actively advo-
cating for regulation to tackle existential concerns (Abdul, 2023). But are these
calls for regulation being heeded?

Navigating the Shifting Landscape of AI Policies and Actions


Worldwide, the landscape of AI policy is undergoing a notable expansion, marked
by a surge in legal measures and legislative activities that mention 'artificial
intelligence' (Maslej et al., 2023). In the United Kingdom, the
Competition and Markets Authority (CMA) is actively engaged in a thorough
review of AI, with a specific focus on addressing concerns surrounding
misinformation and potential job disruptions (Milmo, 2023c). This comprehensive
evaluation by the CMA is squarely directed at foundational models such as
ChatGPT, aiming to foster healthy competition and ensure the protection of
consumer interests (Milmo, 2023c). Ministers have tasked the CMA with a
multifaceted mandate, encompassing safety, transparency, fairness, account-
ability and the potential for emerging players to challenge established AI entities
(Milmo, 2023c). These initiatives underscore the mounting pressure on regulatory
bodies to intensify their scrutiny of AI technologies. Simultaneously, the UK
government is actively working on updating AI regulations to address potential
risks (Stacey & Mason, 2023). On the other side of the Atlantic, the Vice President
of the United States convened discussions with AI CEOs at the White House in
May 2023, with the primary focus on safety considerations (Milmo, 2023c). In
parallel, both the Federal Trade Commission and the White House are con-
ducting investigations into the far-reaching impacts of AI (Milmo, 2023c;
Blueprint for an AI Bill of Rights, n.d.). At a broader international level, the EU's
AI Act introduces a comprehensive and structured regulatory framework (‘EU AI
Act: First Regulation on Artificial Intelligence’, 2023). This framework catego-
rises AI applications according to varying levels of risk and seeks to establish itself
as a global standard for promoting responsible AI practices (‘EU AI Act: First
Regulation on Artificial Intelligence’, 2023). In July 2023, building upon the
initiatives outlined earlier, a consortium of prominent technology firms, including
OpenAI, Anthropic, Microsoft and Google (DeepMind’s owner), introduced the
Frontier Model Forum (Milmo, 2023f). The forum claims that its primary
objective is to stimulate AI safety research, establish benchmarks for model
evaluation, advocate for the conscientious deployment of advanced AI, foster
dialogue with policymakers and academics on trust and safety concerns and
explore favourable applications of AI, such as combating climate change and
detecting cancer (Milmo, 2023f). The Forum acknowledges that it has built upon
the significant contributions of entities such as the UK government and the
European Union in the realm of AI safety (Milmo, 2023f). Additionally, it’s
noteworthy that tech companies, particularly those leading the Frontier Model
Forum, have recently reached agreements on new AI safeguards through con-
versations with the White House; these safeguards encompass initiatives such as
watermarking AI content to identify deceptive materials like deepfakes and
enabling independent experts to evaluate AI models (Milmo, 2023f). Indeed, it
appears that developments are taking place within the realm of tech companies.
However, the question that arises is whether we can have confidence in their
commitment to fulfilling their pledges. Doubts persist among some observers, and this is not surprising given what we have seen AI companies and individuals do and say.
For example, in October 2022, preceding the launch of ChatGPT, it was
reported that Microsoft made a notable reduction in the size of its ethics and
society team (Bellan, 2023). According to insider accounts, growing pressure from
the CTO and CEO may have played a role in this decision, with the aim of getting
the latest OpenAI models to customers as soon as possible (Bellan, 2023). Sub-
sequently, in March 2023, it was reported that Microsoft decided to lay off the
remaining members of this team (Bellan, 2023). Those within the team expressed
a shared belief that these layoffs were likely influenced by Microsoft’s intensified
focus on rapidly releasing AI products to gain an edge over competitors, poten-
tially leading to a reduced emphasis on long-term, socially responsible delibera-
tions (Bellan, 2023). Despite this, it’s important to note that Microsoft has
retained its Office of Responsible AI, which carries the responsibility of setting
ethical AI guidelines through governance and public policy initiatives (Bellan,
2023). Nevertheless, the action of dismantling the ethics and society team raises
valid questions about the extent of Microsoft’s commitment to authentically
infusing ethical AI principles into its product design. Another instance emerged
directly from the mouth of OpenAI CEO Sam Altman, who, just days after
advocating for AI regulation in the US Congress, voiced apprehension about the
EU’s efforts to regulate artificial intelligence (Ray, 2023). Altman expressed his
belief that the EU AI Act’s draft was excessive in its regulations and warned that
OpenAI might withdraw its services from the region if compliance proved too
challenging (Ray, 2023). This shift in stance was both sudden and significant,
highlighting the considerable influence one individual can wield.
It is precisely these instances of power and actions that raise concerns for
Rumman Chowdhury, a notable figure in the AI domain. Chowdhury recognises
recurring patterns in the AI industry, akin to the cases mentioned above, which
she considers as warning signals (Aceves, 2023). One of the key issues she high-
lights is the common practice of entities calling for regulation while simulta-
neously using significant resources to lobby against regulatory laws, exerting
control over the narrative (Aceves, 2023). This paradoxical approach hinders the
development of robust and comprehensive regulatory frameworks that could
ensure the responsible use of AI technologies. Moreover, Chowdhury emphasises
that the lack of accountability is a fundamental issue in AI development and
deployment (Aceves, 2023). She points out how internal risk analysis within
companies often neglects moral considerations, focusing primarily on assessing
risks and willingness to take them (Aceves, 2023), as we saw with Microsoft.
When the potential for failure or reputational damage becomes significant, the
playing field is manipulated to favour specific parties, providing them with an
advantage due to their available resources (Aceves, 2023). This raises concerns
about the concentration of power in the hands of a few, leading to potential bias
and adverse consequences for the wider population. Chowdhury further high-
lights that unlike machines, individuals possess diverse and indefinite priorities
and motivations, making it challenging to categorise them as inherently good or
bad (Aceves, 2023). Therefore, to drive meaningful change, she advocates
leveraging incentive structures and redistributing power sources in AI governance
(Aceves, 2023). This would involve fostering collaboration among various
stakeholders, including governments, industries, academia and civil society, to
collectively address complex AI-related issues, promote cooperation and reach
compromises on a large scale (Aceves, 2023). By doing so, she believes we can
ensure that AI technologies are developed and deployed in a way that benefits
society as a whole, rather than serving the interests of a select few. In addition to
Chowdhury’s concerns, Karen Hao, senior AI editor at MIT Technology Review,
expresses serious reservations about the interconnection between advanced AI
and the world’s biggest corporations (Hao, 2020). She points out that, as the most
sophisticated AI methods demand vast computational resources, only the
wealthiest companies can afford to invest in and control such technologies and
consequently, tech giants have significant influence not just on the direction of AI
research but also on the creation and management of algorithms that impact our
daily lives (Hao, 2020). These concerns highlight the critical importance of
transparency, inclusivity and multi-stakeholder collaboration in shaping AI pol-
icies and regulations. The dialogue and actions surrounding AI governance must
involve a diverse range of voices and perspectives to ensure that AI technologies
are ethically developed, responsibly deployed and serve the collective interests of
humanity. We return to these concerns later in this book. However, for now, our
focus turns to the creation of a theoretical framework that will empower us to
delve deeper into the complexities of AI.
Chapter 3

Theoretical Framework for Investigating ChatGPT's Role in Higher Education

Exploring the Need for a Theoretical Framework


In our research study, we undertake an exploratory case study to investigate the
potential impact of Chat Generative Pre-trained Transformer (ChatGPT) on the
roles of students, instructors and institutions of higher education. This type of live
experiment provides valuable real-life insights and allows us to validate theoret-
ical considerations. However, before delving into the study, it is essential to
establish our theoretical framework. Theoretical frameworks, grounded in phi-
losophy, are commonly employed in postmodern qualitative research to examine
diverse perspectives and interpretations of a phenomenon. They help align the
research with its objectives, consider the influence of social and cultural factors on
our understanding of reality and provide valuable insights from a critical stand-
point. In our study, we have chosen the qualitative research paradigm to explore
the subjective experiences and meanings associated with ChatGPT, aiming to
understand its impact on various stakeholders within our institution. Based on
this, we needed to choose theoretical frameworks that would comprehensively
address both the sociopolitical implications and the personal lived experiences
associated with ChatGPT’s integration in higher education. To achieve this, we
decided to adopt the theoretical frameworks of critical theory and phenomenol-
ogy. Critical theory, with its focus on societal structures and power dynamics,
offers a lens through which we can critically interrogate the larger implications of
technology in educational settings. On the other hand, phenomenology, rooted in
understanding human experiences, provides an avenue to delve into the individual
and collective consciousness of stakeholders interacting with ChatGPT. The
combined strength of both frameworks allows us to present a balanced and
holistic view, capturing not only the macro-level impacts on institutional
dynamics but also the micro-level nuances of personal experiences and interpre-
tations. As our research advances, we employ these lenses to delve deeper into
each theme as it emerges. This enables us to shed light on the broader implications
of our research topic, gaining a comprehensive understanding of its significance.

The Impact of ChatGPT on Higher Education, 29–39


Copyright © 2024 Caroline Fell Kurban and Muhammed Şahin
Published under exclusive licence by Emerald Publishing Limited
doi:10.1108/978-1-83797-647-820241003

Critical Theory and ChatGPT: Unpacking Power Dynamics and Social Structures
Critical theory serves as a powerful theoretical framework that delves into power
dynamics, social structures and ideologies to scrutinise and challenge prevailing
systems of inequality and oppression. This approach aims to uncover the
underlying social and political factors that shape our perception of reality, with
the ultimate goal of empowering marginalised groups (Tyson, 2023). When
considering the impact of ChatGPT on the various stakeholders within a uni-
versity, critical theory becomes invaluable for several compelling reasons:

• Examining Power Dynamics
Critical theory facilitates an exploration of how ChatGPT could either disrupt
or reinforce power dynamics within educational environments. It provides a
lens through which to uncover potential inequalities that might arise from
technology’s utilisation and encourages a thoughtful analysis of how these
dynamics could influence the landscape of teaching and learning.
• Redefining Instructor Roles
The transformative potential of ChatGPT extends to reshaping the roles of
instructors by offering automated responses and student assistance. Within this
context, critical theory proves instrumental in dissecting how this shift might
impact the authority, expertise and interaction dynamics between instructors
and students. By prompting a critical examination, it opens the door to probing
the implications and repercussions of such changes.
• Reassessing Student Roles
As students increasingly engage with artificial intelligence (AI)-powered tools
like ChatGPT, their roles in the learning process are likely to evolve. Critical
theory provides a framework for scrutinising how this transformation influ-
ences student agency, critical thinking and the development of independent
learning skills. It invites an exploration of whether students become passive
consumers of information or active participants in shaping their educational
journey.
• Institutional Implications
Critical theory’s lens extends to encompass the broader landscape of higher
education institutions as they integrate ChatGPT. It encourages a rigorous
analysis of how institutional policies, structures and practices need to be
reconsidered during the adoption of AI technologies. This introspection is vital
for understanding how these changes may either reinforce or challenge existing
power structures and inequalities within the educational ecosystem.

In order to enhance our understanding further, we decided to look into the works of renowned theorists in the field of critical theory: Clayton Christensen, Pierre Bourdieu and Karl Marx.

Christensen's Disruptive Innovation and the Transformative Potential of ChatGPT
At the forefront of business strategy theory is Clayton Christensen’s seminal
work, ‘The Innovator’s Dilemma’ (Christensen, 1997). Here, he introduces the
world to ‘disruptive innovation’, a phenomenon where smaller companies with
limited resources rise to challenge and eventually overtake well-established
industry giants. Though their products might start in niche markets and
initially appear inferior, they stand out because of their affordability and acces-
sibility. Over time, these innovations gain traction, appealing to a larger demographic and destabilising dominant market players. While Christensen's
disruptive innovation and critical theory might initially seem unconnected, a
closer examination reveals intricate intersections. Central to disruptive innovation
is the shift in power dynamics: start-ups not only challenge but occasionally
dethrone established giants. This movement mirrors the principles of critical
theory, which is deeply invested in studying power relations. Furthermore,
disruptive innovations are characterised by their democratising potential. They
transform products and services, once seen as exclusive luxuries, into accessible
necessities. When viewed through critical theory, this democratisation process
becomes even more compelling, offering insights into societal access and equity.
Beyond market disruptions, Christensen, in collaboration with his colleagues,
offers another analytical tool: the Theory of Jobs to be Done (Christensen et al.,
2016). They suggest that consumers do not simply buy products or services for
their features; they ‘hire’ them to perform specific tasks or ‘jobs’. These jobs can
be functional, like using a phone to call, social, such as buying a luxury car to
denote status or emotional, akin to purchasing a fitness tracker for the satisfaction
it provides. By comprehending these jobs, companies can design better, more
targeted products. However, staying relevant requires continual monitoring and
adaptation, as these jobs can evolve.
Applying Christensen’s Theory of Jobs to be Done to investigate the impact of
ChatGPT on various educational roles can provide valuable insights into the
specific needs and motivations of stakeholders. By understanding the ‘jobs’ that
ChatGPT may fulfil for each role, we can better analyse the potential effects on
students, instructors and institutions of higher education. Regarding the role of
the student, Christensen’s theory can help us uncover the jobs that students aim to
fulfil in their educational journey. ChatGPT may support students in researching
and accessing information (functional job), promoting collaborative learning and
peer interaction (social job) or enhancing their motivation and self-confidence
(emotional job). Examining these jobs can shed light on how ChatGPT may
influence students’ learning experiences and outcomes. When it comes to the
instructor, by applying the Theory of Jobs to be Done, we can examine the
specific tasks and goals that instructors seek to accomplish in their role. ChatGPT
may assist instructors in automating administrative tasks (functional job), facili-
tating student engagement and interaction (social job) or fostering creativity in
lesson planning (emotional job). This understanding can guide the exploration of
how ChatGPT may augment or transform the instructor’s responsibilities. By
analysing the jobs that institutions of higher education aim to fulfil, we can gain
insights into how ChatGPT may impact their operations. This can include tasks
such as enhancing accessibility and inclusivity (functional job), fostering inno-
vation and collaboration across departments (social job) or adapting to evolving
educational demands (emotional job). Understanding these jobs can inform
strategic decisions regarding the integration and utilisation of ChatGPT within
educational institutions. Therefore, we believe that melding Christensen’s
disruptive innovation with critical theory presents a multi-dimensional frame-
work, as it allows for a comprehensive exploration of technologies, like ChatGPT,
not just as tools but as potential game-changers in industry and society.

ChatGPT Through Bourdieu’s Societal Lens


Our second theorist, Pierre Bourdieu, was a sociologist and philosopher whose
work focused on the relationship between social structures, cultural practices and
individuals’ behaviours. His theories of habitus, field and cultural capital provide
a lens for understanding how social structures shape human behaviour and how
individuals navigate these structures to achieve success (Webb et al., 2002).
Bourdieu’s habitus theory posits that an individual’s environment shapes their
habits, which unconsciously guide their thoughts, actions and preferences (Webb
et al., 2002). These habits perpetuate social structures and contribute to the
development and maintenance of power in social groups. His field theory
emphasises the interplay between social structures and human agency, arguing
that social reality is composed of various fields, each with its own set of rules and
power relations (Webb et al., 2002). Bourdieu’s theory of capital suggests that
social class is determined not only by economic factors but also by cultural and
symbolic capital. He identifies three forms of capital: economic, cultural and
social, which individuals in different social classes possess to maintain or improve
their social status (Webb et al., 2002). Bourdieu (1982) argues that cultural cap-
ital, in its external form, appears as a self-contained and consistent entity.
Although it has roots in historical actions, it follows its own distinct rules,
overriding individual desires. This idea is clearly exemplified by language, which
does not solely belong to any one individual or collective. Importantly, cultural
capital is not just a theoretical construct; it has real-world, symbolic and tangible
power. This power becomes evident when people harness and employ it in diverse
cultural domains, from arts to sciences. In these fields, individuals’ success is often
proportional to their understanding of this external form of capital and their
inherent cultural resources. Regarding language, Pierre Bourdieu understood
language as being not just a tool for communication but a form of capital he
termed ‘linguistic capital’. Central to this notion is the interplay between ‘lin-
guistic habitus’ and the ‘linguistic market’, which Bourdieu puts forward in the
following formula:
Linguistic habitus + linguistic market = linguistic expression (speech)
At its core, Bourdieu's linguistic habitus reflects an individual's social history and is innate, being embodied within the individual (Bourdieu, 1986). This
habitus, a product of societal conditions, dictates utterances tailored to specific
social situations or markets. While one might be tempted to believe that knowl-
edge of a language ensures effective communication, Bourdieu postulates that the
actual execution of speech is governed by the situation’s intrinsic rules – rules
encompassing levels of formality and interlocutors’ expectations (Bourdieu,
1978). This brings us to his concept of the ‘linguistic market’. Bourdieu believes it
is not enough to simply speak a language correctly, one must also understand the
sociolinguistic nuances of where and to whom one speaks (Bourdieu, 1978). This
market is both tangible and elusive. At its most concrete, it involves familiar
social rituals and a clear understanding of one’s place in a social hierarchy.
However, its abstract nature lies in the constantly evolving linguistic norms and
the subconscious factors that influence our speech. Bourdieu’s ‘linguistic capital’
encapsulates the tangible benefits that specific speakers can accrue (Bourdieu,
1978). Bourdieu’s perspective accounts for scenarios where speech exists without
genuine communication. Consider the ‘voice of authority’, where a speaker,
backed by societal and institutional support, may say much while conveying little
(Bourdieu, 1978, p. 80). Bourdieu believes linguistic capital is intrinsically tied to
power dynamics. It shapes value judgements, allowing certain speakers to exploit
language to their advantage. ‘Every act of interaction, every linguistic commu-
nication, even between two people, two friends, boy and girl, all linguistic
interactions, are in a sense micro-markets which always remain dominated by the
overall structures’ (Bourdieu, 1978, p. 83). Building on Bourdieu’s insights, Webb
et al. (2002) underscore language’s role as a mechanism of power that is moulded
by an individual’s social standing. Interactions, verbal or otherwise, are reflective
of the participants’ societal positions. Bourdieu’s emphasis on language as a
reservoir of social capital is especially poignant. When perceived as a resource,
language functions as a blend of cultural capital. This fusion can be leveraged to
forge potent relationships, granting the speaker access to invaluable resources
within communities or institutions (Webb et al., 2002). Bourdieu’s cultural
reproduction theory, or legacy theory, suggests that social inequalities persist
from one generation to the next through the transmission of cultural values and
practices (Webb et al., 2002). This transmission takes place through family
upbringing, education and socialisation in different institutions. Bourdieu argues
that the cultural practices and beliefs of the dominant class are legitimised and
reinforced through the educational system, which acts as a significant tool for
social reproduction (Webb et al., 2002).
Applying Bourdieu’s theory to investigate the impact of ChatGPT on various
educational roles allows us to examine how social structures, cultural practices
and individuals’ behaviours intersect with the integration of this technology.
Bourdieu’s concepts of habitus, field, cultural capital and cultural reproduction
can provide insights into the dynamics and potential consequences of ChatGPT
regarding the role of students, instructors and institutions of higher education.
Regarding the role of the student, with the introduction of ChatGPT, students
may experience changes in their habitus and cultural capital. The technology may
affect their learning practices, as well as their access to and utilisation of
knowledge. ChatGPT’s influence on language interactions and communication
may shape the way students express themselves, engage with course materials and
collaborate with peers. It may also impact students’ relationship to authority and
expertise, potentially challenging traditional hierarchies and sources of knowl-
edge. ChatGPT’s integration can impact the power dynamics within the educa-
tional field, as instructors navigate the interplay between their own habitus,
cultural capital and the expectations and demands of using ChatGPT. The
technology may alter the distribution of symbolic capital and redefine what
knowledge and expertise are valued in the teaching profession. Instructors may
need to negotiate their position and authority in relation to ChatGPT, potentially
leading to a reconfiguration of the instructor’s role and the skills required to be an
effective educator. And finally, regarding the role of institutions of higher edu-
cation, Bourdieu’s theory suggests that the integration of ChatGPT may
contribute to the cultural reproduction within institutions of higher education. It
may perpetuate existing power structures by privileging certain forms of knowl-
edge and communication. The use of ChatGPT may impact institutional prac-
tices, curriculum development and assessment methods. Therefore, institutions
need to critically consider the potential consequences of adopting ChatGPT, and
how it aligns with their educational goals, values and commitment to social
equity.

Marx’s Theory of Alienation in the Context of ChatGPT


Karl Marx, the renowned German philosopher, economist, sociologist and rev-
olutionary socialist, stands as a pivotal figure in history. At the heart of his
philosophy and social thought lies the concept of communism, an alternative
socio-economic system advocating for collective ownership of property and
resources, as opposed to private ownership (Elster, 1986). Being a revolutionary
socialist, Marx believed in the need for a radical, profound change in the system
through revolutionary means, rather than incremental reforms. Challenging static
ideas of human nature, Marx proposed that it is continually moulded by historical
and social circumstances (Elster, 1986). He placed labour at the forefront of
societal structures, asserting it as the primary generator of all wealth (Elster,
1986). Marx pinpointed a critical concern of capitalism: the capital owners exploit
workers by taking the surplus value produced by their labour (Elster, 1986). This
exploitation seeds class struggle, primarily between the bourgeoisie, the capitalist
class and the proletariat, the working class (Elster, 1986). He envisioned a future
where the proletariat would overthrow the bourgeoisie, paving the way for a
socialist society, with collective ownership and control of production means
(Elster, 1986). While Marx’s theories highlighted the working class’ misbelief that
their interests are aligned with the ruling class, a notion often termed ‘false
consciousness’, it’s crucial to acknowledge that Marx never directly used this term
(Althusser, 1971). Marx aimed his critique of political economy at uncovering the
contradictions within capitalism, emphasising the role of material conditions,
especially economic relationships, in crafting history (Elster, 1986). Capitalism,
for Marx, was a phase in historical development, destined to transition to
socialism and, ultimately, to communism (Elster, 1986). A significant pillar of
Marx’s critique was the theory of alienation. He detailed how individuals under
capitalism become distanced from the products of their labour, the labour pro-
cess, their fellow humans and their innate creative potential (Mészáros, 2005).
Workers, in the capitalist machinery, are stripped of control over production
means, leading them to sell their labour for wages. Marx distinguished four
alienation types: alienation from the created products, which become commod-
ities controlled by the exploiting capitalists; alienation from the labour process,
which becomes monotonous under the profit-driven motivations of capitalists;
alienation from fellow workers, as capitalism promotes competition and indi-
vidualism and alienation from one’s inherent creative essence, as the capitalist
system reduces individuals to mere production tools (Mészáros, 2005). These
facets of alienation give rise to societal issues such as inequality, exploitation and
the breakdown of genuine human connections (Mészáros, 2005). To address these
deep-rooted problems, Marx championed the abolition of private ownership of
production means and the dawn of a classless society, where collective control
empowers individuals to shape their work and its outcomes (Mészáros, 2005). In
conclusion, Marx’s ideas paint a landscape deeply concerned with the intricate
ties between economic mechanisms and societal interactions. His theories have
had enduring impacts, resonating in various sociopolitical movements and ana-
lyses even in contemporary times. But how do these relate to ChatGPT?
Karl Marx’s framework on alienation underscores the relationship between
workers and the essence of their labour. By applying this theory to the modern
context of higher education, we can ascertain the potential consequences and
implications of technologies like ChatGPT on students, instructors and institu-
tions. For Marx, education could be considered a type of labour, where students
invest effort to gain knowledge and skills. Introducing ChatGPT could reshape
this dynamic, altering the student’s relationship with their educational labour.
While ChatGPT might grant swift access to information, an advantage for collaborative research or independent study, over-reliance could
stifle students’ critical thinking abilities and reduce active engagement in the
learning process. And, although ChatGPT has the potential to democratise access
to information, there’s a growing concern that education could further become a
commodity, exacerbating the gulf between economically diverse student groups,
especially if quality educational tools become stratified based on purchasing
power. Extending Marx’s perspective to instructors, educators might feel
estranged from their professional essence if ChatGPT or similar technologies
encroach too deeply into their domain. Though ChatGPT could automate certain
pedagogical tasks, potentially elevating the quality of education by allowing
educators to concentrate on nuanced, human-centric aspects of teaching, there’s a
risk. If institutions prioritise technology over educators, it could lead to a form of
professional alienation, relegating educators to secondary roles and potentially
diminishing their perceived value in the educational process. Through a Marxist
lens, the capitalist tendencies of institutions could be amplified with technologies
like ChatGPT. These tools might be seen less as educational enhancements and
more as cost-saving or profit-driving mechanisms. This could shift priorities from
holistic education to market-driven objectives, echoing Marx’s concerns about
capitalist structures overshadowing true value. However, this does not have to
happen. Ethical, student-centric integration of ChatGPT could lead to enriched
collaborative experiences, blending traditional pedagogy with modern techniques.
In essence, while ChatGPT and similar AI technologies hold vast potential for
reshaping higher education, Marx’s theory of alienation cautions us to think
about the caveats. The challenge lies in ethically integrating these tools, focusing
on augmenting human capabilities rather than sidelining them. It underscores the
importance of continually reassessing institutional policies, emphasising the
human aspect of education and ensuring that advancements in technology truly
serve their primary stakeholders – the students, instructors and the broader
educational community.

Phenomenology: Unveiling Insights into ChatGPT


Phenomenology, as a theoretical framework, centres on comprehending the
subjective experiences and the significances that individuals ascribe to them. It
delves into the lived encounters of individuals, striving to unveil the core nature of
phenomena as perceived by those engaged (Patton, 2002). In the context of
examining the influence of ChatGPT on the roles of diverse stakeholders within a
university, phenomenology offers a valuable theoretical approach with the
following implications:

• Exploration of Subjective Experiences
Phenomenology allows researchers to delve into the subjective experiences of
individuals involved in teaching and learning with ChatGPT. It helps uncover
the lived experiences, perceptions and emotions of students and teachers in
relation to the technology.
• Understanding the Meaning-Making Process
Phenomenology seeks to understand how individuals make sense of their
experiences and the meanings they attach to them. In the context of ChatGPT,
it can shed light on how students and teachers interpret and understand the
technology’s impact on their roles and the educational process as a whole.
• Examination of Changes in Teaching and Learning
Phenomenology can help researchers explore how ChatGPT may affect the
roles of teaching and learning by investigating possible required shifts in
pedagogical practices and instructional strategies. It provides insights into the
ways in which the technology may influence the interaction between students
and instructors and the overall educational experience.
• Uncovering New Possibilities and Constraints
Phenomenology enables researchers to identify both the opportunities and
limitations presented by ChatGPT in education. It allows for an exploration of
the potential benefits, such as increased access to information or personalised
learning experiences, as well as the challenges, such as the potential reduction
of human interaction surrounding AI technology.
• Emphasis on the Lived Experience
Phenomenology places emphasis on the first-person perspective and the expe-
riential knowledge of individuals. This focus aligns with the aim of under-
standing the lived experiences of students and instructors as they navigate the
integration of ChatGPT.

To enhance our understanding further, we looked into the works of one of the
most famous theorists in phenomenological research, Martin Heidegger.

Heideggerian Reflections: AI and the Essence of Being


Heidegger, a German philosopher, made significant contributions to phenome-
nology, hermeneutics and existentialism. Despite criticism for his affiliation with
the Nazi Party, his philosophy offers valuable insights into the impact of tech-
nology on human relationships and our sense of authenticity and connectedness
to the world (Inwood, 2019). At the core of Heidegger’s philosophy is the concept
of Dasein, which refers to human existence and its unique ability to question its
own being (Girdher, 2019). Heidegger’s philosophy emphasises the pre-existing
understanding of the world that shapes human existence, known as ‘being-in-the-
world’ (Girdher, 2019). Heidegger’s inquiry into the meaning of being revolves
around what makes beings intelligible as beings. He distinguishes between ontical
and ontological aspects, where ontical pertains to specific beings and ontological
focuses on the underlying meaning of entities, referred to as the ‘Ontological
Difference’ (Inwood, 2019). This distinction forms the basis of Fundamental
Ontology, which aims to comprehend the meaning of being itself. Time is a
crucial aspect of Heidegger’s philosophy, not simply a linear sequence of events,
but a fundamental aspect of all existence, shaping our understanding of the world
and our place within it (Inwood, 2019). In Heidegger’s philosophy, technology is
seen as fundamentally transforming our relationship with the world, revealing
and ordering it in a particular way, referred to as the ‘enframing’ or ‘challenging-
forth’ nature of technology (Inwood, 2019). This perspective leads to a mode of
being called ‘standing reserve’, where everything, including humans, is objectified
and reduced to a means to an end (Inwood, 2019). He cautioned that this way of
being obscures our authentic relationship with the world and disconnects us from
our true nature (Inwood, 2019). However, Heidegger also acknowledged tech-
nology’s potential for positive transformation, proposing a different approach
characterised as ‘poetic dwelling’ or ‘releasement’ (Bida, 2018). He introduced the
concept of ‘readiness-to-hand’, describing our seamless interaction with tools and
objects in everyday life, where tools become extensions of ourselves, allowing for
a sense of flow and efficiency (Inwood, 2019). In this view, technology should
foster openness, attentiveness and a deeper connection with the world, revealing
and enabling our engagement with it (Bida, 2018). Heidegger’s philosophy of
technology significantly influences contemporary discussions on the ethical and
existential dimensions of technology. It prompts us to reflect on the effects of
technology on our lives, question the assumptions underlying our technological
worldview, and explore alternative ways of relating to technology for a more
meaningful and sustainable existence.
When considering the role of students, Heidegger’s philosophy underscores
their unique capacity to question their own being and the temporal nature of their
existence. The presence of ChatGPT in education raises questions about students’
relationship with knowledge. ChatGPT, as a tool for accessing information, may
enhance the readiness-to-hand experience for students. By providing immediate
and relevant responses to queries, ChatGPT can simplify the process of obtaining
information, reducing cognitive load and enabling students to focus more on
understanding and applying knowledge. However, with this comes a risk that
excessive reliance on ChatGPT might lead to passive learning, diminishing stu-
dents’ inclination for exploration and critical thinking. To prevent this, students
may need to reflect on the role of technology in their learning process and actively
participate in shaping their understanding of being-in-the-world. In relation to
instructors, Heidegger’s concept of being-in-the-world suggests that their role is
intertwined with their existence and understanding of the world. The introduction
of ChatGPT may challenge their traditional role as the primary source of
knowledge. This technology can assist them in preparing course materials,
offering timely feedback and addressing common questions from students.
However, it also raises questions about the readiness-to-hand experience of
teaching. Instructors should be mindful of how their reliance on AI assistance
might impact their authentic engagement with students and the learning process.
Therefore, instructors may need to navigate the integration of technology in their
teaching practices, re-evaluating their own relationship to knowledge and the
facilitation of authentic learning experiences. As a result, the instructor’s role may
shift towards guiding students in their engagement with technology, fostering
critical reflection and assisting students in uncovering their own understanding of
being. The introduction of ChatGPT also prompts a re-evaluation of the temporal
dimension of education for institutions of higher education. Institutions should
critically assess how this technology aligns with their educational values and
goals. They must reflect on the balance between efficiency and the time needed for
meaningful learning experiences. This integration of AI technology could impact
institutional structures, curriculum design and the overall educational environ-
ment. By integrating ChatGPT into higher education institutions, access to
information and resources can be streamlined, potentially increasing efficiency
and reducing administrative burdens. However, caution is necessary to avoid
over-reliance on AI technology. Institutions must carefully consider the balance
between efficiency and the temporal space required for genuine learning experi-
ences. Additionally, they should ensure that AI integration preserves the
readiness-to-hand experience for both students and instructors while also aligning
with their educational values and objectives. This will help maintain a holistic and
effective learning environment while embracing the advantages of AI technology.
Therefore, in summary, by applying Heidegger’s philosophy to the examination
of ChatGPT in education, we can gain a deeper understanding of the potential
implications for students, instructors and institutions. It prompts us to reflect on
the existential and temporal dimensions of education, encouraging a critical
evaluation of how technology can shape our relationship with knowledge,
authenticity and our sense of being-in-the-world.
Within this chapter, we have constructed a theoretical foundation to facilitate
the examination of ChatGPT and similar AI technologies. Our framework
encompasses two key components: critical theory, which delves into power
dynamics, social disparities and cultural conventions; and phenomenology,
enabling an understanding of conscious experiences as perceived directly by
individuals. In Chapter 6, we revisit these theories, using them as a guide to
conduct a comprehensive analysis of our findings.
Chapter 4

Exploring ChatGPT’s Role in Higher Education: A Literature Review

Defining the Scope of the Literature Review


In this chapter, we present our literature review conducted in early April 2023,
where we utilised Google Scholar to search for articles focusing on ChatGPT’s
impact on students, instructors and higher education. Our primary goal was to
identify case studies exploring ChatGPT’s integration in educational settings to
gain valuable insights into its practical implications. As ChatGPT had only been
publicly released on 30 November 2022, just over 4 months before our literature
review began, we expected a scarcity of case studies due to limited time for
in-depth research and analysis. As anticipated, our search revealed limited liter-
ature related to actual case studies involving ChatGPT. However, what we did
discover was an emerging trend in content and document analysis studies
examining secondary sources like media releases and social media discussions.
Additionally, we identified meta-literature reviews synthesising the growing body
of research around ChatGPT, often including preprints due to its recent public
appearance and the limited time for peer-reviewed publications on this topic.
Furthermore, we found several user case studies focusing on the individual
experiences of instructors and researchers experimenting with aspects of
ChatGPT. Considering that we were particularly interested in the implementation
of ChatGPT in educational settings, we limited our scope to papers released
following its public launch on 30 November 2022, concentrating on the 4-month
period leading up to the commencement of our research. After conducting our
initial review, we selected nine papers that we deemed most relevant to our
research questions. These papers were categorised into three groups: content and
document analysis papers (three papers), literature review papers (two papers)
and case studies focusing on user experiences (four papers). An overview of these
is provided below.

The Impact of ChatGPT on Higher Education, 41–73
Copyright © 2024 Caroline Fell Kurban and Muhammed Şahin
Published under exclusive licence by Emerald Publishing Limited
doi:10.1108/978-1-83797-647-820241004

Content and Document Analysis Papers on ChatGPT


Analysing the Role of ChatGPT in Improving Student Productivity in Higher
Education
In their study, ‘Analysing the Role of ChatGPT in Improving Student Produc-
tivity in Higher Education’, Fauzi et al. (2023) aimed to explore the impact of
ChatGPT on student productivity in higher education settings. Employing a
qualitative research methodology, the researchers adopted a desk research
approach and relied on secondary sources of information for data collection. By
consulting various reference materials, including online media and journal data-
bases, they sought to ensure comprehensive coverage of relevant information related to
ChatGPT’s role in enhancing student productivity. During the data collection
process, Fauzi et al. (2023) recorded relevant information, which they subse-
quently analysed using data reduction and data presentation techniques. Through
simplifying, classifying and eliminating irrelevant data, they gained insights into
ChatGPT’s potential impact on student productivity. Their data presentation
involved systematically organising the data and using written discourse in the form of
field notes to facilitate understanding and support the process of drawing con-
clusions. The study’s findings revealed that ChatGPT holds the potential to
contribute to various aspects of student productivity in higher education.
Notably, ChatGPT provided valuable assistance to students by offering relevant
information and resources for their assignments and projects. ChatGPT also
assisted students in improving their language skills, grammar, vocabulary and
writing style. Moreover, ChatGPT fostered collaboration among students,
enabling effective communication, idea exchange and project collaboration.
Additionally, it contributed to time efficiency and effectiveness by helping stu-
dents organise their schedules, assignment due dates and task lists. Beyond that,
ChatGPT served as a source of support and motivation for students, offering
guidance on stress management and time and task management strategies. Based
on their findings, Fauzi et al. (2023) propose several recommendations. Firstly,
they suggest that students should use ChatGPT judiciously and critically evaluate
the credibility of the information it provides. Secondly, they emphasise that
educators and educational institutions should consider integrating ChatGPT into
learning processes to enhance student productivity, while also maintaining a
balanced approach that values human interaction and student engagement.
Thirdly, they recommended that technology companies continue advancing and
refining language models like ChatGPT to further contribute to the improvement
of student productivity and online learning. Fauzi et al.’s (2023) study sheds light
on the positive implications of ChatGPT in higher education, particularly its
potential to enhance student productivity. However, it is essential to address the
study’s weaknesses, such as the heavy reliance on secondary sources and the
limited exploration of other aspects of ChatGPT’s impact on education. To
strengthen the validity and reliability of findings, it is suggested that future
research should consider incorporating primary research methods and conducting
broader investigations.
Fauzi et al.’s (2023) paper is of significant importance to our study, as it
directly addresses our research topic on how ChatGPT may affect the roles of
students, instructors and institutions of higher education. This paper adds value to
our understanding of the research topic in several key ways.

• Focus on Student Productivity
The paper provides a specific examination of the impact of ChatGPT on stu-
dent productivity in higher education. It offers relevant insights into the role of
ChatGPT in influencing student learning outcomes and academic performance
through various means of support.
• Practical Recommendations
The study’s recommendations offer valuable guidance for educators and
institutions seeking to effectively integrate ChatGPT into learning processes
while maintaining a balance with human interaction and personalised
instruction.
• Broader Implications
While focusing on student productivity, the study indirectly informs our
research on the roles of instructors and higher education institutions. It illus-
trates how students’ interactions with ChatGPT may influence instructors’
teaching practices and institutional approaches to supporting student learning
and academic success.

Overall, Fauzi et al.’s (2023) findings and recommendations offer valuable
guidance for navigating the integration of ChatGPT effectively, while also raising
important considerations for examining the roles of instructors and higher edu-
cation institutions in the context of artificial intelligence (AI)-driven technologies.

Open AI in Education: The Responsible and Ethical Use of ChatGPT Towards Lifelong Learning
David Mhlanga’s 2023 article, ‘Open AI in Education: The Responsible and
Ethical Use of ChatGPT Towards Lifelong Learning’, employs document anal-
ysis, reviewing various sources, including news outlets, blog posts, published
journal articles and books (Mhlanga, 2023). For the research, Mhlanga con-
ducted a global search to identify papers that investigated OpenAI, the principles
of responsible AI use in education and the ethical implications of AI in education
(Mhlanga, 2023). From this search, he selected 23 publications that served as his
primary sources, from which he derived a number of key themes. One of these key
themes revolves around the challenges associated with using ChatGPT in edu-
cation, in which he warns against the use of ChatGPT for grading written tasks,
as it may threaten conventional methods of assessment, such as essays. Addi-
tionally, he raises concerns that students might outsource their work to ChatGPT,
making it harder to detect instances of plagiarism. Despite these challenges,
Mhlanga believes that a blanket ban on ChatGPT might not be the best
approach. Instead, he urges educators and policymakers to address these
challenges thoughtfully while considering the potential benefits of ChatGPT in
education. Despite his concerns, Mhlanga also highlights various opportunities
that ChatGPT brings, suggesting that its ability to generate essays and written
content can make it a powerful tool for enhancing learning. In addition, he
suggests it can be used to improve assessment procedures, foster student collab-
oration and engagement and facilitate experiential learning. While Mhlanga
acknowledges that ChatGPT can be disruptive, he also opines that it presents an
excellent opportunity to modernise and revolutionise education (Mhlanga, 2023).
However, if implementing ChatGPT in educational settings, Mhlanga emphasises
the importance of adhering to responsible and ethical practices, noting that
protecting the privacy of users’ data is paramount, and students must be informed
about data collection, usage and security measures. Similarly, he warns that the
responsible use of ChatGPT requires addressing potential biases and ensuring
fairness and non-discrimination, particularly in grading and evaluation. Mhlanga
also underscores that ChatGPT should not be viewed as a replacement for human
teachers but rather as a supplement to classroom instruction. This is because he
believes human instructors play a crucial role in understanding students’ unique
needs, fostering creativity and providing hands-on experiences that ChatGPT
cannot replicate (Mhlanga, 2023). Furthermore, Mhlanga points out some of the
limitations of ChatGPT, such as its inability to comprehend the context sur-
rounding students, such as their culture and background. He believes this limi-
tation makes ChatGPT unsuitable for providing personalised and experiential
education, which, he notes, is essential for students’ holistic learning. Therefore,
he suggests that educators must educate their students about ChatGPT’s limita-
tions and encourage them to critically evaluate its output (Mhlanga, 2023). To
ensure ethical AI use in education, Mhlanga suggests that transparency is crucial.
Therefore, he suggests offering open forums, workshops and discussion groups for
students and teachers to understand how ChatGPT functions and its capabilities.
For Mhlanga, transparency should include informing students about the algo-
rithms and data sources used and potential biases in the technology. He also
believes it is essential to prioritise the adoption of open-source or transparent AI
technology to provide access to the source code and underlying data. He believes
that by educating students about AI’s limitations and potential biases, we can
empower them to use the technology responsibly (Mhlanga, 2023). Additionally,
Mhlanga emphasises the significance of accuracy in education, noting that
ensuring AI-generated content is accurate and reliable is essential to prevent
misconceptions and misinformation among students. Therefore, he states that
critical thinking and fact-checking must be encouraged when using AI tools in the
educational process (Mhlanga, 2023).
Mhlanga’s findings highlight the challenges and opportunities associated with
using AI in educational settings and emphasise the importance of responsible and
transparent practices. The study underscores the need to educate students and
instructors about AI’s limitations and potential biases and stresses the irreplace-
able role of human teachers in the learning process. Overall, Mhlanga’s research
offers valuable insights into the impact of ChatGPT on students, instructors and
higher education institutions, addressing key aspects of our research question.
However, it is suggested that further research be conducted to expand the sample
size and to consider AI’s broader applications in education.
Mhlanga’s (2023) study is relevant to our research in the following ways:

• Challenges and Opportunities for Education
Mhlanga’s research identifies both challenges and opportunities in using
ChatGPT for education. Understanding these aspects can help inform how
students and instructors should approach and leverage ChatGPT in educa-
tional settings. The concerns about potential plagiarism and the need for
educators to adapt their assessment methods highlight the challenges institu-
tions may face while implementing AI tools. Conversely, the opportunities
presented by ChatGPT, such as improved assessment procedures and inno-
vative teaching approaches, can inspire educators to explore its integration in a
responsible manner.
• Complementing Human Instructors
Mhlanga’s findings suggest that ChatGPT should not replace human teachers
but rather supplement their efforts. This aligns well with our research question
about the roles of students and instructors in the AI-driven educational land-
scape. Gaining insights into how ChatGPT can facilitate and improve human
instruction can provide valuable guidance for instructors to adjust their
teaching approaches and for students to effectively engage with AI
technologies.
• Limitations and Awareness
Mhlanga’s research underscores the importance of educating students about
the limitations of ChatGPT. Understanding AI’s capabilities and limitations is
essential for students to critically evaluate AI-generated content and use the
technology effectively. This aspect is highly relevant to our research regarding
how students may interact with and perceive AI tools like ChatGPT.
• Integration and Adaptation
The focus of Mhlanga’s (2023) study on the challenges and opportunities of
integrating ChatGPT in education can inform how higher education institutions adapt to
the AI-driven landscape. Understanding the potential disruptions and benefits
can help institutions make informed decisions about adopting AI technologies.
• Scope for Further Research
Mhlanga’s article encourages further research and debate on the responsible
use of AI in education. This aligns with our research goal of exploring the
broader implications of AI on students, instructors and educational institu-
tions. Our research can expand on Mhlanga’s findings and delve into specific
use cases and best practices for AI integration in education.

In conclusion, David Mhlanga’s 2023 article is highly relevant to our research,
as it provides valuable insights into the responsible and ethical implementation of
ChatGPT in education. The study addresses key aspects of how ChatGPT affects
students, instructors and higher education institutions, offering relevant per-
spectives on challenges, opportunities, ethical considerations and potential future
directions. It serves as a foundation for understanding the implications of AI in
education and can guide institutions in navigating the AI-driven educational
landscape responsibly.

ChatGPT in Higher Education: Considerations for Academic Integrity and Student Learning
Sullivan et al.’s (2023) article titled ‘ChatGPT in higher education: Considerations
for academic integrity and student learning’ presents an investigation into the
disruptive effects of ChatGPT on higher education. Their study focuses on two
main areas: analysing key themes in news articles related to ChatGPT in the
context of higher education and assessing whether ChatGPT is portrayed as a
potential learning tool or an academic integrity risk. Their research methodology
involved conducting a content analysis of 100 media articles from Australia, New
Zealand, the United States and the United Kingdom, identified using specific search
terms; the articles were then imported into EndNote and subsequently NVivo for analysis.
The authors followed a content analysis guidebook, refining a preliminary
codebook and coding the articles based on identified themes. They also examined
how various stakeholders, including university staff, students and ChatGPT, were
represented in the media. To assess sentiment and word usage, they employed
NVivo’s sentiment analysis and query tools. Overall, a number of themes emerged
from the analysis. The authors found that the articles mainly focused on concerns
related to academic integrity, particularly regarding cheating, academic dishon-
esty and misuse facilitated by AI, such as ChatGPT. Instances of using ChatGPT
for cheating on university entrance exams were also identified. Educating students
about AI’s impact on academic integrity and setting clear guidelines was high-
lighted as crucial. The articles also explored universities’ efforts to detect AI use in
assignments, mentioning various tools like OpenAI’s Open Text Classifier,
Turnitin, GPTZero, Packback, HuggingFace and AICheatCheck. However,
some scepticism was expressed about the accuracy and sophistication of these
detection technologies, with academics relying on their familiarity with students’
work and detecting shifts in tone to identify AI-generated content. In their study,
Sullivan et al. (2023) also identified a significant theme concerning strategies to
discourage the use of ChatGPT in education. Many articles discussed universities
adjusting their courses, syllabi or assignments to minimise susceptibility to
ChatGPT-generated content, often opting for invigilated examinations. However,
some argued against relying solely on exams, suggesting task redesign to promote
authenticity and assess critical thinking. The analysis also highlighted concerns
about ChatGPT’s proneness to errors, limitations and its impact on learning
outcomes. Additionally, articles raised copyright, privacy and security concerns
related to student data. Interestingly, some articles emphasised the inherent
connection between learning and writing, underscoring the role of writing in
exploring and solidifying thoughts on various subjects. Other concerns included
the potential decline in critical thinking skills due to overreliance on AI for
coursework completion, which may undermine genuine educational growth. In
addition, Sullivan et al. (2023) found that a higher number of articles made ref-
erences to institutions or departments that had imposed bans on ChatGPT
compared to those allowing its use. However, they observed that the most
commonly discussed response was the indecisiveness of certain universities
regarding their policies. These universities were described as ‘updating’, ‘review-
ing’ and ‘considering’ their policies, reflecting a cautious approach due to the
rapidly evolving nature of the situation. In the absence of official institutional
policies, several articles mentioned that individual academic staff members would
develop revised policies on a course-by-course basis. The researchers also noted
that universities that had chosen to prohibit the use of ChatGPT had already
updated their academic integrity policy or honour code, or they believed that AI
use was already prohibited based on existing definitions of contract cheating. On
the other hand, in cases where universities permitted the use of ChatGPT, it often
came with the requirement to adhere to strict rules, including the disclosure or
acknowledgement of its use in assignments. Additionally, the researchers high-
lighted that two articles clarified that while a specific university did not impose a
ban on ChatGPT, individual academic staff members still had the discretion to
prohibit its use in certain assessments or units. Furthermore, Sullivan et al. (2023)
found that a significant portion of the analysed articles discussed integrating
ChatGPT into teaching practices. These articles advocated for meaningful inte-
gration of AI in teaching and suggested specific ways to incorporate ChatGPT
into assignment tasks, such as idea generation and feedback on student work.
Various applications for ChatGPT in the learning experience were proposed,
including personalised assignments, code debugging assistance, generating drafts,
providing exemplar assignments and more. The articles acknowledged the diffi-
culties of banning ChatGPT and recognised its relevance in future workplaces.
Enforcing a complete ban was deemed impractical, leading to debates on
investing in AI detection systems. ChatGPT was likened to calculators or Wiki-
pedia, highlighting its disruptive nature. However, specific ways AI would be
employed in the workplace were not extensively explored. The researchers noted a
lack of focus on using ChatGPT to enhance equity outcomes for students. Few
articles discussed mitigating anxiety or supporting accessibility challenges on
campus. They also highlight that there was limited mention of ChatGPT’s
potential to improve writing skills for non-native speakers and promote a more
equitable learning environment. They note that only one article touched briefly on
disability-related considerations and AI’s potential to empower individuals with
disabilities. Regarding voices, Sullivan et al. (2023) found that university figures,
including leaders, coordinators, researchers and staff, were extensively quoted in
the media, with nearly half of the articles citing three or more representatives from
respective institutions. In contrast, student voices were relatively underrepre-
sented, appearing in only 30 articles, and only seven of those included quotes from
more than three students. Some articles focused on Edward Tian, the student
behind GPTZero, while others used survey data to represent the collective
student voice.
Based on their research, Sullivan et al. (2023) urge for a more balanced
examination of the risks and opportunities of ChatGPT in university teaching and
learning, as they believe media emphasis on cheating may influence readers’
perceptions of education’s value and student views on its appropriate use. Sullivan
et al. (2023), therefore, suggest redesigning assessment tasks to reduce suscepti-
bility to AI tools through personalised and reflective tasks. However, they
acknowledge disagreements on the most effective adaptation strategies and the
evolving nature of ChatGPT and detection software, potentially making some
discussions in the articles outdated. They note that the need for policy revisions
regarding AI tools and academic integrity is emphasised in the articles, but spe-
cific implementation details are lacking. They suggest that clearer policy positions
are expected later in 2023. They also believe establishing explicit guidelines for
ethical AI tool use is crucial, considering accessibility, sophistication and wide-
spread adoption across industries. They deem an outright ban on AI tool usage
impractical given student access. Sullivan et al. (2023) emphasise the need for
clear guidelines for ChatGPT use, including acknowledging its limitations and
biases. They highlight potential benefits for student learning, simplifying complex
concepts and aiding test preparation. They believe incorporating ChatGPT into
workflows could enhance employability, but that critical thinking skills to analyse
AI outputs are essential. They also suggest more industry input is needed in
workplace discussions and educators must foster unique skills for students to stay
competitive in the job market. In addition, Sullivan et al. (2023) emphasise
ChatGPT’s potential to enhance academic success for diverse equity groups, but
note limited attention in existing literature. They believe ChatGPT can support
non-traditional students, non-native English speakers and students with accessi-
bility needs. However, they caution about potential inaccuracies and biases. The
authors report that they find the opportunities promising and look forward to
AI’s development in accessibility and inclusion in the future. Sullivan et al. (2023)
acknowledge the media’s predominant focus on academic and institutional per-
spectives regarding ChatGPT, neglecting student views. They stress the need for a
more constructive and student-led discussion, involving all stakeholders for an
inclusive discourse on AI. They advocate for student associations and partner-
ships to collaborate with university staff, enhancing student engagement and
institutional approaches to AI.
Sullivan et al. (2023) acknowledge the limitations of their study, focusing on
mainstream news databases and a relatively small number of articles, and we are
inclined to agree with them. They emphasise the importance of considering
alternative sources like social media platforms and education blogs for a more
comprehensive understanding of ChatGPT discourse. They also recommend
expanding the sample size, exploring diverse cultural contexts and investigating
the sources shaping media coverage to address existing biases. To address these
limitations, Sullivan et al. (2023) propose future research opportunities, including
exploring non-Western sources, conducting surveys and focus groups with stu-
dents and investigating academic staff perspectives on ChatGPT. They emphasise
the potential for AI tools to enhance student learning and access, highlighting the
need for a more inclusive student perspective in discussions. The authors stress the
importance of media framing and its impact on public perceptions of academic
integrity and university responses to ChatGPT. The authors conclude that their
findings emphasise the necessity for further research and dialogue concerning the
implications of AI tools, highlighting the need to explore ethical use, innovative
teaching and learning practices and the promotion of equitable access to educa-
tional opportunities. Finally, they assert that as AI technologies continue to
evolve, it is crucial for universities to adapt and embrace their utilisation in a
manner that supports student learning and prepares them for the challenges of an
increasingly digital world.
Sullivan et al.’s (2023) investigation is relevant to our study in the following
ways.

• Academic Integrity Concerns
Sullivan et al. highlight that media articles predominantly focus on concerns
related to academic integrity, such as cheating and academic dishonesty
facilitated by ChatGPT. This finding underscores the importance of addressing
these integrity concerns and developing clear guidelines for students to ensure
ethical use of AI tools in our study.
• Task Redesign and Avoidance Strategies
The study reveals that universities are adopting strategies to discourage the use
of ChatGPT, including task redesign and opting for invigilated examinations.
These findings can inform our research on how institutions are adapting to the
challenges posed by AI tools and maintaining the integrity of assessments.
• Policy Challenges
Sullivan et al. report on the indecisiveness of certain universities regarding their
policies on ChatGPT usage. This aspect is particularly relevant to our study, as
it emphasises the need for institutions to develop explicit guidelines and policies
for the ethical use of AI tools while ensuring academic integrity.
• Embracing ChatGPT in Teaching
Despite concerns, the study highlights that some media articles advocate for
meaningful integration of ChatGPT in teaching practices. This finding is sig-
nificant for our research, as it provides insights into potential opportunities and
benefits of incorporating AI tools in educational settings.
• Equity and Accessibility Considerations
The study touches on the potential benefits of ChatGPT for equity and
accessibility, such as assisting non-native speakers and students with disabil-
ities. These considerations align with our research focus on understanding how
AI tools can support all of our students, especially considering that the
majority of our students are non-native speakers studying in an
English-medium environment.
• Student Engagement
Their study reveals that student voices are relatively underrepresented in media
articles. This aspect is directly relevant to our research, emphasising the
importance of involving students in the discussion and decision-making pro-
cesses concerning AI integration in education.

In conclusion, Sullivan et al.’s (2023) investigation holds substantial relevance to our research, as it illuminates critical aspects essential for understanding the
implications of integrating ChatGPT in the educational landscape. Their findings
concerning academic integrity concerns, task redesign strategies, policy chal-
lenges, embracing ChatGPT in teaching, equity and accessibility considerations,
media perception and student engagement provide valuable groundwork for our
study to explore the responsible and effective integration of AI tools in our
educational context.

Literature Review Papers on ChatGPT


‘We Need To Talk About ChatGPT’: The Future of AI and Higher Education
In their 2023 paper titled ‘We Need To Talk About ChatGPT’: The Future of AI
and Higher Education, Neumann et al. emphasise the diverse applications of
ChatGPT for software engineering students. These applications include assessment preparation, translation and generating specific source code, as well as summarising literature and paraphrasing text in scientific writing. This prompted
them to write a position paper with the aim of initiating a discussion regarding
potential strategies to integrate ChatGPT into higher education. They, therefore,
decided to investigate articles that explore the impact of ChatGPT on higher
education, specifically in the fields of software engineering and scientific writing,
with the aim of asking ‘Are there lessons to be learned from the research com-
munity?’ (Neumann et al., 2023). However, since ChatGPT had only recently
been released, similar to our own situation, they observed a lack of peer-reviewed
articles addressing this topic at the time of their writing. Therefore, they decided
to conduct a structured grey literature review using Google Scholar to identify
preprints of primary studies. A total of 5 preprints out of 55 were selected by the
researchers for their analysis. Additionally, they engaged in informal discussions
and conversations with lecturers and researchers, as well as examining their own
test results from experimenting with ChatGPT. Through their examination of
these preprints, Neumann et al. identified emerging challenges and opportunities
that demanded attention (2023). The four areas in higher education where they contend these challenges and opportunities apply are teaching, papers,
curricula and regulations. In the context of teaching, Neumann et al. emphasise
the importance of early introduction to foundational concepts, such as pro-
gramming fundamentals, while specifying the appropriate use of ChatGPT
(2023). They highlight the need for transparency, ensuring students are aware of
ChatGPT’s functionalities and limitations. They suggest adapting existing
guidelines or handouts, coordinating among teachers to avoid redundancy, and
integrating ChatGPT into teaching activities. They also recommend that students
practise using the tool for specific use cases, exploring both its possibilities and
limitations. In addition, they note the potential integration of ChatGPT into
modern teaching approaches like problem-based learning or flipped learning.
Furthermore, they propose inviting practitioners to provide insights on inte-
grating ChatGPT into practical work during courses. Overall, their
recommendations aim to foster practice-based learning and enhance transparency. Regarding papers, according to Neumann et al., integrating ChatGPT
into higher education presents challenges in scientific writing, especially in sec-
tions involving existing knowledge (2023). They suggest using a combination of
plagiarism checkers and AI detection tools, with manual examination as a
backup. Thorough reference checks and validation are emphasised as crucial.
They also highlight identifiable characteristics of ChatGPT using GPT-3, such as
referencing non-existent literature, which can assist in detection. They propose the
use of additional oral examinations or documentation as additional measures.
Furthermore, they recommend placing greater emphasis on research design and
results sections to enhance scientific education. According to Neumann et al.,
adjusting a curriculum is a complex process that requires careful consideration of
the impact on other courses and compliance with regulations (2023). However,
they expect substantial discussions among lecturers due to varying opinions on
integrating ChatGPT into lectures. Nonetheless, they believe these discussions
will be valuable as they will provide an opportunity for mutual learning and the
development of solutions to address the challenges that arise. In the context of
regulations, the authors emphasise the importance of evaluating official regula-
tory documents, such as examination regulations (2023). They also highlight the
need to consider various legal aspects, such as copyright and data protection,
when integrating ChatGPT into teaching. To ensure consistency, the authors
recommend re-evaluating existing examination regulations and establishing clear
guidelines for students. Additionally, they stress the significance of thorough
discussions among lecturers within a study programme to identify adoption
opportunities by aligning course objectives, theoretical foundations and exami-
nation types. By addressing these areas, they believe successful integration of
ChatGPT into university teaching can be achieved, leading to reduced uncer-
tainties and a focus on innovative education. Neumann et al. summarise their
study by stating that their findings highlight the transformative impact of
AI-based chatbots like ChatGPT on higher education, particularly in the realm of
scientific writing (2023). However, they acknowledge the presence of several
unresolved questions that require further investigation: Is text generated by
ChatGPT a suspected plagiarism case? How should one reference text generated
by ChatGPT? And what proportion of text generated with ChatGPT in relation to the total scope is acceptable? They also acknowledge that numerous
additional questions are likely to arise. To conclude, they highlight that as edu-
cators shape the future experts, it is important to equip our students with the
essential skills for responsible use of ChatGPT. They, therefore, emphasise the
need to address the integration of AI tools in higher education and acknowledge
their work as a stepping stone towards initiating further inquiries, fostering dis-
cussions and discovering solutions.
Neumann et al.’s (2023) study’s strengths lie in its focus on emerging AI
technology’s impact on higher education and its comprehensive exploration of
various aspects of integration. The researchers incorporated informal discussions
and practical experimentation with ChatGPT to supplement their findings,
enhancing the study’s insights. However, there are notable weaknesses and
limitations to consider. The study’s reliance on preprints and informal discussions may introduce potential biases and limit the generalisability of the findings. The
small sample size of preprints (only 5 out of 55) might not fully represent the
breadth of research on the topic, and the lack of peer-reviewed articles may affect
the study’s credibility. Additionally, the study’s emphasis on software engineering
and scientific writing may not fully capture ChatGPT’s impact on other academic
disciplines. Moreover, the authors acknowledge the presence of unresolved
questions, indicating that certain aspects of ChatGPT’s integration into higher
education remain unaddressed. Furthermore, the study primarily explores the
researchers’ perspectives. Incorporating more diverse perspectives, including
practitioners from different disciplines and institutions, could enhance the validity
and reliability of the findings. In conclusion, Neumann et al.’s study provides
valuable insights into the integration of ChatGPT into higher education, but it
faces limitations related to sample size, representation and focus. Despite these
limitations, the study serves as a starting point for further inquiries and discus-
sions on the responsible and effective use of AI tools in the educational landscape.
We believe Neumann et al.’s (2023) study has important implications for our
research in the following areas:

• Teaching Strategies and Early Introduction
The study emphasises the importance of early introduction to foundational
concepts and transparent use of ChatGPT in teaching. Their recommendations
for coordinating among instructors and integrating ChatGPT into teaching
activities provide insights into how instructors may incorporate AI tools in
their pedagogical approaches.
• Curricular Adaptation and Regulations
The study identifies the need for discussions among lecturers to adjust curricula
and comply with regulations when integrating ChatGPT. This insight informs
our research on how institutions may need to adjust to accommodate AI tools.
• Unresolved Questions and Further Inquiries
The study acknowledges unresolved questions related to ChatGPT’s usage in
academic settings. This serves as a guide for our research in which we may
explore these unanswered aspects, such as how to reference AI-generated text.

Despite the limitations of Neumann et al.’s study, such as the small sample size
and potential biases, their comprehensive exploration of AI technology’s impact
on higher education offers valuable insights. By building upon their work and
conducting our own research, we can contribute to evidence-based practices and
informed discussions on the responsible and effective integration of ChatGPT in
higher education, thereby preparing students for the future AI-driven landscape.

ChatGPT: Bullshit Spewer or the End of Traditional Assessments in Higher Education?
In their 2023 paper titled ‘ChatGPT: Bullshit spewer or the end of traditional
assessments in higher education?’ Rudolph et al. conduct a comprehensive liter-
ature review and experimental analysis of ChatGPT. Their literature review
stands out as one of the pioneering peer-reviewed academic journal articles to
explore the relevance of ChatGPT in higher education to date, specifically in
assessment, learning and teaching. Rudolph et al.’s paper primarily examines the
implications of this technology for higher education and delves into the future of
learning, teaching and assessment in the context of AI chatbots like ChatGPT.
They contextualise ChatGPT within the realm of current research on Artificial
Intelligence in Education (AIEd), discussing its applications for students, teachers
and educational systems. The authors analyse the opportunities and threats posed
by ChatGPT and conclude the article with recommendations targeted towards
students, teachers and higher education institutions, with a particular focus on
assessment. For their research, Rudolph et al. utilised a desktop analysis
approach. They conducted Google Scholar searches, examined reference lists and
explored embedded references in non-academic articles to conduct their literature
review. However, due to the novelty of the topic, once again, similar to our own
situation, they discovered there were only a limited number of relevant scholarly
resources. As of 18 January 2023, the researchers found two peer-reviewed journal
articles and eight preprints on ChatGPT’s application in higher education, with a
particular focus on assessment, learning and teaching. Based on their literature
review, Rudolph et al. (2023) put forward the following implications of ChatGPT
for education.
Rudolph et al. (2023) highlight that AIEd presents a unique opportunity for
exploring diverse tools and applications in educational technology. Drawing from
Baker and Smith’s (2019) framework, they categorise educational contexts into
student-facing, teacher-facing and system-facing dimensions, which they found
valuable in their understanding of AI’s utilisation in education. Regarding
student-facing AI applications, Rudolph et al. emphasise the potential of AI
applications like Intelligent Tutoring Systems (ITS) in personalising student
learning through tailored instruction. They highlight ITS’s ability to simulate
human tutoring and provide personalised assistance in problem-solving. Addi-
tionally, they discuss the possibilities of personalised adaptive learning (PAL)
facilitated by advancements in big data technology and learning analytics. While
acknowledging ChatGPT’s promise in enhancing tasks like language translation
and question answering, they also point out its limitations in deeply compre-
hending subject matter. The authors say they find it ironic that concerns about
AI-powered writing applications exist, as they believe ChatGPT can greatly
benefit teachers in fostering innovative teaching and learning approaches.
Regarding teacher-facing applications, Rudolph et al. (2023) emphasise how
teacher-facing AIEd systems reduce teachers’ workload by automating tasks like
assessment, plagiarism detection and feedback. They also note that AI-powered
applications can provide valuable insights into students’ learning progress,
enabling targeted guidance and support. Their research explores AI-powered assessment methods, including Automated Essay Scoring (AES) systems, which
offer students prompts to revise answers, extending assessment beyond
multiple-choice tests. They conclude that AI-powered essay ratings generally align
with human ratings, with some concerns persisting. Rudolph et al. also highlight
the importance of combining AES with AI-enabled automatic feedback systems
to enhance effectiveness. The adaptive evaluation of the feedback system ensures
appropriate answers based on Bloom’s cognitive levels and can recommend
additional learning resources and challenges to students. They acknowledge the
well-documented effectiveness of AI-powered grading applications for essays.
However, they raise concerns about ChatGPT’s potential disruption in the
emerging subfield of AI-powered applications supporting students’ writing skill
development. Furthermore, they highlight various AI-based writing tools devel-
oped before ChatGPT, aiming to enhance writing skills and facilitate the writing
process through automated feedback and assessment. The authors also emphasise
AI-powered writing applications like Grammarly and Wordtune as valuable
additions to the writing curriculum, noting that Grammarly offers immediate
feedback and revision suggestions, effectively improving writing engagement
through automated corrective feedback, and Wordtune, using natural language
processing (NLP), assists English as a Foreign Language (EFL) students in
formulating ideas and enhancing their writing quality. They note that research
underscores the positive impact of AI-based interventions on students’
self-efficacy and academic emotions in EFL contexts, supporting independent
learning and improvement. They also suggest that ChatGPT should be analysed
within the same category of AIEd tools. Regarding system-facing applications,
Rudolph et al. (2023) point out that system-facing AI-powered applications
receive less attention in the literature compared to student-facing and
teacher-facing applications. Despite this, they emphasise the importance of a
holistic approach when developing strategies to leverage ChatGPT for innovation
in education, taking cues from Microsoft’s incorporation of ChatGPT into its
products. They also mention that, since ChatGPT is a new product in the market,
there is limited empirical research on its implications for education. Therefore,
they suggest a discussion on the opportunities and challenges that ChatGPT may
present for educational practitioners, policymakers and researchers is necessary
(Rudolph et al., 2023).
Rudolph et al. (2023) highlight concerns about ChatGPT threatening the
traditional essay assessment method, noting that instructors worry that students
might outsource their assignments to ChatGPT, generating passable prose
undetected by plagiarism tools. They believe these concerns may partly stem from
instructors’ resistance to adapting to new assessment methods, as written
assignments are sometimes criticised for being ineffective in assessing students’
learning. Rudolph et al. (2023) also express concerns about ChatGPT’s limita-
tions in understanding and evaluating information shared, as it is merely a
text-generating machine. They believe this concern might prompt institutions to
blacklist the AI application. However, with the potential integration of
ChatGPT’s technology into Microsoft products, they suggest a pragmatic
approach to managing the challenges posed by the widespread use of ChatGPT in the future. From their research, Rudolph et al. (2023) note that language models
offer a wide range of beneficial applications for society, such as code and writing
autocompletion, grammar assistance, game narrative generation, improving
search engine responses and answering questions. However, they also acknowl-
edge the potential harmful applications of these models, saying that GPT-3, in
particular, stands out for its enhanced text generation quality and adaptability,
making it challenging to differentiate synthetic text from human-written text.
Thus, they believe this advancement in language models presents both opportu-
nities and risks, and that, in this context, the focus should be on exploring the
potential harms of improved language models, not to imply that the harms
outweigh the benefits, but to encourage research and efforts to address and
mitigate potential risks. Rudolph et al. (2023) summarise by saying that the introduction of disruptive education technologies in the classroom often brings forth
various challenges in teaching and learning. As a result, education practitioners
and policymakers are tasked with managing these situations to ensure that
inadequate pedagogical practices are avoided.
Based on their research, Rudolph et al. (2023) discovered that ChatGPT’s
ability to generate essays presents challenges for educators, but that some are
enthusiastic about the potential for innovation in teaching and learning brought
by this disruptive AI application. They reference literature suggesting that tools
like ChatGPT may become as prevalent in writing as calculators and computers
are in mathematics and science. Additionally, they note that some authors pro-
pose involving students and instructors in shaping and utilising AI tools to sup-
port learning instead of limiting their use. From their research, they also note that
while ChatGPT is often seen as posing a threat to traditional essay assessments,
they believe it also presents an opportunity for educators to introduce innovative
assessment methods. They note that typically, assessments are used by instructors
to evaluate students’ learning, but opine that many instructors may lack the skills
to use assessments for both learning and as a means of learning. They, therefore,
believe that institutions should capitalise on this opportunity to enhance
instructors’ assessment skills and leverage disruptive AI applications like
ChatGPT to enhance students’ learning. Rudolph et al. (2023) highlight another
opportunity for instructors to enhance their teaching strategies by leveraging
ChatGPT. For instance, they suggest adopting a flipped learning approach where
crucial classwork is completed during in-person sessions, allowing more emphasis
on multimedia assignments and oral presentations rather than traditional
assignments. Moreover, they believe this would enable instructors to dedicate
more time to providing feedback and revising students’ work. According to
Rudolph et al. (2023), another significant advantage of ChatGPT is its potential
to facilitate experiential learning. Based on their literature review, they propose
that students should explore various strategies and problem-solving approaches
through game-based learning and other student-centred pedagogies by utilising
ChatGPT. Additionally, they believe that students who prefer hands-on, experi-
ential learning will particularly benefit from using ChatGPT as a learning tool.
According to the authors, ChatGPT can be effectively employed to promote
collaboration and teamwork among participants through appropriate instructional strategies. They propose the incorporation of student-centred learning
activities that can be conducted in groups. For instance, the ChatGPT application
can generate diverse scenarios that encourage students to collaborate in
problem-solving and achieving goals. They believe this will foster a sense of
community, allowing students to learn from and support each other. Therefore,
Rudolph et al. (2023) assert that instead of viewing ChatGPT as a disruptive force
in the teaching and learning process, it should be seen as a significant opportunity
for learning innovators to revolutionise education.
Rudolph et al. (2023) conclude that AI, represented by tools like ChatGPT, is
becoming increasingly mainstream, and its impact on higher education is still
unfolding. While they note there are concerns about the potential implications of
artificial intelligence on employment, they caution against alarmist reporting.
However, they do emphasise the importance of monitoring and engaging with this
rapidly developing space and the need to adjust teaching and assessment
approaches in higher education. Additionally, they highlight that ChatGPT’s
work was not detected by their random testing with anti-plagiarism software,
raising concerns about its potential to evade plagiarism checkers like Gram-
marly’s professional version. Moreover, they observe that ChatGPT can be
utilised to manipulate user input sentences to deceive anti-plagiarism software
from reporting low originality scores. They reflect on the irony that
anti-plagiarism software, which relies on AI, can be bypassed by other AI tools
within seconds. They even point out that GPT-3 is capable of writing a review of a
student’s AI-generated assignment, leaving humans with minimal involvement
and questioning the true value of the learning experience.
Based on their findings, Rudolph et al. (2023) offer general recommendations
for dealing with ChatGPT in higher education. They suggest moving away from a
policing approach that focuses on detecting academic misconduct and instead
advocate for building trusting relationships with students through student-centric
pedagogy and assessments for and as learning. They also emphasise the impor-
tance of constructive alignment, where learning objectives, teaching and assess-
ments are all aligned. Their recommendations for faculty, students and higher
education institutions are as follows. Regarding recommendations for higher
education faculty, Rudolph et al. (2023) suggest exploring alternative assessment
methods, such as physical closed-book exams with pen and paper or online exams
with proctoring/surveillance software, but also caution against over-reliance on
such traditional assessments. To combat the use of text generators like ChatGPT,
they propose designing writing assignments that these AI systems are currently
not proficient at handling, focusing on specific and niche topics, personal expe-
riences and original arguments. The authors acknowledge that ChatGPT’s cur-
rent limitation is its lack of in-text referencing and reference lists, but anticipate
the emergence of tools like WebGPT that can access web browsing for improved
information retrieval. They also highlight the availability of text generator
detection software to address academic integrity concerns. The authors encourage
faculty to foster creative and critical thinking in assessments, utilising authentic
assessments and embracing students’ interests and voices. They believe that
involving students in peer evaluations and ’teach-back’ exercises can further enhance learning experiences. The authors also emphasise the importance of
creating an atmosphere where students are actively engaged in their learning and
demonstrate the value of human writing while incorporating AI tools responsibly
to nurture creativity and critical thinking (Rudolph et al., 2023). Regarding
recommendations for students, Rudolph et al. (2023) note that, as digital natives,
students often possess an inherent familiarity with technology, giving them a
unique advantage in incorporating AI into their academic journey. Therefore, the
authors stress the importance of understanding academic integrity policies and the
potential consequences of academic misconduct. They believe that to fully harness
the potential of AI, students should be encouraged to enhance their digital literacy
and master AI tools like ChatGPT, as this proficiency could significantly boost
their employability in the modern job market. However, they also caution against
using AI as a mere shortcut for assignments, advocating instead for its use as a
valuable set of tools to improve writing skills and generate original ideas. They
also warn that it is essential to avoid plagiarism and prioritise high-quality
sources, as well as remain vigilant against misinformation and disinformation
when conducting research. To foster critical thinking, Rudolph et al. (2023)
suggest that we should urge students to read widely, broadening their perspectives
and enhancing their creative abilities. Furthermore, they suggest that students
should be encouraged to explore the application of AI language tools like
ChatGPT to write and debug code, providing additional opportunities for skill
development. Ultimately, Rudolph et al. encourage students to actively practise
using AI language tools to address real-world challenges and expand their
problem-solving capabilities, and believe that by embracing AI responsibly and
thoughtfully, students can seize its transformative potential and propel their
educational journey to new heights (Rudolph et al., 2023). Regarding recom-
mendations for higher education institutions, Rudolph et al. (2023) emphasise the
critical importance of digital literacy education, advocating for the inclusion of AI
tools like ChatGPT in the curriculum. They suggest that to equip faculty with the
necessary skills, training on AI tools, particularly ChatGPT, is essential. Simul-
taneously, they recommend that students should also receive training on academic
integrity to promote responsible AI tool usage. Furthermore, they recommend
that curricula and courses should be thoughtfully designed to be meaningful and
relevant to students, reducing the likelihood of resorting to cheating. They believe
that, to address the use of AI tools, institutions should update academic integrity
policies and develop clear, easy-to-understand guidelines, and that these guide-
lines should define appropriate usage and outline the consequences for cheating.
They also highlight that encouraging research on the effects of AI tools on
learning and teaching is crucial to better understand their impact and foster
informed decision-making. They believe that by adopting these recommendations,
higher education institutions can navigate the evolving landscape of AI tools,
creating an environment that supports responsible and innovative learning
practices (Rudolph et al., 2023).
Rudolph et al.’s (2023) study explores ChatGPT’s relevance in higher educa-
tion, particularly in assessment, learning and teaching. However, the study has
some limitations. The scarcity of scholarly resources on ChatGPT’s application in higher education may have affected the depth of analysis. Additionally, relying on
desktop analysis and lacking empirical evidence could impact the study’s validity.
Furthermore, a more comparative analysis of other AI writing tools’ implications
would provide a broader understanding of AI’s impact on education. The study’s
positive view of AI’s benefits may introduce bias, and a balanced assessment of
risks and benefits would enhance objectivity.
The implications of Rudolph et al.’s (2023) study for our research on how
ChatGPT may affect the role of students, instructors and institutions of higher
education are as follows:

• Understanding AI in Education
Rudolph et al.’s study provides valuable insights into the applications and
implications of AI, specifically ChatGPT, in higher education. It offers a
framework categorising AI applications into student-facing, teacher-facing and
system-facing dimensions, enabling a comprehensive understanding of AI’s
role in education. This is a framework that we can draw upon in our research to
help us shed light on the diverse ways AI tools like ChatGPT can impact
various educational contexts.
• Innovative Assessment Methods
The study highlights concerns about ChatGPT’s potential to disrupt traditional
assessment methods, such as essays and online exams. As we investigate the
impact of AI tools on assessment practices, Rudolph et al.’s findings can guide
our exploration of innovative assessment approaches that leverage AI while
addressing the challenges posed by text-generating AI applications.
• Opportunities for Personalised Learning
Rudolph et al. emphasise the potential of AI tools, including ChatGPT, in
personalising and adapting student learning experiences. This insight can
inform our research on how AI can be utilised to tailor instruction, provide
feedback and support student-centred pedagogies that foster individualised
learning paths.
• Leveraging AI for Instructor Support
The study discusses how AI can reduce teacher workload and enhance class-
room innovation through automated tasks like assessment and feedback. Our
research can explore how AI tools like ChatGPT can complement instructors’
efforts, allowing them to focus more on guiding students and providing per-
sonalised support.
• Addressing Ethical Concerns
Rudolph et al.’s study acknowledges concerns about academic integrity and the
potential misuse of AI tools like ChatGPT for plagiarism. As we investigate the
ethical implications of AI integration in education, their findings can help us
examine strategies to promote responsible AI use and combat academic
misconduct effectively.
• Promoting Digital Literacy
The study emphasises the importance of digital literacy education for students
and faculty. We can incorporate this insight into our research by exploring how
educational institutions can integrate AI tools like ChatGPT into the curric-
ulum while educating users on its responsible and effective use.
• Collaborative Learning Opportunities
Rudolph et al. discuss the potential of AI tools, like ChatGPT, to promote
collaboration and teamwork among students. Our research can investigate how
these tools can be integrated into group learning activities to foster a sense of
community and mutual support.
• Monitoring and Engagement
The study emphasises the need for ongoing monitoring and engagement with
AI technologies in higher education. Our research can contribute to the
ongoing discussion by examining how institutions can stay informed about AI
advancements and adapt their teaching and assessment approaches
accordingly.

Rudolph et al.'s (2023) study provides valuable insights into ChatGPT's implications in higher education, highlighting the transformative impact of AIEd
tools on teaching and learning. Our research can build on these findings to
understand how ChatGPT shapes student, instructor and institutional roles.
However, limitations, such as scarce literature and lack of empirical evidence,
warrant further exploration.

User Case Study Papers on ChatGPT


What if the Devil Is My Guardian Angel: ChatGPT as a Case Study of Using
Chatbots in Education
Due to the global attention that ChatGPT has garnered, in their paper 'What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education', Tlili et al. (2023) ask 'What are the concerns of using
chatbots, specifically ChatGPT, in education?’ To address this question, they
performed a qualitative instrumental case study to explore the utilisation of
ChatGPT in education among early adopters. They accomplished this by ana-
lysing three types of data: social network analysis of tweets, content analysis of
interviews and examination of user experiences. Tlili et al. (2023) employed social
network analysis of tweets to examine public discourse on ChatGPT’s use in
education. They collected 2,330 tweets from 1,530 users between 23 December 2022 and 6 January 2023, using the search string '#ChatGPT* AND (education OR teaching OR learning)'. The researchers conducted sentiment and t-SNE
analysis on the tweets. For the interviews, they selected diverse participants with varying familiarity with chatbots (average rating: 3.02) and from different backgrounds, including educators, developers, students and AI freelancers. Content
analysis was performed on the interviews using a coding scheme. Additionally,
they conducted hands-on experiences with ChatGPT involving three experienced
educators, exploring various teaching scenarios and concerns (Tlili et al., 2023).
Regarding the social network analysis of tweets, Tlili et al. (2023) found that the
community formation around ChatGPT is fragmented, with individuals seeking
information and discussion about its limitations and promises. The most used
word pairs provide interesting insights, with some suggesting how to use
AI-powered ChatGPT in education, while others hint at the turning point in
educational systems. The researchers concluded that the public’s view on
ChatGPT is diverse, with no collective consensus on whether it is a hype or a
future opportunity. While positive sentiments (5%) outweighed negative senti-
ments (2.5%), the majority of sentiments (92.5%) were non-categorised, indicating
uncertainty about ChatGPT in education. Word cluster analysis revealed users’
optimism about using AI-powered chatbots in education. However, there were
also critical insights and concerns expressed, such as cheating and ethical impli-
cations. The researchers emphasise the need to examine the underlying AI tech-
nologies, like machine learning and deep learning, behind ChatGPT. Despite the
optimistic overview, concerns about its use in education were also observed. The
study concludes that negative sentiments demonstrated deeper and more critical
thinking, suggesting caution in approaching ChatGPT’s integration into educa-
tion (Tlili et al., 2023). Regarding the content analysis of interviews conducted by
Tlili et al. (2023), the findings highlighted users’ positive perceptions of
ChatGPT’s significance in revolutionising education. Participants acknowledged
its effectiveness in enhancing educational success by providing foundational
knowledge and simplifying complex topics. This potential led the researchers to
believe in a paradigm shift in instructional methods and learning reform. How-
ever, a minority of participants expressed concerns about learners becoming
overly reliant on ChatGPT, potentially hindering their creativity and critical
thinking abilities. Regarding the quality of responses provided by chatbots in
education, Tlili et al.’s (2023) study revealed that participants generally found the
dialogue quality and accuracy of information from ChatGPT to be satisfactory.
However, they also noted occasional errors, limited information and instances of
misleading responses, suggesting room for improvement. In terms of user expe-
rience, many participants in Tlili et al.’s (2023) study were impressed by the fluid
and exciting conversations with ChatGPT. However, they also pointed out that
ChatGPT’s humaneness needs improvement, particularly in terms of enhancing
its social role since it currently lacks the ability to detect physical cues or motions
of users. The study also showed that users perceived ChatGPT as a valuable tool
for diverse disciplines, reducing teachers’ workload and providing students with
immediate feedback. However, some users reported challenges with response
accuracy, contradictions, limited contextual information and a desire for addi-
tional functionalities. On an ethical front, participants raised concerns about
ChatGPT encouraging plagiarism and cheating, fostering laziness among users,
and potentially providing biased or fake information. The study also highlighted
worries about ChatGPT’s impact on students’ critical thinking and issues related
to privacy through repetitive interactions.
Regarding the investigation of user experiences, after daily meetings in which the educators compared the various results they had obtained using ChatGPT, they identified ten scenarios in which educational concerns were present. These are as follows. The educators observed ChatGPT's ability to
aid students in writing essays and answering exam questions, raising concerns
about potential cheating and the effectiveness of cheating detection in education
using chatbots (Tlili et al., 2023). Educators also recognised chatbots’ proficiency
in generating learning content but emphasised the need for content accuracy and
reliability, questioning how to ensure content quality and verification for
chatbot-generated content, including ChatGPT (Tlili et al., 2023). The educators
each initiated a new ChatGPT chat using the same prompt. However, they all
received different responses with varying answer quality, highlighting concerns
about equitable access to high-quality learning content (Tlili et al., 2023).
ChatGPT-generated quizzes varied in difficulty, leading to questions about the
appropriateness of these learning assessments (Tlili et al., 2023). The educators
stressed the importance of well-designed learning assessments for student under-
standing and problem-solving but found inconsistencies in chatbot-generated
quizzes that could complicate teachers’ responsibilities (Tlili et al., 2023). They
noted that users’ interaction styles influenced the level of learning assistance
received from ChatGPT, raising questions about users’ competencies and
thinking styles to maximise its potential (Tlili et al., 2023). The educators
emphasised the need to humanise chatbots, including the ability to express
emotions and have a personality, to encourage reflective engagement in students
(Tlili et al., 2023). The educators observed ChatGPT occasionally providing
incomplete answers, raising concerns about its impact on user behaviour, espe-
cially among young learners who might use it as an excuse for incomplete tasks or
assignments (Tlili et al., 2023). The educators stressed the importance of exploring
potential adverse effects on users (Tlili et al., 2023). They also highlighted concerns about data storage and usage, noting that ChatGPT denies storing conversation data, and emphasised the need to safeguard user privacy, particularly for young
individuals (Tlili et al., 2023). During an interaction with ChatGPT, one educator's request for a blog reference in American Psychological Association (APA) format led to intriguingly inaccurate information, raising questions about ensuring reliable
responses from ChatGPT to prevent harm or manipulation (Tlili et al., 2023).
In their discussion, Tlili et al. (2023) express their belief that their findings
demonstrate the potential of ChatGPT to bring about transformative changes in
education. However, despite acknowledging its potential, they also raise several
concerns regarding the utilisation of ChatGPT in educational settings. The
authors acknowledge that while some institutions have banned ChatGPT in
education due to concerns about cheating and manipulation, they propose a
responsible adoption approach. This approach involves guidelines and interdisciplinary discussions with experts from education, security and psychology.
They note that, despite drawbacks, recent studies indicate educational opportu-
nities in ChatGPT that can enhance learning and instruction, prompting a need
for further research on the consequences of excessive reliance on chatbot tech-
nology in education. Highlighting the transformative impact of technology in
education, the authors also emphasise ChatGPT’s potential to simplify essay
writing and introduce innovative teaching methods like oral debates for assessing
critical thinking. They advocate for diverse assessment approaches and the
reformation of traditional classrooms, along with exploring the balance between
chatbots and human interaction, including collaborative potential with human
tutors. Consequently, they call for further research to investigate how chatbots
can enhance learning outcomes and promote effective human–machine collaboration in education. Regarding user experiences, Tlili et al. (2023) reveal variations in output quality based on question wording, stressing the importance of
learning how to obtain the most useful output for learning. They note that while
ChatGPT doesn’t demand extensive technical skills, critical thinking and ques-
tioning abilities are essential for optimal results. As a solution, they suggest
further research on necessary competencies and their development for effective
chatbot use, including ChatGPT. While ChatGPT shows partial humanisation,
the authors highlight limitations in reflective thinking and emotional expression,
which they believe may affect its effectiveness in education. Thus, they call for
research on developing more humanised chatbots in education, drawing from
relationship formation theories and exploring the impact of human–chatbot
relationships on student learning outcomes. However, they express concerns
about treating ChatGPT as a human, citing instances where it was listed as a
co-author in academic articles, raising ethical, regulatory, originality, authorship
and copyright questions. To ensure responsible design, the authors emphasise the
need to consider inclusion, ethics and usability when implementing chatbots in
education. They highlight instances where ChatGPT exhibited harmful behav-
iour, stressing the importance of responsible AI design that addresses biases,
fairness and transparency. The authors advocate for user-centred design princi-
ples, taking into account social, emotional and pedagogical aspects. They
recommend future research should focus on designing responsible chatbots
aligned with human values and legal frameworks for safe use in education.
Tlili et al.’s (2023) study offers a comprehensive understanding of public
discourse on ChatGPT in education through diverse data sources like tweets,
interviews and user experiences. However, its small sample size and limited time
frame for data collection raise concerns about generalisability and capturing
evolving opinions. Content analysis may also introduce subjective interpretations
and biases. Despite limitations, the study highlights the need for responsible
implementation and guidelines in educational settings. It underscores the
importance of adapting teaching practices and exploring the impact of human–chatbot relationships on learning outcomes. Future research should focus on developing user competencies and on more humanised and responsible chatbots for education. In addition, continuous research is crucial to maximise
chatbots’ potential while addressing concerns in educational contexts.
We believe the implications of Tlili et al.’s (2023) study for our research are as
follows:

• Comprehensive Understanding
Tlili et al.’s study provides a holistic view of public discourse and opinions on
ChatGPT in education through the analysis of tweets, interviews and user
experiences. This comprehensive understanding can help inform our research
on how different stakeholders perceive and interact with ChatGPT in educational settings.
• Potential Benefits and Concerns
The study highlights both the potential benefits and concerns associated with
the use of ChatGPT in education. As we investigate the impact on students,
instructors and institutions, it’s essential to consider these aspects to develop a
balanced perspective of the technology’s implications.
• Responsible Implementation
Tlili et al. suggest that, rather than banning ChatGPT, responsible implementation should be emphasised. This implies the need for guidelines, policies
and interdisciplinary discussions involving experts from education, security and
psychology to ensure ethical and transparent use of ChatGPT in educational
contexts. This is an aspect that we look to investigate in our research.
• Adaptation of Teaching Practices
The study points out the transformative impact of technology in education,
requiring educators to adapt their practices. As we examine the role of
instructors, it’s important to consider how ChatGPT may influence instruc-
tional delivery and assessment methods, and how educators can effectively
incorporate chatbots into their teaching philosophies.
• Ensuring Fairness and Equity
Tlili et al.'s findings raise concerns about fairness and equal access to the educational content provided by ChatGPT, which is relevant to our research as we investigate how to provide equitable access to bots for all students.
• Enhancing User Competencies
The study highlights that effective use of ChatGPT requires critical thinking
and question-asking skills. Our research can explore how students and
instructors can develop the necessary competencies to optimally interact with
chatbots and leverage their potential for enhanced learning experiences.

These implications can serve as valuable insights and guiding points for our
research on the impact of ChatGPT on the role of students, instructors and
institutions of higher education. By considering the potential benefits, challenges
and responsible use of the technology, we can develop a comprehensive and
balanced understanding of its implications in the educational landscape.

Exploring the Usage of ChatGPT in Higher Education: Frequency and Impact on Productivity
In the second instructor user experience paper, Firaina and Sulisworo (2023)
conducted a study titled ‘Exploring the Usage of ChatGPT in Higher Education:
Frequency and Impact on Productivity’, with the aim of gaining insights into
lecturers’ perspectives and decision-making processes regarding the adoption of
ChatGPT in learning. To do this, they interviewed five lecturers to gather their
experiences and viewpoints on ChatGPT, collected and analysed the data and
interpreted it to deepen their understanding of the effects of using ChatGPT on
learning and the factors influencing lecturers’ choices. Their research aimed to
identify challenges, needs and expectations of lecturers related to using ChatGPT
for improving learning outcomes and provide recommendations for technology
developers and education decision-makers. They intended their findings to offer valuable insights for lecturers to enhance the effectiveness of learning and to inform decision-making in the education field.
ChatGPT usage reported in the interviews, Firaina and Sulisworo (2023) found
that most respondents preferred using it frequently and found it helpful for
obtaining new ideas in everyday learning. However, they also acknowledged the
need for additional tools in certain cases. The authors concluded that ChatGPT
serves as a communication channel between respondents and the information
needed for learning, functioning as a medium for accessing new information and
ideas. They believe this aligns with the constructivist learning theory, where
individuals actively construct knowledge based on experiences. Furthermore, the
authors observed that ChatGPT assists respondents in constructing new knowl-
edge by providing access to fresh information and ideas, akin to a social media
platform for learning. Thus, they emphasise the active role of individuals in
constructing knowledge through experiences, reflection and interpretation. They
note that, in the case of the respondents, ChatGPT was utilised as a source of
information and ideas to facilitate the development of new knowledge and skills
in the learning process. Based on the conducted interviews, the authors also
discovered that using ChatGPT has a positive impact on productivity and
learning effectiveness. They report how one lecturer highlighted how ChatGPT
facilitated a quicker understanding of the material and saved time in searching for
learning resources. However, the authors acknowledge the importance of con-
ducting further research to ensure accurate responses from ChatGPT. They also
report that another lecturer mentioned increased productivity, completing tasks
more quickly and saving time in knowledge resource searches. Nonetheless, the
authors emphasise the need for a clear understanding of the overall work to align
with intended goals. From their findings, the authors connect the use of ChatGPT
to communication theory, specifically symbolic interaction theory, which explains
how humans communicate using symbolic signs and attribute meaning to them.
They also draw upon media theory, particularly the theory of new media, which
considers media as a social environment influencing interactions and information
acquisition. Additionally, the authors suggest that the use of ChatGPT aligns with
constructivist theory in learning, emphasising the process of knowledge con-
struction by learners through experience and reflection (Firaina & Sulisworo,
2023). In addition, the authors found that ChatGPT can be a valuable tool for
supporting various aspects of a lecturer’s work. However, they note that the
ability to select relevant commands was crucial in determining the usefulness of
the obtained information. Furthermore, they report that the utilisation of
ChatGPT was observed to be beneficial in several learning aspects for the
respondents. This is because, firstly, it assisted them in translating scientific
articles into English, which helped overcome their English proficiency limitations.
And secondly, ChatGPT aided the respondents in searching for up-to-date ideas
in learning that catered to their specific needs. The authors give the example
whereby instructors could request suggestions on teaching with a constructivist
teaching and learning approach and receive multiple alternative recommenda-
tions. In conclusion, Firaina and Sulisworo (2023) found that despite its limita-
tions, respondents recognised the benefits of using ChatGPT to enhance
productivity and efficiency in learning. Consequently, they consider ChatGPT to
be an intriguing alternative in education, emphasising the importance of main-
taining a critical approach and verifying the information obtained. The authors
suggest that further research, including additional interviews and case studies, is
necessary to obtain a more comprehensive understanding of the use of ChatGPT
in learning, as this would help to deepen knowledge and insights regarding its
implementation and potential impact (Firaina & Sulisworo, 2023).
Firaina and Sulisworo’s (2023) qualitative study stands out due to its in-depth
interviews with five lecturers, providing rich insights into their experiences with
ChatGPT in education. The researchers effectively connected their findings with
educational theories, such as constructivist and communication theories,
enhancing the credibility of their conclusions. The study highlights practical
implications for lecturers and educational decision-makers, suggesting that
ChatGPT positively impacts productivity and learning effectiveness. However,
some limitations, like the small sample size and lack of a comparison group,
should be considered when interpreting the results. Future research with larger
and more diverse samples, along with comparative studies, can further explore the
benefits and challenges of using AI-powered chatbots like ChatGPT in educa-
tional settings.
Firaina and Sulisworo’s (2023) study has several implications for our research
on how ChatGPT affects the role of students, instructors and institutions in higher
education:

• Faculty Perspectives
The in-depth interviews conducted by Firaina and Sulisworo provide valuable
insights into how instructors perceive and utilise ChatGPT in their teaching
and learning processes. Understanding faculty perspectives can help inform our
study on how instructors perceive the integration of AI chatbots in educational
practices and the factors influencing their decision-making.
• Impact on Productivity
The findings from Firaina and Sulisworo’s study suggest that ChatGPT posi-
tively impacts productivity and efficiency for instructors. This insight may serve
as a basis for investigating how the adoption of AI chatbots in higher education
can enhance instructors’ efficiency in tasks such as lesson planning, content
creation and resource searching.
• Practical Implications
The practical implications highlighted by Firaina and Sulisworo’s study can
inform our research on the potential benefits and challenges of integrating AI
chatbots in higher education. Understanding how instructors navigate the use
of ChatGPT can offer insights into best practices and strategies for effectively
integrating AI chatbots in educational settings.

Overall, Firaina and Sulisworo’s study serves as a valuable reference for our
research, offering insights into how instructors perceive and utilise ChatGPT in
higher education. By incorporating their findings and considering the study’s
implications, we can strengthen the theoretical foundation and practical relevance
of our research on the effects of AI chatbots on students, instructors and insti-
tutions in the higher education context. From looking at instructor user experi-
ences, we now turn to researcher user experiences.

Exploring the Role of Artificial Intelligence in Enhancing Academic Performance: A Case Study of ChatGPT
Alshater’s 2022 study, titled ‘Exploring the Role of Artificial Intelligence in
Enhancing Academic Performance: A Case Study of ChatGPT’, aims to inves-
tigate the potential of AI, specifically NLP, in improving academic performance,
using the field of economics and finance as an illustrative example. The study
adopts a case study methodology, using ChatGPT as a specific NLP tool to
illustrate its potential for advancing research in this domain. By examining the
application of ChatGPT in economics and finance research, Alshater explores its
capabilities, benefits and limitations. His study also addresses the ethical con-
siderations and potential biases associated with using ChatGPT and similar
technologies in academic research, while also discussing future developments and
implications. Through this case study approach, Alshater endeavours to offer
valuable insights and guidance to researchers seeking to incorporate AI into their
scholarly pursuits.
Alshater’s findings revealed that the utilisation of ChatGPT and other
sophisticated chatbots in research can have various implications, encompassing
both advantages and disadvantages. According to Alshater (2022), the use of
ChatGPT and other advanced chatbots in research offers numerous benefits.
These include enhanced research efficiency through task automation, such as data
extraction and analysis from financial documents, and the generation of reports
and research summaries. Additionally, ChatGPT can contribute to improved
research accuracy by detecting errors in data or analysis, ensuring greater reli-
ability of findings. Moreover, the flexibility of ChatGPT enables researchers to
address a wide range of research questions, generating realistic scenarios for
financial modelling and simulating complex economic systems. The
time-consuming tasks that typically require significant human effort, such as the analysis of large volumes of data or report generation, can be expedited through ChatGPT's automation. Furthermore, Alshater argues that ChatGPT and similar
advanced chatbots can provide more objective insights by eliminating personal
biases and subjective judgement and by identifying patterns or trends in financial
data not immediately apparent to humans. Alshater also notes that these tech-
nologies can ensure greater consistency in research processes by following
standardised procedures and protocols, aiding in conducting data analysis in a
consistent and reproducible manner. In Alshater’s study, he not only explores the
benefits of ChatGPT and advanced chatbots but also highlights their limitations.
One significant factor influencing their effectiveness is the quality and relevance of
their training data, as inadequate or biased data can hamper their performance.
Moreover, Alshater points out that these chatbots may lack expertise in speci-
alised fields like economics and finance, affecting their ability to accurately
analyse data and interpret findings. The ethical implications of using chatbots in
research are also a concern raised by Alshater. He discusses the potential
displacement of human labour and the perpetuation of biases present in the
training data, urging researchers to consider these ethical issues carefully. Addi-
tionally, he warns of the risk of chatbots being misused for unethical purposes,
such as generating spam or impersonating others, emphasising the need for vigilance and preventive measures. Alshater notes that, as technology advances, the capabilities of chatbots evolve, meaning researchers must adapt their methods and approaches accordingly. However, he also acknowledges that
chatbots, including ChatGPT, may occasionally generate repetitive or irrelevant
responses due to their lack of contextual understanding, necessitating caution
when using them in research. Alshater’s study also delves into the ethical con-
siderations and potential biases related to utilising ChatGPT and similar tech-
nologies in academic research. He highlights the crucial role of extensive training
data in these technologies and the potential for biases or inaccuracies in the
generated output. He gives the example whereby if the training data predomi-
nantly represents specific demographics or cultural backgrounds, it may lead to
biased results or reinforce existing stereotypes. To address this, Alshater
emphasises the need for careful evaluation and understanding of biases within the
training data and proactive measures to mitigate them, ensuring fairness and
impartiality in the model’s outcomes. Additionally, Alshater sheds light on the
intricate algorithms and processes involved in these technologies, which may not
always be fully transparent or understood by users. This lack of transparency can
pose challenges in holding the technologies accountable for potential biases or
errors that may arise. Thus, he underscores the importance of prioritising trans-
parency in the functioning of these technologies, allowing for scrutiny and
ensuring fairness and impartiality in their operations. Moreover, Alshater
emphasises the significant role of human oversight and intervention when using
these technologies, noting that, as ChatGPT and similar technologies are not fully
autonomous, careful consideration of the roles and responsibilities of humans in
their implementation is essential. This includes the ability to intervene and address
any errors or biases that may occur, ensuring the technologies are used respon-
sibly. Alshater raises legitimate concerns about privacy and data protection when
incorporating technologies into academic research, as personal data collection
and processing may be necessary. Therefore, he emphasises the importance of
implementing suitable measures to safeguard individuals’ privacy, preventing
unauthorised access or misuse of their data and upholding ethical standards in
68 The Impact of ChatGPT on Higher Education

research practices. While acknowledging the potential benefits of these technologies in specific research tasks, such as data analysis, Alshater cautions against
overreliance and the complete replacement of human judgement or interpretation.
He advocates for a balanced approach that leverages the strengths of these
technologies while respecting the significance of human expertise in research.

Overall, Alshater (2022) believes that ChatGPT, as an advanced and versatile NLP tool, has the potential to bring about a revolutionary impact on academic research. He expresses his belief that the tool’s impressive capabilities in generating
human-like text, analysing data and simulating scenarios make it an invaluable
asset for researchers in various fields. However, he also highlights the importance
of considering limitations such as generalisability, data quality and domain
expertise when utilising ChatGPT and similar tools. Despite these limitations, he
asserts that the potential benefits outweigh the drawbacks. Alshater concludes by
emphasising how these technologies empower researchers to efficiently process
and analyse vast amounts of data, create realistic scenarios for theory testing and
effectively communicate their findings. He expresses his belief that these capa-
bilities hold great promise in advancing research across diverse fields and driving
transformative discoveries and insights that enhance our understanding of the
world.

Alshater’s (2022) study delves into ChatGPT’s advantages, including enhanced
productivity, improved research accuracy, flexibility in research questions,
accelerated speed, objectivity and consistency. The research also acknowledges
weaknesses, like the reliance on training data quality and limited domain
knowledge. Ethical considerations are addressed, including algorithmic bias and
technology misuse. However, the small sample size and focus on economics and
finance may limit generalisability. Therefore, we believe future research should
explore other disciplines and employ larger and more diverse samples. Alshater’s
approach to ethics is commendable, but challenges persist in ensuring complete
fairness in AI systems. By recognising limitations and focusing on responsible
practices, we believe researchers can leverage AI’s potential for academic
advancement. Continuous vigilance and improvement are also essential for the
ethical integration of AI in academia.

In light of Alshater’s (2022) study, several implications arise that are directly
relevant to our research investigating the impact of ChatGPT on the role of
students, instructors and institutions in higher education.

• Enhancing Student Learning Experiences


Alshater’s study highlights the potential benefits of ChatGPT for students,
particularly in terms of enhancing their learning experiences. By automating
certain tasks and providing immediate access to information and research
summaries, ChatGPT can offer students more opportunities to focus on deeper
learning activities, positively influencing their overall educational journey. This
is something we look to investigate.
Exploring ChatGPT’s Role 69

• Empowering Instructors with Advanced Teaching and Research Tools


The study emphasises how ChatGPT can be a valuable tool for instructors to
improve their teaching and research practices. Through automated data
analysis and report generation, instructors can streamline research processes
and discover new insights, leading to more effective teaching approaches and
enriching the classroom experience. This is an area we aim to explore further.
• Addressing Data Quality and Biases for Ethical Use
The study underscores the importance of data quality and ethical consider-
ations when utilising ChatGPT. As we delve into its impact on higher educa-
tion, it is crucial to be mindful of potential biases and ensure the responsible use
of the technology, safeguarding against discriminatory consequences. This is an
area we plan to investigate.
• Extending Research Scope for Comprehensive Insights
Alshater’s research is primarily focused on economics and finance, but it
encourages us to extend our investigation to other academic disciplines. By
conducting in-depth studies with diverse samples, we can gain comprehensive
insights into how ChatGPT influences various areas of higher education. Our
exploratory case study into a different discipline will help to add to Alshater’s
insights.

Acknowledging and applying the insights from Alshater’s study may help us to
navigate the transformative landscape of ChatGPT in higher education respon-
sibly, paving the way for a more efficient, inclusive and ethically sound academic
environment. Moreover, the implications he presents serve as valuable guidance
for shaping our own research.

ChatGPT User Experience: Implications for Education


In Zhai’s 2022 paper titled ‘ChatGPT User Experience: Implications for Education’, the second of our researcher user-experience papers, the author aims to explore the as-yet-unknown potential impacts of ChatGPT on education. Recognising the sig-
nificant capacity of ChatGPT, the study acknowledges the potential to bring
about substantial changes in educational learning goals, learning activities and
assessment and evaluation practices. Zhai conducted a study involving the use of
ChatGPT to draft an academic paper titled ‘Artificial Intelligence for Education’,
noting that this particular task was selected due to its highly intellectual nature,
typically performed by professionals. According to Zhai, the objective of piloting
ChatGPT in this manner was to assess its ability to generate accurate, organised,
coherent and insightful writing. Zhai reports that the text in the paper was directly
generated by ChatGPT, with the author’s contribution limited to adding subtitles
and making minor adjustments for logical organisation.

To conduct the pilot with
ChatGPT, Zhai utilised a predefined set of queries to compose the paper,
developed through interactive trials and engagements with ChatGPT. Initially,
Zhai prompted ChatGPT to generate the introduction for a scholarly paper
focusing on the utilisation of AI in education; as a response, ChatGPT introduced
the background information on AI for Education and narrowed down the paper’s
scope. Based on this scope, Zhai identified the structure of the paper, which
encompassed two main sections: the potential and challenges of AI for Education,
as well as future research directions. To delve into the potential section, Zhai
reports querying ChatGPT about the history of AI for Education, noting that in
response, ChatGPT provided three paragraphs that chronologically detailed the
history of AI in education, starting from the 1960s up to the present day. The
author reports that this description was comprehensive, including relevant
examples and notable milestones in the development of AI for Education. The
author also reports that within the aforementioned writing, ChatGPT provided
detailed descriptions of three specific applications of AI in education: personalised
learning, automating administrative tasks and tutoring and mentorship. In order
to delve deeper into these applications, Zhai posed separate queries regarding the
use cases for each application. As a result, each query yielded a comprehensive
definition of the application, a list of typical uses and a concise summary; for
instance, when inquiring about personalised learning, ChatGPT offered Zhai a
definition along with a comprehensive list of use cases as an illustrative example.
To delve even deeper into the use cases, Zhai conducted additional queries on the
history and potential of each aspect of personalised learning. This investigation
led to the identification of four specific uses: adaptive learning, personalised
recommendation, individualised instruction and early identification of learning
needs. Zhai reports that for each of these use cases, the results provided by
ChatGPT encompassed the definition, historical background, evidence of
potential and a concise summary. Zhai also conducted queries on automating
administrative tasks in education, after which ChatGPT provided the definition,
description, five use cases and a summary. From this, Zhai proceeded to query the
history and potential of the five use cases associated with automating adminis-
trative tasks in education, stating that the results yielded a comprehensive
description of the following: enrolment and registration, student record man-
agement, grading and assessment, course scheduling and financial aid.

For the second aspect of the study, Zhai explored the challenges associated with imple-
menting AI in the classroom. Through queries posed to ChatGPT, the author
obtained a direct list of challenges, which encompassed ethical concerns, tech-
nological limitations, teacher buy-in, student engagement and integration with
existing systems. Seeking a deeper understanding of these challenges, Zhai pro-
ceeded to query each specific challenge and potential solutions associated with
them.

In the third part of the study, Zhai explored the future prospects of AI in
education. Through queries directed at ChatGPT, the author obtained five
potential developments. These included the increased utilisation of AI for per-
sonalised learning, the development of AI-powered educational games and sim-
ulations, the expanded use of AI for tutoring and mentorship, the automation of
administrative tasks through AI and the creation of AI-powered education
platforms.

In the final stage, Zhai requested that ChatGPT compose the conclusion of
an academic paper that discussed the role of AI in driving innovation and
improvement in education. The author reports that the conclusion began by
reiterating the potential of AI in transforming education positively and that,
additionally, it emphasised the need to acknowledge and address the limitations
of AI, highlighting ethical, technological and other challenges associated with its
implementation in education. Zhai reports that the conclusion urged the imple-
mentation of appropriate measures to ensure the ethical and effective use of AI in
the education system.

Zhai (2022) describes the findings as follows. During the piloting process, the
author followed the scope suggested by ChatGPT and used subsequent queries to
delve deeper into the study. Zhai notes that the entire process, including gener-
ating and testing queries, adding subtitles, reviewing and organising the content,
was completed within 2–3 hours with minimal human intervention. Zhai also
observes that the writing generated by ChatGPT exhibited four key characteris-
tics: coherence, partial accuracy, informativeness and systematicity. Furthermore,
for each query, Zhai reports that the responses encompassed essential information
and maintained a smooth flow between paragraphs. By changing the topic while
addressing the same aspects, Zhai found that the responses followed an identical
format: ChatGPT would introduce the topic, provide a brief historical overview,
present evidence of potentials and limitations and conclude with a summary of the
topic. Zhai also reports that, interestingly, even with slight variations in wording, ChatGPT consistently produced the same results, which he believes indicates its ability to address queries expressed in different forms. Through this process, Zhai
acknowledges that ChatGPT demonstrates a remarkable capacity to organise and
compose components of articles effectively.

Zhai’s (2022) study provides valuable insights into the use of ChatGPT in
education. Firstly, Zhai suggests that educators should reassess literacy require-
ments in education based on ChatGPT’s capabilities. The study acknowledges the
efficient information processing capabilities of computers and the impressive
writing proficiency of AI, surpassing that of the average student. Zhai believes this
finding prompts the consideration of whether students should develop the ability
to effectively utilise AI language tools as part of future educational goals. Zhai
argues that education should prioritise enhancing students’ creativity and critical
thinking rather than focusing solely on general skills. To achieve this, the study
advocates for further research to understand which aspects of human intelligence
can be effectively substituted by AI and which aspects remain uniquely human.

Secondly, Zhai emphasises the importance of integrating AI, such as ChatGPT,
into subject-based learning tasks. The study points out that AI’s problem-solving
abilities closely mirror how humans approach real-world challenges. Zhai posits
that, as AI, including ChatGPT, continues to advance towards artificial general intelligence (AGI), educators
are presented with an opportunity to design learning tasks that incorporate AI,
thereby fostering student engagement and enhancing the overall learning experience. He adds that this integration of AI into domain-specific learning tasks aligns with the way contemporary scientific endeavours increasingly rely on AI for
prediction, classification and inference to solve complex problems.

Thirdly, Zhai
addresses the potential impact of ChatGPT on assessment and evaluation in
education. The study highlights traditional assessment practices, such as essay
writing, and raises concerns about students potentially outsourcing their writing
tasks to AI. As AI demonstrates proficiency in generating written content, Zhai
argues that assessment practices should adapt their goals to focus on areas that
cannot be easily replicated by AI, such as critical thinking and creativity. This
shift in assessment practices aligns with the evolving needs of society and the
corresponding shifts in educational learning objectives. To effectively measure
creativity and critical thinking, Zhai suggests educators explore innovative
assessment formats that are beyond AI’s capabilities.

In conclusion, Zhai’s study
underscores the transformative potential of ChatGPT in education and calls for
timely adjustments to educational learning goals, learning activities and assess-
ment practices. By recognising the strengths and limitations of AI technologies
like ChatGPT, educators can better prepare students to navigate a future where
AI plays an increasingly vital role. As AI reshapes the field of education, it is
essential to consider its integration thoughtfully and ensure that the emphasis
remains on cultivating skills that remain uniquely human while harnessing the
capabilities of AI to enhance the learning process.

Zhai (2022) explores ChatGPT’s impact on education, focusing on learning goals, activities and assessments. In a pilot study, ChatGPT efficiently drafted
an academic paper with minimal human intervention, showcasing its potential in
generating scholarly content. While innovative, the study’s small sample size and
limited scope may restrict its generalisability. Additionally, ChatGPT’s lack of
deeper understanding and biases in predefined queries may affect its applicability
in certain educational tasks. We believe further research, with a mixed-methods
approach and larger samples, is needed to fully understand AI’s role in education
and its long-term implications on pedagogy and learning experiences. Nonethe-
less, Zhai’s study sets the stage for future investigations into AI’s impact on
education.

Zhai’s (2022) study offers crucial implications for our research.

• Rethinking Learning Goals


Zhai’s findings indicate that AI, like ChatGPT, has efficient information pro-
cessing capabilities and impressive writing proficiency. As we investigate the
role of ChatGPT in education, it becomes essential to reassess traditional
learning goals. Integrating AI language tools into educational objectives may
prompt a shift towards prioritising the development of students’ creativity and
critical thinking, which are areas where AI might not fully replace human
intelligence. This is something we aim to investigate.
• Innovating Learning Activities
The study emphasises the significance of incorporating AI, such as ChatGPT,
into subject-based learning tasks. As AI’s problem-solving capabilities mirror
human approaches, it presents an opportunity for educators to design engaging
learning activities. This integration aligns with the increasing use of AI in
real-world problem-solving and scientific endeavours. This is a subject we
intend to explore.
• Transforming Assessment Practices
Zhai’s study raises awareness of potential challenges, such as students
outsourcing writing tasks to AI. To address this, we may need to rethink
assessment practices. Focusing assessments on areas where AI cannot replicate
human abilities, such as critical thinking and creativity, can ensure that
educational evaluations remain relevant and meaningful. We intend to look into this further.
• Considering Limitations and Ethical Implications
While ChatGPT demonstrates remarkable capabilities, the study acknowledges
its limitations, including the lack of deeper understanding and potential biases
in predefined queries. As we examine the role of ChatGPT in education, we
believe it is essential to consider these challenges and potential ethical
implications.

In conclusion, Zhai’s study urges educators and institutions to approach the integration of AI language tools like ChatGPT thoughtfully. By considering the
implications outlined in the study, we believe our research can contribute to a
responsible and effective adoption of AI in education while preserving the unique
strengths of human intelligence and creativity in the learning process.

Identifying Themes, Methodologies and Gaps in the Literature


The nine studies in the literature review delved into the implications of integrating
ChatGPT in education. Some studies emphasised the need to address potential
biases and data privacy concerns, while others explored the potential impact on
teaching practices and student productivity. The literature also discussed the
transformative potential of AI in education, calling for a re-evaluation of tradi-
tional learning goals and assessment practices. While the studies differed in
methodologies and focus, they collectively provide guidance for educators and
institutions on effectively integrating ChatGPT. However, certain limitations and
gaps were evident. Some studies lacked comprehensive exploration or diverse
samples, and there was a scarcity of case studies directly investigating ChatGPT’s
impact on education. The literature also lacked sufficient representation of stu-
dent perspectives, and a deeper understanding of necessary adaptations in
educational objectives and activities is needed.

To address these gaps, our
research project aims to fill the scarcity of case studies and actively include student
perspectives through in-depth qualitative research. We seek to understand how
ChatGPT is influencing students’ learning experiences and instructors’ teaching
practices. Furthermore, we intend to explore necessary adaptations in educational
objectives and activities to leverage the potential of AI chatbots effectively. By
addressing these gaps, our research project will contribute valuable insights into
the transformative role of AI chatbots in revolutionising teaching and learning
practices, providing guidance for responsible AI use in educational settings.
Chapter 5

Research Methodology

Research Context
This research is conducted at MEF University, a non-profit, private,
English-medium institution located in Istanbul, Turkey. Established in 2014,
MEF University holds the distinction of being the world’s first fully flipped
university. Embracing a flipped, adaptive, digital and active learning approach,
the university incorporates project-based and product-focused assessments instead
of relying on final exams. Furthermore, digital platforms and adaptive learning
technologies are seamlessly integrated into the programmes, while MOOCs are
offered to facilitate self-directed learning opportunities. In addition, since 2021, a
data science and artificial intelligence (AI) minor has been made available for
students from all departments. Caroline Fell Kurban, co-author of this book, leads the investigation, serving as both the principal investigator for the project and the instructor in the in-class case study. During the data analysis and interpretation phases and the formulation of theoretical and practical
implementation suggestions, she received support from the MEF University
Centre for Research and Best Practices in Learning and Teaching (CELT). As
flipped learning is a fundamental aspect of MEF’s educational approach and is
specifically featured in this case study, we provide more information here.

Flipped learning is an instructional approach that reverses the traditional
classroom model, allowing students to learn course concepts outside of class and
use class time for active, practical application of the principles. In this approach,
teachers become facilitators or coaches, guiding students through problems and
projects while providing personalised support and feedback. The focus shifts from
content delivery to creating a student-centred learning experience. To ensure the
effectiveness of a flipped learning course syllabus, it is crucial to anchor it on
proven learning frameworks. These frameworks, rooted in learning theories, offer
valuable insights into the cognitive processes essential for successful learning.
They empower instructors to comprehend, analyse and anticipate the learning
process, guiding them in making informed decisions for teaching and learning
implementation. A pivotal aspect of designing a successful flipped learning syl-
labus is recognising the interconnectedness between curriculum, assessment and

The Impact of ChatGPT on Higher Education, 75–91


Copyright © 2024 Caroline Fell Kurban and Muhammed Şahin
Published under exclusive licence by Emerald Publishing Limited
doi:10.1108/978-1-83797-647-820241005
instruction. For learning to be impactful, these three components must align cohesively, with a focus on learning outcomes (Gollub et al., 2002). In flipped
learning courses, this approach should permeate all three elements. To achieve
this, MEF courses draw upon four well-established learning frameworks, which
act as the foundation for each stage of the flipped learning syllabus design. These
frameworks include Understanding by Design (UbD), Bloom’s Taxonomy,
Assessment For, As, and Of Learning and Gagné’s Nine Events of Instruction,
collectively fostering cohesion between curriculum, assessment and instruction, and
ultimately leading to effective learning. We describe how we bring these theories
together to plan for our flipped courses below.

At the heart of our flipped learning course design is Understanding by Design
(UbD), a model originated by Jay McTighe and Grant Wiggins during the 1990s
(Wiggins & McTighe, 1998). UbD presents a holistic strategy for shaping cur-
riculum, assessment and instructional methods. This methodology revolves
around two core principles: prioritising teaching and assessment for genuine
understanding and learning transfer and structuring curriculum by first deter-
mining the intended outcomes. The UbD framework is anchored in seven guiding
principles:

(1) Thoughtful curricular planning enhances the learning journey, and UbD
provides a flexible structure to facilitate this without imposing rigid
guidelines.
(2) UbD guides curriculum and instructional strategies towards cultivating
profound comprehension and the practical application of knowledge.
(3) Genuine understanding emerges when students independently employ and
expand their learning through authentic performance.
(4) Effective curriculum design adopts an inverse path, commencing with
long-term desired outcomes and progressing through three stages – Desired
Results, Evidence and Learning Plan, which guards against potential pitfalls
like excessive reliance on textbooks or prioritisation of activities over clear
learning objectives.
(5) Educators assume the role of facilitators, favouring meaningful learning
experiences over mere content delivery.
(6) Regular evaluations of curriculum units against design benchmarks enhance
quality and encourage meaningful professional discourse.
(7) The UbD framework embodies a continuous enhancement approach,
wherein student achievements and teaching efficacy steer ongoing improve-
ments in both curriculum and instruction.
(Wiggins & McTighe, 1998).
UbD is a widely recognised framework for underpinning flipped courses (Şahin
& Fell Kurban, 2019).
Instructors employing UbD in course curriculum development proceed
through three distinct stages: Stage 1 – identify desired results (curriculum), Stage
2 – determine acceptable evidence (assessment) and Stage 3 – create the learning
plans (instruction).
Research Methodology 77

Stage 1
The initial phase of UbD centres on defining desired outcomes, encompassing
several key elements. This process involves establishing clear objectives, designing
enduring understandings, formulating essential questions and specifying what
students should ultimately learn and achieve. Instructors should derive explicit
goals from university programme standards, accreditation criteria and course
purpose. These objectives then shape the creation of enduring understandings. An
enduring understanding encapsulates a fundamental concept with lasting rele-
vance beyond the immediate learning context. It is a profound notion that
embodies essential principles within a subject. These understandings offer stu-
dents deeper insights, fostering a comprehensive grasp of the subject beyond
surface-level facts. Crafting a robust enduring understanding begins with identifying a pivotal concept and then distilling it into a clear statement that resonates with students. For instance, ‘Water cycles impact both Earth and society’ succinctly captures such a significant idea.

Essential questions follow, serving as UbD’s
cornerstone. Understanding their essence is crucial. These questions are
open-ended, thought-provoking and engaging, promoting higher order thinking
and transferable concepts. They necessitate reasoning, evidence and sometimes
further inquiry. Notably, essential questions recur throughout the learning
journey, pivotal for design and teaching. For example: How do water cycles affect
ecosystems and natural processes? In what ways do human activities influence
water cycles? Essential questions come in two types: overarching, which apply to
multiple topics, and topical, which focus on specific subject matter (McTighe &
Wiggins, 2013).

After establishing the course aim, enduring understanding and essential
questions, the next step is to develop learning outcomes, i.e. what the students will
know and be able to do by the end of the course. For this purpose, Bloom’s
taxonomy proves to be an effective framework (Bloom et al., 1956). This tax-
onomy classifies educational goals into different categories, with each category
representing a higher level of cognitive functioning than the one below it. It
follows a hierarchical structure where each lower category serves as a prerequisite
for achieving the next higher level. The cognitive processes described within this
framework represent the actions through which learners engage with and apply
knowledge. Examples of some of these, adapted from Armstrong (n.d.), are as
follows, going from higher to lower levels of cognition.

• Create (produce new or original work)


Design, compose, create, combine, formulate, invent, substitute, compile,
construct, develop, generalise, modify, organise, produce, role-play
• Evaluate (justify a stand or decision)
Criticise, evaluate, appraise, judge, support, decide, recommend, summarise,
assess, convince, defend, estimate, find errors, grade, measure, predict, rank
• Analyse (make connections from ideas)
Analyse, compare, classify, contrast, distinguish, infer, separate, explain,
categorise, connect, differentiate, divide, order, prioritise, subdivide, survey
• Apply (use information in new situations)


Solve, apply, illustrate, modify, use, calculate, change, demonstrate, discover,
experiment, show, sketch, complete, construct, dramatise, interpret, produce
• Understand (explain ideas or concepts)
Explain, describe, interpret, paraphrase, summarise, classify, compare, discuss,
distinguish, extend, associate, contrast, convert, demonstrate
• Remember (recall basic facts and concepts)
Define, identify, describe, label, list, name, state, match, recognise, select,
examine, locate, memorise, quote, recall, reproduce, tabulate, tell, copy

Although we’ve provided the complete Bloom’s taxonomy spectrum here, it’s
important to acknowledge that in specific learning situations, such as introductory
courses, the priority might be understanding and applying existing knowledge,
rather than generating novel content or solutions. In such cases, the inclusion of
the ‘Create’ level of cognitive functioning in the learning outcomes might not be
essential. The emphasis could instead be on remembering, understanding and
applying the acquired information.

To align with Bloom’s taxonomy, an additional knowledge taxonomy can be
implemented, encompassing the domains of factual, conceptual, procedural and
metacognitive knowledge (Armstrong, n.d.). Factual knowledge includes famil-
iarity with terminology, specific details and elements within a subject area.
Conceptual knowledge pertains to familiarity with classifications, categories,
principles, generalisations and a grasp of theories, models and structures. Pro-
cedural knowledge encompasses mastery of subject-specific skills, algorithms,
techniques, methods and the ability to determine appropriate procedures. Meta-
cognitive knowledge involves strategic and contextual understanding of cognitive
tasks, including self-awareness and conditional knowledge. From this, course
learning outcomes can be formulated by identifying action verbs from Bloom’s
taxonomy.

Stage 2
Once the course aim, enduring understanding, essential questions and learning
outcomes have been established, the instructor proceeds to Stage 2: determining
acceptable evidence (assessment). At this stage, instructors should ask some key
questions including: How will we know if students have achieved the desired
results? What will we accept as evidence of student understanding and their ability
to use (transfer) their learning in new situations? and How will we evaluate stu-
dent performance in fair and consistent ways? (Wiggins & McTighe, 1998). To
answer these questions, UbD encourages instructors to think like assessors before
developing units and lessons. The assessment evidence should match the desired
outcomes identified in Stage 1. It is therefore important for instructors to think ahead about the evidence needed to show that students have achieved the goals. This approach helps to focus the instruction.

In Stage 2, there are two main types
of assessment – performance tasks and other evidence. Performance tasks ask
students to use what they have learnt in new and real situations to see if they really
understand and can use their learning. These tasks are not for everyday lessons;
they are like final assessments for a unit or a course. Everyday classes teach the
knowledge and skills needed for the final performance tasks. Alongside perfor-
mance tasks, Stage 2 includes other evidence like quizzes, tests, observations and
work samples to find out what students know and can do. However, before we
move on to discuss how we can design the performance task and other types of
evidence, let’s first take a look at our third learning framework: the Assessment for Learning (AfL), Assessment as Learning (AaL) and Assessment of Learning (AoL) framework (Rethinking Classroom Assessment with Purpose in Mind: Assessment for Learning; Assessment as Learning; Assessment of Learning, 2006).
The AfL, AaL and AoL framework serves as a valuable tool for developing
these assessments, as it emphasises how different aspects of the learning process
have distinct roles in enhancing students’ understanding and performance. AoL,
often referred to as summative assessment, is what most people commonly
associate with testing and grading. This involves assessing students’ knowledge
and skills at the end of a learning period to determine their level of achievement.
AoL aims to measure how well students have met the learning outcomes and to
assign grades or scores. While the primary purpose of AoL is to provide a
summary judgement of student performance, it can also offer insights into the
effectiveness of instructional methods and curriculum design. AoL forms the
foundation of the end-of-course performance task. However, it is also supported
by AfL and AaL. AfL, also known as formative assessment, focuses on using
assessment as a tool to support and enhance the learning process. The primary
purpose of AfL is to provide timely feedback to both students and educators. This
feedback helps students understand their strengths and areas that need
improvement, allowing them to adjust their learning strategies accordingly.
Teachers can use the insights from formative assessments to tailor their instruc-
tion, addressing students’ needs more effectively. AfL promotes a learner-centred
approach, where assessment is seen as a means to guide and enhance learning
rather than merely to measure it. Therefore, AfL should be incorporated
throughout the semester to support the students to achieve the learning outcomes
in the end-of-course performance task. However, AaL should also play an
important part in this process. AaL is about promoting a metacognitive approach
to learning. Here, assessment is viewed as an opportunity for students to actively
engage with the material and reflect on their learning process. Students take on a
more active role by monitoring their own learning, setting goals and evaluating
their progress. AaL encourages students to develop self-regulation skills and
become independent learners. This approach shifts the focus from external eval-
uations to internal self-assessment and personal growth. Therefore, AaL should
also be incorporated throughout the semester to support students towards eval-
uating their learning and setting their goals for the end-of-course performance
task. Thus, these three types of assessment are not mutually exclusive; rather, they
complement each other within the broader framework of educational assessment.
To design the end-of-course performance task, following UbD, it is recom-
mended that instructors follow the Goal, Role, Audience, Situation, Performance/
80 The Impact of ChatGPT on Higher Education

Product and Standards for assessment (GRASPS mnemonic), as this ensures an authentic context that equips students with essential skills for their future careers.
Following Wiggins and McTighe (1998), GRASPS works as follows:

• Goal – What task do I want the students to achieve?
• Role – What is the student’s role in the task?
• Audience – Who is the student’s target audience?
• Situation – What is the context? The challenge?
• Performance – What will students create/develop?
• Standards – On what criteria will they be judged?

After devising the end-of-course task, it becomes crucial to establish precise assessment standards through a rubric aligned with the learning outcomes. These
rubrics are invaluable aids benefiting both students and educators. They provide
students with a clear grasp of project expectations right from the outset, and they
provide instructors with a structured means to impartially evaluate work based on
predefined criteria. Rubrics can also facilitate conversations about performance
levels and can serve in self and peer assessment. By incorporating rubrics,
instructors empower students and promote their active engagement in the learning
journey. Furthermore, rubrics prove instrumental in assessing the resilience of an
assessment task against AI influence, a topic explored further in Chapter 9,
Educational Implications.
Once the end-of-course performance task has been designed, the instructor can
proceed to develop assessments for various types of other evidence, such as
quizzes (AfL), experiments (AfL) and reflections (AaL). These will support stu-
dents in making progress towards the final performance task. In the context of the
flipped learning approach, the pre-class phase plays an important role; therefore,
we take a deeper look at this here. In flipped learning, students are required to
engage with pre-class videos or study materials before attending the class session.
To ensure the effectiveness of this approach, these pre-class materials should be
accompanied by pre-class quizzes or other graded activities. This serves the dual
purpose of holding students accountable for their learning and enabling instruc-
tors to assess their comprehension and preparedness. These assessment methods
often involve quizzes (AfL), short questions (AfL) or introspective prompts that
guide students in self-assessing their understanding (AaL). During the course, the
instructor can use the data from these pre-class assessments to tailor their in-class
activities, discussions and examples to effectively address specific gaps in learning.
This brings us to Stage 3.

Stage 3
Stage 3 of UbD involves planning learning experiences and instruction that align
with the goals established in Stage 1. This stage is guided by the following key
questions that shape the instructional process: How will we support learners as
they come to understand important ideas and processes? How will we prepare
them to autonomously transfer their learning? What enabling knowledge and skills will students need to perform effectively and achieve desired results? What
activities, sequence and resources are best suited to accomplish our goals?
(Wiggins & McTighe, 1998). According to Wiggins and McTighe, during this
stage, instructors need to go beyond mere content delivery and consider the
holistic learning experience. They note that traditionally, teaching has often
focused on conveying information and demonstrating basic skills for acquisition,
neglecting deeper understanding and real-world application. However, they point out that genuine understanding requires active engagement, including inference and generalisation, to move beyond surface-level comprehension; in their view, this involves applying knowledge to new contexts and receiving constructive feedback for improvement. They suggest that taking this approach transforms educators
into facilitators of meaning-making and mentors who guide effective content
utilisation rather than mere presenters. It is at this stage that all the required
elements are structured into comprehensive units to facilitate learning.
Following our flipped learning approach, within each unit we draw on Robert Gagne’s Nine Events of Instruction to ensure effective learning. The model is based on the information processing model of the mental events that occur during learning when individuals are exposed to different stimuli (Gagne’s 9 Events of Instruction, 2016). From this model, Gagne derived nine events of instruction, which provide a valuable structure for designing instructional activities. These are as follows:

(1) gain attention,
(2) inform learners of objectives,
(3) stimulate recall of prior learning,
(4) present the content,
(5) provide ‘learning guidance’,
(6) elicit performance (practice),
(7) provide feedback,
(8) assess performance,
(9) enhance retention and transfer to the job.

However, for flipped learning to be effective, we believe the sequence of these events needs to be rearranged in the following ways:

• Pre-class/Online

– Unit overview;
– Introduction to key terms;
– Prior knowledge activity;
– Introduction to concepts (via video, article);
– Hold students accountable for their learning (formative assessment).

• In Class

– Start-of-class/bridging activity to review the pre-class concept;
– Structured student-centred activities to practise the concept;
– Semi-structured student-centred activities to practise the concept;
– Freer student-centred activities to practise the concept;
– Self-reflection (at the end of a lesson or unit, either in class or out of class).

In conclusion, these four frameworks serve as the recommended foundation for MEF’s flipped courses, emphasising andragogical principles. Together, they assist
instructors in formulating course aims, enduring understandings, essential questions
and learning outcomes. This comprehensive approach further facilitates the creation
of authentic assessments, which then guides the development of suitable instructional
strategies and activities. By aligning curriculum, assessment and instruction, this
framework ensures a cohesive and effective teaching and learning experience.

Research Approach
This research centres on an investigation of the impact of ChatGPT on students
and instructors in higher education. Our primary objectives are to explore,
understand and assess how this AI chatbot may influence the roles of students and
instructors within an academic setting. By delving into the implementation of
ChatGPT, we aim to uncover potential challenges and opportunities that may
arise, providing valuable insights into its transformative role in the educational
landscape. Ultimately, our goal is to comprehensively examine how the integra-
tion of ChatGPT specifically affects the roles of students, instructors and higher
education institutions. As such, similar to Rudolph et al. (2023), we categorise our
areas of research into student-facing, teacher-facing and system-facing. However,
we introduce another category, ‘researcher-facing’, as it provides an additional
metacognitive perspective on how ChatGPT influenced the research process
which, ultimately, will also affect institutions of higher education.
In planning our research approach, we decided that a qualitative research paradigm would be most suitable, as it is an exploratory approach which aims to
understand the subjective experiences of individuals (not just the technology) and
the meanings they attach to those experiences. This approach is particularly useful
when investigating new phenomena, such as ChatGPT, where there is limited
knowledge and experience. Using such an approach enables us to gain a deeper
understanding of the impact of ChatGPT on the role of students, instructors and
our institution as a whole and to explore the subjective experiences and per-
spectives of those involved. Within this paradigm, a case study approach seemed
most appropriate. Case studies involve conducting a thorough investigation of a
real-life system over time, using multiple sources of information to produce a
comprehensive case description, from which key themes can be identified
(Creswell & Poth, 2016). This approach, commonly employed in the field of
education, entails gathering and analysing data from diverse sources like inter-
views, observations, documents and artefacts to gain valuable insights into the
case and its surrounding context. Case studies are a useful approach when a case
can be considered unique and intrinsic (Yin, 2011).
Our case is both unique and intrinsic, as it involves looking at the potential
effect of ChatGPT on various stakeholders at our university, something that, at
the time of writing, had not been studied extensively before. Due to this, we
decided to combine Yin’s (1984) case study methodology with the instrumental case study research design proposed by Stake (1995). Yin’s design follows five phases of analysis: compiling, disassembling, reassembling, interpreting and concluding.
It is particularly useful for understanding a phenomenon within a specific context,
as is the case with ChatGPT and its potential impact on various stakeholders in
education. It also takes into consideration the historical background, present
circumstances and potential future developments of the case. In such a case, data
can be gathered via interviews, focus groups, observations, emails, reflections,
projects, critical incidents and researcher diaries, after which, following Braun
and Clarke (2006), a thematic analysis can be conducted.

Data Collection
This study took place from December 2022 to August 2023, starting with the
release of ChatGPT-3.5 on 30 November 2022. We used the free version of
ChatGPT-3.5 for data collection, publicly available since November 2022, to
ensure fair participation of students without requiring them to purchase a paid
version. However, it should be noted that the training data in ChatGPT-3.5 only
extend up to September 2021. During the write-up phase, GPT-4 was used. As
discussed previously, the literature review focused on papers published between
December 2022 and early April 2023 to ensure current resources. However,
considering ChatGPT’s ever-evolving nature, we continued to collect extant
literature from media sources throughout the study until the final write-up.
Adopting a case study approach, our research aims to collect diverse and
comprehensive data. In line with Yin’s (1994) case study protocol, we identified
six specific types of data we would gather, including documentation, archival
records, reflections, direct observation, participant observation and physical
artefacts. To collect data for this study, relevant documents such as reports,
policies and news articles on ChatGPT in education were continuously gathered
from internet searches throughout the investigation. The objective was to gain
comprehensive insights and perspectives on the integration of ChatGPT in edu-
cation. The collected data formed the basis for Chapter 2 in this book. The
principal investigator made sure to maintain reflexivity throughout, considering
her positionality and biases and sought diverse perspectives from various sources
to enhance data validity and reliability. This approach ensures a well-rounded
study.
The researcher-facing aspect of this study involved the principal investigator
documenting the impact of ChatGPT on the research process. A comparative
approach was taken, analysing how research stages were conducted in previous
projects, before ChatGPT’s availability, and how they could be approached since
the availability of ChatGPT. This meta-perspective allowed for reflection on the research process, providing insights into the researcher’s perspective. The data
collection for this aspect took place from December 2022 to June 2023, using a
research diary digitally maintained in Google Sheets following Patton’s (2002)
guidelines. The diary tracked insights, challenges and adjustments made while
incorporating ChatGPT into the research, enhancing self-awareness and under-
standing of research practices with AI technologies. Researcher-facing data are
referenced as RFD in Chapter 6, Findings and Interpretations.
The teacher-facing part of this study aims to investigate the impact of
ChatGPT on the instructor’s role, in which the principal investigator, in the role
of instructor, conducted a comparative analysis between a previous course (before
ChatGPT) and an upcoming course in the spring semester of 2023. The course in
question was HUM 312 Forensic Linguistics, which follows MEF University’s
flipped learning approach. Throughout January and February 2023, the
instructor actively evaluated and adjusted the course design to incorporate
ChatGPT effectively. This involved analysing the syllabus, course overview,
assessments and rubrics, and in-class activities to identify suitable opportunities
for ChatGPT integration. To record these procedures and contemplations, the
instructor kept a teacher’s research diary in Google Sheets (teacher-facing data are referred to as TFD), demonstrating reflexivity through self-examination, noting choices and critically evaluating both past and revised course materials. Observations were conducted while
the course unfolded in the spring 2023 semester. During this period, the instructor
engaged in rigorous self-reflection, contributing significantly to the data collection
process. This approach helped address possible biases and assumptions, ensuring
valuable insights from student feedback.
The student-facing aspect of the research took place within the new HUM 312
Forensic Linguistics course. The course was conducted online whereby pre-class
activities were made available on the university learning management system
prior to class, and weekly classes took place via Zoom, where hands-on, interactive learning was employed. The course was a 16-week course, with one 2-hour
lesson per week. The course had first run in the spring semester of 2020, when it
moved from face-to-face to online due to the COVID pandemic, and has run
every year since, continuing in its online format. The new iteration of the course
ran in spring 2023 with a cohort of 12 students. An overview of the new iteration
of the course is provided below.

• Course Aim
The overall educational aim of this course is for students to investigate the role
linguistic analysis plays in the legal process. It focuses on the increasing use of
linguists as expert witnesses where linguistic analysis is presented as evidence.
• Course Description
This course aims to provide students with an understanding of forensic lin-
guistics, focusing on the role of linguistic analysis in the legal process. Forensic
linguistics involves a careful and systematic examination of language, serving
justice and aiding in the evaluation of guilt and innocence in criminal cases.
The field is divided into two major areas: written language, which analyses
various texts like police interviews, criminal messages and social media posts,
and spoken language, which examines language used during official interviews
and crimes. Through a case-based approach, the course explores how crimes
have been solved using different linguistic elements, such as emojis, text mes-
sage abbreviations, regional accents and dialects, handwriting analysis and
linguistic mannerisms, among others.
• Enduring Understanding
Forensic Linguistics aids justice by analysing language to uncover truth in
criminal cases.
• Essential Questions
Overarching Essential Questions
– How does linguistic analysis contribute to legal case analysis in forensic
linguistics?
– How is the emergence of AI reshaping the legal field?

Topical Essential Questions


– In what ways do communication methods such as emojis, text messages and
punctuation impact understanding in forensic linguistics cases?
– What ethical and legal aspects surround mandating preferred transgender
pronouns, and how does this relate to freedom of speech and discrimination
concerns?
– How can slang, regional dialects, linguistic mannerisms and handwriting be
utilised to identify a potential suspect in forensic linguistic investigations?
– How can the analysis of acoustic phonetics in speech help identify whether
someone is intoxicated or sober?
– In the realm of forensic linguistics, how does the study of pragmatics present
challenges in accurately interpreting an individual’s intended meaning during
communication?
• Learning Outcomes

– Analyse how language affects legal decisions.
– Deconstruct aspects of language from lawsuits and manipulate language
from the suits from one form to another.
– Analyse how language was used in real-life cases to convict or acquit a
defendant.
– Compose a mock closing argument on a specific aspect of language in a
real-life case and justify your argument.
• Assessment

– pre-class quizzes (20%),
– in-class assessed activities (40%),
– Semester Project 1 (20%).
You will take on the role of either the defence or prosecution, with the goal
of getting the defendant acquitted or convicted in one of the cases. Your
audience will be the judge and jury. The situation entails making a closing
argument at the end of a trial. As the product/performance, you are required to create a closing argument presented both in writing and as a recorded
speech. The standards for assessment include: providing a review of the case,
a review of the evidence, stories and analogies, arguments to get the jury on
your client’s side, arguments attacking the opposition’s position, concluding
comments to summarise your argument and visual evidence from the case.
– Semester Project 2 (20%)
You will develop your own individual projects related to an aspect of
forensic linguistics and ChatGPT or the law and ChatGPT. You will also
develop your own rubric for evaluation. You will present your project to an
audience of peers and professors in the final lesson and answer any questions
posed to you.

The selection of this course for investigation was driven by several factors.
Firstly, the principal investigator was the instructor for this course and had
expertise in exploring educational technologies. Additionally, she had previously
investigated and been involved in the design of the Flipped, Adaptive, Digital and
Active Learning (FADAL) approach, making her well-suited for this investiga-
tion. The instructor’s deep understanding of the course, its planning processes and
her ability to teach it again in the upcoming semester provided an ideal oppor-
tunity to compare pre- and post-ChatGPT course planning. Moreover, the lin-
guistic components of the Forensic Linguistics course made it suitable for testing
ChatGPT’s capabilities across various linguistic aspects. The students enrolled
were from the Faculty of Law, a field expected to be heavily impacted by AI
advancements, making their involvement in the investigation valuable for raising
awareness about AI’s impact on the legal profession. Data collection occurred
between March and June 2023, aligning with the spring semester. To investigate
the effects on students, a diverse set of data was gathered. This started with a
survey administered at the beginning of the course to assess students’ existing
familiarity with and use of ChatGPT. In the second lesson, students were presented
with a video that introduced them to ChatGPT, followed by open-ended ques-
tions to capture their impressions. Pre-class questions were employed throughout
the course to find out about the specific interactions students had with ChatGPT
and how these interactions had influenced their learning experiences. A reflective
questionnaire was conducted at the end of the course to gain more information
about the students’ insights, impressions and perspectives of their experiences with
ChatGPT throughout the course. Furthermore, complementary data sources were
incorporated into the study. Padlets, screenshots and students’ reflections were
gathered to provide a more comprehensive perspective on the student experience.
To further enrich the analysis, students granted permission for their projects to be
included in the data assessment. The student-facing data are referred to as SFD.
The focus of the system-facing aspect of this study was to examine the
implications of ChatGPT from the viewpoints of different stakeholders at the
university, including instructors, department heads, deans and vice-rectors.
Peripheral participants, including visiting teachers at workshops and discussions
with professors from different institutions and educational leaders at conferences,
provided additional insights. The data collection period for this aspect was from
January to June 2023. Various methods were used for data collection. Email
communications from university stakeholders were collected, providing valuable
insights into discussions surrounding ChatGPT’s impact on the university. Zoom
recordings and workshop activities were collected from institutional workshops
about ChatGPT to understand the institutional response. Interviews with
instructors and stakeholders were conducted via Zoom or Google Chats. Critical
incidents arising from conversations and conferences were recorded in a
system-facing diary, helping to identify patterns, themes, challenges and oppor-
tunities related to ChatGPT’s integration in education. This served as a valuable
tool for documenting and reflecting upon insights and challenges. Information
recorded in this diary was member-checked, wherever possible, with those
involved to verify accuracy and validity of data and ensure perspectives were
accurately represented. System-facing data are referred to as SYFD.

Data Analysis Methods and Techniques


To analyse our data, we followed Braun and Clarke’s (2006) thematic analysis
approach, which involves systematically coding and categorising the data to
identify patterns or themes. The approach comprises six phases: familiarisation with the data,
coding, searching for themes, reviewing themes, defining and naming themes and
writing up. While there are six phases to this process, following Braun and
Clarke’s advice, we viewed each phase as iterative, not linear, requiring us to
revisit previous phases as needed. In order to ensure the credibility and depend-
ability of the study, following Thurmond (2001), we employed data triangulation
using multiple data collection tools to achieve a comprehensive and in-depth
understanding.
We began by immersing ourselves in the dataset, engaging in repeated reading
and noting initial observations to gain familiarity. Subsequently, we developed
codes to capture significant features that were relevant to our research questions.
This process encompassed both data reduction and analysis, aiming to compre-
hensively capture the semantic and conceptual meaning embedded in the data.
Initially, the principal investigator undertook the task of creating the codes,
carefully reviewing the data and establishing preliminary coding categories.
Following this, a collaborative approach was adopted, involving members of the
MEF Centre for Research and Best Practices in Learning and Teaching (CELT).
The codes were reviewed, critiqued and revised as necessary, ensuring a
comprehensive and accurate representation of the data. Through this iterative
process, we arrived at our final set of codes and definitions, as shown below:

• Ability to translate
ChatGPT has the ability to translate text from one language to another.
• ChatGPT demonstrates competency in completing assigned student tasks
ChatGPT exhibits proficiency in successfully accomplishing tasks assigned to
students.
• ChatGPT unable to perform assigned student tasks
This indicates situations where ChatGPT is unable to successfully complete
tasks or assignments given to students.
• Culturally specific database
The information in ChatGPT’s database is specific to a particular culture or
cultural context, which may not be relevant to the user’s needs.
• Disciplinary context limitations
ChatGPT demonstrates limitations in its understanding of specific disciplinary
contexts or lacks specialised knowledge in a particular field.
• Enriches the user’s ideas
ChatGPT has the ability to enhance and expand the user’s ideas through its
generated responses.
• Gaps in knowledge
ChatGPT demonstrates a lack of information or understanding on certain
topics or areas.
• Gender bias in pronoun usage
This refers to the tendency of ChatGPT to default to male pronouns unless
prompted otherwise during interactions.
• Gives incorrect information
ChatGPT provides inaccurate or incorrect information in its responses.
• Imparts specific knowledge, skills or concepts to users
ChatGPT provides users with specific information, skills or concepts through
its responses.
• Inhibits user learning process
This refers to the negative impact of ChatGPT, which can hinder or diminish
users’ ability to actively engage in the learning process and acquire knowledge
independently.
• Input determines output quality
The quality of the output generated by ChatGPT is influenced by the quality of
the input provided to it.
• Interactivity of communication
This refers to the dynamic and responsive nature of the interaction between
users and ChatGPT.
• Lack of standard referencing guide for ChatGPT
This refers to the absence or inadequate guidelines for citing and referencing
sources derived from ChatGPT in academic and research contexts.
• Lack of response relevance
This refers to instances where the responses generated by ChatGPT are not
pertinent or closely related to the input or query.
• Need for giving a clear context
This highlights the importance of providing a clear and specific context when
interacting with ChatGPT to ensure accurate and relevant responses.
• Need to fact-check
This highlights the importance of verifying or confirming the information
provided by ChatGPT through independent fact-checking.
• Occurrences of refusal or reprimand by ChatGPT
This signifies instances where ChatGPT refuses to provide a response or issues
reprimands to the user for certain inputs or queries.
• People perceive ChatGPT as providing opinions rather than predictions
This highlights the difference between instances where ChatGPT appears to
provide personal opinions, while in reality, it generates responses based on
predictive models.
• Perceived human-like interactions with ChatGPT
This refers to the phenomenon where users perceive or experience ChatGPT as
exhibiting human-like qualities in its interactions, despite its artificial nature.
• Reduces cognitive load
ChatGPT can alleviate cognitive burden or mental effort by providing assis-
tance or performing tasks on behalf of the user.
• Requires multiple iterations to get what you want
Achieving the desired outcome or response from ChatGPT may require mul-
tiple interactions or iterations.
• Reviews work and gives suggestions for improvement
ChatGPT can analyse and provide feedback on the work or content presented
to it, offering suggestions for improvement.
• Speeds up the process
ChatGPT can accelerate or expedite certain tasks or processes compared to
traditional methods.
• Text register modification
This refers to the capability of ChatGPT to adapt its writing style, tone or
formality to match specific registers, including the ability to mimic the style of
particular individuals.
• Unquestioning trust in information
This refers to instances where users place complete trust in the information
provided by ChatGPT without critical evaluation or scepticism, even when it is
giving incorrect information.
• Useful in other areas of life
ChatGPT can have practical applications and benefits beyond just educational
use.

The process of refining codes into coherent themes involved several cycles of
careful evaluation. We began by generating initial codes and then organising them
into meaningful themes. Thorough exploration of various groupings and potential
themes ensured their accuracy and validity. To validate these themes, we metic-
ulously cross-referenced them with the coded data extracts and the entire dataset.
Additional data, such as critical incidents observed during conferences and
workshop discussions, posed a challenge as they emerged after we had established
our codes and completed the thematic analysis. However, since these incidents
contained relevant new data that could enrich our analysis, we revisited the
coding and thematic analysis process three times to integrate these additional
data. This iterative approach resulted in more robust codes and themes.
Collaborative discussions further led to the formulation of concise and
informative names for each theme. Through this iterative approach, we attained
data saturation, indicating that no further new information or themes were
coming to light. The final themes that were collectively agreed upon and their
respective codes are as follows:

• Input Quality and Output Effectiveness
Need for giving a clear context, input determines output quality, requires
multiple iterations to get what you want
• Limitations and Challenges of ChatGPT
Lack of standard referencing guide for ChatGPT, gives incorrect information,
lack of response relevance, occurrences of refusal or reprimand by ChatGPT,
gender bias in pronoun usage, need to fact-check
• Human-Like Interactions with ChatGPT
Interactivity of communication, perceived human-like interactions with
ChatGPT, people perceive ChatGPT as providing opinions rather than pre-
dictions, unquestioning trust in information
• Personal Aide/Tutor Role of ChatGPT
Enriches the user’s ideas, speeds up the process, reduces cognitive load, useful
in other areas of life, ability to translate, reviews work and gives suggestions for
improvement, text register modification, imparts specific knowledge, skills,
concepts to users
• Impact on User Learning
ChatGPT demonstrates competency in completing assigned student tasks,
inhibits user learning process, ChatGPT unable to perform assigned student
tasks
• Limitations of a Generalised Bot for Educational Context
Gaps in knowledge, disciplinary context limitations, culturally specific
database

To facilitate the mapping and analysis of the themes, the researchers utilised a
Google Sheet for each theme, incorporating the following sections: code, code
definition, examples from the extant literature, examples from the literature
review and supporting examples from the data. This comprehensive framework
allowed for a systematic examination of each of the themes in relation to our
research questions. From this, the following interconnectivity of the themes was
derived (Fig. 1).
Throughout this study, obtaining informed consent from all participants,
including interviewees, instructors, and students, was a top priority. Participants
were fully informed about the research purpose, procedures, potential risks, and
benefits, with the freedom to decline or withdraw without consequences. Our
communication with participants remained transparent and clear, ensuring data
privacy and confidentiality. Ethical review and approval were obtained from the
university’s ethics committee to comply with guidelines and protect participants’
rights. To mitigate bias, the researcher remained mindful of personal biases
during data collection and analysis.

Fig. 1. Interconnectivity of Themes.

However, during the research process, an ethical issue emerged concerning
consent related to the research diaries, which
served as a hidden form of data collection. The researcher began to perceive every
interaction and event as potential data, while participants may not have viewed
them in the same light (Hammersley & Atkinson, 1995). This raised concerns
about ensuring proper consent for the use of information gathered through such
situations. Since, on most occasions, the researcher did not recognise the
relevance of these incidents to the investigation until after they occurred, the
researcher had not explicitly communicated to participants that the content of the
interactions could be used in the research. Therefore, to protect the confidentiality
of the individuals involved and respect their privacy, the researcher took measures
to provide anonymity when referencing extracts from the research diaries in the
writing.
In the next chapter, we present our findings and interpretations of the data,
encompassing a thorough analysis and derivation of insights from our outcomes.
This chapter systematically provides an overview of the collected data, aligning it
with the extant literature and the literature review. Subsequently, we interpret
these findings within the framework of our theoretical approach. Employing this
information, we re-examine our research questions, specifically exploring the
potential impacts of ChatGPT on students, instructors, and higher education
institutions. Through this process, we convert our raw data into valuable insights,
enriching our understanding of the subject.
Chapter 6

Findings and Interpretation

Input Quality and Output Effectiveness of ChatGPT


The theme of ‘Input Quality and Output Effectiveness’ underscores the crucial
role of input in determining the quality and effectiveness of ChatGPT’s output.
Large language models like ChatGPT can generate human-like text, but their
outputs may not always align with human values due to their focus on predicting
the next word rather than understanding broader context. This misalignment can
result in challenges related to reliability and trust, such as a lack of helpfulness
when the model fails to accurately comprehend and execute specific user
instructions. To overcome this, users need to provide ChatGPT with a clear
context and quality input, and to undergo multiple iterations to get the desired
output. This was seen in the following ways.

Need for Giving a Clear Context


According to the researcher (the principal investigator in her research role), ‘You
have to input your own ideas, personal experiences, and reflections first in order
for ChatGPT to come up with research questions relevant to your situation’
(RFD). The instructor (the principal investigator in her teaching role) noted that
‘ChatGPT helps write enduring understandings. These can be tricky to word. By
getting it to define enduring understandings first and then telling it about your
course, it can help with the wording’ (TFD). Similarly, when it comes to writing a
course aim, the instructor highlighted the need to input clear information about
students, department, and institution to receive accurate suggestions from
ChatGPT (TFD). The instructor also emphasised the importance of providing a
lot of specific information about the course and students to obtain useful learning
outcomes (TFD). Additionally, ChatGPT’s ability to generate suggestions for
lesson plans depended on knowing the context of the class (TFD). Furthermore,
inputting the text or video text from the pre-class activity allowed ChatGPT to
provide suggestions for pre-class quizzes, provided that the teacher specified the
type of quiz questions they wanted (TFD).
The literature review supports the finding that providing a clear context is
crucial when interacting with ChatGPT. Mhlanga (2023) highlights that
ChatGPT, being a machine, lacks the ability to comprehend contextual factors

The Impact of ChatGPT on Higher Education, 93–131


Copyright © 2024 Caroline Fell Kurban and Muhammed Şahin
Published under exclusive licence by Emerald Publishing Limited
doi:10.1108/978-1-83797-647-820241006

such as culture, background and experiences in the same way as human educa-
tors. This aligns with the fact that ChatGPT’s outputs may not align with human
values because its focus is on predicting the next word rather than understanding
the broader context. Similarly, Alshater (2022) observes that language models like
ChatGPT can generate unrelated or generic responses due to their lack of
contextual understanding. Sullivan et al.'s (2023) study likewise highlighted
these contextual limitations, evidenced by the generation of unrelated or generic
responses.
Looking at the importance of giving a clear context through the lens of
Christensen’s Theory of Jobs to be Done highlights the importance of users
understanding their specific needs and desired outcomes when hiring ChatGPT.
Users must clearly articulate their requirements and objectives in order to effec-
tively utilise ChatGPT’s capabilities. This involves providing a clear and specific
context for ChatGPT to generate accurate and relevant responses. Bourdieu’s
social theory sheds light on the power dynamics and social structures that influ-
ence the interaction with ChatGPT. It emphasises the need to consider linguistic
norms, cultural capital and social dynamics that shape communication with the
artificial intelligence (AI) system. Instructors must navigate these factors when
engaging with ChatGPT to ensure meaningful and appropriate responses. Hei-
degger’s Theory on Being highlights the distinction between ChatGPT’s predic-
tive nature and the broader contextual understanding of human educators. Users
must recognise that ChatGPT’s focus is on predicting the next word rather than
comprehending the broader context.

Input Determines Output Quality


When students were asked to create a SWOT analysis regarding ChatGPT and
the law, the quality of the response varied based on the quality of the input data.
As observed by the instructor, ‘If they gave it good quality data, it created an
effective chart; however, if they just asked it to do it for them, they did not end up
with such a good result’ (TFD). This was also seen by a teacher in a workshop
who commented, ‘When you give the right prompts, it gives you the lesson plan
immediately. You don’t really need to do anything’ (SYFD). ChatGPT also
showcased its effectiveness in generating research ideas and suggesting method-
ologies and codes, contingent upon the user providing pertinent and precise
information. According to the researcher, ‘ChatGPT was good for generating
ideas for my research, but only if relevant and accurate information about the
study was input first’ (RFD). The researcher also mentioned, ‘It was relatively
easy for me to identify a research methodology based on previous experience and
existing knowledge. However, asking ChatGPT for suggestions led to a broader
array of suggestions that enriched my choice. But, this only works well if you have
input your precise research problem and questions’ (RFD). Additionally,
ChatGPT’s capability to suggest codes from textual data was noted by the
researcher, who stated, ‘ChatGPT can come up with suggested codes from textual
data rather well, as long as the textual data is well-written in the first place’
(RFD). This sentiment was echoed by a teacher's remarks during a workshop,
‘The questions designed do not always meet the prompts or the specifications we
provided if they are not mentioned precisely’ (SYFD). ChatGPT also proved
useful for coming up with rubrics for evaluation, but again, this was dependent on
the clarity of input given by the user. The instructor mentioned, ‘Once an
assessment has been written, ChatGPT can easily come up with a suggested rubric
for evaluation, but only if the assessment task is written precisely’ (TFD).
The need for quality input into ChatGPT was also supported by the literature.
In their 2023 research, Firaina and Sulisworo emphasise the significance of
selecting relevant commands to determine the usefulness of the obtained infor-
mation. This aligns with our understanding that the quality of input plays a
critical role in achieving desired outcomes when working with ChatGPT.
Therefore, focusing on improving the quality of input is crucial for obtaining
optimal results and maximising the value derived from ChatGPT.

Requires Multiple Iterations to Get What You Want


Users often found it necessary to refine their prompts or requests for satisfactory
results. The researcher shared their experience, stating, ‘I made several revisions
with ChatGPT before achieving the desired outcome’ (RFD). Similarly, a student
reflected on their use of ChatGPT, noting the need for multiple revisions when
creating the closing argument for the Unabomber case (SFD). These instances
demonstrate the iterative nature of the process, where multiple iterations and
modifications were crucial to achieving the desired outcomes. The importance of
refining prompts and engaging in multiple interactions is further highlighted by
the researcher’s comment: ‘If you are not happy with the wording, you can
provide ChatGPT with prompts until it generates better-worded questions’
(RFD). Echoing this sentiment, a teacher in a workshop said, ‘It is important to
review and regenerate responses until they align with your needs’ (SYFD).
Additionally, the researcher mentioned how ChatGPT’s capabilities can be
leveraged in an iterative manner for refining codes and themes: ‘It can. . . group
the codes and suggest themes. This can be done iteratively until you are happy
with the outcome’ (RFD). These examples emphasise the iterative nature of
working with ChatGPT, where multiple iterations, prompts and revisions are
often necessary to fine-tune the generated output and meet the user’s specific
needs and expectations.
This finding is supported by Sullivan et al.’s (2023) study, which emphasised
the value of the iterative refinement process in working with ChatGPT, under-
scoring the importance of developing information literacy skills to successfully
engage with ChatGPT and other AI tools.
The iterative nature of engagement and the importance of context and inter-
pretation align with Heidegger’s concept of ‘being-in-the-world’. Achieving the
desired outcome from ChatGPT may require multiple iterations and an ongoing
process of refining our understanding of both the technology and our own
existence. The existential engagement with ChatGPT involves adapting and
refining our understanding to foster desired outcomes.

How ChatGPT May Affect the Roles of Stakeholders


From our analysis, we believe the role of the student will change in the following
ways.
The student’s role will involve providing clear and specific input to ChatGPT
to ensure accurate and relevant responses. They will also need to understand the
limitations of ChatGPT’s output and critically evaluate its responses to ensure
alignment with their own goals. Moreover, students will need to develop infor-
mation literacy skills and engage in iterative refinement, continuously refining
their input to optimise the effectiveness of ChatGPT’s responses.
Regarding the role of the instructor, ChatGPT can assist with tasks such as
writing enduring understandings, course aims, learning
outcomes, lesson plans and pre-class quizzes. However, the effectiveness of
ChatGPT in these tasks will depend on the provision of specific information.
Instructors will need to input clear details about the course, students and insti-
tution to receive accurate suggestions from ChatGPT. They will also need to
consider the context of the class and provide relevant information for ChatGPT
to generate valuable suggestions. Additionally, instructors will play a crucial role
in guiding and refining the use of ChatGPT by students. They will need to ensure
that students understand the importance of input quality and help them navigate
the iterative nature of working with ChatGPT. Instructors will also contribute
their expertise in evaluating and contextualising ChatGPT’s output, bridging the
gap between AI-generated responses and the creativity, originality and hands-on
opportunities that human teachers bring to the educational experience.
Institutions of higher education will need to provide the necessary resources
and support for instructors and students to effectively use ChatGPT. This may
include offering training programmes on how to leverage ChatGPT in educa-
tional tasks and promoting information literacy skills development. Moreover,
institutions must foster a culture of continuous learning and adaptation,
encouraging instructors and students to embrace the iterative refinement process
when working with ChatGPT. By recognising the multifaceted implications of AI
in education, institutions can actively shape the integration of ChatGPT and
other AI tools to align with their educational goals and values.
In summary, our analysis reveals key insights on ChatGPT’s impact, empha-
sising the significance of clear input, iterative refinement, contextual awareness
and user engagement. Ensuring high-quality input by training both students and
instructors to effectively use ChatGPT in academic endeavours will maximise its
benefits in education.

Limitations and Challenges of ChatGPT


ChatGPT undoubtedly brings numerous benefits and opportunities to users in
various domains. However, it is essential to recognise that ChatGPT is not
without its limitations and challenges. In this theme, we investigate the potential
shortcomings and difficulties that users may encounter when interacting with
ChatGPT, shedding light on the critical areas where considerations are necessary.

Lack of Standard Referencing Guide for ChatGPT


Users in academic contexts may encounter a significant challenge due to the lack
of a standard referencing guide for ChatGPT. This issue encompasses two crucial
areas: the absence of references provided by ChatGPT for the sources it uses and
the lack of established guidelines for users to reference ChatGPT-generated
information. As we have discussed, the reliance of generative AI on data from
undisclosed sources raises concerns regarding copyright infringement and fair
compensation for creators. Altman’s acknowledgement of these concerns without
providing definitive answers suggests that ChatGPT may not adopt a referencing
model for its sources. Consequently, users face the difficulty of determining the
origins of ChatGPT’s information and referencing it appropriately. Data exam-
ples highlight these challenges: ‘When we asked it, ChatGPT gave some sugges-
tions for how we should reference it, but there is currently no standard referencing
guide for using ChatGPT’ (TFD); ‘If the students are getting information directly
from ChatGPT, they are unable to reference the source as we don’t know where
the data has come from’ (TFD); ‘ChatGPT hasn’t done a very good job of what
evidence to use and how’ (SFD); ‘The system told me that it cannot do the ref-
erences itself and that I should look for expert opinions and academic papers on
this subject. So therefore I checked what ChatGPT said through academic papers
on the internet. . . I think it would be much better if the system itself indicated
where it gets the information it uses’ (SFD). Furthermore, without clear guide-
lines and standards in place, users face challenges in citing and referencing
information derived from ChatGPT. The absence of a standard referencing guide
raises concerns about the transparency and traceability of sources used in aca-
demic and research outputs incorporating ChatGPT-generated content.
To address this challenge, the instructor in this case study collaborated with Dr
Thomas Menella, a senior research fellow for the Academy of Active Learning
Arts and Sciences, to devise a referencing system for students for the spring 2023
semester. The students were instructed to quote all content from ChatGPT and
provide in-text citations with chronological numbering, like ‘Lorem ipsum dolor
sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et
dolore magna aliqua’ (ChatGPT, 1). Failure to comply would be considered
plagiarism. Additionally, they were required to create a separate ‘ChatGPT
Citations’ page after the references section, including the date of ChatGPT con-
tent generation, corresponding prompts used and at least one source for
fact-checking validation. The students were warned that ChatGPT-generated
content couldn’t be deemed accurate until verified, and they were accountable
for thoroughly fact-checking the content they included in their assignments,
regardless of authorship. So what was the feedback on this system? The instructor
said, ‘I asked my students to reference when they have used ChatGPT following
the system Tom and I developed. However, it is not ideal, a better referencing
system is required’ (TFD). Additional comments were, ‘I think the referencing
system is not described well in the document. It should be more clear how I should
do my referencing. I took most of my time since I didn’t understand for a long
time’ (SFD); ‘More guidance and examples on how to use it properly and
consistently would help’ (SFD); ‘The document helped me, but it was very
detailed and this caused confusion in my opinion, the document should be more
general and details that cause confusion should be removed’ (SFD); ‘It was a bit
difficult and complicated to reference ChatGPT. It can be made simpler’ (SFD).
In summary, the students requested clearer instructions, more guidance and a
simpler approach to referencing to avoid confusion.
These challenges were also seen in the literature. Neumann et al. (2023)
highlight that when integrating ChatGPT into higher education, particularly in
scientific writing tasks, sections that involve existing knowledge can pose diffi-
culties. This is because ChatGPT may generate text that references non-existent
literature or fails to provide accurate and reliable references. Rudolph et al. (2023)
also raise this issue, pointing out that ChatGPT has a limitation in providing
sources and quotations, which is essential for academic assignments. However,
they note that there are promising developments to address this limitation, such as
the prototype WebGPT, which is being developed with web browsing capabilities,
allowing it to access recent information, verified sources and quotations. Addi-
tionally, they point out that AI research assistants like Elicit offer assistance in
finding academic articles and providing summaries from a vast scholarly paper
repository. They believe these advancements will enhance the quality and credi-
bility of academic work by incorporating up-to-date information and reliable
sources (Rudolph et al., 2023).
Through Christensen’s lens, users in academic contexts are trying to ‘hire’
ChatGPT to fulfil the job of generating accurate and reliable information for their
academic work. However, due to the lack of a standard referencing guide,
ChatGPT may not be effectively fulfilling this job. Users encounter difficulties in
determining the origins of ChatGPT’s information and referencing it appropri-
ately, hindering their ability to rely on ChatGPT as a trustworthy source. This
unfulfilled job highlights the need for a solution that provides clear guidelines and
standards for referencing ChatGPT-generated information, enabling users to
confidently incorporate its content into their academic work. From a Bourdieu-
sian perspective, the absence of a standard referencing guide for ChatGPT reflects
the power dynamics and struggles over legitimacy in the academic field. The lack
of clear guidelines puts users at a disadvantage, as they are unable to conform to
the established norms of referencing and may face criticism for not adhering to
the traditional referencing practices. This situation reinforces the dominant
position of established sources and conventional referencing systems, which may
hinder the recognition and acceptance of ChatGPT-generated information within
academic discourse. ChatGPT relies on surveillance data from undisclosed
sources, which raises concerns about copyright infringement and fair
compensation for creators. The absence of clear guidelines for referencing
ChatGPT-generated content further exacerbates the commodification of infor-
mation and the devaluation of labour in the production of academic knowledge.
Users are left grappling with the challenge of appropriately citing and referencing
information derived from ChatGPT while the sources remain undisclosed and
uncompensated. From Heidegger’s perspective, the lack of a standard referencing
guide can be seen as a consequence of the instrumentalisation of technology in the
academic context. The focus on efficiency and productivity in using ChatGPT as a
tool for generating content overlooks the essential nature of referencing as a
means of acknowledging the origins and authenticity of knowledge. The absence
of clear guidelines reflects a reduction of referencing to a technical task, neglecting
its ontological significance in preserving the integrity and transparency of aca-
demic work.

Gives Incorrect Information/Lack of Response Relevance


The extant literature highlighted that ChatGPT may generate inaccurate infor-
mation, which is due to limitations in its capabilities and training techniques.
Examples from the data illustrate this. ‘Sometimes ChatGPT cites sources that are
not there’ (SFD); ‘What I have found is that it is really bad with giving credible
sources. They are often non-existent’ (SFD); ‘When asked about transgender
pronouns in Turkey, ChatGPT said that there was an issue with transgender
pronouns in Turkey. However, this information was completely wrong. There is
no gender in pronouns in Turkish’ (TFD); ‘At a conference, a professor from a
university was giving a presentation about a new initiative that was about to be
released in 2023. One of the participants on my table used ChatGPT to look up
information about the university and the initiative and was incorrectly given
information that the initiative started in 2019’ (SYFD). Other examples are as
follows: ‘It made suggestions for future improvements and research directions, but
I don’t think it’s relevant to my project because I don’t think my project should
mention future developments’ (SFD); as commented on by a teacher in a work-
shop, ‘What ChatGPT prepares might not be what you want or may not be
relevant to your students’ needs and interests’ (SYFD); ‘ChatGPT generated
jokes, but the jokes weren’t particularly funny or didn’t seem to make much sense’
(SFD); ‘Sometimes it gives answers that are not quite related to the topic you are
asking about’ (SFD).
The literature also supports these concerns. Instructors, as highlighted in
Rudolph et al.’s 2023 paper, raised significant concerns about ChatGPT’s limi-
tations in understanding and evaluating the relevance or accuracy of the infor-
mation it produces, saying that while ChatGPT can generate text that appears
passable, it lacks a deep comprehension of the subject matter. In addition,
according to Tlili et al.’s (2023) study, participants generally found the dialogue
quality and accuracy of information provided by ChatGPT to be satisfactory.
However, they also acknowledged that ChatGPT is prone to occasional errors
and limited information, noting that while responses from ChatGPT were
generally considered reasonable and reliable, there were instances where
misleading information was present alongside the answers.
Through Christensen’s lens, users of ChatGPT are hiring it to provide accurate
and reliable information. However, the examples from the data demonstrate that
ChatGPT often fails to fulfil this job, as it generates inaccurate information or
lacks relevance in its response. This misalignment between users’ expectations and
the actual performance of ChatGPT indicates a gap in fulfilling the job it is hired
for. Bourdieu’s sociological perspective emphasises the role of social structures
and cultural capital in shaping individuals’ actions and preferences. In the case of
ChatGPT, instructors, as highlighted by Rudolph et al. (2023), express concerns
about its limitations in understanding and evaluating information. These concerns
are influenced by the instructors’ position as experts in the educational field,
where accuracy and relevance of information are highly valued cultural capital.
The instructors’ scepticism towards ChatGPT’s ability to fulfil this role reflects
their reliance on established knowledge and expertise, and it is this concern that is
reflected in their evaluation of the technology. Through a Marxist lens, the lim-
itations and inaccuracies in ChatGPT’s performance may be attributed to the
inherent contradictions and dynamics of capitalist production, where the pursuit
of efficiency and profit often takes precedence over ensuring comprehensive and
accurate information. The potential biases and shortcomings of ChatGPT may be
seen as by-products of the capitalist system’s influence on technological devel-
opment. Through Heidegger’s lens, ChatGPT’s ability to generate text that
appears passable but lacks deep comprehension of the subject matter raises
existential concerns. Heidegger argues that technology can lead to a mode of
being characterised by instrumental rationality, where human activities become
reduced to mere means to an end. In the context of education, ChatGPT’s limi-
tations in grasping and assessing information raise questions about its impact on
the genuine understanding and critical thinking skills of students. It highlights the
need to reflect on the role of technology in shaping educational practices and the
nature of knowledge acquisition.

Occurrences of Refusal or Reprimand by ChatGPT


Instances of refusal or reprimand by ChatGPT occur when the system declines to
provide a response or admonishes the user for specific inputs or queries. This
behaviour is likely a result of OpenAI’s implementation of the Moderation API,
the AI-based system aimed at detecting language violations and ensuring
compliance with their content policy, which targets misogyny, racist remarks and
false news. However, it is important to acknowledge that this system is not
flawless. As mentioned in the extant literature, there have been instances where
users have managed to bypass the moderation system, leading to the generation of
inappropriate content by ChatGPT.
Notably, our data indicate that ChatGPT may have been overly reliant on the
moderation system, resulting in instances of refusal or even what could be
interpreted as reprimand. For example, one student reported, ‘ChatGPT does not
use slang words and does not respond when asked questions using slang words’
(SFD). The instructor said, ‘We were inputting terms that were used by the
Unabomber and that ultimately led to him being identified, so they were an
important part of the case. However, ChatGPT refused to discuss some of the
slang items as they were considered derogatory and it even reprimanded us for
asking about these terms’ (TFD). Similarly, when asking about the suicide of
Kurt Cobain and certain words in one of the notes, the instructor noted,
‘ChatGPT refused to discuss the topic, deeming it inappropriate, and it also
refused to discuss some of the words, such as “bitch,” considering them deroga-
tory language’ (TFD). Furthermore, students in the study discovered that
ChatGPT does not use swear words (SFD). Interestingly, occurrences of refusal
or reprimand did not come up in the literature.
Through Christensen’s lens, users of ChatGPT expect it to provide accurate
and reliable information. However, the instances of refusal or reprimand indicate
a misalignment between users’ expectations and the actual performance of
ChatGPT. Users may have specific tasks or queries in mind that they want
ChatGPT to fulfil, but the system’s limitations and reliance on the moderation
system can lead to frustrating experiences for users who are unable to get the
desired responses. These instances of refusal or reprimand by ChatGPT may also
be viewed through the lens of Bourdieu’s cultural capital, where the system is
programmed to avoid language violations and promote compliance with the
content policy. The instructors’ experiences, where they were reprimanded for
discussing certain topics or using specific language, reflect the clash between their
expertise and established knowledge and the system’s limitations in understanding
the context and nuances of their queries. The pursuit of efficiency and profit may
prioritise the moderation system’s effectiveness in addressing language violations,
but it may fall short in fully understanding and addressing the complexity of user
queries and intentions. Taking a Heideggerian stance, ChatGPT's reliance on the
moderation system and its instances of refusal or reprimand raise existential con-
cerns. Users may question the role of technology in shaping their interactions and
limiting their freedom to engage in certain discussions or use specific language. It
raises broader questions about the impact of AI systems like ChatGPT on genuine
understanding, critical thinking skills and the nature of knowledge acquisition in
educational settings.

Gender Bias in Pronoun Usage


ChatGPT tended to default to male pronouns unless specifically instructed
otherwise. As seen in the extant literature, gender bias in AI systems, including
ChatGPT, is a well-documented issue. This may be because the training data are
predominantly created by men, which then introduces biases into the system that
perpetuate stereotypes and reinforce power imbalances, resulting in allocative and
representational harms. Examples of this were seen in our data: 'When using
ChatGPT to craft a letter including reference to the president and vice provost at
George Washington university, ChatGPT defaulted to using male pronouns, even
though one of the people I was referring to was female’ (SYFD); ‘When I asked
ChatGPT to summarise a text about the researcher - in reference to myself, a
female - it automatically defaulted to he/him’ (RFD).
This was also seen in the literature. Mhlanga (2023) drew attention to potential
bias in ChatGPT arising from its training data and cautioned against AI algo-
rithms exacerbating biases and discrimination, leading to the further margin-
alisation of under-represented groups. In line with promoting fairness and
impartiality, Alshater (2022) emphasised the need to prioritise equitable treatment
and avoid any form of discrimination when developing and utilising ChatGPT
and similar technologies. He underscored the significance of acknowledging and
addressing potential biases or discriminatory consequences that may arise from
these technologies. Additionally, Alshater drew attention to the training process
of ChatGPT and similar technologies, specifically noting the potential biases or
inaccuracies that can arise from extensive datasets.
Through Christensen's lens, one job users hire ChatGPT to do is the accurate and
unbiased use of pronouns. Users expect such systems to understand and respect
gender identities and use appropriate pronouns. The gender bias observed in
ChatGPT's defaulting to male
pronouns reflects a failure to adequately fulfil this job, as it overlooks the diverse
gender identities and perpetuates stereotypes. Through the lens of Bourdieu, the
training data of AI systems like ChatGPT, which is predominantly created by
men, reflect the power dynamics and social structures that exist in society. This
leads to the reproduction of biases and power imbalances, reinforcing the
dominant social norms and marginalising under-represented groups. The gender
bias in pronoun usage can be seen as a manifestation of the symbolic power
wielded by certain groups in shaping AI systems and perpetuating unequal social
relations. From a Marxist perspective, the issue of gender bias in pronoun usage
can be understood as a reflection of the broader class struggle and exploitation
within capitalist societies. The dominance of young white men in creating the
training data for AI systems like ChatGPT is a result of the power dynamics and
economic structures that prioritise certain groups over others. The gender bias in
pronoun usage reinforces the existing power imbalances by marginalising and
excluding under-represented groups, thereby perpetuating their subordination
within the capitalist system. In the context of gender bias in pronoun usage,
Heidegger’s notion of technology as a mode of revealing can be applied.
ChatGPT’s defaulting to male pronouns reveals the underlying biases and
assumptions embedded in its programming and training data. It highlights how
technology can reinforce and perpetuate societal norms and power structures,
limiting the possibilities for authentic and inclusive interactions. By recognising
and addressing this bias, individuals and society can strive for a more open and
inclusive understanding of gender and language.

Need to Fact-Check
The importance of fact-checking is highlighted by the following examples from
the data: ‘To fact-check the information from ChatGPT, I did my own research
and double-checked it. For instance, when ChatGPT mentioned the title of the
Unabomber’s manifesto as “Industrial Society and Its Future,” I made sure to
check it myself before using it in my conclusion’ (SFD); ‘I think ChatGPT is
useful for research, but you need to check the information against other sources to
make sure it is giving the correct information’ (SFD); ‘I did the homework with
ChatGPT but I checked the information it gave me with another source’ (SFD);
‘We can’t rely on the accuracy of all the information it gives us. We need to check
it by researching it ourselves’ (SFD); ‘ChatGPT was very poor at generating real
and relevant literature. . . Therefore, always fact-check what it is saying’ (RFD); ‘I
found it a useful starting point to get ChatGPT to generate ideas about gaps in the
literature, but felt it was more accurate to rely on my own identification of gaps
from reading all the papers’ (RFD). In addition, the instructor made the following
observation: ‘Students used ChatGPT as a search engine to ask about the
Unabomber case. However, we didn’t know where any of the information came
from. We thought there were two problems with this. The first is that if you use
this information, you are not giving any credit to the original author. The second
is that ChatGPT is a secondary source and should not be treated as a primary
source, therefore we agreed that everything taken from ChatGPT should be
fact-checked against a reliable source’ (TFD).
This was also seen in the literature. Mhlanga (2023) emphasised the impor-
tance of critically evaluating the information generated by ChatGPT and
discerning between reliable and unreliable sources. In line with this, Firaina and
Sulisworo (2023) recognised the benefits of using ChatGPT to enhance produc-
tivity and efficiency in learning, but they also emphasised the need to maintain a
critical approach and verify the information obtained. They stressed the impor-
tance of fact-checking the information generated by ChatGPT to ensure its
accuracy and reliability. Similarly, Alshater’s (2022) research underscored the
importance of fact-checking and verifying the information produced by these
technologies.
Through Christensen’s lens, we can observe that users hire ChatGPT for
specific purposes such as generating information, assisting with research tasks and
improving productivity and efficiency in learning. However, as we saw, due to
limitations within the system, users also recognise the need for fact-checking as a
crucial task when utilising ChatGPT. Fact-checking allows users to ensure the
accuracy and reliability of the generated information, fulfilling their goal of
obtaining trustworthy and verified knowledge. This aligns with the principle of
Christensen’s theory, where users seek solutions that help them accomplish their
desired outcomes effectively. Through the lens of Bourdieu, we can view the
emphasis on fact-checking as a manifestation of individuals’ cultural capital and
critical thinking skills. Users demonstrate their ability to engage in informed
decision-making by recognising the importance of critically evaluating informa-
tion and distinguishing between reliable and unreliable sources. In terms of
Marx’s theory, the focus on fact-checking reflects the power dynamics between
humans and AI systems. Users exert their power by independently verifying
information, reducing the potential influence of AI systems on their knowledge
and decision-making processes. Fact-checking can be seen as a way for individ-
uals to assert their agency in the face of technological advancements. Considering
Heidegger’s philosophy, fact-checking represents the individual’s active engage-
ment with the information provided by ChatGPT and their critical interpretation
of its accuracy. Users understand that AI-generated information is fallible and
recognise the importance of their own engagement and interpretation to arrive at
a reliable understanding of the world.

How ChatGPT May Affect the Roles of Stakeholders


The challenges posed by ChatGPT will require students to navigate the lack of a
standard referencing guide, which can make it difficult for them to appropriately
reference the information generated by the system. This challenge raises concerns
about the transparency and traceability of sources used in academic and research
outputs that incorporate ChatGPT-generated content. Students will need to
develop strategies to verify the origins and credibility of the information provided
by ChatGPT and integrate it effectively into their academic work. Instructors will
need to address the limitations and challenges associated with ChatGPT. These
include its potential for providing incorrect or irrelevant information, which may
affect instructors’ reliance on the system for academic purposes.
Instructors will have to be cautious and verify the accuracy of the information
generated by ChatGPT before incorporating it into their lessons or assignments.
They may also need to guide and assist students in critically evaluating and
fact-checking ChatGPT-generated content to ensure the reliability and validity of
the information used in their coursework. This reliance on fact-checking will place
a greater emphasis on critical evaluation skills and information literacy among
students. Instructors and institutions may need to incorporate fact-checking
strategies and promote a culture of critical inquiry to ensure that students are
equipped to evaluate and validate the information they obtain from ChatGPT.
In summary, our analysis highlights key insights regarding ChatGPT's limitations
and challenges. It lacks a standardised referencing guide, and its output demands
fact-checking for accuracy. The moderation API restricts user engagement, and
gender bias in defaulting to male pronouns needs addressing. To overcome these
issues, comprehensive AI literacy training and ethical policies are essential for
responsible AI integration in academia.

Human-like Interactions with ChatGPT


This theme explores the nature of communication and the perceived human-like
interactions that users experience when interacting with ChatGPT. By exploring
the theme of human-like interactions in users’ engagement with ChatGPT,
encompassing interactivity, perception and trust, we can gain valuable insights
into the social and psychological dimensions of utilising this AI system.

Interactivity of Communication/Perceived Human-like Interactions with ChatGPT
The interactive, dynamic and responsive nature of communication between users
and ChatGPT becomes evident as users perceive or experience ChatGPT
exhibiting human-like qualities despite its artificial nature. This phenomenon was
observed in the data, with students describing
their collaboration and engagement with ChatGPT as a mutual interaction. One
student expressed the impact of this interaction on their learning and project
development, stating, ‘ChatGPT definitely helped me learn and develop in my
project. The most important effect of this was that it analysed the incident for me
and gave me an example to give me an idea about the incident. For example, we
analysed the article Industrial Society and Its Future together’ (SFD). The sense
of connecting with a human was also highlighted by students who emphasised
ChatGPT’s ability to provide advice and make research more conversational.
They described feeling a connection with ChatGPT, stating, ‘ChatGPT can give
advice to people who apply to them like a lawyer’ (SFD) and ‘I can chat with it as
if I were talking to a human being’ (SFD). These experiences were not limited to
academic contexts but extended to personal interactions as well. The researcher
shared their experience of setting up custom personas for different theorists and
engaging in discussions through those personas, highlighting the interactive
nature of the communication process (RFD). Furthermore, the principal inves-
tigator’s daughter also demonstrated the human-like perception of ChatGPT,
creating a custom persona and engaging in conversations about her life, seeking
recommendations and treating the bot as a companion (RFD).
In Tlili et al.’s (2023) study, many participants were impressed by the
smoothness of their conversations with ChatGPT, describing the interactions as
exciting and enjoyable. However, it was noted that ChatGPT, being limited to a
textual interface, lacked the ability to detect physical cues or emotions from users.
This led participants to express the need for improving the human-like qualities of
ChatGPT, particularly in terms of enhancing its social role.
Through the lens of Christensen, the findings show that users perceive
ChatGPT as fulfilling the job of providing interactive and human-like commu-
nication. Users express the experience of collaborating and engaging with
ChatGPT as a mutual interaction, indicating that they see it as a tool that enables
productive and engaging conversations. They describe how ChatGPT helps them
in their learning and project development by analysing incidents and providing
examples, thus assisting them in their educational tasks. Users also highlight the
role of ChatGPT in providing advice and making research more conversational,
suggesting that it fulfils the job of facilitating advisory and conversational inter-
actions. From a Bourdieusian perspective, the interactions with ChatGPT can be
understood in terms of the social and cultural capital embedded within them.
Users attribute value and significance to their interactions with ChatGPT,
perceiving it as a resource that enhances their learning and project development.
They describe a sense of connection and treat ChatGPT as a companion, sug-
gesting that it holds symbolic value and contributes to their social experiences.
From a Marxian standpoint, the findings imply a potential shift in labour
dynamics. ChatGPT is described as providing advice and engaging in conversa-
tional interactions, which traditionally might have required human professionals
such as lawyers. This raises questions about the displacement of certain job roles
and the impact of technology on labour markets. Additionally, the smoothness
and enjoyable nature of interactions with ChatGPT might contribute to users’
satisfaction and well-being, reflecting the potential for technology to shape indi-
viduals’ experiences in a capitalist society. Through a Heideggerian lens, the
findings indicate that users perceive ChatGPT as a tool that bridges the gap
between artificial and human intelligence, providing a sense of connection and
mutual understanding. Users describe engaging in discussions and even devel-
oping custom personas to interact with ChatGPT, highlighting the existential
significance of these interactions as a means of relating to the world and others. It
suggests that ChatGPT, despite its artificial nature, becomes part of users’ lived
experiences and influences their sense of self and social interactions.

People Perceive ChatGPT as Providing Opinions Rather Than Predictions/Unquestioning Trust in Information
People’s perception of ChatGPT often leads them to view it as a source of per-
sonal opinions rather than predictions. Furthermore, users’ unquestioning trust in
the information provided by ChatGPT, even when it is incorrect, underscores the
importance of critically evaluating and approaching AI-generated content with
scepticism. This was seen in the data in the following ways. The instructor stated,
‘In a lesson on the use of transgender pronouns, ChatGPT put forward ideas
about transgender rights, even though it was not prompted to. This was perceived
by the students as ChatGPT giving an opinion. This led to a discussion about how
ChatGPT is not human and is based on text prediction from its database and
cannot, therefore, give an opinion, even though it sounds like it is giving an
opinion’ (TFD). Furthermore, the researcher recounted, ‘At a conference, a
professor from a university was giving a presentation about a new initiative that
was about to be released in 2023. One of the participants on my table used
ChatGPT to look up information about the university and the initiative and was
incorrectly given information that the initiative started in 2019. His immediate
reaction was that there was an error in the presentation, not with the information
from ChatGPT. He was instantly willing to believe ChatGPT over the presenter’
(SYFD). These examples highlight instances where users’ perceptions and
unwavering trust in ChatGPT’s information influenced their interactions and
decision-making.
This was also seen in the literature. According to Mhlanga (2023),
teachers must play a crucial role in helping students develop a critical and
informed perspective on the application of AI in the classroom by encouraging
students to question and analyse the output of ChatGPT and other AI systems,
promoting a deeper understanding of how these technologies work and their
potential shortcomings. As accuracy is essential in education, Mhlanga under-
scores the importance of critical thinking for both teachers and students when
using ChatGPT, urging them to verify information from reliable sources to ensure
its accuracy. This highlights the responsibility of educators to guide students in
discerning reliable information and avoiding blind trust in the output of AI sys-
tems. Sullivan et al. (2023) emphasised the need to establish clear conditions,
acknowledge potential inaccuracies and biases in ChatGPT’s outputs and pro-
mote critical thinking and information literacy skills among students. The authors
cautioned against blindly trusting the information provided by AI systems like
ChatGPT, emphasising the importance of critically evaluating and verifying it. By
developing these skills, they believe students can become discerning consumers of
information and make informed decisions when engaging with AI tools.
According to Rudolph et al. (2023), ChatGPT, being an AI language model, lacks
true comprehension and knowledge of the world. It simply generates text based
on patterns and examples from its training data, without genuine understanding
of the content or context. As a result, there is a potential risk that ChatGPT may
produce responses that sound intelligent and plausible but are not factually
accurate or contextually appropriate. Rudolph et al. (2023) also point out that
while ChatGPT may be perceived as giving opinions, it is only providing text
predictions based on statistical patterns in the training data, which can include
both accurate and inaccurate information. Thus, they highlight the importance of
educators and institutions being aware of this limitation and ensuring that stu-
dents are equipped with the necessary critical thinking and information literacy
skills to effectively engage with and evaluate the outputs of ChatGPT. They
highlight that it is crucial to emphasise the importance of verifying information
from reliable sources and not solely relying on ChatGPT for accurate and
trustworthy information to avoid the propagation of inaccuracies or misleading
information in educational settings.
Viewed through Christensen’s perspective, users’ perception of ChatGPT as a
source of personal opinions rather than predictions suggests a desire for not just
accurate information but also a sense of personal validation or perspective. This
implies that users might seek affirmation or information that aligns with their
existing beliefs or opinions. However, it is essential for users to recognise that
ChatGPT’s main function is to provide text predictions based on statistical pat-
terns, not to offer personal opinions. Bourdieu’s theory of social reproduction and
cultural capital provides a deeper understanding of why some users unquestion-
ingly trust the information from ChatGPT, even when it’s incorrect. According to
Bourdieu, users’ habitus, shaped by their social and cultural backgrounds,
significantly influences this behaviour. Those with lower cultural capital or limited
exposure to critical thinking may be more susceptible to blindly trusting
ChatGPT. In contrast, individuals with higher cultural capital approach it with
scepticism, critically evaluating its outputs. This perspective emphasises the
importance of fostering information literacy and critical thinking, especially
among users with limited cultural capital, to prevent the spread of inaccuracies
and misleading information through blind trust in ChatGPT. Bourdieu’s ‘voice of
authority’ theory further supports this alignment, where users with limited
exposure to critical thinking may accept ChatGPT’s information as authoritative,
even when incorrect. On the other hand, users with higher cultural capital can
more readily assess ChatGPT's output critically. Symbolic power associated with repu-
table institutions reinforces the perception of ChatGPT as an authoritative
source. Hence, promoting digital literacy and critical thinking is crucial for more
informed engagement with AI technologies like ChatGPT. Marx’s theory of
alienation becomes relevant in the context of users’ unwavering trust in
ChatGPT’s information, shaping their interactions and decision-making. This
blind trust can be interpreted as a form of alienation, where users rely on an
external entity (ChatGPT) for information and decision-making, foregoing their
own critical thinking and access to diverse sources of knowledge. Such depen-
dency on ChatGPT reinforces power dynamics between users and the technology,
as users surrender their agency to the AI system. Through a Heideggerian lens,
because ChatGPT operates on patterns and examples in its training data without
a deep understanding of content or context, existential questions arise about the
nature of AI and its role in providing meaningful and reliable infor-
mation. Users’ blind trust in ChatGPT’s output can be seen as a result of the
technological framing, where users perceive AI as all-knowing or infallible,
despite its inherent limitations.

How ChatGPT May Affect the Roles of Stakeholders


Because ChatGPT can generate human-like conversations and coherent
responses, students perceive it as a tool for mutual interaction, collaborating on
projects, analysing incidents and receiving advice. This blurs the boundary
between human and AI interaction and has the potential to make their learning
more conversational and connected. However, this comes with caveats. It is
essential for students to develop critical thinking skills to discern between opin-
ions and predictions. They should also learn to verify information from reliable
sources to avoid blind trust in ChatGPT’s output and ensure the accuracy and
reliability of the information they receive.
Instructors will need to play a crucial role in guiding students’ interactions with
ChatGPT. They need to educate students about the limitations of AI systems like
ChatGPT and encourage them to question and analyse the output critically. By
promoting critical thinking and information literacy skills, instructors can help
students develop a discerning approach to AI-generated content. It is also
important for instructors to stay updated with advancements in AI and adapt
their teaching methodologies to incorporate ChatGPT effectively into the
learning process. They should provide guidance on responsible and ethical use of
AI, acknowledging potential inaccuracies and biases in ChatGPT’s outputs. By
establishing clear conditions for its use and integrating information literacy into
the curriculum, instructors can ensure that students are equipped with the
necessary skills to effectively engage with ChatGPT and make informed decisions.
Institutions of higher education have a responsibility to integrate AI literacy
and critical thinking skills into the curriculum and to provide students and
instructors with the resources and support to do so. Institutions should also establish clear
guidelines for the ethical use of AI in education, considering the limitations of
ChatGPT and other AI systems. By doing so, institutions can ensure that students
are aware of the potential risks of unquestioning trust in AI-generated content
and promote responsible and ethical engagement with AI tools.
In summary, users perceive ChatGPT as more than just an AI language model,
fostering a sense of connection. However, this perception can lead to unques-
tioning trust in its information, emphasising the need for critical thinking and
information literacy training. Thus, educators must address sociocultural factors
and power dynamics influencing user trust. Practical actions, including AI literacy
training, promoting critical thinking and implementing ethical guidelines, are
essential for responsible engagement with AI technologies like ChatGPT in
education.

Personal Aide/Tutor Role of ChatGPT


In the theme, ‘A Personal Aide/Tutor’, the focus is on exploring the diverse
advantages and capabilities of ChatGPT as a personal tutor or aide. It encom-
passes codes such as enriching the user’s ideas, speeding up the process, reducing
cognitive load, usefulness in other areas of life, ability to translate, reviewing
work, giving suggestions for improvement, text register modification and
imparting specific knowledge, skills and concepts.

Enriches the User’s Ideas


The capacity of ChatGPT to enrich users’ ideas is evident in the data. The
researcher stated, ‘It was relatively easy for me to identify a research methodology
based on my previous experience and existing knowledge. However, asking
ChatGPT for suggestions led to a broader array of suggestions that enriched my
choice. This was also the same when it came to data collection ideas and research
methods’ (RFD). The researcher further commented, ‘ChatGPT was excellent at
identifying limitations, implications, and conclusions as well as ideas for further
research from my text. It even made suggestions that I had not thought of’
(RFD). Similarly, the instructor stated, ‘ChatGPT very rapidly suggested cases
for the forensic linguistics course. Some of these were cases I had not heard of
before’ (TFD). She also noted that ChatGPT provided numerous ideas for
activities to add variety to lessons, enhancing the teaching experience (TFD).
Additionally, students used ChatGPT to generate ideas for their final projects
(SFD).
These findings align with the literature. According to Sullivan et al. (2023),
ChatGPT can help users overcome writer’s block and provide prompts for
writing, offering new perspectives and alternative approaches to stimulate crea-
tivity. Neumann et al. (2023) also discussed ChatGPT’s potential to provide fresh
ideas for activities and assignments, highlighting its innovative potential in pri-
mary tasks. Rudolph et al. (2023) mentioned ChatGPT’s ability to generate ideas
and suggestions for students’ writing tasks, contributing to the enrichment of their
ideas. They also noted that exposure to well-written examples generated by
ChatGPT can improve students’ understanding of effective writing styles. Tlili
et al. (2023) found that ChatGPT enhances educational success by providing
comprehensive knowledge on various topics, potentially sparking new ideas and
facilitating deeper learning. They also noted that instructors found ChatGPT
useful in generating specific and relevant learning content, enhancing under-
standing and inspiring new ideas. In addition, they discovered that ChatGPT also
prompts teachers to explore new teaching philosophies and assessment methods
that promote critical thinking and idea generation. Similarly, Firaina and Sulis-
woro (2023) found that ChatGPT serves as a communication channel for users,
providing access to fresh information and ideas, facilitating the development of
new knowledge and skills. Furthermore, Alshater (2022) highlighted ChatGPT’s
flexibility in addressing a wide range of research questions, such as generating
realistic scenarios for financial modelling or simulating complex economic sys-
tems, stating that this flexibility allows researchers to explore new ideas and
perspectives. In the pilot study by Zhai (2022), ChatGPT generated coherent and
insightful writing for an academic paper on the topic of ‘Artificial Intelligence for
Education’. ChatGPT’s responses guided the author in organising the paper,
developing a clear outline and providing valuable information on the history,
potential and challenges of AI in education. ChatGPT also provided detailed
descriptions and use cases, enriching the understanding of AI concepts and their
applications in education.
Through Christensen’s lens, in the context of ChatGPT enriching users’ ideas,
individuals are seeking a solution to enhance their creative and intellectual
endeavours. ChatGPT serves as a tool that assists users in generating ideas and
expanding their knowledge base. By providing prompts, alternative perspectives
and well-written examples, ChatGPT fulfils the job of stimulating creativity and
facilitating idea generation. Bourdieu argues that individuals’ actions and pref-
erences are influenced by their social position and access to cultural, social and
economic capital. In the context of ChatGPT, its usage and accessibility may be
influenced by factors such as educational background, institutional support and
economic resources. Users with greater access to education and resources may be
more likely to benefit from ChatGPT’s idea-enriching capabilities, while others
with limited access may face barriers in fully utilising the tool. Marx highlights the
commodification of knowledge and labour in capitalist societies, where intellec-
tual and creative work is often undervalued or exploited. In the case of ChatGPT,
it serves as a tool that can potentially replace certain tasks performed by edu-
cators and researchers. This raises questions about the impact on labour dynamics
and the potential devaluation of human expertise. While ChatGPT can enhance
idea generation and support research, it is essential to consider the broader
socioeconomic implications and ensure that the tool’s use does not lead to the
erosion of human labour or exacerbate inequalities. Heidegger argues that tech-
nology can shape human experience and understanding, often leading to a loss of
authenticity and a distancing from our essential being. In the context of ChatGPT
enriching users’ ideas, Heidegger’s perspective prompts us to reflect on the impact
of relying on AI-driven tools for intellectual and creative pursuits. While
ChatGPT offers valuable assistance, it is crucial to maintain a critical awareness
of its limitations and not allow it to replace the role of human creativity, inter-
pretation and critical thinking. Balancing the use of technology like ChatGPT
with human agency and reflection is essential to preserve the authenticity of our
intellectual endeavours.

Speeds Up the Process


ChatGPT offers the advantage of speeding up various tasks or processes when
compared to conventional methods. The students commented: ‘It is very useful
for the learning part, because when you are researching on your own you are very
likely to get wrong information. It also takes much longer. . . Another example is
that I was able to finish a project that would normally take much longer in a
shorter time’ (SFD); ‘ChatGPT helped me learn. It enabled me to learn effectively
in a short time’ (SFD); ‘When I ask it for information, I do it to shorten the time I
spend researching’ (SFD); ‘ChatGPT has improved my ability to access resources
and my academic speed’ (SFD). One student also commented, ‘ChatGPT is very
easy for new generation people. . . people catch the information very easily.
Therefore, learning new information is very easy, cheap, and fast for people’
(SFD). In a workshop, one teacher commented: ‘I think it is really useful for
preparing authentic lesson contents for teachers. The teachers don’t need to waste
their time and energy creating new materials, but they can save more time for
their students’ specific needs’ (SYFD). Another teacher commented, ‘I think it
works like an assistant which shows you the best options according to what you
want from it. It is very useful and saves you time’ (SYFD). The phenomenon of
speeding up the process was captured beautifully in a comment made in a meeting
with a ChatGPT think tank, with one participant saying, ‘ChatGPT is like having
a superpower, I can lift more (intellectually) than I could ever lift before, I can do
everything ten times faster than I could do before’ (SYFD). The researcher made
the following comments: ‘Using ChatGPT to quickly summarise literature sped
up the process of identifying which literature was most relevant’ (RFD), and
‘Asking ChatGPT for suggestions led to a broader array of suggestions that
enriched my choice. It also sped up the process’ (RFD). The researcher also
commented, ‘Using ChatGPT as a tool for developing the research design really
sped up the process’ (RFD); ‘Getting ChatGPT to generate surveys and interview
questions was extremely efficient, it saved a lot of time’ (RFD); ‘ChatGPT can
very quickly combine documents, saving a lot of time’ (RFD). The instructor
commented: ‘ChatGPT very rapidly suggested cases for the forensic linguistics
course. Some of these were cases I had not heard of before. . . This process was
much faster than my original way of scouring the internet’ (TFD), and ‘ChatGPT
112 The Impact of ChatGPT on Higher Education

can very quickly come up with scripts for pre-class videos as well as suggesting
images and visuals that can be used in the video’ (TFD). The instructor also said,
‘ChatGPT was very good at coming up with ideas for in-class assessments. This
certainly saved me time’ (TFD). From observations during lessons, the instructor
noted that: ‘Students wrote down traditional gendered pronouns in English and
then tried to research online contemporary genders in English. They then tried the
same through ChatGPT. ChatGPT was more efficient at this activity, thus saving
the students time’ (TFD).
In the literature, Fauzi et al. (2023) emphasised that students can optimise their
time management by leveraging ChatGPT’s features, such as storing and
organising class schedules, assignment due dates and task lists. This functionality
enables students to efficiently manage their time, reducing the risk of overlooking
important assignments or missing deadlines. Similarly, Firaina and Sulisworo
(2023) reported that using ChatGPT helped learners understand material more
quickly. According to the lecturers interviewed in their study,
ChatGPT facilitated a quicker understanding by providing access to new infor-
mation and ideas. Alshater (2022) reported that ChatGPT and similar advanced
chatbots can automate specific tasks and processes, such as extracting and ana-
lysing data from financial documents or generating reports and research sum-
maries, concluding that by automating these tasks, ChatGPT saves researchers’
time and expedites the research process. Alshater also noted that the ability of
ChatGPT to swiftly analyse large volumes of data and generate reports and
research summaries contributes to the accelerated speed of research. Zhai
(2022) utilised ChatGPT in his study to compose an academic paper and was able
to complete the paper within 2–3 hours. This demonstrates how ChatGPT
expedites the writing process and enables efficient completion of tasks. Zhai
further observed that ChatGPT exhibited efficient information processing capa-
bilities, swiftly finding the required information and facilitating the completion of
tasks within a short timeframe. These findings highlight the central focus on
enhancing productivity, time management, understanding complex topics and
expediting processes.
This aligns with Christensen’s theory, in that users are hiring
ChatGPT to complete these productivity and learning-related tasks. Through a
Bourdieusian lens, the use of ChatGPT can be viewed as a means to acquire
additional social and cultural capital. Students and researchers can leverage
ChatGPT to gain access to information, knowledge and efficient tools, thereby
enhancing their learning outcomes and research productivity. Through a Marxist
lens, the potential of ChatGPT and similar technologies to automate tasks, save
time and expedite processes raises concerns about the impact on labour and
employment. While ChatGPT improves efficiency for individuals, there are
implications regarding job displacement and the concentration of power and
resources among those who control and develop these technologies. Heidegger’s
perspective prompts critical reflection on the consequences of heavy reliance on
AI technologies like ChatGPT for tasks traditionally performed by humans.
While ChatGPT offers convenience and efficiency, it raises questions about the
potential loss of human connection, critical thinking and creativity. This invites us
to contemplate the role of technology in shaping our understanding, relationships
and overall existence in the world.

Reduces Cognitive Load


ChatGPT emerged as a powerful tool to alleviate cognitive load and reduce
mental effort by providing valuable assistance and performing tasks on behalf of
users. The data highlight the diverse ways in which ChatGPT achieves this
objective. For instance, the instructor commented, ‘We used ChatGPT to create
mnemonics as a memory guide. This was a great activity because the students
could focus on using the mnemonic instead of using their cognitive load to come
up with one in the first place’ (TFD). And, ‘We got ChatGPT to summarise long
articles in class. This was very useful for the students to get a rough idea of the
article and decide on its relevance before reading for more detail. This is useful in
reducing the cognitive load of students if they have a lot to read’ (TFD).
Moreover, students found value in using ChatGPT to swiftly obtain accurate
definitions for complex terms. Students asked ChatGPT to give them definitions
for the words ‘semantics’ and ‘pragmatics’ (TFD).
ChatGPT helped them reduce the mental effort required for specific tasks: ‘It
helped me a lot in doing my homework. I made the titles in my homework
according to the rubric and planned my work accordingly, so I didn’t have to
think about these parts too much’ (SFD); ‘ChatGPT helped me learn effectively in
a short time. It made it easier for me to answer questions in class activities’ (SFD).
Furthermore, in a workshop, one teacher commented ‘Using ChatGPT as a
research buddy, with this way of use, students’ cognitive load will reduce and also
they’ll learn more about the relevant subject’ (SYFD).
Rudolph et al. (2023) highlighted the potential of AI-powered writing assis-
tants, like Grammarly, in facilitating English writing practices and enhancing
skills by providing real-time feedback, detecting errors and motivating students to
revise their writing. Although not explicitly mentioning reducing cognitive load,
the broader discussion suggested that AI chatbots and writing assistants can
alleviate cognitive load by assisting students in the writing process and promoting
self-directed learning. According to Tlili et al.’s (2023) study, participants rec-
ognised ChatGPT’s effectiveness in enhancing educational success by simplifying
learning and reducing cognitive load for students. The users found ChatGPT
valuable for providing baseline knowledge on various topics to teachers and
students, as well as offering a comprehensive understanding of complex subjects
in easy-to-understand language across different disciplines. Their study also
highlighted the potential for ChatGPT to automate feedback and lessen the
instructional workload for teachers. Firaina and Sulisworo (2023) found that
ChatGPT served as a communication channel for accessing fresh
information and ideas, alleviating the cognitive load associated with searching for
them. Their interviews revealed that using ChatGPT positively impacted pro-
ductivity and learning effectiveness, facilitating quicker understanding of material
and saving time in searching for resources, thus reducing the cognitive load in
learning. This was also suggested in Alshater’s (2022) study, where he observed
that AI chatbots can enhance productivity by automating tasks and improving
research efficiency, contribute to accuracy by identifying and rectifying errors in
data or analysis, and ensure consistency in research processes by following
standardised procedures and protocols. He believes this
helps researchers focus on the content and interpretation of their work, thus
alleviating cognitive load.
Regarding Christensen’s Theory of Jobs to be Done, ChatGPT can be hired to
simplify and streamline tasks, allowing users to offload cognitive effort onto the
AI chatbot. By providing assistance, such as generating mnemonics, summarising
articles and offering quick and accurate definitions, ChatGPT enables users to
focus on higher level cognitive processes rather than the more mundane aspects of
their work. Through Bourdieu’s lens, ChatGPT can be viewed as a tool that
bridges knowledge gaps and reduces cognitive load by providing access to
information that might be otherwise challenging to obtain. By acting as a
communication channel between users and knowledge, ChatGPT facilitates the
acquisition of fresh ideas and information, potentially levelling the playing field
for individuals with varying cultural capital. However, it is essential to recognise
how habitus shapes users’ interactions with ChatGPT, with some relying heavily
on it without critical evaluation. While ChatGPT’s role in reducing cognitive load
can empower learning, fostering critical thinking remains crucial to assess the
reliability of its outputs. Understanding users’ interactions within the context of
cultural capital and habitus is vital to evaluate ChatGPT’s impact on equitable
information access. Once again, Marxism sheds light on the potential impact of
AI chatbots like ChatGPT on the workforce. While ChatGPT’s ability to
automate tasks and enhance productivity is beneficial for users, it raises
concerns about the displacement of human labour. The introduction of AI
chatbots in education, as highlighted by the studies, may reduce the cognitive load
on students and teachers. However, it is essential to consider the broader societal
implications and ensure that the implementation of AI technologies aligns with
principles of equity and fair distribution of opportunities. Heidegger’s philosophy
emphasises the concept of ‘being-in-the-world’, which suggests that our existence
and understanding of the world are interconnected. In the context of ChatGPT
reducing cognitive load, we can relate this idea to the notion that ChatGPT
functions as a tool or technology that enhances our ability to engage with the
world. Thus, ChatGPT can be seen as an extension of our cognitive capacities,
enabling us to access and process information more efficiently. It acts as a
mediator between our ‘Being’ and the world of knowledge, allowing us to navi-
gate complex topics and reduce the mental effort required to search for infor-
mation. Additionally, Heidegger’s concept of ‘readiness-to-hand’ comes into play
when considering ChatGPT’s role in reducing cognitive load. According to
Heidegger, tools become seamlessly integrated into our everyday existence when
they are ready-to-hand. In the context of ChatGPT, it becomes a ready-to-hand
technology that we can effortlessly use to acquire knowledge. However, it is
essential to be mindful of Heidegger’s concerns about technology’s potential to
distract us from our authentic understanding of the world. While ChatGPT can
reduce cognitive load and streamline information access, it is crucial to retain
critical thinking and not overly rely on it as the sole source of knowledge.

Useful in Other Areas of Life


ChatGPT’s usefulness extends beyond its intended context, offering practical
applications and benefits in various areas of life. This was seen in the data through
first-hand observations. The researcher stated, ‘ChatGPT can very quickly
combine documents, saving a lot of time. You can also get it to peer review your
ethics application against the criteria and make suggestions for anything that has
been missed or can be improved upon. Since discovering this, I have also used the
same process to write a job reference, by inputting all aspects of the job, questions
asked, and the potential employees CV. This came up with the basic reference
which then had to be checked and personalised’ (RFD). The instructor also made
observations: ‘Students input text message abbreviations from the case into
ChatGPT and asked it to turn it into normal text. It did this much more suc-
cessfully than the students had done without it. This made us realise it would be a
useful tool for students to use outside the classroom as well if they needed to
understand abbreviations’ (TFD), and ‘Students used ChatGPT to change the
register of text from formal to informal, etc. This is a useful tool they will be able
to use outside the classroom as well. For example, if they need to change their
letter into a more formal letter’ (TFD). Students also expressed their diverse usage
of ChatGPT. One student said, ‘I asked about the earthquake history of Turkey’
(SFD). Another student mentioned seeking relationship advice from ChatGPT
(SFD). Another said, ‘I asked ChatGPT to write songs for my friends, had
character analysis of my favourite TV series, asked about chess moves, and got
investment advice’ (SFD). Yet another student mentioned, ‘Using the AI system
has become a habit for me. I now use ChatGPT to access the right information
about anything I am curious about’ (SFD). One student based their final project
on ChatGPT, stating, ‘I was preparing to move abroad for a year for an Erasmus
exchange programme. I used ChatGPT to find out about the university I was
going to, the town I would be living in and questions about the culture and history
in order to prepare for my trip. I also used it to help me learn basic phrases I
would need to know’ (SFD).
Although the existing literature provides limited information about the use-
fulness of ChatGPT in areas beyond education, Fauzi et al.’s (2023) study
highlighted its significant role in enhancing language skills, suggesting that
ChatGPT can assist students in improving their language proficiency by offering
valuable resources, and that students can use it to refine their grammar, expand
their vocabulary and enhance their writing style.
Through Christensen’s Theory of Jobs to be Done, we can see that users can
hire ChatGPT as a valuable tool to accomplish tasks like combining documents,
peer reviewing ethics applications, writing job references and converting text message
abbreviations. These real-life examples showcase ChatGPT’s efficiency,
time-saving capabilities and enhanced productivity for users. The varied usage of
ChatGPT by users exemplifies how the tool aligns with Bourdieu’s theory of
social capital and skill acquisition, serving as a platform through which users can
access a wide range of information and knowledge, leveraging their social capital
to explore various domains. Additionally, the process of interacting with
ChatGPT involves skill acquisition, as users develop the ability to navigate and
evaluate the information provided, further contributing to their understanding
and learning. Once again, Marx’s theory of social class and labour sheds light on
ChatGPT’s impact on work. Observations from the instructor and students show
ChatGPT’s effectiveness in tasks like modifying text and generating content. This
raises questions about its implications for traditional job roles and the division of
labour, potentially automating or augmenting tasks previously done by humans.
Heidegger’s theories underscore the transformative nature of technology and its
impact on revealing the world. As a tool, ChatGPT enables individuals to achieve
specific tasks and goals across different domains. In professional settings,
ChatGPT streamlines work processes by aiding in tasks like drafting emails,
generating reports and providing quick access to information, aligning with
Heidegger’s concept of ‘readiness-to-hand’. Similarly, in personal interactions, it
acts as an assistant for scheduling appointments, setting reminders and offering
recommendations, becoming an extension of our capabilities, as per Heidegger’s
idea of tools becoming transparent mediums. Furthermore, ChatGPT’s creative
applications involve assisting in writing tasks, suggesting ideas and enhancing
language usage, which aligns with Heidegger’s ‘poetic dwelling’ approach,
fostering openness and deeper connection with the world through technology.
However, Heidegger’s cautionary note reminds us to reflect on technology’s
impact and its potential to disconnect us from authentic experiences. While
ChatGPT proves valuable, we must be mindful of its pervasive use and the
implications it holds for our relationship with the world.

Ability to Translate
ChatGPT has a remarkable ability to translate between languages, though there
are occasional downsides. For instance, machine translation models may
encounter challenges with gendered pronouns, resulting in mistranslations such
as using ‘it’ instead of ‘he’ or ‘she’, potentially leading to dehumanisation (Maslej
et al., 2023). Despite these issues, the data also revealed many positives. The
researcher noted, ‘ChatGPT can translate interviews, surveys, etc., from one
language to another, saving me time in the research process’ (RFD). Similarly, the
instructor remarked, ‘As my students are all non-native speakers, being able to
translate the readings into Turkish first to grasp the main ideas, and then reading
again in English, helped reduce cognitive load, allowing them to focus more on
the content’ (TFD).
While the literature had limited information regarding ChatGPT’s translation
abilities, Firaina and Sulisworo (2023) noted that respondents used ChatGPT to
aid in translating scientific articles into English, which proved particularly
beneficial for those with limited English proficiency.
From the perspective of Christensen, ChatGPT’s ability to translate enables
users to hire it to overcome language barriers and access information in different
languages. For example, the instructor mentioned how students benefited from
translating readings into Turkish to better understand the content. ChatGPT’s
translation feature addresses the functional job of language comprehension and
information access. Through the lens of Bourdieu, ChatGPT’s translation ability
can be seen as a form of cultural capital that helps individuals with limited
English proficiency overcome language-related barriers. By providing access to
translated scientific articles, ChatGPT contributes to levelling the playing field
and reducing language-based inequalities in academic settings. Analysing
ChatGPT’s translation capabilities from a Marxian perspective reveals insights
into the labour dynamics in language services. ChatGPT’s automated translations
can improve efficiency and accessibility but may also lead to job displacement and
potential exploitation of human translators. The automation of translation tasks
raises ethical concerns about labour devaluation and compromises in translation
quality. Balancing the benefits and challenges of AI-driven translation requires
careful consideration of fair labour practices and ensuring the preservation of
translation quality. Heidegger’s theory views technology as an enabler that shapes
human existence and relationships. In the context of ChatGPT’s translation
ability, it can be seen as a technological tool that mediates language interactions.
While it facilitates access to information and communication across languages, it
also alters the nature of language interaction itself. Therefore, the reliance on
machine translation may influence how individuals engage with language and
potentially affect language learning and cultural understanding.

Reviews Work and Gives Suggestions for Improvement


ChatGPT’s ability to analyse and provide feedback on presented work or content,
offering suggestions for improvement, was evident in the data through various
student comments. One student noted, ‘ChatGPT provided very useful feedback,
giving a detailed and orderly explanation’ (SFD). Another student said, ‘I was
having problems with how to do (an assignment). ChatGPT tells me the steps to
follow’ (SFD). The students recognised the usefulness of ChatGPT’s feedback,
with one of them mentioning, ‘It informs about where it thinks things are missing
and provides detailed feedback by evaluating the project step by step’ (SFD). In
terms of peer review, ChatGPT played a significant role. One student said, ‘I
asked ChatGPT for feedback on my essay and this is what it said. “The essay
effectively addresses the issue of fake news, its impact, and proposed solutions. By
addressing the suggestions provided above, the essay can be further improved”’
(SFD). They appreciated the specific feedback provided by ChatGPT on the
structure, clarity and development of ideas in their essay. Another student said,
‘ChatGPT provided useful feedback for me. It helped me to make the rubric at
the beginning’ (SFD). One student acknowledged the accuracy of ChatGPT’s
analysis, saying, ‘If I consider the ChatGPT assessment, I think it is obvious that
he has made a correct analysis’ (SFD). Overall, they found ChatGPT’s
suggestions helpful in identifying missing points and strengthening their argu-
ments. The researcher also found ChatGPT to be a useful tool for their writing
and study, highlighting its capabilities in tasks such as paraphrasing articles,
shortening text, suggesting titles and headings, and generating introductions and
conclusions based on data input (RFD). They also emphasised that ChatGPT’s
utility extends beyond research papers, stating, ‘This can be used for any type of
writing whether for a research paper or anything else’ (RFD).
The literature also highlights ChatGPT’s ability to review a user’s work and
provide feedback. According to Fauzi et al. (2023), ChatGPT’s capacity to
answer specific questions caters to individual learning needs, allowing students to
seek clarification and detailed explanations on specific concepts, theories or
subjects. They believe this personalised assistance greatly enhances students’
understanding, comprehension and overall learning experience. In a similar vein,
Rudolph et al. (2023) discussed the effectiveness of AI-powered digital writing
assistants, such as Grammarly, in reviewing a student’s work and providing
feedback. They noted that research suggests that utilising Grammarly as an
intervention effectively improves students’ writing engagement through
automated written corrective feedback: by indicating the location of each error
and assigning a score, its immediate feedback motivates students to revise their
writing. When students adapt their writing, an increase in
the score corresponds to a reduction in errors, encouraging them to continue
improving their writing tasks. Rudolph et al. (2023) also noted that AI inter-
ventions have been effective in enhancing self-efficacy and academic emotions in
English as a Foreign Language (EFL) students. They say this is because intelli-
gent feedback, in the absence of human assistance, can reinforce students’ writing
autonomy by helping them to recognise writing errors, identify incorrect patterns
and reformulate their writing accordingly.
Through the lens of Christensen, we may say that ChatGPT’s ability to analyse
and provide personalised feedback and guidance on a student’s work addresses
the job of enhancing their learning experience and improving their academic
performance. Students value the detailed and orderly explanations provided by
ChatGPT, as it helps them understand concepts, follow steps and improve their
assignments. Through the lens of Bourdieu, the students’ recognition of the
usefulness of ChatGPT’s feedback indicates that it possesses a form of cultural
capital – an esteemed knowledge and resource that can improve their academic
performance. By utilising ChatGPT, students gain access to valuable feedback
that strengthens their arguments, enhances the structure and clarity of their essays
and helps them create rubrics. This access to cultural capital can potentially
contribute to social distinctions and academic success. Through a Marxist lens,
ChatGPT’s ability to analyse and provide feedback on students’ work automates
tasks that would traditionally require human labour, such as reviewing and
providing feedback on essays. By performing these tasks, ChatGPT reduces the
workload on teachers or peers, allowing for more efficient and scalable feedback
processes. ChatGPT’s capabilities align with Heidegger’s view of technology as an
enabler and tool that transforms how individuals engage with the world. By
offering assistance in tasks like paraphrasing, shortening text, suggesting titles and
generating introductions and conclusions, ChatGPT changes the way users
approach writing and study. It expands their possibilities and interactions with
technology, enhancing their engagement and potentially influencing their writing
practices.

Text Register Modification


The data uncovered ChatGPT’s versatility in adjusting the register of text, as
evident from the instructor’s comments, ‘You can ask it to change the register (of
the university course information form) so that the final document is more
student-friendly’ (TFD). She also noted, ‘Students identified that they could use this
as a tool outside the classroom to help achieve the correct register for letters,
emails, etc.’ (TFD). One student stated, ‘ChatGPT can transform my informal
writing into formal writing’ (SFD). The instructor further added, ‘This is a
valuable tool the students can use beyond the classroom, such as when they need
to convert a letter into a more formal style’ (TFD). Moreover, ChatGPT
demonstrated its capacity to identify linguistic mannerisms in texts attributed to
famous individuals, with the instructor saying, ‘ChatGPT was quite successful at
giving a descriptive profile of the person who may have said this, leading us to a
discussion about catfishing’ (TFD). Additionally, the instructor said, ‘ChatGPT
was able to change British English into American English and vice versa’ (TFD).
The researcher described ChatGPT as being, ‘like a hall of mirrors. I can develop
my thoughts and then ask it to rewrite them through the lens of structuralism,
poststructuralism, feminism, etc. I found this transformational. I think it will take
research much further forwards and much faster’ (RFD).
In Rudolph et al.’s (2023) paper, while they did not specifically address
ChatGPT’s ability to change the register of a text, they did discuss the potential
misuse and challenges associated with ChatGPT’s text generation capabilities,
raising concerns about students outsourcing their written assignments to
ChatGPT, as it can produce passable prose without triggering plagiarism detec-
tors. They believe this poses integrity concerns for assessment methods, which
will prompt instructors to adapt. They also noted the
irony of using AI-powered anti-plagiarism software while AI, like ChatGPT, can
potentially bypass plagiarism detection by modifying sentences to reduce the
originality index score. Rudolph et al. suggested a student-centric approach to
address challenges with AI tools in education, whereby faculty should design
challenging assignments and use text generator detection software, and students
should be guided to understand AI limitations, practice problem-solving with AI
tools and develop digital literacy skills. They also recommended that higher
education institutions provide digital literacy education, train faculty, update
policies and integrate AI tools to navigate the evolving landscape. Similarly, Tlili
et al. (2023) identified concerns related to cheating and manipulation with
ChatGPT, finding that ChatGPT can assist students in writing essays and
answering exam questions, potentially facilitating cheating, and that its output
can be manipulated to evade detection by output detector models.
Through Christensen’s lens, ChatGPT’s ability to change the register of a text
enables individuals to hire the tool to help them achieve the desired register for
various purposes, such as academic assignments or professional communication.
From a Bourdieusian perspective, the ability of ChatGPT to mimic the writing
styles of individuals, including famous figures, raises questions about the repro-
duction and authenticity of language, whereby language can be deceptively used
to manipulate others. Looking at the situation from a Marxist perspective, the
potential negative uses of ChatGPT, such as using it to complete assignments and
evade plagiarism detection, raise important questions about the impact of tech-
nology on education. These concerns are in line with Marxist critiques of capi-
talism and the exploitation of labour, as they highlight the transformation of
education into a commodity and the potential devaluation of human labour in the
face of content generated by AI. From a Heideggerian standpoint, ChatGPT’s
transformative capabilities in rewriting thoughts through different lenses, such as
structuralism or feminism, reflect the transformative potential of AI tools in the
realm of research and academia. However, it also raises questions about the
nature of authorship, originality and the essence of human creativity when such
tasks can be delegated to AI.

Imparts Specific Knowledge, Skills and Concepts to Users


The data revealed how ChatGPT provides users with specific information, skills
or concepts through its responses, as expressed by students. One student
remarked, ‘Firstly, I used ChatGPT to research and gather information on the
Unabomber case, including the forensic linguistic evidence that was presented in
court. Secondly, I used ChatGPT to generate and improve my ideas and argu-
ments for the closing speech. For example, I put in a draft of my speech, and it
suggested alternative word choices, phrasing, or provided additional information
to support my argument’ (SFD). Another student highlighted the knowledge
gained through ChatGPT, stating, ‘I believe that ChatGPT gives me extra
knowledge. For instance, ChatGPT clearly explained how professionals studied
the Unabomber’s writing style, spelling, syntax, and linguistic qualities to work
out his age, origin, and other characteristics when I questioned it about the
forensic linguistic evidence used to identify him. This really helped my under-
standing of the relevance of the linguistic evidence and its role in the case’ (SFD).
Regarding the writing process, one student noted, ‘When I was writing the closing
argument for the prosecution, I asked ChatGPT a question like “Suppose you
were the prosecutor in the Unabomber case, how would you write the closing
argument?” Then I pasted the rubric (that the instructor had provided) and asked
it to reorganise the writing using that rubric. It sent me the answer, and finally I
asked it to illustrate that writing by suggesting pictures’ (SFD). Another student
stated, ‘ChatGPT helped me to learn effectively in a short time. It made it easier
for me to answer questions. I was able to do in-class exercises much more easily
with ChatGPT’ (SFD). The instructor also commented on ChatGPT’s effective-
ness for helping students learn, saying, ‘ChatGPT was excellent for students to
Findings and Interpretation 121

learn punctuation rules. It was also useful when students input the same sentences
with different punctuation, and it told them the difference in meaning’ (TFD).
The instructor also highlighted ChatGPT’s ability to quickly provide students
with rules for the use of definite and indefinite articles and the use of ‘then’ in
narrative justifications when asked (TFD). Additionally, the instructor
mentioned, ‘Students used ChatGPT’s MadLib function to create vocabulary
quizzes for each other. This gave us the idea that students could use it to create
their own revision materials to use to revise the course concepts’ (TFD). This
positive feedback was reinforced by a student who stated, ‘I wanted ChatGPT to
prepare practice for me while preparing for my exams. These give me an
advantage for preparing for exams and assignments’ (SFD).
These findings are supported by Fauzi et al. (2023), who found that ChatGPT
was a valuable resource for students, offering useful information and resources,
retrieving relevant information from the internet, recommending books and
articles and assisting in refining grammar, expanding vocabulary and enhancing
writing style, all of which led to an overall improvement in academic work and
language skills. Neumann et al. (2023) also observed that ChatGPT could help
students prepare for assessments by generating specific source code and summa-
rising literature, and that they could utilise it to generate relevant code snippets
for their assignments or projects, contributing to their knowledge and under-
standing of software engineering concepts. Similarly, Zhai (2022) found ChatGPT
useful in composing an academic paper that only required minor adjustments for
organisation.
Through Christensen’s lens, we can see that students are hiring ChatGPT to
gather information, improve their ideas and arguments, enhance their writing and
learn more effectively. From a Bourdieusian perspective, ChatGPT enhances
users’ social and cultural capital regarding access to resources and opportunities.
From a Marxist viewpoint, students can use ChatGPT to enhance their produc-
tivity and efficiency in tasks such as writing, research and exam preparation,
thereby acting as a form of technological capital that empowers students to
accomplish their academic work more effectively, potentially reducing their
dependence on traditional labour-intensive approaches. However, it should be
noted that this technological capital is only available if there is equitable access to
the tool. Through a Heideggerian lens, ChatGPT redefines the relationship
between humans and technology in the educational context, by expanding the
possibilities of information retrieval, language refinement and knowledge gener-
ation. Through interactions with ChatGPT, students can engage in a new mode of
learning and communication that is mediated by technology. This interaction will
influence their perception and understanding of specific knowledge, skills and
concepts.

How ChatGPT May Affect the Roles of Stakeholders


ChatGPT can empower students by providing them with access to valuable
resources and tools. It can assist in tasks such as generating ideas, improving
writing, providing feedback and enhancing language skills. This can potentially
enhance students’ learning experience and academic performance. However, there
are concerns about potential misuse and the impact on academic integrity. Stu-
dents may be tempted to outsource their assignments to ChatGPT or use it to
bypass plagiarism detection. This raises questions about the authenticity of their
work and the development of critical thinking and writing skills.
Institutions and instructors will need to address these challenges and establish
responsible policies for the use of AI tools in education. ChatGPT can augment
the role of instructors by automating certain tasks and providing support in
reviewing and providing feedback on students’ work. It can save instructors’ time
by offering suggestions for improvement, detecting errors and helping with
language-related issues. However, there is a need for instructors to adapt to these
changes and find new ways to engage with students. The role of instructors may
shift towards facilitating discussions, guiding students in utilising AI tools effec-
tively and designing assignments that cannot be easily outsourced or automated.
Instructors should also be aware of the limitations of AI tools and help students
develop critical thinking skills alongside their use of ChatGPT.
Institutions need to recognise the potential of AI tools like ChatGPT and their
impact on teaching and learning. They should provide digital literacy education
and training for faculty and students, update academic integrity policies and
support research on the effects of AI tools on learning and teaching. Additionally,
institutions should consider the implications for equitable access to educational
resources. While ChatGPT can provide valuable support, it also raises concerns
about the digital divide and disparities in access to technology. Institutions should
ensure that all students have equal opportunities to benefit from AI tools and take
steps to bridge any existing gaps.
In summary, ChatGPT’s versatile and practical nature in education enhances
the learning experience, offering personalised feedback and guidance to students.
However, concerns arise about its impact on labour dynamics, academic integrity
and societal ethics. To address these, responsible policies and digital literacy
training are essential.

Impact of ChatGPT on User Learning


Within this theme, we explore the influence of ChatGPT on user learning, spe-
cifically examining its effectiveness in accomplishing assigned student tasks and
the overall impact it has on the learning process.

ChatGPT Demonstrates Competency in Completing Assigned Student Tasks/Inhibits User Learning Process
From our findings, we discovered that ChatGPT demonstrates proficiency in
accomplishing assigned tasks. However, due to this, it can have a detrimental
impact on the learning process, hindering active engagement and independent
knowledge acquisition. Students’ perspectives shed light on this matter. In the
initial phase of the course, the instructor asked students to assess ChatGPT’s
ability to complete in-class activities in their other courses. 57.1% affirmed
ChatGPT’s total capability in this task, while 28.6% acknowledged its partial
capability. The activities mentioned by students that could be done by ChatGPT
included article writing and answering questions. Similarly, students were asked
about ChatGPT’s potential to complete assignments or projects in their other
courses. 71.4% responded that it could do them completely, with 14.3% saying it
could partially complete them. One student stated, ‘Generally ChatGPT knows
everything. This is very dangerous for students because students generally choose
the easy way to work. If ChatGPT improves itself, students will use it a lot, and
that’s why when instructors give grades, you will use ChatGPT to get the points’
(SFD). Another said, ‘The possibility of students having their homework done by
ChatGPT will raise doubts in teachers, which may have consequences’ (SFD).
The students also recognised that the impact of ChatGPT depends on its usage.
One student remarked, ‘Actually, it is connected with your usage type. If you use
it to check your assignments and help, it helps you learn. But if you give the
assignment to ChatGPT, it skips the learning’ (SFD). They further commented,
‘It’s like taking it easy. It helped me lots with doing my homework, but I feel like
it reduced my thinking process and developing my own ideas’ (SFD). Another
student said, ‘Of course, it helped me a lot, but it also made me a little lazy, I
guess. But still, I think it should stay in our lives’ (SFD). A further student said, ‘It
certainly skips some part of the learning process. When I ask it for information, I
do it to shorten the time I spend researching. If I spent time researching by myself,
I think I would have more detailed information and would form more complex
ideas’ (SFD). According to the instructor, ChatGPT was good at generating ideas
for the final assessment, but there were caveats: ‘ChatGPT was excellent at
coming up with ideas for the final assessment following GRASPS and came up
with better ideas than my own. However, some of its ideas for assessment could
easily be done by ChatGPT itself. Therefore, these suggestions would need to be
rewritten to avoid this’ (TFD). The instructor also made observations about
ChatGPT’s ability to create rubrics: ‘Once an assessment has been written,
ChatGPT can easily come up with a suggested rubric for evaluation, but only if
the assessment task is written precisely. However, the weighting in the rubrics
should be adapted by the instructor to reflect the parts that ChatGPT can do and
the parts it can’t’ (TFD). Additionally, the instructor highlighted that ChatGPT
could provide suggestions for pre-class quizzes based on the text or video input;
however, they cautioned that if the cases used in the quizzes were present in
ChatGPT’s database, students might opt to use the AI for quizzes instead of
engaging with the assigned text or video (TFD). Furthermore, regarding in-class
activities, the instructor noted, ‘When students got ChatGPT to categorise
vocabulary under headings, they did it fast, but it skipped the learning process
aim of this activity. It did not help them to review the vocabulary. This may
therefore have implications for how I construct my vocabulary review activities in
the future’ (TFD). Issues with ChatGPT being able to complete student activities
were also raised in a workshop, where one teacher said, ‘I realised that ChatGPT
could do the lesson planning assignment for my (teacher candidate) students, so I
changed the weighting of the rubric to adapt to this’ (SYFD). Similarly, another
teacher said, ‘ChatGPT can easily find the answers with this activity, and students
would not need to do the reading’ (SYFD). A different teacher stated, ‘ChatGPT
does not enable a person to learn from the process. With this way, ChatGPT only
gives a result; it does not provide help for the process. As you know, learning
takes place within the process, not solely the result’ (SYFD).
So what did the literature have to say about this? In Mhlanga’s (2023) study,
instructors expressed concerns that ChatGPT may disrupt traditional assessment
methods like essays and make plagiarism detection more difficult. However,
Mhlanga stated that he believes this opens doors to innovative educational
practices and suggests that AI technologies like ChatGPT can be used to enhance
assessment procedures, teaching approaches, student participation, collaboration
and hands-on learning experiences, thus modernising the educational system.
Neumann et al. (2023) also explored ChatGPT’s competence in completing
assigned student tasks and its implications for the learning process. They high-
lighted various applications in software engineering, including assessment prep-
aration, translation, source code generation, literature summarisation and text
paraphrasing. However, while they noted that ChatGPT could offer fresh ideas
for lecture preparation and assignments, they stressed the need for further research and for an emphasis on transparency, ensuring students are aware of ChatGPT’s capabilities and limitations. They proposed integrating
ChatGPT into teaching activities, exploring specific use cases and adapting
guidelines, as well as potential integration into modern teaching approaches like
problem-based and flipped learning, with an emphasis on curriculum adjustments
and compliance with regulations. Rudolph et al. (2023) raised multiple concerns
regarding ChatGPT’s impact on students’ learning process and assessment
authenticity. They highlighted potential issues with students outsourcing written
assignments, which they believe could challenge traditional evaluation methods.
Additionally, they expressed worries about ChatGPT hindering active engage-
ment and critical thinking skills due to its competence in completing tasks without
students fully engaging with the material. Tlili et al.’s (2023) study focused on
potential misuse of ChatGPT, such as facilitating cheating in tasks like essay
writing or exam answers. Effective detection and prevention of cheating were
highlighted as important considerations. Similarly, they raised concerns about the
impact of ChatGPT on students’ critical thinking skills, believing that excessive
reliance on ChatGPT may diminish students’ ability to think innovatively and
independently, potentially leading to a lack of deep understanding and
problem-solving skills. Due to these issues, Zhai (2022) proposed a re-evaluation
of literacy requirements in education, suggesting that the emphasis should shift
from the ability to generate accurate sentences to effectively utilising AI language
tools, believing that incorporating AI tools into subject-based learning tasks may
be a way to enhance students’ creativity and critical thinking. Zhai also suggested that this be accompanied by a shift in assessment practices towards critical thinking and creativity, recommending the exploration of innovative assessment formats that effectively measure these skills.
Through Christensen’s lens, ChatGPT can be seen as a tool that students can
hire to accomplish specific jobs or tasks in their educational journey. However,
this raises concerns about the potential negative impact on active engagement,
independent knowledge acquisition, critical thinking and the overall learning
process. Therefore, there is a need for a balanced approach to avoid the draw-
backs associated with its use. Through Bourdieu’s theory of social reproduction,
we can gain insights into the social and educational ramifications of ChatGPT.
Students’ concerns about the ease of relying on ChatGPT for completing
assignments and the potential consequences, including doubts from teachers and
reduced critical thinking, resonate with Bourdieu’s emphasis on the reproduction
of social structures. This highlights the possibility of ChatGPT perpetuating
educational inequality by offering shortcuts that hinder deeper learning and
critical engagement. Students’ comments about the impact of ChatGPT on the
learning process reflect elements of Marx’s theory of alienation. While ChatGPT
offers convenience and assistance in completing tasks, students expressed con-
cerns about the reduction of their active involvement, thinking process and per-
sonal idea development. This detachment from the learning process can be seen as
a form of alienation, where students feel disconnected from the educational
experience and become dependent on an external tool to accomplish their tasks.
Heidegger’s perspective on technology as a means of revealing and shaping our
understanding of the world can also be applied here. ChatGPT is a technological
tool that transforms the educational landscape, revealing new possibilities by
generating ideas, providing assistance and automating certain tasks. However, the
concerns raised by students and instructors point to the potential danger of
technology shaping the learning process in ways that bypass essential aspects of
education, such as critical thinking, personal engagement and deep understand-
ing. Once again, this highlights the need for a thoughtful and intentional inte-
gration of technology in education to ensure its alignment with educational goals.

ChatGPT Unable to Perform Assigned Student Tasks


In exploring the limitations of ChatGPT in completing tasks or assignments, the
instructor stated, ‘I showed students how to pick up information from the final
assessment rubric and put it into ChatGPT to see how much of the project it could
do. Then I got them to look at the weighting in the rubric and identify areas in
which chatgpt could either not do what was asked e.g. providing visuals, or was
inefficient e.g. giving very limited information about the forensic linguistics of the
case, not providing primary source references’ (TFD). In another example, the
instructor said, ‘Students read articles on advice for writing closing speeches, they
also asked chatgpt to give advice and cross-referenced it. They made a list of this
advice. Then they watched two videos of closing speeches, one for the prosecution
and one for the defence and wrote examples from these speeches matched against
their advice list. This activity proved to be ChatGPT-proof as they were having to
write down examples they had heard from a video’ (TFD). Furthermore, the
instructor recounted, ‘Students had read a paper about how intoxication can be
detected by acoustic-phonetic analyses and made a list of the main points. They
then watched a video of Johnny Depp giving an award speech while drunk and
had to write examples of what he said next to the list of acoustic-phonetic factors
from the paper. Due to them having to listen to a video to do this, the activity was
ChatGPT-proof’ (TFD). Furthermore, the instructor commented, ‘Students
created their projects in any form they wished (poster, video, interview). They
used ChatGPT to review their work against their rubric. This could only be done
if their final project had text that could be input. If they used a different medium,
this was not possible’ (TFD). They also observed that, ‘ChatGPT was unable to
do an analysis of handwriting from two suicide notes related to Kurt Cobain’
(TFD).
From Christensen’s perspective, the limitations of ChatGPT can be seen as its
inability to fulfil specific jobs or tasks that users are hiring it to do, such as
providing visuals, detailed data on cases and primary source references. They
were also unable to hire it to assist with tasks that involved the use of different
media, such as answering questions related to videos or poster presentations.
These limitations hindered the users’ ability to accomplish their desired goals and
tasks effectively with the tool. From a Bourdieusian perspective, the limitations of
ChatGPT may reflect the unequal distribution of cultural capital among users.
The ability to effectively navigate and utilise ChatGPT’s capabilities, such as
cross-referencing information or critically assessing its outputs, is influenced by
the possession of cultural capital. Students who have been exposed to educational
resources and have developed the necessary skills may benefit more from using
ChatGPT, while those lacking cultural capital may struggle to fully utilise its
potential. This highlights the role of social inequalities and the reproduction of
advantage in educational settings. From a Marxist perspective, ChatGPT, as a
technological tool, may be seen as being shaped by the profit-driven logic of
capitalism. Its limitations may arise from cost considerations, efficiency require-
ments or the prioritisation of certain tasks over others. These limitations reflect
the broader dynamics of capitalist technology, where the pursuit of profit and
market demands may compromise the quality, accuracy and comprehensiveness
of the outputs. Regarding Heidegger’s theories on technology, the limitations of
ChatGPT reveal the essence of technology as an instrument or tool that has its
own limitations and cannot replace human capabilities fully. ChatGPT’s inability
to analyse handwriting or handle tasks that require human senses and context
demonstrates the importance of human presence, interpretation and under-
standing in certain educational contexts.

How ChatGPT May Affect the Roles of Stakeholders


Students’ experiences with ChatGPT encompassed both advantages and disad-
vantages. On the one hand, it demonstrated proficiency in completing tasks and
offered convenience. However, concerns arise about its potential to hinder active
engagement, critical thinking and independent knowledge acquisition. Some
students expressed worries about overreliance on ChatGPT, which could
discourage them from engaging in the learning process and developing their own
ideas. Additionally, concerns about potential misuse, such as outsourcing
assignments or facilitating cheating, underscore the importance of maintaining
assessment authenticity and fostering critical thinking skills.
For instructors, integrating ChatGPT presents new challenges and consider-
ations. While it can generate ideas, suggest rubrics and assist in various tasks,
careful adaptation of assessments is necessary to avoid redundancy and ensure
alignment with ChatGPT’s capabilities. The impact of ChatGPT on in-class
activities is also a concern, as it may bypass the learning process and hinder
effective teaching. To address this, instructors need to rethink their approach to
in-class activities and actively manage ChatGPT’s use to ensure students are
actively learning and not solely relying on the AI tool.
Furthermore, institutions of higher education must carefully consider the
broader implications of ChatGPT integration. This will involve re-evaluating
literacy requirements and assessment practices, with a focus on critical thinking
and creativity. The successful integration of ChatGPT will require transparency,
adaptation and AI-proofing activities. Institutions will also need to establish clear
policies on assessment and plagiarism detection. Balancing AI integration will be
essential in order to harness its benefits without undermining student learning
experiences. Therefore, institutions will need to provide proper training to
instructors to encourage and enable them to embrace new teaching approaches in
this AI-driven landscape.

Limitations of a Generalised Bot for Educational Context


The theme ‘Limitations of a Generalised Bot for Educational Context’ explores
the challenges and shortcomings of using a general-purpose bot in education,
including gaps in knowledge, disciplinary context limitations and culturally spe-
cific database issues.

Gaps in Knowledge
One significant caveat of ChatGPT is its reliance on pre-September-2021
knowledge, as it does not crawl the web like traditional search engines. This
was observed in the following instances. The instructor stated, ‘I asked my stu-
dents to ask ChatGPT about the implications for AI and court cases. After this, I
gave them some recent articles to read about the implications of AI for court cases
and asked them to make notes. They then compared their notes to ChatGPT’s
answers. The students felt the notes they had made about the implications were
more relevant than ChatGPT’s responses. This may have been because the articles
I provided them with had been published in late 2022 or early 2023, whereas
ChatGPT’s database only goes up to 2021’ (TFD). In a similar vein, the
researcher said, ‘I was interested in analysing some of my findings against the
PICRAT matrix that I was familiar with but has only recently been developed. I
asked ChatGPT about this. Three times it gave me incorrect information until I
challenged it, whereupon it eventually responded that it did not know about
PICRAT’ (RFD). Interestingly, the concept of gaps in knowledge did not emerge
prominently in the literature review; therefore, we turn to our theorists.
Through Christensen’s lens, ChatGPT’s limitations in knowledge can hinder
its ability to adequately serve the job of providing accurate and up-to-date
information. Users hiring ChatGPT for information-related tasks may find its
outdated knowledge base unsatisfactory in meeting their needs. This is therefore a
constraint on ChatGPT’s ability to effectively perform the job it is being hired for.
The reliance on outdated information may reflect the prioritisation of
cost-effectiveness and efficiency in the development of ChatGPT. Analysing
ChatGPT’s gaps in knowledge through a Heideggerian lens highlights the essence
of technology as a human creation and the limitations it inherits. ChatGPT, as a
technological tool, is bound by its programming and training data, which define
its knowledge base and capabilities. The gaps in knowledge arise from the
inherent limitations of the technology itself, which cannot transcend the bound-
aries of its design and training. This perspective prompts reflection on the
human–technology relationship and raises questions about the extent to which AI
systems can genuinely meet the needs of the complexities of human knowledge
and understanding.

Disciplinary Context Limitations


A prominent finding from the data is ChatGPT’s limitations in understanding
specific disciplinary contexts or possessing specialised knowledge in certain fields.
One student pointed out that ChatGPT’s AI might not make judgements in the
same way a human judge can, given the availability of evidence, which could lead
to questionable decisions (SFD). Similarly, another student expressed that
without access to proper sources, a judge, in this case ChatGPT, might struggle to
make accurate judgements (SFD).
Alshater (2022) also raised the concern that ChatGPT and similar chatbots
may lack extensive domain knowledge, particularly in economics and finance. He
pointed out that, due to the training data used to develop ChatGPT, it might not
possess deep expertise in specific domains (in his case, economics and finance),
which therefore limits its ability to accurately analyse and interpret data in these
areas. Consequently, he warned that using ChatGPT for tasks like data analysis
or research interpretation in fields where ChatGPT is lacking may lead to errors
and incomplete analyses. To address this limitation, Alshater suggested users
should be cautious, employ additional resources or expert knowledge and have
human oversight to ensure accurate outputs (2022). However, he remains opti-
mistic about ongoing advancements in natural language processing and machine
learning, which he believes could enhance the domain-specific expertise of AI
systems like ChatGPT in the future.
Through Christensen’s lens, ChatGPT fails to adequately address the needs of
users seeking accurate judgments and in-depth expertise in certain domains. Users
desire a tool that can effectively analyse and interpret data in these areas, but
ChatGPT falls short in meeting this specific job to be done. Bourdieu’s theory is
evident in the way students express concerns about the limitations of ChatGPT.
The possession of specialised knowledge in specific fields is seen as a form of
cultural capital. Students recognise that relying solely on ChatGPT for complex
judgments and analyses might not lead to desirable outcomes. Through a Marxist
lens, the limitations of ChatGPT in certain domains may perpetuate existing
social structures, wherein expertise and knowledge in these areas are valued and
rewarded. Reliance on AI systems like ChatGPT for complex tasks could also
potentially lead to the devaluation of human labour and expertise in these fields.
Through a Heideggerian lens, the limitations observed in ChatGPT’s under-
standing and domain knowledge are rooted in its programming and training data,
defining its capabilities. As a tool, ChatGPT can only operate within the
boundaries of its design and training, leading to insufficiencies in human usage.

Culturally Specific Database


The concept of a culturally specific database in ChatGPT refers to its access or
training on a database specific to a particular culture or cultural context. This can
potentially limit its relevance to the individual needs of users. A 2020 study by the Massachusetts Institute of Technology warns of this risk: biases can be encoded into the technology if the training data is overly hegemonic (Grove, 2023).
ChatGPT’s database, it is worth noting that OpenAI and Google (a subsidiary of Alphabet Inc.) are based in California, and Microsoft is headquartered in Washington.
Thus, the involvement of these companies in developing and utilising AI suggests
a strong connection to American culture and potentially a Western perspective.
As a result, the cultural context and perspectives of these companies may influ-
ence the development and training of the AI model.
This concern about cultural specificity was also evident in the data provided by
the students. One student cautioned against relying on ChatGPT in legal edu-
cation, noting that laws differ across countries and it may not be appropriate to
expect ChatGPT to provide accurate assistance in such matters (SFD). Another
student expressed doubts about ChatGPT’s ability to offer reliable judgements,
citing an example where applying English case law to Turkish legal matters led to
incorrect conclusions (SFD). This issue was also observed in a previously
described example concerning transgender pronouns in Turkey, where ChatGPT
demonstrated a limited understanding of the Turkish language and provided
incorrect information.
Surprisingly, the literature did not explicitly address these concerns about
cultural databases. However, Sullivan et al.’s (2023) study did highlight the cul-
tural limitations of their research, which focused on news articles from Australia,
New Zealand, the United States and the United Kingdom. They pointed out the
imbalance in academic studies that predominantly analyse Western news,
particularly from the United States. They warn that this imbalance raises
cautionary flags about relying solely on Western voices and perspectives when
discussing ChatGPT and similar technologies. We believe their concerns can also
be extrapolated to ChatGPT’s database.
When it comes to looking at this issue through the lens of Christensen, the
concerns raised by the students regarding the cultural specificity of ChatGPT’s
database highlight the potential mismatch between the job they are hiring it to do
and the capabilities of ChatGPT itself. This misalignment indicates the need for
improvements in addressing specific user needs and cultural contexts. From a
Bourdieusian viewpoint, the involvement of AI companies, such as OpenAI,
Microsoft and Google, primarily based in the United States, suggests a connec-
tion to American culture and a Western perspective. This cultural capital and
habitus shape the training and implementation of AI models, potentially encoding
biases and limitations into the technology. The concerns about the accuracy and
relevance of ChatGPT’s responses in different cultural contexts reflect the influ-
ence of cultural capital on the AI system’s performance. Through a Marxist lens,
the concentration of power in these companies, along with their Western cultural
context, may result in biased or limited representations of knowledge and per-
spectives. Furthermore, Heidegger’s views on technology prompt us to question
the very essence and impact of AI systems like ChatGPT. The concerns about
cultural specificity and resulting limitations raise existential questions about the
role and responsibility of AI in human activities. Moreover, the constraints posed
by ChatGPT’s database and potential biases call for critical reflection on the
essence of AI, its impact on human knowledge and decision-making and the
ethical considerations surrounding its development and use.

How ChatGPT May Affect the Roles of Stakeholders


For students, ChatGPT’s gaps in knowledge and disciplinary context limitations
may impact their reliance on the AI system for accurate information and analysis.
The examples provided by the instructor and researcher demonstrate instances
where students found their own notes or domain-specific knowledge more rele-
vant than ChatGPT’s responses. This suggests that students may need to critically
evaluate and supplement the information provided by ChatGPT with their own
expertise or additional resources. It also highlights the importance of cultivating
their own knowledge and critical thinking skills rather than solely relying on AI
systems.
Instructors may need to adapt their teaching approaches and guide students in
effectively using ChatGPT while being aware of its limitations. They can
encourage students to question and critically evaluate the information provided
by the AI system, promoting a deeper understanding of the subject matter.
Instructors may also need to provide up-to-date resources and incorporate dis-
cussions on the limitations of AI technologies to enhance students’ awareness and
discernment.
Furthermore, this means that institutions of higher education have an
important role to play in shaping the integration of AI systems like ChatGPT into
their educational settings, providing guidelines and ethical frameworks for the
responsible use of AI in education, to ensure that their students are aware of the
limitations and biases associated with these technologies. Institutions should also
foster interdisciplinary collaborations and partnerships with industry to address
the disciplinary context limitations of ChatGPT, facilitating the development of
AI systems with domain-specific expertise. Additionally, the concerns raised
about cultural specificity and biases in ChatGPT’s database highlight the need for
institutions to promote cultural diversity and inclusivity in AI purchasing,
development and utilisation. By incorporating diverse cultural systems, perspec-
tives and datasets, institutions can help mitigate the potential biases and limita-
tions, ensuring that they better serve the needs of students from various cultural
backgrounds.
In this chapter, we have taken a deep dive into the influence of ChatGPT on
students, instructors and higher education institutions within the scope of our key
themes. Throughout our discussion, we have discerned the necessary actions that
universities should undertake. These encompass ethical considerations, such as
evaluating AI detection tools, critically assessing AI referencing systems, rede-
fining plagiarism within the AI era, fostering expertise in AI ethics and bolstering
the role of university ethics committees. They also encompass product-related
matters, including ensuring equitable access to AI bots for all students, fostering
collaborations with industries, obtaining or developing specialised bots and
offering prompt engineering courses. Additionally, there are educational ramifi-
cations, like addressing AI’s impact on foundational learning, proposing flipped
learning as a strategy to navigate these challenges, reimagining curricula to align
with the AI-driven future, advocating for AI-resilient assessment approaches,
adapting instructional methods, harnessing the potential of prompt banks and
promoting AI literacy. Moving forward, in the next three chapters, we discuss the
practical implications of these findings, grouping them into ethical,
product-related and educational implications. Thus, while this chapter has out-
lined the essential steps that must be undertaken, the following three chapters
present pragmatic approaches for putting these actions into practice.
Chapter 7

Ethical Implications

Assessing Artificial Intelligence (AI) Detection Tools


The ongoing debate in current discourse, as illuminated within the literature
review, revolves around the incorporation of AI detection tools by universities to
counteract plagiarism. Sullivan et al. (2023) discussed a variety of available tools
for this purpose, encompassing OpenAI’s Open Text Classifier, Turnitin,
GPTZero, Packback, HuggingFace.co, and AICheatCheck. However, reserva-
tions were expressed concerning the precision and sophistication of these detec-
tion mechanisms. Rudolph et al. (2023) shed light on the paradoxical nature of
utilising AI-powered anti-plagiarism software while AI models, such as Chat
Generative Pre-trained Transformer (ChatGPT), could potentially evade plagia-
rism detection through sentence modifications that lower originality index scores.
Similarly, Tlili et al. (2023) acknowledged concerns tied to cheating and manip-
ulation with ChatGPT, uncovering its role in assisting students with essays and
exam responses, which could facilitate cheating and circumvent detection.
Consequently, a scenario could emerge where students are tempted to delegate
assignments to ChatGPT or employ it to sidestep plagiarism checks, thereby
raising questions about the authenticity of their work. Given this context,
educational institutions and educators are confronted with the urgent responsi-
bility of addressing these challenges and devising guidelines to regulate the uti-
lisation of AI tools in education. The necessity for robust measures to counteract
cheating and ensure prevention has become a central concern. However, a crucial
question arises: Can this be feasibly achieved using the existing AI detection tools?
Recent research appears to suggest otherwise. A study released in June by
researchers at European universities found that existing detection tools are
imprecise and unreliable, tending to categorise content as human-written rather
than to detect AI-generated text (Williams, 2023). Prior to this, another study highlighted the
disproportionate disadvantage faced by non-native English speakers, as their
narrower vocabularies resulted in elevated penalties compared to native speakers
(Williams, 2023). Furthermore, a separate investigation conducted by scholars
from the University of Maryland underscored the issue of inaccuracy and

The Impact of ChatGPT on Higher Education, 133–145


Copyright © 2024 Caroline Fell Kurban and Muhammed Şahin
Published under exclusive licence by Emerald Publishing Limited
doi:10.1108/978-1-83797-647-820241007
demonstrated that detectors could be readily circumvented by students employing
paraphrasing tools to rephrase text initially created by large language models
(Williams, 2023). So where does this leave universities?
To explore this matter further, at our institution, the director of the library
extended invitations to AI detection tool companies, enabling them to deliver
presentations on their products. This platform allowed them to demonstrate their
upgraded tools, followed by interactive question and answer sessions aimed at
obtaining more profound insights. According to the companies’ claims, they have
incorporated AI writing detection capabilities into their tools, aiming to assist
educators in maintaining academic integrity and fairness among students. These
capabilities include an AI writing indicator that is integrated within the originality
reports. This indicator provides an overall percentage indicating the likelihood of
AI-generated content, such as text produced by ChatGPT. Additionally, it gen-
erates a report highlighting specific segments predicted to be authored by AI. The
companies emphasise that the AI writing indicator is intended to provide edu-
cators with data for decision-making based on academic and institutional policies,
rather than making definitive determinations of misconduct, and caution against
solely relying on the indicator’s percentage for taking action or as a definitive
grading measure. Now, the question remains: How do these AI detectors actually
work?
AI writing detection tools operate by segmenting submitted papers into text
segments and assessing the probability of each sentence being written by a human
or generated by AI, providing an overall estimation of the amount of
AI-generated text. However, unlike traditional plagiarism detection tools that
employ text-matching software to compare essays against a vast database of
existing sources and highlight specific matches, AI plagiarism detection tools offer
probabilities or likelihoods of content being AI-generated, based on characteris-
tics and patterns associated with AI-generated text. The proprietary nature of
these AI detection systems often limits transparency, making it challenging for
instructors and institutions to verify the accuracy and reliability of what is pur-
ported to be AI-generated content. This lack of concrete evidence has significant
implications, particularly in cases where students face plagiarism accusations and
legal action, as it becomes more difficult to substantiate claims and defend against
accusations without explicit evidence of the sources.
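The per-sentence probability approach described above can be made concrete with a short sketch. The code below is purely illustrative and is not the algorithm of any commercial detector: real tools score sentences with trained language-model classifiers, whereas `toy_score` here is a deliberately naive stand-in based on word repetition, so the numbers it produces carry no real meaning.

```python
import re


def sentence_scores(text, score_fn):
    """Split text into sentences and score each with the supplied function."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    return [(s, score_fn(s)) for s in sentences]


def aggregate(scored, threshold=0.5):
    """Overall estimate: percentage of sentences whose score exceeds the threshold."""
    if not scored:
        return 0.0
    flagged = sum(1 for _, p in scored if p >= threshold)
    return round(100 * flagged / len(scored), 1)


def toy_score(sentence):
    """Naive stand-in for a trained classifier: repetitive wording scores higher."""
    words = sentence.split()
    unique_ratio = len({w.lower() for w in words}) / max(len(words), 1)
    return max(0.0, min(1.0, 1.2 - unique_ratio))
```

The point of the sketch is the shape of the pipeline, not the scoring: the tool returns only a probability per segment and an aggregate percentage, never a matched source, which is exactly why such output is hard to verify or contest.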
Two recent cases at the University of California Davis shed light on the
challenges universities encounter with AI detection software. One case involved
William Quarterman, a student who received a cheating accusation from his
professor after the professor used the AI-powered tool GPTZero to analyse
Quarterman’s history exam for plagiarism (Jiminez, 2023). Despite Quarterman’s
adamant denial, the software supported the professor’s claim, resulting in a failing
grade and a referral for academic dishonesty. The subsequent honour court
hearing caused significant distress to Quarterman, who ultimately proved his
innocence and was exonerated. In a similar incident, Louise Stivers, a graduating
senior, was falsely accused of plagiarism (Klee, 2023). The investigation revealed
a significant limitation in the AI plagiarism detection software used at UC Davis –
the software had been trained on a narrow dataset that failed to account for the
diverse writing styles and cultural backgrounds of the student body. This
highlighted the issue of cultural bias in AI algorithms and emphasised the need for
more inclusive AI development. As a result of being wrongfully accused, Stivers
actively collaborated with the university to enhance the software’s inclusivity and
accuracy. Her contribution aimed to foster a fairer approach to AI technology in
academic settings, ensuring that it better accommodates the diverse student
population and provides accurate results in detecting plagiarism (Klee, 2023).
Thus, the decision to use AI detection tools in universities is a topic of concern
and discussion. Kayla Jiminez (2023) of USA Today highlights the advice of
educational technology experts, cautioning educators about the rapidly evolving
nature of cheating detection software. Instead of immediately resorting to disci-
plinary action, experts suggest asking students to show their work before accusing
them of using AI for assignments. Neumann et al. (2023) support this approach
and recommend a combination of plagiarism checkers and AI detection tools,
with manual examination as a backup. They stress the importance of thorough
reference checks and identifying characteristics of AI-generated content. Rudolph
et al. (2023) also acknowledge the limitations of anti-plagiarism software in
detecting ChatGPT-generated text and propose building trusting relationships
with students and adopting student-centric pedagogies and assessments. They
discourage a policing approach and emphasise assessments for and as learning. At
MEF, we concur with this perspective. Based on our investigations, we believe
that current AI detection tools are not suitable for their intended purpose. Instead
of relying solely on such tools, we suggest implementing alternative supports for
assessing students’ work, such as one-to-one discussions or moving away from
written assessments altogether. We believe the solution is to ban AI detection
tools but not AI itself.

Scrutinising AI Referencing Systems


The absence of a standard referencing guide for ChatGPT poses significant
challenges for users in academic contexts, including the lack of provided refer-
ences and established guidelines for referencing ChatGPT-generated information.
However, we can break down these challenges into three key issues. Firstly,
ChatGPT itself does not cite the sources it has used. Secondly, users may treat
ChatGPT as a search engine, requiring them to cite it, but no standard referencing
systems currently exist. Thirdly, ChatGPT can be used as a tool to develop ideas
and improve writing, raising the question of whether a referencing system should
be used at all in such instances.
In addressing the first concern, researchers have been working on developing a
system for ChatGPT to identify the sources of its information. Rudolph et al.
(2023) highlight significant progress in this area, such as the creation of the
prototype WebGPT, providing access to recent and verified sources. Additionally,
AI research assistants like Elicit assist in locating academic articles and summa-
rising scholarly papers from repositories. At our institution, we also tested the
beta plugin for ChatGPT-4 called ScholarAI, which yielded promising results.
ScholarAI grants users access to a database of peer-reviewed articles and
academic research by linking the large language models (LLMs) powering
ChatGPT to open access Springer-Nature articles. This integration allows direct
queries to relevant peer-reviewed studies. These advancements aim to bolster the
quality and credibility of academic work by incorporating up-to-date information
and reliable sources. Consequently, we believe the responsibility for implementing
a source identification system lies with the developers, while the responsibility of
institutions of higher education is to remain updated on these developments.
Regarding the second concern, which revolves around the absence of a refer-
encing model for using ChatGPT as a source of information, it is important to
note that this issue currently remains unresolved. At the time of writing, there are
no standard referencing systems specifically designed for ChatGPT or similar AI
chatbots. However, bodies such as the American Psychological Association (APA)
and the Modern Language Association (MLA) are actively engaged in developing
guidelines that address the citation and appropriate usage of generative AI tools.
In spring 2023, they provided interim guidance and examples to offer initial
direction. However, before delving into the specific efforts of teams like APA and
MLA, let’s first understand the fundamental purpose of referencing sources in an
academic paper. Referencing sources in an academic paper serves several crucial
purposes. Firstly, it allows you to give credit to the original authors or creators of
the work, acknowledging their contributions and ideas. By doing so, you
demonstrate that you have engaged with existing research and built upon it in
your own work. Secondly, referencing supports your claims and arguments by
providing evidence from reputable sources. This adds credibility to your paper
and shows that your ideas are well-founded and supported by existing literature.
Moreover, proper referencing enables readers to verify the accuracy and reli-
ability of your information. By following the references to the original sources,
they can ensure the integrity of your work and establish trust in your findings. By
citing sources, you showcase your research skills and ability to identify relevant
and reliable information. It reflects your understanding of the field and your
contribution to the broader scholarly conversation. Referencing also plays a
crucial role in avoiding unintentional plagiarism. By properly attributing the
sources you have used, you demonstrate that you have built upon existing
knowledge rather than presenting it as your own. Additionally, citing sources
contributes to the academic community by establishing connections between your
work and that of others in the field. It fosters an ongoing scholarly discussion and
helps shape the future of research. Overall, referencing is an integral part of
academic integrity, ensuring that a paper is well-researched, credible and part of
the larger academic conversation.
With this in mind, let’s take a look at what one of the leading citation and
referencing organisations is suggesting. APA suggests that when incorporating
text generated by ChatGPT or other AI tools in your research paper, you should
follow certain guidelines (McAdoo, 2023). They suggest that if you have used
ChatGPT or a similar AI tool in your research, you should explain its utilisation
in the method section or a relevant part of your paper. For literature reviews,
essays, response papers or reaction papers, they suggest describing the tool’s usage
in the introduction and that, in your paper, you should provide the prompt you
used and include the relevant portion of the text that ChatGPT generated in
response. However, they warn that it is important to note that the results of a
ChatGPT ‘chat’ cannot be retrieved by other readers and that, while in APA Style
papers, non-retrievable data or quotations are typically cited as personal com-
munications, ChatGPT-generated text does not involve communication with a
person (McAdoo, 2023). Therefore, when quoting ChatGPT’s text from a chat
session, they point out that it is more akin to sharing the output of an algorithm.
They therefore suggest that in such cases, you should credit the author of the
algorithm with a reference-list entry and the corresponding in-text citation. They
give the following example.

When prompted with “Is the left brain right brain divide real or a
metaphor?” the ChatGPT-generated text indicated that although
the two brain hemispheres are somewhat specialised, “the notion
that people can be characterised as ‘left-brained’ or ‘right-brained’
is considered to be an oversimplification and a popular myth”
(OpenAI, 2023).
Reference
OpenAI. (2023). ChatGPT (Mar 14 version) [Large language
model]. https://chat.openai.com/chat

They also suggest that in an APA Style paper, you have the option to include
the full text of lengthy responses from ChatGPT in an appendix or online
supplemental materials. They say this ensures that readers can access the precise text
that was generated, however, they note that it is crucial to document the exact text
as ChatGPT will produce unique responses in different chat sessions, even with
the same prompt (McAdoo, 2023). Therefore, they suggest that if you choose to
create appendices or supplemental materials, you should remember to reference
each of them at least once in the main body of your paper. They give the following
example:

When given a follow-up prompt of “What is a more accurate
representation?” the ChatGPT-generated text indicated that
“different brain regions work together to support various
cognitive processes” and “the functional specialisation of
different regions can change in response to experience and
environmental factors” (OpenAI, 2023; see Appendix A for the
full transcript).
Reference
OpenAI. (2023). ChatGPT (Mar 14 version) [Large language
model]. https://chat.openai.com/chat

APA also suggests that when referencing ChatGPT or other AI models and
software, you can follow the guidelines provided in Section 10.10 of the Publi-
cation Manual (American Psychological Association, 2020, Chapter 10) (McA-
doo, 2023). They note that these guidelines are primarily designed for software
references and suggest these can be adapted to acknowledge the use of other large
language models, algorithms or similar software. They suggest that reference and
in-text citations for ChatGPT should be formatted as follows:

OpenAI. (2023). ChatGPT (Mar 14 version) [Large language
model]. https://chat.openai.com/chat
Parenthetical citation: (OpenAI, 2023)
Narrative citation: OpenAI (2023)
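For readers who manage citations programmatically, APA's pattern above is regular enough to template. The helpers below are a minimal sketch of our own devising, not part of any citation-management library, and simply reproduce the reference, parenthetical and narrative forms shown above.

```python
def apa_reference(author, year, title, version, descriptor, url):
    """Build an APA-style reference entry for a software/LLM source."""
    return f"{author}. ({year}). {title} ({version} version) [{descriptor}]. {url}"


def apa_parenthetical(author, year):
    """Parenthetical in-text citation, e.g. (OpenAI, 2023)."""
    return f"({author}, {year})"


def apa_narrative(author, year):
    """Narrative in-text citation, e.g. OpenAI (2023)."""
    return f"{author} ({year})"
```

Note that even a perfectly formatted entry of this kind still points only to the tool, not to any retrievable source text, which is central to the critique that follows.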

Now, let’s examine APA’s suggestions and critique them based on the
fundamental purpose of referencing sources in an academic paper: giving credit,
supporting arguments and claims, enabling verification of source accuracy and
demonstrating proper research skills. We can do this by posing questions.

• When using ChatGPT, is it possible to give credit to the original authors or
creators by referencing sources in an academic paper?
No, ChatGPT itself cannot give credit to original authors or creators in an
academic paper. As a language model, it lacks the capability to identify or
reference external sources. Hence, it becomes the researcher’s responsibility to
ensure proper attribution of credit to the original authors or creators of the
information used in a paper.
• Is it possible to use ChatGPT to support arguments and claims with evidence by
citing reputable sources?
No. ChatGPT can provide responses based on the input it receives, which may
include references to information or data. However, ChatGPT generates text
based on the patterns it has learnt from the data it was trained on and does not
have the ability to verify the credibility or reliability of the sources it may refer
to. Therefore, researchers should independently verify and cite reputable
sources to support their arguments and claims.
• Is it possible to use ChatGPT to enable verification of the accuracy and reliability
of information by providing references to the original sources?
No, ChatGPT does not have the capability to provide references to original
sources. If information generated by ChatGPT is used in research, it is the
researcher’s responsibility to find and cite the original sources from which the
information is derived.
• Can ChatGPT be used to demonstrate research skills by properly referencing
relevant and reliable sources?
No. As a language model, ChatGPT does not have the ability to demonstrate
research skills or properly reference sources. Therefore, the responsibility for
conducting thorough research and citing relevant and reliable sources lies with
the researcher.
• Can ChatGPT be used to avoid unintentional plagiarism by citing sources and
giving credit where it is due?
No. While ChatGPT may provide responses based on the input it receives, it is
not equipped to identify or prevent unintentional plagiarism. Therefore, it is
the researcher’s responsibility to ensure that they properly cite and give credit
to the original sources of information to avoid plagiarism.
• Can ChatGPT be used to contribute to the academic community by citing
existing research and establishing connections between a researcher’s work and
the work of others in the field?
No. While ChatGPT can provide information based on the input it receives, it
is not capable of contributing to the academic community by citing existing
research or establishing connections between works. Therefore, researchers
should independently conduct literature reviews and cite relevant works to
contribute to the academic discourse.

The resounding answer to all the questions we posed above is a definitive ‘no’.
While we acknowledge APA’s well-intentioned efforts to address academic
integrity concerns by suggesting ways to cite ChatGPT, we find their recom-
mendations unfit for purpose. If the goal of referencing is to enable readers to
access and verify primary sources, APA’s suggestions do not align with this
objective. They merely indicate that ChatGPT was utilised, which demonstrates
the writer’s academic integrity but does not provide any practical value to the
reader. In fact, based on this, we believe that ChatGPT, in its current form,
should be likened to Wikipedia – a useful tool as a starting point for research, but
not to be used as a valid source for research. Therefore, we believe that to ensure
the validity of the research, ChatGPT should be seen as a springboard for
generating ideas, from which the researcher can then seek out primary sources to
support their ideas and writing. Hence, it would be more beneficial for researchers
to simply cite the sources they have fact-checked, as this approach provides
valuable information to the reader.
Now, let’s address our third area of concern, which revolves around ChatGPT
being used as a tool for idea development and writing enhancement. This raises
the question of whether a referencing system is applicable in such instances. To
shed light on this matter, we explore MLA’s suggestions on how to reference
ChatGPT when it serves as a writing tool. MLA suggests that you should: ‘cite a
generative AI tool whenever you paraphrase, quote, or incorporate into your own
work any content (whether text, image, data, or other) that was created by it;
acknowledge all functional uses of the tool (like editing your prose or translating
words) in a note, your text, or another suitable location; take care to vet the
secondary sources it cites’ (How Do I Cite Generative AI in MLA Style?, n.d.). In
our previous discussion, we have already addressed the third point. If you need to
verify the secondary sources cited by ChatGPT, why not simply use those vetted
sources in your citation and referencing, as this is more helpful for the reader.
However, we still need to explore the other aspects concerning the recommen-
dation to cite an AI generative tool. For instance, when paraphrasing or using the
tool for functional purposes like editing prose or translating words, how should
this be implemented in practice? In order to do this, let’s explore how ChatGPT
has been utilised in the writing of this book. Notably, ChatGPT was not used as a
search engine, evident from the majority of our referenced articles and papers
being published after September 2021, which is ChatGPT’s cutoff for new
information. However, it played a significant role in the research process, as
documented in the researcher-facing diary and integrated into the write-up of this
book. While we’ve already discussed its role in the research methodology and
through examples in the findings and interpretation chapter, we now focus spe-
cifically on how ChatGPT contributed to the writing process of this book. To
illustrate the full scope of its assistance, we revisit Bloom’s Taxonomy, which
provides a useful framework for mapping the most commonly used phrases we
employed with ChatGPT during the writing phase.

• Remembering
– Reword this paragraph to improve its coherence.
– Slightly shorten this section to enhance readability.
– Summarise the key arguments presented in this article.
• Understanding
– Explain the main ideas in this text using simpler language.
– Paraphrase the content of this article.
– Shorten this text but keep its core concepts.
• Analysing
– Evaluate the strengths and weaknesses of this theory and propose ways to
reinforce its main points.
– Analyse this text through the lens of (this theorist).
– Assess the effectiveness of this argument and suggest improvements to make
it more impactful.
• Evaluating
– Critically assess the clarity of this text and rephrase it for better
comprehension.
– Evaluate the impact of this section and propose a shorter version that retains
its persuasive strength.
• Creating
– Provide a more concise version of this text while retaining its core meaning.
– Summarise this chapter in a concise manner while retaining its key findings.
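A mapping like the one above also lends itself to the kind of prompt bank discussed earlier in this book. The sketch below is illustrative only: the dictionary structure and the `build_prompt` helper are our own, and institutions would of course populate such a bank with their own phrasing.

```python
# A minimal prompt bank keyed by Bloom's Taxonomy level, using a
# subset of the phrases listed above.
PROMPT_BANK = {
    "Remembering": [
        "Reword this paragraph to improve its coherence.",
        "Summarise the key arguments presented in this article.",
    ],
    "Understanding": [
        "Explain the main ideas in this text using simpler language.",
        "Paraphrase the content of this article.",
    ],
    "Analysing": [
        "Analyse this text through the lens of (this theorist).",
    ],
    "Evaluating": [
        "Critically assess the clarity of this text and rephrase it for better comprehension.",
    ],
    "Creating": [
        "Provide a more concise version of this text while retaining its core meaning.",
    ],
}


def build_prompt(level, index, target_text):
    """Combine a stored prompt with the text it should operate on."""
    return f"{PROMPT_BANK[level][index]}\n\n{target_text}"
```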

Have we referenced all instances of the examples above? No. And there are
reasons for this. As discussed in the findings, it’s crucial to go through multiple
iterations when using ChatGPT. This raises the question of whether we should
reference all the iterations or only the final one. Additionally, ChatGPT was a
constant tool throughout the writing of this book. If we were to reference every
instance, following MLA’s suggestion, the book would likely become five times
longer and mostly consist of references, which would not be beneficial to the
reader. Considering that one of the purposes of referencing is to aid the reader,
MLA’s suggestions seem unsuitable for this purpose. Indeed, referencing every
instance of ChatGPT use would be akin to a mathematician citing each time they
used a calculator, rendering the referencing of it impractical. Similarly, other
writing tools like Grammarly have not been subject to such exhaustive referencing
expectations. Following on from our mathematics example, it should be noted
that AI chatbots, including ChatGPT, have been likened to calculators for words.
However, we find this view a little simplistic. Unlike calculators, AI chatbots have
advanced capabilities that extend beyond basic tasks, reaching higher levels of
Bloom’s Taxonomy, such as applying, analysing, evaluating and creating, thereby
fulfilling tasks that are usually considered to part and parcel of what it means to
be a writer. This leads us to ask, what does it mean to be a writer in the days of
AI?
In the era of AI, the role of a writer takes on a whole new dimension, with AI
models now capable of performing tasks that were traditionally considered the
sole domain of human writers. This blurs the lines between human creativity and
AI assistance, raising concerns about potential loss of human agency in the
writing process, as evidenced by the Hollywood screenwriters’ strike, which also
highlights the risk of significant job losses. One of the key challenges of relying
solely on AI for writing is that it heavily relies on previous input, potentially
stifling new thoughts, developments and creativity. To avoid these issues, we
believe being a writer in the AI era requires embracing a collaborative approach
between human intellect and AI technology. Instead of replacing human writers,
AI can be harnessed as a supportive tool. Writers now have the opportunity to
utilise AI tools to enhance various aspects of the writing process, such as idea
generation, content organisation and language refinement. By offloading repeti-
tive and time-consuming tasks to AI, writers can dedicate more attention to
crafting compelling narratives, conducting in-depth analyses and expressing
unique perspectives. They should also actively maintain their critical thinking
abilities and originality, ensuring that AI assistance complements and augments
their creative expression, rather than replacing it. We believe that, ultimately,
being a writer in the AI era involves striking a balance between leveraging the
opportunities AI technology provides and preserving the essential human aspects
of creativity and originality in the writing process. This is exactly what we have
done in this book. However, finding this equilibrium between human writers and
AI remains a significant challenge and will shape the future landscape of writing
in ways that are yet to be fully realised.
142 The Impact of ChatGPT on Higher Education

Rethinking Plagiarism in the Age of AI


Throughout this chapter, our exploration has centred on AI detector tools,
revealing their inability to meet intended expectations. Additionally, we’ve
examined the emerging guidelines for referencing AI chatbots, such as ChatGPT,
and determined their inadequacy. This, therefore, raises questions about how we
deal with plagiarism in light of these new challenges. However, it’s important to
acknowledge that the new challenge posed by plagiarism in the AI era is not
unfamiliar in academia. Over the years, with the advent of new technologies,
similar situations have arisen, most notably following the launch of Wikipedia in
2001, followed by the rise of contract cheating sites. These events required
institutions to recalibrate their understanding of academic work and research
paradigms. Today, as work produced by AI becomes a fact of life, universities
find themselves faced with the task of adjusting their regulations, expectations and
perspectives to accommodate these technological breakthroughs. But before we
proceed further in this discussion, it’s crucial to understand the core concept of
plagiarism.
Plagiarism constitutes the act of using another’s work or ideas without giving
them appropriate recognition, often with the aim to present it as one’s own cre-
ation. This could involve directly copying and pasting text from a source without
citation, closely paraphrasing another’s work or even submitting someone else’s
entire work as your own. Many sectors, including academia and journalism, view
plagiarism as a severe ethical breach. However, plagiarism is not always a
deliberate act. It can occur accidentally, particularly if there is a lack of due
diligence or understanding about what constitutes plagiarism, how to cite
correctly or the correct way to paraphrase and reference. Until now, students have
had clear guidelines on how to cite and reference sources. They have also had the
option to utilise plagiarism detectors to review their work and make necessary
modifications. However, the advent of AI chatbots has complicated the situation.
There are currently no universally accepted referencing guidelines for citing AI,
and the reliability of AI-based detector tools is questionable. And if plagiarism is
defined as the act of utilising another’s work or ideas without giving them due
acknowledgement, how does this definition evolve when the work in question is
created by an AI and not a human? This paradigm shift blurs the traditional
understanding of plagiarism and introduces the concept of ‘appropriating’ work
from an AI, which, unlike humans, doesn’t possess identifiable authorship or an
individual identity. Interestingly, it could be posited that the AI chatbots them-
selves could be seen as ‘appropriating’ all the information they generate without
crediting the original authors. Thus, a conundrum emerges where students may be
using AI in an unethical manner without referencing it, whilst the AI itself is using
information without appropriate citation. It is this very conundrum that leads to
the multitude of challenges we are currently grappling with, concerning how we
can even cite and reference AI. So, the question arises, where does this new reality
place students and academics?
Neumann et al., in their 2023 study, acknowledged the existence of several
pending inquiries that need further exploration, such as, ‘Should text generated by
ChatGPT be considered a potential case of plagiarism?’, ‘How ought one to
reference text produced by ChatGPT?’ and ‘What is an acceptable ratio of
ChatGPT-generated text relative to the overall content?’. However, they also
recognised that numerous other questions will inevitably emerge. Yet given the current circumstances, especially AI chatbots’ inherent failure to cite their own sources, these queries appear somewhat naive. Perhaps it would be more beneficial to explore alternative
strategies. This could potentially involve designing assignments in a way that
circumvents the plagiarism issue, a topic we discuss later in Chapter 9. Alterna-
tively, we may need to concentrate our efforts on fostering AI ethical literacy, a
promising emerging field.

Cultivating Proficiency in AI Ethics


Academic integrity lies at the core of ethical academic practice. It is composed of
a set of guiding principles and conduct that nurtures honesty and ethical
behaviour within the academic community. This moral compass or ethical code of
academia embodies values such as honesty, trust, fairness, respect and responsi-
bility. From a practical perspective, academic integrity manifests when both
students and faculty members refrain from dishonest activities such as plagiarism,
cheating and fabrication or falsification of data. They are expected to own their
work, acknowledge others for their contributions, and treat all academic com-
munity members with respect. The goal of upholding academic integrity is to
cultivate an environment that fosters intellectual curiosity and growth while
ensuring that everyone’s work is acknowledged and valued. It holds a crucial role
in affirming the quality and reliability of the educational system and the research
it generates. However, as discussed above, the incorporation of new advance-
ments in AI into this environment presents complex challenges. As such, it
appears imperative that the notion of academic integrity advances in parallel with
these breakthroughs in AI, with numerous voices, including Sullivan et al. (2023)
advocating for an amplified emphasis on AI ethics literacy. This would mean
reshaping the tenets of academic integrity to cater to the unique challenges and
opportunities presented by this game-changing technology. But what exactly is AI
ethics literacy?
AI ethics literacy encapsulates the ability of users to apply ethical consider-
ations and principles in the utilisation of AI technologies. This concept underlines
the importance of understanding the intricacies of AI systems, their strengths and
limitations, in order to make informed decisions. Simultaneously, it requires the
awareness of potential ethical concerns that can stem from AI usage, such as bias,
transparency issues, privacy breaches and accountability. The essence of AI ethics
literacy is embedded in the critical thinking process, which pushes individuals to
question the usage, benefits, potential harm and the inequities that may arise from
AI implementation. It underscores the necessity of utilising AI responsibly,
respecting individuals’ rights and privacy, and discerning scenarios where AI
application may be detrimental or unethical. Furthermore, it invites an element of
advocacy and activism, championing for ethically sound AI practices and regu-
lations, while actively opposing harmful AI implementations. As AI technologies
continue to evolve and permeate various domains, cultivating AI ethics literacy
grows increasingly crucial. It serves as a conduit to ensure AI technologies are
wielded in an ethical and responsible manner, upholding human rights while
advocating for fairness and transparency. We delve deeper into this topic later in
Chapter 9, where we discuss the importance of AI literacy training for both
students and educators. However, the most logical starting point to address these
concerns is likely through established university ethics committees.

Enhancing the Role of University Ethics Committees


Chapter 2 delved into the rapid evolution of AI ethics, highlighting how the rise of
generative AI systems has spurred urgency in addressing issues like fairness, bias,
and ethics. Heightened concerns about discrimination and bias, particularly in
conversational AI and image models, have prompted ethical inquiries. Never-
theless, as we have seen, proactive measures are being taken, with increased
research, industry involvement and a surge of conferences and workshops
focusing on fairness and bias, with concepts such as interpretability and causal
inference gaining prominence in these discussions. At the same time, privacy
issues linked to AI have initiated discussions on privacy safeguards. Furthermore,
worries about how generative AI uses opaque surveillance data have also
emerged, raising copyright and creator-related concerns. This intricate ethical
landscape is expanding as AI becomes more ingrained in society, necessitating
careful oversight beyond technical aspects. In light of this complexity, we advo-
cate for an expanded role for university ethics committees. Traditionally focused
on human research and medical ethics, we believe these committees should
expand their remit to encompass various dimensions related to AI adoption in
education. We believe this should entail evaluating issues such as student data
privacy, algorithmic biases, transparency and human-AI interactions. We believe
ethics committees should also be working towards ensuring AI systems’ trans-
parency in educational contexts, assessing the inclusivity of AI-generated content
and facilitating informed consent for student data use. They should also be
leading research into AI’s ethical implications in education, including
AI-supported learning, content biases and overall student impact. Beyond
academia, broader ethical concerns encompass privacy, consent and intellectual
property. Using AI tools for student work requires compliance with regulations,
such as the EU’s General Data Protection Regulation (GDPR). Ethical
dilemmas also arise from researchers’ and students’ use of AI, raising questions
about academic integrity and ownership of AI-generated outcomes. Ownership
rights regarding information in AI systems, especially in innovative areas like
product development or patent creation, present complex challenges. These
intricate ethical considerations underscore the essential role of university ethics
committees in navigating AI’s integration in education and research. Their
expanded responsibilities will demand vigilance, adaptability and robust ethical
guidelines to ensure AI’s ethical advancement while upholding values of privacy,
fairness and intellectual ownership.
Within this chapter, we have explored several ethical ramifications associated
with AI chatbots. These include concerns with AI detection tools, complications
regarding AI ChatGPT referencing, redefining plagiarism in the AI era, fostering
AI ethics literacy and the expansion of university ethics committees. Looking
ahead, the next chapter delves into the implications regarding products.
Chapter 8

Product Implications

Ensuring Fair Access to Bots


Ensuring equitable access to artificial intelligence (AI) bots within universities is a
crucial consideration to prevent creating a digital divide among students. At
MEF, we have effectively addressed similar concerns with other applications like
digital platforms and massive open online courses (MOOCs). Our strategy
involves procuring institutional licences that encompass all students. This
approach has proven successful in achieving fair access. Presently, we are aiming
to extend this approach to AI chatbots. However, it’s worth noting that at the
moment, specific institutional agreements tailored for Chat Generative
Pre-trained Transformer (ChatGPT) are not available, leading us to explore
alternative avenues. One potential solution we are actively considering is
obtaining ChatGPT-4 licences for each instructor. This strategy would empower
instructors to share links to specific chat interactions within the tool, enhancing
classroom engagement during lessons. Nonetheless, GPT-4 currently limits the number of requests it can complete per hour, which might impact its overall utility in certain contexts. While procuring individual licences
for instructors to access specific AI chatbots may not be an ideal permanent
solution, it serves as a temporary measure until institutional licences become
available or until we consider acquiring or developing specialised bots for each
department.
We are also in the process of exploring Microsoft Bing, which has integrated
AI into its Edge browser and Bing search engine. This technology draws on the
same foundation as OpenAI’s ChatGPT, offering an AI-powered experience
accessible through mobile applications and voice commands. Similar to
ChatGPT, Bing Chat enables users to interact naturally and receive human-like
responses from the expansive language model. Although Bing Chat features have
been progressively introduced and are now widely available, our institution’s
preference for Google services makes a Google-based solution more fitting.
Google offers Bard, an experimental AI chatbot similar to ChatGPT. However,
Bard stands out by gathering information from the web. It can handle coding,
maths problems and writing assistance. Bard was launched in February 2023 and
is powered by Google’s PaLM 2 language model. While it initially used LaMDA,
Google’s dialogue model, it switched to PaLM 2 later for better performance.

The Impact of ChatGPT on Higher Education, 147–152


Copyright © 2024 Caroline Fell Kurban and Muhammed Şahin
Published under exclusive licence by Emerald Publishing Limited
doi:10.1108/978-1-83797-647-820241008
Bard is multilingual and has the ability to include images in its responses. Despite
these features, Bard faced criticism upon release. Users found it sometimes gave
incorrect information and was not as good as competitors like ChatGPT and Bing
Chat. To address these issues, Google shifted from LaMDA to PaLM 2. PaLM 2
is an improved version of Google’s language model, built upon the lessons learnt
from earlier models like LaMDA. It incorporates advancements in training
techniques and model architecture, leading to better overall performance in
understanding and generating language. We have now activated Google Bard at MEF for all of our students and instructors and, at the time of writing, are evaluating it as a possible solution.
In our ongoing efforts to secure institutional agreements with major large
language model companies, and while we are trialling the effectiveness of Bard,
it’s important to acknowledge that if this endeavour does not come to fruition by
the start of the upcoming academic year, a contingency plan will be activated. In
this scenario, instructors could launch a survey at the beginning of a course to
identify students who have registered with AI chatbots and tools and are willing
to share them with peers. By grouping students with tool access alongside those
without, the principle of equitable classroom utilisation would be upheld. This
approach carries further benefits. If our educational focus is to foster collabora-
tive engagement within student-centred classes, encouraging students to share
tools would circumvent the isolation that arises when each student interacts
individually with their bot. Instead, this practice of shared tool usage would
promote collective involvement and cooperative learning. It is also worth keeping
in mind there will always be open-source alternatives available. These currently
include BERT, T5, XLNet, RoBERTa and OpenAI’s openly released GPT-2.

Fostering Collaboration with Industries


To address the significant challenge of preparing students for an AI-dominated
world, we believe universities should adopt a proactive approach by
reverse-engineering ChatGPT and AI chatbot opportunities through industry
collaborations, as this will enable them to gain valuable insights into the evolving
skill demands and job requirements driven by AI advancements. Based on their
findings, universities will be able to reassess existing programmes and curricula to
align them with the changing needs of the job market. This process should involve
identifying crucial AI-related skills and considering their integration into various
disciplines. Additionally, universities can develop new programmes that specif-
ically address the emerging opportunities created by AI technologies. Ideally, such
courses should be developed by incorporating real-world industry problems at
their core, allowing students to work towards finding solutions by the end of the
course as part of their assessment. This has been done at the University of South
Florida where they co-created a programme with industry partners, the success of
which led to high graduate job placement and salaries (Higher Education That
Works, 2023). This type of collaborative approach also paves the way for more
students to undertake internships with these companies. Furthermore, through
their research, by identifying the types of AI bots industries are currently using,
universities can make informed decisions on whether to purchase or develop
discipline-specific bots for their departments. This will ensure that graduates are
equipped with the specialised knowledge and skills of the AI bots relevant to their
chosen fields, better preparing them for the AI-driven job market. However, it
should be noted that adopting this reverse engineering approach needs to be an
ongoing effort. Universities must continuously assess industry trends, collaborate
closely with industry partners and engage AI experts to ensure their programmes
and their AI tools remain up to date and responsive to technological
advancements.

Acquiring or Developing Specialised AI Bots


While we have discussed how currently, we are actively seeking solutions to
ensure equitable access to a generic AI chatbot for all our students, it is crucial to
acknowledge that this is an interim measure. Our research revealed that generic
AI bots may not fully align with our requirements, mainly due to their limitations
in disciplinary knowledge and cultural databases. Therefore, universities are left
with two viable choices. They can either invest in a ready-made discipline-specific
bot that caters to their specific needs, also known as a ‘walled garden’, or opt
for an empty bot that can be customised with relevant content tailored to their
department and locale. Each approach offers distinct advantages and should be
carefully considered based on the unique requirements and preferences of the
university and department. Let’s start by looking at walled gardens.
Walled garden AI represents a distinctive strand of AI, characterised by its
focused training on curated datasets. Unlike broader AI models, which draw from
extensive internet data, walled garden AI thrives on limited, carefully selected
information (Klein, 2023). This specialised AI variant has found relevance in
education due to its potential to yield more trustworthy and dependable AI tools
for both educators and students (Klein, 2023). Notably, walled garden AI under-
pins the creation of chatbots capable of delivering precise and current educational
insights. The advantages of walled garden AI within the education sphere are
manifold: its restricted dataset cultivates reliability by minimising the risk of
generating incorrect or misleading responses; the involvement of reputable orga-
nisations in its development establishes a foundation of trust for educators and
students, ensuring accuracy and dependability; and walled garden AI’s malleability
allows for personalised interactions, accommodating unique educational needs
(Klein, 2023). However, this approach is not without challenges: the focused
development of walled garden AI may result in higher costs compared to more
generalised models; the accuracy of its responses is contingent on the quality of its
training data; and potential biases stemming from the training data must be
addressed to ensure equitable outcomes (Klein, 2023). Therefore, while walled
garden AI possesses significant potential as an educational tool, a nuanced
understanding of its development and deployment is vital, as its advantages must be
weighed against the inherent challenges and considerations it entails (Klein, 2023).
BioBERT, SciBERT, ClinicalBERT, FinanceBERT, MedBERT, PubChemAI,
PatentBERT and LegalBERT are all examples of walled garden AI, each being
tailored to specific domains. These models are meticulously trained on curated
datasets that are tightly aligned with their respective domains. As a consequence,
they excel in comprehending and generating content that pertains to the distinct
subject matter they specialise in. By catering to unique domains such as biomedi-
cine, scientific research, clinical literature, finance, medicine, chemistry, drug dis-
covery, patent analysis and legal matters, these models can offer professionals and
researchers an invaluable tool for tasks that demand a profound understanding of
subject-specific information. However, it is worth noting that this specialisation
comes with both strengths and limitations, as these models might not perform
optimally when handling tasks that require a more expansive and diverse knowl-
edge base.
Instead of purchasing specialised bots, an alternative approach is to acquire an
‘empty bot’ or a pre-trained language model and conduct fine-tuning specific to
your discipline. Fine-tuning involves training a pre-trained language model on a
domain-specific data set, enabling it to better comprehend the requirements of
your field. This process saves time and resources compared to training a language
model from scratch since pre-trained models already possess a solid foundation of
language understanding. Fine-tuning builds upon this foundation, making the
model more suitable for specialised tasks within your domain. However, suc-
cessful fine-tuning relies on the availability and quality of your domain-specific
dataset, requiring expertise in machine learning and natural language processing
for optimal results. Several open-source pre-trained language models serve as
starting points for fine-tuning discipline-specific bots. Popular models like BERT,
GPT-2, RoBERTa, XLNet, ALBERT and T5 have been developed by leading
organisations and can be fine-tuned for various natural language processing tasks.
These models form robust foundations, which can be adapted to your domain by
fine-tuning them with relevant datasets. Nevertheless, considering the rapid
advancements in the field, it is imperative that universities continuously explore
the latest developments to find the most up-to-date open-source language models
suitable for their purposes.
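The fine-tuning idea described above, continuing training from already-learnt weights on a small domain-specific dataset rather than training from scratch, can be illustrated with a deliberately tiny sketch. The model here is a toy linear regressor, not a language model, and all weights and data are invented for the example:

```python
import numpy as np

def fine_tune(weights, X, y, lr=0.05, epochs=1000):
    """Continue gradient descent from 'pre-trained' weights on a
    small domain-specific dataset (mean-squared-error loss)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# 'Pre-trained' weights: imagine these came from a large general corpus.
pretrained_w = np.array([1.0, -0.5])

# Tiny invented 'domain-specific' dataset whose true weights differ.
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 2))
y = X @ np.array([2.0, 0.5])

def mse(w):
    return float(np.mean((X @ w - y) ** 2))

loss_before = mse(pretrained_w)
tuned_w = fine_tune(pretrained_w, X, y)
loss_after = mse(tuned_w)
print(loss_before > loss_after)  # adaptation reduces the domain loss
```

In practice, libraries such as Hugging Face Transformers apply the same principle to models like BERT, typically with a low learning rate and few epochs so that the general language knowledge in the pre-trained weights is preserved while the model adapts to the discipline-specific corpus.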
When considering the adoption of AI bots in universities, the decision-making
process should involve discussions with each faculty and department. Whether
opting for an institutional generic bot, purchasing a pre-trained discipline-specific
bot or acquiring an empty bot for fine-tuning, it is crucial to involve relevant
stakeholders in the decision-making process. As previously discussed, depart-
ments should proactively explore the industry landscape to identify the bots used
in their respective fields and then reverse engineer from those insights to determine
the most suitable bot type for their department. However, we understand that not
all institutions may have the financial resources to afford such endeavours. In
such cases, as a viable backup option, we recommend universities to explore free
and open-source tools available for equal access to bots for all students. The
open-source community is committed to promoting equality of access and ensures
that tools are accessible to everyone. By leveraging these open-source options,
universities can still offer students equal opportunities to engage with AI bots,
even when budget constraints exist. The key is to foster a collaborative approach
that aligns with the institution’s values and goals, ultimately enhancing the
learning experience for all students.

Providing Prompt Engineering Courses


In response to the significant growth of the prompt engineering job market,
universities should proactively offer prompt engineering courses to all students.
Prompt engineering is an intricate process that revolves around creating effective
prompts to elicit desired responses from language models, such as ChatGPT. This
practice necessitates a comprehensive comprehension of the model’s capabilities
and limitations, enabling the creation of prompts tailored for specific applications
like content generation and code completion. Engaging in prompt engineering
requires a firm grasp of the architecture of AI language models, a deep under-
standing of their mechanisms for processing text and an awareness of their
inherent constraints. Armed with this foundational knowledge, prompts can be
strategically constructed to yield outputs that are both accurate and contextually
relevant. This multifaceted process encompasses various elements. One aspect
involves becoming adept at generating text using pre-trained models and refining
them to suit specific tasks. This proficiency aids in selecting prompts that yield the
desired content effectively. Furthermore, creating prompts that lead to coherent
and pertinent responses is a nuanced endeavour. It involves accounting for
contextual nuances, specificity, phrasing intricacies and the management of
multi-turn conversations. Moreover, exerting control over model output is
crucial. This is achieved through techniques like providing explicit instructions,
employing system messages and conditioning responses based on specific key-
words. These techniques serve as navigational tools for steering the model’s
output in a desired direction. Prompts that effectively minimise biases or sensi-
tivities in responses are integral to responsible AI interactions. This ethical
dimension of prompt engineering ensures that the generated content aligns with
fair and unbiased communication. The iterative nature of prompt engineering
involves a process of experimentation, result evaluation and prompt refinement.
This dynamic cycle is instrumental in achieving intended outcomes, fine-tuning
prompts for optimal performance. Additionally, adapting prompt engineering to
various tasks, such as content creation, code generation, summarisation and
translation, necessitates tailoring prompts to align with the unique context of each
task. This adaptability ensures that the prompts are finely tuned to yield con-
textually relevant and accurate outputs. In essence, prompt engineering is a
comprehensive approach that harmonises technical expertise with linguistic
finesse. It optimises interactions with language models, yielding responses that are
not only accurate but also seamlessly aligned with the intended context and
purpose.
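Several of the techniques described above, a system message carrying explicit instructions and constraints, few-shot examples and contextual framing of the task, can be sketched as a simple prompt-assembly helper. The function name and message format below are illustrative assumptions, though the role/content structure mirrors the chat-message conventions used by many language model APIs:

```python
def build_prompt(task, context, examples=(), constraints=()):
    """Assemble a chat-style prompt: a system message with explicit
    instructions and constraints, optional few-shot example turns,
    and the user's task framed with relevant context."""
    system = "You are a precise assistant. " + " ".join(constraints)
    messages = [{"role": "system", "content": system.strip()}]
    for question, answer in examples:  # few-shot conditioning
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": answer})
    messages.append(
        {"role": "user", "content": f"Context: {context}\nTask: {task}"}
    )
    return messages

prompt = build_prompt(
    task="Summarise the passage in two sentences.",
    context="Prompt engineering tailors inputs to a language model.",
    examples=[("Summarise: 'AI is broad.'", "AI covers many techniques.")],
    constraints=["Answer in plain English.", "Cite no external sources."],
)
print(len(prompt))  # system message, two example turns, final user turn
```

A prompt engineer would then iterate: adjusting the constraints, examples and phrasing, evaluating the model's responses at each step and refining the prompt towards the intended output.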
Understanding the importance of providing prompt engineering courses to
students, at MEF University, beginning in September 2023, we will be offering the
Coursera-hosted course ‘Prompt Engineering for ChatGPT’ by Vanderbilt
University (White, 2023), providing access to all our students through our
MOOC-based programme offerings.
In this chapter, we have extensively examined critical dimensions of integrating
AI chatbots in education. This exploration encompassed the imperative of
ensuring fair access to these bots, the collaborative efforts universities should
engage in with industries to comprehend the skills and tools required of graduates,
the strategic decision-making regarding the acquisition or development of specialised AI bots and the significance of providing prompt engineering courses for
students. Looking ahead, the next chapter dives deeper into the educational
consequences stemming from the integration of AI chatbots.
Chapter 9

Educational Implications

The Impact of AI on Foundational Learning


In this chapter, we delve into the importance of adapting curricula, assessment
methods, and instructional strategies to the artificial intelligence (AI) era. A
central concern here is the potential impact on foundational learning, as highlighted both in our research findings and in Sullivan et al.’s (2023) study. As with
any technology, Chat Generative Pre-trained Transformer’s (ChatGPT’s) influ-
ence on student learning holds both positive and negative aspects. However, our
primary focus is on its significant effect on foundational learning. Within this
context, several noteworthy considerations arise regarding potential downsides, as
observed through our research. A key concern is the potential over-reliance on
ChatGPT for generating content, answers and ideas. This dependency has the
potential to hinder critical thinking and problem-solving skills, potentially leading
to reduced originality in student work and a diminished ability to effectively
synthesise information. Additionally, if students routinely turn to ChatGPT to
complete assignments, their motivation to autonomously comprehend and
research subjects may wane, potentially resulting in surface-level learning and a
limited grasp of essential concepts. Another crucial aspect is the potential
downside of excessive dependence on ChatGPT, which could diminish authentic
interactions with peers and instructors. Such interactions play a crucial role in
fostering deep learning and developing important social skills. Furthermore,
ongoing reliance on AI for communication might negatively impact language and
communication proficiency. Prolonged exposure to AI-generated content might
compromise students’ ability to express ideas coherently and engage in mean-
ingful conversations. Moreover, using ChatGPT for content creation without
proper attribution could undermine students’ ability to formulate original argu-
ments and ideas. An over-reliance on AI for creative tasks like writing or
problem-solving might suppress inherent creativity as students become more
accustomed to AI-generated patterns and concepts. At this point, we believe it is
essential to point out that writing is pivotal not only in creation but also in
learning – an approach often referred to as ‘writing-to-learn’ (Nelson, 2001).
Rooted in constructivist theory, this notion underscores the evolution of human
knowledge and communication. Whether viewed from individual cognitive per-
spectives or broader social viewpoints, the dynamic relationship between writing

The Impact of ChatGPT on Higher Education, 153–179


Copyright © 2024 Caroline Fell Kurban and Muhammed Şahin
Published under exclusive licence by Emerald Publishing Limited
doi:10.1108/978-1-83797-647-820241009
and learning involves selection, organisation and connection (Nelson, 2001).


Nelson’s work ‘Writing as a Learning Tool’ (2001) examines four dimensions of
writing’s connective nature: interlinking ideas, texts, authors and disciplines. She
identifies two primary rationales for the writing-to-learn approach: the authority
rationale, focusing on mastering subjects through writing and the authenticity
rationale, asserting that learning content and writing conventions go hand in hand
within academic fields. This reinforces the essential role of writing in learning.
Sullivan et al.’s (2023) study similarly emphasises the link between learning and
writing, highlighting writing’s role in clarifying ideas. Consequently, writing
becomes a potent tool for reinforcing knowledge. Summarising, paraphrasing or
explaining concepts in one’s own words enhances comprehension and retention.
Writing nurtures critical thinking as students analyse, evaluate and synthesise
information, refining their ability to structure coherent thoughts and support
arguments with evidence. Writing also encourages introspection and
self-assessment, enabling students to review learning experiences, identify
strengths and set improvement goals. This self-evaluation assists in pinpointing
gaps and areas for further exploration. Moreover, writing fosters creativity and
expression, providing a platform for learners to delve into ideas and emotions,
establishing a profound connection with the subject matter. Across disciplines,
writing drives problem-solving, research and analysis, as students engage in
research papers, case studies and essays, refining their capability to construct
solutions and present well-structured arguments. Writing also enhances commu-
nication skills, enabling learners to express thoughts proficiently across various
domains. Additionally, writing bolsters long-term information retention. Taking
notes, creating study guides and practising writing consolidate memory. Inte-
grating writing with other learning methods – such as diagrams and multimedia –
promotes diverse learning, enhancing comprehension through multiple avenues.
Writing also fosters metacognition, empowering students to monitor thought
processes, evaluate decisions and explore alternative perspectives. Furthermore, it
positively influences language proficiency, improving grammar, vocabulary and
sentence structure through regular writing practice.

Considering the pivotal role of writing in the learning process, concerns arise over the implications of AI tools
that support writers in these aspects. Over-reliance on AI-generated content might
undermine students’ critical thinking abilities. Depending solely on AI suggestions
and feedback could hinder thorough engagement with material and the devel-
opment of independent analytical skills. While AI tools excel in producing
structured content, they could stifle creativity and originality, resulting in stand-
ardised and repetitive writing. Overemphasis on grammatical accuracy and
adherence to rules might discourage experimentation and risk-taking in writing,
inhibiting students from exploring their unique writing style. Furthermore, heavy
reliance on AI tools might foster technological dependency, diminishing students’
self-reliance in problem-solving and learning. Ultimately, focusing exclusively on
producing well-structured written content using AI tools could overshadow the
learning process itself. Learning encompasses not just the final outcome but also
cognitive engagement, exploration and growth throughout the educational
journey. This prompts the question: what implications arise if such a scenario
unfolds?
The implications of losing foundational learning are significant and
far-reaching. This formative phase forms the bedrock for acquiring advanced
knowledge and skills, and its absence can reverberate across various aspects of
students’ academic journey and future prospects. For instance, foundational
learning provides the fundamental principles necessary for understanding com-
plex subjects. Without a robust foundation, students may struggle to comprehend
advanced concepts, leading to a surface-level grasp of subjects. Higher-level
courses typically build upon foundational knowledge; lacking this grounding
can impede success in higher education and overwhelm students with coursework.
Furthermore, foundational learning nurtures critical thinking, analytical skills
and effective problem-solving. A dearth of exposure to this phase might hinder
students’ ability to analyse information, make informed decisions and effectively
tackle intricate issues. A solid foundation also promotes adaptability to new
information and changing contexts, which becomes challenging without this
grounding. Furthermore, most professional roles require a firm grasp of foun-
dational concepts. Without this understanding, students might encounter diffi-
culties during job interviews, work tasks and career advancement. In addition,
over-reliance on AI tools like ChatGPT may hinder independent and critical
thinking, ultimately suppressing creativity, problem-solving and originality.
Language development, communication skills and coherent expression of ideas
are also nurtured during foundational learning. These skills are essential for
effective communication in both written and spoken forms. An absence of
foundational learning could lead to a widening knowledge gap that erodes con-
fidence and motivation to learn. Foundational learning also cultivates research
skills and the ability to gather credible information. Students without these skills
might struggle to locate and evaluate reliable sources independently. Beyond
academics, education contributes to personal growth, intellectual curiosity and a
well-rounded perspective. The lack of foundational learning may deprive students
of these holistic experiences. To prevent these adverse outcomes, the prioritisation
of robust foundational learning is crucial. This underscores the significance of
creating curricula, assessments and instruction that are resilient to the influence of
AI. But how can we do this?

Navigating AI Challenges Through Flipped Learning


To address concerns about students losing foundational learning due to
ChatGPT, we believe we should turn to contemporary learning methods, such as
flipped learning. Neumann et al. (2023) and Rudolph et al. (2023) suggested that
an effective way to deal with AI would be to incorporate it into modern teaching
approaches, specifically highlighting flipped learning as being a suitable approach.
Rudolph et al. (2023) proposed the use of flipped learning, as the essential classwork (the part that supports the development of foundational learning) can occur in person, emphasising multimedia assignments and oral presentations over traditional tasks. They believe taking this approach will enhance feedback and revision
opportunities, which will support foundational learning. Rudolph et al. also note
that ChatGPT can support experiential learning, which is a key aspect of flipped
learning. They suggest that students should explore diverse problem-solving
approaches through game-based learning and student-centred pedagogies using
ChatGPT. Additionally, Rudolph et al. highlight ChatGPT’s potential to
promote collaboration and teamwork, another aspect inherent in flipped learning.
They recommend incorporating group activities in which ChatGPT generates
scenarios encouraging collaborative problem-solving, as this approach will foster
a sense of community and mutual support among students. Thus, instead of seeing ChatGPT as disruptive, Rudolph et al. (2023) emphasise its potential to transform education, provided this takes place through contemporary teaching methods such as flipped learning. Based on our experience, and supported by the literature, we believe flipped learning provides a useful starting
point for the creation of curricula, assessments and instruction that are resilient to
the influence of AI.
In the research context section of our research methodology chapter, we pre-
sented the recommended stages for instructors at MEF to prepare their flipped
courses. This involves starting with Understanding by Design (UbD) and integrating Bloom’s taxonomy; Assessment For, As and Of Learning; and Gagne’s Nine Events of Instruction reordered for flipped learning. In that section, we
described how, through combining these frameworks, we can establish cohesion
between curriculum, assessment and instruction, resulting in effective learning.
But what happens when AI is involved? In UbD, instructors follow three stages:
Stage 1 – identify desired results (curriculum); Stage 2 – determine acceptable
evidence (assessment) and Stage 3 – create the learning plans (instruction).
Therefore, in addressing how to make teaching and learning AI-resilient, we go
through each of these stages, putting forward questions that can be asked at each stage to guide the decision-making process regarding how and when AI should be integrated.

Future-ready Curricula in the Age of AI


In our recommended flipped learning approach to planning, we start with Stage 1
of the UbD framework – identify desired results. Although we detailed Stage 1 in
the research methodology chapter, we revisit it briefly here for clarity. During
Stage 1, instructors establish course objectives and formulate enduring under-
standings capturing essential, lasting concepts. Essential questions are then
created to guide student exploration, fostering critical thinking and
problem-solving skills. These questions can be overarching or topical, aligning
with broader or specific subjects. Once course objectives, enduring understandings
and essential questions are set, the next step involves designing learning outcomes.
This is done using Bloom’s taxonomy as a guide, ranging from lower to higher
cognitive levels, as required. This structured process ensures the development of
effective learning outcomes that deepen understanding and guide meaningful
instruction. However, in light of the emergence of AI chatbots, such as ChatGPT,
we believe it is important that we look at each of the aspects of Stage 1 to assess if
they may be affected in any way.
Enduring understandings, representing core knowledge with lasting value, are
unlikely to change due to AI advancements. Similarly, essential questions,
designed to nurture critical thinking, will likely remain consistent despite AI’s
influence. However, the evolving AI landscape prompts a closer look at learning outcomes, leading us to ask whether those we have devised remain relevant in an AI-driven world. To investigate this further, let’s consider
the learning outcome ‘Compose a mock closing argument on a specific aspect of
language in a real-life case and justify your argument’ that was at the heart of the
end-of-course performance task on the forensic linguistics course. When the
rubric was input into ChatGPT, ChatGPT was instantly able to complete much of
this task and, in doing so, achieved the learning outcome with hardly any input
from the students. Consequently, this outcome, once valuable in real-world situations, might have lost its significance due to AI’s capabilities. In their future careers, students will no doubt use such tools to help them write closing arguments, and ChatGPT will prove useful for this purpose. Nonetheless,
in the legal sphere, delivering and justifying a closing argument verbally remains
crucial. This underscores that perhaps we should focus more on verbal delivery as a learning outcome, rather than just the written presentation, and that the outcome may be better worded as ‘Compose and deliver a
mock closing argument on a specific aspect of language in a real-life case and
justify your argument to a live audience’.
Based on this example, we propose that numerous learning outcomes in
existing courses may require re-evaluation in light of AI’s abilities. Therefore, to
assess the ongoing relevance of a learning outcome, we suggest posing a pivotal
question: ‘Is the learning outcome still relevant in an AI-driven real world?’ If the
answer is ‘yes’, we believe the learning outcome should remain the same. If the
answer is ‘no’, we recommend the instructor re-evaluate the inclusion of that
learning outcome. In such situations, we suggest the instructor should conduct a
job analysis in collaboration with industry to assess how the real world has been
influenced by AI, as was discussed in the previous chapter, and then adjust the
learning outcome accordingly.
To further comprehend the connections and implications associated with
Stage 1, we present the following flowchart (Fig. 2). This illustrates the interplay
between enduring understandings, essential questions, learning outcomes and
the emergence of AI. Its purpose is to guide us in making informed decisions
regarding AI’s impact.

AI-Resilient Assessment Strategies


As discussed in the literature review, during the ChatGPT era, certain universities
are opting for conventional exams to circumvent problems related to
ChatGPT-generated content. However, Sullivan et al. (2023) oppose the exclusive reliance on exams.

Fig. 2. Decision-Making Process for Reassessing Learning Outcomes in an AI Environment.

They propose a shift in assessment tasks to decrease susceptibility to AI tools, advocating for personalised assignments that assess critical
thinking. Given our support for the modern assessment approaches inherent in
flipped learning, we concur with their standpoint. Thus, this leads us to Stage 2 of
our flipped learning design. Stage 2 involves the instructor determining assessment evidence to establish what students know and can do. This involves instructors asking: How will we know if students have achieved the desired results? What will we accept as evidence of student understanding and their ability to use (transfer) their learning in new situations? And how will we evaluate student performance in fair and consistent ways? (Wiggins & McTighe, 1998). In Stage 2, there are two
main types of assessment: performance tasks and other evidence. Performance
tasks ask students to use what they have learnt in new and real situations to see if
they really understand and can use their learning. These tasks are not for everyday
lessons; they are like final assessments for a unit or a course and should include all
three elements of AoL, AfL and AaL throughout the semester while following the
Goal, Role, Audience, Situation, Product-Performance-Purpose, Standards
(GRASPS) mnemonic. Alongside performance tasks, Stage 2 includes other evi-
dence, such as quizzes, tests, observations and work samples (AfL) and reflections
(AaL) to find out what students know and can do. While we observed that some
issues may arise in Stage 1 regarding learning outcomes in relation to AI’s abil-
ities, in Stage 2 we start to see more concerning issues. Let’s begin by examining
end-of-course performance tasks.

End-of-course Performance Tasks


The primary goal of end-of-course performance tasks is to offer an overall
snapshot of a student’s performance at a specific point in time. While this eval-
uation can occur periodically during a course, it typically takes place at the end of
the course. However, the emergence of AI, particularly ChatGPT, has raised a
significant consideration. In instances where AI can proficiently complete these
assessments – as we’ve observed it can often do with ease – instructors may
encounter challenges in accurately measuring a student’s true learning accom-
plishment. The potential consequence of this is the inability to determine whether
students have reached the requisite level to advance to the next class or academic
year. This can subsequently have a cascading impact. If the reliability of students’
grades as indicators for prospective employers and graduate schools is compro-
mised, it can erode trust in the university system itself and lead to the depreciation
of degrees and, ultimately, the demise of universities. So what to do? Do we
discourage the use of ChatGPT? Do we crack down on cheating? We believe the
answer to both of these is ‘no’. How, then, can we deal with this? To answer this question, let’s return to our case study example from the forensic
linguistics course.
In previous courses, prior to the launch of ChatGPT, using the recommended
MEF course planning approach in line with flipped learning, the instructor
applied the GRASPS mnemonic to create the end-of-course performance task
with the aim of assessing achievement of the learning outcome ‘Compose a mock
closing argument on a specific aspect of language in a real-life case and justify
your argument’. Based on this, she set the following task:

For this task, you will take on the role of either the defence or
prosecution, with the goal of getting the defendant acquitted or
convicted in one of the cases. Your audience will be the judge and
jury. The situation entails making a closing argument at the end of
a trial. As the product/performance, you are required to create a
closing argument presented both in writing and as a recorded
speech. The standards for assessment include: providing a review
of the case, a review of the evidence, stories and analogies,
arguments to get the jury on your client’s side, arguments
attacking the opposition’s position, concluding comments to
summarise your argument, and visual evidence from the case.

In the rubric that the instructor created for this assessment, each of the criteria
was evenly weighted. To see how ChatGPT-resilient this original assessment was,
as described in the research methodology chapter, the instructor copied and
pasted the rubric into ChatGPT, in relation to specific cases, to see what it could
do. What came out was astounding. ChatGPT swiftly generated the majority of
the speech for each of the cases, including nearly all of the aspects required in the
rubric. However, she observed that its weakness lay in providing detailed evidence
related to forensic linguistics and, while it couldn’t create specific visuals per-
taining to the case, it could make suggestions for images. While this initially
seemed to render much of the existing assessment redundant, the instructor
realised the exciting potential of ChatGPT as a useful tool for students’ future
careers. She therefore decided to retain ChatGPT as a feature in the assessment
but needed to address the fact that it could handle the majority of the task. To do
this, the instructor adapted the rubric by adjusting the weighting, giving more
importance to the parts ChatGPT could not handle and reducing the weighting in
areas where it excelled. This involved assigning greater weight to the review of
evidence, including emphasising the importance of referencing primary sources
rather than solely relying on ChatGPT, as well as increasing the weighting for the
provision of visual evidence. She also realised that, in relation to the learning
outcome ‘Compose a mock closing argument on a specific aspect of language in a
real-life case and justify your argument’, she was relying on written evidence for
students to justify their argument instead of a more real-life scenario whereby they
would verbally have to justify their argument in a live setting. Therefore, she
decided to add a question and answer session after the videos were presented for
which students would be evaluated on both the questions they asked of other
students and their ability to answer the questions posed to them. This, too, was given a much higher weighting than the parts that ChatGPT was able to do. In
reflecting on the outcomes of the redesigned assessment/rubric with the Spring
2022–2023 class, the instructor was pleased with the assessments the students
produced. However, a recurring observation was that most students defaulted to
directly reading from their prepared scripts during the video presentations – a
tactic that would not translate well to real-world scenarios. Consequently, the
instructor has planned to conduct live (synchronous, online) presentations in the
subsequent semester to bolster the students’ public speaking skills and remove the
reliance on reading from partially AI-generated scripts. At this point, it should be
noted that if, as suggested in Stage 1 above, the instructor had carefully analysed her learning outcomes via the decision-making process for reassessing learning outcomes in an AI environment (Fig. 2), this issue would not have arisen.
Another crucial factor to consider in the design of an end-of-course perfor-
mance task, not explored in the example from our case study, is the potential
involvement of AI, like ChatGPT, in performing certain elements of the task that
could prove beneficial in students’ future careers but, through doing so, would
compromise the student’s foundational learning. In such scenarios, how should
educators proceed? In such situations, we suggest creating a closed environment
where the use of ChatGPT or similar AI chatbots is neither allowed nor possible.
This might involve conducting assessments in tightly controlled settings or
blocking internet access. This is particularly important if the assessment
takes the form of writing. Even better, instructors should opt for assessments
emphasising hands-on skills, practical experiments, and tactile tasks – areas where
AI, even advanced systems like ChatGPT, faces inherent limitations. These
evaluations require physical presence and direct manipulation, rendering them
resistant to AI interference. Consequently, such assessment methods become
AI-resilient, promoting authentic understanding and practical application of
knowledge.
Based on our discussions above, we propose the following flowchart to assist
with the decision-making process involved in designing end-of-course perfor-
mance tasks in an AI environment (Fig. 3).
When considering the creation of end-of-course performance tasks, we have
suggested using the GRASPS mnemonic from UbD. We believe this approach is
valuable as it incorporates real-world contexts into assessments, which is partic-
ularly important as it equips students with essential skills for their future careers.
However, an even more advantageous approach would involve centring the
assessment around a genuine industry problem, which can be ascertained by
collaborating with industry, as outlined in the previous chapter. This approach
will not only enhance authenticity but also equip students to tackle real-world
challenges in various industries, thereby enhancing their job readiness. Our main
aim as educators is to provide students with the tools needed to tackle the chal-
lenges they will face in life. By addressing future issues, our students will be better
prepared for upcoming job opportunities.

Pre-class Quizzes
In the preceding section, we explored the design of end-of-course performance
tasks in the context of AI. However, Stage 2 of UbD planning also involves the
process of determining other evidence to assess students’ learning. Within the
framework of flipped learning, a significant aspect of this involves pre-class
quizzes (AfL). Therefore, we briefly revisit the steps for implementing this pro-
cess here. During the pre-class or online phase, each unit commences with an
overview and introduction of key terms.

Fig. 3. Decision-Making Process for Designing End-of-Course Performance Tasks in an AI Environment.

Students then engage in a prior knowledge activity to gauge their comprehension. Following this, concepts are introduced through videos or articles, with accountability ensured via AfL, often
in the form of brief quizzes offering automated feedback. This approach gua-
rantees that students possess the requisite knowledge before attending class. Our
experience indicates that grading pre-class quizzes heightens student engagement
with pre-class materials. However, with ChatGPT’s ability to provide quick
responses, a challenge arises regarding pre-class quizzes, as students might solely
rely on it without engaging with the pre-class resources. This gives rise to two
challenges. Firstly, if students take this shortcut, they are circumventing the
learning process. Secondly, the outcomes of the pre-class quizzes are essential
tools for instructors to assess their students’ grasp of concepts before the class.
This insight reveals what students comprehend, identifies misconceptions and
highlights areas of confusion. Consequently, it empowers instructors to tailor
their teaching approach using ‘just-in-time’ instruction to address these gaps.
However, this adaptive teaching strategy becomes pointless if students have not
completed the pre-class quizzes themselves. So, how should this issue be
approached?
To address this, we believe modifying question types becomes essential. Instead
of closed-answer questions, instructors can prompt students to describe real-life
applications of what they have learnt. Alternatively, integrating online interactive
elements like group discussions or debates in the pre-class assessments can pro-
mote collaboration and sharing of perspectives, which ChatGPT cannot replicate.
This personalisation fosters higher-order thinking and discourages over-reliance
on ChatGPT. Nonetheless, this approach requires manual grading by instructors,
as automated systems can often only handle closed-ended questions. Conse-
quently, it increases the instructor’s workload, which might be challenging for
classes with large student sizes. Striking a balance between personalised assess-
ments and feasible grading strategies is crucial to maintain the benefits of active
engagement while managing the workload efficiently. Another way to address this
challenge is by shifting the pre-class quizzes from an online setting to the class-
room at the start of each lesson. In this approach, students need to engage with
the pre-class material and take notes before class, which they will then use to
answer quiz questions in class. To ensure that ChatGPT cannot easily complete
these quizzes, instructors could introduce time constraints by implementing
interactive quiz tools like Kahoot or Mentimeter. These tools are useful as they
not only encourage quick information processing but also provide valuable data
for the instructor to assign grades based on each student’s responses. Another
effective approach is to have students create visuals to demonstrate their under-
standing of the pre-class content. For instance, students can develop mind maps,
flowcharts, concept maps, timelines, Venn diagrams, bar charts, infographics,
storyboards or labelled diagrams. These activities are useful, as they promote
recall and comprehension of the subject matter while fostering visual represen-
tation, which enhances students’ overall understanding. Moreover, engaging in
these activities gives students a strong incentive to interact with the pre-class
materials before attending the lesson, as the creation of visuals itself deepens their engagement with the content. Additionally, instructors can collect the visuals from students at the
beginning of the class, after completing the activities and use them to assign
grades. In an online setting, students can take photographs of their hand-drawn
visuals and submit them to the instructor for evaluation. This way, assessment
becomes more holistic, encouraging both deep learning and creative expression.
Nonetheless, it’s important to recognise a drawback of conducting pre-class
quizzes during the class session. This approach shortens the window for instruc-
tors to identify students’ comprehension gaps before the class starts and limits the
time available to plan for ‘just-in-time’ teaching. In this scenario, adaptation
would need to occur in real-time during the class. To demonstrate the
decision-making process that an instructor should go through to decide how to set
pre-class quizzes, we propose the following flowchart (Fig. 4).
Fig. 4. Decision-Making Process for Designing Pre-class Quizzes in an AI Environment.

Assessment As Learning
In addition to planning for the end-of-course performance task and pre-class
quizzes, Stage 2 of UbD also encompasses planning for assessment as learning.
Therefore, let’s briefly revisit what this entails. AaL in education focuses on
enhancing the learning process through assessment. Unlike traditional
post-instruction assessments, AaL integrates assessments into learning, actively
engaging students. AaL underscores students’ active participation in learning.
Self-assessment and reflection develop awareness of strengths and areas for
improvement. Students take ownership of their learning, set goals and adapt
strategies based on self-assessment. AaL employs tools like goal setting or per-
sonal reflections. These activities aid students in gauging understanding, identi-
fying confusion and linking new knowledge with prior understanding. This cycle
nurtures metacognition, crucial for independent learning. AaL brings benefits like
active engagement, self-regulation and motivation. It cultivates proactive learners
who address learning gaps and fosters a growth mindset that sees challenges as
growth opportunities. AaL empowers students, enriches comprehension and
instils lifelong learning skills. But how does the emergence of ChatGPT influence
this?
ChatGPT’s real-time feedback offers a potential advantage, promptly aiding
students in identifying their weaknesses. Yet, solely relying on AI-generated
feedback might lead to surface-level self-assessment, where students adopt sug-
gestions without grasping deeper nuances. Similarly, in terms of taking ownership
of learning and goal setting, ChatGPT’s input could be valuable. It can guide
personalised goal setting and strategies based on self-assessment outcomes.
However, overreliance on ChatGPT may disregard individual learning journeys,
limiting goal personalisation. ChatGPT’s thoughtful questions can stimulate deep
reflections, yet relying solely on AI-generated reflections might hinder authentic
self-reflection growth. Addressing these concerns involves a balanced approach,
capitalising on AI’s strengths while fostering essential skills. For instance,
ChatGPT’s real-time feedback aids timely self-assessment, but students should
critically analyse and complement AI-generated insights for deeper
self-awareness. While ChatGPT supports goal setting, maintaining students’
autonomy in shaping goals is vital. Balancing AI-generated and personal reflec-
tions is required to preserve authenticity. In summary, when navigating the
interplay between AaL and the emerging influence of ChatGPT, it becomes
evident that a balanced integration of AI’s advantages along with the nurturing of
essential skills is essential for fostering holistic student development.
As we wrap up our examination of establishing AI-resilient assessment in the
ChatGPT era, we maintain a strong conviction in the efficacy of our proposed
strategies. Moreover, we believe these strategies offer added benefits. By
spreading assessments across the semester, we can reduce end-of-semester rush,
discourage shortcuts like copying or plagiarism, and inspire students to create
original, meaningful work. This lighter assessment load can boost student confi-
dence and encourage meaningful learning. Additionally, distributing assessments
throughout the semester enhances the feedback loop. Consistent guidance from
instructors empowers students to track progress, spot areas for improvement and
align with their goals – something often missing in relying only on mid-term and
final exams. Our method smoothly integrates feedback into learning, encouraging
continuous improvement and deep understanding through repeated learning
cycles, leading to effective learning. Moreover, we believe taking this approach
will prepare our graduates for the challenges of the modern world, whereas
neglecting adaptation could leave them unprepared for rapid change.
Interestingly, education experts have been advocating for this for years, and we
believe ChatGPT might just be the push needed to make this change. However,
we would be remiss if we did not acknowledge that the implementation of these
changes often lags behind in university entrance exams, accrediting bodies and
higher education ministries. Therefore, we contend that universities have a vital
role to play in assuming leadership to advocate for these reforms, ensuring that we
collectively empower our students for triumph in a world dominated by AI.

Adapting Instruction for the AI Era


Transitioning to the next phase, Stage 3 – Instruction, let’s briefly recap the
context to aid understanding. This stage in UbD centres on designing learning
experiences that align with the objectives set in Stage 1. The following key
questions guide this stage: How will we support learners as they come to
understand important ideas and processes? How will we prepare them to
autonomously transfer their learning? What enabling knowledge and skills will
students need to perform effectively and achieve desired results? What activities,
sequence and resources are best suited to accomplish our goals? (Wiggins &
McTighe, 1998). Starting with the end-of-course performance task, the instructor
identifies key concepts and skills, then breaks these down into units to guide the
students. After that, thoughtful instructional events are designed within each unit,
utilising Gagne’s Nine Events of Instruction for effective learning. Therefore, at
this point, we revisit Gagne’s Nine Events of Instruction that we have reordered
to suit the flipped learning approach. Our approach involves the following ele-
ments. In the pre-class online stage, there should be: a unit overview; an intro-
duction to key terms; a prior knowledge activity; an introduction to the key
concepts (via video or article); and pre-class quizzes for accountability. In the
in-class stage, there should be: a start-of-class/bridging activity to review pre-class
concepts; structured student-centred practice; semi-structured student-centred
practice; freer student-centred practice; and self-reflection (AaL at end of
lesson/unit, in/out of class).
In this section, we closely examine each of these components in relation to the
potential influence of ChatGPT and the measures we can implement to prevent
any adverse impact on learning outcomes. However, since we have already dis-
cussed the pre-class element of AfL in the context of AI-proofing assessment in
the section above, our focus now shifts to exploring in-class activities. Within our
flipped learning approach, the instructor starts the class by getting the students to
review pre-class concepts by participating in start-of-class review activities to
reinforce comprehension. After that, the emphasis moves to student-centred
activities, enabling active practice and application of the learnt concepts. There-
fore, we now take a more detailed look at each of these stages.
Educational Implications 167

Start-of-class Review Activities


Start-of-class review activities, also known as bridging activities, play a crucial
role in the instructional strategies of flipped learning. Implemented at the
beginning of a class session, they serve as a seamless connection between the
pre-class content and the current lesson. Their primary objective is to activate
students’ prior knowledge, refresh their memory on essential concepts covered in
the pre-class activities and prepare them for the upcoming lesson. By engaging
students in a brief, interactive review of the pre-class material, instructors can
enhance retention and understanding, ensuring a smoother and more effective
transition to new content. To achieve this, instructors have various options for
these review activities. For instance, setting paper-based short quizzes or ques-
tions related to the pre-class key concepts can assess retention. Asking students to
create concept maps or diagrams to visualise the connections between the con-
cepts presented prior to class can foster deeper understanding. Instructors can
utilise one-minute paper prompts for students to write brief summaries of the
main points from the pre-class materials in one minute, encouraging quick recall.
Additionally, Think-Pair-Share activities can be used to prompt students to
individually recall and discuss key points in pairs or small groups. Further options
include interactive memory games or flashcards for reviewing important terms or
concepts, mini-recap presentations where students summarise the main points of
pre-class materials, or quiz bowl-style activities with questions based on the
pre-class material. The choice of activity can vary based on subject matter, class
size and teaching style, ensuring flexibility and engagement. The examples we
have suggested here are designed in a manner that ensures students cannot
effectively utilise ChatGPT to complete them, rendering these activities
ChatGPT-resilient as they currently exist. However, it is during the subsequent
in-class activities that certain challenges begin to surface.

Structured/Semi-structured Activities
In the context of flipped learning, the primary objective of in-class activities is to
allow students to apply the knowledge gained from pre-class materials. Max-
imising the effectiveness of this process entails the careful implementation of
scaffolded in-class activities. Scaffolding in pedagogy involves furnishing learners
with temporary assistance, guidance and support while they engage in learning
tasks or exercises. The overarching aim is to facilitate the gradual development of
students’ skills and comprehension, equipping them to independently tackle tasks
while progressively reducing the level of assistance as their competence and
confidence expand. Consequently, the most optimal approach to orchestrating
in-class activities follows a sequence: initiating with structured activities,
advancing to semi-structured tasks and ultimately culminating in freer activities.
Based on the insights gained from our exploratory case study, it becomes
evident that the stages involving structured and semi-structured activities are
where ChatGPT can pose the greatest hindrance to effective learning. Conse-
quently, it holds immense importance for instructors to try out their structured
activities in ChatGPT beforehand. If ChatGPT is found capable of fully
completing a task, instructors should either alter the task or devise an approach
that necessitates student engagement without relying on ChatGPT. In instances
where ChatGPT can perform certain parts of a task, instructors should seek ways
to modify the activity, directing more attention towards the aspects that ChatGPT
cannot accomplish. However, we are aware that this is easier said than done.
Hence, we circle back to instances from the Forensic Linguistics course to
examine the challenges that emerged and how the instructor effectively addressed
them.

• Vocabulary Grouping Activities
Given that the students in the course were non-native speakers, a portion of
each class was dedicated to reviewing essential vocabulary from the week’s
case. This practice not only aided students in revisiting the cases but also
ensured they were familiar with the key terms. Typically, this involved
providing students with the crucial vocabulary items and tasking them with
categorising these terms into pre-designated groups. Subsequently, they were
required to write a sentence with each word in relation to the case. However,
since ChatGPT could easily perform these tasks, the instructor modified the
activity as follows. Working in groups, students were given the words but not
the categories, and their task was to sort the words into appropriate groups.
They then compared their categorisations with those suggested by ChatGPT
and engaged in a class discussion about which groupings best encapsulated the
case’s core vocabulary. Instead of constructing sentences, the instructor
described some of the words verbally, and students had to deduce the respective
words. After that, individual students described the words to the group for
them to guess. Through this approach, ChatGPT assumed the role of a tool in
vocabulary review rather than a direct substitute for the learning process.
• Drawing Timelines
To recap on the important aspects of each case, students were asked to create
timelines using Padlet’s timeline feature. While ChatGPT could not create
visual content, it could make a list of the case’s main points. Students could
easily copy these into the timeline without thinking deeply, thus making the
activity pointless. To address this, the instructor used a two-step approach.
First, students made their timelines from memory. Then, they asked ChatGPT
to do the same. This helped students see if they had missed anything and make
changes, or to identify areas where ChatGPT was incorrect. Next, a verbal
discussion was added. Students talked about how the events on the timeline
were connected. They were asked questions about how certain events led to
others. In this way, ChatGPT was used by the students to check their work, not
complete it. The oral task prompted students to contemplate the sequence of
events and their interconnections more profoundly – an approach the
instructor had not employed before and one that enriched the learning process,
adding value to the activity.

• SWOT Analysis
In one class session, students were assigned the task of conducting a SWOT
analysis on the impact of ChatGPT on the legal industry. However, ChatGPT’s
ability to swiftly generate a SWOT analysis chart posed a challenge, as students
did not need to engage in critical thinking to get a result. To address this, the
instructor employed the following approach. Firstly, students individually
completed a SWOT analysis without relying on ChatGPT. They then shared
their findings with peers and consolidated their insights into a unified chart.
Secondly, students were provided with up-to-date videos and readings discus-
sing ChatGPT’s impact on the legal industry, which were not present in
ChatGPT’s database. Using these new resources, students refined their charts.
Only after this stage did they consult ChatGPT to create a SWOT analysis
chart. Comparing their own chart with ChatGPT’s, they sought additional
ideas and evaluated ChatGPT’s chart against their current readings, pin-
pointing any outdated information and thus critiquing ChatGPT’s limitations.
This led to a discussion on ChatGPT’s constraints. The interactive process
enhanced students’ critical thinking and extended their learning beyond
ChatGPT’s capabilities. This was further reinforced through role-playing sce-
narios, where students assumed various roles like law firm partners, discussing
ChatGPT’s potential impact on their business. This role-playing exercise
introduced complexity and context, augmenting the SWOT analysis with
nuances beyond ChatGPT’s scope. By structuring the SWOT analysis process
in a way that went beyond ChatGPT simply producing the chart, the instructor
managed to ensure that the students derived valuable insights and skills that
ChatGPT could not easily replicate.
• SPRE Reports
In the original course, the students had been tasked with writing a situation,
problem, response, evaluation (SPRE) report to summarise each case. However,
if the cases were in ChatGPT’s database, ChatGPT could do this
instantly, thereby bypassing the learning process. Therefore, the instructor took
the following approach. First, the students used ChatGPT to create a SPRE
report of the case. Then the instructor provided the students with a set of
detailed questions to guide students through each component of the SPRE
analysis that ChatGPT had produced. This encouraged the students to critique
ChatGPT’s output and to add any information that was missing, thus fostering
deeper analysis and interpretation. Where possible, the instructor provided the
students with a similar case based on the same forensic linguistics point (e.g.,
emoji) that was recent, and therefore not in ChatGPT’s database. However,
this involved scouring the internet for relevant recent cases and was not always
possible. The students then created a SPRE report for the new case and
compared the two cases to see if any changes in decisions or law were made
between the two. This required them to identify patterns, contrasts and trends
that involved higher-order thinking. The students then worked in groups,
imagining they were either the prosecution or defence for the original case and
created short notes about the forensic linguistic points from the case. They were
then mixed and conducted role plays in which they argued for or against the
linguistic point in question. This added complexity and depth to the original
SPRE analysis, making it more robust than what ChatGPT could generate
alone.

So, based on these examples, what have we learnt about how to make structured
or semi-structured in-class activities ChatGPT-enhanced or ChatGPT-resilient? We
propose that instructors do the following:

• Critically Analyse ChatGPT’s Outputs
Encourage students to evaluate ChatGPT’s suggestions critically, identifying
gaps, limitations, errors and potential improvements.
• Integrate External Resources
Have students incorporate additional materials to expand their learning
beyond ChatGPT’s database.
• Initiate Discussions and Role-Playing
Promote interactive discussions that compare ChatGPT’s insights with their
own, allowing multifaceted exploration through role-playing scenarios.
• Conduct Comparative Analysis
Guide students to compare their work with ChatGPT’s outputs, pinpointing
discrepancies and assessing accuracy.
• Evaluate Independently
Encourage students to assess ChatGPT’s suggestions against their under-
standing, fostering independent judgement.
• Synthesise Insights
Blend ChatGPT’s insights with their findings to achieve a comprehensive
understanding of the subject matter.
• Explore Case Studies and Holistic Learning
Challenge students with recent case studies not in ChatGPT’s database, while
also engaging in verbal discussions, comparisons and interactions for a
well-rounded perspective.
• Contextualise and Iterate
Encourage students to consider real-world contexts, implications and industry
changes while refining their work through iterative feedback that integrates
ChatGPT’s insights as well as their independent understanding.

To summarise, while ChatGPT served as a valuable tool in aiding specific
aspects of the activities described above, our refinements and enhancements
extend their scope beyond ChatGPT’s capabilities. These activities are designed to
foster students’ capacity to synthesise, evaluate and apply knowledge in
real-world contexts. Furthermore, they encompass discussions and scenarios that
surpass ChatGPT’s individual capabilities. Hence, the activities outlined here are
not entirely immune to ChatGPT’s influence, but rather, they are enhanced by it.
In essence, our belief is that integrating such activities will stimulate active
learning, profound comprehension and the development of skills that AI text
generators like ChatGPT are unable to duplicate. This now brings us to freer
activities. And this is where we believe AI chatbots like ChatGPT can really be
used effectively to enhance learning.

Freer Activities
Let’s begin by understanding the concept of freer activities and their significance.
These activities encourage students to creatively and independently apply their
learning, cultivating higher-order thinking and problem-solving skills. They
encompass tasks like open-ended prompts, debates, projects, role-playing and
real-world scenarios, granting students the freedom to express themselves authentically. The
objectives encompass practical knowledge application, critical thinking, crea-
tivity, effective communication, language fluency, autonomy, real-world appli-
cability and heightened engagement. Ultimately, these activities empower
students to become confident, active learners proficient in navigating diverse
challenges and contexts. In light of our previous discussion, it’s worth noting that
ChatGPT may be capable of performing many parts of these activities. And
while, as we have seen, it can serve as a tool to enhance learning, we believe that
with strategic utilisation, it holds even more potential: the potential to truly
transform the learning experience. With this in mind, let’s take a look at two
examples from our case study to illustrate this.
One of the students on the Forensic Linguistics course had been accepted on an
Erasmus programme at a Polish law school for the upcoming semester. For his
final project, he decided to use ChatGPT to prepare for his trip, and then shared
his insights during the final presentations. His aims were diverse: learning about
the university, his courses, the town and local culture to be well-prepared.
ChatGPT proved extremely useful in assisting with this. However, what truly
stood out was his innovative use of ChatGPT for language learning. Wanting to
learn basic Polish phrases, he sought advice and practised conversations with
ChatGPT. This proved highly useful for his learning, as ChatGPT served as a free
and easily accessible Polish conversation partner – a distinct advantage consid-
ering challenges in finding such practice partners in Istanbul. He described this
experience as really significantly improving his ability to learn some Polish before
his visit. This was one example of how ChatGPT was used to really transform
learning. However, the principal investigator, herself, had also found a similar use
during the analysis part of the research process. During this investigation, the
researcher referred to the insights of the four theorists to create a theoretical
framework for analysing the findings. Even though the researcher already had a
good grounding in these theories, she wanted to enhance the analysis stage. To do
this, she created custom personas for each theorist using Forefront AI (Forefront
AI, n.d.). Having developed these personalised chatbots, she used them to have
discussions about her evolving analysis, somewhat akin to conversing with the
actual theorists themselves. This had a transformative impact, pushing her
thinking beyond what she could have achieved alone. While it might have been
possible to do this without the support of AI chatbots, it would have been difficult
and time-consuming to find peers with the time and expertise to engage in these
discussions. Consequently, in both these instances, ChatGPT emerged as a
transformative learning tool, showcasing its unique ability to facilitate learning
experiences that would be challenging to achieve without the aid of AI. It went
beyond mere enhancement and truly revolutionised the learning process.
So what can we conclude from this? We believe ChatGPT shows a lot of
promise as a tool for education. It can make various in-class teaching methods
better and improve how students learn. However, we urge caution about when it is
used. This is especially important when foundational learning is involved.
Therefore, during such activities, we recommend that ChatGPT is not used, and
AI-free approaches are used instead. However, when it comes to structured and
semi-structured in-class activities, we believe ChatGPT should take on a pivotal
role: that of a guiding partner, enriching student engagement and understanding
without taking over the main learning process. In addition, integrating ChatGPT into
these types of activities can heighten student interest and involvement, leading to
distinct and interactive learning journeys. Moreover, it significantly aids in critical
thinking, prompting students to meticulously review its responses, identify gaps
and engage in discussions that further enhance their cognitive involvement.
However, we believe ChatGPT’s greatest potential lies in freer activities where
ChatGPT can act as a transformative force, enabling students to gain access to
external insights, resources and perspectives that lie outside the boundaries of
traditional learning materials. In conclusion, to accomplish learning objectives
successfully when planning for instruction, we believe instructors should follow
this sequence. Begin by avoiding AI integration in foundational
learning, such as start-of-class reviews. However, after that, move on to structured
activities that use ChatGPT but require careful assessment of ChatGPT’s outputs.
Next, progress to semi-structured tasks, encouraging students to interact with and
expand upon AI-generated ideas. Conclude with freer activities where ChatGPT
becomes a transformative tool for in-depth exploration and analysis. To assist in
making these decisions, we suggest the following chart (Table 1).

Table 1. Decision-Making Chart for Designing Instructional Activities in an AI Environment.

Stage 3: Instruction – Plan Learning Experiences and Instruction

Foundational learning activity   Avoid AI integration                AI-free zone
Structured activity              Use AI with careful consideration   AI-enhanced zone
Semi-structured activity         Interact with AI-generated ideas    AI-enhanced zone
Freer activity                   Use AI for in-depth exploration     AI-transformation zone
While we have confidence in the effectiveness of our strategies, they will
obviously need to be tested in the upcoming academic year. Despite our strategies,
the possibility of students using ChatGPT in unintended ways, such as during
AI-free activities, still exists. To address this concern, it is crucial to secure students’
support by demonstrating how ChatGPT might undermine their learning journey,
potentially leading to struggles later in their academic or professional careers. To
tackle this, we propose that instructors devote time to emphasise the significance
of foundational learning, as highlighted earlier in this chapter. Subsequently,
instructors can guide students through the provided flowchart of questions,
illustrating the reasoning behind when and how AI can complement their
learning. It is important, however, that the assessments and learning activities
offered in the courses align with the recommendations outlined in the flowcharts for
this approach to be effective. This is even more important in online courses where
instructors lack direct supervision over students’ ChatGPT usage. By helping
students understand the potential drawbacks of relying solely on AI tools,
we can secure their cooperation and safeguard the integrity of the learning pro-
cess. These aspects, therefore, should be integrated into AI literacy training
programmes, which we address later in this chapter. However, before we delve
into the details of AI literacy training, we believe it is essential to examine the
significance of prompt banks, as these will play a vital role in the training
programme.

Leveraging AI Prompt Banks


Based on our research, it is evident that the input you provide to ChatGPT
directly influences the output you receive. Without a clear context, you may not
get the desired response, and if your requests lack quality, the output may also be
subpar. Furthermore, to achieve the best results, multiple iterations are often
necessary to refine your queries. However, too often, users resort to one-shot
requests. A one-shot request in ChatGPT is a single input prompt without any
follow-up interactions. It is a standalone query where the model generates a
response based solely on that initial prompt, without prior context or conversa-
tion. One-shot requests are useful for specific tasks or quick information retrieval
but have limitations in context and continuity. For more interactive conversa-
tions, multi-turn interactions are preferred, enabling the model to maintain
context and provide accurate and coherent responses based on prior interactions.
However, users are often unaware of this. So how can we rectify this? We believe
the answer lies in providing or developing user prompt banks.
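The distinction between one-shot and multi-turn use comes down to whether earlier exchanges are re-sent as context with each new query. The sketch below illustrates this bookkeeping in Python; `send` is a hypothetical stand-in for a real chat-model API call, not an actual ChatGPT function.

```python
# Illustrative sketch: one-shot vs multi-turn requests to a chat model.
# `send` is a hypothetical stand-in for a real chat-model API call; here it
# simply reports how much conversational context it was given.

def send(messages):
    """Pretend model call: replies based on the context messages it receives."""
    return {"role": "assistant",
            "content": f"(reply based on {len(messages)} context messages)"}

def one_shot(prompt):
    """A standalone query: the model sees only this single prompt."""
    return send([{"role": "user", "content": prompt}])["content"]

class MultiTurnChat:
    """Keeps the running conversation so each reply can draw on prior turns."""
    def __init__(self):
        self.messages = []

    def ask(self, prompt):
        self.messages.append({"role": "user", "content": prompt})
        reply = send(self.messages)
        self.messages.append(reply)  # retain the model's reply as context too
        return reply["content"]

chat = MultiTurnChat()
chat.ask("Define forensic linguistics.")
follow_up = chat.ask("Now give a real example.")  # sees all three prior messages
print(one_shot("Now give a real example."))       # sees only itself, no context
print(follow_up)
```

A real deployment would replace `send` with a call to a chat-model API, but the pattern is the same: a one-shot request sends a single message, while a multi-turn chat re-sends the accumulated history so the model can stay coherent across iterations.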
A user prompt bank is a pre-prepared collection of prompts or example queries
that users can refer to while interacting with the AI language model. The prompt
bank is designed to guide users in formulating their questions or input in a way
that elicits more accurate and relevant responses from ChatGPT. The purpose of
a prompt bank is to provide users with helpful examples and suggestions on how
to structure their queries effectively. It can cover a variety of topics, scenarios or
styles of interaction that users may encounter when using ChatGPT. By having
174 The Impact of ChatGPT on Higher Education

access to a prompt bank, users can gain insights into the type of input that yields
better outcomes and enhances their overall experience with the AI model. For
instance, a prompt bank for ChatGPT might include sample prompts for seeking
information, creative writing, problem-solving, language translation and more.
Users can refer to these examples and adapt them to their specific needs, enabling
them to get the desired responses from ChatGPT more efficiently. By utilising a
prompt bank, users can feel more confident in their interactions with ChatGPT
and improve the quality of the AI’s output by providing clear and contextually
relevant input. It serves as a valuable resource for users to explore the capabilities
of the language model and maximise the benefits of using ChatGPT in various
tasks and applications.
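To make this concrete, a minimal prompt bank can be modelled as a mapping from a category (here, a Bloom’s taxonomy level) to reusable templates, with a helper that substitutes the user’s topic for the blank. The structure and names below (`PROMPT_BANK`, `fill`) are our own illustrative choices, not part of any ChatGPT interface.

```python
# Illustrative sketch of a prompt bank: each category holds reusable templates
# with a "______" blank; fill() substitutes the user's topic before sending.

PROMPT_BANK = {
    "knowledge": [
        "Define the term ______.",
        "List the main characteristics of ______.",
    ],
    "comprehension": [
        "Explain how ______ works.",
        "Summarise the main ideas of ______.",
    ],
    "evaluation": [
        "Critique the strengths and weaknesses of ______.",
    ],
}

def fill(category, topic, index=0):
    """Pick a template from the bank and substitute the topic for the blank."""
    template = PROMPT_BANK[category][index]
    return template.replace("______", topic)

print(fill("knowledge", "forensic linguistics"))
# -> Define the term forensic linguistics.
```

A departmental bank would follow the same shape, with categories and templates tailored to the discipline; students can then copy a filled prompt into ChatGPT and refine from there.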
While we are still in the process of developing user prompt banks at our
university, we offer some examples below. In creating these prompts, we have
once again drawn upon Bloom’s taxonomy. This is because by working
through Bloom’s taxonomy, the user can start with lower-level knowledge ques-
tions and gradually move to higher-level analysis, as this can lead to more
meaningful and insightful responses. We break our prompts down into two groups:
initial prompts and modifying prompts. Below, we provide examples of initial
prompts following Bloom’s taxonomy.

Knowledge:
• Define the term ______.
• List the main characteristics of ______.
• Name the key components of ______.

Comprehension:
• Explain how ______ works.
• Summarise the main ideas of ______.
• Describe the process of ______.

Application:
• Use ______ to solve this problem.
• Apply the concept of ______ to a real-life scenario.
• Demonstrate how to use ______ in a practical situation.

Analysis:
• Break down ______ into its constituent parts.
• Compare and contrast the differences between ______ and ______.
• Identify the cause-and-effect relationships in ______.

Synthesis:
• Create a new design or solution for ______.
• Compose a piece of writing that integrates ideas from ______ and ______.
• Develop a plan to improve ______ based on the data provided.

Evaluation:
• Assess the effectiveness of ______ in achieving its objectives.
• Judge the validity of the argument presented in ______.
• Critique the strengths and weaknesses of ______.

Similarly, different prompts can be used for each of the four domains of
knowledge. This is useful when aiming to enhance learning and understanding in
various subjects or disciplines. Examples include the following:

Metacognitive Knowledge:
Strategic knowledge:
• Explain how you would approach solving a complex problem in [domain/subject].

Knowledge about cognitive tasks:
• Discuss the difference between analysis and synthesis in [domain/subject].
• Explain the process of critical thinking and its importance in [domain/subject].

Appropriate contextual and conditional knowledge:
• Provide examples of when to apply [specific technique] in [domain/subject].
• Describe the factors that influence decision-making in [domain/subject].

Self-knowledge:
• Explain how I can adapt my study strategies based on [personal learning preferences].

Procedural Knowledge:
Knowledge of subject-specific skills and algorithms:
• Demonstrate the steps to solve [specific problem] in [domain/subject].
• Explain the algorithm used in [specific process] in [domain/subject].

Knowledge of subject-specific techniques and methods:
• Describe the different research methodologies used in [domain/subject].
• Explain the key steps in conducting a statistical analysis for [specific data].

Knowledge of criteria for determining when to use appropriate procedures:
• Discuss the factors that determine when to use qualitative or quantitative research methods in [domain/subject].
• Explain the conditions under which [specific technique] is most effective in [domain/subject].

Conceptual Knowledge:
Knowledge of classifications and categories:
• Categorise different types of [specific elements] in [domain/subject].
• Explain the classification of organisms based on their characteristics in [domain/subject].

Knowledge of principles and generalisations:
• Describe the fundamental principles of [specific theory] in [domain/subject].
• Discuss the generalisations made in [specific research] within [domain/subject].

Knowledge of theories, models and structures:
• Explain the key components of [specific model] used in [domain/subject].
• Discuss the major theories influencing [specific field] in [domain/subject].

Factual Knowledge:
Knowledge of terminology:
• Define the following terms in [domain/subject]: [term 1], [term 2], [term 3].
• Provide a list of essential vocabulary related to [specific topic] in [domain/subject].

Knowledge of specific details and elements:
• List the main elements that contribute to [specific process] in [domain/subject].
• Identify the key events and dates related to [historical event] in [domain/subject].

The suggestions above are for initial prompts. However, for modifications and
iterations of ChatGPT’s output, we suggest the following prompts:

Comprehension:
• Clarification Prompt: Can you please provide more details about [topic]?
• Expansion Prompt: Can you elaborate on [idea or concept]?

Application:
• Correction Prompt: Actually, [fact or information] is not accurate. The correct information is [correction].
• Rephrasing Prompt: Can you rephrase [sentence or paragraph] using simpler language?

Synthesis:
• Creative Input Prompt: Imagine a scenario where [situation] happens. Describe what would occur next.
• Alternative Perspective Prompt: Consider the opposite viewpoint of [idea or argument].

Analysis:
• Comparative Analysis Prompt: Compare and contrast [two concepts, products or solutions].

Evaluation:
• In-depth Explanation Prompt: Provide a more detailed analysis of [specific aspect or topic].
• Summary and Conclusion Prompt: Summarise the key points of your response in a few sentences.
• Continuation Prompt: Please build upon your previous response and explore [next aspect or question].
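In practice, an initial prompt and its modifying prompts form an ordered, multi-turn sequence: open with a Bloom-style initial prompt, then refine the output turn by turn. The sketch below assembles such a sequence; the prompt texts are adapted from the lists above, and the pairing logic (`refinement_sequence`) is our own illustration.

```python
# Illustrative sketch: pairing one initial prompt with modifying prompts to
# plan a multi-turn refinement sequence for a chosen topic.

INITIAL = "Summarise the main ideas of {topic}."
MODIFIERS = [
    "Can you please provide more details about {topic}?",            # clarification
    "Can you rephrase that using simpler language?",                 # rephrasing
    "Summarise the key points of your response in a few sentences.", # conclusion
]

def refinement_sequence(topic):
    """Return the ordered prompts a user would send, one per turn."""
    return [INITIAL.format(topic=topic)] + [m.format(topic=topic) for m in MODIFIERS]

for turn, prompt in enumerate(refinement_sequence("plea bargaining"), start=1):
    print(f"Turn {turn}: {prompt}")
```

Each prompt would be sent as one turn of a continuing conversation, so the model retains the context of its earlier answers while being steered towards clearer, more complete output.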

While the examples we have given above are generic and can be used across all
disciplines, we believe that the development of discipline specific prompt banks
will be more effective. As a result, one of the initiatives planned at MEF for the
upcoming academic year is to have each department create their own prompt
banks, customised to their specific disciplines and unique needs. This approach
aims to enhance students’ experiences by offering prompts that align closely with
their academic areas, ensuring more relevant and tailored interactions with
ChatGPT. However, there is an alternative option: individuals can craft their own
personalised prompt banks. Indeed, this is precisely the approach adopted by the
authors during the book-writing process.
Creating a personal prompt bank offers numerous advantages to users within
AI-driven education and learning contexts. Through the creation and curation of
their own prompts, users can tailor their learning experiences to align with their
unique goals, interests and areas of focus. This personalised approach not only
fosters a deeper sense of engagement but also allows for a more meaningful and
relevant interaction with AI systems. One of the key benefits is the opportunity for
users to customise their learning journey. By selecting prompts that cater to their
specific learning needs, they can address areas of confusion, challenge themselves
and explore subjects in greater depth. The act of curating a personal prompt bank
can itself be a motivational endeavour, as users become actively invested in
shaping their learning content. Furthermore, a personal prompt bank serves as a
dynamic tool for ongoing learning and practice. Users can revisit prompts related
to challenging concepts, reinforcing their understanding over time. As they
interact with AI systems using their curated prompts, they can refine and adapt
their bank based on the responses received, leading to improved interactions and
learning outcomes. This process encourages users to actively participate in their
learning journey and engage with AI technology. It nurtures skills like digital
literacy and adaptability, which are increasingly valuable in an AI-centric world.
Beyond immediate benefits, a well-curated prompt bank evolves into a valuable
resource, adaptable to changing learning needs over the long term. In essence, the
creation of a personal prompt bank empowers users with autonomy and agency,
facilitating a customised and enriching learning experience. It enables users to
actively shape their education, aligning it with their preferences and needs while
deepening their understanding and engagement with AI-powered learning
environments.
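To make this concrete, a personal prompt bank can be kept as nothing more than a small, structured collection of reusable templates with placeholders. The sketch below is purely illustrative: the category names, the templates and the `build_prompt` helper are our own assumptions, not a prescribed format.

```python
# A minimal personal prompt bank. The categories mirror the chapter's
# Comprehension/Analysis/Evaluation prompt types; all names here are
# illustrative assumptions, not a prescribed schema.
PROMPT_BANK = {
    "comprehension": [
        "Can you please provide more details about {topic}?",
        "Can you elaborate on {topic}?",
    ],
    "analysis": [
        "Compare and contrast {item_a} and {item_b}.",
    ],
    "evaluation": [
        "Summarise the key points of your response about {topic} in a few sentences.",
    ],
}

def build_prompt(category: str, index: int, **slots: str) -> str:
    """Fill a chosen template's placeholders with user-supplied values."""
    return PROMPT_BANK[category][index].format(**slots)

# Example: an initial prompt for a flipped-learning discussion.
print(build_prompt("analysis", 0, item_a="flipped learning", item_b="traditional lectures"))
```

Because the bank is just data, it is easy to revisit, refine and extend templates as ChatGPT's responses suggest better phrasings, which is precisely the curation cycle described above.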

Fostering AI Literacy
Our research has clearly underscored the urgent need for AI literacy training
among both students and instructors. However, what exactly does AI literacy
entail? In essence, AI literacy extends beyond digital literacy and encompasses
the ability to comprehend, apply, monitor and critically reflect on AI appli-
cations, even without the expertise to develop AI models. It surpasses mere
understanding of AI capabilities; it involves being open to harnessing AI for
educational purposes. Equipping educators and students with the confidence
and responsibility to effectively use AI tools makes AI literacy an indispens-
able skill. However, when promoting AI literacy, two primary objectives must
be taken into account. Firstly, a comprehensive exploration of how users can
adeptly wield ChatGPT as a valuable educational tool is essential. Secondly,
providing instructors with guidance on seamlessly integrating ChatGPT into
their educational practices while maintaining the integrity of their curricula,
assessments and instruction is pivotal. This ensures that students do not bypass
the essential learning process or neglect foundational knowledge. AI literacy
training within universities should be customised differently for students and
instructors; however, there will be a certain amount of crossover. For students,
it is imperative that they grasp the fundamental concepts of AI, its applica-
tions, and its potential impacts across various fields. This knowledge will
empower them to make informed decisions and actively engage with AI
technologies. Furthermore, it is crucial that we equip our students with an
understanding of the ethical implications of AI, including biases, privacy
concerns and accountability. They need to comprehend how AI technologies
can shape society and cultivate responsible usage. AI literacy should focus on
nurturing students’ critical thinking skills, as this will enable them to assess
AI-generated content, differentiate between human and AI contributions and
evaluate the reliability of information produced by AI. Regarding training for
instructors, they should acquire the proficiency to seamlessly integrate
AI tools into their teaching methods. This involves understanding AI’s
potential in enhancing learning experiences, automating administrative tasks
and providing personalised feedback to students. Instructors also need to stay
updated about AI-driven research tools, including data analysis and natural
language processing tools. This knowledge will ensure they remain abreast of
the latest advancements in their respective fields. Furthermore, instructors need
to play a pivotal role in responsibly guiding students’ use of AI tools for
academic purposes. This will include fostering originality, steering clear of
plagiarism or unethical practices and ensuring a constructive learning experi-
ence. While the core concepts of AI literacy might share similarities, the
emphasis and depth of training need to be customised differently. Students
require a comprehensive understanding to effectively navigate AI-driven
landscapes, while instructors necessitate a deeper focus on integrating AI
into their teaching and research methodologies. The approach to achieving this
will vary based on each university’s specific requirements and available tech-
nological resources. To offer some guidance, we present the framework for the
AI literacy training courses that are currently in development at MEF, which
can be found in the appendices. These encompass the following: an AI literacy
training programme for instructors (Appendix A), an AI literacy training
programme for students (Appendix B) and a proposed semester-long AI
literacy course for students (Appendix C).
In summary, this chapter has discussed crucial themes concerning the inte-
gration of AI in education. We have explored the impact of AI on foundational
learning, navigated challenges through the use of flipped learning and developed a
framework for designing future-ready curricula. We have discussed resilient
assessment strategies and the significance of adapting instruction for the AI era.
The utilisation of AI prompt banks has been emphasised, along with the need to
foster student and instructor AI literacy. This brings us to our final chapter, in
which we discuss the contributions our study has made to knowledge and research
within the realm of AI in higher education.
Chapter 10

Contributions to Knowledge and Research

Review of Research Scope and Methodology


During our research, we thoroughly examined the impact of ChatGPT and
artificial intelligence (AI) chatbots on higher education. MEF University in
Istanbul served as our research site, renowned for its integration of AI and
innovative educational approaches. Through experiments and faculty discussions,
we initiated this project to investigate how ChatGPT may affect higher education
institutions, students and instructors. Our objectives were to understand the
changes in dynamics caused by these technologies. We framed our research
questions around the roles of students, instructors and institutions of higher
education in the presence of ChatGPT. By exploring these questions, our aim was
to gain insights into the transformative impact of AI chatbots in education and
provide guidance for successful integration. To understand the big picture, we
conducted an exploration of AI and ChatGPT, including its historical develop-
ment and addressing ethical concerns like privacy, bias and transparency. We
emphasised the limitations of ChatGPT, including its potential for generating
misleading information and the challenge of addressing its shortcomings.
Furthermore, we discussed the broader implications of AI on the future of work
and education. We also touched upon the growing concerns about the threat that
AI may pose and discussed how national and international policies are starting to
be developed to mitigate such threats. To deepen our understanding, we explored
theoretical perspectives such as critical theory and phenomenology, allowing us to
examine power dynamics, social structures and subjective experiences related to
ChatGPT. Then, in our literature review, we analysed various scholarly papers on
integrating ChatGPT in education, including document and content analysis,
meta-literature reviews and user case studies, conducted within a 4-month time-
frame. This review helped us identify recurring themes, gaps in the existing
literature and areas that required further research. Our research methodology
adopted a qualitative approach, exploring subjective experiences and meanings
associated with interactions with ChatGPT. Through a case study approach, we
collected data from multiple sources, including critical incidents, researcher
diaries, interviews, observations and student projects and reflections. From the-
matic analysis, we identified six themes: Input Quality and Output Effectiveness
of ChatGPT, Limitations and Challenges of ChatGPT, Human-like Interactions
with ChatGPT, Personal Aide/Tutor Role of ChatGPT, Impact of ChatGPT on
User Learning and Limitations of a Generalised Bot for Educational Context. By
examining each theme in relation to our research questions, the data, the litera-
ture and our theoretical framework, we gained a comprehensive understanding of
the implications and significance of our findings.

The Impact of ChatGPT on Higher Education, 181–193
Copyright © 2024 Caroline Fell Kurban and Muhammed Şahin
Published under exclusive licence by Emerald Publishing Limited
doi:10.1108/978-1-83797-647-820241010

Key Insights and Findings


What we discovered is that while ChatGPT proves highly useful across various
applications, its efficacy depends on input quality and specificity. Clear prompts
lead to accurate responses, but multiple iterations and modifications may be
necessary for desired outcomes. ChatGPT operates predictively, lacking a full
grasp of context, which poses a limitation. Challenges in application include the
absence of a standardised referencing guide, potential generation of incorrect
responses and inherent biases. Users perceive ChatGPT as human-like, blurring
the lines between human and AI interaction, necessitating the promotion of
critical thinking and information literacy skills. In education, ChatGPT shows
versatility and practicality, but concerns arise that overreliance may hinder crit-
ical thinking and independent knowledge acquisition. In addition, its generalised
approach exhibits limitations across disciplines and cultural settings.
Based on these findings, it is clear that the integration of AI is poised to bring
about significant transformations in the roles of students, instructors and higher
education institutions. This transformation unfolds in several ways. Drawing
upon Christensen’s theory, students are now presented with the option to leverage
ChatGPT for specific educational tasks, ushering in an era of AI-supported time
optimisation. However, this shift also implicates Bourdieu’s concepts of cultural
and social capital, as an excessive reliance on AI could result in the replication of
knowledge without genuine comprehension, potentially affecting students’
educational habitus shaped by their socio-cultural backgrounds. When examined
through a Marxist lens, this phenomenon might signify an instance of techno-
logical determinism reshaping educational dynamics for students, possibly leading
to a decline in critical thinking and reduced engagement with the learning process.
Moreover, the act of referencing AI-generated information introduces unique
challenges, reflective of a technological framing of knowledge. Consequently,
students are entrusted with the active navigation of their technological interac-
tion, fostering authentic understanding and shifting them from passive recipients
to active participants in their educational journey. Shifting focus to instructors, as
proposed by Christensen’s Theory of Jobs to be Done, educators now have the
option to utilise ChatGPT to automate routine tasks and generate educational
materials, freeing up time for a more refined teaching approach. However, it is
vital for instructors to validate ChatGPT’s outputs, revealing areas where AI
technologies require fine-tuning. Through a Bourdieusian lens, instructors’ roles
are poised to evolve as they navigate AI’s integration into pedagogy, encom-
passing an embodiment of cultural capital. To counteract potential bypassing of
learning and preserve authenticity, teaching methodologies must evolve
accordingly. Approaching this from a Marxist perspective, AI automation's
introduction could signify a form of commodification. Nonetheless, instructors’
ongoing need to validate outputs and create effective assignments serves as a
countermeasure against complete alienation. Viewing this interplay through a
Heideggerian framework, instructors assume the responsibility of guiding stu-
dents in the judicious use of AI, ensuring technology serves as a conduit for truth
revelation rather than mere knowledge framing. As such, instructors play a
pivotal role in cultivating an educational environment characterised by authen-
ticity and thoughtful engagement, even amid technological advancements.
Broadening the perspective to encompass higher education institutions, the inte-
gration of ChatGPT offers universities an avenue to heighten productivity and
streamline educational processes. However, the challenge lies in aligning this
technological leap with the needs of both students and faculty members. This
effort resonates with Bourdieu's notions of capital and social structure,
necessitating updates to institutional policies, enhancements in assessment
methodologies and robust training initiatives. Through the lens of Bourdieu,
ChatGPT emerges as a novel form of cultural capital, enhancing institutions’
prestige and credibility. Nevertheless, upholding equitable access and addressing
biases remains pivotal to prevent the perpetuation of societal disparities. Through
a Marxist perspective, the inclusion of ChatGPT might be construed as a form of
education commodification. However, safeguarding equitable access and
nurturing critical engagement upholds the enduring value of human oversight
within the realm of education. As seen through a Heideggerian framework,
institutions shoulder the task of balancing the interplay between students,
instructors and the intrinsic essence of technology. Thus, institutions must employ
AI in a manner that uncovers truth and amplifies comprehension, all while pre-
serving the core role of human elements in education.

Theoretical Advancements
The integration of the theoretical frameworks of critical theory and phenome-
nology into our research study on the impact of ChatGPT on higher education
represents a significant stride towards advancing the theoretical discourse within
the field. By employing these philosophical lenses, our research transcends mere
examination and enters the realm of deep understanding, nuanced analysis and
holistic exploration. Through the combination of critical theory and phenome-
nology, our research embraces a multidimensional understanding of the impact of
ChatGPT. Rather than analysing this integration from a singular perspective, our
approach delves into power dynamics, subjective experiences, existential dimen-
sions and authenticity. This comprehensive exploration offers a deeper grasp of
the technology’s effects on students, educators and institutions. Critical theory’s
focus on power dynamics exposes hidden inequalities and systemic structures. By
applying this lens, our research uncovers potential disparities in the adoption and
utilisation of ChatGPT, shedding light on how technology can either perpetuate
or challenge existing hierarchies. This unveiling of hidden dynamics enriches the
discourse around technology's transformative potential within higher education.
Phenomenology’s emphasis on subjective experiences empowers our research to
transcend the superficial layer of technological implementation. By delving into
the conscious experiences of stakeholders, our study elevates the conversation
beyond technical functionalities to explore the nuances of how individuals
perceive, adapt to and interact with ChatGPT. This human-centred exploration
adds depth and authenticity to the theoretical advancements we are making.
Heideggerian philosophy introduces an existential layer to our research, urging a
contemplation of the essence of being and the profound implications of tech-
nology on human existence. This philosophical lens elevates the conversation to a
level of introspection which is not often found in empirical studies. It invites
researchers, educators and policymakers to consider the philosophical underpin-
nings of their choices and decisions regarding technology integration. The
amalgamation of these frameworks encourages a holistic consideration of the
ethical, social and personal dimensions of technology integration. Thus, our
research does not focus solely on the functionality of ChatGPT but rather on its
consequences on power structures, pedagogical relationships and the authentic
experiences of those involved. This comprehensive analysis contributes to the
theoretical advancement by promoting well-rounded and informed
decision-making. By paving the way for an in-depth exploration of ChatGPT’s
impact, our research sets a precedent for future investigations. Our approach
demonstrates the value of intertwining philosophy and technology in educational
research. Researchers interested in the interplay between emerging technologies
and education may find inspiration in our study’s philosophical underpinnings,
further advancing the theoretical discourse.

Implications for Higher Education Institutions


Overall, the implications of AI for institutions of higher education are broad and
profound, affecting various aspects of academia. These implications span across
ethical considerations, product-related adjustments and significant shifts in
educational approaches, all of which necessitate careful examination and adap-
tation. In the realm of ethical considerations, we strongly recommend refraining
from the utilisation of AI detection systems. This perspective stems from the
intrinsic opacity, inaccuracies and potential biases associated with these systems.
Moreover, we extensively assess existing recommendations for AI referencing
systems, ultimately concluding their impracticality and inefficiency. A significant
portion of our discourse centres on the re-evaluation of plagiarism within the AI
era, a challenge magnified by emerging technologies that challenge traditional
norms. This complexity deepens when AI is involved, given its absence of iden-
tifiable authorship, leading to the intriguing notion that AI’s mere existence could
be likened to plagiarism. Amid these intricate challenges, we underscore the
imperative of fostering proficiency in AI ethics among students, educators and
institutions. This not only entails comprehending the ethical ramifications of AI
but also ensuring its responsible and informed integration within academic
settings. Thus, we assert that universities’ ethics committees should play a pivotal
role in driving this transformation. With the increasing prevalence of
AI-generated content, institutions must grapple with redefining plagiarism and
attributing credit in this AI-infused age. This endeavour will necessitate a nuanced
understanding of how AI interfaces with established academic standards. When
considering the implications on product development, we firmly advocate for
universities to prioritise an equitable distribution of AI bots. This can
be achieved through the establishment of institutional agreements that grant bot
access to all instructors and students, thus ensuring universal availability or by
directing students towards readily available open sources. As AI becomes an
integral part of the educational landscape, it becomes increasingly crucial to
address product-related considerations. Ensuring fair and equal access to AI bots
becomes paramount in order to prevent any potential disparities in resource
allocation among students. Moreover, we underline the significance of universities
forging strong partnerships with industries. Recognising the influence of AI on
these sectors and identifying the skill sets that employers are seeking in graduates
will serve as valuable insights for curriculum refinement within universities. This
collaborative effort with industries becomes essential to synchronise educational
offerings with the ever-changing requirements of the job market. Such collabo-
ration is pivotal in ensuring that students are adequately equipped with the
essential AI-related competencies to excel in industries increasingly shaped by AI
technologies. Furthermore, by fostering collaboration, universities can gain
insights into the evolving utilisation of AI within specific industries. This valuable
information can subsequently inform the creation or acquisition of specialised
bots that align with industry trends. This focused approach will adeptly address
the limitations of generalised bots within the educational sphere. The idea of
discipline-specific AI bots introduces a pioneering pathway for tailored learning
experiences, offering the capacity to precisely address the unique requirements of
diverse departments and thus enhancing the integration of AI across various
academic domains. Furthermore, we strongly advocate that universities imme-
diately introduce courses in prompt engineering to students, either by developing
their own courses or by providing access to existing MOOC courses. This pro-
active measure will empower students with indispensable skills in navigating the
swiftly changing technological terrain. Simultaneously, the provision of prompt
engineering courses will significantly bolster students’ AI proficiency and deepen
their comprehension of optimal AI-interaction strategies. Within the realm of
educational implications, we strongly emphasise the imperative for institutions to
thoroughly assess the potential influence of AI on students’ foundational learning.
The conventional concept of foundational learning faces new challenges as AI
introduces novel methods and tools. Manoeuvring through these obstacles
necessitates a modification of instructional approaches that foster critical
thinking, problem-solving and creativity – skills that AI struggles to replicate as
effectively as humans. In this context, we propose the adoption of the flipped
learning approach as an effective framework to address these issues. Embracing
this approach harnesses AI tools to enrich pre-class engagement, allowing class
time to be utilised for interactive discussions, collaborative projects and hands-on
application. Furthermore, the development of curricula that prepare students for
the AI-dominated world becomes imperative. In light of AI advancements, there
may be a need to reconsider certain existing course learning outcomes. Therefore,
we propose that instructors collaborate with industry to conduct a job analysis,
aiming to assess AI’s influence on the real world. Following this assessment,
appropriate adjustments can be made to the learning outcomes to ensure their
continued relevance. To evaluate students’ content knowledge and AI literacy
skills effectively, AI-resilient assessment strategies are paramount. These strate-
gies must mirror the AI-influenced reality, requiring students to possess not only
subject expertise but also the capability to interact adeptly with AI technologies.
Similar strategies should be implemented in in-class activities to bolster their AI
resilience and prevent potential learning gaps. To provide practical guidance on
these aspects, we present a comprehensive framework accompanied by flowcharts
featuring relevant questions. These questions are designed to assist instructors in
crafting future-ready curricula, formulating assessment methods that can with-
stand the impact of AI and adapting teaching methodologies to align with the AI
era. Furthermore, we highlight the vital role of creating prompt banks as an
invaluable resource to enhance the optimal utilisation of AI. The creation of
prompt banks holds considerable significance in maximising the effectiveness of
AI systems. These banks comprise a collection of well-crafted and diverse
prompts strategically designed to guide interactions with AI platforms like
ChatGPT. Acting as initial cues, these prompts stimulate AI-generated responses,
suggestions or solutions. We propose the development of discipline-specific
prompt banks tailored to the unique needs of various academic departments
within universities. Alongside this, we advocate for encouraging students and
instructors to curate their personalised prompt banks, catering to their individual
preferences and requirements. Furthermore, we emphasise the significance of
fostering AI literacy among both students and instructors, albeit with distinct
aims. Students should become adept at effectively utilising ChatGPT, while
instructors should seamlessly incorporate it into their teaching methods. Tailored
training is essential: students should grasp AI fundamentals, applications, ethics
and critical thinking, while instructors should excel in AI tool integration and
ethical guidance. Consequently, training depth should vary, addressing students’
AI navigation and instructors’ integration expertise. The approach will be
contingent upon each university’s unique requisites and available resources. To
facilitate this, we have provided recommendations for training programs and
courses in the appendices, equipping both instructors and students with the
essential skills for proficient AI utilisation. In essence, the integration of AI in
higher education marks a transformative phase. Institutions must navigate the
ethical, product-related and educational implications thoughtfully to ensure that
students are prepared for a future shaped by AI technologies. This journey
requires a delicate balance between embracing technological advancements and
upholding the core values of education. However, we believe the role of univer-
sities goes beyond just what is discussed here. There is a much bigger picture that
needs to be addressed.

Global Action and Collaboration


Within the realm of AI, a wide array of concerns currently exists, spanning
various dimensions. As AI technology continues to progress, more and more
ethical dilemmas are emerging, raising questions about its alignment with human
values and its potential for undesirable outcomes. We are currently seeing that AI
systems exhibit a number of limitations in critical aspects, such as common sense
reasoning, robustness and a comprehensive grasp of the world. This hinders the
creation of genuinely intelligent and dependable systems. Additionally, trans-
parency, interpretability and accountability are posing serious challenges, espe-
cially in areas like healthcare, finance and law, where they can have a significant
impact on human lives. At times, the ongoing trajectory of AI development would
appear to be prioritising specific objectives without due consideration for human
values, thereby introducing the risk of unanticipated challenges and management
complexities. Furthermore, there are also growing concerns surrounding the
potential impact of AI on job markets, the economy, governance and societal
welfare, which stem from the potential of AI to worsen prevailing inequalities and
biases. Notably, certain experts perceive AI as posing an existential threat, citing
its ability to surpass human intelligence and consequently posing significant risks
to society and humanity. This global scenario has prompted experts, governments
and even AI companies to call for regulations, and we are now starting to see the
emergence of such regulatory policies. The United Kingdom’s Competition and
Markets Authority is engaged in a thorough review of AI, centring on concerns
like misinformation and job disruptions. Concurrently, the UK government is
revising AI regulations to tackle associated risks. In the United States, White
House deliberations have involved discussions with AI CEOs regarding safety and
security, and the Federal Trade Commission is also actively involved in investi-
gating AI's impact. In addition, the European Union's AI Act is establishing a
framework that categorises AI applications by risk and advocates for responsible
AI practices.
soft and Google have collaboratively introduced the Frontier Model Forum,
dedicated to AI safety, policy discussions and constructive applications, which
builds upon contributions from the UK government and the European Union,
aligning with White House conversations and indicating an ongoing evolution
within the tech industry. Hence, it becomes evident that steps are being taken.
However, a pivotal question remains: will these regulatory efforts result in
impactful actions that effectively address potential issues and foster responsible
AI advancement? It would seem not.
Currently, there are worrying indications pointing towards potential negative
outcomes, exemplified by the reduction in Microsoft’s ethics team and Sam
Altman’s reservations regarding EU AI regulations. These instances underscore
the substantial influence wielded by companies and even individual figures. This
echoes the very concern voiced by Rumman Chowdhury, who has highlighted an
industry trend where entities paradoxically advocate for regulation while
concurrently lobbying against it, often prioritising risk assessment over ethical
considerations. However, this concentration of power driven by resources could
lead to biases and adverse outcomes if not vigilantly managed. Hence, Chowd-
hury proposes a redistribution of power through collaborative stakeholder
engagement. Karen Hao echoes these sentiments, expressing apprehension about
tech giants’ influence over advanced AI technologies. She calls for transparent
and inclusive AI policy shaping that involves a diverse range of stakeholders,
underlining the essential role of varied perspectives in promoting responsible AI
development. Harari (2018) also conveys concerns about potential challenges
associated with technological advancement. He asserts that sociologists, philosophers
and historians have a crucial role in raising awareness and addressing the
self-promotion frequently presented by corporations and entrepreneurs in regard
to their technological innovations. He underscores the urgency of swift
decision-making to effectively regulate the impact of these technologies, guarding
against their imposition on society by market forces. This matter holds utmost
significance at present, given the swift progress of ChatGPT in the AI industry,
which is catalysing a competition among other companies to adopt and cultivate
large language models and generative AI. This rapid course may surpass the
responsiveness of government policies in addressing these advancements
promptly. This brings us back to our AI experts: Max Tegmark, Gary Marcus,
Ernest Davis and Stuart Russell. In his 2017 book Life 3.0: Being Human in the
Age of Artificial Intelligence, Tegmark lays out frameworks for responsible AI
governance, stressing the importance of AI conforming to ethical principles that
prioritise human values, well-being and societal advancement. He highlights the
significance of transparency and explainability in ensuring humans understand
AI’s decision-making process. To this end, he proposes aligning Artificial General
Intelligence (AGI) objectives with human values and establishing oversight
mechanisms. Tegmark envisions a collaborative approach involving a diverse
range of stakeholders, including experts and policymakers, to collectively shape
AGI regulations, with a strong emphasis on international cooperation (Tegmark,
2017). He advocates for adaptable governance frameworks that can keep pace
with the evolving AI landscape. Tegmark’s overarching goal is to harmonise AI
with human values, preventing misuse and fostering societal progress, all while
recognising the continuous need for interdisciplinary discourse and fine-tuning in
AI governance (Tegmark, 2017). Marcus and Davis advocate for a comprehen-
sive re-evaluation of the AI research trajectory, suggesting an interdisciplinary
path that addresses the limitations inherent in current AI systems (Marcus &
Davis, 2019). Their approach involves integrating insights from various fields like
cognitive science, psychology and linguistics, aiming to create AI systems that
better align with human cognitive processes. They introduce a significant concept
– the ‘hybrid’ approach to AI advancement, which combines rule-based systems
and statistical methodologies (Marcus & Davis, 2019). This fusion aims to harness
the strengths of both approaches while mitigating their weaknesses. Their vision is
that such a hybrid methodology could yield more intelligent and reliable AI
systems capable of effectively handling complex real-world scenarios (Marcus &
Davis, 2019). Russell introduces the concept of value alignment theory, a
fundamental aspect of AI ethics (Russell, 2019). This theory centres on the vital
objective of aligning AI systems with human values and goals. It underscores the
Contributions to Knowledge and Research 189

necessity of designing AI systems to reflect human intentions and desires,
preventing potential negative outcomes and ensuring their ethical operation (Russell,
2019). At its core, value alignment theory seeks to ensure that AI systems not only
achieve their designated objectives but also consider the broader context of
human values and ethics; it acknowledges the potential for AI systems, particu-
larly as they gain autonomy, to pursue goals in ways that may conflict with
human intentions (Russell, 2019). Russell’s work advocates for AI systems that
comprehend and honour human values by incorporating mechanisms for learning
from human interaction and feedback. This approach also emphasises trans-
parency and interpretability, allowing humans to understand AI decision-making
processes and intervene if needed. Russell’s focus on value alignment aims to
avert scenarios where AI systems act contrary to human values, fostering a
human-centric approach to AI development that amplifies human capabilities
while upholding ethical standards (Russell, 2019). Given these experts’ proposed
solutions to the challenges inherent in AI, what role should universities play?
We believe universities have a critical role to play in the context of responsible
AI development and governance, if not a moral obligation, and suggest that they
contribute to solutions in the following ways. First, universities can act as hubs of
research and education, contributing to the advancement of AI technologies while
also instilling ethical considerations. They can offer interdisciplinary programmes
that merge computer science, ethics, cognitive science, psychology and other
relevant fields, encouraging students to think critically about the societal impacts
of AI. In line with Tegmark’s ideas, universities can facilitate collaborative efforts
by bringing together experts, policymakers and various stakeholders to discuss
and formulate regulations for AI governance. They can host conferences, semi-
nars and workshops that promote international cooperation and the exchange of
ideas to shape adaptable governance frameworks, addressing the evolving land-
scape of AI. Marcus and Davis’ call for an interdisciplinary approach aligns with
universities’ ability to foster collaboration between different departments and
faculties. Universities can encourage joint research initiatives that combine AI
expertise with insights from fields such as psychology and linguistics to create AI
systems that better emulate human cognitive processes. Universities can also play
a pivotal role in advancing value alignment theory, as proposed by Russell. They
can contribute to research and education around the ethical dimensions of AI,
training future AI developers and researchers to prioritise human values and
societal well-being. Furthermore, universities can provide platforms for discus-
sions on the moral implications of AI, fostering a culture of transparency and
accountability in AI development. Overall, universities have a responsibility to
serve as knowledge centres, promoting interdisciplinary research, ethical consid-
erations, international cooperation and transparency. We believe their role should
expand beyond technical expertise alone, encompassing the broader aspects of
holistic development and responsible governance of AI technologies. This is our
call to action.
Addressing Limitations
While our study’s findings proved pertinent and resulted in the development of
strategies for implementing ChatGPT at our institution, it is essential to
acknowledge and address certain research limitations. The study took place at a
specific English-medium non-profit private university in Turkey, renowned for its
flipped learning approach. While the insights gained are valuable, it’s crucial to
recognise that the unique context may limit the generalisability of the results to
other educational settings. One notable limitation encountered during the
research process was the limited availability of literature on ChatGPT at the
time of the study. This scarcity can be attributed to the recent public launch of
ChatGPT and the restricted time frame for conducting the literature review. As a
result, the review partially relied on grey literature, including pre-prints, poten-
tially affecting the comprehensiveness and depth of the analysis. The study also
employed intentionally broad and open-ended research questions to facilitate an
exploratory investigation. While this approach allowed for a comprehensive
exploration, it’s vital to acknowledge the potential for bias in interpreting the
findings due to the dual role of the principal investigator, serving as both the
researcher and instructor. Additionally, the study’s reliance on a small sample size
of 12 students from an elective humanities class focused on forensic linguistics
poses a limitation. It’s essential to recognise that outcomes may have varied in
larger classes or different disciplines. Furthermore, the sampling of other voices,
including instructors and administrators, was opportunistic, based on critical
incidents, emails, workshops and ad hoc interactions. Finally, it’s worth noting
that the research was conducted over a single semester, which may restrict the
longitudinal analysis of ChatGPT’s impact on education. To address these limi-
tations in future studies, we will make the following adaptations. Firstly, to make
our findings more applicable across diverse educational settings, we will include a
broader range of academic disciplines. To ensure a strong theoretical foundation,
we will continuously monitor reputable sources for the latest research on
ChatGPT and related AI technologies, updating our literature review accord-
ingly. By combining quantitative and qualitative approaches, we will gain a more
holistic understanding of ChatGPT’s impact. Integrating numerical data with rich
narratives from students, instructors and administrators will provide a compre-
hensive view of the technology’s effectiveness and challenges. To maintain
objectivity, we will incorporate greater reflexivity during data collection and
analysis by involving multiple researchers and using triangulation methods to
validate and cross-check findings from these different perspectives. Strengthening
the study’s validity and representativeness can be achieved by including a larger
and more diverse participant pool, encompassing students from various disci-
plines and academic levels, educators and decision-makers. Gaining deeper
insights into ChatGPT’s effects over time can be achieved through long-term
investigations. Observing changes, adaptations and potential challenges will
provide a nuanced understanding of the technology’s long-term implications. The
exploration of our suggested pedagogical strategies for effectively integrating
ChatGPT in education is of utmost importance. By investigating how these
proposed changes will impact teaching and learning, we can gain valuable insights
for further practical implementation. By incorporating these improvements into
our future research, we can enrich our understanding of ChatGPT’s impact in
education and offer valuable insights for educators and institutions seeking to
effectively utilise AI technologies.

Recommendations for Further Research


Our study has opened the door to various avenues for further exploration and
investigation into AI chatbots in higher education. Here, we present potential
research directions that could extend the scope of our findings and offer valuable
insights to fellow researchers.

• Address Ethical Implications


There is an urgent need to investigate the ethical implications of AI integration
in education, including data privacy, student consent, algorithmic bias and
social-cultural impacts. Therefore, investigations should be extended to explore
challenges faced by instructors and institutions when using AI chatbots and
ensuring ethical practices in data privacy and bias mitigation for responsible AI
use in education.
• Long-Term Effects of AI Integration
It would be pertinent to investigate the long-term effects of AI integration in
education, understanding how the use of AI chatbots evolves over time and its
impact on student learning outcomes and instructor practices. A mixed
methods approach could provide a comprehensive perspective, combining
quantitative data on outcomes with qualitative insights on user experiences.
• Cultural Variations in Bots
Another potential area for future research would involve examining how the
cultural nuances present in various AI bot databases should be considered by
institutions when selecting systems. Such a study would offer valuable insights
into how different AI bots might fit within diverse cultural contexts, thus
influencing their effectiveness in educational settings.
• Application in Other University Areas
There is room to investigate the potential applications of ChatGPT beyond
education, such as in customer service or administrative tasks within the uni-
versity. This could involve assessing the benefits, challenges and impact of
integrating AI technologies in these areas, which could lead to increased effi-
ciency and improved user experiences.
• Investigating the Integration of AI Bots into Digital Platforms
Digital platform companies, such as Pearson, are actively working on inte-
grating AI chatbots into their platforms. Exploring the potential benefits and
challenges of this integration for teaching and learning would be valuable once
these new versions of the platforms have been released.
• Impact on Instructors’ Roles and Career Development
The long-term impact of AI integration on the roles and responsibilities of
instructors should be assessed to investigate how this may affect career
development and job satisfaction in the academic field. This research could
provide insights into potential opportunities for professional growth and
adaptation.
• Collaborative Functionality of ChatGPT and AI Chatbots
We believe it would be pertinent to explore the collaborative functionality of
ChatGPT and other AI chatbots that allow sharing chats among multiple
individuals. This could involve an investigation into its application by students,
instructors, researchers and institutions for collaborative learning, knowledge
sharing and collective problem-solving within educational contexts.
• Effects of ChatGPT and AI Chatbots on Language Learning
We believe it is imperative to investigate the potential effects of ChatGPT and
AI chatbots on language learning. Given their capabilities in translation,
summarisation, improving writing and acting as conversational partners,
further research is recommended to understand the positive and negative
impacts on language acquisition and proficiency.
• Investigating the AI-Human Communication Interface
Another avenue for future research that we believe holds particular interest and
significance would be an exploration of how the integration of AI and tech-
nology, such as chatbots, in education aligns with Heidegger’s theory on
technology and the essence of being. Investigating how human–chatbot rela-
tionships impact student learning outcomes could shed light on the extent to
which technology’s ‘enframing’ tendency affects authentic human interactions
and understanding. Additionally, delving into the application of communica-
tion and symbolic interaction theories within the context of AI-mediated
interactions, as discussed by Tlili et al. (2023) and Firaina and Sulisworo
(2023), could offer insights into how technology shapes our perception of
communication. Furthermore, an examination of media theory in education,
also raised by Firaina and Sulisworo (2023), could provide a deeper under-
standing of how technology, as a form of media, influences interactions and the
acquisition of information. Lastly, aligning educational priorities, as advocated
by Zhai (2022), with Heidegger’s concerns about technology’s impact on
human essence could offer insights into how to strike a balance between
enhancing skills and preserving the authentic human experience in the era of
AI-driven education. This future research direction has the potential to
contribute to a comprehensive understanding of the implications of AI inte-
gration in education, both from a technological and philosophical perspective.

By pursuing these future research directions, we believe the field can gain a
more comprehensive understanding of AI’s influence in education and develop
strategies for harnessing AI’s potential while safeguarding the core values of
quality education and human-centric learning.
Final Reflections and Concluding Remarks


Our research, centring on the implications of AI with a particular focus on AI
chatbots like ChatGPT, has provided meaningful insights into the pervasive
impact of AI on our educational institutions, industries and societies. As we have
seen, AI acts as both a tool and a transformative element, reshaping traditional
learning environments, redefining roles and challenging established norms.
However, our findings extend beyond the confines of academia, addressing
broader global dialogues on AI’s wide-ranging effects. We’ve investigated the
challenges and opportunities arising from AI’s rapid development, from potential
job displacement to the restructuring of power dynamics. Emphasising the
importance of pre-emptive governance, inclusive decision-making and strong
adherence to ethical principles, we’ve outlined crucial considerations for shaping
AI’s future trajectory. We applied the theories of Christensen, Bourdieu, Marx
and Heidegger as guides to dissect AI’s transformative potential, its influence on
power structures and the profound existential questions it prompts. This theo-
retical approach equipped us to navigate the complex landscape of AI’s rapid
evolution, providing a useful framework for others who seek to understand and
engage with these issues. We identified universities as key players in these critical
discussions. Their influential role in shaping knowledge and driving innovation
positions them as leaders in responsible AI adoption and governance. They have
the potential to direct AI’s trajectory in a way that promotes collective interests,
empowers individuals and enhances societal well-being. As we all grapple with
these changes – students, educators, researchers, policymakers and society as a
whole – we must remember the key message of this research: AI is not just a tool
to be used but a significant force to be reckoned with, demanding our under-
standing, engagement and responsible navigation. As we look to the future, we
must remember that we are the authors of the AI narrative. We have the capacity
to ensure that AI technologies are shaped to reflect our shared values and aspi-
rations. The importance of our research lies in its call for a conscious and
intentional approach to AI integration. It’s a call to envision and actively work
towards a future where AI promotes greater equity, understanding and shared
prosperity. This research is a starting point, a guide that we hope will assist future
endeavours towards responsible AI governance and a future where AI is inte-
grated beneficially into all facets of our lives.
Appendices

Appendix A: AI Literacy Training for Instructors


Based on our research, we suggest the following AI literacy programme for
instructors.

Course Name
Mastering AI Literacy for Teaching and Learning

Overall Educational Objective


By the end of the course, you will have gained the comprehensive knowledge
and skills to integrate AI chatbots into your educational setting for effective
learning and teaching.

Course Format
This course will be delivered as an asynchronous online programme, providing
educators with the flexibility to engage with the content at their own pace. The
course materials will be accessible through the university’s learning management
system, allowing participants to learn, reflect and practice in a self-directed
manner. To enhance engagement and interaction, live workshops will be con-
ducted throughout the semester, focusing on each aspect of the course content.
These workshops will provide an opportunity for participants to ask questions,
engage in discussions and receive real-time guidance.

Course Description
This is a dynamic and immersive course that equips educators with a deep
understanding of AI chatbot technology and its ethical implications in
educational contexts. From foundational concepts to advanced strategies,
this course takes educators on a transformative journey through the world of
AI chatbots. Participants will explore how AI chatbots are reshaping the
education landscape, from personalised learning experiences to efficient
administrative support. The course delves into the ethical dimensions of AI
integration, addressing concerns like privacy, bias and accountability. Educators
will gain practical insights into how AI can amplify teaching meth-
odologies, enhance student engagement and revolutionise assessment
strategies. This course is designed to empower educators to seamlessly inte-
grate AI chatbots into their teaching practices while ensuring ethical con-
siderations remain at the forefront. Participants will engage in interactive
units that cover the emergence and growth of AI chatbots, their potential to
enhance instructional efficiency and the ethical implications of AI-driven
education. Real-world examples will illuminate the challenges and solutions
associated with AI ethics. By the end of the course, educators will be
equipped with the skills to foster responsible AI integration, design
future-ready curricula and leverage AI tools for impactful instruction.

Enduring Understanding
Empowering educators with AI chatbots involves mastering their practical
applications, understanding ethical implications and adapting teaching
practices for an AI-enhanced educational landscape.

Essential Questions

• How can educators effectively integrate AI chatbots into their instructional


practices while upholding ethical considerations?
• What ethical dilemmas arise from AI chatbot usage in education, and how
can educators navigate them?
• In what ways can AI chatbots enhance student engagement, feedback and
support in the learning process?
• What opportunities and challenges does AI pose for teaching and learning,
and how can educators harness its potential?
• How can educators design curricula that prepare students for an
AI-integrated future and foster critical AI literacy?
• What strategies can educators employ to navigate AI-enhanced assessment
methods while ensuring fairness and transparency?
• How can educators adapt their teaching methodologies to create
AI-resilient learning environments that cater to diverse student needs?
• What tools, platforms and best practices are essential for seamlessly inte-
grating AI chatbots into educational contexts?
• What are the key ethical principles that educators should consider when
integrating AI chatbots into teaching practices?
• How can educators foster inclusivity and equitable access to AI chatbots
for all students?
• What collaborative opportunities exist between educational institutions
and industries in the AI-enhanced education landscape?
Course Learning Outcomes and Competences


Upon completing the course, educators will be able to:

• Integrate AI chatbots into their teaching methodologies to enhance student


engagement, feedback and learning support.
• Critically assess and address ethical challenges related to AI chatbot
integration in education.
• Leverage AI chatbots for personalised learning experiences, efficient
administrative tasks and innovative assessment strategies.
• Create AI-enhanced curricula that prepare students for a technology-driven
future while fostering critical AI literacy.
• Adapt instruction methods to create AI-resilient learning environments
that accommodate diverse student needs.
• Implement AI-resilient assessment strategies that ensure fairness and
transparency.
• Collaborate with industries to enhance educational practices through AI
integration.

Course Contents

(1) Unit 1 – Understanding AI Chatbots in Education


• Introduction to AI Chatbots in Educational Settings
• Exploring AI’s Role in Enhancing Teaching and Learning
• Defining the Ethical Imperative in AI Adoption for Educators
(2) Unit 2 – Navigating the Educational Landscape with AI Chatbots
• The Emergence and Evolution of AI Chatbots in Education
• AI’s Potential to Transform Instructional Efficiency and Student
Engagement
• Challenges and Opportunities in Integrating AI Chatbots in Education
(3) Unit 3 – Ethical Considerations in AI Chatbots for Educators
• Ethics in AI Integration: Privacy, Bias and Accountability
• Navigating Ethical Dilemmas in AI-Driven Education
• Real-World Ethical Challenges and Solutions in AI Integration
(4) Unit 4 – Enhancing Student Engagement and Learning with AI Chatbots
• Personalising Learning Experiences through AI Chatbots
• Leveraging AI for Effective Feedback and Support
• Ethical Implications of AI-Enhanced Student Engagement
(5) Unit 5 – Adapting Teaching for the AI Era: Strategies and Challenges
• Designing Curricula for an AI-Integrated Future
• Creating AI-Resilient Learning Environments
• Ethical Dimensions of AI-Driven Teaching Methodologies


(6) Unit 6 – AI-Enhanced Assessment Strategies: Fairness and Transparency
• Redefining Assessment with AI Integration
• Navigating AI-Resilient Assessment Strategies with Ethical Integrity
(7) Unit 7 – Collaborations in AI-Enhanced Education
• Industry–Educator Partnerships for Effective AI Integration
• Fostering Collaborations to Enhance Educational Practices through
AI
(8) Unit 8 – AI Ethics: Guiding Principles for Educators
• Key Ethical Principles in AI Integration for Educators
• Ensuring Equitable Access to AI Chatbots for All Students
(9) Unit 9 – Building AI Literacy: Preparing Students for an AI-Driven
Future
• Fostering Critical AI Literacy Skills Among Students
• Equipping Students to Navigate the Ethical and Technological
Dimensions of AI

In summary, this AI literacy course will empower educators to seamlessly


integrate AI chatbots into their teaching methods, emphasising ethics and
equipping them with essential skills for delivering effective instruction in the
AI era, while also nurturing critical AI literacy in their students.

Appendix B: AI Literacy Training for Students


Based on our research, we suggest the following AI literacy programme for
students.

Course Name
Mastering AI Literacy for Learning

Overall Educational Objective


By the end of this course, you will have gained the skills to engage effectively
with AI, evaluate its ethical implications, enhance learning through AI
strategies and critically assess its limitations. You will be able to implement
AI as a tool for creativity and efficiency, while also recognising and
addressing potential instances of bypassing learning, fostering responsible
learning practices in the AI era.
Course Format
This course will be delivered as an asynchronous online programme,
providing students with the flexibility to engage with the content at their own
pace in accordance with their schedules. The course materials will be acces-
sible through the university’s learning management system, allowing students
to learn, reflect and practice in a self-directed manner.

Course Description
The primary goal of this course is for students to develop essential skills that
enable effective engagement with AI, ethical evaluation and strategic
enhancement of learning through AI strategies. By the end of this course,
students will have gained the ability to critically assess AI’s limitations,
leverage its potential for creativity and efficiency and ensure responsible
learning practices that guard against potential shortcuts. Through a flexible
online format, students will explore the transformative role of AI in educa-
tion, its impact on learning strategies, and how to navigate its ethical con-
siderations, empowering them to harness the power of AI while promoting
responsible learning practices in the AI era.

Enduring Understanding
In the AI era, mastering AI literacy equips learners with skills to engage,
collaborate with and effectively adapt to AI, enhancing learning strategies in
a rapidly evolving technological landscape while safeguarding against
potential learning shortcuts.

Essential Questions

• How does mastering AI literacy empower learners to engage with and


collaborate effectively alongside AI in various contexts?
• What specific skills are essential for learners to effectively adapt to the
evolving AI landscape while ensuring the integrity of their learning
journey?
• How can AI literacy enhance learning strategies to meet the demands of a
rapidly changing technological environment?
• What potential shortcuts in learning could arise in the presence of AI, and
how can learners guard against them?
• In what ways does AI literacy contribute to learners’ ability to critically
evaluate and utilise AI tools for educational purposes?
• How can learners strike a balance between leveraging AI’s benefits and
maintaining the depth and quality of their learning experiences?
• What ethical considerations should learners keep in mind while collaborating
with AI in their learning processes?
• How can AI literacy foster a sense of responsibility and active participation
among learners in shaping the future of education within an AI-driven era?
• What strategies can learners employ to effectively navigate the
ever-evolving landscape of AI technologies in their learning endeavours?

Course Learning Outcomes and Competences


Upon completing the course, students will be able to:

• Identify and assess the specific skills required to effectively engage with and
adapt to AI, fostering collaboration and informed decision-making.
• Evaluate the ethical implications of utilising AI in learning processes,
demonstrating an awareness of responsible AI usage and potential
challenges.
• Develop strategies to enhance learning experiences through AI, including
optimising input quality, output effectiveness and personalised interactions.
• Critically appraise the limitations and challenges associated with AI tech-
nologies, recognising the importance of reliability and ethical considerations.
• Implement AI as a personal aide and tutor, applying AI tools to enhance
creativity, efficiency and knowledge acquisition.
• Identify instances where the use of AI could lead to bypassing learning and
formulate strategies to mitigate them, thereby fostering responsible
learning practices in the AI era.

Course Contents

(1) Unit 1 – Embarking on the AI Chatbot Journey


• Introduction to AI Chatbots
• Challenges with AI Chatbots
• Purpose and Scope of the Course
(2) Unit 2 – Navigating the Landscape of AI Chatbots
• Emergence and Growth of Chatbots
• AI’s Impact on the Job Market
• AI’s Impact on Education
(3) Unit 3 – Input Quality and Output Effectiveness of AI
• Enhancing User Experience: The Role of Context in AI Interactions
• Crafting Quality Input: Maximising AI Output Effectiveness
• Iterative Refinement: Enhancing AI Interactions through User Learning
• Developing a Personalised Prompt Bank
• Tailoring AI Interactions to Individual Needs
(4) Unit 4 – Navigating Limitations and Challenges of AI
• Understanding AI’s Output Challenges
• Ensuring Reliable Information from AI
• Ethical Considerations in AI
• Limitations of Current Tools and Systems
(5) Unit 5 – Understanding Perceived Human-Like Interactions with AI
• Perceiving Human-Like Interactions: Dynamics of AI Communication
• Interpreting ChatGPT’s Output: Opinions vs Predictions
(6) Unit 6 – AI as a Personal Aide and Tutor: Enhancing User Experiences
• Enhancing User Ideas and Efficiency
• Versatility and Support Beyond Academics
• Feedback, Enhancement and Knowledge Impartation
(7) Unit 7 – Navigating AI’s Impact on Learning and Responsibilities
• Understanding AI’s Impact on Learning
• Strategies for Avoiding Bypassing Learning in the Age of AI
• Learner Responsibilities in the AI Era

In summary, this AI literacy course for students will equip them with
essential skills to engage effectively with AI, evaluate its ethical implications,
enhance learning strategies and navigate potential challenges in the AI era.

Appendix C: AI Literacy Course for Students


Based on our research, we suggest the following AI literacy course for
students.

Course Name
Mastering AI Chatbots

Overall Educational Objective


By the end of this course, you will attain a thorough mastery of AI chatbots,
including a strong comprehension of their technological underpinning, issues
related to their development and challenges regarding their societal impact, as
well as the ability to adapt and use these tools in a variety of
contexts.
Course Format
The course is scheduled for one semester and will adopt the flipped learning
approach. Classes can be conducted synchronously online.

Course Description
The primary goal of this course is for students to become adept with AI
chatbots and learn how to effectively use and apply these tools in various
situations. Throughout the course, students will become proficient in AI
chatbots, from grasping their technology to using them strategically. On the
course, students will explore the basics of chatbots while also assessing their
impact on education, jobs and society. They will delve into how AI is
influencing them personally, as well as individuals and their relationships with
learning, technology and society. Additionally, students will discover ways to
practically enhance their AI user experience, all while considering ethical
concerns. They will also investigate the limitations and challenges of AI
chatbots and their role in learning. Ethical considerations and real-world
examples will be discussed to provide insights into AI chatbot development.
Moreover, students will examine AI threats, ethical guidelines, and the
responsibilities that educators, schools and universities have in this context.
They will also explore upcoming trends and innovations in AI chatbots,
preparing them for the ever-changing landscape of AI technology. The course
will emphasise hands-on experience, and by the end of it, students will have
configured and trained an AI chatbot to meet their individual needs.
Consequently, students will have acquired skills in AI comprehension, critical
thinking, ethical considerations and practical application, enabling them to
navigate the world of AI effectively.

Enduring Understanding
Mastering AI chatbots involves understanding their impact on people, societies and ethics, and grasping the broad effects of technological progress.

Essential Questions

• What trends shape AI chatbot advancements and their impact on various domains?
• How can AI chatbots be used for effective interactions and high-quality
responses?
• What ethical considerations arise from AI chatbot limitations and how can
they be addressed?
• In what ways do AI chatbots imitate human-like interactions, and what
sets apart opinions from predictions in their output?
• How can AI chatbots enhance user efficiency, support and ideas across
different contexts?
• What is the impact of AI on learning, and what are learners’ responsibilities in this context?
• What is the scope of generalised and specialised AI chatbots, given their limitations and cultural contexts?
• What ethical challenges arise during AI chatbot development, and how do
real-world examples provide insights into these challenges?
• What threats do AI chatbots pose, and why is ethical policy crucial in
managing them?
• How can universities contribute responsibly to AI chatbot development
and ethical discussions?
• What tools, platforms and practices can be used to develop AI chatbots?
• How are emerging trends and technologies shaping the future of AI
chatbot technology and integration?

Course Learning Outcomes and Competences


Upon completion of the course, students will be able to:

• Develop strategies for fostering a responsible learning approach that adapts to the influence of AI technology.
• Analyse the influence of AI on individuals, education, society, industries
and the global landscape.
• Critically evaluate real-world AI dilemmas and appraise current efforts to
address these dilemmas.
• Configure and train a basic AI chatbot tailored to their specific needs using
a variety of tools and platforms.
• Demonstrate proficient use of AI chatbots through effective interaction
strategies, ethical considerations and informed decision-making.

Assessment

(1) Pre-class Quizzes – Before Each Unit (20%)
(2) Participation Activities – Throughout the Course (40%), to include:
• A critique of an AI detection tool (5%)
• A personal reflection discussing how ChatGPT was used as a search engine, then fact-checked against primary sources (5%)
• A reflection paper on the importance of foundational learning, how AI may affect it and what should be done to safeguard it (5%)
• A critique of a flowchart for AI-resilient teaching and learning (5%)
• The development of a personalised prompt bank (5%)
• A personal diary on how ChatGPT was used for tasks outside the realm of education (5%)
• A SWOT analysis of generalised versus specialised bots (5%)
• Participation in a debate on the ethical implications of a real-world AI dilemma, such as: Should self-driving cars be programmed to kill if it means saving the lives of more people? or Should facial recognition software be used to track people’s movements? (5%)
(3) End-of-Course Performance Task – Configuring an AI Chatbot for Personal Use (40%)
The objective of the end-of-course performance task is to personalise an
existing AI chatbot to your specific needs. Stepping into the role of an AI
enthusiast, you will customise the chatbot’s interactions, responses and
functionalities, demonstrating your ability to adapt AI technology
effectively. This task embodies the course’s key concepts, providing a
hands-on opportunity to apply acquired knowledge in a practical scenario.
Ultimately, you will create a personalised AI assistant aligned with your
interests and requirements.
The assessment standards are as follows:
• Functionality and Customisation
The configured AI chatbot demonstrates a clear understanding of the
chosen scenario or context. The interactions and responses of the
chatbot are relevant and aligned with the specific needs of the scenario.
The chatbot effectively handles a variety of user inputs and provides
appropriate responses.
• Tools and Platforms Proficiency
The student has effectively utilised a variety of tools and platforms to
configure the chatbot. There is evidence of technical proficiency in
setting up and integrating the chatbot with relevant technologies.
• Rationale for Design Decisions
The design decisions made in configuring the chatbot are clearly
explained. The student justifies the choice of chatbot functionalities,
responses and interactions based on the scenario’s requirements.
• Consideration of User Experience
The chatbot provides a user-friendly and seamless experience for
interacting with users. Provisions are made for handling user queries
effectively, maintaining context and providing appropriate assistance.
• Ethical Considerations
Potential ethical concerns related to the chatbot’s interactions and
responses are addressed. Safeguards are in place to prevent the chatbot
from providing misleading, harmful or biased information.
• Adaptability and Future Improvements
The student discusses how the chatbot could be further improved or
adapted in the future. Suggestions are provided for refining the chat-
bot’s functionalities based on potential user feedback or changing
requirements.
• Documentation and Explanation
Clear documentation of the chatbot’s configuration process, including
tools used and setup steps, is provided. An informative explanation of
the chatbot’s functionalities, purpose and intended user experience is
included.
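One of the participation activities above is the development of a personalised prompt bank. For readers who want to keep such a bank reusable rather than as loose notes, the Python sketch below shows one possible approach: named prompt templates whose placeholders are filled in on demand. The template names and fields here are illustrative assumptions for this example only, not part of the course materials.

```python
from string import Template

# A personalised prompt bank: named, reusable templates with placeholders.
# The entries below are illustrative examples, not a prescribed set.
PROMPT_BANK = {
    "summarise": Template(
        "Summarise the following text in $length sentences "
        "for a $audience audience:\n$text"
    ),
    "quiz_me": Template(
        "Write $count multiple-choice questions that test my "
        "understanding of $topic. Include the answer key at the end."
    ),
}

def render_prompt(name: str, **fields: str) -> str:
    """Look up a template by name and fill in its placeholders."""
    return PROMPT_BANK[name].substitute(**fields)

# Build a study prompt ready to paste into a chatbot.
prompt = render_prompt("quiz_me", count="5", topic="AI ethics")
```

Because `Template.substitute` raises an error when a placeholder is left unfilled, a bank like this also nudges its owner to state every piece of context explicitly, which is the habit of crafting quality input that the course encourages.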

Course Contents

(1) Unit 1 – Embarking on the AI Chatbot Journey
• Introduction to AI Chatbots
• Challenges with AI Chatbots
• How This Course Will Help You Master AI Chatbots
(2) Unit 2 – Navigating the Landscape of AI Chatbots
• Emergence and Growth of Chatbots
• AI’s Impact on the Job Market
• AI’s Impact on Education
(3) Unit 3 – How AI Will Affect Me
• Understanding How AI Impacts Individuals
• AI and Me: Power Dynamics, Social Structures and Cultural
Influences
• AI and Myself: Exploring My Relationship with AI
(4) Unit 4 – Input Quality and Output Effectiveness of AI
• Enhancing User Experience: The Role of Context in AI Interactions
• Crafting Quality Input: Maximising AI Output Effectiveness
• Iterative Refinement: Enhancing AI Interactions through User
Learning
(5) Unit 5 – Developing a Personalised Prompt Bank
• Tailoring AI Interactions to Individual Needs
(6) Unit 6 – Navigating Limitations and Challenges of AI
• Understanding AI’s Output Challenges
• Ensuring Reliable Information from AI
• Ethical Considerations in AI
• Limitations of Current Tools and Systems
(7) Unit 7 – Understanding Perceived Human-Like Interactions with AI
• Perceiving Human-Like Interactions: Dynamics of AI Communication
• Interpreting ChatGPT’s Output: Opinions vs Predictions
(8) Unit 8 – AI as a Personal Aide and Tutor: Enhancing User Experiences
• Enhancing User Ideas and Efficiency
• Versatility and Support Beyond Academics
• Feedback, Enhancement and Knowledge Impartation
(9) Unit 9 – Navigating AI’s Impact on Learning and Responsibilities
• Understanding AI’s Impact on Learning
• Strategies for Adapting to AI in Education
• Learner Responsibilities in the AI Era
(10) Unit 10 – Generalised Versus Specialised Bots
• Understanding Generalised Bots: Scope and Limitations
• Addressing Limitations: Disciplinary Context Constraints
• Cultural Considerations: Challenges for Generalised Bots
• Investigating Specialised Bots
(11) Unit 11 – Ethical Considerations in AI Chatbots
• Understanding Ethical Challenges in AI Chatbot Development
• Navigating Ethical Dilemmas: Development and Use
• Real-World Examples: Ethical Dilemmas in Action
(12) Unit 12 – AI Threats, Policy and the Role of Universities
• AI Threats and the Call for Ethical Policy
• Exploring the Imperative for Ethical AI Policy
• Uniting Ethical Insights: Universities and AI Discourse
(13) Unit 13 – Future Trends and Innovations in AI Chatbots
• Emerging Trends and Advancements in AI Chatbot Technology
• Integration of AI Chatbots with Emerging Technologies
• Evolving Roles of AI Chatbots across Industries
(14) Unit 14 – Configuring and Training Your Own AI Chatbot
• Tools and Platforms for AI Chatbots
• Configuring and Training a Basic AI Chatbot
• Addressing Best Practices, Customisation Options and Troubleshooting
(15) Units 15 and 16 – Presentation and Critique of Personalised Chatbots
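The configuration work in Unit 14 can be previewed in miniature. Most chat-style APIs accept a list of role-tagged messages, so "configuring" a personal chatbot largely amounts to fixing a system prompt and maintaining a conversation history. The Python sketch below builds such a message list; it deliberately stops short of calling any real service, and the role/content message shape follows a widely used convention rather than any particular vendor's contract.

```python
from dataclasses import dataclass, field

@dataclass
class PersonalChatbot:
    """Holds the configuration for a personalised chatbot: a system
    prompt plus the running conversation history."""
    system_prompt: str
    history: list = field(default_factory=list)

    def add_user_message(self, text: str) -> None:
        self.history.append({"role": "user", "content": text})

    def add_assistant_message(self, text: str) -> None:
        self.history.append({"role": "assistant", "content": text})

    def build_messages(self) -> list:
        """Return the message list a chat-style API would receive:
        the system prompt first, then the conversation so far."""
        return [{"role": "system", "content": self.system_prompt}] + self.history

# Example: a study-buddy configuration.
bot = PersonalChatbot(
    system_prompt="You are a patient tutor. Answer briefly and "
                  "always end with a follow-up question."
)
bot.add_user_message("What is a neural network?")
messages = bot.build_messages()
```

A student could hand `bot.build_messages()` to whichever chat API their chosen platform exposes, keeping the personalisation (the system prompt and history) separate from the vendor-specific call.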

In summary, this AI literacy course will empower students to become
proficient in AI chatbots. They will grasp the technology, address challenges
and explore ethical dimensions. With practical skills and critical thinking,
students will adeptly adapt AI chatbots, understand their societal impact and
navigate their use in education and beyond.
References

Abdul, G. (2023, May 30). Risk of extinction by AI should be global priority, say
experts. The Guardian. https://www.theguardian.com/technology/2023/may/30/risk-
of-extinction-by-ai-should-be-global-priority-say-tech-experts
Aceves, P. (2023, May 29). ‘I do not think ethical surveillance can exist’: Rumman
Chowdhury on accountability in AI. The Guardian. https://www.theguardian.com/
technology/2023/may/29/rumman-chowdhury-interview-artificial-intelligence-
accountability
Adamopoulou, E., & Moussiades, L. (2020). An overview of chatbot technology. In
IFIP advances in information and communication technology (Vol. 584). https://
doi.org/10.1007/978-3-030-49186-4_31
Alshater, M. M. (2022). Exploring the role of artificial intelligence in enhancing
academic performance: A case study of ChatGPT. https://ssrn.com/abstract=4312358
Althusser, L. (1971). Lenin and philosophy, and other essays. New Left Books.
Anyoha, R. (2017, August 28). Science in the news [Harvard University Graduate
School of Arts and Sciences]. The History of Artificial Intelligence. https://
sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/
Armstrong, P. (n.d.). Bloom’s taxonomy. Vanderbilt Center for Teaching. https://
cft.vanderbilt.edu/guides-sub-pages/blooms-taxonomy/
Baker, T., & Smith, L. (2019). Educ-AI-tion rebooted? Exploring the future of artificial
intelligence in schools and colleges. Nesta. https://media.nesta.org.uk/documents/
Future_of_AI_and_education_v5_WEB.pdf
Bellan, R. (2023, March 14). Microsoft lays off an ethical AI team as it doubles down
on OpenAI. TechCrunch. https://techcrunch.com/2023/03/13/microsoft-lays-off-an-
ethical-ai-team-as-it-doubles-down-on-openai/
Bensinger, G. (2023, February 21). ChatGPT launches boom in AI-written e-books on
Amazon. Reuters. https://www.reuters.com/technology/chatgpt-launches-boom-ai-
written-e-books-amazon-2023-02-21/
Bhuiyan, J. (2023, May 16). OpenAI CEO calls for laws to mitigate ‘risks of
increasingly powerful’ AI. The Guardian. https://www.theguardian.com/
technology/2023/may/16/ceo-openai-chatgpt-ai-tech-regulations
Bida, A. (2018). Heidegger and “Dwelling”. In Mapping home in contemporary
narratives. Geocriticism and spatial literary studies. Palgrave Macmillan.
Bloom, B., Engelhart, M., Furst, E., Hill, W., & Krathwohl, D. (1956). Taxonomy of
educational objectives: The classification of educational goals. Handbook 1.
Cognitive domain. David McKay Company.
Blueprint for an AI Bill of Rights. (n.d.). The White House. https://www.white
house.gov/ostp/ai-bill-of-rights/
Bourdieu, P. (1978). The linguistic market; a talk given at the University of Geneva in
December 1978. In Sociology in question (p. 83). Sage.
Bourdieu, P. (1982). Les rites d’institution. Actes de la recherche en sciences sociales, 43, 58–63.
Bourdieu, P. (1986). The forms of capital. In J. Richardson (Ed.), Handbook of theory
of research for the sociology of education (pp. 241–258). Greenwood Press.
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative
Research in Psychology, 3(2), 77–101.
Brockman, G., & Sutskever, I. (2015, December 11). Introducing OpenAI. OpenAI.
https://openai.com/blog/introducing-openai
Brown-Siebenaler, K. (2023, March 28). Will ChatGPT AI revolutionize engineering
and product development? Here’s what to know. PTC. https://www.ptc.com/en/
blogs/cad/will-chatgpt-ai-revolutionize-engineering-and-product-development
Cassens Weiss, D. (2023, March 16). Latest version of ChatGPT aces bar exam with
score nearing 90th percentile. ABA Journal. https://www.abajournal.com/web/
article/latest-version-of-chatgpt-aces-the-bar-exam-with-score-in-90th-percentile
Christensen, C. (1997). The innovator’s dilemma: When new technologies cause great
firms to fail. Harvard Business School Press.
Christensen, C., Hall, T., Dillon, K., & Duncan, D. S. (2016). Competing against luck:
The story of innovation and customer choice. HarperCollins.
Cousineau, C. (2021, April 15). Smart courts and the push for technological innovation in China’s judicial system. Center for Strategic and International Studies. https://www.csis.org/blogs/new-perspectives-asia/smart-courts-and-push-technological-innovation-chinas-judicial-system
Creswell, J. W., & Poth, C. N. (2016). Qualitative inquiry and research design: Choosing among five approaches. Sage Publications.
D’Agostino, S. (2023, May 19). Colleges race to hire and build amid AI ‘Gold Rush’.
Inside Higher Ed, Online.
Elster, J. (1986). An introduction to Karl Marx. Cambridge University Press.
EU AI Act: First regulation on artificial intelligence. (2023, June 8). European
Parliament News. https://www.europarl.europa.eu/news/en/headlines/society/
20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
Fauzi, F., Tuhuteru, L., Sampe, F., Ausat, A., & Hatta, H. (2023). Analysing the role
of ChatGPT in improving student productivity in higher education. Journal of
Education, 5(4), 14886–14891. https://doi.org/10.31004/joe.v5i4.2563
Felten, E., Manav, R., & Seamans, R. (2023). How will language modelers like
ChatGPT affect occupations and industries? ArXiv. General Economics, 1–33.
https://doi.org/10.48550/arXiv.2303.01157
Firaina, R., & Sulisworo, D. (2023). Exploring the usage of ChatGPT in higher
education: Frequency and impact on productivity. Buletin Edukasi Indonesia
(BEI), 2(1), 39–46. https://journal.iistr.org/index.php/BEI/article/view/310/214
Forefront AI. (n.d.). Forefront AI chat. https://chat.forefront.ai/
Fowler, G. (2023, March 16). How will ChatGPT affect jobs? Forbes. https://
www.forbes.com/sites/forbesbusinessdevelopmentcouncil/2023/03/16/how-will-
chatgpt-affect-jobs/?sh=7fc6d501638b
Gagne’s 9 events of instruction. (2016). University of Florida Center for Instructional
Technology and Training. http://citt.ufl.edu/tools/gagnes-9-events-of-instruction/
Gates, B. (2023, March 24). Bill Gates: AI is most important technological advance in
decades – But we must ensure it is used for good. Independent. https://
www.independent.co.uk/tech/bill-gates-ai-artificial-intelligence-b2307299.html
Girdher, J. L. (2019). What is the lived experience of advanced nurse practitioners of
managing risk and patient safety in acute settings? A phenomenological perspective.
University of the West of England. https://uwe-repository.worktribe.com/output/
1491308
Global education monitoring report 2023: Technology in education - A tool on whose
terms? (p. 435). (2023). UNESCO.
Gollub, J., Bertenthal, M., Labov, J., & Curtis, P. (2002). Learning and understanding:
Improving advanced study of mathematics and science in U.S. high schools (pp.
1–564). National Research Council. https://www.nap.edu/read/10129/chapter/1
Griffin, A. (2023, May 12). ChatGPT creators try to use artificial intelligence to
explain itself – and come across major problems. The Independent. https://
www.independent.co.uk/tech/chatgpt-website-openai-artificial-intelligence-
b2337503.html
Grove, J. (2023, March 16). The ChatGPT revolution of academic research has begun.
Times Higher Education.
Hammersley, M., & Atkinson, P. (1995). Ethnography: Principles in practice (p. 16).
Routledge.
Hao, K. (2020, September 23). OpenAI is giving Microsoft exclusive access to its
GPT-3 language model. MIT Technology Review. https://www.technologyreview.
com/2020/09/23/1008729/openai-is-giving-microsoft-exclusive-access-to-its-gpt-3-
language-model/
Harari, Y. N. (2018). 21 lessons for the 21st century. Vintage.
Harreis, H. (2023, March 8). Generative AI: Unlocking the future of fashion. McKinsey
& Company. https://www.mckinsey.com/industries/retail/our-insights/generative-
ai-unlocking-the-future-of-fashion
Higher education that works (pp. 1–8). (2023). University of South Florida.
Hinsliff, G. (2023, May 4). If bosses fail to check AI’s onward march, their own jobs
will soon be written out of the script. The Guardian. https://www.theguardian.com/
commentisfree/2023/may/04/ai-jobs-script-machines-work-fun
How will ChatGPT & AI impact the financial industry? (2023, March 6). FIN. https://
www.dfinsolutions.com/knowledge-hub/thought-leadership/knowledge-resources/
the-impact-of-chatgpt-in-corporate-finance-marketplace
How do I cite generative AI in MLA style? (n.d.). MLA Style Center. https://
style.mla.org/citing-generative-ai/
Hunt, F. A. (2022, October 19). The future of AI in the justice system. LSJ Media.
https://lsj.com.au/articles/the-future-of-ai-in-the-justice-system/
Inwood, M. (2019). Heidegger: A very short introduction (2nd Ed.). Oxford University
Press.
Jackson, F. (2023, April 13). “ChatGPT does 80% of my job”: Meet the workers using
AI bots to take on multiple full-time jobs - and their employers have NO idea.
MailOnline. https://www.dailymail.co.uk/sciencetech/article-11967947/Meet-
workers-using-ChatGPT-multiple-time-jobs-employers-NO-idea.html
Jiminez, K. (2023, April 13). Professors are using ChatGPT detector tools to accuse students of cheating. But what if the software is wrong? USA Today. https://www.usatoday.com/story/news/education/2023/04/12/how-ai-detection-tool-spawned-false-cheating-case-uc-davis/11600777002/
Johnson, A. (2022, December 12). Here’s what to know about OpenAI’s
ChatGPT—What it’s disrupting and how to use it. Forbes. https://www.forbes.
com/sites/ariannajohnson/2022/12/07/heres-what-to-know-about-openais-chatgpt-
what-its-disrupting-and-how-to-use-it/?sh=7a5922132643
Kahneman, D. (2011). Thinking, fast and slow. Random House.
Karp, P. (2023, February 6). MP tells Australia’s parliament AI could be used for
‘mass destruction’ in speech part-written by ChatGPT. The Guardian. https://
www.theguardian.com/australia-news/2023/feb/06/labor-mp-julian-hill-australia-
parliament-speech-ai-part-written-by-chatgpt
Klee, M. (2023, June 6). She was falsely accused of cheating with AI — and she won’t
be the last. [Magazine]. Rolling Stone. https://www.rollingstone.com/culture/
culture-features/student-accused-ai-cheating-turnitin-1234747351/
Klein, A. (2023, July 25). Welcome to the ‘Walled Garden.’ Is this education’s solution
to AI’s pitfalls? Education Week. https://www.edweek.org/technology/welcome-to-
the-walled-garden-is-this-educations-solution-to-ais-pitfalls/2023/07?fbclid=IwAR2Wgk8e8Ex5niBsy6npZLnO77W4EuUycrkTpyH0GCHQghBSF1a2DKhzoNA
Liberatore, S., & Smith, J. (2023, March 30). Silicon valley’s AI civil war: Elon Musk
and Apple’s Steve Wozniak say it could signal “catastrophe” for humanity. So why
do Bill Gates and Google think it’s the future? Daily Mail. https://www.dailymail.
co.uk/sciencetech/article-11916917/The-worlds-greatest-minds-going-war-AI.html
Marcus, G., & Davis, E. (2019). Rebooting AI: Building artificial intelligence we can
trust. Pantheon Books.
Martinez, P. (2023, March 31). How ChatGPT is transforming the PR game.
Newsweek. https://www.newsweek.com/how-chatgpt-transforming-pr-game-
1791555
Maslej, N., Fattorini, L., Brynjolfsson, E., Etchemendy, J., Ligett, K., Lyons, T.,
Manyika, J., Ngo, H., Niebles, J. C., Parli, V., Shoham, Y., Wald, R., Clark, J., &
Perrault, R. (2023). The AI Index 2023 Annual Report (AI Index, p. 386). Institute
for Human-Centered AI. https://aiindex.stanford.edu/wp-content/uploads/2023/04/
HAI_AI-Index-Report_2023.pdf
McAdoo, T. (2023, April 7). How to cite ChatGPT. APA Style. https://
apastyle.apa.org/blog/how-to-cite-chatgpt
McLean, S. (2023, April 28). The environmental impact of ChatGPT: A call for
sustainable practices in AI development. Earth.org. https://earth.org/environmental-
impact-chatgpt/
McTighe, J., & Wiggins, G. (2013). Essential questions: Opening doors to student
understanding. Association for Supervision and Curriculum Development.
Mészáros, I. (2005). Marx’s theory of alienation. Merlin.
Mhlanga, D. (2023). Open AI in education, the responsible and ethical use of ChatGPT
towards lifelong learning. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4354422
Milmo, D. (2023a, February 3). Google poised to release chatbot technology after
ChatGPT success. The Guardian. https://www.theguardian.com/technology/2023/
feb/03/google-poised-to-release-chatbot-technology-after-chatgpt-success
Milmo, D. (2023b, April 17). Google chief warns AI could be harmful if deployed
wrongly. The Guardian. https://www.theguardian.com/technology/2023/apr/17/
google-chief-ai-harmful-sundar-pichai
Milmo, D. (2023c, May 4). UK competition watchdog launches review of AI market.
The Guardian. https://www.theguardian.com/technology/2023/may/04/uk-
competition-watchdog-launches-review-ai-market-artificial-intelligence
Milmo, D. (2023d, May 20). UK schools ‘bewildered’ by AI and do not trust tech firms,
headteachers say. The Guardian. https://www.theguardian.com/technology/2023/
may/20/uk-schools-bewildered-by-ai-and-do-not-trust-tech-firms-headteachers-say
Milmo, D. (2023e, July 11). AI revolution puts skilled jobs at highest risk, OECD
says. The Guardian. https://www.theguardian.com/technology/2023/jul/11/ai-
revolution-puts-skilled-jobs-at-highest-risk-oecd-says
Milmo, D. (2023f, July 26). Google, Microsoft, OpenAI and startup form body to
regulate AI development. The Guardian. https://www.theguardian.com/technology/
2023/jul/26/google-microsoft-openai-anthropic-ai-frontier-model-forum
Mok, A., & Zinkula, J. (2023, April 9). ChatGPT may be coming for our jobs. Here
are the 10 roles that AI is most likely to replace. Business Insider. https://
www.businessinsider.com/chatgpt-jobs-at-risk-replacement-artificial-intelligence-
ai-labor-trends-2023-02
Moran, C. (2023, April 6). ChatGPT is making up fake Guardian articles. Here’s how
we’re responding. The Guardian. https://www.theguardian.com/commentisfree/
2023/apr/06/ai-chatgpt-guardian-technology-risks-fake-article
Nelson, N. (2001). Writing to Learn. Studies in Writing. In P. Tynjälä, L. Mason, &
K. Lonka (Eds.), Writing as a learning tool (Vol. 7). Springer. https://doi.org/
10.1007/978-94-010-0740-5_3
Neumann, M., Rauschenberger, M., & Schön, E.-M. (2023). We need to talk about
ChatGPT: The future of AI and higher education [Education]. Hochschule
Hannover. https://doi.org/10.25968/opus-2467
O’Flaherty, K. (2023, April 9). Cybercrime: Be careful what you tell your chatbot
helper. . .. The Guardian. https://www.theguardian.com/technology/2023/apr/09/
cybercrime-chatbot-privacy-security-helper-chatgpt-google-bard-microsoft-bing-
chat
Paleja, A. (2023a, January 6). In a world first, AI lawyer will help defend a real case in
the US. Interesting Engineering.
Paleja, A. (2023b, January 30). Gmail creator says ChatGPT-like AI will destroy
Google’s business in two years. Interesting Engineering.
Paleja, A. (2023c, April 4). ChatGPT ban: Will other countries follow Italy’s lead?
Interesting Engineering.
Patton, M. (2002). Qualitative research and evaluation methods (3rd Ed.). SAGE
Publications.
Pause Giant AI experiments: An open letter. (2023, March 22). Future of Life Institute.
https://futureoflife.org/open-letter/pause-giant-ai-experiments/
Ramponi, M. (2022, December 23). How ChatGPT actually works. AssemblyAI.
https://www.assemblyai.com/blog/how-chatgpt-actually-works/
Ray, S. (2023, May 25). ChatGPT could leave Europe, OpenAI CEO warns, days
after urging U.S. Congress for AI Regulations. Forbes. https://www.forbes.com/
sites/siladityaray/2023/05/25/chatgpt-could-leave-europe-openai-ceo-warns-days-
after-urging-us-congress-for-ai-regulations/?sh=83384862ed85
Rethinking classroom assessment with purpose in mind: Assessment for learning; Assessment as learning; Assessment of learning. (2006). Manitoba Education, Citizenship and Youth. https://open.alberta.ca/publications/rethinking-classroom-assessment-with-purpose-in-mind
Robertson, A. (2023, April 28). ChatGPT returns to Italy after ban. The Verge.
https://www.theverge.com/2023/4/28/23702883/chatgpt-italy-ban-lifted-gpdp-data-
protection-age-verification
Rudolph, J., Tan, S., & Tan, S. (2023). ChatGPT: Bullshit spewer or the end of
traditional assessments in higher education? Journal of Applied Learning and
Teaching, 6(1), 1–22. https://doi.org/10.37074/jalt.2023.6.1.9
Russell, S. (2019). Human compatible: Artificial intelligence and the problem of control.
Viking.
Sabarwal, H. (2023, April 18). Elon Musk to launch his AI platform “TruthGPT”.
WION. https://www.wionews.com/technology/elon-musk-to-launch-his-ai-
platform-truthgpt-583583
Şahin, M., & Fell Kurban, C. (2019). The New University model: Flipped, adaptive,
digital and active learning (FADAL) - A future perspective. FL Global Publishing.
Sánchez-Adame, L. M., Mendoza, S., Urquiza, J., Rodríguez, J., & Meneses-Viveros, A. (2021). Towards a set of heuristics for evaluating chatbots. IEEE Latin America Transactions, 19(12), 2037–2045. https://doi.org/10.1109/TLA.2021.9480145
Sankaran, V. (2023, July 19). Meta unveils its ChatGPT rival llama. Independent.
https://www.independent.co.uk/tech/meta-llama-chatgpt-ai-rival-b2377802.html
Schamus, J. (2023, May 5). Hollywood thinks it can divide and conquer the writers’
strike. It won’t work. The Guardian. https://www.theguardian.com/commentisfree/
2023/may/05/hollywood-writers-strike-james-schamus
Sharma, S. (2023, May 24). AI could surpass humanity in next 10 years – OpenAI
calls for guardrails. Interesting Engineering.
Shum, H., He, X., & Li, D. (2018). From Eliza to XiaoIce: Challenges and
opportunities with social chatbots. Frontiers of Information Technology &
Electronic Engineering, 19(1), 10–26.
Smith, J. (2023, March 29). “It’s a dangerous race that no one can predict or control”:
Elon Musk, Apple co-founder Steve Wozniak and 1,000 other tech leaders call for
pause on AI development which poses a “profound risk to society and humanity”.
Daily Mail. https://www.dailymail.co.uk/news/article-11914149/Musk-experts-
urge-pause-training-AI-systems-outperform-GPT-4.html
Stacey, K., & Mason, R. (2023, May 26). Rishi Sunak races to tighten rules for AI
amid fears of existential risk. The Guardian. https://www.theguardian.com/
technology/2023/may/26/rishi-sunak-races-to-tighten-rules-for-ai-amid-fears-of-
existential-risk
Stake, R. E. (1995). The art of case study research. SAGE.
Stern, J. (2023, April 4). AI is running circles around robotics. The Atlantic. https://
www.theatlantic.com/technology/archive/2023/04/ai-robotics-research-engineering/
673608/
Sullivan, M., Kelly, A., & McLaughlan, P. (2023). ChatGPT in higher education:
Considerations for academic integrity and student learning. Journal of Applied
Learning and Teaching, 6(1), 1–10. https://doi.org/10.37074/jalt.2023.6.1.17
Tamim, B. (2023, March 30). GPT-5 expected this year, could make ChatGPT
indistinguishable from a human. Interesting Engineering.
Tarantola, A. (2023, January 26). BuzzFeed is the latest publisher to embrace AI-generated content. Engadget.
Taylor, J., & Hern, A. (2023, May 2). ‘Godfather of AI’ Geoffrey Hinton quits Google
and warns over dangers of misinformation. The Guardian. https://www.theguar
dian.com/technology/2023/may/02/geoffrey-hinton-godfather-of-ai-quits-google-
warns-dangers-of-machine-learning
Tegmark, M. (2017). Life 3.0: Being human in the age of artificial intelligence. Penguin.
Thurmond, V. A. (2001). The point of triangulation. Journal of Nursing Scholarship,
33(3), 253–258. https://doi.org/10.1111/j.1547-5069.2001.00253.x
Tilley, C. (2023, May 16). Now even the World Health Organization warns against
artificial intelligence - Says it’s “imperative” we pump the brakes. Daily Mail.
https://www.dailymail.co.uk/health/article-12090715/Now-World-Health-
Organization-warns-against-artificial-intelligence.html
Tlili, A., Shehata, B., Agyemang Adarkwah, M., Bozkurt, A., Hickey, D. T., Huang,
R., & Agyeman, B. (2023). What if the devil is my guardian angel: ChatGPT as a
case study of using chatbots in education. Smart Learning Environments, 1–24.
https://doi.org/10.1186/s40561-023-00237-x
Tonkin, S. (2023, March 31). Could YOU make $335,000 using ChatGPT?
Newly-created jobs amid the rise of the AI bot have HUGE salaries (and you
don’t even need a degree!). MailOnline. https://www.dailymail.co.uk/sciencetech/
article-11924083/Could-make-335-000-using-ChatGPT.html
Tyson, L. (2023). Critical theory today: A user-friendly guide (4th Ed.). Routledge.
Vincent, J. (2022, December 8). ChatGPT proves AI is finally mainstream—and things
are only going to get weirder. The Verge. https://www.theverge.com/2022/12/8/
23499728/ai-capability-accessibility-chatgpt-stable-diffusion-commercialization
Waugh, R. (2023, March 14). ChatGPT 2.0: Creator of AI bot that took world by
storm launches even more powerful version called “GPT4”—and admits it’s so
advanced it could “harm society”. MailOnline. https://www.dailymail.co.uk/
sciencetech/article-11860115/ChatGPT-2-0-Creator-AI-bot-took-world-storm-
launches-powerful-version.html
Webb, J., Schirato, T., & Danaher, G. (2002). Understanding bourdieu. SAGE.
White, J. (2023). Prompt engineering for ChatGPT (MOOC). Coursera. https://
www.coursera.org/learn/prompt-engineering
Wiggins, G., & McTighe, J. (1998). Understanding by design (2nd Ed.). ASCD.
Williams, T. (2023, August 9). AI text detectors aren’t working. Is regulation the
answer? Times Higher Education.
Williams, T., & Grove, J. (2023, May 15). Five ways AI has already changed higher
education. Times Higher Education.
Yin, R. K. (1984). Case study research: Design and methods (1st Ed.). SAGE.
Yin, R. K. (1994). Case study research design and methods (2nd Ed.). SAGE.
Yin, R. K. (2011). Applications of case study research. Sage.
Zhai, X. (2022). ChatGPT user experience: Implications for education. [Education]
https://ssrn.com/abstract=4312418