The Impact of ChatGPT On Higher Education
The Impact of ChatGPT on
Higher Education: Exploring
the AI Revolution
BY
CAROLINE FELL KURBAN
MEF University, Turkey
AND
MUHAMMED ŞAHIN
MEF University, Turkey
No part of this book may be reproduced, stored in a retrieval system, transmitted in any form or
by any means electronic, mechanical, photocopying, recording or otherwise without either the
prior written permission of the publisher or a licence permitting restricted copying issued in the
UK by The Copyright Licensing Agency and in the USA by The Copyright Clearance Center.
Any opinions expressed in the chapters are those of the authors. Whilst Emerald makes every
effort to ensure the quality and accuracy of its content, Emerald makes no representation
implied or otherwise, as to the chapters’ suitability and application and disclaims any warranties,
express or implied, to their use.
Dedication
Foreword
Preface
Acknowledgements
Appendices
References
We dedicate this book to the memory of Dr İbrahim Arıkan, the founder of MEF
Schools and MEF University, who dedicated his life to revolutionising education. Dr
Arıkan’s ultimate dream was to establish MEF University as a fully flipped
university, but sadly, he passed away before witnessing its realisation. He was a pioneer
across all stages of education, from kindergarten to university, and believed in a
democratic approach to education that prioritised the individuality of each student.
Dr Arıkan implemented full academic independence for teachers at his institutions,
and his commitment to creating a learning environment that nurtures the potential
of every student has left a lasting impact on the field of education. His spirit lives on
in the hearts and minds of every student and teacher who had the privilege to know
him. As we continue to honour his legacy, we are proud to say that MEF University
has become the realisation of his dream, an innovative and fully flipped university
that empowers students to take control of their education and become lifelong
learners.
We believe that Dr Arıkan would have been proud of the innovative direction MEF
University is taking by incorporating cutting-edge technologies like ChatGPT to
further enhance the teaching and learning experience. As a pioneer in education, he
always believed in implementing new and effective teaching methods to provide his
students with the best possible education. His spirit continues to inspire us to strive
for excellence in education, and we dedicate this book to his memory.
Foreword
In the dynamic and ever-evolving landscape of education, one of the most
profound shifts is the integration of emerging technologies. As an advocate for access
to high-quality education for all, I find this era of technological advancement an
intriguing period of transformation. This book dives deep into the exploration of
artificial intelligence (AI) in education, specifically focusing on AI chatbots like
ChatGPT, and the implications they bring to our learning environments.
My pleasure in presenting the foreword for this book is twofold. Firstly,
because the authors have undertaken a rigorous exploration of a critical topic.
Secondly, because this subject resonates with my professional journey, spent in
pursuit of improving student outcomes and democratising access to quality
education.
MEF University in Istanbul, the book’s focal research site, stands as a beacon
of innovation for its integration of AI, offering a unique context for this study.
The authors critically examine ChatGPT, discussing its development, the ethical
considerations surrounding its use, and the need for a globally inclusive discourse
on the ethical guidelines for AI technologies.
From my tenure as US Under Secretary of Education to leading the American
Council on Education, I have seen the impact that a conscientious integration of
technology can have on access to high-quality education. In this book, by delving
into the history and ascent of chatbots, formulating a theoretical framework for
evaluating AI’s influence, conducting a contemporary literature review and
embarking on an exploratory case study, the authors shed light on how AI
chatbots have the potential to reshape the very foundations of teaching and
learning.
What the authors present is not just a well-researched treatise on ChatGPT,
but a tool for future exploration. The book’s concluding chapters provide a
blueprint for how to effectively and ethically integrate these AI technologies in
our classrooms and institutions, a guide I wish I had when piloting early edtech
initiatives in my own career.
The insights gleaned from this book go beyond ChatGPT. They will shape how
we, as educators, policymakers, and students, navigate the rapidly changing
technological landscape of education. The authors have not only provided a
comprehensive exploration of AI chatbots in education but also prompted us to
consider how we can harness this technology to create an equitable and inclusive
future for all learners.
combat AI-based cheating. This has sparked global discussions among educators,
debating whether ChatGPT represents an opportunity or a threat.
At its core, ChatGPT operates by harnessing the power of NLP to comprehend
and respond to human queries in a conversational manner. Through advanced
algorithms and machine learning techniques, ChatGPT has been trained on vast
datasets to generate human-like responses, making it an indispensable tool for
engaging with students. The interactive and personalised nature of ChatGPT’s
conversations makes it highly valuable in the educational landscape. Students can
instantly access answers to their questions, relevant resources and tailored
recommendations based on their learning needs. Whether seeking clarifications,
additional information or guidance, ChatGPT serves as a reliable and readily
available support system throughout their academic journey. Furthermore,
instructors can leverage ChatGPT to streamline administrative tasks and enhance
the learning experience. By automating routine administrative processes, such as
addressing frequently asked questions and providing course-related information,
instructors have more time to focus on meaningful interactions with students.
Additionally, ChatGPT can offer timely and personalised feedback, providing
students with real-time guidance and support. Integrating ChatGPT into the
educational environment can lead to a more engaging and interactive learning
experience. Students benefit from immediate assistance, personalised guidance
and a supportive learning environment, while instructors can optimise their
teaching practices and facilitate more meaningful interactions.
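The routine-query automation described above can be sketched in miniature. The following hypothetical Python snippet is our own illustration, not part of any real system: it routes known administrative questions to canned answers and escalates everything else to the instructor. In a real deployment, the lookup table would be replaced by a call to a chatbot such as ChatGPT.

```python
# Hypothetical sketch of routine-question automation for a course.
# The FAQ entries below are invented for illustration; in practice the
# fallback branch might call a chatbot rather than escalate directly.
FAQ = {
    "when is the midterm": "The midterm is in week 8; see the syllabus.",
    "what is the late policy": "Late work loses 10% per day, up to 3 days.",
}

def answer(question: str) -> str:
    # Normalise the question before looking it up.
    key = question.lower().strip(" ?")
    if key in FAQ:
        return FAQ[key]
    # Anything unrecognised goes to the instructor.
    return "Forwarded to the instructor for a personal reply."

print(answer("When is the midterm?"))
```

Even this toy version shows the division of labour the chapter describes: predictable administrative queries are answered instantly, while the instructor's time is reserved for questions that need human judgement.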
As we can see, the potential of ChatGPT in higher education is promising.
However, it is essential to recognise the caveats that accompany it. To begin with,
addressing the ethical considerations and limitations surrounding ChatGPT is
crucial. These encompass concerns about its reliance on heuristics, lack of
transparency in internal workings, issues with capability versus alignment,
limitations in helpfulness, interpretability challenges, issues of bias and fairness,
factual accuracy and truthfulness, as well as ethical concerns regarding data
privacy and cybersecurity. Moreover, the impact of ChatGPT on industries,
including higher education, necessitates thorough investigation. The integration
of AI technologies like ChatGPT brings transformative effects on job markets,
resulting in the elimination and transformation of positions, requiring a
re-evaluation of traditional work models. Within education, institutions and
companies face disruptive challenges as ChatGPT alters job roles, posing
questions about the value of human expertise and critical thinking skills. Additionally,
financial implications and the costs associated with implementation and ongoing
support require careful consideration. Furthermore, the concentration of AI
power and the potential for corporate dominance are critical factors to explore.
The risk of a few dominant companies controlling and influencing AI raises
concerns about limited diversity, choice and fair competition, emphasising the
need to address data ownership, privacy and the possibility of monopolistic
practices. Establishing comprehensive policies and regulations becomes essential
to ensure ethical use, responsible deployment and accountability in the integration
of ChatGPT and similar technologies. Lastly, the scarcity of research on the
specific impact of ChatGPT in teaching, learning and higher education
Exploring ChatGPT’s Impact 3
the 2020 Blackboard Catalyst Award for Teaching and Learning, underscored
MEF’s successful adaptation to the new educational landscape. Building on this
foundation, the institution introduced an AI minor programme, Data Science and
AI, in 2021. This programme equips students across all departments with
comprehensive skills in data management, analytics, machine learning and deep
learning, preparing them for real-world applications. Through these strategic
initiatives, MEF University’s commitment to disruptive innovation and
investment in new technologies have positioned it as a leader in preparing students to
meet the evolving demands of industries and society.
The public launch of ChatGPT on 30 November 2022 sparked robust
discussions at MEF University about the potential opportunities and challenges it
could introduce to higher education. In response, three individuals at the university
volunteered to undertake an initial experiment spanning from December 2022 to
January 2023. This experiment involved integrating ChatGPT into course design
and classroom activities, and evaluating its impact on assessments and exams. The
findings from this experiment catalysed a faculty meeting in January 2023. During
this meeting, the origins and potential implications of ChatGPT were presented,
and the volunteers shared concrete examples of its incorporation in various
educational contexts. The diverse array of perspectives expressed during the
meeting underscored the necessity for an in-depth institutional case study to
comprehensively explore ChatGPT’s impact on education within MEF
University. Specifically, the university aimed to understand how ChatGPT could
potentially reshape the roles of students, instructors and higher education
institutions. Recognising the gravity of the situation and the imperative for further
exploration, the concept for the research project outlined in this book was
conceived.
The core objectives of our research project encompass a thorough exploration
of ChatGPT’s potential impact on students and instructors within the realm of
higher education. By immersing ourselves in the implementation of this
transformative technology, our study aims to unearth potential challenges and barriers
that may emerge. This endeavour offers invaluable insights into the
transformative role AI chatbots like ChatGPT can play in reshaping the teaching and
learning landscape. Our overarching mission is to delve into how the integration
of ChatGPT might redefine the roles of students, instructors and higher education
institutions. Through this inquiry, we aspire to gain a profound understanding of
how AI chatbots might reshape dynamics and responsibilities within the
educational sphere. By scrutinising these shifts, we seek insights into the implications
for educators, learners and universities as a whole. Furthermore, our research
aims to contribute to the broader discourse surrounding the integration of AI
technologies in higher education. Guided by three pivotal research questions that
structure our investigation, namely, ‘How may ChatGPT affect the role of the
student?’; ‘How may ChatGPT affect the role of the instructor?’; and ‘How may
ChatGPT affect the role of institutions of higher education?’, our study aims to
offer valuable insights that will inform educational practices, guide policy
formulation and shape the future integration of AI technologies in higher
education institutions. Ultimately, our research endeavours aim to contribute to a
OpenAI states that its long-term goal is to create ‘artificial general intelligence’
(AGI) (Brockman & Sutskever, 2015). AGI refers to AI systems that possess the
ability to understand, learn and apply knowledge in a way that’s comparable to
human intelligence. AGI would be capable of performing a wide range of tasks
and adapting to new situations without being explicitly programmed for each
specific task, making it a higher level of AI than the specialised, narrow AI
systems currently available. Tech entrepreneur Siqi Chen claims that GPT-5 will
achieve AGI by the end of 2023, generating excitement in the AI community
(Tamim, 2023). Chen’s claim, while not widely held at OpenAI, suggests that
generative AI is making significant strides (Tamim, 2023). Sam Altman, the CEO
of OpenAI, goes one step further, hinting at the potential for AI systems to far
surpass even AGI (Sharma, 2023). He believes that AI’s current trajectory
indicates remarkable potential for unprecedented levels of capability and impact in
the near future (Sharma, 2023). In summary, AI’s transformative impact on
human existence, coupled with the rapid advancement of chatbots like ChatGPT,
highlights the potential for significant changes in various industries and the field
of AI as a whole. However, this does come with caveats.
• Lack of Helpfulness
When a language model fails to accurately understand and execute the specific
instructions provided by the user.
• Hallucinations
When the model generates fictitious or incorrect information.
• Lack of Interpretability
When it is hard for humans to comprehend the process by which the model
arrived at a particular decision or prediction.
• Generating Biased or Toxic Output
When a model generates output that reproduces biases or toxicity present in its
training data, even if it was not intentionally programmed to do so.
But why does this happen? Language models like transformers are trained
using next-token-prediction and masked-language-modelling techniques to learn
the statistical structure of language (Ramponi, 2022). However, these techniques
may cause issues as the model cannot differentiate between significant and
insignificant errors, leading to misalignment for more complex tasks (Ramponi,
2022). OpenAI plans to address these limitations through its release of a limited
version of ChatGPT (ChatGPT-3.5) and by gradually increasing its capabilities
using a combination of supervised learning and reinforcement learning, including
reinforcement learning from human feedback, to fine-tune the model and reduce
harmful outputs (Ramponi, 2022). This involves three steps, although steps two
and three can be iterated continuously.
• Stage One
Fine-tuning a pre-trained language model on labelled data to create a
supervised policy.
• Stage Two
Creating a comparison dataset by having labellers vote on the policy model’s
outputs and training a new reward model on these data.
• Stage Three
Further fine-tuning and improving the supervised policy using the reward
model through proximal policy optimisation.
(Ramponi, 2022)
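Stage two, in which labellers' pairwise votes become a reward signal, can be illustrated with a deliberately simplified sketch. The snippet below is our own toy illustration: real reward models are neural networks trained on such comparisons, not win-rate counters, but the idea of ranking candidate responses by human preference is the same.

```python
from collections import defaultdict

def toy_reward_model(comparisons):
    """Score each response by its win rate across labeller votes.

    A stand-in for the learned reward network in the real pipeline:
    responses that labellers prefer more often get higher scores.
    """
    wins = defaultdict(int)
    appearances = defaultdict(int)
    for preferred, rejected in comparisons:
        wins[preferred] += 1
        appearances[preferred] += 1
        appearances[rejected] += 1
    return {r: wins[r] / appearances[r] for r in appearances}

# Invented labeller votes: (preferred response, rejected response).
votes = [
    ("concise answer", "rambling answer"),
    ("concise answer", "fabricated answer"),
    ("rambling answer", "fabricated answer"),
]

scores = toy_reward_model(votes)
print(max(scores, key=scores.get))  # prints "concise answer"
```

In stage three, scores like these guide proximal policy optimisation, nudging the model towards outputs that the reward model, and by proxy the labellers, rate highly.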
2023). Earning less than $2 a day, these workers handle distressing online content
to train AI engines, raising questions about the sustainability and fairness of their
efforts (Schamus, 2023). The utilisation of African labour for data mining and
cleansing by a US organisation underscores the ethical predicament of relying on
underpaid individuals from less economically advantaged regions to benefit those
in more affluent areas. Consequently, addressing these ethical concerns is crucial
for the responsible development of AI tools.
Meredith Whitaker, an AI researcher and ethicist, highlights that generative
AI heavily relies on vast amounts of surveillance data scraped from the web
(Bhuiyan, 2023). However, the specific sources of these data, obtained from
writers, journalists, artists and musicians, remain undisclosed by proprietary
companies like OpenAI (Bhuiyan, 2023). This raises concerns about potential
copyright violations and lack of fair compensation for content creators. When
asked about compensation for creators, OpenAI’s CEO, Sam Altman, mentioned
ongoing discussions but did not provide a definitive answer (Bhuiyan, 2023). The
impact on local news publications, whose content is used for training AI models,
is also a concern, and Altman expressed hope for supporting journalists while
considering possible actions (Bhuiyan, 2023). Nonetheless, the necessity for
external regulation to address these issues is evident (Bhuiyan, 2023).
The environmental impact of AI technology, particularly large language
models, is a growing concern. Data centres, hosting power-hungry servers for AI
models like ChatGPT, significantly contribute to carbon emissions (McLean,
2023). The power source, whether coal or renewable energy, further affects
emission levels (McLean, 2023). Moreover, the water footprint of AI models is
substantial; for example, Microsoft’s data centres used around 700,000 litres of
freshwater during GPT-3’s training, equivalent to the water needed for hundreds
of vehicles (McLean, 2023). Hence, it is vital to tackle these environmental
concerns, and imperative to find sustainable solutions, as these models continue
to expand (McLean, 2023).
of consensus among experts, this section aims to provide some answers to the
aforementioned questions.
We are now starting to hear a lot more mainstream conversation regarding the
social and economic impact that AI and AI chatbots will have on society and
industry. According to the 2023 Artificial Intelligence Index Report, barring
agriculture, forestry, fishing and hunting, the demand for AI-related skills is rapidly
increasing in nearly all sectors of the American economy. The report highlights
that between 2021 and 2022, the share of job postings that were AI-related increased on
average from 1.7% to 1.9% (Maslej et al., 2023). According to Business Insider,
AI technology like ChatGPT could drastically change jobs in various industries,
such as finance, customer service, media, software engineering, law and teaching,
including potential gains and losses (Mok & Zinkula, 2023). This may happen in
the following ways.
In finance, it is thought likely that very soon AI-powered bots will handle
complicated financial questions, allowing advisors and CFOs to make real-time
decisions by tapping into AI’s knowledge. They will also be able to perform
information analysis, pattern detection and forecasting. Moreover, ChatGPT will
save time for marketers in finance by analysing data and providing insights into
customer behaviour as well as organising information and generating marketing
materials (How Will ChatGPT & AI Impact The Financial Industry?, 2023). In
addition, ChatGPT has the potential to disrupt jobs across various industries on
Wall Street, including trading and investment banking. This is because ChatGPT
can automate some of the tasks that knowledge workers perform today. One
advantage of this is that it will enable them to concentrate on higher-value tasks.
However, it also means that AI could do certain jobs that recent college graduates
are currently doing at investment banks (Mok & Zinkula, 2023). This may lead to
the elimination of low-level or entry jobs.
When it comes to customer service and engagement, according to Forbes,
conversational AI, such as ChatGPT, has the potential to revolutionise customer
service by providing human-like conversations that address each user’s concerns.
Unlike traditional chatbots, which follow predetermined paths and lack
flexibility, conversational AI can automate the upfront work needed for customer
service agents to focus on high-value customers and complex cases requiring
human interaction (Fowler, 2023).
And what about the creative arts? Forbes predicts that ChatGPT will
have a significant impact on jobs in advertising, content creation, copywriting,
copy editing and journalism (Fowler, 2023). Furthermore, due to the AI’s ability
to analyse and understand text, it is likely that ChatGPT will transform jobs
related to media, including enabling tasks such as article writing, editing and
fact-checking, script-writing for content creators and copywriting for social media
posts and advertisements (Fowler, 2023). In fact, we are already seeing chatbots
drafting screenplays (Stern, 2023), writing speeches (Karp, 2023), writing novels
(Bensinger, 2023) and being used by public relations companies to ‘research,
develop, identify customer values or changing trends, and strategize optimal
campaigns for. . . clients in a matter of seconds’ (Martinez, 2023). Affected
employees are already responding visibly to these developments. In
Los Angeles in early May 2023, thousands of film and television writers went on
strike, later joined by actors and other members of the film community, aiming
not only to address financial matters but also to establish rules preventing
studios from using AI to generate scripts that exclude human involvement from
the creative process (Hinsliff, 2023). This shift is also being
seen in Buzzfeed, which is one of many publishers that have started to use
AI-generated content to create articles and social media posts, with the aim of
increasing efficiency and output (Tarantola, 2023). However, the quality of the
content generated by AI is still a concern for many (Tarantola, 2023). Another
area which is being affected is fashion, where AI is being used for everything from
analysing data to create designs for upcoming collections to generating a variety
of styles from sketches and details from creative directors (Harreis, 2023).
When it comes to engineering, while ChatGPT may be able to aid engineers in
their work by generating answers for engineering calculations and providing
information on general engineering knowledge, it will not be able to replace the
knowledge, expertise and innovation that engineers bring to the design and
product development process (Brown-Siebenaler, 2023). Software engineering,
however, may see many changes. The work involves a great deal of manual effort
and attention to detail, and ChatGPT can generate code far faster than humans,
which may lead to improved efficiency, faster bug identification and quicker code
generation, while also cutting resource costs (Mok & Zinkula, 2023).
With regard to healthcare, Harari (2018) gives the following example by
comparing what doctors do to what nurses do. Doctors mainly process medical
information, whereas nurses require not only cognitive but also motor and
emotional skills to carry out their duties. Harari believes this makes it more likely
that we will have an AI family doctor on our smartphones before we have a
reliable nurse robot. Therefore, he expects the human care industry to remain a
field dominated by humans for a long time and, due to an ageing population, this
is likely to be a growing industry. Evidence is now emerging to support Harari’s
claims. In a recent study conducted by the University of California San Diego
comparing written responses from doctors and ChatGPT to real-world health
queries, a panel of healthcare professionals preferred ChatGPT’s responses 79% of
the time (Tilley, 2023). They also found ChatGPT’s answers to be of higher quality
in terms of information provided and perceived empathy, without knowing which
responses came from the AI system (Tilley, 2023). Furthermore, ChatGPT has
even demonstrated the ability to pass the rigorous medical licensing exam in the
United States, scoring between 52.4% and 75% (Tilley, 2023).
According to a recent Goldman Sachs report, generative AI may also have a
profound impact on legal workers, since language-oriented jobs, such as
paralegals and legal assistants, are susceptible to automation. Workers in these roles
consume large amounts of information, synthesise what they learn, and distil it
into a legal brief or opinion. Once again, these tend to be low-level or entry jobs.
However, AI will not completely automate these jobs, since it requires human
judgement to understand what a client or
employer wants (Mok & Zinkula, 2023). We are already starting to see examples
of AI being used in the legal field. DoNotPay, founded in 2015, is a bot that helps
individuals fight large organisations over practices such as wrongly applied fees,
robocalls and parking tickets (Paleja, 2023a). In February 2023, DoNotPay was
used to help a defendant contest a speeding ticket in a US court, with the
programme running on a smartphone and providing appropriate responses to the
defendant through an earpiece (Paleja, 2023a). In addition, AI judges are already
being used in Estonia to settle small contract disputes, allowing human judges
more time for complex cases (Hunt, 2022). Furthermore, a joint research project
in Australia is currently examining the benefits and challenges of AI in courts
(Hunt, 2022). Overall, we are seeing AI becoming more popular in courts
worldwide. And this is certainly the case in China. In March 2021, China’s
National People’s Congress approved the 14th Five-Year Plan which aimed to
continue the country’s judicial reform, including the implementation of ‘smart
courts’ to digitalise the judicial system (Cousineau, 2021). ChatGPT is also
demonstrating its ability to excel in legal exams. The latest iteration of the AI
programme, GPT-4, recently surpassed the threshold set by Arizona on the
uniform bar examination (Cassens Weiss, 2023). With a combined score of 297, it
passed by a significant margin which, notably, places ChatGPT’s performance
close to the 90th percentile of test takers (Cassens Weiss, 2023).
Just like other industries, the emergence of ChatGPT has compelled education
companies to re-examine their business models. According to Times
Higher Education writers, Tom Williams and Jack Grove, the CEO of education
technology firm Chegg, Dan Rosensweig, attributes a decline in new sign-ups for
their textbook and coursework assistance services to ChatGPT, believing that as
midterms and finals approached, many potential customers opted to seek
AI-based help instead (2023). Williams and Grove believe this shift in consumer
behaviour serves as a ‘harbinger’ of how the rise of generative AI will disrupt
education enterprises and is prompting companies to hastily adapt and
future-proof their offerings (2023). They give the example of Turnitin, which has
expedited the introduction of an AI detector, and Duolingo, which has
incorporated GPT-4 to assist language learners in evaluating their language skills
(2023). Williams and Grove also note that, simultaneously, a wave of newly
established start-ups has emerged, offering a wide range of services, including
personalised tutoring chatbots and proprietary AI detectors, each with varying
levels of accuracy (2023). They quote Mike Sharples, an emeritus professor at the
Open University’s Institute of Educational Technology, saying that it is the larger
companies that are successfully integrating AI into their existing and
well-established products that are thriving. Conversely, Sharples cautions that
others run the risk of becoming like the ‘Kodak of the late 1990s’, unable to adapt
swiftly or effectively enough to thrive in a competitive market (Williams & Grove,
2023). Sharples goes on to say that he anticipates that numerous companies in the
education field, particularly distance-learning institutions, will face significant
challenges in their survival, as students may perceive AI as capable of performing
their tasks better; however, he cautions that whether or not that is the case
remains to be seen (Williams & Grove, 2023). Williams and Grove also quote
the bot’s potential; a job that involves enhancing the performance of ChatGPT
and educating the company’s staff on how to make the most of this technology
(Tonkin, 2023). Often referred to as ‘AI whisperers’, prompt engineers specialise
in crafting prompts for AI bots such as ChatGPT, and frequently come from
backgrounds in history, philosophy or English language, where a mastery of
language and wordplay is essential (Tonkin, 2023). And we are currently seeing a
strong demand for prompt engineers, with Google-backed start-up Anthropic
advertising a lucrative salary of up to $335,000 for a ‘Prompt Engineer and
Librarian’ position in San Francisco; a role which involves curating a library of
prompts and prompt chains and creating tutorials for customers (Tonkin, 2023).
Additionally, another job posting offers a salary of $230,000 for a machine
learning engineer with experience in prompt engineering to produce optimal AI
output (Tonkin, 2023). Interestingly, the job postings encourage candidates to
apply even if they don’t meet all the qualifications. Sam Altman is currently
emphasising the significance of prompt engineers, stating that ‘writing a really
great prompt for a chatbot persona is an amazingly high-leverage skill’ (Tonkin,
2023). Thus, a new job market has opened. But why is this happening so quickly
and seamlessly? And why are people who did not meet all the qualifications being
asked to apply? It all comes down to unlocking the potential of capability
overhang.
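What a prompt engineer actually produces can be as simple as a well-structured, reusable template. The hypothetical sketch below is our own invention (the function name, persona and rules come from no real job posting or product); it composes a persona prompt of the kind a 'prompt librarian' might curate.

```python
# Hypothetical sketch of a reusable persona-prompt template.
# All names and example content here are invented for illustration.
def build_prompt(persona: str, task: str, constraints: list[str]) -> str:
    # Render each constraint as a bullet so the model sees explicit rules.
    rules = "\n".join(f"- {c}" for c in constraints)
    return (
        f"You are {persona}.\n"
        f"Task: {task}\n"
        f"Follow these rules:\n{rules}"
    )

print(build_prompt(
    "a patient statistics tutor",
    "explain standard deviation to a first-year student",
    ["use a concrete everyday example", "avoid unexplained jargon"],
))
```

The leverage Altman describes comes from exactly this kind of reuse: once a persona and its constraints are captured in a template, the same carefully worded prompt can be applied across many conversations.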
One of the reasons prompt engineers do not need a background in
computer science or machine learning is related to the concept of capability
overhang. In his article ‘ChatGPT proves AI is finally mainstream – and things
are only going to get weirder’, James Vincent highlights the concept of ‘capability
overhang’ in AI, which refers to the untapped potential of AI systems, including
latent skills and abilities that researchers have yet to explore (2022). The potential
of AI remains largely untapped due to the complexity of its models, which are
referred to as ‘black boxes’. This complexity makes it challenging to understand
how AI functions and arrives at specific results. However, this lack of under-
standing opens up vast possibilities for future AI advancements (Vincent, 2022).
Vincent quotes Jack Clark, an AI policy expert, who describes the concept of
capability overhang as follows: ‘Today’s models are much more capable than we
think, and the techniques we have to explore them are very immature. What
about all the abilities we are unaware of because we have not yet tested for them?’
(Vincent, 2022). Vincent highlights ChatGPT as a prime example of how a lack
of accessibility can hold back the progress of AI. Although ChatGPT is built on
GPT-3.5, an improved version of GPT-3, it was not until OpenAI made it available
on the web that its potential to reach a wider audience was fully realised. Its
release free of charge increased that accessibility further. Moreover, despite the
extensive research and innovation in exploring the capabilities and limitations of
AI models, the vast and complex collective intelligence of the internet remains
unparalleled. Now, with the sudden accessibility of AI capabilities to the general
public, according to Vincent, the potential overhang may be within reach (2022).
So, what do the experts have to say about the potential impact of AI on the job
market? Sam Altman holds an optimistic viewpoint, acknowledging that while
technology will undoubtedly influence the job market, he believes there will be
We are also starting to see a shift in how graduates are being affected.
According to the 2023 Artificial Intelligence Report, the percentage of new
computer science PhD graduates from US universities specialising in AI has been
increasing steadily over the years. In 2021, 19.1% of new graduates specialised in
AI, up from 14.9% in 2020 and 10.2% in 2010 (Maslej et al., 2023). The trend is
shifting towards AI PhDs choosing industry over academia. In 2011, a similar
number of AI PhD graduates took jobs in industry (40.9%) and academia
(41.6%). However, since then, a majority of AI PhDs are choosing industry, with
65.4% taking jobs in industry in 2021, which is more than double the 28.2% who
chose academia (Maslej et al., 2023). In addition, the number of new North
American faculty hires in computer science, computer engineering and information
fields has remained relatively stagnant over the past decade (Maslej et al., 2023).
In 2021, a total of 710 new hires were made, slightly lower than the 733 made in
2012. Furthermore, tenure-track hires peaked in 2019 at 422 but dropped to 324
in 2021 (Maslej et al., 2023).
There is also a growing difference in external research funding between private and
public American computer science departments. A decade ago, the median total
expenditure from external sources for computing research was similar for private and
public computer science departments in the United States. However, the gap has
widened over time, with private universities receiving millions more in funding than
public ones (Maslej et al., 2023). As of 2021, private universities had a median
expenditure of $9.7 million, while public universities had a median expenditure of $5.7
million (Maslej et al., 2023). In response to these changes, universities are taking
various actions to adapt. They are focusing on several key areas, including their
infrastructure, programme offerings, faculty recruitment and faculty retention. A
May 2023 article by Inside Higher Ed's Susan D'Agostino describes how universities
are reacting. On universities' increased investment in AI faculty and
infrastructure, she gives the following examples. The University at
Albany, Purdue University and Emory University are currently actively hiring a
substantial number of AI faculty members, while the University of Southern Cali-
fornia is investing $1 billion in AI, to recruit 90 new faculty members and establish a
dedicated AI school (D’Agostino, 2023). Similarly, the University of Florida is
creating the Artificial Intelligence Academic Initiative Centre, while Oregon State
University is building an advanced AI research centre with cutting-edge facilities
(D’Agostino, 2023). In support of these efforts, the National Science Foundation is
committing $140 million to establish seven national AI research institutes at US
universities, each with a specific focus area (D'Agostino, 2023). However,
D'Agostino quotes Victor Lee, an associate professor at Stanford's Graduate School
of Education, who emphasises the importance of extending AI initiatives beyond
computer science departments, suggesting that integrating diverse disciplines such
as writing, arts, philosophy and the humanities fosters the range of perspectives
and critical thinking necessary for AI's development and understanding (2023). According to
D’Agostino, colleges are also establishing new academic programmes in AI. For
example, Houston Community College will introduce four-year degree programmes
in applied technology for AI and robotics, as well as applied science in healthcare
management, and Rochester Institute of Technology plans to offer an
So far, we have explored AI's influence on the job market and its effect on
education. Yet a more overarching inquiry remains: What will be the impact of AI
on the world? To delve
into this question more deeply, we investigate the viewpoints of experts in the
field. We start by considering opinions expressed before ChatGPT was publicly
introduced and then transition to perspectives shared after its unveiling.
In 2017, prior to ChatGPT’s introduction, Max Tegmark, a physicist and
cosmologist renowned for his work in theoretical physics and AI, released ‘Life
3.0: Being Human in the Age of Artificial Intelligence’ (Tegmark, 2017). In this
book, Tegmark navigates the potential influence of AI on human society and
envisions the futures that AI's advancement could bring about. In particular,
he examines Artificial General Intelligence (AGI), weighing its potential
benefits – such as advances in science, medicine and technology – against its
potential harms, including ethical dilemmas and existential risks. Tegmark also
puts forward an array of hypothetical scenarios that could transpire as AGI
evolves, ranging from utopian outcomes to dystopian visions to a middle ground
in which humans and AI coexist harmoniously. He further investigates
the societal implications of AGI, including its potential impact on the job market,
economy and governance. Based on this, he stresses the importance of ethical
considerations and conscientious development to ensure that AGI ultimately
serves the collective benefit of all humanity. In 2019, following Tegmark’s book,
Gary Marcus, a Cognitive Scientist and Computer Scientist, and Ernest Davis, a
Professor of Computer Science, published ‘Rebooting AI: Building Artificial
Intelligence We Can Trust’, in which they investigate critical aspects and chal-
lenges within the AI field, specifically focusing on the limitations and deficiencies
prevalent in current AI systems. Through doing so, they raise critical questions
about the trajectory of AI advancement. Marcus and Davis contend that, despite
notable advancements in AI technology, fundamental constraints persist, hin-
dering the development of genuinely intelligent and reliable AI systems. They
underscore the lack of common sense reasoning, robustness and a profound
understanding of the world in many contemporary AI systems – qualities inherent
to human cognition. Based on this, they argue that prevailing AI development
approaches, often centred on deep learning and neural networks, fall short in
achieving human-level intelligence and true understanding. Within their work,
Marcus and Davis place transparency, interpretability and accountability as
prominent themes, emphasising the significance of rendering AI systems trans-
parent and interpretable, particularly in domains where their decisions impact
human lives. They assert that these considerations are crucial, especially in fields
such as healthcare, finance and law, where comprehending how AI arrives at its
decisions is vital for ensuring ethical and equitable decision-making (Marcus &
Davis, 2019). Another publication in 2019 was ‘Human Compatible: Artificial
Intelligence and the Problem of Control’ by Stuart Russell, a computer scientist
and professor at the University of California, Berkeley. Russell is widely
acclaimed for his significant contributions to the field of AI, particularly in
machine learning, decision theory and the intricate issue of control within AI
systems. In his book, Russell explores a critical concern in AI advancement: the
imperative to ensure that AI systems act in harmony with human values and
aspirations. The central theme of his work is control, addressing the intricate
challenge of designing AI systems that benefit humanity without entailing risks or
unforeseen outcomes. Russell argues that the prevailing trajectory of AI devel-
opment, which focuses on maximising specific objectives without sufficient regard
for human values, could lead to AI systems that are difficult to manage and
potentially harmful. As a result, he emphasises the paramount importance of
early alignment between AI systems and human values and advocates for
establishing a framework that enables the regulation of AI behaviour (Russell,
2019).
As we can see, even before the public release of ChatGPT in November 2022,
experts were engaged in discussions regarding concerns about the development of
AI. But what is being said now that AI, such as ChatGPT, has been released to
the public? It would appear that feelings are mixed. Some view it as an existential
threat, while others argue that the risk is too distant to warrant concern. Some
hail it as ‘the most important innovation of our time’ (Liberatore & Smith, 2023),
while others caution that it ‘poses a profound risk to society and humanity’
(Smith, 2023). But what is the stance of AI companies themselves? Bill Gates,
Sundar Pichai and Ray Kurzweil champion ChatGPT, highlighting its potential
in addressing climate change, finding cancer cures and enhancing productivity
(Liberatore & Smith, 2023). In contrast, Elon Musk, Steve Wozniak and a group
of 2,500 individuals express reservations about large language models. In March
2023, they issued an open letter urging a pause in their development due to
potential risks and societal implications (Pause Giant AI Experiments: An Open
Letter, 2023). Moreover, in May 2023, Dr Geoffrey Hinton, a prominent figure in
the field of AI, stepped down from his position at Google, citing apprehensions
about misinformation, disruptions in employment and the existential threats
posed by AI (Taylor & Hern, 2023). In particular, he is concerned about the
potential for AI to exceed human intelligence and become susceptible to misuse
(Taylor & Hern, 2023). Although Gates holds favourable opinions about AI, he
supports enhanced regulation of the technology, especially in light of issues
such as misinformation and deepfakes (Gates, 2023). In a similar vein, Sundar Pichai
stresses the necessity for AI regulation and is against the advancement of
autonomous weapons (Milmo, 2023b). Additionally, technology experts,
including the CEOs of DeepMind, OpenAI and Anthropic, are actively advo-
cating for regulation to tackle existential concerns (Abdul, 2023). But are these
calls for regulation being heeded?
remaining members of this team (Bellan, 2023). Those within the team expressed
a shared belief that these layoffs were likely influenced by Microsoft’s intensified
focus on rapidly releasing AI products to gain an edge over competitors, poten-
tially leading to a reduced emphasis on long-term, socially responsible delibera-
tions (Bellan, 2023). Despite this, it’s important to note that Microsoft has
retained its Office of Responsible AI, which carries the responsibility of setting
ethical AI guidelines through governance and public policy initiatives (Bellan,
2023). Nevertheless, the action of dismantling the ethics and society team raises
valid questions about the extent of Microsoft’s commitment to authentically
infusing ethical AI principles into its product design. Another instance came
directly from OpenAI CEO Sam Altman who, just days after advocating for AI
regulation before the US Congress, voiced apprehension about the EU's efforts
to regulate artificial intelligence (Ray, 2023). Altman expressed his
belief that the EU AI Act’s draft was excessive in its regulations and warned that
OpenAI might withdraw its services from the region if compliance proved too
challenging (Ray, 2023). This shift in stance was both sudden and significant,
highlighting the considerable influence one individual can wield.
It is precisely these instances of power and actions that raise concerns for
Rumman Chowdhury, a notable figure in the AI domain. Chowdhury recognises
recurring patterns in the AI industry, akin to the cases mentioned above, which
she considers as warning signals (Aceves, 2023). One of the key issues she high-
lights is the common practice of entities calling for regulation while simulta-
neously using significant resources to lobby against regulatory laws, exerting
control over the narrative (Aceves, 2023). This paradoxical approach hinders the
development of robust and comprehensive regulatory frameworks that could
ensure the responsible use of AI technologies. Moreover, Chowdhury emphasises
that the lack of accountability is a fundamental issue in AI development and
deployment (Aceves, 2023). She points out how internal risk analysis within
companies often neglects moral considerations, focusing primarily on assessing
risks and willingness to take them (Aceves, 2023), as we saw with Microsoft.
When the potential for failure or reputational damage becomes significant, the
playing field is manipulated to favour specific parties, providing them with an
advantage due to their available resources (Aceves, 2023). This raises concerns
about the concentration of power in the hands of a few, leading to potential bias
and adverse consequences for the wider population. Chowdhury further high-
lights that unlike machines, individuals possess diverse and indefinite priorities
and motivations, making it challenging to categorise them as inherently good or
bad (Aceves, 2023). Therefore, to drive meaningful change, she advocates
leveraging incentive structures and redistributing power sources in AI governance
(Aceves, 2023). This would involve fostering collaboration among various
stakeholders, including governments, industries, academia and civil society, to
collectively address complex AI-related issues, promote cooperation and reach
compromises on a large scale (Aceves, 2023). By doing so, she believes we can
ensure that AI technologies are developed and deployed in a way that benefits
society as a whole, rather than serving the interests of a select few. In addition to
Chowdhury’s concerns, Karen Hao, senior AI editor at MIT Technology Review,
analysing the jobs that institutions of higher education aim to fulfil, we can gain
insights into how ChatGPT may impact their operations. This can include tasks
such as enhancing accessibility and inclusivity (functional job), fostering inno-
vation and collaboration across departments (social job) or adapting to evolving
educational demands (emotional job). Understanding these jobs can inform
strategic decisions regarding the integration and utilisation of ChatGPT within
educational institutions. Therefore, we believe that melding Christensen’s
disruptive innovation with critical theory presents a multi-dimensional frame-
work, as it allows for a comprehensive exploration of technologies, like ChatGPT,
not just as tools but as potential game-changers in industry and society.
like ChatGPT. These tools might be seen less as educational enhancements and
more as cost-saving or profit-driving mechanisms. This could shift priorities from
holistic education to market-driven objectives, echoing Marx’s concerns about
capitalist structures overshadowing true value. However, this does not have to
happen. Ethical, student-centric integration of ChatGPT could lead to enriched
collaborative experiences, blending traditional pedagogy with modern techniques.
In essence, while ChatGPT and similar AI technologies hold vast potential for
reshaping higher education, Marx’s theory of alienation cautions us to think
about the caveats. The challenge lies in ethically integrating these tools, focusing
on augmenting human capabilities rather than sidelining them. It underscores the
importance of continually reassessing institutional policies, emphasising the
human aspect of education and ensuring that advancements in technology truly
serve their primary stakeholders – the students, instructors and the broader
educational community.
To enhance our understanding further, we looked into the works of one of the
most famous theorists in phenomenological research, Martin Heidegger.
In addition, Sullivan et al. (2023) found that more articles made references
to institutions or departments that had imposed bans on ChatGPT
compared to those allowing its use. However, they observed that the most
commonly discussed response was the indecisiveness of certain universities
regarding their policies. These universities were described as ‘updating’, ‘review-
ing’ and ‘considering’ their policies, reflecting a cautious approach due to the
rapidly evolving nature of the situation. In the absence of official institutional
policies, several articles mentioned that individual academic staff members would
develop revised policies on a course-by-course basis. The researchers also noted
that universities that had chosen to prohibit the use of ChatGPT had already
updated their academic integrity policy or honour code, or they believed that AI
use was already prohibited based on existing definitions of contract cheating. On
the other hand, in cases where universities permitted the use of ChatGPT, it often
came with the requirement to adhere to strict rules, including the disclosure or
acknowledgement of its use in assignments. Additionally, the researchers high-
lighted that two articles clarified that while a specific university did not impose a
ban on ChatGPT, individual academic staff members still had the discretion to
prohibit its use in certain assessments or units. Furthermore, Sullivan et al. (2023)
found that a significant portion of the analysed articles discussed integrating
ChatGPT into teaching practices. These articles advocated for meaningful inte-
gration of AI in teaching and suggested specific ways to incorporate ChatGPT
into assignment tasks, such as idea generation and feedback on student work.
Various applications for ChatGPT in the learning experience were proposed,
including personalised assignments, code debugging assistance, generating drafts,
providing exemplar assignments and more. The articles acknowledged the diffi-
culties of banning ChatGPT and recognised its relevance in future workplaces.
Enforcing a complete ban was deemed impractical, leading to debates on
investing in AI detection systems. ChatGPT was likened to calculators or Wiki-
pedia, highlighting its disruptive nature. However, specific ways AI would be
employed in the workplace were not extensively explored. The researchers noted a
lack of focus on using ChatGPT to enhance equity outcomes for students. Few
articles discussed mitigating anxiety or supporting accessibility challenges on
campus. They also highlight that there was limited mention of ChatGPT’s
potential to improve writing skills for non-native speakers and promote a more
equitable learning environment. They note that only one article touched briefly on
disability-related considerations and AI’s potential to empower individuals with
disabilities. Regarding voices, Sullivan et al. (2023) found that university figures,
including leaders, coordinators, researchers and staff, were extensively quoted in
the media, with nearly half of the articles citing three or more representatives from
respective institutions. In contrast, student voices were relatively underrepre-
sented, appearing in only 30 articles, and only seven of those included quotes from
more than three students. Some articles focused on Edward Tian, the student
behind GPTZero, while others used survey data to represent the collective
student voice.
Based on their research, Sullivan et al. (2023) urge for a more balanced
examination of the risks and opportunities of ChatGPT in university teaching and
findings emphasise the necessity for further research and dialogue concerning the
implications of AI tools, highlighting the need to explore ethical use, innovative
teaching and learning practices and the promotion of equitable access to educa-
tional opportunities. Finally, they assert that as AI technologies continue to
evolve, it is crucial for universities to adapt and embrace their utilisation in a
manner that supports student learning and prepares them for the challenges of an
increasingly digital world.
Sullivan et al.’s (2023) investigation is relevant to our study in the following
ways.
Despite the limitations of Neumann et al.’s study, such as the small sample size
and potential biases, their comprehensive exploration of AI technology’s impact
on higher education offers valuable insights. By building upon their work and
conducting our own research, we can contribute to evidence-based practices and
informed discussions on the responsible and effective integration of ChatGPT in
higher education, thereby preparing students for the future AI-driven landscape.
Exploring ChatGPT’s Role 53
• Understanding AI in Education
Rudolph et al.’s study provides valuable insights into the applications and
implications of AI, specifically ChatGPT, in higher education. It offers a
framework categorising AI applications into student-facing, teacher-facing and
system-facing dimensions, enabling a comprehensive understanding of AI’s
role in education. This is a framework that we can draw upon in our research to
help us shed light on the diverse ways AI tools like ChatGPT can impact
various educational contexts.
• Innovative Assessment Methods
The study highlights concerns about ChatGPT’s potential to disrupt traditional
assessment methods, such as essays and online exams. As we investigate the
impact of AI tools on assessment practices, Rudolph et al.’s findings can guide
our exploration of innovative assessment approaches that leverage AI while
addressing the challenges posed by text-generating AI applications.
• Opportunities for Personalised Learning
Rudolph et al. emphasise the potential of AI tools, including ChatGPT, in
personalising and adapting student learning experiences. This insight can
inform our research on how AI can be utilised to tailor instruction, provide
feedback and support student-centred pedagogies that foster individualised
learning paths.
• Leveraging AI for Instructor Support
The study discusses how AI can reduce teacher workload and enhance class-
room innovation through automated tasks like assessment and feedback. Our
research can explore how AI tools like ChatGPT can complement instructors’
efforts, allowing them to focus more on guiding students and providing per-
sonalised support.
• Addressing Ethical Concerns
Rudolph et al.’s study acknowledges concerns about academic integrity and the
potential misuse of AI tools like ChatGPT for plagiarism. As we investigate the
ethical implications of AI integration in education, their findings can help us
examine strategies to promote responsible AI use and combat academic
misconduct effectively.
Regarding the social network analysis of tweets, Tlili et al. (2023) found that the
community formation around ChatGPT is fragmented, with individuals seeking
information and discussion about its limitations and promises. The most used
word pairs provide interesting insights, with some suggesting how to use
AI-powered ChatGPT in education, while others hint at the turning point in
educational systems. The researchers concluded that the public’s view on
ChatGPT is diverse, with no collective consensus on whether it is a hype or a
future opportunity. While positive sentiments (5%) outweighed negative senti-
ments (2.5%), the majority of sentiments (92.5%) were non-categorised, indicating
uncertainty about ChatGPT in education. Word cluster analysis revealed users’
optimism about using AI-powered chatbots in education. However, there were
also critical insights and concerns expressed, such as cheating and ethical impli-
cations. The researchers emphasise the need to examine the underlying AI tech-
nologies, like machine learning and deep learning, behind ChatGPT. Despite the
optimistic overview, concerns about its use in education were also observed. The
study concludes that negative sentiments demonstrated deeper and more critical
thinking, suggesting caution in approaching ChatGPT’s integration into educa-
tion (Tlili et al., 2023). Regarding the content analysis of interviews conducted by
Tlili et al. (2023), the findings highlighted users’ positive perceptions of
ChatGPT’s significance in revolutionising education. Participants acknowledged
its effectiveness in enhancing educational success by providing foundational
knowledge and simplifying complex topics. This potential led the researchers to
believe in a paradigm shift in instructional methods and learning reform. How-
ever, a minority of participants expressed concerns about learners becoming
overly reliant on ChatGPT, potentially hindering their creativity and critical
thinking abilities. Regarding the quality of responses provided by chatbots in
education, Tlili et al.’s (2023) study revealed that participants generally found the
dialogue quality and accuracy of information from ChatGPT to be satisfactory.
However, they also noted occasional errors, limited information and instances of
misleading responses, suggesting room for improvement. In terms of user expe-
rience, many participants in Tlili et al.’s (2023) study were impressed by the fluid
and exciting conversations with ChatGPT. However, they also pointed out that
ChatGPT’s humaneness needs improvement, particularly in terms of enhancing
its social role since it currently lacks the ability to detect physical cues or motions
of users. The study also showed that users perceived ChatGPT as a valuable tool
for diverse disciplines, reducing teachers’ workload and providing students with
immediate feedback. However, some users reported challenges with response
accuracy, contradictions, limited contextual information and a desire for addi-
tional functionalities. On an ethical front, participants raised concerns about
ChatGPT encouraging plagiarism and cheating, fostering laziness among users,
and potentially providing biased or fake information. The study also highlighted
worries about ChatGPT’s impact on students’ critical thinking and issues related
to privacy through repetitive interactions.
Regarding the investigation of user experiences, after daily meetings at which
the educators compared the results they had obtained using ChatGPT, they
identified ten scenarios in which educational concerns arose. These are as
follows. The authors found that educators observed ChatGPT's ability to
aid students in writing essays and answering exam questions, raising concerns
about potential cheating and the effectiveness of cheating detection in education
using chatbots (Tlili et al., 2023). Educators also recognised chatbots’ proficiency
in generating learning content but emphasised the need for content accuracy and
reliability, questioning how to ensure content quality and verification for
chatbot-generated content, including ChatGPT (Tlili et al., 2023). The educators
each initiated a new ChatGPT chat using the same prompt. However, they all
received different responses with varying answer quality, highlighting concerns
about equitable access to high-quality learning content (Tlili et al., 2023).
ChatGPT’s generated quizzes varied in difficulty, leading to questions about the
appropriateness of these learning assessments (Tlili et al., 2023). The educators
stressed the importance of well-designed learning assessments for student under-
standing and problem-solving but found inconsistencies in chatbot-generated
quizzes that could complicate teachers’ responsibilities (Tlili et al., 2023). They
noted that users’ interaction styles influenced the level of learning assistance
received from ChatGPT, raising questions about users’ competencies and
thinking styles to maximise its potential (Tlili et al., 2023). The educators
emphasised the need to humanise chatbots, including the ability to express
emotions and have a personality, to encourage reflective engagement in students
(Tlili et al., 2023). The educators observed ChatGPT occasionally providing
incomplete answers, raising concerns about its impact on user behaviour, espe-
cially among young learners who might use it as an excuse for incomplete tasks or
assignments (Tlili et al., 2023). The educators stressed the importance of exploring
potential adverse effects on users (Tlili et al., 2023). They also highlighted
concerns about data storage and usage – ChatGPT denied that it stores
conversation data – emphasising the need to safeguard user privacy, particularly
for young individuals (Tlili et al., 2023). During an interaction with ChatGPT,
one educator's request to format a blog reference in American Psychological
Association (APA) style produced strikingly inaccurate information, raising
questions about how to ensure reliable responses from ChatGPT and prevent harm
or manipulation (Tlili et al., 2023).
In their discussion, Tlili et al. (2023) express their belief that their findings
demonstrate the potential of ChatGPT to bring about transformative changes in
education. However, despite acknowledging its potential, they also raise several
concerns regarding the utilisation of ChatGPT in educational settings. The
authors acknowledge that while some institutions have banned ChatGPT in
education due to concerns about cheating and manipulation, they propose a
responsible adoption approach. This approach involves guidelines and interdis-
ciplinary discussions involving experts from education, security and psychology.
They note that, despite drawbacks, recent studies indicate educational opportu-
nities in ChatGPT that can enhance learning and instruction, prompting a need
for further research on the consequences of excessive reliance on chatbot tech-
nology in education. Highlighting the transformative impact of technology in
education, the authors also emphasise ChatGPT’s potential to simplify essay
writing and introduce innovative teaching methods like oral debates for assessing
critical thinking. They advocate for diverse assessment approaches and the
reformation of traditional classrooms, along with exploring the balance between
• Comprehensive Understanding
Tlili et al.’s study provides a holistic view of public discourse and opinions on
ChatGPT in education through the analysis of tweets, interviews and user
experiences. This comprehensive understanding can help inform our research
These implications can serve as valuable insights and guiding points for our
research on the impact of ChatGPT on the role of students, instructors and
institutions of higher education. By considering the potential benefits, challenges
and responsible use of the technology, we can develop a comprehensive and
balanced understanding of its implications in the educational landscape.
in learning that catered to their specific needs. The authors give an example
in which instructors could request suggestions for teaching with a
constructivist approach and receive multiple alternative recommendations. In
conclusion, Firaina and Sulisworo (2023) found that, despite its limitations,
respondents recognised the benefits of using ChatGPT to enhance
productivity and efficiency in learning. Consequently, they consider ChatGPT to
be an intriguing alternative in education, emphasising the importance of main-
taining a critical approach and verifying the information obtained. The authors
suggest that further research, including additional interviews and case studies, is
necessary to obtain a more comprehensive understanding of the use of ChatGPT
in learning, as this would help to deepen knowledge and insights regarding its
implementation and potential impact (Firaina & Sulisworo, 2023).
Firaina and Sulisworo’s (2023) qualitative study stands out due to its in-depth
interviews with five lecturers, providing rich insights into their experiences with
ChatGPT in education. The researchers effectively connected their findings with
educational theories, such as constructivist and communication theories,
enhancing the credibility of their conclusions. The study highlights practical
implications for lecturers and educational decision-makers, suggesting that
ChatGPT positively impacts productivity and learning effectiveness. However,
some limitations, like the small sample size and lack of a comparison group,
should be considered when interpreting the results. Future research with larger
and more diverse samples, along with comparative studies, can further explore the
benefits and challenges of using AI-powered chatbots like ChatGPT in educa-
tional settings.
Firaina and Sulisworo’s (2023) study has several implications for our research
on how ChatGPT affects the role of students, instructors and institutions in higher
education.
• Faculty Perspectives
The in-depth interviews conducted by Firaina and Sulisworo provide valuable
insights into how instructors perceive and utilise ChatGPT in their teaching
and learning processes. Understanding faculty perspectives can help inform our
study on how instructors perceive the integration of AI chatbots in educational
practices and the factors influencing their decision-making.
• Impact on Productivity
The findings from Firaina and Sulisworo’s study suggest that ChatGPT posi-
tively impacts productivity and efficiency for instructors. This insight may serve
as a basis for investigating how the adoption of AI chatbots in higher education
can enhance instructors’ efficiency in tasks such as lesson planning, content
creation and resource searching.
• Practical Implications
The practical implications highlighted by Firaina and Sulisworo’s study can
inform our research on the potential benefits and challenges of integrating AI
chatbots in higher education. Understanding how instructors navigate the use
of ChatGPT can offer insights into best practices and strategies for effectively
integrating AI chatbots in educational settings.
Overall, Firaina and Sulisworo’s study serves as a valuable reference for our
research, offering insights into how instructors perceive and utilise ChatGPT in
higher education. By incorporating their findings and considering the study’s
implications, we can strengthen the theoretical foundation and practical relevance
of our research on the effects of AI chatbots on students, instructors and insti-
tutions in the higher education context. From looking at instructor user experi-
ences, we now turn to researcher user experiences.
Acknowledging and applying the insights from Alshater’s study may help us to
navigate the transformative landscape of ChatGPT in higher education respon-
sibly, paving the way for a more efficient, inclusive and ethically sound academic
environment. Moreover, the implications he presents serve as valuable guidance
for shaping our own research.
encompassed two main sections: the potential and challenges of AI for Education,
as well as future research directions. To delve into the potential section, Zhai
reports querying ChatGPT about the history of AI for Education, noting that in
response, ChatGPT provided three paragraphs that chronologically detailed the
history of AI in education, starting from the 1960s up to the present day. The
author reports that this description was comprehensive, including relevant
examples and notable milestones in the development of AI for Education. The
author also reports that within the aforementioned writing, ChatGPT provided
detailed descriptions of three specific applications of AI in education: personalised
learning, automating administrative tasks, and tutoring and mentorship. In order
to delve deeper into these applications, Zhai posed separate queries regarding the
use cases for each application. As a result, each query yielded a comprehensive
definition of the application, a list of typical uses and a concise summary; for
instance, when inquiring about personalised learning, ChatGPT offered Zhai a
definition along with a comprehensive list of use cases as an illustrative example.
To delve even deeper into the use cases, Zhai conducted additional queries on the
history and potential of each aspect of personalised learning. This investigation
led to the identification of four specific uses: adaptive learning, personalised
recommendation, individualised instruction and early identification of learning
needs. Zhai reports that for each of these use cases, the results provided by
ChatGPT encompassed the definition, historical background, evidence of
potential and a concise summary. Zhai also conducted queries on automating
administrative tasks in education, after which ChatGPT provided the definition,
description, five use cases and a summary. From this, Zhai proceeded to query the
history and potential of the five use cases associated with automating adminis-
trative tasks in education, stating that the results yielded a comprehensive
description of the following: enrolment and registration, student record man-
agement, grading and assessment, course scheduling and financial aid. For the
second aspect of the study, Zhai explored the challenges associated with imple-
menting AI in the classroom. Through queries posed to ChatGPT, the author
obtained a direct list of challenges, which encompassed ethical concerns, tech-
nological limitations, teacher buy-in, student engagement and integration with
existing systems. Seeking a deeper understanding of these challenges, Zhai pro-
ceeded to query each specific challenge and potential solutions associated with
them. In the third part of the study, Zhai explored the future prospects of AI in
education. Through queries directed at ChatGPT, the author obtained five
potential developments. These included the increased utilisation of AI for per-
sonalised learning, the development of AI-powered educational games and sim-
ulations, the expanded use of AI for tutoring and mentorship, the automation of
administrative tasks through AI and the creation of AI-powered education
platforms. In the final stage, Zhai requested that ChatGPT compose the conclusion of
an academic paper that discussed the role of AI in driving innovation and
improvement in education. The author reports that the conclusion began by
reiterating the potential of AI in transforming education positively and that,
additionally, it emphasised the need to acknowledge and address the limitations
of AI, highlighting ethical, technological and other challenges associated with its
implementation in education. Zhai reports that the conclusion urged the imple-
mentation of appropriate measures to ensure the ethical and effective use of AI in
the education system.
Zhai (2022) describes the findings as follows. During the piloting process, the
author followed the scope suggested by ChatGPT and used subsequent queries to
delve deeper into the study. Zhai notes that the entire process, including gener-
ating and testing queries, adding subtitles, reviewing and organising the content,
was completed within 2–3 hours with minimal human intervention. Zhai also
observes that the writing generated by ChatGPT exhibited four key characteris-
tics: coherence, partial accuracy, informativeness and systematicity. Furthermore,
for each query, Zhai reports that the responses encompassed essential information
and maintained a smooth flow between paragraphs. By changing the topic while
addressing the same aspects, Zhai found that the responses followed an identical
format: ChatGPT would introduce the topic, provide a brief historical overview,
present evidence of potentials and limitations and conclude with a summary of the
topic. Zhai also reports that, interestingly, even with slight variations in wording, ChatGPT consistently produced the same results, which he believes indicates its ability to address queries expressed in different forms. Through this process, Zhai
acknowledges that ChatGPT demonstrates a remarkable capacity to organise and
compose components of articles effectively.
Zhai’s (2022) study provides valuable insights into the use of ChatGPT in
education. Firstly, Zhai suggests that educators should reassess literacy require-
ments in education based on ChatGPT’s capabilities. The study acknowledges the
efficient information processing capabilities of computers and the impressive
writing proficiency of AI, surpassing that of the average student. Zhai believes this
finding prompts the consideration of whether students should develop the ability
to effectively utilise AI language tools as part of future educational goals. Zhai
argues that education should prioritise enhancing students’ creativity and critical
thinking rather than focusing solely on general skills. To achieve this, the study
advocates for further research to understand which aspects of human intelligence
can be effectively substituted by AI and which aspects remain uniquely human.
Secondly, Zhai emphasises the importance of integrating AI, such as ChatGPT,
into subject-based learning tasks. The study points out that AI’s problem-solving
abilities closely mirror how humans approach real-world challenges. Zhai posits
that, as AI, including ChatGPT, continues to advance towards artificial general intelligence (AGI), educators
are presented with an opportunity to design learning tasks that incorporate AI, thereby fostering student engagement and enhancing the overall learning experience. He adds that this integration of AI into domain-specific learning tasks aligns with the way contemporary scientific endeavours increasingly rely on AI for prediction, classification and inference to solve complex problems. Thirdly, Zhai
addresses the potential impact of ChatGPT on assessment and evaluation in
education. The study highlights traditional assessment practices, such as essay
writing, and raises concerns about students potentially outsourcing their writing
tasks to AI. As AI demonstrates proficiency in generating written content, Zhai
argues that assessment practices should adapt their goals to focus on areas that
cannot be easily replicated by AI, such as critical thinking and creativity. This
shift in assessment practices aligns with the evolving needs of society and the
corresponding shifts in educational learning objectives. To effectively measure
creativity and critical thinking, Zhai suggests educators explore innovative
assessment formats that are beyond AI’s capabilities. In conclusion, Zhai’s study
underscores the transformative potential of ChatGPT in education and calls for
timely adjustments to educational learning goals, learning activities and assess-
ment practices. By recognising the strengths and limitations of AI technologies
like ChatGPT, educators can better prepare students to navigate a future where
AI plays an increasingly vital role. As AI reshapes the field of education, it is
essential to consider its integration thoughtfully and ensure that the emphasis
remains on cultivating skills that remain uniquely human while harnessing the
capabilities of AI to enhance the learning process.
Zhai (2022) explores ChatGPT’s impact on education, focusing on learning goals, activities and assessments. In a pilot study, Zhai found that ChatGPT could efficiently draft an academic paper with minimal human intervention, showcasing its potential in
generating scholarly content. While innovative, the study’s small sample size and
limited scope may restrict its generalisability. Additionally, ChatGPT’s lack of
deeper understanding and biases in predefined queries may affect its applicability
in certain educational tasks. We believe further research, with a mixed-methods
approach and larger samples, is needed to fully understand AI’s role in education
and its long-term implications on pedagogy and learning experiences. Nonethe-
less, Zhai’s study sets the stage for future investigations into AI’s impact on
education.
Zhai’s (2022) study offers crucial implications for our research.
Research Methodology
Research Context
This research is conducted at MEF University, a non-profit, private,
English-medium institution located in Istanbul, Turkey. Established in 2014,
MEF University holds the distinction of being the world’s first fully flipped
university. Embracing a flipped, adaptive, digital and active learning approach,
the university incorporates project-based and product-focused assessments instead
of relying on final exams. Furthermore, digital platforms and adaptive learning
technologies are seamlessly integrated into the programmes, while MOOCs are
offered to facilitate self-directed learning opportunities. In addition, since 2021, a
data science and artificial intelligence (AI) minor has been made available for
students from all departments. Caroline Fell Kurban, the principal investigator
and co-author of this book, plays a central role in leading the investigation. She
balances dual responsibilities, serving as both the principal investigator for the
project and the instructor in the in-class case study. To ensure comprehensive data
analysis, interpretation phases and the formulation of theoretical and practical
implementation suggestions, she received support from the MEF University
Centre for Research and Best Practices in Learning and Teaching (CELT). As
flipped learning is a fundamental aspect of MEF’s educational approach and is
specifically featured in this case study, we provide more information here.
Flipped learning is an instructional approach that reverses the traditional
classroom model, allowing students to learn course concepts outside of class and
use class time for active, practical application of the principles. In this approach,
teachers become facilitators or coaches, guiding students through problems and
projects while providing personalised support and feedback. The focus shifts from
content delivery to creating a student-centred learning experience. To ensure the
effectiveness of a flipped learning course syllabus, it is crucial to anchor it on
proven learning frameworks. These frameworks, rooted in learning theories, offer
valuable insights into the cognitive processes essential for successful learning.
They empower instructors to comprehend, analyse and anticipate the learning
process, guiding them in making informed decisions for teaching and learning
implementation. A pivotal aspect of designing a successful flipped learning syllabus is recognising the interconnectedness between curriculum, assessment and instruction. One framework that supports this interconnectedness, Understanding by Design (UbD), rests on the following principles:
(1) Thoughtful curricular planning enhances the learning journey, and UbD
provides a flexible structure to facilitate this without imposing rigid
guidelines.
(2) UbD guides curriculum and instructional strategies towards cultivating
profound comprehension and the practical application of knowledge.
(3) Genuine understanding emerges when students independently employ and
expand their learning through authentic performance.
(4) Effective curriculum design adopts an inverse path, commencing with
long-term desired outcomes and progressing through three stages – Desired
Results, Evidence and Learning Plan, which guards against potential pitfalls
like excessive reliance on textbooks or prioritisation of activities over clear
learning objectives.
(5) Educators assume the role of facilitators, favouring meaningful learning
experiences over mere content delivery.
(6) Regular evaluations of curriculum units against design benchmarks enhance
quality and encourage meaningful professional discourse.
(7) The UbD framework embodies a continuous enhancement approach,
wherein student achievements and teaching efficacy steer ongoing improve-
ments in both curriculum and instruction.
(Wiggins & McTighe, 1998).
UbD is a widely recognised framework for underpinning flipped courses (Şahin
& Fell Kurban, 2019).
Instructors employing UbD in course curriculum development proceed
through three distinct stages: Stage 1 – identify desired results (curriculum), Stage
2 – determine acceptable evidence (assessment) and Stage 3 – create the learning
plans (instruction).
Stage 1
The initial phase of UbD centres on defining desired outcomes, encompassing
several key elements. This process involves establishing clear objectives, designing
enduring understandings, formulating essential questions and specifying what
students should ultimately learn and achieve. Instructors should derive explicit
goals from university programme standards, accreditation criteria and course
purpose. These objectives then shape the creation of enduring understandings. An
enduring understanding encapsulates a fundamental concept with lasting rele-
vance beyond the immediate learning context. It is a profound notion that
embodies essential principles within a subject. These understandings offer stu-
dents deeper insights, fostering a comprehensive grasp of the subject beyond
surface-level facts. Crafting a robust enduring understanding begins with identifying a pivotal concept, then distilling it into a clear statement that resonates with
students. For instance, ‘Water cycles impact both Earth and society’ succinctly
captures a significant idea in UbD. Essential questions follow, serving as UbD’s
cornerstone. Understanding their essence is crucial. These questions are
open-ended, thought-provoking and engaging, promoting higher order thinking
and transferable concepts. They necessitate reasoning, evidence and sometimes
further inquiry. Notably, essential questions recur throughout the learning
journey, pivotal for design and teaching. For example: How do water cycles affect
ecosystems and natural processes? In what ways do human activities influence
water cycles? Essential questions come in two types: overarching, which apply to
multiple topics, and topical, which focus on specific subject matter (McTighe &
Wiggins, 2013).
After establishing the course aim, enduring understanding and essential
questions, the next step is to develop learning outcomes, i.e. what the students will
know and be able to do by the end of the course. For this purpose, Bloom’s
taxonomy proves to be an effective framework (Bloom et al., 1956). This tax-
onomy classifies educational goals into different categories, with each category
representing a higher level of cognitive functioning than the one below it. It
follows a hierarchical structure where each lower category serves as a prerequisite
for achieving the next higher level. The cognitive processes described within this
framework represent the actions through which learners engage with and apply
knowledge. Examples of some of these, adapted from Armstrong (n.d.), are as
follows, going from higher to lower levels of cognition.
Although we’ve provided the complete Bloom’s taxonomy spectrum here, it’s
important to acknowledge that in specific learning situations, such as introductory
courses, the priority might be understanding and applying existing knowledge,
rather than generating novel content or solutions. In such cases, the inclusion of
the ‘Create’ level of cognitive functioning in the learning outcomes might not be
essential. The emphasis could instead be on remembering, understanding and
applying the acquired information.
To align with Bloom’s taxonomy, an additional knowledge taxonomy can be
implemented, encompassing the domains of factual, conceptual, procedural and
metacognitive knowledge (Armstrong, n.d.). Factual knowledge includes famil-
iarity with terminology, specific details and elements within a subject area.
Conceptual knowledge pertains to familiarity with classifications, categories,
principles, generalisations and a grasp of theories, models and structures. Pro-
cedural knowledge encompasses mastery of subject-specific skills, algorithms,
techniques, methods and the ability to determine appropriate procedures. Meta-
cognitive knowledge involves strategic and contextual understanding of cognitive
tasks, including self-awareness and conditional knowledge. From this, course
learning outcomes can be formulated by identifying action verbs from Bloom’s
taxonomy.
Stage 2
Once the course aim, enduring understanding, essential questions and learning
outcomes have been established, the instructor proceeds to Stage 2: determining
acceptable evidence (assessment). At this stage, instructors should ask some key
questions including: How will we know if students have achieved the desired
results? What will we accept as evidence of student understanding and their ability
to use (transfer) their learning in new situations? and How will we evaluate stu-
dent performance in fair and consistent ways? (Wiggins & McTighe, 1998). To
answer these questions, UbD encourages instructors to think like assessors before
developing units and lessons. The assessment evidence should match the desired
outcomes identified in Stage 1. So, it is therefore important for instructors to think
ahead about the evidence needed to show that students have achieved the goals.
This approach helps to focus the instruction. In Stage 2, there are two main types
of assessment – performance tasks and other evidence. Performance tasks ask
students to use what they have learnt in new and real situations to see if they really
understand and can use their learning. These tasks are not for everyday lessons;
they are like final assessments for a unit or a course. Everyday classes teach the
knowledge and skills needed for the final performance tasks. Alongside perfor-
mance tasks, Stage 2 includes other evidence like quizzes, tests, observations and
work samples to find out what students know and can do. However, before we
move on to discuss how we can design the performance task and other types of evidence, first let’s take a look at our third learning framework: the Assessment For Learning (AfL), Assessment As Learning (AaL) and Assessment Of Learning (AoL) framework (Rethinking Classroom Assessment with Purpose in Mind: Assessment for Learning; Assessment as Learning; Assessment of Learning, 2006).
The AfL, AaL and AoL framework serves as a valuable tool for developing
these assessments, as it emphasises how different aspects of the learning process
have distinct roles in enhancing students’ understanding and performance. AoL,
often referred to as summative assessment, is what most people commonly
associate with testing and grading. This involves assessing students’ knowledge
and skills at the end of a learning period to determine their level of achievement.
AoL aims to measure how well students have met the learning outcomes and to
assign grades or scores. While the primary purpose of AoL is to provide a
summary judgement of student performance, it can also offer insights into the
effectiveness of instructional methods and curriculum design. AoL forms the
foundation of the end-of-course performance task. However, it is also supported
by AfL and AaL. AfL, also known as formative assessment, focuses on using
assessment as a tool to support and enhance the learning process. The primary
purpose of AfL is to provide timely feedback to both students and educators. This
feedback helps students understand their strengths and areas that need
improvement, allowing them to adjust their learning strategies accordingly.
Teachers can use the insights from formative assessments to tailor their instruc-
tion, addressing students’ needs more effectively. AfL promotes a learner-centred
approach, where assessment is seen as a means to guide and enhance learning
rather than merely to measure it. Therefore, AfL should be incorporated
throughout the semester to support the students to achieve the learning outcomes
in the end-of-course performance task. However, AaL should also play an
important part in this process. AaL is about promoting a metacognitive approach
to learning. Here, assessment is viewed as an opportunity for students to actively
engage with the material and reflect on their learning process. Students take on a
more active role by monitoring their own learning, setting goals and evaluating
their progress. AaL encourages students to develop self-regulation skills and
become independent learners. This approach shifts the focus from external eval-
uations to internal self-assessment and personal growth. Therefore, AaL should
also be incorporated throughout the semester to support students towards eval-
uating their learning and setting their goals for the end-of-course performance
task. Thus, these three types of assessment are not mutually exclusive; rather, they
complement each other within the broader framework of educational assessment.
To design the end-of-course performance task, following UbD, it is recommended that instructors follow the Goal, Role, Audience, Situation, Performance/Product, Standards (GRASPS) format (Wiggins & McTighe, 1998).
Stage 3
Stage 3 of UbD involves planning learning experiences and instruction that align
with the goals established in Stage 1. This stage is guided by the following key
questions that shape the instructional process: How will we support learners as
they come to understand important ideas and processes? How will we prepare them to transfer their learning?
• Pre-class/Online
– Unit overview;
– Introduction to key terms;
– Prior knowledge activity;
– Introduction to concepts (via video, article);
– Hold students accountable for their learning (formative assessment).
• In Class
Research Approach
This research centres on an investigation of the impact of ChatGPT on students
and instructors in higher education. Our primary objectives are to explore,
understand and assess how this AI chatbot may influence the roles of students and
instructors within an academic setting. By delving into the implementation of
ChatGPT, we aim to uncover potential challenges and opportunities that may
arise, providing valuable insights into its transformative role in the educational
landscape. Ultimately, our goal is to comprehensively examine how the integra-
tion of ChatGPT specifically affects the roles of students, instructors and higher
education institutions. As such, similar to Rudolph et al. (2023), we categorise our
areas of research into student-facing, teacher-facing and system-facing. However,
we introduce another category, ‘researcher-facing’, as it provides an additional
metacognitive perspective on how ChatGPT influenced the research process
which, ultimately, will also affect institutions of higher education.
On planning our research approach, we decided a qualitative research para-
digm would be the most suitable, as it is an exploratory approach which aims to
understand the subjective experiences of individuals (not just the technology) and
the meanings they attach to those experiences. This approach is particularly useful
when investigating new phenomena, such as ChatGPT, where there is limited
knowledge and experience. Using such an approach enables us to gain a deeper
understanding of the impact of ChatGPT on the role of students, instructors and
our institution as a whole and to explore the subjective experiences and per-
spectives of those involved. Within this paradigm, a case study approach seemed
most appropriate. Case studies involve conducting a thorough investigation of a
real-life system over time, using multiple sources of information to produce a
comprehensive case description, from which key themes can be identified
(Creswell & Poth, 2016). This approach, commonly employed in the field of
education, entails gathering and analysing data from diverse sources like inter-
views, observations, documents and artefacts to gain valuable insights into the
case and its surrounding context. Case studies are a useful approach when a case
can be considered unique and intrinsic (Yin, 2011).
Our case is both unique and intrinsic, as it involves looking at the potential
effect of ChatGPT on various stakeholders at our university, something that, at
the time of writing, had not been studied extensively before. Due to this, we
decided to employ an instrumental case study research design, as proposed by Stake (1995), combined with Yin’s five phases of analysis: compiling, disassembling, reassembling, interpreting and concluding (Yin, 2011).
It is particularly useful for understanding a phenomenon within a specific context,
as is the case with ChatGPT and its potential impact on various stakeholders in
education. It also takes into consideration the historical background, present
circumstances and potential future developments of the case. In such a case, data
can be gathered via interviews, focus groups, observations, emails, reflections,
projects, critical incidents and researcher diaries, after which, following Braun
and Clarke (2006) a thematic analysis can be conducted.
Data Collection
This study took place from December 2022 to August 2023, starting with the
release of ChatGPT-3.5 on 30 November 2022. We used the free version of
ChatGPT-3.5 for data collection, publicly available since November 2022, to
ensure fair participation of students without requiring them to purchase a paid
version. However, it should be noted that the training data in ChatGPT-3.5 only
extend up to September 2021. During the write-up phase, GPT-4 was used. As
discussed previously, the literature review focused on papers published between
December 2022 and early April 2023 to ensure current resources. However,
considering ChatGPT’s ever-evolving nature, we continued to collect extant
literature from media sources throughout the study until the final write-up.
Adopting a case study approach, our research aims to collect diverse and
comprehensive data. In line with Yin’s (1994) case study protocol, we identified
six specific types of data we would gather, including documentation, archival
records, reflections, direct observation, participant observation and physical
artefacts. To collect data for this study, relevant documents such as reports,
policies and news articles on ChatGPT in education were continuously gathered
from internet searches throughout the investigation. The objective was to gain
comprehensive insights and perspectives on the integration of ChatGPT in edu-
cation. The collected data formed the basis for Chapter 2 in this book. The
principal investigator maintained reflexivity throughout, considering her positionality and biases, and sought diverse perspectives from various sources to enhance data validity and reliability. This approach ensured a well-rounded
study.
The researcher-facing aspect of this study involved the principal investigator
documenting the impact of ChatGPT on the research process. A comparative
approach was taken, analysing how research stages were conducted in previous
projects, before ChatGPT’s availability, and how they could be approached since its introduction.
• Course Aim
The overall educational aim of this course is for students to investigate the role
linguistic analysis plays in the legal process. It focuses on the increasing use of
linguists as expert witnesses where linguistic analysis is presented as evidence.
• Course Description
This course aims to provide students with an understanding of forensic lin-
guistics, focusing on the role of linguistic analysis in the legal process. Forensic
linguistics involves a careful and systematic examination of language, serving
justice and aiding in the evaluation of guilt and innocence in criminal cases.
The field is divided into two major areas: written language, which analyses
various texts like police interviews, criminal messages and social media posts,
and spoken language, which examines language used during official interviews
and crimes. Through a case-based approach, the course explores how crimes
have been solved using different linguistic elements, such as emojis, text mes-
sage abbreviations, regional accents and dialects, handwriting analysis and
linguistic mannerisms, among others.
• Enduring Understanding
Forensic Linguistics aids justice by analysing language to uncover truth in
criminal cases.
• Essential Questions
Overarching Essential Questions
– How does linguistic analysis contribute to legal case analysis in forensic
linguistics?
– How is the emergence of AI reshaping the legal field?
The selection of this course for investigation was driven by several factors.
Firstly, the principal investigator was the instructor for this course and had
expertise in exploring educational technologies. Additionally, she had previously
investigated and been involved in the design of the Flipped, Adaptive, Digital and
Active Learning (FADAL) approach, making her well-suited for this investiga-
tion. The instructor’s deep understanding of the course, its planning processes and
her ability to teach it again in the upcoming semester provided an ideal oppor-
tunity to compare pre- and post-ChatGPT course planning. Moreover, the lin-
guistic components of the Forensic Linguistics course made it suitable for testing
ChatGPT’s capabilities across various linguistic aspects. The students enrolled
were from the Faculty of Law, a field expected to be heavily impacted by AI
advancements, making their involvement in the investigation valuable for raising
awareness about AI’s impact on the legal profession. Data collection occurred
between March and June 2023, aligning with the spring semester. To investigate
the effects on students, a diverse set of data was gathered. This started with a
survey administered at the beginning of the course to assess students’ existing
familiarity with and usage of ChatGPT. In the second lesson, students were presented
with a video that introduced them to ChatGPT, followed by open-ended ques-
tions to capture their impressions. Pre-class questions were employed throughout
the course to find out about the specific interactions students had with ChatGPT
and how these interactions had influenced their learning experiences. A reflective
questionnaire was conducted at the end of the course to gain more information
about the students’ insights, impressions and perspectives on their experiences with
ChatGPT throughout the course. Furthermore, complementary data sources were
incorporated into the study. Padlets, screenshots and students’ reflections were
gathered to provide a more comprehensive perspective on the student experience.
To further enrich the analysis, students granted permission for their projects to be
included in the data assessment. The student-facing data are referred to as SFD.
The focus of the system-facing aspect of this study was to examine the
implications of ChatGPT from the viewpoints of different stakeholders at the
university, including instructors, department heads, deans and vice-rectors.
Peripheral participants, including visiting teachers at workshops and discussions
with professors from different institutions and educational leaders at conferences,
provided additional insights. The data collection period for this aspect was from
January to June 2023. Various methods were used for data collection. Email
communications from university stakeholders were collected, providing valuable
insights into discussions surrounding ChatGPT’s impact on the university. Zoom
recordings and workshop activities were collected from institutional workshops
about ChatGPT to understand the institutional response. Interviews with
instructors and stakeholders were conducted via Zoom or Google Chat. Critical
incidents arising from conversations and conferences were recorded in a
system-facing diary, helping to identify patterns, themes, challenges and oppor-
tunities related to ChatGPT’s integration in education. This served as a valuable
tool for documenting and reflecting upon insights and challenges. Information
recorded in this diary was member-checked, wherever possible, with those
involved to verify the accuracy and validity of the data and to ensure perspectives were
accurately represented. System-facing data are referred to as SYFD.
• Ability to translate
ChatGPT has the ability to translate text from one language to another.
• Ability to complete student tasks
ChatGPT exhibits proficiency in successfully accomplishing tasks assigned to
students.
The process of refining codes into coherent themes involved several cycles of
careful evaluation. We began by generating initial codes and then organising them
into meaningful themes. Thorough exploration of various groupings and potential
themes ensured their accuracy and validity. To validate these themes, we metic-
ulously cross-referenced them with the coded data extracts and the entire dataset.
Additional data, such as critical incidents observed during conferences and
workshop discussions, posed a challenge as they emerged after we had established
our codes and completed the thematic analysis. However, since these incidents
contained relevant new data that could enrich our analysis, we revisited the
coding and thematic analysis process three times to integrate these additional
data. This iterative approach resulted in more robust codes and themes.
Collaborative discussions further led to the formulation of concise and
informative names for each theme. Through this iterative approach, we attained
data saturation, indicating that no further new information or themes were
coming to light. The final themes that were collectively agreed upon and their
respective codes are as follows:
To facilitate the mapping and analysis of the themes, the researchers utilised a
Google Sheet for each theme, incorporating the following sections: code, code
definition, examples from the extant literature, examples from the literature
review and supporting examples from the data. This comprehensive framework
allowed for a systematic examination of each of the themes in relation to our
research questions. From this, the following interconnectivity of the themes was
derived (Fig. 1).
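The per-theme analysis framework described above can be sketched as a simple data structure. This is an illustrative model only: the field names mirror the sheet sections named in the text (code, code definition, examples from the extant literature, examples from the literature review, supporting examples from the data), and the example entry is hypothetical, drawn loosely from the fact-checking discussion later in the chapter.

```python
from dataclasses import dataclass, field

@dataclass
class ThemeEntry:
    """One row of the per-theme analysis sheet described in the text."""
    code: str                                               # short label for the code
    definition: str                                         # what the code captures
    extant_literature: list = field(default_factory=list)   # examples from prior studies
    literature_review: list = field(default_factory=list)   # examples from the review
    supporting_data: list = field(default_factory=list)     # extracts from SFD/TFD/RFD/SYFD

# Hypothetical entry illustrating the structure
entry = ThemeEntry(
    code="need-to-fact-check",
    definition="Users verify ChatGPT output against other sources.",
    supporting_data=[
        "'I did the homework with ChatGPT but I checked the "
        "information it gave me with another source' (SFD)"
    ],
)

# A theme is then a named collection of such rows, one sheet per theme
theme = {"name": "Reliability of output", "entries": [entry]}
print(theme["entries"][0].code)  # → need-to-fact-check
```

Structuring each theme this way makes the cross-referencing step described above mechanical: every code carries its definition and its supporting extracts side by side, so themes can be checked against the coded data and the full dataset.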
Throughout this study, obtaining informed consent from all participants,
including interviewees, instructors, and students, was a top priority. Participants
were fully informed about the research purpose, procedures, potential risks, and
benefits, with the freedom to decline or withdraw without consequences. Our
communication with participants remained transparent and clear, ensuring data
privacy and confidentiality. Ethical review and approval were obtained from the
university’s ethics committee to comply with guidelines and protect participants’
rights. To mitigate bias, the researcher remained mindful of personal biases
during data collection and analysis. However, during the research process, an
ethical issue emerged concerning consent related to the research diaries, which
served as a hidden form of data collection. The researcher began to perceive every
interaction and event as potential data, while participants may not have viewed
them in the same light (Hammersley & Atkinson, 1995). This raised concerns
about ensuring proper consent for the use of information gathered through such
situations. Because the researcher often did not recognise the relevance of these
incidents to the investigation until after they had occurred, she had not explicitly
communicated to participants that the content of the interactions could be used in
the research. Therefore, to protect the confidentiality
of the individuals involved and respect their privacy, the researcher took measures
to provide anonymity when referencing extracts from the research diaries in the
writing.
In the next chapter, we present our findings and interpretations of the data,
encompassing a thorough analysis and derivation of insights from our outcomes.
This chapter systematically provides an overview of the collected data, aligning it
with the extant literature and the literature review. Subsequently, we interpret
these findings within the framework of our theoretical approach. Employing this
information, we re-examine our research questions, specifically exploring the
potential impacts of ChatGPT on students, instructors, and higher education
institutions. Through this process, we convert our raw data into valuable insights,
enriching our understanding of the subject.
Chapter 6
such as culture, background and experiences in the same way as human educators.
This is consistent with the fact that ChatGPT’s outputs may not align with human
values because its focus is on predicting the next word rather than understanding
the broader context. Similarly, Alshater (2022) observes that language models like
ChatGPT can generate unrelated or generic responses due to their lack of
contextual understanding. In addition, Sullivan et al.’s (2023) study highlighted
the limitations of ChatGPT’s contextual understanding, as evidenced by the
generation of unrelated or generic responses.
Looking at the need for clear context through the lens of Christensen’s Theory
of Jobs to be Done highlights the importance of users understanding their specific
needs and desired outcomes when hiring ChatGPT. Users must clearly articulate
their requirements and objectives in order to effectively utilise ChatGPT’s
capabilities. This involves providing a clear and specific
context for ChatGPT to generate accurate and relevant responses. Bourdieu’s
social theory sheds light on the power dynamics and social structures that influ-
ence the interaction with ChatGPT. It emphasises the need to consider linguistic
norms, cultural capital and social dynamics that shape communication with the
artificial intelligence (AI) system. Instructors must navigate these factors when
engaging with ChatGPT to ensure meaningful and appropriate responses. Hei-
degger’s Theory on Being highlights the distinction between ChatGPT’s predic-
tive nature and the broader contextual understanding of human educators. Users
must recognise that ChatGPT’s focus is on predicting the next word rather than
comprehending the broader context.
sources, which raises concerns about copyright infringement and fair compensa-
tion for creators. The absence of clear guidelines for referencing
ChatGPT-generated content further exacerbates the commodification of infor-
mation and the devaluation of labour in the production of academic knowledge.
Users are left grappling with the challenge of appropriately citing and referencing
information derived from ChatGPT while the sources remain undisclosed and
uncompensated. From Heidegger’s perspective, the lack of a standard referencing
guide can be seen as a consequence of the instrumentalisation of technology in the
academic context. The focus on efficiency and productivity in using ChatGPT as a
tool for generating content overlooks the essential nature of referencing as a
means of acknowledging the origins and authenticity of knowledge. The absence
of clear guidelines reflects a reduction of referencing to a technical task, neglecting
its ontological significance in preserving the integrity and transparency of aca-
demic work.
and limited information, noting that while responses from ChatGPT were
generally considered reasonable and reliable, there were instances where
misleading information was present alongside the answers.
Through Christensen’s lens, users of ChatGPT are hiring it to provide accurate
and reliable information. However, the examples from the data demonstrate that
ChatGPT often fails to fulfil this job, as it generates inaccurate information or
lacks relevance in its responses. This misalignment between users’ expectations and
the actual performance of ChatGPT indicates a gap in fulfilling the job it is hired
for. Bourdieu’s sociological perspective emphasises the role of social structures
and cultural capital in shaping individuals’ actions and preferences. In the case of
ChatGPT, instructors, as highlighted by Rudolph et al. (2023), express concerns
about its limitations in understanding and evaluating information. These concerns
are influenced by the instructors’ position as experts in the educational field,
where accuracy and relevance of information are highly valued cultural capital.
The instructors’ scepticism towards ChatGPT’s ability to fulfil this role reflects
their reliance on established knowledge and expertise, and it is this concern that is
reflected in their evaluation of the technology. Through a Marxist lens, the lim-
itations and inaccuracies in ChatGPT’s performance may be attributed to the
inherent contradictions and dynamics of capitalist production, where the pursuit
of efficiency and profit often takes precedence over ensuring comprehensive and
accurate information. The potential biases and shortcomings of ChatGPT may be
seen as by-products of the capitalist system’s influence on technological devel-
opment. Through Heidegger’s lens, ChatGPT’s ability to generate text that
appears passable but lacks deep comprehension of the subject matter raises
existential concerns. Heidegger argues that technology can lead to a mode of
being characterised by instrumental rationality, where human activities become
reduced to mere means to an end. In the context of education, ChatGPT’s limi-
tations in grasping and assessing information raise questions about its impact on
the genuine understanding and critical thinking skills of students. It highlights the
need to reflect on the role of technology in shaping educational practices and the
nature of knowledge acquisition.
interpreted as reprimand. For example, one student reported, ‘ChatGPT does not
use slang words and does not respond when asked questions using slang words’
(SFD). The instructor said, ‘We were inputting terms that were used by the
Unabomber and that ultimately led to him being identified, so they were an
important part of the case. However, ChatGPT refused to discuss some of the
slang items as they were considered derogatory and it even reprimanded us for
asking about these terms’ (TFD). Similarly, when asking about the suicide of
Kurt Cobain and certain words in one of the notes, the instructor noted,
‘ChatGPT refused to discuss the topic, deeming it inappropriate, and it also
refused to discuss some of the words, such as “bitch,” considering them deroga-
tory language’ (TFD). Furthermore, students in the study discovered that
ChatGPT does not use swear words (SFD). Interestingly, occurrences of refusal
or reprimand did not come up in the literature.
Through Christensen’s lens, users of ChatGPT expect it to provide accurate
and reliable information. However, the instances of refusal or reprimand indicate
a misalignment between users’ expectations and the actual performance of
ChatGPT. Users may have specific tasks or queries in mind that they want
ChatGPT to fulfil, but the system’s limitations and reliance on the moderation
system can lead to frustrating experiences for users who are unable to get the
desired responses. These instances of refusal or reprimand by ChatGPT may also
be viewed through the lens of Bourdieu’s cultural capital, where the system is
programmed to avoid language violations and promote compliance with the
content policy. The instructors’ experiences, where they were reprimanded for
discussing certain topics or using specific language, reflect the clash between their
expertise and established knowledge and the system’s limitations in understanding
the context and nuances of their queries. The pursuit of efficiency and profit may
prioritise the moderation system’s effectiveness in addressing language violations,
but it may fall short in fully understanding and addressing the complexity of user
queries and intentions. Taking a Heideggerian stance, ChatGPT’s reliance on the
moderation system and its instances of refusal or reprimand raise existential
concerns. Users may question the role of technology in shaping their interactions and
limiting their freedom to engage in certain discussions or use specific language. It
raises broader questions about the impact of AI systems like ChatGPT on genuine
understanding, critical thinking skills and the nature of knowledge acquisition in
educational settings.
Need to Fact-Check
The importance of fact-checking is highlighted by the following examples from
the data: ‘To fact-check the information from ChatGPT, I did my own research
and double-checked it. For instance, when ChatGPT mentioned the title of the
Unabomber’s manifesto as “Industrial Society and Its Future,” I made sure to
check it myself before using it in my conclusion’ (SFD); ‘I think ChatGPT is
useful for research, but you need to check the information against other sources to
make sure it is giving the correct information’ (SFD); ‘I did the homework with
ChatGPT but I checked the information it gave me with another source’ (SFD);
‘We can’t rely on the accuracy of all the information it gives us. We need to check
it by researching it ourselves’ (SFD); ‘ChatGPT was very poor at generating real
and relevant literature... Therefore, always fact-check what it is saying’ (RFD); ‘I
found it a useful starting point to get ChatGPT to generate ideas about gaps in the
literature, but felt it was more accurate to rely on my own identification of gaps
from reading all the papers’ (RFD). In addition, the instructor made the following
observation: ‘Students used ChatGPT as a search engine to ask about the
Unabomber case. However, we didn’t know where any of the information came
from. We thought there were two problems with this. The first is that if you use
this information, you are not giving any credit to the original author. The second
is that ChatGPT is a secondary source and should not be treated as a primary
source, therefore we agreed that everything taken from ChatGPT should be
fact-checked against a reliable source’ (TFD).
This was also seen in the literature. Mhlanga (2023) emphasised the impor-
tance of critically evaluating the information generated by ChatGPT and
discerning between reliable and unreliable sources. In line with this, Firaina and
Sulisworo (2023) recognised the benefits of using ChatGPT to enhance produc-
tivity and efficiency in learning, but they also emphasised the need to maintain a
critical approach and verify the information obtained. They stressed the impor-
tance of fact-checking the information generated by ChatGPT to ensure its
accuracy and reliability. Similarly, Alshater’s (2022) research underscored the
importance of fact-checking and verifying the information produced by these
technologies.
Through Christensen’s lens, we can observe that users hire ChatGPT for
specific purposes such as generating information, assisting with research tasks and
improving productivity and efficiency in learning. However, as we saw, due to
limitations within the system, users also recognise the need for fact-checking as a
crucial task when utilising ChatGPT. Fact-checking allows users to ensure the
accuracy and reliability of the generated information, fulfilling their goal of
obtaining trustworthy and verified knowledge. This aligns with the principle of
Christensen’s theory, where users seek solutions that help them accomplish their
desired outcomes effectively. Through the lens of Bourdieu, we can view the
emphasis on fact-checking as a manifestation of individuals’ cultural capital and
critical thinking skills. Users demonstrate their ability to engage in informed
decision-making by recognising the importance of critically evaluating informa-
tion and distinguishing between reliable and unreliable sources. In terms of
Marx’s theory, the focus on fact-checking reflects the power dynamics between
humans and AI systems. Users exert their power by independently verifying
information, reducing the potential influence of AI systems on their knowledge
and decision-making processes. Fact-checking can be seen as a way for individ-
uals to assert their agency in the face of technological advancements. Considering
Heidegger’s philosophy, fact-checking represents the individual’s active engage-
ment with the information provided by ChatGPT and their critical interpretation
of its accuracy. Users understand that AI-generated information is fallible and
recognise the importance of their own engagement and interpretation to arrive at
a reliable understanding of the world.
among users with limited cultural capital, to prevent the spread of inaccuracies
and misleading information through blind trust in ChatGPT. Bourdieu’s ‘voice of
authority’ theory further supports this alignment, where users with limited
exposure to critical thinking may accept ChatGPT’s information as authoritative,
even when incorrect. On the other hand, users with higher cultural capital can
easily critically assess ChatGPT’s output. Symbolic power associated with repu-
table institutions reinforces the perception of ChatGPT as an authoritative
source. Hence, promoting digital literacy and critical thinking is crucial for more
informed engagement with AI technologies like ChatGPT. Marx’s theory of
alienation becomes relevant in the context of users’ unwavering trust in
ChatGPT’s information, shaping their interactions and decision-making. This
blind trust can be interpreted as a form of alienation, where users rely on an
external entity (ChatGPT) for information and decision-making, foregoing their
own critical thinking and access to diverse sources of knowledge. Such depen-
dency on ChatGPT reinforces power dynamics between users and the technology,
as users surrender their agency to the AI system. Through a Heideggerian lens, as
ChatGPT operates based on patterns and examples in its training data without a
deep understanding of the content or context, this raises existential questions
about the nature of AI and its role in providing meaningful and reliable infor-
mation. Users’ blind trust in ChatGPT’s output can be seen as a result of the
technological framing, where users perceive AI as all-knowing or infallible,
despite its inherent limitations.
the curriculum, instructors can ensure that students are equipped with the
necessary skills to effectively engage with ChatGPT and make informed decisions.
Institutions of higher education have a responsibility to integrate AI literacy
and critical thinking skills into the curriculum and provide resources and support
for students and instructors towards this. Institutions should also establish clear
guidelines for the ethical use of AI in education, considering the limitations of
ChatGPT and other AI systems. By doing so, institutions can ensure that students
are aware of the potential risks of unquestioning trust in AI-generated content
and promote responsible and ethical engagement with AI tools.
In summary, users perceive ChatGPT as more than just an AI language model,
fostering a sense of connection. However, this perception can lead to unques-
tioning trust in its information, emphasising the need for critical thinking and
information literacy training. Thus, educators must address sociocultural factors
and power dynamics influencing user trust. Practical actions, including AI literacy
training, promoting critical thinking and implementing ethical guidelines, are
essential for responsible engagement with AI technologies like ChatGPT in
education.
can very quickly come up with scripts for pre-class videos as well as suggesting
images and visuals that can be used in the video’ (TFD). The instructor also said,
‘ChatGPT was very good at coming up with ideas for in-class assessments. This
certainly saved me time’ (TFD). From observations during lessons, the instructor
noted that: ‘Students wrote down traditional gendered pronouns in English and
then tried to research online contemporary genders in English. They then tried the
same through ChatGPT. ChatGPT was more efficient at this activity, thus saving
the students time’ (TFD).
In the literature, Fauzi et al. (2023) emphasised that students can optimise their
time management by leveraging ChatGPT’s features, such as storing and
organising class schedules, assignment due dates and task lists. This functionality
enables students to efficiently manage their time, reducing the risk of overlooking
important assignments or missing deadlines. Similarly, Firaina and Sulisworo
(2023) reported that using ChatGPT had a positive impact on the quicker
understanding of material. According to the lecturers interviewed in their study,
ChatGPT facilitated a quicker understanding by providing access to new infor-
mation and ideas. Alshater (2022) reported that ChatGPT and similar advanced
chatbots can automate specific tasks and processes, such as extracting and ana-
lysing data from financial documents or generating reports and research sum-
maries, concluding that by automating these tasks, ChatGPT saves researchers’
time and expedites the research process. Alshater also noted that the ability of
ChatGPT to swiftly analyse large volumes of data and generate reports and
research summaries contributes to the accelerated speed of research (2022). Zhai
(2022) utilised ChatGPT in his study to compose an academic paper and was able
to complete the paper within 2–3 hours. This demonstrates how ChatGPT
expedites the writing process and enables efficient completion of tasks. Zhai
further observed that ChatGPT exhibited efficient information processing capa-
bilities, swiftly finding the required information and facilitating the completion of
tasks within a short timeframe. These findings highlight the central focus on
enhancing productivity, time management, understanding complex topics and
expediting processes.
This aligns with Christensen’s theory, which recognises that users are hiring
ChatGPT to complete these productivity and learning-related tasks. Through a
Bourdieusian lens, the use of ChatGPT can be viewed as a means to acquire
additional social and cultural capital. Students and researchers can leverage
ChatGPT to gain access to information, knowledge and efficient tools, thereby
enhancing their learning outcomes and research productivity. Through a Marxist
lens, the potential of ChatGPT and similar technologies to automate tasks, save
time and expedite processes raises concerns about the impact on labour and
employment. While ChatGPT improves efficiency for individuals, there are
implications regarding job displacement and the concentration of power and
resources among those who control and develop these technologies. Heidegger’s
perspective prompts critical reflection on the consequences of heavy reliance on
AI technologies like ChatGPT for tasks traditionally performed by humans.
While ChatGPT offers convenience and efficiency, it raises questions about the
potential loss of human connection, critical thinking and creativity. This invites us
Findings and Interpretation 113
learning. This was also suggested in Alshater’s (2022) study, where he observed
that AI chatbots can enhance productivity by automating tasks and improving
research efficiency as well as contributing to improved accuracy by identifying
and rectifying errors in data or analysis and ensuring consistency in research
processes by following standardised procedures and protocols. He believes this
helps researchers focus on the content and interpretation of their work, thus
alleviating cognitive load.
Regarding Christensen’s Theory of Jobs to be Done, ChatGPT can be hired to
simplify and streamline tasks, allowing users to offload cognitive effort onto the
AI chatbot. By providing assistance, such as generating mnemonics, summarising
articles and offering quick and accurate definitions, ChatGPT enables users to
focus on higher level cognitive processes rather than the more mundane aspects of
their work. Through Bourdieu’s lens, ChatGPT can be viewed as a tool that
bridges knowledge gaps and reduces cognitive load by providing access to
information that might be otherwise challenging to obtain. By acting as a
communication channel between users and knowledge, ChatGPT facilitates the
acquisition of fresh ideas and information, potentially levelling the playing field
for individuals with varying cultural capital. However, it is essential to recognise
how habitus shapes users’ interactions with ChatGPT, with some relying heavily
on it without critical evaluation. While ChatGPT’s role in reducing cognitive load
can empower learning, fostering critical thinking remains crucial to assess the
reliability of its outputs. Understanding users’ interactions within the context of
cultural capital and habitus is vital to evaluate ChatGPT’s impact on equitable
information access. Once again, Marxism sheds light on the potential impact of
AI chatbots like ChatGPT on the workforce. While ChatGPT’s ability to
automate tasks and enhance productivity is beneficial for users, it raises concerns
about the displacement of human labour. The introduction of AI
chatbots in education, as highlighted by the studies, may reduce the cognitive load
on students and teachers. However, it is essential to consider the broader societal
implications and ensure that the implementation of AI technologies aligns with
principles of equity and fair distribution of opportunities. Heidegger’s philosophy
emphasises the concept of ‘being-in-the-world’, which suggests that our existence
and understanding of the world are interconnected. In the context of ChatGPT
reducing cognitive load, we can relate this idea to the notion that ChatGPT
functions as a tool or technology that enhances our ability to engage with the
world. Thus, ChatGPT can be seen as an extension of our cognitive capacities,
enabling us to access and process information more efficiently. It acts as a
mediator between our ‘Being’ and the world of knowledge, allowing us to navi-
gate complex topics and reduce the mental effort required to search for infor-
mation. Additionally, Heidegger’s concept of ‘readiness-to-hand’ comes into play
when considering ChatGPT’s role in reducing cognitive load. According to
Heidegger, tools become seamlessly integrated into our everyday existence when
they are ready-to-hand. In the context of ChatGPT, it becomes a ready-to-hand
technology that we can effortlessly use to acquire knowledge. However, it is
essential to be mindful of Heidegger’s concerns about technology’s potential to
distract us from our authentic understanding of the world. While ChatGPT can
Findings and Interpretation 115
ChatGPT by users exemplifies how the tool aligns with Bourdieu’s theory of
social capital and skill acquisition, serving as a platform through which users can
access a wide range of information and knowledge, leveraging their social capital
to explore various domains. Additionally, the process of interacting with
ChatGPT involves skill acquisition, as users develop the ability to navigate and
evaluate the information provided, further contributing to their understanding
and learning. Once again, Marx’s theory of social class and labour sheds light on
ChatGPT’s impact on work. Observations from the instructor and students show
ChatGPT’s effectiveness in tasks like modifying text and generating content. This
raises questions about its implications for traditional job roles and the division of
labour, potentially automating or augmenting tasks previously done by humans.
Heidegger’s theories underscore the transformative nature of technology and its
impact on revealing the world. As a tool, ChatGPT enables individuals to achieve
specific tasks and goals across different domains. In professional settings,
ChatGPT streamlines work processes by aiding in tasks like drafting emails,
generating reports and providing quick access to information, aligning with
Heidegger’s concept of ‘readiness-to-hand’. Similarly, in personal interactions, it
acts as an assistant for scheduling appointments, setting reminders, offering rec-
ommendations and becoming an extension of our capabilities, as per Heidegger’s
idea of tools becoming transparent mediums. Furthermore, ChatGPT’s creative
applications involve assisting in writing tasks, suggesting ideas and enhancing
language usage, which aligns with Heidegger’s ‘poetic dwelling’ approach,
fostering openness and deeper connection with the world through technology.
However, Heidegger’s cautionary note reminds us to reflect on technology’s
impact and its potential to disconnect us from authentic experiences. While
ChatGPT proves valuable, we must be mindful of its pervasive use and the
implications it holds for our relationship with the world.
Ability to Translate
Despite ChatGPT’s remarkable ability to translate between languages, there are
occasional downsides. For instance, machine translation models may encounter
challenges with gendered pronouns, resulting in mistranslations like using ‘it’
instead of ‘he’ and ‘she’, potentially leading to dehumanisation (Maslej et al.,
2023). However, despite these issues, the data also revealed many positives. The
researcher noted, ‘ChatGPT can translate interviews, surveys, etc., from one
language to another, saving me time in the research process’ (RFD). Similarly, the
instructor remarked, ‘As my students are all non-native speakers, being able to
translate the readings into Turkish first to grasp the main ideas, and then reading
again in English, helped reduce cognitive load, allowing them to focus more on
the content’ (TFD).
While the literature had limited information regarding ChatGPT’s translation abilities, Firaina and Sulisworo (2023) noted that respondents used ChatGPT to aid in translating scientific articles into English, which proved particularly beneficial for those with limited English proficiency.
learn punctuation rules. It was also useful when students input the same sentences
with different punctuation, and it told them the difference in meaning’ (TFD).
The instructor also highlighted ChatGPT’s ability to quickly provide students
with rules for the use of definite and indefinite articles and the use of ‘then’ in
narrative justifications when asked (TFD). Additionally, the instructor
mentioned, ‘Students used ChatGPT’s MadLib function to create vocabulary
quizzes for each other. This gave us the idea that students could use it to create
their own revision materials to use to revise the course concepts’ (TFD). This
positive feedback was reinforced by a student who stated, ‘I wanted ChatGPT to
prepare practice for me while preparing for my exams. These give me an
advantage for preparing for exams and assignments’ (SFD).
These findings are supported by Fauzi et al. (2023), who found that ChatGPT
was a valuable resource for students, offering useful information and resources,
retrieving relevant information from the internet, recommending books and
articles and assisting in refining grammar, expanding vocabulary and enhancing
writing style, all of which led to an overall improvement in academic work and
language skills. Neumann et al. (2023) also observed that ChatGPT could help
students prepare for assessments by generating specific source code and summarising literature, and that they could utilise it to generate relevant code snippets for their assignments or projects, contributing to their knowledge and understanding of software engineering concepts. Similarly, Zhai (2022) found ChatGPT
useful in composing an academic paper that only required minor adjustments for
organisation.
Through Christensen’s lens, we can see that students are hiring ChatGPT to
gather information, improve their ideas and arguments, enhance their writing and
learn more effectively. From a Bourdieusian perspective, ChatGPT enhances
users’ social and cultural capital regarding access to resources and opportunities.
From a Marxist viewpoint, students can use ChatGPT to enhance their productivity and efficiency in tasks such as writing, research and exam preparation,
thereby acting as a form of technological capital that empowers students to
accomplish their academic work more effectively, potentially reducing their
dependence on traditional labour-intensive approaches. However, it should be
noted that this technological capital is only available if there is equitable access to
the tool. Through a Heideggerian lens, ChatGPT redefines the relationship
between humans and technology in the educational context, by expanding the
possibilities of information retrieval, language refinement and knowledge generation. Through interactions with ChatGPT, students can engage in a new mode of
learning and communication that is mediated by technology. This interaction will
influence their perception and understanding of specific knowledge, skills and
concepts.
writing, providing feedback and enhancing language skills. This can potentially
enhance students’ learning experience and academic performance. However, there
are concerns about potential misuse and the impact on academic integrity. Students may be tempted to outsource their assignments to ChatGPT or use it to
bypass plagiarism detection. This raises questions about the authenticity of their
work and the development of critical thinking and writing skills.
Institutions and instructors will need to address these challenges and establish
responsible policies for the use of AI tools in education. ChatGPT can augment
the role of instructors by automating certain tasks and providing support in
reviewing and providing feedback on students’ work. It can save instructors’ time
by offering suggestions for improvement, detecting errors and helping with
language-related issues. However, there is a need for instructors to adapt to these
changes and find new ways to engage with students. The role of instructors may
shift towards facilitating discussions, guiding students in utilising AI tools effectively and designing assignments that cannot be easily outsourced or automated.
Instructors should also be aware of the limitations of AI tools and help students
develop critical thinking skills alongside their use of ChatGPT.
Institutions need to recognise the potential of AI tools like ChatGPT and their
impact on teaching and learning. They should provide digital literacy education
and training for faculty and students, update academic integrity policies and
support research on the effects of AI tools on learning and teaching. Additionally,
institutions should consider the implications for equitable access to educational
resources. While ChatGPT can provide valuable support, it also raises concerns
about the digital divide and disparities in access to technology. Institutions should
ensure that all students have equal opportunities to benefit from AI tools and take
steps to bridge any existing gaps.
In summary, ChatGPT’s versatile and practical nature in education enhances
the learning experience, offering personalised feedback and guidance to students.
However, concerns arise about its impact on labour dynamics, academic integrity
and societal ethics. To address these, responsible policies and digital literacy
training are essential.
initial phase of the course, the instructor asked students to assess ChatGPT’s
ability to complete in-class activities in their other courses. 57.1% said ChatGPT could complete such activities fully, while 28.6% said it could do so partially. The activities mentioned by students that could be done by ChatGPT
included article writing and answering questions. Similarly, students were asked
about ChatGPT’s potential to complete assignments or projects in their other
courses. 71.4% responded that it could do them completely, with 14.3% saying it
could partially complete them. One student stated, ‘Generally ChatGPT knows
everything. This is very dangerous for students because students generally choose
the easy way to work. If ChatGPT improves itself, students will use it a lot, and
that’s why when instructors give grades, you will use ChatGPT to get the points’
(SFD). Another said, ‘The possibility of students having their homework done by
ChatGPT will raise doubts in teachers, which may have consequences’ (SFD).
The students also recognised that the impact of ChatGPT depends on its usage.
One student remarked, ‘Actually, it is connected with your usage type. If you use
it to check your assignments and help, it helps you learn. But if you give the
assignment to ChatGPT, it skips the learning’ (SFD). They further commented,
‘It’s like taking it easy. It helped me lots with doing my homework, but I feel like
it reduced my thinking process and developing my own ideas’ (SFD). Another
student said, ‘Of course, it helped me a lot, but it also made me a little lazy, I
guess. But still, I think it should stay in our lives’ (SFD). A further student said, ‘It
certainly skips some part of the learning process. When I ask it for information, I
do it to shorten the time I spend researching. If I spent time researching by myself,
I think I would have more detailed information and would form more complex
ideas’ (SFD). According to the instructor, ChatGPT was good at generating ideas
for the final assessment, but there were caveats: ‘ChatGPT was excellent at
coming up with ideas for the final assessment following GRASPS and came up
with better ideas than my own. However, some of its ideas for assessment could
easily be done by ChatGPT itself. Therefore, these suggestions would need to be
rewritten to avoid this’ (TFD). The instructor also made observations about
ChatGPT’s ability to create rubrics: ‘Once an assessment has been written,
ChatGPT can easily come up with a suggested rubric for evaluation, but only if
the assessment task is written precisely. However, the weighting in the rubrics
should be adapted by the instructor to reflect the parts that ChatGPT can do and
the parts it can’t’ (TFD). Additionally, the instructor highlighted that ChatGPT
could provide suggestions for pre-class quizzes based on the text or video input;
however, they cautioned that if the cases used in the quizzes were present in
ChatGPT’s database, students might opt to use the AI for quizzes instead of
engaging with the assigned text or video (TFD). Furthermore, regarding in-class
activities, the instructor noted, ‘When students got ChatGPT to categorise
vocabulary under headings, they did it fast, but it skipped the learning process
aim of this activity. It did not help them to review the vocabulary. This may
therefore have implications for how I construct my vocabulary review activities in
the future’ (TFD). Issues with ChatGPT being able to complete student activities
were also raised in a workshop, where one teacher said, ‘I realised that ChatGPT
could do the lesson planning assignment for my (teacher candidate) students, so I
changed the weighting of the rubric to adapt to this’ (SYFD). Similarly, another
teacher said, ‘ChatGPT can easily find the answers with this activity, and students
would not need to do the reading’ (SYFD). A different teacher stated, ‘ChatGPT
does not enable a person to learn from the process. With this way, ChatGPT only
gives a result; it does not provide help for the process. As you know, learning
takes place within the process, not solely the result’ (SYFD).
So what did the literature have to say about this? In Mhlanga’s (2023) study,
instructors expressed concerns that ChatGPT may disrupt traditional assessment
methods like essays and make plagiarism detection more difficult. However,
Mhlanga stated that he believes this opens doors to innovative educational
practices and suggests that AI technologies like ChatGPT can be used to enhance
assessment procedures, teaching approaches, student participation, collaboration
and hands-on learning experiences, thus modernising the educational system.
Neumann et al. (2023) also explored ChatGPT’s competence in completing
assigned student tasks and its implications for the learning process. They highlighted various applications in software engineering, including assessment preparation, translation, source code generation, literature summarisation and text
paraphrasing. However, while they noted that ChatGPT could offer fresh ideas for lecture preparation and assignments, they stressed the need for further research and for transparency, to ensure students are aware of ChatGPT’s capabilities and limitations. They proposed integrating
ChatGPT into teaching activities, exploring specific use cases and adapting
guidelines, as well as potential integration into modern teaching approaches like
problem-based and flipped learning, with an emphasis on curriculum adjustments
and compliance with regulations. Rudolph et al. (2023) raised multiple concerns
regarding ChatGPT’s impact on students’ learning process and assessment
authenticity. They highlighted potential issues with students outsourcing written
assignments, which they believe could challenge traditional evaluation methods.
Additionally, they expressed worries about ChatGPT hindering active engagement and critical thinking skills due to its competence in completing tasks without
students fully engaging with the material. Tlili et al.’s (2023) study focused on
potential misuse of ChatGPT, such as facilitating cheating in tasks like essay
writing or exam answers. Effective detection and prevention of cheating were
highlighted as important considerations. Similarly, they raised concerns about the
impact of ChatGPT on students’ critical thinking skills, believing that excessive
reliance on ChatGPT may diminish students’ ability to think innovatively and
independently, potentially leading to a lack of deep understanding and
problem-solving skills. Due to these issues, Zhai (2022) proposed a re-evaluation
of literacy requirements in education, suggesting that the emphasis should shift
from the ability to generate accurate sentences to effectively utilising AI language
tools, believing that incorporating AI tools into subject-based learning tasks may
be a way to enhance students’ creativity and critical thinking. Zhai also suggested that this should be accompanied by a shift in assessment practices, focussing on critical thinking and creativity, and thus recommended exploring innovative assessment formats that effectively measure these skills.
Through Christensen’s lens, ChatGPT can be seen as a tool that students can
hire to accomplish specific jobs or tasks in their educational journey. However,
this raises concerns about the potential negative impact on active engagement,
independent knowledge acquisition, critical thinking and the overall learning
process. Therefore, there is a need for a balanced approach to avoid the drawbacks associated with its use. Through Bourdieu’s theory of social reproduction,
we can gain insights into the social and educational ramifications of ChatGPT.
Students’ concerns about the ease of relying on ChatGPT for completing
assignments and the potential consequences, including doubts from teachers and
reduced critical thinking, resonate with Bourdieu’s emphasis on the reproduction
of social structures. This highlights the possibility of ChatGPT perpetuating
educational inequality by offering shortcuts that hinder deeper learning and
critical engagement. Students’ comments about the impact of ChatGPT on the
learning process reflect elements of Marx’s theory of alienation. While ChatGPT
offers convenience and assistance in completing tasks, students expressed concerns about the reduction of their active involvement, thinking process and personal idea development. This detachment from the learning process can be seen as
a form of alienation, where students feel disconnected from the educational
experience and become dependent on an external tool to accomplish their tasks.
Heidegger’s perspective on technology as a means of revealing and shaping our
understanding of the world can also be applied here. ChatGPT is a technological
tool that transforms the educational landscape, revealing new possibilities by
generating ideas, providing assistance and automating certain tasks. However, the
concerns raised by students and instructors point to the potential danger of
technology shaping the learning process in ways that bypass essential aspects of
education, such as critical thinking, personal engagement and deep understanding. Once again, this highlights the need for a thoughtful and intentional integration of technology in education to ensure its alignment with educational goals.
detected by acoustic-phonetic analyses and made a list of the main points. They
then watched a video of Johnny Depp giving an award speech while drunk and
had to write examples of what he said next to the list of acoustic-phonetic factors
from the paper. Due to them having to listen to a video to do this, the activity was
ChatGPT-proof’ (TFD). Furthermore, the instructor commented, ‘Students
created their projects in any form they wished (poster, video, interview). They
used ChatGPT to review their work against their rubric. This could only be done
if their final project had text that could be input. If they used a different medium,
this was not possible’ (TFD). They also observed that, ‘ChatGPT was unable to
do an analysis of handwriting from two suicide notes related to Kurt Cobain’
(TFD).
From Christensen’s perspective, the limitations of ChatGPT can be seen as its
inability to fulfil specific jobs or tasks that users are hiring it to do, such as
providing visuals, detailed data on cases and primary source references. Users were also unable to hire it to assist with tasks that involved the use of different
media, such as answering questions related to videos or poster presentations.
These limitations hindered the users’ ability to accomplish their desired goals and
tasks effectively with the tool. From a Bourdieusian perspective, the limitations of
ChatGPT may reflect the unequal distribution of cultural capital among users.
The ability to effectively navigate and utilise ChatGPT’s capabilities, such as
cross-referencing information or critically assessing its outputs, is influenced by
the possession of cultural capital. Students who have been exposed to educational
resources and have developed the necessary skills may benefit more from using
ChatGPT, while those lacking cultural capital may struggle to fully utilise its
potential. This highlights the role of social inequalities and the reproduction of
advantage in educational settings. From a Marxist perspective, ChatGPT, as a
technological tool, may be seen as being shaped by the profit-driven logic of
capitalism. Its limitations may arise from cost considerations, efficiency requirements or the prioritisation of certain tasks over others. These limitations reflect
the broader dynamics of capitalist technology, where the pursuit of profit and
market demands may compromise the quality, accuracy and comprehensiveness
of the outputs. Regarding Heidegger’s theories on technology, the limitations of
ChatGPT reveal the essence of technology as an instrument or tool that has its
own limitations and cannot replace human capabilities fully. ChatGPT’s inability
to analyse handwriting or handle tasks that require human senses and context
demonstrates the importance of human presence, interpretation and understanding in certain educational contexts.
discourage them from engaging in the learning process and developing their own
ideas. Additionally, concerns about potential misuse, such as outsourcing
assignments or facilitating cheating, underscore the importance of maintaining
assessment authenticity and fostering critical thinking skills.
For instructors, integrating ChatGPT presents new challenges and considerations. While it can generate ideas, suggest rubrics and assist in various tasks,
careful adaptation of assessments is necessary to avoid redundancy and ensure
alignment with ChatGPT’s capabilities. The impact of ChatGPT on in-class
activities is also a concern, as it may bypass the learning process and hinder
effective teaching. To address this, instructors need to rethink their approach to
in-class activities and actively manage ChatGPT’s use to ensure students are
actively learning and not solely relying on the AI tool.
Furthermore, institutions of higher education must carefully consider the
broader implications of ChatGPT integration. This will involve re-evaluating
literacy requirements and assessment practices, with a focus on critical thinking
and creativity. The successful integration of ChatGPT will require transparency,
adaptation and AI-proofing activities. Institutions will also need to establish clear
policies on assessment and plagiarism detection. Balancing AI integration will be
essential in order to harness its benefits without undermining student learning
experiences. Therefore, institutions will need to provide proper training to
instructors to encourage and enable them to embrace new teaching approaches in
this AI-driven landscape.
Gaps in Knowledge
One significant caveat of ChatGPT is its reliance on pre-September-2021
knowledge, as it does not crawl the web like traditional search engines. This
was observed in the following instances. The instructor stated, ‘I asked my stu-
dents to ask ChatGPT about the implications for AI and court cases. After this, I
gave them some recent articles to read about the implications of AI for court cases
and asked them to make notes. They then compared their notes to ChatGPT’s
answers. The students felt the notes they had made about the implications were
more relevant than ChatGPT’s responses. This may have been because the articles
I provided them with had been published in late 2022 or early 2023, whereas
ChatGPT’s database only goes up to 2021’ (TFD). In a similar vein, the
researcher said, ‘I was interested in analysing some of my findings against the
PICRAT matrix that I was familiar with but has only recently been developed. I
asked ChatGPT about this. Three times it gave me incorrect information until I
challenged it, whereupon it eventually responded that it did not know about
PICRAT’ (RFD). Interestingly, the concept of gaps in knowledge did not emerge
prominently in the literature review; therefore, we turn to our theorists.
Through Christensen’s lens, ChatGPT’s limitations in knowledge can hinder
its ability to adequately serve the job of providing accurate and up-to-date
information. Users hiring ChatGPT for information-related tasks may find its
outdated knowledge base unsatisfactory in meeting their needs. This is therefore a
constraint on ChatGPT’s ability to effectively perform the job it is being hired for.
The reliance on outdated information may reflect the prioritisation of
cost-effectiveness and efficiency in the development of ChatGPT. Analysing
ChatGPT’s gaps in knowledge through a Heideggerian lens highlights the essence
of technology as a human creation and the limitations it inherits. ChatGPT, as a
technological tool, is bound by its programming and training data, which define
its knowledge base and capabilities. The gaps in knowledge arise from the
inherent limitations of the technology itself, which cannot transcend the boundaries of its design and training. This perspective prompts reflection on the
human–technology relationship and raises questions about the extent to which AI
systems can genuinely meet the needs of the complexities of human knowledge
and understanding.
ChatGPT falls short in meeting this specific job to be done. Bourdieu’s theory is
evident in the way students express concerns about the limitations of ChatGPT.
The possession of specialised knowledge in specific fields is seen as a form of
cultural capital. Students recognise that relying solely on ChatGPT for complex
judgments and analyses might not lead to desirable outcomes. Through a Marxist
lens, the limitations of ChatGPT in certain domains may perpetuate existing
social structures, wherein expertise and knowledge in these areas are valued and
rewarded. Reliance on AI systems like ChatGPT for complex tasks could also
potentially lead to the devaluation of human labour and expertise in these fields.
Through a Heideggerian lens, the limitations observed in ChatGPT’s understanding and domain knowledge are rooted in its programming and training data,
defining its capabilities. As a tool, ChatGPT can only operate within the
boundaries of its design and training, leading to shortfalls in meeting users’ needs.
discussing ChatGPT and similar technologies. We believe their concerns can also
be extrapolated to ChatGPT’s database.
When it comes to looking at this issue through the lens of Christensen, the
concerns raised by the students regarding the cultural specificity of ChatGPT’s
database highlight the potential mismatch between the job they are hiring it to do
and the capabilities of ChatGPT itself. This misalignment indicates the need for
improvements in addressing specific user needs and cultural contexts. From a
Bourdieusian viewpoint, the involvement of AI companies, such as OpenAI,
Microsoft and Google, primarily based in the United States, suggests a connection to American culture and a Western perspective. This cultural capital and
habitus shape the training and implementation of AI models, potentially encoding
biases and limitations into the technology. The concerns about the accuracy and
relevance of ChatGPT’s responses in different cultural contexts reflect the influ-
ence of cultural capital on the AI system’s performance. Through a Marxist lens,
the concentration of power in these companies, along with their Western cultural
context, may result in biased or limited representations of knowledge and perspectives. Furthermore, Heidegger’s views on technology prompt us to question
the very essence and impact of AI systems like ChatGPT. The concerns about
cultural specificity and resulting limitations raise existential questions about the
role and responsibility of AI in human activities. Moreover, the constraints posed
by ChatGPT’s database and potential biases call for critical reflection on the
essence of AI, its impact on human knowledge and decision-making and the
ethical considerations surrounding its development and use.
responsible use of AI in education, to ensure that their students are aware of the
limitations and biases associated with these technologies. Institutions should also
foster interdisciplinary collaborations and partnerships with industry to address
the disciplinary context limitations of ChatGPT, facilitating the development of
AI systems with domain-specific expertise. Additionally, the concerns raised
about cultural specificity and biases in ChatGPT’s database highlight the need for
institutions to promote cultural diversity and inclusivity in AI purchasing,
development and utilisation. By incorporating diverse cultural systems, perspectives and datasets, institutions can help mitigate the potential biases and limitations, ensuring that they better serve the needs of students from various cultural
backgrounds.
In this chapter, we have taken a deep dive into the influence of ChatGPT on
students, instructors and higher education institutions within the scope of our key
themes. Throughout our discussion, we have discerned the necessary actions that
universities should undertake. These encompass ethical considerations, such as
evaluating AI detection tools, critically assessing AI referencing systems, redefining plagiarism within the AI era, fostering expertise in AI ethics and bolstering
the role of university ethics committees. They also encompass product-related
matters, including ensuring equitable access to AI bots for all students, fostering
collaborations with industries, obtaining or developing specialised bots and
offering prompt engineering courses. Additionally, there are educational ramifications, like addressing AI’s impact on foundational learning, proposing flipped
learning as a strategy to navigate these challenges, reimagining curricula to align
with the AI-driven future, advocating for AI-resilient assessment approaches,
adapting instructional methods, harnessing the potential of prompt banks and
promoting AI literacy. Moving forward, in the next three chapters, we discuss the
practical implications of these findings, grouping them into ethical,
product-related and educational implications. Thus, while this chapter has outlined the essential steps that must be undertaken, the following three chapters
present pragmatic approaches for putting these actions into practice.
Chapter 7
Ethical Implications
highlighted the issue of cultural bias in AI algorithms and emphasised the need for
more inclusive AI development. As a result of being wrongfully accused, Stivers
actively collaborated with the university to enhance the software’s inclusivity and
accuracy. Her contribution aimed to foster a fairer approach to AI technology in
academic settings, ensuring that it better accommodates the diverse student
population and provides accurate results in detecting plagiarism (Klee, 2023).
Thus, the decision to use AI detection tools in universities is a topic of concern
and discussion. Kayla Jimenez (2023) of USA Today highlights the advice of
educational technology experts, cautioning educators about the rapidly evolving
nature of cheating detection software. Instead of immediately resorting to disciplinary action, experts suggest asking students to show their work before accusing
them of using AI for assignments. Neumann et al. (2023) support this approach
and recommend a combination of plagiarism checkers and AI detection tools,
with manual examination as a backup. They stress the importance of thorough
reference checks and identifying characteristics of AI-generated content. Rudolph
et al. (2023) also acknowledge the limitations of anti-plagiarism software in
detecting ChatGPT-generated text and propose building trusting relationships
with students and adopting student-centric pedagogies and assessments. They
discourage a policing approach and emphasise assessments for and as learning. At
MEF, we concur with this perspective. Based on our investigations, we believe
that current AI detection tools are not suitable for their intended purpose. Instead
of relying solely on such tools, we suggest implementing alternative supports for
assessing students’ work, such as one-to-one discussions or moving away from
written assessments altogether. We believe the solution is to ban AI detection
tools but not AI itself.
used and include the relevant portion of the text that ChatGPT generated in
response. However, they warn that it is important to note that the results of a
ChatGPT ‘chat’ cannot be retrieved by other readers and that, while in APA Style
papers, non-retrievable data or quotations are typically cited as personal communications, ChatGPT-generated text does not involve communication with a
person (McAdoo, 2023). Therefore, when quoting ChatGPT’s text from a chat
session, they point out that it is more akin to sharing the output of an algorithm.
They therefore suggest that in such cases, you should credit the author of the
algorithm with a reference list entry and the corresponding in-text citation. They
give the following example.
When prompted with “Is the left brain right brain divide real or a
metaphor?” the ChatGPT-generated text indicated that although
the two brain hemispheres are somewhat specialised, “the notation
that people can be characterised as ‘left-brained’ or ‘right-brained’
is considered to be an oversimplification and a popular myth”
(OpenAI, 2023).
Reference
OpenAI. (2023). ChatGPT (Mar 14 version) [Large language
model]. https://chat.openai.com/chat
They also suggest that in an APA Style paper, you have the option to include the full text of lengthy responses from ChatGPT in an appendix or online supplemental materials. They say this ensures that readers can access the precise text that was generated; however, they note that it is crucial to document the exact text, as ChatGPT will produce unique responses in different chat sessions, even with the same prompt (McAdoo, 2023). Therefore, they suggest that if you choose to
create appendices or supplemental materials, you should remember to reference
each of them at least once in the main body of your paper.
APA also suggests that when referencing ChatGPT or other AI models and
software, you can follow the guidelines provided in Section 10.10 of the Publi-
cation Manual (American Psychological Association, 2020, Chapter 10) (McA-
doo, 2023). They note that these guidelines are primarily designed for software
references and suggest these can be adapted to acknowledge the use of other large
language models, algorithms or similar software. They suggest that reference and
in-text citations for ChatGPT should follow the same format as the OpenAI reference shown earlier.
Now, let’s examine APA’s suggestions and critique them based on the
fundamental purpose of referencing sources in an academic paper: giving credit,
supporting arguments and claims, enabling verification of source accuracy and
demonstrating proper research skills. We can do this by posing questions.
conducting thorough research and citing relevant and reliable sources lies with
the researcher.
• Can ChatGPT be used to avoid unintentional plagiarism by citing sources and
giving credit where it is due?
No. While ChatGPT may provide responses based on the input it receives, it is
not equipped to identify or prevent unintentional plagiarism. Therefore, it is
the researcher’s responsibility to ensure that they properly cite and give credit
to the original sources of information to avoid plagiarism.
• Can ChatGPT be used to contribute to the academic community by citing
existing research and establishing connections between a researcher’s work and
the work of others in the field?
No. While ChatGPT can provide information based on the input it receives, it
is not capable of contributing to the academic community by citing existing
research or establishing connections between works. Therefore, researchers
should independently conduct literature reviews and cite relevant works to
contribute to the academic discourse.
The resounding answer to all the questions we posed above is a definitive ‘no’.
While we acknowledge APA’s well-intentioned efforts to address academic
integrity concerns by suggesting ways to cite ChatGPT, we find their recom-
mendations unfit for purpose. If the goal of referencing is to enable readers to
access and verify primary sources, APA’s suggestions do not align with this
objective. They merely indicate that ChatGPT was utilised, which demonstrates
the writer’s academic integrity but does not provide any practical value to the
reader. In fact, based on this, we believe that ChatGPT, in its current form,
should be likened to Wikipedia – a useful tool as a starting point for research, but
not to be used as a valid source for research. Therefore, we believe that to ensure
the validity of the research, ChatGPT should be seen as a springboard for
generating ideas, from which the researcher can then seek out primary sources to
support their ideas and writing. Hence, it would be more beneficial for researchers
to simply cite the sources they have fact-checked, as this approach provides
valuable information to the reader.
Now, let’s address our third area of concern, which revolves around ChatGPT
being used as a tool for idea development and writing enhancement. This raises
the question of whether a referencing system is applicable in such instances. To
shed light on this matter, we explore MLA’s suggestions on how to reference
ChatGPT when it serves as a writing tool. MLA suggests that you should: ‘cite a
generative AI tool whenever you paraphrase, quote, or incorporate into your own
work any content (whether text, image, data, or other) that was created by it;
acknowledge all functional uses of the tool (like editing your prose or translating
words) in a note, your text, or another suitable location; take care to vet the
secondary sources it cites’ (How Do I Cite Generative AI in MLA Style?, n.d.). In
our previous discussion, we have already addressed the third point. If you need to
verify the secondary sources cited by ChatGPT, why not simply use those vetted
sources in your citations and references, as this is more helpful for the reader?
However, we still need to explore the other aspects concerning the recommen-
dation to cite an AI generative tool. For instance, when paraphrasing or using the
tool for functional purposes like editing prose or translating words, how should
this be implemented in practice? In order to do this, let’s explore how ChatGPT
has been utilised in the writing of this book. Notably, ChatGPT was not used as a
search engine, as is evident from the majority of our referenced articles and papers
being published after September 2021, which is ChatGPT’s cutoff for new
information. However, it played a significant role in the research process, as
documented in the researcher-facing diary and integrated into the write up of this
book. While we’ve already discussed its role in the research methodology and
through examples in the findings and interpretation chapter, we now focus spe-
cifically on how ChatGPT contributed to the writing process of this book. To
illustrate the full scope of its assistance, we revisit Bloom’s Taxonomy, which
provides a useful framework for mapping the most commonly used phrases we
employed with ChatGPT during the writing phase.
• Analysing
– Evaluate the strengths and weaknesses of this theory and propose ways to
reinforce its main points.
– Analyse this text through the lens of (this theorist).
– Assess the effectiveness of this argument and suggest improvements to make
it more impactful.
• Evaluating
– Critically assess the clarity of this text and rephrase it for better
comprehension.
– Evaluate the impact of this section and propose a shorter version that retains
its persuasive strength.
• Creating
– Provide a more concise version of this text while retaining its core meaning.
– Summarise this chapter in a concise manner while retaining its key findings.
Have we referenced all instances of the examples above? No. And there are
reasons for this. As discussed in the findings, it’s crucial to go through multiple
iterations when using ChatGPT. This raises the question of whether we should
reference all the iterations or only the final one. Additionally, ChatGPT was a
constant tool throughout the writing of this book. If we were to reference every
instance, following MLA’s suggestion, the book would likely become five times
longer and mostly consist of references, which would not be beneficial to the
reader. Considering that one of the purposes of referencing is to aid the reader,
MLA’s suggestions seem unsuitable for this purpose. Indeed, referencing every
instance of ChatGPT use would be akin to a mathematician citing each time they
used a calculator, rendering such referencing impractical. Similarly, other
writing tools like Grammarly have not been subject to such exhaustive referencing
expectations. Following on from our mathematics example, it should be noted
that AI chatbots, including ChatGPT, have been likened to calculators for words.
However, we find this view a little simplistic. Unlike calculators, AI chatbots have
advanced capabilities that extend beyond basic tasks, reaching higher levels of
Bloom’s Taxonomy, such as applying, analysing, evaluating and creating, thereby
fulfilling tasks that are usually considered to be part and parcel of what it means to
be a writer. This leads us to ask, what does it mean to be a writer in the days of
AI?
In the era of AI, the role of a writer takes on a whole new dimension, with AI
models now capable of performing tasks that were traditionally considered the
sole domain of human writers. This blurs the lines between human creativity and
AI assistance, raising concerns about potential loss of human agency in the
writing process, as evidenced by the Hollywood scriptwriters’ strike, which also
highlights the risk of significant job losses. One of the key challenges of relying
solely on AI for writing is its heavy dependence on previous input, which can
stifle new thoughts, developments and creativity. To avoid these issues, we
believe being a writer in the AI era requires embracing a collaborative approach
between human intellect and AI technology. Instead of replacing human writers,
AI can be harnessed as a supportive tool. Writers now have the opportunity to
utilise AI tools to enhance various aspects of the writing process, such as idea
generation, content organisation and language refinement. By offloading repeti-
tive and time-consuming tasks to AI, writers can dedicate more attention to
crafting compelling narratives, conducting in-depth analyses and expressing
unique perspectives. They should also actively maintain their critical thinking
abilities and originality, ensuring that AI assistance complements and augments
their creative expression, rather than replacing it. We believe that, ultimately,
being a writer in the AI era involves striking a balance between leveraging the
opportunities AI technology provides and preserving the essential human aspects
of creativity and originality in the writing process. This is exactly what we have
done in this book. However, finding this equilibrium between human writers and
AI remains a significant challenge and will shape the future landscape of writing
in ways that are yet to be fully realised.
142 The Impact of ChatGPT on Higher Education
advocacy and activism, championing ethically sound AI practices and
regulations, while actively opposing harmful AI implementations. As AI technologies
continue to evolve and permeate various domains, cultivating AI ethics literacy
grows increasingly crucial. It serves as a conduit to ensure AI technologies are
wielded in an ethical and responsible manner, upholding human rights while
advocating for fairness and transparency. We delve deeper into this topic later in
Chapter 9, where we discuss the importance of AI literacy training for both
students and educators. However, the most logical starting point to address these
concerns is likely through established university ethics committees.
Product Implications
Bard is multilingual and has the ability to include images in its responses. Despite
these features, Bard faced criticism upon release. Users found it sometimes gave
incorrect information and was not as good as competitors like ChatGPT and Bing
Chat. To address these issues, Google shifted from LaMDA to PaLM 2. PaLM 2
is an improved version of Google’s language model, built upon the lessons learnt
from earlier models like LaMDA. It incorporates advancements in training
techniques and model architecture, leading to better overall performance in
understanding and generating language. We have now enabled Google Bard at
MEF, making it available to all of our students and instructors, and, at the time of
writing, we are evaluating it as a possible solution.
In our ongoing efforts to secure institutional agreements with major large
language model companies, and while we are trialling the effectiveness of Bard,
it’s important to acknowledge that if this endeavour does not come to fruition by
the start of the upcoming academic year, a contingency plan will be activated. In
this scenario, instructors could launch a survey at the beginning of a course to
identify students who have registered with AI chatbots and tools and are willing
to share them with peers. By grouping students with tool access alongside those
without, the principle of equitable classroom utilisation would be upheld. This
approach carries further benefits. If our educational focus is to foster collabora-
tive engagement within student-centred classes, encouraging students to share
tools would circumvent the isolation that arises when each student interacts
individually with their bot. Instead, this practice of shared tool usage would
promote collective involvement and cooperative learning. It is also worth keeping
in mind there will always be open-source alternatives available. These currently
include OpenAI’s GPT-2, BERT, T5, XLNet and RoBERTa.
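The grouping step in the contingency plan above can be sketched in a few lines of Python. This is a hypothetical helper (the function name, sample names and group size are our own illustration, not part of any actual MEF system): it seeds each group with at least one student who reported having tool access, then fills the remaining seats.

```python
def form_groups(with_access, without_access, group_size=4):
    """Distribute students who reported AI-tool access across groups,
    so each group is seeded with at least one tool-holding student."""
    total = len(with_access) + len(without_access)
    # Never create more groups than there are tool-holding students.
    n_groups = max(1, min(len(with_access), total // group_size))
    groups = [[] for _ in range(n_groups)]
    for i, student in enumerate(with_access):      # seed round-robin
        groups[i % n_groups].append(student)
    for i, student in enumerate(without_access):   # fill remaining seats
        groups[i % n_groups].append(student)
    return groups
```

Run against the results of a start-of-course survey, this would yield groups in which students without their own accounts always sit with a peer who can share a tool, supporting the collaborative, student-centred use described above.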
their research, by identifying the types of AI bots industries are currently using,
universities can make informed decisions on whether to purchase or develop
discipline-specific bots for their departments. This will ensure that graduates are
equipped with the specialised knowledge and skills of the AI bots relevant to their
chosen fields, better preparing them for the AI-driven job market. However, it
should be noted that adopting this reverse engineering approach needs to be an
ongoing effort. Universities must continuously assess industry trends, collaborate
closely with industry partners and engage AI experts to ensure their programmes
and their AI tools remain up to date and responsive to technological
advancements.
even when budget constraints exist. The key is to foster a collaborative approach
that aligns with the institution’s values and goals, ultimately enhancing the
learning experience for all students.
University (White, 2023), providing access to all our students through our
MOOC-based programme offerings.
In this chapter, we have extensively examined critical dimensions of integrating
AI chatbots in education. This exploration encompassed the imperative of
ensuring fair access to these bots, the collaborative efforts universities should
engage in with industries to comprehend the skills and tools required of graduates,
the strategic decision-making regarding the acquisition or development of
specialised AI bots and the significance of providing prompt engineering courses for
students. Looking ahead, the next chapter dives deeper into the educational
consequences stemming from the integration of AI chatbots.
Chapter 9
Educational Implications
journey. This prompts the question: what implications arise if such a scenario
unfolds?
The implications of losing foundational learning are significant and
far-reaching. This formative phase forms the bedrock for acquiring advanced
knowledge and skills, and its absence can reverberate across various aspects of
students’ academic journey and future prospects. For instance, foundational
learning provides the fundamental principles necessary for understanding com-
plex subjects. Without a robust foundation, students may struggle to comprehend
advanced concepts, leading to a surface-level grasp of subjects. Higher-level
courses typically build upon foundational knowledge; lacking this grounding
can impede success in higher education and overwhelm students with coursework.
Furthermore, foundational learning nurtures critical thinking, analytical skills
and effective problem-solving. A dearth of exposure to this phase might hinder
students’ ability to analyse information, make informed decisions and effectively
tackle intricate issues. A solid foundation also promotes adaptability to new
information and changing contexts, which becomes challenging without this
grounding. Furthermore, most professional roles require a firm grasp of foun-
dational concepts. Without this understanding, students might encounter diffi-
culties during job interviews, work tasks and career advancement. In addition,
over-reliance on AI tools like ChatGPT may hinder independent and critical
thinking, ultimately suppressing creativity, problem-solving and originality.
Language development, communication skills and coherent expression of ideas
are also nurtured during foundational learning. These skills are essential for
effective communication in both written and spoken forms. An absence of
foundational learning could lead to a widening knowledge gap that erodes con-
fidence and motivation to learn. Foundational learning also cultivates research
skills and the ability to gather credible information. Students without these skills
might struggle to locate and evaluate reliable sources independently. Beyond
academics, education contributes to personal growth, intellectual curiosity and a
well-rounded perspective. The lack of foundational learning may deprive students
of these holistic experiences. To prevent these adverse outcomes, the prioritisation
of robust foundational learning is crucial. This underscores the significance of
creating curricula, assessments and instruction that are resilient to the influence of
AI. But how can we do this?
(2023). They believe taking this approach will enhance feedback and revision
opportunities, which will support foundational learning. Rudolph et al. also note
that ChatGPT can support experiential learning, which is a key aspect of flipped
learning. They suggest that students should explore diverse problem-solving
approaches through game-based learning and student-centred pedagogies using
ChatGPT (2023). Additionally, Rudolph et al. highlight ChatGPT’s potential to
promote collaboration and teamwork, another aspect inherent in flipped learning.
They recommend incorporating group activities in which ChatGPT generates
scenarios encouraging collaborative problem-solving, as this approach will foster
a sense of community and mutual support among students. Therefore, instead of
seeing ChatGPT as disruptive, Rudolph et al. emphasise its potential to transform
education, but that this should take place through contemporary teaching
methods, such as flipped learning (2023). Thus, based on our experience, and
supported by the literature, we believe flipped learning provides a useful starting
point for the creation of curricula, assessments and instruction that are resilient to
the influence of AI.
In the research context section of our research methodology chapter, we pre-
sented the recommended stages for instructors at MEF to prepare their flipped
courses. This involves starting with understanding by design (UbD) and inte-
grating Bloom’s taxonomy, Assessment For, As, and Of Learning and Gagne’s
Nine Events of Instruction reordered for flipped learning. In that section, we
described how, through combining these frameworks, we can establish cohesion
between curriculum, assessment and instruction, resulting in effective learning.
But what happens when AI is involved? In UbD, instructors follow three stages:
Stage 1 – identify desired results (curriculum); Stage 2 – determine acceptable
evidence (assessment) and Stage 3 – create the learning plans (instruction).
Therefore, in addressing how to make teaching and learning AI-resilient, we go
through each of these stages, putting forward questions which can be asked at
each stage to guide the decision-making process in how and when AI should be
integrated.
evidence to determine what students know and can do. This involves instructors
asking: How will we know if students have achieved the desired results? What will
we accept as evidence of student understanding and their ability to use (transfer)
their learning in new situations? And how will we evaluate student performance in
fair and consistent ways? (Wiggins & McTighe, 1998). In Stage 2, there are two
main types of assessment: performance tasks and other evidence. Performance
tasks ask students to use what they have learnt in new and real situations to see if
they really understand and can use their learning. These tasks are not for everyday
lessons; they are like final assessments for a unit or a course and should include all
three elements of AoL, AfL and AaL throughout the semester while following the
Goal, Role, Audience, Situation, Product-Performance-Purpose, Standards
(GRASPS) mnemonic. Alongside performance tasks, Stage 2 includes other evi-
dence, such as quizzes, tests, observations and work samples (AfL) and reflections
(AaL) to find out what students know and can do. While we observed that some
issues may arise in Stage 1 regarding learning outcomes in relation to AI’s abil-
ities, in Stage 2 we start to see more concerning issues. Let’s begin by examining
end-of-course performance tasks.
For this task, you will take on the role of either the defence or
prosecution, with the goal of getting the defendant acquitted or
convicted in one of the cases. Your audience will be the judge and
jury. The situation entails making a closing argument at the end of
a trial. As the product/performance, you are required to create a
closing argument presented both in writing and as a recorded
speech. The standards for assessment include: providing a review
of the case, a review of the evidence, stories and analogies,
arguments to get the jury on your client’s side, arguments
attacking the opposition’s position, concluding comments to
summarise your argument, and visual evidence from the case.
In the rubric that the instructor created for this assessment, each of the criteria
was evenly weighted. To see how ChatGPT-resilient this original assessment was,
as described in the research methodology chapter, the instructor copied and
pasted the rubric into ChatGPT, in relation to specific cases, to see what it could
do. What came out was astounding. ChatGPT swiftly generated the majority of
the speech for each of the cases, including nearly all of the aspects required in the
rubric. However, she observed that its weakness lay in providing detailed evidence
related to forensic linguistics and, while it couldn’t create specific visuals per-
taining to the case, it could make suggestions for images. While this initially
seemed to render much of the existing assessment redundant, the instructor
realised the exciting potential of ChatGPT as a useful tool for students’ future
careers. She therefore decided to retain ChatGPT as a feature in the assessment
but needed to address the fact that it could handle the majority of the task. To do
this, the instructor adapted the rubric by adjusting the weighting, giving more
importance to the parts ChatGPT could not handle and reducing its weighting in
areas where it excelled. This involved assigning greater weight to the review of
evidence, including emphasising the importance of referencing primary sources
rather than solely relying on ChatGPT, as well as increasing the weighting for the
provision of visual evidence. She also realised that, in relation to the learning
outcome ‘Compose a mock closing argument on a specific aspect of language in a
real-life case and justify your argument’, she was relying on written evidence for
students to justify their argument instead of a more real-life scenario whereby they
would verbally have to justify their argument in a live setting. Therefore, she
decided to add a question and answer session after the videos were presented for
which students would be evaluated on both the questions they asked of other
students and their ability to answer the questions posed to them. This was also
given a much higher weighting than the parts that ChatGPT was able to do. In
reflecting on the outcomes of the redesigned assessment/rubric with the Spring
2022–2023 class, the instructor was pleased with the assessments the students
produced. However, a recurring observation was that most students defaulted to
directly reading from their prepared scripts during the video presentations – a
tactic that would not translate well to real-world scenarios. Consequently, the
instructor has planned to conduct live (synchronous, online) presentations in the
subsequent semester to bolster the students’ public speaking skills and remove the
option of reading directly from a script.
Pre-class Quizzes
In the preceding section, we explored the design of end-of-course performance
tasks in the context of AI. However, Stage 2 of UbD planning also involves the
process of determining other evidence to assess students’ learning. Within the
framework of flipped learning, a significant aspect of this involves pre-class
quizzes (AfL). Therefore, we briefly revisit the steps for implementing this pro-
cess here. During the pre-class or online phase, each unit commences with an
overview and introduction of key terms. Students then engage in a prior knowledge quiz.
Assessment As Learning
In addition to planning for the end-of-course performance task and pre-class
quizzes, Stage 2 of UbD also encompasses planning for assessment as learning.
Therefore, let’s briefly revisit what this entails. AaL in education focuses on
will prepare our graduates for the challenges of the modern world, whereas
neglecting adaptation could leave them unprepared for a rapidly changing world.
Interestingly, education experts have been advocating for this for years, and we
believe ChatGPT might just be the push needed to make this change. However,
we would be remiss if we did not acknowledge that the implementation of these
changes often lags behind in university entrance exams, accrediting bodies and
higher education ministries. Therefore, we contend that universities have a vital
role to play in assuming leadership to advocate for these reforms, ensuring that we
collectively empower our students for triumph in a world dominated by AI.
Structured/Semi-structured Activities
In the context of flipped learning, the primary objective of in-class activities is to
allow students to apply the knowledge gained from pre-class materials. Max-
imising the effectiveness of this process entails the careful implementation of
scaffolded in-class activities. Scaffolding in pedagogy involves furnishing learners
with temporary assistance, guidance and support while they engage in learning
tasks or exercises. The overarching aim is to facilitate the gradual development of
students’ skills and comprehension, equipping them to independently tackle tasks
while progressively reducing the level of assistance as their competence and
confidence expand. Consequently, the most optimal approach to orchestrating
in-class activities follows a sequence: initiating with structured activities,
advancing to semi-structured tasks and ultimately culminating in freer activities.
Based on the insights gained from our exploratory case study, it becomes
evident that the stages involving structured and semi-structured activities are
where ChatGPT can pose the greatest hindrance to effective learning.
Consequently, it holds immense importance for instructors to try out their structured
and semi-structured activities in ChatGPT beforehand.
• SWOT Analysis
In one class session, students were assigned the task of conducting a SWOT
analysis on the impact of ChatGPT on the legal industry. However, ChatGPT’s
ability to swiftly generate a SWOT analysis chart posed a challenge, as students
did not need to engage in critical thinking to get a result. To address this, the
instructor employed the following approach. Firstly, students individually
completed a SWOT analysis without relying on ChatGPT. They then shared
their findings with peers and consolidated their insights into a unified chart.
Secondly, students were provided with up-to-date videos and readings discus-
sing ChatGPT’s impact on the legal industry, which were not present in
ChatGPT’s database. Using these new resources, students refined their charts.
Only after this stage did they consult ChatGPT to create a SWOT analysis
chart. Comparing their own chart with ChatGPT’s, they sought additional
ideas and evaluated ChatGPT’s chart against their current readings, pin-
pointing any outdated information and thus critiquing ChatGPT’s limitations.
This led to a discussion on ChatGPT’s constraints. The interactive process
enhanced students’ critical thinking and extended their learning beyond
ChatGPT’s capabilities. This was further reinforced through role-playing sce-
narios, where students assumed various roles like law firm partners, discussing
ChatGPT’s potential impact on their business. This role-playing exercise
introduced complexity and context, augmenting the SWOT analysis with
nuances beyond ChatGPT’s scope. By structuring the SWOT analysis process
in a way that went beyond ChatGPT simply producing the chart, the instructor
managed to ensure that the students derived valuable insights and skills that
ChatGPT could not easily replicate.
• SPRE Reports
In the original course, the students had been tasked with writing a situation,
problem, response, evaluation report (SPRE) to summarise each case. How-
ever, if the cases were in ChatGPT’s database, ChatGPT could do this
instantly, thereby bypassing the learning process. Therefore, the instructor took
the following approach. First, the students used ChatGPT to create a SPRE
report of the case. Then the instructor provided the students with a set of
detailed questions to guide students through each component of the SPRE
analysis that ChatGPT had produced. This encouraged the students to critique
ChatGPT’s output and to add any information that was missing, thus fostering
deeper analysis and interpretation. Where possible, the instructor provided the
students with a similar case based on the same forensic linguistics point (e.g.,
emoji) that was recent, and therefore not in ChatGPT’s database. However,
this involved scouring the internet for relevant recent cases and was not always
possible. The students then created a SPRE report for the new case and
compared the two cases to see if any changes in decisions or law were made
between the two. This required them to identify patterns, contrasts and trends
that involved higher-order thinking. The students then worked in groups,
imagining they were either the prosecution or defence for the original case and
created short notes about the forensic linguistic points from the case. They were
then mixed and conducted role plays in which they argued for or against the
linguistic point in question. This added complexity and depth to the original
SPRE analysis, making it more robust than what ChatGPT could generate
alone.
So, based on these examples, what have we learnt about how to make structured
or semi-structured in-class activities ChatGPT-enhanced or ChatGPT-resilient? We
propose that instructors do the following:
activities. And this is where we believe AI chatbots like ChatGPT can really be
used effectively to enhance learning.
Freer Activities
Let’s begin by understanding the concept of freer activities and their significance.
These activities encourage students to creatively and independently apply their
learning, cultivating higher-order thinking and problem-solving skills. They
encompass tasks like open-ended prompts, debates, projects, role-playing and
real-world scenarios, granting students authentic opportunities to express themselves. The
objectives encompass practical knowledge application, critical thinking, crea-
tivity, effective communication, language fluency, autonomy, real-world appli-
cability and heightened engagement. Ultimately, these activities empower
students to become confident, active learners proficient in navigating diverse
challenges and contexts. In light of our previous discussion, it’s worth noting that
ChatGPT may be capable of performing many parts of these activities. And
while, as we have seen, it can serve as a tool to enhance learning, we believe that
with strategic utilisation, it holds even more potential; the potential to truly
transform the learning experience. With this in mind, let’s take a look at two
examples from our case study to illustrate this.
One of the students on the Forensic Linguistics course had been accepted on an
Erasmus programme at a Polish law school for the upcoming semester. For his
final project, he decided to use ChatGPT to prepare for his trip, and then shared
his insights during the final presentations. His aims were diverse: learning about
the university, his courses, the town and local culture to be well-prepared.
ChatGPT proved extremely useful in assisting with this. However, what truly
stood out was his innovative use of ChatGPT for language learning. Wanting to
learn basic Polish phrases, he sought advice and practised conversations with
ChatGPT. This proved highly useful for his learning, as ChatGPT served as a free
and easily accessible Polish conversation partner – a distinct advantage
considering the challenge of finding such practice partners in Istanbul. He
described this experience as significantly improving his ability to learn some
Polish before his visit. This was one example of how ChatGPT was used to
genuinely transform learning. However, the principal investigator herself had
found a similar use
during the analysis part of the research process. During this investigation, the
researcher referred to the insights of the four theorists to create a theoretical
framework for analysing the findings. Even though the researcher already had a
good grounding in these theories, she wanted to enhance the analysis stage. To do
this, she created custom personas for each theorist using Forefront AI (Forefront
AI, n.d.). Having developed these personalised chatbots, she used them to have
discussions about her evolving analysis, somewhat akin to conversing with the
actual theorists themselves. This had a transformative impact, pushing her
thinking beyond what she could have achieved alone. While it might have been
possible to do this without the support of AI chatbots, it would have been difficult
and time-consuming to find peers with the time and expertise to engage in such
discussions.
172 The Impact of ChatGPT on Higher Education
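The persona technique described above does not depend on any single platform. As a rough, hypothetical sketch (the persona wording, function name and message schema below are our own assumptions, not the researcher’s actual Forefront AI configuration), a theorist persona for any chat-style model can be expressed as a system message that frames every subsequent exchange:

```python
# Hypothetical sketch of a "theorist persona": a system message frames the
# conversation, and each analysis question is sent as a user turn. The
# schema mirrors common chat-completion APIs; no real API call is made here.

def build_persona_messages(theorist: str, stance: str, question: str) -> list:
    """Assemble a chat-style message list for a custom theorist persona."""
    system_prompt = (
        f"You are {theorist}. Answer in the first person, drawing only on "
        f"the ideas associated with your work: {stance} "
        "Where your theory would challenge the researcher's interpretation, say so."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": question},
    ]

messages = build_persona_messages(
    theorist="a critical theorist",
    stance="power structures shape how technologies are adopted in institutions.",
    question="Does unequal access to ChatGPT among students fit your framework?",
)
print(messages[0]["role"])  # -> system
```

In practice the same system message would be resent with every query, which is what keeps the chatbot “in character” across a long analytical discussion.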
With access to a prompt bank, users can gain insights into the type of input that
yields better outcomes and enhances their overall experience with the AI model. For
instance, a prompt bank for ChatGPT might include sample prompts for seeking
information, creative writing, problem-solving, language translation and more.
Users can refer to these examples and adapt them to their specific needs, enabling
them to get the desired responses from ChatGPT more efficiently. By utilising a
prompt bank, users can feel more confident in their interactions with ChatGPT
and improve the quality of the AI’s output by providing clear and contextually
relevant input. It serves as a valuable resource for users to explore the capabilities
of the language model and maximise the benefits of using ChatGPT in various
tasks and applications.
While we are still in the process of developing user prompt banks at our
university, we offer some examples below. In drawing up these prompts, once
again we have drawn upon Bloom’s taxonomy. By working through the taxonomy,
the user can start with lower-level knowledge questions and gradually move to
higher-level analysis, which can lead to more meaningful and insightful responses.
We break our prompts down into two groups: initial prompts and modifying
prompts. Below, we provide examples of initial prompts following Bloom’s
taxonomy.
Knowledge:
Define the term ______.
List the main characteristics of ______.
Name the key components of ______.
Comprehension:
Explain how ______ works.
Summarise the main ideas of ______.
Describe the process of ______.
Application:
Use ______ to solve this problem.
Apply the concept of ______ to a real-life scenario.
Demonstrate how to use ______ in a practical situation.
Analysis:
Break down ______ into its constituent parts.
Compare and contrast the differences between ______ and ______.
Identify the cause-and-effect relationships in ______.
Synthesis:
Create a new design or solution for ______.
Educational Implications 175
Compose a piece of writing that integrates ideas from ______ and ______.
Develop a plan to improve ______ based on the data provided.
Evaluation:
Assess the effectiveness of ______ in achieving its objectives.
Judge the validity of the argument presented in ______.
Critique the strengths and weaknesses of ______.
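One minimal way to organise such a bank in practice (the structure and names here are our own sketch, not a prescribed format) is as a mapping from Bloom level to templates, with the blanks in the lists above becoming placeholders:

```python
# Sketch of a Bloom-based prompt bank: templates keyed by taxonomy level,
# with {topic} standing in for the blanks in the printed lists above.
PROMPT_BANK = {
    "knowledge": ["Define the term {topic}.",
                  "List the main characteristics of {topic}."],
    "comprehension": ["Explain how {topic} works.",
                      "Summarise the main ideas of {topic}."],
    "application": ["Apply the concept of {topic} to a real-life scenario."],
    "analysis": ["Break down {topic} into its constituent parts."],
    "synthesis": ["Create a new design or solution for {topic}."],
    "evaluation": ["Assess the effectiveness of {topic} in achieving its objectives."],
}

def make_prompt(level: str, topic: str, index: int = 0) -> str:
    """Fill the chosen template with a concrete topic."""
    return PROMPT_BANK[level][index].format(topic=topic)

print(make_prompt("comprehension", "flipped learning"))
# -> Explain how flipped learning works.
```

A departmental bank would simply extend the same structure with discipline-specific levels, topics and templates.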
Similarly, different prompts can be used for each of the four domains of
knowledge. This is useful when aiming to enhance learning and understanding in
various subjects or disciplines. Examples include the following:
Metacognitive Knowledge:
Strategic knowledge:
Explain how you would approach solving a complex problem in [domain/subject].
Self-knowledge:
Explain how I can adapt my study strategies based on [personal learning preferences].
Procedural Knowledge:
Knowledge of subject-specific skills and algorithms:
Demonstrate the steps to solve [specific problem] in [domain/subject].
Explain the algorithm used in [specific process] in [domain/subject].
Conceptual Knowledge:
Knowledge of classifications and categories:
Categorise different types of [specific elements] in [domain/subject].
Explain the classification of organisms based on their characteristics in
[domain/subject].
Factual Knowledge:
Knowledge of terminology:
Define the following terms in [domain/subject]: [term 1], [term 2], [term 3].
Provide a list of essential vocabulary related to [specific topic] in [domain/subject].
The suggestions above are for initial prompts. However, for modifications and
iterations of ChatGPT’s output, we suggest the following prompts:
Comprehension:
Clarification Prompt: Can you please provide more details about [topic]?
Expansion Prompt: Can you elaborate on [idea or concept]?
Application:
Correction Prompt: Actually, [fact or information] is not accurate. The
correct information is [correction].
Rephrasing Prompt: Can you rephrase [sentence or paragraph] using
simpler language?
Synthesis:
Creative Input Prompt: Imagine a scenario where [situation] happens.
Describe what would occur next.
Alternative Perspective Prompt: Consider the opposite viewpoint of [idea or
argument].
Analysis:
Comparative Analysis Prompt: Compare and contrast [two concepts,
products or solutions].
Evaluation:
In-depth Explanation Prompt: Provide a more detailed analysis of [specific
aspect or topic].
Summary and Conclusion Prompt: Summarise the key points of your
response in a few sentences.
Continuation Prompt: Please build upon your previous response and
explore [next aspect or question].
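Modifying prompts work because each one is appended to the same conversation, so the model sees the full history when refining its earlier answer. A minimal sketch of that accumulation (the helper below is illustrative, and the assistant reply is a placeholder rather than a real API response):

```python
# Sketch of iterative prompting: the initial prompt, the model's reply and
# each modifying prompt accumulate in one message history, so later turns
# can refine earlier output. No real chat API is called here.
def add_turn(history, role, content):
    """Append one conversational turn to the running history."""
    history.append({"role": role, "content": content})
    return history

conversation = []
add_turn(conversation, "user", "Explain how flipped learning works.")       # initial prompt
add_turn(conversation, "assistant", "(model's first response)")             # placeholder reply
add_turn(conversation, "user",
         "Can you rephrase your answer using simpler language?")            # modifying prompt

print([turn["role"] for turn in conversation])
# -> ['user', 'assistant', 'user']
```

Starting a fresh conversation instead of appending would discard this context, which is why modifying prompts belong in the same chat as the initial prompt.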
While the examples we have given above are generic and can be used across all
disciplines, we believe that the development of discipline-specific prompt banks
will be more effective. As a result, one of the initiatives planned at MEF for the
upcoming academic year is to have each department create their own prompt
banks, customised to their specific disciplines and unique needs. This approach
aims to enhance students’ experiences by offering prompts that align closely with
their academic areas, ensuring more relevant and tailored interactions with
ChatGPT. However, there is an alternative option: individuals can craft their own
personalised prompt banks. Indeed, this is precisely the approach adopted by the
authors during the book-writing process.
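For readers who want to try this themselves, a personal prompt bank need not be elaborate. As a sketch (the file name and tag scheme below are arbitrary choices of ours, not a tool the authors used), it can be a small JSON file that accumulates prompts that have worked well:

```python
import json
from pathlib import Path

# Sketch of a personal prompt bank kept as a JSON file: prompts that
# produced good responses are saved under a tag so they can be found,
# reused and refined over time. The file name is an arbitrary example.
def save_prompt(path, tag, prompt):
    bank = json.loads(path.read_text()) if path.exists() else {}
    bank.setdefault(tag, []).append(prompt)
    path.write_text(json.dumps(bank, indent=2))

def load_prompts(path, tag):
    if not path.exists():
        return []
    return json.loads(path.read_text()).get(tag, [])

bank_file = Path("my_prompt_bank.json")
save_prompt(bank_file, "literature-review",
            "Summarise the main ideas of {source} in three bullet points.")
print(load_prompts(bank_file, "literature-review")[-1])
```

Curating the file by hand – pruning prompts that underperform, annotating ones that work – is itself part of the reflective practice described above.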
Creating a personal prompt bank offers numerous advantages to users within
AI-driven education and learning contexts. Through the creation and curation of
their own prompts, users can tailor their learning experiences to align with their
unique goals, interests and areas of focus. This personalised approach not only
fosters a deeper sense of engagement but also allows for a more meaningful and
relevant interaction with AI systems. One of the key benefits is the opportunity for
users to customise their learning journey. By selecting prompts that cater to their
specific learning needs, they can address areas of confusion, challenge themselves
and explore subjects in greater depth. The act of curating a personal prompt bank
can itself be a motivational endeavour, as users become actively invested in
shaping their learning content. Furthermore, a personal prompt bank serves as a
dynamic tool for ongoing learning and practice. Users can revisit prompts related
to challenging concepts, reinforcing their understanding over time. As they
interact with AI systems using their curated prompts, they can refine and adapt
their bank based on the responses received, leading to improved interactions and
learning outcomes. This process encourages users to actively participate in their
learning journey and engage with AI technology. It nurtures skills like digital
literacy and adaptability, which are increasingly valuable in an AI-centric world.
Beyond immediate benefits, a well-curated prompt bank evolves into a valuable
resource, adaptable to changing learning needs over the long term. In essence, the
creation of a personal prompt bank empowers users with autonomy and agency,
facilitating a customised and enriching learning experience. It enables users to
actively shape their education, aligning it with their preferences and needs.
Fostering AI Literacy
Our research has clearly underscored the urgent need for AI literacy training
among both students and instructors. However, what exactly does AI literacy
entail? In essence, AI literacy extends beyond digital literacy and encompasses
the ability to comprehend, apply, monitor and critically reflect on AI
applications, even without the expertise to develop AI models. It surpasses mere
understanding of AI capabilities; it involves being open to harnessing AI for
educational purposes. Equipping educators and students with the confidence
and responsibility to effectively use AI tools makes AI literacy an indispensable
skill. However, when promoting AI literacy, two primary objectives must
be taken into account. Firstly, a comprehensive exploration of how users can
adeptly wield ChatGPT as a valuable educational tool is essential. Secondly,
providing instructors with guidance on seamlessly integrating ChatGPT into
their educational practices while maintaining the integrity of their curricula,
assessments and instruction is pivotal. This ensures that students do not bypass
the essential learning process and neglect foundational knowledge. AI literacy
training within universities should be customised differently for students and
instructors; however, there will be a certain amount of crossover. For students,
it is imperative that they grasp the fundamental concepts of AI, its applica-
tions, and its potential impacts across various fields. This knowledge will
empower them to make informed decisions and actively engage with AI
technologies. Furthermore, it is crucial that we equip our students with an
understanding of the ethical implications of AI, including biases, privacy
concerns and accountability. They need to comprehend how AI technologies
can shape society and cultivate responsible usage. AI literacy should focus on
nurturing students’ critical thinking skills, as this will enable them to assess
AI-generated content, differentiate between human and AI contributions and
evaluate the reliability of information produced by AI. Instructors, in turn,
should acquire the proficiency to integrate AI tools seamlessly into their
teaching methods. This involves understanding AI’s
potential in enhancing learning experiences, automating administrative tasks
and providing personalised feedback to students. Instructors also need to stay
updated about AI-driven research tools, including data analysis and natural
language processing tools. This knowledge will ensure they remain abreast of
the latest advancements in their respective fields. Furthermore, instructors need
to play a pivotal role in responsibly guiding students’ use of AI tools for
academic purposes. This will include fostering originality, steering clear of
plagiarism or unethical practices and ensuring a constructive learning experience.
While the core concepts of AI literacy might share similarities, the emphasis and
depth of training will differ for each group.
Theoretical Advancements
The integration of the theoretical frameworks of critical theory and phenome-
nology into our research study on the impact of ChatGPT on higher education
represents a significant stride towards advancing the theoretical discourse within
the field. By employing these philosophical lenses, our research transcends mere
examination and enters the realm of deep understanding, nuanced analysis and
holistic exploration. Through the combination of critical theory and phenome-
nology, our research embraces a multidimensional understanding of the impact of
ChatGPT. Rather than analysing this integration from a singular perspective, our
approach delves into power dynamics, subjective experiences, existential
dimensions and authenticity. This comprehensive exploration offers a deeper grasp of
the technology’s effects on students, educators and institutions. Critical theory’s
focus on power dynamics exposes hidden inequalities and systemic structures. By
applying this lens, our research uncovers potential disparities in the adoption and
utilisation of ChatGPT, shedding light on how technology can either perpetuate
or challenge existing hierarchies. This unveiling of hidden dynamics enriches the
analysis.
Thus, we assert that universities’ ethics committees should play a pivotal
role in driving this transformation. With the increasing prevalence of
AI-generated content, institutions must grapple with redefining plagiarism and
attributing credit in this AI-infused age. This endeavour will necessitate a nuanced
understanding of how AI interfaces with established academic standards. When
considering the implications on product development, we firmly advocate for
universities to prioritise achieving an equitable distribution of AI bots. This can
be achieved through institutional agreements that grant bot access to all
instructors and students, thus ensuring universal availability, or by directing
students towards readily available open sources. As AI becomes an
integral part of the educational landscape, it becomes increasingly crucial to
address product-related considerations. Ensuring fair and equal access to AI bots
becomes paramount in order to prevent any potential disparities in resource
allocation among students. Moreover, we underline the significance of universities
forging strong partnerships with industries. Recognising the influence of AI on
these sectors and identifying the skill sets that employers are seeking in graduates
will serve as valuable insights for curriculum refinement within universities. This
collaborative effort with industries becomes essential to synchronise educational
offerings with the ever-changing requirements of the job market. Such collabo-
ration is pivotal in ensuring that students are adequately equipped with the
essential AI-related competencies to excel in industries increasingly shaped by AI
technologies. Furthermore, by fostering collaboration, universities can gain
insights into the evolving utilisation of AI within specific industries. This valuable
information can subsequently inform the creation or acquisition of specialised
bots that align with industry trends. This focused approach will adeptly address
the limitations of generalised bots within the educational sphere. The idea of
discipline-specific AI bots introduces a pioneering pathway for tailored learning
experiences, offering the capacity to precisely address the unique requirements of
diverse departments and thus enhancing the integration of AI across various
academic domains. Furthermore, we strongly advocate that universities
immediately introduce courses in prompt engineering for students, either by
developing their own courses or by providing access to existing MOOCs. This
proactive measure will empower students with indispensable skills in navigating the
swiftly changing technological terrain. Simultaneously, the provision of prompt
engineering courses will significantly bolster students’ AI proficiency and deepen
their comprehension of optimal AI-interaction strategies. Within the realm of
educational implications, we strongly emphasise the imperative for institutions to
thoroughly assess the potential influence of AI on students’ foundational learning.
The conventional concept of foundational learning faces new challenges as AI
introduces novel methods and tools. Navigating these challenges necessitates
adapting instructional approaches to foster critical thinking, problem-solving
and creativity – skills that AI struggles to replicate as
effectively as humans. In this context, we propose the adoption of the flipped
learning approach as an effective framework to address these issues. Embracing
this approach harnesses AI tools to enrich pre-class engagement, allowing class
time to be utilised for interactive discussions, collaborative projects and hands-on
lead to biases and adverse outcomes if not vigilantly managed. Hence, Chowdhury
proposes a redistribution of power through collaborative stakeholder
engagement. Karen Hao echoes these sentiments, expressing apprehension about
tech giants’ influence over advanced AI technologies. She calls for transparent
and inclusive AI policy shaping that involves a diverse range of stakeholders,
underlining the essential role of varied perspectives in promoting responsible AI
development. Harari (2018) also conveys concerns about potential challenges
associated with technological advancement. He asserts that sociologists, philosophers
and historians have a crucial role in raising awareness and addressing the
self-promotion frequently presented by corporations and entrepreneurs in regard
to their technological innovations. He underscores the urgency of swift
decision-making to effectively regulate the impact of these technologies, guarding
against their imposition on society by market forces. This matter holds utmost
significance at present, given the swift progress of ChatGPT in the AI industry,
which is catalysing a competition among other companies to adopt and cultivate
large language models and generative AI. This rapid pace may outstrip the
ability of government policy to respond promptly. This brings us back to our AI
experts: Max Tegmark, Gary Marcus,
Ernest Davis and Stuart Russell. In his 2017 book Life 3.0: Being Human in the
Age of Artificial Intelligence, Tegmark lays out frameworks for responsible AI
governance, stressing the importance of AI conforming to ethical principles that
prioritise human values, well-being and societal advancement. He highlights the
significance of transparency and explainability in ensuring humans understand
AI’s decision-making process. To this end, he proposes aligning Artificial General
Intelligence (AGI) objectives with human values and establishing oversight
mechanisms. Tegmark envisions a collaborative approach involving a diverse
range of stakeholders, including experts and policymakers, to collectively shape
AGI regulations, with a strong emphasis on international cooperation (Tegmark,
2017). He advocates for adaptable governance frameworks that can keep pace
with the evolving AI landscape. Tegmark’s overarching goal is to harmonise AI
with human values, preventing misuse and fostering societal progress, all while
recognising the continuous need for interdisciplinary discourse and fine-tuning in
AI governance (Tegmark, 2017). Marcus and Davis advocate for a comprehensive
re-evaluation of the AI research trajectory, suggesting an interdisciplinary
path that addresses the limitations inherent in current AI systems (Marcus &
Davis, 2019). Their approach involves integrating insights from various fields like
cognitive science, psychology and linguistics, aiming to create AI systems that
better align with human cognitive processes. They introduce a significant concept
– the ‘hybrid’ approach to AI advancement, which combines rule-based systems
and statistical methodologies (Marcus & Davis, 2019). This fusion aims to harness
the strengths of both approaches while mitigating their weaknesses. Their vision is
that such a hybrid methodology could yield more intelligent and reliable AI
systems capable of effectively handling complex real-world scenarios (Marcus &
Davis, 2019). Russell introduces the concept of value alignment theory, a
fundamental aspect of AI ethics (Russell, 2019). This theory centres on the vital
objective of aligning AI systems with human values and goals.
Contributions to Knowledge and Research 189
Addressing Limitations
While our study’s findings proved pertinent and resulted in the development of
strategies for implementing ChatGPT at our institution, it is essential to
acknowledge and address certain research limitations. The study took place at a
specific English-medium non-profit private university in Turkey, renowned for its
flipped learning approach. While the insights gained are valuable, it’s crucial to
recognise that the unique context may limit the generalisability of the results to
other educational settings. One notable limitation encountered during the
research process was the limited availability of literature on ChatGPT at the
study’s time. This scarcity can be attributed to the recent public launch of
ChatGPT and the restricted time frame for conducting the literature review. As a
result, the review partially relied on grey literature, including pre-prints,
potentially affecting the comprehensiveness and depth of the analysis. The study also
employed intentionally broad and open-ended research questions to facilitate an
exploratory investigation. While this approach allowed for a comprehensive
exploration, it’s vital to acknowledge the potential for bias in interpreting the
findings due to the dual role of the principal investigator, serving as both the
researcher and instructor. Additionally, the study’s reliance on a small sample size
of 12 students from an elective humanities class focused on forensic linguistics
poses a limitation. It’s essential to recognise that outcomes may have varied in
larger classes or different disciplines. Furthermore, the sampling of other voices,
including instructors and administrators, was opportunistic rather than systematic,
based on critical incidents, emails, workshops and ad hoc interactions. Finally, it’s worth noting
that the research was conducted over a single semester, which may restrict the
longitudinal analysis of ChatGPT’s impact on education. To address these limi-
tations in future studies, we will make the following adaptations. Firstly, to make
our findings more applicable across diverse educational settings, we will include a
broader range of academic disciplines. To ensure a strong theoretical foundation,
we will continuously monitor reputable sources for the latest research on
ChatGPT and related AI technologies, updating our literature review accordingly.
By combining quantitative and qualitative approaches, we will gain a more
holistic understanding of ChatGPT’s impact. Integrating numerical data with rich
narratives from students, instructors and administrators will provide a
comprehensive view of the technology’s effectiveness and challenges. To maintain
objectivity, we will incorporate greater reflexivity during data collection and
analysis by involving multiple researchers, and use triangulation methods to
validate and cross-check findings from these different perspectives. Strengthening
the study’s validity and representativeness can be achieved by including a larger
and more diverse participant pool, encompassing students from various
disciplines and academic levels, educators and decision-makers. Gaining deeper
insights into ChatGPT’s effects over time can be achieved through long-term
investigations. Observing changes, adaptations and potential challenges will
provide a nuanced understanding of the technology’s long-term implications. The
exploration of our suggested pedagogical strategies for effectively integrating
ChatGPT in education is of utmost importance. By investigating how these
proposed changes will impact teaching and learning, we can gain valuable insights
for further practical implementation. By incorporating these improvements into
our future research, we can enrich our understanding of ChatGPT’s impact in
education and offer valuable insights for educators and institutions seeking to
effectively utilise AI technologies.
By pursuing these future research directions, we believe the field can gain a
more comprehensive understanding of AI’s influence in education and develop
strategies for harnessing AI’s potential while safeguarding the core values of
quality education and human-centric learning.
Course Name
Mastering AI Literacy for Teaching and Learning
Course Format
This course will be delivered as an asynchronous online programme, providing
educators with the flexibility to engage with the content at their own pace. The
course materials will be accessible through the university’s learning management
system, allowing participants to learn, reflect and practise in a self-directed
manner. To enhance engagement and interaction, live workshops will be
conducted throughout the semester, focusing on each aspect of the course content.
These workshops will provide an opportunity for participants to ask questions,
engage in discussions and receive real-time guidance.
Course Description
This is a dynamic and immersive course that equips educators with a deep
understanding of AI chatbot technology and its ethical implications in
educational contexts. From foundational concepts to advanced strategies,
this course takes educators on a transformative journey through the world of
AI chatbots. Participants will explore how AI chatbots are reshaping the
education landscape, from personalised learning experiences to efficient
administrative support. The course delves into the ethical dimensions of AI
196 Appendices
Enduring Understanding
Empowering educators with AI chatbots involves mastering their practical
applications, understanding ethical implications and adapting teaching
practices for an AI-enhanced educational landscape.
Essential Questions
Course Contents
Course Name
Mastering AI Literacy for Learning
Course Format
This course will be delivered as an asynchronous online programme,
providing students with the flexibility to engage with the content at their own
pace in accordance with their schedules. The course materials will be accessible
through the university’s learning management system, allowing students to
learn, reflect and practise in a self-directed manner.
Course Description
The primary goal of this course is for students to develop essential skills that
enable effective engagement with AI, ethical evaluation and strategic
enhancement of learning through AI strategies. By the end of this course,
students will have gained the ability to critically assess AI’s limitations,
leverage its potential for creativity and efficiency and ensure responsible
learning practices that guard against potential shortcuts. Through a flexible
online format, students will explore the transformative role of AI in education,
its impact on learning strategies and how to navigate its ethical considerations,
empowering them to harness the power of AI while promoting
responsible learning practices in the AI era.
Enduring Understanding
In the AI era, mastering AI literacy equips learners with skills to engage,
collaborate with and effectively adapt to AI, enhancing learning strategies in
a rapidly evolving technological landscape while safeguarding against
potential learning shortcuts.
Essential Questions
• Identify and assess the specific skills required to effectively engage with and
adapt to AI, fostering collaboration and informed decision-making.
• Evaluate the ethical implications of utilising AI in learning processes,
demonstrating an awareness of responsible AI usage and potential
challenges.
• Develop strategies to enhance learning experiences through AI, including
optimising input quality, output effectiveness and personalised interactions.
• Critically appraise the limitations and challenges associated with AI
technologies, recognising the importance of reliability and ethical considerations.
• Implement AI as a personal aide and tutor, applying AI tools to enhance
creativity, efficiency and knowledge acquisition.
• Identify instances where the use of AI could lead to bypassing learning and
formulate strategies to mitigate them, thereby fostering responsible
learning practices in the AI era.
Course Contents
In summary, this AI literacy course for students will equip them with
essential skills to engage effectively with AI, evaluate its ethical implications,
enhance learning strategies and navigate potential challenges in the AI era.
Course Name
Mastering AI Chatbots
Course Format
The course is scheduled for one semester and will adopt the flipped learning
approach. Classes can be conducted synchronously online.
Course Description
The primary goal of this course is for students to become adept with AI
chatbots and learn how to use and apply these tools effectively in various
situations. Throughout the course, students will build proficiency with AI
chatbots, from grasping the underlying technology to using them strategically.
They will explore the basics of chatbots while assessing their impact on
education, jobs and society, and delve into how AI is influencing them
personally, as well as individuals’ relationships with learning, technology and
society. Additionally, students will discover ways to
practically enhance their AI user experience, all while considering ethical
concerns. They will also investigate the limitations and challenges of AI
chatbots and their role in learning. Ethical considerations and real-world
examples will be discussed to provide insights into AI chatbot development.
Moreover, students will examine AI threats, ethical guidelines, and the
responsibilities that educators, schools and universities have in this context.
They will also explore upcoming trends and innovations in AI chatbots,
preparing them for the ever-changing landscape of AI technology. The course
will emphasise hands-on experience, and by the end of it, students will have
configured and trained an AI chatbot to meet their individual needs.
Consequently, students will have acquired skills in AI comprehension, critical
thinking, ethical considerations and practical application, enabling them to
navigate the world of AI effectively.
Enduring Understanding
Mastering AI chatbots involves understanding their impact on people, societies
and ethics, and grasping the broad effects of technological progress.
Essential Questions
• How can AI chatbots enhance user efficiency, support and ideas across
different contexts?
• What’s the impact of AI on learning, and what are learners’ responsibilities
in this context?
• What’s the scope of generalised and specialised AI chatbots, considering
limitations and cultures?
• What ethical challenges arise during AI chatbot development, and how do
real-world examples provide insights into these challenges?
• What threats do AI chatbots pose, and why is ethical policy crucial in
managing them?
• How can universities contribute responsibly to AI chatbot development
and ethical discussions?
• What tools, platforms and practices can be used to develop AI chatbots?
• How are emerging trends and technologies shaping the future of AI
chatbot technology and integration?
Assessment
Course Contents
Abdul, G. (2023, May 30). Risk of extinction by AI should be global priority, say
experts. The Guardian. https://www.theguardian.com/technology/2023/may/30/risk-
of-extinction-by-ai-should-be-global-priority-say-tech-experts
Aceves, P. (2023, May 29). ‘I do not think ethical surveillance can exist’: Rumman
Chowdhury on accountability in AI. The Guardian. https://www.theguardian.com/
technology/2023/may/29/rumman-chowdhury-interview-artificial-intelligence-
accountability
Adamopoulou, E., & Moussiades, L. (2020). An overview of chatbot technology. In
IFIP advances in information and communication technology (Vol. 584). https://
doi.org/10.1007/978-3-030-49186-4_31
Alshater, M. M. (2022). Exploring the role of artificial intelligence in enhancing
academic performance: A case study of ChatGPT. https://ssrn.com/abstract=
4312358
Althusser, L. (1971). Lenin and philosophy, and other essays. New Left Books.
Anyoha, R. (2017, August 28). The history of artificial intelligence. Science in the
News [Harvard University Graduate School of Arts and Sciences]. https://
sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/
Armstrong, P. (n.d.). Bloom’s taxonomy. Vanderbilt Center for Teaching. https://
cft.vanderbilt.edu/guides-sub-pages/blooms-taxonomy/
Baker, T., & Smith, L. (2019). Educ-AI-tion rebooted? Exploring the future of artificial
intelligence in schools and colleges. Nesta. https://media.nesta.org.uk/documents/
Future_of_AI_and_education_v5_WEB.pdf
Bellan, R. (2023, March 14). Microsoft lays off an ethical AI team as it doubles down
on OpenAI. TechCrunch. https://techcrunch.com/2023/03/13/microsoft-lays-off-an-
ethical-ai-team-as-it-doubles-down-on-openai/
Bensinger, G. (2023, February 21). ChatGPT launches boom in AI-written e-books on
Amazon. Reuters. https://www.reuters.com/technology/chatgpt-launches-boom-ai-
written-e-books-amazon-2023-02-21/
Bhuiyan, J. (2023, May 16). OpenAI CEO calls for laws to mitigate ‘risks of
increasingly powerful’ AI. The Guardian. https://www.theguardian.com/
technology/2023/may/16/ceo-openai-chatgpt-ai-tech-regulations
Bida, A. (2018). Heidegger and “Dwelling”. In Mapping home in contemporary
narratives. Geocriticism and spatial literary studies. Palgrave Macmillan.
Bloom, B., Engelhart, M., Furst, E., Hill, W., & Krathwohl, D. (1956). Taxonomy of
educational objectives: The classification of educational goals. Handbook 1.
Cognitive domain. David McKay Company.
Blueprint for an AI Bill of Rights. (n.d.). The White House. https://www.white
house.gov/ostp/ai-bill-of-rights/
Bourdieu, P. (1978). The linguistic market; a talk given at the University of Geneva in
December 1978. In Sociology in question (p. 83). Sage.
Gates, B. (2023, March 24). Bill Gates: AI is most important technological advance in
decades – But we must ensure it is used for good. Independent. https://
www.independent.co.uk/tech/bill-gates-ai-artificial-intelligence-b2307299.html
Girdher, J. L. (2019). What is the lived experience of advanced nurse practitioners of
managing risk and patient safety in acute settings? A phenomenological perspective.
University of the West of England. https://uwe-repository.worktribe.com/output/
1491308
Global education monitoring report 2023: Technology in education - A tool on whose
terms? (p. 435). (2023). UNESCO.
Gollub, J., Bertenthal, M., Labov, J., & Curtis, P. (2002). Learning and understanding:
Improving advanced study of mathematics and science in U.S. high schools (pp.
1–564). National Research Council. https://www.nap.edu/read/10129/chapter/1
Griffin, A. (2023, May 12). ChatGPT creators try to use artificial intelligence to
explain itself – and come across major problems. The Independent. https://
www.independent.co.uk/tech/chatgpt-website-openai-artificial-intelligence-
b2337503.html
Grove, J. (2023, March 16). The ChatGPT revolution of academic research has begun.
Times Higher Education.
Hammersley, M., & Atkinson, P. (1995). Ethnography: Principles in practice (p. 16).
Routledge.
Hao, K. (2020, September 23). OpenAI is giving Microsoft exclusive access to its
GPT-3 language model. MIT Technology Review. https://www.technologyreview.
com/2020/09/23/1008729/openai-is-giving-microsoft-exclusive-access-to-its-gpt-3-
language-model/
Harari, Y. N. (2018). 21 lessons for the 21st century. Vintage.
Harreis, H. (2023, March 8). Generative AI: Unlocking the future of fashion. McKinsey
& Company. https://www.mckinsey.com/industries/retail/our-insights/generative-
ai-unlocking-the-future-of-fashion
Higher education that works (pp. 1–8). (2023). University of South Florida.
Hinsliff, G. (2023, May 4). If bosses fail to check AI’s onward march, their own jobs
will soon be written out of the script. The Guardian. https://www.theguardian.com/
commentisfree/2023/may/04/ai-jobs-script-machines-work-fun
How will ChatGPT & AI impact the financial industry? (2023, March 6). FIN. https://
www.dfinsolutions.com/knowledge-hub/thought-leadership/knowledge-resources/
the-impact-of-chatgpt-in-corporate-finance-marketplace
How do I cite generative AI in MLA style? (n.d.). MLA Style Center. https://
style.mla.org/citing-generative-ai/
Hunt, F. A. (2022, October 19). The future of AI in the justice system. LSJ Media.
https://lsj.com.au/articles/the-future-of-ai-in-the-justice-system/
Inwood, M. (2019). Heidegger: A very short introduction (2nd ed.). Oxford University
Press.
Jackson, F. (2023, April 13). “ChatGPT does 80% of my job”: Meet the workers using
AI bots to take on multiple full-time jobs - and their employers have NO idea.
MailOnline. https://www.dailymail.co.uk/sciencetech/article-11967947/Meet-
workers-using-ChatGPT-multiple-time-jobs-employers-NO-idea.html
Jiminez, K. (2023, April 13). Professors are using ChatGPT detector tools to accuse
students of cheating. But what if the software is wrong? USA Today. https://
www.usatoday.com/story/news/education/2023/04/12/how-ai-detection-tool-
spawned-false-cheating-case-uc-davis/11600777002/
Johnson, A. (2022, December 12). Here’s what to know about OpenAI’s
ChatGPT—What it’s disrupting and how to use it. Forbes. https://www.forbes.
com/sites/ariannajohnson/2022/12/07/heres-what-to-know-about-openais-chatgpt-
what-its-disrupting-and-how-to-use-it/?sh=7a5922132643
Kahneman, D. (2011). Thinking, fast and slow. Random House.
Karp, P. (2023, February 6). MP tells Australia’s parliament AI could be used for
‘mass destruction’ in speech part-written by ChatGPT. The Guardian. https://
www.theguardian.com/australia-news/2023/feb/06/labor-mp-julian-hill-australia-
parliament-speech-ai-part-written-by-chatgpt
Klee, M. (2023, June 6). She was falsely accused of cheating with AI — and she won’t
be the last. [Magazine]. Rolling Stone. https://www.rollingstone.com/culture/
culture-features/student-accused-ai-cheating-turnitin-1234747351/
Klein, A. (2023, July 25). Welcome to the ‘Walled Garden.’ Is this education’s solution
to AI’s pitfalls? Education Week. https://www.edweek.org/technology/welcome-to-
the-walled-garden-is-this-educations-solution-to-ais-pitfalls/2023/07?fbclid=IwAR2
Wgk8e8Ex5niBsy6npZLnO77W4EuUycrkTpyH0GCHQghBSF1a2DKhzoNA
Liberatore, S., & Smith, J. (2023, March 30). Silicon Valley’s AI civil war: Elon Musk
and Apple’s Steve Wozniak say it could signal “catastrophe” for humanity. So why
do Bill Gates and Google think it’s the future? Daily Mail. https://www.dailymail.
co.uk/sciencetech/article-11916917/The-worlds-greatest-minds-going-war-AI.html
Marcus, G., & Davis, E. (2019). Rebooting AI: Building artificial intelligence we can
trust. Pantheon Books.
Martinez, P. (2023, March 31). How ChatGPT is transforming the PR game.
Newsweek. https://www.newsweek.com/how-chatgpt-transforming-pr-game-
1791555
Maslej, N., Fattorini, L., Brynjolfsson, E., Etchemendy, J., Ligett, K., Lyons, T.,
Manyika, J., Ngo, H., Niebles, J. C., Parli, V., Shoham, Y., Wald, R., Clark, J., &
Perrault, R. (2023). The AI Index 2023 Annual Report (AI Index, p. 386). Institute
for Human-Centered AI. https://aiindex.stanford.edu/wp-content/uploads/2023/04/
HAI_AI-Index-Report_2023.pdf
McAdoo, T. (2023, April 7). How to cite ChatGPT. APA Style. https://
apastyle.apa.org/blog/how-to-cite-chatgpt
McLean, S. (2023, April 28). The environmental impact of ChatGPT: A call for
sustainable practices in AI development. Earth.org. https://earth.org/environmental-
impact-chatgpt/
McTighe, J., & Wiggins, G. (2013). Essential questions: Opening doors to student
understanding. Association for Supervision and Curriculum Development.
Mészáros, I. (2005). Marx’s theory of alienation. Merlin.
Mhlanga, D. (2023). Open AI in education, the responsible and ethical use of ChatGPT
towards lifelong learning. https://papers.ssrn.com/sol3/papers.cfm?abstract_
id=4354422
Milmo, D. (2023a, February 3). Google poised to release chatbot technology after
ChatGPT success. The Guardian. https://www.theguardian.com/technology/2023/
feb/03/google-poised-to-release-chatbot-technology-after-chatgpt-success
Milmo, D. (2023b, April 17). Google chief warns AI could be harmful if deployed
wrongly. The Guardian. https://www.theguardian.com/technology/2023/apr/17/
google-chief-ai-harmful-sundar-pichai
Milmo, D. (2023c, May 4). UK competition watchdog launches review of AI market.
The Guardian. https://www.theguardian.com/technology/2023/may/04/uk-
competition-watchdog-launches-review-ai-market-artificial-intelligence
Milmo, D. (2023d, May 20). UK schools ‘bewildered’ by AI and do not trust tech firms,
headteachers say. The Guardian. https://www.theguardian.com/technology/2023/
may/20/uk-schools-bewildered-by-ai-and-do-not-trust-tech-firms-headteachers-say
Milmo, D. (2023e, July 11). AI revolution puts skilled jobs at highest risk, OECD
says. The Guardian. https://www.theguardian.com/technology/2023/jul/11/ai-
revolution-puts-skilled-jobs-at-highest-risk-oecd-says
Milmo, D. (2023f, July 26). Google, Microsoft, OpenAI and startup form body to
regulate AI development. The Guardian. https://www.theguardian.com/technology/
2023/jul/26/google-microsoft-openai-anthropic-ai-frontier-model-forum
Mok, A., & Zinkula, J. (2023, April 9). ChatGPT may be coming for our jobs. Here
are the 10 roles that AI is most likely to replace. Business Insider. https://
www.businessinsider.com/chatgpt-jobs-at-risk-replacement-artificial-intelligence-
ai-labor-trends-2023-02
Moran, C. (2023, April 6). ChatGPT is making up fake Guardian articles. Here’s how
we’re responding. The Guardian. https://www.theguardian.com/commentisfree/
2023/apr/06/ai-chatgpt-guardian-technology-risks-fake-article
Nelson, N. (2001). Writing to learn. In P. Tynjälä, L. Mason, & K. Lonka (Eds.),
Writing as a learning tool (Studies in Writing, Vol. 7). Springer. https://doi.org/
10.1007/978-94-010-0740-5_3
Neumann, M., Rauschenberger, M., & Schön, E.-M. (2023). We need to talk about
ChatGPT: The future of AI and higher education. Hochschule Hannover.
https://doi.org/10.25968/opus-2467
O’Flaherty, K. (2023, April 9). Cybercrime: Be careful what you tell your chatbot
helper…. The Guardian. https://www.theguardian.com/technology/2023/apr/09/
cybercrime-chatbot-privacy-security-helper-chatgpt-google-bard-microsoft-bing-
chat
Paleja, A. (2023a, January 6). In a world first, AI lawyer will help defend a real case in
the US. Interesting Engineering.
Paleja, A. (2023b, January 30). Gmail creator says ChatGPT-like AI will destroy
Google’s business in two years. Interesting Engineering.
Paleja, A. (2023c, April 4). ChatGPT ban: Will other countries follow Italy’s lead?
Interesting Engineering.
Patton, M. (2002). Qualitative research and evaluation methods (3rd ed.). SAGE
Publications.
Pause Giant AI experiments: An open letter. (2023, March 22). Future of Life Institute.
https://futureoflife.org/open-letter/pause-giant-ai-experiments/
Ramponi, M. (2022, December 23). How ChatGPT actually works. AssemblyAI.
https://www.assemblyai.com/blog/how-chatgpt-actually-works/
Ray, S. (2023, May 25). ChatGPT could leave Europe, OpenAI CEO warns, days
after urging U.S. Congress for AI regulations. Forbes. https://www.forbes.com/
sites/siladityaray/2023/05/25/chatgpt-could-leave-europe-openai-ceo-warns-days-
after-urging-us-congress-for-ai-regulations/?sh=83384862ed85