The AI Revolution & Higher Education: Why 21st Century Durable Skills Are
Needed More Than Ever

Sean Hughes, Minerva Project

This document was uploaded by the author on 10 May 2023.
This document is licensed under a Creative Commons Attribution 4.0 International License (CC-BY
4.0). You are free to share (copy and redistribute the material in any medium or format) and adapt
(remix, transform, and build upon the material) for any purpose, even commercially, as long as you
provide appropriate credit, include a link to the license, and indicate if any changes were made. You
may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or
your use.
Hughes, S. (2023). The AI Revolution & Higher Education: Why 21st Century Durable Skills Are Needed More Than Ever. Retrieved from https://www.researchgate.net/publication/370627530_The_AI_Revolution_Higher_Education_Why_21st_Century_Durable_Skills_Are_Needed_More_Than_Ever
Given the rapid pace of AI development, this “living document” will be continually updated to
reflect significant changes taking place at the intersection of (higher) education and AI.
Table of Contents
1. Introduction
1.1 AI & Society
1.2 AI & Education
1.3 Worries Around AI
1.4 Conclusion: Adapt and Evolve Rather than Detect and Ban
2. Academic Leadership
2.1 Craft Forward Looking (AI) Policies
2.2 Faculty: Workshops on AI
2.3 Students: Foundational (AI & Media) Literacy Training
3. Faculty
3.1 How AI Can Supercharge Your Teaching
3.2 Possible Educator Applications
3.3 Navigating the AI Landscape in Education: Reflections and Recommendations
4. Students
4.1 Personalized Learning Co-Pilot
4.2 Many More Student Applications
5. Assessment
5.1 AI Resistant Assessment
5.2 AI-Enhanced Assessment
6. Education Fit for an AI Era
6.1 What Are 21st Century Durable Skills?
6.2 How To Equip Students With 21st Century Durable Skills
6.3 An Educational Approach Fit For an AI Era
7. Conclusions
8. Resources
1. Introduction
Generative artificial intelligence (AI) has taken the world by storm, transforming how we
relate to and interact with digital content. At its core, this class of new technology works by
ingesting vast amounts of data (sometimes the entire content of the internet), and then
training sophisticated machine learning algorithms such as neural networks and deep
learning on that data. These systems then autonomously generate new content, such as
images, text, or music, frequently producing creations that are indistinguishable from their
human-generated counterparts.2
AI is also transforming the educational sector in real time. This new class of tools is
reshaping how students learn, educators teach, and institutions function, requiring each of
these stakeholders to grapple with both the positive and negative implications of AI in
multiple domains.
One tool in particular, ChatGPT, has garnered significant attention within the educational
sector. And it is no wonder why. This text-based conversational agent possesses an
impressive range of capabilities, the limits of which are still being discovered. ChatGPT can
instantly generate multiple assignment types, from essays and reports to dissertations, on a
near-endless range of topics. It can write code, solve mathematical problems, ace quizzes,
condense and compose scientific research papers, act as a persuasive debate partner, a
resourceful study companion or tutor, and even role-play as notable historical, scientific, and
cultural figures.
And remarkably, it's already out of date. Its successor (GPT4) is more reliable and creative,
outperforming its predecessor in nearly every way. GPT4 can not only take but also excel on
many different university exams, from law and computer science, to literature, microbiology,
and medicine (see Figure 1).
2
A variety of terms and acronyms have been used to refer to this emerging class of technology, from
AI and generative AI (GAI), to AI bots, chat bots, and so on. For the sake of simplicity and clarity, I will
refer to this general class of products as “AI tools” in this report.
GPT4 is just one example of a text-based agent, while text-based agents are just one of
many different tools in the wider AI ecosystem (see Table 1 for an overview).
This growing array of tools is transforming teaching and learning. Students can now
seamlessly convert a single sentence into an elaborate essay or research paper, or even
create photo-realistic images, videos, or music compositions that rival the work of seasoned
photographers, illustrators, and musicians. Various tools provide personalized, formative
feedback to students when learning new concepts or developing as writers (e.g., see
Grammarly, Quill, Turnitin). Others act as private tutors, offering real-time guidance on how
to deliver verbal presentations and speeches. Still other tools adeptly dissect coding and
mathematical problems, walking students through solution steps while nurturing their critical
thinking skills. For educators, these very same technologies offer ways to streamline course
and lesson design, as well as develop, evaluate, and refine teaching materials, or deliver
customized feedback tailored to a given student's level and goals. 3
3
A number of plugins are emerging for GPT4 that allow it to access real-time information on the
internet. Others such as Code Interpreter allow it to utilize a Python interpreter to do what previous
versions of the tool could not (e.g., analyze and visualize data, extract text from images, and edit
videos).
Table 1. Overview of generative AI tools in May 2023 with the potential to radically transform
teaching and learning.
Note: Educators eager to delve into the contemporary AI ecosystem and experience these
tools firsthand should consult FuturePedia. This website categorizes and tags each tool,
offers users the ability to subscribe to updates on the latest AI developments, and has a
section devoted to Education AI Assistants. Also see the AI Tool Directory for another such
database. 4 5
1.3 Worries Around AI

Cheating. A primary concern for many educators lies in the potential for students to
utilize AI tools for plagiarism and cheating. The ability for these tools to generate human-like
text raises the possibility that students will leverage such tools to write entire assignments or
essays for them, making it difficult for educators to detect dishonesty, and distinguish
between human and machine generated content. Students could also employ AI-generated
writing or code for their assignments, submitting the material with little to no modification.
Likewise, in creative courses, students might use the technology to generate music, images,
or videos to similar deceitful ends.
Recent surveys have substantiated these concerns, suggesting that most students and
faculty are aware of this new class of tools, with many students employing them in class and
during final exams. In a Study.com survey of 200 primary and secondary school teachers, a
quarter reported instances of students cheating with the assistance of ChatGPT. Similarly, in
a survey involving 4,497 Stanford students, 17% admitted to using generative AI for support
with assignments and exams. Among those who used AI, the majority claimed to have
4
A suite of paid AI tools are emerging explicitly for teachers (e.g., Education Copilot, SchoolAI, and
Nolej). However, they are not free and faculty can often achieve similar outcomes with other (open
source) tools with effective prompt engineering.
5
Each of the AI tools mentioned in Table 1 can either be used individually or in interaction with one
another, accessed directly or via APIs. When used in interaction, many of the constraints present in
one can be overcome by the other (e.g., connecting GPT4 to certain APIs allows it to access
real-time information from the internet, or analyze and visualize data).
employed it solely for brainstorming and outlining, while 5% confessed to submitting written
material directly from ChatGPT with minimal or no edits. 6
Critically, there are significant limitations to these tools. Notably, none of them, even the
detector developed by ChatGPT's creators, are entirely reliable. 7 They frequently yield false
negatives (i.e., fail to recognize AI-generated content as AI-generated) and false positives
(i.e., mislabel human-written content as AI-created). They gather and reuse student and
faculty data, and can be easily bypassed by students (e.g., by altering words or phrases in
the generated material). Indeed, one recent study, not yet peer reviewed, found that of 50
AI-generated essays, 40 were able to effectively evade detection by traditional plagiarism
check software.
These issues are compounded by yet another problem: many detectors are limited in their
scope and not designed to detect other more common forms of plagiarism (e.g., text copied
from the internet or alternative sources). Just as unreliable evidence cannot serve as the basis
for accusations in a courtroom, unreliable detectors cannot serve as a strong basis for claims of
academic misconduct or dishonesty.
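The courtroom analogy can be made concrete with a back-of-envelope Bayes calculation. The numbers below (detector sensitivity, false-positive rate, and the share of submissions that are actually AI-written) are purely illustrative assumptions, not measured figures:

```python
# Back-of-envelope Bayes check: even a seemingly accurate AI detector
# flags many honest essays when most submissions are human-written.
def posterior_ai_given_flag(sensitivity, false_positive_rate, prevalence):
    """P(essay is AI-written | detector flags it), by Bayes' rule."""
    true_flags = sensitivity * prevalence              # AI essays correctly flagged
    false_flags = false_positive_rate * (1 - prevalence)  # human essays wrongly flagged
    return true_flags / (true_flags + false_flags)

# Assumed numbers: the detector catches 90% of AI text, wrongly flags
# 10% of human text, and 10% of submissions are actually AI-written.
p = posterior_ai_given_flag(0.90, 0.10, 0.10)
print(round(p, 2))  # 0.5: half of all flagged essays are human-written
```

Under these (hypothetical) conditions, a flag from the detector is no better than a coin toss, which is why a detector score alone is such weak evidence of misconduct.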
Finally, policing perpetuates an arms race mentality between educational institutions and
their students, with both in an ever evolving game of cat-and-mouse to identify new, and
circumvent existing, detection tools.
Taking a step back, even the most precise and effective detector cannot prevent students
from using AI tools, nor can it make these tools disappear. Such tools should represent just
one out of many factors when making holistic judgements about academic dishonesty or
plagiarism. 8 In many instances, educators will possess a deeper understanding of a
student's work and progress, with human intuition potentially more reliable than AI detection
alone.
Rather than police AI use, it may be better to set clear, up-front expectations for students
that signal what, when, and why AI tools are allowed, as well as the potential consequences
of using such tools inappropriately in their own work. I will return to this topic in more detail in
later sections.
6
For more on cheating in an AI era see Alfie Kohn’s article on who’s cheating whom, Marc Watkin’s
article, and Ditchthattextbook’s thoughtful position.
7
Each time text-based agents receive a prompt, they produce a novel piece of content, crafting a
response tailored to each inquiry. The resulting content goes undetected by plagiarism checker
databases (given that it differs from past content). Many AI tools do not store a record of their
created responses, and their creators presumably have no incentive to share such data with
plagiarism checkers. Even if the data were made available, the sheer amount of processing power
needed to search through existing databases would make it economically unfeasible.
8
Detectors may actually function better as conversational prompts between educators and students.
For instance, if a student submits suspect work that seems AI-generated, then this may be a prime
opportunity to discuss their current understanding of the subject matter, if there are obstacles
preventing the student from completing the work themselves, and how that student is going to
capitalize on their uniquely human abilities to complement (and not be replaced by) AI in the future.
Banning AI. If AI detection is so unreliable, perhaps it would be better simply to ban
the use of such tools altogether. Such an idea seems to resonate with certain US institutions
and educational districts (e.g., New York City, Los Angeles, Baltimore) who have recently
restricted AI use on school networks and devices. Italy has recently done the same. The
rationale for such bans understandably stems from concerns about the negative impact that
AI has on learning, as well as the safety and accuracy of AI-generated content on students
themselves. As one New York City spokesperson put it, although “the tool may be able to
provide quick and easy answers to questions, it does not build critical-thinking and
problem-solving skills.”
While these concerns are legitimate, an outright ban on AI use may be both impractical and
ineffective. The number of such tools is exploding, and today's offerings are likely the first of
many to come. Trying to block each and every one would be a time-consuming and distracting
exercise. Students and teachers have already adopted these tools in large numbers, and
can readily access them on their personal devices, both on and off campus. These same
tools are being integrated into web browsers (e.g., Google Chrome and Microsoft Edge) and
widely used software (e.g., Microsoft Office), as well as many education technology
products. Banning AI would therefore require institutions to track, document, and prevent
access to an ever growing spectrum of software and technologies, an approach that can
instantly be circumvented by students when at home, or when using their personal
smartphones and laptops. Indeed, students will always find ways to access such tools,
whether it's asking AI for the answer to an exam question during a bathroom break on their
personal device, copying AI-generated content when completing their assignment at home,
or quietly consulting the tool under their desk during lessons.
In short, AI tools are ubiquitous and continue to grow in their capabilities and adoption.
Those looking to police and ban their use are likely fighting a losing battle.
1.4 Conclusion: Adapt and Evolve Rather than Detect and Ban
The educational sector is clearly undergoing significant disruption. But it's worth
remembering that technology has always disrupted the way we teach and learn. From the
introduction of calculators to the advent of word processors, internet search engines, and
Wikipedia, each new technology has prompted concern over its impact on education. Each
time a new technology has come along educators have adapted and the education sector
has evolved.
The same will be true in an AI era: the sector will need to evolve alongside technology or
face being left behind by it. Administrators will need to resist the temptation to police or ban
AI and instead guide faculty and students in its effective, ethical, and responsible use.
Faculty will need to explore and harness the full potential of AI tools in their teaching
practices to better support their learning objectives. Students must be taught that they are
entering a new era in which AI increasingly drives human decision-making and behavior, for
both good and ill; that they need to cultivate a deep understanding of AI's capabilities,
limitations, and societal and ethical implications; and that they should use this knowledge to
make AI-informed decisions and become responsible stewards of its future development
and implementation.
In what follows I offer a set of recommendations for the three main educational stakeholders
most affected by AI: academic leadership, faculty, and students. Thereafter I argue that 21st
century durable skills, acquired through active learning, evaluated via formative, authentic,
and experiential assessment, and super-charged via AI, form the basis of an educational
approach ideally suited to preparing students for a rapidly evolving job market and changing
world.
2. Academic Leadership
The topic of AI has gained significant traction in educational circles, with a proliferation of
online resources, webinars, debate and discussion taking place. Preliminary data indicates a
high level of awareness surrounding these emerging technologies among both faculty and
students, a general optimism and positivity around their use, educators purportedly adopting
AI tools at a quicker rate than students, and widespread desire for clarity around acceptable
use cases. This latter concern has stimulated many institutions to form generative AI working
groups, tasked with devising guidelines and recommendations for faculty and students alike.
There are three immediate steps academic leaders can take to help prepare their institutions
for an AI era (see below). Although these recommendations hold across institutions, their
implementation will differ depending on the unique needs, existing capabilities, and timelines
present in a given institution:
● The first involves drafting an AI Academic Honesty Policy that can be applied across
the entire institution (and used by faculty to develop policy statements within course
syllabi). Create resources to promote academic integrity, such as plagiarism
prevention materials and guidelines for ethical AI usage practices.
● The second is to create faculty workshops that highlight what effective, ethical, and
responsible AI usage looks like, so that faculty can better appreciate how, when, and
where to use AI in their courses. Develop and facilitate workshops or training
sessions on pedagogical best practices and emerging AI trends in higher education.
● The third is to introduce a foundational AI literacy course for students that equips them
with essential knowledge and skills for utilizing these technologies and critically
consuming AI output.
In what follows, I delve deeper into each of these strategies, providing a comprehensive
blueprint for academic institutions to adapt and thrive in the age of AI. 9
2.1 Craft Forward Looking (AI) Policies

The academic integrity policies of most institutions were created prior to the emergence of
generative AI and, as such, do not effectively reference this new class of technology. This
gap in policy leaves both students and faculty uncertain about permissible practices.
9
Recent AI advances have additional implications for academic leadership, from admissions, and
enrollment, to student and career services, as well as higher education management. That said, in
this report, I focus exclusively on the twin issues of teaching and learning.
While each institution will update their policies according to their own unique needs and
timelines, several key considerations will likely need to be addressed:
○ Before policies can be developed (see below), academic leadership will first
need to determine what constitutes cheating and plagiarism in an AI era. This
is because AI tools significantly blur previously established boundaries.
■ Now consider plagiarism. While copying and pasting work from online
sources or other students presents clear evidence of misconduct,
detecting AI-generated content is problematic for the aforementioned
reasons. If an AI detector assigns an 88% probability that an essay was
AI-authored, and a student denies any assistance, how should an
institution proceed?
○ Certain institutions have already begun to explore these and related issues
(see here and here). Their initial insights may prove useful when crafting your
own AI policies.
○ Ensure policies provide clear guidance on what, when, how, and where AI
tools are acceptable to use in both learning and assessment contexts.
Articulate the rationale behind these choices and the consequences of policy
violations.
○ Resources already exist that can help during the drafting process. Several
institutions have initiated policy revisions (see here and here for useful
resources for designing policies), and the Sentient Syllabus Project also
contains useful language for drafting policies.
○ Present positive AI use cases for both instructors and students. Help both see
how AI can enhance teaching and learning.
■ For concrete advice see Section 3 (Faculty) and Section 4 (Students)
2.2 Faculty: Workshops on AI

Naturally, the number and content of workshops can be tailored to best meet a given
institution's needs. But providing faculty with appropriate training will better prepare them for
changes unfolding in their classrooms. 10
2.3 Students: Foundational (AI & Media) Literacy Training

To support students in these endeavors, educational institutions must prioritize AI and media
literacy courses that help students appreciate that technology is a tool to expand their own
thinking rather than a crutch to limit it. Training can take various forms, from pre-semester
onboarding sessions, to workshops, or introductory lessons at the beginning of the academic
year. Ideally, all new students would enroll in a foundational course on the topic. Several
ideas will likely be important to communicate regardless of the specific approach that is
adopted:
10
Many faculty workshops have already been run in the educational sector. See Section 8
(Resources) for materials that can be used when building your own workshops.
AI Literacy: Critical Thinking
● Begin by asking students to examine the tools themselves. Ask questions such as
who created the AI tool, what is its purpose, and how does its creator profit from its
use? What data is it trained on, who is its target audience, and how is user data
utilized, shared, and monetized? Students should also explore the tool's limitations
as well as its (un)intended harms and benefits.
Students need to be well-informed about the limitations of AI if they are to unlock the full
potential of these tools for learning.11 They will need to appreciate that, for now:
○ These tools often accept obviously false statements without question and fail
to evaluate their output against evidence or principles. In many cases they do
not consult external or contemporary sources (Bing Chat is one notable
exception) or run experiments against objective reality.
○ When faced with uncertainty, many AI tools may generate factually incorrect
outputs rather than conceding error.
○ It's crucial then that students be taught how to evaluate the trustworthiness of
information using external, reliable sources (i.e., to perform a reality check
against all AI-generated output). Doing so will have the additional benefit of
highlighting the students’ own critical thinking readiness and give them an
opportunity to practice and apply those abilities.
○ AI tools are constrained by the data they have been trained on, which means
their knowledge is limited to historical events, facts, and concepts present in
their training datasets.
○ Certain tools may not be aware of events that have occurred after their data
cutoff and cannot predict future events. That said, AI models are being
constantly updated, and as new tools emerge, we can increasingly expect
these tools to have access to current events and information (see here).
○ Students need to be taught that these tools can produce content that is toxic,
offensive, or harmful, and may negatively impact learning experiences for
certain groups of students based on race, gender, sexuality, or socioeconomic
status.
○ They may also generate output that fails to reflect the rich diversity in student
populations or to foster equitable learning environments for underserved
learners in education systems. 13
12
AI tools typically fail to reason or check the veracity of their claims by design. Most are examples of
large language models (LLMs) which statistically predict the most likely words to follow in a sequence,
akin to auto-complete on steroids. Consequently, tools like GPT4 will not evaluate the veracity of a
statement, only the likelihood that a series of words are statistically likely to ‘go together’.
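The "auto-complete on steroids" idea can be made tangible with a minimal sketch. This toy bigram counter is not how modern LLMs are implemented (they use neural networks over subword tokens, and the corpus below is invented for illustration), but it shows the same underlying objective: emit the statistically most likely continuation, with no notion of truth.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus, split into word tokens.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" (seen twice, vs "mat" and "fish" once each)
```

The model happily continues any prompt, true or false, because it only tracks which words co-occur, never whether a claim matches reality; scaled up by many orders of magnitude, that is the limitation the footnote describes.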
13
The ability of AI tools to clarify complex concepts and offer tailored guidance can also make them
highly useful for students living with disabilities, struggling with writing and spelling, or learning in
a second or subsequent language. For instance, if English is not a student’s first language, these
tools can assist in drafting or paraphrasing (or translating and explaining content back to their native
language). If a student learns better via image than word, AI can generate explanations in the former
rather than latter medium. For more on how AI can be used to foster accessibility and support see
here and here.
○ Moreover, if used for grading assignments or exams, AI tools could potentially
lead to unfair grading practices by favoring certain writing styles.
Media Literacy
○ The same is true for image, video, and speech. This rapid proliferation of
AI-generated content may transform the internet into what has been labeled a
‘dark forest’ populated more by machines than people.
○ These new tools may also accelerate the rate with which misinformation and
disinformation are created and shared by aiding in the creation of text-based
and video Deepfakes, fake news and automated propaganda posts shared on
social media, industrial level fake product reviews, misleading statistics and
data visualizations, and images.
○ To counteract this, students must develop strong media literacy skills (what
some have labeled their ‘crap detection’ abilities). They should be taught to be
critical consumers of media and to develop skepticism towards convincing
stories from questionable sources. The evaluation of source credibility and
quality, cross-referencing of facts, analysis of style and bias, and thoughtful
consideration before engaging with and sharing information are all essential
skills.
Ethics of AI-Use
● The companies behind AI tools may collect, share, or sell sensitive data from students
and educators to third party vendors, law enforcement, affiliates, and other users. Even if
users ask for their account details to be deleted, their prompts may remain, meaning
queries about sensitive or controversial topics may persist.
○ Tools such as the AI Incident Database can be used to generate case studies
for those discussions. This resource tracks instances of AI implicated in
potential or actual harm (e.g., generating biased or toxic content, exploiting
workers, facilitating cheating, and generating malware).
● Humanity and AI
○ Students should be encouraged to examine the unique aspects of humanity
that set us apart from AI. By discussing when AI should or shouldn't be used,
they are better placed to understand the balance between leveraging
technology and preserving uniquely human values.
● Over-Reliance on AI
○ Overusing AI tools can lead to missed opportunities for personal growth and
development.
○ It can also lead students to accept an AI recommendation without
understanding or verifying whether the recommendation is correct.
○ By teaching students where and how to use AI systems appropriately, they
can mitigate potential harm and ensure they continue to develop essential
skills.
AI holds immense potential to revolutionize students' educational journeys. These tools can
offer students personalized tutoring, instant formative feedback on their written, verbal, and
artistic products, help when drafting assignments, and much more. When effectively and
properly applied, they can enhance the learning experience, help students cultivate critical
thinking skills, and forge their own path to academic success. I dive deeper into these and
related points in Section 4 below.
14
Academic leadership and faculty may be interested in reading the ethical guidelines on the use of AI
and data in teaching and learning for educators produced by the EU and Jisc’s summary report of AI
in tertiary education.
3. Faculty
Great educators are needed more than ever in an AI era. Yet many may be asking
themselves if, and how, AI can be used to accelerate teaching and learning in their
classrooms. With this in mind, a number of potential use cases are listed below. These
examples are designed to highlight how AI can be used to facilitate course, lesson, and
pedagogical design, student evaluation and support, faculty coaching and administration.
Many educators are already using AI Tools to supercharge teaching in their classrooms to
help with lesson planning, generating creative ideas, and building background knowledge for
their classes. But AI is capable of much more. AI can help with:
● Pedagogical Design:
○ Development of innovative active learning techniques and resources, creating
an engaging and effective learning experience for students.
● Student Evaluation:
○ Assist in the creation of formative assessment tools to measure student
learning outcomes (e.g., practice tests, poll questions, multiple-choice or
short-answer questions, along with sample responses and feedback).
● Student Support:
○ Many educators receive limited actionable feedback that they can use to
improve their teaching practice. AI can also be used to support educators by:
Important: AI should be viewed as a springboard rather than a definitive end point in the
design, development, and delivery process. The outputs you obtain from AI require
modification, fact-checking, and refinement to ensure their effectiveness and accuracy.
For context-specific examples, consult Table 2, which demonstrates the ways one AI tool
(GPT4) can be used to support a foundational course on AI literacy for incoming first-year
students.
Table 2. Overview of the different ways that just one type of generative AI (e.g., GPT4) could
be used by faculty in course, lesson, and pedagogical design, as well as student feedback,
and administration.
● Write policy for course syllabus
● Design activities that incorporate the physical classroom
● Provide students with low-quality and high-quality writing examples
● Provide faculty with feedback on the quality of their instruction and tips for improvement
● Identify ways to accommodate students with specific learning differences in written and math activities
● Craft digital choice boards to hone students' critical media literacy skills, empowering
them to scrutinize how AI-generated content portrays specific topics and to reflect on
the reasons behind such portrayals.
● Generate counter arguments in Philosophy lessons, act as a Buddhist monk interviewee
in a Religious Studies lesson, and make formal notes from a mind map on the
whiteboard.
● Present counter arguments or simulate dialogues with historical figures, authors,
scientists, and explorers.
● Create rap music about similes and metaphors, which students can then edit and set
to their own beats.
● Augment the Think-Pair-Share routine by having students initially think about a
prompt they would feed to AI to solve a question, discuss those ideas with a peer, run
the prompt, and then share their findings with the peer or wider class.
● Challenge students to revise their prior projects: have them feed work from a previous
year to AI, ask it to identify flaws, and then use that list of issues to refine their initial
ideas or build a new prototype design.
● Write a choose-your-own-adventure story to interactively explain concepts to
students.
● Encourage students to develop board games, prototypes, or inventions based on
course content, with AI assistance in the design process.
● Draft scripts for a podcast, video, infographic, meme, poster, or timeline. Have students
revise those drafts and then record a podcast, video presentation, or musical
composition.
15
For more tips and recommendations on how to use AI tools to support your teaching, see here,
here, here, here, here, and here.
● Remix student work, such that students submit content in one form (e.g., essay,
lesson summary, video presentation) and an AI tool “remixes” that work into a
different form (e.g., a rap song, poem, or book with illustrations). Seeing their work
remixed can stimulate students to think about it in new ways.
While AI's text-generating capabilities are impressive, its potential extends far beyond this
realm. There are many other tools in the broader AI ecosystem that educators can draw on.
For instance, they can:
As AI continues to evolve, so too will its education applications, enriching the learning
experience for students and educators alike.
○ Conveying this information requires that faculty first identify when, where, and
how they want students to use such tools in their courses. For instance:
■ What are the cognitive tasks students need to perform without AI
assistance?
■ When and why should students rely on AI assistance?
■ How can AI enhance the learning experience?
■ Are new grading rubrics and assignment descriptions needed?
● Craft An AI Policy Tailored to Your Course Aims
○ Once you’ve identified the above, develop a course policy outlining the
acceptable use of AI tools, citation requirements for AI outputs, and
consequences for policy violations (for more on this see Section 2.1).
○ Several institutions and educators have already taken such steps, and their
insights can inform your own policy design (also see here, here,
here, and here).
○ Elsewhere, the Sentient Syllabus Project argues for three principles when
designing courses for an AI era: (a) AI should not be able to pass a course,
(b) AI contributions must be attributed and true, and (c) the use of AI should
be open and documented.
AI tools may present educators with a challenge when it comes to student academic
integrity. But those same tools can be co-opted to help students develop their
analytical and problem-solving skills. For instance, educators can ask an AI tool to:
○ Write an essay and then have students critique that essay against a scoring
rubric, offer feedback for improvement, and revise the content accordingly.
○ Write 10 multiple-choice questions on a topic covered during class, and then
have students write critiques about whether those questions effectively
address the core issues discussed in class, and evaluate the quality of the
distractor items.
○ Write a short research paper or report on a historical event. Students are then
asked to identify logical inconsistencies and accuracy issues in that
AI-generated report, trace the sources used, and assess their validity. Doing so
would encourage students to think about fabrications, misrepresentations,
fallacies, and other biases in AI output.
○ Have students play the Real or Fake Text game: test their ability to distinguish
between human- and AI-authored text, and use this experience as inspiration
when designing their own AI-detection or evaluation tools.
○ You will need to play around with these new tools and techniques if you want
to truly grasp their potential and utility. This hands-on experience will provide
you with invaluable insights into how students may interact and learn from
such innovations.
○ Educators will find themselves learning about these tools simultaneously with
their students. By embracing a co-learning approach, they can embark on a
journey of discovery alongside their students, exploring new tools and
techniques as collaborative partners. This mindset will foster an environment
of open learning and experimentation, where both educators and students
benefit from one another's perspectives and experiences.
■ See Section 8 (Resources) for a wealth of such materials.
○ What is quickly becoming apparent is that the quality and content of an AI's
answer depend heavily on how the prompt is phrased.
○ Being able to phrase prompts effectively (i.e., ‘prompt engineering’) has a
marked impact on the output obtained from the AI, and will allow you and
your students to get the most out of this new technology.
■ For more information on how to prompt engineer effectively,
see here, here, and here.
4. Students
Earlier sections of this report have touched on many student-relevant issues, from concerns
about AI-fuelled academic dishonesty to the need for foundational AI and media literacy
training, clear academic guidelines surrounding AI use, and potential use cases. Alongside
these points, students will also need to understand that there are two distinct ways in which
AI can be employed.
The first involves misusing and over-relying on AI tools, submitting machine-generated work
instead of their own. Doing so sacrifices the process of learning for the fleeting success
of a passing grade, ultimately leaving them ill-prepared for a shifting job market and a rapidly
changing world.
The second involves using AI to enrich their educational journey: recognizing that these
tools can act as personalized tutors, assessing their current abilities and customizing content
and teaching methods to suit their needs, thereby offering more targeted and effective
support. These are tools they can use to hone their critical thinking, communication, and
knowledge-transfer skills, improve their self-testing capabilities, and unleash their creative
potential. In what follows I consider several ways that students can draw on AI in effective,
ethical, and responsible ways. 16
Although effective, the time, effort, and scalability demands of this approach have long
constrained its widespread adoption. AI tools have the potential to address these
limitations, functioning as personalized AI tutors with a range of capabilities, such as:
○ AI debates can take place before, after, or during class, allowing students to
prepare for upcoming lessons or reflect on previous ones. In-class debates
can also begin with students practicing their points against AI before
engaging with their peers.
○ By adjusting the rules, time limits, and specific roles, AI tools can be used to
help students develop quick thinking, organization, and adaptability skills.
Additionally, they offer students low-stakes practice in refining their debating
abilities, encouraging them to reflect on their current understanding, identify
missed points, and recognize areas for improvement.
○ Educators can then provide students with a scoring rubric, asking them to
analyze, give feedback on, and grade that AI output: what was correct,
problematic, or missing from that output, and how it could be improved.
Students could do so individually using track changes in Google Docs,
collectively using social annotation tools, or via short explanatory videos or
audio.
17
"Socratic" AI tools face many of the same limitations as previously outlined, including limited
context-awareness, nuance, and empathy, all key components of the Socratic method. Educators
must therefore provide students with guidance and oversight so that they can effectively deploy this
tool.
18
Dual Assignments could be used to similar effect. Here students are offered a choice between two
versions of the same assignment: one utilizing AI tools and another without. In the AI-assisted
version, students must submit their AI prompts and the AI-generated output, and indicate where they
have improved upon that output and added their own perspective. In the traditional version,
students complete the assignment and sign a statement confirming that no AI was used. In both
cases, students are assessed on how well they have demonstrated depth of knowledge, either
through their changes and improvements to AI-generated content or via their own original writing.
are not only prompted to recall knowledge from memory but also reflect on
the importance of that information and justify their reasoning.
The aforementioned examples are only a thin slice of all the ways that AI tools can be used.
Students can use those same tools to:
● Generate Ideas. AI tools can prove invaluable for students during the ideation
process, especially for those new to, or grappling with, the early stages of a project.
The “regenerate response” feature of certain tools allows students to generate
multiple answers to the same prompt, extract the best elements from each, and
produce a robust list of ideas. AI can help categorize and prioritize those ideas, and
provide an initial assessment of their feasibility, relevance, and potential impact given
the project’s goals. It can also encourage students to consider alternative
perspectives and approaches, and help generate visual aids (e.g., mind maps) that
represent the students' thinking, potentially revealing unseen connections
and new areas for exploration.
Academic leaders and educators face the question of whether and how AI tools should be
used in their institutions and classes. If they opt for AI adoption, then it will become essential
to train students on how to effectively interact with those tools. Prompt engineering, a skill
crucial to extracting optimal content from AI, should be a key area of focus in early training
programs (for more on this topic see here, here, and here).
Without proper training, students will likely resort to teaching themselves by exploring
websites containing examples of text and image prompts generated by others, or prompt
marketplaces that allow them to buy tailored prompts for their individual needs. Such an
outcome may disproportionately favor wealthy or technologically sophisticated students over
others.
A more equitable approach would involve student workshops that showcase effective prompt
engineering through interactive activities. Students could create and compare weak,
mediocre, and strong prompts to see the vastly different outputs they elicit from AI tools.
They could participate in “prompt competitions” where they collaborate (in pairs or small
teams) to develop criteria for building strong prompts and then use those criteria to judge the
prompts and associated responses of other teams. They could also be introduced to
resources such as Prompt Box, which enables them to download, back up, and share their
prompts with both professors and their peers. Such tools would encourage collaboration
when mastering AI interaction and could also be used in both classroom activities and later
assessment.
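One way to make the difference between weak and strong prompts concrete in such a workshop is to decompose a prompt into explicit components (role, context, task, constraints, output format). The sketch below is a hypothetical illustration; the function and component names are my own assumptions, not a standard prompt-engineering API:

```python
# A minimal, illustrative sketch of structured prompt building. The idea is
# that a "weak" prompt supplies only the bare task, while a "strong" prompt
# adds a role, background context, explicit constraints, and a required
# output format. All names here are assumptions for workshop purposes.

def build_prompt(task, role=None, context=None, constraints=None, output_format=None):
    """Assemble a prompt from optional components, one per line."""
    parts = []
    if role:
        parts.append(f"You are {role}.")
    if context:
        parts.append(f"Context: {context}")
    parts.append(f"Task: {task}")
    if constraints:
        parts.append("Constraints: " + "; ".join(constraints))
    if output_format:
        parts.append(f"Respond as: {output_format}")
    return "\n".join(parts)

# A weak prompt: task only.
weak = build_prompt("Explain photosynthesis.")

# A strong prompt: the same task with role, context, constraints, and format.
strong = build_prompt(
    "Explain photosynthesis.",
    role="a biology tutor for first-year undergraduates",
    context="students have covered cell structure but not biochemistry",
    constraints=["under 300 words", "define all technical terms",
                 "include one everyday analogy"],
    output_format="three short paragraphs followed by a one-sentence summary",
)

print(weak)
print("---")
print(strong)
```

In a workshop, students could feed both versions to an AI tool and compare the outputs, then argue over which components made the largest difference.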
5. Assessment
Traditional teaching practices in many institutions still hinge on the lecture format, where a
large group of students passively listen to a lengthy lesson delivered by a single professor.
While this approach may be resource-efficient for both professor and institution, it is an
especially poor method for encouraging student learning. Likewise, traditional assessment
practice often relies on high-stakes summative assessments which demand rote
memorization, retrieval, and application at a single point in time (e.g., quizzes, end of course
exams, single-draft essays). Although efficient for evaluating large numbers of students,
these methods are suboptimal for promoting retention of concepts and skills, as information
which is not consistently reinforced is typically forgotten.
Such practices were ill-suited before advances in AI and they are especially inadequate in
the AI era. If AI tools allow students to learn in highly personalized ways, tailored to their
readiness level, whenever and however they want, then students will increasingly question
the value and high cost of lectures, degrees, curricula, and courses centered on passive
knowledge transmission.
Similarly, in a world where AI can comfortably solve multiple-choice and short answer
questions, craft essays, and write reports, students may be tempted to misuse or over-rely
on such tools in the hunt for a better grade. When students inappropriately submit
AI-generated content as their own, educators find themselves in the frustrating situation of
evaluating machines rather than humans. This dilemma has led many to transition from
digital to analog assessment, opting for handwritten essays completed during class, or
high-stakes, in-person oral and written exams at the end of courses.
Taken together, one thing is certain: the educational sector must fundamentally re-evaluate
its approach to assessing student learning in the AI era. The focus should be on devising
improved methods for measuring the acquisition and application of skills and concepts over
time and across various contexts, while simultaneously preparing students for workplaces
that require AI proficiency.
Consequently, educators have two options. The first is to modify teaching and
assessment in ways that make mindless or inappropriate AI use difficult or problematic. This
requires that they place an emphasis on teaching methods which require real-time
interactivity, adaptability, and critical reflection, and also create conditions where relying on
AI-generated content is suboptimal given the speed of the activity and/or the nature of the
response that is required. In short, educators should:
○ Two-Stage Assessment
■ Students first complete and submit an assessment individually (e.g.,
pre-class work, poll question, quiz) then come together as a group to
tackle novel, more challenging questions. This method encourages
peer-to-peer learning and the application of acquired skills and
concepts.
■ Although AI may assist in the first stage, its limitations become evident
in the second stage, where human interaction and cooperation are
essential. By assigning greater weight to the collaborative component,
educators can emphasize the importance of real-time learning.
○ In-Class Presentations
■ Students prepare and deliver a presentation during class, with faculty
and their peers asking questions about different aspects of that
presentation. This requires students to “think on their feet” and
demonstrate the depth of their understanding in real-time.
○ Debates
■ Students are presented with a “Big Question” or challenge relevant to
the course and required to explore or defend a certain position, while
adjusting to incoming questions and remarks. Although AI tools might
be used during preparation, students are ultimately responsible for
responding to questions and remarks during the debate itself.
○ Discussion
■ Educators can also facilitate discussions by presenting a topic and
requiring students to share their opinions or respond to their peers'
comments. This method encourages active listening and thoughtful
reflection on real-time information.
○ ‘Teach Back’
■ This communication technique, often used in healthcare settings,
involves asking patients to explain, in their own words, the information
they have just received. This allows healthcare professionals to
identify gaps or misconceptions in patient understanding and provide
clarification as needed. This same method can be adapted to the
classroom, with students verbally restating key concepts, ideas, or
instructions from a lesson or discussion in their own words.
○ Writing Sprints
■ These short, timed exercises help students develop their writing skills
and receive immediate feedback. Students can summarize class
discussions, connect learning goals to their lives, or draft upcoming
assignments. Alternatively, students could be asked to take two
concepts covered during the lesson and highlight the relationship
between those concepts.
19
For still other strategies see here and here.
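The weighting idea behind the Two-Stage Assessment above can be sketched as a simple weighted average; the function name and the 60/40 split below are illustrative assumptions, not a prescribed grading scheme:

```python
# Hypothetical two-stage grade calculation: the collaborative second stage,
# where AI assistance is of limited use, counts for more than the individual
# first stage. The weights here are assumptions for illustration only.

def two_stage_score(individual, group, group_weight=0.6):
    """Combine individual and group stage scores (each on a 0-100 scale).

    Setting group_weight above 0.5 emphasizes the real-time, collaborative
    component of the assessment.
    """
    if not 0.0 <= group_weight <= 1.0:
        raise ValueError("group_weight must be between 0 and 1")
    return (1.0 - group_weight) * individual + group_weight * group

# A student who leaned on AI in stage one (90) but struggled in the live
# group stage (60) ends up below one with the reverse pattern.
print(two_stage_score(90, 60))  # 72.0
print(two_stage_score(60, 90))  # 78.0
```

The design choice is simply that real-time performance dominates the final mark, which blunts the incentive to outsource the individual stage to AI.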
The common thread linking these teaching and assessment methods is their shared focus
on real-time interactivity, adaptability, and critical thinking. By engaging in spontaneous
discussions, debates, presentations, and performances, students are encouraged to think on
their feet and respond to probing questions or requests for relevant examples. Such
high-speed, dynamic activities exploit the weaknesses of AI (i.e., the need to pause,
input a prompt to an AI tool, wait for a response, and then share it) and challenge students to
demonstrate their understanding through real-time, social interaction.
The second option available to educators is to lean into AI tools in teaching and assessment.
Rather than simply avoiding AI use entirely, or trying to catch students misusing them,
educators can actively encourage students to draw on AI tools (where appropriate). For
instance, students could use AI to help them prepare for in-class presentations, debates,
discussions, interviews, simulations, and role-playing activities; to find inspiration for
writing sprints, media, and performance-based assignments; to obtain personalized
feedback and suggestions for iterative assessments; or to explore potential solutions to the
challenges faced in their authentic assessments. Critically, students would need to acknowledge and
cite AI use, analyze the information provided, and carefully evaluate if and where to utilize it.
○ AI tools may be highly effective in generating essays and reports. But they are
still poor at iteratively reflecting on, and revising, past work in ways that
involve evaluating and updating sources, or building complex, coherent
argumentation.
○ Humans have the ability to craft rich and detailed stories based on their
personal histories, contexts, and cultures. In contrast, AI tools struggle to
achieve similar levels of depth or authenticity. They will hedge their
responses, omit details, and struggle to maintain a coherent sense of self
across paragraphs. Without extensive prompt engineering, they produce
content that seems comprehensive but is in fact generic, neither insightful nor
original.
○ One way to increase the chances that output is student rather than
AI-generated is to leverage human uniqueness in your assessments:
20
For instance, a peer assessment approach could be used where students are required to provide
comments on their classmates output (and vice-versa) using social annotation tools such as
Hypothes.is or Perusall. These tools allow students to respond to one another's annotations and ask
questions that can further discussion around the topic.
21
This idea is akin to a “portfolio of thinking”. In the same way a graphic design student may compile a
portfolio of their designs to demonstrate their growth and proficiency, so too can students in other
courses compile a portfolio of ideas and reflections on topics relevant to their own studies.
■ Allow (or require) students to relate the academic topic to an area of
personal interest.
■ There are also many political, cultural, artistic, and racial biases
present in AI training datasets. Assessment could directly reference
these biases, or seek sources and inspirations that extend beyond
biased training datasets.
○ AI tools are also currently limited in their ability to synthesize patterns across
multiple sources such as books, blogs, media narratives, conversations,
podcasts, lived experiences, and market trends. Humans are able to observe
and analyze a wide range of real-time inputs, and this ability can be
capitalized on to design more insightful assessments.
22
Once again a better strategy may be to allow students to draw on AI tools where appropriate. For
instance, they could be encouraged to write about contemporary topics and/or compare them to
related historical events with the aid of AI tools.
● Require Reflection
○ Research shows that authentic assessment offers numerous benefits for both
students and employers. Furthermore, it serves as a powerful strategy to
mitigate over-reliance on AI, as it requires students to reflect on how they
applied their specific skills to address a specific contemporary challenge
faced by a specific (local) partner.23
○ The rationale behind this approach is that without access to laptops, smart
devices, or the internet, student performance during synchronous sessions
cannot be AI-driven. Some institutions have also opted for high-stakes oral
exams for similar reasons.
○ While analog assessment has its merits, such as being an easy way for
educators to prevent students from using technology during assessment, it
also has significant drawbacks:
■ Students may still use AI when preparing for written exams, which
means there is no guarantee that the assessment truly evaluates their
unique perspective, only their ability to remember and reproduce
material in a single high-stakes session. This is especially true when
compared to the benefits of formative (authentic) assessment.
23
For more on authentic assessment in an AI era see here, here, here and here.
■ Oral exams face issues with scalability and time efficiency, and demand
high effort from both educators and students, particularly at larger
institutions.24
In short, curricular and assessment redesign that mitigates against mindless AI use, combined
with intentional integration of AI into teaching and assessment, may be a more pragmatic
solution than attempting to build AI-resistant assessments or removing technology from the
assessment process entirely.
AI is transforming the job market that graduates will soon step into. Advances in automation
are creating jobs for some while making many others redundant, with machines substituting
for humans in sectors where their skills can be easily replicated. Elsewhere, AI is mastering
abilities once thought unique to highly-educated professions such as financial analysis, web
design, legal research, and journalism. 25 And it’s getting better each day. 26
Institutional leaders and educators who are serious about preparing their students for an
AI-powered workplace will need to reflect on several points. First, upon graduating, students
will enter a workplace that increasingly expects them to be proficient in the use of various AI
24
For additional discussion around the merit of such an approach see here.
25
Recent research suggests that around 80% of the U.S. workforce could have at least 10% of their
work tasks affected by the introduction of AI tools, while approximately 19% of workers may see at
least 50% of their tasks impacted.
26
The prospect of human workers being replaced by new technology has always been a source of
anxiety. Debate currently rages on whether the doom and gloom around AI is similarly overblown or
appropriate given the unique features that AI possesses. For more on trends in discourse around this
topic also see here.
tools (e.g., to automate and increase the efficiency of research, ideation, writing,
visualization, analysis, and other activities). If they want to succeed, they need to have
sufficient training in those tools, and also possess goal-setting, decision-making, and moral
reasoning capacities which allow them to recognize when those tools should and should not
be used or their content trusted.
Second, students are going to require a strong understanding of their professional options in
a job market which is increasingly mediated by AI tools and systems. Institutional services
such as student advising and professional development will need to engage in honest
dialogue with students as well as highlight steps they can take now to prepare themselves
for a rapidly changing job market.
Finally, graduates entering the employment market will soon find that their value no longer
lies in how much specialized knowledge they possess upon leaving university, or their ability
to brainstorm, plan, conduct research, translate languages, generate content, or automate
tasks. AI can instantly access the accumulated information present on the Internet, and
perform the aforementioned functions far more efficiently than any human can. Rather, a
graduate will find that their value is increasingly determined by their ability to do what
machines and AI cannot. In the years to come, success will belong to those with
21st-century durable skills.
Broadly speaking, the skills students leave university with can be divided into three
categories. The first are perishable skills. These refer to abilities, knowledge, or concepts
that have a relatively short half-life (i.e., the amount of time which elapses before half of
one’s skills are superseded or become obsolete). Examples include the ability to code in a
specific programming language or use a certain software tool, network security or encryption
method, marketing strategy, or negotiation technique. Such skills quickly lose their value as
technology, industry trends, or professional requirements change over time.
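The half-life idea above can be made concrete with a simple exponential-decay formula; the function and the half-life figures below are illustrative assumptions, not measured values:

```python
# Illustrative sketch of skill decay using the half-life definition from the
# text: after one half-life, half of a skill's relevance remains; after two,
# a quarter; and so on. The specific half-life numbers are assumptions.

def skill_value(t_years, half_life_years):
    """Fraction of a skill's original relevance remaining after t years."""
    return 0.5 ** (t_years / half_life_years)

# A perishable skill (e.g. fluency in one software tool) with an assumed
# 2.5-year half-life, versus a semi-durable skill (e.g. a design paradigm)
# with an assumed 10-year half-life, both evaluated 5 years after graduation.
perishable = skill_value(5, half_life_years=2.5)
semi_durable = skill_value(5, half_life_years=10.0)

print(round(perishable, 3))    # 0.25
print(round(semi_durable, 3))  # 0.707
```

Under these assumed figures, only a quarter of the perishable skill remains relevant five years out, while roughly 70% of the semi-durable skill does, which is the gap the durable-skills argument below builds on.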
The second category involves semi-durable skills. These are not based on individual
techniques or methods but rather on the more general frameworks that underpin such
technologies, processes, and tools. A semi-durable skill would involve training students to
understand programming paradigms, grasp design principles, or comprehend
business management theories. Although these skills possess greater longevity and
importance than perishable skills, they are also subject to replacement as a field progresses,
expands, and evolves.
Educational models based on perishable or semi-durable skills were already facing problems
prior to AI. The half-life of such skills has seen a sharp decline over the past decade,
meaning that many graduates find their skills are not matched to changing job market
demands. AI further compounds these issues. If AI tools allow one to extract, summarize,
explain, and communicate information, learn in highly personalized ways, whenever and
however they want, then students will increasingly question the value and high cost of
degrees, curricula, and courses centered on knowledge transmission. Upon leaving
university, they will also find themselves competing with tools that already possess
instantaneous access to, and mastery of, a vast range of techniques, methods, and
strategies (perishable skills), along with the general frameworks which underpin them
(semi-durable skills).
Clearly another approach is needed. This leads us to the third, and arguably most vital,
category of skills: durable skills. In contrast to perishable and semi-durable skills, which
teach students "what to think" (i.e., a specific technique or its underlying framework), durable
skills educate students on "how to think." And unlike perishable and semi-durable skills,
they are highly transferable across domains and contexts, and resilient in the face of
technological and sectoral evolution. Examples include the ability to think critically and
creatively, solve problems, communicate effectively, exhibit emotional intelligence, and
successfully collaborate with as well as lead others. 27
Students in possession of durable skills are already highly sought after by employers, able to
move with ease within or between industries, and equipped with a relatively “AI-proof” skill set
that is currently beyond the reach of such tools and likely to remain so for some time to come.
○ To equip students with durable skills, it is essential to first identify and define
what those skills are. This necessitates the creation of a hierarchical learning
taxonomy composed of learning outcomes (specific skills that can be
practiced and assessed individually), nested within sub-competencies
(groups of complementary skills), which together form a broader set of overall
competencies (areas of related skills utilizing similar cognitive and behavioral
aptitudes). All three components in the taxonomy need to align with the
institution's core mission and objectives while remaining responsive to wider
market forces and employer demands.
27
Although I place strong emphasis on the importance of durable skills, I also recognize that students
require technical or specialized knowledge in many fields. Therefore a tree-shaped model may
be an appropriate way to visualize skill development, with durable skills forming the roots and trunk of
the skills tree, semi-durable skills its branches, and perishable skills the leaves that come and go with
changes in technological and sectoral demands.
28
What follows is a concise, simplified vision of pedagogy centered around 21st century durable skill
acquisition. See here for a more detailed and nuanced treatment of durable skills.
● Engage in Intentional Curricular Redesign
○ A skills-based approach not only requires that we radically rethink how we
teach but also how we assess learning. Standard assessment practice
involves evaluating students on their acquisition and retention of specialized
knowledge, and relies on infrequent and high-stakes assignments such as
essays, exams, and quizzes. These assessments, intended to offer insights
into prior performance, often leave students without actionable feedback on
how to iterate and improve future performance.
In contrast, education centered on 21st century (durable) skills differs in nearly every way.
When the acquisition and repeated application of durable skills across time and context,
both inside and outside of the classroom, becomes the central aim, several important
consequences follow.
First, the learning process itself is elevated to center stage. Students need to show up and
interact, critically reflect, productively struggle, and leverage their human uniqueness to
solve authentic, experiential, real-world problems. These are all areas where AI
demonstrates weaknesses. Second, the purpose and method of assessment shifts from
summative to formative, from evaluating students through singular scores or grades at the
end of courses, to providing them with iterative feedback on how to continue mastering
durable skills. When students are evaluated based on how well they creatively and critically
“think on their feet” during synchronous, real-time activities such as debate, discussion,
performance, or presentations, their ability to misuse or over-rely on AI is drastically
reduced. This is also true when they are evaluated on their ability to take skills acquired in
the classroom and apply them to personal, local, and contemporary issues faced by their
real-world communities. AI can certainly provide inspiration and suggestions in such cases,
but it cannot be mindlessly applied to solve such problems.
Finally, graduates entering the job market who are able to think critically and creatively,
communicate effectively, and lead and collaborate with others, will find that they possess
skills that AI (and many other graduates) lack: skills in high demand by employers which
enable them to move easily within and between sectors.
7. Conclusions
The rapid explosion of generative AI tools has brought with it wildly diverging perspectives
on the technology and its implications for higher education. Opinions swing from the
extremely pessimistic (AI heralding an educational apocalypse, a surge in cheating, and the
collapse of traditional education as we know it) to the wildly optimistic (AI as the great
equalizer, granting each student a tailor-made learning tutor capable of transforming them
from ordinary to extraordinary).
As is often the case, the truth likely lies somewhere in between. It's true that past
technologies (such as calculators, Google, and Wikipedia) were met with similar levels of
panic and skepticism. Yet AI tools have the potential to be far more potent and disruptive
than their predecessors. Present-day AI tools can craft persuasive essays and dissertations,
excel in university exams, and grapple with increasingly complex math and coding problems.
The forthcoming generation of tools will showcase even greater creativity and capabilities.
They will be underpinned by a host of APIs that allow users to instantly analyze and visualize
data; work with multimedia input and output; connect to real-time information from the
internet, and much more. In all likelihood, what we're witnessing now is just the first,
tentative step on a much longer journey towards increasingly sophisticated AI adoption in
higher education and beyond.
For academic leaders, it will be vital to steer clear of short-term thinking focused on policing
or banning AI as a means to address academic dishonesty. Instead, their efforts would be
better directed towards devising progressive AI policies, equipping faculty with training on
harnessing AI for teaching and learning, and exploring methods to provide students with
essential training in AI and media literacy. Doing so will allow their students to safely engage
with these tools and critically reflect on the information they produce.
To enrich their teaching methods, content, and processes, faculty members must actively
experiment with these novel tools and techniques. This exploration will not only reveal AI's
potential to automate routine aspects of teaching, but also uncover innovative strategies that
promote rich, equitable learning for their diverse students. By embracing a co-learning
approach, educators and students can collaboratively delve into new technologies, fostering
an environment of open learning and experimentation. This collaborative mindset empowers
educators to more efficiently guide their students in discerning the strengths and limitations
of AI tools, as well as understanding when and how to employ them in the context of their
broader learning goals and objectives.
Academic leaders and educators will both need to carefully re-examine how students are
assessed, and in particular, shift their focus away from learning products and towards
assessing the learning process itself. Greater emphasis will need to be placed on
synchronous, active-learning exercises that have students show up and interact, as well as
critically think and reflect on those experiences. Authentic, experiential assessments that
capitalize on human uniqueness should direct students towards tackling challenging,
real-world problems that are relevant, personalized, and of immediate concern.
For their part, students will need to be taught how to use AI tools ethically, effectively, and
responsibly, in ways that accelerate their learning rather than undermine it. That
responsibility lies with them to ensure that AI-generated content is accurate, tailored to
audience and purpose, and reflective of their own voices and perspectives. They will also
need to learn that, when used appropriately, AI tools can serve as a positive force to support
and enhance their learning (e.g., by acting as a personalized co-pilot that can clarify and
communicate complex concepts, summarize information, generate ideas, and provide
feedback tailored to their level and needs). Such a tool can act as a Socratic tutor or debate
partner, driving the development of critical and creative thinking rather than impeding it.
In closing, what is abundantly clear is that certain educational models are well positioned to
tackle the challenges of the AI era, while others are not. Traditional models, centered around
acquiring and testing perishable skills through summative assessments, inadvertently
encourage students to misuse or over-rely on AI to demonstrate learning, and send them
into the wider job market with skills that rapidly lose their value. In contrast, models that
instill 21st-century durable skills, emphasize practicing and applying those skills through
authentic, experiential methods, and use assessment to provide formative feedback on skill
mastery are more resilient to inappropriate AI use. This approach benefits both students and
their future employers, ensuring graduates remain well-equipped and adaptable in our
rapidly evolving world.
8. Resources
Below is a set of resources for those looking to learn more about developments in AI and
new teaching examples:
AI Tools
● FuturePedia is highly recommended for those looking to explore the current
AI ecosystem and try those tools out for themselves. Each tool is categorized
and tagged (including an Education Assistant category), and you can
subscribe to updates if you wish (also see AI Tool Directory).
● The AI Incident Database tracks incidents in which AI tools and vendors,
including OpenAI, have been implicated in causing or potentially causing
harm, such as generating biased and toxic content, exploiting workers,
facilitating cheating, and generating malware.
● A number of AI tools offer more open and ethical options (e.g., see here for a
list of open source AI models such as Open Assistant and LlamaIndex).
Resource Databases
● AI in Education Resource Directory
● ChatGPT References: a list of teaching and Classroom Resources on AI
● Resources on language models, learning, and teaching
● Questions for writing teachers to consider
● Strategies for mitigating harm associated with language models and writing
● Teaching and Classroom Resources on AI
● AI in Education
● Resources for exploring ChatGPT and higher education
● Twitter Thread compilation of resources
● SFCC Library LibGuide for ChatGPT
● A Small Collection of ChatGPT Articles and Resources
● School Libraries ChatGPT
● Generative AI (ChatGPT) Resources
● Unlocking the Power of Generative AI for Higher Education
● The ABCs of ChatGPT for Learning