AI Hacks for Educators


50+ Practical Tips for Faculty to Save Time by Using GenAI

Kevin Yee, Laurie Uttich, Eric Main, and Liz Giltner

First Edition

FCTL Press
Orlando, Florida
AI Hacks for Educators
by Kevin Yee, Laurie Uttich, Eric Main, and Liz Giltner

Published by
FCTL Press
Orlando, Florida

This work is licensed under Creative Commons BY-NC-SA 4.0. You are free to share (copy and redistribute the material in any medium or format) and adapt (remix, transform, and build upon the material) under the following terms:
• Attribution – You must give appropriate credit,
provide a link to the license, and indicate if changes
were made. You may do so in any reasonable
manner, but not in any way that suggests the
licensor endorses you or your use.
• Non-Commercial – You may not use the material
for commercial purposes.
• Share-Alike – If you remix, transform, or build upon
the material, you must distribute your contributions
under the same license as the original.
• No additional restrictions – You may not apply legal
terms or technological measures that legally restrict
others from doing anything the license permits.

Cover design by Laurie Uttich.


Image of woman generated by Adobe Firefly.

Copyright © 2024 FCTL Press

FIRST EDITION
Printed in the United States of America

To our families.

As many authors will tell you, every work is influenced and aided by many more people than the authors alone. Several friends and faculty colleagues
helped us find and explore individual AI tools,
including Dexter Hadley, Bill Zanetti, Richard Hofler,
Patsy Moskal, Wendy Howard, Rebecca McNulty,
Rowan Jowallah, and Lily Dubach.

As interest in AI grows, so too does the mushrooming network of faculty support. This past year has seen the
creation of FALCON, the Florida AI Learning
CONsortium, which includes many support offices
from campuses around Florida’s higher education
landscape, in addition to many frontline faculty. There
have also been multiple groups on the UCF campus
sharing ideas and resources about AI. The work from
both groups is reflected here in these pages, and we
owe our thanks to all of them.

We are indebted to Lee Dotson at the UCF Libraries for hosting this and other open source e-books at the UCF
STARS repository.

Finally, we are grateful to the members of our leadership team at UCF who supported this journey,
including Provost Michael Johnson, Vice Provost Jana
Jasinski, and Dean of the UCF Libraries Beau Case.
Introduction
This book, our second in as many years about Generative
AI (GenAI), is a result of our growing realization that
faculty need ongoing support with AI tools. Your co-authors work in the teaching & learning center at the
University of Central Florida (http://fctl.ucf.edu), and we
see firsthand that the faculty appetite to learn more about
artificial intelligence is insatiable. It is not enough to
provide merely an orientation and initial training. As
faculty become familiar with one or more GenAI tools,
their level of sophistication rises, and they are ready—and
even hungry—for new challenges. Even more importantly,
AI tools continue to proliferate, with new ones coming to
market constantly, and even the familiar ones update over
time, both by adding functionality and by changing how
the tools might be used.

This work attempts to capture a snapshot in time (mid-2024) of the various ways GenAI could be used by
educators in the course of doing their jobs. Many of the
specifics discussed here may well become outdated very
quickly, but the germ of the idea will hopefully ring true
even with AI tools of the future: namely, that we
educators, like all workers in the knowledge economy
being turned upside down by the promise of AI, need to
demonstrate our ability to USE artificial intelligence, and
to add value to its output. As the now-common saying
goes, “You will not be replaced by AI… you will be
replaced by someone who knows HOW to use AI.”

All of your authors come from a background in humanities
and writing, and so it is not surprising that each of us
experienced some version of the seven stages of grief when GenAI burst onto the scene. Do you recognize yourself, or any of your colleagues, in the seven-stage framework popularly adapted from Kübler-Ross’s On Death and Dying (1969)? Many of us started at the
beginning phases by being in shock and denial—and we
experienced pain, anger, and depression—but after
spending a few months bargaining, we’re now
reconstructing, working through the changes, and settling
into acceptance... even hope! We’re aware that some
frontline faculty members are still in the denial phase, and
it is our hope that this book will help convince them that
AI is not only inevitable and already here, but that it can
be quite useful to them as faculty members as well.

In the sections that follow, we will lay out applications for AI by educators, proceeding one at a time to place emphasis on each basic idea while also allowing space for some examples and a slightly deeper dive. We refer to these AI applications as “hacks,” a term by now familiar from uses such as “life hacks,” where it has become synonymous with “tips and tricks.”

GenAI Fundamentals: How Large Language Models Work

It’s worthwhile to pause for a brief explanation of how ChatGPT and similar tools work. There are many different
types of AI, and several of them have been part of our
everyday lives for years. Smartphone apps that provide
driving directions are powered by AI, as of course are
home assistants (Alexa, etc.) and machine translation apps that effortlessly convert English into another language,
even signs and printed text as seen through the phone’s
camera, and vice versa. And there are many other such
examples in modern life.

ChatGPT and several of its competitors (Copilot, Gemini, Claude, Perplexity, etc.) are part of a branch of AI called
“generative” AI, which is a category of software that
generates an output after having learned common patterns
and structures. The category includes not only text but also
images and even video. Those that focus on text are called
Large Language Models (LLMs). LLMs can generate text
because they have absorbed billions or even trillions of
pages of text, often described as having been “trained on”
the material. This could include parts of the internet,
published books, academic articles, and almost any printed
and digital material deemed relevant for a broad audience.
Ultimately, exactly what an LLM has been trained on
remains a black box mystery, as few of the companies have
been forthcoming with details. ChatGPT is so named because it provides a conversational (“chat”) interface built on a generative pre-trained transformer (“GPT”).

LLMs are essentially word-predictors. Based on all those prior examples of recorded text, they have a good idea of
the next logical word in any given sentence. Thus, these
systems don’t actually think. They don’t even comprehend
the meaning of their words, leading some scholars to
compare LLMs to parrots—they can mimic speech, but
don’t understand what they are saying. Therefore, everyone from educators to students needs to remember
that these word predictors are not answer-generators.
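
To make the word-predictor idea concrete, here is a toy sketch in Python. It is only an analogy (a simple bigram counter rather than a neural network over subword tokens, and the miniature “corpus” below is invented for illustration), but it shows how fluent-seeming text can emerge from nothing more than next-word statistics:

from collections import Counter, defaultdict

# A miniature "training corpus" (invented for this example).
corpus = (
    "the syllabus sets the tone for the course "
    "the syllabus sets expectations for the semester "
    "the instructor sets the tone for the class"
).split()

# Count which word tends to follow each word (a bigram model).
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None."""
    candidates = next_words[word]
    return candidates.most_common(1)[0][0] if candidates else None

# "Generate" text by repeatedly choosing a plausible next word.
word, generated = "the", ["the"]
for _ in range(6):
    word = predict_next(word)
    if word is None:
        break
    generated.append(word)

print(" ".join(generated))  # e.g. "the syllabus sets the syllabus sets the"

The program produces plausible-looking word sequences without any notion of what a syllabus is, which is exactly the parrot problem described above, only at a vastly smaller scale.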

Or to put it more accurately, LLMs CAN—and almost always will—generate answers, but they are not always
accurate. In the rare cases one of the LLMs refuses to offer
an answer, it will claim to not have access to the most
recent events or what’s current on the internet, or it
will offer a rationale why it should not generate an answer
for a particular query. But if it does provide an answer, it
will deliver its response with verisimilitude and with
absolute certainty.

It’s understandable why users might accept LLMs’ explanations and arguments since they are usually
delivered without the slightest hedging or trace of
hesitation. Yet their answers are not always trustworthy. Since they are not accessing a database of information known
to be true, but merely generating “plausible next words,”
LLMs sometimes invent (often called “hallucinate”) facts
and details wholesale, and baldly assert them as if they
were true. Fans of the board game Balderdash will
recognize a similarity—like players in Balderdash, LLMs
try to convince their audiences that they have provided
true definitions. At the same time, while LLMs should be treated with skepticism when it comes to factual information, academic citations, and specific quotes, they
are actually quite good at brainstorming and ideation—in
particular when creating lists of sub-topics or bullets that
relate to a given prompt.

AI Fluency

Clearly, students will need new skill sets to meet the challenges of future workplaces. Much has been
accomplished toward career readiness through the efforts
of the National Association of Colleges and Employers
(NACE), particularly through the definition of eight core
competencies: career and self-development,
communication, critical thinking, equity and inclusion,
leadership, professionalism, teamwork, and technology.

We first defined AI Fluency in our 2023 open-source book ChatGPT Assignments to Use in Your Classroom Today at
http://bit.ly/chatgptassignments. Since then, we’ve updated
this definition and now view AI Fluency as consisting of
five components:

1. Understanding how AI works
2. Deciding when to use AI (and when not to)
3. Applying effective prompt engineering methods
4. Displaying digital adaptability
5. Adding human value

These components are, in our view, broad enough to capture AI Fluency for not only ChatGPT and all LLMs,
but also extend beyond GenAI to other types of AI as well.

The first component, understanding AI, is important because there are different branches of AI—each with its
own strengths and weaknesses—and one must understand
the AI currently being employed to fully grasp its
capabilities. LLMs like ChatGPT, for example, may be

prone to hallucinations, but this is not true of every type of
AI. Artificial intelligence tools of the future may not
construct output in the same fashion, so it’s important to
have a minimal understanding of how the AI tool at hand
creates its output.

Deciding when to use AI and when not to is the second component. An experienced AI user must exercise sound
judgment about the output of a particular AI. With LLMs,
we know that it’s neither safe nor ethical to copy their output
wholesale and represent this text as something created by
an individual. There are also ethical issues of ownership
and copyright, including the works of deceased creators.
On the other hand, some uses of AI may be warranted, or
even desired. For example, instructors may assign students to use LLMs to brainstorm ideas, or may use the tools themselves to assist in creating an assignment.

Because AI doesn’t have the lifetime of experiences a human does, it is extremely poor at reading between the
lines, or knowing what an imprecisely worded question is
actually asking. Therefore, our third component to AI
Fluency is creating effective prompts that elicit useful or
desirable output. As the common phrase goes, if you put
garbage in, you’ll get garbage out. We need to think about
prompts (the question posed to the AI) in ways that are
systematic, intentional, and deliberately plotted. While
some disciplines already train students to think with these
methods, especially about the architecture of programming
or arguments, many do not. Prompt engineering is in
many ways a discipline unto itself, and we all need to become better at it. See the following section for more
details about effective prompt engineering.

The fourth component is digital adaptability. We recognize that artificial intelligence will continue to
evolve; in fact, many believe its evolution and
advancement will accelerate over time. As a result, people
will not stay fluent if they are habituated solely to the one
AI system they know. There will assuredly be future AI
products, and these need to be approached with an attitude
of curiosity and optimism, or at least not with reluctance,
irritation, or resignation that yet another new system
needs to be learned. We will all need the kind of
disposition that welcomes lifelong AI learning and the
flexibility to keep our attitudes positive as we embrace
ongoing AI change.

A truly critical skill, especially with ChatGPT and its hallucinations, is the ability to analyze and evaluate AI
output, and in the process add human value, which is our
fifth and final component of AI Fluency. We are
increasingly seeing deepfakes in images and videos
concerning public figures and celebrities, such that one
truly should not trust one’s eyes when viewing digital
images. We know that LLMs invent facts, names, and
publications, and they do so with such confidence as to
border on chutzpah. Users need to remember to approach
AI output of all types with appropriate skepticism, a skill
we likely need to develop further. Because AI can already
automate so many tasks—and because future artificial
intelligences will continue removing human agency from
additional processes—the only employees needed in the workplace of the future are ones who can add additional
value to what the AI creates. This might look like
correcting the AI output or applying/integrating it into
other systems and processes that the AI cannot perform.
After all, if workers CAN be replaced by AI, arguably they
deserve to be. Future workers need to be “better than AI”
to compete in the marketplace, and it’s our duty as
educators to get them ready for that future.

A Deeper Dive into Prompt Engineering: CAPTURE

Because LLMs are still so new, there is not yet clear consensus on the best ways to prompt them. Some
prominent advocates, including author and blogger Ethan
Mollick, suggest that no rigid scheme or pattern to
prompts is necessary at all, since LLMs are optimized to
hold ongoing conversations, and users can simply adjust
the ask with additional refinements, even ones written in
half-sentences as one might say out loud to a human
partner in conversation.

Others, however, have tried to put some order and science into the process of prompting. Author Dan Fitzpatrick was
first out of the gate with PREP: Prompt with a concise
command, Role for the AI to undertake, Explicit
instructions on what to do and how to do it, and
Parameters such as format, tone, and length. He later
refined this to PREPARE, adding Ask (tell it to ask you
questions for refinement), Rate (it should grade its own
response), and Emotion (appealing to its emotional side).

Dave Birss’s model is named CREATE: Character (giving
the AI a role), Request (specific output you are looking
for), Examples (give ideas to exemplify the desired tone),
Adjustments (with refinements in follow-up prompts),
Type (define the output’s format), and Extras (such as
encouraging the AI to ask you questions or explain its
thought process).

With these models in mind, we set out to create our own framework that was both more specific AND able to be
collapsed into a shorter format. Our framework is
CAPTURE:

• Context – tell the LLM why we need this output
• Attitude – specify desired sentiment or tone
• Persona – tell the LLM to roleplay as someone (this
often improves output)
• Task – define what output the LLM should create;
the core of the ask
• Uniqueness – include details, adjectives, adverbs to
strengthen output
• Requirements – ask for a specific length, format,
level of sophistication, and the steps the LLM
should take
• Explain – how is this output derived? What steps
did the LLM take to arrive at an answer?

At first glance, our framework is similar to those created by Fitzpatrick and Birss. However, we also recognize that
actual prompts used “in the wild” usually unfold in a
specific order, which does not follow PREPARE, CREATE,
or even CAPTURE.

We take the essence of a real prompt to be, in this order: Persona, Context, and Task (or PCT for short). The
remaining elements of CAPTURE are really sub-bullets
and refinements of the “task”: Uniqueness, Attitude,
Requirements, and Explain.

Here’s an example of a prompt to put into an LLM:

You are a college student researching medieval life (Persona). You need to learn about daily medieval life in
Europe for an upcoming essay you will have to write
(Context). Write five examples that explain how medieval
life was not that different from modern America (Task).
Include both gritty and mundane details, as well as tools used
in everyday life (Uniqueness). The output should be
slightly playful (Attitude). The output should be organized
in bullet points, and should be no more than two pages
long, written at a level a middle-schooler would
understand (Requirements). Specify which research and
sources were used to arrive at this output (Explain).
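
For instructors comfortable with a little scripting, the same prompt can also be submitted programmatically. Here is a minimal sketch in Python, assuming OpenAI’s official Python SDK, an OPENAI_API_KEY environment variable, and an illustrative model name (other LLM vendors offer similar chat APIs):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The CAPTURE elements, assembled into a single user message.
prompt = (
    "You are a college student researching medieval life. "       # Persona
    "You need to learn about daily medieval life in Europe for "
    "an upcoming essay you will have to write. "                   # Context
    "Write five examples that explain how medieval life was not "
    "that different from modern America. "                         # Task
    "Include both gritty and mundane details, as well as tools "
    "used in everyday life. "                                      # Uniqueness
    "The output should be slightly playful. "                      # Attitude
    "Organize the output in bullet points, no more than two "
    "pages long, at a level a middle-schooler would understand. "  # Requirements
    "Specify which research and sources were used to arrive at "
    "this output."                                                 # Explain
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)

Note that the API does not care about the CAPTURE labels themselves; the framework simply disciplines how the prompt text is written.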

Finally, we encourage educators to become familiar with tools that can help you get even better with prompts.
One such tool, Prompt Perfect, asks you to write your
prompt as best you can, then it will interview you for
more information, and finally return a longer, more
specific prompt that can be pasted into your LLM of
choice. This type of inter-tool use of AI is likely the future
of GenAI in higher education, at least in the short- to
middle-term.

A Deeper Dive into Image Prompt Engineering: SCALE

Prompting a GenAI tool to create an image (also called text-to-image functionality) requires some different
vocabulary. The CAPTURE method isn’t a perfect match
for what you need when detailing what an invented image
should look like. Instead, we advocate the SCALE
framework:

• Subject
• Context
• Actions
• Layout
• Elements

The “elements” portion could fruitfully be expanded to include characteristics, details, adjectives, and style of the
subject or overall image.

Here’s an example prompt using SCALE:

Create an image of a cartoon eagle (Subject). The eagle should look friendly, as if from a child’s picture book
(Context). Depict the eagle flying a kite in a thunderstorm
and speaking with a cartoon turkey (Actions). In the
background, show a few trees and, off to the side, a red
barn (Layout). Despite the setting in a thunderstorm, the
image should be bright and cheerful, showing vivid colors
such as the green grass underfoot and the rainbow-colored
kite (Elements).
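
A SCALE-style image prompt can likewise be sent through an API. As a minimal sketch, again assuming OpenAI’s Python SDK, an API key in the environment, and illustrative model and size choices:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The SCALE elements, assembled into a single image prompt.
prompt = (
    "A cartoon eagle "                                             # Subject
    "that looks friendly, as if from a child's picture book, "     # Context
    "flying a kite in a thunderstorm and speaking with a cartoon "
    "turkey. "                                                     # Actions
    "A few trees in the background and a red barn off to the "
    "side. "                                                       # Layout
    "Bright and cheerful despite the storm, with vivid colors "
    "such as green grass underfoot and a rainbow-colored kite."    # Elements
)

result = client.images.generate(
    model="dall-e-3",   # illustrative model choice
    prompt=prompt,
    size="1024x1024",
    n=1,
)
print(result.data[0].url)  # URL of the generated image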

Scope, Reach, and Organization of This Book

The tips and tricks provided in this volume were predominantly created without one particular GenAI tool
in mind, partly in recognition that today’s leaders in LLM technology may not be the leaders of tomorrow, or that
LLMs might not even be the AI that matters mere years
from now. However, this approach was not universally
adopted. Some of the hacks discussed here are in fact
specific to one tool. We are aware these examples will not
age well. Other GenAI tools might add similar (or better)
functionality, for instance. It’s also possible the tools
displayed here might remove the discussed functionality in
the future. Even the business models may change for
specific tools, making the provided explanations outdated.
Nevertheless, we felt it important to capture some up-to-
the-minute (for 2024) best practices, which, in today’s
fragmented world of AI tools, meant including some
specific tools and their capabilities.

All of the sample prompts provided in this book were vetted with ChatGPT 3.5 (the public and free version) in
mid-2024 to verify that they would provide interesting
and relevant output that might be profitably utilized across
disciplines. Future queries of ChatGPT, or of other LLMs, might not yield productive results. That said, it is our hope
and strong suspicion that many, if not all, of the sample
prompts provided here could apply to other LLMs beyond
ChatGPT as well. We expect that most of these strategies,
in other words, could be used by almost any related AI.

As for AI-generated text within this volume: there isn’t
much. We wrote this book in mid-2024 without using AI,
except in limited ways to test sample AI prompts for each
of the assignments, and to aid with first drafts of the
chapters about research. While we recognize that future
book-length works may opt to follow our advice about
using AI to help outline and chart writing projects, our
own process did so mostly as verification and after-the-fact analyses instead of as first steps. We find it to be
natural that current pedagogy experts and holders of
terminal degrees may continue with their established
composition practices that do not use AI in the initial
stages, while the opposite may become more common for
undergraduates in the next few years. Eventually, of
course, these undergraduates will become our institutional
colleagues, and yet another shift in mindset and practices
may become advisable and necessary.

While the book is organized as one continuous set of numbered hacks, it is also divided into sections: making your
teaching life easier, making your faculty life easier, making
your research easier, and additional specific AI tools you
might want to consider. We conclude with some
ruminations on how AI tools might continue to evolve, as
well as our ways of using them to improve our lives and
work outputs. We hope this book will provide you with
support during these exciting—and daunting—times and
inspire you to explore the possibilities of integrating LLMs and other AI tools into your curriculum.

Kevin Yee, Laurie Uttich, Eric Main, and Liz Giltner
UCF Faculty Center for Teaching and Learning

Section I: Make Teaching Easier

1
Craft an Enticing Welcome Statement for Your Syllabus
We often think of syllabi as legal documents that help
adjudicate process and grade disagreements with students,
or at a minimum that help set and calibrate student
expectations when it comes to the number of major
assignments, the anticipated weekly workload, and unique
classroom policies. While these are all important and
necessary functions of syllabi, a syllabus that only includes
such core elements can very easily drift into an off-putting
legalistic tone, creating the risk of alienating students in
their first introduction to the course, when in fact we
could (and should) be using the syllabus document to set a
positive tone. After all, the syllabus frequently serves as
students’ first introduction to you and to the course. If
they encounter only no-nonsense expectations, it’s only
natural that they may take the tone of the syllabus as an
indication of what your personal interactions could be like.

Given that LLMs are trained by ingesting countless pages of text, it is not surprising that they are expert at crafting
texts themselves, especially ones shorter than a few pages.
In this case, since we are only seeking one or two
paragraphs, any LLM should be capable of creating the desired output and tone. We just need to ensure, via
careful prompt engineering, that we provide enough
details about the desired output that it strikes the right
tone.

Sample LLM prompt:

Roleplay as a college instructor assembling a syllabus for a course you’ll be teaching for the first time. You’ve already
set the schedule of readings and assignments, and you
know how the assessments will be weighted for the final
grade. But you’re worried the tone of the syllabus might
feel too legalistic for students and may make them
unconsciously dislike the subject. Create a two-paragraph
welcome statement for the start of the syllabus that will
instead have the effect of making students excited for the
subject and this class. The class is first-semester organic chemistry, a class many students dread for its math content. Historically, many of the enrolled students are pre-med majors who dislike organic chemistry as a tiresome requirement for the major. Stress the positive
elements of the course for a pre-med student, as well as
how its topics will be useful for many STEM-related
disciplines. The tone should be optimistic, encouraging,
inspiring, and persuasive for a traditional-aged college
student.

2
Summarize Your Biography for the Syllabus
Because students so frequently encounter depersonalized
syllabus documents, meaning there is little information
about the instructor as a person, many students never
become curious in the first place about the human being
behind the class. This is a missed opportunity, however,
since there are proven benefits when instructors humanize
themselves to students, including students trying harder
and studying more due to the social and interpersonal
dynamics, ultimately leading to greater student success.
Thus, it’s a recommended best practice to introduce not only the course in the context of the skills to be prioritized, but also the instructor, ideally in a way that demonstrates your experience and readiness to teach the topic while also injecting your own personality and humanity into the introduction. Creating such a summary
on one’s own is possible, but it requires considerable time
and thought, particularly when customizing a biography
for a specific course.

While it can be a drawn-out process to manually provide direct and succinct connections between one’s diverse CV
and the course in question, such a task is simple indeed for an LLM. While some LLMs crawl the current internet
where your CV may be available online, it’s safer to direct
the LLM to look at your CV only. This can be done by
uploading the CV to the LLM (if you’ve chosen an LLM
that allows uploads), or by pasting relevant parts of your
CV along with the prompt.

Sample LLM prompt:

You are a sociology instructor. Next semester you will teach a class on environmental sociology, which you have
not taught in more than seven years. Summarize the
attached curriculum vitae into an “About the Instructor”
statement for the environmental sociology syllabus,
making sure to point out the instructor’s past publications
and preparation to teach this subject. The tone should be
inviting, warm, and relatable for the average college
student. The statement should only be one paragraph long
so that students actually read it.

3
Draft a Syllabus Statement to Map Course Outcomes to NACE Competencies
It is an unfortunate truth that many students view college
solely as a means to an end, as if its only value is the
diploma that is needed to land the job. The underlying
assumption tends to be that what they actually need to
know to perform the job will be taught to them while
working in the job. We know this to be fallacious
thinking, and so do employers, but it has proven difficult
to disabuse students of this kind of mental shortcut.

One strategy that can help is to remind students—in every semester and in every class—how a particular course will
impart skills and knowledge that will be useful for their
future careers. Fortunately, we don’t need to reinvent the
wheel of career readiness. The National Association of
Colleges and Employers (NACE) has long advocated for a
useful framework of competencies that evaluate career
readiness across multiple domains: Career and Self-
Development, Oral/Written Communication, Critical
Thinking/Problem Solving, Teamwork/Collaboration, Technology, Leadership, Professionalism/Work Ethic, and
Equity and Inclusion.

Because the NACE competencies are not new, every LLM should know what they are in detail, and it would not be
difficult to upload/paste the course’s Student Learning
Outcomes (SLOs) into the LLM and ask for a syllabus
statement that maps the SLOs to the skills employers want.

Sample LLM prompt:

Pretend you are an experienced humanities instructor, about to teach an Intro to Humanities course. You are
aware that a majority of students in this class are taking it
to meet General Education requirements for graduation.
As a result, historically many students have struggled to
see the relevance of the course to their future careers,
especially if they are studying a STEM-related discipline.
Write a one-to-two paragraph statement for the course
syllabus that highlights for students the ways in which this
course will advance their career readiness, as seen through
the categories of the NACE competencies. Do this by
examining the student learning outcomes and major
assignments in the uploaded document. The statement
should have a conversational tone, making the instructor
appear approachable yet persuasive. You should also
clearly indicate which assignments/outcomes map onto
which NACE competencies. There is no need to find a
match for every NACE competency.

4
Write a Syllabus Statement for How to Succeed in this Course
Although we might normally assume students who were
accepted to college are truly “college-ready” in terms of
fundamental math and writing skills, many are not… and
most have not yet fully developed their critical thinking
and problem-solving abilities. Surprisingly, this is
sometimes true even at some of the most prestigious
universities with strict admissions standards. Some
students simply had an easier time in high school. Because the assignments and pace of learning in high school are less demanding than in college, many students were able to earn
good grades in high school without needing to employ
study skills or habits that help achieve long-term memory
storage and deep learning. As a result, even “straight A”
students sometimes don’t know the most fundamental
study strategies like spaced retrieval practice and self-
quizzing. We as faculty should all feel a sense of ownership
that it is our job, in every class we teach, to ensure
students are exposed to techniques to help them become
better students.

Effective study skill strategies are well understood by learning scientists, in particular cognitive psychologists.

Although people may have differing learning preferences
(such as group vs. solo study, silent study vs. listening to
light music, etc.), it turns out that by and large humans
learn in the same ways. Because this has become an
agreed-upon science, much literature exists that lists the
basics, and therefore LLMs are easily able to summarize
these study skill techniques into a syllabus statement.

Sample LLM prompt:

Assume the character of a full-time lecturer working for the Interdisciplinary Studies program. Next semester you’ll
be teaching a First Year Seminar course, which is often
taken by First Time in College (FTIC) students as an
onboarding course to the institution. Draft a syllabus
statement entitled “how to succeed in this course” that
explains effective study skills and strategies. Make use of
wisdom gleaned from cognitive psychology but do not
specifically refer to any particular studies. You should list
at least six strategies, with 1-2 sentences of description and
elaboration each, that will aid students in truly
memorizing the information contained in your course.
Keep the tone practical and attuned to the level of a high
school senior. Make explicit to readers exactly why each
strategy works for long-term memory formation.

5
Compose Syllabus Policy Statements
Syllabi are pretty self-explanatory when students are
seeking to understand the workload of the course or how
many tests and essays there are. But college classes differ
from each other in significant ways, often having to do
with the instructor’s expectations. One instructor might
expect students to rewrite essays significantly between
drafts, while another might never bother to check. One
instructor might expect every citation listed in APA 7e
format, while another might assume that no one needs to
use citations at all. If these expectations are not clearly
communicated, at a minimum students could become
confused. But worse outcomes are also possible, such as
students making their own assumptions and losing points
if they didn’t match the instructor’s assumptions. Syllabus
policy statements provide clarity for both parties.

Experienced teachers know how to craft syllabus statements but might still benefit from the brainstorming
prowess of LLMs to dream up other possible, even
advisable, policy statements. Newer instructors,
meanwhile, might depend heavily on an LLM-generated
list of policy statements so that they can leave as little to chance as possible. Both populations of teachers will also
benefit from how quickly LLMs can generate text. In no
time at all, LLMs can generate lists of policies to include, as
well as draft versions of the policies themselves, saving
faculty hours of work.

Sample LLM prompt:

You’re an instructor with a new colleague in your department who wants advice on creating syllabus policies
for an introductory course on Finance in the College of
Business. This new colleague has never taught before and
has nothing to base their syllabus on. Suggest a list of 5-8
policies for grading that are common on syllabi in similar
courses, another 5-8 policies about technologies (both
instructor and student), another 3-5 about academic
integrity and the technologies related to monitoring, and
finally another 3-8 policies about the course not in those
categories. Make each policy only 2-3 sentences long.
Ensure that the tone is neutral rather than condescending,
rude, or assuming the worst in students, but also not so
welcoming that students are tempted to seek exceptions.

If the LLM provides definitions but not examples, follow up with a request for sample language for all the policies.

6
Prepare a Course Proposal Submission
In many institutions, faculty require permission from their
institutional peers before they can teach a completely new
course for the campus. This may take the form of
departmental approval, as well as permission from a
curriculum committee, and sometimes even an
interdisciplinary committee such as one for General
Education or Undergraduate Studies. In some states,
faculty even need to gain approval from a state governing
body or board. Such course approvals are often thorough,
requiring the submission of not only the course description
and the whole syllabus, but often the complete list of
student learning outcomes, deliverables and main
assignments, and a schedule of weekly topics. It requires
significant time and effort to put together all those
documents—without a guarantee that the course will ever
come to fruition—which can act as a disincentive to
attempt submission in the first place.

Large Language Models can create first drafts or polished revisions of all the required documents mentioned above,
thereby saving instructors a lot of time, or at minimum providing them with a starting point and lots of ideas for
inspiration.

If you have access to an LLM that allows you to upload documents, such as GPT-4o, Claude, or Perplexity, you can
get even more enhanced results. Begin by uploading
similar documents from other courses, then ask for a
custom output that mimics those uploaded samples but for
your new topic. Note that the LLM may have a limit on
how many files per message are allowed.

Sample LLM prompt:

Let’s start a role play. You will be a relatively new instructor in the anthropology department looking to get a
brand-new course approved through various institutional
committees. The course will be a 3000-level class entitled
“Denisovans and Hobbits: Separating Facts From Fantasy.”
Create first drafts of all documents required to submit a
proposal, including a paragraph-length course description,
a syllabus, 8-10 student learning outcomes written with
Bloom’s action verbs describing what students will be able
to do by the end of the course, a list of larger assignments,
and a weekly schedule of topics.

7
Revise Existing Assignment Prompts to Nudge Student Success
While stronger students know how to interpret the
nuances of an assignment prompt to ensure they are
maximizing their responses and they aren’t forgetting any
angles, many students have not yet learned how to answer
with sophistication and complexity. Certainly, additional
student preparation and practice would help, but so too
could a longer, more detailed prompt that clarifies
directions for students to follow when crafting a response.
This is not to say that our “normal” assignments are
inadequate, but it is true that we sometimes forget to
explain HOW an assignment will be graded (or how
students should best go about completing the assignment).
Even more common is that we often forget to explain
WHY an assignment is assigned in the first place. If
students don’t fully grasp the purpose behind an
assignment, they might struggle with adequately meeting
that need—and the reverse is true as well. Knowing the
purpose will help them rise to that specific occasion in
their answer.

To improve your assignments, paste them one at a time
into the LLM along with a prompt to add transparency
(the “why”, the “how graded”, and the “how to complete”
details). You may also want to ask the LLM to explain the
features of a high-scoring example.

Sample LLM prompt:

Let’s start a role play. You will portray an instructor in Interdisciplinary Studies about to begin a course teaching
students to use ArcGIS software. You’re aware that some
students in the past have struggled with the major
assignments. Adjust and lengthen the assignment prompt
that is pasted below in such a way that students will have a
clearer understanding of how to proceed. Begin by
explaining the rationale for giving the assignment (what
skills does this reinforce?), and make sure you explain both
how it will be graded in terms of specific rubric elements,
and what steps students should take to complete the
assignment successfully. Include details about what a high-
scoring paper would look like.

8
Create Assignment Prompts
While faculty members are experts in their field, that
knowledge doesn’t always translate easily to course design
or effective prompts for large assignments. It takes
creativity to write good prompts, which can therefore
require large investments of time. In today’s digital era,
when electronic tools make it easy for students to share
material from previous semesters to future semesters, there
is a need to constantly refresh our assignments, increasing
our workload.

Apart from the time-saving element, LLMs can improve the process because they have access to many hundreds of
examples, having ingested billions of pages of text in their
training. This extensive training makes it possible, likely
even, that an LLM-generated series of suggestions for
assignments could yield unexpected discoveries. It is
recommended that the LLMs be asked for multiple top-
level ideas for prompts at first, since faculty will want to
judge the idea itself without the details needing to be in
place. Once a winning idea is selected, it’s easy to create
follow-up prompts to generate the final full prompt,
including a longer description, a list of student learning
outcomes, and a list of requirements for the product.

As a side note, consider that many students appreciate a
choice in projects, so it may be best to narrow the list
down to 2-3 choices rather than just one project everyone
in the class must work on.

Sample LLM prompt:

In this scenario, you will inhabit the persona of a mechanical engineering instructor. Knowing that students
share assignments and submissions with each other from
prior semesters, you are motivated to create new
assignments each year for your capstone class. Create a list
of 10 possible capstone projects for your Senior Design
class in mechanical engineering. For each list item, provide
a 1-2 sentence overview (we will later choose a winner
and seek a longer description, SLOs, and product
requirements).

9
Invent Course Readings for Writing Courses
Courses that primarily teach writing or languages (including both English, as in ESOL courses, and foreign languages) face unique difficulties in the era of GenAI.
Almost any assignment one might dream up is something
an LLM would be able to write for a student. In these
courses, the language itself is the content, and LLMs by
definition deliver flawless language. Any culturally-related
content (such as famous literature written in the target
language) we might think to include in the assignment so
that there is content beyond language is likely known to
the LLM as well. What we need is a piece of content NOT
known to the LLMs, yet not tremendously difficult for
students to grasp.

Oddly enough, even though LLMs create the problem here, they can also create the solution! If we need
content that isn’t part of the ingested training of any LLM,
we just need to ask this same LLM (or a different one) to
invent something, such as a story or a case study. With
everything entirely fictional, the details won’t exist in the
training of any LLM, and an assignment based on that reading would be safe to give students in an LLM-infused world. One caveat: some LLMs use input prompts, as well
as their generated outputs, to continue to train the model.
In that event, a fabricated product or story might become
part of the knowledge base of this particular LLM. If you
have access to it, a “walled garden” LLM is the optimal
place to generate the invented product or story, since this
type of LLM does not “phone home” and train the model.
You are most likely to have access to such a walled garden
LLM if your university has arranged for this type of LLM
at an institutional level.

After the fictional text is created, it can become a reading assignment. Then, a writing assignment based on that
reading can be created. (In the example below, writing a
marketing strategy could be one such assignment.)

Sample LLM prompt:

Working as a graduate teaching assistant in charge of your own Composition class, create a detailed case study about
“divisional zyblocks,” a physical product used between
fingers to alleviate arthritis. Although this is a fictional
product, you will write a case study about it as if it were
real. Invent the history of its development, including
details about key players and conflicts (financial, political,
or medical) that occurred along the way to FDA approval.
Make sure the case study is at least five pages long, adding
enough embellishments and details that it reads as
realistically as any published case study. End the case with
the next pending step being marketing to the public.

10
Generate Test Questions
Faculty can be forgiven for re-using tests semester after
semester. After all, who has time to write all new
questions each term? The problem is, eventually these test
questions make it out into the student community,
whether through third-party clearinghouses (CourseHero,
Chegg, etc.), on-campus Greek organizations, messaging
apps (WhatsApp, GroupMe), or even loose friendship
networks. Once the actual test questions are out there,
student cheating is likely to increase.

LLMs, fortunately, are quite adept at creating test questions, including multiple-choice questions, arguably
the most desirable kind for instructors, because they are
self-grading. Of course, LLMs are equally good at
inventing open test questions as well, in case you have
adequate grading resources at your disposal to use this
more accurate gauge of student understanding.

The prompt for the LLM could be as simple as asking for ten multiple-choice questions about a single sub-topic
from the current chapter. However, if a level is not
specified, you might find the output heavy on definitions and identification, which are low-level tasks that might
not be a match for what you’re trying to test. Instead, it’s
useful to specify one or more levels of Bloom’s Taxonomy
for the questions to be generated. It can be an especially
effective strategy to generate a miniature quiz bank for each
sub-topic. Most LMSs (Canvas, D2L, Blackboard) allow for
test question groups, such that every student gets a
different test, but each test has one question on each sub-
topic from the chapter.

Note: it’s important to reset the LLM by clicking the “new topic” (or “new conversation”) button after each
generation of questions. Otherwise, what’s being asked
will begin to blur, as the LLM believes you are continuing
a conversation rather than starting a new one.

Sample LLM prompt:

You are a physics instructor teaching first-year college Physics. You’ve had evidence before that tests from
previous years are in the student population, so you want
all-new tests. Create ten multiple-choice questions on the
subject of Vectors, Scalars, and Coordinate Systems. The
first five should be gauging Knowledge or Comprehension
on Bloom’s Taxonomy, while the last five should test
Application. Each question should have a relatively short
stem, and four possible answers. The three distractors
should be realistic options that an uninformed student
might select.

11
Generate GRE-Style Test Questions
In the era of GenAI, the take-home essay may be dead. In
a majority of cases, students can meet the assignment by
having an LLM generate an output. Some students know to
doctor the output to decrease the chances they will get
caught, but by and large this type of assignment is on the
wane, particularly as automated writing tools continue to
increase in complexity. Eventually we won’t be able to
confidently distinguish student writing from AI writing no
matter what we do.

As a result, we need new ways to think about measuring critical thinking, especially in online classes, where we
don’t have the option to perform in-class writing. The
most viable solution, at scale, is to switch to multiple-
choice testing that measures higher-order thinking. Such
questions are difficult to write, but we’ve seen them work
in tests such as the GRE (and to a lesser extent, the SAT).
The examples usually boil down to a dense text to read,
followed by several questions about the text that call for
judgment and evaluation.

We can generate such questions with an LLM, saving vast
quantities of time, so long as we prompt carefully. We can
also ask some LLMs—like GPT-4o and Gemini
Advanced—to create images, tables, charts, and other
illustrations to accompany these questions.

In the example below, a few alterations from the previous example yield totally different results.

Sample LLM prompt:

You are a physics instructor teaching first-year college Physics. You’ve had evidence before that tests from
previous years are in the student population, so you want
all-new tests. Create five multiple-choice questions on the
subject of Vectors, Scalars, and Coordinate Systems. All
five of these questions should test Analysis or Evaluation
on Bloom’s Taxonomy. The questions should be preceded
by a text and/or graphic to read and interpret, and all five
questions will depend on understanding the reading and
graphic. The three distractors should be realistic options
that an uninformed student might select.

12
Develop Rubrics
Many faculty members rely on rubrics to grade student
work because they often streamline the process by
allowing us to easily pinpoint relevant categories that
impact grades. Students appreciate rubrics as well since
they provide more details about our expectations for the
assignment. However, while rubrics save time during the
grading process, they can be time-consuming to create
and calibrate.

LLMs can help. Because they generate text so well, it’s easy
for them to create the various levels of each rubric
category (top scoring, middle levels, and low scoring).
Faculty members only need to specify what the categories
are, each one functioning like a sub-grade of the overall
score. Such an output alone would be a time saver for
faculty, even if delivered one paragraph at a time.
However, many LLMs—including Copilot, ChatGPT,
and Gemini Advanced—can deliver an entire rubric in a
table format, which can then be copy-pasted directly into
a document file. While the documents cannot yet be
imported directly into an LMS for the kind of grading
that involves clicking the rubric on-screen, the table
format makes it easy to see what to duplicate within the
LMS interface.

Note: the LLM’s first attempt may lack enough specifics to
be immediately usable. You may have to refine the prompt
and try again, add details regarding your assignment,
provide a strong student example, and/or engage the LLM
in an ongoing conversation to add the missing parameters.

Sample LLM prompt:

Roleplay as an instructor who is teaching future K-12 educators. You need to create a rubric for a new
assignment you are trying out, in which preservice
teachers will give a mock math lesson about multiplying
fractions to fellow undergrads portraying middle schoolers.
Write a rubric in table format that gives up to ten points
each in these categories: completeness, accuracy,
interactivity, and classroom management. Provide detailed
descriptions of how each level of accomplishment looks for
each category, with the levels including advanced (9-10
points), medium (7-8 points), and developing (0-6 points).

13
Flag Surface Errors in Writing
Not every college student enjoyed rigorous writing
instruction in high school or middle school. A number of
college students, in fact, simply accept the score assigned
to writing assignments without questioning what could
have been improved. As a result, some students lack the
kind of fundamental curiosity that would help prevent future
recurrence of structural problems that might be endemic
to their particular style of writing. A student prone to
dangling modifiers, for instance, might never know why
they keep earning scores in the 80s, yet also never inquire.

Large language models that allow uploads (Claude, Perplexity, and premium options) are ideally suited to the
task of creating a custom summary of an individual
student’s pattern of grammatical and syntactical mistakes.
In fact, the LLM can not only generate a summary of the
patterns, but it will also excel at generating tutorials
tailored to each pattern of mistakes. In this fashion, faculty
can free themselves from needing to personally educate
students on writing conventions, and yet continue to hold
them accountable for effective communication and
provide guidance for them to improve for the next
assignment. Of course, it may be most effective to provide
students with an opportunity to earn additional points if they use the feedback provided by the LLM to improve the
problematic sentences. Over time, this kind of correction
will help students to improve their writing mechanics.

Sample LLM prompt:

Please take on the persona of a graduate student grader. Students have submitted five-page essays to the LMS for
electronic grading. Since this is a history class, the
instructor does want the grading to reflect students’
writing abilities, and you have been asked to put a priority
on identifying patterns in students’ grammar and syntax
mistakes, and to help them identify how to avoid future
problems of a similar nature. Scan the uploaded essay for
common mistakes in sentence structure, such as comma
splices or dangling modifiers, as well as others. Identify the
patterns of mistakes this essay makes, then provide a label
to the problems and short definitions. Finally, include
tutorials on how to structure sentences differently to avoid
these problems in the future.

14
Derive Custom Comments for Essay Grading
Veteran educators know the pain of working through a
collection of essays and realizing that many students are
committing similar errors, whether they be aligned with
interpretive/analytical misjudgments or more technical
errors of grammar or syntax. One common grading
technique in response is to craft “common comments” that
apply to most of the submitted essays. While this shortcut
is a valid technique to save the instructor time during
grading, it comes at the expense of extensive comments
customized for each student. As a result, a student
receiving only standardized comments receives little
personalized feedback about their writing.

Using Claude, Perplexity, or similar LLMs that allow for file uploads, import student essays one at a time along with
a prompt to look for certain patterns. If nothing else, LLMs
excel at standard written English and would be able to
evaluate writing at a technical level. It is also worth asking
the LLM to evaluate the essayistic elements of the
submission, such as the strength of a thesis, use of
transitions, identified topic sentences, connecting claims to
evidence, linking (rather than stacking) of ideas, and an effective conclusion. In this fashion, the student receives
two types of feedback: one on sentence-level mechanics,
and one on the macro elements of the essay. This frees the
instructor to focus their comments on conceptual and
theoretical concerns, or other elements of higher-order
thinking being measured by the instrument.

If you have access to Claude Pro, GPT-4o, Gemini


Advanced, Perplexity Pro, or another LLM that allows you
to attach multiple files, you can upload a series of students’
essays and ask the LLM to generate a list of common errors
and subsequent suggestions and share it with the class.

Sample LLM prompt:

Roleplay as a lecturer in criminology teaching a course on


serial crime. You’ve assigned a five-page essay, and all
student submissions have been downloaded from the LMS
to your local computer. Looking at the attachment, offer
one set of suggestions to fix surface errors in the writing,
and a second set of suggestions to improve the essay’s
mechanics, such as the strength and originality of the
thesis, use of transitions, identified topic sentences,
connecting claims to evidence, linking (rather than
stacking) of ideas, and an effective conclusion. Both sets of
suggestions can be numbered lists, up to ten each. Finally,
suggest an overall grade (out of ten) for each set of criteria.

15
Create Activity-Rich Lesson Plans
In classes where PowerPoint presentations are the main
focus, instructors often prioritize delivering content
through slides. However, they may give less consideration,
if any, to incorporating interactive activities and
promoting active learning among students. A lesson plan, traditionally taking the form of a printed sheet of paper
kept hidden from students, often steers faculty more
successfully toward thinking about a mixture of lecture
and interactions. Yet it can be daunting as a new instructor
to dream up enough varied activities to build engagement
and promote learning, and even an experienced veteran
can use help brainstorming.

LLMs are surprisingly good at crafting lesson plans that mix chunked lectures of 5-15 minutes and activities for
active learning. The suggested time allotments for mini-
lectures and activities are particularly useful for new
instructors to gauge how quickly (or slowly) to go when
introducing and reinforcing content.

Instead of simply asking for activities in a vacuum, one option is to guide the LLM by mentioning specific possibilities, such as icebreakers, games, escape room
challenges, debates, civil discourse activities, moral/ethical
dilemmas, or mock trials.

Even without additional resources, LLMs will return a good variety of activities that are usually realistic enough
to use out of the box. If you use an LLM that can scan a
live website (such as Gemini, Perplexity, Copilot, or GPT-
4o), you can optionally ask the LLM to suggest activities
that fit the content from an online repository of
interactions, such as our list at http://bit.ly/FCTL-CATS.

Sample LLM prompt:

You’re a new instructor of anthropology. Since you are new to teaching, you need help writing lesson plans that
ensure you include activities every 10-15 minutes, instead
of just lecturing for the entire class period. Create a lesson
plan for a 75-minute Introduction to Anthropology class.
Today’s class will introduce the chapter on apes and
primates. The students have not yet read this chapter in
the textbook; that will be assigned reading after today’s
class. Include at least three activities from the list found at
http://bit.ly/FCTL-CATS.

16
Create Personas for Students to Use as Role-Play Partners
There is a longstanding history of students interacting
with their peers during class hours as a way of deepening
comprehension and application, as well as breaking up the
class time into different activities for the sake of variety
and mental interest. The most common modes of
interactions here involve quizzing each other, group
brainstorming, or joint problem solving. Another valuable
method is to ask students to engage in a role-play, to
inhabit the persona of someone related to the content
being discussed. The benefits of role-plays are numerous,
and include enhanced attention and concentration, seeing
the content from a novel perspective, and increased
retention of the material since it was experienced so
personally and possibly with sharpened emotions.
However, role-plays can be difficult to craft, and some
students find it awkward or unfulfilling to embrace the
“acting” side of a role-play. Also, any unprepared students
might negate the effects of the activity.

Using an LLM as a role-play partner neatly solves the problems above. Since the “acting” now takes place via typing, fewer students will feel self-conscious. And, barring increasingly rare hallucinations, LLMs will
perform a stellar job at role-playing famous characters, or
even invented ones if given enough details, background,
and parameters.

If you plan to use role-playing frequently in your course, you may find it helpful to subscribe to a role-playing app. (We like Humy.ai, which has trained its LLM on historical figures and allows you to create your own, and RolePlai, which allows you to create a group chat.) Some LLMs also
allow you to “talk” with the character you’ve created, such
as GPT-4o. Pi.ai, another LLM, has a voice feature which
can boost engagement during role-playing activities and
does not require a paid subscription.
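
If you’d rather skip a subscription and you’re comfortable with a little code, the same role-play can run through an API instead of a chat window. Below is a minimal sketch using OpenAI’s Python client; the model name, the patient scenario, and the end-of-session convention are placeholders to adapt, not a prescribed setup.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Persona prompt (hypothetical); swap in any character or scenario.
    history = [{"role": "system", "content":
                "You are a patient who has just been told that your "
                "condition is terminal. React realistically, and challenge "
                "the doctor to find words of comfort or hope."}]

    while True:
        turn = input("Doctor: ").strip()
        if not turn:
            break  # an empty line ends the session
        history.append({"role": "user", "content": turn})
        reply = client.chat.completions.create(model="gpt-4o", messages=history)
        answer = reply.choices[0].message.content
        print("Patient:", answer)
        history.append({"role": "assistant", "content": answer})

Because the full history is resent on every turn, the “patient” stays in character across the whole conversation.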

Sample LLM prompt:

You’re an instructor in the College of Medicine teaching a class to first-year Med students, and the current module
concerns patient interactions, ethics, and empathy. You
need a way for students to roleplay scenarios that involve
difficult conversations, yet feel like a safe space to make
mistakes and try again. Create a prompt for another LLM
that students can use to paste into the LLM and interact
with an AI roleplaying as a patient, while the student, here
playing the attending physician, informs the patient that
the condition they have is actually a terminal illness. Make
it clear that the student-LLM interaction is meant to be an
ongoing conversation. The LLM should react in a realistic
fashion to this news, and should challenge the “doctor” to
find words of comfort or hope.

17
Generate Case Studies and
Micro-Scenarios
Case studies are extremely useful for students, as they are
often effective at piquing students’ curiosity while also
forcing them to apply concepts from the course. Students
need to apply course concepts in order to solve these ill-
defined problems, the very type of problems they are
likely to face once in the working world. All of these
factors contribute to making case studies highly engaging
for students, particularly when approached in pairs or in
groups, which adds social learning to the mix of benefits.
Cases don’t always need to be long and complex to be
interesting and useful pedagogically. Even short two-
sentence micro-scenarios could pose problems that require
students to apply course concepts to find a solution.

However, it can be time-consuming for instructors to locate case studies or micro-scenarios for use in class, and
it’s even more time-consuming to create them ourselves.
Fortunately, both types of problems are easily created by
LLMs, which are surprisingly good at creating realistic
examples. Faculty simply need to prompt the LLM to
generate the requested number of scenarios/cases, and then paste the output into a handout for class (or put on
screen one at a time for small, simultaneous discussions).

Sample LLM prompt:

As a lecturer in Psychology, you have taught Abnormal Psych many times in your career. However, you’ve heard
from a colleague that they had great success in doing
reviews before each chapter test using case studies
analyzed in groups, and you’d like to try it yourself. Create
10 case studies that call for students to decide which
personality disorder is being described. Each case study
should be 4-8 sentences long and should be written in a
way that makes it difficult to decide between possible
diagnoses. Include some cases with more than one
personality disorder.

18
Generate Handouts to Evaluate
LLM Output
One of the most tried-and-true methods of using GenAI
since it was first launched in late 2022 has been to ask the
LLM to generate an output of some sort, and then ask
students to analyze and evaluate its response. Is it biased?
Factually correct? What’s its tone? Does it have blind
spots, or areas of explanation that are too shallow? When
generating code, what errors did it commit, and how could
they be fixed? Can its translation of a foreign language be
improved? The general idea is to habituate students to
both generating AI output and improving on it. In the
workplaces of the future, they will struggle to keep a job if
they are only as good as the AI output. But to evaluate AI
output—and especially to improve it—is an in-demand
need for sure, at least in the short- to middle-term.

It’s possible to place the entire duty on students’ shoulders by asking them to also generate the LLM output, perhaps
even creating their own custom prompt so that they can
practice that skill as well. However, such an assignment
combines skills in a way that might not be advantageous
for novice AI users. For those truly new to AI fluency, it’s
enough to provide the outputs for them directly as a handout, without needing them to also be expert at
prompt engineering. Complex and multi-step tasks are best
tackled one at a time until students are experienced at
both, and then have the available cognitive load to attempt
the integrative exercise.

Sample LLM prompt:

Take on the persona of a college instructor in computer science. You are aware that students have long had access to pre-made code from places like GitHub or GitLab, but LLMs have upped the ante, making it seem easy to obtain custom code and tempting students to trust it more readily than they should, since the code was created on demand and seemingly aligned to the task perfectly. Yet the actual skill students need is to parse pre-existing code and adjust it to fit the requirements in the best possible manner. Create two examples of code that can be made into a handout for students to evaluate, in groups, what could be improved. The first example should use C++ to create a command-line task manager that would allow users to add, delete, and view tasks. The second example should use Python to create a desktop application that allows users to track expenses, categorize spending, set budgets, and generate reports.
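
To picture what students would receive, here is a minimal sketch (our own illustration, not actual LLM output) of the kind of imperfect starter code such a prompt might yield, pared down to a console version of the expense tracker; students could be asked to spot weaknesses such as the module-level state and the missing input validation.

    # Illustrative starter code for students to critique (intentionally imperfect).
    expenses = []  # module-level state; students might suggest a class instead

    def add_expense(amount, category):
        # No validation: what should happen with a negative amount or empty category?
        expenses.append({"amount": amount, "category": category})

    def total_by_category(category):
        # Linear scan on every call; fine for a demo, wasteful at scale.
        return sum(e["amount"] for e in expenses if e["category"] == category)

    if __name__ == "__main__":
        add_expense(12.50, "food")
        add_expense(40.00, "transport")
        print("Food total:", total_by_category("food"))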

19
Create Practice Quizzes,
Worksheets, and Problem Sets
It’s not entirely true that “practice makes perfect.”
Without adequate feedback, students might believe they
have a correct answer when they don’t, perhaps even
leading to error fossilization. But the greater challenge
most students face is a more fundamental lack of practice
in the first place. Problem sets are usually limited in
textbooks, and often look so similar to the examples
provided in the explanations that a true application of the
underlying concepts is avoided by using mimicry instead
of critical thinking. What students need is a longer, more
varied regimen of practice questions that will render them
stronger overall in the application of core concepts.

With most disciplines that have an established base of knowledge, LLMs are quite capable of providing practice
questions that help students prepare for a test. This is
especially true in lower-level courses in the major, where
the epistemologies are well-known and there are fewer
fundamental discoveries in recent years.

The format of the output can take several forms. One simple idea is to generate questions and place them onto the same handout, perhaps to use as groupwork during
face-to-face class time. However, keep in mind that
students need access to the correct answers as well. Those
should ideally come on a second sheet, handed out later so
that students aren’t tempted to take a shortcut to the
answers. The solution that best helps students avoid the
temptation of shortcuts is to place the AI output into
quizzes inside the LMS. This can automate the feedback
without offering students a shortcut in lieu of thinking.

Sample LLM prompt:

You are an experienced instructor in civil engineering, looking for ways to put additional word problems in front
of students, since you know from experience how valuable
that extra practice can be. Your years of teaching have
convinced you that few students will do optional
problems, so you will configure these problems as required
daily quizzes. Create 20 word problems to prepare students
for a chapter test on hydraulics and hydrology. Half of the
problems should be multiple-choice, while the other half
should be open-ended. In all cases, provide a mix of
knowledge, comprehension, and application questions.
Provide the correct answer for each in parentheses after
each question.
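
If you route the questions into LMS quizzes, a short script can reformat the LLM’s output for import. The sketch below is one possible approach, assuming the answer-in-parentheses format requested above and an LMS that accepts a simple question/answer CSV; the file names are placeholders.

    import csv
    import re

    # Assumes each line looks like: "Question text ... (correct answer)"
    PATTERN = re.compile(r"^(?P<question>.+?)\s*\((?P<answer>[^()]+)\)\s*$")

    with open("llm_output.txt") as src, open("quiz.csv", "w", newline="") as dst:
        writer = csv.writer(dst)
        writer.writerow(["question", "answer"])
        for line in src:
            match = PATTERN.match(line.strip())
            if match:
                writer.writerow([match["question"], match["answer"]])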

20
Quickly Create Presentations
One of the primary strengths of an LLM is its ability to
summarize, extract, and synthesize information. By
inputting your content into an LLM—or attaching a new
chapter from an OER (open education resource) or other
common resource—you can quickly prompt the LLM to
create an outline of a presentation with slides that include
a title, bullet points, notes for the speaker, and suggestions
for images. Currently, Claude and Perplexity are the only
two of the larger, free LLMs that allow you to upload a
PDF or other form of text file, eliminating the need to cut
and paste the information into the chat interface while
adhering to its word count restrictions. You can, of course,
input sections of your content into a chat interface and ask
the LLM to generate slides based on that material and keep
feeding the model until it has all the information, but that
process is time consuming and many LLMs have chat
interface limitations that may force you to begin new
“conversations” before you’ve completed the task.

If you have a great deal of content, premium accounts—like GPT-4o, Claude Pro, Perplexity Pro, and Gemini
Advanced—allow you to input multiple documents and
prompt the LLM to synthesize the key points from
all of the materials. This can be especially helpful when you’re creating materials related to specific
themes or concepts.

When prompting the LLM to generate content for presentations, it’s important to ask the LLM to only use the
content you’ll be providing and to avoid adding any of its
own knowledge… and it’s important to verify that those directions were followed; left unchecked, LLMs have a tendency to pull from many sources, which might differ from your inputted content. After approving the content,
you can ask it to generate images (or image ideas) for each
slide. Many LLMs can generate images, even in free
versions. GPT-4o can generate content and images for each
slide, and even deliver it all together in a PowerPoint file
that you can download and tweak.
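
If you would rather assemble the deck locally, a short script can turn the LLM’s outline into slides. Here is a minimal sketch using the python-pptx library; the outline data is illustrative, and the library must be installed first (pip install python-pptx).

    from pptx import Presentation

    # Illustrative outline in the Title, Content format the prompt requests.
    outline = [
        ("What Are Nanomaterials?", ["Materials with features below ~100 nm",
                                     "Examples: nanotubes, quantum dots"]),
        ("Why the Nanoscale Is Different", ["Quantum effects emerge",
                                            "Surface area dominates behavior"]),
    ]

    prs = Presentation()
    for title, bullets in outline:
        slide = prs.slides.add_slide(prs.slide_layouts[1])  # title + content layout
        slide.shapes.title.text = title
        slide.placeholders[1].text = "\n".join(bullets)
    prs.save("nanomaterials.pptx")

Speaker notes and images can then be added in PowerPoint itself, or pasted in from the LLM’s output.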

Sample LLM prompt:

You are an interdisciplinary college instructor who strives to make learning fun for the undergraduates you teach.
You’re creating content for a PowerPoint presentation to
be used for class discussion on types of nanomaterials.
Ignore any knowledge you have of the topic and only use
the text that is at the end of this prompt [or attached]. The
purpose of this presentation is to cover the unique
properties at the nanoscale (e.g., quantum effects, surface
area) and educate students on how these theories impact
their daily lives in interesting ways. Generate the content
by using the Title, Content, Image format and include
notes for me while I’m presenting. Also, suggest images for
each slide and, after I approve, generate those images.

21
Add ALT Text and Captions
As educators, we recognize the importance of ensuring
every student has access to the materials they need to
learn. But writing explanations for complex images can be
challenging. Writing Alternative (Alt) Text is an art and a
science which forces us to consider meaning, context,
conciseness, and language. When looking at graphs, tables,
or other diagrams that illustrate complex concepts, it’s
often difficult to know where to begin.

LLMs are an effective tool for beginning the process of writing Alt Text, and tweaking their output is often a much more appealing option than the blank screen. Many LLMs (including Claude and Copilot) allow you to upload an image; after doing so, you can then provide the LLM with the context of how the image is being used and what it’s communicating to the learner. You might try beginning with a prompt for the LLM to describe the image. After approving its description of the image—and it doesn’t always “see” it correctly—you can provide context for the image’s use and instruct the LLM to write Alt Text.

For the most helpful outputs, it’s often necessary to tell the
LLM the purpose of Alt Text and guidelines for writing it
effectively, reminding it to focus on the meaning of what it “sees” versus a comprehensive explanation of the image.
The LLM will often generate multiple paragraphs to meet
this goal if you fail to remind it that Alt Text should be
concise. For those with GPT-4o, you can upload a
document—a PowerPoint, PDF, or other file—and it
will generate Alt Text for all of the images in it. This
can be a huge time saver, especially if the images have
shared context.
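
For batch work outside a chat window, the same idea can be scripted against an API. A minimal sketch using OpenAI’s Python client follows; the model name, prompt wording, and file path are assumptions to adapt rather than requirements.

    import base64

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Encode the image so it can be sent inline with the request.
    with open("gamification_diagram.png", "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Write concise alt text (under 125 characters) that "
                         "conveys the meaning of this diagram to a reader "
                         "who cannot see it."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    print(response.choices[0].message.content)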

LLMs can also assist with captioning videos. Many apps allow you to input your video and generate captions—
Veed.io is a favorite of many due to its ability to generate
captions quickly and accurately (even removing filler
words)—but one way LLMs can assist you is by editing
auto-generated transcripts, providing you with a clean
version to add to your editing program. LLMs excel at this type of editing, and by prompting the model to ask questions or indicate where it “guessed” on the transcription, you can improve its accuracy even more.

Sample LLM prompt:

You are an instructional designer who is generating alt text for this image used in a presentation for faculty members
interested in research on gamification in STEM courses.
The image is a diagram that models how to incorporate
gamification into a college course and demonstrates how
students can earn points. The purpose of this image is to
demonstrate how the activities are scaffolded throughout
the semester and the impact on the learner. Provide a
clear, concise description that conveys the essential
information of the image to someone who cannot see it.

22
Create Visual Representations
of Data
Whether faculty members are conducting their own
research or creating presentation materials for courses,
many require visual representations that convey
findings, illustrate complex ideas, and facilitate better
understanding of materials. In the past, we relied on
various programs and tools to manually create these
materials. Now many LLMs can create all sorts of charts,
tables, diagrams, and more.

Generating visual representations of data and other work may require more LLM “training” than text-based tasks.
Begin by including the basics—the persona the LLM
should adopt, the task details, the image’s purpose, and the
format—then ask it to analyze your inputted data to
determine the most effective way to visually represent the
data, ensuring key points and findings are clear. Finally,
prompt the LLM to ensure labels are included and the
layout is clean and maintains academic standards.

After you’ve built your prompt, you’ll find some LLMs have more sophisticated abilities to generate these types of visual representations. For example, GPT-4o allows you to upload raw forms of data (such as Excel files, Qualtrics data exports, CSV files, and JSON files) and can create a
wide range of visualizations including waterfall charts,
Sankey diagrams, paradigm diagrams, theoretical models,
storyboards, scatter plots, and more. Claude 3.5 Sonnet can
even create interactive visual representations. The free
versions of Gemini and Copilot can also create many types
of visual representations if you feed the data into the
prompt (versus uploading it as a file). There are also a
variety of apps trained on generating these materials that
are effective in completing more complex tasks.
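
When an LLM returns chart code rather than a finished image, the output often resembles the following minimal sketch (our illustration, with made-up data), which you can run and restyle locally using matplotlib:

    import matplotlib.pyplot as plt

    # Made-up survey results used purely to illustrate the structure.
    conditions = ["Control", "Gamified", "Gamified + Feedback"]
    mean_scores = [72.4, 78.1, 83.6]

    fig, ax = plt.subplots()
    ax.bar(conditions, mean_scores)
    ax.set_xlabel("Study condition")
    ax.set_ylabel("Mean exam score (%)")
    ax.set_title("Exam Performance by Condition")
    fig.tight_layout()
    fig.savefig("results_chart.png", dpi=300)  # high resolution for print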

Sample LLM prompt:

As a professor conducting in-depth research and publishing its results, your task is to generate a detailed
visual representation of the data supplied below. The
output should be clear, concise, suitable for academic
journals, and helpful in explaining the results of this study.
To accomplish that, please follow this process:

• Step One: Carefully analyze the research data to determine the most suitable type of visual
representation.
• Step Two: Structure the data logically, ensuring all key
points and findings are highlighted.
• Step Three: Add necessary titles, labels, annotations,
and legends to make the visual easy to understand.
• Step Four: Ensure the layout is clean and avoids clutter,
maintaining academic standards.
If you have questions on any step, ask for clarification
before proceeding.

23
Develop Modules for
Remediation or Study Skills
It is not uncommon for students to be underprepared for
their classes. In some cases, they register for classes for
which they’ve met the prerequisites, but they never
mastered the fundamental skills they should have before
taking the course. In other cases, they may be new to
college (FTIC students) or new to the four-year institution
(transfer students), in which case their lack of readiness
might have more to do with not having adequate study
skills or study habits. This could be because the challenge
level was lower in previous contexts, so they never needed
to develop these study skills. Of course, it’s always possible
for both scenarios to occur simultaneously, making it
doubly challenging for an unprepared FTIC student to
succeed. The most common courses to encounter these
problems in the early college years are foreign language,
science, and math courses, though many disciplines rely on
basic arithmetic and algebra, and some students have
forgotten basics such as multiplying or dividing fractions.

To meet the needs of students with these deficits, the simplest solution is to provide them with reminders of the basics they should have already mastered, ideally alongside practice questions so they can have some assurance that
they understand the concepts. It’s time consuming to
create a custom remediation module in the LMS, especially
with practice questions. Fortunately, this is the sort of
activity LLMs are good at. The same is true of a module
that helps all students develop effective study habits. Many
courses would benefit from having both types of modules
available to students.

Sample LLM prompt:

You’re an instructor in your third year of teaching college. Next semester you’ll be teaching Calculus-1 for the second
time, and you know from last year that many students
lacked the math preparation for the course. Prepare an
online module for the LMS that will review the basics
from algebra and pre-calculus that students will use in
Calc-1. Also include some foundational arithmetic such as
order of operations. The basics should come in a numbered
list, each with a label, a short definition, and an example.
The module should be long enough that it would take 45
minutes for a student to review carefully. Also create 30
application-level multiple-choice questions to serve as a
post-module quiz.

24
Invite a Virtual Guest “Speaker”
So many of us have daydreamed about the people we’d like
to invite for coffee and the sorts of questions we’d ask
them if we could just have an hour of their time. Often, we
extend this wish list to include who we’d invite to class to
discuss how they’ve impacted our work, provoked our
curiosity, or been influential in our disciplines.

While (unfortunately?) AI tools won’t evolve enough to transport a person into our classroom—or communicate with those who have passed on—they can adopt personas which simulate conversations and add to class discussions.
Some apps, like Humy.ai, have hundreds of historical
figures as personas who are already trained on their work,
time period, and influence, and allow you to add your own
course content or even create your own “expert” that
students can engage with any time they have a question.

But it’s not necessary to subscribe to an app to interact with a persona. All LLMs will respond to prompts that instruct them to act as widely known figures… and, if the expert you’d like students to converse with isn’t as well known, you can train the LLM on who the person is, what they’ve accomplished, and other content that would enrich the conversation. Claude allows you to upload a PDF or other text-based document, simplifying the
training process. For those who have Claude Pro, GPT-4o,
Gemini Advanced, or Perplexity Pro, you can upload
multiple documents and hundreds of pages of text.

When “hosting” this LLM “guest speaker” in class, you can engage students by having them ask the LLM questions, review its responses, and perhaps see if they can “trick” the LLM into providing false or misleading information. You may consider working with Pi, an LLM that has the ability to “voice” its responses and doesn’t require a paid subscription to do so. While you’ll still need to type in your part of the conversation, hearing the “expert” speak can add to class engagement. Students in online classes can be assigned to have a conversation with an LLM and turn in a copy of their chat and their reflections on its output.

Sample LLM prompt:

You are Ada Lovelace, a renowned mathematician and writer, known for your work on Charles Babbage’s early
mechanical general-purpose computer, the Analytical
Engine. Today, you will have a conversation with a
student who is eager to learn more about your life, your
work, and your thoughts on various topics related to your
field. Please respond to the student’s questions as you
would, using the knowledge and perspective that you had
during your lifetime. [OR: Please base your responses
solely on the content I provide here.]

25
Demonstrate Collaborative
Storytelling
When we consider collaborative storytelling as a teaching
tool, we tend to think of it as a pedagogical approach that
works best with creative writing students or others in the
humanities. But collaborative storytelling can engage
students in all disciplines and help develop critical
thinking and problem-solving skills while building
communication and collaboration skills among students.

Collaborative storytelling is part of scenario-based learning and can be applied in almost any discipline. History
students can explore different outcomes of events, re-enact
historical moments, and draw on their knowledge of the
subject matter to develop an accurate—or inaccurate—
storyline. Science students can dip into sci-fi and generate
plausible futuristic technologies. Business majors can
create a start-up and market its launch. Environmental
students can create realistic scenarios based on current
data, and education students can develop storylines that
put them in an IEP meeting with the student, their
parents, and a counselor. (Need an idea for your own
course? Ask an LLM to brainstorm with you.)

By its nature, collaborative storytelling invites group work, and asking students to join together to create these
prompts and participate in the process can build their
teamwork skills. For multiple groups in a class, you could
give students context for the task and then divide them
into subgroups, assigning each different parts of the
narrative (e.g., introduction, exposition, rising action,
climax, resolution) while instructing them to use LLMs to
generate character profiles, scenarios, dialogue, and other
elements. After students have expanded from the LLM
content and reflected on it and the process, they can
present “in order” as the class sees how their work
connects. Finally, a class discussion wraps up the activity
and distills what students gained from the experience.

Sample LLM prompt:

You are an instructor of a college history class, and you’ve divided students into groups to collaboratively create a
narrative around the signing of the Declaration of
Independence. You will now generate dialogue and
background information. Each group needs the following:
(1) Profiles for key figures involved in the signing of the
Declaration of Independence, including their backgrounds,
personalities, and possible motivations; (2) Realistic
dialogues between these figures discussing the main issues
of the time, their concerns, and their visions for the future;
and (3) Different scenarios and outcomes based on the
signing of the Declaration of Independence, including
alternative possibilities. Offer historical context students
should consider while developing their narratives.

26
Design an Activity to Use the
LLM as a Live Answer
Generator
Many disciplines benefit from dialogical approaches, in
which processes, assumptions, and even truths are
questioned and debated in a discursive fashion. While
faculty can lead such a conversation, in practice this often
takes on the appearance of Socratic questioning, because
the instructor can only interact with one person at a
time. As a result, many students are reduced to a bystander
role, though hopefully they are mentally following along
(if not actively thinking through their own answers). The
possibility of student inaction is the primary weakness of
the instructor-centric approach.

If each student has access to an LLM, it’s possible to re-envision the entire paradigm of dialogue and interaction.
One at a time, each student can query the LLM about a
given process being discussed in the current chapter and
ask follow-up questions to fully understand the process as
described by the LLM. Because AI output can sometimes
be faulty, students should be watching for incomplete,
inaccurate, or insufficient answers generated by the LLM, perhaps identifying these weaknesses in groupwork with
other students. The same could be done for assumptions or
even conclusions/truths as understood for a given concept,
as a way of re-examining what we know and how we
know it.

Some LLMs will provide specific references used to arrive at the output, particularly if asked. These can be useful for
the in-class activity described above, as a further method
to impress upon students the need to find, document, and
verify sources in the AI era.

Sample LLM prompt:

As an experienced instructor in higher education mechanical engineering, over the years you’ve seen many
different pedagogies and activities in teaching. You’ve read
about using LLMs to show students how to query a
“knowledge source” about processes and then evaluate the
answer, and you’re curious to try it yourself. Design a
student-facing activity in which they will individually
query an LLM about a particular engineering design
process, such as creating a robot that can collect loose
basketballs and shoot them at a basket.

27
Design an Activity for Thought
Experiments
One element of critical thinking that is ideal for advanced
learners in higher level courses is the ability to conduct
thought experiments and predict outcomes. Just as it can
be difficult to write multiple-choice questions that
measure higher-order thinking, it can be even more
challenging to structure an activity for higher-order
thought experiments. As a result, creating such activities
without AI tools can be a time-consuming process.

LLMs can automate the brainstorming portion of designing thought experiments. As with many LLM prompts, it may
be best to request “too many” examples at first, knowing
that you can parse the output to find, with the help of your
intuition won from years of experience, a winning final
selection or two.

There are several ways to use thought experiments. One could request a detailed scenario from the LLM, in order to discuss the overall situation. It could be an effective groupwork exercise to ask students to predict an outcome, based on the situation described so far…and then circle back one more time to an LLM to see if it agrees with the
prediction.

Sample LLM prompt:

Pretend you are an instructor teaching a college Advertising class. You want to provide students with a
detailed scenario so that they can predict possible
outcomes from various proposed marketing campaigns.
Write a one-page scenario that provides a lot of details
about possible market trends and macroeconomic forces
that might affect sales in the next six months of Smiley
Pop, a fictional brand that is currently in second place in
the national cola wars. Include at least three ideas for
high-priced marketing campaigns that are being
considered to boost additional sales in the face of market
trends and the macro forces: one fairly standard, one
original and ambitious but possibly risky, and one
targeting teens specifically. The tone of your report should
be brisk and full of details, with a “just the facts” approach.
The audience for your report is the Smiley Pop CEO, who
will be considering the various proposals in order to
choose the winning marketing campaign.

28
Interact with Diverse Cultures
There’s an old joke about two young fish who swim by an
old man who asks, “How’s the water today, boys?” The
young fish respond, “What water?” It’s difficult to
understand other cultures and time periods because we’re
so immersed in our “realities” that we can’t see the
situations of others—or even ourselves—clearly. But, as
educators, we know the importance of exposing students
to all sorts of communities and experiences, especially as
we move to a more globalized world.

Simulations can help build more empathetic students and enhance critical thinking, as students immerse themselves in other cultures and learn to analyze situations from
multiple perspectives. They also allow students the
opportunity to apply theoretical concepts learned in class
and engage more actively in the material.

Perhaps subjects like anthropology and sociology are the most natural fits for this teaching approach, but many
other disciplines can employ cultural simulation activities.
Political science majors can visit the United Nations and
simulate negotiations. Business, marketing, and economics
students can launch products into other countries and alter
their messaging to increase appeal. Writing and rhetoric students can study idiomatic expressions, slang, and dialects unique to a culture. Majors in education, social work, criminal justice, psychology, and more can practice
recognizing, appreciating, and interacting with others in
diverse environments.

Sample LLM prompt:

You are instructing a Global Studies course and are creating a cross-cultural communication workshop
designed to navigate the nuances of international
collaboration. Imagine an American tech startup and an
Indian IT firm partnering to create innovative software.
The American project manager, Emily Johnson, is direct,
advocating for clear goals and swift progress. In contrast,
Rajesh Kumar, the Indian team lead, promotes a more
indirect approach, valuing team harmony and collective
input. Choose to represent either Emily or Rajesh in a
crucial project update meeting. If you are Emily, push for
quick decisions to meet deadlines. If you are Rajesh,
emphasize the importance of team consensus for quality
results. Continue the discussion between Emily and Rajesh
until they either come to an agreement or decide to end
the negotiation.

Section II:
Make Faculty
Life Easier

29
Minimize Emails from Students
Teaching classes with smaller enrollments is often easier
than classes with large numbers of students. The logistics
of large classes frequently present more challenges, even if
grading is done exclusively through automation within the
LMS. The primary expression of this extra work is often
the volume of emails from students, especially in large-
enrollment courses that are conducted in a fully online,
asynchronous modality, where students don’t have
another easy way to ask questions. One method to
minimize student emails before the advent of AI was
known as “three before me,” which involved posting in an
open-topic Discussion Board and requiring students to direct their questions to other students before emailing the
instructor, and if they still needed to email the instructor,
to link to the thread showing they had collected three
opinions first.

An updated version of “three before me” in the era of GenAI could be even easier for students, since they could
get answers right away. Instead of asking their peers via
the Discussion Board, students could be directed to ask the
LLM for their content questions. For questions about
course procedures, students could visit Claude, Perplexity,
or another LLM that accepts file uploads and upload the class syllabus before asking their question. If they still need
to email the instructor, a rule could be put in place
mandating that students include a screenshot of first
asking the LLM after uploading the syllabus. In this
fashion, some students will get their answer right away
and may never need to email the instructor. Instructors
adopting this method might wish to provide students with
a sample LLM prompt on the syllabus.

Sample LLM prompt:

Roleplay as the instructor of an undergraduate course named [Name of Course]. Your students have questions
about an upcoming essay assignment. Scan the attached
syllabus and determine the answers to the questions. Their
main question is: [type question as it would have been
phrased if emailed to the instructor]. Answer as the
instructor might, with a particular eye toward syllabus
policies and assignment descriptions.

30
Draft Your Annual Report
End-of-the-year reporting is part of faculty life.
Departments, colleges, and institutions use annual reports
to evaluate faculty members, but such reports may also be
used when considering contract renewals or promotions.
Some institutions may require faculty to complete a
specific, mandated form, while others require faculty to
prepare narrative statements that reflect their teaching,
mentoring, outreach, and/or professional development
activities. Reporting on efforts at the end of the academic
year might not be a faculty member’s ideal way of
beginning the summer, but it is a required task that attests to a faculty member’s contributions to the student experience and, therefore, to the reputation of the university.

If the prospect of drafting an annual report is daunting, LLMs can help! To start, faculty can use annual report
templates available online to devise lists of work done for
each category of their required annual report.
Alternatively, faculty can ask an LLM to generate an
annual report outline to complete. Once their outline is
filled out as completely as possible, faculty can use LLMs
to synthesize the information and compose a narrative
paragraph that summarizes their work. Faculty members who complete their outline or their mandated forms as
documents or PDFs may be able to upload a copy of the
outline or form to LLMs like Claude and Perplexity.
Faculty can also cut and paste sections from their
completed forms into the query box if the LLM doesn’t
have the ability to read a file. As always, giving the LLM
an informative, clear prompt will help faculty maximize
the LLM output to meet their annual report needs.

Sample LLM prompt:

You are a faculty member in counselor education who needs to write an annual report for end-of-the-year
reporting. Using the attached outline, generate paragraphs
that are each 100-150 words and focus on teaching,
research, service, professional development, and outreach,
respectively. The paragraphs will be evaluated by your
department chair before they are sent to the provost, so
they need to focus on how the work you did contributed
to students’ learning experience and to the reputation of
the institution, which is considered research-intensive.

31
Summarize Commentary on
Student Evaluations
Statistical data can help faculty see how they compare to
other instructors in their department, college, and
institution for the categories surveyed, but teachers would
be wise to also consider the comments students submit, as
they can be a rich resource for evaluating the effectiveness
of course organization, teaching methods, and
assignments. Reading the comments is not without its
pitfalls, however. It can be nerve-wracking to read the
critiques leveled by individual students, and the human
tendency to remember the negative more than the positive
may result in an instructor feeling angry, depressed, and
anxious about their prospects for promotion and about
teaching during the next term. How can faculty leverage
student comments to determine what students find
ineffective about their teaching without going down a
shame spiral?

LLMs can be an effective tool for processing student commentary holistically because they are designed to identify and summarize patterns in language. Given the proper prompt(s), LLMs that accept uploads (such as Claude, Perplexity, GPT-4o, and Gemini Advanced) can provide faculty with a general idea of whether the student
comments are positive or negative; pinpoint specific
aspects of a course or teaching techniques that were (or
weren’t) particularly useful or conducive to learning; and
compile suggestions students gave for improvement. Using
AI in this way is, perhaps, something with which we are
already familiar; online retailers have begun using LLMs to
provide customers with a global snapshot of customer
reviews. In a similar fashion, faculty can use LLMs to
process student feedback and devise ways to improve their
instruction more objectively, without the risk of losing the
forest for the trees.

Sample LLM prompt:

You are a faculty member who taught three sections of introductory chemistry during the last semester. Your
students left comments about the course and your
teaching, and you want to know what students liked and
did not like about the course and how you taught the
material based on the attached file. Use the uploaded file
to learn the following: a) what did students like about the
course; b) what did students not like about the course;
c) what suggestions did students give for improving the
course; d) what did students like about your teaching;
e) what did students not like about your teaching; f) what
suggestions did students give for improving your teaching.
Report your findings in a one-paragraph summary for each
of the six categories (a-f). Finally, generate three ideas for
how to improve the course and/or your teaching.

32
Automatically Take Minutes
During Meetings
Faculty are required to attend various meetings during the
academic year. How diligently individual faculty members
take notes during these meetings can vary greatly, as can
someone’s ability to keep up with a discussion. Factors
such as language issues, noise, technological glitches
encountered during virtual meetings, and even illness can
impact a faculty member's ability to understand or
contribute to meetings.

Rather than attempting to keep up with meeting developments via hand-written or typed notes, faculty can
enlist the help of an AI tool, such as Otter.ai or Read AI.
However, there are some extremely important issues
related to the use of AI for transcribing meetings that
MUST be considered and approved before AI can be used
for this purpose. The first consideration is ensuring that
faculty have obtained consent from the other attendees to
record or transcribe the meeting. People who have
concerns about privacy, security, and litigation are likely
to object to having AI do this work, while others may not
be comfortable expressing their ideas if they know they
are being recorded. Furthermore, not all AI programs inform participants that they are being used. Faculty
members who are interested in exploring AI transcription
for their meetings are encouraged to be proactive about
reviewing the protocols in place at their institution for
this technology.

When (or if) faculty do obtain permission to transcribe meetings using AI, the analyses of the transcriptions can
be facilitated by using LLMs. Uploading the transcription
to LLMs like Claude, Perplexity, GPT-4o, or Gemini
Advanced is typically preferable to copying and pasting
the text, particularly given the character limitations of
certain LLMs. To make the best use of LLM technology,
faculty can prompt the LLM to provide a global summary
of the meeting, list a specified number of important take-
aways from the meeting, and generate a list of tasks to
complete or questions to answer before the next meeting,
if necessary.

Sample LLM prompt:

You are a faculty member who obtained the transcription of a meeting held during a day you were sick. Although
you weren’t at the meeting, you want to be sure you
understand what topics were discussed, what questions
were raised, and what solutions were offered. Using the
attached text, generate a 250-word summary of the
meeting, and generate a list of questions your colleagues
asked, as well as the solutions to those questions that were
offered. Finally, explain when the next meeting will occur
(if known), and what tasks need to be done before it.

33
Summarize Long Emails
Email has made communication with others easy, but
there can be some aggravating aspects to this form of
communication. Not all emails are simple, direct, or
limited to a single subject, and faculty members may find
themselves ensnared in a message or message chain that is
complex and difficult to keep track of. Despite their best
efforts, it is entirely possible that faculty members neglect
addressing important or time-sensitive questions. How can
instructors keep track of the developments and priorities
in such messages, particularly those that contain the
thoughts and opinions of two or more other people?

LLMs can promote effective communication for faculty by generating email summaries. With an effective prompt, an
LLM can analyze a long email (or chain), consolidate the
main points of discussion, clarify the messages being sent
and received, and, therefore, enhance the efficacy of their
communication via email. LLMs can also help faculty more
readily identify lingering questions and points of confusion
or debate to provide guidance for moving the conversation
forward toward resolution.

Depending on the size of the email chain, it might work to paste the conversation into an LLM. With a long enough chain, it may be wise to use an LLM that accepts uploads,
such as Claude, Perplexity, GPT-4o, or Gemini Advanced.

Sample LLM prompt:

You are a faculty member who is working on a proposal for a study abroad program to Peru. You have been
discussing your proposal with the chair of your
department and the study abroad staff, but the email chain
is lengthy, and you are concerned you have not addressed
all the parts of the program application or the concerns of
your chair and the study abroad staff adequately or
completely. You need to review the attached email
messages and generate the following: a) a numbered list of
priorities for the program that your chair has discussed;
b) a numbered list of those parts of the study abroad
program proposal that need to be further developed and
what suggestions have been made for how to do this; and
c) a numbered list of questions that have been asked but
not answered in the email chain. Then, write a response
email to everyone on the chain who asked a direct
question or requested a response.

34
Draft Email Replies
Regardless of discipline, faculty must manage many
interpersonal relationships. As with any relationship—
personal or professional—good communication is critical.
Responding to emails is an important daily task that
faculty have to make time for, in addition to juggling class
preparation, grading, meetings, and research. Failure to do
so may result in the other party concluding that the
faculty member is not interested in them or their issue,
resulting in feelings of frustration or anxiety. The problem
of time management for emails is particularly acute for
teachers of large classes, especially if the instructor lacks
adequate support from teaching assistants.

LLMs can help reduce the time and effort faculty put into composing emails by generating an initial response that
can be adjusted to fit their needs. Using AI in this way can
help faculty respond in a timelier manner to messages and,
therefore, convey an impression of interest in contributing
to a larger discussion or concern in resolving a problem.
Additionally, LLMs can help faculty members
professionalize their response and help them avoid the
pitfalls of impulsive, emotional, or world-weary
communication with colleagues and students.

Sometimes it may be helpful to use an LLM that has access
to the internet (such as Copilot, GPT-4o, Gemini, and
Perplexity) and prompt the LLM to also search the web in
order to more fully respond to the inquiry.

Sample LLM prompt:

You are a faculty member in Biology who received an inquiry from a student who wants to know what the
difference is between a Bachelor of Science and a Bachelor
of Arts in Biology at your institution. The student does not
understand the differences in course requirements for the
two degrees. Write an email that describes the type of
work done for each of the degrees and provides a list of
questions that the student can answer to help them decide
which degree program best suits their interests in Biology.
The email should be factual and encouraging.

35
Adjust the Tone of a Draft
Email
We previously discussed the importance of maintaining
effective email communication, but many factors can
influence the tone and temperature of an email: inter-
office politics, the pressures of one’s job, and personal
stressors are just some examples. A whole host of issues
may arise, for instance, if a faculty member’s attempt at
conveying enthusiasm is instead read as arrogance or
insincerity. Another factor that can impact the
comprehensibility of a message is its organization;
messages that are difficult to follow can exacerbate feelings
of confusion, rather than provide actionable solutions.
How can faculty ensure that they don’t come off as too
aggressive, curt, disorganized, or annoyed in an email?

LLMs can help faculty adjust the tone of an email to ensure that a recipient does not misinterpret the intention of the
message, and they can also be used to improve readability.
Providing an email draft for the LLM to edit will allow the
program to adjust words and phrases to ensure consistency
of the tenor of a message or to regroup them to develop a
message that is succinct and cogent. Furthermore,
providing the LLM with an initial draft will guard against it inventing scenarios, examples, or questions in its version
of the email.

Remember that LLMs cannot provide you (or any other user) with personal opinions, so asking the LLM “Is this a
good email?” is an ineffective use of the technology. You
can prompt the LLM, however, to pinpoint areas of
strengths and weaknesses in your communication (and
overall writing) and ask for reasons why these sections
were indicated.

Sample LLM prompt:

You are working on an application for a National Endowment for the Humanities grant that is due in five
days. The colleague you are working with is overly
concerned about the information in the budget
justification section and you have drafted a reply, but you
think the tone of the email you drafted is too harsh. Revise
the attached email and make sure it has a professional, but
firm tone so your colleague knows you have considered
their concerns about the budget justification section, but
that you think it is better to focus on developing the
project’s significance section because it isn’t as refined as it
should be, and you want to be sure to have time to edit the
application before submitting it.

36
Compose a Letter of
Recommendation
Faculty are often asked to write letters of recommendation
(LOR) for students who are applying to graduate school or
their first professional job. Although it can be flattering
and rewarding to help students by providing an LOR,
sometimes the timing of such requests makes crafting a
thoughtful and effective letter inconvenient at best, and
next to impossible at worst. The pressures of writing an
LOR (or several) are often compounded by teaching duties,
publishing deadlines, and personal dynamics that can limit
a teacher’s time and attention.

LLMs are very effective editing tools for faculty who are
struggling to compose letters of recommendation. Once
the LOR is drafted, faculty can prompt the LLM to
improve the structure, clarity, and appropriateness of the
letter to ensure that it highlights the qualities sought by an
organization. A carefully worded prompt can help the
LLM best understand the assignment; the LOR it generates
for a law school applicant, for example, would be
necessarily quite different than one edited for a student
who is applying to a Master of Fine Arts program. Of
course, faculty are strongly encouraged to review the LLM-revised letter to ensure accuracy and tone, and to
reveal any biases. (LLMs often produce output that
includes gender stereotypes, such as referring to students
with traditionally female names as “nurturing” and
“supportive” while categorizing the same behaviors in
males as “strong team players.”) Faculty should be sure that
the letter accurately describes a student as they know
them, and selectively reworking the LLM’s phrasing will
make the letter sound more authentic.

Sample LLM prompt:

You are a language teacher who has been asked by a student to write a 1-2 page letter of recommendation for
their application to a military officer training program.
The student took two of your classes and they were a good
student in your classes, earning Bs both semesters. You
respect the student’s efforts; you know the student to be
hard-working and engaged in their learning, despite the
financial/personal difficulties they experienced during
your classes. Edit the copy of the attached letter to focus
on the student’s work ethic, skills, and academic discipline.
Be sure to highlight the experiences mentioned in the draft
that follows this prompt to enhance the letter of
recommendation you submit on your student’s behalf.

37
Assist with Dossiers
Dossiers and portfolios are valued assessment tools in the
field of teaching and learning. Faculty members may use
dossiers to track and assess student learning, but too often
we’re also asked to submit a portfolio when we apply for
promotions, tenure, or awards. Alternatively, faculty may
be asked to contribute to departmental reports by
summarizing or synthesizing data that demonstrates the
effectiveness of courses and curricula. The time it takes to
compile, organize, and process files for such work is
significant, and faculty may find the additional task of
writing a compelling summary or description for each
section overwhelming.

The ability of LLMs to process information from multiple sources can be a boon for faculty who are feeling stressed or exasperated by the process of dossier compilation and synthesis. LLMs are excellent at synthesizing information and can quickly compose a summary of an individual’s work or the work done by all members of a department. For dossiers that require the analysis of multiple materials—such as effectiveness reports based on several years of program assessments—investing in Claude Pro, Gemini Advanced, Perplexity Pro, or GPT-4o may be beneficial. All four LLMs allow you to upload multiple files and large amounts of text. While LLMs consider input
“tokens” which don’t translate neatly into page numbers,
GPT-4o can read around 250 pages while Claude Opus
(part of Claude Pro) can read up to 400 pages.
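
For a rough sense of capacity before uploading, a back-of-the-envelope conversion helps. The sketch below assumes GPT-4o’s 128,000-token and Claude’s 200,000-token windows, about 1.3 tokens per English word, and about 400 words per page; all of these numbers vary with formatting, so treat the results as estimates only.

    # Back-of-the-envelope conversion from context-window tokens to pages.
    TOKENS_PER_WORD = 1.3  # rough average for English prose (assumption)
    WORDS_PER_PAGE = 400   # typical manuscript page (assumption)

    def approx_pages(context_tokens):
        return context_tokens / (TOKENS_PER_WORD * WORDS_PER_PAGE)

    print(round(approx_pages(128_000)))  # ~246, close to the 250 pages cited above
    print(round(approx_pages(200_000)))  # ~385, near the 400 pages cited above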

Another possible use of LLMs for dossiers is submitting a prepared portfolio as a document or PDF and asking the
LLM to identify areas of strengths and weaknesses given
the rubric that will be used to assess the faculty’s
submission. This use of AI can be particularly beneficial to
faculty members who are uncertain if they have addressed
all the areas required for a dossier submission.

Sample LLM prompt:

You are an instructor of Digital Media. Your department at the university is undergoing an audit as part of an
Institutional Effectiveness review. You have been asked by
the coordinator for your department to contribute to the
preparations by supplying a dossier review of the work
completed by students graduating from your program
during the past four years, and you are given copies of the
assessment reports that have been submitted to review and
write up. You must submit the following: 1) a 300-word
summary of your department mission statement and how
the work submitted by graduates in your program
exemplifies the mission statement; 2) a 500-word summary
of the data contained in the assessment reports; 3) a 300-
word discussion of the three largest factors that influence
students’ grades for the targeted skills, based on the data
from the assessment reports.

38
Improve Dossiers for Awards
and Promotion
One of the more time-consuming tasks for educators
is creating a comprehensive portfolio that highlights
our research, academic achievements, publications, and
teaching philosophies. Often, we’re asked to generate
additional materials as well to demonstrate our impact
on students, our institution, and our academic and
civic communities.

Compiling and shaping a narrative out of these materials—especially for reviewers who may not be in our
disciplines—can be a daunting task. LLMs shine when it
comes to summarizing and synthesizing text, and they
also excel at reviewing materials, identifying gaps or
missing information, and comparing and contrasting
materials as they relate to readers within—and outside—
of our disciplines.

After working with an LLM to generate your dossier—as discussed in the previous chapter—you can then upload
award criteria and other guidelines and ask the LLM to
review your work from this lens. If you’re presenting this
dossier to members outside of your area of expertise, it can be helpful to ask the LLM to point out areas that might be confusing to
those unfamiliar with your subject matter and to offer
suggestions for clarifying key points. If the dossiers of
faculty members who have won in previous years are
posted publicly, you can ask the LLM to compare them
with your own work and brainstorm ways you can
supplement your dossier.

Sample LLM prompt:

You are a committee reviewer tasked with reviewing my award dossier that highlights my research, academic
achievements, publications, and teaching philosophies.
This portfolio is intended to demonstrate my impact on
students, my institution, and my academic community. I
need your assistance to review the entire portfolio and
summarize the key points, identify any gaps or missing
information, and compare my dossier with those of faculty
members who have won awards in previous years (if
available). Additionally, please review the dossier from the
perspective of the award criteria and guidelines provided
[upload award criteria and guidelines]. Highlight areas that
might be confusing to reviewers outside of my discipline
and offer suggestions for clarifying these points. Provide a
detailed analysis and suggestions for improvement.

39
Market Your Course/Program
One of the constant considerations of teaching is how to
adapt to meet the changing needs and interests of students.
So often we’ll work to add a new certificate or program—
or we’ll shake up our special topics course by adopting a
theme like The Hunger Games—only to discover that
students aren’t aware of the options we’ve created.

Marketing our courses, programs, and other additions to our departments becomes an essential part of the process.
LLMs excel at brainstorming and are good at synthesizing
and summarizing information. If you input your course
description, syllabus, and other relevant materials, you can
ask the LLM to create marketing materials, such as emails
to students, content for flyers, and social media ads (and
hashtags) that attract students.

Some LLMs’ free versions—such as Copilot and Gemini—generate images, allowing you to easily prompt the LLM to
create images to accompany your materials. GPT-4o and
Adobe Express can go a step further and, after creating
images and content, generate a completed flyer or other
marketing tool that you can easily download and use.

Sample LLM prompt:

You are designing marketing materials for a new special
topics course. The syllabus and course description have
been uploaded. This interdisciplinary course uses
The Hunger Games as a lens for political analysis,
exploring themes such as power, resistance, and societal
structures. Create engaging marketing copy for Instagram
(caption and hashtags), Facebook (event description and
details), Twitter/X (short post and hashtags), TikTok (short
video script and hashtags), and a flyer (title, description,
schedule, instructor, enrollment info, and call to action)
based on the provided syllabus and course description.
Finally, you’ll suggest and generate three images to use
with these materials.

Section III: Make Research Easier

40
Find Seminal Publications in a
New Research Area
One of the primary challenges faculty researchers face
when expanding their research purview into a new area is
identifying authors and publications at the leading edge of
the conversation. The large number of academic
contributions makes it difficult to pinpoint the most
influential and foundational works. Traditional methods,
such as combing through databases and manually
reviewing citations, are time-consuming and often result
in an overwhelming amount of information, much of
which may not be directly relevant. This inefficiency can
hinder the initial stages of research, delaying the
development of a comprehensive understanding of the
field. Additionally, the sheer volume of new publications
can make it difficult to stay updated with the latest
advancements and understand how they build on or
diverge from established research.

While LLMs can identify key words, trends, and topics,
they don’t yet access academic databases well, and they still
frequently hallucinate sources, DOIs, and publication
details. But there is a growing set of AI-powered research
tools that are dramatically expediting—and improving—the
academic research process. Some of our favorites include
Semantic Scholar (an AI-powered search engine for
scientific literature); ResearchRabbit (a “free forever” app
that searches and monitors new papers); Elicit (a research
assistant that compiles and sorts specific data from
journals); and Consensus (an extensive collection of
peer-reviewed papers with intuitive search capabilities).
See “Section IV: Tools Worth Considering” for details.
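
For faculty comfortable with a little scripting, Semantic
Scholar also exposes a free public web API. The sketch below
is our own minimal illustration, not taken from any of these
tools' documentation: the function name, topic string, and
result limit are placeholders we invented, and sorting by
citation count is only a rough proxy for "seminal" status.

Sample Python sketch:

import requests

def find_seminal_papers(topic, limit=10):
    # Query the Semantic Scholar Graph API for papers matching the
    # topic, then sort the pool locally by citation count.
    resp = requests.get(
        "https://api.semanticscholar.org/graph/v1/paper/search",
        params={
            "query": topic,
            "limit": 100,  # fetch a pool of results to sort locally
            "fields": "title,year,citationCount",
        },
        timeout=30,
    )
    resp.raise_for_status()
    papers = resp.json().get("data", [])
    papers.sort(key=lambda p: p.get("citationCount") or 0, reverse=True)
    return papers[:limit]

for paper in find_seminal_papers("organizational change"):
    print(paper.get("citationCount"), paper.get("year"), paper.get("title"))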

LLMs that allow you to attach documents (GPT-4o,
Claude, and Perplexity) can complement these apps and help
researchers (and students) understand the context and
significance of particular studies within the broader
landscape of the field. They can also summarize large
articles and ferret out specific information.

Sample LLM prompt:

You are a researcher who is writing a lit review on
organizational change for your own research that
compares the effectiveness of qualitative and quantitative
methods. First, summarize each attached article. Then, extract the
methodology used and separate into 3 categories:
qualitative, quantitative, and other, and list the strengths
and weaknesses of each.

41
Consolidate Research for a
Lit Review
Conducting a scholarly literature review can be very
time-consuming. The task requires a high degree of consistent
focus and critical reading, yet an openness to make creative
connections or follow promising new ideas and
perspectives. Given the demands on time and cognitive
load for today’s busy faculty researchers, LLMs can add
efficiency, accuracy, and a broader representation of
perspectives to a literature review.

AI-powered tools like Elicit, Consensus, ResearchRabbit,
Perplexity, or Semantic Scholar can be leveraged to reduce some of
the inefficiencies of conducting a manual review. LLMs
can not only scan and summarize a vast collection of
journal articles quickly, but they can also be prompted to
ensure a more comprehensive and representative selection
of relevant studies. They can better identify patterns and
trends, leading to greater accuracy and objectivity in the
synthesis, and they can be directed to edit multiple entries
for consistency in style and formatting. LLMs can
condense long articles to extract key findings, highlight
core arguments, and identify methodologies. They can
perform a meta-analysis by identifying gaps or patterns
across studies. They can categorize and organize the
literature chronologically or by theme, and they can create
initial drafts of the sections of the lit review. When tools
like Zotero or EndNote are enhanced with AI capabilities,
they can also help manage and organize references. Some
LLMs like Claude Pro, GPT-4o, Gemini Advanced, and
Perplexity Pro allow you to upload multiple PDFs and
other files, which makes it even easier to synthesize
multiple sources, search for keywords, and filter articles
based on your sorting preferences (research question,
sample size, recommendations, etc.).

Sample LLM prompt:

You are a college instructor researching attachment
disorders in young adults. You need to write a
comprehensive scholarly literature review. Write short
reviews for the attached peer-reviewed articles and then
create an outline to assist with writing a lit review. The
review should include an introduction, key themes and
findings, methodologies, gaps in the literature, and a
conclusion that suggests potential directions for future
research. The output should follow APA 7th edition style
requirements, should be written for expert researchers
and practitioners, and should clearly indicate where each
article should be mentioned or considered.

42
Brainstorm Ideas for a New
Publication
Proposing new ideas for grants, articles, or book projects is
sometimes a significant challenge for researchers across all
fields. The process requires not only a deep understanding
of the existing body of literature but also the ability to
identify gaps and propose novel contributions. This can be
particularly daunting given the sheer volume of published
research and the rapid pace at which new findings emerge.
Additionally, researchers often need to balance originality
with feasibility, ensuring that their proposals are
innovative yet grounded in achievable methodologies. The
pressure to stand out in competitive funding or publication
environments further exacerbates these challenges,
making the ideation phase both critical and demanding.

LLMs offer a transformative solution for researchers
seeking to generate and refine ideas for their projects.
LLMs can analyze vast amounts of literature, identifying
trends, gaps, and emerging areas of interest. These models
can provide insights into how different research questions
have been approached, suggest potential methodologies,
and even propose new angles on established topics. For
instance, an LLM can help a researcher by highlighting
underexplored intersections between disciplines,
suggesting novel applications of existing theories, or
identifying emerging questions that have yet to be
addressed. This capability not only accelerates the ideation
process but also enhances the quality and originality of
research proposals. By leveraging LLMs, researchers can
efficiently navigate the extensive body of existing work
and generate well-informed, innovative ideas that stand a
better chance of success in competitive environments.

Sample LLM prompt:

You are a research faculty member with a doctorate in
philosophy, and you are working in an interdisciplinary
faculty cluster at a university. Your team focuses on ethical
issues in technologies, and recent advances in artificial
intelligence pose many ethical challenges. Your task is
to generate a list of innovative research ideas in the field of
artificial intelligence ethics, focusing on underexplored
intersections with social justice, environmental
sustainability, and global governance. For each idea,
provide a brief overview of the current state of research,
potential research questions, and suggested methodologies.

43
Create an Outline of a Grant
Application
Outlining a new grant proposal is a complex and often
daunting task for researchers. It requires a thorough
understanding of the current state of research,
identification of significant gaps or opportunities, and the
articulation of a clear and compelling narrative that aligns
with the priorities of funding agencies. The process
involves extensive literature review, careful consideration
of methodological approaches, and the ability to project
potential impacts and outcomes convincingly.
Additionally, researchers must navigate the specific
requirements and guidelines of different funding bodies,
which can vary significantly. The pressure to secure
funding in a highly competitive environment adds another
layer of difficulty, making it crucial to develop a well-
structured and persuasive proposal.

Large Language Models offer a powerful tool for
researchers looking to streamline the process of creating
grant proposal outlines. By leveraging advanced natural
language processing, LLMs can quickly analyze vast
amounts of literature and funding agency guidelines to
provide targeted insights and recommendations. These
models can help identify key research gaps, suggest
innovative methodologies, and propose potential impacts
and outcomes based on existing research trends. LLMs can
also assist in aligning the proposal with the specific
requirements and priorities of various funding agencies by
analyzing successful proposals and extracting common
elements and strategies. This not only saves time but also
enhances the quality and coherence of the proposal,
increasing the chances of securing funding. By using
LLMs, researchers can efficiently develop a structured and
comprehensive outline that serves as a strong foundation
for their grant proposals.

Sample LLM prompt:

You are a researcher in the field of Green Finance, and you
have an idea for a grant project. Your task is to create a
detailed outline for a grant proposal in the field of
renewable energy, focusing on innovative solutions for
solar energy storage. The outline should include sections
on background and significance, specific aims, research
design and methods, expected outcomes, and potential
impact. Additionally, provide guidance on aligning the
proposal with the priorities of major funding agencies such
as the National Science Foundation (NSF) and the
Department of Energy (DOE).

44
Compose the First Draft of a
Grant Application
Writing the first draft of a grant proposal presents
numerous challenges for researchers. This initial stage
involves transforming a conceptual idea into a structured
document that clearly articulates the research objectives,
significance, methodology, and expected outcomes.
Researchers must ensure that their proposals are both
scientifically rigorous and persuasive, making a compelling
case for funding. This process requires meticulous
attention to detail, including adherence to specific
formatting and submission guidelines of various funding
agencies. Additionally, the pressure to secure funding
often leads to heightened anxiety, making it difficult to
maintain clarity and focus during the drafting process. The
result is that many researchers find themselves
overwhelmed by the sheer scope of the task.

Large Language Models can significantly ease the burden
of writing the first draft of a grant proposal. By utilizing
advanced natural language processing capabilities, LLMs
can generate coherent and well-structured text based on
input provided by the researcher. These models can help
organize ideas, ensure logical flow, and maintain a
persuasive tone throughout the document. LLMs can also
assist in crafting specific sections of the proposal, such as
the background, literature review, methodology, and
anticipated outcomes, by drawing on a vast database of
existing research and best practices. Furthermore, LLMs
can provide suggestions for meeting the specific
requirements and priorities of different funding agencies,
enhancing the alignment and relevance of the proposal. By
leveraging LLMs, researchers can expedite the drafting
process, reduce cognitive load, and produce higher-quality
first drafts that stand a better chance of success in
competitive funding environments.

Note: researchers should verify before beginning that the
use of GenAI is allowed under the grant.

Sample LLM prompt:

You are a researcher in the area of integrative medicine,
and you are seeking funding for a study of non-drug
related interventions to improve human health. Your task
is to draft the first section of a grant proposal for a research
project focused on developing innovative alternative
therapies for obesity-related diseases. The draft should
include an introduction, background information, a review
of relevant literature, and the research objectives.
Additionally, provide suggestions for aligning the proposal
with the priorities of major funding agencies such as the
National Institutes of Health (NIH) and the Centers for
Disease Control and Prevention (CDC).

45
Improve Grant Applications
with Comparisons
One of the perennial challenges in grant writing is
ensuring that a proposal stands out while adhering to the
stringent criteria set by funding bodies. Researchers often
need to compare their proposals with successful
applications from previous years and align them with the
specific requirements and expectations of the grant. This
process is labor-intensive and fraught with the difficulty of
parsing through voluminous documents to extract relevant
information, making it challenging to identify gaps,
strengths, and areas for improvement in one’s proposal.

LLMs like GPT-4o, Claude Pro, Gemini Advanced, and
Perplexity Pro offer a transformative approach to this
problem by leveraging advanced natural language
processing capabilities. These models can swiftly analyze
and synthesize information from multiple documents,
enabling researchers to gain a comprehensive
understanding of the key elements found in successful
proposals. By inputting both the draft proposal and the
successful ones into an LLM, the model can perform a
detailed comparison, highlighting discrepancies in
language, structure, and content. This comparative analysis
helps identify areas where the draft falls short and suggests
specific enhancements that align it more closely with the
successful examples. Furthermore, LLMs can cross-
reference the draft proposal with the grant criteria,
providing a meticulous check to ensure that all necessary
components are addressed. This includes verifying the
presence of essential sections such as the introduction,
literature review, methodology, expected outcomes, and
budget justification, and ensuring that each section meets
the funder’s expectations. The model can also suggest
improvements in clarity, coherence, and persuasiveness,
which are critical factors in the evaluation process.

Sample LLM prompt:

You are an experienced grant writer who is mentoring a
junior faculty member trying to secure funding for a
research project. Your task is to analyze the following
grant proposal draft and compare it with successful grant
applications from previous years (also attached). Identify
the strengths and weaknesses of the draft in relation to the
successful ones. Additionally, cross-check the draft with
the specified grant criteria and provide detailed
suggestions on how to improve the proposal to meet or
exceed the funder’s expectations. Ensure the analysis
covers all major sections, including the introduction,
literature review, methodology, expected outcomes, and
budget justification.

46
Adjust Length of a Grant
Application
One of the significant challenges researchers face when
editing a grant proposal is adhering to the strict space
requirements set forth by granting agencies. These
constraints often necessitate precise and strategic editing
to ensure that the proposal remains comprehensive and
compelling while fitting within the specified limits.
Balancing the need to include all essential information
with the requirement to keep the text concise can
be daunting. Researchers must decide which sections
to condense without losing critical details and which
areas may need further elaboration to meet the grant
criteria effectively.

Large Language Models (LLMs) like ChatGPT offer a
powerful solution to this problem by assisting in the
precise editing and optimization of grant proposals. These
models can analyze the text to identify which sections can
be succinctly summarized without omitting vital
information, ensuring that the proposal remains impactful
while meeting space constraints. Conversely, LLMs can
also identify areas that might benefit from further
elaboration, enhancing clarity and completeness. By
providing tailored suggestions, LLMs help maintain the
proposal’s overall quality and coherence, ensuring that all
necessary details are included in a concise manner that
aligns with the funder’s expectations.

Sample LLM prompt:

You are a recently appointed female faculty member in
Aerospace Engineering, and you have written a first draft
of a proposal for an NSF CAREER grant. Analyze the
following (or attached) grant proposal draft, considering
the space limitations imposed by the granting agency.
Identify sections that can be shortened without losing
essential information and suggest concise edits.
Additionally, highlight areas that may need further
elaboration to meet the funder’s criteria more effectively.
Provide detailed recommendations on how to balance the
content to ensure clarity, completeness, and alignment
with the grant requirements, focusing on the introduction,
literature review, methodology, expected outcomes, and
budget justification.

47
Unify the Tone of Co-Authored
Drafts
Editing a research publication with multiple authors
requires an experienced wordsmith to effectively blend
each contributor’s unique voice, writing style, and
perspective; without that blending, the result is often a
disjointed and inconsistent final manuscript. Achieving a
uniform tone and style is essential for readability and
coherence, yet it often requires extensive revisions and a
keen editorial eye. The complexity increases when authors
are from diverse disciplines or linguistic backgrounds,
further complicating the harmonization of the text. These
challenges can lead to a prolonged editing process,
consuming valuable time and resources. Also, the iterative
nature of academic writing, where feedback and revisions
are continuously incorporated, adds another layer of
difficulty in maintaining a consistent style. The pressure to
meet publication deadlines while ensuring high-quality
output exacerbates these challenges, highlighting the need
for an efficient and effective solution.

LLMs can help facilitate the editing task by analyzing and
understanding the nuances of different writing styles and
tones. They can assist in editing the publication to ensure
consistency in style and semantic density, making the text
more coherent and accessible. They can identify and
correct stylistic discrepancies, standardize terminology,
and enhance the overall readability of the manuscript. This
not only saves time but also improves the quality of the
final document, ensuring it meets the high standards
required for academic publications. They can also provide
valuable insights into the text’s structure and content,
suggesting improvements that might not be immediately
apparent. The result should be a polished and professional
publication that effectively conveys the research findings.

Sample LLM prompt:

You are a principal investigator working with several co-
PIs on an international project, and each has contributed
to a draft narrative. English is not the first language for any
of the authors. Your task is to review the following
research manuscript and make edits to ensure a consistent
writing style and tone throughout the document. Focus on
harmonizing the language, standardizing terminology, and
improving readability. Pay special attention to sections
where different authors’ writing styles might clash and
make adjustments to ensure a seamless narrative.
Additionally, enhance the semantic density by ensuring
that key concepts and arguments are clearly articulated
and supported. Highlight any significant changes made
and provide a brief explanation for each major edit.

48
Double-Check a Proposal
Against the Original CFP
Proposing a conference presentation for a prestigious
academic conference is a highly competitive and rigorous
process. The event attracts submissions from leading
experts and scholars worldwide, making it crucial to stand
out with a compelling and well-structured proposal. One
of the significant challenges is ensuring that your proposal
aligns with the current trends and topics of interest within
the field, while also clearly demonstrating the relevance
and impact of your research. The proposal must effectively
communicate your research objectives, methods, and
anticipated outcomes in a concise and engaging manner.
The pressure to meet these high standards, coupled with
the need for precision and clarity, often results in a failed
proposal. This challenge is compounded by the necessity to
balance technical language with accessibility, making sure
that the content is understandable to both specialists and a
broader academic audience.

LLMs can guide you through the proposal writing process
so you can generate a proposal that is not only coherent
and well-structured but also tailored to the specific
requirements and expectations of the conference. LLMs
can help you articulate your research objectives, methods,
and anticipated outcomes in a clear and compelling
manner. They can analyze existing literature and identify
key themes and gaps that your research addresses,
ensuring that your proposal highlights its significance and
contribution to the field. Additionally, they can assist in
refining your proposal by suggesting improvements in
language, tone, and style, ensuring that your submission is
professional and impactful. This process includes
optimizing the abstract, structuring the sections logically,
and providing persuasive arguments that emphasize the
importance and novelty of your work.

Sample LLM prompt:

You are a faculty researcher of workplace equity in large
corporations, and your work centers on the impact of
remote work. You wish to present your research at the
annual conference of the Society for Industrial and
Organizational Psychology. Your task is to draft a proposal
for a conference presentation at this upcoming SIOP
conference with the topic of “The Impact of Remote Work
on the Well-Being, Productivity, and Career Longevity of
Women Managers.” Ensure that the proposal includes a
clear statement of the research problem, objectives,
methodology, and anticipated outcomes. Highlight the
significance of this research in the context of current
trends in remote work and its implications for
organizational practices. Make sure the language is
professional, engaging, and tailored to the expectations of
the SIOP conference review committee.

49
Tailor Your Bio for New
Contexts
One of the crucial aspects of preparing for a significant
professional opportunity is customizing your bio to
highlight the key points of your work and meet the
audience’s expectations. A well-crafted bio not only
introduces you but also sets the tone for establishing your
credibility and engaging the audience from the outset. It’s
important to emphasize your relevant expertise, notable
achievements, and any unique perspectives or experiences
that directly relate to the topic. This tailored approach
helps create a connection with your audience and ensures
that your bio aligns with the theme and goals of the event.
Failing to customize your bio might result in a missed
opportunity to effectively convey your qualifications and
the significance of your work, potentially diminishing the
impact of your presentation.

LLMs can create a bio that is coherent, engaging, and
tailored to the specific audience and context of a meeting,
online post, or publication. They can emphasize your most
relevant qualifications and achievements, ensuring that
your bio resonates with the audience and underscores your
authority on the subject matter. They can assist in refining
the language and tone of your bio, making sure it is
professional yet approachable and aligns with expectations.
This not only enhances your introduction but also
positively influences the audience’s perception and sets the
stage for a successful impression. By using LLMs, you can
ensure that every aspect of your bio is optimized to capture
the audience’s interest and convey the importance of your
contributions to the field.

Sample LLM prompt:

You are a seasoned faculty member and considered an
expert in multiple fields of human communication. Your
task is to write a speaker bio for a keynote address at an
upcoming IEEE GLOBECOM conference, and the title of
the presentation is “A Failure to Communicate: Progress in
Communication Technologies and Regress in Negotiation
Skills.” Scan the attached C.V. for details relevant to this
topic. Include my academic and professional background,
highlighting my expertise, notable achievements, and any
relevant research or projects. Ensure the bio is engaging,
professional, and tailored to meet the expectations of the
audience attending this professional meeting.

50
Create a Report from Raw
Survey Data
Even with advanced statistical tools, analyzing large
amounts of survey data can be an arduous and error-prone
task. Common pitfalls include overlooking key patterns,
misinterpreting responses, and the sheer volume of data
making it difficult to draw accurate conclusions. Manually
sifting through open-ended responses, categorizing them,
and ensuring that all relevant themes are captured requires
significant time and effort. The complexity increases when
dealing with diverse responses that may include varying
degrees of detail, different terminologies, and subtle
nuances that are easy to miss. Ensuring consistency in
interpretation and presentation of the data can be
challenging, particularly when multiple researchers are
involved, each bringing their subjective biases and
perspectives. These challenges often lead to delays in
reporting and can compromise the quality and reliability
of the findings.

LLMs can streamline the analysis of survey data and the
creation of summary reports by processing vast amounts of
text data quickly and accurately, identifying patterns and
themes in open-ended responses that might be missed in
manual analysis, categorizing responses, quantifying
sentiments, and highlighting significant trends. This allows
researchers to gain deeper insights into the data and
produce more accurate and comprehensive reports. They
can reduce the influence of individual biases and ensure
a more objective interpretation of the data. This
technology can also handle complex linguistic nuances,
enabling a deeper understanding of respondent
feedback. LLMs can assist in writing these reports by
summarizing findings in a clear and coherent manner,
ensuring consistency and professionalism in presentation.
The ability to automate these tasks not only saves time but
also enhances the overall quality and reliability of the
research outcomes. (GPT-4o allows you to attach raw data
from multiple files in all sorts of formats, including CSV,
Excel, JSON, and text, saving even more time.)
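
If your raw data arrive as a spreadsheet, a small script can
also make a first pass before the LLM is involved. The sketch
below is a minimal illustration of ours, assuming a
hypothetical file tourists.csv with "year" and "rating"
columns plus an open-ended "comments" column; none of these
names come from a real dataset.

Sample Python sketch:

import pandas as pd

df = pd.read_csv("tourists.csv")

# Quantitative side: quick descriptive statistics per survey year.
print(df.groupby("year")["rating"].describe())

# Qualitative side: bundle the open-ended responses into one text
# block that can be pasted (or uploaded) into an LLM with a prompt.
comments = df["comments"].dropna().astype(str)
block = "\n---\n".join(comments.head(200))  # stay within context limits
with open("comments_for_llm.txt", "w") as f:
    f.write(block)
print(len(comments), "comments found; first 200 saved for the LLM.")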

Sample LLM prompt:

You are a graduate research assistant supporting a team of
faculty researchers who surveyed tourists to the Central
Florida area over the past five years. Many of the survey
fields called for respondents to generate text responses.
Your task is to analyze the attached survey data, which
includes both quantitative and open-ended responses.
Identify key patterns and themes in the open-ended
responses, categorize them appropriately, and quantify any
recurring sentiments. Summarize the findings in a detailed
report, highlighting significant trends and insights. Ensure
the report includes an introduction, methodology, results,
and conclusion sections, and is written in a clear and
professional tone suitable for academic publication.

51
Generate “Extras” for Your
Research Paper Draft
Writing critical components such as keywords, abstracts,
conclusions, marketing copy, letters to editors, and
submission lists for research papers can be a complex and
time-consuming process. Common pitfalls include
selecting keywords that fail to capture the essence of the
research, drafting abstracts that are either too detailed or
too vague, and writing conclusions that do not adequately
summarize the findings. Creating engaging marketing copy
that highlights the significance of the research for a
broader audience can be particularly challenging, as can
crafting a persuasive letter to the editor. These tasks
require precision, coherence, and a deep understanding of
both the research and its potential audience, making them
prone to inconsistencies and errors when done manually.

LLMs can generate high-quality keywords, abstracts,
conclusions, and other essential documents with great
accuracy and coherence. They can analyze the content of a
research paper, identify its key themes and contributions,
and produce concise and relevant keywords that enhance
discoverability. They can draft abstracts that succinctly
summarize the research while highlighting its importance
and main findings. Conclusions generated by LLMs can
effectively encapsulate the implications and future
directions of the study. LLMs can create compelling
marketing copy that captures the interest of a broader
audience and write persuasive letters to editors that
emphasize the paper’s relevance and impact. Some AI apps
like Trinka have a “journal finder” that generates a curated
list of suitable journals and conferences for submission,
based on the paper’s subject matter and quality. scite
Assistant is another helpful tool that analyzes your
citations and indicates which have been supported,
challenged, or retracted, helping you verify the quality of
the research you refer to in your own work.

Sample LLM prompt:

You would like to publish a completed research paper, and
you want to maximize your efforts toward a successful
submission to the top journals in the field of international
relations. Your task is to analyze the attached research
paper and generate the following documents: 1) a list of
relevant keywords that capture the essence of the research,
2) a succinct abstract summarizing the research objectives,
methods, findings, and significance, 3) a concise
conclusion that highlights the main findings and their
implications, 4) marketing copy that effectively
communicates the importance of the research to a general
audience, and 5) a persuasive letter to the editor
emphasizing the paper’s relevance and contributions.
Ensure that each document is clear, coherent, and tailored
to its specific purpose.

Section IV: Tools Worth Considering

52
Adobe Firefly
Create Consistent (Themed) Images for
a Presentation
While several tools, apps, and websites offer text-to-image
capability (creating GenAI images), most of them
interpret the text anew with each new
image generation. Let’s say you created an image of a
cartoon eagle at a typical generator (for example, Copilot,
which uses DALL·E 3), and you liked the way it looked. If
you wanted to use that eagle as a consistent mascot for an
entire PowerPoint presentation, you might find it difficult
for Copilot to reproduce the same style of cartoon eagle as
the first one you liked. Asking for a cartoon eagle riding a
bicycle for your second slide will result in brand new
designs for cartoon eagles, so the theme is not consistent.

Adobe Firefly offers functionality that solves this
problem. In addition to style pre-sets, it also offers a
“reference image” upload, where you can import a starting
image you like, and subsequent text-to-image prompts will
mimic the style and look of the reference image, as if they
were created by the same artist. This works especially well
if the reference image was created with a detailed prompt
in Adobe Firefly the first time, and the relevant details of
the prompt are repeated in future prompts along with the
uploaded reference image.

In this fashion, you can get the same cartoon eagle
appearing throughout your PowerPoint presentation while
doing different things that align with each slide’s unique
content, such as flying a kite while lightning zaps an
attached key, peering into a microscope to unlock the
mysteries of DNA, or signing the Declaration of
Independence alongside other cartoon animals.

53
ResearchRabbit
Visualize Research Connections and
Quickly Connect to Academic Sources
Too often students rely on Google—or Google Scholar, if
we’re lucky—to find sources that can contribute to an
academic conversation. Many of these sources fall short
and those that might be helpful often lead to abstracts that
require library logins or payments before proceeding. Even
the quality sources we may obtain there—or in other
industry-specific shared collections or digital libraries—
don’t always show us how other researchers are
contributing to the conversation.

ResearchRabbit is an innovative, “free forever,” citation-
based literature mapping tool that’s designed to support
your research without switching between search modes
and databases. But it’s ResearchRabbit’s ability to automate
“citation mining,” highlight relationships and trends, and
quickly lead you to relevant research (which might not be
immediately obvious) that makes it a true game changer.

ResearchRabbit provides a visual map of the literature that
reveals the structure of research networks and allows
researchers to see how papers are connected. Closely
related papers cluster together, allowing you to easily
identify papers central to the discussion. The visual format
also allows researchers to spot trends, dominant theories,
and gaps where little research has been conducted.

With ResearchRabbit, you can interact with these maps
by selecting specific papers or journals and tracking where
they’ve been cited. Or you can filter by criteria (e.g., publication
date, number of citations, keywords) and zoom in—or
out—to understand the research connections or to limit
your scope.

As you continue to use ResearchRabbit, it begins to tailor
its suggestions based on user preferences or research
history, personalizing your searches. You can even ask the
app to email you when a related study is first published.

After you’ve added articles to your collection, you can
export all of the papers at once, share that collection with
collaborators or use it to build a course reading list for
students, and sync it with Zotero collections.

54
Elicit
Quickly Extract Specific Data from
Relevant Research
Elicit is another AI-powered research tool that assists in
streamlining the literature review process. You can begin
by entering a research question to generate a search,
or you can upload a document and extract data
from it.

When you initiate a search, Elicit pulls up the four most
relevant articles and generates a mini literature review
that shows how these four studies are related. The app
then creates a table of all four articles that includes the
citation information and a one-sentence abstract summary.
You can create columns that allow you to search for over
30 types of specific data, such as limitations, measured
outcomes, intervention effects, methodology, research
gaps, funding sources, software used, sample sizes, and
more. This ability is particularly useful for meta-analyses
and systematic reviews and saves countless hours for
researchers who are interested in specifics within a
research study.

After reviewing the four papers Elicit suggested, you can
keep selecting “Load more” until you run out of time or
energy. Elicit organizes its search by relevance, so having a
strong research question for the initial inquiry is critical
for building a collection of papers.

Like ResearchRabbit, Elicit encourages collaboration and
allows you to share your “Notebook” with colleagues and
students, and it also syncs with Zotero. Elicit will save
your results, but it does not, unfortunately, allow you to
export your data, tables, columns, or citations for free.
Researchers who desire that option will need to purchase
Elicit Plus.

55
Consensus
Quickly Firm Up Relevant Research
Questions
Consensus is an AI-powered search engine that combs
through over 200 million academic research articles,
papers, and books in every academic discipline. With its
Copilot feature and its “Consensus Meter,” this tool allows
you to enter your research question, pull up relevant work,
and view an AI-generated Study Snapshot that extracts key
information about a study’s methods and can quickly
reveal the strength of your research questions based on
verified data.

Displayed sources include a one-sentence review that
summarizes the key findings of the study, which may
greatly assist in the search process. Consensus also flags
other aspects of the study that might be relevant to
researchers, including whether it’s highly cited, a randomized
controlled trial, published in a rigorous journal, or a
systematic review.

Consensus has free and premium options but allows
unlimited searches and the ability to export research lists
in all of its packages.

Conclusion
As mentioned in the introduction, we view this book as a
companion to our 2023 open-source book ChatGPT
Assignments to Use in Your Classroom Today
(http://bit.ly/chatgptassignments). Whereas the first book
set out to offer examples of student-facing assignments
that made use of LLMs, this book is aimed instead at ways
faculty can use LLMs in their own working lives. Some of
that is by necessity aligned with their lives as teachers, but
we wanted to expand the view to include ‘hacks’ involving
faculty as researchers or just their overall productivity in
other elements such as service or their lives as employees
of institutions of higher education.

Ethical Use of GenAI

A few of the examples of hacks discussed in earlier
chapters made oblique references to ethics, but a deeper
reflection is certainly warranted. Just as faculty expect
students to be transparent and ethical with GenAI tools—
and to avoid any unethical practices—we should hold
ourselves to the same high standards. Unfortunately, the
easy comparisons end there. With students, it’s relatively
simple to see the dividing line between ethical and
unethical use, particularly if students are told on the
syllabus exactly where to draw that line in a particular
class. Faculty use of GenAI comes with fewer clearly
delineated lines of usage.

Here are just a few questions we might need to ask
ourselves about faculty use of GenAI:

• The parent companies of some LLMs are facing
lawsuits because the models appear to be capable of
reproducing the style of living authors, implying
copyrighted works were ingested without
permission. Does this taint our use of LLMs for
teaching or research purposes?
• Is it always okay to use AI-generated images over
ones found via online image searches? Does it
change anything if the GenAI was “trained” on
copyrighted images without permission?
• Is it wrong to use LLMs to generate class/teaching
materials if my own policy is that students can’t use
LLMs at all?
• If we embrace GenAI to its fullest extent and “lean
in” to it not just for faculty usage, but also
interwoven into student assignments, are we
possibly short-changing them on an education in
the fundamentals that doesn’t use AI at all? And, if
we do assign work that requires AI tools, how
can we ensure that use remains equitable for
students who lack digital access or resources?
• How much AI assistance is “too much” when it
comes to writing recommendation letters, drafting
an employee’s annual evaluation, or student
grading?

One thing is clear: it would be unethical to use AI in any
form or fashion without full transparency (or, put another
way, it’s only ethical to use AI when clearly
communicating where and how you’ve used it). Even
invisible brainstorming and outlining need to be
disclosed. As mentioned in the introduction, in this book
we only used LLMs to verify that the prompts we provided
in the book actually returned useful results, and to draft
some of the prompts and the research chapters.

We might be tempted to draw a similar conclusion about
the ethics of evaluation, but the lines are blurrier here. At
first glance, it might seem innocuous enough to ask an
LLM to create a first draft of a recommendation letter for a
graduating student, especially if you plan to heavily edit
the original AI output, but if you don’t change every
sentence, then part of the “evaluation” will have been
written by a machine that never met this student. This is
especially problematic because evaluative documents have
consequences. Your former student might not be accepted
to medical school; or your colleague at another institution
might be denied tenure. Someone you supervise at work
might not get this year’s raise.

The ramifications of AI-guided grading might not seem
immediately obvious, but they are sobering. If
AI tools become reliable enough to replace humans in
grading, it could have grave consequences for staffing
levels within academic departments.

Future Directions for GenAI

It is of course folly and the height of hubris to pretend we
have any confidence in knowing for sure how AI
technology will continue to develop and evolve. Few saw
the development of LLMs making such rapid inroads in
school and work life, yet this revolution is well underway,
and these environments are unlikely to ever return to
practices from the pre-AI days. But while we don’t know
where we’re headed long-term, we think it might be
possible to prognosticate about short-term and medium-
term time horizons.

In the immediate present, we’re seeing increased
sophistication among the early adopters, particularly when
it comes to advanced prompt engineering. The entire
concept of prompt engineering was new for most of the
population, but enough time has elapsed that more people
have started experimenting, and, equally importantly,
sharing their discoveries with their colleagues.

In the short term, we’ll see adoption in college gradually
rise as late adopters, both faculty and students, come
around to recognizing the seismic shift as permanent.
Peers across all groups will be key in helping bring late
adopters up to speed, but we may still face a few years of
heterogeneous audiences.

Our best guess for the medium-term future is that we’ll see
a commingling of tools, and a concomitant shift in ways of
thinking about how to use AI. Partly this will be driven by
the tools themselves becoming multi-modal: rather than
an LLM receiving only text prompts and dispensing only
text outputs, we’ll increasingly see tools that accept images
and provide text analysis, generate images from text, and
perhaps offer the ultimate killer app: text-to-video. There are
important questions to answer about deepfake videos that
look so realistic as to overcome initial skepticism, even
though they were in fact AI-generated.

The explosion of possible modalities will cause a shift in
how we think about AI, and how we interact with such
technologies. Rather than labor over a perfect prompt to
obtain a striking AI-generated image of a colorful
underwater cave, for instance, we can already turn to a
different mono-modal LLM to explain our desired image
outcome, and ask the LLM for a text-based prompt to put
into the image generator. In many cases, the LLM can
write a better prompt than we can!

This is just one example of how our thinking will shift. We
will continue to find ways to inject AI into our daily
processes and tasks. In fact, we view it as likely that this
transition to a new AI economy is not a one-time event but
an ongoing, perhaps permanent, one. We will, now and forevermore, be in a
state of learning new AI tools and re-evaluating how they
might provide added value to our current processes. The
one constant is likely to be the need for humans adding
value, both on the prompt engineering side (asking the
right questions) and on the side of evaluating AI output
(putting it to use, correcting it, etc.). As we march
inexorably toward the future, we will continue to see that
AI will not displace humans; nor will humans overcome
the need for AI. The future of all work is humans + AI, and
the field of education is no exception.

About the Authors
Kevin Yee earned his Ph.D. in German Literature from UC
Irvine and enjoyed teaching for several years as a full-time
faculty member at the University of Iowa and Duke
University before changing his focus to educational
development when joining the University of Central Florida
in 2004. He is now the director of UCF’s Faculty Center for
Teaching and Learning (https://fctl.ucf.edu), and co-author of
the 2023 book ChatGPT Assignments to Use in Your
Classroom Today.

Laurie Uttich, a poet, is a Senior Lecturer at UCF where she
taught composition and creative writing for 15 years. She is
now an Instructional Specialist at UCF’s Faculty Center for
Teaching and Learning and recently used Copilot to come up
with names for her son’s dog (result: Basho). She is a co-
author of the 2023 book ChatGPT Assignments to Use in
Your Classroom Today.

Eric Main joined UCF’s Faculty Center for Teaching and
Learning in 2001 and is its Associate Director. He teaches the
center’s Preparing Tomorrow’s Faculty program, organizes its
annual faculty development institute, and edits its Faculty
Focus essay collections.

Liz Giltner earned her Ph.D. in TESOL from UCF where she
taught French for 17 years. She is now an Instructional
Specialist at UCF’s Faculty Center for Teaching and Learning
and has redesigned aspects of her French courses using AI.
She is interested in helping faculty use the technology to
facilitate their teaching.
