ENG reading Annotations
“For one particular assignment related to the novel ‘Persepolis,’ I had students research
prophets,” Wheless explained, because the main character fantasizes about being a prophet. But,
she told me via email, internet searches that incorporated A.I.:
Gave students jewels such as “the Christian prophet Moses got chocolate stains out of T-shirts”
— I guess rather than Moses got water out of a rock(?). Demonstrates A.I.’s potential to generate incorrect
or misleading information, which students may accept as truth. And let me tell you, eighth graders wrote
that down as their response. They did not come up to me and ask, “Is that correct? Moses is
known for getting chocolate stains out of T-shirts?” They simply do not have the background
knowledge or indeed the intellectual stamina to question unlikely responses. Suggests that
overreliance on A.I. may hinder students’ ability to think critically and question inaccuracies.
After I wrote a series in the spring about tech use in K-12 classrooms, I asked teachers about
their experiences with A.I. because its ubiquity is fairly new and educators are just starting to
figure out how to grapple with it. I spoke with middle school, high school and college instructors,
and my overall takeaway is that while there are a few real benefits to using A.I. in schools — it
can be useful in speeding up rote tasks like adding citations to essays and doing basic coding —
the drawbacks are significant. Acknowledges both the potential and the limitations of A.I. in educational settings.
The biggest issue isn’t just that students might use it to cheat — students have been trying to
cheat forever — or that they might wind up with absurdly wrong answers, like confusing Moses
with Mr. Clean. The thornier problem is that when students rely on a generative A.I. tool like
ChatGPT to outsource brainstorming and writing, they may be losing the ability to think
critically and to overcome frustration with tasks that don’t come easily to them. Raises concerns
about A.I.’s impact on students’ perseverance, intelligence, and problem-solving skills.
Sarah Martin, who teaches high school English in California, wrote to me saying, “Cheating by
copying from A.I. is rampant, particularly among my disaffected seniors who are just waiting
until graduation.”
When I followed up with her over the phone, she said that it’s getting more and more difficult to
catch A.I. use because a savvier user will recognize absurdities and hallucinations and go back
over what a chatbot spits out to make it read more as if the user wrote it herself. But what
troubles Martin more than some students’ shrewd academic dishonesty is “that there’s just no
grit that’s instilled in them. There’s no sense of ‘Yes, you’re going to struggle, but you’re going to
feel good at the end of it.’” Indicates a decline in students’ willingness to engage in challenging tasks without
immediate answers.
She said that the amount of time her students are inclined to work on something that challenges
them has become much shorter over the seven years she’s been teaching. There was a time, she
said, when a typical student would wrestle with a concept for days before getting it. But now, if
that student doesn’t understand something within minutes, he’s more likely to give up on his
own brain power and look for an alternative, whether it’s a chatbot or asking a friend for help.
Students aren’t giving up because they’re lazy, Martin said, but because they’re quick to assume
they’re not smart if they can’t grasp certain concepts right away; it’s almost as if the speed of
available technology is making them assume that their human brains should have all the
answers. They worry that their friends will make fun of them for not catching on fast enough.
“It’s avoiding the peer judgment that they anticipate, whether it’s real or not,” she said. These
teenagers think: “My friends are going to see I don’t get it. They’re going to think I’m stupid.”
Many instructors have wised up to student use of A.I. and have already changed their methods
of instruction, in some cases relying less on assignments that are completed outside of the classroom,
or updating their coursework to make cheating more difficult. Teachers are adjusting assessment
methods to counter A.I.’s influence and encourage authentic learning. Several English teachers told me
that there are fewer accurate plot summaries about newer books, so it’s harder to get generative
A.I. to write a good essay about a book written in 2023 than about “The Catcher in the Rye.”
Teachers have also tried to A.I.-proof their tests. Jerald Hughes, an associate professor of
information systems at the University of Texas Rio Grande Valley, told me that in his coding
classes, he has replaced traditional quizzes with an in-class game that requires quick responses.
Shows how educators are innovating to ensure genuine student engagement and learning. “It’s like a Space
Invaders game that you’ve only got a few seconds to get to the right answer,” he told me. And
students need a perfect score to pass.
Hughes, who is also the associate dean for undergraduate studies, told me that he also
customizes his test questions. For instance, instead of asking his students to create a disaster
plan for an I.T. center, he presents them with a scenario like this: You’re the C.E.O. of a small
trucking firm located in Port Isabel, on the Texas coast. Make a disaster plan for this company.
Chatbots, he said, cannot offer this level of detail in a satisfying way, at least not yet.
Hughes sees this approach as preparing his students for job interviews, in which basic facts will
have to be at their fingertips and they won’t be able to rely on A.I.
The educators I spoke to seem to have the right attitude of skeptical practicality: They know A.I.
isn’t going away, so they aren’t banning it from their classrooms, but they are mostly
unconvinced of its transformative properties and aware of its pitfalls. There’s a reason only 6
percent of American public school teachers think that A.I. tools produce more benefit than
harm. Reflects widespread skepticism and cautious adoption of A.I. in classrooms.
I’m more worried about the policymakers who appear to have drunk the Kool-Aid on artificial
intelligence: Some seem to believe that we’re peering into a kind of open maw that will
eventually swallow all of our jobs and educational processes, so we better acquiesce, or else.
In July, my newsroom colleague Dana Goldstein reported that the Los Angeles Unified School
District had agreed to pay a start-up called AllHere “up to $6 million to develop Ed,” a chatbot
that “would direct students toward academic and mental health resources, or tell parents
whether their children had attended class that day, and provide their latest test scores.”
The school district’s superintendent, Alberto Carvalho, crowed about the potential of this new
technology. He appeared at Arizona State University’s annual summit with Global Silicon Valley
on a panel titled “Bright Spots: K-12 Leaders’ Guide to Embracing a Generative A.I. World.” But
as Goldstein reported: “Just two months after Mr. Carvalho’s April presentation at a glittery tech
conference, AllHere’s founder and chief executive left her role, and the company furloughed
most of its staff. AllHere posted on its website that the furloughs were because of ‘our current
financial position.’” Reporting in July for the education website The 74, Mark Keierleber wrote
that right before AllHere went belly up, a whistle-blower tried to warn the school district that the
Ed chatbot “violated bedrock student data privacy principles.” Raises ethical and privacy concerns
regarding A.I. implementation in schools without proper oversight.
If policy experts think that A.I. can part the Red Sea, maybe our students aren’t the only ones
who need to develop their critical-thinking skills. JAW DROPPED. Great ending.