
Week 6 Formative assessment

Expanded Literature Review

Moxira Gavachayeva

MA TESOL Program (UC S1 2025)

Webster University in Tashkent

TESL 5740: Teaching English for Academic Purposes

Instructor: Shohista Nurbayeva

February 15, 2025


Statement of Originality and Assistance.

This assignment is completely original and is all my own work. I used Microsoft Word to compose this assignment and used Grammarly to help check my spelling, grammar, and punctuation. In the early stages of writing, I made use of ChatGPT to help generate some ideas, but the final product is all my own work. The prompt I gave to ChatGPT was “What kind of structure is appropriate for literature review in which 3 articles are compared?” and the response it provided was “If you are comparing and contrasting three articles in your literature review, the most appropriate structure would be thematic structure because your three articles discuss the same topic from different angles.”

Disclosure of Prior Existing Material.

To develop the summary for the Week 6 assignment, I made use of my Week 2 and Week 4 article summaries, as this assignment requires the summaries of the previous articles as well. For this assignment, 95% is my original contribution.


Literature Review

Hockly, N. (2019). Automated writing evaluation. ELT Journal, 73(1), 82–88. https://doi.org/10.1093/elt/ccy044

Lee, Y. J. (2024). Can my writing be polished further? When ChatGPT meets human touch. ELT Journal, 78(4), 401–413. https://doi.org/10.1093/elt/ccae039

Javier, D. R. C., & Moorhouse, B. L. (2023). Developing secondary school English language learners' productive and critical use of ChatGPT. TESOL Journal, 14(3), e755. https://doi-org.library3.webster.edu/10.1002/tesj.755

Researchers have become very interested in the use of artificial intelligence (AI) in language acquisition, especially in the field of writing, where AI-powered tools like ChatGPT and Automated Writing Evaluation (AWE) are increasingly being used. These technologies can help students with grammar and vocabulary tweaks, provide instant feedback, and enhance their overall writing coherence. However, while some researchers contend that AI promotes self-directed learning and metacognitive growth, others caution that relying too much on AI-generated feedback may impair students’ ability to critically evaluate their own writing. To explore these contrasting perspectives, this literature review compares and contrasts three studies that examine AI’s role in writing instruction within English as a Foreign Language (EFL) and English Language Teaching (ELT) contexts. Javier and Moorhouse (2023) examine the incorporation of ChatGPT in secondary classrooms, stressing its potential for productive and critical engagement, whereas Lee (2024) focuses on the interplay between AI-assisted writing and human scaffolding, highlighting its impact on collaborative learning and metacognitive awareness. Conversely, Hockly (2019) provides a broader evaluation of AWE tools, examining their benefits and limitations in delivering automated feedback. Although these studies share a common interest in AI’s impact on writing development, they differ significantly in their focal points, methodological frameworks, and interpretations of AI’s efficacy. This paper critically analyzes the key similarities, significant differences, and the progression of AI in writing instruction over time, elucidating its implications for language learning and pedagogy.

While analyzing the three articles, I noticed several similarities among them. One key similarity is that all three studies acknowledge the potential benefits of AI in enhancing students’ writing proficiency. For example, Javier and Moorhouse (2023) suggest that ChatGPT can serve as a helpful tool for developing both productive and critical engagement in writing. Similarly, Lee (2024) argues that AI, when paired with human scaffolding, significantly enhances students’ writing through interactive and iterative feedback. Even Hockly (2019), the most cautious of the three, acknowledges the value of automated feedback while warning that AWE tools alone cannot adequately evaluate creativity and rhetorical effectiveness (p. 3). Taken together, these studies show that AI can be advantageous for improving writing skills, especially when it is utilized as a complementary instructional tool.

The second similarity is that all three writers emphasize that human intervention is essential. Just as Lee (2024) asserts that teacher scaffolding is crucial in guiding students to interpret and apply AI-generated feedback effectively (p. 406), Javier and Moorhouse (2023) also stress the importance of critical engagement, warning that students may otherwise become overly dependent on ChatGPT (p. 4). In a similar manner, Hockly (2019) mentions that automated feedback alone cannot adequately assess rhetorical effectiveness, making human oversight necessary (p. 3). Thus, AI is best used as a supportive tool rather than a replacement for human feedback.

The studies collectively suggest that AI-powered writing tools can enhance student engagement, provide real-time feedback, and improve writing skills when used effectively alongside teacher guidance. However, despite these shared insights, the studies differ in their approaches to AI implementation, its level of effectiveness, and the extent to which human intervention remains necessary. One of the biggest differences among the three studies is their focus on different AI applications. While Javier and Moorhouse (2023) investigate the incorporation of ChatGPT in secondary school classrooms, emphasizing its role in fostering both productive and critical engagement with AI, Lee (2024) examines how ChatGPT facilitates academic essay writing in collaborative university settings, with an emphasis on teacher scaffolding. On the other hand, Hockly (2019) concentrates on Automated Writing Evaluation (AWE) tools, which are mostly designed for automated assessment rather than interactive learning. Therefore, the third study provides a more comprehensive critique of AI-generated feedback in ELT, whereas the first two studies examine AI’s interactive role in education.

Another difference among these three articles is their methodological approaches. Javier and Moorhouse (2023) adopt a classroom-based case study, conducting intervention lessons in a secondary school in the Philippines. By contrast, Lee (2024) implements a longitudinal study in a Korean university, exploring how students interact with AI-generated feedback over multiple writing drafts. Meanwhile, Hockly (2019) uses a theoretical analysis rather than empirical research to assess the efficacy of AWE tools in writing instruction. This comparison highlights that although the first two articles provide specific classroom insights, the third presents a broader critique of AI’s role in writing evaluation.

While these studies highlight varying perspectives on AI’s effectiveness, they also reflect a huge change in how AI has been integrated into writing instruction over the years. Early research focused mostly on AWE systems, which were designed to assess grammatical accuracy and lexical choices using pre-programmed algorithms (Hockly, 2019). However, more recent studies indicate a shift toward interactive AI tools, such as ChatGPT, which facilitate real-time dialogues and collaborative writing (Javier & Moorhouse, 2023; Lee, 2024). This transition suggests that AI has moved beyond basic error detection to become a dynamic learning tool that supports higher-order writing skills such as critical thinking and argument development.


Overall, all three studies examine the function of AI in writing, but they differ in their interpretations and focus. Javier and Moorhouse (2023) stress AI’s potential in fostering interactive learning, whereas Hockly (2019) questions the efficacy of AI-generated feedback in fostering writing progress. Furthermore, research indicates a significant transition from initial automated grammar correction tools to advanced, interactive AI-driven writing assistance, highlighting AI’s growing influence in English Language Teaching (ELT) and English as a Foreign Language (EFL) education.

Reflection

This reflection discusses my use of reporting verbs, hedges, and boosters. In my literature review I used 26 different reporting verbs: examine, focus on, provide, suggest, argue, caution, assert, stress, mention, highlight, contend, acknowledge, enhance, show, warn, emphasize, investigate, concentrate, adopt, implement, use, facilitate, question, elucidate, indicate, and reflect. The reporting verb “suggest” has been used more than once because I wanted to present claims with caution. “Suggest” is commonly used in academic writing to introduce ideas that are not absolute but are backed by evidence. By using it more than once, I wanted to accurately represent studies that do not make definitive claims but instead offer interpretations based on their findings. While writing and editing my literature review, I carefully selected reporting verbs to ensure variety and accuracy in representing each study’s claims. To maintain diversity, I used verbs like “caution” to reflect skepticism, “argue” for well-supported claims, and “mention” for minor points, ensuring that my writing remained both precise and engaging. Some reporting verbs can serve as hedges or boosters on their own (without the support of any other word). To make them clear to the reader of this reflection, I have listed them below.

Booster reporting verbs: contend, stress, argue, show, emphasize, assert, warn, highlight, elucidate.

Hedge reporting verbs: caution, acknowledge, suggest, mention, indicate, question.

In addition to the reporting verbs that serve as hedges or boosters, my text also contains words and phrases (not reporting verbs) that function as hedges or boosters. The hedges are: may, potential, can, cannot adequately, can be, collectively, its level of effectiveness, mostly, more comprehensive, the efficacy, and varying perspectives. The boosters are: increasingly, significantly, critically, significant, essential, crucial, overly, necessary, effectively, biggest, emphasis, broader critique, huge change, a dynamic learning tool, significant transition, and growing influence.

To strengthen my academic writing, I carefully used reporting verbs, hedges, and boosters by balancing caution and confidence. For instance, I employed “suggest” and “stress” to articulate points effectively. Javier and Moorhouse (2023) propose that ChatGPT may serve as a valuable instrument, and my use of “suggest” signals that their assertion is an interpretation rather than a fact, consistent with Hyland and Jiang (2022) on the function of hedges. Javier and Moorhouse (2023) also emphasize the significance of critical engagement, and my use of “stress” reinforces certainty, a choice supported by Yoon and Abdi Tabari (2023), who observe that proficient authors employ boosters judiciously. To exercise caution, I employed hedges such as “may” and “potential” to recognize the advantages of AI while avoiding overgeneralization (Hyland & Jiang, 2022). Conversely, I employed boosters such as “significantly” and “significant” to underscore robust assertions, maintaining assurance without exaggeration (Yoon & Abdi Tabari, 2023). These decisions facilitated a balance between caution and confidence, maintaining the critical and intellectual nature of my writing.
References

1. Hockly, N. (2019). Automated writing evaluation. ELT Journal, 73(1), 82–88. https://doi.org/10.1093/elt/ccy044

2. Lee, Y. J. (2024). Can my writing be polished further? When ChatGPT meets human touch. ELT Journal, 78(4), 401–413. https://doi.org/10.1093/elt/ccae039

3. Javier, D. R. C., & Moorhouse, B. L. (2023). Developing secondary school English language learners' productive and critical use of ChatGPT. TESOL Journal, 14(3), e755. https://doi-org.library3.webster.edu/10.1002/tesj.755

4. Yoon, H., & Abdi Tabari, M. (2023). Authorial voice in source-based argumentative writing: A comparison of L1 and L2 novice writers. Journal of English for Academic Purposes, 61, 101225. https://doi.org/10.1016/j.jeap.2023.101228

5. Hyland, K., & Jiang, F. (2022). Metadiscourse choices in EAP: An intra-journal study of JEAP. Journal of English for Academic Purposes, 60, 101165. https://doi.org/10.1016/j.jeap.2022.101165
