AI-assisted Writing Tools
Mina Richards
California State University, Los Angeles
[email protected]
Abstract
This study aims to introduce the recent developments of ChatGPT as an AI-assisted writing tool and the
novel AI Literacy framework, which should facilitate the optimal utilization of Generative AI technology.
The framework was initially tested with academics, leading to the identification of four constructs:
application, accountability, authenticity, and agency, all of which underpin AI literacy. Subsequently,
this study tested the model with 50 students enrolled in business communication courses at a large state
university in the United States. The primary focus of the study was to adapt the model in higher education
settings while examining the AI Literacy constructs among students. The study presents the results of the
analysis and provides recommendations for incorporating ChatGPT into the classroom. Furthermore, this
research aims to share knowledge and principles with Latin American business communication educators
and practitioners, with the objective of advancing practical curriculum design and workplace application.
Keywords
AI-Assisted Writing, Generative AI, ChatGPT, AI Literacy, Large Language Models, Natural Language
Processing
Introduction
Large Language Models (LLMs) have a history dating back to the 1980s. Early models were used for natural
language processing (NLP) but were limited in capturing the dependencies of natural language. The introduction of
neural language models in the 2000s brought significant advancements in NLP. Artificial intelligence (AI)
has advanced rapidly in recent years, leading to the development of AI-assisted writing tools
such as ChatGPT. Developed by OpenAI, ChatGPT is a leading LLM with 175 billion parameters that analyzes
natural language and produces human-like textual responses. As an advanced language model, it uses
Generative AI technology to assist users in generating written content in response to prompts. Integrating
such tools in educational settings holds great potential for enhancing students' writing skills and fostering
AI literacy. While ChatGPT has gained rapid adoption in various fields, its application in education remains
controversial. Schools are debating whether to restrict or leverage its use and have considered the impact
on teaching strategies, academic integrity, assessments, and students' critical thinking. Using generative AI
raises concerns about the authenticity of the student's work.
This paper briefly discusses ChatGPT as an AI-assisted writing tool and the advancements of the Federal Trade
Commission (FTC) and the European Union (EU) regarding AI consumer protection. The paper also discusses the
newly introduced AI Literacy model and a first test conducted with undergraduate students to confirm the
utility of the model constructs. Finally, it highlights the need for academic governance of AI-
assisted writing and the need to address trusted content and authorship. The study also aims to extend its
findings and recommendations to Latin American business communication educators and practitioners.
The research seeks to advance practical curriculum design and workplace application in Latin
American contexts by sharing knowledge and principles.
Literature Review
Large Language Models (LLMs) have a rich history dating back to the early days of computing. The first
LLMs were developed in the 1980s and 1990s (Jacob & Shantanu, 2022; Scholten, 2023). Before the
development of deep learning, two popular models used for natural language processing were the Hidden
Markov Models (HMMs) and N-Gram models (Tamoghna, 2023). According to Tardif (2023), the
development of Large Language Models has its roots in early natural language processing (NLP) and
machine learning research.
The early LLMs could not capture the structure of natural language due to limited computational power and
data availability. The models were simplistic and could not associate word dependencies as they occur in natural
language. Word prediction was constrained to N-Grams, typically with N between one and three words, based
on the possible sequences. An N-Gram model represents words in sequence to find and predict single words or
blocks of words (AI Chat, 2023); the concept is similar to the auto-complete feature in text editors.
Researchers in computational linguistics, NLP, and AI have since worked on modern advances using longer
N-Gram approaches, which allow for more accurate word predictions by relying on large datasets and
significant computational power.
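To make the N-Gram idea concrete, the sketch below (in Python, using a made-up toy corpus chosen purely for illustration) counts which word follows each word and predicts the most frequent continuation, much like the auto-complete analogy above.

```python
# Minimal bigram (N = 2) next-word predictor illustrating the N-Gram idea described above.
# The toy corpus is hypothetical and exists only for demonstration purposes.
from collections import Counter, defaultdict

corpus = (
    "the quick brown fox jumps over the lazy dog "
    "the quick brown fox likes the lazy dog"
).split()

# Count how often each word follows each preceding word (one word of context).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent word observed after `word`, as auto-complete would."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("quick"))  # -> 'brown'
print(predict_next("lazy"))   # -> 'dog'
```

Longer N-Gram approaches extend this context window to two or more preceding words, which is why they require much larger datasets and more computational power.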
In the 2000s, the emergence of neural language models was the next milestone in this progression. Neural
language models brought a new direction to NLP by interconnecting simple units, or neurons, arranged in layers,
and these networks grew to exceptional sizes. In 2017, the shift to end-to-end language models, larger datasets
and computation, and the introduction of the Transformer architecture steered the evolution and rapid growth of
LLMs such as ChatGPT (Chen, 2023). In the 2020s, computational power and LLM size have continued to grow,
playing a significant role in developing generative AI offerings. For example, in 2018, language models with
millions of parameters were considered substantial, but by the end of 2022, models like ChatGPT had 175 billion
parameters. ChatGPT, developed by OpenAI, is leading LLM advancements by integrating collaboration from
various investors and research groups.
While the field of generative AI continues to evolve, earlier models performed some of ChatGPT's functions,
although their output quality differed noticeably from human writing. According to Ghosh (2023), it is
important to acknowledge that developing large language models is a collaborative effort involving multiple
research groups and companies. OpenAI's contribution has been significant, consolidating advancements,
massive datasets, and releasing models at an accelerated pace. The evolution of GPT-3 to GPT-4 is
considered the most significant growth of deep learning toward creating generative AI models. Generative AI
(GenAI) is artificial intelligence technology that can produce various types of content, including text, imagery,
audio, synthetic data, or other media, in response to prompts (Lawton, 2023). Generative models learn the
patterns and structure of input data and then generate new content that resembles the training data but
introduces novelty.
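As a hedged illustration of this prompt-to-content pattern, the sketch below uses the open-source Hugging Face transformers library with the small, publicly available gpt2 checkpoint; the library and model choice are assumptions made only for demonstration and are not the systems discussed in this paper.

```python
# Illustrative sketch: a small generative language model continuing a text prompt.
# Assumes `pip install transformers torch`; gpt2 is a small public model, not ChatGPT.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "AI-assisted writing tools can help students because"
outputs = generator(prompt, max_new_tokens=40, num_return_sequences=1, do_sample=True)

# The model produces new text that resembles its training data rather than copying it.
print(outputs[0]["generated_text"])
```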
ChatGPT, a generative AI prototype, was publicly introduced in November 2022 and quickly spread through the
general population. It made headlines as a novel invention capable of conversing with humans and supporting
research. Within weeks, it reached millions of users, producing one of the fastest adoption curves of any
technology. Figure 1 (eMarketer, 2023) shows the number of people already familiar with ChatGPT in the
U.S. as of February 2023.
In education, ChatGPT is being scrutinized for safety and accuracy, leading to skeptical responses in more
structured environments. Since ChatGPT can produce essays and written text, its application to schoolwork has
generated many disagreements and no consensus. On the administrative side, the need to continually adapt
teaching and assessment methods is being considered. For professors, the assessment of written assignments
can also benefit from ChatGPT, particularly in providing formative feedback to many students.
Generative AI and Regulations
Federal Trade Commission Ethical Considerations
The Federal Trade Commission (FTC) promotes competition and protects and educates consumers on
various issues, including AI. The discussion of AI regulation addresses consumers and individuals
affected by AI practices. The educational sector falls outside the purview of the FTC and other agencies,
leaving decisions to education leaders. With the introduction of generative AI, the FTC is trying to regulate
deceptive AI practices toward consumers and prevent monopolistic business behavior. The FTC has jurisdiction
over various activities, including advertising, marketing, and product safety. The FTC has been active in
artificial intelligence (AI) since 2019, when it issued its first report on AI and consumer protection. The
guidelines identified several potential risks associated with AI, as follows (FTC, 2022).
Bias: AI systems can be biased in their decision-making, leading to discrimination against certain groups
of people.
Privacy: AI systems can collect and use large amounts of personal data, which can raise privacy
concerns.
Commercial surveillance: AI tools can be used to track and monitor individuals' online activity,
physical locations, and even conversations.
To address AI risks, the FTC has taken several steps, including guidance documents, consumer protection
measures, and enforcement actions against companies violating FTC laws in connection with AI. The
work is ongoing, and the agency continues to monitor AI development to identify consumer risks. The
FTC also works with other federal agencies, such as the Department of Justice and the National Institute
of Standards and Technology, to develop a coordinated approach to regulating AI. In addition, it has
taken initiatives to control credit practices that involve AI in support of the Fair Credit Reporting Act
(FCRA) and the Equal Credit Opportunity Act (ECOA). Ajao (2021) notes that the FTC forbids using
racially biased or unexplainable algorithms for consumer credit, employment, housing, and insurance.
The latest initiatives focus on the design of generative AI and machine learning products (State et al.,
2023).
The European Union (EU) also protects consumers in its AI initiatives. The EU has proposed new laws
and regulations to fully utilize AI and reap its rewards during the Digital Decade. The "Coordinated Plan
on AI" advances strategic alignment, policy action, and investment acceleration. The AI Act is the first
proposal in the world for a legal framework governing the use of AI. Specifically, the Act will establish a
risk-based framework to regulate AI-related applications, products, and services.
In light of the political context, the European Commission proposes regulation based on strategies to
accelerate investment in AI, act on AI strategies, and align AI policy with global challenges (European
Commission, 2021a). The specific goals include the following.
• Ensure that AI systems placed on the Union market and used are safe and respect existing laws on
fundamental rights and Union values.
According to the European Commission (2021b), there are several measures to shape Europe's digital AI
future. The framework covers short- and long-term goals, management measures, and financial impact
to assess the effect of the regulation and policy implementation. It aims to balance rules for
communications networks, content, technology, industry, and entrepreneurship. The budgetary impact
relates to the new tasks assigned to the Commission, including support for establishing an EU AI Board.
The FTC and EU are concerned with data privacy and security regarding AI use in schools; however, their
jurisdiction excludes generative AI as it relates to learning applications.
In education, the emergence of AI-assisted writing has diverse implications for students' work because of
plagiarism and possibly incorrect information. The new generation of AI tools is designed to be user-
friendly and to generate text on demand, enticing users to converse online and create close-to-human-like
papers (Terry, 2023). ChatGPT is widespread and popular as a new AI tool, yet many schools are blocking it as
a learning tool. While enthusiasm for AI-assisted writing is worldwide, its use is both beneficial and
controversial, raising uncertainty about the ethics of authorship. Reliance on ChatGPT raises concerns in
higher education about diluting problem-solving and diminishing the critical thinking skills at the core of a
good education.
Terry (2023) suggests that there should be a division between assignments where the use of AI as a writing
assistant is encouraged and assignments where AI is not beneficial. He argues that colleges should prepare
students for the future by promoting AI literacy but also emphasizes that AI is not the only avenue to
succeed in school. That means that education systems should move away from traditional take-home essays
and explore alternative forms of assessment, such as oral exams, in-class writing, or new types of
schoolwork less susceptible to AI influence. There is a lack of instructional policies on how best to utilize AI,
and at the same time, there is insufficient control to prevent AI from obstructing critical thinking exercises.
The uncertain middle ground leaves academics wondering how to proceed while advocating a well-thought-
out approach to integrating AI into education. Terry also supports developing AI literacy while ensuring
it does not replace critical thinking skills. Educational institutions can better adapt their coursework to
modern AI by reevaluating an AI literacy framework.
While research on ChatGPT and similar tools predicts all-inclusive use for writing tasks, the trends offer
valuable insights into fields like biomedical research and scientific publishing (Raman et al., 2023). More
recently, Cardon et al. (2023) advocated that AI-assisted writing will dominate business communication,
and instructors must conceptualize how to teach the field. The authors also discuss establishing a
community of practice or a group of academics and intellectuals to brainstorm the ethical, political, and
economic framework suitable to ChatGPT. Schools can also reflect on the issues of academic integrity,
educational technologies, creativity, and originality in content creation, research, and service. The idea is
to develop a unified framework and avoid silo thinking. The authors suggest adapting the AI Literacy model
into a list of guiding questions, which can prepare students to build AI literacy and become acquainted with
assessment practices.
Similarly, collaborating with the industry to establish and design ethical models for AI use should be a
priority. Cardon et al. proposed the first AI Literacy model for communicators to effectively incorporate AI-
assisted writing tools like ChatGPT in education. This framework aims to give students the necessary
knowledge and skills to understand, evaluate, and ethically use AI technologies. The framework highlights
the importance of integrating AI-assisted writing tools into communication practices to enhance
effectiveness. It emphasizes the need to maintain authenticity and agency, ensure that human input
logically interacts with AI tools, and establish personal accountability (van Dis et al., 2023). Additionally, it
highlights the significance of relying on trustworthy and credible sources of information.
It is also critical to research the relationship between AI use and the future of knowledge workers in the
workplace. Academic researchers could focus on rethinking career readiness using the ubiquity of
generative AI and other tools for entrepreneurship, business strategy, and industry compliance and
regulation, among other topics.
Methodology
In Cardon et al. (2023), the AI Literacy framework was initially tested with academics, identifying four
constructs: application, accountability, authenticity, and agency. They recommended trying the model with
administrators, students, and other practitioners, since their research tested it only with instructors. To
arrive at the AI Literacy model constructs, the authors administered a survey asking closed and open
questions to 343 instructors across the United States. The authors also encouraged student testing to
capture their views as a benchmark to help develop instructional practice in business communication
courses. Following this opportunity, the researcher invited undergraduate students to an in-class exercise
to "test-drive" ChatGPT. To further explore the practical implications of the AI Literacy framework, this
study involved 50 students enrolled in two business communication courses at a large state university in
the United States. The objective was to adapt ChatGPT to higher education settings and evaluate the students'
understanding and application of the AI Literacy constructs.
Based on the model constructs, the researcher developed a set of exercises and questions to test the model's
feasibility. The goal was to determine how students would use ChatGPT and their expectations regarding
the appropriate use of AI-assisted writing for course assignments. The students sat at tables of five to engage
in group discussions after the ChatGPT exercise. The exercise lasted two hours, and their active
participation earned them 50 bonus points. The students were interested in exploring ChatGPT and its
viability, although they were concerned about potential academic dishonesty. To familiarize themselves
with ChatGPT, the participants received a list of prompts and four questions to reflect on their ChatGPT
experiences after the exercise. Some exercise prompts were adapted from ideas for updating course activities
with AI in mind (UCLA, 2023).
Before engaging in the exercise, the students created an account on openai.com to access the free version of
ChatGPT. The list of exercise prompts is as follows; a scripted equivalent of the first prompt is sketched after
the list.
1. Pose a question to ChatGPT, such as a homework assignment query asking about "the similarities
between direct and indirect approaches to writing." Assess the quality of its response.
2. Experiment with modifying the prompt to observe how it impacts the output from ChatGPT.
Reword the sentence above at least three different ways to obtain multiple results.
3. Submit results from #2 to Grammarly and Turnitin for possible plagiarism detection.
4. Request ChatGPT to synthesize text from lengthy documents, such as a 1,000-word article, and
generate a 10-slide PowerPoint presentation with headings and bullet points to make a
compelling case for action.
5. Use the #4 article to generate three discussion prompts.
6. Use the previous output to have ChatGPT produce three questions at the lower levels of Bloom's
Taxonomy.
7. Request ChatGPT to write an email introducing yourself to the upcoming Eagle Tank school club. Give your
major, GPA, and desired career field.
8. Ask ChatGPT to respond to a short 300-word essay. Ask for a specific style, such as emulating a
professor's feedback on the essay's positive and negative aspects. Use your last discussion assignment
on innovation as the input.
9. Take the ChatGPT "professor feedback" from #8 and review the output for errors or
improvements.
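For readers who wish to reproduce a prompt such as #1 programmatically instead of through the web interface, the sketch below uses OpenAI's Python client; the package version, model name, and API-key setup are assumptions about the reader's environment, and the classroom exercise itself used only the free web version.

```python
# Sketch of sending exercise prompt #1 to a ChatGPT-family model through the API.
# Assumes `pip install openai` (v1+) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "What are the similarities between direct and indirect approaches to writing?"

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model name; any available chat model would work
    messages=[{"role": "user", "content": prompt}],
)

# Print the generated answer so its quality can be assessed, as prompt #1 asks.
print(response.choices[0].message.content)
```

Rewording the prompt string and rerunning the call mirrors prompt #2, which asks how changes in phrasing affect the output.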
After the exercise, the students had half an hour to answer four open-ended questions designed to elicit
sufficient reflection. The questions were mapped directly to the AI Literacy model to test each construct.
1. Explore with your group how ChatGPT can enhance writing and problem-solving skills
(Application).
2. Reflect on how ChatGPT could benefit business writing students in the course (Accountability).
3. Think of the email draft in #7. Were you satisfied with the results or would you change
something? (Agency)
4. Reflect on your critical thinking abilities during the exercise. Did you review or improve the
“professor feedback” output? (Authenticity)
For exercises #1-3, the students followed the specific prompts related to "the similarities between direct and
indirect approaches to writing." The phrase was rephrased at least three times, creating iterations to observe
whether the tool would yield consistent responses. Although the eight responses were on-topic, they were
constructed and worded differently. Despite the variations in phrasing and the diverse sample of students,
all responses differed. The task asked the students to upload their responses to Canvas and later to
Grammarly. Initially, Turnitin did not detect any instances of plagiarism; however, as more responses were
uploaded to Canvas, Turnitin began reporting similarity indices ranging from 3% to 45%, shown in its AI
detection tool only. Surprisingly, Grammarly Premium provided minimal suggestions to enhance clarity
and correctness, indicating that all the responses were well written and fluent.
For exercise #4, ChatGPT suggested a slide deck summarizing the article. Although ChatGPT cannot
create slides, it provided key points for each slide. The students were then asked to present the slide
findings in a table format, and the output was well organized around key points. The results of exercises
#5 and #6 are particularly interesting for educators, who often search for discussion questions for Socratic
questioning or simply online discussions. The questions ChatGPT produced are intriguing and spark ideas,
although they could be revised; they serve as inspiration. Next, the students were asked to create three more
questions based on Bloom's Taxonomy categories. These questions all provide starting points to establish
class discussions, elevate the conversation, and delve into new issues through progressive abstraction.
Exercises #7-9 were completed according to the instructions and offered no novelty other than producing a
different, unique writing draft for each of the 50 students sampled. The following statements further
elaborate on the students' reflections and positive feedback regarding the use of ChatGPT for learning. It is
essential to outline these expectations explicitly in the syllabus and engage in classroom discussions,
acknowledging that students may encounter varying expectations from different instructors.
The post-exercise discussion allowed students to reflect on ChatGPT, a tool pushing the edge of significant
educational change. Mhlanga (2023) argues that generative tools are revolutionary and crucial in shaping
the academic future. Like search engines, AI generates answers and obtains results without one going
through the process of researching and comparing notes. Students further elaborated on the positive and
negative implications. Concerns were raised about increasing plagiarism similarity indices and the long-
term effects on their motivation to study, engage, and retain knowledge in class. It is important to
recognize the potential benefits as well. Students also conveyed that AI could save them time and effort on
research tasks, offer fresh perspectives on problem-solving, and generate content for analysis and
critique. Like search engines, AI-assisted writing can assist in information finding for decision-making.
While ChatGPT can write with correct grammar and flow, its output lacks depth and formal citations.
Because its output relies on word patterns and randomness, ChatGPT has limitations. The University of
Wisconsin-Madison (2023), for example, recommends reflecting on the perceived strengths and weaknesses
of AI for teaching and learning.
The students also emphasized that there is no deeper learning if one relies only on ChatGPT findings
without understanding the underlying rationale. Without comprehending why the answers make sense, or
how to expand one's understanding, one merely obtains responses. The learning process requires reasoning
rather than just getting an answer. Education is not solely about obtaining correct answers but also about
understanding and the moral judgment and reasoning behind them (Krügel et al., 2023). Consequently,
AI should be used to enhance learning while safeguarding the traditional learning process. It is crucial to
harness the benefits of ChatGPT and other content generators while ensuring that students actively
participate in learning, develop critical thinking skills, and receive instructor guidance. AI should
complement traditional teaching methods and facilitate learning rather than replace the learning process.
Table 1 shows the comments and their classification for the post-exercise questions. The five participants per
table answered the four questions as a group. Due to time limitations in the classroom, only a few answers
were collected per table, representing the consensus of the table group. In total, there were ten tables
representing the 50 students. The typical answer consisted of one or two comments, which was enough to
observe and interpret the model constructs in context. More positive comments prevailed in the answers.
Agency (negative comments): “Become reliant on AI-generated content instead of engaging in critical thinking, problem-solving, and knowledge acquisition.”
Authenticity (positive comments): “To obtain accurate results, it is crucial to ask the right questions.”
Authenticity (negative comments): “Limits creativity and originality.” “Increase plagiarism scores.”
The responses can be further categorized as positive and negative for analysis by mapping the students'
reflections to the AI Literacy model. On the positive side, application, accountability, and agency received
the most student support, followed by authenticity. On the negative side, accountability received the most
answers, followed by one answer each for the other categories. Based on this analysis, the model tested
successfully. The students found value in applying AI-assisted writing to tasks permitted by the instructor.
They also found value in decision-making and in adhering to equity and fairness, for example through
plagiarism checkers; however, authenticity, meaning the insertion of their own human ideas to produce more
robust communication, needs more training and explanation and obtained only one positive comment. The
negative reflections were distributed with roughly one comment per category, except for accountability,
which received the most. The main concern under accountability is the need to include references and to
ensure the validity and accuracy of those references. By testing the model with students, administrators can
identify the areas needing improvement and the areas that should become primary topics for AI policy
building.
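Purely as an illustration of how such comments can be organized by construct and sentiment, the sketch below tallies the comments quoted in Table 1; it is not the procedure the study used, and the list contains only the comments reported above.

```python
# Illustrative tally of the Table 1 comments by AI Literacy construct and sentiment.
# The data below are only the comments quoted in Table 1, not the full study data.
from collections import Counter

comments = [
    ("Agency", "negative", "Become reliant on AI-generated content instead of engaging "
                           "in critical thinking, problem-solving, and knowledge acquisition."),
    ("Authenticity", "positive", "To obtain accurate results, it is crucial to ask the right questions."),
    ("Authenticity", "negative", "Limits creativity and originality."),
    ("Authenticity", "negative", "Increase plagiarism scores."),
]

# Count comments per (construct, sentiment) pair to see where feedback concentrates.
tally = Counter((construct, sentiment) for construct, sentiment, _ in comments)

for (construct, sentiment), count in sorted(tally.items()):
    print(f"{construct:13s} {sentiment:9s} {count}")
```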
AI Governance in Education
The debate about accepting ChatGPT in the classroom is ongoing. A conservative school of thought
advocates changing how questions are asked in assignments to encourage a more profound learning
experience and involve more personal reflection. AI is here to stay, and one of the challenges is to adopt
these tools and leverage the positive aspects without forgoing the learning process. Education must consider
how to incorporate AI to maximize its benefits. It is also essential to outline expectations explicitly in the
syllabus and engage in classroom discussions, acknowledging that students may encounter varying
expectations from different instructors. Although initial impressions of ChatGPT rest on limited exposure,
the results are significant. ChatGPT is impressive in answering questions, interpreting solutions to problems,
generating essays, and writing program code, to name a few tasks. Nonetheless, these limited observations
show high potential. Because the technology is so new, one can only draw on models or experiences within
its current use; the knowledge base is concentrated on basic experimentation with the tool at the novice
level, and many opinions converge on that conclusion.
The evaluation of ChatGPT's performance and ethics reveals several key findings (Green & Clayton, 2021).
Most notably, the key findings in the literature align with the general impressions conveyed by the sample
students in the class exercise.
First, generative AI tools such as ChatGPT can be helpful in various industries, including education.
Students can use it to generate exam answers, and instructors can develop course content; however, its use
raises ethical concerns, such as determining if the content from ChatGPT should be referenced or if
machines can claim authorship. In terms of monitoring ethical use, plagiarism checkers will be able to
detect AI-generated text and report the percentage borrowed.
Second, ChatGPT's intelligence is driven by machine learning and access to vast data, but it can generate
both accurate text and erroneous text, known as hallucinations. This deceptive quality poses challenges in
assessing the reliability of its outputs. ChatGPT's natural language ability allows it to adapt to roles in business and society,
triggering creative thoughts and leading to different roles in idea creation. Whether ChatGPT should
possess general or specialized intelligence needs consideration, as both have advantages. Responsible use
of ChatGPT is crucial due to its unique capabilities, and criteria should be developed to evaluate its outputs.
The rapid diffusion of ChatGPT has implications for practice and policy, requiring organizations to adapt
and update procedures (Cardon et al. 2023). It is crucial to address limitations such as lack of originality
and vagueness in output and to educate users about them. Developing regulations and policies for using AI-
assisted writing tools is necessary, and international coordination is essential due to their global nature
(Dwivedi et al. 2023). AI practitioners should be mindful of potential biases in ChatGPT and work to
minimize them. Users should be aware of ChatGPT's limitations and be cautious when relying on its
generated text.
Third, ChatGPT offers significant promise and poses challenges. It can enhance productivity, but one must be
vigilant of ethical, moral, and policy concerns (Dwivedi et al. 2023; Ray, 2023). The lack of clear guidelines
for the ethical use of ChatGPT in the educational sector poses challenges, and revised policies are needed to
address inconsistent practices and plagiarism. Due to legal limitations, schools struggle to penalize
deliberate misuse or abuse of such tools. Therefore, new policies are necessary to govern AI-assisted writing
in education. Additionally, international coordination is crucial to harness the benefits of global use and
adoption. Generative tools present opportunities and challenges that require careful consideration of
ethical, practical, and policy aspects (Liebrenz et al. 2023).
Fourth, Zhuo et al. (2023) report that ChatGPT demonstrates superior accuracy and increased robustness
compared to other language models; however, it is susceptible to hallucinations that can alter semantics.
The issue of hallucination, where models generate false or misleading information, is observed in responses
to open-ended questions, indicating unreliable performance. According to Zhuo et al., in terms of
toxicity, ChatGPT exhibits minimal toxicity and demonstrates a reduction compared to other language
models.
Similarly, Rahimi and Abadi (2023) propose that ChatGPT use in academic publishing raises ethical
concerns due to uncertainty in establishing originality and accuracy. Language limitations and reliance on
outdated sources limit its inclusivity and relevance. The likelihood of producing incorrect answers poses
risks in fields like sciences. Improper use without human oversight can compromise reliable statements
and diminish trust. Thus far, international guidelines emphasize human authorship only (Stokel-Walker,
2023). Rahimi and Abadi add that the impact of ChatGPT on publishing has yet to be fully understood and
will depend on responsible use and human intervention to ensure integrity in academic publishing.
Overall, the evaluation highlights that ChatGPT is adaptable to work and academic environments, but there
are unknowns concerning ethics, bias, reliability, hallucinations, and toxicity. While ChatGPT continues to
progress in mitigating bias, there are still limitations and risks associated with AI-assisted writing.
Eventually, all organizations will require an AI policy for ethical considerations and safety. Specifically,
schools must outline student roles and responsibilities in an AI-assisted writing use policy to ensure
effective use. The academic policy must support the school's mission and learning outcomes and define
specific uses and violations. Faculty co-creation and concurrence are critical, including policy revisions to
align with new uses, compliance requirements, and exceptions.
Administrators should draft the AI policy, which should then be approved by faculty and administration. The
draft must disclose the policies, restrictions, and accountabilities of faculty and students, along with
information on violations, consequences, and the escalation process. Each school should have one central
policy, with some variations depending on the discipline and faculty considerations.
The syllabus should incorporate the policy for student guidance.
Conclusions
Over time, researchers in computational linguistics, NLP, and AI have actively pursued advancements in
language models, particularly in the context of speech recognition systems. Limited computational power
and data availability constrained earlier language models; those models lacked accuracy in capturing
the sequence relationships between words in natural language (Scholten, 2023). AI-assisted writing
models still have biases and limitations. Agencies like the FTC and EU focus on protecting consumers from
deceptive practices and risks associated with AI bias, privacy, security, and surveillance. At this point, it
remains to be seen who is responsible for developing ethical policies across all users. Still, academics are
leading in recognizing the need to formulate ChatGPT policies. AI-assisted writing revolutionizes education
by offering personalized and on-demand support for students at all levels. It can make the learning
experience more accessible and effective for students; however, it is essential to note that incorporating AI
in education requires thoughtful consideration to balance leveraging its benefits and maintaining the value
of learning.
The development of AI-assisted writing has yet to reach a mature stage with established models and
communities of practice. Ethical, political, and economic considerations are essential for acceptance and
adoption. The literature suggests creating a division between assignments where AI is encouraged and
where it is not beneficial to the student’s development. For educational institutions, acknowledging AI
literacy and gaining industry collaboration is fundamental to adapting and refining AI literacy models.
The study highlighted the recent development of ChatGPT and the introduction of the AI Literacy
framework. It discussed the results of testing the model with students in higher education, indicating
positive outcomes for writing skills and AI literacy. The study results present several recommendations for
adopting ChatGPT in the classroom. Firstly, providing comprehensive training and resources to instructors
and students is essential to utilize AI-assisted writing tools effectively. Secondly, educators should
emphasize the importance of creating policies to support critical thinking and ethical considerations when
using AI technologies. These recommendations are intended to assist in effectively integrating ChatGPT
and promoting AI literacy among students. Ultimately, these findings highlight the practical application
of AI-assisted writing in business communication education and the workplace, with particular emphasis
on its relevance to Latin American countries.
References
AI Chat (2023). N-grams. Retrieved from https://deepai.org/machine-learning-glossary-and-terms/n-
gram
Ajao, E. (2021, October 19). FTC pursues AI regulation, bans biased algorithms. TechTarget. Retrieved from
https://www.techtarget.com/searchenterpriseai/feature/FTC-pursues-AI-regulation-bans-
biased-algorithms
Chen, F. (2023, March 6). A Brief history of Large Language Models (LLM). LinkedIn. Retrieved from
https://www.linkedin.com/pulse/brief-history-large-language-models-llm-feiyu-chen
Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., ... & Wright, R. (2023). “So what
if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications
of generative conversational AI for research, practice and policy. International Journal of
Information Management, 71, 102642.
eMarketer. (February 8, 2023). ChatGPT awareness in the United States in 2023, by level of education
[Graph]. In Statista. Retrieved May 24, 2023, from https://www-statista-
com.mimas.calstatela.edu/statistics/1369168/knowledge-of-chatgpt-by-education-in-us/
European Commission. (2021a). Artificial Intelligence Act. Regulation of the European Parliament and of
the Council Laying Down Harmonized Rules on Artificial Intelligence and Amending Certain Union
Legislative Acts. Retrieved from https://digital-strategy.ec.europa.eu/en/library/proposal-
regulation-laying-down-harmonised-rules-artificial-intelligence
European Commission. (2021b). Communication on Fostering a European Approach to Artificial
Intelligence. Retrieved from https://digital-strategy.ec.europa.eu/en/library/communication-
fostering-european-approach-artificial-intelligence
Cardon, P., Fleischmann, C., Aritz, J., Logemann, M., & Heidewald, J. (2023). The Challenges and
Opportunities of AI-assisted Writing: Developing AI Literacy for the AI Age. Business and
Professional Communication Quarterly, 0(0). https://doi.org/10.1177/23294906231176517
Jacob, J., & Shantanu, N. (2022, October 26). LLMs, a brief history and their use cases. Exemplary.ai.
Retrieved from https://exemplary.ai/blog/llm-history-usecases
Ghosh, A. (2023). The evolution of generative AI: A deep dive into the lifecycle and training of advanced
language models. LinkedIn. Retrieved from https://www.linkedin.com/pulse/evolution-
generative-ai-deep-dive-life-cycle-training-aritra-ghosh
Green, C. & Clayton, A. (2021). Ethics and AI Innovation. International Review of Information Ethics, 29.
https://doi.org/10.29173/irie417
Lawton, G. (2023). What is generative AI? Everything you need to know. Techtarget Enterprise AI.
Retrieved from https://www.techtarget.com/searchenterpriseai/definition/generative-AI
Liebrenz, M., Schleifer, R., Buadze, A., Bhugra, D., & Smith, A. (2023). Generating scholarly content with
ChatGPT: Ethical challenges for medical publishing. The Lancet Digital Health, 5(3), e105-e106.
Krügel, S., Ostermaier, A., & Uhl, M. (2023). The moral authority of ChatGPT. arXiv preprint
arXiv:2301.07098.
Mhlanga, D. (2023, February 11). Open AI in education, the responsible and ethical use of ChatGPT towards
lifelong learning.
State, C., Chakrabarti, P., Hughes, D., & Rippy, S. (2023, March 21). United States: Everyone’s talking AI,
including the FTC: Key takeaways from the FTC’s 2023 AI Guidance. Retrieved from
https://www.mondaq.com/unitedstates/privacy-protection/1295706/everyones-talking-ai-
including-the-ftc-key-takeaways-from-the-ftcs-2023-ai-guidance#
Scholten, A. (2023, February 17). The Power of Large Language Models: Advances, applications, and
challenges. Medium. Retrieved from https://medium.com/@sas_155/the-power-of-large-
language-models-advances-applications-and-challenges-59a08939fece
Stokel-Walker, C. (2023). ChatGPT listed as author on research papers. Nature, 613(7945), 620–
621. https://doi.org/10.1038/d41586-023-00107-z
Tamoghna, D. (2023, April 21). Understanding Large Language Models (LLMs), history, mechanisms,
and applications. LinkedIn. Retrieved from https://www.linkedin.com/pulse/understanding-
large-language-modelsllms-history-mechanisms-das
Tardif, A. (2023, April 22). Unveiling the power of Large Language Models (LLMs). Unite.AI. Retrieved
from https://www.unite.ai/large-language-models/
Terry, O. K. (2023, May 13). I am a student. You have no idea how much we are using ChatGPT. No
professor or software could pick up on it. Chronicle of Higher Education. Retrieved from
https://www.chronicle.com/article/im-a-student-you-have-no-idea-how-much-were-using-
chatgpt
Raman, R., Lathabhai, H., & Diwakar, S. (2023). Early research trends on ChatGPT: A review based on
Altmetrics and science mapping analysis. Retrieved from Research Square
https://doi.org/10.21203/rs.3.rs-2768211/v1
Rahimi, A., & Abadi, A. T. (2023). ChatGPT and Publication Ethics. Archives of Medical Research, 54(3),
272–274. https://doi.org/10.1016/j.arcmed.2023.03.004
Ray, P. P. (2023). ChatGPT: A comprehensive review on background, applications, key challenges, bias,
ethics, limitations and future scope. Internet of Things and Cyber-Physical Systems.
Richter, F. (February 6, 2023). Which Sectors Are Working With OpenAI? [Digital image]. Retrieved from
https://www-statista-com.mimas.calstatela.edu/chart/29244/number-of-companies-using-
open-ai-in-their-business-processes-worldwide/
UCLA. (2023). The use of generative artificial intelligence in teaching and learning. Center for the
Advancement of Teaching. Retrieved from https://teaching.ucla.edu/resources/ai_guidance/
University of Wisconsin-Madison. (2023). Considerations for using AI in the classroom. L&S Instructional
Design Collaborative. Retrieved from https://idc.ls.wisc.edu/guides/using-artificial-intelligence-
in-the-classroom/
van Dis, E. A. M., Bollen, J., Zuidema, W., van Rooij, R., & Bockting, C. L. (2023). ChatGPT: Five priorities
for research. Nature, 614(7947), 224–226. https://doi.org/10.1038/d41586-023-00288-7
Zhuo, T. Y., Huang, Y., Chen, C., & Xing, Z. (2023). Exploring AI ethics of ChatGPT: A diagnostic
analysis. arXiv preprint arXiv:2301.12867.