
Received: 27 June 2023 | Revised: 21 August 2023 | Accepted: 18 October 2023

DOI: 10.1111/jnu.12938

PROFESSION AND SOCIETY

The importance of transparency: Declaring the use of generative artificial intelligence (AI) in academic writing

Arthur Tang PhD1 | Kin-Kit Li PhD2 | Kin On Kwok PhD3,4,5,6 | Liujiao Cao MD7 | Stanley Luong PhD1 | Wilson Tam PhD8

1 School of Science, Engineering and Technology, RMIT University, Ho Chi Minh City, Vietnam
2 Department of Social and Behavioural Sciences, City University of Hong Kong, Hong Kong, Hong Kong Special Administrative Region
3 JC School of Public Health and Primary Care, The Chinese University of Hong Kong, Hong Kong, Hong Kong Special Administrative Region
4 Stanley Ho Centre for Emerging Infectious Diseases, The Chinese University of Hong Kong, Hong Kong, Hong Kong Special Administrative Region
5 Hong Kong Institute of Asia-Pacific Studies, The Chinese University of Hong Kong, Hong Kong, Hong Kong Special Administrative Region
6 Department of Infectious Disease Epidemiology, School of Public Health, Imperial College London, London, UK
7 West China School of Nursing/West China Hospital, Sichuan University, Chengdu, China
8 Alice Lee Centre for Nursing Studies, National University of Singapore, Singapore

Abstract
The integration of generative artificial intelligence (AI) into academic research writing has revolutionized the field, offering powerful tools like ChatGPT and Bard to aid researchers in content generation and idea enhancement. We explore the current state of transparency regarding generative AI use in nursing academic research journals, emphasizing the need for authors to explicitly declare the use of generative AI in the manuscript. Out of 125 nursing studies journals, 37.6% required explicit statements about generative AI use in their authors' guidelines. No significant differences in impact factors or journal categories were found between journals with and without such a requirement. A similar evaluation of medicine, general and internal journals showed a lower percentage (14.5%) including information about generative AI usage. Declaring generative AI tool usage is crucial for maintaining transparency and credibility in academic writing. Additionally, extending the requirement for AI usage declarations to journal reviewers can enhance the quality of peer review and combat predatory journals in the academic publishing landscape. Our study highlights the need for active participation from nursing researchers in discussions surrounding the standardization of generative AI declaration in academic research writing.

Correspondence
Kin On Kwok, JC School of Public Health and Primary Care, The Chinese University of Hong Kong, Hong Kong Special Administrative Region, China.
Email: [email protected]

INTRODUCTION

The integration of generative artificial intelligence (AI) into academic research writing has generated substantial hype within the scientific community. AI-powered tools such as ChatGPT and Bard have garnered particular attention from academic researchers since the debut free research release of ChatGPT on 30 November 2022. These AI-powered tools are increasingly being utilized by

Correction added on 7 March 2024, after first online publication: The article type for this paper has been updated.

This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium,
provided the original work is properly cited.
© 2023 The Authors. Journal of Nursing Scholarship published by Wiley Periodicals LLC on behalf of Sigma Theta Tau International.

wileyonlinelibrary.com/journal/jnu | J Nurs Sch. 2024;56:314–318.
researchers to assist them at various stages of the academic research writing process. Generative AI tools assist authors in generating initial drafts by building upon authors' initial ideas and provide valuable suggestions for improving the existing text. With their contextual comprehension and human-like responses, generative AI tools present researchers with a fresh approach to idea generation, expanding the scope of research writing and fostering innovative thinking. The widespread adoption of generative AI tools has undoubtedly transformed the landscape of academic research writing.

While the potential benefits of generative AI tools in academic research writing are undeniable, ethical concerns have arisen alongside the hype. One key issue that has sparked discussion within the academic research community is authorship and the attribution of AI-generated content. Research papers and scholarly works are attributed to human authors, reflecting their knowledge, expertise, and intellectual contributions. When employing generative AI tools, questions arise regarding how to appropriately disclose the AI system's role in the writing process. Recent debates among academic researchers suggest that crediting generative AI systems as co-authors of academic research articles is not suitable (Thorp, 2023), as these systems "do not fulfill the criteria for a study author, because they cannot take responsibility for the content and integrity of scientific papers" (Stokel-Walker, 2023). Several instances have emerged wherein ChatGPT was included as a co-author in academic publications (O'Connor, 2023a; Flanagin et al., 2023). However, a subsequent wave of scrutiny has triggered a reconsideration of authorship attribution to generative AI tools (O'Connor, 2023b; Flanagin et al., 2023). Generative AI tools are nonlegal entities and are incapable of accepting responsibility and accountability for the content within the manuscript. Additional ethical concerns beyond authorship include copyright implications arising from the use of third-party content, conflicts of interest, and the broader concept of plagiarism, which encompasses not only verbatim text copying but also the replication of ideas, methods, graphics, and other forms of intellectual output originating from others (Lund et al., 2023).

There is a growing consensus among academic research journals that transparency in declaring the use of AI in the research process is vital to uphold the integrity and credibility of academic research writing. Two notable initiatives for reporting research studies that involve AI intervention are CONSORT-AI (Consolidated Standards of Reporting Trials for Artificial Intelligence) (Liu et al., 2020) and SPIRIT-AI (Standard Protocol Items: Recommendations for Interventional Trials—Artificial Intelligence) (Cruz Rivera et al., 2020). The International Committee of Medical Journal Editors (ICMJE) recommends that authors explicitly disclose the use of AI-assisted technologies, including large language models like ChatGPT, in the production of submitted work (ICMJE, 2023). Furthermore, efforts are underway to develop comprehensive reporting guidelines to evaluate the use of ChatGPT and large language models in scientific research and their impact on scientific research writing (Cacciamani et al., 2023). These guidelines aim to promote transparency in the research process by providing a framework for declaring the use of generative AI in academic research.

The importance of declaring the usage of generative AI tools by authors stems from two critical perspectives. One perspective relates to the phenomenon known as artificial hallucination, where generative AI models can produce text that appears coherent and meaningful but is actually fictional or lacks accuracy (Ji et al., 2023). Another well-known hallucination effect reported by some academic researchers is that generative AI tools sometimes include references to scientific studies that do not exist (Tam et al., 2023). This means that authors who use generative AI tools run the risk of including fabricated or inaccurate information in their research. These hallucinations can range from minor errors to entirely invented data that may seem plausible. By openly declaring the usage of generative AI tools, authors can transparently acknowledge the potential limitations and risks associated with AI-generated content, thereby ensuring the integrity and credibility of their research. Another perspective is the potential bias in training data. Generative AI systems learn from large datasets, and if these datasets contain biases, the AI models can inadvertently reproduce and amplify them. Consequently, this can result in the generation of biased content that reinforces negative stereotypes and discrimination. When authors utilize generative AI tools in their research writing without disclosing it, there is a risk that these biases go unnoticed and unaddressed.

Our study aims to investigate whether nursing academic research journals require authors to declare their use of generative AI in their manuscripts. Examining whether nursing academic research journals require explicit declaration regarding the use of generative AI can offer insights into the level of transparency within the nursing research community. This holds implications for readers, allowing them to be well informed and to make discerning judgments regarding the reliability, trustworthiness, and credibility of the journals' content. Awareness of whether authors have utilized generative AI can aid readers in gauging the extent to which human expertise and machine-generated content contribute to the material.

This research effort not only sheds light on the extent to which authors in nursing journals acknowledge and disclose their employment of generative AI but also aims at initiating discussions on the responsible and transparent implementation of AI technologies in the realm of nursing research writing. By illuminating current practices and attitudes, our study contributes to the ongoing dialog about ethical considerations, disclosure protocols, and best practices when integrating AI-generated content into the academic publishing landscape of nursing studies.

MATERIALS AND METHODS

A list of nursing academic research journals was compiled by including all journals indexed in the Nursing category of the 2021 Journal Citation Reports—Science Edition (Clarivate, 2022). The Author Guidelines or Instructions to Authors for each of these identified nursing academic research journals were downloaded
on June 22–24, 2023. A thorough review of these guidelines was then conducted to ascertain whether they included any explicit statements or requirements concerning the utilization of generative AI in manuscript submissions. Author Guidelines or Instructions to Authors written in a non-English language were translated to English using Google Translate, and the translated versions of the documents were reviewed. The aim of this review was to ascertain whether the guidelines contained any references to artificial intelligence; using Google Translate for translation sufficed for achieving this specific objective. Descriptive statistics, including frequencies and percentages, were used to summarize the results. An independent samples t-test was used to examine whether the mean impact factor differed between journals with and without explicit statements about the use of generative AI. Pearson's chi-square test of independence was used to examine the association between the themes of the journals (categorized according to the Scopus database) and the availability of the explicit statement.

RESULTS

A total of 125 nursing academic research journals were identified from the Nursing category of the 2021 Journal Citation Reports; 121 of them published articles in English, two in Spanish, one in German, and one in Italian. The Author Guidelines or Instructions to Authors of these journals were extracted. Among the four non-English journals, two had their Instructions to Authors available in English, whereas the other two did not. For the non-English journals lacking English versions, we employed Google Translate to translate their Instructions to Authors to English for subsequent evaluation. The Author Guidelines or Instructions to Authors were then reviewed by the authors. Forty-seven out of 125 (37.6%) journals provided explicit statements about the use of generative AI tools in the writing process: 47 journals (37.6%) explicitly require declaration of the use of generative AI tools or AI-assisted technologies in the writing process, whereas 46 journals (36.8%) explicitly state that generative AI tools or AI-assisted technologies should not be listed as author or co-author. The result is included as Supplementary Material S1.

The mean (SD) impact factors of the journals with and without an explicit statement about the use of generative AI tools in the writing process were 2.24 (0.82) and 2.15 (1.07), respectively; the mean difference (0.09, 95% CI from −0.27 to 0.45) was not statistically significant (p = 0.611). There is no statistically significant association (p = 0.520) between the themes of the journals (1. specialized nursing journals, 2. general nursing journals, 3. other nursing journals) and the availability of an explicit statement related to generative AI (Yes or No) (Table 1).

DISCUSSION

Exactly 47 out of 125 (37.6%) nursing journals explicitly included information about utilizing generative AI tools in their Author Guidelines or Instructions to Authors on or before 24 June 2023, indicating that editorial boards of nursing journals are beginning to catch up on the ethical issues of authors using generative AI tools in the writing process. We applied the same methodology to evaluate the Author Guidelines or Instructions to Authors from journals in the "Medicine, General and Internal" category. In about the same period, 25 out of the 172 journals (14.5%) in the "Medicine, General and Internal" category included information about the utilization of generative AI tools.

All of these 47 journals require the authors to declare generative AI tool usage in the body of the manuscript (e.g., in a separate declaration or acknowledgment section), rather than through paperwork or an online form during the submission process to the editor. Mandating the inclusion of a generative AI tool usage declaration within the manuscript facilitates clear communication of essential information about the research process to readers. This approach not only promotes transparency but also enables readers to evaluate the potential effect of generative AI tools on the article. We did not find any significant association between the Author Guidelines and (i) the impact factors of the journals and (ii) the categories of the journals.

There have been some initiatives in the declaration of generative AI usage in academic research writing among researchers (Cacciamani et al., 2023; Flanagin et al., 2023), but the authors are not aware of any published discussion of this issue in nursing studies journals. We are advocating for active participation from nursing researchers in these initiatives to ensure that our perspectives are included. We see a parallel between this declaration and the adoption of the Institutional Review Board (IRB) approval declaration in medical journals in the 1990s. This standardization emerged

TABLE 1 Crosstabulation of the journal category and statement of generative AI in the instructions to authors.

Category(a)                        No            Yes           Total
Specialized nursing journals       39 (60.9%)    25 (39.1%)    64
General nursing journals           29 (60.4%)    19 (39.6%)    48
Other nursing journals             10 (76.9%)    3 (23.1%)     13
Total                              78 (62.4%)    47 (37.6%)    125

p-value from Pearson's chi-square test = 0.520.
(a) The three categories were merged from smaller subgroups extracted from Scopus.
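Both tests reported above can be re-derived from the published summary numbers. The sketch below uses SciPy; the group sizes (47 journals with a statement, 78 without) come from the Table 1 totals. Because the impact-factor means and SDs are rounded to two decimals, the t-test p-value only approximates the reported p = 0.611, while the chi-square test on the Table 1 counts reproduces the reported p = 0.520.

```python
from scipy.stats import ttest_ind_from_stats, chi2_contingency

# Independent-samples t-test on mean impact factors, rebuilt from the
# reported summary statistics (rounded, so the p-value is approximate).
t_stat, p_t = ttest_ind_from_stats(
    mean1=2.24, std1=0.82, nobs1=47,  # journals requiring an AI statement
    mean2=2.15, std2=1.07, nobs2=78,  # journals without such a requirement
)
print(f"t = {t_stat:.2f}, p = {p_t:.3f}")  # not significant at alpha = 0.05

# Pearson's chi-square test of independence on the Table 1 counts
# (columns: No, Yes).
observed = [
    [39, 25],  # Specialized nursing journals
    [29, 19],  # General nursing journals
    [10, 3],   # Other nursing journals
]
chi2, p_chi, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p_chi:.3f}")  # p = 0.520
```

Neither test rejects the null hypothesis, consistent with the paper's finding that neither impact factor nor journal category is associated with having an explicit AI statement.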
from a broader movement towards ethical oversight in human subjects research, spurred by key documents like the Belmont Report and the Declaration of Helsinki. Academic journals began to require IRB approval as part of the manuscript submission process, reflecting the growing emphasis on ethical oversight and the global trend towards standardized ethical practices in research (Amdur & Biddle, 1997). The declaration of using generative AI in academic research writing is a subject that is gaining attention and relevance in today's rapidly evolving technological landscape. Just as the declaration of IRB approval became a cornerstone of ethical research practice, the transparent acknowledgment of generative AI usage could similarly become a vital aspect of responsible academic conduct.

Another important aspect to consider is the declaration of the use of generative AI tools by journal reviewers. While authors are required to declare their utilization of generative AI tools in their submitted manuscripts, it is equally relevant to impose the same requirement on reviewers. The declaration of generative AI tool usage by journal reviewers holds significant importance, particularly in the context of addressing the challenges posed by predatory journals. Predatory journals pose a threat to scientific integrity by charging authors "high article-processing fees but don't provide expected publishing services, such as peer review or other quality checks" (Singh Chawla, 2020). By openly declaring the use of generative AI tools, reviewers contribute to quality assessment, helping to differentiate reputable journals from predatory ones. This commitment to transparency reinforces the credibility of the review process and prevents inadvertent support for predatory journals. Additionally, the disclosure invites enhanced scrutiny and accountability, as predatory journals often lack robust peer review processes. Furthermore, explicit declaration helps expose deceptive claims of AI usage by predatory journals, safeguarding against manipulation and promoting transparency in publishing decisions for researchers.

CONCLUSION

Our study provides an understanding of the current landscape of declaring the use of generative AI tools by authors in nursing journals. Future studies could examine the implications of such disclosure on reader perception and the evaluation of research quality. This would help shed light on the effectiveness of disclosure in enhancing transparency and trust in academic writing. Furthermore, developing guidelines or best practices for the appropriate declaration of generative AI usage by authors and reviewers would be beneficial in ensuring consistent and responsible implementation of AI technologies in scholarly research.

CONFLICT OF INTEREST STATEMENT
The authors declare that there is no conflict of interest.

DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available in the supplementary material of this article.

ORCID
Wilson Tam https://orcid.org/0000-0003-0641-3060

REFERENCES
Amdur, R. J., & Biddle, C. (1997). Institutional review board approval and publication of human research results. Journal of the American Medical Association, 277(11), 909–914. https://doi.org/10.1001/jama.1997.03540350059034
Cacciamani, G. E., Collins, G. S., & Gill, I. S. (2023). ChatGPT: Standard reporting guidelines for responsible use. Nature, 618(7964), 238. https://doi.org/10.1038/d41586-023-01853-w
Clarivate. (2022). 2021 Journal Citation Reports Science Edition.
Cruz Rivera, S., Liu, X., Chan, A. W., Denniston, A. K., Calvert, M. J., & SPIRIT-AI and CONSORT-AI Working Group; SPIRIT-AI and CONSORT-AI Steering Group; SPIRIT-AI and CONSORT-AI Consensus Group. (2020). Guidelines for clinical trial protocols for interventions involving artificial intelligence: The SPIRIT-AI extension. Nature Medicine, 26(9), 1351–1363. https://doi.org/10.1038/s41591-020-1037-7
Flanagin, A., Bibbins-Domingo, K., Berkwits, M., & Christiansen, S. L. (2023). Nonhuman "authors" and implications for the integrity of scientific publication and medical knowledge. Journal of the American Medical Association, 329(8), 637–639. https://doi.org/10.1001/jama.2023.1344
ICMJE. (2023). Defining the role of authors and contributors. https://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html
Ji, Z., Lee, N., Frieske, R., Yu, T., Su, D., Xu, Y., Ishii, E., Bang, Y. J., Madotto, A., & Fung, P. (2023). Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12), 1–38. https://doi.org/10.1145/3571730
Liu, X., Cruz Rivera, S., Moher, D., Calvert, M. J., Denniston, A. K., & SPIRIT-AI and CONSORT-AI Working Group. (2020). Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: The CONSORT-AI extension. Nature Medicine, 26(9), 1364–1374. https://doi.org/10.1038/s41591-020-1034-x
Lund, B. D., Wang, T., Mannuru, N. R., Nie, B., Shimray, S., & Wang, Z. (2023). ChatGPT and a new academic reality: AI-written research papers and the ethics of the large language models in scholarly publishing. Journal of the Association for Information Science and Technology, 74(5), 570–581. https://doi.org/10.1002/asi.24750
O'Connor, S. (2023a). Open artificial intelligence platforms in nursing education: Tools for academic progress or abuse? Nurse Education in Practice, 66, 103537. https://doi.org/10.1016/j.nepr.2022.103537
O'Connor, S. (2023b). Corrigendum to "Open artificial intelligence platforms in nursing education: Tools for academic progress or abuse?" [Nurse Educ. Pract. 66 (2023) 103537]. Nurse Education in Practice, 67, 103572. https://doi.org/10.1016/j.nepr.2023.103572
Singh Chawla, D. (2020). Predatory-journal papers have little scientific impact. Nature. https://doi.org/10.1038/d41586-020-00031-6
Stokel-Walker, C. (2023). ChatGPT listed as author on research papers: Many scientists disapprove. Nature, 613, 620–621. https://doi.org/10.1038/d41586-023-00107-z
Tam, W., Huynh, T., Tang, A., Luong, S., Khatri, Y., & Zhou, W. (2023). Nursing education in the age of artificial intelligence powered Chatbots (AI-Chatbots): Are we ready yet? Nurse Education Today, 129, 105917. https://doi.org/10.1016/j.nedt.2023.105917
Thorp, H. H. (2023). ChatGPT is fun, but not an author. Science, 379(6630), 313. https://doi.org/10.1126/science.adg7879

SUPPORTING INFORMATION
Additional supporting information can be found online in the Supporting Information section at the end of this article.
Data S1.

How to cite this article: Tang, A., Li, K.-K., Kwok, K. O., Cao, L., Luong, S., & Tam, W. (2024). The importance of transparency: Declaring the use of generative artificial intelligence (AI) in academic writing. Journal of Nursing Scholarship, 56, 314–318. https://doi.org/10.1111/jnu.12938
