
JMIR MEDICAL EDUCATION Wong et al

Viewpoint

The Intersection of ChatGPT, Clinical Medicine, and Medical Education

Rebecca Shin-Yee Wong (1,2), MBBS, MSc, PhD; Long Chiau Ming (3)*, BPharm Hons, MClinPharm, PhD; Raja Affendi Raja Ali (3,4)*, MBBch, MMedSc, MD, MBA

1 Department of Medical Education, School of Medical and Life Sciences, Sunway University, Selangor, Malaysia
2 Faculty of Medicine, Nursing and Health Sciences, SEGi University, Petaling Jaya, Malaysia
3 School of Medical and Life Sciences, Sunway University, Selangor, Malaysia
4 GUT Research Group, Faculty of Medicine, Universiti Kebangsaan Malaysia, Kuala Lumpur, Malaysia
* These authors contributed equally

Corresponding Author:
Long Chiau Ming, BPharm Hons, MClinPharm, PhD
School of Medical and Life Sciences
Sunway University
No 5, Jalan Universiti
Bandar Sunway
Selangor, 47500
Malaysia
Phone: 60 374918622 ext 7452
Email: [email protected]

Abstract
As we progress deeper into the digital age, the robust development and application of advanced artificial intelligence (AI) technology, specifically generative language models like ChatGPT (OpenAI), have potential implications in all sectors, including medicine. This viewpoint article presents the authors' perspective on the integration of AI models such as ChatGPT into clinical medicine and medical education. The unprecedented capacity of ChatGPT to generate human-like responses, refined through reinforcement learning from human feedback, could significantly reshape pedagogical methodologies within medical education. Through a comprehensive review and the authors' personal experiences, this viewpoint article elucidates the pros, cons, and ethical considerations of using ChatGPT in clinical medicine and, notably, its implications for medical education. This exploration is crucial in a transformative era in which AI could augment human capability in the creation and dissemination of knowledge, potentially revolutionizing medical education and clinical practice. The importance of maintaining academic integrity and professional standards is highlighted, as is the relevance of establishing clear guidelines for the responsible and ethical use of AI technologies in clinical medicine and medical education.

(JMIR Med Educ 2023;9:e47274) doi: 10.2196/47274

KEYWORDS
ChatGPT; clinical research; large language model; artificial intelligence; ethical considerations; AI; OpenAI

Introduction

Accelerated by the advancement of computing technology, the use of artificial intelligence (AI) in clinical medicine has seen many remarkable breakthroughs in recent years, from diagnosis and treatment to the prediction of disease outcomes [1]. As new technological applications continue to emerge, ChatGPT, a generative language model launched by OpenAI in November 2022, has essentially revolutionized the IT world. What makes ChatGPT a promising tool is the vast amount of data used in its training and its ability to generate human-like conversations covering diverse topics.

Over the past few years, AI involving various techniques has gained significance in clinical medicine, and the use of chatbots has been documented in the published literature even before the launch of ChatGPT. For example, one study reported the use of a chatbot in the diagnosis of mental health disorders [2]. In another study, Tudor Car et al [3] reported various applications of chatbots and conversational agents in health care, such as patient education and health care service support.
https://mededu.jmir.org/2023/1/e47274 JMIR Med Educ 2023 | vol. 9 | e47274 | p. 1



Many of these applications can be delivered via smartphone apps [3].

The use of AI in medicine, including the use of generative language models, is often accompanied by challenges and contentions. Some common challenges include privacy, data security, algorithmic transparency and explainability, and errors and liability, as well as regulatory issues associated with AI medicine [4]. Lately, the use of generative language models in scientific writing has also stirred up controversies in the academic and publishing communities. Some journals have declined ChatGPT as a coauthor, whereas others have happily accepted manuscripts authored by ChatGPT [5].

Currently, numerous reviews on the use of generative language models in clinical medicine have been reported, but mainly in the context of academic writing [6] and medical education [7]. However, viewpoints that relate the use of ChatGPT in clinical medicine to its implications for medical education are lacking. The inexorable march of technological innovation, exemplified by AI applications in clinical medicine, presents revolutionary changes in how we approach medical education. With the advent of AI platforms like ChatGPT, the landscape of pedagogical methodologies within medical education is poised for unprecedented change. This model's vast training on an array of data and its ability to generate human-like conversations are particularly compelling.

Despite earlier uses of AI and chatbots in clinical medicine, the introduction of highly advanced models such as ChatGPT necessitates a rigorous examination of their potential integration within medical education. Understanding the challenges that coincide with AI use, such as privacy, data security, and algorithmic transparency, is crucial for a comprehensive, informed, and ethically grounded exploration of AI in medical education. Hence, this article aims to provide a perspective on ChatGPT and generative language models in clinical medicine, addressing the opportunities, challenges, and ethical considerations inherent in their use, particularly their potential as transformative agents within medical education.

Generative Language Models and ChatGPT

Generative language models such as ChatGPT are trained on a massive amount of text data to understand natural language and generate human-like responses to a wide range of questions and prompts (instructions). "GPT" stands for "Generative Pretrained Transformer." ChatGPT is an enhanced version of previous generations of GPTs (GPT-1, -2, -3, and -3.5) and a sibling model to InstructGPT (OpenAI). It is an AI-based language model designed to generate high-quality texts resembling human conversations [8]. The technology underpinning ChatGPT is known as transformer-based architecture, a deep machine learning model that uses self-attention mechanisms for natural language processing; it was first introduced by a team at Google Brain in 2017 [9]. Transformer-based architecture allows ChatGPT to break down a sentence or passage into smaller fragments referred to as "tokens." Relationships among the tokens are then analyzed and used to generate new text in a context and style similar to those of the original text.

A detailed discussion of the technology used in ChatGPT is beyond the scope of this viewpoint article. Briefly, ChatGPT is a fine-tuned model belonging to the GPT-3.5 series. Compared to earlier versions of GPT, some strengths of ChatGPT include its ability to admit errors, ask follow-up questions, question incorrect assumptions, and even decline inappropriate requests. There are 3 main steps in the training of ChatGPT. The first step involves sampling a prompt (message or instruction) from the prompt library and collecting human responses; these data are then used to fine-tune the pretrained large language model (LLM). In the second step, multiple responses are generated by the LLM following prompt sampling; the responses are then manually ranked and used to train a reward model that fits human preferences. In the last step, the LLM is further trained by reinforcement learning algorithms based on the supervised fine-tuning and reward-model training of the previous steps [8].

Currently, the research preview version of ChatGPT is available to the public at no cost. Although ChatGPT is helpful in data sourcing, and some users speculate that it will replace search engines like Google, it is noteworthy that several key differences exist between a chatbot and a search engine [10]. Table 1 summarizes these differences.

Table 1. Differences between a chatbot and a search engine.

Purpose
  Chatbot: To generate natural language text responses
  Search engine: To index and retrieve information from the internet
Input
  Chatbot: Questions and queries raised by users
  Search engine: Keywords entered by users
Output
  Chatbot: Natural language text in the form of human-like conversations; responses are personalized and conversational
  Search engine: A list of links to web pages and relevant information; retrieved information is factual and objective
Information type
  Chatbot: Conversational text
  Search engine: Web-based content in the form of text, images, and videos
Technology
  Chatbot: Transformer-based neural network architecture
  Search engine: A combination of technologies (eg, machine learning, natural language processing, and web indexing)
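The tokenization and self-attention machinery described above can be made concrete with a toy computation. This is an illustrative sketch only: the token list, embedding vectors, and projection matrices below are random stand-ins invented for this example, not anything from ChatGPT, but the scaled dot-product attention arithmetic follows the standard transformer formulation.

```python
import numpy as np

# A toy "sentence" already split into tokens. Real models use subword
# tokenizers, so a single word may map to several tokens; this list is invented.
tokens = ["chest", "pain", "on", "exertion"]

rng = np.random.default_rng(0)
d = 4                                    # tiny embedding dimension for illustration
X = rng.normal(size=(len(tokens), d))    # one embedding vector per token

# In a transformer, learned weight matrices project the embeddings into
# queries, keys, and values; here they are just random placeholders.
Wq, Wk, Wv = [rng.normal(size=(d, d)) for _ in range(3)]
Q, K, V = X @ Wq, X @ Wk, X @ Wv

# Scaled dot-product attention: each token scores its relationship to every
# other token, then takes a weighted average of the value vectors.
scores = Q @ K.T / np.sqrt(d)
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # row softmax
output = weights @ V

print(weights.round(2))  # each row is a probability distribution over the tokens
```

Each row of `weights` sums to 1, which is how "relationships among the tokens" are expressed numerically before being used to generate new text.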


Opportunities for Using Generative Language Models

Studies have reported the use of ChatGPT in several medical education–related areas. In one study, ChatGPT passed the United States Medical Licensing Examination (USMLE) [11], and in another, it outperformed InstructGPT in the USMLE, achieving a score equivalent to a passing score for a year 3 medical student [12]. Fijačko et al [13] reported that ChatGPT generated accurate answers and provided logical explanations to Basic Life Support and Advanced Cardiovascular Life Support examination questions but was unable to achieve the passing threshold for either examination. Savage [14] described the potential use of ChatGPT in drug discovery.

It is worth mentioning that researchers explored the use of generative language models in health care before the launch of ChatGPT. For example, generative language models have been used in the COVID-19 public health response [15], the explanation of treatment processes to stakeholders [16], patient self-management [17], mental health screening [18], research participant recruitment [19], and research data collection [20].

At present, ChatGPT's ability to perform the complex tasks required of clinical medicine awaits further exploration [21]. It has been shown that the performance of ChatGPT decreases as task complexity increases. For example, Mehnen et al [22] reported that the diagnostic accuracy of ChatGPT was lower for rare diseases than for common diseases. Despite current limitations, a growing body of research suggests that ChatGPT and other chatbots can be trained to generate logical and informative content in medicine. Some potential applications of ChatGPT in clinical medicine and medical education are summarized in Table 2 [12,19,23-28].

Table 2. Potential applications of ChatGPT in clinical medicine and medical education.

Learning in medical education
  Potential applications: ChatGPT as a source of medical knowledge
  Example: ChatGPT could pass the USMLE(a), showing its ability to generate accurate answers
  Study (year): Mbakwe et al [11] (2023)

Patient engagement and education
  Potential applications: Provide information to patients, caretakers, and the public
  Example: Use of chatbots in prostate cancer education
  Study (year): Görtz et al [23] (2023)

Disease prevention
  Potential applications: Provide counseling and gather information (eg, risk factors) for health screening
  Example: Use of chatbots in symptom screening for patients with autoinflammatory diseases, with high patient acceptability
  Study (year): Tan et al [24] (2023)

Participant recruitment
  Potential applications: Analyze information from potential participants through conversations and medical records, and streamline the information gathered
  Example: Comparing recruitment of research participants using chatbot versus telephone outreach
  Study (year): Kim et al [19] (2021)

Data collection
  Potential applications: Review large volumes of data through conversations and medical records; use the data collected (eg, medical history, investigation findings, and treatment outcomes) for pattern recognition in diseases; and correlate data (eg, demographics and risk factors) with diseases
  Example: Use of a chatbot (Dokbot) for health data collection among older patients
  Study (year): Wilczewski et al [25] (2023)

Clinical decision support and patient management
  Potential applications: Review data on medical history, investigation findings, etc; provide treatment recommendations; and support clinical decision-making by providing supplemental information
  Example: Application of ChatGPT in making diagnoses and managing patients using clinical vignettes
  Study (year): Rao et al [26] (2023)

Drug discovery and development
  Potential applications: Review large volumes of scientific data on drugs and identify gaps and potential targets
  Example: Use of pretrained biochemical language models for targeted drug design
  Study (year): Uludoğan et al [27] (2022)

Medical writing
  Potential applications: Assist in medical writing and publication
  Example: Application of ChatGPT in case report writing
  Study (year): Hegde et al [28] (2023)

(a) USMLE: United States Medical Licensing Examination.

Drawbacks of Using Generative Language Models

Information accuracy and authenticity are a great challenge for using chatbots. In one study, researchers asked ChatGPT to generate 50 abstracts from selected medical publications. The study reported that ChatGPT could generate convincing abstracts that escaped plagiarism detection. Further analysis showed that scientists had difficulty differentiating the fabricated abstracts from the original ones [29]. In another instance, researchers observed that ChatGPT produced nonexistent or erroneous references [30]. From these examples, it is worrisome to learn that chatbots can generate fabricated and incorrect information, or what is known as "artificial hallucination." These "hallucinations" have significant implications, especially when it comes to life-and-death matters in the clinical setting.

Based on its performance in a parasitology examination, a Korean study reported that ChatGPT showed lower knowledge and interpretation ability compared to medical students [31]. Therefore, ChatGPT may need further training and enhancement of its ability to interpret medical information. In addition, the uncertainty about how ChatGPT and other AI
applications derive their information and the black box problem have always been a big challenge in AI medicine [32]. This further raises concerns about transparency and trust, which are 2 crucial elements in medicine.

The training period of ChatGPT was between 2020 and 2021. As of this writing, ChatGPT was unable to provide information beyond its training period. For example, based on the authors' experience, ChatGPT failed to describe the Turkey-Syria earthquakes that took place in February 2023. This implies that further training is necessary for ChatGPT to provide up-to-date information, whereas training a large-scale AI model like ChatGPT is expensive and time-consuming. Moreover, it involves feeding ChatGPT with high volumes of information, which requires highly skilled personnel.

Ethical Considerations

The use of AI models like ChatGPT may give rise to social, ethical, and medico-legal issues. This section discusses these challenges and the potential pitfalls associated with the use of ChatGPT.

Privacy, Confidentiality, and Informed Consent

Patient privacy and confidentiality, as well as data protection, are common issues of debate in AI medicine [33]. Integration of existing health care systems and medical records with ChatGPT may lead to such issues. Informed consent must be obtained from patients before ChatGPT accesses their data. The requirements of informed consent may vary depending on the situation, and some additional elements may need to be included when obtaining informed consent for the application of AI in medicine. Examples include the disclosure of algorithmic decision support, a description of the input and output data, an explanation of the AI training, and the right to a second opinion from a human physician [34]. It is important that physicians ensure privacy and data security, as a breach of confidentiality may lead to a breach of trust, which can negatively impact the doctor-patient relationship.

Accountability, Liability, and Biases

Accountability and liability are other ethical considerations. As some medical errors are life-threatening, physicians and researchers must ensure safety and accountability when using AI to support diagnosis, clinical decision-making, treatment recommendations, and disease prediction. Other ethical issues include biased and inaccurate data, which lead to unfair and discriminatory results. Therefore, it is important to ensure that AI applications used in research and clinical medicine are trained on representative and diverse data sets to avoid such biases.

In the context of generative language models, bias may be viewed as systematic inaccurate representations, distortions, or assumptions that favor certain groups or ideas, perpetuate stereotypes, or reflect incorrect judgments made by the model based on its previous training. Biases in generative language models can be introduced through various sources, such as the training data, algorithms, labeling and annotation, and product design and policy decisions. Different types of biases can occur, including demographic, cultural, linguistic, and political biases [35].

Using LLMs like ChatGPT in clinical decision-making may lead to other unintended consequences, such as malpractice and lawsuits. Traditional decision support tools like clinical practice guidelines allow physicians to assess the reliability of information according to its source and level of evidence. However, AI models like ChatGPT may generate biased and incorrect output with a lack of transparency in data sourcing. AI models may treat all sources of data equally and fail to differentiate data based on evidence levels [36]. Depending on how a question is phrased, ChatGPT may provide different answers to the same question. Hence, physicians should take these issues into consideration and use ChatGPT with caution in clinical decision-making.

Regulation of the Use of AI in Medicine

With the emergence of social, ethical, and legal issues associated with applications of AI in health care, there is a need to impose regulatory measures and acts to address these issues. The regulation of AI medicine varies in different parts of the world. For example, in the United States, a regulatory framework and an action plan were published by the Food and Drug Administration in 2019 and 2021, respectively, and the responsibilities for AI lie with specific federal agencies [37].

In contrast, the European Commission proposed a robust legal framework (the AI Act) that regulates applications of AI not only in medicine but also in other sectors. AI applications in medicine must meet the requirements of both the AI Act and the EU (European Union) Medical Device Regulation [38]. Some areas under such regulation include lifecycle regulation, transparency to users, and algorithmic bias [37]. The European Union also regulates the data generated by AI models via the GDPR (General Data Protection Regulation), under which solely automated decision-making and data processing are prohibited [39].

Academic Dishonesty

The use of ChatGPT in medical writing must be transparent, as it raises issues of academic dishonesty and the fulfillment of authorship criteria, with some disapproving of ChatGPT being listed as an author in journal publications [5,40,41]. While the use of ChatGPT in clinical medicine and medical education allows easy access to a vast amount of information, it may raise issues like plagiarism and a lack of originality in scientific writing. Overreliance on ChatGPT may hinder the development of skills in original thinking and critical analysis. Figure 1 summarizes the use of ChatGPT in clinical medicine.

Figure 1. Overview of the use of ChatGPT in clinical medicine and medical education.

Impact of Using AI Models in Clinical Medicine on Medical Training

As the use of AI models such as ChatGPT becomes more common in clinical medicine, it is likely to reshape the landscape of medical education and affect how medical students learn and handle information [42]. Some of the applications mentioned in Table 2 may also be applied in medical education. For instance, the use of ChatGPT in making diagnoses and managing patients using clinical vignettes may enhance the student learning experience and increase accessibility to learning resources [26]. The use of ChatGPT as a supportive tool in medical writing [28] may also have an impact on medical education. On the other hand, with the integration of AI models in medical education, medical educators will need to address certain issues, such as the accuracy and reliability of information, as well as academic dishonesty.

Furthermore, while medical educators and physicians continue to explore the use of AI models in clinical and research settings, there is an emerging need to introduce new elements into the teaching of medical ethics and medico-legal issues [43]. Whether medical educators readily embrace AI or approach it with caution, the growing presence of AI in our daily lives and the medical field cannot be denied. Therefore, it is time for medical educators to re-evaluate the existing medical curriculum and incorporate these elements to prepare medical graduates for the effective and ethical use of AI in their medical careers.

Conclusions

Generative language models have revolutionized the world. With the current state of the technology, we believe this new AI application has great potential in clinical medicine and medical education. "Garbage in, garbage out" is a common adage in computer science; like any AI application, the efficient use of ChatGPT depends on the quality of the training data. Given that it can generate inaccurate and nonexistent information, generative language models still have room for improvement. Therefore, when using ChatGPT, physicians and medical students must always verify the information against reliable and evidence-based sources such as practice guidelines, peer-reviewed literature, and trusted medical databases.

While clinical researchers and physicians may use ChatGPT as a supportive tool, its role in replacing humans in complex data collection, analysis, and validation remains uncertain. Hence, the integration of AI in clinical medicine warrants further investigation. After all, when the chatbot makes mistakes, the ultimate responsibility lies with the human user. The use of generative language models in clinical medicine and medical education should also be ethical, taking into consideration patient safety, data protection, accountability, transparency, and academic honesty. When incorporating AI models into medical education, it is crucial that medical educators establish guidelines on the responsible and ethical use of applications such as ChatGPT. The importance of academic integrity, originality, and critical thinking should be emphasized to ensure that medical students uphold the highest professional standards throughout their medical education journey and their future clinical practice.
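Given the "artificial hallucination" problem discussed earlier, one small, concrete verification habit is to screen machine-generated citations before trusting them. The helper below is a hypothetical sketch: it applies the regular expression Crossref recommends for matching modern DOIs as a purely syntactic filter. A well-formed DOI can still be fabricated, so a real pipeline would additionally resolve each DOI against a registry such as doi.org or the Crossref API (network lookup omitted here).

```python
import re

# Crossref's recommended pattern for modern DOIs, matched case-insensitively.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/[-._;()/:a-z0-9]+$", re.IGNORECASE)

def doi_is_plausible(doi: str) -> bool:
    """Syntactic check only. A plausible-looking DOI may still be invented,
    so resolving it (eg, via https://doi.org) is the decisive test."""
    return bool(DOI_PATTERN.match(doi.strip()))

print(doi_is_plausible("10.2196/47274"))   # well-formed -> True
print(doi_is_plausible("not-a-doi"))       # malformed  -> False
```

Even this trivial filter catches the crudest fabrications; anything that passes should still be resolved and compared against the claimed title and authors.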


Authors' Contributions
RSYW contributed to the writing and editing of this manuscript. LCM and RARA contributed to conceptualization, data search,
and editing.

Conflicts of Interest
None declared.

References
1. Bhattamisra S, Banerjee P, Gupta P, Mayuren J, Patra S, Candasamy M. Artificial intelligence in pharmaceutical and
healthcare research. BDCC 2023 Jan 11;7(1):10 [FREE Full text] [doi: 10.3390/bdcc7010010]
2. Jungmann SM, Klan T, Kuhn S, Jungmann F. Accuracy of a chatbot (Ada) in the diagnosis of mental disorders: comparative
case study with lay and expert users. JMIR Form Res 2019 Oct 29;3(4):e13863 [FREE Full text] [doi: 10.2196/13863]
[Medline: 31663858]
3. Tudor Car L, Dhinagaran DA, Kyaw BM, Kowatsch T, Joty S, Theng Y, et al. Conversational agents in health care: scoping
review and conceptual analysis. J Med Internet Res 2020 Aug 07;22(8):e17158 [FREE Full text] [doi: 10.2196/17158]
[Medline: 32763886]
4. Fenech ME, Buston O. AI in cardiac imaging: a UK-based perspective on addressing the ethical, social, and political
challenges. Front Cardiovasc Med 2020;7:54 [FREE Full text] [doi: 10.3389/fcvm.2020.00054] [Medline: 32351974]
5. Stokel-Walker C. ChatGPT listed as author on research papers: many scientists disapprove. Nature 2023
Jan;613(7945):620-621 [doi: 10.1038/d41586-023-00107-z] [Medline: 36653617]
6. Bhatia P. ChatGPT for academic writing: a game changer or a disruptive tool? J Anaesthesiol Clin Pharmacol 2023;39(1):1-2
[FREE Full text] [doi: 10.4103/joacp.joacp_84_23] [Medline: 37250265]
7. Lee H. The rise of ChatGPT: exploring its potential in medical education. Anat Sci Educ 2023 Mar 14 [doi: 10.1002/ase.2270]
[Medline: 36916887]
8. ChatGPT: Optimizing language models for dialogue. OpenAI. URL: https://openai.com/blog/chatgpt/ [accessed 2023-06-13]
9. Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez A, et al. Attention is all you need. 2017 Presented at:
Proceedings of the Advances in Neural Information Processing Systems 30 (NIPS 2017); Dec 4-9, 2017; Long Beach, CA
10. AI Chatbots Vs Search Engines: What Is the Difference. Analytics Insight. 2023. URL: https://www.analyticsinsight.net/
ai-chatbots-vs-search-engines-what-is-the-difference/ [accessed 2023-01-21]
11. Mbakwe AB, Lourentzou I, Celi LA, Mechanic OJ, Dagan A. ChatGPT passing USMLE shines a spotlight on the flaws
of medical education. PLOS Digit Health 2023 Feb;2(2):e0000205 [FREE Full text] [doi: 10.1371/journal.pdig.0000205]
[Medline: 36812618]
12. Gilson A, Safranek CW, Huang T, Socrates V, Chi L, Taylor RA, et al. How does ChatGPT perform on the United States
Medical Licensing Examination? The implications of large language models for medical education and knowledge assessment.
JMIR Med Educ 2023 Feb 08;9:e45312 [FREE Full text] [doi: 10.2196/45312] [Medline: 36753318]
13. Fijačko N, Gosak L, Štiglic G, Picard CT, John Douma M. Can ChatGPT pass the life support exams without entering the
American heart association course? Resuscitation 2023 Apr;185:109732 [doi: 10.1016/j.resuscitation.2023.109732] [Medline:
36775020]
14. Savage N. Drug discovery companies are customizing ChatGPT: here's how. Nat Biotechnol 2023 May;41(5):585-586
[doi: 10.1038/s41587-023-01788-7] [Medline: 37095351]
15. Amiri P, Karahanna E. Chatbot use cases in the Covid-19 public health response. J Am Med Inform Assoc 2022 Apr
13;29(5):1000-1010 [FREE Full text] [doi: 10.1093/jamia/ocac014] [Medline: 35137107]
16. Rebelo N, Sanders L, Li K, Chow JCL. Learning the treatment process in radiotherapy using an artificial intelligence-assisted
chatbot: development study. JMIR Form Res 2022 Dec 02;6(12):e39443 [FREE Full text] [doi: 10.2196/39443] [Medline:
36327383]
17. Echeazarra L, Pereira J, Saracho R. TensioBot: a chatbot assistant for self-managed in-house blood pressure checking. J
Med Syst 2021 Mar 15;45(4):54 [doi: 10.1007/s10916-021-01730-x] [Medline: 33723721]
18. Giunti G, Isomursu M, Gabarron E, Solad Y. Designing depression screening chatbots. Stud Health Technol Inform 2021
Dec 15;284:259-263 [doi: 10.3233/SHTI210719] [Medline: 34920522]
19. Kim YJ, DeLisa JA, Chung Y, Shapiro NL, Kolar Rajanna SK, Barbour E, et al. Recruitment in a research study via chatbot
versus telephone outreach: a randomized trial at a minority-serving institution. J Am Med Inform Assoc 2021 Dec
28;29(1):149-154 [FREE Full text] [doi: 10.1093/jamia/ocab240] [Medline: 34741513]
20. Asensio-Cuesta S, Blanes-Selva V, Conejero JA, Frigola A, Portolés MG, Merino-Torres JF, et al. A user-centered chatbot
(Wakamola) to collect linked data in population networks to support studies of overweight and obesity causes: design and
pilot study. JMIR Med Inform 2021 Apr 14;9(4):e17503 [FREE Full text] [doi: 10.2196/17503] [Medline: 33851934]
21. Xue VW, Lei P, Cho WC. The potential impact of ChatGPT in clinical and translational medicine. Clin Transl Med 2023
Mar;13(3):e1216 [FREE Full text] [doi: 10.1002/ctm2.1216] [Medline: 36856370]


22. Mehnen L, Gruarin S, Vasileva M, Knapp B. ChatGPT as a medical doctor? A diagnostic accuracy study on common and
rare diseases. medRxiv Preprint posted online April 27, 2023. [FREE Full text] [doi: 10.1101/2023.04.20.23288859]
23. Görtz M, Baumgärtner K, Schmid T, Muschko M, Woessner P, Gerlach A, et al. An artificial intelligence-based chatbot
for prostate cancer education: Design and patient evaluation study. Digit Health 2023;9:20552076231173304 [FREE Full
text] [doi: 10.1177/20552076231173304] [Medline: 37152238]
24. Tan TC, Roslan NE, Li JW, Zou X, Chen X, - R, et al. Chatbots for symptom screening and patient education: a pilot study
on patient acceptability in autoimmune inflammatory diseases. J Med Internet Res 2023 May 23 [FREE Full text] [doi:
10.2196/49239] [Medline: 37219234]
25. Wilczewski H, Soni H, Ivanova J, Ong T, Barrera JF, Bunnell BE, et al. Older adults' experience with virtual conversational
agents for health data collection. Front Digit Health 2023;5:1125926 [FREE Full text] [doi: 10.3389/fdgth.2023.1125926]
[Medline: 37006821]
26. Rao A, Pang M, Kim J, Kamineni M, Lie W, Prasad AK, et al. Assessing the utility of ChatGPT throughout the entire
clinical workflow. medRxiv Preprint posted online February 26, 2023. [FREE Full text] [doi: 10.1101/2023.02.21.23285886]
[Medline: 36865204]
27. Uludoğan G, Ozkirimli E, Ulgen KO, Karalı N, Özgür A. Exploiting pretrained biochemical language models for targeted
drug design. Bioinformatics 2022 Sep 16;38(Suppl_2):ii155-ii161 [doi: 10.1093/bioinformatics/btac482] [Medline: 36124801]
28. Hegde A, Srinivasan S, Menon G. Extraventricular neurocytoma of the posterior fossa: a case report written by ChatGPT.
Cureus 2023 Mar;15(3):e35850 [FREE Full text] [doi: 10.7759/cureus.35850] [Medline: 37033498]
29. Else H. Abstracts written by ChatGPT fool scientists. Nature 2023 Jan;613(7944):423 [doi: 10.1038/d41586-023-00056-7]
[Medline: 36635510]
30. Alkaissi H, McFarlane SI. Artificial hallucinations in ChatGPT: implications in scientific writing. Cureus 2023
Feb;15(2):e35179 [FREE Full text] [doi: 10.7759/cureus.35179] [Medline: 36811129]
31. Huh S. Are ChatGPT’s knowledge and interpretation ability comparable to those of medical students in Korea for taking
a parasitology examination?: a descriptive study. J Educ Eval Health Prof 2023;20:1 [FREE Full text] [doi:
10.3352/jeehp.2023.20.1] [Medline: 36627845]
32. Poon AIF, Sung JJY. Opening the black box of AI-Medicine. J Gastroenterol Hepatol 2021 Mar;36(3):581-584 [doi:
10.1111/jgh.15384] [Medline: 33709609]
33. Murdoch B. Privacy and artificial intelligence: challenges for protecting health information in a new era. BMC Med Ethics
2021 Sep 15;22(1):122 [FREE Full text] [doi: 10.1186/s12910-021-00687-3] [Medline: 34525993]
34. Ursin F, Timmermann C, Orzechowski M, Steger F. Diagnosing diabetic retinopathy with artificial intelligence: what
information should be included to ensure ethical informed consent? Front Med (Lausanne) 2021 Jul 21;8:695217 [FREE
Full text] [doi: 10.3389/fmed.2021.695217] [Medline: 34368192]
35. Ferrara E. Should ChatGPT be biased? Challenges and risks of bias in large language models. arXiv Preprint posted online
April 7, 2023. [FREE Full text]
36. Mello MM, Guha N. ChatGPT and physicians' malpractice risk. JAMA Health Forum 2023 May 05;4(5):e231938 [FREE
Full text] [doi: 10.1001/jamahealthforum.2023.1938] [Medline: 37200013]
37. Vokinger KN, Gasser U. Regulating AI in medicine in the United States and Europe. Nat Mach Intell 2021 Sep;3(9):738-739
[FREE Full text] [doi: 10.1038/s42256-021-00386-z] [Medline: 34604702]
38. Niemiec E. Will the EU Medical Device Regulation help to improve the safety and performance of medical AI devices?
Digit Health 2022;8:20552076221089079 [FREE Full text] [doi: 10.1177/20552076221089079] [Medline: 35386955]
39. Meszaros J, Minari J, Huys I. The future regulation of artificial intelligence systems in healthcare services and medical
research in the European Union. Front Genet 2022;13:927721 [FREE Full text] [doi: 10.3389/fgene.2022.927721] [Medline:
36267404]
40. Curtis N, ChatGPT. To ChatGPT or not to ChatGPT? The impact of artificial intelligence on academic publishing. Pediatr
Infect Dis J 2023 Apr 01;42(4):275 [doi: 10.1097/INF.0000000000003852] [Medline: 36757192]
41. Yeo-Teh N, Tang B. Letter to editor: NLP systems such as ChatGPT cannot be listed as an author because these cannot
fulfill widely adopted authorship criteria. Account Res 2023 Feb 13:1-3 [doi: 10.1080/08989621.2023.2177160] [Medline:
36748354]
42. Khan RA, Jawaid M, Khan AR, Sajjad M. ChatGPT - Reshaping medical education and clinical management. Pak J Med
Sci 2023;39(2):605-607 [FREE Full text] [doi: 10.12669/pjms.39.2.7653] [Medline: 36950398]
43. Masters K. Ethical use of artificial intelligence in health professions education: AMEE Guide No. 158. Med Teach 2023
Jun;45(6):574-584 [doi: 10.1080/0142159X.2023.2186203] [Medline: 36912253]

Abbreviations
AI: artificial intelligence
EU: European Union
GDPR: General Data Protection Regulation
GPT: Generative Pretrained Transformer

LLM: large language model
USMLE: United States Medical Licensing Examination

Edited by T de Azevedo Cardoso; submitted 14.03.23; peer-reviewed by J Luo, L Weinert; comments to author 09.06.23; revised
version received 16.06.23; accepted 30.06.23; published 21.11.23
Please cite as:
Wong RSY, Ming LC, Raja Ali RA
The Intersection of ChatGPT, Clinical Medicine, and Medical Education
JMIR Med Educ 2023;9:e47274
URL: https://mededu.jmir.org/2023/1/e47274
doi: 10.2196/47274
PMID:

©Rebecca Shin-Yee Wong, Long Chiau Ming, Raja Affendi Raja Ali. Originally published in JMIR Medical Education
(https://mededu.jmir.org), 21.11.2023. This is an open-access article distributed under the terms of the Creative Commons
Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction
in any medium, provided the original work, first published in JMIR Medical Education, is properly cited. The complete bibliographic
information, a link to the original publication on https://mededu.jmir.org/, as well as this copyright and license information must
be included.
