The Intersection of ChatGPT, Clinical Medicine, and Medical Education
Viewpoint
Rebecca Shin-Yee Wong1,2, MBBS, MSc, PhD; Long Chiau Ming3*, BPharm Hons, MClinPharm, PhD; Raja Affendi Raja Ali3,4*, MBBCh, MMedSc, MD, MBA

1Department of Medical Education, School of Medical and Life Sciences, Sunway University, Selangor, Malaysia
2Faculty of Medicine, Nursing and Health Sciences, SEGi University, Petaling Jaya, Malaysia
3School of Medical and Life Sciences, Sunway University, Selangor, Malaysia
4GUT Research Group, Faculty of Medicine, Universiti Kebangsaan Malaysia, Kuala Lumpur, Malaysia
*these authors contributed equally
Corresponding Author:
Long Chiau Ming, BPharm Hons, MClinPharm, PhD
School of Medical and Life Sciences
Sunway University
No 5, Jalan Universiti
Bandar Sunway
Selangor, 47500
Malaysia
Phone: 60 374918622 ext 7452
Email: [email protected]
Abstract
As we progress deeper into the digital age, the robust development and application of advanced artificial intelligence (AI) technology, specifically generative language models such as ChatGPT (OpenAI), have potential implications in all sectors, including medicine. This viewpoint article presents the authors’ perspective on the integration of AI models such as ChatGPT into clinical medicine and medical education. The unprecedented capacity of ChatGPT to generate human-like responses, refined through reinforcement learning from human feedback, could significantly reshape pedagogical methodologies within medical education. Drawing on a comprehensive review and the authors’ personal experiences, this viewpoint article elucidates the pros, cons, and ethical considerations of using ChatGPT in clinical medicine and, notably, its implications for medical education. This exploration is crucial in a transformative era in which AI could augment human capability in knowledge creation and dissemination, potentially revolutionizing medical education and clinical practice. The importance of maintaining academic integrity and professional standards is highlighted, as is the relevance of establishing clear guidelines for the responsible and ethical use of AI technologies in clinical medicine and medical education.
KEYWORDS
ChatGPT; clinical research; large language model; artificial intelligence; ethical considerations; AI; OpenAI
Many of these applications can be delivered via smartphone apps [3].

The use of AI in medicine, including the use of generative language models, is often accompanied by challenges and contentions. Common challenges include privacy, data security, algorithmic transparency and explainability, errors and liability, as well as regulatory issues associated with AI medicine [4]. Lately, the use of generative language models in scientific writing has also stirred up controversies in the academic and publishing communities. Some journals have declined ChatGPT as a coauthor, whereas others have happily accepted manuscripts authored by ChatGPT [5].

Currently, numerous reviews on the use of generative language models in the field of clinical medicine have been reported, but mainly in the context of academic writing [6] and medical education [7]. However, viewpoints that relate the use of ChatGPT in clinical medicine to its implications for medical education are lacking. The inexorable march of technological innovation, exemplified by AI applications in clinical medicine, presents revolutionary changes in how we approach medical education. With the advent of AI platforms like ChatGPT, the landscape of pedagogical methodologies within medical education is poised for unprecedented change. This model’s vast training on an array of data and its ability to generate human-like conversations are particularly compelling.

Despite earlier uses of AI and chatbots in clinical medicine, the introduction of highly advanced models such as ChatGPT necessitates a rigorous examination of their potential integration within medical education. Understanding the challenges that coincide with AI use, such as privacy, data security, and algorithmic transparency, is crucial for a comprehensive, informed, and ethically grounded exploration of AI in medical education. Hence, this article aims to provide a perspective on ChatGPT and generative language models in clinical medicine, addressing the opportunities, challenges, and ethical considerations inherent in their use, particularly their potential as transformative agents within medical education.

Generative Language Models and ChatGPT

Generative language models such as ChatGPT are trained on a massive amount of text data to understand natural language and generate human-like responses to a wide range of questions and prompts (instructions). “GPT” stands for “Generative Pretrained Transformer.” ChatGPT is an enhanced version of previous generations of GPTs (GPT-1, -2, -3, and -3.5) and a sibling model to InstructGPT (OpenAI). It is an AI-based language model designed to generate high-quality texts resembling human conversations [8]. The technology underpinning ChatGPT is a transformer-based architecture, a deep learning model that uses self-attention mechanisms for natural language processing; it was first introduced by a team at Google Brain in 2017 [9]. The transformer-based architecture allows ChatGPT to break down a sentence or passage into smaller fragments referred to as “tokens.” Relationships among the tokens are then analyzed and used to generate new text in a context and style similar to those of the original text.

A detailed discussion of the technology used in ChatGPT is beyond the scope of this viewpoint article. Briefly, ChatGPT is a fine-tuned model belonging to the GPT-3.5 series. Compared with earlier versions of GPT, some strengths of ChatGPT include its ability to admit errors, ask follow-up questions, question incorrect assumptions, and even decline inappropriate requests. There are 3 main steps in the training of ChatGPT. The first step involves sampling a prompt (message or instruction) from the prompt library and collecting human responses; these data are then used to fine-tune the pretrained large language model (LLM). In the second step, multiple responses are generated by the LLM following prompt sampling; the responses are then manually ranked and used to train a reward model that fits human preferences. In the last step, the LLM is trained further with reinforcement learning algorithms, building on the supervised fine-tuning and reward model training of the previous steps [8].

Currently, the research preview version of ChatGPT is available to the public at no cost. Although ChatGPT is helpful in data sourcing, and some users speculate that ChatGPT will replace search engines like Google, it is noteworthy that several key differences exist between a chatbot and a search engine [10]. Table 1 summarizes the differences between a chatbot and a search engine.
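For readers who prefer a concrete picture, the 3 training steps described above (supervised fine-tuning, reward modeling, and reinforcement learning from human feedback) can be caricatured in a few lines of Python. This is purely an illustrative sketch under our own naming, not OpenAI's implementation: real training involves large transformer models, human labelers, and reinforcement learning algorithms such as proximal policy optimization.

```python
# Illustrative sketch of the 3-step training pipeline described above.
# All function names and data are invented for illustration only.

# Step 1: supervised fine-tuning on human-written demonstration responses.
demonstrations = [
    ("Explain hypertension.", "Hypertension is persistently elevated blood pressure ..."),
]

def supervised_fine_tune(model, demos):
    # In practice: gradient descent on (prompt, human response) pairs.
    model["sft_examples"] = len(demos)
    return model

# Step 2: fit a reward model from human rankings of sampled responses.
def train_reward_model(ranked_responses):
    # Higher-ranked responses get higher scores; a real reward model
    # generalizes these human preferences to unseen responses.
    scores = {resp: len(ranked_responses) - rank
              for rank, resp in enumerate(ranked_responses)}
    return lambda response: scores.get(response, 0)

# Step 3: reinforcement learning -- steer the model toward responses
# that the reward model scores highly.
def rl_select(candidate_responses, reward_model):
    return max(candidate_responses, key=reward_model)

model = supervised_fine_tune({}, demonstrations)
reward = train_reward_model(["best answer", "acceptable answer", "poor answer"])
chosen = rl_select(["poor answer", "best answer"], reward)
print(chosen)  # -> best answer
```

The key idea the sketch tries to convey is that the reward model learns from relative human rankings rather than absolute labels, and the final reinforcement learning step optimizes the model against those learned preferences.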
Patient engagement and education | Provide information to patients, caretakers, and the public | Use of chatbots in prostate cancer education | Görtz et al [23] (2023)
Disease prevention | Provide counseling and gather information (eg, risk factors) for health screening | Use of chatbots in symptom screening for patients with autoinflammatory diseases, with high patient acceptability | Tan et al [24] (2023)
Participant recruitment | Analyze information from potential participants through conversations and medical records and streamline the information gathered | Comparing recruitment of research participants using chatbot versus telephone outreach | Kim et al [19] (2021)
Data collection | Review large volumes of data through conversations and medical records; use data collected (eg, medical history, investigation findings, and treatment outcomes) for pattern recognition in diseases; and correlate data (eg, demographics and risk factors) with diseases | Use of a chatbot (Dokbot) for health data collection among older patients | Wilczewski et al [25] (2023)
Clinical decision support and patient management | Review data on medical history, investigation findings, etc; provide treatment recommendations; and support clinical decision-making by providing supplemental information | Application of ChatGPT in making diagnoses and patient management using clinical vignettes | Rao et al [26] (2023)
Drug discovery and development | Review large volumes of scientific data on drugs and identify gaps and potential targets | Use of pretrained biochemical language models for targeted drug design | Uludoğan et al [27] (2022)
Medical writing | Assist in medical writing and publication | Application of ChatGPT in case report writing | Hegde et al [28] (2023)

aUSMLE: United States Medical Licensing Examination.
applications derive their information and the black box problem have always been a big challenge in AI medicine [32]. This further raises concerns of transparency and trust, which are 2 crucial elements in medicine.

The training period of ChatGPT was between 2020 and 2021. As of this writing, ChatGPT was unable to provide information beyond the training period. For example, based on the authors’ experience, ChatGPT failed to describe the Turkey-Syria earthquakes that took place in February 2023. This implies that further training is necessary for ChatGPT to provide up-to-date information, yet training a large-scale AI model like ChatGPT is expensive and time-consuming. Moreover, it involves feeding ChatGPT high volumes of information, which requires highly skilled personnel.

Ethical Considerations

The use of AI models like ChatGPT may give rise to social, ethical, and medico-legal issues. This section discusses these challenges and the potential pitfalls associated with the use of ChatGPT.

Privacy, Confidentiality, and Informed Consent

Patient privacy and confidentiality, as well as data protection, are common issues of debate in AI medicine [33]. Integration of existing health care systems and medical records with ChatGPT may lead to such issues. Informed consent must be obtained from patients before ChatGPT accesses their data. The requirements of informed consent may vary depending on the situation, and some additional elements may need to be included when obtaining informed consent for the application of AI in medicine. Examples include the disclosure of algorithmic decision support, a description of the input and output data, an explanation of the AI training, as well as the right to a second opinion from a human physician [34]. It is important that physicians ensure privacy and data security, as a breach of confidentiality may lead to a breach of trust, which can negatively impact the doctor-patient relationship.

Accountability, Liability, and Biases

Accountability and liability are other ethical considerations. As some medical errors are life-threatening, physicians and researchers must ensure safety and accountability when using AI to support diagnosis, clinical decision-making, treatment recommendations, and disease predictions. Other ethical issues include biased and inaccurate data, which lead to unfair and discriminatory results. Therefore, it is important to ensure that AI applications used in research and clinical medicine are trained on representative and diverse data sets to avoid such biases.

In the context of generative language models, bias may be viewed as systematic inaccurate representations, distortions, or assumptions that favor certain groups or ideas, perpetuate stereotypes, or reflect incorrect judgments made by the model based on previous training. Biases in generative language models can be introduced through various sources, such as the training data, algorithms, labeling and annotation, as well as product design and policy decisions. Different types of biases can occur, including demographic, cultural, linguistic, and political biases [35].

Using LLMs like ChatGPT in clinical decision-making may lead to other unintended consequences, such as malpractice and lawsuits. The use of traditional decision support tools like clinical practice guidelines allows physicians to assess the reliability of information according to its source and level of evidence. However, AI models like ChatGPT may generate biased and incorrect output with a lack of transparency in data sourcing; they may treat all sources of data equally and fail to differentiate the data based on evidence levels [36]. Depending on how a question is phrased, ChatGPT may also provide different answers to the same question. Hence, physicians should take these issues into consideration and use ChatGPT with caution in clinical decision-making.

Regulation of the Use of AI in Medicine

With the emergence of social, ethical, and legal issues associated with applications of AI in health care, there is a need to impose regulatory measures and acts to address these issues. The regulation of AI medicine varies in different parts of the world. For example, in the United States, a regulatory framework and an action plan were published by the Food and Drug Administration in 2019 and 2021, respectively, and the responsibilities for AI lie with specific federal agencies [37].

In contrast, the European Commission proposed a robust legal framework (the AI Act) that regulates applications of AI not only in medicine but also in other sectors. AI applications in medicine must meet the requirements of both the AI Act and the EU (European Union) Medical Device Regulation [38]. Areas under such regulation include life-cycle regulation, transparency to users, and algorithmic bias [37]. The European Union also regulates the data generated by AI models via the GDPR (General Data Protection Regulation), under which solely automated decision-making and data processing are prohibited [39].

Academic Dishonesty

The use of ChatGPT in medical writing must be transparent, as it raises issues of academic dishonesty and the fulfillment of authorship criteria, with some disapproving of ChatGPT being listed as an author in journal publications [5,40,41]. While the use of ChatGPT in clinical medicine and medical education allows easy access to a vast amount of information, it may raise issues such as plagiarism and a lack of originality in scientific writing. Overreliance on ChatGPT may also hinder the development of skills in original thinking and critical analysis. Figure 1 summarizes the use of ChatGPT in clinical medicine.
Figure 1. Overview of the use of ChatGPT in clinical medicine and medical education.
Authors' Contributions
RSYW contributed to the writing and editing of this manuscript. LCM and RARA contributed to conceptualization, data search,
and editing.
Conflicts of Interest
None declared.
References
1. Bhattamisra S, Banerjee P, Gupta P, Mayuren J, Patra S, Candasamy M. Artificial intelligence in pharmaceutical and
healthcare research. BDCC 2023 Jan 11;7(1):10 [FREE Full text] [doi: 10.3390/bdcc7010010]
2. Jungmann SM, Klan T, Kuhn S, Jungmann F. Accuracy of a chatbot (Ada) in the diagnosis of mental disorders: comparative
case study with lay and expert users. JMIR Form Res 2019 Oct 29;3(4):e13863 [FREE Full text] [doi: 10.2196/13863]
[Medline: 31663858]
3. Tudor Car L, Dhinagaran DA, Kyaw BM, Kowatsch T, Joty S, Theng Y, et al. Conversational agents in health care: scoping
review and conceptual analysis. J Med Internet Res 2020 Aug 07;22(8):e17158 [FREE Full text] [doi: 10.2196/17158]
[Medline: 32763886]
4. Fenech ME, Buston O. AI in cardiac imaging: a UK-based perspective on addressing the ethical, social, and political
challenges. Front Cardiovasc Med 2020;7:54 [FREE Full text] [doi: 10.3389/fcvm.2020.00054] [Medline: 32351974]
5. Stokel-Walker C. ChatGPT listed as author on research papers: many scientists disapprove. Nature 2023
Jan;613(7945):620-621 [doi: 10.1038/d41586-023-00107-z] [Medline: 36653617]
6. Bhatia P. ChatGPT for academic writing: a game changer or a disruptive tool? J Anaesthesiol Clin Pharmacol 2023;39(1):1-2
[FREE Full text] [doi: 10.4103/joacp.joacp_84_23] [Medline: 37250265]
7. Lee H. The rise of ChatGPT: exploring its potential in medical education. Anat Sci Educ 2023 Mar 14 [doi: 10.1002/ase.2270]
[Medline: 36916887]
8. ChatGPT: Optimizing language models for dialogue. OpenAI. URL: https://openai.com/blog/chatgpt/ [accessed 2023-06-13]
9. Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez A, et al. Attention is all you need. 2017 Presented at:
Proceedings of the Advances in Neural Information Processing Systems 30 (NIPS 2017); Dec 4-9, 2017; Long Beach, CA
10. AI Chatbots Vs Search Engines: What Is the Difference. Analytics Insight. 2023. URL: https://www.analyticsinsight.net/
ai-chatbots-vs-search-engines-what-is-the-difference/ [accessed 2023-01-21]
11. Mbakwe AB, Lourentzou I, Celi LA, Mechanic OJ, Dagan A. ChatGPT passing USMLE shines a spotlight on the flaws
of medical education. PLOS Digit Health 2023 Feb;2(2):e0000205 [FREE Full text] [doi: 10.1371/journal.pdig.0000205]
[Medline: 36812618]
12. Gilson A, Safranek CW, Huang T, Socrates V, Chi L, Taylor RA, et al. How does ChatGPT perform on the United States
Medical Licensing Examination? The implications of large language models for medical education and knowledge assessment.
JMIR Med Educ 2023 Feb 08;9:e45312 [FREE Full text] [doi: 10.2196/45312] [Medline: 36753318]
13. Fijačko N, Gosak L, Štiglic G, Picard CT, John Douma M. Can ChatGPT pass the life support exams without entering the
American heart association course? Resuscitation 2023 Apr;185:109732 [doi: 10.1016/j.resuscitation.2023.109732] [Medline:
36775020]
14. Savage N. Drug discovery companies are customizing ChatGPT: here's how. Nat Biotechnol 2023 May;41(5):585-586
[doi: 10.1038/s41587-023-01788-7] [Medline: 37095351]
15. Amiri P, Karahanna E. Chatbot use cases in the Covid-19 public health response. J Am Med Inform Assoc 2022 Apr
13;29(5):1000-1010 [FREE Full text] [doi: 10.1093/jamia/ocac014] [Medline: 35137107]
16. Rebelo N, Sanders L, Li K, Chow JCL. Learning the treatment process in radiotherapy using an artificial intelligence-assisted
chatbot: development study. JMIR Form Res 2022 Dec 02;6(12):e39443 [FREE Full text] [doi: 10.2196/39443] [Medline:
36327383]
17. Echeazarra L, Pereira J, Saracho R. TensioBot: a chatbot assistant for self-managed in-house blood pressure checking. J
Med Syst 2021 Mar 15;45(4):54 [doi: 10.1007/s10916-021-01730-x] [Medline: 33723721]
18. Giunti G, Isomursu M, Gabarron E, Solad Y. Designing depression screening chatbots. Stud Health Technol Inform 2021
Dec 15;284:259-263 [doi: 10.3233/SHTI210719] [Medline: 34920522]
19. Kim YJ, DeLisa JA, Chung Y, Shapiro NL, Kolar Rajanna SK, Barbour E, et al. Recruitment in a research study via chatbot
versus telephone outreach: a randomized trial at a minority-serving institution. J Am Med Inform Assoc 2021 Dec
28;29(1):149-154 [FREE Full text] [doi: 10.1093/jamia/ocab240] [Medline: 34741513]
20. Asensio-Cuesta S, Blanes-Selva V, Conejero JA, Frigola A, Portolés MG, Merino-Torres JF, et al. A user-centered chatbot
(Wakamola) to collect linked data in population networks to support studies of overweight and obesity causes: design and
pilot study. JMIR Med Inform 2021 Apr 14;9(4):e17503 [FREE Full text] [doi: 10.2196/17503] [Medline: 33851934]
21. Xue VW, Lei P, Cho WC. The potential impact of ChatGPT in clinical and translational medicine. Clin Transl Med 2023
Mar;13(3):e1216 [FREE Full text] [doi: 10.1002/ctm2.1216] [Medline: 36856370]
22. Mehnen L, Gruarin S, Vasileva M, Knapp B. ChatGPT as a medical doctor? A diagnostic accuracy study on common and
rare diseases. medRxiv Preprint posted online April 27, 2023. [FREE Full text] [doi: 10.1101/2023.04.20.23288859]
23. Görtz M, Baumgärtner K, Schmid T, Muschko M, Woessner P, Gerlach A, et al. An artificial intelligence-based chatbot
for prostate cancer education: Design and patient evaluation study. Digit Health 2023;9:20552076231173304 [FREE Full
text] [doi: 10.1177/20552076231173304] [Medline: 37152238]
24. Tan TC, Roslan NE, Li JW, Zou X, Chen X, - R, et al. Chatbots for symptom screening and patient education: a pilot study
on patient acceptability in autoimmune inflammatory diseases. J Med Internet Res 2023 May 23 [FREE Full text] [doi:
10.2196/49239] [Medline: 37219234]
25. Wilczewski H, Soni H, Ivanova J, Ong T, Barrera JF, Bunnell BE, et al. Older adults' experience with virtual conversational
agents for health data collection. Front Digit Health 2023;5:1125926 [FREE Full text] [doi: 10.3389/fdgth.2023.1125926]
[Medline: 37006821]
26. Rao A, Pang M, Kim J, Kamineni M, Lie W, Prasad AK, et al. Assessing the utility of ChatGPT throughout the entire
clinical workflow. medRxiv Preprint posted online February 26, 2023. [FREE Full text] [doi: 10.1101/2023.02.21.23285886]
[Medline: 36865204]
27. Uludoğan G, Ozkirimli E, Ulgen KO, Karalı N, Özgür A. Exploiting pretrained biochemical language models for targeted
drug design. Bioinformatics 2022 Sep 16;38(Suppl_2):ii155-ii161 [doi: 10.1093/bioinformatics/btac482] [Medline: 36124801]
28. Hegde A, Srinivasan S, Menon G. Extraventricular neurocytoma of the posterior fossa: a case report written by ChatGPT.
Cureus 2023 Mar;15(3):e35850 [FREE Full text] [doi: 10.7759/cureus.35850] [Medline: 37033498]
29. Else H. Abstracts written by ChatGPT fool scientists. Nature 2023 Jan;613(7944):423 [doi: 10.1038/d41586-023-00056-7]
[Medline: 36635510]
30. Alkaissi H, McFarlane SI. Artificial hallucinations in ChatGPT: implications in scientific writing. Cureus 2023
Feb;15(2):e35179 [FREE Full text] [doi: 10.7759/cureus.35179] [Medline: 36811129]
31. Huh S. Are ChatGPT’s knowledge and interpretation ability comparable to those of medical students in Korea for taking
a parasitology examination?: a descriptive study. J Educ Eval Health Prof 2023;20:1 [FREE Full text] [doi:
10.3352/jeehp.2023.20.1] [Medline: 36627845]
32. Poon AIF, Sung JJY. Opening the black box of AI-Medicine. J Gastroenterol Hepatol 2021 Mar;36(3):581-584 [doi:
10.1111/jgh.15384] [Medline: 33709609]
33. Murdoch B. Privacy and artificial intelligence: challenges for protecting health information in a new era. BMC Med Ethics
2021 Sep 15;22(1):122 [FREE Full text] [doi: 10.1186/s12910-021-00687-3] [Medline: 34525993]
34. Ursin F, Timmermann C, Orzechowski M, Steger F. Diagnosing diabetic retinopathy with artificial intelligence: what
information should be included to ensure ethical informed consent? Front Med (Lausanne) 2021 Jul 21;8:695217 [FREE
Full text] [doi: 10.3389/fmed.2021.695217] [Medline: 34368192]
35. Ferrara E. Should ChatGPT be biased? Challenges and risks of bias in large language models. arXiv Preprint posted online
April 7, 2023. [FREE Full text]
36. Mello MM, Guha N. ChatGPT and physicians' malpractice risk. JAMA Health Forum 2023 May 05;4(5):e231938 [FREE
Full text] [doi: 10.1001/jamahealthforum.2023.1938] [Medline: 37200013]
37. Vokinger KN, Gasser U. Regulating AI in medicine in the United States and Europe. Nat Mach Intell 2021 Sep;3(9):738-739
[FREE Full text] [doi: 10.1038/s42256-021-00386-z] [Medline: 34604702]
38. Niemiec E. Will the EU Medical Device Regulation help to improve the safety and performance of medical AI devices?
Digit Health 2022;8:20552076221089079 [FREE Full text] [doi: 10.1177/20552076221089079] [Medline: 35386955]
39. Meszaros J, Minari J, Huys I. The future regulation of artificial intelligence systems in healthcare services and medical
research in the European Union. Front Genet 2022;13:927721 [FREE Full text] [doi: 10.3389/fgene.2022.927721] [Medline:
36267404]
40. Curtis N, ChatGPT. To ChatGPT or not to ChatGPT? The impact of artificial intelligence on academic publishing. Pediatr
Infect Dis J 2023 Apr 01;42(4):275 [doi: 10.1097/INF.0000000000003852] [Medline: 36757192]
41. Yeo-Teh N, Tang B. Letter to editor: NLP systems such as ChatGPT cannot be listed as an author because these cannot
fulfill widely adopted authorship criteria. Account Res 2023 Feb 13:1-3 [doi: 10.1080/08989621.2023.2177160] [Medline:
36748354]
42. Khan RA, Jawaid M, Khan AR, Sajjad M. ChatGPT - Reshaping medical education and clinical management. Pak J Med
Sci 2023;39(2):605-607 [FREE Full text] [doi: 10.12669/pjms.39.2.7653] [Medline: 36950398]
43. Masters K. Ethical use of artificial intelligence in health professions education: AMEE Guide No. 158. Med Teach 2023
Jun;45(6):574-584 [doi: 10.1080/0142159X.2023.2186203] [Medline: 36912253]
Abbreviations
AI: artificial intelligence
EU: European Union
GDPR: General Data Protection Regulation
GPT: Generative Pretrained Transformer
LLM: large language model
Edited by T de Azevedo Cardoso; submitted 14.03.23; peer-reviewed by J Luo, L Weinert; comments to author 09.06.23; revised
version received 16.06.23; accepted 30.06.23; published 21.11.23
Please cite as:
Wong RSY, Ming LC, Raja Ali RA
The Intersection of ChatGPT, Clinical Medicine, and Medical Education
JMIR Med Educ 2023;9:e47274
URL: https://mededu.jmir.org/2023/1/e47274
doi: 10.2196/47274
PMID:
©Rebecca Shin-Yee Wong, Long Chiau Ming, Raja Affendi Raja Ali. Originally published in JMIR Medical Education
(https://mededu.jmir.org), 21.11.2023. This is an open-access article distributed under the terms of the Creative Commons
Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction
in any medium, provided the original work, first published in JMIR Medical Education, is properly cited. The complete bibliographic
information, a link to the original publication on https://mededu.jmir.org/, as well as this copyright and license information must
be included.