
Are Deep Learning Models Superior to Radiologists in Detecting Anomalies in Medical Imaging?
Cao Pham Minh Dang
College of Engineering and Computer Science, VinUniversity
ENG1010: Fundamentals of Academic Writing – Section 7
Professor Nhu Dinh Ngoc Anh
November 24, 2024
748 words

Honor Code
I affirm that:
1. I fully understand and have adhered to the course policy on generative AI use for
this specific assignment.
2. I have not used generative AI in any manner that violates the stated policy for this
assignment.
3. I have accurately and completely disclosed all instances of generative AI use in the
accompanying acknowledgment table.
4. I take full responsibility for the integrity and originality of the work I am
submitting.

I understand that any violation of this honor code may result in disciplinary action as
outlined in the VinUniversity Academic Integrity Policy.

Acknowledgement Table

☐ I have not used any AI tools in the creation or revision of this submission.
☒ I have used AI tools responsibly in this submission, in accordance with the course/assignment guidelines, AI Assessment Scale, and VinUniversity Guidelines on Student use of Generative Artificial Intelligence. I have summarized how I used them below. I take full responsibility for the final content of this submission.

Tool and link to chat (if available): ChatGPT (LINK)
Purpose for using the tool: Understand how AI works in medical imaging and the importance of the datasets.
Prompt & follow-up prompt(s) input into the tool: 1. "What is the most important thing for an AI to detect anomalies in medical imaging?" After it answered, I asked a follow-up question: 2. "How important are the datasets?"
How I used/adapted the output: I used the importance of the datasets as the main idea for the rebuttal in the first body paragraph, which attacks the dependency of AI models on clean, labeled datasets.

Tool and link to chat (if available): Gemini (LINK)
Purpose for using the tool: Generate some ideas about the weaknesses of deep learning models in medical imaging.
Prompt & follow-up prompt(s) input into the tool: 1. "Why can't AI and deep learning models be applied broadly in medical imaging?" 2. "Elaborate on the model interpretability"
How I used/adapted the output: I used the AI to help me generate ideas for the rebuttal. I picked the idea of model interpretability and did further research for my rebuttal in the second body paragraph.

Tool and link to chat (if available): ChatGPT (LINK)
Purpose for using the tool: Check whether my reference list is cited properly.
Prompt & follow-up prompt(s) input into the tool: "Check if this reference list is cited properly? …. (my reference list)"
How I used/adapted the output: I used the AI to adjust some of my citations, which I had miscited.

Tool and link to chat (if available): FoAW ChatGPT (LINK)
Purpose for using the tool: Help me evaluate the effectiveness of the rebuttal.
Prompt & follow-up prompt(s) input into the tool: "Evaluate this rebuttal based on all the criteria that you are trained: (the second body paragraph)"
How I used/adapted the output: I used this only to make sure that the rebuttal in the second body paragraph is relevant.

Signed by: Cao Pham Minh Dang on November 24, 2024

My essay:
If someone had been asked 20 years ago whether machines could surpass human expertise in medicine, the answer would undoubtedly have been a firm "no." Yet today, advancements in artificial intelligence (AI), particularly deep learning, are transforming medical imaging. With access to vast datasets and unprecedented technological progress, many people believe that deep learning models now have an edge over human experts. While AI holds immense promise in terms of accuracy and future development, I contend that radiologists remain superior due to their contextual expertise, reasoning ability, and practical sustainability in real-world medical settings. This essay explores the limitations of deep learning models, focusing on data dependency and interpretability, while emphasizing the enduring strengths of human radiologists.

First and foremost, the reliance of AI models on clean, labeled datasets indicates the enduring importance of radiologists' expertise in medical imaging. It is often thought that, given sufficient disease data, AI models can be trained to outperform radiologists. In a study on detecting chest diseases, Rajpurkar et al. (2018) found that deep learning can automatically detect and localize many pathologies in chest radiographs at a level comparable to practicing radiologists. In another study, on detecting breast cancer through images, McKinney et al. (2020) found that an AI system, when used in a double-reading process, significantly reduced the second reader's workload by 88% in screening breast cancer images. Given that these studies were published four to five years ago, and given the unprecedented pace of technological development, the performance and speed of current AI models are believed to surpass even the accumulated experience of a radiologist. However, while it is undeniable that AI models can perform very well when given enough data, this argument fails to consider how difficult it is to obtain clean data in sufficient quantities, especially in the medical field. Specifically, manually labeling the data used to train AI models requires a considerable amount of time and expertise, which, according to Zhang and Qie (2023), is a key challenge in applying AI models to medical imaging. In addition, Simpson et al. (2019) found that, due to the time-consuming and costly nature of manual annotation, the number of clean, fully labeled datasets is very limited. In contrast, radiologists are professionally trained to detect anomalies across a diverse range of medical images, and they do not rely on external guidance. Consequently, it will be a long time before any AI model matches the comprehensive knowledge of radiologists.
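To see the data-dependency point concretely, below is a minimal sketch, assuming PyTorch and hypothetical stand-in data rather than any of the cited systems, of the supervised training loop behind image classifiers of this kind. The key observation is that every training example must arrive as an (image, label) pair, and those labels are precisely the costly expert annotations described above.

```python
# Minimal supervised training sketch (PyTorch). Hypothetical stand-in data;
# illustrates that training cannot proceed without expert-provided labels.
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader

class LabeledXrayDataset(Dataset):
    """Each item is an (image, label) pair; in practice the labels must
    come from radiologists annotating the images by hand."""
    def __init__(self, images, labels):
        assert len(images) == len(labels), "every image needs a label"
        self.images, self.labels = images, labels

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        return self.images[idx], self.labels[idx]

# Stand-in data: 64 fake single-channel 224x224 "radiographs", binary labels.
images = torch.randn(64, 1, 224, 224)
labels = torch.randint(0, 2, (64,))

loader = DataLoader(LabeledXrayDataset(images, labels), batch_size=16)
model = nn.Sequential(nn.Flatten(), nn.Linear(1 * 224 * 224, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for batch_images, batch_labels in loader:
    optimizer.zero_grad()
    logits = model(batch_images)
    # The loss is computed against the human-provided labels: no labels,
    # no gradient signal, no learning.
    loss = loss_fn(logits, batch_labels)
    loss.backward()
    optimizer.step()
```

Nothing in this loop can run without the label tensor, which is why annotation cost scales directly with the size of the training set.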

Secondly, the opacity of AI models highlights the continued necessity of radiologists' interpretive skills and transparency. Many people believe that, given the continuously improving nature of AI models, they will eventually surpass human capabilities. As Hamanaka and Oda (2024) put it, "In the future, AI is expected to be able to independently make diagnoses." Najjar (2023) likewise reported that AI algorithms can emulate or even surpass human cognitive capabilities, especially in tasks requiring high-speed processing of vast datasets, and that, when integrated into virtual reality technologies, AI can greatly boost radiological efficiency, diagnostic accuracy, and treatment planning. Given the seemingly limitless potential of AI, it is understandable why many contend that these models will ultimately exceed the performance of radiologists. However, even if the accuracy of such models surpasses that of radiologists, this accuracy is of little use if the models cannot explain why a given output was produced. According to Waller et al. (2022), machines often fail to disclose the statistical rationale behind their tasks, complicating their application in medical settings. Similarly, Zhang and Qie (2023) identified the lack of interpretability as a key challenge for deep learning models. These observations underscore opacity as a critical weakness of current deep learning models. In contrast, radiologists' ability to explain their diagnostic process transparently fosters patient trust, which plays a crucial role in complex cases where understanding the reasoning behind a diagnosis is essential. Therefore, while AI models may continue to improve in accuracy, radiologists' interpretive skills remain indispensable and superior for maintaining trust and ensuring comprehensive patient care.
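For a rough sense of what post-hoc "explanations" from deep learning models look like in practice, the sketch below computes a simple input-gradient saliency map, one common attempt at asking which pixels most influenced a prediction. This is a generic illustration assuming any differentiable PyTorch classifier; the model here is a toy stand-in, not one of the cited systems. Such a map shows where the model looked, but it still does not supply the statistical rationale that Waller et al. (2022) find missing.

```python
# Input-gradient saliency sketch (PyTorch): a common post-hoc attempt at
# explaining a classifier's output. Toy model; any differentiable
# classifier would work the same way.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(1 * 224 * 224, 2))
model.eval()

# Fake radiograph; requires_grad lets us backpropagate to the pixels.
image = torch.randn(1, 1, 224, 224, requires_grad=True)

logits = model(image)
predicted_class = logits.argmax(dim=1).item()

# Backpropagate the predicted class score to the input pixels.
score = logits[0, predicted_class]
score.backward()

# Saliency: gradient magnitude at each pixel. High values mark pixels
# whose perturbation most changes the score, a heat map of "where the
# model looked" rather than a clinical rationale for "why".
saliency = image.grad.abs().squeeze()
print(saliency.shape)  # torch.Size([224, 224])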

In conclusion, while deep learning models do show promise in detection accuracy and future potential, radiologists remain superior due to their accumulated expertise and interpretive capability. The dependence of AI on clean, labeled datasets, together with its lack of interpretability, underscores the importance of human expertise in ensuring accuracy and trust in medical diagnoses. Therefore, we should continue to place our trust in radiologists, ensuring that their expertise continues to guide medical imaging and diagnosis.

References

Hamanaka, R., & Oda, M. (2024). Can artificial intelligence replace humans for detecting lung tumors on radiographs? An examination of resected malignant lung tumors. Journal of Personalized Medicine, 14(2), 164. https://doi.org/10.3390/jpm14020164

McKinney, S. M., Sieniek, M., Godbole, V., Godwin, J., Antropova, N., Ashrafian, H., Back, T., Chesus, M., Corrado, G. S., Darzi, A., Etemadi, M., Garcia-Vicente, F., Gilbert, F. J., Halling-Brown, M., Hassabis, D., Jansen, S., Karthikesalingam, A., Kelly, C. J., King, D., ... Mostofi, H. (2020). International evaluation of an AI system for breast cancer screening. Nature, 577(7797), 89–94. https://doi.org/10.1038/s41586-019-1799-6

Najjar, R. (2023). Redefining radiology: A review of artificial intelligence integration in medical imaging. Diagnostics, 13(17), 2760. https://doi.org/10.3390/diagnostics13172760

Rajpurkar, P., Irvin, J., Ball, R. L., Zhu, K., Yang, B., Mehta, H., Duan, T., Ding, D., Bagul, A., Langlotz, C. P., Patel, B. N., Yeom, K. W., Shpanskaya, K., Blankenberg, F. G., Seekins, J., Amrhein, T. J., Mong, D. A., Halabi, S. S., Zucker, E. J., Ng, A. Y., & Lungren, M. P. (2018). Deep learning for chest radiograph diagnosis: A retrospective comparison of the CheXNeXt algorithm to practicing radiologists. PLoS Medicine, 15(11), e1002686. https://doi.org/10.1371/journal.pmed.1002686

Waller, J., O'Connor, A., Raafat, E., Amireh, A., Dempsey, J., Martin, C., & Umair, M. (2022). Applications and challenges of artificial intelligence in diagnostic and interventional radiology. Polish Journal of Radiology, 87, e113–e117. https://doi.org/10.5114/pjr.2022.113531

Zhang, H., & Qie, Y. (2023). Applying deep learning to medical imaging: A review. Applied Sciences, 13(18), 10521. https://doi.org/10.3390/app131810521
