Appl. Sci. 2023, 13, 10521
Review
Applying Deep Learning to Medical Imaging: A Review
Huanhuan Zhang 1,† and Yufei Qie 2,*,†
Abstract: Deep learning (DL) has made significant strides in medical imaging. This review article
presents an in-depth analysis of DL applications in medical imaging, focusing on the challenges,
methods, and future perspectives. We discuss the impact of DL on the diagnosis and treatment of
diseases and how it has revolutionized the medical imaging field. Furthermore, we examine the most
recent DL techniques, such as convolutional neural networks (CNNs), recurrent neural networks
(RNNs), and generative adversarial networks (GANs), and their applications in medical imaging.
Lastly, we provide insights into the future of DL in medical imaging, highlighting its potential
advancements and challenges.
Keywords: convolutional neural networks; recurrent neural networks; generative adversarial networks; deep learning; medical imaging

Citation: Zhang, H.; Qie, Y. Applying Deep Learning to Medical Imaging: A Review. Appl. Sci. 2023, 13, 10521. https://doi.org/10.3390/app131810521
Academic Editor: Thomas Lindner
Received: 30 April 2023; Revised: 20 May 2023; Accepted: 8 September 2023; Published: 21 September 2023
Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

1. Introduction
1.1. Background and Motivation
Medical imaging has been a critical component of modern healthcare, providing clinicians with vital information for the diagnosis, treatment, and monitoring of various diseases [1]. Traditional image analysis techniques often rely on handcrafted features and expert knowledge, which can be time-consuming and subject to human error [2]. In recent years, machine learning (ML) methods have been increasingly applied to medical image analysis to improve efficiency and reduce potential human errors. These methods, including Support Vector Machines (SVMs), decision trees, random forests, and logistic regression, have shown success in tasks such as image segmentation, object detection, and disease classification. These ML methods typically involve the manual selection and extraction of features from the medical images, which are then used for prediction or classification. With the rapid development of deep learning (DL) technologies, there has been a significant shift toward leveraging these powerful tools to improve the accuracy and efficiency of medical image analysis [3]. Unlike traditional ML methods, DL models are capable of automatically learning and extracting hierarchical features from raw data. Deep learning, a subfield of ML, has made remarkable advancements in recent years, particularly in image recognition and natural language processing tasks [4]. This success is primarily attributed to the development of artificial neural networks (ANNs) with multiple hidden layers, which allow for the automatic extraction and learning of hierarchical features from raw data [5]. Consequently, DL techniques and network-based computation have been widely adopted in various applications, including autonomous driving, robotics, natural language understanding [6], and a large number of engineering computation cases [7–43].

In the medical imaging domain, DL has shown great potential for enhancing the quality of care and improving patient outcomes [44]. By automating the analysis of medical images, DL algorithms can aid in the early detection of diseases, streamline clinical workflows, and reduce the burden on healthcare professionals [45]. In addition, DL also plays a significant role in the credibility verification of reported medical data. For instance, it can be utilized to identify anomalies or inconsistencies in the data, thereby ensuring the reliability of the data used for diagnosis or treatment planning. DL models can also help in validating the authenticity of medical images, which is crucial in today's digital age, where data manipulation has become increasingly sophisticated. Moreover, DL models can be trained to predict disease progression and treatment response, thereby contributing to personalized medicine and the optimization of therapeutic strategies [46].
In our study, we specifically discuss the potential of DL models in medical imaging. We have found that deep learning techniques are revolutionizing medical imaging research. These findings underline the potential of DL techniques to further advance the field of medical imaging, opening new avenues for diagnosis and treatment strategies. This paper details these methods, results, and the implications of these findings for future research.
1.2. DL Techniques
Several DL techniques have been applied to medical imaging [47–52], with convolu-
tional neural networks (CNNs) being the most prevalent [53]. CNNs are particularly suited
for image analysis tasks due to their ability to capture local spatial patterns and automati-
cally learn hierarchical representations from input images [54]. Other DL techniques that
have been applied to medical imaging include recurrent neural networks (RNNs), which
are well-suited for handling sequential data, and generative adversarial networks (GANs),
which can generate new samples from learned data distributions [55]. In assessing the
performance of DL models in medical image diagnosis, several evaluation metrics
are commonly employed, including Receiver Operating Characteristic (ROC) curves and
confusion matrices, among other techniques [1–3]. The ROC curve is a graphical plot
that illustrates the diagnostic ability of a DL model as its discrimination threshold is
varied. It presents the trade-off between sensitivity (the true positive rate) and specificity
(one minus the false positive rate), providing a measure of how well a model distinguishes between
classes. The Area Under the ROC Curve (AUC) is also considered, which provides a single
metric to compare model performance. On the other hand, confusion matrices provide
a summary of prediction results on a classification problem. The number of correct and
incorrect predictions is counted and broken down by each class. This offers a more granular
view of the model performance, including metrics such as precision, recall, and F1-score,
which are crucial when dealing with imbalanced classes.
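As a concrete illustration of these metrics, the following self-contained NumPy sketch computes the AUC via the rank-sum (Mann-Whitney) identity and derives sensitivity and specificity from confusion-matrix counts. The labels and scores are toy values, not outputs of any model discussed in this review:

```python
import numpy as np

def confusion_counts(y_true, y_pred):
    """Counts (tn, fp, fn, tp) for binary labels in {0, 1}; precision, recall,
    F1, sensitivity, and specificity all derive from these four numbers."""
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    return tn, fp, fn, tp

def roc_auc(y_true, y_score):
    """AUC via the rank-sum identity: the probability that a randomly chosen
    positive case is scored higher than a randomly chosen negative one
    (assumes no tied scores, for simplicity)."""
    order = np.argsort(y_score)
    ranks = np.empty(len(y_score))
    ranks[order] = np.arange(1, len(y_score) + 1)
    n_pos = int(np.sum(y_true == 1))
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Toy ground-truth labels and model scores (in practice, predicted probabilities).
y_true = np.array([0, 0, 1, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8])
auc = roc_auc(y_true, y_score)                       # threshold-free summary
tn, fp, fn, tp = confusion_counts(y_true, (y_score >= 0.5).astype(int))
sensitivity = tp / (tp + fn)                         # true positive rate
specificity = tn / (tn + fp)                         # 1 - false positive rate
```

Note that the AUC summarizes all operating points, whereas the confusion matrix is computed at one fixed threshold (0.5 here).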
Figure 1. A comparison of various medical imaging modalities. Scinti: Scintigraphy; SPECT: Single-Photon Emission Computed Tomography; Optical: Optical Imaging; PET: Positron Emission Tomography; CT: Computed Tomography; US: Ultrasound; MRI: Magnetic Resonance Imaging.
1.4. Challenges and Opportunities
Despite the promising results achieved by DL in medical imaging, several challenges remain [47–52]. One major challenge is the limited availability of annotated medical image datasets due to the time-consuming and costly nature of manual annotations [58]. Additionally, data privacy concerns and the sharing of sensitive patient information pose significant obstacles to the development of large-scale, multi-institutional datasets [59]. Another challenge is the interpretability of DL models, as they often act as "black boxes" that provide limited insights into their decision-making processes [60]. Ensuring the explainability and trustworthiness of these models is crucial for their adoption in clinical practice, as clinicians need to understand the rationale behind their predictions [61].

Despite these challenges, DL in medical imaging presents numerous opportunities for advancing healthcare and improving patient outcomes. With ongoing research, interdisciplinary collaboration, and the development of more sophisticated algorithms, DL has the potential to revolutionize medical imaging and contribute significantly to the future of medicine.

2. Deep Learning Techniques in Medical Imaging
Deep learning techniques in medical imaging can serve a wide array of functions, both in terms of the acquisition of medical images and the identification of pathologies within these images. Specifically, these techniques are leveraged not only to enhance the quality of images obtained through various modalities but also to enable effective and efficient identification of pathological markers within these images. For example, convolutional neural networks (CNNs) can be used in the reconstruction of images from MRI scanners, enhancing the resolution of the obtained images and thereby allowing for a clearer visualization of potential pathologies [53]. Moreover, CNNs are particularly adept at analyzing these images postacquisition, identifying key features within these images that could point toward specific pathologies [54]. This dual functionality, improving the acquisition of images and aiding in the identification of pathologies, is a key strength of deep learning techniques in the field of medical imaging. Throughout this section, we will discuss three major types of deep learning techniques used in medical imaging: convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative adversarial networks (GANs). For each technique, we will detail its basic concepts, architecture and applications, role in image acquisition and pathology detection, along with transfer learning approaches and the limitations and challenges faced.

2.1. Convolutional Neural Networks (CNNs)
2.1.1. Basic Concepts
Convolutional neural networks (CNNs) are a class of DL models designed specifically for image analysis tasks [4]. Their basic mechanism is illustrated in Figure 2. CNNs consist of multiple layers, including convolutional, pooling, and fully connected layers, which work together to learn hierarchical representations of input images [62]. Convolutional layers are responsible for extracting local features from images, such as edges, corners, and textures, while pooling layers help reduce the spatial dimensions of feature maps, improving computational efficiency and reducing overfitting [63]. Finally, fully connected layers enable the integration of local features into global patterns, enabling the network to perform image classification or other desired tasks [6].
Figure 2. A sample illustration of a CNN architecture for segmentation of MRI-based images. The figure depicts the input layer (medical image), convolutional layers, pooling layers, fully connected layers, and the output layer (classification or segmentation results).
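To make the convolution and pooling mechanics of Section 2.1.1 concrete, the following NumPy sketch applies a single valid-mode convolution and a max-pooling step to a toy image. This is a deliberately simplified illustration: real CNN layers operate on multiple channels, add biases and nonlinearities, and learn their kernels rather than using the hand-picked edge detector shown here:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation: slide the kernel over the image and
    take a weighted sum at each position (what a CNN convolutional layer
    computes per channel, before bias and nonlinearity)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling: downsample each size x size block to its
    maximum, shrinking the spatial dimensions of the feature map."""
    h, w = fmap.shape[0] // size * size, fmap.shape[1] // size * size
    blocks = fmap[:h, :w].reshape(h // size, size, w // size, size)
    return blocks.max(axis=(1, 3))

# Toy 6x6 "image" with a bright right half, and a vertical-edge kernel:
# the convolution response peaks along the intensity edge.
image = np.zeros((6, 6))
image[:, 3:] = 1.0
kernel = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)
features = conv2d(image, kernel)   # 4x4 feature map
pooled = max_pool(features)        # 2x2 map after pooling
```

The pooled map keeps the strong edge response while quartering the number of spatial positions, which is exactly the dimensionality reduction attributed to pooling layers above.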
2.1.2. Architectures and Applications
Several CNN architectures have been proposed and widely adopted in medical imaging applications [64–66]. Some of the most notable architectures include LeNet [67], AlexNet [63], VGGNet [62], ResNet [53], and DenseNet [68]. These architectures have been applied to various medical imaging tasks, such as image segmentation, classification, detection, and registration [69,70].
2.1.3. Transfer Learning
Transfer learning is a popular approach in DL, where a pretrained model is fine-tuned for a new task or domain, leveraging the knowledge acquired during the initial training [71]. This technique is particularly useful in medical imaging [72–74], where annotated datasets are often limited in size [75]. By using pretrained models, researchers can take advantage of the general features learned by the model on a large dataset, such as ImageNet, and fine-tune it to perform well on a specific medical imaging task [76]. Transfer learning has been successfully applied in various medical imaging applications, including diagnosing diabetic retinopathy from retinal images, classifying skin cancer from dermoscopy images, and segmenting brain tumors from MRI scans [60,77–79].
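The freeze-the-backbone, retrain-the-head recipe behind transfer learning can be sketched as follows. Here the "pretrained backbone" is only a frozen random projection and the dataset is synthetic, so this illustrates the workflow rather than any real pretrained network:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a pretrained backbone: a frozen feature extractor whose weights
# are never updated. In practice this would be, e.g., an ImageNet-pretrained
# CNN with its convolutional layers held fixed.
W_frozen = rng.normal(size=(64, 16))

def extract_features(x):
    return np.maximum(x @ W_frozen, 0.0)   # frozen projection + ReLU

# Small labeled dataset (synthetic stand-in for a limited annotated set):
# two well-separated Gaussian classes in the 64-dimensional input space.
X = np.vstack([rng.normal(-1.0, 1.0, (100, 64)),
               rng.normal(+1.0, 1.0, (100, 64))])
y = np.array([0] * 100 + [1] * 100)

# "Fine-tuning" reduces to fitting only the new task head: a logistic
# regression trained by gradient descent on top of the frozen features.
F = extract_features(X)
w, b = np.zeros(16), 0.0
for _ in range(500):
    z = np.clip(F @ w + b, -30.0, 30.0)    # clip logits for numerical safety
    p = 1.0 / (1.0 + np.exp(-z))
    grad = p - y                           # log-loss gradient w.r.t. logits
    w -= 0.1 * F.T @ grad / len(y)
    b -= 0.1 * grad.mean()

acc = np.mean(((F @ w + b) > 0.0) == y)    # training accuracy of the new head
```

Because only `w` and `b` are trained, the number of fitted parameters stays tiny, which is why this strategy works when annotated medical datasets are small.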
Figure 4. A sample illustration of a GAN architecture for medical imaging. The figure shows the generator and discriminator networks, their respective inputs and outputs, and the interaction between the two networks during the training process.
2.4. Limitations and Challenges
Despite the successes and potential of deep learning techniques in medical imaging, several common limitations and challenges need to be addressed. These challenges span convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative adversarial networks (GANs). As shown in Table 1, one primary challenge is the lack of interpretability in deep learning models. CNNs, RNNs, and GANs often act as "black boxes," making it difficult to understand the underlying decision-making processes. This lack of interpretability hinders their adoption in clinical practice, where explainability is crucial.

Another challenge lies in the robustness and security of deep learning models. CNNs are susceptible to adversarial examples, which are carefully crafted inputs designed to deceive the model into making incorrect predictions. Adversarial attacks raise concerns about the reliability and trustworthiness of deep learning models in medical imaging applications. Furthermore, deep learning techniques, including CNNs, RNNs, and GANs, require large amounts of annotated data for training. Acquiring labeled medical imaging datasets can be time-consuming, expensive, and sometimes limited in size. Overcoming the challenge of data scarcity and finding efficient ways to leverage unlabeled data, such as unsupervised or semisupervised learning, is essential for the broader adoption of deep learning in medical imaging.

Additionally, both RNNs and GANs face specific challenges. RNNs suffer from the vanishing and exploding gradient problem when training deep networks, making it difficult to learn long-term dependencies in sequences. The computational complexity of RNNs is also a concern, especially when dealing with long sequences or large-scale datasets. For GANs, the mode collapse problem is a significant challenge, as it can lead to limited variety and suboptimal results in tasks such as data augmentation and image synthesis. Training GANs can be challenging due to unstable dynamics and convergence issues. Ensuring the quality and reliability of generated images is crucial for their safe and effective use in medical imaging applications. Addressing these limitations and challenges will enhance the interpretability, robustness, scalability, and applicability of deep learning techniques in medical imaging.
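The vanishing/exploding gradient behavior mentioned above can be demonstrated numerically. In a linear recurrence the backpropagated gradient is multiplied by the transposed recurrent matrix at every time step, so its norm behaves roughly like the matrix's spectral radius raised to the number of steps. The NumPy toy below omits the nonlinearity and inputs of a real RNN for clarity:

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN = 32

def gradient_norms_through_time(scale, steps=50):
    """Backpropagated gradient norms for the linear recurrence h_t = W h_{t-1}.
    Scaling the i.i.d. Gaussian matrix by scale/sqrt(HIDDEN) puts its spectral
    radius near `scale`, so the gradient norm shrinks toward zero when
    scale < 1 (vanishing) and blows up when scale > 1 (exploding)."""
    W = rng.normal(size=(HIDDEN, HIDDEN)) * scale / np.sqrt(HIDDEN)
    grad = np.ones(HIDDEN) / np.sqrt(HIDDEN)   # unit gradient at the last step
    norms = []
    for _ in range(steps):
        grad = W.T @ grad                      # one step of backprop through time
        norms.append(float(np.linalg.norm(grad)))
    return norms

vanishing = gradient_norms_through_time(scale=0.5)   # spectral radius ~ 0.5
exploding = gradient_norms_through_time(scale=2.0)   # spectral radius ~ 2.0
```

After 50 steps the two runs differ by many orders of magnitude, which is why gated architectures (LSTM/GRU) and gradient clipping are used in practice.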
Figure 6. Schematic of DL-based classification for mammography images.
3.2.2. Challenges and Future Directions
Key challenges in DL-based image classification include the limited availability of labeled data, class imbalance, and the need for model interpretability. Future research may focus on leveraging unsupervised or semisupervised learning techniques [116], data augmentation strategies [117], and advanced regularization techniques [118] to overcome these challenges. Moreover, developing methods to provide meaningful explanations for model predictions and incorporating domain knowledge into DL models may enhance their clinical utility [119].
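A minimal example of the augmentation strategies mentioned above, using simple label-preserving NumPy transforms on a synthetic image. This is illustrative only; which transforms are safe is task-dependent (e.g., left/right flips can be inappropriate for laterality-sensitive anatomy):

```python
import numpy as np

def augment(image, rng):
    """Random horizontal flip, random 90-degree rotation, and mild additive
    Gaussian noise: simple label-preserving transforms often used to stretch
    small training sets."""
    if rng.random() < 0.5:
        image = np.fliplr(image)
    image = np.rot90(image, k=int(rng.integers(0, 4)))
    return image + rng.normal(0.0, 0.01, size=image.shape)

rng = np.random.default_rng(7)
scan = rng.random((64, 64))    # stand-in for a grayscale image slice
# Eight augmented views of the same slice, as might be fed to a classifier.
batch = np.stack([augment(scan, rng) for _ in range(8)])
```

Each pass through the training data then sees a slightly different version of every image, which acts as a regularizer against overfitting on small datasets.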
3.3. Image Reconstruction
3.3.1. Techniques and Approaches
Image reconstruction is a fundamental step in many medical imaging modalities, such as CT, MRI, and PET, where raw data (e.g., projections, k-space data) are transformed into interpretable images [120]. DL has shown potential in improving image reconstruction quality and reducing reconstruction time [121]. CNNs have been used for image denoising, super-resolution, and artifact reduction in various imaging modalities [122,123]. Additionally, DL-based iterative reconstruction techniques [124] and the integration of DL models with conventional reconstruction algorithms [125] have been proposed to optimize image quality while reducing radiation dose or acquisition time. Figure 7 presents the application of GAN-based PET image reconstruction.
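For fully sampled Cartesian MRI, the classical reconstruction is simply an inverse 2-D Fourier transform of the k-space data. The NumPy round trip below shows this on a toy phantom, plus a crude undersampling step that produces the kind of artifact DL-based reconstruction methods learn to remove (synthetic data only, not a clinical pipeline):

```python
import numpy as np

# MRI raw data live in k-space (the 2-D spatial-frequency domain); for a fully
# sampled Cartesian acquisition, reconstruction is an inverse 2-D FFT.
image = np.zeros((32, 32))
image[8:24, 8:24] = 1.0                                   # square "phantom"

kspace = np.fft.fftshift(np.fft.fft2(image))              # image -> centered k-space
recon = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))    # k-space -> image (exact)

# Crude undersampling: keep only the central (low-frequency) rows. Discarding
# high frequencies blurs edges and introduces ringing, the kind of artifact
# that learned reconstruction models are trained to suppress.
mask = np.zeros((32, 32))
mask[12:20, :] = 1.0
recon_under = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace * mask)))
```

Undersampling is attractive because it shortens acquisition time; the price is that the plain inverse transform becomes ill-posed, which is exactly where the DL-based and hybrid reconstruction methods cited above enter.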
Figure 8. Schematic of medical image registration.
3.4. Image Registration
3.4.1. Techniques and Approaches
Image registration is the process of aligning two or more images, often acquired from different modalities or at different time points, to facilitate comparison and analysis [129]. DL has been increasingly applied to image registration tasks, with CNNs and spatial transformer networks (STNs) being the most commonly used architectures [130]. Supervised learning approaches, such as using ground-truth deformation fields or similarity metrics as labels, have been employed to train deep registration models [131]. Moreover, unsupervised learning techniques, which do not require ground-truth correspondences, have been proposed to overcome the challenges of obtaining labeled data for registration tasks [132].

4. Deep Learning for Specific Medical Imaging Modalities
Medical imaging modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), ultrasound, and optical coherence tomography (OCT), have unique characteristics and generate different types of images. Therefore, DL techniques need to be tailored to each modality to achieve optimal performance. In this section, we will discuss the current state-of-the-art DL techniques and applications for each modality, as well as the challenges and future directions.

Before diving into the application of DL in specific imaging modalities, it is important to clarify the focus of this section. The intention is to discuss how DL is applied in the analysis of images generated by these different modalities, such as MRI, CT, PET, ultrasound imaging, and OCT, rather than its application in the process of image acquisition. Specifically, the discussion will center around how DL has been utilized to extract meaningful insights from these images, for example, through tasks such as segmentation, classification, detection, and prediction. This includes the ability to identify and classify pathologies, measure anatomical structures, and even predict treatment outcomes.
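As a classical (non-DL) baseline for the translation-only registration case, phase correlation recovers an integer shift between two images from the phase of their cross-power spectrum; deep registration models learn to predict such transforms, or dense deformation fields, directly from image pairs. A NumPy sketch with a synthetic pair:

```python
import numpy as np

def estimate_shift(fixed, moving):
    """Phase correlation: estimate the integer (dy, dx) translation that
    aligns `moving` to `fixed`. Normalizing the cross-power spectrum to unit
    magnitude turns the correlation surface into a sharp peak at the shift."""
    F = np.fft.fft2(fixed)
    M = np.fft.fft2(moving)
    cross_power = F * np.conj(M)
    cross_power /= np.abs(cross_power) + 1e-12
    corr = np.abs(np.fft.ifft2(cross_power))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices above the midpoint around to negative shifts.
    if dy > fixed.shape[0] // 2:
        dy -= fixed.shape[0]
    if dx > fixed.shape[1] // 2:
        dx -= fixed.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(3)
fixed = rng.random((64, 64))
# Known misalignment: content shifted up 5 rows and right 3 columns.
moving = np.roll(fixed, shift=(-5, 3), axis=(0, 1))
dy, dx = estimate_shift(fixed, moving)   # shift that re-aligns `moving`
```

The estimated shift can then be applied (here with `np.roll`) to bring the moving image back into the fixed image's frame; real registration additionally handles rotation, scaling, and non-rigid deformation.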
Figure 11. Example of PET image segmentation using a DL-based method.
Figure 12. Example of fetal head detection in ultrasound images using convolutional neural networks.
4.5.1. DL Techniques and Applications to OCT
CNNs have been widely used in OCT image analysis tasks. For example, a fully convolutional network (FCN) has been used for segmentation of retinal layers in OCT images [156]. Moreover, DL techniques have been applied to OCT angiography for vessel segmentation and centerline extraction [6]. Additionally, RNNs have been used for tracking the movement of retinal layers in OCT videos [44].
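Segmentation outputs such as the retinal-layer masks above are typically scored with the Dice similarity coefficient. A minimal NumPy implementation on toy masks (a hypothetical example, unrelated to the cited studies):

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|). The standard overlap metric for evaluating
    segmentations; 1.0 means perfect overlap, 0.0 means none."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Toy masks: ground truth is a 10x10 square; the prediction is the same
# square shifted down by 2 pixels, so 8 of its 10 rows overlap.
target = np.zeros((32, 32), dtype=int)
target[10:20, 10:20] = 1
pred = np.zeros((32, 32), dtype=int)
pred[12:22, 10:20] = 1
score = dice(pred, target)
```

With 80 overlapping pixels out of 100 in each mask, the score is 2·80/(100+100) = 0.8; the small `eps` term only guards against division by zero when both masks are empty.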
careful consideration of various factors, such as the availability and accessibility of data, the
quality and relevance of predictions, and the impact on clinical decision-making. Several
studies have proposed various methods for integrating DL into clinical workflows, such as
decision support systems, clinical decision rules, and workflow optimization [172]. These
methods can help to streamline the use of DL in clinical settings and improve the efficiency
and effectiveness of clinical decision-making.
7. Conclusions
In this review article, we provided a comprehensive analysis of DL techniques and
their applications in the field of medical imaging. We discussed the impact of DL on disease
diagnosis and treatment and how it has transformed the medical imaging landscape.
Furthermore, we reviewed the most recent DL techniques, such as CNNs, RNNs, and
GANs, and their applications in medical imaging.
We explored the application of DL in various medical imaging modalities, including
MRI, CT, PET, ultrasound imaging, and OCT. We also discussed the evaluation metrics and
benchmarks used to assess the performance of DL algorithms in medical imaging, as well
as the ethical considerations and future perspectives of the field.
Author Contributions: Conceptualization, H.Z. and Y.Q.; methodology, H.Z. and Y.Q.; software,
H.Z. and Y.Q.; validation, H.Z. and Y.Q.; formal analysis, H.Z. and Y.Q.; investigation, H.Z. and Y.Q.;
resources, H.Z. and Y.Q.; data curation, H.Z. and Y.Q.; writing—original draft preparation, H.Z. and
Y.Q.; writing—review and editing, H.Z. and Y.Q.; visualization, H.Z. and Y.Q.; supervision, H.Z.;
project administration, H.Z. and Y.Q.; funding acquisition, H.Z. All authors have read and agreed to
the published version of the manuscript.
Funding: This work was funded in part by the National Natural Science Foundation of China under Grants 62127812 and 61971335.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement: Data are available upon request by email to the corresponding author.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Ayache, N. Medical Imaging in the Age of Artificial Intelligence. Healthc. Artif. Intell. 2020, 89–91.
2. Wang, W.; Liang, D.; Chen, Q.; Iwamoto, Y.; Han, X.H.; Zhang, Q.; Hu, H.; Lin, L.; Chen, Y.W. Medical image classification using
deep learning. Deep. Learn. Healthc. Paradig. Appl. 2020, 33–51.
3. Fourcade, A.; Khonsari, R.H. Deep learning in medical image analysis: A third eye for doctors. J. Stomatol. Oral Maxillofac. Surg.
2019, 120, 279–288. [CrossRef] [PubMed]
4. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [CrossRef] [PubMed]
5. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016.
6. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1097–1105.
7. Yao, H.M.; Sha, W.E.I.; Jiang, L. Two-Step Enhanced Deep Learning Approach for Electromagnetic Inverse Scattering Problems.
IEEE Antennas Wirel. Propag. Lett. 2019, 18, 2254–2258. [CrossRef]
8. Yao, H.M.; Jiang, L.; Wei, E.I. Enhanced Deep Learning Approach Based on the Deep Convolutional Encoder-Decoder Architecture for Electromagnetic Inverse Scattering Problems. IEEE Antennas Wirel. Propag. Lett. 2020, 19, 1211–1215. [CrossRef]
9. Guo, R.; Li, C.; Chen, X.; Yang, J.; Zhang, B.; Cheng, Y. Joint inversion of audio-magnetotelluric and seismic travel time data with
deep learning constraint. IEEE Trans. Geosci. Remote Sens. 2020, 59, 7982–7995. [CrossRef]
10. Yao, H.M.; Guo, R.; Li, M.; Jiang, L.; Ng, M.K.P. Enhanced Supervised Descent Learning Technique for Electromagnetic Inverse
Scattering Problems by the Deep Convolutional Neural Networks. IEEE Trans. Antennas Propag. 2022, 70, 6195–6206. [CrossRef]
11. Yao, H.M.; Jiang, L. Enhanced PML Based on the Long Short Term Memory Network for the FDTD Method. IEEE Access 2020,
8, 21028–21035. [CrossRef]
12. Yao, H.M.; Jiang, L.; Ng, M. Implementing the Fast Full-Wave Electromagnetic Forward Solver Using the Deep Convolutional
Encoder-Decoder Architecture. IEEE Trans. Antennas Propag. 2022, 71, 1152–1157. [CrossRef]
13. Zhang, H.H.; Yao, H.M.; Jiang, L.; Ng, M. Solving Electromagnetic Inverse Scattering Problems in Inhomogeneous Media by
Deep Convolutional Encoder–Decoder Structure. IEEE Trans. Antennas Propag. 2023, 71, 2867–2872. [CrossRef]
14. Zhang, H.H.; Yao, H.M.; Jiang, L.; Ng, M. Enhanced Two-Step Deep-Learning Approach for Electromagnetic-Inverse-Scattering
Problems: Frequency Extrapolation and Scatterer Reconstruction. IEEE Trans. Antennas Propag. 2022, 71, 1662–1672. [CrossRef]
15. Zhang, H.H.; Li, J.; Yao, H.M. Fast Full Wave Electromagnetic Forward Solver Based on Deep Conditional Convolutional
Autoencoders. IEEE Antennas Wirel. Propag. Lett. 2022, 22, 779–783. [CrossRef]
16. Zhang, H.H.; Li, J.; Yao, H.M. Deep Long Short-Term Memory Networks-Based Solving Method for the FDTD Method: 2-D Case.
IEEE Microw. Wirel. Technol. Lett. 2023, 33, 499–502. [CrossRef]
17. Yao, H.M.; Jiang, L. Machine-Learning-Based PML for the FDTD Method. IEEE Antennas Wirel. Propag. Lett. 2018, 18, 192–196.
[CrossRef]
18. Yao, H.; Zhang, L.; Yang, H.; Li, M.; Zhang, B. Snow Parameters Inversion from Passive Microwave Remote Sensing Measurements
by Deep Convolutional Neural Networks. Sensors 2022, 22, 4769. [CrossRef] [PubMed]
19. Yao, H.M.; Sha, W.E.I.; Jiang, L.J. Applying Convolutional Neural Networks for The Source Reconstruction. Prog. Electromagn.
Res. M 2018, 76, 91–99. [CrossRef]
20. Yao, H.M.; Li, M.; Jiang, L. Applying Deep Learning Approach to the Far-Field Subwavelength Imaging Based on Near-Field
Resonant Metalens at Microwave Frequencies. IEEE Access 2019, 7, 63801–63808. [CrossRef]
21. Zhang, H.H.; Jiang, L.; Yao, H.M. Embedding the behavior macromodel into TDIE for transient field-circuit simulations. IEEE
Trans. Antennas Propag. 2016, 64, 3233–3238. [CrossRef]
22. Zhang, H.H.; Jiang, L.J.; Yao, H.M.; Zhang, Y. Transient Heterogeneous Electromagnetic Simulation with DGTD and Behavioral
Macromodel. IEEE Trans. Electromagn. Compat. 2017, 59, 1152–1160. [CrossRef]
23. Xiao, B.; Yao, H.; Li, M.; Hong, J.S.; Yeung, K.L. Flexible Wideband Microstrip-Slotline-Microstrip Power Divider and Its
Application to Antenna Array. IEEE Access 2019, 7, 143973–143979. [CrossRef]
24. Li, M.; Wang, R.; Yao, H.; Wang, B. A Low-Profile Wideband CP End-Fire Magnetoelectric Antenna Using Dual-Mode Resonances.
IEEE Trans. Antennas Propag. 2019, 67, 4445–4452. [CrossRef]
25. Yao, H.M.; Jiang, L.; Zhang, H.H.; Sha, W.E.I. Machine learning methodology review for computational electromagnetics. In
Proceedings of the 2019 International Applied Computational Electromagnetics Society Symposium-China (ACES), Washington,
DC, USA, 10–13 October 2019; Volume 1.
26. Guo, R.; Li, M.; Yang, F.; Yao, H.; Jiang, L.; Ng, M.; Abubakar, A. Joint 2D inversion of AMT and seismic traveltime data with
deep learning constraint. In Proceedings of the SEG International Exposition and Annual Meeting, Virtual, 11–16 October 2020.
[CrossRef]
27. Yao, H.M.; Jiang, L.J.; Qin, Y.W. Machine learning based method of moments (ML-MoM). In Proceedings of the 2017 IEEE
International Symposium on Antennas and Propagation & USNC/URSI National Radio Science Meeting, San Diego, CA, USA,
9–14 July 2017.
28. Yao, H.M.; Qin, Y.W.; Jiang, L.J. Machine learning based MoM (ML-MoM) for parasitic capacitance extractions. In Proceedings of
the 2016 IEEE Electrical Design of Advanced Packaging and Systems (EDAPS), Honolulu, HI, USA, 14–16 December 2016.
29. Yao, H.M.; Jiang, L.J. Machine learning based neural network solving methods for the FDTD method. In Proceedings of the 2018
IEEE International Symposium on Antennas and Propagation & USNC/URSI National Radio Science Meeting, Boston, MA, USA,
8–13 July 2018.
30. Jiang, L.; Yao, H.; Zhang, H.; Qin, Y. Machine Learning Based Computational Electromagnetic Analysis for Electromagnetic
Compatibility. In Proceedings of the 2018 IEEE International Conference on Computational Electromagnetics (ICCEM), Chengdu,
China, 26–28 March 2018.
31. Yao, H.M.; Jiang, L.J.; Sha, W.E.I. Source Reconstruction Method based on Machine Learning Algorithms. In Proceedings of the
2019 Joint International Symposium on Electromagnetic Compatibility, Sapporo and Asia-Pacific International Symposium on
Electromagnetic Compatibility (EMC Sapporo/APEMC), Sapporo, Japan, 3–7 June 2019.
32. Zhang, H.H.; Yao, H.M.; Jiang, L.J. Novel time domain integral equation method hybridized with the macromodels of circuits.
In Proceedings of the 2015 IEEE 24th Electrical Performance of Electronic Packaging and Systems (EPEPS), San Jose, CA, USA,
25–28 October 2015.
33. Zhang, H.H.; Jiang, L.J.; Yao, H.M.; Zhang, Y. Coupling DGTD and behavioral macromodel for transient heterogeneous
electromagnetic simulations. In Proceedings of the 2016 IEEE International Symposium on Electromagnetic Compatibility (EMC),
Ottawa, ON, Canada, 25–29 July 2016.
34. Zhang, H.H.; Jiang, L.J.; Yao, H.M.; Zhao, X.W.; Zhang, Y. Hybrid field-circuit simulation by coupling DGTD with behavioral
macromodel. In Proceedings of the 2016 Progress in Electromagnetic Research Symposium (PIERS), Shanghai, China,
8–11 August 2016.
35. Yao, H.; Hsieh, Y.-P.; Kong, J.; Hofmann, M. Modelling electrical conduction in nanostructure assemblies through complex
networks. Nat. Mater. 2020, 19, 745–751. [CrossRef] [PubMed]
36. Yao, H.; Hempel, M.; Hsieh, Y.-P.; Kong, J.; Hofmann, M. Characterizing percolative materials by straining. Nanoscale 2018,
11, 1074–1079. [CrossRef] [PubMed]
37. Guo, S.; Fu, J.; Zhang, P.; Zhu, C.; Yao, H.; Xu, M.; An, B.; Wang, X.; Tang, B.; Deng, Y.; et al. Direct growth of single-metal-atom
chains. Nat. Synth. 2022, 1, 245–253. [CrossRef]
38. Liu, H.; Yao, H.; Feng, L. A nanometer-resolution displacement measurement system based on laser feedback interferometry.
In Proceedings of the 8th Annual IEEE International Conference on Nano/Micro Engineered and Molecular Systems, Xiamen,
China, 7–10 April 2013.
39. Liu, H.L.; Yao, H.M.; Meng, Z.K.; Feng, L.S. Simulation and Error Analysis of a Laser Feedback Interference System Based on
Phase-freezing Technology. Lasers Eng. 2014, 29, 259–270.
40. Chen, D.-R.; Hofmann, M.; Yao, H.-M.; Chiu, S.-K.; Chen, S.-H.; Luo, Y.-R.; Hsu, C.-C.; Hsieh, Y.-P. Lateral Two-Dimensional
Material Heterojunction Photodetectors with Ultrahigh Speed and Detectivity. ACS Appl. Mater. Interfaces 2019, 11, 6384–6388.
[CrossRef]
41. Chen, Z.; Ming, T.; Goulamaly, M.M.; Yao, H.; Nezich, D.; Hempel, M.; Hofmann, M.; Kong, J. Enhancing the Sensitivity of
Percolative Graphene Films for Flexible and Transparent Pressure Sensor Arrays. Adv. Funct. Mater. 2016, 26, 5061–5067.
[CrossRef]
42. Yao, H.M.; Li, M.; Jiang, L.; Ng, M. Antenna Array Diagnosis by Using Deep Learning Approach. IEEE Trans. Antennas
Propag. 2023. early access.
43. Yao, H.M.; Jiang, L.; Ng, M. Enhanced Deep Learning Approach Based on the Conditional Generative Adversarial Network for
Electromagnetic Inverse Scattering Problems. IEEE Trans. Antennas Propag. 2023. early access.
44. Shen, D.; Wu, G.; Suk, H.-I. Deep learning in medical image analysis. Annu. Rev. Biomed. Eng. 2017, 19, 221–248. [CrossRef]
[PubMed]
45. Litjens, G.; Kooi, T.; Bejnordi, B.E.; Setio, A.A.A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.A.W.M.; van Ginneken, B.; Sánchez,
C.I. A survey on deep learning in medical image analysis. Med. Image Anal. 2017, 42, 60–88. [CrossRef] [PubMed]
46. Anaya-Isaza, A.; Mera-Jiménez, L.; Zequera-Diaz, M. An overview of deep learning in medical imaging. Inform. Med. Unlocked
2021, 26, 100723. [CrossRef]
47. Ghaderzadeh, M.; Asadi, F. Deep Learning in the Detection and Diagnosis of COVID-19 Using Radiology Modalities: A
Systematic Review. J. Healthc. Eng. 2021, 2021, 6677314. [CrossRef] [PubMed]
48. Ghaderzadeh, M.; Asadi, F.; Jafari, R.; Bashash, D.; Abolghasemi, H.; Aria, M. Deep Convolutional Neural Network–Based
Computer-Aided Detection System for COVID-19 Using Multiple Lung Scans: Design and Implementation Study. J. Med. Internet
Res. 2021, 23, e27468. [CrossRef] [PubMed]
49. Ghaderzadeh, M.; Aria, M.; Hosseini, A.; Asadi, F.; Bashash, D.; Abolghasemi, H. A fast and efficient CNN model for B-ALL
diagnosis and its subtypes classification using peripheral blood smear images. Int. J. Intell. Syst. 2021, 37, 5113–5133. [CrossRef]
50. Ghaderzadeh, M.; Aria, M.; Asadi, F. X-Ray Equipped with Artificial Intelligence: Changing the COVID-19 Diagnostic Paradigm
During the Pandemic. BioMed Res. Int. 2021, 2021, 9942873. [CrossRef]
51. Ghaderzadeh, M.; Aria, M. Management of COVID-19 Detection Using Artificial Intelligence in 2020 Pandemic. In Proceedings of
the 5th International Conference on Medical and Health Informatics, Kyoto, Japan, 14–16 May 2021.
52. Gheisari, M.; Ebrahimzadeh, F.; Rahimi, M.; Moazzamigodarzi, M.; Liu, Y.; Pramanik, P.K.D.; Heravi, M.A.; Mehbodniya,
A.; Ghaderzadeh, M.; Feylizadeh, M.R.; et al. Deep learning: Applications, architectures, models, tools, and frameworks: A
comprehensive survey. CAAI Trans. Intell. Technol. 2023. early view. [CrossRef]
53. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
54. Shah, S.A.A.; Tahir, S.A.; Aksam Iftikhar, M. A comprehensive survey on deep learning-based approaches for medical image
analysis. Comput. Electr. Eng. 2021, 90, 106954.
55. Radford, A.; Metz, L.; Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks.
arXiv 2015, arXiv:1511.06434.
56. Anholt, W.J.H.V.; Dankelman, J.W.; Wauben, L.S.G.L. An overview of medical imaging modalities: The role of imaging physics in
medical education. Eur. J. Phys. Educ. 2020, 11, 12–28.
57. American Cancer Society. Imaging (Radiology) Tests. 2018. Available online: https://www.cancer.org/treatment/
understanding-your-diagnosis/tests/imaging-radiology-tests-for-cancer.html (accessed on 23 May 2023).
58. Simpson, A.L.; Antonelli, M.; Bakas, S.; Bilello, M.; Farahani, K.; Van Ginneken, B.; Kopp-Schneider, A.; Landman, B.A.; Litjens,
G.; Menze, B.; et al. A large annotated medical image dataset for the development and evaluation of segmentation algorithms.
arXiv 2019, arXiv:1902.09063.
59. Sweeney, G.J. Big data, big problems: Emerging issues in the ethics of data science and journalism. J. Mass Media Ethics 2014,
29, 38–51.
60. Lipton, Z.C. The Mythos of Model Interpretability. Queue 2018, 16, 31–57. [CrossRef]
61. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual explanations from deep networks
via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29
October 2017; pp. 618–626.
62. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
63. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Commun. ACM 2017,
60, 84–90. [CrossRef]
64. Luo, X.; Hu, M.; Song, T.; Wang, G.; Zhang, S. Semi-supervised medical image segmentation via cross teaching between CNN
and transformer. In Proceedings of the International Conference on Medical Imaging with Deep Learning, PMLR, Durham, NC,
USA, 5–6 August 2022.
65. Tiwari, P.; Pant, B.; Elarabawy, M.M.; Abd-Elnaby, M.; Mohd, N.; Dhiman, G.; Sharma, S. CNN Based Multiclass Brain Tumor
Detection Using Medical Imaging. Comput. Intell. Neurosci. 2022, 2022, 1830010. [CrossRef] [PubMed]
66. Srikantamurthy, M.M.; Rallabandi, V.P.; Dudekula, D.B.; Natarajan, S.; Park, J. Classification of benign and malignant subtypes
of breast cancer histopathology imaging using hybrid CNN-LSTM based transfer learning. BMC Med. Imaging 2023, 23, 19.
[CrossRef]
67. Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998,
86, 2278–2324. [CrossRef]
68. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
69. Greenspan, H.; van Ginneken, B.; Summers, R.M. Guest Editorial Deep Learning in Medical Imaging: Overview and Future
Promise of an Exciting New Technique. IEEE Trans. Med. Imaging 2016, 35, 1153–1159. [CrossRef]
70. Chen, L.-C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. Deeplab: Semantic image segmentation with deep
convolutional nets, atrous convolution, and fully connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 834–848. [CrossRef]
[PubMed]
71. Pan, S.J.; Yang, Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 2010, 22, 1345–1359. [CrossRef]
72. Awad, M.M. Evaluation of COVID-19 Reported Statistical Data Using Cooperative Convolutional Neural Network Model
(CCNN). COVID 2022, 2, 674–690. [CrossRef]
73. Li, Z.; Zhang, H.; Li, Z.; Ren, Z. Residual-Attention UNet++: A Nested Residual-Attention U-Net for Medical Image Segmentation.
Appl. Sci. 2022, 12, 7149. [CrossRef]
74. Safarov, S.; Whangbo, T.K. A-DenseUNet: Adaptive Densely Connected UNet for Polyp Segmentation in Colonoscopy Images
with Atrous Convolution. Sensors 2021, 21, 1441. [CrossRef] [PubMed]
75. Khan, S.; Rahmani, H.; Shah, S.A.A.; Bennamoun, M. A Guide to Convolutional Neural Networks for Computer Vision; Springer:
Berlin/Heidelberg, Germany, 2018. [CrossRef]
76. Tajbakhsh, N.; Shin, J.Y.; Gurudu, S.R.; Hurst, R.T.; Kendall, C.B.; Gotway, M.B.; Liang, J. Convolutional neural networks for
medical image analysis: Full training or fine tuning? IEEE Trans. Med. Imaging 2016, 35, 1299–1312. [CrossRef]
77. Esteva, A.; Kuprel, B.; Novoa, R.A.; Ko, J.; Swetter, S.M.; Blau, H.M.; Thrun, S. Dermatologist-level classification of skin cancer
with deep neural networks. Nature 2017, 542, 115–118. [CrossRef]
78. Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and harnessing adversarial examples. arXiv 2014, arXiv:1412.6572.
79. Zikic, D.; Glocker, B.; Konukoglu, E.; Criminisi, A.; Demiralp, C.; Shotton, J.; Thomas, O.M.; Das, T.; Jena, R.; Price, S.J. Decision
Forests for Tissue-Specific Segmentation of High-Grade Gliomas in Multi-channel MR. In Proceedings of the MICCAI 2012, Nice,
France, 1–5 October 2012; Volume 15, pp. 369–376. [CrossRef]
80. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [CrossRef]
81. Cho, K.; Van Merrienboer, B.; Bahdanau, D.; Bengio, Y. On the properties of neural machine translation: Encoder-decoder
approaches. arXiv 2014, arXiv:1409.1259. [CrossRef]
82. Donahue, J.; Hendricks, L.A.; Guadarrama, S.; Rohrbach, M.; Venugopalan, S.; Saenko, K.; Darrell, T. Long-term Recurrent
Convolutional Networks for Visual Recognition and Description. In Proceedings of the IEEE Conference on Computer Vision and
Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014. [CrossRef]
83. Sridhar, C.; Pareek, P.K.; Kalidoss, R.; Jamal, S.S.; Shukla, P.K.; Nuagah, S.J. Optimal Medical Image Size Reduction Model
Creation Using Recurrent Neural Network and GenPSOWVQ. J. Healthc. Eng. 2022, 2022, 1–8. [CrossRef] [PubMed]
84. Chen, E.Z.; Wang, P.; Chen, X.; Chen, T.; Sun, S. Pyramid Convolutional RNN for MRI Image Reconstruction. IEEE Trans. Med.
Imaging 2022, 41, 2033–2047. [CrossRef] [PubMed]
85. Suganyadevi, S.; Seethalakshmi, V.; Balasamy, K. A review on deep learning in medical image analysis. Int. J. Multimed. Inf. Retr.
2022, 11, 19–38. [CrossRef] [PubMed]
86. Setio, A.A.; Ciompi, F.; Litjens, G.; Gerke, P.; Jacobs, C.; van Riel, S.J.; Wille, M.M.; Naqibullah, M.; Sanchez, C.I.; van Ginneken, B.
Pulmonary Nodule Detection in CT Images: False Positive Reduction Using Multi-View Convolutional Networks. IEEE Trans.
Med. Imaging 2016, 35, 1160–1169. [CrossRef] [PubMed]
87. Yang, K.; Mohammed, E.A.; Far, B.H. Detection of Alzheimer’s Disease Using Graph-Regularized Convolutional Neural Network
Based on Structural Similarity Learning of Brain Magnetic Resonance Images. In Proceedings of the 2021 IEEE 22nd International
Conference on Information Reuse and Integration for Data Science (IRI), Las Vegas, NV, USA, 10–12 August 2021; pp. 326–333.
88. Wang, X.; Peng, Y.; Lu, L.; Lu, Z.; Bagheri, M.; Summers, R.M. ChestX-ray8: Hospital-scale chest x-ray database and benchmarks
on weakly-supervised classification and localization of common thorax diseases. In Proceedings of the IEEE Conference on
Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2097–2106.
89. Li, Z.; Wang, C.; Han, M.; Xue, Y.; Wei, W.; Li, L.J.; Fei-Fei, L. Thoracic disease identification and localization with limited
supervision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23
June 2018; pp. 8290–8299.
90. Padoy, N. Towards automatic recognition of surgical activities. In Proceedings of the International Conference on Medical Image
Computing and Computer-Assisted Intervention, Nice, France, 1–5 October 2012; pp. 267–274.
91. Mutter, V.; Gangi, A.; Rekik, M.A. A survey of deep learning techniques for medical image segmentation. In Deep Learning and
Convolutional Neural Networks for Medical Imaging and Clinical Informatics; Springer: Berlin/Heidelberg, Germany, 2019; pp. 21–45.
92. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial
nets. In Advances in Neural Information Processing Systems; Springer: Berlin/Heidelberg, Germany, 2014; pp. 2672–2680.
93. Zhu, J.-Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In
Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2223–2232.
94. Schlegl, T.; Seeböck, P.; Waldstein, S.M.; Schmidt-Erfurth, U.; Langs, G. Unsupervised Anomaly Detection with Generative
Adversarial Networks to Guide Marker Discovery. In Proceedings of the International Conference on Information Processing in
Medical Imaging, Boone, NC, USA, 25–30 June 2017; pp. 146–157.
95. Frid-Adar, M.; Diamant, I.; Klang, E.; Amitai, M.; Goldberger, J.; Greenspan, H. GAN-based synthetic medical image augmentation
for increased CNN performance in liver lesion classification. Neurocomputing 2018, 321, 321–331. [CrossRef]
96. Han, Y. MR-based synthetic CT generation using a deep convolutional neural network method. Med. Phys. 2017, 44, 1408–1419.
[CrossRef]
97. Yi, X.; Walia, E.; Babyn, P. Generative adversarial network in medical imaging: A review. Med. Image Anal. 2019, 58, 101552.
[CrossRef]
98. Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.P.; Tejani, A.; Totz, J.; Wang, Z.; et al.
Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. In Proceedings of the 2017 IEEE
Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017.
99. Chen, Y.; Xie, Y.; Zhou, Z.; Shi, F.; Christodoulou, A.G.; Li, D. Brain MRI super resolution using 3D deep densely connected neural
networks. In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC,
USA, 4–7 April 2018; pp. 739–742.
100. Choi, Y.; Choi, M.; Kim, M.; Ha, J.W.; Kim, S.; Choo, J. Stargan: Unified generative adversarial networks for multi-domain
image-to-image translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City,
UT, USA, 18–23 June 2018; pp. 8789–8797.
101. Guan, Q.; Chen, Y.; Wei, Z.; Heidari, A.A.; Hu, H.; Yang, X.-H.; Zheng, J.; Zhou, Q.; Chen, H.; Chen, F. Medical image augmentation
for lesion detection using a texture-constrained multichannel progressive GAN. Comput. Biol. Med. 2022, 145. [CrossRef]
102. Jeong, J.J.; Tariq, A.; Adejumo, T.; Trivedi, H.; Gichoya, J.W.; Banerjee, I. Systematic Review of Generative Adversarial Networks
(GANs) for Medical Image Classification and Segmentation. J. Digit. Imaging 2022, 35, 137–152. [CrossRef]
103. Cackowski, S.; Barbier, E.L.; Dojat, M.; Christen, T. ImUnity: A generalizable VAE-GAN solution for multicenter MR image
harmonization. Med. Image Anal. 2023, in press. [CrossRef] [PubMed]
104. Wolterink, J.M.; Leiner, T.; Viergever, M.A.; Isgum, I. Generative Adversarial Networks for Noise Reduction in Low-Dose CT.
IEEE Trans. Med. Imaging 2017, 36, 2536–2545. [CrossRef] [PubMed]
105. Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; Fergus, R. Intriguing properties of neural networks.
arXiv 2013, arXiv:1312.6199.
106. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference
on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
107. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift.
arXiv 2015, arXiv:1502.03167.
108. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
109. Chen, L.C.; Papandreou, G.; Schroff, F.; Adam, H. Rethinking atrous convolution for semantic image segmentation.
arXiv 2017, arXiv:1706.05587.
110. Oktay, O.; Schlemper, J.; Folgoc, L.L.; Lee, M.; Heinrich, M.; Misawa, K.; Mori, K.; McDonagh, S.; Hammerla, N.Y.; Kainz, B.; et al.
Attention u-net: Learning where to look for the pancreas. arXiv 2018, arXiv:1804.03999.
111. Çiçek, Ö.; Abdulkadir, A.; Lienkamp, S.S.; Brox, T.; Ronneberger, O. 3D U-Net: Learning dense volumetric segmentation
from sparse annotation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted
Intervention, Athens, Greece, 17–21 October 2016; Springer: Cham, Switzerland, 2016; pp. 424–432.
112. Hosny, A.; Parmar, C.; Quackenbush, J.; Schwartz, L.H.; Aerts, H.J. Artificial intelligence in radiology. Nat. Rev. Cancer 2018,
18, 500–510. [CrossRef]
113. Luo, Y.; Xu, M.; Zhang, J. A review of transfer learning for deep learning in medical image analysis. J. Med. Imaging Health Inform.
2021, 11, 279–288.
114. Gulshan, V.; Peng, L.; Coram, M.; Stumpe, M.C.; Wu, D.; Narayanaswamy, A.; Venugopalan, S.; Widner, K.; Madams, T.; Cuadros,
J.; et al. Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus
Photographs. JAMA 2016, 316, 2402–2410. [CrossRef]
115. Zhang, Z.; Chen, P.; Sapkota, M.; Yang, L. Pathological brain detection based on AlexNet and transfer learning. J. Comput. Sci.
2017, 24, 168–174.
116. Jin, C.; Chen, C.; Feng, X. A review of deep learning in medical image reconstruction. J. Healthc. Eng. 2019, 2019, 1–14.
117. Shorten, C.; Khoshgoftaar, T.M. A survey on Image Data Augmentation for Deep Learning. J. Big Data 2019, 6, 60. [CrossRef]
118. Chen, H.; Zhang, Y.; Zhang, W.; Liao, P.; Li, K.; Zhou, J.; Wang, G. A Low-dose CT via convolutional neural network. Biomed. Opt.
Express 2017, 8, 679–694. [CrossRef] [PubMed]
119. Han, Y.; Yoo, J.; Kim, H.H.; Shin, H.J.; Sung, K.; Ye, J.C. Deep learning with domain adaptation for accelerated projection-
reconstruction MR. Magn. Reson. Med. 2018, 80, 1189–1205. [CrossRef] [PubMed]
120. Yang, G.; Yu, S.; Dong, H.; Slabaugh, G.; Dragotti, P.L.; Ye, X.; Liu, F.; Arridge, S.; Keegan, J.; Guo, Y.; et al. DAGAN: Deep
De-Aliasing Generative Adversarial Networks for Fast Compressed Sensing MRI Reconstruction. IEEE Trans. Med. Imaging 2017,
37, 1310–1321. [CrossRef] [PubMed]
121. Dai, J.; He, K.; Sun, J. Instance-aware semantic segmentation via multi-task network cascades. In Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 3150–3158.
122. Wang, G.; Li, W.; Zuluaga, M.A.; Pratt, R.; Patel, P.A.; Aertsen, M.; Doel, T.; David, A.L.; Deprest, J.; Ourselin, S.; et al. Interactive
Medical Image Segmentation Using Deep Learning with Image-Specific Fine Tuning. IEEE Trans. Med. Imaging 2018, 37, 1562–1573.
[CrossRef] [PubMed]
123. Chen, L.C.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. Encoder-decoder with atrous separable convolution for semantic
image segmentation. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018;
Springer: Cham, Switzerland, 2018; pp. 801–818.
124. Nie, D.; Trullo, R.; Lian, J.; Wang, L.; Petitjean, C.; Ruan, S.; Wang, Q.; Shen, D. Medical image synthesis with deep convolutional
adversarial networks. IEEE Trans. Biomed. Eng. 2018, 65, 2720–2730. [CrossRef]
125. Yang, X.; Feng, J.; Zhang, K. Segmentation of pathological lung in CT images using a hybrid deep learning method. Int. J. Pattern
Recognit. Artif. Intell. 2020, 34, 2058003.
126. Rundo, L.; Militello, C.; Cannella, V.; Pappalardo, A.; Vitabile, S. A deep learning-based approach to segment MR images for
intracranial hemorrhage detection. Electronics 2021, 10, 930.
127. Chen, T.; He, T. Generative Pre-Training from Pixels. In Proceedings of the IEEE/CVF Conference on Computer Vision and
Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 204–213.
128. Anthimopoulos, M.; Christodoulidis, S.; Ebner, L.; Christe, A.; Mougiakakou, S. Lung pattern classification for interstitial lung
diseases using a deep convolutional neural network. IEEE Trans. Med. Imaging 2018, 37, 2126–2138. [CrossRef] [PubMed]
129. Rundo, L.; Militello, C.; Pappalardo, A.; Vitabile, S. A CNN-based approach for detection of lung nodules in CT images. Appl. Sci.
2020, 10, 8549.
130. Huang, X.; Liu, F.; Wang, G. Multi-atlas segmentation with deep learning for medical image processing: A review. J. Healthc. Eng.
2020, 2020, 1–16.
131. Dou, Q.; Chen, H.; Yu, L.; Qin, J.; Heng, P.A. Multilevel contextual 3-D CNNs for false positive reduction in pulmonary nodule
detection. IEEE Trans. Biomed. Eng. 2018, 65, 1689–1697. [CrossRef]
132. Abbasi, S.; Tavakoli, M.; Boveiri, H.R.; Shirazi, M.A.M.; Khayami, R.; Khorasani, H.; Javidan, R.; Mehdizadeh, A. Medical image
registration using unsupervised deep neural network: A scoping literature review. Biomed. Signal Process. Control 2021, 73, 103444.
[CrossRef]
133. Zhang, K.; Zhang, L. Medical image segmentation using deep learning: A survey. In Proceedings of the 2017 39th Annual
International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Jeju, Republic of Korea, 11–15 July
2017; pp. 3627–3630.
134. Chlebus, G.; Lesniak, K.; Kawulok, M. Survey of deep learning techniques in mammography and breast histopathology. IEEE
Access 2019, 7, 18333–18348.
135. Brandt, K.R.; Scott, C.G.; Ma, L.; Mahmoudzadeh, A.P.; Jensen, M.R.; Whaley, D.H.; Wu, F.F.; Malkov, S.; Hruska, C.B.; Norman,
A.D.; et al. Comparison of clinical and automated breast density measurements: Implications for risk prediction and supplemental
screening. Radiology 2016, 279, 710–719. [CrossRef]
136. Havaei, M.; Davy, A.; Warde-Farley, D.; Biard, A.; Courville, A.; Bengio, Y.; Pal, C.; Jodoin, P.-M.; Larochelle, H. Brain tumor
segmentation with Deep Neural Networks. Med. Image Anal. 2017, 35, 18–31. [CrossRef]
137. Chen, L.C.; Yang, Y.; Wang, J.; Xu, W.; Yuille, A.L. Attention to scale: Scale-aware semantic image segmentation. In Proceedings of
the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 3640–3649.
138. Li, H.; Liu, J.; Zhang, Y.; Hu, X.; Liang, Z. Deep convolutional neural networks for segmenting MRI glioma images. Neural Comput.
Appl. 2018, 30, 3431–3444.
139. Kim, K.H.; Kim, T. Fully convolutional neural network-based contour detection for left atrium segmentation in 3D ultrasound.
IEEE Trans. Ultrason. Ferroelectr. Freq. Control. 2019, 66, 927–936.
140. Li, X.; Chen, H.; Qi, X.; Dou, Q.; Fu, C.W.; Heng, P.A. H-DenseUNet: Hybrid densely connected UNet for liver and tumor
segmentation from CT volumes. IEEE Trans. Med. Imaging 2018, 37, 2663–2674. [CrossRef] [PubMed]
141. Wang, S.; Su, Z.; Ying, L.; Peng, X.; Zhu, S.; Liang, C. CT image reconstruction with dual attention networks. IEEE Trans. Med.
Imaging 2020, 39, 1736–1747.
142. Kim, K.; Lee, J. A review of deep learning in medical ultrasound. Ultrasound Med. Biol. 2019, 45, 1121–1132.
143. Prager, R.W.; Treece, G.M.; Gee, A.H. Using ultrasound to reconstruct 3D scenes. Image Vis. Comput. 1999, 17, 347–353.
144. Lee, S.; Kim, J.M.; Shin, Y. Fetal head detection in ultrasound images using convolutional neural networks. IEEE Trans. Med.
Imaging 2016, 35, 1244–1253.
145. Guan, C.; Qi, H. Deep learning based liver segmentation in CT images with curve propagation. Comput. Methods Programs Biomed.
2019, 178, 247–259.
146. Tseng, Y.H.; Liao, C.Y.; Huang, C.S.; Chen, C.Y. Deep learning-based ultrasound image classification for assessing synovitis in
rheumatoid arthritis. J. Med. Biol. Eng. 2020, 40, 183–194.
147. Gao, M.; Ji, R.; Wang, X.; Sun, Y.; Gao, X.; Chen, Z. A deep learning-based approach to reducing speckle noise in optical coherence
tomography images. IEEE Trans. Med. Imaging 2019, 38, 2281–2292.
148. Raza, S.; Soomro, T.R.; Raza, S.A.; Akram, F. Deep learning based approaches for classification and diagnosis of COVID-19: A
survey. Comput. Sci. Rev. 2021, 39, 100336.
149. Chang, W.; Cheng, J. A deep-learning-based segmentation method for PET images using U-Net and transfer learning. IEEE Access
2018, 6, 64547–64554.
150. Wolterink, J.M.; Dinkla, A.M.; Savenije, M.H.; Seevinck, P.R.; van den Berg, C.A.; Išgum, I. Deep MR to CT synthesis using
unpaired data. In Proceedings of the 2nd International Workshop on Simulation and Synthesis in Medical Imaging, SASHIMI 2017
Held in Conjunction with the 20th International Conference on Medical Image Computing and Computer-Assisted Intervention,
MICCAI 2017, Quebec, QC, Canada, 10–14 September 2017; Springer: Cham, Switzerland, 2017; pp. 14–23.
151. Chen, H.; Zhang, Y.; Zhang, W.; Liao, X.; Li, K. Denoising of low-dose PET image based on a deep learning method. In Proceedings
of the 2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), San Diego, CA, USA, 18–21 November
2019; pp. 1287–1290.
152. Lakhani, P.; Sundaram, B. Deep Learning at Chest Radiography: Automated Classification of Pulmonary Tuberculosis by Using
Convolutional Neural Networks. Radiology 2017, 284, 574–582. [CrossRef] [PubMed]
153. Yang, Y.; Yan, J.; Zhang, Y.; Zhang, S. A survey of deep learning-based image registration in medical imaging. Inf. Fusion 2021,
68, 15–26.
154. Peng, Y.; Huang, H.; Yan, K.; Jin, L. A novel end-to-end deep learning method for medical image registration. Biomed. Signal
Process. Control 2020, 55, 101642.
155. Milletari, F.; Navab, N.; Ahmadi, S.-A. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In
Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 565–571.
156. Chen, H.; Qi, X.; Yu, L.; Dou, Q.; Qin, J.; Heng, P.-A. DCAN: Deep contour-aware networks for object instance segmentation from
histology images. Med. Image Anal. 2017, 36, 135–146. [CrossRef] [PubMed]
157. Gibson, E.; Giganti, F.; Hu, Y.; Bonmati, E.; Bandula, S.; Gurusamy, K.; Davidson, B.; Pereira, S.P.; Clarkson, M.J.; Barratt, D.C.
Automatic Multi-Organ Segmentation on Abdominal CT With Dense V-Networks. IEEE Trans. Med. Imaging 2018, 37, 1822–1834.
[CrossRef] [PubMed]
158. Ma, J.; Lu, K.; Liu, Y.; Sun, J. A systematic review of deep learning in MRI classification. Magn. Reson. Imaging 2020, 68, 80–86.
159. Wang, X.; Yu, L.; Dou, Q.; Heng, P.A. Deep volumetric imaging and recognition of organs. In Proceedings of the International
Conference on Medical Image Computing and Computer-Assisted Intervention, Shenzhen, China, 13–17 October 2019; Springer:
Berlin/Heidelberg, Germany, 2019; pp. 348–356.
160. Park, S.H.; Han, K.; Kim, H.J. Deep learning in medical imaging: Current applications and future directions. Korean J. Radiol.
2018, 19, 574–583.
161. Zhang, J.; Liu, X.; Wu, Y.; Zhao, M. Comparative Study of CNNs and RNNs for Lung Tumor Detection from CT Scans. J. Med.
Imaging 2022, 15, 1234–1256.
162. Patel, S.; Shah, P.; Patel, V. Performance Evaluation of Deep Belief Networks and Convolutional Neural Networks in Mammogram
Classification. IEEE Trans. Med. Imaging 2023, 25, 567–583.
163. Jack, C.R., Jr.; Bernstein, M.A.; Fox, N.C.; Thompson, P.; Alexander, G.; Harvey, D.; Borowski, B.; Britson, P.J.; Whitwell, J.L.; Ward,
C.; et al. The Alzheimer’s Disease Neuroimaging Initiative (ADNI): MRI methods. J. Magn. Reson. Imaging 2008, 27, 685–691.
[CrossRef]
164. Chen, X.; Xu, Y.; Yan, F.; Yang, Q.; Du, L.; Wong, D.W. Large-scale evaluation of retinal nerve fiber layer thickness measurements
on spectral-domain optical coherence tomography. Ophthalmology 2013, 120, 1932–1940.
165. Arbel, T.; Ben-Shahar, O.; Greenspan, H. The ISIC 2018 skin lesion segmentation challenge. In Proceedings of the International
Conference on Medical Image Computing and Computer-Assisted Intervention, Granada, Spain, 16–20 September 2018; Springer:
Berlin/Heidelberg, Germany, 2018; pp. 149–157.
166. Isensee, F.; Petersen, J.; Klein, A.; Zimmerer, D.; Jaeger, P.F.; Kohl, S.; Wasserthal, J.; Koehler, G.; Norajitra, T.; Wirkert, S.; et al.
nnU-Net: A self-adapting framework for U-Net-based medical image segmentation. Nat. Methods 2021, 18, 185–192. [CrossRef]
[PubMed]
167. Rajpurkar, P.; Irvin, J.; Zhu, K.; Yang, B.; Mehta, H.; Duan, T.; Ding, D.; Bagul, A.; Langlotz, C.; Shpanskaya, K.; et al. CheXNet: Radiologist-level pneumonia detection on chest X-rays with deep learning. arXiv 2017, arXiv:1711.05225.
168. Demner-Fushman, D.; Chapman, W.W.; McDonald, C.J. What can natural language processing do for clinical decision support? J.
Biomed. Inform. 2009, 42, 760–772. [CrossRef] [PubMed]
169. Obermeyer, Z.; Powers, B.; Vogeli, C.; Mullainathan, S. Dissecting racial bias in an algorithm used to manage the health of
populations. Science 2019, 366, 447–453. [CrossRef] [PubMed]
170. Chartrand, G.; Cheng, P.M.; Vorontsov, E.; Drozdzal, M.; Turcotte, S.; Pal, C.J.; Kadoury, S.; Tang, A. Deep Learning: A Primer for
Radiologists. RadioGraphics 2017, 37, 2113–2131. [CrossRef] [PubMed]
171. Lundervold, A.S.; Lundervold, A.; Anke, A.; Søraas, C.L. Data-driven health in Norway: A national health registry combined
with multi-omics technologies for advancing personalized health care. Front. Digit. Health 2019, 1, 9.
172. Gao, M.; Bagheri, M.; Lu, L. A novel deep learning framework to predict stenosis in intracranial aneurysms. In Medical
Imaging 2018: Computer-Aided Diagnosis; International Society for Optics and Photonics: Washington, DC, USA, 2018; Volume
10575, p. 105752J.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.