
Postgraduate Medical Journal, 2023, 99, 1178, 1287–1294
https://doi.org/10.1093/postmj/qgad095
Advance access publication date 5 October 2023
Education and Learning

Computer image analysis with artificial intelligence: a practical introduction to convolutional neural networks for medical professionals
Georgios Kourounis 1,2,*, Ali Ahmed Elmahmudi 3, Brian Thomson 3, James Hunter 4, Hassan Ugail 3, Colin Wilson 1,2



1 NIHR Blood and Transplant Research Unit, Newcastle University and Cambridge University, Newcastle upon Tyne, NE1 7RU, United Kingdom
2 Institute of Transplantation, The Freeman Hospital, Newcastle upon Tyne, NE7 7DN, United Kingdom
3 Faculty of Engineering and Informatics, Bradford University, Bradford, BD7 1DP, United Kingdom
4 Nuffield Department of Surgical Sciences, University of Oxford, Oxford, OX3 9DU, United Kingdom

*Corresponding author. Transplant and HPB Department, The Freeman Hospital, Freeman Rd, High Heaton, Newcastle upon Tyne NE7 7DN, United Kingdom.
E-mail: [email protected]

Abstract
Artificial intelligence tools, particularly convolutional neural networks (CNNs), are transforming healthcare by enhancing predictive,
diagnostic, and decision-making capabilities. This review provides an accessible and practical explanation of CNNs for clinicians and
highlights their relevance in medical image analysis. CNNs have shown themselves to be exceptionally useful in computer vision, a
field that enables machines to ‘see’ and interpret visual data. Understanding how these models work can help clinicians leverage their
full potential, especially as artificial intelligence continues to evolve and integrate into healthcare. CNNs have already demonstrated
their efficacy in diverse medical fields, including radiology, histopathology, and medical photography. In radiology, CNNs have been
used to automate the assessment of conditions such as pneumonia, pulmonary embolism, and rectal cancer. In histopathology, CNNs
have been used to assess and classify colorectal polyps, gastric epithelial tumours, as well as assist in the assessment of multiple
malignancies. In medical photography, CNNs have been used to assess retinal diseases and skin conditions, and to detect gastric and
colorectal polyps during endoscopic procedures. In surgical laparoscopy, they may provide intraoperative assistance to surgeons, helping
interpret surgical anatomy and demonstrate safe dissection zones. The integration of CNNs into medical image analysis promises to
enhance diagnostic accuracy, streamline workflow efficiency, and expand access to expert-level image analysis, contributing to the
ultimate goal of delivering further improvements in patient and healthcare outcomes.

Keywords: biotechnology & bioinformatics; education and training; radiology & imaging

Introduction
Artificial intelligence (AI) tools are increasingly prevalent, transforming numerous industries, including healthcare. AI methods are being used to drive progress in predictive, diagnostic, and decision-making abilities [1]. Within medicine, AI has shown promise in various applications, including radiology and pathology analysis [1, 2], decision aid tools such as organ allocation in transplantation [1, 3], and patient outcome prediction tools [4].

Image analysis, a significant aspect of AI, has proven particularly useful. Convolutional neural networks (CNNs) are the subset of AI models driving significant progress in the field of medical image analysis. They play a crucial role specifically in computer vision, a field that enables machines to 'see' and interpret visual data. Their use has the potential to increase the accuracy, speed, and accessibility of image analysis and interpretation [5].

Understanding the basics of CNNs will become essential for clinicians who seek to appreciate how these models work. Just as our understanding of computed tomography (CT) scans is enhanced by a basic understanding of Hounsfield units, background knowledge of CNNs can help clinicians better understand and engage with the subject. As AI continues to evolve and become more integrated into healthcare, it will be crucial for clinicians to understand these powerful tools to leverage their full potential.

This review aims to provide an accessible, entry-level explanation of CNNs for clinicians unfamiliar with AI and to highlight their relevance in medical image analysis. The goal is to equip medical professionals with the knowledge they need to start navigating the evolving landscape of AI for image analysis in healthcare.

Brief overview of artificial intelligence
AI is a branch of computer science that aims to create algorithms capable of performing tasks that typically require human intelligence. These tasks include visual perception, speech recognition, decision-making, and language translation [1, 2, 4] (Fig. 1). The concept of AI was first introduced at the Dartmouth Conference in 1956 [6], marking the birth of AI as a field of study. Since then, AI research has gone through various phases, including the development of rule-based systems in the 1960s, expert systems in the 1970s and 1980s, and the emergence of machine learning (ML) in the 1990s [7].

Received: July 26, 2023. Revised: September 6, 2023. Accepted: September 13, 2023
© The Author(s) 2023. Published by Oxford University Press on behalf of Postgraduate Medical Journal.
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (https://creativecommons.org/licenses/by-nc/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited. For commercial re-use, please contact [email protected]

Figure 1 AI and its subsets leading to CNNs, with examples

ML, a subfield of AI, involves training machines with models that learn from data and improve their performance on a specific task. These models have three layers: an input, a hidden, and an output layer. The input layer is the data we present to the model. The hidden layer is the mathematical model used to process the input data, such as a linear regression. The output layer is the result or decision the model generates. ML tools contain relatively simple layers with specific functions [4, 7] and are comparable to a single human neuron, with the dendrite analogous to the input layer, the cell body corresponding to the hidden layer, and the axon serving as the output layer.

ML can be broadly categorized into supervised and unsupervised learning. In supervised learning, the model is trained on labelled datasets, meaning the desired output is already known, and the goal is to generate classification tools [4, 7]. An example would be the use of labelled retinal photographs to generate a tool to detect diabetic retinopathy. Unsupervised learning involves training the model on unlabelled datasets, where the desired output is not known, and the goal is to find meaningful differences in the data. The models learn by identifying patterns and structures within the data [4, 7]. An example of this is the discovery of novel drugs by using ML to reveal not-yet-defined chemical attributes of medications (Fig. 2).

Figure 2 Differences between supervised and unsupervised learning with ML
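To make the distinction concrete, the following minimal sketch contrasts the two approaches using the open-source scikit-learn library; the toy features, labels, and model choices are our own illustrative assumptions, not taken from the studies cited above.

    # Minimal sketch of supervised vs unsupervised learning (scikit-learn).
    # The toy data and model choices are illustrative assumptions only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.cluster import KMeans

    # Six cases summarized as two numeric features each.
    X = np.array([[0.1, 0.2], [0.2, 0.1], [0.15, 0.25],
                  [0.9, 0.8], [0.8, 0.9], [0.85, 0.75]])

    # Supervised: labels are known (0 = healthy, 1 = diseased), so the
    # model learns a classification rule it can apply to new cases.
    y = np.array([0, 0, 0, 1, 1, 1])
    classifier = LogisticRegression().fit(X, y)
    print(classifier.predict([[0.2, 0.2]]))  # -> [0], i.e. 'healthy'

    # Unsupervised: no labels are given; the model groups similar cases
    # by finding structure in the data itself.
    clustering = KMeans(n_clusters=2, n_init=10).fit(X)
    print(clustering.labels_)  # two discovered clusters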
Deep learning (DL) is a subset of ML that uses multiple hidden layers to process and learn from data—the 'deep neural network'. Each layer in a deep neural network performs a specific operation and passes its output to the next layer, allowing the network to learn complex representations of the data. Similar to how a network of interconnected neurons surpasses the capabilities of a single neuron, DL models surpass the data analysis capabilities of simpler ML models, achieving feats that were previously unattainable. DL has been the driving force behind recent breakthroughs in areas such as computer vision, speech recognition, and natural language processing [4, 7].
CNNs are highly effective DL models specifically designed for image recognition tasks. Each layer of a CNN applies operations called convolutions to every pixel of an image, enabling the extraction of important features. This process allows CNNs to excel at detecting patterns, objects, and abnormalities in visual data. By extracting these meaningful features, CNNs provide valuable insights that can be further explored and utilized for various purposes [4, 5, 7].
Understanding images as computer data
To understand CNNs, it is important to grasp how computers display, store, and process visual information. Digital images are made up of individual pixels. Each pixel is made up of three small 'lamps', one for each primary colour: red, green, and blue (RGB). Each 'lamp' can have 256 levels of intensity. Over 16 million possible colour variations (256³) can be generated by varying the intensity of each 'lamp'. For a computer, each pixel is represented by three numbers (R = x, G = y, B = z), indicating the intensity of each colour. Additionally, computers use zero-based numbering, meaning the first number in a sequence is 0 rather than 1. That makes the last number in a sequence one less than the range, so 255 as opposed to 256 for RGB values. To display white, all primary colours are at their maximum intensity (R = 255, G = 255, B = 255). Conversely, red is represented by only the red component being fully on (R = 255, G = 0, B = 0), while purple combines red and blue with no green (R = 255, G = 0, B = 255). This numerical RGB information enables computers to store visual information as numbers while also rendering images that humans can perceive on a display [8] (Fig. 3).

Figure 3 Illustration showing relationship between images, pixels, and computer image data stored in RGB format
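The same idea can be expressed in a few lines of code. The sketch below, using NumPy (our illustrative choice, not a tool from the cited studies), builds a tiny 2 × 2 image and sets the white, red, and purple pixels described above:

    # A 2 x 2 pixel RGB image as a NumPy array; values run 0-255 and
    # indexing is zero-based, exactly as described in the text.
    import numpy as np

    img = np.zeros((2, 2, 3), dtype=np.uint8)  # height x width x RGB
    img[0, 0] = [255, 255, 255]  # white: all three 'lamps' fully on
    img[0, 1] = [255, 0, 0]      # red: only the red component on
    img[1, 0] = [255, 0, 255]    # purple: red and blue, no green
    print(img[0, 0])             # the first pixel is index 0, not 1
    print(256 ** 3)              # 16,777,216 possible colours per pixel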

To further illustrate these concepts, let's compare them to CT scans. Like images, CT scans are made up of pixels. However, instead of RGB, each pixel in a CT scan represents a different Hounsfield unit. These are units that measure the density of tissue imaged and can range from −1000 to +1000. Each Hounsfield unit corresponds to a different shade of grey, with lower values appearing darker (−1000, black) and higher values appearing lighter (+1000, white) [9].

Finally, it is important to understand that computer images contain a significantly larger range of visual information compared to what the human eye can perceive. For instance, when using a typical medical display, it has been estimated that humans can discern a maximum of 720 shades of grey [10]. This limitation in human perception is the reason why different CT 'windows' are necessary to optimize visualization and analysis of different body parts. Additionally, in the case of coloured images, humans can differentiate between ∼1 and 2 million distinct colours [11], which is significantly less than the 16.8 million colour variations that can be stored within a typical computer image file.
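CT windowing is easy to demonstrate numerically. Below is a minimal sketch, assuming NumPy and a common soft-tissue window (centre 40, width 400); the function name and example values are ours, for illustration only:

    # Map Hounsfield units to display greys through a CT 'window':
    # values outside the window are clipped, and the rest are rescaled
    # to the 0-255 grey levels a display can show.
    import numpy as np

    def apply_window(hu, center, width):
        low, high = center - width / 2, center + width / 2
        clipped = np.clip(hu, low, high)
        return ((clipped - low) / (high - low) * 255).astype(np.uint8)

    scan = np.array([-1000, -50, 40, 300, 1000])  # air, fat, soft tissue, bone-like
    print(apply_window(scan, center=40, width=400))  # -> [0 70 127 255 255]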
How convolutional neural networks work
The major advantage of CNNs over human analysis of images is that they work with the numerical data of an image, as opposed to the image itself. That means they can process the millions of shades of grey or intensities of colour that we are not able to. The process begins with the input layer, which is the image given to the CNN for analysis.
The first step in the CNN is the convolutional layer. Here, the input image is transformed using a set of mathematical 'filters' that can reveal certain features in an image. The mathematical calculation to achieve this is called a convolution and involves multiplying each value in a field by its corresponding weight and summing the results (Fig. 4). These filters, formally referred to as kernels, move across the image, analysing small patches of the image at a time. The filters extract important features in the image, such as edges, colours, or textures. The result of this process is a set of activation maps, one for each filter, which highlight the areas in the image where the network found the respective features.

Figure 4 Visual representation and real example of convolutional image transformation into an activation map using a 3 × 3 filter/kernel; images from Wikimedia Commons used with permission [50]
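The convolution itself is only a few lines of arithmetic. The following sketch implements the multiply-and-sum operation described above in plain NumPy, with a 3 × 3 edge-highlighting kernel; the kernel values and function name are our illustrative assumptions:

    # A 2D convolution: slide a 3 x 3 kernel across the image, multiply
    # each pixel in the patch by the corresponding kernel weight, and
    # sum the results into one value of the activation map.
    import numpy as np

    def convolve2d(image, kernel):
        kh, kw = kernel.shape
        out_h = image.shape[0] - kh + 1
        out_w = image.shape[1] - kw + 1
        activation = np.zeros((out_h, out_w))
        for i in range(out_h):
            for j in range(out_w):
                patch = image[i:i + kh, j:j + kw]
                activation[i, j] = np.sum(patch * kernel)  # weighted sum
        return activation

    edge_kernel = np.array([[-1, -1, -1],
                            [-1,  8, -1],
                            [-1, -1, -1]])  # responds strongly to edges
    image = np.random.rand(5, 5)            # a toy 5 x 5 greyscale image
    print(convolve2d(image, edge_kernel))   # a 3 x 3 activation map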
After the convolutional layers, the network applies a series of fully connected layers, also known as classification layers. These layers take the activation maps from the convolutional layers and process them further to create classification probabilities. This is done using functions that can map the output of the convolutional layers to the classes that the network is trying to classify (Fig. 5).

Finally, the network produces an output. This is done in the output layer, which presents the highest probability classification that the CNN has found. For example, if the network is designed to identify whether an image contains a cat or a dog, the output layer would output the class (cat or dog) that has the highest probability.

Figure 5 A summary overview of the steps in CNNs
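Put together, the layer sequence just described fits in a few lines. Below is a minimal sketch in PyTorch of a two-class (cat vs dog) CNN; the layer sizes and the use of PyTorch are our illustrative assumptions, not a model from any cited study:

    # Minimal CNN: convolution -> activation maps -> fully connected
    # classification layer -> class probabilities.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(3, 8, kernel_size=3),  # convolutional layer: 8 filters
        nn.ReLU(),                       # keep positive activations
        nn.MaxPool2d(2),                 # shrink the activation maps
        nn.Flatten(),
        nn.Linear(8 * 31 * 31, 2),       # fully connected layer: 2 classes
    )

    x = torch.randn(1, 3, 64, 64)            # one 64 x 64 RGB input image
    probs = torch.softmax(model(x), dim=1)   # classification probabilities
    print(probs.argmax(dim=1))               # output layer: top class (0 or 1)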
Convolutional neural networks in radiology
The most obvious medical field to benefit from this technology is radiology, where multiple applications of CNNs have already been reported (Table 1). The application of CNNs in chest X-rays (CXRs) has shown impressive results. Hashmi et al. [12] developed a CNN model for pneumonia detection on CXR images, reporting an area under the receiver operating characteristic curve (AUC) of 0.99. Albahli et al. [13] reported a CNN model which was able to label and classify 13 different chest-related diseases (atelectasis, cardiomegaly, consolidation, oedema, effusion, emphysema, fibrosis, infiltration, mass, nodule, pleural thickening, pneumonia, and pneumothorax). Their AUC results ranged from 0.65 for pneumonia to 0.93 for emphysema. Lakhani et al. [14] used CNNs for the automated classification of pulmonary tuberculosis on chest radiographs, achieving an AUC of 0.99 for active tuberculosis, which was higher than the AUC of 0.95 achieved by the radiologists.

Similar results have been reported across radiological studies of CT and magnetic resonance imaging (MRI) modalities. Huang et al. [15] developed PENet, a CNN DL model that automated the diagnosis of pulmonary embolism using CT images. Similarly, Betancur et al. [16] used CNNs to predict vascular obstructive disease from the 3D images generated by fast myocardial perfusion single-photon emission CT. Rectal cancer MRI images have been used to generate CNNs that are able to automate detection and segmentation of lymph node assessment [17]. The authors found that the results of the model improved the efficacy and speed of radiologists reporting the MRIs, while also minimizing the differences between radiologists with different levels of experience. MRI images for assessing liver tumours were also used to develop CNNs that help radiologists differentiate intermediate from more likely malignant liver tumours [18].
Convolutional neural networks in histology
Histopathology is another field of medicine that involves examining visual information through microscopy. Here too, CNNs have shown results in various areas (Table 1). For instance, Wei et al. [19] employed CNN models to automate the classification of colorectal polyps on histopathological slides, aiding pathologists in enhancing diagnostic accuracy, efficiency, and reproducibility during screening. Similarly, Iizuka et al. [20] used CNN models to also help pathologists classify gastric epithelial tumours. Numerous studies have reported on the use of CNN models to evaluate other cancer histopathology slides, including breast [21], skin [22], lung [23], pancreatic [24], and liver cancers [25].

The speed advantage that accurate CNN models can confer in time-critical situations was exemplified in a study from Hollon et al. [26]. In their research, they successfully implemented a CNN model trained on 2.5 million stimulated Raman histology images to enable near real-time intraoperative brain tumour diagnosis within <150 s, in contrast to the conventional techniques that typically require 20–30 min. The accelerated diagnostic capabilities of CNN models have the potential to optimize surgical decision-making and improve patient outcomes. These examples highlight the transformative impact of CNNs in streamlining diagnostic processes and advancing medical interventions.

Table 1. Summary of a subset of CNN models for medical image analysis across different specialties.

Medical field         Application                                        Reported accuracy
Radiology             US—Thyroiditis [43]                                AUC 0.99
                      CXR—Pneumonia [12]                                 AUC 0.99
                      CXR—COVID-19 [44]                                  Accuracy 98.15%
                      CXR—Tuberculosis [14]                              AUC 0.99
                      Mammography—Breast cancer [45]                     AUC 0.88
                      CT—Appendicitis [46]                               Accuracy 90%–97.5%
                      CTPA—Pulmonary embolism [15]                       AUC 0.84
                      SPECT—Coronary artery disease [16]                 AUC 0.80
                      MRI—Lymph node assessment in rectal cancer [17]    Sensitivity 0.80; PPV 0.735
                      MRI—Liver tumours [18]                             AUC 0.95
Histology             Colorectal polyps [19]                             Accuracy 93.5%
                      Gastric epithelial tumours [20]                    AUC 0.97
                      Breast cancer [21]                                 AUC 0.97
                      Skin cancer [22]                                   AUC 0.92
                      Kidney transplant biopsies [47]                    Dice score 0.88
                      Intraoperative brain tumour diagnosis [26]         Accuracy 94.6%
Medical photography   Retinal diseases [27]                              Accuracy 96.5%
                      Glaucoma [28]                                      AUC 0.99
                      Skin cancer [29, 48]                               AUC 0.91–0.96
                      Intraoperative nerve detection [49]                Sensitivity 0.91; Specificity 1.00
                      Burn severity assessment [30]                      Accuracy 95.63%
Endoscopy/video       Colonoscopy polyps [31]                            Accuracy 86.7%
                      Gastroscopy polyps [32]                            F1 score 91.6%
                      Colposcopy [33]                                    AUC 0.947
                      Surgical phase recognition [34]                    Accuracy 97.0%
                      Intraoperative dissection guidance [35]            Accuracy 95%

NB: CNN performances will vary depending on multiple factors; as such, they cannot be directly compared between studies. CTPA, computed tomography pulmonary angiography; US, ultrasound.

Convolutional neural networks in medical photography
The final high-yield area of medicine that is benefiting from these systems is medical photography and endoscopy (Table 1). In ophthalmology, CNN models have been developed to assess retinal diseases [27], including diabetic retinopathy, as well as glaucoma [28]. The evaluation of readily visualized and accessible skin conditions has also experienced notable advancements with the emergence of CNNs. Dermatology has benefited from CNNs in the classification of skin lesions, with Esteva et al. [29] reporting a model trained on 129 450 images with an AUC of 0.96, a performance on par with dermatology experts. Additionally, Suha et al. [30] developed a CNN model for diagnosing and grading skin burns with an overall accuracy of 97.3%.

CNNs have also found utility in endoscopic medical imaging, such as in the detection of gastric and colorectal polyps during gastrointestinal tract investigations. The use of CNNs has shown improved identification of malignant versus benign polyps and enhanced accuracy in lesion assessment by endoscopists, including both novice and senior practitioners [31, 32]. Within gynaecology, CNNs have proven valuable in cervical screening by differentiating low- and high-risk lesions, achieving an AUC of 0.947 for detecting biopsy-requiring lesions [33]. Finally, videos from laparoscopic surgery have been analysed using these models to automate surgical phase recognition [34]. These data have subsequently been used to assess surgical proficiency and provide insights to inform training discussions. In laparoscopic cholecystectomy, CNN models have been designed to aid surgeons intraoperatively, detecting safe and dangerous zones of dissection to reduce errors in visual perception and minimize complications [35].

Challenges and future directions
One of the main challenges in developing high-quality ML models is the availability of high-quality data. Digital notes and large datasets are crucial, yet these are not always readily available in medicine. Some centres continue to rely on paper notes, and different centres often have fragmented and noncommunicating database systems. In addition, concerns about data anonymization, patient privacy, and cybersecurity add further layers of complexity to data-sharing processes [2, 36].

The rapid advancements in these disciplines also present challenges for both regulatory frameworks and workforce education. Although many of these technologies have undergone testing in research environments, their transition into clinical practice can be slowed down by regulations that have not kept pace with recent advances in AI [2]. The current medical workforce is also not universally equipped to understand and deploy these technologies, causing further delays in their clinical adoption [2].

Future directions will involve the use of creative methods to address the limited availability of medical data. Strategies like transfer learning and Generative Adversarial Networks have the potential to augment smaller datasets, rendering them more representative and robust [36, 37], as in the sketch below.
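As a rough illustration of transfer learning, the sketch below (assuming PyTorch/torchvision; the weight tag and two-class head are our assumptions) reuses a network pretrained on everyday photographs and retrains only its final layer on a smaller labelled medical dataset:

    # Transfer learning sketch: freeze a pretrained feature extractor and
    # retrain only the final classification layer on a small dataset.
    import torch.nn as nn
    from torchvision import models

    model = models.resnet18(weights="IMAGENET1K_V1")  # pretrained weights
    for param in model.parameters():
        param.requires_grad = False   # freeze the pretrained layers

    model.fc = nn.Linear(model.fc.in_features, 2)  # new two-class head
    # Only model.fc is now trained on the smaller medical dataset.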

Multidisciplinary collaboration is also set to play an increasingly significant role in these projects. Initiatives like the UK's Topol Fellowship offer healthcare professionals the chance to gain practical experience in data science and AI, effectively bridging the divide between these crucial disciplines [38, 39].

Further reading
The current review offers a big-picture overview of CNNs, intentionally avoiding an exhaustive review of the methodologies and potential applications. For readers looking to explore the workings of CNNs in greater depth, we recommend the 2018 reviews by Yamashita et al. [36] and Anwar et al. [37].

Although we have primarily focused on the use of CNNs for image classification, it is important to note that CNNs are also capable of other tasks, such as segmentation and object detection. Image segmentation involves isolating certain features, such as extracting an area of cancer from healthy tissue in a radiological scan. Object detection involves identifying specific objects in an image, such as detecting a polyp during endoscopy. Both of the 2018 reviews mentioned above cover these topics in more detail [36, 37].

We have also focused on 2D images, as these are the most intuitive and easiest introduction to the subject. We note that CNNs have been used in 1D and 3D tasks as well. Some examples of 1D tasks include ECG assessments [40] and drug response predictions [41]. Regarding 3D CNNs, these have been used in volumetric imaging modalities like CT and MRI [37]. For an in-depth review of 1D CNNs, we recommend the 2021 review on the subject by Kiranyaz et al. [42].

Conclusion
The integration of CNNs in medical image analysis holds significant potential and offers several notable advantages. First, CNNs have demonstrated their ability to match or exceed expert assessment, leading to more precise diagnoses and improved patient outcomes. Second, the utilization of CNNs can expedite image analysis by clinicians, resulting in faster turnaround times and enhanced workflow efficiency. Lastly, CNNs have the capacity to expand access to expert-level image analysis, particularly benefiting clinicians and patients in healthcare centres with limited experience or situated in remote or underserved areas.

Conflict of interest statement: None declared.

Funding
This study is funded by the National Institute for Health and Care Research (NIHR) Blood and Transplant Research Unit in Organ Donation and Transplantation (NIHR203332), a partnership between NHS Blood and Transplant, University of Cambridge and Newcastle University. The views expressed are those of the author(s) and not necessarily those of the NIHR, NHS Blood and Transplant, or the Department of Health and Social Care.

Authors' contributions
All authors were involved in the formulation of the study concept and design, data acquisition, analysis, and interpretation. The initial draft of the article was prepared by G.K. Subsequent revisions were made by A.A.E., B.T., J.H., H.U., and C.W. Final approval for the manuscript was given by all authors.

Data availability
All data relevant to this publication have been reported and published. There are no additional unpublished data for this review.

References
1. Peloso A, Moeckli B, Delaune V et al. Artificial intelligence: present and future potential for solid organ transplantation. Transpl Int 2022;35:10640. https://doi.org/10.3389/ti.2022.10640.
2. He J, Baxter SL, Xu J et al. The practical implementation of artificial intelligence technologies in medicine. Nat Med 2019;25:30–6. https://doi.org/10.1038/s41591-018-0307-0.
3. Bertsimas D, Kung J, Trichakis N et al. Development and validation of an optimized prediction of mortality for candidates awaiting liver transplantation. Am J Transplant 2019;19:1109–18. https://doi.org/10.1111/ajt.15172.
4. Connor KL, O'Sullivan ED, Marson LP et al. The future role of machine learning in clinical transplantation. Transplantation 2021;105:723–35. https://doi.org/10.1097/TP.0000000000003424.
5. Mascagni P, Alapatt D, Sestini L et al. Computer vision in surgery: from potential to clinical value. Npj Digit Med 2022;5:1–9. https://doi.org/10.1038/s41746-022-00707-5.
6. McCarthy J, Minsky ML, Rochester N et al. A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955. AI Mag 2006;27:12–2. https://doi.org/10.1609/aimag.v27i4.1904.
7. Mitchell M. Artificial Intelligence: A Guide for Thinking Humans. London: Pelican, 2020.
8. Zubair M. A comprehensive guide on color representation in computer vision (CV-02). Medium 2023. https://towardsdatascience.com/how-color-is-represented-and-viewed-in-computer-vision-b1cc97681b68 (11 June 2023, date last accessed).
9. Greenway K. Hounsfield Unit | Radiology Reference Article. Radiopaedia.org. Radiopaedia. https://doi.org/10.53347/rID-38181.
10. Kimpe T, Tuytschaever T. Increasing the number of gray shades in medical display systems—how much is enough? J Digit Imaging 2007;20:422–32. https://doi.org/10.1007/s10278-006-1052-3.
11. Mollon JD. Color vision: opsins and options. Proc Natl Acad Sci U S A 1999;96:4743–5. https://doi.org/10.1073/pnas.96.9.4743.
12. Hashmi MF, Katiyar S, Keskar AG et al. Efficient pneumonia detection in chest X-ray images using deep transfer learning. Diagnostics 2020;10:417. https://doi.org/10.3390/diagnostics10060417.
13. Albahli S, Rauf HT, Algosaibi A et al. AI-driven deep CNN approach for multi-label pathology classification using chest X-rays. PeerJ Comput Sci 2021;7:e495. https://doi.org/10.7717/peerj-cs.495.
14. Lakhani P, Sundaram B. Deep learning at chest radiography: automated classification of pulmonary tuberculosis by using convolutional neural networks. Radiology 2017;284:574–82. https://doi.org/10.1148/radiol.2017162326.
15. Huang S-C, Kothari T, Banerjee I et al. PENet—a scalable deep-learning model for automated diagnosis of pulmonary embolism using volumetric CT imaging. Npj Digit Med 2020;3:1–9. https://doi.org/10.1038/s41746-020-0266-y.
16. Betancur J, Commandeur F, Motlagh M et al. Deep learning for prediction of obstructive disease from fast myocardial perfusion SPECT: a multicenter study. JACC Cardiovasc Imaging 2018;11:1654–63. https://doi.org/10.1016/j.jcmg.2018.01.020.
17. Zhao X, Xie P, Wang M et al. Deep learning–based fully automated detection and segmentation of lymph nodes on multiparametric-MRI for rectal cancer: a multicentre study. eBioMedicine 2020;56:102780. https://doi.org/10.1016/j.ebiom.2020.102780.
18. Wu Y, White GM, Cornelius T et al. Deep learning LI-RADS grading system based on contrast enhanced multiphase MRI for differentiation between LR-3 and LR-4/LR-5 liver tumors. Ann Transl Med 2020;8:701. https://doi.org/10.21037/atm.2019.12.151.
19. Wei JW, Suriawinata AA, Vaickus LJ et al. Evaluation of a deep neural network for automated classification of colorectal polyps on histopathologic slides. JAMA Netw Open 2020;3:e203398. https://doi.org/10.1001/jamanetworkopen.2020.3398.
20. Iizuka O, Kanavati F, Kato K et al. Deep learning models for histopathological classification of gastric and colonic epithelial tumours. Sci Rep 2020;10:1504. https://doi.org/10.1038/s41598-020-58467-9.
21. Rakhlin A, Shvets A, Iglovikov V et al. Deep convolutional neural networks for breast cancer histology image analysis. In: Campilho A, Karray F, ter Haar RB (eds), Image Analysis and Recognition. Cham: Springer International Publishing, 2018, 737–44. https://doi.org/10.1007/978-3-319-93000-8_83.
22. Höhn J, Krieghoff-Henning E, Jutzi TB et al. Combining CNN-based histologic whole slide image analysis and patient data to improve skin cancer classification. Eur J Cancer 2021;149:94–101. https://doi.org/10.1016/j.ejca.2021.02.032.
23. Le Page AL, Ballot E, Truntzer C et al. Using a convolutional neural network for classification of squamous and non-squamous non-small cell lung cancer based on diagnostic histopathology HES images. Sci Rep 2021;11:23912. https://doi.org/10.1038/s41598-021-03206-x.
24. Kriegsmann M, Kriegsmann K, Steinbuss G et al. Deep learning in pancreatic tissue: identification of anatomical structures, pancreatic intraepithelial neoplasia, and ductal adenocarcinoma. Int J Mol Sci 2021;22:5385. https://doi.org/10.3390/ijms22105385.
25. Chen C, Chen C, Ma M et al. Classification of multi-differentiated liver cancer pathological images based on deep learning attention mechanism. BMC Med Inform Decis Mak 2022;22:176. https://doi.org/10.1186/s12911-022-01919-1.
26. Hollon TC, Pandian B, Adapa AR et al. Near real-time intraoperative brain tumor diagnosis using stimulated Raman histology and deep neural networks. Nat Med 2020;26:52–8. https://doi.org/10.1038/s41591-019-0715-9.
27. Tayal A, Gupta J, Solanki A et al. DL-CNN-based approach with image processing techniques for diagnosis of retinal diseases. Multimed Syst 2022;28:1417–38. https://doi.org/10.1007/s00530-021-00769-7.
28. Elmoufidi A, Skouta A, Jai-andaloussi S et al. Deep multiple instance learning for automatic glaucoma prevention and auto-annotation using color fundus photography. Prog Artif Intell 2022;11:397–409. https://doi.org/10.1007/s13748-022-00292-4.
29. Esteva A, Kuprel B, Novoa RA et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017;542:115–8. https://doi.org/10.1038/nature21056.
30. Suha SA, Sanam TF. A deep convolutional neural network-based approach for detecting burn severity from skin burn images. Mach Learn Appl 2022;9:100371. https://doi.org/10.1016/j.mlwa.2022.100371.
31. Jin EH, Lee D, Bae JH et al. Improved accuracy in optical diagnosis of colorectal polyps using convolutional neural networks with visual explanations. Gastroenterology 2020;158:2169–2179.e8. https://doi.org/10.1053/j.gastro.2020.02.036.
32. Cao C, Wang R, Yu Y et al. Gastric polyp detection in gastroscopic images using deep neural network. PLoS One 2021;16:e0250632. https://doi.org/10.1371/journal.pone.0250632.
33. Cho B-J, Choi YJ, Lee M-J et al. Classification of cervical neoplasms on colposcopic photography using deep learning. Sci Rep 2020;10:13652. https://doi.org/10.1038/s41598-020-70490-4.
34. Shinozuka K, Turuda S, Fujinaga A et al. Artificial intelligence software available for medical devices: surgical phase recognition in laparoscopic cholecystectomy. Surg Endosc 2022;36:7444–52. https://doi.org/10.1007/s00464-022-09160-7.
35. Madani A, Namazi B, Altieri MS et al. Artificial intelligence for intraoperative guidance: using semantic segmentation to identify surgical anatomy during laparoscopic cholecystectomy. Ann Surg 2022;276:363–9. https://doi.org/10.1097/SLA.0000000000004594.
36. Yamashita R, Nishio M, Do RKG et al. Convolutional neural networks: an overview and application in radiology. Insights Imaging 2018;9:611–29. https://doi.org/10.1007/s13244-018-0639-9.
37. Anwar SM, Majid M, Qayyum A et al. Medical image analysis using convolutional neural networks: a review. J Med Syst 2018;42:226. https://doi.org/10.1007/s10916-018-1088-1.
38. Topol Digital Fellowships. The Topol Review. NHS Health Education England. https://topol.hee.nhs.uk/digital-fellowships/ (3 September 2023, date last accessed).
39. Topol E. The Topol Review — NHS Health Education England. UK Secretary of State for Health 2019. https://topol.hee.nhs.uk/ (26 June 2022, date last accessed).
40. Kiranyaz S, Ince T, Gabbouj M. Real-time patient-specific ECG classification by 1-D convolutional neural networks. IEEE Trans Biomed Eng 2016;63:664–75. https://doi.org/10.1109/TBME.2015.2468589.
41. Mingxun Z, Zhigang M, Jingyi W. Drug response prediction based on 1D convolutional neural network and attention mechanism. Comput Math Methods Med 2022;2022:8671348. https://doi.org/10.1155/2022/8671348.
42. Kiranyaz S, Avci O, Abdeljaber O et al. 1D convolutional neural networks and applications: a survey. Mech Syst Signal Process 2021;151:107398. https://doi.org/10.1016/j.ymssp.2020.107398.
43. Zhao W, Kang Q, Qian F et al. Convolutional neural network-based computer-assisted diagnosis of Hashimoto's thyroiditis on ultrasound. J Clin Endocrinol Metab 2021;107:953–63. https://doi.org/10.1210/clinem/dgab870.
44. Breve FA. COVID-19 detection on chest X-ray images: a comparison of CNN architectures and ensembles. Expert Syst Appl 2022;204:117549. https://doi.org/10.1016/j.eswa.2022.117549.
45. Ragab DA, Sharkas M, Marshall S et al. Breast cancer detection using deep convolutional neural networks and support vector machines. PeerJ 2019;7:e6201. https://doi.org/10.7717/peerj.6201.
46. Park JJ, Kim KA, Nam Y et al. Convolutional-neural-network-based diagnosis of appendicitis via CT scans in patients with acute abdominal pain presenting in the emergency department. Sci Rep 2020;10:9556. https://doi.org/10.1038/s41598-020-66674-7.
47. Hermsen M, Ciompi F, Adefidipe A et al. Convolutional neural networks for the evaluation of chronic and inflammatory lesions in kidney transplant biopsies. Am J Pathol 2022;192:1418–32. https://doi.org/10.1016/j.ajpath.2022.06.009.
48. Han SS, Moon IJ, Lim W et al. Keratinocytic skin cancer detection on the face using region-based convolutional neural network. JAMA Dermatol 2020;156:29–37. https://doi.org/10.1001/jamadermatol.2019.3807.
49. Barberio M, Collins T, Bencteux V et al. Deep learning analysis of in vivo hyperspectral images for automated intraoperative nerve detection. Diagn Basel Switz 2021;11:1508. https://doi.org/10.3390/diagnostics11081508.
50. Kernel (image processing). Wikipedia. 2023. https://en.wikipedia.org/w/index.php?title=Kernel_(image_processing)&oldid=1133314414 (14 June 2023, date last accessed).
