Creating Artificial Images for Radiology Applications Using Generative Adversarial Networks (GANs) – A Systematic Review
Rationale and Objectives: Generative adversarial networks (GANs) are deep learning models aimed at generating realistic-looking fake images. These novel models have made a great impact on the computer vision field. Our study aims to review the literature on GAN applications in radiology.

Materials and Methods: This systematic review followed the PRISMA guidelines. Electronic databases were searched for studies describing applications of GANs in radiology. We included studies published up to September 2019.

Results: Data were extracted from 33 studies published between 2017 and 2019. Eighteen studies focused on CT image generation, ten on MRI, three on PET/MRI and PET/CT, one on ultrasound, and one on X-ray. Applications in radiology included image reconstruction and denoising for dose and scan-time reduction (fourteen studies), data augmentation (six studies), transfer between modalities (eight studies), and image segmentation (five studies). All studies reported that generated images improved the performance of the developed algorithms.

Conclusion: GANs are increasingly studied for various radiology applications. They enable the creation of new data, which can be used to improve clinical care, education and research.

Key Words: Generative adversarial networks; GANs; Deep Learning; Artificial Intelligence; Machine Learning.

© 2020 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
In recent years deep learning has gained rapid success in computer vision. Deep learning technology dates back to 1980, when Fukushima was the first to use a convolutional neural network (CNN) (1). Yet not until 2012 did deep learning gain momentum in academia and industry. A decisive moment for deep learning was the 2012 ImageNet challenge, in which a CNN model (AlexNet) showed impressive abilities for image classification (2). Since then the technology has continued to improve, and it is applied to medical imaging in general and radiology in particular (3–5).

Generative adversarial networks (GANs) are a more recent deep learning development, invented by Ian Goodfellow and colleagues (6). GANs have received a great deal of public attention due to "deepfake" digital media manipulations. This technique uses GANs to generate artificial images of humans (7). As an example, this webpage (8) uses a GAN to create random fake pictures of non-existent people.

GANs are composed of two networks that are trained simultaneously: a generator and a discriminator (Fig 1). Using input data, which can be random noise, an image, or text, the generator creates fake images. These fake images are expected to resemble the desired real images. The generated fake images are then fed to the discriminator along with real images. The discriminator aims to distinguish between real and fake; it does so by training on fake and real images and learning the features of real images. For each image, the discriminator outputs a probability that the image is real. The generator then adapts its parameters to improve the generated images (6). Intuitively, the generator and discriminator compete in a min-max game, in which the generator tries to "fool" the discriminator with fake images. The networks of both the generator and the discriminator improve as the game develops.
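To make this adversarial training loop concrete, here is a minimal, illustrative PyTorch sketch of one training step; the toy network sizes, learning rates, and flattened-image representation are assumptions for illustration and are not drawn from any of the reviewed studies:

```python
# Minimal GAN training sketch (illustrative only; all hyperparameters are arbitrary).
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # assumed toy sizes

# Generator: maps random noise to a fake image.
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)
# Discriminator: outputs the probability that an image is real.
D = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_images):
    batch = real_images.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # Discriminator step: learn to label real images 1 and fakes 0.
    fakes = G(torch.randn(batch, latent_dim)).detach()  # detach: G is frozen here
    loss_d = bce(D(real_images), ones) + bce(D(fakes), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: try to "fool" D into labelling fakes as real.
    loss_g = bce(D(G(torch.randn(batch, latent_dim))), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```

Alternating these two updates is the min-max game described above: the discriminator minimizes its classification error while the generator tries to maximize it.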
Since their introduction, GANs have had a great impact on research in radiology (9). This recent, innovative technology has the potential to be applied to a variety of radiology tasks. These tasks include generation of fake images to enlarge datasets for training deep learning algorithms, translation of one image type to another, and improvement of the quality of existing images (10). The radiology community can benefit from becoming acquainted with this technology. Our study aims to review the literature on GAN applications in radiology.

MATERIALS AND METHODS

This systematic review followed the PRISMA guidelines. Electronic databases were searched for studies describing applications of GANs in radiology. We searched MEDLINE and Google Scholar for studies published up to September 2019. The following keywords were used: generative adversarial network, generative model, adversarial network, GAN, radiology, medical imaging.
Figure 2. Flow diagram of the search and inclusion process in the study. The study followed the Preferred Reporting Items for Systematic
Reviews and Meta-Analyses (PRISMA) guidelines.
Figure 3. Bar graph showing the trend of publications describing generative adversarial networks (GANs) applications in radiology over time (number of publications per quarter).
TABLE 1. Publications Reporting on GANs Use for Image Denoising and Reconstruction in Radiology

Study (Ref) | Year | Anatomy | Modality | Purpose | Reported Results
Kang et al. (13) | 2018 | Cardiac | CT | Reduce noise in cardiac CTA | SNR before denoising ranged between 2.9 ± 1.7 and 11.1 ± 6.5; SNR after denoising 3.4 ± 1.3 to 12.3 ± 6.2 (depending on organ)
Kim et al. (24) | 2018 | Brain | MRI | Generate HR MR images from down-sampled MR images, using HR MR images in another contrast | NMSE 0.009 (random sampling); 0.007 (central sampling)
Quan et al. (23) | 2018 | Brain + Chest | MRI | Compressed sensing MRI reconstruction | N/A
Wang et al. (18) | 2018 | Head and neck | CT | Reduction of metal artifacts in CT ear images of cochlear implant recipients | GAN methods decreased P2PE from 0.366 to 0.173
Wang et al. (21) | 2018 | Brain | PET/MRI | Generate high-quality full-dose images from low-dose PET neuroimages | Compared to a network without a discriminator, mean PSNR increased by 0.28 for normal subjects and by 0.43 for MCI subjects; mean NMSE decreased from 0.024 to 0.023 for normal studies and by 0.0017 for MCI patients
Wolterink et al. (12) | 2017 | Cardiac | CT | Reduce noise in low-dose CT images for coronary calcium quantification | N/A
Ouyang et al. (22) | 2019 | Brain | PET/MRI | Synthesize diagnostic-quality, standard-dose-like images from ultra-low-dose PET | Compared to low-dose PET, the GAN method increased mean PSNR by 4.14 dB and mean SSIM by 7.63%; mean RMSE decreased by 33.55%
Yang et al. (14) | 2018 | Abdomen | CT | Denoising of abdominal low-dose CT to reach the quality of normal-dose CT | N/A
Yang et al. (25) | 2018 | Brain + Cardiac | MRI | Compressed sensing MRI reconstruction | NMSE 0.04 ± 0.01, PSNR 47.83 ± 4.10

CBCT, cone-beam computed tomography; CT, computed tomography; CTA, computed tomography angiography; GAN, generative adversarial network; HR, high resolution; MAE, mean absolute error (Hounsfield units); MRI, magnetic resonance imaging; MSK, musculoskeletal; N/A, not available; NCC, normalized cross-correlation; NMSE, normalized mean square error; PET, positron emission tomography; P2PE, point-to-point error (mm); PSNR, peak signal-to-noise ratio (decibels); Ref, reference; SNR, signal-to-noise ratio (decibels); SNU, spatial nonuniformity; SSIM, structural similarity.
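Most of the studies in Table 1 quantify image fidelity with metrics such as PSNR, SSIM, and NMSE. As a rough, illustrative sketch of how these metrics can be computed (assuming NumPy and scikit-image; the reviewed studies may define normalization differently), one might write:

```python
# Illustrative computation of common image-fidelity metrics.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def nmse(reference: np.ndarray, estimate: np.ndarray) -> float:
    """Normalized mean squared error relative to the reference image."""
    return float(np.sum((reference - estimate) ** 2) / np.sum(reference ** 2))

# Stand-ins for a routine-dose image and a GAN reconstruction of it.
reference = np.random.rand(256, 256)
estimate = np.clip(reference + 0.05 * np.random.randn(256, 256), 0.0, 1.0)

psnr = peak_signal_noise_ratio(reference, estimate, data_range=1.0)  # in dB
ssim = structural_similarity(reference, estimate, data_range=1.0)    # 1.0 = identical
print(f"PSNR {psnr:.2f} dB, SSIM {ssim:.3f}, NMSE {nmse(reference, estimate):.4f}")
```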
TABLE 2. Publications Reporting on GANs Use for Data Augmentation in Radiology

Study (Ref) | Year | Anatomy | Modality | Purpose
Chuquicusma et al. (26) | 2018 | Chest | CT | Generate realistic lung nodules, assessed by radiologists in a visual Turing test
Kazuhiro et al. (27) | 2018 | Brain | MRI | Create realistic artificial brain MR images
Frid-Adar et al. (28) | 2018 | Abdomen | CT | Synthesize liver lesions to augment CNN training for liver lesion classification
Onishi et al. (29) | 2019 | Chest | CT | Pulmonary nodule classification using a CNN trained with GAN-generated images
Gadermayr et al. (30) | 2019 | MSK | MRI | Domain-specific augmentation for segmentation of fatty infiltrated human thighs on MR images

CT, computed tomography; DSC, dice similarity coefficient; GAN, generative adversarial network; MRI, magnetic resonance imaging; MSK, musculoskeletal; N/A, not available; Ref, reference.
TABLE 3. Publications Reporting on GANs Use for Image Transfer Between Modalities in Radiology

Study (Ref) | Year | Anatomy | Transfer | Purpose | Reported Results
Ben-Cohen et al. (32) | 2019 | Abdomen | CT to PET | Generate PET from CT images for deep learning algorithm training, for automatic liver lesion detection | Mean MAE 0.72 ± 0.35, mean PSNR 30.22 ± 2.42
Choi et al. (33) | 2018 | Brain | PET to MRI | Generate MR images from PET images, for quantification of cortical amyloid load | SSIM 0.91 ± 0.04
Dar et al. (41) | 2019 | Brain | MRI (multi-contrast) | Multi-contrast MRI synthesis | T1→T2: SSIM 0.946 ± 0.009, PSNR 27.19 ± 1.456; T2→T1: SSIM 0.940 ± 0.009, PSNR 25.80 ± 1.867*
Emami et al. (35) | 2018 | Brain | MRI to CT | Generate CT from MR images for radiotherapy planning in the brain | Mean PSNR 26.6 ± 1.2, mean SSIM 0.83 ± 0.03
Jin et al. (37) | 2019 | Brain | CT to MRI | Generate MR from CT images for radiotherapy planning in the brain | Mean MAE 19.36 ± 2.73, mean PSNR 65.35 ± 0.86, mean SSIM 0.25 ± 0.02
Lei et al. (36) | 2019 | Brain + Pelvis | MRI to CT | Generate CT from MR images for radiotherapy planning | Brain images: mean MAE 55.7, mean PSNR 26.6; pelvis images: mean MAE 50.8, mean PSNR 24.5
Vitale et al. (40) | 2019 | Abdomen | CT to US | Generate US from CT for an abdominal US simulation tool | N/A
Jiang et al. (38) | 2019 | Chest | CT to MRI | Generate MR from CT images for training augmentation of a deep learning segmentation tool, for lung tumor radiotherapy planning | DSC 0.75 ± 0.12, Hausdorff distance 0.36 ± 6 mm

CT, computed tomography; DSC, dice similarity coefficient; GAN, generative adversarial network; MAE, mean absolute error (Hounsfield units); MRI, magnetic resonance imaging; N/A, not available; PET, positron emission tomography; PSNR, peak signal-to-noise ratio (decibels); Ref, reference; SSIM, structural similarity; US, ultrasound.
* Performance characteristics for one out of several datasets used in the study.
TABLE 4. Publications Reporting on GANs Use for Image Segmentation in Radiology

Study (Ref) | Year | Anatomy | Modality | Purpose | Reported Results
Huo et al. (45) | 2018 | Abdomen | MRI | Automatic spleen segmentation for volume estimation | Mean DSC 0.9260
Liu et al. (46) | 2019 | Abdomen | CT | Colorectal tumor segmentation | Mean DSC 0.9154
Xue et al. (44) | 2018 | Brain | MRI | Brain tumor segmentation | Mean DSC 0.85
Seah et al. (48) | 2018 | Chest | XR | Mark and remove heart failure features from chest X-rays | AUC of 0.82 for CHF identification
Dong et al. (47) | 2019 | Chest | CT | Segmentation of healthy organs for radiotherapy treatment planning | Mean DSC 0.87–0.97, depending on organ

AUC, area under the receiver operating characteristic curve; CHF, congestive heart failure; CT, computed tomography; DSC, dice similarity coefficient; GAN, generative adversarial network; MRI, magnetic resonance imaging; Ref, reference; XR, X-ray.
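The segmentation studies in Table 4 report overlap chiefly as the Dice similarity coefficient (DSC). A minimal, self-contained sketch of the computation (binary masks assumed; not taken from any reviewed study):

```python
# Dice similarity coefficient for binary segmentation masks (illustrative).
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy example: two overlapping square "organs" on a 100 x 100 grid.
a = np.zeros((100, 100)); a[20:60, 20:60] = 1
b = np.zeros((100, 100)); b[30:70, 30:70] = 1
print(f"DSC = {dice_coefficient(a, b):.3f}")
```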
A recent study by Lei et al. developed a GAN model for the same purpose (36); their model was used for both brain and pelvis images. Jin et al. used a GAN to transform brain CT images to MR images, also for radiotherapy planning (37), and showed that their GAN network outperformed other methods for generating MR images from CT (37). Finally, Jiang et al. developed a GAN that transformed CT to MR images (38); their purpose was to augment the training of a deep learning segmentation network for radiotherapy planning of lung tumors.

US is an operator-dependent modality, which is why efforts have been made to develop US simulation tools for training (39). In their work, Vitale et al. used a GAN to generate abdominal US from CT images, improving the quality and realism of a US simulation (40).
Figure 4. Examples of denoised cardiac CT images generated using GAN. The left column shows low-dose CT images; the middle column shows the same low-dose CT images denoised using GAN; the right column shows the routine-dose CT images. Due to cardiac motion the images differ in heart shape and image intensity. The arrows indicate the distinctly different region between the low-dose and the routine-dose images. Reprinted with permission from Kang et al. Cycle-consistent adversarial denoising network for multiphase coronary CT angiography. Medical Physics. 2019 (13). GAN, generative adversarial network.
Finally, in 2019 Dar et al. published a study focusing on multi-contrast MRI synthesis using a conditional GAN (41). In a conditional GAN, an additional condition or parameter is supplied to both the generator and the discriminator: the generator creates images while taking this condition into account, and the discriminator evaluates images based on their correspondence to the input condition. Dar et al. reported that the conditional GAN outperformed alternative methods (Replica and Multimodal) in PSNR and SSIM. Successful multi-contrast MRI synthesis can shorten scanning time and improve image quality (41).
Image Segmentation

There were five studies identified in this category (Table 4). Image segmentation is the delineation of boundaries in images, for example the delineation of organs or tumors. Manual segmentation is subjective and time-consuming, so automatic segmentation is desirable and a useful tool for research. Since the introduction of deep learning in radiology, automatic segmentation has improved significantly (42,43). Xue et al. introduced a GAN-based segmentation model (44), tested it for segmentation of brain tumors on MRI, and reported superiority over other segmentation methods, including CNN models. Huo et al. trained a GAN for spleen segmentation and splenomegaly assessment on MRI (45). In a later work, Liu et al. applied a GAN to segmentation of colorectal tumors on CT (46). Finally, Dong et al. used a GAN to segment healthy organs on chest CT images for radiotherapy treatment planning (47).

In a different study, published in 2018 in Radiology, Seah et al. trained a GAN-based neural network to create generative visual rationales (GVRs) (48). Their network is aimed at marking and removing heart failure features from chest X-rays (48). GVRs can identify the image features that neural networks learn during image analysis; biased features can then be removed to improve the network. Furthermore, potentially new and unknown features of disease can be uncovered this way (48).

DISCUSSION

This review shows that research into generative adversarial network (GAN) applications in different radiology tasks is increasing. The major benefit of GANs is the ability to increase data quantity and quality at a low cost. In the field of reconstruction, studies have shown GANs can reduce CT radiation exposure and MRI acquisition time. This may influence screening and follow-up protocols, radiologist workload and clinical care. GANs were also shown to be useful for image segmentation, cross-modality transfer and data augmentation.
Figure 5. Fake liver lesions generated with GAN. (a) Cystic lesions; (b) malignant lesions; (c) hemangiomas. Reprinted with permission from Frid-Adar et al. GAN-based synthetic medical image augmentation for increased CNN performance in liver lesion classification. Neurocomputing. 2018 (28).
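Augmentation pipelines like the one behind Figure 5 typically mix GAN-generated cases into a small real training set before training a classifier. The following is a schematic sketch only; `generator`, the tensor shapes, and the placeholder labels are assumptions, not details from Frid-Adar et al.:

```python
# Schematic of GAN-based augmentation: pad a small real training set
# with generated images before training a classifier. Illustrative only.
import torch
from torch.utils.data import TensorDataset, DataLoader

def augmented_loader(real_images, real_labels, generator,
                     n_synthetic, latent_dim=64):
    """Concatenate real cases with GAN-generated ones. Assumes the
    generator output matches real_images in shape, and that synthetic
    labels are known (e.g., from a class-conditional generator)."""
    with torch.no_grad():
        fake_images = generator(torch.randn(n_synthetic, latent_dim))
    fake_labels = torch.zeros(n_synthetic, dtype=real_labels.dtype)  # placeholder class
    images = torch.cat([real_images, fake_images])
    labels = torch.cat([real_labels, fake_labels])
    return DataLoader(TensorDataset(images, labels), batch_size=32, shuffle=True)
```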
Deep learning can improve diagnostic imaging tasks in radiology, enabling segmentation of images, improvement of image quality, classification of images, detection of findings, and prioritization of examinations according to urgent diagnoses (3,5,49). Successful training of deep learning algorithms requires large-scale datasets. However, the difficulty of obtaining sufficient data limits the development and implementation of deep learning algorithms in radiology. GANs can help to overcome this obstacle. As demonstrated in this review, several studies have successfully trained deep learning algorithms using augmented data generated by GANs. Data augmentation with generated images significantly improved the performance of CNN algorithms. Furthermore, using GANs can reduce the amount of clinical data needed for training. The increasing research focus on GANs can therefore advance successful automatic image analysis in radiology.

GANs have evolved considerably since their introduction. They are widely researched but have yet to be applied in clinical practice. There are several possible reasons why this technology remains at a proof-of-concept stage. One is the difficulty of successfully developing and training GANs. For example, a disruption of the balance between the generator and the discriminator can fail the training process. In most cases the discriminator is the one that becomes more powerful; when this happens, the discriminator can easily identify the generated images, eventually providing no useful signal to the generator, and the generated images cease to improve. There is also the possibility of complete or partial mode collapse, in which the generator synthesizes a limited diversity of images, or even the same image, regardless of its input. In radiology this can cause the generation of wrong artificial features.
Figure 6. Examples of fake PET images generated from CT images using GAN. The top row shows CT images; the middle row shows real PET images; the bottom row shows synthetic PET images generated from the CT images above them using GAN. Reprinted with permission from Ben-Cohen et al. Cross-modality synthesis from CT to PET using FCN and GAN networks for improved automated lesion detection. Engineering Applications of Artificial Intelligence, 2019 (32). GAN, generative adversarial network.
The networks can also be biased when certain findings are under- or over-represented. Finally, none of the studies included in this review met all four fundamental criteria for the clinical assessment of algorithms described by Kim et al.: external validation, prospective data collection, data obtained in a diagnostic cohort study, and data from multiple institutions (50). Therefore, the published studies ultimately assess technical feasibility, but not the practical clinical performance of GANs in radiology.

GANs in medical imaging have shown potential. Yet it is still too early to determine whether GANs will have a significant impact on computer-assisted radiology applications. The reviewed articles are new in each subfield of medical imaging and use small datasets. More time is needed for follow-up studies with larger sample sizes. There is also difficulty in effectively assessing and comparing GAN performance, due to the variability in image quality assessment measures between studies. Some studies use objective numerical metrics, but these vary between studies; examples include mean absolute error (MAE), peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and the inception score (IS). Due to this lack of standardization, some studies use subjective physician evaluations of image quality. Others assess downstream tasks, for example whether use of the generated images improves algorithm training.

Some risks are involved with the development of GANs. In a recent publication, Mirsky et al. warn against hacking of imaging examinations, artificially adding or removing medical conditions from patient scans (51). Also, using generated images in clinical practice should be done with caution, as the algorithms are not without limitations. For example, in image reconstruction, details can be lost in translation, while fake, nonexistent details can suddenly appear.

This review has several limitations. First, due to the large variability in metrics for image quality, and the subjective assessments used in some studies, we could not perform a meta-analysis. Second, due to the large volume of studies assessing GANs in radiology, we chose to include only studies published in peer-reviewed journals with an assigned impact factor. Third, the exact number of patient cases was unclear in a large proportion of the papers, so we could not report the number of patient cases; we suggest that future studies report the number of patient cases used. Finally, we have only covered applications of GANs in radiology; GANs have also been applied in other medical fields, including ophthalmology, pathology and dermatology.

In conclusion, GANs are increasingly studied for various radiology applications. They enable the creation of new data, which can be used for clinical care, education and research.
REFERENCES

1. Fukushima K. Neocognitron: a self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biol Cybern 1980; 36(4):193–202. doi:10.1007/bf00344251.
2. Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Commun ACM 2017; 60(6):84–90. doi:10.1145/3065386.
3. McBee MP, Awan OA, Colucci AT, et al. Deep learning in radiology. Acad Radiol 2018; 25(11):1472–1480. doi:10.1016/j.acra.2018.02.018.
4. Klang E. Deep learning and medical imaging. J Thorac Dis 2018; 10(3):1325–1328. doi:10.21037/jtd.2018.02.76.
5. Soffer S, Ben-Cohen A, Shimon O, et al. Convolutional neural networks for radiologic images: a radiologist's guide. Radiology 2019; 290(3):590–606. doi:10.1148/radiol.2018180547.
6. Goodfellow I, Pouget-Abadie J, Mirza M, et al. Generative adversarial nets. Adv Neural Inf Process Syst 2014; 27:2672–2680.
7. Roose K. Here come the fake videos, too. The New York Times, 2018. https://www.nytimes.com/2018/03/04/technology/fake-videos-deepfakes.html. Accessed November 2019.
8. Karras T, Laine S, Aila T. https://thispersondoesnotexist.com. Accessed November 2019.
9. Vey BL, Gichoya JW, Prater A, et al. The role of generative adversarial networks in radiation reduction and artifact correction in medical imaging. J Am Coll Radiol 2019; 16(9):1273–1278. doi:10.1016/j.jacr.2019.05.040.
10. Yi X, Walia E, Babyn P. Generative adversarial network in medical imaging: a review. Med Image Anal 2019; 58:101552. doi:10.1016/j.media.2019.101552.
11. Journal Citation Reports. https://jcr.clarivate.com. Accessed December 2019.
12. Wolterink JM, Leiner T, Viergever MA, et al. Generative adversarial networks for noise reduction in low-dose CT. IEEE Trans Med Imaging 2017; 36(12):2536–2545. doi:10.1109/tmi.2017.2708987.
13. Kang E, Koo HJ, Yang DH, et al. Cycle-consistent adversarial denoising network for multiphase coronary CT angiography. Med Phys 2019; 46(2):550–562. doi:10.1002/mp.13284.
14. Yang Q, Yan P, Zhang Y, et al. Low-dose CT image denoising using a generative adversarial network with Wasserstein distance and perceptual loss. IEEE Trans Med Imaging 2018; 37(6):1348–1357. doi:10.1109/tmi.2018.2827462.
15. You C, Li G, Zhang Y, et al. CT super-resolution GAN constrained by the identical, residual, and cycle learning ensemble (GAN-CIRCLE). IEEE Trans Med Imaging 2019:1. doi:10.1109/tmi.2019.2922960.
16. You C, Yang Q, Shan H, et al. Structurally-sensitive multi-scale deep neural network for low-dose CT denoising. IEEE Access 2018; 6:41839–41855. doi:10.1109/access.2018.2858196.
17. Yi X, Babyn P. Sharpness-aware low-dose CT denoising using conditional generative adversarial network. J Digit Imaging 2018; 31:655–669. doi:10.1007/s10278-018-0056-0.
18. Wang J, Zhao Y, Noble JH, et al. Conditional generative adversarial networks for metal artifact reduction in CT images of the ear. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham; 2018; 11070:3–11. doi:10.1007/978-3-030-00928-1_1.
19. Liang X, Chen L, Nguyen D, et al. Generating synthesized computed tomography (CT) from cone-beam computed tomography (CBCT) using CycleGAN for adaptive radiation therapy. Phys Med Biol 2019; 64(12):125002. doi:10.1088/1361-6560/ab22f9.
20. Harms J, Lei Y, Wang T, et al. Paired cycle-GAN-based image correction for quantitative cone-beam computed tomography. Med Phys 2019; 46(9):3998–4009. doi:10.1002/mp.13656.
21. Wang Y, Yu B, Wang L, et al. 3D conditional generative adversarial networks for high-quality PET image estimation at low dose. Neuroimage 2018; 174:550–562. doi:10.1016/j.neuroimage.2018.03.045.
22. Ouyang J, Chen KT, Gong E, et al. Ultra-low-dose PET reconstruction using generative adversarial network with feature matching and task-specific perceptual loss. Med Phys 2019; 46(8):3555–3564. doi:10.1002/mp.13626.
23. Quan TM, Nguyen-Duc T, Jeong W-K. Compressed sensing MRI reconstruction using a generative adversarial network with a cyclic loss. IEEE Trans Med Imaging 2018; 37(6):1488–1497. doi:10.1109/tmi.2018.2820120.
24. Kim KH, Do WJ, Park SH. Improving resolution of MR images with an adversarial network incorporating images with different contrast. Med Phys 2018; 45(7):3120–3131. doi:10.1002/mp.12945.
25. Yang G, Yu S, Dong H, et al. DAGAN: deep de-aliasing generative adversarial networks for fast compressed sensing MRI reconstruction. IEEE Trans Med Imaging 2018; 37(6):1310–1321. doi:10.1109/tmi.2017.2785879.
26. Chuquicusma MJM, Hussein S, Burt J, et al. How to fool radiologists with generative adversarial networks? A visual Turing test for lung cancer diagnosis. In: IEEE 15th International Symposium on Biomedical Imaging (ISBI); 2018:240–244. doi:10.1109/isbi.2018.8363564.
27. Kazuhiro K, Werner RA, Toriumi F, Javadi MS, et al. Generative adversarial networks for the creation of realistic artificial brain magnetic resonance images. Tomography 2018; 4(4):159–163. doi:10.18383/j.tom.2018.00042.
28. Frid-Adar M, Diamant I, Klang E, et al. GAN-based synthetic medical image augmentation for increased CNN performance in liver lesion classification. Neurocomputing 2018; 321:321–331. doi:10.1016/j.neucom.2018.09.013.
29. Onishi Y, Teramoto A, Tsujimoto M, et al. Automated pulmonary nodule classification in computed tomography images using a deep convolutional neural network trained by generative adversarial networks. Biomed Res Int 2019; 2019:1–9. doi:10.1155/2019/6051939.
30. Gadermayr M, Li K, Müller M, et al. Domain specific data augmentation for segmenting MR images of fatty infiltrated human thighs with neural networks. J Magn Reson Imaging 2019; 49(6):1676–1683. doi:10.1002/jmri.26544.
31. Russ T, Goerttler S, Schnurr A-K, et al. Synthesis of CT images from digital body phantoms using CycleGAN. Int J Comput Assist Radiol Surg 2019; 14(10):1741–1750. doi:10.1007/s11548-019-02042-9.
32. Ben-Cohen A, Klang E, Raskin SP, et al. Cross-modality synthesis from CT to PET using FCN and GAN networks for improved automated lesion detection. Eng Appl Artif Intell 2019; 78:186–194. doi:10.1016/j.engappai.2018.11.013.
33. Choi H, Lee DS. Generation of structural MR images from amyloid PET: application to MR-less quantification. J Nucl Med 2018; 59(7):1111–1117. doi:10.2967/jnumed.117.199414.
34. Edmund JM, Nyholm T. A review of substitute CT generation for MRI-only radiation therapy. Radiat Oncol 2017; 12(1). doi:10.1186/s13014-016-0747-y.
35. Emami H, Dong M, Nejad-Davarani SP, et al. Generating synthetic CTs from magnetic resonance images using generative adversarial networks. Med Phys 2018; 45(8):3627–3636. doi:10.1002/mp.13047.
36. Lei Y, Harms J, Wang T, et al. MRI-only based synthetic CT generation using dense cycle consistent generative adversarial networks. Med Phys 2019; 46:3565–3581. doi:10.1002/mp.13617.
37. Jin C-B, Kim H, Liu M, et al. Deep CT to MR synthesis using paired and unpaired data. Sensors 2019; 19(10):2361. doi:10.3390/s19102361.
38. Jiang J, Hu YC, Tyagi N, et al. Cross-modality (CT-MRI) prior augmented deep learning for robust lung tumor segmentation from small MR datasets. Med Phys 2019; 46:4392–4404. doi:10.1002/mp.13695.
39. Kutter O, Shams R, Navab N. Visualization and GPU-accelerated simulation of medical ultrasound from CT images. Comput Methods Programs Biomed 2009; 94(3):250–266. doi:10.1016/j.cmpb.2008.12.011.
40. Vitale S, Orlando JI, Iarussi E, et al. Improving realism in patient-specific abdominal ultrasound simulation using CycleGANs. Int J Comput Assist Radiol Surg 2019. doi:10.1007/s11548-019-02046-5.
41. Dar SUH, Yurt M, Karacan L, et al. Image synthesis in multi-contrast MRI with conditional generative adversarial networks. IEEE Trans Med Imaging 2019; 38(10):2375–2388. doi:10.1109/tmi.2019.2901750.
42. Havaei M, Davy A, Warde-Farley D, et al. Brain tumor segmentation with deep neural networks. Med Image Anal 2017; 35:18–31. doi:10.1016/j.media.2016.05.004.
43. Pereira S, Pinto A, Alves V, et al. Brain tumor segmentation using convolutional neural networks in MRI images. IEEE Trans Med Imaging 2016; 35(5):1240–1251. doi:10.1109/tmi.2016.2538465.
44. Xue Y, Xu T, Zhang H, et al. SegAN: adversarial network with multi-scale L1 loss for medical image segmentation. Neuroinformatics 2018; 16(3-4):383–392. doi:10.1007/s12021-018-9377-x.
45. Huo Y, Xu Z, Bao S, Bermudez C, et al. Splenomegaly segmentation using global convolutional kernels and conditional generative adversarial networks. In: Medical Imaging 2018: Image Processing. Vol. 10574. International Society for Optics and Photonics. doi:10.1117/12.2293406.
46. Liu X, Guo S, Zhang H, et al. Accurate colorectal tumor segmentation for CT scans based on the label assignment generative adversarial network. Med Phys 2019; 46:3532–3542. doi:10.1002/mp.13584.
47. Dong X, Lei Y, Wang T, et al. Automatic multiorgan segmentation in thorax CT images using U-net-GAN. Med Phys 2019; 46(5):2157–2168. doi:10.1002/mp.13458.
48. Seah JCY, Tang JSN, Kitchen A, et al. Chest radiographs in congestive heart failure: visualizing neural network learning. Radiology 2019; 290(2):514–522. doi:10.1148/radiol.2018180887.
49. Ginat DT. Analysis of head CT scans flagged by deep learning software for acute intracranial hemorrhage. Neuroradiology 2019. doi:10.1007/s00234-019-02330-w.
50. Kim DW, Jang HY, Kim KW, et al. Design characteristics of studies reporting the performance of artificial intelligence algorithms for diagnostic analysis of medical images: results from recently published papers. Korean J Radiol 2019; 20(3):405. doi:10.3348/kjr.2019.0025.
51. Mirsky Y, Mahler T, Shelef I, et al. CT-GAN: malicious tampering of 3D medical imagery using deep learning. arXiv preprint arXiv:1901.03597; 2019.