
IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, VOL. 24, NO. 12, DECEMBER 2020

Automatic Segmentation and Visualization of Choroid in OCT with Knowledge Infused Deep Learning

Huihong Zhang, Jianlong Yang, Kang Zhou, Fei Li, Yan Hu, Yitian Zhao, Ce Zheng, Xiulan Zhang, and Jiang Liu

Abstract—The choroid provides oxygen and nourishment to the outer retina and is thus related to the pathology of various ocular diseases. Optical coherence tomography (OCT) is advantageous in visualizing and quantifying the choroid in vivo. However, its application in the study of the choroid is still limited for two reasons. (1) The lower boundary of the choroid (choroid-sclera interface) in OCT is fuzzy, which makes automatic segmentation difficult and inaccurate. (2) The visualization of the choroid is hindered by the vessel shadows from the superficial layers of the inner retina. In this paper, we propose to incorporate medical and imaging prior knowledge with deep learning to address these two problems. We propose a biomarker-infused global-to-local network (Bio-Net) for the choroid segmentation, which not only regularizes the segmentation via predicted choroid thickness, but also leverages a global-to-local segmentation strategy to provide global structure information and suppress overfitting. For eliminating the retinal vessel shadows, we propose a deep-learning pipeline, which first locates the shadows using their projection on the retinal pigment epithelium layer; the contents of the choroidal vasculature at the shadow locations are then predicted with an edge-to-texture generative adversarial inpainting network. The results show our method outperforms the existing methods on both tasks. We further apply the proposed method in a clinical prospective study for understanding the pathology of glaucoma, which demonstrates its capacity in detecting the structural and vascular changes of the choroid related to the elevation of intra-ocular pressure.

Index Terms—Choroid, glaucoma, optical coherence tomography, vasculature.

Manuscript received January 7, 2020; revised June 25, 2020 and September 7, 2020; accepted September 7, 2020. Date of publication September 10, 2020; date of current version December 4, 2020. This work was supported by the Ningbo 2025 S&T Megaprojects (2019B10033), the Zhejiang Provincial Natural Science Foundation (LQ19H180001), and the Ningbo Public Welfare Science and Technology Project (2018C50049). (Corresponding author: Jianlong Yang.)

Jianlong Yang and Yitian Zhao are with the Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo 315201, China (e-mail: [email protected]; [email protected]).

Fei Li and Xiulan Zhang are with the State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, China (e-mail: [email protected]; [email protected]).

Kang Zhou is with the School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China (e-mail: [email protected]).

Ce Zheng is with the Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China (e-mail: [email protected]).

Yan Hu and Jiang Liu are with the Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China (e-mail: [email protected]; [email protected]).

Huihong Zhang is with the Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China, and also with the University of Chinese Academy of Sciences, China (e-mail: [email protected]).

Digital Object Identifier 10.1109/JBHI.2020.3023144

I. INTRODUCTION

The choroid, lying between the retina and the sclera, is the vascular layer which provides oxygen and nourishment to the outer retina [1]. Because traditional imaging modalities like fundus photography and scanning laser ophthalmoscopy acquire 2D overlapping information of the retina and the choroid, the pathological changes of the choroid could not be precisely retrieved and evaluated. On the other hand, ocular ultrasound is able to do 3D imaging, but it needs to touch the eye and has a low spatial resolution.

Optical coherence tomography (OCT) is a high-resolution, non-invasive 3D imaging modality that can precisely separate the information of the underlying choroid from the inner retina, and it has thus become a powerful tool to understand the role of the choroid in various ocular diseases [2]. It has been shown that the thickness of the choroid layer extracted from OCT is directly related to the incidence and severity of predominant ocular diseases, such as pathological myopia [3], diabetic retinopathy (DR) [4], age-related macular degeneration (AMD) [5], and glaucoma [6].

Besides, the choroidal vasculature has also been applied in the study and diagnosis of ocular diseases. Agrawal et al. found that the choroidal vascularity index, which is extracted from binarized OCT B-scans, is related to the vascular status of DR [7]. The choroidal vessel density (CVD), extracted from binarized en face choroid images, has been used in the evaluation of AMD [8] and central serous chorioretinopathy [9]. Wang et al. further introduced the choroidal vascular volume, which combines the CVD and the choroidal thickness and is more sensitive in detecting proliferative DR [10].
Fig. 1. Demonstration of the choroid-sclera interface (orange dashed line) and the retinal vessels and their projection (arrows) on the layers including the GCL (green box), RPE (pink box) and choroidal vasculature (orange box). GCL: ganglion cell layer. RPE: retinal pigment epithelium.

However, the application of the choroidal biomarkers in the clinic is still quite limited, which may be attributed to two primary reasons. (1) The lower boundary of the choroid (choroid-sclera interface, CSI) in OCT is fuzzy, which makes the automatic segmentation difficult and inaccurate. (2) The visualization of the choroid is contaminated by the vessel shadows from the superficial layers of the inner retina.

Fig. 1 is a demonstration of the CSI and the retinal vessels and their projection on the RPE and choroid layers. The area above the orange dashed line shows the fuzzy CSI in a B-scan. The anisotropy of the red blood cells inside the vessels causes strong forward attenuation of the probe light, thus bringing shadow-like dark tails to the underlying layers extending to the choroid and the sclera (white arrow). The center part of Fig. 1 is a segmented OCT volume, which can further be used to generate the en face images of each layer on the right side. The ganglion cell layer (GCL) possesses the retinal vessels (black arrows) and has high light reflectance (green box). The depth-projected vessel shadows (black arrows) turn dark on the vessel-absent retinal pigment epithelium (RPE) layer (pink box) and the choroid layer (orange box). It is evident that the shadows bring difficulties to the extraction of the choroidal vasculature.

Due to their clinical significance, the automatic segmentation and visualization of the choroid have drawn considerable research interest recently [11]–[17]. However, the majority of the choroid segmentation methods are based on graph search [18], which is restricted by the choice of a suitable graph-edge weight model [15]. An inferior choice of the edge weight or a variation of OCT image features would cause inaccuracy in the choroid segmentation [19], so tedious manual inspection and correction are still required for clinical usage [20]. The existing methods for eliminating the vessel shadows are based on the compensation of vessel-induced light attenuation [21], [22], but the effectiveness of this kind of A-line based method is limited to small vessels and capillaries in OCT retinal imaging. The large vessel shadows still leave residue on the choroid [23].

To address these two problems, and inspired by the recent success of deep learning in medical image processing [24]–[28], we propose an automatic segmentation and visualization method for the choroid in OCT via knowledge infused deep learning. The main contributions of our work include:

• We propose a biomarker-infused global-to-local network (Bio-Net) for the choroid segmentation, which not only regularizes the segmentation via predicted choroid thickness, but also leverages a global-to-local segmentation strategy to provide global structure information and suppress overfitting. Their effectiveness has been validated via a comprehensive ablation study in Section V.
• For eliminating the retinal vessel shadows, we propose a deep-learning pipeline, which first locates the shadows using their projection on the retinal pigment epithelium layer; the contents of the choroidal vasculature at the shadow locations are then predicted with an edge-to-texture generative adversarial inpainting network.
• The experiments show that the proposed method outperforms the existing methods on both the choroid segmentation and shadow elimination tasks.
• We further apply the proposed method in a clinical prospective study for understanding the pathology of glaucoma, which demonstrates its capacity in detecting the structural and vascular changes of the choroid related to the elevation of intra-ocular pressure (IOP).

The remainder of the paper is organized as follows. We review the existing techniques related to the proposed method in Section II. The methodology of the proposed method is presented in Section III. To validate the effectiveness and clinical significance of the proposed method, we conduct extensive experiments in Section IV. We analyse and discuss the details of the proposed method in Section V, and draw our conclusion in Section VI.

II. RELATED WORKS

1) Automatic Choroid Segmentation: The segmentation of the retinal layers in OCT has been explored since the commercialization of spectral domain (SD) OCT [29]–[35], but the segmentation of the choroid layer was usually not included in these studies, which may be attributed to the fuzzy CSI (as a comparison, the inner retinal layers usually have sharp and smooth boundaries, as shown in Fig. 1). Hu et al. adapted the graph search algorithm to semi-automatically identify the choroidal layer in SD-OCT [11]. Tian et al. segmented the CSI by finding the shortest path of the graph formed by valley pixels using Dijkstra's algorithm [12]. Alonso et al. developed an algorithm that detected the CSI based on OCT image enhancement and a dual brightness probability gradient [13]. Chen et al. generated a gradual intensity distance image; then an improved 2D graph search method with curve smoothness constraints was used to obtain the CSI segmentation [14]. Wang et al. segmented the choroid layer using a Markov random field and a level set method embedded with distance regularization and edge constraint terms [36].

With the development of deep learning techniques, many classical models for classification and segmentation tasks were proposed, such as CNN [37], FCN [38], SegNet [39], and U-Net [40]. These classical deep learning networks or their 3D versions were directly used to segment the choroid [16], [41]–[43], or utilized as feature extractors and combined with other methods [15], [44].

Fig. 2. Illustration of the framework of our method, which primarily includes the choroid segmentation using the proposed Bio-Net, en face projection, and the shadow localization and elimination using the U-Net for shadow location mask generation and the Deshadow-Net for shadow elimination. The choroid layer of an OCT volume is first segmented by the Bio-Net. We can get the RPE layer by moving the upper boundary of the choroid 20 µm upward. Then the OCT volume is projected into the 2D en face plane with the mean value projection along the axial direction, for generating the en face RPE and choroid images. The vessel shadows in the RPE projection image are segmented with the U-Net to locate their positions. Finally, the shadow location mask, in combination with the en face choroid image, is input into the Deshadow-Net for the shadow elimination.

Sui et al. combined the graph search with a convolutional neural network (CNN) by using the CNN to decide the edge weights in the graph search [15]. Masood et al. converted the segmentation task into a binary classification task, which extracted the choroid part of OCT images into patches with or without the CSI [16]. The U-Net may be the most successful architecture for medical image segmentation to date [40]. Cheng et al. proposed an improved U-Net with a refinement residual block and a channel attention block for the choroid segmentation [17].

2) Vessel Shadow Removal in OCT: In 2011, Girard et al. developed an attenuation compensation (AC) algorithm to remove the OCT vessel shadows and enhance the contrast of the optic nerve head [21]. This algorithm was then employed in the calculation of the attenuation coefficients of retinal tissue [45], enhancing the visibility of the lamina cribrosa [46], and improving the contrast of the choroid vasculature and the visibility of the sclera-choroid interface [47], [48].

Very recently, Mao et al. analysed the energy profile in each A-line and automatically compensated the pixel intensity of locations underneath the detected blood vessel [22]. However, both of these methods perform well for the removal of small vessel shadows but are unable to handle the large vessel shadows, which would lead to shadow residue on the choroid [23].

3) Inpainting/Object Removal: After locating the vessel shadows, we propose to use inpainting techniques, also referred to as object removal. Here the objects to be removed are the vessel shadows. Inpainting techniques have been extensively studied and applied in various computer vision and pattern recognition related fields (see [49], [50] and the references therein). Early inpainting techniques primarily filled the targeted area with information from similar or closest image parts, such as exemplar-based inpainting (EBI) [51], or used higher-order partial differential equations to propagate the information of surrounding areas into the targeted area, such as coherence transport inpainting (CTI) [52].

Deep learning, especially the generative adversarial network (GAN) [53], is a powerful tool for image synthesis [54], which has also shown its superiority in image inpainting [55]–[57]. Yeh et al. used a GAN model to search for the closest encoding of the corrupted image in the latent image manifold using context and prior losses, then passed the encoding through the GAN model to infer the missing content [55]. Yu et al. utilized contextual attention on surrounding image features as references during GAN training to make better predictions [56]. Nazeri et al. proposed a two-stage GAN inpainting method, which comprises an edge generator followed by an image completion network. The edge generator hallucinates edges of the missing region of the image, and the image completion network fills in the missing regions using the hallucinated edges as a prior [57].

III. METHODOLOGY

Fig. 2 is an illustration of the framework of our method, which primarily includes the choroid segmentation using the proposed Bio-Net, en face projection, and the proposed shadow localization and elimination pipeline. We use the U-Net for shadow location mask generation and the Deshadow-Net for shadow elimination. The Deshadow-Net follows the architecture of the two-stage inpainting GAN in [57]. The choroid layer of an OCT volume is first segmented by the Bio-Net. We can get the RPE layer by moving the upper boundary of the choroid 20 µm upward. Then the segmented RPE and choroid regions are projected into the 2D en face plane with the mean value projection along the axial direction, for generating the en face RPE and choroid projection images. The vessel shadows in the RPE projection image are segmented with the U-Net to locate their positions. Finally, the shadow location mask, in combination with the en face choroid image, is input into the Deshadow-Net for the shadow elimination.
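The projection step above is simple enough to state concretely. The sketch below is a minimal illustration only: it assumes the OCT volume is a NumPy array of shape (B-scans, depth, A-lines), that the Bio-Net boundaries are given as per-A-line row indices, and that the axial pixel spacing is known; the array layout and the spacing value are assumptions for illustration, not values taken from the paper.

```python
import numpy as np

def en_face_projections(volume, upper, lower, axial_um_per_px=2.0, rpe_band_um=20.0):
    """Mean-value en face projections of the RPE band and the choroid.

    volume : (n_bscans, depth, n_alines) OCT intensities.
    upper, lower : (n_bscans, n_alines) integer row indices of the upper choroid
        boundary (Bruch's membrane) and the choroid-sclera interface.
    The RPE band is approximated by shifting the upper boundary 20 um upward, as
    described in Section III; the axial spacing value here is an assumption.
    """
    n_bscans, depth, n_alines = volume.shape
    band_px = max(1, int(round(rpe_band_um / axial_um_per_px)))
    rows = np.arange(depth)[None, :, None]          # (1, depth, 1), broadcast over B-scans/A-lines
    up, lo = upper[:, None, :], lower[:, None, :]   # (n_bscans, 1, n_alines)
    choroid_mask = (rows >= up) & (rows < lo)
    rpe_mask = (rows >= up - band_px) & (rows < up)

    def mean_projection(mask):
        total = (volume * mask).sum(axis=1)
        count = np.clip(mask.sum(axis=1), 1, None)  # avoid division by zero
        return total / count                        # (n_bscans, n_alines) en face image

    return mean_projection(rpe_mask), mean_projection(choroid_mask)
```

The two returned images correspond to the en face RPE and choroid projections that feed the shadow localization and shadow elimination steps, respectively.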


Fig. 3. Illustration of the Bio-Net. It includes three modules. A global multi-layers segmentation module (GMS) is employed to segment all the layers (both retinal and choroidal layers) in the OCT image $I_{input}$. The output of the GMS, $S_{pred}$, which can be regarded as the global structure of $I_{input}$, is concatenated with $I_{input}$ and input into a local choroid segmentation module (LCS). Different from the GMS, the LCS only segments the choroid region (from the Bruch's membrane to the choroid-sclera interface), which is the output of the Bio-Net. On the other hand, a biomarker regularization module (Bio) is used to regularize the segmentation of the choroid region $C_{pred}$. $I_{input}$ is concatenated as the input of the Bio for providing texture information.

Fig. 4. Training and testing procedures of the Bio-Net. We first pre-train the GMS and Bio modules. Then these trained modules are frozen in the training of the LCS. In testing, we only employ the trained GMS and LCS. The former is used to generate the global structure and the latter is for achieving the choroid segmentation results.

A. Bio-Net for Choroid Segmentation

Fig. 3 is an illustration of the Bio-Net. It includes three modules. A global multi-layers segmentation module (GMS) is employed to segment all the layers (both retinal and choroidal layers) in the OCT image $I_{input}$. The output of the GMS, $S_{pred}$, which can be regarded as the global structure of $I_{input}$, is concatenated with $I_{input}$ and input into a local choroid segmentation module (LCS). Different from the GMS, the LCS only segments the choroid region (from the Bruch's membrane to the choroid-sclera interface), which is the output of the Bio-Net. On the other hand, a biomarker regularization module (Bio) is used to regularize the segmentation of the choroid region $C_{pred}$. Also, $I_{input}$ is concatenated as the input of the Bio for providing texture information. The details of these modules are given below.

1) Global Multi-Layers Segmentation Module: The GMS segments the retinal and choroidal layers following their anatomical characteristics in OCT images. It is mainly used to obtain the global structure information, which can be an effective auxiliary to improve local segmentation accuracy [58]. Besides, as a multi-tasking network, the segmentations of the different layers constrain each other, which can reduce over-fitting and improve robustness [59].

The GMS takes $I_{input}$ as input and generates the multi-layer segmentation result $S_{pred}$ via a deep neural network $U_s$, $S_{pred} = U_s(I_{input})$. The architecture of $U_s$ is the U-Net [40]. The segmentation is optimized via a cross entropy loss $\mathcal{L}_{gms}$, which can be calculated as

$$\mathcal{L}_{gms} = -\sum_{i} S_{gt} \ln(S_{pred}), \qquad (1)$$

where $S_{gt}$ denotes the manually-annotated ground truth of all retinal and choroidal layers, and $i$ denotes the index of the layers.

2) Local Choroid Segmentation Module: The LCS takes the concatenation of the global segmented structure $S_{pred}$ from the GMS and the original OCT image $I_{input}$ as input. It segments the choroid region $C_{pred}$ via another segmentation network $U_c$, $C_{pred} = U_c(C_{input})$. $C_{gt}$ is the manually-annotated ground truth of the choroid region, which contains two categories of pixels (choroid and background). The $U_c$ also uses the U-Net architecture [40] and a binary cross entropy loss $\mathcal{L}_{seg}$ for optimization:

$$\mathcal{L}_{seg} = -[C_{gt}\ln(C_{pred}) + (1 - C_{gt})\ln(1 - C_{pred})]. \qquad (2)$$

In addition to $\mathcal{L}_{seg}$, the LCS is also regularized by the Bio module and its loss $\mathcal{L}_{bio}$. So the total loss of the LCS can be written as

$$\mathcal{L}_{lcs} = \lambda_{seg}\mathcal{L}_{seg} + \lambda_{bio}\mathcal{L}_{bio}, \qquad (3)$$

where $\lambda_{seg}$ and $\lambda_{bio}$ denote the corresponding hyper-parameters.

3) Biomarker Regularization Module: The Bio further uses the thickness of the choroid layer to regularize the segmentation of the Bio-Net. It predicts a thickness vector $T_{pred} \in \mathbb{R}^{W \times 1}$, where $W$ is the width of the OCT images and also refers to the number of A-lines. The Bio takes the concatenation of the original OCT image $I_{input}$ and the predicted choroid region $C_{pred}$ as input $B_{input}$. So $T_{pred} = B_t(B_{input})$, where $B_t$ is a biomarker regression network that uses the architecture of ResNet-18 [60]. Its optimization is to minimize the L1 distance between $T_{pred}$ and the thickness ground truth $T_{gt}$:

$$\mathcal{L}_{bio} = \lVert T_{pred} - T_{gt} \rVert_1. \qquad (4)$$

$T_{gt}$ is generated from $C_{gt}$ by counting the pixel number of the choroid region along each A-line.

So, the total loss $\mathcal{L}_{total}$ of the proposed Bio-Net is

$$\mathcal{L}_{total} = \lambda_{gms}\mathcal{L}_{gms} + \mathcal{L}_{lcs} = \lambda_{gms}\mathcal{L}_{gms} + \lambda_{seg}\mathcal{L}_{seg} + \lambda_{bio}\mathcal{L}_{bio}, \qquad (5)$$

where $\lambda_{gms}$ denotes the hyper-parameter of the GMS.
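For illustration, Eqs. (1)–(5) can be assembled into a single training objective as in the minimal PyTorch sketch below. The tensor shapes, module interfaces, and the freezing comment are assumptions made for this sketch; it is not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def bio_net_loss(s_logits, s_gt, c_logits, c_gt, t_pred,
                 lam_gms=1.0, lam_seg=1.0, lam_bio=0.01):
    """Total Bio-Net loss of Eq. (5) under assumed tensor conventions.

    s_logits : (B, n_layers, H, W) multi-layer scores from the GMS.
    s_gt     : (B, H, W) integer layer labels.
    c_logits : (B, 1, H, W) choroid scores from the LCS.
    c_gt     : (B, 1, H, W) binary choroid mask (float).
    t_pred   : (B, W) thickness vector predicted by the Bio module.
    """
    # Eq. (1): cross entropy over all retinal/choroidal layers.
    l_gms = F.cross_entropy(s_logits, s_gt)
    # Eq. (2): binary cross entropy for the choroid region.
    l_seg = F.binary_cross_entropy_with_logits(c_logits, c_gt)
    # Eq. (4): L1 distance to the thickness ground truth, obtained by counting
    # choroid pixels along each A-line (i.e. summing each column of the mask).
    t_gt = c_gt.squeeze(1).sum(dim=1)
    l_bio = F.l1_loss(t_pred, t_gt)
    # Eqs. (3) and (5): weighted sum. During LCS training the pre-trained GMS
    # and Bio modules are frozen (e.g. p.requires_grad_(False) on their
    # parameters), so only the LCS receives gradients, as in Section III-A4.
    return lam_gms * l_gms + lam_seg * l_seg + lam_bio * l_bio
```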


Fig. 5. Illustration of the shadow elimination pipeline. We first segment the retinal vessel shadows from the en face RPE image with the U-Net [40]. The generated shadow mask can be used to locate the shadows in an OCT volume. Then the shadow mask, in combination with the en face choroid image, is fed into the shadow elimination module, namely the Deshadow-Net, to get a shadow-free choroid image.
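As a rough illustration of the edge-then-texture idea behind this pipeline, the sketch below passes a Canny edge map and the shadow mask through two generators in sequence. The callables g_edge and g_tex are placeholders standing in for the EdgeConnect-style edge and image-completion generators of [57]; their input/output conventions here are assumptions, not the interfaces of the original code.

```python
import numpy as np
import torch
from skimage.feature import canny

def remove_shadows(choroid, shadow_mask, g_edge, g_tex, device="cpu"):
    """Two-stage (edge -> texture) inpainting of the shadowed en face choroid.

    choroid     : (H, W) float image in [0, 1].
    shadow_mask : (H, W) binary mask, 1 where retinal vessel shadows were located.
    g_edge, g_tex : stand-ins for the edge and texture generators of the
        Deshadow-Net; each is assumed to map a (1, 3, H, W) tensor to (1, 1, H, W).
    """
    edges = canny(choroid, sigma=2.0).astype(np.float32)   # structure of the vasculature
    to_t = lambda a: torch.from_numpy(np.asarray(a, dtype=np.float32))[None, None].to(device)
    img, msk, edg = to_t(choroid), to_t(shadow_mask), to_t(edges)
    with torch.no_grad():
        # Stage 1: hallucinate vessel edges inside the masked (shadowed) areas.
        pred_edges = g_edge(torch.cat([img * (1 - msk), edg * (1 - msk), msk], dim=1))
        # Stage 2: fill texture, using the completed edge map as a prior.
        filled = g_tex(torch.cat([img * (1 - msk), pred_edges, msk], dim=1))
    # Keep original content outside the mask, inpainted content inside it.
    out = img * (1 - msk) + filled * msk
    return out.squeeze().cpu().numpy()
```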

4) Training and Testing Procedures: Fig. 4 illustrates the training and testing procedures of the Bio-Net. We first pre-train the GMS and Bio modules. Then these trained modules are frozen in the training of the LCS. Note that the pre-training of the Bio is different from its deployment in the regularization of the choroid segmentation, because the predicted choroid region $C_{pred}$ is not available at the stage of pre-training. Here we use pseudo labels of the choroid region, derived from a pre-trained U-Net, to generate the thickness ground truth $T_{gt}$. This U-Net is trained for the choroid segmentation using only $I_{input}$ and $C_{gt}$. Finally, the LCS, in combination with the frozen GMS and Bio, is trained in an end-to-end manner.

In the testing stage, we only employ the trained GMS and LCS. The former is used to generate the global structure and the latter is for achieving the choroid segmentation result.

B. Shadow Removal Pipeline

Different from the previous shadow elimination methods that could not eliminate the shadows from large vessels [21], [22], we propose a novel method that is able to remove the shadows without limitation in vessel caliber. It first locates the vessel shadows and then uses image inpainting techniques to repair the shadow-contaminated areas. As shown in Fig. 5, we segment the retinal vessel shadows from the en face RPE image with the U-Net [40]. The generated shadow mask can be used to locate the shadows in an OCT volume. Then the shadow mask, in combination with the en face choroid image, is fed into the shadow elimination module, namely the Deshadow-Net, to get a shadow-free choroid image.

1) Shadow Localization: The idea of using the RPE to locate the vessel shadows is inspired by two pieces of medical and imaging knowledge. (1) The retinal layers below the outer plexiform layer and above the BM are avascular [61], so any vessel-like structures appearing on these layers are the projected shadows. (2) As shown in Fig. 6, the RPE layer has the highest OCT light reflectance and the best shadow contrast compared with other avascular layers including the outer nuclear layer (ONL) and the photoreceptor layer (PRL).

Fig. 6. Comparison of the RPE layer with other avascular layers including the outer nuclear layer (ONL) and photoreceptor layer (PRL) in OCT imaging.

To fully locate the shadow mask on the en face choroid, we further enhance the U-Net segmentation results with morphological manipulation including dilation and erosion.

2) Shadow Removal: As demonstrated in Fig. 5, the Deshadow-Net is a cascade of two GANs. Each GAN has a pair of generator and discriminator. The generators follow the architecture in [62] and the discriminators use a 70 × 70 PatchGAN architecture [63]. The inputs of the Deshadow-Net are the shadow-contaminated choroid image and the shadow location mask. Before entering the first GAN, the structure feature of the en face choroid is extracted with the Canny edge detector [64]. The first GAN is employed to generate the structure (edge) information in the shadow mask areas. The second GAN uses the edges of the choroidal vasculature generated by the first GAN as a prior, to fill in the texture information of the choroid. Then we can get the shadow-free choroid from the output of the second GAN.

IV. EXPERIMENTS

A. Automatic Choroid Segmentation

1) Dataset: Macular-centered 3D OCT volumes were collected using the Topcon system from 20 different normal eye subjects. Each 3D volume contains 256 non-overlapping B-scans, covering a 6 × 6 × 2 mm³ region. Each B-scan has 512 A-lines with 992 pixels in each A-line. Since the B-scans in the same volume have high similarity, Cheng et al. [65] annotated the boundary information for 1/4 of the B-scans, thus 256/4 × 20 = 1280 B-scans are used. The boundaries in each image were marked independently by two groups of trained professionals with the supervision of senior ophthalmologists. The 20 volumes were randomly divided into a 12-volume training set, a 4-volume validation set, and a 4-volume test set. We also collected two diseased volumes from two subjects with different levels of choroidal neovascularization (CNV). One volume has 58 B-scans and was used in the training. The other volume has 21 B-scans and was used in the testing. Note that the data throughout this paper was collected from Topcon OCT systems, so we did not consider remedying the domain discrepancy caused by manufacturers in the proposed method. For scenarios in which the OCT systems are from different manufacturers, domain adaptation methods [66], [67] have been used for achieving superior segmentation performance.


Fig. 7. Visual examples of the choroid segmentation. From left to right: (a) shows the input OCT B-scans, and (b) to (i) show the segmentation results using Long et al. [38], Badrinarayanan et al. [39], Ronneberger et al. [40], Mazzaferri et al. [68], Cheng et al. [17], Venhuizen et al. [69], Gu et al. [70], and the proposed Bio-Net, respectively. White indicates the choroid region that is correctly segmented, red indicates excessive segmentation, and blue indicates insufficient segmentation.

TABLE I
QUANTITATIVE COMPARISON OF DIFFERENT CHOROID SEGMENTATION METHODS ON NORMAL SAMPLES

2) Implementation: Our Bio-Net is implemented using the PyTorch library [71] in the Ubuntu 16.04 operating system, and the training was performed with an NVIDIA GeForce GTX 1080 Ti GPU. We utilize horizontal flipping and rotation to augment the data. We resize all the B-scans to 192 × 192 pixels. The batch size is 4 and the optimizer is Adam [72]. The initial value of the learning rate is 0.01, and the learning rate is then reduced to 1/10 of the original when the number of iterations reaches 40, 80, 160, and 240, respectively. We set the hyper-parameters $\lambda_{seg,multi-layers} = 1$, $\lambda_{seg,choroid} = 1$, and $\lambda_{bio,choroid} = 0.01$. It took about 4 hours for each training of the Bio-Net.

3) Evaluation Metrics: We employ the Dice index (Dice), intersection-over-union (IoU), average-unsigned-surface-detection-error (AUSDE), and thickness difference (TD) to quantitatively evaluate the performance of the Bio-Net. The Dice and IoU show the proportion of the overlap between the segmented choroid region and the ground truth (larger is better). The AUSDE [73] represents the pixel-wise mismatch between the segmented choroid boundary and the ground truth (smaller is better). We give the AUSDEs of the BM and CSI separately. (A minimal sketch of the overlap metrics is given at the end of this subsection.)

4) Results: Fig. 7 shows visual examples of the choroid segmentation. We compared the proposed method with existing graph-search-based and deep-learning-based choroid segmentation methods. From left to right: (a) shows the input OCT B-scans, and (b) to (i) show the segmentation results using Long et al. [38], Badrinarayanan et al. [39], Ronneberger et al. [40], Mazzaferri et al. [68], Cheng et al. [17], Venhuizen et al. [69], Gu et al. [70], and the proposed Bio-Net, respectively. White indicates the choroid region that is correctly segmented, red indicates excessive segmentation, and blue indicates insufficient segmentation. As demonstrated in the figure, all of the methods have better performance in the segmentation of the BM than the CSI, which may be because the BM has bright and sharp features for recognition while the CSI is dark and fuzzy. Their quantitative comparison is listed in Table I. Our Bio-Net outperforms the other methods, which may suggest that the infusion of the biomarker prior and the global-to-local strategy contribute to the improvement of the segmentation. We further verify the performance superiority by using the diseased (CNV) data. The results are shown in Table II. We can see that the results using the proposed method are mostly better than the others, although the metrics are worse than the results of the normal cases.
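As referenced in the evaluation-metrics paragraph above, the overlap metrics reduce to simple set statistics on binary masks; a minimal sketch follows. AUSDE and TD additionally depend on how boundaries and thicknesses are extracted, so they are omitted here.

```python
import numpy as np

def dice_iou(pred, gt, eps=1e-8):
    """Dice index and IoU between two binary choroid masks of equal shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum() + eps)
    iou = inter / (np.logical_or(pred, gt).sum() + eps)
    return float(dice), float(iou)
```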


TABLE II
QUANTITATIVE COMPARISON OF DIFFERENT CHOROID SEGMENTATION METHODS ON SAMPLES WITH CNV

B. Shadow Localization and Removal

1) Dataset: We employ 30 OCT volumes from 30 subjects for the training and testing of the shadow localization and elimination pipeline. Each volume has 992 × 512 × 256 voxels. They cover a field of view (FOV) of a 6 × 6 mm² region and an imaging depth of around 2 mm. We randomly divided these volumes into 25 testing sets and 5 evaluation sets. The choroid and RPE layers of these volumes were segmented with the Bio-Net. Then the en face RPE projection images were manually annotated by two medical experts with pixel-level precision.

2) Implementation: The shadow localization and elimination pipeline is also implemented using the PyTorch library [71]. For training the U-Net in the shadow localization, we employ the Adam optimizer [72]. The initial learning rate is set to 0.0001. Then we gradually decrease the learning rate with a momentum of 0.9. We further enhance the segmentation result of the U-Net with six iterations of dilation and erosion. For the shadow elimination, the Deshadow-Net is pre-trained with the natural scene datasets in [57] and then fine-tuned with our choroid datasets. The training of the model is divided into three stages: the edge model, the inpainting model, and the joint model, as suggested in the original implementation (https://github.com/knazeri/edge-connect).

3) Evaluation Metrics: We employ the IoU, accuracy (Acc), sensitivity (Sen), and area under the curve (AUC) to evaluate the segmentation performance of the U-Net. Because no ground truth is available for the shadow elimination task, we employ the vessel density (VD) in the evaluation, which is an indirect but clinically useful metric. We follow the calculation of the VD in [74] as

$$VD = \frac{\int_A V \, dA}{\int_A dA}, \qquad (6)$$

where $A$ is the region of interest (ROI); here it refers to the 6 × 6 mm² region centered on the fovea. $V$ is the binarized vessel map. For an arbitrary pixel, if it belongs to a vessel, $V = 1$; otherwise $V = 0$.

Fig. 8. An example of using the U-Net in the segmentation of the retinal vessel shadows on the RPE projection image. The U-Net output is further processed to retrieve the shadow mask using dilation and erosion.

Fig. 9. AUC and Acc values as functions of the number of training samples for the RPE shadow segmentation.

4) Results: Fig. 8 demonstrates an example of using the U-Net in the segmentation of the retinal vessel shadows on the RPE projection image. The U-Net output is further processed to retrieve the shadow mask using dilation and erosion. The U-Net achieves superior performance on the vessel shadow segmentation with an Acc of 0.969, an AUC of 0.938, an IoU of 0.901, and a Sen of 0.967. After the dilation and erosion, the vessel caliber is around 5 pixels wider than that of the original shadow, which ensures the shadow can be completely removed by the Deshadow-Net. Ronneberger et al. have demonstrated that the U-Net is capable of achieving excellent segmentation performance with tens of training samples [40]. Here we found the U-Net can be efficient with even fewer training samples.
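The mask refinement just described (repeated dilation and erosion, leaving the mask a few pixels wider than the raw shadow) can be reproduced with standard binary morphology. The sketch below uses SciPy; the threshold, structuring element, and the split between dilation and erosion iterations are assumptions, since the paper does not specify them.

```python
import numpy as np
from scipy import ndimage as ndi

def refine_shadow_mask(prob, threshold=0.5, n_dilate=6, n_erode=3):
    """Binarize the U-Net shadow probability map and slightly widen it.

    prob : (H, W) shadow probability map from the U-Net.
    Returns a boolean mask that over-covers the shadows so that the
    Deshadow-Net can remove them completely.
    """
    mask = prob >= threshold
    structure = ndi.generate_binary_structure(2, 1)   # 4-connected 3x3 element
    mask = ndi.binary_dilation(mask, structure=structure, iterations=n_dilate)
    mask = ndi.binary_erosion(mask, structure=structure, iterations=n_erode)
    return ndi.binary_fill_holes(mask)
```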


Fig. 10. Visual examples of the shadow elimination. From left to right: the original en face choroid image, the shadow mask, and the shadow elimination results using the AC [21], our previous work using the CTI [23], and the proposed method in this work, respectively. The second and third rows are the zoom-in views inside the blue and yellow boxes in the first row, respectively. The last row shows the corresponding vessel maps.

Fig. 9 demonstrates the AUC and Acc values as functions of the number of training samples for the RPE shadow segmentation. As shown in the figure, the AUC and Acc values are close to 0.9 with a single training sample. The two metrics tend to be stable when the number of training images is larger than 5, which may be related to the high uniformity of the morphological patterns of the retinal vessel shadows among different OCT volumes. We compare the proposed shadow elimination pipeline with the A-line based AC algorithm [21] and our previous implementation of this shadow localization and elimination pipeline, which used the CTI as the inpainting algorithm [23]. Fig. 10 demonstrates visual examples of the shadow elimination. From left to right: the original en face choroid image, the shadow mask, and the shadow elimination results using the different methods. The second and third rows are the zoom-in views inside the blue and yellow boxes in the first row, respectively. The last row shows the corresponding vessel maps. As shown in the figure, the original choroidal vasculature is contaminated by the retinal vessel shadows at the locations shown in the shadow mask. Inside the zoom-in views, the AC can enhance the contrast of the choroidal vessels and minimize small vessel shadows but cannot get rid of the large vessel shadows. Using the localization and elimination strategy, both the large and small vessel shadows can be thoroughly eliminated, but as shown in the zoom-in views, the CTI introduces unnatural artefacts compared with the proposed method.

The vessel shadows could be treated as real vessels in clinical assessment, which would cause an overestimation of the VD. The calculated VD values of the vessel maps in the last row of Fig. 10 are: 0.510 for the original choroid, 0.504 for the AC, 0.501 for the CTI, and 0.500 for the proposed method. We also calculate a VD of 0.499 without including the shadow areas. The results are in accordance with the overestimation assumption, in which the original image has the highest VD. The AC method can eliminate part of the shadows and thus lowers the VD. The CTI and the proposed method can further lower the VD because they remove the shadows completely. Besides, their VDs are very close to that of the masked vessel map, which indicates the effectiveness of this shadow localization and elimination pipeline. We checked the VDs of the other testing datasets, which follow exactly the same trend. However, because the variation of the VDs among different eyes is much larger than that caused by the shadow elimination, we did not include their average values and standard deviations here.
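Equation (6) reduces to the fraction of ROI pixels marked as vessel, so the VD values compared above can be computed directly from a binarized vessel map; a minimal sketch:

```python
import numpy as np

def vessel_density(vessel_map, roi_mask):
    """Vessel density of Eq. (6).

    vessel_map : (H, W) binary map V (1 = vessel, 0 = background), obtained by
                 binarizing the en face choroid image as in [74].
    roi_mask   : (H, W) binary mask of the 6 x 6 mm^2 fovea-centered ROI A.
    """
    roi = roi_mask.astype(bool)
    return float(vessel_map.astype(float)[roi].mean())
```

With this definition, residual shadows that are mistaken for vessels simply add extra vessel pixels inside A, which is the overestimation effect discussed above.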


Fig. 11. Demonstration of the vessel maps of 5 study cases in normal (top) and high IOP states (bottom). The reduction in blood flow could be
observed in the high IOP state.

C. Application in a Clinical Prospective Study

Primary angle-closure glaucoma (PACG) is prevalent not only in East Asia but also in overseas Chinese and Eskimo populations [75]. Patients with PACG were found to have higher IOP and thicker choroids than normal controls [76]. Previous studies have shown that changes of the choroid thickness and blood flow might be associated with PACG [6], but the initial mechanism underlying angle closure has not been fully understood. Here we applied the proposed method in a clinical prospective study, which quantitatively detected the changes of the choroid in response to IOP elevation [20].

1) Data Collection: We recruited 34 healthy volunteers with ages ranging from 18 to 30 years and with no previous history of IOP exceeding 21 mm Hg. The participants were volunteers recruited mainly from the Zhongshan Ophthalmic Center at Sun Yat-sen University Medical School, and nearby communities in Guangzhou, China. The study was approved by the Ethical Review Committee of the Zhongshan Ophthalmic Center and was conducted in accordance with the Declaration of Helsinki for research involving human subjects. A swept-source OCT system with an A-line rate of 100 kHz (DRI OCT-1 Atlantis, Topcon, Japan) was employed to collect data from both of their eyes. We used the 6 × 6 mm² FOV volumetric scan protocol centered on the fovea. Each of the volumes contains 256 B-scans and each of the B-scans has 512 A-lines. Each A-line contains 992 data points uniformly distributed in a depth range of ∼3 mm.

To simulate the state of high IOP, after taking the baseline scans in a normal sitting position, each of the volunteers was asked to take scans in an upside-down position. The average IOP was increased to 34.48 ± 5.35 mm Hg because of the upside-down position, compared with the average IOP of 15.84 ± 1.99 mm Hg at the normal position. A total of 136 OCT volumes were acquired (34 volunteers, 68 eyes, normal and high IOP).

2) Results: With the proposed knowledge infused deep learning method, we processed this clinical dataset to retrieve the thickness of the choroid and the VD in the normal and upside-down states. The choroid thickness is the average value over the 6 × 6 mm² FOV. We summarize their statistical averages and standard deviations in Table III.

TABLE III
STATISTICAL RESULTS OF THE CLINICAL PROSPECTIVE STUDY
*CT: choroid thickness.

As the IOP increases from 15.84 ± 1.99 mm Hg during the normal sitting state to 34.48 ± 5.35 mm Hg during the upside-down state, the choroid becomes thicker while the VD decreases. Both changes have a p-value of < 0.001. These results provide evidence about the relationship between choroid expansion and shallowing of the anterior chamber, which may be of relevance for the pathogenesis of PACG.

Fig. 11 demonstrates the vessel maps of 5 study cases from the clinical dataset in the normal (top) and high IOP states (bottom). In OCT imaging, a change in the vessel map is related to a change in blood flow. On these vessel maps, we can observe the reduction in blood flow in the high IOP state.

V. DISCUSSIONS

We further analyze and discuss the details of the proposed method in this section, including the ablation study of the Bio-Net, the comparison of different inpainting methods, and the limitations of this work.

A. Ablation Study of Bio-Net

To evaluate the effectiveness of the global multi-layers segmentation module and the biomarker prediction network, we combine each of them with the U-Net respectively.

Fig. 12, Fig. 13 and Fig. 14 illustrate the effectiveness of the biomarker prediction network and the global multi-layers segmentation module.


Fig. 12. Ablation study of the Bio-Net. From left to right: (a) shows the input OCT B-scans, and (b) to (e) show the segmentation results using the baseline method (U-Net), the baseline + the biomarker prediction (Bio) module, the baseline + the global multi-layer segmentation (GMS) module, and the proposed Bio-Net (baseline + Bio + GMS), respectively. White indicates the choroid region that is correctly segmented, red indicates excessive segmentation, and blue indicates insufficient segmentation.

Fig. 13. Effectiveness of the proposed Bio module by changing its hyper-parameter spanning four orders of magnitude. (a), (b), and (c) are the
results using AUSDE in CSI, TD, and Dice as measures, respectively.

Fig. 14. Effectiveness of the proposed GMS module by changing the number of the segmented layers from 4 to 12. (a), (b), and (c) are the results
using AUSDE in CSI, TD, and Dice as measures, respectively.


TABLE IV
QUANTITATIVE COMPARISON OF DIFFERENT INPAINTING ALGORITHMS
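Table IV reports SSIM, PSNR, and MSE between each inpainted result and the original (uncorrupted) en face choroid image. As a reference, such scores can be computed with scikit-image as in the sketch below; the data range and any masking protocol are assumptions, since the paper does not spell out its exact evaluation code.

```python
from skimage.metrics import (structural_similarity, peak_signal_noise_ratio,
                             mean_squared_error)

def inpainting_scores(original, inpainted, data_range=1.0):
    """SSIM / PSNR / MSE between the uncorrupted en face choroid image and an
    inpainting result produced from its artificially masked copy."""
    ssim = structural_similarity(original, inpainted, data_range=data_range)
    psnr = peak_signal_noise_ratio(original, inpainted, data_range=data_range)
    mse = mean_squared_error(original, inpainted)
    return ssim, psnr, mse
```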

In the experiment, we take the U-Net as a baseline. The table demonstrates that the infusion of the biomarker prediction network can lead to an improvement on the choroid segmentation task, as the Dice increases from 88.36% to 90.27% and the AUSDE decreases from 8.01 pixels to 6.46 pixels. On the other hand, adding the global multi-layers segmentation module to the U-Net makes the IoU increase from 79.14% to 81.34% and the AUSDE decrease from 8.01 pixels to 6.54 pixels. Meanwhile, the AUSDE is 3.50 pixels lower and the Sen 1.79% higher than with only the GMS module employed, which demonstrates that the global-to-local network works better than a single global multi-layers segmentation module or a single local choroid segmentation module.

We also examined the effectiveness of the proposed Bio module by changing its hyper-parameter over four orders of magnitude, as shown in Fig. 13. For the different metrics including AUSDE, TD, and Dice, the best performance is achieved at a weight of $\mathcal{L}_{bio}$ of 0.01. The results of the Bio-Net are always better than those of the U-Net baseline. For the GMS module, we checked the influence of the number of segmented layers from 4 to 12 layers, as shown in Fig. 14. The observation of the results is very similar to that of Fig. 13. Through these ablation studies, we may be confident about the proposed biomarker-infused global-to-local segmentation network.

B. Quantitative Comparison of Inpainting Methods

In this work, we employ the GAN inpainting method [57] for the shadow elimination task. We have compared it with our previous implementation [23] using the CTI [52], as shown in Fig. 10. It shows that both of the inpainting methods can completely eliminate the shadows and fill the shadow areas with natural extensions of the surrounding contents. We can also notice that the CTI creates some local artefacts. Here we further compare the performance of these inpainting methods quantitatively. We also include the EBI [51] in the comparison.

Due to the absence of the ground-truth choroidal vasculature, the quantitative comparison of the inpainting methods cannot be implemented directly. Thus we created artificial retinal vessel masks with vessel widths slightly wider than the real shadows. The artificial masks, combined with the repaired choroid images, were used as the input to evaluate the inpainting algorithms. The widely-used image similarity measures including the structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), and mean squared error (MSE) [65] are employed as quantitative metrics. We used the masked images as the baseline.

As shown in Table IV, the GAN inpainting method outperforms the CTI and EBI methods on all the metrics. However, its advantages are marginal, because all of the inpainting methods achieve SSIMs > 0.9, which suggests that all of them could be suitable choices for the proposed localization and elimination pipeline.

C. Limitations of This Work

The methodology proposed in this work was developed under the requirement of quantifying subtle changes of the choroid blood flow in response to the IOP elevation, as described in Section IV-C. Because it is a clinical prospective study, only healthy volunteers were involved, so we did not take pathological changes such as CNV into consideration. Even so, we found that the existing methods still need to be improved in the automatic choroid segmentation and vessel shadow removal, as discussed in Sections I and II. Based on the observation of the clinical IOP dataset, we propose the Bio-Net for the automatic choroid segmentation and the shadow removal pipeline for quantifying the choroidal vasculature. The usage of the proposed method led to a significant clinical finding that a high IOP causes thickening of the choroid layer and a decrease of choroidal blood flow, which may help us to understand the mechanism of glaucoma [20]. To examine the performance of the Bio-Net in the presence of pathological changes of the choroid, we included CNV cases in the training and testing and found our proposed method still outperformed the state-of-the-art methods, as demonstrated in Fig. 7 and Table I. However, this verification is not enough since there are several other types of pathological changes of the choroid, such as DR and pathological myopia. Besides, a very limited sample size was collected at the time of submission. We need to keep collecting more cases with better diversity. Additionally, variance quantification of the same subject over time would be extremely useful for delicate clinical applications.

For the shadow removal pipeline, a major limitation is the method of determining the positions of the vessel shadows. For the cases collected in the clinical prospective study [20], the RPE layer is undamaged, so the shadow localization is straightforward. The intuition behind our method also brings a new perspective to this vessel shadow removal task, as discussed in Sections I and II. However, if the RPE layer is damaged due to CNV or other types of ocular diseases, our method will be invalid. A natural thought is to use the layers above the RPE, such as the ONL, to localize the shadows. But we found it might not be feasible because the correspondence between the shadows in the retinal and choroidal layers was broken due to the strong light absorption at the lesion areas. So it may be necessary to further combine the A-line-based method and our method, or to develop specific methods, for handling the RPE-damaged cases.

Another major limitation of our method is its possible failure in tackling the OCT B-scans including the optic nerve head (ONH). As stated above, this work was inspired by the clinical prospective study in [20]. Thus we only have the macular-centered OCT B-scans and labels. We cannot assure that the Bio-Net and vessel removal pipeline still perform well in the presence of the ONH.


The capability of segmenting both the macula and the ONH will enable the generalization to segmenting wide-field OCT scans. Because the early signs of many ocular diseases such as DR emerge at the edge of the retina, this technique will be useful for the screening of ocular diseases among a large population.

VI. CONCLUSION

In this paper, we have developed an automatic method for the segmentation and visualization of the choroid, which combines deep learning networks with prior medical and OCT imaging knowledge. We have proposed the Bio-Net for the choroid segmentation, which is a biomarker-infused global-to-local network. It outperforms the state-of-the-art choroid segmentation methods. For eliminating the retinal vessel shadows, we have proposed a deep learning pipeline, which first locates the shadows using anatomical and OCT imaging knowledge, and then removes the shadows using a two-stage GAN inpainting architecture. Compared with the existing methods, the proposed method has superiority in shadow elimination and morphology preservation. We have further applied the proposed method in a clinical prospective study, which quantitatively detected the changes of the choroid in response to IOP elevation. The results show it is able to detect the structural and vascular changes of the choroid efficiently.

ACKNOWLEDGEMENT

The authors would like to thank all the reviewers for their insightful comments and concerns, which stimulated us to improve our work.

REFERENCES

[1] D. A. Atchison and G. Smith, Optics of the Human Eye. U.K.: Butterworth-Heinemann, 2000.
[2] H. Laviers and H. Zambarakji, "Enhanced depth imaging-oct of the choroid: A review of the current literature," Graefe's Archive Clin. Exp. Ophthalmol., vol. 252, no. 12, pp. 1871–1883, 2014.
[3] S. Wang, Y. Wang, X. Gao, N. Qian, and Y. Zhuo, "Choroidal thickness and high myopia: A cross-sectional study and meta-analysis," BMC Ophthalmol., vol. 15, no. 1, pp. 1–10, 2015.
[4] C. V. Regatieri, L. Branchini, J. Carmody, J. G. Fujimoto, and J. S. Duker, "Choroidal thickness in patients with diabetic retinopathy analyzed by spectral-domain optical coherence tomography," Retina (Philadelphia, Pa.), vol. 32, no. 3, p. 563, 2012.
[5] G. Yiu et al., "Relationship of central choroidal thickness with age-related macular degeneration status," Amer. J. Ophthalmol., vol. 159, no. 4, pp. 617–626, 2015.
[6] S. Chen et al., "Changes in choroidal thickness after trabeculectomy in primary angle closure glaucoma," Investig. Ophthalmol. Vis. Sci., vol. 55, no. 4, pp. 2608–2613, 2014.
[7] K.-A. Tan, A. Laude, V. Yip, E. Loo, E. P. Wong, and R. Agrawal, "Choroidal vascularity index–A novel optical coherence tomography parameter for disease monitoring in diabetes mellitus?," Acta Ophthalmol., vol. 94, no. 7, pp. e612–e616, 2016.
[8] F. Zheng et al., "Choroidal thickness and choroidal vessel density in nonexudative age-related macular degeneration using swept-source optical coherence tomography imaging," Investig. Ophthalmol. Vis. Sci., vol. 57, no. 14, pp. 6256–6264, 2016.
[9] Y. Kuroda et al., "Increased choroidal vascularity in central serous chorioretinopathy quantified using swept-source optical coherence tomography," Amer. J. Ophthalmol., vol. 169, pp. 199–207, 2016.
[10] J. C. Wang et al., "Diabetic choroidopathy: choroidal vascular density and volume in diabetic retinopathy with swept-source optical coherence tomography," Amer. J. Ophthalmol., vol. 184, pp. 75–83, 2017.
[11] Z. Hu, X. Wu, Y. Ouyang, Y. Ouyang, and S. R. Sadda, "Semiautomated segmentation of the choroid in spectral-domain optical coherence tomography volume scans," Investig. Ophthalmol. Vis. Sci., vol. 54, no. 3, pp. 1722–1729, 2013.
[12] J. Tian, P. Marziliano, M. Baskaran, T. A. Tun, and T. Aung, "Automatic segmentation of the choroid in enhanced depth imaging optical coherence tomography images," Biomed. Opt. Express, vol. 4, no. 3, pp. 397–411, 2013.
[13] D. Alonso-Caneiro, S. A. Read, and M. J. Collins, "Automatic segmentation of choroidal thickness in optical coherence tomography," Biomed. Opt. Express, vol. 4, no. 12, pp. 2795–2812, 2013.
[14] Q. Chen, W. Fan, S. Niu, J. Shi, H. Shen, and S. Yuan, "Automated choroid segmentation based on gradual intensity distance in HD-OCT images," Opt. Express, vol. 23, no. 7, pp. 8974–8994, 2015.
[15] X. Sui et al., "Choroid segmentation from optical coherence tomography with graph-edge weights learned from deep convolutional neural networks," Neurocomputing, vol. 237, pp. 332–341, 2017.
[16] S. Masood et al., "Automatic choroid layer segmentation from optical coherence tomography images using deep learning," Sci. Rep., vol. 9, no. 1, p. 3058, 2019.
[17] X. Cheng, X. Chen, Y. Ma, W. Zhu, Y. Fan, and F. Shi, "Choroid segmentation in oct images based on improved u-net," in Proc. Med. Imag.: Image Process., vol. 10949, International Society for Optics and Photonics, 2019, Art. no. 1094921.
[18] K. Li, X. Wu, D. Z. Chen, and M. Sonka, "Optimal surface segmentation in volumetric images-a graph-theoretic approach," IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 1, pp. 119–134, Jan. 2005.
[19] S. Lee et al., "Comparative analysis of repeatability of manual and automated choroidal thickness measurements in nonneovascular age-related macular degeneration," Investig. Ophthalmol. Vis. Sci., vol. 54, no. 4, pp. 2864–2871, 2013.
[20] F. Li, H. Li, J. Yang, J. Liu, A. Tin, and X. Zhang, "Upside-down position leads to choroidal expansion and anterior chamber shallowing: Oct study," Brit. J. Ophthalmol., vol. 104, pp. 790–794, 2020.
[21] M. J. Girard, N. G. Strouthidis, C. R. Ethier, and J. M. Mari, "Shadow removal and contrast enhancement in optical coherence tomography images of the human optic nerve head," Investig. Ophthalmol. Vis. Sci., vol. 52, no. 10, pp. 7738–7748, 2011.
[22] Z. Mao et al., "Deep learning based noise reduction method for automatic 3D segmentation of the anterior of lamina cribrosa in optical coherence tomography volumetric scans," Biomed. Opt. Exp., vol. 10, no. 11, pp. 5832–5851, 2019.
[23] J. Yang et al., "Enhancing visibility of choroidal vasculature in oct via attenuation compensation and coherence transport inpainting," Investig. Ophthalmol. Vis. Sci., vol. 60, no. 9, pp. 148–148, 2019.
[24] D. Shen, G. Wu, and H.-I. Suk, "Deep learning in medical image analysis," Annu. Rev. Biomed. Eng., vol. 19, pp. 221–248, 2017.
[25] G. Litjens et al., "A survey on deep learning in medical image analysis," Med. Image Anal., vol. 42, pp. 60–88, 2017.
[26] Z. Gong, P. Zhong, and W. Hu, "Statistical loss and analysis for deep learning in hyperspectral image classification," IEEE Trans. Neural Netw. Learn. Syst., 2020.
[27] S. Dong, Y. Zhuang, Z. Yang, L. Pang, H. Chen, and T. Long, "Land cover classification from VHR optical remote sensing images by feature ensemble deep learning network," IEEE Geosci. Remote Sens. Lett., vol. 17, no. 8, pp. 1396–1400, Aug. 2020.
[28] M. Kordestani, M. Saif, M. E. Orchard, R. Razavi-Far, and K. Khorasani, "Failure prognosis and applications? A survey of recent literature," IEEE Trans. Rel., to be published.
[29] M. K. Garvin, M. D. Abramoff, X. Wu, S. R. Russell, T. L. Burns, and M. Sonka, "Automated 3-d intraretinal layer segmentation of macular spectral-domain optical coherence tomography images," IEEE Trans. Med. Imag., vol. 28, no. 9, pp. 1436–1447, Sep. 2009.
[30] K. Lee, M. Niemeijer, M. K. Garvin, Y. H. Kwon, M. Sonka, and M. D. Abramoff, "Segmentation of the optic disc in 3-d OCT scans of the optic nerve head," IEEE Trans. Med. Imag., vol. 29, no. 1, pp. 159–168, Jan. 2009.
[31] Q. Yang et al., "Automated layer segmentation of macular oct images using dual-scale gradient information," Opt. Express, vol. 18, no. 20, pp. 21293–21307, 2010.
[32] L. Zhang, K. Lee, M. Niemeijer, R. F. Mullins, M. Sonka, and M. D. Abramoff, "Automated segmentation of the choroid from clinical sd-oct," Investig. Ophthalmol. Vis. Sci., vol. 53, no. 12, pp. 7510–7519, 2012.