
Artificial Intelligence

AutoMorph: Automated Retinal Vascular Morphology Quantification Via a Deep Learning Pipeline

Yukun Zhou1,2,4, Siegfried K. Wagner2, Mark A. Chia2, An Zhao1,3, Peter Woodward-Court2,5, Moucheng Xu1,4, Robbert Struyven2,4, Daniel C. Alexander1,3,*, and Pearse A. Keane2,*

1 Centre for Medical Image Computing, University College London, London, UK
2 NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, London, UK
3 Department of Computer Science, University College London, London, UK
4 Department of Medical Physics and Biomedical Engineering, University College London, London, UK
5 Institute of Health Informatics, University College London, London, UK

Purpose: To externally validate a deep learning pipeline (AutoMorph) for automated analysis of retinal vascular morphology on fundus photographs. AutoMorph has been made publicly available, facilitating widespread research in ophthalmic and systemic diseases.

Methods: AutoMorph consists of four functional modules: image preprocessing, image quality grading, anatomical segmentation (including binary vessel, artery/vein, and optic disc/cup segmentation), and vascular morphology feature measurement. Image quality grading and anatomical segmentation use the most recent deep learning techniques. We employ a model ensemble strategy to achieve robust results and analyze the prediction confidence to rectify false gradable cases in image quality grading. We externally validate the performance of each module on several independent publicly available datasets.

Results: The EfficientNet-b4 architecture used in the image grading module achieves performance comparable to that of the state of the art for EyePACS-Q, with an F1-score of 0.86. The confidence analysis reduces the number of images incorrectly assessed as gradable by 76%. Binary vessel segmentation achieves an F1-score of 0.73 on AV-WIDE and 0.78 on DR HAGIS. The artery/vein score is 0.66 on IOSTAR-AV, and disc segmentation achieves 0.94 on IDRID. Vascular morphology features measured from the AutoMorph segmentation map and expert annotation show good to excellent agreement.

Conclusions: AutoMorph modules perform well even when external validation data show domain differences from training data (e.g., with different imaging devices). This fully automated pipeline can thus allow detailed, efficient, and comprehensive analysis of retinal vascular morphology on color fundus photographs.

Translational Relevance: By making AutoMorph publicly available and open source, we hope to facilitate ophthalmic and systemic disease research, particularly in the emerging field of oculomics.

Correspondence: Pearse A. Keane, NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology, 162 City Road, London EC1V 2PD, UK. e-mail: [email protected]

Received: January 28, 2022; Accepted: June 6, 2022; Published: July 14, 2022

Keywords: retinal fundus photograph; vascular analysis; deep learning; oculomics; external validation

Citation: Zhou Y, Wagner SK, Chia MA, Zhao A, Woodward-Court P, Xu M, Struyven R, Alexander DC, Keane PA. AutoMorph: Automated retinal vascular morphology quantification via a deep learning pipeline. Transl Vis Sci Technol. 2022;11(7):12, https://doi.org/10.1167/tvst.11.7.12

Introduction

The widespread availability of rapid, non-invasive retinal imaging has been one of the most notable developments within ophthalmology in recent decades. The significance of the retinal vasculature for assessing ophthalmic disease is well known; however, there is also growing interest in its capacity to provide valuable insights into systemic disease, a field that has been termed "oculomics."1–4 Narrowing of the retinal arteries is associated with hypertension and

Copyright 2022 The Authors
tvst.arvojournals.org | ISSN: 2164-2591
This work is licensed under a Creative Commons Attribution 4.0 International License.


AutoMorph: Automated Retinal Vascular Morphology Quantification TVST | July 2022 | Vol. 11 | No. 7 | Article 12 | 2

atherosclerosis,5–8 and dilation of the retinal veins is linked with diabetic retinopathy.9–11 Increased tortuosity of the retinal arteries is also associated with hypercholesterolemia and hypertension.12–14 Considering that manual vessel segmentation and feature extraction can be extremely time consuming, as well as poorly reproducible,15 there has been growing interest in the development of tools that can extract retinal vascular features in a fully automated manner.

In recent decades, a large body of technical work has focused on retinal vessel map segmentation. Performance has improved dramatically through a range of techniques, from unsupervised graph- and feature-based methods16–20 to supervised deep learning models.21 Despite this progress, the widespread use of these techniques in clinical research has been limited by a number of factors. First, technical papers21–25 often focus on performing a single function while ignoring upstream and downstream tasks, such as preprocessing24,25 and feature measurement.21–23 Second, existing techniques often perform poorly when applied to real-world clinical settings, limited by poor generalizability outside of the environment in which they were developed.26,27

Although some software has been utilized for clinical research, most of it is only semi-automated, requiring human intervention to correct vessel segmentation and artery/vein identification.6,24,25,28,29 This limits process efficiency and introduces subjective bias, potentially influencing the final outcomes. Further, most existing software has not integrated the crucial functions required for such a pipeline, namely image cropping, quality assessment, segmentation, and vascular feature measurement. For example, poor-quality images in research cohorts often must be manually filtered by physicians, which generates a considerable workload. There is also the potential to improve the performance of underlying segmentation algorithms by employing the most recent advances in machine learning, thus enhancing the accuracy of vascular feature measurements.

In this study, we explored the feasibility of a deep learning pipeline providing automated analysis of retinal vascular morphology from color fundus photographs. We highlight three unique advantages of the proposed AutoMorph pipeline:

• AutoMorph consists of four functional modules: (1) retinal image preprocessing; (2) image quality grading; (3) anatomical segmentation (binary vessel segmentation, artery/vein segmentation, and optic disc segmentation); and (4) morphological feature measurement.
• AutoMorph alleviates the need for physician intervention by addressing two key areas. First, we employ an ensemble technique with confidence analysis to reduce the number of ungradable images that are incorrectly classified as being gradable (false gradable images). Second, accurate binary vessel segmentation and artery/vein identification reduce the need for manual rectification.
• AutoMorph generates a diverse catalog of retinal feature measurements that previous work indicates has the potential to be used for the exploration of ocular biomarkers for systemic disease.

Perhaps most importantly, we have made AutoMorph publicly available with a view to stimulating breakthroughs in the emerging field of oculomics.

Methods

The AutoMorph pipeline consists of four modules: (1) image preprocessing, (2) image quality grading, (3) anatomical segmentation, and (4) metric measurement (Fig. 1). Source code for this pipeline is available from https://github.com/rmaphoh/AutoMorph.

Datasets

The datasets used for development and external validation of the deep learning models described in this work are summarized in Table 1 and Supplementary Material S1. For model training, we chose publicly available datasets that contain a large quantity of annotated images.30 Importantly, a diverse combination of public datasets was used in order to enhance external generalizability. Some image examples are shown in Supplementary Figure S1. To validate the models, we externally evaluated the performance of the trained models on datasets distinct from those on which they were trained (e.g., in imaging devices, countries of origin, and types of pathology). All of the datasets provide retinal fundus photographs and the corresponding expert annotation. For image quality grading datasets (using EyePACS-Q as an example), two experts grade each image into three categories (good, usable, and reject quality), determined by image illumination, artifacts, and the diagnosability of general eye diseases to the experts. For anatomical segmentation datasets, such as the Digital Retinal Images for Vessel Extraction (DRIVE) dataset for the binary vessel segmentation task, two experts annotate each pixel as vessel or background, thus generating a ground-truth map of the same size as the retinal fundus photograph, where a white color indicates


Figure 1. Diagram of the proposed AutoMorph pipeline. The input is a color fundus photograph, and the final output is the set of measured vascular morphology features. The image quality grading and anatomical segmentation modules use deep learning models. Confidence analysis reduces the number of false gradable images in the image quality grading module.
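The four-stage flow in Figure 1 can be sketched as a simple driver loop. All function names below are hypothetical placeholders standing in for the modules described in this section, not the actual AutoMorph API:

```python
# A minimal sketch of the four-stage AutoMorph flow shown in Figure 1.
# All function names are hypothetical placeholders, not the real AutoMorph API.

def run_pipeline(image, preprocess, grade_quality, segment, measure):
    """Run one fundus photograph through the four modules in order."""
    square = preprocess(image)              # 1. crop background to a square
    if grade_quality(square) == "reject":   # 2. drop ungradable images early
        return None
    maps = segment(square)                  # 3. vessel, artery/vein, disc maps
    return measure(maps)                    # 4. vascular morphology features

# Toy stand-ins that only illustrate the control flow.
features = run_pipeline(
    "fundus.png",
    preprocess=lambda img: img,
    grade_quality=lambda img: "usable",
    segment=lambda img: {"binary_vessels": None},
    measure=lambda maps: {"vessel_density": 0.08},
)
```

The early return after quality grading reflects the design rationale given in the text: images that would fail segmentation are filtered out before they can corrupt the downstream feature measurements.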

vessel pixels and a black color the background. More details can be found in Supplementary Material S1.

Modules

Image Preprocessing

Retinal fundus photographs often contain superfluous background, resulting in dimensions that deviate from a geometric square. To account for this, we employed a technique that combines thresholding, morphological image operations, and cropping31 to remove the background so that the resulting image conforms to a geometric square (examples are shown in Supplementary Fig. S2).

Image Quality Grading

To filter out ungradable images that often fail in the segmentation and measurement modules, AutoMorph incorporates a classification model that identifies ungradable images. The model classifies each image as good, usable, or reject quality. In our study, good and usable images were considered to be gradable; however, this decision may be modified in scenarios with sufficient data to include only good-quality images. We employed EfficientNet-B448 as the model architecture and performed transfer learning on EyePACS-Q. Further details are outlined in Supplementary Material S2 and Supplementary Figure S3.

Anatomical Segmentation

Vascular structure is thin and elusive, especially against low-contrast backgrounds. To enhance binary vessel segmentation performance, AutoMorph uses an adversarial segmentation network.23 Six public datasets were used for model training (Table 1). Accurate artery/vein segmentation is a long-standing challenge. To address this, we employed an information fusion network22 tailored for artery/vein segmentation. Three datasets were used for training. Parapapillary atrophic changes, which can be a hallmark of myopia or glaucoma, can cause large errors in disc localization and segmentation. To counter this, AutoMorph employs a coarse-to-fine deep learning network,49 which achieved first place for disc segmentation in the MICCAI 2021 GAMMA challenge.45,46 Two public datasets were utilized in model training. Further detailed information is provided in Supplementary Material S3.

Vascular Morphology Feature Measurement

AutoMorph measures a series of clinically relevant vascular features, as summarized in Figure 2 (a comprehensive list is given in Supplementary Fig. S13). Three different calculation methods for vessel tortuosity are provided: distance measurement tortuosity, squared curvature tortuosity,50 and tortuosity density.51 The fractal dimension (Minkowski–Bouligand dimension)52 provides a measure of vessel complexity. The vessel density indicates the ratio of the vessel area to the whole image area. For vessel caliber, AutoMorph calculates the central retinal arteriolar equivalent (CRAE) and central retinal venular equivalent (CRVE), as well as the arteriolar-to-venular ratio (AVR).53–55 AutoMorph measures the features in standard regions, including Zone B (the annulus 0.5 to 1 optic disc diameter from the disc margin) and Zone C (the annulus 0.5 to 2 optic disc diameters from the disc margin).29 Considering that Zone B and Zone C of macula-centered images may fall outside the circular fundus, the features for the whole image are also measured.

Ensemble and Confidence Analysis

In model training, 80% of the training data is used for model training and 20% is used to tune the training hyperparameters, such as scheduling the learning rate. In retinal image grading, we ensemble the


Table 1. Characteristics of the Training and External Validation Data

Image Quality Grading
Dataset | Country of Origin | Image Quantitya | Device (Manufacturer)
Training data: EyePACS-Q-train30,31 | USA | 12,543 (NR, more than 99%) | A variety of imaging devices, including DRS (CenterVue, Padova, Italy); iCam (Optovue, Fremont, CA); CR1/DGi/CR2 (Canon, Tokyo, Japan); and Topcon NW 8 (Topcon, Tokyo, Japan)
Internal validation data: EyePACS-Q-test30,31 | USA | 16,249 (NR, more than 99%) | —
External validation data: DDR test32 | China | 4,105 (100%) | 42 types of fundus cameras, mainly Topcon D7000, Topcon TRC NW48, D5200 (Nikon, Tokyo, Japan), and Canon CR 2 cameras

Binary Vessel Segmentation
Dataset | Country of Origin | Image Quantitya | Device (Manufacturer)
Training data: DRIVE33 | Netherlands | 40 (100%) | CR5 non-mydriatic 3CCD camera (Canon)
Training data: STARE34 | USA | 20 (100%) | TRV-50 fundus camera (Topcon)
Training data: CHASEDB135 | UK | 28 (0%) | NM-200D handheld fundus camera (Nidek, Aichi, Japan)
Training data: HRF36 | Germany and Czech Republic | 45 (100%) | CF-60UVi camera (Canon)
Training data: IOSTAR37 | Netherlands and China | 30 (53.3%) | EasyScan camera (i-Optics, Rijswijk, Netherlands)
Training data: LES-AV38 | NR | 22 (0%) | Visucam Pro NM fundus camera (Carl Zeiss Meditec, Jena, Germany)
External validation datab: AV-WIDE19,39 | USA | 30 (100%) | 200Tx ultra-widefield imaging device (Optos, Dunfermline, UK)
External validation datab: DR HAGIS40 | UK | 39 (100%) | TRC-NW6s (Topcon), TRC-NW8 (Topcon), or CR-DGi fundus camera (Canon)

Artery/Vein Segmentation
Dataset | Country of Origin | Image Quantitya | Device (Manufacturer)
Training data: DRIVE-AV33,41 | Netherlands | 40 (100%) | CR5 non-mydriatic 3CCD camera (Canon)
Training data: HRF-AV36,42 | Germany and Czech Republic | 45 (100%) | CF-60UVi camera (Canon)
Training data: LES-AV38 | NR | 22 (9%) | Visucam Pro NM fundus camera (Zeiss)
External validation data: IOSTAR-AV37,43 | Netherlands and China | 30 (53.3%) | EasyScan camera (i-Optics)

Optic Disc Segmentation
Dataset | Country of Origin | Image Quantitya | Device (Manufacturer)
Training data: REFUGE44 | China | 800 (100%) | Visucam 500 fundus camera (Zeiss) and CR-2 camera (Canon)
Training data: GAMMA45,46 | China | 100 (100%) | —
External validation datac: IDRID47 | India | 81 (100%) | VX-10α digital fundus camera (Kowa, Las Vegas, NV)

External validation data are unseen during model training and were used purely to evaluate trained model performance on out-of-distribution data with different countries of origin and imaging devices. EyePACS-Q is a subset of EyePACS with image quality grading annotation. NR, not reported.
a Image quantity indicates the number of images used in this work; the parentheses show the proportion of macula-centered images.
b Although we have evaluated the binary vessel segmentation model on the ultra-widefield retinal fundus dataset AV-WIDE, we recommend using AutoMorph on retinal fundus photographs with a 25° to 60° FOV, as all of the deep learning models are trained using images with a FOV of 25° to 60°, and the preprocessing step is tailored for images with this FOV.
c Evaluated on the disc only, as no cup annotation is available.

output from eight models trained with different subsets of the training data, as this generally gives a more robust result.56 Moreover, the average value and standard deviation (SD) of the eight probabilities are calculated for confidence analysis. The average probability indicates the average confidence of the prediction; low-average cases are prone to false predictions, such as that in Figure 3c. Meanwhile, the SD represents the inconsistency between


Figure 2. Features measured by AutoMorph, including tortuosity, vessel caliber, cup-to-disc ratio, and others. For each image, the optic disc/cup information is measured, including the height and width, as well as the cup-to-disc ratio. For binary vessels, the tortuosity, fractal dimension, vessel density, and average width are measured. In addition to these features, arteries/veins are also used to measure the caliber features CRAE, CRVE, and AVR by the Hubbard and Knudtson methods.
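As a concrete illustration of the caliber summaries named in the caption, the Knudtson revision combines the six widest arterioles (or venules) pairwise, widest with narrowest, using branching coefficients of 0.88 for arterioles and 0.95 for venules. The sketch below implements that published iterative pairing procedure as we understand it; it is an illustration, not AutoMorph's own code, and the Hubbard variant (which uses different formulas) is omitted:

```python
import math

def knudtson_equivalent(widths, k):
    """Iteratively pair the widest with the narrowest vessel and combine
    each pair as w = k * sqrt(w1^2 + w2^2) until one summary caliber
    remains. k = 0.88 for arterioles (CRAE), k = 0.95 for venules (CRVE)."""
    ws = sorted(widths)
    while len(ws) > 1:
        paired = []
        while len(ws) > 1:
            narrow, wide = ws.pop(0), ws.pop()
            paired.append(k * math.sqrt(narrow * narrow + wide * wide))
        paired.extend(ws)  # an odd leftover vessel carries to the next round
        ws = sorted(paired)
    return ws[0]

def crae(arteriole_widths):
    """Central retinal arteriolar equivalent from the six largest arterioles."""
    return knudtson_equivalent(arteriole_widths, 0.88)

def crve(venule_widths):
    """Central retinal venular equivalent from the six largest venules."""
    return knudtson_equivalent(venule_widths, 0.95)

def avr(arteriole_widths, venule_widths):
    """Arteriolar-to-venular ratio."""
    return crae(arteriole_widths) / crve(venule_widths)
```

Note that even with identical artery and vein widths the AVR falls below 1, because the arteriolar branching coefficient (0.88) is smaller than the venular one (0.95).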

models. A high inconsistency likely corresponds to a false prediction, as shown in Figure 3d. Images with either a low average probability or a high SD are automatically recognized as low-confidence images and rectified as ungradable. False gradable images can fail the anatomical segmentation module, thus generating large errors in vascular feature measurement. The confidence analysis economizes physician intervention and increases the reliability of AutoMorph by filtering out these potential errors. To our knowledge, this is the first report of a confidence analysis combined with a model ensemble integrated within a vessel analysis pipeline. The average threshold corresponds to a change of operating point, and the SD threshold draws on uncertainty theory. In this work, we set an average threshold of 0.75 and an SD threshold of 0.1 to filter out false gradable images. Specifically, predictions with an average probability lower than 0.75 or an SD larger than 0.1 were rectified as ungradable. The rationale for selecting these threshold values is based on the probability distribution histogram on the tuning data. More details are described in Supplementary Material S2 and Supplementary Figure S4.

Statistical Analyses and Compared Methods

For the deep learning functional modules, well-established expert annotation is used as the reference standard to quantitatively evaluate module performance. We calculated sensitivity, specificity, positive predictive value (precision), accuracy, area under the receiver operating characteristic curve (AUC-ROC), F1-score, and intersection over union (IoU) to verify model performance. These metrics are defined as

Accuracy = (TP + TN) / (TP + FP + TN + FN)
Sensitivity = TP / (TP + FN)
Specificity = TN / (TN + FP)
Precision = TP / (TP + FP)
F1 = (2 × Sensitivity × Precision) / (Sensitivity + Precision)

where TP, TN, FP, and FN indicate true positives, true negatives, false positives, and false negatives, respectively. The AUC-ROC is a performance measure for classification at various threshold settings; it indicates how well the model distinguishes between classes. In segmentation tasks, IoU measures the degree of overlap between ground-truth maps and segmentation maps. Following the same setting as prior work,31,39,57–59 we set ungradable images as the positive class in image quality grading. The probability of the ungradable category equals that of reject quality, and the probability of the gradable category is the sum of the good and usable qualities. As introduced in the discussion of confidence analysis, we used a mean value of 0.75 and an SD of 0.1 as thresholds to obtain the final rectified gradable and ungradable categories. For binary vessel segmentation, each pixel of the retinal fundus photograph corresponds to a


Figure 3. Confidence analysis for image quality grading. M1 to M8 represent the eight ensemble models. For each image, the predicted category is converted to gradable or ungradable (good and usable count as gradable; reject as ungradable). The average probability and SD are calculated for the predicted category. (a, b) Two image cases with high prediction confidence. The case shown in (c) is classified as gradable with a low average probability of 0.619, and the case in (d) has a high SD of 0.191; both are defined as low-confidence images in our work. Although (c) and (d) are preliminarily classified as gradable, the final classification is rectified to ungradable by the confidence threshold.
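The rectification rule illustrated in panels (c) and (d) can be written down directly. The following is a sketch of the thresholding logic described in the text (average probability below 0.75 or SD above 0.1 across the eight ensemble members), not the released implementation:

```python
from statistics import mean, stdev

AVG_THRESHOLD = 0.75   # minimum average predicted probability
SD_THRESHOLD = 0.10    # maximum disagreement across ensemble members

def rectify_grade(label, member_probs):
    """Apply the paper's confidence rule to an ensemble prediction.

    member_probs holds each ensemble model's probability for the predicted
    category. A preliminarily 'gradable' image is rectified to 'ungradable'
    when the average probability is below 0.75 or the SD across models
    exceeds 0.1 (a low-confidence prediction)."""
    if label == "ungradable":
        return label
    low_confidence = (mean(member_probs) < AVG_THRESHOLD
                      or stdev(member_probs) > SD_THRESHOLD)
    return "ungradable" if low_confidence else "gradable"
```

A Figure 3c-style case, eight consistent but weak votes such as `[0.62] * 8`, fails the average threshold; a Figure 3d-style case, seven strong votes and one dissenting model, fails the SD threshold instead.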

binary classification task: the vessel pixels are the positive class and the background pixels are the negative class. The probability range for each pixel is from 0 to 1, where a larger value indicates a higher probability of being a vessel pixel. We thresholded the segmentation map at 0.5, which is a standard threshold for binary medical image segmentation tasks. Optic disc segmentation is similar to binary vessel segmentation, except that the positive class is the optic disc pixels. For artery/vein segmentation, each pixel has a four-class probability over artery, vein, uncertain pixel, and background. Following standard settings for multiclass segmentation tasks, the category with the largest probability across the four classes is the predicted pixel category. More information is listed in Supplementary Material S3.

We conducted a quantitative comparison with other competitive methods to characterize the generalizability of AutoMorph under external validation. We used internal validation results from other published work to provide a benchmark for a well-performing model. These methods used a reasonable proportion of the data for model training and the remainder for internal validation (e.g., fivefold validation, meaning that 80% of the images are used for training and tuning and 20% for validating the trained model) and claimed state-of-the-art performance. As shown in Table 1, the models of AutoMorph are trained on several public datasets and externally validated on separate datasets, whereas the compared methods39,57–59 are trained on the same domain of data as the validation data but with fewer training images. The goal of the comparison was not to prove the technical strengths of AutoMorph over recent methods, as this has already been verified in previously published work.22,23,47,48 Rather, we aimed to demonstrate that, due to the diversity of its training data, AutoMorph performs well on external datasets, even when these datasets include pathology and show large domain differences from the training data. Additionally, to demonstrate the technical superiority of the method, we have provided the internal validation of AutoMorph in Supplementary Table S1.

Considering that we employ standard formulas29,50–52 to measure the vascular morphology features,


the measurement error comes only from inaccuracy in the anatomical segmentation. In order to evaluate the measurement error that occurs as a result of vessel segmentation, we measure the vascular features based on both the AutoMorph segmentation and the expert vessel annotation, and then we draw Bland–Altman plots. Following the same evaluation as prior work,3,60 intraclass correlation coefficients (ICCs) are calculated to quantitatively show agreement. Additionally, boxplots of the differences between the vascular features from AutoMorph segmentation and expert annotation are shown in Supplementary Figures S9–S11.

Results

Results for the external validation of AutoMorph are summarized in Table 2.

Image Quality Grading

The internal validation is on the EyePACS-Q test data. For a fair comparison,31 we evaluated the image quality grading performance for categorizing good, usable, and reject quality. The quantitative results are listed in Table 2. The classification F1-score achieved 0.86, on par with the state-of-the-art method, which reported an F1-score of 0.86.31 The prediction was converted to gradable (good and usable quality) and ungradable (reject quality), and the resulting confusion matrix for validation on the EyePACS-Q test is shown in Figure 4. We found that confidence thresholding brings a trade-off in performance metrics, suppressing the false gradable ratio but simultaneously increasing false negatives. False gradable images are prone to failing the anatomical segmentation module and generating large errors and outliers in vascular feature measurement. Although this thresholding filters out some images of adequate quality, it maintains the reliability of AutoMorph.

The external validation is on the general-purpose diabetic retinopathy dataset (DDR) test data. As DDR includes only two categories in its image quality annotation (gradable and ungradable), we first converted the AutoMorph predictions of good and usable quality to gradable and reject quality to ungradable and then evaluated the quantitative results. Although the difference in annotation might underestimate the AutoMorph image quality grading capability, the performance was satisfactory compared with the internal group, as shown in Table 2. The confusion matrix and AUC-ROC curve are shown in Supplementary Figure S5. All ungradable images were correctly identified, which is significant with regard to the reliability of AutoMorph.

Table 2. Validation of Functional Modules and Comparison With Other Methods

Image Quality Grading (EyePACS-Q Test; DDR Test)
Metric | AutoMorph (Internal) | Comparison31 (Internal) | AutoMorph (External) | Comparisona (Internal)
Sensitivity | 0.85 | 0.85 | 1 | 0.93
Specificity | 0.93 | NR | 0.89 | 0.97
Precision | 0.87 | 0.87 | 0.6 | 0.73
Accuracy | 0.92 | 0.92 | 0.91 | 0.99
AUC-ROC | 0.97 | NR | 0.99 | 0.99
F1-score | 0.86 | 0.86 | 0.75 | 0.82

Artery/Vein Segmentation (IOSTAR-AV)
Metric | AutoMorph (External) | Comparison58 (Internal)
Sensitivity | 0.64 | 0.79
Specificity | 0.98 | 0.76
Precision | 0.68 | NR
Accuracy | 0.96 | 0.78
AUC-ROC | 0.95 | NR
F1-score | 0.66 | NR
IoU | 0.53 | NR

Binary Vessel Segmentation (Ultra-Widefield: AV-WIDE; Standard Field: DR HAGIS)
Metric | AutoMorph (External) | Comparison39 (Internal) | AutoMorph (External) | Comparison57 (Internal)
Sensitivity | 0.71 | 0.78 | 0.84 | 0.67
Specificity | 0.98 | NR | 0.98 | 0.98
Precision | 0.75 | 0.82 | 0.73 | NR
Accuracy | 0.96 | 0.97 | 0.97 | 0.97
AUC-ROC | 0.96 | NR | 0.98 | NR
F1-score | 0.73 | 0.8 | 0.78 | 0.71
IoU | 0.57 | NR | 0.64 | NR

Optic Disc Segmentation (IDRID)
Metric | AutoMorph (External) | Comparison59 (Internal)
Sensitivity | 0.9 | 0.9
Specificity | 0.95 | NR
Precision | 0.94 | NR
Accuracy | 0.99 | 0.99
AUC-ROC | 0.95 | NR
F1-score | 0.94 | NR
IoU | 0.91 | 0.85

"Internal" indicates that the validation and training data are from the same dataset but isolated; "external" means that the validation data are from external datasets. The comparisons are with competitive methods for image quality grading,31 binary vessel segmentation,39,57 artery/vein segmentation,58 and optic disc segmentation.59 NR, not reported.
a As no comparison method exists for the DDR test, we compared AutoMorph (external) with the same architecture, EfficientNet-b4, trained on the DDR training data (internal).
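The three-to-two class mapping used in both evaluations above (gradable probability as the sum of good and usable; reject as the ungradable, positive class) can be sketched as:

```python
def to_binary(p_good, p_usable, p_reject):
    """Collapse the three quality classes to the binary setting used for
    the EyePACS-Q and DDR evaluations: the gradable probability is the
    sum of good and usable, and the ungradable (positive-class)
    probability is that of reject quality. Returns the binary label and
    its probability."""
    p_gradable = p_good + p_usable
    if p_reject > p_gradable:
        return "ungradable", p_reject
    return "gradable", p_gradable
```

For example, class probabilities of (0.5, 0.3, 0.2) for (good, usable, reject) map to a gradable prediction with probability 0.8.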

Downloaded from tvst.arvojournals.org on 09/04/2023


AutoMorph: Automated Retinal Vascular Morphology Quantification TVST | July 2022 | Vol. 11 | No. 7 | Article 12 | 8

Figure 4. Confusion matrices of the grading results on the EyePACS-Q test data: (a) results before confidence thresholding; (b) results after thresholding. Values are normalized by row, so the diagonal shows the correct classification ratio. The red box indicates false gradables (i.e., ungradable images wrongly classified as gradable), and the green box shows the percentage of false ungradables (i.e., gradable images wrongly categorized as ungradable). The false gradable rate in (b) is reduced by 76.2% compared with (a), but the false ungradable rate increases in (b).
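The entries of a confusion matrix like the one in Figure 4 map directly onto the reported metrics. A brief sketch using the definitions from the Methods, with ungradable as the positive class:

```python
def classification_metrics(tp, fp, tn, fn):
    """Compute the report's classification metrics from confusion-matrix
    counts (ungradable is treated as the positive class)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * sensitivity * precision / (sensitivity + precision)
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "precision": precision,
        "accuracy": accuracy,
        "f1": f1,
    }
```

The counts here are illustrative placeholders rather than the actual EyePACS-Q cell values, which Figure 4 reports only as row-normalized ratios.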

Figure 5. Visualization results of anatomical segmentation, including binary vessel (first two columns), artery/vein (third column), and optic
disc (final column).
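Maps like those in Figure 5 are obtained from the model probabilities by the decision rules described in the Methods: a 0.5 threshold for the binary vessel and disc probability maps, and a per-pixel argmax over the four artery/vein classes. A minimal NumPy sketch of those two rules:

```python
import numpy as np

def binarize_vessel_map(prob_map, threshold=0.5):
    """Turn a per-pixel vessel (or disc) probability map into a binary
    mask using the standard 0.5 threshold described in the Methods."""
    return (np.asarray(prob_map) >= threshold).astype(np.uint8)

def label_artery_vein(prob_maps):
    """For artery/vein segmentation each pixel carries four class
    probabilities (artery, vein, uncertain, background) along axis 0;
    the predicted class is the per-pixel argmax, following standard
    multiclass segmentation practice."""
    return np.argmax(np.asarray(prob_maps), axis=0)
```

For example, a 2 x 2 probability map `[[0.7, 0.2], [0.5, 0.1]]` binarizes to `[[1, 0], [1, 0]]`, with the 0.5 pixel counted as vessel because the threshold is inclusive here.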

Anatomical Segmentation

Visualization results are presented in Figure 5, and quantitative results are listed in Table 2. For binary vessel segmentation, two public datasets, AV-WIDE and the diabetic retinopathy, hypertension, age-related macular degeneration, and glaucoma image set (DR HAGIS), are employed for model validation. The binary vessel segmentation model performs comparably to the state of the art on the fundus photography data (DR HAGIS) and moderately so on the ultra-widefield data (AV-WIDE). For artery/vein segmentation, the performance is validated on the IOSTAR-AV dataset. Compared with the most recent method,58 AutoMorph achieves lower sensitivity but much higher specificity. Visualization results for two challenging cases from Moorfields Eye Hospital and the Online Retinal Fundus Image Dataset for Glaucoma Analysis and Research (ORIGA) are shown in Supplementary Figure S6. For optic disc segmentation, we validated the performance on the Indian Diabetic Retinopathy Image Dataset (IDRID). The performance is on par with the compared method,59 and the F1-score is slightly higher. Although pathology can disturb segmentation, the disc segmentation shows robustness.


Table 3. Agreement of Measured Vascular Features Between AutoMorph and Expert Annotation

ICC (95% Confidence Interval)
Metric | Zone B | Zone C | Whole Image

DR HAGIS
Fractal dimension | 0.94 (0.88–0.97) | 0.98 (0.95–0.99) | 0.94 (0.88–0.97)
Vessel density | 0.98 (0.96–0.99) | 0.97 (0.94–0.99) | 0.94 (0.88–0.97)
Average width | 0.95 (0.89–0.98) | 0.96 (0.93–0.98) | 0.97 (0.95–0.99)
Distance tortuosity | 0.80 (0.59–0.91) | 0.85 (0.69–0.93) | 0.86 (0.73–0.93)
Squared curvature tortuosity | 0.68 (0.34–0.85) | 0.88 (0.75–0.94) | 0.84 (0.68–0.92)
Tortuosity density | 0.89 (0.77–0.95) | 0.70 (0.38–0.86) | 0.87 (0.74–0.93)

IOSTAR-AV
CRAE (Hubbard) | 0.81 (0.56–0.92) | 0.82 (0.57–0.91) | —
CRVE (Hubbard) | 0.80 (0.54–0.91) | 0.78 (0.52–0.89) | —
AVR (Hubbard) | 0.87 (0.69–0.94) | 0.81 (0.66–0.92) | —
CRAE (Knudtson) | 0.76 (0.45–0.90) | 0.75 (0.44–0.89) | —
CRVE (Knudtson) | 0.85 (0.67–0.94) | 0.86 (0.58–0.90) | —
AVR (Knudtson) | 0.85 (0.66–0.94) | 0.82 (0.51–0.91) | —

The agreement of vessel caliber was validated on the IOSTAR-AV dataset and that of the other metrics on the DR HAGIS dataset. Because the caliber features rely on the six largest arteries and veins in Zones B and C, there is no caliber feature at the whole-image level.
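The Bland–Altman statistics reported alongside the ICCs in Table 3 follow the standard construction: the mean difference (MD) between paired measurements and the 95% limits of agreement (LOA = MD ± 1.96 SD of the differences). A minimal sketch of that computation, not the exact analysis script used for Figure 6:

```python
from statistics import mean, stdev

def bland_altman(auto_values, expert_values):
    """Mean difference (MD) and 95% limits of agreement between paired
    feature values measured from AutoMorph segmentation and from expert
    annotation. LOA = MD +/- 1.96 * SD of the paired differences."""
    diffs = [a - e for a, e in zip(auto_values, expert_values)]
    md = mean(diffs)
    sd = stdev(diffs)  # sample standard deviation
    return md, (md - 1.96 * sd, md + 1.96 * sd)
```

For example, paired values `[1.0, 2.0, 3.0, 4.0]` against `[1.1, 1.9, 3.2, 3.8]` give an MD of 0 with LOA of roughly plus or minus 0.36.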

Vascular Feature Measurement

The ICCs between AutoMorph features and expert features are listed in Table 3. For binary vessel morphology, the fractal dimension, vessel density, and average width metrics all achieve excellent reliability (ICC > 0.9). The other metrics show good consistency. Bland–Altman plots for Zone B are shown in Figure 6. All features show agreement. For the fractal dimension, the mean difference (MD) is –0.01, with 95% limits of agreement (LOA) of –0.05 to 0.03; for vessel density, the MD is 0.001, with 95% LOA of 0 to 0.002; for average width, the MD is 1.32 pixels, with 95% LOA of 0.44 to 2.19; for distance tortuosity, the MD is 0.02, with 95% LOA of –2.18 to 2.22; for squared curvature tortuosity, the MD is –1.02, with 95% LOA of –14.59 to 12.56; for tortuosity density, the MD is 0.02, with 95% LOA of –0.09 to 0.13; for CRAE (Hubbard), the MD is –0.13, with 95% LOA of –2.49 to 2.24; for CRVE (Hubbard), the MD is 0, with 95% LOA of –2.9 to 2.9; and for AVR (Hubbard), the MD is –0.03, with 95% LOA of –0.17 to 0.11. The results at Zone C and the whole image are provided in Supplementary Figures S7 and S8. Note that CRAE, CRVE, and average width are presented in pixels, as resolution information is unknown. Some images with large errors are listed in Supplementary Figure S12.

Running Efficiency and Interface

The average running time for one image is about 20 seconds using a single graphics processing unit (GPU; Tesla T4 graphics card), from preprocessing to feature measurement. To ensure accessibility for researchers without coding experience, we have made AutoMorph compatible with Google Colaboratory (free GPU) (Fig. 7). The process involves placing images in a specified folder and then clicking the “run” command. All results are stored, including segmentation maps and a file containing all measured features.

Discussion

In this report, the four functional modules of the AutoMorph pipeline achieved comparable or better performance than the state of the art for both image quality grading and anatomical segmentation. Furthermore, our approach to confidence analysis decreased the number of false gradable images by 76%, greatly enhancing the reliability of our pipeline. Hence, by using a tailored combination of deep learning techniques, it is practical to analyze retinal vascular morphology accurately in a fully automated way. Although we have evaluated the binary vessel segmentation model on the ultra-widefield retinal fundus dataset AV-WIDE, we recommend using AutoMorph on retinal fundus photographs with a 25° to 60° field of view (FOV), as all of the deep learning models are trained using images with FOVs of 25° to 60°, and the preprocessing step is tailored for images with this FOV.
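The mean differences and limits of agreement quoted above follow the standard Bland–Altman construction: the MD is the mean of the paired differences, and the 95% LOA are MD ± 1.96 SD of those differences. A minimal sketch (our own helper, not part of AutoMorph):

```python
from statistics import mean, stdev

def bland_altman(method_a, method_b):
    """Mean difference (MD) and 95% limits of agreement (LOA)
    between two sets of paired measurements."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    md = mean(diffs)
    sd = stdev(diffs)  # sample standard deviation (n - 1 denominator)
    return md, (md - 1.96 * sd, md + 1.96 * sd)
```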


Figure 6. Bland–Altman plots of vascular feature agreement between expert annotation and AutoMorph segmentation at Zone B. The features in the first two rows (e.g., tortuosity, fractal dimension) were calculated with the binary vessel segmentation map from DR HAGIS; the features in the last row (caliber) were measured with the artery/vein segmentation map from IOSTAR-AV. In each subplot, the central line indicates the mean difference and the two dashed lines represent the 95% limits of agreement. The unit of average width, CRAE, and CRVE is the pixel, as resolution was unknown.
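The caliber features in that last row are classical "big-6" summaries: the six widest arteries (or veins) are iteratively paired, widest with narrowest, and merged using an empirical branching coefficient. A hedged sketch of the Knudtson revised formulas, assuming widths in pixels (helper names are ours; AutoMorph's implementation may differ):

```python
import math

def _combine(widths, k):
    """Knudtson-style iterative pairing: each pass merges the widest with
    the narrowest vessel as k * sqrt(w1^2 + w2^2), carrying any unpaired
    middle value, until a single equivalent caliber remains."""
    w = sorted(float(x) for x in widths)
    while len(w) > 1:
        nxt = []
        while len(w) > 1:
            narrowest, widest = w.pop(0), w.pop(-1)
            nxt.append(k * math.hypot(narrowest, widest))
        nxt.extend(w)  # odd count: carry the middle value to the next pass
        w = sorted(nxt)
    return w[0]

def crae(artery_widths):
    return _combine(artery_widths, 0.88)  # arteriolar branching coefficient

def crve(vein_widths):
    return _combine(vein_widths, 0.95)    # venular branching coefficient

def avr(artery_widths, vein_widths):
    return crae(artery_widths) / crve(vein_widths)
```

Because the coefficients are dimensionless, AVR is unaffected by the unknown image resolution, whereas CRAE and CRVE stay in pixel units.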

AutoMorph maintains computational transparency despite the use of deep learning techniques. Recently, similar systems have used deep learning models to skip intermediary steps and instead directly predict morphology features. For example, the Singapore I Vessel Assessment (SIVA) deep learning system (DLS) predicts vessel caliber from retinal fundus images without optic disc localization or artery/vein segmentation.3 Another work directly predicts CVD risk factors from retinal fundus images in an end-to-end manner.61 Although these designs provide some insight into the applications of deep learning to ophthalmology, the end-to-end pipeline sacrifices transparency and raises interpretability concerns, representing a potential barrier to clinical implementation.62,63 Specifically, considering that some formulas are empirically defined (e.g., CRAE and CRVE are calculated based on the six widest arteries and veins), it is difficult to verify whether a model can learn this type of derivation. In contrast, the AutoMorph pipeline maintains transparency, as the individual processes can be decomposed: models are first employed for anatomical segmentation, and vascular features are then measured with traditional formulas. This process is consistent with the typical pipeline of human computation, thus improving the credibility of the feature measurements.

The study cohort is selected by the image quality grading module. In this work, differing from previous work that used only good-quality images, we explored the effectiveness of usable images. Although purely including good-quality images can avoid potentially challenging cases for anatomic segmentation


Figure 7. Interface of AutoMorph on Google Colaboratory. After uploading images and clicking the “run” button, all processes are executed and results stored, requiring no human intervention. The left side shows the file directory, and the bottom right lists five examples with a subset of the features.

models (e.g., images with gloomy illumination), it filters out usable images that could contribute to a more general conclusion with a larger study cohort. Also, in clinical practice, a considerable number of images are of usable quality but do not qualify as perfectly good quality. A pipeline developed in an environment similar to clinical reality is more likely to be deployed in the clinic. In image quality grading, the confidence analysis recognized a considerable proportion of false gradable images and corrected them to reject quality by thresholding, as shown in Figures 3 and 4. This prevents some reject-quality images from failing the anatomical segmentation and then generating large errors in feature measurement. Although this thresholding increased the false ungradable cases (Fig. 4b, green box), the priority of recognizing the false gradable images is secured. Of course, it is acceptable to include only the good-quality images in research cohorts, as in previous work, when the quantity of good-quality images is large.

Although this work demonstrates the effectiveness of a deep learning pipeline for analyzing retinal vascular morphology, some challenges remain regarding technique and standardization. First, annotating retinal image quality is subjective and lacks strict guidelines; therefore, it is difficult to benchmark external validation performance. Second, there is still room for improving anatomical segmentation, especially artery/vein segmentation. Third, considering that agreement varies across vascular features (Table 3), it is necessary to compare the robustness of these features and understand the pros and cons of each one. Finally, a uniform protocol for validating retinal analysis pipelines is required, because existing software (e.g., RA,28 IVAN,6 SIVA,29 VAMPIRE25) shows high variation in feature measurement.64,65 These four challenges exist in the field of oculomics, presenting an impediment to more extensive research.

We have made AutoMorph publicly available to benefit research in the field of oculomics, which studies the associations between ocular biomarkers and systemic disease. We designed the AutoMorph interface using Google Colaboratory to facilitate its use by clinicians without coding experience. In future work, we will investigate solutions dedicated to the above challenges in oculomics research. Also, the feasibility of the automatic pipeline can be extended to other modalities, such as optical coherence tomography (OCT) and OCT angiography.

Acknowledgments

Supported by grants from the Engineering and Physical Sciences Research Council (EP/M020533/1, EP/R014019/1, and EP/V034537/1); by the National Institute for Health and Care Research Biomedical Research Centre; by an MRC Clinical Research Training Fellowship (MR/TR000953/1 to SKW); by a Moorfields Eye Charity Career Development Award


(R190028A to PAK); and by a UK Research & Innovation Future Leaders Fellowship (MR/T019050/1 to PAK).

Disclosure: Y. Zhou, None; S.K. Wagner, None; M.A. Chia, None; A. Zhao, None; P. Woodward-Court, None; M. Xu, None; R. Struyven, None; D.C. Alexander, None; P.A. Keane, DeepMind (C), Roche (C), Novartis (C), Apellis (C), BitFount (C), Big Picture Medical (I), Heidelberg Engineering (F), Topcon (F), Allergan (F), Bayer (F)

* DCA and PAK contributed equally to this work.

References

1. Wagner SK, Fu DJ, Faes L, et al. Insights into systemic disease through retinal imaging-based oculomics. Transl Vis Sci Technol. 2020;9:6.
2. Rizzoni D, Muiesan ML. Retinal vascular caliber and the development of hypertension: a meta-analysis of individual participant data. J Hypertens. 2014;32:225–227.
3. Cheung CY, Xu D, Cheng C-Y, et al. A deep-learning system for the assessment of cardiovascular disease risk via the measurement of retinal-vessel calibre. Nat Biomed Eng. 2021;5:498–508.
4. Wong TY, Mitchell P. Hypertensive retinopathy. N Engl J Med. 2004;351:2310–2317.
5. Cheung N, Bluemke DA, Klein R, et al. Retinal arteriolar narrowing and left ventricular remodeling: the Multi-Ethnic Study of Atherosclerosis. J Am Coll Cardiol. 2007;50:48–55.
6. Wong TY, Islam FMA, Klein R, et al. Retinal vascular caliber, cardiovascular risk factors, and inflammation: the Multi-Ethnic Study of Atherosclerosis (MESA). Invest Ophthalmol Vis Sci. 2006;47:2341.
7. Wong TY, Klein R, Sharrett AR, et al. Retinal arteriolar diameter and risk for hypertension. Ann Intern Med. 2004;140:248–255.
8. Wong TY, Shankar A, Klein R, Klein BEK, Hubbard LD. Prospective cohort study of retinal vessel diameters and risk of hypertension. BMJ. 2004;329:79.
9. Jaulim A, Ahmed B, Khanam T, Chatziralli IP. Branch retinal vein occlusion: epidemiology, pathogenesis, risk factors, clinical features, diagnosis, and complications. An update of the literature. Retina. 2013;33:901–910.
10. Yau JWY, Lee P, Wong TY, Best J, Jenkins A. Retinal vein occlusion: an approach to diagnosis, systemic risk factors and management. Intern Med J. 2008;38:904–910.
11. Wong TY. Retinal vessel diameter as a clinical predictor of diabetic retinopathy progression: time to take out the measuring tape. Arch Ophthalmol. 2011;129:95–96.
12. Owen CG, Rudnicka AR, Nightingale CM, et al. Retinal arteriolar tortuosity and cardiovascular risk factors in a multi-ethnic population study of 10-year-old children; the Child Heart and Health Study in England (CHASE). Arterioscler Thromb Vasc Biol. 2011;31:1933–1938.
13. Cheung CY-L, Zheng Y, Hsu W, et al. Retinal vascular tortuosity, blood pressure, and cardiovascular risk factors. Ophthalmology. 2011;118:812–818.
14. Owen CG, Rudnicka AR, Mullen R, et al. Measuring retinal vessel tortuosity in 10-year-old children: validation of the Computer-Assisted Image Analysis of the Retina (CAIAR) program. Invest Ophthalmol Vis Sci. 2009;50:2004–2010.
15. Couper DJ, Klein R, Hubbard LD, et al. Reliability of retinal photography in the assessment of retinal microvascular characteristics: the Atherosclerosis Risk in Communities Study. Am J Ophthalmol. 2002;133:78–88.
16. Huang F, Dashtbozorg B, ter Haar Romeny BM. Artery/vein classification using reflection features in retina fundus images. Mach Vis Appl. 2018;29:23–34.
17. Mirsharif Q, Tajeripour F, Pourreza H. Automated characterization of blood vessels as arteries and veins in retinal images. Comput Med Imaging Graph. 2013;37:607–617.
18. Dashtbozorg B, Mendonça AM, Campilho A. An automatic graph-based approach for artery/vein classification in retinal images. IEEE Trans Image Process. 2014;23:1073–1083.
19. Estrada R, Allingham MJ, Mettu PS, Cousins SW, Tomasi C, Farsiu S. Retinal artery-vein classification via topology estimation. IEEE Trans Med Imaging. 2015;34:2518–2534.
20. Srinidhi CL, Aparna P, Rajan J. Automated method for retinal artery/vein separation via graph search metaheuristic approach [published online ahead of print January 1, 2019]. IEEE Trans Image Process, https://doi.org/10.1109/TIP.2018.2889534.
21. Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. In: Navab N, Hornegger J, Wells WM, Frangi A, eds. Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015. Berlin: Springer International Publishing; 2015:234–241.


22. Zhou Y, Xu M, Hu Y, et al. Learning to address intra-segment misclassification in retinal imaging. In: de Bruijne M, Cattin PC, Cotin S, et al., eds. Medical Image Computing and Computer Assisted Intervention–MICCAI 2021. Berlin: Springer International Publishing; 2021:482–492.
23. Zhou Y, Chen Z, Shen H, Zheng X, Zhao R, Duan X. A refined equilibrium generative adversarial network for retinal vessel segmentation. Neurocomputing. 2021;437:118–130.
24. Fraz MM, Welikala RA, Rudnicka AR, Owen CG, Strachan DP, Barman SA. QUARTZ: quantitative analysis of retinal vessel topology and size – an automated system for quantification of retinal vessels morphology. Expert Syst Appl. 2015;42:7221–7234.
25. Perez-Rovira A, MacGillivray T, Trucco E, et al. VAMPIRE: vessel assessment and measurement platform for images of the retina. In: Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). Piscataway, NJ: Institute of Electrical and Electronics Engineers; 2011:3391–3394.
26. Futoma J, Simons M, Panch T, Doshi-Velez F, Celi LA. The myth of generalisability in clinical research and machine learning in health care. Lancet Digit Health. 2020;2:e489–e492.
27. Mårtensson G, Ferreira D, Granberg T, et al. The reliability of a deep learning model in clinical out-of-distribution MRI data: a multicohort study. Med Image Anal. 2020;66:101714.
28. Wong TY, Shankar A, Klein R, Klein BEK. Retinal vessel diameters and the incidence of gross proteinuria and renal insufficiency in people with type 1 diabetes. Diabetes. 2004;53:179–184.
29. Cheung CY, Tay WT, Mitchell P, et al. Quantitative and qualitative retinal microvascular characteristics and blood pressure. J Hypertens. 2011;29:1380–1391.
30. Khan SM, Liu X, Nath S, et al. A global review of publicly available datasets for ophthalmological imaging: barriers to access, usability, and generalisability. Lancet Digit Health. 2021;3:e51–e66.
31. Fu H, Wang B, Shen J, et al. Evaluation of retinal image quality assessment networks in different color-spaces. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. Cham: Springer; 2019:48–56.
32. Li T, Gao Y, Wang K, Guo S, Liu H, Kang H. Diagnostic assessment of deep learning algorithms for diabetic retinopathy screening. Inform Sci. 2019;501:511–522.
33. Staal J, Abramoff MD, Niemeijer M, Viergever MA, van Ginneken B. Ridge-based vessel segmentation in color images of the retina. IEEE Trans Med Imaging. 2004;23:501–509.
34. Hoover A, Kouznetsova V, Goldbaum M. Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response. IEEE Trans Med Imaging. 2000;19:203–210.
35. Fraz MM, Remagnino P, Hoppe A, et al. An ensemble classification-based approach applied to retinal blood vessel segmentation. IEEE Trans Biomed Eng. 2012;59:2538–2548.
36. Budai A, Bock R, Maier A, Hornegger J, Michelson G. Robust vessel segmentation in fundus images. Int J Biomed Imaging. 2013;2013:154860.
37. Zhang J, Dashtbozorg B, Bekkers E, Pluim JPW, Duits R, Ter Haar Romeny BM. Robust retinal vessel segmentation via locally adaptive derivative frames in orientation scores. IEEE Trans Med Imaging. 2016;35:2631–2644.
38. Orlando JI, Breda JB, van Keer K, Blaschko MB, Blanco PJ, Bulant CA. Towards a glaucoma risk index based on simulated hemodynamics from fundus images. In: Medical Image Computing and Computer Assisted Intervention–MICCAI 2018. Berlin: Springer International Publishing; 2018:65–73.
39. Khanal A, Estrada R. Dynamic deep networks for retinal vessel segmentation. Front Comput Sci. 2020;2:35.
40. Holm S, Russell G, Nourrit V, McLoughlin N. DR HAGIS: a fundus image database for the automatic extraction of retinal surface vessels from diabetic patients. J Med Imaging (Bellingham). 2017;4:014503.
41. Hu Q, Abràmoff MD, Garvin MK. Automated separation of binary overlapping trees in low-contrast color retinal images. Med Image Comput Comput Assist Interv. 2013;16:436–443.
42. Hemelings R, Elen B, Stalmans I, Van Keer K, De Boever P, Blaschko MB. Artery-vein segmentation in fundus images using a fully convolutional network. Comput Med Imaging Graph. 2019;76:101636.
43. Abbasi-Sureshjani S, Smit-Ockeloen I, Zhang J, Ter Haar Romeny B. Biologically-inspired supervised vasculature segmentation in SLO retinal fundus images. In: Kamel M, Campilho A, eds. International Conference Image Analysis and Recognition. Berlin: Springer; 2015:325–334.
44. Orlando JI, Fu H, Breda JB, et al. REFUGE Challenge: a unified framework for evaluating automated methods for glaucoma assessment from fundus photographs. Med Image Anal. 2020;59:101570.


45. OMIA. OMIA8: 8th MICCAI Workshop on Ophthalmic Medical Image Analysis. Available at: https://sites.google.com/view/omia8. Accessed July 1, 2022.
46. Wu J, Fang H, Li F, et al. GAMMA Challenge: Glaucoma grAding from Multi-Modality imAges. arXiv. 2022, https://doi.org/10.48550/arXiv.2202.06511.
47. Porwal P, Pachade S, Kamble R, et al. Indian Diabetic Retinopathy Image Dataset (IDRiD): a database for diabetic retinopathy screening research. Data. 2018;3:25.
48. Tan M, Le Q. EfficientNet: rethinking model scaling for convolutional neural networks. In: Chaudhuri K, Salakhutdinov R, eds. Thirty-Sixth International Conference on Machine Learning. San Diego, CA: ICML; 2019:6105–6114.
49. Galdran A, et al. The little W-Net that could: state-of-the-art retinal vessel segmentation with minimalistic models. arXiv. 2020, https://doi.org/10.48550/arXiv.2009.01907.
50. Hart WE, Goldbaum M, Côté B, Kube P, Nelson MR. Measurement and classification of retinal vascular tortuosity. Int J Med Inform. 1999;53:239–252.
51. Grisan E, Foracchia M, Ruggeri A. A novel method for the automatic grading of retinal vessel tortuosity. IEEE Trans Med Imaging. 2008;27:310–319.
52. Falconer K. Fractal Geometry: Mathematical Foundations and Applications. New York: John Wiley & Sons; 2004.
53. Wong TY, Klein R, Klein BEK, Meuer SM, Hubbard LD. Retinal vessel diameters and their associations with age and blood pressure. Invest Ophthalmol Vis Sci. 2003;44:4644–4650.
54. Parr JC, Spears GF. General caliber of the retinal arteries expressed as the equivalent width of the central retinal artery. Am J Ophthalmol. 1974;77:472–477.
55. Parr JC, Spears GFS. Mathematic relationships between the width of a retinal artery and the widths of its branches. Am J Ophthalmol. 1974;77:478–483.
56. Hansen LK, Salamon P. Neural network ensembles. IEEE Trans Pattern Anal Mach Intell. 1990;12:993–1001.
57. Sarhan A, Rokne J, Alhajj R, Crichton A. Transfer learning through weighted loss function and group normalization for vessel segmentation from retinal images. In: Proceedings of ICPR 2020: 25th International Conference on Pattern Recognition (ICPR). Piscataway, NJ: Institute of Electrical and Electronics Engineers; 2021.
58. Shin SY, Lee S, Yun ID, Lee KM. Topology-aware retinal artery–vein classification via deep vascular connectivity prediction. Appl Sci. 2020;11:320.
59. Hasan MK, Alam MA, Elahi MTE, Roy S, Martí R. DRNet: segmentation and localization of optic disc and fovea from diabetic retinopathy image. Artif Intell Med. 2021;111:102001.
60. Cheung CY-L, et al. A new method to measure peripheral retinal vascular caliber over an extended area. Microcirculation. 2010;17:495–503.
61. Poplin R, Varadarajan AV, Blumer K, et al. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nat Biomed Eng. 2018;2:158–164.
62. Kelly CJ, Karthikesalingam A, Suleyman M, Corrado G, King D. Key challenges for delivering clinical impact with artificial intelligence. BMC Med. 2019;17:195.
63. Singh RP, Hom GL, Abramoff MD, Campbell JP, Chiang MF, AAO Task Force on Artificial Intelligence. Current challenges and barriers to real-world artificial intelligence adoption for the healthcare system, provider, and the patient. Transl Vis Sci Technol. 2020;9:45.
64. Yip W, Tham YC, Hsu W, et al. Comparison of common retinal vessel caliber measurement software and a conversion algorithm. Transl Vis Sci Technol. 2016;5:11.
65. McGrory S, Taylor AM, Pellegrini E, et al. Towards standardization of quantitative retinal vascular parameters: comparison of SIVA and VAMPIRE measurements in the Lothian Birth Cohort 1936. Transl Vis Sci Technol. 2018;7:12.
