
ORIGINAL ARTICLE

Color Fundus Photography and Deep Learning Applications in Alzheimer Disease
Oana M. Dumitrascu, MD, MSc; Xin Li, MS; Wenhui Zhu, MS;
Bryan K. Woodruff, MD; Simona Nikolova, PhD; Jacob Sobczak; Amal Youssef, MD;
Siddhant Saxena; Janine Andreev; Richard J. Caselli, MD; John J. Chen, MD, PhD;
and Yalin Wang, PhD

Abstract

Objective: To report the development and performance of 2 distinct deep learning models trained
exclusively on retinal color fundus photographs to classify Alzheimer disease (AD).
Patients and Methods: Two independent data sets (UK Biobank and our tertiary academic institution) of
good-quality retinal photographs derived from patients with AD and controls were used to build 2 deep
learning models, between April 1, 2021, and January 30, 2024. ADVAS is a U-Net-based architecture that
uses retinal vessel segmentation. ADRET is a bidirectional encoder representations from transformers
(BERT)-style self-supervised learning convolutional neural network pretrained on a large data set of retinal
color photographs from UK Biobank. The models' performance to distinguish AD from non-AD was determined
using mean accuracy, sensitivity, specificity, and receiver operating characteristic curves. The generated
attention heatmaps were analyzed for distinctive features.
Results: The self-supervised ADRET model had superior accuracy when compared with ADVAS, in both
UK Biobank (98.27% vs 77.20%; P<.001) and our institutional testing data sets (98.90% vs 94.17%;
P=.04). No major differences were noted between the original and binary vessel segmentation models and
between both-eyes vs single-eye models. Attention heatmaps obtained from patients with AD highlighted
regions surrounding small vascular branches as areas of highest relevance to the model decision making.
Conclusion: A bidirectional encoder representations from transformers (BERT)-style self-supervised convolutional
neural network pretrained on a large data set of retinal color photographs alone can screen symptomatic
AD with high accuracy, better than U-Net-pretrained models. To be translated into clinical practice, this
methodology requires further validation in larger and diverse populations and integrated techniques to
harmonize fundus photographs and attenuate the imaging-associated noise.
© 2024 THE AUTHORS. Published by Elsevier Inc on behalf of Mayo Foundation for Medical Education and Research. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). Mayo Clin Proc Digital Health 2024;2(4):548-558

From the Department of Neurology (O.M.D., B.K.W., S.N., J.S., A.Y., S.S., J.A., R.J.C.) and Department of Ophthalmology (O.M.D.), Mayo Clinic, Scottsdale, AZ; School of Computing and Augmented Intelligence, Arizona State University, Tempe, AZ (X.L., W.Z., Y.W.); and Department of Ophthalmology (J.J.C.) and Department of Neurology (J.J.C.), Mayo Clinic, Rochester, MN.

Without new advancements in Alzheimer disease (AD) care, the cost associated with AD and other dementias' care is projected to reach nearly $1 trillion in 2050.1 Accurate and widely accessible AD screening tools are crucial in this emerging global health crisis.2 Current AD biomarkers' availability is limited by their prohibitive cost, invasiveness, or need for additional validation.3,4 The retina is an unshielded extension of the brain that offers the opportunity to investigate multiple central nervous system disorders.5-7 Postmortem histopathologic investigations of eyes and brains from patients with AD8,9 and clinical evaluations of patients with AD10,11 have revealed pathologic alterations in the neurosensory retina that precede and correlate with brain AD changes.5,12 The retina accumulates amyloid β-protein plaques and abnormal tau proteins13-15 and exhibits neurodegeneration, vascular amyloidosis, and increased inflammation.5,10-12,16,17 Retinal changes can be captured via color fundus photography, a widely accessible technology in eye care, primary care, and underresourced community settings, carrying the promise of a noninvasive and cost-effective biomarker for AD.17-20


Machine learning tools were developed to overcome the subjectivity and low efficiency associated with manual retinal photograph analysis for disease and biomarker identification and to automate their interpretation in multiple ocular disorders.21-23 Moreover, machine learning applications using retinal imaging alone demonstrate good prediction accuracy of the individuals' biological sex, blood pressure, and smoking status.24,25 Given that the retinal changes in AD mirror the Alzheimer brain pathology,5,12,25,26 deep learning applications have been expanded27-31 to enable the automatic classification of AD and the identification of retinal AD biomarkers that are not visible to the human eye.29,30,32 Although promising, we noted certain limitations of the traditional convolutional neural networks (CNNs), which rely on large numbers of images and expert labeling. Modern technologies, such as vision transformers and self-supervised learning, provide a pretraining strategy that uses more easily attainable unlabeled data, overcoming the challenge of label acquisition and expanding the span of image utilization. These are based on the advancement of natural language processing, in the form of the self-supervised learning models of bidirectional encoder representations from transformers (BERT)33 and generative pretrained transformers.34 Here, we evaluate the value of both traditional CNNs and modern generative self-supervised learning. We report the technological development and clinical application of ADVAS, a U-Net-based architecture employing retinal vessel segmentation,35 and ADRET, a BERT-style self-supervised learning CNN pretrained on retinal color photographs from UK Biobank.36 We determined the 2 models' performance to classify AD in 2 real-world patient populations and attempted to identify the retinal regions that were highlighted as AD discriminators by the models' decision making.

PATIENTS AND METHODS

Participants and Retinal Color Photograph Acquisition
UK Biobank Data Set. We used 178,803 unlabeled color fundus images from the UK Biobank (http://www.ukbiobank.ac.uk/about-biobank-uk)36 involving 87,245 participants, including patients with AD (1136 images, 553 patients) and without AD (176,392 images, 86,069 patients). During the ADRET model pretraining on these images, we first randomly masked 60% of the input image, where the mask grid size equals the image height [H] and width [W] each divided by the downsampling ratio [D] of the network, that is, (H/D, W/D). Then, we processed only the unmasked visible regions of the input image. The size of the output image was (224, 224), and the downsampling ratio was 32, making our mask size (7, 7). We used the feature maps learned from the pretraining module for classification studies.
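For illustration, a minimal PyTorch sketch of this patch-level random masking (the function name is ours, and masking each patch independently approximates the 60% rate only in expectation; the published code may differ):

```python
import torch

def random_patch_mask(images, mask_ratio=0.6, grid=7):
    """Hide mask_ratio of the patch grid of a (B, C, 224, 224) batch.
    With a downsampling ratio of 32, each cell of the 7x7 grid covers
    a 32x32 pixel region of the input image."""
    b, _, h, w = images.shape
    cell = h // grid                                  # 224 // 7 = 32
    keep = torch.rand(b, grid, grid) >= mask_ratio    # True = visible patch
    mask = keep.repeat_interleave(cell, dim=1).repeat_interleave(cell, dim=2)
    return images * mask.unsqueeze(1), keep           # zero out masked pixels
```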
For quality control, 2 trained graders (A.Y. and S.S.) excluded the retinal images exhibiting blur, low contrast, poor illumination, or artifacts (Supplemental Figure 1, available online at https://www.mcpdigitalhealth.org/). After the quality control, we selected 362 good-quality images (169 left eyes and 193 right eyes) from 230 patients with AD. The reference group included 389 good-quality images (170 left eyes and 219 right eyes) from 282 patients without AD. The AD label was based on International Classification of Diseases (ICD) codes from hospital admission and death records, indicating a definitive clinical diagnosis of dementia caused by AD (Data-field 42021).37 The non-AD label was based on absent neurodegenerative conditions and other dementias. We excluded patients with Parkinson disease (G20), secondary parkinsonism (G21), other parkinsonism (G22), other degenerative disorders of the nervous system (G32), vascular syndromes in cerebrovascular disease (G46), vascular dementia (F01), dementia associated with other diseases (F02), unspecified dementia (F03), and organic amnestic syndrome (F04). We also excluded patients with ocular conditions, including age-related macular degeneration, glaucoma, and diabetic retinopathy. For each input image, we first detected the retina using the Hough circle transform and then cropped the mask region to minimize the effect of the black background. Afterward, the images were resized to 224 × 224 and normalized to (-1, 1).38
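A sketch of this retina detection and normalization step using OpenCV (the Hough transform parameters shown are illustrative assumptions, not the study's values):

```python
import cv2
import numpy as np

def preprocess_fundus(path, size=224):
    """Detect the circular retinal disc with a Hough circle transform,
    crop away the black background, resize, and scale to (-1, 1)."""
    img = cv2.imread(path)                       # BGR uint8 image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 5)               # suppress noise before Hough
    h, w = gray.shape
    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT, dp=2, minDist=w,
        param1=100, param2=30, minRadius=w // 4, maxRadius=w // 2)
    if circles is not None:
        x, y, r = np.round(circles[0, 0]).astype(int)
        x0, x1 = max(x - r, 0), min(x + r, w)
        y0, y1 = max(y - r, 0), min(y + r, h)
        img = img[y0:y1, x0:x1]                  # crop to the detected retina
    img = cv2.resize(img, (size, size))
    return img.astype(np.float32) / 127.5 - 1.0  # normalize to (-1, 1)
```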

FIGURE 1. Pipeline for the ADVAS model. A U-Net pretrained on the DRIVE dataset segments the retinal fundus photographs into an original vessel segmentation and a binary vessel segmentation; each feeds a U-Net (encoder) classifier with a fully connected layer and Softmax function (AD vs non-AD), and heatmaps are generated on the classifier's last layer.

Institutional Data Set. The study received exempt approval from our institutional review board, with waiver of informed consent owing to exclusive use of retrospective and deidentified data (Protocol #21-013272). We collected 45° digital color fundus photographs from 118 patients with a clinical diagnosis of AD from our academic multisite tertiary institution. The patients were retrospectively identified through an electronic medical record search using ICD9 (331) and ICD10 (G30) codes from January 1, 2011, to June 30, 2021; 129 patients from the EyePACS database formed the control data set.39 After image quality control, the ADVAS model included 318 binary vessel segmentation images from 113 patients with AD and 338 original vessel segmentation images from 116 patients with AD. The control group comprised 259 vessel segmentation images (original and binary) from 129 participants. ADRET used 283 images from 76 patients with AD. We detected the retina using the Hough circle transform and cropped the mask region to minimize the effect of the black background. The images were then resized to 512 × 512 and normalized to (-1, 1).38 This resolution was chosen to increase the attention to detail in this small data set because the requirements for the number of images in a single batch and the computational efficiency were lower.40,41

Deep Learning Models Training and Testing (April 1, 2021, Through January 30, 2024)
ADVAS Model. The framework consisted of 2 main steps. In the first step, we used a U-Net-based architecture42 with 2 parts, an encoder (downsampling) and a decoder (upsampling). We first segmented the retinal vasculature in the retinal images from the UK Biobank and our institutional data set and then inputted the segmented vessel results into the U-Net encoder for feature extraction. For vessel segmentation, we used the Digital Retinal Images for Vessel Extraction database43 to train the model and obtain its optimized weight parameters. Two outputs were generated: the original vessel segmentation image, with detailed vessel structure (model 1), and the binary vessel segmentation (denoted either 0 or 1), which enhanced the clear parts of the vessel and ignored the faint parts, facilitating further analyses (model 2).

FIGURE 2. Pipeline for the ADRET model. In the pretrain process, 60% of the image patches are randomly masked and reconstructed by an encoder-decoder; for AD classification, the pretrained encoder's feature maps feed a fully connected layer and Softmax function (AD vs non-AD).

Subsequently, these segmentation results were used as inputs for further extraction of vascular features using U-Net encoders,44 which performed initialization using weights from the first segmentation stage. This process focused on extracting key information from the segmented images that could help with the disease diagnosis. Finally, the extracted features were fed to a new linear classifier, a fully connected layer, and a Softmax function (pipeline illustrated in Figure 1). A fully connected layer is a neural network layer in which every neuron is connected to all activation units from the preceding layer. This layer typically resides at the network's end and maps the learned nonlinear features to the sample's output space. The Softmax function is an activation function widely used for multiclass classification problems.45 It transforms a real-valued vector into a probability distribution in which each element's value is between 0 and 1 and the sum of all elements equals 1. The output from the Softmax function can be interpreted as the probability distribution over the various classes, representing the likelihood that the sample belongs to each class. The classifier predicted whether the individual represented by the image is an AD or control patient. The heatmaps were generated in the last layer of the U-Net classifier using Gradient-weighted Class Activation Mapping (GRAD-CAM).46,47
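Schematically, such a classification head can be written in PyTorch as follows (the pooling step and the 512-dimensional feature size are our assumptions, not the published configuration):

```python
import torch
import torch.nn as nn

class ADClassifierHead(nn.Module):
    """Fully connected layer + Softmax head mapping pooled encoder features
    to P(AD) and P(non-AD).
    Softmax: p_i = exp(z_i) / sum_j exp(z_j), so each p_i is in (0, 1)
    and the probabilities sum to 1."""
    def __init__(self, feat_dim=512, num_classes=2):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)      # (B, C, H, W) -> (B, C, 1, 1)
        self.fc = nn.Linear(feat_dim, num_classes)

    def forward(self, feature_maps):
        x = self.pool(feature_maps).flatten(1)   # (B, feat_dim)
        logits = self.fc(x)                      # fully connected layer
        return torch.softmax(logits, dim=1)      # class probabilities
```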
ADRET Model. The backbone was our recently developed nn-MobileNet,48 which reported improved network performance by the following: (1) adjusting the order of channel configurations for the inverted linear residual bottleneck in the MobileNetV2 network49; (2) using a heavy data augmentation strategy through Mixup,50 CutMix,51 image cropping, flipping, contrast adjustment, brightness adjustment, and sharpening; and (3) adding spatial-dropout52 modules at various locations within the network to identify their optimal placement in an attempt to address the overfitting. We adopted a BERT-style self-supervised learning53 method (Figure 2), masking the image and then reconstructing it to pretrain the encoder to obtain the representative features.54 To improve the computational efficiency, we resized the input image to 224 × 224. This image resolution was chosen, given that ADRET was pretrained on many fundus images, to ensure computational efficiency and the number of samples processed in each single batch, while guaranteeing that sufficient features can be acquired. Previous reports also described a successful use of the 224 × 224 resolution to balance the computational performance with the number of features.55-58 Next, we performed random masking with a masking rate of 60%. We adopted the hierarchical design principle of the SparK framework59 and combined it with our nn-MobileNet48 to generate feature maps with different resolutions. Then, the feature maps used sparse convolution and a lightweight U-Net42 decoder to perform upsampling, which serves as an image reconstruction self-supervised learning task. We set up 1600 epochs for pretraining. All experiments were conducted on 4 NVIDIA A100 80GB GPUs with an AMD EPYC 7413 24-core processor.
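As one concrete example of the augmentation strategy in point (2), a minimal Mixup sketch (the alpha value and one-hot label handling are assumptions; CutMix and the photometric transforms follow the cited recipes):

```python
import torch

def mixup(images, labels, alpha=0.2, num_classes=2):
    """Mixup augmentation: blend random image pairs and their one-hot
    labels with a Beta-distributed weight. `labels` are int64 class indices."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(images.size(0))
    mixed = lam * images + (1 - lam) * images[perm]
    targets = torch.nn.functional.one_hot(labels, num_classes).float()
    mixed_targets = lam * targets + (1 - lam) * targets[perm]
    return mixed, mixed_targets
```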


TABLE 1. ADVAS Models (Original and Binary Vessel Segmentation) Performance for Alzheimer Disease Prediction Using Images Derived From Both Eyes vs Single Eye

| Data set | Segmentation | Eye(s) | Training images | Testing images | Accuracy | P value vs both eyes |
|---|---|---|---|---|---|---|
| UK Biobank | Original vessel | Both | 601 | 150 | 0.772 | - |
| UK Biobank | Original vessel | Right (R) | 329 | 83 | 0.7229 | .17 |
| UK Biobank | Original vessel | Left (L) | 271 | 68 | 0.8676 | .22 |
| UK Biobank | Binary vessel | Both | 601 | 150 | 0.7373 | - |
| UK Biobank | Binary vessel | Right (R) | 329 | 83 | 0.6988 | .21 |
| UK Biobank | Binary vessel | Left (L) | 271 | 68 | 0.7941 | .73 |
| Institutional | Original vessel | Both | 476 | 120 | 0.9417 | - |
| Institutional | Original vessel | Right (R) | 175 | 44 | 0.9773 | .34 |
| Institutional | Original vessel | Left (L) | 271 | 68 | 0.8636 | .08 |
| Institutional | Binary vessel | Both | 460 | 116 | 0.9017 | - |
| Institutional | Binary vessel | Right (R) | 169 | 43 | 0.9070 | .97 |
| Institutional | Binary vessel | Left (L) | 169 | 43 | 0.9767 | .12 |

Original vs binary vessel segmentation (both-eyes models): P=.57 in the UK Biobank data set and P=.29 in the institutional data set. L, left; R, right.

Statistical Analyses
We randomly divided the data into training and validation sets in the ratio of 8:2. We used 5-fold stratified cross-validation on the data set to evaluate the performance and generalization ability of our models. The data set was divided equally into 5 subsets, 4 of which were used at a time for training and the remaining one for testing. The process was repeated 5 times, with each subset used once as the testing set. The performance metrics of the 5 tests were averaged to obtain an overall performance evaluation of the model. The 5-fold cross-validation uses all data points for training and validation, thus providing a more robust estimate of the model performance. Models were evaluated based on quadratic-weighted κ, area under the receiver operating characteristic curve (AUROC), sensitivity, specificity, and accuracy. AUROC quantifies the overall ability of the model to discriminate between positive and negative classes. Additionally, for ADVAS, we explored models using all available images and images derived from 1 single eye. We separately trained and tested all groups and obtained the model performance on the testing set.
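A sketch of this 5-fold stratified protocol with scikit-learn (train_and_eval is a hypothetical callback standing in for a model's training and evaluation loop):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def cross_validate(images, labels, train_and_eval, seed=0):
    """5-fold stratified cross-validation: every image is used exactly once
    for testing; averaged metrics estimate generalization performance."""
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    accuracies = []
    for train_idx, test_idx in skf.split(images, labels):
        acc = train_and_eval(images[train_idx], labels[train_idx],
                             images[test_idx], labels[test_idx])
        accuracies.append(acc)
    return float(np.mean(accuracies)), float(np.std(accuracies))
```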

We compared the performance metrics of both-eyes vs single-eye models and of original vs binary vessel segmentation models using the Pearson χ2 test. A P value of <.05 was considered statistically significant.
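One plausible formulation of this comparison, treating each model's correct and incorrect test predictions as a 2x2 contingency table (the exact construction used in the study is not specified):

```python
import numpy as np
from scipy.stats import chi2_contingency

def compare_accuracies(correct_a, total_a, correct_b, total_b):
    """Pearson chi-square test on the 2x2 table of correct/incorrect
    predictions from two models (an assumed formulation)."""
    table = np.array([[correct_a, total_a - correct_a],
                      [correct_b, total_b - correct_b]])
    chi2, p, dof, _ = chi2_contingency(table, correction=False)
    return chi2, p

# Example with Table 1 values: both-eyes original (94.17% of 120 test images)
# vs both-eyes binary (90.17% of 116 test images) in the institutional data set.
chi2, p = compare_accuracies(round(0.9417 * 120), 120, round(0.9017 * 116), 116)
```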

FIGURE 3. Graphs illustrating the performance of the ADVAS and ADRET models in both data sets and the corresponding receiver operating characteristic (ROC) curves (sensitivity vs 1-specificity). Institutional data set: ADVAS original vessel segmentation, AUROC=0.991; ADVAS binary vessel segmentation, AUROC=0.9801; ADRET, AUROC=1.0000. UK Biobank data set: ADVAS original vessel segmentation, AUROC=0.8440; ADVAS binary vessel segmentation, AUROC=0.8367; ADRET, AUROC=0.9992.

Codes Availability
Our study's implementation was based on the Python (version 3.10) and PyTorch (version 2.0) environments. In the image preprocessing stage, we used OpenCV (version 4.6). In the data visualization stage, our study was implemented with Grad-CAM (version 1.5, https://github.com/jacobgil/pytorch-grad-cam) and Matplotlib (version 3.8).
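A usage sketch with that library (the model, target layer, and class index are placeholders; the call pattern may differ slightly across library versions):

```python
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget
from pytorch_grad_cam.utils.image import show_cam_on_image

def ad_attention_heatmap(model, last_conv_layer, input_tensor, rgb_image,
                         ad_class_index=0):
    """Overlay a Grad-CAM heatmap for the AD class on a fundus image.
    `last_conv_layer` and `ad_class_index` are placeholders; the study's
    exact layer choice is not published here."""
    cam = GradCAM(model=model, target_layers=[last_conv_layer])
    grayscale = cam(input_tensor=input_tensor,
                    targets=[ClassifierOutputTarget(ad_class_index)])
    # rgb_image: float array in [0, 1] with the same spatial size as the CAM
    return show_cam_on_image(rgb_image, grayscale[0], use_rgb=True)
```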
RESULTS

ADVAS Accuracy for AD Classification
In the UK Biobank, the overall original vessel segmentation model (362 AD and 389 non-AD images) had 77.2% accuracy, which was similar to that of the binary vessel segmentation model (73.73% accuracy; P=.57) (Table 1, Figure 3). Left eye images (169 AD and 170 non-AD) had 86.7% accuracy in original vessel and 79.41% accuracy in binary vessel segmentation. Right eye images (193 AD and 219 non-AD) had 72.29% accuracy in original vessel and 69.88% accuracy in binary vessel segmentation. In our institutional data set, the overall original vessel segmentation group (120 test, 476 train images) had nonsignificantly greater accuracy than the binary vessel segmentation model (116 test, 460 train images; 94.17% vs 90.17%; P=.29). In single-eye models, the maximum (97.73%) accuracy was reached by the original vessel segmentation of the right eye, followed by the binary vessel segmentation in the left eye (Table 1). Nonsignificant differences were noted between both-eyes and single-eye (right or left) models (P>.05 in all models).

TABLE 2. ADRET and ADVAS Models Performance for Alzheimer Disease Prediction in the UK Biobank and Our Institutional Data Set

| Metric | ADVAS original, UK Biobank testing (n=150) | ADVAS original, institutional testing (n=120) | ADVAS binary, UK Biobank testing (n=150) | ADVAS binary, institutional testing (n=116) | ADRET, UK Biobank testing (n=150) | ADRET, institutional testing (n=109) |
|---|---|---|---|---|---|---|
| Accuracy | 0.772 (0.0179) | 0.9417 (0.0243) | 0.7373 (0.0278) | 0.9017 (0.0326) | 0.9827 (0.0059) | 0.989 (0.0077) |
| κ score | 0.5377 (0.0376) | 0.8785 (0.0412) | 0.4691 (0.0635) | 0.7984 (0.0618) | 0.9652 (0.012) | 0.9777 (0.0157) |
| AUROC | 0.8137 (0.0189) | 0.9800 (0.0077) | 0.7809 (0.0372) | 0.9588 (0.0177) | 0.9967 (0.0022) | 0.9978 (0.0019) |
| Sensitivity | 0.6554 (0.0906) | 0.9549 (0.0310) | 0.6823 (0.0683) | 0.9065 (0.0540) | 0.9827 (0.0059) | 0.989 (0.0077) |
| Specificity | 0.8775 (0.0517) | 0.9176 (0.0393) | 0.7839 (0.0719) | 0.8992 (0.0797) | 0.9817 (0.0122) | 0.9805 (0.0137) |

The data are mean (SD); the 5-fold cross-validation method was applied in each testing data set. ADVAS original, original vessel segmentation; ADVAS binary, binary vessel segmentation; AUROC, area under the receiver operating characteristic curve.

ADRET Accuracy for AD Classification
In the UK Biobank data set, the model achieved 98.22% accuracy, a κ score of 0.9652, and an AUROC of 0.9967 for AD classification. In our institutional data set, the model achieved 98.90% accuracy, a κ score of 0.9777, and an AUROC of 0.9978 for AD classification (Table 2, Figure 3).

All models generated attention heatmaps using GRAD-CAM. The AD-derived attention maps highlighted areas surrounding retinal small vascular branches as regions of highest relevance to the model decision making (Supplemental Figure 2, available online at https://www.mcpdigitalhealth.org/).

DISCUSSION

We report 2 different CNNs trained exclusively on 45° retinal color photographs, which reached good performance in discriminating AD from non-AD in 2 different populations. We trained and tested these automated models to overcome the subjectivity and inefficiency of manual retinal photograph analysis. First, we aimed to further investigate the potential of a formerly reported AD classification tool based on retinal vessel segmentation.30 In this scope, we developed ADVAS, a novel U-Net-based CNN using original retinal vessel segmentation. We also explored binary vessel segmentation, which pays more attention to clearly delineated vessels and less attention to the faint vessels. All ADVAS models achieved a classifying accuracy of at least 69.8%, without notable differences between both-eyes and single-eye models. The highest accuracy (97%) was reached when images from the right eye with original vessel segmentation, and from the left eye with binary vessel segmentation, were used, suggesting that any of them may be chosen for optimization in future studies. The fact that multiple distinctive groups had similarly high performance reinforces our findings' accuracy and applicability. Our framework identified regions surrounding smaller retinal vessels in AD-derived heatmaps, which were similarly highlighted in AD imaging and pathologic studies.10,15,16,60-62 These hypothesis-generating findings will require confirmation in future machine learning studies. ADVAS reported different performance metrics in the 2 data sets. Previous literature similarly reports considerable differences between UK Biobank and other data sets (such as EyePACS) when performing the same task.63,64 This effect is explained by several factors impacting diverse data sets, such as different sampling equipment, sample diversity, data quality, or condition severity, and is particularly pronounced in large-scale data sets.

Our novel BERT-style self-supervised ADRET model had superior accuracy when compared with ADVAS, in both the UK Biobank (98.27% vs 77.20%; P<.001) and our institutional testing data sets (98.90% vs 94.17%; P=.04). This underscores the increased precision of modern self-supervised techniques for disease classification. ADRET was implemented through a lightweight CNN architecture, nn-MobileNet.48 This method fully leverages the advantages of CNNs in medical image processing and incorporates the pretraining mechanism of self-supervised learning, aiming to efficiently process large-scale unlabeled medical image data sets.

The performance of the model in data representation and feature extraction was remarkably enhanced by employing a sparse convolution technique to process the masked regions of the image, while preserving the original CNN hierarchical structure. To validate the effectiveness of the proposed method, we conducted pretraining on the UK Biobank data set. Subsequently, we applied the pretrained model to the AD classification task in both UK Biobank and our institutional data set, and ADRET achieved outstanding performance on these downstream tasks. This demonstrates the effectiveness of self-supervised pretraining in improving the model's understanding of retinal color photographs and provides support for applying these novel techniques to other areas of medical image analysis.

Several machine learning algorithms and proof-of-concept studies had found that retinal photographs alone could estimate the risk of AD.29,30 An algorithm developed using the UK Biobank retinal images, U-Net for vessel segmentation, and a support vector machine-based classifier reported a binary accuracy of 82.4% for AD classification after matching for age.30 Another study that used EfficientNet,65 an unsupervised domain adaptation network, and retinal color photographs from multiethnic cohorts reported an accuracy ranging from 79.6% to 92.1% to differentiate AD dementia from non-AD.29 A machine learning study using optical coherence tomography (OCT) images had identified the XGBoost algorithm as having the best diagnostic performance for AD (0.74 accuracy), and macular thickness as having the greatest importance for guiding the algorithm to the AD diagnosis.66 Another study that used multimodal retinal imaging (OCT, OCT angiography, fundus photography, autofluorescence, and metadata) to identify symptomatic AD67 reported an AUROC of 0.836 (CI, 0.729-0.943) using all inputs. As the technology for automated imaging analysis has evolved, we explored innovative techniques, and our BERT-style self-supervised model reported superior performance, with a sensitivity of 98.9% and specificity of 98.05% to discriminate AD from non-AD in a real-world institutional data set. This is in line with recent studies reporting self-supervised models' superiority over supervised learning-based transfer learning, even when the self-supervised models were built using limited data sets68,69 and when tested on new data.68,70

There are some limitations of our study. Few patients had amyloid-positron emission tomography or cerebrospinal fluid biomarker confirmation of their cerebral amyloid status; however, both data sets used the clinical expert-established diagnosis of symptomatic AD, which included neuropsychometric testing and fluorodeoxyglucose-positron emission tomography. Because the control EyePACS data set did not report the patients' ages, we cannot exclude the impact of age on the models' performance. However, the UK Biobank AD and non-AD groups had similar mean ages, implying that the classification accuracy was not confounded by age.

The use of 45° images may have deprived the model of relevant biomarkers from the peripheral retina; however, this model reached an excellent accuracy, and 45° images are more easily obtainable in nonophthalmic care settings. Nonmydriatic retinal photography is susceptible to various sources of noise leading to suboptimal image quality. We excluded a considerable number of retinal photographs after quality control because suboptimal image quality can hinder the CNN model development.30,32 The lack of decent-quality retinal photographs obtained in the real world constitutes an obstacle in the implementation and validation of these CNN models in routine practice. Our generative image enhancement models tested on various retinopathies35,71-73 have shown potential to increase the adaptability and resilience of retinal images across diverse distributions, making them suitable for clinical settings.

These results suggest that deep learning-assisted color fundus photography analysis could become an accurate AD risk stratification tool in optometry, community eye care, or established screening programs for other retinopathies (eg, diabetic retinopathy). Future studies should assess the performance of ADRET in large cohorts with biomarker-confirmed AD and evaluate its cost-effectiveness in comparison with current AD screening methods, especially in communities with limited infrastructure and access to specialized care.

Incoming studies should also investigate deep learning-assisted retinal imaging tools in AD clinical trials.

CONCLUSION

A BERT-style self-supervised neural network pretrained on a large data set of retinal color photographs alone can screen symptomatic AD with high accuracy, better than U-Net-pretrained models. With further validation in diverse populations, as well as integrated methods to attenuate imaging-associated noise, this methodology has the potential for application in clinical practice for point-of-care AD screening.

POTENTIAL COMPETING INTERESTS

Drs Dumitrascu and Wang report pending patents on D24-186 A BERT-Style Self-Supervised Learning CNN for Disease Identification from Retinal Images, D24-170 Context-Aware Optimal Transport Learning for Retinal Color Fundus Photograph Enhancement, and D23-176 Systems and Methods for Enhancing Retinal Color Fundus Images for Retinopathy Analysis. Dr Dumitrascu is a board member of the AHA Greater Phoenix Chapter. The authors report no other competing interests.

ETHICS STATEMENT

The study received exempt approval from an institutional review board, with waiver of informed consent owing to exclusive use of retrospective and deidentified data.

ACKNOWLEDGMENTS

Dr Dumitrascu and Mr Li contributed equally to this work.

SUPPLEMENTAL ONLINE MATERIAL

Supplemental material can be found online at https://www.mcpdigitalhealth.org/. Supplemental material attached to journal articles has not been edited, and the authors take responsibility for the accuracy of all data.

Abbreviations and Acronyms: AD, Alzheimer disease; BERT, bidirectional encoder representations from transformers; CNN, convolutional neural network; GRAD-CAM, Gradient-weighted Class Activation Mapping; ICD, International Classification of Diseases

Grant Support: This work has been partially supported by the NIH (R01EY032125, R01DE030286, R01AG069453, and P30AG072980) and the state of Arizona via the Arizona Alzheimer's Consortium.

Data Previously Presented: A subaim of this work was presented at the American Academy of Neurology Annual Meeting, April 2024, Denver, CO.

Correspondence: Address to Oana M. Dumitrascu, MD, Mayo Clinic College of Medicine and Science, 13400 East Shea Boulevard, Scottsdale, AZ 85259 ([email protected]; Twitter: @OanaDumitrascu5).

ORCID
Oana M. Dumitrascu: https://orcid.org/0000-0003-2033-449X

REFERENCES
1. Alzheimer's Association. 2024 Alzheimer's disease facts and figures report: executive summary. https://alz.org/media/Documents/Facts-And-Figures-2024-Executive-Summary.pdf. Accessed June 20, 2024.
2. Goldman DP, Fillit H, Neumann P. Accelerating Alzheimer's disease drug innovations from the research pipeline to patients. Alzheimers Dement. 2018;14(6):833-836. https://doi.org/10.1016/j.jalz.2018.02.007.
3. Fatima H, Rangwala HS, Riaz F, Rangwala BS, Siddiq MA. Breakthroughs in Alzheimer's research: a path to a more promising future? Ann Neurosci. 2024;31(1):63-70. https://doi.org/10.1177/09727531231187235.
4. Garcia MJ, Leadley R, Ross J, et al. Prognostic and predictive factors in early Alzheimer's disease: a systematic review. J Alzheimers Dis Rep. 2024;8(1):203-240. https://doi.org/10.3233/ADR-230045.
5. Koronyo Y, Rentsendorj A, Mirzaei N, et al. Retinal pathological features and proteome signatures of Alzheimer's disease. Acta Neuropathol. 2023;145(4):409-438. https://doi.org/10.1007/s00401-023-02548-2.
6. Hinton DR, Sadun AA, Blanks JC, Miller CA. Optic-nerve degeneration in Alzheimer's disease. N Engl J Med. 1986;315(8):485-487. https://doi.org/10.1056/NEJM198608213150804.
7. Blanks JC, Torigoe Y, Hinton DR, Blanks RH. Retinal degeneration in the macula of patients with Alzheimer's disease. Ann N Y Acad Sci. 1991;640:44-46. https://doi.org/10.1111/j.1749-6632.1991.tb00188.x.
8. La Morgia C, Ross-Cisneros FN, Koronyo Y, et al. Melanopsin retinal ganglion cell loss in Alzheimer disease. Ann Neurol. 2016;79(1):90-109. https://doi.org/10.1002/ana.24548.
9. Lee CS, Larson EB, Gibbons LE, et al. Associations between recent and established ophthalmic conditions and risk of Alzheimer's disease. Alzheimers Dement. 2019;15(1):34-41. https://doi.org/10.1016/j.jalz.2018.06.2856.
10. Dumitrascu OM, Rosenberry R, Sherman DS, et al. Retinal venular tortuosity jointly with retinal amyloid burden correlates with verbal memory loss: a pilot study. Cells. 2021;10(11):2926. https://doi.org/10.3390/cells10112926.
11. Hart NJ, Koronyo Y, Black KL, Koronyo-Hamaoui M. Ocular indicators of Alzheimer's: exploring disease in the retina. Acta Neuropathol. 2016;132(6):767-787. https://doi.org/10.1007/s00401-016-1613-6.
12. Mirzaei N, Shi H, Oviatt M, et al. Alzheimer's retinopathy: seeing disease in the eyes. Front Neurosci. 2020;14:921. https://doi.org/10.3389/fnins.2020.00921.
13. Dumitrascu OM, Doustar J, Fuchs DT, et al. Retinal peri-arteriolar versus peri-venular amyloidosis, hippocampal atrophy, and cognitive impairment: exploratory trial. Acta Neuropathol Commun. 2024;12(1):109. https://doi.org/10.1186/s40478-024-01810-2.


14. Gaire BP, Koronyo Y, Fuchs DT, et al. Alzheimer's disease pathophysiology in the retina. Prog Retin Eye Res. 2024;101:101273. https://doi.org/10.1016/j.preteyeres.2024.101273.
15. Dumitrascu OM, Doustar J, Fuchs DT, et al. Distinctive retinal peri-arteriolar versus peri-venular amyloid plaque distribution correlates with the cognitive performance. Preprint. bioRxiv. Published online February 29, 2024. https://doi.org/10.1101/2024.02.27.580733.
16. Shi H, Koronyo Y, Rentsendorj A, et al. Identification of early pericyte loss and vascular amyloidosis in Alzheimer's disease retina. Acta Neuropathol. 2020;139(5):813-836. https://doi.org/10.1007/s00401-020-02134-w.
17. Jiang H, Wang J, Levin BE, et al. Retinal microvascular alterations as the biomarkers for Alzheimer disease: are we there yet? J Neuroophthalmol. 2021;41(2):251-260. https://doi.org/10.1097/WNO.0000000000001140.
18. Dumitrascu OM, Qureshi TA. Retinal vascular imaging in vascular cognitive impairment: current and future perspectives. J Exp Neurosci. 2018;12:1179069518801291. https://doi.org/10.1177/1179069518801291.
19. Sasaki M. [Retinal imaging as potential biomarkers for dementia]. Brain Nerve. 2021;73(11):1209-1216. https://doi.org/10.11477/mf.1416201919.
20. Cheung CY, Chan VTT, Mok VC, Chen C, Wong TY. Potential retinal biomarkers for dementia: what is new? Curr Opin Neurol. 2019;32(1):82-91. https://doi.org/10.1097/WCO.0000000000000645.
21. Li T, Bo W, Hu C, et al. Applications of deep learning in fundus images: a review. Med Image Anal. 2021;69:101971. https://doi.org/10.1016/j.media.2021.101971.
22. Dai L, Wu L, Li H, et al. A deep learning system for detecting diabetic retinopathy across the disease spectrum. Nat Commun. 2021;12(1):3242. https://doi.org/10.1038/s41467-021-23458-5.
23. Nadeem MW, Goh HG, Hussain M, Liew SY, Andonovic I, Khan MA. Deep learning for diabetic retinopathy analysis: a review, research challenges, and future directions. Sensors (Basel). 2022;22(18):6780. https://doi.org/10.3390/s22186780.
24. Poplin R, Varadarajan AV, Blumer K, et al. Prediction of cardiovascular risk factors from retinal fundus photographs via deep learning. Nat Biomed Eng. 2018;2(3):158-164. https://doi.org/10.1038/s41551-018-0195-0.
25. Wagner SK, Fu DJ, Faes L, et al. Insights into systemic disease through retinal imaging-based oculomics. Transl Vis Sci Technol. 2020;9(2):6. https://doi.org/10.1167/tvst.9.2.6.
26. Snyder PJ, Alber J, Alt C, et al. Retinal imaging in Alzheimer's and neurodegenerative diseases. Alzheimers Dement. 2021;17(1):103-111. https://doi.org/10.1002/alz.12179.
27. Ng WY, Cheung CY, Milea D, Ting DSW. Artificial intelligence and machine learning for Alzheimer's disease: let's not forget about the retina. Br J Ophthalmol. 2021;105(5):593-594. https://doi.org/10.1136/bjophthalmol-2020-318407.
28. Bahr T, Vu TA, Tuttle JJ, Iezzi R. Deep learning and machine learning algorithms for retinal image analysis in neurodegenerative disease: systematic review of datasets and models. Transl Vis Sci Technol. 2024;13(2):16. https://doi.org/10.1167/tvst.13.2.16.
29. Cheung CY, Ran AR, Wang S, et al. A deep learning model for detection of Alzheimer's disease based on retinal photographs: a retrospective, multicentre case-control study. Lancet Digit Health. 2022;4(11):e806-e815. https://doi.org/10.1016/S2589-7500(22)00169-8.
30. Tian J, Smith G, Guo H, et al. Modular machine learning for Alzheimer's disease classification from retinal vasculature. Sci Rep. 2021;11(1):238. https://doi.org/10.1038/s41598-020-80312-2.
31. Corbin D, Lesage F. Assessment of the predictive potential of cognitive scores from retinal images and retinal fundus metadata via deep learning using the CLSA database. Sci Rep. 2022;12(1):5767. https://doi.org/10.1038/s41598-022-09719-3.
32. Dumitrascu OM, Zhu W, Qiu P, Wang YL. Automated retinal imaging analysis for Alzheimer's disease screening. Ann Neurol. 2022:S106.
33. Devlin J, Chang MW, Lee K, Toutanova K. BERT: pre-training of deep bidirectional transformers for language understanding. Preprint. Posted online October 11, 2018. arXiv:1810.04805.
34. Brown T, Mann B, Ryder N, et al. Language models are few-shot learners. Adv Neural Inf Process Syst. 2020;33:1877-1901.
35. Zhu W, Qiu P, Lepore N, Dumitrascu OM, Wang Y. NNMobileNet: rethinking CNN design for deep learning-based retinopathy research. Preprint. Posted online June 2, 2023. arXiv:2306.01289.
36. Bycroft C, Freeman C, Petkova D, et al. The UK Biobank resource with deep phenotyping and genomic data. Nature. 2018;562(7726):203-209. https://doi.org/10.1038/s41586-018-0579-z.
37. Bush K, Wilkinson T, Schnier C, Nolan J, Sudlow C. Definitions of Dementia and the Major Diagnostic Pathologies. UK Biobank Phase 1 Outcomes Adjudication; 2018.
38. Fu H, Wang B, Shen J, et al. Evaluation of Retinal Image Quality Assessment Networks in Different Color-Spaces. Springer International Publishing; 2019.
39. Cuadros J, Bresnick G. EyePACS: an adaptable telemedicine system for diabetic retinopathy screening. J Diabetes Sci Technol. 2009;3(3):509-516. https://doi.org/10.1177/193229680900300315.
40. Burlina PM, Joshi N, Pekala M, Pacheco KD, Freund DE, Bressler NM. Automated grading of age-related macular degeneration from color fundus images using deep convolutional neural networks. JAMA Ophthalmol. 2017;135(11):1170-1176. https://doi.org/10.1001/jamaophthalmol.2017.3782.
41. Kermany DS, Goldbaum M, Cai W, et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell. 2018;172(5):1122-1131.e9. https://doi.org/10.1016/j.cell.2018.02.010.
42. Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. In: Navab N, Hornegger J, Wells W, Frangi A, eds. Medical Image Computing and Computer-Assisted Intervention-MICCAI 2015. Springer; 2015:234-241.
43. DRIVE: Digital Retinal Images for Vessel Extraction [data set]. Kaggle; 2020.
44. Zhuang J. LadderNet: multi-path networks based on U-Net for medical image segmentation. Preprint. Posted online October 17, 2018. arXiv:1810.07810.
45. Bishop C. Pattern Recognition and Machine Learning. Springer; 2006. ISBN 0-387-31073-8.
46. Selvaraju RR, Cogswell M, Das A, Vedantam R, Parikh D, Batra D. Grad-CAM: visual explanations from deep networks via gradient-based localization. In: 2017 IEEE International Conference on Computer Vision (ICCV). IEEE; 2017:618-626.
47. PyTorch library for CAM methods. 2021. https://github.com/jacobgil/pytorch-grad-cam. Accessed September 10, 2024.
48. Zhu W, Qiu P, Chen X, Li X, et al. nnMobileNet: rethinking CNN for retinopathy research. Presented at: Data Curation and Augmentation in Enhancing Medical Imaging Applications Workshop (DCAMI), CVPR; June 2024; Seattle, WA. https://arxiv.org/abs/2306.01289. Accessed September 10, 2024.
49. Sandler M, Howard A, Zhu M, Zhmoginov A, Chen L. MobileNetV2: inverted residuals and linear bottlenecks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. IEEE; 2018:4510-4520.
50. Zhang H, Cisse M, Dauphin YN, Lopez-Paz D. mixup: beyond empirical risk minimization. Preprint. Posted online October 25, 2017. arXiv:1710.09412.
51. Yun S, Han D, Oh SJ, Chun S, Choe J, Yoo Y. CutMix: regularization strategy to train strong classifiers with localizable features. Presented at: 2019 IEEE International Conference on Computer Vision; November 2019; Seoul, Korea (South). https://arxiv.org/abs/1905.04899. Accessed September 10, 2024.


52. Tompson J, Goroshin R, Jain A, LeCun Y, Bregler C. Efficient object localization using convolutional networks. Presented at: IEEE Conference on Computer Vision and Pattern Recognition; 2015; Boston, MA.
53. Devlin J, Chang MW, Lee K, Toutanova K. BERT: pre-training of deep bidirectional transformers for language understanding. North American Chapter of the Association for Computational Linguistics; 2019. https://arxiv.org/abs/1810.04805. Accessed September 10, 2024.
54. He K, Chen X, Xie S, Li Y, Dollár P, Girshick R. Masked autoencoders are scalable vision learners. Presented at: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition; June 2022; New Orleans, LA. https://arxiv.org/abs/2111.06377v3. Accessed September 10, 2024.
55. Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. Presented at: International Conference on Learning Representations; 2014; Banff, Canada. https://doi.org/10.48550/arXiv.1409.1556.
56. Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. In: Proceedings of the 25th International Conference on Neural Information Processing Systems. Vol 1. 2012:1097-1105.
57. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. IEEE; 2016:770-778.
58. Szegedy C, Liu W, Jia Y, et al. Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. IEEE; 2015:1-9.
59. Tian K, Jiang Y, Diao Q, et al. Designing BERT for convolutional networks: sparse and hierarchical masked modeling. Presented at: International Conference on Learning Representations; 2023; Kigali, Rwanda. arXiv:2301.03580v2.
60. Cheung CY, Ong YT, Ikram MK, et al. Microvascular network alterations in the retina of patients with Alzheimer's disease. Alzheimers Dement. 2014;10(2):135-142. https://doi.org/10.1016/j.jalz.2013.06.009.
61. Williams MA, McGowan AJ, Cardwell CR, et al. Retinal microvascular network attenuation in Alzheimer's disease. Alzheimers Dement (Amst). 2015;1(2):229-235. https://doi.org/10.1016/j.dadm.2015.04.001.
62. Jiang H, Wei Y, Shi Y, et al. Altered macular microvasculature in mild cognitive impairment and Alzheimer disease. J Neuroophthalmol. 2018;38(3):292-298. https://doi.org/10.1097/WNO.0000000000000580.
63. Vaghefi E, Squirrell D, Yang S, et al. Development and validation of a deep-learning model to predict 10-year atherosclerotic cardiovascular disease risk from retinal images using the UK Biobank and EyePACS 10K datasets. Cardiovasc Digit Health J. 2024;5(2):59-69. https://doi.org/10.1016/j.cvdhj.2023.12.004.
64. Zhou Y, Chia MA, Wagner SK, et al. A foundation model for generalizable disease detection from retinal images. Nature. 2023;622(7981):156-163. https://doi.org/10.1038/s41586-023-06555-x.
65. Tan M, Le Q. EfficientNet: rethinking model scaling for convolutional neural networks. Proc Mach Learn Res. 2019;97:6105-6114.
66. Wang X, Jiao B, Liu H, et al. Machine learning based on optical coherence tomography images as a diagnostic tool for Alzheimer's disease. CNS Neurosci Ther. 2022;28(12):2206-2217. https://doi.org/10.1111/cns.13963.
67. Wisely CE, Wang D, Henao R, et al. Convolutional neural network to identify symptomatic Alzheimer's disease using multimodal retinal imaging. Br J Ophthalmol. 2022;106(3):388-395. https://doi.org/10.1136/bjophthalmol-2020-317659.
68. Chen T, Kornblith S, Norouzi M, Hinton G. A simple framework for contrastive learning of visual representations. In: Daumé H, Singh A, eds. Proceedings of the 37th International Conference on Machine Learning. PMLR; 2020;119:1597-1607.
69. Chen K, Ghisays V, Luo J, et al. Improved comparability between measurements of mean cortical amyloid plaque burden derived from different PET tracers using multiple regions-of-interest and machine learning. Presented at: 2021 Alzheimer's Association International Conference; Denver, CO. https://alz.confex.com/alz/2021/meetingapp.cgi/Paper/51419.
70. He K, Chen X, Xie S, Li Y, Dollár P, Girshick R. Masked autoencoders are scalable vision learners. In: Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition. IEEE; 2022:16000-16009.
71. Zhu W, Qiu P, Chen X, et al. Beyond MobileNet: An Improved MobileNet for Retinal Diseases. Springer; 2023:56-65.
72. Zhu W, Qiu P, Dumitrascu OM, et al. OTRE: where optimal transport guided unpaired image-to-image translation meets regularization by enhancing. Inf Process Med Imaging. 2023;13939:415-427. https://doi.org/10.1007/978-3-031-34048-2_32.
73. Zhu W, Qiu P, Farazi M, Nandakumar K, Dumitrascu OM, Wang Y. Optimal transport guided unsupervised learning for enhancing low-quality retinal images. Proc IEEE Int Symp Biomed Imaging. 2023. https://doi.org/10.1109/isbi53787.2023.10230719.
