
(IJACSA) International Journal of Advanced Computer Science and Applications,

Vol. 14, No. 5, 2023

Automated Decision Making ResNet Feed-Forward Neural Network based Methodology for Diabetic Retinopathy Detection
A. Aruna Kumari1, Avinash Bhagat2, Santosh Kumar Henge3*, Sanjeev Kumar Mandal4
School of Computer Science & Engineering, Lovely Professional University, Phagwara, Punjab, India1, 2
Associate Professor, Department of Computer Applications-Directorate of Online Education, Manipal University Jaipur, Jaipur,
Rajasthan, India3*
Assistant Professor, School of CS & IT, Jain (Deemed-to-be University) Bangalore, India4

Abstract—The detection of diabetic retinopathy eye disease is a time-consuming and labor-intensive process that necessitates an ophthalmologist to investigate and assess digital color fundus photographic images of the retina, and to discover DR by the existence of lesions linked with the vascular anomalies triggered by the disease. The integration of a single type of sequential image has fewer variations among images, which does not provide sufficient feasibility and mapping scenarios. This research proposes an automated decision-making ResNet feed-forward neural network methodology. The integrated mapping techniques analyze and map missing connections of retinal arterioles, microaneurysms, venules and dot points of the fovea, cotton-wool spots, the macula, the outer line of optic disc computations, and hard exudates and hemorrhages among color and black-and-white images. Missing computations are included in the sequence of vectors, which helps identify DR stages. A total of 5672 sequential and 7231 non-sequential color fundus and black-and-white retinal images were included in the test cases. Ratios of 80 and 20 percent of best- and poor-quality images were integrated in training and testing, and the 10-fold cross-validation technique was implicated. The accuracy, sensitivity, and specificity for testing and analysing good-quality images were 98.9%, 98.7%, and 98.3%, and for poor-quality images were 94.9%, 93.6%, and 93.2%, respectively.

Keywords—Retinal lesion (RL); Fundus Images (FunImg); Microaneurysms (MAs); Principal Component Analysis (PCA); Standard Scaler (StdSca); Feed-Forward Neural Network (FFNN); cross pooling (CxPool)

I. INTRODUCTION

Diabetic Retinopathy (DR) eye disease (ED) is correlated with chronic diabetes, which is the primary trigger of sightlessness in children, workforce employees, and elderly people across the globe, impacting more than 96 million people [1]. DR is a complication of diabetes that causes damage to the retinal blood vessels (BV). Primarily, it is symptomless and causes vision-based issues. As it becomes more severe, it disturbs both eyes and ultimately causes partial to complete vision loss. It principally arises when blood sugar levels are uncontrollable. The premature detection of DR can prevent the possibility of permanent blindness; consequently, an effective screening scheme is needed [2]. Detection of the initial stages of DR-ED is one of the challenging tasks in the DR diagnosis process: the advancement to vision loss can be decelerated, but this can be complicated, as DR-ED regularly indicates rare symptoms until it is extremely late to deliver efficient medication [4] [5]. Consequently, uncovering DR at an early stage is crucial in preventing the complications of this illness, as shown in Fig. 1 [2]. CNN in DL manages to deliver helpful results when it comes to the job of classification of medical images [5].

Fig. 1. Hard exudates, hemorrhages, abnormal growth of blood vessels, aneurysm, and cotton-wool spots of a DR-affected retina.

Recognition of the initial clinical signs of DR initiation is a crucial constraint for interference-free and efficient medication. Ophthalmologists qualified to detect DR focus on analyzing minor fluctuations in patient microaneurysms (MAs) of the eyes, retinal bleeds, macular edema, and fluctuations in retinal blood vessels. Segmentation of MAs is another crucial constraint for primary identification of DR, which has attracted major attention from the research community across the early years [27]. According to the International Clinical DR Disease Severity Scale, DR seriousness is marked into five degrees: non-DR, mild-NPDR, moderate-NPDR, severe-NPDR, or PDR [7] [8]. Mild-NPDR is specified as the occurrence of microaneurysms. Moderate-NPDR is specified as more than just microaneurysms but less than severe-NPDR, which produces CWS, retinal hemorrhages, and hard exudates. DME was analyzed if hard exudates were identified within 500 μm of the macular centre, corresponding to the specification of the initial medication for DR research [9]. Ascribable-DR is specified as DME, moderate NPDR, or both. Based on the recommendations for image procurement and clarification of DR assessment in China [10], the image quality was rated conforming to requirements specified in terms of three characteristic factors: field definition (FD), clarity, and artifacts (AF). The overall result was equivalent to a grade for transparency plus a


grade for FD and minus the score for AF; an overall count of less than 12 counted as ungradable [8]. Recognizing the first clinical signs of DR is a significant barrier to effective intervention and treatment. Ophthalmologists who are trained to identify DR focus on analyzing minute changes in patient microaneurysms (MAs), retinal bleeding, macular edema, and changes in retinal blood vessels. Another important barrier to the initial identification of DR is the segmentation of MAs, which has received significant attention from the academic community in recent years.

The article is organized in a section-wise manner: Section I includes the introduction and research objectives; Section II comprises the associated works along with the background of the research; Section III includes the proposed methodology of the Automated Decision Making ResNet Feed-Forward Neural Network (RNFFNN) for recognition of DR stages and its executional scenarios; Section IV describes the experimental setup and analysis through image normalization with Principal Component Analysis (PCA) and multi-level ConvNet-based pooling and feature integrations; Section V contains the results and discussion. Finally, Section VI addresses the conclusion and future directions.

II. RELATED WORK

DR is one of the significant concerns that has captured the health world, attracting numerous scientists to discover ideal solutions for initial recognition of DR disease and, subsequently, the avoidance of early oscillations in eyesight. Several investigations have been performed and continue in this field with the intention of improving the lives of patients. This section articulates an analysis of DR-related research [2].

Anumol Sajan et al. proposed the detection of DR stages using deep learning (DL): an automatic classification system that analyzes fundus images (FunImg) with fluctuating illumination and fields of assessment and produces a severity grade for DR using ML models such as VGG-16, Convolutional Neural Network (CNN), and VGG-19 through five groups of classified images ranging from 0 to 4, where 0 is no DR and 4 is proliferative DR. It accomplishes 82%, 80%, and 82% accuracy, sensitivity, and specificity, respectively [1]. Mushtaq et al. proposed detection of DR using a DL-based densely connected CNN (DenseNet-169) for early recognition of DR, which categorizes the FunImgs based on their levels of severity (Proliferative-DR, Severe, Moderate, Mild, and No-DR) with integration of DR-Recognition-2015 and Aptos-2019-Blindness-Recognition from Kaggle across the data-gathering, pre-processing, augmentation, and modeling levels, and achieved 90% accuracy (ACU) [2]. Diabetes is now the fifth most common cause of blindness in the world, and diabetic retinopathy is one of the main causes of vision loss and blindness among diabetic individuals worldwide. According to the WHO, diabetic retinopathy is a serious eye condition that needs to be addressed at once by government agencies and medical specialists [3]. Image artifact, clarity, and field definition are the three main criteria used to evaluate the quality of fundus images. Unfortunately, the majority of quality assessment techniques now in use only consider overall image quality without providing comprehensible quality feedback for real-time correction. Furthermore, these models frequently lack generalizability under various imaging settings and are susceptible to the particular imaging devices [11].

The author, J. De Calleja et al. [31], integrated a 2-stage scheme for DR recognition: FE was processed through local binary patterns, and the classification stage was processed through ML-based Support Vector Machines (SVM) and Random Forest (RF), attaining a 97.46% ACU rate with a test case of 71 images. M. Gandhi et al. [32] proposed automatic DR recognition through SVM by sensing exudates from FunImgs with manual FE. With DL, J. Orlando et al. [24] integrated a CNN with manual and enhanced features for FE for sensing red lesions in the retina. U. Acharya et al. [33] integrated 331 FunImgs through MAs, BV, haemorrhage, and exudate-based features using SVM and attained 85% ACU. K. Anant et al. [26] integrated texture and wavelet features for DR recognition in basic-level analysis with involvement of DM and IP on the DIARETDB1 database and accomplished 97.95% ACU. In a different study, 331 fundus images were analyzed, and morphological image processing and support vector machine (SVM) techniques were utilized for the automatic detection of eye health [34]. S. Preetha et al. [14] described DM and ML methods in their analysis for the prediction of various diabetes-related diseases such as DR, skin cancer, and heart disease. S. Sadda et al. [13] used a quantitative method to recognize new parameters for sensing proliferative DR based on hypotheses of lesion location, surface area, number, and distance from the ONH center, which progressed the prediction procedure of DR with the involvement of imaging data and quantitative lesion parameters. The authors, J. Amin et al. [27], deliver an assessment of numerous practices for DR by sensing hemorrhages, MAs, exudates, and BV and analyzing numerous outcomes obtained from these practices experimentally. Y. Kumaran et al. [18] emphasize the dissimilar types of pre-processing and segmentation methods typically used for the detection of DR in the human eye, which contain several classification models. I. Sadek et al. [25] proposed automatic DR detection through DL with the integration of four CNNs to categorize DR into three classes (normal, exudates, and drusen) and achieved 91%–92% ACU. G. Zago et al. [6] proposed a lesion localization model using a deep NN through CNN, with integration of regions in place of segmentation localization processes and 2-CNNs implicated for training through the Standard DR Database and DIARETDB1, achieving 95% sensitivity. P. Kaur et al. [17] proposed the NN method for the categorization of several RIs using the MATLAB environment. A comparison study was done among the proposed methods using SVM, which helped generate an accurate result.

M. Voets et al. [12] proposed a study that integrated the Kaggle dataset EyePACS for finding DR from retinal FunImg test cases and experimented on existing work on several datasets, providing 95% ACU [2]. It aimed to improve the performance of detecting certain retinal lesions (RL) with their grading levels through a cost-effective ResNet-implicated RL-aware sub-network (RLASN) for reducing vanishing-gradient complexity, which was improved with more sensitive FE for


small lesions compared to VGG and Inception's existing net designs [15]. The RLASN included a feature pyramid structure that was intended to describe multi-scale features and dig out lesion types and position relationships [16] [23]. Identifying several types of RLs can help with making decisions in the clinical process, like fenofibrate for patients with hard exudate [19] and antiplatelet drugs used thoroughly in patients with bleeding retina [20]. Progression is another major problem in DR screening, as advancement of RLs is symptomatic of developing sight-threatening DR [21–23]. It stated that, as a substitute for direct end-to-end training from FunImgs to DR grades, a cost-effective RLASN was established to improve the capability of acquiring lesion features [26]. For the purpose of ultimately detecting nonproliferative diabetic retinopathy, the author outlined different ways of detecting microaneurysms, hemorrhages, and exudates; techniques for detecting blood vessels are also covered for proliferative diabetic retinopathy [28]. Author Veena Mayya et al. [30] proposed an analytical study through automated MA recognition and segmentation for DR early diagnosis, which was achieved using color fundus photography, optical coherence tomography angiography, or fluorescein angiography images. This study was categorized into classical IP, conventional ML, and DL-based practices and achieved significant analytical progress.

This section articulates the DR study based on the accessibility of FunImg data. A fundus camera is utilized to acquire two-dimensional digital RIs. The highly accessible early-stage DR recognition works make use of databases including images obtained by dilating the pupil. Many RI datasets, such as Kaggle DR [38] [39], MESSIDOR [35] [36], STARE [30], DeepDR [41], HRF [37], ODIR [40], UoA-DR [42], DRIVE [31], and so on, are openly accessible for the persistence of DR research studies. MAs are generally the initial visible sign of DR; their recognition can decrease further difficulties and loss of vision. The present manual assessment is hard to scale and a time-consuming process for a large patient population. Efficient automated detection and segmentation (ADS) of MAs will be able to decrease the liability of ophthalmologists to a certain level by computerizing the assessment activity and assisting in the early stages of DR diagnosis. The research society for ADS of MAs has developed numerous methodologies for early DR diagnosis [29]. In order to evaluate deep learning models and further investigate the clinical applications, particularly for lesion recognition, the author T. Li et al. [43] developed a new dataset called DDR. Using the ideas of mathematical morphology, the authors B. Lay et al. [44] devised a computerized method for the detection of microaneurysms (MA) in fluorescein angiograms.

The authors, Wejdan L. et al. [45], proposed an analysis of the detection of DR using DL practices. It reviewed various detection and classification techniques using DL, which analyze DR stages based on color fundus retina images. Author S. Mishra et al. [46] proposed DR recognition using DL. It integrated artificial intelligence (AI) techniques and used DenseNet to train the model on a massive dataset consisting of 3662 images to instinctively distinguish the DR stage, which has been categorized as having superior FunImg resolution. It integrated APTOS data derived from Kaggle with DR's five stages, categorized into the numbers 1 to 4. By using patients' fundus eye images as input, DenseNet's FE process produced results through an activation function, achieved 0.9611 ACU, and described the distinction between the VGG16 and DenseNet121 designs. The authors, Ayala et al. [47], proposed DR-improved detection using DL. It integrated a CNN over a fundus oculi image to identify the structure of the eyeball and establish the occurrence of DR. The factors were improved using the TL approach for mapping an image with the subsequent labeling structure. Training and testing are accomplished with a medical fundus oculi image dataset, with a pathology seriousness scale appearing in the eyeball as labels, attaining 97.78% ACU.

Author M. Mohsin Butt et al. [48] proposed a multi-channel CNN-based approach for the detection of DR from eye fundus images. It integrated 35,126 images from EyePACS and achieved 97.08% ACU. The authors, Fatima, Muhammad Imran, et al. [49], proposed a unified method for entropy-improvement-based DR recognition using a hybrid NN. It devised manipulating the discrete wavelet transforms to enhance the visibility of medical imaging by making the delicate features more prominent, and it classified images for further stages. It integrated three datasets, such as those from the Asia Pacific Tele-Ophthalmology Society (APTOS), Ultra-Wide Field (UWF), and MESSIDOR-2. The authors, Yuhao Niu, Lin Gu, et al. [50], intended explicable DR recognition through RIs. It has proven a direct relationship between the lesions and isolated neuron activation for pathological justification. Initially, it described new pathological signifiers using triggered neurons of the DR detector to determine both lesion appearance and spatial data, then visualized the DR indication encoded in the descriptor through Patho-GAN, which was used to produce medically plausible RIs. The author, Abdel Maksoud E. et al. [51], proposed the E-DenseNet computer-aided diagnosis system for detecting various diabetic retinopathy grades based on a hybrid DL technique. E-DenseNet integrated DenseNet and EyeNet versions based on TL. It modified conventional EyeNet by incorporating dense blocks and improving the resultant hyperparameters of the blended E-DenseNet versions. The author, Sikder, N. et al. [52], proposed classification of DR severity with integration of collaborative learning algorithmic sequences through examining RIs. It included various additional IP practices and steps of FE and feature selection and attained 94.20% classification accuracy with 0.32% boundary error and a 93.51% F-measure with 0.5% boundary error.

The authors, Nikos Tsiknakis, Dimitris, et al. [53], proposed DL-integrated recognition and classification for DR based on FunImgs. It included a description of all DR recognition stages, such as DR grading and complexity levels. The author, M. T. Al-Antary et al. [54], integrated features to enhance the interpretation, after which a pyramid of multi-scale features was incorporated to define the retinal structure in a distinct region. It trained a model in the traditional sense using cross-entropy loss to categorize severity levels of DR through healthy and non-healthy RTs, and it integrated the EyePACS and APTOS datasets. The author, Veena Mayya et al. [55], proposed a study on automated MA recognition for early diagnosis of DR with a description of various DR


diagnosis techniques with their advantages and limitations. The author, Shah P. et al. [56], proposed validation of deep CNN-based algorithmic sequences for recognition of DR-AI against the screening clinician process. The authors, Chetoui M. et al. [57], proposed reasonable end-to-end DL for DR recognition across multiple datasets. It included 90,000 images from nine open datasets, which were employed to evaluate the effectiveness of the planned procedure. The planned DL process tunes a pre-trained deep CNN for DR recognition. The author, Sebti, R. et al. [58], proposed a DL-based methodology for the recognition of DR. It presented an automated classification scenario from a certain set of RIs to identify DR. An automatic retinal image analysis (ARIA) method has been created by authors Shi, C. et al. [59] that combines the transfer-net ResNet50 deep network with an automatic feature-generation approach to automatically assess image quality and differentiate between eye abnormalities and artefacts associated with poor quality on color fundus retinal images. According to individual risk variables, authors Alfian, G. et al. [60] suggest using a deep neural network (DNN) in conjunction with recursive feature elimination (RFE) to offer an early diagnosis of diabetic retinopathy (DR). Color fundus photography, fluorescein angiography, B-scan ultrasonography, and optical coherence tomography are a few of the crucial imaging modalities utilized to diagnose diabetic retinopathy [61].

A multi-classification prototype has been generated through CNN algorithmic sequences with numerous parameters on a dataset of DR with several structures. The authors R. K. Jha et al. [64] stated an analysis to assess various categorization algorithmic sequences for estimation of HD, where several conventional processes like SVM, KNN, DT-DNN, NB, and RF [65–66] were utilized to validate the selection of features over the Rapid Miner (RM) instrument to train-learn employing the Cleveland dataset from the UCI repository environment [67–72]. The Diabetic Retinopathy Debrecen Data Set from the UCI machine learning repository was taken into account by the author Nagaraja Gundluru et al. [73], who then designed a deep learning model with principal component analysis (PCA) for dimensionality reduction and the Harris hawks optimization algorithm to extract the most crucial features. To distinguish the stages of DR, the author Asia, A.-O. et al. [74] use fundus photography and a deep learning tool called a convolutional neural network (CNN). The Xiangya No. 2 Hospital Ophthalmology (XHO), Changsha, China, provided the study's picture dataset, which is very vast, sparse, and labeled in an uneven manner. A hybrid method for the detection and classification of diabetic retinopathy in fundus pictures of the eye is proposed by author Butt, M.M. [75]. On pre-trained Convolutional Neural Network (CNN) models, transfer learning (TL) is applied to extract features that are then combined to produce a hybrid feature vector. The literature on AI approaches to DR, such as ML and DL in classification and segmentation, published in the open literature within six years (2016–2021), is covered by author Lakshminarayanan, V. [76]. A thorough list of the accessible DR datasets is also presented. The PICO (Patient, I-Intervention, C-Control, O-Outcome) and Preferred Reporting Items for Systematic Review and Meta-analysis (PRISMA) 2009 search methodologies were both used to create this list. Many researchers have achieved significant progress in early DR diagnosis and detection, but various complexities and disparities still occur, emphasizing a significant possibility for the advancement of completely automated early DR diagnosis [29].

III. METHODOLOGY

Deep Convolutional Neural Network (D-CNN) designs are extensively used in multi-label mapping and classification, which improves the analysis of the various DR grades such as normal, mild, moderate, severe, proliferative DR, and non-proliferative DR. DR degrees are articulated by multiple DR lesions appearing concurrently on the color retinal FunImgs. The various lesion types have numerous features that are difficult to segment and recognize by employing conventional methods. Consequently, the practical solution is to utilize an effective CNN model with a dual-image ResNet mapping approach. Retinal diagnosis promotes early detection of DR stages, which helps with timely treatment.

To accelerate the screening process, this research uses the Automated Decision Making ResNet Feed-Forward Neural Network (RNFFNN) Methodology to detect early-to-late stages of DR. The majority of the uses for CNN's high-level features are in the detection and classification of retinal lesions. This research is mainly focused on developing the best RI interpretation, which further helps to enhance the implementation of DR detection simulations. To obtain the best possible interpretation, features obtained from various pre-trained ConvNet simulations were intermingled using the intended multi-modal blended module.

The final stage of descriptions is employed to train a D-CNN used for DR recognition and severity-level prediction. Each ConvNet obtains unique features; blending them using 1D and cross pooling leads to improved interpretation compared to using features extracted from a single ConvNet. This research adopts deep learning-based convolutional neural networks to achieve varying objectives. First, an exploratory research study is carried out to gain an in-depth understanding of AD. The second objective is the core objective of this research, in which a new framework is proposed and applied to the public dataset. The proposed methodology module, trained on labeled images with deep understanding, could also accommodate unlabeled data, because deep learning supports both supervised and unsupervised segmentation. Finally, to check the feasibility of the proposed framework, an empirical evaluation is carried out. The classification and detection of DR stages are integrated using the dual-image approach of integration and aggregation of color fundus images and black-and-white images. Both photos are analyzed separately and combined with the missing points of each image sequence of the color fundus and black-and-white images. This research has integrated more than ten thousand images from different age groups, such as 10 to 25 years, 26 to 35 years, 36 to 45 years, 46 to 55 years, and above 56 years. Initially, all color fundus images are collected from the various age-group patients; we consider these to be the primary input images. In data collection, all gathered color fundus images are classified into two groups: sequential and non-sequential images.
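The dual-image integration described above pairs each color fundus image with a black-and-white counterpart before classification. The paper does not spell out the exact fusion, so the sketch below makes two assumptions: the black-and-white view is derived as a standard luma conversion, and the two views are simply stacked channel-wise into one network input.

```python
import numpy as np

# BT.601 luma weights; an assumption, since the paper does not state
# how its black-and-white images are derived.
LUMA = np.array([0.299, 0.587, 0.114])

def to_grayscale(rgb: np.ndarray) -> np.ndarray:
    """Convert an (H, W, 3) RGB fundus image to a single-channel view."""
    return rgb @ LUMA

def make_dual_input(rgb: np.ndarray) -> np.ndarray:
    """Stack the color image and its grayscale counterpart into one
    (H, W, 4) array so both views enter the network together."""
    gray = to_grayscale(rgb)
    return np.dstack([rgb, gray]).astype(np.float32)

demo = np.zeros((4, 4, 3))
print(make_dual_input(demo).shape)  # (4, 4, 4)
```

A real pipeline might instead feed the two views into separate ResNet stems and merge features later; channel stacking is simply the smallest concrete reading of "analyzed separately and combined".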


[Fig. 2 is a block diagram of the dual-image pipeline: original color images from the APTOS and EyePACS datasets, together with black-and-white images, pass through an image pre-processing stage (duplicate-image removal, noise removal, resize setting, cropping, rotate setting, colour enhancement, direction setting, and contrast setting); a feature selection and extraction stage (DR-based, histogram-based, symptoms-based, and medication-based features with dependable and non-dependable parameters, plus eye-artefact-associated and abnormality-associated features); integration of the collected features; an 80% training / 20% testing split into data clusters; and a classification, segmentation, and optimization stage with model tuning, performance testing, and evaluation.]

Fig. 2. Dual-image multi-layer mapping methodology for identification of DR early stages.
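The pre-processing stage of Fig. 2 (duplicate-image removal, cropping, contrast setting) can be sketched with plain NumPy. The function names and parameter choices here are illustrative stand-ins, not the authors' implementation:

```python
import hashlib
import numpy as np

def dedupe(images):
    """Duplicate-image removal: drop exact duplicates via content hashing."""
    seen, unique = set(), []
    for img in images:
        key = hashlib.sha256(img.tobytes()).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(img)
    return unique

def center_crop(img, h, w):
    """Cropping step: keep the central h x w region of the image."""
    top = (img.shape[0] - h) // 2
    left = (img.shape[1] - w) // 2
    return img[top:top + h, left:left + w]

def stretch_contrast(img):
    """Contrast setting: min-max normalize pixel values into [0, 1]."""
    img = img.astype(np.float32)
    return (img - img.min()) / (img.max() - img.min() + 1e-8)
```

Noise removal, rotation, and colour enhancement would follow the same pattern as further per-image transforms before feature extraction.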

The sequential images are the images that have been picked from the same patient and age group. The various sequential images hold a slight variation in the color fundus and help the research generate the best outcomes and high predictions for the five stages of DR. The automated system becomes complex when it tries to tune non-sequential images. The proposed methodology's training proficiency completely depends on balanced, error-free data, so it is required to tune the data for training and testing purposes to process it further in the proposed deep learning-based CNN implicated dual-image multi-layer mapping approach. The color fundus and black-and-white image-based data divide uniformly according to every DR stage, such as Non-DR, MiDR, MoDR, SeDR, and PrDR, which helps the model minimize any inequality during the training and progression of the proposed approach. All input color fundus and black-and-white images are equally sized, then processed in a systematic series way; the elected combination images are analyzed for grading and further predictions, as shown in Fig. 2. The classification task is mainly performed based on the deep learning Inception-ResNet model.
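The uniform per-stage division with an 80/20 training/testing split described above amounts to a stratified split. A minimal sketch, with the stage labels taken from the paper and the helper itself an assumed implementation:

```python
import random
from collections import defaultdict

# DR stage abbreviations as used in the paper
STAGES = ["Non-DR", "MiDR", "MoDR", "SeDR", "PrDR"]

def stratified_split(samples, train_frac=0.8, seed=0):
    """Split (image_id, stage) pairs 80/20 separately within each DR
    stage, so every grade is equally represented in both partitions."""
    by_stage = defaultdict(list)
    for image_id, stage in samples:
        by_stage[stage].append(image_id)
    rng = random.Random(seed)
    train, test = [], []
    for stage, ids in by_stage.items():
        rng.shuffle(ids)
        cut = int(len(ids) * train_frac)
        train += [(i, stage) for i in ids[:cut]]
        test += [(i, stage) for i in ids[cut:]]
    return train, test
```

Splitting within each stage, rather than over the pooled dataset, is what keeps the five grades from being over- or under-represented in either partition.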


Furthermore, the classification task has initiated the cross-entropy loss function in two variations: binary-class and multi-class classification.

IV. EXPERIMENTAL SETUP

The integration of dual-type sequential and non-sequential cluster images is required for auto-detection of DR stages. This research proposes an auto-fine-tuning system for the recognition of DR stages using a dual-image ResNet mapping approach. The sequential and non-sequential images were processed in parallel in the pre-processing and classification stages. The mapping techniques were integrated to analyze and map missing connections of retinal arterioles, microaneurysms, venules and dot points of the fovea, cotton-wool spots, the macula, the outer line of optic disc computations, and hard exudates and hemorrhages between color and black-and-white images. Missing computations are included in the sequence of vectors, which helps identify DR stages. A total of 5672 sequential and 7231 non-sequential color fundus and black-and-white retinal images were included in the test cases. An 80:20 ratio of best- and poor-quality images was used for training and testing, together with the 10-fold cross-validation technique.

The proposed methodology's training ability depends on reasonably error-free data, which is essential to tune the data for training-testing purposes and to manage it for the advanced process in the anticipated deep learning-based CNN (DL-CNN) implicated dual-image multi-layer mapping approach. The color fundus and black-and-white image-based data are uniformly divided according to every DR stage, namely Non-DR, MiDR, MoDR, SeDR, and PrDR, which helps the model eliminate any inequality during the progression of training and testing the proposed approach. All input color fundus and black-and-white images are equally sized, then processed and tuned in a systematic series, and elected as the combination images for analyzing the grading for further predictions. Fig. 3 represents the dual-image structural design of the custom-built DL-CNN based network stem segment with extracted features.

A. Image Normalization with Principal Component Analysis (PCA)

Eq. (1) is integrated for normalizing the dataset features: X signifies the features of the dataset, μ signifies the mean value of each feature x(i), and σ signifies the corresponding standard deviation. This normalization was executed using the scikit-learn StandardScaler (StdSca) [62]; Principal Component Analysis (PCA), also via scikit-learn, was employed for dimensionality reduction in the case of the MNIST and Fashion-MNIST sets selected for representing image-data features.

x_norm = (X − μ) / σ (1)

It has implicated a feed-forward neural network (FFNN) and a CNN, which together come with two different classification functions, ReLU and SoftMax. DL solutions to classification problems typically utilize the SoftMax function to perform the classification task, which indicates a discrete probability distribution (DPD) over K classes, expressed as

Σ_{k=1..K} p_k = 1 (2)

Taking x as the activation at the penultimate layer of a neural network and θ as its weight parameters at the SoftMax layer, o is the input to the SoftMax layer:

o = Σ_i θ_i x_i (3)

Subsequently, ŷ is the expected class:

p_k = exp(o_k) / Σ_{j=1..K} exp(o_j) (4)

ŷ = argmax_{i ∈ 1,…,N} p_i (5)

ReLU is an activation function with strong biological and mathematical underpinning [63]. Used as the classification function, the prediction becomes

ŷ = argmax_{i ∈ 1,…,N} max(0, o_i) (6)

with the cross-entropy loss

ℓ(θ) = −Σ_i y_i log(p_i) (7)

Let the input x be replaced by the penultimate activation output h:

o = Σ_i θ_i h_i (8)

The backpropagation algorithm, with Eq. (8) as input, is the same as in the conventional SoftMax-based deep neural network:

ℓ(θ) = −Σ_i [ y_i ( o_i − log Σ_j exp(o_j) ) ] (9)

B. Multi-level ConvNets based Pooling and Feature Integrations

This research has integrated two distinct pooling-based methods, cross pooling (CxPool) and 1D pooling (1DPool), to merge multi-level feature extraction from VGG32 through fc1 and fc2 with the integration of the Xception net environment. In CxPool, two distinct feature vectors (FV) A and B are adopted as input and a further FV C is produced, where A, B, C ∈ R^d. Every feature element c_i of the output vector C is processed through Eq. (10) to Eq. (13).

c_i = max(a_i, b_i) ∀ i ∈ {1,2,…,d} (10)
c_i = min(a_i, b_i) ∀ i ∈ {1,2,…,d} (11)
c_i = mean(a_i, b_i) ∀ i ∈ {1,2,…,d} (12)
c_i = a_i + b_i ∀ i ∈ {1,2,…,d} (13)
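As a concrete illustration, the four CxPool merge rules of Eq. (10) to Eq. (13) can be sketched in NumPy. This is a minimal sketch, not the authors' implementation: the function name is ours, and Eq. (12) and Eq. (13) are read here as the element-wise mean and sum of the two vectors.

```python
import numpy as np

def cx_pool(a, b, mode="max"):
    """Cross pooling (CxPool): merge two feature vectors A, B in R^d
    into one vector C in R^d, element by element (Eq. 10-13)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    assert a.shape == b.shape, "CxPool expects equal-length feature vectors"
    if mode == "max":       # Eq. (10): c_i = max(a_i, b_i)
        return np.maximum(a, b)
    if mode == "min":       # Eq. (11): c_i = min(a_i, b_i)
        return np.minimum(a, b)
    if mode == "mean":      # Eq. (12), read as the element-wise mean
        return (a + b) / 2.0
    if mode == "sum":       # Eq. (13), read as the element-wise sum
        return a + b
    raise ValueError(f"unknown CxPool mode: {mode}")

# toy feature vectors standing in for two branch outputs
a = np.array([1.0, 5.0, 2.0])
b = np.array([4.0, 3.0, 2.5])
print(cx_pool(a, b, "max"))   # element-wise maximum of a and b
```

The 1DPool of Eq. (14) to Eq. (17), introduced below, is analogous but pools adjacent pairs (k_{i*2}, k_{i*2+1}) within one vector, halving its length.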

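The SoftMax classification rule (a discrete probability distribution over K classes whose probabilities sum to 1, with the predicted class taken by argmax) and the ReLU-based variant using max(0, o) can be checked numerically. The logits below are made up for illustration, not values from the paper.

```python
import numpy as np

def softmax(o):
    """SoftMax: maps logits o to a discrete probability distribution
    over K classes; the probabilities sum to 1."""
    e = np.exp(o - np.max(o))          # shift logits for numerical stability
    return e / e.sum()

def predict_softmax(o):
    """Predicted class: argmax over the SoftMax probabilities."""
    return int(np.argmax(softmax(o)))

def predict_relu(o):
    """ReLU-as-classifier: argmax over max(0, o_k)."""
    return int(np.argmax(np.maximum(0.0, o)))

logits = np.array([1.2, -0.3, 2.5, 0.1, 0.4])   # one logit per DR grade 0-4
p = softmax(logits)
print(p.sum(), predict_softmax(logits), predict_relu(logits))
```

With strictly positive winning logits the two rules agree; they can differ only when all logits are non-positive, since ReLU clips those to zero.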

Fig. 3 (flowchart, reproduced as text): both input classes, the Augmented Colour Fundus Image class (A-CFI) and the Augmented Black-and-White Image class (A-BWI), pass through identical branches: Input Image 256 × 256 → Stem → (Inception-ResNet-A) × 5 → Reduction-A → (Inception-ResNet-B) × 10 → Reduction-B → (Inception-ResNet-C) × 5 → Pooling Technique → Fully Connected → Dropout1 → Fully Connected → Dropout2 → Fully Connected → Dropout3 → SoftMax || ReLU.

Fig. 3. The structural design of the custom-built DL-CNN based network stem segment with extracted features.
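The stage sequence of one Fig. 3 branch can be enumerated programmatically, which makes the repeat counts (5, 10, 5) easy to audit. The sketch below only lists stage names taken from the figure; layer internals are out of scope, and the builder function is ours.

```python
def build_branch(input_class):
    """Enumerate the stem-to-classifier stages of one Fig. 3 branch."""
    stages = [f"Input Image 256x256 ({input_class})", "Stem"]
    stages += ["Inception-ResNet-A"] * 5 + ["Reduction-A"]
    stages += ["Inception-ResNet-B"] * 10 + ["Reduction-B"]
    stages += ["Inception-ResNet-C"] * 5
    stages += ["Pooling Technique"]
    for i in (1, 2, 3):                      # three FC + Dropout pairs
        stages += ["Fully Connected", f"Dropout{i}"]
    stages += ["SoftMax || ReLU"]
    return stages

# the two parallel branches share one architecture
cfi_branch = build_branch("A-CFI")   # augmented colour fundus images
bwi_branch = build_branch("A-BWI")   # augmented black-and-white images
print(len(cfi_branch))               # total number of listed stages
```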


The 1DPool is employed to choose leading regional features from every VGG32 region, where the CxPool permits accumulating the leading features achieved by 1DPool with the global interpretation of the Xception net environment. The 1DPool-based synthesis takes one FV K as input and generates a further FV K^, where K ∈ R^d1 and K^ ∈ R^d2 under the executional condition d2 ≤ d1. K^ is a reduced interpretation of K, where K = {k1, k2, …, kd1} and K^ = {k^1, k^2, …, k^d2}. In this environment, every feature element k^i of the output vector K^ is calculated through Eq. (14) to Eq. (17).

k^i = max(k_{i*2}, k_{i*2+1}) ∀ i ∈ {1,2,…,d2} (14)
k^i = min(k_{i*2}, k_{i*2+1}) ∀ i ∈ {1,2,…,d2} (15)
k^i = mean(k_{i*2}, k_{i*2+1}) ∀ i ∈ {1,2,…,d2} (16)
k^i = k_{i*2} + k_{i*2+1} ∀ i ∈ {1,2,…,d2} (17)

The 1DPool has been employed individually on the extracted features of the VGG32-based fc1 and fc2 layers. After that, the CxPool method has been employed on the subsequent pooled features, whose FV is unified with the features extracted from Xception, generated from the two individual sets of input image classes, the Augmented Color Fundus Image class and the Augmented Black-and-White Image class, using CxPool, as shown in Fig. 3 and 4. As the final FV is a unified form of the global and local interpretations of the RIs, it offers robust hyper features.

V. RESULTS AND DISCUSSION

The multi-decision Inception-ResNet blended hybrid model is integrated with multiple layers of dual-image-based parameters that process sequential and non-sequential images. The proposed model has been trained with a multi-layered transfer learning mechanism tuned with 172 weighted layers, of which 86 weighted layers are connected with color fundus images and 86 more are connected with black-and-white images. The images are graded manually on a scale of 0 to 4 (0, normal; 1, mild; 2, moderate; 3, severe; and 4, proliferative DR) to indicate different severity levels, and the grading process has been extended to binary bit form, as follows:

Dual Labeling Mechanism (P, Q), (~P, ~Q),
where Q1 = {q1 / q1 ∈ {000, 001, 010, 011, 100}} and Q2 = {q2 / q2 ∈ {00, 01, 10, 11}}.

Q1 represents the primary case of labeling and Q2 the secondary case of labeling based on positive (1), true-positive (11), true-negative (10), false-positive (01), and false-negative (00).

Grade-0: Normal → 000.00.
Grade-1: Mild DR → 001. Various levels → {001.01, 001.10, 001.11}.
Grade-2: Moderate DR → 010. Various levels → {010.01, 010.10, 010.11}.
Grade-3: Severe DR → 011. Various levels → {011.01, 011.10, 011.11}.
Grade-4: Proliferate DR → 100. Various levels → {100.01, 100.10, 100.11}.

The DL-CNN based layered integration with the training and testing scenario and the grading process is shown in Fig. 4, which is integrated for detection of DR stages. The data collected for training and testing purposes is clustered according to the DR stage and the DR symptoms through binary bit formation, as shown in Table I.

Fig. 4 (flowchart, reproduced as text): Data Collection and Analysis → Diabetic Symptoms (DSs), where DSs = {genital thrush, polyuria, visual blurring, polyphagia, delayed healing, sudden weight loss, itching, irritability, obesity-level, partial paresis, muscle stiffness, alopecia, age, sex, weakness class}; VGG32 features and the data set are split into trained and tested data (binary codes 10, 11, 01, 00); these feed the DL-CNN based layered integration, which forms cluster groups: the Diabetic Cluster with Cluster Group 1 (CG1) codes {10, 11, 01, 00} and the Non-Diabetic Cluster with CG1 codes {00, 01, 10, 11}; evaluation reports Accuracy, Precision, Recall, and F1 score.

Fig. 4. DL-CNN based Layered Integration with training and testing scenario for detection of DR stages.

TABLE I. INTEGRATED SET OF IMAGES AND DR GRADING CLASS

DR Stages / Grade | Impact | Base binary class | Supporting sub-class | Sampling | Sequential and Non-sequential Images | Single / Dual Image Processing
Grade-0 | Normal | 000 | 000.00 | 2150 | Sequential | Dual Image
Grade-1 | Mild DR | 001 | {001.01, 001.10, 001.11} | 526 | Both | Dual Image
Grade-2 | Moderate DR | 010 | {010.01, 010.10, 010.11} | 1325 | Both | Dual Image
Grade-3 | Severe DR | 011 | {011.01, 011.10, 011.11} | 372 | Both | Dual Image
Grade-4 | Proliferate DR | 100 | {100.01, 100.10, 100.11} | 158 | Both | Dual Image

TABLE II. PARAMETER TUNING AND INTEGRATION FOR CLASSIFICATION-BASED DECISION MAKING, AND TEST-CASE CONDITIONS BASED ON STOCHASTIC GRADIENT DESCENT (SGD) OPTIMIZATION → PARAMETER TUNING AND INTEGRATION

Test conditions | Epochs value | Image learning rate (imlr) | Momentum1
Test condition 1 (TC1) | epochs > 70 | 0.0001 | 0.4
Test condition 2 (TC2) | epochs > 140 | 0.0002 | 0.5
Test condition 3 (TC3) | epochs > 210 | 0.0003 | 0.6
Test condition 4 (TC4) | epochs > 280 | 0.0004 | 0.7
Test condition 5 (TC5) | epochs > 350 | 0.0005 | 0.8
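Under our reading of Table II, the SGD image learning rate and momentum step up with the epoch count. A minimal sketch of that schedule follows; the thresholds and values are transcribed from the table, while the function itself is our assumption about how the test conditions are applied.

```python
# (epoch threshold, imlr, momentum1) rows transcribed from Table II,
# highest test condition first so the first match wins
SCHEDULE = [(350, 0.0005, 0.8), (280, 0.0004, 0.7),
            (210, 0.0003, 0.6), (140, 0.0002, 0.5), (70, 0.0001, 0.4)]

def sgd_params(epoch):
    """Return (imlr, momentum1) for the highest test condition whose
    epoch threshold has been passed; up to 70 epochs, use TC1 values."""
    for threshold, imlr, momentum in SCHEDULE:
        if epoch > threshold:
            return imlr, momentum
    return 0.0001, 0.4

print(sgd_params(150))   # epoch 150 falls under test condition 2
```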
Table II represents the dual-image-based multi-layer mapping approach based on classification and regression, used at the end to classify the five stages of DR based on the features extracted from a series of networks. Each stage consists of the following data. Table III represents the dual-image-based multi-layer mapping approach based on classification and regression, used at the end to classify the five stages of DR based on the features extracted from a series of networks for further decision making (dm).

For every dual-image-based multi-layer mapping approach:

imlr1 = 0.001, momentum1 = 0.4.
imlr2 = 0.005, momentum2 = 0.8.
cim1 → first moment of the color-image-based exponential decomposition rate in AOpt.
cim2 → second moment of the color-image-based exponential decomposition rate in AOpt.
bwim1 → first moment of the black-white-image-based exponential decomposition rate in AOpt.
bwim2 → second moment of the black-white-image-based exponential decomposition rate in AOpt.
cim1 = 0.7, cim2 = 0.890.
bwim1 = 0.7, bwim2 = 0.890.

The experimental scenarios are framed on the Kaggle APTOS dataset, which has shown that the proposed trained model contributes beyond the active methodologies through blended features. The proposed methodology has been compared with existing approaches based on integrated DR symptoms, their affecting factors, data metrics, and dual-image processing techniques. This research has experimented with dual images, which has helped to analyze the images in depth for detection of DR stages and to identify and map the missing patches between color fundus images and black-and-white images.

TABLE III. A DUAL-IMAGE-BASED MULTI-LAYER MAPPING APPROACH BASED ON CLASSIFICATION AND REGRESSION, USED AT THE END TO CLASSIFY THE FIVE STAGES OF DR BASED ON THE FEATURES EXTRACTED FROM A SERIES OF NETWORKS FOR FURTHER DECISION MAKING (DM)

Epoch range | Test condition stage 1 | Test condition stage 2 | Tiny cluster | cim1 | cim2 | bwim1 | bwim2
1 to 70 | Initialized Adaptive Moment Estimation (Adam) → bhm | Adam → parameter tuning and integration | For each tiny-cluster1 (Pmini, Qmini) → (P, Q) | 0.7 | 0.890 | 0.4 | 0.5
71 to 90 | Initialized Adam → bhm | Adam → parameter tuning and integration | For each tiny-cluster2 (Pmini, Qmini) → (P, Q) | 0.7 | 0.890 | 0.5 | 0.6
Above 90 | Adaptive Moment Estimation | If the validation error is not improving for four epochs, then update the multitasking parameters | imlr1 = avg((imlr1 × 0.01) + (imlr2 × 0.01)); imlr2 = avg((imlr1 × 0.01) + (imlr2 × 0.01)) | 0.7 | 0.890 | 0.7 | 0.8
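The plateau rule in the last row of Table III (if the validation error does not improve for four epochs, update the two image learning rates) can be sketched as follows. The update is our reading of the printed formula, taken as the average of the two 0.01-scaled rates, and the helper structure is an assumption, not the authors' code.

```python
def plateau_update(imlr1, imlr2, val_errors, patience=4):
    """If the validation error has not improved over the last `patience`
    epochs, set both rates to the average of the 0.01-scaled rates
    (our reading of the Table III row for epochs above 90)."""
    recent, earlier = val_errors[-patience:], val_errors[:-patience]
    improved = bool(earlier) and min(recent) < min(earlier)
    if not improved:
        new = ((imlr1 * 0.01) + (imlr2 * 0.01)) / 2.0
        return new, new
    return imlr1, imlr2

# validation error stalled over the last four epochs -> both rates shrink
print(plateau_update(0.001, 0.005, [0.30, 0.21, 0.21, 0.21, 0.21, 0.21]))
# validation error still improving -> rates unchanged
print(plateau_update(0.001, 0.005, [0.50, 0.40, 0.30, 0.20, 0.10]))
```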


VI. CONCLUSION

A trained clinician or ophthalmologist must analyze and estimate digital color fundus photographs of the retina to identify DR based on the presence of lesions associated with the vascular malformations brought on by the disease. This labour-intensive, manual process takes time. This study suggested a ResNet feed-forward neural network technology for automated decision-making. In the pre-processing and classification steps, the sequential and non-sequential pictures were analyzed concurrently. The mapping approaches combined to evaluate and map the hard exudates and hemorrhages, microaneurysms, venules and dot points of the fovea, cotton-wool spots, the macula, the outside line of the computations of the optic disc, and retinal arterioles between color and black-and-white pictures. Missing computations are incorporated into the vector sequence, which makes it easier to recognize DR phases. The test cases comprised a total of 5672 sequential and 7231 non-sequential color fundus and black-and-white retinal pictures. The 10-fold cross-validation technique was used in testing and training with an 80:20 ratio of high- and low-quality photos. For testing and analyzing high-quality photographs, the accuracy, sensitivity, and specificity were 98.9%, 98.7%, and 98.3%, respectively; for low-quality images, they were 94.9%, 93.6%, and 93.2%.

AUTHORS' CONTRIBUTION

Conceptualization: A., A.K., and Henge, S.K.; Methodology: Henge, S.K., and Bhagat, A.; Software: A., A.K., and Henge, S.K.; Validation: A., A.K., Henge, S.K., and Mandal, S.K.; Formal analysis: Henge, S.K., and Bhagat, A.; Investigation: Bhagat, A., and A., A.K.; Resources: A., A.K., and Henge, S.K.; Data curation: A., A.K., Mandal, S.K., and Henge, S.K.; Writing, original draft preparation: A., A.K., Bhagat, A., and Henge, S.K.; Writing, review and editing: A., A.K., and Henge, S.K.; Visualization: Bhagat, A., and A., A.K.; Supervision: Henge, S.K.

CONFLICTS OF INTEREST

The authors declare no conflict of interest.

FUNDING

This research received no external funding.

REFERENCES
[1] Anumol Sajan, Anamika K, Simy Mary Kurian, "Diabetic Retinopathy Detection using Deep Learning," International Journal of Engineering Research & Technology, Special Issue 2022, vol. 10, issue 04, pp. 154-159.
[2] Mushtaq and Farheen Siddiqui, "Detection of diabetic retinopathy using deep learning methodology," IOP Conf. Ser.: Mater. Sci. Eng. 1070 012049, pp. 1-13, 2021, doi: 10.1088/1757-899X/1070/1/012049.
[3] S. K. Pandey and V. Sharma, "World diabetes day 2018: Battling the Emerging Epidemic of Diabetic Retinopathy," Indian J Ophthalmol.
[4] https://www.health.harvard.edu/a_to_z/retinopathy-a-to-z.
[5] https://missinglink.ai/guides/convolutional-neural-networks/convolutional-neural-networks-architecture-forging-pathways-future/.
[6] G. T. Zago, R. V. Andreão, B. Dorizzi, and E. O. Teatini Salles, "Diabetic retinopathy detection using red lesion localization and convolutional neural networks," Comput. Biol. Med., vol. 116, p. 103537, 2020, doi: 10.1016/j.compbiomed.2019.103537.
[7] Wilkinson, C. et al. Proposed international clinical diabetic retinopathy and diabetic macular edema disease severity scales. Ophthalmology 110, 1677–1682 (2003).
[8] Dai, L., Wu, L., Li, H. et al. A deep learning system for detecting diabetic retinopathy across the disease spectrum. Nat Commun 12, 3242 (2021). https://doi.org/10.1038/s41467-021-23458-5.
[9] Group ETDRSR. Treatment techniques and clinical guidelines for photocoagulation of diabetic macular edema: Early Treatment Diabetic Retinopathy Study report number 2. Ophthalmology 94, 761–774 (1987).
[10] Fundus Disease Group in Ophthalmology Branch of Chinese Medical Association. Guidelines of retinal image acquisition and reading for diabetic retinopathy screening in China. Chin. J. Ophthalmol. 53, 890–896 (2017).
[11] Shen, Y. et al. Domain-invariant interpretable fundus image quality assessment. Med. Image Anal. 61, 101654 (2020).
[12] M. Voets, K. Møllersen, and L. A. Bongo, "Reproduction study using public data of: Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs," PLoS One, vol. 14, no. 6, pp. 1–11, 2019, doi: 10.1371/journal.pone.0217541.
[13] S. R. Sadda et al., "Quantitative assessment of the severity of diabetic retinopathy," Am. J. Ophthalmol., 2020, doi: 10.1016/j.ajo.2020.05.021.
[14] S. Preetha, N. Chandan, K. Darshan N, and B. Gowrav P, "Diabetes Disease Prediction Using Machine Learning," Int. J. Recent Trends Eng. Res., vol. 6, no. 5, 2020, doi: 10.23883/IJRTER.2020.6029.65Q5H.
[15] He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 770–778 (IEEE, 2016).
[16] Lin, T.-Y. et al. Feature pyramid networks for object detection. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 2117–2125 (IEEE, 2017).
[17] P. Kaur, S. Chatterjee, and D. Singh, "Neural network technique for diabetic retinopathy detection," Int. J. Eng. Adv. Technol., vol. 8, no. 6, pp. 440–445, 2019, doi: 10.35940/ijeat.E7835.088619.
[18] Y. Kumaran and C. M. Patil, "A brief review of the detection of diabetic retinopathy in human eyes using pre-processing & segmentation techniques," International Journal of Recent Technology and Engineering, vol. 7, no. 4, pp. 310–320, 2018.
[19] Keech, A. C. et al. Effect of fenofibrate on the need for laser treatment for diabetic retinopathy (FIELD study): a randomised controlled trial. Lancet 370, 1687–1697 (2007).
[20] Ying, G.-s. et al. Association between antiplatelet or anticoagulant drugs and retinal or subretinal hemorrhage in the comparison of age-related macular degeneration treatments trials. Ophthalmology 123, 352–360 (2016).
[21] Ribeiro, M. L., Nunes, S. G. & Cunha-Vaz, J. G. Microaneurysm turnover at the macula predicts risk of development of clinically significant macular edema in persons with mild nonproliferative diabetic retinopathy. Diabetes Care 36, 1254–1259 (2013).
[22] Hove, M. N., Kristensen, J. K., Lauritzen, T. & Bek, T. Quantitative analysis of retinopathy in type 2 diabetes: identification of prognostic parameters for developing visual loss secondary to diabetic maculopathy. Acta Ophthalmol. Scand. 82, 679–685 (2004).
[23] Klein, R., Klein, B. E. & Moss, S. E. How many steps of progression of diabetic retinopathy are meaningful? The Wisconsin Epidemiologic Study of Diabetic Retinopathy. Arch. Ophthalmol. 119, 547–553 (2001).
[24] J. I. Orlando, E. Prokofyeva, M. del Fresno, and M. B. Blaschko, "An ensemble deep learning based approach for red lesion detection in fundus images," Comput. Methods Programs Biomed., vol. 153, pp. 115–127, 2018, doi: 10.1016/j.cmpb.2017.10.017.
[25] I. Sadek, M. Elawady, and A. E. R. Shabayek, "Automatic Classification of Bright Retinal Lesions via Deep Network Features," pp. 1–20, 2017.
[26] Gülçehre, Ç. & Bengio, Y. Knowledge matters: importance of prior information for optimization. J. Mach. Learn. Res. 17, 226–257 (2016).
[27] K. A. Anant, T. Ghorpade, and V. Jethani, "Diabetic retinopathy detection through image mining for type 2 diabetes," in 2017


International Conference on Computer Communication and Informatics, ICCCI 2017, 2017, doi: 10.1109/ICCCI.2017.8117738.
[28] J. Amin, M. Sharif, and M. Yasmin, "A Review on Recent Developments for Detection of Diabetic Retinopathy," Scientifica, vol. 2016, 2016, doi: 10.1155/2016/6838976.
[29] Veena Mayya, Sowmya Kamath S., Uma Kulkarni, Automated microaneurysms detection for early diagnosis of diabetic retinopathy: A comprehensive review, Computer Methods and Programs in Biomedicine Update, Volume 1, 2021, 100013, https://doi.org/10.1016/j.cmpbup.2021.100013.
[30] A.D. Hoover, V. Kouznetsova, M. Goldbaum, Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response, IEEE Trans. Med. Imaging 19 (3) (2000) 203–210, doi: 10.1109/42.845178.
[31] J. Staal, M.D. Abramoff, M. Niemeijer, M.A. Viergever, B. van Ginneken, Ridge-based vessel segmentation in color images of the retina, IEEE Trans. Med. Imaging 23 (4) (2004) 501–509.
[32] J. De Calleja, L. Tecuapetla, and M. A. Medina, "LBP and Machine Learning for Diabetic Retinopathy Detection," pp. 110–117, 2014.
[33] M. Gandhi and R. Dhanasekaran, "Diagnosis of diabetic retinopathy using morphological process and SVM classifier," Int. Conf. Commun. Signal Process. ICCSP 2013 - Proc., pp. 873–877, 2013, doi: 10.1109/iccsp.2013.6577181.
[34] U. R. Acharya, C. M. Lim, E. Y. K. Ng, C. Chee, and T. Tamura, "Computer-based detection of diabetes retinopathy stages using digital fundus images," Proc. Inst. Mech. Eng. Part H J. Eng. Med., vol. 223, no. 5, pp. 545–553, 2009, doi: 10.1243/09544119JEIM486.
[35] M.D. Abràmoff, J.C. Folk, D.P. Han, et al., Automated analysis of retinal images for detection of referable diabetic retinopathy, JAMA Ophthalmol. 131 (3) (2013) 351–357.
[36] E. Decencière, X. Zhang, G. Cazuguel, et al., Feedback on a publicly distributed database: the Messidor database, Image Anal. Stereol. 33 (3) (2014) 231–234.
[37] A. Budai, R. Bock, A. Maier, J. Hornegger, G. Michelson, Robust vessel segmentation in fundus images, Int. J. Biomed. Imaging 2013 (2013) 154860, doi: 10.1155/2013/154860.
[38] Kaggle Diabetic Retinopathy Detection Training Dataset (DRD), 2013, (https://www.kaggle.com/c/diabetic-retinopathy-detection). Online; accessed 5 January 2023.
[39] APTOS 2019 Blindness Detection, 2019, (https://www.kaggle.com/c/aptos2019-blindness-detection). Online; accessed 5 January 2023.
[40] Ocular Disease Intelligent Recognition (ODIR-2019), 2013, (https://odir2019.grand-challenge.org/introduction/). Online; accessed 5 January 2023.
[41] DeepDR Diabetic Retinopathy Image Dataset (DeepDRiD), 2013, (https://isbi.deepdr.org/data.html). Online; accessed 5 January 2023.
[42] W. Abdulla, R.J. Chalakkal, University of Auckland Diabetic Retinopathy (UoA-DR) Database, 2018, 10.17608/k6.auckland.5985208.v5.
[43] T. Li, Y. Gao, K. Wang, S. Guo, H. Liu, H. Kang, Diagnostic assessment of deep learning algorithms for diabetic retinopathy screening, Inf. Sci. (Ny) 501 (2019) 511–522.
[44] B. Lay, C. Baudoin, J.-C. Klein, Automatic detection of microaneurysms in retinopathy fluoro-angiogram, in: Proceedings of SPIE - The International Society for Optical Engineering, vol. 432, 1983, pp. 165–173.
[45] Wejdan L. Alyoubi, Wafaa M. Shalash, Maysoon F. Abulkhair, Diabetic retinopathy detection through deep learning techniques: A review, Informatics in Medicine Unlocked, Volume 20, 2020, 100377, ISSN 2352-9148, https://doi.org/10.1016/j.imu.2020.100377.
[46] S. Mishra, S. Hanchate and Z. Saquib, "Diabetic Retinopathy Detection using Deep Learning," 2020 International Conference on Smart Technologies in Computing, Electrical and Electronics (ICSTCEE), 2020, pp. 515-520, doi: 10.1109/ICSTCEE49637.2020.9277506.
[47] Ayala, Angel, Tomás Ortiz Figueroa, Bruno Fernandes, and Francisco Cruz. 2021. "Diabetic Retinopathy Improved Detection Using Deep Learning," Applied Sciences 11, no. 24: 11970. https://doi.org/10.3390/app112411970.
[48] M. Mohsin Butt, Ghazanfar Latif, D.N.F. Awang Iskandar, Jaafar Alghazo, Adil H. Khan, Multi-channel Convolutions Neural Network Based Diabetic Retinopathy Detection from Fundus Images, Procedia Computer Science, Volume 163, 2019, Pages 283-291, ISSN 1877-0509, https://doi.org/10.1016/j.procs.2019.12.110.
[49] Fatima, Muhammad Imran, Anayat Ullah, Muhammad Arif, Rida Noor, A unified technique for entropy enhancement based diabetic retinopathy detection using hybrid neural network, Computers in Biology and Medicine, Volume 145, 2022, 105424, ISSN 0010-4825, https://doi.org/10.1016/j.compbiomed.2022.105424.
[50] Yuhao Niu, Lin Gu, Yitian Zhao, Feng Lu, Explainable Diabetic Retinopathy Detection and Retinal Image Generation, GENERIC COLORIZED JOURNAL, VOL. XX, NO. XX, XXXX 2017, https://doi.org/10.48550/arXiv.2107.00296.
[51] AbdelMaksoud E, Barakat S, Elmogy M. A computer-aided diagnosis system for detecting various diabetic retinopathy grades based on a hybrid deep learning technique. Med Biol Eng Comput. 2022 Jul;60(7):2015-2038. doi: 10.1007/s11517-022-02564-6. Epub 2022 May 11. PMID: 35545738; PMCID: PMC9225981.
[52] Sikder, N.; Masud, M.; Bairagi, A.K.; Arif, A.S.M.; Nahid, A.-A.; Alhumyani, H.A. Severity Classification of Diabetic Retinopathy Using an Ensemble Learning Algorithm through Analyzing Retinal Images. Symmetry 2021, 13, 670. https://doi.org/10.3390/sym13040670.
[53] Nikos Tsiknakis, Dimitris Theodoropoulos, Georgios Manikis, Emmanouil Ktistakis, Ourania Boutsora, Alexa Berto, Fabio Scarpa, Alberto Scarpa, Dimitrios I. Fotiadis, Kostas Marias, Deep learning for diabetic retinopathy detection and classification based on fundus images: A review, Computers in Biology and Medicine, Volume 135, 2021, 104599, ISSN 0010-4825, https://doi.org/10.1016/j.compbiomed.2021.104599.
[54] M. T. Al-Antary and Y. Arafa, "Multi-Scale Attention Network for Diabetic Retinopathy Classification," in IEEE Access, vol. 9, pp. 54190-54200, 2021, doi: 10.1109/ACCESS.2021.3070685.
[55] Veena Mayya, Sowmya Kamath S., Uma Kulkarni, Automated microaneurysms detection for early diagnosis of diabetic retinopathy: A comprehensive review, Computer Methods and Programs in Biomedicine Update, Volume 1, 2021, 100013, ISSN 2666-9900, https://doi.org/10.1016/j.cmpbup.2021.100013.
[56] Shah P, Mishra DK, Shanmugam MP, Doshi B, Jayaraj H, Ramanjulu R. Validation of Deep Convolutional Neural Network-based algorithm for detection of diabetic retinopathy - Artificial intelligence versus clinician for screening. Indian J Ophthalmol. 2020 Feb;68(2):398-405. doi: 10.4103/ijo.IJO_966_19. PMID: 31957737; PMCID: PMC7003578.
[57] Chetoui M, Akhloufi MA. Explainable end-to-end deep learning for diabetic retinopathy detection across multiple datasets. J Med Imaging (Bellingham). 2020 Jul;7(4):044503. doi: 10.1117/1.JMI.7.4.044503. Epub 2020 Aug 28. PMID: 32904519; PMCID: PMC7456641.
[58] Sebti, R., Zroug, S., Kahloul, L., Benharzallah, S. (2022). A Deep Learning Approach for the Diabetic Retinopathy Detection. In: Ben Ahmed, M., Boudhir, A.A., Karaș, İ.R., Jain, V., Mellouli, S. (eds) Innovations in Smart Cities Applications Volume 5. SCA 2021. Lecture Notes in Networks and Systems, vol 393. Springer, Cham. https://doi.org/10.1007/978-3-030-94191-8_37.
[59] Shi, C., Lee, J., Wang, G. et al. Assessment of image quality on color fundus retinal images using the automatic retinal image analysis. Sci Rep 12, 10455 (2022). https://doi.org/10.1038/s41598-022-13919-2.
[60] Alfian, G.; Syafrudin, M.; Fitriyani, N.L.; Anshari, M.; Stasa, P.; Svub, J.; Rhee, J. Deep Neural Network for Predicting Diabetic Retinopathy from Risk Factors. Mathematics 2020, 8, 1620. https://doi.org/10.3390/math8091620.
[61] Salz DA, Witkin AJ. Imaging in diabetic retinopathy. Middle East Afr J Ophthalmol. 2015 Apr-Jun;22(2):145-50. doi: 10.4103/0974-9233.151887. PMID: 25949070; PMCID: PMC4411609.


[62] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research 12 (2011), 2825–2830.
[63] Richard HR Hahnloser, Rahul Sarpeshkar, Misha A Mahowald, Rodney J Douglas, and H Sebastian Seung. 2000. Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit. Nature 405, 6789 (2000), 947.
[64] Jha, R.K., Henge, S.K. and Sharma, A., 2020. Optimal machine learning classifiers for prediction of heart disease. Int. J. Control Autom, 13(1), pp. 31-37. Available: http://sersc.org/journals/index.php/IJCA/article/view/6680.
[65] S. K. Henge and B. Rama, "Neural fuzzy closed loop hybrid system for classification, identification of mixed connective consonants and symbols with layered methodology," 2016 IEEE 1st International Conference on Power Electronics, Intelligent Control and Energy Systems (ICPEICES), 2016, pp. 1-6, doi: 10.1109/ICPEICES.2016.7853708.
[66] Bhupinder Singh, Santosh Kumar Henge, Neural Fuzzy Inference Hybrid System with SVM for Identification of False Singling in Stock Market Prediction for Profit Estimation, Intelligent Systems and Computing, https://doi.org/10.1007/978-3-030-51156-2_27, July 2020.
[67] Rahul Kumar Jha, Santosh Kumar Henge, Sanjeev Kumar Mandal, Amit Sharma, Supriya Sharma, Ashok Sharma, Afework Aemro Berhanu, "Neural Fuzzy Hybrid Rule-Based Inference System with Test Cases for Prediction of Heart Attack Probability", Mathematical Problems in Engineering, vol. 2022, Article ID 3414877, 18 pages, 2022. https://doi.org/10.1155/2022/3414877.
[68] S. K. Henge and B. Rama, "Comparative study with analysis of OCR algorithms and invention analysis of character recognition approached methodologies," 2016 IEEE 1st International Conference on Power Electronics, Intelligent Control and Energy Systems (ICPEICES), 2016, pp. 1-6, doi: 10.1109/ICPEICES.2016.7853643.
[69] Jha, R.K., Henge, S.K., Sharma, A. (2022). Heart Disease Prediction and Hybrid GANN. In: Kahraman, C., Cebi, S., Cevik Onar, S., Oztaysi, B., Tolga, A.C., Sari, I.U. (eds) Intelligent and Fuzzy Techniques for Emerging Conditions and Digital Transformation. INFUS 2021. Lecture Notes in Networks and Systems, vol 308. Springer, Cham. https://doi.org/10.1007/978-3-030-85577-2_52.
[70] Singh, B., Henge, S.K. (2021). Neural Fuzzy Inference Hybrid System with Support Vector Machine for Identification of False Singling in Stock Market Prediction for Profit Estimation. In: Kahraman, C., Cevik Onar, S., Oztaysi, B., Sari, I., Cebi, S., Tolga, A. (eds) Intelligent and Fuzzy Techniques: Smart and Innovative Solutions. INFUS 2020. Advances in Intelligent Systems and Computing, vol 1197. Springer, Cham. https://doi.org/10.1007/978-3-030-51156-2_27.
[71] Henge, S.K., Rama, B. (2017). Five-Layered Neural Fuzzy Closed-Loop Hybrid Control System with Compound Bayesian Decision-Making Process for Classification Cum Identification of Mixed Connective Conjunct Consonants and Numerals. In: Bhatia, S., Mishra, K., Tiwari, S., Singh, V. (eds) Advances in Computer and Computational Sciences. Advances in Intelligent Systems and Computing, vol 553. Springer, Singapore. https://doi.org/10.1007/978-981-10-3770-2_58.
[72] Henge, S.K., Rama, B. (2018). OCR-Assessment of Proposed Methodology Implications and Invention Outcomes with Graphical Representation Algorithmic Flow. In: Saeed, K., Chaki, N., Pati, B., Bakshi, S., Mohapatra, D. (eds) Progress in Advanced Computing and Intelligent Engineering. Advances in Intelligent Systems and Computing, vol 563. Springer, Singapore. https://doi.org/10.1007/978-981-10-6872-0_6.
[73] Nagaraja Gundluru, Dharmendra Singh Rajput, Kuruva Lakshmanna, Rajesh Kaluri, Mohammad Shorfuzzaman, Mueen Uddin, Mohammad Arifin Rahman Khan, "Enhancement of Detection of Diabetic Retinopathy Using Harris Hawks Optimization with Deep Learning Model", Computational Intelligence and Neuroscience, vol. 2022, Article ID 8512469, 13 pages, 2022. https://doi.org/10.1155/2022/8512469.
[74] Asia, A.-O.; Zhu, C.-Z.; Althubiti, S.A.; Al-Alimi, D.; Xiao, Y.-L.; Ouyang, P.-B.; Al-Qaness, M.A.A. Detection of Diabetic Retinopathy in Retinal Fundus Images Using CNN Classification Models. Electronics 2022, 11, 2740. https://doi.org/10.3390/electronics11172740.
[75] Butt, M.M.; Iskandar, D.N.F.A.; Abdelhamid, S.E.; Latif, G.; Alghazo, R. Diabetic Retinopathy Detection from Fundus Images of the Eye Using Hybrid Deep Learning Features. Diagnostics 2022, 12, 1607. https://doi.org/10.3390/diagnostics12071607.
[76] Lakshminarayanan, V.; Kheradfallah, H.; Sarkar, A.; Jothi Balaji, J. Automated Detection and Diagnosis of Diabetic Retinopathy: A Comprehensive Survey. J. Imaging 2021, 7, 165. https://doi.org/10.3390/jimaging7090165.
