Shashank - Seminar Report
ON
Submitted to
The faculty of Engineering and Technology of
Kakatiya University, Warangal
In Partial fulfilment of the requirements
For the award of
Bachelor of Technology
In
Information Technology
By
T.Shashank
B21IT083
Assistant Professor
Dept of IT
RETINOPATHY” in the partial fulfillment of the requirement of B.Tech degree during this
I wish to take this opportunity to express my deep gratitude to all the people who have extended
their cooperation in various ways during my Seminar. It is my pleasure to acknowledge the help
of all those individuals.
I thank Dr. K. Ashoka Reddy, Principal of Kakatiya Institute of Technology & Science,
Warangal, for his strong support.
I thank Dr. T.Senthil Murugan, Professor & Head, Department of Information Technology, for the constant support extended in giving shape to this Seminar.
All the faculty members extended excellent cooperation by guiding me in every aspect while completing this Seminar successfully. Their guidance helped me greatly, and I am very grateful to them.
T.Shashank
B21IT083
ABSTRACT
CONTENTS
1. INTRODUCTION
1.1 Background
1.2 Objectives
2. DIABETIC RETINOPATHY
2.1 Definition and Characteristics
2.2 Impact on Vision
3. DIAGNOSTIC CHALLENGES
3.1 Time Delays in Traditional Screening
3.2 Financial Costs and Risk of Blindness
4. METHODOLOGY
4.1 Deep Learning Approach
4.2 Convolutional Neural Network (CNN)
4.3 Dataset Description
5. DIABETIC RETINOPATHY DIAGNOSIS RESULTS
5.1 CNN Performance Metrics
5.2 Sensitivity and Specificity
5.3 Validation Set Results
6. EYE LATERALITY DETECTION
6.1 CNN Training for Laterality Detection
6.2 Dataset for Eye Laterality
7. SIGNIFICANCE AND CONTRIBUTION
7.1 Advancements in Automated Diagnostics
7.2 Addressing Critical Challenges
8. CONCLUSION
LIST OF FIGURES
5.3.3 Confusion Matrix
CHAPTER 1
INTRODUCTION
1.1 Background
1.2 Objectives
CHAPTER 2
DIABETIC RETINOPATHY
CHAPTER 3
DIAGNOSTIC CHALLENGES
3.1 Time Delays in Traditional Screening
Traditional screening methods for diabetic retinopathy are encumbered by inherent time
delays in reporting and subsequent intervention. The reliance on manual examination by
ophthalmologists, coupled with the growing demand for screenings, often results in delays that
can impact patient outcomes. The integration of automated and technologically advanced
diagnostic tools, as explored in this study, seeks to address and minimize these time delays,
enabling more timely and effective interventions for individuals with diabetic retinopathy.
3.2 Financial Costs and Risk of Blindness
The traditional methods of diagnosing diabetic retinopathy not only entail time delays but
are also associated with significant financial costs and heightened risks of blindness. The
financial burden arises from the extensive resources required for manual screenings by
ophthalmologists and subsequent medical interventions. Moreover, the inherent risk of blindness
underscores the urgent need for cost-effective and efficient diagnostic approaches. This study
delves into leveraging deep learning techniques to mitigate these financial costs and reduce the
peril of blindness through timely and accurate detection of diabetic retinopathy.
CHAPTER 4
METHODOLOGY
4.1 Deep Learning Approach
The methodology begins by elucidating the choice of a deep learning approach, specifically
Convolutional Neural Networks (CNNs). The discussion highlights the inherent capability of
CNNs to automatically learn hierarchical features from images, making them well-suited for
complex visual recognition tasks. Emphasis is placed on the ability of CNNs to capture spatial
dependencies in retinal images, crucial for diabetic retinopathy diagnosis.
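To make the idea of spatial feature learning concrete, the short sketch below passes a dummy fundus-sized image through a single convolution-and-pooling stage. The framework (TensorFlow/Keras) and the 512 x 512 image size are assumptions for illustration; the report itself does not prescribe them.

```python
# Minimal sketch (assumed TensorFlow/Keras): a convolutional layer preserves the
# 2-D layout of the image, which is what lets a CNN capture spatial dependencies
# between neighbouring retinal structures.
import tensorflow as tf

x = tf.random.uniform((1, 512, 512, 3))  # one dummy RGB fundus-sized image
conv = tf.keras.layers.Conv2D(32, kernel_size=3, padding="same", activation="relu")
pool = tf.keras.layers.MaxPooling2D(pool_size=2)

features = pool(conv(x))
print(features.shape)  # (1, 256, 256, 32): a spatial grid of learned feature maps
```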
4.2 Convolutional Neural Network (CNN)
This section provides a more detailed overview of the CNN architecture employed in the study. It
covers the fundamental building blocks, such as convolutional layers, pooling layers, and fully
connected layers. The discussion emphasizes how the architecture's depth and layer-specific
functions contribute to learning features from the funduscopic images.
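The sketch below illustrates these building blocks as a small stack of convolutional, pooling, and fully connected layers ending in a five-way softmax over the DR grades. It is an illustrative architecture only: the report does not publish the exact layer counts or sizes, and the 224 x 224 input resolution is an assumption.

```python
# Illustrative CNN for five-grade DR classification (assumed TensorFlow/Keras).
# Layer counts, filter sizes and input resolution are assumptions, not the
# study's published architecture.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_dr_cnn(input_shape=(224, 224, 3), num_classes=5):
    """Stacked convolution + pooling stages followed by fully connected layers."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(128, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),  # DR grades 0-4
    ])

model = build_dr_cnn()
model.summary()
```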
Detailed information on the CNN's training parameters is included, covering aspects like learning
rate, batch size, and optimization algorithms. The rationale behind the chosen values is explained,
considering the trade-off between model convergence speed and avoidance of overfitting.
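Continuing the sketch above, the snippet below shows how such training parameters might be wired up. The optimiser, learning rate, batch size and epoch count are assumed values chosen to illustrate the trade-off described here; they are not the figures used in the study.

```python
# Assumed training configuration (not the study's exact values).
import tensorflow as tf

model = build_dr_cnn()  # CNN sketched in Section 4.2

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # small LR favours stable convergence
    loss="sparse_categorical_crossentropy",                  # integer DR grades 0-4
    metrics=["accuracy"],
)

# train_images / train_labels are hypothetical arrays of preprocessed fundus
# images and their DR grades.
# model.fit(train_images, train_labels,
#           validation_split=0.1,
#           batch_size=32,   # larger batches speed convergence but can hurt generalisation
#           epochs=30)
```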
4.3 Dataset Description
FIG 4.3.1
EyePacs Dataset:
Expanding on the dataset, this section provides an in-depth description of the EyePacs dataset
from Kaggle. Details include the number of images, distribution across different stages of diabetic
retinopathy, and any preprocessing steps applied to enhance the quality of the images. Challenges
encountered in the dataset, such as variations in resolution, lighting conditions, and the presence
of artifacts, are discussed.
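A hedged sketch of the kind of preprocessing mentioned above (cropping the black border, resizing, and normalising pixel values) is given below. OpenCV is used for convenience; the exact pipeline and thresholds applied in the study are not reproduced here, and the file path is hypothetical.

```python
# Illustrative fundus preprocessing: crop dark margins, resize, normalise.
import cv2
import numpy as np

def preprocess_fundus(path, size=224):
    img = cv2.imread(path)                                    # BGR, uint8
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    ys, xs = np.where(gray > 10)                              # crude mask of the circular fundus region
    img = img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]   # remove black borders
    img = cv2.resize(img, (size, size))                       # common network input size
    return img.astype(np.float32) / 255.0                     # scale pixel values to [0, 1]

# example = preprocess_fundus("eyepacs/10_left.jpeg")         # hypothetical EyePacs file name
```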
CHAPTER 5
DIABETIC RETINOPATHY DIAGNOSIS RESULTS
5.2 Sensitivity and Specificity
The diagnostic performance of the CNN is summarised chiefly through its sensitivity and specificity.
Sensitivity = TruePositives / (TruePositives + FalseNegatives)
A high sensitivity value indicates the model's proficiency in capturing instances of diabetic
retinopathy, minimizing false negatives.
Specificity = TrueNegatives / (TrueNegatives + FalsePositives)
A high specificity value signifies the model's competence in correctly classifying non-diabetic
retinopathy cases, mitigating false positives.
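Both metrics follow directly from the counts above, as the short helper functions below show; the example counts are made up purely for illustration.

```python
# Sensitivity and specificity from raw counts (illustrative values only).
def sensitivity(tp, fn):
    """True positive rate: fraction of DR cases the model actually flags."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: fraction of non-DR eyes correctly cleared."""
    return tn / (tn + fp)

print(sensitivity(tp=900, fn=100))  # 0.9
print(specificity(tn=950, fp=50))   # 0.95
```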
5.3 Validation Set Results
FIG 5.3.2
The validation set, comprising 53,126 images with diverse characteristics, serves as a robust
benchmark for evaluating the CNN's overall performance.
Confusion Matrix:
The confusion matrix delineates the model's predictions against the ground truth, providing a
granular view of classification outcomes.
Predicted 0: True Negatives (ground truth grade 0) / False Negatives (ground truth grades 1-4)
Predicted 1: False Positives (ground truth grade 0) / True Positives (ground truth grades 1-4)
Predicted 2: True Positives
Predicted 3: True Positives
Predicted 4: True Positives
Overall Accuracy:
The overall accuracy assesses the CNN's ability to correctly classify images across all categories.
It is computed as the ratio of correctly classified images to the total number of images.
Accuracy = (TruePositives + TrueNegatives) / TotalImages
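The snippet below shows one straightforward way to obtain the confusion matrix and overall accuracy for the five DR grades; scikit-learn is assumed for convenience, and the label arrays are made-up examples rather than the study's results.

```python
# Confusion matrix and overall accuracy for five DR grades (illustrative data).
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score

y_true = np.array([0, 0, 1, 2, 3, 4, 0, 2])  # hypothetical ground-truth grades
y_pred = np.array([0, 1, 1, 2, 3, 4, 0, 1])  # hypothetical model predictions

cm = confusion_matrix(y_true, y_pred, labels=[0, 1, 2, 3, 4])
acc = accuracy_score(y_true, y_pred)          # correctly classified / total images

print(cm)
print(f"Overall accuracy: {acc:.3f}")
```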
FIG 5.3.3 Confusion Matrix
CHAPTER 6
EYE LATERALITY DETECTION
6.1 CNN Training for Laterality Detection
The training dataset is central to this task: it comprises 8,810 labeled retinal images with an equal distribution of left- and right-eye samples to prevent bias during training.
Data Preprocessing:
Similar to diabetic retinopathy diagnosis, preprocessing involves cropping, resizing, and
normalization. However, during augmentation, images are not flipped to preserve the
directionality of anatomical features.
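The augmentation sketch below reflects this constraint: geometric and photometric perturbations are applied, but no horizontal flipping. TensorFlow/Keras and the specific perturbation ranges are assumptions for illustration.

```python
# Illustrative augmentation for the laterality model: note the deliberate
# absence of RandomFlip, since mirroring would swap left/right labels.
import tensorflow as tf

laterality_augment = tf.keras.Sequential([
    tf.keras.layers.RandomRotation(0.05),   # small rotations only
    tf.keras.layers.RandomZoom(0.1),
    tf.keras.layers.RandomContrast(0.1),
])

# augmented = laterality_augment(batch_of_images, training=True)
```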
6.2 Dataset for Eye Laterality
FIG 6.2.4
The dataset utilized for Eye Laterality Detection is meticulously curated to encompass a wide
array of retinal images, ensuring the model's robustness across diverse scenarios. Images are
sourced from EyePacs, a telemedicine platform dedicated to preventing vision impairment from
diabetic retinopathy.
Retinal images are captured using different variants of fundus cameras, each employing a unique
recording method. This leads to variations in image resolutions, ranging up to 4500 x 3500 pixels,
and diverse camera characteristics. The inclusion of images from various cameras allows the
model to adapt to the intricacies of different imaging devices.
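One simple way to reconcile such heterogeneous resolutions, sketched below, is to pad each frame to a square and resize it to a fixed network input size. This is an assumed preprocessing choice for illustration, not the procedure stated in the study.

```python
# Standardise images from different fundus cameras to one input resolution.
import cv2
import numpy as np

def standardise(img, size=512):
    """Pad a colour image (H x W x 3) to a square, then resize to size x size."""
    h, w = img.shape[:2]
    side = max(h, w)
    canvas = np.zeros((side, side, 3), dtype=img.dtype)   # black square canvas
    top, left = (side - h) // 2, (side - w) // 2
    canvas[top:top + h, left:left + w] = img               # centre the original frame
    return cv2.resize(canvas, (size, size))
```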
To simulate real-world conditions, the dataset includes images captured from different angles
and under various lighting conditions. This variation helps the model learn to identify laterality
features regardless of the orientation or lighting during image acquisition.
Real retinal images often contain noise and artifacts that can complicate the laterality detection
task. The dataset intentionally incorporates images with varying levels of noise and artifacts,
challenging the model to discern relevant patterns amidst potential distractions.
Challenges in Generalization:
Given the diversity in the dataset, the model faces the challenge of generalizing across different
cases, including variations in camera types, resolutions, lighting, and the presence of noise. This
necessitates robust training strategies to avoid overfitting to specific characteristics of the
training data.
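A common safeguard against such overfitting, shown below as an assumed example rather than the study's stated recipe, is to monitor validation loss and halt training once it stops improving.

```python
# Early stopping as one illustrative regularisation strategy (assumed Keras).
import tensorflow as tf

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=5,                  # tolerate a few stagnant epochs before stopping
    restore_best_weights=True,   # roll back to the best validation checkpoint
)

# model.fit(train_ds, validation_data=val_ds, epochs=100, callbacks=[early_stop])
```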
FIG 6.2.5
CHAPTER 7
SIGNIFICANCE AND CONTRIBUTION
7.1 Advancements in Automated Diagnostics
This study significantly contributes to the field of automated diagnostics by pioneering a deep
learning approach, specifically leveraging Convolutional Neural Networks (CNNs). The use of
CNNs, a cutting-edge subset of artificial intelligence, demonstrates a departure from traditional
methods, emphasizing the potential of neural networks in image-based diagnostics.
The application of CNNs enables the real-time diagnosis of diabetic retinopathy (DR) based on
funduscopic retinal images. By harnessing the power of deep learning algorithms, the system
achieves high sensitivity and specificity in detecting various stages of DR. This not only expedites
the diagnostic process but also offers a more efficient and accurate alternative to traditional
screening methods.
7.2 Addressing Critical Challenges
The study directly addresses the critical challenge of time delays in traditional screening for
diabetic retinopathy. By automating the diagnostic process through CNNs, the model facilitates
early detection of DR, a pivotal factor in preventing visual loss. The expedited diagnosis
contributes to timely intervention, potentially mitigating the risk of vision-threatening
complications.
Beyond the clinical aspects, the automated diagnostic system introduced in this study holds the
promise of reducing financial costs associated with traditional screening. The implementation of
an AI-driven approach minimizes the need for extensive human resources and streamlines the
diagnostic workflow. Additionally, by enabling early intervention, the system contributes to a
reduction in the long-term financial burden and the risk of blindness associated with DR.
A noteworthy contribution lies in the incorporation of a second CNN for detecting the laterality of
the eye. This novel aspect sets the system apart, offering a comprehensive diagnostic solution.
The ability to discern between left and right eyes enhances the diagnostic report, providing a
holistic evaluation of a patient's ocular health.
CHAPTER 8
CONCLUSION
By tackling critical challenges in DR screening, this research aligns with the broader trend of
integrating artificial intelligence into healthcare. The success of this deep learning model suggests
a transformative potential for automated diagnostics, promising efficiency and cost-effectiveness.
Looking ahead, the system's integration into clinical practice could redefine DR screening
standards. Ongoing research could explore additional parameters for improved accuracy, ensuring
the model's reliability in diverse clinical settings.
In conclusion, this research marks a significant step towards accessible, timely, and accurate
DR screening, leveraging CNNs to revolutionize medical diagnostics. The combination of
technological innovation and a holistic approach positions this study as a promising advancement
in improving patient outcomes and alleviating the societal burden of diabetic retinopathy.