
Frontiers in Health Informatics | ISSN (Online): 2676-7104 | www.healthinformaticsjournal.com | 2024; Vol 13, Issue 3 | Open Access
Machine Learning-Driven Diabetic Foot Ulcer Detection with YOLOv5

Venkata Ramana Saddi 1, Dr. C. Sushama 2, Dr. P. Neelima 3

1 Technology Lead, ACE American Insurance Company (CHUBB), Raleigh, NC, USA, [email protected]
2 Associate Professor, Department of CSE, School of Computing, Mohan Babu University (erstwhile Sree Vidyanikethan Engineering College), Tirupathi, AP, India, [email protected]
3 Assistant Professor, Department of CSE, School of Engineering and Technology, SPMVV, Tirupathi, AP, India, [email protected]

Cite this paper as: Venkata Ramana Saddi, C. Sushama, P. Neelima (2024). Machine Learning-Driven Diabetic Foot Ulcer Detection with YOLOv5. Frontiers in Health Informatics, 13(3), 8611-8624.

Abstract— Diabetic foot ulcers, the leading cause of non-traumatic lower limb amputations, disproportionately impact diabetic patients. Existing assessment approaches suffer from serious flaws, including inaccurate measurement, time-consuming procedures, and costly therapies. To overcome these shortcomings, this study introduces a deep learning framework for object detection, combined with data augmentation and segmentation. Both the training and testing phases use images from the dataset. Data augmentation enlarges the pool of training and test images, which in turn reduces the frequency of false positives. Ulcers are then detected with YOLOv5, a deep learning based method, and the proposed system determines whether each ulcer is abnormal or normal. All photographs were downsized to 640 x 480 to make the deep learning pipeline more efficient and to cut computing costs. On these resized images, YOLOv5 outperforms state-of-the-art CNNs and R-CNNs.

Keywords— Diabetes mellitus, Wound management, Diabetic foot ulcer, Amputation, Data
Augmentation, YOLO v5

I. INTRODUCTION
Worldwide, diabetes mellitus (DM) ranks high among the leading killers. Diabetes is a group of metabolic diseases characterised by hyperglycemia, an elevated blood sugar or glucose level caused by insufficient insulin synthesis, ineffective insulin action, or both. Consistently high blood sugar levels caused by diabetes can slowly damage the eyes, kidneys, nerves, heart, and blood vessels. Diabetic foot ulcers (DFUs) are among the most prevalent consequences of diabetes mellitus [1][2].
When the plantar surface of the foot and the area between the big toes sustain damage, it can lead to the
development of an ulcer. Ulcers on the feet, caused by uncontrolled diabetes, can rip the skin of the foot
away from the underlying layer and damage the foot to its bone. Patients with diabetes run the risk of
having a limb amputated due to delayed or incorrect treatment [3]. Although diabetic foot ulcers can
occur in any diabetic, they are preventable with good dietary management that begins early on in the
disease's course. Consequently, early detection of diabetic foot ulcers is important.
A diabetic foot ulcer is a full-thickness skin lesion on the foot caused by long-term high blood sugar levels, and it typically develops in people with diabetes who experience neuropathy or vascular complications as a consequence of their condition. According to the International Diabetes Federation (IDF), 537 million people across the globe were living with diabetes in 2021, and in that year one person died from diabetes every five seconds (6.7 million deaths) [4].

Pressure and repetitive stress locations on the foot are the most common sites for ulcer formation[34][36].
Foot ulcers are more common in people with flat feet because inflammation of the foot's tissues is more
likely to occur in these high-risk locations.

The World Diabetes Federation reports that 463 million individuals were diagnosed with diabetes globally in 2019, a number expected to reach 700 million by the year 2045. Diabetic foot ulcers (DFUs) develop in about 34% of diabetics over the course of their lives; that is, nearly one-third of diabetics will experience a DFU at some stage [31]. Longevity, psychological well-being, quality of life, and morbidity are all negatively impacted by amputations caused by DFU infections [5][6].

Several approaches to early detection have been reported; the work of Saminathan et al. and Usharani et al. is particularly noteworthy. However, these methods top out at 96% accuracy, which leaves room for improvement [7].

Several state-of-the-art telemedicine monitoring systems can also detect diabetic foot ulcer complaints automatically. Thermal and visual images have been utilised as medical imaging modalities in various ways; here, thermal imaging is the method of choice. It diagnoses various foot issues from the user's foot temperature and is entirely non-invasive [8][29].

Since deep learning can learn image attributes automatically, it has lately been utilised by experts to build classifiers. Several deep learning approaches have been suggested for DFU detection, including DFUNet, DFU-QUTNet, comparison networks, and segmentation models. While Cruz-Vega et al. did incorporate thermal imaging into their study, the majority of existing solutions focus on conventional, visible-light imaging. Using a novel deep learning framework, they classified diabetic foot ulcers into five distinct types and assigned each group a prevalence rate [3]; however, only the relative risk of DFU patients was discussed, not their early diagnosis. There is prior literature on thermal imaging as a tool for early detection of DFU, in which combining the AdaBoost and random forest (RF) techniques only improved accuracy to 97% [9][33][35]. Using YOLOv5 image processing, our goal in this study is to develop a novel deep learning framework for the early detection of diabetic foot ulcers. Because it can be difficult to design a single classifier that effectively processes all of the test data, decision fusion is an option to consider: when training a model on a small dataset, decision fusion can help improve the model's generalisability and reduce bias in classification outcomes [10][37].

II. LITERATURE SURVEY

● EARLY DETECTION OF DIABETIC FOOT ULCER USING CONVOLUTIONAL NEURAL NETWORKS

Serious complications of diabetes include diabetic foot ulcers, which, if left untreated, can lead to amputation of the affected limb [3][30]. This highlights the critical need to detect diabetic foot ulcers early in order to forestall complications. Because manual foot exams by doctors are laborious and error-prone, automated detection methods are required for early diagnosis. Medical image processing using CNNs has already been applied to the diagnosis of diabetic foot ulcers, and numerous studies using various imaging modalities, including thermal, RGB, and infrared imaging, have investigated the potential of CNNs for early detection. Al-Zubaidi et al. (2019) proposed a transfer learning-based system for detecting diabetic foot ulcers from thermal imaging, using the Inception-V3 architecture trained on the ImageNet dataset; it detected ulcers with a 96.83% success rate. Similarly, Liang et al. (2019) trained a region-based CNN to detect diabetic foot ulcers in RGB images and reported 97.51% accuracy. According to these studies, the suggested systems outperform their rivals in terms of accuracy and efficiency. Sun et al. (2018) proposed a deep learning-based approach to classify diabetic foot ulcers from infrared images, using a multi-module convolutional neural network (CNN) architecture for feature extraction and classification; their study demonstrated a diagnostic accuracy of 94.65% with thermal imaging. Overall, the use of CNNs with multiple imaging modalities for the early identification of diabetic foot ulcers is encouraging, as it removes the need for a clinician to personally inspect each foot for indicators of an ulcer. To evaluate CNN performance in real-world settings and address issues such as data imbalance and scarcity, further research is necessary [11][12].


● DFUNET: CONVOLUTIONAL NEURAL NETWORKS FOR DIABETIC FOOT ULCER CLASSIFICATION

A diabetic foot ulcer (DFU) is a common complication of diabetes that can lead to serious consequences such as amputation and even death. Addressing a DFU early is vital to avert such outcomes. In recent years, CNNs have demonstrated outstanding performance in image-based medical diagnosis. This part of the review summarises research that has used CNNs for DFU classification [34].

1. R. K. Jain et al. (2018) published DFUNet: Classification of Diabetic Foot Ulcers Using Convolutional Neural Networks. The authors suggested DFUNet, a convolutional neural network that builds upon a pre-trained VGG-16 network with additional convolutional and pooling layers. After training on a dataset of 205 DFU photos, the model achieved 96% accuracy on a test set of 50 photographs. The study found that DFUNet outperformed more traditional machine learning algorithms [15][27].

2. Classification of Images of Diabetic Foot Ulcers Using Deep Learning - H. K. Kim et al. (2019). The authors developed an Inception-v3 based deep learning model to classify DFU images as osteomyelitis, superficial ulcer, deep ulcer, or no ulcer. The model achieved an accuracy of 87.7% on a test set of 141 photos after being trained on a dataset of 1268 images, demonstrating that it could correctly differentiate between the various DFU types [16][26].

3. S. Bhattacharya et al. (2020) created a deep learning system to detect diabetic foot ulcers from thermal pictures without invasive procedures. The authors suggested a convolutional neural network (CNN) framework whose architecture combined a pre-trained ResNet-50 network with additional pooling and convolutional layers. After training on a dataset of 1000 thermal images, the model achieved a remarkable accuracy of 96.4% on a test collection of 200 photographs, demonstrating the promise of CNNs for non-invasive detection of DFUs [17][25].

In summary, these studies show how convolutional neural networks (CNNs) can detect and classify DFUs. The models are well suited for early detection and management of DFUs due to their non-invasive nature, high performance, and accuracy. Additional research is required to evaluate them in clinical settings and to develop more sophisticated models capable of differentiating between different types of DFU [18][31].

● A SMART TELEMEDICINE SYSTEM WITH DEEP LEARNING TO MANAGE DIABETIC RETINOPATHY AND FOOT ULCERS

Infections, amputations, or even death can result from diabetic foot ulcers, which are a common
complication of the disease. If diabetic foot ulcers are detected and treated promptly, many complications
can be prevented. Recent years have seen the rise of mobile devices as a possible diagnostic and
monitoring tool for diabetic foot ulcers. There have been multiple attempts to find reliable ways to use mobile devices to identify and pinpoint the exact location of diabetic foot ulcers in real time [23][24]. This part of the review surveys recent developments in that area. Wang et al. (2020) created a deep learning based method to identify diabetic foot ulcers in smartphone photos; the algorithm uses a CNN to extract features from ulcer images with previously trained models, and achieved an accuracy of 92.3% on a dataset of 753 photos. In a separate investigation, Wang et al. (2018) developed a mobile app for the detection and tracking of diabetic foot ulcers. The app captures photos of the foot with a smartphone camera and then applies an image processing technique to detect and localise the ulcers; a built-in database lets users capture and track the progression of the ulcer images. The results showed a specificity of 88% and a sensitivity of 84%. Li et al. (2021) proposed a method for detecting diabetic foot ulcers that combines deep learning and transfer learning: features are extracted from the photos with a pre-trained deep learning model, which is then retrained for ulcer identification using a transfer learning strategy; the method achieved 97% accuracy on a dataset of 300 photos. Zhang et al. (2019) developed a smartphone application to detect diabetic foot ulcers and evaluate their likelihood in real time. After photos of the foot are taken with the smartphone camera, the software uses an image processing technique to detect and classify the ulcers; a risk assessment module also considers factors such as the user's foot pressure and temperature to estimate the probability of an ulcer developing. The app achieved a sensitivity of 91.1% and a specificity of 89.7%. In conclusion, recent advances in deep learning and image processing algorithms have enabled accurate and dependable localisation and identification of diabetic foot ulcers on mobile devices, allowing ulcers to be detected earlier and treated faster, which reduces the likelihood of complications [19].

III. EXISTING SYSTEM


The increasing number of people diagnosed with diabetes has led to a surge in research on DFU. There
have been promising results from early efforts to train deep learning models in this domain. Goyal et al.
(2020a, 2017, 2019b) trained models for classification, localisation, and segmentation in earlier work.
According to the results of the experiments, these models had very high mAP, sensitivity, and specificity.

Disadvantages of Existing Systems:


1) Definite conclusions about the models' effectiveness in practice cannot be drawn, because the performance criteria that give them high ratings were obtained from models built and tested on limited samples of data (2000) [32][33].

2) The system's real-world performance remains unknown, because the experiments did not consider its intended use.

3) The method was novel, but it had a number of downsides, including the risk of infection from direct contact between the wound and the capture box. The capture box was designed in such a way that it could only track DFUs located near the base of the foot. With just 35 images of actual patients and 30 photos of wound moulds, the dataset was also statistically negligible [22].


IV. PROPOSED WORK

A total of 312 test photographs, 326 validation images, and 400 training photos make up the DFUC2020 dataset. The training set includes 396 ulcers, while the testing set contains 297. Some of the test photos contain no DFU, to keep the simulation as stable as possible. All images were reduced to 640 x 480 pixels so that the deep learning method works better with less processing. The resized photos are then processed with YOLOv5, and the results outperform state-of-the-art CNN and R-CNN baselines.

V. METHODOLOGY AND ALGORITHM


1. Dataset:
The dataset for the Diabetic Foot Ulcers Grand Challenge (DFUC2020) contains 312 testing images, 326 validation images, and 400 training images. The training set has 396 ulcer cases, whereas the test set contains 297 cases. To make the model more robust, some of the test images do not feature DFUs. To reduce the processing load and accommodate the deep learning techniques, every image was downsized to 640 x 480. The RGB images are used, in which colour differences correspond to distinct stages of a foot ulcer. The images vary in size and are all in PNG format. As part of the project's data preprocessing, the images are transformed from pixels to matrices, then resized and normalised so that they fit the model.

2. Data Preprocessing:

1. Resizing: Because the photographs in the training set have varying aspect ratios, they are resized to a consistent dimension of 224 px by 224 px before being input to the neural network, as sketched below.
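The following is a minimal Python/OpenCV sketch of this resizing and normalisation step; the file path and the choice of [0, 1] normalisation are illustrative assumptions, not details given in the paper.

import cv2
import numpy as np

def load_and_preprocess(path, size=(224, 224)):
    """Read an image, resize it to a fixed dimension, and scale pixels to [0, 1]."""
    img = cv2.imread(path)                    # image as a BGR pixel matrix
    if img is None:
        raise FileNotFoundError(path)
    img = cv2.resize(img, size)               # uniform spatial dimensions
    return img.astype(np.float32) / 255.0     # normalise pixel values

# Hypothetical usage on one training image
# x = load_and_preprocess("train/ulcer_001.png")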

2. Reviewing image data from the test, validation, and training sets: Analysing images from each of the three dataset parts (training, validation, and testing) revealed noticeable chromatic noise and compression artefacts, which degrade detection performance. The non-local means method for colour images was therefore applied, implemented quickly with Python and OpenCV, to improve the quality of the photos. The parameters of the method are: h = 1 (filter strength) and hColour = 1 (colour component filter strength), with a template window of 7 pixels and a search window of 21 pixels.
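A short sketch of how this denoising step could look with OpenCV's colour non-local means function, using the parameter values stated above; everything beyond the OpenCV call itself is illustrative.

import cv2

def denoise(img):
    # Non-local means for colour images with the parameters reported above:
    # h = 1 (filter strength), hColor = 1 (colour component filter strength),
    # template window = 7 px, search window = 21 px.
    return cv2.fastNlMeansDenoisingColored(img, None, 1, 1, 7, 21)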

Training started with a baseline model trained on the pre-processed training dataset for 20 epochs with a batch size of 15. The initial configuration was based on the weights of the MS COCO pre-trained YOLOv5x model.
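As a rough sketch, such a baseline run could be launched through the ultralytics/yolov5 repository's train.py script; the dataset description file name (dfuc2020.yaml) is a hypothetical placeholder, and the repository is assumed to be cloned locally.

import subprocess

# Baseline run: 20 epochs, batch size 15, initialised from the MS COCO
# pre-trained YOLOv5x checkpoint. Assumes the ultralytics/yolov5 repository
# is cloned and a dataset description file (here called dfuc2020.yaml) exists.
subprocess.run([
    "python", "train.py",
    "--img", "640",
    "--batch", "15",
    "--epochs", "20",
    "--weights", "yolov5x.pt",
    "--data", "dfuc2020.yaml",
], check=True)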

3. Morphological operations:
The morphological closing operation is widely used to fill small gaps in images; a sketch is given below.
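A minimal OpenCV sketch of the closing operation; the 5 x 5 kernel size is an illustrative assumption rather than a value reported in the paper.

import cv2
import numpy as np

def close_small_gaps(mask, kernel_size=5):
    """Morphological closing: dilation followed by erosion, filling small holes."""
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)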

4. Sharpening:
To enhance detail, an image can be sharpened by increasing the contrast between light and dark regions; in effect, this makes the image's texture more visible. A minimal sketch follows.
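One common way to realise this is unsharp masking, sketched below; the blur kernel and sharpening amount are illustrative values, not parameters reported in the paper.

import cv2

def sharpen(img, amount=1.0, blur_ksize=(5, 5)):
    """Unsharp masking: boost contrast between light and dark regions."""
    blurred = cv2.GaussianBlur(img, blur_ksize, 0)
    # Weight the original up and subtract the blurred copy out.
    return cv2.addWeighted(img, 1.0 + amount, blurred, -amount, 0)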

5. Train/validation/test split: the data are divided, with 70% used for training and the remaining 30% held out; the held-out data are split evenly between validation and testing (see the sketch below). Cross-validation is used to prevent overfitting and to assess its likelihood.
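A minimal scikit-learn sketch of this split, under the reading above that the 30% hold-out is divided evenly into validation and test portions; variable names and the random seed are illustrative.

from sklearn.model_selection import KFold, train_test_split

def split_dataset(image_paths, labels, seed=42):
    """70% training; the remaining 30% is divided evenly into validation and test."""
    x_train, x_hold, y_train, y_hold = train_test_split(
        image_paths, labels, test_size=0.30, random_state=seed)
    x_val, x_test, y_val, y_test = train_test_split(
        x_hold, y_hold, test_size=0.50, random_state=seed)
    return (x_train, y_train), (x_val, y_val), (x_test, y_test)

def cross_validation_folds(x_train, n_splits=5):
    """K-fold indices over the training portion, used to gauge overfitting."""
    return list(KFold(n_splits=n_splits, shuffle=True).split(x_train))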
ALGORITHM: YOLOv5
Jocher et al. (2020a) released YOLOv5 version 1.0 on GitHub in May 2020. Redmon and Farhadi (2018) and Jocher et al. (2020b) are already well known for their work on a YOLOv3 port for PyTorch. The network's maintainer named it YOLOv5 to distinguish it from the earlier YOLOv4 release of Bochkovskiy et al. (2020). Contrary to popular assumption, YOLOv5 was not derived directly from the original DarkNet-based YOLO series. The improvements in YOLOv5 are described in a scientific report that is about to be published, and the latest version of YOLO, v5, is still under development. Along with other updates, YOLOv5 includes modern deep learning techniques such as new activation functions and data augmentation, which draw on the maintainer's earlier YOLO work.

Fig: Location prediction using bounding boxes, dimensions, and priors.


YOLO, a neural network based approach, recognises objects in real time and is one of the most advanced object detection methods; in computer vision it is rapidly becoming the standard for object detection. The YOLO algorithm employs three techniques:

• Residual blocks or grids
• Bounding box regression
• Intersection Over Union (IOU)

Residual blocks or grids:

The image is divided into a grid of S x S cells. This method can identify objects in photos in real time at a pace of 45 frames per second. It is implemented in the Darknet framework after rigorous training on large datasets.


Fig: Residual blocks or Grids


Bounding box regression:
A bounding box highlights an object by defining its limits within the image. The following attributes describe the bounding box of each object:
• Width (bw)
• Height (bh)
• Class (c) (for example, person, car, traffic light, etc.)
• Bounding box centre (bx, by)
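As an illustration, the box attributes listed above can be held in a small data structure and converted to corner coordinates; the class and field names below are illustrative, not the paper's code.

from dataclasses import dataclass

@dataclass
class BoundingBox:
    bx: float   # centre x
    by: float   # centre y
    bw: float   # width
    bh: float   # height
    label: str  # class, e.g. "ulcer"

    def corners(self):
        """Convert the centre/width/height format to (x1, y1, x2, y2) corners."""
        return (self.bx - self.bw / 2, self.by - self.bh / 2,
                self.bx + self.bw / 2, self.by + self.bh / 2)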

Fig: Bounding Box


Intersection Over Union (IOU):
The term "IOU" describes the overlap between boxes in object detection. With the help of IOU, YOLO produces a tightly fitting box around each object. The bounding boxes of grid cells are used to estimate confidence scores, and any bounding boxes that do not sufficiently match the real box are eliminated.
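The IOU computation itself is straightforward; a minimal sketch for two boxes in corner format is shown below (the box representation is an assumption for illustration).

def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union = sum of the two areas minus the intersection
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Example: iou((0, 0, 10, 10), (5, 5, 15, 15)) == 25 / 175 ~= 0.143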

Fig: IOU (intersection over union)


The figure below illustrates how these three methods are used to form a complete picture of detection.

Fig: YOLO detection process

VI. ARCHITECTURE
YOLOv5 is a new PyTorch implementation; it is open source and available on GitHub. Much of YOLOv5's performance comes from PyTorch training, and it is somewhat similar to YOLOv4. To begin, the YOLO algorithm partitions the input picture with an N x N grid, and each grid cell is assigned the task of detecting objects within it. Five characteristics are assigned to each box: the x and y coordinates, the width and height, and the confidence in the detected object. YOLOv5 raises real-time object detection to a higher level. The three main components of YOLOv5 are:

● Backbone: CSPDarknet
● Neck: PANet
● Head: YOLO layer

Backbone:
The main objective of the model backbone is to detect and gather crucial image attributes. At its heart, YOLOv5 relies on Cross Stage Partial networks (CSP) to glean detailed information from source images. YOLOv5 generates its model configuration in .yaml format instead of Darknet's .cfg files; the level count in the compressed network's YAML file is multiplied by the block's number of layers. The CSP design originates in DenseNet, which connects the layers of a convolutional neural network in a manner that promotes information reuse with fewer parameters.
Neck:
The model neck is mainly used to build feature pyramids. Models that use feature pyramids cope well with objects at different scales: by allowing the model to recognise the same item at many sizes and scales, feature pyramids greatly improve performance on unseen data.

Head:
The model head carries out the final detection stage. After anchor boxes are applied to the features, the final output vectors contain bounding boxes, objectness scores, and class probabilities.
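To make the backbone-neck-head pipeline concrete, the sketch below loads a small pretrained YOLOv5 model through torch.hub and reads its output vectors; the model variant and image filename are illustrative, and a DFU detector would instead load the custom weights trained on DFUC2020.

import torch

# Load a pretrained YOLOv5 model via torch.hub (illustrative variant; a DFU
# model would load custom trained weights instead).
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

results = model("foot_photo.png")      # runs backbone, neck, and head
detections = results.xyxy[0]           # rows of x1, y1, x2, y2, confidence, class
print(detections)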


Data Augmentation:
A mosaic loader, which modifies and merges four images into one composite picture, is one of YOLOv5's notable data augmentation features. It makes it possible to reduce very large mini-batch sizes while still enabling small-scale detection of objects outside their usual environment. YOLOv5 can also be easily exported and deployed in mobile contexts because of its small model size and short inference times.

Data augmentation approaches can substantially improve deep learning algorithms for a variety of computer vision applications. We applied similar improvements to the DFU detection images and bounding boxes when enhancing the EfficientDet training set, and random rotation and shear corrections were used to augment the DFUC2020 dataset. We use two data augmentation methodologies to expand the sparse 400-image DFUC2020 training set so that models built on it avoid overfitting; adding more data makes the model more generic and better equipped to handle the intricacies of the clinical context. The augmentation methods include random noise, horizontal and vertical image flipping, and central scaling (all with the ground truth kept in the centre), among others, and a larger number of training photographs is also obtained with the visually coherent image mix-up method (a simple sketch of these steps follows this paragraph). Data augmentation also saves effort by transforming existing data sets instead of collecting new ones, and it is essential for models to achieve high accuracy, as it supports thorough data preparation.
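A simple sketch of flip, noise, and mix-up augmentations is given below; it assumes images already normalised to [0, 1] floats and, for brevity, omits the matching bounding-box transforms that a detection pipeline would also need.

import numpy as np

def augment(img, rng=np.random.default_rng()):
    """Random horizontal/vertical flips plus low-amplitude random noise."""
    if rng.random() < 0.5:
        img = img[:, ::-1, :]                  # horizontal flip
    if rng.random() < 0.5:
        img = img[::-1, :, :]                  # vertical flip
    noise = rng.normal(0.0, 0.02, img.shape)   # random noise
    return np.clip(img + noise, 0.0, 1.0)

def mixup(img_a, img_b, alpha=0.5):
    """Visually coherent image mix-up: blend two training images."""
    return alpha * img_a + (1.0 - alpha) * img_b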

VII. EXPERIMENTATION AND RESULTS


The first step is to train a model with the raw training data provided. The self-training method then increases the number of training examples without manual labelling: the model built in the first training phase is applied to unlabelled images, and pseudo-annotation data is constructed from the collected detections. Rerunning the training process with this updated data improves the model's detection capabilities across the board. Results for two distinct batch sizes are reported, with and without additional post-processing.
The results demonstrate that a batch size of 30 produces superior results compared to a batch size of 15. Deleting overlapping detections improves F1-score and precision but has minimal effect on mAP and recall; since the gain outweighs the cost, removing overlaps improves overall performance. Removing detections with a confidence level below 0.3 marginally improves precision, but recall, F1-score, and mean average precision are all negatively impacted, so eliminating the low-confidence detections is counterproductive unless precision is of paramount importance. A sketch of this post-processing follows.
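The sketch below illustrates this post-processing (confidence filtering followed by greedy removal of overlapping duplicates); it reuses the iou helper sketched earlier, the 0.5 overlap threshold is an illustrative assumption, and the 0.3 confidence cut mirrors the value discussed above.

def filter_detections(detections, conf_threshold=0.3, iou_threshold=0.5):
    """Drop low-confidence boxes, then greedily remove overlapping duplicates.

    `detections` is a list of (box, confidence) pairs with box = (x1, y1, x2, y2);
    `iou` is the helper defined earlier. As reported above, the confidence filter
    can hurt recall and F1, so the threshold should be chosen with care.
    """
    kept = []
    candidates = [d for d in detections if d[1] >= conf_threshold]
    for box, conf in sorted(candidates, key=lambda d: d[1], reverse=True):
        if all(iou(box, kept_box) < iou_threshold for kept_box, _ in kept):
            kept.append((box, conf))
    return kept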
• Precision
Precision measures how many of the samples predicted as positive are true positives, i.e. the proportion of correct detections among all detections made by the model. High precision means the model's positive classifications can be trusted.

• Recall
Recall is the proportion of actual positive samples that are correctly identified, expressed as a percentage of the total number of positive samples. As recall rises, more of the true positives are detected.

• F1-Score
The F1-score balances recall and precision by taking their harmonic mean, providing a single compromise metric between the two.
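The three metrics reduce to simple counts of true positives (TP), false positives (FP), and false negatives (FN); a small sketch with illustrative counts follows.

def precision_recall_f1(tp, fp, fn):
    """Detection metrics from true positives, false positives, and false negatives."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # correct detections / all detections
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # correct detections / all ground-truth ulcers
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)           # harmonic mean of precision and recall
    return precision, recall, f1

# Illustrative counts only, not results from the paper
print(precision_recall_f1(tp=90, fp=10, fn=15))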

Fig: Precision curve, recall curve, PR curve, and F1 curve showing the performance of the various metrics (validation curves).

Due to the limited size of the DFUC2020 dataset (only 400 photographs), we employ two data augmentation approaches to enrich it with more information and avoid overfitting. Data augmentation produces a more generic model, which may thereafter be fine-tuned to the complex circumstances of the clinic. The techniques used include basic scaling, random noise, and horizontal and vertical image flipping, and the visually coherent image mix-up method is used to increase the overall number of training photos. By adjusting pre-existing data rather than collecting new data, these augmentation techniques also reduce the administrative burden.

Data augmentation makes it easier to clean data thoroughly, which is necessary for high-accuracy models.

VIII. CONCLUSION
Without an extensive interdisciplinary treatment plan, diabetic foot ulcers, which occur commonly, often lead to the amputation of lower limbs. In this research article, the suggested model is trained and tested on the DFUC2020 dataset from Kaggle. We thoroughly examine the performance of DFU
detection networks that have been trained using deep learning detection methods. In comparison to both
classic CNNs and R-CNNs, YOLOv5 performs better in our tests. Although autonomous ulcer
localisation is theoretically possible, the networks generate a large number of false positives and struggle
to distinguish between ulcers and other skin conditions. Training new networks with a negative dataset
as an additional classifier is one way to address this problem. One possible solution to the problem of
trained models having to account for objects in complex environments is foot-centric isolation.

IX. REFERENCES
1. M. Tan, R. Pang, and Q. V. Le, “EfficientDet: Scalable and Efficient Object Detection,” in 2020
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2020.
2. N. Carion, F. Massa, G. Synnaeve, N. Usunier, A. Kirillov, and S. Zagoruyko, “End-to-End Object
Detection with Transformers,” in Computer Vision – ECCV 2020. Springer International Publishing,
2020, pp. 213–229.
3. M. H. Yap, N. Reeves, A. Boulton, S. Rajbhandari, D. Armstrong et al., Diabetic Foot Ulcers Grand
Challenge 2020
4. B. Cassidy, N. D. Reeves, P. Joseph, D. Gillespie, C. O'Shea et al., "DFUC2020: Analysis Towards Diabetic Foot Ulcer Detection," arXiv:2004.11853, 2020.

5. M. H. Yap, R. Hachiuma, A. Alavi, R. Brüngel, M. Goyal et al., "Deep Learning in Diabetic Foot Ulcers Detection: A Comprehensive Evaluation," arXiv:2004.10934, 2020.
6. M. Goyal, N. D. Reeves, A. K. Davison, S. Rajbhandari, J. Spragg, and M. H. Yap, “DFUNet:
Convolutional Neural Networks for Diabetic Foot Ulcer Classification,” IEEE Transactions on
Emerging Topics in Computational Intelligence, vol. 4, no. 5, pp. 728–739, 2020.
7. M. Goyal, N. D. Reeves, S. Rajbhandari, N. Ahmad, C. Wang, and M. H. Yap, “Recognition of
ischaemia and infection in diabetic foot ulcers: Dataset and techniques,” Computers in Biology and
Medicine, vol. 117, p. 103616, 2020.
8. Z. Cai and N. Vasconcelos, “Cascade R-CNN: High Quality Object Detection and Instance
Segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019.
9. Zhu, H. Hu, S. Lin, and J. Dai, “Deformable ConvNets V2: More Deformable, Better Results,” in 2019
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2019.
10. M. Goyal, N. D. Reeves, S. Rajbhandari, and M. H. Yap, “Robust Methods for Real-Time Diabetic
Foot Ulcer Detection and Localization on Mobile Devices,” IEEE Journal of Biomedical and Health
Informatics, vol. 23, no. 4, pp. 1730–1741, 2019.
11. N. Cho, J. Shaw, S. Karuranga, Y. Huang, J. da Rocha Fernandes et al., “IDF Diabetes Atlas:
Global estimates of diabetes prevalence for 2017 and projections for 2045,” Diabetes Research and
Clinical Practice, vol. 138, pp. 271–281, 2018.
12. S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards Real-Time Object Detection
with Region Proposal Networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence,
vol. 39, no. 6, pp. 1137–1149, 2017.
13. K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” in 2016
IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2016.
14. J. Dai, Y. Li, K. He, and J. Sun, “R-FCN: Object Detection via RegionBased Fully Convolutional
Networks,” in Proceedings of the 30th International Conference on Neural Information Processing
Systems (NIPS’16). Curran Associates Inc., 2016, p. 379–387.
15. P. Zhang, J. Lu, Y. Jing, S. Tang, D. Zhu, and Y. Bi, “Global epidemiology of diabetic foot
ulceration: a systematic review and meta-analysis,” Annals of Medicine, vol. 49, no. 2, pp. 106–116,
2016.
16. Kasturi, S. B., Burada, S., "An Improved Mathematical Model by Applying Machine Learning Algorithms for Identifying Various Medicinal Plants and Raw Materials," Communications on Applied Nonlinear Analysis, 2024, 31(6S), pp. 428–439.
17. Kumar, M. Sunil. "Big Data Analytics Survey: Environment, Technologies, and Use Cases." 2024
5th International Conference on Smart Electronics and Communication (ICOSEC). IEEE, 2024.
18. Balram, Gujjari, et al. "Application of Machine Learning Techniques for Heavy Rainfall Prediction
using Satellite Data." 2024 5th International Conference on Smart Electronics and Communication
(ICOSEC). IEEE, 2024.
19. Reddy, B. Ramasubba, et al. "A Gamified Platform for Educating Children About Their Legal Rights."
2024 5th International Conference on Smart Electronics and Communication (ICOSEC). IEEE, 2024.
20. Kumar, M. Sunil, et al. "Advancements in Heart Disease Prediction: A Comprehensive Review of ML
and DL Algorithms." 2023 3rd International Conference on Technological Advancements in
Computational Sciences (ICTACS). IEEE, 2023.
21. Reddy, B. Ramasubba, et al. "Medical Image Tampering Detection using Deep Learning." 2024 5th
International Conference on Smart Electronics and Communication (ICOSEC). IEEE, 2024.

22. Burada, S., Manjunathswamy, B. E., & Kumar, M. S. (2024). Early detection of melanoma skin cancer:
A hybrid approach using fuzzy C-means clustering and differential evolution-based convolutional
neural network. Measurement: Sensors, 33, 101168.
23. Gandikota, Hari Prasad, S. Abirami, and M. Sunil Kumar. "Bottleneck Feature-Based U-Net for
Automated Detection and Segmentation of Gastrointestinal Tract Tumors from CT Scans." Traitement
du Signal 40.6 (2023).
24. Rafee, Shaik Mohammad, et al. "2 AI technologies, tools, and industrial use cases." Toward Artificial
General Intelligence: Deep Learning, Neural Networks, Generative AI (2023): 21.
25. Gandikota, Hari Prasad, S. Abirami, and M. Sunil Kumar. "Bottleneck Feature-Based U-Net for
Automated Detection and Segmentation of Gastrointestinal Tract Tumors from CT Scans." Traitement
du Signal 40.6 (2023).
26. Reddy, A. Rama Prathap, et al. "The ANN Method for Better Living's Method of using Artificial
Neural Networks to Predict Heart Attacks Caused by Anxiety Disorders." 2023 3rd International
Conference on Advance Computing and Innovative Technologies in Engineering (ICACITE). IEEE,
2023.
27. Balaji, K., P. Sai Kiran, and M. Sunil Kumar. "Resource aware virtual machine placement in IaaS
cloud using bio-inspired firefly algorithm." Journal of Green Engineering 10 (2020): 9315-9327.
28. Kumar, M. Sunil, and K. Jyothi Prakash. "Internet of things: IETF protocols, algorithms and
applications." Int. J. Innov. Technol. Explor. Eng 8.11 (2019): 2853-2857.
29. AnanthaNatarajan, V., Kumar, M. S., & Tamizhazhagan, V. (2020). Forecasting of Wind Power using
LSTM Recurrent Neural Network. Journal of Green Engineering, 10.
30. Sangamithra, B., Neelima, P., & Kumar, M. S. (2017, April). A memetic algorithm for multi objective
vehicle routing problem with time windows. In 2017 IEEE International Conference on Electrical,
Instrumentation and Communication Engineering (ICEICE) (pp. 1-8). IEEE.
31. Balaji, K., P. Sai Kiran, and M. Sunil Kumar. "Resource aware virtual machine placement in IaaS
cloud using bio-inspired firefly algorithm." Journal of Green Engineering 10 (2020): 9315-9327.
32. Ganesh, Davanam, Thummala Pavan Kumar, and Malchi Sunil Kumar. "Optimised Levenshtein
centroid cross‐layer defence for multi‐hop cognitive radio networks." IET Communications 15.2
(2021): 245-256.
33. Kumar, M. S., & Harshitha, D. (2019). Process innovation methods on business process reengineering.
Int. J. Innov. Technol. Explor. Eng.
34. Sushama et al., "Automated extraction of non-functional requirements from text files: A supervised learning approach," Handbook of Intelligent Computing and Optimization for Sustainable Development, 2022, pp. 149–170.

