A Review on Deep Learning for Precision Agriculture Plant Disease Detection and Classification
Dr.M.SANGEETHA
Professor,
Department of Information Technology,
K.S.Rangasamy College of Technology,
Tiruchengode, Tamilnadu.
[email protected]
ABSTRACT
This review highlights the significance of early detection of crop diseases and the ability of machine learning (ML) and deep learning (DL) methods to automate this process. It presents a number of effective image-based plant disease detection and classification systems that make use of convolutional neural networks (CNN), the PlantVillage dataset, shallow CNN with kernel SVM, and hybrid models such as PLDPNet. These systems,
trained on extensive image datasets, show high accuracy rates in identifying diseases in crops such as
grapes, mangoes, rice, olives, potatoes, and tomatoes. The paper emphasizes the potential challenges
and solutions in implementing these automated systems, providing insightful information to agricultural
researchers and practitioners to improve crop disease management using cutting-edge AI technologies.
The findings show significant improvements in disease detection accuracy, demonstrating the practicality
and effectiveness of integrating ML and DL for agricultural applications.
Keywords: Precision Agriculture, Deep Learning, Plant Disease Detection, Convolutional Neural
Networks, Transfer Learning, Hybrid Framework, Support Vector Machine, Machine Learning
O_{h_i} = \frac{n_h - f_h + 2p}{s} + 1    --- (4)

O_{w_i} = \frac{n_w - f_w + 2p}{s} + 1    --- (5)
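As a worked check of Eqs. (4) and (5), the sketch below computes the output height and width of a convolution layer; the concrete input size, filter size, padding and stride are illustrative VGG-style values and are not taken from the text.

# Worked check of Eqs. (4) and (5): spatial output size of a convolution layer.
# The values below (224x224 input, 3x3 filter, padding 1, stride 1) are
# illustrative assumptions, not settings given in the paper.
def conv_output_size(n, f, p, s):
    """Applies (n - f + 2p) / s + 1 to one spatial dimension."""
    return (n - f + 2 * p) // s + 1

n_h = n_w = 224   # input height and width
f_h = f_w = 3     # filter height and width
p, s = 1, 1       # padding and stride

print(conv_output_size(n_h, f_h, p, s))  # Eq. (4): 224
print(conv_output_size(n_w, f_w, p, s))  # Eq. (5): 224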
3.4 Framework overview

The process begins by labelling and inputting plant leaf images into a shallow CNN derived from the pre-trained VGG-16 model through transfer learning, leveraging features from the ImageNet dataset. This approach ensures efficient feature extraction while conserving computational resources. The extracted embeddings undergo dimensionality reduction using PCA to retain 99% variance, reducing calculation costs and mitigating overfitting risks. Finally, the embeddings are processed by classical classifiers like Kernel SVM and Random Forest, termed SCNN-KSVM and SCNN-RF respectively, to evaluate the shallow CNN's performance across different classifiers [4].
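A minimal sketch of the two classical stages just described, SCNN-KSVM and SCNN-RF (PCA retaining 99% of the variance followed by a kernel SVM or a Random Forest), assuming the shallow-CNN embeddings are already available. The random arrays, shapes and hyperparameters below are illustrative assumptions, not values from [4].

# Sketch of the SCNN-KSVM / SCNN-RF stages: PCA keeping 99% of the variance,
# then a kernel SVM or a Random Forest on the embeddings. Random arrays stand
# in for the shallow-CNN features and disease-class labels.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 512))     # stand-in for shallow-CNN embeddings
y = rng.integers(0, 3, size=600)    # stand-in class labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

scnn_ksvm = make_pipeline(PCA(n_components=0.99), SVC(kernel="rbf"))
scnn_rf = make_pipeline(PCA(n_components=0.99), RandomForestClassifier(n_estimators=200))

for name, model in [("SCNN-KSVM", scnn_ksvm), ("SCNN-RF", scnn_rf)]:
    model.fit(X_tr, y_tr)
    print(name, "test accuracy:", model.score(X_te, y_te))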
3.4.1 Shallow CNN

In recent plant disease identification studies, popular deep CNN models like VGG-16, VGG-19, Inception-V3, and Xception have been widely adopted. VGG-16, known for its relative simplicity with 16 convolutional and fully connected layers, still consists of approximately 138 million parameters. However, this study focuses solely on a shallow CNN derived from the pre-trained VGG-16 model, which reuses its early ImageNet-trained layers while keeping the computational cost low.
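One way to obtain such a shallow network, shown here as an illustrative Keras sketch rather than the authors' exact architecture, is to keep only the early convolutional blocks of ImageNet-pretrained VGG-16 and use them as a frozen feature extractor; truncating after the second block is an assumption, since the paper does not specify which layers are retained.

# Illustrative shallow feature extractor built from pre-trained VGG-16.
# The truncation point (end of block 2) is an assumption.
import tensorflow as tf

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # transfer learning: reuse ImageNet features as-is

# Keep layers up to the end of block 2 and pool to a compact embedding.
shallow = tf.keras.Model(inputs=base.input,
                         outputs=base.get_layer("block2_pool").output)
extractor = tf.keras.Sequential([shallow, tf.keras.layers.GlobalAveragePooling2D()])

embeddings = extractor.predict(tf.random.uniform((4, 224, 224, 3)))  # dummy batch
print(embeddings.shape)  # (4, 128)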
3.4.2 PCA

Principal Component Analysis, or PCA, is a flexible unsupervised machine learning approach that may be applied to a number of tasks, including feature extraction, noise filtering, dimensionality reduction, and visualization (Liakos et al., 2018). For instance, the first 49 principal components keep 80% of the overall variance in grayscale images from the Maize dataset, which have 256x256 pixel dimensions (a total of 62,350 pixels). The first 249 components preserve 90% of the variation. This allows at least 90% of the image information to be preserved while representing the original 62,350 pixels with a 249-dimensional vector called pixel features [4].
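The idea of keeping just enough components for a target variance can be reproduced with scikit-learn's PCA, which accepts a variance fraction directly; the random matrix below merely stands in for the flattened grayscale Maize images, so the printed component count will differ from the figures quoted above.

# Selecting the number of principal components that retains a target fraction
# of variance. The random matrix is a stand-in for flattened grayscale images;
# its shape is illustrative, not the paper's 256x256 data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 64 * 64))   # 300 images flattened to pixel vectors

pca = PCA(n_components=0.90)          # keep enough components for 90% variance
X_reduced = pca.fit_transform(X)

print("components kept:", pca.n_components_)
print("variance retained:", round(pca.explained_variance_ratio_.sum(), 3))
print("reduced shape:", X_reduced.shape)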
3.4.3 Image dataset

Overall, more than 3000 images were selected for the work. They are categorized into 3 classes based on the features of the images. With 80% and 20% of the total data, respectively, the collection has been divided into training and test sets [5]. Segmentation and classification are the two primary sequential stages of the PLDPNet system, which provides an end-to-end method for illness prediction [6]. The CNN model VGGNet placed second in the ILSVRC competition with a top-5 error rate of 7.5% on the validation set [9].
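A brief sketch of loading such a leaf-image collection from a folder and carving out the 80%/20% training and test subsets with Keras utilities; the directory name, image size and batch size are assumptions, not values from [5].

# Loading a leaf-image folder and splitting it 80%/20% into training and test
# subsets. "leaf_images/", the image size and the batch size are assumptions.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "leaf_images/", validation_split=0.2, subset="training",
    seed=42, image_size=(256, 256), batch_size=32)
test_ds = tf.keras.utils.image_dataset_from_directory(
    "leaf_images/", validation_split=0.2, subset="validation",
    seed=42, image_size=(256, 256), batch_size=32)

print(train_ds.class_names)   # e.g. the 3 image categories mentioned above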
3.4.4 Multi-Class classification

Every sample in the dataset has a class label, and classification is the process of grouping records into particular classes based on feature values. Numerous deep learning (DL) and machine learning (ML) methods have been created to predict classes for the test data and train on the dataset [7].
3.4.5 C-GAN model as synthetic image generator

A Conditional Generative Adversarial Network (C-GAN) (Mirza and Osindero, 2014) can be used as a data augmentation strategy to increase the size of the dataset in order to reduce overfitting. In GANs, an image matrix is created from random noise using conventional convolutional layers. Two models are part of the GAN architecture: a discriminator and a generator [15].
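A minimal Keras sketch of the two C-GAN components just described: a conditional generator that maps noise plus a class label to an image, and a discriminator that judges image/label pairs. The 64x64x3 image size, latent dimension and layer widths are illustrative assumptions, not the architecture used in [15], and the adversarial training loop is omitted.

# Minimal conditional GAN (C-GAN) building blocks for synthetic leaf images.
# Image size, latent dimension and layer widths are illustrative assumptions;
# only the model definitions are sketched, not the training loop.
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES, LATENT_DIM = 3, 100

def build_generator():
    noise = layers.Input(shape=(LATENT_DIM,))
    label = layers.Input(shape=(), dtype="int32")
    # Condition the generator by concatenating an embedded label with the noise.
    lab = layers.Flatten()(layers.Embedding(NUM_CLASSES, 16)(label))
    x = layers.Concatenate()([noise, lab])
    x = layers.Dense(8 * 8 * 128, activation="relu")(x)
    x = layers.Reshape((8, 8, 128))(x)
    x = layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu")(x)   # 16x16
    x = layers.Conv2DTranspose(32, 4, strides=2, padding="same", activation="relu")(x)   # 32x32
    img = layers.Conv2DTranspose(3, 4, strides=2, padding="same", activation="tanh")(x)  # 64x64
    return tf.keras.Model([noise, label], img, name="generator")

def build_discriminator():
    img = layers.Input(shape=(64, 64, 3))
    label = layers.Input(shape=(), dtype="int32")
    # Broadcast the label as an extra channel so the discriminator judges (image, label) pairs.
    lab = layers.Dense(64 * 64)(layers.Flatten()(layers.Embedding(NUM_CLASSES, 16)(label)))
    lab = layers.Reshape((64, 64, 1))(lab)
    x = layers.Concatenate()([img, lab])
    x = layers.Conv2D(32, 4, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2D(64, 4, strides=2, padding="same", activation="relu")(x)
    out = layers.Dense(1, activation="sigmoid")(layers.Flatten()(x))
    return tf.keras.Model([img, label], out, name="discriminator")

generator, discriminator = build_generator(), build_discriminator()
fake = generator([tf.random.normal((2, LATENT_DIM)), tf.constant([0, 1])])
print(fake.shape)  # (2, 64, 64, 3)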
4. RESULT AND DISCUSSION

Using MATLAB, the enhanced deep learning system divided the database into two primary groups: 80% for training and 20% for evaluation. The first class was used for training, while the second category was used for the whole evaluation test.

Estimating the degree of damage to olive leaves and its percentage of the total leaf area was made easier by the forecasting of olive diseases using the optimized artificial neural networks technique. By separating the peacock disease area from the remaining leaf area, the method increases the precision of picture evaluation and categorization. Healthy leaves, which are free of any blemishes, are easily identifiable.

Methods       Sensitivity   Specificity   Accuracy
ANN           .96           .99           .99
KNN           .94           .97           .96
SVM           .92           .96           .94
Naïve Bayes   .89           .95           .93

[Bar chart: "Comparison of Accuracy, Specificity and Sensitivity" for WOA-ANN, KNN, SVM and Naïve Bayes.]

Fig. 4: The comparative performance of various models or methodologies in terms of three critical evaluation metrics: accuracy, specificity, and sensitivity. The graphical depiction allows for an intuitive understanding of the strengths and weaknesses of each approach, aiding in performance analysis and model selection.

Table 4: Comparison of the precision, recall, and F-measure values for various models or methodologies, offering a comprehensive evaluation of their performance in classification tasks.

Methods       Precision   Recall   F-Measure
ANN           .97         .98      .98
KNN           .94         .94      .94
SVM           .91         .91      .91
Naïve Bayes   .89         .89      .89