New Format Plant Disease
Abstract
The agricultural sector is a key driver of a nation's economic growth, especially in India, where it
serves as a primary source of livelihood for millions in rural areas. One of the major challenges
facing agriculture is plant diseases, which can be triggered by a variety of factors such as
synthetic fertilizers, outdated farming practices, and environmental conditions. These diseases
can severely impact crop yield, ultimately affecting the economy. To tackle this issue, researchers
have increasingly turned to AI and Machine Learning techniques for plant disease detection. This
research survey provides an in-depth review of common plant leaf diseases, evaluates both
traditional and deep learning approaches for disease identification, and highlights available
datasets. Additionally, it investigates the role of Explainable AI (XAI) in improving the
transparency of deep learning models, making their decisions more interpretable for end-users.
By synthesizing this knowledge, the survey offers valuable insights for researchers, practitioners,
and stakeholders, driving the development of effective and transparent solutions for managing
plant diseases and promoting sustainable agriculture.
I. INTRODUCTION
Despite advances in agricultural technology, plant disease remains a significant challenge,
leading to substantial crop losses globally and threatening food security. Traditional methods of
plant disease detection are highly dependent on human expertise, making them prone to errors,
subjective bias, and inefficiencies, particularly in large-scale farming operations. The lack of
timely and accurate diagnosis often results in delayed treatment, exacerbating the spread of
diseases and further reducing crop yields.
The advent of machine learning and computer vision offers a promising solution by enabling
automated disease detection systems. However, current models often function as "black boxes,"
providing little to no explanation of their decision-making processes. This lack of transparency
can hinder trust and adoption among farmers and agronomists who need to understand and
validate the system’s recommendations. Furthermore, these models may struggle to generalize
across different environmental conditions.
To address these challenges, there is a critical need for an intelligent plant disease diagnosis
system that not only provides high accuracy but also incorporates explainable AI methods. This
system should be capable of offering clear, interpretable explanations for its predictions,
thereby empowering users to make informed decisions about disease management.
Additionally, the system must be robust, scalable, and adaptable to various agricultural settings
to ensure broad applicability and effectiveness.
In this survey paper, we summarize the commonly occurring leaf diseases that infect plants, along with the available datasets and state-of-the-art techniques for detecting them. Furthermore, we introduce Explainable AI (XAI) for plant leaf-based disease detection and classification. The goal is to enhance the transparency and interpretability of deep learning models by generating XAI-based solutions tailored explicitly for CNN and Transformer models. The study also underscores the motivation for using XAI in plant leaf disease detection and highlights possible future research directions.
II. OVERVIEW OF LEAF-BASED PLANT DISEASE DETECTION
All living organisms, including plants, animals, and humans, are vulnerable to diseases.
Researchers and professionals in agricultural science and management are actively searching
for advanced solutions to mitigate plant disease outbreaks, which can cause significant
damage to agricultural productivity. To address this, various scientific disciplines collaborate
to control the spread of plant leaf diseases and ensure a stable food supply for the world’s
growing population.
Plant diseases can manifest through various symptoms that compromise a plant's structural
components—such as leaves, stems, and roots—ultimately affecting its ability to grow,
reproduce, or yield effectively. The occurrence of these diseases varies seasonally, influenced
by changes in weather conditions and the presence of specific pathogens.
This section is organized into three parts: common leaf diseases, available datasets, and key research contributions in leaf-based plant disease detection.
• Blight: One of the most destructive plant diseases, Blight has historically caused significant
damage, such as during the 1840s potato famine. This fungal disease spreads in warm, humid
conditions through wind-borne spores.
• Scab: This fungal disease is host-specific, infecting only particular plant species. It is most prevalent in apple trees, where it initially causes olive-green spots on the leaves, which turn yellow before the leaves fall off.
• Powdery Mildew: Common in shaded areas, Powdery Mildew is easily recognizable by the
white powdery coating on the upper surface of the leaves. This disease spreads in humid
conditions with low soil moisture.
• Mosaic Virus: The mosaic virus affects plants at a molecular level, commonly infecting
tomatoes, tobacco, and other horticultural plants. Infected leaves develop yellowish and
whitish stripes.
• Marssonina Blotch: Caused by the fungus Marssonina coronaria, this disease occurs in high-rainfall areas. Infected leaves develop circular dark green patches that can turn dark brown in severe cases.
• Black Spot: Another fungal disease, Black Spot, creates round black spots on the upper
surface of leaves. It thrives in prolonged wet conditions or when leaves remain moist for
extended periods.
• Frogeye Spot: Caused by the fungus Cercospora sojina, Frogeye Spot manifests as purple spots on leaves during early spring, which later develop into brownish rings resembling a frog's eye.
• Rust: This easily identifiable fungal disease causes brownish rusty spots on leaves and is
commonly found on apples, roses, and tomatoes, especially during wet weather in early
spring.
Plant leaf diseases present a significant challenge to agricultural productivity due to the diverse range of
pathogens that cause them, including fungi, bacteria, and viruses. These pathogens have distinct
lifecycles and environmental triggers, making disease management a complex and multifaceted task.
For instance, understanding the specific conditions under which diseases like Blight and Rust thrive
allows for more targeted and effective interventions. The impact of these diseases on plant physiology
is profound, as they can severely impair essential processes like photosynthesis and nutrient
absorption, leading to stunted growth, reduced yields, and even plant death if not properly managed.
III. OVERVIEW OF EXPLAINABLE AI (XAI)
Artificial intelligence (AI) is gaining significant attention, with nearly every research field either
adopting AI or upgrading outdated rule-based systems to AI-enabled ones. However, many current AI
systems, particularly those that use deep learning and machine learning, often lack transparency,
leaving users unable to understand how the system operates or the factors influencing key decisions.
This lack of clarity can lead to a loss of trust, discouraging users from adopting the final product.
Some AI researchers argue that focusing on explanations in AI research is either unnecessary or too
complex to achieve, while others believe that providing explanations alongside AI outputs can
enhance human intelligence and foster trust in these systems. Bridging this gap could increase
confidence in AI systems and unlock new opportunities for AI-driven products and services.
Users in fields such as law, medicine, agriculture, finance, and defence need explanations to effectively
and confidently work with AI systems. Explanations add a valuable layer of human-computer
interaction, helping users derive greater benefit from AI-based services. The rapid advancement of AI
has been largely driven by machine learning techniques like Support Vector Machines (SVMs), Random
Forests (RF), probabilistic models, and Deep Learning Neural Networks, which operate as "black-box"
models. These models are designed to function with minimal human intervention and can be applied in
various contexts without much customization.
However, there is often a trade-off between the performance of machine learning models, such as
predictive accuracy, and their explainability. For instance, highly accurate models like deep learning are
typically less explainable, while more transparent models like decision trees tend to be less accurate. A
hypothetical graph (Fig. 2) illustrates this performance-explainability trade-off, showing that as model
accuracy increases, explainability often decreases.
To address this challenge and make AI solutions more transparent and trustworthy, a research domain called Explainable Artificial Intelligence (XAI) has emerged. XAI aims to enhance the interpretability of AI systems, making them easier for users to understand and trust.

A. What is XAI?
Artificial intelligence (AI) is now widely applied across various fields, from autonomous vehicles to
medical diagnostics. However, users without a technical background often struggle to understand the
systems they rely on, which can lead to a lack of trust in AI-generated decisions. In critical industries
such as defence, healthcare, and safety, where AI plays a significant role, this issue becomes even more
pressing. As AI increasingly supports—or even replaces—human supervisors in these sectors, it
becomes essential to demonstrate not only how AI arrives at a particular decision but also how the
system operates, providing users with the ability to verify its claims.
Explainable Artificial Intelligence (XAI), a subfield of machine learning, was developed to address these
concerns. XAI aims to enhance the transparency and trustworthiness of AI systems by revealing their
inner workings and ensuring that users can understand and trust the model’s decisions. By
incorporating ethical considerations, XAI reduces unconscious biases and increases confidence in the
system's outputs. The primary goal of XAI is to offer explanations for AI decisions that are
understandable to humans, which can be achieved by following certain principles that make AI systems
more efficient and user-friendly.
For example, consider a healthcare scenario where a patient with breathing issues is placed on a ventilator.
A doctor monitors the patient's heart rate through an AI-enabled system, which displays fluctuating heart
rates on the screen. The AI algorithm is designed to predict the patient's heart rate for the next 15 seconds
based on previous and current data. However, this system, like many "black-box" models, provides highly
accurate predictions without explaining the factors influencing these heart rate variations. In this case, the
doctor is relying on an AI system that offers no insight into its decision-making process, making it risky to
trust such a system without understanding the internal factors driving the predictions.
This hypothetical example highlights the need for explainable AI systems in high-stakes environments,
where users must be able to trust and understand the decisions made by AI in order to use them effectively
and safely. By making AI more transparent, XAI can bridge this gap.
The LAAMA architecture is designed with depthwise separable convolutions and attention mechanisms that reduce computational complexity and make it mobile-friendly. We pretrained LAAMA on the ImageNet dataset, fine-tuned it for plant disease detection, and followed the same steps for the other models to ensure a fair comparison.
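As a rough illustration of why depthwise separable convolutions reduce computational cost, the sketch below compares parameter counts for a standard convolution and its depthwise separable counterpart. The layer sizes (128 input channels, 256 output channels, 3x3 kernel) are hypothetical and are not taken from LAAMA's actual configuration.

```python
def conv2d_params(c_in, c_out, k):
    """Parameters in a standard k x k convolution (bias omitted)."""
    return k * k * c_in * c_out

def depthwise_separable_params(c_in, c_out, k):
    """One k x k spatial filter per input channel, then a 1x1 pointwise mix."""
    depthwise = k * k * c_in   # spatial filtering, channel by channel
    pointwise = c_in * c_out   # 1x1 convolution combining channels
    return depthwise + pointwise

# Hypothetical layer: 3x3 convolution mapping 128 -> 256 channels
standard = conv2d_params(128, 256, 3)                 # 294912
separable = depthwise_separable_params(128, 256, 3)   # 33920
print(f"reduction: {standard / separable:.1f}x")      # roughly 8.7x fewer parameters
```

The roughly 8-9x parameter reduction at typical layer widths is what makes depthwise separable designs attractive for mobile deployment.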
We used the Adam optimizer with a learning rate of 0.0001, categorical cross-entropy as the loss function, and a softmax activation function in the output layer, which has 38 neurons to match the number of classes in this multiclass classification task. All models, including LAAMA, were trained for 50 epochs, using dropout to mitigate overfitting.
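A minimal Keras-style sketch of this training setup, assuming a standard classification head on an ImageNet-pretrained backbone. Since LAAMA's implementation is not reproduced here, MobileNetV2 stands in as the backbone, and the dropout rate of 0.5 is an assumed value (the text specifies only that dropout was used).

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_classifier(backbone, num_classes=38, dropout_rate=0.5):
    # Pool backbone features, regularize with dropout, classify with softmax.
    x = layers.GlobalAveragePooling2D()(backbone.output)
    x = layers.Dropout(dropout_rate)(x)  # dropout rate is an assumption
    out = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(backbone.input, out)

# MobileNetV2 used here only as an illustrative stand-in backbone
backbone = tf.keras.applications.MobileNetV2(weights="imagenet", include_top=False)
model = build_classifier(backbone)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_ds, validation_data=val_ds, epochs=50)
```

The same compile settings would apply unchanged to any of the compared backbones, which is what makes the comparison fair.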
TABLE I
V. RESULT ANALYSIS
We evaluated our models on quantitative performance metrics: accuracy (1), precision (2), recall (3), and F1 score (4), computed from their predictions on our test set.

Accuracy = (TP + TN) / (TP + TN + FP + FN) (1)

Precision = TP / (TP + FP) (2)

Recall = TP / (TP + FN) (3)

F1 score = 2 × (Precision × Recall) / (Precision + Recall) (4)

Here, TP = true positives, TN = true negatives, FP = false positives, FN = false negatives.
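The four metrics above can be computed directly from confusion-matrix counts. The following sketch uses toy counts for a single class to show the calculation; the counts are illustrative, not values from our experiments.

```python
def accuracy(tp, tn, fp, fn):
    """Fraction of all predictions that are correct."""
    return (tp + tn) / (tp + tn + fp + fn)

def precision(tp, fp):
    """Of the samples predicted positive, how many actually are."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Of the actually positive samples, how many were found."""
    return tp / (tp + fn)

def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall."""
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)

# Toy confusion counts for one class
tp, tn, fp, fn = 90, 95, 5, 10
print(round(accuracy(tp, tn, fp, fn), 3))  # 0.925
print(round(precision(tp, fp), 3))         # 0.947
print(round(recall(tp, fn), 3))            # 0.9
print(round(f1_score(tp, fp, fn), 3))      # 0.923
```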
From Table I, we observe that EfficientNetV2L achieved the highest performance across all metrics, but
LAAMA provided competitive results while maintaining a lightweight architecture suitable for mobile
deployment:
• Accuracy: LAAMA scored 99.25%, which is 0.38% lower than EfficientNetV2L but higher than
both MobileNetV2 and ResNet152V2.
• Precision: LAAMA achieved 99.13% precision, 0.5% lower than EfficientNetV2L but higher than
MobileNetV2 and ResNet152V2.
• Recall: LAAMA had a 98.94% recall, 0.69% lower than EfficientNetV2L but still higher than
MobileNetV2 and ResNet152V2.
• F1 Score: LAAMA's F1 score was 99.03%, just 0.60% lower than EfficientNetV2L and higher than
the other two models.
Thus, while EfficientNetV2L outperformed in raw accuracy and precision, LAAMA provides a strong
trade-off between performance and mobile-friendliness, making it highly suitable for applications
requiring real-time processing on edge devices.
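As a quick consistency check, the gaps quoted in the bullets above can be recomputed from the scores. The EfficientNetV2L figures here are reconstructed by adding the stated gaps to LAAMA's reported scores (all four come out to 99.63%); they are derived values, not independently reported ones.

```python
# Scores in percent; EfficientNetV2L values reconstructed from the stated gaps.
scores = {
    "EfficientNetV2L": {"accuracy": 99.63, "precision": 99.63,
                        "recall": 99.63, "f1": 99.63},
    "LAAMA":           {"accuracy": 99.25, "precision": 99.13,
                        "recall": 98.94, "f1": 99.03},
}

# Per-metric gap between the best model and the lightweight one
gaps = {m: round(scores["EfficientNetV2L"][m] - scores["LAAMA"][m], 2)
        for m in scores["LAAMA"]}
print(gaps)  # {'accuracy': 0.38, 'precision': 0.5, 'recall': 0.69, 'f1': 0.6}
```

All gaps stay under one percentage point, which supports the trade-off argument for deploying the lighter model on edge devices.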