Accurate Plant Species Analysis For Plant Classification Using Convolutional Neural Network Architecture
Corresponding Author:
Savitha Patil
Department of Computer Science and Engineering, Sharnbasva University
Kalaburagi, Karnataka, India
Email: [email protected]
1. INTRODUCTION
There are numerous environmental resources on the planet, and one of the most essential and advantageous is plants. Plants are the most essential element for the survival of humans and a key resource among all available ecological resources. Plants come in many varieties, such as green plants, mosses, flowering plants, grasses, vines, and seed plants (angiosperms and gymnosperms). Plants are extremely important to human society because they contribute massively to human food supplies and synthesize starch through photosynthesis. Further, plants absorb carbon dioxide (CO2) and release oxygen (O2), the most essential element for human survival. They also moderate ecological conditions such as temperature, global warming, and humidity. According to research conducted by the food and agriculture organization (FAO) of the United Nations, the world population will grow to 9.1 billion by the year 2050. Thus, food production needs to increase by 70% to provide nutrition to such a huge number of people by 2050 [1]. However, multiple factors can heavily constrain growth in food production, such as limited clean water and the absence of large areas for cultivation.
Furthermore, diseases in crops certainly do not help increase food production, as they massively degrade both the quality and the quantity of crops. The existence of diseases in plants hurts the food production rate. Plant diseases can be of various types, but they can be identified by precisely detecting the types of marks or lesions that occur on the leaves, flowers, fruits, or stems. Usually, plant disease starts on the leaves and can be controlled if identified early. Every disease on plant leaves has unique patterns, also called abnormalities. By identifying these abnormalities, plant diseases can be identified and their symptoms analyzed [2]. If diseases are not identified in the initial stages of crop production, food insecurity will increase, and in such cases crops are wasted more often [3]. The most effective solution is early detection of diseases in plants so that they can be prevented, and proper disease control measures and precautions always play a key role in the management and decision-making of plant production. Furthermore, image analysis and classification of plant species have gained massive attention in the last few years, especially in the fields of machine learning and computer vision. The main objective of computer vision and machine learning techniques here is to analyze and identify images belonging to numerous categories or meta-categories. These categories can be various kinds of plants, animals, vehicles, retail products, and medicines. The primary objective and challenge in understanding these images is analyzing fine-grained visual variations so that objects can be distinguished efficiently among all objects with similar appearances, even though each object has different characteristics. The identified discriminative regions generate high-quality features which carry the most significant and distinctive information about an image. Based on these distinctive features, the classification of plant leaf species can be achieved successfully. However, the extraction of discriminative features from plant leaf species requires a strong feature extraction technique. Thus, deep learning methods can be a powerful tool to extract discriminative features from plant leaf species. Recently, deep learning methods have achieved several breakthroughs in the analysis of discriminant features and the learning of fine-grained characteristics of plant leaf images [4]–[7].
However, a few problems are associated with traditional deep learning-based discriminant feature extraction methods, such as high class variance, object similarities, complex backgrounds, and poor fine-grained analysis. Therefore, a convolutional neural network based deep feature learning and classification (CNN-DFLC) model is employed to identify plant leaf species and determine exactly which class each plant image belongs to. The proposed CNN-DFLC model distinguishes plant species among several classes. It obtains the most significant information from discriminative image regions so that efficient training is performed and improved classification accuracy is obtained. The proposed CNN-DFLC model is tested on the Vietnam plant dataset, and classification performance is measured on the testing dataset using the obtained fine-grained discriminative features. The proposed CNN-DFLC model comparably improves the identification efficiency of plant leaf images.
2. LITERATURE SURVEY
An abundant number of plant species exist in the world, and the leaves of many of these plants are similar in color, appearance, and shape. As a result, the classification of plant leaf species becomes a challenging and complex process. To distinguish between medicinal and non-medicinal plants, the extraction of fine-grained discriminative features is quite important, which can be achieved using deep learning methods. Recently, many deep learning methods have been presented by different researchers to identify medicinal plants among several plant categories. One of the best deep learning methods for plant leaf identification among several categories is the CNN architecture. Some research works regarding the classification of plant leaves through CNN architecture are presented in the next paragraph.
Detection and classification methods for the analysis of plant species and diseases using deep learning are reviewed in [8]. Deep learning is utilized for handling challenges and learning essential features of plant leaf images, and the latest advanced imaging techniques can be utilized to improve efficiency and obtain discriminative features. Plant type classification [9] is performed using feature filtering and fine-grained features. Here, the Adaboost.M1 and LogitBoost algorithms are utilized to improve plant classification efficiency, and classification of plant species is obtained using four types of classifiers: k-nearest neighbors (kNN), random forest (RF), support vector machine (SVM), and multi-layer perceptron (MLP). A deep learning method [10] is presented to detect and classify plant diseases. Here, low-intensity information is obtained from the background and foreground of the image. Further, deep learning methods are utilized to acquire information related to the images, such as image structure, chrominance, and image positions. A plant disease classification system is thereby enabled to obtain information related to the plant and to handle plant diseases. In Mathulaprangsan and Lanthong [11], a leaf disease detection system based on CNN architecture is utilized to classify cassava leaves. Testing results are obtained using the DenseNet121 model, which achieves a classification accuracy of 94.32% and an F1-score of 92.13%. A deep residual dense network [12] is presented to identify tomato leaf diseases. A hybrid deep learning technique is
adopted to improve the efficiency of the deep residual dense network. This technique significantly reduces the number of training parameters while enhancing classification accuracy. In Haider et al. [13], disease classification and verification mechanisms are presented to improve knowledge-based decisions. In Jin et al. [14], deep learning methods are utilized to identify weed plant species; a training image dataset is prepared using image processing techniques, and Bayesian classification errors are reduced. The CenterNet model is utilized to achieve precision and recall of 95.6% and 95%, respectively, and significantly reduces the computational cost. A fine-grained generative adversarial network (GAN) method is adopted to identify leaf spot diseases occurring in grape leaves [15]–[17]. Therefore, the CNN-DFLC model is presented to identify plant leaf classes among several classes. The next section discusses the method related to the proposed CNN-DFLC model.
Int J Reconfigurable & Embedded Syst, Vol. 13, No. 1, March 2024: 160-170, ISSN: 2089-4864
horizontal, rotational, vertical, and zooming transformations of certain regions in different epochs for each image, and images can be transformed into several different orientations. The regions of each image are transformed at each step of an epoch. Therefore, all the regions of each image are covered and accurate training is performed. Most plant species are symmetric in nature, so more training images can be obtained by mirroring and rotating the given dataset images using transformation and augmentation methods. Moreover, histogram equalization improves contrast values and color augmentation efficiency. All the training images must be of the same size for efficient network modeling. Padding and scaling can be performed to analyze images precisely, as the images are gathered at varying heights and angles. Thus, after pre-processing, pre-trained features can be generated from the model analysis and efficient training can be performed. Furthermore, computational complexity reduction, dataset uniformity, image smoothing, and feature learning enhancement can be achieved using pre-processing in the proposed CNN-DFLC model.
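The mirroring, rotation, histogram-equalization, and padding steps above can be sketched as follows. This is a minimal NumPy illustration under our own assumptions (the paper does not give an implementation, and the helper names are hypothetical):

```python
import numpy as np

def augment(img):
    """Mirror and rotate one image into several orientations, as described
    for symmetric plant species (transformation and augmentation step)."""
    variants = [img, np.fliplr(img), np.flipud(img)]      # original + horizontal/vertical mirrors
    variants += [np.rot90(img, k) for k in (1, 2, 3)]     # 90/180/270 degree rotations
    return variants

def equalize_histogram(gray):
    """Histogram equalization for an 8-bit grayscale image (contrast improvement)."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) * 255 // max(cdf.max() - cdf.min(), 1)
    return cdf[gray].astype(np.uint8)

def pad_to_square(img, size):
    """Zero-pad (and crop if needed) so all training images share one size."""
    out = np.zeros((size, size), dtype=img.dtype)
    h, w = min(img.shape[0], size), min(img.shape[1], size)
    out[:h, :w] = img[:h, :w]
    return out
```

In this sketch one input image yields six training variants; a real pipeline would also add the zooming and color-augmentation transforms mentioned above.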
L_j = Ψ(R_{j−1} ∗ K_j + y_j)   (1)

where the input image is given by K_j and the feature weights are expressed by R_j. Here, the ReLU activation function is represented by Ψ and y_j is the bias value. The output feature map is given by L_j. The convolution operation is represented by the operator (∗). Each convolutional layer in the proposed CNN-DFLC model analyzes different attributes or characteristics to gather discriminative fine-grained features from input images to differentiate between various classes of plant species. The training parameters are constantly updated in these layers, so the data distribution also updates regularly and the feature weights vary for each image. Thus, this
parameter variation has a massive impact on the proposed CNN-DFLC model in terms of training speed. The reduction of filter size minimizes computational cost and generates quality weights. The overall loss in the feature extraction process is evaluated by (2):

where K_lcz represents the pixel localization loss in an input image, and K_cnfl and R_j are the validation loss and the feature weights, respectively. The number of training iterations is given by D.
where the mean and standard deviation are given by b and ε, respectively, for the present epoch c. The trainable parameters β and μ are updated regularly after each epoch. A small constant λ is added to the variance so that division by zero is avoided. Moreover, the mean and standard deviation are evaluated only for the training dataset, not for the testing dataset, so that no test-set information leaks into training. Finally, average mean and standard deviation statistics of the training dataset are used. After the batch normalization layer, a ReLU activation layer is employed to enhance the nonlinearity of the proposed CNN-DFLC model and to improve non-linear decision boundaries so that over-fitting can be avoided. The ReLU activation layer is widely utilized for object identification using deep learning and CNN models. Thus, training speed is enhanced to get better classification results. The ReLU activation function is given by (4):
f(K_j) = { K_j, if K_j > 0
         { 0,   otherwise      (4)

f(K_j) = max(0, K_j)           (5)

then, the final representation of the ReLU activation function is given by (5).
The main objective of the ReLU activation function is to retain all the positive pixel values of the
input image 𝐾𝑗 and convert all the negative pixel values to zero. The input image is fed to the convolutional
layers and the weights generated from the information related to the input image are utilized in terms of
tensor values. The element-wise multiplication is performed between weighted kernels and input tensor
values for each region of an image. Finally, all the output values are summed to obtain the final output
tensor.
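The convolution, bias, and ReLU steps of (1), together with batch normalization, can be sketched as below. This is an illustrative NumPy version under our own assumptions, not the paper's implementation; the variable names only loosely follow the text's symbols:

```python
import numpy as np

def conv2d_relu(K, R, y):
    """Single-channel valid convolution as in eq. (1): L = ReLU(R * K + y)."""
    kh, kw = R.shape
    h, w = K.shape
    L = np.empty((h - kh + 1, w - kw + 1))
    for i in range(L.shape[0]):
        for j in range(L.shape[1]):
            # element-wise multiply each image region with the kernel, then sum
            L[i, j] = np.sum(K[i:i + kh, j:j + kw] * R) + y
    return np.maximum(L, 0.0)      # ReLU: keep positive values, zero out negatives

def batch_norm(x, beta, mu_scale, lam=1e-5):
    """Normalize with batch statistics; the small constant lam avoids zero division."""
    b, eps = x.mean(), x.std()     # batch mean and standard deviation
    return mu_scale * (x - b) / np.sqrt(eps**2 + lam) + beta
```

A production model would of course use a framework's vectorized convolution; the loop form is only meant to make the element-wise multiply-and-sum of the text explicit.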
where N_jgh represents the elements of a particular region (g, h) of an image processed by the pooling layer, and L_jwz is the output pooled feature map. Drop-out layers are utilized to improve the training capability of the proposed CNN-DFLC model, to avoid over-fitting through regularization, and also for scaling. The proposed CNN-DFLC model supports a multinomial probability distribution.
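The pooling, drop-out, multinomial (soft-max) output, and squared-error loss stages described in this part of the model can be sketched as follows. This is a generic NumPy sketch under our own assumptions; pool size and drop rate are illustrative, not values given in the text:

```python
import numpy as np

def max_pool(x, p=2):
    """Max pooling: keep the largest element of each p-by-p region of the feature map."""
    h, w = x.shape[0] // p * p, x.shape[1] // p * p
    return x[:h, :w].reshape(h // p, p, w // p, p).max(axis=(1, 3))

def dropout(x, rate, rng, training=True):
    """Inverted dropout: randomly zero activations during training, rescale the rest."""
    if not training:
        return x
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

def softmax(phi):
    """Multinomial class probabilities: exp(phi_z) normalized over all classes."""
    e = np.exp(phi - phi.max())    # subtract the max for numerical stability
    return e / e.sum()

def mse_loss(p, q):
    """Mean squared difference between ground truth p and predictions q."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.mean((p - q) ** 2))
```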
g(L = z | Φ^(j)) = exp(Φ_z^(j)) / Σ_w exp(Φ_w^(j))   (8)

where g(L = z) is the probability of belonging to the z-th class among all the available w classes. Moreover,
total training loss is evaluated by (9):
M(p, q) = (1/S) Σ_{o=1}^{S} (p_o − q_o)²   (9)
where M(p, q) is the squared difference between ground truth labels and predicted labels and is termed the loss function. The total number of training images is given by S, p_o represents the ground truth labels, and q_o represents the predicted class labels. Furthermore, a categorical cross-validation and hyper-parameter tuning approach is adopted to obtain the best possible parameters so that maximum classification accuracy can be achieved. Certain optimizers are utilized to evaluate errors during forward propagation and fine-tune features of the proposed CNN-DFLC model, such as the learning rate and feature weights. These optimizers are utilized to reduce the computational training loss. The optimizers can be of different types, such as RMSProp, Adam, and AMSGrad. Here, the RMSProp optimizer is used for evaluating the dynamic learning rate, whereas the Adam optimizer supports the properties of the RMSProp optimizer and regulates dynamic components like the mean or learning rate with respect to dynamic mean squared gradients. The Adam optimizer is evaluated by (10) and (11):
where,

v_u = Γ · v_{u−1} + (1 − Γ) · [Δ(M(p, q)) / Δ(L_jwz(u))]   (11)
where the aggregation of gradients at time u is given by v_u and at time u − 1 by v_{u−1}, and the weights at times u and u + 1 are represented by L_jwz(u) and L_jwz(u + 1), respectively. Here, Ψ represents the learning rate, Δ(M(p, q)) is the derivative of the loss function, Δ(L_jwz(u)) is the derivative of the weights at time u, and Γ is the moving-average coefficient. Furthermore, the AMSGrad optimizer is a variant of the Adam optimizer used to optimize the learning rate. In this way, the proposed CNN-DFLC model is designed to perform efficient classification and identify plant species accurately. Figure 2 demonstrates the design of the proposed CNN-DFLC model.
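An Adam-style update combining the gradient moving average of (11) with the mean-squared-gradient term can be sketched as below. This is a textbook sketch under our own assumptions (hyper-parameter values and the bias-correction step are standard Adam defaults, not values stated in the paper):

```python
import numpy as np

def adam_step(w, grad, state, lr=1e-3, gamma=0.9, beta2=0.999, lam=1e-8):
    """One Adam-style update: moving averages of the gradient (as in (11))
    and of the squared gradient drive a dynamic per-step learning rate."""
    v, s, t = state
    v = gamma * v + (1 - gamma) * grad       # v_u = Γ v_{u-1} + (1-Γ) ΔM/ΔL
    s = beta2 * s + (1 - beta2) * grad**2    # moving mean of squared gradients
    t += 1
    v_hat = v / (1 - gamma**t)               # bias correction for early steps
    s_hat = s / (1 - beta2**t)
    w = w - lr * v_hat / (np.sqrt(s_hat) + lam)
    return w, (v, s, t)
```

For example, iterating this step on the quadratic loss (w − 3)² drives w toward 3; AMSGrad would differ only by keeping the running maximum of s_hat instead of s_hat itself.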
discriminative features are obtained by analyzing the plant images to obtain classification performance. Due to the presence of different backgrounds, soil, tree bark, flowers, and several leaves together, noise can be present in the given images, which is handled in the pre-processing stage of the proposed CNN-DFLC model.
Sample species from the dataset include Agave americana, Alocasia macrorrhizos, Ampelopsis cantoniensis, Blackberry Lily, Bengal Arum, Breynia vitis, Citrus aurantifolia, and Curculigo gracilis.
the VPN-200 dataset using different performance metrics like precision, recall, F1-score, and area under the curve (AUC). The proposed CNN-DFLC model focuses on achieving high classification accuracy with minimum computational cost and resources. Thus, fewer layers and blocks are used in the proposed CNN-DFLC model in comparison with previous CNN classification models. Convolutional and pooling layers efficiently provide feature weights that are utilized in training the model to generate feature maps, and the obtained feature maps are utilized for further testing of the model. The classification performance is evaluated by analyzing confusion matrix results, which are constructed using true positive, true negative, false positive, and false negative values. In other words, the confusion matrix combines two kinds of elements: the first gives the ground truth labels and the other shows the predicted labels. Furthermore, a system with the configuration of an i7 processor, 16 GB RAM, 2 TB SSD+HDD, and a GeForce RTX NITRO5 GPU is used to perform all the plant classification experiments and simulations. The performance of the proposed CNN-DFLC model is compared against varied CNN classification models such as VGG16 [19], InceptionV3 [20], MobileNet V2 [21], ResNet 50 [22], DenseNet 121 [23], and Xception [24]. Here, VGG16 is a deep neural network architecture designed using several convolutional and fully connected layers to analyze large datasets using small convolution filters. Moreover, InceptionV3 combines multiple local structures with varied sizes of convolutional operators; it is a multi-scale representation and can be extended to generate pre-trained parameters. MobileNet V2 is an artificial intelligence (AI) model built for mobile devices to perform heavy computation on mobile hardware. ResNet 50 uses residual mapping functions with shortcut references across multiple layers and restores the channel depth. DenseNet 121 is a visual recognition model built from dense blocks and transition layers. Finally, Xception utilizes depth-wise separable convolutions in place of Inception modules. In contrast, the proposed CNN-DFLC model is an efficient object classification model with minimum computational resource utilization. Table 1 presents simulation results for all 200 classes in terms of mean classification accuracy. The mean accuracy achieved using the proposed CNN-DFLC model is 96.42% considering all 200 classes. The highest previous accuracy achieved for the VPN-200 dataset considering all 200 classes was 88.26%, obtained by Xception. The percentage increment of mean accuracy considering all 200 classes is 27% against VGG16, 17% against InceptionV3, 10% against MobileNet V2, 10% against ResNet 50, 10% against DenseNet 121, and 9% against Xception. This shows that the proposed CNN-DFLC model outperforms existing CNN plant classification models and achieves higher performance than any other state-of-the-art classification model on the VPN-200 dataset. The proposed CN [25], and F1-measure for all 200 classes.
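The confusion-matrix-derived metrics used throughout these results can be computed as follows (a generic sketch, not tied to the paper's code):

```python
def classification_metrics(tp, tn, fp, fn):
    """Standard metrics from confusion-matrix counts:
    true positives, true negatives, false positives, false negatives."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                       # also called sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, specificity, f1
```

For a multi-class problem such as the 200-class dataset here, these counts are typically taken per class (one-vs-rest) and then averaged to obtain the reported mean values.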
Here, Figure 4 shows a graphical representation of performance metrics, namely validation accuracy and testing accuracy on the validation and testing data, respectively, for varied CNN classification models such as InceptionResNetV2, InceptionV3, MobileNet V2, ResNet 50, GoogleNet, and Xception against the proposed CNN-DFLC model. Testing accuracy is denoted by green lines, whereas validation accuracy is denoted by blue lines. Here, the number of epochs is 100 and the number of steps is 250. Each image is transformed or flipped into multiple orientations or angles and processed during training, which means each image is processed multiple times so that most of the essential pixels are trained. These graphs show that the testing results are slightly better than the validation results. The previous best CNN classification model was Xception with 91.8% testing accuracy, whereas the second-best CNN method was InceptionResNetV2 with 91.2% testing accuracy. The proposed CNN-DFLC model outperforms traditional CNN classification models with a testing accuracy of 96.42%. Here, Figure 5 shows a graphical representation of the improvement in classification accuracy using the proposed CNN-DFLC model against varied ensemble models such as the mean ensemble, voting ensemble, weighted mean ensemble, and stacking ensemble. The percentage improvement in classification accuracy is 4.1% for the mean ensemble, 3.67% for the voting ensemble, 4.24% for the weighted mean ensemble, 2.14% for the stacking ensemble, and 5% for the proposed CNN-DFLC model. These improvements are observed while keeping the individual best ensemble model as a reference, with 91.80% classification accuracy. These graphs show that the classification improvement is slightly better than the varied
ensemble results. The previous best classification improvement is observed in the weighted mean ensemble model. The proposed CNN-DFLC model outperforms varied ensemble models in terms of classification accuracy improvement as well.
5. CONCLUSION
Plant classification is an interesting and challenging research area due to the presence of numerous plant species across the world, the same green color of the leaves of most plants, the presence of flowers, and the presence of multiple leaves together. Thus, a CNN-DFLC model is proposed to perform plant classification and detect plant species accurately while countering these challenges. The main objective of this work is plant species identification: determining which plant image belongs to which class. The proposed CNN-DFLC model is constructed using several layers and blocks, such as convolutional layers, pooling layers, a ReLU activation layer, a soft-max layer, a flatten layer, and fully connected layers. The proposed CNN-DFLC model operates in different stages: the data selection stage, data pre-processing stage, feature generation stage, training stage, and testing stage. Moreover, a comprehensive analysis is performed to understand specific parameters, to enhance training and testing efficiency, and to capture fine-grained feature weights. Those obtained feature weights are then utilized in the proposed CNN-DFLC model to get the maximum yield. A detailed mathematical analysis of the CNN architecture is also presented. The performance of the proposed CNN-DFLC model is tested on the Vietnam plant (VPN-200) dataset, which contains images of 200 plant species. Performance is measured in terms of classification accuracy, precision, recall, and F1-score. The proposed CNN-DFLC model is compared against varied traditional CNN plant classification models in terms of classification accuracy. The mean classification accuracy is 96.42%, mean precision is 95.56%, mean sensitivity is 93.58%, mean specificity is 99.98%, and mean F1-measure is 94.23%. The model accurately detects which particular image belongs to which species. Thus, the proposed CNN-DFLC model shows decent performance against different traditional classification models.
REFERENCES
[1] J. Bruinsma, “The resource outlook to 2050: by how much do land, water and crop yields need to increase by 2050?,” FAO Expert
Meeting on How to Feed the World in 2050, 2009, [Online]. Available: https://ftp.fao.org/docrep/fao/012/ak971e/ak971e00.pdf
[2] J. Ma, K. Du, F. Zheng, L. Zhang, Z. Gong, and Z. Sun, “A recognition method for cucumber diseases using leaf symptom images
based on deep convolutional neural network,” Computers and Electronics in Agriculture, vol. 154, pp. 18–24, Nov. 2018, doi:
10.1016/j.compag.2018.08.048.
[3] F. O. Faithpraise, P. Birch, R. C. D. Young, J. Obu, B. Faithpraise, and C. R. Chatwin, “Automatic plant pest detection and
recognition using k-means clustering algorithm and coresspondence filters,” International Journal of Advanced Biotechnology and
Research, vol. 4, no. 2, pp. 1052–1062, 2013.
[4] X.-S. Wei et al., “Fine-grained image analysis with deep learning: a survey,” IEEE Transactions on Pattern Analysis and Machine
Intelligence, vol. 44, no. 12, pp. 8927–8948, Dec. 2022, doi: 10.1109/TPAMI.2021.3126648.
[5] J. Yin, A. Wu, and W.-S. Zheng, “Fine-grained person re-identification,” International Journal of Computer Vision, vol. 128, no.
6, pp. 1654–1672, Jun. 2020, doi: 10.1007/s11263-019-01259-0.
[6] S. D. Khan and H. Ullah, “A survey of advances in vision-based vehicle re-identification,” Computer Vision and Image
Understanding, vol. 182, pp. 50–63, May 2019, doi: 10.1016/j.cviu.2019.03.001.
[7] X.-S. Wei, Q. Cui, L. Yang, P. Wang, L. Liu, and J. Yang, “RPC: a large-scale and fine-grained retail product checkout dataset,”
Science China Information Sciences, vol. 65, no. 9, Sep. 2022, doi: 10.1007/s11432-022-3513-y.
[8] L. Li, S. Zhang, and B. Wang, “Plant disease detection and classification by deep learning-a review,” IEEE Access, vol. 9, pp.
56683–56698, 2021, doi: 10.1109/ACCESS.2021.3069646.
[9] A. Bakhshipour, “Cascading feature filtering and boosting algorithm for plant type classification based on image features,” IEEE
Access, vol. 9, pp. 82021–82030, 2021, doi: 10.1109/ACCESS.2021.3086269.
[10] W. Albattah, M. Nawaz, A. Javed, M. Masood, and S. Albahli, “A novel deep learning method for detection and classification of
plant diseases,” Complex & Intelligent Systems, vol. 8, no. 1, pp. 507–524, Feb. 2022, doi: 10.1007/s40747-021-00536-1.
[11] S. Mathulaprangsan and K. Lanthong, “Cassava leaf disease recognition using convolutional neural networks,” in 2021 9th
International Conference on Orange Technology (ICOT), Dec. 2021, pp. 1–5. doi: 10.1109/ICOT54518.2021.9680655.
[12] C. Zhou, S. Zhou, J. Xing, and J. Song, “Tomato leaf disease identification by restructured deep residual dense network,” IEEE
Access, vol. 9, pp. 28822–28831, 2021, doi: 10.1109/ACCESS.2021.3058947.
[13] W. Haider, A.-U. Rehman, N. M. Durrani, and S. U. Rehman, “A generic approach for wheat disease classification and verification
using expert opinion for knowledge-based decisions,” IEEE Access, vol. 9, pp. 31104–31129, 2021, doi:
10.1109/ACCESS.2021.3058582.
[14] X. Jin, J. Che, and Y. Chen, “Weed identification using deep learning and image processing in vegetable plantation,” IEEE Access,
vol. 9, pp. 10940–10950, 2021, doi: 10.1109/ACCESS.2021.3050296.
[15] S. S. Chouhan, A. Kaul, U. P. Singh, and S. Jain, “Bacterial foraging optimization based radial basis function neural network
(BRBFNN) for identification and classification of plant leaf diseases: An automatic approach towards plant pathology,” IEEE
Access, vol. 6, pp. 8852–8863, 2018, doi: 10.1109/ACCESS.2018.2800685.
[16] X. Liu, W. Min, S. Mei, L. Wang, and S. Jiang, “Plant disease recognition: a large-scale benchmark dataset and a visual region and
loss reweighting approach,” IEEE Transactions on Image Processing, vol. 30, pp. 2003–2015, 2021, doi:
10.1109/TIP.2021.3049334.
[17] C. Zhou, Z. Zhang, S. Zhou, J. Xing, Q. Wu, and J. Song, “Grape leaf spot identification under limited samples by fine grained-
GAN,” IEEE Access, vol. 9, pp. 100480–100489, 2021, doi: 10.1109/ACCESS.2021.3097050.
[18] T. N. Quoc and V. T. Hoang, “VNPlant-200-a public and large-scale of Vietnamese medicinal plant images dataset,” in ICIS 2020:
Integrated Science in Digital Age 2020, 2021, pp. 406–411. doi: 10.1007/978-3-030-49264-9_37.
[19] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” Prepr. arXiv.1409.1556,
Sep. 2014, [Online]. Available: http://arxiv.org/abs/1409.1556
[20] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. Alemi, “Inception-v4, inception-ResNet and the impact of residual connections on
learning,” Proceedings of the AAAI Conference on Artificial Intelligence, vol. 31, no. 1, Feb. 2017, doi: 10.1609/aaai.v31i1.11231.
[21] A. G. Howard et al., “MobileNets: Efficient convolutional neural networks for mobile vision applications,” Prepr.
arXiv.1704.04861, Apr. 2017, [Online]. Available: http://arxiv.org/abs/1704.04861
[22] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in 2016 IEEE Conference on Computer Vision
and Pattern Recognition (CVPR), Jun. 2016, pp. 770–778. doi: 10.1109/CVPR.2016.90.
[23] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely connected convolutional networks,” in 2017 IEEE
Conference on Computer Vision and Pattern Recognition (CVPR), Jul. 2017, pp. 2261–2269. doi: 10.1109/CVPR.2017.243.
[24] F. Chollet, “Xception: deep learning with depthwise separable convolutions,” in 2017 IEEE Conference on Computer Vision and
Pattern Recognition (CVPR), Jul. 2017, pp. 1800–1807. doi: 10.1109/CVPR.2017.195.
[25] O. A. Malik, M. Faisal, and B. R. Hussein, “Ensemble deep learning models for fine-grained plant species identification,” in 2021
IEEE Asia-Pacific Conference on Computer Science and Data Engineering (CSDE), Dec. 2021, pp. 1–6. doi:
10.1109/CSDE53843.2021.9718387.
BIOGRAPHIES OF AUTHORS