
Neural Computing and Applications (2019) 31:8887–8895

https://doi.org/10.1007/s00521-019-04228-3

ORIGINAL ARTICLE

Maize leaf disease classification using deep convolutional neural networks

Ramar Ahila Priyadharshini1 · Selvaraj Arivazhagan1 · Madakannu Arun1 · Annamalai Mirnalini1

Received: 1 August 2018 / Accepted: 9 May 2019 / Published online: 17 May 2019
© Springer-Verlag London Ltd., part of Springer Nature 2019

Abstract
Crop diseases are a major threat to food security, and identifying them rapidly remains difficult in many parts of the world due to the lack of the necessary infrastructure. Accurate identification of crop diseases is therefore highly desirable in agricultural informatics. In this study, we propose a deep convolutional neural network (CNN)-based architecture (a modified LeNet) for maize leaf disease classification. The experimentation is carried out using maize leaf images from the PlantVillage dataset, and the proposed CNN is trained to identify four different classes (three diseases and one healthy class). The learned model achieves an accuracy of 97.89%, and the simulation results for maize leaf disease classification show the potential efficiency of the proposed method.

Keywords Deep learning · CNN · Maize leaf disease · PCA whitening

Corresponding author: Ramar Ahila Priyadharshini, [email protected]
Selvaraj Arivazhagan, [email protected]
Madakannu Arun, [email protected]
Annamalai Mirnalini, [email protected]

1 Department of ECE, Mepco Schlenk Engineering College, Sivakasi, India

1 Introduction

Maize, popularly known as "corn," is one of the emerging crops and remains versatile even under varied climatic conditions. It is one of the important food and industrial crops of the world. It is also called the "queen of the cereals" because it has the highest yield among the cereal crops, and it is grown in almost every country except Antarctica, under a wider range of agro-climates than any other cereal crop.

Maize diseases have a critical effect on maize production. The diseases can appear on various parts of the crop such as the leaf, stem or panicle. Observing the plant with the naked eye often results in inaccurate detection of diseases, which leads to the wrong use of pesticides; this causes harmful chronic diseases in human beings through biomagnification and also reduces the quality and quantity of maize production. Thus, the detection of maize diseases plays a vital role in safeguarding the quality and yield of maize. In this work, a technique to classify maize leaf diseases automatically is proposed.

Zhang et al. [1] used genetic algorithm support vector machines (GA-SVMs) for the classification of maize leaf diseases. To recognize and classify maize leaf diseases and healthy leaves, Alehegn [2] developed a technique based on color, texture and morphological features. Many research works prefer artificial neural networks over SVMs when balanced learning is absent [3]. Jafari et al. [4] used an artificial neural network (ANN) for scaling imbibition recovery curves. Current trends have shown that deep neural networks are a valuable tool in computer vision and pattern recognition. In 1998, LeCun et al. [5] developed the LeNet architecture for digit recognition, and over the past two decades the LeNet architecture has been used for a variety of applications. Badea et al. [6] used two kinds of network architectures, namely LeNet and Network in Network (NiN), for applications such as recognizing burn wounds in pediatric cases, art movement classification and facial keypoint detection. Xu et al. [7] used LeNet for 3D object recognition using volumetric representation.


Recently, convolutional neural networks (CNNs) have been widely used for plant leaf disease classification, for example for tomato [8], rice [9] and cucumber leaves [10], and Ferreira et al. [11] used CNNs for weed detection in soybean crops. So far, no research work has explored the use of deep neural networks for the classification of diseases in maize leaves. In this research work, we use the LeNet architecture for classifying diseases in maize leaves. The aim of our work is to study the potential efficiency of the LeNet architecture in automated disease classification for maize leaf images by varying parameters such as the kernel size and the depth of the convolutional layers.

The maize leaf is affected by several diseases; the three considered here, northern leaf blight, gray leaf spot and common rust, are all fungal diseases. The motivation for developing a deep network model for maize leaf disease is to help farmers detect maize leaf diseases at an early stage with a digital camera in an accurate and efficient manner. Extracting effective features for identifying diseases in maize leaf images is a critical and challenging task, but deep networks like LeNet can automatically extract features from the raw inputs in a systematic way. The learned features can be regarded as higher-level abstract representations of the lower-level raw healthy and unhealthy maize leaf images. Deep learning-based models are also considered among the best classifiers for pattern recognition tasks. We therefore develop a LeNet-based deep network model to classify the maize leaf diseases.

2 Proposed methodology

In this work, we present a maize leaf disease classification method based on the LeNet architecture. The gradient-descent algorithm is used to train the LeNet deep network. A total of 3852 maize leaf images from the PlantVillage dataset [12] are used for classification. The images are preprocessed using PCA whitening to make the features less correlated and thereby speed up feature learning. The dataset is then randomly partitioned into training and testing subsets; the training subset is used to train the LeNet, and the testing subset is used to assess the performance of the learned model.

2.1 Preprocessing

Preprocessing is used to improve the image data by suppressing unwanted distortions or enhancing features that matter for further processing. The preprocessing technique used here is PCA whitening. Principal component analysis (PCA) is a dimensionality reduction algorithm that can be used to significantly speed up a feature learning algorithm, and whitening makes the features less correlated with each other by giving all features the same variance, which reduces the training period.

Let y^{(1)}, y^{(2)}, y^{(3)}, ..., y^{(n)} be the maize leaf images from the PlantVillage dataset. First, compute the covariance matrix of y, \Sigma_y, using Eq. 1:

\Sigma_y = \frac{1}{m} \sum_{i=1}^{m} y^{(i)} (y^{(i)})^T        (1)

Next, compute the eigenvectors of \Sigma_y and stack them as the columns of U, as shown in Eq. 2. Here, u_1 is the top eigenvector of \Sigma_y, u_2 the second eigenvector, and so on:

U = \begin{bmatrix} u_1 & u_2 & \cdots & u_n \end{bmatrix}        (2)

To make the input features uncorrelated with each other, compute y_{rot}^{(i)} = U^T y^{(i)}. The covariance matrix of y_{rot} is then a diagonal matrix whose diagonal elements \lambda_1, \lambda_2, ..., \lambda_n are the eigenvalues corresponding to the eigenvector matrix U. The PCA-whitened data are now computed according to Eq. 3:

y_{PCAwhite,i} = \frac{y_{rot,i}}{\sqrt{\lambda_i}}        (3)

The PCA-whitened images are shown in Fig. 1.
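To make the preprocessing step concrete, the following is a minimal NumPy sketch of the PCA-whitening procedure of Eqs. 1-3, assuming each leaf image has been flattened into a row vector. The explicit mean-centring and the small epsilon added to the eigenvalues are our additions for numerical stability and are not stated in the paper; the demo uses small random patches only to keep the eigendecomposition cheap.

import numpy as np

def pca_whiten(images, eps=1e-5):
    """PCA-whiten flattened images following Eqs. 1-3.

    images: array of shape (m, d), one flattened maize leaf image per row.
    eps: regularizer added to the eigenvalues (our assumption, not in the paper).
    """
    # Zero-mean the data before computing the covariance matrix.
    y = images - images.mean(axis=0)

    # Eq. 1: covariance matrix Sigma_y = (1/m) * sum_i y_i y_i^T
    m = y.shape[0]
    sigma = (y.T @ y) / m                      # shape (d, d)

    # Eq. 2: eigenvectors stacked as the columns of U.
    eigvals, U = np.linalg.eigh(sigma)
    order = np.argsort(eigvals)[::-1]          # sort so u1 is the top eigenvector
    eigvals, U = eigvals[order], U[:, order]

    # Rotation y_rot = U^T y makes the features uncorrelated.
    y_rot = y @ U

    # Eq. 3: divide each rotated component by sqrt(lambda_i).
    return y_rot / np.sqrt(eigvals + eps)

if __name__ == "__main__":
    # Example on 200 random 16x16 grayscale patches (stand-ins for leaf images).
    patches = np.random.rand(200, 16 * 16)
    white = pca_whiten(patches)
    print(white.shape)                         # (200, 256)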


Fig. 1 a Sample images and b PCA-whitened images

Fig. 2 Modified LeNet architecture

2.2 LeNet architecture

Convolutional neural networks (CNNs) are a special kind of multilayer neural network, designed to recognize visual patterns directly from pixel images with minimal preprocessing. CNNs are inspired by multilayer perceptrons, and they maintain spatially local correlation through a local connectivity pattern between neurons of adjacent layers; that is, the inputs of hidden neurons in layer N come from a subset of neurons in layer N - 1 that have spatially contiguous receptive fields.

The LeNet architecture is an excellent "first architecture" for convolutional neural networks: LeNet is small and easy to understand, yet large enough to provide interesting results [5]. Originally, LeNet was designed for handwritten and machine-printed character recognition. LeNet is made up of neurons with learnable weights and biases; each neuron accepts several inputs, takes a weighted sum over them, passes it through an activation function and responds with an output. LeNet is a five-layer network consisting of 2 convolutional layers and 3 fully connected layers, and it was originally designed for an input image of size 32 × 32. In this work, we modify the LeNet-5 architecture to accommodate an input image of size 64 × 64. The modified LeNet architecture is shown in Fig. 2.
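As an illustration, here is a hedged PyTorch sketch of such a LeNet-style network for 64 × 64 RGB inputs and four output classes, following the convolution and pooling settings later listed in Table 2. The sizes of the two hidden fully connected layers (120 and 84) are borrowed from the classic LeNet-5 and are an assumption, since the paper does not state them; this is not the authors' implementation.

import torch
import torch.nn as nn

class ModifiedLeNet(nn.Module):
    """Sketch of a modified LeNet-5: 64x64 RGB input, two convolutional layers,
    two max-pooling layers and three fully connected layers ending in 4 classes.
    Conv/pool settings follow Table 2; FC sizes 120/84 are assumed."""

    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 6, kernel_size=5),   # 64x64x3  -> 60x60x6  (C1)
            nn.ReLU(),
            nn.MaxPool2d(2, stride=2),        # 60x60x6  -> 30x30x6  (S1)
            nn.Conv2d(6, 16, kernel_size=5),  # 30x30x6  -> 26x26x16 (C2)
            nn.ReLU(),
            nn.MaxPool2d(2, stride=2),        # 26x26x16 -> 13x13x16 (S2)
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 13 * 13, 120),
            nn.ReLU(),
            nn.Linear(120, 84),
            nn.ReLU(),
            nn.Linear(84, num_classes),       # 4 class scores (the paper applies
        )                                     # a softmax at this final layer)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = ModifiedLeNet()
    out = model(torch.randn(8, 3, 64, 64))    # dummy batch of whitened images
    print(out.shape)                          # torch.Size([8, 4])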

2.2.1 Convolutional layers

The convolution layer is the key element of a convolutional neural network [13]. It comprises a set of independent filters; each filter is convolved with the image independently, and a feature map is obtained for each. In general, if we convolve an image of size M × N with a filter of size w × h, we obtain an output feature map of size o_w × o_h as given in Eq. 4:

o_w = \frac{M - w + 2p_w}{s_w} + 1, \qquad o_h = \frac{N - h + 2p_h}{s_h} + 1        (4)

where p_w and p_h denote the zero padding in width and height, respectively, and s_w and s_h denote the stride in the horizontal and vertical directions. Figure 3 shows the convolution operation of a 3 × 3 filter on an input map of size 7 × 7.

Fig. 3 Convolution operation

Each output feature map is obtained by convolving the input maps with a linear filter, adding a bias term and then applying a nonlinear function. The output can generally be denoted by the formula in Eq. 5:

X_j^l = f\left( \sum_{i \in I_j} X_i^{l-1} * W_{ij}^l + b_j^l \right)        (5)

where l is the layer number, W_{ij} is the convolutional kernel, b_j is the bias, I_j is the set of input maps and f(\cdot) is the activation function. The convolution layer makes the CNN scale invariant.

The activation function is essential in a convolutional neural network: it makes the network capable of learning and performing more complex tasks. Activation functions are nonlinear transformations applied to the input, and they essentially decide whether the information a neuron receives is relevant or should be ignored. Commonly used activation functions include the sigmoid (logistic) function, the tanh (hyperbolic tangent) function and the rectified linear unit (ReLU). In this work, the nonlinear activation function used is ReLU.

The ReLU function is nonlinear, which means the errors can easily be back-propagated through multiple layers of neurons activated by it. The main advantage of ReLU over other activation functions is that it does not activate all the neurons at the same time: only a few neurons are active at once, making the network sparse, efficient and cheap to compute. For negative inputs, the gradient is zero and the corresponding weights are not updated during back-propagation, which can create dead neurons that never get activated. The ReLU function is depicted in Fig. 4, and its equation is given in Eq. 6:

f(X) = \max(0, X) = \begin{cases} X, & X \ge 0 \\ 0, & X < 0 \end{cases}        (6)

Fig. 4 ReLU activation function

The first convolutional layer extracts low-level features such as edges, lines and corners. Stacking many such convolutional layers leads the network to learn more global features. So in this work, we have used two convolutional layers.
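A small worked example of Eq. 4: the helper below traces the feature-map sizes through the two convolution and two pooling stages (no padding, stride 1 for convolutions, stride 2 for pooling, settings as in Table 2), reproducing the 60 × 60, 30 × 30, 26 × 26 and 13 × 13 dimensions.

# Output-size formula of Eq. 4, applied stage by stage to the modified LeNet.
def output_size(m: int, w: int, pad: int = 0, stride: int = 1) -> int:
    return (m - w + 2 * pad) // stride + 1

size = 64                                    # input images are 64 x 64
for name, kernel, stride in [("C1", 5, 1), ("S1", 2, 2), ("C2", 5, 1), ("S2", 2, 2)]:
    size = output_size(size, kernel, pad=0, stride=stride)
    print(name, size)                        # C1 60, S1 30, C2 26, S2 13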

123
Neural Computing and Applications (2019) 31:8887–8895 8891

2.2.2 Pooling layers

Sometimes, a pooling layer is inserted between successive convolutional layers in a CNN. The function of the pooling layer is to gradually reduce the spatial size of the representation. This reduces the number of parameters and the amount of computation in the network and hence controls over-fitting. Pooling layers also make the CNN translation invariant. The pooling layer operates independently on every depth slice of the input and resizes it spatially using the pooling operation. The most common form of pooling layer, a 2 × 2 filter applied with a stride of 2, down-samples every depth slice in the input by 2 along both width and height, discarding 75% of the activations [13].

Spatial pooling can be of different types: max, min, average, sum, etc. For spatial pooling, a spatial neighborhood of a 2 × 2 window is defined; for max pooling, the largest element of the feature map within that window is taken, as depicted in Fig. 5, while for average pooling the average value is taken, and so on. Max pooling gives better results for two reasons: (1) it reduces computation for the upper layers by eliminating non-maximal values, and (2) it provides a form of translation invariance. Since it provides additional robustness to position, max pooling is a "smart" way of reducing the dimensionality of intermediate representations [14].

Fig. 5 Max pooling operation with a stride of 2

2.2.3 Fully connected layers

The term "fully connected" implies that every neuron in the previous layer is connected to every neuron in the current layer. Their activations can hence be computed as a matrix multiplication followed by a bias offset. The number of neurons in the last fully connected layer is the same as the number of classes to be predicted. Since the default LeNet architecture is designed for digit recognition [5], its output layer is of size 10. In our work, we address a four-class problem, and thus the size of the output layer is 4, as depicted in Fig. 2.

Most of the features from the convolutional and subsampling layers are good for classification, but a combination of those features might be even better. So the fully connected layers combine all the features extracted by the previous convolutional and subsampling layers. The final fully connected layer uses the softmax activation function; the softmax function is a generalized logistic activation function used for multiclass classification.

2.2.4 Learning algorithm

We know that the first convolutional layer extracts low-level features such as edges, lines, curves and corners, while the next level of convolutional layers learns more global features. The features are learned by the CNN through a training process called back-propagation. The back-propagation learning algorithm consists of four distinct stages, namely the forward pass, the loss function, the backward pass and the weight update.

During the forward pass, the CNN takes a training image and passes it through the whole network. Initially, all of the weights or filter values are randomly initialized. The CNN, with its current randomly initialized weights, will not be able to pick out the low-level features and thus will not be able to draw any reasonable conclusion about what the classification might be. This leads to the loss function part of back-propagation. A loss function can be defined in many different ways; a common one is the mean squared error (MSE). In this work, we use the soft margin loss defined in Eq. 7:

J(\cdot) = \frac{1}{K} \sum_{k=1}^{K} \log\left(1 + e^{-o_k t_k}\right)        (7)

where K is the total number of classes, o_k \in [-1, 1] is the output of the CNN and t_k \in \{-1, 1\} is the desired target response.

The loss will be extremely high for the first few training images, as expected. We need the CNN to predict the same label as the training label, and to achieve this we want to minimize the loss. Considering the minimization of the loss as an optimization problem, we wish to find out which inputs (the weights, in our case) contribute most directly to the loss of the network. So we perform a backward pass through the network, which determines the weights that contribute most to the loss, and adjust them so that the loss decreases. After the backward pass, we update the weights as shown in Eq. 8; this is the stage where the weights of the filters are updated so that they change in the opposite direction of the gradient:

w_{k+1} = w_k - \eta \frac{\partial J(w)}{\partial w_k}        (8)

where w_{k+1} is the updated weight, w_k is the old weight, \eta is the learning rate and \partial J(w)/\partial w_k is obtained by the chain rule

\frac{\partial J(w)}{\partial w_k} = \frac{\partial J(w)}{\partial o} \cdot \frac{\partial o}{\partial Y_d} \cdot \frac{\partial Y_d}{\partial Z_{d-1}} \cdot \frac{\partial Z_{d-1}}{\partial Y_{d-1}} \cdots \frac{\partial Z_2}{\partial Y_2} \cdot \frac{\partial Y_2}{\partial w_2} \cdot X

where \partial J(w)/\partial o is \nabla J(\cdot), X is the input to the CNN and the remaining terms constitute \nabla(\text{net}).
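Putting the four stages together, the following is a minimal PyTorch sketch of one training iteration using the soft margin loss of Eq. 7 (available as nn.SoftMarginLoss). It assumes the ModifiedLeNet sketch shown earlier; the ±1 target encoding and the optimizer settings are our assumptions rather than details taken from the paper.

import torch
import torch.nn as nn

def train_step(model: nn.Module,
               images: torch.Tensor,          # (B, 3, 64, 64) PCA-whitened batch
               labels: torch.Tensor,          # (B,) class indices in {0, 1, 2, 3}
               optimizer: torch.optim.Optimizer,
               num_classes: int = 4) -> float:
    criterion = nn.SoftMarginLoss()            # log(1 + exp(-o_k t_k)), averaged
    # Encode each label as a +1/-1 target vector with +1 at the true class
    # (our assumption for matching Eq. 7's t_k in {-1, 1}).
    targets = -torch.ones(labels.size(0), num_classes)
    targets[torch.arange(labels.size(0)), labels] = 1.0

    optimizer.zero_grad()
    outputs = model(images)                    # forward pass
    loss = criterion(outputs, targets)         # loss function (Eq. 7)
    loss.backward()                            # backward pass (gradients)
    optimizer.step()                           # weight update: w <- w - lr * dJ/dw
    return loss.item()

# Usage sketch; the learning rate is an arbitrary placeholder:
# optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
# loss_value = train_step(model, images, labels, optimizer)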


The process of forward pass, loss function, backward pass and weight update constitutes a single iteration. The process is repeated for a fixed number of iterations or until the loss function reaches a threshold. The network should then be trained well enough that the weights of the layers are tuned correctly.

3 Experimental results and discussion

The proposed CNN model is applied to the maize leaf disease recognition problem. The experimentation is carried out using the PlantVillage dataset [12]. The dataset consists of four different classes: one class consists of healthy maize leaf images, and the other three classes are the common maize leaf diseases. The maize leaf images are of size 256 × 256 and are resized to 64 × 64 for the experimentation. The details of the dataset are given in Table 1, and sample images from the PlantVillage dataset are shown in Fig. 6.

Table 1 Details of maize leaf images in the PlantVillage dataset

Class                  No. of images
Common rust            1192
Gray leaf spot         513
Northern leaf blight   985
Healthy                1162

Fig. 6 Sample images from PlantVillage dataset: a common rust, b gray leaf spot, c northern leaf blight, d healthy
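For illustration, a hedged sketch of how such a dataset could be loaded, resized to 64 × 64 and split into training and testing subsets with torchvision. The folder layout (one sub-folder per class under data/maize), the batch size and the 80-20 split are assumptions, and the PCA-whitening step of Sect. 2.1 is omitted here.

import torch
from torchvision import datasets, transforms

# Hypothetical local layout: the four maize classes from the PlantVillage
# dataset copied into data/maize/<class_name>/ (ImageFolder convention).
preprocess = transforms.Compose([
    transforms.Resize((64, 64)),     # the paper resizes 256x256 images to 64x64
    transforms.ToTensor(),
])

dataset = datasets.ImageFolder("data/maize", transform=preprocess)
n_train = int(0.8 * len(dataset))    # one of the paper's ratios: 80-20
train_set, test_set = torch.utils.data.random_split(
    dataset, [n_train, len(dataset) - n_train])

train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=32)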


Initially, the resized images are preprocessed using PCA whitening so that the features become less correlated. The PCA-whitened images are used to train the modified LeNet architecture, whose parameters are shown in Table 2. The experimentation is carried out with different train and test ratios. The classification accuracy for the various train-test ratios of the CNN described in Table 2 is given in Table 3, and the classwise performance for 1000 epochs is shown in Table 4.

Table 2 Parameters of the modified LeNet architecture

Layer name   Task            Kernel size   Depth   Output dimensions
Input        -               -             3       64 × 64
C1           Convolution     5 × 5         6       60 × 60
S1           Max pooling     2 × 2         6       30 × 30
C2           Convolution     5 × 5         16      26 × 26
S2           Max pooling     2 × 2         16      13 × 13
Output       Classification  -             -       1 × 1

Table 3 Classification accuracy using the modified LeNet architecture

Train-test ratio (%)   Classification accuracy (%)
                       500 epochs    1000 epochs
50-50                  84.91         85.85
75-25                  86.53         87.46
80-20                  88.06         89.20

Table 4 Classwise classification accuracy using the modified LeNet architecture

Train-test ratio (%)   Class                  Classwise accuracy (%)   Average accuracy (%)
50-50                  Common rust            96.69                    85.85
                       Gray leaf spot         57.48
                       Northern leaf blight   90.57
                       Healthy                98.36
75-25                  Common rust            98.32                    87.46
                       Gray leaf spot         60.86
                       Northern leaf blight   92.01
                       Healthy                98.69
80-20                  Common rust            99.49                    89.20
                       Gray leaf spot         64.48
                       Northern leaf blight   93.77
                       Healthy                99.06

From Table 4 it is observed that, comparing the performance across the four classes, the gray leaf spot class shows the lowest accuracy, which pulls down the overall accuracy of the maize disease classification model. This is due to class imbalance: the gray leaf spot class has comparatively fewer images. To balance the dataset, we performed data augmentation using horizontal flips for the gray leaf spot class, so this class now contains 1026 images (513 original + 513 horizontally flipped). From Table 3 it is evident that the classification accuracy is higher for 1000 epochs, so further experimentation is carried out with the balanced dataset for 1000 epochs. The performance with the balanced dataset for 1000 epochs using different train-test ratios is given in Table 5.

From Table 5 it is observed that the classification accuracy improves for the balanced dataset. So far, the experimentation has used the depth and kernel size listed in Table 2. To improve the classification accuracy further, the proposed architecture is modified by varying the depth and kernel size. A large increase in depth leads to over-fitting, so the depths are varied only slightly. The performance for the balanced dataset with different kernel sizes and depths is shown in Table 6.
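Returning to the class-balancing step described above, a minimal sketch of the horizontal-flip augmentation applied to the gray leaf spot class (513 original + 513 flipped = 1026 images); it assumes the images are held as NumPy arrays of shape (H, W, C), and file handling is omitted.

import numpy as np

def augment_with_horizontal_flips(images: list) -> list:
    """Return the original images plus a horizontally mirrored copy of each."""
    flipped = [img[:, ::-1, :].copy() for img in images]   # mirror along width
    return images + flipped

# e.g. gray_leaf_spot = augment_with_horizontal_flips(gray_leaf_spot)  # 513 -> 1026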


From Table 6, it is clear that the 3 × 3 kernel size outperforms the other kernel sizes irrespective of the variation in depth, and a slight increase in depth gives more accurate results; the highest accuracies in Table 6 are obtained with the 3 × 3 kernel and depths of 10 and 20. The architecture yielding the highest accuracy is shown in Fig. 2, and the corresponding parameters of our proposed method are shown in Table 7.

Table 5 Performance measure for the balanced dataset

Train-test ratio (%)   Class                  Classwise accuracy (%)   Average accuracy (%)
50-50                  Common rust            99.04                    91.97
                       Gray leaf spot         74.67
                       Northern leaf blight   94.99
                       Healthy                99.21
75-25                  Common rust            99.53                    94.66
                       Gray leaf spot         81.78
                       Northern leaf blight   97.82
                       Healthy                99.53
80-20                  Common rust            99.87                    95.57
                       Gray leaf spot         84.58
                       Northern leaf blight   98.14
                       Healthy                99.70

Table 6 Performance measure (classification accuracy, %) for the balanced dataset with different depths and kernel sizes

                       Depth: C1@6, C2@16              Depth: C1@10, C2@20
Train-test ratio (%)   3 × 3    5 × 5    7 × 7         3 × 3    5 × 5    7 × 7
50-50                  92.71    91.97    90.91         94.19    93.21    92.22
75-25                  95.44    94.66    93.02         96.83    95.99    94.68
80-20                  96.81    95.57    94.15         97.89    96.75    95.26

Table 7 Parameters of the proposed method

Layer name   Task            Kernel size   Depth   Output dimensions
Input        -               -             3       64 × 64
C1           Convolution     3 × 3         10      60 × 60
S1           Max pooling     2 × 2         10      30 × 30
C2           Convolution     3 × 3         20      26 × 26
S2           Max pooling     2 × 2         20      13 × 13
Output       Classification  -             -       1 × 1

The performance of our proposed method compared with the methods reported in the literature [1, 2, 15] is shown in Table 8. The experimental results show that the proposed methodology can effectively recognize maize diseases. In [1, 2, 15], the experiments were carried out with a smaller number of images, collected from various sources. In our work, we have used images only from the PlantVillage dataset; after data augmentation we worked with a total of 4365 images.

Table 8 Comparison of our proposed method with other methods

Method                                      Classification accuracy (%)
GA-SVM [1]                                  92.82
Artificial neural network classifier [2]    94.4
Support vector machine [15]                 89.6
Our proposed methodology                    97.89
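For completeness, a hedged sketch of how the overall and classwise accuracies reported above could be computed from a trained model and a test loader such as the one sketched earlier; the class ordering follows torchvision's ImageFolder convention, which is an assumption.

import torch

@torch.no_grad()
def evaluate(model, loader, num_classes: int = 4):
    """Return per-class and overall accuracy (%) over a data loader."""
    model.eval()
    correct = torch.zeros(num_classes)
    total = torch.zeros(num_classes)
    for images, labels in loader:
        preds = model(images).argmax(dim=1)
        for c in range(num_classes):
            mask = labels == c
            total[c] += mask.sum()
            correct[c] += (preds[mask] == c).sum()
    classwise = 100.0 * correct / total.clamp(min=1)
    overall = 100.0 * correct.sum() / total.sum()
    return classwise, overall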


4 Conclusion

A deep convolutional neural network (CNN)-based architecture (modified LeNet) for the classification of maize leaf diseases is proposed, and its usage has been explored in detail. The method classifies the various diseases of maize leaves by learning local and global features together. In this paper, we have also studied the potential efficiency of the LeNet architecture for plant leaf disease classification by varying parameters such as the kernel size and depth. From this study, we infer that a kernel of size 3 × 3 is better suited for maize leaf disease classification. Further, the proposed CNN can also be used for other plant leaf disease classification tasks.

Compliance with ethical standards

Conflict of interest The authors declare that they have no conflict of interest.

References

1. Zhang Z, He X, Sun X, Guo L, Wang J, Wang F (2015) Image recognition of maize leaf disease based on GA-SVM. Chem Eng Trans 46:199-204
2. Alehegn E (2017) Maize leaf diseases recognition and classification based on imaging and machine learning techniques. Int J Innov Res Comput Commun Eng 5(12):1-11
3. Ren J (2012) ANN vs. SVM: which one performs better in classification of MCCs in mammogram imaging. Knowl Based Syst 26:144-153
4. Jafari I, Masihi M, Zarandi MN (2018) Scaling of counter-current imbibition recovery curves using artificial neural networks. J Geophys Eng 15(3):1062-1070
5. LeCun Y, Bottou L, Bengio Y, Haffner P (1998) Gradient-based learning applied to document recognition. Proc IEEE 86(11):2278-2324
6. Badea MS, Felea II, Florea LM, Vertan C (2016) The use of deep learning in image segmentation, classification and detection. arXiv:1605.09612 [cs.CV]
7. Xu X, Dehghani A, Corrigan D, Caulfield S, Moloney D (2016) Convolutional neural network for 3D object recognition using volumetric representation. In: First international workshop on sensing, processing and learning for intelligent machines (SPLINE)
8. Brahimi M, Boukhalfa K, Moussaoui A (2017) Deep learning for tomato diseases: classification and symptoms visualization. Appl Artif Intell. https://doi.org/10.1080/08839514.2017.1315516
9. Yang L, Yi S, Zeng N, Liu Y, Zhang Y (2017) Identification of rice diseases using deep convolutional neural networks. Neurocomputing 267:378-384
10. Kawasaki Y, Uga H, Kagiwada S, Iyatomi H (2015) Basic study of automated diagnosis of viral plant diseases using convolutional neural networks. In: Proceedings of the international symposium on visual computing, pp 638-645
11. dos Santos Ferreira A, Freitas DM, da Silva GG (2017) Weed detection in soybean crops using ConvNets. Comput Electron Agric 143:314-324
12. https://github.com/spMohanty/PlantVillage-Dataset
13. http://cs231n.github.io/convolutional-networks/#overview
14. Li F-F, Johnson J, Yeung S (2017) Convolutional neural networks for visual recognition, lecture notes
15. Zhang L, Yang B (2014) Research on recognition of maize disease based on mobile internet and support vector machine technique. Adv Mater Res 905:659-662. https://doi.org/10.4028/www.scientific.net/AMR.905.659

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

