
An Augmented Customized Deep Learning

Approach for Brain Tumor Identification


Mangena Venu Madhavan1, Aditya Khamparia2,*, Sagar Pande3, Arun Kumar Sangaiah4,*

1 School of Computer Science and Engineering, Lovely Professional University, India
2 Department of Computer Science, Babasaheb Bhimrao Ambedkar University, Satellite Center, Amethi, Tikarmafi, UP, India
3 Department of Computer Science, VIT University, Amravati, India
4 National Yunlin University of Science and Technology, Douliu, 633102, Taiwan
Email: 1 [email protected], 2 [email protected], 3 [email protected], 4 [email protected]

September 8, 2022

Abstract
The brain is one of the crucial organs of the human body, and the survival rate for those affected by brain tumors across the globe is very low. This low survival rate can nevertheless be improved by identifying the disease at a very early stage using MRI scans of the brain. At this juncture, accurate automatic identification of the tumor from MRI images is essential. Since the advent of deep learning, the accuracy of identifying various biomedical diseases from MRI or X-ray images has improved considerably. Taking this as inspiration, a framework is proposed for the identification of brain tumors using a customized neural network built on a CNN architecture. The dataset considered for the implementation of this framework is an open dataset obtained from Kaggle. As the dataset is small, a data augmentation technique is used to increase its size, and the effect of data augmentation on the attained accuracy is also discussed. The training accuracy obtained pre-data augmentation is about 85.76% and post-data augmentation is about 97.85%.
Keywords: Brain tumor, MRI images, deep neural network, data augmentation, classification

1 Introduction
The brain is the most important of all the organs that need to be maintained and cared for properly, as it can be considered the central processing unit of the human body. Cancer may affect the brain either by originating from the brain cells themselves or by originating from cells elsewhere in the body and then spreading into the brain. A brain tumor is also known as an intracranial tumor [1,2,3]. It is caused by the unbounded multiplication of an abnormal mass of tissue cells, apparently uncontrolled by the mechanisms that regulate normal cells. These abnormal tissue cells can be cancerous or non-cancerous. A brain tumor caused by non-cancerous abnormal tissue cells is considered a benign tumor, and a brain tumor caused by cancerous abnormal tissue cells is considered a malignant tumor. Brain tumors are likewise classified as primary and secondary (metastatic) tumors depending on the location of the cells that cause them. A brain tumor caused by abnormal tissue cells of the brain itself is called a primary brain tumor, whereas a brain tumor caused by abnormal tissue cells of other parts of the body is called a secondary or metastatic brain tumor [4,5]. A primary brain tumor can be benign or malignant, whereas a secondary tumor is always malignant. The rapidity of development of a brain tumor varies considerably with factors such as the growth rate of the tumor and its location in the brain, which in turn influence its effect on the functioning of the nervous system [6,7]. There is no age limitation for the occurrence of a brain tumor. More than 150 types of brain tumors have been documented so far, but the most common are meningioma, glioma, and pituitary tumors [8]. The methods used to identify and diagnose a brain tumor are neurologic examination, MRI scan, CT scan, angiogram, spinal tap, skull X-rays, and biopsy. Of all these methods, MRI can be considered the best and safest, as it provides clearer information without using any radiation and therefore supports more accurate identification [9, 10]. According to a survey conducted by the American Society of Clinical Oncology (ASCO), brain tumors are the 10th leading cause of mortality for both women and men in the US. For the year 2020, the estimate indicates that around 18,000 adults in the US would die due to a primary brain tumor. The same survey also indicates that the five-year and ten-year survival rates for patients with a brain tumor in the US are about 36% and 31%, respectively. Similarly, in the US, the five-year survival rates for the age groups under 15, 15 to 39, and above 39 are 74%, 71%, and 21%, respectively. This might suggest that the danger increases with age, but that is not strictly true, since survival depends on several factors besides age, such as family history, race, exposure to chemicals, and radiation exposure. From this scenario, one can understand the complexity of identifying and handling brain tumors in human beings. Figures 1 and 2 show sample brain MRI images with and without a tumor, respectively.

Figure 1: Brain MRI images with Tumor

Figure 2: Brain MRI images without Tumor

Diagnosing the tumor can be made more accurate with the aid of MRI images compared with all other diagnostic methods, as MRI images provide the clear and essential information needed for the application of machine learning, image processing, and deep learning methods to tumor identification. The framework proposed in this paper is based on deep learning with a customized network. The dataset used for the implementation of the proposed network is obtained from Kaggle under the name 'Brain MRI Images for Brain Tumor Detection' and consists of 253 brain images. These images are categorized into a set named 'yes' (images with a tumor) and a set named 'no' (images without a tumor). The obtained dataset is quite unbalanced, and it is balanced using the data augmentation technique. The contributions of the proposed framework can be summarized as follows:
1. Data augmentation improves the dataset size and feature diversity without requiring new data.
2. Pre- and post-data-augmentation experiments are conducted on a customized neural network built with convolution, batch normalization, and pooling layers.
3. The process of brain tissue segmentation and analysis is automated using a deep learning model.
4. The comparative assessment of pre-data augmentation and post-data augmentation achieves accuracies of 85.76% and 97.85%, respectively, on MRI images.
In the present section, the brain tumor, its diagnostic methods, certain brain tumor-related statistics, and the dataset utilized for the implementation of this framework have been introduced. Section 2 presents the related work and literature review. Section 3 presents the methodology of the implemented framework, i.e., the augmentation methodologies and the network architecture. Section 4 presents the results obtained by the implemented framework. Finally, Section 5 presents the conclusion along with future work.

2 Related Works
The segmentation of biomedical images usually underlies mechanisms for automated or semi-automatic boundary detection within two-dimensional or three-dimensional images. In recent years, several projects have addressed the segmentation of diagnostic images, including skin lesions, brain tumor identification, heart ventricle surveillance, liver diagnosis, and the detection of COVID or pneumonia from pulmonary images. Since the framework in this paper deals with the identification of brain tumors, the work related to brain tumors and the corresponding identification techniques is discussed here. Tanzila Saba et al. in 2020 [11] proposed a methodology for proper and precise segmentation of the brain to identify whether a tumor exists, with the aid of transfer learning. The methodology pre-tunes the model using the popular convolutional neural network architecture VGG-16 to obtain features from brain MRI images of the BRATS dataset. The results of this methodology are reported through the Dice similarity coefficient. M. Sajjad et al. in 2019 [12] proposed a framework that was implemented both with and without data augmentation. The framework is based on the popular CNN architecture VGG-19. Although VGG-16 is generally considered more effective than VGG-19, the accuracy remained satisfactory, with 90.03% for Grade-I tumors, 89.91% for Grade-II tumors, 84.11% for Grade-III tumors, and 85.50% for Grade-IV tumors. The framework was implemented using two datasets, Radiopaedia and the T1-weighted brain CE-MRI dataset. Z. Sobhaninia et al. in 2018 [13] proposed a computer vision framework based on deep learning concepts, implemented through deep learning and image segmentation with two different networks. The performance of the network is reported in terms of the Dice score, which is less commonly used; if one converts the reported Dice metric to accuracy, it may not even reach 90%. It was implemented using the T1-weighted brain CE-MRI dataset. A. Ari and D. Hanbay in 2018 [14] designed and implemented a framework called extreme learning machine with local receptive fields (ELM-LRF). The framework operates in three stages: the first stage mainly deals with the removal of noise, the second stage classifies images as benign or malignant tumors, and the final stage segments the image. This work achieved an accuracy of about 97.2%. A. Anil et al. in 2019 [15] identified the importance of automatic detection of brain tumors and planned a transfer learning model to classify images into two categories, with and without tumors. It was built on three different pre-trained CNN architectures, AlexNet, VGG-16, and VGG-19, and the corresponding accuracies obtained were 89.6%, 93.22%, and 95.78%. H. H. Sultan et al. in 2019 [16] proposed and implemented a deep learning architecture for the classification of various brain tumors such as meningioma, glioma, and pituitary tumors. The dataset utilized was the T1-weighted contrast-enhanced brain CE-MRI dataset comprising 3064 images, and the overall accuracy obtained was about 96.13%. S. Hussain et al. in 2018 [17] explained the identification of brain tumors through a proposed deep learning-based architecture whose performance is measured in terms of the Dice similarity coefficient, specificity, and sensitivity. The methodology also includes a two-stage weighted training of the network, which improves its performance, and the reported metrics show that the model works well. It was implemented using two popular datasets, BRATS-2013 and BRATS-2015. F. Özyurt et al. in 2019 [18] proposed a hybrid methodology combining CNN and neutrosophy, a branch of philosophy that explains the relation of an entity, concept, or event with its anti-entity, anti-concept, or anti-event. Through the use of neutrosophy, fuzzy logic is also embedded in this methodology. The attributes are classified using SVM and KNN techniques along with the CNN. The average accuracy obtained is about 95.62%, and it was implemented using the TCGA-GBM dataset from the TCI Archive. M. Toğaçar et al. in 2020 [19] suggested and implemented a transfer-learning methodology and built a network architecture named BrainMRNet whose main goal is to choose the most effective attributes among the complete attribute set. It was pre-trained using various CNN architectures such as AlexNet, Inception, and VGG-16, and attained a classification accuracy of about 96.05%. A. M. Alqudah et al. in 2019 [20] introduced a methodology implemented with the widely used deep learning architecture CNN (convolutional neural network) to classify brain tumor MRI images into three categories: meningioma, glioma, and pituitary tumor. The proposed methodology achieved a classification accuracy of 97.62% with cropped images. Kermi et al. in 2018 [21] proposed a methodology based on two-dimensional convolutional neural networks, forming an automated pipeline to extract the complete tumor from three-dimensional magnetic resonance images. The proposed methodology was inspired by the existing U-Net architecture to improve the performance of brain tumor segmentation. The performance of this network was measured using the Dice score, and it was implemented using the BRATS 2018 dataset, which consists of multimodal magnetic resonance images of 351 patients.
J. Amin et al. in 2019 [22] stated that the main objective of their proposed work is to identify the brain tumor at an early stage. The work mainly deals with denoising the input images as a preprocessing step using a Wiener filter with various wavelet bands, and the denoised images are then used for further enhancement. In the following stages, potential field clustering, a universal threshold, and fluid-attenuated inversion recovery were utilized to separate the tumor zone. Local binary patterns and the Gabor wavelet transform were utilized for accurate classification. The performance of the proposed model was estimated using various metrics such as peak signal-to-noise ratio, mean squared error, structural similarity index, and Dice score. It was implemented using the BRATS-2013 and BRATS-2015 datasets, and the results were compared between the two. R. Pugalenthi et al. in 2019 [23] proposed work on the identification and classification of brain tumors using certain machine learning techniques such as random forest, k-nearest neighbor, and a combination of support vector machine and radial basis function, and compared the results across all three models. It was implemented using the BRATS-2015 dataset [24]. The accuracy obtained for the RF, KNN, and SVM-RBF models was 88.67%, 92.67%, and 94.33%, respectively [25]. H. Mohsen et al. in 2018 [26] proposed a deep neural network classifier, one of the DL architectures, for classifying a dataset of 66 brain MRIs into four classes: normal, glioblastoma, sarcoma, and metastatic bronchogenic carcinoma tumors. Their classifier was combined with the discrete wavelet transform (DWT), a powerful feature extraction tool, and principal component analysis (PCA) [27]. T. Brosch et al. in 2016 [28] studied deep learning of brain images and its application to multiple sclerosis. Raheleh Hashemzehi et al. in 2020 [29] trained a hybrid paradigm consisting of a neural autoregressive distribution estimation (NADE) model and a convolutional neural network (CNN), and tested it on 3064 T1-weighted contrast-enhanced images with three types of brain tumors; the obtained results demonstrated that the hybrid CNN-NADE has a high classification performance. S. Vieira et al. in 2017 reviewed various studies that have used deep learning to classify brain-based disorders, discussed the pros and cons of using DL to elucidate such disorders, and suggested that augmentation techniques can be useful in neuroimaging studies [30]. S. Ali Abdelaziz Ismael et al. in 2020 [31] proposed an enhanced approach for classifying brain tumor types using residual networks. They evaluated the proposed model on a benchmark dataset containing 3064 MRI images of three brain tumor types and achieved the highest accuracy of 99% [32].
So far, various strands of the literature and the working methodologies for the identification and classification of the various brain tumors have been discussed. Identifying brain tumors at the very early, benign stages is essential: at that stage the tumor might be treatable in certain scenarios, so that the survival rate can be improved, and automated recognition of brain tumors can stand as a better alternative to the regular diagnostic methods.

3 Methodology
3.1 Data description
The dataset used for the implementation of the proposed network is obtained from Kaggle under the name 'Brain MRI Images for Brain Tumor Detection' and consists of 253 brain images. These images are categorized into a set named 'yes' (images with a tumor) and a set named 'no' (images without a tumor). The category 'yes' consists of 155 images, and the category 'no' consists of 98 images. From these figures, one can see that the dataset is very small and also not well balanced, since 61.3% of the total images belong to the 'yes' category and the remaining 38.7% belong to the 'no' category. Balancing the data is essential for achieving better accuracy, so data augmentation was applied to increase the number of sample images. After data augmentation, the number of sample images increased to 2065, which also balanced the data, with 52.5% of the total images in the 'yes' category and the remaining 47.5% in the 'no' category. The dataset was further split into training, testing, and validation sets in proportions of 70%, 15%, and 15%, respectively, i.e., the training dataset consists of 1445 images, the testing dataset of 310 images, and the validation dataset of 310 images.
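As an illustration only (the paper does not publish its splitting code), a 70/15/15 stratified split such as the one described above could be produced with scikit-learn; the function and variable names below are hypothetical.

```python
# Hypothetical sketch of the 70/15/15 split described in the text,
# assuming the (augmented) images and binary labels are already loaded.
from sklearn.model_selection import train_test_split

def split_dataset(images, labels, seed=42):
    # 70% of the samples for training.
    x_train, x_rest, y_train, y_rest = train_test_split(
        images, labels, train_size=0.70, stratify=labels, random_state=seed)
    # Split the remaining 30% evenly: 15% validation, 15% testing.
    x_val, x_test, y_val, y_test = train_test_split(
        x_rest, y_rest, test_size=0.50, stratify=y_rest, random_state=seed)
    return (x_train, y_train), (x_val, y_val), (x_test, y_test)
```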

3.2 Data Augmentation


Deep learning applications have produced remarkable results in classification tasks. This has been driven by the development of deep network architectures, high-performance computing, and the availability of large amounts of data. The introduction of convolutional neural networks (CNNs) has brought deep neural networks to computer vision applications such as image recognition, object recognition, and image segmentation. Computer vision includes many activities, one of the most popular being image classification, in which computers now often categorize images more accurately than humans. However, a significant disadvantage of image classification is that vast quantities of data are required: training the proposed model needs a large sample of images for the classification accuracy to improve. Yet, in many instances, sufficient data is not available for training the proposed models. In such instances, data augmentation comes in handy for resolving the data shortage issue during training. Data augmentation is a technique that increases the number of sample images available for training the model without incorporating any additional data. The popular and common techniques utilized in data augmentation to increase the sample size are flipping the images, rotating the images, scaling the images, cropping the images, translating the images, padding the images, and adding Gaussian noise to the images. A sketch of how such transformations can be configured is shown below.
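As a minimal sketch (not the authors' exact pipeline), the transformations listed above can be configured, for example, with Keras' ImageDataGenerator; the parameter values and directory name below are illustrative assumptions.

```python
# Illustrative augmentation setup; parameter values are assumptions,
# not the configuration used in the paper.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    rotation_range=15,       # rotate the images
    width_shift_range=0.1,   # translate the images horizontally
    height_shift_range=0.1,  # translate the images vertically
    zoom_range=0.1,          # scale the images
    horizontal_flip=True,    # flip the images
    fill_mode='nearest')

# Stream augmented batches from a folder containing the 'yes' and 'no'
# subdirectories (the folder name is hypothetical).
batches = augmenter.flow_from_directory(
    'brain_mri/', target_size=(240, 240), batch_size=32, class_mode='binary')
```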

Figure 3: Images obtained through Data augmentation technique

3.3 An Overview of CNN(Convolutional Neural Networks)


A CNN (Convolutional Neural Network) can be described as a series of layers, such as convolution layers and pooling layers, whose ultimate goal is to extract features from the sample images provided as input. In particular, two elementary operations are implemented in a CNN: the padding operation and the stride operation. The pixels at the corners of the image, which can be viewed as a two-dimensional matrix, are used less frequently by the convolution filter than the pixels at the center of the image, which means that detail is lost from the edges. To overcome this, pixels are padded around the image so that its edges are properly taken into account. Traditionally, padding is done by adding extra pixels around the image with pixel values of zero. Figure 4 demonstrates the padding operation in the case of a two-dimensional image.

Figure 4: Padding Operation
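As a small, generic illustration of zero padding (not tied to the paper's implementation), NumPy can add a border of zero-valued pixels around a two-dimensional image:

```python
# Zero-padding a toy 2D "image" with a one-pixel border of zeros.
import numpy as np

image = np.arange(9).reshape(3, 3)
padded = np.pad(image, pad_width=1, mode='constant', constant_values=0)
print(padded.shape)  # (5, 5): the 3x3 image gains a zero border on every side
```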

Once padding is completed, the next operation, the stride, is applied as part of the convolutional component. During this operation the size of the input image shrinks, and the output is the shrunken image. The amount of shrinking depends on the stride size: the larger the stride, the larger the shrinking effect on the input image, and vice versa. Figure 5 demonstrates the stride operation in the case of a two-dimensional image.

Figure 5: Stride Operation

The discussion of the elementary operations is followed by the convolution itself. As the proposed work is implemented in Python, the further discussion is framed in practical, Python-oriented terms. An image in Python can be treated as a tensor, so the convolution can be defined as the product of the tensor and the kernel (filter). It can further be expressed as a sum of the element-wise products of the image, in two-dimensional matrix form, and a kernel of the same form as the input image. Practically, the mathematical representation of an image as a tensor, in terms of its dimensions, is given in Eq. (1).

Dim(I) = (S_h, S_b, S_{ch})    (1)


where I represents the input image, S_h the height of the image, S_b the width (breadth) of the image, and S_{ch} the number of channels of the image. If the input image is an RGB image, the number of channels is 3, with red, green, and blue as the three channels; if the input image is a grayscale image, the number of channels is one. Next, as part of the convolution, the kernel (filter) needs to be defined. Traditionally, the filter is taken to be square with an odd dimension, denoted by F, which allows every pixel to be centered in the kernel so that all components are taken into account. For the convolution operation to be applied, the kernel must have the same number of channels as the input image, i.e., the filter is applied to each channel, and the dimension of the kernel can be written as in Eq. (2).
Dim(F) = (f, f, S_{ch})    (2)
The convolution operation is applied as the product of the image and the kernel in two-dimensional matrix form, where the value of each cell in the output matrix is the sum of the element-wise multiplication with the kernel. Mathematically, the convolution operation can be written as in Eq. (3).
conv(I, F)_{x,y} = \sum_{i=1}^{S_h} \sum_{j=1}^{S_b} \sum_{k=1}^{S_{ch}} F_{i,j,k} \, I_{x+i-1,\, y+j-1,\, k}    (3)

The dimensions of conv(I, F) are given in Eq. (4).


Dim(conv(I, F)) = \begin{cases} \left( \left\lfloor \frac{S_h + 2p - f}{s} \right\rfloor + 1,\ \left\lfloor \frac{S_b + 2p - f}{s} \right\rfloor + 1 \right), & \text{when } s > 0 \\ (S_h + 2p - f,\ S_b + 2p - f), & \text{when } s = 0 \end{cases}    (4)

There are various special types of convolution. If p = 0, the convolution is known as a valid convolution. If the size of the output equals the size of the input, i.e., p = (f-1)/2, the convolution is known as a same convolution. If f = 1, the convolution is known as a 1x1 convolution, which is used to shrink the number of channels of the image without changing the other dimensions. The number of kernel/filter parameters equals f × f × S_{ch}, and these parameters can be learned through the backpropagation algorithm. The convolution phase is followed by the pooling phase, which condenses the features of the image by summarizing the details present in it. In this phase, the spatial dimensions of the image are affected while the number of channels is retained. A kernel slides over the image and modifies the pixel values according to the type of pooling used, such as average pooling or maximum pooling. The dimensions of the output image obtained from the pooling phase are given in Eq. (5). The pooling phase is followed by the fully connected layer to obtain the information in the required form.
Dim(pooling(I)) = \begin{cases} \left( \left\lfloor \frac{S_h + 2p - f}{s} \right\rfloor + 1,\ \left\lfloor \frac{S_b + 2p - f}{s} \right\rfloor + 1,\ S_{ch} \right), & \text{when } s > 0 \\ (S_h + 2p - f,\ S_b + 2p - f,\ S_{ch}), & \text{when } s = 0 \end{cases}    (5)
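To make Eqs. (4) and (5) concrete, the small helper below (an illustrative sketch, not part of the paper's code) computes the output spatial size of a convolution or pooling step from the input size, padding p, filter size f, and stride s:

```python
# Output spatial size following Eqs. (4)/(5): floor((S + 2p - f)/s) + 1 when s > 0.
def output_size(S, f, p=0, s=1):
    if s > 0:
        return (S + 2 * p - f) // s + 1
    return S + 2 * p - f  # the s = 0 case as written in Eqs. (4)/(5)

# Example: a 244x244 padded image convolved with a 7x7 filter at stride 1
# gives a 238x238 feature map (the channel count equals the number of filters).
print(output_size(244, f=7, p=0, s=1))  # 238
```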

The overall discussion can be generalized in the form of the CNN (Convolutional Neural Network) shown in Figure 6. For the given input images, a series of convolution operations, each followed by an activation function and then a pooling operation, is repeated a specific number of times to extract features from the input. The extracted features are then used as the input to a following neural network comprising fully connected layers followed by an activation function. The ultimate intention of using a CNN with an additional neural network is to decrease the spatial dimensions and increase the number of channels as one moves deeper into the network. Many popular and common CNN architectures exist, among them LeNet-5, VGG-16, VGG-19, AlexNet, ResNet-50, and Inception.

Figure 6: The generalized CNN structure

3.4 Algorithm for Proposed System


This section describes the feature set construction and classification using MRI images to detect brain tumors. The method investigates different CNN features to yield the desired results.

3.5 Proposed Network Architecture


The proposed framework utilizes the architecture shown in Figure 7, where different blocks represent the two distinct tumor classes. After image processing is done with padding and augmentation, the images are used to train the CNN network, which is then able to distinguish tumor and non-tumor classes with the desired semantic outputs.
Algorithm 1 Brain Tumor Identification using MRI
1: Input: Brain MRI dataset
2: Output: 0 or 1, where 0 represents the non-tumorous class and 1 represents the tumorous class
3: The input images are passed into the data augmentation process to increase the dataset size.
4: The augmented dataset is passed to this step, and each image is padded with zeros.
5: The padded images are passed through the convolution layer, which includes batch normalization layers and the 'ReLU' activation function.
6: The matrix obtained in the previous step is passed through the pooling layer to obtain the features.
7: The obtained features are passed through the flatten layer to arrange all features in array form.
8: The feature set is passed through the fully connected layer to obtain the output as 1 or 0.

The main properties of the detailed CNN-driven augmented architecture can be summarized as follows (a code sketch of this stack is given after the list).
1. Each image is initially of size 240x240x3; after zero padding with a padding size of 2x2, the image is transformed to size 244x244x3.
2. The zero-padded image is passed through a stack consisting of the convolution layer, the batch normalization layer, and the 'ReLU' activation function. The convolution layer consists of 32 filters, each of size 7x7, with a stride of 1.
3. Two pooling layers are then used sequentially; in both layers the filter size is 4x4 and the stride is 4.
4. Thereafter, a flatten layer is used to convert the multi-dimensional matrix into a single-dimensional vector.
5. Finally, a fully connected layer with a single neuron and a sigmoid activation function is established to obtain the output as 1 or 0.
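The Keras sketch below reconstructs the layer stack listed above (240x240x3 input, 2x2 zero padding, 32 convolution filters of size 7x7 with stride 1, batch normalization, ReLU, two 4x4 pooling layers with stride 4, a flatten layer, and a single sigmoid neuron). It is a plausible reading of the description rather than the authors' released code; in particular, the pooling type, optimizer, and loss are assumptions.

```python
# Reconstruction of the described architecture (a sketch based on the text,
# not the authors' original implementation).
from tensorflow.keras import layers, models

def build_model(input_shape=(240, 240, 3)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.ZeroPadding2D(padding=(2, 2)),               # 240x240x3 -> 244x244x3
        layers.Conv2D(32, kernel_size=(7, 7), strides=1),   # 32 filters, 7x7, stride 1
        layers.BatchNormalization(),
        layers.Activation('relu'),
        layers.MaxPooling2D(pool_size=(4, 4), strides=4),   # pooling type (max) is assumed
        layers.MaxPooling2D(pool_size=(4, 4), strides=4),
        layers.Flatten(),
        layers.Dense(1, activation='sigmoid'),               # 1 = tumor, 0 = non-tumor
    ])
    # Optimizer and loss are assumptions; the paper does not specify them.
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    return model
```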

Figure 7: Flow chart of the implemented algorithm

4 Results and Discussion
4.1 System Requirements
The proposed network was implemented on a Windows 10, 64-bit operating system with an x64-based Intel® Core™ i3-8130U CPU @ 2.20 GHz and 8.00 GB of RAM. The framework was developed in Python through a Jupyter notebook.

4.2 Pre-Data Augmentation Results


The proposed framework was implemented in two phases to identify the impact of data augmentation on the final classification of having or not having a tumor. The first phase was implemented with the available data, i.e., the pre-data augmentation phase. The results obtained are not very satisfactory, even though they are consistent across the training, validation, and testing datasets when compared with the results of contemporary existing techniques. The number of epochs used for the proposed network was about 24. The training accuracy as well as the validation accuracy was noted so that they could be compared in each case. Table 1 lists the performance metrics for the pre-data augmentation implementation of the proposed model. Figure 9 presents the comparison of the training accuracy and the validation accuracy in the pre-data augmentation phase.

Table 1: Performance metric details for the pre-data augmentation phase

Metric        Training Set %   Validation Set %   Testing Set %
Accuracy      85.76            84.98              80.23
F1-Score      87.0             86.0               81.0
Precision     87.1             86.19              81.8
Specificity   82.2             81.06              75.01
Recall        88.2             86.36              80.91
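The metrics in Tables 1 and 2 follow the standard binary-classification definitions; the sketch below (illustrative, not the authors' evaluation script) shows how they can be derived from a confusion matrix.

```python
# Standard binary-classification metrics from a confusion matrix
# (generic illustration, not the authors' evaluation code).
from sklearn.metrics import confusion_matrix

def report_metrics(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    precision   = tp / (tp + fp)
    recall      = tp / (tp + fn)              # also called sensitivity
    specificity = tn / (tn + fp)
    f1          = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, specificity, f1
```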

Figure 8: The generalized CNN structure

4.3 Post-Data Augmentation Results
The second phase was implemented with the data augmented from the available data, i.e., the post-data augmentation phase. The results obtained are very satisfactory compared with the pre-data augmentation phase: the consistency of the results across the training, validation, and testing datasets is retained, and the accuracy is also better than that of many other existing methodologies. The number of epochs used for the proposed network was about 24. The training accuracy as well as the validation accuracy was noted so that they could be compared in each case. Table 2 lists the performance metrics for the post-data augmentation implementation of the proposed model. Figure 10 presents the comparison of training loss vs. validation loss and training accuracy vs. validation accuracy in the post-data augmentation phase. Figure 11 visualizes the various performance metrics supporting the results, particularly the accuracy, obtained in the post-data augmentation phase.

Table 2: Performance metric details for the post-data augmentation phase

Metric        Training Set %   Validation Set %   Testing Set %
Accuracy      97.85            96.41              95.27
F1-Score      98.14            96.98              95.88
Precision     98.8             98.3               97.2
Specificity   98.3             97.6               96.0
Recall        97.5             95.76              94.6

Figure 9: The comparison of training accuracy and validation accuracy in the pre-data augmentation phase

Figure 10: The comparison of training loss vs validation loss and training accuracy vs validation
accuracy in post-data augmentation phase

Figure 11: The representation of performance metrics in support of the attained accuracy of post-
data augmentation

4.4 Pre-data Augmentation vs Post-Data Augmentation Results


So far, the results have been considered for the individual cases, namely the pre-data augmentation phase and the post-data augmentation phase. To identify the effectiveness of data augmentation, it is now essential to compare the training accuracies of the pre- and post-data augmentation phases as well as their validation accuracies; this comparison is presented in Figure 12.

Figure 12: The comparison of training accuracy pre- vs post-data augmentation phases and validation accuracy pre- vs post-data augmentation phases, respectively

5 Conclusion
The proposed framework is able to achieve the two aspects as planned. The first is to achieve better accuracy in identifying the existence of a brain tumor with a standard, customized network, and the second is to identify the impact of data augmentation on achieving better accuracy. The results presented in the previous section demonstrate that data augmentation played a crucial role, improving the accuracy by an average of about 12%. The proposed framework is able to achieve an accuracy of 97.85%, which compares favorably with many existing contemporary methodologies. Identifying the impact of a genetic algorithm in place of data augmentation and comparing that result with the present work can be a part of future work.

References
[1] M. L. Bondy et al., “Brain tumor epidemiology: Consensus from the Brain Tumor Epidemiology Consortium,” Cancer, vol. 113, no. 7, pp. 1953–1968, 2008.
[2] M. Havaei et al., “Brain tumor segmentation with Deep Neural Networks,” Medical Image Analysis, vol. 35, pp. 18–31, 2017.
[3] N. A. Charles, E. C. Holland, R. Gilbertson, R. Glass, and H. Kettenmann, “The brain tumor microen-
vironment,” Glia, vol. 59, no. 8, pp. 1169–1180, 2011.
[4] A. H. Soloway, H. Hatanaka, and M. A. Davis, “Penetration of Brain and Brain Tumor. VII. Tumor-Binding Sulfhydryl Boron Compounds,” Journal of Medicinal Chemistry, vol. 10, no. 4, pp. 714–717, 1967.
[5] B. H. Menze et al., “The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS),” IEEE Transactions on Medical Imaging, vol. 34, no. 10, pp. 1993–2024, 2015.
[6] M. Prastawa, E. Bullitt, S. Ho, and G. Gerig, “A brain tumor segmentation framework based on outlier
detection,” Medical Image Analysis, vol. 8, no. 3, pp. 275–283, 2004.
[7] M. S. Mahaley, C. Mettlin, N. Natarajan, E. R. Laws, and B. B. Peace, “National survey of patterns of care for brain-tumor patients,” Journal of Neurosurgery, vol. 71, no. 6, pp. 826–836, 1989.

[8] S. Pereira, A. Pinto, V. Alves, and C. A. Silva, “Brain Tumor Segmentation Using Convolutional Neural Networks in MRI Images,” IEEE Transactions on Medical Imaging, vol. 35, no. 5, pp. 1240–1251, 2016.
[9] M. F. Kircher et al., “A brain tumor molecular imaging strategy using a new triple-modality MRI-
photoacoustic-Raman nanoparticle,” Nature Medicine, vol. 18, no. 5, pp. 829–834, 2012.
[10] N. Gordillo, E. Montseny, and P. Sobrevilla, “State of the art survey on MRI brain tumor segmentation,”
Magnetic Resonance Imaging, vol. 31, no. 8, pp. 1426–1438, 2013.
[11] T. Saba, A. Sameh Mohamed, M. El-Affendi, J. Amin, and M. Sharif, “Brain tumor detection using the
fusion of handcrafted and deep learning features,” Cognitive Systems Research, vol. 59, pp. 221–230,
2020.
[12] M. Sajjad, S. Khan, K. Muhammad, W. Wu, A. Ullah, and S. W. Baik, “Multi-grade brain tumor classification using deep CNN with extensive data augmentation,” Journal of Computational Science, vol. 30, pp. 174–182, 2019.
[13] Z. Sobhaninia, S. Rezaei, A. Noroozi, M. Ahmadi, Z. Hamidreza, K. Nader, E. Ali, Shadrokh Samavi, “Brain Tumor Segmentation Using Deep Learning by Type-Specific Sorting of Images,” 2018.
[14] A. Ari and D. Hanbay, “Deep learning-based brain tumor classification and detection system,” Turkish Journal of Electrical Engineering and Computer Sciences, vol. 26, no. 5, pp. 2275–2286, 2018.
[15] A. Anil, A. Raj, H. Aravind Sarma, N. C. R, and D. P L, “Brain Tumor detection from brain MRI using Deep Learning,” International Journal of Innovative Research Applied Sciences Engineering, vol. 3, no. 2, pp. 458–465, 2019.
[16] H. H. Sultan, N. M. Salem, and W. Al-Atabany, “Multi-Classification of Brain Tumor Images Using Deep Neural Network,” IEEE Access, vol. 7, pp. 69215–69225, 2019.
[17] S. Hussain, S. M. Anwar, and M. Majid, “Segmentation of glioma tumors in the brain using deep
convolutional neural network,” Neurocomputing, vol. 282, pp. 248–261, 2018.
[18] F. Özyurt, E. Sert, E. Avci, and E. Dogantekin, “Brain tumor detection based on Convolutional Neural Network with neutrosophic expert maximum fuzzy sure entropy,” Measurement: Journal of the International Measurement Confederation, vol. 147, 2019.
[19] M. Toğaçar, B. Ergen, and Z. Cömert, “BrainMRNet: Brain tumor detection using magnetic resonance images with a novel convolutional neural network model,” Medical Hypotheses, vol. 134, 2020.
[20] A. M. Alqudah, H. Alquraan, I. A. Qasmieh, A. Alqudah, and W. Al-Sharu, “Brain tumor classification using deep learning technique - A comparison between cropped, uncropped, and segmented lesion images with different sizes,” International Journal of Advanced Trends in Computer Science and Engineering, vol. 8, no. 6, pp. 3684–3691, 2019.
[21] Kermi, Adel, Mahmoudi, Issam, and Khadir, Mohamed Tarek, “Deep Convolutional Neural Networks Using U-Net for Automatic Brain Tumor Segmentation in Multimodal MRI Volumes,” Springer International Publishing, vol. 2, no. 6, pp. 37–48, 2018.
[22] J. Amin, M. Sharif, M. Raza, T. Saba, and M. A. Anjum, “Brain tumor detection using statistical
and machine learning method,” Computer Methods and Programs in Biomedicine, vol. 177, pp. 69–79,
2019.
[23] R. Pugalenthi, M. P. Rajakumar, J. Ramya, and V. Rajinikanth, “Evaluation and classification of the brain tumor MRI using machine learning technique,” Control Engineering and Applied Informatics, vol. 21, no. 4, pp. 12–21, 2019.
[24] J. Amin, M. Sharif, M. Yasmin, and S. L. Fernandes, “Big data analysis for brain tumor detection:
Deep convolutional neural networks,” Future Generation Computer Systems, vol. 87, pp. 290–297, 2018.
[25] S. Deepak and P. M. Ameer, “Brain tumor classification using deep CNN features via transfer learning,”
Computers in Biology and Medicine, vol. 111, no.3, p. 103345, 2019.

[26] Heba Mohsen, El-Sayed A. El-Dahshan, El-Sayed M. El-Horbaty, Abdel-Badeeh M. Salem, Classification using deep learning neural networks for brain tumors, Future Computing and Informatics Journal, Volume 3, Issue 1, 2018, Pages 68-71, ISSN 2314-7288, https://doi.org/10.1016/j.fcij.2017.12.001.
[27] Hossein Shahamat, Mohammad Saniee Abadeh, Brain MRI analysis using a deep learning based evolutionary approach, Neural Networks, Volume 126, 2020, Pages 218-234, ISSN 0893-6080, https://doi.org/10.1016/j.neunet.2020.03.017.
[28] T. Brosch, Y. Yoo, L. Y. W. Tang, R. Tam, Deep learning of brain images and its application to multiple sclerosis, Machine Learning and Medical Imaging, Academic Press, 2016, Pages 69-96, ISBN 9780128040768, https://doi.org/10.1016/B978-0-12-804076-8.00003-7.
[29] Raheleh Hashemzehi, Seyyed Javad Seyyed Mahdavi, Maryam Kheirabadi, Seyed Reza Kamel, Detection of brain tumors from MRI images base on deep learning using hybrid model CNN and NADE, Biocybernetics and Biomedical Engineering, Volume 40, Issue 3, 2020, Pages 1225-1232, ISSN 0208-5216, https://doi.org/10.1016/j.bbe.2020.06.001.
[30] Sandra Vieira, Walter H. L. Pinaya, Andrea Mechelli, Using deep learning to investigate the neuroimaging correlates of psychiatric and neurological disorders: Methods and applications, Neuroscience & Biobehavioral Reviews, Volume 74, Part A, 2017, Pages 58-75, ISSN 0149-7634, https://doi.org/10.1016/j.neubiorev.2017.01.002.
[31] Xiao Zheng, Wanzhong Chen, Mingyang Li, Tao Zhang, Yang You, Yun Jiang, Decoding human brain activity with deep learning, Biomedical Signal Processing and Control, Volume 56, 2020, 101730, ISSN 1746-8094, https://doi.org/10.1016/j.bspc.2019.101730.
[32] Sarah Ali Abdelaziz Ismael, Ammar Mohammed, Hesham Hefny, An enhanced deep learning approach for brain cancer MRI images classification using residual networks, Artificial Intelligence in Medicine, Volume 102, 2020, 101779, ISSN 0933-3657, https://doi.org/10.1016/j.artmed.2019.101779.

