
Classification and Segmentation of MRI Images of Brain Tumors Using Deep Learning and Hybrid Approach
Original Scientific Paper

Sugandha Singh
Department of Computer Science
Babasaheb Bhimrao Ambedkar University
Vidya Vihar, Rae Bareli Road, Lucknow (U.P.) 226025, INDIA
[email protected]

Vipin Saxena
Department of Computer Science
Babasaheb Bhimrao Ambedkar University
Vidya Vihar, Rae Bareli Road, Lucknow (U.P.) 226025, INDIA
[email protected]

Abstract – Manual prediction of brain tumors is a time-consuming and subjective task, reliant on radiologists' expertise, leading to
potential inaccuracies. In response, this study proposes an automated solution utilizing a Convolutional Neural Network (CNN) for brain
tumor classification, achieving an impressive accuracy of 98.89%. Following classification, a hybrid approach, integrating graph-based
and threshold segmentation techniques, accurately locates the tumor region in magnetic resonance (MR) brain images across sagittal,
coronal, and axial views. Comparative analysis with existing research papers validates the effectiveness of the proposed method, and
similarity coefficients, including a Bfscore of 1 and a Jaccard similarity of 93.86%, attest to the high concordance between segmented
images and ground truth.

Keywords: tumor images, graph-based approach, threshold segmentation, CNN, tumor identification, meningioma

1. INTRODUCTION

Glioma, atypical meningioma, and schwannoma are among the most severe forms of brain tumor that pose a significant threat to human life. Primary brain tumors were estimated to affect 24,810 people in the United States in 2023. In the early stage of the disease, patients may experience headaches. However, as time passes, the condition may progress, potentially leading to visual impairments [1]. Glioma is the most common primary brain tumor, and its symptoms, which can be quite severe, depend on the tumor's location, growth, and infiltration. Meningiomas, on the other hand, are typically benign tumors that occur in adults. They are commonly found attached to the dura and arise from the meningothelial cells of the arachnoid. These tumors are rounded in shape with a well-defined dural base, which can lead to compression of the underlying brain tissue. Meningiomas have two aggressive subtypes, atypical and anaplastic. Atypical meningiomas often exhibit a high rate of recurrence and more aggressive local growth, and may require radiotherapy along with surgery. Most schwannomas within the cranial vault occur at the cerebellopontine angle, where they are typically attached to the vestibular branch of the eighth cranial nerve; the symptoms experienced by patients often include tinnitus and hearing loss. Early detection of these brain tumors is crucial for preventing further complications. Therefore, both classification and segmentation are critical factors in identifying brain tumors at an early stage [2].

Due to the abnormal and rapid growth of tumor tissues within the brain, it is imperative to accurately locate the position of the tumor that affects the brain cells. Worldwide, medical practitioners and radiologists are continually striving to diagnose brain tumors effectively. This is where MRI plays a crucial role in enhancing the accuracy of brain tumor diagnosis and identifying the affected areas. MRI is a dedicated, non-invasive imaging modality widely used for detailed visualization of the brain's internal structures. The integration of artificial intelligence (AI) in MRI
Volume 15, Number 2, 2024 163


analysis has become imperative due to the complex and voluminous nature of medical imaging data.

MRI generates high-dimensional and intricate datasets that pose challenges for efficient interpretation by human observers alone. The application of AI, particularly deep learning models such as convolutional neural networks (CNNs), has shown promise in automating the analysis of MRI images. These models excel at discerning intricate patterns and features within the images, enabling more accurate and rapid identification of abnormalities such as brain tumors.

Referring to the existing literature, studies by Deb and Roy [3] and Ranjbarzadeh et al. [4] have explored the application of AI, specifically neural networks, for MRI image analysis. They emphasize the need for advanced computational techniques to handle the complexity of MRI data and enhance diagnostic accuracy. The introduction thus establishes the context of MRI as the chosen imaging modality and justifies the integration of AI to address the inherent challenges in its analysis, drawing on insights from relevant studies in the field. Noteworthy among these advancements is the work of Rehman et al. [5], which introduces a compelling strategy using an enhanced encoder-decoder network, 'BrainSeg-Net'; their approach merits careful consideration in the broader landscape of medical image analysis.

To reduce the computational complexity and enhance the accuracy of brain tumor detection, a novel CNN-based classification and segmentation method is employed. Samples of normal brain images and brain tumor images with glioma, atypical meningioma, and schwannoma were collected from various hospitals, as illustrated in Fig. 1, representing the transverse plane of both contrast and non-contrast MR images.

Fig. 1. Sample of (a) normal brain image and brain tumor images with (b) schwannoma, (c) atypical meningioma and (d) glioma

2. RELATED WORKS

In the references reviewed here, the imaging modality used was primarily MRI (Magnetic Resonance Imaging). Specifically, Karayegen and Aksahin [6] utilized MRI for semantic segmentation to detect brain tumors using 3D imaging. They compared the ground truth with the segmented result; however, the classification error rate was not successfully minimized. Saleem et al. [7] utilized the MRI BraTS 2018 dataset for 3D brain tumor segmentation and analyzed the segmentation model by applying an interpretability technique to different tumor regions, including non-enhancing tumors, edema, and enhancing tumors. Khosravanian et al. [8] introduced a superpixel fuzzy clustering method with a multiscale morphological gradient reconstruction operation. They evaluated the method's performance on both synthetic data and the BraTS 2017 dataset; however, a limitation of this paper is the use of single-modality fluid-attenuated inversion recovery (FLAIR) MRI images for tumor segmentation. Zhang et al. [9] introduced a multi-scale mesh aggregation network for MRI brain tumor image segmentation; one limitation of their approach is that the 2D network cannot fully leverage the details within the three spatial dimensions of 3D volume images. Lei et al. [10] employed a sparse constrained level set method for brain tumor segmentation, implementing it on the BraTS 2017 dataset, and achieved higher accuracy compared to other methods. Shree and Kumar [11] utilized MR data, extracted features using a grey-level co-occurrence matrix (GLCM), and applied a discrete wavelet transform (DWT) with a region-growing segmentation method, achieving an accuracy of 98.02%. Mamatha et al. [12] introduced a graph-theory-based segmentation method in which a weighted directed graph is constructed; each pixel in the image is represented as a node, and paths are obtained for the detection of MR brain tumors before the segmentation process. They applied pre-processing steps to enhance image quality and achieved favorable results. Balamurugan and Gnanamanoharan [13] present a hybrid deep convolutional neural network (DCNN) with an enhanced LuNet classifier. The primary goal is to precisely locate and classify MRI brain tumors as glioma or meningioma. The preprocessing stage uses a Laplacian of Gaussian (LoG) filter, while a fuzzy c-means with Gaussian mixture model (FCM-GMM) algorithm is introduced for segmentation. The extended LuNet algorithm is then applied for data division, and VGG16 feature extraction yields thirteen categorical features. Hossain et al. [14] proposed a method that leverages lightweight deep learning models, namely MicrowaveSegNet for precise brain tumor segmentation and BrainImageNet for accurate image classification. The research integrates advanced computational techniques for efficient brain tumor analysis, and the utilization of a portable sensor-based microwave imaging system adds a dimension of flexibility to the diagnostic process, showcasing the potential impact of this innovative methodology in the

164 International Journal of Electrical and Computer Engineering Systems


field of medical imaging and brain tumor research. The proposed approach of [15] combines Adam sewing training based optimization with UNet++ (AdamSTBO+UNet++) for MRI brain tumor segmentation, and Adam salp water wave optimisation with a deep convolutional neural network (AdamSWO-DCNN) for classification. The introduction of AdamSTBO, an adaptation of the Adam optimizer integrated with the update function of the sewing training based optimization (STBO) algorithm, signifies a distinctive advancement in optimization strategies. Ansari [16] explores automated support systems for brain tumor detection using MRI, leveraging soft computing and machine learning algorithms; the study proposes a strategy utilizing a fuzzy clustering algorithm and a neural network system to identify brain tumor cells in their early stages. Ullah et al. [17] applied a statistical approach to enhance image quality and thereby improve classification performance; for classification, they used a discrete wavelet transform to extract features from MRI images and categorized them into malignant and benign tumor classes with deep neural networks. However, the limitations of this approach include its incompatibility with larger datasets and the longer execution time required. Amin et al. [18] applied a fusion technique using the discrete wavelet transform (DWT) on MRI images. They employed a partial differential diffusion filter to remove noise and performed tumor segmentation using a global thresholding method. The segmented image was then passed to a proposed CNN model for classification into tumor and non-tumor regions. Their analysis revealed that fusion images provide superior results, and the method was extended to PET and CT images. However, a drawback was noted: the fusion sometimes produced distorted images, which affected the classification process.

While these studies do not directly resolve all the highlighted problems, each contributes valuable insights that could be leveraged to address the identified challenges. Techniques such as improved segmentation methods, utilization of multiple modalities, network enhancements, and preprocessing stages are all potential avenues to explore in minimizing classification errors and leveraging multi-modality imaging for more accurate tumor segmentation.

3. PROBLEM STATEMENT

The critical stage of brain tumor identification is a vital task to avoid severe brain issues. Several techniques have been developed to discover brain abnormalities through brain images in a precise manner. However, image classification and segmentation remain the most challenging and essential tasks for medical images. Various segmentation techniques are applied to locate brain tumors, but they come with certain drawbacks and challenges, which are listed below.

• The classification error rate in brain tumor segmentation needs to be minimized.
• The limitation of using single-modality MRI images for tumor segmentation.

These are the major challenges of existing methods that motivate our research on segmentation and classification. This paper presents a suitable method to detect brain tumors more accurately and effectively.

4. MATERIAL AND METHOD

The aim of the research is to analyze radiologists' diagnoses using a deep learning model for classification and a hybrid approach for segmentation. The primary goal of the proposed method is to locate tumor-affected tissues in a more precise and efficient manner. The CNN approach is applied for the classification of tumor and no-tumor classes. The segmentation process partitions the tumor-affected tissues from healthy brain tissues, a crucial step that practitioners perform for clinical aid. The designed deep learning model, based on radiologists' assumptions, undergoes thorough analysis to achieve effective performance and accuracy surpassing existing approaches. The techniques are implemented and experimented with on real MRI images collected from reputable hospitals, as shown in Fig. 2.

1. In the pre-processing step, 2D MRI images are normalized to a scale of 1.0/255.0 using normalization techniques and resized to 224*224 to reduce computational complexity.
2. The 2D CNN model is applied to the training images to perform classification into tumor and no-tumor.
3. After classification, the tumor region is located using a hybrid approach for tumor segmentation.
4. The classification accuracy and segmentation similarity coefficients are evaluated.

Fig. 2. Schematic representation of proposed system
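The pre-processing in step 1 can be sketched as follows. This is a minimal, self-contained illustration assuming a nearest-neighbour resize; the function name `preprocess` and the synthetic input are ours, not the paper's actual implementation.

```python
def preprocess(image, size=224):
    """Normalize 8-bit pixels to [0, 1] (the 1.0/255.0 scaling) and
    resize to size x size with nearest-neighbour sampling; an
    illustrative stand-in for a library resize."""
    h, w = len(image), len(image[0])
    norm = [[px / 255.0 for px in row] for row in image]
    # Nearest-neighbour: map each output coordinate back to a source pixel.
    return [[norm[r * h // size][c * w // size] for c in range(size)]
            for r in range(size)]

# Synthetic 512 x 512 grayscale "scan" with constant intensity 128.
scan = [[128] * 512 for _ in range(512)]
out = preprocess(scan)
print(len(out), len(out[0]))   # 224 224
print(out[0][0])               # 0.5019607843137255
```

In the real pipeline the resized, normalized arrays are what the 2D CNN consumes, so this step also bounds every input feature to [0, 1].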



4.1. DATASET SPECIFICATION

The dataset was obtained from distinct hospitals and encompasses three categories of brain tumor cases, namely atypical meningioma, glioma, and schwannoma, alongside normal brain images. Initially stored in the DICOM format, the images underwent preprocessing, during which they were converted into the JPG format. The collected dataset consists of various MRI sequences for further pre-processing. Following preprocessing, the images were categorized into two groups, with tumors and no-tumor, facilitating further analysis. The dataset comprises a total of 884 MRI brain images: 624 images with tumors and 260 images of normal brains. The brain MRI dataset is divided into training and test sets, with 707 images for training and 177 for testing. Each image has been resized to 224 x 224 pixels. A summary of the dataset specifications is provided in Table 1.

Table 1. Dataset specification

Data | Specification
Dataset source | Safdarjung, Medanta and SGPGI Hospitals
Image format | DICOM
Size of images | 224 x 224
No. of classes | Two
Name of classes | Tumor, No-tumor
Name of sequences | T1, T2, FLAIR, T1+C
Train | 80%
Test | 20%

Table 2 presents the demographic details of patients with the three brain tumor categories, atypical meningioma, glioma, and schwannoma, along with a normal brain MRI case. The patient data has been collected from radiologists, accompanied by authorized reports and the consent of both patients and their attendants.

Table 2. Demographic details of patients

Patient | Hospital | Age | Gender | Category
Patient#1 | Medanta | 58 | Female | Glioma
Patient#2 | SGPGI | 54 | Male | Schwannoma
Patient#3 | Safdarjung | 62 | Female | Atypical Meningioma
Patient#4 | SGPGI | 45 | Female | Normal brain

4.2. MRI imaging sequences

All MRI sequences exhibit diverse properties, characteristics, and distinct appearances, which play a crucial role in the analysis and grading of tumors. These MR sequences rely on the application of radiofrequency pulses and gradients to capture detailed tissue information and intensity variations. For instance, FLAIR images are valuable for assessing lesions near the ventricles and distinguishing them from cerebrospinal fluid (CSF).

In the T2 sequence, which is often used in the evaluation of inflammatory processes, many diseases manifest an increase in tissue fluid content. Consequently, these lesions appear brighter, and T2 images are employed, much like T1-weighted imaging, to assess anatomical structures and most lesions throughout the body. However, T2-weighted imaging may not be the optimal choice for evaluating lesions around the brain ventricles, as both lesions and CSF can have a similar appearance in this sequence.

On the other hand, T1-weighted images with contrast enhancement (T1+C), achieved by injecting a contrast material such as gadolinium, serve to increase the T1 signal from moving blood. These MRI sequences are discussed in more detail below in the context of the specific images used.

4.2.1. Fluid-Attenuated Inversion Recovery (FLAIR) image

The FLAIR image in MRI is notable for its similarity to T2-weighted imaging regarding brain tissue intensities, with the key distinction being the appearance of cerebrospinal fluid (CSF) as dark rather than bright. It achieves this by selectively suppressing the signals from fluids through the use of long echo (TE) and repetition (TR) times.

In FLAIR images, grey matter appears brighter than white matter, and CSF stands out as dark. This characteristic makes FLAIR sequences a valuable tool for the assessment of various brain disorders, including infarction, hemorrhage, and head traumas. Additionally, FLAIR imaging has the added benefit of suppressing the cerebrospinal fluid signal. An illustrative example of the axial view of a FLAIR image is depicted in Fig. 3.

Fig. 3. Axial view of FLAIR sequence (tumor bright, CSF dark)

4.2.2. T1 image

In the T1 sequence, tissue intensities reflect T1, the longitudinal relaxation time. On T1 scans, fatty tissue appears bright, but CSF, which contains no fat, appears dark. The T1 sequence uses short TE and TR times, which darkens the CSF. The axial view of the T1 image is represented in Fig. 4.

Fig. 4. Axial view of T1 sequence (tumor dark, CSF dark)



4.2.3. T2 image

The T2-weighted sequences use long TE and TR times, making CSF appear very bright, while bone and air appear dark. As part of the inflammatory process, most diseases exhibit increased fluid content, causing lesions to appear bright. The sagittal view of the T2 image is shown in Fig. 5.

Fig. 5. Sagittal view of T2 sequence (CSF bright, tumor dark)

4.2.4. T1+C image

In the T1+C sequence, a contrast material is injected, which increases the T1 signal from moving blood and thus allows the detection of highly vascular lesions. Tissues have the same intensities as in T1, except that moving blood is bright. It is useful for identifying hypervascular lesions such as haemangiomas and lymphangiomas. The axial view of the T1+C image is shown in Fig. 6.

Fig. 6. Axial view of T1+C sequence (tumor bright, CSF bright)

The properties of the MRI sequences are compared in Table 3.

Table 3. Comparison between MRI sequences

MRI Sequence | CSF | White Matter | Grey Matter | TE/TR
T1 | Hypointense | White | Grey | Short/Short
T2 | Hyperintense | Grey | White | Long/Long
FLAIR | Hypointense | Grey | White | Very Long/Very Long
T1+C | Hyperintense | White | White | Long/Long

MRI scans can be viewed in three planes, namely sagittal, axial, and coronal, allowing medical professionals to study the morphology of tumors as shown in Fig. 7.

Fig. 7. (a) Sagittal, (b) Axial, and (c) Coronal plane

4.3. Convolutional neural network

The architecture of the CNN model is shown in Fig. 8. The deep learning process consists of 2D convolution and max-pooling layers.

Fig. 8. Representation of 2D CNN Model

MRI datasets are utilized, encompassing training and validation approaches. The images undergo normalization and augmentation processes, and the processed dataset is then fed into the 2D model. Finally, the model produces binary classification results, which are used to categorize MRI brain images into tumor and no-tumor categories.

To improve the performance of the CNN model, the dataset has been normalized for feature scaling. The process begins with image pre-processing, which includes the augmentation of images. After that, data generators are created, and random patches extracted from MR images are fed in as input. The model has a total of 11 layers with varying numbers of neurons, including convolution layers, batch normalization layers, max-pooling layers, LeakyReLU layers, and dense layers. The convolutional pipeline is completed with SoftMax and classification layers. The architecture of the CNN network layers is shown in Fig. 9.

Fig. 9. Architecture of 2D CNN layers

The layers can be enumerated as follows:

Input Layer: The model takes grayscale images with dimensions (150, 150, 1) as input.



Convolutional Blocks:

First Block: Applies a convolution with 8 filters, kernel size (5, 5), and LeakyReLU activation, followed by a MaxPooling2D layer (2, 2).

Second Block: Applies a convolution with 8 filters, kernel size (3, 3), and LeakyReLU activation, followed by a MaxPooling2D layer (2, 2).

Third Block: Applies a convolution with 16 filters, kernel size (3, 3), and LeakyReLU activation.

Fourth Block: Applies a convolution with 16 filters, kernel size (3, 3), and LeakyReLU activation, followed by BatchNormalization and a MaxPooling2D layer (2, 2).

Flatten Layer: Converts the 2D feature maps into a 1D vector.

Fully Connected Layers: A hidden dense layer of 10 neurons with LeakyReLU activation, and an output dense layer of 2 neurons with Softmax activation, representing the output classes for binary classification.

Optimizer and Compilation: Uses the Adam optimizer with a learning rate of 0.001, beta_1 of 0.9, and beta_2 of 0.999. The model is compiled with categorical cross-entropy loss and accuracy as the metric.

Data Augmentation: Utilizes the ImageDataGenerator for real-time data augmentation during training.

Training Configuration: Specifies 100 epochs and a batch size of 40 for training. The architecture is shown in Fig. 10.

Fig. 10. CNN neural network layers

4.4. Hybrid approach for segmentation

To locate tumors, a hybrid approach is applied: first, graph-based segmentation is used, and thereafter the threshold method is applied to the segmented MRI brain images.

4.4.1. Graph-based

The graph-based method [19] was originally introduced as a greedy approach to image segmentation based on predicates and has been utilized in various fields of image processing. The predicate P decides whether an edge forms a segmentation boundary. Fast minimum-spanning-tree-based clustering on the image grid, producing a multichannel segmentation, is the graph-based concept used in the proposed method and can be defined as:

P(a1, a2) = true, if Diff(a1, a2) > Dint(a1, a2); false, otherwise (1)

P(a1, a2) is a binary indicator function in Eq. (1) that outputs true if the variation between modules a1 and a2, denoted by Diff(a1, a2), is greater than the internal variation within a1 and a2, represented by Dint(a1, a2). Otherwise, it outputs false.

Diff(a1, a2) = min w(vi, vj), over vi ∈ a1, vj ∈ a2, (vi, vj) ∈ E (2)

Diff(a1, a2) in Eq. (2) represents the variation between two modules. It is the minimum weight of an edge connecting a node vi in module a1 to a node vj in module a2, where w(vi, vj) is the weight associated with that edge.

Max(a) = max w(e), over e ∈ MST(a, E) (3)

Max(a) in Eq. (3) is the maximum weight of an edge in the Minimum Spanning Tree (MST) of module a, where w(e) is the function that assigns a weight to edge e in the graph.

Dint(a1, a2) = min(Max(a1) + τ(a1), Max(a2) + τ(a2)) (4)

Dint(a1, a2) in Eq. (4) calculates the internal variation within modules a1 and a2. It is the minimum of the maximum internal edge weights of the modules, each relaxed by a threshold factor τ(a).

τ(a) = k / |a| (5)

In Eq. (5), k is a constant parameter, and |a| denotes the cardinality (number of elements) of the set a.

4.4.2. Threshold

The threshold method is a very simple technique used to select a threshold value T. The RGB image is converted into a grayscale image, which is then converted into a binary image to segment the tumor region. The threshold value T is obtained from the grayscale image and lies within the range 0 to 255. The thresholding can be written as in Eq. (6):

m(i, j) = 1, if k(i, j) ≥ T; 0, otherwise (6)

where k(i, j) is the grey-converted image and m(i, j) is the resulting binary image.

In Fig. 11, the proposed hybrid method is shown combining both steps to locate the tumor region. The selected RGB image is scaled and segmented to partition affected tissue from the MR brain image, and in the final stage a morphological operation is applied.

Fig. 11. Graph-based and threshold segmentation method
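The merge predicate of Eqs. (1)-(5) and the thresholding of Eq. (6) can be illustrated with a small, self-contained sketch. The helper names and the toy edge weights are ours; a real implementation builds the pixel-grid graph and the per-component MSTs over image intensity differences, as a graph-segmentation library does.

```python
def diff(a1, a2, w):
    """Eq. (2): minimum-weight edge connecting module a1 to module a2."""
    return min(w[(vi, vj)] for vi in a1 for vj in a2 if (vi, vj) in w)

def mst_max(edges):
    """Eq. (3): Max(a), the largest edge weight in the module's MST.
    In this toy example the internal edges already form a tree."""
    return max(edges) if edges else 0.0

def tau(a, k=300.0):
    """Eq. (5): threshold factor tau(a) = k / |a|."""
    return k / len(a)

def dint(a1, e1, a2, e2, k=300.0):
    """Eq. (4): internal variation of the pair of modules."""
    return min(mst_max(e1) + tau(a1, k), mst_max(e2) + tau(a2, k))

def predicate(a1, e1, a2, e2, w, k=300.0):
    """Eq. (1): True (keep the boundary) iff Diff > Dint."""
    return diff(a1, a2, w) > dint(a1, e1, a2, e2, k)

# Two 3-pixel modules with hand-picked internal MST edge weights
# and one cheap cross edge between them.
a1, e1 = [0, 1, 2], [2.0, 3.0]
a2, e2 = [3, 4, 5], [1.0, 2.5]
w = {(0, 3): 200.0}
print(predicate(a1, e1, a2, e2, w))  # True: 200 > min(3+100, 2.5+100)

# Eq. (6): binary thresholding of a tiny grayscale image with T >= 80.
T = 80
gray = [[10, 90], [200, 40]]
binary = [[1 if px >= T else 0 for px in row] for row in gray]
print(binary)                        # [[0, 1], [1, 0]]
```

Because the cross edge (weight 200) exceeds the internal variation of either module plus its k/|a| allowance, the predicate keeps the two modules separate; a smaller cross edge would merge them.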



The hybrid approach algorithm is given as:

Start
[Step 1] Input MRI brain image (I) from datasets
[Step 2] Check for the presence of a tumor
         (Classification)
[Tumor Present]
[Step 3] Partition Image (I1, I2, ..., In)
[Step 4] Determine the number of partitions
         (n) using felzenszwalb()
[Step 5] Cluster the Partition Images based on
         Image grid (k) [300 <= k <= 1000]
[Step 6] Set Parameters (S):
         - Image (Height, Width), Scale: 350
         - Sigma: 0.2, Min_Size: 20
         - Threshold T >= 80
[Step 7] Compute the approximate distance
         (D_T) of Pixels of Tumor Image
[Step 8] Return the final segmentation result
[No Tumor]
[Step 9] Return the result "No Tumor Detected"
End

In the algorithm, MRI brain images are selected as input from the dataset for tumor segmentation. The input image is partitioned into 'n' segments using the felzenszwalb() module. The partitioned image is clustered based on the image grid (k) within the range 300 <= k <= 1000, and the parameters (S) are set for image (I): scale indicates the size of the clusters, sigma controls the smoothing of the image, min_size defines the minimum segment size, and the threshold T >= 80 is set for segmentation. After setting the parameters, the approximate distance (D_T) of the pixels of the tumor image is computed, and finally the segmentation result is returned.

The observation results for the testing images are shown column-wise in Fig. 12.

Fig. 12. Brain tumor segmentation using a hybrid approach: (a) Original images, (b) Graph-based segmentation, (c) Hybrid approach

5. EVALUATION METRICS AND RESULTS

The proposed classification and segmentation method is implemented on a computer with an Intel Core i5 11th-generation processor with 8 GB RAM, operating at a frequency of 2.40 GHz, and an NVIDIA GeForce GTX GPU, using the Python programming language. The results of the research work are discussed below.

To calculate accuracy, a confusion matrix is created for the classification model, and the segmentation outcomes of the proposed method are evaluated.

Accuracy = (TP + TN) / (TP + TN + FP + FN) (7)

Recall = TP / (TP + FN) (8)

Precision = TP / (TP + FP) (9)

The confusion matrix includes True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN) counts, which are essential for assessing classification accuracy, recall, and precision. Additionally, the BF (Boundary F1) score and the Jaccard coefficient are employed to assess segmentation performance, as outlined in Eq. (7)-(11).

The BF score, a contour matching score, is utilized to evaluate image segmentation techniques. In this scenario, the two groups considered are the binary mask of objects and the segmentation result obtained from the hybrid approach.

BFscore = 2 × p × r / (r + p) (10)

In the provided context, S(x, y) represents the input image, and G(x, y) is the binary mask depicting the segmentation result. The variables r and p denote recall and precision, respectively.
Jaccard(A, B) = |intersection(A, B)| / |union(A, B)| (11)
where A is the input image and B is the ground truth
image.
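A toy computation of the metrics in Eqs. (7)-(11), assuming binary labels and masks flattened to 1-D lists. The function names are illustrative; note that the true BF score evaluates precision and recall over boundary pixels within a distance tolerance, which this simplified sketch omits, keeping only the F1 combination of Eq. (10).

```python
def confusion_metrics(y_true, y_pred):
    """Eqs. (7)-(10) from the four confusion-matrix counts."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)                        # Eq. (7)
    recall = tp / (tp + fn)                                   # Eq. (8)
    precision = tp / (tp + fp)                                # Eq. (9)
    f1 = 2 * precision * recall / (precision + recall)        # Eq. (10) form
    return accuracy, recall, precision, f1

def jaccard(a, b):
    """Eq. (11): |A ∩ B| / |A ∪ B| over flattened binary masks."""
    inter = sum(x == 1 and y == 1 for x, y in zip(a, b))
    union = sum(x == 1 or y == 1 for x, y in zip(a, b))
    return inter / union

y_true = [1, 1, 1, 0, 0, 1, 0, 1]
y_pred = [1, 1, 0, 0, 1, 1, 0, 1]
acc, rec, prec, f1 = confusion_metrics(y_true, y_pred)
print(acc, rec, prec)                    # 0.75 0.8 0.8
print(round(jaccard(y_true, y_pred), 4)) # 0.6667
```

With 4 TP, 2 TN, 1 FP, and 1 FN, accuracy is 6/8 = 0.75, while recall and precision are both 4/5 = 0.8; the Jaccard coefficient counts 4 overlapping positives out of 6 positions marked positive in either mask.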

5.1. Results

5.1.1. Performance of classification


Fig. 14. Tumor and no-tumor classification results
By examining the study depicted in Fig. 13 (a, b, c). It
can be observed that the training accuracy acquired at 5.1.2. Performance measure of segmentation
98.01% and the validation accuracy at 98%. The data
was split into 80 % for training and 20% for validation. To assess and scrutinize the performance of the pro-
posed hybrid method for tumor segmentation, a com-
parison is made with the ground truth image. Five im-
ages obtained are utilized as test images.
Table 4 shows results with Bfscore, and Jaccard, in-
dicating the similarity coefficients and segmentation
outcomes. The results for each test image demonstrate
satisfactory performance.

Table 4. Results based on similarity coefficients (a)


Original image, (b) Ground truth image, (c) Segmentation
using a hybrid approach, (d) Bfscore, and (e) Jaccard
Ground Hybrid
Input Bfscore Jaccard
Truth approach
Fig. 13. (a) Training and validation accuracy

0.92236 0.88438

0.88353 0.88912

0.56858 0.6722

Fig. 13. (b) Training and validation loss 1 0.93862

0.87662 0.72371

(a) (b) (c) (d) (e)

6. DISCUSSION

The successful performance of the proposed system


and comparative results are summarized in Table 5.
According to Table 5, Zhang et al. [20] employed
Fig. 13. (c) True Positive and False Positive Rate back propagation neural network (BPNN) classification
following the enhancement of image quality using 2D
The classification results of tumor and no-tumor are DWT Decomposition. They achieved a classification
represented in Fig. 14. accuracy of 98.10% but were limited to consist of T2-

170 International Journal of Electrical and Computer Engineering Systems


weighted MR brain images, with only 66 images for • The utilization of multimodal MRI sequence imag-
training and testing. Notably, they did not incorporate es is considered for the classification model.
any segmentation technique to locate tumor regions. • Implementation of 2D CNN to showcase high clas-
Selvaraj et al. [21] achieved an accuracy of 96%, but sification proficiency.
they used a support vector machine classifier as a
validation technique. Al Kadi et al. [22] focused on ex- • Achievement of 98.89% accuracy in the proposed
tracting histopathological features, without applying classification.
any segmentation method, and achieved an accuracy • Application of a hybrid approach for comparing
of 92% accuracy using a fuzzy clustering machine for test image results.
classification. In contrast, Muezzinoglu et al. [23] pro- • Evaluation of similarity coefficients to yield robust
posed the ResNet50 transfer learning technique, clas- segmentation results, with Bfscore registering a
sifying multiple types of brain tumors with a 98% ac-
high value of 1 and Jaccard with 93.86%.
curacy. Georgiardis et al. [24] attained an accuracy of
93%, though segmentation was not part of their study. • However, some drawbacks of our proposed meth-
Considering the studies outlined in Table 5 it is evident od include:
that the proposed method in this paper boasts mini- • The need for more cases of brain tumor for com-
mum computational complexity and demonstrates prehensive validation.
commendable segmentation accuracy.
• Suboptimal performance of the segmentation meth-
The essential stages of the research are as follows: od when applied to non-contrast MRI brain images.

Table 5. Performance comparison between the proposed method and previous work

| Author | Total images | Classification method | Classifier | Segmentation | Accuracy | F1 Score | Recall | Precision |
|---|---|---|---|---|---|---|---|---|
| Zhang et al. [20] | 66 | 2D-DWT level 3 decomposition, DWT | BPNN | NA | 98.02% | x | x | x |
| Selvaraj et al. [21] | 1100 | GLCM-4 | LS-SVM, KNN | NA | 96% | x | x | x |
| Al Kadi et al. [22] | 320 | Histopathological features | FCM | NA | 92% | x | x | x |
| Muezzinoglu et al. [23] | 3264 | ResNet50 | Multi feature selector and KNN | NA | 98.10% | 98.01% | 98.15% | 97.91% |
| Georgiadis et al. [24] | 67 | Histogram-4, LCM-22, GRLM-10 | LSFT-PNN | NA | 93%, 83.33% | 75.65% | 79% | 88% |
| Proposed Method | 884 | CNN | Binary | Graph-based and Threshold | 98.89% | - | 98.14% | 98.43% |
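The proposed segmentation pairs a Felzenszwalb-style graph-based step [19] with threshold segmentation. As a simplified sketch of that hybrid idea (not the paper's actual pipeline): Otsu's method picks the intensity threshold, and a union-find pass then merges adjacent foreground pixels into connected regions, standing in for the full 2-D graph-based stage. The intensity profile below is illustrative only.

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total, sum_all = len(pixels), sum(i * h for i, h in enumerate(hist))
    sum_b = w_b = best_t = 0
    best_var = -1.0
    for t in range(levels):
        w_b += hist[t]            # background weight
        if w_b == 0:
            continue
        w_f = total - w_b         # foreground weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b, m_f = sum_b / w_b, (sum_all - sum_b) / w_f
        var = w_b * w_f * (m_b - m_f) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def components(mask):
    """Union-find connected components over adjacent foreground pixels (1-D graph)."""
    parent = list(range(len(mask)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(len(mask) - 1):
        if mask[i] and mask[i + 1]:        # edge between neighboring foreground pixels
            parent[find(i + 1)] = find(i)
    comps = {}
    for i, m in enumerate(mask):
        if m:
            comps.setdefault(find(i), []).append(i)
    return list(comps.values())

profile = [12, 11, 10, 180, 190, 185, 12, 11, 200, 210]
t = otsu_threshold(profile)
mask = [1 if p > t else 0 for p in profile]
print(components(mask))  # two bright regions: [[3, 4, 5], [8, 9]]
```

In the paper's setting the same two stages operate on 2-D MR slices, where candidate tumor regions survive thresholding and the graph-based step delineates their extent.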

7. CONCLUSION AND FUTURE WORK

The proposed segmentation technique, the hybrid approach, aims to locate tumor regions more accurately while achieving high classification accuracy. The presented work utilized an MRI brain tumor dataset, achieving a notable 98.89% accuracy using a 2D CNN model. Segmentation similarity coefficients, including a Bfscore of 1 and a Jaccard coefficient of 93.86%, underscore the effectiveness of our approach in tumor detection and segmentation. This method offers a promising avenue for future research, with plans to expand the dataset, incorporate more samples, and explore additional techniques for enhancing brain tumor localization and diagnosis.

8. REFERENCES

[1] S. Chang, "Brain Tumor: Introduction", www.cancer.net/cancer-types/brain-tumor (accessed: 2023)

[2] U. Asim, E. Iqbal, A. Joshi, F. Akram, K. N. Choi, "Active Contour Model for Image Segmentation with Dilated Convolution Filter", IEEE Access, Vol. 9, 2021, pp. 168703-168714.

[3] D. Deb, S. Roy, "Brain tumor detection based on hybrid deep neural network in MRI by adaptive squirrel search optimization", Multimedia Tools and Applications, Vol. 80, 2021, pp. 2621-2645.

[4] R. Ranjbarzadeh, K. A. Bagherian, G. S. Jafarzadeh et al., "Brain tumor segmentation based on deep learning and an attention mechanism using MRI multi-modalities brain images", Scientific Reports, Vol. 11, 2021, p. 10930.

[5] M. U. Rehman, S. Cho, J. Kim, K. T. Chong, "BrainSeg-Net: Brain Tumor MR Image Segmentation via Enhanced Encoder-Decoder Network", Diagnostics, Vol. 11, 2021, p. 169.

[6] G. Karayegen, M. F. Aksahin, "Brain Tumor Prediction on MR Images with Semantic Segmentation by using Deep Learning Network and 3D Imaging of Tumor Region", Biomedical Signal Processing and Control, Vol. 66, 2021, p. 102458.

Volume 15, Number 2, 2024 171

[7] H. Saleem, A. R. Shahid, B. Raza, "Visual Interpretability in 3D Brain Tumor Segmentation Network", Computers in Biology and Medicine, Vol. 133, 2021, p. 104410.

[8] A. Khosravanian, M. Rahmanimanesh, P. Keshavarzi, S. Mozaffari, "Fast Level Set Method for Glioma Brain Tumor Segmentation based on Superpixel Fuzzy Clustering and Lattice Boltzmann Method", Computer Methods and Programs in Biomedicine, Vol. 198, 2021, p. 105809.

[9] Y. Zhang, Y. Lu, W. Chen, Y. Chang, H. Gu, B. Yu, "MS-MANet: A Multi-scale Mesh Aggregation Network for Brain Tumor Segmentation", Applied Soft Computing, Vol. 110, 2021.

[10] X. Lei, X. Yu, J. Chi, Y. Wang, J. Zhang, C. Wu, "Brain Tumor Segmentation in MR Images using a Sparse Constrained Level Set Algorithm", Expert Systems with Applications, Vol. 168, 2020, p. 114262.

[11] N. V. Shree, T. N. R. Kumar, "Identification and Classification of Brain Tumor MRI Images with Feature Extraction using DWT and Probabilistic Neural Network", Brain Informatics, Vol. 5, 2018, pp. 23-30.

[12] S. K. Mamatha, H. K. Krishnappa, N. Shalini, "Graph Theory Based Segmentation of Magnetic Resonance Images for Brain Tumor Detection", Pattern Recognition and Image Analysis, Vol. 32, 2022, pp. 153-161.

[13] T. Balamurugan, E. Gnanamanoharan, "Brain tumor segmentation and classification using hybrid deep CNN with LuNetClassifier", Neural Computing and Applications, Vol. 35, 2023, pp. 4739-4753.

[14] A. Hossain, M. T. Islam, T. Rahman, M. E. H. Chowdhury, A. Tahir, S. Kiranyaz, K. Mat, G. K. Beng, M. S. Soliman, "Brain Tumor Segmentation and Classification from Sensor-Based Portable Microwave Brain Imaging System Using Lightweight Deep Learning Models", Biosensors, Vol. 13, No. 3, 2023, p. 302.

[15] P. S. Bidkar, R. Kumar, A. Ghosh, "Hybrid Adam Sewing Training Optimization Enabled Deep Learning for Brain Tumor Segmentation and Classification using MRI Images", Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, Vol. 11, No. 5, 2023, pp. 1921-1936.

[16] A. S. Ansari, "Numerical Simulation and Development of Brain Tumor Segmentation and Classification of Brain Tumor Using Improved Support Vector Machine", International Journal of Intelligent Systems and Applications in Engineering, Vol. 11, No. 2s, 2023, pp. 35-44.

[17] Z. Ullah, M. U. Farooq, S. H. Lee, D. An, "A hybrid image enhancement based brain MRI images classification technique", Medical Hypotheses, Vol. 143, 2020, p. 109922.

[18] J. Amin, M. Sharif, N. Gul, M. Yasmin, S. A. Shad, "Brain tumor classification based on DWT fusion of MRI sequences using convolutional neural network", Pattern Recognition Letters, Vol. 129, 2020, pp. 115-122.

[19] P. F. Felzenszwalb, D. P. Huttenlocher, "Efficient Graph-Based Image Segmentation", International Journal of Computer Vision, Vol. 59, 2004, pp. 167-181.

[20] Y. Zhang, Z. Dong, L. Wu, S. Wang, "A Hybrid Method for MRI Brain Image Classification", Expert Systems with Applications, Vol. 38, No. 8, 2011, pp. 10049-10053.

[21] H. Selvaraj, S. T. Selvi, D. Selvathi, L. Gewali, "Brain MRI Slices Classification using Least Squares Support Vector Machine", International Journal of Intelligent Computing in Medical Sciences and Image Processing, Vol. 1, No. 1, 2007, pp. 21-33.

[22] O. Al-Kadi, "A Multiresolution Clinical Decision Support System based on Fractal Model Design for Classification of Histological Brain Tumours", Computerized Medical Imaging and Graphics, Vol. 41, 2015, pp. 67-79.

[23] T. Muezzinoglu et al., "PatchResNet: Multiple Patch Division-Based Deep Feature Fusion Framework for Brain Tumor Classification Using MRI Images", Journal of Digital Imaging, Vol. 36, 2023, pp. 973-987.

[24] P. Georgiadis, D. Cavouras, I. Kalatzis, A. Daskalakis, G. C. Kagadis, K. Sifaki, M. Malamas, G. Nikiforidis, E. Solomou, "Improving Brain Tumor Characterization on MRI by Probabilistic Neural Networks and Non-Linear Transformation of Textural Features", Computer Methods and Programs in Biomedicine, Vol. 89, No. 1, 2008, pp. 24-32.

172 International Journal of Electrical and Computer Engineering Systems
