Research Article
Efficient Breast Cancer Diagnosis from Complex Mammographic Images Using Deep Convolutional Neural Network
Received 17 December 2022; Revised 15 February 2023; Accepted 23 February 2023; Published 2 March 2023
Copyright © 2023 Hameedur Rahman et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Breast cancer, which poses a significant threat to women's health and contributes to many fatalities, is a major focus of medical image analysis. An early and precise diagnosis of breast cancer through digital mammograms can significantly improve the accuracy of disease detection. Computer-aided diagnosis (CAD) systems must analyze the medical imagery and perform detection, segmentation, and classification processes to assist radiologists with accurately detecting breast lesions. However, early-stage cancer detection in mammography is certainly difficult. The deep convolutional neural network has demonstrated exceptional results and is considered a highly effective tool in the field. This study proposes a computational framework for diagnosing breast cancer using a ResNet-50 convolutional neural network to classify mammogram images. To train and classify the INbreast dataset into benign or malignant categories, the framework utilizes transfer learning from the ResNet-50 CNN pretrained on ImageNet. The results revealed that the proposed framework achieved an outstanding classification accuracy of 93%, surpassing other models trained on the same dataset. This novel approach facilitates early diagnosis and classification of malignant and benign breast cancer, potentially saving lives and resources. These outcomes highlight that deep convolutional neural network algorithms can be trained to achieve highly accurate results on various mammograms, along with the capacity to enhance medical tools by reducing the error rate in screening mammograms.
cancer tissues are highly malignant and pose a severe danger to patients' lives, as they can spread to other vital organs [12-15]. The growth of mammary cells can lead to tumors in women. Tumors are classified as benign or malignant based on the area, size, and location, using the BI-RADS scores [16, 17]. Benign tumors are not life-threatening and can be treated through medication to prevent further growth [17, 18]. Malignant tumors, on the other hand, can spread to other parts of the body via the lymphatic system or blood, making them much more dangerous [19-22]. This uncontrolled cell proliferation in the breast leads to the formation of malignant tumors, which can only be treated through surgery or radiation therapy [23, 24].

Early detection of breast cancer is crucial for accurate diagnosis and analysis, and many researchers are turning to biomedical imaging to aid specialist radiologists. Various methods such as MRI, mammography, and ultrasound are utilized to identify breast carcinoma [25, 26]. However, the large volume of images challenges radiologists in identifying potential cancerous areas. Therefore, an efficient automated method is needed, and computer-aided diagnostic (CAD) systems are being utilized to aid radiologists in detecting cancerous breast tumors [27, 28].

Increasingly, deep learning techniques are being applied to medical imaging to develop automated computer-aided diagnosis (CAD) systems [29-35]. Deep learning is considered the most effective method for detecting and classifying medical images [29, 30]. With these techniques, the mammogram image's significant low- to high-level hierarchical features can be extracted directly, making deep learning the most reliable medical imaging method [29]. Several CAD systems based on deep learning have been developed for breast lesion detection, and they outperform traditional systems [36]. Accurate detection of breast lesions is crucial for improving breast cancer diagnosis [29, 31]. However, detecting these lesions can be challenging due to their varying texture, shape, position, and size. Deep learning and image processing methods have been proposed to overcome the limitations of conventional technology, which cannot perform automated identification [29]. The final stage in the CAD model is the classification of breast lesions into benign or malignant, which is important in assessing the correctness of the diagnosis [30].

Currently employed methods for detecting breast cancer are slow, costly, and require extra effort to run the radiology equipment. Accurately detecting breast cancer automatically from an image processing perspective is not easy. Hence, early diagnosis and proper treatment are deemed crucial. Therefore, an efficient screening system and automation are necessary for breast cancer detection for the following reasons [12]: incorrect diagnoses and predictions, tumors appearing in low-contrast areas, unreliable human diagnoses, overburdening of radiologists, human error in diagnosis, the need for large training data to avoid overfitting in deep learning algorithms, high computational complexity, and long processing times for accurate tumor identification.

A novel breast cancer detection system is proposed with an improved architecture that integrates a deep convolutional neural network (DCNN) and breast mammogram images to address the previous drawbacks mentioned above. The proposed system intends to divide breast tumors into benign and malignant categories. The system's performance is evaluated and compared with existing classification systems using a public mammographic image dataset named INbreast. The new system includes transfer learning to fine-tune the pretrained DCNN, and detailed results from experiments on the INbreast dataset are reported. The system's performance is evaluated using the following metrics: AUC, specificity, accuracy, sensitivity, and F1 score.

The rest of the paper is organized as follows. Section 2 presents the related work. Section 3 provides the proposed approach for breast cancer detection. Section 4 presents the experimental analysis, results, and comparison with existing work. Section 5 concludes this paper and presents future work.

2. Related Work

Breast cancer diagnosis in modern medical procedures often involves using mammography images [37]. A summary of recently developed systems for breast cancer diagnosis using mammogram images is presented in this section.

Structured support vector machine (SSVM) and conditional random field (CRF) are two structured prediction techniques proposed in [38] to classify mass mammograms. Both approaches used potential functions based on deep convolution and belief networks. The results demonstrated that the CRF method outperformed the SSVM method in training and inference time. Authors in [29] utilized four-fold cross-validation on X-ray mammograms from the INbreast dataset to estimate a full-resolution convolutional network (FrCN). It resulted in an F1 score of 99.24%, an accuracy of 95.96%, and a Matthews correlation coefficient (MCC) of 98.96%. In another study [39], the BDR-CNN-GCN approach was proposed by combining a graph convolutional network (GCN) with a basic 8-layer CNN that includes batch normalization and dropout layers. The final BDR-CNN-GCN model was formed by integrating the two-layer GCN with the CNN. This method was tested using the MIAS dataset, and successful results were obtained with a 96.10% accuracy level.

Authors in [40] proposed modifying the YOLOv5 network for identifying and classifying breast cancers, with the algorithm run using specific parameter values. The modified YOLOv5 was compared with a faster RCNN and YOLOv3, achieving an accuracy of 96.50% and an MCC value of 93.50%. The diverse features (DFeBCD) method was proposed by [41], which classified mammograms into two categories: normal and abnormal. They used two classifiers, an emotion learning-inspired integrated classifier (ELiEC) and SVM, with the IRMA mammography dataset. The ELiEC classifier outperformed SVM, achieving an accuracy rate of 80.30%. In [30], a deep-CNN model that utilized transfer learning (TL) was introduced to prevent overfitting when working with small datasets. DDSM, MIAS, BCDR, and INbreast were used to assess its performance. The INbreast dataset achieved an accuracy of 95.5%, the DDSM dataset
8483, 2023, 1, Downloaded from https://onlinelibrary.wiley.com/doi/10.1155/2023/7717712, Wiley Online Library on [09/08/2024]. See the Terms and Conditions (https://onlinelibrary.wiley.com/terms-and-conditions) on Wiley Online Library for rules of use; OA articles are governed by the applicable Creative Commons License
Computational Intelligence and Neuroscience 3
achieved an accuracy level of 97.35%, and the BCDR database achieved a 96.67% accuracy level.

Authors in [42] utilized the lifting wavelet transform (LWT) for extracting features from breast mammograms. The size of the feature vectors was reduced using linear discriminant analysis (LDA) and principal component analysis (PCA). Classification was performed using the moth flame optimization and extreme learning machine (ELM) approach on the MIAS and DDSM datasets, achieving accuracies of 95.80% and 98.76%, respectively. In addition, researchers have also trained the CNN Inception-v3 model on 316 images, resulting in a sensitivity of 0.88, a specificity of 0.87, and an AUC of 0.946 [43]. Furthermore, in [44], a CNN and TL classification method was proposed to evaluate the performance of eight fine-tuned pretrained models. Authors in [45] presented a hybrid classification model using MobileNet, ResNet50, and AlexNet with an accuracy level of 95.6%. In [46], four different CNN architectures (VGG19, InceptionV3, ResNet50, and VGG16) were utilized for model training using 5000 images, while the prediction models were evaluated on 1007 images.

Authors in [47] utilized alpha, geostatistics, and diversity analyses in their proposed breast cancer detection method. They employed the SVM classifier on the MIAS and DDSM databases, which resulted in a detection accuracy level of 96.30%. The SVM classifier and gray level co-occurrence matrix (GLCM) were employed by [48] for detecting breast cancer abnormalities in the MIAS data set. Their method achieved an accuracy of 93.88% and surpassed the performance of the k-nearest neighbour (kNN) algorithm. Authors in [49] used AlexNet and SVM to enhance classification accuracy with data augmentation techniques. The method achieved 71.01% accuracy, which increased to 87.2% with SVM, and was evaluated on the DDSM and CBIS-DDSM datasets.

A DenseNet deep learning framework extracted image features and classified cancerous and benign cells by feeding them into a fully connected (FC) layer. The effectiveness of this technique was evaluated by adjusting the hyperparameters [50]. An algorithm named DICNN was developed by Irfan et al. [51], which uses a dilated semantic segmentation network and morphological operations. Combining these feature vectors with SVM classification yielded an accuracy of 98.9%.

Although prior breast cancer detection and classification systems have improved information extraction, several issues still need attention, such as low contrast in the tumor location, high memory complexity, long processing time, and the need for a large amount of training data for deep learning approaches. In response to these problems, we propose a new approach to breast cancer detection and classification, which is discussed in detail in the following section.

3. Methodology

In this section, the processes used for implementing our proposed scheme are described in depth. The system consists of the following steps: (1) image enhancement, (2) image segmentation, (3) feature extraction and selection, and (4) feature classification. The proposed system is illustrated in Figure 1.

3.1. Dataset. This study used a digital breast X-ray database named INbreast to implement the proposed CAD approach. The INbreast dataset is a public database that contains more recent FFDM images, typically with an image size of 3328 × 4084 pixels. It contains 115 patients' cases along with 410 mammograms with both craniocaudal (CC) and mediolateral oblique (MLO) views. Of these 115 patients, 90 had mammograms taken of both breasts, totaling 360 images, while the other 25 had only two mammograms taken each. In total, 410 mammograms were produced from 115 patients, including cases of normal, benign, and malignant breasts. 107 cases with breast lesions were used from the MLO and CC views for evaluation purposes.

3.2. Convolutional Neural Network. This subsection examines the fundamental structure common to all convolutional neural network (CNN) architectures. CNNs are deep neural networks used for image recognition and classification. In recent years, CNNs have become a crucial tool in image analysis, especially for identifying faces, text, and medical imaging. CNNs have a long history of success in image classification and segmentation, having first been developed in 1989. CNNs replicate the human brain's visual information processing by incorporating layers of "neurons" that only respond to their local surroundings. These networks can understand the topological aspects of an image through a combination of convolutional, pooling, and fully connected (FC) layers. The architecture of a CNN is shown in Figure 2.

3.2.1. Convolutional Layers. The convolutional layers are assembled into feature maps based on local connections and weight-sharing principles. A filter bank, a group of weights, connects neurons in a feature map to corresponding local regions in the preceding layer. Each feature map uses a different filter bank, and all the units in a map share the same filter weights. This weight sharing and local connectivity help reduce the number of parameters by exploiting the close relationship between neighboring pixels and location-independent image features. The output of the weights is then sent to an activation function, such as ReLU or sigmoid. This activation function enables the nonlinear transformation of the input data, which is necessary for the following processing stages.

3.2.2. Pooling Layer. As illustrated in Figure 2, the pooling layer follows the convolution layer and uses subsampling to integrate the features from the convolutional layer into a single layer semantically. This layer's primary objective is to decrease the size of the image by combining pixels into one value while preserving its features. In this layer, typical operations include max as well as mean pooling.
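The convolutional and pooling behavior described in Sections 3.2.1 and 3.2.2 can be sketched in a few lines of NumPy (a toy illustration only: the 6 × 6 patch and the vertical-edge kernel are invented for the example and are not part of the paper's ResNet-50 pipeline):

```python
import numpy as np

def conv2d(image, kernel):
    # Valid convolution: slide the shared kernel (one filter bank)
    # over the image, producing one feature map.
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    # Nonlinear activation applied to the feature map.
    return np.maximum(x, 0.0)

def max_pool(fmap, size=2):
    # Subsample by keeping the strongest response in each block.
    h2, w2 = fmap.shape[0] // size, fmap.shape[1] // size
    out = np.zeros((h2, w2))
    for i in range(h2):
        for j in range(w2):
            out[i, j] = fmap[i * size:(i + 1) * size,
                             j * size:(j + 1) * size].max()
    return out

# Toy 6x6 patch: dark on the left, bright on the right.
img = np.zeros((6, 6))
img[:, 3:] = 1.0
kernel = np.array([[-1., 0., 1.],
                   [-1., 0., 1.],
                   [-1., 0., 1.]])   # responds to dark-to-bright vertical edges

fmap = relu(conv2d(img, kernel))     # 4x4 feature map
pooled = max_pool(fmap, 2)           # 2x2 pooled summary
```

Sliding the shared kernel over the patch yields a 4 × 4 feature map in which only the columns crossing the edge respond; 2 × 2 max pooling then halves each spatial dimension while preserving the strongest responses, which is exactly the parameter-sharing and subsampling behavior described above.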
[Figure 1: workflow of the proposed system: mammography images split into train and test data; image preprocessing; ROI segmentation (encoding/decoding, 3D segmentation); image augmentation; feature extraction and selection (lesions, tumor, lump); classification; results.]

[Figure 2 layout: Input Layer, Convolutional, Pooling, Convolutional, Pooling, Fully Connected.]
Figure 2: Standard architecture of CNN.
3.2.3. Fully Connected Layer. The last layer in a CNN is the dense classification layer, which is responsible for determining the category of the input data based on the features extracted by the CNN. The number of units in the FC layer is the same as the number of different classifications (categories).

3.3. Proposed Workflow. This section provides the proposed workflow for breast cancer diagnosis using a deep convolutional neural network.

3.3.1. Image Enhancement. Image enhancement refers to increasing contrast and suppressing noise in mammogram images to assist radiologists in detecting breast abnormalities. Various image enhancement methods exist, including adaptive histogram equalization (AHE). AHE improves the local contrast and reveals more image details, making it a helpful technique for enhancing both natural and medical images [52]. However, it may also result in considerable noise. In this paper, we utilized the contrast-limited adaptive histogram equalization (CLAHE) technique, a form of AHE, to enhance image contrast [52]. A drawback of AHE is that it can over-enhance the images due to the integration process [49]. To mitigate this issue, CLAHE is used, as it limits the local histogram by setting a clip level, thus controlling contrast enhancement. Figure 3 illustrates an image enhanced by the CLAHE algorithm.

Furthermore, the CLAHE algorithm steps are given as follows [53]:

(1) Split the image into equal-sized contextual regions.
(2) Apply histogram equalization to all contextual regions.
(3) Limit the histogram to the clip level.
(4) Reallocate the clipped values in the histogram.
(5) Obtain the enhanced pixel values through histogram integration.

3.4. Image Segmentation. Image segmentation involves dividing an image into regions with similar characteristics and features. Segmentation aims to simplify the image for easier analysis [54]. Popular image segmentation techniques
include edge detection, partial differential equation (PDE)-based methods, fuzzy theory, artificial neural networks (ANN), region-based segmentation, and thresholding.

3.4.1. Thresholding Method. One of the simplest image segmentation methods is the thresholding method [55, 56]. The pixels of the image are split according to their intensity level. The global threshold is the most commonly used thresholding technique [57]. It is accomplished by setting a threshold value (T) that is constant throughout the image. The output image is derived from the original image based on the threshold value.

The method for extracting the ROI can be summarized in four steps:

(1) Threshold the grayscale mammogram image to create a binary image.
(2) Label and count the binary image objects, then retain only the largest one, which is the tumor, as defined by the white bounding box.
(3) Assign the largest area within the threshold a value of "1" and the rest a value of "0."
(4) Multiply the binary image with the original mammogram image to obtain the final ROI without including other parts of the breast or artifacts.
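The enhancement and ROI-extraction steps above can be combined into a short sketch (a simplified illustration: the histogram clipping is applied to the whole image rather than per contextual region as in full CLAHE, a breadth-first connected-component search stands in for whatever labelling routine the authors used, and the toy image, clip level, and threshold value are invented for the example):

```python
import numpy as np
from collections import deque

def clipped_equalize(img, clip=0.01):
    # Core CLAHE idea on a single region: clip the normalized histogram,
    # redistribute the clipped mass uniformly, equalize via the CDF.
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    hist = hist.astype(float) / img.size
    excess = np.clip(hist - clip, 0, None).sum()
    hist = np.minimum(hist, clip) + excess / 256.0   # reallocate clipped values
    lut = np.round(255 * np.cumsum(hist)).astype(np.uint8)
    return lut[img]                                  # histogram integration

def largest_component(binary):
    # Label 4-connected foreground objects and keep only the largest.
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    best, best_size, cur = 0, 0, 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and labels[sy, sx] == 0:
                cur += 1
                size = 0
                q = deque([(sy, sx)])
                labels[sy, sx] = cur
                while q:
                    y, x = q.popleft()
                    size += 1
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and \
                                binary[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = cur
                            q.append((ny, nx))
                if size > best_size:
                    best, best_size = cur, size
    return labels == best

def extract_roi(img, threshold):
    binary = img > threshold            # step (1): global threshold
    mask = largest_component(binary)    # steps (2)+(3): keep largest object
    return img * mask                   # step (4): suppress everything else

# Toy 8-bit "mammogram": one large bright blob and one small artifact.
img = np.zeros((8, 8), dtype=np.uint8)
img[1:5, 1:5] = 200   # 16-pixel "tumor"
img[6, 6] = 220       # 1-pixel artifact
roi = extract_roi(clipped_equalize(img), threshold=128)
```

On this toy input, the artifact pixel survives thresholding but is discarded as a smaller connected component, so only the 16-pixel blob remains in the final ROI.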
the machine can learn and use the features to perform tasks such as classification or prediction. In deep learning, FL can be accomplished by constructing a complete CNN to train and test image datasets, or by adjusting a pretrained CNN for classification or prediction on a new image dataset, referred to as transfer learning.

In deep learning, transfer learning (TL) is a widely used technique that enables the utilization of a pretrained network for new prediction or classification tasks. This is achieved by adjusting the parameters of the pretrained network for the new task instead of training from randomly initialized weights. TL typically results in faster training than starting from scratch and is considered an optimization that saves time and improves performance, as stated in [65]. For this purpose, transfer learning is utilized to fine-tune the ResNet50 CNN. This involves using pretrained weights from the ImageNet dataset [66] for retraining after preprocessing the collected dataset. The network parameters and hyperparameters are optimized during this process.

4.1. Image Acquisition Process. The proposed system's performance is evaluated using digitized mammogram images from the INbreast dataset [71]. The database is used to demonstrate the efficiency and reliability of the proposed method for identifying breast cancer. The INbreast dataset includes 336 mammogram images, with 269 abnormal and 69 normal images, where 220 are benign and 49 are malignant cases. Tables 1 and 2 show the distribution of mammography images.

4.2. Metrics of Performance. The purpose of cross-validation is to improve efficiency, validate performance, and assess the results from the dataset. To assess the classification efficiency of the proposed method, multiple metrics are utilized, such as the confusion matrix, accuracy, sensitivity, specificity, error rate, F1 score, and area under the curve (AUC). All these metrics act as benchmark values for comparing the proposed method against previous algorithms [72]. These measurements are defined as follows.
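The transfer-learning procedure described above (keep the pretrained backbone frozen, reinitialize and retrain only the classification head) can be illustrated framework-free. This is a minimal sketch under invented data: a frozen random projection stands in for the pretrained backbone, whereas in the paper that role is played by ResNet-50 pretrained on ImageNet:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained backbone: a frozen feature extractor.
W_backbone = rng.normal(size=(16, 8))

def extract_features(x):
    # Frozen: W_backbone is never updated during fine-tuning below.
    return np.tanh(x @ W_backbone)

# Toy "new task" data standing in for benign/malignant labels.
X = rng.normal(size=(64, 16))
y = (X[:, 0] > 0).astype(float)

# Only the new classification head is initialized and trained.
w_head = np.zeros(8)
b_head = 0.0
lr = 0.2
F = extract_features(X)          # features computed once, backbone fixed

losses = []
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(F @ w_head + b_head)))       # sigmoid head
    losses.append(-np.mean(y * np.log(p + 1e-12)
                           + (1 - y) * np.log(1 - p + 1e-12)))
    w_head -= lr * F.T @ (p - y) / len(y)                  # update head only
    b_head -= lr * np.mean(p - y)
```

Because only the small head is trained while the backbone weights stay fixed, the loss drops quickly from its chance-level starting point, which is the time-saving behavior attributed to transfer learning in the text.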
4.2.3. Specificity. The chance that the test will correctly recognize a patient who does not have the disease is shown in the following equation:

Specificity = TN / (TN + FP). (2)

4.2.4. Sensitivity. The chance that the test will correctly recognize a patient with the disease is shown in the following equation:

Sensitivity = TP / (TP + FN). (3)

4.2.5. F1 Score. The F1 score is a weighted average of precision and recall used for assessing the classifier's performance. It considers both false positives and false negatives in its calculation, as shown in the following equation:

F1 score = 2 * (Precision * Recall) / (Precision + Recall). (4)

4.2.6. Area Under the Curve (AUC). AUC measures the classifier's ability to distinguish between benign, normal, and malignant mammograms.

5. Results and Discussion

For this study, a subset is taken from the INbreast dataset, and each sample is increased to four images. During the experiment, 60% of the images were used for training, and the remaining 40% were used for testing. The samples were first subjected to enhancement and segmentation according to the procedures described in the "Methodology" section. Afterward, features were extracted from the samples using a CNN. Finally, all the samples were classified using ResNet-50.

The proposed DCNN method categorizes mammogram images of breast tumors into benign or malignant. A dataset named INbreast is used for experimentation. Table 3 displays the classification accuracy achieved by the proposed ResNet-50 method across the INbreast database. From the INbreast dataset, 132 benign and 29 malignant image samples were selected for training, and 20 malignant and 88 benign for testing. The resulting accuracy is 93%. The proposed ResNet-50 approach is also compared quantitatively with previously existing algorithms. The study's results revealed that the presented approach outperformed these algorithms with high accuracy, specificity, F1 score, and sensitivity values.

As shown in Table 3, the proposed approach demonstrated improved results on the INbreast database, with an accuracy of 93.0%, a specificity of 93.86%, and a sensitivity of 93.83%. It outperforms other methods in terms of accuracy. Although [16] achieved a competitive accuracy of 91.0%, the proposed approach still exhibits the best performance compared with the other methods. Compared to existing methods, the proposed approach enhances breast cancer detection and classification performance. It can potentially be used for real-time evaluation and to support radiologists in automating the analysis of mammogram images. However, performance may vary when the same method is applied to different datasets due to factors such as background noise, lighting conditions, occlusion, overfitting, and the nature of the method.

The performance of the presented approach is also evaluated using the confusion matrix and ROC curves. Figure 5 illustrates the confusion matrix on the INbreast data set. AUC, a crucial statistical metric of the ROC curve, is computed using the INbreast data set. ROC curves were constructed based on the true positive rate (sensitivity) and false positive rate (1 - specificity), controlled by the threshold of the obtained probability maps. Figure 6 shows the ROC curve graph.

Table 4 presents our proposed system's breast cancer detection results. The proposed approach achieved an F1 score and AUC of 93.03% and 93.02%, respectively, on the INbreast database.

In recent years, breast cancer detection and classification applications have gained widespread use in the medical field, making the diagnostic process more accurate [76, 77]. The goal of the proposed method is to enhance clinical diagnosis by improving the detection of breast cancer. The opinions of two medical specialists were gathered based on the accuracy level generated by our proposed algorithm. These experts expressed their appreciation for the improved results of ResNet-50 compared to other approaches. To sum up, the proposed approach enhances performance compared to other methods and can be utilized for real-time evaluations, along with helping radiologists automate the evaluation of mammograms.
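Equations (2)-(4) can be checked directly from confusion-matrix counts (the counts below are illustrative and are not the paper's results):

```python
# Hypothetical confusion-matrix counts for a binary screening run.
TP, FN, TN, FP = 76, 5, 107, 7

accuracy = (TP + TN) / (TP + FN + TN + FP)
sensitivity = TP / (TP + FN)       # equation (3), also called recall
specificity = TN / (TN + FP)       # equation (2)
precision = TP / (TP + FP)
f1 = 2 * precision * sensitivity / (precision + sensitivity)   # equation (4)
```

Algebraically, the F1 score reduces to 2TP / (2TP + FP + FN), which is a convenient cross-check when only raw counts are available.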
[Figure 5: confusion matrix on the INbreast dataset; normalized entries include 0.9386 and 0.0614 (predicted label axis).]

[Figure 6: ROC curve; AUC = 0.9302; x-axis: false positive rate from 0.0 to 1.0.]
Figure 6: ROC plot on INbreast dataset.

Table 4: Results of our proposed methodology for breast cancer detection and classification.

Performance metrics    Result obtained (%)
Accuracy               93.0
Sensitivity            93.83
Specificity            93.86
F1 score               93.03
AUC                    93.02

6. Conclusion

The proposed system aimed to detect malignant breast masses and classify benign and malignant tissues in mammograms. A novel computer-aided detection (CAD) system is proposed, which involves thresholding and region-based

Data Availability

The (Breast Cancer Diagnosis) data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

[1] M. Xian, Y. Zhang, H.-Da Cheng, F. Xu, B. Zhang, and J. Ding, "Automatic breast ultrasound image segmentation: a survey," Pattern Recognition, vol. 79, pp. 340-355, 2018.
[2] C. Xu, Z. Gu, J. Liu et al., "Adenosquamous carcinoma of the breast: a population-based study," Breast Cancer, vol. 28, no. 4, pp. 848-858, 2021.
[3] S. J. S. Gardezi, A. Elazab, B. Lei, and T. Wang, "Breast cancer detection and diagnosis using mammographic data: systematic review," Journal of Medical Internet Research, vol. 21, no. 7, Article ID e14464, 2019.
[4] J. S. Whang, S. R. Baker, R. Patel, L. Luk, and A. Castro, "The causes of medical malpractice suits against radiologists in the United States," Radiology, vol. 266, no. 2, pp. 548-554, 2013.
[5] F. Adam, "Breast cancer: symptoms, causes, and treatment," 2022, https://www.medicalnewstoday.com/articles/37136.
[6] D. El-Chami, M. Al Haddad, R. Abi-Habib, and M. El-Sibai, "Recombinant anthrax lethal toxin inhibits cell motility and invasion in breast cancer cells through the dysregulation of Rho GTPases," Oncology Letters, vol. 21, no. 2, p. 163, 2021.
[7] W. Chen, R. Zheng, S. Zhang et al., "Cancer incidence and mortality in China in 2013: an analysis based on urbanization level," Chinese Journal of Cancer Research, vol. 29, no. 1, pp. 1-10, 2017.
[8] M. A. Al-Antari, S.-M. Han, and T.-S. Kim, "Evaluation of deep learning detection and classification towards computer-aided diagnosis of breast lesions in digital x-ray mammograms," Computer Methods and Programs in Biomedicine, vol. 196, Article ID 105584, 2020.
[9] R. Nader, E. Tannoury, T. Rizk, H. Ghanem, and E. Luvizotto Junior, "Atezolizumab-induced encephalitis in a patient with metastatic breast cancer: a case report and review of neurological adverse events associated with checkpoint inhibitors," Autopsy Case Reports, vol. 11, Article ID e2021261, 2021.
[10] N. Mansour, K. Bodman-Smith, R. S. Khnayzer, and C. F. Daher, "A photoactivatable Ru(II) complex bearing 2,9-diphenyl-1,10-phenanthroline: a potent chemotherapeutic drug inducing apoptosis in triple negative human breast adenocarcinoma cells," Chemico-Biological Interactions, vol. 336, Article ID 109317, 2021.
[11] M. Younes, C. Ammoury, T. Haykal, L. Nasr, R. Sarkis, and S. Rizk, "The selective anti-proliferative and pro-apoptotic effect of A. cherimola on MDA-MB-231 breast cancer cell line," BMC Complementary Medicine and Therapies, vol. 20, no. 1, p. 343, 2020.
[12] S. Punitha, F. Al-Turjman, and T. Stephan, "An automated breast cancer diagnosis using feature selection and parameter optimization in ANN," Computers & Electrical Engineering, vol. 90, Article ID 106958, 2021.
[13] Ye-J. Mao, H.-J. Lim, M. Ni, W. H. Yan, D. W. C. Wong, and J. C. W. Cheung, "Breast tumour classification using ultrasound elastography with machine learning: a systematic scoping review," Cancers, vol. 14, no. 2, p. 367, Jan 2022.
[14] H. A. Nasser, S. Assaf, G. Aouad, Y. Mouawad, and S. Elamely, "Breast manifestations of type I diabetes mellitus," Breast Journal, vol. 26, no. 10, pp. 2079-2080, 2020.
[15] N. Al Saud, S. Jabbour, E. Kechichian et al., "A systematic review of zosteriform rash in breast cancer patients: an objective proof of flap reinnervation and a management algorithm," Annals of Plastic Surgery, vol. 81, no. 4, pp. 456-461, 2018.
[16] N. Dhungel, G. Carneiro, and A. P. Bradley, "A deep learning approach for the analysis of masses in mammograms with minimal user intervention," Medical Image Analysis, vol. 37, pp. 114-128, 2017.
[17] M. Byra, "Breast mass classification with transfer learning based on scaling of deep representations," Biomedical Signal Processing and Control, vol. 69, Article ID 102828, 2021.
[18] E. Hitti, D. Hadid, J. Melki, R. Kaddoura, and M. Alameddine, "Mobile device use among emergency department healthcare professionals: prevalence, utilization and attitudes," Scientific
cancer detection on screening mammography," Scientific Reports, vol. 9, no. 1, Article ID 12495, Aug 2019.
[20] D. Jansson, V. B. Dieriks, J. Rustenhoven et al., "Cardiac glycosides target barrier inflammation of the vasculature, meninges and choroid plexus," Communications Biology, vol. 4, no. 1, p. 260, Feb 2021.
[21] O. El Zarif and R. A. Haraty, "Toward information preservation in healthcare systems," in Innovation in Health Informatics, pp. 163-185, Elsevier, Amsterdam, Netherlands, 2020.
[22] G. Salloum and J. Tekli, "Automated and personalized nutrition health assessment, recommendation, and progress evaluation using fuzzy reasoning," International Journal of Human-Computer Studies, vol. 151, Article ID 102610, 2021.
[23] B.-Il Song, "A machine learning-based radiomics model for the prediction of axillary lymph-node metastasis in breast cancer," Breast Cancer, vol. 28, 2021.
[24] N. H. Mukand, N. Y. Ko, and N. A. Nabulsi, "The association between physical health-related quality of life, physical functioning, and risk of contralateral breast cancer among older women," Breast Cancer, vol. 29, 2021.
[25] T. Sadad, A. Munir, T. Saba, and A. Hussain, "Fuzzy c-means and region growing based classification of tumor from mammograms using hybrid texture feature," Journal of Computational Science, vol. 29, pp. 34-45, 2018.
[26] M. Jubeen, H. Rahman, A. U. Rahman et al., "An automatic breast cancer diagnostic system based on mammographic images using convolutional neural network classifier," Journal of Computing & Biomedical Informatics, vol. 4, no. 1, pp. 77-86, 2022.
[27] D. Qader Zeebaree, A. Mohsin Abdulazeez, D. Asaad Zebari, H. Haron, and H. Nuzly Abdull Hamed, "Multi-level fusion in ultrasound for cancer detection based on uniform LBP features," Computers, Materials & Continua, vol. 66, no. 3, pp. 3363-3382, 2021.
[28] S. Maqsood, R. Damaševičius, F. M. Shah, and R. Maskeliūnas, "Detection of macula and recognition of aged-related macular degeneration in retinal fundus images," Computing and Informatics, vol. 40, no. 5, pp. 957-987, 2021.
[29] M. A. Al-Antari, M. A. Al-Masni, M.-T. Choi, S.-M. Han, and T.-S. Kim, "A fully integrated computer-aided diagnosis system for digital x-ray mammograms via deep learning detection, segmentation, and classification," International Journal of Medical Informatics, vol. 117, pp. 44-54, 2018.
[30] H. Chougrad, H. Zouaki, and O. Alheyane, "Deep convolutional neural networks for breast cancer screening," Computer Methods and Programs in Biomedicine, vol. 157, pp. 19-30, 2018.
[31] T. Kooi, G. Litjens, B. van Ginneken et al., "Large scale deep learning for computer aided detection of mammographic lesions," Medical Image Analysis, vol. 35, pp. 303-312, 2017.
[32] S. Maqsood and U. Javed, "Multi-modal medical image fusion based on two-scale image decomposition and sparse representation," Biomedical Signal Processing and Control, vol. 57, Article ID 101810, 2020.
[33] S. R. Muzammil, S. Maqsood, S. Haider, and R. Damaševičius, "CSID: a novel multimodal image fusion algorithm for enhanced clinical diagnosis," Diagnostics, vol. 10, no. 11, p. 904, 2020.
[34] S. Maqsood, U. Javed, M. M. Riaz, M. Muzammil,
Reports, vol. 11, no. 1, p. 1917, 2021. F. Muhammad, and S. Kim, “Multiscale image matting based
[19] Li Shen, L. R. Margolies, J. H. Rothstein, E. Fluder, multi-focus image fusion technique,” Electronics, vol. 9, no. 3,
R. McBride, and W. Sieh, “Deep learning to improve breast p. 472, 2020.
8483, 2023, 1, Downloaded from https://onlinelibrary.wiley.com/doi/10.1155/2023/7717712, Wiley Online Library on [09/08/2024]. See the Terms and Conditions (https://onlinelibrary.wiley.com/terms-and-conditions) on Wiley Online Library for rules of use; OA articles are governed by the applicable Creative Commons License
10 Computational Intelligence and Neuroscience
[35] K. Jabeen, M. A. Khan, M. Alhaisoni et al., "Breast cancer classification from ultrasound images using probability-based optimal deep learning feature fusion," Sensors, vol. 22, no. 3, p. 807, 2022.
[36] N. I. R. Yassin, S. Omran, E. M. El Houby, and H. Allam, "Machine learning techniques for breast cancer computer aided diagnosis using different image modalities: a systematic review," Computer Methods and Programs in Biomedicine, vol. 156, pp. 25–45, 2018.
[37] Z. Rezaei, "A review on image-based approaches for breast cancer detection, segmentation, and classification," Expert Systems with Applications, vol. 182, Article ID 115204, 2021.
[38] N. Dhungel, G. Carneiro, and A. P. Bradley, "Deep learning and structured prediction for the segmentation of mass in mammograms," in Proceedings of the MICCAI 2015: 18th International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 605–612, Springer, Munich, Germany, October 2015.
[39] Y.-D. Zhang, S. C. Satapathy, D. S. Guttery, J. M. Górriz, and S. H. Wang, "Improved breast cancer classification through combining graph convolutional network and convolutional neural network," Information Processing & Management, vol. 58, no. 2, Article ID 102439, 2021.
[40] Y. J. Suh, J. Jung, and B.-J. Cho, "Automated breast cancer detection in digital mammograms of various densities via deep learning," Journal of Personalized Medicine, vol. 10, no. 4, p. 211, 2020.
[41] N. Chouhan, A. Khan, J. Z. Shah, M. Hussnain, and M. W. Khan, "Deep convolutional neural network and emotional learning based breast cancer detection using digital mammography," Computers in Biology and Medicine, vol. 132, Article ID 104318, 2021.
[42] D. Muduli, R. Dash, and B. Majhi, "Automated breast cancer detection in digital mammograms: a moth flame optimization based ELM approach," Biomedical Signal Processing and Control, vol. 59, Article ID 101912, 2020.
[43] Y. Wang, E. J. Choi, Y. Choi, H. Zhang, G. Y. Jin, and S. B. Ko, "Breast cancer classification in automated breast ultrasound using multiview convolutional neural network with transfer learning," Ultrasound in Medicine and Biology, vol. 46, no. 5, pp. 1119–1132, 2020.
[44] M. Masud, A. E. Eldin Rashed, and M. S. Hossain, "Convolutional neural network-based models for diagnosis of breast cancer," Neural Computing & Applications, vol. 34, pp. 11383–11394, 2020.
[45] Y. Eroğlu, M. Yildirim, and A. Çinar, "Convolutional neural networks based classification of breast ultrasonography images by hybrid method with respect to benign, malignant, and normal using mRMR," Computers in Biology and Medicine, vol. 133, Article ID 104407, 2021.
[46] H. Zhang, L. Han, K. Chen, Y. Peng, and J. Lin, "Diagnostic efficiency of the breast ultrasound computer-aided prediction model based on convolutional neural network in breast cancer," Journal of Digital Imaging, vol. 33, no. 5, pp. 1218–1223, 2020.
[47] G. Braz Junior, S. V. da Rocha, J. D. S. de Almeida, A. C. de Paiva, A. C. Silva, and M. Gattass, "Breast cancer detection in mammography using spatial diversity, geostatistics, and concave geometry," Multimedia Tools and Applications, vol. 78, no. 10, pp. 13005–13031, 2019.
[48] J. Harefa, A. Alexander, and M. Pratiwi, "Comparison classifier: support vector machine (SVM) and k-nearest neighbor (k-NN) in digital mammogram images," Jurnal Informatika dan Sistem Informasi, vol. 2, no. 2, pp. 35–40, 2017.
[49] D. A. Ragab, M. Sharkas, S. Marshall, and J. Ren, "Breast cancer detection using deep convolutional neural networks and support vector machines," PeerJ, vol. 7, p. e6201, 2019.
[50] K. Kousalya and T. Saranya, "Improved the detection and classification of breast cancer using hyper parameter tuning," Materials Today: Proceedings, 2021.
[51] R. Irfan, A. A. Almazroi, H. T. Rauf, R. Damasevicius, E. A. Nasr, and A. E. Abdelgawad, "Dilated semantic segmentation for breast ultrasonic lesion detection using parallel feature fusion," Diagnostics, vol. 11, no. 7, p. 1212, 2021.
[52] E. D. Pisano, S. Zong, B. M. Hemminger et al., "Contrast limited adaptive histogram equalization image processing to improve the detection of simulated spiculations in dense mammograms," Journal of Digital Imaging, vol. 11, no. 4, pp. 193–200, 1998.
[53] A. Sahakyan and H. Sarukhanyan, "Segmentation of the breast region in digital mammograms and detection of masses," International Journal of Advanced Computer Science and Applications, vol. 3, no. 2, 2012.
[54] I. Abid, S. Almakdi, H. Rahman et al., "A convolutional neural network for skin lesion segmentation using double U-Net architecture," Intelligent Automation & Soft Computing, vol. 33, no. 3, pp. 1407–1421, 2022.
[55] R. Ahmad, A. Khalid, and H. Rahman, "Brain tumor detection using image segmentation and classification," LC International Journal of STEM, vol. 1, no. 3, pp. 59–65, 2020.
[56] M. Abbas, M. Arshad, and H. Rahman, "Detection of breast cancer using neural networks," LC International Journal of STEM, vol. 1, no. 3, pp. 75–88, 2020.
[57] D. Kaur and Y. Kaur, "Various image segmentation techniques: a review," International Journal of Computer Science and Mobile Computing, vol. 3, no. 5, pp. 809–814, 2014.
[58] W. Khan, "Image segmentation techniques: a survey," Journal of Image and Graphics, vol. 1, no. 4, pp. 166–170, 2014.
[59] M. Kaur and P. Goyal, "A review on region based segmentation," International Journal of Science and Research, vol. 4, no. 4, pp. 3194–3197, 2015.
[60] J. Han, D. Zhang, and X. Hu, "Background prior-based salient object detection via deep reconstruction residual," IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, no. 8, pp. 1309–1321, 2014.
[61] J. Zabalza, J. Ren, J. Zheng et al., "Novel segmented stacked autoencoder for effective dimensionality reduction and feature extraction in hyperspectral imaging," Neurocomputing, vol. 185, pp. 1–10, 2016.
[62] Y. LeCun, K. Kavukcuoglu, and C. Farabet, "Convolutional networks and applications in vision," in Proceedings of the 2010 IEEE International Symposium on Circuits and Systems, pp. 253–256, IEEE, Paris, France, June 2010.
[63] F. A. Spanhol, L. S. Oliveira, C. Petitjean, and L. Heutte, "Breast cancer histopathological image classification using convolutional neural networks," in Proceedings of the 2016 International Joint Conference on Neural Networks (IJCNN), pp. 2560–2567, IEEE, Vancouver, BC, Canada, July 2016.
[64] Y. Bengio, A. Courville, and P. Vincent, "Representation learning: a review and new perspectives," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 8, pp. 1798–1828, 2013.
[65] F. L. Raúl de la, "Wild Data - Transfer Learning - Deep Learning - Blog," 2018, https://blog.stratio.com/wild-data-part-three-transfer-learning/.
[66] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference