DeepSkin: A Deep Learning Approach for Skin Cancer Classification
Received 5 April 2023, accepted 6 May 2023, date of publication 10 May 2023, date of current version 30 May 2023.
Digital Object Identifier 10.1109/ACCESS.2023.3274848
ABSTRACT Skin cancer is one of the most rapidly spreading illnesses in the world, and the resources available for diagnosing it are limited. Early detection of skin cancer is therefore crucial, and accurate diagnosis and identification are generally essential for preventive approaches. Detecting skin cancer at an early stage is challenging even for dermatologists. In recent years, both supervised and unsupervised learning tasks have made extensive use of deep learning, and one of these models, the Convolutional Neural Network (CNN), has surpassed all others in object detection and classification tests. The dataset used for the experimentation is screened from MNIST: HAM10000, which consists of seven different types of skin lesions with a sample size of 10015. Data pre-processing techniques such as sampling, dull razor hair removal, and segmentation using an autoencoder and decoder are employed. Transfer learning techniques, DenseNet169 and ResNet50, were used to train the model and obtain the results.
INDEX TERMS Skin cancer, segmentation, deep learning, CNN, DenseNet169, ResNet50.
been dominated by Convolutional Neural Networks (CNNs). As a result, trained end-to-end in a controlled environment, CNNs eliminate the need for humans to manually create feature sets. In recent years, the use of CNNs to categorize skin cancer lesions has outperformed skilled human specialists.

II. LITERATURE REVIEW
A methodology for the classification of skin lesions with fine-tuned neural networks is proposed in [8]. To balance the dataset, the skin lesion photographs are resampled. A combination of DenseNet and U-Net is trained for segmentation and then used to fine-tune the subsequent classifiers. The encoder part of the segmentation model is then trained to categorize the seven different skin disorders. The classification model's average balanced accuracy was 0.836 on the test set and 0.840 on the validation set. An innovative strategy is put forth in [9] that pre-processes the image automatically before segmenting the lesion. Hair, gel, bubbles, and specular reflection are just a few of the undesirable artefacts that the system filters out. An innovative method for identifying and inpainting the hairs visible in cancer images is given using the wavelet idea. Utilizing an adaptive sigmoidal function that manages the localised intensity distribution within the pictures of a specific lesion, the contrast between the lesion and the skin is improved, and a segmentation method then precisely separates the lesion from the surrounding tissue. The proposed method is tested on a European dermoscopic image database. The system proposed in [10] focuses on classifying skin lesions with a specific implementation of the CNN approach. Its methods include the screening of a dataset from MNIST: HAM10000, which consists of seven different types of skin lesions with a sample size of 10015; the model is trained with a CNN and obtains an accuracy of 78%. In another study, a deep fully convolutional neural network is presented for semantic pixel-by-pixel segmentation. A pixel-wise classification layer, a corresponding decoder network, and an encoder network make up the trainable network [1], and the convolution layers are identical to the 13 convolution layers of VGG-16. As its name implies, the decoder network's primary goal is to convert encoder feature maps into full-input-resolution feature maps. The method in [2] focuses on classifying skin lesions with a specific implementation of the CNN approach. It also screens the MNIST: HAM10000 dataset of seven skin lesion types with a sample size of 10015, trains the model with a CNN together with transfer learning methods such as the ResNet model, and obtains an accuracy of 88%. An application of machine learning with a focus on skin cancer categorization has been developed in [11]. Pre-processing, segmentation, feature extraction, and classification are all included in that research. The ABCD rule, GLCM, and HOG were used to extract features, and some of the most commonly used machine learning techniques were employed. 328 images of benign melanoma and 672 images of melanoma were retrieved from the ISIC collection. With SVM classifiers, the classification reached an accuracy of 97.8% and an area under the curve of 0.94 [2], [12]. Furthermore, the KNN results showed a sensitivity of 86.2% and a specificity of 85%. The proposed strategy in [13] focuses on the identification and categorization of skin cancer through the specialised use of machine learning methodology. The classification rate for unsupervised learning using the k-means algorithm is 52.63%. The k-means algorithm divides the input data of n data points into k clusters; when melanoma skin cancer is detected, two clusters are produced, one for cancer detection and the other for non-cancer detection. Back Propagation Neural Network classification accuracy has been found to range from 60% to 75%, whereas Support Vector Machine (SVM) accuracy ranges from 80% to 90%. As a result, the Support Vector Machine performs better than k-means clustering and Back Propagation Neural Network classification, while unsupervised learning remains the foundation of the k-means method. The proposed approach in [14] concentrates on classification. The main goal of that study is to categorise skin lesions using deep learning, especially the CNN method. The dataset was compiled from ISIC. The techniques include transfer learning algorithms such as Inception V3, ResNet, VGG-16, and MobileNet, as well as data augmentation and image normalisation. The approach in [15] employs supervised learning to categorise skin lesions. With a specialised implementation of computer-aided diagnosis (CAD), MAP estimation can carry out numerous routine activities in automated skin lesion diagnosis; lesion segmentation, hair detection, and pigment network detection are some of the techniques. The developed model has an accuracy rate of 86%. The methods proposed in [16] focus on the detection and classification of skin cancer with a specific implementation of the deep learning approach. That paper screens the MNIST: HAM10000 dataset, which consists of seven different types of skin lesions with a sample size of 10015, and the PH2 dataset, which contains 200 images of skin lesions. The methods include data augmentation, and the model is trained using deep learning architectures such as MobileNet and VGG-16 [17]. The accuracy obtained is 81.52% with MobileNet and 80.07% with VGG-16.

III. METHODOLOGY
Detecting skin cancer at an early stage is challenging for dermatologists. The extensive use of deep learning procedures, as shown in Figure 1, helps to classify the seven types of skin cancer images. Deep learning methods such as Convolutional Neural Networks (CNNs) have surpassed all others in object detection and classification tests.
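The training code itself is not listed in the paper; as an illustration of the transfer-learning setup described above, the following is a minimal sketch assuming a Keras/TensorFlow environment with 224x224 RGB inputs (the framework, input size, and frozen-backbone choice are assumptions, not details stated in the text).

```python
# Hypothetical sketch: DenseNet169 backbone with a seven-class head.
# Keras/TensorFlow, the input size, and the pooling/dropout head are
# assumptions made for illustration.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet169

NUM_CLASSES = 7               # seven HAM10000 lesion types
INPUT_SHAPE = (224, 224, 3)   # assumed input resolution

def build_classifier():
    # Pre-trained backbone with its ImageNet classification head removed.
    backbone = DenseNet169(include_top=False, weights="imagenet",
                           input_shape=INPUT_SHAPE)
    backbone.trainable = False  # freeze for the initial transfer-learning stage

    model = models.Sequential([
        backbone,
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.5),                          # dropout of 0.5, as in the paper
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_classifier()
model.summary()
```

Freezing the backbone in this way is a common first stage of transfer learning; the paper's exact fine-tuning schedule is not specified, so this should be read as a sketch rather than the authors' implementation.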
The more pixels there are, the more input nodes there are, which increases the model's complexity; by resizing the images, the training of a neural network may be shortened. Resizing also helps with zooming in on images: we frequently need to resize an image, either to reduce it or to scale it up, to suit the size requirements. Figure 5 shows the output of image thresholding for a skin lesion image.
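As an illustration of the resizing and thresholding steps just described, a small sketch using OpenCV is shown below; the 224x224 target size, Otsu's method, and the file name are illustrative assumptions rather than settings reported in the paper.

```python
# Illustrative sketch of image resizing and thresholding with OpenCV.
# The 224x224 target size, Otsu's method, and "lesion.jpg" are assumptions.
import cv2

def resize_and_threshold(path, size=(224, 224)):
    image = cv2.imread(path)                            # BGR image from disk
    resized = cv2.resize(image, size,
                         interpolation=cv2.INTER_AREA)  # shrink/enlarge to the required size
    gray = cv2.cvtColor(resized, cv2.COLOR_BGR2GRAY)
    # Binary mask separating the (darker) lesion from the surrounding skin.
    _, mask = cv2.threshold(gray, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    return resized, mask

resized_img, lesion_mask = resize_and_threshold("lesion.jpg")
```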
The activation used is the Rectified Linear Unit (ReLU), shown in equation (6). It is defined as

y = max(0, x)    (6)

Rectified linear is a more interesting transform that activates a node only if the input is above a certain quantity. Performance on problems involving sparse gradients (e.g., natural language and computer vision problems) is enhanced by the use of the adaptive gradient algorithm (AdaGrad). Additionally, using Root Mean Square Propagation (RMSProp), per-parameter learning rates are adjusted to correspond to the mean of recent weight gradient magnitudes (e.g., how quickly they are changing).
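To make equation (6) and the optimizer discussion concrete, the short sketch below evaluates the ReLU transform and configures an Adam optimizer, which combines the per-parameter adaptation ideas of AdaGrad and RMSProp; the use of NumPy/Keras and the hyperparameter values shown are assumptions for illustration.

```python
# ReLU from equation (6): y = max(0, x), applied element-wise.
import numpy as np
import tensorflow as tf

def relu(x):
    return np.maximum(0.0, x)   # activates a node only for positive inputs

print(relu(np.array([-2.0, -0.5, 0.0, 1.5, 3.0])))  # -> [0. 0. 0. 1.5 3.]

# Adam keeps AdaGrad-style adaptation for sparse gradients and
# RMSProp-style moving averages of recent gradient magnitudes.
# The beta/epsilon values are Keras defaults, used here only as an example.
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-5,
                                     beta_1=0.9, beta_2=0.999, epsilon=1e-7)
```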
FIGURE 12. Training and validation loss graph for 20 epochs.
FIGURE 14. Learning graph of training for 3 epochs.
The accuracy of the model obtained is 83%. Training loss versus validation loss and training accuracy versus validation accuracy are shown in Figure 14. The training and testing ratios were conducted over different splits such as 80:20, 70:30, and 40:60. Using a learning rate of 0.00001 and 30 training epochs, the model is trained on the 70:30 split with a batch size of 16 and a dropout layer of 0.5 to avoid overfitting; the model's accuracy is 80.9%. Figure 15 shows the model accuracy graph plot and Figure 16 shows the model loss learning graph plot. For the 40:60 split, the model is trained for 30 epochs with a learning rate of 0.0001, and the accuracy obtained is 81.6%. At each iteration, the training accuracy is shown below the validation accuracy in the plot of training loss versus validation loss. Figure 17 shows the model accuracy graph plot and Figure 18 shows the model loss learning graph plot.
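A hedged sketch of the training configuration reported for the 70:30 split (learning rate 0.00001, 30 epochs, batch size 16, dropout 0.5) is given below; the random placeholder data and the reuse of the earlier build_classifier sketch are assumptions, not the paper's actual pipeline.

```python
# Sketch of the reported training setup (70:30 split, lr = 0.00001,
# 30 epochs, batch size 16). Random arrays stand in for the HAM10000
# images, and build_classifier refers to the DenseNet169 sketch above.
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder data with the assumed 224x224x3 shape and 7 one-hot labels.
images = np.random.rand(100, 224, 224, 3).astype("float32")
labels = np.eye(7)[np.random.randint(0, 7, size=100)]

x_train, x_val, y_train, y_val = train_test_split(
    images, labels, test_size=0.30, random_state=42)  # 70:30 split

model = build_classifier()   # DenseNet169-based model from the earlier sketch
history = model.fit(
    x_train, y_train,
    validation_data=(x_val, y_val),
    epochs=30,      # 30 training epochs, as reported
    batch_size=16,  # batch size of 16
)
```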
TABLE 4. Training the model for 3 epochs.
TABLE 5. Classification report.
TABLE 6. Comparison between undersampling and oversampling with different splits.
TABLE 7. Comparison between undersampling and oversampling.

Figure 19 shows the confusion matrix for the undersampling technique; the accuracy obtained with undersampling is 83% with an f1-measure of 84%. Figure 20 shows the confusion matrix for the oversampling technique. The training and testing ratios were conducted over different splits such as 80:20, 70:30, and 40:60, as shown in Table 5. Table 6 shows the comparison between the undersampling and oversampling techniques.
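For readers who wish to reproduce the undersampling/oversampling comparison and the confusion-matrix and f1-measure reporting, one possible sketch using scikit-learn and imbalanced-learn is shown below; these libraries are assumptions, as the paper does not state its tooling.

```python
# Illustrative class-balancing and evaluation sketch; imbalanced-learn and
# scikit-learn are assumed tools, not code from the paper.
import numpy as np
from imblearn.under_sampling import RandomUnderSampler
from imblearn.over_sampling import RandomOverSampler
from sklearn.metrics import confusion_matrix, f1_score

def balance(features, labels, mode="under"):
    # features must be 2-D (e.g., flattened images or extracted embeddings).
    sampler = RandomUnderSampler() if mode == "under" else RandomOverSampler()
    return sampler.fit_resample(features, labels)

# Toy predictions standing in for the trained model's outputs.
y_true = np.array([0, 1, 2, 2, 1, 0])
y_pred = np.array([0, 1, 2, 1, 1, 0])
print(confusion_matrix(y_true, y_pred))            # per-class error structure
print("weighted f1:", f1_score(y_true, y_pred, average="weighted"))
```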
VI. COMPARISON WITH EXISTING METHODS
Our proposed work performs better than other recently published CNN models. We noticed that the AUC score of 0.912 is higher than the result in [17], 4% higher than the work carried out in [18], and 5% higher than that reported in [19].

VII. CONCLUSION
Skin cancer is one of the illnesses that is spreading the quickest on the earth. Skin cancer is mostly brought on by a person's vulnerability to the sun's UV radiation. Given the limited resources available, early identification of skin cancer is essential, and accurate diagnosis and identification are generally essential for skin cancer prevention strategies. Additionally, dermatologists have trouble seeing skin cancer in its early stages. The use of deep learning for both supervised and unsupervised applications has increased significantly in recent years, and Convolutional Neural Networks (CNNs) are one of the models that have excelled in object identification and classification tasks. The dataset is filtered from MNIST: HAM10000, which has a sample size of 10015 and includes seven different types of skin lesions. Data preprocessing methods include sampling, segmentation using an autoencoder and decoder, and dull razor hair removal. The model was trained using transfer learning methods such as DenseNet169 and ResNet50. Different ratios were used for training and assessment, including 80:20, 70:30, and 40:60. When undersampling and oversampling were compared, DenseNet169's undersampling technique produced an accuracy of 91.2% with an f1-measure of 91.7%, and ResNet50's oversampling technique produced an accuracy of 83% with an f1-measure of 84%. A future extension of this study is to increase forecast accuracy through parameter tuning.

REFERENCES
[1] Y. C. Lee, S.-H. Jung, and H.-H. Won, ''WonDerM: Skin lesion classification with fine-tuned neural networks,'' 2018, arXiv:1808.03426.
[2] U. Jamil, M. U. Akram, S. Khalid, S. Abbas, and K. Saleem, ''Computer based melanocytic and nevus image enhancement and segmentation,'' BioMed Res. Int., vol. 2016, pp. 1–13, Jan. 2016.
[3] A. Mahbod, G. Schaefer, C. Wang, R. Ecker, and I. Ellinge, ''Skin lesion classification using hybrid deep neural networks,'' in Proc. IEEE Int. Conf. Acoust., Speech Signal Process. (ICASSP), May 2019, pp. 1229–1233.
[4] K. Pai and A. Giridharan, ''Convolutional neural networks for classifying skin lesions,'' in Proc. TENCON IEEE Region 10 Conf. (TENCON), Oct. 2019, pp. 1794–1796.
[5] A. S. Shete, A. S. Rane, P. S. Gaikwad, and M. H. Patil, ''Detection of skin cancer using CNN algorithm,'' Int. J., vol. 6, no. 5, pp. 1–4, 2021.
[6] M. Vidya and M. V. Karki, ''Skin cancer detection using machine learning techniques,'' in Proc. IEEE Int. Conf. Electron., Comput. Commun. Technol. (CONECCT), Jul. 2020, pp. 1–5.
[7] H. Nahata and S. P. Singh, ''Deep learning solutions for skin cancer detection and diagnosis,'' in Machine Learning with Health Care Perspective. Cham, Switzerland: Springer, 2020, pp. 159–182.
[8] P. Wighton, T. K. Lee, H. Lui, D. I. McLean, and M. S. Atkins, ''Generalizing common tasks in automated skin lesion diagnosis,'' IEEE Trans. Inf. Technol. Biomed., vol. 15, no. 4, pp. 622–629, Jul. 2011.
[9] J. Saeed and S. Zeebaree, ''Skin lesion classification based on deep convolutional neural networks architectures,'' J. Appl. Sci. Technol. Trends, vol. 2, no. 1, pp. 41–51, Mar. 2021.
[10] Y. Li, A. Esteva, B. Kuprel, R. Novoa, J. Ko, and S. Thrun, ''Skin cancer detection and tracking using data synthesis and deep learning,'' 2016, arXiv:1612.01074.
[11] V. Badrinarayanan, A. Kendall, and R. Cipolla, ''SegNet: A deep convolutional encoder–decoder architecture for image segmentation,'' IEEE Trans. Pattern Anal. Mach. Intell., vol. 39, no. 12, pp. 2481–2495, Dec. 2017.
[12] P. Tschandl, C. Rosendahl, and H. Kittler, ''The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions,'' Sci. Data, vol. 5, no. 1, pp. 1–9, Aug. 2018.
[13] K. M. Hosny, M. A. Kassem, and M. M. Foaud, ''Skin cancer classification using deep learning and transfer learning,'' in Proc. 9th Cairo Int. Biomed. Eng. Conf. (CIBEC), Dec. 2018, pp. 90–93.
[14] A. Javaid, M. Sadiq, and F. Akram, ''Skin cancer classification using image processing and machine learning,'' in Proc. Int. Bhurban Conf. Appl. Sci. Technol. (IBCAST), Jan. 2021, pp. 439–444.
[15] R. Ashraf, I. Kiran, T. Mahmood, A. U. R. Butt, N. Razzaq, and Z. Farooq, ''An efficient technique for skin cancer classification using deep learning,'' in Proc. IEEE 23rd Int. Multitopic Conf. (INMIC), Nov. 2020, pp. 1–5.
[16] M. Uckuner and H. Erol, ''A new deep learning model for skin cancer classification,'' in Proc. 6th Int. Conf. Comput. Sci. Eng. (UBMK), Sep. 2021, pp. 27–31.
[17] Y. Filali, H. E. Khoukhi, M. A. Sabri, and A. Aarab, ''Analysis and classification of skin cancer based on deep learning approach,'' in Proc. Int. Conf. Intell. Syst. Comput. Vis. (ISCV), May 2022, pp. 1–6.
[18] D. Yousra, A. B. Abdelhakim, and B. A. Mohamed, ''Transfer learning for automated melanoma classification system: Data augmentation,'' in Proc. Int. Conf. Smart City Appl. Cham, Switzerland: Springer, Mar. 2023, pp. 311–326.
[19] T. Mazhar, I. Haq, A. Ditta, S. A. H. Mohsan, F. Rehman, I. Zafar, J. A. Gansau, and L. P. W. Goh, ''The role of machine learning and deep learning approaches for the detection of skin cancer,'' Healthcare, vol. 11, no. 3, p. 415, Feb. 2023.

A. NAGARJUN received the B.E. degree in computer science and engineering from the Sri Jayachamarajendra College of Engineering, Mysuru, and the M.Tech. degree in data science from the JSS Science and Technology University, Mysuru. He was a Project Intern with Vigyanlabs, Mysuru. He is currently an Assistant Professor with the Department of Information Science and Engineering, JSS Science and Technology University. He has published papers in international conferences. His research interests include machine learning, artificial intelligence, and image processing.