Handcrafted Features Vs Deep-Learned Features - Hermite Polynomial Classification of Liver Images
Guilherme B. Rozendo†, Guilherme F. Roberto¶, Alessandra Lumini, Leandro A. Neves†, Marcelo Z. do Nascimento∗
∗ Faculty of Computer Science (FACOM), Federal University of Uberlândia (UFU), Brazil
† Department of Computer Science and Statistics (DCCE), São Paulo State University (UNESP), Brazil
‡ Science and Technology Institute, Federal University of São Paulo (UNIFESP), Brazil
§ Federal Institute of Triângulo Mineiro (IFTM), Brazil
¶ Faculty of Engineering, University of Porto (FEUP), Portugal
Department of Computer Science and Engineering (DISI) - University of Bologna, Italy
E-mail: [email protected]
Abstract—Liver cancer is one of the most common types of cancer according to World Health Statistics. Computer-aided diagnosis (CAD) systems are used in medical imaging for liver tumor identification and classification. Texture is a type of feature that can provide measurements of properties such as smoothness and regularity of the image. Handcrafted techniques based on fractal geometry allow quantifying self-similarity properties present in images. However, new studies have shown that using information obtained from deep-learned feature maps can maximize the results of classical classifiers. This work presents an approach that investigates descriptors obtained with handcrafted and deep-learned features, feature selection methods and the Hermite polynomial (HP) algorithm to classify liver histological images. The results were evaluated using metrics such as accuracy (ACC) and the imbalance accuracy metric (IAM). The association of fractal features, Lasso regularization and the HP algorithm achieved an IAM of 0.98 and an ACC of 99.53%, which is relevant when compared with other studies in the literature.

Keywords—Liver Tissue, Hermite Polynomial, Handcrafted Features, Deep-learned Features, Feature Selection.

I. INTRODUCTION

Pattern recognition and classification have gained popularity in recent years, particularly in the field of medical imaging, in order to develop computer-aided diagnosis (CAD) systems. These tools are employed as a supplementary reading aid and include the evaluation of the digitized image, pre-processing, segmentation, feature extraction and classification. The feature extraction and selection modules are crucial for improving the classification performance, allowing a reduction in computational complexity and dimensionality [1].

Liver cancer (LC) is the sixth most common type of cancer and the third leading cause of death from cancer worldwide. In the year 2022, 960,000 diagnoses of LC were recorded, accounting for 830,000 deaths [2]. Excessive alcohol use, smoking, family history, diabetes, obesity, hepatitis B or C virus infection and low immunity are factors that lead to LC [3]. In recent years, a significant number of researchers have been working on the development of the stages of a CAD system for the analysis of lesions, such as tumors present in the liver [4], [5].

The choice of the most relevant features is directly related to the image quality, the method used in the feature extraction steps and the classification algorithm. A variety of feature extraction techniques can influence the performance of classifiers in CAD systems, for example through overtraining [6] or heuristic dependence on a specific group of features [7]. Among the approaches, an important technique for description is the quantification of an image's texture. Texture provides measurements of properties such as smoothness, roughness, and regularity of the image. Techniques based on fractal dimension and lacunarity allow quantifying self-similarity properties present in images, which cannot be defined by Euclidean geometry [8].

Convolutional neural networks (CNNs) are approaches that allow the extraction of image feature patterns, taking into account multiple observation scales. This is accomplished through deep layers that enable the quantification of global and local patterns, known as deep-learned features. Recent studies have shown that using information obtained from CNN feature maps can maximize the results of classical classifiers [9], [10]. However, there are certain real-world problem-solving situations where available training data is limited and large datasets do not exist. In these cases, applying deep learning methods is not a viable option, and conventional feature extraction and classification techniques may be a more suitable solution [11].

Machine learning (ML) classification is a task that employs features as the basis for assigning class labels to patterns obtained from the input images [12]. Among the ML methods, the polynomial classifier is an algorithm that has shown promising results, especially when dealing with non-linearly separable data. This algorithm is a parameterized method that exponentially expands its polynomial basis according to the number of elements in the data vector and the function degree. Polynomial algorithms have demonstrated relevant performance in the analysis and labeling of samples during the classification stage. Among the possible bases, Hermite polynomials (HP) generate a complete orthogonal basis of the Hilbert space, satisfying the orthogonality and completeness conditions of that space's family of elements.
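As a concrete illustration of this orthogonality property, the short sketch below numerically checks that the probabilists' Hermite polynomials are orthogonal under the standard Gaussian weight. It relies on NumPy's hermite_e module and SciPy quadrature, and is provided only as an illustrative aside, not as part of the approach described in this work.

    import numpy as np
    from numpy.polynomial import hermite_e as He   # probabilists' Hermite polynomials He_n
    from scipy.integrate import quad

    def gauss_inner_product(m, n):
        """<He_m, He_n> under the standard Gaussian weight exp(-x^2/2)/sqrt(2*pi)."""
        integrand = lambda x: (He.hermeval(x, [0] * m + [1]) *
                               He.hermeval(x, [0] * n + [1]) *
                               np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi))
        value, _ = quad(integrand, -np.inf, np.inf)
        return value

    # Orthogonality: the inner product is ~0 for m != n and ~n! for m == n.
    for m in range(4):
        for n in range(4):
            print(m, n, round(gauss_inner_product(m, n), 6))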
Fig. 1. Block diagram of the main stages of the proposed approach.
Fig. 2. Female mouse liver tissue samples from the LA database with the following ages: (a) 1, (b) 6, (c) 16 and (d) 24 months. Source: [?]
the model entirely. Larger penalties lead to coefficient values closer to zero, promoting the creation of simpler models.

  \underbrace{F(\theta, X, y)}_{\text{Original Cost Function}} + \underbrace{\lambda \sum_{j=1}^{p} |\beta_j|}_{\text{Lasso Regularization}}.   (1)

The optimal value of λ, which produces the highest classification rate, can be determined iteratively using a cross-validation procedure. In this study, the range of λ values explored was between 10⁻⁶ and 10⁻², with a total of 3×10⁶ iterations. These values were chosen based on an empirical evaluation of the feature vectors. During the training process, the regularization stage was applied in each fold to identify the most relevant features for classification. This approach reduced the number of features required for the classification stage [24]. At this stage, the features were ordered according to their importance weights, and four groups were then obtained by selecting the top 10, 15, 20 and 25 features. The selected features were applied to all of the classifiers investigated in this study.
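For illustration, a minimal sketch of this selection step is shown below, assuming a feature matrix X and labels y are already available. It uses scikit-learn's LassoCV to choose λ (alpha) by cross-validation over the range quoted above and then keeps the features with the largest absolute coefficients. The variable names and the use of scikit-learn are assumptions made for the example, not the exact implementation used in this work.

    import numpy as np
    from sklearn.linear_model import LassoCV
    from sklearn.preprocessing import StandardScaler

    def lasso_rank_features(X, y, n_lambdas=100, k=25):
        """Rank features by the magnitude of Lasso coefficients, with lambda chosen by 10-fold CV."""
        X_std = StandardScaler().fit_transform(X)        # Lasso is sensitive to feature scale
        lambdas = np.logspace(-6, -2, n_lambdas)         # search range 1e-6 .. 1e-2, as in the text
        model = LassoCV(alphas=lambdas, cv=10, max_iter=int(3e6)).fit(X_std, y)
        ranking = np.argsort(-np.abs(model.coef_))       # most important features first
        return ranking[:k], model.alpha_

    # Example with hypothetical arrays: keep the 25 top-ranked features for the classification stage.
    # selected_idx, best_lambda = lasso_rank_features(X_train, y_train, k=25)
    # X_train_sel = X_train[:, selected_idx]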
D. Relief-F

The Relief-F method is a feature weighting technique that analyzes the dataset information and assigns weight values according to the association between each feature and the class. This technique selects the features based on the weight values that help distinguish instances that are close to each other [25]. According to [26], Relief-F has the ability to reduce the dimension of the feature set by removing negative values, which can maximize the efficiency of algorithms that perform multiclass classification. The Relief-F method also offers the ability to handle noisy data, efficient processing time and high accuracy.

For this method, Eq. 2 presents the operation of the feature selection. In this equation, f_{t,i} describes the value of the instance x_i for the feature f_i, P represents the distance measure, and the terms f_{dc(x_i)} and f_{sc(x_i)} are the values of the points of the i-th neighboring features of x_i with different or equal labels, respectively [27].

  f_i = \frac{1}{2} \sum_{i=1}^{N} \left[ P\left(f_{t,i} - f_{dc(x_i)}\right) - P\left(f_{t,i} - f_{sc(x_i)}\right) \right]   (2)

Similarly to the regularization method, the vectors constructed with this algorithm were composed of 10, 15, 20 and 25 features, according to the importance level assigned to them by this method.
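The sketch below gives a simplified, single-neighbor version of this weighting in the spirit of Eq. 2, again assuming arrays X and y. Production implementations (for example, the ReliefF variants in the skrebate package) use several nearest hits and misses per instance and handle multiclass weighting more carefully; this code is only an illustrative approximation.

    import numpy as np

    def relief_weights(X, y):
        """Simplified Relief weighting: compare each instance with its nearest hit (same class)
        and nearest miss (different class), accumulating the feature-wise differences of Eq. 2."""
        n, d = X.shape
        weights = np.zeros(d)
        for i in range(n):
            dist = np.linalg.norm(X - X[i], axis=1)
            dist[i] = np.inf                                  # exclude the instance itself
            same = np.where(y == y[i])[0]
            diff = np.where(y != y[i])[0]
            hit = same[np.argmin(dist[same])]                 # nearest neighbor with the same label
            miss = diff[np.argmin(dist[diff])]                # nearest neighbor with a different label
            weights += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
        return weights / 2                                    # the 1/2 factor of Eq. 2; the ranking is unchanged

    # Features are then ranked by weight and the top 10, 15, 20 or 25 are retained.
    # ranking = np.argsort(-relief_weights(X_train, y_train))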
E. Classification

A polynomial classifier is a supervised approach that expands the input feature vector x = [x_1 ... x_d]^T, where d represents the number of features and T is the transpose operation, to a higher dimension in a nonlinear manner. This technique enables the generation of linear approximations in this space, which can be used to classify the input data into the desired output [28].
In [29], Thangavelu describes that the HP algorithm is orthogonal on the interval (−∞, ∞) with respect to the standard Gaussian weight function and provides advantages in function approximation. The HP algorithm can be defined mathematically as shown in Eq. 3:

  HP_n(x) = (-1)^n e^{x^2/2} \frac{d^n}{dx^n}\left[ e^{-x^2/2} \right].   (3)

As a result, the HP method can be computed using the recurrence relation for any order n > 0 (see Eq. 4):

  HP_{n+1}(x) = x\,HP_n(x) - n\,HP_{n-1}(x),   (4)

where HP_0(x) = 1 and HP_1(x) = x.

Using the recurrence equation, the polynomials of degrees 2 to 4 are obtained by Eqs. (5), (6), and (7):

  HP_2(x) = x^2 - 1,   (5)
  HP_3(x) = x^3 - 3x,   (6)
  HP_4(x) = x^4 - 6x^2 + 3.   (7)

For the classification of image groups, the feature vectors defined by x were used as inputs and then expanded in terms of the polynomial basis HP_n(x).

For the multiclass classification, the decision rule can be expressed using Eq. 8. In this equation, the i-th problem is addressed by a linear discriminant function that separates the points assigned to ω_i from those not assigned to ω_i. The decision surface, given by g(x) = 0, demarcates the boundary between the classes ω_i and not(ω_i) for a multiclass problem.

  \text{Decide } \begin{cases} \omega_i, & \text{if } g(x) > 0 \\ \text{not}(\omega_i), & \text{if } g(x) < 0 \end{cases}   (8)

Finally, the output g(x) can be obtained by Eq. 9:

  g(x) = a^T HP_n(x),   (9)

where a denotes the coefficient vector of the polynomial basis function, HP_n(x) represents the Hermite basis function, and n corresponds to the order or degree of the polynomial. This algorithm was divided into two steps, namely training and testing, which are detailed in [30].

In this work, the degree of the polynomial expansion was determined empirically, and the most relevant results were achieved using a third-order polynomial for class separation.
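To make Eqs. (4)–(9) concrete, the sketch below expands each feature through the Hermite basis up to third order and fits one coefficient vector a per class by least squares in a one-vs-rest scheme, classifying by the largest discriminant g(x) = aᵀHP(x). It is a minimal illustration of the mechanism rather than the training and testing procedure detailed in [30]; the helper names and the least-squares fit are assumptions.

    import numpy as np

    def hermite_basis(x, degree=3):
        """Evaluate HP_0..HP_degree at x using the recurrence HP_{n+1} = x*HP_n - n*HP_{n-1} (Eq. 4)."""
        polys = [np.ones_like(x), x]
        for n in range(1, degree):
            polys.append(x * polys[n] - n * polys[n - 1])
        return np.stack(polys[:degree + 1], axis=-1)

    def expand(X, degree=3):
        """Expand each feature of X in the Hermite basis and flatten into one long vector per sample."""
        return hermite_basis(X, degree).reshape(X.shape[0], -1)

    def fit_one_vs_rest(X, y, degree=3):
        """Fit one coefficient vector a per class by least squares on targets +1 / -1 (Eqs. 8-9)."""
        Z = expand(X, degree)
        return {c: np.linalg.lstsq(Z, np.where(y == c, 1.0, -1.0), rcond=None)[0] for c in np.unique(y)}

    def predict(models, X, degree=3):
        """Assign each sample to the class whose discriminant g(x) = a^T HP(x) is largest."""
        Z = expand(X, degree)
        classes = list(models)
        scores = np.column_stack([Z @ models[c] for c in classes])
        return np.array(classes)[np.argmax(scores, axis=1)]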
F. Evaluation of Methods

The HP algorithm was compared with three different classifiers based on the primary supervised ML approaches, namely function-based, ensemble learning, and tree-based. The selected algorithms were logistic regression (LGT), a model that integrates tree induction and additive logistic regression; the multilayer perceptron (MLP), an approach that employs a system of interconnected neurons or nodes to map nonlinear relationships between input and output vectors; and the random forest (RF), a strategy based on a tree ensemble in which bootstrapping generates, for each tree, subsets of observations not included in that tree's growing process. These algorithms and the HP classifier were evaluated using the cross-validation method with k = 10, with 90% of the dataset used for training and 10% for testing the model.
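A possible setup for this comparison with scikit-learn's 10-fold protocol is sketched below; the MLP and RF estimators and their settings are illustrative assumptions, and the HP classifier would be evaluated through the same interface.

    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.neural_network import MLPClassifier
    from sklearn.ensemble import RandomForestClassifier

    # 10-fold protocol: 90% of the samples train each model, the remaining 10% test it.
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

    candidates = {
        "MLP": MLPClassifier(max_iter=2000),
        "RF": RandomForestClassifier(n_estimators=100),
    }

    # Assuming a selected feature matrix X_sel and labels y are available:
    # for name, clf in candidates.items():
    #     scores = cross_val_score(clf, X_sel, y, cv=cv, scoring="accuracy")
    #     print(name, scores.mean(), scores.std())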
The IAM and ACC were used to evaluate the classification algorithms. ACC is a widely used metric in image classification analysis because it is easy to calculate and interpret, and it ranges between 0 and 100%. A perfect classification results in an accuracy of 100%. However, ACC may not be a reliable measure for unbalanced class problems. The IAM metric, defined by Eq. 10, was also applied in the experiments to address the issue of unbalanced labels and improve the recall of results based on the data features.

  IAM = \frac{1}{k} \sum_{i=1}^{k} \frac{c_{ii} - \max\left( \sum_{j \neq i} c_{ij}, \sum_{j \neq i} c_{ji} \right)}{\max(c_{\cdot i}, c_{i \cdot})},   (10)

where c_{ij} is the confusion matrix generated by the classifier. The maximum of the total off-diagonal items (\sum_{j \neq i} c_{ij} or \sum_{j \neq i} c_{ji}) is subtracted from the diagonal value (c_{ii}), the result is divided by the maximum sum in the corresponding row or column (\max(c_{\cdot i}, c_{i \cdot})), and the terms are finally averaged over the k classes to obtain the expectation.
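Eq. 10 translates directly into a short function over the confusion matrix, as sketched below; the example matrix at the end is hypothetical and included only to show the call.

    import numpy as np

    def iam(conf):
        """Imbalance accuracy metric (Eq. 10) computed from a k x k confusion matrix."""
        conf = np.asarray(conf, dtype=float)
        k = conf.shape[0]
        terms = []
        for i in range(k):
            off_row = conf[i, :].sum() - conf[i, i]           # sum_{j != i} c_ij
            off_col = conf[:, i].sum() - conf[i, i]           # sum_{j != i} c_ji
            numer = conf[i, i] - max(off_row, off_col)
            denom = max(conf[i, :].sum(), conf[:, i].sum())   # max(c_i., c_.i)
            terms.append(numer / denom)
        return sum(terms) / k

    # Hypothetical 3-class confusion matrix: IAM is 1 for perfect predictions
    # and moves toward 0 (or below) as misclassifications grow.
    print(iam([[50, 0, 0], [0, 48, 2], [1, 3, 46]]))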
III. RESULTS AND DISCUSSION

Table I shows the results achieved with the IAM and ACC metrics for the HP classifier associated with the Lasso regularization, considering feature sets of 10 to 25 features in steps of five. This table presents the sets of features obtained with three approaches, one using handcrafted techniques and the others using deep-learned features. Analyzing the data, it can be seen that the HP algorithm and its associations showed more relevant results with the handcrafted descriptor using fractal techniques and 25 features, achieving an ACC of 99.53% and an IAM value of 0.98. Regarding the descriptors obtained with the ResNet-50 model, the results were 89.87% for ACC and 0.49 for the IAM metric. The VGG-19 model presented 91.47% and 0.57 for the ACC and IAM metrics, respectively, with 25 features. In this case, the use of 20 features also showed similar results for the ACC metric, but on IAM the performance with 25 features was better, including a lower standard deviation. IAM is a more robust metric for evaluating classifiers on multiclass datasets, which offers benefits with imbalanced data. It is noticed in these experiments that the fractal descriptors were more robust regarding this metric, as the performances ranged from 0.86 to 0.98. When applied to the descriptors from CNNs, this metric was lower in all experiments, regardless of the number of features used. IAM has a representation range between -1 and 1 and, when this value is closer to zero, it indicates that the numbers of correctly and incorrectly classified instances are close. As mentioned by the authors in [31], the IAM metric allows for a better evaluation
of the classifier's behavior since it provides information about the mislabeling of instance classes.

TABLE I: Results obtained with the HP classification algorithm and Lasso regularization.

  Data        Number   ACC (%)       IAM
  Fractal     10       97.44±0.15    0.86±0.09
  Fractal     15       98.49±0.14    0.91±0.07
  Fractal     20       99.24±0.09    0.95±0.06
  Fractal     25       99.53±0.08    0.98±0.03
  ResNet-50   10       80.88±0.03    0.12±0.12
  ResNet-50   15       85.05±0.03    0.28±0.13
  ResNet-50   20       87.88±0.03    0.40±0.13
  ResNet-50   25       89.87±0.01    0.49±0.07
  VGG-19      10       85.32±0.03    0.30±0.10
  VGG-19      15       89.58±0.02    0.49±0.10
  VGG-19      20       91.28±0.02    0.56±0.11
  VGG-19      25       91.47±0.02    0.57±0.10

TABLE II: Results obtained with the HP classification algorithm and the Relief-F selector employed in LA images.

  Data        Number   ACC (%)       IAM
  Fractal     10       93.27±0.01    0.64±0.06
  Fractal     15       95.74±0.02    0.76±0.10
  Fractal     20       98.67±0.01    0.91±0.05
  Fractal     25       99.43±0.01    0.96±0.04
  ResNet-50   10       94.13±0.02    0.70±0.09
  ResNet-50   15       96.60±0.01    0.80±0.08
  ResNet-50   20       97.64±0.01    0.84±0.06
  ResNet-50   25       98.39±0.01    0.88±0.05
  VGG-19      10       92.32±0.02    0.60±0.13
  VGG-19      15       95.17±0.01    0.73±0.08
  VGG-19      20       96.21±0.02    0.78±0.10
  VGG-19      25       97.44±0.02    0.85±0.10

The results achieved by the HP algorithm in association with the Relief-F selector are presented in Table II for intervals between 10 and 25 features. Upon evaluation, it is noted that the HP algorithm provided the best values using fractal descriptors with 25 features, with an ACC of 99.43% and an IAM of 0.96. The same behavior occurred with the deep-learned descriptors, using ResNet-50 and VGG-19, where 25 features with Relief-F allowed ACC values of 98.39% and 97.44%, respectively. With the IAM metric, these descriptors using Relief-F provided better results than those achieved with HP and Lasso regularization. However, these values were lower compared to those obtained with the fractal descriptors (25 features), Relief-F and the HP classifier. In addition, the ResNet-50 network allowed an IAM value of 0.88, considering the same feature number. VGG-19 achieved an IAM of 0.85 with the same number of features.

The results depicted in these tables showed that the association of the 25 most relevant fractal descriptors, Lasso regularization, and the HP classifier allowed the achievement of more effective values. It is also possible to verify that the standard deviation values with handcrafted features were lower for the IAM metric, regardless of the feature selection technique.

Based on the results presented in Tables I and II, the association of the HP algorithm with the Lasso regularization (25 features) was compared with ML algorithms. Table III displays the results achieved by these strategies. The MLP and RF algorithms achieved values of 96.04% and 94.15% for ACC and 0.97 for the IAM metric for both methods. The LGT algorithm obtained 91.53% ACC and 0.96 for IAM. It is important to highlight that the standard deviation of all classifiers was below 0.05 for both metrics. According to the results, the ML algorithms had lower performance than the HP algorithm and the proposed associations.

TABLE III: Classification obtained with the fractal descriptor, Lasso regularization and ML algorithms.

  Classifiers   Number   ACC (%)       IAM
  LGT           25       91.53±0.04    0.96±0.02
  MLP           25       96.04±0.01    0.97±0.01
  RF            25       94.15±0.02    0.97±0.01

The indirect analysis of the proposed association and other recent works that investigated computational techniques for the classification of LA images is presented in Table IV. It can be observed that the association of the HP algorithm, Lasso, and fractal descriptors showed promising results compared to other works available in the literature, indicating a relevant solution to assist experts in the analysis of this type of image.

TABLE IV: Analysis of the accuracy metric from different approaches developed in the literature.

  Reference                 Approach                                                   ACC (%)
  Roberto et al. [5]        ResNet-50, FD, LAC and PERC (DLHC)                         99.62
  Proposed Method           Handcrafted Features, Lasso and HP Classifier              99.53
  Andrearczyk et al. [32]   Collective T-CNN                                           98.20
  Huang et al. [33]         Novel set of image features and Ensemble SVM Classifier    97.01
  Watanabe et al. [4]       GIST descriptors, PCA and LDA (HC)                         88.40

IV. CONCLUSION

The present study introduced a computational tool for analyzing LA images based on fractal descriptors, Lasso regularization, and the HP classifier. The analyses presented in this study explored the association between feature selectors and the HP algorithm for building prediction and classification models of LA histological images. The obtained results indicated that the proposed approach using a set of 25 handcrafted features selected through the Lasso regularizer presented the best performance, with IAM and ACC values of 0.98 and 99.53%, respectively.

Values obtained with the CNN-based descriptors were more relevant when the Relief-F selector was applied, considering the ACC metric. The same behavior is observed with the IAM metric.
In future studies, it is intended to analyze which features provide the most satisfactory results, as well as to measure the gain in relation to the computational cost of the HP algorithm when performing the training and testing stages with the proposed approach. Furthermore, the number of mice is low (16 different animals) and images from the same animal can appear in both the training and test sets; however, great variability can be found among cells of the same mouse. In addition, we plan to explore further feature selection techniques and to evaluate the effectiveness of the proposed method on other histological image datasets.

ACKNOWLEDGMENT

The authors gratefully acknowledge the financial support of the National Council for Scientific and Technological Development - CNPq (Grants #313643/2021-0 and #311404/2021-9), the State of Minas Gerais Research Foundation - FAPEMIG (Grants #APQ-00578-18 and #APQ-01129-21) and the São Paulo Research Foundation - FAPESP (Grant #2022/03020-1).

REFERENCES

[1] E. Tasci and A. Ugur, "A novel pattern recognition framework based on ensemble of handcrafted features on images," Multimedia Tools and Applications, vol. 81, no. 21, pp. 30195–30218, 2022.

[2] U. Cinar, R. Cetin Atalay, and Y. Y. Cetin, "Human hepatocellular carcinoma classification from H&E stained histopathology images with 3D convolutional neural networks and focal loss function," Journal of Imaging, vol. 9, no. 2, p. 25, 2023.

[3] K. M. Napte and A. Mahajan, "Liver segmentation using marker controlled watershed transform," International Journal of Electrical and Computer Engineering, vol. 13, no. 2, p. 1541, 2023.

[4] K. Watanabe, T. Kobayashi, and T. Wada, "Semi-supervised feature transformation for tissue image classification," PLoS One, vol. 11, no. 12, p. e0166413, 2016.

[5] G. F. Roberto, A. Lumini, L. A. Neves, and M. Z. do Nascimento, "Fractal neural network: A new ensemble of fractal geometry and convolutional neural networks for the classification of histology images," Expert Systems with Applications, vol. 166, p. 114103, 2021.

[6] S. Tripathi and S. K. Singh, "Ensembling handcrafted features with deep features: an analytical study for classification of routine colon cancer histopathological nuclei images," Multimedia Tools and Applications, 2020.

[7] T. G. Dietterich, "Ensemble methods in machine learning," in International Workshop on Multiple Classifier Systems. Springer, 2000, pp. 1–15.

[8] M. Ivanovici, N. Richard, and H. Decean, "Fractal dimension and lacunarity of psoriatic lesions - a colour approach," Medicine, vol. 6, no. 4, p. 7, 2009.

[9] E. F. Ohata, J. V. S. d. Chagas, G. M. Bezerra, M. M. Hassan, V. H. C. de Albuquerque et al., "A novel transfer learning approach for the classification of histological images of colorectal cancer," The Journal of Supercomputing, vol. 77, no. 9, pp. 9494–9519, 2021.

[10] N. Kumar, M. Sharma, V. P. Singh, C. Madan, and S. Mehandia, "An empirical study of handcrafted and dense feature extraction techniques for lung and colon cancer classification from histopathological images," Biomedical Signal Processing and Control, vol. 75, p. 103596, 2022.

[11] H. Alshazly, C. Linse, E. Barth, and T. Martinetz, "Handcrafted versus CNN features for ear recognition," Symmetry, vol. 11, no. 12, 2019. [Online]. Available: https://www.mdpi.com/2073-8994/11/12/1493

[12] E. Zanaty and A. Afifi, "Generalized Hermite kernel function for Support Vector Machine classifications," International Journal of Computers and Applications, vol. 42, no. 8, pp. 765–773, 2020.

[13] A. S. Martins, L. A. Neves, P. R. Faria, T. A. Tosta, D. O. Bruno, L. C. Longo, and M. Z. do Nascimento, "Colour feature extraction and polynomial algorithm for classification of lymphoma images," in Iberoamerican Congress on Pattern Recognition. Springer, 2019, pp. 262–271.

[14] D. C. Pereira, L. C. Longo, T. A. Tosta, A. S. Martins, A. B. Silva, P. R. de Faria, L. A. Neves, and M. Z. do Nascimento, "Classification of lymphomas images with polynomial strategy: An application with ridge regularization," in 2022 35th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), vol. 1. IEEE, 2022, pp. 258–263.

[15] J. M. Zahn, S. Poosala, A. B. Owen, D. K. Ingram, A. Lustig, A. Carter, A. T. Weeraratna, D. D. Taub, M. Gorospe, K. Mazan-Mamczarz et al., "AGEMAP: a gene expression database for aging in mice," PLoS Genetics, vol. 3, no. 11, p. e201, 2007.

[16] A. S. Mubarak, S. Serte, F. Al-Turjman, Z. S. Ameen, and M. Ozsoz, "Local binary pattern and deep learning feature extraction fusion for COVID-19 detection on computed tomography images," Expert Systems, 2021.

[17] M. G. Ribeiro, L. A. Neves, M. Z. do Nascimento, G. F. Roberto, A. S. Martins, and T. A. A. Tosta, "Classification of colorectal cancer based on the association of multidimensional and multiresolution features," Expert Systems with Applications, vol. 120, pp. 262–278, 2019.

[18] G. F. Roberto, L. A. Neves, M. Z. Nascimento, T. A. Tosta, L. C. Longo, A. S. Martins, and P. R. Faria, "Features based on the percolation theory for quantification of Non-Hodgkin Lymphomas," Computers in Biology and Medicine, vol. 91, pp. 135–147, 2017.

[19] J. Hoshen and R. Kopelman, "Percolation and cluster distribution. I. Cluster multiple labeling technique and critical concentration algorithm," Physical Review B, vol. 14, no. 8, p. 3438, 1976.

[20] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.

[21] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.

[22] "torchvision.models," Jun. 2021. [Online]. Available: https://pytorch.org/vision/stable/models.html

[23] H. Rhys, Machine Learning with R, the tidyverse, and mlr. Simon and Schuster, 2020.

[24] W. N. van Wieringen, "Lecture notes on ridge regression," arXiv preprint arXiv:1509.09169, 2021.

[25] K. Liu, Q. Chen, and G.-H. Huang, "An efficient feature selection algorithm for gene families using NMF and ReliefF," Genes, vol. 14, no. 2, p. 421, 2023.

[26] Y. M. Yacob, H. Alquran, W. A. Mustafa, M. Alsalatie, H. A. M. Sakim, and M. S. Lola, "H. pylori related atrophic gastritis detection using enhanced convolution neural network (CNN) learner," Diagnostics, vol. 13, no. 3, p. 336, 2023.

[27] L. O. Felix, D. H. C. de Sá Só, U. A. B. V. Monteiro, B. M. Castro, L. A. V. Pinto, C. A. O. Martins et al., "A feature selection committee method using empirical mode decomposition for multiple fault classification in a wind turbine gearbox," 2023.

[28] T. Shanableh and K. Assaleh, "Feature modeling using polynomial classifiers and stepwise regression," Neurocomputing, vol. 73, no. 10-12, pp. 1752–1759, 2010.

[29] S. Thangavelu, "Hermite and Laguerre semigroups: Some recent developments," in Séminaires et Congrès (to appear), 2006.

[30] A. S. Martins, L. A. Neves, P. R. de Faria, T. A. Tosta, L. C. Longo, A. B. Silva, G. F. Roberto, and M. Z. do Nascimento, "A Hermite polynomial algorithm for detection of lesions in lymphoma images," Pattern Analysis and Applications, vol. 24, pp. 523–535, 2021.

[31] E. Mortaz, "Imbalance accuracy metric for model selection in multi-class imbalance classification problems," Knowledge-Based Systems, vol. 210, p. 106490, 2020.

[32] V. Andrearczyk and P. F. Whelan, "Deep learning for biomedical texture image analysis," in Proceedings of the Irish Machine Vision & Image Processing Conference. Irish Pattern Recognition & Classification Society (IPRCS), 2017.

[33] H.-L. Huang, M.-H. Hsu, H.-C. Lee, P. Charoenkwan, S.-J. Ho, and S.-Y. Ho, "Prediction of mouse senescence from HE-stain liver images using an ensemble SVM classifier," in Intelligent Information and Database Systems: 5th Asian Conference, ACIIDS 2013, Kuala Lumpur, Malaysia, March 18-20, 2013, Proceedings, Part II. Springer, 2013, pp. 325–334.