
2017 Workshop of Computer Vision

Biometric Recognition based on Fingerprint: A Comparative Study
Bruno Matarazzo Durú, Jonas Mendonça Targino, Clodoaldo Aparecido de Moraes Lima
University of São Paulo - EACH, São Paulo, Brazil
Email: [email protected], [email protected], [email protected]

Abstract—Fingerprint recognition is regarded as one of the most popular and reliable techniques for automatic personal identification due to the well-known distinctiveness and persistence of fingerprints. A critical issue in biometric system design is the choice of classifier. In this paper, we conducted a systematic performance evaluation of five classifiers (Neural Networks, Support Vector Machines configured with a radial basis function kernel, Optimum Path Forest, K-nearest neighbors and Extreme Learning Machine) for the task of biometric recognition based on fingerprints. Experimental results on a publicly available database are reported, whereby we observe that the Support Vector Machine significantly outperforms the other classifiers according to the accuracy measure.

I. INTRODUCTION

Biometric recognition refers to the automated identification of individuals based on their physical and behavioral characteristics such as fingerprint, face, iris, and voice. Biometric recognition offers an alternative to traditional methods, such as passports, ID cards, driving licenses or PIN numbers. Thus, biometric recognition can be used to identify individuals in surveillance operations where covert recognition is required, or in scenarios where a person may attempt to conceal their true identity (e.g., by using forged documents to claim social welfare benefits). Consequently, the application domain of biometrics far exceeds that of passwords and tokens. Fingerprint recognition is regarded as one of the most popular and reliable techniques for automatic personal identification due to the well-known distinctiveness and persistence of fingerprints.

A biometric system comprises (i) an image acquisition module, which acquires the image of a biometric modality and submits it to the system for further processing; (ii) a feature extraction module, which processes the acquired image, extracting the salient or discriminatory features; (iii) a matcher module, which matches the extracted features of a probe image with those of a gallery image to obtain a match score, while an embedded decision-making module verifies or rejects the claimed identity based on the match score; and (iv) a database module, which contains the digital representations of previously acquired samples, very often termed templates.

In order to obtain higher accuracy, various features, such as the FingerCode, ridge distributions and directional images, have also been actively investigated. Jain et al. [12] proposed the FingerCode, a method that uses a Gabor filter to extract directional ridge flow, and Park [21] used an orientation image filtered by a fast Fourier transform. Chong et al. [6] employed both a geometric grouping and a global geometric shape analysis of fingerprint ridges, while Cappelli et al. [2] proposed a directional image that models fingerprints with a graph. Nagaty [16] extracted a string of symbols using block directional images of fingerprints, while Chang and Fan [3] proposed a ridge distribution model consisting of a combination of 10 basic ridge patterns with different ridge distribution sequences. In this paper, we use the approach proposed by Jain et al. [12] for feature extraction.

One of the problems in recognizing a fingerprint is pattern recognition, since the purpose is to classify objects of interest into one of several categories or classes. Generally, the objects of interest are called patterns; in the case of fingerprint recognition they are feature vectors, or FingerCodes, extracted from an input fingerprint image using feature extraction techniques.

The classification problem can be categorized into binary classification problems (two-class classification) and multi-class classification problems. Nowadays, binary or multiclass classification problems can be solved by many algorithms, for instance Neural Networks, Support Vector Machines, Optimum Path Forest, K-nearest neighbors and Extreme Learning Machine. A critical issue in biometric system design is the choice of classifier.

In this paper, we focus mostly on the classification stage, taking into account the highly complex behavior displayed by fingerprints. The approach consists of two main modules: a feature extractor based on FingerCodes that generates a feature vector from a fingerprint image, and a classifier that produces the class based on the feature vector.

In Section II, we describe the feature extraction technique based on FingerCodes. In Section III we present the classifiers; in Section IV we show the results of the classifications and the accuracy rates; and in Section V we conclude the study.

II. FEATURE EXTRACTION

It is desirable to obtain representations for fingerprints which are scale, translation, and rotation invariant. Scale invariance is not a significant problem since most fingerprint images can be scaled as per the dpi specification of the sensors. The rotation and translation invariance could

0-7695-6357-0/17/$31.00 ©2017 IEEE


DOI 10.1109/WVC.2017.00022
be accomplished by establishing a reference frame based on the intrinsic fingerprint characteristics which are rotation and translation invariant. It is also possible to establish many frames of reference based upon several landmark structures in a fingerprint to obtain multiple representations.

The four main steps of the FingerCode [12] feature extraction algorithm are:

1) Determine a reference point and region of interest for the fingerprint image;
2) Tessellate the region of interest around the reference point;
3) Filter the region of interest in eight different directions using a bank of Gabor filters (eight directions are required to completely capture the local ridge characteristics in a fingerprint, while only four directions are required to capture the global configuration);
4) Compute the average absolute deviation from the mean of grey values in individual sectors of the filtered images to define the feature vector, or FingerCode.

III. CLASSIFICATION ALGORITHMS

In this section we review the use of K-nearest neighbors, Optimum Path Forest, Neural Networks, Support Vector Machines and Extreme Learning Machine in classification problems.

A. K-nearest neighbors - k-NN

k-NN was formally proposed more than 60 years ago and is still a very popular and widely studied classifier. The literature presents many applications using k-NN, such as breast cancer diagnosis [22], text classification [26] and [9], emotion recognition [4], and speaker identification [13], among many others. One of the main weaknesses of the k-NN classifier is that all the training samples have to be stored in memory, and to perform classification it is necessary to compute the distance from the test sample to all training samples, then look for the k samples that are closest, and finally perform a voting scheme to decide the class of the test sample. As the number of samples in the training set increases, storing all of them in computer memory may not be feasible, and the classification procedure may take too much time due to the distance computations.

B. Optimum Path Forest - OPF

Unlike k-NN, the Optimum Path Forest (OPF) is a very recent classifier, proposed in the 2000s by [18]. It is non-parametric, fast, simple, multi-class, does not make any assumption about the shapes of the classes, and can handle some degree of overlapping between classes [18]. OPF has been successfully used in many applications, such as laryngeal pathology detection [20], face recognition [17], rainfall estimation [8], and image categorization [19], among many others. The OPF is a graph-based classification technique that reduces each class to one or more prototypes (trees). Each node of the graph represents a sample of the training set, acting more specifically as a descriptor, and belongs to the tree of the prototype to which it has the highest degree of connection according to the edges, which are the distances between the descriptors.

C. Neural Networks

A neural network is a computational structure inspired by the study of biological neural processing. There are many different types of neural networks, from relatively simple to very complex, just as there are many theories on how biological neural processing works. A layered feed-forward neural network has layers, or subgroups of processing elements. A layer of processing elements makes independent computations on the data it receives and passes the results to another layer. The next layer may in turn make its independent computations and pass on the results to yet another layer. Finally, a subgroup of one or more processing elements determines the output of the network. Each processing element makes its computation based upon a weighted sum of its inputs. The first layer is the input layer and the last the output layer. The layers placed between the first and the last layers are the hidden layers. The processing elements are seen as units similar to the neurons in a human brain, and hence they are referred to as cells, or artificial neurons. A threshold function is sometimes used to qualify the output of a neuron in the output layer. Synapses between neurons are referred to as connections, which are represented by the edges of a directed graph in which the nodes are the artificial neurons. Neural networks consist of small units called neurons, and these are connected to each other in such a way that they can pass signals to each other [1].

A feed-forward perceptron trained with the Back-propagation and Levenberg-Marquardt algorithms [10] was used in this work. The Back-propagation algorithm used in the training of the multilayer perceptron is formulated as a nonlinear least-squares problem. Essentially, the Levenberg-Marquardt algorithm is a least-squares estimation method based on the maximum neighborhood idea. Let E(w) be an objective error function made up of m individual error terms e_i^2(w), as follows:

    E(w) = Σ_{i=1}^m e_i^2(w) = ||f(w)||^2

where e_i^2(w) = (y_i − y_ri)^2, y_i is the desired value of output neuron i, and y_ri is the actual output of that neuron. It is assumed that the function f(·) and its Jacobian J are known at point w. The aim of the Levenberg-Marquardt algorithm is to compute the weight vector w such that E(w) is minimal. In each iteration the weight vector is updated according to (1):

    w_{k+1} = w_k + δw_k                                    (1)

where

    δw_k = −(J_k^T J_k + λI)^{−1} J_k^T f(w_k)              (2)

J_k is the Jacobian of f(·) evaluated at w_k, λ is the Marquardt parameter, and I is the identity matrix.
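The update in (1)-(2) can be sketched in NumPy. The toy residual function below (fitting a straight line) is an illustrative stand-in, not the paper's network training code:

```python
import numpy as np

def lm_step(w, f, J, lam):
    """One Levenberg-Marquardt update: dw = -(J^T J + lam*I)^{-1} J^T f(w)."""
    Jw, r = J(w), f(w)
    dw = -np.linalg.solve(Jw.T @ Jw + lam * np.eye(len(w)), Jw.T @ r)
    return w + dw

# Toy least-squares problem: fit y = a*x + b with residuals e_i(w) = y_i - (a*x_i + b).
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])                      # exact line: a = 2, b = 1
f = lambda w: y - (w[0] * x + w[1])                     # residual vector f(w)
J = lambda w: np.column_stack([-x, -np.ones_like(x)])   # Jacobian d e_i / d w_j

w = np.zeros(2)
for _ in range(20):
    w = lm_step(w, f, J, lam=1e-3)
print(w)  # converges to approximately [2.0, 1.0]
```

Since the toy problem is linear, the damped step essentially solves it in one iteration; on a network, f(w) would be the vector of output errors and J_k would be recomputed at each w_k.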
D. Support Vector Machine

Given a training data set composed of N samples {x_i, y_i}_{i=1}^N, with input x_i ∈ R^n and output y_i ∈ {±1}, the SVM classifier aims at constructing a decision surface of the form sign[f(x; w)], where f(x; w) = w^T φ(x) + b is an approximation to the mapping function y, w ∈ R^m, and φ(·): R^n → R^m is a function mapping the input into a so-called higher-dimensional feature space. The parameters w and b can be obtained through the following optimization problem [24]:

    min_{w,b,ξ} Φ(w, b, ξ) = (1/2) w^T w + C Σ_{i=1}^N ξ_i,            (3)

subject to y_i[w^T φ(x_i) + b] ≥ 1 − ξ_i, ξ_i ≥ 0, i = 1, ..., N, where C is a trade-off parameter indicating the relative importance of the model's complexity when compared to the training error, and ξ_i is the training error for the i-th sample. For simplicity, problem (3) is usually converted into an equivalent problem defined in a dual space, by constructing the following Lagrangian [24]:

    L(w, b, ξ, β, γ) = (1/2) w^T w + C Σ_{i=1}^N ξ_i
                       − Σ_{i=1}^N β_i {[w^T φ(x_i) + b] y_i − 1 + ξ_i}
                       − Σ_{i=1}^N γ_i ξ_i                              (4)

where β_i ≥ 0, γ_i ≥ 0 (i = 1, ..., N) are Lagrange multipliers. In this formulation, a particular kind of function, known as a kernel, is employed [23]. It should satisfy the constraint imposed by Mercer's theorem and provides a one-step implicit calculation of the product between φ(x_i) and φ(x_j): K(x_i, x_j) = ⟨φ(x_i), φ(x_j)⟩. The results discussed in Section IV were obtained with the RBF kernel:

    K(x_i, x_j) = exp(−(x_i − x_j)^T (x_i − x_j) / (2σ^2))

where σ^2 denotes the variance to be defined by the user. Using the kernel, f(x; w) can be rewritten as f(x; w) = Σ_{i=1}^N β_i y_i K(x, x_i) + b.

For the training samples along the decision boundary, the corresponding β_i are greater than zero, as ascertained by the Karush-Kuhn-Tucker conditions. These samples are known as support vectors. The number of support vectors is generally much smaller than N, being proportional to the generalization error of the classifier. A test vector x ∈ R^n is then assigned to a class according to f(x) = sign[w^T φ(x) + b] = sign(Σ_{i=1}^N β_i y_i K(x, x_i) + b).

E. Extreme Learning Machine

Huang et al. [7], [11] proposed a machine learning algorithm called the Extreme Learning Machine (ELM), which has a significantly faster learning speed and requires less human intervention than other learning methods. It has been proven that the hidden nodes of "generalized" single-hidden-layer feedforward networks (SLFNs) can be randomly generated and that the universal approximation capability of such SLFNs can still be guaranteed. The ELM can determine all the parameters of SLFNs analytically, instead of adjusting them iteratively. Thus, it can overcome the demerits of gradient-based methods and of most other learning methods. Compared to the most effective SVM-based methods, recent research [11] also shows that the ELM tends to achieve better generalization performance, less sensitivity to user-specified parameters, and easier implementation than a traditional SVM.

[Figure 1: Examples of scanner images; panels (a) AES2501, (b) FPR620, (c) FT-2BU, (d) URU4000, (e) ZY202-B.]

Table I: Information about sensors

    Sensor Model   Size
    AES2501        Not fixed
    FPR620         256×304
    FT-2BU         152×200
    URU4000        294×356
    ZY202-B        400×400

IV. COMPUTATIONAL EXPERIMENTS

In what follows, we provide details about the dataset used in the experiments and how the experiments were set up. Then, we present the accuracy results obtained by the classifiers, considering the FingerCode as feature extractor. In this paper, the one-versus-one approach was adopted when using the SVM and Neural Networks.

A. SDUMLA-HMT Database

For assessing the performance of the classifiers in the task of biometric recognition, we have employed the SDUMLA-HMT database, made available to the research community¹. SDUMLA-HMT was collected during the summer of 2010 at Shandong University, Jinan, China. This database consists of the following biometric modalities: face, finger vein, gait, iris and fingerprint of 106 individuals (including 61 males and 45 females, with ages between 17 and 31). In this paper, we use only the fingerprint images.

The fingerprint images in the SDUMLA-HMT database [25] were collected with five different sensors (a multi-sensor database; see Table I for more details); Figure 1 shows examples of fingerprints collected by the scanners. Fingerprint images

¹ http://mla.sdu.edu.cn/sdumla-hmt.html
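The RBF kernel and the dual-form decision function from Section III-D can be sketched in NumPy. The support vectors, multipliers and bias below are illustrative placeholders, not values from the trained models:

```python
import numpy as np

def rbf_kernel(xi, xj, sigma):
    """K(x_i, x_j) = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    d = xi - xj
    return np.exp(-(d @ d) / (2.0 * sigma ** 2))

def svm_decision(x, support_vectors, beta, y, b, sigma):
    """f(x) = sum_i beta_i * y_i * K(x, x_i) + b; the predicted class is sign(f(x))."""
    s = sum(bi * yi * rbf_kernel(x, xi, sigma)
            for bi, yi, xi in zip(beta, y, support_vectors))
    return np.sign(s + b)

# Tiny illustration with two support vectors of opposite classes.
sv = np.array([[0.0, 0.0], [2.0, 2.0]])
beta = np.array([1.0, 1.0])    # Lagrange multipliers (illustrative values)
y = np.array([+1, -1])
b = 0.0
print(svm_decision(np.array([0.1, 0.2]), sv, beta, y, b, sigma=1.0))  # -> 1.0
```

A point near the positive support vector gets a larger kernel response from it and is classified +1; a point near [2, 2] would be classified -1.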
Table II: Results obtained for fingerprint recognition using Neural Networks (μ% ± σ%).

    Number of Neurons   AES2501        FPR620         FT-2BU         URU4000        ZY202-B
    2                   41.49 ± 2.83   74.84 ± 2.53   62.89 ± 3.57   67.19 ± 3.29   40.62 ± 2.24
    5                   43.92 ± 2.65   76.48 ± 2.63   64.17 ± 3.33   68.75 ± 2.64   42.86 ± 1.85
    10                  44.45 ± 2.80   77.46 ± 2.22   64.90 ± 3.17   69.54 ± 2.82   43.88 ± 2.11
    15                  45.15 ± 2.84   78.15 ± 2.22   65.44 ± 3.41   70.06 ± 2.63   44.44 ± 2.17
    20                  44.99 ± 2.61   78.38 ± 2.17   65.54 ± 3.17   70.02 ± 2.46   44.53 ± 1.97
    30                  45.20 ± 2.63   78.50 ± 2.36   65.67 ± 3.68   70.14 ± 2.43   44.38 ± 2.06

Table III: Results obtained for fingerprint recognition using Support Vector Machines (Mean% ± Std%).

    σ      AES2501        FPR620         FT-2BU         URU4000        ZY202-B
    2^3    49.03 ± 3.43   74.18 ± 3.10   59.01 ± 4.49   69.90 ± 3.31   48.20 ± 1.75
    2^4    52.01 ± 2.90   79.26 ± 2.09   67.93 ± 3.93   72.51 ± 2.83   51.04 ± 1.45
    2^5    49.80 ± 2.88   79.47 ± 2.39   67.77 ± 3.65   72.11 ± 2.77   48.62 ± 1.46
    2^6    46.17 ± 2.75   78.35 ± 2.27   65.65 ± 3.86   70.57 ± 2.78   45.25 ± 1.82
    2^7    44.47 ± 2.53   77.73 ± 2.17   64.62 ± 3.94   69.67 ± 2.82   43.81 ± 1.62
    2^8    43.77 ± 2.54   77.11 ± 2.42   63.85 ± 3.71   68.89 ± 2.88   43.01 ± 1.74
    2^9    37.85 ± 2.38   73.65 ± 2.09   57.11 ± 3.54   63.58 ± 2.60   36.81 ± 1.46
    2^10   23.59 ± 1.56   54.61 ± 2.27   38.58 ± 1.72   43.22 ± 1.51   23.48 ± 1.15

Table IV: Results obtained for fingerprint recognition using Optimum Path Forest (Mean% ± Std%).

    Distances            AES2501        FPR620         FT-2BU         URU4000        ZY202-B
    Euclidean            50.51 ± 3.09   76.06 ± 2.72   58.90 ± 3.63   71.23 ± 2.60   49.64 ± 1.39
    Chi-Square           46.11 ± 2.70   72.81 ± 2.15   54.20 ± 3.77   69.54 ± 3.02   45.67 ± 1.93
    Manhattan            49.94 ± 2.66   77.87 ± 1.84   60.17 ± 3.25   72.49 ± 2.54   49.39 ± 1.34
    Canberra             42.92 ± 2.14   69.42 ± 2.14   47.06 ± 3.19   68.44 ± 2.67   42.18 ± 1.49
    SquaredChord         46.60 ± 2.72   72.35 ± 2.24   53.92 ± 3.47   69.26 ± 2.59   45.82 ± 1.76
    SquaredChi-Squared   47.48 ± 2.91   73.11 ± 2.47   55.16 ± 3.40   69.91 ± 2.71   46.69 ± 1.61
    BrayCurtis           42.92 ± 2.14   69.42 ± 2.14   47.06 ± 3.19   68.44 ± 2.67   42.18 ± 1.49

Table V: Results obtained for fingerprint recognition using K-nearest neighbors (Mean% ± Std%).

    K   AES2501        FPR620         FT-2BU         URU4000        ZY202-B
    1   50.53 ± 2.98   76.77 ± 2.53   58.97 ± 3.40   71.64 ± 2.75   49.76 ± 1.45
    3   43.13 ± 2.52   71.29 ± 2.27   49.94 ± 3.01   66.54 ± 2.54   42.82 ± 1.19
    5   41.21 ± 2.24   70.42 ± 2.18   48.67 ± 3.09   65.23 ± 2.86   41.15 ± 0.90
    7   39.74 ± 2.00   68.88 ± 1.78   47.18 ± 2.96   62.74 ± 2.96   39.57 ± 0.96
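The k-NN rule evaluated in Table V (compute Euclidean distances to all training samples, take the k closest, and vote) can be sketched as follows; the toy vectors stand in for FingerCode features:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=1):
    """Majority vote among the k training samples closest to x."""
    dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distance to every sample
    nearest = np.argsort(dists)[:k]               # indices of the k closest samples
    return Counter(y_train[nearest].tolist()).most_common(1)[0][0]

# Toy two-class training set (illustrative, not real FingerCode vectors).
X = np.array([[0.0, 0.0], [0.1, 0.1], [5.0, 5.0], [5.1, 4.9]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([4.8, 5.2]), k=3))  # -> 1
```

The need to keep X entirely in memory and scan it at every prediction is exactly the weakness noted in Section III-A.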

in the SDUMLA-HMT database are acquired from six fingers: the thumb, index and middle fingers of both hands. Notice that the SDUMLA-HMT group requested from each participant eight impressions (attempts) for each of the six fingers on the five previously mentioned sensors.

We consider all images from each sensor as a separate database, and we have checked the quality of the images from the five different sensors (for each of these databases). It can be noticed that the best- and worst-quality databases are those generated by the FPR620 and FT-2BU sensors, respectively.

B. Experimental Setup

The SVM is a binary classifier that cannot be applied directly to a multi-class problem; to overcome this, we used the one-vs-one strategy. Moreover, the efficiency and effectiveness of the SVM training process depend directly on the a priori selection of the values of some control parameters. One of them, denoted by C, controls the tradeoff between margin maximization and error minimization. Other parameters appear in the non-linear mapping into feature space (the kernel and its parameters, denoted by σ). Although we know that there are several rules of thumb to select the values of the kernel parameters [5], in this paper we opted for using values of σ of the form 2^i, with i = {−2, −1, ..., 14, 15}, chosen empirically, keeping the parameter C constant at 1000. This value for C was found after some preliminary experiments and agrees with the fact that SVM models with low values of C tend in general to attain better performance than those with high values of this parameter.

Regarding the OPF classifier, we use the following distance measures: Euclidean, Chi-Square, Manhattan, Canberra, SquaredChord, SquaredChi-Squared and BrayCurtis. There is no rule to select the optimum number of neurons in the hidden layer of a Neural Network; however, some rules of thumb are available for calculating the number of neurons. In this work, we opted to set the number of neurons to 2, 5, 10, 15, 20 and 30, whereas for the k-NN classifier the value of K was set to 1, 3, 5 and 7 and the distance measure used was the Euclidean distance. For the ELM classifier, the number of neurons was
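The one-vs-one decomposition described in the experimental setup (one binary classifier per pair of classes, final label by majority vote) can be sketched with a placeholder binary learner; the nearest-centroid rule below is only a stand-in for the actual SVM:

```python
import numpy as np
from itertools import combinations
from collections import Counter

def train_ovo(X, y, fit_binary):
    """Train one binary classifier for every unordered pair of classes."""
    models = {}
    for a, b in combinations(np.unique(y), 2):
        mask = (y == a) | (y == b)
        models[(a, b)] = fit_binary(X[mask], np.where(y[mask] == a, +1, -1))
    return models

def predict_ovo(models, x):
    """Each pairwise classifier votes for one of its two classes; majority wins."""
    votes = [a if clf(x) > 0 else b for (a, b), clf in models.items()]
    return Counter(votes).most_common(1)[0][0]

# Stand-in binary learner: nearest class centroid (NOT the paper's SVM).
def fit_centroid(X, t):
    c_pos, c_neg = X[t == +1].mean(axis=0), X[t == -1].mean(axis=0)
    return lambda x: np.linalg.norm(x - c_neg) - np.linalg.norm(x - c_pos)

X = np.array([[0., 0.], [0., 1.], [5., 5.], [5., 6.], [10., 0.], [10., 1.]])
y = np.array([0, 0, 1, 1, 2, 2])
models = train_ovo(X, y, fit_centroid)
print(predict_ovo(models, np.array([5.2, 5.4])))  # -> 1
```

For c classes this trains c(c-1)/2 binary models, which is how a binary SVM is lifted to the multi-class fingerprint identification problem.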
Table VI: Results obtained for fingerprint recognition using Extreme Learning Machine (Mean% ± Std%).

    Number of Neurons   AES2501        FPR620         FT-2BU         URU4000        ZY202-B
    30                  34.73 ± 2.63   71.66 ± 2.46   53.45 ± 3.85   63.03 ± 2.91   34.58 ± 1.63
    50                  38.11 ± 2.63   74.28 ± 2.54   58.49 ± 4.00   65.67 ± 3.36   37.41 ± 1.79
    100                 39.50 ± 2.25   75.67 ± 2.27   60.61 ± 3.23   67.15 ± 3.07   39.27 ± 1.85
    200                 42.43 ± 2.55   77.21 ± 2.38   63.46 ± 3.27   69.13 ± 2.67   42.19 ± 2.18
    300                 43.25 ± 2.80   77.96 ± 2.21   64.42 ± 3.64   69.35 ± 2.60   42.81 ± 1.68

set as 30, 50, 100, 200 and 300. The variation of the parameter values was performed in order to find the best parameters for the problem addressed here. For each of the values chosen, a 10-fold cross-validation process was performed in order to better measure the average performance of the classifiers. Moreover, the datasets were scaled using min-max normalization.

C. Simulation Results

Tables II, III, IV, V and VI provide the best results obtained in the experiments, in terms of the identification error achieved by the different classifiers, when they were induced with the coefficients generated by FingerCode. The accuracy is shown in terms of the average and standard deviation of the cross-validation error rate for the best-calibrated classifier. The error rate was calculated as the number of misclassifications divided by the total number of test examples.

Analyzing Tables II, III, IV, V and VI, it can be noted that the best results were achieved with the FPR620 sensor and the worst with the AES2501 sensor. This demonstrates that the quality of the fingerprint images is very important for biometric recognition.

The results obtained with Neural Networks for the AES2501 and ZY202-B sensors were very similar, except for the number of neurons equal to 2. The best result was obtained for the FPR620 sensor with 30 neurons; in this case, a recognition rate of 78.50 ± 2.36 was obtained. Regarding the variation of the number of neurons, it can be observed that it produced only a small variation in performance.

For all sensors, the SVM produced the best performance when compared to the other classifiers. In this case, the optimal sigma value lies in the range [2^4, 2^5]. It is important to note that performance degrades rapidly for other σ values. This indicates that parameter selection for the kernel is very important.

With the exception of the AES2501 sensor, the best result with the OPF was obtained using the Manhattan distance. For the FPR620 sensor, a recognition rate of 77.87 ± 1.84 was obtained. It is possible to note that the different distance metrics did not produce a great variation in the performance of the OPF. From the results, it is possible to conclude that the choice of distance metric is not a very important factor to be taken into account.

With regard to kNN, the best result was achieved with k = 1 for all sensors. Values of k greater than 5 produce a degradation in kNN performance when using the AES2501 and ZY202-B sensors.

Regardless of the sensor, ELM produced its best result with 300 neurons. The best recognition rate was obtained using the FPR620 sensor; in this case, a recognition rate of 77.96 ± 2.21 was obtained.

D. Hypothesis test

In this work we used the Wilcoxon test [14] as the hypothesis test. The Wilcoxon test is a non-parametric method for comparing the results of two paired samples: first, the numerical values of the differences between each pair are calculated. To apply the Wilcoxon test we used the best classifiers with their corresponding parametrizations; the null hypothesis to be tested was that the classifiers perform the same. After obtaining the classification accuracies, the hypothesis test was applied in order to verify whether the SVM really was the best classifier for the five scanners presented.

Table VII: Wilcoxon test comparing the SVM (σ = 16) against the other classifiers, per scanner.

    Scanner    Classifier   Parameter   P value   Null hypothesis
    AES2501    MLP          30          0.0002    Rejected
    AES2501    OPF          Euclidean   0.0001    Rejected
    AES2501    KNN          1           0.0002    Rejected
    AES2501    ELM          300         0.0006    Rejected
    FPR620     MLP          30          0.3066    Not rejected
    FPR620     OPF          Manhattan   0.1194    Not rejected
    FPR620     KNN          1           0.0252    Rejected
    FPR620     ELM          300         0.1116    Not rejected
    ZY202-B    MLP          30          0.0002    Rejected
    ZY202-B    OPF          Euclidean   0.0814    Not rejected
    ZY202-B    KNN          1           0.0957    Not rejected
    ZY202-B    ELM          300         0.0002    Rejected
    FT-2BU     MLP          30          0.1038    Not rejected
    FT-2BU     OPF          Manhattan   0.0028    Rejected
    FT-2BU     KNN          1           0.0006    Rejected
    FT-2BU     ELM          300         0.0450    Rejected
    URU4000    MLP          30          0.0376    Rejected
    URU4000    OPF          Manhattan   0.7051    Not rejected
    URU4000    KNN          1           0.2119    Not rejected
    URU4000    ELM          300         0.0310    Rejected

Table VII presents all the results of comparing the SVM classifier against the other classifiers presented in this work. For the AES2501 scanner, the results of the OPF, MLP, ELM and KNN classifiers in relation to the SVM show that the SVM surpassed all other classifiers. With the FPR620 scanner, it is clear that the SVM only exceeded the KNN. We can see that the SVM did not present better results than the MLP neural network on the images collected with the FT-2BU scanner. The MLP and ELM were surpassed by the SVM with the URU4000 scanner. For the ZY202-B scanner, the SVM was able to obtain better results than the MLP and ELM classifiers, as shown in Table VII.
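A paired Wilcoxon test of this kind can be run with SciPy's `wilcoxon` on per-fold accuracies of two classifiers; the fold accuracies below are illustrative numbers, not the paper's data:

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical 10-fold accuracies (%) of two classifiers on the same folds.
svm_acc = np.array([52.1, 51.4, 53.0, 50.8, 52.5, 51.9, 52.2, 50.5, 53.1, 51.7])
knn_acc = np.array([50.2, 49.8, 51.05, 49.0, 50.65, 50.35, 50.1, 48.85, 51.4, 49.95])

# H0: the paired differences are symmetric about zero (the classifiers are equivalent).
stat, p = wilcoxon(svm_acc, knn_acc)
print(f"W = {stat}, p = {p:.4f}")
if p < 0.05:
    print("Null hypothesis rejected: the accuracies differ significantly.")
```

Because the test is paired, both classifiers must be evaluated on the same cross-validation folds, which is the setup used in Section IV.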
V. CONCLUDING REMARKS

In this work, we have provided an assessment of the performance of several classifiers when coping with the task of biometric recognition based on fingerprints. For this purpose, FingerCode was adopted for data preprocessing.

Overall, the results show that all classifiers achieved high recognition rates, and it can be observed that the performance achieved by the different classifiers is very similar. The factor that most influenced the performance of the classifiers was the quality of the fingerprint images. On the other hand, with the exception of the SVM and k-NN, variation of the classifiers' parameters did not produce a great variation in their performance. From the results obtained by analyzing the accuracy rates and the Wilcoxon test, it is possible to notice that the SVM was the classifier that produced the best performance when compared to the other classifiers. It can be seen that, for all five sensors, the SVM was better than at least one of the other classifiers, and it obtained results superior to all the other techniques presented in this work on the AES2501 scanner, as shown in Table VII.

As ongoing work, we are currently extending the scope of the investigation by considering SVMs configured with other kernel functions (and parameters), other types of vector machines, such as the Proximal SVM and the Lagrangian SVM [15], as well as (and most importantly) the conjoint influence of the hyper-parameters. In the future, we plan to investigate how the combination of models coming from different types of vector machines, each configured with the same values of the control parameters, can improve the levels of performance, in terms of accuracy and generalization, over those achieved by each vector machine type alone.

REFERENCES

[1] A. Askarunisa, S. K, S. R. Liu, and S. M. Batcha. Finger print authentication using neural networks. MASAUM Journal of Computing, 1(2), 2009.
[2] R. Cappelli, A. Lumini, D. Maio, and D. Maltoni. Fingerprint classification by directional image partitioning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(5):402–421, May 1999.
[3] J.-H. Chang and K.-C. Fan. A new model for fingerprint classification by ridge distribution sequences. Pattern Recognition, 35(6):1209–1223, 2002.
[4] D. Cheng, G. Liu, and Y. Qiu. Applications of particle swarm optimization and k-nearest neighbors to emotion recognition from physiological signals. In 2008 International Conference on Computational Intelligence and Security, volume 2, pages 52–56, Dec 2008.
[5] V. Cherkassky and Y. Ma. Practical selection of SVM parameters and noise estimation for SVM regression. Neural Networks, 17(1):113–126, 2004.
[6] M. M. Chong, H. N. Tan, L. Jun, and R. K. Gay. Geometric framework for fingerprint image classification. Pattern Recognition, 30(9):1475–1488, 1997.
[7] G. Feng, G. B. Huang, Q. Lin, and R. Gay. Error minimized extreme learning machine with growth of hidden nodes and incremental learning. IEEE Transactions on Neural Networks, 20(8):1352–1357, Aug 2009.
[8] G. M. Freitas, A. M. H. Avila, J. P. Papa, and A. X. Falcao. Optimum-path forest-based rainfall estimation. In 2009 16th International Conference on Systems, Signals and Image Processing, pages 1–4, June 2009.
[9] E.-H. S. Han, G. Karypis, and V. Kumar. Text Categorization Using Weight Adjusted k-Nearest Neighbor Classification, pages 53–65. Springer Berlin Heidelberg, Berlin, Heidelberg, 2001.
[10] S. Haykin. Neural Networks and Learning Machines. Prentice Hall, 2009.
[11] G.-B. Huang, Q.-Y. Zhu, and C.-K. Siew. Extreme learning machine: Theory and applications. Neurocomputing, 70(1–3):489–501, 2006.
[12] A. K. Jain, S. Prabhakar, and L. Hong. A multichannel approach to fingerprint classification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(4):348–359, Apr 1999.
[13] J. Kacur, R. Vargic, and P. Mulinka. Speaker identification by k-nearest neighbors: Application of PCA and LDA prior to kNN. In 2011 18th International Conference on Systems, Signals and Image Processing, pages 1–4, June 2011.
[14] J. Litchfield and F. Wilcoxon. A simplified method of evaluating dose-effect experiments. Journal of Pharmacology and Experimental Therapeutics, 96(2):99–113, 1949.
[15] O. L. Mangasarian and D. R. Musicant. Lagrangian support vector machine classification. Technical Report 00-06, Data Mining Institute, Computer Sciences Department, University of Wisconsin, Madison, Wisconsin, June 2000. ftp://ftp.cs.wisc.edu/pub/dmi/tech-reports/00-06.ps.
[16] K. A. Nagaty. Fingerprints classification using artificial neural networks: a combined structural and statistical approach. Neural Networks, 14(9):1293–1305, 2001.
[17] J. P. Papa, A. X. Falcao, A. L. M. Levada, D. C. Correa, D. H. P. Salvadeo, and N. D. A. Mascarenhas. Fast and accurate holistic face recognition using optimum-path forest. In 2009 16th International Conference on Digital Signal Processing, pages 1–6, July 2009.
[18] J. P. Papa, A. X. Falcão, and C. T. N. Suzuki. Supervised pattern classification based on optimum-path forest. International Journal of Imaging Systems and Technology, 19(2):120–131, 2009.
[19] J. P. Papa and A. Rocha. Image categorization through optimum path forest and visual words. In 2011 18th IEEE International Conference on Image Processing, pages 3525–3528, Sept 2011.
[20] J. P. Papa, A. A. Spadotto, A. X. Falcao, and J. C. Pereira. Optimum path forest classifier applied to laryngeal pathology detection. In 2008 15th International Conference on Systems, Signals and Image Processing, pages 249–252, June 2008.
[21] C. H. Park and H. Park. Fingerprint classification using fast Fourier transform and nonlinear discriminant analysis. Pattern Recognition, 38(4):495–503, 2005.
[22] M. Sarkar and T. T. Leong. Application of k-nearest neighbors algorithm on breast cancer diagnosis problem. In Proceedings of the AMIA Annual Symposium, Los Angeles, USA, 2000.
[23] B. Scholkopf and A. J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge, MA, USA, 2001.
[24] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag New York, Inc., New York, NY, USA, 1995.
[25] Y. Yin, L. Liu, and X. Sun. SDUMLA-HMT: A multimodal biometric database. In Z. Sun, J. Lai, X. Chen, and T. Tan, editors, Biometric Recognition, volume 7098 of Lecture Notes in Computer Science, pages 260–268. Springer Berlin Heidelberg, 2011.
[26] X. P. Yu and X. G. Yu. Novel text classification based on k-nearest neighbor. In 2007 International Conference on Machine Learning and Cybernetics, volume 6, pages 3425–3430, Aug 2007.
