Abstract Developing a secure and reliable identification system relies on the use of biometrics; in this work, the palm vein is used as the biometric trait. The proposed system is powered by a convolutional neural network (CNN), a type of neural network commonly used for image recognition, which extracts the visual features of the palm vein. This approach improves the recognition rate and the related performance parameters. Experimental results obtained with this method were better than those obtained with conventional techniques.
Keywords: palm vein, image processing, CNN
1. Introduction
Biometric techniques identify people using their biological traits. These include facial and palm vein recognition, as well as fingerprint and iris scanning. A simple facial recognition technique usually relies on a camera and visible light to capture a picture of the person's face. However, this method is vulnerable to errors caused by variations in the light source or in the position of the face. In addition, ageing and illness can also affect how well such techniques work (Shinde et al 2017). Although iris scanning is accurate, its infrared cameras may cause discomfort to users (Khoje and Shinde 2023). A further disadvantage of some biometric methods is that they can be easily imitated by an attacker. Besides price, factors such as the size and type of equipment also affect their viability. Palm veins, on the other hand, are more stable than other modalities because they are hidden beneath the skin (Kabaciński and Kowalski 2011). With a small camera, features that remain largely unchanged throughout a person's life can be captured. Every person has a different palm vein pattern, and vein texture differs even between twins. Although other methods also rely on biological markers, palm vein data can be safely stored in a large database. In the past few years, most biometric systems used for authentication have been built using bioinformatics. One of their main disadvantages is that these devices tend to occupy a lot of space, which limits their adoption. Faster response times and lower costs are also needed for these systems to grow (Dhandapani et al 2023). A major disadvantage of low-cost cameras is that they produce low-quality images due to their low resolution, which can prevent them from being used in practical applications (Shinde et al 2021). Various techniques for palm vein recognition have been proposed to overcome these issues (Lin et al 2005).
2. Image Processing
Image processing refers to the techniques used to enhance an image or to extract information from it. It is a form of signal processing in which the input is an image and the output is another image or a set of features derived from it (Wagh et al 2022; Shinde and Waghulade 2017). Image processing is a promising area of research within computer science and engineering. It involves three main steps: importing the image, manipulating the image, and generating a report or a modified image. The two different types of image processing are analog and digital. The former is used for hard copies of images, such as prints, whereas the latter is used for computer-based manipulation of digital images. In digital processing, the data passes through several phases, including pre-processing, enhancement, and information extraction.
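As an illustration of these phases, a minimal pre-processing and enhancement sketch in Python with OpenCV is given below; the file names and parameter values are assumptions chosen for demonstration rather than the exact settings used in this work.

# Minimal illustrative sketch of the pre-processing and enhancement phases
# described above, using OpenCV. File names and parameter values are
# assumptions for demonstration, not this paper's exact pipeline.
import cv2

# Pre-processing: load the palm image in grayscale and resize it
image = cv2.imread("palm_sample.jpg", cv2.IMREAD_GRAYSCALE)
image = cv2.resize(image, (600, 600))

# Enhancement: improve local contrast so the vein pattern stands out
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(image)

# Noise reduction before feature extraction
denoised = cv2.GaussianBlur(enhanced, (5, 5), 0)

cv2.imwrite("palm_preprocessed.jpg", denoised)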
3. Literature Survey
Cho et al (2020) developed a technique for identifying people using palm vein and palmprint samples obtained from the NIR and visible spectral bands. The gallery samples were obtained from the blue and red spectra, while the samples
from the NIR band were used as the probe. A low-cost LBP encoding method was used to improve the discriminative
capabilities of the palmprint templates. It was also used to extract the palmprint features. The scores from the comparison of
the NIR and registered RGB templates were then combined with those from the NIR and palm-vein templates. The results of
the study revealed that the proposed system for validating palmprint features consistently delivers high performance.
Cho et al (2021) focused on the use of visible-light images for validating palm-vein identity, in contrast to the traditional method of extracting the features from the NIR band. They also investigated the information missing from the RGB images. The researchers used a different image projection technique to
extract the vein line features. They then used a Hamming distance method to match the features from the gallery with those
from the probing images. The experiments were conducted on two multi-spectral databases. The results of the study
revealed that the suggested method of extracting the palm-vein features using the visible spectrum can provide high accuracy and
efficiency. The findings of this study can be integrated into a multi-biometric system for authentication (Shinde et al 2023).
In the past few years, the field of automated identification systems has expanded significantly. These include various
functions such as validating personal identity, preventing identity fraud, and security checking (Shinde et al 2023a). Biometric
identification systems, which are becoming more prevalent due to advancements in biotechnology, are expected to be user-
friendly and accurate. One of the most popular modalities in these systems is palm vein recognition. In 2020, Jhong and colleagues developed a CNN-based method for recognizing palm vein patterns. The researchers evaluated modified neural network architectures to identify the most effective model for this type of system and then implemented it on a low-cost Raspberry Pi computer. The results of the evaluation revealed that the system was able to achieve an accuracy
of 96.54 percent. The researchers noted that vein patterns are a practical and trustworthy way to implement biometric
recognition. Despite the advantages of this technology, issues still hinder its adoption in commercial applications. One of the main issues preventing deep learning techniques from being used in vein recognition is the limited
sample sizes of available datasets. This is typically not enough to sufficiently train the models and networks (Sardeshmukh et
al 2023; Kathole et al 2023; Shinde et al 2023b). Instead of designing new neural networks, Kuzu et al (2020) used existing architectures and transfer learning to develop a biometric system that can recognize vein patterns. They tested the system's performance under an open-set condition. Three different vein data sets were used for the evaluation: palm veins, finger veins, and hand dorsal veins. The researchers found that the system performed well in open-set conditions, achieving equal error rates of 5.63 percent, 0.006 percent, and 0.41 percent on the PolyU, SDUMLA, and Bosphorus vein sets, respectively. Li et al (2014) developed an embedded palm biometric system using the Exynos5410
platform. They used a non-Halo complex matched filtering technique to improve system anti-spoofing and accuracy. The
researchers utilized the Cortex-A7 CPU for various tasks and functions, such as user interface and peripheral management.
They also used the Cortex-A15 for extraction operations. The system collected and encrypted the biometric information of a
person, which was then sent to a smart card, where it was compared with the data that was stored on the card. According to
the study, by using multiple biometric parameters, the researchers were able to achieve a lower EER. Mirmohamadsadeghi and Drygajlo (2011) aimed to develop a new method for extracting palm features using local texture patterns. Histograms of local binary pattern (LBP) operators were analyzed to derive novel local derivative pattern (LDP)-based descriptors for the palm veins. The two
extraction methods were evaluated and compared during tasks that involved verification and identification. The LDP and LBP
descriptors that are more closely related to the texture of the palm veins were identified through the use of the CASIA
database. The results of the tests revealed that the LDP descriptors performed better than their LBP counterparts when it
came to identifying and verifying palm veins.
In hand-vein recognition, one issue that affects accuracy is the lack of robustness to degradation in image quality. Qin et al (2019) presented a comprehensive analysis of methods for extracting vein features. Deep neural networks have shown promise in medical image segmentation in recent years and have been applied to vein verification, but current vein segmentation methods face two difficulties:
1. a lack of labelled data, which is expensive to gather, and
2. incorrect label data produced by a manual or automatic labelling scheme, which may dramatically influence the parameters when the network is trained.
Their method uses a deep neural network to extract vein attributes from the initial label data; it requires minimal knowledge about the data structure and iteratively corrects the labels. In the first step, the vein and background pixels are labelled automatically using a widely used segmentation technique, and a training dataset is produced from patches centered on those pixels. In the second step, the DBN is trained on this dataset to predict which pixels belong to the vein.
The training dataset was reconstructed using the extracted vein features, and the network was trained using it. The
iterative process allowed the DBN to distinguish between the background and the vein patterns. Results of an experiment on
two public databases revealed that the accuracy of hand vein verification had significantly improved.
A method for authenticating palm vein patterns captured under 850 nm infrared light was presented by Rastogi and colleagues in 2020. The system used a combination of image pre-processing and extraction techniques, and a user interface was developed using OpenCV and Python libraries. The researchers utilized the Sobel kernel filter and a Gabor filter bank to achieve the required results. They obtained matching accuracy scores using the Gaussian Naive Bayes and Random Forest classifiers; the palm vein classifier performed well with both methods, with accuracies of 97.40% and 96.30%, respectively. The researchers developed a framework that can be used to connect and scale various applications, such as identity verification and record-keeping. The hand has many physiological properties that can be exploited, such as its internal and external knuckles, palm lines, and geometry.
In 2017, Yazdani and Andani noted that the blood vascular pattern is more durable than the palm print and has a higher acceptability level. A new method was then used to extract the texture features from the images. The researchers used an autoregressive model to estimate the wavelet coefficients, and K-nearest neighbor and support vector machine classifiers were used to classify the 600 palm photos. According to Deshpande et al (2016), their system's accuracy, resilience, and anti-spoofing capabilities could be improved by integrating the palm vein and palm print techniques. They presented a system that uses a combination of these two biometric methods. The first module of the project involved extracting the palm print's features using wavelet decomposition, while the vein characteristics were obtained through a matched filter method. A different matcher was then used for each modality, and the decisions of the matchers were combined to form a final decision about the person. For instance, in order to perform effective hand recognition, the researchers used a rough-to-fine feature-matching method on the 96 modules of Module 1.
4. Methodology
4.1. Feature Extraction
Feature extraction is a process that involves properly representing a portion of an image's pixels. It is commonly performed within a CNN. Since the network can automatically generate various features from frequency and time-series images, it is very popular in healthcare applications.
4.2. Convolutional Neural Network
A CNN is a type of neural network with many layers. It can process data that has a grid-like structure and extract key features from it, which reduces the amount of preparation required for image processing. In many traditional algorithms, engineers design the filters by hand using heuristics. A CNN learns the most important filters itself, eliminating the need for hand-crafted filters and saving a great deal of time and effort.
4.3. Biometrics
Biometric authentication refers to the statistical analysis of people's physical and behavioural traits. It is mainly used for identifying people, monitoring their activities, and granting access control. The idea behind this technology is that an individual can be identified based on their traits. The word biometrics combines the Greek words "bio" (life) and "metric" (to measure). Nowadays, it is commonly used for securing various types of consumer electronics and corporate security systems. One common biometric technique is gait analysis, which allows a person to be identified and authenticated without being physically examined.
4.4. Database
A database is utilized for security purposes, and it can be used to identify or authenticate people. The process of
building such databases involves using the palm vein. Images of the prints were taken by placing the palms at a distance of
about 10 to 15 cm from the camera. A fine contrast between the background and the palm is also considered. Since the palm
print application is commonly used in public areas, it is important that the photos of the veins are included in the database. A
self-generated version of the database holds the images of the palm veins, while the research involves preserving and
gathering a complete collection of photos. A large number of datasets is required to achieve good recognition rates. The
variations within the database make it harder to perform an analysis. Figure 1 shows the images of the palm.
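A minimal sketch of how such a self-generated palm image database could be loaded for training with TensorFlow/Keras is given below; the directory layout, the 600 x 600 image size, and the grayscale colour mode are assumptions used for illustration.

# Loading a self-built palm vein image database with TensorFlow/Keras.
# The directory layout ("palm_db/<class>/*.jpg") and the parameters below
# are assumptions for illustration.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "palm_db",              # assumed layout: one sub-folder of palm images per class
    validation_split=0.2,   # hold back 20% of the images for validation
    subset="training",
    seed=42,
    color_mode="grayscale",
    image_size=(600, 600),  # matches the input size used by the model
    batch_size=8,
)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "palm_db",
    validation_split=0.2,
    subset="validation",
    seed=42,
    color_mode="grayscale",
    image_size=(600, 600),
    batch_size=8,
)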
5. Block Schematic
Figure 2 shows the schematic block diagram of the system. The palm image given as input is passed through the first filtering layer, known as the Convolutional layer. The CNN consists of a Convolutional layer, a ReLU layer, and a Max Pooling
layer. To minimize the complexity of the system, the number of layers is maintained at a minimum. The Convolution layer
takes the extracted features and forwards them to the ReLU layer.
The ReLU layer passes its input through when it is positive and outputs zero otherwise. The filtered outputs of the ReLU layer then go through a Max Pooling stage, where the most prominent features are retained. Each filter in the Convolutional layer is designed to enhance a particular feature. Selecting the right filter size is a crucial factor that affects accuracy and training time: a larger filter generally requires a longer training time, whereas a smaller filter trains faster. The Convolutional layer applies these filters to the input and generates numerous feature maps, while the ReLU layer removes the negative values from the feature maps forwarded to it.
The Fully Connected layer takes the features extracted by the previous layers and produces the output. Because of the CNN, the system achieves a higher recognition rate and accuracy. The implementation is carried out using Python programming.
Deep learning frameworks such as TensorFlow and Keras are used to create and train a CNN model for biometric palm recognition. The model includes convolutional and dense layers, and the resulting feature maps are flattened to produce a 1-D vector.
For this model, the input shape is 600 by 600 pixels. The first dense layer has 64 units and uses the ReLU activation function to introduce non-linearity. The second and final dense layer has 2 units, representing the output classes.
The Softmax activation function is used to calculate the probability of each class and ensures that the outputs sum to 1.
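A minimal Keras sketch consistent with this description is given below; the number of convolutional filters, the kernel sizes, and the pooling size are assumptions, since the text only fixes the 600 x 600 input, the 64-unit ReLU dense layer, and the 2-unit Softmax output.

# Minimal sketch of the CNN described above using TensorFlow/Keras.
# The convolutional filter counts, kernel sizes, and pooling sizes are
# assumptions; only the input shape, the 64-unit ReLU dense layer, and
# the 2-unit Softmax output layer come from the text.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(600, 600, 1)),              # 600 x 600 palm image (grayscale assumed)
    layers.Conv2D(16, (3, 3), activation="relu"),   # Convolution followed by ReLU
    layers.MaxPooling2D((2, 2)),                    # Max Pooling keeps the strongest responses
    layers.Conv2D(32, (3, 3), activation="relu"),   # a second (assumed) convolution block
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),                               # flatten the feature maps into a 1-D vector
    layers.Dense(64, activation="relu"),            # first dense layer: 64 units with ReLU
    layers.Dense(2, activation="softmax"),          # 2 output classes; probabilities sum to 1
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()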
6. Results
A palm login interface page is created using HTML and CSS; it contains the username and palm vein sections, as shown in Figure 3. The user has to enter the username created during registration.
The Django administration page, shown in Figure 4, is the home page; it contains the view site, change password, and log out options. New users can be added from this page. It also shows recent actions, such as users that were recently added or deleted.
The page shown in Figure 5 is the user registration page, where users enter details such as their username and register using their palm image.
To log in, the user enters the username and selects the palm image used during registration, as shown in Figure 6.
If the username and the palm image match those entered during registration, a message is displayed showing that authentication was successful. Figure 7 shows the page displayed after successful authentication.
If the palm image does not match the user's registered palm image, a ‘Failed Authentication’ message is displayed, such as the one shown in Figure 8.
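A simplified Django view illustrating this authentication flow is sketched below; the PalmUser model, its fields, and the verify_palm helper wrapping the CNN prediction are hypothetical names introduced only to show how the pieces could fit together.

# Hypothetical Django view sketching the login flow described above.
# PalmUser, its fields, and verify_palm() are illustrative names only;
# verify_palm() is assumed to run the trained CNN on the uploaded image
# and return True when it matches the stored palm template.
from django.shortcuts import render
from .models import PalmUser          # hypothetical model storing username + palm image
from .palm_cnn import verify_palm     # hypothetical wrapper around the trained CNN

def palm_login(request):
    if request.method == "POST":
        username = request.POST.get("username", "")
        palm_image = request.FILES.get("palm_image")
        try:
            user = PalmUser.objects.get(username=username)
        except PalmUser.DoesNotExist:
            return render(request, "failed.html")     # unknown username
        if palm_image is not None and verify_palm(user, palm_image):
            return render(request, "success.html")    # authentication successful
        return render(request, "failed.html")         # palm image did not match
    return render(request, "login.html")              # show the login form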
7. Observations
7.1. Observation Table
Table 1 represents the classification results and performance metrics observed for a system using 10 sample images.
Table 1 Observation table.
Sl. No. Sample True value Predicted value TP TN FP FN
1 Image1 Positive Positive 1 0 0 0
2 Image2 Positive Positive 1 0 0 0
3 Image3 Positive Positive 1 0 0 0
4 Image4 Positive Positive 1 0 0 0
5 Image5 Positive Negative 0 0 0 1
6 Image6 Negative Negative 0 1 0 0
7 Image7 Negative Negative 0 1 0 0
8 Image8 Negative Negative 0 1 0 0
9 Image9 Negative Negative 0 1 0 0
10 Image10 Negative Negative 0 1 0 0
The table includes the True Positives (TP), True Negatives (TN), False Positives (FP), and False Negatives (FN) values.
The observation table provides insights into the classification results of the system for the given set of sample images. It
shows how many positive and negative samples were correctly classified (TP and TN) and how many were misclassified (FP
and FN). These values are crucial for evaluating the performance of the system and for determining metrics such as accuracy.
7.2. Confusion Matrix
The confusion matrix illustrated in Figure 9 provides a clear overview of the model's performance in terms of correctly
and incorrectly classified samples. It helps assess the balance between true positives and false positives, as well as true
negatives and false negatives.
The bottom right cell represents the True Positive (TP) count, which is 9. It indicates that the model correctly
predicted 9 instances as positive. The bottom left cell represents the False Negative (FN) count, which is 1. It indicates that
the model incorrectly predicted 1 instance as negative when it should have been positive. The top right cell represents the
False Positive (FP) count, which is 0. It indicates that the model incorrectly predicted 0 instances as positive when they should
have been negative. The top left cell represents the True Negative (TN) count, which is 10. It indicates that the model correctly predicted 10 instances as negative.
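Such a confusion matrix can be computed with scikit-learn, as in the sketch below; the label lists are placeholders standing in for the actual genuine and impostor test outcomes.

# Illustrative computation of a confusion matrix with scikit-learn.
# The label lists below are placeholders standing in for the actual
# genuine/impostor test outcomes, not the recorded experimental data.
from sklearn.metrics import confusion_matrix

# 1 = genuine (positive), 0 = impostor (negative)
y_true = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]

# With labels=[0, 1] the matrix is laid out as [[TN, FP], [FN, TP]]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
print(tn, fp, fn, tp)   # 10 0 1 9 for these placeholder labels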
7.3. Equations
FAR (False Acceptance Rate) represents the probability of the system incorrectly accepting an imposter or unauthorized individual. FRR (False Rejection Rate) represents the probability of the system incorrectly rejecting a genuine, authorized individual. The FAR and FRR can be computed using the formulas below:
FAR = False Positives / (False Positives + True Negatives)
FRR = False Negatives / (False Negatives + True Positives)
Accuracy can be computed using the formula:
Accuracy = 100% - (FAR + FRR) / 2, with FAR and FRR expressed as percentages
FAR and FRR are calculated using the true positives (TP), true negatives (TN), false positives (FP) and false negatives (FN). The values obtained after testing 20 samples (10 genuine and 10 impostors) are as follows:
TP = 9
FP = 0
TN = 10
FN = 1
FAR = FP / (FP + TN) (1)
= 0 / (0 + 10)
= 0 %
FRR = FN / (TP + FN) (2)
= 1 / (9 + 1)
= 0.1 = 10 %
Accuracy of the model is calculated as follows:
Accuracy = 100% - (FAR + FRR) / 2 (3)
= 100% - (0% + 10%) / 2
= 95 %
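These calculations can be reproduced with a few lines of Python using the counts listed above.

# Computing FAR, FRR, and accuracy from the confusion counts listed above.
tp, fp, tn, fn = 9, 0, 10, 1

far = fp / (fp + tn)                           # False Acceptance Rate as a fraction
frr = fn / (fn + tp)                           # False Rejection Rate as a fraction
accuracy = 100 - (far * 100 + frr * 100) / 2   # overall accuracy in percent

print(f"FAR = {far:.2%}, FRR = {frr:.2%}, Accuracy = {accuracy:.2f}%")
# FAR = 0.00%, FRR = 10.00%, Accuracy = 95.00%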
The combination of a FAR of 0% and an FRR of 10% on this small test set, corresponding to an accuracy of 95%, suggests that the CNN-based palm biometric recognition system performs well. A FAR of 0% indicates a high level of security, as no impostors were wrongly accepted. The FRR of 10% corresponds to one genuine user out of ten being rejected, so the system remains reasonably convenient for valid users. The accuracy suggests that the model has learned discriminative features of palm biometrics and can differentiate between individuals.
7.4. Graphs
One of the most important visualizations to consider when evaluating a CNN is the accuracy graph shown in Figure 10. This graph shows how the model's accuracy changes over the course of training. The epoch graph in Figure 11
shows the loss for each pass through the training set. The accuracy graph compares the labels predicted by the CNN model with the actual labels; it plots accuracy on the y-axis against the number of epochs on the x-axis. The epoch graph displays the value of the loss function for each epoch during training: the number of epochs is represented on the x-axis, while the loss value is shown on the y-axis.
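As a sketch of how such graphs can be produced, the snippet below plots accuracy and loss per epoch from the Keras training history; it assumes the model and datasets from the earlier sketches and that the history object is returned by model.fit.

# Plotting the accuracy and loss graphs from a Keras training run.
# Assumes `model`, `train_ds`, and `val_ds` from the earlier sketches.
import matplotlib.pyplot as plt

history = model.fit(train_ds, validation_data=val_ds, epochs=10)

# Accuracy graph (as in Figure 10): accuracy versus epoch
plt.figure()
plt.plot(history.history["accuracy"], label="training accuracy")
plt.plot(history.history["val_accuracy"], label="validation accuracy")
plt.xlabel("Epoch")
plt.ylabel("Accuracy")
plt.legend()

# Epoch graph (as in Figure 11): loss versus epoch
plt.figure()
plt.plot(history.history["loss"], label="training loss")
plt.plot(history.history["val_loss"], label="validation loss")
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.legend()
plt.show()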
8. Conclusions
A CNN-based palm vein recognition system is a biometric technology that identifies a person from their palm vein pattern. It is very easy to use and achieves a high accuracy level. The development of this technology is mainly due to the unique features of the blood flow pattern in the palm, which make it very different from other biometric features. In order to classify and evaluate images, this approach uses a single classifier for all individuals. This technology can lead to major advances in the area of science and technology.
Ethical considerations
Not applicable.
Conflict of Interest
The authors declare no conflicts of interest.
Funding
This research did not receive any financial support.
References
Cho S, Oh B-S, Kim D, Toh K-A (2021) Palm-vein verification using images from the visible spectrum. IEEE Access. 9:86914–86927.
DOI:10.1109/access.2021.3089484.
Cho S, Oh B-S, Toh K-A, Lin Z (2020) Extraction and cross-matching of palm-vein and palmprint from the RGB and the NIR spectrums for identity verification.
IEEE Access. 8:4005–4021. DOI:10.1109/access.2019.2963078.
Deshpande PD, Tavildar AS, Dandwate YH, Shah E (2016) Fusion of dorsal palm vein and palm print modalities for higher security applications. Conference on
Advances in Signal Processing (CASP).
Dhandapani L, Shinde SB, Wadhwa L, Hariramakrishnan P, Padmaja SM, Devi Gurusamy M, Venkatarao MK, Razia S (2023) A deep learning-based approach
to optimize power systems with hybrid renewable energy sources. Electric Power Components and Systems 51:1740–1755. DOI:
10.1080/15325008.2023.2202677.
Jhong S-Y, Tseng P-Y, Siriphockpirom N, Hsia C-H, Huang M-S, Hua K-L, Chen Y-Y (2020) An automated biometric identification system using CNN-based palm
vein recognition. International Conference on Advanced Robotics and Intelligent Systems (ARIS).
Kabaciński R, Kowalski M (2011) Vein pattern database and benchmark results. Electron Lett. 47(20):1127. DOI:10.1049/el.2011.1441.
Kathole A, Shinde S, Wadhwa L (2023) Integrating MLOps and EEG Techniques for Enhanced Crime Detection and Prevention. Multidisciplinary Science
Journal 6(1):2024009. DOI: 10.31893/multiscience.2024009.
Khoje S, Shinde S (2023) Evaluation of ripplet transform as a texture characterization for Iris recognition. J Inst Eng (India) Ser B. 104:369–380. DOI:
10.1007/s40031-023-00863-6.
Kuzu RS, Maiorana E, Campisi P (2020) Vein-based Biometric Verification using Transfer Learning. 43rd International Conference on Telecommunications and
Signal Processing (TSP). IEEE.
Li P, Miao Z, Wang Z (2014) Fusion of palmprint and palm vein images for person recognition. 12th International Conference on Signal Processing (ICSP).
Lin C-L, Chuang TC, Fan K-C (2005) Palmprint verification using hierarchical decomposition. Pattern Recognit. 38(12):2639–2652.
DOI:10.1016/j.patcog.2005.04.001.
Mirmohamadsadeghi L, Drygajlo A (2011) Palm vein recognition with Local Binary Patterns and Local Derivative Patterns. International Joint Conference on
Biometrics (IJCB).
Qin H, El Yacoubi MA, Lin J, Liu B (2019) An iterative deep neural network for hand-vein verification. IEEE Access. 7:34823–34837.
DOI:10.1109/access.2019.2901335.
Rastogi S, Duttagupta SP, Guha A, Prakash S (2020) Palm vein pattern: Extraction and Authentication. IEEE International Conference on Machine Learning
and Applied Network Technologies (ICMLANT).
Sardeshmukh M, Chakkaravarthy M, Shinde S, Chakkaravarthy D (2023) Crop image classification using convolutional neural network. Multidisciplinary
Science Journal 5:2023039. DOI: 10.31893/multiscience.2023039.
Shinde S, Kathole A, Wadhwa L, Shaikha AS (2023b) Breaking the silence: Innovation in wake word activation. Multidisciplinary Science Journal 6:2024021.
DOI: 10.31893/multiscience.2024021.
Shinde S, Khoje S, Raj A, Wadhwa L, Shaikha AS (2023c) Artificial intelligence approach for terror attacks prediction through machine learning.
Multidisciplinary Science Journal 6(1):2024011. DOI: 10.31893/multiscience.2024011.
Shinde S, Wadhwa L, Bhalke D (2021) Feedforward back propagation neural network (FFBPNN) based approach for the identification of handwritten math
equations. Advances in Intelligent Systems and Computing. Cham: Springer International Publishing, pp 757–775. DOI: 10.1007/978-3-030-51859-2_69.
Shinde S, Wadhwa L, Bhalke DG, Sherje N, Naik S, Kudale R, Mohnani K (2023a) Identification of fake currency using soft computing. Multidisciplinary Science
Journal 6:2024018. DOI: 10.31893/multiscience.2024018.
Shinde S, Waghulade RB (2017) An improved algorithm for recognizing mathematical equations by using machine learning approach and hybrid feature
extraction technique. 2017 IEEE International Conference on Electrical, Instrumentation and Communication Engineering (ICEICE), pp 1–7. DOI:
10.1109/ICEICE.2017.8191926.
Shinde S, Waghulade RB, Bormane DS (2017) A new neural network based algorithm for identifying handwritten mathematical equations. 2017 International
Conference on Trends in Electronics and Informatics (ICEI), pp 204–209. DOI: 10.1109/ICOEI.2017.8300916.
Wagh KP, Vasanth K, Shinde S (2022) Emotion recognition based on EEG features with various brain regions. Indian Journal of Computer Science and
Engineering 13:108–115. DOI: 10.21817/indjcse/2022/v13i1/221301095.
Yazdani F, Andani ME (2017) Verification based on palm vein by estimating wavelet coefficient with autoregressive model. 2nd Conference on Swarm
Intelligence and Evolutionary Computation (CSIEC).